
EL2620 Nonlinear Control

Lecture notes
Karl Henrik Johansson, Bo Wahlberg and Elling W. Jacobsen
This revision December 2011

Automatic Control
KTH, Stockholm, Sweden
Preface
Many people have contributed to these lecture notes in nonlinear control.
Originally it was developed by Bo Bernhardsson and Karl Henrik Johansson,
and later revised by Bo Wahlberg and myself. Contributions and comments
by Mikael Johansson, Ola Markusson, Ragnar Wallin, Henning Schmidt,
Krister Jacobsson, Björn Johansson and Torbjörn Nordling are gratefully
acknowledged.

Elling W. Jacobsen
Stockholm, December 2011
EL2620 2011

EL2620 Nonlinear Control


Automatic Control Lab, KTH
Course Goal
To provide participants with a solid theoretical foundation of nonlinear control systems combined with a good engineering understanding.

You should after the course be able to
• understand common nonlinear control phenomena
• apply the most powerful nonlinear analysis methods
• use some practical nonlinear control design methods

Disposition
7.5 credits, lp 2
28h lectures, 28h exercises, 3 home-works

Instructors
• Elling W. Jacobsen, lectures and course responsible, jacobsen@kth.se
• Per Hägg, Farhad Farokhi, teaching assistants, pehagg@kth.se, farakhi@kth.se
• Hanna Holmqvist, course administration, hanna.holmqvist@ee.kth.se
• STEX (entrance floor, Osquldasv. 10), course material, stex@s3.kth.se


EL2620 Nonlinear Control

Lecture 1
• Practical information
• Course outline
• Linear vs Nonlinear Systems
• Nonlinear differential equations

Today’s Goal
You should be able to
• Describe distinctive phenomena in nonlinear dynamic systems
• Mathematically describe common nonlinearities in control systems
• Transform differential equations to first-order form
• Derive equilibrium points


Course Information
• All info and handouts are available at the course homepage at KTH Social
• Compulsory course items:
– 3 homeworks, which have to be handed in on time (we are strict on this!)
– 5h written exam on December 11, 2012

Material
• Textbook: Khalil, Nonlinear Systems, Prentice Hall, 3rd ed., 2002. Optional but highly recommended. Only references to Khalil will be given.
• Lecture notes: Copies of transparencies (from previous year)
• Exercises: Classroom and home exercises. Two course compendia sold by STEX.
• Homeworks: 3 computer exercises to hand in (and review)
• Software: Matlab

Alternative textbooks (decreasing mathematical rigour): Sastry, Nonlinear Systems: Analysis, Stability and Control; Vidyasagar, Nonlinear Systems Analysis; Slotine & Li, Applied Nonlinear Control; Glad & Ljung, Reglerteori: flervariabla och olinjära metoder.


Course Outline
• Introduction: nonlinear models and phenomena, computer simulation (L1-L2)
• Feedback analysis: linearization, stability theory, describing functions (L3-L6)
• Control design: compensation, high-gain design, Lyapunov methods (L7-L10)
• Alternatives: gain scheduling, optimal control, neural networks, fuzzy control (L11-L13)
• Summary (L14)

Linear Systems
Definition: Let M be a signal space. The system S : M → M is linear if for all u, v ∈ M and α ∈ R
S(αu) = αS(u)  (scaling)
S(u + v) = S(u) + S(v)  (superposition)

Example: Linear time-invariant systems
ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), x(0) = 0
y(t) = (g ∗ u)(t) = ∫₀ᵗ g(τ)u(t − τ)dτ
Y(s) = G(s)U(s)
Notice the importance of having zero initial conditions.
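The scaling and superposition properties can be checked numerically; a minimal sketch, assuming a first-order LTI system ẋ = −x + u simulated with forward Euler (the system and the inputs are my choices, not from the slides):

```python
import math

def simulate(u, T=5.0, dt=1e-3):
    """Forward-Euler simulation of x' = -x + u(t), y = x, x(0) = 0."""
    x, ys, t = 0.0, [], 0.0
    for _ in range(int(T / dt)):
        ys.append(x)
        x += dt * (-x + u(t))
        t += dt
    return ys

u = lambda t: math.sin(t)
v = lambda t: 1.0 if t > 1.0 else 0.0   # delayed unit step

y_u, y_v = simulate(u), simulate(v)
y_sum = simulate(lambda t: u(t) + v(t))
y_scaled = simulate(lambda t: 3.0 * u(t))

# linearity: the simulated responses obey both defining properties
err_sup = max(abs(a + b - c) for a, b, c in zip(y_u, y_v, y_sum))
err_scale = max(abs(3.0 * a - c) for a, c in zip(y_u, y_scaled))
```

Both errors vanish up to floating-point rounding, since the Euler update itself is a linear map with zero initial state.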


Linear Systems Have Nice Properties
• Local stability = global stability: stable if all eigenvalues of A (or poles of G(s)) are in the left half-plane
• Superposition: enough to know a step (or impulse) response
• Frequency analysis possible: sinusoidal inputs give sinusoidal outputs, Y(iω) = G(iω)U(iω)

Linear Models may be too Crude Approximations
Example: Positioning of a ball on a beam, with beam angle φ and ball position x.
Nonlinear model: mẍ(t) = mg sin φ(t)
Linear model: ẍ(t) = gφ(t)


Can the ball move 0.1 meter in 0.1 seconds from steady state?
The linear model (step response with φ = φ0) gives
x(t) ≈ g t²/2 · φ0 ≈ 10 · 0.1²/2 · φ0 = 0.05φ0
so that
φ0 ≈ 0.1/0.05 = 2 rad = 114°
Unrealistic answer; clearly outside the linear region! The linear model is valid only if sin φ ≈ φ. We must consider the nonlinear model, and possibly also include other nonlinearities such as centripetal force, saturation, friction, etc.

Linear Models are not Rich Enough
Linear models cannot describe many phenomena seen in nonlinear systems.
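The back-of-the-envelope estimate can be reproduced in a few lines (g rounded to 10 m/s² as on the slide):

```python
import math

g = 10.0            # m/s^2, rounded as on the slide
t = 0.1             # s
# linearized double integrator: x(t) = g * phi0 * t^2 / 2
x_per_rad = g * t ** 2 / 2      # = 0.05 m per radian of beam angle
phi0 = 0.1 / x_per_rad          # angle needed to move 0.1 m in 0.1 s
phi0_deg = math.degrees(phi0)   # far outside the small-angle region
```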


Stability Can Depend on Reference Signal
Example: Control system with valve characteristic f(u) = u²
Block diagram: r → Σ → motor 1/s → valve u² → process 1/(s+1)² → y, with unity negative feedback from y.
Simulink block diagram: the same loop built from the blocks 1/s, u², 1/((s+1)(s+1)) and a feedback gain -1.
[Figure: step responses (output y over 0–30 s) for r = 0.2, r = 1.68, and r = 1.72; the response amplitude grows with r and the last one diverges.]
Stability depends on the amplitude of the reference signal! (The linearized gain of the valve increases with increasing amplitude.)


Multiple Equilibria
Example: chemical reactor
ẋ1 = −x1 exp(−1/x2) + f(1 − x1)
ẋ2 = x1 exp(−1/x2) − εf(x2 − xc)
with f = 0.7, ε = 0.4, coolant temperature xc as input and temperature x2 as output.
[Figure: input xc and output temperature x2 over 200 s; the temperature jumps between branches as the input varies.]
Existence of multiple stable equilibria for the same input gives a hysteresis effect.

Stable Periodic Solutions
Example: Position control of a motor with back-lash
Block diagram: constant reference → sum → P-controller → backlash → motor → y, with unity negative feedback.
Motor: G(s) = 1/(s(1 + 5s))
Controller: K = 5


Back-lash induces an oscillation
Period and amplitude independent of initial conditions:
[Figure: output y over 0–50 s for three different initial conditions; all converge to the same oscillation of amplitude about 0.5.]
How to predict and avoid oscillations?

Automatic Tuning of PID Controllers
Relay induces a desired oscillation whose frequency and amplitude are used to choose PID parameters.
Block diagram: during tuning a relay with output ±1 replaces the PID controller in the loop with the process.
[Figure: relay output u and process output y over 0–10 s; the loop settles into a limit cycle of amplitude A and period T.]


Harmonic Distortion
Example: Sinusoidal response of a saturation. The input a sin t gives the output
y(t) = Σ_{k=1}^∞ A_k sin(kt)
Total Harmonic Distortion = Σ_{k=2}^∞ (Energy in tone k) / (Energy in tone 1)
[Figure: output y and its amplitude spectrum for a = 1 and a = 2; the larger amplitude drives the saturation harder and generates more harmonics.]

Example: Electrical power distribution
Nonlinearities such as rectifiers, switched electronics, and transformers give rise to harmonic distortion.

Example: Electrical amplifiers
Efficient amplifiers work in the nonlinear region. This introduces spectrum leakage, which is a problem in cellular systems. There is a trade-off between efficiency and linearity.
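The harmonic content of a saturated sinusoid can be computed numerically; a sketch, assuming a unit saturation driven by 2 sin t and approximating the Fourier coefficients by a Riemann sum over one period:

```python
import math

def saturate(x, limit=1.0):
    return max(-limit, min(limit, x))

a = 2.0                     # input amplitude: a*sin(t) is driven into the limit
N = 20000
dt = 2.0 * math.pi / N

def harmonic(k):
    """Fourier sine coefficient A_k of y(t) = sat(a sin t) over one period."""
    s = sum(saturate(a * math.sin(i * dt)) * math.sin(k * i * dt) for i in range(N))
    return s * dt / math.pi

A = {k: harmonic(k) for k in range(1, 8)}
# odd symmetry kills the even harmonics; the odd ones carry the distortion
thd = sum(A[k] ** 2 for k in range(2, 8)) / A[1] ** 2
```

For this clipping level the fundamental is about 1.22 and the distortion is dominated by the third harmonic.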


Subharmonics
Example: Duffing’s equation ÿ + ẏ + y − y³ = a sin(ωt)
[Figure: output y and forcing a sin ωt over 0–30 s; the output oscillates with a longer period than the forcing.]

Nonlinear Differential Equations
Definition: A solution to
ẋ(t) = f(x(t)), x(0) = x0  (1)
over an interval [0, T] is a C¹ function x : [0, T] → Rⁿ such that (1) is fulfilled.
• When does there exist a solution?
• When is the solution unique?
Example: ẋ = Ax, x(0) = x0, gives x(t) = exp(At)x0


Existence Problems
Example: The differential equation ẋ = x², x(0) = x0, has solution
x(t) = x0/(1 − x0 t), 0 ≤ t < 1/x0
The solution is not defined for tf = 1/x0. The solution interval depends on the initial condition!
Recall the trick: ẋ = x² ⇒ dx/x² = dt
Integrate ⇒ −x(t)⁻¹ + x(0)⁻¹ = t ⇒ x(t) = x0/(1 − x0 t)

Finite Escape Time
[Figure: simulation of dx/dt = x² for various initial conditions x0; each trajectory escapes to infinity at the finite time t = 1/x0.]
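The finite escape time is easy to reproduce numerically; a sketch using forward Euler (the step size and the blow-up threshold are arbitrary choices):

```python
x0 = 1.0
x, t, dt = x0, 0.0, 1e-5
while x < 1e6 and t < 2.0:
    x += dt * x * x          # forward-Euler step of x' = x^2
    t += dt
escape_time = t              # analytic escape time is 1/x0 = 1
```

The numerical blow-up time lands just past t = 1/x0, as predicted by the closed-form solution.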


Uniqueness Problems
Example: ẋ = √x, x(0) = 0, has many solutions:
x(t) = (t − C)²/4 for t > C, and x(t) = 0 for t ≤ C
[Figure: several such solutions, leaving x = 0 at different times C.]

Physical Interpretation
Consider the reverse example, i.e., the water tank lab process with
ẋ = −√x, x(T) = 0
where x is the water level. It is then impossible to know at what time t < T the level was x(t) = x0 > 0.
Hint: Reverse time s = T − t ⇒ ds = −dt and thus dx/ds = −dx/dt
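That both branches really solve ẋ = √x can be verified numerically; a sketch comparing a numerical derivative of one candidate solution against √x (the switch time C = 2 is an arbitrary choice):

```python
import math

C = 2.0   # the time at which this particular solution leaves x = 0

def x(t):
    """One of infinitely many solutions of x' = sqrt(x), x(0) = 0."""
    return 0.0 if t <= C else (t - C) ** 2 / 4

def residual(t, h=1e-6):
    """|numerical derivative of x minus sqrt(x)| at time t."""
    dx = (x(t + h) - x(t - h)) / (2 * h)
    return abs(dx - math.sqrt(x(t)))

# the ODE is satisfied both on the zero branch and on the parabola
res = max(residual(t) for t in (0.5, 1.0, 3.0, 4.0, 5.0))
```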


Lipschitz Continuity
Definition: f : Rⁿ → Rⁿ is Lipschitz continuous if there exist L, r > 0 such that for all x, y ∈ Br(x0) = {z ∈ Rⁿ : ‖z − x0‖ < r},
‖f(x) − f(y)‖ ≤ L‖x − y‖
Here L is the slope, and the Euclidean norm is given by ‖x‖² = x1² + ··· + xn²

Local Existence and Uniqueness
Theorem: If f is Lipschitz continuous, then there exists δ > 0 such that
ẋ(t) = f(x(t)), x(0) = x0
has a unique solution in Br(x0) over [0, δ].
Proof: See Khalil, Appendix C.1. Based on the contraction mapping theorem.
Remarks
• δ = δ(r, L)
• f being C⁰ is not sufficient (cf. the tank example)
• f being C¹ implies Lipschitz continuity (L = max_{x∈Br(x0)} ‖f′(x)‖)


State-Space Models
State x, input u, output y
General: f(x, u, y, ẋ, u̇, ẏ, ...) = 0
Explicit: ẋ = f(x, u), y = h(x)
Affine in u: ẋ = f(x) + g(x)u, y = h(x)
Linear: ẋ = Ax + Bu, y = Cx

Transformation to Autonomous System
A nonautonomous system
ẋ = f(x, t)
can always be transformed to an autonomous system by introducing xₙ₊₁ = t:
ẋ = f(x, xₙ₊₁)
ẋₙ₊₁ = 1


Transformation to First-Order System
Given a differential equation in y with highest derivative dⁿy/dtⁿ, express the equation in x = (y  dy/dt  ...  dⁿ⁻¹y/dtⁿ⁻¹)ᵀ.
Example: Pendulum
MR²θ̈ + kθ̇ + MgR sin θ = 0
x = (θ  θ̇)ᵀ gives
ẋ1 = x2
ẋ2 = −(k/(MR²))x2 − (g/R) sin x1

Equilibria
Definition: A point (x∗, u∗, y∗) is an equilibrium if a solution starting in (x∗, u∗, y∗) stays there forever.
Corresponds to setting all derivatives to zero:
General: f(x∗, u∗, y∗, 0, 0, ...) = 0
Explicit: 0 = f(x∗, u∗), y∗ = h(x∗)
Affine in u: 0 = f(x∗) + g(x∗)u∗, y∗ = h(x∗)
Linear: 0 = Ax∗ + Bu∗, y∗ = Cx∗
Often the equilibrium is defined only through the state x∗
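The first-order form is exactly what a numerical solver integrates; a sketch, assuming example parameter values M = R = 1, k = 0.5 (not from the slide) and a hand-rolled RK4 step:

```python
import math

M, R, k, g = 1.0, 1.0, 0.5, 9.81      # example parameters (assumed values)

def f(x1, x2):
    """Pendulum M R^2 θ'' + k θ' + M g R sin θ = 0 in first-order form."""
    return x2, -k / (M * R ** 2) * x2 - g / R * math.sin(x1)

x1, x2, dt = 2.0, 0.0, 1e-3            # start 2 rad from the downward equilibrium
for _ in range(30000):                 # 30 s of RK4
    k1 = f(x1, x2)
    k2 = f(x1 + dt / 2 * k1[0], x2 + dt / 2 * k1[1])
    k3 = f(x1 + dt / 2 * k2[0], x2 + dt / 2 * k2[1])
    k4 = f(x1 + dt * k3[0], x2 + dt * k3[1])
    x1 += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    x2 += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
# the damping term k θ' drives the state to the equilibrium (0, 0)
```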


Some Common Nonlinearities in Control Systems
Saturation, dead zone, sign, relay, absolute value |u|, mathematical functions such as eᵘ, look-up tables, backlash, and Coulomb & viscous friction.

Multiple Equilibria
Example: Pendulum
MR²θ̈ + kθ̇ + MgR sin θ = 0
θ̈ = θ̇ = 0 gives sin θ = 0 and thus θ∗ = kπ
Alternatively, in first-order form:
ẋ1 = x2
ẋ2 = −(k/(MR²))x2 − (g/R) sin x1
ẋ1 = ẋ2 = 0 gives x2∗ = 0 and sin(x1∗) = 0


When do we need Nonlinear Analysis & Design?
• When the system is strongly nonlinear
• When the range of operation is large
• When distinctive nonlinear phenomena are relevant
• When we want to push performance to the limit

Next Lecture
• Simulation in Matlab
• Linearization
• Phase plane analysis


EL2620 Nonlinear Control

Lecture 2
• Wrap-up of Lecture 1: Nonlinear systems and phenomena
• Modeling and simulation in Simulink
• Phase-plane analysis

Today’s Goal
You should be able to
• Model and simulate in Simulink
• Linearize using Simulink
• Do phase-plane analysis using pplane (or other tool)


Analysis Through Simulation
ODEs: ẋ = f(t, x, u)
DAEs: F(t, ẋ, x, u) = 0
Simulation tools:
• ACSL, Simnon, Simulink
• Omsim, Dymola, Modelica (http://www.modelica.org)
• Special purpose simulation tools: Spice, EMTP, ADAMS, gPROMS

Simulation tools: Simulink
> matlab
>> simulink


An Example in Simulink
File -> New -> Model
Double click on Continuous, pick Transfer Fcn 1/(s+1), Step (in Sources), and Scope (in Sinks).
Connect (mouse-left): Step → Transfer Fcn → Scope

Choose Simulation Parameters
Simulation -> Parameters
Don’t forget “Apply”


Save Results to Workspace
stepmodel.mdl: add a Clock connected to a To Workspace block (t), and To Workspace blocks for the output y and the input u.
Check “Save format” of the output blocks (“Array” instead of “Structure”).
>> plot(t,y)

How To Get Better Accuracy
Modify Refine, Absolute and Relative Tolerances, and the integration method.
Refine adds interpolation points:
[Figure: the same response plotted with Refine = 1 and Refine = 10; the refined curve is smooth.]


Use Scripts to Document Simulations
If the block diagram is saved to stepmodel.mdl, the following script file simstepmodel.m simulates the system:

open_system('stepmodel')
set_param('stepmodel','RelTol','1e-3')
set_param('stepmodel','AbsTol','1e-6')
set_param('stepmodel','Refine','1')
tic
sim('stepmodel',6)
toc
subplot(2,1,1),plot(t,y),title('y')
subplot(2,1,2),plot(t,u),title('u')

Nonlinear Control System
Example: Control system with valve characteristic f(u) = u²
Block diagram: r → Σ → motor 1/s → valve u² → process 1/(s+1)² → y, with unity negative feedback.
Simulink block diagram: the blocks 1/s, u², 1/((s+1)(s+1)) and a feedback gain -1.


Example: Two-Tank System
The system consists of two identical tank models in series:
ḣ = (u − q)/A
q = a√(2gh)
(Each Simulink subsystem: inflow → gain 1/A → integrator → level h; a Fcn block computes the outflow q.)

Linearization in Simulink
Linearize about the equilibrium (x0, u0, y0):
>> A=2.7e-3;a=7e-6;g=9.8;
>> [x0,u0,y0]=trim('twotank',[0.1 0.1]',[],0.1)
x0 =
    0.1000
    0.1000
u0 =
    9.7995e-006
y0 =
    0.1000
>> [aa,bb,cc,dd]=linmod('twotank',x0,u0);
>> sys=ss(aa,bb,cc,dd);
>> bode(sys)


Differential Equation Editor
dee is a Simulink-based differential equation editor:
>> dee
Run the demonstrations (deedemo1, deedemo2, deedemo3, deedemo4).

Phase-Plane Analysis
• Download ICTools from http://www.control.lth.se/˜ictools
• Download DFIELD and PPLANE from http://math.rice.edu/˜dfield
This was the preferred tool last year!


Homework 1
• Use your favorite phase-plane analysis tool
• Follow the instructions in the Exercise Compendium on how to write the report
• See the course homepage for a report example
• The report should be short and include only necessary plots. Write in English.


EL2620 Nonlinear Control

Lecture 3
• Stability definitions
• Linearization
• Phase-plane analysis
• Periodic solutions

Today’s Goal
You should be able to
• Explain local and global stability
• Linearize around equilibria and trajectories
• Sketch phase portraits for two-dimensional systems
• Classify equilibria into nodes, focuses, saddle points, and center points
• Analyze stability of periodic solutions through Poincaré maps


Local Stability
Consider ẋ = f(x) with f(0) = 0.
Definition: The equilibrium x∗ = 0 is stable if for all ε > 0 there exists δ = δ(ε) > 0 such that
‖x(0)‖ < δ ⇒ ‖x(t)‖ < ε, ∀t ≥ 0
If x∗ = 0 is not stable it is called unstable.

Asymptotic Stability
Definition: The equilibrium x = 0 is asymptotically stable if it is stable and δ can be chosen such that
‖x(0)‖ < δ ⇒ lim_{t→∞} x(t) = 0
The equilibrium is globally asymptotically stable if it is stable and lim_{t→∞} x(t) = 0 for all x(0).



Linearization Around a Trajectory
Let (x0(t), u0(t)) denote a solution to ẋ = f(x, u) and consider another solution (x(t), u(t)) = (x0(t) + x̃(t), u0(t) + ũ(t)):
ẋ(t) = f(x0(t) + x̃(t), u0(t) + ũ(t))
     = f(x0(t), u0(t)) + (∂f/∂x)(x0(t), u0(t))x̃(t) + (∂f/∂u)(x0(t), u0(t))ũ(t) + O(‖(x̃, ũ)‖²)
Hence, for small (x̃, ũ), approximately
x̃̇(t) = A(x0(t), u0(t))x̃(t) + B(x0(t), u0(t))ũ(t)
where
A(x0(t), u0(t)) = (∂f/∂x)(x0(t), u0(t))
B(x0(t), u0(t)) = (∂f/∂u)(x0(t), u0(t))
Note that A and B are time dependent. However, if (x0(t), u0(t)) ≡ (x0, u0) then A and B are constant.


Example
ḣ(t) = v(t)
v̇(t) = −g + ve u(t)/m(t)
ṁ(t) = −u(t)
Let x0(t) = (h0(t), v0(t), m0(t))ᵀ, u0(t) ≡ u0 > 0, be a solution, with m0(t) = m0 − u0 t. Then
x̃̇(t) = [0 1 0; 0 0 −ve u0/(m0 − u0 t)²; 0 0 0] x̃(t) + [0; ve/(m0 − u0 t); −1] ũ(t)

Pointwise Left Half-Plane Eigenvalues of A(t) Do Not Impose Stability
A(t) = [−1 + α cos²t  1 − α sin t cos t; −1 − α sin t cos t  −1 + α sin²t], α > 0
Pointwise eigenvalues are given by
λ(t) = λ = (α − 2 ± √(α² − 4))/2
which are stable for 0 < α < 2. However,
x(t) = [e^((α−1)t) cos t  e^(−t) sin t; −e^((α−1)t) sin t  e^(−t) cos t] x(0)
is an unbounded solution for α > 1.


Lyapunov’s Linearization Method
Theorem: Let x0 be an equilibrium of ẋ = f(x) with f ∈ C¹. Denote A = (∂f/∂x)(x0) and α(A) = max Re(λ(A)).
• If α(A) < 0, then x0 is asymptotically stable
• If α(A) > 0, then x0 is unstable
The case α(A) = 0 needs further investigation.
The fundamental result for linear systems theory! The theorem is also called Lyapunov’s Indirect Method.
A proof is given next lecture.

Example
The linearization of
ẋ1 = −x1² + x1 + sin x2
ẋ2 = cos x2 − x1³ − 5x2
at the equilibrium x0 = (1, 0)ᵀ is given by
A = [−1 1; −3 −5], λ(A) = {−2, −4}
x0 is thus an asymptotically stable equilibrium for the nonlinear system.
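The example can be checked numerically; a sketch that forms the Jacobian at x0 = (1, 0) by central differences and solves the 2×2 characteristic equation from the trace and determinant:

```python
import math

def f(x1, x2):
    return (-x1 ** 2 + x1 + math.sin(x2),
            math.cos(x2) - x1 ** 3 - 5 * x2)

h = 1e-6
# Jacobian at the equilibrium (1, 0) by central differences
A = [[(f(1 + h, 0)[i] - f(1 - h, 0)[i]) / (2 * h),
      (f(1, h)[i] - f(1, -h)[i]) / (2 * h)] for i in range(2)]

# eigenvalues of a 2x2 matrix from trace and determinant
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr ** 2 - 4 * det)
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])   # both in the left half-plane
```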


Linear Systems Revival
d/dt (x1; x2) = A(x1; x2)
Analytic solution: x(t) = e^(At) x(0), t ≥ 0.
If A is diagonalizable, then
e^(At) = V e^(Λt) V⁻¹ = [v1 v2] [e^(λ1 t) 0; 0 e^(λ2 t)] [v1 v2]⁻¹
where v1, v2 are the eigenvectors of A (Av1 = λ1 v1 etc.). This implies that
x(t) = c1 e^(λ1 t) v1 + c2 e^(λ2 t) v2,
where the constants c1 and c2 are given by the initial conditions.

Example: Two real negative eigenvalues
Given the eigenvalues λ1 < λ2 < 0, with corresponding eigenvectors v1 and v2, respectively.
Solution: x(t) = c1 e^(λ1 t) v1 + c2 e^(λ2 t) v2
Slow eigenvalue/vector: x(t) ≈ c2 e^(λ2 t) v2 for large t. Moves along the slow eigenvector towards x = 0 for large t.
Fast eigenvalue/vector: x(t) ≈ c1 e^(λ1 t) v1 + c2 v2 for small t. Moves along the fast eigenvector for small t.


Phase-Plane Analysis for Linear Systems
Equilibrium Points for Linear Systems
The location of the eigenvalues λ(A) determines the characteristics of the trajectories. Six cases:
Im λi = 0: stable node (λ1, λ2 < 0), unstable node (λ1, λ2 > 0), saddle point (λ1 < 0 < λ2)
Im λi ≠ 0: stable focus (Re λi < 0), unstable focus (Re λi > 0), center point (Re λi = 0)


Example—Unstable Focus
ẋ = [σ −ω; ω σ] x, σ, ω > 0, λ1,2 = σ ± iω
x(t) = e^(At) x(0) = [1 1; −i i] [e^(σt)e^(iωt) 0; 0 e^(σt)e^(−iωt)] [1 1; −i i]⁻¹ x(0)
In polar coordinates r = √(x1² + x2²), θ = arctan(x2/x1) (x1 = r cos θ, x2 = r sin θ):
ṙ = σr
θ̇ = ω
[Phase portraits for λ1,2 = 1 ± i and λ1,2 = 0.3 ± i: trajectories spiral outwards.]


Example—Stable Node
ẋ = [−1 1; 0 −2] x
(λ1, λ2) = (−1, −2) and [v1 v2] = [1 1; 0 −1]
v1 is the slow direction and v2 is the fast.
Fast: x2 = −x1 + c3
Slow: x2 = 0


5 minute exercise: What is the phase portrait if λ1 = λ2?
Hint: Two cases; only one linearly independent eigenvector, or all vectors are eigenvectors.

Phase-Plane Analysis for Nonlinear Systems
Close to equilibrium points, “nonlinear system” ≈ “linear system”.
Theorem: Assume
ẋ = f(x) = Ax + g(x),
with lim_{‖x‖→0} ‖g(x)‖/‖x‖ = 0. If ż = Az has a focus, node, or saddle point, then ẋ = f(x) has the same type of equilibrium at the origin.
Remark: If the linearized system has a center, then the nonlinear system has either a center or a focus.


How to Draw Phase Portraits
By hand:
1. Find equilibria
2. Sketch local behavior around equilibria
3. Sketch (ẋ1, ẋ2) for some other points. Notice that dx2/dx1 = ẋ2/ẋ1
4. Try to find possible periodic orbits
5. Guess solutions
By computer:
1. Matlab: dee or pplane

Phase-Locked Loop
A PLL tracks the phase θin(t) of a signal sin(t) = A sin[ωt + θin(t)].
Block diagram (phase detector, filter, VCO): θin → Σ (minus θout) → sin(·) → K/(1 + sT) → θ̇out → 1/s → θout


Phase-Plane Analysis of PLL
Let (x1, x2) = (θout, θ̇out), K, T > 0, and θin(t) ≡ θin.
ẋ1(t) = x2(t)
ẋ2(t) = −T⁻¹ x2(t) + KT⁻¹ sin(θin − x1(t))
Equilibria are (θin + nπ, 0) since
ẋ1 = 0 ⇒ x2 = 0
ẋ2 = 0 ⇒ sin(θin − x1) = 0 ⇒ x1 = θin + nπ

Classification of Equilibria
Linearization gives the following characteristic equations:
n even: λ² + T⁻¹λ + KT⁻¹ = 0
K > (4T)⁻¹ gives a stable focus; 0 < K < (4T)⁻¹ gives a stable node
n odd: λ² + T⁻¹λ − KT⁻¹ = 0
Saddle points for all K, T > 0


Phase-Plane for PLL
(K, T) = (1/2, 1): focuses at (2kπ, 0), saddle points at ((2k + 1)π, 0).

Periodic Solutions
Example of an asymptotically stable periodic solution:
ẋ1 = x1 − x2 − x1(x1² + x2²)
ẋ2 = x1 + x2 − x2(x1² + x2²)  (1)


A system has a periodic solution if for some T > 0
x(t + T) = x(t), ∀t ≥ 0
A periodic orbit is the image of x in the phase portrait. Note that x(t) ≡ const is by convention not regarded as periodic.
• When does there exist a periodic solution?
• When is it stable?

Periodic solution: polar coordinates.
x1 = r cos θ ⇒ ẋ1 = ṙ cos θ − rθ̇ sin θ
x2 = r sin θ ⇒ ẋ2 = ṙ sin θ + rθ̇ cos θ
implies
(ṙ; θ̇) = (1/r) [r cos θ  r sin θ; −sin θ  cos θ] (ẋ1; ẋ2)
Now, (1) gives
ẋ1 = r(1 − r²) cos θ − r sin θ
ẋ2 = r(1 − r²) sin θ + r cos θ
so that
ṙ = r(1 − r²)
θ̇ = 1
Only r = 1 is a stable equilibrium!
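The attraction of the radial dynamics to r = 1 can be seen directly in simulation; a sketch with forward Euler (initial radii and step size are arbitrary choices):

```python
def final_radius(r0, T=20.0, dt=1e-3):
    """Forward-Euler simulation of r' = r(1 - r^2)."""
    r = r0
    for _ in range(int(T / dt)):
        r += dt * r * (1.0 - r * r)
    return r

inner = final_radius(0.1)   # starts inside the unit circle
outer = final_radius(3.0)   # starts outside
```

Trajectories from both sides converge to the periodic orbit r = 1.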


Flow
The solution of ẋ = f(x) is sometimes denoted φt(x0) to emphasize the dependence on the initial point x0 ∈ Rⁿ. φt(·) is called the flow.

Poincaré Map
Assume φt(x0) is a periodic solution with period T. Let Σ ⊂ Rⁿ be an (n − 1)-dimensional hyperplane transverse to f at x0.
Definition: The Poincaré map P : Σ → Σ is
P(x) = φ_{τ(x)}(x)
where τ(x) is the time of first return.


Existence of Periodic Orbits
A point x∗ such that P(x∗) = x∗ corresponds to a periodic orbit. x∗ is called a fixed point of P.
1 minute exercise: What does a fixed point of Pᵏ correspond to?


Stable Periodic Orbit
The linearization of P around x∗ gives a matrix W such that
P(x) ≈ W x
if x is close to x∗.
• λj(W) = 1 for some j
• If |λi(W)| < 1 for all i ≠ j, then the corresponding periodic orbit is asymptotically stable
• If |λi(W)| > 1 for some i, then the periodic orbit is unstable.

Example—Stable Unit Circle
Rewrite (1) in polar coordinates:
ṙ = r(1 − r²)
θ̇ = 1
Choose Σ = {(r, θ) : r > 0, θ = 2πk}. The solution is
φt(r0, θ0) = ([1 + (r0⁻² − 1)e^(−2t)]^(−1/2), t + θ0)
First return time from any point (r0, θ0) ∈ Σ is τ(r0, θ0) = 2π.


The Poincaré map is
P(r0, θ0) = ([1 + (r0⁻² − 1)e^(−2·2π)]^(−1/2), θ0 + 2π)
(r0, θ0) = (1, 2πk) is a fixed point.
The periodic solution that corresponds to (r(t), θ(t)) = (1, t) is asymptotically stable because
W = (dP/d(r0, θ0))(1, 2πk) = [e^(−4π) 0; 0 1]
⇒ stable periodic orbit (as we already knew for this example)
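The radial part of this Poincaré map can be iterated directly; a sketch that also estimates dP/dr0 at the fixed point numerically (the starting radius 0.2 is an arbitrary choice):

```python
import math

def P(r):
    """Radial part of the Poincare map on the section theta = 2*pi*k."""
    return (1.0 + (r ** -2 - 1.0) * math.exp(-4.0 * math.pi)) ** -0.5

r = 0.2
for _ in range(5):
    r = P(r)                 # iterates converge rapidly to the fixed point r* = 1

# numerical derivative dP/dr at r* = 1: the contracting eigenvalue exp(-4*pi) of W
W_r = (P(1.0 + 1e-8) - P(1.0 - 1e-8)) / 2e-8
```

Because exp(−4π) ≈ 3.5e−6, a single return already lands essentially on the orbit.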


EL2620 Nonlinear Control

Lecture 5
• Input–output stability

Today’s Goal
You should be able to
• derive the gain of a system
• analyze stability using
– Small Gain Theorem
– Circle Criterion
– Passivity


History
A feedback loop of a linear system G(s) and a static nonlinearity f(·):
For what G(s) and f(·) is the closed-loop system stable?
• Luré and Postnikov’s problem (1944)
• Aizerman’s conjecture (1949) (False!)
• Kalman’s conjecture (1957) (False!)
• Solution by Popov (1960) (led to the Circle Criterion)

Gain
Idea: Generalize the concept of gain to nonlinear dynamical systems.
The gain γ of S is the largest amplification from u to y. Here S can be a constant, a matrix, a linear time-invariant system, etc.
Question: How should we measure the size of u and y?


Norms
A norm ‖·‖ measures size.
Definition: A norm is a function ‖·‖ : Ω → R₊ such that for all x, y ∈ Ω
• ‖x‖ ≥ 0 and ‖x‖ = 0 ⇔ x = 0
• ‖x + y‖ ≤ ‖x‖ + ‖y‖
• ‖αx‖ = |α| · ‖x‖, for all α ∈ R
Examples:
Euclidean norm: ‖x‖ = √(x1² + ··· + xn²)
Max norm: ‖x‖ = max{|x1|, ..., |xn|}

Gain of a Matrix
Every matrix M ∈ C^(n×n) has a singular value decomposition
M = UΣV∗
where Σ = diag{σ1, ..., σn}; U∗U = I; V∗V = I
σi are the singular values. The “gain” of M is the largest singular value of M:
σmax(M) = σ1 = sup_{x∈Rⁿ} ‖Mx‖/‖x‖
where ‖·‖ is the Euclidean norm.


Signal Norms
A signal x is a function x : R₊ → R.
A signal norm ‖·‖k is a norm on the space of signals x.
Examples:
2-norm (energy norm): ‖x‖2 = √(∫₀^∞ |x(t)|² dt)
sup-norm: ‖x‖∞ = sup_{t∈R₊} |x(t)|

Eigenvalues are not gains
The spectral radius of a matrix M,
ρ(M) = max_i |λi(M)|,
is not a gain (nor a norm).
Why? What amplification is described by the eigenvalues?


Parseval’s Theorem
L2 denotes the space of signals with bounded energy: ‖x‖2 < ∞
Theorem: If x, y ∈ L2 have the Fourier transforms
X(iω) = ∫₀^∞ e^(−iωt) x(t)dt, Y(iω) = ∫₀^∞ e^(−iωt) y(t)dt,
then
∫₀^∞ y(t)x(t)dt = (1/2π) ∫_{−∞}^∞ Y∗(iω)X(iω)dω.
In particular,
‖x‖2² = ∫₀^∞ |x(t)|² dt = (1/2π) ∫_{−∞}^∞ |X(iω)|² dω.
The power calculated in the time domain equals the power calculated in the frequency domain.

System Gain
A system S is a map from L2 to L2: y = S(u). The gain of S is defined as
γ(S) = sup_{u∈L2} ‖y‖2/‖u‖2 = sup_{u∈L2} ‖S(u)‖2/‖u‖2
Example: The gain of a scalar static system y(t) = αu(t) is
γ(α) = sup_{u∈L2} ‖αu‖2/‖u‖2 = sup_{u∈L2} |α|‖u‖2/‖u‖2 = |α|

EL2620 2011 EL2620 2011

Gain of a Static Nonlinearity
Lemma: A static nonlinearity f such that |f(x)| ≤ K|x| and f(x∗) = Kx∗ has gain γ(f) = K.
Proof: ‖y‖2² = ∫₀^∞ f²(u(t))dt ≤ ∫₀^∞ K²u²(t)dt = K²‖u‖2²,
where u(t) = x∗, t ∈ (0, 1), gives equality, so
γ(f) = sup_{u∈L2} ‖y‖2/‖u‖2 = K

2 minute exercise: Show that γ(S1S2) ≤ γ(S1)γ(S2).


Gain of a Stable Linear System
Lemma: γ(G) = sup_{u∈L2} ‖Gu‖2/‖u‖2 = sup_{ω∈(0,∞)} |G(iω)|
Proof: Assume |G(iω)| ≤ K for ω ∈ (0, ∞) and |G(iω∗)| = K for some ω∗. Parseval’s theorem gives
‖y‖2² = (1/2π) ∫_{−∞}^∞ |Y(iω)|² dω = (1/2π) ∫_{−∞}^∞ |G(iω)|²|U(iω)|² dω ≤ K²‖u‖2²
We get arbitrarily close to equality by choosing u(t) close to sin ω∗t.

BIBO Stability
Definition: S is bounded-input bounded-output (BIBO) stable if
γ(S) = sup_{u∈L2} ‖S(u)‖2/‖u‖2 < ∞
Example: If ẋ = Ax is asymptotically stable then G(s) = C(sI − A)⁻¹B + D is BIBO stable.


The Small Gain Theorem
Feedback loop: e1 = r1 − S2(e2), e2 = r2 + S1(e1).
Theorem: Assume S1 and S2 are BIBO stable. If γ(S1)γ(S2) < 1, then the closed-loop system is BIBO stable from (r1, r2) to (e1, e2).

Example—Static Nonlinear Feedback
G(s) = 2/(s + 1)², in negative feedback with a static nonlinearity f satisfying
0 ≤ f(y)/y ≤ K, ∀y ≠ 0, f(0) = 0
γ(G) = 2 and γ(f) ≤ K.
Small Gain Theorem gives BIBO stability for K ∈ (0, 1/2).
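The gain γ(G) = 2 used here can be confirmed by evaluating |G(iω)| on a frequency grid; a sketch:

```python
def gain(w):
    """|G(iw)| for G(s) = 2/(s+1)^2, using |1 + iw|^2 = 1 + w^2."""
    return 2.0 / (1.0 + w * w)

# gamma(G) = sup |G(iw)| over the grid, attained at w = 0
gamma = max(gain(k * 0.001) for k in range(100000))
```

Since γ(f) ≤ K, the small-gain condition γ(G)γ(f) < 1 indeed gives K < 1/2.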


“Proof” of the Small Gain Theorem
‖e1‖2 ≤ ‖r1‖2 + γ(S2)[‖r2‖2 + γ(S1)‖e1‖2]
gives
‖e1‖2 ≤ (‖r1‖2 + γ(S2)‖r2‖2)/(1 − γ(S2)γ(S1))
γ(S2)γ(S1) < 1, ‖r1‖2 < ∞, ‖r2‖2 < ∞ give ‖e1‖2 < ∞. Similarly we get
‖e2‖2 ≤ (‖r2‖2 + γ(S1)‖r1‖2)/(1 − γ(S1)γ(S2))
so e2 is also bounded.
Note: A formal proof requires ‖·‖2e; see Khalil.

The Nyquist Theorem
Theorem: If G has no poles in the right half-plane and the Nyquist curve G(iω), ω ∈ [0, ∞), does not encircle −1, then the closed-loop system is stable.

EL2620 2011 EL2620 2011

Small Gain Theorem can be Conservative
Let f(y) = Ky in the previous example. Then the Nyquist Theorem proves stability for all K ∈ [0, ∞), while the Small Gain Theorem only proves stability for K ∈ (0, 1/2).

The Circle Criterion
Theorem: Assume that G(s) has no poles in the right half-plane, and
0 ≤ k1 ≤ f(y)/y ≤ k2, ∀y ≠ 0, f(0) = 0
If the Nyquist curve of G(s) does not encircle or intersect the circle defined by the points −1/k1 and −1/k2, then the closed-loop system is BIBO stable.


Example—Static Nonlinear Feedback (cont’d)
Here 0 ≤ f(y)/y ≤ K, so the “circle” is defined by −1/k1 = −∞ and −1/k2 = −1/K.
Since
min_ω Re G(iω) = −1/4,
the Circle Criterion gives that the system is BIBO stable if K ∈ (0, 4).

Proof of the Circle Criterion
Let k = (k1 + k2)/2, f̃(y) = f(y) − ky, and r̃1 = r1 − kr2. Then
|f̃(y)/y| ≤ (k2 − k1)/2 =: R, ∀y ≠ 0, f̃(0) = 0
and the loop of G and −f(·) can be redrawn as the loop of G̃ = G/(1 + kG) and −f̃(·).
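The bound min_ω Re G(iω) = −1/4 for G(s) = 2/(s+1)² can be confirmed numerically; a sketch:

```python
def re_G(w):
    """Re G(iw) for G(s) = 2/(s+1)^2 = 2/((1 - w^2) + 2iw)."""
    return 2.0 * (1.0 - w * w) / ((1.0 - w * w) ** 2 + (2.0 * w) ** 2)

m = min(re_G(k * 0.001) for k in range(1, 100000))   # grid over (0, 100)
K_max = -1.0 / m     # Nyquist curve stays right of the line Re z = -1/K_max
```

The minimum is attained near ω = √3 and equals −1/4, giving the sector bound K < 4.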


The curve G̃⁻¹(iω) and the circle {z ∈ C : |z + k| > R} mapped through z ↦ 1/z give the result:
|1/G̃(iω)| = |1/G(iω) + k| > R
The Small Gain Theorem gives stability if |G̃(iω)|R < 1, where G̃ = G/(1 + kG) is stable (this has to be checked later). Note that G̃ = G/(1 + kG) is stable since −1/k is inside the circle.
Note that G(s) may have poles on the imaginary axis; e.g., integrators are allowed.


Scalar Product
Scalar product for signals y and u:
⟨y, u⟩_T = ∫₀^T yᵀ(t)u(t)dt
If u and y are interpreted as vectors then ⟨y, u⟩_T = |y|_T |u|_T cos φ, where |y|_T = √⟨y, y⟩_T is the length of y and φ is the angle between u and y.
Cauchy–Schwarz Inequality: ⟨y, u⟩_T ≤ |y|_T |u|_T
Example: u = sin t and y = cos t are orthogonal if T = kπ, because
cos φ = ⟨y, u⟩_T/(|y|_T |u|_T) = 0

Passivity and BIBO Stability
The main result: Feedback interconnections of passive systems are passive, and BIBO stable (under some additional mild criteria).

Passive System
Definition: Consider signals u, y : [0, T] → Rᵐ. The system S is passive if
⟨y, u⟩_T ≥ 0, for all T > 0 and all u
and strictly passive if there exists ε > 0 such that
⟨y, u⟩_T ≥ ε(|y|_T² + |u|_T²), for all T > 0 and all u
Warning: There exist many other definitions of strict passivity.

2 minute exercise: Is the pure delay system y(t) = u(t − θ) passive? Consider for instance the input u(t) = sin((π/θ)t).

Example—Passive Electrical Components
Resistor u(t) = Ri(t): ⟨u, i⟩_T = ∫₀^T Ri²(t)dt = R⟨i, i⟩_T ≥ 0
Capacitor i = C du/dt: ⟨u, i⟩_T = ∫₀^T u(t)C(du/dt)dt = Cu²(T)/2 ≥ 0
Inductor u = L di/dt: ⟨u, i⟩_T = ∫₀^T L(di/dt)i(t)dt = Li²(T)/2 ≥ 0

Feedback of Passive Systems is Passive
Lemma: If S1 and S2 are passive then the closed-loop system from (r1, r2) to (y1, y2) is also passive.
Proof:
⟨y, e⟩_T = ⟨y1, e1⟩_T + ⟨y2, e2⟩_T = ⟨y1, r1 − y2⟩_T + ⟨y2, r2 + y1⟩_T = ⟨y1, r1⟩_T + ⟨y2, r2⟩_T = ⟨y, r⟩_T
Hence, ⟨y, r⟩_T ≥ 0 if ⟨y1, e1⟩_T ≥ 0 and ⟨y2, e2⟩_T ≥ 0.

EL2620 2011 EL2620 2011

Passivity of Linear Systems
Theorem: An asymptotically stable linear system G(s) is passive if and only if
Re G(iω) ≥ 0, ∀ω > 0
It is strictly passive if and only if there exists ε > 0 such that
Re G(iω − ε) ≥ 0, ∀ω > 0
Example: G(s) = 1/(s + 1) is strictly passive; G(s) = 1/s is passive but not strictly passive.

A Strictly Passive System Has Finite Gain
Lemma: If S is strictly passive then γ(S) = sup_{u∈L2} ‖y‖2/‖u‖2 < ∞.
Proof:
ε(|y|∞² + |u|∞²) ≤ ⟨y, u⟩∞ ≤ |y|∞ · |u|∞ = ‖y‖2 · ‖u‖2
Hence, ε‖y‖2² ≤ ‖y‖2 · ‖u‖2, so
‖y‖2 ≤ (1/ε)‖u‖2

Lecture 5 31 Lecture 5 32
EL2620 2011 EL2620 2011

The Passivity Theorem

Theorem: If S1 is strictly passive and S2 is passive, then the closed-loop system is BIBO stable from r to y.

Proof: S1 strictly passive and S2 passive give

  ε(|y1|_T^2 + |e1|_T^2) ≤ ⟨y1, e1⟩_T + ⟨y2, e2⟩_T = ⟨y, r⟩_T

Therefore

  ε(|y1|_T^2 + ⟨r1 − y2, r1 − y2⟩_T) ≤ ⟨y, r⟩_T

or

  ε(|y1|_T^2 + |y2|_T^2 − 2⟨y2, r1⟩_T + |r1|_T^2) ≤ ⟨y, r⟩_T

Hence

  ε|y|_T^2 ≤ 2ε⟨y2, r1⟩_T + ⟨y, r⟩_T ≤ (2ε + 1)|y|_T |r|_T

Let T → ∞ and the result follows.

The Passivity Theorem is a "Small Phase Theorem"

S2 passive ⇒ cos φ2 ≥ 0 ⇒ |φ2| ≤ π/2
S1 strictly passive ⇒ cos φ1 > 0 ⇒ |φ1| < π/2

2 minute exercise: Apply the Passivity Theorem and compare it with the Nyquist Theorem. What about conservativeness? [Compare the discussion on the Small Gain Theorem.]

Example—Gain Adaptation

Applications in telecommunication channel estimation, noise cancellation, etc.

Process: y = θ* G(s) u
Model: ym = θ(t) G(s) u

Adaptation law:

  dθ/dt = −c u(t)[ym(t) − y(t)],   c > 0

Gain Adaptation—Closed-Loop System

(Block diagram: the adaptation integrator −c/s driven by u(ym − y) updates the gain θ(t) in the model branch.)

Gain Adaptation is BIBO Stable

(Block diagram: the error system with (θ − θ*)u entering G(s) and ym − y fed back through the adaptation integrator; the nonlinear part is denoted S.)

S is passive (see exercises).
If G(s) is strictly passive, the closed-loop system is BIBO stable.

Simulation of Gain Adaptation

Let G(s) = 1/(s + 1), c = 1, u = sin t, and θ(0) = 0.

(Plots: y and ym over 0 ≤ t ≤ 20, and θ converging towards a constant value.)

Storage Function

Consider the nonlinear control system

  ẋ = f(x, u),  y = h(x)

A storage function is a C^1 function V : R^n → R such that
• V(0) = 0 and V(x) ≥ 0, ∀x ≠ 0
• V̇(x) ≤ u^T y, ∀x, u

Remark:
• V(x(T)) represents the stored energy in the system
• V(x(T)) ≤ ∫_0^T y(t)u(t) dt + V(x(0)), ∀T > 0
  (stored energy at t = T ≤ absorbed energy + stored energy at t = 0)

Storage Function and Passivity

Lemma: If there exists a storage function V for a system

  ẋ = f(x, u),  y = h(x)

with x(0) = 0, then the system is passive.

Proof: For all T > 0,

  ⟨y, u⟩_T = ∫_0^T y(t)u(t) dt ≥ V(x(T)) − V(x(0)) = V(x(T)) ≥ 0

Lyapunov vs. Passivity

A storage function is a generalization of a Lyapunov function.

Lyapunov idea: "Energy is decreasing"

  V̇ ≤ 0

Passivity idea: "Increase in stored energy ≤ added energy"

  V̇ ≤ u^T y

Example—KYP Lemma

Consider an asymptotically stable linear system

  ẋ = Ax + Bu,  y = Cx

Assume there exist positive definite matrices P, Q such that

  A^T P + P A = −Q,   B^T P = C

Consider V = 0.5 x^T P x. Then

  V̇ = 0.5(ẋ^T P x + x^T P ẋ) = 0.5 x^T (A^T P + P A)x + u B^T P x
     = −0.5 x^T Q x + uy < uy,  x ≠ 0

and hence the system is strictly passive. This fact is part of the Kalman–Yakubovich–Popov lemma.
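As an illustrative check, not part of the notes, the KYP conditions can be verified by hand for the scalar system ẋ = −x + u, y = x, i.e. G(s) = 1/(s + 1): P = 1 solves B^T P = C, the Lyapunov equation gives Q = 2 > 0, and Re G(iω) = 1/(1 + ω²) > 0 confirms passivity in the frequency domain.

```python
# Scalar system xdot = -x + u, y = x, i.e. G(s) = 1/(s+1)
A, B, C = -1.0, 1.0, 1.0
P = 1.0                      # candidate storage "matrix" (scalar)
Q = -(A * P + P * A)         # Lyapunov equation: A'P + PA = -Q  ->  Q = 2 > 0

def re_G(w):
    # G(iw) = 1/(1 + iw)  ->  Re G(iw) = 1/(1 + w^2)
    return 1.0 / (1.0 + w * w)

print(Q, B * P == C)                                   # KYP conditions hold
print(min(re_G(0.1 * k) for k in range(1, 1000)))      # positive for all w tested
```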

Today’s Goal

You should be able to
• derive the gain of a system
• analyze stability using
  – the Small Gain Theorem
  – the Circle Criterion
  – passivity

EL2620 Nonlinear Control

Lecture 6

• Describing function analysis

Today’s Goal

You should be able to
• Derive describing functions for static nonlinearities
• Analyze existence and stability of periodic solutions by describing function analysis

Motivating Example

  G(s) = 4/(s(s + 1)^2) and u = sat e give a stable oscillation.

• How can the oscillation be predicted?

A Frequency Response Approach

Nyquist/Bode: A (linear) feedback system will have sustained oscillations (center) if the loop gain is 1 at the frequency where the phase lag is −180°.

But can we talk about the frequency response, in terms of gain and phase lag, of a static nonlinearity?

Fourier Series

A periodic function u(t) = u(t + T) has a Fourier series expansion

  u(t) = a0/2 + Σ_{n=1}^∞ (an cos nωt + bn sin nωt)
       = a0/2 + Σ_{n=1}^∞ sqrt(an^2 + bn^2) sin[nωt + arctan(an/bn)]

where ω = 2π/T and

  an(ω) = (2/T) ∫_0^T u(t) cos nωt dt,   bn(ω) = (2/T) ∫_0^T u(t) sin nωt dt

Note: Sometimes we make the change of variable t → φ/ω.

The Fourier Coefficients are Optimal

The finite expansion

  û_k(t) = a0/2 + Σ_{n=1}^k (an cos nωt + bn sin nωt)

solves

  min_{û_k} (2/T) ∫_0^T ( u(t) − û_k(t) )^2 dt

Key Idea

e(t) = A sin ωt gives

  u(t) = Σ_{n=1}^∞ sqrt(an^2 + bn^2) sin[nωt + arctan(an/bn)]

If |G(inω)| ≪ |G(iω)| for n ≥ 2, then n = 1 suffices, so that

  y(t) ≈ |G(iω)| sqrt(a1^2 + b1^2) sin[ωt + arctan(a1/b1) + arg G(iω)]

That is, we assume all higher harmonics are filtered out by G.

Definition of Describing Function

The describing function is

  N(A, ω) = ( b1(ω) + i a1(ω) ) / A

If G is low pass and a0 = 0, then

  û1(t) = |N(A, ω)| A sin[ωt + arg N(A, ω)]

can be used instead of u(t) to analyze the system.
Amplitude-dependent gain and phase shift!

Describing Function for a Relay

For an ideal relay with output ±H and input e(t) = A sin φ, φ = 2πt/T:

  a1 = (1/π) ∫_0^{2π} u(φ) cos φ dφ = 0

  b1 = (1/π) ∫_0^{2π} u(φ) sin φ dφ = (2/π) ∫_0^π H sin φ dφ = 4H/π

The describing function for a relay is thus

  N(A) = ( b1 + i a1 ) / A = 4H/(πA)

Odd Static Nonlinearities

Assume f(·) and g(·) are odd (i.e. f(−e) = −f(e)) static nonlinearities with describing functions Nf and Ng. Then,

• Im Nf(A, ω) = 0
• Nf(A, ω) = Nf(A)
• Nαf(A) = α Nf(A)
• Nf+g(A) = Nf(A) + Ng(A)

Existence of Periodic Solutions

Proposal: sustained oscillations if the loop gain is 1 and the phase lag is −180°:

  G(iω)N(A) = −1  ⇔  G(iω) = −1/N(A)

The intersections of the curves G(iω) and −1/N(A) give ω and A for a possible periodic solution.

Periodic Solutions in Relay System

  G(s) = 3/(s + 1)^3 with feedback u = −sgn y

No phase lag in f(·); arg G(iω) = −π for ω = sqrt(3) ≈ 1.7:

  G(i sqrt(3)) = −3/8 = −1/N(A) = −πA/4  ⇒  A = 12/(8π) ≈ 0.48
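The harmonic-balance prediction above can be reproduced numerically: find the phase-crossover frequency of G(s) = 3/(s + 1)³ by bisection on Im G(iω) = 0, then solve −1/N(A) = G(iω) with N(A) = 4/(πA) for the relay (H = 1). This is a minimal sketch, not part of the notes; the bisection bracket is an assumption.

```python
import math

def G(s):
    return 3.0 / (s + 1.0) ** 3

# Im G(iw) changes sign at the phase crossover arg G(iw) = -pi
lo, hi = 1.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if G(1j * mid).imag < 0:
        lo = mid          # crossover lies at a higher frequency
    else:
        hi = mid
w = 0.5 * (lo + hi)                    # ~ sqrt(3)
A = -4 * G(1j * w).real / math.pi      # from G(iw) = -pi*A/4
print(w, A)                            # ~1.732, ~0.477
```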

Describing Function for a Saturation

The prediction via the describing function agrees very well with the true oscillations. Note that G filters out almost all higher-order harmonics.

Let e(t) = A sin ωt = A sin φ. First set H = D. Then for φ ∈ (0, π)

  u(φ) = A sin φ,  φ ∈ (0, φ0) ∪ (π − φ0, π)
  u(φ) = D,        φ ∈ (φ0, π − φ0)

where φ0 = arcsin(D/A).

  a1 = (1/π) ∫_0^{2π} u(φ) cos φ dφ = 0

  b1 = (1/π) ∫_0^{2π} u(φ) sin φ dφ = (4/π) ∫_0^{π/2} u(φ) sin φ dφ
     = (4A/π) ∫_0^{φ0} sin^2 φ dφ + (4D/π) ∫_{φ0}^{π/2} sin φ dφ
     = (A/π)(2φ0 + sin 2φ0)

Hence, if H = D, then

  N(A) = (1/π)(2φ0 + sin 2φ0)

If H ≠ D, then the rule Nαf(A) = α Nf(A) gives

  N(A) = (H/(πD))(2φ0 + sin 2φ0)

(Plot: N(A) for H = D = 1, equal to 1 for A ≤ D and decreasing towards 0 for large A.)
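As a small check (not from the notes), the saturation describing function derived above can be coded directly: it equals the linear gain H/D for A ≤ D, decreases monotonically for A > D, and approaches the relay value 4H/(πA) for large amplitudes.

```python
import math

def N_sat(A, D=1.0, H=1.0):
    # Describing function of a saturation with slope H/D and limit H
    if A <= D:
        return H / D                      # linear region: pure gain
    phi0 = math.asin(D / A)
    return (H / (math.pi * D)) * (2 * phi0 + math.sin(2 * phi0))

print(N_sat(1.0), N_sat(10.0))            # 1.0, then close to 4/(10*pi)
```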

The Nyquist Theorem

Assume that G is stable and K is a positive gain.

• If G(iω) goes through the point −1/K, the closed-loop system displays sustained oscillations.
• If G(iω) encircles the point −1/K, then the closed-loop system is unstable (growing-amplitude oscillations).
• If G(iω) does not encircle the point −1/K, then the closed-loop system is stable (damped oscillations).

5 minute exercise: What oscillation amplitude and frequency does the describing function analysis predict for the "Motivating Example"?

Stability of Periodic Solutions

Assume that G(s) is stable.

• If G(Ω) encircles the point −1/N(A), then the oscillation amplitude is increasing.
• If G(Ω) does not encircle the point −1/N(A), then the oscillation amplitude is decreasing.

An Unstable Periodic Solution

An intersection with amplitude A0 is unstable if A < A0 leads to decreasing amplitude and A > A0 leads to increasing amplitude.

Stable Periodic Solution in Relay System

  G(s) = (s + 10)^2 / (s + 1)^3 with feedback u = −sgn y

gives one stable and one unstable limit cycle. The leftmost intersection of G(iω) with −1/N(A) corresponds to the stable one.

Automatic Tuning of PID Controller

The period T and amplitude A of the relay feedback limit cycle can be used for autotuning: a relay in feedback with the process induces a controlled oscillation, from which the PID parameters are computed.

Describing Function for a Quantizer

Let e(t) = A sin ωt = A sin φ. Then for φ ∈ (0, π)

  u(φ) = 0,  φ ∈ (0, φ0)
  u(φ) = 1,  φ ∈ (φ0, π − φ0)

where φ0 = arcsin(D/A). Hence

  a1 = (1/π) ∫_0^{2π} u(φ) cos φ dφ = 0

  b1 = (1/π) ∫_0^{2π} u(φ) sin φ dφ = (4/π) ∫_{φ0}^{π/2} sin φ dφ
     = (4/π) cos φ0 = (4/π) sqrt(1 − D^2/A^2)

so that

  N(A) = 0,                           A < D
  N(A) = (4/(πA)) sqrt(1 − D^2/A^2),  A ≥ D

Plot of Describing Function for Quantizer

(Plot: N(A) for D = 1, peaking just above A = D and decaying for large A.)

Notice that N(A) ≈ 1.3/A for large amplitudes.

Describing Function Pitfalls

Describing function analysis can give erroneous results.

• A DF may predict a limit cycle even if one does not exist.
• A limit cycle may exist even if the DF does not predict it.
• The predicted amplitude and frequency are only approximations and can be far from the true values.

Accuracy of Describing Function Analysis

Control loop with friction F = sgn y, corresponding to

  G/(1 + GC) = s(s − z)/(s^3 + 2s^2 + 2s + 1)

with feedback u = −sgn y. The oscillation depends on the zero at s = z.

For z = 1/3 and z = 4/3, the DF gives period times and amplitudes (T, A) = (11.4, 1.00) and (17.3, 0.23), respectively.

(Plots: y(t) and the limit cycles in the phase plane for z = 1/3 and z = 4/3.)

Accurate results only if y is close to sinusoidal!

Harmonic Balance

2 minute exercise: What is N(A) for f(x) = x^2?

A few more Fourier coefficients in the truncation

  û_k(t) = a0/2 + Σ_{n=1}^k (an cos nωt + bn sin nωt)

may give a much better result. The describing function corresponds to k = 1 and a0 = 0.

Example: f(x) = x^2 with e(t) = sin ωt gives u(t) = (1 − cos 2ωt)/2. Hence, by considering a0 = 1 and a2 = −1/2 we get the exact result.

Analysis of Oscillations—A Summary

Time domain:
• Poincaré maps and Lyapunov functions
• Rigorous results, but only for simple examples
• Hard to use for large problems

Frequency domain:
• Describing function analysis
• Approximate results
• Powerful graphical methods

Today’s Goal

You should be able to
• Derive describing functions for static nonlinearities
• Analyze existence and stability of periodic solutions by describing function analysis

EL2620 Nonlinear Control

Lecture 7

• Compensation for saturation (anti-windup)
• Friction models
• Compensation for friction

Today’s Goal

You should be able to analyze and design
• Anti-windup for PID and state-space controllers
• Compensation for friction

The Problem with Saturating Actuators

• The feedback path is broken when u saturates ⇒ open-loop behavior!
• Leads to problems when the system and/or the controller is unstable
  – Example: the I-part in PID

Recall:

  C_PID(s) = K( 1 + 1/(Ti s) + Td s )

Example—Windup in PID Controller

(Plots: output y and control signal u for a PID controller without (dashed) and with (solid) anti-windup.)

Anti-Windup for PID Controller

Anti-windup (a) with the actuator output available and (b) without (an actuator model is then used). The saturation error es = sat(v) − v is fed back to the integrator through the gain 1/Tt.

Anti-Windup is Based on Tracking

When the control signal saturates, the integrator state in the controller tracks the proper state.

The tracking time Tt is the design parameter of the anti-windup. Common choices of Tt:
• Tt = Ti
• Tt = sqrt(Ti Td)

Remark: If 0 < Tt ≪ Ti, then the integrator state becomes sensitive to the instances when es ≠ 0:

  I(t) = ∫_0^t ( K e(τ)/Ti + es(τ)/Tt ) dτ ≈ (1/Tt) ∫_0^t es(τ) dτ
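The tracking idea can be demonstrated in a short simulation (a sketch, not from the notes; the plant G(s) = 1/(s + 1), the PI parameters, and the saturation level are illustrative choices). Without tracking the integrator winds up during saturation and the step response overshoots heavily; with the tracking term (u − v)/Tt the overshoot is much smaller.

```python
def simulate(Tt=None, K=5.0, Ti=0.5, umax=1.2, r=1.0, dt=1e-3, T=10.0):
    # PI control of the plant ydot = -y + sat(u).
    # Tt=None disables the anti-windup tracking term.
    y, I, ymax = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = r - y
        v = K * e + I                        # unsaturated PI output
        u = max(-umax, min(umax, v))         # actuator saturation
        I += dt * (K / Ti * e + ((u - v) / Tt if Tt else 0.0))
        y += dt * (-y + u)                   # Euler step of the plant
        ymax = max(ymax, y)
    return ymax

print(simulate(None), simulate(0.1))  # peak output without vs with anti-windup
```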

Anti-Windup for Observer-Based State Feedback Controller

  x̂˙ = Ax̂ + B sat v + K(y − Cx̂)
  v = L(xm − x̂)

x̂ is an estimate of the process state, xm the desired (model) state.
An actuator model is needed if sat v is not measurable.

Anti-Windup for General State-Space Controller

State-space controller:

  ẋc = F xc + Gy
  u = Cxc + Dy

Windup is possible if F is unstable and u saturates.

Idea: Rewrite the representation of the control law from (a) to (b) with the same input–output relation, but where the unstable SA is replaced by a stable SB. If u saturates, then (b) behaves better than (a).

State-space controller without and with anti-windup. Mimic the observer-based controller:

  ẋc = F xc + Gy + K(u − Cxc − Dy)
      = (F − KC)xc + (G − KD)y + Ku

Choose K such that F0 = F − KC has desired (stable) eigenvalues. Then use the controller

  ẋc = F0 xc + G0 y + Ku
  u = sat(Cxc + Dy)

where G0 = G − KD.

Controllers with "Stable" Zeros

Most controllers are minimum phase, i.e., have zeros strictly in the LHP (stable zero dynamics):

  ẋc = F xc + Gy,  u = Cxc + Dy:   u = 0  ⇒  ẋc = (F − GC/D)xc,  y = −(C/D)xc

Thus, choose the "observer" gain

  K = G/D  ⇒  F − KC = F − GC/D

and the eigenvalues of the "observer"-based controller become equal to the zeros of F(s) when u saturates.

Note that this implies G − KD = 0 in the figure on the previous slide, and we thus obtain P-feedback with gain D under saturation.

Controller F(s) with "Stable" Zeros

Let D = lim_{s→∞} F(s) and consider the feedback implementation with the direct gain D in the forward path and (1/F(s) − 1/D) in the feedback loop. It is easy to show that the transfer function from y to u with no saturation equals F(s)!

If the transfer function (1/F(s) − 1/D) in the feedback loop is stable (stable zeros) ⇒ no stability problems in case of saturation.

Internal Model Control (IMC)

IMC: apply feedback only when the system G and the model Ĝ differ!

Assume G stable. Note: feedback from the model error y − ŷ.

Design: assume Ĝ ≈ G and choose Q stable with Q ≈ G^{-1}. The resulting controller is

  F = Q/(1 − QĜ)

Example

  G(s) = 1/(T1 s + 1)

Choose

  Q = (T1 s + 1)/(τ s + 1),  τ < T1

This gives the controller

  F = Q/(1 − QĜ) = (T1 s + 1)/(τ s) = (T1/τ)( 1 + 1/(T1 s) )

A PI controller!

IMC with Static Nonlinearity

Include the nonlinearity in the model. Choose Q ≈ G^{-1}. No integration.

An alternative way to implement anti-windup!

Example (cont'd)

Assume r = 0 and abuse of Laplace transform notation:

  u = −Q(y − Ĝv)

If |u| < umax (v = u): PI controller

  u = −((T1 s + 1)/(τ s)) y

If |u| > umax (v = ±umax):

  u = −((T1 s + 1)/(τ s + 1)) y ± umax/(τ s + 1)

Other Anti-Windup Solutions

The solutions above are all based on tracking. Other solutions include:

• Tune the controller to avoid saturation
• Don't update the controller states at saturation
• Conditionally reset the integration state to zero
• Apply optimal control theory (Lecture 12)

Bumpless Transfer

Another application of the tracking idea is the switching between automatic (A) and manual (M) control modes. A PID with anti-windup and bumpless transfer uses tracking gains 1/Tr in both modes.

Note the incremental form of the manual control mode (u̇ ≈ uc/Tm).

Friction

Friction is present almost everywhere.

• Often bad: friction in valves and other actuators
• Sometimes good: friction in brakes
• Sometimes too small: earthquakes

Problems:
• How to model friction?
• How to compensate for friction?
• How to detect friction in control loops?

Stick-Slip Motion

(Plots: position x lagging the ramp reference y in steps, and the applied force Fp and friction force Ff over time.)

Position Control of Servo with Friction

A PID controller drives the mass 1/(ms), followed by the integrator 1/s, to position x; the friction force F acts on the velocity v.

(Plots: three signals over 0–100 s showing a friction-induced limit cycle.)

5 minute exercise: Which are the signals in the previous plots?
EL2620 2011 EL2620 2011

Friction Modeling

Stribeck Effect: friction increases with decreasing velocity (for low velocities), Stribeck (1902).

(Plot: steady-state friction [Nm] vs velocity [rad/sec] for Joint 1, showing the Stribeck dip near zero velocity.)

Classical Friction Models

Advanced models capture various friction phenomena better.

Friction Compensation

• Lubrication
• Integral action
• Dither signal
• Model-based friction compensation
• Adaptive friction compensation
• The Knocker

Integral Action

• Integral action compensates for any external disturbance
• Works if the friction force changes slowly (v(t) ≈ const)
• If the friction force changes quickly, then large integral action (small Ti) is necessary. May lead to stability problems.

Modified Integral Action

Modify the integral part to I = (K/Ti) ∫ ê(τ) dτ, where ê is a dead-zone version of the control error e (ê = 0 for |e| < η).

Advantage: avoids that a small static error introduces an oscillation.
Disadvantage: the error won't go to zero.

Dither Signal

Avoid sticking at v = 0 (where there is high friction) by adding a high-frequency mechanical vibration (dither).

Cf. a mechanical maze puzzle (labyrintspel).

Model-Based Friction Compensation

For a process with friction F:

  m ẍ = u − F

use the control signal

  u = uPID + F̂

where uPID is the regular control signal and F̂ an estimate of F.

Possible if:
• An estimate F̂ ≈ F is available
• u and F apply at the same point

Adaptive Friction Compensation

Coulomb friction model: F = a sgn v

Friction estimator:

  ż = k uPID sgn v
  â = z − k m |v|
  F̂ = â sgn v

The adaptation converges: e = a − â → 0 as t → ∞.

Proof:

  de/dt = −dâ/dt = −dz/dt + k m (d/dt)|v|
        = −k uPID sgn v + k m v̇ sgn v
        = −k sgn v (uPID − m v̇)
        = −k sgn v (F − F̂)
        = −k(a − â) = −ke

Remark: Be careful with (d/dt)|v| at v = 0.

(Plots: velocity control with a P controller and a PI controller, showing vref, v, and the control signal u; below, the P controller with adaptive friction compensation, which removes the stick-slip limit cycle.)

The Knocker

Coulomb friction compensation with a square-wave dither: short pulses are added to the control signal u.

(Plot: a typical control signal u with knocker pulses.)

Hägglund: Patent and Innovation Cup winner.

Detection of Friction in Control Loops

• Friction is due to wear and increases with time
• Q: When should valves be maintained?
• Idea: Monitor loops automatically and estimate friction

(Plots: process output y and control signal u for a loop oscillating due to valve friction.)

Horch: PhD thesis (2000) and patent.

Today’s Goal

You should be able to analyze and design
• Anti-windup for PID, state-space, and polynomial controllers
• Compensation for friction

Next Lecture

• Backlash
• Quantization

EL2620 Nonlinear Control


Today’s Goal
Lecture 8 You should be able to analyze and design for

• Backlash
• Backlash
• Quantization • Quantization

Lecture 8 1 Lecture 8 2

EL2620 2011 EL2620 2011

Linear and Angular Backlash

(Figures: linear backlash with play 2D between xin and xout under forces Fin, Fout; angular backlash with play 2D between θin and θout under torques Tin, Tout.)

Backlash

Backlash (glapp) is
• present in most mechanical and hydraulic systems
• increasing with wear
• necessary for a gearbox to work at high temperature
• bad for control performance
• sometimes inducing oscillations

Backlash Model

  ẋout = ẋin,  in contact
  ẋout = 0,    otherwise

where "in contact" means |xout − xin| = D and ẋin(xin − xout) > 0.

Alternative Model

A static model: the output force (torque) is a stiff function of the relative displacement xin − xout (resp. θin − θout) outside a dead zone of width 2D. Not equivalent to the "Backlash Model" above.

• Multivalued output; the current output depends on the history. Thus, backlash is a dynamic phenomenon.
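The backlash (play) model above is easy to implement for sampled inputs: the output moves only when the input pushes against one of the two faces of the gap. The sketch below (not from the notes; the triangle-wave drive and D = 0.2 are illustrative) shows the hysteresis: on the way up the output lags by D, and it stands still until the input has reversed by 2D.

```python
def backlash_step(x_in, x_out, D):
    # Play operator: output moves only when the input presses a face
    if x_in - x_out > D:        # pushing the upper face
        return x_in - D
    if x_out - x_in > D:        # pushing the lower face
        return x_in + D
    return x_out                # inside the gap: output stays

D, x_out, xs = 0.2, 0.0, []
path = [0.01 * k for k in range(100)] + [1.0 - 0.01 * k for k in range(200)]
for x_in in path:               # triangle wave 0 -> 1 -> -0.99
    x_out = backlash_step(x_in, x_out, D)
    xs.append(x_out)
print(max(xs), xs[-1])          # 0.8, then about -0.79
```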

Effects of Backlash

P-control of the motor angle with a gearbox having backlash with D = 0.2.

(Plots: step responses without backlash for K = 0.25, 1, 4 and with backlash for K = 0.25, 1, 4.)

• Oscillations for K = 4 but not for K = 0.25 or K = 1. Why?
• Note that the amplitude will decrease with decreasing D, but never vanish.

Describing Function for Backlash

If A < D then N(A) = 0, else

  Re N(A) = (1/π)[ π/2 + arcsin(1 − 2D/A) + 2(1 − 2D/A) sqrt( (D/A)(1 − D/A) ) ]

  Im N(A) = −(4D/(πA))(1 − D/A)

(Plot: −1/N(A) for D = 0.2 in the complex plane.)

Note that −1/N(A) → −1 as A → ∞ (physical interpretation?)

Describing Function Analysis

K = 4, D = 0.2: DF analysis gives an intersection at A = 0.33, ω = 1.24.
Simulation: A = 0.33, ω = 2π/5.0 = 1.26.

(Plots: Nyquist diagrams with −1/N(A) for K = 0.25, 1, 4, and the input and output of the backlash.)

The describing function predicts the oscillation well.

Stability Proof for Backlash System

The describing function method is only approximate. Do there exist conditions that guarantee stability (of the steady state)?

Note that θin and θout will not converge to zero.
Q: What about θ̇in and θ̇out?

Homework 2

Analyze the backlash system with input–output stability results. Rewrite the system in terms of the velocities θ̇in, θ̇out, where the block "BL" satisfies

  θ̇out = θ̇in,  in contact
  θ̇out = 0,    otherwise

Passivity Theorem: BL is passive
Small Gain Theorem: BL has gain γ(BL) = 1
Circle Criterion: BL is contained in the sector [0, 1]
Backlash Compensation

• Mechanical solutions
• Dead zone
• Linear controller design
• Backlash inverse

Linear Controller Design

Introduce phase-lead compensation:

  K (1 + sT2)/(1 + sT1)

  F(s) = K (1 + sT2)/(1 + sT1) with T1 = 0.5, T2 = 2.0:

(Nyquist diagrams and plots of y and u with and without the lead filter.)

Oscillation removed!

Backlash Inverse

Idea: Let xin jump ±2D when ẋout should change sign.

(Figure: the backlash inverse block placed between u and the backlash so that xout tracks u.)

Example—Perfect Compensation

Motor with backlash on the input, in feedback with a PD controller. The backlash inverse is

  xin(t) = u(t) + D̂,   if u(t) > u(t−)
  xin(t) = u(t) − D̂,   if u(t) < u(t−)
  xin(t) = xin(t−),    otherwise

• If D̂ = D then perfect compensation (xout = u)
• If D̂ < D then under-compensation (decreased backlash)
• If D̂ > D then over-compensation (may give oscillation)

Example—Under-Compensation and Example—Over-Compensation

(Plots: with D̂ < D the response is slow but smooth; with D̂ > D an oscillation appears.)

Quantization

• What precision is needed in A/D and D/A converters? (8–14 bits?)
• What precision is needed in computations? (8–64 bits?)
• Quantization in A/D and D/A converters
• Quantization of parameters
• Roundoff, overflow, underflow in computations

Linear Model of Quantization

Model the quantization error as a uniformly distributed stochastic signal e, independent of u, with

  Var(e) = ∫_{−∞}^{∞} e^2 f_e de = ∫_{−∆/2}^{∆/2} (e^2/∆) de = ∆^2/12

• May be reasonable if ∆ is small compared to the variations in u
• But added noise can never affect stability, while quantization can!
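The variance formula ∆²/12 is easy to verify by Monte Carlo (a sketch, not from the notes; the input range and sample count are arbitrary). With a mid-tread quantizer of step ∆ and a widely varying input, the quantization error is close to uniform on (−∆/2, ∆/2):

```python
import random

delta = 0.2

def quantize(u):
    # mid-tread uniform quantizer with step delta
    return round(u / delta) * delta

random.seed(1)
errors = [quantize(u) - u for u in (random.uniform(-5, 5) for _ in range(200_000))]
var = sum(e * e for e in errors) / len(errors)
print(var, delta ** 2 / 12)   # both close to 0.00333
```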

Describing Function for Deadzone Relay

Lecture 6 ⇒

  N(A) = 0,                           A < D
  N(A) = (4/(πA)) sqrt(1 − D^2/A^2),  A > D

Describing Function for Quantizer

Superposition of deadzone relays with thresholds (2i − 1)∆/2 gives

  N(A) = 0,  A < ∆/2

  N(A) = (4∆/(πA)) Σ_{i=1}^n sqrt( 1 − ((2i − 1)∆/(2A))^2 ),
         (2n − 1)∆/2 < A < (2n + 1)∆/2
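The quantizer describing function above is straightforward to evaluate numerically (a sketch, not from the notes; the amplitude grid is arbitrary). A grid search reproduces the claim on the next slide that the maximum is 4/π ≈ 1.27, attained near A ≈ 0.71∆:

```python
import math

def N_quant(A, delta=1.0):
    # Describing function of a uniform quantizer with step delta
    if A < delta / 2:
        return 0.0
    n = int(A / delta + 0.5)    # number of active steps: (2n-1)*delta/2 < A
    s = sum(math.sqrt(1 - ((2 * i - 1) * delta / (2 * A)) ** 2)
            for i in range(1, n + 1))
    return 4 * delta / (math.pi * A) * s

grid = [0.01 * k for k in range(1, 1001)]
Nmax, Amax = max((N_quant(A), A) for A in grid)
print(Nmax, Amax)   # ~1.27 near A ~ 0.71
```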

(Plot: N(A) as a function of A/∆.)

• The maximum value is 4/π ≈ 1.27, attained at A ≈ 0.71∆
• Predicts oscillation if the Nyquist curve intersects the negative real axis to the left of −π/4 ≈ −0.79
• A controller with gain margin > 1/0.79 = 1.27 avoids the oscillation
• Reducing ∆ reduces only the oscillation amplitude

Example—Motor with P-controller

Quantization of the process output with ∆ = 0.2. The Nyquist curve of the linear part (K & ZOH & G(s)) intersects the negative real axis at −0.5K: stability for K < 2 without Q; a stable oscillation is predicted for K > 2/1.27 = 1.57 with Q.

Example—1/s^2 and 2nd-Order Controller

(Plots: outputs for K = 0.8, 1.2, 1.6 and the Nyquist curve of the loop F D/A G A/D.)

Quantization ∆ = 0.02 in the A/D converter:
Describing function: Ay = 0.01 and T = 39. Simulation: Ay = 0.01 and T = 28.

Quantization ∆ = 0.01 in the D/A converter:
Describing function: Au = 0.005 and T = 39. Simulation: Au = 0.005 and T = 39.

Quantization Compensation

• Improve accuracy (larger word length)
• Avoid unstable controllers and gain margins < 1.3
• Use the tracking idea from anti-windup to improve the D/A converter
• Use analog dither, oversampling, and a digital lowpass filter (with decimation) to improve the accuracy of the A/D converter

Today’s Goal

You should now be able to analyze and design for
• Backlash
• Quantization

EL2620 Nonlinear Control

Lecture 9

Today’s Goal

You should be able to analyze and design
• High-gain control systems
• Nonlinear control design based on high-gain control
• Sliding mode controllers

Linearization Through High-Gain Feedback

History of the Feedback Amplifier

New York–San Francisco communication link, 1914. High signal amplification with low distortion was needed. Feedback amplifiers were the solution! Black, Bode, and Nyquist at Bell Labs, 1920–1950.

A sector-bounded nonlinearity f(·),

  α1 ≤ f(e)/e ≤ α2,

in feedback with a high gain K gives

  (α1 K)/(1 + α1 K) r ≤ y ≤ (α2 K)/(1 + α2 K) r

Choosing K ≫ 1/α1 yields y ≈ r.

Inverting Nonlinearities

Compensation of a static nonlinearity through inversion: the controller F(s) is followed by f̂^{-1}(·) before the plant f(·)G(s).

Should be combined with feedback as in the figure!

A Word of Caution

Nyquist: high loop gain may induce oscillations (due to dynamics)!

Remark: How to Obtain f^{-1} from f using Feedback

  u̇ = k( v − f(u) )

If k > 0 is large and df/du > 0, then u̇ → 0 and

  0 = k( v − f(u) )  ⇔  f(u) = v  ⇔  u = f^{-1}(v)

Example—Linearization of Static Nonlinearity

Linearization of f(u) = u^2 through feedback with gain K. The case K = 100 is shown in the plot: y(r) ≈ r.

The Sensitivity Function S = (1 + GF)^{-1}

The closed-loop system is

  Gcl = G/(1 + GF)

Small perturbations dG in G give

  dGcl = dG/(1 + GF)^2  ⇒  dGcl/Gcl = S dG/G

S is the closed-loop sensitivity to open-loop perturbations.

Distortion Reduction via Feedback

The feedback reduces the distortion in each link. Several links in series give distortion-free high gain.
Example—Distortion Reduction

Let G = 1000 with distortion dG/G = 0.1. Choose K = 0.1 ⇒ S = (1 + GK)^{-1} ≈ 0.01. Then

  dGcl/Gcl = S dG/G ≈ 0.001

100 feedback amplifiers in series give total amplification

  Gtot = (Gcl)^{100} ≈ 10^{100}

and total distortion

  dGtot/Gtot = (1 + 10^{-3})^{100} − 1 ≈ 0.1

Transcontinental Communication Revolution

The feedback amplifier was patented by Black in 1937.

  Year | Channels | Loss (dB) | No. of amplifiers
  1914 |     1    |    60     |   3–6
  1923 |    1–4   |  150–400  |   6–20
  1938 |    16    |   1000    |   40
  1941 |   480    |   30000   |   600

Sensitivity and the Circle Criterion

Consider a circle C := {z ∈ C : |z + 1| = r}, r ∈ (0, 1). GF(iω) stays outside C if

  |1 + GF(iω)| > r  ⇔  |S(iω)| < r^{-1}

Then the Circle Criterion gives stability if

  1/(1 + r) ≤ f(y)/y ≤ 1/(1 − r)

i.e. k1 = 1/(1 + r), k2 = 1/(1 − r).

Small Sensitivity Allows Large Uncertainty

If |S(iω)| is small, we can choose r large (close to one). This corresponds to a large sector for f(·). Hence, |S(iω)| small implies low sensitivity to nonlinearities.

On–Off Control

On–off control is the simplest control strategy, common in temperature control, level control, etc. The relay corresponds to infinitely high gain at the switching point.

A Control Design Idea

Assume V(x) = x^T P x, P = P^T > 0, represents the energy of

  ẋ = Ax + Bu,  u ∈ [−1, 1]

Choose u such that V decays as fast as possible:

  V̇ = x^T(A^T P + P A)x + 2 B^T P x u

is minimized by u = −sgn(B^T P x). (Notice that V̇ = a + bu is just a line segment in u for −1 ≤ u ≤ 1, so the lowest value is at an endpoint, depending on the sign of the slope b.) This gives

  ẋ = Ax − B,  B^T P x > 0
  ẋ = Ax + B,  B^T P x < 0

with switching on B^T P x = 0.

Sliding Modes

  ẋ = f+(x),  σ(x) > 0
  ẋ = f−(x),  σ(x) < 0

The sliding surface is S = {x : σ(x) = 0}. The sliding mode is

  ẋ = α f+ + (1 − α) f−

where α ∈ [0, 1] satisfies α f+_n + (1 − α) f−_n = 0 for the normal projections f+_n, f−_n of f+, f−.

Example

  ẋ = [0 −1; 1 −1] x + [1; 1] u = Ax + Bu

with

  u = −sgn σ(x) = −sgn x2 = −sgn(Cx),  C = [0 1]

is equivalent to

  ẋ = Ax − B,  x2 > 0
  ẋ = Ax + B,  x2 < 0

For small x2 we have

x2 > 0:  ẋ2(t) ≈ x1 − 1,  dx2/dx1 ≈ 1 − x1
x2 < 0:  ẋ2(t) ≈ x1 + 1,  dx2/dx1 ≈ 1 + x1

This implies the following behavior:

[Figure: phase plane with trajectories approaching the sliding surface x2 = 0 on the segment between x1 = −1 and x1 = 1]

Sliding Mode Dynamics

The dynamics along the sliding surface S is obtained by setting u = ueq ∈ [−1, 1] such that x(t) stays on S.

ueq is called the equivalent control.

Lecture 9 19 Lecture 9 20
EL2620 2011 EL2620 2011

Deriving the Equivalent Control

Assume

ẋ = f(x) + g(x)u,  u = −sgn σ(x)

has a stable sliding surface S = {x : σ(x) = 0}. Then, for x ∈ S,

0 = σ̇(x) = (dσ/dx)(dx/dt) = (dσ/dx) f(x) + (dσ/dx) g(x) u

The equivalent control is thus given by

ueq = −[(dσ/dx) g(x)]⁻¹ (dσ/dx) f(x)

if the inverse exists.

Example (cont'd)

Finding u = ueq such that σ̇(x) = ẋ2 = 0 on σ(x) = x2 = 0 gives

0 = ẋ2 = x1 − x2 + ueq = x1 + ueq  ⇒  ueq = −x1

Insert this in the equation for ẋ1:

ẋ1 = −x2 + ueq = −x1

which gives the dynamics on the sliding surface S = {x : x2 = 0}.

Lecture 9 21 Lecture 9 22

EL2620 2011 EL2620 2011

Equivalent Control for Linear System

ẋ = Ax + Bu,  u = −sgn σ(x) = −sgn(Cx)

Assume CB > 0. The sliding surface is S = {x : Cx = 0}, so

0 = σ̇(x) = C(Ax + B ueq)

gives ueq = −CAx/CB.

Example (cont'd): For the example:

ueq = −CAx/CB = −[1  −1] x = −x1,

because σ(x) = x2 = 0. (Same result as before.)

Sliding Dynamics

The dynamics on S = {x : Cx = 0} is given by

ẋ = Ax + B ueq = (I − BC/CB) Ax,

under the constraint Cx = 0, where the eigenvalues of (I − BC/CB)A are equal to the zeros of sG(s) = sC(sI − A)⁻¹B.

Remark: The condition Cx = 0 corresponds to the zero at s = 0, and thus this dynamic disappears on S = {x : Cx = 0}.

Lecture 9 23 Lecture 9 24
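The eigenvalue/zero correspondence can be checked numerically for the lecture's example (a sketch; A, B, C are taken from the reconstructed example above, so treat them as an assumption):

```python
import numpy as np

# Example system with C = [0 1], CB = 1 > 0.
A = np.array([[0.0, -1.0], [1.0, -1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[0.0, 1.0]])

CB = float(C @ B)
Asl = (np.eye(2) - B @ C / CB) @ A        # sliding dynamics matrix (I - BC/CB)A
eigs = np.sort(np.linalg.eigvals(Asl).real)
print(eigs)
# G(s) = C(sI - A)^{-1}B = (s + 1)/(s^2 + s + 1), so sG(s) has zeros at
# s = 0 and s = -1, matching the eigenvalues of the sliding dynamics.
```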
EL2620 2011 EL2620 2011

Proof

ẋ = Ax + Bu
y = Cx  ⇒  ẏ = CAx + CBu  ⇒  u = (1/CB) ẏ − (1/CB) CAx

and

(d/dt) x = (I − BC/CB) Ax + (1/CB) B ẏ

Hence, the transfer function from ẏ to u equals

1/CB − (1/CB) CA (sI − (I − BC/CB)A)⁻¹ B (1/CB)

but this transfer function is also 1/(sG(s)). Hence, the eigenvalues of (I − BC/CB)A are equal to the zeros of sG(s).

Design of Sliding Mode Controller

Idea: Design a control law that forces the state to σ(x) = 0. Choose σ(x) such that the sliding mode tends to the origin. Assume

d/dt (x1, x2, ..., xn)ᵀ = (f1(x) + g1(x)u, x1, ..., xn−1)ᵀ = f(x) + g(x)u

Choose the control law

u = −(pᵀf(x))/(pᵀg(x)) − (μ/(pᵀg(x))) sgn σ(x)

where μ > 0 is a design parameter, σ(x) = pᵀx, and p = (p1 ... pn)ᵀ are the coefficients of a stable polynomial.

Lecture 9 25 Lecture 9 26

EL2620 2011 EL2620 2011

Closed-Loop Stability

Consider V(x) = σ²(x)/2 with σ(x) = pᵀx. Then,

V̇ = σ(x)σ̇(x) = xᵀp (pᵀf(x) + pᵀg(x)u)

With the chosen control law, we get

V̇ = −μσ(x) sgn σ(x) < 0

so x tends to σ(x) = 0. On the sliding surface,

0 = σ(x) = p1 x1 + ··· + pn−1 xn−1 + pn xn = p1 xn^(n−1) + ··· + pn−1 xn^(1) + pn xn^(0)

where x^(k) denotes the k:th time derivative. Now p corresponds to a stable differential equation, and xn → 0 exponentially as t → ∞. The state relations xk−1 = ẋk then give x → 0 exponentially as t → ∞.

Time to Switch

Consider an initial point x0 such that σ0 = σ(x0) > 0. Since

σ(x)σ̇(x) = −μσ(x) sgn σ(x)

it follows that as long as σ(x) > 0:

σ̇(x) = −μ

Hence, the time to the first switch (σ(x) = 0) is

ts = σ0/μ < ∞

Note that ts → 0 as μ → ∞.

Lecture 9 27 Lecture 9 28
EL2620 2011 EL2620 2011

Example—Sliding Mode Controller

Design a state-feedback controller for

ẋ = [ 1  0 ] x + [ 1 ] u,   y = [0  1] x
    [ 1  0 ]     [ 0 ]

Choose p1 s + p2 = s + 1 so that σ(x) = x1 + x2. The controller is given by

u = −(pᵀAx)/(pᵀB) − (μ/(pᵀB)) sgn σ(x) = −2x1 − μ sgn(x1 + x2)

Phase Portrait

Simulation with μ = 0.5. Note the sliding surface σ(x) = x1 + x2 = 0.

[Figure: phase portrait in the (x1, x2) plane; trajectories reach the line x1 + x2 = 0 and slide along it to the origin]

Lecture 9 29 Lecture 9 30
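A simulation sketch of the example (the system matrices and the control law u = −2x1 − μ sgn(x1 + x2) are as reconstructed above, so treat them as an assumption; the initial state and μ match the Time Plots slide, which predicts the first switch at ts = σ0/μ = 3):

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 0.0]])
B = np.array([1.0, 0.0])
mu = 0.5

x = np.array([1.5, 0.0])        # initial condition from the Time Plots slide
dt = 1e-4
t_switch = None
for k in range(int(15.0 / dt)):
    sigma = x[0] + x[1]
    if t_switch is None and sigma <= 0.0:
        t_switch = k * dt        # first time the sliding surface is reached
    u = -2.0 * x[0] - mu * np.sign(sigma)
    x = x + dt * (A @ x + B * u)

print(t_switch, x)   # t_switch ≈ 3, and x has slid to (a neighborhood of) the origin
```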

EL2620 2011 EL2620 2011

Time Plots

Initial condition x(0) = (1.5, 0)ᵀ. Simulation agrees well with the time to switch

ts = σ0/μ = 3

and the sliding dynamics

ẏ = −y

[Figure: time plots of x1, x2 and the control input u]

The Sliding Mode Controller is Robust

Assume that only a model ẋ = f̂(x) + ĝ(x)u of the true system ẋ = f(x) + g(x)u is known. Still, however,

V̇ = σ(x) [ (pᵀ(f ĝᵀ − f̂ gᵀ)p)/(pᵀĝ) − μ (pᵀg)/(pᵀĝ) sgn σ(x) ] < 0

if sgn(pᵀg) = sgn(pᵀĝ) and μ > 0 is sufficiently large.

The closed-loop system is thus robust against model errors! (High-gain control with stable open-loop zeros)

Lecture 9 31 Lecture 9 32
EL2620 2011 EL2620 2011

Comments on Sliding Mode Control

• Efficient handling of model uncertainties
• Often impossible to implement infinitely fast switching
• Smooth version through low-pass filter or boundary layer
• Applications in robotics and vehicle control
• Compare pulse-width modulated control signals

Today's Goal

You should be able to analyze and design

• High-gain control systems
• Sliding mode controllers

Lecture 9 33 Lecture 9 34

EL2620 2011

Next Lecture
• Lyapunov design methods
• Exact feedback linearization

Lecture 9 35
EL2620 2011 EL2620 2011

EL2620 Nonlinear Control

Lecture 10

• Exact feedback linearization
• Input-output linearization
• Lyapunov-based control design methods

Output Feedback and State Feedback

ẋ = f(x, u)
y = h(x)

• Output feedback: Find u = k(y) such that the closed-loop system
  ẋ = f(x, k(h(x)))
  has nice properties.

• State feedback: Find u = ℓ(x) such that
  ẋ = f(x, ℓ(x))
  has nice properties.

k and ℓ may include dynamics.

Lecture 10 1 Lecture 10 2

EL2620 2011 EL2620 2011

Nonlinear Controllers

• Nonlinear dynamical controller: ż = a(z, y), u = c(z)
• Linear dynamics, static nonlinearity: ż = Az + By, u = c(z)
• Linear controller: ż = Az + By, u = Cz

Nonlinear Observers

What if x is not measurable?

ẋ = f(x, u),  y = h(x)

Simplest observer:

dx̂/dt = f(x̂, u)

Feedback correction, as in the linear case:

dx̂/dt = f(x̂, u) + K(y − h(x̂))

Choices of K:

• Linearize f at x0, find K for the linearization
• Linearize f at x̂(t), find K = K(x̂) for the linearization

The second case is called the Extended Kalman Filter.

Lecture 10 3 Lecture 10 4
EL2620 2011 EL2620 2011

Some state feedback control approaches

• Exact feedback linearization
• Input-output linearization
• Lyapunov-based design – backstepping control

Exact Feedback Linearization

Consider the nonlinear control-affine system

ẋ = f(x) + g(x)u

Idea: use a state-feedback controller u(x) to make the system linear.

Example 1:

ẋ = cos x − x³ + u

The state-feedback controller

u(x) = −cos x + x³ − kx + v

yields the linear system

ẋ = −kx + v

Lecture 10 5 Lecture 10 6
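The cancellation in Example 1 can be checked by simulation (a sketch; k, v, the initial state and the Euler step are arbitrary choices, not from the slides):

```python
import numpy as np

def simulate(k=2.0, v=0.0, x0=1.0, dt=1e-3, T=5.0):
    """Euler-simulate xdot = cos(x) - x**3 + u with the linearizing
    feedback u = -cos(x) + x**3 - k*x + v from Example 1."""
    x = x0
    for _ in range(int(T / dt)):
        u = -np.cos(x) + x**3 - k * x + v
        x = x + dt * (np.cos(x) - x**3 + u)   # equals x + dt*(-k*x + v)
    return x

xf = simulate()
# the closed loop behaves as the linear system xdot = -k*x: x(T) ≈ x0*exp(-k*T)
print(xf, np.exp(-10.0))
```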

EL2620 2011 EL2620 2011

Another Example

ẋ1 = a sin x2
ẋ2 = −x1² + u

How do we cancel the term sin x2? Perform a transformation of states into linearizable form:

z1 = x1,  z2 = ẋ1 = a sin x2

yields

ż1 = z2,  ż2 = a cos x2 (−x1² + u)

and the linearizing control becomes

u(x) = x1² + v/(a cos x2),  x2 ∈ (−π/2, π/2)

Diffeomorphisms

A nonlinear state transformation z = T(x) with

• T invertible for x in the domain of interest
• T and T⁻¹ continuously differentiable

is called a diffeomorphism.

Definition: A nonlinear system

ẋ = f(x) + g(x)u

is feedback linearizable if there exists a diffeomorphism T whose domain contains the origin and transforms the system into the form

ẋ = Ax + Bγ(x)(u − α(x))

with (A, B) controllable and γ(x) nonsingular for all x in the domain of interest.

Lecture 10 7 Lecture 10 8
EL2620 2011 EL2620 2011

Exact vs. Input-Output Linearization

The example again, but now with an output:

ẋ1 = a sin x2,  ẋ2 = −x1² + u,  y = x2

• The control law u = x1² + v/(a cos x2) yields
  ż1 = z2;  ż2 = v;  y = sin⁻¹(z2/a)
  which is nonlinear in the output.

• If we want a linear input-output relationship we could instead use
  u = x1² + v
  to obtain
  ẋ1 = a sin x2,  ẋ2 = v,  y = x2
  which is linear from v to y.

Caution: the control has rendered the state x1 unobservable.

Input-Output Linearization

Use state feedback u(x) to make the control-affine system

ẋ = f(x) + g(x)u
y = h(x)

linear from the input v to the output y.

The general idea: differentiate the output y = h(x) p times until the control u appears explicitly in y^(p), and then determine u so that

y^(p) = v,  i.e., G(s) = 1/s^p

Lecture 10 9 Lecture 10 10

EL2620 2011 EL2620 2011

Lie Derivatives

Consider the nonlinear SISO system

ẋ = f(x) + g(x)u,  x ∈ Rⁿ, u ∈ R
y = h(x),  y ∈ R

The derivative of the output:

ẏ = (dh/dx) ẋ = (dh/dx)(f(x) + g(x)u) =: L_f h(x) + L_g h(x) u

where L_f h(x) and L_g h(x) are Lie derivatives (L_f h is the derivative of h along the vector field of ẋ = f(x)).

Repeated derivatives:

L_f^k h(x) = (d(L_f^(k−1) h)/dx) f(x),  L_g L_f h(x) = (d(L_f h)/dx) g(x)

Example: controlled van der Pol equation

ẋ1 = x2
ẋ2 = −x1 + ε(1 − x1²)x2 + u
y = x1

Differentiate the output:

ẏ = ẋ1 = x2
ÿ = ẋ2 = −x1 + ε(1 − x1²)x2 + u

The state feedback controller

u = x1 − ε(1 − x1²)x2 + v  ⇒  ÿ = v

Lecture 10 11 Lecture 10 12
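The Lie-derivative bookkeeping for the van der Pol example can be verified numerically (a sketch using central finite differences; the value of ε and the test point are arbitrary choices):

```python
import numpy as np

def lie(h, vec, x, d=1e-6):
    """Numerical Lie derivative L_vec h(x) = (dh/dx) vec(x)."""
    grad = np.array([(h(x + d*e) - h(x - d*e)) / (2*d) for e in np.eye(len(x))])
    return grad @ vec(x)

eps = 0.5
f = lambda x: np.array([x[1], -x[0] + eps*(1 - x[0]**2)*x[1]])  # van der Pol drift
g = lambda x: np.array([0.0, 1.0])                               # input vector field
h = lambda x: x[0]                                               # output y = x1

x0 = np.array([0.3, -0.7])
Lgh = lie(h, g, x0)                            # = 0: u does not appear in ydot
LgLfh = lie(lambda x: lie(h, f, x), g, x0)     # = 1: u appears in yddot
print(Lgh, LgLfh)   # consistent with relative degree p = 2 for y = x1
```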
EL2620 2011 EL2620 2011

Lie derivatives and relative degree

• The relative degree p of a system is defined as the number of integrators between the input and the output (the number of times y must be differentiated for the input u to appear)

• A linear system
  Y(s)/U(s) = (b0 s^m + ... + b_m)/(s^n + a1 s^(n−1) + ... + a_n)
  has relative degree p = n − m

• A nonlinear system has relative degree p if
  L_g L_f^(i−1) h(x) = 0, i = 1, ..., p−1;  L_g L_f^(p−1) h(x) ≠ 0 ∀x ∈ D

Example

The controlled van der Pol equation

ẋ1 = x2
ẋ2 = −x1 + ε(1 − x1²)x2 + u
y = x1

Differentiating the output:

ẏ = ẋ1 = x2
ÿ = ẋ2 = −x1 + ε(1 − x1²)x2 + u

Thus, the system has relative degree p = 2.

Lecture 10 13 Lecture 10 14

EL2620 2011 EL2620 2011

The input-output linearizing control

Consider an n:th order SISO system with relative degree p:

ẋ = f(x) + g(x)u
y = h(x)

Differentiating the output repeatedly:

ẏ = (dh/dx) ẋ = L_f h(x) + L_g h(x) u    (where L_g h(x) = 0)
⋮
y^(p) = L_f^p h(x) + L_g L_f^(p−1) h(x) u

and hence the state-feedback controller

u = (1/(L_g L_f^(p−1) h(x))) (−L_f^p h(x) + v)

results in the linear input-output system

y^(p) = v

Lecture 10 15 Lecture 10 16
EL2620 2011 EL2620 2011

Zero Dynamics

• Note that the order of the linearized system is p, corresponding to the relative degree of the system
• Thus, if p < n then n − p states are unobservable in y.
• The dynamics of the n − p states not observable in the linearized dynamics of y are called the zero dynamics. They correspond to the dynamics of the system when y is forced to be zero for all times.
• A system with unstable zero dynamics is called non-minimum phase (and should not be input-output linearized!)

van der Pol again

ẋ1 = x2
ẋ2 = −x1 + ε(1 − x1²)x2 + u

• With y = x1 the relative degree p = n = 2 and there are no zero dynamics; thus we can transform the system into ÿ = v. Try it yourself!
• With y = x2 the relative degree p = 1 < n and the zero dynamics are given by ẋ1 = 0, which is not asymptotically stable (but bounded)

Lecture 10 17 Lecture 10 18

EL2620 2011 EL2620 2011

Lyapunov-Based Control Design Methods

ẋ = f(x, u)

• Find stabilizing state feedback u = u(x)
• Verify stability through a control Lyapunov function
• Methods depend on the structure of f

Here we limit the discussion to back-stepping control design, which requires a certain structure of f, discussed later.

A simple introductory example

Consider

ẋ = cos x − x³ + u

Apply the linearizing control

u = −cos x + x³ − kx

Choose the Lyapunov candidate V(x) = x²/2:

V(x) > 0,  V̇ = −kx² < 0

Thus, the system is globally asymptotically stable.

But, the term x³ in the control law may require large control moves!

Lecture 10 19 Lecture 10 20
EL2620 2011 EL2620 2011

The same example

ẋ = cos x − x³ + u

Now try the control law

u = −cos x − kx

Choose the same Lyapunov candidate V(x) = x²/2:

V(x) > 0,  V̇ = −x⁴ − kx² < 0

Thus, also globally asymptotically stable (and with a more negative V̇).

Simulating the two controllers

Simulation with x(0) = 10.

[Figure: state trajectory and control input for the linearizing and non-linearizing controllers; the linearizing controller initially demands an input of almost 1000]

The linearizing control is slower and uses excessive input. Thus, linearization can have a significant cost!

Lecture 10 21 Lecture 10 22
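A simulation sketch comparing the two control laws (k, the time step and the horizon are arbitrary choices; the initial state x(0) = 10 is from the slide):

```python
import numpy as np

def simulate(ctrl, k=1.0, x0=10.0, dt=1e-4, T=10.0):
    """Euler simulation of xdot = cos(x) - x**3 + u, recording the peak input."""
    x, peak_u = x0, 0.0
    for _ in range(int(T / dt)):
        u = ctrl(x, k)
        peak_u = max(peak_u, abs(u))
        x = x + dt * (np.cos(x) - x**3 + u)
    return x, peak_u

linearizing = lambda x, k: -np.cos(x) + x**3 - k*x   # cancels the stabilizing -x**3
damping     = lambda x, k: -np.cos(x) - k*x          # keeps it

x1, u1 = simulate(linearizing)
x2, u2 = simulate(damping)
print(u1, u2)   # the linearizing law demands roughly x0**3 = 1000, the other only ~x0
```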

EL2620 2011 EL2620 2011

Back-Stepping Control Design

We want to design a state feedback u = u(x) that stabilizes

ẋ1 = f(x1) + g(x1)x2
ẋ2 = u            (1)

at x = 0 with f(0) = 0.

Idea: See the system as a cascade connection. Design the controller first for the inner loop and then for the outer.

[Figure: block diagram: u → ∫ → x2 → g(x1) → Σ → ∫ → x1, with f(·) fed back to the sum]

Suppose the partial system

ẋ1 = f(x1) + g(x1)v̄

can be stabilized by v̄ = φ(x1) and there exists a Lyapunov function V1 = V1(x1) such that

V̇1(x1) = (dV1/dx1)(f(x1) + g(x1)φ(x1)) ≤ −W(x1)

for some positive definite function W.

This is a critical assumption in backstepping control!

Lecture 10 23 Lecture 10 24
EL2620 2011 EL2620 2011

The Trick

Equation (1) can be rewritten as

ẋ1 = f(x1) + g(x1)φ(x1) + g(x1)[x2 − φ(x1)]
ẋ2 = u

Introduce the new state ζ = x2 − φ(x1) and the control v = u − φ̇:

ẋ1 = f(x1) + g(x1)φ(x1) + g(x1)ζ
ζ̇ = v

where

φ̇(x1) = (dφ/dx1) ẋ1 = (dφ/dx1)(f(x1) + g(x1)x2)

[Figure: equivalent block diagrams before and after the change of variables, with −φ(x1) and −φ̇(x1) entering the loop]

Lecture 10 25 Lecture 10 26

EL2620 2011 EL2620 2011

Consider V2(x1, x2) = V1(x1) + ζ²/2. Then,

V̇2(x1, x2) = (dV1/dx1)(f(x1) + g(x1)φ(x1)) + (dV1/dx1) g(x1)ζ + ζv
            ≤ −W(x1) + (dV1/dx1) g(x1)ζ + ζv

Choosing

v = −(dV1/dx1) g(x1) − kζ,  k > 0

gives

V̇2(x1, x2) ≤ −W(x1) − kζ²

Hence, x = 0 is asymptotically stable for (1) with the control law u(x) = φ̇(x) + v(x). If V1 is radially unbounded, then global stability follows.

Back-Stepping Lemma

Lemma: Let z = (x1, ..., xk−1)ᵀ and

ż = f(z) + g(z)xk
ẋk = u

Assume φ(0) = 0, f(0) = 0, ż = f(z) + g(z)φ(z) stable, and V(z) a Lyapunov function (with V̇ ≤ −W). Then,

u = (dφ/dz)(f(z) + g(z)xk) − (dV/dz) g(z) − (xk − φ(z))

stabilizes x = 0 with V(z) + (xk − φ(z))²/2 being a Lyapunov function.

Lecture 10 27 Lecture 10 28
EL2620 2011 EL2620 2011

Strict Feedback Systems

The Back-Stepping Lemma can be applied to stabilize systems on strict feedback form:

ẋ1 = f1(x1) + g1(x1)x2
ẋ2 = f2(x1, x2) + g2(x1, x2)x3
ẋ3 = f3(x1, x2, x3) + g3(x1, x2, x3)x4
⋮
ẋn = fn(x1, ..., xn) + gn(x1, ..., xn)u

where gk ≠ 0.

Note: ẋ1, ..., ẋk do not depend on xk+2, ..., xn.

2 minute exercise: Give an example of a linear system ẋ = Ax + Bu on strict feedback form.

Lecture 10 29 Lecture 10 30

EL2620 2011 EL2620 2011

Back-Stepping Lemma can be applied recursively to a system

ẋ = f(x) + g(x)u

on strict feedback form. Back-stepping generates stabilizing feedbacks φk(x1, ..., xk) (equal to u in the Back-Stepping Lemma) and Lyapunov functions

Vk(x1, ..., xk) = Vk−1(x1, ..., xk−1) + [xk − φk−1]²/2

by "stepping back" from x1 to u (see Khalil pp. 593–594 for details). Back-stepping results in the final state feedback

u = φn(x1, ..., xn)

Back-Stepping Example

Design a back-stepping controller for

ẋ1 = x1² + x2,  ẋ2 = x3,  ẋ3 = u

Step 0: Verify strict feedback form.

Step 1: Consider the first subsystem

ẋ1 = x1² + φ1(x1),  ẋ2 = u1

where φ1(x1) = −x1² − x1 stabilizes the first equation. With V1(x1) = x1²/2, the Back-Stepping Lemma gives

u1 = (−2x1 − 1)(x1² + x2) − x1 − (x2 + x1² + x1) = φ2(x1, x2)
V2 = x1²/2 + (x2 + x1² + x1)²/2

Lecture 10 31 Lecture 10 32
EL2620 2011

Step 2: Applying the Back-Stepping Lemma on

ẋ1 = x1² + x2
ẋ2 = x3
ẋ3 = u

gives

u = u2 = (dφ2/dz)(f(z) + g(z)x3) − (dV2/dz) g(z) − (x3 − φ2(z))
       = (∂φ2/∂x1)(x1² + x2) + (∂φ2/∂x2) x3 − ∂V2/∂x2 − (x3 − φ2(x1, x2))

which globally stabilizes the system.

Lecture 10 33
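A simulation sketch of the back-stepping example (the controller is the one built in Steps 1–2; the initial state, step size and horizon are arbitrary choices, and the partial derivatives of φ2 are taken numerically for convenience):

```python
import numpy as np

def phi2(x1, x2):
    # stabilizing feedback from Step 1
    return (-2*x1 - 1)*(x1**2 + x2) - x1 - (x2 + x1**2 + x1)

def u_bs(x1, x2, x3, d=1e-6):
    # Step 2 controller; dV2/dx2 = x2 + x1**2 + x1 from
    # V2 = x1**2/2 + (x2 + x1**2 + x1)**2/2
    dp_dx1 = (phi2(x1 + d, x2) - phi2(x1 - d, x2)) / (2*d)
    dp_dx2 = (phi2(x1, x2 + d) - phi2(x1, x2 - d)) / (2*d)
    dV2_dx2 = x2 + x1**2 + x1
    return dp_dx1*(x1**2 + x2) + dp_dx2*x3 - dV2_dx2 - (x3 - phi2(x1, x2))

x = np.array([0.5, -0.3, 0.2])
dt = 1e-4
for _ in range(int(20.0 / dt)):   # 20 s of Euler integration
    x1, x2, x3 = x
    x = x + dt * np.array([x1**2 + x2, x3, u_bs(x1, x2, x3)])
print(x)   # the state has converged to a neighborhood of the origin
```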
EL2620 2011 EL2620 2011

EL2620 Nonlinear Control

Lecture 11

• Nonlinear controllability
• Gain scheduling

Today's Goal

You should be able to

• Determine if a nonlinear system is controllable
• Apply gain scheduling to simple examples

Lecture 11 1 Lecture 11 2

EL2620 2011 EL2620 2011

Linear Systems
Controllability Lemma:
ẋ = Ax + Bu
Definition: is controllable if and only if
ẋ = f (x, u)
0 1 Wn = B AB . . . An−1 B
is controllable if for any x , x there exists T > 0 and
has full rank.
! "

u : [0, T ] → R such that x(0) = x0 and x(T ) = x1 .

Is there a corresponding result for nonlinear systems?

Lecture 11 3 Lecture 11 4
EL2620 2011 EL2620 2011

Controllable Linearization

Lemma: Let

ż = Az + Bu

be the linearization of

ẋ = f(x) + g(x)u

at x = 0 with f(0) = 0. If the linear system is controllable, then the nonlinear system is controllable in a neighborhood of the origin.

Remarks:

• Hence, if rank Wn = n then there is an ε > 0 such that for every x1 ∈ B_ε(0) there exists u : [0, T] → R so that x(T) = x1
• A nonlinear system can be controllable even if the linearized system is not controllable

Car Example

[Figure: car with position (x, y), heading ϕ and steering angle θ]

Input: u1 steering wheel velocity, u2 forward velocity

d/dt (x, y, ϕ, θ)ᵀ = (0, 0, 0, 1)ᵀ u1 + (cos(ϕ + θ), sin(ϕ + θ), sin(θ), 0)ᵀ u2 = g1(z)u1 + g2(z)u2

Lecture 11 5 Lecture 11 6

EL2620 2011 EL2620 2011

Linearization for u1 = u2 = 0 gives

ż = Az + B1 u1 + B2 u2

with A = 0 and

B1 = (0, 0, 0, 1)ᵀ,  B2 = (cos(ϕ0 + θ0), sin(ϕ0 + θ0), sin(θ0), 0)ᵀ

rank Wn = rank (B  AB  ...  A^(n−1)B) = 2 < 4, so the linearization is not controllable. Still the car is controllable!

Linearization does not capture the controllability well enough.

Lie Brackets

The Lie bracket between vector fields f, g : Rⁿ → Rⁿ is a vector field defined by

[f, g] = (∂g/∂x) f − (∂f/∂x) g

Example:

f = (cos x2, x1)ᵀ,  g = (x1, 1)ᵀ

[f, g] = [ 1  0 ] [ cos x2 ] − [ 0  −sin x2 ] [ x1 ]
         [ 0  0 ] [ x1    ]   [ 1     0    ] [ 1  ]

       = (cos x2 + sin x2, −x1)ᵀ

Lecture 11 7 Lecture 11 8
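The bracket in the example can be verified numerically (a sketch; the Jacobians are approximated by central differences at an arbitrary test point):

```python
import numpy as np

def jac(F, x, d=1e-6):
    """Finite-difference Jacobian dF/dx (columns are directional derivatives)."""
    return np.column_stack([(F(x + d*e) - F(x - d*e)) / (2*d) for e in np.eye(len(x))])

def bracket(f, g, x):
    """[f, g](x) = (dg/dx) f(x) - (df/dx) g(x)."""
    return jac(g, x) @ f(x) - jac(f, x) @ g(x)

f = lambda x: np.array([np.cos(x[1]), x[0]])
g = lambda x: np.array([x[0], 1.0])

x0 = np.array([0.7, 0.2])
fg = bracket(f, g, x0)
print(fg)   # expected (cos x2 + sin x2, -x1) at x0 = (0.7, 0.2)
```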
EL2620 2011 EL2620 2011

Lie Bracket Direction

For the system

ẋ = g1(x)u1 + g2(x)u2

the control

(u1, u2) = (1, 0),   t ∈ [0, ε)
           (0, 1),   t ∈ [ε, 2ε)
           (−1, 0),  t ∈ [2ε, 3ε)
           (0, −1),  t ∈ [3ε, 4ε)

gives the motion

x(4ε) = x(0) + ε² [g1, g2] + O(ε³)

The system can move in the [g1, g2] direction!

Proof

1. For t ∈ [0, ε], assuming ε small and x(0) = x0, a Taylor series yields

x(ε) = x0 + g1(x0)ε + (1/2)(dg1/dx) g1(x0) ε² + O(ε³)    (1)

2. Similarly, for t ∈ [ε, 2ε]:

x(2ε) = x(ε) + g2(x(ε))ε + (1/2)(dg2/dx) g2(x(ε)) ε²

and with x(ε) from (1), and g2(x(ε)) = g2(x0) + (dg2/dx) ε g1(x0), this gives

x(2ε) = x0 + ε(g1(x0) + g2(x0)) + ε² [ (1/2)(dg1/dx)(x0) g1(x0) + (dg2/dx)(x0) g1(x0) + (1/2)(dg2/dx)(x0) g2(x0) ]

Lecture 11 9 Lecture 11 10

EL2620 2011 EL2620 2011

Proof, continued

3. Similarly, for t ∈ [2ε, 3ε]:

x(3ε) = x0 + ε g2 + ε² [ (dg2/dx) g1 − (dg1/dx) g2 + (1/2)(dg2/dx) g2 ]

4. Finally, for t ∈ [3ε, 4ε]:

x(4ε) = x0 + ε² [ (dg2/dx) g1 − (dg1/dx) g2 ] = x0 + ε² [g1, g2]

Car Example (cont'd)

g3 := [g1, g2] = (∂g2/∂x) g1 − (∂g1/∂x) g2 = (−sin(ϕ + θ), cos(ϕ + θ), cos(θ), 0)ᵀ

since ∂g1/∂x = 0 and (∂g2/∂x) g1 picks out the θ-column of ∂g2/∂x.

Lecture 11 11 Lecture 11 12
EL2620 2011 EL2620 2011

We can hence move the car in the g3 direction ("wriggle") by applying the control sequence

(u1, u2) = {(1, 0), (0, 1), (−1, 0), (0, −1)}

[Figure: car with position (x, y), heading ϕ and steering angle θ]

The car can also move in the direction

g4 := [g3, g2] = (∂g2/∂x) g3 − (∂g3/∂x) g2 = ... = (−sin(ϕ + 2θ), cos(ϕ + 2θ), 0, 0)ᵀ

[Figure: car moving in the direction (−sin(ϕ), cos(ϕ))]

The g4 direction corresponds to sideways movement.

Lecture 11 13 Lecture 11 14

EL2620 2011 EL2620 2011

Parking Theorem

You can get out of any parking lot that is ε > 0 bigger than your car by applying control corresponding to g4, that is, by applying the control sequence

Wriggle, Drive, −Wriggle, −Drive

2 minute exercise: What does the direction [g1, g2] correspond to for a linear system ẋ = g1(x)u1 + g2(x)u2 = B1u1 + B2u2?

Lecture 11 15 Lecture 11 16
EL2620 2011 EL2620 2011

The Lie Bracket Tree

[g1, g2]
[g1, [g1, g2]]   [g2, [g1, g2]]
[g1, [g1, [g1, g2]]]   [g2, [g1, [g1, g2]]]   [g1, [g2, [g1, g2]]]   [g2, [g2, [g1, g2]]]

Controllability Theorem

Theorem: The system

ẋ = g1(x)u1 + g2(x)u2

is controllable if the Lie bracket tree (together with g1 and g2) spans Rⁿ for all x.

Remark:

• The system can be steered in any direction of the Lie bracket tree

Lecture 11 17 Lecture 11 18

EL2620 2011 EL2620 2011

Example—Unicycle

d/dt (x1, x2, θ)ᵀ = (cos θ, sin θ, 0)ᵀ u1 + (0, 0, 1)ᵀ u2

g1 = (cos θ, sin θ, 0)ᵀ,  g2 = (0, 0, 1)ᵀ,  [g1, g2] = (sin θ, −cos θ, 0)ᵀ

Controllable because {g1, g2, [g1, g2]} spans R³.

2 minute exercise:

• Show that {g1, g2, [g1, g2]} spans R³ for the unicycle
• Is the linearization of the unicycle controllable?

Lecture 11 19 Lecture 11 20
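A rank check for the unicycle at a sample configuration (a sketch; θ is an arbitrary choice, and the same test can be repeated over a grid of configurations):

```python
import numpy as np

theta = 0.9
g1 = np.array([np.cos(theta), np.sin(theta), 0.0])
g2 = np.array([0.0, 0.0, 1.0])
g12 = np.array([np.sin(theta), -np.cos(theta), 0.0])   # [g1, g2] from above

rank = np.linalg.matrix_rank(np.column_stack([g1, g2, g12]))
print(rank)   # 3 -> {g1, g2, [g1, g2]} spans R^3 at this configuration
```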
EL2620 2011 EL2620 2011

When is Feedback Linearization Possible?

Q: When can we transform ẋ = f(x) + g(x)u into ż = Az + bv by means of feedback u = α(x) + β(x)v and a change of variables z = T(x) (see previous lecture)?

A: The answer requires Lie brackets and further concepts from differential geometry (see Khalil and PhD course)

[Figure: block diagram: v → β → u → ẋ = f(x) + g(x)u → x → T → z]

Example—Rolling Penny

[Figure: rolling penny with contact point (x1, x2), heading θ and roll angle φ]

d/dt (x1, x2, φ, θ)ᵀ = (cos θ, sin θ, 1, 0)ᵀ u1 + (0, 0, 0, 1)ᵀ u2

Controllable because {g1, g2, [g1, g2], [g2, [g1, g2]]} spans R⁴.

Lecture 11 21 Lecture 11 22

EL2620 2011 EL2620 2011

Gain Scheduling

Control parameters depend on the operating conditions:

[Figure: block diagram: a gain schedule adjusts the controller parameters based on the operating condition; command signal → controller → control signal → process → output]

Example: PID controller with K = K(α), where α is the scheduling variable.

Examples of scheduling variables are production rate, machine speed, Mach number, flow rate.

Valve Characteristics

[Figure: flow versus valve position for quick-opening, linear, and equal-percentage valve characteristics]

Lecture 11 23 Lecture 11 24
EL2620 2011 EL2620 2011

Nonlinear Valve

Without gain scheduling:

[Figure: control loop: uc → Σ → PI → c → f̂⁻¹ → u → valve f → v → process G0(s) → y, with unit feedback]

[Figure: valve characteristics f(u) and the model f̂(u) used for inversion]

[Figure: step responses at the operating points uc ≈ 0.3, 1.1 and 5.2; performance varies strongly with the operating point]

Lecture 11 25 Lecture 11 26

EL2620 2011 EL2620 2011

With gain scheduling:

[Figure: step responses at the operating points uc ≈ 0.3, 1.1 and 5.1; performance is now uniform across operating points]

Flight Control

Pitch dynamics, with pitch rate q = θ̇:

[Figure: aircraft with pitch angle θ, angle of attack α, velocity V, normal acceleration Nz and elevator deflection δe]

Lecture 11 27 Lecture 11 28
EL2620 2011 EL2620 2011

The Pitch Control Channel

Operating conditions:

[Figure: flight envelope, altitude (×1000 ft) versus Mach number]

Flight Control

[Figure: pitch control channel block diagram; the gains K_DSE, K_SG, K_Q1, K_NZ and K_QD are scheduled on Mach number M, altitude H and indicated airspeed V_IAS; inputs are pitch stick position, acceleration and pitch rate, and the output goes to the servos]

Lecture 11 29 Lecture 11 30

EL2620 2011

Today’s Goal
You should be able to

• Determine if a nonlinear system is controllable


• Apply gain scheduling to simple examples

Lecture 11 31
EL2620 2011 EL2620 2011

EL2620 Nonlinear Control

Lecture 12

• Optimal control

Today's Goal

You should be able to

• Design controllers based on optimal control theory

Lecture 12 1 Lecture 12 2

EL2620 2011 EL2620 2011

Optimal Control Problems

Idea: formulate the control design problem as an optimization problem

min_{u(t)} J(x, u, t),  ẋ = f(t, x, u)

+ provides a systematic design framework
+ applicable to nonlinear problems
+ can deal with constraints
− difficult to formulate control objectives as a single objective function
− determining the optimal controller can be hard

Example—Boat in Stream

Sail as far as possible in the x1 direction. The speed of the water is v(x2), with dv/dx2 = 1. Rudder angle control:

u(t) ∈ U = {(u1, u2) : u1² + u2² = 1}

max_{u:[0,tf]→U} x1(tf)

ẋ1(t) = v(x2) + u1(t)
ẋ2(t) = u2(t)
x1(0) = x2(0) = 0

Lecture 12 3 Lecture 12 4
EL2620 2011 EL2620 2011

Example—Resource Allocation

Maximization of stored profit:

x(t) ∈ [0, ∞)     production rate
u(t) ∈ [0, 1]     portion of x reinvested
1 − u(t)          portion of x stored
γu(t)x(t)         change of production rate (γ > 0)
[1 − u(t)]x(t)    amount of stored profit

max_{u:[0,tf]→[0,1]} ∫₀^tf [1 − u(t)]x(t) dt

ẋ(t) = γu(t)x(t),  x(0) = x0 > 0

Example—Minimal Curve Length

Find the curve with minimal length between a given point and a line.

Curve: (t, x(t)) with x(0) = a.  Line: vertical through (tf, 0).

min_{u:[0,tf]→R} ∫₀^tf √(1 + u²(t)) dt

ẋ(t) = u(t),  x(0) = a

Lecture 12 5 Lecture 12 6

EL2620 2011 EL2620 2011

Optimal Control Problem

Standard form:

min_{u:[0,tf]→U} ∫₀^tf L(x(t), u(t)) dt + φ(x(tf))

ẋ(t) = f(x(t), u(t)),  x(0) = x0

Remarks:

• U ⊂ R^m is the set of admissible controls
• Infinite-dimensional optimization problem: optimization over functions u : [0, tf] → U
• Constraints on x from the dynamics
• Final time tf fixed (free later)

Pontryagin's Maximum Principle

Theorem: Introduce the Hamiltonian function

H(x, u, λ) = L(x, u) + λᵀ f(x, u)

Suppose the optimal control problem above has the solution u* : [0, tf] → U and x* : [0, tf] → Rⁿ. Then,

min_{u∈U} H(x*(t), u, λ(t)) = H(x*(t), u*(t), λ(t)),  ∀t ∈ [0, tf]

where λ(t) solves the adjoint equation

λ̇(t) = −(∂H/∂x)ᵀ(x*(t), u*(t), λ(t)),  λ(tf) = (∂φ/∂x)ᵀ(x*(tf))

Moreover, the optimal control is given by

u*(t) = arg min_{u∈U} H(x*(t), u, λ(t))

Lecture 12 7 Lecture 12 8
EL2620 2011 EL2620 2011

Remarks

• See a textbook, e.g., Glad and Ljung, for a proof. The outline is simply to note that every change of u(t) from the optimal u*(t) must increase the criterion. Then perform a clever Taylor expansion.
• Pontryagin's Maximum Principle provides a necessary condition: there may exist many solutions or none (cf., min_{u:[0,1]→R} x(1), ẋ = u, x(0) = 0)
• The Maximum Principle provides all possible candidates.
• The solution involves 2n ODEs with boundary conditions x(0) = x0 and λ(tf) = ∂φᵀ/∂x(x*(tf)). Often hard to solve explicitly.
• "Maximum" is due to Pontryagin's original formulation.

Example—Boat in Stream (cont'd)

The Hamiltonian satisfies

H = λᵀf = λ1 (v(x2) + u1) + λ2 u2,  φ(x) = −x1

Adjoint equations:

λ̇1(t) = 0,       λ1(tf) = −1
λ̇2(t) = −λ1(t),  λ2(tf) = 0

have the solution

λ1(t) = −1,  λ2(t) = t − tf

Lecture 12 9 Lecture 12 10

EL2620 2011 EL2620 2011

Optimal control

u*(t) = arg min_{u1²+u2²=1} λ1(t)(v(x2*(t)) + u1) + λ2(t)u2
      = arg min_{u1²+u2²=1} λ1(t)u1 + λ2(t)u2

Hence,

u1(t) = −λ1(t)/√(λ1²(t) + λ2²(t)),  u2(t) = −λ2(t)/√(λ1²(t) + λ2²(t))

or

u1(t) = 1/√(1 + (t − tf)²),  u2(t) = (tf − t)/√(1 + (t − tf)²)

Example—Resource Allocation (cont'd)

min_{u:[0,tf]→[0,1]} ∫₀^tf [u(t) − 1]x(t) dt

ẋ(t) = γu(t)x(t),  x(0) = x0

The Hamiltonian satisfies

H = L + λᵀf = (u − 1)x + λγux

Adjoint equation:

λ̇(t) = 1 − u*(t) − λ(t)γu*(t),  λ(tf) = 0

Lecture 12 11 Lecture 12 12
EL2620 2011 EL2620 2011

Optimal control:

u*(t) = arg min_{u∈[0,1]} (u − 1)x*(t) + λ(t)γux*(t)
      = arg min_{u∈[0,1]} u(1 + λ(t)γ),   (x*(t) > 0)

      = { 0,  λ(t) ≥ −1/γ
        { 1,  λ(t) < −1/γ

For t ≈ tf, we have u*(t) = 0 (why?) and thus λ̇(t) = 1.
For t < tf − 1/γ, we have u*(t) = 1 and thus λ̇(t) = −γλ(t).

[Figure: λ(t) and the resulting optimal control u*(t), which switches from 1 to 0 at t = tf − 1/γ]

• u*(t) = { 1,  t ∈ [0, tf − 1/γ]
          { 0,  t ∈ (tf − 1/γ, tf]

• It's optimal to reinvest in the beginning

Lecture 12 13 Lecture 12 14

EL2620 2011 EL2620 2011

5 minute exercise: Find the curve with minimal length by solving

min_{u:[0,tf]→R} ∫₀^tf √(1 + u²(t)) dt

ẋ(t) = u(t),  x(0) = a

5 minute exercise II: Solve the optimal control problem

min ∫₀¹ u⁴ dt + x(1)

ẋ = −x + u,  x(0) = 0

Lecture 12 15 Lecture 12 16
EL2620 2011 EL2620 2011

History—Calculus of Variations

• Brachistochrone (shortest time) problem (1696): Find the (frictionless) curve that takes a particle from A to B in the shortest time

  dt = ds/v = √(dx² + dy²)/v = √((1 + y′(x)²)/(2gy(x))) dx

  Minimize

  J(y) = ∫_A^B √((1 + y′(x)²)/(2gy(x))) dx

  Solved by John and James Bernoulli, Newton, l'Hospital

• Find the curve enclosing the largest area (Euler)

History—Optimal Control

• The space race (Sputnik, 1957)
• Pontryagin's Maximum Principle (1956)
• Bellman's Dynamic Programming (1957)
• Huge influence on engineering and other sciences:
  – Robotics—trajectory generation
  – Aeronautics—satellite orbits
  – Physics—Snell's law, conservation laws
  – Finance—portfolio theory

Lecture 12 17 Lecture 12 18

EL2620 2011 EL2620 2011

Goddard's Rocket Problem (1910)

How to send a rocket as high up in the air as possible?

d/dt (v, h, m)ᵀ = ((u − D)/m − g, v, −γu)ᵀ

(v(0), h(0), m(0)) = (0, 0, m0),  g, γ > 0

u motor force, D = D(v, h) air resistance

Constraints: 0 ≤ u ≤ umax and m(tf) = m1 (empty)

Optimization criterion: max_u h(tf)

Generalized form:

min_{u:[0,tf]→U} ∫₀^tf L(x(t), u(t)) dt + φ(x(tf))

ẋ(t) = f(x(t), u(t)),  x(0) = x0
ψ(x(tf)) = 0

Note the differences compared to the standard form:

• End time tf is free
• Final state is constrained: ψ(x(tf)) = x3(tf) − m1 = 0

Lecture 12 19 Lecture 12 20
EL2620 2011 EL2620 2011

Solution to Goddard's Problem

Goddard's problem is on generalized form with

x = (v, h, m)ᵀ,  L ≡ 0,  φ(x) = −x2,  ψ(x) = x3 − m1

D(v, h) ≡ 0:

• Easy: let u(t) = umax until m(t) = m1
• Burn fuel as fast as possible, because it costs energy to lift it

D(v, h) ≢ 0:

• Hard: e.g., it can be optimal to have low speed when the air resistance is high, in order to burn fuel at a higher level
• Took 50 years before a complete solution was presented

General Pontryagin's Maximum Principle

Theorem: Suppose u* : [0, tf] → U and x* : [0, tf] → Rⁿ are solutions to

min_{u:[0,tf]→U} ∫₀^tf L(x(t), u(t)) dt + φ(tf, x(tf))

ẋ(t) = f(x(t), u(t)),  x(0) = x0
ψ(tf, x(tf)) = 0

Then, there exist n0 ≥ 0, μ ∈ Rⁿ such that (n0, μᵀ) ≠ 0 and

min_{u∈U} H(x*(t), u, λ(t), n0) = H(x*(t), u*(t), λ(t), n0),  t ∈ [0, tf]

where

H(x, u, λ, n0) = n0 L(x, u) + λᵀ f(x, u)

Lecture 12 21 Lecture 12 22

EL2620 2011 EL2620 2011

The adjoint equation is

λ̇(t) = −(∂H/∂x)ᵀ(x*(t), u*(t), λ(t), n0)

λᵀ(tf) = n0 (∂φ/∂x)(tf, x*(tf)) + μᵀ (∂ψ/∂x)(tf, x*(tf))

H(x*(tf), u*(tf), λ(tf), n0) = −n0 (∂φ/∂t)(tf, x*(tf)) − μᵀ (∂ψ/∂t)(tf, x*(tf))

Remarks:

• tf may be a free variable
• With fixed tf: H(x*(tf), u*(tf), λ(tf), n0) = 0
• ψ defines end point constraints

Example—Minimum Time Control

Bring the states of the double integrator to the origin as fast as possible:

min_{u:[0,tf]→[−1,1]} ∫₀^tf 1 dt = min tf

ẋ1(t) = x2(t),  ẋ2(t) = u(t)
ψ(x(tf)) = (x1(tf), x2(tf))ᵀ = (0, 0)ᵀ

The optimal control is the bang-bang control

u*(t) = arg min_{u∈[−1,1]} 1 + λ1(t)x2*(t) + λ2(t)u

      = { 1,   λ2(t) < 0
        { −1,  λ2(t) ≥ 0
Lecture 12 23 Lecture 12 24
EL2620 2011 EL2620 2011

Adjoint equations λ̇1(t) = 0, λ̇2(t) = −λ1(t) give

λ1(t) = c1,  λ2(t) = c2 − c1 t

With u(t) = ζ = ±1, we have

x1(t) = x1(0) + x2(0)t + ζt²/2
x2(t) = x2(0) + ζt

Eliminating t gives the curves

x1(t) ± x2²(t)/2 = const

These define the switch curve, where the optimal control switches.

Reference Generation using Optimal Control

• The optimal control problem makes no distinction between open-loop control u*(t) and closed-loop control u*(t, x).
• We may use the optimal open-loop solution u*(t) as the reference value to a linear regulator, which keeps the system close to the wanted trajectory.
• Efficient design method for nonlinear problems.

Lecture 12 25 Lecture 12 26
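The switch curve yields a time-optimal feedback for the double integrator; a simulation sketch (the feedback form u = −sgn(x1 + x2|x2|/2), the initial state and the step size are choices made here, consistent with the curves above):

```python
import numpy as np

def u_bangbang(x1, x2):
    # switch function built from the curves x1 ± x2**2/2 = const
    s = x1 + x2 * abs(x2) / 2.0
    if s != 0.0:
        return -np.sign(s)
    return -np.sign(x2)

x = np.array([1.0, 0.0])
dt = 1e-4
for _ in range(int(5.0 / dt)):
    u = u_bangbang(x[0], x[1])
    x = x + dt * np.array([x[1], u])
print(x)   # near the origin; the analytic minimum time from (1, 0) is 2 s
```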

EL2620 2011 EL2620 2011

Linear Quadratic Control

min_{u:[0,∞)→R^m} ∫₀^∞ (xᵀQx + uᵀRu) dt

with

ẋ = Ax + Bu

has the optimal solution

u = −Lx

where L = R⁻¹BᵀS and S > 0 is the solution to the algebraic Riccati equation

SA + AᵀS + Q − SBR⁻¹BᵀS = 0

Properties of LQ Control

• Stabilizing
• Closed-loop system stable with u = −α(t)Lx for α(t) ∈ [1/2, ∞) (infinite gain margin)
• Phase margin 60 degrees
• If x is not measurable, then one may use a Kalman filter; this leads to linear quadratic Gaussian (LQG) control.
• But then the system may have arbitrarily poor robustness! (Doyle, 1978)

Lecture 12 27 Lecture 12 28
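A sketch of the LQ design using SciPy's Riccati solver, applied to the double integrator (the plant and weights are example choices, not from the slides; for Q = I, R = 1 the resulting gain is L = (1, √3)):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

S = solve_continuous_are(A, B, Q, R)     # solves SA + A'S + Q - SBR^{-1}B'S = 0
L = np.linalg.solve(R, B.T @ S)          # L = R^{-1} B' S
eigs = np.linalg.eigvals(A - B @ L)
print(L)             # [[1.  1.732...]]
print(eigs.real)     # both negative: the closed loop is stable
```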
EL2620 2011 EL2620 2011

Tetra Pak Milk Race

Move milk in minimum time without spilling. Given the dynamics of the system and the maximum slosh φ = 0.63, solve

min_{u:[0,tf]→[−10,10]} ∫₀^tf 1 dt

where u is the acceleration.

[Figure: optimal acceleration profile and resulting slosh angle]

Optimal time = 375 ms, Tetra Pak = 540 ms

[Grundelius & Bernhardsson, 1999]

Lecture 12 29 Lecture 12 30

EL2620 2011 EL2620 2011

Pros & Cons for Optimal Control

+ Systematic design procedure
+ Applicable to nonlinear control problems
+ Captures limitations (as optimization constraints)

− Hard to find suitable criteria
− Hard to solve the equations that give the optimal controller

SF2852 Optimal Control Theory

• Period 3, 7.5 credits
• Optimization and Systems Theory, http://www.math.kth.se/optsyst/

Dynamic Programming: Discrete & continuous; principle of optimality; Hamilton-Jacobi-Bellman equation
Pontryagin's Maximum Principle: Main results; special cases such as time-optimal control and LQ control
Numerical Methods: Numerical solution of optimal control problems
Applications: Aeronautics, robotics, process control, bioengineering, economics, logistics

Today’s Goal
You should be able to

• Design controllers based on optimal control theory for


– Standard form
– Generalized form

• Understand possibilities and limitations of optimal control

Lecture 12 33
EL2620 2011 EL2620 2011

EL2620 Nonlinear Control

Lecture 13

• Fuzzy logic and fuzzy control
• Artificial neural networks

Some slides copied from K.-E. Årzén and M. Johansson

Today's Goal

You should

• understand the basics of fuzzy logic and fuzzy controllers
• understand simple neural networks

Fuzzy Control

• Many plants are manually controlled by experienced operators
• Transferring process knowledge to a control algorithm is difficult

Idea:
• Model the operator's control actions (instead of the plant)
• Implement them as rules (instead of as differential equations)

Example of a rule:
IF Speed is High AND Traffic is Heavy
THEN Reduce Gas A Bit

Model Controller Instead of Plant

Conventional control design:
Model plant P → Analyze feedback → Synthesize controller C → Implement control algorithm

Fuzzy control design:
Model manual control → Implement control rules

(Standard feedback loop: r and the error e enter the controller C, which drives the plant P; the output y is fed back with negative sign.)

Fuzzy Set Theory

Specify how well an object satisfies a (vague) description.

Conventional set theory: x ∈ A or x ∉ A
Fuzzy set theory: x ∈ A to a certain degree µA(x)

Membership function:
µA : Ω → [0, 1] expresses the degree to which x belongs to A

A fuzzy set is defined as (A, µA)

Example

(Figure: piecewise-linear membership functions µC (Cold) and µW (Warm) on the temperature axis, with breakpoints at 10 and 25.)

Q1: Is the temperature x = 15 cold?
A1: It is quite cold, since µC(15) = 2/3.

Q2: Is x = 15 warm?
A2: It is not really warm, since µW(15) = 1/3.

Fuzzy Logic

How to calculate with fuzzy sets (A, µA)?

Conventional logic:
AND: A ∩ B
OR: A ∪ B
NOT: Aᶜ

Fuzzy logic:
AND: µ_{A∩B}(x) = min(µA(x), µB(x))
OR: µ_{A∪B}(x) = max(µA(x), µB(x))
NOT: µ_{Aᶜ}(x) = 1 − µA(x)

This defines logic calculations such as X AND Y OR Z.

Mimics human linguistic (approximate) reasoning [Zadeh, 1965]

Example

Q1: Is it cold AND warm? Q2: Is it cold OR warm?

(Figure: µ_{C∩W} is the pointwise min of the Cold and Warm membership functions; µ_{C∪W} is their pointwise max.)
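The min/max calculus above is a few lines of code. A sketch using the lecture's temperature example (the piecewise-linear shape with breakpoints 10 and 25 is taken from the figure; exact slopes are an assumption consistent with µC(15) = 2/3):

```python
def mu_cold(x):
    # Piecewise-linear membership: fully cold below 10, not cold above 25
    if x <= 10:
        return 1.0
    if x >= 25:
        return 0.0
    return (25 - x) / 15

def mu_warm(x):
    return 1.0 - mu_cold(x)

AND = min                    # mu_{A AND B}(x) = min(mu_A(x), mu_B(x))
OR = max                     # mu_{A OR B}(x)  = max(mu_A(x), mu_B(x))
NOT = lambda m: 1.0 - m      # mu_{NOT A}(x)   = 1 - mu_A(x)

x = 15.0
print(mu_cold(x), mu_warm(x))           # 2/3 and 1/3, as in the example
print(AND(mu_cold(x), mu_warm(x)))      # cold AND warm: 1/3
print(OR(mu_cold(x), mu_warm(x)))       # cold OR warm: 2/3
```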

Fuzzy Control System

(Block diagram: the fuzzy controller maps r and y to u, which drives the plant.)

r, y, u : [0, ∞) → R are conventional signals.

A fuzzy controller is a nonlinear mapping from y (and r) to u.

Fuzzy Controller

(Structure: y → Fuzzifier → Fuzzy Inference → Defuzzifier → u)

Fuzzifier: Fuzzy set evaluation of y (and r)
Fuzzy Inference: Fuzzy set calculations
Defuzzifier: Map fuzzy set to u

The fuzzifier and defuzzifier act as interfaces to the crisp signals.

Fuzzifier

Fuzzy set evaluation of the input y.

Example
y = 15: µC(15) = 2/3 and µW(15) = 1/3

Fuzzy Inference

1. Calculate degree of fulfillment for each rule
2. Calculate fuzzy output of each rule
3. Aggregate rule outputs

Examples of fuzzy rules:
Rule 1: IF y is Cold THEN u is High
Rule 2: IF y is Warm THEN u is Low

1. Calculate degree of fulfillment for the rules

2. Calculate fuzzy output of each rule

Note that "µ" is standard fuzzy-logic nomenclature for "truth value".

3. Aggregate rule outputs

Defuzzifier

(figure slides)

Fuzzy Controller—Summary

Fuzzifier: Fuzzy set evaluation of y (and r)
Fuzzy Inference:
1. Calculate degree of fulfillment for each rule
2. Calculate fuzzy output of each rule
3. Aggregate rule outputs
Defuzzifier: Map fuzzy set to u

Example—Fuzzy Control of Steam Engine

http://isc.faqs.org/docs/air/ttfuzzy.html
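The whole fuzzify-infer-defuzzify chain can be sketched for the two-rule base above. The output membership functions Low/High and the center-of-gravity defuzzifier are illustrative assumptions (min-inference, max-aggregation and centroid defuzzification are one common choice, not the only one):

```python
import numpy as np

# Rule base from the lecture: IF y is Cold THEN u is High,
#                             IF y is Warm THEN u is Low.
def mu_cold(y): return np.clip((25 - y) / 15, 0, 1)
def mu_warm(y): return 1 - mu_cold(y)
def mu_low(u):  return np.clip(1 - u / 5, 0, 1)    # assumed support [0, 5]
def mu_high(u): return np.clip((u - 5) / 5, 0, 1)  # assumed support [5, 10]

def fuzzy_controller(y):
    u = np.linspace(0, 10, 1001)
    w1, w2 = mu_cold(y), mu_warm(y)      # 1. degree of fulfillment
    out1 = np.minimum(w1, mu_high(u))    # 2. clip each rule's output set
    out2 = np.minimum(w2, mu_low(u))
    agg = np.maximum(out1, out2)         # 3. aggregate with max
    # Defuzzifier: center of gravity of the aggregated fuzzy set
    return np.sum(u * agg) / np.sum(agg)

print(fuzzy_controller(5))    # clearly cold -> large u
print(fuzzy_controller(25))   # clearly warm -> small u
```

The resulting map y → u is a smooth nonlinear static controller, which is the "nonlinear view" of fuzzy control.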

Rule-Based View of Fuzzy Control

Nonlinear View of Fuzzy Control

(figure slides)

Pros and Cons of Fuzzy Control

Advantages
• User-friendly way to design nonlinear controllers
• Explicit representation of operator (process) knowledge
• Intuitive for non-experts in conventional control

Disadvantages
• Limited analysis and synthesis
• Sometimes hard to combine with classical control
• Not obvious how to include dynamics in the controller

Fuzzy control is a way to obtain a class of nonlinear controllers.

Neural Networks

• How does the brain work?
• A network of computing components (neurons)

Neurons

(Figure: a brain neuron and its artificial counterpart.)

Model of a Neuron

Inputs: x1, x2, ..., xn
Weights: w1, w2, ..., wn
Bias: b
Nonlinearity: φ(·)
Output: y = φ(b + Σ_{i=1}^n wi·xi)

(Diagram: the weighted inputs and the bias are summed and passed through φ(·).)

A Simple Neural Network

Neural network consisting of six neurons:
inputs u1, u2, u3 → input layer → hidden layer → output layer → output y

Represents a nonlinear mapping from inputs to outputs.

Neural Network Design

1. How many hidden layers?
2. How many neurons in each layer?
3. How to choose the weights?

The choice of weights is often done adaptively, through learning.
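The neuron model and a small layered network can be sketched directly from the formula y = φ(b + Σ wi·xi). All numeric weights below are arbitrary illustrative values (in practice they would be found by learning), and tanh is one common choice of nonlinearity φ:

```python
import numpy as np

def phi(v):
    # Sigmoidal activation, one common choice of nonlinearity
    return np.tanh(v)

def neuron(x, w, b):
    # Single neuron: y = phi(b + sum_i w_i * x_i)
    return phi(b + np.dot(w, x))

# Tiny network: 3 inputs -> 2 hidden neurons -> 1 output neuron
W1 = np.array([[0.5, -1.0, 0.2],
               [1.5,  0.3, -0.7]])
b1 = np.array([0.1, -0.2])
w2 = np.array([1.0, -0.5])
b2 = 0.05

def network(u):
    h = phi(b1 + W1 @ u)       # hidden layer
    return neuron(h, w2, b2)   # output layer

print(network(np.array([1.0, 0.5, -1.0])))  # a scalar in (-1, 1)
```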

Success Stories

Fuzzy control:
• Zadeh (1965)
• Complex problems, but with possible linguistic controls
• Applications took off in the mid 70's
  – Cement kilns, washing machines, vacuum cleaners

Artificial neural networks:
• McCulloch & Pitts (1943), Minsky (1951)
• Complex problems with unknown and highly nonlinear structure
• Applications took off in the mid 80's
  – Pattern recognition (e.g., speech, vision), data classification

Today's Goal

You should
• understand the basics of fuzzy logic and fuzzy controllers
• understand simple neural networks

Next Lecture
• EL2620 Nonlinear Control revisited
• Spring courses in control
• Master thesis projects
• PhD thesis projects


EL2620 Nonlinear Control

Lecture 14

• Summary and repetition
• Spring courses in control
• Master thesis projects

Exam

• Regular written exam (in English) with five problems
• Sign up on the course homepage
• You may bring lecture notes, Glad & Ljung "Reglerteknik", and TEFYMA or BETA (no other material: textbooks, exercises, calculators etc. Any other basic control book must be approved by me before the exam).
• See the homepage for old exams

Question 1

What's on the exam?

• Nonlinear models: equilibria, phase portraits, linearization and stability
• Lyapunov stability (local and global), LaSalle
• Circle Criterion, Small Gain Theorem, Passivity Theorem
• Compensating static nonlinearities
• Describing functions
• Sliding modes, equivalent controls
• Lyapunov-based design: back-stepping
• Exact feedback linearization, input-output linearization, zero dynamics
• Nonlinear controllability
• Optimal control

Question 2

What design method should I use in practice?

The answer is highly problem dependent. A possible (learning) approach:

• Start with the simplest:
  – linear methods (loop shaping, state feedback, ...)
• Evaluate:
  – strong nonlinearities (under feedback!)?
  – varying operating conditions?
  – analyze and simulate with a nonlinear model
• Some nonlinearities to compensate for?
  – saturations, valves etc.
• Is the system generically nonlinear? E.g., ẋ = xu

Question 3

Can a system be proven stable with the Small Gain Theorem and unstable with the Circle Criterion?

• No, the Small Gain Theorem, Passivity Theorem and Circle Criterion all provide only sufficient conditions for stability.
• But, if one method does not prove stability, another one may.
• Since they do not provide necessary conditions for stability, none of them can be used to prove instability.

Question 4

Can you review the circle criterion? What about k1 < 0 < k2?

The Circle Criterion

Theorem: Consider a feedback loop with y = Gu and u = −f(y). Assume G(s) is stable and that

k1 ≤ f(y)/y ≤ k2

If the Nyquist curve of G(s) stays on the correct side of the circle defined by the points −1/k1 and −1/k2, then the closed-loop system is BIBO stable.

The different cases:
1. 0 < k1 < k2: Stay outside the circle
2. 0 = k1 < k2: Stay to the right of the line Re s = −1/k2
3. k1 < 0 < k2: Stay inside the circle

Other cases: Multiply f and G by −1.
Only Cases 1 and 2 were studied in the lectures, and only stable G.
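Case 1 is easy to check numerically for a concrete plant: compute the circle through −1/k1 and −1/k2 (centered on the real axis) and verify that the Nyquist curve keeps its distance from the center larger than the radius. The plant G(s) = 1/(s+1)² and the sector [0.1, 2] are illustrative choices:

```python
import numpy as np

# Sector bounds 0 < k1 < k2 (case 1): Nyquist curve must stay
# OUTSIDE the circle through -1/k1 and -1/k2 on the real axis.
k1, k2 = 0.1, 2.0
center = -(1 / k1 + 1 / k2) / 2   # midpoint of [-1/k1, -1/k2]
radius = (1 / k1 - 1 / k2) / 2

w = np.logspace(-3, 3, 20000)
G = 1.0 / (1j * w + 1) ** 2       # frequency response G(iw)
min_dist = np.min(np.abs(G - center))

print(min_dist > radius)  # True -> sector condition satisfied
```

Here Re G(iω) ≥ −1/8 for all ω, so the curve stays well away from the circle and the criterion guarantees BIBO stability for any static nonlinearity in the sector [0.1, 2].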

Question 5

Please repeat antiwindup.

Tracking PID

(Block diagrams (a) and (b): a PID controller with proportional gain K, integral part K/(Ti·s) and derivative part K·Td·s acting on −y. The actuator saturates the controller output v into u, and the tracking signal es = u − v is fed back to the integrator through the gain 1/Tt. In (a) the actuator output is measured; in (b) an actuator model is used.)
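The effect of tracking antiwindup is easy to see in simulation. A minimal sketch for a PI controller and an integrator plant (the gains, the saturation limit and the setpoint are illustrative choices, not from the lecture):

```python
# PI control of an integrator plant ydot = sat(v), with tracking
# antiwindup: Idot = (K/Ti)*e + (1/Tt)*(u - v), where u = sat(v)
# and es = u - v is the tracking signal from the diagram.
K, Ti, Tt, umax = 2.0, 1.0, 0.5, 1.0
dt, r = 1e-3, 5.0

def simulate(with_aw):
    y, I, ymax = 0.0, 0.0, 0.0
    for _ in range(15000):               # 15 s
        e = r - y
        v = K * e + I                    # unsaturated controller output
        u = max(-umax, min(umax, v))     # actuator saturation
        es = (u - v) / Tt if with_aw else 0.0
        I += dt * (K / Ti * e + es)
        y += dt * u                      # integrator plant
        ymax = max(ymax, y)
    return y, ymax

y_aw, ymax_aw = simulate(True)
y_no, ymax_no = simulate(False)
print(ymax_aw, ymax_no)  # windup gives the much larger overshoot
```

Without the tracking term the integral winds up during the long saturation phase and the output overshoots heavily; with it, v tracks the saturated u and the loop settles cleanly.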

Antiwindup—General State-Space Model

(Block diagrams: the controller ẋc = F·xc + G·y, u = C·xc + D·y is rewritten in observer form, with the saturated signal u = sat(v) fed back into the integrator s⁻¹, the state matrix replaced by F − KC and the input matrix by G − KD.)

Choose K such that F − KC has stable eigenvalues.

Question 6

Please repeat Lyapunov theory.

Stability Definitions

An equilibrium point x = 0 of ẋ = f(x) is

locally stable, if for every R > 0 there exists r > 0, such that
‖x(0)‖ < r ⇒ ‖x(t)‖ < R, t ≥ 0

locally asymptotically stable, if locally stable and
‖x(0)‖ < r ⇒ lim_{t→∞} x(t) = 0

globally asymptotically stable, if asymptotically stable for all x(0) ∈ Rⁿ.

Lyapunov Theorem for Local Stability

Theorem: Let ẋ = f(x), f(0) = 0, and 0 ∈ Ω ⊂ Rⁿ. Assume that V : Ω → R is a C¹ function. If
• V(0) = 0
• V(x) > 0, for all x ∈ Ω, x ≠ 0
• V̇(x) ≤ 0 along all trajectories in Ω
then x = 0 is locally stable. Furthermore, if
• V̇(x) < 0 for all x ∈ Ω, x ≠ 0
then x = 0 is locally asymptotically stable.

LaSalle’s Theorem for Global Asymptotic


Stability
Lyapunov Theorem for Global Stability
Theorem: Let ẋ = f (x) and f (0) = 0. If there exists a C1 function
Theorem Let ẋ = f (x) and f (0) = 0. Assume that V : Rn → R V : Rn → R such that
is a C 1 function. If
(1) V (0) = 0
• V (0) = 0
(2) V (x) > 0 for all x )= 0
• V (x) > 0, for all x )= 0
(3) V̇ (x) ≤ 0 for all x
• V̇ (x) < 0 for all x )= 0
(4) V (x) → ∞ as #x# → ∞
• V (x) → ∞ as #x# → ∞
(5) The only solution of ẋ = f (x) such that V̇ (x) = 0 is x(t) = 0
then x = 0 is globally asymptotically stable. for all t

then x = 0 is globally asymptotically stable.

Lecture 14 15 Lecture 14 16
EL2620 2011 EL2620 2011

LaSalle’s Invariant Set Theorem


Theorem Let Ω ∈ Rn be a bounded and closed set that is invariant
with respect to Relation to Poincare-Bendixson Theorem
ẋ = f (x).
Poincare-Bendixson Any orbit of a continuous 2nd order system that
Let V : Rn → R be a C 1 function such that V̇ (x) ≤ 0 for x ∈ Ω. stays in a compact region of the phase plane approaches its ω -limit
Let E be the set of points in Ω where V̇ (x) = 0. If M is the largest set, which is either a fixed point, a periodic orbit, or several fixed
invariant set in E , then every solution with x(0) ∈ Ω approaches M points connected through homoclinic or heteroclinic orbits
as t → ∞

Remark : a compact set (bounded and closed) is obtained if we e.g., In particular, if the compact region does not contain any fixed point
consider then the ω -limit set is a limit cycle
Ω = {x ∈ Rn |V (x) ≤ c}
and V is a positive definite function

Lecture 14 17 Lecture 14 18

EL2620 2011 EL2620 2011

Example: Pendulum with Friction

ẋ1 = x2, ẋ2 = −(g/l) sin x1 − (k/m) x2

V(x) = (g/l)(1 − cos x1) + x2²/2 ⇒ V̇ = −(k/m) x2²

• We cannot prove global asymptotic stability; why?
• The set E = {(x1, x2) | V̇ = 0} is E = {(x1, x2) | x2 = 0}
• The invariant points in E are given by ẋ1 = x2 = 0 and ẋ2 = 0. Thus, the largest invariant set in E is
  M = {(x1, x2) | x1 = kπ, x2 = 0}
• The domain is compact if we consider
  Ω = {(x1, x2) ∈ R² | V(x) ≤ c}
• If we, e.g., consider Ω : x1² + x2² ≤ 1, then M = {(x1, x2) | x1 = 0, x2 = 0} and we have proven asymptotic stability of the origin.
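The LaSalle argument can be illustrated numerically: along a trajectory starting inside the unit disk, V is nonincreasing and the state tends to the origin. A sketch with the illustrative parameter values g/l = 1 and k/m = 0.5:

```python
import numpy as np
from scipy.integrate import solve_ivp

g_l, k_m = 1.0, 0.5  # g/l and k/m, illustrative values

def f(t, x):
    # Pendulum with friction: x1' = x2, x2' = -(g/l) sin x1 - (k/m) x2
    return [x[1], -g_l * np.sin(x[0]) - k_m * x[1]]

def V(x1, x2):
    # Energy-like Lyapunov function from the lecture
    return g_l * (1 - np.cos(x1)) + 0.5 * x2 ** 2

sol = solve_ivp(f, [0, 40], [0.5, 0.5], max_step=0.01,
                rtol=1e-8, atol=1e-10)
Vs = V(sol.y[0], sol.y[1])

# V decreases along the trajectory, and the state ends near the origin
print(np.max(np.diff(Vs)), np.linalg.norm(sol.y[:, -1]))
```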

Question 7

Please repeat the most important facts about sliding modes.

There are 3 essential parts you need to understand:
1. The sliding manifold S
2. The sliding control
3. The equivalent control

Step 1. The Sliding Manifold S

Aim: we want to stabilize the equilibrium of the dynamic system

ẋ = f(x) + g(x)u, x ∈ Rⁿ, u ∈ R¹

Idea: use u to force the system onto a sliding manifold

S = {x ∈ Rⁿ | σ(x) = 0}, σ ∈ R¹

of dimension n − 1 in finite time, and make S invariant.

If x ∈ R², then S is R¹, i.e., a curve in the state plane (phase plane).

Example

ẋ1 = x2(t)
ẋ2 = x1(t)x2(t) + u(t)

Choose S for the desired behavior, e.g.,

σ(x) = a·x1 + x2 = 0 ⇒ ẋ1 = −a·x1(t)

Choose a large a: fast convergence along the sliding manifold.

Step 2. The Sliding Controller

Use Lyapunov ideas to design u(x) such that S is an attracting invariant set.

The Lyapunov function V(x) = σ²/2 yields V̇ = σσ̇.

For the 2nd-order system ẋ1 = x2, ẋ2 = f(x) + g(x)u and σ = x1 + x2 we get

V̇ = σ(x2 + f(x) + g(x)u) < 0 ⇐ u = −(f(x) + x2 + sgn(σ))/g(x)

Example: f(x) = x1x2, g(x) = 1, σ = x1 + x2 yields

u = −x1x2 − x2 − sgn(x1 + x2)
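The sliding controller from the example can be simulated directly. With this u, σ̇ = −sgn(σ), so from σ(0) = 1 the manifold is reached in exactly 1 s, after which ẋ1 = −x1 on S. The initial state and Euler stepping are implementation choices:

```python
import numpy as np

# System x1' = x2, x2' = x1*x2 + u with sigma = x1 + x2 and the
# sliding controller u = -x1*x2 - x2 - sgn(sigma) from the lecture
dt = 1e-4
x = np.array([2.0, -1.0])   # sigma(0) = 1
t_reach = None
for k in range(100000):     # 10 s
    s = x[0] + x[1]
    u = -x[0] * x[1] - x[1] - np.sign(s)
    x = x + dt * np.array([x[1], x[0] * x[1] + u])
    if t_reach is None and abs(x[0] + x[1]) < 1e-3:
        t_reach = k * dt    # sigma = 0 reached in finite time

print(t_reach, x)  # on S: x1' = -x1, so the state slides to the origin
```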

Step 3. The Equivalent Control

When the trajectory reaches the sliding manifold, i.e., x ∈ S, u will chatter (high-frequency switching). However, an equivalent control ueq(t) that keeps x(t) on S can be computed from σ̇ = 0 when σ = 0.

Example:

σ̇ = ẋ1 + ẋ2 = x2 + x1x2 + ueq = 0 ⇒ ueq = −x2 − x1x2

Thus, the sliding controller takes the system to the sliding manifold S in finite time, and the equivalent control keeps it on S.

Note!

In previous years it has often been assumed that the sliding mode control is always of the form

u = −sgn(σ)

This is OK, but it is not completely general (see the example).

Question 8

Can you repeat backstepping?

Backstepping Design

We are concerned with finding a stabilizing control u(x) for the system

ẋ = f(x, u)

General Lyapunov control design: determine a Control Lyapunov Function V(x) and a control u(x) so that

V(x) > 0, V̇(x) < 0 ∀x ∈ Rⁿ

In this course we only consider f(x, u) with a special structure, namely strict feedback structure.

Strict Feedback Systems

ẋ1 = f1(x1) + g1(x1)x2
ẋ2 = f2(x1, x2) + g2(x1, x2)x3
ẋ3 = f3(x1, x2, x3) + g3(x1, x2, x3)x4
...
ẋn = fn(x1, ..., xn) + gn(x1, ..., xn)u

where gk ≠ 0.

Note: ẋ1, ..., ẋk do not depend on xk+2, ..., xn.

The Backstepping Idea

Given a Control Lyapunov Function V1(x1), with corresponding control u = φ1(x1), for the system

ẋ1 = f1(x1) + g1(x1)u

find a Control Lyapunov Function V2(x1, x2), with corresponding control u = φ2(x1, x2), for the system

ẋ1 = f1(x1) + g1(x1)x2
ẋ2 = f2(x1, x2) + u

The Backstepping Result

Let V1(x1) be a Control Lyapunov Function for the system

ẋ1 = f(x1) + g(x1)u

with corresponding controller u = φ(x1). Then

V2(x1, x2) = V1(x1) + (x2 − φ(x1))²/2

is a Control Lyapunov Function for the system

ẋ1 = f(x1) + g(x1)x2
ẋ2 = f2(x1, x2) + u

with corresponding controller

u(x) = (dφ/dx1)·(f(x1) + g(x1)x2) − (dV1/dx1)·g(x1) − (x2 − φ(x1)) − f2(x1, x2)

Question 9

Repeat backlash compensation.
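The backstepping controller formula can be exercised on a concrete strict feedback system. The example system ẋ1 = x1² + x2, ẋ2 = u (so f = x1², g = 1, f2 = 0) and the virtual control φ(x1) = −x1² − x1 with V1 = x1²/2 are illustrative choices; for them one can check by hand that V̇2 = −x1² − z² with z = x2 − φ(x1):

```python
import numpy as np
from scipy.integrate import solve_ivp

# phi(x1) = -x1^2 - x1 gives x1' = -x1 for the first subsystem,
# and dphi/dx1 = -2*x1 - 1.  Backstepping controller:
#   u = dphi/dx1*(x1^2 + x2) - x1 - (x2 - phi(x1))
def u_bs(x1, x2):
    z = x2 - (-x1 ** 2 - x1)  # z = x2 - phi(x1)
    return (-2 * x1 - 1) * (x1 ** 2 + x2) - x1 - z

def f(t, x):
    return [x[0] ** 2 + x[1], u_bs(*x)]

sol = solve_ivp(f, [0, 10], [1.0, 1.0], max_step=0.01)
print(np.linalg.norm(sol.y[:, -1]))  # state driven to the origin
```

Since V2 = (x1² + z²)/2 and V̇2 = −2V2 here, the convergence is exponential.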

Backlash Compensation

Possible approaches:
• Deadzone
• Linear controller design
• Backlash inverse

Linear controller design: phase lead compensation. Choose the compensation F(s) such that the intersection with the describing function is removed, e.g.

F(s) = K (1 + sT2)/(1 + sT1) with T1 = 0.5, T2 = 2.0

(Block diagram: θref drives the lead compensator F(s) and the motor dynamics 1/(s(1 + sT)); the backlash maps θin to θout. Figures: Nyquist diagrams and time responses of u and y, with and without the lead filter.)

Oscillation removed!

Question 10

Can you repeat linearization through high gain feedback?

Inverting Nonlinearities

Compensation of a static nonlinearity through inversion:

(Block diagram: the controller F(s) is followed by the approximate inverse f̂⁻¹(·), which feeds the nonlinearity f(·) and the plant G(s); the loop is closed with negative feedback.)

This should be combined with feedback, as in the figure!

Remark: How to Obtain f⁻¹ from f using Feedback

(Block diagram: v and the fed-back signal f(u) form ê = v − f(u), which drives an integrator with gain k, i.e. u̇ = k·ê.)

ê = v − f(u)

If k > 0 is large and df/du > 0, then ê → 0 and

0 = v − f(u) ⇔ f(u) = v ⇔ u = f⁻¹(v)

Question 11

What should we know about input–output stability?

You should understand and be able to derive/apply

• The system gain γ(S) = sup_{u∈L2} ‖y‖₂/‖u‖₂
• BIBO stability
• The Small Gain Theorem
• The Circle Criterion
• The Passivity Theorem
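The feedback realization of f⁻¹ is a one-line simulation. The nonlinearity f(u) = u + u³ (strictly increasing, so df/du > 0) and the gain and step sizes are illustrative choices:

```python
# Integrate udot = k*(v - f(u)) with large k > 0; for monotone f
# the integrator settles where f(u) = v, i.e. u = f^{-1}(v).
f = lambda u: u + u ** 3
k, dt, v = 50.0, 1e-4, 2.0

u = 0.0
for _ in range(20000):  # 2 s of simulated time
    u += dt * k * (v - f(u))

print(u, f(u))  # u -> f^{-1}(2) = 1, so f(u) -> 2
```

The equilibrium satisfies f(u) = v exactly, and for f'(u*) = 4 the error decays with time constant 1/(k·f') = 5 ms, so the inversion is effectively instantaneous relative to the plant dynamics.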

Question 12

What about describing functions?

Idea Behind the Describing Function Method

(Feedback loop: r, e → static nonlinearity N.L. → u → G(s) → y.)

e(t) = A sin ωt gives

u(t) = Σ_{n=1}^∞ √(an² + bn²) sin[nωt + arctan(an/bn)]

If |G(inω)| ≪ |G(iω)| for n ≥ 2, then the n = 1 term suffices, so that

y(t) ≈ |G(iω)| √(a1² + b1²) sin[ωt + arctan(a1/b1) + arg G(iω)]

Definition of Describing Function

(The nonlinearity N.L. maps e(t) = A sin ωt to u(t); the describing function N(A, ω) maps e(t) to the first harmonic u1(t).)

The describing function is

N(A, ω) = (b1(ω) + i·a1(ω))/A

If G is low pass and a0 = 0, then

u1(t) = |N(A, ω)|·A·sin[ωt + arg N(A, ω)] ≈ u(t)

Existence of Periodic Solutions

(Feedback loop with r = 0: e → f(·) → u → G(s) → y, with e = −y.)

y = G(iω)u = −G(iω)N(A)y ⇒ G(iω) = −1/N(A)

The intersections of the curves G(iω) and −1/N(A) give ω and A for a possible periodic solution.
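The definition N(A, ω) = (b1 + i·a1)/A can be evaluated numerically by computing the first Fourier coefficients of u(t). For an ideal relay u = H·sgn(e) the standard analytic result is N(A) = 4H/(πA); a sketch that recovers it (H and A are illustrative values):

```python
import numpy as np

# Describing function of an ideal relay u = H*sgn(e), e = A sin(theta)
H, A = 1.0, 2.0
theta = np.linspace(0, 2 * np.pi, 100000, endpoint=False)
u = H * np.sign(A * np.sin(theta))

b1 = np.mean(2 * u * np.sin(theta))  # in-phase first harmonic
a1 = np.mean(2 * u * np.cos(theta))  # quadrature part (zero for a relay)
N = (b1 + 1j * a1) / A

print(N, 4 * H / (np.pi * A))  # both approximately 0.6366
```

Since N(A) here is real and decreasing in A, −1/N(A) traces the negative real axis, which is why a relay loop with a low-pass G often predicts a limit cycle where the Nyquist curve crosses that axis.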

More Courses in Control

• EL2450 Hybrid and Embedded Control Systems, period 3
• EL2745 Principles of Wireless Sensor Networks, period 3
• EL2520 Control Theory and Practice, Advanced Course, period 4
• EL1820 Modelling of Dynamic Systems, period 1
• EL2421 Project Course in Automatic Control, period 2

EL2520 Control Theory and Practice, Advanced Course

Aim: provide an introduction to principles and methods in advanced control, especially multivariable feedback systems.

• Period 4, 7.5 credits
• Multivariable control:
  – Linear multivariable systems
  – Robustness and performance
  – Design of multivariable controllers: LQG, H∞ optimization
  – Real-time optimization: Model Predictive Control (MPC)
• Lectures, exercises, labs, computer exercises

Contact: Mikael Johansson mikaelj@kth.se

EL2450 Hybrid and Embedded Control Systems

Aim: a course on analysis, design and implementation of control algorithms in networked and embedded systems.

• Period 3, 7.5 credits
• How is control implemented in reality:
  – Computer implementation of control algorithms
  – Scheduling of real-time software
  – Control over communication networks
• Lectures, exercises, homework, computer exercises

Contact: Dimos Dimarogonas dimos@kth.se

EL2745 Principles of Wireless Sensor Networks

Aim: provide the participants with a basic knowledge of wireless sensor networks (WSN).

• Period 3, 7.5 credits
• THE INTERNET OF THINGS
  – Essential tools within communication, control, optimization and signal processing needed to cope with WSN
  – Design of practical WSNs
  – Research topics in WSNs

Contact: Carlo Fischione carlofi@kth.se

EL1820 Modelling of Dynamic Systems

Aim: teach how to systematically build mathematical models of technical systems from physical laws and from measured signals.

• Period 1, 6 credits
• Model dynamical systems from
  – physics: Lagrangian mechanics, electrical circuits etc.
  – experiments: parametric identification, frequency response
• Computer tools for modeling, identification, and simulation
• Lectures, exercises, labs, computer exercises

Contact: Håkan Hjalmarsson hjalmars@kth.se

EL2421 Project Course in Automatic Control

Aim: provide practical knowledge about modeling, analysis, design, and implementation of control systems. Give some experience in project management and presentation.

• Period 4, 12 credits
• "From start to goal...": apply the theory from other courses
• Team work
• Preparation for Master thesis project
• Project management (lecturers from industry)
• No regular lectures or labs

Contact: Jonas Mårtensson jonas1@kth.se

Doing a Master Thesis Project at the Automatic Control Lab

◦ Theory and practice
◦ Cross-disciplinary
◦ The research edge
◦ Collaboration with leading industry and universities
◦ Get insight into research and development

Hints:
• The topic and the results of your thesis are up to you
• Discuss with professors, lecturers, PhD and MS students
• Check old projects

Doing a PhD Thesis Project in Automatic Control

• Intellectual stimuli
• Get paid for studying
• International collaborations and travel
• Competitive
• World-wide job market
• Research (50%), courses (30%), teaching (20%), fun (100%)
• 4-5 years to a PhD (licentiate after 2-3 years)
