
Nonlinear Systems
Nonlinear control problem

Dr. V. Sankaranarayanan
Associate Professor
Department of EEE
National Institute of Technology
Tiruchirapalli

DRDL-Hyderabad Advanced Control Engineering


Outline

1 Stability-review
  Stability definitions
  Stability theorems

2 Nonlinear control problem
  Problem formulation
  Controllability
  Motivation

Nonlinear system

Consider the following nonlinear system

ẋ = f (x)

where x ∈ Rn, f : D → Rn is a smooth vector field in x on D, and D ⊂ Rn is a domain that contains the origin x = 0. Let xe ∈ D be an equilibrium point, that is, f(xe) = 0. Without loss of generality, we assume xe = 0, the origin.


Lyapunov stability

Definition
The equilibrium point x = 0 is said to be stable if, for each ε > 0, there is a δ = δ(ε) > 0 such that

||x(0)|| < δ =⇒ ||x(t)|| < ε, ∀ t ≥ 0.


Attractivity

Definition
The equilibrium point x = 0 is said to be attractive if there is an η > 0 such that

||x(0)|| < η =⇒ ||x(t)|| → 0 as t → ∞.


Asymptotic stability

Definition
The equilibrium point x = 0 is said to be asymptotically stable if it is stable and
attractive.


Exponential stability

Definition
The equilibrium point x = 0 is said to be exponentially stable if there exist
positive constants c, k and λ such that

||x(t)|| ≤ k||x(0)||e−λ t , ∀ ||x(0)|| < c.


Lyapunov theorem

Consider the following nonlinear system

ẋ = f (x)

where x ∈ Rn, f : D → Rn is locally Lipschitz in x on D, and D ⊂ Rn is a domain that contains the origin x = 0. Let xe ∈ D be an equilibrium point, that is, f(xe) = 0. Without loss of generality, we assume xe = 0, the origin.

Theorem

Let V : D → R be a continuously differentiable function such that V is positive definite on D and V̇ ≤ 0 in D. Then x = 0 is Lyapunov stable. Further, if V̇ < 0 in D \ {0}, then x = 0 is asymptotically stable.


Example I

Consider the simple unforced pendulum without friction. The dynamic equations can be written as

ẋ1 = x2
ẋ2 = − sin x1

Consider the following Lyapunov candidate function (positive definite)

V(x) = (1 − cos x1) + x2²/2

Its time derivative along the trajectories of the system is

V̇ = x2 sin x1 − x2 sin x1
= 0

Since V̇ is negative semidefinite, the pendulum is Lyapunov stable around the equilibrium point (0, 0).
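As a quick numerical check, the sketch below (Python with NumPy/SciPy; the initial condition is an arbitrary choice) integrates the frictionless pendulum and verifies that V stays constant along the trajectory.

import numpy as np
from scipy.integrate import solve_ivp

# Frictionless pendulum: x1' = x2, x2' = -sin(x1)
def pendulum(t, x):
    return [x[1], -np.sin(x[0])]

# Lyapunov candidate V(x) = (1 - cos x1) + x2^2 / 2
def V(x1, x2):
    return (1.0 - np.cos(x1)) + 0.5 * x2**2

sol = solve_ivp(pendulum, (0.0, 20.0), [1.0, 0.0], max_step=0.01)
v = V(sol.y[0], sol.y[1])
print(v.max() - v.min())   # ~0 up to integration error: V is conserved along trajectories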


Example II

Consider the simple unforced pendulum with viscous friction. The dynamic equations can be written as

ẋ1 = x2
ẋ2 = − sin x1 − x2

Consider the following Lyapunov candidate function (positive definite)

V = (1 − cos x1) + x2²/2

Its time derivative along the trajectories of the system is

V̇ = x2 sin x1 − x2 sin x1 − x2²
  = −x2² ≤ 0

Hence the pendulum is Lyapunov stable around the equilibrium point (0, 0).
But the pendulum with friction is in fact asymptotically stable!
So the Lyapunov candidate function selected for this problem is not adequate to conclude asymptotic stability: the stability theorem gives only a sufficient condition.


La Salle’s invariance principle

Theorem
Consider the system ẋ = f (x) and let Ω be a compact set that is invariant with
respect to this system. Let V : D → R be continuously differentiable such that
V̇ ≤ 0 in Ω. Let E be the set of points in Ω such that V̇ = 0. Further let M be the
largest invariant set in E. Then every solution starting in Ω converges to M as
t → ∞.

Corollary
Consider the system ẋ = f(x) and let x = 0 be the equilibrium point. Let V : D → R be a continuously differentiable positive definite function with V̇ ≤ 0 on D. Let S = {x ∈ D : V̇(x) = 0}. If no solution of ẋ = f(x) can stay identically in S other than the trivial solution x(t) ≡ 0, then the origin is asymptotically stable.

Consider the pendulum system with friction

ẋ1 = x2
ẋ2 = − sin x1 − x2


Consider the following Lyapunov function (positive definite)

V = (1 − cos x1) + x2²/2,   x1 ∈ (−π, π)

V̇ = x2 sin x1 − x2 sin x1 − x2²
  = −x2² ≤ 0

Now the set S = {x : x2 = 0}. On this set the dynamic equations become

ẋ1 = 0
ẋ2 = − sin x1

Further, staying identically in S requires
ẋ2 = 0 =⇒ x1 = 0
The only solution that can stay identically in the set S is the trivial solution (0, 0) and hence, by La Salle’s invariance principle, the pendulum system is asymptotically stable around the point (0, 0).
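The following sketch (Python; the initial condition is an arbitrary choice) illustrates the conclusion numerically: for the damped pendulum the state converges to (0, 0) and V decreases along the trajectory.

import numpy as np
from scipy.integrate import solve_ivp

# Damped pendulum: x1' = x2, x2' = -sin(x1) - x2
def pendulum(t, x):
    return [x[1], -np.sin(x[0]) - x[1]]

V = lambda x1, x2: (1.0 - np.cos(x1)) + 0.5 * x2**2

sol = solve_ivp(pendulum, (0.0, 40.0), [1.0, 0.0], max_step=0.01)
print(sol.y[:, -1])                                   # state approaches (0, 0)
print(V(sol.y[0, 0], sol.y[1, 0]),
      V(sol.y[0, -1], sol.y[1, -1]))                  # V decreases toward 0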


Nonlinear control

Nonlinear control system

ẋ = f (x, u)

The control objective is to do one or more of the following:

Stabilize the system around one of its equilibrium points or operating points
Improve the performance of the system
Ensure some optimality if possible


Controllability

Definition
The following nonlinear system
ẋ = f (x, u)
is said to be controllable if, for each x0, xd ∈ D ⊂ Rn and each T > 0, there exists an admissible input u(t) with the property that if x(t) is the solution of the initial value problem with x(0) = x0, then x(T) = xd.


Linearization

Consider the following nonlinear system

ẋ = f (x, u)

Linearizing this system around an operating point (xe, ue) gives

f(xe + ∆x, ue + ∆u) ≈ f(xe, ue) + (∂f/∂x)|(x=xe, u=ue) ∆x + (∂f/∂u)|(x=xe, u=ue) ∆u

Since
d(xe + ∆x)/dt = dxe/dt + d∆x/dt
and dxe/dt = f(xe, ue) at the operating point, subtracting gives

d∆x/dt = (∂f/∂x)|(x=xe, u=ue) ∆x + (∂f/∂u)|(x=xe, u=ue) ∆u
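In practice A = ∂f/∂x and B = ∂f/∂u are often obtained numerically. The sketch below (Python/NumPy; the forward-difference helper and the torque-driven pendulum f(x, u) = [x2, −sin x1 + u] are only illustrative choices) shows one way to do this.

import numpy as np

def numerical_jacobians(f, xe, ue, eps=1e-6):
    """Forward-difference Jacobians A = df/dx and B = df/du at (xe, ue)."""
    n, m = len(xe), len(ue)
    f0 = f(xe, ue)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(xe + dx, ue) - f0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(xe, ue + du) - f0) / eps
    return A, B

f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])   # pendulum with torque input
A, B = numerical_jacobians(f, np.zeros(2), np.zeros(1))
print(A)   # approximately [[0, 1], [-1, 0]]
print(B)   # approximately [[0], [1]]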


Linear system representation

Define

A = (∂f/∂x)|(x=xe, u=ue)   and   B = (∂f/∂u)|(x=xe, u=ue)
then the linear state space model can be written as

∆ẋ = A∆x + B∆u

The above linear system is controllable if and only if the controllability matrix

C = [B  AB  A²B  . . .  A^(n−1)B]

has rank n.
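A small sketch of the rank test (Python/NumPy; the double integrator ẋ1 = x2, ẋ2 = u is used only as an illustration):

import numpy as np

def controllability_matrix(A, B):
    """Build C = [B  AB  A^2 B ... A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
C = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C) == A.shape[0])   # True: the pair (A, B) is controllable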


Example 1

Consider the following nonlinear system

ẋ = ηx³ + u,   y = x

The control objective is to drive the output to zero. Linearizing the system around this desired point yields

ẋ = u,   y = x

The simple linear controller

u = −kx,   k > 0

results in

ẋ = ηx³ − kx = ηx(x² − k/η)

The closed-loop system (for η > 0) is asymptotically stable with the domain of attraction −√(k/η) < x < √(k/η). But global asymptotic stability cannot be achieved even as the value of k → ∞.
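The finite domain of attraction can be seen numerically with the sketch below (Python; η, k and the initial conditions are arbitrary choices, so the boundary sits at |x| = 1):

import numpy as np
from scipy.integrate import solve_ivp

eta, k = 1.0, 1.0                                  # boundary at sqrt(k/eta) = 1
closed_loop = lambda t, x: eta * x**3 - k * x      # x' = eta*x^3 + u with u = -k*x

blow_up = lambda t, x: abs(x[0]) - 1e3             # stop the integration if |x| explodes
blow_up.terminal = True

for x0 in (0.9, 1.1):                              # just inside / just outside the boundary
    sol = solve_ivp(closed_loop, (0.0, 20.0), [x0], events=blow_up, max_step=0.01)
    print(x0, "->", sol.y[0, -1])                  # ~0 for x0 = 0.9, diverges for x0 = 1.1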

But the simple nonlinear feedback

u = −ηx³ − kx

results in the linear closed-loop system

ẋ = −kx

and ensures asymptotic stability in the large (global asymptotic stability).


Warning!
This feedback controller requires exact knowledge of both the parameter η and the nonlinearity x³.
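A short continuation of the previous sketch (same arbitrary η and k) showing that the cancelling feedback recovers convergence even from an initial condition far outside |x| < √(k/η):

import numpy as np
from scipy.integrate import solve_ivp

eta, k = 1.0, 1.0
u = lambda x: -eta * x**3 - k * x                  # nonlinear feedback cancels eta*x^3
closed_loop = lambda t, x: eta * x**3 + u(x)       # reduces to x' = -k*x

sol = solve_ivp(closed_loop, (0.0, 20.0), [5.0], max_step=0.01)
print(sol.y[0, -1])                                # ~0: convergence from a far-away initial state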


Example 2

Consider the following nonlinear system

ẋ1 = x2
ẋ2 = ηx2³ + u

A linear controller

u = −k1x1 − k2x2,   k1, k2 > 0

can be designed based on the linear approximation. But global asymptotic stability cannot be ensured if η > 0. Moreover, for η > 0 the closed-loop system with the linear controller possesses an unstable limit cycle. The closed-loop dynamics are

ẋ1 = x2
ẋ2 = ηx2³ − k1x1 − k2x2

which can be rewritten as

ẋ1 = x2
ẋ2 = −k1x1 − k2x2(1 − (η/k2)x2²)


For values outside the closed curve, where (1 − (η/k2)x2²) < 0, the closed-loop system behaves like

ẋ1 = x2
ẋ2 = −k1x1 + k2x2

which is unstable. So trajectories starting outside the closed orbit diverge to infinity.
For values inside the closed curve, where (1 − (η/k2)x2²) > 0, the closed-loop system behaves like

ẋ1 = x2
ẋ2 = −k1x1 − k2x2

and the system is stable with respect to (0, 0).
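A numerical sketch (Python; the gains k1 = k2 = 1, η = 1 and the initial conditions are arbitrary) illustrating this behaviour: a trajectory starting near the origin converges, while one starting far away diverges under the linear controller.

import numpy as np
from scipy.integrate import solve_ivp

eta, k1, k2 = 1.0, 1.0, 1.0

def closed_loop(t, x):
    # x1' = x2, x2' = eta*x2^3 + u with u = -k1*x1 - k2*x2
    return [x[1], eta * x[1]**3 - k1 * x[0] - k2 * x[1]]

blow_up = lambda t, x: np.hypot(x[0], x[1]) - 1e3
blow_up.terminal = True

for x0 in ([0.2, 0.2], [0.0, 2.0]):                # near the origin / far from it
    sol = solve_ivp(closed_loop, (0.0, 30.0), x0, events=blow_up, max_step=0.01)
    print(x0, "->", sol.y[:, -1])                  # first converges to ~(0, 0), second diverges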


Car-like mobile robot

Consider the car-like mobile robot.


The kinematic model can be written as

ẋ = v cos θ
ẏ = v sin θ
θ̇ = ω

The control objective is to stabilize the mobile robot at any given position and orientation (xd, yd, θd) using the two control inputs v and ω.
Let us assume that we want to stabilize around (0, 0, 0).
Linearizing the system around this equilibrium point, we have

ẋ = v
ẏ = 0
θ̇ = ω

The above linearized model is not controllable.

But the nonlinear system is obviously controllable (think of the familiar car parking problem), and it can be stabilized using nonlinear feedback!
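A short check (Python/NumPy sketch, reusing the rank test from the controllability section) that the linearization about (0, 0, 0) indeed fails the rank condition:

import numpy as np

# Linearization about (x, y, theta) = (0, 0, 0) with inputs (v, w):
# x' = v, y' = 0, theta' = w  =>  A = 0 and B as below
A = np.zeros((3, 3))
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])

C = np.hstack([B, A @ B, A @ A @ B])               # [B  AB  A^2 B]
print(np.linalg.matrix_rank(C))                    # 2 < 3: the linearized model is not controllable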
