
Chapter 1: Nonlinear Systems

N. Baiboun
September 27, 2021

1 Introduction to nonlinear systems


1.1 Introduction
• Most systems are nonlinear: their dynamics are described by nonlinear differential equations, so the nonlinearities have to be dealt with
• A linear approximation is fine for some important classes of systems: when well-known, robust design methods are wanted and very high performance is not needed

One type of nonlinearity arises when modelling the system.

Example 1.1
Let’s analyse the following system:
v̇ + |v|v = u
for u = 1 and u = 10

[1]: %matplotlib inline


import matplotlib.pyplot as plt
plt.style.use('../my_params.mplstyle')
import examples

[2]: examples.example1(u=1)

[3]: examples.example1(u=10)

The plots show that the static gain is different for different inputs. This can also be proved directly from the differential equation: once the transient has died out, the derivative is zero, so

0 + |v|v = u (1)

⇔ v = sign(u)√|u| (2)

For u > 0 the steady-state value is therefore v = √u (v = 1 for u = 1, v ≈ 3.16 for u = 10), so the static gain v/u is not constant.
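To make this concrete, here is a minimal simulation sketch. It assumes scipy is available and is only meant to mimic what the course's examples.example1 helper presumably illustrates; the helper's actual implementation is not shown in the notes.

# Hedged sketch: integrate v' = u - |v|v and compare the final value with sqrt(u).
import numpy as np
from scipy.integrate import solve_ivp

def steady_state(u, t_end=20.0):
    sol = solve_ivp(lambda t, v: [u - abs(v[0]) * v[0]], (0.0, t_end), [0.0])
    return sol.y[0, -1]

for u in (1, 10):
    print(u, steady_state(u), np.sqrt(u))  # the final value approaches sqrt(u)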

The other type of nonlinearity is what is called a “static nonlinearity”, such as:
• relay
• dead zone
• saturation
• hysteresis
• quantization
Tools used in linear control, such as Bode plots, pole placement, etc., are no longer usable for nonlinear systems, because those mathematical tools are based on linear differential equations.
If we really have to design a controller for a nonlinear system, physical insight is our best asset: a good physical understanding can greatly ease the design of the controller.
Here is a list of common nonlinear behaviours:
• Multiple equilibria
• Finite escape time
• Limit cycles
• Chaos

1.2 Introduction to Lyapunov theory


We define a nonlinear system as follows:

ẋ = f (x, t)

with x the state of the system and t the time


When f explicitly depends on the time t, the system is said to be non-autonomous (or time-varying); otherwise, the system is said to be autonomous (or time-invariant).
This definition holds for both open-loop and closed-loop systems. In the following, we will stick to autonomous systems because of their simplicity.

1.2.1 Equilibrium
A basic definition of an equilibrium point would be: “if the system starts there, it stays there”.

f (xeq ) = 0

Example 1.2
Let’s consider a simple pendulum equation:

M R² θ̈ + b θ̇ + M g R sin θ = 0

with the terms corresponding respectively to inertia, damping and gravity.


Letting x1 = θ and x2 = θ̇, we obtain the following state-space equation

ẋ1 = x2 (3)
ẋ2 = −(b/(M R²)) x2 − (g/R) sin x1 (4)

The equilibrium points are given by ẋ1 = ẋ2 = 0, which leads to the points (2kπ, 0) and (π + 2kπ, 0) for k ∈ Z. Physically, this means that the equilibrium points are the vertical down and up positions of the pendulum.
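As a side check, the equilibria can be recovered symbolically. The snippet below is a sketch using sympy, which is an assumption on our side (the notes do not use it); solve only returns the principal solutions, and the other equilibria follow by adding multiples of 2π.

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
b, M, R, g = sp.symbols('b M R g', positive=True)

eqs = [x2,                                # x1' = 0
       -b/(M*R**2)*x2 - g/R*sp.sin(x1)]   # x2' = 0
print(sp.solve(eqs, [x1, x2]))            # [(0, 0), (pi, 0)]: down and up positions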

1.2.2 Stability
In linear systems, the notion of stability is quite simple. A system can be:
• stable: for any initial state, the system converges exponentially to its equilibrium point
• unstable: for almost any initial state, the state blows up and tends to infinity exponentially
• marginally stable: neither stable nor unstable; this kind of stability is not robust
Since nonlinear systems have more complex behaviour, we have to define several different concepts of stability.
We usually do not talk about the stability of a system but about the stability of its equilibrium points. This is because a nonlinear system can have multiple equilibria or limit cycles, and each region of the state space can then have a different stability behaviour. In the following, we will always take the origin as the equilibrium point (x = 0).

Definitions
The equilibrium point x = 0 is:
• stable if, ∀ R > 0, ∃ r > 0 such that ∥x(0)∥ < r =⇒ ∥x(t)∥ < R, ∀ t ≥ 0
• unstable if it is not stable; that means there exists at least one R > 0 such that, for every r > 0, some trajectory starting with ∥x(0)∥ < r eventually satisfies ∥x(t)∥ ≥ R
• asymptotically stable if it is stable and if ∃ r > 0 such that ∥x(0)∥ < r =⇒ x(t) → 0 as t → ∞
• exponentially stable if ∃ r > 0, α > 0, λ > 0 such that ∥x(0)∥ < r =⇒ ∥x(t)∥ ≤ α∥x(0)∥e^(−λt), ∀ t > 0
All these definitions suppose that ∥x(0)∥ < r, which means that the stability is local. If r can be taken arbitrarily large, that is, if the property holds for any initial state, the equilibrium point is said to be globally stable.

1.2.3 Linearisation
Linearisation aims to approximate the behaviour of a nonlinear system in a small region around its equilibrium point.
Using a Taylor expansion of the nonlinear autonomous system equation around x = 0, we obtain

ẋ = f(0) + (∂f/∂x)|x=0 x + fhot(x)

where f(0) = 0 (since x = 0 is an equilibrium point) and hot stands for higher-order terms. Neglecting the higher-order terms of the Taylor expansion, and replacing f(0) by its value, the equation simplifies to

ẋ = (∂f/∂x)|x=0 x

or, by defining the matrix A = (∂f/∂x)|x=0, we have

ẋ = Ax

which is the well-known state equation of a linear system without input.
In the more general case where the nonlinear system has a control input u, the system is defined by

ẋ = f(x, u)

and the Taylor expansion around x = 0, u = 0 gives

ẋ = f(0, 0) + (∂f/∂x)|(x=0,u=0) x + (∂f/∂u)|(x=0,u=0) u + fhot(x, u)

Again, neglecting the higher-order terms and using f(0, 0) = 0, we obtain

ẋ = (∂f/∂x)|(x=0,u=0) x + (∂f/∂u)|(x=0,u=0) u

or, by defining the matrices A = (∂f/∂x)|(x=0,u=0) and B = (∂f/∂u)|(x=0,u=0), we have

ẋ = Ax + Bu

which is the well-known state equation of a linear system.


It is then easy to obtain the linear approximation of the nonlinear system at a given point by computing the Jacobian matrix A with respect to x and the Jacobian matrix B with respect to u. In practice, it is often easier to obtain the linear approximation by neglecting every term of order higher than 1 in the dynamics.

Example 1.3
Let’s consider the following system

ẋ1 = x2² + x1 cos x2 (5)
ẋ2 = x2 + (x1 + 1)x1 + x1 sin x2 (6)

Method 1: Computing the Jacobian


The Jacobian matrix with respect to x is given by
A = ( ∂f1/∂x1  ∂f1/∂x2
      ∂f2/∂x1  ∂f2/∂x2 )

Computing each derivative, we have

∂f1/∂x1 = cos x2 (7)
∂f1/∂x2 = 2x2 − x1 sin x2 (8)
∂f2/∂x1 = 2x1 + 1 + sin x2 (9)
∂f2/∂x2 = 1 + x1 cos x2 (10)

By evaluating these expressions for x = 0 we obtain the system

ẋ = ( 1  0
      1  1 ) x

Method 2: Neglecting higher order terms


We start by using the small-angle approximations cos x2 ≈ 1 and sin x2 ≈ x2, valid for small x2. We obtain the following system

ẋ1 = x2² + x1 (11)
ẋ2 = x2 + (x1 + 1)x1 + x1 x2 (12)

We can now neglect all terms higher than order 1 to obtain

ẋ1 = x1 (13)
ẋ2 = x2 + x1 (14)

which gives the linear system

ẋ = ( 1  0
      1  1 ) x

The example above shows that the second method saves a lot of computations.
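As a quick sanity check, Method 1 can be reproduced symbolically. The snippet below is a sketch using sympy, which is our assumption (the notes only import matplotlib and the course's examples module).

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Matrix([x2**2 + x1*sp.cos(x2),
               x2 + (x1 + 1)*x1 + x1*sp.sin(x2)])
A = f.jacobian([x1, x2]).subs({x1: 0, x2: 0})
print(A)  # Matrix([[1, 0], [1, 1]]), matching both methods above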

1.2.4 Lyapunov’s linearisation method


Lyapunov’s linearisation method shows how the stability of the equilibrium of a nonlinear system
and the stability of the linearisation around that equilibrium point are related. This serves as a
justification for using linear control theory on a nonlinear system.
The following result shows how the stability of the linear approximation relates to that of the
nonlinear system.
• If the linear approximation is strictly stable (all eigenvalues have a strictly negative real part),
then the equilibrium point of the nonlinear system is asymptotically stable
• If the linear approximation is unstable (at least one eigenvalue has a strictly positive real
part), then the equilibrium point of the nonlinear system is unstable
• If the linear approximation is marginally stable (all eigenvalues have non-positive real parts, and at least one has a zero real part), we cannot conclude anything about the nonlinear system
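For instance, applying this criterion to the matrix A obtained in Example 1.3 (a small sketch using numpy, which is our assumption; the notes do not import it):

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0]])
print(np.linalg.eig(A)[0])  # [1. 1.]: both eigenvalues are strictly positive

Both eigenvalues have a strictly positive real part, so the origin of the nonlinear system of Example 1.3 is an unstable equilibrium point.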

Example 1.4
Let’s take our first example but without any input u

v̇ = −|v|v

This system is stable: when v < 0, v̇ > 0, which makes v increase towards 0, and when v > 0, v̇ < 0, which makes v decrease towards 0. By intuition, the equilibrium v = 0 is globally asymptotically stable.
Linearisation gives

v̇ = 0

which has a zero eigenvalue (pole).


Now, let’s take the same system but with a slight difference

v̇ = |v|v

This system is unstable: when v < 0, v̇ < 0, which makes v decrease further, and when v > 0, v̇ > 0, which makes v increase further. By intuition, the equilibrium v = 0 is unstable.
Linearisation gives

v̇ = 0

which has a zero eigenvalue (pole).

In both cases the linear approximation is marginally stable, so it tells us nothing: the two systems share the same linearisation yet have opposite stability properties.
This example shows that, because linearisation neglects higher-order terms, the local behaviour has to be “robust” so that the nonlinear terms cannot modify the stability of the linear part of the system. Marginal stability is not robust (a slight change in any parameter of the system can make it stable or unstable), so it is the nonlinear terms near the equilibrium point that decide the actual behaviour.
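The point can also be checked numerically. The sketch below (assuming scipy, as before) integrates both systems from the same small initial condition: the first trajectory decays towards 0, while the second one grows, and in fact escapes to infinity in finite time at t = 1/v(0).

from scipy.integrate import solve_ivp

for sign in (-1.0, +1.0):          # -1: v' = -|v|v (stable), +1: v' = |v|v (unstable)
    sol = solve_ivp(lambda t, v: [sign * abs(v[0]) * v[0]], (0.0, 5.0), [0.1])
    print(sign, sol.y[0, -1])      # about 0.067 (decaying) vs about 0.2 (growing)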

1.3 Exercises
Hint: For all the exercises, you can use any numerical computing software to compute the eigen-
values of the matrices using the “eig” function:
A = [1 2;3 4];
d = eig(A);
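In Python the equivalent would be (using numpy, which is our assumption; the notebook itself only imports matplotlib):

import numpy as np
A = np.array([[1, 2], [3, 4]])
d = np.linalg.eigvals(A)   # eigenvalues of A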

1.3.1 Exercise 1
Consider the system given by:

ẋ1 = −x2 x3 + 1 (15)
ẋ2 = x1 x3 − x2 (16)
ẋ3 = x3² (1 − x3) (17)

1. Show that the system has a unique equilibrium point


2. Study the stability of its equilibrium point using linearisation

1.3.2 Exercise 2
Consider the system given by:

ẋ1 = −x1 (18)
ẋ2 = (x1 x2 − 1) x2³ + (x1 x2 − 1 + x1²) x2 (19)

1. Show that x = 0 is the unique equilibrium point


2. Study the stability of its equilibrium point using linearisation

1.3.3 Exercise 3
Consider the system given by:

ẋ1 = x1 − x1³ + x2 (20)
ẋ2 = 3x1 − x2 (21)

1. Find all equilibrium points of the system


2. Study the stability of each equilibrium point using linearisation
