PHYM038
University of Surrey
Department of Physics
Spring 2019
Contents

1 Introduction
  1.1 History of Dynamics
  1.2 Basic Definitions
  1.3 Example: the pendulum
  1.4 Solving non-linear systems on a computer
    1.4.1 Euler's method
    1.4.2 Improved Euler's method
    1.4.3 Fourth-order Runge-Kutta method
    1.4.4 Newton-Raphson and Henyey methods
  1.5 Flow in one dimension
    1.5.1 Example: RC circuit
  1.6 Linear stability analysis
    1.6.1 Example: population growth
    1.6.2 Example: ẋ = x² − x⁴
  1.7 Existence and uniqueness theorem
  1.8 Potential functions

2 Bifurcations
  2.1 Bifurcations in 1 dimension
    2.1.1 Example: f(r, x) = ẋ = r + x², a "saddle node" or "blue sky" bifurcation
  2.2 Prototypes
    2.2.1 Example: f(x) = ẋ = r − x − e⁻ˣ
  2.3 Types of 1D bifurcations
    2.3.1 Saddle Node / Blue Sky: ẋ = r ± x²
    2.3.2 Transcritical: ẋ = rx − x²
    2.3.3 Supercritical pitchfork: ẋ = rx − x³
    2.3.4 Subcritical pitchfork: ẋ = rx + x³
  2.4 Insect outbreak!
    2.4.1 Scale free equation
    2.4.2 Stability
    2.4.3 Bifurcation curves
    2.4.4 Bistable states
    2.4.5 General state
  2.5 Ghosts and bottlenecks: the non-uniform oscillator
    2.5.1 Period of oscillation when a < ω
  2.6 Superconducting Josephson junction

3 Linear systems
  3.1 Real and distinct eigenvalues
    3.1.1 The slope of trajectories
    3.1.2 a < 0 and b < 0
    3.1.3 a > 0 and b > 0
    3.1.4 a > 0 and b < 0
    3.1.5 a = 0

7 Poincaré-Bendixson theorem
    7.0.1 Poincaré-Bendixson F.A.Q.
    7.0.2 Proof of the theorem
  7.1 Glycolysis
    7.1.1 General properties, nullclines and the fixed point
    7.1.2 The fixed point

9 Hopf Bifurcations
  9.1 Supercritical Hopf bifurcation
  9.2 Subcritical Hopf bifurcation
  9.3 Saddle node bifurcation of cycles

10 Fractals
  10.1 Fractals in Nature
  10.2 Cantor Set
  10.3 Ternary expansion characterization of the Cantor-thirds set

12 Attractors
  12.1 2-dimensional torus

14 Chaos
1 Introduction
1.1 History of Dynamics
17th century: Newton and Leibniz → calculus and differential equations.

Poincaré: looked for qualitative behaviour rather than quantitative solutions, a geometric approach.
More information
• Chaos and the Butterfly Effect (Uni. Nottingham)
https://www.youtube.com/watch?v=WepOorvo2I4
• History of Dynamics (MIT)
https://www.youtube.com/watch?v=zv6Qe6T6UYI
1.2 Basic Definitions
1. Differential equations, which are continuous functions describing, for example, the evolution of a system in time, like

   F = F(x) ,  (1.7)
   F = F(x, t) .  (1.8)

2. Iterated maps, also called difference equations, where the values at one "timestep" depend on the previous in the form

   x_{i+1} = F(x_i) .  (1.9)

   These are of great relevance because continuous functions usually must be discretized if they are to be solved on a computer.
One can convert from differential to difference equation form, e.g. with the Euler method. Consider a differential equation,

y′ = f(y, t) ,  (1.10)

then this can be approximated by taking the first terms in the Taylor expansion of f(y_n, t_n),

y_{n+1} = y_n + h f(y_n, t_n) ,  (1.11)

where h is a "timestep". Smaller timesteps lead to more accurate solutions, but have an associated extra computational cost. More advanced methods include the (hopefully familiar) Runge-Kutta scheme and may involve f(y_n, y_{n+1}, t_n, t_{n+1}, ...).
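As a concrete sketch (not from the notes), Eq. 1.11 translates directly into code; the test problem y′ = −y with y(0) = 1 is an assumed example whose exact solution is e⁻ᵗ:

```python
import math

def euler(f, y0, t0, t1, n):
    """Integrate y' = f(y, t) from t0 to t1 using n Euler steps of size h."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y += h * f(y, t)   # y_{n+1} = y_n + h f(y_n, t_n), Eq. 1.11
        t += h
    return y

# Assumed test problem: y' = -y, y(0) = 1, exact solution y(t) = exp(-t).
approx = euler(lambda y, t: -y, 1.0, 0.0, 1.0, 1000)
exact = math.exp(-1.0)
```

Shrinking h shrinks the error roughly in proportion, the signature of a first-order method.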
A linear system is one which can be written as,

F(x) = Ax .  (1.12)
Linear systems are "easy" in that they can be split into n one-dimensional problems,

ẋ = Ax ,  (1.13)

with eigenvalues λᵢ and eigenvectors vᵢ satisfying,

A vᵢ = λᵢ vᵢ ,  (1.14)

and general solution,

x(t) = Σᵢ cᵢ vᵢ e^{λᵢ t} ,  (1.15)

where the constants cᵢ are determined by initial or boundary conditions. The eigenvalues, eigenvectors and constants cᵢ can all be complex.
The dimensionality of the problem is important to the behaviour that non-linear systems can
demonstrate.
n = 1 Simple population growth, relaxation dynamics. These systems can display bifurcations.
n = 2 Oscillations, e.g. predator-prey, Josephson junctions, the Van der Pol equation, pendulum
(see below). These systems have limit cycles.
n ≥ 3 Complicated motion occurs in three or more dimensions, such as chaotic behaviour, strange
attractors and fractals.
Note: chaos can occur in one-dimensional maps but, in continuous systems, requires three di-
mensions or more.
1.3 Example: the pendulum
[Figure: a pendulum of length l and mass m displaced by angle θ; the restoring force component is mg sin θ.]
The equation of motion is,

m l (d²θ/dt²) = −m g sin θ ,  (1.16)

which simplifies to,

θ̈ + ω² sin θ = 0 ,  (1.17)

where

ω² = g/l .  (1.18)
We require two degrees of freedom, or “initial conditions”, θ and θ̇, to solve the problem, so it is
two-dimensional. We could write the equation of motion as two first-order equations in terms of
θ and θ̇.
In the simple case where θ ≪ 1, we can approximate sin θ ≈ θ, and we have a linear harmonic oscillator. The equations of motion become the (hopefully) familiar,

d²θ/dt² = −(g/l) θ ,  (1.19)

with solutions (cf. Eq. 1.15),

θ(t) = θ₀ cos(ωt + φ) ,  (1.20)
θ̇(t) = −ω θ₀ sin(ωt + φ) .  (1.21)
[Figure: phase portrait of the linearized pendulum in the (θ, θ̇) plane.]
What happens if θ is large? We can no longer make the simple linearization, sin θ ≈ θ. We can, however, recall some of our knowledge of physics. The energy of the system, the sum of the kinetic, T, and potential, U, energies, is a constant of the motion¹, i.e.,

E(θ, θ̇) = T + U
        = (1/2) m (l θ̇)² + m g l (1 − cos θ)  (1.22)
        = constant.
The potential energy can be rewritten using a half-angle identity,

U = m g l (1 − cos θ) = 2 m g l sin²(θ/2) .
Let θ₀ be the highest point of the motion, where the pendulum stops moving – albeit only instantaneously – then at this point the angular speed is zero, θ̇(θ₀) = 0, so also T = 0, and the total energy there is only from the potential energy,

E(θ₀, θ̇ = 0) = m g l (1 − cos θ₀)
             = 2 m g l sin²(θ₀/2) ,  (1.23)

which must be the same as the total energy at any angle, E(θ, θ̇), because energy is conserved, hence

E(θ, θ̇) = E(θ₀, θ̇ = 0) = 2 m g l sin²(θ₀/2) .
We can then solve for the kinetic energy, T, at any general angle, θ,

T = E − U = (1/2) m (l θ̇)² = 2 m g l [ sin²(θ₀/2) − sin²(θ/2) ] ,  (1.24)
1: https://en.wikipedia.org/wiki/Noether’s_theorem
hence, dividing through by (1/2) m l²,

θ̇² = 4ω² [ sin²(θ₀/2) − sin²(θ/2) ] .  (1.25)
[Figure: phase portrait θ̇ against θ for the full pendulum.]
For small angles,

sin²(θ/2) ≈ θ²/4 ,  (1.26)

then

θ̇² = ω² (θ₀² − θ²) ,  (1.27)

which recovers the simple harmonic motion of Eqs. 1.20 and 1.21.
Remark 2. We can write the equations as a two-dimensional autonomous system as follows. Define

v = θ̇ ,  (1.28)

then

v̇ = −(g/l) sin θ .  (1.29)

Equations 1.28 and 1.29 together form the required two-dimensional autonomous system.
There is an analytic solution which does not require a small angle approximation, although it does require elliptic integrals, e.g. http://sbfisica.org.br/rbef/pdf/070707.pdf, https://en.wikipedia.org/wiki/Elliptic_integral.
1.4 Solving non-linear systems on a computer
1.4.1 Euler's method

Given ẋ = f(x), we can expand the solution for x at time t₀ + ∆t, given a known solution at t = t₀, as a Taylor series,

x(t₀ + ∆t) = x(t₀) + ∆t (∂x/∂t)|_{t₀} + O(∆t²)
           = x₀ + f(x₀) ∆t ,  (1.31)

where we defined x₀ = x(t₀). We can then solve for the evolution of the system for as long as we like, out to a time t = n∆t, where,

x_n ≡ x(t = n∆t) ,  (1.32)
x_{n+1} = x_n + f(x_n) ∆t .  (1.33)
1.4.2 Improved Euler's method

We can improve by guessing a solution, x̃_{n+1}, using Euler's method, then use this to better approximate the derivative,

x_{n+1} = x_n + (1/2) [ f(x_n) + f(x̃_{n+1}) ] ∆t .  (1.34)

Such methods are simple to construct. In some cases the solutions are unstable (f would be described as stiff). There are other methods which typically are more precise and more stable, but involve more computational expense.
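A minimal sketch of Eq. 1.34 (the test equation x′ = −x is an assumed example, not from the notes):

```python
import math

def improved_euler(f, x0, dt, steps):
    """Heun's method: predict with an Euler step, then average the slopes (Eq. 1.34)."""
    x = x0
    for _ in range(steps):
        x_pred = x + f(x) * dt                   # Euler guess for the next value
        x = x + 0.5 * (f(x) + f(x_pred)) * dt    # corrected step
    return x

# x' = -x from x(0) = 1 over unit time; the exact answer is exp(-1).
approx = improved_euler(lambda x: -x, 1.0, 0.01, 100)
exact = math.exp(-1.0)
```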
1.4.3 Fourth-order Runge-Kutta method

Define

k₁ = f(x_n) ∆t ,  (1.35)
k₂ = f(x_n + k₁/2) ∆t ,  (1.36)
k₃ = f(x_n + k₂/2) ∆t ,  (1.37)
k₄ = f(x_n + k₃) ∆t ,  (1.38)
then

x_{n+1} = x_n + (1/6) (k₁ + 2k₂ + 2k₃ + k₄) + O(∆t⁵) .  (1.39)

Note that this is the "classical" Runge-Kutta method; there are many variations on its theme.
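Eqs. 1.35-1.39 take only a few lines; x′ = x with x(0) = 1 is an assumed test problem with exact solution eᵗ:

```python
import math

def rk4_step(f, x, dt):
    """One classical fourth-order Runge-Kutta step (Eqs. 1.35-1.39)."""
    k1 = f(x) * dt
    k2 = f(x + 0.5 * k1) * dt
    k3 = f(x + 0.5 * k2) * dt
    k4 = f(x + k3) * dt
    return x + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Integrate x' = x from x(0) = 1 to t = 1 in ten steps of dt = 0.1.
x = 1.0
for _ in range(10):
    x = rk4_step(lambda y: y, x, 0.1)
exact = math.exp(1.0)
```

Even with this coarse step the answer agrees with e to several digits, far better than Euler would manage.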
1.4.4 Newton-Raphson and Henyey methods

These are relaxation methods which use equations such as Eq. 1.34 but then iterate to converge on a solution. Methods such as Henyey's allow many coupled equations to be solved simultaneously, such as when the stellar evolution equations are discretized.
1.5 Flow in one dimension

Consider a one-dimensional system,

ẋ = f(x) .  (1.40)

One could try to integrate this directly; however, this may not be easy if f(x) is a complicated function. Instead, let us consider the geometric properties of f(x) and look at qualitative features of the motion.

The phase space is easy to draw: it is just a line.
Consider f(x): we have only three options.

f(x) > 0: then ẋ > 0, so x increases and motion is to the right.
f(x) < 0: then ẋ < 0, so x decreases and motion is to the left.
f(x) = 0: then ẋ = 0, i.e. x is constant, "in equilibrium", "fixed", "invariant" or a "singular point".
1.5.1 Example: RC circuit
[Figure: series circuit with battery V₀, resistor R and capacitor C.]
Note that V₀, R and C are all constants. From our knowledge of basic electronics, the charge Q on the capacitor obeys the equation,

V₀ = R Q̇ + Q/C ,  (1.42)

hence

dQ/dt = Q̇ = f(Q) = V₀/R − Q/(RC) .  (1.43)
Consider the phase space in the two dimensions, Q and Q̇.

• The fixed point, where Q̇ = 0, is at

Q = Q* = C V₀ .  (1.44)

• Just below the fixed point, at

Q = Q* − ∆Q ,  (1.45)

we have

Q̇ = V₀/R − (Q* − ∆Q)/(RC) = V₀/R − (CV₀ − ∆Q)/(RC) = +∆Q/(RC) > 0 ,  (1.46)

so the charge increases back towards Q*.

• When Q = 0 we have

Q̇ = V₀/R .  (1.47)
[Figure: f(Q) = Q̇ against Q; f(0) = V₀/R and the flow is towards the stable fixed point at Q* = CV₀.]
In such a simple case we can solve analytically: given that Q = Q(t₀) at a "start time" t₀,

Q(t) = CV₀ + [Q(t₀) − CV₀] e^{−(t − t₀)/(RC)} .  (1.48)
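As a sanity check (the component values here are assumed, purely for illustration), a crude Euler integration of Eq. 1.43 reproduces Eq. 1.48:

```python
import math

V0, R, C = 5.0, 2.0, 3.0     # assumed example values
Qstar = C * V0               # fixed point, Eq. 1.44

def qdot(Q):
    """Eq. 1.43: dQ/dt = V0/R - Q/(RC)."""
    return V0 / R - Q / (R * C)

# Integrate from Q(0) = 0 with small Euler steps...
Q, t, dt = 0.0, 0.0, 1e-3
for _ in range(20000):
    Q += qdot(Q) * dt
    t += dt

# ...and compare to the analytic solution, Eq. 1.48, with t0 = 0 and Q(t0) = 0.
analytic = Qstar + (0.0 - Qstar) * math.exp(-t / (R * C))
```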
[Figure: Q(t) relaxing exponentially towards Q* = CV₀.]
1.6 Linear stability analysis

Consider a general one-dimensional system,

ẋ = f(x) ,  (1.49)

with a fixed point x* where,

f(x*) = 0 .  (1.50)
We want to know about its stability, so we Taylor expand around a point near x*,

x = x* + η ,  (1.51)

f(x) = ẋ = ẋ* + η̇ = η̇ ,  (1.52)

f(x) = f(x* + η) = f(x*) + f′(x*) η + O(η²)  (1.53)
     = f′(x*) η + O(η²) ,  (1.54)

hence, to first order,

η̇ = f′(x*) η .  (1.55)
1.6.1 Example: population growth

Let n = n(t) be the number of individuals in a population, where n > 0. Then the population growth rate can be modelled as,

ṅ = r n ,  (1.57)

with solution,

n(t) = n(0) e^{rt} ,  (1.58)

i.e. the population grows forever. This is not a very realistic model!
Instead, consider the logistic equation,

ṅ = r̃ n = r n (1 − n/k) ,  (1.59)

where k is called the carrying capacity, and k > 0. The effective growth rate is r̃ = r(1 − n/k), which is 0 when n = k, so this is the limiting population. As n → k, r̃ → 0 and growth stops.
[Figure: ṅ against n for the logistic equation; the flow is away from the unstable fixed point at n = 0 and towards the stable fixed point at n = k.]
Consider the locations of the fixed points, where ṅ = 0: these are n* = 0 and n* = k. Perturbing by ∆n about a fixed point n*,

ṅ = r (n* + ∆n) (1 − n*/k − ∆n/k)  (1.60)
  = r n* + r ∆n − r (n*)²/k − r n* ∆n/k − r n* ∆n/k − r (∆n)²/k  (1.61)
  ≈ r n* (1 − n*/k) + r (1 − 2n*/k) ∆n .  (1.62)

At n* = k,

ṅ ≈ r k (1 − k/k) + r (1 − 2k/k) ∆n  (1.63)
  = −r ∆n ,  (1.64)

so perturbations about n* = k decay and the fixed point is stable.
• The approach to the stable fixed point is exponential, as it is in the general case, as we shall see below.
• General solution,

  dn/dt = r n (1 − n/k)
  dn / [n (k − n)] = (r/k) dt
  (1/k) [log n − log(k − n)] = (r/k) t + c
  n = k / (1 + exp(−[r t + c k])) ,

  where the integration constant c is set by the initial population n_i,

  c = −(1/k) ln(k/n_i − 1) .
• Maximum population: setting

  ṅ = r n (1 − n/k) = 0

  gives n = k.
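The closed form above can be checked numerically; r, k and the initial population n_i below are assumed example values:

```python
import math

r, k, ni = 1.0, 10.0, 1.0    # assumed example values

def n_analytic(t):
    """Logistic solution n(t) = k/(1 + exp(-[rt + ck])), with c fixed by n(0) = ni."""
    c = -math.log(k / ni - 1.0) / k
    return k / (1.0 + math.exp(-(r * t + c * k)))

# Euler-integrate n' = r n (1 - n/k) from ni and compare at t = 5.
n, dt = ni, 1e-3
for _ in range(5000):
    n += r * n * (1.0 - n / k) * dt
```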
[Figure: n(t)/k against t for the logistic equation; all solutions approach n/k = 1 on a timescale of a few 1/r.]
1.6.2 Example: ẋ = x² − x⁴

We have

f(x) = ẋ = x² − x⁴ ,  (1.65)

hence

f′(x) = 2x − 4x³ .  (1.66)

The fixed points are where

ẋ = 0 ,  (1.67)

which are

x = x* = 0  (1.68)
       = ±1 .  (1.69)
x* = −1 is unstable.
  If x is slightly larger than x* then f′ > 0 and motion is away from x*.
  If x is slightly smaller than x* then f′ > 0 and motion is away from x*.
  Hence any x near x* = −1 moves away from it.

x* = 0 is semistable.
  If x is slightly larger than x* (i.e. positive) then f′ > 0 and motion is away from x* = 0 in the positive direction.
  If x is slightly smaller than x* (i.e. negative) then f′ < 0 and motion is towards x* = 0 in the positive direction.

x* = +1 is stable.
  If x is slightly larger than x* then f′ < 0 and motion is towards x*.
  If x is slightly smaller than x* then f′ < 0 and motion is towards x*.
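These statements are easy to verify numerically (the step size and starting points below are arbitrary choices):

```python
def f(x):
    """f(x) = x^2 - x^4 (Eq. 1.65)."""
    return x**2 - x**4

def fp(x):
    """f'(x) = 2x - 4x^3 (Eq. 1.66)."""
    return 2.0 * x - 4.0 * x**3

slopes = {xs: fp(xs) for xs in (-1.0, 0.0, 1.0)}   # f' at each fixed point

def flow(x, dt=1e-2, steps=10000):
    """Follow x' = f(x) with simple Euler steps."""
    for _ in range(steps):
        x += f(x) * dt
    return x

right = flow(0.1)    # just right of x* = 0: escapes to the stable point at +1
left = flow(-0.1)    # just left of x* = 0: creeps slowly towards 0
```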
[Figure: f(x) = x² − x⁴ against x, with the unstable (x = −1), semistable (x = 0) and stable (x = +1) fixed points marked.]
1.7 Existence and uniqueness theorem
In one dimension, trajectories can only
• increase monotonically,
• decrease monotonically, or
• stay constant,
but they can never go back. Hence oscillations are impossible in one-dimensional systems, except for circular motion.
Corollary 1. The theorem can be extended to n dimensions, but proof of the theorem is not a simple matter. The general proof involves Picard iterations of solutions until a series solution is built up. If we have in general,

y′ = f(x, y) ,  (1.72)
y(x₀) = y₀ ,  (1.73)

then we can integrate,

y(x) = y₀ + ∫_{x₀}^{x} f(z, y(z)) dz .  (1.74)

We don't know y(z), but we can guess it is approximately the value at x₀, i.e. y(x₀) = y₀, and integrate again,

y₁(x) = y₀ + ∫_{x₀}^{x} f(z, y₀) dz ,  (1.75)

and repeat to the nth stage. The solution converges to the unique solution. To prove that the series converges requires the Banach-Caccioppoli fixed-point theorem and Gronwall's lemma, both of which are beyond the scope of this course but which, naturally, make for interesting reading.
See also
• http://web.mit.edu/jorloff/www/18.03-esg/notes/existAndUniq.pdf
• https://en.wikipedia.org/wiki/Gr%C3%B6nwall%27s_inequality
• https://en.wikipedia.org/wiki/Banach_fixed-point_theorem
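A numerical sketch of the Picard iteration (the ODE y′ = y, y(0) = 1 is an assumed example; its unique solution is eˣ):

```python
import math

# Picard iteration for y' = y, y(0) = 1 on a grid over [0, 1]: each pass
# computes y_{n+1}(x) = y0 + integral_0^x y_n(z) dz (cf. Eq. 1.74) with the
# trapezium rule, and the iterates converge towards exp(x).
N = 1000
y = [1.0] * (N + 1)                 # initial guess: y_0(z) = y0 everywhere

for _ in range(30):                 # Picard passes
    integral, new = 0.0, [1.0]
    for i in range(1, N + 1):
        integral += 0.5 * (y[i - 1] + y[i]) / N   # trapezium step of width 1/N
        new.append(1.0 + integral)
    y = new

err = abs(y[-1] - math.e)           # error at x = 1
```

Each pass extends agreement with the Taylor series of eˣ by one further order, exactly as the "repeat to the nth stage" description suggests.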
2: An interval is a set of real numbers with the property that any number between two numbers in the set lies in the set. Open implies that the set does not include its endpoints, and uses the notation (x, y).
1.8 Potential functions
Suppose the system can be written in terms of a potential,

ẋ = f(x) = −dV/dx ,  (1.77)

where V = V(x) is called the potential, cf. gravitational potential or electric potential. If such a function V exists, then

dV/dt = (dV/dx)(dx/dt)  (1.78)
      = −(dV/dx)² ≤ 0 .  (1.79)

Hence stable points are minima of V(x), which follows from ẋ = f(x) = −dV/dx.
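A quick numerical illustration with an assumed example potential, V(x) = x⁴/4 − x²/2, for which ẋ = −dV/dx = x − x³:

```python
def V(x):
    """Assumed example potential V(x) = x^4/4 - x^2/2 (minima at x = +-1)."""
    return x**4 / 4.0 - x**2 / 2.0

def xdot(x):
    """x' = -dV/dx = x - x^3."""
    return x - x**3

# Follow the flow from x(0) = 0.3: V should never increase (Eq. 1.79),
# and the trajectory should settle into the nearby minimum at x = +1.
x, dt = 0.3, 1e-2
history = []
for _ in range(5000):
    history.append(V(x))
    x += xdot(x) * dt

monotone = all(b <= a + 1e-12 for a, b in zip(history, history[1:]))
```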
2 Bifurcations
2.1 Bifurcations in 1 dimension
Definition 4 - Bifurcation.
A change in the nature, or the number, of fixed points, when a parameter of the system is varied.
Consider a system,

f(x) = f(x, r) ,

where r is a parameter which can be varied.

2.1.1 Example: f(r, x) = ẋ = r + x², a "saddle node" or "blue sky" bifurcation

r < 0: f(r, x) = r + x², and we have f = 0 when r = −x² < 0, i.e. x_{1,2} = ±√(−r), so there are two fixed points, one stable and one unstable. As r → 0 they merge, and for r > 0 there are no fixed points at all.

We can plot x vs r, which shows the bifurcation. This particular type of bifurcation is called a "saddle node" (because it looks like a saddle?) or "blue sky" bifurcation.
[Figure: bifurcation diagram for ẋ = r + x²; the branches x_{1,2} = ±√(−r) meet and annihilate at r = 0.]
2.2 Prototypes
The form ẋ = r ± x² is called a prototype or normal form. From this, similar forms can be constructed with the same qualitative properties, e.g. by multiplying terms by constants. Prototypes represent all bifurcations of their kind.

Consider the following example,

ẋ = f(x, r) = −x³ + 2x² − r x .
[Figure: ẋ = f(x, r) against x for r from 0 to 1.2; the two non-zero fixed points approach each other as r increases.]
Near the bifurcation point (r_c, x₀) we can Taylor expand,

f(x, r) = f(r_c, x₀) + (∂f/∂r)|_{r_c,x₀} (r − r_c) + (∂f/∂x)|_{r_c,x₀} (x − x₀) + (1/2)(∂²f/∂x²)|_{r_c,x₀} (x − x₀)² + ... ,  (2.2)

together with the condition at the fixed point (the "tangent condition" at a saddle node),

(∂f/∂x)|_{r_c,x₀} = 0 ,  (2.3)

and if we neglect the higher terms we have,

f(x, r) = a (r − r_c) + b (x − x₀)² ,  (2.4)
where

a = (∂f/∂r)|_{r_c,x₀}  (2.5)
and

b = (1/2) (∂²f/∂x²)|_{r_c,x₀} .  (2.6)

The constants a and b are much easier to calculate than the solutions to the original cubic equation. In our case,

a = −x₀ ,  (2.7)

and

b = (1/2) ( ∂[−3x² + 4x − r]/∂x )|_{r_c,x₀}
  = 2 − 3x₀ ,  (2.8)

so

f(x, r) = −x₀ (r − r_c) + (2 − 3x₀)(x − x₀)² .  (2.9)

This looks like a parabola, so to within scaling constants is the same as our prototype and has the same general behaviour.
2.2.1 Example: f(x) = ẋ = r − x − e⁻ˣ

We do not have to calculate f(x) over the entire (x, r) plane; it is simpler to take a geometric approach. Fixed points are at f(x) = ẋ = 0, i.e. where,

r − x = e⁻ˣ ,  (2.10)

so plot r − x and e⁻ˣ against x for a range of r.
[Figure: e⁻ˣ and the lines r − x against x for r = −1, +1 and +3; fixed points are where the curves intersect.]
The critical point is at r = 1: if r > 1 there are two fixed points, if r < 1 there are none. When r = 1 we have

1 − x_c = e^{−x_c} ,  (2.11)

which is satisfied at x_c = 0, and from this information we can construct the entire bifurcation diagram. Around the bifurcation, we can expand f(x),

f(x*) = 0 = r − x* − e^{−x*}
         = r − x* − (1 − x* + x*²/2! − x*³/3! + ...)
         = r − x* − 1 + x* − x*²/2 + O(x*³)
         ≈ (r − 1) − x*²/2 .  (2.12)

Again, we have the quadratic prototype, and everything we have learned about it applies equally to this function, f(x). The important point is that we can plot a detailed bifurcation diagram using the approximate solution,

x* = ±√(2r − 2) .  (2.13)
[Figure: bifurcation diagram x* ≈ ±√(2r − 2) against r for ẋ = r − x − e⁻ˣ.]
2.3 Types of 1D bifurcations
2.3.1 Saddle Node / Blue Sky: ẋ = r ± x²

[Figure: saddle-node bifurcation diagram, x against r.]
2.3.2 Transcritical: ẋ = r x − x 2
[Figure: transcritical bifurcation diagram, x against r; the branches x* = 0 and x* = r exchange stability at r = 0.]
2.3.3 Supercritical pitchfork: ẋ = r x − x³

[Figure: supercritical pitchfork bifurcation diagram, x against r, with stable and unstable branches marked.]
2.3.4 Subcritical pitchfork: ẋ = r x + x³

[Figure: subcritical pitchfork bifurcation diagram, x against r, with unstable and stable branches marked.]
2.4 Insect outbreak!

The bug population n(t) grows logistically, but the bugs are also predated at a rate,

p(n) = B n² / (A² + n²) ,  (2.15)

where A and B are non-zero. When n is small (n ≪ A), p(n) ≈ 0, but when n is large (n ≫ A), p(n) ≈ B. The rate of growth, or decline, of the bug population is thus,

ṅ = R n (1 − n/k) − B n² / (A² + n²) .  (2.16)
2.4.1 Scale free equation

We now try to make an equation in a general, scale-free/unit-free form, by defining a new set of variables,

x = n/A ,  (2.17)
τ = B t / A ,  (2.18)
r = R A / B ,  (2.19)
c = k/A ,  (2.20)

and we can rewrite Eq. 2.16, by dividing through by B and replacing n by A × (n/A),

ṅ/B = (R A/B) (n/A) (1 − (A/k)(n/A)) − (n/A)² / (1 + (n/A)²) ,  (2.21)
and now note that,

d/dt = (B/A) d/dτ ,  (2.22)

hence

dx/dτ = r x (1 − x/c) − x²/(1 + x²)  (2.23)
      = x [ r (1 − x/c) − x/(1 + x²) ] .  (2.24)
The fixed points are at,

(A/B) dx/dt = dx/dτ = 0 ,  (2.25)

one obvious solution to which is x = 0, but this just means there are never any bugs: it is a trivial solution. Assume there are some bugs, and let g be a function which depends on the model parameters,

g(x) = r (1 − x/c)  (2.26)

and h be a function which is independent of the model parameters,

h(x) = x/(1 + x²) ,  (2.27)

then

x g(x) = x h(x) .  (2.28)

Other than the trivial solution, we have x ≠ 0, i.e.,

g(x) = h(x) .  (2.29)

We can alter our model by changing g(x) and compare to the fixed function h(x): a relatively easy process. We can also plot g(x) and h(x) to see where they cross, at (non-trivial) fixed points x*, which we don't solve for because it is complicated. To give you an idea, try solving for x*,

x*/(1 + x*²) = r (1 − x*/c) .  (2.30)
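Although x* has no tidy closed form, it is easy to find numerically; a bisection sketch (the values r = 1, c = 4 anticipate the example in the next subsection):

```python
def g(x, r, c):
    """g(x) = r (1 - x/c), Eq. 2.26."""
    return r * (1.0 - x / c)

def h(x):
    """h(x) = x/(1 + x^2), Eq. 2.27."""
    return x / (1.0 + x * x)

def bisect(fn, a, b, tol=1e-10):
    """Bisection on a bracket [a, b] where fn(a) and fn(b) have opposite signs."""
    fa = fn(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * fn(m) <= 0.0:
            b = m
        else:
            a, fa = m, fn(m)
    return 0.5 * (a + b)

# For r = 1, c = 4 there is a single non-trivial fixed point near x = 2.7.
xstar = bisect(lambda x: g(x, 1.0, 4.0) - h(x), 1.0, 4.0)
```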
2.4.2 Stability

• g(x) is a straight line which depends on r and c, the "free parameters" of the problem (e.g. set by other factors, like the nature of the bugs, physics, etc.), while h(x) is a fixed curve.

We can see there are general solutions to the problem. One, shown below, has r = 1, c = 4. There is only one non-trivial fixed point (where the lines cross, i.e. g(x) = h(x)) at x ≈ 2.7. Whatever the initial population, this is the final number of bugs, because the fixed point is stable.
[Figure: g(x) = r(1 − x/c) and h(x) = x/(1 + x²) for r = 1, c = 4; the single non-trivial crossing at x ≈ 2.7 is a stable fixed point.]
Another example, showing different behaviour, is illustrated by r = 0.5, c = 10. There are now
three fixed points at x 1 ≈ 0.7, x 2 ≈ 2 and x 3 ≈ 7.3. If x (t = 0) < 2 the population of bugs settles to
the “low” equilibrium state at x 1 ≈ 0.7. However, if the population ever exceeds x ≈ 2, e.g. by a
sudden migration from elsewhere, the number of bugs will reach a maximum at x 3 ≈ 7.3 and will
stay there. If you were a pest controller, you’d not want this to happen!
[Figure: g(x) and h(x) for r = 0.5, c = 10; three non-trivial fixed points at x₁ ≈ 0.7 (stable), x₂ ≈ 2 (unstable) and x₃ ≈ 7.3 (stable).]
What happens when the parameters r or k (equivalently c) change?

• Increasing r or k, the low and unstable fixed points merge in a saddle-node bifurcation and the population jumps to the single remaining solution, the "outbreak" state. This can be large, and the system cannot recover back to the earlier state by then reducing r or k, because this solution is stable.
• If the population can be kept below the unstable fixed point, it will remain low, in the "refuge" state.
• If the population increases above the unstable fixed point, it will balloon to the "outbreak" state.
• The parameters r and k, which determine the curve g(x), control the behaviour.
• There is a range of r, k in which there are two stable states; this is called bistable.
[Figure: g(x) = r(1 − x/10) for r = 0.2 to 0.6 against h(x), and the resulting critical population x_crit as a function of r.]
2.4.3 Bifurcation curves

We cannot explicitly calculate the bifurcation curves, but we can do so parametrically. The saddle-node bifurcation, as discussed in Section 2.2, requires that ẋ = 0 and that the tangent condition is satisfied, i.e. the curves g and h touch tangentially. Thus,

x/(1 + x²) = r (1 − x/c)  (2.31)

and

(d/dx)[ x/(1 + x²) ] = (d/dx)[ r (1 − x/c) ]  (2.32)

(1 − x²)/(1 + x²)² = −r/c .  (2.33)

Solving the pair for r and c in terms of x,

r = 2x³/(1 + x²)² ,
c = 2x³/(x² − 1) .

Note that c > 0 implies x > 1. We can now plot x in the (r, c) plane.
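The parametric form is easy to tabulate and to sanity-check: every point generated this way should satisfy both the crossing condition g = h and the tangency condition g′ = h′ (a sketch, with an arbitrary grid in x):

```python
def r_of(x):
    """r(x) = 2x^3/(1 + x^2)^2 on the saddle-node curve."""
    return 2.0 * x**3 / (1.0 + x * x)**2

def c_of(x):
    """c(x) = 2x^3/(x^2 - 1); needs x > 1 for c > 0."""
    return 2.0 * x**3 / (x * x - 1.0)

# Tabulate the bifurcation curve for x > 1 (arbitrary grid).
curve = [(c_of(x), r_of(x)) for x in (1.1 + 0.1 * i for i in range(200))]

def residual(x):
    """|g - h| + |g' - h'| evaluated at the parametric point (r(x), c(x))."""
    hval = x / (1.0 + x * x)
    gval = r_of(x) * (1.0 - x / c_of(x))
    hp = (1.0 - x * x) / (1.0 + x * x)**2
    gp = -r_of(x) / c_of(x)
    return abs(gval - hval) + abs(gp - hp)
```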
[Figure: bifurcation curves in the (c, r) plane, dividing it into Refuge (low r), Bistable and Outbreak (high r) regions.]
At low r there is only the refuge (low) bug state. At high r there is only the outbreak. In the
bistable region, both low and high bug states are possible.
Given
r = 2x³/(1 + x²)² ,
c = 2x³/(x² − 1) ,

write x(r, c) (where x ≠ 0),

(r/c)(1 + x²)² = x² − 1 ,

which has the solution,

x = √[ c/(2r) − 1 ± √( (1/4)(2 − c/r)² − (1 + c/r) ) ] ,

where we take the positive root because x > 0.
[Figure: surface of fixed points x(r, c) over the (c, r) plane; the fold in the surface corresponds to the bistable region.]
3: https://en.wikipedia.org/wiki/Cubic_function
2.5 Ghosts and bottlenecks: the non-uniform oscillator
The non-uniform oscillator is,

θ̇ = ω − a sin θ .  (2.34)

Since

−1 ≤ sin θ ≤ +1 ,

the qualitative behaviour depends on the size of a relative to ω.
a < ω: there are no fixed points; the motion is fast near θ = −π/2 and slow near θ = +π/2, but it never stops, so the motion is periodic.

[Figure: phase circle for a < ω, slow at the top (θ = π/2) and fast at the bottom.]

a = a_c = ω: f(θ, a) = 0 at θ = π/2 and f > 0 otherwise. θ = π/2 is a semistable fixed point, otherwise motion is cyclic; the oscillator stops only at t = ∞.

[Figure: phase circle for a = a_c = ω, with a semistable fixed point at θ = π/2.]
a > ω: f(θ, a) = 0 now has two solutions, θ_{1,2} = sin⁻¹(ω/a) and π − sin⁻¹(ω/a): one stable and one unstable fixed point. The motion is attracted to the stable fixed point, so it is not periodic.
[Figure: phase circle for a > ω, with a stable fixed point at θ₁ and an unstable fixed point at θ₂; there is no periodic motion.]
2.5.1 Period of oscillation when a < ω

We can write the time an oscillation cycle takes as the time it takes θ to traverse 2π, e.g. from −π to +π,

T = ∫₀^T dt
  = ∫_{−π}^{+π} (dt/dθ) dθ
  = ∫_{−π}^{+π} dθ / (ω − a sin θ)
  = [ (2/√(ω² − a²)) tan⁻¹( (ω tan(θ/2) − a) / √(ω² − a²) ) ]_{−π}^{+π}
  = 2π / √(ω² − a²) .  (2.35)

Hint: do the integral at https://www.wolframalpha.com. A proof is given below (there are alternative proofs which do not require contour integration, but they're even more hideous in terms of algebra).
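If you distrust the closed form, a midpoint-rule quadrature confirms it (the values ω = 2, a = 1 are an assumed example):

```python
import math

def period_numeric(omega, a, n=4000):
    """Midpoint-rule estimate of T = int_{-pi}^{+pi} dtheta / (omega - a sin theta)."""
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        theta = -math.pi + (i + 0.5) * h
        total += h / (omega - a * math.sin(theta))
    return total

omega, a = 2.0, 1.0
numeric = period_numeric(omega, a)
closed_form = 2.0 * math.pi / math.sqrt(omega**2 - a**2)   # Eq. 2.35
```

For a smooth periodic integrand the midpoint rule converges extremely fast, so the two values agree essentially to machine precision.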
Proof.^a Substitute z = e^{ix}, so that

x = −i ln z ,  (2.36)
dx = dz/(i z) ,  (2.37)
sin x = (e^{ix} − e^{−ix})/(2i)  (2.38)
      = (1/2i)(z − 1/z) ,  (2.39)

and our integral can be rewritten as a contour integral^b around the curve |z| = 1,

∫_{−π}^{+π} dx/(α + β sin x) = ∮_{|z|=1} [ 1/(α + (β/2i)(z − 1/z)) ] dz/(iz)  (2.40)
  = ∮_{|z|=1} dz / ( iαz + (β/2)(z² − 1) )  (2.41)
  = (2/β) ∮_{|z|=1} dz / ( z² + (2iα/β) z − 1 ) ≡ (2/β) I .  (2.42)
The integrand has simple poles where z² + (2iα/β)z − 1 = 0, i.e. at

z± = i( −α/β ± √(α²/β² − 1) ) .

We need to know which is inside the unit circle, i.e. |z| < 1. Our condition, a < ω, is equivalent to β² < α², hence α²/β² > 1 and the square root is real. Since the product of the two roots is −1, exactly one pole, z₊, lies inside the unit circle.
By the residue theorem^c,

I = 2πi R₊ ,  (2.46)

where R₊ is the residue at the pole inside the contour,
R₊ = 1/(z₊ − z₋) = −iβ / ( 2√(α² − β²) ) ,  (2.50)

hence^d
I = 2πi R₊ = πβ / √(α² − β²) ,  (2.51)

so the original integral is (2/β) I = 2π/√(α² − β²) = 2π/√(ω² − a²), as required.
a: See a similar proof at https://web.williams.edu/Mathematics/sjmiller/public_html/372Fa15/coursenotes/Trapper_MethodsContourIntegrals.pdf
b: https://en.wikipedia.org/wiki/Contour_integration
c: https://en.wikipedia.org/wiki/Cauchy’s_integral_formula
d: https://en.wikipedia.org/wiki/Residue_theorem
[Figure: θ(t) for four values of a; as a → ω from below, the trajectory spends longer and longer in the bottleneck near θ = π/2.]
Before reaching the saddle point, where ε → 0, the period T becomes infinite (T → +∞). In general, a prototype like,

ẋ = (r − r₀) + (x − x₀)² ,  (2.56)
2.6 Superconducting Josephson junction
[Figure: Josephson junction, two superconductors with wavefunctions ψ₁e^{iφ₁} and ψ₂e^{iφ₂} separated by a thin barrier, with voltage V across the junction.]
The junction obeys the Josephson relations,

I_s = I_c sin φ ,  (2.58)
V = (ℏ/2e) φ̇ ,  (2.59)

where

φ(t) ≡ φ₁ − φ₂

is the phase difference between the two superconductors. Note that if V is nonzero, the phase φ(t) evolves because φ̇(t) = dφ/dt ∝ V.
The junction has a natural resistance, which absorbs the non-superconducting current, and a capacitance. The equivalent circuit is shown below.

[Figure: equivalent circuit, a current source I feeding the resistance R, capacitance C and ideal junction (critical current I_c) in parallel, with voltage V across them.]
Summing the currents gives,

C V̇ + V/R + I_c sin φ = I ,  (2.60)

and we can substitute Eqs. 2.58 and 2.59 to show,

(ℏC/2e) φ̈ + (ℏ/2eR) φ̇ + I_c sin φ = I .  (2.61)
Note that this is an identical equation to that of a damped pendulum with a constant applied torque,

m L² θ̈ + b θ̇ + m g L sin θ = Γ .  (2.62)
In the overdamped limit C is very small and the φ̈ term can be neglected,

(ℏ/2eR) φ̇ + I_c sin φ = I ,  (2.63)
hence

dφ/dτ = I/I_c − sin φ ,  (2.64)

where we have defined a "dimensionless time",

τ = (2e I_c R/ℏ) t .  (2.65)
Eq. 2.64 is identical to the non-uniform oscillator we studied previously (Eq. 2.34), hence we know the time (in units of τ) for one period is,

T = 2π / √( (I/I_c)² − 1 ) .  (2.66)
The mean voltage over one cycle is,

⟨V⟩ = (ℏ/2e) ⟨dφ/dt⟩
    = (ℏ/2e)(1/T_s) ∫₀^{T_s} (dφ/dt) dt
    = (ℏ/2e)(1/T_s) ∫₀^{2π} dφ
    = (ℏ/2e)(2π/T_s) ,

where T_s = (ℏ/2e I_c R) T is the period in seconds. Substituting Eq. 2.66,

⟨V⟩ = I_c R √( (I/I_c)² − 1 ) = R √( I² − I_c² ) ,  (2.68)

hence

⟨V⟩ = { 0 ,              I ≤ I_c ,
      { R √(I² − I_c²) , I > I_c .   (2.69)
Note that we have not had to understand any of the detailed physics⁵ to arrive at this result; we have only used the analysis skills we have learned over the previous classes.
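We can also check Eq. 2.69 by brute force: integrate Eq. 2.64 and measure the long-time average of dφ/dτ, which in these dimensionless units is proportional to ⟨V⟩. The ratio I/I_c = 2 and the integration time below are assumed choices:

```python
import math

def mean_phase_rate(i_ratio, dt=1e-3, T=200.0):
    """Integrate dphi/dtau = I/Ic - sin(phi) (Eq. 2.64) with Euler steps
    and return the average of dphi/dtau over the run."""
    phi = 0.0
    for _ in range(int(T / dt)):
        phi += (i_ratio - math.sin(phi)) * dt
    return phi / T

avg = mean_phase_rate(2.0)
predicted = math.sqrt(2.0**2 - 1.0)   # sqrt((I/Ic)^2 - 1), cf. Eq. 2.66
```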
[Figure: ⟨V⟩ against I for the overdamped junction with R = 1; ⟨V⟩ = 0 up to I = I_c, then rises and approaches the Ohmic line V = IR.]
If C is large, the behaviour is more complicated, e.g. with hysteresis.
5: e.g. https://en.wikipedia.org/wiki/Ginzburg-Landau_theory, which involves equations like the Ginzburg-Landau equation ∂u/∂t = D ∂²u/∂x² + r u − u³. As an exercise, linearize the homogeneous form of this equation (when the spatial derivatives are zero, which is the natural equilibrium state of the system) to see if it is stable to perturbations.
3 Linear systems

Remember that linear systems can be defined as,

ẋ = Ax ,  (3.1)

with eigenvalues λᵢ and eigenvectors vᵢ satisfying,

A vᵢ = λᵢ vᵢ .  (3.3)

Consider two dimensions, where

A = ( a  b
      c  d ) ,  (3.5)

and the fixed point is at

(x₀, y₀) = (0, 0) .  (3.6)

The eigenvalues are found from,

det(A − λᵢ I) = 0 ,  (3.7)

i.e.

λ² − τλ + ∆ = 0 ,  (3.8)

where

τ = a + d = Tr(A)  (3.9)

and

∆ = a d − b c = det(A) .
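A small helper (an illustration, not from the notes; degenerate borderline cases such as ∆ = 0 or repeated eigenvalues are glossed over) classifies a 2×2 system from τ and ∆:

```python
import cmath

def classify(a, b, c, d):
    """Eigenvalues and fixed-point type for x' = A x, A = [[a, b], [c, d]],
    via lambda^2 - tau*lambda + Delta = 0 (Eq. 3.8)."""
    tau = a + d                      # trace, Eq. 3.9
    delta = a * d - b * c            # determinant
    disc = tau * tau - 4.0 * delta
    lam1 = (tau + cmath.sqrt(disc)) / 2.0
    lam2 = (tau - cmath.sqrt(disc)) / 2.0
    if delta < 0:
        kind = "saddle"
    elif tau == 0 and delta > 0:
        kind = "centre"
    elif disc >= 0:
        kind = "unstable node" if tau > 0 else "stable node"
    else:
        kind = "unstable spiral" if tau > 0 else "stable spiral"
    return lam1, lam2, kind

# Harmonic oscillator, A = [[0, 1], [-omega^2, 0]] with omega = 2: a centre.
l1, l2, kind = classify(0.0, 1.0, -4.0, 0.0)
```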
3.1 Real and distinct eigenvalues
3.1.1 The slope of trajectories

[Figure: trajectories in the (x, y) plane relative to the eigenvector directions v₁ and v₂.]
3.1.2 a < 0 and b < 0

[Figure: stable node; trajectories converge on the fixed point along the eigendirections.]
3.1.3 a > 0 and b > 0

[Figure: unstable node; trajectories diverge from the fixed point.]
3.1.4 a > 0 and b < 0

[Figure: saddle point, with its stable and unstable manifolds.]
3.1.5 a = 0

• a = 0 and b < 0

[Figure: "stable nodes", a = 0, b < 0: a line of fixed points along one eigendirection, each attracting along the other.]
• a = 0 and b > 0

[Figure: a = 0, b > 0: a line of fixed points along one eigendirection, each repelling along the other.]
3.1.7 Stability

A fixed point x₀ is Lyapunov stable if any trajectory that starts close to x₀ remains close for all later times. Note that this does not mean the trajectory approaches x₀ asymptotically.
A fixed point x 0 is a saddle node if it is stable in one direction and unstable in the other, i.e. λ1 λ2 < 0
(e.g. λ1 < 0 and λ2 > 0, or vice versa).
stable manifold is the set of points that converge to x 0 . Trajectories approach the stable manifold
as t → −∞.
unstable manifold is the set of points that diverge from x 0 . Trajectories approach the unstable
manifold as t → +∞.
A fixed point x 0 is neutrally stable if it is Lyapunov stable but not attracting. Good examples are
the harmonic oscillator, pendulum and Lagrange points.
A point can be both attracting and Lyapunov unstable. For example, θ̇ = 1 − cos θ has a semistable
fixed point. On one side it is attracting, but on the other side (which is still nearby) it is unstable.
[Figure: phase circle for θ̇ = 1 − cos θ, with a semistable fixed point at θ = 0.]
3.2 Complex conjugate eigenvalues
The eigenvalues come in a complex conjugate pair,

λ_{1,2} = α ± iω ,  α, ω ∈ R ,  (3.20)

i.e.

λ₁ = λ₂* ,  (3.21)

hence

v₁ = v₂* ,  (3.22)
and we can write v_{1,2} as complex combinations of real vectors u_{1,2},

v_{1,2} = u₁ ± i u₂ .  (3.23)

We can also substitute λ_{1,2} into the exponential and expand using Euler's formula,

e^{λ_{1,2} t} = e^{αt} (cos ωt ± i sin ωt) ,  (3.24)

then when α > 0 the solution repels from the origin, when α < 0 the solution attracts towards the origin.

• Note: near a fixed point, α is called the rate of convergence because ‖x(t)‖ ≤ m e^{−α(t − t₀)} ‖x(t₀)‖.

The general solution is,

x(t) = ( x(t), y(t) ) = c₁ v₁ e^{λ₁ t} + c₂ v₂ e^{λ₂ t}
     = (c₁v₁ + c₂v₂) e^{αt} cos ωt + i (c₁v₁ − c₂v₂) e^{αt} sin ωt ,  (3.25)

and we can choose to have real trajectories (in R², as one needs for a real-life solution) when c₁ = c₂ = c/2 ∈ R – this is equivalent to choosing a phase in ωt.
[Figure: trajectories for α from −2 to +2: spirals in for α < 0, a closed orbit (centre) for α = 0, spirals out for α > 0.]
For example, the simple harmonic oscillator,

m ẍ + k x = 0 ,  (3.27)

can be split into,

ẋ = v ,  (3.28)
v̇ = −(k/m) x = −ω² x ,  (3.29)

which defines the angular frequency, ω. The equivalent A matrix is,

A = (  0   1
      −ω²  0 ) ,  (3.30)

with

Tr(A) = 0 ,  (3.31)
∆ = ω² ,  (3.32)

hence purely imaginary eigenvalues,

λ_{1,2} = ±iω ,  (3.33)

and the fixed point is a centre.
3.3 Equal eigenvalues

• In some cases there is only one eigenvector. This is called a "degenerate node".
• When λ₁ = λ₂ there can be two eigenvectors. If λ_{1,2} ≠ 0 this is a star; if λ_{1,2} = 0 then all points are Lyapunov stable.
3.4 Summary
4 Phase Space Analysis in 2 dimensions
Any autonomous⁶ two-dimensional system can be cast into standard form,
dx/dt = P(x, y) ,
dy/dt = Q(x, y) ,  (4.1)
⁶Remember, autonomous means a system of equations of the form dx/dt = f(x(t)) rather than the general form dx/dt = g(x(t), t). See, e.g., https://en.wikipedia.org/wiki/Autonomous_system_(mathematics)#Second_order for solution methods.
[Diagram: the (∆, τ) plane. ∆ < 0: saddles; ∆ > 0, τ > 0: unstable nodes and spirals; ∆ > 0, τ < 0: stable nodes and spirals; τ = 0, ∆ > 0: centres; ∆ = 0: non-isolated fixed points; the parabola τ² − 4∆ = 0 separates nodes from spirals, with stars and degenerate nodes on it.]
Figure 1: Stability of 2D systems as a function of the Jacobian, J , at a fixed point. τ is the trace of J
and ∆ is the determinant of J .
55
Figure 2: Poincare diagram to classify phase portraits of a matrix A. Taken from an original by Freesodas at https://en.wikipedia.org/wiki/File:Stability_Diagram.png under a Creative Commons 4 licence https://creativecommons.org/licenses/by-sa/4.0/deed.en.
[Figure: a trajectory x(t) winding through the (x, y) plane, marked at times t = 1, …, 7.]
ẋ = F (x) , (4.2)
x (t = 0) = x 0 , (4.3)
then, if F(x) and all the partial derivatives ∂F/∂xi are continuous in the region D ⊂ ℝⁿ (a subset or subspace of the real space – it does not have to be the whole space), the initial value problem with x0 ∈ D has a unique solution in some interval (−τ, +τ) around t = 0.
4.1 Linearization
To proceed further we need to linearize and, because we are interested in the stability of stationary points, we want to expand near them. Linearization converts a function, e.g. a non-linear function, to a locally linear form (possibly in n dimensions⁷). Any smooth function can be converted to a locally linear function in this way.
⁷https://en.wikipedia.org/wiki/Linearization
Given a stationary point (x0, y0), for small u(t) and v(t) near the fixed point,
x(t) = x0 + u(t) ,
y(t) = y0 + v(t) ,  (4.6)
ẋ = u̇ = P(x0 + u, y0 + v) = P(x0, y0) + au + bv + O(u², v², uv) ,  (4.7)
ẏ = v̇ = Q(x0 + u, y0 + v) = Q(x0, y0) + cu + dv + O(u², v², uv) ,  (4.8)
where a, b, c, d are the partial derivatives of P and Q with respect to x and y, evaluated at the fixed point,
a = (∂P/∂x)|(x0,y0) ,  (4.9)
b = (∂P/∂y)|(x0,y0) ,  (4.10)
c = (∂Q/∂x)|(x0,y0) ,  (4.11)
and
d = (∂Q/∂y)|(x0,y0) .  (4.12)
The functions P(x0, y0) = Q(x0, y0) = 0 at the fixed point by definition. Close enough to the fixed point, u and v are small, so we can ignore the quadratic terms O(u², v², uv); hence the linearized form is,
u̇ = au + bv ,  (4.13)
v̇ = cu + dv .  (4.14)
We will see many examples of Jacobian matrices. Their more general form is the m × n matrix,
Jij = ∂fi/∂xj ,  (4.17)
where the m-dimensional vector function f takes the n-dimensional vector x as input.
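The general Jacobian (4.17) is also easy to evaluate numerically. A minimal sketch (not part of the notes; the linear test function below is a made-up example) using forward finite differences:

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Estimate J_ij = df_i/dx_j at the point x by forward differences."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = h
        J[:, j] = (np.asarray(f(x + dx)) - f0) / h
    return J

# Sanity check on a linear field (u', v') = (au + bv, cu + dv):
# the Jacobian should recover the matrix (a b; c d).
a, b, c, d = 1.0, 2.0, 3.0, 4.0
J = jacobian(lambda z: np.array([a*z[0] + b*z[1], c*z[0] + d*z[1]]), [0.0, 0.0])
print(J)
```

For a linear field the forward difference is exact up to floating-point rounding, which makes it a convenient self-test before applying the helper to a non-linear system.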
4.2 Eigenvalues and eigenvectors
If λi and vi are the eigenvalues and eigenvectors of the linearized matrix A0,
λi vi = A0 vi ,  i = 1 . . . n ,  (4.18)
then solutions close to the stationary point (x0, y0) are,
(x(t), y(t))ᵀ = (x0, y0)ᵀ + c1 v1 e^{λ1 t} + c2 v2 e^{λ2 t} .  (4.19)
Each eigenvalue λi determines the stability of (x0, y0) for trajectories along the corresponding eigenvector vi.
The eigenvalues satisfy
det(A0 − λI) = 0 ,  (4.20)
and hence
Tr(A0) = τ = a + d = λ1 + λ2 ,  (4.21)
det(A0) = ∆ = ad − bc = λ1 λ2 ,  (4.22)
where
λ1,2 = (τ ± √(τ² − 4∆)) / 2 .  (4.23)
• g is the rate of growth, i.e. how fast the population can breed
• −x/K is the death rate (related to the carrying capacity K as we saw in the logistic equation)
• f relates to competition for resources, generally caused by outside influences (e.g. popula-
tions of other species consuming food). For more details about the general form of this equa-
tion, and how to extend it to n species, see https://en.wikipedia.org/wiki/Competitive_
Lotka-Volterra_equations.
4.3 Example: Lotka-Volterra competition model
Imagine there is a world containing only rabbits and sheep that both eat the same grass. If you have
been to Australia or New Zealand, this is it. We can write the equations describing the number of
rabbits, r (t ), and the number of sheep, s (t ), as
where the term −2s or −r is the effect of the sheep and/or rabbits on the other population caused
by eating the grass: sheep eat more grass (∝ 2s) than rabbits (∝ r ).
The singular points are at P = Q = 0, thus
r = 3 − 2s or (4.27)
r = 0, (4.28)
and
s = 2 − r or (4.29)
s = 0. (4.30)
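These fixed points can be classified by the eigenvalues of the Jacobian. A sketch follows; the notes' Eqs. (4.24)–(4.26) are not shown above, so I assume the standard competitive Lotka–Volterra form ṙ = r(3 − r − 2s), ṡ = s(2 − s − r), which is the form consistent with the nullclines (4.27)–(4.30):

```python
import numpy as np

def jac(r, s):
    # Jacobian of the assumed system r' = r(3 - r - 2s), s' = s(2 - s - r)
    return np.array([[3 - 2*r - 2*s, -2*r],
                     [-s, 2 - 2*s - r]])

# The four intersections of the nullclines (4.27)-(4.30):
fixed_points = [(0, 0), (0, 2), (3, 0), (1, 1)]
for r, s in fixed_points:
    lam = np.linalg.eigvals(jac(r, s))
    print((r, s), np.sort(lam.real))
```

Under this assumed form the coexistence point (1, 1) comes out as a saddle (eigenvalues −1 ± √2), so one species always dies out – the classic outcome of this model.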
4.4 When linearization fails
[Figure: phase portrait of the rabbit–sheep system in the (r, s) plane, with the four fixed points marked and trajectories showing which population survives.]
Consider the system,
ẋ = P(x, y) = −y + ax(x² + y²) ,  (4.40)
ẏ = Q(x, y) = +x + ay(x² + y²) ,  (4.41)
then the linearization about the origin is
A = ( 0   −1
      +1   0 ) ,  (4.42)
which has eigenvalues
λ1,2 = ±i ,  (4.43)
hence the “linearized trajectories” do not depend on the parameter, a, even though the motion
clearly does.
We can gain some insight by switching to polar co-ordinates, then
r² = x² + y² ,  (4.44)
r ṙ = x ẋ + y ẏ = a(x² + y²)² = ar⁴ ,  (4.45)
θ̇ = (x ẏ − y ẋ)/r² = ([x² + axy(x² + y²)] + [y² − axy(x² + y²)])/r² = (x² + y²)/r² = 1 ,  (4.46)
hence our original equations are equivalent to,
ṙ = ar³ ,  (4.47)
θ̇ = 1 ,  (4.48)
with solutions,
1/(2r²) = 1/(2r0²) − a(t − t0) ,  (4.49)
θ = θ0 + (t − t0) .  (4.50)
This is a spiral which either expands (a > 0), shrinks (a < 0) or is stable – a circle – when a = 0 (Fig. 3).
We know
r² = x² + y² ,  2r ṙ = 2x ẋ + 2y ẏ ,  (4.51)
x = r cos θ ,  ẋ = ṙ cos θ − θ̇ r sin θ ,  y ẋ = r ṙ sin θ cos θ − θ̇ r² sin² θ ,
y = r sin θ ,  ẏ = ṙ sin θ + θ̇ r cos θ ,  x ẏ = r ṙ sin θ cos θ + θ̇ r² cos² θ ,
x ẏ − y ẋ = θ̇ r² (sin² θ + cos² θ) ,
hence
θ̇ = (x ẏ − y ẋ)/r² .  (4.52)
62
4.4 When linearization fails
Figure 3: Trajectories of the system of equations described in Section 4.4 for, from left to right,
a = −1, 0 and +1.
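A numerical spot-check of the polar result: integrate the full Cartesian system (4.40)–(4.41) with a hand-rolled fourth-order Runge–Kutta step (Sec. 1.4.3) and compare r(t) with the exact radial solution of ṙ = ar³. A sketch only:

```python
import numpy as np

def f(z, a):
    x, y = z
    r2 = x*x + y*y
    return np.array([-y + a*x*r2, x + a*y*r2])   # Eqs. (4.40)-(4.41)

def rk4_step(z, a, h):
    k1 = f(z, a); k2 = f(z + h/2*k1, a)
    k3 = f(z + h/2*k2, a); k4 = f(z + h*k3, a)
    return z + h/6*(k1 + 2*k2 + 2*k3 + k4)

def r_numeric(a, t=1.0, r0=0.5, h=1e-3):
    z = np.array([r0, 0.0])
    for _ in range(int(round(t/h))):
        z = rk4_step(z, a, h)
    return np.hypot(z[0], z[1])

def r_exact(a, t, r0=0.5):
    # Radial solution of r' = a r^3: 1/r^2 = 1/r0^2 - 2 a t
    return (r0**-2 - 2*a*t)**-0.5

print(r_numeric(-1.0), r_exact(-1.0, 1.0))  # shrinking spiral, a < 0
print(r_numeric(+1.0), r_exact(+1.0, 1.0))  # expanding spiral, a > 0
```

The two columns agree to several decimal places, confirming that the non-linear term controls the radial motion even though it is invisible to the linearization.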
5 Lyapunov Stability Theorem
We work with an n-dimensional system,
ẋ = F (x) , (5.1)
Suppose, in a neighbourhood E of a fixed point x0, there is a continuously differentiable function V(x) with V(x0) = 0 and V(x) > 0 for all x ≠ x0 in E. V(x) is called a Lyapunov function, and it has the following properties (note that V(x) > 0 means the function V is positive definite).
• If V̇ (x) < 0 for all x in E , except at the fixed point x 0 , then x 0 is asymptotically stable, i.e.
x (t ) → x 0 , for all initial conditions in E. There are no closed orbits in E .
• If V̇ (x) > 0 for all x in E , except at the fixed point x 0 , then x 0 is unstable.
Lyapunov functions are like an energy, hence the V notation (which is often used for potential
energy in physics and engineering).
Consider the fixed point at the origin of the system ẋ = −x + 4y, ẏ = −x − y³, and assume a general form of V, with a constant c that we can choose later,
V(x, y) = x² + c y² .  (5.4)
Then
dV(x, y)/dt = (∂V/∂x)(dx/dt) + (∂V/∂y)(dy/dt)  (5.5)
= 2x(−x + 4y) + 2c y(−x − y³)  (5.6)
= −2x² + 8xy − 2c xy − 2c y⁴  (5.7)
= −2x² − 2c y⁴ + (8 − 2c) xy .  (5.8)
Choosing c = 4 eliminates the indefinite cross term, giving
V(x, y) = x² + 4y² ,  (5.9)
and
V̇(x, y) = −2x² − 8y⁴ .  (5.10)
We have
We have
• V(0, 0) = 0,
• V(x, y) > 0 for all (x, y) ≠ 0,
• V̇(x, y) < 0 for all (x, y) ≠ 0,
hence the origin is asymptotically stable.
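These properties can be spot-checked numerically (a sketch; the underlying system ẋ = −x + 4y, ẏ = −x − y³ is an assumption, read off from the expansion in Eq. 5.6):

```python
import numpy as np

rng = np.random.default_rng(1)

def Vdot(x, y):
    # dV/dt along the (assumed) flow xdot = -x + 4y, ydot = -x - y**3,
    # with V = x^2 + 4y^2 as in Eq. (5.9)
    xdot, ydot = -x + 4*y, -x - y**3
    return 2*x*xdot + 8*y*ydot

pts = rng.uniform(-5, 5, size=(1000, 2))
vals = Vdot(pts[:, 0], pts[:, 1])
print(vals.max())  # negative: Vdot < 0 away from the origin
```

Random sampling is no substitute for the algebra above, but it is a quick way to catch a sign error when constructing a candidate Lyapunov function.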
6 Limit cycles and oscillators
Consider the harmonic oscillator⁸ with damping µ > 0,
ẍ = −x − µ ẋ .  (6.1)
Now introduce
ẋ = v , (6.2)
v̇ = −x − µv , (6.3)
r 2 = x2 + v 2 , (6.5)
2r r˙ = 2x ẋ + 2v v̇ , (6.6)
x v
r˙ = ẋ + v̇ , (6.7)
r r
1
= (x ẋ + v v̇) , (6.8)
r
and, taking time derivatives of x and v,
v ẋ − x v̇ = −r² θ̇ ,  (6.13)
i.e.,
θ̇ = (x v̇ − v ẋ)/r² .  (6.14)
Then substitute x and v to obtain ṙ as a function of r and θ only, using the definitions of ẋ and v̇,
ṙ = (1/r)(r cos θ · v + v[−x − µv])  (6.15)
= (v/r)(r cos θ − x − µv)  (6.16)
= (r sin θ/r)(r cos θ − r cos θ − µr sin θ)  (6.17)
= −µr sin² θ ,  (6.18)
⁸This particular oscillator is a simplified example. Really we should have m ẍ = −kx − µẋ; I chose k = 1, but the mathematics that follows is similar.
and
r² θ̇ = x v̇ − v ẋ  (6.19)
= r cos θ · (−x − µv) − r sin θ · v  (6.20)
= −r cos θ · (r cos θ + µr sin θ) − r² sin² θ  (6.21)
= −r² (cos² θ + sin² θ) − µr² sin θ cos θ  (6.22)
= −r² − (1/2) µr² sin 2θ .  (6.23)
So now we know the limiting behaviour:
µ = 0: circular motion (ṙ = 0).
µ ≫ 2: the overdamped limit: ṙ is large and negative except near θ = 0, where sin θ is small – this is our bottleneck – so θ → 0⁻ (on the negative side, because initially θ̇ < 0).
[Figure: phase-plane trajectories (x, v) of the damped oscillator for µ = 0, 0.1, 1, 2, 10 and 100.]
6.1 Van der Pol oscillator
ẍ + µ(x² − 1) ẋ + x = 0 ,  (6.26)
where µ > 0. The damping is non-linear, with a damping coefficient proportional to µ(x² − 1):
• When |x| > 1 the damping is like ordinary damping (the energy drops),
• but when |x| < 1 the damping is “negative”, i.e. there is an injection of energy.
In standard form,
v̇ = −x − vµ(x² − 1) ,  (6.27)
ẋ = v ,  (6.28)
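The limit cycle can be sketched numerically by integrating this system with RK4 (Sec. 1.4.3). Trajectories started well inside and well outside the cycle both settle to the same amplitude, close to 2 for µ = 1 (the helper names below are my own):

```python
import numpy as np

def f(z, mu):
    x, v = z
    return np.array([v, -x - mu*v*(x*x - 1)])   # Eqs. (6.27)-(6.28)

def rk4_step(z, mu, h):
    k1 = f(z, mu); k2 = f(z + h/2*k1, mu)
    k3 = f(z + h/2*k2, mu); k4 = f(z + h*k3, mu)
    return z + h/6*(k1 + 2*k2 + 2*k3 + k4)

def amplitude(mu, z0, t_transient=40.0, t_measure=8.0, h=2e-3):
    """max |x| over roughly one period, after discarding the transient."""
    z = np.array(z0, dtype=float)
    for _ in range(int(t_transient/h)):
        z = rk4_step(z, mu, h)
    amp = 0.0
    for _ in range(int(t_measure/h)):
        z = rk4_step(z, mu, h)
        amp = max(amp, abs(z[0]))
    return amp

print(amplitude(1.0, [0.01, 0.0]))  # starts inside the cycle
print(amplitude(1.0, [4.0, 0.0]))   # starts outside the cycle
```

That both starting points give the same amplitude is numerical evidence for a single attracting limit cycle.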
[Figure: x(t) of the Van der Pol oscillator for µ = 0.01, 0.1, 0.5, 1, 2 and 10, over 0 ≤ t ≤ 50.]
[Figure: x(t) for the same µ values, over 0 ≤ t ≤ 20.]
[Figure: phase portraits (x, ẋ) of the Van der Pol oscillator for µ = 0.01, 0.1, 0.5, 1, 2 and 10.]
[Figure: two further (x, ẋ) phase portraits on larger scales.]
The oscillator can also be driven by a periodic force,
ẍ + µ(x² − 1) ẋ + x = A sin ωt ,  (6.31)
6.2 Limit Cycles
[Figure: x(t) settling onto a limit cycle.]
Unstable: (some) neighbouring orbits move away from the limit cycle. This is equivalent to approaching it as t → −∞.
Half-stable: some orbits move towards the limit cycle, some move away from it.
7 Poincaré-Bendixson theorem
In two dimensions, ℝ², consider a closed, bounded region
E ⊆ ℝ² ,  (7.1)
on which F is continuously differentiable and which contains no fixed points,
∄ x ∈ E : F(x) = 0 .  (7.2)
4. There is a trajectory C of the system ẋ = F(x) that is confined in E. That is, it starts in E and remains in E forever.
Then either:
(a) C is a closed orbit, or
(b) it, and hence all trajectories in E (because C can be any trajectory in E), spirals towards a closed orbit in E as t → ∞.
To prove (4), it is often sufficient to show that all borderline trajectories enter E . This is called
a trapping region because no trajectories escape from it.
Note that the theorem does not extend to higher numbers of dimensions than two.
• Can we extend the region to infinity – then must it contain any/all closed orbits?
Only if there are any limit cycles. Consider ẋ = ẏ = 1: there are never any closed orbits. The Poincaré–Bendixson theorem can only tell us that there is a closed orbit, not that there is not one.
One can use other methods to rule out a closed orbit.
If a system has a Lyapunov function V then V̇ < 0, so any trajectory must tend towards a fixed point x* rather than orbit (a closed orbit would have, by definition, ∆V = ∮ V̇ dt = 0, where the integral is over a period of the orbit).
If a system has a gradient function a similar argument applies.
One can also use Dulac’s criterion: if ẋ = f(x) is a continuously differentiable function in a simply connected subset of the real plane, and there exists a continuously differentiable function g(x) such that ∇ · (g ẋ) has one sign throughout the subset, then there are no closed orbits in the subset. Why? If there were a closed orbit, then 0 ≠ ∬_A ∇ · (g ẋ) dA = ∮ g ẋ · n dl = 0 (use Green’s theorem⁹) – a contradiction – where A is the inside area of the orbit and dl is a small piece of the orbit; the line integral vanishes because ẋ is tangent to the orbit, so ẋ · n = 0.
• My region has trajectories flowing out, can I not just make the region a bit bigger?
Yes, you can, but the new, bigger region must have all trajectories flowing inwards through its
edges. If there is a region with outflowing trajectories, then all trajectories from inside the re-
gion could leak out, and there might be no closed orbit at all. Remember, Poincaré-Bendixson
⁹https://en.wikipedia.org/wiki/Green’s_theorem
Figure 4: Chemical structures of (a) adenosine diphosphate (ADP) and (b) fructose-6-phosphate (F6P).
cannot tell you when there is not a closed orbit, only sufficient conditions to prove that there
is.
Proof is not trivial and is beyond the scope of this course. See e.g.
https://math.byu.edu/~grant/courses/m634/f99/lec39.pdf or
http://www.staff.science.uu.nl/~kouzn101/NLDV/Lect6_7.pdf.
7.1 Glycolysis
As an example of the Poincaré-Bendixson theorem, consider the process of glycolysis, in which glucose is broken down to obtain energy. All living cells do this. In some, e.g. yeast or muscles, oscillatory behaviour has been observed. A simplified reaction network¹⁰ is,
ẋ = −x + ay + x²y ,  (7.3)
ẏ = b − ay − x²y ,  (7.4)
where x and y are the concentrations of ADP (adenosine diphosphate¹¹) and F6P (fructose-6-phosphate¹²) respectively, and a and b are reaction constants (both are positive).
Figure 5: The full glycolysis cycle: now you see why we work with a simplified set of equa-
tions! Image from Factors affecting plasmid production in Escherichia coli from a resource alloca-
tion standpoint, by Cunningham et al. (Microb. Cell Fact., 2009, https://openi.nlm.nih.gov/
detailedresult.php?img=PMC2702362_1475-2859-8-27-1&req=4). Shared under a Creative
Commons BY 2.0 licence https://creativecommons.org/licenses/by/2.0/.
• If ẏ > 0 then y < b/(a + x²), and if ẏ < 0 then y > b/(a + x²).
• Thus there is a fixed point at (x0, y0) = (b, b/(a + b²)). We’ll come back to this to address its stability.
[Figure: the nullclines ẋ = 0 and ẏ = 0 in the (x, y) plane, meeting the y-axis at y = b/a, with the signs of ẋ and ẏ marked in each region; the x-axis is marked at x = b and x = b(1 + 1/a).]
• On the ẋ = 0 nullcline (away from the fixed point), ẏ is non-zero, and vice versa.
We can construct a subspace around the region of interest, which contains the fixed point, into
which we know the flow is inwards:
• Along the y-axis we have ẋ = ay > 0 as long as y > 0. We want to go as far as we can until ẏ changes sign (ẏ = b > 0 at the origin). So take a line from (0, 0) to where the ẏ nullcline intersects the y-axis (i.e. where ẏ = 0); this is at y = b/a, hence the first edge of our subspace is from (0, 0) to (0, b/a).
• Along the x-axis (y = 0) we have
ẋ = −x ,  (7.5)
ẏ = b ,  (7.6)
so trajectories cross it pointing upwards, into the region.
• The nullcline ẏ = 0 has a maximum at (0, b/a), and in the whole region above the nullcline ẏ < 0.
Along the straight line y = b/a we have ẏ = −x² b/a, which is guaranteed to point down because ẏ < 0.
• We then require the closing side from y = b/a to the x-axis, but we must be careful that tra-
jectories point inwards.
If we choose a vertical line – the simplest option – we require ẋ < 0 along it (this is what “inwards” means, i.e. trajectories must cross the line towards the origin). We cannot choose this because of the x²y term in ẋ, which could be arbitrarily large: we would require ẋ < 0 for any y along the line (and note that y > 0).
Instead, we will choose a straight line to join y = b/a to the x-axis, y = mx + c, where m
is the gradient and c the intercept.
The condition ẋ < 0 is difficult to guarantee along this line (at y = b/a we have ẋ > 0, for example). Instead we can require the weaker condition ẋ + ẏ < 0 along the line, which is enough for trajectories to cross a line of slope −1 inwards; note that above the ẏ = 0 nullcline – where we are – we already have ẏ < 0. This is far simpler to calculate,
ẋ + ẏ = (−x + ay + x²y) + (b − ay − x²y)  (7.7)
= b − x < 0  for x > b .  (7.8)
So we can close the top of our region by taking a line from (0, b/a) to (b, b/a), because along it ẏ < 0 (even though ẋ > 0 there).
Note that below the ẏ = 0 nullcline we are also below the ẋ = 0 nullcline so ẋ < 0, as required
by our original condition.
Beyond x = b we have ẋ + ẏ < 0, i.e.
dx/dt < −dy/dt ,  (7.9)
hence
dx < −dy ,  (7.10)
along the trajectories. Thus trajectories that are just above a line with slope −1 will pass downwards through it, as required.
To think of this another way: the trajectories have dy < −dx, i.e. more negative than the line, which has dy = −dx exactly, so the trajectories “descend” faster than the line and will cross it.
We thus choose a line with slope −1 to join (b, b/a) to the x-axis. Given y = −x + c we have −b + c = b/a, so c = b/a + b = b(1 + 1/a), and this line is
y = −x + b(1 + 1/a) ,  (7.11)
• We have now closed our region from (0, 0) to (0, b/a) to (b, b/a) to (b(1 + 1/a), 0) and back to (0, 0), and all trajectories flow into the region.
[Figure: the trapping region in the (x, y) plane, together with the nullclines ẋ = 0 and ẏ = 0 and example trajectories.]
• The fixed point is stable if the trace of the Jacobian is negative (Sec. 3.4).
• We require the fixed point to be unstable so that all trajectories flow out of it and then we can
construct a trapping region around it, so the trace of the Jacobian should be positive.
• The Jacobian is
A = ( −1 + 2xy    a + x²
      −2xy      −(a + x²) ) |(x0,y0)
  = ( −1 + 2b²/(a + b²)    a + b²
      −2b²/(a + b²)      −(a + b²) ) ,  (7.12)
hence
τ = Tr(A) = −1 + 2b²/(a + b²) − (a + b²) = −[b⁴ + b²(2a − 1) + a(1 + a)]/(a + b²) ,  (7.13)
∆ = det(A) = [1 − 2b²/(a + b²)](a + b²) + 2b² = a + b² − 2b² + 2b² = a + b² > 0 .  (7.14)
• The trace changes sign at b = b1 and b = b2, the two positive roots of b⁴ + b²(2a − 1) + a(1 + a) = 0. If b < b1 or b > b2 then τ < 0: the fixed point is an attractive spiral or centre, and there is no limit cycle.
• If b1 < b < b2 then the fixed point is unstable, and we can construct the trapping region.
• We require a ≤ 1/8 (so that b1,2 are real). In the limiting case a = 1/8, b² = 1/2 − a = 4/8 − 1/8 = 3/8 = 0.375, so b = √(3/8) ≈ 0.61. So let a = 0.1 and b = 0.5, so b/a = 5, and we’re roughly in the middle of the parameter space that allows a trapping region.
[Figure: b² vs a. The region between the two roots b1² and b2², which meet at (a, b²) = (1/8, 3/8), is where the fixed point is unstable and a limit cycle exists.]
We have now successfully created a trapping region around the fixed point, with the require-
ment that b 1 < b < b 2 for there to be a limit cycle (which is probably, in biological terms, what is
desired). We have said quite a lot about the system without solving the equations!
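The window b1 < b < b2 can also be sketched in code from (7.13), treating the numerator of −τ as a quadratic in ρ = b² (a minimal check, using the parameter choice a = 0.1, b = 0.5 from above):

```python
import numpy as np

a = 0.1

# tau = 0 when b^4 + b^2 (2a - 1) + a(1 + a) = 0: a quadratic in rho = b^2
disc = (2*a - 1)**2 - 4*a*(1 + a)   # = 1 - 8a, so real roots need a <= 1/8
rho = ((1 - 2*a) + np.array([-1.0, 1.0])*np.sqrt(disc)) / 2
b1, b2 = np.sqrt(rho)

def tau(b):
    # Trace of the Jacobian, Eq. (7.13)
    return -(b**4 + b**2*(2*a - 1) + a*(1 + a)) / (a + b**2)

print(b1, b2)    # the fixed point is unstable (tau > 0) for b1 < b < b2
print(tau(0.5))  # b = 0.5 sits inside the window
```

This confirms that the chosen (a, b) = (0.1, 0.5) lies inside the instability window, as claimed.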
8 1D bifurcations in 2D and ghosts
In one-dimensional systems, bifurcations happen at fixed points. The same happens in a two-dimensional phase space as we approach a nullcline: as one dimension approaches the nullcline, the other dimension is independent. However, in two dimensions, the nature of the bifurcations can change when the nullclines intersect.
The prototypes (also called normal forms) are the following, in which x and y could be rotated in 2D. Because ẏ = −y, all critical points are on the x-axis (where y = 0). To analyse these, consider the nullclines in the x–y phase space. Where they cross there are fixed points; where they don’t, there are none (Fig. 6).
The directions of the trajectories can be found by considering ẋ vs ẏ. Where ẋ > ẏ we have,
dx/dt > dy/dt ,  (8.1)
hence
dx > dy ,  (8.2)
and along the nullcline ẏ = 0, where ẋ > ẏ (i.e. the ẋ curve is above the ẏ curve), we have dx > dy = 0, i.e. trajectory vectors are in the positive direction. Similarly, when ẋ < ẏ along the ẏ = 0 nullcline, trajectory vectors are in the negative direction. The same argument can be applied swapping x and y to obtain directions along the ẋ nullcline (Fig. 7).
ẋ = µ ± x² ,  (8.3)
ẏ = −y .  (8.4)
Where ẋ = ẏ = 0 we have
ẋ = 0 = µ ± x² ,  (8.5)
ẏ = 0 = −y ,  (8.6)
hence y = 0 and µ ± x² = 0. Consider the case µ + x² = 0: then x = ±√(−µ), hence we require µ < 0 for there to be fixed points. Thus the bifurcation happens at µ = 0. At µ = 0 the two fixed points merge into one (with a zero eigenvalue in the x-direction), and as µ increases above zero there is a “ghost” region through which trajectories take a long time to pass.
ẋ = µx − x 2 , (8.7)
ẏ = −y . (8.8)
8.2 Transcritical bifurcation
Figure 6: Nullclines of two functions, ẋ = x + b and ẏ = x 2 , for b = +1 (bottom), −0.25 (middle) and
−1 (top). Where the nullclines cross there are fixed points. When b = −1 there are no fixed points,
there is one when b = −0.25 (this is the bifurcation) and there are two when b > −0.25. To find the
bifurcation, set ẋ = ẏ and x = y because both the tangents to trajectories and the trajectories must
be equal at this point.
Figure 7: a) Nullclines with associated trajectory arrows on them, found by considering ẋ vs ẏ. For
example, when ẋ > ẏ (where the red line is above the blue line) we have d x > d y so on the ẏ = 0
nullcline, d x > 0 and trajectories point in the positive x direction. (b) These trajectories can then
be used to determine the stability of the fixed points.
[Figure: nullclines ẏ = 0 and ẋ = 0 in the (x, y) plane, with the fixed points and their stability marked.]
Figure 9: Saddle node bifurcation with µ = +1 (above) and µ = 0 (below). When changing µ from
0 to 1, the “slow” region near the location of the fixed point (in this case at the origin) – called the
“ghost” region – remains. While there is no fixed point when µ = +1, the time taken to go through the ghost region is long (T ∼ 1/√µ, cf. Section 2.5.1).
8.3 Supercritical pitchfork bifurcation
ẋ = µx − x 3 , (8.9)
ẏ = −y . (8.10)
ẋ = 0 implies x = 0 or µ = x², i.e. x = ±√µ. If µ > 0 we thus have three fixed points; if µ < 0 we have one. The bifurcation is at µ = 0.
ẋ = µx + x³ ,  (8.11)
ẏ = −y .  (8.12)
ẋ = 0 implies x = 0 or µ = −x², i.e. x = ±√(−µ). If µ < 0 there are three fixed points; if µ > 0 there is one. The bifurcation is at µ = 0.
8.4 Subcritical pitchfork bifurcation
9 Hopf Bifurcations
Everything we have discussed so far relates to real eigenvalues, but in 2D the eigenvalues can also be complex conjugate pairs, i.e. they have imaginary parts. Trajectories then rotate around the fixed points, i.e. they are spirals, whose direction (outward or inward) depends on the real part of the eigenvalue (unstable if it is positive, stable if it is negative). In some cases the sign of the real part of the eigenvalues can change, which leads to another form of bifurcation we have not yet considered: the Hopf bifurcation. At a Hopf bifurcation limit cycles can appear or disappear.
Remember that stability depends on the real parts of eigenvalues, ℜ (λ),
so we have a stable fixed point if ℜ (λ) < 0 and unstable if ℜ (λ) > 0.
If we have two real fixed points which merge, i.e. ∆ changes sign, this is a saddle-node, tran-
scritical or pitchfork bifurcation. If we have τ change sign when ∆ > 0 we have a more interesting
case, because we have two complex conjugate eigenvalues. The imaginary part gives the “rotation
rate” of the resulting spiral trajectory, and the real part gives the stability (as above, where ℜ (λ) = 0
implies a circular trajectory, i.e. r˙ = 0). The change from stable to unstable is called a Hopf bifurc-
ation.
Further reading: https://www.math.colostate.edu/~shipman/47/volume3b2011/M640_
MunozAlicea.pdf.
ṙ = µr − r³ ,
θ̇ = ω + br² .  (9.2)
We have ṙ = 0 when r = 0 or µ = r², i.e. r = √µ, so there is a limit cycle when µ is positive (Figs. 13 and 14). When µ < 0 we have a stable spiral (ṙ < 0).
To show the eigenvalues cross ℜ(λ) = 0 with non-zero ℑ(λ), we convert to Cartesians using x = r cos θ and y = r sin θ, then
ẋ = ṙ cos θ + r θ̇ sin θ = µr cos θ − r³ cos θ + r(ω + br²) sin θ .
Similarly,
ẏ = ṙ sin θ − r θ̇ cos θ = µr sin θ − r³ sin θ − r(ω + br²) cos θ .
9.1 Supercritical Hopf bifurcation
Figure 14: r vs µ in the supercritical Hopf bifurcation (Eq. 9.2). Compare to the supercritical pitch-
fork bifurcation (Fig. 11).
Figure 15: Supercritical trajectories of Eqs. 9.2 with µ = −1 and µ = +1, with the latter having a limit cycle at r = √µ = 1.
Near the critical point at r = r* = 0, where we can neglect the O(r³) terms, we have,
ẋ ≈ µx + ωy ,  (9.5)
ẏ ≈ µy − ωx ,  (9.6)
J = ( µ   ω
      −ω  µ ) ,  (9.7)
with eigenvalues from det(J − λI) = (µ − λ)² + ω² = 0, so µ − λ = ±iω and
λ = µ ± iω .  (9.8)
Hence as µ becomes more positive, the eigenvalues cross the imaginary axis.
[Figure: the conjugate pair of eigenvalues in the complex λ plane, moving from ℜ(λ) < 0 to ℜ(λ) > 0 as µ increases through zero.]
∆ = µ² + ω² ,  (9.9)
τ = 2µ .  (9.10)
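A quick numerical confirmation of (9.7)–(9.8) (a sketch, with ω = 1 an arbitrary choice):

```python
import numpy as np

omega = 1.0

def eigenvalues(mu):
    J = np.array([[mu, omega],
                  [-omega, mu]])   # Eq. (9.7)
    return np.linalg.eigvals(J)

for mu in (-0.5, 0.0, 0.5):
    lam = eigenvalues(mu)
    print(mu, lam)   # real part mu, imaginary part +/- omega
```

As µ passes through zero the real parts of the pair change sign together while the imaginary parts stay at ±ω, which is exactly the Hopf crossing described above.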
9.2 Subcritical Hopf bifurcation
ṙ = µr + r³ ,
θ̇ = ω + br² .  (9.11)
There is a limit cycle at µr = −r³, i.e. r = 0 or r² = −µ, i.e. r = ±√(−µ) when µ < 0; thus at µ = 0 there is a bifurcation:
• µ > 0: there is only one fixed point, because √(−µ) is imaginary,
• µ < 0: there is another fixed point at r = +√(−µ) (the other, at r = −√(−µ), is unphysical).
ẋ ≈ µx + ωy ,  (9.12)
ẏ ≈ µy − ωx ,  (9.13)
J = ( µ   ω
      −ω  µ ) ,  (9.14)
with eigenvalues from det(J − λI) = (µ − λ)² + ω² = 0, so µ − λ = ±iω and
λ = µ ± iω .  (9.15)
Figure 17: r vs µ in the subcritical Hopf bifurcation (Eq. 9.11). Compare to the subcritical pitchfork bifurcation (Fig. 12).
Figure 18: Subcritical trajectories of Eqs. 9.11 with µ = −1 and µ = +1, with the former having an (unstable) limit cycle at r = √(−µ) = 1.
9.3 Saddle node bifurcation of cycles
ṙ = µr + r³ − r⁵ ,
θ̇ = ω + br² .  (9.16)
The fixed points of the radial equation satisfy
µr + r³ − r⁵ = −r(−µ − r² + r⁴) = 0 ,  (9.17)
so either r = 0 or
µ = r⁴ − r² = r²(r² − 1) ,  (9.18)
and then, writing ρ = r²,
r⁴ − r² − µ = ρ² − ρ − µ = 0 .  (9.19)
Therefore,
ρ = (1 ± √(1 + 4µ))/2 = (1 ± δ)/2 ,  (9.20)
where δ = √(1 + 4µ), and then,
r = ±√((1 ± δ)/2) ,  (9.21)
or in full,
r = +√((1 + δ)/2) , +√((1 − δ)/2) , −√((1 + δ)/2) , −√((1 − δ)/2) .  (9.22)
If 1 + 4µ < 0 we have imaginary δ so no extra real roots at all, so the only non-zero extra roots have
µ > µc = −1/4. (By “extra” I mean in addition to the trivial root at r = 0.) When µ > µc , we have:
[Figure: ṙ vs r (Eq. 9.16) for µ = −2, −0.25, −0.12, 0, 0.25 and 1, with the non-zero roots of ṙ marked.]
Figure 20: r vs µ in the saddle-node bifurcation of cycles (Eq. 9.16); r < 0 is unphysical. Compare to the subcritical pitchfork bifurcation (Fig. 12). The system exhibits hysteresis: when µ < −0.25 the centre is a stable spiral. When µ increases and becomes positive the system jumps to the stable r = √((1 + δ)/2) solution (where δ = √(1 + 4µ)). If µ is then decreased, it stays on the stable branch as long as this exists, i.e. when µc = −1/4 < µ < 0, before jumping back to r = 0 catastrophically.
Figure 21: Hopf saddle bifurcation trajectories of Eqs. 9.16 with µc < µ = −0.12 < 0. The two limit
cycles at r ≈ 0.928 and r ≈ 0.373 are marked as black lines.
10 Fractals
Definition .
A fractal is a geometrical shape that shows fine structure at arbitrarily small scales and displays
self-similarity under magnification. That is, a fractal is an image repeated at ever reduced scales.
or
A fractal is a subset of a Euclidean space for which the Hausdorff dimension strictly exceeds
the topological dimension. Fractals appear (nearly) the same at different levels.
• A fractal line winds through space: it has a dimension, D, where 1 < D < 2
• Video: https://www.youtube.com/watch?v=VxYcWn6AQsg
• https://www.youtube.com/watch?v=GKYG__-HATI
• https://www.youtube.com/watch?v=xLgaoorsi9U
10.2 Cantor Set
[Figure 22: the first steps of the Cantor-thirds set construction, each row removing the middle third of every segment of the row above.]
• Step 0: Start with the unit interval,
C0 = [0, 1] .  (10.1)
• Step 1: Split the line into three parts and remove the central one. We now have,
C1 = [0, 1/3] ∪ [2/3, 1] .  (10.2)
• Step 3: Repeat,
C3 = [0, 1/27] ∪ [2/27, 1/9] ∪ [2/9, 7/27] ∪ [8/27, 1/3] ∪ [2/3, 19/27] ∪ [20/27, 7/9] ∪ [8/9, 25/27] ∪ [26/27, 1] .  (10.4)
• The length of each segment is 1, then 1/3, then 1/9, etc., so call this ε,
ε = 1/3^n .  (10.6)
¹⁴C is the intersection of all the sets, not the union: think vertically rather than horizontally. The sum, n = 0 . . . ∞, is vertical in Fig. 22.
• The number of segments at step n is
N = 2^n ,  (10.7)
and, since n = −ln ε / ln 3,
N = 2^(−ln ε/ln 3) .  (10.9)
• The measure of the set is zero. This means that the set can be covered by intervals whose
total length is arbitrarily small.
At step n we have N(n) segments of length ε(n), so the total length is L(n) = N(n) ε(n) = (2/3)^n, which shrinks as n increases. In the limit n → ∞, for any δ > 0 we can choose n such that L(n) < δ, and the segments of Cn cover the set.
• C is totally disconnected
– Given two points x and y in C where |x − y| > 1/3^k, x and y are in two different intervals of Ck (we can choose k such that this is true). There exists a point z in between x and y which is not in Ck and hence not in C. The set is thus disconnected.
– Following on from the previous point, given that x ∈ C, x must also lie in one of the 2^k (k ∈ ℕ) intervals that make up Ck.
– There is thus an end-point, yk, in Ck which has |x − yk| < 1/3^k.
– This end-point is in C (end-points survive the iterative process), thus we can choose k such that yk is arbitrarily close to x.
• The set C is closed. Each Cn is a finite union of closed sets, e.g. C1 = [0, 1/3] ∪ [2/3, 1], so each Cn is closed, and C – an intersection of closed sets – is therefore closed.
10.3 Ternary expansion characterization of the Cantor-thirds set
• The set C is not “dense” in the interval [0, 1]. Loosely, this means the points in the set are not
tightly clustered15 .
Assume there is an interval I = [a, b] in C = C∞, where a < b. Then I ⊂ Cn as well, for all n < ∞; but the total length |Cn| → 0 as n → ∞ and |I| ≤ |Cn|, so |I| = b − a = 0, a contradiction: the set contains no interval and is not dense.
• C is uncountable¹⁶.
To be countable¹⁷, the set must have the same number of elements as (a subset of) the natural numbers.
Any number in [0, 1] can be written as a sum over digits; for example, in decimal,
p = 1/√2 = 7/10 + 0/10² + 7/10³ + 1/10⁴ + . . . ,  (10.13)
i.e. β1 = 7, β2 = 0, β3 = 7, β4 = 1, etc. Simply placing these numbers in a list gives us the decimal (base-10) number.
Thus x can be written in ternary (base 3)¹⁸ as,
x = Σ_{k=1}^{∞} αk(x)/3^k ,  (10.14)
where αk(x) = 0, 1 or 2. Now for the trick: we associate each of the three sections at each step of the construction with a digit, 0, 1 or 2, and note that it is section 1 that is always removed. Thus, any number in the Cantor-thirds set cannot have a 1 in its ternary expansion, only 0s and 2s,
C = { x ∈ [0, 1] : x = Σ_{k=1}^{∞} αk/3^k , αk = 0, 2 } .  (10.15)
Now consider dividing αk by two: αk/2 = 0 or 1. These are the digits of a binary representation. We can then define, for any x in the Cantor-thirds set C, a function f(x) that maps onto the range [0, 1] in binary,
f(x) = Σ_{k=1}^{∞} (αk(x)/2)/2^k = (1/2) Σ_{k=1}^{∞} αk(x)/2^k .  (10.16)
Since f maps C onto the whole (uncountable) interval [0, 1], C itself must be uncountable.
¹⁵https://en.wikipedia.org/wiki/Nowhere_dense_set
¹⁶https://www.youtube.com/watch?v=AYj80i0eQHo and https://www.youtube.com/watch?v=9O9aTxtBT80
¹⁷http://www.math.umaine.edu/~farlow/sec25.pdf
¹⁸https://en.wikipedia.org/wiki/Ternary_numeral_system
10.4 Fractal examples
Step n   ε(n)   N(n)
n = 0    1      1
n = 1    1/3    4
n = 2    1/9    16
n = 3    1/27   64
For the von Koch curve, the segment length and number of segments at step n are
ε(n) = 1/3^n ,  (10.18)
N(n) = 4^n .  (10.19)
• The von Koch curve is continuous, bounded and has infinite length.
[Figure: construction steps n = 0 to 3.]
[Figure: construction steps n = 0 to 3.]
One can think of this as a map such that we start with 1 and repeatedly apply,
0 → ( 0 0 0
      0 0 0
      0 0 0 ) ,  (10.21)
1 → ( 1 1 1
      1 0 1
      1 1 1 ) .  (10.22)
The area is ε² × N(ε), where ε = (1/3)^n and N(ε) = 8^n, hence,
A(n) = ε² N(ε) = (8/9)^n ,  (10.23)
which tends to zero as n → ∞: the carpet has zero area.
10.5 Fractal dimension
[Figure: construction steps n = 0 to 4.]
Figure 24: Fractal antenna in a mobile phone using Sierpiński’s carpet. The use of a fractal antenna maximizes the perimeter of the antenna, and the antenna’s self-similarity allows it to function well over a range of frequencies (normal antennas function at a single frequency only). See also https://en.wikipedia.org/wiki/Fractal_antenna and https://www.researchgate.net/publication/255633073_Self-similarity_and_the_geometric_requirements_for_frequency_independence_in_antennae.
We can use the self-similarity of a fractal to define its self-similarity dimension,

D = ln Z / ln M ,  (10.25)

where:
• the numerator contains Z, the number of self-similar pieces the fractal is divided into;
• the denominator contains M, the magnification factor required to get back to the original.
Example.
Consider a line broken into N equal pieces, Z = N. To get back to the original line we must magnify by M = N. Thus D = ln N / ln N = 1. This is the same as the dimension of a line.
Example.
Consider a square. If we divide this into N 2 smaller squares, then Z = N 2 . To zoom in to a square
that looks like our original square, we require M = N . Hence D = ln N 2 / ln N = 2 ln N / ln N = 2.
This is the same dimension as an area.
Example.
Consider the Sierpiński triangle. Each iteration divides a triangle into three self-similar triangles,
Z = 3. The length of the side of each triangle has been reduced by a factor 2. Hence D = ln 3/ ln 2 ≈
1.58.
Example.
The Cantor set is, at each step, divided into two self-similar pieces, Z = 2. To recover the original,
we zoom by a factor 3. Hence D = ln 2/ ln 3 ≈ 0.63.
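The four worked examples can be checked directly from Eq. 10.25 (an illustrative sketch):

```python
import math

def selfsim_dim(Z, M):
    """Self-similarity dimension D = ln Z / ln M (Eq. 10.25)."""
    return math.log(Z) / math.log(M)

print(selfsim_dim(4, 4))  # line in 4 pieces, magnify by 4: D = 1.0
print(selfsim_dim(9, 3))  # square in 3x3 pieces, magnify by 3: D = 2.0
print(selfsim_dim(3, 2))  # Sierpinski triangle: D ~ 1.585
print(selfsim_dim(2, 3))  # Cantor-thirds set: D ~ 0.631
```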
Figure 25: Left: Geography cone shells (Conus geographus, a type of snail). Right: Sierpiński tri-
angle fractal.
Not all fractals are self-similar, so we need a "better" way. We can define the "box dimension", also called the "capacity dimension", as follows. Suppose we have a collection of boxes of size ε × ε in the plane, hence each of area ε².
• Consider a line of length L: to cover this line we need N(ε) = L/ε boxes.
• In general, to cover something we need N(ε) ∝ ε^{−k} boxes, where k is the box dimension.
• Rearranging this equation and solving for k, we see that for some measure of the system (e.g. its area or volume) B = L^k,

N(ε) = B/ε^k = L^k ε^{−k}  (10.26)
ln N(ε) = k ln L − k ln ε  (10.27)
k (ln L − ln ε) = ln N(ε)  (10.28)
k = ln N(ε) / (ln L − ln ε) .  (10.29)

In the limit ε → 0, the (constant) ln L is negligible because |ln ε| → ∞, hence the definition of the box (or capacity) dimension,

D_c = k = − lim_{ε→0} ln N(ε) / ln ε .  (10.30)
More formally, let N(ε) be the minimum number of D-dimensional boxes of side-length ε needed to cover the object; then for a D-dimensional cube with sides of length L,

N(ε) = total volume / hypercube volume = L^D / ε^D ,  (10.31)

or

D = ln N(ε) / (ln L + ln ε^{−1}) .  (10.32)

In the limit ε → 0, we have ln ε^{−1} ≫ ln L and we define the box dimension, D_c,

D_c = lim_{ε→0} ln N(ε) / ln ε^{−1} = − lim_{ε→0} ln N(ε) / ln ε .  (10.33)

For the Sierpiński carpet, N(ε) = 8^n when ε = 1/3^n, so

D_c = lim_{n→∞} ln(8^n) / ln(3^n) = ln 8 / ln 3 ≈ 1.89 .

10.6 Pointwise and correlation dimension
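In practice D_c is estimated from the slope of ln N(ε) against ln(1/ε). A sketch, using the exact covering counts for the Cantor set and the Sierpiński carpet rather than boxes counted on an image:

```python
import math

def box_dimension(eps_list, N_list):
    """Least-squares slope of ln N(eps) versus ln(1/eps): the box dimension D_c."""
    xs = [math.log(1 / e) for e in eps_list]
    ys = [math.log(N) for N in N_list]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

levels = range(1, 10)
cantor = box_dimension([3.0 ** -n for n in levels], [2 ** n for n in levels])
carpet = box_dimension([3.0 ** -n for n in levels], [8 ** n for n in levels])
print(cantor, carpet)  # ln2/ln3 ~ 0.631 and ln8/ln3 ~ 1.893
```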
Let N(x, r) be the number of points of the attractor within a distance r of a point x on it; then

N(x, r) ∝ r^d ,  (10.34)

where d is the pointwise dimension. Note that N can depend very much on the location, x, hence instead define an average (in some way) over many different x,

C(r) ∝ r^d ,  (10.35)

so that

ln C(r) = a + d ln r ,  (10.36)

and measure the slope by experiment. This works if we have sufficient resolution such that, if a is the minimum separation among the sample of points and S is the size of the attractor,

r ≫ a ,  (10.37)
r ≪ S .  (10.38)

Generally,

d ≤ D_c .  (10.39)

The number d is a good representation of the number of degrees of freedom required to parameterize the fractal.

10.7 Cantor Rings

Consider a ring (annulus) with inner radius 1 and outer radius 2, and repeatedly remove the middle third of each remaining ring:
• n = 1: remove the middle third between 4/3 and 5/3; now we have two rings, from 1 to 1 + 1/3 = 4/3 and from 1 + 2/3 = 5/3 to 2, i.e. ring width ε = 1/3 and inner radii r1 = 1 and r2 = 5/3.
• n = 2: inner radii are 1, 11/9, 5/3 and 17/9.
• n = 3: inner radii are 1, 29/27, 11/9, 35/27, 5/3, 47/27, 17/9 and 53/27.
• n = 4: inner radii are 1, 83/81, 29/27, 89/81, 11/9, 101/81, 35/27, 107/81, 5/3, 137/81, 47/27, 143/81, 17/9, 155/81, 53/27 and 161/81.
• . . . etc. . . .
• At step n we have N_R rings,

N_R(n) = 2^n ,  (10.40)

each of width

ε(n) = 1/3^n .  (10.41)
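A sketch that reproduces the listed radii exactly, using rational arithmetic (the recursive middle-third removal is taken from the bullets above):

```python
from fractions import Fraction

def inner_radii(n):
    """Inner radii of the Cantor rings after n middle-third removals of [1, 2]."""
    intervals = [(Fraction(1), Fraction(2))]
    for _ in range(n):
        nxt = []
        for lo, hi in intervals:
            third = (hi - lo) / 3
            nxt += [(lo, lo + third), (hi - third, hi)]
        intervals = nxt
    return [lo for lo, _ in intervals]

print(len(inner_radii(3)), inner_radii(3))  # 8 rings, radii matching the list above
```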
[Figure: the n = 1 Cantor rings, with radii 1, 4/3, 5/3 and 2 marked.]
• Note that: along the x-axis we have the standard middle-thirds Cantor set.
The total area of the rings of width ε is

A(ε) = Σ_{i=1}^{N_R} π[(r_i + ε)² − r_i²]
     = Σ_{i=1}^{N_R} π[ε² + 2εr_i]
     = πε² (Σ_{i=1}^{N_R} 1) + 2πε (Σ_{i=1}^{N_R} r_i) ,  (10.42)
where x_i = r_i − 1 are the usual Cantor-thirds set distances from the origin. One can show that Σ_{i=1}^{N_R} r_i = N_R (3/2 − ε/2), the derivation of which is left as an exercise for the student (example sheet 4).
• Hence

A(ε) = πε² (Σ 1) + 2πε (Σ r_i)
     = πε² N_R + 2πε N_R (3/2 − ε/2)
     = πε² N_R + 3πε N_R − πε² N_R
     = 3πε N_R .  (10.45)

• The number of ε × ε boxes needed to cover the rings is then

N(ε) ≈ A(ε)/ε² = 3πN_R/ε = 3π · 2^n/(1/3)^n = 3π · 6^n .  (10.46)

• Hence

D_c = − lim_{ε→0} ln N(ε)/ln ε = lim_{n→∞} [ln(3π) + n ln 6]/(n ln 3) = ln 6/ln 3 ≈ 1.6309 .  (10.47)

• Note that this is just D_c(C) + 1, where C is the Cantor-thirds set. The extra dimension is the radial dimension.
10.8 Barnsley Fern
Table 1: coefficients of the four affine maps²¹ of the Barnsley fern, with the probability p of choosing each and the portion of the fern it generates.

w    a      b      c      d     e    f     p     Portion generated
f1   0      0      0      0.16  0    0     0.01  Stem
f2   0.85   0.04   −0.04  0.85  0    1.60  0.85  Successively smaller leaflets
f3   0.20   −0.26  0.23   0.22  0    1.60  0.07  Largest left-hand leaflet
f4   −0.15  0.28   0.26   0.24  0    0.44  0.07  Largest right-hand leaflet

Each map is

f(x, y) = ( a  b ) ( x ) + ( e )
          ( c  d ) ( y )   ( f )    (10.48)
with coefficients as in Table 1, and initial point (0, 0). There are also “mutant” varieties!
The following Python code is adapted from Wikipedia.

import random
import matplotlib.pyplot as plt

x, y = 0.0, 0.0
X, Y = [x], [y]
for n in range(100000):
    r = random.uniform(0, 100)
    if r < 1.0:                    # f1: stem
        x, y = 0.0, 0.16 * y
    elif r < 86.0:                 # f2: successively smaller leaflets
        x, y = 0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 1.6
    elif r < 93.0:                 # f3: largest left-hand leaflet
        x, y = 0.2 * x - 0.26 * y, 0.23 * x + 0.22 * y + 1.6
    else:                          # f4: largest right-hand leaflet
        x, y = -0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44
    X.append(x)
    Y.append(y)

plt.scatter(X, Y, s=0.1, color='green')
plt.show()
21 https://en.wikipedia.org/wiki/Affine_transformation
10.9 Mandelbrot Set
The Mandelbrot set is the set of complex numbers c for which the map,

f(z) = z² + c ,  (10.49)

does not diverge when iterated from z = 0 (Fig. 27). It is closely related to the Julia set. See e.g. http://www.karlsims.com/julia.html and https://www.youtube.com/watch?v=mg4bp7G0D3s
Figure 27: The Mandelbrot Set. Image created by Wolfgang Beyer with the program Ultra Fractal 3.
CC BY-SA 3.0 https://commons.wikimedia.org/w/index.php?curid=321973
11 Evolution of Volumes in Phase Space
We have learned about how trajectories move in phase space, i.e. how points move, but how do we
generalize this idea to volumes in a three-dimensional system, ẋ = f (x)? Motion that is dissipative,
for example, involves phase space contraction, e.g. all trajectories will tend towards zero.
• In general we can consider an arbitrary closed surface S(t ) surrounding a volume V (t ) in
phase space.
• If

(1/V) dV(t)/dt  { < 0 , phase volumes contract: dissipation (energy sinks) ,
                { = 0 , conservative system ,   (11.4)
                { > 0 , phase volumes expand: energy sources .
• Example: the damped pendulum,

θ̇ = ω ,  (11.6)
ω̇ = −γω − Ω² sin θ .  (11.7)

• Consider now,

∇·f = ∂θ̇/∂θ + ∂ω̇/∂ω = −γ .  (11.8)
• The phase volume contracts which is consistent with energy dissipation.
11.2 Limit cycle
• Consider the system,

ṙ = r − r² ,  (11.9)
θ̇ = 1 ,  (11.10)

then

∇·f = ∂ṙ/∂r + ∂θ̇/∂θ = 1 − 2r  { > 0 , r < 1/2 ; = 0 , r = 1/2 ; < 0 , r > 1/2 } .  (11.11)
• From inside, they expand towards r = 1/2 and then contract onto the limit cycle.
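A minimal numerical sketch (forward-Euler, step size our choice): every positive starting radius approaches the limit cycle at r = 1:

```python
def integrate_r(r0, dt=1e-3, t_max=20.0):
    """Forward-Euler integration of r' = r - r**2."""
    r = r0
    for _ in range(int(t_max / dt)):
        r += dt * (r - r * r)
    return r

print(integrate_r(0.2), integrate_r(2.0))  # both approach 1.0
```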
12 Attractors
Classical attractors include fixed points and limit cycles. We can generalise the idea of an attractor: in an n-dimensional flow, an attractor is a set A such that:
• A is invariant with the flow (i.e. it does not change with time)
• A has and is contained in a basin of attraction B . Any trajectory starting in B will eventually
approach A as t → ∞.
• A has dimension d < n: usually phase space blobs are squeezed onto the lower dimension
attractor, A, by contraction of the phase space. Dissipation usually generates an attractor.
12.1 2-dimensional torus

Consider two angles evolving on a torus,

θ̇1 = f1(θ1, θ2) ,
θ̇2 = f2(θ1, θ2) ,  (12.2)

where f1 and f2 are periodic functions. Map θ1 around the long circumference and θ2 around the short circumference of the torus²². We will investigate a simple case,

θ̇1 = ω1 ,
θ̇2 = ω2 ,  (12.3)

which is shown in Fig. 28. We can unroll the trajectories onto a flat surface in θ1 and θ2.
• When ω1 /ω2 = n/m is rational (i.e. n, m ∈ N), we have closed trajectories (Fig. 29).
• When ω1 /ω2 is irrational the trajectory completely fills the plane: it never closes (Fig. 30).
The trajectories remain parallel so never cross. Neighbouring trajectories will remain neighbouring forever and the evolution is predictable, i.e. this is not sensitive to initial conditions.
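A sketch of the closure claim: with ω1 = 1, ω2 = 2 (rational ratio) the orbit closes after t = 2π, while with ω2 = √8 the second angle has not returned:

```python
import math

def angles(w1, w2, t):
    """Both torus angles at time t, reduced modulo 2*pi."""
    two_pi = 2 * math.pi
    return (w1 * t) % two_pi, (w2 * t) % two_pi

a, b = angles(1.0, 2.0, 2 * math.pi)           # rational ratio 1/2
print(a, b)                                    # both 0: the trajectory has closed
c, d = angles(1.0, math.sqrt(8), 2 * math.pi)  # irrational ratio
print(c, d)                                    # theta2 is far from 0 (mod 2*pi)
```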
A third dimension is required to have sensitivity to initial conditions! In order to have a chaotic
attractor we require:
• Attraction
22 The equation of a torus is (x, y, z) = ([a + b cos u] cos v, [a + b cos u] sin v, b sin u), and we then set u = θ1 and v = θ2.
[Figures 28 and 29: trajectories on the torus, and closed trajectories in the unrolled (θ1, θ2) plane for rational ω1/ω2.]

Figure 30: Trajectories of Eq. 12.3 with ω1 = 1 and ω2 = √8, where time t runs from 0 to, from top to bottom, 4π, 10π and 100π.
13 The Lorenz equations
The Lorenz equations are,

ẋ = σ(y − x) ,
ẏ = rx − y − xz ,  (13.1)
ż = xy − bz ,

where r, b and σ are positive constants. These equations were originally developed to understand convection in fluids: r is the (reduced) Rayleigh number and σ is the Prandtl number. The equations are almost linear; only the two terms xz and xy make them non-linear.
The equations are found in many places in physics:
(1/V) dV/dt = ∂ẋ/∂x + ∂ẏ/∂y + ∂ż/∂z = −(σ + 1 + b) < 0 .  (13.2)

Thus there is always dissipation. Commonly, σ = 10, r = 28 and b = 8/3, hence V(t) ∝ e^{−41t/3} where e^{−41/3} ≈ 10^{−6}.
• Points on the z axis stay on the z axis and converge to the origin: if x = y = 0 then ẋ = ẏ = 0 and ż = −bz, hence x(t) = y(t) = 0 and z(t) ∝ e^{−bt}.
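A sketch confirming the z-axis behaviour numerically (hand-rolled RK4; σ = 10, r = 28, b = 8/3 are the common values):

```python
import math

SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0

def lorenz(s):
    x, y, z = s
    return (SIGMA * (y - x), R * x - y - x * z, x * y - B * z)

def rk4_step(s, dt):
    def shifted(state, k, f):
        return tuple(si + f * ki for si, ki in zip(state, k))
    k1 = lorenz(s)
    k2 = lorenz(shifted(s, k1, dt / 2))
    k3 = lorenz(shifted(s, k2, dt / 2))
    k4 = lorenz(shifted(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * p + 2 * q + r)
                 for si, a, p, q, r in zip(s, k1, k2, k3, k4))

state, dt = (0.0, 0.0, 1.0), 1e-3
for _ in range(1000):            # integrate to t = 1
    state = rk4_step(state, dt)
x, y, z = state
print(x, y, z)  # x and y stay exactly 0; z ~ exp(-8/3) ~ 0.0699
```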
13.1 Properties of the Lorenz equations
• The Jacobian at the origin is

     ( −σ   σ    0 )
J =  (  r  −1    0 ) ,  (13.6)
     (  0   0   −b )

with eigenvalues

λ1 = −b ,  (13.7)
λ2,3 = [ −(σ + 1) ± √((σ + 1)² + 4σ(r − 1)) ] / 2 ,  (13.8)

hence if r < 1 the origin (0, 0, 0) is attractive, while if r > 1 it is a saddle point.
• For the other fixed points, ẋ = 0 gives x = y and,

ẏ = (r − 1 − z)x ,  (13.9)
ż = x² − bz ,  (13.10)

hence

z_c = r − 1 ,  (13.11)
x_c± = y_c± = ±√(b z_c) = ±√(b(r − 1)) .  (13.12)

• The Jacobian at these fixed points is

     ( −σ      σ       0          )    ( −σ         σ           0          )
J =  ( r − z  −1      −x          ) =  (  1        −1          ∓√(b(r−1))  ) ,  (13.13)
     (  y      x      −b          )    ( ±√(b(r−1)) ±√(b(r−1)) −b          )

with characteristic polynomial

λ³ + λ²(σ + b + 1) + λb(σ + r) + 2σb(r − 1) = 0 .  (13.14)

This has either three real roots or one real root and two complex conjugates. The real root is attractive because, remembering that σ, r and b are positive, and indeed (to have real fixed points) r > 1, all the coefficients of the polynomial are positive, so no real root can be positive.
13.2 Giovanni Mirouh’s Lecture
The imposed temperature gradient across a fluid layer of depth L is

(Tu − Tb)/L ,  (13.17)

and the governing equations are incompressibility²³,

∇·v = 0 ,  (13.18)

the momentum (Navier–Stokes) equation²⁴,

dv/dt = ∂v/∂t + (v·∇)v = −(1/ρ)∇P + ν∇²v + ρg ,  (13.19)

and the temperature equation,

dT/dt = ∂T/∂t + (v·∇)T = κ∇²T .  (13.20)

The velocity is written in terms of a stream function Ψ,

v_x = ∂Ψ/∂z ,  (13.21)
v_z = −∂Ψ/∂x ,  (13.22)

and the temperature as a linear background profile plus a perturbation θ,

T = Tb + Γz + θ ,  (13.23)

where

Γ = (Tu − Tb)/L .  (13.24)
We end up with

∂(∇²Ψ)/∂t + J(∇²Ψ, Ψ) = RaPr ∂θ/∂x + Pr ∇²∇²Ψ ,  (13.25)
23 https://en.wikipedia.org/wiki/Incompressible_flow
24 https://en.wikipedia.org/wiki/Euler_equations_(fluid_dynamics)
25 https://en.wikipedia.org/wiki/Rayleigh–Bénard_convection
where the first term on the right-hand side is the buoyancy and the second is the diffusion, together with the temperature equation,

∂θ/∂t + J(Ψ, θ) = ∂Ψ/∂x + ∇²θ .

Now substitute, neglect terms other than those in J, change variables as above, and we find,
Ẋ = Pr(Y − X) ,  (13.30)
Ẏ = rX − Y ,  (13.31)

where

r = k_x² Ra / (π² + k_x²)³ .  (13.32)

X is related to Ψ1 and Y is related to θ1. In this case there are convective cells turning over, with one cell in the vertical direction.
Instead, consider two vertical cells.
Now we find, for example,

Pr⁻¹ Ψ̇1 = −[k_x/(π² + k_x²)] θ1 − (k_x² + π²) Ψ1 ,  (13.35)

and, defining the scaled variables,

Y = [πk_x²/(π² + k_x²)³] θ1 ,  (13.40)
Z = [2πk_x²/(π² + k_x²)³] θ2 ,  (13.41)
then,

Ẋ = σ(Y − X) ,  (13.42)
Ẏ = rX − Y − XZ ,  (13.43)
Ż = −bZ + XY ,  (13.44)

where

σ = Pr ,  (13.45)
r = k_x² Ra / (π² + k_x²)³ ,  (13.46)
b = 4π² / (π² + k_x²) = 8/3 .  (13.47)

b is determined by the size of the convective rolls, so is fixed, and r is again the reduced Rayleigh number.
• Motion along the Z axis stays on the Z axis: set X = Y = 0 to see that Z(t) = Z(0)e^{−bt}.
• The Lorenz system is dissipative. To see this, compute the fractional rate of change of a volume V in phase space,

(1/V) dV/dt = ∂Ẋ/∂X + ∂Ẏ/∂Y + ∂Ż/∂Z ,  (13.48)

which is a "Lie derivative". Hence

(1/V) dV/dt = −σ − 1 − b < 0  (13.49)

because σ > 0 and b > 0. Hence phase-space volumes decrease with time. Also,

V(t) = V(0) e^{−(σ+1+b)t}  (13.50)
     = V(0) e^{−41t/3}  (for σ = 10 and b = 8/3) ,  (13.51)

so in one unit of time the volume is divided, roughly, by one million. Convergence is thus very fast.
• The Jacobian at the origin is

                        ( −σ   σ    0 )
J = ( ∂Ẋ_i/∂X_j )  =   (  r  −1    0 ) ,  (13.52)
                        (  0   0   −b )

with eigenvalues

λ1 = −b < 0 ,  (13.54)
λ2,3 = [ −(σ + 1) ± √((σ + 1)² + 4σ(r − 1)) ] / 2 .  (13.55)

The only parameter that can vary is r. If any λ > 0 the origin is unstable.
• Ẋ = 0 implies X = Y and,

Ẏ = X(r − 1 − Z) ,  (13.56)
Ż = −bZ + X² ,  (13.57)

then,

Z_{1,2} = r − 1 ,  (13.58)
X_{1,2} = Y_{1,2} = ±√(bZ) = ±√(b(r − 1)) .  (13.59)

• The Jacobian at these fixed points is

     ( −σ          σ           0          )
J =  (  1         −1          ∓√(b(r−1))  ) ,  (13.60)
     ( ±√(b(r−1))  ±√(b(r−1)) −b          )

with characteristic polynomial

λ³ + λ²(σ + b + 1) + λb(σ + r) + 2σb(r − 1) = 0 .  (13.61)

There are three roots; here one is real and the other two are complex conjugates, λ = α ± iβ (the case of three real roots does not occur here).
Figure 31: Bifurcation diagram of the Lorenz system of equations with σ = 10 and b = 8/3. The origin is stable for r < 1; the fixed points are stable up to the subcritical Hopf bifurcation at r_H = 24.74, below which unstable limit cycles exist; transient chaos appears near r = 13.926 and the strange attractor exists beyond r = 24.06.
α ≶ 0 if r ≶ r_H .  (13.63)
• We can interpret the outcome in terms of the original system of fluid mechanics equations.
Some example trajectories include:
• When on C ± there is convection, where the sign is the direction of the convective vortices.
13.3 Conclusions
We have shown that the Lorenz system, which is a “simple” model of convection, is – in some
parts of the parameter space – chaotic, yet deterministic. The Lorenz system is based on the
Navier–Stokes equations, which govern the motion of fluids, and thus chaos is a possible solution of the Navier–Stokes equations. This was important, historically, because previous theories about the complexity of chaos required many more dimensions than the three of the Lorenz system (cf. Landau's work). It also suggests that turbulence really is deterministic: chaos is not simply a failure of the equations of motion, i.e. it is not that the equations are wrong or that the solutions are wrongly computed.
14 Chaos
A definition of chaos:
• Aperiodic long-term behaviour: trajectories do not settle down into fixed points, limit cycles
or similar.
When trajectories approach a chaotic attractor, usually a fractal, the above characteristics are generated. The stretching and folding of phase space volumes creates the sensitivity to initial conditions, while dissipation is usually responsible for reducing phase space volumes to shapes with smaller (fractal) dimensions.
Nonlinearity is required to have chaos, but a dynamical system of differential equations must also have three independent variables (i.e. be 3rd order or higher). For iterative maps, one dimension is sufficient. There is often underlying order even in a chaotic system. Chaotic does not mean random: chaos is deterministic!
In the Lorenz system we have both stretching and folding. The Lorenz attractor is a fractal with
dimension about 2.05. However, there is underlying order. Fig. 32 shows how |x n+1 | vs |x n | are
related when taking a Poincaré map. The chaos is deterministic.
Figure 32: Lorenz attractor and map when σ = 10, β = 8/3 and r = 28, starting at (x, y, z) = (15, 8, 27). The top panel shows a projection of the attractor in the z–x plane. The plane z = 28 is used to create a Poincaré section. |x| through the section is shown in the middle panel, and |x_{n+1}| vs |x_n| is shown in the bottom panel. Note that the two branches are thin.
Figure 33: The Poincaré map is the relation between x n and P (x n ) = x n+1 .
15 Iterated maps
We now turn our attention to discrete dynamical systems, such as recursion relations and difference equations. Difference equations are, for example, how all but the simplest differential equations are solved numerically on computers. Iterations are counted by an integer index, n, and the trajectory takes discrete values, x_n.

Consider an n-dimensional flow,

ẋ = F(x) ,  (15.1)

and suppose the trajectories repeatedly cross a surface, S, of dimension n − 1. The Poincaré map, P, is defined as a map of S onto itself such that each crossing point is linked to the previous,

x_{n+1} = P(x_n) .  (15.2)

A fixed point of the map corresponds to a closed orbit of the flow,

P(x) = x .  (15.3)

Poincaré sections are the collection of all points where the trajectory intersects S. Roughly speaking,

S = { x ∈ S : x = P(y) , y ∈ S } .  (15.4)
15.2 One-dimensional maps
One-dimensional maps take the form

x_{n+1} = f(x_n) , n ∈ N ,  (15.5)

and such a system can be chaotic, despite only being one-dimensional (recall that three dimensions are required in a continuous system), and can be oscillatory.
Linear stability can be analysed in a similar way to continuous systems. Let x* be a fixed point,

f(x*) = x* ,  (15.6)

and expand the map near the fixed point as a Taylor series. Define,

x_n = x* + η_n ,

then

η_{n+1} = (df/dx)|_{x*} η_n + O(η²) .  (15.10)

The multiplier,

λ = (df/dx)|_{x*} = f'(x*) ,  (15.11)

then determines the stability of the fixed point: it is stable if |λ| < 1 and unstable if |λ| > 1.
15.3 Cobweb diagrams
[Figures 34 and 35: cobweb diagrams of f(x) = sin x and f(x) = cos x, together with the line x_{n+1} = x_n.]
Figure 36: Cobweb diagram of f (x) = x−sin x−1.2 starting with x 0 = 9.5. The bottleneck, or “ghost”,
starting around x ∼ 5 is obvious because of the large number of steps required to traverse this
section.
Figure 37: As Fig. 36 with f (x) = x − sin x − 1. There is now a fixed point where f (x) = x right where
the bottleneck was, and the cobweb curve settles there.
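The cobweb construction is just repeated substitution; a sketch iterating f(x) = cos x (the map of Fig. 35) to its stable fixed point, the "Dottie number":

```python
import math

x = 1.0
for _ in range(100):   # each cobweb step is x_{n+1} = f(x_n)
    x = math.cos(x)
print(x)  # ~ 0.7390851, where cos x = x and |f'(x)| = |sin x| < 1
```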
15.4 Lyapunov exponents
Given a map,

x_{n+1} = f(x_n) ,  (15.12)

consider a point x0 and its neighbouring point x0 + δ0, where the initial distance is very small, δ0 ≪ 1. The separation between the two trajectories is, after n iterations, the difference between,

x_n = f(f(. . . f(x0))) = f^{(n)}(x0) ,  (15.13)

and

x_n + δ_n = f^{(n)}(x0 + δ0) .  (15.14)

Assuming the separation grows exponentially, |δ_n| = |δ0| e^{λ_n n}, we have

λ_n = lim_{δ0→0} (1/n) ln|δ_n/δ0|  (15.16)
    = lim_{δ0→0} (1/n) ln|[f^{(n)}(x0 + δ0) − f^{(n)}(x0)]/δ0|  (15.17)
    = (1/n) ln|(d f^{(n)}/dx)|_{x=x0}| ,  (15.18)

and then the Lyapunov exponent is found by taking the limit n → ∞. By the chain rule, the derivative of f^{(n)} is the product Π_{i=0}^{n−1} f'(x_i); note that in this product the derivatives are taken at each x_i, not at x0 (except when i = 0).
15.5 Lyapunov exponents in continuous systems
Definition 5.
Given a map, x_{n+1} = f(x_n), its Lyapunov exponent for the orbit starting at x0 is,

λ = lim_{n→∞} (1/n) Σ_{i=0}^{n−1} ln|f'(x_i)| .  (15.25)
Note that λ is the same whatever the x 0 as long as x 0 is in the basin of the attractor.
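A numerical sketch of Definition 5: for the logistic map f(x) = rx(1 − x) at r = 4 the exponent is known to equal ln 2:

```python
import math

def lyapunov_logistic(r, x0, n=100_000):
    """lambda ~ (1/n) * sum_i ln|f'(x_i)| with f(x) = r*x*(1-x), f'(x) = r*(1-2x)."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic(4.0, 0.3)
print(lam)  # ~ ln 2 = 0.6931
```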
For a continuous flow, ẋ = F(x), consider a trajectory x(t) and a displaced trajectory x(t) + ε(t). To first order in ε,

d(x + ε)/dt = F(x) + M(t)ε ,  (15.27)

where M is the Jacobian matrix,

M(t) = ( ∂F_i/∂x_j )|_{x(t)} .  (15.28)

If M is time-independent, i.e. constant, and we assume that the eigenvectors are the phase space unit vectors, we have a linear dependence of ε on time,

dε/dt = Mε ,  (15.30)

which is trivially solved and has solutions like

ε(t) = e^{Mt} ε(0) .  (15.31)

In the coordinate system where the eigenvectors are parallel to the phase space unit vectors we then have

e^{Mt} = L(t) = diag( e^{λ1 t}, e^{λ2 t}, e^{λ3 t} ) .  (15.32)

The flow of ε(t) is then dominated by the largest of the three eigenvalues, λ_i, the sign of which determines whether ε(t) grows or decays asymptotically with time.
[Figure: a small displacement ε(0) evolving into ε(t) along a trajectory.]
Similar arguments to the above can be made in the case of a system of equations with time-dependent eigenvalues and time-independent eigenvectors. In such systems we have M = M(t); we then consider small displacements in the x, y and z directions corresponding to the (constant) eigenvectors of M,

d/dt (δx, δy, δz) = diag(A, B, C) (δx, δy, δz) ,  (15.33)

where A, B and C are time-dependent eigenvalues and are functions of x(t). The general solution is again that of a first-order differential equation,

δX(t) = δX(0) exp[ ∫₀ᵗ A(x(t′)) dt′ ] ,  (15.34)

where t′ is a dummy integration variable. Rearranging, taking logarithms, and dividing by time,

(1/t) ln|δX(t)/δX(0)| = (1/t) ∫₀ᵗ A(x(t′)) dt′ .  (15.35)

The right hand side is just the time average of A, which we denote ⟨A⟩. If we assume that, over long enough times, this represents the long-term average of A, we thus have one of the three Lyapunov exponents,

⟨A⟩ = lim_{t→∞} (1/t) ln|δX(t)/δX(0)| .  (15.36)
Figure 39: 516, 552 and 576 hPa geopotential heights as forecast by an ensemble of Global Forecasting System models (data from www.wetterzentrale.de) at times 0, 48, 96, 144, 192, 240, 288 and 336 h. As time progresses the forecasts become more scattered, showing the chaos inherent in weather. After about 192 h the system is truly chaotic, so the Lyapunov time is about 192 h, corresponding to a Lyapunov exponent 1/192 ≈ 5 × 10⁻³ h⁻¹.
More detailed analysis can be done for the general case of time-dependent eigenvalues and eigenvectors. In general we write

λ = 1/τ ,  (15.37)

where τ is the Lyapunov time (Fig. 39). In the context of the above, where several exponents can be calculated from the Jacobian, the λ used is the largest of the Lyapunov exponents because this dominates the flow. τ is the characteristic timescale on which trajectories diverge, hence on which a system becomes chaotic; when running computer simulations it is the maximum timescale on which deterministic forecasts are possible. This is relevant for weather forecasting (Fig. 39) and for determining whether orbits, e.g. of planets in the solar system, are stable²⁹.
15.6 p-cycle
Suppose f has a stable p-cycle containing x0,

f^{(p)}(x0) = x0 ,  (15.38)

so that x0 is a fixed point of g(x) = f^{(p)}(x).
• But g'(x0) = Π_{i=0}^{p−1} f'(x_i) by the chain rule, so the cycle is stable if |Π_{i=0}^{p−1} f'(x_i)| < 1.

15.7 Tent map
The tent map is

f(x) = { rx ,        0 ≤ x ≤ 1/2 ,
       { r(1 − x) ,  1/2 ≤ x ≤ 1 .    (15.45)

[Figure 40: the tent map, which rises linearly to a peak of r/2 at x = 1/2 and falls linearly back to zero at x = 1.]
• f'(x) = ±r so |f'(x)| = r, and hence the Lyapunov exponent is λ = ln r.
• When λ > 0 we have r > 1, and the map is chaotic, as shown in Fig. 41.
[Figure 41: bifurcation diagram of the tent map, x_n as a function of r for 1 ≤ r ≤ 2, with a zoom near r = 1.]

15.8 Logistic map

The logistic map is

x_{n+1} = r x_n (1 − x_n) ,

which period-doubles on the way to chaos; the 2^n-cycles are born at the following values of r:

n    r_n
1    3
2    1 + √6 = 3.449 . . .
3    3.54409 . . .
4    3.5644 . . .
5    3.568759 . . .
∞    3.569946 . . .
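From the tabulated r_n we can sketch an estimate of the Feigenbaum constant δ ≈ 4.669, the limiting ratio of successive bifurcation intervals:

```python
r = [3.0, 3.449490, 3.54409, 3.5644, 3.568759]  # r_1 .. r_5 from the table
ratios = [(r[i] - r[i - 1]) / (r[i + 1] - r[i]) for i in range(1, len(r) - 1)]
print(ratios)  # tends towards delta ~ 4.669
```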
Figure 43: Logistic map cobweb diagrams with r = 0.5, r = 2, r = 3.2 (period doubling) and r = 3.6
(chaos).
[Figure 44: bifurcation diagram of the logistic map, x_n as a function of r for 1 ≤ r ≤ 4.]
15.9 Hénon map
• Examples: go to http://thewessens.net/ClassroomApps/Main/logistic.html

The Hénon map is

x_{n+1} = y_n + 1 − a x_n² ,  (15.57)
y_{n+1} = b x_n ,  (15.58)

with, in the "classical" map, parameters a = 1.4 and b = 0.3. The map can be written, equivalently, as first an area-conserving bend,

(x1, y1) = (x, 1 − ax² + y) ,  (15.59)

then a contraction in x, and finally a reflection in the line y = x. The three steps are shown for the first four iterations of the map in Fig. 45.
The map can also be written in one dimension by substituting for y_{n+1},

x_{n+1} = 1 − a x_n² + b x_{n−1} .

• The map is dissipative for −1 < b < 1, and the Jacobian is,

J = ( −2ax  1 )
    (   b   0 ) ,  (15.68)

where det J = −b, so |det J| = |b| < 1.
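A sketch of the classical map, checking that the orbit from (0, 0) stays bounded on the attractor while areas shrink by the constant factor |det J| = b each iteration:

```python
a, b = 1.4, 0.3      # classical Henon parameters
x, y = 0.0, 0.0
xs = []
for _ in range(10_000):
    x, y = y + 1 - a * x * x, b * x
    xs.append(x)
print(min(xs), max(xs))  # bounded: the Henon attractor
# det J = (-2*a*x) * 0 - 1 * b = -b at every point, independent of x
```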
Figure 45: The first four iterations of the classical Hénon map starting from a uniform map of
points in the range −1 ≤ x 0 , y 0 ≤ 1.
15.10 Quadratic map and the Mandelbrot set
[Figure 46: bifurcation diagram of x_n as a function of a for the Hénon map.]
• Some trajectories can escape to infinity, unlike (say) the Lorenz system.
• Fig. 46 shows the bifurcation diagram for x n as a function of a. Many features, such as period
doubling, will be familiar to you from simpler maps.
Further reading:
• https://www.math.uu.se/digitalAssets/562/c_562622-l_1-k_tucker_slides.pdf
• http://www.cmsim.eu/papers_pdf/october_2013_papers/5_CMSIM-Journal_2013_Aybar_etal_4_529-538.pdf
The general quadratic map is

x_{n+1} = a2 x_n² + a1 x_n + a0 ,  (15.69)

where the a_i are constants and i = 0, 1, 2. The logistic map is a special case. Sometimes the map is analytically soluble.
• Binary trees of height ≤ n are given by the map y_n = y_{n−1}² + 1 with y_0 = 1 (Aho and Sloane 1973, https://www.fq.math.ca/11-4.html). This has the analytic solution y_n = ⌊c^{2^n}⌋, where ⌊x⌋ is the "floor" of x, i.e. the largest integer that is smaller than or equal to x, and c ≈ 1.50283.
• The closest sum of n unit fractions to 1 from below is given by,

S_n = Σ_{i=1}^{n} 1/z_i ,  (15.70)

where z_n = z_{n−1}² − z_{n−1} + 1 and z_1 = 2. This is Sylvester's sequence, and has terms 2, 3, 7, 43, 1807, . . . . The closed solution is,

z_n = ⌊ d^{2^n} + 1/2 ⌋ ,  (15.71)

where d ≈ 1.2640.
• The map

x_{n+1} = x_n² + c ,  (15.72)

where x_n are real, which is not – in general – solvable, is the real version of the Mandelbrot set map,

z_{n+1} = z_n² + C ,  (15.73)

where the z_n are complex and z_0 = 0 is the starting point. Points C for which z_n does not tend to infinity are defined to be part of the Mandelbrot set. The first few iterates are

z_0 = 0 ,  (15.74)
z_1 = C ,  (15.75)
z_2 = C² + C .  (15.76)

• If |C| > 2 the iteration escapes to infinity. Consider the case |z| ≥ |C| > 2. The triangle inequality gives us,

|z² + C| + |−C| ≥ |z²| ,  (15.77)

hence

|z² + C| ≥ |z²| − |−C|  (15.78)
         ≥ |z|² − |C| .  (15.79)

Now

|z| ≥ |C|  (15.80)

hence

−|z| ≤ −|C| ,  (15.81)

so,

|z² + C| ≥ |z|² − |z|  (15.82)
         = |z| (|z| − 1)  (15.83)
         ≥ |z| (|C| − 1)  (15.84)
         = k |z| ,  (15.85)

where k = |C| − 1 > 1. Thus, starting from

|z_1| = |C| ,  (15.86)

the modulus grows at least geometrically,

|z_n| ≥ k^{n−1} |C| → ∞ .  (15.87)
• Points outside the set, but with |C| < 2, also blow up to infinity.
• The Mandelbrot set is a fractal with Hausdorff dimension 2 (Shishikura 1994, https://www.jstor.org/stable/121009).
• When plotted, the colour is usually the n for which |z_n| first exceeds 2 (as shown above, points which diverge beyond 2 are not members of the set). Fig. 47 shows some examples of parts of the set.
• The Julia set is similar except that C is a (complex) constant and z_0 = (x, y) varies. The Julia set is then the boundary between the points that diverge to infinity and those that do not.
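A sketch of the escape-time computation used for such plots:

```python
def escape_time(C, max_iter=100):
    """Iterate z -> z**2 + C from z = 0; return the first n with |z| > 2, else None."""
    z = 0
    for n in range(max_iter):
        z = z * z + C
        if abs(z) > 2:
            return n
    return None  # (probably) a member of the Mandelbrot set

print(escape_time(0))    # None: in the set
print(escape_time(-1))   # None: the orbit 0, -1, 0, -1, ... is periodic
print(escape_time(1))    # escapes: 0, 1, 2, 5, 26, ...
```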
15.11 Other chaotic maps
16 Universality and chaos
Experience tells us that there are several ways chaos can be approached. Consider the two-dimensional map on x = (x, y) with,

x_{n+1} = f(x_n, y_n) ,  (16.1)
y_{n+1} = h(x_n, y_n) ,  (16.2)

with fixed points satisfying

x* = (x*, y*) = ( f(x*, y*), h(x*, y*) ) .  (16.3)
Linearizing about a fixed point, small perturbations evolve under the matrix

A = ( ∂f/∂x  ∂f/∂y )
    ( ∂h/∂x  ∂h/∂y ) ,  (16.7)

with eigenvalues and eigenvectors

λ_i v_i = A v_i .  (16.8)

If |λ_i| > 1 we have instability, if |λ_i| < 1 stability: remember λ is, generally, complex, so you must consider the magnitude. There are three general cases.
16.2 Real eigenvalues with λi < 1
For a one-dimensional map,

x_{n+1} = f(x_n) ,  (16.9)

a fixed point and its multiplier are

x* = f(x*) ,  (16.10)
λ = f'(x*) .  (16.11)

Expanding near a point x_n,

f(x_n + δx) = f(x_n) + f'(x_n)δx + (1/2)f''(x_n)(δx)² + O(δx³) .  (16.12)

Of particular interest is a fixed point at which the derivative also vanishes,

x* = f(x*) ,  (16.13)
f'(x*) = 0 ,  (16.14)
x_n = x* .  (16.15)

This is the point in the f(x_n) vs x_n plot where the map curve crosses the straight line f(x_n) = x_n.
16.5 The logistic map as a unimodal map

Unimodal maps follow a sequence of period doubling, eventually becoming chaotic. We consider the logistic map as an example, but any similarly-shaped function has similar behaviour,

f(x) = rx(1 − x) .  (16.16)

The fixed points satisfy

x = rx(1 − x)  (16.17)
0 = rx² + (1 − r)x  (16.18)
  = x(rx + 1 − r) ,  (16.19)
x∗ = 0 , (16.20)
λ = f 0 x∗
¡ ¢
(16.22)
∗
= r − 2x r , (16.23)
which at x ∗ = 0 gives,
λ = r, (16.24)
f 2 x∗ = x∗ ,
¡ ¢
(16.27)
i.e.,
¡ ¢
f f (x) − x = r [r x (1 − x)] (1 − [r x (1 − x)]) − x = 0 . (16.28)
Two of the roots are the previous found x ∗ = 0 and x = 1− r1 . We can immediately thus divide
by x,
multiply out,

r (r − rx) (1 − rx + rx²) − 1 = 0  (16.30)
r² (1 − x) (1 − rx + rx²) − 1 = 0  (16.31)
r² (1 − rx + rx² − x + rx² − rx³) − 1 = 0  (16.32)
r² (1 − (1 + r)x + 2rx² − rx³) − 1 = 0 ,  (16.33)
hence, writing the remaining cubic as the product (x − 1 + 1/r)(ax² + bx + c) and using,

(x − 1 + 1/r) ax² = ax³ + a(1/r − 1)x²  (16.36)
(x − 1 + 1/r) bx = bx² + b(1/r − 1)x  (16.37)
(x − 1 + 1/r) c = cx + c(1/r − 1) ,  (16.38)

the x³ terms give

a = 1 ,  (16.39)

the x² terms give

(1/r − 1) + b = −2 ,  (16.40)

hence

b = −1 − 1/r .  (16.41)

Powers of x give,

−(1/r + 1)(1/r − 1) + c = 1/r + 1 ,  (16.42)

hence

c = (1/r + 1)(1 + (1/r − 1))  (16.43)
  = (1/r + 1)(1/r) .  (16.44)

Hence,

(x − 1 + 1/r) [ x² − (1 + 1/r)x + (1/r)(1/r + 1) ] = 0 ,  (16.45)

and the roots of the quadratic factor are the period-2 points.
We can thus only have real solutions for r > 3, because the discriminant (under the square
root) must be positive. At r = 3 we see period doubling.
• It is then possible to compute whether this point is an attractor or repeller by computing the
appropriate derivatives (exercise for the student!).
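A sketch checking the quadratic factor numerically: for r = 3.2 its two roots form the 2-cycle of the logistic map:

```python
import math

r = 3.2
def f(x):
    return r * x * (1 - x)

s = 1 + 1 / r                   # sum of the roots
disc = s * s - 4 * (1 / r) * s  # discriminant = (1 + 1/r)(1 - 3/r), positive for r > 3
x_plus = (s + math.sqrt(disc)) / 2
x_minus = (s - math.sqrt(disc)) / 2
print(x_plus, x_minus)          # ~ 0.7995 and 0.5130
print(f(x_plus), f(x_minus))    # f swaps them: a period-2 orbit
```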
16.6 Renormalization
We can show that period doubling will repeat by renormalizing our co-ordinates near each “u”
in the unimodal map. All we need to do is flip the graph and scale the co-ordinates to map the
upside-down “u” back onto the original “u”. Then the process repeats.
• Shift the co-ordinate system x → x + ∆x so that the new centre is at the centre of the “u”
• Multiply and shift f (2) (x) → α f (2) ([x + ∆x] /α)+∆y = α f (2) (x/α, R n ), where R n is the location
of the nth bifurcation, so that “u” of the map f (2) (x) sits on top of where the “u” of the original
map f (x) was.
• The analysis for the next bifurcation, f (4) (x), is now identical other than the scaling.
• The limit,

lim_{n→∞} α^n f^{(2^n)}( (x + ∆x)/α , R_n ) = g(x) ,  (16.52)
which is the Feigenbaum ratio. All “quadratic-like”, i.e. one-humped maps with non-zero
second derivative at the peak, have the same number. This was shown by May, Feigenbaum
and others in the 1970s and early 1980s, and at the end of the 1980s a full analytical proof
was given by Dennis Sullivan.
• The bifurcation regions are fractals, and many comparisons can be made between the Mandelbrot set and the logistic map bifurcation diagram (Fig. 49). The logistic map is,

x_{n+1} = rx_n(1 − x_n) ,  (16.54)

and the Mandelbrot map is

z_{n+1} = z_n² + c .  (16.55)
Let

z_n = a + bx_n ,  (16.56)

hence also

x_n = (z_n − a)/b .  (16.57)

Then

z_n² = a² + 2abx_n + b²x_n² ,  (16.58)

and

z_{n+1} = a + bx_{n+1} = a + brx_n(1 − x_n)  (16.59)
        = a + brx_n − brx_n² .  (16.60)

We then have to choose a and b such that z_{n+1} = z_n² + c,

z_{n+1} = a + brx_n − brx_n²  (16.61)
        = a² + 2abx_n + b²x_n² + c .  (16.62)

Matching powers of x_n,

−br = b² ,  (16.63)
br = 2ab ,  (16.64)
a² + c = a ,  (16.65)

hence

b = −r ,  (16.66)
a = r/2 ,  (16.67)
c = a(1 − a) = (r/2)(1 − r/2) .  (16.68)
16.7 Lyapunov exponent
Thus the map between the logistic and Mandelbrot set is,
r
zn = (1 − 2x n ) (16.69)
2
with
r³ r´
c = 1− . (16.70)
2 2
Application of this map gives the one-to-one correspondence seen in Fig. 49, and indeed the bifurcation diagram of the Mandelbrot set along the real axis is identical to a suitably scaled (by Eq. 16.69) logistic map bifurcation diagram.
[Figure 48 panels: a) f, b) f^(2), c) f^(2) with a selected region, d) a zoom of that region, e) the rescaled map ∆y + α f^(2)((x + ∆x)/α), all at r = 3.5.]
Figure 48: Renormalization of a unimodal map. The original map f (x) is in panel a, the second
iteration f (2) (x) is in panel b. In c we select a region which looks like the original map, zoomed
in d, and in e we rescale and shift the axes – known as renormalization – to get back to something
very close to the original map. This region then has similar bifurcations to the original map, in this
case another period doubling to f (4) (x), and so on.
Figure 49: Comparison of the Mandelbrot set and the bifurcation diagram of the logistic map. Note
the universal ratios in both.
Figure 50: Bifurcation diagram and Lyapunov exponent as a function of the parameter r in the
logistic map.
17 The 0-1 test for chaos
We often need to determine whether a data set is truly chaotic. The classical way to do this is with
Lyapunov exponents, to see where they are positive, or to look at the return map, where a random
scatter indicates chaos. There is a (perhaps) better way: the “0-1” test for chaos.
Let us assume we have a data set of points, φ_n, where n = 1, 2, . . . , generated by some map that may or may not be chaotic. Define a two-dimensional system,
  p_n = p_{n−1} + φ_n cos(cn) , (17.1)
  q_n = q_{n−1} + φ_n sin(cn) , (17.2)
where c ∈ (0, 2π) is a fixed, real constant. We then define the time-averaged displacement function, M_n, by,
  M_n = lim_{N→∞} (1/N) Σ_{j=1}^{N} [ (p_{j+n} − p_j)² + (q_{j+n} − q_j)² ] , (17.3)
and the asymptotic growth rate,
  K = lim_{n→∞} log M_n / log n . (17.4)
Generally, M_n and K exist, and K = 0 when the dynamics are regular (non-chaotic) and K = 1 when the dynamics are chaotic.
As an example, consider the logistic map,
  φ_{n+1} = r φ_n (1 − φ_n) , (17.5)
where 0 < r < 4 (if r > 4 the map can diverge, as we showed previously). Fig. 51 shows p_n and q_n, Fig. 52 shows M_n as a function of n, and Fig. 53 shows K (the slope) as a function of the logistic map parameter r, compared to the classical bifurcation diagram.
17.3 Resolution and convergence
Figure 51: p n and q n for the logistic map with r = 3.55 (regular) and r = 3.9 and 3.97 (chaotic), with
5000 points starting at x 0 = 0.5.
[Three panels of M_n vs n: r = 3.55 with slope −0.00065, r = 3.9 with slope 0.0137, and r = 3.97 with slope 0.0320.]
Figure 52: M_n vs n using the logistic map with r = 3.55 (regular) and r = 3.9 and 3.97 (chaotic), with 5000 points starting at x_0 = 0.5 and 500 iterations of the 0-1 algorithm.
Figure 53: a) Bifurcation plot of the logistic map, b) slopes K of the 0-1 test for 5000 sample points
and 500 iterations, c) zoom of (b) in the chaotic region.
Figure 54: Comparison of the 0-1 test for 5000 and 10000 sample points of the logistic map, with 500 and 1000 iterations respectively. (b) is a zoom of (a).
17.4 Improvements
Because M_n oscillates as a function of n, a modified function,
  D_n = M_n − V_osc (17.6)
      = M_n − E² (1 − cos(nc)) / (1 − cos c) , (17.7)
where
  E = lim_{N→∞} (1/N) Σ_{j=1}^{N} φ_j , (17.8)
can be used to remove the oscillations. The asymptotic growth rate remains the same; D_n simply converges better when the resolution is limited.
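The subtraction of Eqs. 17.6-17.8 is a one-liner; a sketch (the function name is ours, and the mean of a finite data set stands in for the limit E):

```python
import numpy as np

def D_from_M(M, phi, c):
    """Subtract the oscillatory term V_osc of Eq. 17.7 from M_n."""
    E = phi.mean()                      # finite-N estimate of E, Eq. 17.8
    n = np.arange(1, len(M) + 1)
    V_osc = E**2 * (1 - np.cos(n * c)) / (1 - np.cos(c))
    return M - V_osc
```

Since 0 ≤ (1 − cos nc)/(1 − cos c) ≤ 2/(1 − cos c), the correction is bounded, so D_n and M_n share the same growth rate K.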
• A comparison with the Henon map is in the following YouTube video: https://www.youtube.com/watch?v=ecjpVGUWYoc
17.6 Credits
The 0-1 test for chaos was created by researchers at the University of Sydney and the University of
Surrey.
• You might also want to read the following paper comparing the Lyapunov exponent test to the 0-1 test: http://fse.studenttheses.ub.rug.nl/14017/1/The_Lyapunov_Exponent_Test_and_1.pdf
18 Generating Fractals
18.1 Random numbers to fractals
• https://www.youtube.com/watch?v=kbKtFN71Lfs
• http://thewessens.net/ClassroomApps/Main/chaosgame.html
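The “chaos game” behind those links fits in a few lines: repeatedly jump halfway from the current point towards a randomly chosen vertex of a triangle, and the visited points trace out the Sierpinski gasket (a minimal sketch, not code from the course):

```python
import numpy as np

rng = np.random.default_rng(1)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
point = np.array([0.3, 0.2])     # arbitrary starting point
points = []
for _ in range(10000):
    point = (point + vertices[rng.integers(3)]) / 2  # jump halfway to a random vertex
    points.append(point)
points = np.array(points)
# after a very short transient the points lie on the Sierpinski gasket,
# so, for example, the central inverted-triangle hole stays empty
```

Plotting `points` as a scatter reproduces the fractal from those pages.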
18.2 Lindenmayer-systems
Lindenmayer31 systems are used to describe the growth of systems, in particular organisms. They
involve encoding a set of simple rules which can then result in an arbitrarily complex structure.
They can be used to generate fractals, often looking like plants, e.g. in Algorithmic Botany.
The idea starts with an Axiom: a starting state, and a set of transformation rules. These rules
are applied iteratively to produce the final structure.
Consider an axiom b and rules b = a and a = ab. The first few iterations are then,
b,
a,
ab ,
aba ,
abaab ,
abaababa .
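The rewriting above takes only a few lines of code; a minimal sketch (the function name is ours):

```python
def l_system_string(axiom, rules, n_iterations):
    """Iteratively apply the production rules to the axiom, keeping every stage."""
    stages = [axiom]
    for _ in range(n_iterations):
        # replace each character by its production rule, or keep it unchanged
        stages.append("".join(rules.get(ch, ch) for ch in stages[-1]))
    return stages

print(l_system_string("b", {"b": "a", "a": "ab"}, 5))
# ['b', 'a', 'ab', 'aba', 'abaab', 'abaababa']
```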
The idea of turtle graphics is used to draw these structures. Define the following rules.
F Move forward a step and draw a line from the previous position to the current position.
+ Rotate by n degrees.
- Rotate by −n degrees.
An example command string is then FFF+F+FF-F+F+FF.
31. https://en.wikipedia.org/wiki/Aristid_Lindenmayer; Lindenmayer was a Hungarian biologist.
Square brackets allow branching: [ saves the turtle’s state and ] restores it, so that, e.g., [F+F] draws a branch and then returns to the branching point.
One can even go multidimensional, e.g. by introducing pitch up ^, pitch down &, roll clockwise \ and roll counterclockwise /. For example, one can take the axiom
FFFA
and the rule
A = "[&FFFA]////[&FFFA]////[&FFFA]" .
18.3 Examples
• Axiom F-F-F-F; F → FF-F+F-F-FF
• Axiom F; F → F[-F][+F]
There are many Python examples out there; this is but one, which makes the von Koch snowflake fractal seen earlier in the course.
import numpy as np
import matplotlib.pyplot as plt
from math import atan2, radians, cos, sin, hypot

class turtle:
    """
    A turtle is a simple object with a direction and a position.
    It can follow two basic commands: move forward and turn by an angle.
    """
    def __init__(self):
        self._direction = np.array([scale, 0.0])  # 2D direction vector
        self._position = np.array([0.0, 0.0])     # 2D position vector

    def forward(self):
        """
        Move turtle forward by one unit.
        """
        self._position = np.add(self._position, self._direction)

    def rotate(self, theta):
        """
        Rotate turtle direction by angle theta in degrees.
        """
        (x, y) = self._direction
        current_angle = atan2(y, x)
        new_angle = current_angle + radians(theta)
        length = hypot(x, y)
        self._direction = np.array([length * cos(new_angle),
                                    length * sin(new_angle)])

def l_system(commands, axiom, production_rules, theta, n_iterations):
    """
    Parameters
    ----------
    commands : dict
        Maps single characters to function calls written as strings.
        The functions are performed on a turtle object,
        e.g. {'+': 't.rotate(-theta)', '-': 't.rotate(theta)', 'F': 't.forward()'}
    axiom : str
        The initial string of command characters.
        The associated function calls of these characters are found in commands,
        e.g. 'FX+FX+'
    production_rules : dict
        Maps single character strings to more complicated strings of characters.
        The value strings replace the key strings on each new iteration,
        e.g. {'X': 'X+YF', 'Y': 'FX-Y'}
    theta : int
        Angle of rotation, in degrees, e.g. 90
    n_iterations : int
        Number of iterations for the L system, e.g. 5

    Returns
    -------
    positions : numpy array
        The positions of the turtle while following commands in the final
        command string.
    """
    command_string = axiom  # Begin commands with only the axiom
    for iteration in range(n_iterations):
        new_command_string = str()
        for char in command_string:
            if char in production_rules:
                new_command_string += production_rules[char]
            else:
                new_command_string += char
        command_string = new_command_string
    # Follow the final command string with a turtle, recording its positions
    t = turtle()
    positions = [t._position]
    for char in command_string:
        if char in commands:
            eval(commands[char])
            if char == 'F':
                positions.append(t._position)
    return np.array(positions)

commands = {
    'F': 't.forward()',
    '+': 't.rotate(-theta)',
    '-': 't.rotate(theta)',
}
theta = -60
scale = 1.0
scalefac = 2.0 / 3.0
dpi = 1200
plt.ioff()
nmax = 5
When you run this, together with the plotting code in the full script, you should produce five files, vonKoch.n.pdf, where n = 1, 2, 3, 4, 5, as shown in Sec. 10.4.1. You can download the full script, vonKoch.py, from the code directory of the course. I also provide Sierpinski-carpet.py in the course’s code directory, which is somewhat more complicated.
• https://blog.klipse.tech/python/2017/01/04/python-turtle-fractal.html
• https://www.vexlio.com/blog/drawing-simple-organics-with-l-systems/
• http://exupero.org/hazard/post/fractals/
• https://blog.goodaudience.com/fractals-and-recursion-in-python-d11d87fcf9cd
18.6 Fractal compression
• Apps at https://www.geogebra.org/
19 More on chaotic oscillators
19.1 Rossler-band attractor
A simpler example than the Lorenz model, this attractor contains only one non-linear term. The
equations are,
ẋ = −y − z , (19.1)
ẏ = x + a y , (19.2)
ż = b + z (x − c) , (19.3)
1. The divergence of the flow is,
  (1/V) dV/dt = ∂ẋ/∂x + ∂ẏ/∂y + ∂ż/∂z = 0 + a + (x − c) (19.4)
              = a − c + x , (19.5)
so volumes contract wherever x < c − a.
2. Motion in the x y plane spirals outwards. If we let z = 0 to constrain motion to the plane, we
have equations of motion,
ẋ = −y , (19.6)
ẏ = x + a y , (19.7)
ż = b = 0 , (19.8)
then
  ẍ = −ẏ = −(x + a y) = −x + a ẋ , (19.9)
i.e.
ẍ − a ẋ + x = 0 . (19.10)
This is the equation of a harmonic oscillator with negative damping, so the trajectories spiral
out in the x y plane.
3. Vertical motion (in the z direction) depends on x alone. To show this, set x and y constant, so that,
  ẋ = ẏ = 0 , (19.11)
and,
  ż = b + z (x − c) . (19.12)
Setting
  ż = 0 , (19.13)
i.e.
  z = b / (c − x) . (19.14)
Acceleration in the z direction depends only on x, because
  ∂ż/∂z = x − c . (19.15)
Stability criteria:
• x < c: stable and attractive, ∂ż/∂z < 0.
• x > c: unstable and repulsive, ∂ż/∂z > 0.
• As x → ±∞ we have z* → 0.
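These properties are easy to explore numerically; below is a minimal sketch integrating the Rossler equations with the fourth-order Runge-Kutta method of Sec. 1.4.3 (the parameter values a = b = 0.2, c = 5.7 are a common choice, not taken from the notes):

```python
import numpy as np

def rossler(state, a=0.2, b=0.2, c=5.7):
    """Right-hand side of Eqs. 19.1-19.3."""
    x, y, z = state
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_step(f, state, dt):
    # one fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 0.0])
for _ in range(20000):           # integrate to t = 200 with dt = 0.01
    state = rk4_step(rossler, state, 0.01)
# the trajectory stays on the bounded Rossler band: it spirals out in the
# x-y plane and is reinjected through short excursions in z
```

Storing the intermediate states and plotting them in three dimensions shows the band structure directly.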
19.2 van der Pol oscillator
The van der Pol oscillator obeys,
  ẍ + µ (x² − 1) ẋ + ω² x = 0 . (19.16)
As a two-dimensional system this is,
  ẋ = y , (19.17)
  ẏ = −µ (x² − 1) y − ω² x , (19.18)
and if we write
  (ẋ, ẏ)ᵀ = F(x, y) , (19.19)
then
  ∇·F = ∂ẏ/∂y + ∂ẋ/∂x (19.20)
      = −µ (x² − 1) . (19.21)
The fixed points satisfy,
• ẋ = y = 0, i.e. y = 0,
• ẏ = −µ (x² − 1) y − ω² x = 0, which together with y = 0 gives x = 0,
so the only fixed point is (x₀, y₀) = (0, 0).
Linear analysis at the fixed point (0, 0) of the equations gives us the Jacobian,
  A₀ = [[0, 1], [−ω² − 2µxy, −µ(x² − 1)]]|_(0,0) = [[0, 1], [−ω², µ]] , (19.22)
which has trace,
  τ = µ , (19.23)
and determinant,
  ∆ = ω² , (19.24)
with real eigenvalues when τ² > 4∆, i.e. when,
  |µ| > 2ω . (19.25)
Eigenvalues are,
  λ₁,₂ = (µ ± √(µ² − 4ω²)) / 2 , (19.26)
and there are four regions of the parameter space: µ < −2ω (stable node), −2ω < µ < 0 (stable spiral), 0 < µ < 2ω (unstable spiral) and µ > 2ω (unstable node).
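These four regions can be checked directly from the Jacobian at the origin (a small sketch; the function name and the sample values of µ are ours):

```python
import numpy as np

def classify(mu, omega=1.0):
    """Classify the van der Pol fixed point at the origin from Eq. 19.22."""
    A0 = np.array([[0.0, 1.0], [-omega**2, mu]])  # Jacobian at (0, 0)
    lam = np.linalg.eigvals(A0)
    kind = "node" if np.allclose(lam.imag, 0) else "spiral"
    stability = "stable" if np.all(lam.real < 0) else "unstable"
    return stability + " " + kind

print(classify(-3.0), classify(-1.0), classify(1.0), classify(3.0))
# stable node, stable spiral, unstable spiral, unstable node
```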
When µ is small, oscillations are near to sine waves. When µ is large, they are almost square waves.
Consider the case µ ≫ 1 with ω = 1, then our equation of motion is,
  ẍ + µ (x² − 1) ẋ + x = 0 . (19.27)
Define,
  F(x) = x³/3 − x , (19.28)
such that,
  dF/dx = x² − 1 , (19.29)
and
  dF/dt = (dF/dx)(dx/dt) = ẋ (x² − 1) , (19.30)
so the equation of motion becomes,
  0 = ẍ + µ dF/dt + x (19.31)
    = d(ẋ + µF)/dt + x . (19.32)
Now we can write our second-order equation as two coupled first-order equations. With,
  y = ẋ/µ + F , (19.33)
such that
  µ y = ẋ + µF , (19.34)
then
  µ ẏ = −x , (19.35)
i.e.
  ẋ = µ (y − F) , (19.36)
  ẏ = −x/µ . (19.37)
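Integrating Eqs. 19.36 and 19.37 shows the relaxation oscillations directly; a sketch using the fourth-order Runge-Kutta method of Sec. 1.4.3 (the step size and initial condition are our choices):

```python
import numpy as np

def vdp_lienard(state, mu=10.0):
    """van der Pol oscillator in the Lienard coordinates of Eqs. 19.36-19.37."""
    x, y = state
    F = x**3 / 3.0 - x
    return np.array([mu * (y - F), -x / mu])

def rk4_step(f, state, dt):
    # one fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.5, 0.0])
xs = []
for i in range(20000):          # integrate to t = 100 with dt = 0.005
    state = rk4_step(vdp_lienard, state, 0.005)
    if i >= 10000:              # discard the transient
        xs.append(state[0])
amplitude = max(abs(x) for x in xs)
print(amplitude)  # the limit-cycle amplitude in x is close to 2
```

The trajectory crawls along the slow branches of y = F(x) and jumps rapidly between them, which is the slow-fast structure visible in Fig. 55.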
[Figure 55 shows x(t), y(t) and the x-y phase plane for µ = 0.01, 0.1, 0.5, 1, 2 and 10.]
Figure 55: x vs y of Eqs 19.36 and 19.37, showing in the top panel every data point and in the bottom panel one point per δt = 1, so that rapid motion has few points. When µ ≫ 1, e.g. the µ = 10 (yellow) curve, the “horizontal” parts of the limit cycle are fast and have few points, while the “vertical” parts are relatively slow, as shown in the text.