
Non-linear physics

PHYM038

University of Surrey

Department of Physics

Spring 2019

Please send corrections by email to r.izzard@surrey.ac.uk


Contents

1 Introduction 6
1.1 History of Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Example: the pendulum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Solving non-linear systems on a computer . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.1 Euler’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.2 Improved Euler’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.3 Fourth-order Runge-Kutta method . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.4 Newton Raphson and Henyey methods . . . . . . . . . . . . . . . . . . . . . . . 13
1.5 Flow in one dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.1 Example: RC circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6 Linear stability analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6.1 Example: population growth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6.2 Example: ẋ = x² − x⁴ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.7 Existence and uniqueness theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.8 Potential functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2 Bifurcations 23
2.1 Bifurcations in 1 dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1.1 Example: f(r, x) = ẋ = r + x², a “saddle node” or “blue sky” bifurcation . . . . 23
2.2 Prototypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.1 Example: f(x) = ẋ = r − x − e⁻ˣ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3 Types of 1D bifurcations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.3.1 Saddle Node / Blue Sky: ẋ = r ± x² . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.3.2 Transcritical: ẋ = rx − x² . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.3.3 Supercritical pitchfork: ẋ = rx − x³ . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.3.4 Subcritical pitchfork: ẋ = rx + x³ . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.4 Insect outbreak! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.4.1 Scale free equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.4.2 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4.3 Bifurcation curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4.4 Bistable states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.4.5 General state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.5 Ghosts and bottlenecks: the non-uniform oscillator . . . . . . . . . . . . . . . . . . . 37
2.5.1 Period of oscillation when a < ω . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.6 Superconducting Josephson junction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

3 Linear systems 45
3.1 Real and distinct eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.1.1 The slope of trajectories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.1.2 a < 0 and b < 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.1.3 a > 0 and b > 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.1.4 a > 0 and b < 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.1.5 a = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49


3.1.6 Nodes and stars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50


3.1.7 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.1.8 Saddle node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1.9 Neutral stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1.10 Attracting but not Lyapunov stable . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.2 Complex conjugate eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2.1 Example: Harmonic Oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3 Equal eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

4 Phase Space Analysis in 2 dimensions 54


4.1 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.3 Example: Lotka-Volterra competition model . . . . . . . . . . . . . . . . . . . . . . . . 59
4.4 When linearization fails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.4.1 Real and complex eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.4.2 Marginal cases: at least one imaginary eigenvalue . . . . . . . . . . . . . . . . . 62

5 Lyapunov Stability Theorem 64


5.1 Lyapunov function example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

6 Limit cycles and oscillators 66


6.1 Van der Pol oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
6.1.1 Forced Van der Pol oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.2 Limit Cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

7 Poincaré-Bendixson theorem 73
7.0.1 Poincaré-Bendixson F.A.Q. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
7.0.2 Proof of the theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
7.1 Glycolysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
7.1.1 General properties, nullclines and the fixed point . . . . . . . . . . . . . . . . . 75
7.1.2 The fixed point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

8 1D bifurcations in 2D and ghosts 82


8.1 Saddle node bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
8.2 Transcritical bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
8.3 Supercritical pitchfork bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
8.4 Subcritical pitchfork bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

9 Hopf Bifurcations 91
9.1 Supercritical Hopf bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
9.2 Subcritical Hopf bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
9.3 Saddle node bifurcation of cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

10 Fractals 101
10.1 Fractals in Nature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
10.2 Cantor Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
10.3 Ternary expansion characterization of the Cantor-thirds set . . . . . . . . . . . . . . . 104


10.4 Fractal examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105


10.4.1 von Koch curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
10.4.2 von Koch quadratic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
10.4.3 von Koch snowflake . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
10.4.4 Sierpiński carpet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
10.4.5 Sierpiński triangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
10.5 Fractal dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
10.5.1 General definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
10.5.2 Box dimension/capacity dimension . . . . . . . . . . . . . . . . . . . . . . . . . 112
10.5.3 Cantor set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
10.5.4 von Koch curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
10.5.5 Sierpiński carpet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
10.6 Pointwise and correlation dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
10.7 Cantor Rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
10.8 Barnsley Fern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
10.9 Mandelbrot Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

11 Evolution of Volumes in Phase Space 120


11.1 Damped pendulum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
11.2 Limit cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

12 Attractors 122
12.1 2-dimensional torus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

13 The Lorenz equations 126


13.1 Properties of the Lorenz equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
13.2 Giovanni Mirouh’s Lecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
13.2.1 Stream function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
13.2.2 Temperature perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
13.2.3 The Lorenz model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
13.2.4 Fixed points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
13.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

14 Chaos 134

15 Iterated maps 136


15.1 Poincaré maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
15.2 One-dimensional maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
15.3 Cobweb diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
15.4 Lyapunov exponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
15.5 Lyapunov exponents in continuous systems . . . . . . . . . . . . . . . . . . . . . . . . 141
15.5.1 Constant systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
15.5.2 Time-dependent systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
15.5.3 Lyapunov time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
15.6 p-cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
15.7 Tent map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
15.8 Logistic map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
15.9 Hénon map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149


15.10 Quadratic map and the Mandelbrot set . . . . . . . . . . . . . . . . . . . . . . . . . 151


15.11 Other chaotic maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154

16 Universality and chaos 155


16.1 Real eigenvalues with λᵢ > 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
16.2 Real eigenvalues with λᵢ < 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
16.3 Complex eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
16.4 Unimodal maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
16.5 The logistic map as a unimodal map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
16.6 Renormalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
16.7 Lyapunov exponent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
16.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

17 The 0-1 test for chaos 165


17.1 Example: the logistic map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
17.2 The parameter c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
17.3 Resolution and convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
17.4 Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
17.5 How does it work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
17.6 Credits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

18 Generating Fractals 171


18.1 Random numbers to fractals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
18.2 Lindenmayer-systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
18.2.1 Rules example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
18.2.2 Turtle graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
18.2.3 Branching and 3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
18.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
18.3.1 Python code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
18.3.2 Try it yourself . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
18.3.3 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
18.4 Fractals and numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
18.5 Fractal Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
18.6 Fractal compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
18.7 More on the Mandelbrot and Julia sets . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

19 More on chaotic oscillators 178


19.1 Rössler-band attractor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
19.2 van der Pol oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

1 Introduction
1.1 History of Dynamics
17th century Newton, Leibniz → Calculus and differential equations

19th century planetary motion → (analytic) solutions impossible for N ≥ 3 bodies

Poincaré Looked for qualitative behaviour rather than quantitative solutions: a geometric approach.

20th century Non-linear oscillators (e.g. valves), convection (Rayleigh-Bénard)

1963 Lorenz discovers strange attractors, chaos

1970s. . . chaos, fractals

More information
• Chaos and the Butterfly Effect (Uni. Nottingham)
https://www.youtube.com/watch?v=WepOorvo2I4
• History of Dynamics (MIT)
https://www.youtube.com/watch?v=zv6Qe6T6UYI

1.2 Basic Definitions


An n-dimensional or nth-order system is described by n variables, also called degrees of freedom,
given “equations of motion”,

ẋ₁ = F₁(x₁, x₂, …, xₙ) , (1.1)
ẋ₂ = F₂(x₁, x₂, …, xₙ) , (1.2)
…
ẋₙ = Fₙ(x₁, x₂, …, xₙ) , (1.3)

where the dot means the derivative with respect to the independent variable, t, i.e. d/dt (t is often
thought of as time, and the xᵢ as positions). We can write this using the more compact vector notation,

x = (x₁, x₂, …, xₙ)ᵀ , (1.4)

hence

ẋ = (ẋ₁, ẋ₂, …, ẋₙ)ᵀ . (1.5)

Then

ẋ = F(x) . (1.6)

Note that F is now a vector function.


Definition 1 - Autonomous equations.

Define an autonomous equation as one that is of the form,

F = F(x) , (1.7)

that is F ≠ F(t), i.e. ∂F/∂t = 0. Autonomous systems are also called time-invariant systems, if t
represents time. Define a non-autonomous equation as one that is a function of x and t, i.e.,

F = F(x, t) . (1.8)

Note that an n-dimensional non-autonomous equation is equivalent to an (n + 1)-dimensional
autonomous equation, where the extra equation contains the time dependence, e.g. xₙ₊₁ = t, and
hence ẋₙ₊₁ = 1 is the appropriate term in F. This extra term is not dependent on time, it is a
constant, so the system is autonomous.
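This augmentation is easy to see in code; the right-hand side f(x, t) = −x + t below is a made-up illustration, not from the notes:

```python
# A non-autonomous system xdot = f(x, t) becomes autonomous by appending the
# extra variable s = t, whose equation of motion is simply sdot = 1.
def f(x, t):
    return -x + t              # illustrative non-autonomous right-hand side

def F(y):
    x, s = y                   # augmented state y = (x, s)
    return (f(x, s), 1.0)      # F depends only on the state y: autonomous

print(F((2.0, 0.5)))           # (-1.5, 1.0)
```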
We will study two general types of systems:

1. Differential equations, which are continuous functions describing, for example, the evolution
of a system in time, as in equation 1.6.

2. Iterated maps, also called difference equations, where the values at one “timestep” depend
on the previous in the form

xᵢ₊₁ = F(xᵢ) . (1.9)

These are of great relevance because continuous functions usually must be discretized if
they are to be solved on a computer.

One can convert from differential to difference equation form, e.g. with the Euler method. Consider
a differential equation,

y′ = f(y, t) , (1.10)

then this can be approximated by taking the first terms in the Taylor expansion of f(yₙ, tₙ),

yₙ₊₁ = yₙ + h f(yₙ, tₙ) , (1.11)

where h is a “timestep”. Smaller timesteps lead to more accurate solutions, but have an associated
extra computational cost. More advanced methods include the (hopefully familiar) Runge-Kutta
scheme and may involve f(yₙ, yₙ₊₁, tₙ, tₙ₊₁, …).

Definition 2 - Linear systems.

Linear systems are those which can be written

F(x) = Ax , (1.12)

where x is a vector of n elements and A is a (constant) n × n matrix.


Linear systems are “easy” in that they can be split into n one-dimensional problems,

ẋ = Ax , (1.13)

then we can find the n eigenvectors by solving,

A vᵢ = λᵢ vᵢ , (1.14)

where i = 1…n and the λᵢ are the eigenvalues. Then the general solution is

x = Σⁿᵢ₌₁ cᵢ vᵢ e^{λᵢ(t − t₀)} , (1.15)

where the constants cᵢ are determined by initial or boundary conditions. The eigenvalues, eigen-
vectors and constants cᵢ can all be complex.
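Equation 1.15 can be checked numerically. This sketch (assuming NumPy is available) solves ẋ = Ax by eigendecomposition for the illustrative matrix A = [[0, 1], [−1, 0]], a harmonic oscillator whose solution is the familiar cos/sin pair:

```python
# Solve xdot = A x via Eq. 1.15: x(t) = sum_i c_i v_i exp(lambda_i t).
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])      # illustrative: harmonic oscillator
x0 = np.array([1.0, 0.0])                    # initial condition x(0)

lam, V = np.linalg.eig(A)                    # eigenvalues lam_i, eigenvectors as columns of V
c = np.linalg.solve(V, x0)                   # constants c_i fixed by the initial condition

def x(t):
    # the eigenvalues and constants are complex, but they combine to a real solution
    return (V @ (c * np.exp(lam * t))).real

t = 0.7
print(x(t), np.cos(t), -np.sin(t))           # matches the known solution (cos t, -sin t)
```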

Definition 3 - Non-linear systems.


Non-linear systems are not easy in any sense of the word. They cannot be written in the form
of equation 1.12 so require other techniques if we are to find solutions, assuming we can do so at
all.

The dimensionality of the problem is important to the behaviour that non-linear systems can
demonstrate.

n = 1 Simple population growth, relaxation dynamics. These systems can display bifurcations.

n = 2 Oscillations, e.g. predator-prey, Josephson junctions, the Van der Pol equation, the pendulum
(see below). These systems can have limit cycles.

n ≥ 3 Complicated motion occurs in three or more dimensions, such as chaotic behaviour, strange
attractors and fractals.

Note: chaos can occur in one-dimensional maps but, in continuous systems, requires three di-
mensions or more.

1.3 Example: the pendulum


Consider a frictionless pendulum consisting of a mass m on a wire of fixed length l which subtends
angle θ to a constant gravitational field of local acceleration g.


[Figure: pendulum of length l and mass m displaced by angle θ; the restoring force along the arc is mg sin θ.]
The equation of motion is,

m l d²θ/dt² = −m g sin θ , (1.16)

which simplifies to,

θ̈ + ω² sin θ = 0 , (1.17)

where

ω² = g/l . (1.18)

We require two degrees of freedom, or “initial conditions”, θ and θ̇, to solve the problem, so it is
two-dimensional. We could write the equation of motion as two first-order equations in terms of
θ and θ̇.
In the simple case where θ ≪ 1, we can approximate sin θ ≈ θ, and we have a linear harmonic
oscillator. The equation of motion becomes the (hopefully) familiar,

d²θ/dt² = −(g/l) θ , (1.19)

with solutions (cf. Eq. 1.15),

θ(t) = θ₀ cos(ωt + φ) , (1.20)
θ̇(t) = −ω θ₀ sin(ωt + φ) , (1.21)

where θ₀ and φ are set by some initial conditions.


The phase space is a diagram of θ vs θ̇, the two dimensions; the trajectory in it is an ellipse, as
evidenced by equations 1.20 and 1.21.


[Figure: elliptical trajectory in the θ–θ̇ phase plane for the small-angle pendulum.]

What happens if θ is large? We can no longer make the simple linearization, sin θ ≈ θ. We
can, however, recall some of our knowledge of physics. The energy of the system, the sum of the
kinetic, T, and potential, U, energies, is a constant of the motion¹, i.e.,

E(θ, θ̇) = T + U = ½ m (l θ̇)² + m g l (1 − cos θ) = constant. (1.22)

The potential energy at any angle θ is,

U = m g l (1 − cos θ) = 2 m g l sin²(θ/2) .

Let θ₀ be the highest point of the motion, where the pendulum stops moving – albeit only instantaneously
– then at this point the angular speed is zero, θ̇(θ₀) = 0, so also T = 0, and the total energy
there comes only from the potential energy,

E(θ₀, θ̇ = 0) = m g l (1 − cos θ₀) = 2 m g l sin²(θ₀/2) , (1.23)

which must be the same as the total energy at any angle, E(θ, θ̇), because energy is conserved,
hence

E(θ, θ̇) = E(θ₀, θ̇ = 0) = 2 m g l sin²(θ₀/2) .

We can then solve for the kinetic energy, T, at any general angle, θ,

T = E − U = ½ m (l θ̇)² = 2 m g l [sin²(θ₀/2) − sin²(θ/2)] , (1.24)

¹ https://en.wikipedia.org/wiki/Noether's_theorem


and then find θ̇,

θ̇² = 4 ω² [sin²(θ₀/2) − sin²(θ/2)] . (1.25)

[Figure: pendulum trajectories in the θ–θ̇ phase plane for various θ₀.]

The “small-angle solution”, when θ₀ ≪ 1, is recovered when considering,

sin²(θ/2) ≈ θ²/4 , (1.26)

then

θ̇² = ω² (θ₀² − θ²) , (1.27)

which is an equation of an ellipse in the θ̇–θ phase space.


Remark 1. What about θ₀ = π?

In this case, sin²(θ₀/2) = 1, hence θ̇ = ±2ω cos(θ/2).

Remark 2. We can write the equations as a two-dimensional autonomous system as follows. Define

v = θ̇ , (1.28)

then

v̇ = −(g/l) sin θ . (1.29)

Equations 1.28 and 1.29 together form the required two-dimensional autonomous system.

There is an analytic solution which does not require a small angle approximation, although
it does require elliptic integrals, e.g. http://sbfisica.org.br/rbef/pdf/070707.pdf,
https://en.wikipedia.org/wiki/Elliptic_integral.
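The two-dimensional system of Remark 2 is straightforward to integrate numerically. The sketch below (assuming SciPy is available; the values g = 9.81, l = 1 and θ₀ = 2 are illustrative) checks that equation 1.25 holds along the computed trajectory:

```python
# Pendulum as the 2D autonomous system of Remark 2: theta-dot = v, v-dot = -(g/l) sin(theta).
# Along the trajectory, Eq. 1.25 predicts v^2 = 4 w^2 [sin^2(theta0/2) - sin^2(theta/2)].
import numpy as np
from scipy.integrate import solve_ivp

g, l = 9.81, 1.0              # illustrative values
w2 = g / l                    # omega^2 of Eq. 1.18
theta0 = 2.0                  # release angle (rad), far beyond the small-angle regime

def rhs(t, y):
    theta, v = y
    return [v, -w2 * np.sin(theta)]

sol = solve_ivp(rhs, (0.0, 5.0), [theta0, 0.0],
                rtol=1e-10, atol=1e-10, dense_output=True)
theta, v = sol.sol(3.0)
predicted = 4.0 * w2 * (np.sin(theta0 / 2)**2 - np.sin(theta / 2)**2)
print(v**2, predicted)        # agree to integrator accuracy: energy is conserved
```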


1.4 Solving non-linear systems on a computer

Physical systems are represented by a system of equations,

ẋ = (F₁(x), F₂(x), …, Fₙ(x))ᵀ = f(x) , (1.30)

then we can expand the solution for x at a time t₀ + Δt, given a known solution at t = t₀, as a
Taylor series,

x(t₀ + Δt) = x(t₀) + Δt (∂x/∂t)|ₜ₌ₜ₀ + O(Δt²)
           = x₀ + f(x₀) Δt , (1.31)

where we defined x₀ = x(t₀). We can then solve for the evolution of the system for as long as we
like, out to a time t = nΔt, where,

xₙ ≡ x(t = nΔt) , (1.32)

and n = 0, 1, …, ∞ (this also works when n < 0).

1.4.1 Euler’s method

The simplest method is that of Euler,

xₙ₊₁ = xₙ + f(xₙ) Δt . (1.33)
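A minimal implementation, with the illustrative test problem ẋ = −x (not from the notes), whose exact solution e⁻ᵗ makes the error easy to check:

```python
# Euler's method, Eq. 1.33, for a scalar equation xdot = f(x, t).
import math

def euler(f, x0, t0, t1, h):
    n = round((t1 - t0) / h)        # number of timesteps
    x = x0
    for i in range(n):
        x += h * f(x, t0 + i * h)
    return x

# Illustrative test: xdot = -x with x(0) = 1 has the exact solution exp(-t).
approx = euler(lambda x, t: -x, 1.0, 0.0, 1.0, 0.001)
print(approx, math.exp(-1.0))       # the error shrinks roughly linearly with h
```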

1.4.2 Improved Euler’s method

We can improve by guessing a solution, x̃ₙ₊₁, using Euler's method, then use this to better approximate
the derivative,

xₙ₊₁ = xₙ + [f(xₙ) + f(x̃ₙ₊₁)] Δt/2 . (1.34)

Such methods are simple to construct. In some cases the solutions are unstable (f would be described
as stiff). There are other methods which typically are more precise and more stable, but
involve more computational expense.

1.4.3 Fourth-order Runge-Kutta method

Define

k₁ = f(xₙ) · Δt , (1.35)
k₂ = f(xₙ + ½ k₁) · Δt , (1.36)
k₃ = f(xₙ + ½ k₂) · Δt , (1.37)
k₄ = f(xₙ + k₃) · Δt , (1.38)

then the solution is given by,

xₙ₊₁ = xₙ + (1/6) [k₁ + 2k₂ + 2k₃ + k₄] + O((Δt)⁵) . (1.39)

Note that this is the “classical” Runge-Kutta method; there are many variations on its theme.
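The scheme translates directly into code; this sketch again uses the illustrative test problem ẋ = −x:

```python
# Classical fourth-order Runge-Kutta step, Eqs. 1.35-1.39, for an autonomous xdot = f(x).
import math

def rk4_step(f, x, dt):
    k1 = f(x) * dt
    k2 = f(x + 0.5 * k1) * dt
    k3 = f(x + 0.5 * k2) * dt
    k4 = f(x + k3) * dt
    return x + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Illustrative test: xdot = -x, exact solution exp(-t).
x, dt = 1.0, 0.1
for _ in range(10):                 # integrate to t = 1
    x = rk4_step(lambda x: -x, x, dt)
print(x, math.exp(-1.0))            # far more accurate than Euler at the same dt
```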

1.4.4 Newton Raphson and Henyey methods

These are relaxation methods which use equations such as Eq. 1.34 but then iterate to converge on
a solution. Methods such as Henyey's allow many coupled equations to be solved simultaneously,
such as when the stellar evolution equations are discretized.
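For a single equation g(x) = 0, the Newton-Raphson iteration is just x → x − g(x)/g′(x); a minimal sketch, with √2 as an illustrative target root:

```python
# Newton-Raphson iteration for a root of g(x): x -> x - g(x)/g'(x).
import math

def newton(g, dg, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Illustrative: the positive root of g(x) = x^2 - 2 is sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root, math.sqrt(2.0))
```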

1.5 Flow in one dimension


In general,

ẋ = f(x) , (1.40)

where f is a time-independent function. We can then separate and integrate to solve,

∫ₓ₀^x(t) dx / f(x) = ∫ₜ₀^t dt . (1.41)

However, this may not be easy if f(x) is a complicated function. Instead, let us consider the geometric
properties of f(x) and look at qualitative features of the motion.

The phase space is easy to draw: it is just a line, the x-axis.
Consider f(x): we have only three options.

f(x) > 0 then ẋ > 0 and x increases with time

f(x) < 0 then ẋ < 0 and x decreases with time

f(x) = 0 then ẋ = 0, i.e. x is constant, “in equilibrium”, “fixed”, “invariant” or a “singular point”


1.5.1 Example: RC circuit

[Figure: circuit with a constant voltage source V₀ driving a resistor R in series with a capacitor C.]

Note that V₀, R and C are all constants. From our knowledge of basic electronics, the charge Q
on the capacitor obeys the equation,

V₀ = R Q̇ + Q/C , (1.42)

hence

dQ/dt = Q̇ = f(Q) = V₀/R − Q/(RC) . (1.43)
Consider the phase space in the two dimensions, Q and Q̇.

• When Q̇ = 0, corresponding to a fixed point, we have from Eq. 1.42,

Q = Q* = CV₀ , (1.44)

where the * superscript indicates a fixed point.

• When Q < Q*, Q̇ > 0 and “motion” is towards Q*. To show this, write

Q = Q* − ΔQ , (1.45)

where ΔQ is positive. Then

Q̇ = V₀/R − (Q* − ΔQ)/(RC) = V₀/R − (CV₀ − ΔQ)/(RC) = +ΔQ/(RC) > 0 . (1.46)

• Similarly, when Q > Q* we see Q̇ < 0 and “motion” is towards Q*.

• When Q = 0 we have

Q̇ = V₀/R . (1.47)


[Figure: f(Q) = Q̇ vs Q, a straight line from Q̇ = V₀/R at Q = 0 down through the stable fixed point at Q = Q*.]

In such a simple case we can solve analytically given that Q = Q(t₀) at a “start time” t₀,

Q(t) = CV₀ + [Q(t₀) − CV₀] e^{−(t − t₀)/(RC)} . (1.48)

Note that the natural timescale of the circuit is RC.

[Figure: Q(t) relaxing exponentially towards the fixed point Q* = CV₀ over a few multiples of the timescale RC.]
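As a check, the sketch below Euler-integrates equation 1.43 and compares with the analytic solution 1.48 (the component values R = 1 kΩ, C = 1 µF, V₀ = 5 V are illustrative, not from the notes):

```python
# Euler integration of Eq. 1.43, Qdot = V0/R - Q/(R C), against the analytic Eq. 1.48.
import math

R, C, V0 = 1.0e3, 1.0e-6, 5.0      # illustrative component values
tau = R * C                         # natural timescale of the circuit
dt = tau / 1000.0

Q = 0.0                             # start uncharged at t0 = 0
steps = 3000                        # integrate for three time constants
for _ in range(steps):
    Q += dt * (V0 / R - Q / (R * C))

t = steps * dt
analytic = C * V0 * (1.0 - math.exp(-t / tau))
print(Q, analytic, C * V0)          # Q relaxes towards the fixed point Q* = C V0
```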

1.6 Linear stability analysis


Consider a one-dimensional function

ẋ = f(x) , (1.49)

and suppose we have a fixed point x*, defined by

f(x*) = 0 . (1.50)


We want to know about its stability, so we Taylor expand around a point near x*,

x = x* + η , (1.51)

where η ≪ x is “small”. The derivative of x is,

f(x) = ẋ = ẋ* + η̇ = η̇ , (1.52)

because x* is a fixed (constant) point so ẋ* = 0, and the Taylor series becomes,

f(x) = f(x* + η) = f(x*) + f′(x*) η + O(η²) (1.53)
     = f′(x*) η + O(η²) . (1.54)

We neglect the terms O(η²) and are left with,

η̇ = f′(x*) η , (1.55)

which has the general solution,

η ∝ e^{f′(x*) t} . (1.56)

In general, we have three cases:

f′(x*) < 0: x* is stable, symbol ⬤

f′(x*) > 0: x* is unstable, symbol ⭕

f′(x*) = 0: a higher order analysis is required.

Semistable fixed points have symbols ◐ or ◑, where the dark side is the stable side and the light
side is unstable.

1.6.1 Example: population growth

Let n = n(t) be the number of individuals in a population, where n > 0. Then the population
growth rate can be modelled as,

ṅ = r n , (1.57)

where r > 0 is a constant growth rate. The analytic solution is

n(t) = n(0) e^{rt} , (1.58)

i.e. the population grows forever. This is not a very realistic model!

Instead, consider the logistic equation,

ṅ = r̃ n = r n (1 − n/k) , (1.59)

where k is called the carrying capacity, and k > 0. The effective growth rate is r̃ = r(1 − n/k),
which is 0 when n = k, so this is the limiting population. As n → k, r̃ → 0 and growth stops.


[Figure: ṅ vs n for the logistic equation, with an unstable fixed point ⭕ at n₀ = 0 and a stable fixed point ⬤ at n₁ = k.]
Consider the locations of the fixed points where ṅ = 0.

• ṅ = 0 at n₀ = 0. In the general case, we can perturb n linearly, n = n* + Δn, then

ṅ = r (n* + Δn)(1 − n*/k − Δn/k) (1.60)
  = r n* + r Δn − r (n*)²/k − r Δn n*/k − r n* Δn/k − r (Δn)²/k (1.61)
  ≈ r n* (1 − n*/k) + r (1 − 2n*/k) Δn . (1.62)

At n* = n₀ = 0 the population cannot drop, so Δn > 0, so ṅ ≈ r Δn > 0. The fixed point is
unstable because n must grow.

• ṅ = 0 at n₁ = k: this is another fixed point. From our linear expansion above,

ṅ ≈ r k (1 − k/k) + r (1 − 2k/k) Δn (1.63)
  = −r Δn . (1.64)

If Δn < 0 then the population grows up towards n₁ = k.
If Δn > 0 then the population drops back towards n₁ = k.
The fixed point is thus stable.

• Growth toward the stable fixed point is exponential, as it is in the general case, as we shall see
below.


• General solution,

dn/dt = r n (1 − n/k)
dn/[n(k − n)] = (r/k) dt
[log n − log(k − n)]/k = (r/k) t + c
n = k / (1 + exp(−[r t + c k]))

where c is set by the initial conditions. When t = 0, n = nᵢ, hence

c = −(1/k) ln(k/nᵢ − 1) .

• Maximum population:

ṅ = r n (1 − n/k) = 0 at n = k.


[Figure: logistic solutions n(t)/k vs t for several initial conditions; all converge to the carrying capacity n = k on a timescale of order 1/r.]
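The closed form above can be checked against a direct numerical integration; the parameter values below (r = k = 1, nᵢ = 0.1) are illustrative:

```python
# Logistic equation: Euler integration vs the closed form
# n(t) = k / (1 + exp(-[r t + c k])), with c = -(1/k) ln(k/n_i - 1).
import math

r, k, ni = 1.0, 1.0, 0.1            # illustrative parameters
c = -(1.0 / k) * math.log(k / ni - 1.0)

def analytic(t):
    return k / (1.0 + math.exp(-(r * t + c * k)))

n, dt, steps = ni, 1.0e-4, 50000    # Euler-integrate to t = 5
for _ in range(steps):
    n += dt * r * n * (1.0 - n / k)

print(n, analytic(5.0))             # both approach the carrying capacity k
```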

1.6.2 Example: ẋ = x² − x⁴

We have

f(x) = ẋ = x² − x⁴ , (1.65)

hence

f′(x) = 2x − 4x³ . (1.66)

Fixed points are solutions of,

ẋ = 0 , (1.67)

which are

x = x* = 0 (1.68)
       = ±1 . (1.69)


x* = −1 is unstable: f′(−1) = 2 > 0.
If x is slightly larger than x* then f > 0 and motion is away from x*.
If x is slightly smaller than x* then f < 0 and motion is away from x*.
Hence any x near x* = −1 moves away from it.

x* = 0 is semistable: f′(0) = 0, so we look at the sign of f itself.
If x is slightly larger than x* (i.e. positive) then f > 0 and motion is away from x* = 0 in the
positive direction.
If x is slightly smaller than x* (i.e. negative) then f > 0 and motion is towards x* = 0 in the
positive direction.

x* = +1 is stable: f′(+1) = −2 < 0.
If x is slightly larger than x* then f < 0 and motion is towards x*.
If x is slightly smaller than x* then f > 0 and motion is towards x*.

[Figure: f(x) = x² − x⁴, with fixed points marked: unstable ⭕ at x = −1, semistable ◐ at x = 0, stable ⬤ at x = +1.]
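The classification follows mechanically from the signs of f and f′(x*); a sketch using SymPy (assumed available) reproduces it:

```python
# Linear stability of xdot = x**2 - x**4: fixed points and the sign of f'(x*).
import sympy as sp

x = sp.symbols('x', real=True)
f = x**2 - x**4
fixed_points = sorted(sp.solve(sp.Eq(f, 0), x))     # [-1, 0, 1]

for xs in fixed_points:
    slope = sp.diff(f, x).subs(x, xs)               # f'(x*)
    if slope < 0:
        kind = 'stable'
    elif slope > 0:
        kind = 'unstable'
    else:
        kind = 'higher-order analysis needed (here: semistable)'
    print(xs, slope, kind)
```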


1.7 Existence and uniqueness theorem

If f(x) and f′(x) are continuous for x ∈ I, where I is a real open interval² ⊆ ℝ, i.e. a subset of
the real numbers, and x₀ ∈ I, then

f(x) = ẋ , (1.70)
x(t = 0) = x₀ , (1.71)

has a unique solution around t = 0.

For autonomous systems each point in phase space is a complete set of initial conditions.
In the one-dimensional phase space, i.e. on the line, the trajectories can

• Increase monotonically with time

• Decrease monotonically with time

• Stay constant

but they can never go back. Hence oscillations are impossible in one-dimensional systems, except
when the phase space is a circle rather than a line.
Corollary 1. The theorem can be extended to n dimensions, but proof of the theorem is not a simple
matter. The general proof involves Picard iterations of solutions until a series solution is built up. If
we have in general,

y′ = f(x, y) , (1.72)
y(x₀) = y₀ , (1.73)

then we can integrate,

y(x) = y₀ + ∫ₓ₀^x f(z, y(z)) dz . (1.74)

We don't know y(z) but we can guess it is approximately the value at x₀, i.e. y(x₀) = y₀, and integrate
again,

y₁(x) = y₀ + ∫ₓ₀^x f(z, y₀) dz . (1.75)

Now use y₁ as a better guess,

y₂(x) = y₀ + ∫ₓ₀^x f(z, y₁(z)) dz , (1.76)

and repeat to the nth stage. The solution converges to the unique solution. To prove that the
series converges requires the Banach-Caccioppoli fixed-point theorem and Grönwall's lemma, both
of which are beyond the scope of this course but which, naturally, make for interesting reading.
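The iteration is easy to watch converging on a simple case. For the illustrative problem y′ = y, y(0) = 1 (mine, not from the notes), the Picard iterates build up the Taylor series of eˣ term by term:

```python
# Picard iteration for y' = f(x, y) = y with y(0) = 1:
# y_{m+1}(x) = 1 + integral_0^x y_m(z) dz, which builds the series of exp(x).
import sympy as sp

x, z = sp.symbols('x z')
y = sp.Integer(1)                   # initial guess y_0(x) = y(0) = 1
for _ in range(4):
    y = 1 + sp.integrate(y.subs(x, z), (z, 0, x))
    print(sp.expand(y))             # 1 + x, then 1 + x + x**2/2, ...
```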
See also
• http://web.mit.edu/jorloff/www/18.03-esg/notes/existAndUniq.pdf

• https://en.wikipedia.org/wiki/Gr%C3%B6nwall%27s_inequality

• https://en.wikipedia.org/wiki/Banach_fixed-point_theorem
² An interval is a set of real numbers with the property that any number lying between two numbers in the set is also in the set. Open implies that the set does not include its endpoints, and uses the notation (x, y).

21
1.8 Potential functions

1.8 Potential functions


In some cases we can further write,

ẋ = f (x) = −dV /dx ,   (1.77)

where V = V (x) is called the potential, cf. gravitational potential or electric potential. If such a function V exists, then

dV /dt = (dV /dx)(dx/dt)   (1.78)
       = −(dV /dx)² ≤ 0 .   (1.79)

Hence stable points are minima of V (x), which follows from ẋ = f (x) = −dV /dx.
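A minimal sketch makes this concrete (the double-well potential V(x) = x⁴/4 − x²/2 is an assumed example, not from the notes): integrating ẋ = −dV/dx with a forward-Euler step shows V decreasing monotonically along the trajectory until the motion settles in a minimum of V.

```python
# Gradient flow x' = -dV/dx for the double well V(x) = x^4/4 - x^2/2,
# whose minima sit at x = +/-1. V(x(t)) can only decrease along a
# trajectory, so the motion settles into a minimum.

def dV(x):
    return x**3 - x              # dV/dx for V = x^4/4 - x^2/2

def V(x):
    return 0.25 * x**4 - 0.5 * x**2

x, dt = 0.5, 0.01                # start in the right-hand well
history = [V(x)]
for _ in range(5000):
    x += -dV(x) * dt             # forward-Euler step of x' = -dV/dx
    history.append(V(x))

print(x)                         # settles near the minimum at x = +1
monotone = all(b <= a + 1e-12 for a, b in zip(history, history[1:]))
```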

2 Bifurcations
2.1 Bifurcations in 1 dimension
Definition 4 - Bifurcation.
A change in the nature, or the number, of fixed points, when a parameter of the system is varied.

Consider a system,

ẋ = f (x, r )

where r ∈ ℝ is a real parameter and x is the independent variable. When we change r we may change the fixed points of the system (ẋ = 0), either by introducing or removing fixed points, or by changing their nature, e.g. from stable to unstable.

2.1.1 Example: f (r, x) = ẋ = r + x 2 , a “saddle node” or “blue sky” bifurcation

In this case we have three regimes:

r > 0: f (r, x) = r + x² > 0, so f never vanishes and there are no fixed points.

r = 0: f (r, x) = x², so f = 0 only at x = 0 and x∗ = 0 is the only fixed point. It is semi-stable, because f = x² > 0 on both sides of it (equivalently ∂f /∂x = f ′ = 2x changes sign there).

r < 0: f (r, x) = r + x², and f = 0 when x² = −r > 0, i.e. x_{1,2} = ±√(−r), so there are two fixed points.



We can plot x vs r , which shows the bifurcation. This particular type of bifurcation is called a
“saddle node” (because it looks like a saddle?) or “blue sky” bifurcation.
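The three regimes can be checked numerically; this small sketch (an assumed helper, using NumPy's polynomial root finder) counts the real fixed points of ẋ = r + x² for each sign of r.

```python
import numpy as np

# Fixed points of xdot = r + x^2 are the real roots of x^2 + 0x + r = 0.
def fixed_points(r):
    roots = np.roots([1.0, 0.0, r])
    real = [z.real for z in roots if abs(z.imag) < 1e-9]
    return sorted(set(round(x, 9) for x in real))   # merge the double root

print(fixed_points(+1.0))   # r > 0: no fixed points
print(fixed_points(0.0))    # r = 0: only the semi-stable point x* = 0
print(fixed_points(-1.0))   # r < 0: two fixed points at +/- sqrt(-r)
```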


[Figure: bifurcation diagram in the (r, x) plane: the two branches of fixed points, x_1 and x_2 = −x_1 (one stable, one unstable), meet and annihilate at the bifurcation point r = 0.]
2.2 Prototypes
The form ẋ = r ± x² is called a prototype or normal form. From this, similar forms can be constructed with the same qualitative properties, e.g. by multiplying terms by constants. Prototypes represent all bifurcations of their kind.
Consider the following example,

f (x) = −x 3 + 2x 2 − r x + 0.1 , (2.1)

which is shown below.


[Figure: f (x, r) = −x³ + 2x² − r x + 0.1 plotted against x for r = 0, 0.2, 0.4, 0.6, 0.8, 1 and 1.2.]
Then f (x) = 0 at the bifurcation point, r = r_c, which is difficult to solve exactly (who remembers the cubic formula?). However, we do not have to do everything the hard way. To find the bifurcation point, we can use the Taylor expansion,

f (x, r) = f (r_c, x_0) + (∂f /∂r)|_{r_c, x_0} (r − r_c) + (∂f /∂x)|_{r_c, x_0} (x − x_0) + ½ (∂²f /∂x²)|_{r_c, x_0} (x − x_0)² + . . . ,   (2.2)

together with the condition at the fixed point (the "tangent condition" at a saddle node),

(∂f /∂x)|_{r_c, x_0} = 0 ,   (2.3)

and if we neglect the higher terms we have,

f (x, r) = a (r − r_c) + b (x − x_0)² ,   (2.4)

where

a = (∂f /∂r)|_{r_c, x_0}   (2.5)

and

b = ½ (∂²f /∂x²)|_{r_c, x_0} .   (2.6)
The constants a and b are much easier to calculate than the solutions to the original cubic equation. In our case,

a = −x_0 ,   (2.7)

and

b = ½ ∂[−3x² + 4x − r]/∂x |_{r_c, x_0} = 2 − 3x_0 ,   (2.8)

so

f (x, r) = −x_0 (r − r_c) + (2 − 3x_0) (x − x_0)² .   (2.9)

This looks like a parabola, so to within scaling constants it is the same as our prototype and has the same general behaviour.

2.2.1 Example: f (x) = ẋ = r − x − e −x

We do not have to calculate f (x) over the entire (x, r) plane; it is simpler to take a geometric approach. Fixed points are at f (x) = ẋ = 0, i.e. where,

r − x = e^{−x} ,   (2.10)

so plot r − x against e^{−x} for a range of r.
[Figure: e^{−x} and the lines r − x for r = −1, +1 and +3, plotted over −2 ≤ x ≤ 4.]


The critical point is at r = 1: if r > 1 there are two fixed points, if r < 1 there are none. When r = 1 we have

1 − x_c = e^{−x_c} ,   (2.11)

which has the solution x_c = 0, so the critical point is at (x_c, r_c) = (0, 1).
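Above the critical value the two fixed points can be located numerically. The sketch below uses a hand-rolled bisection (an assumed helper, not from the notes) to find both roots of r − x = e^{−x} for r = 3.

```python
import math

def f(x, r=3.0):
    # fixed points of xdot = r - x - exp(-x) are the roots of f
    return r - x - math.exp(-x)

def bisect(g, a, b, n=100):
    # simple interval halving; assumes g changes sign on [a, b]
    for _ in range(n):
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

x_low = bisect(f, -2.0, -1.0)    # root on the negative side
x_high = bisect(f, 2.0, 3.5)     # root near x ~ r
print(x_low, x_high)
```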


What about the nature of the fixed points?

r − x > e −x implies f (x) > 0

r − x < e −x implies f (x) < 0

and from this information we can construct the entire bifurcation diagram. Around the bifurcation, we can expand f (x),

f (x∗) = 0 = r − x∗ − e^{−x∗}
        = r − x∗ − (1 − x∗ + x∗²/2! − x∗³/3! + . . .)
        = r − x∗ − 1 + x∗ − x∗²/2 + O(x∗³)
        ≈ (r − 1) − x∗²/2 .   (2.12)
Again, we have the quadratic prototype, and everything we have learned about it applies equally to this function, f (x). The important point is that we can plot a detailed bifurcation diagram using the approximate solution,

x∗ = ±√(2r − 2) .   (2.13)

[Figure: bifurcation diagram near the critical point: the branches x∗ = ±√(2r − 2) plotted against r.]


2.3 Types of 1D bifurcations


2.3.1 Saddle Node / Blue Sky: ẋ = r ± x 2

We saw this above.

[Figure: saddle-node bifurcation diagram in the (r, x) plane.]

2.3.2 Transcritical: ẋ = r x − x 2

The critical points lie on the lines r x − x² = 0, i.e. x = 0 or x = r.


[Figure: transcritical bifurcation diagram: the lines of fixed points x = 0 and x = r exchange stability where they cross at the origin.]

2.3.3 Supercritical pitchfork: ẋ = r x − x 3


The critical points lie on the lines r x − x³ = 0, i.e. x = 0 or, when r > 0, x = ±√r.

[Figure: supercritical pitchfork bifurcation diagram: the stable branch x = 0 for r < 0 splits into two stable branches x = ±√r, with x = 0 unstable, for r > 0.]


2.3.4 Subcritical pitchfork: ẋ = r x + x 3


The critical points lie on the lines r x + x³ = 0, i.e. x = 0 or, when r < 0, x = ±√(−r).

[Figure: subcritical pitchfork bifurcation diagram: unstable branches x = ±√(−r) exist for r < 0, where x = 0 is stable; for r > 0 the only fixed point, x = 0, is unstable.]

2.4 Insect outbreak!


The population of bugs at time t is n(t). In the absence of predators only food limits the number of bugs and the logistic equation applies,

ṅ = R n (1 − n/k) ,   (2.14)

where k is the limiting population.
However, there is also a population of birds that likes to eat the bugs. The rate at which the birds eat bugs is,

p(n) = B n² / (A² + n²) ,   (2.15)

where A and B are non-zero. When n is small (n ≪ A), p(n) ≈ 0, but when n is large (n ≫ A), p(n) ≈ B. The rate of growth, or decline, of the bug population is thus,

ṅ = R n (1 − n/k) − B n² / (A² + n²) .   (2.16)


2.4.1 Scale free equation

We now try to make an equation in a general, scale-free/unit-free form, by defining a new set of variables,

x = n/A ,   (2.17)
τ = B t /A ,   (2.18)
r = R A/B ,   (2.19)
c = k/A ,   (2.20)

and we can rewrite Eq. 2.16, by dividing through by B and replacing n by A × n/A, as

(A/B)(ṅ/A) = (R A/B)(n/A) [1 − (A/k)(n/A)] − (n/A)² / [1 + (n/A)²] ,   (2.21)

and now note that,

d/dt = (B/A) d/dτ ,   (2.22)

hence

dx/dτ = r x (1 − x/c) − x²/(1 + x²)   (2.23)
      = x [ r (1 − x/c) − x/(1 + x²) ] .   (2.24)
The fixed points are at,

(A/B) dx/dt = dx/dτ = 0 ,   (2.25)

one obvious solution of which is x = 0, but this just means there are never any bugs: it is a trivial solution. Assume there are some bugs, and let g be a function which depends on the model parameters,

g(x) = r (1 − x/c)   (2.26)

and h be a function which is independent of the model parameters,

h(x) = x/(1 + x²) ,   (2.27)

then the fixed-point condition is

x g(x) = x h(x) .   (2.28)

Other than the trivial solution we have x ≠ 0, i.e.,

g(x) = h(x) .   (2.29)
We can alter our model by changing g(x) and comparing to the fixed function h(x): a relatively easy process. We can also plot g(x) and h(x) to see where they cross, at the (non-trivial) fixed points x∗, which we don't solve for because it is complicated. To give you an idea, try solving for x∗,
x∗/(1 + x∗²) = r (1 − x∗/c) .   (2.30)


2.4.2 Stability

• When g exceeds h, motion is in the positive x direction.

• When h exceeds g, motion is in the negative x direction.

• h(x) is a fixed function: it does not depend on the model parameters, so it cannot change.

• g(x) is a straight line which depends on r and c, the "free parameters" of the problem (e.g. set by other factors, like the nature of the bugs, physics, etc.).

We can now examine the general solutions to the problem graphically. One, shown below, has r = 1, c = 4. There is only one non-trivial fixed point (where the lines cross, i.e. g(x) = h(x)) at x ≈ 2.7. Whatever the initial population, this is the final number of bugs, because the fixed point is stable.
[Figure: g(x) = r(1 − x/c) and h(x) = x/(1 + x²) for r = 1, c = 4: the curves cross at the single stable non-trivial fixed point, x ≈ 2.7.]
Another example, showing different behaviour, is illustrated by r = 0.5, c = 10. There are now
three fixed points at x 1 ≈ 0.7, x 2 ≈ 2 and x 3 ≈ 7.3. If x (t = 0) < 2 the population of bugs settles to
the “low” equilibrium state at x 1 ≈ 0.7. However, if the population ever exceeds x ≈ 2, e.g. by a
sudden migration from elsewhere, the number of bugs will reach a maximum at x 3 ≈ 7.3 and will
stay there. If you were a pest controller, you’d not want this to happen!


[Figure: g(x) and h(x) for r = 0.5, c = 10: the curves cross at the three non-trivial fixed points, x_1 ≈ 0.7 (stable), x_2 ≈ 2 (unstable) and x_3 ≈ 7.3 (stable).]
What happens when the parameters r or k (equivalently c) change?

• Increasing r or k, the lower stable state vanishes in a saddle-node bifurcation and the population jumps to the single remaining solution, known as the "outbreak" population. This can be large, and the system cannot recover to the earlier state by then reducing r or k, because this solution is stable.

• If the population can be kept below the unstable fixed point, it will remain low, in the "refuge" state.

• If the population increases above the unstable fixed point, it will balloon to the "outbreak" state.

• The parameters r and k, which determine the line g(x), control the behaviour.

• There is a range of r, k in which there are two stable states: the system is then said to be bistable.


[Figure: h(x) together with the lines g(x) = r(1 − x/10) for r = 0.2, 0.3, 0.4, 0.5 and 0.6, with the saddle-node tangency points marked.]

[Figure: the critical population x_crit against r.]

2.4.3 Bifurcation curves

We cannot explicitly calculate the bifurcation curves, but we can do so parametrically. The saddle-node bifurcation, as discussed in Section 2.2, requires that ẋ = 0 and that the tangent condition, ∂f /∂x = 0, is satisfied. Thus,
x/(1 + x²) = r (1 − x/c)   (2.31)

and

d/dx [ x/(1 + x²) ] = d/dx [ r (1 − x/c) ]   (2.32)

(1 − x²)/(1 + x²)² = −r/c .   (2.33)

Now substitute Eq. 2.33 into Eq. 2.31,

r = 2x³/(1 + x²)² ,

and solve for c to find,

c = 2x³/(x² − 1) .
Note that c > 0 implies x > 1. We can now plot x in the r, c plane.
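A quick numerical sketch (assumed, pure Python) traces the curve: sweeping the parameter x > 1 through r(x) = 2x³/(1+x²)² and c(x) = 2x³/(x²−1) generates the boundary of the bistable wedge; both dr/dx and dc/dx vanish at x = √3, the cusp where the two branches meet.

```python
import math

def r_of(x):
    return 2.0 * x ** 3 / (1.0 + x * x) ** 2

def c_of(x):
    return 2.0 * x ** 3 / (x * x - 1.0)

xs = [1.1 + 0.1 * k for k in range(100)]       # the parameter x > 1
curve = [(c_of(x), r_of(x)) for x in xs]       # points in the (c, r) plane

r_max = max(r for _, r in curve)
print(r_max)   # near the cusp value 3*sqrt(3)/8 ~ 0.6495, at x = sqrt(3)
```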
[Figure: the bifurcation curves in the (c, r) plane, dividing it into "Refuge" (low r), "Bistable" (the wedge between the two curves) and "Outbreak" (high r) regions.]
At low r there is only the refuge (low) bug state. At high r there is only the outbreak. In the
bistable region, both low and high bug states are possible.

2.4.4 Bistable states

Given

r = 2x³/(1 + x²)² ,

c = 2x³/(x² − 1) ,
write x(r, c) (where x ≠ 0),

(r/c)(1 + x²)² = x² − 1 ,

which has the solutions,

x = √[ c/(2r) − 1 ± √( ¼ (2 − c/r)² − (1 + c/r) ) ] ,

where we take the positive root because x > 0.

2.4.5 General state

The general solution for x follows from,

x = r (1 + x²)(1 − x/c) ,

which implies, with r > 0,

0 = x³ − c x² + x (1 + c/r) − c ,

which is non-trivial to solve analytically³, but quick enough using (say) a numerical technique.
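For instance, NumPy's companion-matrix root finder solves the cubic directly. The helper below is an assumed sketch that keeps only the real roots, reproducing the bistable triple for r = 0.5, c = 10 and the single fixed point x ≈ 2.7 for r = 1, c = 4.

```python
import numpy as np

def bug_fixed_points(r, c):
    # real roots of x^3 - c x^2 + (1 + c/r) x - c = 0
    roots = np.roots([1.0, -c, 1.0 + c / r, -c])
    return sorted(z.real for z in roots if abs(z.imag) < 1e-9)

triple = bug_fixed_points(0.5, 10.0)   # bistable regime: three real roots
single = bug_fixed_points(1.0, 4.0)    # one real root, near x ~ 2.7
print(triple)
print(single)
```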

[Figure: surface plot of the fixed points x(r, c) of the scaled insect-outbreak equation.]

3
https://en.wikipedia.org/wiki/Cubic_function


[Figure: the same x(r, c) surface viewed from another angle.]

Exercise: try plotting this yourself in “3D”, e.g. with gnuplot 4 .

2.5 Ghosts and bottlenecks: the non-uniform oscillator


One-dimensional motion on a circle can be periodic,

θ̇ = dθ/dt = f (θ, a) = ω − a sin θ   (2.34)

where a > 0 and ω > 0. Always,

−1 ≤ sin θ ≤ +1 ,

thus we have three possible states:

a < ω: f (θ, a) > 0 always


Motion is in infinite cycles, slower at the “top” (when sin θ is maximal), faster at the “bottom”.
4
http://gnuplot.info/


[Figure: phase circle for a < ω: motion is slow near the top (θ = π/2) and fast near the bottom.]

a = a_c = ω: f (θ, a) = 0 at θ = π/2, and f (θ, a) > 0 otherwise.
θ = π/2 is a semistable fixed point, otherwise motion is cyclic.

[Figure: phase circle for a = a_c = ω: the motion stops at the semistable point θ = π/2 only as t → ∞.]

a > ω: f (θ, a) = 0 has solutions.
There are two fixed points, at θ_{1,2} = sin⁻¹(ω/a), one stable and one unstable. The motion ends at the stable fixed point: it is not periodic.


[Figure: phase circle for a > ω: a stable fixed point at θ_1 and an unstable fixed point at θ_2; there is no periodic motion.]

Together, the above represent a saddle-node bifurcation.

2.5.1 Period of oscillation when a < ω

We can write the time an oscillation cycle takes as the time it takes θ to traverse 2π, e.g. from −π to +π,

T = ∫_0^T dt
  = ∫_{−π}^{+π} (dt/dθ) dθ
  = ∫_{−π}^{+π} dθ / (ω − a sin θ)
  = [ (2/√(ω² − a²)) tan⁻¹( (ω tan(θ/2) − a) / √(ω² − a²) ) ]_{−π}^{+π}
  = 2π/√(ω² − a²) .   (2.35)
ω2 − a 2
Hint: do the integral at https://www.wolframalpha.com. A proof is given below (there are alternative proofs which do not require contour integration, but they're even more hideous in terms of algebra).
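The closed form is easy to check numerically before wading into the proof: the integrand is smooth and 2π-periodic, so a plain equal-weight (trapezoidal) sum converges very quickly. A hedged sketch:

```python
import math

def period_numeric(omega, a, n=20000):
    # equal-weight sum over one period; for a smooth periodic integrand
    # the trapezoidal rule is extremely accurate
    h = 2.0 * math.pi / n
    return sum(h / (omega - a * math.sin(-math.pi + k * h)) for k in range(n))

def period_exact(omega, a):
    return 2.0 * math.pi / math.sqrt(omega * omega - a * a)

print(period_numeric(1.0, 0.5), period_exact(1.0, 0.5))
print(period_numeric(1.0, 0.9), period_exact(1.0, 0.9))
```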


Proof. Let z = e^{ix}, thenᵃ,

x = −i ln z ,   (2.36)
dx = dz/(iz) ,   (2.37)
sin x = (e^{ix} − e^{−ix})/(2i)   (2.38)
      = (1/2i)(z − 1/z) ,   (2.39)

and our integral can be rewritten as a contour integralᵇ around the curve |z| = 1,

∫_{−π}^{+π} dx / (α + β sin x) = ∮_{|z|=1} [ 1 / (α + (β/2i)(z − 1/z)) ] dz/(iz)   (2.40)
  = ∮_{|z|=1} dz / [ iαz + (β/2)(z² − 1) ]   (2.41)
  = (2/β) ∮_{|z|=1} dz / [ z² + (2iα/β)z − 1 ] ≡ (2/β) I .   (2.42)

The poles lie at the roots of the quadratic,

z² + (2iα/β)z − 1 = (z − z_+)(z − z_−) ,   (2.43)

which are, according to the quadratic formula,

z_+ = −i (α/β) [ 1 − √(1 − β²/α²) ] ,   (2.44)
z_− = −i (α/β) [ 1 + √(1 − β²/α²) ] .   (2.45)

We need to know which pole is inside the unit circle, i.e. |z| < 1. Our condition, a < ω, is equivalent to β² < α², hence β²/α² < 1 and the quantity inside the square root is always positive and less than 1. The product of the roots is z_+ z_− = −1, so |z_+||z_−| = 1; since |z_−| > |z_+|, only z_+ can lie inside the unit circle and z_− always has a magnitude exceeding 1.


The contour integral can be evaluated using Cauchy’s integral formulac ,

I = 2πi R + , (2.46)

40
2.5 Ghosts and bottlenecks: the non-uniform oscillator

where R_+ is the residue at z_+. This is, by definition,

R_+ = lim_{z→z_+} (z − z_+) · 1/[(z − z_+)(z − z_−)]   (2.47)
    = 1/(z_+ − z_−)   (2.48)
    = 1 / [ −i (α/β) ( (1 − √(1 − β²/α²)) − (1 + √(1 − β²/α²)) ) ]   (2.49)
    = −βi / (2√(α² − β²)) ,   (2.50)

hence

I = 2πi R_+ = βπ/√(α² − β²) .   (2.51)

Remembering our factor 2/β,

∫_{−π}^{+π} dx / (α + β sin x) = (2/β) I = 2π/√(α² − β²) .   (2.52)

In our case, α = ω and β = −a, hence

∫_{−π}^{+π} dθ / (ω − a sin θ) = 2π/√(ω² − a²) .   (2.53)

ᵃ See a similar proof at https://web.williams.edu/Mathematics/sjmiller/public_html/372Fa15/coursenotes/Trapper_MethodsContourIntegrals.pdf
ᵇ https://en.wikipedia.org/wiki/Contour_integration
ᶜ https://en.wikipedia.org/wiki/Cauchy’s_integral_formula
ᵈ https://en.wikipedia.org/wiki/Residue_theorem

When a ≈ ω we have ω + a ≈ 2ω and ω − a = ε, where ε ≪ ω and ε ≪ a, then,

T = 2π/√(ω² − a²) = 2π/√((ω + a)(ω − a))   (2.54)
  = π √(2/ω) · (1/√ε) .   (2.55)

[Figure: θ(t) for ω = 1 and a = 0.5, 0.98, 0.99 and 1: as a → ω the motion spends ever longer passing the bottleneck near θ = π/2.]

As the saddle-node bifurcation is approached, ε → 0 and the period becomes infinite (T → +∞).
In general, a prototype like,

ẋ = (r − r 0 ) + (x − x 0 )2 , (2.56)


has a bottleneck timescale,

T_bottleneck ≈ ∫_{−∞}^{+∞} dx / [ (r − r_0) + (x − x_0)² ] = π/√(r − r_0) .   (2.57)
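The bottleneck integral is also easy to verify by brute force. The sketch below (assumed, NumPy) compares a trapezoidal sum over a large finite range with π/√(r − r₀), taking r₀ = x₀ = 0; the neglected tails contribute only about 2/L.

```python
import numpy as np

def t_bottleneck_numeric(r, L=1000.0, n=400_001):
    # trapezoidal rule for integral of 1/(r + x^2) over [-L, L]
    x, h = np.linspace(-L, L, n, retstep=True)
    f = 1.0 / (r + x * x)                   # prototype with r0 = 0, x0 = 0
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

r = 0.01
approx = t_bottleneck_numeric(r)
exact = np.pi / np.sqrt(r)
print(approx, exact)   # differ only by the ~2/L tail that was cut off
```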

2.6 Superconducting Josephson junction


A Josephson junction is a sandwich of non-superconducting material between two superconducting plates. The current through the non-superconducting material is what we seek to model here. A review can be found at https://www.scientificamerican.com/article/what-are-josephson-juncti/ and there is a more detailed description at https://en.wikipedia.org/wiki/Josephson_effect or your superconducting physics textbook of choice. The detailed physics can be worked out using a full quantum mechanical treatment, but we do not require all this to understand how the junction behaves.

[Figure: a Josephson junction: two superconductors, with wavefunctions ψ₁e^{iφ₁} and ψ₂e^{iφ₂}, separated by a thin barrier with voltage V across it.]

The current through the junction is

I s = I c sin φ , (2.58)

and the voltage across the junction is

V = (ℏ/2e) φ̇ ,   (2.59)
where

φ (t ) ≡ φ1 − φ2 ,

is the phase difference between the two superconductors. Note that if V is nonzero, the phase φ (t )
evolves because φ̇ (t ) = d φ/d t ∝ V .

I < I c : we can have V = 0 and then φ̇ = 0 implying φ and I s are constant.


I > I c : non-zero voltage implies an evolving phase and I s is an alternating (super)current.

The junction has a natural resistance, which absorbs the non-superconducting current, and a capacitance. The equivalent circuit is shown below.

[Figure: equivalent circuit: the current source I drives the junction (critical current I_c) in parallel with a resistance R and a capacitance C.]

The equation relating the voltage and current is,

C V̇ + V/R + I_c sin φ = I ,   (2.60)

and we can substitute Eqs. 2.58 and 2.59 to show,

(ℏC/2e) φ̈ + (ℏ/2eR) φ̇ + I_c sin φ = I .   (2.61)
Note that this is an identical equation to that of a damped pendulum with a constant applied
torque,

mL 2 θ̈ + b θ̇ + mg L sin θ = Γ . (2.62)

In the overdamped limit C is very small and the φ̈ term can be neglected,

(ℏ/2eR) φ̇ + I_c sin φ = I ,   (2.63)

hence

dφ/dτ = I/I_c − sin φ   (2.64)

where we have defined a "dimensionless time",

τ = (2e I_c R/ℏ) t .   (2.65)
Eq. 2.64 is identical to the non-uniform oscillator we studied previously (Eq. 2.34), hence we know the time (in units of τ) for one period is,

T = 2π / √( (I/I_c)² − 1 ) ,   (2.66)

or, in real units,

T = (πℏ/eR) · 1/√(I² − I_c²) .   (2.67)


The mean voltage is

⟨V⟩ = (ℏ/2e) ⟨dφ/dt⟩
    = (ℏ/2eT) ∫_0^T (dφ/dt) dt
    = (ℏ/2eT) ∫_0^{2π} dφ
    = (ℏ/2eT) × 2π
    = (ℏ/e) × (eR/ℏ) √(I² − I_c²)
    = R √(I² − I_c²) ,   (2.68)

using Eq. 2.67 for T, hence

⟨V⟩ = 0 for I ≤ I_c ,
⟨V⟩ = R √(I² − I_c²) for I > I_c .   (2.69)

Note that we have not had to understand any of the detailed physics5 to arrive at this result, we
have only used the analysis skills we have learned over the previous classes.
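This analytic result can be spot-checked against a direct simulation of Eq. 2.64. The sketch below (assumed, a forward-Euler integration in the dimensionless time τ) measures the average winding rate ⟨dφ/dτ⟩, which should equal √((I/I_c)² − 1) and hence reproduces ⟨V⟩ up to units.

```python
import math

def mean_rate(i_ratio, dtau=1e-3, total=1000.0):
    # integrate dphi/dtau = I/Ic - sin(phi) with forward Euler, then
    # return the mean winding rate phi(total)/total
    phi = 0.0
    for _ in range(int(total / dtau)):
        phi += (i_ratio - math.sin(phi)) * dtau
    return phi / total

r15 = mean_rate(1.5)
r20 = mean_rate(2.0)
print(r15, math.sqrt(1.5 ** 2 - 1.0))   # exact value sqrt(1.25)
print(r20, math.sqrt(2.0 ** 2 - 1.0))   # exact value sqrt(3)
```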
[Figure: the mean voltage ⟨V⟩ against current I for R = 1: ⟨V⟩ = 0 up to I_c, then ⟨V⟩ = R√(I² − I_c²), which approaches the Ohmic line V = I R at large I.]
If C is large, the behaviour is more complicated, e.g. with hysteresis.
5
e.g. https://en.wikipedia.org/wiki/Ginzburg-Landau_theory which involves equations like the
Ginzburg-Landau equation ∂u/∂t = D∂2 u/∂x 2 + r u − u 3 . As an exercise, linearize the homogeneous form of
this equation (when the derivatives are all zero, which is the natural equilibrium state of the system) to see if it is
stable to perturbations.

3 Linear systems
Remember that linear systems can be defined as,

ẋ = Ax , (3.1)

where x is a vector and A a matrix. The general solution is,

x(t) = Σ_{i=1}^{n} c_i v_i e^{λ_i (t − t_0)} ,   (3.2)

where v i are the eigenvectors of A that satisfy,

Av i = λi v i , (3.3)

where λi are the eigenvalues of A.
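This recipe can be checked numerically. The sketch below (assumed, NumPy; the matrix and initial condition are arbitrary examples) builds x(t) = Σ cᵢvᵢe^{λᵢt} from the eigen-decomposition and verifies that it satisfies ẋ = Ax by comparing a finite-difference derivative with A x(t).

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [ 0.0, -1.0]])
lam, V = np.linalg.eig(A)        # eigenvalues and eigenvector columns

x0 = np.array([1.0, 1.0])
c = np.linalg.solve(V, x0)       # expansion coefficients at t = 0

def x(t):
    # sum_i c_i v_i exp(lambda_i t): scale each eigenvector column
    return (V * np.exp(lam * t)) @ c

t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2.0 * h)   # central difference
resid = np.max(np.abs(deriv - A @ x(t)))
print(resid)   # small: the eigen-expansion solves xdot = A x
```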


Second-order systems can be written,

( ẋ )       ( x )
( ẏ ) = A ( y ) ,   (3.4)

where

A = ( a  b )
    ( c  d ) ,   (3.5)

and

( x_0 )   ( 0 )
( y_0 ) = ( 0 ) ,   (3.6)

is a fixed point. The eigenvalues λ_i are found from

det (A − λ_i I) = 0 ,   (3.7)

i.e.

λ2 − τλ + ∆ = 0 , (3.8)

where

τ = a +d = Tr (A) (3.9)

and

∆ = ad − bc = det (A) . (3.10)

The eigenvalues are

λ_{1,2} = [ τ ± √(τ² − 4∆) ] / 2 ,   (3.11)

which are, in a second-order system, either both real or each other's complex conjugate.
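A little helper (assumed, not part of the notes) makes the classification implied by Eq. 3.11 executable: given τ and ∆ it names the type of the fixed point.

```python
def classify(tau, delta, eps=1e-12):
    # classify a 2x2 linear fixed point from trace tau and determinant delta
    if delta < -eps:
        return "saddle"                       # real eigenvalues, opposite sign
    if abs(delta) <= eps:
        return "non-isolated fixed points"    # one eigenvalue is zero
    if abs(tau) <= eps:
        return "centre"                       # purely imaginary pair
    stability = "stable " if tau < 0 else "unstable "
    disc = tau * tau - 4.0 * delta
    if abs(disc) <= eps:
        return stability + "star or degenerate node"
    return stability + ("node" if disc > 0 else "spiral")

print(classify(-3.0, 2.0))   # two negative real eigenvalues
print(classify(-2.0, 5.0))   # complex pair with negative real part
print(classify(0.0, 1.0))    # purely imaginary eigenvalues
print(classify(1.0, -1.0))   # eigenvalues of opposite sign
```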


x(t) = ( x(t), y(t) ) = c_1 v_1 e^{λ_1 t} + c_2 v_2 e^{λ_2 t}

[Figure: a trajectory x(t) in the plane, decomposed along the eigenvector directions v_1 and v_2.]

3.1 Real and distinct eigenvalues


λ_i < 0: the trajectory converges to the origin along v_i

λ_i > 0: the trajectory diverges away from the origin along v_i

λ_i = 0: any point along v_i is a fixed point (neutrally stable).


As an example, consider

( ẋ )   ( a  0 ) ( x )
( ẏ ) = ( 0  b ) ( y ) ,   (3.12)

then the (rather trivial) solutions are

x(t) = x_0 e^{at}   (3.13)

and

y(t) = y_0 e^{bt} .   (3.14)

3.1.1 The slope of trajectories

The slope of the trajectory is

dy/dx = (y_0 b / x_0 a) e^{(b−a)t} = C e^{(b−a)t} ,   (3.15)

and, as t → ∞,

dy/dx → ±∞ if a < b ,
dy/dx → 0 if a > b ,
dy/dx = constant if a = b .   (3.16)


3.1.2 a < 0 and b < 0

• a < b < 0: stable node


[Figure: phase portrait of a stable node, a < b < 0.]

• a = b < 0: stable star


[Figure: phase portrait of a stable star, a = b < 0.]

• b < a < 0: stable node


[Figure: phase portrait of a stable node, b < a < 0.]

3.1.3 a > 0 and b > 0

• 0 < a < b: unstable node


[Figure: phase portrait of an unstable node, 0 < a < b.]


3.1.4 a > 0 and b < 0

• a > 0 and b < 0: saddle node


[Figure: phase portrait of a saddle, a > 0 and b < 0, showing the stable and unstable manifolds.]

3.1.5 a = 0

• a = 0 and b < 0
[Figure: a = 0, b < 0: trajectories move vertically onto the x axis.]

All points on the x axis are stable fixed points.


• a = 0 and b > 0
[Figure: a = 0, b > 0: trajectories move vertically away from the x axis.]

All points on the x axis are unstable fixed points.

3.1.6 Nodes and stars

A fixed point is said to be a node when λ1 6= λ2 , and a star when λ1 = λ2 .

3.1.7 Stability

A fixed point x 0 , i.e. with f (x 0 ) = 0, is:

attracting if any trajectory starting sufficiently close to x 0 reaches it as t → ∞.


Mathematically, if there exists δ such that the initial point x is closer than a distance δ to x_0, then eventually x becomes x_0; in symbols,

∃δ > 0 : |x(t = 0) − x_0| < δ ⇒ lim_{t→∞} x(t) = x_0 .   (3.17)

globally attracting if any initial point (from anywhere!) eventually approaches x_0,

∀ x(t = 0) ∈ ℝⁿ : lim_{t→∞} x(t) = x_0 .   (3.18)

Lyapunov stable if any trajectory that starts close to x_0 remains close for all later times,

∃δ > 0 : |x(t = 0) − x_0| < δ ⇒ |x(t) − x_0| ≤ |x(t = 0) − x_0| .   (3.19)

Note that this does not mean the trajectory approaches x_0 asymptotically.

(asymptotically) stable if x 0 is both attracting and Lyapunov stable

unstable if it is neither attracting nor Lyapunov stable


3.1.8 Saddle node

A fixed point x 0 is a saddle node if it is stable in one direction and unstable in the other, i.e. λ1 λ2 < 0
(e.g. λ1 < 0 and λ2 > 0, or vice versa).

stable manifold is the set of points that converge to x 0 . Trajectories approach the stable manifold
as t → −∞.

unstable manifold is the set of points that diverge from x 0 . Trajectories approach the unstable
manifold as t → +∞.

3.1.9 Neutral stability

A fixed point x 0 is neutrally stable if it is Lyapunov stable but not attracting. Good examples are
the harmonic oscillator, pendulum and Lagrange points.

3.1.10 Attracting but not Lyapunov stable

A point can be both attracting and Lyapunov unstable. For example, θ̇ = 1 − cos θ has a semistable
fixed point. On one side it is attracting, but on the other side (which is still nearby) it is unstable.

[Figure: phase circle of θ̇ = 1 − cos θ, with a semistable fixed point at θ = 0.]


Further notes on stability: http://www.cds.caltech.edu/~murray/courses/cds101/fa02/caltech/mls93-lyap.pdf.

3.2 Complex conjugate eigenvalues

λ_{1,2} = α ± iω , α, ω ∈ ℝ .   (3.20)

The eigenvalues are complex conjugates,

λ_1 = λ_2* ,   (3.21)

hence so are the eigenvectors,

v_1 = v_2* ,   (3.22)


and we can write v_{1,2} as complex combinations of real vectors u_{1,2},

v_{1,2} = u_1 ± i u_2 , u_{1,2} ∈ ℝ .   (3.23)

We can also substitute λ1,2 into the exponential and expand using Euler’s formula,

e λt = e αt (cos ωt + i sin ωt ) , (3.24)

then when α > 0 the solution repels from the origin, when α < 0 the solution attracts towards the
origin.
• Note: near a fixed point, α is called the rate of convergence because ‖x(t)‖ ≤ m e^{−α(t − t_0)} ‖x(t_0)‖.
The general solution is,

x(t) = ( x(t), y(t) ) = c_1 v_1 e^{λ_1 t} + c_2 v_2 e^{λ_2 t}
     = (c_1 v_1 + c_2 v_2) e^{αt} cos ωt + i (c_1 v_1 − c_2 v_2) e^{αt} sin ωt   (3.25)

and we can choose to have real trajectories (in R2 as one needs for a real-life solution) when c 1 =
c 2 = c/2 ∈ R – this is equivalent to choosing a phase in ωt ,

x (t ) = ce αt (u 1 cos ωt − u 2 sin ωt ) . (3.26)

With λ1,2 = α + i ω we have oscillations:


α = 0: constant amplitude, an ellipse

α < 0: attracting spiral

α > 0: repulsive spiral

[Figure: trajectories for λ_{1,2} = α ± iω with α = −2, −1, 0, 1 and 2: an ellipse for α = 0, attracting spirals for α < 0 and repelling spirals for α > 0.]


3.2.1 Example: Harmonic Oscillator

A harmonic oscillator is governed by an equation of motion like,

m ẍ + kx = 0 , (3.27)

which we can write as two first-order equations,

ẋ = v ,   (3.28)
v̇ = −(k/m) x = −ω² x ,   (3.29)

which defines the angular frequency, ω. The equivalent A matrix is,

A = (  0    1 )
    ( −ω²   0 ) ,   (3.30)

with

Tr (A) = 0 , (3.31)
2
∆ = ω . (3.32)

The eigenvalues are

λ1,2 = ±i ω . (3.33)

3.3 Equal eigenvalues


If the eigenvalues are equal,

λ_1 = λ_2 = τ/2 ,   (3.34)
τ² = 4∆ ,   (3.35)
the behaviour is a transition between a spiral and a node.

• In some cases there is only one eigenvector. This is called a “degenerate node”.

• When λ1 = λ2 there can be two eigenvectors. If λ1,2 6= 0 this is a star, if λ1,2 = 0 then all points
are Lyapunov stable.

3.4 Summary
4 Phase Space Analysis in 2 dimensions
Any autonomous⁶ two-dimensional system can be cast into standard form,

dx/dt = P(x, y) ,
dy/dt = Q(x, y) ,   (4.1)
6
Remember, autonomous means a system of equations of the form d x/d t = f (x (t )) rather than the general form
d x/d t = g (x (t ) , t ). See, e.g., https://en.wikipedia.org/wiki/Autonomous_system_(mathematics)#Second_
order for solution methods.

[Figure: the (∆, τ) plane: saddles for ∆ < 0; stable spirals and nodes for τ < 0 and unstable spirals and nodes for τ > 0; centres on τ = 0 (∆ > 0); stars and degenerate nodes on the parabola τ² − 4∆ = 0; non-isolated fixed points on ∆ = 0.]

Figure 1: Stability of 2D systems as a function of the Jacobian, J , at a fixed point. τ is the trace of J
and ∆ is the determinant of J .

where P and Q are functions of x and y only and can be nonlinear.
The solutions (x(t), y(t)) move along trajectories in the phase plane of the variables x and y. Note that, in general, we have a phase space but here, because we have only two dimensions, we are constrained to a plane.

Figure 2: Poincare diagram to classify phase portraits of a matrix A. Taken from an original by Freesodas at https://en.wikipedia.org/wiki/File:Stability_Diagram.png under a Creative Commons 4 licence https://creativecommons.org/licenses/by-sa/4.0/deed.en.


[Figure: a trajectory x(t) = ( x(t), y(t) ) traced through the phase plane, marked at times t = 1 to 7.]

Existence and Uniqueness Theorem Valid in all dimensions: given

ẋ = F(x) ,   (4.2)
x(t = 0) = x_0 ,   (4.3)

then, if F(x) and all the partial derivatives ∂F/∂x_i are continuous in the region D ⊆ ℝⁿ (D is a subset or subspace of the real space; it does not have to be the whole space), any x_0 ∈ D has a unique solution in some interval (−τ, +τ) around t = 0.

The trajectories can be found by eliminating the time in Eq. 4.1,

dx/dy = P(x, y) / Q(x, y) ,   (4.4)

then solve this equation to find the trajectories in phase space.

Stationary points are found by solving

P(x, y) = Q(x, y) = 0 .   (4.5)

Ordinary points are those that are not stationary, i.e. where P(x, y) and Q(x, y) do not both vanish.

4.1 Linearization
To proceed further we need to linearize and, because we are interested in the stability of stationary points, we want to expand near them. This is a process where we convert a function, e.g. a non-linear function, to a locally linear form (possibly in n dimensions⁷). Any smooth function can be converted to a locally linear function in this way.
7
https://en.wikipedia.org/wiki/Linearization


Given a stationary point (x_0, y_0), for small u(t) and v(t) near the fixed point,

x(t) = x_0 + u(t) ,
y(t) = y_0 + v(t) ,   (4.6)

then using a Taylor expansion, noting that ẋ_0 = 0 and ẏ_0 = 0,

ẋ = u̇ = P(x_0 + u, y_0 + v) = P(x_0, y_0) + a u + b v + O(u², v², uv)   (4.7)
ẏ = v̇ = Q(x_0 + u, y_0 + v) = Q(x_0, y_0) + c u + d v + O(u², v², uv)   (4.8)

where a, b, c, d are the partial derivatives of P and Q with respect to x and y,

a = (∂P/∂x)|_{(x_0, y_0)} ,   (4.9)
b = (∂P/∂y)|_{(x_0, y_0)} ,   (4.10)
c = (∂Q/∂x)|_{(x_0, y_0)} ,   (4.11)

and

d = (∂Q/∂y)|_{(x_0, y_0)} .   (4.12)
¡ ¢ ¡ ¢
The functions P x 0 , y 0 = Q x 0 , y 0 = 0 at the fixed point by definition.
¡ 2 2 Close
¢ enough to the fixed
point u and v are small so we can ignore the quadratic terms O u , v , uv , hence the linearized
form is,

u̇ = au + bv , (4.13)
v̇ = cu + d v . (4.14)

We can also use the more compact matrix notation,

( u̇ )        ( u )
( v̇ ) = A_0 ( v ) + O(u², v², uv) ,   (4.15)

where A_0 is called the Jacobian matrix. In our case,

A_0 = ( a  b ) = ( ∂P/∂x  ∂P/∂y )
      ( c  d )   ( ∂Q/∂x  ∂Q/∂y ) evaluated at (x_0, y_0) .   (4.16)

We will see many examples of Jacobian matrices. Their more general form is the m × n matrix,

J_{ij} = ∂f_i /∂x_j ,   (4.17)

where the m-dimensional vector function f takes the n-dimensional vector x as input.
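When the partial derivatives are tedious to write out, the Jacobian can be approximated by central finite differences. A hedged sketch (the test function here is an arbitrary assumed example, not from the notes):

```python
import numpy as np

def jacobian(F, x, h=1e-6):
    # central-difference approximation to J_ij = dF_i/dx_j at the point x
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(F(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = h
        J[:, j] = (np.asarray(F(x + dx)) - np.asarray(F(x - dx))) / (2 * h)
    return J

# check on F = (x^2 y, 5x + sin y) at (1, 0), where the exact Jacobian
# is [[2xy, x^2], [5, cos y]] = [[0, 1], [5, 1]]
F = lambda v: np.array([v[0] ** 2 * v[1], 5.0 * v[0] + np.sin(v[1])])
J0 = jacobian(F, [1.0, 0.0])
print(J0)
```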


4.2 Eigenvalues and eigenvectors


Let λ_i and v_i be the eigenvalues and eigenvectors of A_0, so that,

λ_i v_i = A_0 v_i , i = 1 . . . n ,   (4.18)

then solutions close to the stationary point (x_0, y_0) are,

( x(t), y(t) ) = ( x_0, y_0 ) + c_1 v_1 e^{λ_1 t} + c_2 v_2 e^{λ_2 t} .   (4.19)

Each eigenvalue λ_i determines the stability of (x_0, y_0) for trajectories along the corresponding eigenvector v_i.

• ℜ(λ_i) < 0: stable along v_i

• ℜ(λ_i) > 0: unstable along v_i

• ℑ(λ_i) ≠ 0: oscillatory behaviour, e.g. circles, spirals

As previously, the eigenvalues are found from

det (A 0 − λI ) = 0 , (4.20)

and hence

Tr (A 0 ) = τ = a + d = λ1 + λ2 , (4.21)
det (A 0 ) = ∆ = ad − bc = λ1 λ2 , (4.22)

where
λ_{1,2} = [ τ ± √(τ² − 4∆) ] / 2 .   (4.23)

4.3 Example: Lotka-Volterra competition model


A general population grows as,

ẋ(t) = x ( g − x/K − f ) ,   (4.24)
where

• g is the rate of growth, i.e. how fast the population can breed

• −x/K is the death rate (related to the carrying capacity K as we saw in the logistic equation)

• f relates to competition for resources, generally caused by outside influences (e.g. populations of other species consuming food). For more details about the general form of this equation, and how to extend it to n species, see https://en.wikipedia.org/wiki/Competitive_Lotka-Volterra_equations.


Imagine there is a world containing only rabbits and sheep that both eat the same grass. If you have
been to Australia or New Zealand, this is it. We can write the equations describing the number of
rabbits, r (t ), and the number of sheep, s (t ), as

ṙ(t) = P(r, s) = r (3 − r − 2s) ,   (4.25)
ṡ(t) = Q(r, s) = s (2 − s − r) ,   (4.26)

where the term −2s or −r is the effect of the sheep and/or rabbits on the other population caused
by eating the grass: sheep eat more grass (∝ 2s) than rabbits (∝ r ).
The singular points are at P = Q = 0, thus

r = 3 − 2s or (4.27)
r = 0, (4.28)

and

s = 2 − r or (4.29)
s = 0. (4.30)

There are thus four stationary points,

(r_0, s_0) = (0, 0) ,   (4.31)
(r_1, s_1) = (0, 2) ,   (4.32)
(r_2, s_2) = (3, 0) ,   (4.33)
(r_3, s_3) = (1, 1) .   (4.34)

The Jacobian is,

A_0 = ( ∂P/∂r  ∂P/∂s )   ( 3 − 2r − 2s      −2r     )
      ( ∂Q/∂r  ∂Q/∂s ) = (     −s       2 − 2s − r ) ,   (4.35)

• At (r_0, s_0) = (0, 0) we have a = 3, b = c = 0, d = 2, hence τ = a + d = 5 and ∆ = ad − bc = 6, with eigenvalues

λ_{1,2} = [ 5 ± √(25 − 24) ] / 2 = 3, 2 .   (4.36)

Both eigenvalues are positive, so (r_0, s_0) is an unstable node.

• At (r_1, s_1) = (0, 2) we have a = −1, b = 0, c = d = −2, hence τ = a + d = −3 and ∆ = ad − bc = +2, with eigenvalues

λ_{1,2} = [ −3 ± √(9 − 8) ] / 2 = −2, −1 .   (4.37)

Both eigenvalues are negative, so (r_1, s_1) is a stable node.


• At (r_2, s_2) = (3, 0) we have a = −3, b = −6, c = 0, d = −1, hence τ = a + d = −4 and ∆ = ad − bc = +3, with eigenvalues

λ_{1,2} = [ −4 ± √(16 − 12) ] / 2 = −3, −1 .   (4.38)

Both eigenvalues are negative, so (r_2, s_2) is a stable node.

• At (r_3, s_3) = (1, 1) we have a = c = d = −1 and b = −2, hence τ = −2 and ∆ = −1, with eigenvalues

λ_{1,2} = [ −2 ± √8 ] / 2 = −1 ± √2 .   (4.39)
One is positive, one is negative: this is thus a saddle point.
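The four classifications can be verified numerically. This sketch (assumed, NumPy) evaluates the Jacobian of Eq. 4.35 at each stationary point and prints its eigenvalues.

```python
import numpy as np

def jacobian(r, s):
    # Eq. 4.35 for the rabbit-sheep system
    return np.array([[3.0 - 2.0 * r - 2.0 * s, -2.0 * r],
                     [-s, 2.0 - 2.0 * s - r]])

pts = [(0.0, 0.0), (0.0, 2.0), (3.0, 0.0), (1.0, 1.0)]
eigs = {p: sorted(np.linalg.eigvals(jacobian(*p)).real) for p in pts}
for p in pts:
    print(p, eigs[p])
# (0,0): both positive -> unstable node; (0,2) and (3,0): both negative
# -> stable nodes; (1,1): opposite signs -> saddle
```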

[Figure: phase portrait of the rabbit-sheep system in the (r, s) plane: an unstable node at (0, 0), stable nodes at (0, 2) and (3, 0), and a saddle at (1, 1).]

4.4 When linearization fails


Sometimes, we need to consider higher order terms. Consider the following example,

ẋ = P(x, y) = −y + a x (x² + y²) ,   (4.40)
ẏ = Q(x, y) = +x + a y (x² + y²) ,   (4.41)

then

A = (  0  −1 )
    ( +1   0 ) ,   (4.42)

which has eigenvalues

λ_{1,2} = ±i ,   (4.43)
2


hence the “linearized trajectories” do not depend on the parameter, a, even though the motion clearly does.
We can draw some insight by switching to polar co-ordinates, then

r² = x² + y²,    (4.44)
r ṙ = x ẋ + y ẏ = a (x² + y²)² = a r⁴,    (4.45)
θ̇ = (x ẏ − y ẋ)/r² = ([x² + a x y (x² + y²)] + [y² − a x y (x² + y²)])/r² = (x² + y²)/r² = 1,    (4.46)

hence our original equations are equivalent to,

ṙ = a r³,    (4.47)
θ̇ = 1,    (4.48)

with solutions,

1/(2r²) = 1/(2r₀²) − a (t − t₀),    (4.49)
θ = θ₀ + (t − t₀).    (4.50)

This is a spiral which either expands (a > 0), shrinks (a < 0) or is stable (a circle) when a = 0 (Fig. 3).
We know

r² = x² + y²  ⟹  2r ṙ = 2x ẋ + 2y ẏ,    (4.51)

but then we need θ̇, so from

x = r cos θ,  ẋ = ṙ cos θ − θ̇ r sin θ,  y ẋ = r ṙ sin θ cos θ − θ̇ r² sin² θ,
y = r sin θ,  ẏ = ṙ sin θ + θ̇ r cos θ,  x ẏ = r ṙ sin θ cos θ + θ̇ r² cos² θ,

we find

x ẏ − y ẋ = θ̇ r² (sin² θ + cos² θ) = θ̇ r²,

i.e.,

θ̇ = (x ẏ − y ẋ)/r².    (4.52)

4.4.1 Real and complex eigenvalues

Repellers have λ1 > 0, λ2 > 0

Attractors have λ1 < 0, λ2 < 0

Saddles have λ1 λ2 < 0

4.4.2 Marginal cases: at least one imaginary eigenvalue

Centres, around which there are closed orbits: λ₁,₂ = ±iω.

At higher order, and for non-isolated fixed points (e.g. lines of fixed points), at least one eigenvalue is zero.


Figure 3: Trajectories of the system of equations described in Section 4.4 for, from left to right,
a = −1, 0 and +1.

5 Lyapunov Stability Theorem
We work with an n-dimensional system,

ẋ = F(x),    (5.1)

where x ∈ Rⁿ, with a stationary point, x₀, in a subset E of Rⁿ, defined by F(x₀) = 0. Suppose there exists a continuously differentiable, real-valued function, V(x), such that

V(x₀) = 0,  and  V(x) > 0 for all x ≠ x₀ in E.    (5.2)

V(x) is called a Lyapunov function and it has the following properties. Note that V(x) > 0 implies that V is positive definite.
• If V̇ (x) < 0 for all x in E , except at the fixed point x 0 , then x 0 is asymptotically stable, i.e.
x (t ) → x 0 , for all initial conditions in E. There are no closed orbits in E .

• If V̇ (x) ≤ 0 for all x in E , except at the fixed point x 0 , then x 0 is stable.

• If V̇ (x) > 0 for all x in E , except at the fixed point x 0 , then x 0 is unstable.
Lyapunov functions are like an energy, hence the V notation (which is often used for potential
energy in physics and engineering).

5.1 Lyapunov function example


Consider the system of equations,

ẋ = −x + 4y,
ẏ = −x − y³,    (5.3)

and assume a general form of V, with a constant c that we can choose later,

V(x, y) = x² + c y².    (5.4)

The trajectories have a time derivative,

dV(x, y)/dt = (∂V/∂x)(dx/dt) + (∂V/∂y)(dy/dt)    (5.5)
            = 2x(−x + 4y) + 2c y(−x − y³)    (5.6)
            = −2x² + 8x y − 2c x y − 2c y⁴    (5.7)
            = −2x² − 2c y⁴ + (8 − 2c) x y.    (5.8)

We can choose c = 4, then the nasty cross term disappears,

V(x, y) = x² + 4y²,    (5.9)

and

V̇(x, y) = −2x² − 8y⁴.    (5.10)

We have


• V(0, 0) = 0,

• V(x, y) > 0 for all (x, y) ≠ 0,

• V̇(x, y) < 0 for all (x, y) ≠ 0,

so (0, 0) is an asymptotically stable fixed point according to the Lyapunov stability theorem.
Note: if the sign is flipped, i.e. V(x, y) < 0 for all (x, y) ≠ 0 (but V̇ is still negative), then the system is unstable, as one might expect from the energy analogy.
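The algebra above is quick to verify symbolically. The following is my own sketch (assuming SymPy is available, not part of the notes): with c = 4 the cross term vanishes and V̇ reduces to −2x² − 8y⁴.

```python
# Sketch (my own check of Eqs. 5.5-5.10): compute Vdot by the chain rule
# with SymPy and confirm the cross term vanishes when c = 4.
import sympy as sp

x, y, c = sp.symbols('x y c', real=True)
xdot = -x + 4*y          # Eq. 5.3
ydot = -x - y**3
V = x**2 + c*y**2        # Eq. 5.4

# dV/dt = dV/dx * xdot + dV/dy * ydot
Vdot = sp.expand(sp.diff(V, x)*xdot + sp.diff(V, y)*ydot)
print(Vdot)              # contains the (8 - 2c)*x*y cross term
print(Vdot.subs(c, 4))   # cross term gone: negative definite away from 0
```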

6 Limit cycles and oscillators
Consider the harmonic oscillator8 with damping µ > 0,

ẍ = −x − µẋ . (6.1)

Now introduce

ẋ = v , (6.2)

then ẍ = v̇ and our second-order equation becomes two first-order equations,

v̇ = −x − µv , (6.3)
ẋ = v . (6.4)

Now change variables: let x = r cos θ and v = r sin θ then,

r 2 = x2 + v 2 , (6.5)
2r r˙ = 2x ẋ + 2v v̇ , (6.6)
x v
r˙ = ẋ + v̇ , (6.7)
r r
1
= (x ẋ + v v̇) , (6.8)
r
and, taking time derivatives of x and v,

ẋ = r˙ cos θ − r sin θ θ̇ , (6.9)


v̇ = r˙ sin θ + r cos θ θ̇ , (6.10)

multiply Eq. 6.9 by v = r sin θ and Eq. 6.10 by x = r cos θ to find,

v ẋ = r r˙ cos θ sin θ − r 2 sin2 θ θ̇ (6.11)


2 2
x v̇ = r r˙ sin θ cos θ + r cos θ θ̇ , (6.12)

then subtract Eq. 6.12 from Eq. 6.11,

v ẋ − x v̇ = −r² θ̇,    (6.13)

i.e.,

θ̇ = (x v̇ − v ẋ)/r².    (6.14)
Then substitute x and v to obtain r˙ as a function of r and θ only, using the definitions of ẋ and v̇,

ṙ = (r cos θ · v + v (−x − µv))/r    (6.15)
  = v (r cos θ − x − µv)/r    (6.16)
  = r sin θ (r cos θ − r cos θ − µ r sin θ)/r    (6.17)
  = −µ r sin² θ,    (6.18)
⁸ This particular oscillator is a simplified example. Really we should have m ẍ = −kx − µẋ; I chose m = k = 1, but the mathematics that follows is similar.

and

r² θ̇ = x v̇ − v ẋ    (6.19)
     = r cos θ · (−x − µv) − r sin θ · v    (6.20)
     = −r cos θ (r cos θ + µ r sin θ) − r² sin² θ    (6.21)
     = −r² (cos² θ + sin² θ) − µ r² sin θ cos θ    (6.22)
     = −r² − (1/2) µ r² sin 2θ.    (6.23)
So now we know

ṙ = −µ r sin² θ,    (6.24)
θ̇ = −1 − (µ/2) sin 2θ,    (6.25)
and finally we can analyse the phase space as a function of µ, remembering that µ > 0:

µ = 0: Circular motion.

0 ≤ µ < 2: In this regime, θ̇ < 0 always (this implies rotation is clockwise).

µ ≥ 2: There is a fixed point θ* at θ̇ = 0, i.e. at sin 2θ = −2/µ, which is θ* = (1/2) sin⁻¹(−2/µ).

µ ≫ 2: This is the overdamped limit: ṙ is large and negative except near θ = 0, where ṙ = −µ r sin² θ → 0. This is our bottleneck, and θ → 0⁻ (on the negative side because initially θ̇ < 0).

µ = −1: (outside our assumption µ > 0, for comparison) ṙ > 0 and the spiral is outwards to ∞.

[Phase-plane trajectories, v vs x, of the damped oscillator for µ = 0, 0.1, 1, 2, 10 and 100.]
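As a numerical check (my own sketch, assuming SciPy is available), one can integrate the damped oscillator directly and confirm that the phase-space radius shrinks towards the origin when µ > 0:

```python
# Sketch (not from the notes): integrate  xdot = v,  vdot = -x - mu*v
# and confirm the phase-space radius decays for mu > 0.
import numpy as np
from scipy.integrate import solve_ivp

def damped(t, state, mu):
    x, v = state
    return [v, -x - mu*v]

mu = 0.5
sol = solve_ivp(damped, (0, 30), [1.0, 0.0], args=(mu,), max_step=0.01)
r = np.hypot(sol.y[0], sol.y[1])   # radius in the (x, v) phase plane
print(r[0], r[-1])                  # the radius shrinks towards the origin
```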


6.1 Van der Pol oscillator


The Van der Pol oscillator was vital in early studies of non-linear dynamics. It is governed by the equation,

ẍ + µ(x² − 1)ẋ + x = 0,    (6.26)

where µ > 0. The damping term, proportional to µ(x² − 1), is non-linear: the x² is the non-linearity. Because there is damping, energy is not conserved.

• When |x| > 1 the damping is like ordinary damping (the energy drops),

• but when |x| < 1 the damping is “negative” i.e. there is an injection of energy.

The limit cycle is periodic but not a sine wave.


We can write ẋ = v, then our second-order equation becomes two first-order equations,

v̇ = −x − vµ(x² − 1),    (6.27)
ẋ = v,    (6.28)

which can be solved numerically (see Van_der_Pol.py).
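The notes refer to a script Van_der_Pol.py; the following is my own minimal stand-in for that numerical solution, using SciPy. Whatever the (small) starting amplitude, the trajectory settles onto the limit cycle, whose amplitude in x is close to 2 for moderate µ:

```python
# Sketch (a stand-in for the notes' Van_der_Pol.py):
#   vdot = -x - mu*v*(x**2 - 1),  xdot = v  (Eqs. 6.27-6.28).
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu):
    x, v = state
    return [v, -x - mu*v*(x**2 - 1)]

mu = 1.0
sol = solve_ivp(van_der_pol, (0, 100), [0.1, 0.0], args=(mu,), max_step=0.05)
# After the transient, the amplitude of x on the limit cycle is ~2.
print(np.abs(sol.y[0][sol.t > 80]).max())
```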


We can also use a Liénard transformation, y = x − x³/3 − ẋ/µ, to turn the equation into a 2D form,

ẋ = µ(x − x³/3 − y),    (6.29)
ẏ = x/µ.    (6.30)

[Figure: x(t) of the Van der Pol oscillator for 0 ≤ t ≤ 50 and µ = 0.01, 0.1, 0.5, 1, 2 and 10.]


[Figure: x(t) for 0 ≤ t ≤ 20 and the same values of µ.]


[Figure: phase-plane trajectories, v = ẋ vs x, for µ = 0.01, 0.1, 0.5, 1, 2 and 10.]


[Figure: two further phase-plane portraits, v = ẋ vs x, showing trajectories converging onto the limit cycle.]

6.1.1 Forced Van der Pol oscillator

We can force the oscillator, e.g. with a sine wave,

ẍ + µ(x² − 1)ẋ + x = A sin ωt,    (6.31)

which, for suitable parameters, results in chaotic behaviour. More on chaos later in the course.
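A forced run is a one-line change to the unforced system. This is my own sketch, using the parameter values quoted on the figure below (µ = 8.53, A = 1.2, ω = 0.628319 ≈ 2π/10):

```python
# Sketch (my own): the forced Van der Pol equation (Eq. 6.31) as a
# first-order system with drive A*sin(omega*t).
import numpy as np
from scipy.integrate import solve_ivp

def forced_vdp(t, state, mu, A, omega):
    x, v = state
    return [v, -x - mu*v*(x**2 - 1) + A*np.sin(omega*t)]

# mu = 8.53, A = 1.2, omega = 2*pi/10, as on the figure in the notes.
sol = solve_ivp(forced_vdp, (0, 200), [0.1, 0.0],
                args=(8.53, 1.2, 2*np.pi/10), max_step=0.02)
print(sol.y[0][-1])
```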


[Figure: x(t) for the forced oscillator with µ = 8.53, A = 1.2, ω = 0.628319, over 0 ≤ t ≤ 200, showing irregular behaviour.]

6.2 Limit Cycles


A limit cycle is a closed, isolated trajectory. It is isolated in the sense that there are no nearby, closed
trajectories, rather nearby trajectories either spiral into or away from the limit cycle. A limit cycle
can be

Stable or Attracting: nearby orbits approach the cycle.

Unstable: (some) neighbouring orbits move away from the limit cycle. This is equivalent to approaching it as t → −∞.

Half-stable: some orbits move towards the limit cycle, some move away from the limit cycle.

7 Poincaré-Bendixson theorem
In two dimensions, R2 , consider:

1. A subspace E that is closed and bounded,

E ⊆ R2 . (7.1)

2. A continuously differentiable function, F(x), where x is a real vector, x ∈ R².

3. E contains no fixed points,

∄ x ∈ E : F(x) = 0,    (7.2)

although it can contain a “hole” in which there is a fixed point.

4. There is a trajectory C of the system ẋ = F (x) that is confined in E . That is, it starts in E and
remains in E forever.

If 1, 2, 3 and 4 are all true, then:

a C is a closed orbit, or

b it, and hence all trajectories in E (because C can be any trajectory in E ), spirals towards a
closed orbit in E as t → ∞.

To prove (4), it is often sufficient to show that all trajectories on the boundary of E point into E. Such a region is called a trapping region because no trajectories escape from it.


Note that the theorem does not extend to higher numbers of dimensions than two.

7.0.1 Poincaré-Bendixson F.A.Q.

• Can we extend the region to infinity, then it must contain any/all closed orbits?
Right, that's if there are any limit cycles. Consider ẋ = ẏ = 1: there are never any closed orbits. The Poincaré-Bendixson theorem can only tell us that there is a closed orbit, not that there is not.
One can use other methods to rule out a closed orbit.
If a system has a Lyapunov function V then V̇ < 0, so any trajectory must tend towards a fixed point x* rather than orbit (an orbit would have, by definition, ∆V = ∮ V̇ dt = 0 where the integral is over a period of the orbit, contradicting V̇ < 0).
If a system has a gradient function a similar argument applies.
One can also use Dulac's criterion: if ẋ = f(x) is a continuously differentiable function in a simply connected subset of the real plane, and there exists a continuously differentiable function g(x) such that ∇ · (g ẋ) has one sign throughout the subset, then there are no closed orbits in the subset. Why? If there were a closed orbit, then 0 ≠ ∬_A ∇ · (g ẋ) dA = ∮ g ẋ · n dl = 0 (use Green's theorem⁹), a contradiction, where A is the inside area of the orbit and dl is a small piece of the orbit (the line integral vanishes because ẋ is tangent to the orbit, so ẋ · n = 0).

• My region has trajectories flowing out, can I not just make the region a bit bigger?
Yes, you can, but the new, bigger region must have all trajectories flowing inwards through its
edges. If there is a region with outflowing trajectories, then all trajectories from inside the re-
gion could leak out, and there might be no closed orbit at all. Remember, Poincaré-Bendixson
9
https://en.wikipedia.org/wiki/Green’s_theorem


Figure 4: Chemical structures of (a) adenosine diphosphate (ADP) and (b) fructose-6-phosphate (F6P).

cannot tell you when there is not a closed orbit, only sufficient conditions to prove that there
is.
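Dulac's criterion is easy to apply symbolically. As a sketch of my own (not an example from the notes), take g = 1 and the damped oscillator of Section 6, ẋ = v, v̇ = −x − µv: the divergence of the flow is the negative constant −µ, so there are no closed orbits for µ > 0, consistent with the decaying spirals we found there.

```python
# Sketch (my own example): Dulac's criterion with g = 1 applied to the
# damped oscillator  xdot = v,  vdot = -x - mu*v.
import sympy as sp

x, v = sp.symbols('x v', real=True)
mu = sp.symbols('mu', positive=True)

f = (v, -x - mu*v)   # the flow
g = 1                # Dulac function (g = 1 suffices here)
div = sp.diff(g*f[0], x) + sp.diff(g*f[1], v)
print(sp.simplify(div))   # -mu: one sign everywhere, so no closed orbits
```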

7.0.2 Proof of the theorem

Proof is not trivial and is beyond the scope of this course. See e.g.
https://math.byu.edu/~grant/courses/m634/f99/lec39.pdf or
http://www.staff.science.uu.nl/~kouzn101/NLDV/Lect6_7.pdf.

7.1 Glycolysis
As an example of the Poincaré-Bendixson theorem, consider the process of glycolysis, in which
glucose is broken down to obtain energy. All living cells do this. In some, e.g. yeast or muscles,
oscillatory behaviour has been observed. A simplified reaction network10 is,

ẋ = −x + a y + x 2 y , (7.3)
ẏ = b − a y − x 2 y , (7.4)

where x and y are the concentrations of ADP (adenosine diphosphate¹¹) and F6P (fructose-6-phosphate¹²) respectively, and a and b are reaction constants (both are positive).

7.1.1 General properties, nullclines and the fixed point


• The nullcline ẋ = 0 is at y = x/(a + x²). This intersects the y axis at y = 0.

• If ẋ > 0 then y > x/(a + x²); similarly, if ẋ < 0 then y < x/(a + x²).

• The nullcline ẏ = 0 is at y = b/(a + x²). This intersects the y axis at y = b/a.
10
See Fig. 5 and references therein for the full cycle.
11
https://en.wikipedia.org/wiki/Adenosine_diphosphate
12
https://en.wikipedia.org/wiki/Fructose_6-phosphate


Figure 5: The full glycolysis cycle: now you see why we work with a simplified set of equa-
tions! Image from Factors affecting plasmid production in Escherichia coli from a resource alloca-
tion standpoint, by Cunningham et al. (Microb. Cell Fact., 2009, https://openi.nlm.nih.gov/
detailedresult.php?img=PMC2702362_1475-2859-8-27-1&req=4). Shared under a Creative
Commons BY 2.0 licence https://creativecommons.org/licenses/by/2.0/.


• If ẏ > 0 then y < b/(a + x²), and if ẏ < 0 then y > b/(a + x²).

• Thus there is a fixed point at (x₀, y₀) = (b, b/(a + b²)). We'll come back to this to address its stability.

[Sketch of the (x, y) plane showing the nullclines ẋ = 0 and ẏ = 0 intersecting at the fixed point, with the signs of ẋ and ẏ marked in each of the four regions, and the points x = b and x = b(1 + 1/a) marked on the x-axis.]

Along the nullclines, flow is either entirely in the x or y direction:

• ẏ = 0 → ẋ is non-zero, i.e. is either to the right or left

• ẋ = 0 → ẏ is non-zero

We can construct a subspace around the region of interest, which contains the fixed point, into
which we know the flow is inwards:

• Along the y-axis we have ẋ = ay > 0 as long as y > 0. We want to go as far as we can until ẏ changes sign (ẏ = b > 0 at the origin). So take a line from (0, 0) to where the ẏ nullcline intersects the y axis (i.e. where ẏ = 0); this is at y = b/a, hence the first edge of our subspace is from (0, 0) to (0, b/a).

• Along the x-axis we have,

ẋ = −x , (7.5)
ẏ = b , (7.6)

which is always into our region, given that x > 0.


Note: near the origin, we can take the following limit, lim_{ε→0} ẋ(−ε, 0) = ε > 0, which points into our region of interest, so we are safe.

77
7.1 Glycolysis

• The nullcline ẏ = 0 has a maximum at (0, b/a), and the whole region above the nullcline has ẏ < 0.
Along the straight line y = b/a we have ẏ = −x²b/a, which is guaranteed to point down because ẏ < 0.

• We then require the closing side from y = b/a to the x-axis, but we must be careful that trajectories point inwards.

If we choose a vertical line – the simplest option – we require ẋ < 0 along it (this is what “inwards” means, i.e. trajectories must cross the line towards the origin). We cannot choose this because of the x²y term in ẋ, which could be arbitrarily large and positive: we require ẋ < 0 for any x (and note that y > 0).

Instead, we will choose a straight line to join y = b/a to the x-axis, y = mx + c, where m is the gradient and c the intercept.

The condition ẋ < 0 is difficult to guarantee along this line (at y = b/a we have ẋ > 0, for example), but instead we can require ẋ + ẏ < 0 along the line: above the ẏ = 0 nullcline, which is where we are, we have ẏ < 0, and ẋ + ẏ < 0 is exactly the condition for trajectories to cross a line of slope −1 inwards (next bullet). It is also far simpler to calculate,

ẋ + ẏ = (−x + ay + x²y) + (b − ay − x²y)    (7.7)
      = b − x < 0,    (7.8)

which holds wherever x > b.

So we can close the top of our region by taking a line from (0, b/a) to (b, b/a), because this has ẏ < 0 (and x > 0).
Note that below the ẏ = 0 nullcline we are also below the ẋ = 0 nullcline, so ẋ < 0, as required by our original condition.

• Given that we require ẋ + ẏ < 0, we require ẋ < −ẏ. This implies,

dx/dt < −dy/dt,    (7.9)

i.e.,

dy < −dx,    (7.10)

along the trajectories. Thus trajectories that are just above a line with slope −1 will pass downwards through it, as required.

To think of this another way: the trajectories have dy < −dx, i.e. more negative than the line, which has dy = −dx, so the trajectories “descend” faster than the line and will cross it.

We thus choose a line with slope −1 to join (b, b/a) to the x-axis. Given y = −x + c we have −b + c = b/a, so c = b/a + b = b(1 + 1/a), and this line is

y = −x + b(1 + 1/a),    (7.11)

which intersects the x-axis, when y = 0, at x = b(1 + 1/a).

78
7.1 Glycolysis

• We have now closed our region from (0, 0) to (0, b/a) to (b, b/a) to (b(1 + 1/a), 0) and back to (0, 0), and all trajectories flow into the region.

[Plots of the (x, y) plane showing the nullclines ẋ = 0 and ẏ = 0 and the trapping region constructed above.]


[A further plot of the nullclines with trajectories inside the trapping region.]

7.1.2 The fixed point

• The fixed point is stable if the trace of the Jacobian is negative (Sec. 3.4).

• We require the fixed point to be unstable so that all trajectories flow out of it and then we can
construct a trapping region around it, so the trace of the Jacobian should be positive.

• The Jacobian is

A = ( −1 + 2xy , a + x² ; −2xy , −(a + x²) )

which, evaluated at (x₀, y₀) = (b, b/(a + b²)), becomes

A = ( −1 + 2b²/(a + b²) , a + b² ; −2b²/(a + b²) , −(a + b²) ),    (7.12)

hence

τ = Tr(A) = −1 + 2b²/(a + b²) − (a + b²) = −[b⁴ + b²(2a − 1) + a(1 + a)]/(a + b²)    (7.13)

(note that a + b² > 0 because a > 0 by definition) and

∆ = det(A) = [1 − 2b²/(a + b²)](a + b²) + 2b² = a + b² − 2b² + 2b² = a + b² > 0.    (7.14)

• We thus require b⁴ + b²(2a − 1) + a(1 + a) > 0, with critical values at

b²₁,₂(a) = [(1 − 2a) ± √((2a − 1)² − 4a(1 + a))]/2 = [1 − 2a ± √(1 − 8a)]/2.    (7.15)

• If b < b₁ or b > b₂ then the fixed point is stable (an attractive spiral or node): there is no limit cycle.


• If b₁ < b < b₂ then the fixed point is unstable, and we can construct the trapping region.

• We require a ≤ 1/8 (such that b²₁,₂ are real), with the limiting case a = 1/8, where b² = 1/2 − a = 3/8 = 0.375, i.e. b = √(3/8) ≈ 0.61. So let a = 0.1 and b = 0.5 (so b/a = 5), and we're roughly in the middle of the parameter space that allows a trapping region.

[Stability diagram in the (a, b²) plane: above the upper branch b²₂ and below the lower branch b²₁ the fixed point is stable and there is no limit cycle; between the branches the fixed point is unstable and there is a stable limit cycle. The branches meet at a = 1/8.]

We have now successfully created a trapping region around the fixed point, with the require-
ment that b 1 < b < b 2 for there to be a limit cycle (which is probably, in biological terms, what is
desired). We have said quite a lot about the system without solving the equations!
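The conclusions above can be confirmed by direct integration. This is my own sketch (not the notes' script): with a = 0.1 and b = 0.5, which lies inside the b₁ < b < b₂ window, the trajectory should settle onto a sustained oscillation rather than the fixed point.

```python
# Sketch (my own): integrate the simplified glycolysis model for
# a = 0.1, b = 0.5 and check for a sustained oscillation (limit cycle).
import numpy as np
from scipy.integrate import solve_ivp

a, b = 0.1, 0.5
disc = np.sqrt(1 - 8*a)                      # real because a <= 1/8
b1 = np.sqrt((1 - 2*a - disc)/2)             # critical values from Eq. 7.15
b2 = np.sqrt((1 - 2*a + disc)/2)
print(b1, b2)                                # b must lie between these

def glycolysis(t, state):
    x, y = state
    return [-x + a*y + x**2*y, b - a*y - x**2*y]

sol = solve_ivp(glycolysis, (0, 400), [0.1, 0.1], max_step=0.05)
late = sol.y[0][sol.t > 300]
print(late.min(), late.max())                # min and max differ: oscillation
```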

8 1D bifurcations in 2D and ghosts
In one-dimensional systems, bifurcations happen at fixed points. The same happens in a two-dimensional phase space: as one dimension approaches its nullcline, the other dimension behaves independently. However, in two dimensions, the nature of the bifurcations can change when the nullclines intersect.
The prototypes (also called normal forms) are the following, in which x and y could be rotated in 2D. Because ẏ = −y, all critical points are on the x-axis (where y = 0). To analyse these, consider the nullclines in the x−y phase space. When they cross there are fixed points; when they don't, there are not (Fig. 6).
The directions of the trajectories can be found by considering ẋ vs ẏ. Where ẋ > ẏ we have,
dx dy
> (8.1)
dt dt
hence

dx > dy , (8.2)

and along the nullclines ẏ = 0 where ẋ > ẏ, i.e. the ẋ curve is above the ẏ curve, we have d x > d y = 0
i.e. trajectory vectors are in the positive direction. Similarly when ẋ < ẏ along the ẏ = 0 nullcline
trajectory vectors are in the negative direction. The same argument can be applied swapping x
and y to obtain directions along the ẋ nullcline (Fig. 7).

8.1 Saddle node bifurcation


Also called the blue sky bifurcation, shown in Fig. 8,

ẋ = µ ± x 2 , (8.3)
ẏ = −y , (8.4)

Where ẋ = ẏ = 0 we have

ẋ = 0 = µ ± x 2 (8.5)
ẏ = 0 = −y (8.6)
hence y = 0 and µ ± x² = 0. Consider the case µ + x² = 0: then x = ±√(−µ), hence we require µ < 0 for there to be fixed points. Thus the bifurcation happens at µ = 0, where the two fixed points merge into one (with a zero eigenvalue). As µ increases above zero there is a “ghost” region through which trajectories take a long time to pass.
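The passage time through the ghost can be computed directly. This is my own sketch: for ẋ = µ + x², the time to travel from x = −1 to x = +1 is T = ∫ dx/(µ + x²), which grows like π/√µ for small µ (cf. the T ∼ 1/√µ scaling quoted in the notes).

```python
# Sketch (my own): passage time through the saddle-node ghost of
# xdot = mu + x**2, from x = -1 to x = +1.
import numpy as np
from scipy.integrate import quad

for mu in [0.1, 0.01, 0.001]:
    T, _ = quad(lambda x: 1.0/(mu + x**2), -1.0, 1.0)
    print(mu, T, T*np.sqrt(mu))   # T*sqrt(mu) tends to pi as mu -> 0
```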

8.2 Transcritical bifurcation


Shown in Fig. 10, we have,

ẋ = µx − x 2 , (8.7)
ẏ = −y . (8.8)

Where ẋ = 0 we have µx = x 2 i.e. x = µ or x = 0.


Figure 6: Nullclines of two functions, ẋ = x + b and ẏ = x², for b = +1 (bottom), −0.25 (middle) and −1 (top). Where the nullclines cross there are fixed points. When b = −1 there are no fixed points, there is one when b = −0.25 (this is the bifurcation) and there are two when b > −0.25. To find the bifurcation, set ẋ = ẏ and dẋ/dx = dẏ/dx, because both the curves and their tangents must be equal at this point.


Figure 7: a) Nullclines with associated trajectory arrows on them, found by considering ẋ vs ẏ. For
example, when ẋ > ẏ (where the red line is above the blue line) we have d x > d y so on the ẏ = 0
nullcline, d x > 0 and trajectories point in the positive x direction. (b) These trajectories can then
be used to determine the stability of the fixed points.


Figure 8: Saddle node bifurcation with µ = +1 (above), µ = 0 (centre) and µ = −1 (below).


Figure 9: Saddle node bifurcation with µ = +1 (above) and µ = 0 (below). When changing µ from 0 to 1, the “slow” region near the location of the fixed point (in this case at the origin) – called the “ghost” region – remains. While there is no fixed point when µ = +1, the time taken to go through the ghost region is long (T ∼ 1/√µ, cf. Section 2.5.1).


Figure 10: Transcritical bifurcation with µ = +1 (above) and µ = −1 (below).


8.3 Supercritical pitchfork bifurcation


Shown in Fig. 11, we have,

ẋ = µx − x 3 , (8.9)
ẏ = −y . (8.10)
ẋ = 0 implies x = 0 or µ = x², i.e. x = ±√µ. If µ > 0 we thus have three fixed points; if µ < 0 we have one. The bifurcation is at µ = 0.

8.4 Subcritical pitchfork bifurcation


Shown in Fig. 12, we have,

ẋ = µx + x 3 , (8.11)
ẏ = −y . (8.12)
ẋ = 0 implies x = 0 or µ = −x², i.e. x = ±√(−µ). If µ < 0 there are three fixed points; if µ > 0 there is one. The bifurcation is at µ = 0.


Figure 11: Supercritical pitchfork bifurcation with µ = +1 (above) and µ = −1 (below).


Figure 12: Subcritical pitchfork bifurcation with µ = +1 (above) and µ = −1 (below).

9 Hopf Bifurcations
Everything we have discussed so far relates to real eigenvalues, but in 2D the eigenvalues can also be complex conjugate pairs, i.e. they have imaginary parts. The fixed points are then spirals, with rotation around them, whose growth or decay depends on the real part of the eigenvalues (unstable if it is positive, stable if it is negative). In some cases the sign of the real part of the eigenvalues can change, which leads to another form of bifurcation we have not yet considered: the Hopf bifurcation. At a Hopf bifurcation limit cycles can appear or disappear.
Remember that stability depends on the real parts of eigenvalues, ℜ(λ),

x ∼ e^{ℜ(λ)t} e^{iℑ(λ)t} ∼ e^{ℜ(λ)t} sin ωt,    (9.1)

so we have a stable fixed point if ℜ (λ) < 0 and unstable if ℜ (λ) > 0.
If we have two real fixed points which merge, i.e. ∆ changes sign, this is a saddle-node, transcritical or pitchfork bifurcation. If instead τ changes sign while ∆ > 0, we have a more interesting case, because we have two complex conjugate eigenvalues. The imaginary part gives the “rotation rate” of the resulting spiral trajectory, and the real part gives the stability (as above, where ℜ(λ) = 0 implies a circular trajectory, i.e. ṙ = 0). The change from stable to unstable is called a Hopf bifurcation.
Further reading: https://www.math.colostate.edu/~shipman/47/volume3b2011/M640_MunozAlicea.pdf.

9.1 Supercritical Hopf bifurcation


Consider, with ω > 0 and b > 0,

ṙ = µr − r³,
θ̇ = ω + br².    (9.2)

We have ṙ = 0 when r = 0 or µ = r², i.e. r = √µ, so there is a limit cycle when µ is positive (Figs. 13 and 14). When µ < 0 we have a stable spiral (ṙ < 0).
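The polar prototype is straightforward to integrate numerically. This is my own sketch (assumed parameter values), checking that the radius converges to the limit cycle at r = √µ:

```python
# Sketch (my own): integrate  rdot = mu*r - r**3,  thetadot = omega + b*r**2
# and check r -> sqrt(mu) for mu > 0.
import numpy as np
from scipy.integrate import solve_ivp

mu, omega, b = 1.0, 1.0, 0.2   # assumed illustrative values

def hopf(t, state):
    r, theta = state
    return [mu*r - r**3, omega + b*r**2]

sol = solve_ivp(hopf, (0, 30), [0.05, 0.0], max_step=0.01)
print(sol.y[0][-1])   # close to sqrt(mu) = 1, the stable limit cycle
```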

To show that the eigenvalues cross ℜ(λ) = 0 with non-zero ℑ(λ), we convert to Cartesians using x = r cos θ and y = r sin θ, then

ẋ = ṙ cos θ − r θ̇ sin θ
  = µr cos θ − r³ cos θ − r(ω + br²) sin θ
  = µr cos θ − r³ cos θ − ωr sin θ − br³ sin θ
  = µx − ωy − r³ (cos θ + b sin θ)
  = µx − ωy + O(r³).    (9.3)

Similarly,

ẏ = ṙ sin θ + r θ̇ cos θ
  = µr sin θ − r³ sin θ + r(ω + br²) cos θ
  = µr sin θ − r³ sin θ + ωr cos θ + br³ cos θ
  = ωx + µy + O(r³).    (9.4)


Figure 13: r˙ vs r in the supercritical Hopf bifurcation (Eq. 9.2).


Figure 14: r vs µ in the supercritical Hopf bifurcation (Eq. 9.2). Compare to the supercritical pitch-
fork bifurcation (Fig. 11).


Figure 15: Supercritical trajectories of Eqs. 9.2 with µ = −1 and µ = +1, with the latter having a limit cycle at r = √µ = 1.


Near the critical point at r = r* = 0, where we can neglect the O(r³) terms, we have,

ẋ ≈ µx − ωy,    (9.5)
ẏ ≈ ωx + µy,    (9.6)

which has Jacobian,

J = ( µ , −ω ; ω , µ ),    (9.7)

with eigenvalues from det(J − λI) = (µ − λ)² + ω² = 0, so µ − λ = ±iω and

λ = µ ± iω.    (9.8)

Hence as µ becomes more positive, the eigenvalues cross the imaginary axis.

[Sketch of the complex λ plane: the conjugate eigenvalue pair µ ± iω crosses the imaginary axis from left to right as µ increases through zero.]


This can also be visualized in the ∆, τ plane,

∆ = µ2 + ω2 (9.9)
τ = 2µ . (9.10)


[Sketch of the (∆, τ) plane at ∆ = µ² + ω² > 0: the system moves from stable spirals (τ < 0) to unstable spirals (τ > 0) as µ increases through zero.]

9.2 Subcritical Hopf bifurcation


Similarly to the above is the subcritical Hopf bifurcation, with prototype given by,

ṙ = µr + r³,
θ̇ = ω + br².    (9.11)

There is a limit cycle where µr = −r³, i.e. r = 0 or r² = −µ, i.e. r = ±√(−µ), which is real when µ < 0; thus at µ = 0 there is a bifurcation:

• µ > 0: there is only one fixed point, because √(−µ) is imaginary,

• µ < 0: there is another fixed point at r = +√(−µ) (the other, at r = −√(−µ), is unphysical).

Again we can write the equations in Cartesian co-ordinates near r = 0; the cubic term is O(r³), so exactly as before,

ẋ ≈ µx − ωy,    (9.12)
ẏ ≈ ωx + µy,    (9.13)

which has Jacobian,

J = ( µ , −ω ; ω , µ ),    (9.14)

with eigenvalues from det(J − λI) = (µ − λ)² + ω² = 0, so

λ = µ ± iω.    (9.15)


Figure 16: ṙ vs r in the subcritical Hopf bifurcation (Eq. 9.11).


Figure 17: r vs µ in the subcritical Hopf bifurcation (Eq. 9.11). Compare to the subcritical pitchfork bifurcation (Fig. 12).


Figure 18: Subcritical trajectories of Eqs. 9.11 with µ = −1 and µ = +1, with the former having an (unstable) limit cycle at r = √(−µ) = 1.


9.3 Saddle node bifurcation of cycles


A more interesting system is the following,

r˙ = µr + r 3 − r 5 ,
θ̇ = ω + br 2 , (9.16)

where we have, when ṙ = 0, µr + r³ − r⁵ = 0. This appears intractable, but one root is r = 0, so to find the other roots factor out (r − 0) = r,

µr + r³ − r⁵ = −r(−µ − r² + r⁴) = 0,    (9.17)

or

µ = r⁴ − r² = r²(r² − 1),    (9.18)

and then,

r⁴ − r² − µ = ρ² − ρ − µ = 0,    (9.19)

where ρ = r². Therefore,

ρ = (1 ± δ)/2, where δ = √(1 + 4µ),    (9.20)

and then,

r = ±√((1 ± δ)/2),    (9.21)

or in full,

r = +√((1 + δ)/2), +√((1 − δ)/2), −√((1 + δ)/2), −√((1 − δ)/2).    (9.22)

If 1 + 4µ < 0 we have imaginary δ, so no extra real roots at all; hence the only non-zero extra roots have µ > µc = −1/4. (By “extra” I mean in addition to the trivial root at r = 0.) When µ > µc, we have:

• The first root, r = ((1 + δ)/2)^{1/2}, is always positive and real, so always exists.

• The second root, r = ((1 − δ)/2)^{1/2}, is real if δ < 1, i.e. 1 + 4µ < 1, so only if µ < 0 and hence only if µc < µ < 0.

• The third and fourth roots, r = −((1 ± δ)/2)^{1/2}, are always negative so are always unphysical (regardless of µ).
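The two physical radii in the window µc < µ < 0 can be confirmed numerically. This is my own sketch, using µ = −0.12 (the value used in Fig. 21):

```python
# Sketch (my own): the two positive roots of  mu*r + r**3 - r**5 = 0
# for mu = -0.12, via the quadratic in rho = r**2 (Eqs. 9.19-9.21).
import numpy as np

mu = -0.12
delta = np.sqrt(1 + 4*mu)                  # real because mu > -1/4
rho = np.array([(1 + delta)/2, (1 - delta)/2])
r = np.sqrt(rho)                           # the two limit-cycle radii
print(np.round(r, 3))                      # roughly 0.928 and 0.373
```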


Figure 19: ṙ vs r in the saddle-node bifurcation of cycles (Eq. 9.16).


Figure 20: r vs µ in the saddle-node bifurcation of cycles (Eq. 9.16). Compare to the subcritical pitchfork bifurcation (Fig. 12). The system exhibits hysteresis: when µ < −0.25 the centre is a stable spiral. When µ increases and becomes positive, the system jumps to the stable r = √((1 + δ)/2) solution (where δ = √(1 + 4µ)). If µ is then decreased, it stays on the stable branch as long as this exists, i.e. while µc = −1/4 < µ < 0, before jumping back to r = 0 catastrophically.

Figure 21: Trajectories of Eqs. 9.16 in the saddle-node bifurcation of cycles, with µc < µ = −0.12 < 0. The two limit cycles at r ≈ 0.928 and r ≈ 0.373 are marked as black lines.

10 Fractals
Definition.
A fractal is a geometrical shape that shows fine structure at arbitrarily small scales and displays self-similarity under magnification. That is, a fractal is an image repeated at ever-reduced scales.
Or:
A fractal is a subset of a Euclidean space for which the Hausdorff dimension strictly exceeds the topological dimension. Fractals appear (nearly) the same at different levels of magnification.

• Fractals are usually nowhere differentiable.

• A fractal line winds through space: it has a dimension, D, where 1 < D < 2.
Video: https://www.youtube.com/watch?v=VxYcWn6AQsg

10.1 Fractals in Nature


Galaxies, waves, many plants (especially trees), craters, corals, coastlines, valleys, octopus, sea
urchin, sea shells, waterfalls, the rings of Saturn, blood vessels, cauliflowers.

• https://www.youtube.com/watch?v=GKYG__-HATI

• https://www.youtube.com/watch?v=xLgaoorsi9U

The pattern of a fractal remains (approximately) constant at all scales.

10.2 Cantor Set


A good example of a fractal is the “middle thirds” Cantor13 set, C , as shown in Fig. 22.
13
The set was discovered by Henry Smith in 1875 (Oxford professor of geometry) then picked up by Cantor in 1883.


Figure 22: The Cantor-thirds set.

• Step 0: Start with a line of unit length,

C 0 = [0, 1] . (10.1)

• Step 1: Split the line into three parts. Remove the central, even-numbered, part. We now have,

C₁ = [0, 1/3] ∪ [2/3, 1].    (10.2)

• Step 2: Repeat the process,

C₂ = [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1].    (10.3)

• Step 3: Repeat,

C₃ = [0, 1/27] ∪ [2/27, 1/9] ∪ [2/9, 7/27] ∪ [8/27, 1/3] ∪ [2/3, 19/27] ∪ [20/27, 7/9] ∪ [8/9, 25/27] ∪ [26/27, 1].    (10.4)

• Step n . . . : Repeat. . . The final Cantor-thirds set is then¹⁴ the limit as n → ∞,

C = ⋂_{n=0}^{∞} Cₙ.    (10.5)

The set has a number of properties.

• The length of each segment is 1, then 1/3, then 1/9, etc., so call this ε,

ε = 1/3ⁿ.    (10.6)

• The number of segments is 1, then 2, then 4, etc., i.e.,

N = 2ⁿ.    (10.7)

• We can then write

n = −ln ε / ln 3,    (10.8)

so

N = 2^(−ln ε / ln 3).    (10.9)

¹⁴ C is the intersection of all the sets, not the union: think vertically rather than horizontally. The intersection, n = 0 . . . ∞, runs vertically in Fig. 22.
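The counting above is easy to reproduce by constructing the set iteratively. This is my own sketch (not from the notes): at step n there are N = 2ⁿ segments of length ε = 3⁻ⁿ, total length (2/3)ⁿ, and the relation N = 2^(−ln ε / ln 3) corresponds to a similarity dimension D = ln 2 / ln 3 ≈ 0.631.

```python
# Sketch (my own): build the Cantor-thirds intervals step by step and
# check the segment count and total length at each step.
from fractions import Fraction
import math

def cantor_step(intervals):
    # Replace each [a, b] by its outer thirds (remove the middle third).
    out = []
    for a, b in intervals:
        third = (b - a)/3
        out += [(a, a + third), (b - third, b)]
    return out

C = [(Fraction(0), Fraction(1))]
for n in range(1, 6):
    C = cantor_step(C)
    L = sum(b - a for a, b in C)
    print(n, len(C), L)           # 2**n segments, total length (2/3)**n

print(math.log(2)/math.log(3))    # similarity dimension, ~0.631
```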

The set has the following properties:

• The measure of the set is zero. This means that the set can be covered by intervals whose
total length is arbitrarily small.
At step n we have N (n) segments of length ² (n). The total length is L (n),

L (n) = N (n) × ² (n) (10.10)


µ ¶n
2
= . (10.11)
3

Since L(n) → 0 as n → ∞, the covering length can be made as small as we like by increasing n, while the intervals of Cₙ still cover the set:

lim_{n→∞} L(n) = 0: ∀ δ > 0 ∃ n : L(n) < δ and the intervals of Cₙ cover C.

• C is totally disconnected.

  – Given two points x and y in C where |x − y| > 1/3^k, then x and y are in two different
    intervals of C_k (we can choose k such that this is true). There exists a point z between
    x and y which is not in C_k and hence not in C. The set is thus disconnected.

• Any point in C is arbitrarily close to at least one other point in C.

  – Following on from the previous point, given that x ∈ C, x must also be in one of the 2^k
    (k ∈ N) intervals that make up C_k.
  – There is thus an end-point, y_k, in C_k which has |x − y_k| < 1/3^k.
  – This end-point is in C (end-points survive the iterative process), thus we can choose k
    such that y_k is arbitrarily close to x.

• The set C is non-empty. After the first segment is removed, the end-points at 0, 1/3, 2/3 and 1
  are still in the set. At the next iteration, C_2 = [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1], which still contains
  the points 0, 1/3, 2/3 and 1. These points never disappear, hence the set is never empty.

• The set C is closed. Each C_n is a finite union of closed intervals, e.g. C_1 = [0, 1/3] ∪ [2/3, 1], and
  is hence closed; C, as an intersection of closed sets, is – by definition – closed.


• The set C is not “dense” in the interval [0, 1]. Loosely, this means the points in the set are not
  tightly clustered15.
  Assume there is an interval I = [a, b] ⊂ C = C_∞, where a < b. Then I = [a, b] ⊂ C_n as well, for
  all n < ∞, but, as the total length L (n) → 0 as n → ∞, then |I| = b − a = 0, so C contains no
  interval and is not dense.

• C is uncountable16.
  To be countable17, the set would have to be put in one-to-one correspondence with a subset of
  the natural numbers.

• Variants exist: instead of middle thirds we can remove middle fifths, middle sevenths, etc., to obtain similar sets.

10.3 Ternary expansion characterization of the Cantor-thirds set


We can write the Cantor-thirds set, C , in base three numbers, i.e. in ternary 18 . To give some back-
ground, any number x, where 0 ≤ x ≤ 1, can be written in base n as,
∞ β
X k (x)
x = , (10.12)
k=1 nk

all we have to do is choose the coefficients β_k appropriately: these are the “digits” of the number.
For example, in base 10 we can write 1/√2 ≈ 0.7071 . . . as,

1/√2 = 7/10^1 + 0/10^2 + 7/10^3 + 1/10^4 + . . . ,   (10.13)

i.e. β_1 = 7, β_2 = 0, β_3 = 7, β_4 = 1 etc. Simply placing these numbers in a list gives us the decimal
(base-10) number.
Thus x can be written in ternary (base 3) as,

x = Σ_{k=1}^{∞} α_k (x) / 3^k ,   (10.14)

where α_k (x) = 0, 1 or 2. Now for the trick: we associate each third of the interval with a digit,
0, 1 or 2, and note that it is section 1 – the middle third – that is always removed. Thus, any number
in the Cantor-thirds set can be written with no 1 among its ternary digits, only 0s and 2s (where a
number has two expansions, e.g. 1/3 = 0.1000 . . . = 0.0222 . . . in ternary, we pick the one without
the 1),

C = { x ∈ [0, 1] : x = Σ_{k=1}^{∞} α_k / 3^k , α_k = 0, 2 } .   (10.15)

Now consider dividing α_k by two: α_k/2 = 0 or 1. These are the digits of a binary representation. We
can then define, for any x in the Cantor-thirds set C, a function f (x) that maps to the range [0, 1]
in binary,

f (x) = Σ_{k=1}^{∞} (α_k (x)/2) / 2^k = (1/2) Σ_{k=1}^{∞} α_k (x) / 2^k .   (10.16)
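The ternary-to-binary map of Eq. (10.16) can be sketched directly; the helper names `ternary_digits` and `cantor_to_unit` below are ours, not from the notes:

```python
# Read off the ternary digits alpha_k of x (0 or 2 for Cantor points),
# halve them, and reinterpret the result as binary digits, Eq. (10.16).
def ternary_digits(x, k_max=30):
    digits = []
    for _ in range(k_max):
        x *= 3
        d = int(x)
        digits.append(d)
        x -= d
    return digits

def cantor_to_unit(x, k_max=30):
    return sum((d // 2) / 2**(k + 1)
               for k, d in enumerate(ternary_digits(x, k_max)))

# 1/4 = 0.020202..._3 is in the Cantor set; halving the digits gives
# 0.010101..._2 = 1/3.
assert all(d in (0, 2) for d in ternary_digits(0.25))
assert abs(cantor_to_unit(0.25) - 1/3) < 1e-8
```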
15. https://en.wikipedia.org/wiki/Nowhere_dense_set
16. https://www.youtube.com/watch?v=AYj80i0eQHo and https://www.youtube.com/watch?v=9O9aTxtBT80
17. http://www.math.umaine.edu/~farlow/sec25.pdf
18. https://en.wikipedia.org/wiki/Ternary_numeral_system


Because any real number y, where 0 ≤ y ≤ 1, can be written in binary as,

y = Σ_{k=1}^{∞} ε_k (y) / 2^k ,   (10.17)

with ε_k = 0 or 1, our Cantor-thirds set has as many numbers in it as there are real numbers between
0 and 1, despite us removing pieces of total length 1!
• The process of removing infinitely-many middle thirds from [0, 1] has no effect on the
number of points in the range [0, 1]: it must thus be uncountable.
Alternatively, one can show that the real numbers are uncountable and hence, because we can
map from the Cantor-thirds set to [0, 1], the Cantor-thirds set is also uncountable19 .

10.4 Fractal examples


See https://users.math.yale.edu/public_html/People/frame/Fractals/ for many more
beautiful examples of fractals. You can try making your own fractals at http://thewessens.net/
ClassroomApps/Main/fractals.html and http://thewessens.net/ClassroomApps/Main/linefractal.html.

10.4.1 von Koch curves

Step    ε (n)    N (n)
n = 0   1        1
n = 1   1/3      4
n = 2   1/9      16
n = 3   1/27     64
n = 4   1/81     256


19. https://www.youtube.com/watch?v=CvalbBGhmW4


• The length of the nth piece is,

  ε (n) = 1/3^n .   (10.18)

• The number of pieces is,

  N (n) = 4^n .   (10.19)

• The total length is,

  L (n) = N (n) × ε (n) = (4/3)^n ,   (10.20)

  so as n → ∞ the total length is infinite.

• The von Koch curve is continuous, bounded and has infinite length.

• It cannot be mapped to the line: it must thus have a dimension > 1.

• It does not cover the two-dimensional plane: its dimension is < 2.
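The curve itself can be generated numerically. A minimal sketch, assuming the standard refinement rule (each segment is replaced by four segments of one-third the length, with an equilateral bump in the middle); the helper `koch_step` is our own:

```python
import math

# One von Koch refinement: each segment becomes four segments of one-third
# the length, the middle two forming an equilateral "bump".
def koch_step(points):
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
        a = (x0 + dx, y0 + dy)
        b = (x1 - dx, y1 - dy)
        # peak of the bump: rotate (dx, dy) by +60 degrees about point a
        c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)
        peak = (a[0] + c * dx - s * dy, a[1] + s * dx + c * dy)
        out.extend([a, peak, b, (x1, y1)])
    return out

pts = [(0.0, 0.0), (1.0, 0.0)]
for n in range(1, 5):
    pts = koch_step(pts)
    length = sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))
    assert len(pts) - 1 == 4**n              # Eq. (10.19)
    assert abs(length - (4/3)**n) < 1e-9     # Eq. (10.20)
```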


10.4.2 von Koch quadratic

[Figure: iterations n = 0 to 3 of the quadratic von Koch curve.]

10.4.3 von Koch snowflake

[Figure: iterations n = 0 to 3 of the von Koch snowflake.]


10.4.4 Sierpiński carpet

One can think of this as a map such that we start with 1 and apply,
 
      ( 0 0 0 )
0  →  ( 0 0 0 ) ,   (10.21)
      ( 0 0 0 )

      ( 1 1 1 )
1  →  ( 1 0 1 ) .   (10.22)
      ( 1 1 1 )
The area is ε^2 × N (ε) where ε = (1/3)^n and N (ε) = 8^n, hence,

A (n) = ε^2 N (ε) = (8/9)^n ,   (10.23)

and

lim_{n→∞} A (n) = 0 .   (10.24)
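The substitution rules (10.21)–(10.22) can be iterated directly. A minimal sketch (the helper `carpet_step` is our own naming):

```python
# Iterate the substitution rules: each 1 becomes a 3x3 block of 1s with the
# centre removed; each 0 becomes a 3x3 block of 0s.
def carpet_step(grid):
    size = len(grid)
    new = [[0] * (3 * size) for _ in range(3 * size)]
    for i in range(size):
        for j in range(size):
            if grid[i][j]:
                for di in range(3):
                    for dj in range(3):
                        if (di, dj) != (1, 1):
                            new[3 * i + di][3 * j + dj] = 1
    return new

grid = [[1]]
for n in range(1, 5):
    grid = carpet_step(grid)
    filled = sum(map(sum, grid))
    assert filled == 8**n                                   # 8^n filled cells
    assert abs(filled / len(grid)**2 - (8/9)**n) < 1e-12    # Eq. (10.23)
```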



Figure 23: Successive iterations of the Sierpiński carpet.

10.4.5 Sierpiński triangle

See Fig. 25. This is also called the Sierpiński gasket.

10.5 Fractal dimension


We need a way to determine the “dimension” of a fractal20 . We have already seen that the von Koch
curve has dimension 1 < D < 2, but what is D?
20
https://www.youtube.com/watch?v=RFMZZ4pPKlk


Figure 24: Fractal antenna in a mobile phone using the Sierpiński carpet. The use
of a fractal antenna maximizes the perimeter of the antenna, and the antenna's self-
similarity allows it to function well over a range of frequencies (normal antennas func-
tion at a single frequency only). See also https://en.wikipedia.org/wiki/Fractal_
antenna and https://www.researchgate.net/publication/255633073_Self-similarity_
and_the_geometric_requirements_for_frequency_independence_in_antennae.

10.5.1 General definition

We can use the self-similarity of a fractal to define its self-similarity dimension,

D = ln Z / ln M ,   (10.25)

where
• the numerator counts Z, the number of self-similar pieces the fractal is divided into, and
• the denominator counts M, the magnification factor required to get one piece back to the original.

Example.
Consider a line broken into N equal pieces, so Z = N. To get back to the original line we must magnify
by M = N. Thus D = ln N / ln N = 1. This is the same as the dimension of a line.

Example .
Consider a square. If we divide this into N 2 smaller squares, then Z = N 2 . To zoom in to a square
that looks like our original square, we require M = N . Hence D = ln N 2 / ln N = 2 ln N / ln N = 2.
This is the same dimension as an area.

Example .
Consider the Sierpiński triangle. Each iteration divides a triangle into three self-similar triangles,
Z = 3. The length of the side of each triangle has been reduced by a factor 2. Hence D = ln 3/ ln 2 ≈
1.58.

Example .
The Cantor set is, at each step, divided into two self-similar pieces, Z = 2. To recover the original,
we zoom by a factor 3. Hence D = ln 2/ ln 3 ≈ 0.63.
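The worked examples above reduce to one line of arithmetic; a minimal sketch (the helper name is ours):

```python
import math

# Self-similarity dimension D = ln Z / ln M, Eq. (10.25).
def similarity_dimension(Z, M):
    return math.log(Z) / math.log(M)

assert abs(similarity_dimension(4, 2) - 2.0) < 1e-12     # square: Z = N^2, M = N
assert abs(similarity_dimension(3, 2) - 1.585) < 1e-3    # Sierpinski triangle
assert abs(similarity_dimension(2, 3) - 0.6309) < 1e-4   # Cantor set
assert abs(similarity_dimension(4, 3) - 1.262) < 1e-3    # von Koch curve
```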


Figure 25: Left: Geography cone shells (Conus geographus, a type of snail). Right: Sierpiński tri-
angle fractal.

10.5.2 Box dimension/capacity dimension

Not all fractals are self-similar, so we need a “better” way. We can define the “box dimension”,
also called the “capacity dimension”, as follows. Let's say we have a load of boxes of size ε × ε in the
plane, hence each of area ε^2.

• Consider a line of length L: to cover this line we need N (ε) = L/ε boxes.

• Consider an area A: to cover this we need N (ε) = A/ε^2 boxes.

• In general, to cover something we need N (ε) ∝ ε^{−k} boxes, where k is the box dimension.

• Rearranging this equation and solving for k, we see that for some measure of the system (e.g.
  its area or volume) B = L^k,

  N (ε) = B / ε^k = L^k ε^{−k}   (10.26)
  ln N (ε) = k ln L − k ln ε   (10.27)
  k (ln L − ln ε) = ln N (ε)   (10.28)
  k = ln N (ε) / (ln L − ln ε) .   (10.29)

  In the limit ε → 0, the (constant) ln L is negligible because |ln ε| → ∞, hence the definition of
  the box (or capacity) dimension,

  D_c = k = − lim_{ε→0} ln N (ε) / ln ε .   (10.30)

Note that the limit may not exist.

More formally, let N (ε) be the minimum number of D-dimensional boxes of side-length ε needed
to cover the object, then for a D-dimensional cube with sides of length L,


N (ε) = total volume / hypercube volume = L^D / ε^D ,   (10.31)

or

D = ln N (ε) / (ln L + ln ε^{−1}) .   (10.32)

In the limit ε → 0, we have ln ε^{−1} ≫ ln L and we define the box dimension, D_c,

D_c = lim_{ε→0} ln N (ε) / ln ε^{−1} = − lim_{ε→0} ln N (ε) / ln ε .   (10.33)

10.5.3 Cantor set

Remember N (ε) = 2^n and ε = 3^{−n}, i.e. ε^{−1} = 3^n. Hence,

D_c = lim_{n→∞} ln (2^n) / ln (3^n) = ln 2 / ln 3 ≈ 0.6309 .

10.5.4 von Koch curve

N (ε) = 4^n and ε^{−1} = 3^n, then,

D_c = lim_{n→∞} ln (4^n) / ln (3^n) = ln 4 / ln 3 ≈ 1.26 .

This fits with our previous assertion that the “dimension” should be between 1 and 2.

10.5.5 Sierpiński carpet


ε = (1/3)^n and N (ε) = 8^n, so,

D_c = lim_{n→∞} ln (8^n) / ln (3^n) = ln 8 / ln 3 ≈ 1.89 .
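The box-counting definition can be tested numerically on the Cantor set; a minimal sketch (all helper names are ours):

```python
import math

# Box-count the Cantor set: sample one point inside each level-n interval,
# cover with boxes of side eps = 3^-m, and apply Eq. (10.33) at finite eps.
def cantor_midpoints(n):
    C = [(0.0, 1.0)]
    for _ in range(n):
        C = [piece for a, b in C
             for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return [(a + b) / 2 for a, b in C]

def box_count(points, eps):
    return len({math.floor(p / eps) for p in points})

pts = cantor_midpoints(12)
for m in (4, 6, 8):
    eps = 3.0**-m
    N = box_count(pts, eps)
    assert N == 2**m                      # N(eps) = 2^n when eps = 3^-n
    D = math.log(N) / math.log(1 / eps)
    assert abs(D - math.log(2) / math.log(3)) < 1e-9
```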

10.6 Pointwise and correlation dimension


Often it is not practical to count boxes. Instead, consider sampling points on trajectories.
Then fix a point x in the phase space and a hypersphere of radius r around it. The number of points
in the sphere scales as,

N (x, r) ∝ r^d ,   (10.34)

where d is the pointwise dimension. Note that N can depend very much on the location, x, hence
instead define an average (in some way) over many different x,

C (r) ∝ r^d ,   (10.35)

where d is the correlation dimension. We can then write,

ln C (r) = a + d ln r ,   (10.36)


and measure the slope by experiment. This works if we have sufficient resolution such that, if a is
the minimum separation among the sample of points,

r ≫ a ,   (10.37)

and if r is small relative to the size of the fractal, S,

r ≪ S .   (10.38)

In general, the correlation dimension is less than (or equal to) the box dimension,

d ≤ D_c .   (10.39)

The number d is a good representation of the number of degrees of freedom required to
parameterize the fractal.
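A sketch of the correlation-dimension measurement, under the assumption that drawing random ternary digits from {0, 2} samples the natural measure on the Cantor set (all helper names are ours):

```python
import math
import random

# Sample Cantor-set points, compute the correlation sum C(r), and fit the
# slope d of ln C(r) = a + d ln r, Eq. (10.36), by least squares.
random.seed(1)

def cantor_point(k_max=25):
    return sum(random.choice((0, 2)) / 3**k for k in range(1, k_max + 1))

pts = sorted(cantor_point() for _ in range(1000))

def correlation_sum(points, r):
    n = len(points)
    close = sum(1 for i in range(n) for j in range(i + 1, n)
                if points[j] - points[i] < r)
    return 2 * close / (n * (n - 1))

log_r = [math.log(3.0**-m) for m in (2, 3, 4, 5)]
log_C = [math.log(correlation_sum(pts, 3.0**-m)) for m in (2, 3, 4, 5)]
rbar = sum(log_r) / len(log_r)
cbar = sum(log_C) / len(log_C)
d = (sum((x - rbar) * (y - cbar) for x, y in zip(log_r, log_C))
     / sum((x - rbar)**2 for x in log_r))
assert 0.5 < d < 0.8    # close to ln 2 / ln 3 = 0.6309
```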

10.7 Cantor Rings


Define a Cantor Ring fractal as follows.

• n = 0: define a ring between radii 1 and 2, hence inner radius r_1 = 1.

• n = 1: remove the middle third, between 4/3 and 5/3. Now we have two rings: 1 to 1 + 1/3 = 4/3
  and 1 + 2/3 = 5/3 to 2.
  ε = 1/3 and inner radii r_1 = 1 and r_2 = 5/3.

• n = 2: repeat the process. . .
  ε = 1/3^2 and r_1 = 1, r_2 = 1 + 2/9 = 11/9, r_3 = 5/3, r_4 = 5/3 + 2/9 = 17/9

• n = 3 : inner radii are 1, 29/27, 11/9, 35/27, 5/3, 47/27, 17/9 and 53/27

• n = 4 : inner radii are 1, 83/81, 29/27, 89/81, 11/9, 101/81, 35/27, 107/81, 5/3, 137/81, 47/27,
143/81, 17/9, 155/81, 53/27, 161/81

• . . . etc.. . .

• Step n: we have N_R rings,

  N_R (n) = 2^n ,   (10.40)

  with thickness of each ring (cf. the Cantor-thirds set),

  ε (n) = 1/3^n .   (10.41)


[Figure: the first Cantor-ring steps, with radii 1, 4/3, 5/3 and 2 marked.]

• Note that: along the x-axis we have the standard middle-thirds Cantor set.

• The total area at step n is,

  A (ε) = Σ_{i=1}^{N_R} π [ (r_i + ε)^2 − r_i^2 ]
        = Σ_{i=1}^{N_R} π [ ε^2 + 2 ε r_i ]
        = π ε^2 ( Σ_{i=1}^{N_R} 1 ) + 2 π ε ( Σ_{i=1}^{N_R} r_i ) .   (10.42)


• Now we can write,

  Σ_{i=1}^{N_R} r_i = Σ_{i=1}^{N_R} [ (r_i − 1) + 1 ]
                    = Σ_{i=1}^{N_R} x_i + Σ_{i=1}^{N_R} 1
                    = Σ_{i=1}^{N_R} x_i + N_R ,   (10.43)

  where x_i = r_i − 1 are the usual Cantor-thirds set distances from the origin.

• We can use the result,

  Σ_{i=1}^{N_R} r_i = N_R ( 3/2 − ε/2 ) ,   (10.44)

  the derivation of which is left as an exercise for the student (example sheet 4).

• Hence

  A (ε) = π ε^2 ( Σ_{i=1}^{N_R} 1 ) + 2 π ε ( Σ_{i=1}^{N_R} r_i )
        = π ε^2 N_R + 2 π ε N_R ( 3/2 − ε/2 )
        = π ε^2 N_R + 3 π ε N_R − π ε^2 N_R
        = 3 π ε N_R .   (10.45)

  Note that the area is not proportional to ε^2!

• The number of ε^2 boxes required to cover this area is,

  N (ε) ≈ A (ε) / ε^2 = 3 π N_R / ε = 3 π 2^n / (1/3)^n = 3 π 6^n .   (10.46)

• Hence the box dimension is,

  D_c = − lim_{n→∞} ln N (ε) / ln ε = lim_{n→∞} ( ln (3π) + n ln 6 ) / ( n ln 3 ) = ln 6 / ln 3 ≈ 1.6309 .   (10.47)

• Note that this is just D_c (C) + 1 where C is the Cantor-thirds set. The extra dimension is the
  radial dimension.

• The Cantor rings . . .
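The area result (10.45) can be verified by summing the ring areas directly; a minimal sketch (helper names are ours):

```python
import math

# Verify A = 3*pi*eps*N_R, Eq. (10.45), by summing the ring areas directly.
# The inner radii are r_i = 1 + x_i with x_i the Cantor-interval left ends.
def cantor_left_ends(n):
    C = [(0.0, 1.0)]
    for _ in range(n):
        C = [piece for a, b in C
             for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return [a for a, b in C]

for n in range(1, 8):
    eps = 3.0**-n
    radii = [1 + x for x in cantor_left_ends(n)]
    NR = len(radii)
    area = sum(math.pi * ((r + eps)**2 - r**2) for r in radii)
    assert NR == 2**n
    assert abs(area - 3 * math.pi * eps * NR) < 1e-9
```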


w a b c d e f p Portion generated
f1 0 0 0 0.16 0 0 0.01 Stem
f2 0.85 0.04 −0.04 0.85 0 1.60 0.85 Successively smaller leaflets
f3 0.20 −0.26 0.23 0.22 0 1.60 0.07 Largest left-hand leaflet
f4 −0.15 0.28 0.26 0.24 0 0.44 0.07 Largest right-hand leaflet

Table 1: Barnsley fern coefficients.

10.8 Barnsley Fern


The Barnsley Fern, as shown in Fig. 26, is generated with four affine transformations21, of general
form,

f (x, y) = ( a  b ) ( x )  +  ( e )
           ( c  d ) ( y )     ( f )   (10.48)

with coefficients as in Table 1, and initial point (0, 0). There are also “mutant” varieties!
The following Python code is adapted from Wikipedia.

import random
import matplotlib.pyplot as plt

X = [0]
Y = [0]
for n in range(100000):
    # X[n], Y[n] is the most recent point; pick one of the four maps at random
    r = random.uniform(0, 100)
    if r < 1.0:                                  # f1, p = 0.01: stem
        x = 0
        y = 0.16 * Y[n]
    elif r < 86.0:                               # f2, p = 0.85: smaller leaflets
        x = 0.85 * X[n] + 0.04 * Y[n]
        y = -0.04 * X[n] + 0.85 * Y[n] + 1.6
    elif r < 93.0:                               # f3, p = 0.07: left-hand leaflet
        x = 0.2 * X[n] - 0.26 * Y[n]
        y = 0.23 * X[n] + 0.22 * Y[n] + 1.6
    else:                                        # f4, p = 0.07: right-hand leaflet
        x = -0.15 * X[n] + 0.28 * Y[n]
        y = 0.26 * X[n] + 0.24 * Y[n] + 0.44
    X.append(x)
    Y.append(y)

# Make a plot
plt.figure(figsize=[15, 15])
plt.scatter(X, Y, color='g', marker='.')
plt.show()

21. https://en.wikipedia.org/wiki/Affine_transformation


Figure 26: The Barnsley fern fractal.

Try making it yourself:


https://solarianprogrammer.com/2017/11/02/barnsley-fern-python-3

10.9 Mandelbrot Set


One of the most famous fractals is the Mandelbrot Set (see also Sec. 15.10). It is defined as the set
of complex numbers c for which the function,

f (z) = z^2 + c ,   (10.49)

does not diverge when iterated from z = 0 (Fig. 27). It is closely related to the Julia sets. See
e.g. http://www.karlsims.com/julia.html and https://www.youtube.com/watch?v=mg4bp7G0D3s
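Membership can be tested by iterating Eq. (10.49) and declaring escape once |z| > 2, after which the orbit must diverge. A minimal sketch (the helper `in_mandelbrot` and the 500-iteration cap are our own choices):

```python
# Iterate f(z) = z^2 + c from z = 0; once |z| > 2 the orbit must diverge,
# so that serves as an escape test (500 iterations is an arbitrary cap).
def in_mandelbrot(c, max_iter=500):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

assert in_mandelbrot(0)        # the orbit stays at 0
assert in_mandelbrot(-1)       # period-2 orbit: 0, -1, 0, -1, ...
assert not in_mandelbrot(1)    # 0, 1, 2, 5, 26, ... escapes
```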


Figure 27: The Mandelbrot Set. Image created by Wolfgang Beyer with the program Ultra Fractal 3.
CC BY-SA 3.0 https://commons.wikimedia.org/w/index.php?curid=321973

11 Evolution of Volumes in Phase Space
We have learned about how trajectories move in phase space, i.e. how points move, but how do we
generalize this idea to volumes in a three-dimensional system, ẋ = f (x)? Motion that is dissipative,
for example, involves phase-space contraction, e.g. all trajectories will tend towards zero.

• In general we can consider an arbitrary closed surface S(t) surrounding a volume V (t) in
  phase space.

• We let S(t) evolve to S(t + dt): what is V (t + dt)?

• Let n be an outward normal on S. The velocity is ẋ = f (x), so the outward component of
  velocity is f · n.

• In a time dt an area dA sweeps out a volume ( f · n dt ) dA.

• Hence the new volume is,

  V (t + dt) = V (t) + ∫_S ( f · n ) dt dA ,   (11.1)

  or, using the divergence theorem,

  V̇ = ∫_S f · n dA = ∫_V ∇ · f dV .   (11.2)

• We know f hence we can calculate,

  ∇ · f = Σ_{i=1}^{3} ∂f_i / ∂x_i .   (11.3)

• If

  (1/V) dV(t)/dt :  < 0 , phase volumes contract : dissipation (energy sinks) ,
                    = 0 , conservative system ,   (11.4)
                    > 0 , phase volumes expand : energy sources .

11.1 Damped pendulum


The damped pendulum follows an equation like,

θ̈ + γ θ̇ + Ω^2 sin θ = 0 ,   (11.5)

where Ω^2 = g/l (for a pendulum of length l) and γ is a damping coefficient.

• We can write this as two equations,

  θ̇ = ω ,   (11.6)
  ω̇ = −γ ω − Ω^2 sin θ .   (11.7)

• Consider now,

  ∇ · f = ∂θ̇/∂θ + ∂ω̇/∂ω = −γ .   (11.8)
• The phase volume contracts which is consistent with energy dissipation.


11.2 Limit cycle


• Consider a typical limit cycle,

  ṙ = r − r^2 ,   (11.9)
  θ̇ = 1 ,   (11.10)

  then

  ∇ · f = ∂ṙ/∂r + ∂θ̇/∂θ = 1 − 2r :  > 0 , r < 1/2 ,
                                     = 0 , r = 1/2 ,   (11.11)
                                     < 0 , r > 1/2 .

• Volumes contract onto the limit cycle at r = 1 from the outside.

• From inside, they expand towards r = 1/2 and then contract onto the limit cycle.
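The contraction onto the limit cycle can be checked by integrating Eq. (11.9) directly; a minimal sketch using forward Euler (the step size and end time are arbitrary choices of ours):

```python
# Integrate r' = r - r^2, Eq. (11.9), with forward Euler from both sides of
# the limit cycle: every positive initial radius ends up at r = 1.
def evolve(r, h=0.001, t_end=20.0):
    for _ in range(int(t_end / h)):
        r += h * (r - r * r)
    return r

assert abs(evolve(0.1) - 1.0) < 1e-6   # from inside, via r = 1/2
assert abs(evolve(3.0) - 1.0) < 1e-6   # from outside
```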

12 Attractors
Classical attractors include fixed points and limit cycles. We can generalise the idea of attractor: in
an n-dimensional flow,

ẋ (t) = F (x) ,   (12.1)

where x ∈ R^n, an attractor is a subset of the space, A ⊂ R^n, with the properties:

• A is invariant with the flow (i.e. it does not change with time).

• A attracts, and is contained in, a basin of attraction B. Any trajectory starting in B will eventually
  approach A as t → ∞.

• A has dimension d < n: usually phase-space blobs are squeezed onto the lower-dimension
  attractor, A, by contraction of the phase space. Dissipation usually generates an attractor.

12.1 2-dimensional torus


Consider a 2-dimensional torus, T 2 . This kind of shape results from systems of the form,

θ̇1 = f 1 (θ1 , θ2 ) ,
θ̇2 = f 2 (θ1 , θ2 ) . (12.2)

where f 1 and f 2 are periodic functions. Map θ1 around the long circumference and θ2 around the
short circumference of the torus22 . We will investigate a simple case,

θ̇1 = ω1 ,
θ̇2 = ω2 , (12.3)

which are shown in Fig. 28. We can expand the trajectories onto a flat surface in θ1 and θ2 .

• When ω₁/ω₂ = n/m is rational (i.e. n, m ∈ N), we have closed trajectories (Fig. 29).

• When ω₁/ω₂ is irrational the trajectory densely fills the surface of the torus: it never closes
  (Fig. 30). The trajectories remain parallel so never cross. Neighbouring trajectories will remain
  neighbouring forever and evolution is predictable, i.e. this is not sensitive to initial conditions.

A third dimension is required to have sensitivity to initial conditions! In order to have a chaotic
attractor we require:

• Attraction

• Sensitivity to initial conditions

• Non-integer fractal dimension

All three together give us a strange attractor.

22. The equation of a torus is (x, y, z) = ([a + b cos u] cos v, [a + b cos u] sin v, b sin u), and we then set u = θ₂ (the short way round) and v = θ₁ (the long way round).


Figure 28: Evolution of the equations θ̇1 = ω1 = 1 and θ̇2 = ω2 = 4 on a torus.


Figure 29: Trajectories of Eq. 12.3 with ω1 = 1 and ω2 = 4.


Figure 30: Trajectories of Eq. 12.3 with ω₁ = 1 and ω₂ = √8 where time t runs from 0 to, from top to
bottom, 4π, 10π and 100π.

13 The Lorenz equations
The Lorenz equations are,

ẋ = σ (y − x) ,
ẏ = r x − y − x z ,   (13.1)
ż = x y − b z ,

where r, b and σ are positive constants. These equations were originally developed to understand
convection in fluids: r is the Rayleigh number and σ is the Prandtl number. The equations are
almost linear; only the two terms xz and x y make them non-linear.
The equations are found in many places in physics:

1. Convective rolls (Lorenz 1963): these introduced the world to chaos.


Introductory videos at https://www.youtube.com/watch?v=-apQw9r1YGM and https://
www.youtube.com/watch?v=Q_f1vRLAENA
x: intensity of the convective motion
y: horizontal temperature variation
z: vertical temperature variations
σ: 10
b: 8/3 = 2.666 . . .
r : changes with temperature, typically 28

2. Convection of an incompressible fluid in a vertical loop or torus.

3. Waterwheels (Strogatz). https://www.youtube.com/watch?v=7A_rl-DAmUE

4. Haken laser equations. https://pdfs.semanticscholar.org/447f/9efce9165fa5afa004199edfea8


pdf

5. Electric circuits. https://www.youtube.com/watch?v=DBteowmSN8g

13.1 Properties of the Lorenz equations


• Volumes in phase space always contract,

  (1/V) dV/dt = ∂ẋ/∂x + ∂ẏ/∂y + ∂ż/∂z = − (σ + 1 + b) < 0 .   (13.2)

  We can then write,

  dV/dt = − (σ + 1 + b) V ,   (13.3)

  which is easily solved to find

  V (t) = V (0) e^{−(σ+1+b)t} .   (13.4)

  Thus there is always dissipation. Commonly, σ ≈ 10, r ≈ 28 and b ≈ 8/3, hence V (t) ∝ e^{−41t/3}
  where e^{−41/3} ≈ 10^{−6}.

• Points on the z axis stay on the z axis and converge to the origin. If x = y = 0 then ẋ = ẏ = 0
and ż = −bz, hence x(t ) = y(t ) = 0 and z(t ) ∝ e −bt .


• The Jacobian matrix is,

      ( −σ     +σ   0  )
  J = ( r − z  −1   −x )  .   (13.5)
      ( y      x    −b )

• Fixed points are at,

– The origin (0, 0, 0), where,

      ( −σ  +σ  0  )
  J = ( r   −1  0  )  ,   (13.6)
      ( 0   0   −b )

  with ∆ = −σ (r − 1) and τ = − (σ + 1) for the upper-left 2 × 2 block, hence the eigenvalues are,

  λ1 = −b ,   (13.7)
  λ2,3 = [ − (σ + 1) ± √( (σ + 1)^2 + 4σ (r − 1) ) ] / 2 ,   (13.8)

  hence if r < 1 the origin (0, 0, 0) is attractive, while if r > 1 it is a saddle point.
– ẋ = 0 gives x = y and,

  ẏ = (r − 1 − z) x ,   (13.9)
  ż = x^2 − b z ,   (13.10)

  hence fixed points are at

  z_c = r − 1 ,   (13.11)
  x_c± = y_c± = ± √(b z_c) = ± √(b (r − 1)) ,   (13.12)

  and they only exist if r > 1. The Jacobian, with z = z_c = r − 1 so that r − z = 1, is,

      ( −σ            +σ           0            )     ( −σ            +σ            0            )
  J = ( r − z         −1           −x           )  =  ( 1             −1            ∓√(b (r−1))  )  ,   (13.13)
      ( y             x            −b           )     ( ±√(b (r−1))   ±√(b (r−1))   −b           )

  with eigenvalue equation,

  λ^3 + λ^2 (σ + b + 1) + λ b (σ + r) + 2σb (r − 1) = 0 .   (13.14)

  This has either three real roots or one real root and two complex-conjugate roots. The real
  root is attractive because, remembering that σ, r and b are positive, and indeed (to
  have real fixed points) r > 1,

  λ [ λ^2 + b (σ + r) ] = −λ^2 (σ + 1 + b) − 2σb (r − 1) < 0 ,   (13.15)

  hence λ1 < 0 and λ1 ∈ R.


The other two are λ2,3 = α + i β which, when α changes sign, lead to a Hopf bifurcation.
This happens either side of,
σ (σ + b + 3)
r = rH = . (13.16)
σ−b −1
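The eigenvalues at the origin and the Hopf threshold r_H can be evaluated directly; a minimal sketch (the function name is ours, and the discriminant (σ + 1)^2 + 4σ(r − 1) comes from the 2 × 2 block of the Jacobian):

```python
import math

# Evaluate the origin's eigenvalues: stable for r < 1, a saddle for r > 1;
# then check the Hopf threshold r_H of Eq. (13.16) for sigma = 10, b = 8/3.
def origin_eigenvalues(r, sigma=10.0, b=8.0/3.0):
    disc = math.sqrt((sigma + 1)**2 + 4 * sigma * (r - 1))
    return -b, (-(sigma + 1) + disc) / 2, (-(sigma + 1) - disc) / 2

assert all(l < 0 for l in origin_eigenvalues(0.5))    # r < 1: attracting
l1, l2, l3 = origin_eigenvalues(28.0)
assert l2 > 0 > l3                                    # r > 1: saddle point

sigma, b = 10.0, 8.0 / 3.0
rH = sigma * (sigma + b + 3) / (sigma - b - 1)
assert abs(rH - 24.74) < 0.005
```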


13.2 Giovanni Mirouh’s Lecture


Imagine an atmosphere in the x − z plane, with temperature Tb on the ground, Tu at the (upper)
edge (the troposphere, essentially space) at a height L. The temperature gradient is,

(T_u − T_b) / L ,   (13.17)

and, for an incompressible fluid23,

∇ · v = 0 ,   (13.18)

and the equation of motion of the fluid – Euler's equation24 with an added viscous term, i.e. the
Navier–Stokes equation – is,

dv/dt = ∂v/∂t + (v · ∇) v = − (1/ρ) ∇P + ν ∇^2 v + g ,   (13.19)

where ν is the kinematic viscosity, and the temperature obeys,

dT/dt = ∂T/∂t + (v · ∇) T = κ ∇^2 T ,   (13.20)

where κ is the thermal diffusivity.

We seek to understand when the atmosphere is convective25 .

13.2.1 Stream function

For a 2D flow of an incompressible fluid we can write a stream function, Ψ, then

∂Ψ
vx = , (13.21)
∂z
∂Ψ
vz = − . (13.22)
∂x

13.2.2 Temperature perturbation

The temperature perturbation relative to the background, θ, is defined by,

T = T_b + Γ z + θ ,   (13.23)

where Γ is the background temperature gradient,

Γ = (T_u − T_b) / L .   (13.24)
We end up with

∂(∇^2 Ψ)/∂t + J (∇^2 Ψ, Ψ) = Ra Pr ∂θ/∂x + Pr ∇^2 (∇^2 Ψ) ,   (13.25)
23. https://en.wikipedia.org/wiki/Incompressible_flow
24. https://en.wikipedia.org/wiki/Euler_equations_(fluid_dynamics)
25. https://en.wikipedia.org/wiki/Rayleigh–Bénard_convection


where the first term on the right is the buoyancy and the second is the diffusion. Similarly,

∂θ/∂t + J (Ψ, θ) = ∂Ψ/∂x + ∇^2 θ ,

then the Rayleigh26 and Prandtl27 numbers are,

Ra = (diffusive thermal-transport timescale) / (convective thermal-transport timescale) = β g Γ L^4 / (ν κ) ,   (13.26)
Pr = (thermal diffusion timescale) / (viscous diffusion timescale) = ν / κ ,   (13.27)

where Ra depends on the weather (through the temperature gradient, Γ, and layer depth, L), g
is the acceleration due to gravity, while the other parameters are properties of the fluid: β is the
thermal expansion coefficient, ν is the kinematic viscosity and κ is the thermal diffusivity.28

13.2.3 The Lorenz model

To obtain the Lorenz model, we linearize and assume a wave-like solution,

Ψ (x, y, z) = Ψ₁ (t) × cos (πz) × sin (k_x x) ,   (13.28)

and

θ (x, y, z) = Ra × θ₁ (t) × cos (πz) × cos (k_x x) .   (13.29)
Now substitute and neglect terms other than J and change variables as above, and we find,

Ẋ = Pr (Y − X) ,   (13.30)
Ẏ = r X − Y ,   (13.31)

where r is the reduced Rayleigh number,

r = k_x^2 Ra / (π^2 + k_x^2)^3 .   (13.32)

X is related to Ψ₁ and Y is related to θ₁. In this case there are convective cells turning over with
one cell in the vertical direction.
Instead, consider two vertical cells,

Ψ (x, y, z) = Ψ₁ (t) × cos (πz) × sin (k_x x) ,   (13.33)

and

θ (x, y, z) = Ra × θ₁ (t) cos (πz) cos (k_x x) + Ra^{−1} θ₂ (t) sin (2πz) .   (13.34)
26. https://en.wikipedia.org/wiki/Rayleigh_number
27. https://en.wikipedia.org/wiki/Prandtl_number
28. http://williamsgj.people.cofc.edu/NonlinearChaos.pdf


Now find

Pr^{−1} Ψ̇₁ = ( k_x / (π^2 + k_x^2) ) θ₁ − ( π^2 + k_x^2 ) Ψ₁ ,   (13.35)
θ̇₁ = −2πk_x Ψ₁ θ₂ + Ra k_x Ψ₁ − ( k_x^2 + π^2 ) θ₁ ,   (13.36)
θ̇₂ = ( πk_x / 2 ) Ψ₁ θ₁ − 4π^2 θ₂ ,   (13.37)

and make the variable substitution,

t′ = ( π^2 + k_x^2 ) t ,   (13.38)
X = ( πk_x / (π^2 + k_x^2) ) Ψ₁ ,   (13.39)
Y = ( πk_x^2 / (π^2 + k_x^2)^3 ) θ₁ ,   (13.40)
Z = ( 2πk_x^2 / (π^2 + k_x^2)^3 ) θ₂ ,   (13.41)

then,

Ẋ = σ (Y − X) ,   (13.42)
Ẏ = r X − Y − X Z ,   (13.43)
Ż = −bZ + X Y ,   (13.44)

where

σ = Pr ,   (13.45)
r = k_x^2 Ra / (π^2 + k_x^2)^3 ,   (13.46)
b = 4π^2 / (π^2 + k_x^2) = 8/3 (for the standard choice k_x^2 = π^2/2) .   (13.47)

b is determined by the size of the convective rolls, so is fixed, and r is again the reduced Rayleigh
number.
• Motion along the Z axis stays on the Z axis. Set X = Y = 0 to see that Z (t) = Z (0) e^{−bt}.

• The Lorenz system is dissipative. To see this, compute the derivative of the volume V in
  parameter space,

  (1/V) dV/dt = ∂Ẋ/∂X + ∂Ẏ/∂Y + ∂Ż/∂Z ,   (13.48)

  which is a “Lie derivative”. Hence

  (1/V) dV/dt = −σ − 1 − b < 0   (13.49)

  because σ > 0 and b > 0. Hence parameter-space volumes decrease with time. Also,

  V (t) = V (0) e^{−(σ+1+b)t}   (13.50)
        = V (0) e^{−41t/3}  (for σ = 10, b = 8/3) ,   (13.51)

  so in one unit of time the volume is divided, roughly, by one million. Convergence is thus
  very fast.


13.2.4 Fixed points

• Ẋ = Ẏ = Ż = 0 has an obvious fixed point at X = Y = Z = 0. There are more.
  Stability requires the Jacobian, evaluated at the origin,

      ( ∂Ẋ/∂X  ∂Ẋ/∂Y  ∂Ẋ/∂Z )     ( −σ  σ   0  )
  J = ( ∂Ẏ/∂X  . . .          )  =  ( r   −1  0  )  .   (13.52)
      ( . . .                  )     ( 0   0   −b )

  The eigenvalues come from the usual,

  det (λI − J) = (λ + b) [ (λ + 1) (λ + σ) − σr ] ,   (13.53)

  with solutions

  λ1 = −b < 0 ,   (13.54)
  λ2,3 = [ − (σ + 1) ± √( (σ + 1)^2 + 4σ (r − 1) ) ] / 2 .   (13.55)

  The only parameter that can vary is r. If any λ > 0 the fixed point is unstable.

  – For stability, i.e. λ2,3 < 0, we require r < 1.
  – If r > 1 then one of λ2,3 is positive and the other negative, i.e. a saddle point, hence the
    origin is unstable.
  – At r = 1 we have a supercritical pitchfork bifurcation.

• Ẋ = 0 implies X = Y and,

  Ẏ = X (r − 1 − Z) ,   (13.56)
  Ż = −bZ + X^2 ,   (13.57)

  then,

  Z1,2 = r − 1 ,   (13.58)
  X1,2 = Y1,2 = ± √(bZ) = ± √(b (r − 1)) .   (13.59)

  Stability comes from J again, where,

      ( −σ            +σ            0            )
  J = ( 1             −1            ∓√(b (r−1))  )  ,   (13.60)
      ( ±√(b (r−1))   ±√(b (r−1))   −b           )

  and the eigenvalues are from

  λ^3 + λ^2 (σ + b + 1) + λ b (σ + r) + 2σb (r − 1) = 0 .   (13.61)

  There are three roots: one is real and two are complex conjugates, λ = α ± i β. (Three real
  roots do not happen here.)


[Figure: regions labelled stable origin, stable fixed points, transient chaos, unstable limit cycles, subcritical Hopf at r_H = 24.74 and strange attractor; boundaries at r = 13.926 and r = 24.06; σ = 10, b = 8/3.]
Figure 31: Bifurcation diagram of the Lorenz system of equations with σ = 10 and b = 8/3.

The roots are stable if ℜ(λ) < 0.
Let

r_H = σ (σ + b + 3) / (σ − b − 1) ,   (13.62)

then

α ≶ 0 according as r ≶ r_H .   (13.63)

In the atmosphere with σ = 10 and b = 8/3, then r_H ≈ 24.74. At r = r_H we have a subcritical
Hopf bifurcation.

• We can now construct the bifurcation diagram (Fig. 31).

  – When r < 1 the origin is the stable fixed point.
  – When 1 < r < 13.926 the system has two stable fixed points (C⁺ and C⁻) but the origin
    is unstable.
  – Between r = 13.926 and r = 24.06 we have transient chaos. The system will start motion
    towards one fixed point then move to the other, perhaps a number of times, then
    eventually settle on one of the fixed points C±.
  – When r > 24.06 the system has a strange attractor. Once inside the attractor, one can
    choose two arbitrarily close points which, after a number of iterations, will be arbitrarily
    far away from each other (within the limits of the attractor). They return to being
    close some time later, and then repeat the cycle.


• We can interpret the outcome in terms of the original system of fluid mechanics equations.
Some example trajectories include:

– σ = 10, b = 8/3, r = 1/2 starting at x_0 = 1, y_0 = 10 and z_0 = 1 converges on (0, 0, 0).


– With r = 10 the point (12, 9, 25) converges on C − at (−4.9, −4.9, 9)
while point (−12, −9, 25) converges on C +
– With r = 19 the point (20, −5, 5) converges on C + at (+6.9, +6.9, 18)
while the point (20, −5, 5.1) converges on C −
– With r = 24.6 the point (0, 0.1, 0) converges on a saddle cycle from the outside
while the point (−6, −6, 24) converges on a saddle cycle from the inside
– When r = 30 the point (0, 0.1, 0) shows the full behaviour of a strange attractor.

• When at C 0 = (0, 0, 0) the heat flow is purely conductive.

• When on C ± there is convection, where the sign is the direction of the convective vortices.

• The chaotic behaviour is similar to that of a boiling fluid.
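The sensitivity to initial conditions can be demonstrated by integrating Eq. (13.1) for two nearby starting points; a minimal sketch using classical fourth-order Runge-Kutta (cf. Sec. 1.4.3; the step size, end time and initial offset are arbitrary choices of ours):

```python
# Two Lorenz trajectories, Eq. (13.1), with sigma = 10, r = 28, b = 8/3,
# integrated with classical RK4; initial conditions differ by 1e-9 in z.
def lorenz(u, sigma=10.0, r=28.0, b=8.0/3.0):
    x, y, z = u
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def rk4_step(u, h):
    def nudge(v, k, s):
        return tuple(vi + s * ki for vi, ki in zip(v, k))
    k1 = lorenz(u)
    k2 = lorenz(nudge(u, k1, h / 2))
    k3 = lorenz(nudge(u, k2, h / 2))
    k4 = lorenz(nudge(u, k3, h))
    return tuple(ui + h / 6 * (a + 2 * p + 2 * q + d)
                 for ui, a, p, q, d in zip(u, k1, k2, k3, k4))

u = (15.0, 8.0, 27.0)
v = (15.0, 8.0, 27.0 + 1e-9)
for _ in range(3000):                 # h = 0.01, so t runs to 30
    u, v = rk4_step(u, 0.01), rk4_step(v, 0.01)

sep = sum((a - c)**2 for a, c in zip(u, v))**0.5
assert sep > 1e-3                     # grown from 1e-9: sensitivity to ICs
assert max(abs(c) for c in u) < 100   # yet each trajectory stays bounded
```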

13.3 Conclusions
We have shown that the Lorenz system, which is a “simple” model of convection, is – in some
parts of the parameter space – chaotic, yet deterministic. The Lorenz system is based on the
Navier-Stokes equations which govern the motion of fluids, and thus chaos is a possible solution
of the Navier-Stokes equations. This was important, historically, because previous theories
about the complexity of chaos required many more dimensions than the three of the Lorenz
system (cf. Landau's work). It is also clear that turbulence really can be deterministic: it need not
be a failure of the equations of motion, i.e. neither need the equations be wrong nor the solutions
wrongly computed.

133
14 Chaos
A definition of chaos:

• Aperiodic long-term behaviour: trajectories do not settle down into fixed points, limit cycles
or similar.

• The trajectories are deterministic solutions of differential equations

• An extreme sensitivity to initial conditions (often represented by the acronym “SIC”)

When trajectories approach a chaotic attractor, usually a fractal, the above characteristics are
generated. The stretching and folding of phase-space volumes creates:

stretching: sensitivity to initial conditions

folding: trajectories are turned into fractals, with aperiodic behaviour

Dissipation is usually responsible for reducing phase space volumes to shapes with smaller (fractal)
dimensions.
Nonlinearity is required to have chaos, but a dynamical system of differential equations must
also have three independent variables (i.e. be 3rd order or higher). For iterative maps, one dimen-
sion is sufficient. There is often underlying order even in a chaotic system. Chaotic does not mean
random – chaos is deterministic!
In the Lorenz system we have both stretching and folding. The Lorenz attractor is a fractal with
dimension about 2.05. However, there is underlying order. Fig. 32 shows how |x_{n+1}| and |x_n| are
related when taking a Poincaré map. The chaos is deterministic.

134
50

45

40

35

30

z 25

20

15

10

0
-20 -15 -10 -5 0 5 10 15 20
x

17

16

15

14
|x|
13

12

11

10
0 10 20 30 40 50
t

17

16

15

14
| xn+1 |
13

12

11

10
10 11 12 13 14 15 16 17
| xn |

Figure 32: Lorenz attractor and map when σ = 10, b = 8/3 and r = 28, starting at (x, y, z) =
(15, 8, 27). The top panel shows a projection of the attractor in the z–x plane. The plane z = 28
is used to create a Poincaré section. |x| through the section is shown in the middle panel, and
|x_{n+1}| vs |x_n| is shown in the bottom panel. Note that the two branches are thin.


Figure 33: The Poincaré map is the relation between x n and P (x n ) = x n+1 .

15 Iterated maps
We now turn our attention to discrete dynamical systems, such as recursion relations and difference
equations. Difference equations are, for example, how all but the simplest differential equations
are solved numerically on computers. Iterations are counted by an integer index, n, and the
trajectory takes discrete values, x_n.

15.1 Poincaré maps


Consider the n-dimensional system,

ẋ = F (x) , (15.1)

and suppose the trajectories repeatedly cross a surface, S, of dimension n − 1. The Poincaré map, P, is defined as a map of S onto itself that links each crossing point to the next,

x n+1 = P (x n ) . (15.2)

Fixed points of a Poincaré map are closed cycles,

P (x) = x . (15.3)

Poincaré sections are the collection of all points where the trajectory intersects S. Roughly speaking,

S = { x ∈ S : x = P(y) , y ∈ S } .   (15.4)
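A Poincaré section like the one in Fig. 32 can be built numerically. The sketch below is my own illustrative code, not from the notes: the RK4 integrator, step size, integration time and the linear interpolation of the crossing point are all assumed choices. It records |x| at upward crossings of the plane z = 28 for the Lorenz system.

```python
# Sketch: Poincaré section of the Lorenz flow through the plane z = 28,
# the construction used for Fig. 32. All numerical choices are illustrative.

def lorenz(s, sigma=10.0, beta=8.0 / 3.0, r=28.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = s
    return (sigma * (y - x), x * (r - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    """One fourth-order Runge-Kutta step."""
    def shift(a, b, f):
        return tuple(ai + f * bi for ai, bi in zip(a, b))
    k1 = lorenz(s)
    k2 = lorenz(shift(s, k1, dt / 2.0))
    k3 = lorenz(shift(s, k2, dt / 2.0))
    k4 = lorenz(shift(s, k3, dt))
    return tuple(si + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def poincare_abs_x(s0=(15.0, 8.0, 27.0), z_plane=28.0, dt=0.01, t_max=50.0):
    """Collect |x| at upward crossings of z = z_plane."""
    xs, s = [], s0
    for _ in range(int(t_max / dt)):
        s_new = rk4_step(s, dt)
        if s[2] < z_plane <= s_new[2]:                  # upward crossing
            f = (z_plane - s[2]) / (s_new[2] - s[2])    # interpolate crossing
            xs.append(abs(s[0] + f * (s_new[0] - s[0])))
        s = s_new
    return xs

section = poincare_abs_x()   # |x| values on the Poincaré section
```

Plotting section[n+1] against section[n] reproduces the thin, almost one-dimensional branches seen in the bottom panel of Fig. 32.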

15.2 One-dimensional maps


A one-dimensional map is defined by,

x_{n+1} = f(x_n) , n ∈ ℕ ,   (15.5)

and such a system can be chaotic, despite only being one-dimensional (recall that three dimen-
sions are required in a continuous system), and can be oscillatory.
Linear stability can be analysed in a similar way to continuous systems. Let x ∗ be a fixed point,

f(x∗) = x∗ ,   (15.6)

and expand the map near the fixed point as a Taylor series. Define,

x_n = x∗ + η_n ,

then

x_{n+1} = x∗ + η_{n+1}   (15.7)
        = f(x∗) + (df/dx)|_{x∗} η_n + O(η²)   (15.8)
        = x∗ + (df/dx)|_{x∗} η_n + O(η²) .   (15.9)

The linearized map is then,

η_{n+1} = (df/dx)|_{x∗} η_n + O(η²) .   (15.10)

The multiplier,

λ = (df/dx)|_{x∗} = f′(x∗) ,   (15.11)

is the eigenvalue of the system.

• |λ| = |f′(x∗)| < 1 means the map is linearly stable.

• |λ| = |f′(x∗)| > 1 means the map is unstable.

• |λ| = |f′(x∗)| = 1 means that we need to investigate O(η²).
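As a quick numerical check of this criterion (a sketch: the map x_{n+1} = cos x_n, whose cobweb appears in Fig. 35, and the tolerances are illustrative choices), the fixed point of cos x has |f′(x∗)| = |sin x∗| < 1 and is therefore linearly stable:

```python
import math

# Iterate x_{n+1} = cos(x_n) towards its fixed point x* = cos(x*).
x = 1.0
for _ in range(200):
    x = math.cos(x)
xstar = x                       # ~ 0.739085

lam = -math.sin(xstar)          # multiplier lambda = f'(x*), |lam| ~ 0.674

# A small perturbation eta shrinks by a factor ~ lambda in one step.
eta0 = 1e-4
eta1 = math.cos(xstar + eta0) - xstar
print(xstar, lam, eta1 / eta0)
```

The ratio eta1/eta0 matches λ = f′(x∗) to within the quadratic correction, exactly as the linearized map (15.10) predicts.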

15.3 Cobweb diagrams


A “cobweb” plot is one of x_{n+1} vs x_n: the curve x_{n+1} = f(x_n) is drawn together with the diagonal x_{n+1} = x_n, and successive iterates are traced by stepping vertically to the curve and horizontally to the diagonal.
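The cobweb construction is easy to code. This sketch (plotting omitted; the function name is my own) records the alternating vertical and horizontal steps for f(x) = sin x starting at x_0 = 1, as in Fig. 34:

```python
import math

def cobweb(f, x0, steps):
    """Return the (x, y) corner points of a cobweb path for the map f."""
    pts = [(x0, 0.0)]
    x = x0
    for _ in range(steps):
        y = f(x)
        pts.append((x, y))   # vertical step to the curve x_{n+1} = f(x_n)
        pts.append((y, y))   # horizontal step to the diagonal x_{n+1} = x_n
        x = y
    return pts

path = cobweb(math.sin, 1.0, 20)
```

Joining the points in order gives the staircase of Fig. 34, slowly creeping towards the fixed point at x = 0.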



Figure 34: Cobweb diagram of f (x) = sin x starting with x 0 = 1.


Figure 35: Cobweb diagram of f (x) = cos x starting with x 0 = −0.45π.



Figure 36: Cobweb diagram of f (x) = x−sin x−1.2 starting with x 0 = 9.5. The bottleneck, or “ghost”,
starting around x ∼ 5 is obvious because of the large number of steps required to traverse this
section.


Figure 37: As Fig. 36 with f (x) = x − sin x − 1. There is now a fixed point where f (x) = x right where
the bottleneck was, and the cobweb curve settles there.


15.4 Lyapunov exponents


Chaotic motion has great sensitivity to initial conditions, in the sense that initially neighbouring
trajectories separate exponentially fast, on average. Given a map,

x n+1 = f (x n ) , (15.12)

consider a point x_0 and its neighbouring point x_0 + δ_0, where the initial distance is very small, δ_0 ≪ 1. The separation between the two trajectories is, after n iterations, the difference between,

x_n = f(f(. . . f(x_0))) = f^(n)(x_0) ,   (15.13)

and

x n + δn = f (n) (x 0 + δ0 ) . (15.14)

We expect, on average, a chaotic system to diverge exponentially,

|δ_n| ≈ |δ_0| e^{nλ} ,   (15.15)

where λ is the Lyapunov exponent. We can write λ at each step n, λ_n, as,

λ_n = lim_{δ_0→0} (1/n) ln | δ_n / δ_0 |   (15.16)
    = lim_{δ_0→0} (1/n) ln | ( f^(n)(x_0 + δ_0) − f^(n)(x_0) ) / δ_0 |   (15.17)
    = (1/n) ln | ( d f^(n)/dx )|_{x=x_0} | .   (15.18)

Then note that, by the chain rule,

d f^(n)(x_0)/dx = [ d f^(n)(x_0) / d f^(n−1)(x_0) ] · [ d f^(n−1)(x_0) / dx ]   (15.19)
               = [ d f(f^(n−1)(x_0)) / d f^(n−1)(x_0) ] · [ d f^(n−1)(x_0) / dx ]   (15.20)
               = [ d f(x_{n−1}) / d x_{n−1} ] · [ d f^(n−1)(x_0) / dx ]   (15.21)
               = f′(x_{n−1}) · d f^(n−1)(x_0)/dx ,   (15.22)

and repeat n times,

λ_n ≈ (1/n) ln | ∏_{i=0}^{n−1} f′(x_i) |   (15.23)
    = (1/n) ∑_{i=0}^{n−1} ln | f′(x_i) | ,   (15.24)

and then the Lyapunov exponent is found by taking the limit n → ∞. Note that in the product, ∏_{i=0}^{n−1}, the derivatives are taken at each x_i, not at x_0 (except when i = 0).


Definition 5.
Given a map, x n+1 = f (x n ), its Lyapunov exponent for the orbit starting at x 0 is,
λ = lim_{n→∞} (1/n) ∑_{i=0}^{n−1} ln | f′(x_i) | .   (15.25)

Note that λ is the same whatever the x 0 as long as x 0 is in the basin of the attractor.

• Fixed points and closed cycles have λ < 0.

• Chaotic attractors have λ > 0.
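Definition 5 can be evaluated directly. For the logistic map of Sec. 15.8 at r = 4 the exact exponent is known to be ln 2; this sketch (the iteration count and discarded transient are illustrative choices) reproduces it:

```python
import math

def logistic_lyapunov(r, x0=0.3, n=100000, transient=100):
    """Estimate Eq. (15.25) for the logistic map x -> r x (1 - x)."""
    x = x0
    for _ in range(transient):            # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))   # ln|f'(x_i)|
        x = r * x * (1.0 - x)
    return total / n

lam = logistic_lyapunov(4.0)   # expect ~ ln 2 ~ 0.693
```

The positive exponent confirms the r = 4 logistic map has a chaotic attractor.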

15.5 Lyapunov exponents in continuous systems


Consider a flow, x(t), in phase space given by,

ẋ = dx/dt = F(x) ,   (15.26)

then the distance between two points, x(t) and x(t) + ε(t), diverges with time. To first order,

d(x + ε)/dt = F(x) + M(t) ε ,   (15.27)

where M is the Jacobian matrix,

M(t) = ( ∂F_i / ∂x_j )|_{x(t)} ,   (15.28)

where i and j represent the vector components, thus,

dε/dt = M(t) ε .   (15.29)

15.5.1 Constant systems

If M is time-independent, i.e. constant, and we assume that the eigenvectors are the phase space unit vectors, we have a linear dependence of ε on time,

dε/dt = M ε ,   (15.30)

which is trivially solved and has solutions like

ε = e^{Mt} ε(0) .   (15.31)

In the coordinate system where the eigenvectors are parallel to the phase space unit vectors we then have

e^{Mt} = L(t) = diag( e^{λ₁t}, e^{λ₂t}, e^{λ₃t} ) .   (15.32)

The flow of ε(t) is then dominated by the largest of the three eigenvalues, λ_i, the sign of which determines whether ε(t) grows or decays asymptotically with time.


Figure 38: Growth of a perturbation ε(t).

15.5.2 Time-dependent systems

Similar arguments to the above can be made in the case of a system of equations with time-dependent eigenvalues and time-independent eigenvectors. In such systems we have M = M(t), then we should consider small displacements in the x, y and z directions corresponding to the (constant) eigenvectors of M,

d/dt (δx, δy, δz)ᵀ = diag(A, B, C) (δx, δy, δz)ᵀ ,   (15.33)

where A, B and C are time-dependent eigenvalues and are functions of x(t). The general solution is again that of a first-order differential equation,

δX(t) = δX(0) exp[ ∫₀ᵗ A(x(t′)) dt′ ] ,   (15.34)

where t′ is a dummy integration variable. Rearranging, taking logarithms, and dividing by time,

(1/t) ln | δX(t) / δX(0) | = (1/t) ∫₀ᵗ A(x(t′)) dt′ .   (15.35)

The right hand side is just the time average of A, which we denote ⟨A⟩. If we assume that, over long enough times, this represents the long-term average of A, we thus have one of the three Lyapunov exponents,

⟨A⟩ = lim_{t→∞} (1/t) ln | δX(t) / δX(0) | .   (15.36)


Figure 39: 516, 552 and 576 hPa geopotential heights as forecast by an ensemble of Global Forecasting System models (data from www.wetterzentrale.de) at times 0, 48, 96, 144, 192, 240, 288 and 336 h. As time progresses the forecasts become more scattered, showing the chaos inherent in weather. After about 192 h the system is truly chaotic, so the Lyapunov time is about 192 h, corresponding to a Lyapunov exponent 1/192 ≈ 5 × 10⁻³ h⁻¹.

More detailed analysis can be done for the general case of time-dependent eigenvalues and eigen-
vectors.

15.5.3 Lyapunov time

We can define the Lyapunov exponent, λ, in a rough sense, as,


|δ| ≈ |δ_0| e^{λt} ,

hence define a time τ such that the system diverges by a factor e,

λ = 1/τ , (15.37)

and τ is the Lyapunov time (Fig. 39). In the context of the above, where several exponents can be calculated from the Jacobian, the λ used is the largest of the Lyapunov exponents because this dominates the flow. τ is the characteristic timescale on which trajectories diverge, and hence on which a system becomes chaotic; when running computer simulations, it is the maximum timescale on which deterministic forecasts are possible. This is relevant for weather forecasting (Fig. 39) and for determining whether orbits, e.g. of planets in the solar system, are stable²⁹.

15.6 p-cycle
Suppose f has a stable p-cycle containing x 0 ,

f (p ) (x 0 ) = x 0 , (15.38)

where x 0 is a fixed point of,

g (x) = f (p ) (x) . (15.39)


29
https://en.wikipedia.org/wiki/Stability_of_the_Solar_System and https://en.wikipedia.org/wiki/Orbital_resonance


• Because the cycle is stable, |g′(x_0)| < 1 hence ln |g′(x_0)| < 0.

• But g′(x_0) = ∏_{i=0}^{p−1} f′(x_i).

• The Lyapunov exponent is (the sum repeats every p terms)

λ = lim_{n→∞} (1/n) ∑_{i=0}^{n−1} ln | f′(x_i) |   (15.40)
  = (1/p) ∑_{i=0}^{p−1} ln | f′(x_i) |   (15.41)
  = (1/p) ln | ∏_{i=0}^{p−1} f′(x_i) |   (15.42)
  = (1/p) ln | g′(x_0) |   (15.43)
  < 0 .   (15.44)

15.7 Tent map


The tent map is defined by,

f(x) = { r x ,        0 ≤ x ≤ 1/2 ,
       { r (1 − x) ,  1/2 ≤ x ≤ 1 ,   (15.45)

with 0 ≤ x ≤ 1 and 0 ≤ r ≤ 2 (Fig. 40).


Figure 40: The tent map.


• f′(x) = ±r so |f′(x)| = r.

• The Lyapunov exponent is

λ = lim_{n→∞} (1/n) ∑_{i=0}^{n−1} ln | f′(x_i) |
  = lim_{n→∞} (1/n) × n ln r
  = ln r .   (15.46)

• When λ < 0 we have r < 1, and the map is well behaved.

• When λ > 0 we have r > 1, and the map is chaotic, as shown in Fig. 41.
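Since |f′(x)| = r everywhere (away from the corner at x = 1/2), every term in the sum of Eq. (15.25) equals ln r and the numerical estimate converges immediately. A small sketch confirming λ = ln r:

```python
import math

def tent_lyapunov(r, x0=0.2, n=10000):
    """Average ln|f'(x_i)| along a tent-map orbit."""
    x, total = x0, 0.0
    for _ in range(n):
        deriv = r if x <= 0.5 else -r        # f'(x) = +r or -r
        total += math.log(abs(deriv))
        x = r * x if x <= 0.5 else r * (1.0 - x)
    return total / n

print(tent_lyapunov(0.9), tent_lyapunov(1.5))   # ln 0.9 < 0 < ln 1.5
```

The sign of λ flips from negative to positive as r passes through 1, matching the onset of chaos in Fig. 41.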

15.8 Logistic map


The logistic map is the time-discrete version of the logistic equation for population growth,

x_{n+1} = f(x_n) ,   (15.47)
f(x) = r x (1 − x) ,   (15.48)

where 0 ≤ x ≤ 1 and 0 ≤ r ≤ 4 (Fig. 42). The fixed points are at f(x) = x, hence,

r x² + (1 − r) x = 0 ,   (15.49)

which implies either x∗ = 0 or x∗ = 1 − 1/r (if r > 1).
Stability comes from,

f′(x) = d[r x (1 − x)]/dx = r (1 − 2x) ,   (15.50)

so at x∗ = 0,

f′(0) = r ,   (15.51)

then x∗ = 0 is stable if r < 1 and unstable if r > 1.
Similarly, at x∗ = 1 − 1/r,

f′(1 − 1/r) = 2 − r ,   (15.52)

hence x∗ = 1 − 1/r is stable if r < 3 and unstable if r > 3.
If r > 3 then g(x) = f^(2)(x) = f(f(x)) has two extra fixed points (proof is given below),

x∗_{2,3} = [ r + 1 ± √((r − 3)(r + 1)) ] / (2r) ,   (15.53)

and then,

f(x∗_2) = x∗_3 ,   (15.54)
f(x∗_3) = x∗_2 ,   (15.55)
f(f(x∗_{2,3})) = x∗_{2,3} .   (15.56)
This is period doubling³⁰: two iterations and the map is back onto itself (Fig. 43). There are further cycles as r increases (Table 2 and Fig. 44), with a transition to chaos when r ≳ 3.569946. We study this further below.
30
https://www.youtube.com/watch?v=M9Aud7dyqx0
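Equation (15.53) is easy to verify numerically; here with r = 3.2 as an illustrative value inside the period-2 window 3 < r < 3.569…:

```python
import math

r = 3.2
f = lambda x: r * x * (1.0 - x)

disc = math.sqrt((r - 3.0) * (r + 1.0))
x2 = (r + 1.0 + disc) / (2.0 * r)   # ~ 0.7995
x3 = (r + 1.0 - disc) / (2.0 * r)   # ~ 0.5130

print(f(x2), f(x3))   # each point maps to the other: a 2-cycle of f
```

The two points satisfy Eqs. (15.54)–(15.56) to machine precision, i.e. they form the stable 2-cycle seen in the r = 3.2 cobweb of Fig. 43.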



Figure 41: Bifurcation diagrams of x n vs r for the tent map.



Figure 42: The logistic map.

n    r_n
1    3
2    1 + √6 = 3.449…
3    3.54409…
4    3.5644…
5    3.568759…
∞    3.569946…

Table 2: Values r_n of the parameter r at which the 2ⁿ-cycles of the logistic map appear.



Figure 43: Logistic map cobweb diagrams with r = 0.5, r = 2, r = 3.2 (period doubling) and r = 3.6
(chaos).


Figure 44: Bifurcation diagram of x n vs r for the logistic map.


• Examples: go to http://thewessens.net/ClassroomApps/Main/logistic.html

15.9 Hénon map


The Hénon map is two-dimensional,

x_{n+1} = y_n + 1 − a x_n² ,   (15.57)
y_{n+1} = b x_n ,   (15.58)

with, in the “classical” map, parameters a = 1.4 and b = 0.3. The map can be written, equivalently, as first an area-conserving bend,

(x₁, y₁) = (x, 1 − a x² + y) ,   (15.59)

then a contraction in the x-direction,

(x₂, y₂) = (b x₁, y₁) ,   (15.60)

and a reflection in the line y = x,

(x₃, y₃) = (y₂, x₂) .   (15.61)

The three steps are shown for the first four iterations of the map in Fig. 45.
The map can also be written in one dimension by substituting y_n = b x_{n−1},

x_{n+1} = 1 − a x_n² + b x_{n−1} ,   (15.62)

which resembles the Fibonacci sequence,

x_{n+1} = x_n + x_{n−1} .   (15.63)

• The classical map (a = 1.4, b = 0.3) has a fixed point,

x = (√609 − 7) / 28 ≈ 0.631354477 ,   (15.64)
y = 3 (√609 − 7) / 280 ≈ 0.189406343 .   (15.65)

• The map is invertible for b ≠ 0,

x_n = (1/b) y_{n+1} ,   (15.66)
y_n = x_{n+1} − 1 + (a/b²) y_{n+1}² .   (15.67)

• The map is dissipative in the range −1 < b < 1, and the Jacobian is,

J = ( −2a x_n   1 )
    (   b       0 ) ,   (15.68)

with det J = −b < 0 (for b > 0), so |det J| = |b| < 1 and phase-space areas shrink.
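The quoted fixed point can be checked by substituting it back into the map (a quick numerical sketch):

```python
import math

a, b = 1.4, 0.3                                  # classical parameters
xs = (math.sqrt(609.0) - 7.0) / 28.0             # Eq. (15.64)
ys = 3.0 * (math.sqrt(609.0) - 7.0) / 280.0      # Eq. (15.65)

x_new = ys + 1.0 - a * xs * xs                   # one Henon step
y_new = b * xs
print(x_new - xs, y_new - ys)   # both ~ 0: it is a fixed point
```

Note that y = b x at the fixed point, which is where the factor 3/280 = 0.3/28 in Eq. (15.65) comes from.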



Figure 45: The first four iterations of the classical Hénon map, starting from a uniform grid of points in the range −1 ≤ x₀, y₀ ≤ 1.



Figure 46: Hénon map bifurcation diagram.

• There is a trapping region for given a and b.

• Some trajectories can escape to infinity, unlike (say) the Lorenz system.

• Fig. 46 shows the bifurcation diagram for x n as a function of a. Many features, such as period
doubling, will be familiar to you from simpler maps.

Further reading:

• https://www.math.uu.se/digitalAssets/562/c_562622-l_1-k_tucker_slides.pdf

• http://www.cmsim.eu/papers_pdf/october_2013_papers/5_CMSIM-Journal_2013_Aybar_
etal_4_529-538.pdf

15.10 Quadratic map and the Mandelbrot set


The quadratic map is,

x_{n+1} = a_2 x_n² + a_1 x_n + a_0 ,   (15.69)

where a i are constants and i = 0, 1, 2. The logistic map is a special case. Sometimes the map is
analytically soluble.
2
• Binary trees of height ≤ n are given by the map y n = y n−1 + 1 with y 0 = 1 (Aho and Sloane
¥ n¦
1973, https://www.fq.math.ca/11-4.html). This has the analytic solution y n = c 2
where bxc is the “floor” of x, i.e. the largest integer that is smaller than or equal to x, and
c ≈ 1.50283.

• The closest sum of n unit fractions to 1 that is still less than 1 is given by,

S_n = ∑_{i=1}^{n} 1/z_i ,   (15.70)


where z_n = z_{n−1}² − z_{n−1} + 1 and z_1 = 2. This is Sylvester's sequence, and has terms 2, 3, 7, 43, 1807, . . . . The closed solution is,

z_n = ⌊ d^(2ⁿ) + 1/2 ⌋ ,   (15.71)

where d ≈ 1.2640.
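Both claims are easy to check with exact rational arithmetic; a sketch using Python's fractions module:

```python
from fractions import Fraction

z, terms, s = 2, [], Fraction(0)
for _ in range(5):
    terms.append(z)
    s += Fraction(1, z)          # partial sum S_n of Eq. (15.70)
    z = z * z - z + 1            # Sylvester recurrence

print(terms)      # [2, 3, 7, 43, 1807]
print(float(s))   # just below 1
```

After five terms the partial sum already sits within about 3 × 10⁻⁷ of 1, from below.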

• The map

x_{n+1} = x_n² + c ,   (15.72)

where x_n are real, which is not, in general, solvable, is the real version of the Mandelbrot set map,

z_{n+1} = z_n² + C ,   (15.73)

where z_n are complex, the starting point is z_0 = 0 and C is the complex location being tested. Points C for which z_n does not tend to infinity are defined to be part of the Mandelbrot set.

• If |C| > 2 the series diverges. The first iterations are

z_0 = 0   (15.74)
z_1 = C   (15.75)
z_2 = C² + C .   (15.76)

Consider the case |z| ≥ |C| > 2. The triangle inequality gives us,

|z² + C| + |−C| ≥ |z²| ,   (15.77)

hence

|z² + C| ≥ |z²| − |−C|   (15.78)
         ≥ |z²| − |C| .   (15.79)

Now

|z| ≥ |C| ,   (15.80)

hence

−|z| ≤ −|C| ,   (15.81)

so,

|z² + C| ≥ |z²| − |z|   (15.82)
         = |z| (|z| − 1)   (15.83)
         ≥ |z| (|C| − 1)   (15.84)
         ≥ k |z| ,   (15.85)


Figure 47: Example zoom-in on the Mandelbrot set.

where k = |C | − 1 is a real number. We have

|z 1 | = |C | (15.86)

hence after n iterations we have,

|z n | ≥ k n |C | . (15.87)

If k > 1 this blows up to infinity, corresponding to k = |C | − 1 > 1 i.e. |C | > 2.

• Points outside the set, but with |C | < 2, also blow up to infinity.

• The boundary of the Mandelbrot set is a fractal with Hausdorff dimension 2 (Shishikura 1994, https://www.jstor.org/stable/121009).

• When plotted, the colour usually encodes the first n for which |z_n| > 2 (as shown above, points which diverge beyond 2 are not members of the set). Fig. 47 shows some examples of parts of the set.
¡ ¢
• The Julia set is similar except that C is a (complex) constant and z_0 = x + iy varies. The Julia set is then the boundary between the points that diverge to infinity and those that do not.


• An excellent high-resolution movie is at https://www.youtube.com/watch?v=PD2XgQOyCCk

• There is a ton of information on the web, e.g.


http://www.alunw.freeuk.com/mandelbrotroom.html
https://en.wikipedia.org/wiki/Julia_set
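The definition translates directly into an escape-time membership test. The sketch below is illustrative: the iteration cap of 100 is an arbitrary cut-off, and points near the boundary of the set need far more iterations to classify reliably.

```python
def in_mandelbrot(C, max_iter=100):
    """Escape-time test: iterate z -> z^2 + C from z_0 = 0."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + C
        if abs(z) > 2.0:      # |z| > 2 guarantees divergence (shown above)
            return False
    return True               # no escape seen: C is (probably) in the set

print(in_mandelbrot(-1 + 0j), in_mandelbrot(1 + 0j))
```

Recording the iteration at which |z_n| first exceeds 2, instead of a boolean, gives the coloured pictures of Fig. 47.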

15.11 Other chaotic maps


There are many other maps which exhibit chaos, a long list can be found at https://en.wikipedia.
org/wiki/List_of_chaotic_maps.

16 Universality and chaos
Experience tells us that there are several ways chaos can be approached. Consider the two-dimensional map on x = (x, y) with,

x_{n+1} = f(x_n, y_n) ,   (16.1)
y_{n+1} = h(x_n, y_n) ,   (16.2)

with fixed points x∗ = (x∗, y∗) such that

x∗ = (x∗, y∗) = ( f(x∗, y∗), h(x∗, y∗) ) .   (16.3)

If we let δx i = x i − x i −1 then an unstable system has,

|δx 1 | ≤ |δx 2 | ≤ |δx 3 | . . . , (16.4)

while a stable system has

|δx 1 | ≥ |δx 2 | ≥ |δx 3 | . . . . (16.5)

A linear analysis gives,

( x∗ + δx )   ( x∗ )       ( δx )
( y∗ + δy ) = ( y∗ ) + A ( δy ) + O(δx², δy², δx δy) ,   (16.6)

where

A = ( ∂f/∂x   ∂f/∂y )
    ( ∂h/∂x   ∂h/∂y ) ,   (16.7)

which satisfies an eigenvalue equation,

A v_i = λ_i v_i .   (16.8)

If |λ_i| > 1 we have instability, if |λ_i| < 1 stability: remember λ is, generally, complex, so you must consider the magnitude. There are three general cases.
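As a concrete 2 × 2 example, this sketch applies the eigenvalue test to the Hénon map of Sec. 15.9 at its fixed point, solving the characteristic polynomial λ² − tr(A)λ + det(A) = 0 by hand rather than with a linear-algebra library:

```python
import math

a, b = 1.4, 0.3
xs = (math.sqrt(609.0) - 7.0) / 28.0    # fixed point x*, Eq. (15.64)

tr = -2.0 * a * xs      # trace of the Jacobian [[-2 a x*, 1], [b, 0]]
det = -b                # determinant
disc = math.sqrt(tr * tr - 4.0 * det)
lam1 = (tr + disc) / 2.0
lam2 = (tr - disc) / 2.0

print(lam1, lam2)   # one eigenvalue has |lambda| > 1: an unstable point
```

One eigenvalue lies inside the unit circle and one outside, so the fixed point is a saddle: nearby orbits are attracted along one direction and repelled along the other.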

16.1 Real eigenvalues with λi > 1


This is the intermittent route to chaos. We can write λ = 1 + ε where ε ∈ ℝ and ε > 0. In this case, stable, attracting periodic orbits, which exist for λ < λ_c, disappear as λ increases past λ_c. This can happen, e.g., with the Lorenz attractor: regular motion is interrupted by sporadic bursts of chaotic behaviour. At the transition point the phase-space extent of the attractor changes suddenly, leading to chaos.
Near λ ≈ λ_c the length of time spent in the stable orbits scales as ∼ (λ − λ_c)^{−1/2} – is this familiar to you?


16.2 Real eigenvalues with λ_i < −1


This is the period-doubling route to chaos. We can write λ = −(1 + ε) where ε ∈ ℝ and ε > 0.
As an example consider a one-dimensional map,

x_{n+1} = f(x_n) ,   (16.9)
x∗ = f(x∗) ,   (16.10)
λ = f′(x∗) .   (16.11)

• If λ = −1 + ε the map is stable.

• If λ = −1 the map is a two-cycle, a marginal case: never shrinking, never growing.

• If λ = −1 − ε the map is unstable.

16.3 Complex eigenvalues


This is the quasiperiodic route to chaos. We can write λ = α + iβ = (1 + ε) e^{iγ} where α, β, γ, ε ∈ ℝ and ε > 0. Each application of the map amplifies and rotates δx.

16.4 Unimodal maps


Unimodal maps have a “U-turn” in them: a single smooth maximum. They all have the same cascade route to chaos as the logistic map. The “u” shape generates the common feature because, near the maximum x_m where f′(x_m) = 0, the map is of the form,

f(x_m + δx) = f(x_m) + ½ f″(x_m) (δx)² + O(δx³) .   (16.12)

If the maximum is also a fixed point, that fixed point is superstable,

x∗ = f(x∗) ,   (16.13)
f′(x∗) = 0 ,   (16.14)
x∗ = x_m .   (16.15)

This is the point in the f(x_n) vs x_n plot where the map curve crosses the straight line f(x_n) = x_n at its maximum.
Unimodal maps follow a sequence of period doubling, eventually becoming chaotic. We consider the logistic map as an example, but any similarly-shaped function has similar behaviour.
Unimodal maps follow a sequence of period doubling, eventually becoming chaotic. We con-
sider the logistic map as an example, but any similarly-shaped function has similar behaviour.

16.5 The logistic map as a unimodal map


Consider f(x) and f^(2)(x) = f(f(x)) of the logistic map,

f(x) = r x (1 − x) .   (16.16)

• Fixed points are at f(x) = x, i.e.,

x = r x (1 − x)   (16.17)
0 = r x² + (1 − r) x   (16.18)
  = x (r x + 1 − r) ,   (16.19)


i.e. a trivial fixed point at,

x∗ = 0 , (16.20)

and another at,


x∗ = 1 − 1/r ,   (16.21)
which can only exist if r ≥ 1.

• The eigenvalue is,

λ = f′(x∗)   (16.22)
  = r − 2r x∗ ,   (16.23)

which at x∗ = 0 gives,

λ = r ,   (16.24)

hence this point is stable if r < 1. At x∗ = 1 − 1/r we have,

λ = r − 2r (1 − 1/r) = 2 − r .   (16.25)

This is stable if −1 < λ = 2 − r < +1, i.e.,

1 < r < 3 .   (16.26)

• At r = 3 we have a period-doubling bifurcation: there are now points where

f^(2)(x∗) = x∗ ,   (16.27)

i.e.,

f(f(x)) − x = r [r x (1 − x)] (1 − [r x (1 − x)]) − x = 0 .   (16.28)

Two of the roots are the previously found x∗ = 0 and x∗ = 1 − 1/r. We can thus immediately divide by x,

r [r (1 − x)] (1 − [r x (1 − x)]) − 1 = 0 ,   (16.29)

multiply out,

r [r − r x] (1 − [r x − r x²]) − 1 = 0   (16.30)
r² (1 − x) (1 − r x + r x²) − 1 = 0   (16.31)
r² (1 − r x + r x² − x + r x² − r x³) − 1 = 0   (16.32)
r² (1 − (1 + r) x + 2r x² − r x³) − 1 = 0   (16.33)

and divide by −r³ to give us a cubic for the roots,

x³ − 2x² + (1/r + 1) x − (1/r)(1 − 1/r²) = 0 .   (16.34)


We now factor out the other known root,

(x − 1 + 1/r)(a x² + b x + c) = 0 ,   (16.35)

hence

(x − 1 + 1/r) a x² = a x³ + (1/r − 1) a x²   (16.36)
(x − 1 + 1/r) b x = b x² + (1/r − 1) b x   (16.37)
(x − 1 + 1/r) c = c x + (1/r − 1) c ,   (16.38)

and then immediately, by equating powers of x³,

a = 1 ,   (16.39)

then from powers of x²,

1/r − 1 + b = −2 ,   (16.40)

hence

b = −1 − 1/r .   (16.41)

Powers of x give,

−(1/r + 1)(1/r − 1) + c = 1/r + 1   (16.42)

c = (1/r + 1)(1 + (1/r − 1))   (16.43)
  = (1/r)(1/r + 1) .   (16.44)

Hence,

(x − 1 + 1/r) ( x² − (1 + 1/r) x + (1/r)(1/r + 1) ) = 0   (16.45)

and because x − 1 + 1/r = 0 is a root,

x² − (1 + 1/r) x + (1/r)(1/r + 1) = 0   (16.46)

gives the other solutions,

x = [ (1 + 1/r) ± √( (1 + 1/r)² − (4/r)(1/r + 1) ) ] / 2   (16.47)
  = (1/2) [ 1 + 1/r ± √( ((r + 1)² − 4 − 4r) / r² ) ]   (16.48)
  = (1/2) [ 1 + 1/r ± (1/r) √( r² + 2r + 1 − 4 − 4r ) ]   (16.49)
  = (1/2) [ 1 + 1/r ± (1/r) √( r² − 2r − 3 ) ]   (16.50)
  = (1/2) [ 1 + 1/r ± (1/r) √( (r − 3)(r + 1) ) ] .   (16.51)

We can thus only have real solutions for r > 3, because the discriminant (under the square root) must be positive. At r = 3 we see period doubling.

• It is then possible to compute whether this point is an attractor or repeller by computing the
appropriate derivatives (exercise for the student!).

16.6 Renormalization
We can show that period doubling will repeat by renormalizing our co-ordinates near each “u”
in the unimodal map. All we need to do is flip the graph and scale the co-ordinates to map the
upside-down “u” back onto the original “u”. Then the process repeats.

• Let x_m be the x coordinate corresponding to the maximum of the previous “u”.

• Shift the co-ordinate system, x → x + ∆x, so that the new centre is at the centre of the “u”.

• Multiply and shift f (2) (x) → α f (2) ([x + ∆x] /α)+∆y = α f (2) (x/α, R n ), where R n is the location
of the nth bifurcation, so that “u” of the map f (2) (x) sits on top of where the “u” of the original
map f (x) was.

• The analysis for the next bifurcation, f (4) (x), is now identical other than the scaling.

• For more information, see http://www.cmp.caltech.edu/~mcc/Chaos_Course/Lesson16/


RNG.pdf and https://arxiv.org/abs/1807.09517

• The limit,

lim_{n→∞} αⁿ f^(2ⁿ)( (x + ∆x)/αⁿ , R_n ) = g(x) ,   (16.52)

is a function that is universal and depends only on the behaviour at x ≈ 0. Feigenbaum showed in the 1970s that α = −2.5029 . . . .

• The locations of the bifurcations, r_1, r_2, etc., tend to

lim_{n→∞} (r_n − r_{n−1}) / (r_{n+1} − r_n) = 4.669 . . .   (16.53)


which is the Feigenbaum ratio. All “quadratic-like”, i.e. one-humped maps with non-zero
second derivative at the peak, have the same number. This was shown by May, Feigenbaum
and others in the 1970s and early 1980s, and at the end of the 1980s a full analytical proof
was given by Dennis Sullivan.
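The ratio can be estimated from the bifurcation locations in Table 2 (a sketch; the table's digits are truncated, so the estimates are only approximate):

```python
import math

# r_n values from Table 2
r = [3.0, 1.0 + math.sqrt(6.0), 3.544090, 3.564407, 3.568759]

deltas = [(r[n] - r[n - 1]) / (r[n + 1] - r[n]) for n in range(1, len(r) - 1)]
print(deltas)   # approaches 4.669...
```

Even with only three ratios the sequence is already converging towards the Feigenbaum constant.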

• The bifurcation regions are fractals, and many comparisons can be made between the Man-
delbrot set and the logistic map bifurcation diagram (Fig. 49). The logistic map is,

x n+1 = r x n (1 − x n ) (16.54)

and the Mandelbrot map is,

z n+1 = z n2 + c . (16.55)

Let

z n = a + bx n , (16.56)

hence also

z n+1 = a + bx n+1 . (16.57)

Then

z_n² = a² + 2ab x_n + b² x_n²   (16.58)

and

z_{n+1} = a + br x_n (1 − x_n)   (16.59)
        = a + br x_n − br x_n² .   (16.60)

We then have to choose a and b such that z_{n+1} = z_n² + c,

z_{n+1} = a + br x_n − br x_n²   (16.61)
        = a² + 2ab x_n + b² x_n² + c .   (16.62)

This requires that the coefficients of x_n², x_n¹ and x_n⁰ are equal,

−br = b2 (16.63)
br = 2ab (16.64)
a2 + c = a (16.65)

hence

b = −r , (16.66)
r
a = , (16.67)
2
r³ r´
c = a (1 − a) = 1 − . (16.68)
2 2


Thus the map between the logistic and Mandelbrot set is,

z_n = (r/2)(1 − 2x_n)   (16.69)

with

c = (r/2)(1 − r/2) .   (16.70)

Application of this map gives the one-to-one scaling seen in Fig. 49, and indeed the bifurcation diagram of the Mandelbrot set along the real axis is identical to a suitably scaled (by Eq. 16.69) logistic map bifurcation diagram.
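The correspondence can be checked numerically (a sketch; r = 3.2 and x_0 = 0.4 are arbitrary test values, and since the identity is algebraic it holds for any choice):

```python
r, x = 3.2, 0.4
c = (r / 2.0) * (1.0 - r / 2.0)       # Eq. (16.70)
z = (r / 2.0) * (1.0 - 2.0 * x)       # Eq. (16.69)

err = 0.0
for _ in range(100):
    x = r * x * (1.0 - x)             # logistic step
    z = z * z + c                     # Mandelbrot-map step
    err = max(err, abs(z - (r / 2.0) * (1.0 - 2.0 * x)))

print(err)   # ~ 0: the two orbits stay in exact correspondence
```

The two orbits track each other to machine precision, confirming that the real quadratic map and the logistic map are the same dynamics in different coordinates.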

• See also https://www.youtube.com/watch?v=gaOKAtlukNM and https://en.wikipedia.


org/wiki/Buddhabrot#Relation_to_the_logistic_map.

16.7 Lyapunov exponent


Fig. 50 shows the bifurcation diagram and Lyapunov exponent of the logistic map. Note the re-
gions which are chaotic have positive Lyapunov exponent, while the regions which have n solu-
tions (where n is 1, 2, 4, . . . ) have negative exponent, and the exponent is zero at the bifurcations.
You can make this figure yourself using the ipython recipe at https://ipython-books.github.
io/121-plotting-the-bifurcation-diagram-of-a-chaotic-dynamical-system/ or the Math-
ematica notebook described at http://physics.ucsc.edu/~peter/242/logistic.pdf.

16.8 Further reading


• https://en.wikipedia.org/wiki/Universality_(dynamical_systems)



Figure 48: Renormalization of a unimodal map. The original map f (x) is in panel a, the second
iteration f (2) (x) is in panel b. In c we select a region which looks like the original map, zoomed
in d, and in e we rescale and shift the axes – known as renormalization – to get back to something
very close to the original map. This region then has similar bifurcations to the original map, in this
case another period doubling to f (4) (x), and so on.


Figure 49: Comparison of the Mandelbrot set and the bifurcation diagram of the logistic map. Note
the universal ratios in both.


Figure 50: Bifurcation diagram and Lyapunov exponent as a function of the parameter r in the
logistic map.

17 The 0-1 test for chaos
We often need to determine whether a data set is truly chaotic. The classical way to do this is with
Lyapunov exponents, to see where they are positive, or to look at the return map, where a random
scatter indicates chaos. There is a (perhaps) better way: the “0-1” test for chaos.
Let us assume we have a data set of points, φn , where n = 1, 2, . . . , generated by some map that
may or may not be chaotic. Define a two-dimensional system,

p n+1 = p n + φn cos (cn) , (17.1)


q n+1 = q n + φn sin (cn) , (17.2)

where c ∈ (0, 2π) is a fixed, real constant. We then define the time-averaged displacement function, M_n, by,

M_n = lim_{N→∞} (1/N) ∑_{j=1}^{N} [ (p_{j+n} − p_j)² + (q_{j+n} − q_j)² ] ,   (17.3)

where n = 1, 2, 3, . . . . The growth rate of M_n is,

K = lim_{n→∞} log M_n / log n .   (17.4)

Generally, M_n and K exist, and K = 0 when the dynamics are regular (non-chaotic) and K = 1 when the dynamics are chaotic.
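A finite-N version of the test can be sketched in a few lines of Python. The choices below are illustrative (N, n_max and the single fixed c = 1.7), and K is estimated by a least-squares fit of log M_n against log n; in practice one averages over many random values of c.

```python
import math

def logistic_series(r, x0=0.1, N=2000):
    """Generate a logistic-map data set phi_n."""
    xs, x = [], x0
    for _ in range(N):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def zero_one_K(phi, c=1.7, n_max=200):
    N = len(phi)
    p, q = [0.0], [0.0]                      # Eqs. (17.1)-(17.2)
    for j, ph in enumerate(phi):
        p.append(p[-1] + ph * math.cos(c * (j + 1)))
        q.append(q[-1] + ph * math.sin(c * (j + 1)))
    pts = []
    for n in range(1, n_max + 1):            # finite-N version of Eq. (17.3)
        m = sum((p[j + n] - p[j]) ** 2 + (q[j + n] - q[j]) ** 2
                for j in range(N - n_max)) / (N - n_max)
        pts.append((math.log(n), math.log(m)))
    xbar = sum(x for x, _ in pts) / len(pts)  # least-squares slope,
    ybar = sum(y for _, y in pts) / len(pts)  # estimating Eq. (17.4)
    num = sum((x - xbar) * (y - ybar) for x, y in pts)
    den = sum((x - xbar) ** 2 for x, _ in pts)
    return num / den

K_regular = zero_one_K(logistic_series(3.55))   # periodic: expect K ~ 0
K_chaotic = zero_one_K(logistic_series(3.97))   # chaotic: expect K ~ 1
print(K_regular, K_chaotic)
```

The two r values are the ones used in Figs. 51–52: the bounded (p, q) walk at r = 3.55 gives K near 0, while the diffusive walk at r = 3.97 gives K near 1.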

17.1 Example: the logistic map


Let us try the 0-1 test with the logistic map because we know this is chaotic in some regimes. The map is,

φ_{n+1} = r φ_n (1 − φ_n) ,   (17.5)

where 0 < r < 4 (if r > 4 the map can diverge, as we previously showed). Fig. 51 shows p_n and q_n, Fig. 52 shows M_n as a function of n, and Fig. 53 shows K (the slope) as a function of the logistic map parameter r and compares to the classical bifurcation diagram.

17.2 The parameter c


While the precise value of the parameter c does not matter much, there are values of c for which the map resonates. Thus, in practice, it is best to choose many random values 0 < c < 2π and take a statistic of the results (e.g. the median K). While some values will resonate and give false readings, most will work just fine.

17.3 Resolution and convergence


It is also clear that if N is small the method will fail. The slope K converges to either zero or one,
but may converge very slowly near the boundary between regular and chaotic trajectories (and
hence will give a false result if N is too small, cf. Fig. 54).



Figure 51: p n and q n for the logistic map with r = 3.55 (regular) and r = 3.9 and 3.97 (chaotic), with
5000 points starting at x 0 = 0.5.


Figure 52: M_n vs n using the logistic map with r = 3.55 (regular) and r = 3.9 and 3.97 (chaotic), with 5000 points starting at x_0 = 0.5 and 500 iterations of the 0-1 algorithm. The fitted slopes are −0.00065 (r = 3.55), 0.0137 (r = 3.9) and 0.0320 (r = 3.97).


Figure 53: a) Bifurcation plot of the logistic map, b) slopes K of the 0-1 test for 5000 sample points
and 500 iterations, c) zoom of (b) in the chaotic region.



Figure 54: Comparison of the 0-1 test for 5000 and 10000 sample points of the logistic map, with 500 and 1000 iterations respectively. (b) is a zoom in on (a).


17.4 Improvements
Because M_n oscillates as a function of n, a modified function,

D_n = M_n − V_osc   (17.6)
    = M_n − E² (1 − cos(nc)) / (1 − cos c) ,   (17.7)

where

E = lim_{N→∞} (1/N) ∑_{j=1}^{N} φ_j ,   (17.8)

can be used to remove the oscillations. However, the asymptotic growth rate remains the same; it is just that D_n converges better with limited resolution.

17.5 How does it work?


When motion is regular (non-chaotic) trajectories are typically bounded, while chaotic trajectories typically behave like Brownian motion, i.e. evolve diffusively (displacement growing ∝ √n). Thus the mean-square displacement, M_n, is either bounded or grows, respectively, and hence is a good measure of chaos.

• A good review is at http://www.maths.usyd.edu.au/u/gottwald/preprints/testforchaos_


MPI.pdf

• A comparison with the Hénon map is in the following YouTube video: https://www.youtube.com/watch?v=ecjpVGUWYoc

17.6 Credits
The 0-1 test for chaos was created by researchers at the University of Sydney and the University of
Surrey.

• You can read the original paper at https://homepages.warwick.ac.uk/~maslaq/papers/01manual.pdf

• You might also want to read the following paper comparing the Lyapunov exponent test to the 0-1 test: http://fse.studenttheses.ub.rug.nl/14017/1/The_Lyapunov_Exponent_Test_and_1.pdf

18 Generating Fractals
18.1 Random numbers to fractals
• https://www.youtube.com/watch?v=kbKtFN71Lfs

• http://thewessens.net/ClassroomApps/Main/chaosgame.html

18.2 Lindenmayer-systems
Lindenmayer³¹ systems are used to describe the growth of systems, in particular organisms. They encode a set of simple rules which can result in an arbitrarily complex structure. They can be used to generate fractals, often looking like plants, e.g. in Algorithmic Botany.
The idea starts with an axiom (a starting state) and a set of transformation rules. These rules are applied iteratively to produce the final structure.

18.2.1 Rules example

Consider an axiom b and rules b → a and a → ab. The first few iterations are then,

b,
a,
ab,
aba,
abaab,
abaababa.
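This parallel rewriting is only a few lines of Python (an illustrative sketch, not part of the course code):

```python
def rewrite(s, rules):
    """Apply the production rules to every character of s in parallel:
    each character is replaced by its rule, or kept if it has no rule."""
    return ''.join(rules.get(ch, ch) for ch in s)

rules = {'b': 'a', 'a': 'ab'}
s = 'b'  # the axiom
sequence = [s]
for _ in range(5):
    s = rewrite(s, rules)
    sequence.append(s)
# sequence == ['b', 'a', 'ab', 'aba', 'abaab', 'abaababa']
```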

18.2.2 Turtle graphics

The idea of turtle graphics is used to draw these structures. Define the following rules.

F Move forward a step and draw a line from the previous position to the current position.

f Move forward a step without drawing.

+ Rotate by n degrees.

- Rotate by −n degrees.

Example: draw the letter L. We let n = 90, then draw

FFF+F+FF-F+F+FF.
³¹ Aristid Lindenmayer (https://en.wikipedia.org/wiki/Aristid_Lindenmayer) was a Hungarian biologist.
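A minimal interpreter for these four commands (an illustrative sketch, independent of the fuller program in Sec. 18.3.1) confirms that this command string traces a closed outline, returning the turtle to its starting point:

```python
from math import cos, sin, radians

def run_turtle(commands, n=90.0, step=1.0):
    """Interpret F (move and draw), f (move), + and - (rotate by +/- n degrees).
    Returns the list of positions visited by the turtle."""
    x, y, heading = 0.0, 0.0, 0.0   # start at the origin, facing along +x
    points = [(x, y)]
    for ch in commands:
        if ch in 'Ff':
            x += step * cos(radians(heading))
            y += step * sin(radians(heading))
            points.append((x, y))
        elif ch == '+':
            heading += n
        elif ch == '-':
            heading -= n
    return points

points = run_turtle('FFF+F+FF-F+F+FF')
x_end, y_end = points[-1]  # the L outline closes on the starting point
```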


18.2.3 Branching and 3D

More advanced structures can be created by branching, denoted by, e.g.,

[F+F]

and scaling by some factor ".

One can even go multidimensional, e.g. by introducing pitch up ^, pitch down &, roll clockwise \ and roll counterclockwise /. For example, the axiom

FFFA

with the rule

A = "[&FFFA]////[&FFFA]////[&FFFA]

gives a three-dimensional, plant-like branching structure.

18.3 Examples
• Axiom F-F-F-F; F → FF-F+F-F-FF

• Axiom F-G-G; F → F-G+F+G-F and G → GG

• Axiom F; F → F[-F][+F]

18.3.1 Python code

There are many Python examples available; this is but one, which generates the von Koch snowflake fractal seen earlier in the course.


#!/usr/bin/env python

import matplotlib.pyplot as plt
import numpy as np
from math import sin, cos, atan2, radians

class turtle:
    """
    A turtle is a simple object with a direction and a position.
    It can follow two basic commands: move forward and turn by an angle.
    """
    def __init__(self):
        self._direction = np.array([scale, 0])  # 2D direction vector
        self._position = np.array([0, 0])       # 2D position vector

    def forward(self):
        """
        Move turtle forward by one unit.
        """
        pos = self._position
        dirn = self._direction
        self._position = np.add(pos, dirn)

    def rotate(self, theta):
        """
        Rotate turtle direction by angle theta in degrees.
        """
        (x, y) = self._direction
        current_angle = atan2(y, x)
        new_angle = current_angle + radians(theta)
        self._direction = [cos(new_angle), sin(new_angle)]

def L_system(commands, axiom, production_rules, theta, n_iterations):
    """
    Executes the commands of an L-system on a turtle object,
    and returns the resulting positions.

    Beginning with a string of simple commands, this string is made longer
    by replacing single characters with longer strings, in a recursive
    manner. By completing a number of iterations of this process, a long
    command string is generated. A 'turtle' object then follows these
    commands in order. It can only move forward or change its direction.
    The positions of the turtle are returned in a matrix.

    Parameters
    ----------
    commands : dict
        Maps single characters to function calls written as strings.
        The functions are performed on a turtle object,
        e.g. {'+': 't.rotate(-theta)', '-': 't.rotate(theta)',
              'F': 't.forward()'}

    axiom : str
        The initial string of command characters. The associated function
        calls of these characters are found in param commands,
        e.g. 'FX+FX+'

    production_rules : dict
        Maps single-character strings to more complicated strings of
        characters. The value strings replace the key strings on each new
        iteration, e.g. {'X': 'X+YF', 'Y': 'FX-Y'}

    theta : int
        Angle of rotation, in degrees, e.g. 90

    n_iterations : int
        Number of iterations for the L-system, e.g. 5

    Returns
    -------
    positions : numpy matrix
        The positions of the turtle, while following commands in the
        final command string.
    """
    command_string = axiom  # Begin commands with only the axiom
    for iteration in range(n_iterations):
        new_command_string = str()
        for char in command_string:
            if char in production_rules:
                new_command_string += production_rules[char]
            else:
                new_command_string += char
        command_string = new_command_string

    n_commands = len(command_string)  # Total number of commands for the turtle

    t = turtle()  # Initialize a turtle at position [0, 0]

    positions = np.zeros((n_commands, 2))

    for i, command in enumerate(command_string):
        if command in commands:
            exec(commands[command])  # Perform command on turtle
        positions[i, :] = t._position

    return positions

commands = {
    'F': 't.forward()',
    '+': 't.rotate(-theta)',
    '-': 't.rotate(theta)',
}

# von Koch curve
axiom = '+-F+-'
production_rules = {
    'F': 'F+F--F+F'
}

theta = -60
scale = 1.0
scalefac = 2.0 / 3.0
dpi = 1200
plt.ioff()
nmax = 5

for n_iterations in range(0, nmax, 1):
    positions = L_system(commands, axiom, production_rules, theta,
                         n_iterations)
    plt.clf()
    plt.axis("off")
    plt.gcf().set_size_inches(5, 2)
    plt.plot(positions[:, 0], positions[:, 1], linewidth=1, antialiased=True)
    file = "vonKoch." + str(n_iterations + 1) + ".pdf"
    plt.margins(0)
    plt.savefig(file, dpi=dpi, bbox_inches='tight')
    plt.close()
    scalefac *= 1.0 / 3.0

When you run this you should produce five files, vonKoch.n.pdf where n = 1, 2, 3, 4, 5, as shown in Sec. 10.4.1. You can download this script, vonKoch.py, from the code directory of the course. I also provide Sierpinski-carpet.py in the course's code directory, which is somewhat more complicated.

18.3.2 Try it yourself

• https://blog.klipse.tech/python/2017/01/04/python-turtle-fractal.html

18.3.3 Further reading

• https://www.vexlio.com/blog/drawing-simple-organics-with-l-systems/

• http://exupero.org/hazard/post/fractals/

• https://blog.goodaudience.com/fractals-and-recursion-in-python-d11d87fcf9cd

18.4 Fractals and numbers


• The link between the Mandelbrot set and Fibonacci numbers
https://www.youtube.com/watch?v=4LQvjSf6SSw

18.5 Fractal Landscapes


• https://www.youtube.com/watch?v=IQ4estA7HDE


18.6 Fractal compression


• https://karczmarczuk.users.greyc.fr/matrs/Dess/RADI/Refs/fractal_paper.pdf

18.7 More on the Mandelbrot and Julia sets


• https://www.youtube.com/watch?v=FFftmWSzgmk

• Apps at https://www.geogebra.org/

• The “dark side” of the Mandelbrot set https://www.youtube.com/watch?v=9gk_8mQuerg

19 More on chaotic oscillators
19.1 Rössler-band attractor
A simpler example than the Lorenz model, this attractor contains only one non-linear term. The
equations are,

ẋ = −y − z , (19.1)
ẏ = x + a y , (19.2)
ż = b + z (x − c) , (19.3)

with a = 0.398, b = 2 and c = 4 (usually).

1. The attractor is dissipative around the origin,

(1/V) dV/dt = ∂ẋ/∂x + ∂ẏ/∂y + ∂ż/∂z = 0 + a + (x − c)    (19.4)
            = a − c + x ,                                  (19.5)

so if x < c − a = 3.602, V̇ < 0 and the volume contracts.

2. Motion in the x y plane spirals outwards. If we let z = 0 to constrain motion to the plane, we
have equations of motion,

ẋ = −y , (19.6)
ẏ = x + a y , (19.7)
ż = b = 0 , (19.8)

then

ẍ = −ẏ = −(x + a y) = −x + a ẋ ,    (19.9)

i.e.

ẍ − a ẋ + x = 0 . (19.10)

This is the equation of a harmonic oscillator with negative damping, so the trajectories spiral
out in the x y plane.

3. Vertical motion (in the z direction) depends on x alone. To show this set x and y constant,
then,

ẋ = ẏ = 0 , (19.11)

and,

ż = b + z (x − c) , (19.12)

with a fixed point where

ż = 0 ,    (19.13)

i.e.

z* = b / (c − x) .    (19.14)

The growth rate of the vertical motion depends only on x, because

∂ż/∂z = x − c .    (19.15)
Stability criteria:

• x < c: stable and attractive, ∂ż/∂z < 0

• x > c: unstable and repulsive, ∂ż/∂z > 0

• As x → ±∞ we have z* → 0.

4. When x becomes large, we have ż ∝ z x, hence z increases. Then ẋ = −y − z becomes negative, restoring the position.
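These properties can be checked numerically. Below is a minimal sketch using fourth-order Runge-Kutta (Sec. 1.4.3) with the parameter values above; the step size, duration and initial condition are arbitrary choices, not prescribed by the text.

```python
import numpy as np

def rossler(u, a=0.398, b=2.0, c=4.0):
    """Right-hand side of the Rossler equations (19.1)-(19.3)."""
    x, y, z = u
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_step(f, u, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt = 0.01
u = np.array([1.0, 1.0, 0.0])
trajectory = np.empty((50000, 3))   # 500 time units
for i in range(len(trajectory)):
    u = rk4_step(rossler, u, dt)
    trajectory[i] = u
```

Plotting trajectory should show the familiar Rössler band: outward spirals in the x y plane punctuated by brief excursions in z, with the motion chaotic but bounded.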

19.2 van der Pol oscillator

The second-order equation,

ẍ + µ (x² − 1) ẋ + ω² x = 0 ,    (19.16)

can be rewritten as,

ẋ = y ,    (19.17)
ẏ = −µ (x² − 1) y − ω² x ,    (19.18)

and if we write

(ẋ, ẏ) = F(x, y) ,    (19.19)

then

∇ · F = ∂ẋ/∂x + ∂ẏ/∂y    (19.20)
      = −µ (x² − 1) ,    (19.21)

which is, assuming µ > 0,

• an energy sink, i.e. dissipative, if |x| > 1

• an energy source if |x| < 1,

and the opposite way around if µ < 0.


Fixed points are at,

• ẋ = y = 0, i.e. y = 0 ,

• ẏ = −µ (x² − 1) y − ω² x = 0, i.e. x₀ = y₀ = 0 .

Linear analysis at the fixed point (0, 0) of the equations gives us the Jacobian,

     ⎛       0                1      ⎞          ⎛  0    1 ⎞
A₀ = ⎝ −ω² − 2µx y     −µ(x² − 1) ⎠ (0,0)  =  ⎝ −ω²   µ ⎠    (19.22)

which has trace,

τ = µ, (19.23)

and determinant,

∆ = ω² ,    (19.24)

and we have spirals when τ² < 4∆, i.e. when

|µ| < 2ω .    (19.25)

Eigenvalues are,

λ₁,₂ = ( µ ± √(µ² − 4ω²) ) / 2 ,    (19.26)
and there are four regions of the parameter space:

• µ > 2ω: unstable node

• 0 < µ < 2ω: unstable spiral

• −2ω < µ < 0: stable spiral

• µ < −2ω: stable node
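
These four regions can be verified numerically from the Jacobian at the origin; a quick check (with illustrative values of µ at ω = 1, so the boundaries sit at µ = ±2):

```python
import numpy as np

def classify_origin(mu, omega=1.0):
    """Classify the fixed point (0, 0) of the van der Pol system
    from the eigenvalues of its Jacobian A0 = [[0, 1], [-omega^2, mu]]."""
    A0 = np.array([[0.0, 1.0], [-omega**2, mu]])
    ev = np.linalg.eigvals(A0)
    kind = 'spiral' if np.abs(ev.imag).max() > 1e-12 else 'node'
    stability = 'stable' if ev.real.max() < 0 else 'unstable'
    return stability + ' ' + kind

print(classify_origin(3))    # unstable node
print(classify_origin(1))    # unstable spiral
print(classify_origin(-1))   # stable spiral
print(classify_origin(-3))   # stable node
```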

When µ is small, oscillations are close to sine waves; when µ is large, they are almost square waves. Consider the case µ ≫ 1 with ω = 1, then our equation of motion is,

ẍ + µ (x² − 1) ẋ + x = 0 ,    (19.27)

and we can define a function F(x) by,

F(x) = x³/3 − x ,    (19.28)

such that,

dF/dx = x² − 1 ,    (19.29)

and

dF/dt = (dF/dx)(dx/dt) = ẋ (x² − 1) ,    (19.30)

then our equation of motion is,

0 = ẍ + µ dF/dt + x    (19.31)
  = d(ẋ + µF)/dt + x .    (19.32)
Now we can write our second-order equation as two coupled first-order equations. With,

y = ẋ/µ + F ,    (19.33)

such that

µy = ẋ + µF ,    (19.34)

then

µ ẏ = −x ,    (19.35)

so our equations of motion become,

ẋ = µ (y − F) ,    (19.36)
ẏ = −x/µ .    (19.37)

• ẋ ∝ µ, so when µ is large ẋ is large: this is the fast transient.

• ẏ ∝ µ⁻¹, so when µ is large y changes slowly.

When µ ≪ 1 the opposite is clearly the case.
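The fast and slow phases can be seen by integrating Eqs. 19.36 and 19.37 directly. The sketch below (fourth-order Runge-Kutta again; step size, duration and initial condition are arbitrary choices) also checks the well-known result that the limit-cycle amplitude in x is close to 2:

```python
import numpy as np

def vdp_lienard(u, mu=10.0):
    """van der Pol oscillator in Lienard coordinates, Eqs. 19.36-19.37."""
    x, y = u
    F = x**3 / 3.0 - x
    return np.array([mu * (y - F), -x / mu])

def rk4_step(f, u, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt = 0.002                    # small step: x moves fast when mu is large
u = np.array([0.1, 0.0])
xs = np.empty(100000)         # 200 time units
for i in range(len(xs)):
    u = rk4_step(vdp_lienard, u, dt)
    xs[i] = u[0]

# Discard the transient, then measure the limit-cycle amplitude in x
amplitude = np.abs(xs[len(xs) // 2:]).max()
```

Plotting xs against time for µ = 10 shows the nearly square relaxation oscillations described above: slow crawls along the branches of y = F(x) joined by fast horizontal jumps.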

[Figure: x vs t, y vs t, and x vs y phase portraits for µ = 0.01, 0.1, 0.5, 1, 2 and 10]

Figure 55: x vs y of Eqs. 19.36 and 19.37, showing in the top panel every data point, and in the bottom panel every δt = 1 so that rapid motion has few points. When µ ≫ 1, e.g. µ = 10 (the yellow curve), the "horizontal" parts of the limit cycle are fast, so they have few points, while the "vertical" parts are relatively slow, as shown in the text.
