
Estimating Lyapunov Exponents from Time Series

Gerrit Ansmann
Example: Lorenz system

Two identical Lorenz systems with the same initial conditions;
one is slightly perturbed (by 10⁻¹⁴) at 𝘵 = 30:

[Figure: first components 𝘹₁ of both systems and their distance |𝘹¹ − 𝘹²| on a logarithmic scale, for 0 ≤ 𝘵 ≤ 90]

2
The Largest Lyapunov Exponent

Consider the evolution of two close trajectories 𝘹 and 𝘺:

[Figure: sketch of nearby states 𝘹(𝘵) and 𝘺(𝘵) evolving to 𝘹(𝘵+𝜏) and 𝘺(𝘵+𝜏)]

Then their distance grows or shrinks exponentially:

|𝘹(𝘵 + 𝜏) − 𝘺(𝘵 + 𝜏)| = |𝘹(𝘵) − 𝘺(𝘵)| e^(𝜆₁𝜏)

For:
• infinitesimally close trajectories (|𝘹(𝘵) − 𝘺(𝘵)| → 0)
• infinite time evolution (𝜏 → ∞)

Note: In this entire lecture, 𝜏 is not the embedding delay.
3
The Largest Lyapunov Exponent – Definition

|𝘹(𝘵 + 𝜏) − 𝘺(𝘵 + 𝜏)| = |𝘹(𝘵) − 𝘺(𝘵)| e^(𝜆₁𝜏)

→ Solve for 𝜆₁ and apply the limits:

First Lyapunov exponent
Let 𝘹 and 𝘺 be two trajectories of the dynamics.

𝜆₁ := lim_{𝜏→∞} lim_{|𝘹(𝘵)−𝘺(𝘵)|→0} (1/𝜏) ln( |𝘹(𝘵+𝜏) − 𝘺(𝘵+𝜏)| / |𝘹(𝘵) − 𝘺(𝘵)| )

Also: largest Lyapunov exponent or just Lyapunov exponent.

4
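
As a quick plausibility check of this definition, the following minimal sketch (all names and parameter values are illustrative) estimates 𝜆₁ for the fully chaotic logistic map 𝘹ₙ₊₁ = 4𝘹ₙ(1 − 𝘹ₙ), whose largest Lyapunov exponent is known to be ln 2; the two limits are only approximated by a finite 𝜏 and a small but finite initial perturbation.

```python
import numpy as np

def logistic_trajectory(x0, n):
    """Iterate the fully chaotic logistic map x -> 4 x (1 - x)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = 4 * x[i] * (1 - x[i])
    return x

tau = 30                                       # finite stand-in for τ → ∞
x = logistic_trajectory(0.3, tau + 1)          # reference trajectory
y = logistic_trajectory(0.3 + 1e-12, tau + 1)  # almost "infinitesimally" perturbed

# λ₁ ≈ (1/τ) · ln(|x(τ) − y(τ)| / |x(0) − y(0)|)
lambda1 = np.log(abs(x[tau] - y[tau]) / abs(x[0] - y[0])) / tau
print(lambda1)  # should come out roughly near ln 2 ≈ 0.69
```
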
Typical Evolution of a Trajectory Distance

Two identical Lorenz systems with the same initial conditions;
one is slightly perturbed (by 10⁻¹⁴) at 𝘵 = 30:

[Figure: first components 𝘹₁ of both systems and their distance |𝘹¹ − 𝘹²| on a logarithmic scale, for 0 ≤ 𝘵 ≤ 90]

5
Comparison with Evolution
of Infinitesimal Distance

Two identical Lorenz systems with the same initial conditions;
one is slightly perturbed (by 10⁻¹⁴) at 𝘵 = 30:

[Figure: as before, but the distance panel now also shows the evolved infinitesimal distance |𝘷| alongside |𝘹¹ − 𝘹²|, on a logarithmic scale]

5
Typical Evolution of a Trajectory Distance

Regimes of the average distance 𝘋:

[Figure: sketch of 𝘋(𝜏) over 𝜏, with the three regimes marked 1, 2, 3]

1. alignment to the direction of largest growth:

   𝘋(𝜏) ∝ ∑_{𝘪=1}^{𝘥} 𝘤ᵢ(𝜏) exp(𝜆ᵢ𝜏)

   Asymptotically: 𝘤ᵢ(𝜏) → 0 for 𝘪 > 1

2. exponential growth:

   𝘋(𝜏) ∝ exp(𝜆₁𝜏)

3. constancy on the scale of the attractor:

   𝘋(𝜏) ≈ diam(𝒜)
6
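
The three regimes above can be reproduced numerically. The following sketch (all names and numerical choices are illustrative, assuming NumPy and SciPy) integrates two Lorenz trajectories with standard parameters whose initial conditions differ by 10⁻¹⁰ and prints the logarithm of their distance, which grows roughly linearly before saturating on the scale of the attractor.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, r=28.0, b=8/3):
    """Standard Lorenz system."""
    x, y, z = state
    return [sigma * (y - x), x * (r - z) - y, x * y - b * z]

t_eval = np.linspace(0, 40, 2000)
sol_a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)
sol_b = solve_ivp(lorenz, (0, 40), [1.0 + 1e-10, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)

# D(t): distance between the two trajectories
D = np.linalg.norm(sol_a.y - sol_b.y, axis=0)

# ln D(t) grows roughly linearly (regime 2, slope ≈ λ₁)
# before it saturates on the scale of the attractor (regime 3).
for t, d in zip(t_eval[::200], D[::200]):
    print(f"t = {t:5.1f}   ln D = {np.log(d):8.2f}")
```
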
Translation to Time Series

• continuous trajectories → discrete trajectories

• actual phase space → reconstruction (see the embedding sketch after this slide)

• evolution of arbitrary states → available trajectories

And of course: finite data, noise, …

7
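
Both algorithms below operate on states reconstructed from a scalar time series. A minimal delay-embedding sketch (the function name and its arguments are illustrative; note that the embedding delay here is not the 𝜏 used elsewhere in this lecture):

```python
import numpy as np

def delay_embedding(series, m, delay):
    """Reconstruct m-dimensional states from a scalar time series
    by delay embedding; 'delay' is the embedding delay."""
    n = len(series) - (m - 1) * delay
    return np.column_stack([series[i * delay : i * delay + n] for i in range(m)])

# usage sketch: states = delay_embedding(measurements, m=5, delay=10)
```
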
Wolf Algorithm
Acquisition of Instantaneous Lyapunov Exponents

[Figure: reference segment from 𝘹(0) to 𝘹(𝜏), neighbours 𝘹(𝘵₁) and 𝘹(𝘵₂) within radius 𝜀, and the evolved neighbour 𝘹(𝘵₁+𝜏)]

1. Let 𝘹(0) be the first reconstructed state.

2. Find a state 𝘹(𝘵₁) such that |𝘹(𝘵₁) − 𝘹(0)| < 𝜀.

3. Approximate the instantaneous largest Lyapunov exponent:

   𝜆̂₁(0) = (1/𝜏) ln( |𝘹(𝜏) − 𝘹(𝘵₁+𝜏)| / |𝘹(0) − 𝘹(𝘵₁)| )

4. Find a state 𝘹(𝘵₂) such that |𝘹(𝘵₂) − 𝘹(𝜏)| < 𝜀 and
   𝘹(𝘵₂) − 𝘹(𝜏) is nearly parallel to 𝘹(𝘵₁+𝜏) − 𝘹(𝜏).

5. Approximate the instantaneous largest Lyapunov exponent:

   𝜆̂₁(𝜏) = (1/𝜏) ln( |𝘹(2𝜏) − 𝘹(𝘵₂+𝜏)| / |𝘹(𝜏) − 𝘹(𝘵₂)| )

8
Wolf Algorithm – Averaging

After acquiring the local Lyapunov exponents, estimate:

𝜆₁ = (1/(𝘐 − 𝘳)) ∑_{𝘪=𝘳}^{𝘐} 𝜆̂₁(𝘪𝜏)

The offset 𝘳 ensures that the distances are aligned to the direction of largest growth.

9
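
A minimal sketch of the whole procedure (steps 1–5 plus the averaging), assuming the reconstructed states are already given as a NumPy array with one state per sampling step; the function name, the Theiler-window argument, and the simple neighbour selection are assumptions of this sketch, not the original implementation:

```python
import numpy as np

def wolf_lle(states, eps, k, theiler=10, r=5):
    """Wolf-style estimate of the largest Lyapunov exponent.
    states  : array of shape (N, m) with reconstructed states
    eps     : maximal distance of an accepted neighbour
    k       : rescaling time τ in samples
    theiler : temporally close points are not accepted as neighbours
    r       : number of initial local exponents to discard (offset r)"""
    N = len(states)
    ref = 0              # index of the current reference state
    prev_sep = None      # previous separation vector (for alignment)
    local = []           # instantaneous exponents

    while ref + k < N:
        dists = np.linalg.norm(states - states[ref], axis=1)
        candidates = [j for j in np.argsort(dists)
                      if abs(j - ref) > theiler and dists[j] < eps and j + k < N]
        if not candidates:
            break
        if prev_sep is None:
            j = candidates[0]          # simply take the nearest neighbour
        else:
            # prefer neighbours whose separation vector is aligned with the previous one
            def misalignment(j):
                v = states[j] - states[ref]
                return -abs(np.dot(v, prev_sep)) / (np.linalg.norm(v) * np.linalg.norm(prev_sep))
            j = min(candidates, key=misalignment)

        d0 = np.linalg.norm(states[j] - states[ref])
        d1 = np.linalg.norm(states[j + k] - states[ref + k])
        local.append(np.log(d1 / d0) / k)      # instantaneous exponent

        prev_sep = states[j + k] - states[ref + k]
        ref += k                               # advance the reference state by τ

    return np.mean(local[r:])                  # average, discarding the first r values
```
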
Wolf Algorithm – Parameters

[Figure: same sketch as before, with reference segment from 𝘹(0) to 𝘹(𝜏), neighbourhood radius 𝜀, and replacement neighbour 𝘹(𝘵₂)]

Initial distance 𝜀:
• too small → impact of noise too high
• too large → small region of exponential growth

Rescaling time 𝜏:
• too large → distance reaches the size of the attractor
• too small → small region of exponential growth

10
Wolf Algorithm – Problems

• Parameters have to be chosen a priori.

• Problems may be obfuscated:
  • no exponential growth due to noise
  • embedding dimension 𝘮 too small

• Sensitivity to noise.

• Difficult to find a neighbouring trajectory segment with the required properties.

→ Different way to ensure alignment to the direction of largest growth.

11
Rosenstein–Kantz Algorithm

[Figure: reference state 𝘹(𝘵) with its 𝜀-neighbourhood containing 𝘹(𝘵₁), 𝘹(𝘵₂), …]

1. For a given reference state 𝘹(𝘵), find all states 𝘹(𝘵₁), …, 𝘹(𝘵ᵤ)
   for which |𝘹(𝘵) − 𝘹(𝘵ⱼ)| < 𝜀.

2. For a given 𝜏, define the average distance of the respective trajectory segments
   from the initial one:

   𝘴(𝘵,𝜏) := (1/𝘶) ∑_{𝘫=1}^{𝘶} |𝘹(𝘵+𝜏) − 𝘹(𝘵ⱼ+𝜏)|

3. Average over all states as reference states:

   𝘚(𝜏) := (1/𝘕) ∑_{𝘵=1}^{𝘕} 𝘴(𝘵,𝜏)

4. Obtain 𝜆₁ from the region of exponential growth of 𝘚(𝜏).
12
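
A minimal sketch of steps 1–3, following the slide's definitions of 𝘴(𝘵,𝜏) and 𝘚(𝜏) and again assuming reconstructed states as a NumPy array; the function name and the Theiler-window argument are assumptions of this sketch:

```python
import numpy as np

def stretching_curve(states, eps, max_tau, theiler=10):
    """Compute S(τ) for τ = 0 … max_tau (in samples).
    states : array of shape (N, m) with reconstructed states
    eps    : neighbourhood radius
    theiler: temporal exclusion window for neighbours"""
    N = len(states)
    usable = N - max_tau                 # reference states that can still be evolved
    S = np.zeros(max_tau + 1)
    count = 0

    for t in range(usable):
        dists = np.linalg.norm(states[:usable] - states[t], axis=1)
        neighbours = np.where((dists < eps) &
                              (np.abs(np.arange(usable) - t) > theiler))[0]
        if len(neighbours) == 0:
            continue
        # step 2: s(t, τ), averaged over the neighbourhood of this reference state
        for tau in range(max_tau + 1):
            S[tau] += np.mean(np.linalg.norm(states[neighbours + tau]
                                             - states[t + tau], axis=1))
        count += 1

    return S / count                     # step 3: average over all reference states
```

𝜆₁ is then read off as the slope of ln 𝘚(𝜏) in the region where it grows linearly (step 4), which can be chosen a posteriori.
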
Rosenstein–Kantz Algorithm – Example

[Figure: 𝘚(𝜏) for an example time series; different lines correspond to different 𝜀 and 𝘮]

adapted from H. Kantz, A robust method to estimate the maximal Lyapunov exponent of a time series, Phys. Lett. A 185 (1994)

13
Rosenstein–Kantz Algorithm
– Mind How You Average

1. Average over the neighbourhood of a reference state → 𝘴(𝘵,𝜏).
2. Average 𝘴(𝘵,𝜏) over all reference states → 𝘚(𝜏).
3. Obtain 𝜆₁ from the slope of 𝘚(𝜏).

The density of states in a region of the attractor affects:
• the reference states
• the states in the neighbourhood of a given reference state

Separating the averaging into steps 1 and 2 (instead of averaging over all pairs closer than 𝜀) ensures that the density is accounted for only once (and not twice).
14
Rosenstein–Kantz Algorithm
– Advantages and Problems

• Region of exponential growth can be determined a posteriori.
  Be careful of wishful thinking, though.

• Absence of exponential growth is usually detectable (but only usually).

• Region of strong noise influence can be detected and excluded.

• Can only determine the largest Lyapunov exponent.

15
Extensions and Alternatives

• tangent-space methods → require an estimate of the Jacobian

• further Lyapunov exponents → require a lot of data

16
Lyapunov Spectrum and Types of Dynamics

For bounded, continuous-time dynamical systems:

Signs of Lyapunov exponents     Dynamics
−, −−, −−−, …                   fixed point
+, ++, +++, …, +0, ++0, …       not possible (unbounded)
0, 00, 000, …                   no dynamics (𝘧 = 0)
0−, 0−−, 0−−−, …                periodic / limit cycle
00−, 00−−, 00−−−, …             quasiperiodic (torus)
000−, 0000−, …, 000−−, …        quasiperiodic (hypertorus)
+0−, +0−−, +0−−−, …             chaos
++0−, +++0−, …, ++0−−, …        hyperchaos
∞, …                            noise
17
Interpretation

• Stability and type of the dynamics:
  𝜆₁ > 0   chaos, unstable dynamics
  𝜆₁ = 0   regular dynamics
  𝜆₁ < 0   fixed-point dynamics

• Quantification of the loss of information.

• Prediction horizon (see the sketch below):

  𝜏ₚ ≈ −ln(𝜌) / ∑_{𝘪: 𝜆ᵢ>0} 𝜆ᵢ

  • 𝜌: accuracy of measurement (initial state).
  • ∑_{𝘪: 𝜆ᵢ>0} 𝜆ᵢ: sum of the positive Lyapunov exponents.

18
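
A small illustrative calculation of the prediction horizon (the single positive exponent 𝜆₁ ≈ 0.9 is roughly the value usually reported for the standard Lorenz system; the accuracy 𝜌 = 10⁻³ is an arbitrary assumption):

```python
import numpy as np

lambda1 = 0.9   # assumed: single positive exponent, roughly the Lorenz value
rho = 1e-3      # assumed: relative accuracy of the measured initial state

tau_p = -np.log(rho) / lambda1
print(tau_p)    # ≈ 7.7 time units until the initial error grows to order one
```
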
