Gerrit Ansmann
Example: Lorenz system

[Figure: time series of the Lorenz component 𝘹₁ (between −20 and 20) and the distance |𝘹₂ − 𝘹₁| between two trajectories (logarithmic scale, 10⁻¹⁰ to 10¹⁰), plotted over 0 ≤ 𝘵 ≤ 90]
The Largest Lyapunov Exponent

[Figure: two nearby trajectories 𝘹 and 𝘺, shown at times 𝘵 and 𝘵 + 𝜏]

|𝘹(𝘵 + 𝜏) − 𝘺(𝘵 + 𝜏)| = |𝘹(𝘵) − 𝘺(𝘵)| 𝘦^(𝜆₁𝜏)

For:
• infinitesimally close trajectories (|𝘹(𝘵) − 𝘺(𝘵)| → 0)
• infinite time evolution (𝜏 → ∞)

Note: In this entire lecture, 𝜏 is not the embedding delay.
The Largest Lyapunov Exponent – Definition

|𝘹(𝘵 + 𝜏) − 𝘺(𝘵 + 𝜏)| = |𝘹(𝘵) − 𝘺(𝘵)| 𝘦^(𝜆₁𝜏)

First Lyapunov exponent

Let 𝘹 and 𝘺 be two trajectories of the dynamics.

𝜆₁ ∶= 𝗅𝗂𝗆_{𝜏→∞} 𝗅𝗂𝗆_{|𝘹(𝘵)−𝘺(𝘵)|→0} (1/𝜏) 𝗅𝗇( |𝘹(𝘵 + 𝜏) − 𝘺(𝘵 + 𝜏)| / |𝘹(𝘵) − 𝘺(𝘵)| )
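The double limit cannot be taken literally on a computer, but finite-time, finite-distance estimates illustrate the definition. A minimal sketch, using the logistic map 𝘹 ↦ 4𝘹(1 − 𝘹) as a stand-in system (exact value 𝜆₁ = ln 2 ≈ 0.693; the map and all parameters are my choice, not from the lecture):

```python
import math

def logistic(x):
    """One step of the fully chaotic logistic map x -> 4x(1-x)."""
    return 4.0 * x * (1.0 - x)

def finite_time_estimate(x0, d0=1e-10, tau=15):
    """Apply the defining formula (1/tau) * ln(d_tau / d_0) to two
    trajectories started a small distance d0 apart."""
    x, y = x0, x0 + d0
    for _ in range(tau):
        x, y = logistic(x), logistic(y)
    return math.log(max(abs(x - y), 1e-300) / d0) / tau

# Finite-time estimates fluctuate, so average over many initial states;
# the result should lie near the exact value ln 2.
estimates = [finite_time_estimate(0.01 + 0.98 * k / 500) for k in range(500)]
print(sum(estimates) / len(estimates))
```

Making 𝜏 much larger here would not help: the distance would saturate at the attractor size, which is exactly why the limits must be taken in the stated order.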
Typical Evolution of a Trajectory Distance

[Figure: Lorenz component 𝘹₁ (between −20 and 20) and trajectory distance |𝘹₂ − 𝘹₁| (logarithmic scale, 10⁻¹⁰ to 10¹⁰) versus time 𝘵 from 0 to 90]
Comparison with Evolution of Infinitesimal Distance

[Figure: as above, additionally showing the magnitude |𝘷| of an infinitesimal perturbation: |𝘹₂ − 𝘹₁| and |𝘷| versus time 𝘵 from 0 to 90]
Typical Evolution of a Trajectory Distance

[Figure: 𝘋(𝜏) versus 𝜏, with three successive regimes marked 1, 2, and 3]

𝘋(𝜏) ∝ ∑ᵢ₌₁^𝘥 𝘤ᵢ(𝜏) 𝖾𝗑𝗉(𝜆ᵢ𝜏)

Asymptotically: 𝘤ᵢ(𝜏) → 0 for 𝘪 > 1

2. exponential growth: 𝘋(𝜏) ∝ 𝖾𝗑𝗉(𝜆₁𝜏)
3. constancy on the scale of the attractor: 𝘋(𝜏) ≈ diam(𝒜)
Translation to Time Series

• continuous trajectories → discrete trajectories
Wolf Algorithm
Acquisition of Instantaneous Lyapunov Exponents

[Figure: a reference trajectory with states 𝘹(0), 𝘹(𝜏), 𝘹(𝘵₁), 𝘹(𝘵₁ + 𝜏), 𝘹(𝘵₂); at each rescaling step, a neighbouring state at distance 𝜀 is chosen and followed for time 𝜏]
Wolf Algorithm – Averaging

𝜆₁ = (1/(𝘐 − 𝘳)) ∑ᵢ₌ᵣ^𝘐 𝜆̂₁(𝘪𝜏)

The offset 𝘳 ensures that the distances are aligned to the direction of largest growth.
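A hedged sketch of the Wolf scheme in one dimension, where rescaling reduces to placing the companion back at distance 𝜀 on the correct side (system, parameter values, and function names are my choice, not the lecture's):

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

def wolf_estimate(x0, eps=1e-8, tau=5, segments=2000, r=10):
    """Follow a reference point and a companion at distance eps, record
    the local growth rate every tau steps, then rescale the companion
    back to distance eps while keeping its orientation."""
    x, y = x0, x0 + eps
    local = []
    for _ in range(segments):
        for _ in range(tau):
            x, y = logistic(x), logistic(y)
        d = abs(x - y)
        if d == 0.0:                         # guard against exact coincidence
            d = 1e-300
        local.append(math.log(d / eps) / tau)
        y = x + (eps if y >= x else -eps)    # rescale, keep orientation
    # discard the first r segments, as in the averaging formula above
    return sum(local[r:]) / len(local[r:])

est = wolf_estimate(0.3)
print(est)   # exact value for this map: ln 2 ~ 0.693
```

In higher dimensions the rescaling instead moves the companion to distance 𝜀 along the current separation vector, which is what the offset 𝘳 gives time to align with the direction of largest growth.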
Wolf Algorithm – Parameters

[Figure: Wolf rescaling scheme with states 𝘹(0), 𝘹(𝜏), 𝘹(𝘵₁), 𝘹(𝘵₁ + 𝜏), 𝘹(𝘵₂) and rescaling distance 𝜀]

Initial distance 𝜀:
• too small → impact of noise too high
• too large → small region of exponential growth

Rescaling time 𝜏:
• too large → distance reaches the size of the attractor
• too small → small region of exponential growth
Wolf Algorithm – Problems

• Sensitivity to noise.
Rosenstein–Kantz Algorithm

[Figure: a reference state 𝘹(𝘵) with an 𝜀-neighbourhood containing the states 𝘹(𝘵₁) and 𝘹(𝘵₂)]

1. For a given reference state 𝘹(𝘵), find all states 𝘹(𝘵₁), …, 𝘹(𝘵ᵤ) for which |𝘹(𝘵) − 𝘹(𝘵ⱼ)| < 𝜀.

2. For a given 𝜏, define the average distance of the respective trajectory segments from the initial one:

𝘴(𝘵, 𝜏) ∶= (1/𝘶) ∑ⱼ₌₁^𝘶 |𝘹(𝘵 + 𝜏) − 𝘹(𝘵ⱼ + 𝜏)|
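The steps above can be sketched as follows. This is my own minimal implementation on a toy series, not the lecture's code; I also exclude temporally close neighbours (a Theiler window), which the steps above do not mention, and average ln 𝘴(𝘵, 𝜏) over reference states so that the slope over 𝜏 estimates 𝜆₁:

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

# Scalar series from the logistic map; its state space is one-dimensional,
# so no delay embedding is needed for this toy example.
x, series = 0.3, []
for _ in range(2000):
    series.append(x)
    x = logistic(x)

def stretching(series, eps=1e-4, taus=range(1, 8), theiler=20):
    """Average of ln s(t, tau) over reference states t, where s(t, tau)
    is the mean distance of the neighbouring segments after time tau."""
    last = len(series) - max(taus)
    logs = {tau: [] for tau in taus}
    for t in range(last):
        nbrs = [j for j in range(last)
                if abs(j - t) > theiler and abs(series[j] - series[t]) < eps]
        if not nbrs:
            continue
        for tau in taus:
            s = sum(abs(series[t + tau] - series[j + tau]) for j in nbrs) / len(nbrs)
            if s > 0.0:
                logs[tau].append(math.log(s))
    return {tau: sum(v) / len(v) for tau, v in logs.items()}

S = stretching(series)
taus = sorted(S)
slopes = [(S[t2] - S[t1]) / (t2 - t1) for t1, t2 in zip(taus, taus[1:])]
print(slopes)   # each slope estimates lambda_1 = ln 2 ~ 0.693
```

In practice one fits a line to the linear (exponential-growth) part of the 𝜏-versus-mean-log-distance curve rather than taking pointwise slopes.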
Extensions and Alternatives

• tangent-space methods → require estimate of Jacobian
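When the Jacobian is known (or estimated), the tangent-space approach avoids finite-distance effects entirely. A sketch for a system of my choosing, the logistic map with Jacobian 𝘧′(𝘹) = 4 − 8𝘹:

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

# Propagate a tangent vector with the Jacobian f'(x) = 4 - 8x along the
# orbit and renormalise after every step; the average logarithmic growth
# of the tangent vector is the largest Lyapunov exponent.
x, total, steps = 0.3, 0.0, 10000
for _ in range(steps):
    growth = abs(4.0 - 8.0 * x)              # |f'(x)|: tangent-vector stretch
    total += math.log(max(growth, 1e-300))   # guard against x == 0.5 exactly
    x = logistic(x)
est = total / steps
print(est)   # exact value for this map: ln 2 ~ 0.693
```

In higher dimensions the same idea with repeated QR decompositions of the propagated tangent vectors yields the whole Lyapunov spectrum.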
Lyapunov Spectrum and Types of Dynamics

• Prediction horizon:

𝜏ₚ ≈ −𝗅𝗇(𝜌) / ∑_{𝘪∶ 𝜆ᵢ>0} 𝜆ᵢ
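Plugging in numbers: with a spectrum resembling commonly reported Lorenz-system values (my illustrative numbers, not from the lecture) and an initial relative precision 𝜌 = 10⁻³, the sum runs over the single positive exponent:

```python
import math

# Hypothetical spectrum resembling the Lorenz system; rho is the
# initial relative precision of the state.
spectrum = (0.906, 0.0, -14.57)
rho = 1e-3
tau_p = -math.log(rho) / sum(l for l in spectrum if l > 0)
print(tau_p)   # roughly 7.6 time units
```

Note that only the positive exponents enter: the contracting directions do not limit predictability.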