Physics Reports
journal homepage: www.elsevier.com/locate/physrep
Fractional motions
Iddo I. Eliazar a,∗, Michael F. Shlesinger b
a Holon Institute of Technology, P.O. Box 305, Holon 58102, Israel
b Office of Naval Research, Code 30, 875 N. Randolph St., Arlington, VA 22203, USA
Article info
Article history:
Accepted 16 January 2013
Available online 23 January 2013
Editor: D.K. Campbell
Keywords:
Brownian motion
Fractional Brownian motion
Lévy motion
Fractional Lévy motion
Langevin’s equation
Random walks
Scaling limits
Universality
Noah exponent
Noah effect
Joseph exponent
Joseph effect
Subdiffusion
Superdiffusion
Short-range correlations
Long-range correlations
Fractal trajectories
Self-similarity
Hurst exponent
Abstract
Brownian motion is the archetypal model for random transport processes in science and engineering. Brownian motion displays neither wild fluctuations (the ‘‘Noah effect’’) nor long-range correlations (the ‘‘Joseph effect’’). The quintessential model for processes displaying the Noah effect is Lévy motion, the quintessential model for processes displaying the Joseph effect is fractional Brownian motion, and the prototypical model for processes displaying both the Noah and Joseph effects is fractional Lévy motion. In this paper we review these four random-motion models – henceforth termed ‘‘fractional motions’’ – via a unified physical setting that is based on Langevin’s equation, the Einstein–Smoluchowski paradigm, and stochastic scaling limits. The unified setting explains the universal macroscopic emergence of fractional motions, and predicts – according to microscopic-level details – which of the four fractional motions will emerge on the macroscopic level. The statistical properties of fractional motions are classified and parametrized by two exponents: a ‘‘Noah exponent’’ governing their fluctuations, and a ‘‘Joseph exponent’’ governing their dispersions and correlations. This self-contained review provides a concise and cohesive introduction to fractional motions.
© 2013 Elsevier B.V. All rights reserved.
Contents
1. Introduction.......................................................................................................................................................................................... 102
2. Langevin dynamics............................................................................................................................................................................... 103
2.1. The Langevin equation ............................................................................................................................................................ 103
2.2. Noise ......................................................................................................................................................................................... 104
2.3. Langevin motions..................................................................................................................................................................... 105
2.4. Random walks.......................................................................................................................................................................... 106
3. Scaling and scaling limits .................................................................................................................................................................... 106
3.1. Scaling....................................................................................................................................................................................... 106
3.2. Regular variation...................................................................................................................................................................... 107
3.3. Scaling limits of the renormalized noise................................................................................................................................ 107
3.4. Scaling limits of the renormalized Langevin motions ........................................................................................................... 108
∗ Corresponding author. Tel.: +972 507 290 650.
E-mail addresses: eliazar@post.tau.ac.il (I.I. Eliazar), mike.shlesinger@navy.mil (M.F. Shlesinger).
0370-1573/$ – see front matter © 2013 Elsevier B.V. All rights reserved.
doi:10.1016/j.physrep.2013.01.004
102 I.I. Eliazar, M.F. Shlesinger / Physics Reports 527 (2013) 101–129
4. Interim summary ................................................................................................................................................................................. 110
5. Fractional motions ............................................................................................................................................................................... 111
5.1. Gauss-distributed increments ................................................................................................................................................ 112
5.2. Lévy-distributed increments................................................................................................................................................... 113
5.3. Dispersion ................................................................................................................................................................................ 113
5.4. Correlation................................................................................................................................................................................ 114
5.5. Trajectories............................................................................................................................................................................... 115
5.6. Classification of fractional motions ........................................................................................................................................ 116
6. Discussion............................................................................................................................................................................................. 117
6.1. The pervasiveness of fractional motions................................................................................................................................ 117
6.2. Brownian motion and Ito’s calculus ....................................................................................................................................... 118
6.3. Data traffic and fractional Brownian motion ......................................................................................................................... 118
6.4. Universality via stochastic scaling.......................................................................................................................................... 118
6.5. Fractional motions vs. the CTRW model ................................................................................................................................ 118
7. Conclusion ............................................................................................................................................................................................ 119
8. Stochastic analyses............................................................................................................................................................................... 120
8.1. Poisson processes and their Fourier functional ..................................................................................................................... 120
8.2. Regular variation and scaling limits ....................................................................................................................................... 121
8.3. The increments of random walks and Langevin motions ..................................................................................................... 122
8.4. Fourier inversion of the limiting Fourier transforms ............................................................................................................ 123
8.5. Heavy tails of the Lévy distribution........................................................................................................................................ 124
8.6. Self-similarity of fractional motions ...................................................................................................................................... 125
8.7. Correlations of motions with stationary increments ............................................................................................................ 125
8.8. The jump structure of Lévy motions....................................................................................................................................... 126
Acknowledgments ............................................................................................................................................................................... 127
References............................................................................................................................................................................................. 127
1. Introduction
Random motions are omnipresent all across science and engineering. Examples of random motions include: diffusion
of particles in physics and chemistry [1–3], trajectories of tracers in turbulent flows [4,5], search patterns of foraging
animals [6–8], propagation of contaminants in geological and hydrological systems [9,10], levels of data traffic in
communication channels [11,12], and prices of commodities and stocks in financial markets [13,14].
The archetypal model of diffusion – perhaps the most elemental form of random transport in the physical sciences –
is Brownian motion [1–3]. Brownian motion was discovered by Sir Robert Brown in 1827 [15], was first applied by Louis
Bachelier to model stock prices in 1900 [16], and was established by Einstein and Smoluchowski as the prototypical model
of diffusion in 1905 [17,18]. Mathematically constructed by Norbert Wiener in 1923 [19], Brownian motion is a stochastic
process characterized by increments which are stationary, independent, and Gauss-distributed.
The Gaussian distribution of the Brownian increments implies that Brownian motion displays tamed fluctuations [20,21]. However, the sciences are abundant with random processes displaying wild fluctuations – a phenomenon termed the ‘‘Noah effect’’ by Mandelbrot and Wallis [22], and quantified by infinite variances. The quintessential model for wildly fluctuating processes is Lévy motion [23,24]: a stochastic process characterized by increments which are stationary, independent, and Lévy-distributed [25,26]. In the passage from Brownian motion to Lévy motion the increments are kept stationary and independent, yet their distribution is changed from Gauss to Lévy. The Lévy distribution yields infinite variances, and hence Lévy motion indeed displays the Noah effect.
Both Brownian motion and Lévy motion have independent increments. This implies that the increments of these motions have no ‘‘memory’’. Nonetheless, random processes with long-memory stationary increments are prevalent across the sciences – a phenomenon termed the ‘‘Joseph effect’’ by Mandelbrot and Wallis [22], and quantified by non-summable autocovariance functions [27–30]. The quintessential model for processes with long-memory stationary increments is fractional Brownian motion [31–34]: a stochastic process characterized by increments which are stationary, dependent, and Gauss-distributed. In the passage from Brownian motion to fractional Brownian motion the increments are kept stationary and Gauss-distributed, yet their independence is changed to dependence. The increments of fractional Brownian motion can be either negatively correlated, or positively correlated. When positively correlated, the increments’ autocovariance function is non-summable, and hence fractional Brownian motion indeed displays the Joseph effect.
The Noah and Joseph effects are antithetical notions, as the former is characterized by infinite variances, whereas the latter is characterized by non-summable autocovariance functions (thus implicitly implying finite variances). To accommodate both the Noah and Joseph effects in one setting, alternative measures of ‘‘memory’’ – which are not covariance-based – need to be applied. Such measures do exist [24,35], and the prototypical model for processes which are both wildly fluctuating and have long-memory stationary increments is fractional Lévy motion [36,37]: a stochastic process characterized by increments which are stationary, dependent, and Lévy-distributed. In the passage from Brownian motion to fractional Lévy motion the stationary increments are changed from independent and Gauss-distributed to dependent and Lévy-distributed. As noted above, the Lévy distribution yields infinite variances, and hence fractional Lévy motion displays the Noah effect. The increments of fractional Lévy motion can be either negatively correlated, or positively correlated (the correlation quantified by measures which are not covariance-based). When positively correlated, the increments’ memories are long-ranged, and hence fractional Lévy motion displays both the Noah and Joseph effects.
In effect, Brownian motion and Lévy motion can be classified, respectively, as fractional Brownian motion and as fractional Lévy motion with uncorrelated increments [35]. Due to this classification we henceforth term, in short, the four aforementioned random-motion models ‘‘fractional motions’’. Fractional motions constitute the very bedrock of stochastic models for motions with stationary increments, and are thus of prime importance both theoretically and practically. The wide application of fractional motions is well exemplified in the transdisciplinary field of Anomalous Diffusion [38–41]. Anomalous Diffusion originated in the nineteen-seventies [42–44], and in recent years it has led to the statements ‘‘anomalous is normal’’ [45,46] and ‘‘anomalous is ubiquitous’’ [47], which highlight and underscore the major importance of fractional motions.¹
The goal of this review paper is to provide audiences from the physical sciences with a cohesive, concise, and self-contained introduction to fractional motions. Due to the intricate statistical structure of fractional motions, probabilistic reviews of these motions tend to be somewhat technical. In this paper we approach and overview fractional motions from the physical perspective of ‘‘Langevin unification of fractional motions’’ [50]. This unified approach, which explains the emergence of fractional motions, is based on three key physical foundations:
• The Langevin equation – one of the most fundamental equations of stochastic dynamics in the physical sciences [51,52].
• The Einstein–Smoluchowski paradigm – which provides an elemental conceptual model for the generation of noise on the microscopic level [17,18].
• The scaling method – a renormalization scheme facilitating the transcendence from the microscopic level to the macroscopic level.
In this review paper we will show how fractional motions emerge universally, on the macroscopic level, from general Langevin dynamics taking place on the microscopic level. Moreover, we will show how to determine the specific type of the macro-level fractional motion – Brownian motion, fractional Brownian motion, Lévy motion, or fractional Lévy motion – based on the details of the micro-level Langevin dynamics. Also, we will describe the very different macro-level statistical behaviors displayed by the different types of fractional motions.
In the physical sciences diffusion processes are usually analyzed via the Fokker–Planck equation [53], and anomalous diffusion processes are commonly explored via fractional generalizations of the Fokker–Planck equation, in which partial derivatives are replaced by fractional derivatives [54–57].² Both the Fokker–Planck equation and its fractional generalizations track the propagation of the marginal distributions of the processes investigated. In this review paper we employ probabilistic techniques that enable us to directly track the very trajectories of the processes investigated, rather than merely their marginal distributions. In fact, not even a single Fokker–Planck equation shall be used throughout the manuscript. Thus, this review paper presents an analytic approach to fractional motions which is entirely free of fractional Fokker–Planck equations. For a path-integral approach to fractional Brownian and Lévy motions the readers are referred to the forthcoming monograph [59].
The paper is organized as follows. Section 2 reviews in detail Langevin dynamics: the Langevin equation, the underlying noise, and the resulting Langevin motions and random walks. Section 3 reviews in detail the method of scaling: the notion of scaling, asymptotic analysis, and the scaling limits of Langevin motions and random walks. Fractional motions are established as the universal macroscopic-level scaling limits of microscopic-level Langevin dynamics, and an interim summary of the scaling-limit results is presented in Section 4. Section 5 explores the statistical and topological properties of fractional motions and classifies them according to their properties: Gauss and Lévy distributions, dispersion and correlation, self-similarity and continuity. The results are discussed in Section 6, and the detailed stochastic analyses of fractional motions are given in Section 8.
A note about notation: Throughout this paper ⟨ξ⟩ denotes the mathematical expectation of a real-valued random variable ξ.
2. Langevin dynamics
In this section we review Langevin dynamics: the Langevin equation, the underlying noise driving the Langevin equation, and the resulting Langevin motions and random walks.
2.1. The Langevin equation
The Langevin equation describes the stochastic dynamics of motions in potential wells and in random environments. This
equation was introduced by Paul Langevin in 1908 [51], and is one of the most fundamental stochastic differential equations
¹ We note that a closely related topic is the ubiquity of 1/f noise [48,49].
² We note that an alternative (yet closely related) approach to the fractional Fokker–Planck equation is the generalized master equation with a memory function [58].
in the physical sciences [1–3,52]. In a one-dimensional setting the Langevin equation is given by

Ẍ(t) = −r Ẋ(t) + N(t),    (1)

where: X(t) is the real-valued position of the motion at time t, r is a positive friction parameter (relaxation parameter), and N(t) is a perturbing random noise. Integrating the Langevin equation we obtain that the motion’s velocity is given by

Ẋ(t) = R(t − t₀) Ẋ(t₀) + ∫_{t₀}^{t} R(t − t′) N(t′) dt′    (2)

(t ≥ t₀), where R(τ) = exp(−rτ) (τ ≥ 0). Moreover, taking t₀ → −∞ we obtain that the motion’s velocity is given by the stochastic process

Ẋ(t) = ∫_{−∞}^{t} R(t − t′) N(t′) dt′    (3)

(−∞ < t < ∞). In what follows we term the stochastic process Ẋ = (Ẋ(t))_{−∞<t<∞} given by Eq. (3) the ‘‘Langevin velocity’’.
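To illustrate Eqs. (1)–(3): when the forcing N is a Gaussian white noise, the Langevin velocity relaxes to a stationary law with variance σ²/(2r). The following Euler–Maruyama sketch is ours, not the paper’s; the noise amplitude σ, the time step, and the path count are illustrative assumptions:

```python
import numpy as np

def langevin_velocity(r=1.0, sigma=1.0, dt=0.01, n_steps=1000,
                      n_paths=20000, seed=0):
    """Euler-Maruyama integration of Eq. (1): dv = -r v dt + sigma dW."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_paths)                    # all paths start at rest
    for _ in range(n_steps):
        v += -r * v * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return v

v = langevin_velocity()
print(v.mean(), v.var())   # mean ~ 0, variance ~ sigma^2 / (2 r) = 0.5
```

After a few multiples of the relaxation time 1/r the memory of the initial condition is lost, and the empirical variance settles at the stationary value σ²/(2r).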
In signal processing Eq. (3) represents a convolution filter which transforms the input signal N = (N(t))_{−∞<t<∞} to the output signal Ẋ = (Ẋ(t))_{−∞<t<∞}, using the impulse-response function R(τ) [60,61]. The output signal Ẋ of a convolution filter is a moving average of the input signal N, and the averaging is quantified by the impulse-response function R(τ). Moving averages are a fundamental statistical tool in the analysis of general time series [62], and are of particular importance in the analysis of financial time series [63]. The term ‘‘impulse-response function’’ stems from the following observation: A Dirac delta-function input signal applied at time t∗ – namely, the ‘‘impulse’’ input signal N(t) = δ(t − t∗) – results in a ‘‘response’’ output signal which is zero before time t∗, and is given by Ẋ(t) = R(t − t∗) at times t ≥ t∗.
From a physical perspective the impulse-response function R(τ) characterizes the relaxation of the Langevin velocity Ẋ with respect to the perturbing random noise N. Exponential impulse-response functions – induced by the Langevin equation (1) – characterize exponential relaxations (also known as Debye relaxations). Consequently, if the input signal N is a white noise then the resulting output signal Ẋ is the Ornstein–Uhlenbeck process [64,65], and if the input signal N is a ‘‘train of random shot impulses’’ then the resulting output signal Ẋ is a shot noise process [66–68]. On the other hand, non-exponential impulse-response functions characterize non-exponential relaxations [69–71]. For example, if the input signal N is a ‘‘train of random shot impulses’’, and the impulse-response functions are truncated power-laws, then the resulting output signals Ẋ are fractal shot noise processes [72,73]. Non-exponential relaxations are ubiquitously encountered in disordered systems [74], and are observed on many scales ranging from single molecules [75,76] to macroscopic viscoelasticity [77,78].
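As a numerical illustration of the two relaxation types, one can convolve a Poisson impulse train with an exponential kernel (classical shot noise) and with a truncated power-law kernel (fractal-type shot noise). This discretization is our own sketch; the unit intensity, Gaussian shocks, and power-law exponent 3/2 are illustrative assumptions. For the exponential kernel Campbell’s theorem gives the stationary variance λ⟨S²⟩∫₀^∞ R(τ)² dτ = 1/2:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.01, 1000.0
n = int(T / dt)

# Poisson impulse train of unit intensity: a shock S_k in a bin carries weight S_k/dt
impulses = np.zeros(n)
hits = rng.random(n) < dt                        # ~ one arrival per unit time
impulses[hits] = rng.standard_normal(hits.sum()) / dt

tau = np.arange(1500) * dt                       # kernel support up to tau = 15
R_exp = np.exp(-tau)                             # exponential (Debye) relaxation, r = 1
R_pow = (1.0 + tau) ** -1.5                      # truncated power-law relaxation

V_exp = dt * np.convolve(impulses, R_exp)[:n]    # classical shot noise
V_pow = dt * np.convolve(impulses, R_pow)[:n]    # fractal-type shot noise

print(V_exp.var())   # ~ 0.5, by Campbell's theorem
```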
In what follows we consider general relaxations – which are characterized by general impulse-response functions R (τ).
To fully grasp the profound difference between exponential and non-exponential relaxations let us introduce and observe the processes V_m = (V_m(t))_{−∞<t<∞} (m = 0, 1, 2, . . .) given by

V_m(t) = ∫_{−∞}^{t} R^(m)(t − t′) N(t′) dt′,    (4)

where R^(m)(τ) denotes the m-th derivative of the impulse-response function R(τ). Note that the process V₀ coincides with the Langevin velocity: V₀ = Ẋ. Straightforward integration by parts of Eq. (4) implies that

V̇_m(t) = V_{m+1}(t) + R^(m)(0) N(t)    (5)

(m = 0, 1, 2, . . .). Now, if the impulse-response function is an exponential (R(τ) = exp(−rτ)) then R^(1)(τ) = −r R(τ) and R^(0)(0) = 1, and hence we obtain that

V̇₀(t) = −r V₀(t) + N(t);    (6)

since the process V₀ coincides with the Langevin velocity Ẋ we further obtain that Eq. (6) coincides with the Langevin equation (1). On the other hand, if the impulse-response function is not an exponential then the stochastic differential equation (5) coupling together the processes V_m and V_{m+1} continues ‘‘ad infinitum’’ (rather than terminating at m = 0 as in the exponential case). We thus conclude that the dynamics of the Langevin velocity Ẋ are described by: (i) a one-dimensional stochastic differential equation when the impulse-response function is an exponential; (ii) an infinite-dimensional set of stochastic differential equations when the impulse-response function is not an exponential. This dimensionality gap highlights and underscores the dramatic difference between exponential and non-exponential relaxations.
2.2. Noise
We turn now to model the random noise N = (N(t))_{−∞<t<∞} that perturbs the Langevin dynamics described in Section 2.1. According to the Einstein–Smoluchowski paradigm [17,18], the noise process N represents the effect of the random environment on the motion. For example, if the motion is that of a pollen particle suspended in liquid – as in Brown’s historical experiment [15] – then the random collisions of the liquid molecules (colliding into the pollen particle) generate the noise process; in turn, the noise process drives the pollen particle, yielding the erratic motion observed by Brown. Following this example, the noise process can be modeled as a sequence of random shocks {S_k} impacting the motion at random times {t_k} (where k is an index labeling the shocks). Consequently, the noise process is given by

N(t) = Σ_k S_k δ(t − t_k)    (7)

(−∞ < t < ∞).
The random shocks {S_k} are considered a collection of independent and identically distributed (IID) random variables. Spatial symmetry implies that the random shocks are symmetric random variables: the random variable S_k is equal, in law, to the random variable −S_k. In what follows we set the random variable S to denote a generic shock (i.e., the random shocks {S_k} are IID copies of the random variable S), and set

F(θ) = 1 − ⟨cos(θS)⟩    (8)

(−∞ < θ < ∞). Note that the function F(θ) is continuous, vanishes at the origin (F(0) = 0), and is symmetric around the origin: F(θ) = F(−θ). As we shall momentarily show, the function F(θ) governs the process-distribution of the random noise N.
The random times {t_k} are considered a Poisson process – which is indeed the statistical model of ‘‘random times’’ [79–81]. Informally, the Poisson-process setting implies that the times {t_k} occur randomly and at a constant occurrence intensity λ. Namely, the probability that the infinitesimal interval (t, t + dt) will contain a single random time is λ dt, and the probability that the infinitesimal interval (t, t + dt) will not contain random times is 1 − λ dt (independently of all other infinitesimal intervals). The precise statistical definition of the Poisson-process setting is given by the following two rules [79]: (i) the number of random times occurring during the time interval (a, b) is a Poisson-distributed random variable with mean λ(b − a) (−∞ < a < b < ∞); (ii) the numbers of random times occurring during disjoint time intervals are independent random variables. With no loss of generality, we henceforth set the Poissonian intensity λ = 1. The random shocks {S_k} and the random times {t_k} are considered mutually independent sequences.
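Sampling this shock train is straightforward: unit-rate Poisson times are generated from IID exponential gaps, and each time carries an IID symmetric shock. A minimal sketch (the Gaussian shock law is an illustrative assumption; any symmetric law qualifies):

```python
import numpy as np

def sample_shock_train(T=10.0, seed=0):
    """Random times {t_k} (unit-rate Poisson) and shocks {S_k} on [0, T]."""
    rng = np.random.default_rng(seed)
    gaps = rng.exponential(1.0, size=int(3 * T) + 50)  # IID exponential inter-arrival gaps
    times = np.cumsum(gaps)
    times = times[times <= T]                          # Poisson times on [0, T]
    shocks = rng.standard_normal(times.size)           # IID symmetric shocks S_k
    return times, shocks

times, shocks = sample_shock_train()
print(times.size)   # Poisson-distributed count with mean T = 10
```

The pair (times, shocks) realizes the noise of Eq. (7); summing the shocks whose times fall in [0, t] realizes the random walk of Section 2.4.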
The random noise N resulting from the above construction is a compound Poisson process [79–81], and is the precise statistical model of the ‘‘train of random shot impulses’’ noted in Section 2.1 (in the context of shot noise). The process-distribution of the random noise N is characterized by the Fourier functional

⟨exp(i ∫_{−∞}^{∞} ϕ(t) N(t) dt)⟩ = exp(−∫_{−∞}^{∞} F(ϕ(t)) dt),    (9)

where ϕ(·) = ϕ(t) (−∞ < t < ∞) is an arbitrary test function for which the integral appearing on the right-hand side of Eq. (9) converges. The meaning of Fourier functionals is described below, and the derivation of Eq. (9) is given in Section 8.1 (the readers may further consult [79] and [81]). As noted above, the Fourier functional of Eq. (9) implies that the process-distribution of the random noise N is indeed governed by the function F(θ).
The Fourier functionals of random processes are the infinite-dimensional counterparts of the Fourier transforms of one-dimensional random variables, and will play a key role in this paper. The Fourier transform of a real-valued random variable ξ is the function ⟨exp(iϕξ)⟩, where −∞ < ϕ < ∞ is the corresponding one-dimensional Fourier variable. Analogously, the Fourier transform of a d-dimensional random vector ξ = (ξ₁, . . . , ξ_d) is the function ⟨exp(i Σ_{k=1}^{d} ϕ_k ξ_k)⟩, where ϕ = (ϕ₁, . . . , ϕ_d) is the corresponding d-dimensional Fourier variable (taking values in the d-dimensional Euclidean space). Somewhat informally, a random process ξ = (ξ(t))_{−∞<t<∞} can be considered an infinite-dimensional random vector, whose coordinates are parametrized by the continuous index −∞ < t < ∞ (rather than by the discrete index k = 1, . . . , d). Carrying on the aforementioned analogy, the Fourier transform of the random process ξ = (ξ(t))_{−∞<t<∞} should be of the form ⟨exp(i ∫_{−∞}^{∞} ϕ(t) ξ(t) dt)⟩, where the function ϕ(·) = ϕ(t) (−∞ < t < ∞) is the corresponding infinite-dimensional Fourier variable. Namely, in the passage from the d-dimensional setting to the infinite-dimensional setting the sum Σ_{k=1}^{d} ϕ_k ξ_k is replaced by the integral ∫_{−∞}^{∞} ϕ(t) ξ(t) dt. In the infinite-dimensional setting the infinite-dimensional Fourier variable ϕ(·) is often termed a ‘‘test function’’, and is commonly required to satisfy certain regularity conditions (such as the integrability condition ∫_{−∞}^{∞} F(ϕ(t)) dt < ∞ in the case of Eq. (9)).
2.3. Langevin motions
The Langevin velocity Ẋ constructed in Section 2.1 is the velocity of a Langevin motion X = (X(t))_{t≥0}. Specifically, assume that we start tracking the Langevin motion at time t∗ = 0, and that we set the position of the motion at time t∗ = 0 to be our origin: X(0) = 0. Then, the positions of the Langevin motion X are given by

X(t) = ∫₀^{t} Ẋ(t′) dt′    (10)

(t ≥ 0). Recall that the Langevin velocity Ẋ is driven by the random noise N, that its relaxation is quantified by the impulse-response function R(τ), and that it is given by Eq. (3). Set the function K(τ) (τ ≥ 0) to be the integral of the impulse-response function:

K(τ) = ∫₀^{τ} R(τ′) dτ′    (11)

(τ ≥ 0). Integrating Eq. (3) we obtain that the positions of the Langevin motion X are given by

X(t) = ∫_{−∞}^{0} [K(t − t′) − K(0 − t′)] N(t′) dt′ + ∫₀^{t} K(t − t′) N(t′) dt′    (12)

(t ≥ 0). Eq. (12) implies that the Langevin motion X is characterized by two functions:
• The function F (θ) – given by Eq. (8), and henceforth termed the ‘‘Fourier function’’ – which governs the process-distribution of the motion’s underlying random noise N.
• The function K (τ) – given by Eq. (11), and henceforth termed the ‘‘kernel function’’ – which governs the motion’s intrinsic convolution structure.
In what follows both the Fourier function F (θ) and the kernel function K (τ) will play focal roles.
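For example, the exponential impulse-response R(τ) = exp(−rτ) of Eq. (1) yields the kernel function K(τ) = (1 − exp(−rτ))/r, which saturates at the level 1/r. A quick numerical check of Eq. (11) (our own quadrature sketch; r = 2 is an arbitrary illustrative value):

```python
import numpy as np

r = 2.0
tau = np.linspace(0.0, 5.0, 100001)
R = np.exp(-r * tau)                            # exponential impulse-response

# K(tau) = integral_0^tau R(tau') dtau', via cumulative trapezoidal quadrature
dtau = tau[1] - tau[0]
K = np.concatenate(([0.0], np.cumsum((R[1:] + R[:-1]) / 2.0) * dtau))

K_exact = (1.0 - np.exp(-r * tau)) / r          # closed form
print(np.max(np.abs(K - K_exact)))              # quadrature error, ~1e-9
```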
2.4. Random walks
The random walk W = (W(t))_{t≥0} is a stochastic process whose velocity is the random noise N constructed in Section 2.2. Specifically, assume that we start tracking the random walk at time t∗ = 0, and that we set the position of the random walk at time t∗ = 0 to be our origin: W(0) = 0. Then, the positions of the random walk W are given by

W(t) = ∫₀^{t} N(t′) dt′    (13)

(t ≥ 0). Substituting Eq. (7) into Eq. (13), we obtain that the positions of the random walk W are given by

W(t) = Σ_{0 ≤ t_k ≤ t} S_k    (14)

(t ≥ 0). Eq. (14) implies that the random walk’s position W(t) at time t equals the sum of the shocks impacting it during the time interval [0, t]. Random walks constitute the most elemental model of stochastic dynamics in science and engineering [82–84].
In effect, the random walk W is a special case of the Langevin motion X constructed in Section 2.3. Specifically, the random walk W is a Langevin motion X with a Dirac delta-function impulse-response R(τ) = δ(τ). Indeed, the Dirac delta-function impulse-response R(τ) = δ(τ) yields the constant kernel function K(τ) = 1. In turn, substituting the constant kernel function K(τ) = 1 into Eq. (12) implies that the Langevin motion X coincides with the random walk W.
We note that the random walk W explains the meaning of the Fourier function F(θ). Let χ(t) denote the indicator function of the unit interval (0, 1). Then, setting ϕ(t) = θ · χ(t) in Eq. (9) we obtain that

⟨exp(iθ W(1))⟩ = exp(−F(θ))    (15)

(−∞ < θ < ∞). Namely, the Fourier function F(θ) is the negative logarithm of the Fourier transform of the random variable W(1), the position of the random walk W at time t = 1.
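Eq. (15) is easy to verify by Monte Carlo. With standard Gaussian shocks (an illustrative assumption) the Fourier function of Eq. (8) is F(θ) = 1 − exp(−θ²/2), so averaging cos(θW(1)) over realizations of W(1) should reproduce exp(−F(θ)):

```python
import numpy as np

rng = np.random.default_rng(2)
M, theta = 100000, 1.3

# W(1) = sum of a Poisson(1) number of IID standard Gaussian shocks (Eq. (14))
counts = rng.poisson(1.0, size=M)
W1 = np.array([rng.standard_normal(k).sum() for k in counts])

mc = np.cos(theta * W1).mean()          # <exp(i theta W(1))> is real, by shock symmetry
F = 1.0 - np.exp(-theta**2 / 2.0)       # F(theta) = 1 - <cos(theta S)> for S ~ N(0,1)
print(mc, np.exp(-F))                   # the two values should agree
```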
3. Scaling and scaling limits
The Langevin dynamics described in Section 2 take place on the microscopic level. Indeed, the random noise N constructed according to the Einstein–Smoluchowski paradigm models molecular collisions. On the macroscopic level these molecular collisions have very small magnitudes, yet they occur very frequently. In this section we review the method of scaling, which facilitates the transcendence from the microscopic level to the macroscopic level.
3.1. Scaling
In order to model the microscopic Langevin dynamics on the macroscopic level the Langevin motion X is ‘‘renormalized’’ as follows: we speed up the motion’s time by the factor n, and we rescale the motion’s positions by the factor 1/φ_X(n), where φ_X(n) is a scaling function which diverges to infinity: lim_{n→∞} φ_X(n) = ∞. This scaling yields a renormalized Langevin motion X_n = (X_n(t))_{t≥0} whose positions are given by

X_n(t) = (1/φ_X(n)) X(nt)    (16)
(t ≥ 0). In the case of the random walk W the aforementioned scaling yields a renormalized random walk W_n = (W_n(t))_{t≥0} whose positions are given by

W_n(t) = (1/φ_W(n)) W(nt)    (17)

(t ≥ 0), where φ_W(n) is a scaling function which diverges to infinity: lim_{n→∞} φ_W(n) = ∞. The renormalization of the random walk W further renormalizes the underlying random noise N. Indeed, combining together Eqs. (13) and (17) we obtain that
W_n(t) = ∫_0^t N_n(t′) dt′    (18)

(t ≥ 0), where the stochastic process N_n = (N_n(t))_{−∞<t<∞} is given by

N_n(t) = (n/φ_W(n)) N(nt)    (19)
(−∞ < t < ∞). The stochastic process N_n is the renormalization of the random noise N. In what follows we will investigate the scaling limit of the renormalized random noise N_n, and thereafter the scaling limit of the renormalized Langevin motion X_n, in the macroscopic limit n → ∞.
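The renormalization of Eq. (17) can be illustrated with a minimal Monte-Carlo sketch. Assuming finite-variance ±1 shocks and the scaling function φ_W(n) = √n (both illustrative choices, anticipating the finite-variance case analyzed below), the renormalized endpoint W_n(1) = W(n)/√n stabilizes to a unit-variance limit as n grows:

```python
import math
import random

rng = random.Random(1)

def W(n):
    """Discrete-time stand-in for the random walk: the sum of n i.i.d. +/-1 shocks."""
    return sum(rng.choice((-1, 1)) for _ in range(n))

def W_n(n):
    """Renormalized position W_n(1) = W(n)/phi_W(n) of Eq. (17), with the
    finite-variance scaling choice phi_W(n) = sqrt(n)."""
    return W(n) / math.sqrt(n)

samples = [W_n(400) for _ in range(2000)]
var = sum(x * x for x in samples) / len(samples)
print("sample variance of W_n(1):", round(var, 2))  # close to 1 for +/-1 shocks
```

Each ±1 shock has unit variance, so W_n(1) has variance exactly 1 for every n; the Monte-Carlo estimate fluctuates around that value.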
3.2. Regular variation
The asymptotic analysis to be carried out in this section is based on the notion of regular variation, which we now describe [85]. Consider a nonnegative-valued function Φ(x) defined on the nonnegative half-line (x ≥ 0). The function Φ(x) is termed regularly varying at the origin if the limit

L_0(x) = lim_{l→0} Φ(lx)/Φ(l)    (20)

exists for all x > 0. Analogously, the function Φ(x) is termed regularly varying at infinity if the limit

L_∞(x) = lim_{l→∞} Φ(lx)/Φ(l)    (21)

exists for all x > 0.
In what follows we use the shorthand notation L(x) to denote the limiting functions of Eqs. (20)–(21). It is straightforward to observe that if the limit function L(x) exists then it satisfies the factorization L(x_1 x_2) = L(x_1) L(x_2) for all x_1, x_2 > 0. In turn, this observation implies that the limit function L(x) is a power-law:

L(x) = x^ϵ    (22)

(x > 0), where ϵ is a real power termed ‘‘the exponent of regular variation’’ [85].
Regularly varying functions with exponent ϵ = 0 are termed slowly varying, and are of special importance. Indeed, a function Φ(x) is regularly varying with exponent ϵ if and only if it admits the representation

Φ(x) = x^ϵ Ψ(x)    (23)

(x ≥ 0), where the function Ψ(x) is slowly varying (both the regular and the slow variation holding, respectively, either at the origin (l → 0) or at infinity (l → ∞)). The class of slowly varying functions includes asymptotically constant functions, logarithms, powers of slowly varying functions, and logarithms of slowly varying functions [85]. Informally speaking, slowly varying functions are generalizations of constant functions, and regularly varying functions are generalizations of power-law functions.
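A quick numerical check of Eq. (21) for an assumed test function: Φ(x) = x^0.5 · log(e + x) is regularly varying at infinity with exponent ϵ = 0.5, since the logarithmic factor is slowly varying; the ratio Φ(lx)/Φ(l) drifts toward x^0.5 as l grows (slowly, precisely because of the logarithm):

```python
import math

def Phi(x, eps=0.5):
    """A function regularly varying at infinity with exponent eps:
    a power law times a slowly varying logarithmic factor (illustrative choice)."""
    return x ** eps * math.log(math.e + x)

def L(x, l):
    """The ratio of Eq. (21); as l -> infinity it approaches x**eps."""
    return Phi(l * x) / Phi(l)

for l in (1e2, 1e4, 1e8):
    print(l, round(L(2.0, l), 4))  # drifts down toward 2**0.5 ~ 1.4142
```

The slow convergence illustrates why slowly varying corrections are invisible in the scaling limit yet can dominate at any finite scale.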
3.3. Scaling limits of the renormalized noise
Combining together Eqs. (19) and (9) we obtain that the process-distribution of the renormalized random noise N_n is characterized by the Fourier functional

⟨exp(i ∫_{−∞}^∞ ϕ(t) N_n(t) dt)⟩ = exp(−∫_{−∞}^∞ F_n(ϕ(t)) dt),    (24)

where ϕ(·) is an arbitrary test function for which the integral appearing on the right-hand side of Eq. (24) converges, and where the Fourier function of the renormalized random noise N_n is given by

F_n(θ) = nF(θ/φ_W(n))    (25)

(−∞ < θ < ∞).
The Fourier function F_n(θ) of Eq. (25) can be rewritten in the form

F_n(θ) = [nF(1/φ_W(n))] · [F(θ/φ_W(n)) / F(1/φ_W(n))]    (26)

(−∞ < θ < ∞). Eq. (26) implies that a limiting Fourier function F_∞(θ) = lim_{n→∞} F_n(θ) exists and is nontrivial if and only if the following pair of conditions holds: (i) the limit lim_{n→∞} nF(1/φ_W(n)) exists and is positive; (ii) the Fourier function F(θ) is regularly varying at the origin (since the Fourier function F(θ) is symmetric around the origin, we can consider it along the nonnegative half-line (θ ≥ 0) alone, and thus indeed use the notion of regular variation). Regarding the first condition: since the Fourier function F(θ) is continuous and vanishes at the origin, we can set lim_{n→∞} nF(1/φ_W(n)) = 1. Regarding the second condition: denoting by α the exponent of regular variation of the Fourier function F(θ), and using the symmetry of this function, we obtain that the limiting Fourier function F_∞(θ) is given by the power-law

F_∞(θ) = θ^α    (27)
(θ ≠ 0). Consequently, we conclude that: The renormalized random noise N_n converges, in law, to a limiting random noise N_∞ = (N_∞(t))_{−∞<t<∞} whose process-distribution is characterized by the Fourier functional

⟨exp(i ∫_{−∞}^∞ ϕ(t) N_∞(t) dt)⟩ = exp(−∫_{−∞}^∞ F_∞(ϕ(t)) dt),    (28)

where ϕ(·) is an arbitrary test function for which the integral appearing on the right-hand side of Eq. (28) converges.
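The rescaling of Eq. (25) can be checked numerically in the finite-variance case. For the illustrative shock S = ±1 the microscopic Fourier function is F(θ) = 1 − ⟨cos(θS)⟩ = 1 − cos θ; choosing φ_W(n) = √(n/2), so that nF(1/φ_W(n)) → 1, the renormalized Fourier function converges to the power-law θ^α with α = 2:

```python
import math

def F(theta):
    """Microscopic Fourier function F(theta) = 1 - <cos(theta*S)> for the
    illustrative shock S = +/-1 (so <cos(theta*S)> = cos(theta))."""
    return 1.0 - math.cos(theta)

def F_n(theta, n):
    """Renormalized Fourier function F_n(theta) = n*F(theta/phi_W(n)) of Eq. (25),
    with phi_W(n) = sqrt(n/2) chosen so that n*F(1/phi_W(n)) -> 1."""
    return n * F(theta / math.sqrt(n / 2.0))

for n in (10, 1000, 100000):
    print(n, round(F_n(1.5, n), 4))  # approaches theta**2 = 1.5**2 = 2.25
```

For small arguments F(θ) ≈ θ²/2, which is exactly the regular variation at the origin with exponent α = 2 invoked in condition (ii).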
The aforementioned regular-variation analysis was in terms of the Fourier function F(θ). In turn, the Fourier function F(θ) is induced by the random variable S – which represents the magnitude of the generic random shock (recall Eq. (8)). Let the functions P_≤(x) = Pr(S ≤ x) and P_>(x) = Pr(S > x) (x ≥ 0) denote, respectively, the cumulative distribution function and the tail distribution function of the random variable S. An analysis detailed in Section 8.2 asserts that:

• The admissible range of the exponent of the limiting Fourier function F_∞(θ) is 0 < α ≤ 2.
• The Fourier function F(θ) is regularly varying at the origin with exponent α (0 < α < 2) if and only if the tail distribution function P_>(x) is regularly varying at infinity with exponent −α (0 < α < 2).
• The Fourier function F(θ) is regularly varying at the origin with exponent α = 2 if and only if the function

Q(x) = ∫_0^x u² P_≤(du)    (29)

(x ≥ 0) is slowly varying at infinity.

These assertions imply that there is a marked difference between the exponent range 0 < α < 2 and the exponent value α = 2. In what follows we will see that the distinction between the exponent range 0 < α < 2 and the exponent value α = 2 is indeed dramatic. Note that if the generic random shock S has a finite variance (⟨S²⟩ < ∞) then the function Q(x) is asymptotically constant—and is hence slowly varying at infinity. This observation implies that the exponent value α = 2 has a very wide ‘‘domain of attraction’’ including all the generic random shocks S with finite variance. On the other hand, each exponent value α (0 < α < 2) has a rather narrow ‘‘domain of attraction’’—as it requires a very specific regular variation condition to hold.
3.4. Scaling limits of the renormalized Langevin motions
Combining together Eqs. (16) and (12) we obtain that the positions of the renormalized Langevin motion X_n are given by

X_n(t) = (1/φ_X(n)) {∫_{−∞}^0 [K(nt − t′) − K(0 − t′)] N(t′) dt′ + ∫_0^{nt} K(nt − t′) N(t′) dt′}    (30)

(t ≥ 0). Applying the change of variables t′ → nt′, and using the definition of the renormalized random noise N_n (given by Eq. (19)), the positions of the renormalized Langevin motion X_n can be rewritten in the form

X_n(t) = [K(n)φ_W(n)/φ_X(n)] · {∫_{−∞}^0 [(K(n(t − t′)) − K(n(0 − t′)))/K(n)] N_n(t′) dt′ + ∫_0^t [K(n(t − t′))/K(n)] N_n(t′) dt′}    (31)

(t ≥ 0).
From Eq. (31) it is evident that the renormalized Langevin motion X_n converges, in law, to a nontrivial limiting Langevin motion X_∞ = (X_∞(t))_{t≥0} if and only if the following triplet of conditions holds: (i) the renormalized random noise N_n converges, in law, to a limiting random noise N_∞; (ii) the limit lim_{n→∞} K(n)φ_W(n)/φ_X(n) exists and is positive; (iii) the kernel function K(τ) is regularly varying at infinity. The first condition was analyzed in detail in Section 3.3. Regarding the second condition: we set φ_X(n) = K(n)φ_W(n) and obtain that lim_{n→∞} K(n)φ_W(n)/φ_X(n) = 1. Regarding the third condition: denoting by β the exponent of regular variation of the kernel function K(τ), we obtain that the limiting kernel function K_∞(τ) = lim_{n→∞} K(nτ)/K(n) is given by the power-law

K_∞(τ) = τ^β    (32)
(τ > 0). Consequently, we conclude that: The renormalized Langevin motion X_n converges, in law, to a limiting Langevin motion X_∞ whose positions are given by

X_∞(t) = ∫_{−∞}^0 [K_∞(t − t′) − K_∞(0 − t′)] N_∞(t′) dt′ + ∫_0^t K_∞(t − t′) N_∞(t′) dt′    (33)

(t ≥ 0), where N_∞ is a limiting random noise whose process-distribution is characterized by the Fourier functional of Eq. (28).
Combining together Eqs. (28) and (33), an integral calculation implies that the Fourier transforms of the positions of the limiting Langevin motion X_∞ are given by

⟨exp(iθX_∞(t))⟩ = exp(−c_{α,β} θ^α t^{1+αβ})    (34)

(t ≥ 0; −∞ < θ < ∞), where

c_{α,β} = ∫_0^∞ |(1 + u)^β − u^β|^α du + ∫_0^1 u^{αβ} du.    (35)
Differentiating Eq. (34) with respect to the variable θ at the origin (θ = 0) implies that:

• In the exponent range 0 < α ≤ 1 the positions of the limiting Langevin motion X_∞ have ill-defined means.
• In the exponent range 1 < α < 2 the positions of the limiting Langevin motion X_∞ have zero means and infinite variances.
• At the exponent value α = 2 the positions of the limiting Langevin motion X_∞ have zero means and finite variances.

Thus, the distinction between the exponent range 0 < α < 2 and the exponent value α = 2 is indeed dramatic. An analysis of the integral appearing in Eq. (35) further implies that: (i) in the exponent range 0 < α ≤ 1 the integral is convergent if and only if the exponent β is in the range −1/α < β ≤ 0; (ii) in the exponent range 1 < α ≤ 2 the integral is convergent if and only if the exponent β is in the range −1/α < β < 1 − 1/α.
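The coefficient c_{α,β} of Eq. (35) can be evaluated by crude numerical quadrature. The sketch below simply truncates the infinite integral (so the result is a rough approximation); the second integral is evaluated in closed form as 1/(1 + αβ), which is valid whenever αβ > −1. At β = 0 the first integral vanishes and c_{α,0} = 1 exactly, which the sketch reproduces:

```python
def c_alpha_beta(alpha, beta, upper=1e4, n=200000):
    """Numerical sketch of Eq. (35):
    integral_0^inf |(1+u)**beta - u**beta|**alpha du + integral_0^1 u**(alpha*beta) du.
    The first integral is truncated at `upper` (midpoint rule, crude);
    the second equals 1/(1 + alpha*beta) in closed form."""
    du = upper / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        total += abs((1.0 + u) ** beta - u ** beta) ** alpha * du
    return total + 1.0 / (1.0 + alpha * beta)

print(round(c_alpha_beta(2.0, 0.0), 6))   # beta = 0: first integral vanishes, c = 1
print(round(c_alpha_beta(2.0, 0.25), 3))  # a super-diffusive case, 0 < beta < 1 - 1/alpha
```

For β in the admissible ranges quoted above the truncated tail decays integrably, so the truncation error is small; outside those ranges the quadrature would grow with `upper`, mirroring the divergence of Eq. (35).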
The sign of the exponent β determines the monotonicity of the limiting kernel function K_∞(τ): monotone decreasing in the exponent range β < 0, constant at the exponent value β = 0, and monotone increasing in the exponent range β > 0. Classifying the limiting Langevin motion X_∞ according to the monotonicity of the limiting kernel function K_∞(τ), we obtain the three following admissible ranges of exponents:

• Decreasing limiting kernel function: 0 < α ≤ 2 and −1/α < β < 0.
• Constant limiting kernel function: 0 < α ≤ 2 and β = 0.
• Increasing limiting kernel function: 1 < α ≤ 2 and 0 < β < 1 − 1/α.
There is a marked difference between the exponent value β = 0 – which represents the class of slowly varying kernel functions K(τ) – and nonzero exponent values β ≠ 0. The significance and uniqueness of the class of slowly varying kernel functions stems from the fact that this class characterizes the scenario in which the limiting Langevin motion X_∞ is a random walk. Indeed, consider the limiting random walk W_∞ = (W_∞(t))_{t≥0} whose positions are given by

W_∞(t) = ∫_0^t N_∞(t′) dt′    (36)

(t ≥ 0). Then, it follows from Eq. (33) that the limiting Langevin motion X_∞ coincides with the limiting random walk W_∞ if and only if β = 0. We note that if the impulse-response function R(τ) is integrable on the positive half-line (0, ∞) then the kernel function K(τ) is asymptotically constant – and is hence slowly varying. This observation implies that the exponent value β = 0 has a very wide ‘‘domain of attraction’’ including all the integrable impulse-response functions. On the other hand, each nonzero exponent value β has a rather narrow ‘‘domain of attraction’’ – as it requires a very specific regular variation condition to hold.
4. Interim summary
In this section – following the Langevin dynamics introduced in Section 2, and the scaling limits obtained in Section 3 –
we present an interim summary and discussion of the constructions and results established so far.
Noise. The underlying process – which drives Langevin motions and random walks on both the microscopic and macroscopic levels – is a random noise N. The process-distribution of the random noise N is characterized by a Fourier functional admitting the form

⟨exp(i ∫_{−∞}^∞ ϕ(t) N(t) dt)⟩ = exp(−∫_{−∞}^∞ F(ϕ(t)) dt),    (37)

where F(θ) (−∞ < θ < ∞) is the corresponding Fourier function, and where ϕ(·) is an arbitrary test function for which the integral appearing on the right-hand side of Eq. (37) converges. The Fourier function F(θ) governs the process-distribution of the random noise N.
Langevin Motions. A Langevin motion X driven by the random noise N is a stochastic process whose positions are given by

X(t) = ∫_{−∞}^0 [K(t − t′) − K(0 − t′)] N(t′) dt′ + ∫_0^t K(t − t′) N(t′) dt′    (38)

(t ≥ 0), where K(τ) (τ > 0) is the corresponding kernel function. The kernel function K(τ) governs the convolution structure of the Langevin motion X. A straightforward calculation using Eq. (37) implies that the Fourier transforms of the positions of the Langevin motion X are given by

⟨exp(iθX(t))⟩ = exp(−∫_0^∞ F(θ[K(u + t) − K(u)]) du − ∫_0^t F(θK(u)) du)    (39)

(t ≥ 0; −∞ < θ < ∞).
Random Walks. The random walk W driven by the random noise N is a stochastic process whose positions are given by

W(t) = ∫_0^t N(t′) dt′    (40)

(t ≥ 0). The random walk W is a special case of the Langevin motion X – which is obtained by the constant kernel function K(τ) = 1. Eq. (37) implies that the Fourier transforms of the positions of the random walk W are given by

⟨exp(iθW(t))⟩ = exp(−F(θ)t)    (41)

(t ≥ 0; −∞ < θ < ∞).
Increments. Eq. (37) implies that the random noise N is a stationary stochastic process. Indeed, it follows from Eq. (37) that the random noise N is invariant with respect to time shifts of the form t → t + s, where s is an arbitrary real-valued shift parameter. The stationarity of the random noise N, combined together with the convolution structure of the Langevin motion X, implies that the Langevin motion X is a stochastic process with stationary increments. The increment of the Langevin motion X corresponding to the time interval (t, t + s] is the motion's displacement along this time interval: X(t + s) − X(t) (t, s ≥ 0). The ‘‘stationary increments’’ property means that the motion's displacement X(t + s) − X(t) depends only on the length of the time interval (s), and does not depend on the starting point of the time interval (t). In other words, the ‘‘stationary increments’’ property means that the increment X(t + s) − X(t) is equal, in law, to the position X(s)—whose Fourier transform is given by Eq. (39). Eq. (37) further implies that the random walk W is a stochastic process with independent increments. The ‘‘independent increments’’ property means that increments of the random walk W – which correspond to disjoint time intervals – are independent random variables. Since the random walk W is a process with stationary and independent increments, its process-distribution is characterized by the distributions of its positions—whose Fourier transforms are given by Eq. (41). A detailed explanation of the aforementioned ‘‘stationary increments’’ and ‘‘independent increments’’ properties is given in Section 8.3.
Markov property. A stochastic process Y = (Y(t))_{t≥0} is said to be Markov if – given its position Y(t) at a present time t – its future positions Y(t + s) (s ≥ 0) depend only on its present position Y(t), and do not depend on the trajectory of the process up to time t [1–3]. Markov processes are the stochastic counterparts of deterministic motions whose dynamics are governed by ordinary differential equations. The ‘‘independent increments’’ property of the random walk W renders it a Markov process. On the other hand, the Langevin motion X – except for the random-walk case – is not a Markov process. Somewhat informally we can state that: The Langevin motion X has a memory of its past, whereas the random walk W has only a memory of its present.
Microscopic and Macroscopic Levels. A Langevin motion X is governed by its Fourier function F(θ) and kernel function K(τ) – the former characterizing the process-distribution of the motion's underlying noise, and the latter characterizing the
motion's intrinsic convolution structure. On the microscopic level the Fourier and kernel functions are given, respectively, by

F(θ) = 1 − ⟨cos(θS)⟩,
K(τ) = ∫_0^τ R(τ′) dτ′.    (42)

The random variable S represents the generic random shock impacting the motion, and the impulse-response function R(τ) (τ > 0) represents the motion's relaxation following the random shocks impacting it. On the macroscopic level the Fourier and kernel functions are given, respectively, by the power-laws

F(θ) = θ^α,
K(τ) = τ^β.    (43)
Classified according to the monotonicity of the power-law kernel function, the admissible exponent ranges are: (i) monotone decreasing kernel: 0 < α ≤ 2 and −1/α < β < 0; (ii) constant kernel: 0 < α ≤ 2 and β = 0; (iii) monotone increasing kernel: 1 < α ≤ 2 and 0 < β < 1 − 1/α.
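The microscopic-to-macroscopic passage for the kernel can be illustrated directly from Eq. (42). For the illustrative exponential impulse-response R(τ) = e^{−τ}, the kernel is K(τ) = 1 − e^{−τ}, which is asymptotically constant and hence slowly varying (β = 0):

```python
import math

def K(tau, rate=1.0):
    """Kernel K(tau) = integral_0^tau R(tau') dtau' of Eq. (42) for the
    illustrative exponential relaxation R(tau) = rate * exp(-rate * tau)."""
    return 1.0 - math.exp(-rate * tau)

# Slow variation at infinity (exponent beta = 0): K(l*x)/K(l) -> 1 for every x > 0
for l in (1.0, 10.0, 100.0):
    print(l, round(K(l * 5.0) / K(l), 6))
```

The ratio approaches 1 rapidly, so on the macroscopic level this Langevin motion is indistinguishable from a random walk, as discussed above.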
Universality. On the microscopic level both the Fourier function F(θ) and the kernel function K(τ) are general functions with infinitely many ‘‘degrees of freedom’’—those of the Fourier function characterized by the distribution of the generic random shock S, and those of the kernel function characterized by the shape of the impulse-response function R(τ). On the other hand, on the macroscopic level the Fourier function F(θ) and the kernel function K(τ) are power-laws with a single ‘‘degree of freedom’’—that of the Fourier function characterized by the exponent α, and that of the kernel function characterized by the exponent β. Thus, in the transcendence from the microscopic level to the macroscopic level universality emerges: the infinitely many ‘‘degrees of freedom’’ allowed on the microscopic level collapse to two ‘‘degrees of freedom’’ allowed on the macroscopic level. These two ‘‘degrees of freedom’’ – parametrized by the exponents α and β – determine the power-law structures of the Fourier and kernel functions emerging on the macroscopic level. Consequently, on the macroscopic level the Fourier transforms of the positions of the Langevin motion X are given by

⟨exp(iθX_∞(t))⟩ = exp(−c_{α,β} θ^α t^{1+αβ})    (44)

(t ≥ 0; −∞ < θ < ∞), where c_{α,β} is a constant depending on the exponents α and β.
Domains of Attraction I. For each exponent 0 < α < 2 the domain of attraction of the power-law Fourier function F(θ) = θ^α is rather narrow—as the distribution of the generic random shock S has to satisfy a particular regular variation condition. On the other hand, at the exponent value α = 2 the domain of attraction of the quadratic Fourier function F(θ) = θ² is very wide—as it includes the class of generic random shocks S with a finite variance. The exponent value α = 2 is of special importance—as it characterizes the unique macroscopic-level scenario in which the positions of the Langevin motion X have finite variances.

Domains of Attraction II. For each exponent β ≠ 0 the domain of attraction of the power-law kernel function K(τ) = τ^β is rather narrow—as the underlying impulse-response function R(τ) has to satisfy a particular regular variation condition. On the other hand, at the exponent value β = 0 the domain of attraction of the constant kernel function K(τ) = 1 is very wide—as it includes the class of integrable impulse-response functions. The exponent value β = 0 is of special importance—as it characterizes the unique macroscopic-level scenario in which the Langevin motion X coincides with the random walk W. Consequently, the exponent value β = 0 also characterizes the unique macroscopic-level scenario in which the Langevin motion X has independent increments and is Markov.
Langevin Motions vs. Random Walks. On the microscopic level Langevin motions and random walks are two significantly different models of stochastic dynamics. However, on the macroscopic level these different modeling approaches may – under a certain condition – yield identical stochastic dynamics. Indeed, the necessary and sufficient condition ensuring the macroscopic coincidence of Langevin motions and random walks is the slow variation of the kernel function K(τ) of the microscopic-level Langevin motions. In particular, integrable impulse-response functions R(τ) of microscopic-level Langevin motions meet the slow variation condition. Consequently, microscopic-level Langevin motions with exponential and stretched-exponential relaxations [86–90] yield random walks on the macroscopic level.
5. Fractional motions
As established so far, transcending from the microscopic level to the macroscopic level gives rise to ‘‘fractional motions’’: Langevin motions with power-law Fourier and kernel functions. More specifically, in what follows we refer to a stochastic process X = (X(t))_{t≥0} as a ‘‘fractional motion with Noah exponent α and Joseph exponent γ’’ if it is a macroscopic-level Langevin motion X with a power-law Fourier function F(θ) = θ^α (−∞ < θ < ∞), and with a power-law kernel function K(τ) = τ^{γ/α} (τ > 0). The fractional noise ∆X = (∆X(t))_{t=0}^∞ corresponding to the fractional motion X is the stationary sequence of the motion's consecutive unit-length increments:

∆X(t) = X(t + 1) − X(t)    (45)

(t = 0, 1, 2, . . .).
Table 1
Basic classification of fractional motions.

                               Independent increments    Dependent increments
Finite-variance increments     Brownian motion           Fractional Brownian motion
                               (α = 2, γ = 0)            (α = 2, γ ≠ 0)
Infinite-variance increments   Lévy motion               Fractional Lévy motion
                               (α ≠ 2, γ = 0)            (α ≠ 2, γ ≠ 0)
Mandelbrot and Wallis introduced the term ‘‘Noah effect’’ to describe wildly fluctuating stochastic processes, and introduced the term ‘‘Joseph effect’’ to describe stationary stochastic processes with long-ranged correlations [22]. The etymology of the term ‘‘Noah effect’’ stems from the biblical story of Noah's great flood: ‘‘...were all the fountains of the great deep broken up, and the windows of heaven were opened. And the rain was upon the earth forty days and forty nights.’’ (Genesis, 7: 11–12). The etymology of the term ‘‘Joseph effect’’ stems from the biblical story of Joseph's prophecy: ‘‘...there came seven years of great plenty throughout the land of Egypt. And there shall arise after them seven years of famine...’’ (Genesis, 41: 29–30).

In this section we will show that the exponent α governs the fluctuations of fractional motions, and that the exponent γ governs the correlations of fractional noises. For this reason we term α the ‘‘Noah exponent’’, and term γ the ‘‘Joseph exponent’’. Classified according to the sign of the Joseph exponent, the admissible exponent ranges are: (i) negative Joseph exponent: 0 < α ≤ 2 and −1 < γ < 0; (ii) zero Joseph exponent: 0 < α ≤ 2 and γ = 0; (iii) positive Joseph exponent: 1 < α ≤ 2 and 0 < γ < α − 1.
Fractional motions are categorized into the four following classes:

• Brownian motion [1–3]. This class has finite-variance independent increments, and is characterized by the Noah and Joseph exponents α = 2 and γ = 0.
• Lévy motion [23,24]. This class has infinite-variance independent increments, and is characterized by the Noah and Joseph exponents α ≠ 2 and γ = 0.
• Fractional Brownian motion [34,35]. This class has finite-variance dependent increments, and is characterized by the Noah and Joseph exponents α = 2 and γ ≠ 0.
• Fractional Lévy motion [36,37]. This class has infinite-variance dependent increments, and is characterized by the Noah and Joseph exponents α ≠ 2 and γ ≠ 0.

The aforementioned classification of fractional motions is summarized in Table 1. In this section we explore the statistical and topological properties of fractional motions. We will show that fractional motions belonging to different classes exhibit profoundly different behaviors.
5.1. Gaussdistributed increments
Consider the Noah exponent α = 2. Eq. (44) implies that at this exponent value the Fourier transforms of the Langevin-motion positions and increments admit the functional form

f̂_2(θ) = exp(−cθ²)    (46)

(−∞ < θ < ∞; c being an arbitrary positive coefficient). Eq. (46) is the Fourier transform of the Gauss distribution, and its Fourier inversion yields the Gauss probability density function

f_2(x) = (1/√(4πc)) exp(−x²/(4c))    (47)

(−∞ < x < ∞). A detailed derivation of the Fourier inversion of the Fourier transform of Eq. (46) is given in Section 8.4. The Gauss probability density function of Eq. (47) attains the famous ‘‘bell shape’’.

The Gauss distribution has finite moments of all orders. Consequently, the increments of both Brownian motion and fractional Brownian motion have finite variances and thus do not display the Noah effect. Moreover, the probability tails of the Gauss distribution are bounded from above by Chernoff's large-deviations bound

∫_{x>m} f_2(x) dx ≤ 2 exp(−m²/(4c))    (48)

(m > 0) [91,92]. Eq. (48) implies that the probability that the Gauss distribution will display a fluctuation with magnitude greater than m follows a super-exponential decay in the variable m. This super-exponential decay is of prime importance, as it implies that the increments of both Brownian motion and fractional Brownian motion are highly unlikely to display extreme fluctuations—a phenomenon termed by Mandelbrot ‘‘mild randomness’’ [20,21,93].
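This ‘‘mild randomness’’ can be verified directly: the exact two-sided tail of the density of Eq. (47), which has variance 2c and is expressible through the complementary error function, never exceeds the super-exponential bound 2 exp(−m²/(4c)) of Eq. (48). A sketch with the illustrative choice c = 1/2 (unit variance):

```python
import math

def gauss_tail(m, c=0.5):
    """Exact two-sided tail of the Gauss density of Eq. (47), which has
    variance 2c: P(|X| > m) = erfc(m / sqrt(4c))."""
    return math.erfc(m / math.sqrt(4.0 * c))

def chernoff_bound(m, c=0.5):
    """The large-deviations bound of Eq. (48): 2 * exp(-m**2 / (4c))."""
    return 2.0 * math.exp(-m * m / (4.0 * c))

for m in (1.0, 2.0, 4.0):
    print(m, gauss_tail(m) <= chernoff_bound(m))  # prints True: the bound holds
```

The exact tail decays even faster than the bound, so large fluctuations of Gaussian increments are effectively unobservable.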
5.2. Lévydistributed increments
Consider Noah exponents in the range 0 < α < 2. Eq. (44) implies that in this exponent range the Fourier transforms of the Langevin-motion positions and increments admit the functional form

f̂_α(θ) = exp(−cθ^α)    (49)

(−∞ < θ < ∞; c being an arbitrary positive coefficient). Eq. (49) is the Fourier transform of the Lévy distribution [25,26,94–97]. Except for the exponent value α = 1 there is no closed-form inversion of the Fourier transform of Eq. (49). Indeed, in general the Lévy probability density function is represented in the form of slowly converging power series. At the exponent value α = 1 Eq. (49) is the Fourier transform of the Cauchy distribution, and its Fourier inversion yields the Cauchy probability density function

f_1(x) = c / [π(c² + x²)]    (50)

(−∞ < x < ∞). A detailed derivation of the Fourier inversion of the Fourier transform of Eq. (49) is given in Section 8.4. Analogous to the Gauss probability density function, the Cauchy probability density function also attains the famous ‘‘bell shape’’. However – in contrast to the Gauss distribution – the Cauchy distribution has no converging moments.

The Lévy distribution has infinite variance. Consequently, the increments of both Lévy motion and fractional Lévy motion display the Noah effect [22]. The mean of the Lévy distribution is ill-defined in the exponent range 0 < α ≤ 1, and is zero in the exponent range 1 < α < 2. Moreover, the probability tails of the Lévy distribution admit the power-law asymptotics

∫_{x>m} f_α(x) dx ≈ 1/m^α    (51)

(m → ∞). The detailed derivation of Eq. (51) is given in Section 8.5. Eq. (51) implies that the probability that the Lévy distribution will display a fluctuation with magnitude greater than m follows a power-law decay in the variable m—a statistical behavior termed ‘‘heavy tails’’ or ‘‘fat tails’’ [98]. This power-law decay is of prime importance, as it implies that the increments of both Lévy motion and fractional Lévy motion are likely to display extreme fluctuations—a phenomenon termed by Mandelbrot ‘‘wild randomness’’ [20,21,93].

Random motions exhibiting Lévy-distributed fluctuations are prevalent across the sciences. Documented examples of Lévy-distributed fluctuations include anomalous transport [99,100], search processes [8,101], human travel [102,103], light scattering [104], plasma [105], solar wind [106], solar flares [107], and price processes in finance [13]. Applications of Lévy fluctuations range from ground-water and surface-water hydrology [108–113], to medical ultrasound [114], and to the clean-up of images from the Hubble space telescope [115]. Highly efficient numerical algorithms for the computation of the Lévy probability density function are given in [116,117], and highly efficient Monte-Carlo algorithms for the simulation of Lévy-distributed random variables are given in [118].
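One standard Monte-Carlo recipe for simulating symmetric Lévy-distributed variables (not necessarily the algorithm of [118]) is the Chambers–Mallows–Stuck transform. The sketch below uses it for the symmetric case and then checks the heavy-tail behavior of Eq. (51) empirically; the sample size, exponent, and threshold are arbitrary illustrative choices:

```python
import math
import random

def symmetric_stable(alpha, rng):
    """Sample a symmetric alpha-stable (Levy) variable, alpha in (0, 2],
    via the standard Chambers-Mallows-Stuck transform."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    e = rng.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(u)  # the Cauchy case, Eq. (50)
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))

rng = random.Random(7)
samples = [symmetric_stable(1.5, rng) for _ in range(50000)]
# Heavy tails, Eq. (51): a non-negligible fraction of samples exceeds a large
# threshold, in sharp contrast with the Gaussian case
frac = sum(1 for x in samples if abs(x) > 10.0) / len(samples)
print("fraction with |X| > 10:", frac)
```

For a Gaussian sample the corresponding fraction would be essentially zero; the power-law tail of the Lévy distribution keeps it visibly positive.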
5.3. Dispersion
The most common approach to quantitatively measure the dispersion of random motions with finite variances is via their mean square displacements. Namely, if Y = (Y(t))_{t≥0} is a random motion with zero means and finite variances, then its mean square displacements are given by the function ⟨Y(t)²⟩ (t ≥ 0). More generally, the dispersion of a motion Y with finite variances can be measured via its variance function V_Y(t) = ⟨Y(t)²⟩ − ⟨Y(t)⟩² (t ≥ 0). In the context of random motions with infinite variances the variance function is obviously ill-defined and thus cannot be applied. However, the notion of ultra diffusion provides a natural and useful generalization of the variance function [119].

A random motion Y is said to be an ultra diffusion if the Fourier transforms of its positions admit the functional form

⟨exp(−iθY(t))⟩ = exp(−F_Y(θ) · D_Y(t)),    (52)

where [119]: (i) F_Y(θ) is a ‘‘Fourier function’’ depending on the Fourier variable alone (−∞ < θ < ∞); (ii) D_Y(t) is a ‘‘dispersion function’’ depending on the time variable alone (t ≥ 0).

In case the random motion Y has finite variances, a double differentiation of Eq. (52) – with respect to the Fourier variable – at the origin (θ = 0) yields

V_Y(t) = F_Y″(0) · D_Y(t)    (53)

(t ≥ 0). With no loss of generality we can set F_Y″(0) = 1, and thus obtain that in the finite-variance case the variance function V_Y(t) and the dispersion function D_Y(t) coincide: V_Y(t) = D_Y(t) (t ≥ 0). However, in contrast to the variance function V_Y(t), the well-definedness of the dispersion function D_Y(t) is not limited to the finite-variance case. We conclude that: In the context of ultra diffusion processes the dispersion function D_Y(t) is a generalization of the variance function V_Y(t), and its applicability extends to the case of infinite variances.
Consider now the fractional motion X. Eq. (44) implies that the fractional motion X is an ultra diffusion with power-law Fourier function F_X(θ) = θ^α, and with power-law dispersion function

D_X(t) = c_{α,γ} t^{1+γ}    (54)

(where c_{α,γ} is a coefficient depending on the exponents α and γ). Consequently, we obtain that:

• In the range of negative Joseph exponents −1 < γ < 0 fractional motions are ‘‘sub-diffusive’’—as their dispersion grows sublinearly.
• At the zero Joseph exponent γ = 0 fractional motions are ‘‘diffusive’’ random walks—as their dispersion grows linearly.
• In the range of positive Joseph exponents 0 < γ < α − 1 fractional motions are ‘‘super-diffusive’’—as their dispersion grows superlinearly.

The dispersion function introduced in this section facilitates the generalization of the notions of diffusion, sub-diffusion, and super-diffusion to settings in which the underlying variances are infinite. Random motions exhibiting sub-diffusive and super-diffusive dispersions are ubiquitously observed in the sciences [38–41]. Examples of sub-diffusive dispersions include the propagation of contaminants in geological and hydrological systems [9,10], the transport of charge carriers in amorphous semiconductors [42–44], and the movement of proteins in intracellular media [120,121]. Examples of super-diffusive dispersions include the trajectories of tracers in turbulent flows [4,5], the search patterns of foraging animals [6–8], and active intracellular transport [122]. Some relevant references on sub-diffusive and super-diffusive behaviors include, respectively, [123–125] and [126–129].
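The dispersion exponent 1 + γ of Eq. (54) can be read off the log–log slope of dispersion versus time. A Monte-Carlo sketch for the diffusive case γ = 0 (a ±1 random walk, with illustrative sample sizes), where the fitted slope should be close to 1:

```python
import math
import random

rng = random.Random(3)

def displacement(t_steps):
    """Position of a discrete +/-1 random walk after t_steps steps
    (the diffusive case, Joseph exponent gamma = 0)."""
    return sum(rng.choice((-1, 1)) for _ in range(t_steps))

def dispersion(t_steps, trials=2000):
    """Monte-Carlo estimate of the dispersion <X(t)**2> at time t_steps."""
    return sum(displacement(t_steps) ** 2 for _ in range(trials)) / trials

times = [16, 64, 256]
disps = [dispersion(t) for t in times]
# Growth exponent 1 + gamma from the log-log slope; ~1 for a diffusive walk
slope = ((math.log(disps[-1]) - math.log(disps[0]))
         / (math.log(times[-1]) - math.log(times[0])))
print("estimated growth exponent:", round(slope, 2))
```

A slope below 1 would signal sub-diffusion (γ < 0) and a slope above 1 super-diffusion (γ > 0), in line with the classification above.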
5.4. Correlation
The ‘‘stationary increments’’ property of fractional motions implies that fractional noises are stationary processes. The most common approach to quantitatively measure the correlation structure of stationary processes with finite variances is via their autocovariance functions. Namely, if Z = (Z(t))_{t=0}^∞ is a stationary process with finite variances then its autocovariance function is C_Z(l) = ⟨Z(0) Z(l)⟩ − ⟨Z(0)⟩⟨Z(l)⟩, where l is the ‘‘lag variable’’ (l = 1, 2, . . .).
Consider a random motion Y = (Y(t))_{t≥0}, and let ∆Y = (∆Y(t))_{t=0}^∞ be the corresponding ‘‘noise sequence’’ of consecutive unit-length increments: ∆Y(t) = Y(t + 1) − Y(t) (t = 0, 1, 2, . . .). Clearly, if the random motion Y has stationary increments then the noise sequence ∆Y is a stationary process, and if the random motion Y has finite variances then so does the noise sequence ∆Y. Moreover, if the random motion Y has stationary increments and finite variances then a covariance calculation yields
C_{∆Y}(l) = (1/2) [V_Y(l − 1) − 2 V_Y(l) + V_Y(l + 1)]    (55)

(l = 1, 2, . . .), where V_Y(t) = ⟨Y(t)²⟩ − ⟨Y(t)⟩² (t ≥ 0) is the variance function of the random motion Y. The detailed derivation of Eq. (55) is given in Section 8.7.
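Eq. (55) can be checked numerically on a toy motion with stationary increments. The sketch below, assuming NumPy, uses MA(1) Gaussian noise as an illustrative choice (not from the paper) and compares the directly estimated lag-1 autocovariance of the noise with the second difference of the empirical variance function:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.5
n_paths, n_steps = 40000, 32

# Stationary MA(1) Gaussian noise dY(t) = xi(t) + theta*xi(t+1); its
# partial sums Y form a motion with stationary increments and finite
# variances, so Eq. (55) applies.
xi = rng.normal(size=(n_paths, n_steps + 1))
dY = xi[:, :-1] + theta * xi[:, 1:]
Y = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dY, axis=1)], axis=1)

V = Y.var(axis=0)                          # empirical V_Y(t), t = 0..n_steps
C1_direct = np.mean(dY[:, 0] * dY[:, 1])   # lag-1 autocovariance (zero mean)
C1_eq55 = 0.5 * (V[0] - 2 * V[1] + V[2])   # Eq. (55) evaluated at l = 1

print(C1_direct, C1_eq55)  # both close to theta = 0.5
```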
As explained in Section 5.3 – in the context of ultra diffusion processes – the dispersion function D_Y(t) is a generalization of the variance function V_Y(t). The dispersion function D_Y(t) coincides with the variance function V_Y(t) in the case of finite variances, but yet is applicable also in the case of infinite variances. Thus, in the context of ultra diffusion processes with stationary increments, the autocovariance function C_{∆Y}(l) can be generalized to encompass the case of infinite variances: replace the variance function V_Y(t), in Eq. (55), by the dispersion function D_Y(t).
Consider now the fractional noise ∆X. Replacing the variance function V_X(t) by the dispersion function D_X(t) of Eq. (54), we obtain that the generalized autocovariance function C_{∆X}(l) of the fractional noise ∆X is given by

C_{∆X}(l) = (c_{α,γ}/2) [(l − 1)^{1+γ} − 2 l^{1+γ} + (l + 1)^{1+γ}]    (56)

(where c_{α,γ} is a coefficient depending on the exponents α and γ). Note that at the exponent value γ = 0 the autocovariance function C_{∆X}(l) vanishes, and at exponent values γ ≠ 0 the sign of the autocovariance function C_{∆X}(l) coincides with the sign of the exponent γ. Consequently, we obtain that:
• In the range of negative Joseph exponents −1 < γ < 0 fractional noises are ‘‘antipersistent’’—as they are negatively correlated: C_{∆X}(l) < 0.
• At the zero Joseph exponent γ = 0 fractional noises are sequences of independent random variables, and are uncorrelated: C_{∆X}(l) = 0.
• In the range of positive Joseph exponents 0 < γ < α − 1 fractional noises are ‘‘persistent’’—as they are positively correlated: C_{∆X}(l) > 0.⁴
³ The superdiffusive scenario is applicable only in the Noah exponent range 1 < α ≤ 2.
⁴ The persistent scenario is applicable only in the Noah exponent range 1 < α ≤ 2.
A stationary stochastic process Z is said to have ‘‘short-range correlations’’ if its autocovariance function C_Z(l) is summable (∑_{l=1}^∞ C_Z(l) < ∞), and is said to have ‘‘long-range correlations’’ if its autocovariance function C_Z(l) is non-summable (∑_{l=1}^∞ C_Z(l) = ∞) [27–30]. The notions of ‘‘long-range correlations’’ and ‘‘Joseph effect’’ are synonymous, and the generalized autocovariance function introduced in this section facilitates the extension of the notions of short-range and long-range correlations to settings in which the underlying variances are infinite. Stationary stochastic processes exhibiting long-range correlations are omnipresent in the sciences. Documented examples of long-range correlations include nucleotide and DNA sequencing [130,131], human walk [132], sedimentation [133], atmospheric temperature variations [134], volatility of price processes [135], heartbeat dynamics [136], and seismic coda [137].
Eq. (56) implies that the autocovariance C_{∆X}(l) – except for the uncorrelated case (γ = 0) – admits the power-law asymptotics

C_{∆X}(l) ≈ sign(γ) / l^{1−γ}    (57)

(l → ∞), where sign(γ) denotes the sign of the exponent γ. From Eq. (57) it is evident that fractional motions have long-range correlations if and only if their Joseph exponent is positive. Thus, we conclude that: Fractional motions display the Joseph effect if and only if their Noah and Joseph exponents are in the ranges 1 < α ≤ 2 and 0 < γ < α − 1.
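The passage from Eq. (56) to the asymptotics of Eq. (57) can be checked numerically. The sketch below, assuming NumPy, rescales the second difference by l^{1−γ} and watches it converge to the constant γ(1 + γ), whose sign matches sign(γ):

```python
import numpy as np

def C(l, gamma):
    """Second difference of Eq. (56), with the coefficient c_{alpha,gamma}
    set to 2 so that the prefactor drops out."""
    return (l - 1.0) ** (1 + gamma) - 2.0 * l ** (1 + gamma) \
        + (l + 1.0) ** (1 + gamma)

l = np.array([10.0, 100.0, 1000.0])
for gamma in (-0.5, 0.5):
    # Eq. (57): C(l) ~ sign(gamma)/l^{1-gamma}; the exact prefactor of the
    # second difference is gamma*(1+gamma), whose sign is sign(gamma).
    print(gamma, C(l, gamma) * l ** (1 - gamma))  # -> gamma*(1+gamma)
```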
5.5. Trajectories
So far we explored the statistical properties of fractional motions. In this section we study the topological properties of
the trajectories of fractional motions.
A random motion Y = (Y(t))_{t≥0} is said to be ‘‘self-similar with Hurst exponent H’’ if the following two scalings of it are equal in law [35,138]: (i) scaling the motion's time by the factor s – yielding the process (Y(st))_{t≥0}; (ii) scaling the motion's positions by the factor s^H—yielding the process (s^H Y(t))_{t≥0}. Namely, the self-similarity of the random motion Y implies that the random processes (Y(st))_{t≥0} and (s^H Y(t))_{t≥0} are statistically indistinguishable (for any positive scaling factor s).
The intrinsic power-law structure of fractional motions renders them self-similar. Indeed, an analysis detailed in Section 8.6 asserts that a fractional motion X with Noah exponent α and Joseph exponent γ is self-similar with Hurst exponent

H = (1 + γ)/α.    (58)

The admissible ranges of the Hurst exponent H – induced by the admissible ranges of the Noah and Joseph exponents α and γ, and classified according to the sign of the Joseph exponent – are as follows:
• In the exponents range 0 < α ≤ 2 and −1 < γ < 0 the ‘‘Hurst range’’ is 0 < H < 1/α.
• In the exponents range 0 < α ≤ 2 and γ = 0 the ‘‘Hurst value’’ is H = 1/α.
• In the exponents range 1 < α ≤ 2 and 0 < γ < α − 1 the ‘‘Hurst range’’ is 1/α < H < 1.
We emphasize that on the microscopic level Langevin motions are not self-similar. However, in the transcendence from the microscopic level to the macroscopic level self-similarity emerges—hand in hand with the emergence of power-laws and universality. The self-similarity of fractional motions means that their trajectories are ‘‘fractal’’ objects: zooming in and zooming out on the trajectory of a fractional motion X yields, statistically, the very same ‘‘picture’’.
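The scaling identity of Eq. (58) can be illustrated on Brownian motion, the simplest self-similar fractional motion (H = 1/2). A minimal Monte Carlo sketch, assuming NumPy; the grid and ensemble sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, dt = 10000, 400, 0.01

# Discretized Brownian motion B on (0, 4] over an ensemble of paths.
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

s, H = 4.0, 0.5                   # Brownian motion: Hurst exponent H = 1/2
t_idx, st_idx = 99, 399           # grid indices of t = 1 and s*t = 4

# Self-similarity: B(s*t) and s^H * B(t) are equal in law; compare their
# empirical standard deviations (both should equal s^H * sqrt(t) = 2).
std_scaled_time = B[:, st_idx].std()
std_scaled_space = (s ** H) * B[:, t_idx].std()
print(std_scaled_time, std_scaled_space)  # both close to 2
```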
Let us turn now to examine the continuity of the trajectories of fractional motions. To that end we apply the Kolmogorov–Čentsov continuity criterion [139,140]. A random motion Y = (Y(t))_{t≥0} is said to satisfy the Kolmogorov–Čentsov continuity criterion if there exist positive numbers {c, p, q} such that for all T > 0 the following bound holds:

⟨|Y(t₁) − Y(t₂)|^p⟩ ≤ c |t₁ − t₂|^{1+q}    (59)

(0 ≤ t₁, t₂ ≤ T). The Kolmogorov–Čentsov continuity criterion assures the continuity of the trajectories of the random motion Y = (Y(t))_{t≥0}.
Consider now the fractional motion X. Since the fractional motion X has stationary increments, and since it is self-similar with Hurst exponent H, we have

⟨|X(t₁) − X(t₂)|^p⟩ = ⟨|X(1)|^p⟩ |t₁ − t₂|^{pH}    (60)

(0 ≤ t₁, t₂ ≤ T). We emphasize that Eq. (60) holds valid provided that the moment ⟨|X(1)|^p⟩ is convergent.
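For Brownian motion Eq. (60) can be verified directly: with p = 4 and H = 1/2 the Gaussian moment ⟨|X(1)|⁴⟩ equals 3, so both sides equal 3|t₁ − t₂|². A Monte Carlo sketch, assuming NumPy; the times t₁, t₂ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Brownian motion: H = 1/2, and the Gaussian moment <|X(1)|^4> equals 3.
t1, t2, p, H = 0.3, 0.8, 4, 0.5
dX = rng.normal(0.0, np.sqrt(abs(t1 - t2)), size=1_000_000)

lhs = np.mean(np.abs(dX) ** p)            # <|X(t1) - X(t2)|^p>
rhs = 3.0 * abs(t1 - t2) ** (p * H)       # <|X(1)|^p> * |t1 - t2|^{pH}
print(lhs, rhs)  # both close to 3 * 0.5**2 = 0.75
```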
Consider the Noah exponent α = 2. In this case the random variable X(1) is Gauss distributed, and hence the moment ⟨|X(1)|^p⟩ converges for all p > 0. Moreover, in this case the Hurst exponent H takes values in the range 0 < H < 1. Setting p = 2/H we obtain that the Kolmogorov–Čentsov continuity criterion is satisfied with q = 1. Thus, we conclude that: Brownian motion and fractional Brownian motion have continuous trajectories. However, the trajectories of Brownian motion and fractional Brownian motion are very ‘‘rough’’ and irregular. Indeed, the trajectories of Brownian motion and fractional Brownian motion are nowhere-differentiable continuous functions – see [141,142] for the case of Brownian motion, and see [34,143] for the case of fractional Brownian motion.⁵
Consider Noah exponents in the range 0 < α < 2. In this case the random variable X(1) is Lévy distributed, and hence the moment ⟨|X(1)|^p⟩ converges for powers p in the range 0 < p < α (this observation follows straightforwardly from Eq. (51)). Eq. (58) implies that pH = (p/α)(1 + γ), and hence only positive Joseph exponents (γ > 0) are potentially applicable to satisfy the Kolmogorov–Čentsov continuity criterion. Setting p = α(1 + γ/2)/(1 + γ) we obtain that the Kolmogorov–Čentsov continuity criterion is satisfied with q = γ/2. Thus, we conclude that: persistent fractional Lévy motion has continuous trajectories. The differentiability of the trajectories of persistent fractional Lévy motion is discussed in [35].
Lévy motion and antipersistent fractional Lévy motion have discontinuous trajectories. Lévy motion is a pure jump process—as it propagates via jumps alone. Moreover, the Lévy-motion jumps of size larger than x (x > 0) form a Poisson process with intensity c_α/x^α, and the Lévy-motion jumps of size smaller than x (x < 0) form a Poisson process with intensity c_α/|x|^α (where c_α is a constant depending on the exponent α). The detailed derivation of the ‘‘jump structure’’ of Lévy motion is given in Section 8.8. The trajectories of antipersistent fractional Lévy motion are discontinuous and nowhere bounded – see [35] for further details.
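The power-law jump structure can be illustrated by sampling symmetric Lévy-stable variates with the Chambers–Mallows–Stuck method and counting threshold exceedances: the count of jumps above x falls off like x^{−α}. A sketch, assuming NumPy; the exponent α, threshold, and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = 1.5

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

# Unit-time increments of Levy motion; large increments are essentially
# single jumps.
jumps = symmetric_stable(alpha, 2_000_000, rng)

# The mean number of jumps exceeding x falls off like x^{-alpha}: halving
# the threshold from 2x to x multiplies the count by roughly 2**alpha.
x = 10.0
ratio = np.sum(jumps > x) / np.sum(jumps > 2 * x)
print(ratio)  # approaches 2**alpha ~ 2.83 as x grows
```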
5.6. Classification of fractional motions
Having explored the statistical and topological properties of fractional motions, we are now in a position to fully describe the profoundly different behaviors displayed by the four classes of fractional motions:
• Brownian motion is a Markov random motion with stationary, independent, and Gauss-distributed increments. Brownian motion is the universal macroscopic scaling limit of Langevin motions whose microscopic fluctuations and microscopic correlations do not transcend to the macroscopic level. Brownian motion has no degrees of freedom, and it displays neither the Noah effect nor the Joseph effect. The trajectories of Brownian motion are self-similar, continuous, and nowhere-differentiable.
• Lévy motion is a Markov random motion with stationary, independent, and Lévy-distributed increments. Lévy motion is the universal macroscopic scaling limit of Langevin motions whose microscopic fluctuations do transcend to the macroscopic level, but yet their microscopic correlations do not transcend to the macroscopic level. Lévy motion has one degree of freedom which is parametrized by its Noah exponent 0 < α < 2. Lévy motion displays the Noah effect, and does not display the Joseph effect. The trajectories of Lévy motion are self-similar and discontinuous, and the propagation of Lévy motion is purely via jumps.
• Fractional Brownian motion is a non-Markov random motion with stationary, dependent, and Gauss-distributed increments. Fractional Brownian motion is the universal macroscopic scaling limit of Langevin motions whose microscopic fluctuations do not transcend to the macroscopic level, but yet their microscopic correlations do transcend to the macroscopic level. Fractional Brownian motion has one degree of freedom which is parametrized by its Joseph exponent γ. Fractional Brownian motion is subdiffusive and antipersistent in the exponent range −1 < γ < 0, and is superdiffusive and persistent in the exponent range 0 < γ < 1. Fractional Brownian motion does not display the Noah effect, and displays the Joseph effect when superdiffusive and persistent. The trajectories of fractional Brownian motion are self-similar, continuous, and nowhere-differentiable.
• Fractional Lévy motion is a non-Markov random motion with stationary, dependent, and Lévy-distributed increments. Fractional Lévy motion is the universal macroscopic scaling limit of Langevin motions whose microscopic fluctuations and microscopic correlations do transcend to the macroscopic level. Fractional Lévy motion has two degrees of freedom which are parametrized by its Noah and Joseph exponents α and γ. Fractional Lévy motion is subdiffusive and antipersistent in the exponents range 0 < α < 2 and −1 < γ < 0, and is superdiffusive and persistent in the exponents range 1 < α < 2 and 0 < γ < α − 1. Fractional Lévy motion displays the Noah effect, and further displays the Joseph effect when superdiffusive and persistent. The trajectories of fractional Lévy motion are self-similar, are discontinuous and nowhere-bounded when subdiffusive and antipersistent, and are continuous when superdiffusive and persistent.
The aforementioned detailed description of fractional motions is summarized in Table 2, and the classification of fractional motions according to the sign of their Joseph exponent γ is summarized in Table 3.
Fractional motions carry – along any time interval, no matter how short – an infinite amount of information. This intrinsic infiniteness is a consequence of the fact that fractional motions are scaling limits of Langevin motions. The scaling procedure – described in Section 3, and shifting us from the microscopic level to the macroscopic level – compresses Langevin motions ‘‘ad infinitum’’ to yield fractional motions. In turn, this compression results in the aforementioned intrinsic infiniteness. Moreover, this compression also results in the fractal structure of fractional motions – which is a manifestation of their intrinsic infiniteness. Indeed, the invariance of fractional motions under changes of scale implies that they must have, on all time scales, an infinite amount of information.
⁵ Wiener proved that the distance between any two points along a Brownian trajectory is infinite—as the Brownian trajectory oscillates infinitely between the two points and, in effect, covers an area. These infinite oscillations prevent the Brownian trajectory from being differentiable anywhere, and thus velocity is not a well-defined quantity for Brownian motion. Langevin's approach rectified this physical shortcoming of Brownian motion.
Table 2
Detailed classification of fractional motions.

• Noah exponent α = 2, Joseph exponent γ = 0: Brownian motion. Increments: independent, Gauss-distributed. Randomness: mild. Noah effect: no. Joseph effect: no. Markov property: yes. Trajectories: continuous.
• Noah exponent α = 2, Joseph exponent γ ≠ 0: Fractional Brownian motion. Increments: dependent, Gauss-distributed. Randomness: mild. Noah effect: no. Joseph effect: yes iff γ > 0. Markov property: no. Trajectories: continuous.
• Noah exponent α ≠ 2, Joseph exponent γ = 0: Lévy motion. Increments: independent, Lévy-distributed. Randomness: wild. Noah effect: yes. Joseph effect: no. Markov property: yes. Trajectories: discontinuous.
• Noah exponent α ≠ 2, Joseph exponent γ ≠ 0: Fractional Lévy motion. Increments: dependent, Lévy-distributed. Randomness: wild. Noah effect: yes. Joseph effect: yes iff γ > 0. Markov property: no. Trajectories: continuous iff γ > 0.
Table 3
Joseph classification of fractional motions.

• Negative Joseph exponent (0 < α ≤ 2, −1 < γ < 0): Hurst exponent 0 < H < 1/α. Dispersion sublinear: D_X(t) = c t^{1−|γ|}. Correlation negative: C_{∆X}(l) ≈ −1/l^{1+|γ|}. Memory short-range: ∑ |C_{∆X}(l)| < ∞.
• Zero Joseph exponent (0 < α ≤ 2, γ = 0): Hurst exponent H = 1/α. Dispersion linear: D_X(t) = ct. Correlation zero: C_{∆X}(l) = 0. Memory short-range: ∑ |C_{∆X}(l)| < ∞.
• Positive Joseph exponent (1 < α ≤ 2, 0 < γ < α − 1): Hurst exponent 1/α < H < 1. Dispersion superlinear: D_X(t) = c t^{1+|γ|}. Correlation positive: C_{∆X}(l) ≈ 1/l^{1−|γ|}. Memory long-range: ∑ |C_{∆X}(l)| = ∞.
The aforementioned intrinsic infiniteness of fractional motions renders the precise simulation of their trajectories an
impossible task. Indeed, since any simulation can employ only a finite amount of information the trajectories of fractional
motions can be simulated only approximately. The approximate simulations of fractional motions are a crafty niche of
probability and statistics, and are beyond the scope of this paper; the interested readers are referred to [23,24,29,35]
and [144]. Fractional Brownian motion and fractional Lévy motion turn out to be especially tricky to simulate; the former
is well approximated by the Davies–Harte algorithm [145] (see also [146]), and the latter is well approximated by the
Stoev–Taqqu algorithm [147]. A related topic is the effective Markov approximation of nonMarkov processes [148,149].
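As an illustration of the circulant-embedding idea behind the Davies–Harte algorithm, the sketch below samples fractional Gaussian noise and integrates it into a fractional Brownian path. NumPy is assumed; this is a minimal textbook version of the method, not the authors' code, and the sample size and Hurst exponent are illustrative:

```python
import numpy as np

def fgn_davies_harte(N, H, rng):
    """Sample N fractional Gaussian noise variates via circulant embedding.

    The fGn autocovariance r is embedded in a circulant matrix, which the
    FFT diagonalizes; coloring complex white noise with the square-rooted
    eigenvalues and transforming back yields a stationary Gaussian sample.
    """
    k = np.arange(N + 1)
    r = 0.5 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H)
               + np.abs(k - 1.0) ** (2 * H))
    c = np.concatenate([r, r[-2:0:-1]])       # circulant first row, length 2N
    lam = np.fft.fft(c).real
    if np.any(lam < 0):
        raise ValueError("circulant embedding failed for this N and H")
    xi = rng.normal(size=2 * N) + 1j * rng.normal(size=2 * N)
    return np.fft.fft(np.sqrt(lam / (2 * N)) * xi).real[:N]

rng = np.random.default_rng(5)
H = 0.75
noise = fgn_davies_harte(4096, H, rng)        # persistent fGn, unit variance
fbm = np.cumsum(noise)                        # a fractional Brownian path
lag1 = np.mean(noise[:-1] * noise[1:])
print(noise.var(), lag1)  # near 1 and near r(1) = 2**(2H-1) - 1 ~ 0.41
```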
6. Discussion
6.1. The pervasiveness of fractional motions
As explained in Sections 3 and 4 the Noah and Joseph exponents α and γ have wide domains of attraction at the values α = 2 and γ = 0, and have narrow domains of attraction at all other values (α ≠ 2 and γ ≠ 0). The Noah exponent α = 2 characterizes the scenario in which microscopic fluctuations do not transcend to the macroscopic level, and includes the case of finite-variance microscopic generic random shocks. The Joseph exponent γ = 0 characterizes the scenario in which microscopic correlations do not transcend to the macroscopic level, and includes the case of integrable microscopic impulse-response functions.
The intersection of the wide domains of attraction α = 2 and γ = 0 yields Brownian motion—which has no degrees of
freedom. Brownian motion is the only fractional motion which is totally invariant with respect to the microscopic details
of Langevin motions. Hence, in essence, Brownian motion is the ‘‘truly universal’’ macroscopic scaling limit of Langevin
motions. In the probabilistic literature the universality of Brownian motion is often referred to as ‘‘Donsker’s invariance
principle’’ [150,151].
The intersection of a wide domain of attraction and a narrow domain of attraction yields Lévy motion and fractional
Brownian motion—each having a single degree of freedom. Lévy motion with Noah exponent α (0 < α < 2) requires a
specific type of microscopiclevel fluctuations, and fractional Brownian motion with Joseph exponent γ (γ ̸= 0) requires a
specific type of microscopiclevel correlations. The intersection of two narrow domains of attraction yields fractional Lévy
motion—which has two degrees of freedom. The Noah exponent α (0 < α < 2) requires a specific type of microscopiclevel
fluctuations, and the Joseph exponent γ (γ ̸= 0) requires a specific type of microscopiclevel correlations.
The empirical pervasiveness of fractional motions is completely compatible with the mathematical classification of their domains of attraction. In the ‘‘first place’’ stands supreme Brownian motion—which has the widest domain of attraction and is indeed the most omnipresent form of random motion in the sciences [1–3]. The ‘‘second place’’ is shared by Lévy motion and fractional Brownian motion, which are prevalent in the sciences—but are far less ubiquitous than Brownian motion. In the ‘‘third place’’ stands fractional Lévy motion—which has the narrowest domain of attraction and is indeed the least common fractional motion observed in the sciences. Examples and applications of fractional Lévy motion include solar wind [106], solar flares [107], sedimentary rocks [161], aquifers [162], and network traffic [163].
6.2. Brownian motion and Ito’s calculus
Brownian motion is the only fractional motion which combines the ‘‘independent increments’’ property with continuous trajectories. These combined statistical and topological features render Brownian motion a natural ‘‘stochastic integrator’’. Indeed, there is a stochastic calculus – termed ‘‘Ito's calculus’’ – which is based on Brownian motion [152,153]. Ito's calculus differs markedly from ordinary calculus, and has its own stochastic rules of differentiation and integration [140]. Ito's calculus is of the utmost importance as it facilitates the methodology required in order to define and analyze stochastic differential equations—which have a wide host of applications in science and engineering [154–156]. Perhaps the best known application of stochastic differential equations is in the field of financial engineering [157,158], which followed the invention of the Merton–Black–Scholes option-pricing formula [159,160].
6.3. Data traffic and fractional Brownian motion
Hand in hand with the rise of the World Wide Web, data traffic grew both highly massive and highly intricate. In the 1990s a substantial body of accumulated evidence established that data traffic in communication networks displays long-ranged correlations and self-similarity [11,12], [164,165]. Consequently, fractional Brownian motion was widely applied as ‘‘the model of choice’’ for data traffic in communication systems [166,167]. The theoretical basis for this choice is the fact that fractional Brownian motion emerges as a universal scaling limit of superpositions of IID signal processes—a setting which is a fairly reasonable approximation in communication systems. Examples of signal-superpositions whose scaling limits yield fractional Brownian motion include: on–off processes [168], renewal processes [169,170], correlated random walks [171,172], and Ornstein–Uhlenbeck processes [173,174]. (For the emergence of fractional Brownian motion from superpositions of dependent Ornstein–Uhlenbeck processes, driven by a common underlying noise, see [175,176].)
6.4. Universality via stochastic scaling
A key feature of fractional motions is their universality: macroscopic invariance with respect to microscopic details. Universality emerged on the macroscopic level, and the transcendence from the microscopic level to the macroscopic level was facilitated via the method of scaling—which is a deterministic renormalization scheme. However, there are physical settings in which the natural scaling is stochastic rather than deterministic [177–179]—a typical example of natural stochastic scaling being the setting of Holtsmark's law [180,181]. Considering stochastic scaling in the aforementioned context of superpositions of IID signal processes, the following ‘‘universality question’’ arises [182,183]: When are the statistics of the macroscopic-level signal-superposition invariant with respect to the microscopic-level signal processes? The answer to this universality question led to the establishment of a novel methodological approach regarding the generation of a host of power-law statistics displaying, in particular, both the Noah and Joseph effects—see [47,184] for overviews of this approach.
6.5. Fractional motions vs. the CTRW model
The ‘‘arch-rival model’’ of fractional motions is the Continuous Time Random Walk (CTRW) introduced by Montroll and Weiss [185] (see also [39] and [186]). The basic CTRW model is a generalization of the compound Poisson model of Eq. (14)—in which the underlying Poisson process is replaced by a renewal process [187,188]. In the Poisson process the inter-impact time durations are IID random variables which are exponentially distributed. On the other hand, in a renewal process the inter-impact time durations are IID random variables which are drawn from a general (non-exponential) distribution. The exponential distribution is the only distribution (defined on the positive half-line) which is ‘‘memoryless’’, and hence the Poisson process is the only renewal process which is Markov. Thus, replacing the Poisson process by a renewal process has the effect of breaking the underlying Markov property. We note that in the Langevin dynamics an analogous ‘‘Markov breaking’’ effect is attained by replacing the exponential impulse-response function by general (non-exponential) impulse-response functions.
When transcending from the microscopic level to the macroscopic level – by applying the method of scaling – the CTRW yields four classes of scaling limits [189,190]: Brownian motion, Lévy motion, subordinated Brownian motion, and subordinated Lévy motion. When the microscopic-level inter-impact time durations have a ‘‘short memory’’ then Brownian and Lévy motions emerge macroscopically. Conversely, when the microscopic-level inter-impact time durations have a ‘‘long memory’’ then subordinated Brownian and Lévy motions emerge macroscopically. The precise formulation of ‘‘short memory’’ and ‘‘long memory’’ is as follows. Let P_≤(x) and P_>(x) (x ≥ 0) denote, respectively, the cumulative distribution function and the tail distribution function of the inter-impact time durations. Then: (i) ‘‘short memory’’ is quantified by a slowly varying function M(x) = ∫₀ˣ u P_≤(du) (x ≥ 0); (ii) ‘‘long memory’’ is quantified by a regularly varying tail distribution function P_>(x). The ‘‘short memory’’ scenario includes the case of microscopic-level inter-impact time durations with finite means, and in the ‘‘long memory’’ scenario the exponent of regular variation ϵ takes values in the range −1 < ϵ < 0.
The ‘‘short memory’’ scenario is the CTRW analogue of Langevin motions with slowly varying kernel functions—in which case both the CTRW model and the Langevin model yield Brownian and Lévy motions as their universal scaling limits. On the other hand, the ‘‘long memory’’ scenario is the CTRW analogue of Langevin motions with regularly varying kernel functions—in which case the CTRW model yields subordinated Brownian and Lévy motions as its universal scaling limits, whereas the Langevin model yields fractional Brownian and Lévy motions as its universal scaling limits. Subordinated Brownian and Lévy motions are the CTRW analogues of subdiffusive fractional Brownian and Lévy motions. These classes of processes share two key features: they are self-similar processes, and their trajectories are jointly continuous and discontinuous. However, these classes of processes display dramatically different statistical behaviors. Indeed, the increments of subordinated Brownian and Lévy motions: (i) are not Gauss and Lévy distributed (respectively); (ii) are non-stationary; (iii) are positively correlated, and their correlations are long-ranged. On the other hand, the increments of subdiffusive fractional Brownian and Lévy motions: (i) are Gauss and Lévy distributed (respectively); (ii) are stationary; (iii) are negatively correlated, and their correlations are short-ranged.
The marked difference between these classes of processes stems from their fundamentally different propagation mechanisms. Subordinated Brownian and Lévy motions produce their subdiffusive behavior by halting their propagation for long-extended and non-stationary time periods. On the other hand, fractional Brownian and Lévy motions produce a subdiffusive behavior by stationary and antipersistent oscillations. The issue of ‘‘fractional motions vs. the CTRW approach’’ is studied in detail in [191–193], and a detailed comparison between fractional Brownian motion and the ‘‘fractional Langevin equation’’ is presented in [194]. For CTRW models which couple space and time the readers are referred to [39] and to [195–197].
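The subdiffusive behavior of the long-memory scenario can be illustrated with a minimal uncoupled CTRW: Gaussian jumps separated by infinite-mean Pareto waiting times with tail exponent η give an ensemble mean-square displacement growing like t^η. A sketch, assuming NumPy; all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
eta = 0.5                       # waiting-time tail exponent, 0 < eta < 1
n_paths, n_jumps = 5000, 400

# Heavy-tailed waiting times (Pareto, infinite mean) and Gaussian jumps.
waits = rng.pareto(eta, size=(n_paths, n_jumps)) + 1.0
arrival = np.cumsum(waits, axis=1)
position = np.cumsum(rng.normal(size=(n_paths, n_jumps)), axis=1)

def msd(T):
    """Ensemble mean-square displacement of the CTRW at time T."""
    n_arrived = (arrival <= T).sum(axis=1)          # jumps completed by T
    idx = np.maximum(n_arrived - 1, 0)
    pos = position[np.arange(n_paths), idx]
    return np.mean(np.where(n_arrived > 0, pos, 0.0) ** 2)

# Subdiffusion: MSD ~ T^eta, so quadrupling T roughly doubles the MSD.
ratio = msd(4000.0) / msd(1000.0)
print(ratio)  # close to 4**eta = 2
```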
7. Conclusion
In this paper we presented a physics-based panoramic overview of fractional motions—the four archetypal models, in science and engineering, of random transport processes with stationary increments: Brownian motion, Lévy motion, fractional Brownian motion, and fractional Lévy motion. Our starting point was Langevin dynamics driven by a perturbing noise constructed according to the Einstein–Smoluchowski paradigm. The Langevin dynamics describe the stochastic evolution of a physical motion on the microscopic level. To transcend from the microscopic level to the macroscopic level we applied the renormalization method of scaling, and established that: The universal macroscopic-level scaling limits of microscopic-level Langevin dynamics are fractional motions.
In the transcendence from the microscopic level to the macroscopic level power-laws emerged, and the degrees of freedom collapsed from infinitely many to two. Microscopic-level Langevin dynamics are characterized by two functions—the Fourier function which governs the statistics of the dynamics' fluctuations, and the kernel function which governs the structure of the dynamics' correlations. On the microscopic level both these functions are general and have infinitely many degrees of freedom. Transcendence from the microscopic level to the macroscopic level replaces the general Fourier and kernel functions by universal power-law functions. Thus, the infinitely many degrees of freedom of the microscopic-level Fourier and kernel functions collapse to two degrees of freedom—the exponent of the macroscopic-level power-law Fourier function, and the exponent of the macroscopic-level power-law kernel function. Moreover, the power-laws emerging on the macroscopic level further induced fractality: fractional motions are self-similar processes—as their trajectories are fractal objects which are statistically invariant under ‘‘zooming in’’ and ‘‘zooming out’’.
The Langevin approach yields a tangible distinction between the four different classes of fractional motions: (i) Brownian motion is the universal scaling limit of Langevin dynamics whose microscopic-level fluctuations and correlations do not transcend to the macroscopic level; (ii) Lévy motion is the universal scaling limit of Langevin dynamics whose microscopic-level fluctuations transcend to the macroscopic level, but yet whose microscopic-level correlations do not; (iii) fractional Brownian motion is the universal scaling limit of Langevin dynamics whose microscopic-level correlations transcend to the macroscopic level, but yet whose microscopic-level fluctuations do not; (iv) fractional Lévy motion is the universal scaling limit of Langevin dynamics whose microscopic-level fluctuations and correlations transcend to the macroscopic level. The Langevin approach further provides quantitative criteria precisely predicting when the microscopic-level fluctuations and correlations do or do not transcend to the macroscopic level.
The statistical and topological properties of fractional motions were explored in detail. We determined precisely when: (i) the fluctuations are ‘‘mild’’ or ‘‘wild’’, and when the Noah effect is displayed; (ii) the dispersion is ‘‘subdiffusive’’, ‘‘diffusive’’, or ‘‘superdiffusive’’; (iii) the corresponding stationary noise process is ‘‘antipersistent’’, uncorrelated, or ‘‘persistent’’, and when the Joseph effect is displayed; (iv) the trajectories are continuous or discontinuous. The different classes of fractional motions were shown to exhibit dramatically different statistical and topological behaviors.
Fractional motions are the very bedrock of fractal random processes with stationary increments. Fractional motions are omnipresent and ubiquitous in a host of scientific fields, and are of fundamental importance both practically and theoretically. The aim of this review paper is to provide a self-contained, cohesive, and concise introduction to fractional motions. We hope this review paper will help scientists gain insight regarding the structure and universal emergence of fractional motions, and help researchers properly model the transcendence from micro-level to macro-level stochastic dynamics.
8. Stochastic analyses
In this section we present the detailed derivations of results stated in Sections 2–5. Throughout this section E[ξ] denotes the mathematical expectation of a real-valued random variable ξ.
8.1. Poisson processes and their Fourier functional
Consider a random collection of points P scattered across a general Euclidean domain X. The collection P is said to be a Poisson process with intensity Λ(x) (x ∈ X) if the following pair of conditions hold [79–81]: (i) the number of points residing in the subdomain D ⊂ X is a Poisson-distributed random variable with mean ∫_D Λ(x) dx; (ii) the numbers of points residing in disjoint subdomains are independent random variables.
While the mathematical definition of Poisson processes might appear, at first glance, rather abstract—its intuitive
construction is, in fact, rather straightforward. Indeed, a Poisson process with intensity Λ(x) (x ∈ X) can be informally
constructed as follows: partition the Euclidean domain X into infinitesimal non-overlapping cells, place a single point
in the cell dx with probability Λ(x) dx, and leave the cell dx empty with complementary probability 1 − Λ(x) dx. If the
placing of points in the infinitesimal cells is done independently then the resulting scattering of points will effectively yield
a Poisson process with intensity Λ(x) (x ∈ X). The mathematical definition of Poisson processes is the precise probabilistic
characterization of the aforementioned scattering of points into infinitesimal cells.
The best-known example of a Poisson process is the ‘‘standard’’ Poisson process, which is defined on the positive half-line X = (0, ∞), and which has a constant intensity Λ(x) = λ (x > 0). In this example each infinitesimal cell dx of the positive half-line contains a single point with probability λ dx, and is empty with complementary probability 1 − λ dx (independently of all other cells). Thus, the constant intensity λ can be considered as the uniform scattering rate of the ‘‘standard’’ Poisson process.
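The cell-by-cell construction described above translates directly into simulation. The following minimal sketch (all parameter values are arbitrary choices for illustration) builds a ‘‘standard’’ Poisson process on (0, T] from one Bernoulli trial per cell of width dx, and checks the Poisson signature that the point count has mean and variance both close to λT.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_points_by_cells(lam, T, dx, rng):
    """Informal cell construction of a 'standard' Poisson process on (0, T]:
    each cell of width dx holds one point with probability lam*dx,
    independently of all other cells (lam*dx is assumed to be << 1)."""
    n_cells = int(T / dx)
    occupied = rng.random(n_cells) < lam * dx   # one Bernoulli trial per cell
    return (np.nonzero(occupied)[0] + 0.5) * dx  # report cell centers

lam, T, dx = 2.0, 50.0, 1e-3
counts = [poisson_points_by_cells(lam, T, dx, rng).size for _ in range(200)]
mean_count, var_count = np.mean(counts), np.var(counts)
# Poisson signature: mean and variance both close to lam * T = 100
```

As dx shrinks, the Bernoulli-cell scattering converges to an exact Poisson process; the residual discrepancy here is of order λ·dx per cell.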
A fundamental result from the theory of Poisson processes asserts that the collection P is a Poisson process with intensity Λ(x) (x ∈ X) if and only if
\[
E\left[\prod_{p\in P}\phi(p)\right]=\exp\left(-\int_{X}\bigl(1-\phi(x)\bigr)\,\Lambda(x)\,dx\right) \tag{61}
\]
holds for all test functions φ(x) (x ∈ X) for which the integral appearing on the right-hand side of Eq. (61) converges [79].
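Eq. (61) can also be probed by direct Monte Carlo simulation. The sketch below (a hypothetical test function φ(x) = e^{−x} on the domain X = (0, 2) with unit intensity; all choices are illustrative) compares an empirical average of the product ∏φ(p) with the closed-form exponential on the right-hand side.

```python
import numpy as np

rng = np.random.default_rng(1)

lam, T = 1.0, 2.0            # unit intensity on the domain X = (0, T)
phi = lambda x: np.exp(-x)   # hypothetical test function, values in (0, 1]

# Monte Carlo estimate of E[ prod_{p in P} phi(p) ] over many Poisson samples
n_samples = 50_000
products = np.empty(n_samples)
for i in range(n_samples):
    pts = rng.uniform(0.0, T, rng.poisson(lam * T))
    products[i] = np.prod(phi(pts))   # empty product = 1 when no points fall
mc = products.mean()

# Right-hand side of Eq. (61): here the integral is closed-form,
# ∫_0^T (1 - e^{-x}) dx = T - (1 - e^{-T})
exact = np.exp(-(T - (1.0 - np.exp(-T))))
```

With these choices the exact value is about 0.32, and the Monte Carlo estimate matches it to within sampling error.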
This result facilitates the derivation of Eq. (9), as we shall now demonstrate.
Consider the noise process N = (N(t))_{−∞<t<∞} given by
\[
N(t)=\sum_{k}S_{k}\,\delta(t-t_{k}) \tag{62}
\]
where: (i) P = {t_k} is a Poisson process on the real line X = (−∞, ∞) with unit intensity Λ(x) = 1 (−∞ < x < ∞); (ii) {S_k} are IID copies of a symmetric random variable S; (iii) the random sequences {t_k} and {S_k} are independent.
Set F(θ) = 1 − E[cos(θS)], and note that fact (ii) implies that
\[
E\bigl[\exp\bigl(i\theta S_{k}\bigr)\bigr]=1-F(\theta) \tag{63}
\]
(−∞ < θ < ∞). Eq. (62) implies that
\[
\int_{-\infty}^{\infty}\varphi(t)\,N(t)\,dt=\sum_{k}S_{k}\,\varphi(t_{k}), \tag{64}
\]
where ϕ(t) (−∞ < t < ∞) is an arbitrary test function. Consequently
\[
E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N(t)\,dt\right)\right]
=E\left[\prod_{k}\exp\bigl(iS_{k}\,\varphi(t_{k})\bigr)\right] \tag{65}
\]
(using conditioning)
\[
=E\left[E\left[\prod_{k}\exp\bigl(iS_{k}\,\varphi(t_{k})\bigr)\,\middle|\,P\right]\right] \tag{66}
\]
(using facts (ii) and (iii), and using Eq. (63))
\[
=E\left[\prod_{k}\bigl[1-F(\varphi(t_{k}))\bigr]\right] \tag{67}
\]
(using Eq. (61) with φ(x) = 1 − F(ϕ(x)))
\[
=\exp\left(-\int_{-\infty}^{\infty}F(\varphi(x))\,dx\right). \tag{68}
\]
Eqs. (65)–(68) prove Eq. (9).
8.2. Regular variation and scaling limits
Let the functions P_≤(x) = Pr(|S| ≤ x) and P_>(x) = Pr(|S| > x) (x ≥ 0) denote, respectively, the cumulative distribution function and the tail distribution function of the random variable |S| (the absolute value of the generic shock S). In addition, we introduce the function
\[
Q(x)=\int_{0}^{x}u^{2}\,P_{\le}(du) \tag{69}
\]
(x ≥ 0), and the function
\[
G(x)=\frac{1-\cos(x)}{x^{2}} \tag{70}
\]
(x ≥ 0).
Recall that the Fourier function F(θ) is given by F(θ) = 1 − E[cos(θS)] (−∞ < θ < ∞). A calculation involving integration by parts implies that
\[
F(\theta)=\int_{0}^{\infty}\sin(x)\,P_{>}\!\left(\frac{x}{\theta}\right)dx
=\theta^{2}\int_{0}^{\infty}\bigl[-G'(x)\bigr]\,Q\!\left(\frac{x}{\theta}\right)dx. \tag{71}
\]
In turn, Eq. (71) implies that
\[
\frac{F(l\theta)}{F(l)}
=\frac{\displaystyle\int_{0}^{\infty}\sin(x)\,\frac{P_{>}(mx/\theta)}{P_{>}(m)}\,dx}
       {\displaystyle\int_{0}^{\infty}\sin(x)\,\frac{P_{>}(mx)}{P_{>}(m)}\,dx}
=\theta^{2}\,
 \frac{\displaystyle\int_{0}^{\infty}\bigl[-G'(x)\bigr]\frac{Q(mx/\theta)}{Q(m)}\,dx}
      {\displaystyle\int_{0}^{\infty}\bigl[-G'(x)\bigr]\frac{Q(mx)}{Q(m)}\,dx}, \tag{72}
\]
where m = 1/l. From Eq. (72) it is evident that the following conditions are equivalent: (i) the Fourier function F(θ) is regularly varying at the origin; (ii) the tail distribution function P_>(x) is regularly varying at infinity; (iii) the function Q(x) is regularly varying at infinity. Assume that these equivalent regular-variation conditions hold, and let {α, −p, q} denote, respectively, the regular-variation exponents of the functions {F(θ), P_>(x), Q(x)}. Taking the limit l → 0 in Eq. (72) we obtain that
\[
\theta^{\alpha}
=\theta^{p}\,
 \frac{\displaystyle\int_{0}^{\infty}\sin(x)\,x^{-p}\,dx}
      {\displaystyle\int_{0}^{\infty}\sin(x)\,x^{-p}\,dx}
=\theta^{2-q}\,
 \frac{\displaystyle\int_{0}^{\infty}\bigl[-G'(x)\bigr]x^{q}\,dx}
      {\displaystyle\int_{0}^{\infty}\bigl[-G'(x)\bigr]x^{q}\,dx} \tag{73}
\]
(−∞ < θ < ∞); namely, α = p = 2 − q.
Note that
\[
\int_{0}^{\infty}\sin(x)\,x^{-p}\,dx=p\int_{0}^{\infty}\frac{1-\cos(x)}{x^{1+p}}\,dx, \tag{74}
\]
and that the integral appearing on the right-hand side of Eq. (74) converges if and only if 0 < p < 2. Further note that
\[
\int_{0}^{\infty}\bigl[-G'(x)\bigr]x^{q}\,dx=
\begin{cases}
\dfrac{1}{2} & q=0,\\[2mm]
q\displaystyle\int_{0}^{\infty}\frac{1-\cos(x)}{x^{1+(2-q)}}\,dx & q\neq 0,
\end{cases} \tag{75}
\]
and that the integral appearing on the right-hand side of Eq. (75) converges if and only if 0 < q < 2.
Thus, we conclude that: (i) the Fourier function F(θ) is regularly varying at the origin with exponent α (0 < α < 2) if and only if the tail distribution function P_>(x) is regularly varying at infinity with exponent −α; (ii) the Fourier function F(θ) is regularly varying at the origin with exponent α = 2 if and only if the function Q(x) is slowly varying at infinity (i.e., regularly varying at infinity with exponent q = 0).
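The integration-by-parts identity of Eq. (74) is easy to verify numerically. The sketch below (the truncation point M and the test values of p are arbitrary choices) evaluates the right-hand side of Eq. (74) by quadrature and compares it with the standard closed form ∫₀^∞ sin(x) x^{−p} dx = Γ(1 − p) cos(pπ/2) for p ≠ 1 (with the value π/2 at p = 1).

```python
from math import gamma, cos, pi
from scipy.integrate import quad

def rhs_of_eq74(p, M=500.0):
    """p * ∫_0^∞ (1 - cos x) / x^{1+p} dx, truncated at M; the tail of the
    '1' part is added in closed form (∫_M^∞ x^{-1-p} dx = M^{-p} / p),
    while the oscillatory cos-tail, of order M^{-1-p}, is neglected."""
    body, _ = quad(lambda x: (1.0 - cos(x)) / x ** (1.0 + p), 0.0, M, limit=1000)
    return p * body + M ** (-p)

# Closed form of the left-hand side ∫_0^∞ sin(x) x^{-p} dx, valid for p != 1
lhs = lambda p: gamma(1.0 - p) * cos(p * pi / 2.0)
```

Both sides agree to within the quadrature tolerance across the convergence range 0 < p < 2.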
The exponent of regular variation of the Fourier function F(θ) is indeed restricted to the range 0 < α ≤ 2, as we now argue. The Fourier transform of the random variable W_n(1) – the position, at time t = 1, of the renormalized random walk W_n – is given by
\[
E\bigl[\exp\bigl(i\theta W_{n}(1)\bigr)\bigr]=\exp\bigl(-F_{n}(\theta)\bigr) \tag{76}
\]
(−∞ < θ < ∞). Thus, the convergence of the Fourier function F_n(θ) to a limiting Fourier function F_∞(θ) = lim_{n→∞} F_n(θ) implies that the random variable W_n(1) converges, in law, to a limiting random variable W_∞(1) whose distribution is characterized by the Fourier transform
\[
E\bigl[\exp\bigl(i\theta W_{\infty}(1)\bigr)\bigr]=\exp\bigl(-F_{\infty}(\theta)\bigr) \tag{77}
\]
(−∞ < θ < ∞). Setting θ = 0 in Eq. (77) implies that F_∞(0) = 0. Moreover, differentiating Eq. (77) twice and then setting θ = 0 implies that the variance of the random variable W_∞(1) is given by Var[W_∞(1)] = F′′_∞(0). Since F_∞(θ) = |θ|^α we obtain that: (i) the exponent α must be positive (otherwise we won't have F_∞(0) = 0); (ii) the exponent α cannot be greater than 2 (otherwise we would have Var[W_∞(1)] = 0 which, in turn, implies that the random variable W_∞(1) is zero with unit probability). Thus, the exponent α cannot take values beyond the range 0 < α ≤ 2.
8.3. The increments of random walks and Langevin motions
Consider a random noise N = (N(t))_{−∞<t<∞} whose process-distribution is given by the Fourier functional
\[
E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N(t)\,dt\right)\right]
=\exp\left(-\int_{-\infty}^{\infty}F(\varphi(t))\,dt\right), \tag{78}
\]
where F(θ) (−∞ < θ < ∞) is the corresponding Fourier function, and where ϕ(·) is an arbitrary test function for which the integral appearing on the right-hand side of Eq. (78) converges. Note that setting the zero test function ϕ(t) = 0 in Eq. (78) implies that the Fourier function F(θ) vanishes at the origin: F(0) = 0.
The random noise N is stationary. Indeed, given a real shift s, consider the shifted random noise N_s = (N_s(t))_{−∞<t<∞} given by N_s(t) = N(s + t). Eq. (78) implies that
\[
\begin{aligned}
E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N_{s}(t)\,dt\right)\right]
&=E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N(s+t)\,dt\right)\right]
=E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t-s)\,N(t)\,dt\right)\right]\\
&=\exp\left(-\int_{-\infty}^{\infty}F(\varphi(t-s))\,dt\right)
=\exp\left(-\int_{-\infty}^{\infty}F(\varphi(t))\,dt\right)\\
&=E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N(t)\,dt\right)\right]. \tag{79}
\end{aligned}
\]
Eq. (79) implies that the process-distribution of the shifted random noise N_s coincides with the process-distribution of the random noise N. Namely, the shifted random noise N_s and the ‘‘original’’ random noise N are equal in law. Thus the random noise N is stationary, as it is shift-invariant.
The increment N_I of the random noise N over the interval I is defined by
\[
N_{I}=\int_{I}N(t)\,dt. \tag{80}
\]
Eq. (78) implies that the distribution of the increment N_I is characterized by the Fourier transform
\[
E\bigl[\exp\bigl(i\theta N_{I}\bigr)\bigr]=\exp\left(-\int_{I}F(\theta)\,dt\right)=\exp\bigl(-F(\theta)\,|I|\bigr) \tag{81}
\]
(−∞ < θ < ∞), where |I| denotes the length of the interval I. Eq. (81) implies that the random noise N has stationary increments. Indeed, from Eq. (81) it is evident that all increments of the same length are identically distributed. Thus, the increments' distributions are invariant under shifts, i.e., the increments are stationary.
Let {I_k}_k be a finite collection of disjoint intervals, and let {θ_k}_k be arbitrary real numbers. Denote by ϕ_k(·) the indicator function of the interval I_k; specifically, ϕ_k(t) = 1 when t ∈ I_k, and ϕ_k(t) = 0 when t ∉ I_k. Eqs. (78) and (81) imply that
\[
\begin{aligned}
E\left[\prod_{k}\exp\bigl(i\theta_{k}N_{I_{k}}\bigr)\right]
&=E\left[\prod_{k}\exp\left(i\theta_{k}\int_{-\infty}^{\infty}\varphi_{k}(t)\,N(t)\,dt\right)\right]
=E\left[\exp\left(i\int_{-\infty}^{\infty}\Bigl(\sum_{k}\theta_{k}\varphi_{k}(t)\Bigr)N(t)\,dt\right)\right]\\
&=\exp\left(-\int_{-\infty}^{\infty}F\Bigl(\sum_{k}\theta_{k}\varphi_{k}(t)\Bigr)dt\right)
=\exp\left(-\sum_{k}\int_{-\infty}^{\infty}F(\theta_{k})\,\varphi_{k}(t)\,dt\right)\\
&=\exp\left(-\sum_{k}\int_{I_{k}}F(\theta_{k})\,dt\right)
=\prod_{k}\exp\bigl(-F(\theta_{k})\,|I_{k}|\bigr)
=\prod_{k}E\bigl[\exp\bigl(i\theta_{k}N_{I_{k}}\bigr)\bigr] \tag{82}
\end{aligned}
\]
(the fourth equality uses (i) the fact that the intervals {I_k}_k are disjoint, and (ii) the fact that F(0) = 0). Eq. (82) implies that the random noise N has independent increments. Thus, we conclude that: the random noise N has stationary and independent increments.
The random walk W = (W(t))_{t≥0} induced by the random noise N is given by the positions
\[
W(t)=\int_{0}^{t}N(t')\,dt' \tag{83}
\]
(t ≥ 0). The increments of the random walk W coincide with the increments of the random noise N (restricted to the non-negative half-line). Thus, we further conclude that: the random walk W has stationary and independent increments.
Consider now a general Langevin motion X_s = (X_s(t))_{t≥0} driven by the shifted random noise N_s, where s is a positive shift. Specifically, the positions of the Langevin motion X_s are given by
\[
X_{s}(t)=\int_{-\infty}^{0}\bigl[K(t-t')-K(0-t')\bigr]N_{s}(t')\,dt'
+\int_{0}^{t}K(t-t')\,N_{s}(t')\,dt' \tag{84}
\]
(t ≥ 0), where K(τ) (τ > 0) is the corresponding kernel function. Note that if we extend the kernel function K(τ) to the entire real line – by setting K(τ) = 0 for all τ ≤ 0 – then Eq. (84) can be rewritten in the form
\[
X_{s}(t)=\int_{-\infty}^{\infty}\bigl[K(t-t')-K(0-t')\bigr]N_{s}(t')\,dt' \tag{85}
\]
(t ≥ 0). Since the shifted random noise N_s is equal, in law, to the non-shifted random noise N = N_0, so are the corresponding Langevin motions: the Langevin motion X_s is equal, in law, to the Langevin motion X = X_0. In particular, the position X_s(t) is equal, in law, to the position X(t) (s, t > 0).
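The equivalence of Eqs. (84) and (85) under the extended kernel is purely algebraic, and can be confirmed on a grid. The sketch below (a hypothetical power-law kernel exponent 0.4 and a sampled Gaussian noise path, both arbitrary) evaluates the two discretized forms and checks that they coincide.

```python
import numpy as np

rng = np.random.default_rng(4)

dt = 0.01
t_grid = np.arange(-5.0, 5.0, dt)          # noise history plus running time
noise = rng.standard_normal(t_grid.size)   # one sampled noise path

def K(tau):
    """Extended kernel: a hypothetical power law tau^0.4 for tau > 0, zero otherwise."""
    return np.where(tau > 0, np.abs(tau) ** 0.4, 0.0)

t = 2.0
past = t_grid < 0
run = (t_grid >= 0) & (t_grid < t)

# Eq. (84): history integral (t' < 0) plus running integral (0 <= t' < t)
x84 = (np.sum((K(t - t_grid[past]) - K(0 - t_grid[past])) * noise[past])
       + np.sum(K(t - t_grid[run]) * noise[run])) * dt
# Eq. (85): a single integral with the extended kernel
x85 = np.sum((K(t - t_grid) - K(0 - t_grid)) * noise) * dt
```

For 0 ≤ t′ < t the history term K(0 − t′) vanishes, and for t′ ≥ t both kernel terms vanish, so the two sums contain identical contributions.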
Given a positive shift s, consider the increment X(s + t) − X(s) of the Langevin motion X. Note that
\[
\begin{aligned}
X(s+t)-X(s)
&=\int_{-\infty}^{\infty}\bigl[K(s+t-t')-K(0-t')\bigr]N(t')\,dt'
-\int_{-\infty}^{\infty}\bigl[K(s-t')-K(0-t')\bigr]N(t')\,dt'\\
&=\int_{-\infty}^{\infty}\bigl[K(s+t-t')-K(s-t')\bigr]N(t')\,dt'\\
&=\int_{-\infty}^{\infty}\bigl[K(t-t')-K(0-t')\bigr]N(s+t')\,dt'\\
&=\int_{-\infty}^{\infty}\bigl[K(t-t')-K(0-t')\bigr]N_{s}(t')\,dt'
=X_{s}(t). \tag{86}
\end{aligned}
\]
Eq. (86) implies that the increment X(s + t) − X(s) of the Langevin motion X is equal to the position X_s(t) of the Langevin motion X_s. On the other hand, the position X_s(t) is equal, in law, to the position X(t). Hence, we obtain that the increment X(s + t) − X(s) is equal, in law, to the position X(t). We thus conclude that: the Langevin motion X has stationary increments.
8.4. Fourier inversion of the limiting Fourier transforms
Consider the probability density function f_α(x) (−∞ < x < ∞) whose Fourier transform is given by
\[
\hat{f}_{\alpha}(\theta)=\exp\bigl(-|\theta|^{\alpha}\bigr) \tag{87}
\]
(−∞ < θ < ∞), where 0 < α ≤ 2. Fourier inversion implies that
\[
f_{\alpha}(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp(-i\theta x)\exp\bigl(-|\theta|^{\alpha}\bigr)\,d\theta \tag{88}
\]
(using the symmetry of the function exp(−|θ|^α))
\[
=\frac{1}{\pi}\int_{0}^{\infty}\cos(\theta x)\exp\bigl(-\theta^{\alpha}\bigr)\,d\theta \tag{89}
\]
(using a Taylor expansion of the function cos(θx))
\[
=\frac{1}{\pi}\sum_{m=0}^{\infty}\frac{(-1)^{m}x^{2m}}{(2m)!}\int_{0}^{\infty}\exp\bigl(-\theta^{\alpha}\bigr)\theta^{2m}\,d\theta. \tag{90}
\]
Note that (using the change of variables u = θ^α)
\[
\int_{0}^{\infty}\exp\bigl(-\theta^{\alpha}\bigr)\theta^{2m}\,d\theta
=\frac{1}{\alpha}\int_{0}^{\infty}\exp(-u)\,u^{\frac{2m+1}{\alpha}-1}\,du
=\frac{1}{\alpha}\,\Gamma\!\left(\frac{2m+1}{\alpha}\right). \tag{91}
\]
Substituting Eq. (91) into Eq. (90) we conclude that
\[
f_{\alpha}(x)=\frac{1}{\pi\alpha}\sum_{m=0}^{\infty}(-1)^{m}\,
\frac{\Gamma\!\left(\frac{2m+1}{\alpha}\right)}{\Gamma(2m+1)}\,x^{2m} \tag{92}
\]
(−∞ < x < ∞).
At the exponent value α = 1, Eq. (92) yields the Cauchy probability density function:
\[
f_{1}(x)=\frac{1}{\pi}\sum_{m=0}^{\infty}\bigl(-x^{2}\bigr)^{m}
=\frac{1}{\pi\bigl(1+x^{2}\bigr)} \tag{93}
\]
(−∞ < x < ∞). At the exponent value α = 2, a Gamma-function calculation implies that
\[
\frac{\Gamma\!\left(m+\frac{1}{2}\right)}{\Gamma(2m+1)}=\frac{\sqrt{\pi}}{4^{m}\,m!}. \tag{94}
\]
Substituting Eq. (94) into Eq. (92), the exponent value α = 2 yields the Gauss probability density function:
\[
f_{2}(x)=\frac{1}{2\sqrt{\pi}}\sum_{m=0}^{\infty}\frac{1}{m!}\left(-\frac{x^{2}}{4}\right)^{m}
=\frac{1}{2\sqrt{\pi}}\exp\left(-\frac{x^{2}}{4}\right) \tag{95}
\]
(−∞ < x < ∞).
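The series of Eq. (92) can be checked directly against the closed forms of Eqs. (93) and (95). Note that at α = 1 the series converges only for |x| < 1, so the Cauchy comparison below is made inside that disk; the truncation length is an arbitrary choice.

```python
from math import gamma, pi, exp, sqrt

def f_alpha(x, alpha, terms=50):
    """Truncated power series of Eq. (92) for the symmetric stable density;
    for alpha = 1 the series converges only for |x| < 1."""
    total = 0.0
    for m in range(terms):
        total += (-1.0) ** m * gamma((2 * m + 1) / alpha) / gamma(2 * m + 1) * x ** (2 * m)
    return total / (pi * alpha)

cauchy = lambda x: 1.0 / (pi * (1.0 + x * x))            # Eq. (93)
gauss = lambda x: exp(-x * x / 4.0) / (2.0 * sqrt(pi))   # Eq. (95)
```

For α = 2 the series is entire and the truncation error decays factorially; for α = 1 and |x| < 1 it is geometric in x².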
8.5. Heavy tails of the Lévy distribution
Consider a Lévy-distributed random variable L, whose Lévy distribution is characterized by the Fourier transform
\[
E\bigl[\exp(i\theta L)\bigr]=\exp\bigl(-|\theta|^{\alpha}\bigr) \tag{96}
\]
(−∞ < θ < ∞), where 0 < α < 2. The Fourier transform of Eq. (96) implies that the random variable L is symmetric: the random variable L is equal, in law, to the random variable −L. Set P(x) = Pr(|L| > x) (x ≥ 0). Integration by parts, combined with the aforementioned symmetry, implies that
\[
1-\exp\bigl(-|\theta|^{\alpha}\bigr)=\int_{0}^{\infty}\sin(x)\,P\!\left(\frac{x}{\theta}\right)dx \tag{97}
\]
(−∞ < θ < ∞).
Eq. (97) further implies that
\[
\frac{1-\exp\bigl(-|\theta|^{\alpha}\bigr)}{|\theta|^{\alpha}}
=\int_{0}^{\infty}\sin(x)\left[\frac{1}{|\theta|^{\alpha}}\,P\!\left(\frac{x}{\theta}\right)\right]dx \tag{98}
\]
(−∞ < θ < ∞). Consider now the limit θ → 0. Since the left-hand side of Eq. (98) converges to unity as θ → 0, the following limit function must exist:
\[
P_{\infty}(x)=\lim_{\theta\to 0}\frac{1}{|\theta|^{\alpha}}\,P\!\left(\frac{x}{\theta}\right)
=\lim_{m\to\infty}m^{\alpha}\,P(mx) \tag{99}
\]
(x > 0).
Eq. (99) implies that
\[
P_{\infty}(x)=\frac{1}{x^{\alpha}}\,P_{\infty}(1) \tag{100}
\]
(x > 0).
Taking the limit θ → 0 in Eq. (98), and using Eq. (100), we obtain that
\[
1=P_{\infty}(1)\int_{0}^{\infty}\sin(x)\,x^{-\alpha}\,dx=P_{\infty}(1)\,\alpha I_{\alpha}, \tag{101}
\]
where I_α = ∫₀^∞ [1 − cos(x)] x^{−1−α} dx (note that the integral I_α is indeed convergent in the exponent range 0 < α < 2).
Setting x = 1 in Eq. (99), and using Eq. (101), we conclude that
\[
\lim_{m\to\infty}\frac{\Pr(|L|>m)}{m^{-\alpha}}=\frac{1}{\alpha I_{\alpha}}. \tag{102}
\]
Namely, we obtained that the tail probability Pr(|L| > m) is asymptotically proportional to the power-law 1/m^α.
The asymptotic equivalence of Eq. (102) implies that: the moment E[|L|^p] is convergent in the range 0 < p < α, and is divergent in the range p ≥ α. Since 0 < α < 2 we obtain that: (i) the variance of the random variable L is infinite; (ii) the mean of the random variable L is ill-defined in the exponent range 0 < α ≤ 1, and is zero (due to the symmetry of the random variable L) in the exponent range 1 < α < 2.
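For α = 1 the Fourier transform exp(−|θ|) is the standard Cauchy law, whose tail is available in closed form, so the tail limit obtained from Eqs. (99) and (101) can be checked by hand: with I₁ = ∫₀^∞ (1 − cos x) x^{−2} dx = π/2, the predicted limit 1/(αI_α) equals 2/π. The sketch below confirms this numerically.

```python
from math import atan, pi

# Standard Cauchy tail, in closed form: Pr(|L| > m) = 1 - (2/pi) * atan(m)
tail = lambda m: 1.0 - (2.0 / pi) * atan(m)

# Eqs. (99) and (101) predict m * Pr(|L| > m) -> 1/(alpha * I_alpha) = 2/pi
scaled = [m * tail(m) for m in (10.0, 100.0, 1000.0)]
```

The scaled tail approaches 2/π monotonically from below, with a correction of order 1/m².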
8.6. Selfsimilarity of fractional motions
Consider a fractional motion X = (X(t))_{t≥0} with Fourier function F(θ) = |θ|^α (−∞ < θ < ∞), and kernel function K(τ) = τ^{γ/α} (τ > 0). Note that if we extend the kernel function K(τ) to the entire real line – by setting K(τ) = 0 for all τ ≤ 0 – then the positions of the fractional motion X can be rewritten in the form
\[
X(t)=\int_{-\infty}^{\infty}\bigl[K(t-t')-K(0-t')\bigr]N(t')\,dt' \tag{103}
\]
(t ≥ 0).
The process-distribution of the random noise N = (N(t))_{−∞<t<∞} is characterized by the Fourier functional
\[
E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N(t)\,dt\right)\right]
=\exp\left(-\int_{-\infty}^{\infty}F(\varphi(t))\,dt\right), \tag{104}
\]
where ϕ(·) is an arbitrary test function for which the integral appearing on the right-hand side of Eq. (104) converges.
Given a positive scaling factor n, Eq. (103) and the power-law structure of the kernel function K(τ) = τ^{γ/α} imply that
\[
\begin{aligned}
X(nt)&=\int_{-\infty}^{\infty}\bigl[K(nt-t')-K(0-t')\bigr]N(t')\,dt'\\
&=n\int_{-\infty}^{\infty}\bigl[K(nt-nt')-K(0-nt')\bigr]N(nt')\,dt'\\
&=n^{1+(\gamma/\alpha)}\int_{-\infty}^{\infty}\bigl[K(t-t')-K(0-t')\bigr]N(nt')\,dt' \tag{105}
\end{aligned}
\]
(t ≥ 0).
On the other hand, Eq. (104) and the power-law structure of the Fourier function F(θ) = |θ|^α imply that
\[
\begin{aligned}
E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N(nt)\,dt\right)\right]
&=E\left[\exp\left(i\int_{-\infty}^{\infty}\left[\frac{1}{n}\varphi\!\left(\frac{t}{n}\right)\right]N(t)\,dt\right)\right]
=\exp\left(-\int_{-\infty}^{\infty}F\!\left(\frac{1}{n}\varphi\!\left(\frac{t}{n}\right)\right)dt\right)\\
&=\exp\left(-n\int_{-\infty}^{\infty}F\!\left(\frac{1}{n}\varphi(t)\right)dt\right)
=\exp\left(-\int_{-\infty}^{\infty}F\!\left(n^{(1/\alpha)-1}\varphi(t)\right)dt\right)\\
&=E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\left[n^{(1/\alpha)-1}N(t)\right]dt\right)\right]. \tag{106}
\end{aligned}
\]
From Eq. (106) we obtain that the random noise (N(nt))_{−∞<t<∞} is equal, in law, to the random noise (n^{(1/α)−1}N(t))_{−∞<t<∞}. In turn, from Eq. (105) we obtain that the random motion (X(nt))_{t≥0} is equal, in law, to the random motion (n^{(1+γ)/α}X(t))_{t≥0}. This equivalence is the very definition of self-similarity with Hurst exponent
\[
H=\frac{1+\gamma}{\alpha}. \tag{107}
\]
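In the α = 2, γ = 0 case the fractional motion is Brownian motion and Eq. (107) gives H = 1/2. The Monte Carlo sketch below (grid resolution and path count are arbitrary choices) checks the variance scaling implied by self-similarity, Var[X(nt)] = n^{2H} Var[X(t)].

```python
import numpy as np

rng = np.random.default_rng(3)

# Brownian paths on [0, 1]: the alpha = 2, gamma = 0 fractional motion
n_paths, n_steps = 20000, 256
dt = 1.0 / n_steps
X = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)

n, H = 4, 0.5
var_full = np.var(X[:, -1])                  # Var[X(1)]
var_part = np.var(X[:, n_steps // n - 1])    # Var[X(1/n)] = Var[X(1/4)]
ratio = var_full / var_part                  # self-similarity predicts n^{2H} = 4
```

Up to Monte Carlo error, the variance ratio reproduces the predicted factor n^{2H}.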
8.7. Correlations of motions with stationary increments
In this section Var[ξ] = E[ξ²] − E[ξ]² denotes the variance of a real-valued random variable ξ, and Cov[ξ₁, ξ₂] = E[ξ₁ξ₂] − E[ξ₁]E[ξ₂] denotes the covariance of a pair of real-valued random variables {ξ₁, ξ₂}.
Consider a random motion Y = (Y(t))_{t≥0} with stationary increments and finite variances. Given positive times a < b we have
\[
\begin{aligned}
\mathrm{Var}[Y(b)]&=\mathrm{Var}\bigl[Y(a)+(Y(b)-Y(a))\bigr]\\
&=\mathrm{Var}[Y(a)]+\mathrm{Var}[Y(b)-Y(a)]+2\,\mathrm{Cov}\bigl[Y(a),(Y(b)-Y(a))\bigr]\\
&=\mathrm{Var}[Y(a)]+\mathrm{Var}[Y(b-a)]+2\,\mathrm{Cov}[Y(a),Y(b)]-2\,\mathrm{Var}[Y(a)]\\
&=-\mathrm{Var}[Y(a)]+\mathrm{Var}[Y(b-a)]+2\,\mathrm{Cov}[Y(a),Y(b)]. \tag{108}
\end{aligned}
\]
In turn, Eq. (108) implies that
\[
\mathrm{Cov}[Y(a),Y(b)]=\frac{1}{2}\bigl(\mathrm{Var}[Y(a)]+\mathrm{Var}[Y(b)]-\mathrm{Var}[Y(b-a)]\bigr). \tag{109}
\]
Consider now the ‘‘noise sequence’’ ∆Y = (∆Y(t))_{t=0}^{∞} of the random motion Y, given by the motion's consecutive unit-length increments: ∆Y(t) = Y(t + 1) − Y(t) (t = 0, 1, 2, . . .). Using Eq. (109) we obtain that
\[
\begin{aligned}
\mathrm{Cov}[\Delta Y(0),\Delta Y(l)]
&=\mathrm{Cov}[Y(1)-Y(0),\,Y(l+1)-Y(l)]\\
&=\mathrm{Cov}[Y(1),Y(l+1)]-\mathrm{Cov}[Y(1),Y(l)]-\mathrm{Cov}[Y(0),Y(l+1)]+\mathrm{Cov}[Y(0),Y(l)]\\
&=\tfrac{1}{2}\bigl(\mathrm{Var}[Y(1)]+\mathrm{Var}[Y(l+1)]-\mathrm{Var}[Y(l)]\bigr)
-\tfrac{1}{2}\bigl(\mathrm{Var}[Y(1)]+\mathrm{Var}[Y(l)]-\mathrm{Var}[Y(l-1)]\bigr)\\
&\quad-\tfrac{1}{2}\bigl(\mathrm{Var}[Y(0)]+\mathrm{Var}[Y(l+1)]-\mathrm{Var}[Y(l+1)]\bigr)
+\tfrac{1}{2}\bigl(\mathrm{Var}[Y(0)]+\mathrm{Var}[Y(l)]-\mathrm{Var}[Y(l)]\bigr)\\
&=\tfrac{1}{2}\bigl(\mathrm{Var}[Y(l-1)]-2\,\mathrm{Var}[Y(l)]+\mathrm{Var}[Y(l+1)]\bigr). \tag{110}
\end{aligned}
\]
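For a self-similar motion with finite variances (e.g., fractional Brownian motion, for which Var[Y(t)] = t^{2H}), Eq. (110) reduces to the second difference of t^{2H}. The sketch below evaluates it and recovers the antipersistent/uncorrelated/persistent trichotomy in the Hurst exponent H.

```python
def noise_cov(l, H):
    """Eq. (110) with Var[Y(t)] = t^{2H} (fractional Brownian motion):
    autocovariance of the unit-length increment sequence at lag l >= 1."""
    v = lambda t: t ** (2.0 * H)
    return 0.5 * (v(l - 1) - 2.0 * v(l) + v(l + 1))
```

Since t^{2H} is concave for H < 1/2, linear for H = 1/2, and convex for H > 1/2, the second difference is respectively negative, zero, and positive.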
8.8. The jump structure of Lévy motions
Consider a random noise N = (N(t))_{−∞<t<∞} given by
\[
N(t)=\sum_{k}s_{k}\,\delta(t-t_{k}), \tag{111}
\]
where P = {(s_k, t_k)}_k is a Poisson process on the real plane X = (−∞, ∞) × (−∞, ∞) with intensity Λ(s, t) = a/|s|^{1+α} (−∞ < s, t < ∞; the coefficient a and the exponent α are positive parameters). Note that
\[
\int_{-\infty}^{\infty}\varphi(t)\,N(t)\,dt=\sum_{k}s_{k}\,\varphi(t_{k}), \tag{112}
\]
where ϕ(·) is an arbitrary test function. Using Eq. (61) we obtain that
\[
E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N(t)\,dt\right)\right]
=E\left[\prod_{k}\exp\bigl(is_{k}\varphi(t_{k})\bigr)\right]
=\exp\left(-\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\bigl[1-\exp(is\varphi(t))\bigr]\frac{a}{|s|^{1+\alpha}}\,ds\,dt\right). \tag{113}
\]
In turn,
\[
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\bigl[1-\exp(is\varphi(t))\bigr]\frac{a}{|s|^{1+\alpha}}\,ds\,dt
=2a\int_{-\infty}^{\infty}\left[\int_{0}^{\infty}\frac{1-\cos(s\,\varphi(t))}{s^{1+\alpha}}\,ds\right]dt
=2aI_{\alpha}\int_{-\infty}^{\infty}|\varphi(t)|^{\alpha}\,dt, \tag{114}
\]
where I_α = ∫₀^∞ [1 − cos(x)] x^{−1−α} dx. The integral I_α is convergent in the exponent range 0 < α < 2, and with no loss of generality we can set a = 1/(2I_α) and thus obtain that
\[
E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N(t)\,dt\right)\right]
=\exp\left(-\int_{-\infty}^{\infty}|\varphi(t)|^{\alpha}\,dt\right), \tag{115}
\]
where ϕ(·) is an arbitrary test function satisfying ∫_{−∞}^{∞} |ϕ(t)|^α dt < ∞.
Eq. (115) implies that a random noise N with Fourier function F(θ) = |θ|^α (−∞ < θ < ∞) – with exponent 0 < α < 2 – admits the Poissonian stochastic representation of Eq. (111). The Lévy motion is the random walk W = (W(t))_{t≥0} constructed by ‘‘riding’’ the random noise N; namely, W(t) = ∫₀^t N(t′) dt′ (t ≥ 0). Thus the Poissonian stochastic representation of Eq. (111) implies that the Lévy motion W is a ‘‘pure jump’’ process, i.e., its propagation is only via jumps. The jumps of the Lévy motion W are the ‘‘shocks’’ {s_k}_k of the random noise N.
Let x be a positive number, and consider the process N_x = (N_x(t))_{−∞<t<∞} given by
\[
N_{x}(t)=\sum_{k}\chi(s_{k}>x)\,\delta(t-t_{k}), \tag{116}
\]
where χ(s_k > x) is the indicator function of the event ‘‘the shock s_k is greater than the size x’’. The process N_x tracks the jumps of the Lévy motion W that exceed the size x. Note that
\[
\int_{-\infty}^{\infty}\varphi(t)\,N_{x}(t)\,dt=\sum_{k}\chi(s_{k}>x)\,\varphi(t_{k}), \tag{117}
\]
where ϕ(·) is an arbitrary test function. Using Eq. (61) we obtain that
\[
E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N_{x}(t)\,dt\right)\right]
=E\left[\prod_{k}\exp\bigl(i\,\chi(s_{k}>x)\,\varphi(t_{k})\bigr)\right]
=\exp\left(-\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\bigl[1-\exp\bigl(i\,\chi(s>x)\,\varphi(t)\bigr)\bigr]\frac{a}{|s|^{1+\alpha}}\,ds\,dt\right). \tag{118}
\]
In turn,
\[
\begin{aligned}
&\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\bigl[1-\exp\bigl(i\,\chi(s>x)\,\varphi(t)\bigr)\bigr]\frac{a}{|s|^{1+\alpha}}\,ds\,dt\\
&\qquad=\int_{-\infty}^{\infty}\left[\int_{x}^{\infty}\bigl[1-\exp(i\varphi(t))\bigr]\frac{a}{s^{1+\alpha}}\,ds\right]dt
=\int_{-\infty}^{\infty}\bigl[1-\exp(i\varphi(t))\bigr]\left[\int_{x}^{\infty}\frac{a}{s^{1+\alpha}}\,ds\right]dt\\
&\qquad=\int_{-\infty}^{\infty}\bigl[1-\exp(i\varphi(t))\bigr]\left[\frac{a}{\alpha x^{\alpha}}\right]dt. \tag{119}
\end{aligned}
\]
Substituting Eq. (119) into Eq. (118) we obtain that the Fourier functional of the process N_x is given by
\[
E\left[\exp\left(i\int_{-\infty}^{\infty}\varphi(t)\,N_{x}(t)\,dt\right)\right]
=\exp\left(-\int_{-\infty}^{\infty}\bigl[1-\exp(i\varphi(t))\bigr]\frac{c_{\alpha}}{x^{\alpha}}\,dt\right), \tag{120}
\]
where c_α = 1/(2αI_α). Eq. (120) implies that N_x is a Poisson process with constant intensity λ_x = c_α/x^α. Consequently, we conclude that: the Lévy-motion jumps of size greater than x (x > 0) form a standard Poisson process with constant intensity c_α/x^α. An analogous analysis implies that: the Lévy-motion jumps of size smaller than x (x < 0) form a standard Poisson process with constant intensity c_α/|x|^α.
Acknowledgments
The authors wish to thank Dr. Marcin Magdziarz, of Wroclaw University of Technology, for enlightening discussions
regarding the topological properties and the simulation of the trajectories of fractional motions.
References
[1] C. Gardiner, Handbook of Stochastic Methods, Springer, New York, 2004.
[2] N.G. Van Kampen, Stochastic Processes in Physics and Chemistry, 3rd edition, North-Holland, Amsterdam, 2007.
[3] Z. Schuss, Theory and Applications of Stochastic Processes: An Analytical Approach, Springer, New York, 2009.
[4] L.F. Richardson, Proc. Roy. Soc. 110 (1926) 709.
[5] T. Solomon, E. Weeks, H. Swinney, Phys. Rev. Lett. 71 (1993) 3975.
[6] G.M. Viswanathan, et al., Nature 401 (1999) 911.
[7] D.W. Sims, et al., Nature 451 (2008) 1098.
[8] O. Benichou, C. Loverdo, M. Moreau, R. Voituriez, Rev. Modern Phys. 83 (2011) 81.
[9] J.W. Kirchner, X. Feng, C. Neal, Nature 403 (2000) 524.
[10] H. Scher, G. Margolin, R. Metzler, J. Klafter, B. Berkowitz, Geophys. Res. Lett. 29 (2002) 1061.
[11] W. Leland, et al., IEEE/ACM Trans. Net. 2 (1994) 1.
[12] V. Paxson, S. Floyd, IEEE/ACM Trans. Net. 3 (1994) 226.
[13] R.N. Mantegna, H.E. Stanley, Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge University Press, Cambridge, 2007.
[14] J. Voit, The Statistical Mechanics of Financial Markets, Springer, New York, 2010.
[15] R. Brown, Phil. Mag. 4 (1828) 161.
[16] L. Bachelier, Ann. Sci. Ecole Norm. Sup. 17 (1900) 21;
L. Bachelier, Louis Bachelier’s Theory of Speculation: The Origins of Modern Finance, Princeton University Press, Princeton, 2006, (translated by
M. Davis and A. Etheridge).
[17] A. Einstein, Ann. Phys. 17 (1905) 549.
[18] M. von Smoluchowski, Ann. Phys. 21 (1906) 756.
[19] N. Wiener, J. Math. Phys. 2 (1923) 131.
[20] B.B. Mandelbrot, Fractals and Scaling in Finance, Springer, New York, 1997.
[21] B. Mandelbrot, N.N. Taleb, in: F. Diebold, N. Doherty, R. Herring (Eds.), The Known, the Unknown and the Unknowable in Financial Institutions,
Princeton University Press, Princeton, 2010, pp. 47–58.
[22] B.B. Mandelbrot, J.R. Wallis, Water Resour. Res. 4 (1968) 909.
[23] A. Janicki, A. Weron, Simulation and Chaotic Behavior of Stable Stochastic Processes, Marcel Dekker, New York, 1994.
[24] G. Samorodnitsky, M.S. Taqqu, Stable Non-Gaussian Random Processes, Chapman and Hall, New York, 1994.
[25] P. Lévy, Théorie de l'addition des Variables Aléatoires, Gauthier-Villars, Paris, 1954.
[26] W. Feller, An Introduction to Probability Theory and its Applications, vol. 2, 2nd edition, Wiley, New York, 1971.
[27] D.R. Cox, in: H.A. David, H.T. David (Eds.), Statistics: An Appraisal, Iowa State University Press, Ames, 1984, pp. 55–74.
[28] J. Beran, Statistics for Long-memory Processes, Chapman & Hall/CRC, London, 1994.
[29] P. Doukhan, G. Oppenheim, M.S. Taqqu (Eds.), Theory and Applications of Long-range Dependence, Birkhauser, Boston, 2003.
[30] G. Rangarajan, M. Ding (Eds.), Processes with Long-range Correlations: Theory and Applications, Springer, New York, 2003.
[31] A.N. Kolmogorov, Dokl. Akad. Nauk SSSR 26 (1940) 115.
[32] A.M. Yaglom, Amer. Math. Soc. Transl. 8 (1958) 87.
[33] B.B. Mandelbrot, Compt. Rend. 260 (1965) 3274.
[34] B.B. Mandelbrot, J.W. van Ness, SIAM Rev. 10 (1968) 422.
[35] P. Embrechts, M. Maejima, Self-similar Processes, Princeton University Press, Princeton, 2002.
[36] M.S. Taqqu, R. Wolpert, Z. Wahrscheinlichkeitstheor. Verwandte Geb. 62 (1983) 53.
[37] M. Maejima, Z. Wahrscheinlichkeitstheor. Verwandte Geb. 62 (1983) 235.
[38] J.P. Bouchaud, A. Georges, Phys. Rep. 195 (1990) 12.
[39] R. Metzler, J. Klafter, Phys. Rep. 339 (2000) 1.
[40] I.M. Sokolov, J. Klafter, Chaos 15 (2005) 026103.
[41] J. Klafter, I.M. Sokolov, Phys. World 18 (2005) 29.
[42] H. Scher, M. Lax, Phys. Rev. B 7 (1973) 4502.
[43] M.F. Shlesinger, J. Stat. Phys. 10 (1974) 421.
[44] H. Scher, E.W. Montroll, Phys. Rev. B 12 (1975) 2455.
[45] J.M. Sancho, A.M. Lacasta, K. Lindenberg, I.M. Sokolov, A.H. Romero, Phys. Rev. Lett. 92 (2004) 250601; Phys. Rev. Lett. 94 (2005) 188902.
[46] M. Khoury, A.M. Lacasta, J.M. Sancho, K. Lindenberg, Phys. Rev. Lett. 106 (2011) 090602.
[47] I. Eliazar, J. Klafter, Ann. Phys. 326 (2011) 2517.
[48] B.J. West, M.F. Shlesinger, Internat. J. Modern Phys. B 3 (1989) 795.
[49] I. Eliazar, J. Klafter, Phys. Rev. E 82 (2010) 021109.
[50] I. Eliazar, M.F. Shlesinger, J. Phys. A: Math. Theor. 45 (2012) 162002.
[51] P. Langevin, C.R. Acad. Sci. Paris 146 (1908) 530.
[52] W.T. Coffey, Yu.P. Kalmykov, J.T. Waldron, The Langevin Equation, 2nd edition, World Scientific, Singapore, 2004.
[53] H. Risken, The Fokker–Planck Equation: Methods of Solutions and Applications, Springer, New York, 1996.
[54] I.M. Sokolov, J. Klafter, A. Blumen, Phys. Today 55 (2002) 48.
[55] M.M. Meerschaert, A. Sikorskii, Stochastic Models for Fractional Calculus, De Gruyter, Boston, 2011.
[56] J. Klafter, S.C. Lim, R. Metzler (Eds.), Fractional Dynamics: Recent Advances, World Scientific, New York, 2011.
[57] D. Baleanu, K. Diethelm, E. Scalas, J.J. Trujillo, Fractional Calculus Models and Numerical Methods, World Scientific, New York, 2012.
[58] V.M. Kenkre, E.W. Montroll, M.F. Shlesinger, J. Stat. Phys. 9 (1973) 45.
[59] H.S. Wio, Path Integrals for Stochastic Processes: An Introduction, World Scientific, New York, 2013.
[60] P. Brémaud, Mathematical Principles of Signal Processing, Springer, New York, 2002.
[61] G.R. Arce, Nonlinear Signal Processing: A Statistical Approach, Wiley, New York, 2004.
[62] D.C. Montgomery, C.L. Jennings, M. Kulahci, Introduction to Time Series Analysis and Forecasting, Wiley, New York, 2008.
[63] R.S. Tsay, Analysis of Financial Time Series, Wiley, New York, 2005.
[64] G.E. Uhlenbeck, L.S. Ornstein, Phys. Rev. 36 (1930) 823.
[65] J.L. Doob, Ann. of Math. 43 (1942) 351.
[66] S.O. Rice, Bell Syst. Tech. J. 23 (1944) 282.
[67] S.O. Rice, Bell Syst. Tech. J. 24 (1945) 4.
[68] S.B. Lowen, M.C. Teich, Fractal-based Point Processes, Wiley, New York, 2005.
[69] M.F. Shlesinger, E.W. Montroll, Proc. Natl. Acad. Sci. USA 81 (1984) 1280.
[70] J. Klafter, M.F. Shlesinger, Proc. Natl. Acad. Sci. USA 83 (1986) 848.
[71] T.V. Ramakrishnan, L. Raj Lakshmi (Eds.), NonDebye Relaxation in Condensed Matter, World Scientific, Singapore, 1987.
[72] S.B. Lowen, M.C. Teich, Phys. Rev. Lett. 63 (1989) 1755.
[73] S.B. Lowen, M.C. Teich, IEEE Trans. Inform. Theory 36 (1990) 1302.
[74] W.T. Coffey, Yu.P. Kalmykov, Fractals, Diffusions and Relaxation in Disordered Systems, Wiley-Interscience, New York, 2006.
[75] R. Metzler, J. Klafter, J. Jortner, M. Volk, Chem. Phys. Lett. 293 (1998) 477.
[76] H. Yang, et al., Science 302 (2003) 262.
[77] M. Reiner, Deformation, Strain and Flow, Lewis & Co., London, 1969.
[78] R. Metzler, W. Schick, H.G. Kilian, T.F. Nonnenmacher, J. Chem. Phys. 103 (1995) 7180.
[79] J.F.C. Kingman, Poisson Processes, Oxford University Press, Oxford, 1993.
[80] D.R. Cox, V. Isham, Point Processes, CRC, Boca Raton, 2000.
[81] R.L. Streit, Poisson Point Processes, Springer, New York, 2010.
[82] K. Pearson, Nature 72 (1905) 294.
[83] G.H. Weiss, Aspects and Applications of the Random Walk, North-Holland, Amsterdam, 1994.
[84] J. Klafter, I.M. Sokolov, First Steps in Random Walks: From Tools to Applications, Oxford University Press, Oxford, 2011.
[85] N.H. Bingham, C.M. Goldie, J.L. Teugels, Regular Variation, Cambridge University Press, Cambridge, 1987.
[86] G. Williams, D.C. Watts, Trans. Faraday Soc. 66 (1970) 80.
[87] R.V. Chamberlin, G. Mozurkewich, R. Orbach, Phys. Rev. Lett. 52 (1984) 867.
[88] J.C. Phillips, Rep. Progr. Phys. 59 (1996) 1133.
[89] J. Laherrere, D. Sornette, Eur. Phys. J. B 2 (1998) 525.
[90] I. Eliazar, Phys. Rev. E 86 (2012) 031103.
[91] H. Chernoff, Ann. Math. Stat. 23 (1952) 493.
[92] A. Dembo, O. Zeitouni, Large Deviations Techniques and Applications, 2nd edition, Springer, New York, 2009.
[93] I. Eliazar, M.H. Cohen, Physica A 392 (2013) 27.
[94] B.V. Gnedenko, A.N. Kolmogorov, Limit Distributions for Sums of Independent Random Variables, AddisonWesley, London, 1954.
[95] I.A. Ibragimov, Yu.V. Linnik, Independent and Stationary Sequences of Random Variables, Wolters-Noordhoff, Groningen, The Netherlands, 1971.
[96] V.M. Zolotarev, Onedimensional Stable Distributions, American Mathematical Society, Providence, 1986.
[97] J.P. Nolan, Stable Distributions: Models for Heavy Tailed Data, Birkhauser, New York, 2012.
[98] R.J. Adler, R.E. Feldman, M.S. Taqqu (Eds.), A Practical Guide to Heavy Tails: Statistical Techniques and Applications, Birkhauser, Boston, 1998.
[99] M.F. Shlesinger, G.M. Zaslavsky, J. Klafter, Nature 363 (1993) 31.
[100] M.F. Shlesinger, Nature 411 (2001) 641.
[101] S. Condamin, et al., Nature 450 (2007) 77.
[102] D. Brockmann, L. Hufnagel, T. Geisel, Nature 439 (2006) 462.
[103] M.C. Gonzalez, C.A. Hidalgo, A.L. Barabasi, Nature 453 (2008) 779.
[104] P. Barthelemy, J. Bertolotti, D.S. Wiersma, Nature 453 (2008) 495.
[105] A.V. Chechkin, V.Y. Gonchar, M. Szydlowski, Phys. Plasmas 9 (2002) 78.
[106] N. Watkins, et al., Space Sci. Rev. 121 (2005) 271.
[107] A. Weron, et al., Phys. Rev. E 71 (2005) 016113.
[108] D.A. Benson, S.W. Wheatcraft, M.M. Meerschaert, Water Resour. Res. 36 (2000) 1403.
[109] D.A. Benson, R. Schumer, M.M. Meerschaert, S.W. Wheatcraft, Transp. Porous Media 42 (2001) 211.
[110] Z.Q. Deng, L. Bengtsson, V.P. Singh, Environ. Fluid Mech. 6 (2006) 451.
[111] S. Kim, M.L. Kavvas, J. Hydrol. Eng. 11 (2006) 80.
[112] C. Shen, M.S. Phanikumar, Adv. Water Resour. 32 (2009) 1482.
[113] V. Ganti, M.M. Meerschaert, E. Foufoula Georgiou, E. Viparelli, G. Parker, J. Geophys. Res. 115 (2010) F00A12.
[114] J.F. Kelly, R.J. McGough, M.M. Meerschaert, J. Acoust. Soc. Am. 124 (2008) 2861.
[115] A.S. Carasso, Opt. Eng. 4510 (2006) 107004.
[116] K.A. Penson, K. Górska, Phys. Rev. Lett. 105 (2010) 210604.
[117] K. Górska, K.A. Penson, Phys. Rev. E 83 (2011) 061125.
[118] J.M. Chambers, C.L. Mallows, B. Stuck, J. Amer. Statist. Assoc. 71 (1976) 340.
[119] I. Eliazar, J. Klafter, J. Phys. A: Math. Theor. 43 (2010) 132002.
[120] I. Golding, E.C. Cox, Phys. Rev. Lett. 96 (2006) 098102.
[121] J. Szymanski, M. Weiss, Phys. Rev. Lett. 103 (2009) 038102.
[122] A. Caspi, R. Granek, M. Elbaum, Phys. Rev. Lett. 85 (2000) 5655.
[123] S. Burlatsky, G. Oshanin, Theoret. Math. Phys. 75 (1988) 659.
[124] S. Burlatsky, G. Oshanin, A. Mogutov, M. Moreau, Phys. Lett. A 166 (1992) 230.
[125] S. Burlatsky, G. Oshanin, M. Moreau, W. Reinhardt, Phys. Rev. E 54 (1996) 3165.
[126] G. Oshanin, A. Blumen, Phys. Rev. E 49 (1994) 4185.
[127] H. Schiessel, G. Oshanin, A. Blumen, J. Chem. Phys. 103 (1995) 5070.
[128] H. Schiessel, G. Oshanin, A. Blumen, Macromol. Theory Simul. 5 (1996) 45.
[129] S. Jespersen, G. Oshanin, A. Blumen, Phys. Rev. E 63 (2001) 011801.
[130] C.K. Peng, et al., Nature 356 (1992) 168.
[131] S.V. Buldyrev, et al., Phys. Rev. E 51 (1995) 5084.
[132] J.M. Hausdorff, et al., J. Appl. Physiol. 78 (1995) 349.
[133] P.N. Segre, E. Herbolzheimer, P.M. Chaikin, Phys. Rev. Lett. 79 (1997) 2574.
[134] E. Koscielny-Bunde, et al., Phys. Rev. Lett. 81 (1998) 729.
[135] Y. Liu, P. Gopikrishnan, et al., Phys. Rev. E 60 (1999) 1390.
[136] P.C. Ivanov, et al., Chaos 11 (2001) 641.
[137] M. Campillo, A. Paul, Science 299 (2003) 547.
[138] M.S. Taqqu, in: E. Eberlein, M.S. Taqqu (Eds.), Dependence in Probability and Statistics, Birkhauser, Boston, 1986, pp. 137–162.
[139] N.N. Chentsov, Theory Probab. Appl. 1 (1956) 140.
[140] I. Karatzas, S.E. Shreve, Brownian Motion and Stochastic Calculus, 2nd edition, Springer, New York, 1991.
[141] R. Paley, N. Wiener, A. Zygmund, Math. Z. 37 (1933) 647.
[142] A. Dvoretzky, P. Erdős, S. Kakutani, Proc. 4th Berkeley Symp. Math. Stat. Prob., vol. 2, 1961, p. 103.
[143] F. Biagini, Y. Hu, B. Oksendal, T. Zhang, Stochastic Calculus for Fractional Brownian Motion and Applications, Springer, New York, 2008.
[144] T. Dieker, Simulation of Fractional Brownian Motion, Master's Thesis, University of Twente, The Netherlands, 2002. Available at: http://www2.isye.gatech.edu/~adieker3/fbm/index.html.
[145] R.B. Davies, D.S. Harte, Biometrika 74 (1987) 95.
[146] D.C. Caccia, et al., Physica A 246 (1997) 609.
[147] S. Stoev, M.S. Taqqu, Fractals 12 (2004) 95.
[148] M.A. Fuentes, H.S. Wio, R. Toral, Physica A 303 (2002) 91.
[149] M.A. Fuentes, H.S. Wio, Eur. Phys. J. B 52 (2006) 249.
[150] M.D. Donsker, Mem. Amer. Math. Soc. 6 (1951) 1.
[151] W. Whitt, Stochastic-Process Limits: An Introduction to Stochastic-Process Limits and Their Application to Queues, Springer, New York, 2011.
[152] K. Ito, Mem. Amer. Math. Soc. 4 (1951) 1.
[153] K. Ito, H.P. McKean, Diffusion Processes and Their Sample Paths, Springer, Berlin, 1996, Reprint of the 1974 edition.
[154] A. Friedman, Stochastic Differential Equations and Applications, Dover, New York, 2006.
[155] B. Oksendal, Stochastic Differential Equations: An Introduction with Applications, 6th edition, Springer, New York, 2010.
[156] L. Arnold, Stochastic Differential Equations: Theory and Applications, Dover, New York, 2011.
[157] J.C. Hull, Options, Futures, and Other Derivatives, 6th edition, Prentice-Hall, London, 2005.
[158] J.M. Steele, Stochastic Calculus and Financial Applications, Springer, New York, 2010.
[159] R.C. Merton, Bell J. Econ. Manage Sci. 4 (1973) 141.
[160] F. Black, M. Scholes, J. Polit. Econ. 81 (1973) 637.
[161] S. Painter, L. Paterson, Geophys. Res. Lett. 21 (1994) 2857.
[162] S. Painter, Water Resour. Res. 32 (1996) 1323.
[163] N. Laskin, I. Lambadaris, F.C. Harmantzis, M. Devetsikiotis, Comput. Netw. 40 (2002) 363.
[164] M. Crovella, A. Bestavros, Perform. Eval. Rev. 24 (1996) 160.
[165] W. Willinger, et al., IEEE/ACM Trans. Net. 5 (1997) 71.
[166] I. Norros, IEEE J. Sel. Areas Commun. 13 (1995) 953.
[167] J.L. Vehel, R.H. Riedi, in: J.L. Vehel, E. Lutton, C. Tricot (Eds.), Fractals in Engineering, Springer, New York, 1997, pp. 185–202.
[168] M.S. Taqqu, W. Willinger, R. Sherman, Comput. Commun. Rev. 27 (1997) 5.
[169] B.B. Mandelbrot, Internat. Econom. Rev. 10 (1969) 82.
[170] M.S. Taqqu, J. Lévy, in: E. Eberlein, M.S. Taqqu (Eds.), Dependence in Probability and Statistics, Birkhauser, Boston, 1986, pp. 73–89.
[171] Y. Davydov, Theory Probab. Appl. 15 (1970) 487.
[172] M.M. Meerschaert, E. Nane, Y. Xiao, Statist. Probab. Lett. 79 (2009) 1194.
[173] E. Iglói, G. Terdik, Electron. J. Probab. 4 (1999) 1.
[174] N.N. Leonenko, E. Taufer, Stochastics 77 (2005) 477.
[175] I. Eliazar, J. Klafter, J. Phys. A: Math. Theor. 41 (2008) 122001.
[176] I. Eliazar, J. Klafter, Phys. Rev. E 79 (2009) 021115.
[177] I. Eliazar, J. Klafter, Chem. Phys. 370 (2010) 290.
[178] I. Eliazar, J. Klafter, Phys. Rev. E 82 (2010) 021122.
[179] I. Eliazar, Eur. Phys. J. Special Topics 216 (2013) 3.
[180] J. Holtsmark, Ann. Phys. 58 (1919) 577.
[181] S. Chandrasekhar, Rev. Modern Phys. 15 (1943) 1.
[182] I. Eliazar, J. Klafter, Proc. Natl. Acad. Sci. USA 106 (2009) 12251.
[183] I. Eliazar, J. Klafter, Phys. Rev. Lett. 103 (2009) 040602.
[184] I. Eliazar, J. Klafter, Phys. Rep. 511 (2012) 143.
[185] E.W. Montroll, G.H. Weiss, J. Math. Phys. 6 (1965) 167.
[186] E. Lutz, Phys. Rev. E 64 (2001) 051106.
[187] D.R. Cox, Renewal Theory, Methuen, London, 1962.
[188] S.M. Ross, Applied Probability Models with Optimization Applications, Holden-Day, San Francisco, 1970.
[189] M.M. Meerschaert, H.P. Scheffler, J. Appl. Probab. 41 (2004) 623.
[190] J. Klafter, I.M. Sokolov, First Steps in Random Walks: From Tools to Applications, Oxford University Press, Oxford, 2011.
[191] M. Magdziarz, A. Weron, J. Klafter, Phys. Rev. Lett. 101 (2008) 210601.
[192] A. Weron, M. Magdziarz, K. Weron, Phys. Rev. E 77 (2008) 036704.
[193] M. Magdziarz, A. Weron, K. Burnecki, Phys. Rev. Lett. 103 (2009) 180602.
[194] J.H. Jeon, R. Metzler, Phys. Rev. E 81 (2010) 021103.
[195] M.F. Shlesinger, J. Klafter, Y.M. Wong, J. Stat. Phys. 27 (1982) 499.
[196] M.F. Shlesinger, J. Klafter, Phys. Rev. Lett. 54 (1985) 2551.
[197] M.F. Shlesinger, J. Klafter, B. West, Phys. Rev. Lett. 58 (1987) 1100.