Vincenzo Ferrazzano
Complete reprint of the dissertation approved by the Fakultät für Mathematik of the Technische Universität München for the award of the academic degree of
Doktor der Naturwissenschaften (Dr. rer. nat.).
Chair: ......................
Examiners of the dissertation:
1. ......................
2. ......................
3. ......................
Abstract
Turbulence is a wide and fascinating subject, dating back over two centuries, and still offering new challenges to the scientific community as a whole. The theory of turbulence involves a wide range of fields, from very applied disciplines, such as engineering and climatology, to fundamental theoretical questions, such as the uniqueness of a strong solution of the three-dimensional Navier-Stokes equation. As a consequence, a considerable amount of attention has been paid to turbulence, engaging physicists and mathematicians alike. The phenomenological similarities between turbulent time series and high-frequency financial data, together with the importance of wind as an energy resource, have more recently triggered a huge amount of interest in the subject. Classical simulation methods for a turbulent flow, based on discretising the Navier-Stokes equation, involve daunting calculations, defying the computational power of the most recent supercomputers. There is, therefore, an interest in obtaining a parsimonious stochastic model which is able to reproduce the rich phenomenology of turbulence. Since the most interesting aspects of turbulence lie in its behaviour at the smallest scales, a great deal of high-frequency time series are available.
The asymptotic behaviour of second order properties of CARMA processes, as the frequency of observation tends to infinity, is investigated in Chapter 1. A CARMA process is an element of the larger class of continuous-time moving average (CMA) processes

Y_t = ∫_R g(t − s) dW_s, for t ∈ R,

where the kernel function g is deterministic and square integrable, and W is a process with stationary and uncorrelated increments and finite variance. Under some necessary and sufficient conditions, classical non-parametric time series methods can be employed to estimate the kernel function g and the increments of the driving noise W, when Y is a CARMA process. The flexibility of CARMA processes and the non-parametric nature of the employed methods hint that our results possibly hold for a larger subclass of CMA processes. In Chapters 2 and 3 we show the consistency of the kernel and of the driving process estimation procedures, respectively. In Chapter 4, the processing techniques commonly employed to obtain samples of the velocity of a turbulent flow are reviewed. In Chapter 5, the results obtained in the previous chapters are applied to atmospheric turbulence datasets. Based on the results of the estimation methods mentioned earlier, a parametric model reproducing the most commonly observed features of turbulence is proposed.
Acknowledgment
Coming from an engineering background, achieving such results would have been impossible without the contribution of many people.
First of all my thanks go to Prof. Claudia Klüppelberg for giving me the opportunity to undertake the doctoral studies. Without her always helpful advice, kind patience and encouragement throughout the years, this thesis would not have been possible. I am especially grateful to her for introducing me to many distinguished scholars. Among these I would like to thank Prof. Ole Barndorff-Nielsen, Prof. Richard Davis, Prof. Peter Brockwell, Prof. Jürgen Schmiegel and Dr. Gunnar Chr. Larsen for making my stays in Aarhus, Roskilde and New York City possible. I will remember these days as fruitful and enjoyable. In particular, I am indebted to Dr. Gunnar Chr. Larsen and Prof. Jürgen Schmiegel for sharing their data with us.
Furthermore, I am grateful to my colleagues at the Chair of Mathematical Statistics for their support and for the enjoyable company during the past years. I would like to especially thank my fellow students Christina Steinkohl, Florian Fuchs and Florian Ueltzhöfer for all the time spent discussing mathematics, and more mundane topics as well. The support of the "International Graduate School of Science and Engineering" of the Technical University of Munich, through the project "Energie 2020", is also gratefully acknowledged.
Contents

Introduction
   I. Turbulence
   II. Continuous-time moving average processes
   III. CARMA processes
   IV. Sampling
   V. Outline of the thesis
1. High-frequency sampling of a continuous-time ARMA process
Introduction
In this Introduction we illustrate some of the concepts employed throughout this thesis, giving, where needed, references to the literature detailed in the bibliography at the end of this thesis.
In Section I we give a brief account of the theory of stationary turbulence relevant for this thesis, presenting some stylised facts and theoretical results.
In Section II we recall the basic properties of continuous-time moving average (CMA) processes with finite variance. These processes constitute a general class of stationary processes, which have been proposed as possible models for the timewise behaviour of turbulence (see e.g. Barndorff-Nielsen et al. (2011), Barndorff-Nielsen and Schmiegel (2009, 2008a)).
From a mathematical point of view, most of the results of this thesis will be obtained for a specific yet flexible subclass of CMA processes, namely the class of continuous-time autoregressive moving average (CARMA) processes. One of the advantages of this kind of process is that many relevant quantities can be calculated explicitly. The definition of a CARMA process and its basic properties are given in Section III.
Throughout this thesis we will investigate the asymptotic behaviour of the samples of the underlying continuous-time processes, when the sampling frequency tends to infinity. A brief overview of the subject is given in Section IV.
We conclude this Introduction with a chapter-by-chapter outline of the thesis in Section V.
I. Turbulence
Turbulence is the complex behaviour of a particle in a fluid, under certain conditions, described by its velocity. Its modelling is a long-standing problem in both physics and mathematics. The Navier-Stokes equations, the basic mathematical equations describing turbulence, have been well known since the 19th century. Actual comprehension of this phenomenon, however, is scarce. Since the seminal work of Kolmogorov (1941a,b, 1942), henceforth referred to as K41 theory, there is a widespread conviction that turbulence can be regarded and analysed as a random phenomenon. In particular, the velocity of a turbulent flow can be modelled as a spatio-temporal stochastic process which preserves some statistical structure. We emphasise that high-resolution measurements with actual probes, in both space and time, are difficult to gather. The theory developed in a spatio-temporal setting is reduced to a time series framework by utilising Taylor's frozen-field hypothesis (Pope 2000, p. 223), which allows one to exchange spatial increments with time increments.
For an exhaustive account of turbulence theory we refer to Frisch (1996) and Pope (2000). For a more intuitive description of turbulence we refer to Tsinober (2009). We note that virtually every observed turbulent flow displays several stylised statistical facts. Experimental investigations have highlighted that the magnitude of these features only depends on a control parameter called the Reynolds number; it is proportional to the mean flow velocity over the kinematic viscosity. The final goal of this thesis is to analyse and model turbulent flows with a Reynolds number above a critical threshold, a regime called fully developed turbulence. We focus on the modelling of the velocity V = {V_t}_{t∈R} along the main flow direction at some fixed point in space of a weakly stationary turbulent flow.
As customary in both statistics and physics, our analysis is mostly data driven. Therefore, in the later chapters of this thesis we shall consider several datasets coming from high-quality physical experiments. The most remarkable amongst the datasets employed is the so-called Brookhaven wind speed data set (see Dhruva 2000). It consists of measurements taken in the atmospheric boundary layer, about 35 m above the ground, and it displays a rather high Reynolds number.
The transmission and dissipation of energy are of paramount importance in the description of turbulence. In the K41 theory, the former is based on the phenomenology called the energy cascade (see Richardson 2007). It is suggested that the kinetic energy enters the turbulent motion at its largest scale, called the energy containing range. An example can be fall winds (e.g., the Föhn in the Bavarian Alps) flowing downwards from the top of a mountain, due to the pressure difference between the peak and the valley. These large scale phenomena are, in general, non-stationary, and very specific models are needed. The kinetic energy is then transmitted by an inviscid mechanism to smaller scale vortices in the so-called inertial range, until it gets dissipated at its smallest scale, called the dissipation range. In the frequency domain, these partitions correspond to low (energy containing range), intermediate (inertial range), and high frequencies (dissipation range). The extent of these ranges depends on the Reynolds number. Kolmogorov assumed that the transmission and dissipation mechanisms are independent from the one injecting the kinetic energy; moreover, he supposed that these mechanisms are universal for every fully developed turbulent flow. Along with other simplifications such as stationarity, the K41 theory gives an elegant and quantitative description of the energy transmission mechanism. It is essentially based on second order properties, in particular the spectral density and autocovariance; the behaviour of the energy dissipation mostly remains unspecified. The most prominent result is Kolmogorov's 5/3-law: in the inertial range, the spectral density of a turbulent flow's velocity decays as ω^(−5/3) with the frequency ω.
In the development of his theory Kolmogorov did not take into account that turbulence is a highly non-Gaussian phenomenon. Although the K41 theory has been proven to be correct to a fair degree concerning second order properties, it fails to make accurate predictions regarding higher-order statistics. Physicists subsume all phenomena deviating from the predictions of the K41 theory under the term intermittency. Our paramount aim in this thesis is to advocate a statistical model which is able to reproduce the following key intermittent features, observed in virtually every fully developed turbulent flow. First, sample paths are rather smooth; physicists conjecture that they are infinitely often differentiable in the L² sense. Unfortunately, obtaining experimental data with a good resolution at very small scales is still a daunting task, so no last word has been said on this matter. Nevertheless, for the Navier-Stokes equation to be well defined, the turbulent velocity must be an at least twice continuously differentiable process. Second, the velocity increments display a distinctive clustering: this phenomenon is what was originally called intermittency. In particular, the squared increments of turbulent flow velocities are significantly correlated; their autocorrelation is positive and slowly decaying. Third, the velocity increments are semi-heavy tailed and display a distinctive scaling. Their distribution is approximately Gaussian on large time scales, but has exponential tails and is positively skewed on small time scales. The skewness is given by Kolmogorov's 4/5-law: for all t ∈ R and all s > 0 in the inertial range, we have

E[(V_{t+s} − V_t)³] = −(4/5) ε̄ s,

where ε̄ > 0 is the mean rate of energy dissipation. Under some ideal conditions, ε̄ is proportional to E[(∂_t V)²]; it is the physical analogue of return volatility in finance. This law is a direct consequence of the Navier-Stokes equations; it is one of the few exact results in turbulence theory. For a different approach to connecting the classical Navier-Stokes equation with stochastic processes, see Birnir (2013).
II. Continuous-time moving average processes

A continuous-time moving average (CMA) process X = {X_t}_{t∈R} is defined as

X_t = ∫_R g(t − s) dW_s, t ∈ R, (II.i)

where g ∈ L²(R) and {W_t}_{t∈R} is a process with stationary and uncorrelated increments, such that E[W_1] = 0, E[W_1²] = σ² < ∞ and W_0 = 0 a.s. The process X defined by (II.i) is then a stationary process with mean zero and finite variance σ²‖g‖²_{L²}. Throughout this thesis, we will employ the weak concept of second order (weak) stationarity, that is, E[X_t] is constant for every t and the autocovariance function

γ_X(τ) := E[(X_{t+τ} − E[X_{t+τ}])(X_t − E[X_t])], τ ∈ R,

is independent of t. For every second order stationary process the spectral density f_X is defined as the Fourier transform of the autocovariance function, i.e.

f_X(ω) = (1/2π) ∫_R e^{−iωτ} γ_X(τ) dτ, ω ∈ R.

For the CMA process (II.i) we have

γ_X(τ) = σ² ∫_R g(s) g(s + τ) ds and f_X(ω) = (σ²/2π) |F{g(·)}(ω)|², ω ∈ R.

The class of continuous-time moving averages constitutes a large subclass of the second order stationary processes. To be more precise, every process having a spectral density f_X has a representation in the L² sense as a CMA process, that is, there exists a CMA process having the same autocorrelation function and spectral density (Doob (1990), p. 533).
From a time series perspective, the case of causal continuous-time moving average processes is of particular interest, that is, when the kernel function g is assumed to be zero on (−∞, 0]. Since g is deterministic, the randomness in a causal model is solely given by the driving process W. In many time series applications a causal model, i.e. a model depending only on the past values of W, has a strong intuitive motivation.
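The second order structure implied by a given causal kernel can be checked numerically. The following sketch (an illustration with an arbitrarily chosen kernel and parameters, not taken from the thesis) approximates the autocovariance γ_X(τ) = σ² ∫_0^∞ g(s)g(s+τ) ds for the Ornstein-Uhlenbeck kernel g(t) = e^{−λt}1_{(0,∞)}(t) and compares it with the known closed form σ² e^{−λτ}/(2λ):

```python
import math

# Autocovariance of a causal CMA process: gamma(tau) = sigma^2 * Int_0^inf g(s) g(s+tau) ds.
# Illustration with the Ornstein-Uhlenbeck kernel g(t) = exp(-lam*t) on (0, inf),
# whose autocovariance is known in closed form: sigma^2 * exp(-lam*tau) / (2*lam).

def acov_numeric(g, tau, sigma2=1.0, h=1e-3, upper=20.0):
    """Midpoint-rule approximation of sigma^2 * Int_0^upper g(s) g(s+tau) ds."""
    n = int(upper / h)
    total = sum(g((i + 0.5) * h) * g((i + 0.5) * h + tau) for i in range(n))
    return sigma2 * total * h

lam = 0.7
g_ou = lambda t: math.exp(-lam * t) if t > 0 else 0.0

tau = 1.3
approx = acov_numeric(g_ou, tau)
exact = math.exp(-lam * tau) / (2 * lam)
```

The same numerical recipe applies to any square-integrable causal kernel; only the closed-form benchmark is specific to the Ornstein-Uhlenbeck case.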
The class of causal CMA processes, although appealing from a modelling point of view, is slightly smaller than that of continuous-time moving averages. In order for a process to have a causal moving average representation (II.i), it must satisfy the Paley-Wiener condition (Yaglom (2005), Section 26.8), that is, its spectral density must satisfy

∫_R |log f_X(ω)| / (1 + ω²) dω < ∞. (II.ii)

Also the converse is true, that is, every causal moving average satisfies (II.ii). The Paley-Wiener condition clearly rules out spectral densities either decaying as e^{−|ω|} or faster, or vanishing on sets having positive measure. This result descends from some beautiful results of Paley and Wiener (1934), equivalent to those of Titchmarsh (1948), connecting the behaviour of a function g with some regularity of its Fourier transform, when extended to the complex plane.
The rate of decay of the spectral density also relates to the order of differentiability of the process. As shown in Doob (1990), Example 1, p. 535, if ∫_R ω^{2p} f_X(ω) dω < ∞ for some p ∈ N, then the process X admits p derivatives in the L² sense. That implies differentiability of the sample paths in probability. Let us assume, without any loss of generality, that f_X(ω) > 0 for every ω ∈ R. The differentiability condition, in conjunction with the Paley-Wiener condition (II.ii), tells us that processes having infinitely differentiable sample paths might not have a causal representation, while, on the other hand, processes with only a finite number of derivatives will always have one. This is a crucial point in turbulence, since it is still not clear whether turbulent time series are infinitely differentiable or not. An additional layer of confusion is added by the fact that most of turbulence theory is developed in space; these results are often translated into a time context by employing the Taylor hypothesis. We shall investigate the matter in Chapter 5.
In this thesis we assume that the driving noise W of a causal CMA process is a weak Lévy process L. By a weak Lévy process, we understand a Lévy process (see e.g. Sato (1999)) where the hypothesis of independent increments is relaxed to the weaker one of uncorrelated increments. The assumption of independent increments is too restrictive to incorporate non-linear effects in the model, such as volatility clustering, which is widely observed in turbulent and financial data.
Examples of causal CMA processes are the Ornstein-Uhlenbeck process, with g(t) = e^{−λt} 1_{(0,∞)}(t), where λ > 0, and the more general continuous-time autoregressive moving average (CARMA) processes, studied by Doob (1944) for Gaussian L. Another notable example of a causal CMA process is the so-called Brownian semistationary process of Barndorff-Nielsen and Schmiegel (2008a), where the kernel function is assumed to be a gamma kernel, i.e., for ν > 1/2 and λ < 0,

g(t) = t^{ν−1} e^{λt} 1_{(0,∞)}(t), t ∈ R. (II.iii)

A causal CMA process equipped with the kernel (II.iii) has an autocorrelation function of the Whittle-Matérn family (see e.g. Guttorp and Gneiting (2005)). The spectral density, that is the Fourier transform of the autocorrelation function, of such a process decays as ω^{−2ν} as ω tends to infinity. Power-law spectral densities are often observed, and they have a role of prime importance in turbulence, where Kolmogorov's 5/3-law predicts ν = 5/6. Therefore, such spectral densities have widely been employed in turbulence modelling, see e.g. von Kármán (1948) and the international standard IEC 61400-1 (1999).
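The gamma kernel (II.iii) yields closed-form second order quantities against which numerical computations can be checked. The sketch below (an illustration with arbitrarily chosen λ and σ², not from the thesis) verifies the variance σ² ∫_0^∞ t^{2ν−2} e^{2λt} dt = σ² Γ(2ν−1)/(−2λ)^{2ν−1} for Kolmogorov's value ν = 5/6; the substitution t = u³ removes the integrable singularity of the integrand at zero.

```python
import math

# Variance of a causal CMA process with gamma kernel g(t) = t^(nu-1)*exp(lam*t), t > 0:
#   Var = sigma^2 * Int_0^inf t^(2*nu-2) * exp(2*lam*t) dt
#       = sigma^2 * Gamma(2*nu - 1) / (-2*lam)^(2*nu - 1),   nu > 1/2, lam < 0.
# For nu = 5/6 the integrand behaves like t^(-1/3) near 0; substituting t = u^3
# turns it into the smooth function 3*u^2 * (u^3)^(2*nu-2) * exp(2*lam*u^3).

nu, lam, sigma2 = 5.0 / 6.0, -1.0, 1.0

def integrand_u(u):
    t = u ** 3
    return 3.0 * u ** 2 * t ** (2 * nu - 2) * math.exp(2 * lam * t)

h, upper = 1e-4, 4.0                      # u-grid; t then runs up to upper**3 = 64
n = int(upper / h)
numeric = sigma2 * sum(integrand_u((i + 0.5) * h) for i in range(n)) * h
closed_form = sigma2 * math.gamma(2 * nu - 1) / (-2 * lam) ** (2 * nu - 1)
```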
III. CARMA processes

Continuous-time autoregressive moving average (CARMA) processes represent a very tractable and flexible subclass of the CMA process class defined in the previous section. The process is defined as follows.
For non-negative integers p and q such that q < p, a CARMA(p, q) process Y = {Y_t}_{t∈R}, with coefficients a_1, …, a_p, b_0, …, b_q ∈ R and driving Lévy process L, is defined to be the strictly stationary solution of the suitably interpreted formal equation

a(D)Y_t = b(D)DL_t, t ∈ R, (III.i)

where D denotes differentiation with respect to t, a(·) and b(·) are the polynomials

a(z) := z^p + a_1 z^{p−1} + ⋯ + a_p and b(z) := b_0 + b_1 z + ⋯ + b_{p−1} z^{p−1},

and the coefficients b_j satisfy b_q = 1 and b_j = 0 for q < j < p. Throughout this thesis we shall denote by λ_i, i = 1, …, p, and μ_i, i = 1, …, q, the roots of a(·) and b(·) respectively, such that these polynomials can be written as

a(z) = ∏_{i=1}^p (z − λ_i) and b(z) = ∏_{i=1}^q (z − μ_i).

Throughout this thesis we will further assume, to avoid uninteresting cases, that a(·) and b(·) have no common zeroes. For a study of the general case we refer to Brockwell and Lindner (2009).
Since the derivative DL_t does not exist in the usual sense, we interpret (III.i) as being equivalent to the observation and state equations

Y_t = b^⊤ X_t, (III.ii)
dX_t = A X_t dt + e_p dL_t, (III.iii)

where

X_t = (X(t), X^(1)(t), …, X^(p−2)(t), X^(p−1)(t))^⊤, b = (b_0, b_1, …, b_{p−2}, b_{p−1})^⊤, e_p = (0, 0, …, 0, 1)^⊤,

and A is the p × p companion matrix

A = ( 0      1        0        …  0
      0      0        1        …  0
      ⋮      ⋮        ⋮        ⋱  ⋮
      0      0        0        …  1
      −a_p   −a_{p−1} −a_{p−2} …  −a_1 ),

with A = −a_1 for p = 1.
It is easy to check that the eigenvalues of the matrix A, which we shall denote by λ_1, …, λ_p, are the same as the zeroes of the autoregressive polynomial a(·).
If the roots of the polynomial a(·) lie in the left half-plane, it has been shown (Brockwell and Lindner (2009), Lemma 2.3) that these equations have the unique stationary solution

Y_t = ∫_R g(t − u) dL_u, t ∈ R, (III.iv)

where

g(t) = (1/2πi) ∮_ρ (b(z)/a(z)) e^{tz} dz = Σ_λ Res_{z=λ} [(b(z)/a(z)) e^{zt}], if t > 0, and g(t) = 0, if t ≤ 0, (III.v)

and ρ is any simple closed curve in the open left half of the complex plane encircling the zeroes of a(·). The integral in (III.iv) is again defined in L², as outlined in Section II. Therefore, the CARMA process is a causal continuous-time moving average, whose kernel belongs to a specific parametric family, fully specified by the roots of a(·) and b(·).
We point out that the condition that the roots of a(·) lie in the interior of the left half of the complex plane in order to have causality arises from Theorem V, p. 8, of Paley and Wiener (1934), which is intrinsically connected with the theorems in Titchmarsh (1948), pp. 125-129, on the Hilbert transform. A similar condition on the roots of b(·) will be needed in Chapters 2 and 3.
A more concise form for the kernel g is (Brockwell and Lindner (2009), equations (2.10) and (3.7))

g(t) = b^⊤ e^{At} e_p 1_{(0,∞)}(t), t ∈ R. (III.vi)
Gaussian CARMA processes, of which the Gaussian Ornstein-Uhlenbeck process is an early example, were first studied in detail by Doob (1944) (see also Doob (1990)). The state-space formulation, (III.ii) and (III.iii) (with b^⊤ = [1 0 ⋯ 0]), was used by Jones (1981) to carry out inference for time series with irregularly spaced observations. This formulation naturally leads to the definition of Lévy-driven and non-linear CARMA processes (see Brockwell and Lindner (2009) and the references therein).
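The two expressions for the CARMA kernel, the state-space form (III.vi) and the residue form (III.v), can be compared numerically. The sketch below (an illustrative CARMA(2,1) specification with arbitrarily chosen coefficients, not from the thesis) evaluates g(t) = b^⊤ e^{At} e_p via a small Taylor-series matrix exponential and checks it against the partial-fraction form g(t) = Σ_i b(λ_i) e^{λ_i t}/a′(λ_i), valid for distinct roots λ_i:

```python
import math

# CARMA(2,1) kernel two ways:
#   state-space form:      g(t) = b^T exp(A*t) e_p,
#   partial-fraction form: g(t) = sum_i b(lam_i)/a'(lam_i) * exp(lam_i*t)  (distinct roots).
a1, a2 = 3.0, 2.0      # a(z) = z^2 + 3z + 2 = (z+1)(z+2), roots -1 and -2
b0, b1 = 3.0, 1.0      # b(z) = 3 + z  (b_q = 1, no common zero with a)

A = [[0.0, 1.0], [-a2, -a1]]              # companion matrix of a(z)

def expm2(M, t, terms=60):
    """exp(t*M) for a 2x2 matrix via the Taylor series (adequate for small t*||M||)."""
    R = [[1.0, 0.0], [0.0, 1.0]]          # accumulated sum
    P = [[1.0, 0.0], [0.0, 1.0]]          # current term (t*M)^k / k!
    for k in range(1, terms):
        P = [[sum(P[i][m] * M[m][j] for m in range(2)) * t / k
              for j in range(2)] for i in range(2)]
        R = [[R[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return R

t = 0.8
E = expm2(A, t)
g_state = b0 * E[0][1] + b1 * E[1][1]     # b^T exp(At) e_p: last column of exp(At)

roots = [-1.0, -2.0]                      # zeroes of a(z); a'(z) = 2z + a1
g_resid = sum((b0 + b1 * lam) / (2 * lam + a1) * math.exp(lam * t) for lam in roots)
```

For p > 2 the same comparison goes through with a p × p companion matrix; only the matrix exponential needs a more careful implementation.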
IV. Sampling
A classical result, bridging the continuous-time and the sampled spectral density, is the aliasing formula (Bloomfield (2000), p. 196, Eq. 9.17)

f^Δ_Y(ω) = (1/Δ) Σ_{k∈Z} f_Y((ω + 2πk)/Δ).

As shown by the formula above, for a generic Δ > 0 the discrete-time spectral density f^Δ_Y may bear little resemblance to the continuous-time spectral density f_Y. Most of the available data are discrete; therefore, numerous methods to estimate the discrete-time spectral density have been developed (see e.g. Priestley (1981), Brockwell and Davis (1991)). The problem of the convergence of the estimate of f^Δ_Y to the continuous-time f_Y as Δ → 0 for the class of CMA processes is discussed in Fasen (2012).
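For the Ornstein-Uhlenbeck process both sides of the aliasing formula are available in closed or rapidly computable form, so the formula can be verified directly. In the sketch below (parameters chosen arbitrarily; the closed-form discrete-time density follows from γ_Y(Δh) = σ² e^{−λΔ|h|}/(2λ)) the folded sum is truncated at a large |k|:

```python
import math

# Aliasing formula check for an Ornstein-Uhlenbeck process:
#   continuous-time spectral density f(u)  = sigma^2 / (2*pi*(lam^2 + u^2)),
#   discrete-time spectral density  f_D(w) = (1/(2*pi)) * sum_h gamma(Delta*h) e^{-i*w*h},
# which for gamma(Delta*h) = sigma^2/(2*lam) * rho^|h|, rho = exp(-lam*Delta),
# sums to the closed form below; it must equal (1/Delta) * sum_k f((w + 2*pi*k)/Delta).

lam, sigma2, Delta, w = 0.5, 1.0, 0.4, 1.1     # w in (-pi, pi)

def f_cont(u):
    return sigma2 / (2 * math.pi * (lam ** 2 + u ** 2))

rho = math.exp(-lam * Delta)
f_discrete = (sigma2 / (2 * lam)) * (1 - rho ** 2) / (
    2 * math.pi * (1 - 2 * rho * math.cos(w) + rho ** 2))

K = 200000                                     # truncation of the folded sum
f_folded = sum(f_cont((w + 2 * math.pi * k) / Delta)
               for k in range(-K, K + 1)) / Delta
```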
A classical time series model for discrete-time data is the moving average (MA(∞)) process

Y_n = Σ_{j=−∞}^{n} θ_{n−j} Z_j, n ∈ Z, (IV.i)

where {Z_j}_{j∈Z} is a white noise sequence. When Y is a CARMA process, the sampled sequence Y^Δ := {Y_{nΔ}}_{n∈Z} satisfies an ARMA equation of the form

∏_{i=1}^p (1 − e^{λ_i Δ} B) Y_{nΔ} = Θ_Δ(B) Z_n, n ∈ Z, Δ > 0, (IV.ii)

where B denotes the backward shift operator; the white noise sequence appearing in (IV.ii), under some technical conditions, coincides with the one appearing in the MA(∞) representation (IV.i).
The ARMA processes, introduced by Whittle (1951) and popularised by Box et al. (1994), constitute a popular class of linear models for discrete-time data. An extensive review of the subject can be found in the classic Brockwell and Davis (1991).
Comparing (IV.ii) and (III.i), one can note that a CARMA process preserves a linear structure after sampling. In general, the polynomial Θ_Δ(·) has a complicated dependence on a(·) and b(·) for Δ > 0, and its order will be different from q. Results concerning the asymptotic behaviour of Θ_Δ(·), as Δ tends to zero, are given in Chapter 1.
In Chapters 2 and 3 we will investigate the relationship between the MA(∞) representation (IV.i) of the sampled sequence of a CARMA process and its CMA representation (II.i), once again as the spacing Δ tends to 0.
V. Outline of the thesis
Chapter 5 We employ the results of Chapters 2 and 3 to estimate the kernel and the driving process of a CMA model from the Brookhaven data. Firstly, we fully characterise the CMA model from the physical point of view. Then we propose parametric models for the kernel and the driving noise which are able to reproduce the stylised features of turbulence. After fitting the model, we simulate from it, and we compare the obtained simulation with the original data. This chapter is based on the working paper Ferrazzano, Klüppelberg and Ueltzhöfer (2012), due in March 2013.
1. High-frequency sampling of a continuous-time ARMA process

Continuous-time autoregressive moving average (CARMA) processes have recently been used widely in the modelling of non-uniformly spaced data and as a tool for dealing with high-frequency data of the form Y_{nΔ}, n = 0, 1, 2, …, where Δ is small and positive. Such data occur in many fields of application, particularly in finance and the study of turbulence. This chapter is concerned with the characteristics of the process {Y_{nΔ}}_{n∈Z} when Δ is small and the underlying continuous-time process {Y_t}_{t∈R} is a specified CARMA process.
This chapter is organised as follows. In Section 1.1 we derive an expression for the spectral density of the sampled sequence Y^Δ := {Y_{nΔ}}_{n∈Z}. It is known that the filtered process {φ_Δ(B)Y_{nΔ}}_{n∈Z}, where φ_Δ(B) is the filter defined in (1.2.1), is a moving average of order at most p − 1. In Section 1.2, we determine the asymptotic behaviour of the spectral density and autocovariance function of {φ_Δ(B)Y_{nΔ}}_{n∈Z} as Δ ↓ 0, and the asymptotic moving average coefficients and white noise variance in the cases p − q = 1, 2 and 3. In general we show that, for Δ small enough, the order of the moving average {φ_Δ(B)Y_{nΔ}}_{n∈Z} is p − 1.
1.1. The spectral density of the sampled sequence

From (III.v), the Fourier transform of the kernel g is

F{g(·)}(ω) = ∫_0^∞ g(t) e^{−iωt} dt = (1/2πi) ∮_ρ (b(z)/a(z)) (1/(iω − z)) dz = b(iω)/a(iω), ω ∈ R. (1.1.1)
Since the autocovariance function γ_Y(·) is σ² times the convolution of g(·) and g(−·), its Fourier transform is given by

F{γ_Y(·)}(ω) = σ² F{g(·)}(ω) F{g(−·)}(ω) = σ² |b(iω)|² / |a(iω)|², ω ∈ R,

so that the spectral density of Y is

f_Y(ω) = (1/2π) ∫_R e^{−iωh} γ_Y(h) dh = (σ²/2π) |b(iω)|² / |a(iω)|², ω ∈ R.

Inverting the transform, γ_Y(h) = ∫_R e^{iωh} f_Y(ω) dω, h ∈ R, and, by residue calculus,

γ_Y(h) = (σ²/2πi) ∮_ρ (b(z)b(−z)/(a(z)a(−z))) e^{|h|z} dz = σ² Σ_λ Res_{z=λ} [(b(z)b(−z)/(a(z)a(−z))) e^{z|h|}]. (1.1.2)

The spectral density of the sampled sequence Y^Δ is then

f^Δ_Y(ω) = (1/2π) Σ_{h=−∞}^{∞} γ_Y(Δh) e^{−iωh} = −(σ²/4π²i) ∮_ρ (b(z)b(−z)/(a(z)a(−z))) (sinh(Δz)/(cosh(Δz) − cos(ω))) dz, ω ∈ [−π, π]. (1.1.3)
1.2. Asymptotic behaviour as Δ ↓ 0

Applying the filter

φ_Δ(B) := ∏_{j=1}^p (1 − e^{λ_j Δ} B) (1.2.1)

to the sampled sequence Y^Δ, we obtain a strictly stationary sequence which is (p−1)-correlated and is hence, by Lemma 3.2.1 of Brockwell and Davis (1991), a moving average process of order p − 1 or less.
Our goal in this section is to study the asymptotic properties, as Δ ↓ 0, of the moving average Θ_Δ(B)Z_n in the ARMA representation

φ_Δ(B) Y_{nΔ} = Θ_Δ(B) Z_n, n ∈ Z, (1.2.2)

of the high-frequency sequence Y^Δ. Here B denotes the backward shift operator and (Z_n)_{n∈Z} is an uncorrelated sequence of zero-mean random variables with constant variance, which we shall denote by σ²_Δ.
Noting that

Φ_Δ(ω) := |φ_Δ(e^{−iω})|² = ∏_{j=1}^p |1 − e^{λ_j Δ − iω}|² = 2^p e^{−a_1 Δ} ∏_{i=1}^p (cosh(λ_i Δ) − cos(ω)), ω ∈ [−π, π], (1.2.3)

we have

f^Δ_MA(ω) = Φ_Δ(ω) f^Δ_Y(ω) (1.2.4)

for the spectral density f^Δ_MA of the moving average Θ_Δ(B)Z_n in (1.2.2).
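The product identity behind (1.2.3) can be checked numerically. The sketch below (with an arbitrarily chosen complex-conjugate pair of roots, not from the thesis) compares |φ_Δ(e^{−iω})|² with 2^p e^{−a_1 Δ} ∏_i (cosh(λ_i Δ) − cos ω):

```python
import cmath, math

# Check of |phi(e^{-iw})|^2 = 2^p * exp(-a1*Delta) * prod_i (cosh(lam_i*Delta) - cos(w)),
# with phi(z) = prod_j (1 - exp(lam_j*Delta)*z) and a1 = -(lam_1 + ... + lam_p).
lams = [complex(-1.0, 2.0), complex(-1.0, -2.0)]   # conjugate pair in the left half-plane
p = len(lams)
a1 = (-sum(lams)).real
Delta, w = 0.3, 0.9

z = cmath.exp(-1j * w)
phi = 1 + 0j
for lam in lams:
    phi *= 1 - cmath.exp(lam * Delta) * z
lhs = abs(phi) ** 2

prod = 1 + 0j
for lam in lams:
    prod *= cmath.cosh(lam * Delta) - math.cos(w)
rhs = 2 ** p * math.exp(-a1 * Delta) * prod.real   # conjugate roots make the product real
```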
(1.2.4)
Theorem 1.2.1. Let Y be a CARMA(p, q) process. Then

f^Δ_MA(ω) = −(σ²/2π) Φ_Δ(ω) Σ_λ (1/(m(λ)−1)!) D_z^{m(λ)−1} [ sinh(Δz) b(z)b(−z) / ((cosh(Δz) − cos(ω)) a(−z) ∏_{μ≠λ} (z − μ)^{m(μ)}) ]_{z=λ}, (1.2.5)

where the sum is over the distinct zeroes λ of a(·) and the product in the denominator is over the distinct zeroes μ of a(·) which are different from λ. The multiplicities of the zeroes λ and μ are denoted by m(λ) and m(μ), respectively. When the zeroes λ_1, …, λ_p each have multiplicity 1, the expression for f^Δ_MA(ω) simplifies to

f^Δ_MA(ω) = (σ²/2π) (−2)^p e^{−a_1 Δ} Σ_{i=1}^p (b(λ_i)b(−λ_i)/(a′(λ_i)a(−λ_i))) sinh(λ_i Δ) ∏_{j≠i} (cos ω − cosh(λ_j Δ)).

As Δ ↓ 0,

f^Δ_MA(ω) ~ (σ²/2π) (−1)^{p−q−1} Δ^{2(p−q)−1} c_{p−q−1}(ω) 2^{p−1} (1 − cos ω)^p, (1.2.6)

where the coefficients c_k(ω) are defined by the expansion

sinh(x)/(cosh(x) − cos(ω)) = Σ_{k=0}^∞ c_k(ω) x^{2k+1}. (1.2.7)

In particular, c_0(ω) = 1/(1 − cos ω), c_1(ω) = −(2 + cos ω)/(6(1 − cos ω)²), c_2(ω) = (33 + 26 cos ω + cos(2ω))/(240(1 − cos ω)³), ….
Proof. The integrand in (1.1.3) can be expanded as a power series in Δ using (1.2.7). The integral can then be evaluated term by term using the identities (see Example 3.1.2.3 of Mitrinović and Kečkić (1984))

(1/2πi) ∮_ρ z^{2k+1} (b(z)b(−z)/(a(z)a(−z))) dz = Σ_λ Res_{z=λ} [z^{2k+1} b(z)b(−z)/(a(z)a(−z))], k ∈ {0, 1, 2, …},

and

(1/2πi) ∮_ρ z^{2k+1} (b(z)b(−z)/(a(z)a(−z))) dz = 0, if 0 ≤ k < p − q − 1, and = (−1)^{p−q}/2, if k = p − q − 1.

Substituting the resulting expansion of the integral (1.1.3) and the asymptotic expression Φ_Δ(ω) ~ 2^p (1 − cos ω)^p into (1.2.4), and retaining only the dominant power of Δ as Δ ↓ 0, we arrive at (1.2.6).
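The coefficients c_k(ω) in (1.2.7), including the reconstructed value of c_2(ω) stated above, can be checked against a direct evaluation of sinh(x)/(cosh(x) − cos ω) at a small x, where the truncation error of the three-term odd expansion is of order x⁷ (an illustrative check; ω and x are chosen arbitrarily):

```python
import math

# Check of sinh(x)/(cosh(x) - cos(w)) = c0*x + c1*x^3 + c2*x^5 + O(x^7) with
#   c0 = 1/(1 - cos w),
#   c1 = -(2 + cos w) / (6*(1 - cos w)^2),
#   c2 = (33 + 26*cos(w) + cos(2*w)) / (240*(1 - cos w)^3).
w = 1.0
c = math.cos(w)
c0 = 1 / (1 - c)
c1 = -(2 + c) / (6 * (1 - c) ** 2)
c2 = (33 + 26 * c + math.cos(2 * w)) / (240 * (1 - c) ** 3)

x = 1e-2
direct = math.sinh(x) / (math.cosh(x) - c)
series = c0 * x + c1 * x ** 3 + c2 * x ** 5
err = abs(direct - series)                 # should be of order x^7 ~ 1e-14
```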
Corollary 1.2.2. The following special cases are of particular interest. As Δ ↓ 0,

p − q = 1: f^Δ_MA(ω) ~ (σ²Δ/2π) 2^q (1 − cos ω)^q. (1.2.8)

p − q = 2: f^Δ_MA(ω) ~ (σ²Δ³/2π) ((2 + cos ω)/3) 2^q (1 − cos ω)^q. (1.2.9)

p − q = 3: f^Δ_MA(ω) ~ (σ²Δ⁵/2π) (11/20 + (13/30) cos ω + (1/60) cos(2ω)) 2^q (1 − cos ω)^q. (1.2.10)

Proof. These expressions are obtained from (1.2.6) using the values of c_0(ω), c_1(ω) and c_2(ω) given in the statement of the theorem.
Remark 1.2.3. (i) The right-hand side of (1.2.8) is the spectral density of a q-times differenced white noise with variance σ²Δ. It follows that, if q = p − 1, then the moving average polynomial Θ_Δ(B) in (1.2.2) is asymptotically (1 − B)^q and the white noise variance σ²_Δ is asymptotically σ²Δ as Δ ↓ 0. This result is stated with the corresponding results for p − q = 2 and p − q = 3 in the following corollary.
(ii) By Proposition 3.32 of Marquardt and Stelzer (2007), a CARMA(p, q) process has sample paths which are (p − q − 1)-times differentiable; consequently, smoother processes require larger values of p − q.

Corollary 1.2.4. Let X_n := Θ_Δ(B)Z_n be the moving average in (1.2.2). As Δ ↓ 0, the following asymptotic representations hold.
(a) If p − q = 1, then X_n = (1 − B)^q Z_n, n ∈ Z, where σ̃² := Var(Z_n) = σ²Δ.
(b) If p − q = 2, then X_n = (1 + θB)(1 − B)^q Z_n, n ∈ Z, where θ = 2 − √3 and σ̃² := Var(Z_n) = σ²Δ³(2 + √3)/6.
(c) If p − q = 3, then X_n = (1 + θ_1 B + θ_2 B²)(1 − B)^q Z_n, n ∈ Z, where θ_2 = 2(8 + √30) − √(375 + 64√30), θ_1 = 26θ_2/(1 + θ_2) = 13 − √(135 + 4√30) and σ̃² := Var(Z_n) = σ²Δ⁵(2(8 + √30) + √(375 + 64√30))/120.
Proof. (a) follows immediately from Theorem 4.4.2 of Brockwell and Davis (1991).
To establish (b), we observe from (1.2.9) that the required moving average is the q-times differenced MA(1) process with autocovariances at lags zero and one, γ(0) = 2σ²Δ³/3 and γ(1) = σ²Δ³/6. Expressing these covariances in terms of θ and σ̃² gives the equations

σ̃²(1 + θ²) = 2σ²Δ³/3, σ̃²θ = σ²Δ³/6,

from which we obtain a quadratic equation for θ. Choosing the unique solution which makes the MA(1) process invertible gives the required result.
The proof of (c) is analogous. The corresponding argument yields a quartic equation for θ_2. The particular solution given in the statement of (c) is the one which satisfies the condition that Θ(z) is non-zero for all complex z such that |z| ≤ 1.
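The values in case (b) can be verified by elementary arithmetic: with θ = 2 − √3 and σ̃² = σ²Δ³(2 + √3)/6, the MA(1) autocovariances σ̃²(1 + θ²) and σ̃²θ must reproduce 2σ²Δ³/3 and σ²Δ³/6. A minimal numerical check (σ² and Δ chosen arbitrarily):

```python
import math

# Moment matching for the p - q = 2 case: theta = 2 - sqrt(3),
# sig_tilde2 = sigma^2 * Delta^3 * (2 + sqrt(3)) / 6 must give
# gamma(0) = 2*sigma^2*Delta^3/3 and gamma(1) = sigma^2*Delta^3/6.
sigma2, Delta = 1.7, 0.2
theta = 2 - math.sqrt(3)
sig_tilde2 = sigma2 * Delta ** 3 * (2 + math.sqrt(3)) / 6

gamma0 = sig_tilde2 * (1 + theta ** 2)
gamma1 = sig_tilde2 * theta
target0 = 2 * sigma2 * Delta ** 3 / 3
target1 = sigma2 * Delta ** 3 / 6
```

The invertibility requirement singles out θ = 2 − √3 rather than the reciprocal root 2 + √3 of the same quadratic.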
Although the absence of the moving-average coefficients, b_j, from Corollary 1.2.4 suggests that they cannot be estimated from very closely spaced observations, the coefficients do appear if the expansions are taken to higher order in Δ. A higher order expansion will make this dependence explicit.

Lemma 1.2.5. Let g be the CARMA kernel (III.v) and let B_Δ denote the backward shift operator acting on functions, B_Δ g(t) := g(t − Δ). Then

∏_{j=1}^p (1 − e^{λ_j Δ} B_Δ) g(t) = 0, t > pΔ. (1.2.11)
Proof. Rewriting the product in (1.2.11) as a sum, we find φ_Δ(B_Δ)g(t) = Σ_{j=0}^p A^{pΔ}_j g(t − jΔ), which has Fourier transform (invoking the shift property and the right-hand side of (1.1.1))

∏_λ (1 − e^{(λ−iω)Δ})^{m(λ)} b(iω)/a(iω), ω ∈ R,

where the product is taken over the distinct zeroes λ of a(·) having multiplicity m(λ). Using the fact that the product of Fourier transforms corresponds to the convolution of functions, we obtain from (III.v), for t > pΔ,

φ_Δ(B_Δ)g(t) = (1/2πi) ∮_ρ ∏_λ (1 − e^{(λ−z)Δ})^{m(λ)} (b(z)/a(z)) e^{tz} dz = Σ_λ Res_{z=λ} [∏_{λ′} (1 − e^{(λ′−z)Δ})^{m(λ′)} (b(z)/a(z)) e^{tz}].

Since

lim_{z→λ_j} (1 − e^{(λ_j − z)Δ})^{m(λ_j)} / (z − λ_j)^{m(λ_j)} = Δ^{m(λ_j)},

the singularities at z = λ_j are removable and, therefore, using Cauchy's residue theorem, Theorem 1 of Section 3.1.1, p. 25, and Theorem 2 of Section 2.1.2, p. 7, of Mitrinović and Kečkić (1984), the filtered kernel is zero for every t > pΔ.
Proposition 1.2.6. Let Y be the CARMA(p, q) process (III.iv) and Δ > 0. The autocovariance at lag n of (φ_Δ(B)Y_{jΔ})_{j∈Z} is, for n = 0, 1, …, p − 1,

γ^Δ_MA(n) = σ² Σ_{i=1}^{p−n} Σ_{k=0}^{n+i−1} Σ_{h=0}^{i−1} A^{pΔ}_k A^{pΔ}_h ∫_{(i−1)Δ}^{iΔ} g(u − hΔ) g(u + (n − k)Δ) du, (1.2.12)

where A^{pΔ}_0 = 1 and

A^{pΔ}_k = (−1)^k Σ_{1≤i_1<⋯<i_k≤p} e^{(λ_{i_1} + ⋯ + λ_{i_k})Δ}, k = 1, …, p. (1.2.13)
Proof. Expanding the product defining the filter gives

φ_Δ(B_Δ) = ∏_{j=1}^p (1 − e^{λ_j Δ} B_Δ) = Σ_{k=0}^p A^{pΔ}_k B^k_Δ, (1.2.14)

and, writing t_k := t − kΔ,

B^k_Δ Y_t = ∫_{−∞}^{t_k} g(t_k − u) dL_u = Σ_{i=k}^{∞} ∫_{t_{i+1}}^{t_i} g(t_k − u) dL_u. (1.2.15)

Applying the operator (1.2.14) to Y_t, using (1.2.15) and interchanging the order of summation gives (with m ∧ p := min(m, p))

(φ_Δ(B_Δ)Y)_t = Σ_{m=0}^{∞} ∫_{t_{m+1}}^{t_m} Σ_{k=0}^{m∧p} A^{pΔ}_k g(t_k − u) dL_u. (1.2.16)

From Lemma 1.2.5 we know that the contribution from the terms corresponding to m ≥ p is zero. By stationarity, the autocovariance function is independent of t; hence we can choose t = nΔ. Then we obtain

(φ_Δ(B_Δ)Y)_{nΔ} = Σ_{i=1}^{p} ∫_{(n−i)Δ}^{(n−i+1)Δ} Σ_{k=0}^{i−1} A^{pΔ}_k g((n − k)Δ − u) dL_u.

For the autocovariance function we obtain, for n = 0, …, p − 1, using the fact that L has orthogonal increments,

γ^Δ_MA(n) = E[(φ_Δ(B_Δ)Y)_0 (φ_Δ(B_Δ)Y)_{nΔ}] = σ² Σ_{i=1}^{p−n} Σ_{k=0}^{i+n−1} Σ_{h=0}^{i−1} A^{pΔ}_k A^{pΔ}_h ∫_{(i−1)Δ}^{iΔ} g(u − hΔ) g(u + (n − k)Δ) du.
Proposition 1.2.7. As Δ ↓ 0, for n = 0, …, p − 1,

γ^Δ_MA(n) ~ (σ² Δ^{2(p−q)−1} / ((p−q−1)!)²) Σ_{i=1}^{p−n} Σ_{k=0}^{n+i−1} Σ_{h=0}^{i−1} (−1)^{h+k} (p choose k)(p choose h) C(h, k, i, n; p−q−1), (1.2.17)

where, for N ∈ N_0,

C(h, k, i, n; N) := ∫_0^1 (s + i − 1 − h)^N (s + i − 1 − k + n)^N ds.
Proof. The kernel g can also be expressed (Brockwell and Lindner (2009), equations (2.10) and (3.7)) as

g(t) = b^⊤ e^{At} e_p 1_{(0,∞)}(t). (1.2.18)

From this equation we see at once that g is infinitely differentiable on (0, ∞) with k-th derivative

g^{(k)}(t) = b^⊤ e^{At} A^k e_p, 0 < t < ∞.

Since b_q = 1 and b_j = 0 for j > q, the right derivatives g^{(k)}(0+) satisfy

g^{(k)}(0+) = b^⊤ A^k e_p = 0, if k < p − q − 1, and = 1, if k = p − q − 1. (1.2.19)

Substituting u = Δ(s + i − 1) in (1.2.12) gives

γ^Δ_MA(n) = σ²Δ Σ_{i=1}^{p−n} Σ_{k=0}^{n+i−1} Σ_{h=0}^{i−1} A^{pΔ}_k A^{pΔ}_h ∫_0^1 g(Δ(s + i − 1 − h)) g(Δ(s + i − 1 − k + n)) ds. (1.2.20)
Since g is infinitely differentiable on (0, ∞) and the right derivatives at 0 exist, the integrand has one-sided Taylor expansions of all orders M ∈ N,

g(Δ(s+i−1−h)) g(Δ(s+i−1−k+n))
= Σ_{l=0}^{M} (d^l/dΔ^l)[g(Δ(s+i−1−h)) g(Δ(s+i−1−k+n))]|_{Δ=0+} Δ^l/l! + o(Δ^M)
= Σ_{l=0}^{M} Σ_{m=0}^{l} (l choose m) (s+i−1−h)^{l−m} (s+i−1−k+n)^{m} g^{(l−m)}(0+) g^{(m)}(0+) Δ^l/l! + o(Δ^M),

as Δ ↓ 0. Choose M = 2(p − q − 1). Then by (1.2.19) there is only one term in the double sum which does not vanish, namely the term for which m = p − q − 1 = l − m. Setting N := p − q − 1 (so that M = 2N), the sum reduces to

(2N choose N) (s+i−1−h)^N (s+i−1−k+n)^N Δ^{2N}/(2N)! + o(Δ^{2N}) = (s+i−1−h)^N (s+i−1−k+n)^N Δ^{2N}/(N!)² + o(Δ^{2N}).
8
2N
N
2N +1
2
(N !)
and, since

\[ \lim_{\Delta\downarrow 0} e^{\Delta(\lambda_{i_1}+\dots+\lambda_{i_k})} = 1, \tag{1.2.21} \]

we also have

\[ A^\Delta_{pk}A^\Delta_{ph} = (-1)^{h+k}\binom{p}{k}\binom{p}{h} + o(1), \qquad \text{as }\Delta\downarrow 0. \tag{1.2.22} \]
Expanding the integrand of C(h, k, i, n; N) binomially,

\[ \int_0^1 (s+i-1-h)^N(s+i-1-k+n)^N\,ds = \sum_{l_1,l_2=0}^{N}\binom{N}{l_1}\binom{N}{l_2}(i-1-h)^{N-l_1}(i-1-k+n)^{N-l_2}\int_0^1 s^{l_1+l_2}\,ds \]
\[ = \sum_{l_1,l_2=0}^{N}\binom{N}{l_1}\binom{N}{l_2}\frac{(i-1-h)^{N-l_1}(i-1-k+n)^{N-l_2}}{l_1+l_2+1}. \]
For the lag p − 1 the asymptotic autocovariance takes the simple form

\[ \gamma^\Delta_{\mathrm{MA}}(p-1) = (-1)^q\,\frac{\sigma^2\Delta^{2(p-q)-1}}{(2(p-q)-1)!}\,(1+o(1)), \tag{1.2.23} \]

since, by (1.2.17),

\[ \gamma^\Delta_{\mathrm{MA}}(p-1) = \frac{\sigma^2\Delta^{2(p-q)-1}}{((p-q-1)!)^2}\sum_{k=0}^{p-1}(-1)^k\binom{p}{k}\,C(0,k,1,p-1;p-q-1)\,(1+o(1)). \tag{1.2.24} \]
Set d := p − q. Then

\[ C(0,k,1,p-1;d-1) = \int_0^1 s^{d-1}(s+p-1-k)^{d-1}\,ds \]

and, from Remark 1.2.8, this is a polynomial in k of degree d − 1. In order to apply known results
on the difference operator, we define the polynomial f(x) = ∫_0^1 s^{d−1}(x+s+p−1)^{d−1} ds.
Then, using Eq. (5.40), p. 188, and the last formula on p. 189 in Graham et al. (1994),
the sum in (1.2.24) can be written as

\[ \sum_{k=0}^{p-1}(-1)^k\binom{p}{k}\,C(0,k,1,p-1;d-1) = \sum_{k=0}^{p}(-1)^k\binom{p}{k}f(-k) - (-1)^p\,C(0,p,1,p-1;d-1) \]
\[ = 0 + (-1)^{p+1}\int_0^1 s^{d-1}(s-1)^{d-1}\,ds = (-1)^{p+d}\int_0^1 s^{d-1}(1-s)^{d-1}\,ds, \tag{1.2.25} \]

where we have used the fact that d − 1 = p − q − 1 < p, so that the alternating binomial sum of the polynomial f vanishes. To obtain Eq. (1.2.23) it suffices to
note that (−1)^{p+d} = (−1)^{2p−q} = (−1)^q and that the integral in (1.2.25) is a beta function.
Hence

\[ \int_0^1 s^{d-1}(1-s)^{d-1}\,ds = \frac{(\Gamma(d))^2}{\Gamma(2d)} = \frac{((d-1)!)^2}{(2d-1)!} > 0, \qquad d\in\mathbb N. \]
Remark 1.2.10. If Y is a CARMA(p, q) process then, from Theorem 1.2.1, the spectral
density of (1−B)^{p−q}Y^Δ is asymptotically, as Δ ↓ 0,

\[ \frac{\sigma^2}{2\pi}\,(2\Delta^2)^{p-q-1}\Delta\;c_{p-q-1}(\omega)\,(1-\cos\omega)^{p-q}. \]
  p − q    γ^Δ_MA(p−1)
  1        Δ (−1)^{p−1} σ²
  2        (1/6) Δ³ (−1)^{p−2} σ²
  3        (1/120) Δ⁵ (−1)^{p−3} σ²
  4        (1/5040) Δ⁷ (−1)^{p−4} σ²

Table 1.1.: Values of γ^Δ_MA(p−1) for p − q = 1, …, 4.
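The numerical prefactors in Table 1.1 agree with the closed form γ^Δ_MA(p−1) ∼ (−1)^q σ²Δ^{2(p−q)−1}/(2(p−q)−1)! of (1.2.23); the quick check below is an illustration, not part of the derivation.

```python
# Sanity check of Table 1.1 against (1.2.23): the prefactors for
# p - q = 1, ..., 4 should be 1/(2(p-q)-1)! = 1, 1/6, 1/120, 1/5040.
from math import factorial

prefactors = [1 / factorial(2 * d - 1) for d in range(1, 5)]
print(prefactors)  # [1.0, 0.1666..., 0.00833..., 0.000198...]
```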
In this Chapter we have considered only second order properties of Y^Δ. It is possible (see Brockwell (2001), Theorem 2.2) to express the joint characteristic functions,
E exp(i Σ_{k=1}^m θ_k Y_{kΔ}), for m ∈ ℕ, in terms of the coefficients a_j and b_j and the function
ξ(·), where ξ(θ), for θ ∈ ℝ, is the exponent in the characteristic function, Ee^{iθL₁} = e^{ξ(θ)},
of L₁. In particular, the marginal characteristic function is given by E exp(iθY_{kΔ}) =
exp ∫_0^∞ ξ(θ b^⊤ e^{Au} e) du, where b, A and e are defined as in (III.ii)–(III.iii).
These expressions are awkward to use in practice; however, Brockwell et al. (2011) have
found that least squares estimation (which depends only on second order properties) for
closely and uniformly spaced observations of a CARMA(2,1) process on a fixed interval [0, T] gives good results. They find in simulations that for large T the empirically determined sample covariance matrix of the estimators of a₁, a₂ and b₀ is close to the
matrix calculated from the asymptotic (as T → ∞) covariance matrix of the maximum
likelihood estimators based on continuous observation on [0, T] of the corresponding Gaussian CARMA process.
1.3. Conclusions
When a CARMA(p, q) process Y is sampled at times nΔ for n ∈ ℤ, it is well known
that the sampled process Y^Δ satisfies discrete-time ARMA equations of the form (1.2.2).
The determination of the moving average coefficients and white noise variance for given
grid size Δ, however, is a non-trivial procedure. In this Chapter we have focussed on
high-frequency sampling of Y. We have determined the relevant second order quantities:
the spectral density f^Δ_MA of the moving average on the right-hand side of (1.2.2) and
its asymptotic representation as Δ ↓ 0. This includes the moving average coefficients as
well as the variance of the innovations. We also derived an explicit expression for the
autocovariance function γ^Δ_MA and its asymptotic representation as Δ ↓ 0. This shows, in
particular, that the moving average is of order p − 1 for Δ sufficiently small.
(i) The zeroes of the polynomial a(·) satisfy ℜ(λ_j) < 0 for every j = 1, …, p;
(ii) the roots of b(·) have non-vanishing real part, i.e. ℜ(ξ_j) ≠ 0 for all j = 1, …, q.
Part (i) is the assumption under which a CARMA process is causal, and part (ii)
is a technical assumption.
In analogy to the discrete-time case, we establish the notion of invertibility for the
continuous-time ARMA process.
Definition 2.1.1. A CARMA(p, q) process is said to be invertible if the roots of the
moving average polynomial b(·) have negative real parts, i.e. ℜ(ξ_i) > 0 for all i = 1, …, q.
The CMA process Y defined by (5.1.2) has autocovariance function

\[ \gamma_Y(h) = \sigma^2\int_{\mathbb R} g(x)\,g(x+|h|)\,dx, \qquad h\in\mathbb R, \]

and spectral density

\[ f_Y(\omega) = \frac{1}{2\pi}\int_{\mathbb R} e^{-i\omega h}\gamma_Y(h)\,dh = \frac{\sigma^2}{2\pi}\,\bigl|\mathcal F\{g(\cdot)\}(\omega)\bigr|^2, \qquad \omega\in\Omega_c, \tag{2.1.1} \]

where

\[ \mathcal F\{g(\cdot)\}(\omega) := \int_{\mathbb R} g(x)\,e^{-i\omega x}\,dx. \]
The spectral density of the sampled process Y^Δ := (Y_{nΔ})_{n∈ℤ} is (Bloomfield (2000),
p. 196, Eq. 9.17)

\[ f_{Y^\Delta}(\omega) = \frac{1}{\Delta}\sum_{k=-\infty}^{\infty} f_Y\Bigl(\frac{\omega+2\pi k}{\Delta}\Bigr), \qquad \omega\in\Omega_d. \tag{2.1.2} \]

For a CARMA(p, q) process, (2.1.1) specializes to

\[ f_Y(\omega) = \frac{\sigma^2}{2\pi}\Bigl|\frac{b(i\omega)}{a(i\omega)}\Bigr|^2, \qquad \omega\in\Omega_c. \tag{2.1.3} \]
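The folding formula (2.1.2) can be checked directly in the Ornstein–Uhlenbeck (CAR(1)) case, where the sampled process is an exact AR(1) with coefficient e^{λΔ} and noise variance σ²(e^{2λΔ}−1)/(2λ). The parameter values below are arbitrary illustrations.

```python
# Numerical check of the folding formula (2.1.2) for the CAR(1) process:
# f_Y(w) = s2 / (2 pi (lam^2 + w^2)), cf. (2.1.3) with b = 1.
from math import pi, exp, cos

lam, D, w, s2 = -1.0, 0.5, 1.0, 1.0    # illustrative values

def f_Y(x):
    return s2 / (2 * pi * (lam * lam + x * x))

# left side of (2.1.2): fold the continuous-time spectral density
folded = sum(f_Y((w + 2 * pi * k) / D) for k in range(-20000, 20001)) / D

# right side: exact AR(1) spectral density of the sampled process
sD2 = s2 * (exp(2 * lam * D) - 1) / (2 * lam)
ar1 = sD2 / (2 * pi * (1 - 2 * exp(lam * D) * cos(w) + exp(2 * lam * D)))

print(abs(folded - ar1))  # small truncation error only
```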
The kernel has the representation

\[ g(t) = \frac{1}{2\pi i}\oint_\rho \frac{b(z)}{a(z)}\,e^{tz}\,dz\;\mathbf 1_{(0,\infty)}(t) = \sum_{\lambda}\operatorname{Res}_{z=\lambda}\Bigl(e^{zt}\,\frac{b(z)}{a(z)}\Bigr)\mathbf 1_{(0,\infty)}(t), \tag{2.1.4} \]

where the integration is anticlockwise around any simple closed curve ρ in the interior of
the left half of the complex plane, encircling the distinct zeroes λ of a(z), and Res_{z=λ}(f(z))
denotes the residue of the function f at λ. For such processes it was shown in Chapter 1,
Section 2, that the spectral density f_{Y^Δ} of the sampled process Y^Δ is

\[ f_{Y^\Delta}(\omega) = \frac{\sigma^2}{4\pi i}\oint \frac{b(z)b(-z)}{a(z)a(-z)}\,\frac{\sinh(\Delta z)}{\cosh(\Delta z)-\cos\omega}\,dz, \qquad \omega\in[-\pi,\pi], \tag{2.1.5} \]

where the integral, as in (2.1.4), is anticlockwise around any simple closed contour in
the interior of the left half of the complex plane, enclosing the zeroes of a(z). It is known
that the sampled process Y^Δ satisfies the ARMA equations

\[ \varphi^\Delta(B)Y^\Delta_n = \theta^\Delta(B)Z^\Delta_n, \qquad n\in\mathbb Z, \qquad \{Z^\Delta_n\}_{n\in\mathbb Z}\sim\mathrm{WN}(0,\sigma_\Delta^2), \tag{2.1.6} \]

where B is the backward shift operator, θ^Δ(z) is a polynomial of degree less than p,
(Z^Δ_n)_{n∈ℤ} is an uncorrelated sequence of zero-mean random variables with variance, which
we denote by σ_Δ²,

\[ \varphi^\Delta(z) = \prod_{j=1}^{p}\bigl(1-e^{\lambda_j\Delta}z\bigr), \qquad z\in\mathbb C, \]

and λ₁, …, λ_p are the zeroes of the polynomial a(z). Since the polynomial φ^Δ(z) is known
precisely for any given CARMA process, the second order properties of the sampled
process Y^Δ for small Δ can be determined by studying the properties of the moving
average term, X_n := θ^Δ(B)Z^Δ_n in (2.1.6), as Δ ↓ 0. Denoting by f^Δ_MA the spectral
density of X, we find from (2.1.6) that
\[ f^\Delta_{\mathrm{MA}}(\omega) = 2^p e^{-a_1\Delta}\,f_{Y^\Delta}(\omega)\prod_{j=1}^{p}\bigl(\cosh(\lambda_j\Delta)-\cos\omega\bigr). \tag{2.1.7} \]
Evaluating (2.1.5) by residues and expanding in powers of Δ gives

\[ f_{Y^\Delta}(\omega) = \frac{\sigma^2}{4\pi}\sum_{j=0}^{\infty} r_j\,\Delta^{2j+1} c_j(\omega), \tag{2.1.8} \]

where the coefficients c_k(ω) are defined by the series expansion

\[ \sum_{k=0}^{\infty} c_k(\omega)\,z^{2k+1} = \frac{\sinh z}{\cosh z - \cos\omega}, \]

and the coefficients r_j, which arise from the residues of z ↦ b(z)b(−z)/(a(z)a(−z)) at the zeroes of a(z), satisfy the generating identity

\[ \sum_{j=0}^{\infty} r_j z^{2j} = (-z^2)^{p-q-1}\,\frac{\prod_{i=1}^{q}(1-\xi_i^2 z^2)}{\prod_{i=1}^{p}(1-\lambda_i^2 z^2)}, \tag{2.1.9} \]

where a(z) = Π_{i=1}^p (z − λ_i) and b(z) = Π_{i=1}^q (z + ξ_i). The power series (2.1.8) is the
required expansion for f_{Y^Δ}. The expansion for f^Δ_MA is obtained from (2.1.7) and (2.1.8) as
\[ f^\Delta_{\mathrm{MA}}(\omega) = \frac{2^p\sigma^2 e^{-a_1\Delta}}{4\pi}\prod_{i=1}^{p}\Bigl[1-\cos\omega+\sum_{j=1}^{\infty}\frac{(\lambda_i\Delta)^{2j}}{(2j)!}\Bigr]\sum_{k=0}^{\infty} r_k\,c_k(\omega)\,\Delta^{2k+1}, \]

which, substituting x := 1 − cos ω, becomes

\[ f^\Delta_{\mathrm{MA}}(\omega) = \frac{2^p\sigma^2 e^{-a_1\Delta}}{4\pi}\prod_{i=1}^{p}\Bigl[x+\sum_{j=1}^{\infty}\frac{(\lambda_i\Delta)^{2j}}{(2j)!}\Bigr]\sum_{k=0}^{\infty} r_k\,\ell_k(x)\,\Delta^{2k+1}, \tag{2.1.10} \]

where the functions ℓ_k are defined by

\[ \sum_{k=0}^{\infty}\ell_k(x)\,z^{2k+1} = \frac{\sinh z}{\cosh z - 1 + x}. \]

In particular ℓ₀(x) = 1/x, ℓ₁(x) = (x−3)/(3!x²) and ℓ₂(x) = (x²−15x+30)/(5!x³).
More generally, ℓ_k(x) has the form

\[ \ell_k(x) = \frac{1}{(2k+1)!\,x^{k+1}}\prod_{i=1}^{k}(x-\alpha_{k,i}), \tag{2.1.11} \]

where

\[ \prod_{i=1}^{k}\alpha_{k,i} = \frac{(2k+1)!}{2^k}, \tag{2.1.12} \]

and the product, when k = 0, is defined to be 1. Since ℓ_{p−q−1}(x) plays a particularly
important role in what follows, we shall denote its zeroes more simply as

α_i := α_{p−q−1,i}, i = 1, …, p−q−1.
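The first coefficients of the ℓ_k-expansion can be verified numerically; the values of x and z below are arbitrary small test values, and the check is only an illustration of the stated formulas.

```python
# Numerical check of the expansion
#   sinh z / (cosh z - 1 + x) = l0(x) z + l1(x) z^3 + l2(x) z^5 + O(z^7),
# with l0 = 1/x, l1 = (x-3)/(3! x^2), l2 = (x^2 - 15x + 30)/(5! x^3).
from math import sinh, cosh

x, z = 0.7, 0.1                      # illustrative test values
lhs = sinh(z) / (cosh(z) - 1 + x)
rhs = (z / x
       + (x - 3) / (6 * x ** 2) * z ** 3
       + (x ** 2 - 15 * x + 30) / (120 * x ** 3) * z ** 5)
err = abs(lhs - rhs)                 # remainder is O(z^7)
print(err)
```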
From (2.1.10), with the aid of (2.1.9) and (2.1.11), we can now derive the required
higher-order approximation to f^Δ_MA(ω). Observe first that the expression on the right
of (2.1.10), in spite of its forbidding appearance, is in fact a polynomial in x of degree
less than p. We therefore collect together the coefficients of x^{p−1}, x^{p−2}, …, x⁰. This gives
(using the identity (2.1.12) and defining y := Δ²) the asymptotic expression, as Δ ↓ 0,

\[ f^\Delta_{\mathrm{MA}}(\omega) = \frac{2^p\sigma^2 e^{-a_1\Delta}\Delta^{2(p-q)-1}}{4\pi}\Bigl[x^p\bigl(r_{p-q-1}\,\ell_{p-q-1}(x)+o(1)\bigr) + \sum_{j=1}^{q}\kappa_j\,x^{q-j}y^{j}\Bigr], \tag{2.1.13} \]

with

\[ \kappa_j = 2^{-(p-q-1+j)}\Bigl(r_{p-q-1+j} - r_{p-q-2+j}\sum_{i=1}^{p}\lambda_i^2 + o(1)\Bigr) = 2^{-(p-q-1+j)}\sum \xi_{i_1}^2\cdots\xi_{i_j}^2 + o(1), \]

where the second equality follows from (2.1.9) and the sum on the right is over all
subsets {i₁, …, i_j} of size j of the q zeroes of the polynomial b(z).
Finally, replacing r_{p−q−1} in (2.1.13) by (−1)^{p−q−1}, substituting for ℓ_{p−q−1}(x) from
(2.1.11) and using the continuity of the zeroes of a polynomial as functions of its coefficients, we can rewrite (2.1.13) (recalling that x := 1 − cos ω and that α_i, i = 1, …, p−q−1,
are the zeroes of ℓ_{p−q−1}(x)) as

\[ f^\Delta_{\mathrm{MA}}(\omega) = \frac{(-1)^{p-q-1}\,2^p\sigma^2 e^{-a_1\Delta}\Delta^{2(p-q)-1}}{[2(p-q)-1]!\;4\pi}\prod_{i=1}^{p-1-q}\bigl[x-\alpha_i(1+o(1))\bigr]\prod_{k=1}^{q}\Bigl[x+\frac{(\xi_k\Delta)^2}{2}(1+o(1))\Bigr], \tag{2.1.14} \]
where each factor in the second product can be written as

\[ x + \frac{(\xi_k\Delta)^2}{2}(1+o(1)) = \frac{1}{2\rho_k^\Delta}\bigl(1-\rho_k^\Delta e^{i\omega}\bigr)\bigl(1-\rho_k^\Delta e^{-i\omega}\bigr), \qquad \rho_k^\Delta = 1 \mp \xi_k\Delta + o(\Delta), \tag{2.1.15} \]

and the sign is chosen so that |ρ_k^Δ|² = 1 ∓ 2ℜ(ξ_k)Δ + |ξ_k|²Δ² < 1 for Δ sufficiently small. For
each k = 1, …, q, we would then choose the plus if ℜ(ξ_k) < 0, or the minus if ℜ(ξ_k) > 0.
Assumption 1 (ii) excludes the case ℜ(ξ_k) = 0, since either sign gives a root greater
than 1.
Similarly, we can write

\[ x - \alpha_i(1+o(1)) = -\frac{1}{2\psi(\alpha_i)}\bigl(1+\psi(\alpha_i)e^{i\omega}\bigr)\bigl(1+\psi(\alpha_i)e^{-i\omega}\bigr), \]

where

\[ \psi(\alpha_i) = \alpha_i - 1 \mp \sqrt{(\alpha_i-1)^2-1} + o(1), \tag{2.1.16} \]

and the sign is chosen so that lim_{Δ↓0} |ψ(α_i)| ≤ 1. If the zero α_i of ℓ_{p−q−1}(x) is such that
both choices of sign cause the limit to be 1, then either choice will do, provided the same
choice is made for ψ(ᾱ_i), where ᾱ_i denotes the complex conjugate of α_i. A more precise
characterization of the roots α_i, i = 1, …, p−q−1, will be given in Chapter 3.
These factorizations allow us to give the following asymptotic representation of the
moving average process X_n = θ^Δ(B)Z^Δ_n appearing in (2.1.6).

Theorem 2.1.2. The moving average process {X_n}_{n∈ℤ} with spectral density f^Δ_MA has the
asymptotic representation, as Δ ↓ 0,

\[ X_n \approx \prod_{i=1}^{p-1-q}\bigl(1+\psi(\alpha_i)B\bigr)\prod_{k=1}^{q}\bigl(1-\rho_k^\Delta B\bigr)Z_n, \tag{2.1.17} \]

where

\[ \{Z_n\}_{n\in\mathbb Z}\sim\mathrm{WN}(0,\sigma_\Delta^2), \qquad \sigma_\Delta^2 = \frac{\Delta^{2(p-q)-1}e^{-a_1\Delta}\sigma^2}{[2(p-q)-1]!\;\prod_{i=1}^{p-q-1}\psi(\alpha_i)\;\prod_{k=1}^{q}\rho_k^\Delta}. \tag{2.1.18} \]

Proof. The result follows at once from (2.1.14), (2.1.15) and (2.1.16).

Remark 2.1.3. (i) The parameters ψ(α_i) and ρ_k^Δ may be complex, but the moving average
operator will have real coefficients because of the existence of corresponding complex
conjugate parameters in the product.
(ii) The representation in Theorem 2.1.2 is a substantial generalization of the one in
Corollary 2 of Chapter 1, since it is not only of higher order in Δ, but it applies to all
CARMA(p, q) processes, not only to those with p − q ≤ 3.
\[ Y^\Delta_n = \sum_{j=0}^{\infty}\psi_j Z_{n-j}, \qquad n\in\mathbb Z, \qquad \{Z_n\}_{n\in\mathbb Z}\sim\mathrm{WN}(0,\sigma_\Delta^2), \tag{2.2.1} \]

\[ g^\Delta(x) := \frac{\sigma_\Delta}{\sqrt\Delta}\sum_{j=0}^{\infty}\psi_j\,\mathbf 1_{[j\Delta,(j+1)\Delta)}(x). \tag{2.2.2} \]

Using Theorem 2.1.2, we shall show that, for all CARMA(p, q) processes, as Δ ↓ 0,
g^Δ converges pointwise to σg, or to g if L is standardized so that E[L₁²] = 1. We first
illustrate the convergence in the simple case when Y is a CARMA(1,0) or stationary
Ornstein–Uhlenbeck process.
Example 2.2.1. [The CARMA(1,0) process] This is a special case of (5.1.2) with kernel
g(x) = e^{λx} 1_{(0,∞)}(x), where λ < 0.
The sampled process Y^Δ is the discrete-time AR(1) process satisfying

\[ Y^\Delta_n = e^{\lambda\Delta}Y^\Delta_{n-1} + Z_n, \qquad n\in\mathbb Z, \]

where Z^Δ = (Z_n)_{n∈ℤ} with

\[ Z_n = \int_{(n-1)\Delta}^{n\Delta} e^{\lambda(n\Delta-u)}\,dL_u, \qquad n\in\mathbb Z. \]

In this case it is easy to write down the coefficients ψ_j and the white noise variance σ_Δ²
in the Wold representation of Y^Δ. From well-known properties of discrete-time AR(1)
processes, they are ψ_j = e^{λΔj}, j = 0, 1, 2, …, and σ_Δ² = σ²(e^{2λΔ}−1)/(2λ). Substituting these
values in the definition (2.2.2) we find that

\[ g^\Delta(x) = \sum_{j=0}^{\infty}\sqrt{\frac{\sigma^2(e^{2\lambda\Delta}-1)}{2\lambda\Delta}}\;e^{\lambda\Delta j}\,\mathbf 1_{[j\Delta,(j+1)\Delta)}(x), \]

which converges pointwise to σe^{λx} = σg(x) as Δ ↓ 0.
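The pointwise convergence in Example 2.2.1 can be observed numerically; the sketch below assumes the normalization g^Δ = (σ_Δ/√Δ)Σψ_j 1_{[jΔ,(j+1)Δ)} of (2.2.2), and the values of λ, σ² and x are arbitrary illustrations.

```python
# Check of g^Delta -> sigma * g for the CAR(1) kernel g(x) = e^{lam x}:
# on [j D, (j+1) D), g^Delta equals
#   sqrt(sigma2 (e^{2 lam D} - 1) / (2 lam D)) * e^{lam D j}.
from math import exp, sqrt

lam, sigma2, x = -1.0, 1.0, 1.0   # illustrative values

def g_delta(x, D):
    j = int(x / D)                # index of the cell containing x
    scale = sqrt(sigma2 * (exp(2 * lam * D) - 1) / (2 * lam * D))
    return scale * exp(lam * D * j)

errs = [abs(g_delta(x, D) - exp(lam * x)) for D in (0.1, 0.01, 0.001)]
print(errs)  # errors decrease roughly linearly in Delta
```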
The approximation (2.2.2) is well defined for all processes (5.1.2), and there are standard
methods for estimating the coefficients and white noise variance appearing in the definition
from observations of Y^Δ. Example 2.2.1 shows that g^Δ converges pointwise to σg for CAR(1)
processes. Our aim now is to establish this convergence for all CARMA(p, q) processes.
We give the proof under the assumption that the zeroes λ₁, …, λ_p of the autoregressive
polynomial a(z) all have multiplicity one. Multiple roots can be handled by supposing
them to be separated and letting the separation(s) converge to zero.
The kernel (2.1.4) of a causal CARMA(p, q) process Y whose autoregressive roots each
have multiplicity one reduces (see e.g. Brockwell and Lindner (2009)) to

\[ g(x) = \sum_{r=1}^{p}\frac{b(\lambda_r)}{a'(\lambda_r)}\,e^{\lambda_r x}\,\mathbf 1_{(0,\infty)}(x), \tag{2.2.3} \]

where a(z) = Π_{i=1}^p (z − λ_i) and b(z) = Π_{i=1}^q (z + ξ_i) are the autoregressive and moving
average polynomials respectively, and a′ denotes the derivative of the function a. We now
establish the convergence, as Δ ↓ 0, of g^Δ as defined in (2.2.2) to σg. Theorem 2.1.2 is
used to determine the parameters of the Wold representation appearing in the definition
of g^Δ.
Theorem 2.2.2. If Y is the CARMA(p, q) process with kernel (2.2.3), then
(i) the Wold coefficients and white noise variance of the sampled process Y^Δ are

\[ \psi_j = \sum_{r=1}^{p}\frac{\prod_{i=1}^{p-1-q}\bigl(1+\psi(\alpha_i)e^{-\lambda_r\Delta}\bigr)\prod_{k=1}^{q}\bigl(1-\rho_k^\Delta e^{-\lambda_r\Delta}\bigr)}{\prod_{m\ne r}\bigl(1-e^{(\lambda_m-\lambda_r)\Delta}\bigr)}\;e^{\lambda_r j\Delta}, \tag{2.2.4} \]

and

\[ \sigma_\Delta^2 = \frac{\Delta^{2(p-q)-1}e^{-a_1\Delta}\sigma^2}{[2(p-q)-1]!\;\prod_{i=1}^{p-q-1}\psi(\alpha_i)\;\prod_{k=1}^{q}\rho_k^\Delta}; \tag{2.2.5} \]

(ii) the generating function of the Wold coefficients is

\[ \sum_{j=0}^{\infty}\psi_j z^j = \frac{\prod_{i=1}^{p-1-q}\bigl(1+\psi(\alpha_i)z\bigr)\prod_{k=1}^{q}\bigl(1-\rho_k^\Delta z\bigr)}{\prod_{m=1}^{p}\bigl(1-e^{\lambda_m\Delta}z\bigr)}. \]
As Δ ↓ 0,

\[ \prod_{k=1}^{q}\bigl(1-\rho_k^\Delta e^{-\lambda_r\Delta}\bigr) = \Delta^q\prod_{k=1}^{q}(\lambda_r \pm \xi_k) + o(\Delta^q), \]

where the sign is taken to be the same as that of ℜ(ξ_k), for each k = 1, …, q, as in (2.1.15). It is
easy to see that the product above is asymptotically equal to Δ^q b(λ_r) if and only if the
process is invertible, that is, ℜ(ξ_k) > 0 for all k.
Under this assumption, the convergence of g^Δ to σ times (2.2.3) follows by substituting for
ψ_j and σ_Δ² from (2.2.4) and (2.2.5) into (2.2.2), substituting for ρ_k^Δ from (2.1.15), letting
Δ ↓ 0 and using the identities

\[ a'(\lambda_r) = \prod_{m\ne r}(\lambda_r-\lambda_m) \]

and

\[ \prod_{i=1}^{p-q-1}\frac{\bigl(1+\psi(\alpha_i)\bigr)^2}{\psi(\alpha_i)} = \prod_{i=1}^{p-q-1} 2\alpha_i = [2(p-q)-1]!, \]

where the last equality uses (2.1.12).
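The identity (1 + ψ(α))²/ψ(α) = 2α follows from the quadratic ψ² + 2(1 − α)ψ + 1 = 0 satisfied by ψ(α), and combines with (2.1.12) to give the factorial product above. A quick check (an illustration, using the explicit zeroes of ℓ₂ from the case p − q − 1 = 2):

```python
# Check of prod (1 + psi(a))^2 / psi(a) = prod 2a = 5! for the zeroes of
# l_2(x) = (x^2 - 15 x + 30)/(5! x^3); their product is 5!/2^2 = 30.
from math import sqrt

a1 = (15 + sqrt(105)) / 2          # zeroes of x^2 - 15 x + 30
a2 = (15 - sqrt(105)) / 2

def psi(a):
    """Root of psi^2 + 2(1 - a) psi + 1 = 0 with |psi| <= 1."""
    return a - 1 - sqrt((a - 1) ** 2 - 1)

prod_roots = a1 * a2                                          # 30
prod_id = ((1 + psi(a1)) ** 2 / psi(a1)) * ((1 + psi(a2)) ** 2 / psi(a2))
print(prod_roots, prod_id)  # approximately 30 and 120
```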
Remark 2.2.3. Although we have established the convergence of g^Δ only for CARMA
processes, the non-parametric nature of g^Δ strongly suggests that the result is true for
all processes defined as in (5.1.2). In practice we have found that estimation of g by
estimation of g^Δ with small Δ works extremely well for simulated processes with non-rational spectral densities also. The assumption on the roots of b will appear in Chapter
3 to be the continuous-time analogue of the invertibility condition for discrete-time
ARMA processes.
\[ f_{Y^\Delta}(\omega) \approx \frac{\sigma^2\Delta}{4\pi(1-\cos\omega)}. \]

In other words, for any fixed positive integer k, the sequence of observations Y_{nΔ}/√Δ, n =
1, …, k, from a second order point of view, behaves as Δ ↓ 0 like a sequence of observations
of integrated white noise with white-noise variance σ².
In this section we derive analogous asymptotic approximations for the spectral densities
of more general CMA processes and the implications for their local second order behaviour.
Since we allow in this section for spectral densities with a singularity at zero, we recall the
definition of the spectral domains,

Ω_d := [−π, π]\{0} and Ω_c := (−∞, ∞)\{0}.

We require the CMA processes to have a spectral density satisfying a weak regularity condition at infinity. To formulate this condition we first need a definition.

Definition 2.3.1 (Regularly varying function (cf. Bingham et al. (1987))). Let f be a
positive, measurable function defined on (0, ∞). If there exists ρ ∈ ℝ such that

\[ \lim_{x\to\infty}\frac{f(\lambda x)}{f(x)} = \lambda^{\rho} \quad \text{for all }\lambda>0, \tag{2.3.1} \]

then f is called regularly varying at ∞ with index ρ, written f ∈ R_ρ(∞).
\[ f_{Y^\Delta}(\omega) \approx L(\Delta^{-1})\Delta^{\alpha-1}\Bigl[\,|\omega|^{-\alpha} + (2\pi)^{-\alpha}\Bigl(\zeta\bigl(\alpha,1-\tfrac{\omega}{2\pi}\bigr)+\zeta\bigl(\alpha,1+\tfrac{\omega}{2\pi}\bigr)\Bigr)\Bigr], \qquad \omega\in\Omega_d, \tag{2.3.2} \]

where

\[ \zeta(s,r) := \sum_{k=0}^{\infty}\frac{1}{(r+k)^{s}}, \qquad \Re(s)>1, \quad r\ne 0,-1,-2,\dots, \]

is the Hurwitz zeta function; below we also consider the rescaled differenced sequence

\[ \bigl\{\Delta^{(1-\alpha)/2}(1-B)^{\alpha/2}Y_{n\Delta}\bigr\}_{n\in\mathbb Z}. \tag{2.3.3} \]
\[ \sigma_\Delta^2 \approx 2\pi C_\alpha\,L(\Delta^{-1})\Delta^{\alpha-1}, \]

where

\[ C_\alpha = \exp\Bigl\{\frac{1}{2\pi}\int_{-\pi}^{\pi}\log\Bigl[\,|\omega|^{-\alpha} + (2\pi)^{-\alpha}\Bigl(\zeta\bigl(\alpha,1-\tfrac{\omega}{2\pi}\bigr)+\zeta\bigl(\alpha,1+\tfrac{\omega}{2\pi}\bigr)\Bigr)\Bigr]\,d\omega\Bigr\}, \qquad \alpha>1. \tag{2.3.4} \]
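For α = 2 the bracket in (2.3.4) sums in closed form, Σ_k |ω + 2πk|^{−2} = 1/(2(1 − cos ω)), and the log-integral vanishes, so C₂ = 1 (as stated below with reference to Brockwell et al. (2012b)). A numerical illustration:

```python
# Numerical check that C_2 = 1 in (2.3.4):
# (1/2pi) int_{-pi}^{pi} log[1/(2(1 - cos w))] dw = 0.
from math import pi, cos, log, exp

n = 200000
acc = 0.0
for i in range(n):
    w = -pi + (i + 0.5) * 2 * pi / n   # midpoint rule avoids w = 0, +-pi
    acc += log(1.0 / (2.0 * (1.0 - cos(w))))
C2 = exp(acc / n)                      # exp of the average of log
print(C2)  # close to 1
```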
By (2.1.2),

\[ f_{Y^\Delta}(\omega) = \frac{1}{\Delta}\sum_{k=-\infty}^{\infty} f_Y\bigl((\omega+2\pi k)\Delta^{-1}\bigr) = \frac{f_Y(\Delta^{-1})}{\Delta}\sum_{k=-\infty}^{\infty}\frac{f_Y\bigl((\omega+2\pi k)\Delta^{-1}\bigr)}{f_Y(\Delta^{-1})}, \qquad \omega\in\Omega_d. \tag{2.3.5} \]

Since f_Y ∈ R_{−α}(∞), the Potter bounds give, for every ε, δ > 0 and all sufficiently small Δ,

\[ \frac{f_Y\bigl((\omega+2\pi k)\Delta^{-1}\bigr)}{f_Y(\Delta^{-1})} < (1+\varepsilon)\,|2\pi k+\omega|^{-\alpha+\delta}, \tag{2.3.6} \]

together with the analogous lower bound. We take δ > 0 such that α − δ > 1. Then, using (2.3.6), we can bound (2.3.5) as follows:

\[ (1-\varepsilon)\frac{f_Y(\Delta^{-1})}{\Delta}\sum_{k=-\infty}^{\infty}|2\pi k+\omega|^{-\alpha-\delta} < f_{Y^\Delta}(\omega) < (1+\varepsilon)\frac{f_Y(\Delta^{-1})}{\Delta}\sum_{k=-\infty}^{\infty}|2\pi k+\omega|^{-\alpha+\delta}, \qquad \omega\in\Omega_d, \tag{2.3.7} \]

so that, letting ε, δ ↓ 0, f_{Y^Δ}(ω) behaves like (f_Y(Δ⁻¹)/Δ) Σ_k |ω + 2πk|^{−α}. This sum can be expressed through the Hurwitz zeta function:

\[ \sum_{k=-\infty}^{\infty}|\omega+2\pi k|^{-\alpha} = |\omega|^{-\alpha} + (2\pi)^{-\alpha}\sum_{k=0}^{\infty}\Bigl[\bigl(k+1-\tfrac{\omega}{2\pi}\bigr)^{-\alpha}+\bigl(k+1+\tfrac{\omega}{2\pi}\bigr)^{-\alpha}\Bigr] = |\omega|^{-\alpha} + (2\pi)^{-\alpha}\Bigl[\zeta\bigl(\alpha,1-\tfrac{\omega}{2\pi}\bigr)+\zeta\bigl(\alpha,1+\tfrac{\omega}{2\pi}\bigr)\Bigr], \qquad \omega\in\Omega_d, \tag{2.3.8} \]

which, together with f_Y(Δ⁻¹) = L(Δ⁻¹)Δ^α, gives (2.3.2).
(b) By (2.3.2), the spectral density of the rescaled differenced sequence (2.3.3) satisfies, up to the slowly varying factor L(Δ⁻¹),

\[ h^\Delta(\omega) \approx \bigl(2(1-\cos\omega)\bigr)^{\alpha/2}\Bigl[\,|\omega|^{-\alpha} + (2\pi)^{-\alpha}\Bigl(\zeta\bigl(\alpha,1-\tfrac{\omega}{2\pi}\bigr)+\zeta\bigl(\alpha,1+\tfrac{\omega}{2\pi}\bigr)\Bigr)\Bigr], \qquad \omega\in\Omega_d. \tag{2.3.9} \]

The right-hand side is integrable over [−π, π] and bounded in a neighbourhood of the
origin, since 2(1 − cos ω)/ω² → 1 as ω → 0. Thus we conclude that the spectral
density of the rescaled differenced sequence (2.3.3) converges to that of a short-memory
stationary process.
(c) It is easy to check that the sampled CMA process has a Wold representation of
the form (2.2.1) and that its one-step prediction mean-squared error based on the infinite
past is σ_Δ². Kolmogorov's formula (see, e.g., Theorem 5.8.1 of Brockwell and Davis (1991))
states that the one-step prediction mean-squared error for a discrete-time stationary process with spectral density f is

\[ \sigma^2 = 2\pi\exp\Bigl\{\frac{1}{2\pi}\int_{-\pi}^{\pi}\log f(\omega)\,d\omega\Bigr\}. \tag{2.3.10} \]
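Kolmogorov's formula can be illustrated on a simple MA(1) spectral density, for which the one-step prediction mean-squared error is known to equal the innovation variance; θ and s2 below are arbitrary illustrative values, not from the text.

```python
# Check of Kolmogorov's formula (2.3.10) for the MA(1) spectral density
# f(w) = (s2/2pi) |1 - th e^{-iw}|^2 with |th| < 1: the formula should
# recover the innovation variance s2.
from math import pi, cos, log, exp

th, s2 = 0.6, 2.0
n = 100000
acc = 0.0
for i in range(n):
    w = -pi + (i + 0.5) * 2 * pi / n
    f = (s2 / (2 * pi)) * (1 - 2 * th * cos(w) + th * th)
    acc += log(f)
mse = 2 * pi * exp(acc / n)   # 2*pi*exp of the average of log f
print(mse)  # approximately 2.0
```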
Applying it to the differenced process we find that its one-step prediction mean-squared
error is

\[ 2\pi\exp\Bigl\{\frac{1}{2\pi}\int_{-\pi}^{\pi}\log h^\Delta(\omega)\,d\omega\Bigr\} = 2\pi\exp\Bigl\{\frac{\alpha}{2}\,\frac{1}{2\pi}\int_{-\pi}^{\pi}\log(2-2\cos\omega)\,d\omega\Bigr\}\exp\Bigl\{\frac{1}{2\pi}\int_{-\pi}^{\pi}\log\bigl[\Delta^{1-\alpha}f_{Y^\Delta}(\omega)\bigr]\,d\omega\Bigr\} = \Delta^{1-\alpha}\sigma_\Delta^2, \]

since ∫_{−π}^{π} log(2 − 2 cos ω) dω = 0. On the other hand, by (2.3.9) and dominated convergence, the same quantity is asymptotically equal to 2π L(Δ⁻¹) times the exponential of the logarithmic integral defining C_α in (2.3.4). Hence

\[ \sigma_\Delta^2 \approx 2\pi C_\alpha\,L(\Delta^{-1})\Delta^{\alpha-1}. \tag{2.3.11} \]
Remark 2.3.3. (i) Theorem 2.3.2(b) means that, from a second order point of view, a
sample {Y_{nΔ}, n = 1, …, k} with k fixed and Δ small resembles a sample from an (α/2)-times integrated short-memory stationary sequence. If in (b) we replace (1−B)^{α/2} by
(1−B)^β where β > (α−1)/2, then the conclusion holds for the over-differenced process.
If, for example, we difference at order β = ⌊(α+1)/2⌋ (the smallest integer greater than
(α−1)/2) we get a stationary process. In particular, if 1 < α < 3, then ⌊(α+1)/2⌋ = 1
and, by (2.3.2) and (2.3.9), the differenced sequence (1−B)Y^Δ has the asymptotic spectral
density, as Δ ↓ 0,

\[ 2(1-\cos\omega)\,L(\Delta^{-1})\Delta^{\alpha-1}\Bigl[\,|\omega|^{-\alpha} + (2\pi)^{-\alpha}\Bigl(\zeta\bigl(\alpha,1-\tfrac{\omega}{2\pi}\bigr)+\zeta\bigl(\alpha,1+\tfrac{\omega}{2\pi}\bigr)\Bigr)\Bigr], \qquad \omega\in\Omega_d. \]

This is the spectral density of the increment process of a self-similar process with self-similarity parameter H = (α−1)/2 (see Beran (1992), eq. (2)). Moreover, for a generic α >
1 the asymptotic autocorrelation function of the filtered sequence has unbounded support.
The only notable exception is when α is even, where the asymptotic autocorrelation
sequence is the one of a moving-average process of order α/2, as in Brockwell et al.
(2012b) or in Example 2.3.7.
(ii) The constant C_α of (2.3.4) is shown as a function of α in Figure 2.1. The values,
when α is an even positive integer, can be derived from (2.2.5) since CARMA processes
constitute a subclass of the processes covered by the theorem (see Example 2.3.7). It is
clear from (2.3.4) that C_α is exponentially bounded as α → ∞.
Figure 2.1.: The constant C_α, as a function of the index of regular variation α, is shown
on the left using a linear scale and on the right using a logarithmic scale.
From Corollary 3.4 (a) of Brockwell et al. (2012b) we know that C₂ = 1. The
horizontal line indicates the value 1.
Corollary 2.3.4. Let Y be a CMA process satisfying the assumptions of Theorem 2.3.2
with 1 < α < 2p + 1. Then, as Δ ↓ 0,

\[ \mathrm E\bigl[\bigl((1-B)^p Y_{n\Delta}\bigr)^2\bigr] \approx 2^p L(\Delta^{-1})\Delta^{\alpha-1}\int_{-\pi}^{\pi}(1-\cos\omega)^p\Bigl[\,|\omega|^{-\alpha} + (2\pi)^{-\alpha}\Bigl(\zeta\bigl(\alpha,1-\tfrac{\omega}{2\pi}\bigr)+\zeta\bigl(\alpha,1+\tfrac{\omega}{2\pi}\bigr)\Bigr)\Bigr]\,d\omega. \]

Proof. By stationarity we have E[(1−B)^p Y_{nΔ}] = 0 and hence E[((1−B)^p Y_{nΔ})²] is the
variance of (1−B)^p Y_{nΔ}, which can be calculated as the integral of its spectral density.
Thus

\[ \mathrm E\bigl[\bigl((1-B)^p Y_{n\Delta}\bigr)^2\bigr] = 2^p\int_{-\pi}^{\pi}(1-\cos\omega)^p f_{Y^\Delta}(\omega)\,d\omega. \]

Using the inequalities (2.3.7) and Lebesgue's dominated convergence theorem, we find
that, as Δ ↓ 0,

\[ \frac{1}{L(\Delta^{-1})\Delta^{\alpha-1}}\int_{-\pi}^{\pi}(1-\cos\omega)^p f_{Y^\Delta}(\omega)\,d\omega \;\to\; \int_{-\pi}^{\pi}(1-\cos\omega)^p\sum_{k=-\infty}^{\infty}|\omega+2\pi k|^{-\alpha}\,d\omega, \]

which, with the previous equation and (2.3.8), gives the result.
For ω → ∞ the Fourier transform of the kernel satisfies

\[ |\mathcal F(g)(\omega)| \sim \Gamma(\beta+1)\int_0^{1/\omega} g(s)\,ds = \frac{\Gamma(\beta+1)}{\beta\,\omega}\,g(1/\omega)\,(1+o(1)), \tag{2.3.12} \]

where we used the fact that g ∈ R_{β−1}(0+) means g(1/·) ∈ R_{1−β}(∞), so that Karamata's theorem applies to ∫_0^{1/ω} g(s) ds.
Substituting (2.3.12) into (2.1.1) and recalling that Γ(β+1) = βΓ(β), we obtain

\[ f_Y(\omega) = \frac{\sigma^2}{2\pi}\,|\mathcal F(g)(\omega)|^2 \sim \frac{\sigma^2\Gamma^2(\beta)}{2\pi}\,\omega^{-2}g^2(1/\omega), \qquad \omega\to\infty. \tag{2.3.13} \]
where C_{2(p−q)} can be calculated from (2.3.4). However, C_{2(p−q)} can also be calculated from
(2.2.5) as C_{2(p−q)} = [(2(p−q)−1)! Π_{i=1}^{p−q−1} lim_{Δ↓0} ψ(α_i)]^{−1}, where ψ(α_i) was defined in
(2.1.16). Theorem 2.3.2(b) implies that the spectral density of Δ^{q−p+1/2}(1−B)^{p−q}Y^Δ
converges to that of a short memory stationary process. From Theorem 2.1.2 we get
the more precise result that the spectral density of C_{2(p−q)}^{−1/2}Δ^{q−p+1/2}(1−B)^{p−q}Π_{i=1}^{p−q−1}(1+ψ(α_i)B)^{−1}Y^Δ converges, as Δ ↓ 0, to the constant spectral density of a white noise sequence.

Example 2.3.8. [FICARMA process] The fractionally integrated CARMA process has spectral density

\[ f_Y(\omega) = \frac{\sigma^2}{2\pi}\,\omega^{-2d}\,\Bigl|\frac{b(i\omega)}{a(i\omega)}\Bigr|^2, \qquad \omega\in\Omega_c, \tag{2.3.14} \]

with a(·) and b(·) as in (2.1.3) and 0 < d < 0.5. Hence

f_Y(ω) = ω^{−α} L(ω), ω ∈ Ω_c,

where α = 2(p + d − q) and lim_{ω→∞} L(ω) = σ²/(2π). The spectral density (2.3.14) has a
singularity at frequency 0 which gives rise to the slowly decaying autocorrelation function
associated with long memory. Applying Theorem 2.3.2(c) as in Example 2.3.7, the white
noise variance in the Wold representation of Y^Δ satisfies, as Δ ↓ 0,

\[ \sigma_\Delta^2 \approx \sigma^2\,C_{2(p+d-q)}\,\Delta^{2(p+d-q)-1}, \tag{2.3.15} \]

where C_{2(p+d−q)} can be calculated from (2.3.4). As Δ ↓ 0, the asymptotic spectral density
f_{Y^Δ} of Y^Δ is given by (2.3.2) with α = 2(p+d−q) > 1 and is therefore not integrable for
any Δ > 0. However, Theorem 2.3.2(b) implies that the spectral density of Δ^{q−p−d+1/2}(1−B)^{p+d−q}Y^Δ converges to that of a short memory stationary process.
Our next two examples are widely used in the modelling of turbulence. Kolmogorov's
famous 5/3-law (see Frisch (1996), Section 6.3.1, and Pope (2000), Section 6.1.3) suggests a
regularly varying spectral density model for turbulent flows.

Example 2.3.9. [Two turbulence models]
Denote by U the mean flow velocity, let ℓ be the integral scale parameter and define τ_ℓ =
ℓ/U.

(i) The von Kármán spectrum for the longitudinal component of the velocity is of the form

\[ f_Y(\omega) = c\,\varepsilon^{2/3}\ell^{5/3}\,\omega^{4}\bigl(\omega^2 + c_\ell\,\tau_\ell^{-2}\bigr)^{-17/6}, \qquad \omega\in\Omega_c. \]

Moreover, f_Y ∈ R_{−5/3}(∞), so it has a representation (2.3.1) and the conclusions of Theorem 2.3.2 hold with α = 5/3.

(ii) The Kaimal spectrum for the longitudinal component of the energy spectrum is
the current standard of the International Electrotechnical Commission; cf. IEC 61400-1
(1999). The spectral density is given by

\[ f_Y(\omega) = \sigma_v^2\,\frac{4\tau_\ell}{(1+6\tau_\ell\omega)^{5/3}}, \qquad \omega\in\Omega_c. \tag{2.3.16} \]
Example 2.3.10. [Gamma kernel] The CMA process (5.1.2) with the gamma kernel

\[ g(t) = t^{\beta-1}e^{-\theta t}\,\mathbf 1_{(0,\infty)}(t), \qquad \beta>1/2,\ \theta>0, \tag{2.3.17} \]

has variance

\[ \gamma_Y(0) = \sigma^2(2\theta)^{1-2\beta}\,\Gamma(2\beta-1) \]

and autocorrelation function

\[ \rho_Y(h) = \frac{2^{3/2-\beta}}{\Gamma(\beta-1/2)}\,(\theta h)^{\beta-1/2}K_{\beta-1/2}(\theta h), \tag{2.3.18} \]

which is the Whittle–Matérn autocorrelation function (see Guttorp and Gneiting (2005))
with parameter β − 1/2, evaluated at θh. The function K_{β−1/2} in (2.3.18) is the modified
Bessel function of the second kind with index β − 1/2 (Abramowitz and Stegun (1974),
Section 9.6).
Note that g ∈ R_{β−1}(0+) and that it satisfies the assumptions of Proposition 2.3.5. From
(2.1.1) with F{g(·)}(ω) = Γ(β)(θ − iω)^{−β}, we obtain the spectral density

\[ f_Y(\omega) = \frac{\sigma^2}{2\pi}\,\bigl|\mathcal F\{g(\cdot)\}(\omega)\bigr|^2 = \frac{\sigma^2\Gamma^2(\beta)}{2\pi\,(\theta^2+\omega^2)^{\beta}} = \frac{\sigma^2\Gamma^2(\beta)\,\theta^{-2\beta}}{2\pi\,\bigl((\omega/\theta)^2+1\bigr)^{\beta}}, \qquad \omega\in\Omega_c, \]

which belongs to R_{−2β}(∞), with slowly varying function L such that lim_{ω→∞} L(ω) =
σ²Γ²(β)/(2π).
Note that if β = 5/6, then f_Y, like the von Kármán spectral density of Example 2.3.9 (i),
decays as ω^{−5/3} for ω → ∞, in accordance with Kolmogorov's 5/3-law.
Theorem 2.3.2 gives the asymptotic form of the spectral density of the sequence {(1−B)Y_{nΔ}}_{n∈ℤ} as Δ ↓ 0,

\[ h^\Delta(\omega) \approx \frac{\sigma^2\Gamma^2(\beta)}{2\pi}\,\Delta^{2\beta-1}\,2(1-\cos\omega)\Bigl[\,|\omega|^{-2\beta} + (2\pi)^{-2\beta}\Bigl(\zeta\bigl(2\beta,1-\tfrac{\omega}{2\pi}\bigr)+\zeta\bigl(2\beta,1+\tfrac{\omega}{2\pi}\bigr)\Bigr)\Bigr], \qquad \omega\in\Omega_d. \]

The second order structure function, S₂(τ) := E[(Y_τ − Y₀)²], plays an important role
in the physics of turbulence. For the kernel (2.3.17) we have

\[ S_2(\tau) = 2\gamma_Y(0)\bigl(1-\rho_Y(\tau)\bigr), \qquad \tau>0, \]

and its asymptotic behaviour as τ ↓ 0 is given by

\[ \frac{S_2(\tau)}{2\gamma_Y(0)} = \begin{cases} \dfrac{\Gamma(3/2-\beta)}{\Gamma(\beta+1/2)}\Bigl(\dfrac{\theta\tau}{2}\Bigr)^{2\beta-1} + O(\tau^2), & 1/2<\beta<3/2, \\[1ex] \dfrac{1}{2}\,(\theta\tau)^2\,|\log\theta\tau| + O(\tau^2), & \beta = 3/2, \\[1ex] \dfrac{1}{4(\beta-3/2)}\,(\theta\tau)^2 + O(\tau^{2\beta-1}), & \beta>3/2, \end{cases} \]

which can be found in Pope (2000), Appendix G, and Barndorff-Nielsen et al. (2011). The
first of these formulae can also be obtained as a special case of Corollary 2.3.4 with p = 1.
Durbin–Levinson algorithm

The first method determines the (high-order) causal AR(p) process whose autocovariances
match the sample autocovariances of Y^Δ up to lag p by solving the Yule–Walker equations
for the coefficients φ₁, …, φ_p of the autoregressive polynomial φ(z) = 1 − φ₁z − … − φ_p z^p.
This can be done efficiently using the Durbin–Levinson algorithm, which also yields an
estimator of σ_Δ² (Proposition 8.2.1, Brockwell and Davis (1991)).
The coefficients of the Wold representation (2.2.1) of the fitted AR(p) process are used
as estimates of the coefficients ψ_j in (4.1). The estimate of ψ_j is the coefficient of z^j in
the power-series expansion of 1/φ(z). These coefficients are easily calculated recursively
from the coefficients φ_j.
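The two steps — Durbin–Levinson for the Yule–Walker equations, then inversion of φ(z) — can be sketched in a few lines. This is an illustration under the assumption of known autocovariances, not an excerpt from the simulations; the AR(1) autocovariances below are a made-up example.

```python
# Durbin-Levinson recursion followed by power-series inversion of phi(z).

def durbin_levinson(gamma, p):
    """Return (phi_1..phi_p, innovation variance) from autocovariances."""
    phi = []
    v = gamma[0]
    for k in range(1, p + 1):
        acc = gamma[k] - sum(phi[j] * gamma[k - 1 - j] for j in range(len(phi)))
        kappa = acc / v                              # partial autocorrelation
        phi = [phi[j] - kappa * phi[k - 2 - j] for j in range(k - 1)] + [kappa]
        v *= (1 - kappa * kappa)
    return phi, v

def wold_coeffs(phi, m):
    """Coefficients of z^j in 1/phi(z), phi(z) = 1 - phi_1 z - ... ."""
    psi = [1.0]
    for j in range(1, m + 1):
        psi.append(sum(phi[i - 1] * psi[j - i]
                       for i in range(1, min(j, len(phi)) + 1)))
    return psi

gamma = [0.8 ** h / 0.36 for h in range(4)]   # AR(1): phi = 0.8, unit noise
phi, v = durbin_levinson(gamma, 3)
psi = wold_coeffs(phi, 4)
print(phi[0], v, psi[2])  # approximately 0.8, 1.0, 0.64
```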
Innovations algorithm

A more direct approach to fitting a high-order moving average process based on the
sample autocovariance function is to use the innovations algorithm (Definition 8.3.1 of
Brockwell and Davis (1991)). Although this method is computationally slower than the
Durbin–Levinson algorithm, under the conditions of Theorem 8.3.1 of Brockwell and Davis
(1991) the estimates of the Wold coefficients are asymptotically jointly normal with a simple
covariance matrix, and the estimator of the white noise variance σ_Δ² is consistent. If the
driving process L in (5.1.2) is Brownian motion (as in the following simulations) and if
Y is a CARMA process, then Y and Y^Δ are Gaussian, the driving noise in the Wold
representation of Y^Δ is i.i.d. Gaussian, and the conditions of Theorem 8.3.1 are satisfied.
In practice it has been found that the Durbin–Levinson algorithm gives better results,
except when the fitted autoregressive polynomial has zeroes very close to the unit circle.
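The innovations recursion itself is short; the sketch below follows the standard recursion of Brockwell and Davis (1991, Definition 8.3.1) and is an illustration on a made-up MA(1) autocovariance sequence, not an excerpt from the simulations.

```python
# Innovations algorithm: theta[n-1][j-1] approximates the j-th MA
# coefficient at step n, and v[n] is the one-step prediction MSE.

def innovations(gamma, m):
    v = [gamma[0]]
    theta = [[0.0] * m for _ in range(m)]
    for n in range(1, m + 1):
        for k in range(n):
            acc = gamma[n - k] - sum(
                theta[k - 1][k - j - 1] * theta[n - 1][n - j - 1] * v[j]
                for j in range(k))
            theta[n - 1][n - k - 1] = acc / v[k]
        v.append(gamma[0] - sum(
            theta[n - 1][n - j - 1] ** 2 * v[j] for j in range(n)))
    return theta, v

m = 30
gamma = [1.25, 0.5] + [0.0] * (m - 1)   # MA(1): theta = 0.5, unit noise
theta, v = innovations(gamma, m)
print(theta[m - 1][0], v[m])  # approach 0.5 and 1.0
```

The recursion converges geometrically here, so already for moderate m the leading coefficient and the prediction variance are numerically indistinguishable from the true MA(1) parameters.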
Figure: panels of the estimated kernel g^Δ(t), comparing the true kernel ("Kernel") with the Durbin–Levinson-plus-inversion ("DB+Inversion estimate") and innovations ("Innovation estimate") estimates.
The Brookhaven dataset displays a rather high Reynolds number (about 17000), and can thus be regarded as
a good representative of turbulent phenomena. A more detailed presentation of turbulence
phenomena and an application of the CMA model (5.1.2) in the context of turbulence
modelling is given in Ferrazzano and Klüppelberg (2012); moreover, we refer to Dhruva
(2000) and Ferrazzano (2010) for a precise description of the data, and to Pope (2000) and Frisch
(1996) for a comprehensive review of turbulence theory. The non-parametric kernel estimate for the CMA model (5.1.2) is then based on Δ = 1/5000 = 2 × 10⁻⁴ seconds. A
CMA model (5.1.2) with a gamma kernel as in Example 2.3.10 has been suggested as a
parametric model in Barndorff-Nielsen and Schmiegel (2009).
Figure 2.6 a) shows the sample autocorrelation function up to 120 seconds, which appears to be exponentially decreasing. In general, the data are not significantly correlated
after a lag of 100 seconds.
The estimated spectral density of Y^Δ is shown in Figure 2.6 b), plotted against the
frequency ν = ω/2π, since ω in (2.1.1) has the units of an angular velocity. The estimates
depicted with circles were obtained by Welch's method (Welch (1967)) with segments of
2²² data points (circa 14 minutes), windowed with a Hamming window and using an
overlap factor of 50%. This method allows a significant reduction of the variance of
the estimate, sacrificing some resolution in frequency. In order to have a better resolution
near the frequency 0 Hz, we estimated the spectral density for ν ≤ 10⁻³ Hz with the raw
periodogram (Brockwell and Davis (1991), p. 322), which provides a better resolution in
frequency at the cost of a larger variance. The results are plotted in the leftmost part of Figure
2.6 b) with diamonds, and the two ranges of estimation are indicated by a vertical solid
line. The spectral density is plotted on a log-log scale, so that a power-law relationship
appears linear. The spectral density in a neighborhood of zero is essentially constant,
and this is compatible with an exponentially decreasing autocorrelation function (as for
instance for the gamma kernel function of Example 2.3.10).
Moreover, for a frequency ν between 10⁻² and 200 Hz, log f_Y decreases linearly with
log ν with a slope of approximately −5/3, in accordance with Kolmogorov's 5/3-law. As a
reference, a solid line depicts the power-law ν^{−5/3}. For ν larger than 200 Hz, the spectral
density deviates from Kolmogorov's 5/3-law, decaying with a steeper slope. We stress
the point that a spectral density decaying as prescribed by Kolmogorov's law in the
neighborhood of ∞ would require a kernel behaving like t^{−1/6} near the origin, according
to Proposition 2.3.5.
The estimated kernel function ĝ^Δ(t) is plotted in Figure 2.6 c) on a log-linear scale, so
that the behaviour of the kernel estimate for t near zero and for t tending to infinity is clearly
visible. For large t the estimated kernel decays rapidly, and it oscillates slightly around
zero for t > 100 seconds. For small t the estimated kernel grows slowly up to t ≈ 10⁻³,
corresponding to Kolmogorov's 5/3-law. For smaller t it drops off to zero, corresponding
to the steeper decay of the spectral density at high frequencies.
Figure 2.4.: Estimation of the gamma kernel for β = 2 (CAR(2) process) and Δ = 2⁻².
Figure 2.6 d) shows the spectral density computed directly from the estimated kernel
function ĝ^Δ. Its close resemblance to the spectral density calculated by Welch's method
provides justification for our estimator of g even when there is no underlying parametric
model.

2.6. Conclusions

We studied the behaviour of the sequence of observations Y^Δ obtained when a CMA
process of the form (5.1.2) is observed on a grid with spacing Δ, as Δ ↓ 0.
In the particular case when Y is a CARMA process we obtained a more refined asymptotic representation of the sampled process than that found by Brockwell et al. (2012) and
used it to show the pointwise convergence, as Δ ↓ 0, of a sequence of functions defined in
terms of the Wold representation of the sampled process to the kernel g. This suggested
a non-parametric approach to the estimation of g based on estimation of the coefficients
and white noise variance of the Wold representation of the sampled process.
For a larger class of CMA processes we found results analogous to those of Brockwell
et al. (2012) and examined their implications for the local second order properties of such
processes, which include in particular fractionally integrated CARMA processes.
Figure 2.5.: Estimation of the gamma kernel for β = 2 (CAR(2) process) and Δ = 2⁻⁴.
Finally, we applied the non-parametric procedure for estimating g to simulated and real
data with positive results.
Figure 2.6.: Estimates for the Brookhaven dataset: a) autocorrelation function; b) spectral density (Welch estimator and periodogram); c) kernel function (linear-log
scale); d) spectral density computed using the estimated kernel.
3.1. Preliminaries
3.1.1. Finite-variance CARMA processes
Throughout this Chapter we shall be concerned with a CARMA process driven by a
second order zero-mean Lévy process L = {L_t}_{t∈ℝ} with E[L₁] = 0 and E[L₁²] = 1. It is
defined as follows.
For non-negative integers p and q such that q < p, a CARMA(p, q) process Y = (Y_t)_{t∈ℝ}
with coefficients a₁, …, a_p, b₀, …, b_q ∈ ℝ and driving Lévy process L is defined to be a
strictly stationary solution of the suitably interpreted formal equation

\[ a(\mathrm D)Y_t = \sigma\,b(\mathrm D)\mathrm D L_t, \qquad t\in\mathbb R, \tag{3.1.1} \]

where D denotes differentiation with respect to t, a(·) and b(·) are the characteristic
polynomials

\[ a(z) := z^p + a_1 z^{p-1} + \dots + a_p \quad \text{and} \quad b(z) := b_0 + b_1 z + \dots + b_{p-1}z^{p-1}, \]

the coefficients b_j satisfy b_q = 1 and b_j = 0 for q < j < p, and σ > 0 is a constant. The
polynomials a(·) and b(·) are assumed to have no common zeroes.
Since the derivative DL_t does not exist in the usual sense, we interpret (3.1.1) as being
equivalent to the observation and state equations

\[ Y_t = \sigma\,b^\top X_t, \tag{3.1.2} \]
\[ dX_t = A X_t\,dt + e_p\,dL_t, \tag{3.1.3} \]

where X_t = (X(t), X^{(1)}(t), …, X^{(p−2)}(t), X^{(p−1)}(t))^⊤,

\[ A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_p & -a_{p-1} & -a_{p-2} & \cdots & -a_1 \end{pmatrix}, \qquad b = \begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_{p-2} \\ b_{p-1} \end{pmatrix}, \qquad e_p = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}, \]

and A = −a₁ for p = 1.
It is easy to check that the eigenvalues of the matrix A are the same as the zeroes of the
autoregressive polynomial a(·).
Under Assumption 1 (i) it has been shown in (Brockwell and Lindner (2009), Theorem 3.3) that Eqs. (3.1.2)–(3.1.3) have the unique strictly stationary solution

\[ Y_t = \sigma\int_{-\infty}^{t} g(t-u)\,dL_u, \qquad t\in\mathbb R, \tag{3.1.4} \]
where
g(t) =
Z
X
b(z)
b(z)
tz
zt
e dz =
Resz= e
,
2i
0,
a(z)
a(z)
if t > 0,
(3.1.5)
if t 0,
and \rho is any simple closed curve in the open left half of the complex plane encircling the zeroes of a(\cdot). The sum is over the distinct zeroes \lambda of a(\cdot) and \mathrm{Res}_{z=\lambda}(\cdot) denotes the residue at \lambda of the function in brackets. The kernel g can also be expressed (Brockwell and Lindner (2009), equations (2.10) and (3.7)) as
g(t) = \sigma\,b^\top e^{At} e_p\,\mathbf{1}_{(0,\infty)}(t), \quad t \in \mathbb{R}, \qquad (3.1.6)

\int_{\mathbb{R}} g(s)\,e^{-i\omega s}\,ds = \sigma\,\frac{b(i\omega)}{a(i\omega)}, \quad \omega \in \mathbb{R}. \qquad (3.1.7)
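Eq. (3.1.6) is also convenient computationally: the kernel can be evaluated with a matrix exponential. A minimal sketch (parameter values are illustrative; for p = 1 the formula reduces to the explicit CAR(1) kernel g(t) = \sigma e^{\lambda t}, which serves as a check):

```python
import numpy as np
from scipy.linalg import expm

def carma_kernel(t, a, b, sigma=1.0):
    """g(t) = sigma * b^T exp(A t) e_p for t > 0, cf. Eq. (3.1.6);
    a = (a_1, ..., a_p), b = (b_0, ..., b_{p-1})."""
    if t <= 0:
        return 0.0
    p = len(a)
    A = np.zeros((p, p))
    A[:-1, 1:] = np.eye(p - 1)
    A[-1, :] = -np.array(a[::-1])
    e_p = np.zeros(p)
    e_p[-1] = 1.0
    return sigma * np.array(b) @ expm(A * t) @ e_p

# CAR(1) check: a(z) = z + 1, so lambda = -1 and g(t) = sigma * exp(-t)
print(np.isclose(carma_kernel(0.7, a=[1.0], b=[1.0], sigma=2.0), 2.0 * np.exp(-0.7)))
```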
In the light of Eqs. (3.1.4)–(3.1.7), we can interpret a CARMA process as a continuous-time filtered white noise, whose transfer function has a finite number of poles and zeros. We point out that the requirement that the roots of a(\cdot) lie in the interior of the left half of the complex plane in order to have causality arises from Theorem V, p. 8, of Paley and Wiener (1934), which is intrinsically connected with the theorems in Titchmarsh (1948), pp. 125–129, on the Hilbert transform. A similar requirement on the roots of b(\cdot) will turn out to be necessary for recovering the driving Lévy process.
The sampled process Y^\Delta := (Y_{n\Delta})_{n\in\mathbb{Z}} satisfies the ARMA(p, p-1) equation

\varphi^\Delta(B)\,Y_n^\Delta = \theta^\Delta(B)\,Z_n^\Delta, \quad n \in \mathbb{Z}, \quad \{Z_n^\Delta\} \sim \mathrm{WN}(0, \sigma_\Delta^2), \qquad (3.1.8)

with the AR part \varphi^\Delta(B) := \prod_{i=1}^{p} (1 - e^{\lambda_i \Delta} B), where \lambda_1, \dots, \lambda_p are the zeroes of a(\cdot) and B is the discrete-time backshift operator, BY_n^\Delta := Y_{n-1}^\Delta. Finally, the MA part \theta^\Delta(\cdot) is a polynomial of order p-1, chosen in such a way that it has no roots inside the unit circle. For p > 3 and fixed \Delta > 0 there is no explicit expression for the coefficients of \theta^\Delta(\cdot) nor for the white noise process Z^\Delta.
Nonetheless, asymptotic expressions for \theta^\Delta(\cdot) and \sigma_\Delta^2 = \mathrm{var}(Z_n^\Delta) as \Delta \to 0 were obtained in Brockwell et al. (2012a,b). Namely we have that the polynomial \theta^\Delta(z) can be written as (see Theorem 2.1, Chapter 2)

\theta^\Delta(z) = \prod_{i=1}^{p-q-1} (1 + \eta(\zeta_i)\,z) \prod_{k=1}^{q} (1 - \xi_k^\Delta z), \quad z \in \mathbb{C}, \qquad (3.1.9)

\sigma_\Delta^2 = \frac{\sigma^2 \Delta^{2(p-q)-1}}{(2(p-q)-1)!\ \prod_{i=1}^{p-q-1} \eta(\zeta_i)}\,(1 + o(1)), \quad \text{as } \Delta \to 0, \qquad (3.1.10)

where, again as \Delta \to 0,

\xi_k^\Delta = 1 \pm \mu_k \Delta + o(\Delta), \quad k = 1, \dots, q,
\eta(\zeta_i) = \zeta_i - 1 \pm \sqrt{(\zeta_i - 1)^2 - 1} + o(1), \quad i = 1, \dots, p-q-1, \qquad (3.1.11)

and \mu_1, \dots, \mu_q denote the zeroes of b(\cdot).
The signs in (3.1.11) are chosen respectively such that, for sufficiently small \Delta, the coefficients \xi_k^\Delta and \eta(\zeta_i) are less than 1 in absolute value, to ensure that (3.1.8) is invertible. Moreover, \zeta_i, i = 1, \dots, p-q-1, are the zeros of \alpha_{p-q-1}(\cdot), which is defined as the (p-q-1)th coefficient in the series expansion

\frac{\sinh(z)}{\cosh(z) - 1 + x} = \sum_{k=0}^{\infty} \alpha_k(x)\,z^{2k+1}, \quad z \in \mathbb{C}, \; x \in \mathbb{R}\setminus\{0\}, \qquad (3.1.12)

where the L.H.S. of Eq. (3.1.12) is a power transfer function arising from the sampling procedure (cf. Brockwell et al. (2012b), Eq. (11)). Therefore the coefficients \eta(\zeta_i), i = 1, \dots, p-q-1, can be regarded as spurious, as they do not depend on the parameters of the underlying continuous-time process Y, but just on p - q.
Remark 3.1.1. Our notion of sampled process is a weak one, since we only require that the sampled process has the same autocovariance structure as the continuous-time process, when observed on a discrete grid. We know that the filtered process on the LHS of (3.1.8) (Brockwell and Lindner (2009), Lemma 2.1) is a (p-1)-dependent discrete-time process. Therefore there exist 2^{p-1} possible representations for the RHS of (3.1.8) yielding the same autocovariance function as the filtered process, but only one having its roots outside the unit circle, which is called the minimum-phase spectral factor (see Sayed and Kailath (2001) for a review on the topic). Since it is not possible to discriminate between them, we always take the minimum-phase spectral factor without any further question. This will be crucial in our main result.

Moreover, the rationale behind Assumption 1 (ii) becomes clear now: if \Re(\mu_k) = 0 for some k, then the corresponding |\xi_k^\Delta|^2 is equal to 1 + \Delta^2 |\mu_k|^2 + o(\Delta^2), for either sign choice. Then the MA(p-1) polynomial in (3.1.9) will never be invertible for small \Delta.
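The minimum-phase selection can be made concrete in the simplest case: for an MA(1)-type 1-dependent process, the parameter choices \theta and 1/\theta produce the same autocovariances, and the spectral factorization keeps the invertible one. A small sketch (the function name and numerical values are ours, for illustration only):

```python
import numpy as np

def minimum_phase_ma1(gamma0, gamma1):
    """Return the invertible MA(1) representation X_n = Z_n + theta*Z_{n-1}
    matching the autocovariances gamma0, gamma1 (0 < |gamma1/gamma0| < 1/2).
    Of the two candidate roots theta and 1/theta, the one with |theta| < 1,
    the minimum-phase spectral factor, is selected."""
    rho = gamma1 / gamma0
    theta = (1.0 - np.sqrt(1.0 - 4.0 * rho**2)) / (2.0 * rho)
    sigma2 = gamma0 / (1.0 + theta**2)
    return theta, sigma2

# Autocovariances generated by theta = 0.5, sigma2 = 1: gamma0 = 1.25, gamma1 = 0.5
theta, sigma2 = minimum_phase_ma1(1.25, 0.5)
print(theta, sigma2)   # theta = 0.5, sigma2 = 1 (up to rounding); the rejected factor is theta = 2
```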
In order to ensure that the sampled CARMA process is invertible, we need to verify that |\eta(\zeta_i)| will be strictly less than 1 for all i = 1, \dots, p-q-1 and sufficiently small \Delta.

Proposition 3.1.2. The coefficients \eta(\zeta_i) in Eq. (3.1.11) are uniquely determined by

\eta(\zeta_i) = \zeta_i - 1 - \sqrt{(\zeta_i - 1)^2 - 1} + o(1), \quad i = 1, \dots, p-q-1,

and we have that |\eta(\zeta_i)| < 1 for sufficiently small \Delta.

Proof. It follows from Proposition 3.5.1 that \zeta_i \in (2, \infty) for all i = 1, \dots, p-q-1. This yields \zeta_i - 1 + \sqrt{(\zeta_i - 1)^2 - 1} > 1 for all i and hence we have to take

\eta(\zeta_i) = \zeta_i - 1 - \sqrt{(\zeta_i - 1)^2 - 1} + o(1), \quad i = 1, \dots, p-q-1.
Under Assumption 1 the sampled process Y^\Delta admits the Wold representation

Y_n^\Delta = \sum_{j=0}^{\infty} \psi_j^\Delta Z_{n-j}^\Delta, \quad n \in \mathbb{Z}, \qquad (3.2.1)

where \sum_{j=0}^{\infty} (\psi_j^\Delta)^2 < \infty. Moreover, Eq. (3.2.1) is the causal representation of Eq. (3.1.8), and it has been shown in Chapter 2 that for every causal and invertible CARMA(p, q) process, as \Delta \to 0,

\frac{\sigma_\Delta}{\sqrt{\Delta}}\,\psi_{\lfloor t/\Delta \rfloor}^\Delta \to g(t), \quad t \ge 0, \qquad (3.2.2)

where g is the kernel in the moving average representation (3.1.4). Given the availability of classical time-series methods to estimate \{\psi_j^\Delta\}_{j\in\mathbb{N}} and \sigma_\Delta^2, and the flexibility of CARMA processes, we argue that this result can be applied to more general continuous-time moving average processes.
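For the CAR(1) case everything in (3.2.2) is explicit and the normalization can be checked directly: the sampled process is an AR(1) with \psi_j^\Delta = e^{\lambda j\Delta} and \sigma_\Delta^2 = \sigma^2(e^{2\lambda\Delta}-1)/(2\lambda), standard facts for the Ornstein–Uhlenbeck process. A sketch with illustrative parameter values:

```python
import numpy as np

sigma, lam, t = 1.3, -0.8, 1.0          # illustrative CAR(1) parameters
g = lambda s: sigma * np.exp(lam * s)   # CAR(1) kernel

for delta in [0.1, 0.01, 0.001]:
    j = int(t / delta)
    psi_j = np.exp(lam * delta * j)     # Wold coefficient of the sampled AR(1)
    sigma_delta = sigma * np.sqrt((np.exp(2 * lam * delta) - 1) / (2 * lam))
    est = sigma_delta * psi_j / np.sqrt(delta)
    print(delta, est, g(t))             # est approaches g(t) as delta shrinks
```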
Given Eqs. (3.2.1) and (3.2.2), it is natural to investigate whether the quantity

\hat{L}_n^\Delta := \frac{\sqrt{\Delta}}{\sigma_\Delta}\,Z_n^\Delta, \quad n \in \mathbb{Z},

approximates the increments of the driving Lévy process in the sense that for every fixed t \in (0, \infty),

\sum_{i=1}^{\lfloor t/\Delta \rfloor} \hat{L}_i^\Delta \xrightarrow{L^2} L_t, \quad \Delta \to 0. \qquad (3.2.3)

As usual, convergence in L^2 implies also convergence in probability and in distribution.
The first results on retrieving the increments of L were given in Brockwell et al. (2011), and were generalized to the multivariate case by Brockwell and Schlemm (2011). The essential limitation of this parametric method is that it might not be robust with respect to model misspecification. More precisely, the fact that a CARMA(p, q) process is (p-q-1)-times differentiable (see Proposition 3.32 of Marquardt and Stelzer (2007)) is essential for the procedure to work (cf. Theorem 4.3 of Brockwell and Schlemm (2011)). However, if the underlying process is instead CARMA(p', q') with p' - q' < p - q, then some of the necessary derivatives no longer exist. In contrast to the aforementioned procedure, in the method we propose there is no need to specify the autoregressive and moving average orders p and q in advance.
Our main theorem is the following:

Theorem 3.2.1. Let Y be a finite-variance CARMA(p, q) process and Z^\Delta the noise on the RHS of the sampled Eq. (3.1.8). Moreover, let Assumption 1 hold and define \hat{L}_n^\Delta := (\sqrt{\Delta}/\sigma_\Delta)\,Z_n^\Delta. Then, as \Delta \to 0,

\sum_{i=1}^{\lfloor t/\Delta \rfloor} \hat{L}_i^\Delta \xrightarrow{L^2} L_t, \quad t \in (0, \infty), \qquad (3.2.4)

if and only if the roots of the moving average polynomial b(\cdot) on the RHS of the CARMA Eq. (3.1.1) have negative real parts, i.e. if and only if the CARMA process is invertible.
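The theorem can be illustrated in the simplest invertible case, a Gaussian CAR(1) (Ornstein–Uhlenbeck) process, for which the sampled process satisfies an exact AR(1) equation. The sketch below (all parameter values illustrative) simulates the process on a fine grid, inverts the sampled AR(1) to obtain \hat{L}_n^\Delta = (\sqrt{\Delta}/\sigma_\Delta)Z_n^\Delta, and compares the partial sums with the underlying Brownian path:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, sigma, T = -1.0, 1.0, 1.0
delta, m = 1e-2, 50                     # sampling step, substeps per step
dt = delta / m
N = int(T / delta) * m

# Fine-grid simulation of dY = lam*Y dt + sigma dW (keeping the driving noise)
dW = rng.normal(0.0, np.sqrt(dt), N)
Y_fine = np.zeros(N + 1)
for k in range(N):
    Y_fine[k + 1] = np.exp(lam * dt) * Y_fine[k] + sigma * dW[k]

Y = Y_fine[::m]                                    # sampled process Y(n*delta)
W = np.concatenate(([0.0], np.cumsum(dW)))[::m]    # Brownian motion at the grid

# Invert the sampled AR(1): Z_n = Y_n - e^{lam*delta} Y_{n-1}, then rescale
Z = Y[1:] - np.exp(lam * delta) * Y[:-1]
sigma_delta = sigma * np.sqrt((np.exp(2 * lam * delta) - 1) / (2 * lam))
L_hat = np.sqrt(delta) / sigma_delta * Z

err = np.max(np.abs(np.cumsum(L_hat) - W[1:]))
print(err)   # small, and decreasing as delta -> 0
```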
Proof. Under Assumption 1 (ii) and using Proposition 3.1.2, the sampled ARMA equation (3.1.8) is invertible. The noise on the RHS of Eq. (3.1.8) is then obtained using the classical inversion formula

Z_n^\Delta = \frac{\varphi^\Delta(B)}{\theta^\Delta(B)}\,Y_n^\Delta, \quad n \in \mathbb{Z},

where B is the usual backshift operator. Let us consider the stationary continuous-time process

Z_t^\Delta := \frac{\varphi^\Delta(B_\Delta)}{\theta^\Delta(B_\Delta)}\,Y_t = \sum_{i=0}^{\infty} \pi_i^\Delta \int_{-\infty}^{t-i\Delta} g(t - i\Delta - s)\,dL_s, \quad t \in \mathbb{R}, \qquad (3.2.5)

where B_\Delta denotes the continuous-time shift B_\Delta Y_t := Y_{t-\Delta} and (\pi_i^\Delta)_{i\in\mathbb{N}_0} are the coefficients of the power series expansion of \varphi^\Delta(z)/\theta^\Delta(z).
Exchanging the sum and the integral signs in (3.2.5), and using the fact that g(\cdot) = 0 for negative arguments, we have that Z^\Delta is a continuous-time moving average process

Z_t^\Delta = \int_{\mathbb{R}} g^\Delta(t-s)\,dL_s, \quad t \in \mathbb{R}, \qquad g^\Delta(t) := \sum_{i=0}^{\infty} \pi_i^\Delta g(t - i\Delta), \quad t \in \mathbb{R}, \; \Delta > 0.

Since L_{n\Delta} = \int_{\mathbb{R}} \sum_{j=1}^{n} \mathbf{1}_{(0,\Delta]}(j\Delta - s)\,dL_s, we obtain

\sum_{j=1}^{n} \hat{L}_j^\Delta - L_{n\Delta} = \int_{\mathbb{R}} \sum_{j=1}^{n} \left[ \frac{\sqrt{\Delta}}{\sigma_\Delta}\,g^\Delta(j\Delta - s) - \mathbf{1}_{(0,\Delta]}(j\Delta - s) \right] dL_s =: \int_{\mathbb{R}} h_n^\Delta(n\Delta - s)\,dL_s, \quad s \in \mathbb{R}, \qquad (3.2.6)

and the stochastic integral in Eq. (3.2.6) w.r.t. L is still in the L^2 sense. It is a standard result, cf. (Gikhman and Skorokhod 2004, Ch. IV, §4), that the variance of the moving average process in Eq. (3.2.6) is given by

E\left[ \sum_{j=1}^{n} \hat{L}_j^\Delta - L_{n\Delta} \right]^2 = \int_{\mathbb{R}} \left[ h_n^\Delta(n\Delta - s) \right]^2 ds = \| h_n^\Delta(\cdot) \|_{L^2}^2. \qquad (3.2.7)
By Plancherel's theorem, the L^2 norm in (3.2.7) can be computed in the frequency domain. The Fourier transform of h_n^\Delta factorizes as F\{h_n^\Delta\}(\omega) = (h^{\Delta,1}(\omega) - h^{\Delta,2}(\omega))\,h_n^{\Delta,3}(\omega), where

h^{\Delta,1}(\omega) := \frac{\sqrt{\Delta}}{\sigma_\Delta}\,\frac{\varphi^\Delta(e^{-i\omega\Delta})}{\theta^\Delta(e^{-i\omega\Delta})}\,\frac{\sigma\,b(i\omega)}{a(i\omega)}, \qquad h^{\Delta,2}(\omega) := \frac{1 - e^{-i\omega\Delta}}{i\omega}, \qquad h_n^{\Delta,3}(\omega) := \sum_{j=1}^{n} e^{-i\omega(n-j)\Delta}, \quad \omega \in \mathbb{R},

with \varphi^\Delta(e^{-i\omega\Delta}) = \prod_{j=1}^{p} (1 - e^{(\lambda_j - i\omega)\Delta}). Hence

\mathrm{var}\left( \sum_{j=1}^{n} \hat{L}_j^\Delta - L_{n\Delta} \right) = \frac{1}{2\pi} \int_{\mathbb{R}} \left| F\{h_n^\Delta(\cdot)\}(\omega) \right|^2 d\omega = \frac{1}{2\pi} \int_{\mathbb{R}} \left[ \left| h^{\Delta,1} h_n^{\Delta,3}(\omega) \right|^2 + \left| h^{\Delta,2} h_n^{\Delta,3}(\omega) \right|^2 - 2\Re\!\left( h^{\Delta,1}\overline{h^{\Delta,2}}(\omega)\,\left| h_n^{\Delta,3}(\omega) \right|^2 \right) \right] d\omega. \qquad (3.2.8)

It is easy to see that the first two integrals in Eq. (3.2.8) are, respectively, the variances of \sum_{j=1}^{n} \hat{L}_j^\Delta and of L_{n\Delta}, both equal to n\Delta. Setting n := \lfloor t/\Delta \rfloor yields, for fixed positive t, as \Delta \to 0,

\mathrm{var}\left( \sum_{i=1}^{\lfloor t/\Delta \rfloor} \hat{L}_i^\Delta - L_{\lfloor t/\Delta \rfloor\Delta} \right) = 2\Delta\lfloor t/\Delta \rfloor \left( 1 - \frac{1}{2\pi\Delta\lfloor t/\Delta \rfloor} \int_{\mathbb{R}} \frac{1 - \cos(\lfloor t/\Delta \rfloor\Delta\omega)}{1 - \cos(\Delta\omega)}\,\Re\!\left( h^{\Delta,1}(\omega)\overline{h^{\Delta,2}(\omega)} \right) d\omega \right) = 2t(1 + o(1)) \left( 1 - \frac{1 + o(1)}{2\pi t} \int_{\mathbb{R}} \frac{1 - \cos(\lfloor t/\Delta \rfloor\Delta\omega)}{1 - \cos(\Delta\omega)}\,\frac{\Re\!\left( h^{\Delta,1}(\omega)\overline{h^{\Delta,2}(\omega)} \right)}{\Delta} \,\Delta\, d\omega \right). \qquad (3.2.9)

Now, Lemma 3.5.2 shows that the integrand in Eq. (3.2.9) converges pointwise, for every \omega \in \mathbb{R}\setminus\{0\}, to 2(1 - \cos(t\omega))/\omega^2 as \Delta \to 0. Since, for sufficiently small \Delta, the integrand is dominated by an integrable function (see Lemma 3.5.3), we can apply Lebesgue's Dominated Convergence Theorem and deduce that the integral term in Eq. (3.2.9) converges, as \Delta \to 0, to

\frac{1}{2\pi t} \int_{\mathbb{R}} \frac{2(1 - \cos(t\omega))}{\omega^2}\,d\omega = \frac{2}{\pi} \int_{0}^{\infty} \frac{1 - \cos(\omega)}{\omega^2}\,d\omega = 1,

so that the variance in Eq. (3.2.9) tends to zero. This proves Eq. (3.2.9) and concludes the proof of the if-statement.
Suppose now that b(\cdot) has zeroes with positive real part, and let J denote the corresponding set of indices. Then the minimum-phase factor \theta^\Delta no longer matches the (noninvertible) moving average polynomial of the CARMA equation: repeating the computation above, the factor \Re(h^{\Delta,1}\overline{h^{\Delta,2}}) acquires an additional multiplicative term, and the pointwise limit of the integrand becomes

\frac{2(1 - \cos(t\omega))}{\omega^2}\left( 1 + \Re(D(\omega)) \right), \quad \omega \in \mathbb{R}, \qquad (3.2.10)

where D(\omega) := \prod_{j\in J} \frac{\mu_j - i\omega}{\mu_j + i\omega} - 1. The limit of the integral term in (3.2.9) then becomes

\frac{1}{2\pi t} \int_{\mathbb{R}} \frac{2(1 - \cos(t\omega))}{\omega^2}\left( 1 + \Re(D(\omega)) \right) d\omega = 1 + \frac{1}{2\pi t} \int_{\mathbb{R}} \frac{2(1 - \cos(t\omega))}{\omega^2}\,\Re(D(\omega))\,d\omega,

and one shows that

\frac{1}{2\pi t} \int_{\mathbb{R}} \frac{2(1 - \cos(t\omega))}{\omega^2}\left( 1 + \Re(D(\omega)) \right) d\omega < 1,

which, in turn, shows that the convergence result (3.2.4) cannot hold.
Remark 3.2.2. (i) It is an easy consequence of the triangle and Hölder inequalities that, if the recovery result (3.2.4) holds, then also

\left( \sum_{i=1}^{\lfloor t/\Delta \rfloor} \hat{L}_i^\Delta, \; \sum_{j=\lfloor t/\Delta \rfloor + 1}^{\lfloor s/\Delta \rfloor} \hat{L}_j^\Delta \right) \xrightarrow{L^2} (L_t, \; L_s - L_t), \quad t, s \in (0, \infty), \; t \le s,

is valid.

(ii) Minor modifications of the proof above show that the recovery result in Eq. (3.2.4) remains valid if we drop the assumption of causality (i.e. Assumption 1 (i)) and suppose instead only \Re(\lambda_j) \ne 0 for every j = 1, \dots, p. Hence, invertibility of the CARMA process is necessary for the noise recovery result to hold, whereas causality is not. In the noncausal case, the obtained white noise process will not be the same as in the Wold representation (3.2.1).

(iii) The necessity and sufficiency of the invertibility assumption descends directly from the fact that we always choose the minimum-phase spectral factor, as pointed out in Remark 3.1.1.
(iv) The proof suggests that this procedure should work in a much more general framework. Let I^\Delta(\cdot) denote the inversion filter in Eq. (3.2.5) and \psi^\Delta := (\psi_i^\Delta)_{i\in\mathbb{N}} the coefficients in the Wold representation (3.2.1). Then the proof of Theorem 3.2.1 essentially needs, apart from the rather technical Lemma 3.5.3, that, as \Delta \to 0,

\frac{\int_{\mathbb{R}} g(s)\,e^{-i\omega s}\,ds}{\sigma_\Delta\sqrt{\Delta}\,\sum_{k=0}^{\infty} \psi_k^\Delta e^{-i\omega k\Delta}} \to 1, \quad \omega \in \mathbb{R}, \qquad (3.2.11)

provided that the function \sum_{k=0}^{\infty} \psi_k^\Delta z^k does not have any zero inside the unit circle. In other words, we need that the discrete Fourier transform in the denominator of (3.2.11) converges to the Fourier transform in the numerator; this can be easily related to the kernel estimation result in (3.2.2). Given the peculiar structure of CARMA processes, this relationship can be calculated explicitly, but the results should hold true for continuous-time moving average processes with more general kernels, too.
We illustrate the interplay between invertibility and noise recovery with an explicit example, namely a CARMA(2, q) process with q \in \{0, 1\}, distinct autoregressive roots \lambda_1, \lambda_2 and, for q = 1, moving average polynomial b(z) = z + b with the single root \mu_1 = -b. In this case the asymptotic quantities in (3.1.9)–(3.1.11) are explicit: as \Delta \to 0,

\theta^\Delta(z) = \begin{cases} 1 + (2 - \sqrt{3} + o(1))\,z, & q = 0, \\ 1 - \xi_1^\Delta z, \quad \xi_1^\Delta = 1 - \mathrm{sgn}(b)\,b\Delta + o(\Delta), & q = 1, \end{cases} \qquad \sigma_\Delta^2 = \begin{cases} \sigma^2\Delta^3\,(2 + \sqrt{3})/6 + o(\Delta^3), & q = 0, \\ \sigma^2\Delta + o(\Delta), & q = 1. \end{cases}

Inverting the sampled ARMA equation as in (3.2.5),

Z_n^\Delta = \frac{\varphi^\Delta(B)}{\theta^\Delta(B)}\,Y_n^\Delta = \sum_{i=0}^{\infty} \kappa^i (1 - e^{\lambda_1\Delta}B)(1 - e^{\lambda_2\Delta}B)\,Y_{n-i}^\Delta, \qquad (3.2.12)

where \kappa is defined through 1/\theta^\Delta(z) = \sum_{i\ge 0} \kappa^i z^i, i.e. \kappa = \sqrt{3} - 2 + o(1) for q = 0 and \kappa = \xi_1^\Delta for q = 1. Splitting the moving average representation (3.1.4) at the sampling times, one computes the cross-covariances with the increments \Delta L_n := L_{n\Delta} - L_{(n-1)\Delta}:

E[Z_n^\Delta\,\Delta L_{n-j}] = \begin{cases} 0, & j < 0, \\ \int_0^\Delta g(s)\,ds, & j = 0, \\ \kappa^{j-1} \int_0^\Delta \left[ g(\Delta + s) - (e^{\lambda_1\Delta} + e^{\lambda_2\Delta})\,g(s) \right] ds\,(1 + o(1)), & j > 0. \end{cases} \qquad (3.2.13)

Therefore

E\left[ \sum_{i=1}^{\lfloor t/\Delta \rfloor} (\hat{L}_i^\Delta - \Delta L_i) \right]^2 = 2\Delta\lfloor t/\Delta \rfloor - 2\frac{\sqrt{\Delta}}{\sigma_\Delta} \left[ \lfloor t/\Delta \rfloor \int_0^\Delta g(s)\,ds + \frac{\kappa^{\lfloor t/\Delta \rfloor} + (1-\kappa)\lfloor t/\Delta \rfloor - 1}{(1-\kappa)^2} \int_0^\Delta \left[ g(\Delta + s) - (e^{\lambda_1\Delta} + e^{\lambda_2\Delta})\,g(s) \right] ds\,(1 + o(1)) \right],

where the last equality is obtained using (3.2.13) and the identity, valid for every \kappa \ne 1,

\sum_{i=1}^{n} \sum_{j=1}^{i-1} \kappa^{j-1} = \frac{\kappa^n + (1-\kappa)n - 1}{(1-\kappa)^2}, \quad n \in \mathbb{N}.

Computing the asymptotic expansion of these terms as \Delta \to 0, using the explicit formulas for the kernel function g, yields

E\left[ \sum_{i=1}^{\lfloor t/\Delta \rfloor} (\hat{L}_i^\Delta - \Delta L_i) \right]^2 = \begin{cases} o(1), & q = 0, \\ 2\left( e^{-\mathrm{sgn}(b)bt} + \mathrm{sgn}(b)bt - 1 \right)(\mathrm{sgn}(b) - 1)/b + o(1), & q = 1, \end{cases}

i.e. (3.2.4) holds always for q = 0, whereas for q = 1 it holds if and only if b > 0. If b < 0, the error in approximating the driving Lévy process by inversion of the discretized process grows as 4t for large t.
Given a CARMA process Y and a grid of spacing \Delta, define the approximating Riemann sum process

Y_n^{\Delta,h} := \sum_{j=0}^{\infty} g((j+h)\Delta)\,\left( L_{(n-j)\Delta} - L_{(n-j-1)\Delta} \right), \quad n \in \mathbb{Z}, \qquad (3.3.1)

for some h \in [0, 1], where g is the kernel function as in (3.1.6). In other terms, we have for a CARMA process, under the assumption of invertibility and causality, that the discrete-time quantities appearing in the Wold representation approximate the quantities in Eq. (3.3.1) when \Delta \to 0. The transfer function of Eq. (3.3.1) is then defined as

\vartheta_h^\Delta(\omega) := \sum_{j=0}^{\infty} g((j+h)\Delta)\,e^{-ij\omega}, \quad \omega \in \mathbb{R}. \qquad (3.3.2)
It is well known that a CMA process can be defined (for a fixed time point t) as the L^2 limit of Eq. (3.3.1); this fact is naturally employed to easily simulate a CMA process, when all the relevant quantities are known a priori. Therefore we will call the process Y^{\Delta,h} the approximating Riemann sum of Eq. (3.1.4), and h the so-called rule of the approximating sum; e.g. if h = 1/2, we have the popular midpoint rule.

In order to give an answer to our conjecture, we will investigate properties of the approximating Riemann sum Y^{\Delta,h} of a CARMA process and compare its asymptotic autocovariance structure with that of the sampled CARMA process Y^\Delta. This will yield more insight into the role of h for the behavior of Y^{\Delta,h} as a process.

We start with a well known property of approximating sums.

Proposition 3.3.1. Let g be in L^2 and Riemann-integrable. Then, for every h \in [0, 1], as \Delta \to 0:

(i) Y_k^{\Delta,h} - Y_{k\Delta} \xrightarrow{L^2} 0, for every k \in \mathbb{Z}.

(ii) Y_{\lfloor t/\Delta \rfloor}^{\Delta,h} \xrightarrow{L^2} Y_t, for every t \in \mathbb{R}.

Proof. This follows immediately from the hypotheses made on g and the definition of L^2 integrals.

This result essentially says only that approximating sums converge to Y_t for every fixed t. However, for a CARMA(p, q) process we have that the approximating Riemann sum process satisfies for every h and \Delta an ARMA(p, p-1) equation (see Proposition 3.3.2 below), meaning that there might exist a process whose autocorrelation structure is the same as that of the approximating sum. Given that the AR filter in this representation is the same as in Eq. (3.1.8), it is reasonable to investigate whether \varphi^\Delta(B)Y^\Delta and \varphi^\Delta(B)Y^{\Delta,h} have, as \Delta \to 0, the same asymptotic autocovariance structure, which can be expected but is not granted by Proposition 3.3.1.
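The influence of the rule h is already visible at the level of second moments: the variance of Y_n^{\Delta,h} equals \Delta \sum_j g((j+h)\Delta)^2, a Riemann sum for \mathrm{var}(Y_t) = \int_0^\infty g(s)^2\,ds. For the CAR(1) kernel g(t) = e^{\lambda t} (illustrative values below) the midpoint rule is markedly more accurate than the endpoint rules:

```python
import numpy as np

lam, delta = -1.0, 0.1
exact = -1.0 / (2.0 * lam)              # int_0^inf exp(2*lam*s) ds = 0.5

errors = {}
for h in (0.0, 0.5, 1.0):
    j = np.arange(2000)                 # truncated sum; the kernel decays fast
    approx = delta * np.sum(np.exp(2 * lam * (j + h) * delta))
    errors[h] = abs(approx - exact)
    print(h, errors[h])                 # the midpoint rule h = 0.5 is closest
```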
The following proposition gives the ARMA(p, p-1) representation for the approximating Riemann sum.

Proposition 3.3.2. Let Y be a CARMA(p, q) process, satisfying Assumption 1, and furthermore suppose that the roots of the autoregressive polynomial a(\cdot) are distinct. Then the approximating Riemann sum process Y^{\Delta,h} of Y defined by Eq. (3.3.1) satisfies, for every h \in [0, 1], the ARMA(p, p-1) equation

\varphi^\Delta(B)\,Y_n^{\Delta,h} = \vartheta^{\Delta,h}(B)\,\Delta L_n, \quad n \in \mathbb{Z}, \qquad (3.3.3)

where \Delta L_n := L_{n\Delta} - L_{(n-1)\Delta},

\vartheta^{\Delta,h}(z) := \vartheta_0^{\Delta,h} - \vartheta_1^{\Delta,h} z + \dots + (-1)^{p-1}\,\vartheta_{p-1}^{\Delta,h} z^{p-1}

and

\vartheta_k^{\Delta,h} := \sigma \sum_{l=1}^{p} \frac{b(\lambda_l)}{a'(\lambda_l)}\,e^{h\lambda_l\Delta} \sum e^{(\lambda_{j_1} + \dots + \lambda_{j_k})\Delta}, \quad k = 0, \dots, p-1, \qquad (3.3.4)

where the right-hand sum is defined to be 1 for k = 0 and is evaluated over all possible subsets \{j_1, \dots, j_k\} of \{1, \dots, p\}\setminus\{l\} having cardinality k, for k > 0.
Proof. Write \varphi^\Delta(z) = \prod_{j=1}^{p} (1 - e^{\lambda_j\Delta} z) = \sum_{j=0}^{p} \varphi_j^\Delta z^j. Using (3.3.1) and (3.1.6),

\varphi^\Delta(B)\,Y_n^{\Delta,h} = \sum_{j=0}^{p} \varphi_j^\Delta Y_{n-j}^{\Delta,h} = \sigma\,b^\top \sum_{k=0}^{p-1} \left( \sum_{j=0}^{k} \varphi_j^\Delta e^{A(k-j)\Delta} \right) e^{Ah\Delta} e_p\,\Delta L_{n-k} + \sigma\,b^\top \sum_{k=p}^{\infty} \left( \sum_{j=0}^{p} \varphi_j^\Delta e^{A(k-j)\Delta} \right) e^{Ah\Delta} e_p\,\Delta L_{n-k}.

By virtue of the Cayley–Hamilton theorem (cf. also (Brockwell and Lindner 2009, proof of Lemma 2.1)), we have that

\sum_{j=0}^{p} \varphi_j^\Delta e^{-Aj\Delta} = 0,

so that \sum_{j=0}^{p} \varphi_j^\Delta e^{A(k-j)\Delta} = e^{Ak\Delta} \sum_{j=0}^{p} \varphi_j^\Delta e^{-Aj\Delta} = 0 for every k \ge p, and only the first p coefficients survive. The explicit expression (3.3.4) for the surviving coefficients \sigma\,b^\top \sum_{j=0}^{k} \varphi_j^\Delta e^{A(k-j+h)\Delta} e_p then follows by comparison with (Fasen and Fuchs 2013, Lemma 2.1(i) and Eq. (4.4)).
Remark 3.3.3. (i) The approximating Riemann sum of a causal CARMA process is automatically a causal ARMA process. On the other hand, even if the CARMA process is invertible in the sense of Definition 2.1.1, the roots of \vartheta^{\Delta,h}(\cdot) may lie inside the unit circle, causing Y^{\Delta,h} to be noninvertible.

(ii) It is easy to see that \vartheta_0^{\Delta,h} = g(h\Delta). Then for p - q \ge 2, if h = 0, we have that \vartheta_0^{\Delta,0} = 0, giving that \vartheta^{\Delta,0}(0) = 0. This is never the case for \theta^\Delta(\cdot), as one can see from (3.1.9) and Proposition 3.1.2. Moreover, it is possible to show that for h = 1 and p - q \ge 2, the coefficient \vartheta_{p-1}^{\Delta,1} is 0, implying that Eq. (3.3.3) is actually an ARMA(p, p-2) equation. For those values of h, the ARMA equations solved by the approximating Riemann sums will never have the same asymptotic form as Eq. (3.1.8): therefore we shall restrict ourselves to the case h \in (0, 1) from now on.

(iii) The assumption of distinct autoregressive roots might seem restrictive, but the omitted cases can be obtained by letting distinct roots tend to each other. This would, of course, change the coefficients of the MA polynomial in Eq. (3.3.4). Moreover, as shown in Brockwell et al. (2012a,b), the multiplicity of the zeroes does not matter when L^2 asymptotic relationships as \Delta \to 0 are considered.
Due to the complexity of retrieving the roots of a polynomial of arbitrary order from its coefficients, finding the asymptotic expression of \vartheta^{\Delta,h}(\cdot) for arbitrary p is a daunting task. Nonetheless, using Proposition 3.3.2, it is not difficult to give an answer for processes with p \le 3, which are the most used in practice.
Proposition 3.3.4. Let Y^{\Delta,h} be the approximating Riemann sum for a CARMA(p, q) process, with p \le 3, let Assumption 1 hold and the roots of a(\cdot) be distinct. If p = 1, the process Y^{\Delta,h} is an AR(1) process driven by Z_n^\Delta = \sigma e^{h\lambda_1\Delta}\,\Delta L_n, for every \Delta > 0. If p = 2, 3, we have

\varphi^\Delta(B)\,Y_n^{\Delta,h} = \sigma\,\frac{(h\Delta)^{p-q-1}}{(p-q-1)!}\,\prod_{i=1}^{q} \left( 1 - (1 - \mu_i\Delta + o(\Delta))B \right) \prod_{i=1}^{p-q-1} \left( 1 - \vartheta_{p-q,i}(h)B \right) \Delta L_n, \qquad (3.3.5)

where, as \Delta \to 0, \vartheta_{2,1}(h) = (h-1)/h + o(1), while \vartheta_{3,1}(h) and \vartheta_{3,2}(h) are the two roots of a quadratic polynomial whose coefficients depend only on h. Equivalently,

\vartheta^{\Delta,h}(z) = \vartheta_0^{\Delta,h} \prod_{i=1}^{q} \left( 1 - (1 - \mu_i\Delta + o(\Delta))z \right) \prod_{i=1}^{p-q-1} \left( 1 - \vartheta_{p-q,i}(h)z \right), \quad z \in \mathbb{C}.

Corollary 3.3.5. The ARMA equations (3.1.8) and (3.3.3) have, as \Delta \to 0, the same asymptotic second order structure

for every h \in (0, 1), if p - q = 1,
for h = (3 - \sqrt{3})/6, if p - q = 2,
for h = (15 \pm \sqrt{225 - 30\sqrt{30}})/30, if p - q = 3.

Moreover, the MA polynomials in Eqs. (3.1.9) and (3.3.5) coincide if and only if the CARMA process is invertible and |\vartheta_{p-q,i}(h)| < 1, that is

for every h \in (0, 1), if p - q = 1,
for h = (3 + \sqrt{3})/6, if p - q = 2,

and for no h \in (0, 1) if p - q = 3.
Proof. For p - q = 2, the MA polynomials 1 + \eta(\zeta_1)z and 1 - \vartheta_{2,1}(h)z, with \eta(\zeta_1) = 2 - \sqrt{3} + o(1) and \vartheta_{2,1}(h) = (h-1)/h + o(1), generate the same autocovariance function if and only if their roots are equal or reciprocal; together with the variance matching imposed by Eq. (3.1.10) this yields h = (3 - \sqrt{3})/6, whereas equality of the polynomials themselves requires (1-h)/h = 2 - \sqrt{3}, i.e. h = (3 + \sqrt{3})/6. For p - q = 3, Eq. (3.1.10) yields an analogous system of two equations in h, involving \eta(\zeta_{1,2}) = \left( 13 - \sqrt{105} \mp \sqrt{270 - 26\sqrt{105}} \right)/2 + o(1) and \vartheta_{3,1}(h), \vartheta_{3,2}(h) as in Proposition 3.3.4. Solving that system for h gives the claimed values.

To prove the second part of the Corollary, we start by observing that, under the assumption of an invertible CARMA process, the coefficients depending on \mu_i will coincide automatically, if any. Then it remains to check whether the coefficients depending on h can be smaller than 1 in absolute value. The cases p - q = 1, 2 follow immediately. Moreover, to see that there is no such h for p - q = 3, it is enough to notice that, for h \in (0, 1), |\vartheta_{3,1}(h)| > 1 and 0 < |\vartheta_{3,2}(h)| < 1, i.e. they never satisfy the sought requirement for h \in (0, 1).
Corollary 3.3.5 can be interpreted as a criterion to choose an h such that the Riemann
sum approximates the continuoustime process Y in a stronger sense than the simple
convergence as a random variable for every fixed t. The second part of the corollary says
that there is an even more restrictive way to choose h such that Eqs. (3.1.9) and (3.3.5)
will coincide. If the two processes satisfy asymptotically the same causal and invertible
ARMA equation, they will have the same coefficients in their Wold representations as
0, which are given in the case of the approximating Riemann sum explicitly by
definition in (3.3.1).
In the light of (3.2.2) and Theorem 3.2.1, the sampled CARMA process will asymptotically behave like its approximating Riemann sum process for some specific h = \bar{h}, which might not even exist, as in the case p = 3, q = 0. However, if such an \bar{h} exists, the kernel estimators (3.2.2) can be improved to

\frac{\sigma_\Delta}{\sqrt{\Delta}}\,\psi_{\lfloor t/\Delta \rfloor}^\Delta = g\!\left( (\lfloor t/\Delta \rfloor + \bar{h})\Delta \right) + o(\Delta), \quad t \in \mathbb{R}.
extremal cases h = 0 and h = 1, the vertical sign the midpoint rule h = 0.5, and the
diamond and the square are the values given in Corollary 3.3.5, if any. The true kernel
function is then plotted with a solid, continuous line. For the sake of clarity, only the first
8 estimates are plotted.
For the CARMA(2, 1) process, the kernel estimation seems to follow a midpoint rule
(Figure 3.1, three panels: CARMA(2,1) process, CAR(3) process, CAR(2) process; each panel plots the estimated kernel values together with the true kernel g(t).)
Figure 3.1.: Kernel estimation for a sampling frequency of 2^2 samplings per unit of time, i.e. \Delta = 0.25. The diamond and the square symbols denote, where available, the values of h suggested by Corollary 3.3.5.
3.4. Conclusion
In this Chapter we have dealt with the asymptotic behaviour of two classes of discrete-time processes closely related to that of CARMA processes, when the sampling frequency tends to infinity.
(Figure 3.2, three panels: CAR(2) process, CARMA(2,1) process, CAR(3) process; each panel plots the estimated kernel values together with the true kernel g(t).)
Figure 3.2.: Kernel estimation for a sampling frequency of 2^6 samplings per unit of time, i.e. \Delta \approx 0.016. The diamond and the square symbols denote, where available, the values of h suggested by Corollary 3.3.5.
First, we studied the behaviour of the white noise appearing in the ARMA equation solved by the sampled sequence of a CARMA process of arbitrary order. Then we showed, under a necessary and sufficient identifiability condition, that the aforementioned white noise approximates the increments of the driving Lévy process of the continuous-time model. The proposed procedure is nonparametric in nature, and it can be applied without assuming any knowledge of the order of the process. Moreover, it is argued that such results should be extendable to CMA processes with more general kernel functions. This is left for future research.
The results in the first part of this Chapter, considered jointly with those in Brockwell et al. (2012a), show that the Wold representation of a sampled causal and invertible CARMA process behaves somewhat like a Riemann sum as \Delta \to 0. In the second part of this Chapter the converse is investigated, i.e. whether a Riemann sum approximating a causal CARMA process satisfies the same ARMA(p, p-1) equation as the process we want to approximate, as the spacing of the grid tends to zero. This is in particular important for simulations, where such approximating Riemann sums constitute a natural choice.
It has been shown that these processes satisfy an ARMA(p, p-1) equation, but in general they are not invertible. For p \le 3, the roots of the moving average polynomial of such discrete-time processes depend, apart from the roots of b(\cdot), also on the rule h. Moreover, in the case p = 3, q = 0, the Riemann sum is invertible for no choice of h, implying that the Riemann sum and the sampled process will never satisfy asymptotically
the same causal and invertible ARMA equation. Although only a finite number of cases
has been considered, it shows that in general a causal and invertible CARMA process
cannot be approximated by a Riemann sum, at least not in the sense of the second part
of Corollary 3.3.5. Further investigations on this matter are left for future research.
Proposition 3.5.1. For every n \in \mathbb{N} and x \ne 0, the coefficients \alpha_n(\cdot) in the expansion (3.1.12) are given by

\alpha_n(x) = \frac{1}{(2n+1)!}\,\frac{P_n(x)}{x^{n+1}}, \quad x \ne 0, \; n \in \mathbb{N}, \qquad (3.5.1)

where

P_n(x) = \sum_{j=0}^{n} x^{n-j} \sum_{k=j+1}^{n} \left\{ {2n+1 \atop 2k} \right\} (2k)!\,(-2)^{j+1-2k} \sum_{i=j}^{k} \left[ \binom{i+1}{j+1}\binom{2k}{2i+1} - \binom{i}{j+1}\binom{2k}{2i} \right]
+ \sum_{j=0}^{n} x^{n-j} \sum_{k=j}^{n} \left\{ {2n+1 \atop 2k+1} \right\} (2k+1)!\,(-2)^{j-2k} \sum_{i=j}^{k} \left[ \binom{i+1}{j+1}\binom{2k+1}{2i+1} - \binom{i}{j+1}\binom{2k+1}{2i} \right], \qquad (3.5.2)

with \left\{ {\cdot \atop \cdot} \right\} being the Stirling number of the second kind. Moreover, all the zeroes of \alpha_n(x) are real, distinct and greater than 2.

Proof. Using the definition of the hyperbolic functions, we can write

\frac{\sinh(z)}{\cosh(z) - 1 + x} = \frac{e^{2z} - 1}{e^{2z} + 1 + 2(x-1)e^z} =: f(z, x), \quad z \in \mathbb{C}, \; x \ne 0,

i.e. f(z, x) = g(e^z, x), where g(y, x) := (y^2 - 1)/(y^2 + 1 + 2(x-1)y). Then the nth derivative of the function f(\cdot, x) is, by the Faà di Bruno formula,

\frac{\partial^n}{\partial z^n} f(z, x) = \frac{\partial^n}{\partial z^n} g(e^z, x) = \sum_{k=1}^{n} \left\{ {n \atop k} \right\} \left. \frac{\partial^k}{\partial y^k} g(y, x) \right|_{y=e^z} e^{kz}, \quad z \in \mathbb{C}, \; x \ne 0,

where the coefficients \left\{ {n \atop k} \right\} are the Stirling numbers of the second kind. Using the previous formula and the definition of \alpha_n(x), for x \ne 0,

(2n+1)!\,\alpha_n(x) = \left. \frac{\partial^{2n+1}}{\partial z^{2n+1}} f(z, x) \right|_{z=0} = \sum_{k=1}^{2n+1} \left\{ {2n+1 \atop k} \right\} \left. \frac{\partial^k}{\partial y^k} g(y, x) \right|_{y=1}
= \sum_{k=1}^{2n+1} \left\{ {2n+1 \atop k} \right\} k!\,\frac{1}{2\pi i} \oint \frac{(z-1)(z+1)}{(z - a_2(x))(z - a_1(x))(z-1)^{k+1}}\,dz = \sum_{k=1}^{2n+1} \left\{ {2n+1 \atop k} \right\} k!\,\frac{1}{2\pi i} \oint \frac{z+1}{(z - a_2(x))(z - a_1(x))(z-1)^{k}}\,dz, \qquad (3.5.3)

where a_1(x), a_2(x) denote the zeroes of y \mapsto y^2 + 1 + 2(x-1)y and the contour encircles z = 1. Writing f_k(z, x) for the last integrand,

\frac{1}{2\pi i} \oint f_k(z, x)\,dz = \mathrm{Res}_{z=1} f_k(z, x) = -\mathrm{Res}_{z=a_1(x)} f_k(z, x) - \mathrm{Res}_{z=a_2(x)} f_k(z, x) - \mathrm{Res}_{z=\infty} f_k(z, x),

where the latter identity is obtained using Theorem 2, p. 25 of Mitrinović and Kečkić (1984). Moreover, since the difference of the orders of the polynomials in the denominator and in the numerator of f_k(z, x) is k + 1 > 1, the residue at infinity is always zero (Section 3.1.2.3, pp. 27–28, Mitrinović and Kečkić (1984)). For x \ne 0, 2, we have that z = a_1(x) and z = a_2(x) are simple poles, yielding

\frac{1}{2\pi i} \oint f_k(z, x)\,dz = -\frac{1 + a_1(x)}{(a_1(x) - 1)^k\,(a_2(x) - a_1(x))} + \frac{1 + a_2(x)}{(a_2(x) - 1)^k\,(a_2(x) - a_1(x))}, \qquad (3.5.4)

which can be expanded in powers of x by the Binomial theorem. Plugging Eq. (3.5.4) into Eq. (3.5.3), and splitting the outermost sum into odd and even k, Eq. (3.5.1) with P_n as in (3.5.2) is obtained by merging the resulting sums and rearranging the indices.

Using Eq. (3.5.2), we easily see that, for P_n(x) = p_0 + p_1 x + \dots + p_n x^n,

p_0 = \frac{(2n+1)!}{(-2)^n}, \qquad p_n = 1, \qquad (3.5.5)

i.e. P_n(x) has n, potentially complex, roots, and they cannot be zero. Moreover it is easy to verify that f(z, x) solves the mixed partial differential equation

\frac{\partial^2}{\partial z^2} f(z, x) = (x-1)\,\frac{\partial}{\partial x} f(z, x) + x(x-2)\,\frac{\partial^2}{\partial x^2} f(z, x), \quad z \in \mathbb{C}, \; x \ne 0, \qquad (3.5.6)

which, comparing the coefficients of the series expansion (3.1.12), yields the recursion

(2n+2)(2n+3)\,\alpha_{n+1}(x) = (x-1)\,\alpha_n'(x) + x(x-2)\,\alpha_n''(x), \qquad \alpha_0(x) = 1/x. \qquad (3.5.7)

We now prove by induction the claim regarding the roots being real, distinct and greater than 2. The cases \alpha_0(x) = 1/x and 6\alpha_1(x) = (x-3)/x^2 have respectively no and one zero, so the claim can only be partially verified; we therefore start from \alpha_2(x) = (30 - 15x + x^2)/(120x^3), whose zeros are \zeta_{2,1} = (15 - \sqrt{105})/2 \approx 2.37652 and \zeta_{2,2} = (15 + \sqrt{105})/2 \approx 12.6235, and they satisfy the claim. We now assume that the claim holds for \alpha_n(x), n \ge 2, with zeros 2 < \zeta_{n,1} < \zeta_{n,2} < \dots < \zeta_{n,n}.

The derivative of \alpha_n(x) is of the form Q_n(x)/x^{n+2}, where (2n+1)!\,Q_n(x) = x\,\frac{\partial}{\partial x} P_n(x) - (1+n)P_n(x). By Rolle's theorem, Q_n(x) has n-1 real roots \kappa_{n,i}, i = 1, \dots, n-1, such that 2 < \zeta_{n,1} < \kappa_{n,1} < \zeta_{n,2} < \kappa_{n,2} < \dots < \kappa_{n,n-1} < \zeta_{n,n}. Using the product rule, the recursion (3.5.7) and the value of the coefficients in Eq. (3.5.5), we get

P_{n+1}(x) = (x-1)\,Q_n(x) + (x-2)\left( x\,Q_n'(x) - (n+2)\,Q_n(x) \right). \qquad (3.5.8)
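The first nontrivial case can be checked numerically: for n = 2 the claim says that the zeros of P_2(x) = x^2 - 15x + 30 (equivalently, of \alpha_2) are real, distinct and greater than 2. A quick sketch:

```python
import numpy as np

# P_2(x) = x^2 - 15x + 30, so alpha_2(x) = P_2(x)/(5! * x^3)
zeros = np.sort(np.roots([1.0, -15.0, 30.0]))
print(zeros)   # approximately [2.37652, 12.62348] = (15 -/+ sqrt(105))/2
print(np.all(np.isreal(zeros)) and zeros[0] > 2 and zeros[1] > zeros[0])
```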
Lemma 3.5.2. Let Assumption 1 hold and let t \in (0, \infty). Then, for every \omega \in \mathbb{R}\setminus\{0\}, with h^{\Delta,1} and h^{\Delta,2} as in the proof of Theorem 3.2.1,

\lim_{\Delta \to 0} \frac{1 - \cos(\lfloor t/\Delta \rfloor\Delta\omega)}{1 - \cos(\Delta\omega)}\,\Re\!\left( h^{\Delta,1}(\omega)\overline{h^{\Delta,2}(\omega)} \right) = \frac{2 - 2\cos(t\omega)}{\omega^2}.

Proof. Write \theta^\Delta(z) = \prod_{j=1}^{p-q-1} (1 + \eta(\zeta_j)z) \prod_{j=1}^{q} (1 - \xi_j^\Delta z). For fixed \omega \ne 0, as \Delta \to 0,

1 - e^{(\lambda_j - i\omega)\Delta} = -\Delta(\lambda_j - i\omega)(1 + o(1)), \qquad 1 - \xi_j^\Delta e^{-i\omega\Delta} = -\Delta(\mu_j - i\omega)(1 + o(1)), \qquad \prod_{j=1}^{p-q-1} (1 + \eta(\zeta_j)e^{-i\omega\Delta}) = \prod_{j=1}^{p-q-1} (1 + \eta(\zeta_j))\,(1 + o(1)),

whence \prod_{j=1}^{p} (1 - e^{(\lambda_j - i\omega)\Delta}) = \Delta^p a(i\omega)(1 + o(1)) and \prod_{j=1}^{q} (1 - \xi_j^\Delta e^{-i\omega\Delta}) = \Delta^q b(i\omega)(1 + o(1)), so that

\frac{\varphi^\Delta(e^{-i\omega\Delta})}{\theta^\Delta(e^{-i\omega\Delta})}\,\frac{\sigma\,b(i\omega)}{a(i\omega)} = \frac{\Delta^{p-q}\,\sigma}{\prod_{j=1}^{p-q-1} (1 + \eta(\zeta_j))}\,(1 + o(1)).

Together with (3.1.10) this yields

h^{\Delta,1}(\omega) = \Delta\,\frac{\sqrt{(2(p-q)-1)!\,\prod_{j=1}^{p-q-1} \eta(\zeta_j)}}{\prod_{j=1}^{p-q-1} (1 + \eta(\zeta_j))}\,(1 + o(1)) = \Delta\,(1 + o(1)),

where the last equality follows from the identity (2(p-q)-1)!\,\prod_{j=1}^{p-q-1} \eta(\zeta_j) = \left( \prod_{j=1}^{p-q-1} (1 + \eta(\zeta_j)) \right)^2, cf. (Brockwell et al. 2012a, proof of Theorem 3.2). Since moreover \overline{h^{\Delta,2}(\omega)} = \sin(\Delta\omega)/\omega + i\,(1 - \cos(\Delta\omega))/\omega, \sin(\Delta\omega) = \Delta\omega(1 + o(1)) and 1 - \cos(\Delta\omega) = \tfrac{1}{2}\Delta^2\omega^2(1 + o(1)), we conclude

\frac{1 - \cos(\lfloor t/\Delta \rfloor\Delta\omega)}{1 - \cos(\Delta\omega)}\,\Re\!\left( h^{\Delta,1}(\omega)\overline{h^{\Delta,2}(\omega)} \right) = \frac{(1 - \cos(\lfloor t/\Delta \rfloor\Delta\omega))\,\Delta\cdot\Delta\omega/\omega}{\tfrac{1}{2}\Delta^2\omega^2}\,(1 + o(1)) \to \frac{2 - 2\cos(t\omega)}{\omega^2}.
Lemma 3.5.3. Suppose that t \in (0, \infty) and \Re(\mu_j) \ne 0 for all j = 1, \dots, q, and let the functions h^{\Delta,1}(\cdot), h^{\Delta,2}(\cdot) and h^{\Delta,3}_{\lfloor t/\Delta \rfloor}(\cdot) be defined as in the proof of Theorem 3.2.1. Then there is a C > 0 such that, for any \omega \in \mathbb{R} and any sufficiently small \Delta,

\left| 2\Re\!\left( h^{\Delta,1}\overline{h^{\Delta,2}}(\omega)\,\left| h^{\Delta,3}_{\lfloor t/\Delta \rfloor}(\omega) \right|^2 \right) \right| \le \left( \left( \tfrac{7}{2} \right)^{2p} 2^q + 1 \right) t^2\,\mathbf{1}_{(-1,1)}(\omega) + \frac{C}{\omega^2}\,\mathbf{1}_{\mathbb{R}\setminus(-1,1)}(\omega),

and the right-hand side is integrable.

Proof. By the triangle inequality,

\left| 2\Re\!\left( h^{\Delta,1}\overline{h^{\Delta,2}}(\omega)\,|h^{\Delta,3}_{\lfloor t/\Delta \rfloor}(\omega)|^2 \right) \right| \le \left| h^{\Delta,1} h^{\Delta,3}_{\lfloor t/\Delta \rfloor}(\omega) \right|^2 + \left| h^{\Delta,2} h^{\Delta,3}_{\lfloor t/\Delta \rfloor}(\omega) \right|^2 \qquad (3.5.9)

for any \omega \in \mathbb{R} and any \Delta. Let us first consider the second addend on the RHS of Eq. (3.5.9). We obtain |h^{\Delta,2} h^{\Delta,3}_{\lfloor t/\Delta \rfloor}(\omega)|^2 = 2(1 - \cos(\lfloor t/\Delta \rfloor\Delta\omega))/\omega^2, and since \lfloor t/\Delta \rfloor\Delta \le t holds, we can bound, for any \Delta, the latter function by t^2 on the interval (-1, 1) and by 4/\omega^2 on \mathbb{R}\setminus(-1, 1).

As to the first addend on the RHS of Eq. (3.5.9), we calculate

\left| h^{\Delta,1} h^{\Delta,3}_{\lfloor t/\Delta \rfloor}(\omega) \right|^2 = \frac{\Delta}{\sigma_\Delta^2}\,\frac{\prod_{j=1}^{p} |1 - e^{(\lambda_j - i\omega)\Delta}|^2}{|\theta^\Delta(e^{-i\omega\Delta})|^2}\,\frac{\sigma^2|b(i\omega)|^2}{|a(i\omega)|^2}\,\frac{1 - \cos(\lfloor t/\Delta \rfloor\Delta\omega)}{1 - \cos(\Delta\omega)}. \qquad (3.5.10)

Let now |\omega| < 1 and suppose that \Delta is sufficiently small, i.e. the following inequalities will be true for any |\omega| < 1 whenever \Delta is sufficiently small. Using |1 - e^z| \le \tfrac{7}{4}|z| for |z| < 1 (see, e.g., (Abramowitz and Stegun 1974, 4.2.38)) yields \prod_{j=1}^{p} |1 - e^{(\lambda_j - i\omega)\Delta}|^2 \le (7/4)^{2p}\,\Delta^{2p} \prod_{j=1}^{p} |\lambda_j - i\omega|^2. Writing \theta^\Delta(z) = \prod_{j=1}^{p-q-1} (1 + \eta(\zeta_j)z) \prod_{j=1}^{q} (1 - \xi_j^\Delta z) with \xi_j^\Delta = 1 - \mathrm{sgn}(\Re(\mu_j))\mu_j\Delta + o(\Delta) (see Chapter 2, Theorem 2.1), we have |1 - \xi_j^\Delta e^{-i\omega\Delta}|^2 \ge \tfrac{1}{2}\Delta^2\,|\mathrm{sgn}(\Re(\mu_j))\mu_j - i\omega|^2, so that \sigma^2|b(i\omega)|^2/\prod_{j=1}^{q} |1 - \xi_j^\Delta e^{-i\omega\Delta}|^2 \le 2^q\sigma^2\Delta^{-2q}, and since |\eta(\zeta_j)| < 1 (see Proposition 3.1.2) we also have |1 + \eta(\zeta_j)e^{-i\omega\Delta}| \ge \tfrac{1}{2}(1 + \eta(\zeta_j)) for all j, resulting in

\prod_{j=1}^{p-q-1} \frac{\eta(\zeta_j)}{|1 + \eta(\zeta_j)e^{-i\omega\Delta}|^2} \le 2^{2(p-q-1)} \prod_{j=1}^{p-q-1} \frac{\eta(\zeta_j)}{(1 + \eta(\zeta_j))^2} = \frac{2^{2(p-q-1)}}{(2(p-q)-1)!},

where the latter equality follows from (Brockwell et al. 2012a, proof of Theorem 3.2). All together, inserting (3.1.10), the RHS of Eq. (3.5.10) can be bounded for any |\omega| < 1 and any sufficiently small \Delta by (7/2)^{2p}\,2^q\,t^2.

It remains to bound the RHS of Eq. (3.5.10) also for |\omega| \ge 1. Hence, for the rest of the proof let us suppose |\omega| \ge 1, and in addition we assume again that \Delta is sufficiently small in order that all the following inequalities hold. Since the power transfer function satisfies \sigma^2|b(i\omega)|^2/|a(i\omega)|^2 \le \mathrm{const.}/(\omega^{2(p-q)} + 1) for any \omega \in \mathbb{R}, it suffices, after inserting (3.1.10) into (3.5.10), to show that

\frac{\Delta^{2 - 2(p-q)}\,\omega^2\,\prod_{j=1}^{p} |1 - e^{(\lambda_j - i\omega)\Delta}|^2}{(\omega^{2(p-q)} + 1) \prod_{j=1}^{q} |1 - \xi_j^\Delta e^{-i\omega\Delta}|^2}\,\frac{1 - \cos(\lfloor t/\Delta \rfloor\Delta\omega)}{1 - \cos(\Delta\omega)} \le C. \qquad (3.5.12)

We even show that Eq. (3.5.12) is true for any \omega \in \mathbb{R}. Using symmetry and periodicity arguments it is sufficient to prove Eq. (3.5.12) on the interval [0, 2\pi/\Delta], which we split into six subintervals I_1, \dots, I_6 determined by the points \min_{j=1,\dots,q} |\mu_j|/2 and \max_{j=1,\dots,q} 2|\mu_j| together with their mirror images with respect to 2\pi/\Delta. On I_1 \cup I_2 \cup I_3 one uses

|1 - e^{\lambda_j\Delta} e^{-i\omega\Delta}|^2 \le 2|1 - e^{-i\omega\Delta}|^2 + 4\Delta^2|\lambda_j|^2 = 8\sin^2(\Delta\omega/2) + 4\Delta^2|\lambda_j|^2 \le 4\Delta^2(\omega^2 + |\lambda_j|^2),

and on I_4 \cup I_5 \cup I_6 the analogous bound with \omega replaced by 2\pi/\Delta - \omega. The factors |1 - \xi_j^\Delta e^{-i\omega\Delta}|^2 are bounded below by \tfrac{1}{8}\Delta^2|\mu_j|^2 on I_1 and I_6, by

|1 - \xi_j^\Delta e^{-i\omega\Delta}|^2 \ge \left( 1 - \frac{\Re(\mu_j)^2}{2|\mu_j|^2} \right) (\Delta\omega)^2

on I_2 (with the mirror bound on I_5), and by \tfrac{3}{5}\cdot 4\sin^2(\Delta\omega/2) on I_3 and I_4. Combining these estimates bounds the LHS of (3.5.12) on each subinterval by a constant depending only on t, p, q and the roots \lambda_j, \mu_j; taking C as the largest of the six constants completes the proof.
4.1. Introduction

This chapter is organized as follows: in Section 2 we review the data acquisition process and present possible issues that can arise in the analysis of the data; in Section 3 we show that the discretization error is approximately uniform and independent of the measured value if the discretization level is small, and that it can be safely ignored in the estimation of second order quantities. In Section 4 we analyse the Brookhaven dataset, which presents a very pathological histogram of the increments, especially for small increments.
Figure 4.1.: Succession of typical operations performed on the data, as reported in Bruun (1995).
As mentioned before, the output of a hot-wire is not a windspeed, but the voltage difference V⁰(t) between the two ends of the anemometer. This signal will be filtered by applying a low-pass filter, with cut-off frequency set at half (or less) of the sampling frequency f₀. The rationale for this operation is the Shannon-Nyquist theorem, which assures that a good reconstruction of the signal in the time domain can be obtained only for the information contained in the part of the spectrum with frequency f < f₀/2. All these operations are applied to the analog (i.e. continuous-time) signal through electronic devices.
After this filtering operation we have a signal V⁰(t), which is amplified through an amplifier that gives an output V¹(t) = aV⁰(t), where a > 0 is the gain factor.
After these manipulations the signal is passed through an analog-to-digital converter (A/C), in order to get a discretized sequence of values to be used by a computer. The output of the A/C can assume only 2^p distinct values, where p is the number of bits. Let us assume that the signal to be sampled is included in the interval E = [V_l, V_h]; then we call ΔE = V_h − V_l the voltage scale. The voltage scale can be adjusted, depending on the amplitude of the incoming signal, but remains fixed during the experiment. Since only signals with values in E can be properly sampled, it is necessary to take ΔE large enough in order to have

    V_l ≤ aV⁰(t) ≤ V_h for all t ∈ [0, T],

otherwise the experiment must be performed again, since the exceeding values will be censored, i.e. set to the nearest value inside E. The digital data will then have a truncation
level of

    ∇ = ΔE / (2^p − 1),        (4.2.1)

that is, every variation of the voltage smaller than ∇ won't be visible in the processed data. We then obtain discrete-time data V_i^∇, for i = 0, …, ⌊Tn⌋:

    V_i^∇ = ∇ ⌊V¹(i/n)/∇ + 1/2⌋ = ∇ ⌊aV⁰(i/n)/∇ + 1/2⌋;        (4.2.2)

the voltage is then converted into a windspeed by means of a calibration curve f_θ, which for hot-wires is given by King's law

    v = f_θ(V) = ((V² − k₀)/k₁)^{1/n},   V ∈ E,        (4.2.3)
and the parameter vector is θ = (k₀, k₁, n). We note that f_θ is increasing and concave. If the voltage signal had symmetric increments, the windspeed increments will in general not be symmetric.
Normally this procedure is too expensive to be performed for each commercially available anemometer, and it is reserved only for those employed in high-precision scientific measurements. For all other purposes the producer provides a standard calibration curve.
Although the discretization procedure is clear in the voltage data, in the wind data it is not. That can be seen using the chain rule,

    Δ_{(i+1)} v = f_θ(V^∇_{i+1}) − f_θ(V^∇_i) ≈ f_θ′(V^∇_i) (V^∇_{i+1} − V^∇_i).
The discretization step now depends on the derivatives of the calibration curve, which is normally not given along with the time series. We shall now use an argument to recover the original voltage data. By Taylor's formula,

    f_θ(V^∇ + ΔV) − f_θ(V^∇) = Σ_{i=1}^{K} (1/i!) f_θ^{(i)}(V^∇) (ΔV)^i + o((ΔV)^K).        (4.2.4)
We conclude from the RHS of (4.2.4) that the new discretization steps depend on the derivatives of the calibration curve f_θ at the voltage V^∇ and on the discrete increments ΔV. Since Taylor's formula is precise only for small ΔV and, by smoothness of f_θ, for small Δv, it makes sense to consider only the increments smaller than a threshold ε. Then, for all j_i ∈ ℕ such that Δ_{j_i} v < ε, we collect

    Δ^{[ε]} v = (Δ_{j_0} v, …, Δ_{j_n} v),

where Δ_j v = v_{j+1} − v_j, and we summarize the associated levels as

    v^{[ε]} = (v_{j_0}, …, v_{j_n}).

Plotting Δ^{[ε]} v versus v^{[ε]} gives a striking visual impression of the discretization effect, as shown in Figure 4.2.
Of course, null increments are distributed along all the measurement levels, but the other increments move along solid curves as the speed varies; this is due to the variation of the derivatives of f_θ. We then fix ΔV and get the discretized increments Δ_i v := f_θ(V + iΔV) − f_θ(V); using Taylor's formula,

    Δ_k v = Σ_{i=1}^{K} (1/i!) f_θ^{(i)}(V) (kΔV)^i + o((ΔV)^K),   k = 1, …, K.        (4.2.5)
Since our data are given in velocities, we perform a change of variables V = f_θ^{-1}(v). Then, in the light of (4.2.5), we can interpret Figure 4.2 in two different manners:
We fix v and look at the different branches of the increments: here we see how the velocity can jump discretely only to some predefined values. That happens because the voltage moves with step ΔV.
We look only at the first, upper branch (i.e. the one composed of the smallest non-null increments). If we move along this curve, we have fixed ΔV and we vary v, that is, we see how the derivatives of f_θ influence the increments.

Figure 4.2.: Δ^{[ε]} v vs. v^{[ε]} for the Brookhaven dataset (see Section 4.4), ε = 0.05. The nonlinear behavior of the curves is given by the derivatives of King's law. Note that 92.36% of all increments of the Brookhaven dataset belong to Δ^{[0.05]} v, including the already mentioned 23% of null increments (in red).

The first outlook gives us an immediate feeling of the discreteness of our data, but the second one is more productive. Since ΔV ≈ (2¹² − 1)^{-1} = 2.4420 × 10⁻⁴, we should be able to reach a good approximation with a low order Taylor series.
We propose the following strategy to recover the parameter vector θ.
Take the increments Δ^{(1)} v := {Δv : Δv belongs to the first branch} and the levels v^{(1)} := {v : v belongs to the first branch}. Furthermore, let β = (θ, ΔV) be the extended parameter vector of the Taylor series and let β^{(0)} be a guess on β, which we take as initial values. Then we perform a nonlinear least squares fitting on the first order Taylor series

    Δ^{(1)} v = f_θ′(f_θ^{-1}(v^{(1)})) ΔV,        (4.2.6)

returning the estimate β^{(1)}. If the estimate is too crude, we may need to take higher order Taylor polynomials into account, getting the estimate, for instance, β^{(2)} from a least squares fitting on the second order

    Δ^{(1)} v = f_θ′(f_θ^{-1}(v^{(1)})) ΔV + f_θ″(f_θ^{-1}(v^{(1)})) (ΔV)²/2.        (4.2.7)
For a ∈ [0, 1], the rescaled discretization error ε_{∇,a} is defined by

    ε_{∇,a}(∇(k + u)) = u for 0 < u < 1 − a,   ε_{∇,a}(∇(k + u)) = u − 1 for 1 − a ≤ u < 1,   k ∈ ℤ, u ∈ [0, 1).

1. As ∇ → 0,

    ∫_ℝ g_∇(x, ε_{∇,a}(x)) dx = ∫_ℝ ∫_{−a}^{1−a} g_∇(x, u) du dx + ∇ ∫_ℝ ∫_{−a}^{1−a} ψ₁(u, {x/∇}) ∂_x g_∇(x, u) du dx + o(∇),        (4.3.1)

where

    ψ₁(u, v) = ψ̄₁(u, v) for u > 0,   ψ₁(u, v) = ψ̄₁(u + 1, v) for u ≤ 0,

and

    ψ̄₁(u, v) = v − 1_{v>u}.

2. Let a = 1/2. If g_∇ is odd in the second variable, the second integral in (4.3.1) is zero. Moreover, if g_∇ is even in the first variable, the r.h.s. of (4.3.1) is 0.

3. The first order correction in (4.3.1) is

    ∇ ∫_ℝ ∫_{−a}^{1−a} ψ₁(u, {x/∇}) ∂_x g_∇(x, u) du dx.        (4.3.2)

Proof. The proof goes along the lines of Proposition 5.1 of Delattre (1997).
The interpretation of Proposition 4.3.1 3) is clear. The original value of a random variable X is approximately independent of the rescaled discretization error ε_{∇,a}(X), where the non-independent part is proportional to ∇. The discretization error is then approximately independent of the true value X, and it is distributed as a uniform random variable on (−a, 1 − a).
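A quick Monte Carlo check of this approximate uniformity and independence; the distribution of X and the grid size below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
step = 1e-3                                        # fine rounding grid (the nabla above)
x = rng.normal(loc=5.0, scale=1.0, size=200_000)   # toy distribution for X

# rescaled round-to-nearest error, taking values in [-1/2, 1/2]
eps = x / step - np.round(x / step)

mean_eps = eps.mean()                   # should be close to 0
var_eps = eps.var()                     # should be close to 1/12, the uniform variance
corr = np.corrcoef(x, eps)[0, 1]        # should be negligible
```

The residual correlation shrinks with the grid size, in line with the proportionality to ∇ of the non-independent part.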
Let us consider the concrete case of our measurements. There the situation is a bit more intricate, since we transform the discretized values X^∇ to get the data of interest. From now on, we shall suppose that a = 1/2, that is, rounding to the nearest level.
We take g_∇(x, ε_{∇,a}(x)) = f_X(x) h(x^∇) = f_X(x) h(x − ∇ε_{∇,a}(x)), where h is a differentiable function defined on a compact set.
Then (4.3.1) can be rewritten as

    E[h(X^∇)] = E_X[∫_{−1/2}^{1/2} h(X − ∇u) du] + ∇ E_X[∫_{−1/2}^{1/2} ψ₁(u, {X/∇}) h′(X − ∇u) du]
                + ∇ ∫_ℝ ∫_{−1/2}^{1/2} ψ₁(u, {x/∇}) h(x − ∇u) f_X′(x) du dx + o(∇).

If ∇ is small, then we can expand h(x − ∇u) = h(x) − ∇h′(x)u + o(∇) and, computing the integrals in u where possible, obtain

    E[h(X^∇)] = E[h(X)] + ∇ ∫_ℝ ∫_{−1/2}^{1/2} ψ₁(u, {x/∇}) h(x) f_X′(x) du dx + o(∇).

Taking the absolute value of both sides of the previous equation, using the fact that |ψ₁| ≤ 1 and the triangle inequality, we get

    |E[h(X^∇)] − E[h(X)]| ≤ ∇ ∫_ℝ |h(x) f_X′(x)| dx + o(∇).        (4.3.3)

Equation (4.3.3) gives an estimate of the error committed in estimating the mean of h(X) when X is rounded to the nearest level. In the specific case of the nth moment of a turbulent dataset, we can take X to be the voltage, and h(x) = f_θ^n(x).
For King's law we obtain

    f_θ′(f_θ^{-1}(v)) = 2 v^{1−n} √(k₀ + k₁ v^n) / (k₁ n),

    f_θ″(f_θ^{-1}(v)) = 2 v^{1−n} / (k₁ n) + 4 (1 − n) v^{1−2n} (k₀ + k₁ v^n) / (k₁² n²).

For the first branch the behavior depends mainly on the first derivative, which allows us to make a guess on the parameter n, which gives the shape of the curve. For small values of v the curve tends to zero, and for big values of v it grows roughly linearly, so we can suppose n ∈ (0, 1), with an initial guess n^{(0)} = 0.5.
Since the mean value of the windspeed was unfortunately subtracted from the Brookhaven data, we have to consider, instead of the actual velocities v, a Reynolds-like decomposition

    v(t) = u(t) + v₀,   t ∈ [0, T],

where v₀ is the unknown average windspeed. Since the derivatives depend in a nonlinear way on the values of the windspeed, this information is important and has to be recovered as well.
In Dhruva (2000), Table 1, entry 3 (BNL) reports v₀ = 8.3, but it was calculated on 4 × 10⁷ values, while our dataset has "only" 2 × 10⁷ datapoints. Two strategies are viable: taking the average speed in Dhruva (2000) for real, or using it as a guess, increasing the number of parameters to be fitted by one.
We tried both, getting similar parameters; it is very interesting to notice how a slight variation in the value of v₀ causes a relatively better fit, as shown by the residuals listed in Table 4.1. However, the really close estimation of the average windspeed on only half the dataset is, at least, a good indication of the stationarity of this dataset.
Table 4.1.: Parameters of King's law fitted to the Brookhaven data: initial guess β^{(0)}, estimates with v₀ fixed (β^{(1)}, β^{(2)}) and with v₀ estimated (β̃^{(1)}, β̃^{(2)}), from the first and second order Taylor fits respectively.

             k₀        k₁        n        ΔV              v₀        max(res)      max(res/Δ^{(1)}v)
  β^{(0)}    12        12        0.5      10(2¹² − 1)⁻¹   8.3       –             –
  β^{(1)}    13.0084   10.4190   0.5005   13.31×10⁻⁴      8.3       0.0051        9.87×10⁻⁹
  β^{(2)}    12.9990   10.4449   0.5010   13.32×10⁻⁴      8.3       0.0052        1.05×10⁻⁸
  β̃^{(1)}    13.0386   10.5953   0.5000   13.40×10⁻⁴      8.3115    8.78×10⁻⁶     3.23×10⁻⁶
  β̃^{(2)}    13.0316   10.5653   0.4999   13.40×10⁻⁴      8.3120    8.97×10⁻⁶     3.21×10⁻⁶
Once we have estimated β, we can proceed to reconstruct the voltage data and compare them to the windspeed data. In Figure 4.3 it is rather obvious that the discretization in the voltage data is generating the unexpected behavior in the center of the histogram of the Brookhaven data.
From the voltage data we obtain ΔV = 0.0011, which is not that far from the one estimated by the least squares fitting. We stress that the estimate of ΔV does not enter in King's law, but it is necessary, since it gives the scaling in the Taylor series. That translates into ΔE = 0.0011 (2¹² − 1) ≈ 4.5 V, which is plausible, since max_i(V_i^∇) − min_i(V_i^∇) = 2.8763. This indicates that the experimenters took a large but reasonable voltage scale, to be able to sample the whole signal, sacrificing some accuracy. In Figure 4.3 f) we can see the derivative of f_{θ̃^{(2)}} plotted on the range assumed by V^∇: it is not constant, and the transformation, although smooth, is rather nonlinear.
Table 4.2.: Mean, variance and normalized moments of order n = 3, …, 7 of the increments of the voltage ΔV^∇ and of the windspeed Δv.

          mean        variance     3      4       5        6          7
  ΔV^∇    1.54×10⁻⁸   1.87×10⁻⁵    0.62   31.48   143.61   1.84×10⁴   3.76×10⁵
  Δv      1.22×10⁻⁷   9.62×10⁻⁴    0.64   25.93   82.37    4.69×10³   3.55×10⁴
windspeed) must enjoy a rather strong form of continuity. A first comparison between the two datasets can be performed by looking at the normalized moments. In general the voltage increments have higher normalized moments, that is, heavier tails than the windspeed increments.
Barndorff-Nielsen and Schmiegel claim that the Normal-Inverse Gaussian (NIG) distribution fits well the distribution of the increments of the windspeed of high frequency turbulent data.
A characteristic of turbulence is the so-called multifractality, that is, the nth order structure function,

    S_n(τ) = E[|X_{t+τ} − X_t|^n],   n ∈ ℕ,

behaves like τ^{c(n)} for τ in the inertial range, where c(n) is a function of n. Kolmogorov's K41 theory (Kolmogorov (1941b,a, 1942)) predicts c(n) = n/3, but empirical observations suggest a rather nonlinear behaviour of c(n). It is common in physics to employ the empirical concept of Extended Self Similarity (Benzi et al. (1993)) to estimate the exponents c(n) as n varies. In Figure 4.3 c) we plot the estimated exponents for the windspeed data and the voltage data; they are quite in agreement. The red line is the linear behaviour predicted by Kolmogorov's theory. Therefore we can conclude that the scaling behaviour of the moments is not strongly affected by the conversion from voltage to windspeed.
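The ESS estimation of the relative exponents can be sketched on a toy signal. For a Brownian path S_n(τ) is proportional to τ^{n/2}, so the ESS slope of log S_n against log S_3 equals n/3, the monofractal K41 line; the signal below is synthetic, not turbulence data:

```python
import numpy as np

rng = np.random.default_rng(2)
# toy signal: Brownian path, S_n(tau) ~ tau^(n/2), hence ESS slopes n/3
x = np.cumsum(rng.standard_normal(2**18))
taus = 2 ** np.arange(2, 9)

def structure_function(x, tau, n):
    return np.mean(np.abs(x[tau:] - x[:-tau]) ** n)

log_S3 = np.log([structure_function(x, t, 3) for t in taus])
ess_slope = {
    n: np.polyfit(log_S3, np.log([structure_function(x, t, n) for t in taus]), 1)[0]
    for n in (2, 4, 6)
}
```

For real multifractal data the estimated slopes bend below the n/3 line as n grows, which is the deviation discussed above.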
We tried to fit instead a Generalized Hyperbolic distribution (GH), which generalises the NIG distribution, to the increments of both voltage and windspeed, in order to compare the statistical properties of the data. This class of distributions is very flexible, because it allows one to model skewness and semi-heavy tails.
This probability distribution has density

    f(x; λ, α, β, δ, μ) = ((α² − β²)/δ²)^{λ/2} / (√(2π) α^{λ−1/2} K_λ(δ√(α² − β²))) · e^{β(x−μ)} (δ² + (x − μ)²)^{(λ−1/2)/2} K_{λ−1/2}(α√(δ² + (x − μ)²)),

where K_λ is the modified Bessel function of the second kind. μ ∈ ℝ is a location parameter, δ > 0 accounts for dispersion around the mean, α > 0 and λ ∈ ℝ are shape factors, and β with |β| < α takes into account the asymmetry of the tails. The NIG distribution can be obtained by setting λ = −1/2.
The tail behavior of this distribution is expressed by

    f(x; λ, α, β, δ, μ) ∼ x^{λ−1} exp(−(α − β)x),   x → ∞.
The tails of this distribution are thus power laws with an exponential tempering. This is not quite in agreement with the current turbulence literature, which advocates tails of the form e^{−a|x|^b}, where a depends on which tail we are considering (positive or negative) and b varies with the spacing. For increments over small spacings the value b = 0.5 is often suggested. We start using the method of moments (i.e. we find the parameters which match the sample moments of the data, up to order 5) and then use this starting position for the Maximum Likelihood method.
Looking at Table 4.3 and Figure 4.3 d) e), we see that the Maximum Likelihood estimation gives a completely different (and qualitatively worse) fit with respect to the coarser method of moments. The fits were performed on normalized data, i.e. we subtracted the mean and divided by the standard deviation before fitting, in order to compare the estimates. We observe that the Method of Moments tends to give higher absolute values of λ and smaller values of α than the Maximum Likelihood, therefore favoring the slower power-law decay over the exponential one.
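The tempered power-law tail can be checked numerically with SciPy's generalized hyperbolic distribution (available from SciPy 1.8); in its standardised (p, a, b) parametrisation the log-density behaves like (p − 1) log x − (a − b) x for large x. The parameter values below are illustrative, not the fitted ones:

```python
import numpy as np
from scipy.stats import genhyperbolic   # SciPy >= 1.8

# p = -1/2 corresponds to the NIG special case; values are illustrative
p, a, b = -0.5, 2.0, 0.5

xs = np.array([100.0, 150.0])
logpdf = genhyperbolic.logpdf(xs, p, a, b)

# far in the tail the slope of the log-density approaches -(a - b),
# the exponential tempering rate of the power-law tail
tail_slope = (logpdf[1] - logpdf[0]) / (xs[1] - xs[0])
```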
Table 4.3.: Parameters of the GH distribution for the increments of the Brookhaven data (voltage and windspeed), estimated by the method of moments (MoM) and by maximum likelihood (LL).

              λ        α        β        δ        μ
  ΔV (MoM)    0.9797   0.2231   0.0224   0.6757   0.0223
  ΔV (LL)     0.3884   0.5218   0.0478   0.3972   0.0460
  Δv (MoM)    0.5748   0.3508   0.0287   0.4182   0.0285
  Δv (LL)     0.3488   0.5227   0.0437   0.3622   0.0421
Figure 4.3.: Statistics comparison between voltage and wind data. a) b) Respectively, histograms of the increments of the voltage and the windspeed; only values between −3√(E[(ΔV)²]) and 3√(E[(Δv)²]) are shown. c) Scaling exponents of the structure functions of order n = 2, …, 7, calculated via the ESS. d) e) Log-histogram of the increments with Generalized Hyperbolic fits via Method of Moments and Log-Likelihood. f) Plot of the derivative of King's law over the values assumed by the voltage.
5.1. Introduction
The Wold-Karhunen representation (Doob (1990), p. 588) states that every non-deterministic, one dimensional, stationary stochastic process Y = {Y_h}_{h∈ℝ}, whose two-sided power spectrum E(ω) satisfies the Paley-Wiener condition

    ∫_ℝ |log E(ω)| / (1 + ω²) dω < ∞,        (5.1.1)

admits the representation

    Y_h = ∫_{−∞}^{h} g(h − s) dW_s,   h ∈ ℝ,        (5.1.2)

where W = {W_h}_{h∈ℝ} is a process with uncorrelated and weakly stationary increments, E[dW_h] = 0 and E[(dW_h)²] = dh. The kernel g is an element of the Hilbert space L², and the autocovariance function of Y is

    γ_Y(τ) = ∫_0^∞ g(s + τ) g(s) ds,        (5.1.3)
while the two-sided power spectrum is E(ω) = (2/π) ∫_0^∞ γ_Y(s) cos(ωs) ds = |F{g(·)}(ω)|², where

    F{g(·)}(ω) = (1/√(2π)) ∫_0^∞ g(s) e^{−iωs} ds,   ω ∈ ℝ,

denotes the Fourier operator. Condition (5.1.1) is crucial, since for a general stationary process a representation similar to (5.1.2) holds, but the kernel g may not vanish on (−∞, 0) and the integrals in (5.1.2) and (5.1.3) are extended to the whole real line (Yaglom (2005), Ch. 26). The rate of decay of the spectrum plays a pivotal role in determining whether a given process has representation (5.1.2) or not, since (5.1.1) excludes spectra decaying at infinity as exp(−|ω|) or faster. In this context h or ω do not necessarily denote time or frequency. In the following x will denote a streamwise spatial coordinate, κ₁ the associated wavenumber, t a time coordinate, and ω = 2πf the associated angular frequency with frequency f.
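Whether a given spectrum satisfies (5.1.1) can be checked numerically; a sketch with two model spectra, both chosen only for illustration:

```python
import numpy as np
from scipy.integrate import quad

# Paley-Wiener integrand |log E(w)| / (1 + w^2):
# - a von-Karman-like power law E(w) = (1 + w^2)^(-5/6) satisfies (5.1.1);
# - a spectrum decaying like exp(-w^2) violates it, since its integrand
#   tends to the constant 1 at infinity and the integral diverges.
power_law_integrand = lambda w: (5.0 / 6.0) * np.log1p(w**2) / (1.0 + w**2)
gaussian_integrand = lambda w: w**2 / (1.0 + w**2)

I_pl, _ = quad(power_law_integrand, 0.0, np.inf)   # finite
tail_value = gaussian_integrand(1e6)               # -> 1, integral diverges
```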
In the turbulence literature the spectral properties of turbulent velocity fields have been intensively investigated, starting from Kolmogorov's K41 theory (Kolmogorov (1941b,a, 1942)). In the Wold-Karhunen representation (5.1.2) the second order properties of Y depend only on the function g, with no necessity to specify the driving noise W. This is analogous to K41 theory, where the second order properties of the velocity field can be handled without considering the intermittent behavior of the turbulent flow.
From now on, we shall denote the mean flow velocity component by V. Moreover, we work with the usual Reynolds decomposition V = U + u, where U denotes the mean velocity and u is the time-varying part of V. Then Re := UL/ν is the Reynolds number of the flow, where L denotes a typical length and ν is the kinematic viscosity of the flow.
In K41 the first universality hypothesis claims that, for locally isotropic and fully developed turbulence (i.e. Re ≫ 1), the spatial power spectrum E_L of the mean flow velocity fluctuations has the universal form

    E_L(κ₁) = (ε̄ν⁵)^{1/4} φ_L(ηκ₁) = v_η² η φ_L(ηκ₁),        (5.1.4)

where η = (ν³/ε̄)^{1/4} and v_η = (ε̄ν)^{1/4} are, respectively, Kolmogorov's length and velocity, and φ_L(·) is a universal, dimensionless function of a dimensionless argument. As a cornerstone of the K41 theory, much effort has been devoted to verifying Eq. (5.1.4) and to determining the functional form of φ_L. For finite Re we can define φ_L in (5.1.4) as the rescaled spectrum φ_L^{Re}.
In an experimental setting with a probe in a fixed position, the spatial power spectrum can be estimated from the time spectrum Ē_L, using Taylor's hypothesis (Taylor (1938)),

    E_L(κ₁) = U Ē_L(U κ₁),        (5.1.5)

where κ₁ = 2πf/U. This is regarded as a good approximation when the turbulence intensity I = √(E[u²])/U ≪ 1 (Wyngaard and Clifford (1977), Lumley (1965)). Relation (5.1.5) is a first order approximation, for which in general higher order corrections are feasible (Gledzer (1997), Lumley (1965)), and error bounds can also be obtained.
Comparison of a large number of experimental low-intensity data sets (Gibson and Schwarz (1963), Saddoughi and Veeravalli (1994), Saddoughi (1997)) shows that in the dissipation range φ_L^{Re} is unvarying over a wide range of Reynolds numbers. On the other hand, the inertial range does not exist for small Reynolds numbers (Champagne (1978)); however, its length increases with the Reynolds number. The supposed infinite differentiability of the solution of the Navier-Stokes equation forces φ_L to decrease faster than any power in the far dissipation range (ηκ₁ > 1) (Von Neumann (1961-1963)). Moreover, the link between second order properties and third order ones is given by Kolmogorov's relation (Kolmogorov (1941a), Pope (2000), Lindborg (1999))

    S₃(x, t) = −(4/5) ε̄ x + 6ν dS₂/dx − (3/x⁴) ∫_0^x z⁴ (dS₂/dt) dz,        (5.1.6)

where S_n(x, t) = E[(u(s + x, t) − u(s, t))^n] is the nth order longitudinal structure function. Formula (5.1.6) has been used in Sirovich et al. (1994) to determine an expression for the spatial spectrum which decreases like κ₁^α exp(−βηκ₁) as κ₁ tends to infinity, where α and β depend on the Reynolds number. Numerical (see e.g. Chen et al. (1993), Martínez et al. (1997), Ishihara et al. (2007), Schumacher (2007), Lohse and Müller-Groeling (1995)) and experimental (Saddoughi and Veeravalli (1994), Saddoughi (1997)) studies confirmed such exponential decay in the dissipation range. Under the hypothesis of local isotropy the local rate of energy dissipation is

    ε_x = 15ν (∂_x V)² ≈ (15ν/U²) (∂_t V)² =: ε_t,   t ∈ ℝ,        (5.1.7)

where the approximation with the instantaneous rate of energy dissipation ε_t follows from Taylor's hypothesis. As customary in the physics literature, we will drop the term "rate" for the sake of simplicity. The average energy dissipation can be calculated directly from the spatial spectrum (Batchelor (1953)) as

    ε̄ := E[ε_x] = 15ν ∫_0^∞ κ₁² E_L(κ₁) dκ₁ ≈ E[ε_t],        (5.1.8)
function cannot hold for every lag, but must stop near the origin and for large lags. The matter was studied in great detail by Cleve (2004).
Since the model (5.1.2) is a linear model, and the surrogate energy dissipation (5.1.7) is a nonlinear function of the velocity, this behaviour cannot be reproduced by the kernel alone. Therefore we advocate that the intermittency process W = {W_t}_{t∈ℝ} can be appropriately modelled by a two-sided, time-changed Lévy process

    W_t = L_{∫_0^t Y_s ds},   t ∈ ℝ,        (5.1.9)

where L is a purely discontinuous martingale with tempered stable Lévy measure (see Rosiński (2007)) and Y is itself a positive, ergodic, causal continuous-time moving average process independent of L. In detail: we suppose there exist 0 < α < 2 and two completely monotone functions q₊, q₋ : ℝ₊ → ℝ₊, called the tempering functions, such that the Lévy measure of L is given by

    F(dx) = (q₊(x)/x^{1+α}) 1_{x>0}(x) dx + (q₋(−x)/|x|^{1+α}) 1_{x<0}(x) dx.        (5.1.10)
In Section 5.2 the model (5.1.2) for the timewise behavior of the time-varying component u of a turbulent velocity field is motivated, first by an analysis of the literature, yielding a discussion of the errors of Taylor's hypothesis and their consequences for our model. Physical scaling properties of the kernel then lead to a model for g in the inertial and energy containing range. In Section 5.3 the estimation methods of Section 5.2 are applied, using a nonparametric estimation of the kernel on 13 turbulent data sets, with Reynolds numbers Re spanning 5 orders of magnitude. In Section 5.4 we illustrate a parametric model for the intermittency process, inside the larger class of time-changed Lévy processes. More specifically, we propose and fit to the data a parametric model, motivating it from the physical point of view. In order to validate the fitted model, in Section 5.5 we simulate it and compare the results with the original data.
Based on such facts we postulate that the time spectrum of every flow with turbulence intensity I > 0 satisfies (5.1.1), and that it is related to the spatial spectrum by

    E_L(κ₁) = U Ē_L(I U κ₁),        (5.2.1)

and we model the velocity as

    V_t = U + C₂ ∫_{−∞}^{t} g(C₁(t − s)) dW_s,   t ∈ ℝ,        (5.2.2)

where g is a positive universal function, depending on the Reynolds number Re and the turbulence intensity I; C₁ and C₂ are normalizing constants to be determined. The integral represents the time-varying part u in the Reynolds decomposition. The scaling property of the Fourier transform, i.e. that for c > 0, cF{g(c·)}(ω) = F{g(·)}(ω/c), gives

    Ē_L(ω) = (C₂/C₁)² |F{g(·)}(ω/C₁)|².

Using (5.1.4) with φ_L replaced by φ_L^{Re}, i.e. without considering the limit Re ≫ 1, and (5.2.1), the relation between the rescaled spatial spectrum and the time spectrum is given by

    v_η² η φ_L^{Re}(ηκ₁) = U (C₂/C₁)² |F{g(·)}|² (I U κ₁/C₁).        (5.2.3)
Figure 5.1.: a) Time spectrum for data set h3; the solid line indicates Kolmogorov's −5/3 law. b) Convergence of the Paley-Wiener integral (5.1.1). The spectral density is estimated with the Welch method, using a Hamming window of 2¹⁴ data points and 60% overlap.
We now specialize (5.2.2) to

    V_t = U + (U/v_η) ∫_{−∞}^{t} g(ω_f (t − s)) dW̃_s,        (5.2.4)

and write g̃(·) := (U/v_η) g(ω_f ·). Then

    Δ^{−1} Δ_Δ V_t := Δ^{−1}(V_{t+Δ} − V_t) = ∫_{−∞}^{t} Δ^{−1}[g̃(t + Δ − s) − g̃(t − s)] dW̃_s + Δ^{−1} ∫_{t}^{t+Δ} g̃(t + Δ − s) dW̃_s.

If Δ^{−1} ∫_t^{t+Δ} g̃(t + Δ − s) dW̃_s → 0 a.s. as Δ → 0, we have that the derivative process is, with the limit assumed to exist a.s.,

    ∂_t V = lim_{Δ→0} Δ^{−1} Δ_Δ V_t = (U/v_η) ω_f ∫_{−∞}^{t} g′(ω_f (t − s)) dW̃_s,

i.e. it is again a model of the form (5.1.2), and we have essentially exchanged integration and differentiation. Moreover, plugging (5.1.4) into (5.1.8) we obtain

    ε̄ = 15ν (v_η²/η²) ∫_0^∞ s² φ_L(s) ds,        (5.2.5)

and the surrogate energy dissipation (5.1.7) becomes

    ε_t = (15ν/U²) (U/v_η)² ω_f² ∫_{−∞}^{t} ∫_{−∞}^{t} g′(ω_f (t − s₁)) g′(ω_f (t − s₂)) dW̃_{s₁} dW̃_{s₂}.        (5.2.6)
The turbulence intensity then satisfies

    I = √(E[u²]) / U = (v_η/U) ‖g‖_{L²} ∝ Re^{−1/4} ‖g‖_{L²}.        (5.2.7)

A classical parametric model for the kernel is the gamma kernel

    g(t) = C_Re t^{α_Re − 1} e^{−ω_Re t},   t ≥ 0,        (5.2.8)

where α_Re > 1/2 and ω_Re, C_Re > 0. This model yields a power-law time spectrum (Pope (2000))

    Ē_L(ω) = (2π)^{−1} C_Re² Γ(α_Re)² (ω_Re² + ω²)^{−α_Re},        (5.2.9)
where Γ(·) is Euler's gamma function. For instance, the von Kármán spectrum (von Kármán (1948)) is a special case of (5.2.9), where α_Re = 5/6 and C_Re = C/Γ(α_Re) ≈ 1.1431, and C is the Kolmogorov constant, which has been found to be around 0.53 over a large number of flows and Reynolds numbers, and is therefore regarded as universal and independent of the Reynolds number (Sreenivasan (1995)). For ω ≪ ω_Re the spectrum (5.2.9) is constant; hence ω_Re can be interpreted as the transition frequency from the inertial range to the energy containing range; moreover, since the upper limit of the inertial range is independent of the Reynolds number, the size of the inertial range varies as ω_Re^{−1}.
The gamma model has two essential shortcomings: firstly, it fails to model the steeper spectral decay in the dissipation range, but it can be regarded as a good model for the inertial range. Secondly, the kernel has unbounded support, implying that the process is significantly autocorrelated even for large times, although the autocorrelation is exponentially decaying.
From now on, we shall assume that the kernel function g has compact support, i.e. g(t) = 0 for t > T_Re and t < 0, where T_Re is the decorrelation time (O'Neill et al. (2004)). Since the inertial range increases with Re (see e.g. Pope (2000), p. 242) we expect ω_Re to decrease. For the same reason we expect T_Re to increase with Re.
Explicit computations can be carried out for the truncated gamma model, considering a truncation at T_Re and assuming that the failure of the gamma model in the dissipation range does not significantly affect ‖g‖_{L²}. Then

    ‖g‖²_{L²} = C_Re² ∫_0^{T_Re} s^{2(α_Re − 1)} e^{−2ω_Re s} ds = C_Re² (2ω_Re)^{1−2α_Re} [Γ(2α_Re − 1) − Γ(2α_Re − 1, 2ω_Re T_Re)],

where Γ(a, z) := ∫_z^∞ s^{a−1} e^{−s} ds. If T_Re ω_Re ≫ 0 and it does not vary too much with Re, with the choice of the parameters as in the von Kármán spectrum and the fact that ‖g‖²_{L²} ∝ Re^{1/2}, we get ω_Re ∝ Re^{−3/4}.
Table 5.1.: h1-h11: Helium data (Chanal et al. (2000)), a1: Sils-Maria data (Gulitski et al. (2007)), a2: Brookhaven data (Dhruva (2000)). The norm ‖g‖_{L²} is evaluated using (5.2.7).

        ε̄·10²      η       λ       R_λ     F       f        I       ‖g‖_{L²}  α_Re   ω_Re     T_Re·10²  C₂
        [m²s⁻³]    [μm]    [mm]            [kHz]   [kHz]    [%]                      [s⁻¹]    [s]
  h1    2.72       36.42   0.76    112     5.74    1.14     20.56   5.37      0.76   363.85   0.98      0.84
  h2    0.52       55.07   1.11    105     2.87    0.52     19.19   5.21      0.73   387.11   1.05      0.38
  h3    20.14      22.08   0.55    162     22.97   3.57     21.47   6.46      0.85   276.51   1.38      2.46
  h4    0.85       15.7    0.49    253     5.74    1.74     24.04   8.08      0.81   120.81   3.06      0.53
  h5    13.14      24.56   0.66    184     45.94   6.01     10.98   6.9       0.95   316.1    1.25      2.87
  h6    10.98      8.28    0.36    495     22.97   9.02     23.34   11.31     0.83   63.57    5.95      2.31
  h7    64.37      5.32    0.26    640     91.88   25.66    22.58   12.85     0.87   49.87    7.53      6.05
  h8    1995.62    2.25    0.14    930     367.5   175.68   22.15   15.5      0.84   22.95    13.81     37.37
  h9    2120.81    2.22    0.14    978     367.5   190.05   21.64   15.89     0.84   21.78    15.79     39.46
  h10   5192.01    1.78    0.11    1005    367.5   285.13   22.88   16.11     0.83   18.13    18.42     60.46
  h11   13580.72   1.4     0.1     1336    735     570.02   21.34   18.58     0.8    11.76    26.57     108.72
  a1    1.38       837.47  140.01  7216    10      1.21     15.29   43.16     0.77   0.55     282.72    1.97
  a2    9.41       442.42  115.85  17706   5       2.99     28.23   67.61     0.81   0.24     882.39    4.75
see Chanal et al. (2000)). Since the data sets come from different experimental designs, characteristic quantities such as Taylor's microscale Reynolds number R_λ = √(E[u²]) λ/ν, based on Taylor's microscale λ = √(15ν E[u²]/ε̄), are considered. This Reynolds number is unambiguously defined, and the rough estimate R_λ² ≈ 2 Re holds (Pope (2000), p. 245). In all data sets the turbulence intensity I is rather high and, therefore, the expectation of the r.h.s. of (5.1.7) would not be a good approximation for ε̄. Moreover, in most of the data sets the inertial range is hard to identify due to low Reynolds numbers. The mean energy dissipation ε̄ has been estimated from (5.1.6), following Lindborg (1999). Estimation of ε̄ is a challenging and central task, since the rescaling constants in (5.2.4) depend, in the K41 spirit, only on ε̄, ν and U.
a logarithmic time axis. Due to instrumental noise, the high turbulence intensity of the data sets and the non-infinitesimal dimension of the hot-wires, estimates in the dissipation range (tω_f < 1) differ significantly from one data set to another. In general we notice that the steeper decay of the spectrum in the dissipation range is reflected by the fact that the kernel functions tend to 0 for tω_f ≪ 1, instead of exploding as the gamma model with α_Re = 5/6 does. The only estimate of the kernel that is not perfectly aligned with the others is the one for h5, which looks slightly shifted to the left. This can be attributed to the difficulty in estimating ε̄, and consequently ω_f. In Figure 5.2c) the estimates of ‖g‖_{L²} based on (5.2.7) are plotted against R_λ, and the power-law fitting shown in the figure returned ‖g‖_{L²} = 0.5081 R_λ^{0.5} ∝ Re^{0.25}.
The decorrelation time T_Re has been estimated by the first zero-crossing of the estimated g. It increases with R_λ (Figure 5.2a)) and follows empirically the law T_Re = 0.1408 R_λ^{1.3613} ∝ Re^{0.6806} (Figure 5.2c)).
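Such empirical laws are obtained by least squares on log-log scale; a sketch, where the synthetic data merely mimic the structure of the fits and are not the thesis estimates:

```python
import numpy as np

def powerlaw_fit(x, y):
    """Least-squares fit of y = c * x**g on log-log scale, as used for the
    empirical laws relating T_Re, omega_Re and ||g|| to R_lambda."""
    g_hat, log_c = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_c), g_hat

# synthetic check: recover a decorrelation-time-like law T = 0.14 * R**1.36
rng = np.random.default_rng(3)
R = np.array([112, 253, 495, 930, 1336, 7216, 17706], dtype=float)
T = 0.14 * R**1.36 * np.exp(0.01 * rng.standard_normal(R.size))

c_hat, g_hat = powerlaw_fit(R, T)
```

Working on log-log scale turns the multiplicative scatter into additive noise, so ordinary least squares applies directly.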
The transition frequency ω_Re is obtained via least-squares fitting of the gamma model (5.2.8) to the nonparametric estimate of g for tω_f > 5 (Figure 5.2b)). The statistical fits in Figure 5.2b) are very good for all data sets considered, at least when not too close to T_Re. The transition frequency follows the empirical law ω_Re = 61.4917 R_λ^{−1.5127} ∝ Re^{−0.7564} (Figure 5.2e)), which is exceptionally close to the exponent −3/4 derived for the gamma model, and ω_Re T_Re is between 1.5 and 4.1 in all data sets. The estimated values of α_Re are also close to the reference value of 5/6 ≈ 0.83, with a mean value of 0.8218. The only notable outlier is the data set h5, which has already proven to be somewhat anomalous.
As said in Section 5.2.3, the truncated gamma model is not able to capture the sharp cut-off in the neighborhood of T_Re nor the rapid decrease in the dissipation range.
To estimate how the variance of the model (5.1.2) is affected by the truncation at T_Re, we consider the quantity

    H(Re) = ‖g(·) 1_{(0,T_Re)}(·)‖²_{L²} / ‖g(·)‖²_{L²} = 1 − Γ(2/3, 2c₁c₂ Re^{γ̃−γ}) / Γ(2/3),

where the latter expression is obtained using the gamma kernel (5.2.8) with parameters α_Re = 5/6, T_Re = c₁ Re^{γ̃} and ω_Re = c₂ Re^{−γ}, where c₁, c₂ > 0 and γ > γ̃ > 0, as indicated by the least-squares fits in Figure 5.2. H represents the ratio between the variance (5.2.7) using a truncated gamma model and a non-truncated one; it is a decreasing function of Re and tends to 0 as Re → ∞, indicating that the truncation is important when Re ≫ 1, i.e. when T_Re ω_Re ≪ 1. Nonetheless, using the values of c₁, c₂, γ, γ̃ returned by the least-squares fitting, we obtain H ≈ 0.991 for the dataset a2, which exhibits the highest Reynolds number among the considered datasets. We can then conclude that, for the wide range of Reynolds numbers considered, the variance of the model (5.1.2) with a kernel following the truncated gamma model does not differ in a sensible way from the variance of the classical von Kármán model.
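The ratio H(Re) is directly computable from the regularized upper incomplete gamma function; the constants used below are illustrative stand-ins, not the fitted values:

```python
import numpy as np
from scipy.special import gammaincc   # regularized upper incomplete gamma Q(a, z)

def H(Re, c1=0.14, c2=61.5, g=0.756, g_tilde=0.68):
    """Variance ratio of the truncated to the non-truncated gamma model with
    alpha_Re = 5/6 (so that 2*alpha_Re - 1 = 2/3):
    H(Re) = 1 - Gamma(2/3, 2*c1*c2*Re**(g_tilde - g)) / Gamma(2/3).
    The constants c1, c2, g, g_tilde are placeholders for the fitted values."""
    z = 2.0 * c1 * c2 * Re**(g_tilde - g)
    return 1.0 - gammaincc(2.0 / 3.0, z)

Re = np.array([1e4, 1e6, 1e8])
ratios = H(Re)
```

Since `gammaincc(a, z)` already returns Γ(a, z)/Γ(a), no explicit normalisation by Γ(2/3) is needed.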
Finally, the behavior of ω_Re as a function of Re estimated via the gamma model agrees well with the data, showing that the dissipation range does not contribute appreciably to ‖g‖_{L²}, which is in agreement with the idea that there is little energy in the dissipation range.

Let

    T_t := ∫_0^t Y_s ds,   t ∈ ℝ.

The process Y is, in the light of the previous formula, called the activity process. We shall suppose that E[Y₀] = 1, so that E[T_t] = t.
From now on, we shall assume that the intermittency process, which acts as the driving process of the CMA process (5.2.2), is a time-changed Lévy process, that is

    W_t = L_{T_t},   t ∈ ℝ.
By Corollaire 10.12 of Jacod (1979), the time-changed Lévy process W = {W_t}_{t∈ℝ} given by (5.1.9) is a purely discontinuous martingale w.r.t. the filtration given by F_t := G_{T_t}; the process has càdlàg sample paths and W₀ = 0. The integer-valued random measure m on ℝ × ℝ given by

    m(ω; dt, dx) = Σ_{s: ΔW_s(ω) ≠ 0} δ_{(s, ΔW_s(ω))}(dt, dx),

where δ_x denotes the Dirac measure at x, is called its jump measure. By Theorem II.2.34 of Jacod and Shiryaev (2003), moreover, the increments W_{t+s} − W_t can be represented as the stochastic integral, for s > 0,

    W_{t+s} − W_t = ∫_{]t,t+s]×ℝ} x (m(dr, dx) − ν(dr, dx)),   t ∈ ℝ,        (5.4.1)

where ν denotes the predictable compensator of m.
For Δ > 0 and j, k ∈ ℕ, the dependence structure of the increments of the time change is captured by

    cov(T_{kΔ} − T_{(k−1)Δ}, T_{(k+j)Δ} − T_{(k+j−1)Δ}) = cov(∫_0^Δ Y_{(k−1)Δ+s} ds, ∫_0^Δ Y_{(k+j−1)Δ+r} dr),        (5.4.2)

and

    E[W_t] = 0,   t ∈ ℝ₊.
Recalling that W is supposed to be the driving process of the CMA model for the velocity
V , furthermore, we assume that the intermittency process is normalised such that c2 =
var[W1 ] = 1. Under this assumptions, we note that
E[Wt3 ] = tc3 ,
R
t R+ ,
(5.4.3)
and

    ρ_{W²}(k; Δ) ∝ cov( ∫₀^Δ Y_s ds, ∫_{kΔ}^{(k+1)Δ} Y_s ds )   (5.4.4)

(see, e.g., Barndorff-Nielsen and Shephard 2006, Proposition 5). From (5.4.4) it is clear
that the autocorrelation function of the squared increments of the intermittency process is, up to a
scaling factor, given by the autocovariance of the activity process Y.
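This link can be checked empirically with a plain autocorrelation routine. The sketch below uses synthetic Gaussian data in place of the measured intermittency increments (so no activity structure is present and the autocorrelations are near zero); with real data, the same routine applied to the squared increments exposes the activity autocovariance up to scale.

```python
import random

def empirical_acf(x, max_lag):
    # empirical autocorrelation of the sample x for lags 1..max_lag
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    return [sum((x[i] - mean) * (x[i + k] - mean) for i in range(n - k)) / (n * c0)
            for k in range(1, max_lag + 1)]

# synthetic path with iid Gaussian increments; observed data would replace w
rng = random.Random(0)
w = [0.0]
for _ in range(5000):
    w.append(w[-1] + rng.gauss(0.0, 0.1))
sq = [(b - a) ** 2 for a, b in zip(w[:-1], w[1:])]
rho = empirical_acf(sq, max_lag=10)  # near zero here: no intermittency present
```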
Therefore we let the activity Y = {Y_t}_{t∈ℝ} again be a CMA process, independent of L,
namely

    Y_t := ∫_{−∞}^{t} g_Y(t − s; θ) dZ_s,   t ∈ ℝ,   (5.4.5)
with autocorrelation function

    ρ_Y(t; θ) = ∫₀^∞ g_Y(s; θ) g_Y(s + |t|; θ) ds / ∫₀^∞ g_Y(s; θ)² ds,   t ∈ ℝ.   (5.4.6)

For the kernel g_Y we employ a gamma-type model,

    g_Y(t; θ) = C t^{ν−1} exp(−λ t),   if t > 0,   (5.4.7)
The model autocorrelation of the squared increments is then

    ρ_{W²}(k; θ, ĉ₄) = ρ_Y(k; θ) / (ĉ₄ + 2),   for k = 1, …, n.   (5.4.8)
Second, we turn to the estimation of the Lévy density of the Lévy process L. The
class of tempered stable Lévy measures (5.1.10) is truly of semiparametric nature. By
Bernstein (1929), every bounded, completely monotone function is the Laplace transform
of some finite measure Q on ℝ₊; that is, of the form x ↦ ∫₀^∞ e^{−λx} Q(dλ). In the literature, parametric
estimation of tempered stable Lévy densities is often based on the assumption that, for
known orders p⁺, p⁻ ∈ ℕ, it belongs to the 2(p⁺ + p⁻) + 1-parametric subfamily

    f(x; β_{p⁺,p⁻}) = |x|^{−1−α} Σ_{k=1}^{p⁺} c_k⁺ exp(−λ_k⁺ x)     for x > 0,
    f(x; β_{p⁺,p⁻}) = |x|^{−1−α} Σ_{k=1}^{p⁻} c_k⁻ exp(−λ_k⁻ |x|)   for x < 0.   (5.4.9)
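Evaluating this family is straightforward. The sketch below uses hypothetical coefficients; `pos` and `neg` hold the (c_k, λ_k) pairs for the positive and negative half-lines of (5.4.9).

```python
import math

def tempered_stable_density(x, alpha, pos, neg):
    """Evaluate the 2(p+ + p-) + 1 parameter family of (5.4.9):
    f(x) = |x|**(-1 - alpha) * sum_k c_k * exp(-lam_k * |x|),
    with separate (c_k, lam_k) pairs on each half-line."""
    if x == 0.0:
        raise ValueError("density has a singularity at the origin")
    terms = pos if x > 0 else neg
    return abs(x) ** (-1.0 - alpha) * sum(
        c * math.exp(-lam * abs(x)) for c, lam in terms)

# hypothetical parameters with orders p+ = 1, p- = 2
f_plus = tempered_stable_density(0.1, 0.5, [(0.2, 3.0)], [(0.1, 2.0), (0.05, 8.0)])
f_minus = tempered_stable_density(-0.1, 0.5, [(0.2, 3.0)], [(0.1, 2.0), (0.05, 8.0)])
```

The exponential tempering factors make all moments finite while preserving the stable-like |x|^(−1−α) behaviour near the origin.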
The minimum contrast estimator

    f_m := argmin_{h∈S_m} { −(2/(nΔ_n)) Σ_{k=1}^{n} h(Δ_k^n W) + ∫ h(x)² η(dx) },   (5.4.10)

where η denotes the reference measure of the projection, coincides with the respective projection estimator (cf. Lemma 2.1 of Ueltzhöfer and Klüppelberg 2011).
By Figueroa-López (2009), under some hypotheses on Y, the estimator f_m is consistent
for the density f of the Lévy measure F if nΔ_n → ∞, Δ_n → 0 fast enough, and m → ∞.
For some related, pointwise central limit theorem, we refer to Figueroa-López (2011).
For a numerically stable computation of f_m, we construct the B-spline basis B_m :=
{h_{m,j} : j = 1, …, m+3} of the space S_m := S(3, K_m), and denote the Gramian matrix
w.r.t. η by A = (a_{ij})_{i,j=1,…,m+3}; that is,

    a_{ij} := ∫ h_{m,i}(x) h_{m,j}(x) η(dx).
Let h_m : ℝ → ℝ^{m+3} be the mapping with components h_{m,j}. Then the unique minimiser
in (5.4.10) is given by

    f_m(x) = Σ_{j=1}^{m+3} (ĉ_m)_j h_{m,j}(x),   where ĉ_m := A⁻¹ · (1/(nΔ_n)) Σ_{k=1}^{n} h_m(Δ_k^n W).
The dimension of the approximation space is penalised by

    pen(m) := κ₁ (m+3)/(nΔ_n) + κ₂ D_m (m+3)/(nΔ_n)² + κ₃ D_m³ (m+3)³/(nΔ_n)⁴ + κ₄ D′_m³ (m+3)³/(nΔ_n)⁴,   (5.4.11)

where

    D_m := sup_{h∈S_m} [ sup_{x∈D} h(x)² / ∫_D h(x)² η(dx) ]   and
    D′_m := sup_{h∈S_m} [ ( ∫_D h′(x) η(dx) )² / ∫_D h(x)² η(dx) ].

Then the estimator f_{m̂} with

    m̂ := argmin_{m∈ℕ} { −(ĉ_m)ᵀ A ĉ_m + pen(m) }   (5.4.12)
is called the minimum penalised contrast estimator of f (w.r.t. the penalty pen). We
plug this estimator into the least-squares fitting of the parametric family (5.4.9).
In practice, we calculate an estimator f_{m̂}⁺ on some domain D⁺ ⊂ ]0, ∞[ and, separately,
an estimator f_{m̂}⁻ on some domain D⁻ ⊂ ]−∞, 0[. For the Lebesgue density f of the Lévy
measure F, we are thereby given the nonparametric estimate

    f̂(x) = f_{m̂}⁺(x) η₀(x) 1_{D⁺}(x) + f_{m̂}⁻(x) η₀(x) 1_{D⁻}(x).   (5.4.13)
In general, this estimate is not the restriction of a tempered stable Lévy density to the
domain D⁺ ∪ D⁻. For orders p⁺, p⁻ up to a specified order, we calculate the least-squares
fit of the parametric family given by (5.4.9) to our estimate given by (5.4.13) under the
constraint that the variance var[W₁] of our fitted model equals one; and we penalise for
deviations of the fitted third and fourth cumulant from the empirical ones (recall (5.4.3)).
In particular, our estimator of β_{p⁺,p⁻} is given by

    β̂_{p⁺,p⁻} := argmin_{ {β_{p⁺,p⁻} : c₂(β_{p⁺,p⁻}) = 1} } { ∫_{D⁺∪D⁻} ( f̂(x) − f(x; β_{p⁺,p⁻}) )² dx
                  + γ | c₃(β_{p⁺,p⁻})/ĉ₃ − 1 | + γ | c₄(β_{p⁺,p⁻})/ĉ₄ − 1 | },   (5.4.14)

where ĉ₃ and ĉ₄ denote the empirical third and fourth cumulants of the increments, and
where the cumulants of the parametric model are

    c_n(β_{p⁺,p⁻}) := Γ(n − α) [ Σ_{k=1}^{p⁺} c_k⁺ (λ_k⁺)^{α−n} + (−1)ⁿ Σ_{k=1}^{p⁻} c_k⁻ (λ_k⁻)^{α−n} ].
Subsequently, the parameter θ of the activity kernel is fitted by least squares to the
empirical autocorrelations of the squared increments,

    θ̂ := argmin_θ Σ_{k=3}^{p} ( ρ̂_{W²}(k) − ρ_{W²}(k; θ, ĉ₄) )²,

where ρ_{W²}(k; θ, ĉ₄) is given by (5.4.8). We remark that no closed-form solution is known
for the autocorrelation ρ_Y(·; θ) of Y given by (5.4.6). In practice, hence, we utilise the
approximation

    ρ_Y(k/100; θ) ≈ D⁻¹[ |D[g_Y(j/100; θ)]_{j=0,…,500p}|² ](k),   k ≤ 250p,

where we sample g_Y with a 100-times higher frequency and on a 5-times longer interval
than used afterwards; D denotes the discrete Fourier transform, and |·|² is understood
componentwise.

[Table 5.2: fitted parameter values 3.6017, 0.2881, 0.0325, 1.152·10⁻³]

Our estimate is summarised in Figure 5.4 and Table 5.2. We plotted the
empirical autocorrelation function ρ̂_{X²}(k) (black points) for the lags k = 1, …, p, and
choose the end points ±0.015 to exclude an interval with a radius of about one
standard deviation centred at the origin. As penalty coefficients we choose κ₁ = 2, κ₂ = 1,
κ₃ = 0.5 and κ₄ = 0.1. As no closed-form solution is known for the constants D_m and
D′_m in (5.4.11), in practice we replaced their true values by numerical approximations. In
Table 5.3, we summarise the penalised contrast values (PCV) for the estimators (f_m)_{m=1,…,5}
on D⁺ and D⁻. We note that a local minimum is attained at m̂⁺ = 4 and m̂⁻ = 1,
respectively.
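The oversampling scheme can be sketched directly from the defining convolution. The code below, with a hypothetical gamma-type kernel, approximates ρ_Y on a grid 100 times finer than the target lags by discretising (5.4.6); the DFT route of the text computes the same sums via the convolution theorem.

```python
import math

def kernel_acf(g_samples, lags):
    # discrete approximation of rho_Y: rho(k) ≈ sum_j g_j g_{j+k} / sum_j g_j**2;
    # a DFT of the sampled kernel, squared componentwise and inverted,
    # yields the same quantity via the convolution theorem
    c0 = sum(g * g for g in g_samples)
    return [sum(g_samples[j] * g_samples[j + k]
                for j in range(len(g_samples) - k)) / c0
            for k in lags]

# oversample a hypothetical gamma-type kernel on a grid 100x finer
# than the lags at which rho_Y is needed afterwards
delta = 1.0 / 100
grid = [(j + 1) * delta for j in range(5000)]
g = [t ** 0.2 * math.exp(-0.5 * t) for t in grid]
rho = kernel_acf(g, lags=[100 * k for k in range(6)])  # lags 0, 1, ..., 5
```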
For the Lebesgue density f of the Lévy measure F of the Lévy process L, we are given
the nonparametric estimate

    f̂(x) := f_{m̂}⁺(x) x⁻⁴ 1_{D⁺}(x) + f_{m̂}⁻(x) x⁻⁴ 1_{D⁻}(x)

(recall (5.4.13)). We observe that the nonparametric estimate oscillates around zero for
|x| > 0.3; since no more than 591 observations (that is, 0.003% of the data) are larger
Table 5.3.: Penalised contrast values (PCV) for the estimators (f_m)_{m=1,…,5} on D⁺ and D⁻.

    m    PCV on D⁺    PCV on D⁻
    1    −1.283414    −1.016977
    2    −1.283749    −1.016962
    3    −1.283912    −1.016947
    4    −1.283924    −1.016933
    5    −1.283870    −1.016879
in absolute value than 0.3, for the remainder we consider our estimate reliable on the
set D = [−0.3, −0.015] ∪ [0.015, 0.3] only. For all orders p⁺ + p⁻ ≤ 4, we calculated the
penalised least-squares estimator β̂_{p⁺,p⁻} defined in (5.4.14); we replaced the integral over
the set D by the discrete residual sum of squares given by

    RSS(β_{p⁺,p⁻}) := Σ_{k=15}^{300} [ ( f̂(x_k) − f(x_k; β_{p⁺,p⁻}) )² + ( f̂(−x_k) − f(−x_k; β_{p⁺,p⁻}) )² ],

where x_k = k/1000; and chose the penalty constant γ = 5·10⁻⁵. To find an optimal choice
for (p⁺, p⁻), we also calculated the corrected Akaike information criterion

    AICc(p⁺, p⁻) := N log(RSS(β̂_{p⁺,p⁻})/N) + 2K_{p⁺,p⁻} + 2K_{p⁺,p⁻}(K_{p⁺,p⁻} + 1) / (N − K_{p⁺,p⁻} − 1),

where K_{p⁺,p⁻} := 2(p⁺ + p⁻) + 1 is the number of parameters and N := 572 is the number
of squared residuals evaluated for RSS. Our results are summarised in Table 5.4.
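The criterion is simple to compute; a minimal sketch with the chapter's parameter count follows (the RSS value passed in below is purely illustrative).

```python
import math

def aic_c(rss, n_params, n_obs):
    # corrected Akaike information criterion for a least-squares fit:
    # AICc = N log(RSS/N) + 2K + 2K(K+1)/(N - K - 1)
    k, n = n_params, n_obs
    return n * math.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# with the chapter's counts: K = 2(p+ + p-) + 1 parameters, N = 572 residuals
k_12 = 2 * (1 + 2) + 1                              # p+ = 1, p- = 2  ->  K = 7
score = aic_c(rss=1e-4, n_params=k_12, n_obs=572)   # rss value is illustrative
```

The last term corrects the ordinary AIC for the moderate ratio of sample size to parameter count.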
We observe that AICc is minimised for p⁺ = 1 and p⁻ = 2. We present the corresponding estimated density in Figure 5.5. The parametric fit f(x; β̂_{1,2}) (red solid line) is compared to
the nonparametric estimate f̂(x) (black points). On the left, both axes are in linear scale
whereas, on the right, the y-axis is in logarithmic scale. In linear scale, the parametric
and the nonparametric estimates are indistinguishable to the eye.
Table 5.4.: (Penalised) least-squares fitting of the parametric families f(x; β_{p⁺,p⁻}) in (5.4.9)
to the nonparametric estimate f̂(x) given by (5.4.13).
[Table 5.4: fitted coefficients c_k⁺, λ_k⁺, c_k⁻, λ_k⁻, the index α̂ and the AICc value for each candidate order (p⁺, p⁻).]
We approximate the stochastic integral defining the CMA process by a stochastic Riemann sum: for Δ > 0,
let {Y_t^Δ}_{t∈ℝ} be given by

    Y_t^Δ := Σ_{k=−∞}^{⌊t/Δ⌋} g(t − kΔ) ( Z_{kΔ} − Z_{(k−1)Δ} );   (5.5.1)
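The truncated Riemann sum (5.5.1) translates directly into code. The sketch below uses a hypothetical exponential kernel and Brownian driving increments, and truncates the infinite sum at the first available increment.

```python
import math
import random

def cma_riemann(kernel, z_increments, delta):
    # Riemann-sum approximation of Y on the grid:
    #   Y_{n*delta} ≈ sum_{k <= n} g((n - k + 1) * delta) * (Z_{k delta} - Z_{(k-1) delta}),
    # the kernel being evaluated at the right endpoint of each subinterval;
    # the infinite sum is truncated at the start of the simulated increments
    n = len(z_increments)
    g = [kernel((j + 1) * delta) for j in range(n)]
    return [sum(g[t - k] * z_increments[k] for k in range(t + 1)) for t in range(n)]

rng = random.Random(7)
delta = 0.01
dz = [rng.gauss(0.0, math.sqrt(delta)) for _ in range(2000)]  # Brownian increments
y = cma_riemann(lambda t: math.exp(-t), dz, delta)  # OU-type path for this kernel
```

With the exponential kernel, the scheme reproduces (up to discretisation and truncation error) an Ornstein-Uhlenbeck-type path, whose stationary variance is the squared L² norm of the kernel.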
Recall that, conditionally on the activity Y, the increments of the time-changed process satisfy

    log E[ e^{iu(X_{t+Δ} − X_t)} | Y ] = ψ(u) ∫_t^{t+Δ} Y_s ds,

with ψ the characteristic exponent of the Lévy process L.
fit is excellent. Since the least-squares fitting of the Lévy density has been performed on
the domain [−0.3, −0.015] ∪ [0.015, 0.3] only, we are very satisfied with the fit of the
stationary distribution of the intermittency increments. On the right, we compare the
empirical autocorrelation function of the squared intermittency data (black points) to
the empirical autocorrelation of the squared simulated increments (red solid line). Both
axes are in logarithmic scale. Again, their agreement is excellent.
We conclude by mentioning that the analysis performed in this paper holds in great generality for one-dimensional processes. It is possible to model the full three-dimensional,
timewise behaviour of a turbulent velocity field with a similar CMA model; under the
hypothesis of isotropy, a model for the three-dimensional kernel function can be obtained
from a one-dimensional one, since the two-point correlator depends only on the longitudinal autocorrelation function (see e.g. Pope (2000), p. 196).
Figure 5.2.: a) Estimated kernels g(t) for the datasets h1–h11, a1 and a2, against logarithmic
time scale. b) Estimated kernels, plotted in log-log scale; solid lines are the fitted
gamma models (5.2.8). c) L² norm ‖g‖²_{L²}, d) decorrelation time T_Re and e) transition
frequency λ_Re, as functions of Taylor's microscale-based Reynolds number; solid lines
represent power-law fittings of the considered quantities vs. R_λ.
Figure 5.5.: Estimation of the tempered stable Lévy density. Left and right: The parametric estimate f(x; β̂_{1,2}) (black line) and the nonparametric estimate f_{m̂}(x) x⁻⁴
(grey points) are compared on the domain 0.15 ≤ |x| ≤ 0.3.
Figure 5.6.: Simulation from the fitted model. Top: Simulated increments X_{kΔ} − X_{(k−1)Δ}
of the intermittency process on an interval of length 1000 s. Bottom-left:
Quantile-quantile plot (black points) of the observed increments Δ_k^n W of the
data (x-axis) against the simulated increments (y-axis). The red line indicates the identity diagonal. The fit is excellent on [−0.3, 0.3], which carries
more than 99.996% of the data. Bottom-right: Comparison of the empirical
autocorrelation ρ̂_{X²} of the squared intermittency increments (Δ_k^n W)² (black
points), for lags k = 1, …, 26 698 corresponding to a time-lag of 5.3396 s, and
of the empirical autocorrelation of the squared simulated intermittency increments (red solid line).
6. Bibliography
Abramowitz, M. and Stegun, I. A.: 1974, Handbook of Mathematical Functions With Formulas, Graphs, and Mathematical Tables, Dover Publications, New York.
Barndorff-Nielsen, O. E., Blæsild, P. and Schmiegel, J.: 2004, A parsimonious and universal description of turbulent velocity increments, Eur. Phys. J. B 41, 345–363.
Barndorff-Nielsen, O. E., Corcuera, J. M. and Podolskij, M.: 2011, Multipower variation for Brownian semistationary processes, Bernoulli 17(4), 1159–1194.
Barndorff-Nielsen, O. E. and Schmiegel, J.: 2004, Lévy-based tempo-spatial modelling; with applications to turbulence, Uspekhi Mat. Nauk 159, 63–90.
Barndorff-Nielsen, O. E. and Schmiegel, J.: 2008a, A stochastic differential equation framework for the timewise dynamics of turbulent velocities, Theory of Probability and its Applications 52(3), 372–388. URL: http://link.aip.org/link/?TPR/52/372/1
Barndorff-Nielsen, O. E. and Schmiegel, J.: 2008b, Time change, volatility and turbulence, Technical report, Thiele Center for Applied Mathematics in Natural Science, Aarhus.
Barndorff-Nielsen, O. E. and Schmiegel, J.: 2009, Brownian semistationary processes and volatility/intermittency, in H. Albrecher, W. Runggaldier and W. Schachermayer (eds), Advanced Financial Modelling, Walter de Gruyter, Berlin, pp. 1–26. Radon Ser. Comput. Appl. Math. 8.
Barndorff-Nielsen, O. E. and Shephard, N.: 2006, Impact of jumps on returns and realised variances: Econometric analysis of time-deformed Lévy processes, Journal of Econometrics 131, 217–252.
Barndorff-Nielsen, O. E. and Shephard, N.: 2001, Non-Gaussian Ornstein–Uhlenbeck based models and some of their uses in financial economics, J. Roy. Statist. Soc. Ser. B 63, 167–241.
Barndorff-Nielsen, O. E. and Shephard, N.: 2002, Econometric analysis of realised volatility and its use in estimating stochastic volatility models, J. Roy. Statist. Soc. Ser. B 64, 253–280.
Brockwell, P. J. and Lindner, A.: 2009, Existence and uniqueness of stationary Lévy-driven CARMA processes, Stoch. Proc. Appl. 119(8), 2660–2681.
Brockwell, P. J. and Marquardt, T.: 2005, Lévy-driven and fractionally integrated ARMA processes with continuous time parameter, Statistica Sinica 15(2), 477–494.
Brockwell, P. J. and Schlemm, E.: 2011, Parametric estimation of the driving Lévy process of multivariate CARMA processes from discrete observations. To appear in Journal of Multivariate Analysis.
Bruun, H.: 1995, Hot-Wire Anemometry: Principles and Signal Analysis, Oxford University Press, Oxford.
Carr, P. and Wu, L.: 2004, Time-changed Lévy processes and option pricing, Journal of Financial Economics 71(1), 113–141. URL: http://ideas.repec.org/a/eee/jfinec/v71y2004i1p113141.html
Champagne, F. H.: 1978, The fine structure of the turbulent velocity field, J. Fluid Mech. 86(1), 67–108.
Chanal, O., Chabaud, B., Castaing, B. and Hébral, B.: 2000, Intermittency in a turbulent low temperature gaseous helium jet, Eur. Phys. J. B 17, 309–317.
Chen, S., Doolen, G., Herring, J. R., Kraichnan, R. H., Orszag, S. A. and She, Z. S.: 1993, Far-dissipation range of turbulence, Phys. Rev. Lett. 70(20), 3051–3054.
Cleve, J.: 2004, Data-driven theoretical modelling of the turbulent energy cascade, PhD thesis, Technische Universität Dresden.
Cleve, J., Greiner, M., Pearson, B. R. and Sreenivasan, K. R.: 2004, Intermittency exponent of the turbulent energy cascade, Phys. Rev. E 69, 066316.
Cline, D. B. H.: 1991, Abelian and Tauberian theorems relating the local behavior of an integrable function to the tail behavior of its Fourier transform, Journal of Mathematical Analysis and Applications 154, 55–76.
del Álamo, J. C. and Jiménez, J.: 2009, Estimation of turbulent convection velocities and corrections to Taylor's approximation, J. Fluid Mech. 640, 5–26.
Delattre, S.: 1997, Estimation du coefficient de diffusion d'un processus de diffusion en présence d'erreurs d'arrondi, PhD thesis, Paris 6.
Doob, J. L.: 1944, The elementary Gaussian processes, Ann. Math. Stat. 15(3), 229–282.
Doob, J. L.: 1990, Stochastic Processes, 2nd edn, Wiley, New York.
Dhruva, B. R.: 2000, An experimental study of high Reynolds number turbulence in the atmosphere, PhD thesis, Yale University.
Fasen, V.: 2012, Statistical inference of spectral estimation for continuous-time MA processes with finite second moments. Submitted for publication.
Fasen, V. and Fuchs, F.: 2013, On the limit behavior of the periodogram of high-frequency sampled stable CARMA processes, Stoch. Proc. Appl. 123(1), 229–273.
Ferrazzano, V.: 2010, The wind-speed recording process and related issues, Technical report, Technische Universität München, Munich. Available at www-m4.ma.tum.de/en/research/preprints-publications/.
Ferrazzano, V. and Fuchs, F.: 2013, Noise recovery for Lévy-driven CARMA processes and high-frequency behaviour of approximating Riemann sums, Electron. J. Statist. 7, 533–561.
Ferrazzano, V. and Klüppelberg, C.: 2012, Turbulence modeling by time-series methods, Technical report, Technische Universität München. Available at www-m4.ma.tum.de/en/research/preprints-publications/.
Ferrazzano, V., Klüppelberg, C. and Ueltzhöfer, F. A. J.: 2012, Turbulence modeling by time-series methods. In preparation.
Figueroa-López, J. E.: 2009, Nonparametric estimation of time-changed Lévy models under high-frequency data, Adv. Appl. Probability 41, 1161–1188.
Figueroa-López, J. E.: 2011, Central limit theorems for the non-parametric estimation of time-changed Lévy models, Scand. J. Statist. 38, 748–765.
Frisch, U.: 1996, Turbulence: the Legacy of A. N. Kolmogorov, Cambridge University Press, Cambridge.
Geman, H., Madan, D. and Yor, M.: 2001, Time changes for Lévy processes, Math. Finance 11, 79–96.
Gibson, C. and Schwarz, W.: 1963, Universal equilibrium spectra of turbulent velocity and scalar fields, J. Fluid Mech. 16(3), 365–384.
Gikhman, I. I. and Skorokhod, A. V.: 2004, The Theory of Stochastic Processes I, reprint of the 1974 edn, Springer.
Gledzer, E.: 1997, On the Taylor hypothesis corrections for measured energy spectra of turbulence, Physica D: Nonlinear Phenomena 104(2), 163–183.
Graham, R. L., Knuth, D. E. and Patashnik, O.: 1994, Concrete Mathematics, 2nd edn, Addison-Wesley, Reading, MA.
Gulitski, G., Kholmyansky, M., Kinzelbach, W., Lüthi, B., Tsinober, A. and Yorish, S.: 2007, Velocity and temperature derivatives in high-Reynolds-number turbulent flows in the atmospheric surface layer. Part 1. Facilities, methods and some general results, J. Fluid Mech. 589, 57–81.
Guttorp, P. and Gneiting, T.: 2005, On the Whittle–Matérn correlation family, Technical Report 80, NRCSE Technical Report Series, Washington DC.
Hörmander, L.: 1990, The Analysis of Linear Partial Differential Operators, Vol. 1, Springer, Berlin.
IEC 61400-1: 1999, Wind Turbine Generator Systems, Part 1. Safety Requirements, International Standard, 2nd edn, International Electrotechnical Commission, Geneva.
Ishihara, T., Kaneda, Y., Yokokawa, M., Itakura, K. and Uno, A.: 2007, Small-scale statistics in high-resolution direct numerical simulation of turbulence: Reynolds number dependence of one-point velocity gradient statistics, J. Fluid Mech. 592, 335–366.
Jacod, J.: 1979, Calcul Stochastique et Problèmes de Martingales, Vol. 714 of Lecture Notes in Mathematics, Springer, Berlin.
Jacod, J., Klüppelberg, C. and Müller, G.: 2010, Testing for zero correlation or for a functional relationship between price and volatility jumps. Submitted for publication.
Jacod, J. and Protter, P.: 2012, Discretization of Processes, Stochastic Modelling and Applied Probability, Springer.
Jacod, J. and Shiryaev, A. N.: 2003, Limit Theorems for Stochastic Processes, 2nd edn, Springer, Berlin.
Jacod, J. and Todorov, V.: 2010, Do price and volatility jump together?, Ann. Appl. Probab. 20(4), 1425–1469.
Jones, R.: 1981, Fitting a continuous time autoregression to discrete data, in D. Findley (ed.), Applied Time Series Analysis II, Academic Press, New York, pp. 651–682.
Kallsen, J. and Muhle-Karbe, J.: 2011, Method of moments estimation in time-changed Lévy models, Statistics and Decisions 28(2), 169–194.
Klüppelberg, C., Lindner, A. and Maller, R.: 2004, A continuous-time GARCH process driven by a Lévy process: stationarity and second order behaviour, J. Appl. Prob. 41(3), 601–622.
Richardson, L. F.: 2007, Weather Prediction by Numerical Process, 2nd edn, Cambridge University Press, Cambridge.
Rosiński, J.: 2007, Tempering stable processes, Stochastic Processes and their Applications 117(6), 677–707.
Saddoughi, S. G.: 1997, Local isotropy in complex turbulent boundary layers at high Reynolds number, J. Fluid Mech. 348, 201–245.
Saddoughi, S. G. and Veeravalli, S. V.: 1994, Local isotropy in turbulent boundary layers at high Reynolds number, J. Fluid Mech. 268(1), 333–372.
Sato, K.: 1999, Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press, Cambridge, U.K.
Sayed, A. H. and Kailath, T.: 2001, A survey of spectral factorization methods, Numer. Linear Algebra Appl. 8, 467–496.
Schumacher, J.: 2007, Sub-Kolmogorov-scale fluctuations in fluid turbulence, EPL 80, 54001.
Sirovich, L., Smith, L. and Yakhot, V.: 1994, Energy spectrum of homogeneous and isotropic turbulence in far dissipation range, Phys. Rev. Lett. 72(3), 344–347.
Sreenivasan, K. R.: 1995, On the universality of the Kolmogorov constant, Phys. Fluids 7(11), 2778–2784.
Sreenivasan, K. R. and Kailasnath, P.: 1993, An update on the intermittency exponent in turbulence, Phys. Fluids A 5, 512–515.
Taylor, G. I.: 1938, The spectrum of turbulence, Proc. R. Soc. London A 164, 476–490.
Tennekes, H.: 1975, Eulerian and Lagrangian time microscales in isotropic turbulence, J. Fluid Mech. 67(3), 561–567.
Titchmarsh, E. C.: 1948, Introduction to the Theory of Fourier Integrals, 2nd edn, Oxford University Press, London, UK.
Tropea, C., Yarin, A. L. and Foss, J. F. (eds): 2008, Handbook of Experimental Fluid Mechanics, Springer, Berlin.
Tsinober, A.: 2009, An Informal Conceptual Introduction to Turbulence, Vol. 92 of Fluid Mechanics and its Applications, 2nd edn, Springer Verlag.
Tukey, J. W.: 1939, On the distribution of the fractional part of a statistical variable, Rec. Math. (Mat. Sbornik) N.S. 4, 561–562.
Ueltzhöfer, F. A. J. and Klüppelberg, C.: 2011, An oracle inequality for penalised projection estimation of Lévy densities from high-frequency observations, J. Nonparametr. Statist. 23(4), 967–989.
von Kármán, T.: 1948, Progress in statistical theory of turbulence, Dokl. Akad. Nauk. SSSR 34, 530–539.
Von Neumann, J.: 1961–1963, Recent theories of turbulence (report made to Office of Naval Research), in A. H. Taub (ed.), Collected Works, Vol. VI, Pergamon Press.
Welch, P.: 1967, The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms, IEEE Transactions on Audio and Electroacoustics 15(2), 70–73.
Whittle, P.: 1951, Hypothesis Testing in Time Series Analysis, Almqvist & Wiksells boktr., Uppsala.
Wyngaard, J. C. and Clifford, S. F.: 1977, Taylor's hypothesis and high-frequency turbulence spectra, J. Atmos. Sci. 34, 922–929.
Yaglom, A. M.: 2005, Correlation Theory of Stationary and Related Random Functions, Vol. I: Basic Results, Springer, New York.