
Asymptotic expansion for vector-valued sequences of random
variables with focus on Wiener chaos


arXiv:1712.03123v1 [math.PR] 8 Dec 2017

Ciprian A. Tudor^1, Nakahiro Yoshida^{2,3}

^1 Laboratoire Paul Painlevé, Université de Lille 1,
F-59655 Villeneuve d'Ascq, France.
tudor@math.univ-lille1.fr

^2 Graduate School of Mathematical Science, University of Tokyo,
3-8-1 Komaba, Meguro-ku, Tokyo 153, Japan

^3 Japan Science and Technology Agency CREST
nakahiro@ms.u-tokyo.ac.jp

December 11, 2017

Abstract
We develop the asymptotic expansion theory for vector-valued sequences $(F_N)_{N\geq 1}$ of
random variables in terms of the convergence of the Stein-Malliavin matrix associated
to the sequence $F_N$. Our approach combines the classical Fourier approach and the
recent theory based on Stein's method and Malliavin calculus. We find the second-order term
of the asymptotic expansion of the density of $F_N$ and we illustrate our results by several
examples.

2010 AMS Classification Numbers: 62M09, 60F05, 62H12

Key Words and Phrases: Asymptotic expansion, Stein-Malliavin calculus, Quadratic


variation, Fractional Brownian motion, Central limit theorem, Fourth Moment Theorem.

1 Introduction
The analysis of the convergence in distribution of sequences of random variables constitutes
a fundamental direction in probability and statistics. In particular, the asymptotic expansion represents a technique that gives a precise approximation for the densities of
probability distributions.

* The first author was partially supported by Project ECOS NCRS-CONICYT C15E05, MEC PAI80160046
and MATHAMSUD 16-MATH-03 SIDRE Project.

** The second author was in part supported by Japan Science and Technology Agency CREST JPMJCR14D7;
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research No. 17H01702 (Scientific
Research) and by a Cooperative Research Program of the Institute of Statistical Mathematics.
Our work concerns the convergence in law of random sequences to the Gaussian
distribution. Although our context is more general, we will focus on examples based on
elements in Wiener chaos. The characterization of the convergence in distribution of se-
quences of random variables belonging to a Wiener chaos of fixed order to the Gaussian law
has constituted an important research direction in stochastic analysis in recent years. This
research line was initiated by the seminal paper [11] where the authors proved the famous
Fourth Moment Theorem. This result, which basically says that the convergence in law of a
sequence of multiple stochastic integrals with unit variance is equivalent to the convergence
of the sequence of the fourth moments to the fourth moment of the Gaussian distribution,
has then been extended, refined, completed or applied to various situations. The reader
may consult the recent monograph [8] for the basics of this theory.
In this work, we are also concerned with some refinements of the Fourth Moment The-
orem for sequences of random variables in a Wiener chaos of fixed order. More concretely,
we aim to find the second order expansion for the density. This is a natural extension of
the Fourth Moment Theorem and of its ramifications. In order to explain this link, let us
briefly describe the context. Let $H$ be a real and separable Hilbert space, $(W(h),\, h \in H)$
an isonormal Gaussian process on a probability space $(\Omega, \mathcal{F}, P)$ and let $I_q$ denote the $q$th
multiple stochastic integral with respect to $W$. Let $Z \sim N(0,1)$. Denote by $d_{TV}(F,G)$ the
total variation distance between the laws of the random variables $F$ and $G$ and let $D$ be
the Malliavin derivative with respect to $W$. The Fourth Moment Theorem from [11] can be
stated as follows.
Theorem 1 Fix $q \geq 1$. Consider a sequence $\{F_k = I_q(f_k),\, k \geq 1\}$ of random variables in
the $q$-th Wiener chaos. Assume that

    \lim_{k\to\infty} E F_k^2 = \lim_{k\to\infty} q!\, \| f_k \|_{H^{\otimes q}}^2 = 1.    (1)

Then the following statements are equivalent:

1. The sequence of random variables $\{F_k = I_q(f_k),\, k \geq 1\}$ converges in distribution as
$k \to \infty$ to the standard normal law $N(0,1)$.

2. $\lim_{k\to\infty} E[F_k^4] = 3$.

3. $\| DF_k \|_H^2$ converges to $q$ in $L^2(\Omega)$ as $k \to \infty$.

4. $d_{TV}(F_k, Z) \to_{k\to\infty} 0$.
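The equivalence between points 1. and 2. is easy to observe numerically. The following Monte Carlo sketch is our own illustration (not taken from the paper): it uses the simplest second-chaos element, the normalized quadratic variation of i.i.d. standard Gaussian variables, for which $E[F_N^4] = 3 + 12/N$.

```python
import numpy as np

# Monte Carlo check of the Fourth Moment Theorem for the second-chaos
# sequence F_N = (1/sqrt(2N)) * sum_{k<N} (X_k^2 - 1), X_k i.i.d. N(0,1).
# Here E[F_N^2] = 1 and E[F_N^4] = 3 + 12/N, so the fourth moment
# approaches 3 (the Gaussian value) exactly when F_N approaches N(0,1).
rng = np.random.default_rng(0)

def fourth_moment(N, n_samples=100_000, chunk=10_000):
    total = 0.0
    for _ in range(n_samples // chunk):
        X = rng.standard_normal((chunk, N))
        F = (X**2 - 1).sum(axis=1) / np.sqrt(2 * N)
        total += (F**4).sum()
    return total / n_samples

m_small = fourth_moment(2)    # exact value 3 + 12/2  = 9.00
m_large = fourth_moment(100)  # exact value 3 + 12/100 = 3.12
```

Replacing the i.i.d. sequence by a correlated stationary one (as in Section 5.1) changes the rate but not the limit.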
A more recent point of interest in the analysis of the convergence to the normal dis-
tribution of sequences of multiple integrals is the analysis of the convergence of the sequence
of densities. For every $N \geq 1$, denote by $p_{F_N}$ the density of the random variable $F_N$
(which exists due to a result in [14]). Since the total variation distance between
$F$ and $G$ (or between the distributions of the random variables $F$ and $G$) can be written
as $d_{TV}(F,G) = \frac{1}{2} \int_{\mathbb{R}} | p_F(x) - p_G(x) |\, dx$, we can notice that point 4. in the Fourth Moment
Theorem implies that

    \| p_{F_N} - p \|_{L^1(\mathbb{R})} \to_{N\to\infty} 0

where $p(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$ denotes, throughout our work, the density of the standard normal
distribution. The result has been improved in [7], where the authors showed that if

    \limsup_{N\to\infty} E \| DF_N \|_H^{-4} < \infty    (2)

then the conditions 1.-4. in the Fourth Moment Theorem are equivalent to

    \sup_{x\in\mathbb{R}} | p_{F_N}(x) - p(x) | \leq c\, \sqrt{E F_N^4 - 3}.    (3)

We also refer to [3], [4] for other references related to density convergence on Wiener
chaos.
Our purpose is to find the asymptotic expansion in the Fourth Moment Theorem
(and, more generally, for sequences of random variables converging to the normal law).
By finding the asymptotic expansion we mean finding a sequence $(p_N)_{N\geq 1}$ of the form
$p_N(x) = p(x) + r_N(x)$, with $r_N \to_N 0$, which approximates the density $p_{F_N}$ in the
following sense:

    \sup_{x\in\mathbb{R}} | x^\beta |\, | p_{F_N}(x) - p_N(x) | = o(r_N)    (4)

for any $\beta \in \mathbb{Z}_+$. This improves the main result in [7] and it will also allow us to generalize
the Edgeworth-type expansion (see [9], [1]) for sequences on Wiener chaos to a larger class
of measurable functions. The main idea to get the asymptotic expansion is to analyze the
asymptotic behavior of the characteristic function of $F_N$ and then to obtain the approximate
density $p_N$ by Fourier inversion of the dominant part of the characteristic function.
As mentioned before, we develop our asymptotic expansion theory for a general class
of random variables, which are not necessarily multiple stochastic integrals, although our
main examples are related to Wiener chaos. Our main assumption is that the vector-valued
random sequence $\left( F_N,\, z_N^{-1}(\Gamma(F_N) - C) \right)$ converges to a $(d + d\times d)$-dimensional Gaussian vec-
tor, where $(z_N)_{N\geq 1}$ is a deterministic sequence converging to zero as $N\to\infty$, $\Gamma(F_N)$ is the
so-called Stein-Malliavin matrix of $F_N$, whose components are $\Gamma_{i,j} = \langle DF_N^{(i)}, D(-L)^{-1} F_N^{(j)} \rangle_H$
for $i,j = 1,\dots,d$ ($L$ is the Ornstein-Uhlenbeck operator), and $C_{i,j} = \lim_{N\to\infty} E F_N^{(i)} F_N^{(j)}$. In the
one-dimensional case the assumption becomes:

    \left( F_N,\, z_N^{-1} \left( \langle DF_N, D(-L)^{-1} F_N \rangle_H - 1 \right) \right)    (5)

converges in distribution, as $N\to\infty$, to a two-dimensional centered Gaussian vector
$(Z_1, Z_2)$ with $EZ_1^2 = EZ_2^2 = 1$ and $EZ_1 Z_2 = \rho \in (-1,1)$.
The condition (5) is related to what is usually assumed in the asymptotic expansion
theory for martingales, which has been developed in the last decades and is based on the
so-called Fourier approach. Let us refer, among many others, to [6], [17], [13], [15], [16] for
a few important works on the martingale approach to asymptotic expansion. Recall that if
$(M_N)_{N\geq 1}$ denotes a sequence of random variables converging to $Z \sim N(0,1)$ such that $M_N$
is the terminal value of a continuous martingale and if $\langle M \rangle_N$ is the martingale's bracket,
then one assumes in e.g. [17], [15] that

    \langle M \rangle_N \to_{N\to\infty} 1 \quad \text{in probability}

and the following joint convergence holds true:

    \left( M_N,\, \frac{\langle M \rangle_N - 1}{z_N} \right) \to^{(d)}_{N\to\infty} (Z_1, Z_2)    (6)

where $(Z_1, Z_2)$ is a centered Gaussian vector as above. The notation $\to^{(d)}$ stands for
convergence in distribution.
In our assumption (5), the role of the martingale bracket is played by
$\langle DF_N, D(-L)^{-1} F_N \rangle_H$, which becomes $\frac{1}{q} \| DF_N \|_H^2$ when $F_N$ is an element of the $q$th Wiener
chaos. The choice is natural: it is suggested by the identity $E M_N^2 = E \langle M \rangle_N$, which plays the
role of the Ito isometry, and by the fact that $\frac{\| DF_N \|_H^2}{q} - 1$ converges to zero in $L^2(\Omega)$, due to
the Fourth Moment Theorem.
We organized our paper as follows. In Section 2, we fix the general context of our
work, the main assumptions and some notation. In Section 3, we analyze the asymptotic
behavior of the characteristic function of a random sequence that satisfies our assumptions,
while in Section 4 we obtain the approximate density and the asymptotic behavior via
Fourier inversion. In Section 5, we illustrate our theory by several examples related to
Wiener chaos.

2 Assumptions and notation


We will here present the basic assumptions and the notation utilised throughout our work.
Let $H$ be a real and separable Hilbert space endowed with the inner product $\langle \cdot, \cdot \rangle_H$
and consider $(W(h),\, h \in H)$ an isonormal process on the probability space $(\Omega, \mathcal{F}, P)$. In
the sequel, $D$, $L$ and $\delta$ represent the Malliavin derivative, the Ornstein-Uhlenbeck operator
and the divergence integral with respect to $W$. See the Appendix for their definition and
basic properties.

2.1 The main assumptions


Consider a sequence of centered random variables $(F_N)_{N\geq 1}$ in $\mathbb{R}^d$ of the form

    F_N = \left( F_N^{(1)}, \dots, F_N^{(d)} \right).

Denote by $\Gamma(F_N) = (\Gamma_{i,j})_{i,j=1,\dots,d}$ the Stein-Malliavin matrix with components

    \Gamma_{i,j} = \langle DF_N^{(i)}, D(-L)^{-1} F_N^{(j)} \rangle_H

for every $i,j = 1,\dots,d$.


We consider the following assumptions:

[C1] There exists a symmetric (strictly) positive definite $d\times d$ matrix $C$ and a deterministic
sequence $(z_N)_{N\geq 1}$ converging to zero as $N\to\infty$ such that $\left( F_N,\, z_N^{-1}(\Gamma(F_N) - C) \right)$
converges in distribution to a $(d + d\times d)$-dimensional Gaussian random vector $(Z_1, Z_2)$ such
that $Z_1 = (Z_1^{(1)}, \dots, Z_1^{(d)})$, $Z_2 = (Z_2^{(k,l)})_{k,l=1,\dots,d}$ with

    E Z_1^{(i)} Z_1^{(j)} = C_{i,j} \quad \text{for every } i,j = 1,\dots,d.

[C2] For every $i = 1,\dots,d$, the sequence $(F_N^{(i)})_{N\geq 1}$ is bounded in $\mathbb{D}^{l+1,\infty} = \cap_{p>1} \mathbb{D}^{l+1,p}$ for
some $l \geq d+3$.

[C3] With $z_N$, $C$ from [C1], for every $i,j = 1,\dots,d$ the sequence $\left( z_N^{-1}(\Gamma_{i,j} - C_{i,j}) \right)_{N\geq 1}$ is
bounded in $\mathbb{D}^{l,\infty}$ for some $l \geq d+3$.

Notice that the matrix $\Gamma(F_N)$ is not necessarily symmetric (except when all the
components belong to the same Wiener chaos) since in general $\Gamma_{i,j} \neq \Gamma_{j,i}$. Nevertheless, we
assumed in [C1] that it converges to the symmetric matrix $C$, in the sense that $z_N^{-1}(\Gamma(F_N) - C)$
converges in law to a normal random variable.
In the case of random variables in Wiener chaos, from the main result in [12] (see
Theorem 4 later in the paper), we can prove that, under the convergence in law of $F_N^{(i)}$,
$i = 1,\dots,d$, to the Gaussian law, the quantities $\Gamma_{i,j} - C_{i,j}$ and $\Gamma_{j,i} - C_{i,j}$ converge to zero in
$L^2(\Omega)$. Thus condition [C1] intuitively means that these two terms have the same rate of
convergence to zero.

2.2 The truncation

Let us introduce the truncation sequence $(\psi_N)_{N\geq 1}$, which will be widely used throughout
our work. Its use allows us to avoid the condition (2) on the existence of negative moments of
the Malliavin derivative of $F_N$.
We consider the function $\psi_N \in \mathbb{D}^{l,p}$ for $p > 1$ and $l \geq d+3$ given by

    \psi_N = \psi\left( K \left( \frac{|\Gamma(F_N) - C|_{d\times d}}{z_N^\gamma} \right)^2 \right)    (7)

with some $K > 0$, $0 < \gamma < 1$, where $\psi \in C^\infty(\mathbb{R}; [0,1])$ is such that $\psi(x) = 1$ if $|x| \leq \frac12$ and
$\psi(x) = 0$ if $|x| \geq 1$. The square in the argument of $\psi$ is taken in order to have $\psi_N$
differentiable in the Malliavin sense.
From this definition, clearly we have
From this definition, clearly we have

1. $0 \leq \psi_N \leq 1$ for every $N \geq 1$.

2. $\psi_N \to_{N\to\infty} 1$ in probability.

A crucial fact in our computations is that on the set $\{\psi_N > 0\}$ we have

    |\Gamma(F_N) - C|_{d\times d} \leq \frac{1}{\sqrt{K}}\, z_N^{\gamma}.

This also implies (see e.g. [17]) that there exist two positive constants $c_1, c_2$ such that for
$N$ large enough

    c_1 \leq \det \Gamma(F_N) \leq c_2.    (8)
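For concreteness, a cutoff $\psi$ with exactly these properties can be realized through the standard bump-function construction. The sketch below is our own choice of such a function (the paper only requires $\psi \in C^\infty(\mathbb{R};[0,1])$ with the plateau and support properties above).

```python
import numpy as np

# A concrete choice of the cutoff psi used in (7): psi(x) = 1 for |x| <= 1/2,
# psi(x) = 0 for |x| >= 1, smooth in between.  This bump-function construction
# is one standard way to realize it; it is our own choice, not specified in
# the paper.

def _smoothstep(t):
    # C^infinity transition: 0 for t <= 0, 1 for t >= 1
    t = np.clip(t, 0.0, 1.0)
    with np.errstate(divide="ignore", over="ignore", under="ignore", invalid="ignore"):
        a = np.where(t > 0.0, np.exp(-1.0 / np.maximum(t, 1e-300)), 0.0)
        b = np.where(t < 1.0, np.exp(-1.0 / np.maximum(1.0 - t, 1e-300)), 0.0)
    return a / (a + b)

def psi(x):
    # maps |x| <= 1/2 to the plateau value 1 and |x| >= 1 to 0
    return 1.0 - _smoothstep(2.0 * np.abs(x) - 1.0)
```

The squared argument in (7) then makes $\psi_N$ a smooth function of the entries of $\Gamma(F_N) - C$.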

2.3 Notation

For every $x = (x_1, \dots, x_d) \in \mathbb{R}^d$ and $\alpha = (\alpha_1, \dots, \alpha_d) \in \mathbb{Z}_+^d$ we use the notation

    x^\alpha = x_1^{\alpha_1} \cdots x_d^{\alpha_d} \quad \text{and} \quad \frac{\partial^\alpha}{\partial x^\alpha} = \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}} \cdots \frac{\partial^{\alpha_d}}{\partial x_d^{\alpha_d}}.

If $x, y \in \mathbb{R}^d$, we denote by $\langle x, y \rangle_d := \langle x, y \rangle$ their scalar product in $\mathbb{R}^d$. By $|x|_d := |x|$
we will denote the Euclidean norm of $x$.
The following will be fixed in the sequel:

- The sequence $(F_N)_{N\geq 1}$ defined in Section 2.1.

- The truncation sequence $(\psi_N)_{N\geq 1}$ is as in Section 2.2.

- $o_\alpha(z_N)$ denotes a deterministic sequence that depends on $\alpha$ such that $\frac{o_\alpha(z_N)}{z_N} \to_N 0$.
By $o(z_N)$ we refer as usual to a sequence such that $\frac{o(z_N)}{z_N} \to_N 0$.

- By $\varphi(\xi) = e^{-\frac{\xi^2}{2}}$ we denote the characteristic function of $Z \sim N(0,1)$ and by $p(x) =
\frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$ its density.

- By $c, C, C_1, \dots$ we denote generic strictly positive constants that may vary from line
to line, and by $\langle \cdot, \cdot \rangle$ we denote the Euclidean scalar product in $\mathbb{R}^d$. On the other hand,
we will keep the subscript for the inner product in $H$, denoted by $\langle \cdot, \cdot \rangle_H$.

- We will use bold letters to indicate vectors in $\mathbb{R}^d$ when we need to differentiate them
from their one-dimensional components.

3 Asymptotic behavior of the characteristic function

The first step in finding the asymptotic expansion for a sequence $(F_N)_{N\geq 1}$ which satisfies
[C1]-[C3] is to analyze the asymptotic behavior, as $N\to\infty$, of the characteristic function
of $F_N$. In order to avoid troubles related to the integrability over the whole real line, we
will work under truncation, in the sense that we will always keep the multiplicative factor
$\psi_N$ given by (7).
Let $\xi = (\xi_1, \dots, \xi_d) \in \mathbb{R}^d$ and $\theta \in [0,1]$, and let us consider the truncated interpola-
tion

    \varphi_N(\theta, \xi) = E\left( \psi_N\, e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}} \right).    (9)

Notice that $\varphi_N(1, \xi) = E(\psi_N\, e^{i\langle \xi, F_N \rangle})$, the truncated characteristic function of $F_N$, while
$\varphi_N(0, \xi) = (E\psi_N)\, e^{-\frac{\xi^T C \xi}{2}}$, the truncated characteristic function of the limit in law of $F_N$.
Therefore, it is useful to analyze the behavior of the derivative of $\varphi_N(\theta, \xi)$ with respect to
the variable $\theta$.
We have the following result.

Proposition 1 Assume [C1]-[C3] and suppose that

    E\left( Z_2^{(k,l)} \mid Z_1 \right) = \sum_{a=1}^d a_{k,l}^{(a)}\, Z_1^{(a)}    (10)

for every $k,l = 1,\dots,d$, where $(Z_1, Z_2)$ is the Gaussian random vector from [C1]. Then we
have the convergence

    z_N^{-1}\, \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi) \to_{N\to\infty} -i\theta^2 \sum_{i,j,k,a=1}^d \xi_i \xi_j \xi_k\, a_{j,k}^{(a)}\, C_{ia}\, e^{-\frac{\xi^T C \xi}{2}}.    (11)

Proof: By differentiating (9) with respect to $\theta$ and using the formula $F_N^{(i)} = \delta D(-L)^{-1} F_N^{(i)}$
for every $i = 1,\dots,d$, we can write

    \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi)
    = E\left( \psi_N\, e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}} \left( i\langle \xi, F_N \rangle + \theta\, \xi^T C \xi \right) \right)
    = i \sum_{j=1}^d \xi_j\, E\left( \psi_N\, e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}}\, \delta D(-L)^{-1} F_N^{(j)} \right)
    + \theta \sum_{j,k=1}^d \xi_j \xi_k\, C_{j,k}\, E\left( \psi_N\, e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}} \right)
and by the duality relationship (65)

    \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi)
    = i \sum_{j=1}^d \xi_j\, E\left( \langle D\left( \psi_N\, e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}} \right), D(-L)^{-1} F_N^{(j)} \rangle_H \right)
    + \theta \sum_{j,k=1}^d \xi_j \xi_k\, C_{j,k}\, E\left( \psi_N\, e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}} \right).

We obtain, since $D e^{i\theta\langle \xi, F_N \rangle} = i\theta\, e^{i\theta\langle \xi, F_N \rangle} \sum_{k=1}^d \xi_k\, DF_N^{(k)}$,

    \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi)
    = -\theta \sum_{j,k=1}^d \xi_j \xi_k\, E\left( \psi_N\, e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}} \left( \langle DF_N^{(k)}, D(-L)^{-1} F_N^{(j)} \rangle_H - C_{j,k} \right) \right)
    + i \sum_{j=1}^d \xi_j\, E\left( e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}}\, \langle D\psi_N, D(-L)^{-1} F_N^{(j)} \rangle_H \right)
    =: A_N(\theta, \xi) + B_N(\theta, \xi).    (12)

Using the definition of the truncation function $\psi_N$, we can prove that

    z_N^{-1} B_N(\theta, \xi) = z_N^{-1}\, i \sum_{j=1}^d \xi_j\, E\left( e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}}\, \langle D\psi_N, D(-L)^{-1} F_N^{(j)} \rangle_H \right) \to_{N\to\infty} 0.    (13)

Indeed, by the chain rule (66),

    D\psi_N = \psi'\left( K \left( \frac{|\Gamma(F_N) - C|_{d\times d}}{z_N^\gamma} \right)^2 \right) K\, D\left( \frac{|\Gamma(F_N) - C|_{d\times d}}{z_N^\gamma} \right)^2

and, by using [C2]-[C3] and (7), we get for every $p > 1$

    z_N^{-1}\, |B_N(\theta, \xi)| \leq c\, z_N^{-1}\, E\left| \langle D\psi_N, D(-L)^{-1} F_N^{(j)} \rangle_H \right|
    \leq c\, z_N^{-1}\, P\left( |\Gamma(F_N) - C|_{d\times d} \geq \tfrac12\, z_N^\gamma \right)^{\frac{1}{p'}}
    \leq c\, E\left( |\Gamma(F_N) - C|_{d\times d}^p \right)^{\frac{1}{p'}}\, z_N^{-1}\, z_N^{-\frac{p\gamma}{p'}}
    \leq c\, z_N^{\frac{p(1-\gamma)}{p'} - 1},

where we used again [C2]-[C3]. Since $\gamma < 1$ and $p$ can be taken arbitrarily large, we obtain the convergence
(13).
On the other hand, by assumption [C1], we have the convergence
    z_N^{-1} A_N(\theta, \xi) \to_{N\to\infty} -\theta \sum_{j,k=1}^d \xi_j \xi_k\, E\left( e^{i\theta\langle \xi, Z_1 \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}}\, Z_2^{(k,j)} \right).    (14)

Using our assumption (10), we can express the above limit in a more explicit way. We have

    E\left( e^{i\theta\langle \xi, Z_1 \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}}\, Z_2^{(j,k)} \right)
    = E\left( e^{i\theta\langle \xi, Z_1 \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}}\, E(Z_2^{(j,k)} \mid Z_1) \right)
    = E\left( e^{i\theta\langle \xi, Z_1 \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}} \sum_{a=1}^d a_{j,k}^{(a)}\, Z_1^{(a)} \right)    (15)

and, moreover, for every $a = 1,\dots,d$, from the differential equation verified by the charac-
teristic function of the normal distribution,

    E\left( e^{i\theta\langle \xi, Z_1 \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}}\, Z_1^{(a)} \right)
    = e^{-(1-\theta^2)\frac{\xi^T C \xi}{2}}\, E\left( e^{i\theta\langle \xi, Z_1 \rangle}\, Z_1^{(a)} \right)
    = e^{-(1-\theta^2)\frac{\xi^T C \xi}{2}}\, i\theta \sum_{i=1}^d \xi_i\, C_{ia}\, e^{-\theta^2 \frac{\xi^T C \xi}{2}}
    = i\theta\, e^{-\frac{\xi^T C \xi}{2}} \sum_{i=1}^d \xi_i\, C_{ia}.    (16)

Thus, by replacing (15) and (16) in (14), we obtain

    z_N^{-1} A_N(\theta, \xi) \to_{N\to\infty} -i\theta^2 \sum_{i,j,k,a=1}^d \xi_i \xi_j \xi_k\, a_{j,k}^{(a)}\, C_{ia}\, e^{-\frac{\xi^T C \xi}{2}}.    (17)

Relation (17), together with (13) and (12), implies the conclusion.

We will also need to analyze the asymptotic behavior of the partial derivatives with
respect to the variable $\xi$ of (9). Let us first introduce some notation borrowed from [16].
For every $u, z \in \mathbb{R}^d$, $\alpha \in \mathbb{Z}_+^d$ and for every symmetric $d\times d$ matrix $C$, we will denote
by

    P_\alpha(u, z, C) = e^{-i\langle u, z \rangle_d - \frac12 u^T C u}\, (-i)^{|\alpha|}\, \frac{\partial^\alpha}{\partial u^\alpha}\, e^{i\langle u, z \rangle_d + \frac12 u^T C u}    (18)

where $|\alpha| := \alpha_1 + \dots + \alpha_d$ if $\alpha = (\alpha_1, \dots, \alpha_d) \in \mathbb{Z}_+^d$. Then clearly,

    \frac{\partial^\alpha}{\partial u^\alpha}\, e^{i\langle u, z \rangle_d + \frac12 u^T C u} = i^{|\alpha|}\, e^{i\langle u, z \rangle_d + \frac12 u^T C u}\, P_\alpha(u, z, C).    (19)

In the next lemma we recall the explicit expression of the polynomial $P_\alpha$, together
with some estimates on this polynomial. We refer to Lemma 1 in [16] for the proof.

Lemma 1 If $P_\alpha$ is given by (18), then for every $u, z \in \mathbb{R}^d$ and every $d\times d$ matrix $C$, we
have

    P_\alpha(u, z, C) = \sum_{0 \leq \beta \leq \alpha} C_\alpha^\beta\, (-i)^{|\alpha - \beta|}\, z^\beta\, S_{\alpha - \beta}(u, C)

with

    S_\alpha(u, C) = e^{-\frac12 u^T C u}\, \frac{\partial^\alpha}{\partial u^\alpha}\, e^{\frac12 u^T C u}.

Moreover, we have the estimate

    | S_\alpha(u, C) | \leq \sum_{j=0}^{|\alpha|} c_j\, |u|^j\, |C|^{(j + |\alpha|)/2}

for every $u \in \mathbb{R}^d$ and every $d\times d$ matrix $C$.

Let us now state and prove the asymptotic behavior of the partial derivatives of the
truncated interpolation $\varphi_N$.

Proposition 2 Assume [C1]-[C3] and (10). Let $\varphi_N$ be given by (9). Then for every
$\beta \in \mathbb{Z}_+^d$, we have

    z_N^{-1}\, \frac{\partial^\beta}{\partial \xi^\beta}\, \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi) \to_{N\to\infty} \frac{\partial^\beta}{\partial \xi^\beta} \left( -i\theta^2 \sum_{i,j,k,a=1}^d \xi_i \xi_j \xi_k\, a_{j,k}^{(a)}\, C_{ia}\, e^{-\frac{\xi^T C \xi}{2}} \right).

Proof: If $\beta = (\beta_1, \dots, \beta_d) \in \mathbb{Z}_+^d$, then by (12) we have

    \frac{\partial^\beta}{\partial \xi^\beta}\, \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi) = \frac{\partial^\beta}{\partial \xi^\beta} A_N(\theta, \xi) + \frac{\partial^\beta}{\partial \xi^\beta} B_N(\theta, \xi)

with $A_N, B_N$ defined in (12). Now, since by (19)

    \frac{\partial^\beta}{\partial \xi^\beta}\, e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}} = i^{|\beta|}\, P_\beta\left( \xi, \theta F_N, -(1-\theta^2)C \right)\, e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}},

we can write

    \frac{\partial^\beta}{\partial \xi^\beta}\, \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi)
    = -\theta \sum_{j,k=1}^d \frac{\partial^\beta}{\partial \xi^\beta} \left[ \xi_j \xi_k\, E\left( \psi_N\, e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}} \left( \langle DF_N^{(k)}, D(-L)^{-1} F_N^{(j)} \rangle_H - C_{j,k} \right) \right) \right]
    + i \sum_{j=1}^d \frac{\partial^\beta}{\partial \xi^\beta} \left[ \xi_j\, E\left( e^{i\theta\langle \xi, F_N \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}}\, \langle D\psi_N, D(-L)^{-1} F_N^{(j)} \rangle_H \right) \right],    (20)

where, inside each expectation, the derivatives of the exponential factor are expressed
through $i^{|\beta|} P_\beta$ as above.
As in the proof of (13), the second summand above multiplied by $z_N^{-1}$ converges to
zero as $N\to\infty$. Concerning the first summand in (20), the hypothesis [C1] implies that it
converges, as $N\to\infty$, to

    -\theta \sum_{j,k=1}^d \frac{\partial^\beta}{\partial \xi^\beta} \left[ \xi_j \xi_k\, E\left( e^{i\theta\langle \xi, Z_1 \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}}\, Z_2^{(k,j)} \right) \right]
    = \frac{\partial^\beta}{\partial \xi^\beta} \left( -i\theta^2 \sum_{i,j,k,a=1}^d \xi_i \xi_j \xi_k\, a_{j,k}^{(a)}\, C_{ia}\, e^{-\frac{\xi^T C \xi}{2}} \right),

where the last equality follows from (15) and (16). Then the conclusion is obtained.

Remark 1 If $d = 1$, relation (11) becomes

    z_N^{-1}\, \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi) \to_{N\to\infty} -i\theta^2\, \rho\, \xi^3\, e^{-\frac{\xi^2}{2}}

where $\rho = E(Z_1 Z_2)$.

The following integration by parts formula plays an important role in the sequel.
Its proof is a slight adaptation of the proof of Proposition 1 in [16]. For its proof the
nondegeneracy (8) is needed.

Lemma 2 Assume [C2]-[C3]. Consider a function $f \in C_p^l(\mathbb{R}^d)$ with $l \geq 0$ and let $G$ be a
random variable in $\mathbb{D}^{l,\infty}$. Then it holds that

    E\left( \frac{\partial^l}{\partial x^l} f(F_N)\, G\, \psi_N \right) = E\left( f(F_N)\, H_l(G\psi_N) \right)    (21)

where the random variable $H_l(G\psi_N)$ belongs to $L^p(\Omega)$ for every $p \geq 1$.

We will use the integration by parts formula in the above Lemma 2 to show that
$\frac{\partial \varphi_N}{\partial \theta}(\theta, \xi)$ is dominated by an integrable function, uniformly with respect to $(\theta, \xi) \in
[0,1] \times \mathbb{R}^d$.

Lemma 3 Assume [C2]-[C3]. For every $\beta \in \mathbb{Z}_+^d$, there exists a positive constant $C$ such
that

    \left| z_N^{-1}\, \frac{\partial^\beta}{\partial \xi^\beta}\, \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi) \right| \leq \frac{C}{(1 + |\xi|)^{l+2}}    (22)

for all $\xi \in \mathbb{R}^d$, $\theta \in [0,1]$ and $N \geq 1$.

Proof: First we consider the case $\beta = 0 \in \mathbb{Z}_+^d$. Consider the function $f_\xi(x) =
e^{i\theta\langle \xi, x \rangle - (1-\theta^2)\frac{\xi^T C \xi}{2}}$ for $x \in \mathbb{R}^d$, with $\frac{\partial^l}{\partial x_m^l} f_\xi(x) = f_\xi(x)\, (i\theta\xi_m)^l$ for every $m = 1,\dots,d$.
For $\theta \in (0,1]$, starting from (12) and applying the integration by parts formula (21) $l$ times
to the function $f_\xi$ (in the direction of the largest coordinate of $\xi$), we obtain

    \left| \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi) \right|
    \leq C\, |\xi|^2\, (\theta|\xi|)^{-l} \sum_{j,k=1}^d E\left| H_l\left( \left( \langle DF_N^{(k)}, D(-L)^{-1} F_N^{(j)} \rangle_H - C_{j,k} \right) \psi_N \right) \right|
    + C\, |\xi|\, (\theta|\xi|)^{-l} \sum_{j=1}^d E\left| H_l\left( \langle D\psi_N, D(-L)^{-1} F_N^{(j)} \rangle_H\, \psi_N \right) \right|.

We use this estimate for $\theta \geq 1/2$. For $\theta \in [0,1/2]$, the inequality is trivial thanks to
the exponential factor $\exp\left( -(1-\theta^2)\frac{\xi^T C \xi}{2} \right)$. Thus we obtained the desired estimate when
$\beta = 0$. For $\beta \in \mathbb{Z}_+^d$, $\beta \neq (0,\dots,0)$, we can follow a similar argument with the estimate

    \sup_{\theta, \xi}\, (1-\theta^2)^m\, (1 + |\xi|^2)^m\, \exp\left( -(1-\theta^2)\frac{\xi^T C \xi}{2} \right) < \infty

for every $m \in \mathbb{Z}_+$. Thus we obtained the result.
Let us denote by $\widehat{\varphi}_N$ the sum of the characteristic function of the law $N(0,C)$ and
of the integral over $[0,1]$ with respect to $\theta$ of the limit in (11), i.e.

    \widehat{\varphi}_N(\xi) = e^{-\frac{\xi^T C \xi}{2}} - \frac{i}{3}\, z_N \sum_{i,j,k,a=1}^d \xi_i \xi_j \xi_k\, a_{j,k}^{(a)}\, C_{ia}\, e^{-\frac{\xi^T C \xi}{2}}.    (23)

The next result shows that the difference between the truncated characteristic func-
tion of $F_N$ and the characteristic function of the weak limit of the sequence $F_N$ can be
approximated by $\widehat{\varphi}_N$.

Proposition 3 Suppose that [C1]-[C3] are fulfilled with $l = d+3$. For every $N \geq 1$, let
$\widehat{\varphi}_N$ be given by (23). Then

    z_N^{-1} \int_{\mathbb{R}^d} \left| \frac{\partial^\beta}{\partial \xi^\beta} \left( E\left( \psi_N\, e^{i\langle \xi, F_N \rangle} \right) - E[\psi_N]\, \widehat{\varphi}_N(\xi) \right) \right| d\xi \to_{N\to\infty} 0

for every $\beta \in \mathbb{Z}_+^d$.

Proof: If $\beta \in \mathbb{Z}_+^d$, using

    \varphi_N(1, \xi) = E\left( \psi_N\, e^{i\langle \xi, F_N \rangle} \right) \quad \text{and} \quad \varphi_N(0, \xi) = E[\psi_N]\, e^{-\frac{\xi^T C \xi}{2}},

we can write

    z_N^{-1} \int_{\mathbb{R}^d} \left| \frac{\partial^\beta}{\partial \xi^\beta} \left( E\left( \psi_N\, e^{i\langle \xi, F_N \rangle} \right) - E[\psi_N]\, \widehat{\varphi}_N(\xi) \right) \right| d\xi
    = z_N^{-1} \int_{\mathbb{R}^d} \left| \int_0^1 \frac{\partial^\beta}{\partial \xi^\beta} \left( \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi) + E[\psi_N]\, i\theta^2\, z_N \sum_{i,j,k,a=1}^d \xi_i \xi_j \xi_k\, a_{j,k}^{(a)}\, C_{ia}\, e^{-\frac{\xi^T C \xi}{2}} \right) d\theta \right| d\xi
    \leq \int_{\mathbb{R}^d} \int_0^1 \left| z_N^{-1}\, \frac{\partial^\beta}{\partial \xi^\beta} \left( \frac{\partial \varphi_N}{\partial \theta}(\theta, \xi) + i\theta^2\, z_N \sum_{i,j,k,a=1}^d \xi_i \xi_j \xi_k\, a_{j,k}^{(a)}\, C_{ia}\, e^{-\frac{\xi^T C \xi}{2}} \right) \right| d\theta\, d\xi
    + C\, \| 1 - \psi_N \|_2.

The last quantity converges to zero as $N\to\infty$ by the choice of the truncation function,
and also by Proposition 2, Lemma 3 and the dominated convergence theorem, since the
integrand is bounded uniformly in $(\theta, N)$.

Let $\varphi_{F_N}$ be the characteristic function of the random vector $F_N$, i.e. $\varphi_{F_N}(\xi) =
E e^{i\langle \xi, F_N \rangle}$ for $\xi \in \mathbb{R}^d$. If $\widehat{\varphi}_N$ is given by (23), we have

    \varphi_{F_N}(\xi)
    = \widehat{\varphi}_N(\xi)\, E\psi_N + \left( E\left( \psi_N\, e^{i\langle \xi, F_N \rangle} \right) - E(\psi_N)\, \widehat{\varphi}_N(\xi) \right) + E\left( (1-\psi_N)\, e^{i\langle \xi, F_N \rangle} \right)
    = \widehat{\varphi}_N(\xi) + \widehat{\varphi}_N(\xi)\, (E\psi_N - 1) + \left( E\left( \psi_N\, e^{i\langle \xi, F_N \rangle} \right) - E(\psi_N)\, \widehat{\varphi}_N(\xi) \right) + E\left( (1-\psi_N)\, e^{i\langle \xi, F_N \rangle} \right).

Proposition 3 shows that the difference $E(\psi_N\, e^{i\langle \xi, F_N \rangle}) - E(\psi_N)\widehat{\varphi}_N(\xi)$ is small, while from
the definition of the truncation function $\psi_N$ we see that the terms $E((1-\psi_N)\, e^{i\langle \xi, F_N \rangle})$ and
$\widehat{\varphi}_N(\xi)(E\psi_N - 1)$ are also small. Consequently, $\varphi_{F_N}(\xi)$ can be approximated by $\widehat{\varphi}_N(\xi)$.
Actually, we have

    \varphi_{F_N}(\xi) = \widehat{\varphi}_N(\xi) + s_N(\xi)    (24)

where $s_N(\xi)$ is a small term: $z_N^{-1}\, s_N(\xi)$ and its derivatives are dominated by the right-hand
side of (22), and it satisfies

    | s_N(\xi) | \leq C\, \| 1 - \psi_N \|_p + o_\beta(z_N)    (25)

for every $p \geq 1$, where $o_\beta(z_N)$ denotes a deterministic sequence that depends on $\beta$ such that
$\frac{o_\beta(z_N)}{z_N} \to_N 0$.
Therefore $\widehat{\varphi}_N$ is the dominant part in the asymptotic expansion of $\varphi_{F_N}$. By inverting
$\widehat{\varphi}_N$, we get the approximate density.

4 The approximate density

In this section, our purpose is to find the first and second-order terms in the asymptotic
expansion of the density of $F_N$. We will assume throughout this section that [C1]-[C3] are
fulfilled with $l = d+3$. Let us denote by $\varphi^\psi_{F_N}$ the truncated characteristic function of $F_N$,
i.e.

    \varphi^\psi_{F_N}(\xi) = E\left( \psi_N\, e^{i\langle \xi, F_N \rangle} \right) = \varphi_N(1, \xi)    (26)

with $\varphi_N(1, \xi)$ from (9).

We define the approximate density of $F_N$ via the Fourier inversion of the dominant
part of $\varphi^\psi_{F_N}$ defined by (26).

Definition 1 For every $N \geq 1$, we define the approximate density $p_N$ by

    p_N(x) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} e^{-i\langle \xi, x \rangle}\, \widehat{\varphi}_N(\xi)\, d\xi, \qquad x \in \mathbb{R}^d,    (27)

where $\widehat{\varphi}_N$ is given by (23).

Let us first notice that in the case $d = 1$ we can obtain an explicit expression for
$p_N$, given in the next result. Note that for $d = 1$, assuming $C_{1,1} = 1$ and $EZ_1 Z_2 = \rho$, we have
from (23)

    \widehat{\varphi}_N(\xi) = e^{-\frac{\xi^2}{2}} - \frac{i}{3}\, \rho\, z_N\, \xi^3\, e^{-\frac{\xi^2}{2}}, \qquad \text{for every } \xi \in \mathbb{R}.    (28)
Recall that $p(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$ denotes the density of the standard normal law. By $H_k$ we
denote the Hermite polynomial of degree $k \geq 0$, given by $H_0(x) = 1$ and, for $k \geq 1$,

    H_k(x) = (-1)^k\, e^{\frac{x^2}{2}}\, \frac{d^k}{dx^k}\left( e^{-\frac{x^2}{2}} \right).

Proposition 4 If $p_N$ is given by (27), then for every $x \in \mathbb{R}$,

    p_N(x) = p(x) + \frac{\rho}{3}\, z_N\, H_3(x)\, p(x).

Proof: This follows from (28) and the formula

    \frac{1}{2\pi} \int_{\mathbb{R}} e^{-i\xi x}\, (i\xi)^k\, \varphi(\xi)\, d\xi = H_k(x)\, p(x)    (29)

for every integer $k \geq 0$ (recall the notation $\varphi(\xi) := e^{-\frac{\xi^2}{2}}$), applied with $k = 3$ after writing
$i\xi^3 = -(i\xi)^3$.
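A small numerical sketch of our own (the values $\rho = 0.5$ and $z_N = 0.1$ below are illustrative placeholders, not quantities computed in the paper): since $\int_{\mathbb{R}} H_3(x)p(x)\,dx = 0$, the correction term does not change the total mass of $p_N$, while it shifts the third moment to $2\rho z_N$.

```python
import numpy as np

# The one-dimensional approximate density of Proposition 4,
# p_N(x) = p(x) + (rho/3) z_N H_3(x) p(x), with H_3(x) = x^3 - 3x.
# rho = 0.5 and z_N = 0.1 are illustrative placeholders.

def p(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def H3(x):
    return x**3 - 3 * x

def p_approx(x, rho, z_N):
    return p(x) + (rho / 3) * z_N * H3(x) * p(x)

x = np.linspace(-12.0, 12.0, 240_001)
dx = x[1] - x[0]
vals = p_approx(x, rho=0.5, z_N=0.1)
mass = vals.sum() * dx            # total mass stays 1:  int H_3 p = 0
third = (x**3 * vals).sum() * dx  # third moment becomes 2 rho z_N = 0.1
```

Note that for large $|x|$ and fixed $N$ the correction can make $p_N$ slightly negative, which is the usual feature of Edgeworth-type expansions.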
We prefer to work with the truncated density of $F_N$, which can be controlled more easily.

Definition 2 The local (or truncated) density of the random variable $F_N$ is defined as the
inverse Fourier transform of the truncated characteristic function:

    p^\psi_{F_N}(x) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} e^{-i\langle \xi, x \rangle}\, \varphi^\psi_{F_N}(\xi)\, d\xi

for $x \in \mathbb{R}^d$.

The local density is well-defined since the truncated characteristic function
$\varphi^\psi_{F_N}$ is obviously integrable over $\mathbb{R}^d$.
Our purpose is to show that the local density $p^\psi_{F_N}$ is well-approximated by the
approximate density $p_N$ given by (27), in the sense that for every $\beta \in \mathbb{Z}_+^d$

    \sup_{x\in\mathbb{R}^d} \left| x^\beta \left( p^\psi_{F_N}(x) - p_N(x) \right) \right|

is of order $o(z_N)$. This will be shown in the next result, based on the asymptotic behavior
of the characteristic function of $F_N$.
Theorem 2 For every $p \geq 1$ and for every $\beta = (\beta_1, \dots, \beta_d) \in \mathbb{Z}_+^d$,

    \sup_{x\in\mathbb{R}^d} \left| x^\beta \left( p^\psi_{F_N}(x) - p_N(x) \right) \right| \leq c\, \| 1 - \psi_N \|_p + o(z_N) = o(z_N).    (30)

Proof: We have, by integrating by parts,

    x^\beta \left( p^\psi_{F_N}(x) - p_N(x) \right) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} x^\beta\, e^{-i\langle \xi, x \rangle} \left( \varphi^\psi_{F_N}(\xi) - \widehat{\varphi}_N(\xi) \right) d\xi
    = \frac{1}{(2\pi)^d\, (-i)^{|\beta|}} \int_{\mathbb{R}^d} e^{-i\langle \xi, x \rangle}\, \frac{\partial^\beta}{\partial \xi^\beta} \left( \varphi^\psi_{F_N}(\xi) - \widehat{\varphi}_N(\xi) \right) d\xi

and then, by using (24) and Lemma 3, we obtain

    \sup_{x\in\mathbb{R}^d} \left| x^\beta \left( p^\psi_{F_N}(x) - p_N(x) \right) \right| \leq \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \left| \frac{\partial^\beta}{\partial \xi^\beta} \left( \varphi^\psi_{F_N}(\xi) - \widehat{\varphi}_N(\xi) \right) \right| d\xi
    \leq c \int_{\mathbb{R}^d} (1 + |\xi|)^{-(l+2)}\, d\xi\; \left( o(z_N) + c\, \| 1 - \psi_N \|_p \right)
    \leq o(z_N) + c\, \| 1 - \psi_N \|_p = o(z_N),

where we use $l + 2 > d$, since $l \geq d + 3$ in [C2]. The fact that $c\, \| 1 - \psi_N \|_p = o(z_N)$ follows
from the choice of our truncation function $\psi_N$.

Let us make some comments on the above result.
Let us make some comments on the above result.
Remark 2 Theorem 2 extends the inequality (3) when $F_N$ belongs to the Wiener
chaos of order $q$ for every $N \geq 1$. It is known that $z_N^2 = Var\left( \frac{1}{q} \| DF_N \|_H^2 \right)$ behaves as
$E F_N^4 - 3$; more precisely (see Section 5.2.2 in [8]),

    z_N^2 \leq \frac{q-1}{3q}\, \left( E F_N^4 - 3 \right) \leq (q-1)\, z_N^2.

It also extends some results in [3] (in particular Theorem 6.2 and Corollary 6.6).

The appearance of the truncation function in the right-hand side of (30) replaces the
condition (2) on the existence of negative moments of the Malliavin derivative.

As a consequence of Theorem 2, we obtain the asymptotic expansion for the
expectation of measurable functionals of $F_N$.

Theorem 3 Let $p_N$ be given by (27) and fix $M \geq 0$, $\gamma > 0$. Then

    \sup_{f \in \mathcal{E}(M,\gamma)} \left| E f(F_N) - \int_{\mathbb{R}^d} f(x)\, p_N(x)\, dx \right| = o(z_N)    (31)

where $\mathcal{E}(M,\gamma)$ is the set of measurable functions $f : \mathbb{R}^d \to \mathbb{R}$ such that $|f(x)| \leq M(1 + |x|^\gamma)$
for every $x \in \mathbb{R}^d$.

Proof: For any measurable function $f$ that satisfies the assumptions in the statement, we
can write

    E f(F_N) - \int_{\mathbb{R}^d} f(x)\, p_N(x)\, dx
    = E f(F_N) - E f(F_N)\psi_N + E f(F_N)\psi_N - \int_{\mathbb{R}^d} f(x)\, p_N(x)\, dx
    = E f(F_N)(1 - \psi_N) + \int_{\mathbb{R}^d} f(x) \left[ p^\psi_{F_N}(x) - p_N(x) \right] dx

with $p^\psi_{F_N}$ defined via (26). Then for every $p > 1$ and for $\beta_0$ large enough (it may depend
on $M, \gamma$), with $p'$ such that $\frac{1}{p} + \frac{1}{p'} = 1$, we have, by using Holder's inequality,

    \left| E f(F_N) - \int_{\mathbb{R}^d} f(x)\, p_N(x)\, dx \right|
    \leq \left( E |f(F_N)|^{p'} \right)^{\frac{1}{p'}} \| 1 - \psi_N \|_p + \int_{\mathbb{R}^d} \frac{|f(x)|}{1 + |x|^{2\beta_0}}\, dx\; \sup_{x\in\mathbb{R}^d} \left( 1 + |x|^{2\beta_0} \right) \left| p^\psi_{F_N}(x) - p_N(x) \right|
    \leq \left( E |f(F_N)|^{p'} \right)^{\frac{1}{p'}} \| 1 - \psi_N \|_p + C \left( c\, \| 1 - \psi_N \|_p + o(z_N) \right)
    \leq C\, \| 1 - \psi_N \|_p + o(z_N) = o(z_N),

where we used (30). So the desired conclusion is obtained.

We insert some remarks related to the above result.

Remark 3 Theorem 3 is related to Theorem 9.3.1 in [8] (which reproduces the main
result of [9]). Notice that we do not assume the differentiability of $f$, and our result
includes the uniform control in the case of the Kolmogorov distance.

In particular, the asymptotic expansion (31) holds for any bounded function $f$ and for
polynomial functions, but the class of examples is obviously larger.

In the case of sequences in a Wiener chaos of fixed order, we have the following
result, which follows from the above results and from the hypercontractivity property (64).

Corollary 1 Let $(F_N)_{N\geq 1} = (I_q(f_N))_{N\geq 1}$ (with $f_N \in H^{\odot q}$ for every $N \geq 1$) be a
random sequence in the $q$-th Wiener chaos ($q \geq 2$). Assume $E F_N^2 = 1$ for every $N \geq 1$ and
suppose that condition [C1] holds with $z_N^2 = Var\left( \frac{\| DF_N \|_H^2}{q} \right)$, i.e.

    \left( F_N,\, z_N^{-1} \left( \frac{\| DF_N \|_H^2}{q} - 1 \right) \right) \to^{(d)}_{N\to\infty} (Z_1, Z_2)

where $(Z_1, Z_2)$ is a centered Gaussian vector with $EZ_1^2 = EZ_2^2 = 1$ and $EZ_1 Z_2 = \rho \in (-1,1)$.
Then we have the asymptotic expansions (30) and (31).

Proof: The result follows from Theorems 2 and 3 by noticing that (64) ensures that
conditions [C2] and [C3] are satisfied.

5 Examples

In the last part of our work, we present three examples in order to illustrate our
theory. The first example concerns a one-dimensional sequence of random variables in
the second Wiener chaos and is inspired by [8], Chapter 9. The second example is
multidimensional and involves quadratic functions of two correlated fractional
Brownian motions, while the last example involves a sequence in a finite sum of Wiener
chaoses.

5.1 A one-dimensional example in a Wiener chaos of fixed order: Quadratic
variations of stationary Gaussian sequences

We consider a centered stationary Gaussian sequence $(X_k)_{k\in\mathbb{Z}}$ with $EX_k^2 = 1$ for every
$k \in \mathbb{Z}$. We set

    \rho(l) := E X_0 X_l \quad \text{for every } l \in \mathbb{Z},

so $\rho(0) = 1$ and $|\rho(l)| \leq 1$ for every $l \in \mathbb{Z}$.
Without loss of generality, and in order to fit our context, we assume that $X_k =
W(h_k)$ with $\| h_k \|_H = 1$ for every $k \in \mathbb{Z}$, where $(W(h),\, h \in H)$ is an isonormal process as
defined in Section 6. Define the quadratic variation sequence

    V_N = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} \left( X_k^2 - 1 \right), \qquad N \geq 1,    (32)

and let $v_N := E V_N^2$. Assume $\rho \in \ell^2(\mathbb{Z})$ (the Banach space of 2-summable sequences, endowed
with the norm $\| \rho \|_{\ell^2(\mathbb{Z})}^2 = \sum_{k\in\mathbb{Z}} \rho(k)^2$). In this case, we know from [8] that the deterministic
sequence $v_N$ converges to the constant $2 \sum_{k\in\mathbb{Z}} \rho(k)^2 > 0$, and consequently it plays no role
in the sequel.
Define the renormalized sequence

    F_N = \frac{V_N}{\sqrt{v_N}} = I_2\left( \frac{1}{\sqrt{N v_N}} \sum_{k=0}^{N-1} h_k^{\otimes 2} \right) \quad \text{for every } N \geq 1.    (33)

Obviously $E F_N = 0$ and $E F_N^2 = 1$ for every $N \geq 1$. It is well known (see [2], see also
Chapter 7 in [8]) that, as $N\to\infty$,

    F_N \to^{(d)} Z \sim N(0,1).

Let us see that the sequence $(F_N)_{N\geq 1}$ satisfies the conditions [C1]-[C3] with

    z_N^2 = Var\left( \frac{1}{2} \| DF_N \|_H^2 \right).    (34)

Actually, this follows from the computations contained in Section 5 of [8], but we recall the
main ideas because they are needed later. Notice that $\langle DF_N, D(-L)^{-1} F_N \rangle_H = \frac{1}{2} \| DF_N \|_H^2$
since $F_N$ belongs to the second Wiener chaos.
To prove [C1], we need to assume

    \rho \in \ell^{4/3}(\mathbb{Z}).    (35)

In this case $z_N^2$ (which coincides, up to a constant, with the fourth cumulant of $F_N$, denoted
$k_4(F_N)$) behaves as $c\, N^{-1}$ for $N$ large; see Theorem 7.3.3 in [8]. The proof of Theorem 9.5.1
in [8] implies that, as $N\to\infty$,

    \left( F_N,\, z_N^{-1} \left( \langle DF_N, D(-L)^{-1} F_N \rangle_H - 1 \right) \right) = \left( F_N,\, z_N^{-1} \left( \frac{\| DF_N \|_H^2}{2} - 1 \right) \right) \to^{(d)} (Z_1, Z_2)

where $(Z_1, Z_2)$ is a correlated centered Gaussian vector with $EZ_1^2 = EZ_2^2 = 1$. Therefore
condition [C1] is fulfilled.
Conditions [C2] and [C3] are satisfied since $F_N$ belongs to the second Wiener chaos
and one can use the hypercontractivity property of multiple integrals (64). Consequently,
Theorems 2 and 3 can be applied to the sequence (32).
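The convergence of $F_N$ to $N(0,1)$ in this example can be observed by direct simulation. The sketch below is our own illustration: it samples the stationary Gaussian sequence exactly from its Toeplitz covariance matrix, here the fractional Gaussian noise covariance of (36) with $H = 0.55$, builds $F_N = V_N/\sqrt{v_N}$ with $v_N$ computed exactly, and checks that $F_N$ has variance close to 1 and fourth moment close to 3.

```python
import numpy as np

# Simulation of the normalized quadratic variation F_N = V_N / sqrt(v_N)
# for fractional Gaussian noise with H = 0.55 (covariance rho_H of (36)).
# v_N = E[V_N^2] is computed exactly from the covariance matrix; F_N should
# already be close to N(0,1) for moderate N.
rng = np.random.default_rng(1)

def rho_H(k, H):
    k = np.abs(k).astype(float)
    return 0.5 * ((k + 1)**(2*H) + np.abs(k - 1)**(2*H) - 2 * k**(2*H))

H, N, n_samples = 0.55, 50, 100_000
cov = rho_H(np.subtract.outer(np.arange(N), np.arange(N)), H)
chol = np.linalg.cholesky(cov)            # exact sampling of (X_0,...,X_{N-1})
X = chol @ rng.standard_normal((N, n_samples))

V = (X**2 - 1).sum(axis=0) / np.sqrt(N)   # quadratic variation (32)
v_N = 2.0 * (cov**2).sum() / N            # exact v_N = E[V_N^2]
F = V / np.sqrt(v_N)

var_F = (F**2).mean()   # equals 1 exactly in law; close to 1 in simulation
m4_F = (F**4).mean()    # close to 3 + k_4(F_N), itself close to 3 for large N
```

The residual gap between `m4_F` and 3 is precisely the fourth cumulant whose rate $c N^{-1}$ drives $z_N$ above.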
Remark 4 Let $(B_t^H)_{t\in\mathbb{R}}$ be a (two-sided) fractional Brownian motion with Hurst parameter
$H \in (0,1)$. We recall that $B^H$ is a centered Gaussian process with covariance function

    E B_t^H B_s^H = \frac{1}{2} \left( |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right)

for every $s, t \in \mathbb{R}$. Let $X_k = B_{k+1}^H - B_k^H$ for every $k \in \mathbb{Z}$. In this case $\rho_H(k) := E X_0 X_k$ is
given by

    \rho_H(k) = \frac{1}{2} \left( |k+1|^{2H} + |k-1|^{2H} - 2|k|^{2H} \right), \qquad k \in \mathbb{Z}.    (36)

Since $\rho_H(k)$ behaves as $H(2H-1)|k|^{2H-2}$ for $|k|$ large enough, condition (35) is fulfilled
for $0 < H < \frac{5}{8}$. Therefore Theorems 2 and 3 can be applied for $H \in \left( 0, \frac{5}{8} \right)$.
5.2 A multidimensional example: quadratic variations of correlated frac-
tional Brownian motions

For every $H \in (0,1)$, denote by

    f_t^H(u) = d(H) \left( (t-u)_+^{H-\frac12} - (-u)_+^{H-\frac12} \right), \qquad t \in \mathbb{R},

with $d(H)$ a positive constant that ensures that $\int_{\mathbb{R}} f_t^H(u)^2\, du = |t|^{2H}$ for every $t \in \mathbb{R}$. For
every $k \in \mathbb{Z}$, let

    L_k^H = f_{k+1}^H - f_k^H.    (37)

Consider $(W_t)_{t\in\mathbb{R}}$ a Wiener process on the whole real line and define the two fractional
Brownian motions

    B_t^{H_i} = \int_{\mathbb{R}} f_t^{H_i}(s)\, dW_s, \qquad i = 1, 2.

The fBms $B^{H_1}$ and $B^{H_2}$ are correlated. We actually have (see [5], Section 4), for every
$k, l \in \mathbb{Z}$,

    E\left( B_{k+1}^{H_1} - B_k^{H_1} \right)\left( B_{l+1}^{H_2} - B_l^{H_2} \right) = \langle f_{k+1}^{H_1} - f_k^{H_1},\, f_{l+1}^{H_2} - f_l^{H_2} \rangle = \langle L_k^{H_1}, L_l^{H_2} \rangle
    = D(H_1, H_2)\, \rho_{\frac{H_1+H_2}{2}}(k-l)    (38)

where $D(H_1, H_2)$ is an explicit constant such that $D(H_i, H_i) = 1$ for $i = 1, 2$.


Assume $H_1, H_2 \in \left( 0, \frac34 \right)$. Define the quadratic variations, for $i = 1, 2$ and for $N \geq 1$,

    V_N^{(i)} = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} \left( \left( B_{k+1}^{H_i} - B_k^{H_i} \right)^2 - 1 \right)

and

    F_N^{(i)} = \frac{V_N^{(i)}}{\sqrt{v_N^{(i)}}}    (39)

with $v_N^{(i)} = E(V_N^{(i)})^2$, $i = 1, 2$. Recall that $v_N^{(i)} \to_N c_{v_i} := 2 \sum_{k\in\mathbb{Z}} \rho_{H_i}(k)^2$ for $i = 1, 2$.
Let us show that the sequence $F_N = (F_N^{(1)}, F_N^{(2)})$ satisfies the conditions
[C1]-[C3] with $z_N = N^{-\frac12}$. We will focus on [C1] (the assumptions [C2] and [C3] can be
easily checked since we deal with elements of the second Wiener chaos and we can apply
the hypercontractivity property (64)).
From the previous paragraph we know that, for $i = 1, 2$, the random vectors

    \left( F_N^{(i)},\, z_N^{-1} \left( \langle DF_N^{(i)}, D(-L)^{-1} F_N^{(i)} \rangle_H - 1 \right) \right)    (40)
converge in distribution, when $N\to\infty$, to some centered Gaussian vectors $(Z_1^{(i)}, Z_2^{(i)})$ with
$E(Z_1^{(i)})^2 = C_{i,i}$, $i = 1, 2$, and with nontrivial correlation.
The main tool to prove that the two-dimensional sequence $(F_N^{(1)}, F_N^{(2)})$ satisfies [C1] is
the multidimensional version of the Fourth Moment Theorem. Let us recall this result, which
says that, for sequences of random vectors on Wiener chaos, the componentwise convergence
to the Gaussian law implies the joint convergence (see [12] or Theorem 6.2.3 in [8]).

Theorem 4 Let $d \geq 2$ and let $q_1, \dots, q_d \geq 1$ be integers. Consider the random vector

    F_N = \left( F_N^{(1)}, \dots, F_N^{(d)} \right) = \left( I_{q_1}(f_N^{(1)}), \dots, I_{q_d}(f_N^{(d)}) \right)

with $f_N^{(i)} \in H^{\odot q_i}$, $i = 1,\dots,d$. Assume that

    E F_N^{(i)} F_N^{(j)} \to_{N\to\infty} C(i,j) \quad \text{for every } i,j = 1,\dots,d.    (41)

Then the following two conditions are equivalent:

- $F_N$ converges in law, as $N\to\infty$, to a $d$-dimensional centered Gaussian vector with
covariance matrix $C$.

- For every $i = 1,\dots,d$, $F_N^{(i)}$ converges in law, as $N\to\infty$, to $N(0, C(i,i))$.

We will need the two auxiliary lemmas below. The first concerns the asymptotic covariance of $F^{(1)}_N$ and $F^{(2)}_N$.

Lemma 4 Let $F^{(1)}_N, F^{(2)}_N$ be defined by (39) and assume $H_1,H_2\in\left(0,\frac34\right)$. Then
\[
E\big[F^{(1)}_NF^{(2)}_N\big] \to_{N\to\infty} C_{1,2}
\]
with
\[
C_{1,2} = \frac{2D^2(H_1,H_2)}{\sqrt{c_{v_1}c_{v_2}}}\sum_{l\in\mathbb Z}\rho_{\frac{H_1+H_2}{2}}(l)^2. \tag{42}
\]
Consequently,
\[
\big(F^{(1)}_N, F^{(2)}_N\big) \to^{(d)}_{N\to\infty} \big(Z^{(1)}_1, Z^{(2)}_1\big) \tag{43}
\]
where $(Z^{(1)}_1, Z^{(2)}_1)$ is a Gaussian vector with $EZ^{(1)}_1 = EZ^{(2)}_1 = 0$ and
\[
E\big[(Z^{(1)}_1)^2\big] = C_{1,1} = 1, \qquad E\big[(Z^{(2)}_1)^2\big] = C_{2,2} = 1, \qquad E\big[Z^{(1)}_1Z^{(2)}_1\big] = C_{1,2}
\]
with $C_{1,2}$ given by (42). Recall that we denote by $\to^{(d)}$ the convergence in distribution.

Proof: By the chaos expression (33) of $F^{(i)}_N$ and by using the isometry (58),
\[
E\big[F^{(1)}_NF^{(2)}_N\big]
= \frac{1}{N^2\sqrt{v^{(1)}_Nv^{(2)}_N}}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}E\Big[I_2\big((L^{H_1}_i)^{\otimes2}\big)\,I_2\big((L^{H_2}_j)^{\otimes2}\big)\Big]
= \frac{2}{N^2\sqrt{v^{(1)}_Nv^{(2)}_N}}\sum_{i,j=0}^{N-1}\langle L^{H_1}_i, L^{H_2}_j\rangle^2
\]
with $L^H_k$ from (37). From the identity (38),
\[
E\big[F^{(1)}_NF^{(2)}_N\big]
= \frac{2D^2(H_1,H_2)}{N^2\sqrt{v^{(1)}_Nv^{(2)}_N}}\sum_{i,j=0}^{N-1}\rho_{\frac{H_1+H_2}{2}}(i-j)^2
= \frac{2D^2(H_1,H_2)}{N\sqrt{v^{(1)}_Nv^{(2)}_N}}\sum_{|l|\le N-1}\rho_{\frac{H_1+H_2}{2}}(l)^2\Big(1-\frac{|l|}N\Big)
\to_{N\to\infty} \frac{2D^2(H_1,H_2)}{\sqrt{c_{v_1}c_{v_2}}}\sum_{l\in\mathbb Z}\rho_{\frac{H_1+H_2}{2}}(l)^2,
\]
using $N\sqrt{v^{(1)}_Nv^{(2)}_N}\to\sqrt{c_{v_1}c_{v_2}}$. The sum $\sum_{l\in\mathbb Z}\rho_{\frac{H_1+H_2}{2}}(l)^2$ is finite since $H_1+H_2<\frac32$. The limit (43) will then follow from Theorem 4.
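The isometry step above rests on the elementary Gaussian identity $E[(X^2-1)(Y^2-1)]=2\,(E[XY])^2$ for jointly standard Gaussian $X,Y$, i.e. (58) applied to second-chaos elements. It can be checked mechanically with Isserlis' (Wick's) theorem; the following stdlib-only sketch is an illustration, not part of the proof:

```python
import math

def pairings(idx):
    # all perfect matchings (Wick pairings) of the index list
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for k in range(len(rest)):
        for p in pairings(rest[:k] + rest[k + 1:]):
            yield [(first, rest[k])] + p

def gaussian_moment(cov, variables):
    # Isserlis' theorem: E[prod X_i] = sum over pairings of products of covariances
    if len(variables) % 2:
        return 0.0
    return sum(math.prod(cov[variables[i]][variables[j]] for i, j in p)
               for p in pairings(list(range(len(variables)))))

r = 0.3
cov = [[1.0, r], [r, 1.0]]
m22 = gaussian_moment(cov, [0, 0, 1, 1])  # E[X^2 Y^2] = 1 + 2 r^2
print(m22 - 1.0, 2 * r * r)              # E[(X^2-1)(Y^2-1)] = 2 r^2
```

With $r=\langle L^{H_1}_i,L^{H_2}_j\rangle$ this is exactly the term-by-term computation in the proof of Lemma 4.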

We will also need to control the speed of convergence of the covariance of $F^{(1)}_N$ and $F^{(2)}_N$. At this point we need to impose an additional restriction on the Hurst parameters.

Lemma 5 Let $F^{(1)}_N, F^{(2)}_N$ be defined by (39) and let $C_{1,2}$ be given by (42). Assume $H_1,H_2\in\left(0,\frac58\right)$. Then
\[
\sqrt N\Big(E\big[F^{(1)}_NF^{(2)}_N\big]-C_{1,2}\Big) \to_{N\to\infty} 0.
\]

Proof: For every $N\ge1$ we can write
\[
\sqrt N\Big(E\big[F^{(1)}_NF^{(2)}_N\big]-C_{1,2}\Big)
= 2D^2(H_1,H_2)\,\sqrt N\Bigg[\frac1{N\sqrt{v^{(1)}_Nv^{(2)}_N}}\sum_{|l|\le N-1}\rho_{\frac{H_1+H_2}{2}}(l)^2\Big(1-\frac{|l|}N\Big)
- \frac1{\sqrt{c_{v_1}c_{v_2}}}\sum_{l\in\mathbb Z}\rho_{\frac{H_1+H_2}{2}}(l)^2\Bigg].
\]
Thus, for $N$ large enough, using that $Nv^{(i)}_N$ converges to $c_{v_i}$ for $i=1,2$, we can write
\[
\Big|\sqrt N\Big(E\big[F^{(1)}_NF^{(2)}_N\big]-C_{1,2}\Big)\Big|
\le C\sqrt N\sum_{|l|\ge N}\rho_{\frac{H_1+H_2}{2}}(l)^2
+ \frac C{\sqrt N}\sum_{|l|\le N-1}|l|\,\rho_{\frac{H_1+H_2}{2}}(l)^2
+ C\sqrt N\,\Big|\frac1{N\sqrt{v^{(1)}_Nv^{(2)}_N}}-\frac1{\sqrt{c_{v_1}c_{v_2}}}\Big|
=: T_{1,N}+T_{2,N}+T_{3,N}.
\]
Since for large $l$ the function $\rho_H(l)$ behaves as $C|l|^{2H-2}$, we get, since $H_1,H_2<\frac58$ (so $H_1+H_2<\frac54$),
\[
T_{1,N} \le C\sqrt N\,N^{2H_1+2H_2-3} = CN^{2H_1+2H_2-\frac52} \to_{N\to\infty} 0
\]
and
\[
T_{2,N} \le C\frac1{\sqrt N}\,N^{2H_1+2H_2-2} = CN^{2H_1+2H_2-\frac52} \to_{N\to\infty} 0.
\]
Concerning the summand $T_{3,N}$, we have for $N$ large enough
\[
T_{3,N} \le C\sqrt N\,\Big|\frac1{\sqrt{Nv^{(1)}_N}}-\frac1{\sqrt{c_{v_1}}}\Big| + C\sqrt N\,\Big|\frac1{\sqrt{Nv^{(2)}_N}}-\frac1{\sqrt{c_{v_2}}}\Big|
\le C\sqrt N\Big(\big|Nv^{(1)}_N-c_{v_1}\big|+\big|Nv^{(2)}_N-c_{v_2}\big|\Big)
\]
\[
\le C\sqrt N\Bigg(\sum_{|l|\ge N}\rho_{H_1}(l)^2+\sum_{|l|\ge N}\rho_{H_2}(l)^2
+\frac1N\sum_{|l|\le N-1}|l|\big(\rho_{H_1}(l)^2+\rho_{H_2}(l)^2\big)\Bigg)
\le C\big(N^{4H_1-\frac52}+N^{4H_2-\frac52}\big) \to_{N\to\infty} 0
\]
and the conclusion follows.
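The term $T_{3,N}$ is driven by the scalar estimate $\sqrt N\,|Nv^{(i)}_N-c_{v_i}|\le CN^{4H_i-\frac52}\to0$ for $H_i<\frac58$, which can be observed numerically (an illustration with our own helper names, not part of the proof):

```python
import math

def rho(H, k):
    # rho_H(k) = ((|k|+1)^{2H} + (|k|-1)^{2H} - 2|k|^{2H}) / 2
    k = abs(k)
    return 0.5 * ((k + 1) ** (2 * H) + abs(k - 1) ** (2 * H) - 2 * k ** (2 * H))

def t3_term(H, N, M=200000):
    # sqrt(N) * |N v_N - c_v|; should decay roughly like N^{4H - 5/2} when H < 5/8
    n_vn = (2.0 / N) * sum((N - abs(l)) * rho(H, l) ** 2 for l in range(-(N - 1), N))
    cv = 2.0 * sum(rho(H, k) ** 2 for k in range(-M, M + 1))
    return math.sqrt(N) * abs(n_vn - cv)

a, b = t3_term(0.55, 500), t3_term(0.55, 4000)
print(a, b)  # decreasing in N
```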

The following lemma is also important in order to check [C1].

Lemma 6 Assume $H_1<\frac58$ and $H_2<\frac58$. Let $F^{(1)}_N, F^{(2)}_N$ be defined by (39) and let $C_{1,2}$ be given by (42). Then
\[
\sqrt N\Big(\frac12\langle DF^{(1)}_N, DF^{(2)}_N\rangle_H - C_{1,2}\Big)
\]
converges in distribution, as $N$ goes to infinity, to a Gaussian random variable.

Proof: Since $E\big[F^{(1)}_NF^{(2)}_N\big] = \frac12E\langle DF^{(1)}_N, DF^{(2)}_N\rangle_H$ and, by Lemma 5,
\[
\sqrt N\Big(\frac12E\langle DF^{(1)}_N, DF^{(2)}_N\rangle_H - C_{1,2}\Big) \to_{N\to\infty} 0,
\]
it suffices to show that the sequence $(Y_N)_{N\ge1}$ with
\[
Y_N := \frac{\sqrt N}2\Big(\langle DF^{(1)}_N, DF^{(2)}_N\rangle_H - E\langle DF^{(1)}_N, DF^{(2)}_N\rangle_H\Big) \tag{44}
\]
converges in distribution, as $N$ goes to infinity, to a Gaussian random variable. We will apply the Fourth Moment Theorem. In order to do so, we need to show that
\[
E\big[Y_N^2\big] \to_{N\to\infty} C_0 > 0 \qquad\text{and}\qquad \mathrm{Var}\big(\|DY_N\|^2_H\big) \to_{N\to\infty} 0. \tag{45}
\]

From (33) and the product formula for multiple integrals (60), we can write
\[
Y_N = \frac2{N^{3/2}\sqrt{v^{(1)}_Nv^{(2)}_N}}\sum_{k,l=0}^{N-1}\langle L^{H_1}_k, L^{H_2}_l\rangle\, I_2\big(L^{H_1}_k\otimes L^{H_2}_l\big)
= \frac{2D(H_1,H_2)}{N^{3/2}\sqrt{v^{(1)}_Nv^{(2)}_N}}\sum_{k,l=0}^{N-1}\rho_{\frac{H_1+H_2}{2}}(k-l)\, I_2\big(L^{H_1}_k\otimes L^{H_2}_l\big) \tag{46}
\]
with $L^H_k$ defined by (37). Consequently, since for any $f,g,u,v\in H$ we have
\[
\langle f\tilde\otimes g,\ u\tilde\otimes v\rangle = \frac12\big(\langle f,u\rangle\langle g,v\rangle+\langle f,v\rangle\langle g,u\rangle\big),
\]
we will obtain
\[
E\big[Y_N^2\big]
= \frac{4D^2(H_1,H_2)}{N^3v^{(1)}_Nv^{(2)}_N}\sum_{k,l,i,j=0}^{N-1}\rho_{\frac{H_1+H_2}{2}}(k-l)\,\rho_{\frac{H_1+H_2}{2}}(i-j)\cdot2\,\big\langle L^{H_1}_k\tilde\otimes L^{H_2}_l,\ L^{H_1}_i\tilde\otimes L^{H_2}_j\big\rangle
\]
\[
= \frac{4D^2(H_1,H_2)}{N^3v^{(1)}_Nv^{(2)}_N}\sum_{k,l,i,j=0}^{N-1}\rho_{\frac{H_1+H_2}{2}}(k-l)\,\rho_{\frac{H_1+H_2}{2}}(i-j)
\Big[\rho_{H_1}(k-i)\,\rho_{H_2}(l-j) + D^2(H_1,H_2)\,\rho_{\frac{H_1+H_2}{2}}(i-l)\,\rho_{\frac{H_1+H_2}{2}}(k-j)\Big]. \tag{47}
\]

For two sequences $(u(n))_{n\in\mathbb Z}$ and $(v(n))_{n\in\mathbb Z}$, we define their convolution by
\[
(u*v)(j) = \sum_{n\in\mathbb Z}u(n)\,v(j-n).
\]
We will need Young's inequality: if $s,p,q\ge1$ with $\frac1s+1=\frac1p+\frac1q$, then
\[
\|u*v\|_{\ell^s(\mathbb Z)} \le \|u\|_{\ell^p(\mathbb Z)}\,\|v\|_{\ell^q(\mathbb Z)}. \tag{48}
\]
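Young's inequality (48) with $p=q=\frac43$ (hence $s=2$) is exactly the case used below. It can be checked on truncated sequences mimicking $\rho_H(l)\sim|l|^{2H-2}$; this is a hedged numerical illustration with our own helper names:

```python
def lp_norm(u, p):
    # l^p norm of a finitely supported sequence stored as {index: value}
    return sum(abs(x) ** p for x in u.values()) ** (1.0 / p)

def conv(u, v):
    # (u * v)(j) = sum_n u(n) v(j - n) for finitely supported sequences
    w = {}
    for n, un in u.items():
        for m, vm in v.items():
            w[n + m] = w.get(n + m, 0.0) + un * vm
    return w

# truncated power-law sequences, qualitatively like rho_H(l) ~ |l|^{2H-2}
u = {l: (1.0 + abs(l)) ** -0.9 for l in range(-60, 61)}
v = {l: (1.0 + abs(l)) ** -0.8 for l in range(-60, 61)}
p = q = 4.0 / 3.0  # then 1/s = 1/p + 1/q - 1 = 1/2, i.e. s = 2
lhs = lp_norm(conv(u, v), 2)
rhs = lp_norm(u, p) * lp_norm(v, q)
print(lhs, rhs)  # lhs <= rhs by Young's inequality
```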

By the proof of Theorem 7.3.3 in [8], we have
\[
\frac1N\sum_{k,l,i,j=0}^{N-1}\rho_{\frac{H_1+H_2}{2}}(k-l)\,\rho_{\frac{H_1+H_2}{2}}(i-j)\,\rho_{\frac{H_1+H_2}{2}}(i-l)\,\rho_{\frac{H_1+H_2}{2}}(k-j)
\to_{N\to\infty} \Big\langle\rho^{*3}_{\frac{H_1+H_2}{2}},\ \rho_{\frac{H_1+H_2}{2}}\Big\rangle_{\ell^2(\mathbb Z)}
\]
where $\big\langle\rho^{*3}_{\frac{H_1+H_2}{2}},\rho_{\frac{H_1+H_2}{2}}\big\rangle_{\ell^2(\mathbb Z)}<\infty$ when $H_1+H_2<\frac54$, since this condition implies $\rho_{\frac{H_1+H_2}{2}}\in\ell^{\frac43}(\mathbb Z)$. The second summand in (47) can be handled as follows:
\[
\frac1N\sum_{k_1,k_2,k_3,k_4=0}^{N-1}\rho_{\frac{H_1+H_2}{2}}(k_4-k_3)\,\rho_{\frac{H_1+H_2}{2}}(k_2-k_1)\,\rho_{H_1}(k_1-k_4)\,\rho_{H_2}(k_3-k_2)
\]
\[
= \frac1N\sum_{k_1=0}^{N-1}\ \sum_{k_2,k_3,k_4=-k_1}^{N-1-k_1}\rho_{\frac{H_1+H_2}{2}}(k_4-k_3)\,\rho_{\frac{H_1+H_2}{2}}(k_2)\,\rho_{H_1}(k_4)\,\rho_{H_2}(k_3-k_2)
\]
\[
= \sum_{k_2,k_3,k_4\in\mathbb Z}\rho_{\frac{H_1+H_2}{2}}(k_4-k_3)\,\rho_{\frac{H_1+H_2}{2}}(k_2)\,\rho_{H_1}(k_4)\,\rho_{H_2}(k_3-k_2)
\Big(1-\frac{\max(k_2,k_3,k_4,0)}N+\frac{\min(k_2,k_3,k_4,0)}N\Big)\mathbf 1_{\{|k_2|<N,\,|k_3|<N,\,|k_4|<N\}}
\]
\[
\to_{N\to\infty} \Big\langle\rho_{\frac{H_1+H_2}{2}}*\rho_{H_1}*\rho_{H_2},\ \rho_{\frac{H_1+H_2}{2}}\Big\rangle_{\ell^2(\mathbb Z)}. \tag{49}
\]
Again notice that $\big\langle|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{H_1}|*|\rho_{H_2}|,\ |\rho_{\frac{H_1+H_2}{2}}|\big\rangle_{\ell^2(\mathbb Z)}<\infty$ since, by H\"older's and Young's inequalities, using $\rho_{\frac{H_1+H_2}{2}},\rho_{H_1},\rho_{H_2}\in\ell^{\frac43}(\mathbb Z)$,
\[
\Big\langle|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{H_1}|*|\rho_{H_2}|,\ |\rho_{\frac{H_1+H_2}{2}}|\Big\rangle_{\ell^2(\mathbb Z)}
\le C\,\Big\||\rho_{\frac{H_1+H_2}{2}}|*|\rho_{H_1}|*|\rho_{H_2}|\Big\|_{\ell^4(\mathbb Z)}
\le C\,\big\|\rho_{\frac{H_1+H_2}{2}}\big\|_{\ell^{4/3}(\mathbb Z)}\,\big\||\rho_{H_1}|*|\rho_{H_2}|\big\|_{\ell^2(\mathbb Z)}
\le C\,\big\|\rho_{\frac{H_1+H_2}{2}}\big\|_{\ell^{4/3}(\mathbb Z)}\,\big\|\rho_{H_1}\big\|_{\ell^{4/3}(\mathbb Z)}\,\big\|\rho_{H_2}\big\|_{\ell^{4/3}(\mathbb Z)},
\]
where we applied (48) twice. Thus
\[
E\big[Y_N^2\big] \to_{N\to\infty} C_0 := \frac{4D^2(H_1,H_2)}{c_{v_1}c_{v_2}}\Big(\big\langle\rho_{\frac{H_1+H_2}{2}}*\rho_{H_1}*\rho_{H_2},\ \rho_{\frac{H_1+H_2}{2}}\big\rangle_{\ell^2(\mathbb Z)}
+ D^2(H_1,H_2)\,\big\langle\rho^{*3}_{\frac{H_1+H_2}{2}},\ \rho_{\frac{H_1+H_2}{2}}\big\rangle_{\ell^2(\mathbb Z)}\Big)
\]

and the first part of (45) is obtained. Next, we check the convergence of the Malliavin derivative in (45). From (46) and the product formula (60),
\[
\|DY_N\|^2_H - E\|DY_N\|^2_H = \frac{16D^2(H_1,H_2)}{N^3v^{(1)}_Nv^{(2)}_N}\sum_{i,j,k,l=0}^{N-1}
\rho_{\frac{H_1+H_2}{2}}(k-l)\,\rho_{\frac{H_1+H_2}{2}}(i-j)\,
I_2\Big(\big(L^{H_1}_k\tilde\otimes L^{H_2}_l\big)\otimes_1\big(L^{H_1}_i\tilde\otimes L^{H_2}_j\big)\Big)
\]
where the contraction $\otimes_1$ is defined in (63). Since for every $f,g,u,v\in H$ we have
\[
(f\tilde\otimes g)\otimes_1(u\tilde\otimes v)
= \frac14\big(\langle f,u\rangle(g\otimes v)+\langle f,v\rangle(g\otimes u)+\langle g,u\rangle(f\otimes v)+\langle g,v\rangle(f\otimes u)\big),
\]
we obtain, via (38),
\[
\|DY_N\|^2_H - E\|DY_N\|^2_H = \frac{4D^2(H_1,H_2)}{N^3v^{(1)}_Nv^{(2)}_N}\sum_{i,j,k,l=0}^{N-1}
I_2\Big(\rho_{\frac{H_1+H_2}{2}}(k-l)\,\rho_{\frac{H_1+H_2}{2}}(i-j)
\Big[\rho_{H_1}(k-i)\big(L^{H_2}_l\otimes L^{H_2}_j\big)
+ D(H_1,H_2)\,\rho_{\frac{H_1+H_2}{2}}(k-j)\big(L^{H_2}_l\otimes L^{H_1}_i\big)
+ D(H_1,H_2)\,\rho_{\frac{H_1+H_2}{2}}(l-i)\big(L^{H_1}_k\otimes L^{H_2}_j\big)
+ \rho_{H_2}(l-j)\big(L^{H_1}_k\otimes L^{H_1}_i\big)\Big]\Big)
\]

and by the isometry formula (58),
\[
\mathrm{Var}\big(\|DY_N\|^2_H\big)
= \frac1{N^6\,(v^{(1)}_N)^2(v^{(2)}_N)^2}\sum_{i,j,k,l,i',j',k',l'=0}^{N-1}
\rho_{\frac{H_1+H_2}{2}}(k-l)\,\rho_{\frac{H_1+H_2}{2}}(i-j)\,\rho_{\frac{H_1+H_2}{2}}(k'-l')\,\rho_{\frac{H_1+H_2}{2}}(i'-j')\ P_{i,j,k,l,i',j',k',l'}
\]
where $P$ is a finite linear combination of products of four factors taken from $\{\rho_{H_1},\rho_{H_2},\rho_{\frac{H_1+H_2}{2}}\}$ evaluated at differences of the indices; up to constants, the three types of terms appearing in $P$ are
\[
C_1\,\rho_{\frac{H_1+H_2}{2}}(k-j)\,\rho_{\frac{H_1+H_2}{2}}(k'-j')\,\rho_{\frac{H_1+H_2}{2}}(l-i')\,\rho_{\frac{H_1+H_2}{2}}(i-l'),
\]
\[
C_2\,\rho_{H_1}(k-i)\,\rho_{H_1}(k'-i')\,\rho_{H_2}(l-l')\,\rho_{H_2}(j-j'),
\]
\[
C_3\,\rho_{H_1}(k-i)\,\rho_{\frac{H_1+H_2}{2}}(k'-j')\,\rho_{H_2}(l-l')\,\rho_{\frac{H_1+H_2}{2}}(j-i').
\]
We can write, using the convolution symbol,
\[
\mathrm{Var}\big(\|DY_N\|^2_H\big)
\le C\,\frac1{N^2}\sum_{l,j'=0}^{N-1}\Big[
\big(|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|\big)(l-j')^2
+ \big(|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{H_1}|*|\rho_{H_2}|\big)(l-j')^2
\]
\[
+ \big(|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|\big)(l-j')\,
\big(|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{H_1}|*|\rho_{H_2}|\big)(l-j')\Big]
\le C\,\frac1N\sum_{l=-(N-1)}^{N-1}g(l) \tag{50}
\]
with
\[
g(l) = \big(|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|\big)(l)^2
+ \big(|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{H_1}|*|\rho_{H_2}|\big)(l)^2
+ \big(|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|\big)(l)\,
\big(|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{\frac{H_1+H_2}{2}}|*|\rho_{H_1}|*|\rho_{H_2}|\big)(l).
\]

Since for any two sequences $u,v\in\ell^2(\mathbb Z)$ we have, for every $M\ge1$,
\[
\limsup_{|n|\to\infty}\big(|u|*|v|\big)(n) \le \sqrt{\sum_{|j|\ge M}u(j)^2}\ \sqrt{\sum_{|j|\ge M}v(j)^2}
\]
(see [8], page 168), we obtain, as in [8], by using again Young's inequality,
\[
g(l) \to_{|l|\to\infty} 0.
\]
The convergence of $\mathrm{Var}(\|DY_N\|^2_H)$ to zero then follows from (50) and the Ces\`aro theorem.
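The final step combines $g(l)\to0$ with the Ces\`aro theorem: if $g(l)\to0$ as $|l|\to\infty$, then the averages $\frac1N\sum_{|l|\le N-1}g(l)\to0$. A toy stand-in for $g$ (ours, not the actual convolution bound) makes the mechanism visible:

```python
def g(l):
    # any sequence tending to 0 at infinity; stand-in for the convolution bound in (50)
    return (1.0 + abs(l)) ** -0.5

def cesaro_avg(N):
    # (1/N) * sum_{|l| <= N-1} g(l)
    return sum(g(l) for l in range(-(N - 1), N)) / N

print(cesaro_avg(100), cesaro_avg(10000))  # tends to 0, here roughly like N^{-1/2}
```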
Let us now go back to the sequence $F_N=(F^{(1)}_N, F^{(2)}_N)$ from (39).

Proposition 5 The two-dimensional random sequence $F_N=\big(F^{(1)}_N,F^{(2)}_N\big)$ defined by (39) satisfies condition [C1] with $r_N=N^{-\frac12}$.

Proof: Since for every $N\ge1$
\[
\langle DF^{(1)}_N, -DL^{-1}F^{(2)}_N\rangle_H = \langle DF^{(2)}_N, -DL^{-1}F^{(1)}_N\rangle_H = \frac12\langle DF^{(1)}_N, DF^{(2)}_N\rangle_H, \tag{51}
\]
we need to show that the random vector
\[
\Big(F^{(1)}_N,\ F^{(2)}_N,\ \sqrt N\Big(\frac12\|DF^{(1)}_N\|^2_H-1\Big),\ \sqrt N\Big(\frac12\|DF^{(2)}_N\|^2_H-1\Big),\ \sqrt N\Big(\frac12\langle DF^{(1)}_N, DF^{(2)}_N\rangle_H-C_{1,2}\Big)\Big) \tag{52}
\]
converges to a Gaussian vector.

All the components of the above vector are multiple integrals of order 2, or the sum of a multiple integral of order 2 and a deterministic term tending to zero (the case of the last component). We will use the multidimensional Fourth Moment Theorem to prove the convergence of (52). Given the convergence of the vectors (40) and Lemmas 4 and 6, it suffices to check condition (41) from the multidimensional Fourth Moment Theorem (Theorem 4), i.e. that the covariances of the components of the vector (52) converge to constants as $N\to\infty$. The covariance of $F^{(1)}_N$ and $F^{(2)}_N$ converges to $C_{1,2}$ given by (42), by Lemma 4. Concerning the other limits, we have from (33) (recall that the symbol $\approx$ means that the two sides have the same limit as $N\to\infty$)


\[
E\Big[F^{(1)}_N\,\sqrt N\Big(\frac12\|DF^{(2)}_N\|^2_H-1\Big)\Big]
\approx C\,\frac1N\sum_{i,j,k=0}^{N-1}\rho_{H_2}(j-k)\,\rho_{\frac{H_1+H_2}{2}}(i-j)\,\rho_{\frac{H_1+H_2}{2}}(i-k)
\to_{N\to\infty} C\,\Big\langle\rho_{\frac{H_1+H_2}{2}}*\rho_{H_2},\ \rho_{\frac{H_1+H_2}{2}}\Big\rangle_{\ell^2(\mathbb Z)} < \infty,
\]
where the limit is obtained similarly to (49) and the last series is finite by Young's inequality together with $\rho_{\frac{H_1+H_2}{2}},\rho_{H_1},\rho_{H_2}\in\ell^{\frac43}(\mathbb Z)$. Also, by symmetry,
\[
E\Big[F^{(2)}_N\,\sqrt N\Big(\frac12\|DF^{(1)}_N\|^2_H-1\Big)\Big]
\to_{N\to\infty} C\,\Big\langle\rho_{\frac{H_1+H_2}{2}}*\rho_{H_1},\ \rho_{\frac{H_1+H_2}{2}}\Big\rangle_{\ell^2(\mathbb Z)}.
\]

Next, denote by $Y_N$ the last component of the vector (52), i.e.
\[
Y_N := \sqrt N\Big(\frac12\langle DF^{(1)}_N, DF^{(2)}_N\rangle_H - C_{1,2}\Big).
\]
By Lemma 5,
\[
E\big[F^{(1)}_NY_N\big]
\approx C\,\frac1N\sum_{i,j,k=0}^{N-1}\rho_{\frac{H_1+H_2}{2}}(j-k)\,\rho_{H_1}(i-j)\,\rho_{\frac{H_1+H_2}{2}}(i-k)
\to_{N\to\infty} C\,\Big\langle\rho_{\frac{H_1+H_2}{2}}*\rho_{H_1},\ \rho_{\frac{H_1+H_2}{2}}\Big\rangle_{\ell^2(\mathbb Z)}
\]
and similarly
\[
E\big[F^{(2)}_NY_N\big] \to_{N\to\infty} C\,\Big\langle\rho_{\frac{H_1+H_2}{2}}*\rho_{H_2},\ \rho_{\frac{H_1+H_2}{2}}\Big\rangle_{\ell^2(\mathbb Z)}.
\]

Finally, we can also prove that
\[
E\Big[\sqrt N\Big(\frac12\|DF^{(1)}_N\|^2_H-1\Big)\,\sqrt N\Big(\frac12\|DF^{(2)}_N\|^2_H-1\Big)\Big]
\approx C\,\frac1N\sum_{i,j,k,l=0}^{N-1}\rho_{H_1}(i-j)\,\rho_{H_2}(k-l)\,\rho_{\frac{H_1+H_2}{2}}(k-i)\,\rho_{\frac{H_1+H_2}{2}}(l-j)
\to_{N\to\infty} C\,\Big\langle\rho_{H_1}*\rho_{\frac{H_1+H_2}{2}},\ \rho_{H_2}*\rho_{\frac{H_1+H_2}{2}}\Big\rangle_{\ell^2(\mathbb Z)}
\]
and
\[
E\Big[\sqrt N\Big(\frac12\|DF^{(1)}_N\|^2_H-1\Big)Y_N\Big] \to_{N\to\infty} C\,\Big\langle\rho_{H_1}*\rho_{\frac{H_1+H_2}{2}},\ \rho_{H_1}*\rho_{\frac{H_1+H_2}{2}}\Big\rangle_{\ell^2(\mathbb Z)},
\]
\[
E\Big[\sqrt N\Big(\frac12\|DF^{(2)}_N\|^2_H-1\Big)Y_N\Big] \to_{N\to\infty} C\,\Big\langle\rho_{H_2}*\rho_{\frac{H_1+H_2}{2}},\ \rho_{H_2}*\rho_{\frac{H_1+H_2}{2}}\Big\rangle_{\ell^2(\mathbb Z)}.
\]
So, condition (41) is fulfilled and the conclusion follows from Theorem 4.

5.3 An example in a sum of finite Wiener chaoses

Consider a sequence $I_2(f_N)$ in the second Wiener chaos that satisfies the conditions [C1]-[C3] (take for example the sequence (33)). Consider a perturbation $G_N=I_q(g_N)$ with $q\ge3$ and such that, for $N\ge1$,
\[
\|g_N\|^2_{H^{\otimes q}} \le C\,\frac1{N^{1+\varepsilon}} \tag{53}
\]
with some $\varepsilon>0$. In particular, (53) implies that $G_N\to_{N\to\infty}0$ in $L^2(\Omega)$. Define
\[
F_N = I_2(f_N) + G_N, \qquad N\ge1. \tag{54}
\]
Clearly $F_N$ verifies [C2]. Let us show that $F_N$ satisfies [C1] with $r_N=\mathrm{Var}^{\frac12}\big(\frac12\|DI_2(f_N)\|^2_H\big)$ (which behaves as $CN^{-\frac12}$ for $N$ large). We have, by the product formula (60),
\[
r_N^{-1}\big(\langle DF_N, -DL^{-1}F_N\rangle_H - 1\big)
= r_N^{-1}\Big(\frac12\|DI_2(f_N)\|^2_H-1\Big) + r_N^{-1}\,\frac1q\|DG_N\|^2_H
+ (2+q)\,r_N^{-1}\Big(I_q\big(f_N\otimes_1g_N\big) + (q-1)I_{q-2}\big(f_N\otimes_2g_N\big)\Big).
\]
Notice that, by (53),
\[
E\Big[r_N^{-1}\,\frac1q\|DG_N\|^2_H\Big] \le CN^{\frac12}\,\|g_N\|^2_{H^{\otimes q}} \le CN^{-\frac12-\varepsilon} \to_{N\to\infty} 0. \tag{55}
\]
Also, since from (63) $\|f_N\otimes_kg_N\|_{H^{\otimes(2+q-2k)}}\le\|f_N\|_{H^{\otimes2}}\,\|g_N\|_{H^{\otimes q}}$ for $k=1,2$,
\[
E\Big[\big(r_N^{-1}I_q(f_N\otimes_1g_N)\big)^2\Big] \le CN\,\|f_N\otimes_1g_N\|^2_{H^{\otimes q}} \le CN\,\|f_N\|^2_{H^{\otimes2}}\,\|g_N\|^2_{H^{\otimes q}} \le CN^{-\varepsilon} \tag{56}
\]
and
\[
E\Big[\big(r_N^{-1}I_{q-2}(f_N\otimes_2g_N)\big)^2\Big] \le CN\,\|f_N\otimes_2g_N\|^2_{H^{\otimes(q-2)}} \le CN\,\|f_N\|^2_{H^{\otimes2}}\,\|g_N\|^2_{H^{\otimes q}} \le CN^{-\varepsilon}. \tag{57}
\]
Relations (55), (56) and (57), together with the fact that $r_N^{-1}\big(\frac12\|DI_2(f_N)\|^2_H-1\big)$ converges to a Gaussian law as $N\to\infty$, imply [C1]. The same relations together with (64) will give [C3].

6 Appendix: Elements from Malliavin calculus

We briefly describe the tools from the analysis on Wiener space that we will need in our work. For complete presentations, we refer to [10] or [8]. Consider $H$ a real separable Hilbert space and $(W(h), h\in H)$ an isonormal Gaussian process on a probability space $(\Omega,\mathcal A,P)$, that is, a centered Gaussian family of random variables such that $E[W(\varphi)W(\psi)]=\langle\varphi,\psi\rangle_H$. Denote by $I_n$ the multiple stochastic integral with respect to $W$ (see [10]). This mapping $I_n$ is actually an isometry between the Hilbert space $H^{\odot n}$ (symmetric tensor product) equipped with the scaled norm $\sqrt{n!}\,\|\cdot\|_{H^{\otimes n}}$ and the Wiener chaos of order $n$, which is defined as the closed linear span of the random variables $h_n(W(h))$ where $h\in H$, $\|h\|_H=1$ and $h_n$ is the Hermite polynomial of degree $n\in\mathbb N$:
\[
h_n(x) = \frac{(-1)^n}{n!}\,\exp\Big(\frac{x^2}2\Big)\frac{d^n}{dx^n}\exp\Big(-\frac{x^2}2\Big), \qquad x\in\mathbb R.
\]
The isometry of multiple integrals can be written as follows: for $m,n$ positive integers and symmetric kernels $f,g$,
\[
E\big[I_n(f)I_m(g)\big] = n!\,\langle f,g\rangle_{H^{\otimes n}} \ \text{ if } m=n, \qquad E\big[I_n(f)I_m(g)\big] = 0 \ \text{ if } m\ne n. \tag{58}
\]
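With this normalization $h_n=\mathrm{He}_n/n!$, where $\mathrm{He}_n$ are the probabilists' Hermite polynomials satisfying $\mathrm{He}_{n+1}(x)=x\,\mathrm{He}_n(x)-n\,\mathrm{He}_{n-1}(x)$, and the orthogonality $E[\mathrm{He}_n(X)\mathrm{He}_m(X)]=n!\,\delta_{nm}$ for $X\sim N(0,1)$ underlies the isometry (58). This can be verified with exact Gaussian moments; the following self-contained sketch (names ours) is an illustration:

```python
import math

def he(n):
    # coefficients (low -> high degree) of He_n via He_{k+1}(x) = x He_k(x) - k He_{k-1}(x)
    a, b = [1.0], [0.0, 1.0]  # He_0, He_1
    if n == 0:
        return a
    for k in range(1, n):
        nxt = [0.0] + b            # x * He_k
        for i, c in enumerate(a):  # minus k * He_{k-1}
            nxt[i] -= k * c
        a, b = b, nxt
    return b

def gauss_moment(k):
    # E[X^k] for X ~ N(0,1): (k-1)!! for even k, 0 for odd k
    return float(math.prod(range(k - 1, 0, -2))) if k % 2 == 0 else 0.0

def inner(p, q):
    # E[p(X) q(X)] for polynomials given as coefficient lists
    return sum(pi * qj * gauss_moment(i + j)
               for i, pi in enumerate(p) for j, qj in enumerate(q))

print(inner(he(3), he(3)), inner(he(3), he(2)))  # 3! = 6 and 0
```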
It also holds that $I_n(f)=I_n(\tilde f)$, where $\tilde f$ denotes the symmetrization of $f$.

We recall that any square integrable random variable which is measurable with respect to the $\sigma$-algebra generated by $W$ can be expanded into an orthogonal sum of multiple stochastic integrals
\[
F = \sum_{n=0}^\infty I_n(f_n) \tag{59}
\]
where $f_n\in H^{\odot n}$ are (uniquely determined) symmetric functions and $I_0(f_0)=E[F]$.

Let $L$ be the Ornstein-Uhlenbeck operator
\[
LF = -\sum_{n\ge0}n\,I_n(f_n)
\]
if $F$ is given by (59) and satisfies $\sum_{n=1}^\infty n^2\,n!\,\|f_n\|^2_{H^{\otimes n}}<\infty$.

For $p>1$ and $\alpha\in\mathbb R$ we introduce the Sobolev-Watanabe space $\mathbb D^{\alpha,p}$ as the closure of the set of polynomial random variables with respect to the norm
\[
\|F\|_{\alpha,p} = \big\|(I-L)^{\frac\alpha2}F\big\|_{L^p(\Omega)}
\]
where $I$ represents the identity. We denote by $D$ the Malliavin derivative operator that acts on smooth functionals of the form $F=g(W(h_1),\dots,W(h_n))$ ($g$ is a smooth function with compact support and $h_i\in H$):
\[
DF = \sum_{i=1}^n\frac{\partial g}{\partial x_i}\big(W(h_1),\dots,W(h_n)\big)\,h_i.
\]

The operator $D$ is continuous from $\mathbb D^{\alpha,p}$ into $\mathbb D^{\alpha-1,p}(H)$.
We recall the product formula for multiple integrals. It is well known that for $f\in H^{\odot n}$ and $g\in H^{\odot m}$,
\[
I_n(f)I_m(g) = \sum_{r=0}^{m\wedge n}r!\binom nr\binom mr\,I_{m+n-2r}\big(f\otimes_rg\big) \tag{60}
\]

where $f\otimes_rg$ denotes the $r$-contraction of $f$ and $g$. This contraction is defined in the following way. Consider $(e_j)_{j\ge1}$ a complete orthonormal system in $H$ and let $f\in H^{\odot n}$, $g\in H^{\odot m}$ be two symmetric functions with $n,m\ge1$. Then
\[
f = \sum_{j_1,\dots,j_n\ge1}a_{j_1,\dots,j_n}\,e_{j_1}\otimes\dots\otimes e_{j_n} \tag{61}
\]
and
\[
g = \sum_{k_1,\dots,k_m\ge1}b_{k_1,\dots,k_m}\,e_{k_1}\otimes\dots\otimes e_{k_m} \tag{62}
\]
where the coefficients $a$ and $b$ satisfy $a_{j_{\sigma(1)},\dots,j_{\sigma(n)}}=a_{j_1,\dots,j_n}$ and $b_{k_{\pi(1)},\dots,k_{\pi(m)}}=b_{k_1,\dots,k_m}$ for every permutation $\sigma$ of the set $\{1,\dots,n\}$ and every permutation $\pi$ of the set $\{1,\dots,m\}$. Actually $a_{j_1,\dots,j_n}=\langle f, e_{j_1}\otimes\dots\otimes e_{j_n}\rangle$ and $b_{k_1,\dots,k_m}=\langle g, e_{k_1}\otimes\dots\otimes e_{k_m}\rangle$ in (61) and (62).

If $f\in H^{\odot n}$, $g\in H^{\odot m}$ are symmetric and given by (61), (62) respectively, then the contraction of order $r$ of $f$ and $g$ is given by
\[
f\otimes_rg = \sum_{i_1,\dots,i_r\ge1}\ \sum_{j_1,\dots,j_{n-r}\ge1}\ \sum_{k_1,\dots,k_{m-r}\ge1}
a_{i_1,\dots,i_r,j_1,\dots,j_{n-r}}\,b_{i_1,\dots,i_r,k_1,\dots,k_{m-r}}\,
e_{j_1}\otimes\dots\otimes e_{j_{n-r}}\otimes e_{k_1}\otimes\dots\otimes e_{k_{m-r}} \tag{63}
\]
for every $r=0,\dots,m\wedge n$. In particular $f\otimes_0g=f\otimes g$. Note that $f\otimes_rg$ belongs to $H^{\otimes(m+n-2r)}$ for every $r=0,\dots,m\wedge n$ and is not in general symmetric. We will denote by $f\tilde\otimes_rg$ the symmetrization of $f\otimes_rg$.
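For a unit vector $h\in H$ one has $I_n(h^{\otimes n})=n!\,h_n(W(h))=\mathrm{He}_n(W(h))$, so for $n=m=2$ and $f=g=h^{\otimes2}$ the product formula (60) reduces to the polynomial identity $\mathrm{He}_2(x)^2=\mathrm{He}_4(x)+4\,\mathrm{He}_2(x)+2$, the coefficients $4$ and $2$ being $r!\binom2r^2$ for $r=1,2$. This can be checked directly (an illustration, not part of the text):

```python
def polymul(p, q):
    # multiply two polynomials given as coefficient lists (low -> high degree)
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

he2 = [-1.0, 0.0, 1.0]            # He_2(x) = x^2 - 1
he4 = [3.0, 0.0, -6.0, 0.0, 1.0]  # He_4(x) = x^4 - 6x^2 + 3
lhs = polymul(he2, he2)
rhs = [c4 + 4.0 * c2 for c4, c2 in zip(he4, he2 + [0.0, 0.0])]
rhs[0] += 2.0
print(lhs == rhs)  # True: He_2^2 = He_4 + 4*He_2 + 2
```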
Another important property of finite sums of multiple integrals is hypercontractivity: if $F=\sum_{k=0}^nI_k(f_k)$ with $f_k\in H^{\odot k}$, then
\[
E|F|^p \le C_p\big(EF^2\big)^{\frac p2} \tag{64}
\]
for every $p\ge2$.
The operator $D$ is closable and can therefore be extended to the space $\mathbb D^{1,2}$.

The adjoint of $D$ is denoted by $\delta$ and is called the divergence (or Skorohod) integral. Its domain $\mathrm{Dom}(\delta)$ coincides with the class of stochastic processes $u\in L^2(\Omega;H)$ such that
\[
\big|E\langle DF,u\rangle_H\big| \le c\,\|F\|_{L^2(\Omega)}
\]
for all $F\in\mathbb D^{1,2}$, and $\delta(u)$ is the element of $L^2(\Omega)$ characterized by the duality relationship
\[
E\big[F\delta(u)\big] = E\langle DF,u\rangle_H. \tag{65}
\]

The chain rule for the Malliavin derivative (see Proposition 1.2.4 in [10]) will be used several times: if $\varphi:\mathbb R\to\mathbb R$ is a continuously differentiable function and $F\in\mathbb D^{1,2}$, then $\varphi(F)\in\mathbb D^{1,2}$ and
\[
D\varphi(F) = \varphi'(F)\,DF. \tag{66}
\]

References
[1] H. Biermé, A. Bonami, I. Nourdin and G. Peccati (2012): Optimal Berry-Esseen rates on the Wiener space: the barrier of third and fourth cumulants. ALEA Lat. Am. J. Probab. Math. Stat. 9 (2), 473-500.

[2] P. Breuer and P. Major (1983): Central limit theorem for non-linear functionals of
Gaussian fields. J. Mult. Anal. 13, 425-441.

[3] Y. Hu, F. Lu and D. Nualart (2014): Convergence of densities of some functionals of Gaussian processes. J. Funct. Anal. 266 (2), 814-875.

[4] Y. Hu, D. Nualart, S. Tindel and F. Xu (2015): Density convergence in the Breuer-Major theorem for Gaussian stationary sequences. Bernoulli 21 (4), 2336-2350.

[5] M. Maejima and C. A. Tudor (2012): Selfsimilar processes with stationary increments
in the second Wiener chaos. Probability and Mathematical Statistics, 32, 167-186.

[6] P. A. Mykland (1992): Asymptotic expansions and bootstrapping distributions for de-
pendent variables: a martingale approach. Ann. Statist. 20(2), 623-654.

[7] I. Nourdin and D. Nualart (2016): Fisher information and the fourth moment theorem. Ann. Inst. Henri Poincaré Probab. Stat. 52 (2), 849-867.

[8] I. Nourdin and G. Peccati (2012): Normal Approximations with Malliavin Calculus: From Stein's Method to Universality. Cambridge University Press.

[9] I. Nourdin and G. Peccati (2009): Stein's method and exact Berry-Esseen asymptotics for functionals of Gaussian fields. Ann. Probab. 37 (6), 2231-2261.

[10] D. Nualart (2006): Malliavin Calculus and Related Topics. Second Edition. Springer.

[11] D. Nualart and G. Peccati (2005): Central limit theorems for sequences of multiple
stochastic integrals, Ann. Probab. 33, 177-193.

[12] G. Peccati and C.A. Tudor (2004): Gaussian limits for vector-valued multiple stochastic integrals. Séminaire de Probabilités XXXVIII, 247-262.

[13] M. Podolskij and N. Yoshida (2016): Edgeworth expansion for functionals of continuous
diffusion processes. Ann. Appl. Probab. 26 (6), 3415-3455.

[14] I. Shigekawa (1980): Derivatives of Wiener functionals and absolute continuity of induced measures. J. Math. Kyoto Univ. 20 (2), 263-289.

[15] N. Yoshida (1997): Malliavin calculus and asymptotic expansion for martingales. Probab. Theory Related Fields 109, 301-342.

[16] N. Yoshida (2001): Malliavin calculus and martingale expansion. Bull. Sci. Math. 125 (6-7), 431-456.

[17] N. Yoshida (2013): Martingale expansion in mixed normal limit. Stochastic Processes
and Their Applications, 123 (3), 887-933.
