# ISOPERIMETRY AND PROCESSES

**IN PROBABILITY IN BANACH SPACES**

Probability in Banach spaces is a branch of modern Mathematics which emphasizes the geometric and functional analytic aspects of Probability theory. Its probabilistic sources may be found in the study of regularity of random processes (especially Gaussian processes) and of limit theorems for sums of independent vector valued random variables, which are the two main topics of this book. Banach space theory forms its functional background, and Probability in Banach spaces has strong and fruitful connections with the Geometry of Banach spaces and the nowadays so-called local theory of Banach spaces.

Probability in Banach spaces started in the early fifties with the study, by R. Fortet and E. Mourier, of the law of large numbers and the central limit theorem for sums of independent identically distributed Banach space valued random variables. Important contributions to the foundations of probability distributions on vector spaces, toward which A. N. Kolmogorov already pointed in 1935, were at the time those of L. Le Cam and Y. V. Prokhorov and the Russian school. A decisive step toward the modern developments of Probability in Banach spaces was the introduction by A. Beck (1962) of a convexity condition on normed linear spaces equivalent to the validity of the extension of a classical law of large numbers of Kolmogorov. This geometric line of investigation was pursued and amplified by the Schwartz school in the early seventies. The concepts of radonifying and summing operators and the landmark work of B. Maurey and G. Pisier on type and cotype of Banach spaces considerably influenced the developments of Probability in Banach spaces. Other noteworthy achievements of the period are the early book (1968) of J.-P. Kahane, who systematically developed the crucial idea of symmetrization, and the study by J. Hoffmann-Jørgensen of sums of independent random variables.

Simultaneously, the study of regularity properties of random processes, in particular Gaussian processes, saw great progress in the late sixties and early seventies with the introduction of entropy methods. Processes are understood here as random functions on some abstract index set T, in other words as a family X = (X_t)_{t∈T} of random variables. In this setting of what might appear as Probability with minimal structure, the major discovery of R. Dudley (1967) was the idea of analyzing regularity properties of a Gaussian process X through the geometry of the index set T for the L₂-metric ‖X_s − X_t‖₂ induced by X itself. These foundations of Probability in Banach spaces led to rather intense activity during the last fifteen years. In particular, the Dudley-Fernique theorems on regularity properties of Gaussian and stationary Gaussian processes allowed the definitive treatment by M. B. Marcus and G. Pisier of the regularity of random Fourier series, initiated in this line by J.-P. Kahane. Limit theorems for sums of

Typeset by AMS-TEX

independent Banach space valued random variables became clearer. Under the impulse, in particular, of the local theory of Banach spaces, isoperimetric methods and concentration of measure phenomena, put forward most vigorously by V. D. Milman, made a strong entry in the late seventies and eighties into the probabilistic methods of investigation. Starting from Dvoretzky's theorem on Euclidean sections of convex bodies, the isoperimetric inequalities on spheres and in Gauss space proved most powerful in the study of Gaussian measures and processes, in particular through the work of C. Borell. They were useful too in the study of limit theorems through the technique of randomization. An important recent development was the discovery, motivated by these results, of a new isoperimetric inequality that is closely connected to the tail behavior of sums of independent Banach space valued random variables. It allows in particular today an almost complete description of various strong limit theorems like the strong law of large numbers and the law of the iterated logarithm. In the meantime, almost sure boundedness and continuity of general Gaussian processes have recently been completely understood with the tool of majorizing measures.

One of the fascinations of the theory of Probability in Banach spaces today is its use of a wide range of rather powerful methods. Since the field is one of the most active contact points between Probability and Analysis, it should come as no surprise that many of the techniques are not probabilistic but rather come from Analysis. The book focuses on two connected topics, the use of isoperimetric methods and the regularity of random processes, where many of these techniques come into play and which encompass many (although not all) of the main aspects of Probability in Banach spaces. The purpose of this book is to give a modern and, at many places, seemingly definitive account of these topics. The book is written so as to require only basic prior knowledge of either Probability or Banach space theory, in order to make it accessible to readers of both fields as well as to non-specialists. It is moreover presented in perspective with the historical developments and the strong modern interactions between Measure and Probability theory, Functional Analysis and the Geometry of Banach spaces. It is essentially self-contained (with the exception that the proofs of a few deep isoperimetric results have not been reproduced), so as to be accessible to anyone starting the subject, including graduate students. Emphasis has been put on bringing forward the ideas we judge important, not on encyclopedic detail. We hope that these ideas will fruitfully serve the further developments of the field and that their propagation will influence new areas of Mathematics.

The two parts of the book are introduced by chapters on isoperimetric background and on generalities about vector valued random variables. To explain and motivate the organization of our work, let us briefly analyze one fundamental example. Let (T, d) be a compact metric space and let X = (X_t)_{t∈T} be a Gaussian process indexed by T. If X has almost all its sample paths continuous, it defines a Gaussian Radon measure on the Banach space C(T) of all continuous functions on T. Such a Gaussian measure or variable may then be studied for its own sake, and indeed it shares some remarkable integrability and tail behavior properties of isoperimetric nature. On the other hand, one might wonder (beforehand) when a given Gaussian process is almost surely continuous; an analysis of the geometry of the index set T for the L₂-metric ‖X_s − X_t‖₂ induced by the process allows a complete understanding of this property. These related but somewhat different aspects of the study of Gaussian variables, which were historically the two main streams of developments, led us to divide the book into two parts. (The logical order would perhaps have been to ask first when a given process is bounded or continuous, and then to investigate it for its properties as a well-defined infinite dimensional random vector; we have however chosen the other way for various pedagogical reasons.)

In the first part, we study, successively, vector valued random variables, their integrability and tail behavior properties, and strong limit theorems for sums of independent random variables. Gaussian, Rademacher series, stable and sums of independent Banach space valued random variables are investigated in this scope using isoperimetric tools. The strong law of large numbers and the law of the iterated logarithm, for which the almost sure statement is shown to reduce to the statement in probability, complete this first part with the extension to infinite dimensional Banach space valued random variables of some classical real limit theorems. In the second part, tightness of sums of independent random variables and regularity properties of random processes are presented. General random processes are investigated, and regularity properties of Gaussian processes are characterized, with applications to random Fourier series. The link with the Geometry of Banach spaces through type and cotype is the subject of one chapter, with applications in particular to the classical central limit theorem. The book is completed with an account of empirical processes methods and with several applications, especially to the local theory of Banach spaces.

## Chapter 1. Isoperimetric inequalities and the concentration of measure phenomenon

1.1. Some isoperimetric inequalities on the sphere, in Gauss space and on the cube
1.2. An isoperimetric inequality for product measures
1.3. Martingale inequalities
Notes and references

In this first chapter, we present the isoperimetric inequalities which now appear as the crucial concept in the understanding of various concentration inequalities, tail behaviors and integrability theorems in Probability in Banach spaces. These inequalities often arise as the final and most elaborate forms of previous, weaker (but already efficient) inequalities, which will be mentioned in their framework throughout the book. In these final forms however, the isoperimetric inequalities and the associated concentration of measure phenomena provide the appropriate ideas for an in-depth comprehension of some of the most important theorems of the theory. The concentration of measure phenomenon, which roughly describes how a well-behaved function is almost a constant on almost all the space, can moreover be seen as the explanation for the two main parts of this work: the first one deals with "nice" functions, applying isoperimetric inequalities and concentration properties, while the second tries to determine conditions for a function to be "nice".

The concentration of measure phenomenon has been mainly put forward by the local theory of Banach spaces in the study of Dvoretzky's theorem on almost Euclidean sections of convex bodies. Following [G-M], [Mi-S], the basic idea may be described in the following way. Let (X, d) be a (compact) metric space with a Borel probability measure μ. The concentration function α(X; r), r > 0, is defined as

α(X; r) = sup{ 1 − μ(A_r) ; A ⊂ X Borel, μ(A) ≥ 1/2 }

where A_r denotes the neighborhood of order r of A, i.e. A_r = { x ∈ X ; d(x, A) < r }. For many families (X, d, μ), the concentration function α(X; r) turns out to be extremely small when r increases to infinity: the complement of the neighborhood of order r of a set of probability bigger than 1/2 decreases extremely rapidly when r becomes large. A typical and basic example is given by the Euclidean unit sphere S^{N−1} in ℝ^N, equipped with its geodesic distance d and its normalized Haar measure σ^{N−1}, for which it can be shown (see below) that

α(S^{N−1}; r) ≤ (π/8)^{1/2} exp(−(N−2)r²/2)   (N ≥ 3).

This is what is now usually called the concentration of measure

phenomenon. For example, if f is a function on S^{N−1}, denote by

ω_f(ε) = sup{ |f(x) − f(y)| ; d(x, y) < ε },  ε > 0,

its modulus of continuity, and let M_f be a median of f. Then, for every ε > 0,

σ^{N−1}( |f − M_f| > ω_f(ε) ) ≤ 2 (π/8)^{1/2} exp(−(N−2)ε²/2).

In the presence of such a property, any nice function is thus very close to being a constant (median or expectation) on all but a very small set, the smallness of which depends on α(X; r).

The concentration of measure phenomenon is usually derived (see however [G-M], [Mie3]) from an isoperimetric property. This chapter describes some isoperimetric inequalities, which we develop here in their abstract and measure theoretical setting. Their application to Probability in Banach spaces will be one of the purposes of Part I of this book. The first section presents isoperimetric inequalities in the classical cases of the sphere, Gauss space and the cube. The main object of the second section is an isoperimetric theorem for product measures (independent random variables), while the last one is devoted to some well-known and useful martingale inequalities. Proofs of some of the deepest inequalities, like the isoperimetric inequality on spheres and the one for product measures, are omitted and replaced by (hopefully) accurate references. Various comments and remarks try however to describe some of the ideas involved in these results, as well as their consequences which will be useful in the sequel.

### 1.1. Some isoperimetric inequalities on the sphere, in Gauss space and on the cube

Concentration properties for families (X, d, μ) are thus usually established through isoperimetric inequalities. We present some of these inequalities in this section, and start with the isoperimetric inequality on the spheres already alluded to above (cf. [F-L-M], [B-Z]).

Theorem 1.1. If A is a Borel set in S^{N−1} and H a cap (i.e. a ball for the geodesic distance) with the same measure σ^{N−1}(H) = σ^{N−1}(A), then, for any r > 0,

σ^{N−1}(A_r) ≥ σ^{N−1}(H_r)

where we recall that A_r = { x ∈ S^{N−1} ; d(x, A) < r } is the neighborhood of A of order r for the geodesic distance. In particular, if σ^{N−1}(A) ≥ 1/2 (and N ≥ 3), then, for any r > 0,

σ^{N−1}(A_r) ≥ 1 − (π/8)^{1/2} exp(−(N−2)r²/2).
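The second assertion of Theorem 1.1 lends itself to a quick numerical check. The following Monte Carlo sketch is an added illustration, not part of the original text: the dimension N, sample size, seed, and the test function f(x) = x₁ (which is 1-Lipschitz, so that ω_f(ε) ≤ ε) are arbitrary choices. It samples uniform points on S^{N−1} by normalizing standard Gaussian vectors and compares the empirical measure of {|f − M_f| > r} with the bound 2(π/8)^{1/2} exp(−(N−2)r²/2).

```python
import math
import random

def uniform_sphere_first_coord(n, rng):
    """First coordinate of a uniform random point on the unit sphere S^{n-1},
    obtained by normalizing an n-dimensional standard Gaussian vector."""
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return g[0] / math.sqrt(sum(v * v for v in g))

def sphere_concentration(N=50, samples=20000, r=0.5, seed=0):
    """Empirical measure of {|f - M_f| > r} for f(x) = x_1, compared with
    the theoretical bound 2*(pi/8)^(1/2)*exp(-(N-2)*r^2/2)."""
    rng = random.Random(seed)
    values = sorted(uniform_sphere_first_coord(N, rng) for _ in range(samples))
    median = values[samples // 2]
    empirical = sum(1 for v in values if abs(v - median) > r) / samples
    bound = 2.0 * math.sqrt(math.pi / 8.0) * math.exp(-(N - 2) * r * r / 2.0)
    return empirical, bound

if __name__ == "__main__":
    print(sphere_concentration())
```

For N = 50 and r = 0.5 the bound is already below 1%, and the empirical frequency falls well under it, illustrating how quickly the measure concentrates as the dimension grows.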

The main interest of such an isoperimetric theorem is of course the possibility of an estimate (or even an explicit computation) of the measure of a cap (the neighborhood of a cap is again a cap), a particularly important estimate being given in the second assertion of the theorem. The close relationships and analogies between uniform measures on spheres and Gaussian measures have been noticed in many contexts. Our interest in Theorem 1.1 lies in its connections with, and consequences for, a similar isoperimetric result for Gaussian measures. In this isoperimetric setting, it turns out that the isoperimetric inequality on S^{N−1} leads, in the Poincaré limit when N increases to infinity, to an isoperimetric inequality for Gauss measure.

Poincaré's limit expresses the canonical Gaussian measure in finite dimension as the limiting distribution of projected uniform distributions on spheres of radius √N when N tends to infinity. This is one example of the deep relations mentioned previously; it is also a way of illustrating the common belief that Wiener measure can be thought of as the uniform distribution on a sphere of infinite dimension and of radius equal to the square root of infinity (cf. [M-K]).

To be more precise about this observation of Poincaré, denote, for every N, by σ_N the uniform normalized measure on the sphere √N S^{N−1} of center the origin and radius √N in ℝ^N. Denote further by Π_{N,d} (N ≥ d) the projection from √N S^{N−1} onto ℝ^d. Then the sequence {Π_{N,d}(σ_N) ; N ≥ d} of measures on ℝ^d converges weakly, when N goes to infinity, to the canonical Gaussian measure on ℝ^d. Note that we will actually need a little more than weak convergence in this Poincaré limit, namely convergence for all Borel sets; this can be obtained similarly with some more effort (see [Fer9]). To sketch the proof, simply note that, by the law of large numbers, ρ_N²/N → 1 almost surely (or only in probability), where ρ_N² = g₁² + ⋯ + g_N² and (g_i) is a sequence of independent standard normal random variables. But, clearly, (N^{1/2}/ρ_N)(g₁, …, g_N) is equal in distribution to σ_N, hence (N^{1/2}/ρ_N)(g₁, …, g_d) is equal in distribution to Π_{N,d}(σ_N), and the conclusion follows.

Provided with this tool, it is simple to see how one can derive an isoperimetric inequality for Gaussian measures from Theorem 1.1. The basic result obtained in this way concerns the canonical Gaussian distribution in finite dimension; as is classical however, the fundamental structure of Gaussian distributions then allows these results to be extended to general finite or infinite dimensional Gaussian measures.

Let us denote therefore by γ_N the canonical Gaussian probability measure on ℝ^N with density

γ_N(dx) = (2π)^{−N/2} exp(−|x|²/2) dx.
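Poincaré's observation is easy to probe by simulation. The sketch below is an added illustration with arbitrary choices of N and sample size: it realizes the projected measure Π_{N,1}(σ_N) as √N g₁/ρ_N and compares its empirical distribution function with Φ, the one-dimensional Gaussian distribution function defined just below.

```python
import math
import random

def Phi(t):
    """Distribution function of the canonical one-dimensional Gaussian measure."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def projected_first_coordinate(N, samples, seed=0):
    """Sample sqrt(N)*g_1/rho_N with rho_N^2 = g_1^2 + ... + g_N^2, i.e. the
    first coordinate of a uniform point on the sphere of radius sqrt(N)."""
    rng = random.Random(seed)
    out = []
    for _ in range(samples):
        g = [rng.gauss(0.0, 1.0) for _ in range(N)]
        rho = math.sqrt(sum(v * v for v in g))
        out.append(math.sqrt(N) * g[0] / rho)
    return out

if __name__ == "__main__":
    xs = projected_first_coordinate(N=200, samples=5000)
    for t in (-1.0, 0.0, 1.0):
        ecdf = sum(1 for x in xs if x <= t) / len(xs)
        print(t, round(ecdf, 3), round(Phi(t), 3))
```

Already for N = 200 the empirical distribution function of the projected coordinate agrees with Φ to within sampling error, in line with the law-of-large-numbers sketch above.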

Denote further by Φ the distribution function of this measure in dimension one,

Φ(t) = (2π)^{−1/2} ∫_{−∞}^{t} exp(−x²/2) dx,  t ∈ [−∞, +∞],

by Φ^{−1} its inverse function, and set Ψ = 1 − Φ, for which we recall the classical estimate

Ψ(t) ≤ (1/2) exp(−t²/2),  t ≥ 0.

The next theorem is the isoperimetric inequality for (ℝ^N, d, γ_N), where d denotes the Euclidean distance.

Theorem 1.2. If A is a Borel set in ℝ^N and if H is a half-space {x ∈ ℝ^N ; ⟨x, u⟩ < λ}, u ∈ ℝ^N, λ ∈ [−∞, +∞], with the same Gaussian measure γ_N(H) = γ_N(A), then, for any r > 0,

γ_N(A_r) ≥ γ_N(H_r)

where A_r is the Euclidean neighborhood of order r of A. Equivalently,

Φ^{−1}(γ_N(A_r)) ≥ Φ^{−1}(γ_N(A)) + r,

and in particular, if γ_N(A) ≥ 1/2, then, for any r > 0,

γ_N(A_r) ≥ Φ(r) ≥ 1 − (1/2) exp(−r²/2).

Proof. The equivalence between the two formulations is easy since the Gaussian measure of a half-space is computed in dimension one, and thus

γ_N(H_r) = Φ(Φ^{−1}(γ_N(H)) + r) = Φ(Φ^{−1}(γ_N(A)) + r).

The case γ_N(A) ≥ 1/2 simply follows from the fact that Φ^{−1}(1/2) = 0 together with the estimate on Ψ. Turning to the proof itself, since Φ^{−1}(0) = −∞, we may assume that a = Φ^{−1}(γ_N(A)) > −∞. Let then b ∈ ]−∞, a[. Since the Poincaré limit holds for all Borel sets, for all k (≥ N) large enough,

σ_k(Π_{k,N}^{−1}(A)) > σ_k(Π_{k,1}^{−1}(]−∞, b]))

where, as above, σ_k is the uniform normalized measure on √k S^{k−1} and Π_{k,N}, Π_{k,1} are the projections onto ℝ^N and ℝ. It is easy to see that Π_{k,N}^{−1}(A_r) ⊃ (Π_{k,N}^{−1}(A))_r, where the neighborhood of order r on the right is understood with respect to the geodesic distance on √k S^{k−1}. Since Π_{k,1}^{−1}(]−∞, b]) is a cap on √k S^{k−1}, by the isoperimetric inequality on spheres (Theorem 1.1),

σ_k(Π_{k,N}^{−1}(A_r)) ≥ σ_k((Π_{k,N}^{−1}(A))_r) ≥ σ_k((Π_{k,1}^{−1}(]−∞, b]))_r).

Now (Π_{k,1}^{−1}(]−∞, b]))_r = Π_{k,1}^{−1}(]−∞, b + r(k)]) for some r(k) ≥ 0 satisfying lim_{k→∞} r(k) = r. In the Poincaré limit we therefore get that γ_N(A_r) ≥ Φ(b + r), hence the result since b < Φ^{−1}(γ_N(A)) is arbitrary. The isoperimetric inequality for Gauss measure thus follows rather easily from the corresponding one on spheres.

Note that half-spaces, as caps on spheres, are extremal sets for the Gaussian isoperimetric inequality since they achieve equality in the conclusion. It was one of the remarkable observations of the work of A. Ehrhard to show how one can introduce a similar symmetrization procedure adapted to Gaussian measure (with half-spaces as extremal sets). This led him to a rather complete isoperimetric calculus in Gauss space and allows a more intrinsic proof of Theorem 1.2; this approach however requires a quite involved argument based on an extensive use of powerful symmetrization (in the sense of Steiner) techniques. He obtained in particular in this way an inequality of Brunn-Minkowski type, namely, for A, B convex sets in ℝ^N and λ ∈ [0, 1],

(1.1)  Φ^{−1}(γ_N(λA + (1−λ)B)) ≥ λΦ^{−1}(γ_N(A)) + (1−λ)Φ^{−1}(γ_N(B))

where the sum λA + (1−λ)B is understood in the sense of Minkowski as {x = λa + (1−λ)b ; a ∈ A, b ∈ B}. Taking B to be the Euclidean ball of center the origin and radius r/(1−λ), and letting λ tend to 1, it is easily seen how (1.1) actually implies the isoperimetric inequality of Theorem 1.2 (for A convex). However, (1.1) is only known at the present time for convex sets (cf. [Pi18]). An inequality on which (1.1) appears as an improvement (for convex sets), but which holds for arbitrary Borel sets A, B, is the so-called log-concavity of Gaussian measures:

(1.2)  log γ_N(λA + (1−λ)B) ≥ λ log γ_N(A) + (1−λ) log γ_N(B).

A proof of (1.2) may be given using again the Poincaré limit, but this time on the classical Brunn-Minkowski inequality on ℝ^N (see [B-Z]), which states that

vol_N(λA + (1−λ)B) ≥ (vol_N(A))^λ (vol_N(B))^{1−λ}

for λ in [0, 1] and A, B bounded in ℝ^N, where vol_N is the N-dimensional volume. Let, for k ≥ N, P_{k,N} be the projection from the ball of center the origin and radius √k in ℝ^k onto ℝ^N. By convexity, it is clear that

P_{k,N}^{−1}(λA + (1−λ)B) ⊃ λP_{k,N}^{−1}(A) + (1−λ)P_{k,N}^{−1}(B)

so that, by Brunn-Minkowski (in ℝ^k),

vol_k(P_{k,N}^{−1}(λA + (1−λ)B)) ≥ (vol_k(P_{k,N}^{−1}(A)))^λ (vol_k(P_{k,N}^{−1}(B)))^{1−λ}

where vol_k denotes here the normalized volume on the ball of radius √k. If we subtract 1 on each side of this inequality, multiply then by k and let k tend to infinity, we see that we obtain (1.2), since it is easily checked that the Poincaré limit also indicates that

lim_{k→∞} vol_k(P_{k,N}^{−1}(A)) = γ_N(A)

for every Borel set A in ℝ^N. One may measure furthermore on this proof the sharpness of (1.1).

As announced, and due to the properties of Gaussian distributions, the preceding inequalities and Theorem 1.2 easily extend to general finite or infinite dimensional Gaussian measures. These extensions will usually be described below in their applications. Let us however briefly indicate one infinite dimensional result which will be useful to record here. On ℝ^ℕ, consider the measure μ = γ₁ ⊗ γ₁ ⊗ ⋯, the infinite product on each coordinate of the canonical one-dimensional Gaussian distribution. Note that μ(ℓ₂) = 0! The isoperimetric inequality indicates similarly that, for each Borel set A in ℝ^ℕ and every r > 0,

(1.3)  Φ^{−1}(μ_*(A_r)) ≥ Φ^{−1}(μ(A)) + r

where A_r is here the Euclidean, or rather Hilbertian, neighborhood of order r of A in ℝ^ℕ, that is A_r = A + rB = {x = a + rh ; a ∈ A, h ∈ ℝ^ℕ, |h| ≤ 1}, B being the unit ball of ℓ₂ (A_r is not necessarily measurable, whence the inner measure μ_*). This simply follows from Theorem 1.2 and a cylindrical approximation. This formulation of the isoperimetry will turn out to be a convenient tool in the applications.

As a corollary to Theorem 1.2, we now express the concentration of measure phenomenon for functions on (ℝ^N, d, γ_N). Let f be Lipschitzian on ℝ^N with Lipschitz norm given by

‖f‖_Lip = sup{ |f(x) − f(y)| / |x − y| ; x ≠ y in ℝ^N }.

Let us denote further by M_f a median of f for γ_N, i.e. a number such that γ_N(f ≥ M_f) and γ_N(f ≤ M_f) are both bigger than 1/2. Applying the second conclusion of Theorem 1.2 to those two sets of measure bigger than 1/2, and noticing that, for t > 0,

({f ≥ M_f})_t ∩ ({f ≤ M_f})_t ⊂ { |f − M_f| ≤ t ‖f‖_Lip },

we get that, for all t > 0,

(1.4)  γ_N(|f − M_f| > t) ≤ 2Ψ(t/‖f‖_Lip) ≤ exp(−t²/2‖f‖²_Lip).

Hence, with very high probability, f is concentrated around its median M_f. As we will see in Chapter 3, this inequality can be used to investigate the integrability properties of Gaussian random vectors. Let us note also that the preceding argument, applied only to {f ≤ M_f}, shows similarly that

γ_N(f > M_f + t) ≤ Ψ(t/‖f‖_Lip) ≤ (1/2) exp(−t²/2‖f‖²_Lip).

This inequality however appears more as a deviation inequality only, as opposed to (1.4) which indicates a concentration. We shall come back to this distinction in various contexts in the sequel.

While (1.4) appears as a direct consequence of Theorem 1.2, it should be noted that inequalities of the same type can actually be established by simple direct arguments and considerations. One such approach is the following. Let f be, as before, Lipschitzian on ℝ^N, so that f is almost everywhere differentiable and its gradient ∇f satisfies |∇f| ≤ ‖f‖_Lip. Assume now moreover that ∫ f dγ_N = 0. Then, for any t and λ > 0, we can write, by Chebyshev's inequality,

γ_N(f > t) ≤ exp(−λt) ∫ exp(λf) dγ_N ≤ exp(−λt) ∫∫ exp[λ(f(x) − f(y))] dγ_N(x) dγ_N(y)

where in the second inequality we have used Jensen's inequality (in y) and the mean zero property. Now, for x, y fixed in ℝ^N, set, for θ in [0, π/2],

x(θ) = x sin θ + y cos θ,  x′(θ) = x cos θ − y sin θ.

We have

f(x) − f(y) = ∫₀^{π/2} (d/dθ) f(x(θ)) dθ = ∫₀^{π/2} ⟨∇f(x(θ)), x′(θ)⟩ dθ.

Hence, using Jensen's inequality one more time (in θ here), γ_N(f > t) is majorized by

exp(−λt) (2/π) ∫₀^{π/2} ∫∫ exp( (π/2) λ ⟨∇f(x(θ)), x′(θ)⟩ ) dγ_N(x) dγ_N(y) dθ.

The fundamental rotational invariance of Gaussian measures indicates that, for any θ, the couple (x(θ), x′(θ)) has the same distribution under γ_N ⊗ γ_N as the original one (x, y). Therefore, by Fubini's theorem, for any θ,

∫∫ exp((π/2)λ⟨∇f(x(θ)), x′(θ)⟩) dγ_N(x) dγ_N(y) = ∫∫ exp((π/2)λ⟨∇f(x), y⟩) dγ_N(x) dγ_N(y).

Performing the integration in y, we get

∫∫ exp((π/2)λ⟨∇f(x), y⟩) dγ_N(x) dγ_N(y) = ∫ exp(π²λ²|∇f(x)|²/8) dγ_N(x) ≤ exp(π²λ²‖f‖²_Lip/8).

Hence

γ_N(f > t) ≤ exp(−λt + π²λ²‖f‖²_Lip/8).

If we then minimize in λ (λ = 4t/π²‖f‖²_Lip), we finally get

γ_N(f > t) ≤ exp(−2t²/π²‖f‖²_Lip)

for every t > 0. For f not necessarily of mean zero, and applying the result to −f as well, it follows that, for every t > 0,

(1.5)  γ_N(|f − Ef| > t) ≤ 2 exp(−2t²/π²‖f‖²_Lip)

where Ef = ∫ f dγ_N (finite since f is Lipschitzian).
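The one-dimensional ingredients behind Theorem 1.2 and the bounds (1.4)-(1.5) can be checked numerically. The following sketch is an added illustration (the bisection inverse of Φ and the chosen values of λ, r, t are ad hoc): it verifies that for a half-space H = {x ; x₁ < λ} one has Φ^{−1}(γ(H_r)) = Φ^{−1}(γ(H)) + r exactly, and that the classical estimate Ψ(t) ≤ (1/2)exp(−t²/2) holds at a few sample points.

```python
import math

def Phi(t):
    """One-dimensional Gaussian distribution function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def Phi_inv(p, lo=-12.0, hi=12.0):
    """Inverse of Phi by bisection (accurate enough for this check)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For H = {x : x_1 < lam}, the r-neighborhood is {x : x_1 < lam + r}, so
# Phi_inv(gamma(H_r)) - Phi_inv(gamma(H)) equals r exactly (extremality of
# half-spaces in Theorem 1.2).
lam, r = 0.3, 1.2
gap = Phi_inv(Phi(lam + r)) - Phi_inv(Phi(lam))

# Classical estimate Psi(t) = 1 - Phi(t) <= (1/2) exp(-t^2/2) for t >= 0.
estimate_holds = all(1.0 - Phi(t) <= 0.5 * math.exp(-t * t / 2.0) + 1e-12
                     for t in (0.0, 0.5, 1.0, 2.0, 4.0))

if __name__ == "__main__":
    print(gap, estimate_holds)
```

The exact gap of r for half-spaces is what makes them the extremal sets of the Gaussian isoperimetric inequality.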

It is of course often easier to work with expectations than with medians. Actually, integrating for example (1.4) yields |Ef − M_f| ≤ (π/2)^{1/2} ‖f‖_Lip, so that medians and expectations are essentially of the same order here. Inequality (1.5) is of course very close in spirit to (1.4), although (1.5) has a worse constant in the exponent. It can actually be shown, using a similar argument but with stochastic differentials with respect to Brownian motion (cf. [Pi16]), that we also have, for every t > 0,

(1.6)  γ_N(|f − Ef| > t) ≤ 2 exp(−t²/2‖f‖²_Lip).

The preceding argument leading to (1.5) however presents the advantage of applying to more general situations, such as the case of vector valued functions (cf. [Pi16]). Inequalities (1.4) and (1.6) describe the same concentration property around a median M_f or the expectation Ef = ∫ f dγ_N of a Lipschitzian function f; indeed, if t is chosen such that 2 exp(−t²/2‖f‖²_Lip) < 1/2, for example t = 2‖f‖_Lip, we get from (1.6) that Ef − t ≤ M_f ≤ t + Ef. However, it is not known whether it is possible to deduce exactly (1.4) and (1.6) from each other. We retain in any case that concentration inequalities of the type (1.4)-(1.6) are usually easier to obtain than the rather delicate isoperimetric theorems. Note further that (1.4) actually shows that a median M_f is necessarily unique. Indeed, if M_f < M_f′ were two distinct medians of f, letting t = (M_f′ − M_f)/2 > 0 would give

1/2 ≤ γ_N(f ≥ M_f′) ≤ γ_N(f > M_f + t) ≤ Ψ(t/‖f‖_Lip) < 1/2,

which is impossible.

Uniform measures on spheres and Gaussian measures thus satisfy isoperimetric inequalities and concentration phenomena. For our purposes, there is a useful observation which allows one to deduce from the Gaussian inequalities further inequalities for a rather large class of measures. Denote by φ a Lipschitzian map on ℝ^N with values in ℝ^N such that |φ(x) − φ(y)| ≤ σ|x − y| for all x, y in ℝ^N, for some σ = ‖φ‖_Lip > 0. Denote by ν the image measure of γ_N by φ, i.e. ν(A) = γ_N(φ^{−1}(A)) for any Borel set A in ℝ^N. Then there is an isoperimetric inequality of Gaussian type for ν: for A measurable in ℝ^N and r > 0,

(1.7)  Φ^{−1}(ν(A_{σr})) ≥ Φ^{−1}(ν(A)) + r.

Similarly, the corresponding inequality (1.4) for ν also holds, with σ‖f‖_Lip instead of ‖f‖_Lip on the right hand side. For a proof of (1.7), simply note that, by Theorem 1.2,

Φ^{−1}(γ_N((φ^{−1}(A))_r)) ≥ Φ^{−1}(γ_N(φ^{−1}(A))) + r

and that, by the Lipschitz property of φ, clearly (φ^{−1}(A))_r ⊂ φ^{−1}(A_{σr}), from which the result follows.
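Inequality (1.6) is easy to probe by simulation. The sketch below is an added illustration, not part of the original text: the test function f(x) = max_i x_i (1-Lipschitz but non-linear), the dimension, sample size and thresholds are arbitrary choices. It estimates γ_N(|f − Ef| > t) by Monte Carlo and compares it with the right side 2 exp(−t²/2) of (1.6).

```python
import math
import random

def sample_max_coordinate(N=20, samples=20000, seed=0):
    """Sample f(x) = max_i x_i, a 1-Lipschitz function, under the canonical
    Gaussian measure gamma_N on R^N."""
    rng = random.Random(seed)
    return [max(rng.gauss(0.0, 1.0) for _ in range(N)) for _ in range(samples)]

def concentration_check(t_values=(1.5, 2.0)):
    """Empirical gamma_N(|f - Ef| > t) versus the bound 2*exp(-t^2/2) of (1.6),
    with ||f||_Lip = 1."""
    vals = sample_max_coordinate()
    ef = sum(vals) / len(vals)
    results = []
    for t in t_values:
        emp = sum(1 for v in vals if abs(v - ef) > t) / len(vals)
        results.append((t, emp, 2.0 * math.exp(-t * t / 2.0)))
    return results

if __name__ == "__main__":
    for t, emp, bound in concentration_check():
        print(t, emp, bound)
```

The empirical frequencies fall far below the bound here; (1.6) is dimension-free and therefore not tight for any particular f, which is precisely what makes it so widely applicable.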

It is then of course of some interest to try to describe the class of probabilities which can be obtained as the image of γ_N by a contraction. While a complete description is still missing, the next examples are worthwhile to note (and useful for the sequel). Let λ be uniformly distributed on the cube [0, 1]^N. Then λ is the image of γ_N by the map φ(x) = φ(x₁, …, x_N) = (Φ(x₁), …, Φ(x_N)), for which it is easily seen that σ = ‖φ‖_Lip = (2π)^{−1/2}. If one is rather interested in the uniform measure on [−1/2, +1/2]^N or on [−1, +1]^N, choose accordingly φ = (Φ − 1/2)^{⊗N} or φ = (2Φ − 1)^{⊗N}, for which σ = (2π)^{−1/2}, respectively (2/π)^{1/2}.

The preceding approach however does not allow one to investigate the important case of the Haar measure on {0, 1}^N, or rather on {−1, +1}^N, which we use preferably for symmetry reasons. Denote by μ_N = ((1/2)δ_{−1} + (1/2)δ_{+1})^{⊗N} the canonical probability measure (Haar measure) on the (Cantor) group {−1, +1}^N. Consider the normalized Hamming metric d on {−1, +1}^N given by

d(x, y) = (1/N) Card{ i ≤ N ; x_i ≠ y_i } = (1/2N) Σ_{i=1}^{N} |x_i − y_i|

for x, y ∈ {−1, +1}^N. An isoperimetric theorem for the triple ({−1, +1}^N, d, μ_N) is known [Har] and states in particular that if μ_N(A) ≥ 1/2 for some A ⊂ {−1, +1}^N, then, for r > 0,

μ_N(A_r) ≥ 1 − (1/2) exp(−2Nr²)

where, as usual, A_r = { x ∈ {−1, +1}^N ; d(x, A) < r }. Unfortunately, with respect to the preceding inequalities, this result is not strong enough to yield a concentration inequality for μ_N similar to (1.4) or (1.6), since it depends on the dimension N. A different way has to be taken: we will need a stronger result, independent of N, which is the following. For a non-empty subset A of {−1, +1}^N and x ∈ {−1, +1}^N, set

d_A(x) = inf{ |x − y| ; y ∈ Conv A }

where Conv A is the convex hull of A in [−1, +1]^N (the points of {−1, 1}^N are the extreme points of [−1, +1]^N) and |·| is the Euclidean norm.

Theorem 1.3. For any non-empty subset A of {−1, +1}^N,

∫ exp(d²_A / 8) dμ_N ≤ 1/μ_N(A).

Proof. We first consider the case where Card A = 1, say A = {y}. Then d²_A(x) = |x − y|² = 4 Card{ i ≤ N ; x_i ≠ y_i }, and thus

∫ exp(d²_A/8) dμ_N = 2^{−N} Σ_{i=0}^{N} (N choose i) e^{i/2} = ((1 + e^{1/2})/2)^N ≤ 2^N = 1/μ_N(A)

since e^{1/2} < e < 3. This already proves the theorem when N = 1, since then the only case left is A = {−1, +1}, for which d_A ≡ 0 and the result holds. By the preceding, it is thus enough to consider the case where A has at least two points.

We now prove Theorem 1.3 by induction over N. Assuming it holds for N, we prove it for N + 1. Let A be a subset of {−1, +1}^{N+1} with at least two points. Assuming, without loss of generality, that two of these points differ on the last coordinate, and identifying {−1, +1}^{N+1} with {−1, +1}^N × {−1, +1}, we can then suppose that A = A_{−1} × {−1} ∪ A_{+1} × {+1}, where A_{−1}, A_{+1} are non-empty in {−1, +1}^N. The crucial point of the proof is contained in the following observation: for any x in {−1, +1}^N and any 0 ≤ λ ≤ 1,

(1.8)  d²_A((x, −1)) ≤ 4λ² + λ d²_{A_{+1}}(x) + (1 − λ) d²_{A_{−1}}(x).

Indeed, for i = −1, +1, let z_i ∈ Conv A_i be such that |x − z_i| = d_{A_i}(x). We notice that (z_i, i) ∈ Conv A, so that z = (λz_{+1} + (1 − λ)z_{−1}, 2λ − 1) ∈ Conv A. Now

|(x, −1) − z|² = 4λ² + |λ(x − z_{+1}) + (1 − λ)(x − z_{−1})|² ≤ 4λ² + λ|x − z_{+1}|² + (1 − λ)|x − z_{−1}|²

by the triangle inequality and the convexity of the square function. This proves (1.8). Note also that d_A((x, +1)) ≤ d_{A_{+1}}(x). We can assume, for example, that μ_N(A_{−1}) ≤ μ_N(A_{+1}). For i = −1, +1, set u_i = ∫ exp(d²_{A_i}/8) dμ_N and v_i = 1/μ_N(A_i), so that u_i ≤ v_i by the induction hypothesis. From (1.8) and Hölder's inequality we have, for all 0 ≤ λ ≤ 1,

∫ exp(d²_A/8) dμ_{N+1} ≤ (1/2) u_{+1} + (1/2) e^{λ²/2} u_{+1}^{λ} u_{−1}^{1−λ} ≤ (1/2) v_{+1} [ 1 + e^{λ²/2} (v_{−1}/v_{+1})^{1−λ} ]

since u_i ≤ v_i. The value of λ which minimizes the preceding expression is λ = log(v_{−1}/v_{+1}); but, in order not to have to consider the case where λ ≥ 1, let us take instead λ = 1 − v_{+1}/v_{−1} (recall that we assume v_{+1} = 1/μ_N(A_{+1}) ≤ 1/μ_N(A_{−1}) = v_{−1}), which gives

∫ exp(d²_A/8) dμ_{N+1} ≤ (1/2) v_{+1} [ 1 + e^{λ²/2} (1 − λ)^{−(1−λ)} ].

It is elementary to see that, for 0 ≤ λ < 1,

1 + e^{λ²/2} (1 − λ)^{−(1−λ)} ≤ 4/(2 − λ).

For the value of λ chosen,

(1/2) v_{+1} · 4/(2 − λ) = 2/(1/v_{+1} + 1/v_{−1}) = 2/(μ_N(A_{+1}) + μ_N(A_{−1})) = 1/μ_{N+1}(A).

The proof of Theorem 1.3 is complete.

As announced, Theorem 1.3 contains a concentration estimate for Lipschitzian functions similar to the one we described before for Gaussian measures; this application is one of the main interests of Theorem 1.3. However, with respect to the preceding inequalities, one main additional assumption is that the inequality only concerns convex Lipschitzian functions f (on ℝ^N) with Lipschitz constant ‖f‖_Lip. Let M_f denote a median of f with respect to μ_N. Then, for every t > 0,

(1.9)  μ_N(|f − M_f| > t) ≤ 4 exp(−t²/8‖f‖²_Lip).

To prove this inequality, let first A = {f ≤ M_f}. Since f is convex, f ≤ M_f on Conv A. Hence, by the Lipschitz property, if d_A(x) ≤ t/‖f‖_Lip, then f(x) ≤ M_f + t; thus, by Chebyshev's inequality and Theorem 1.3,

μ_N(f > M_f + t) ≤ μ_N(d_A > t/‖f‖_Lip) ≤ (1/μ_N(A)) exp(−t²/8‖f‖²_Lip) ≤ 2 exp(−t²/8‖f‖²_Lip).

On the other hand, let B = {f < M_f − t}. As before, we see that when d_B(x) ≤ t/‖f‖_Lip, then f(x) < M_f; hence μ_N(d_B > t/‖f‖_Lip) ≥ μ_N(f ≥ M_f) ≥ 1/2,

by definition of the median. Again by Chebyshev's inequality and Theorem 1.3, we therefore get

μ_N(B) ≤ 2 exp(−t²/8‖f‖²_Lip).

These two inequalities together then imply (1.9).

Theorem 1.3 and the subsequent concentration inequality (1.9) actually do not depend on N, and easily extend to the case of the Haar measure μ = ((1/2)δ_{−1} + (1/2)δ_{+1})^{⊗ℕ} on {−1, +1}^ℕ. For example, if f on ℝ^ℕ is convex and Lipschitzian in the sense that |f(α) − f(β)| ≤ ‖f‖_Lip |α − β| for all α, β in ℓ₂, then similarly, for M_f a median of f for μ and every t > 0,

(1.10)  μ(|f − M_f| > t) ≤ 4 exp(−t²/8‖f‖²_Lip).

Compared with the corresponding inequalities (1.4) and (1.6), the coefficient 8 in (1.9) does not seem best possible; it is not known whether 2 can be reached. Let us note further that the convexity assumption on f in (1.9) cannot be dropped, something which the argument of the proof of Theorem 1.3 cannot accomplish. This is made clear by the following example. Assume N is an even integer, let A = { x ∈ {−1, +1}^N ; Σ_{i=1}^{N} x_i ≤ 0 } and define f(x) = inf{ |x − y| ; y ∈ A }. Clearly ‖f‖_Lip = 1 and 0 is a median of f, but, as is easy to see, f²(x) ≥ 2(Σ_{i=1}^{N} x_i)⁺; then, from the central limit theorem, μ_N(f > εN^{1/4}) ≥ 1/4 for some ε > 0 independent of N, from which it is clear that the non-convex Lipschitzian function f cannot verify an inequality like (1.9). Despite these somewhat negative observations, Theorem 1.3 and the concentration inequality (1.9) will be used in Chapter 4 in the study of the tail behavior and integrability properties of vector valued Rademacher series, as efficiently as the Gaussian inequalities in Chapter 3.

### 1.2. An isoperimetric inequality for product measures

The preceding isoperimetric inequalities and concentration phenomena will be applied in the next chapters to the study of integrability properties and tail behaviors of Gaussian and Rademacher series with vector

we present an isoperimetri theorem for produ t measures whi h will be our key tool in the study of general sums of independent random variables (Chapter 6). The statement of the result is somewhat abstra t but we will try by omments and ideas on the proof to larify its powerful meaning. . Given a probability spa e (E.18 valued oeÆ ients. In this se tion. ) and a . Its dis overy was a tually motivated by these questions.
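As a purely illustrative aside (not part of the original text), the cube concentration inequality (1.9) is easy to check by simulation for the convex Lipschitz function f(x) = |Σ_i a_i x_i| on {−1, +1}^N; the weights a_i = N^{−1/2} and the sample sizes below are arbitrary choices made for this sketch.

```python
import math
import random

random.seed(0)
N, trials = 200, 20000
a = [1 / math.sqrt(N)] * N  # unit vector in l2, so f is convex and 1-Lipschitz
# empirical samples of f(x) = |<a, x>| under the uniform measure on {-1,+1}^N
samples = sorted(
    abs(sum(ai * random.choice((-1, 1)) for ai in a)) for _ in range(trials)
)
Mf = samples[trials // 2]  # empirical median of f
for t in (0.5, 1.0, 2.0, 3.0, 4.0, 5.0):
    tail = sum(1 for s in samples if abs(s - Mf) > t) / trials
    bound = 4 * math.exp(-t * t / 8)
    print(f"t={t}: empirical tail {tail:.5f}, bound {bound:.5f}")
```

The empirical tails sit far below the bound, in line with the remark that the constant 8 in (1.9) does not seem best possible.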

Given a probability space (E, Σ, μ) and a fixed, but arbitrary, integer N ≥ 1, denote by P the product measure μ^N on E^N. A point x in E^N has coordinates x = (x₁, …, x_N), x_i ∈ E. To a subset A of E^N, and to integers q, k, we associate

H(A, q, k) = {x ∈ E^N; ∃ x¹, …, x^q ∈ A such that Card{i ≤ N; x_i ∉ {x¹_i, …, x^q_i}} ≤ k}.

The set H(A, q, k) can be thought of in an isoperimetric way as some neighborhood of A whose elements are determined by a fixed number q of points in A with at most k free coordinates. This can be made somewhat more precise in the terminology of the beginning of this chapter. For an element x in (E^N)^q, denote its coordinates by x = (x¹, …, x^q) = (x^ℓ)_{ℓ≤q} where x^ℓ ∈ E^N and, as before, x^ℓ = (x^ℓ_i)_{i≤N}. Between elements x, y in (E^N)^q introduce

d(x, y) = Σ_{i=1}^N I_{{∀ℓ=1,…,q: x^ℓ_i ≠ y^ℓ_i}} = Card{i ≤ N; ∀ℓ = 1, …, q, x^ℓ_i ≠ y^ℓ_i}.

Then, for A in E^N, H(A, q, k) can simply be interpreted as the neighborhood of order k with respect to d, in the sense that

H(A, q, k) = {x ∈ E^N; d(x̃, A^q) ≤ k}

where, for x in E^N, x̃ is the element of (E^N)^q with coordinates x̃ = (x, …, x). The isoperimetric theorem estimates the size of H(A, q, k) under P in terms of P(A), q and k. The main conclusion is an exponential decay, in terms of k, of the measure of the complement of H(A, q, k) when k or q are large.

Theorem 1.4. For some universal constant K, and every A in E^N,

P*(H(A, q, k)) ≥ 1 − [K(1/q + (1/k) log(1/P(A)))]^k

where P* denotes inner probability.
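To make the definition of H(A, q, k) concrete, here is a small brute-force enumeration on a toy product space (an illustration added here; E, N, q, k are arbitrary tiny choices).

```python
from itertools import product

def in_H(x, A, q, k):
    # x belongs to H(A, q, k) iff some choice of q points of A leaves at
    # most k coordinates of x "free" (coordinate i is covered as soon as
    # x_i equals the i-th coordinate of one of the q chosen points)
    return any(
        sum(1 for i in range(len(x)) if all(p[i] != x[i] for p in pts)) <= k
        for pts in product(A, repeat=q)
    )

E, N = (0, 1), 4
A = [x for x in product(E, repeat=N) if sum(x) <= 1]
H = {x for x in product(E, repeat=N) if in_H(x, A, q=2, k=1)}
print(len(A), len(H))
```

One always has A ⊂ H(A, q, k) (take all q points equal to x itself), while here the point (1, 1, 1, 1) is excluded: two points of A can cover at most two of its coordinates, leaving two free, which exceeds k = 1.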

The proof of Theorem 1.4 is isoperimetric in nature and relies on several reductions based on symmetrization (rearrangement) procedures. It does not seem however to allow an exact solution of the isoperimetric problem, which would be the determination, for any a > 0, of

inf{P*(H(A, q, k)); P(A) > a}.

Note further that the use of inner probability is necessary since H(A, q, k) need not be measurable when A is. Theorem 1.4 is mostly used in applications only in the typical case P(A) ≥ 1/2 and k ≥ q, for which we get that

(1.11)  P*(H(A, q, k)) ≥ 1 − (K₀/q)^k

where K₀ is a numerical constant. It will be convenient in the sequel to assume this constant K₀ to be an integer; we do this, and K₀ will moreover always have the meaning of (1.11) throughout the book. It has been remarked in [Ta11] that, in the case P(A) ≥ 1/2 for example, (1.11) can actually be improved into

(1.12)  P*(H(A, q, k)) ≥ 1 − (K/(q log q))^k.

The gain of the factor log q is irrelevant for the applications presented in this work. It should be noted however that this estimate is sharp. We illustrate below one typical argument in a particular case. Indeed, consider the case where E = {0, 1} and μ = (1 − r/N)δ₀ + (r/N)δ₁, where r is an integer less than N and N is assumed to be large. Let A in E^N be defined as

A = {x ∈ E^N; Σ_{i=1}^N x_i ≤ r}.

Then P(A) is of the order of 1/2 and, clearly,

H(A, q, k) = {x ∈ E^N; Σ_{i=1}^N x_i ≤ rq + k}.

Now,

P((H(A, q, k))^c) ≥ (N choose rq+k+1) (r/N)^{rq+k+1} (1 − r/N)^{N−rq−k−1} ≥ [r/(2e²(rq + k + 1))]^{rq+k+1} ≥ [1/(K(q + k/r))]^{rq+k+1}.

If k ≥ q are fixed (large enough) and if we take r to be of the order of k/(q log q), we see that we have obtained an example for which the bound (1.12) is optimal.

In the applications, we will only use (1.11). Let us therefore briefly indicate the trivial translation from the preceding abstract product measures to the setting of independent random variables. Let X = (X_i)_{i≤N} be a sample of independent random variables with values in a measurable space E. By independence, they can be realized on some product probability space Ω^N, in such a way that, for ω = (ω_i)_{i≤N} in Ω^N, X_i(ω) only depends on ω_i. Then (1.11) is simply that, when k ≥ q and IP{X ∈ A} ≥ 1/2 for some measurable set A in E^N,

(1.13)  IP*{X ∈ H(A, q, k)} ≥ 1 − (K₀/q)^k.

Hence, when k or q are large, the sample X falls with high probability into H(A, q, k). On this set, X is entirely controlled by a finite number q of points in A provided k elements of the sample are neglected. In the applications, especially to the study of sums of independent random variables, these neglected terms can essentially be thought of as the largest values of the sample. Hence, once an appropriate bound on large values has been found, a good choice of A and relations between the various parameters q and k determine sharp bounds on the tails of sums of independent random variables. These will be mainly studied in Chapter 6, and in corollaries in Chapters 7 and 8 on strong limit theorems for sums of independent random variables.

Let us note further that this intuition about large values of the sample is justified in the special case of the proof of Theorem 1.4 we give below, where the final binomial arguments exactly handle the situation as we just described. We refer to [Ta11] for a detailed proof of Theorem 1.4. We would like however to give a proof in the simpler case where A is symmetric, i.e. invariant under permutation of the coordinates. The rearrangement part of the proof is then an inequality introduced in [Ta6], in the same spirit as, but easier than, the rearrangement arguments needed for Theorem 1.4. The explicit computations on the sets after appropriate rearrangement are then identical to those required to prove Theorem 1.4, and rely on classical binomial estimates. This method also yields a version of Theorem 1.4, in the case of symmetric A, for q > 1 not necessarily an integer. This is another motivation for the details we are giving now.

More precisely let, as before, N ≥ 1 be an integer and let now q > 1; denote by N′ the integer part of qN. Let A in E^N and denote by C the set of all sequences y = (y_i)_{i≤N′} such that {y₁, …, y_{N′}} can be covered by q sets of the form {x₁, …, x_N} where (x_i)_{i≤N} ∈ A. For each integer k, set

G(C, k) = {x ∈ E^N; ∃ y ∈ C such that Card{i ≤ N; x_i ∉ {y₁, …, y_{N′}}} ≤ k}.

For a comparison with H(A, q, k) when q is an integer, note that C is clearly invariant under permutation of the coordinates and that G(C, k) ⊂ H(A, q, k). On the other hand, the converse inclusion G(C, k) ⊃ H(A, q, k) is also satisfied, at least on the subset of E^N consisting of those x = (x_i)_{i≤N} such that x_i ≠ x_j whenever i ≠ j.

Consider thus C symmetric (invariant under permutation of the coordinates) in E^{N′} such that P′(C) > 0 where P′ = μ^{N′}. In these notations, we then have that there exist K(q) < 1 and k(q, P′(C)) large enough such that, for all k ≥ k(q, P′(C)),

(1.14)  P*(G(C, k)) ≥ 1 − [K(q)]^k.

For simplicity, we do not indicate the explicit dependence of K(q) and k(q, P′(C)) in function of q and P′(C), but, as will be clear from the proof, these are similar to the ones explicited in Theorem 1.4. (As before, inner probability is used since in general G(C, k) need not be measurable.)

In order to establish (1.14), we take up the framework of [Ta11] and note in particular, to start with, that since the result is measure theoretic we might as well assume that E = [0,1] and μ is Lebesgue measure on [0,1]. The main point in the proof of (1.14) is the use of Theorem 11 in [Ta6], which ensures the existence, for C symmetric, of a left-hereditary subset C̃ of ]0,1[^{N′} such that μ^{N′}(C̃) = μ^{N′}(C) and for which, for every k,

μ^N(G(C, k)) ≥ μ^N(G(C̃, k)).

Here, C̃ is left-hereditary in the sense that whenever (y_i)_{i≤N′} ∈ C̃ and 0 < z_i < y_i for 1 ≤ i ≤ N′, then (z_i)_{i≤N′} ∈ C̃. (When C̃ is left-hereditary, so is G(C̃, k), which is therefore measurable.) The conclusion now follows from an appropriate lower bound for μ^N(G(C̃, k)), which will be obtained using binomial estimates. Convenient here is the following inequality: for 0 < θ < t < 1,

(1.15)  IP{B(n, θ) ≥ tn} ≤ [(θ/t)^t ((1 − θ)/(1 − t))^{1−t}]^n

where B(n, θ) is the number of successes in a run of n Bernoulli trials with probability of success θ. By symmetry, the same right-hand side also bounds IP{B(n, θ) ≤ tn} when 0 < t < θ < 1, and elementary computations then yield

IP{B(n, θ) ≤ tn} ≤ exp(−n(θ − t + t log(t/θ)));

hence, if θ/t ≥ 1 + ε, since 1 − v + v log v ≥ δ² for 0 < v = t/θ ≤ (1 + ε)^{−1}, where here and below δ = ¼ min(ε, ½),

IP{B(n, θ) ≤ tn} ≤ exp(−δ²θn).

We let q = (1 + ε)², ε > 0, and first show that, for some γ = γ(ε, μ^{N′}(C)) > 0 large enough, there exists (y_i)_{i≤N′} in C̃ such that, for every 1 ≤ r ≤ N′,

Card{i ≤ N′; y_i > 1 − (1 + ε)(r + γ)/N′} ≥ r.

Indeed, by the preceding consequence of (1.15),

μ^{N′}((y_i)_{i≤N′}; Card{i ≤ N′; y_i > 1 − (1 + ε)(r + γ)/N′} ≤ r − 1) ≤ exp(−δ²(1 + ε)(r + γ)),

and

Σ_{r≥1} exp(−δ²(1 + ε)(r + γ)) < μ^{N′}(C̃)

whenever γ = γ(ε, μ^{N′}(C)) is chosen large enough, so that some element of C̃ avoids all these events simultaneously. This proves the preceding claim.
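The binomial bound (1.15) is easy to sanity-check against exact binomial tails (a numerical aside added here; n, θ and the values of t are arbitrary).

```python
import math

def tail_ge(n, theta, m):
    # exact P{B(n, theta) >= m}
    return sum(
        math.comb(n, j) * theta**j * (1 - theta) ** (n - j) for j in range(m, n + 1)
    )

def bound_1_15(n, theta, t):
    # right-hand side of (1.15), valid for 0 < theta < t < 1
    return ((theta / t) ** t * ((1 - theta) / (1 - t)) ** (1 - t)) ** n

n, theta = 60, 0.2
for t in (0.3, 0.5, 0.8):
    m = math.ceil(t * n)
    print(f"t={t}: exact {tail_ge(n, theta, m):.3e}, bound {bound_1_15(n, theta, t):.3e}")
```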

To conclude the proof, note that, in the same way, if u = t/θ ≥ 1 + ε, since u log u − u + 1 ≥ δ²u for such u,

(1.16)  IP{B(n, θ) ≥ tn} ≤ exp(−δ²tn)

(where we recall that δ = ¼ min(ε, ½)). Since C̃ is left-hereditary, each sequence (x_i)_{i≤N} such that, for every r > k,

Card{i ≤ N; x_i > 1 − (1 + ε)(r − k + γ)/N′} ≤ r − 1

belongs to G(C̃, k): indeed, for every r > k, the r-th largest element of (x_i)_{i≤N} is then less than 1 − (1 + ε)(r − k + γ)/N′ and is therefore smaller than the (r − k)-th largest element of (y_i)_{i≤N′}, so that, by the left-hereditary property, (x_i)_{i≤N} ∈ G(C̃, k). Now, using (1.16), if r > k and k ≥ k(ε, μ^{N′}(C)) is large enough (in particular k ≥ γ; recall that q = (1 + ε)² and that N′ is the integer part of qN),

μ^N((x_i)_{i≤N}; Card{i ≤ N; x_i > 1 − (1 + ε)(r − k + γ)/N′} ≥ r) ≤ exp(−δ²r).

Hence

μ^N((x_i)_{i≤N} ∉ G(C̃, k)) ≤ Σ_{r>k} exp(−δ²r) ≤ k(ε, μ^{N′}(C))[K(ε)]^k

for some K(ε) < 1 and all k ≥ k(ε, μ^{N′}(C)) large enough. As announced, this is exactly what was required to conclude the proof, and the isoperimetric result (1.14) is therefore established.

1.3. Martingale inequalities

Martingale methods prove useful in order to establish various concentration results. These complement the preceding isoperimetric inequalities and will prove useful in various places throughout this work. The inequalities we state are rather classical, at least some of them, and we present them in the general spirit of concentration properties.

Recall that L₁ = L₁(Ω, A, IP) denotes the space of all real measurable functions f on Ω such that IE|f| = ∫|f| dIP < ∞. Assume that we are given a filtration

{∅, Ω} = A₀ ⊂ A₁ ⊂ ⋯ ⊂ A_N = A

of sub-σ-algebras of A. Given f in L₁, IE^{A_i} denotes the conditional expectation operator with respect to A_i. Set, for each i = 1, …, N,

d_i = IE^{A_i} f − IE^{A_{i−1}} f,

so that f − IEf = Σ_{i=1}^N d_i. (d_i)_{i≤N} defines a so-called martingale difference sequence, characterized by the property

IE^{A_{i−1}} d_i = 0,  i ≤ N.

One of the typical examples of martingale difference sequences we have in mind is a sequence (X_i)_{i≤N} of independent mean zero random variables. Indeed, if A_i denotes the σ-algebra generated by the variables X₁, …, X_i, it is clear that, by independence and the mean zero property, IE^{A_{i−1}} X_i = IEX_i = 0. Hence, all the results we will present for f − IEf = Σ_{i=1}^N d_i as before apply to the sum Σ_{i=1}^N X_i.

The first lemma is a kind of analogue in this context of the concentration property for Lipschitzian functions. It expresses, in the preceding notations, high concentration of f around its expectation in terms of the size of the differences d_i.

Lemma 1.5. Let f in L₁ and let f − IEf = Σ_{i=1}^N d_i be as before the sum of martingale differences with respect to (A_i)_{i≤N}, with ‖d_i‖_∞ < ∞, and set a = (Σ_{i=1}^N ‖d_i‖²_∞)^{1/2}. Then, for every t > 0,

IP{|f − IEf| > t} ≤ 2 exp(−t²/2a²).

Proof. We first note that when φ is a random variable such that |φ| ≤ 1 almost surely and IEφ = 0, then, for any real number λ, IE exp λφ ≤ exp(λ²/2). Indeed, simply note from the convexity of x → exp(λx) and x = (1 + x)/2 − (1 − x)/2 that, for any |x| ≤ 1,

exp(λx) ≤ ch λ + x sh λ ≤ exp(λ²/2) + x sh λ,

and integrating yields the claim. It clearly follows that, for any i = 1, …, N,

IE^{A_{i−1}} exp λd_i ≤ exp(λ²‖d_i‖²_∞/2).

Iterating by the properties of conditional expectation,

IE exp[λ(f − IEf)] = IE exp(λ Σ_{i=1}^N d_i) = IE(exp(λ Σ_{i=1}^{N−1} d_i) IE^{A_{N−1}} exp λd_N) ≤ IE exp(λ Σ_{i=1}^{N−1} d_i) exp(λ²‖d_N‖²_∞/2) ≤ ⋯ ≤ exp(λ²a²/2).

We then obtain from Chebyshev's inequality that, for t > 0,

IP{f − IEf > t} ≤ exp(−λt + λ²a²/2) ≤ exp(−t²/2a²)

for the optimal choice of λ. Applying this inequality also to −f then yields the conclusion of the lemma.

We should point out at this stage, and once and for all, that in a statement like Lemma 1.5 it is understood that if we are interested only in f − IEf rather than |f − IEf|, we also have

IP{f − IEf > t} ≤ exp(−t²/2a²).

(This was actually explicitly established in the proof!) This general comment about a coefficient 2 in front of the exponential bound, in order to take into account absolute values (f and −f), applies in many similar situations (we already mentioned it about the concentration inequalities of Section 1.1) and will usually be applied without any further comment in the sequel.

When, in addition to the bounds on the d_i, some information on IE^{A_{i−1}} d_i² is available, the preceding proof basically yields the following refinement.
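As an added numerical check (the weights c_i below are arbitrary), Lemma 1.5 can be tested by simulation on its archetypal example, f = Σ_i c_i ε_i with independent signs ε_i, for which ‖d_i‖_∞ = |c_i|.

```python
import math
import random

random.seed(1)
c = [0.3, 1.0, 0.5, 0.8, 0.2, 0.7]  # martingale differences d_i = c_i * eps_i
a2 = sum(ci * ci for ci in c)  # a^2 = sum of ||d_i||_inf^2 in Lemma 1.5
trials = 50000
vals = [sum(ci * random.choice((-1, 1)) for ci in c) for _ in range(trials)]
for t in (1.0, 2.0, 3.0):
    tail = sum(1 for v in vals if abs(v) > t) / trials
    print(f"t={t}: tail {tail:.4f}, bound {2 * math.exp(-t * t / (2 * a2)):.4f}")
```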

Lemma 1.6. Let f − IEf = Σ_{i=1}^N d_i be as before. Set a = max_{i≤N} ‖d_i‖_∞ and let b ≥ (Σ_{i=1}^N ‖IE^{A_{i−1}} d_i²‖_∞)^{1/2}. Then, for every t > 0,

IP{|f − IEf| > t} ≤ 2 exp(−(t²/2b²) exp(−at/b²)).

Proof. The proof is similar to the one of Lemma 1.5. It uses simply that

IE^{A_{i−1}} exp λd_i = 1 + (λ²/2!) IE^{A_{i−1}} d_i² + (λ³/3!) IE^{A_{i−1}} d_i³ + ⋯ ≤ 1 + (λ²/2) ‖IE^{A_{i−1}} d_i²‖_∞ (1 + λa/3 + λ²a²/(3·4) + ⋯) ≤ exp((λ²/2) ‖IE^{A_{i−1}} d_i²‖_∞ e^{λa}).

Turning back to Lemma 1.5, it is clear that we always have

|f − IEf| ≤ Σ_{i=1}^N ‖d_i‖_∞.

This simple observation of course suggests the possibility of some interpolation between this trivial bound and the sum of the squares (Σ_{i=1}^N ‖d_i‖²_∞)^{1/2} which steps in in Lemma 1.5. This kind of result is described in the next two lemmas.

Lemma 1.7. Let 1 < p < 2 and let q = p/(p − 1) denote the conjugate of p. Let further f be as before with f − IEf = Σ_{i=1}^N d_i, and set now a = max_{i≤N} i^{1/p} ‖d_i‖_∞. Then, for every t > 0,

IP{|f − IEf| > t} ≤ 2 exp(−t^q/C_q a^q)

where C_q > 0 only depends on q.

Proof. By homogeneity we may and do assume that a = 1. For any integer m we can write

|f − IEf| ≤ Σ_{i=1}^m |d_i| + |Σ_{i>m} d_i| ≤ Σ_{i=1}^m i^{−1/p} + |Σ_{i>m} d_i| ≤ qm^{1/q} + |Σ_{i>m} d_i|.

Assume first that t > 2q and denote then by m the largest integer such that t > 2qm^{1/q}. We can apply Lemma 1.5 to (d_i)_{i>m} and thus obtain, in this way, together with the preceding "interpolation" inequality,

IP{|f − IEf| > t} ≤ IP{|Σ_{i>m} d_i| > qm^{1/q}} ≤ 2 exp(−q²m^{2/q}/2 Σ_{i>m} ‖d_i‖²_∞).

Now,

Σ_{i>m} ‖d_i‖²_∞ ≤ Σ_{i>m} i^{−2/p} ≤ (q/(q − 2)) m^{1−2/p},

so that

IP{|f − IEf| > t} ≤ 2 exp(−q(q − 2)m/2) ≤ 2 exp(−t^q/C_q¹)

where C_q¹ = 4(2q)^q/q(q − 2). When t ≤ 2q,

2 exp(−t^q/C_q²) ≥ 2 exp(−(2q)^q/C_q²) ≥ 1

where C_q² = (2q)^q/log 2. The lemma then follows with C_q = max(C_q¹, C_q²).

Lemma 1.8. Let f − IEf = Σ_{i=1}^N d_i be as above and set now a = max_{i≤N} i ‖d_i‖_∞. Then, for every t > 0,

IP{|f − IEf| > t} ≤ 16 exp[−exp(t/4a)].

Proof. It is similar to the preceding one. We again assume by homogeneity that a = 1. We have as before that

|f − IEf| ≤ Σ_{i=1}^m |d_i| + |Σ_{i>m} d_i| ≤ (1 + log m) + |Σ_{i>m} d_i|.

When t ≤ 4, since 16 exp[−exp(t/4)] ≥ 16e^{−e} ≥ 1, the inequality is trivial. When t > 4, let m be the largest integer such that t ≥ 2 + log m. Hence

IP{|f − IEf| > t} ≤ IP{|Σ_{i>m} d_i| > 1} ≤ 2 exp(−1/2 Σ_{i>m} ‖d_i‖²_∞) ≤ 2 exp(−m/2)

where we have used Lemma 1.5 and Σ_{i>m} i^{−2} ≤ 1/m. Since t < 2 + log(m + 1), elementary computations show that 2 exp(−m/2) ≤ 4 exp[−exp(t/4)], and the conclusion follows.

Notes and references

The description of the concentration of measure phenomenon is taken from the paper [G-M] by M. Gromov and V. D. Milman, where further interesting examples of "Lévy families" are discussed (see also [Mi-S], [Mi3]). The use of isoperimetric concentration properties in Dvoretzky's theorem on almost spherical sections of convex bodies (see Chapter 9) was initiated by V. D. Milman [Mi1], and amplified later in [F-L-M]. Further applications to the local theory of Banach spaces are presented in [Mi-S], [Pi16], [Pi18], [TJ2]. Accounts on symmetrizations and rearrangements, geometric inequalities and isoperimetry may be found in [B-Z], [Os], [Beny] (for the two point symmetrization method), or [Ba-T].

The isoperimetric inequality on the sphere (Theorem 1.1) is due to P. Lévy [Le2] and E. Schmidt [Schm]. Schmidt's proof is based on deep isoperimetric symmetrization (rearrangements in the sense of Steiner) arguments. Lévy's proof, which has not been understood for a long time, has been revived and generalized by M. Gromov [Gr1]; see this paper for the history of the result. For a short proof of Theorem 1.1, we refer to [F-L-M]. Poincaré's Lemma is not to be found in [Po] according to [D-F]; it is nicely revisited in [MK]. The simple proof we suggest has been shown to us by J. Saint-Raymond.

The Gaussian isoperimetric Theorem 1.2 is due independently to C. Borell [Bo1] and to V. N. Sudakov and B. S. Tsirel'son [S-T], with the proof sketched here. A. Ehrhard introduced Gaussian symmetrization in [Eh1] and established there the corresponding inequality; see further [Eh2] and also [Eh3], where extremality of half spaces is investigated. Log-concavity of Gaussian Radon measures in locally convex spaces has been shown by C. Borell [Bo2]. Inequality (1.5) is due to G. Pisier with the simple proof of B. Maurey [Pi16]. Inequality (1.6) comes from B. Maurey (cf. [Led7]). They actually deal with vector valued functions, and the method of proof indeed ensures in the same way that if f : E → G is locally Lipschitzian between two Banach spaces E and G, if μ is a Gaussian Radon measure on E and if F : G → ℝ is measurable and convex, then

∫ F(f − ∫ f dμ) dμ ≤ ∫∫ F((π/2) f′(x)·y) dμ(x) dμ(y).

A proof of (1.6) using Yurinskii's observation (see Chapter 6 below) and a central limit theorem for martingales has been noticed by A. de Acosta and J. Zinn (oral communication). (1.7) and the subsequent examples were observed by G. Pisier [Pi16].

The isoperimetric theorem for Haar measure on {−1, +1}^N with respect to the Hamming metric was established by L. H. Harper [Har]. Extensions to measures on {−1, +1}^N with non-symmetric weights are described in [J-S2]. Theorem 1.3 may be found in [Ta9], where comparison with [Har] is discussed. We refer the reader to [Ta19], where a new isoperimetric inequality is presented that improves upon certain aspects of the Gaussian isoperimetric theorem.

The isoperimetric theorem for subsets of a product of probability spaces, Theorem 1.4, is due to the second author [Ta11]. Inequality (1.14) for q not necessarily an integer is new, and is also due to the second author. The binomial computations closely follow the last step in [Ta11]. Inequality (1.15) comes from [Che].

The first two inequalities of Section 1.3 are rather classical. Lemma 1.5 is apparently due to [Azu], in a form put forward in [Ac7]. Lemma 1.6 is the martingale analogue of the classical exponential inequality of Kolmogorov [Ko] (see [Sto]). In this form, and starting with Bernstein's inequality (cf. [Ho]), this type of inequality has been extensively studied, leading to sharp versions in, besides others, [Ben], [Ho] (see also Chapter 6). We use for simplicity Lemma 1.6 in this work, but the preceding references can basically be used equivalently in our applications. Lemmas 1.6 and 1.7 are taken from [Pi16]. For applications of the martingale method to rather different problems, see for example [R-T1], [R-T2], [R-T3], [Sch1], [Sch2], [Sch3], [J-S1], [B-L-M], etc. For applications of all these inequalities to Banach space theory, see, besides others, [Mau3], [Mi-S], [Pi12], [Pi16].
6 is the martingale analogue of the lassi al exponential inequality of Kolmogorov [Ko℄ (see [Sto℄). We use for simpli ity Lemma 1. For appli ation of the martingale method to rather dierent problems. [S h2℄. [J-S1℄. [Ben℄. in a form put forward in [A 7℄. for example. [Pi12℄.6 and 1. see [R-T1℄. [S h1℄.3 are rather lassi al. besides others. et .7 are taken from [Pi16℄. Lemma 1. [Pi16℄. [R-T3℄. [Mi-S℄. In this form. [Ho℄) this type of inequality has been extensively studied leading to sharp versions in. [R-T2℄.rst two inequalities of Se tion 1. Lemmas 1.5 is apparently due to [Azu℄. For sums of independent random variables. [B-L-M℄. [S h3℄.6 in this work but the pre eding referen es an basi ally be used equivalently in our appli ations. [Ho℄ (see also Chapter 6). Lemma 1. see. and starting with Bernstein's inequality ( f. . [Mau3℄. For appli ations of all these inequalities to Bana h spa e theory.

3 Symmetri random variables and Levy's inequalities 2. Generalities on Bana h spa e valued random variables and random pro esses 2.2 Random pro esses and ve tor valued random variables 2.4 Some inequalities for real random variables Notes and referen es .1 Bana h spa e valued Radon random variables 2.30 Chapter 2.

31 Chapter 2. Generalities on Bana h spa e valued random variables and random pro esses This hapter olle ts in a rather informal way some basi fa ts about pro esses and in.

some of whi h are given at the end of the hapter. Only a few proofs are given and many important results are only just mentioned or even omitted. The .nite dimensional random variables. It is therefore re ommended to omplement if ne essary these partial basis with the lassi al referen es. The material we present a tually only appears as the ne essary ba kground for the subsequent analysis developed in the next hapters.

rst se tion des ribes Radon (or separable) ve tor valued random variables while the se ond makes pre ise some terminology and de.

A. that is the -algebra A ontains the negligible sets for IP . espe ially Levy's inequalities and It^o-Nisio's theorem. In a last paragraph. xi (2 IR) . we shall always onsider real Bana h spa es. We also assume for onvenien e that ( . IP) whi h are always assumed to be large enough in order to support all the random variables we will work with. A. x 2 B . Throughout this book also. IP) is omplete.1 Bana h spa e valued Radon random variables A Borel random variable or ve tor with values in a Bana h spa e B is a measurable map X from some probability spa e ( . that is a ve tor spa e over IR or CI with norm kk and omplete with respe t to it. B 0 denotes the topologi al dual of B and f (x) = hf. B denotes a Bana h spa e. The third one presents some important fa ts about symmetri random variables. IP) into B equipped with its Borel -algebra B generated by the open sets of B . A.nitions about random pro esses and general ve tor valued random variables. this de. but a tually almost everything we will present arries over to the omplex ase. 2. The norm on B 0 is also denoted kf k . this is legitimate by Kolmogorov's extension theorem. we mention some lassi al and useful inequalities. For simpli ity. f 2 B 0 . f 2 B 0 . the duality. In fa t. Throughout this book we deal with abstra t probability spa es ( .

like for example the oarsest one for whi h the linear fun tionals are measurable ( ylindri al -algebra). if B is equipped with a dierent -algebra.nition of random variable is somewhat too large for our purposes sin e for example the sum of two random variables is not trivially a random variable. Furthermore. the two de.

. One way to handle them is the on ept of Radon or regular random variables. whi h amounts to some separability of the range.nitions might not agree in general. We do not wish here to deal at length with measurability questions.

let fxi . i 2 INg be dense in E and let " > 0 be . for ea h " > 0 there is a ompa t set K = K (") in B su h that IPfX 2 K g 1 " : (2:1) In other words. the image of the probability IP by X (see below) is a Radon measure on (B. there exists a sequen e (Kn ) of ompa t sets in B su h that IPfX 2 Kn g = 1 n so that X takes almost surely its values in some separable subspa e of B . if. Equivalently X takes almost all its values in some separable (i.1).32 A Borel random variable X with values in B is said to be regular with respe t to ompa t sets. Indeed.e. Conversely. ountably generated) losed linear subspa e E of S B . B) . under (2. or Radon. or yet tight.

2 n)g 1 " 2 n where B (xi . We all thus Radon.1). for every n . for ea h n 1 there exists an integer Nn su h that IPfX 2 [ iNn B (xi . The pre eding argument P shows equivalently that X is almost sure limit of step random variables of the form xi IAi where . a Borel random variable X satisfying (2. By density. further. in a single ball B (xi . 2 n) : K is losed and IPfX 2 K g 1 " . 2 n) . or separable. Set then K = K (") = \ [ n1 iNn B (xi . 2 n ) denotes the losed ball of enter xi and radius 2 n in B .xed. therefore. K is ompa t sin e from ea h sequen e in K one an extra t a subsequen e ontained. this subsequen e is. a Cau hy sequen e hen e onvergent by ompletness of B .

1) together with the analogous property for losed sets whi h holds from the very de. Note also that (2.1) is extended into IPfX 2 Ag = supfIPfX 2 K g . This follows from (2. K Ag for every Borel set A in B .nite xi 2 B and Ai 2 A . K ompa t.

We are mostly interested in this work in results on erning sequen es of random variables. it is sometimes onvenient to assume the Bana h spa e itself to be separable.nition of the Borel -algebra. Sin e Radon random variables have separable range. Note further that when . When dealing with sequen es of Radon random variables. we will therefore usually assume for onvenien e and without any loss in generality the Bana h spa e to be separable.

33 B is separable all the "reasonable" de.

in parti ular the Borel and ylindri al -algebras. Note also. IE'(X ) = Z B '(x)d(x) : The distribution of a Radon random variable is ompletely determined by its . and this observation will motivate parts of the next se tion. for any real bounded measurable ' on B . that if B is separable the norm an be expressed as a supremum kxk = sup jf (x)j . For a random variable X with values in B . x 2B.nitions of -algebras on B oin ide. the probability measure = X image of IP by X is alled the distribution (or law) of X . f 2D over a ountable set D of linear fun tionals of norm 1 (or less than or equal to 1 ).

Note that it suÆ es to know that f (X ) and f (Y ) have the same law on some weakly dense subset of B 0 . Indeed. The topology generated by these neighborhoods is alled the weak topology and a sequen e (n ) in P (B ) onverging with respe t to this topology is said to onverge weakly. Denote by P (B ) the spa e of all Radon probability measures on B . then X = Y . if X and Y are Radon random variables su h that for all f in B 0 . are real bounded ontinuous on B . ompletely determines the distribution of X . For ea h in neighborhood Z Z f 2 P (B ) . j 'i d 'i d j < " . f 2 B0 . we an assume B separable. More pre isely. f (X ) and f (Y ) have (as real random variables) the same distribution.nite dimensional proje tions. the Borel -algebra is therefore generated by the algebra of ylinder sets. i N . the hara teristi fun tionals on B 0 IE exp if (X ) = Z B exp(if (x))d(x) . As a onsequen e of the pre eding. and a ording to the uniqueness theorem in the s alar ase. Sin e f (X ) = f (Y ) . i N g B P (B ) onsider the B where " > 0 and 'i . Observe that n ! weakly if and only if lim n!1 Z 'dn = Z 'd . X and Y agree on this algebra and the result follows.

lim inf (G) (G) n!1 n for ea h open set G . Thus. in order to he k that a sequen e (n ) in P (B ) onverges weakly. it suÆ es to show that (n ) is relatively ompa t in the weak topology and that all possible limits are the same.34 for every bounded ontinuous ' on B . The spa e P (B ) equipped with the weak topology is known to be a omplete metri spa e (separable if B is separable). or. equivalently. The latter an be veri. It an be shown further that this holds if and only if lim sup n (F ) (F ) n!1 for ea h losed set F in B . in parti ular.

Theorem 2.ed along linear fun tionals.2) holds if and only if for ea h " > 0 there is a . A family (i )i2I in P (B ) is relatively ompa t for the weak topology if and only if for ea h " > 0 there is a ompa t set K in B su h that (2:2) i (K ) 1 " for all i 2 I : This ompa tness riterion may be expressed in various manners depending on the ontext. a very useful riterion of Prokhorov hara terizes relatively ompa t sets of P (B ) as those whi h are uniformly tight with respe t to ompa t sets.1. For example. (2. For the former.

nite set A in B su h that (2:3) i (x 2 B .1 is based on the idea of . Another equivalent formulation of Theorem 2. A) < ") 1 " for all i 2 I where d(x. A) denotes the distan e (in B ) from the point x to the set A . d(x.

nite dimensional approximation and is most useful in appli ations. The idea is simply that bounded sets in . It is sometimes referred to as " at on entration".

nite dimension are relatively ompa t and therefore if a set of measures is on entrated near a .

Then kT (x)k = d(x. x 2 B . If F is a losed subspa e of B . (We denote in the same way the norm of B and the norm of B=F . The following simple fun tional analyti lemma makes this lear.nite dimensional subspa e it should be lose to be relatively ompa t.) . F ) . denote by T = TF the anoni al quotient map T : B ! B=F .

2.35 Lemma 2. A subset K of B is relatively ompa t if and only if it is bounded and for ea h " > 0 there is a .

F ) < " for x 2 K ). d(x. if. A ording to this result and to Theorem 2. kT (x)k < " for every x in K (i. there is a bounded set L in B su h that i (L) 1 " for every i 2 I and a .nite dimensional subspa e F of B su h that if T = TF . for ea h " > 0 .e.1.

then the family (i )i2I is relatively ompa t. d(x.4) is satis. If is a Borel measure on B and f a linear fun tional. when (2. F ) < ") 1 " for all i 2 I . A tually. Æ f 1 denotes the measure on IR image of by f . To he k the pre eding laim. let F of dimension N su h that (2.4) holds. the existen e of L is a too strong hypothesis and it is enough to assume that for all f in B 0 .nite dimensional subspa e F of B su h that (2:4) i (x 2 B . (i Æ f 1 )i2I is a weakly relatively ompa t family of probability measures on the line.

Therefore. if (i Æ f 1 )i2I is relatively ompa t for every f in in B 0 (a tually a weakly dense subset of B 0 would suÆ e). and if (2. the family (i )i2I is uniformly almost on entrated on a bounded set and Prokhorov's riterion is then ful.4) holds. then kxk 2a . if max jfi (x)j a .ed. whenever x 2 F and a > 0 . there exist linear fun tionals f1 . By the Hahn-Bana h theorem. fn in the unit ball of B 0 su h that. : : : .

By what pre eeds. lim IPfkXn n!1 X k > "g = 0 : . (Xn ) onverges weakly to X as soon as (f (Xn )) onverges weakly (as a sequen e of real random variables) to f (X ) for all f in B 0 (or only in a weakly dense subset) and the sequen e (Xn ) is tight in the sense that.lled. A sequen e (Xn ) of Radon random variables with values in B onverges weakly to a Radon random variable X if the sequen e of distributions (Xn ) onverges weakly to X .4). In the ve tor valued ase. there exists a ompa t set K in B with IPfXn 2 K g 1 " for all n (or only all n large enough sin e the Xn 's are themselves tight). for ea h " > 0 .1. For real random variables.3) or (2. The sequen e (Xn ) is said to onverge in probability (in measure) to X if. this an be established through (2. a elebrated theorem of P. Levy indi ates that (Xn ) onverges weakly if and only if the orresponding sequen e of hara teristi fun tions (Fourier transforms) onverges pointwise (to a ontinuous limit). by Theorem 2. for ea h " > 0 .

for ea h " > 0 .36 It is said to be bounded in probability (or sto hasti ally bounded) if. one an .

nd A > 0 su h that sup IPfkXnk > Ag < " : n The topology of onvergen e in probability is metrizable and a possible metri an be given by IE min(1. If thus one has to he k for example that (Xn ) onverges to 0 in probability. Denote by L0(B ) = L0 ( . B ) the ve tor spa e of all random variables (on ( . A. it suÆ e to show that the sequen e (Xn ) is tight and that all possible limits are 0 . A. it also onverges weakly and the onverse holds true if the limiting distribution is on entrated on one point. IP. kX Y k) . IP) ) with values in B equipped with the topology of onvergen e in probability. If (Xn ) onverges in L0 (B ) . The L0 and weak topologies are thus lose in this sense and may be onsidered as weak statements as opposed to the strong almost sure properties (de.

Finally, both in the scalar and vector valued cases (and without confusion), a sequence (X_n) converges almost surely (almost everywhere) to X if

    P{ lim_{n→∞} X_n = X } = 1 .

The sequence (X_n) is almost surely bounded if

    P{ sup_n ‖X_n‖ < ∞ } = 1 .

Almost sure convergence is not metrizable. It clearly implies convergence in probability, which in turn implies weak convergence. Conversely, an important theorem of Skorokhod [Sk1] asserts that if (X_n) converges weakly to X, there exist, on a possibly richer probability space, random variables X_n' and X' such that X_n' has the same distribution as X_n for every n, X' has the same distribution as X, and X_n' → X' almost surely. This property is useful in particular for convergence of moments, for example in central limit theorems.

If 0 < p ≤ ∞, denote by L_p(B) = L_p(Ω, A, P; B) the space of all random variables X (on (Ω, A, P)) with values in B such that ‖X‖^p is integrable:

    E‖X‖^p = ∫ ‖X‖^p dP < ∞ ,   p < ∞ ,

and ‖X‖_∞ = ess sup ‖X‖ < ∞ if p = ∞. We denote moreover by ‖X‖_p the quantity (E‖X‖^p)^{1/p}. If B = ℝ, we set simply L_p = L_p(ℝ) (0 ≤ p ≤ ∞). The spaces L_p(B) are Banach spaces for 1 ≤ p ≤ ∞ (metric vector spaces for 0 ≤ p < 1). If (X_n) converges to X in L_p(B), it converges to X in L_0(B), that is in probability, and a fortiori weakly.

We conclude this section with some remarks concerned with integrability. A Radon random variable X on (Ω, A, P) with values in B belongs to L_1(B), or is strongly or Bochner integrable, if the real random variable ‖X‖ is integrable (E‖X‖ < ∞). Suppose now that we are given X such that for each f in B' the real random variable f(X) is integrable. If we consider the operator T : B' → L_1 = L_1(Ω, A, P) defined by Tf = f(X), T has clearly a closed graph. T is therefore a bounded operator, from which we deduce that f → E f(X) defines a continuous linear map on B', that is, an element, let us call it z, of the bidual B'' of B. The Radon random variable X is said to be weakly or Pettis integrable if for each f in B', f(X) is integrable and the element z of B'' just constructed actually belongs to B. If this is the case, z is then denoted by EX.

It is not difficult to see that if the Radon random variable X is strongly integrable, it is weakly integrable and ‖EX‖ ≤ E‖X‖. Indeed, for each ε > 0 we can choose a compact set K in B such that E(‖X‖ I_{{X ∉ K}}) < ε. Let (A_i)_{i≤N} be a finite partition of K with sets of diameter less than ε and fix for each i a point x_i in A_i. Set then

    Y(ε) = Σ_{i=1}^N x_i I_{{X ∈ A_i}} .

It is plain by construction that E‖X − Y(ε)‖ < 2ε and that Y(ε) is weakly integrable with expectation E Y(ε) = Σ_{i=1}^N x_i P{X ∈ A_i}. The conclusion then follows from the fact that (E Y(1/n)) is a Cauchy sequence in B which converges therefore to an element EX of B satisfying f(EX) = E f(X) for every f in B'.

In the same way, conditional expectations of vector valued random variables can be constructed. Let X be a Radon random variable in L_1(Ω, A, P; B) and let F be a sub-σ-algebra of A. Then one can define E^F X as the Radon random variable in L_1(Ω, F, P; B) such that, for any F in F,

    ∫_F E^F X dP = ∫_F X dP .

It satisfies ‖E^F X‖ ≤ E^F ‖X‖ almost surely and f(E^F X) = E^F f(X) almost surely for every f in B'. Note further that, by separability and extension of a classical martingale theorem, if X is in L_1(Ω, A, P; B), there exists a sequence (A_N) of finite sub-σ-algebras of A such that, if X_N = E^{A_N} X, then (X_N) converges almost surely and in L_1(B) to X. If X is in L_p(B), 1 < p < ∞, the convergence also takes place in L_p(B).
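For a random variable with finitely many atoms, the simple functions Y(ε) in the construction above reduce to X itself, and the Bochner expectation is the weighted sum Σ x_i P{X ∈ A_i}. A minimal sketch in B = ℝ² (purely illustrative, not part of the text) checks ‖EX‖ ≤ E‖X‖ and the weak (Pettis) consistency f(EX) = E f(X):

```python
import math

# X takes value x_k with probability p_k in B = R^2; for a finitely supported
# X the simple functions Y(eps) reduce to X itself and EX = sum x_k p_k
atoms = [((1.0, 0.0), 0.25), ((0.0, 2.0), 0.25), ((-1.0, -1.0), 0.5)]

def norm(v):
    return math.hypot(v[0], v[1])

EX = tuple(sum(x[j] * p for x, p in atoms) for j in range(2))  # strong integral
E_norm = sum(norm(x) * p for x, p in atoms)                    # E ||X||
print(EX, E_norm)
assert norm(EX) <= E_norm + 1e-12          # ||EX|| <= E ||X||

# weak (Pettis) consistency: f(EX) = E f(X) for linear functionals f
for f in [(1.0, 0.0), (0.3, -0.7)]:
    f_EX = f[0] * EX[0] + f[1] * EX[1]
    E_fX = sum((f[0] * x[0] + f[1] * x[1]) * p for x, p in atoms)
    assert abs(f_EX - E_fX) < 1e-12
```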

We would like to mention for the sequel (in particular Chapter 8) that, by analogy with the preceding, if X is a Radon random variable with values in B such that E f²(X) < ∞ for every f in B', the operator Tf = f(X) is also bounded from B' into L_2. Furthermore, it can be shown as before that for every ξ in L_2, ξX is weakly integrable, so that E(ξX) is well defined as an element of B. In particular, for f, g in B',

    g( E(f(X)X) ) = E( f(X) g(X) ) ,

which defines the so-called "covariance structure" of a random variable X weakly in L_2.

2.2. Random processes and vector valued random variables

The concept of Radon or separable random variable is a convenient one when dealing with weak convergence or tightness properties. We will use it indeed in the typical weak convergence theorem which is the central limit theorem, and also in some related questions on the law of large numbers and the law of the iterated logarithm. This concept is also a way of taking easily into account various measurability problems. However, Radon random variables form a somewhat too restrictive setting for other types of questions. For example, if we are given a sequence (X_n) of real random variables such that sup_n |X_n| < ∞ almost surely, and if we ask (for example) for the integrability properties or tail behavior of this supremum, we are clearly faced with a random element of infinite dimension, but we need not (and in general do not) have a Radon random vector. In other words, it would be convenient to have a notion of random variable with values in ℓ_∞. The space c_0 of all real sequences tending to 0 is a separable subspace of ℓ_∞, but ℓ_∞ itself is not separable. Recall further that every separable Banach space can be realized isometrically as a closed subspace of ℓ_∞. On the other hand, another category of infinite dimensional random elements are the random functions or stochastic processes.

Let T be an (infinite) index set, which will usually be assumed to be a metric space (T, d). A random function or process X = (X_t)_{t∈T} indexed by T is a collection of real random variables X_t, t ∈ T. By the distribution or law of X we mean the distribution on ℝ^T, equipped with the cylindrical σ-algebra generated by the cylinder sets, determined by the collection of all marginal distributions of the finite dimensional random vectors (X_{t_1}, ..., X_{t_N}), t_i ∈ T. A priori, a random process X = (X_t)_{t∈T} is almost surely bounded, or has almost all its trajectories or sample paths bounded or continuous, if, for almost all ω, the path t → X_t(ω) is bounded or continuous. We often study throughout this book when a given random process is almost surely bounded and/or continuous, and, when this is the case, ask for possible integrability properties or tail behaviors of

    sup_{t∈T} |X_t|

whenever this makes sense. These considerations of course raise some non-trivial measurability questions as soon as T is no longer countable.

However, in order to prove that a random process is almost surely bounded or continuous, and to deal with these properties, it is preferable and convenient to know that the sets involved in the definitions are properly measurable. It is not the focus of this work to enter these complications, but rather to try to reduce to a simple setting in order not to hide the main ideas of the theory. Let us therefore briefly indicate in this section some possible and classical arguments used to handle these annoying measurability questions. These will then mostly be used without any further comment in the sequel.

Let X = (X_t)_{t∈T} be a random process. When T is not countable, the pointwise supremum sup_{t∈T} |X_t(ω)| is usually not well defined, since one has to take into account an uncountable family of negligible sets. It is therefore necessary to consider a handy notion of measurable supremum of the collection. One possible way is to understand quantities such as sup_{t∈T} |X_t| (or similar ones, sup_{t∈T} X_t, sup_{s,t∈T} |X_s − X_t|, ...) as the essential (or lattice) supremum in L_0 of the collection of random variables |X_t|, t ∈ T. Even simpler, if the process X is in L_p, 0 < p < ∞, that is, if E|X_t|^p < ∞ for every t in T, we can simply set

    E sup_{t∈T} |X_t|^p = sup{ E sup_{t∈F} |X_t|^p ; F finite in T } .

This lattice supremum also works in more general Orlicz spaces than the L_p-spaces and will mainly be used in Chapters 11 and 12 in order to show that a process is bounded, reducing basically the various estimates to the case where T is finite.
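The quantity E sup_{t∈F} |X_t|^p is monotone in the finite subset F, so it increases to the lattice supremum as F grows. A small numerical sketch (illustrative only, with randomly generated bounded sample paths on a finite sample space) checks this monotonicity:

```python
import random

random.seed(0)
m, n = 50, 20     # |Omega| (finite uniform sample space) and |T|
# a bounded process given by a table X[w][t]  (purely illustrative data)
X = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(m)]

def e_sup_abs(F):
    """E sup_{t in F} |X_t| over the finite uniform sample space."""
    return sum(max(abs(X[w][t]) for t in F) for w in range(m)) / m

# the lattice supremum over finite subsets F increases to the full value
vals = [e_sup_abs(range(k)) for k in range(1, n + 1)]
print(vals[0], vals[-1])
assert all(vals[i] <= vals[i + 1] + 1e-12 for i in range(n - 1))
assert vals[-1] <= 1.0 + 1e-12   # the process is bounded by 1
```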

Another possibility is the probabilistic concept of separable version, which allows one to deal similarly with properties other than boundedness, like for example continuity. Let (T, d) be a metric space. A random process X = (X_t)_{t∈T} defined on (Ω, A, P) is said to be separable if there exist a negligible set N and a countable set S in T such that, for every ω ∉ N, every t ∈ T and every ε > 0,

    X_t(ω) ∈ closure{ X_s(ω) ; s ∈ S , d(s, t) < ε }

where the closure is taken in ℝ ∪ {±∞}. If X is separable, sup_{t∈T} |X_t(ω)| = sup_{t∈S} |X_t(ω)| for every ω ∉ N, and since S is countable there is, of course, no difficulty in dealing with this type of supremum. Note that if there exists a separable random process on (T, d), then (T, d) is separable as a metric space. If (T, d) is separable and X is almost surely continuous, then X is separable, and, in particular, every dense sequence S in T can be taken as separable set. Hence, when a random process is separable, there is no difficulty in dealing with almost sure boundedness or continuity of the trajectories, since these properties are reduced along some countable parameter set. In general however, a given random process X = (X_t)_{t∈T} need not be separable. But in a rather general setting, it admits a version which is separable. A random process Y = (Y_t)_{t∈T} is said to be a version of X if, for every t ∈ T, Y_t = X_t with probability one; in particular, Y has the same distribution as X. It is known that when (T, d) is separable and when X = (X_t)_{t∈T} is continuous in probability, that is when, for every t_0 ∈ T and every ε > 0,

    lim_{t→t_0} P{ |X_t − X_{t_0}| > ε } = 0 ,

then X admits a separable version. This hypothesis will always be satisfied when we need such a result, so that we use it freely below.

Summarizing, the study of almost sure boundedness and continuity of random processes can essentially be reduced, through the tools of essential supremum or separable version, to the setting of a countable index set for which no measurability question occurs. In our first part, we will therefore basically study integrability properties and tail behaviors of suprema of bounded processes indexed by a countable set. The second part examines when a given process is almost surely bounded or continuous, and there we use separable versions. The purposes of the first part motivate the introduction of a slightly more general notion of random variable with vector values, in order to possibly unify results on Radon random variables and on ℓ_∞-valued random variables or bounded processes. One possible definition is the following.

Assume we are given a Banach space B (not necessarily separable!) such that there exists a countable subset D of the unit ball or sphere of the dual space B' such that

    ‖x‖ = sup_{f∈D} |f(x)| ,   x ∈ B .

Recall that separable Banach spaces possess this property. The typical example we have of course in mind is the space ℓ_∞. Given B and D like this, we can say that X is a random variable with values in B if X is a map from some probability space (Ω, A, P) into B such that f(X) is measurable for every f in D. We can then work freely with the measurable function ‖X‖. This definition includes Radon random variables. It also includes almost surely bounded processes X = (X_t)_{t∈T} indexed by a countable set T: take then simply B = ℓ_∞(T) and D = T identified with the evaluation maps. As a remark, note that when X = (X_t)_{t∈T} is an almost surely continuous process on (T, d) compact, it defines a Radon random variable in the separable Banach space C(T) of continuous functions on T.

When X and B are as before, we simply say that X is a random variable (or vector) with values in B, as opposed to a Radon random variable. When we are dealing with a separable Banach space B, we however do not distinguish and simply speak of a random variable (or Borel random variable) with values in B. We will try however to recall, each time it will be necessary, the exact setting in which we are working, not trying to avoid repetitions in this regard. If we have to deal with the distribution of such a random variable X, we simply mean the one determined by its marginal distributions, i.e. the distributions of the finite dimensional random vectors (f_1(X), ..., f_N(X)) where f_1, ..., f_N ∈ D. Again, in the case of a Radon random variable, this coincides with the usual definition (choose D weakly dense in the unit ball of B').

To conclude this section, let us note that for this generalized notion of random variable with values in a Banach space B we can also speak of the spaces L_p(B), 0 ≤ p ≤ ∞, as the spaces of random variables X such that ‖X‖_p = (E‖X‖^p)^{1/p} < ∞ for 0 < p < ∞, and of the corresponding concepts for p = 0 or ∞. Almost sure convergence of a sequence (X_n) makes sense similarly. If X is a random variable with values in B in the general sense just described, let us simply say that X has mean zero if E f(X) = 0 for all f in D (we then sometimes write, with some abuse, that EX = 0).

Finally in this section, let us mention a trivial but useful observation based on independence and Jensen's inequality. Let F be a convex function on ℝ₊ and let X and Y be independent random variables in B such that E F(‖X‖) < ∞ and E F(‖Y‖) < ∞. Then, if Y has mean zero,

(2.5)    E F(‖X + Y‖) ≥ E F(‖X‖) .

Indeed, this follows simply by convexity of F(‖·‖) and partial integration with respect to Y using Fubini's theorem.
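Inequality (2.5) can be checked exactly on a discrete example: if Y takes the values ±v with probability 1/2 (so EY = 0), convexity of F(‖·‖) gives (F(‖x + v‖) + F(‖x − v‖))/2 ≥ F(‖x‖) for each atom x of X, and (2.5) follows by averaging. A sketch (illustrative only; all data hypothetical):

```python
import math

def norm(v):
    return math.hypot(v[0], v[1])

def F(u):                 # a convex function on R+
    return u ** 2

# X with two atoms; Y = +v or -v with probability 1/2, so Y has mean zero
X_atoms = [((1.0, 2.0), 0.5), ((3.0, -1.0), 0.5)]
v = (0.7, -0.4)

lhs = sum(p * 0.5 * (F(norm((x[0] + v[0], x[1] + v[1])))
                     + F(norm((x[0] - v[0], x[1] - v[1]))))
          for x, p in X_atoms)                      # E F(||X + Y||)
rhs = sum(p * F(norm(x)) for x, p in X_atoms)       # E F(||X||)
print(lhs, rhs)
assert lhs >= rhs - 1e-12                           # inequality (2.5)
```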

2.3. Symmetric random variables and Lévy's inequalities

In this paragraph, B denotes a Banach space such that, for some countable set D in the unit ball of B', ‖x‖ = sup_{f∈D} |f(x)| for all x ∈ B. A random variable X with values in B is called symmetric if X and −X have the same distribution. Equivalently, X has the same distribution as εX where ε denotes a symmetric Bernoulli or Rademacher random variable taking the values ±1 with probability 1/2 and independent of X. (Although the name of Bernoulli is historically more appropriate, we will mostly speak of Rademacher variables since this is the most commonly used terminology in the field.) We call Rademacher sequence (or Bernoulli sequence) a sequence (ε_i)_{i∈ℕ} of independent Rademacher random variables, taking thus the values +1 and −1 with equal probability. A sequence (X_i) of random variables with values in B is called a symmetric sequence if, for every choice of signs η_i = ±1, (η_i X_i) has the same distribution as (X_i) (i.e., for every N, (η_1 X_1, ..., η_N X_N) has the same law as (X_1, ..., X_N) in B^N). The typical example of a symmetric sequence consists in a sequence of independent and symmetric random variables. Equivalently, (X_i) has the same distribution as (ε_i X_i) where (ε_i) is a Rademacher sequence independent of (X_i).

Note that for a general random variable X, there is a canonical way of generating a symmetric random variable not too "far" from X: consider indeed X̃ = X − X' where X' is an independent copy of X, with the same distribution as X. In this setting, we will usually assume that X and X' are constructed on different probability spaces (Ω, A, P) and (Ω', A', P'). This simple observation is at the basis of randomization (or symmetrization, in the probabilistic sense), which is one of the most powerful tools in Probability in Banach spaces. In the setting of symmetric sequences, using Fubini's theorem, it will be convenient to denote by P_ε, E_ε (resp. P_X, E_X) conditional probability and expectation with respect to the sequence (ε_i) (resp. (X_i)), ε representing (ε_i) and X representing (X_i). We hope that this slight abuse of notation will not get confusing in the sequel.

Partial sums of a symmetric sequence of random variables satisfy some very important inequalities known as Lévy's inequalities. Recall that they apply to the important case of independent symmetric random variables. They can be stated as follows.

Proposition 2.3. Let (X_i) be a symmetric sequence of random variables with values in B. For every k, set S_k = Σ_{i=1}^k X_i. Then, for every integer N and every t > 0,

(2.6)    P{ max_{k≤N} ‖S_k‖ > t } ≤ 2 P{ ‖S_N‖ > t }

and

(2.7)    P{ max_{i≤N} ‖X_i‖ > t } ≤ 2 P{ ‖S_N‖ > t } .

If (S_k) converges in probability to S, the inequalities extend to the limit as

    P{ sup_k ‖S_k‖ > t } ≤ 2 P{ ‖S‖ > t }

and similarly for (2.7). Note also that, by integration by parts, for every 0 < p < ∞,

    E max_{k≤N} ‖S_k‖^p ≤ 2 E‖S_N‖^p

and similarly with X_k instead of S_k.

Proof. We only detail (2.6), (2.7) being established exactly in the same way. Let τ = inf{k ≤ N ; ‖S_k‖ > t}. We have

    P{ max_{k≤N} ‖S_k‖ > t } = Σ_{k=1}^N P{τ = k}   and   P{‖S_N‖ > t} ≥ Σ_{k=1}^N P{ ‖S_N‖ > t , τ = k } .

Set R_k = S_N − S_k. Since, for every k, (X_1, ..., X_k, −X_{k+1}, ..., −X_N) has the same distribution as (X_1, ..., X_N) and {τ = k} only depends on X_1, ..., X_k, we also have that

    P{‖S_N‖ > t} ≥ Σ_{k=1}^N P{ ‖S_k − R_k‖ > t , τ = k } .

Now, by the triangle inequality, 2‖S_k‖ ≤ ‖S_k + R_k‖ + ‖S_k − R_k‖ = ‖S_N‖ + ‖S_k − R_k‖, so that on {τ = k}, where ‖S_k‖ > t, at least one of ‖S_N‖ > t and ‖S_k − R_k‖ > t holds; summing then the two preceding estimates yields

    2 P{‖S_N‖ > t} ≥ Σ_{k=1}^N P{τ = k} = P{ max_{k≤N} ‖S_k‖ > t } .

The proof of Proposition 2.3 is complete.

Among the consequences of Lévy's inequalities is a useful result on convergence of series of symmetric sequences. This is known as the Lévy-Itô-Nisio theorem, which we present in the context of Radon random variables.
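Lévy's inequality (2.6) can be illustrated by simulation for a Rademacher random walk, the simplest symmetric sequence. The sketch below (an illustration added here, not part of the text) compares the empirical value of P{max_{k≤N} |S_k| > t} with twice the empirical value of P{|S_N| > t}:

```python
import random

random.seed(1)
N, trials, t = 25, 10000, 3.0
count_max = count_SN = 0
for _ in range(trials):
    s = 0.0
    running_max = 0.0
    for _ in range(N):
        s += random.choice((-1.0, 1.0))   # Rademacher steps: a symmetric sequence
        running_max = max(running_max, abs(s))
    count_max += running_max > t
    count_SN += abs(s) > t
p_max, p_SN = count_max / trials, count_SN / trials
print(p_max, 2 * p_SN)
assert p_max <= 2 * p_SN + 0.02   # (2.6), up to Monte Carlo error
```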

Denote. The following are equivalent: (i) the sequen e (Sn ) onverges almost surely. for ea h n . (ii) (Sn ) onverges in probability. Proof. (iii) ) (ii). Let (Xi ) be a symmetri sequen e of Borel random variables with values in a separable P Bana h spa e B . (iv) there exists a probability measure in in B 0 . by n the distribution of the n -th partial sum Sn = ni=1 Xi . We . (iii) (n ) onverges weakly. P (B ) su h that n Æ f 1 ! Æ f 1 weakly for every f By a simple symmetrization argument. We shall ome ba k to this in Chapter 6.4. Observe further from the proof that the equivalen e between (i) and (ii) is not restri ted to the Radon setting. the equivalen es (i)-(iii) an be show to also hold for sums of independent (not ne essarily symmetri ) random variables.44 Theorem 2.

Hen e. su h that Xi0 onverges weakly to some X . n X i=1 f 2 (Xi (!)) 8M 2 : . IP(A) = IPX (A) > 1 " . one an extra t a further one. we an apply Khint hine's inequalities to the sum Pn i=1 "i f (Xi (! )) together with Lemma 4. denote it by i0 .2 and (4. IP" fj n X i=1 "i f (Xi (!))j > M g < "g : By Fubini's theorem. If ! 2 A . Thus. along every linear fun tional f . from every subsequen e. For every n . for all " > 0 . f (Xi0 ) ! f (X ) weakly. there is M > 0 su h that sup IPfjf (Sn )j > M g < "2 : n Re all now the symmetry assumption and the pre eding notations: sup IPX IP" fj n n X i=1 "i f (Xi )j > M g < "2 where ("i ) is a Radema her sequen e independent of (Xi ) . if " 1=8 . But now (f (Sn )) onverges in distribution as a sequen e of real random variables so that. By dieren e.3) below to see that. let A = f! .rst show that Xi ! 0 in probability. (Xi ) is weakly relatively ompa t.

if this is not the ase. Sin e Tk = Xi onverges weakly. If Sn ! S in probability. we may apply the pre eding step to i k get a ontradi tion. By Prokhorov's riterion and (2. We are left with the proof of (iv) ) (iii) sin e the other impli ations are obvious.45 It follows that n X sup IPf n i=1 f 2 (Xi ) > 8M 2g < " P from whi h we get that f 2 (Xi ) < 1 almost surely. This shows that Xi ! 0 in probability. IPf kSn Snk k > 2 k+1 g 2IPfkSnk Snk k > 2 k+1 g 2(IPfkSnk S k > 2 k g + IPfkSnk max nk 1 <nnk 1 1 1 Sk > 2 k 1 g) : By the Borel-Cantelli lemma. Hen e (Xi ) is a i tight sequen e with only 0 as possible limit point. (Sn ) is almost surely a Cau hy sequen e and thus (i) holds. We then dedu e (ii). (Sn ) is not a Cau hy sequen e in probability and there exists " > 0 and a stri tly in reasing sequen e of integers (nk ) su h that Tk = Snk+1 Snk does not P P onverge in probability to 0 .4) it is enough to show that for every " > 0 there exists a . (ii) ) (i). Thus f (Xi ) ! 0 almost surely. Indeed. there exists a sequen e (nk ) of integers su h that X k IPfkSnk Sk > 2 kg < 1 : By Levy's inequalities.

" > 0 and F losed subspa e in B . IPfd(Sn . Now. for every losed subspa e F in B . F ) = sup jf (x)j for every f 2D . it of ourse suÆ es to show that IPfd(Sn . for every n . sin e B is separable. d(x. F ) > "g < " : Sin e is a Radon measure.nite dimensional subspa e F of B su h that. there is a ountable subset D = ffm g of the unit ball of B 0 su h that d(x. F ) > ") for any n. F ) > "g 2(x .

sup jf (x)j > ") f 2D 2(x . (f1 (Sn ).6)) in the limit. max jf (x)j > ") im i m 2(x . : : : . Hen e. max jfi (x)j > ") : im im The on lusion is then easy: IPfd(Sn . IPfmax jfi (Sm )j > "g 2(x . The . Some inequalities for real random variables We on lude this hapter with two elementary and lassi al inequalities for real random variables whi h will be useful to re ord at this stage. 2. fm (Sn )) is weakly onvergent in IRm to the orresponding marginal of . F ) > ") : Theorem 2.4. F ) > "g = IPf sup jf (Sn )j > "g f 2D sup IPfmax jf (S )j > "g im i n m 2 sup (x . for every n . by Levy's inequalities ((2. d(x.4 is thus established. For every m .46 x .

The first one is a version of the binomial inequalities (compare with (1.15), (1.16)), while the second is the inequality at the basis of the Borel-Cantelli lemma.

Lemma 2.5. Let (A_i) be a sequence of independent sets such that a = Σ_i P(A_i) < ∞. Then, for every n,

    P{ Σ_i I_{A_i} ≥ n } ≤ aⁿ/n! ≤ (ea/n)ⁿ .

Proof. We have

    P{ Σ_i I_{A_i} ≥ n } ≤ Σ Π_{j=1}^n P(A_{i_j})

where the summation is over all choices of indexes i_1 < ⋯ < i_n. Now

    Σ_{i_1<⋯<i_n} Π_{j=1}^n P(A_{i_j}) = (1/n!) Σ_{i_1,…,i_n distinct} Π_{j=1}^n P(A_{i_j}) ≤ (1/n!) Σ_{all i_1,…,i_n} Π_{j=1}^n P(A_{i_j}) = aⁿ/n! .

Lemma 2.6. Let (Z_i)_{i≤N} be independent positive random variables. Then, for every t > 0,

    P{ max_{i≤N} Z_i > t } ≥ Σ_{i=1}^N P{Z_i > t} / ( 1 + Σ_{i=1}^N P{Z_i > t} ) .

In particular, if P{ max_{i≤N} Z_i > t } ≤ 1/2, then

    Σ_{i=1}^N P{Z_i > t} ≤ 2 P{ max_{i≤N} Z_i > t } .

Proof. For x ≥ 0, 1 − x ≤ exp(−x) and 1 − exp(−x) ≥ x/(1 + x). Then, by independence,

    P{ max_{i≤N} Z_i > t } = 1 − Π_{i=1}^N ( 1 − P{Z_i > t} )
        ≥ 1 − exp( − Σ_{i=1}^N P{Z_i > t} )
        ≥ Σ_{i=1}^N P{Z_i > t} / ( 1 + Σ_{i=1}^N P{Z_i > t} ) .

If P{ max_{i≤N} Z_i > t } ≤ 1/2, the preceding inequality ensures that Σ_{i=1}^N P{Z_i > t} ≤ 1, so that the second conclusion of the lemma follows from the first one.
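When each Z_i exceeds t with a known probability p_i, both sides of Lemma 2.6 can be computed exactly, since P{max Z_i > t} = 1 − Π(1 − p_i) by independence. The following sketch (illustrative only, with hypothetical values p_i) verifies the first inequality and the factor-2 conclusion:

```python
import math

# exact check: if P{Z_i > t} = p_i for independent Z_i, then
# P{max Z_i > t} = 1 - prod(1 - p_i)
p = [0.05, 0.1, 0.02, 0.07]
S = sum(p)
p_max = 1.0 - math.prod(1.0 - q for q in p)
print(p_max, S / (1.0 + S))
assert p_max >= S / (1.0 + S) - 1e-12     # first inequality of Lemma 2.6
assert p_max <= 0.5 and S <= 2.0 * p_max  # the factor-2 conclusion
```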

Notes and References

The following references will hopefully complete appropriately this survey chapter.

Basics on metric spaces, Banach spaces, infinite dimensional vector spaces, etc., can be found in all classical treatises on functional analysis, like for example [Dun-S]. Informations on Banach spaces more in the spirit of this book are given in [Da], [Bea], [Li-T1], [Li-T2], as well as in the references therein. Probability distributions on metric spaces and weak convergence are presented in [Par], [Bi]; see also [Du3]. Prokhorov's criterion comes from [Pro1]; more in the context of probability in Banach spaces, the terminology "flat concentration" is used in [Ac1]. Various accounts on random variables with values in Banach spaces may be found in [Ka1], [HJ3], [Ar-G2], [Li], [Schw2], [V-T-C]. For Skorokhod's theorem [Sk1], see also [Du3]. The necessary elements on vector valued martingales and their convergence may be found in [Ne3]; see also [Ba], [Ne1].

Generalities on random processes and separability are given in [Doo], [Me], [Ar-G2], [J-M3]. Symmetric sequences and randomization techniques were first considered by J.-P. Kahane in [Ka1], who gave a proof of Lévy's inequalities [Le1] in this setting of vector valued random variables. See also [HJ1], [HJ2], [HJ3]. For symmetric sequences, see J. Hoffmann-Jørgensen [HJ3]. Theorem 2.4 is due to Lévy [Le1] on the line and to Itô-Nisio [I-N] for independent Banach space valued random variables; our proof follows [Ar-G2]. The inequalities of Section 2.4 can be found in all classical treatises on probability theory, like for example [Fe1].

Chapter 3. Gaussian random variables

3.1. Integrability and tail behavior
3.2. Integrability of Gaussian chaos
3.3. Comparison theorems
Notes and references

Chapter 3. Gaussian random variables

With this chapter we really enter the subject of Probability in Banach spaces. The study of Gaussian random vectors and processes may indeed be considered as one of the fundamental topics of the theory, since it inspires many other parts of the field, both in the results themselves and in the techniques of investigation. The historical developments also followed this line of progress. We shall be interested in this chapter in integrability properties and tail behavior of norms of Gaussian random vectors or bounded processes, as well as in the basic comparison properties of Gaussian processes. The question of when a Gaussian process is almost surely bounded (or continuous) will be addressed and completely solved in Chapters 11 and 12.

The study of the tail behavior of the norm of a Gaussian random vector is based on the isoperimetric tools introduced in the first chapter. This study will be a kind of reference for the corresponding results for other types of random vectors, like Rademacher series, stable random variables, and even sums of independent random variables, which will be treated respectively in the next chapters. This will be the subject of the first section of this chapter. The second examines the corresponding results for chaos. The last paragraph is devoted to the important comparison properties of Gaussian random variables. These now appear at the basis of the rather deep present knowledge on the regularity properties of sample paths of Gaussian processes (cf. Chapter 12).

We first recall the basic definitions and some classical properties of Gaussian variables. A real mean zero random variable X in L_2(Ω, A, P) is said to be Gaussian (or normal) if its Fourier transform satisfies

    E exp(itX) = exp(−σ²t²/2) ,   t ∈ ℝ ,

where σ = ‖X‖_2 = (EX²)^{1/2}. X is said to be standard if σ = 1. When we speak of a Gaussian variable, we therefore always mean a centered Gaussian (or, equivalently, symmetric) variable. Throughout this work, the sequence denoted by (g_i)_{i∈ℕ} will always mean a sequence of independent standard Gaussian random variables; we sometimes call it an orthogaussian or orthonormal sequence (because orthogonality and independence are equivalent for Gaussian variables). For each N, the vector g = (g_1, ..., g_N) follows the canonical Gaussian distribution γ_N on ℝ^N with density

    γ_N(dx) = (2π)^{−N/2} exp(−|x|²/2) dx .

A random vector X = (X_1, ..., X_N) in ℝ^N is Gaussian if, for all real numbers α_1, ..., α_N, Σ_{i=1}^N α_i X_i is a real Gaussian variable. Such a Gaussian vector can always be diagonalized and regarded under the canonical distribution γ_N. Indeed, if Γ = AAᵗ = (E X_i X_j)_{1≤i,j≤N} denotes the symmetric (semi-)positive definite covariance matrix, Γ completely determines the distribution of X, which is the same as the one of Ag where g = (g_1, ..., g_N) is distributed according to γ_N.

One fundamental property of Gaussian distributions is their rotational invariance, which may be expressed in various manners. For example, if g = (g_1, ..., g_N) is distributed according to γ_N and if U is an orthogonal matrix in ℝ^N, then Ug is also distributed according to γ_N. As a simple consequence, if (α_i) is a finite sequence of real numbers, Σ_i α_i g_i has the same distribution as (Σ_i α_i²)^{1/2} g_1. In particular, since g_1 has moments of all orders,

    ‖ Σ_i α_i g_i ‖_p = ‖g_1‖_p ( Σ_i α_i² )^{1/2}

for any 0 < p < ∞, so that the span of (g_i) in L_p is isometric to ℓ_2. As another description of this invariance by rotation (which was used in the proof of (1.5) in Chapter 1), if X is Gaussian (in ℝ^N) and if Y is an independent copy of X, then, for every θ, the rotation of angle θ of the vector (X, Y), i.e. (X sin θ + Y cos θ, X cos θ − Y sin θ), has the same distribution as (X, Y). These properties are trivially verified on the characteristic functionals.
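The identity in distribution between Σ_i α_i g_i and (Σ_i α_i²)^{1/2} g_1 can be illustrated by comparing empirical moments: a centered Gaussian with variance v satisfies E s² = v and E s⁴ = 3v². A Monte Carlo sketch (an illustration added here, not part of the text):

```python
import random

random.seed(2)
alpha = [0.5, -1.2, 2.0, 0.3]
scale2 = sum(a * a for a in alpha)        # sum alpha_i^2
n = 50000

m2 = m4 = 0.0
for _ in range(n):
    s = sum(a * random.gauss(0.0, 1.0) for a in alpha)
    m2 += s * s
    m4 += s ** 4
m2, m4 = m2 / n, m4 / n
print(m2, scale2, m4, 3 * scale2 ** 2)
assert abs(m2 - scale2) < 0.05 * scale2               # E s^2 = sum alpha_i^2
assert abs(m4 - 3 * scale2 ** 2) < 0.3 * scale2 ** 2  # E s^4 = 3 (sum alpha_i^2)^2
```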

A Radon random variable X with values in a Banach space B is Gaussian if, for any continuous linear functional f on B, f(X) is a real Gaussian variable. The typical example is given by a convergent series Σ_i g_i x_i (which converges in any sense by Theorem 2.4) where (x_i) is a sequence in B. We shall actually see later on in this chapter that every Radon Gaussian vector may be represented in distribution in this way. Finally, a process X = (X_t)_{t∈T} indexed by a set T is called Gaussian if each

nite linear

P

ombination i Xti , i 2 IR , ti 2 T , is Gaussian. The
ovarian
e stru
ture (s; t) = IEXs Xt , s; t 2 T ,

i

ompletely determines the distribution of the Gaussian pro ess X . Sin e the distributions of these in

nite

dimensional Gaussian variables are determined by the

**nite dimensional proje
tions, the rotational invarian
e
**

trivially extends. For example, if X is a gaussian pro
ess or variable (in B ), and if (Xi ) is a sequen
e

of independent
opies of X , then, for any

**nite sequen
e (i ) of real numbers,
**

distribution as

P 2

i

i

1=2

X.

P

i

i Xi has the same

52

**3.1. Integrability and tail behavior
**

By their very de

nition, (norms of) real or

**nite dimensional Gaussian variables admit high integrability
**

properties. One of the purposes of this se
tion will be to show how these extend to an in

nite dimensional

setting. We thus investigate integrability and tail behavior of the norm kX k of a Gaussian Radon variable

or almost surely bounded pro
ess. In order to state the results in a somewhat uni

ed way, it is
onvenient

to adopt the terminology introdu
ed in Se
tion 2.2. Unless otherwise spe
i

**ed, we thus deal in this se
tion
**

with a Bana
h spa
e B su
h that, for some
ountable subset D of the unit ball of B 0 , kxk = sup jf (x)j

f 2D

for all x in B . We say that X is a Gaussian random variable in B if f (X ) is measurable for every f in

P

D and if every

More pre isely. Two main parameters of the distribution of X will determine the behavior of IPfkX k > tg (when t ! 1 ): some quantity in the (strong) topology of the norm. onsider = (X ) = sup (IEf 2 (X ))1=2 : f 2D Note that this supremum is . it would be enough to onsider M su h that IPfkX k M g > " > 0 as we will see later.see below). the on ept of median is however ru ial for on entration results as opposed to the pre eding ones whi h ome more under deviation inequalities. fi 2 D . f 2 D . i 2 IR . for the purposes of tail behavior.nite linear ombination i fi (X ) . let M = M (X ) be a median of kX k . 2 2 A tually. that is a number satisfying both 1 1 IPfkX k M g : IPfkX k M g . i Let therefore X be Gaussian in B . 0 p < 1 . IEf 2(X ) . and weak varian es. median or ex eptation of kX k for example (or some other quantity in Lp (B ) . is Gaussian. Besides M . integrability properties or moment equivalen es.

1. In order to onveniently use the isoperimetri inequality. This is always understood in the sequel. as is usual. we redu e. kf k1 The main on lusions on the behavior of IPfkX k > tg are obtained from the Gaussian isoperimetri inequality (Theorem 1. Indeed. now f (X ) is a real Gaussian variable with varian e IEf 2 (X ) and this inequality implies that (IEf 2(X ))1=2 2M sin e IPfjgj 12 g < 0:4 < 12 where g is a standard normal variable.nite and a tually ontrolled by M . IPfjf (X )j M g 21 . for every f in D . Hen e 2M < 1 . Let us mention that if X is a Gaussian Radon variable with values in B . is really meant to be sup (IEf 2(X ))1=2 . the distribution of X to the anoni al .2) and subsequent on entration results des ribed in Se tion 1.

the next lemma des ribes the isoperimetri on entration of kX k around its median M measured in terms of . Re all .3) here. IPfjkX k M j > tg 2 (t=) exp( t2 =22 ) : Proof.53 Gaussian distribution = 1 on IRIN (i. 1 . Set D = ffn . We use Theorem 1. M g learly The proof of Lemma 3. the distribution fun tion of a standard normal variable. Let X be a Gaussian variable in B with median M = M (X ) and supremum of weak varian es = (X ) .1 of ourse just repeats the on entration property (1. = (X ) = sup khk jhj1 j j is the `2 norm. or rather (1. Let A = fx 2 IRIN . We use this simple redu tion throughout the various proofs below. we then extend the meaning of fn by letting i=1 n P an x for n 1 and x = (xi ) in IRIN . the sequen e (fn (X )) has the same distribution as the n if x P n sequen e ai xi under . n 1: In other words. If we set further. t 0 . This observation tells us . the n1 i=1 probabilisti study of kX k then amounts to the study of kxk under . Then (A) 21 . The pro edure is lassi al.e. By Theorem 1. as usual. For notational onvenien e. if At is the Hilbertian neighborhood of order t > 0 of A . if x 2 At x = a + th where a 2 A and jhj 1 .4) for Lips hitz fun tions as is lear from the fa t that kxk (on IRIN ) is Lips hitzian with onstant . Applying the same argument to A = fx ..2. As announ ed. n 1g . kxk on ludes the proof of Lemma 3. for every t > 0 . hen e by (3.1) kxk M + tkhk M + t and therefore At fx .1. we an write (in distritution) fn (X ) = n X i=1 ani gi .2 through the pre eding redu tion. = 1 and the estimate (t) 12 exp( t2 =2) . for x 2 IRIN . is the distribution of the orthonormal sequen e (gi ) ). kxk = sup jfn (x)j . = (xi ) is a generi point in IRIN .1. But. kxk M + tg . Then. Note that in these notations fn (x) = i i (3:1) where. kxk M g . By the Gram-S hmidt orthonormalization pro edure applied to the sequen e (fn (X ))n1 in L2 . Lemma 3. (At ) (t) .

This observation tells us that, following (1.5), we have similarly (and at some cheaper price) a concentration of ‖X‖ around its expectation, that is, for t > 0,

(3.2)    IP{ |‖X‖ − IE‖X‖| > t } ≤ 2 exp(−2t²/π²σ²).

It is clear that IE‖X‖ < ∞ from Lemma 3.1.

Note that a median M = M(X) of ‖X‖ is unique and that, integrating for example the inequality of Lemma 3.1, we get |IE‖X‖ − M| ≤ σ(π/2)^{1/2}. In general, the information on weak moments is weaker than the information on the strong topology, M or IE‖X‖ for example: we already noted that σ ≤ 2M, and we can add the trivial inequality σ ≤ (IE‖X‖²)^{1/2} (which is finite). For the canonical distribution γ_N on IR^N we already have that σ = 1 while M is of the order of √N. ((3.2) can actually also be deduced from a finite dimensional version together with an approximation argument, and (1.6) of course yields (3.2) with the best constant t²/2σ² in the exponential.) As usual, (3.2) is interesting since it is often easier to work with expectations rather than with medians. Actually, the integrability theorems we will deduce from Lemma 3.1 (or (3.2)) only use half of it, that is, only the deviation inequality IP{‖X‖ > M + tσ} ≤ Ψ(t), t > 0 (and similarly with IE‖X‖ instead of M). The concentration around M (or IE‖X‖) will however be crucial for some other questions, like for example in Chapter 9, where this result and the relative weights of M and σ can be used in the Geometry of Banach spaces. Repeating in this framework some of the comments of Section 1.1, for the integrability theorems even only the knowledge of some s such that IP{‖X‖ ≤ s} ≥ ε > 0 is sufficient. Indeed, if, in the proof of Lemma 3.1, we apply the isoperimetric inequality to A = {x ; ‖x‖ ≤ s}, we get that, for every t > 0,

(3.3)    IP{‖X‖ > s + tσ} ≤ Ψ(Φ⁻¹(ε) + t).

It follows for example that, for t ≥ |Φ⁻¹(ε)|,

    IP{‖X‖ > s + tσ} ≤ exp(Φ⁻¹(ε)²/2) exp(−t²/8).

In the preceding inequalities, σ can be replaced by one of the

strong parameters, yielding weaker but still interesting bounds. While we deduce (3.3) from isoperimetric methods, it should be noticed that an inequality similar in nature can actually be established by a direct argument which we would like to briefly sketch. Let Y denote an independent copy of X. By the rotational invariance of Gaussian measures, (X+Y)/√2 and (X−Y)/√2 are independent with the same distribution as X. Now, if, for s < t, ‖X+Y‖ ≤ s√2 and ‖X−Y‖ > t√2, we have from the triangle inequality that both ‖X‖ and ‖Y‖ are larger than (t−s)/√2. Hence, by independence and identical distribution,

    IP{‖X‖ ≤ s} IP{‖X‖ > t} = IP{‖X+Y‖ ≤ s√2 , ‖X−Y‖ > t√2}
        ≤ IP{‖X‖ > (t−s)/√2 , ‖Y‖ > (t−s)/√2}
        ≤ ( IP{‖X‖ > (t−s)/√2} )².

Iterating this inequality with IP{‖X‖ ≤ s} = ε > 1/2 and t = t_n = (√2^{n+1} − 1)(√2 + 1)s easily yields that, for each t ≥ s,

(3.4)    IP{‖X‖ > t} ≤ exp( −(t²/24s²) log(ε/(1−ε)) ),

an inequality indeed similar in nature to (3.3). Note that log(ε/(1−ε)) becomes large when ε goes to 1. As we have seen in the context of (3.3), we may observe that s ≥ σΦ⁻¹((1+ε)/2), from which it follows that, for every t > 0,

    IP{‖X‖ > t} ≤ exp( Φ⁻¹(ε)²/2 + Φ⁻¹((1+ε)/2)²/8 ) exp( −(t²/32s²) Φ⁻¹((1+ε)/2)² ).

This inequality seems complicated, but it describes as before an exponential squared tail for IP{‖X‖ > t}; note further that Φ⁻¹((1+ε)/2) becomes large when ε goes to 1. Let us record at this stage an inequality of the preceding type in which only the strong parameter steps in and that will be convenient in the sequel. From Lemma 3.1 (for example!) and σ ≤ 2M, M² ≤ 2IE‖X‖², we have that, for every t > 0,

(3.5)    IP{‖X‖ > t} ≤ 4 exp( −t²/8IE‖X‖² ).

To conclude these comments, let us also mention a bound for the maximum of the norms of a finite number of vector valued Gaussian variables X_i, i ≤ N. Assume first that max_{i≤N} σ(X_i) ≤ 1. For any δ ≥ 0, we have, by integration by parts and Lemma 3.1, that

    IE max_{i≤N} |‖X_i‖ − M(X_i)| ≤ δ + Σ_{i=1}^N ∫_δ^∞ IP{ |‖X_i‖ − M(X_i)| > t } dt
        ≤ δ + N ∫_δ^∞ exp(−t²/2) dt
        ≤ δ + N √(π/2) exp(−δ²/2).

Let then simply δ = (2 log N)^{1/2}, so that we have obtained, by homogeneity, that

(3.6)    IE max_{i≤N} ‖X_i‖ ≤ 2 max_{i≤N} IE‖X_i‖ + 3(log N)^{1/2} max_{i≤N} σ(X_i).

The next corollary describes applications of Lemma 3.1 and the preceding inequalities to the tail behavior and integrability properties of the norm of a Gaussian random vector.

Corollary 3.2. Let X be a Gaussian variable in B with corresponding σ = σ(X). Then

    lim_{t→∞} (1/t²) log IP{‖X‖ > t} = −1/2σ²,

or, equivalently, IE exp(α‖X‖²) < ∞ if and only if α < 1/2σ². Further, all the moments of ‖X‖ are equivalent (and equivalent to M = M(X), the median of ‖X‖): for any 0 < p, q < ∞, there exists K_{p,q} depending on p and q only such that, for any Gaussian vector X,

    ‖X‖_p ≤ K_{p,q} ‖X‖_q.

In particular, K_{p,2} = K√p (p ≥ 2), where K is numerical.
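The (log N)^{1/2} growth in (3.6) is easy to see numerically. The sketch below (an illustration with arbitrary sizes, not taken from the text) takes the simplest case X_i = g_i real standard normal, so that ‖X_i‖ = |g_i|, IE‖X_i‖ = √(2/π) and σ(X_i) = 1, and compares a Monte Carlo estimate of the left side of (3.6) with the right side.

```python
import math
import random

# Numerical illustration of (3.6) with X_i = g_i real standard normals:
# IE max |g_i| is bounded by 2 IE|g| + 3 (log N)^{1/2}.  Sizes are arbitrary.
random.seed(1)
N, trials = 100, 5000
lhs = sum(max(abs(random.gauss(0.0, 1.0)) for _ in range(N))
          for _ in range(trials)) / trials
rhs = 2 * math.sqrt(2 / math.pi) + 3 * math.sqrt(math.log(N))
assert lhs <= rhs
```

For N = 100 the estimated expectation of the maximum is close to √(2 log N) ≈ 3.0, well within the bound; (3.6) is of course not sharp in its constants, only in the (log N)^{1/2} rate.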

Proof. That the limit is less than or equal to −1/2σ² easily follows from Lemma 3.1, while the minoration simply uses IP{‖X‖ > t} ≥ IP{|f(X)| > t} for all f in D. The equivalence with the exponential integrability is easy by Chebyshev's inequality and integration by parts. Concerning the moment equivalences, integrating the inequality of Lemma 3.1, we see that, for every p > 0,

    IE|‖X‖ − M|^p = ∫₀^∞ IP{ |‖X‖ − M| > t } dt^p ≤ ∫₀^∞ exp(−t²/2σ²) dt^p ≤ (Kσ√p)^p

for some numerical K. Now this inequality is stronger than what we need, since σ ≤ 2M and M can be majorized by (2IE‖X‖^q)^{1/q} for all q > 0. The proof is complete.

As already mentioned, let (X_n) be a sequence of Gaussian variables which is bounded in probability, that is, such that for each ε > 0 there exists A > 0 with

    sup_n IP{‖X_n‖ > A} < ε.

Then (X_n) is bounded in all the L_p-spaces. Indeed, if M(X_n) is the median of ‖X_n‖, certainly sup_n M(X_n) < ∞, and the preceding equivalences of moments confirm the claim. (We used Lemma 3.1 in this proof, but the inequalities discussed prior to Corollary 3.2 can be used, to some extent, similarly.) In particular, if (X_n) is a Gaussian sequence (in the sense that (X_{n₁}, …, X_{n_N}) is Gaussian in B^N for all n₁, …, n_N) which converges in probability, this convergence takes place in all L_p's.

Although already very precise, the previous corollary can be refined. The sharpening we describe next rests on a more elaborate use of the Gaussian isoperimetric inequality and confirms the role of the two parameters, weak and strong, used to measure the size of the distribution of the norm of a Gaussian vector. Let X be as before Gaussian in B and recall σ = σ(X) = sup_{f∈D} (IE f²(X))^{1/2}. Consider now

    τ = τ(X) = inf{ t ≥ 0 ; IP{‖X‖ ≤ t} > 0 },

that is, the first jump of the distribution of the norm ‖X‖.

This jump can actually be shown to be unique. In case X is Radon, τ = 0. One way to prove this is to first observe that, for every ε > 0 and x in B,

    ( IP{‖X − x‖ ≤ ε} )² ≤ IP{‖X‖ ≤ ε√2}.

Indeed, by symmetry and independence, if Y is an independent copy of X,

    ( IP{‖X − x‖ ≤ ε} )² = IP{‖X − x‖ ≤ ε} IP{‖Y + x‖ ≤ ε} ≤ IP{‖(X − x) + (Y + x)‖ ≤ 2ε} ≤ IP{‖X‖ ≤ ε√2}

since X + Y has the law of √2 X. (Note that this inequality can actually be improved to IP{‖X − x‖ ≤ ε} ≤ IP{‖X‖ ≤ ε} by log-concavity ((1.1) and (1.2)), but it is sufficient for our modest purpose here.) If X is Radon, and if we assume that IP{‖X‖ ≤ ε₀} = 0 for some ε₀ > 0, there is, by separability, a sequence (x_n) in B such that

    IP{ ∃n, ‖X − x_n‖ ≤ ε₀/√2 } = 1.

But then

    1 ≤ Σ_n IP{‖X − x_n‖ ≤ ε₀/√2} ≤ Σ_n ( IP{‖X‖ ≤ ε₀} )^{1/2} = 0,

and this contradiction shows that necessarily τ = τ(X) = 0 in this case. On the other hand, let us recall the typical example in which τ > 0. Consider X in ℓ∞ given by the sequence of its coordinates (g_n/(2 log(n+1))^{1/2}), where (g_n) is the orthogaussian sequence. By a simple use of the Borel-Cantelli lemma, it is easily seen that

(3.7)    lim sup_{n→∞} |g_n| / (2 log(n+1))^{1/2} = 1   almost surely,

so that τ = 1 in this case. The following theorem refines

as announced Corollary 3.2 and involves both σ and τ in the integrability result.

Theorem 3.3. Let X be a Gaussian variable with values in B and let σ = σ(X) and τ = τ(X). Then, for any τ' > τ,

    IE exp( (1/2σ²)(‖X‖ − τ')² ) < ∞.

Before proving this theorem, let us interpret this result as a tail behavior. We have seen in Corollary 3.2 that lim_{t→∞} (1/t²) log IP{‖X‖ > t} = −1/2σ², which can be rewritten as

    lim_{t→∞} (1/t) Φ⁻¹( IP{‖X‖ ≤ t} ) = 1/σ

(Φ⁻¹(1−u) is equivalent to (2 log(1/u))^{1/2} when u → 0). Theorem 3.3 can therefore be described equivalently as

    lim_{t→∞} [ Φ⁻¹( IP{‖X‖ ≤ t} ) − t/σ ] ≥ −τ/σ.

From (1.1) we know that the function Φ⁻¹(IP{‖X‖ ≤ t}) is concave on ]τ, ∞[; the theorem thus expresses that this concave function has an asymptote (t/σ) + ℓ with −τ/σ ≤ ℓ ≤ 0. Note that τ = 0, and hence ℓ = 0, if X is Radon. Theorem 3.3 appears as a consequence of the following (deviation) lemma of independent interest, which may be compared to Lemma 3.1.

Lemma 3.4. For every τ' > τ, there is an integer N such that, for every t ≥ 0,

    IP{‖X‖ > τ' + σt} ≤ IP{ (Σ_{i=1}^N g_i²)^{1/2} > t }.

In particular, for some integer N,

    IP{‖X‖ > τ' + σt} ≤ K(N)(1 + t)^N exp(−t²/2),

where K(N) is a positive constant depending on N only.

To see why the theorem follows from the lemma, let τ' > τ'' > τ and ε = (τ' − τ'')/σ > 0. Applying Lemma 3.4 to τ'', we get that

    ∫_{‖X‖>τ'} exp( (1/2σ²)(‖X‖ − τ')² ) dIP ≤ 1 + ∫₀^∞ IP{‖X‖ > τ' + σt} t exp(t²/2) dt
        ≤ 1 + K(N) ∫₀^∞ (1 + ε + t)^{N+1} exp(−(t+ε)²/2) t exp(t²/2) dt
        ≤ 1 + K(N) ∫₀^∞ (1 + ε + t)^{N+1} exp(−εt) dt < ∞.

Proof of Lemma 3.4. It is based again on the Gaussian isoperimetric inequality, in the form of the following lemma, which we introduce with some notation. Given an integer N, a point x in IR^IN can be decomposed as (y, z) with y ∈ IR^N and z ∈ IR^{]N,∞[}; under these notations, dγ(x) = dγ_N(y) dγ_{]N,∞[}(z). If A is a Borel set in IR^IN, set B = {z ∈ IR^{]N,∞[} ; (0, z) ∈ A}. Recall that, for t ≥ 0, A_t denotes the Euclidean or Hilbertian neighborhood of order t of A.

Lemma 3.5. If γ_{]N,∞[}(B) ≥ 1/2, then, for any t ≥ 0,

    γ( x ∈ IR^IN ; x ∉ A_t ) ≤ γ_{N+1}( y' ∈ IR^{N+1} ; |y'| > t ).

Proof. Let y ∈ IR^N with |y| ≤ t and set s = (t² − |y|²)^{1/2}. Since γ_{]N,∞[}(B) ≥ 1/2, Theorem 1.2 implies that γ_{]N,∞[}(B_s) ≥ Φ(s), where B_s is of course the Hilbertian neighborhood of order s ≥ 0 of B in IR^{]N,∞[}. If z ∈ B_s, then z = b + sk where b ∈ B, k ∈ IR^{]N,∞[}, |k| ≤ 1, so that, by definition,

    x = (y, z) = (0, b) + (|y|² + s²)^{1/2} h = (0, b) + th

where h ∈ IR^IN and |h| ≤ 1. Hence x ∈ A_t. From this observation we get, via Fubini's theorem, that

    γ( x ; x ∉ A_t ) ≤ γ_N( y ; |y| > t ) + ∫_{|y|≤t} γ_{]N,∞[}( z ; z ∉ B_{(t²−|y|²)^{1/2}} ) dγ_N(y)
        ≤ γ_N( y ; |y| > t ) + ∫_{|y|≤t} ∫_{s>(t²−|y|²)^{1/2}} exp(−s²/2) (ds/√(2π)) dγ_N(y)
        ≤ ∫∫_{s²+|y|²>t²} exp(−s²/2) (ds/√(2π)) dγ_N(y),

which is the announced result. Lemma 3.5 is thus established.

We are now in a position to prove Lemma 3.4. We of course use the reduction to (IR^IN, γ) as in Lemma 3.1. Let τ' > τ and A = {x ; ‖x‖ ≤ τ'}. By hypothesis, γ(A) > 0. We first show that there exists an integer N such that, in the notations introducing Lemma 3.5, γ_{]N,∞[}(B) ≥ 1/2. Since ‖x‖ = sup_{n≥1} |f_n(x)| and f_n(x) only depends on the first n coordinates of x, there exists an integer N such that

    γ_N( y ; max_{n≤N} |f_n(y)| ≤ τ' ) ≤ (4/3) γ(A).

By Fubini's theorem, there exists then y in IR^N such that

    γ_{]N,∞[}( z ; (y, z) ∈ A ) ≥ 3/4.

By symmetry, we also have γ_{]N,∞[}( z ; (−y, z) ∈ A ) ≥ 3/4, so that the intersection of the two preceding sets has a measure bigger than 1/2. But if z belongs to this intersection, then ‖(y, z)‖ ≤ τ' and ‖(−y, z)‖ ≤ τ', and therefore, by convexity,

    ‖(0, z)‖ = ‖ ½(y, z) + ½(−y, z) ‖ ≤ τ'.

This shows that γ_{]N,∞[}(B) ≥ 1/2. From Lemma 3.5 we then get an upper bound for the complement of A_t: for every t > 0,

    γ( x ; x ∉ A_t ) ≤ γ_{N+1}( y' ; |y'| > t ) = IP{ (Σ_{i=1}^{N+1} g_i²)^{1/2} > t }.

Since A_t ⊂ {x ; ‖x‖ ≤ τ' + σt}, the first inequality in Lemma 3.4 follows. The second one is an easy and classical consequence of the first one. The proof of Lemma 3.4 is complete.

To conclude this paragraph, we briefly describe the series representation of Gaussian Radon Banach space valued random variables, which is easily deduced from the integrability properties. Recall that (g_i) denotes an orthogaussian sequence.

Proposition 3.6. Let X be a Gaussian Radon random variable on (Ω, A, IP) with values in a Banach space B. Then X has the same distribution as Σ_i g_i x_i for some sequence (x_i) in B, where this series is convergent almost surely and in all L_p's, 0 < p < ∞.

Proof. Let H be the closure in L₂ = L₂(Ω, A, IP) of the variables of the form f(X) with f in B'. H may be assumed to be separable since X is Radon, and it is entirely formed of Gaussian variables. Let (g_i) denote an orthonormal basis of H and denote by A_N the σ-algebra generated by g₁, …, g_N. It is easily seen that

    IE^{A_N} X = Σ_{i=1}^N g_i x_i   with   x_i = IE(g_i X).

Recall now that IE‖X‖ < ∞ (for example), so that, by the martingale convergence theorem (cf. Chapter 2), the series Σ_i g_i x_i is almost surely convergent to X. Since IE‖X‖^p < ∞, it converges also in L_p(B) for every p.

3.2 Integrability of Gaussian chaos

In this section, integrability and tail behavior of real and vector valued Gaussian chaos are investigated as a natural continuation of the preceding. The Gaussian variables studied so far may indeed be considered as chaos of order 1. But let us first briefly (and partially) describe the concept of chaos.

Consider the Hermite polynomials {h_k ; k ∈ IN} on IR defined by the series expansion

    exp(λx − λ²/2) = Σ_{k=0}^∞ (λ^k/√(k!)) h_k(x) ,   λ, x ∈ IR.

The Hermite polynomials form an orthonormal basis of L₂(IR, γ₁). Similarly, if k = (k₁, k₂, …), k_i ∈ IN, is such that |k| = Σ_i k_i < ∞, set, for x = (x_i) ∈ IR^IN,

    H_k(x) = h_{k₁}(x₁) h_{k₂}(x₂) ⋯ .

Then {H_k ; k ∈ IN^{(IN)}} forms an orthonormal basis of L₂(IR^IN, γ), where we recall that γ is the canonical Gaussian product measure on IR^IN. For 0 ≤ ε ≤ 1, introduce the bounded linear operator T(ε) : L₂(γ) → L₂(γ) defined by

    T(ε)H_k = ε^{|k|} H_k

for any k with |k| < ∞. The operators T(ε), 0 ≤ ε ≤ 1, satisfy a very important hypercontractivity property which is related to the integrability properties of Gaussian variables and chaos. It is not difficult to check that T(ε) extends to a positive contraction on all L_p(γ), 1 ≤ p < ∞. This is actually clear from the following integral representation of T(ε): if f is in L₂(γ) and x in IR^IN,

    T(ε)f(x) = ∫ f( εx + (1 − ε²)^{1/2} y ) dγ(y).

If t ≥ 0, T_t = T(e^{−t}) is known as the Hermite or Ornstein-Uhlenbeck semigroup. The hypercontractivity property indicates that, for 1 < p < q < ∞ and ε such that |ε| ≤ [(p−1)/(q−1)]^{1/2}, T(ε) maps L_p(γ) into L_q(γ) with norm 1, that is, for any f in L_p(γ),

(3.8)    ‖T(ε)f‖_q ≤ ‖f‖_p.

A function f in L₂(γ) can be written as f = Σ_k f_k H_k,
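Both the orthonormality of the h_k and the action of T(ε) through its integral representation can be checked exactly on small degrees, since all the integrals involved are Gaussian integrals of polynomials. The following sketch (an illustration with arbitrary degrees and test points, not from the text) builds the h_k from the classical recursion He_{k+1}(x) = x·He_k(x) − k·He_{k−1}(x) for the monic Hermite polynomials, and evaluates the Gaussian integrals through the moments IE g^{2m} = (2m−1)!!.

```python
import math

# Check on small degrees: the h_k = He_k / sqrt(k!) are orthonormal for
# gamma_1, and T(eps)f(x) = IE f(eps*x + sqrt(1-eps^2)*y) multiplies h_k
# by eps^k.  All Gaussian integrals of polynomials are computed exactly.
def hermite(k):                       # coefficients of h_k, lowest degree first
    a, b = [1.0], [0.0, 1.0]          # He_0, He_1
    for j in range(1, k):
        c = [0.0] + b                 # x * He_j
        for i, coef in enumerate(a):
            c[i] -= j * coef          # - j * He_{j-1}
        a, b = b, c
    he = a if k == 0 else b
    s = math.sqrt(math.factorial(k))
    return [coef / s for coef in he]

def gauss_moment(m):                  # IE g^m for g standard normal
    return 0.0 if m % 2 else float(math.prod(range(1, m, 2)))

def integrate(p, q):                  # integral of p*q against gamma_1
    return sum(ci * cj * gauss_moment(i + j)
               for i, ci in enumerate(p) for j, cj in enumerate(q))

def t_eps(p, eps, x):                 # exact IE p(eps*x + sqrt(1-eps^2)*y)
    a, s = eps * x, math.sqrt(1.0 - eps * eps)
    return sum(c * math.comb(i, j) * a ** (i - j) * s ** j * gauss_moment(j)
               for i, c in enumerate(p) for j in range(i + 1))

def value(p, x):
    return sum(c * x ** i for i, c in enumerate(p))

for j in range(5):                    # orthonormality of h_0, ..., h_4
    for k in range(5):
        assert abs(integrate(hermite(j), hermite(k)) - (j == k)) < 1e-9

for eps in (0.3, 0.7):                # T(eps) h_k = eps^k h_k
    for k in range(1, 4):
        for x in (-1.5, 0.4, 2.0):
            assert abs(t_eps(hermite(k), eps, x)
                       - eps ** k * value(hermite(k), x)) < 1e-9
```

The second loop is precisely the eigenrelation that makes the action of T(ε) on a chaos of degree d a multiplication by ε^d, the fact exploited below with (3.8).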

where f_k = ∫ f H_k dγ and the sum runs over all k in IN^{(IN)}. We can also write

    f = Σ_{d=0}^∞ ( Σ_{|k|=d} f_k H_k ) = Σ_{d=0}^∞ Q_d f.

Q_d f is named the chaos of degree d of f. Since h₀ = 1 and h₁(x) = x, Q₀f is simply the mean of f, chaos of degree 1 are Gaussian series Σ_i g_i α_i, and chaos of degree 2 are of the type

    Σ_{i≠j} g_i g_j α_{ij} + Σ_i (g_i² − 1) α_i ,

etc. Now, the very definition of the Hermite operators T(ε) shows that the action of T(ε) on a chaos of degree d is simply multiplication by ε^d, that is, T(ε)Q_d f = ε^d Q_d f. This observation together with (3.8) has some interesting consequences. If we let, in (3.8), p = 2, q > 2 and ε = (q−1)^{−1/2}, we see that

    ‖Q_d f‖_q ≤ (q−1)^{d/2} ‖Q_d f‖₂.

These inequalities of course imply strong exponential integrability properties of Q_d f; this follows for example from the next easy lemma, which is obtained by a series expansion of the exponential function. In particular, for the case of series, this provides an alternate approach to the integrability of Gaussian variables presented in Section 3.1; note the right order of magnitude of K_{p,2} = K√p.

Lemma 3.7. Let d be an integer and let Z be a positive random variable. The following are equivalent:
(i) there is a constant K such that, for any p ≥ 2, ‖Z‖_p ≤ K p^{d/2} ‖Z‖₂ ;
(ii) for some λ > 0, IE exp(λZ^{2/d}) < ∞.

The integrability properties of Gaussian chaos we obtain in this way extend to chaos with coefficients in a Banach space. In this paragraph, we shall actually be concerned with the more precise tail behavior of Gaussian chaos, similar to the one obtained previously for chaos of order 1. This will be accomplished again with the tool of the Gaussian isoperimetric inequality. For simplicity, we only treat the case of chaos of order 2. (Let us mention that this reduction simplifies several non-trivial polarization arguments that are necessary in general and which are thus somewhat hidden in our treatment; at least at the order 2, some aspects can also be made apparent through simple symmetrization arguments. We refer to [Bo4], [Bo7] for details on these aspects.)

With respect to the preceding description of chaos, the case of convergent quadratic sums is somewhat too restrictive for the results we will describe. Following the work of C. Borell [Bo4], [Bo7], we will study more precisely homogeneous Gaussian polynomials, which, for the degree 2, basically correspond to convergent series of the type Σ g_i g_j x_{ij} where (x_{ij}) is a sequence in a Banach space. This framework includes the usual meaning of Gaussian or Wiener chaos: the chaos described previously (and their vector valued versions as well) are limits, in probability for example, of homogeneous Gaussian polynomials of the corresponding degree, since the constant 1 belongs to the closure of the homogeneous Gaussian polynomials of degree 2 (at least if the underlying Gaussian measure is infinite dimensional). We still use the terminology of chaos in this setting.

Let us now describe the framework in which we will work. Let again B be a Banach space with D = {f_n ; n ≥ 1} in the unit ball of B' such that ‖x‖ = sup_{n≥1} |f_n(x)| for all x in B. By analogy with the preceding, we say that a random variable X with values in B is a Gaussian chaos of order 2 if, for each n, there exists a bilinear form

    Q_n(x, x') = Σ_{i,j} a_{ij}^n x_i x'_j ,   x, x' ∈ IR^IN,

on IR^IN × IR^IN such that the sequence (f_n(X)) has the same distribution as (Σ_{i,j} a_{ij}^n g_i g_j), where (g_i) is the canonical orthogaussian sequence and the series are almost surely (or only in probability) convergent. Following the reduction to the canonical Gaussian distribution γ on IR^IN, we are reduced as usual to the study of the tail behavior and integrability properties of ‖Q(x, x)‖ = sup_{n≥1} |Q_n(x, x)| under γ. To measure the size of the tail IP{‖X‖ > t} we use, as in the preceding section, several parameters of the distribution of X. First, consider the "decoupled" symmetric chaos Y associated to X, defined

in distribution by

    f_n(Y) = Σ_{i,j} b_{ij}^n g_i g'_j ,   n ≥ 1,

where b_{ij}^n = a_{ij}^n + a_{ji}^n and (g'_j) is an independent copy of (g_i). According to our usual notations, we denote below by IE', IP' partial expectation and probability with respect to (g'_j). Let then M and m be such that

    IP{‖X‖ ≤ M} ≥ 3/4   and   IP{ sup_{f∈D} (IE' f²(Y))^{1/2} ≤ m } ≥ 3/4.

Let further

    σ = σ(X) = sup_{|h|≤1} ‖Q(h, h)‖.

If we recall the situation for Gaussian variables in the previous section, we see that σ and M correspond respectively to the weak and strong parameters; the new and important parameter m appears as some intermediate quantity involving both the weak and the strong topologies.

Let us show that these parameters are actually well defined. It will suffice to show that the decoupled chaos Y exists. The key observation is that, for every t > 0,

(3.9)    IP{‖Y‖ > t} ≤ 2 IP{‖X‖ > t/2√2}.

We establish this inequality for finite sums and a norm given by a finite supremum; a limiting argument then shows that the quadratic sums defining Y are convergent as soon as the corresponding ones for X are, and, by increasing the finite supremum to the norm, that (3.9) is satisfied. We reduce to (IR^IN, γ) and recall that ‖Q(x, x)‖ = sup_{n≥1} |Q_n(x, x)|. Let, on IR^IN × IR^IN,

    Q̂_n(x, x') = Σ_{i,j} b_{ij}^n x_i x'_j

and set, with some further abuse in notation, ‖Q̂(x, x')‖ = sup_{n≥1} |Q̂_n(x, x')|. Then simply note that, for x, x' in IR^IN,

    2 Q̂_n(x, x') = Q_n(x + x', x + x') − Q_n(x − x', x − x')

and that x + x' and x − x' both have the same distribution under dγ(x)dγ(x') as √2 x under dγ(x). From this and the preceding comments, it follows that Y is well defined and that (3.9) is satisfied.

It might be worthwhile to note that (3.9) can essentially be reversed. Using that the couple ((x + x')/√2, (x − x')/√2) has the same distribution as (x, x'), one easily verifies that, for all t > 0,

    IP{‖X − X'‖ > t} ≤ 3 IP{‖Y‖ > t/3},

where X' is an independent copy of X. If the diagonal terms of X are all zero, we see by Jensen's inequality that X and Y have all their moments equivalent. It follows in particular from this observation and the subsequent arguments that if X is a real chaos (i.e. D is reduced to one point) with all its diagonal terms zero, then the parameters M and m are equivalent (at least if M is large enough, see below), and equivalent to ‖X‖₂ and ‖Y‖₂. We will use this observation later on in Chapter 11, Section 11.3.

Let M̄ be such that γ⊗γ( (x, x') ; ‖Q̂(x, x')‖ ≤ M̄ ) ≥ 7/8. By Fubini's theorem, γ(A) ≥ 3/4 where

    A = { x ; γ( x' ; ‖Q̂(x, x')‖ ≤ M̄ ) ≥ 1/2 }.

Conditionally in x, ‖Q̂(x, x')‖ is the norm of a Gaussian variable in x'. If x ∈ A, M̄ is larger than the median of ‖Q̂(x, x')‖ and thus, by what was observed early in the preceding section, the supremum of the weak variances is less than 2M̄. But this supremum is simply

    sup_{n≥1} ( ∫ Q̂_n(x, x')² dγ(x') )^{1/2},

which is therefore well defined and finite. Hence m ≤ 2M̄ < ∞, and, by independence and the results of Section 3.1, it is not difficult to see that IE‖Y‖² < ∞ (actually IE‖Y‖^p < ∞ for every p). Moreover, by (3.9), if M is chosen to satisfy IP{‖X‖ ≤ M} ≥ 15/16, we may take M̄ = 2√2 M and hence m ≤ 4√2 M. (Notice, for later purposes, that it is usually easier to work with expectations and moments rather than with medians or quantiles.) Concerning σ, reasoning as before, for every k with |k| ≤ 1,

    sup_{n≥1} ( ∫ Q̂_n(x, k)² dγ(x) )^{1/2} ≤ 2m ,

and therefore sup_{|h|,|k|≤1} ‖Q̂(h, k)‖ ≤ 2m. But this supremum is easily seen to be bigger than 2σ, so that σ ≤ m < ∞. We have, in addition, the following hierarchy in the parameters:

    2σ ≤ ( IE sup_{f∈D} IE' f²(Y) )^{1/2} ≤ ( IE‖Y‖² )^{1/2}.
by (3. k)k m) 3=4 1=2 . by independen e and the results of Se tion 3. by the same sup k jhj1 k = sup Qb(h.1 it is not diÆ ult to see that IEkY k2 < 1 (a tually IEkY kp < 1 for every p ) and that we have the following hierar hy in the parameters: (IEkY k2 )1=2 (IE sup IE0 f 2 (Y ))1=2 2 : f 2D .jkj1 But this supremum is easily seen to be bigger than 2 so that m < 1 .) Con erning . we an take. and thus also that if M is hosen to satisfy IPfkX k M g 15=16 . k) d (x) 2m : sup kQb(h. Hen e. kQb(x. for every k reasoning as before. (x. jkj 1 . Hen e we may simply take m = 2M p = 2 2M . A tually. n1 Z 1=2 2 b Qn (x. noti e that it is usually easier to work with expe tations and moments rather than medians or quantiles. (Noti e.9). 2 `2 . for later purposes. k)k 2m : jhj. As for series in the previous paragraph.nite. k) Therefore. M p m = 4 2M .

we now state and prove the lemma whi h des ribed the tail IPfkX k > tg of a Gaussian haos X in terms of these parameters. some. Re all. an be obtained at the expense of some ompli ations. Lemma 3. the parameters M and m are equivalent so that the inequality of the lemma may be formulated only in terms of M and . This will be used in Se tion 11. Let X be a Gaussian haos of order 2 as just de. also that for real haos. We shall not be interested here in on entration properties.8. however.3.67 After these long preliminaries in order to properly des ribe the various parameters we are using.

Then. m and . sup kQb(x. A2 = fx .ned with orresponding parameters M . by de. IPfkX k > M + mt + t2 g exp( t2 =2) : Proof.2. for every t > 0 . Let A1 = fx . k)k mg jkj1 and A = A1 \ A2 so that. It is a simple onsequen e of Theorem 1. We use the pre eding notations. kQ(x. x)k M g .

x = a + th where a 2 A . h) and therefore At fx . The next orollary is the qualitative result drawn from the pre eding lemma on erning integrability and tail behavior of Gaussian haos. for every n . jhj 1 . for any t > 0 . Qn (x. But if x 2 At . By Theorem 1.2 in the . a) + tQbn (a.2. It orresponds to Corollary 3. (A) 1=2 .3).nition of M and m . h) + t2 Qn (h. Thus. more pre isely (1. kQ(x. (At ) (t) . x) = Qn (a. x)k M + tm + t2 g : Lemma 3.8 is therefore established.

equivalently. Then lim t!1 or. 1 1 log IPfkX k > tg = t 2 1 IE exp kX k < 1 2 if and only if > : . Let X be a Gaussian haos of order 2 with orresponding = (X ) .rst se tion. Corollary 3.9.

the distribution of y = (hx. we an write y = x1 h + ye and x1 and ye are independent. hi i) where xe = (0. a simple symmetry argument shows that one an . Proof. there exists an orthonormal basis (hi )i1 of `2 su h that hi1 = hi for every i 1 . let " > 0 and hoose jhj 1 su h that kQ(h.8) and sin e x1 h ye is distributed as y . ye) (where Qbn . Sin e. Q were introdu ed prior to Lemma 3. That the limit is 1=2 follows from Lemma 3. h) + x1 Qbn (ye.68 Further all moments of X are equivalent. Qn (y. for ea h n . h) + Qn (ye. Given this h = (hi )i1 .8. y) = x21 Qn (h. x3 . this also implies that IE exp(kX k=2) < 1 for > . h)k " . x2 . hi i) under is the same as the distribution of x . To prove the onverse assertions. By the rotational invarian e of Gaussian measures. If we then set ye = (hxe. : : : ) .

y)k > t) 12 (x. kQ(y. kQ(ye. x21 ( ") > t + jx1 jM + M ) : The proof of the . kQb(ye. ye)k M . kQ(x.nd M su h that (x. x)k > t) = (x. h)k M ) 12 : We then dedu e from Fubini's theorem that for every t > 0 (x.

8 that if M is hosen to satisfy IPfkX k M g p take m 4 2M and satis.rst laim in Corollary 3. That IE exp(kX k=2) = 1 an be established in the same way. We have seen before Lemma 3.9 is then easily ompleted.

Note that Kp is of the order of p when p ! 1 .8 immediately yields kX kp Kp M for some onstant Kp depending only on p . integrating the inequality of Lemma 3.es p 15=16 . then we an 4 2M . . If we then have q > 0 . Hen e. The proof of Corollary 3. in a ordan e with what we observed in the beginning through the hyper ontra tive estimates.9 is thus omplete. simply take M = (16IEkX kq )1=q from whi h the equivalen e of the moments of X follows. if p > 0 .

69 We on lude this se tion with a re.

for some integer N . Z 1 exp 0 k X k 2 ( )1=2 > 2 kX k 1=2 0 2 t exp 2 + 0 2 t exp 2 + 0 2 t 1 Z t0 1 Z Z t0 1 !2 2 IP IP ( ( 00 )=2 > 0 .3. a point x in IRIN is de omposed in (y. dIP ) kX k 1=2 > 0 + t t exp t2 dt 2 2 ) kX k 1=2 > 00 + (t + ") t exp t2 dt 2 2 2 exp 2 + IPfkX k > 00 (t + ") + (t + ")2 gt exp t2 dt t0 2 2 Z 1 t0 t (t + ")2 N +1 dt < 1 : exp 2 + K (N ) (1 + " + t) exp 2 2 0 0 Let us therefore prove (3. z ) with y 2 IRN and z 2 IR℄N. for every 0 > .1[ .1[ .1[ and if A is a Borel set in IRIN . Given an integer N . This will be suÆ ient to establish the theorem. (0. Set then = (X ) = inf f > 0 : IPf sup (IE0 f 2 (Y ))1=2 g > 0g : f 2D We an then state: Theorem 3.10) to 00 . Further = N ℄N. we set B = fz 2 IR℄N. Then. z ) 2 Ag . Re all the notations of the proof of Lemma 3.4.10).10. let " = ( 0 applying (3.nement of the previous orollary along the lines of what we obtained in Theorem 3. . Indeed. !2 1 k X k 1=2 0 < 1: IE exp 2 2 Proof. if 0 > 00 > . Let X be a Gaussian haos with orresponding = (X ) and = (X ) . Let X be as before a Gaussian haos variable of order 2 with values in B and re all the symmetri de oupled haos Y asso iated to X . We show that for every 0 > there exists an integer N and a real number t0 su h that for every t t0 IPfkX k > 0 t + t2 g IP (3:10) 8 N < X : i=1 gi2 !1=2 9 = >t : . Then.

max sup jQb n (x. we may assume the bilinear forms Qn . n` n` jkj1 then (A03 ) < 4 (A3 )=3 . max jQn (x. x)j M . kQ(x. x)k M g . By a simple approximation. k)k 0 g jkj1 and where Qb is the quadrati form asso iated to the de oupled haos Y ( f. repla ing M and 0 by M + " and 0 + " if ne essary. Qbn . then (A3 ) > 0 . only depend on a . and A3 = A1 \ A2 .70 If 0 > . the proof of Lemma 3. k)j 0 g . Choose M large enough su h that if A1 = fx . (A2 ) > 0 where A2 = fx . There exists an integer ` su h that if A03 = fx .8). n ` . sup kQb(x.

z ). (y. By onvexity. there exists then y in IRN su h that 3 ℄N. z ).1[(z . Let z belong to this interse tion. z ) 2 A3 ) : 4 By symmetry we also have 3 ℄N. z ))k M . Hen e. summing. z ))k . z ). for some integer N . k)k 0 : jkj1 Moreover. (y. sup kQb((0. z ). sin e kQ((y. By Fubini's theorem.1[(z . A03 = L IR℄N. z ) 2 A3 ) : 4 The interse tion of these two sets of measure bigger than 3=4 has therefore measure bigger than 1=2 . kQ((y. (y. (y. z ))k M .1[ with L IRN . we get that kQ((0.nite number of oordinates. (0.

.

.

N .

.

X n .

+ sup .

.

aij yi yj .

.

n1 .

j =1 .i.

M + jyj2 : .

z ) 2 Ag satis. (0. if we set M 0 = M + jyj2 and A = fx . jkj1 it follows that B = fz 2 IR℄N.71 Hen e. k)k 0 g .1[ . x)k M 0 . sup kQb(x. kQ(x.

and if is de. let us brie y mention the formal orresponding results for the haos of degree d > 2 . y)k + t2 kQ(h.10 is omplete. jy0 j > t) : Now it has simply to be observed that At fx . h)k : (3. x = a + th with a 2 A . jhj 1 .1[(B ) 1=2 . then kQ(x.10) then easily follows from the pre eding playing with 0 > and t to be large enough. a)k + tkQb(a.es ℄N. To on lude. If X is su h a haos. We are then in a position to apply Lemma 3. if x 2 At . The proof of Theorem 3. x)k kQ(a. x)k M 0 + t 0 + t2 g : Indeed.5 from whi h we get that for every t 0 : (x . kQ(x. x 62 At ) N +1 (y0 .

9 reads: 1 1 lim log IP fk X k > t g = : 2 =d t!1 t 22=d Theorem 3.10 is somewhat more diÆ ult to translate. is de. Corollary 3.ned analogously.

ned appropriately from the asso iated d de oupled symmetri haos on whi h one takes weak moments on d 1 oordinates and the strong parameter on the remaining one. We then get that for all 0 > 1 IE exp 2 kX k 1=d 0 d !2 < 1: One ould possibly imagine further re.

nements involving d + 1 parameters.3 Comparison theorems . 3.

are very important and useful tools in the Probability al ulus in Bana h spa es. whi h may be onsidered as geometri al. These results.72 In the last part of this hapter we investigate the Gaussian omparison theorems whi h. together with integrability. .

XN ) and Y = (Y1 . : : : . In order to des ribe the question we would like to study.rst step in with the so- alled Slepian's lemma on whi h the further results are variations. let us assume . Assume we are given two Gaussian random ve tors X = (X1 . YN ) in IRN . : : : . We present here some of these statements in the s ope of their further appli ations.

If Z is a Gaussian variable independent from Y with ovarian e IEZi Zj = IEXi Xj IEYi Yj (whi h is positive de. X i2 : Then. we may of ourse assume for the proof that X and Y are independent. as an example.rst. for any onvex set C in IRN IPfY (3:11) 62 C g 2IPfY 62 C g : Indeed. Y i2 IEh. that the ovarian e stru ture of X dominates that of Y in the (strong) sense that for every in IRN IEh.

11). IPfY 2 6 C g = IP 12 [(Y + Z ) + (Y Z )℄ 62 C IPfY + Z 62 C g + IPfY Z 62 C g whi h is (3. IPfX 2 C g = IPfY + Z 2 C g = Z IPfY + z 2 C gdIPZ (z ) : Now. Hen e. . By independen e and symmetry. by Fubini's theorem. 2 C z g IPfY 2 C g .11) without the numeri al onstant 2 for all onvex and symmetri (with respe t to the origin) sets C . by onvexity. for every z in IRN . It should be noted that deeper tools an a tually yield (3.2) and symmetry ensure that. X has also the same distribution as Y Z . the on avity inequalities (1. Indeed.nite from the assumption).1) or (1. X has the same distribution as Y + Z . IPfY hen e the announ ed laim.

there is a sequen e (fk ) in B su h that x 2 K whenever fk (x) 1 for all k . for every " > 0 . The . Indeed. there exists a ompa t set C whi h may be hosen onvex su h that IPfX 2 C g 1 " . Sin e B may be assumed to be separable. sin e X is Radon. then the sequen e (Xn ) is tight.11) whi h implies that IPfXn 2 C g 1 2" for all n .11) (or the pre eding) is used to show a property like the following one: if (Xn ) is a sequen e of Gaussian Radon random variables with values in a Bana h spa e B su h that IEf 2 (Xn ) IEf 2 (X ) for all f in B 0 .73 Typi ally (3. all n and some Gaussian Radon variable X in B . The on lusion is then immediate from (3.

Let X = (X1 . Its quite easy proof learly des ribes the Gaussian properties whi h enter these questions.11. We have '0 (T ) = N X i=1 IE(Di f (X (t))Xi0(t)) : Let now t and i be . j ) 2 A . The result is similar in nature to (3. 1℄ . As before we may assume X and Y to be independent. IEXi Xj IEYi Yj if (i. : : : . for t 2 [0. : : : YN ) be Gaussian random variables in IRN . j ) 62 A [ B where A and B are subsets of f1. Dij f 0 if (i. : : : . j ) 2 B : IEf (X ) IEf (Y ) : Proof. : : : . N g . XN ) and Y = (Y1 .11) but under weaker onditions on the ovarian e stru tures. Theorem 3. Set. N g f1. Let f be a fun tion on IRN su h that its se ond derivatives in the sense of distributions satisfy Then Dij f 0 if (i. X (t) = (1 t)1=2 X + t1=2 Y and '(t) = IEf (X (t)) . IEXi Xj = IEYi Yj if (i. j ) 2 B . j ) 2 A . Assume that IEXi Xj IEYi Yj if (i.rst omparison property we now present is the abstra t statement from whi h we will dedu e many onsequen es of interest.

xed. 1 IEXj (t)Xi0 (t) = IE(Yj Yi 2 Xj Xi ) : . It is easily seen that. for every j .

The hypotheses of the theorem then indicate that we can write
X_j(t) = λ_j X_i′(t) + Z_j,
where Z_j is orthogonal to X_i′(t) and λ_j ≥ 0 if (i,j) ∈ A, λ_j ≤ 0 if (i,j) ∈ B, λ_j = 0 if (i,j) ∉ A ∪ B. If we now examine IE(D_i f(X(t)) X_i′(t)) as a function of those λ_j's with (i,j) ∈ A ∪ B, the hypotheses on f show that this function is increasing in the λ_j's such that (i,j) ∈ A. But, by the orthogonality, and therefore independence, this function vanishes when all the λ_j's are 0 since then
IE( D_i f(Z) X_i′(t) ) = IE( D_i f(Z) ) IEX_i′(t) = 0.
Hence IE(D_i f(X(t)) X_i′(t)) ≥ 0, so φ′(t) ≥ 0 and therefore φ(0) ≤ φ(1), which is the conclusion of the theorem.

As a first corollary, we state Slepian's lemma. It is simply obtained by taking in the theorem A = {(i,j) ; i ≠ j}, B = ∅, and f = I_G where G is a product of half-lines ]−∞, λ_i], for which the hypotheses on the second derivatives are immediately verified. The claim concerning expectations of maxima in the next statement simply follows from the integration by parts formula
IEX = ∫_0^∞ IP{X > t} dt − ∫_0^∞ IP{X < −t} dt.

Corollary 3.12. Let X and Y be Gaussian in IR^N such that
IEX_iX_j ≤ IEY_iY_j for all i ≠ j,
IEX_i² = IEY_i² for all i.
Then, for all real numbers λ_i, i ≤ N,
IP{ ⋃_{i=1}^N (Y_i > λ_i) } ≤ IP{ ⋃_{i=1}^N (X_i > λ_i) }.
In particular,
IE max_{i≤N} Y_i ≤ IE max_{i≤N} X_i.

The next corollary relies on a more elaborate use of Theorem 3.11. It will be useful in various applications, both in this chapter and in Chapters 9 and 15.
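The expectation comparison in Corollary 3.12 can be illustrated concretely when N = 2: for a centered Gaussian pair with equal variances, the identity max(X₁,X₂) = ½(X₁ + X₂) + ½|X₁ − X₂| gives the closed form IE max(X₁,X₂) = (IE(X₁ − X₂)²/2π)^{1/2}, so a smaller cross-covariance yields a larger expected maximum. The following sketch (the helper name `emax2` is ours, not the book's) checks this:

```python
import math

def emax2(var: float, cov: float) -> float:
    """IE max(X1, X2) for a centered Gaussian pair with equal variances `var`
    and covariance `cov`, via max(a, b) = (a + b)/2 + |a - b|/2 and
    IE|Z| = sqrt(2/pi) * sigma(Z)."""
    d2 = 2.0 * (var - cov)              # IE(X1 - X2)^2
    return math.sqrt(d2 / (2.0 * math.pi))

# Slepian: with equal variances, the smaller cross-covariance gives the larger IE max.
assert emax2(1.0, 0.2) > emax2(1.0, 0.8)
# Independent standard normals: IE max(g1, g2) = 1/sqrt(pi).
assert abs(emax2(1.0, 0.0) - 1.0 / math.sqrt(math.pi)) < 1e-12
```

The perfectly correlated case emax2(1.0, 1.0) degenerates to 0, as it should since then X₁ = X₂ almost surely.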

Corollary 3.13. Let X = (X_{i,j}) and Y = (Y_{i,j}), 1 ≤ i ≤ n, 1 ≤ j ≤ m, be Gaussian random vectors such that
IEX_{i,j}X_{i,k} ≤ IEY_{i,j}Y_{i,k} for all i, j, k,
IEX_{i,j}X_{ℓ,k} ≥ IEY_{i,j}Y_{ℓ,k} for all i ≠ ℓ and j, k,
IEX_{i,j}² = IEY_{i,j}² for all i, j.
Then, for all real numbers λ_{i,j},
IP{ ⋂_{i=1}^n ⋃_{j=1}^m (Y_{i,j} > λ_{i,j}) } ≤ IP{ ⋂_{i=1}^n ⋃_{j=1}^m (X_{i,j} > λ_{i,j}) }.
In particular,
IE min_{i≤n} max_{j≤m} Y_{i,j} ≤ IE min_{i≤n} max_{j≤m} X_{i,j}.

Proof. Let N = mn. For I ∈ {1, …, N}, let i = i(I), j = j(I) be the unique 1 ≤ i ≤ n, 1 ≤ j ≤ m such that I = m(i − 1) + j. Consider then X and Y as random vectors in IR^N indexed in this way, X_I = X_{i(I),j(I)}. Let
A = { (I,J) ; i(I) = i(J) },  B = { (I,J) ; i(I) ≠ i(J) }.
Then the first set of hypotheses of Theorem 3.11 is fulfilled. Taking further f to be the indicator function of the set
⋃_{i=1}^n ⋂_{I : i(I)=i} { x ∈ IR^N ; x_I ≤ λ_{i(I),j(I)} },
Theorem 3.11 implies the conclusion by taking complements.

In the preceding results the comparison was made possible by assumptions on the respective covariances of the Gaussian vectors with, especially, conditions of equality on the "diagonal". In practice, it is often more convenient to deal with the corresponding L²-metrics ‖X_i − X_j‖₂, which do not require those special conditions on the diagonal. The next statement is a simple consequence of Corollary 3.12 in this direction.

Corollary 3.14. Let X = (X_1, …, X_N) and Y = (Y_1, …, Y_N) be Gaussian variables in IR^N such that, for every i, j,
IE|Y_i − Y_j|² ≤ IE|X_i − X_j|².

Then
IE max_{i≤N} Y_i ≤ 2 IE max_{i≤N} X_i.

Proof. Replacing X = (X_i)_{i≤N} by (X_i − X_1)_{i≤N}, we may and do assume that X_1 = 0, and similarly that Y_1 = 0; note that then IEY_i² ≤ IEX_i² for every i. Let σ = max_{i≤N}(IEX_i²)^{1/2} and consider the Gaussian variables X̄ and Ȳ in IR^N defined by, for i ≤ N,
X̄_i = X_i + g( σ² + IEY_i² − IEX_i² )^{1/2},  Ȳ_i = Y_i + σg,
where g is a standard normal variable independent from X and Y. It is easily seen that IEȲ_i² = IEX̄_i² = σ² + IEY_i², while
IE|Ȳ_i − Ȳ_j|² = IE|Y_i − Y_j|² ≤ IE|X_i − X_j|² ≤ IE|X̄_i − X̄_j|²,
so that IEX̄_iX̄_j ≤ IEȲ_iȲ_j for all i ≠ j. We are thus in the hypotheses of Corollary 3.12 and therefore
IE max_{i≤N} Ȳ_i ≤ IE max_{i≤N} X̄_i.
Now, clearly, IE max Ȳ_i = IE max Y_i, while
IE max_{i≤N} X̄_i ≤ IE max_{i≤N} X_i + σ IEg₊,
where we have used that IEY_i² ≤ IEX_i² (since X_1 = Y_1 = 0), so that (σ² + IEY_i² − IEX_i²)^{1/2} ≤ σ. But now,
σ = max_{i≤N}(IEX_i²)^{1/2} = (IE|g|)^{−1} max_{i≤N} IE|X_i| ≤ (IE|g|)^{−1} IE max_{i,j} |X_i − X_j| = (IEg₊)^{−1} IE max_{i≤N} X_i,
where we have used again, in the first inequality, that X_1 = 0. This bound on σ and the preceding finish the proof.

The comparison theorems usually deal with max_i X_i or max_{i,j}|X_i − X_j| = max_{i,j}(X_i − X_j) rather than max_i |X_i|. If X = (X_1, …, X_N) is Gaussian in IR^N, by symmetry,
IE max_{i,j} |X_i − X_j| = IE max_{i,j} (X_i − X_j) = 2 IE max_{i≤N} X_i.
Of course, for every i₀ ≤ N,
IE max_i X_i ≤ IE max_i |X_i| ≤ IE|X_{i₀}| + IE max_{i,j} |X_i − X_j|.
But in general the comparison results do not apply directly to IE max_i |X_i| (see however [Si1], [Si2]); take for example Y_i = X_i + λg, where g is standard normal independent from X, in Corollary 3.14 and let λ tend to infinity. On the other hand, one convenient feature of IE max_i X_i is that for any real mean zero random variable Z,
IE max_i (X_i + Z) = IE max_i X_i.
This observation suggests that the functional max_{i,j}|X_i − X_j| is perhaps more natural in comparison theorems. Following the proof of Corollary 3.14, we can then obtain that if X and Y are Gaussian vectors in IR^N such that, for all i, j, ‖Y_i − Y_j‖₂ ≤ ‖X_i − X_j‖₂, then, for all λ > 0,
(3.12) IP{ max_{i,j} |Y_i − Y_j| > λ } ≤ 2 IP{ max_{i,j} |X_i − X_j| > λ/4 } + 2 IP{ max_{i,j} (IE|X_i − X_j|²)^{1/2} g₊ > λ/4 }.
This inequality can of course be integrated by parts. Actually, under the hypotheses of Corollary 3.12, we also have that, for all λ > 0,
IP{ max_{i,j} |Y_i − Y_j| > λ } ≤ 2 IP{ max_{i,j} |X_i − X_j| > λ/2 }.
The numerical constant 2 in the preceding corollary is not best possible and can be improved to 1, with however a somewhat more complicated proof. The next result (due to X. Fernique [Fer4]), which we state without proof, completely answers these questions.

Theorem 3.15. Let X and Y be Gaussian random vectors in IR^N such that, for every i, j,
IE|Y_i − Y_j|² ≤ IE|X_i − X_j|².
Then, for every non-negative convex increasing function F on IR₊,
IEF( max_{i,j} |Y_i − Y_j| ) ≤ IEF( max_{i,j} |X_i − X_j| ).

There is also a version of Corollary 3.13 with conditions on L²-distances. It is due to Y. Gordon [Gor1], [Gor3]. Again the proof is more involved, so that we only state the result.

Theorem 3.16. Let X = (X_{i,j}) and Y = (Y_{i,j}), 1 ≤ i ≤ n, 1 ≤ j ≤ m, be Gaussian random vectors such that
IE|Y_{i,j} − Y_{i,k}|² ≤ IE|X_{i,j} − X_{i,k}|² for all i, j, k,
IE|Y_{i,j} − Y_{ℓ,k}|² ≥ IE|X_{i,j} − X_{ℓ,k}|² for all i ≠ ℓ and j, k.
Then
IE min_{i≤n} max_{j≤m} Y_{i,j} ≤ IE min_{i≤n} max_{j≤m} X_{i,j}.

Among the various consequences of the preceding comparison properties, let us start with an elementary one, which we only present in the scope of the next chapter where a similar result for Rademacher averages is obtained. We use Theorem 3.15 for convenience, but a somewhat weaker result can be obtained through (3.12).

Corollary 3.17. Let T be bounded in IR^N and consider the Gaussian process indexed by T defined as
X_t = Σ_{i=1}^N g_i t_i,  t = (t_1, …, t_N) ∈ T ⊂ IR^N.
Let further φ_i : IR → IR, i ≤ N, be contractions with φ_i(0) = 0. Then, for any non-negative convex increasing function F on IR₊,
IEF( ½ sup_{t∈T} | Σ_{i=1}^N g_i φ_i(t_i) | ) ≤ IEF( 2 sup_{t∈T} | Σ_{i=1}^N g_i t_i | ).

Proof. Let u ∈ T. We can write, by convexity,
IEF( ½ sup_{t∈T} | Σ_{i=1}^N g_i φ_i(t_i) | ) ≤ ½ IEF( sup_{t∈T} | Σ_{i=1}^N g_i (φ_i(t_i) − φ_i(u_i)) | ) + ½ IEF( | Σ_{i=1}^N g_i φ_i(u_i) | )
≤ ½ IEF( sup_{s,t∈T} | Σ_{i=1}^N g_i (φ_i(s_i) − φ_i(t_i)) | ) + ½ IEF( | Σ_{i=1}^N g_i u_i | ),
where we have used that |φ_i(u_i)| ≤ |u_i| since φ_i(0) = 0. Since, by contraction, for every s, t,
Σ_{i=1}^N |φ_i(s_i) − φ_i(t_i)|² ≤ Σ_{i=1}^N |s_i − t_i|²,
Theorem 3.15 (and a trivial approximation reducing to a finite supremum) shows that the preceding is further majorized by
½ IEF( sup_{s,t∈T} | Σ_{i=1}^N g_i (s_i − t_i) | ) + ½ IEF( | Σ_{i=1}^N g_i u_i | ).
Both suprema inside F are bounded by 2 sup_{t∈T} |Σ_{i=1}^N g_i t_i| and F is increasing; Corollary 3.17 is therefore established.

A most important consequence of the comparison theorems is the so-called Sudakov minoration. We shall come back to it in Chapter 12, when we investigate the regularity of Gaussian processes, but it is fruitful to already record it at this stage. To introduce it, let us first observe the following easy facts.

Let X = (X_1, …, X_N) be Gaussian in IR^N. Then
(3.13) IE max_{i≤N} X_i ≤ 3 (log N)^{1/2} max_{i≤N} (IEX_i²)^{1/2}.
Indeed, assume by homogeneity that max_i IEX_i² ≤ 1; since IE max(g_1, g_2) ≥ 1/3 (for example), we may assume N to be large enough. Then, for every δ > 0,
IE max_{i≤N} X_i ≤ IE max_{i≤N} |X_i| ≤ δ + N ∫_δ^∞ IP{|g| > t} dt ≤ δ + N (2/π)^{1/2} exp(−δ²/2),
where g is a standard normal variable. Choose then simply δ = (2 log N)^{1/2}. Note the comparison with (3.6).

The preceding inequality is two-sided for the canonical Gaussian vector (g_1, …, g_N), where we recall that the g_i's are independent standard normal variables. Namely, for some numerical constant K,
(3.14) K^{−1} (log N)^{1/2} ≤ IE max_{i≤N} g_i ≤ K (log N)^{1/2}.
Indeed, by integration by parts and by independence and identical distribution, for every δ ≥ 0,
IE max_{i≤N} |g_i| ≥ ∫_0^δ [ 1 − (1 − IP{|g| > t})^N ] dt ≥ δ [ 1 − (1 − IP{|g| > δ})^N ].
Now,
IP{|g| > δ} = (2/π)^{1/2} ∫_δ^∞ exp(−t²/2) dt ≥ (2/π)^{1/2} exp(−(δ + 1)²/2).
Choose then for example δ = (log N)^{1/2} (N large), so that IP{|g| > δ} ≥ 1/N and hence
IE max_{i≤N} |g_i| ≥ δ [ 1 − (1 − 1/N)^N ] ≥ δ (1 − 1/e).
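The two-sided estimate (3.14) is easy to probe numerically. The sketch below (a seeded Monte Carlo with helper names of our own choosing) estimates IE max_{i≤N} g_i for N = 1000 and verifies the two bounds with the explicit constant K = 3; any larger numerical K would of course work as well.

```python
import math
import random

def avg_max_gaussians(n_vars: int, n_samples: int, seed: int = 0) -> float:
    """Seeded Monte Carlo estimate of IE max_{i<=N} g_i for independent
    standard normal variables g_1, ..., g_N."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += max(rng.gauss(0.0, 1.0) for _ in range(n_vars))
    return total / n_samples

N = 1000
est = avg_max_gaussians(N, 2000)
root_log_n = math.sqrt(math.log(N))
# Two-sided bound (3.14) with K = 3.
assert root_log_n / 3 <= est <= 3 * root_log_n
```

For N = 1000 the estimate comes out near 3.2, comfortably between (log N)^{1/2}/3 ≈ 0.88 and 3(log N)^{1/2} ≈ 7.9.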

d. d) is a metri or pseudo-metri spa e ( d need not separate points of T ). ") need not be . d. Of ourse N (T.80 Sin e IE max jgi j IEjgj + 2IE max gi . denote by N (T. ") the minimal number of open balls of radius " > 0 in the metri d ne essary to over T (the results we present are a tually identi al with losed balls). If (T.14) sin e N is assumed to be large iN iN enough. this proves the lower bound in (3. The upper bound has been established before.

") is . N (T. ") = 1 . d.nite in whi h ase we agree that N (T. d.

nite for ea h " > 0 if and only if (T. d) is totally bounded. Let X = (Xt )t2T be a Gaussian pro ess indexed by a set T . one fruitful way to analyze the regularity properties of the pro ess X is to study the "geometri " properties of T with respe t to the L2 pseudo-metri dX indu ed by X de. As will be further and deeply investigated in Chapter 12.

t) = kXs Xt k2 . we simply let IE sup Xt = supfIE sup Xt . s. F . dX . ") for ea h " > 0 in terms of the supremum of X . t 2 T : The next theorem is an estimate of the size of N (T. In the statement.ned as dX (s.

14. IE sup Xu0 2IE sup Xu : u2U u2U If we now re all (3. dX . by Corollary 3. ") N . v) : Therefore. There exists U T with CardU = N and su h that dX (u. "))1=2 K IE sup Xt t2T where K is a numeri al onstant. Let N be su h that N (T. for all u. dX . dX ) is totally bounded. v) > " for all u 6= v in U . learly. "(log N (T. Let (gu )u2U be standard normal independent variables and onsider Xu0 = p"2 gu . Let X = (Xt )t2T be a Gaussian pro ess with L2 -metri dX . Then. (T.14) we see that IE sup Xu0 K u2U 1 p" 2 (log CardU )1=2 = K 1 p" 2 (log N )1=2 : . for ea h " > 0 .nite in T g : t2T t2F Theorem 3. if X is almost surely bounded. Then. Proof. kXu0 Xv0 k2 = " < dX (u.18. In parti ular. v . u 2 U .
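For a finite set, the quantity N(T,d,ε) can be computed (or at least bounded from above) by a simple greedy procedure. The sketch below is our own one-dimensional illustration, not from the book; it exhibits the monotone decrease of covering numbers in ε that the entropy estimates above quantify.

```python
def covering_number(points, eps):
    """Greedy upper bound on N(T, d, eps) for a finite set of reals: the
    number of closed balls of radius eps needed to cover `points`."""
    remaining = sorted(points)
    balls = 0
    while remaining:
        center = remaining[0]              # leftmost uncovered point opens a new ball
        remaining = [p for p in remaining if abs(p - center) > eps]
        balls += 1
    return balls

T = [i / 10.0 for i in range(11)]          # eleven equally spaced points in [0, 1]
# N(T, d, eps) decreases as eps grows.
assert covering_number(T, 0.05) >= covering_number(T, 0.2) >= covering_number(T, 0.6)
assert covering_number(T, 2.0) == 1
```

For this T, balls of radius 0.05 isolate each point (11 balls), while radius 2.0 covers everything at once.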

The conclusion follows.

Theorem 3.18 admits a slight strengthening when the process is continuous.

Corollary 3.19. If the Gaussian process X = (X_t)_{t∈T} has a version with almost all bounded and continuous sample paths on (T,d_X), then
lim_{ε→0} ε (log N(T,d_X,ε))^{1/2} = 0.

Proof. We denote by X itself the bounded continuous (and therefore separable) version. By the integrability properties of Gaussian vectors and compactness of (T,d_X) (since X is also bounded),
lim_{δ→0} IE sup_{d_X(s,t)<δ} |X_s − X_t| = 0.
For every η > 0, let δ > 0 be small enough that
IE sup_{d_X(s,t)<δ} |X_s − X_t| < η.
Let ε > 0, and let A be finite in T such that the balls of radius δ with centers in A cover T (such an A exists by Theorem 3.18). By Theorem 3.18, for every s in A there exists A_s ⊂ T satisfying
ε (log Card A_s)^{1/2} ≤ Kη
and such that if t ∈ T and d_X(s,t) < δ, there exists u in A_s with d_X(u,t) < ε. Let then B = ⋃_{s∈A} A_s. Each point of T is within distance ε of an element of B; hence
N(T,d_X,ε) ≤ Card B ≤ Card A · max_s Card A_s.
Therefore
ε (log N(T,d_X,ε))^{1/2} ≤ ε (log Card A)^{1/2} + Kη.
Letting ε, and then η, tend to 0 concludes the proof.

Sudakov's minoration is the occasion of a short digression on some dual formulation. Let T be a convex body in IR^N, i.e. T is bounded, convex, symmetric about the origin and with non-empty interior in IR^N (T is a Banach ball). Consider the Gaussian process X = (X_t)_{t∈T} defined as
X_t = Σ_{i=1}^N g_i t_i,  t = (t_1, …, t_N) ∈ T ⊂ IR^N.

Set
ℓ(T) = IE sup_{t∈T} |X_t| = IE sup_{t∈T} | Σ_{i=1}^N g_i t_i | = ∫ sup_{t∈T} |⟨x,t⟩| dγ_N(x),
where γ_N is the canonical Gaussian measure on IR^N. The L²-metric of X is of course simply the Euclidean metric in IR^N. If A, B are sets in IR^N, denote by N(A,B) the minimal number of translates of B by elements of A necessary to cover A; thus N(T,d_X,ε) = N(T, εB₂), where B₂ is the Euclidean (open, for consistency!) unit ball of IR^N. Sudakov's minoration indicates that the rate of growth of N(T, εB₂) when ε → 0 is controlled by ℓ(T). It might be interesting to point out here that the dual version of this result is also true, namely that the sup-norm of X controls in the same way N(B₂, εT°), where
T° = { x ∈ IR^N ; ⟨x,y⟩ ≤ 1 for all y in T }
is the polar of T. More precisely, for some numerical constant K,
(3.15) sup_{ε>0} ε (log N(B₂, εT°))^{1/2} ≤ K ℓ(T).
The proof of (3.15) is rather simple, simpler than the proof of Sudakov's minoration. Let a = 2ℓ(T). Then
γ_N(aT°) = IP{ sup_{t∈T} |X_t| ≤ a } ≥ 1/2.
Let now ε > 0 and n be such that
n ≤ N(B₂, εT°) = N( (a/ε)B₂, aT° ).
There exist z_1, …, z_n in (a/ε)B₂ such that, for all i ≠ j, (z_i + aT°) ∩ (z_j + aT°) = ∅, and thus
1 ≥ γ_N( ⋃_{i=1}^n (z_i + aT°) ) = Σ_{i=1}^n γ_N(z_i + aT°).
For any z in IR^N, a change of variables indicates that
γ_N(z + aT°) = exp(−|z|²/2) ∫_{aT°} exp⟨z,x⟩ dγ_N(x) ≥ exp(−|z|²/2) γ_N(aT°),
where the lower bound follows by Jensen's inequality and symmetry of T°. Hence, since z_i ∈ (a/ε)B₂, i.e. |z_i| ≤ a/ε, and γ_N(aT°) ≥ 1/2, we finally get that
n ≤ 2 exp(a²/2ε²),
which is exactly (3.15).

It is worthwhile noting that Theorem 3.18 and its dual version (3.15) can easily be shown to be equivalent. Let us sketch for example how Sudakov's minoration can be deduced from (3.15) using a simple duality argument. Observe first that, for every ε > 0, 2T ∩ (ε²/2)T° ⊂ εB₂. Indeed, if t ∈ 2T and t ∈ (ε²/2)T°, then
|t|² = ⟨t,t⟩ ≤ ‖t‖_T ‖t‖_{T°} ≤ 2 · (ε²/2) = ε²,
where ‖·‖_T (respectively ‖·‖_{T°}) is the Banach norm (gauge) induced by T (respectively T°). It follows that
N(T, εB₂) ≤ N( T, 2T ∩ (ε²/2)T° ) = N( T, (ε²/2)T° ).
By homogeneity, and elementary properties of entropy numbers,
N( T, (ε²/2)T° ) ≤ N(T, 2εB₂) N( 2εB₂, (ε²/2)T° ) = N(T, 2εB₂) N( B₂, (ε/4)T° ).
Thus, for every ε > 0,
ε (log N(T, εB₂))^{1/2} ≤ ε (log N(T, 2εB₂))^{1/2} + 4M,
where M = sup_{ε>0} ε (log N(B₂, εT°))^{1/2}. One then easily deduces that
(3.16) sup_{ε>0} ε (log N(T, εB₂))^{1/2} ≤ 8M.
The converse inequality may be shown similarly. By duality,
B₂ ⊂ Conv( (ε/2)T°, (2/ε)T ) ⊂ (ε/2)T° + (2/ε)T.
Then
N(B₂, εT°) ≤ N( (ε/2)T° + (2/ε)T, εT° ) = N( (2/ε)T, (ε/2)T° ) ≤ N( (2/ε)T, ¼B₂ ) N( ¼B₂, (ε/2)T° ),
and we can conclude as before that
(3.17) sup_{ε>0} ε (log N(B₂, εT°))^{1/2} ≤ 4M′,
where M′ = sup_{ε>0} ε (log N(T, εB₂))^{1/2}.
84

**The last appli
ation of the
omparison theorems that we present
on
ern tensor produ
ts of Gaussian
**

measures. For simpli
ity, we only deal with Radon random variables. Let E and F be two Bana
h spa
es.

If x 2 E and y 2 F , x

y is the bilinear form on E 0 F 0 whi
h maps (f; h) 2 E 0 F 0 into f (x)h(y) .

P

The linear tensor produ
t E

F
onsists of all

nite sums u = xi

yi with xi 2 E , yi 2 F . On

i

E

F ,
onsider the inje
tive tensor produ
t norm

kuk_ =

(

X

sup

f (xi )h(yi )

.

.

.

j xi yj F . the norm of u as a bounded bilinear form on E 0 F 0 . F ). i ) .j ) be a doubly-indexed orthogaussian sequen e. Y = P j gj yj ) a Gaussian random variable with values in E (resp.e. one might wonder whether X i. kf k1 kf k1 i (Y ) being de. kf k 1 . Given onvergent series X and Y like this with values in E and F respe tively. The ompletion of E F with respe t to this F . khk 1 . This question has a positive answer is almost surely onvergent in the inje tive tensor produ t spa e E P and this is the on lusion of the next theorem.j gi. Let further (gi. Re all that (X ) = sup (IEf 2(X ))1=2 = sup ( f 2 (xi ))1=2 . Here (gi ) denotes as usual an orthogaussian sequen e. i. norm is alled the inje tive tensor produ t of E and F and denoted by E Consider now X = P i gi xi (resp.

20.j F and the following inequality holds: onvergent in the inje tive tensor produ t E max((X )IEkY k . (Y )IEkX k) IEkGk_ (X )IEkY k + (Y )IEkX k : Proof.j xi yj is almost surely i. Then G = gi.ned similarly. Let X = P i gi xi and Y = P j gj yj be onvergent Gaussian series with values in E P and F respe tively and with orresponding (X ) and (Y ) . Theorem 3. To prove that G is onvergent it is enough to establish the right side of the inequality of the theorem for .

nite sums and use a limiting argument. By de.

the left side is easy. note that it indi ates in the same way that the onvergen e of G implies the onvergen e of X and Y .nition of the tensor produ t norm. In the sequel we therefore only deal with .

onsidered as a Gaussian pro ess indexed by the produ t of the unit balls of E 0 and F 0 . to . The idea of the proof is to ompare G .nite sequen es (xi ) and (yj ) .

f 2 E0 . h) : kf k=1 khk=1 The reason for the introdu tion of Gb is that we will use Corollary 3. Consider namely the Gaussian pro ess Ge indexed by E 0 F 0 : Ge(f.85 another built as a kind of (independent) "sum" of X and Y . IEkGk_ IE sup sup Gb(f. something whi h is given by Gb . Indeed. by Jensen's inequality and independen e. rather than to ompare G to Ge it is onvenient to repla e G by Gb(f.j ) .12 where we need spe ial information on the diagonal of the ovarian e stru tures. h 2 F 0 where (gi ) . h) = X i. Clearly. it is easily veri. (gj0 ) are independent orthogaussian sequen es.j f (xi )h(yj ) + (X )( j h2 (yj ))1=2 g . f 2 E0 . h 2 F 0 where g is a standard normal variable independent of (gi. A tually. h) = (X ) X j X gj0 h(yj ) + ( j h2 (yj ))1=2 X i gi f (xi ) .j X gi.

We are thus in a position to apply Corollary 3. Noti e that Theorem 3. h) IE sup sup Ge(f. h) : khk=1 kf k=1 khk=1 kf k=1 To get the inequality of the theorem we need simply note that IE sup sup Ge(f. h)Ge(f 0 . f 0 2 E 0 . h0 ) IEGe(f.15 yields a somewhat simpli.12 to the Gaussian pro esses Gb and Ge . h)Gb(f 0 . h0 ) = ((X )2 X i X f (xi )f 0 (xi ))[( j X h2 (yj ))1=2 ( j h0 2 (yj ))1=2 X j h(yj )h0 (yj )℄ : Hen e this dieren e is always positive and is equal to 0 when h = h0 .20 is therefore omplete.ed that for f . After an approximation and ompa tness argument. it implies that IE sup sup Gb(f. h) (X )IEkY k + (Y )IEkX k : khk=1 kf k=1 The proof of Theorem 3. h . IEGb(f. h0 2 F 0 .

.ed proof.

Then Theorem 3. N ) is a generi point in IRN . Let X be a Gaussian Radon random variable with values in a Bana h spa e B .21. One appli ation is the following orollary whi h will be of interest in Chapter 9.86 As a onsequen e of Corollary 3. if = (1 .20 in the tensor spa e B 2 immediately yields N X IE sup i Xi jj=1 i=1 00 IEkX k 11=2 1 N X + (X )IE B gj2 A C A j =1 . N X IE sup i Xi jj=1 p IEkX k + (X ) N N X IE inf i Xi jj=1 p IEkX k (X ) N : i=1 and i=1 Proof. : : : . Consider Y= N X j =1 gj ej `N where (ej ) is the anoni al basis of `N 2 .20. X an be represented as X = P i gi xi for some sequen e (xi ) in B . Then. we also have that IE inf sup Ge(f. h) : khk=1 kf k=1 khk=1 kf k=1 Now noti e that X IE inf sup Ge(f. h) = (X )IE inf h(Y ) + inf ( h2 (yj ))1=2 IEkX k khk=1 khk=1 j khk=1 kf k=1 so that we have the following lower bound: (3:18) X IE inf sup Gb(f.13. Corollary 3. Let also (Xi ) be independent opies of X . h) IE inf sup Gb(f. for every N . h) (X )IE inf h(Y ) + inf ( h2 (yj ))1=2 IEkX k : khk=1 kf k=1 khk=1 khk=1 j This inequality has some interesting onsequen es together with the one in Theorem 3.

and thus the first inequality of the corollary follows since obviously IE((Σ_{j=1}^N g_j²)^{1/2}) ≤ √N. For the second, use (3.18): in this case indeed
IE inf_{|α|=1} sup_{‖f‖=1} ( Σ_{i,j} g_{i,j} α_j f(x_i) + σ(X) ( Σ_j α_j² )^{1/2} g ) = IE inf_{|α|=1} sup_{‖f‖=1} ( Σ_{j=1}^N α_j f(X_j) + σ(X) g )
and thus
IE inf_{|α|=1} ‖ Σ_{j=1}^N α_j X_j ‖ ≥ IE‖X‖ − σ(X) IE( ( Σ_{j=1}^N g_j² )^{1/2} ) ≥ IE‖X‖ − σ(X) √N.
The proof is therefore complete.

Notes and references

Some general references on Gaussian processes and measures (for both this chapter and Chapters 11 and 12) are [Ne2], [Ad], [Ku], [J-M3], [V-T-C], [B-C], [Su4], [Fer9], [K-L], etc. The interested reader may find there (completed with the papers [Bo3], [Ta1]) various topics on Gaussian measures on abstract spaces, like for example zero-one laws, reproducing kernel Hilbert spaces, etc., not developed in this book.

The history of integrability properties of norms of infinite dimensional Gaussian random vectors starts with the papers [L-S] and [Fer2]. A. V. Skorokhod [Sk2] had an argument to show that IE exp α‖X‖ < ∞ (using the strong Markov property of Brownian motion); J. Hoffmann-Jørgensen indicates in [HJ3] a way to get from this partial conclusion the usual exponential square integrability. The proof by H. J. Landau and L. A. Shepp is isoperimetric and led eventually to the Gaussian isoperimetric inequality. Fernique's simple argument is the one leading to (3.4) and applies to the rather general setting of measurable seminorms. Our description of the integrability properties and tail behavior is isoperimetric and follows C. Borell [Bo2] (and the exposition of [Eh4]). The concentration in Lemma 3.1 is issued from Chapter 1. Theorem 3.2 is due to M. B. Marcus and L. A. Shepp [M-S2]; see also [G-K1], [G-K2]. In particular, the limit in Corollary 3.2,
lim_{t→∞} (1/t²) log IP{‖X‖ > t} = −1/(2σ²),
is of course a large deviation result for complements of balls centered at the origin; similar results for different sets might hold as well (on large deviations, see e.g. [Az], [Str]). Theorem 3.3 was established in [Ta2], where examples describing its optimality are given; indeed, Φ^{−1}(IP{‖X‖ ≤ t}) approaches its asymptote as slowly as one wishes. Theorem 3.3 improves the preceding limit into
lim_{t→∞} ( (1/t) log IP{‖X‖ > t} + t/(2σ²) ) = 0
(for Radon variables), which appears as a "normal" deviation result for complements of balls; Theorem 3.3 has thus some interpretation in large deviations. Corollary 3.5 in a slightly improved form was used in [Go]. That the corresponding quantity vanishes for a Gaussian Radon measure was recorded in [D-HJ-S], and that it is the unique jump of the distribution of ‖X‖ is due to B. S. Tsirelson [Ts].

Homogeneous chaos were introduced by N. Wiener [Wie] and are presented, e.g., in [Ne2]. Their order of integrability was first investigated in [Schr] and [Var]. Hypercontractivity of the Hermite semigroup has been discovered by E. Nelson [Nel]; L. Gross [Gro] translated this property into a logarithmic Sobolev inequality and uses a two point inequality and the central limit theorem to provide an alternate proof (see also Chapter 4, and cf. [Bec] for further deep results in Fourier Analysis along these lines). The relevance of hypercontractivity to integrability of Gaussian chaos (and its extension to the vector valued case) was noticed by C. Borell [Bo4], [Bo6], [Bo7]; the deep work [Bo4] however develops the isoperimetric approach that we closely follow here (and that is further developed in [Bo7]). The introduction of decoupled chaos is motivated by [Kw4] (following [Bo6], [Bo7]). Theorem 3.10 is perhaps new.

Inequality (3.11) with its best constant 1 (for C symmetric) is due to T. W. Anderson [An]. Slepian's Lemma appeared in [Sl]. Its geometric meaning makes it probably more ancient, as was noted by several authors [Su1], [Su4]. Related to this lemma, let us mention its "two-sided" analogue studied by Z. Šidák [Si1], [Si2], which expresses that if X = (X_1, …, X_N) is a Gaussian vector in IR^N, for any positive numbers λ_i, i ≤ N,
IP{ ⋂_{i=1}^N ( |X_i| ≤ λ_i ) } ≥ Π_{i=1}^N IP{ |X_i| ≤ λ_i }.
We refer to [To] for more inequalities on Gaussian distributions in finite dimension.

Slepian's lemma was first used in the study of Gaussian processes in [Su1], [Su3], [M-S1] and [M-S2], where Corollary 3.14 is established; credit is also due to S. Chevet (unpublished). Our exposition of the inequalities by Slepian and Gordon is based on Theorem 3.11 of J.-P. Kahane [Ka2]. Theorem 3.15 was announced by V. N. Sudakov in [Su2] (see also [Su4]) and established in this form by X. Fernique [Fer4]. Y. Gordon [Gor1], [Gor2] discovered Corollary 3.13 and Theorem 3.16, motivated by Dvoretzky's theorem (cf. Chapter 9); [Gor3] contains a more general and simplified proof of Theorem 3.16 with applications.

Sudakov's minoration was observed in [Su1]. Its dual version (3.15) appeared in the context of the local theory of Banach spaces and duality of entropy numbers [P-TJ]; see also [Car], [TJ1], [Ch2]. The consideration of ℓ(T) (with this notation) goes back to [L-P]. The equivalence between Sudakov's minoration and its dual version ((3.16) and (3.17)) is due to N. Tomczak-Jaegermann [TJ1] (and her argument actually shows a closer relationship between the entropy numbers and the dual entropy numbers); this will partially be used below in Section 15.5. [The simple proof of (3.15) presented here is due to the second author. This proof was communicated in particular to the author of [Go] (where a probabilistic application is obtained), who gives a strikingly creative acknowledgement of the fact.] Further applications of the method are presented in [Ta19].

Tensor products of Gaussian measures were initiated by S. Chevet [Ch1]. The best constants in Theorem 3.20 and inequality (3.18) follow from [Gor1], [Gor2], from where Corollary 3.21 is also taken.

Chapter 4. Rademacher averages

4.1. Real Rademacher averages
4.2. The contraction principle
4.3. Integrability and tail behavior of Rademacher series
4.4. Integrability of Rademacher chaos
4.5. Comparison theorems
Notes and references

This chapter is devoted to Rademacher averages Σ_i ε_i x_i with vector valued coefficients, as a natural analog of the Gaussian averages Σ_i g_i x_i. The properties we examine are entirely similar to the ones investigated in the Gaussian case. We will see in this way how isoperimetric methods can be used to yield strong integrability properties of convergent Rademacher series and chaos; this is studied in Sections 4.3 and 4.4. Some comparison results are also available, in the form, for example, of a version of Sudakov's minoration presented in Section 4.5. We however start in the first two sections with some basic facts on Rademacher averages with real coefficients, as well as on the so-called contraction principle, a most valuable tool in Probability in Banach spaces.

We thus assume we are given, on some probability space (Ω, A, IP), a sequence (ε_i) of independent random variables taking the values +1 and −1 with probability 1/2 each, that is, symmetric Bernoulli or Rademacher random variables. We usually call (ε_i) a Rademacher sequence. If (ε_i) is considered alone, one might take, as a concrete example, Ω to be the Cantor group {−1,+1}^IN, IP its canonical product probability measure (Haar measure) (½δ₋₁ + ½δ₊₁)^{⊗IN} and the ε_i the coordinate maps. We thus investigate finite or convergent sums Σ_i ε_i x_i with vector valued coefficients x_i. As announced, the first paragraph is devoted to some preliminaries in the real case.

4.1. Real Rademacher averages

If (α_i) is a sequence of real numbers, a trivial application of the three series theorem (or Lemma 4.2 below) indicates that the series Σ_i ε_i α_i is almost surely (or in probability) convergent if and only if Σ_i α_i² < ∞. Actually the sum Σ_i ε_i α_i has remarkable properties in connection with the sum of the squares Σ_i α_i², and it is the purpose of this paragraph to recall some of these. Since we will only be interested in estimates which easily extend to infinite sums, there is no loss in generality to assume, as is usual, for simplicity in the exposition, that we deal with finite sequences (α_i), i.e. finitely many α_i's only are non-zero.

The first main observation is the classical subgaussian estimate, which draws its name from the Gaussian type tail. We can obtain it as a consequence of Lemma 1.5, since Σ_i ε_i α_i clearly defines a mean zero sum of martingale differences, or directly by the same argument: indeed, given thus a finite sequence (α_i) of real numbers,
IE exp( λ Σ_i ε_i α_i ) = Π_i IE exp( λ ε_i α_i ) ≤ Π_i exp( λ²α_i²/2 ) = exp( (λ²/2) Σ_i α_i² )
for all λ > 0, and hence, by Chebyshev's inequality, for every t > 0,
(4.1) IP{ | Σ_i ε_i α_i | > t } ≤ 2 exp( −t² / 2 Σ_i α_i² ).
In particular, a convergent Rademacher series Σ_i ε_i α_i satisfies exponential squared integrability properties exactly as Gaussian variables. This simple inequality (4.1) is extremely useful. It is moreover sharp in various instances and we have in particular the following converse, which we record at this stage for further purposes: there is a numerical constant K ≥ 1 such that if (α_i) and t satisfy t ≥ K (Σ_i α_i²)^{1/2} and t max_i |α_i| ≤ K^{−1} Σ_i α_i², then
(4.2) IP{ Σ_i ε_i α_i > t } ≥ exp( −Kt² / Σ_i α_i² ).
This inequality, actually in a more precise form concerning the choice of the constants, can be deduced for example from the more general Kolmogorov minoration inequality given below as Lemma 8.1. Let us however give a direct proof of (4.2). Assume, by homogeneity, that Σ_i α_i² = 1, that t ≥ 2 and that |α_i| ≤ 1/16t for all i.
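Before turning to the proof of (4.2), note that (4.1) can be verified exactly for small N by enumerating all 2^N sign patterns. The sketch below (the helper names are ours) computes the exact tail probability of |Σ ε_i α_i| and compares it with the subgaussian bound:

```python
import itertools
import math

def tail_prob(alphas, t):
    """Exact IP{ |sum_i eps_i * alpha_i| > t } over all 2^n sign patterns."""
    n = len(alphas)
    hits = sum(
        1
        for signs in itertools.product((-1, 1), repeat=n)
        if abs(sum(s * a for s, a in zip(signs, alphas))) > t
    )
    return hits / 2 ** n

def subgaussian_bound(alphas, t):
    """Right-hand side of (4.1): 2 * exp(-t^2 / (2 * sum alpha_i^2))."""
    return 2.0 * math.exp(-t * t / (2.0 * sum(a * a for a in alphas)))

alphas = [0.5, 1.0, 1.5, 2.0, 0.25, 0.75]
for t in (0.5, 1.0, 2.0, 4.0):
    assert tail_prob(alphas, t) <= subgaussian_bound(alphas, t)
```

The enumeration is exponential in N, so this is only a finite-dimensional sanity check, but (4.1) itself makes no smallness assumption on the coefficients.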

Define n_j, j ≤ k (n₀ = 0), by
n_j = inf{ n > n_{j−1} ; Σ_{i=n_{j−1}+1}^{n} α_i² > 1/16²t² }.
On the other hand, for each j ≤ k,
Σ_{i=n_{j−1}+1}^{n_j} α_i² ≤ Σ_{i=n_{j−1}+1}^{n_j−1} α_i² + 1/16²t² ≤ 2/16²t²,
so that k ≥ 16²t²/4, since k being the last one means that Σ_{i>n_k} α_i² ≤ 1/16²t² ≤ 1/2. Set I_j = {n_{j−1}+1, …, n_j}. We can then write, by independence,
IP{ Σ_i ε_i α_i > t } ≥ Π_{j≤k} IP{ Σ_{i∈I_j} ε_i α_i > ¼ ( Σ_{i∈I_j} α_i² )^{1/2} },
since each block then exceeds 1/64t and there are at least 64t² blocks. Now, (4.3) and Lemma 4.2 below indicate together, with a symmetry argument, that
IP{ Σ_{i∈I_j} ε_i α_i > ¼ ( Σ_{i∈I_j} α_i² )^{1/2} } ≥ 1/16.
It follows that
IP{ Σ_i ε_i α_i > t } ≥ (1/16)^k ≥ exp( −16²t² log 16 ),
which is the result (with e.g. K = 16² log 16).

The subgaussian inequality (4.1) can be used to yield a simple proof of the classical Khintchine inequalities.

Lemma 4.1. For any 0 < p < ∞, there exist positive finite constants A_p and B_p depending on p only such that, for any finite sequence (α_i) of real numbers,
A_p ( Σ_i α_i² )^{1/2} ≤ ‖ Σ_i ε_i α_i ‖_p ≤ B_p ( Σ_i α_i² )^{1/2}.

Proof. By homogeneity, assume that Σ_i α_i² = 1. Then, by the integration by parts formula and (4.1),
IE| Σ_i ε_i α_i |^p = ∫_0^∞ IP{ | Σ_i ε_i α_i | > t } dt^p ≤ 2 ∫_0^∞ exp( −t²/2 ) dt^p = B_p^p.
For the left hand side inequality, by Jensen's inequality, it is enough to consider the case p < 2. By means of Hölder's inequality, we get
1 = IE| Σ_i ε_i α_i |² = IE( | Σ_i ε_i α_i |^{2p/3} | Σ_i ε_i α_i |^{(6−2p)/3} ) ≤ ( IE| Σ_i ε_i α_i |^p )^{2/3} ( IE| Σ_i ε_i α_i |^{6−2p} )^{1/3} ≤ ( IE| Σ_i ε_i α_i |^p )^{2/3} B_{6−2p}^{(6−2p)/3},
from which the conclusion follows.

We retain from the preceding proof that B_p ≤ K√p (p ≥ 1) for some numerical constant K. The best possible constants A_p and B_p in Khintchine's inequalities are known [Ha]. We will use also the known fact [Sz] that A₁ = 2^{−1/2} (in order to deal with a specific value), i.e.
(4.3) ( Σ_i α_i² )^{1/2} = ‖ Σ_i ε_i α_i ‖₂ ≤ √2 ‖ Σ_i ε_i α_i ‖₁.
Khintchine's inequalities show how the Rademacher sequence defines a basic unconditional sequence and spans ℓ₂ in the spaces L_p, 0 < p < ∞. We could also add in a sense p = 0 in this claim, as is shown by the simple (but useful) following lemma. The interest of this lemma goes beyond this application and it will be mentioned many times throughout this book.

Lemma 4.2. Let Z be a positive random variable such that, for some q > p > 0 and some constant C,
‖Z‖_q ≤ C ‖Z‖_p.
Then, if t > 0 is such that IP{Z > t} ≤ (2C^p)^{q/(p−q)}, we have
‖Z‖_p ≤ 2^{1/p} t and ‖Z‖_q ≤ 2^{1/p} C t.

Proof. By Hölder's inequality,
IEZ^p ≤ t^p + ∫_{{Z>t}} Z^p dIP ≤ t^p + ‖Z‖_q^p ( IP{Z > t} )^{1−p/q} ≤ 2t^p,
where the last inequality is obtained from the choice of t.

that if (Xn ) is a sequen e of random variables (real or ve tor valued) su h that for some q > p > 0 and C > 0 . While the subspa e generated by ("i ) in Lp for 0 p < 1 is `2 . then. sin e sup kXnkq < 1 by Lemma 4. kXn kq C kXn kp for all n . (Xn ) also onverges to X in n Lq0 for all q0 < q . in L1 however this subspa e is isometri to `1 . Indeed.2. for any . and if (Xn ) onverges in probability to some variable X .95 Note. as a onsequen e of this lemma.

Indeed, for any finite sequence $(\alpha_i)$ of real numbers, there exists a choice of signs $\varepsilon_i = \pm1$ such that $\varepsilon_i\alpha_i = |\alpha_i|$ for all $i$. Hence
$$\Big\|\sum_i \varepsilon_i\alpha_i\Big\|_\infty = \sum_i |\alpha_i|\,.$$
It might therefore be of some interest to try to have an idea of the span of the Rademacher sequence in spaces "between" $L_p$ for $p < \infty$ and $L_\infty$. Among the possible intermediary spaces we may consider Orlicz spaces with exponential rates.

A Young function $\psi : \mathbb{R}_+ \to \mathbb{R}_+$ is convex, increasing with $\lim_{t\to\infty}\psi(t) = \infty$ and $\psi(0) = 0$. Denote by $L_\psi = L_\psi(\Omega, \mathcal{A}, \mathbb{P})$ the Orlicz space of all real random variables $X$ (defined on $(\Omega, \mathcal{A}, \mathbb{P})$) such that $\mathbb{E}\psi(|X|/c) < \infty$ for some $c > 0$. Equipped with the norm
$$\|X\|_\psi = \inf\{c > 0\,;\ \mathbb{E}\psi(|X|/c) \le 1\}\,,$$
$L_\psi$ defines a Banach space. If for example $\psi(x) = x^p$, $1 \le p < \infty$, then $L_\psi = L_p$. We shall be interested here in the exponential functions
$$\psi_q(x) = \exp(x^q) - 1\,,\qquad 1 \le q < \infty\,.$$
To handle the small convexity problem when $0 < q < 1$, we can let $\psi_q(x) = \exp(x^q) - 1$ for $x \ge x(q)$ large enough, and take $\psi_q$ to be linear on $[0, x(q)]$.

The first observation is that $(\varepsilon_i)$ still spans a subspace isomorphic to $\ell_2$ in $L_{\psi_q}$ whenever $q \le 2$. We assume that $1 \le q \le 2$ for simplicity, but the proof is similar when $0 < q < 1$. Letting $X = \sum_i \varepsilon_i\alpha_i$ where $(\alpha_i)$ is a finite sequence of real numbers such that, by homogeneity, $\sum_i \alpha_i^2 = 1$, we have, by integration by parts and (4.1),

$$\mathbb{E}\,\psi_q\Big(\frac{|X|}{c}\Big) = \int_0^\infty \mathbb{P}\{|X| > ct\}\,d\big(e^{t^q} - 1\big) \le 2\int_0^\infty \exp\Big(-\frac{c^2t^2}{2} + t^q\Big)\,q\,t^{q-1}\,dt\,.$$
Hence, when $q \le 2$ and $c = B'_q$ is large enough, $\mathbb{E}\,\psi_q(|X|/B'_q) \le 1$, so that $\|X\|_{\psi_q} \le B'_q$. On the other hand, since $e^x - 1 \ge x$,
$$\mathbb{E}\,\psi_q\Big(\frac{|X|}{c}\Big) \ge \frac{\mathbb{E}|X|^q}{c^q} \ge \frac{A_q^q}{c^q} > 1$$
for $c = A'_q$ small enough, so that $\|X\|_{\psi_q} \ge A'_q$. Hence, when $q \le 2$,
$$A'_q\Big(\sum_i \alpha_i^2\Big)^{1/2} \le \Big\|\sum_i \varepsilon_i\alpha_i\Big\|_{\psi_q} \le B'_q\Big(\sum_i \alpha_i^2\Big)^{1/2}$$
for any sequence $(\alpha_i)$ of real numbers.

For $q > 2$, the span of $(\varepsilon_i)$ in $L_{\psi_q}$ is no longer isomorphic to $\ell_2$. This span actually appears as some interpolation space between $\ell_2$ and $\ell_1$. Recall that for $0 < p < \infty$ we denote by $\ell_{p,\infty}$ the space of all real sequences $(\alpha_i)_{i\ge1}$ such that
$$\|(\alpha_i)\|_{p,\infty} = \Big(\sup_{t>0} t^p\,\mathrm{Card}\{i\,;\ |\alpha_i| > t\}\Big)^{1/p} < \infty\,.$$
Equivalently, $\|(\alpha_i)\|_{p,\infty} = \sup_{i\ge1} i^{1/p}\alpha_i^*$, where $(\alpha_i^*)$ denotes the non-increasing rearrangement of $(|\alpha_i|)$. These spaces are known to show up in interpolation of $\ell_p$-spaces. The functional $\|\cdot\|_{p,\infty}$, equivalent to a norm when $p > 1$, may be compared to the $\ell_p$-norms as follows: for $r < p$,
$$\|(\alpha_i)\|_{p,\infty} \le \Big(\sum_i |\alpha_i|^p\Big)^{1/p}\,,\qquad \Big(\sum_i |\alpha_i|^p\Big)^{1/p} \le \Big(\frac{p}{p-r}\Big)^{1/p}\|(\alpha_i)\|_{r,\infty}\,.$$
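The rearrangement description of $\|\cdot\|_{p,\infty}$ and its comparison with the $\ell_p$-norms are easy to check numerically; the following sketch (the finite test sequence is arbitrary) verifies $\|(\alpha_i)\|_{p,\infty} \le (\sum_i|\alpha_i|^p)^{1/p} \le (p/(p-r))^{1/p}\|(\alpha_i)\|_{r,\infty}$ for $r < p$:

```python
def weak_lp_norm(alpha, p):
    # ||alpha||_{p,infty} = sup_i i^{1/p} alpha*_i, with (alpha*_i) the
    # non-increasing rearrangement of the absolute values (i starting at 1).
    a = sorted((abs(x) for x in alpha), reverse=True)
    return max((i + 1) ** (1.0 / p) * x for i, x in enumerate(a))

alpha = [0.3, -2.0, 1.0, 0.1, -0.7]   # arbitrary test sequence
p, r = 2.0, 1.5
lp = sum(abs(x) ** p for x in alpha) ** (1.0 / p)

assert weak_lp_norm(alpha, p) <= lp
assert lp <= (p / (p - r)) ** (1.0 / p) * weak_lp_norm(alpha, r)
```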

The next lemma describes how, for $2 < q < \infty$, the span of $(\varepsilon_i)$ in $L_{\psi_q}$ is isomorphic to $\ell_{p,\infty}$ where $p$ is the conjugate of $q$.

Lemma 4.3. Let $2 < q < \infty$ and $p = q/(q-1)$, i.e. $1/p + 1/q = 1$. There exist positive finite constants $A'_q$, $B'_q$ depending only on $q$ (or $p$) such that for any finite sequence $(\alpha_i)$ of real numbers
$$A'_q\|(\alpha_i)\|_{p,\infty} \le \Big\|\sum_i \varepsilon_i\alpha_i\Big\|_{\psi_q} \le B'_q\|(\alpha_i)\|_{p,\infty}\,.$$

Proof. By symmetry, $(\varepsilon_i\alpha_i)$ has the same distribution as $(\varepsilon_i|\alpha_i|)$; by identical distribution and the definition of $\|(\alpha_i)\|_{p,\infty}$ we may assume that $|\alpha_1| \ge \cdots \ge |\alpha_i| \ge \cdots$. The martingale inequality of Lemma 1.7 then yields, for every $t > 0$,
$$(4.4)\qquad \mathbb{P}\Big\{\Big|\sum_i \varepsilon_i\alpha_i\Big| > t\Big\} \le 2\exp\Big(-\frac{t^q}{C_q\|(\alpha_i)\|_{p,\infty}^q}\Big)\,.$$

As before, one then deduces the right side of the inequality of Lemma 4.3 from (4.4) by a simple integration by parts. Turning to the left side, we make use of the contraction principle in the form of Theorem 4.4 below. It implies indeed, by symmetry and monotonicity of $(|\alpha_i|)$, that for every $m$,
$$\mathbb{E}\exp\Big|\sum_i \varepsilon_i\alpha_i\Big|^q \ge \mathbb{E}\exp\Big(|\alpha_m|^q\Big|\sum_{i=1}^m \varepsilon_i\Big|^q\Big)\,,$$
and hence that
$$\Big\|\sum_i \varepsilon_i\alpha_i\Big\|_{\psi_q} \ge |\alpha_m|\,\Big\|\sum_{i=1}^m \varepsilon_i\Big\|_{\psi_q}\,.$$
Now, since $\sum_{i=1}^m \varepsilon_i = \pm m$ with probability $2^{-m}$, it is easily seen that
$$\Big\|\sum_{i=1}^m \varepsilon_i\Big\|_{\psi_q} \ge (1 + \log 2)^{-1/q}\, m^{1/p}\,.$$
Summarizing, we have obtained that
$$\Big\|\sum_i \varepsilon_i\alpha_i\Big\|_{\psi_q} \ge (1 + \log 2)^{-1/q}\sup_{m\ge1} m^{1/p}|\alpha_m|$$
which is the result.

We should point out further that the inequality corresponding to Lemma 1.8 states in this setting that for any finite sequence $(\alpha_i)$ and any $t > 0$,
$$(4.5)\qquad \mathbb{P}\Big\{\Big|\sum_i \varepsilon_i\alpha_i\Big| > t\Big\} \ge \frac{1}{16}\exp\big[-\exp\big(t/4\|(\alpha_i)\|_{1,\infty}\big)\big]\,.$$

The rest of this chapter is mainly devoted to extensions of the previous classical results to Rademacher averages with Banach space valued coefficients. Of course, this vector valued setting is characterized by the lack of the orthogonality property

$$\mathbb{E}\Big|\sum_i \varepsilon_i\alpha_i\Big|^2 = \sum_i \alpha_i^2\,.$$
Various substitutes have therefore to be investigated, but the extension program will basically be fulfilled. Classification of Banach spaces according to the preceding orthogonality property is at the origin of the notions of type and cotype of Banach spaces which will be discussed later on in Chapter 9.

4.2. The contraction principle

We study here integrability and comparison theorems for Rademacher averages with vector valued coefficients. It is plain from Khintchine's inequalities that if $(\alpha_i)$ and $(\beta_i)$ are two sequences of real numbers such that $|\beta_i| \le |\alpha_i|$ for all $i$, one can compare $\|\sum_i \varepsilon_i\alpha_i\|_p$ and $\|\sum_i \varepsilon_i\beta_i\|_p$ for all $p$. This comparison is thus based on the sum of the squares and orthogonality. It however extends to vector valued coefficients, and even in an improved form. This property is known as the contraction principle, to which this paragraph is devoted. The main result is expressed in the following fundamental theorem.

Theorem 4.4. Let $F : \mathbb{R}_+ \to \mathbb{R}_+$ be convex. For any finite sequence $(x_i)$ in a Banach space $B$ and any real numbers $(\alpha_i)$ such that $|\alpha_i| \le 1$ for all $i$, we have
$$(4.6)\qquad \mathbb{E}F\Big(\Big\|\sum_i \alpha_i\varepsilon_i x_i\Big\|\Big) \le \mathbb{E}F\Big(\Big\|\sum_i \varepsilon_i x_i\Big\|\Big)\,.$$
Further, for any $t > 0$,
$$(4.7)\qquad \mathbb{P}\Big\{\Big\|\sum_i \alpha_i\varepsilon_i x_i\Big\| > t\Big\} \le 2\,\mathbb{P}\Big\{\Big\|\sum_i \varepsilon_i x_i\Big\| > t\Big\}\,.$$

Proof. The function
$$(\alpha_1, \ldots, \alpha_N) \to \mathbb{E}F\Big(\Big\|\sum_{i=1}^N \alpha_i\varepsilon_i x_i\Big\|\Big)$$
is convex on $\mathbb{R}^N$. Therefore, on the compact convex set $[-1,+1]^N$, it attains its maximum at an extreme point, that is a point $(\alpha_i)_{i\le N}$ such that $\alpha_i = \pm1$. But for such values of $\alpha_i$, by symmetry and identical distribution, both terms in (4.6) are equal. This proves (4.6). Concerning (4.7), replacing $\alpha_i$ by $|\alpha_i|$ we may assume by symmetry that $\alpha_i \ge 0$. Further, we suppose that $1 \ge \alpha_1 \ge \cdots \ge \alpha_N \ge \alpha_{N+1} = 0$. Set $S_k = \sum_{i=1}^k \varepsilon_i x_i$. Then
$$\sum_{i=1}^N \alpha_i\varepsilon_i x_i = \sum_{k=1}^N \alpha_k(S_k - S_{k-1}) = \sum_{k=1}^N (\alpha_k - \alpha_{k+1})S_k\,.$$
It follows that
$$\Big\|\sum_{i=1}^N \alpha_i\varepsilon_i x_i\Big\| \le \max_{k\le N}\|S_k\|\,.$$
We conclude by Lévy's inequalities (Proposition 2.3).
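With $F(t) = t$, inequality (4.6) can be illustrated by an exhaustive computation in $B = \mathbb{R}^2$ with the Euclidean norm; the vectors and contraction coefficients below are arbitrary test data:

```python
from itertools import product
from math import hypot

def expected_norm(coeffs, xs):
    # E || sum_i c_i eps_i x_i || for x_i in R^2, computed exactly over
    # all 2^n sign patterns.
    tot = 0.0
    for eps in product((-1, 1), repeat=len(xs)):
        sx = sum(e * c * x[0] for e, c, x in zip(eps, coeffs, xs))
        sy = sum(e * c * x[1] for e, c, x in zip(eps, coeffs, xs))
        tot += hypot(sx, sy)
    return tot / 2 ** len(xs)

xs = [(1.0, 0.0), (0.6, 0.8), (-0.3, 1.1), (0.0, -0.5)]
alphas = [0.9, -0.2, 0.7, 0.5]            # all |alpha_i| <= 1

# contraction principle (4.6) with F(t) = t
assert expected_norm(alphas, xs) <= expected_norm([1.0] * len(xs), xs)
```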

As a simple consequence of inequality (4.7), notice the following fact. Let $(\alpha_i)$ and $(\beta_i)$ be two sequences of real numbers such that $|\beta_i| \le |\alpha_i|$ for all $i$; then, if the series with vector coefficients $\sum_i \alpha_i\varepsilon_i x_i$ converges almost surely, or equivalently in probability, the same holds for the series $\sum_i \beta_i\varepsilon_i x_i$.

Theorem 4.4 admits several easy generalizations which will be used mostly without further comments in the sequel. Let us briefly indicate some of these generalizations. Recall that a sequence $(\eta_i)$ of real random variables is called a symmetric sequence when $(\eta_i)$ has the same distribution, as a sequence, as $(\varepsilon_i\eta_i)$ where $(\varepsilon_i)$ is independent from $(\eta_i)$. If the $\alpha_i$'s are now random variables independent of $(\varepsilon_i)$ (or of a symmetric sequence $(\eta_i)$) such that $\|\alpha_i\|_\infty \le 1$ for all $i$, it is then clear by independence and Fubini's theorem that Theorem 4.4 still holds true. Further, the conclusion of Theorem 4.4 also applies to $(\eta_i)$ in place of $(\varepsilon_i)$. Moreover, the fixed points $x_i$ can also be replaced, as is easy to see, by vector valued random variables independent of $(\varepsilon_i)$, as will be used in Chapter 6. The next lemma is another extension and formulation of the contraction principle.

Lemma 4.5. Let $F : \mathbb{R}_+ \to \mathbb{R}_+$ be convex and let $(\eta_i)$ be a symmetric sequence of real random variables such that $\mathbb{E}|\eta_i| < \infty$ for every $i$. Then, for any finite sequence $(x_i)$ in a Banach space,
$$\mathbb{E}F\Big(\inf_i\mathbb{E}|\eta_i|\,\Big\|\sum_i \varepsilon_i x_i\Big\|\Big) \le \mathbb{E}F\Big(\Big\|\sum_i \eta_i x_i\Big\|\Big)\,.$$

Proof. By the symmetry assumption, $(\eta_i)$ has the same distribution as $(\varepsilon_i|\eta_i|)$ where, as usual, $(\varepsilon_i)$ is independent from $(\eta_i)$. Using first Jensen's inequality and partial integration, and then the contraction principle (4.6), we get that
$$\mathbb{E}F\Big(\Big\|\sum_i \eta_i x_i\Big\|\Big) = \mathbb{E}F\Big(\Big\|\sum_i \varepsilon_i|\eta_i|x_i\Big\|\Big) \ge \mathbb{E}F\Big(\Big\|\sum_i \varepsilon_i\,\mathbb{E}|\eta_i|\,x_i\Big\|\Big) \ge \mathbb{E}F\Big(\inf_i\mathbb{E}|\eta_i|\,\Big\|\sum_i \varepsilon_i x_i\Big\|\Big)\,.$$
Note that in case the $\eta_i$'s have a common distribution the inequality reduces to the application of Jensen's inequality.
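For a symmetric sequence with finitely many atoms, the inequality of Lemma 4.5 (again with $F(t) = t$) can be checked by exact enumeration. In the sketch below, the $\eta_i$ are i.i.d. with values $-1, 0, +1$ and probabilities $1/4, 1/2, 1/4$, so that $\mathbb{E}|\eta_i| = 1/2$; the vectors are arbitrary test data:

```python
from itertools import product
from math import hypot

xs = [(1.0, 0.0), (0.3, 0.9), (-0.6, 0.4), (0.2, -0.8)]

def e_norm(values, probs):
    # E || sum_i eta_i x_i || for i.i.d. eta_i with the given finite
    # distribution, computed exactly over all value patterns.
    tot = 0.0
    for pattern in product(range(len(values)), repeat=len(xs)):
        pr, sx, sy = 1.0, 0.0, 0.0
        for k, x in zip(pattern, xs):
            pr *= probs[k]
            sx += values[k] * x[0]
            sy += values[k] * x[1]
        tot += pr * hypot(sx, sy)
    return tot

lhs = 0.5 * e_norm((-1.0, 1.0), (0.5, 0.5))        # inf_i E|eta_i| * E||sum eps_i x_i||
rhs = e_norm((-1.0, 0.0, 1.0), (0.25, 0.5, 0.25))  # E||sum eta_i x_i||
assert lhs <= rhs
```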

An example of a sequence $(\eta_i)$ of particular interest is given by the orthogaussian sequence $(g_i)$ consisting of independent standard normal variables. Since $\mathbb{E}|g_i| = (2/\pi)^{1/2}$, we have from the previous lemma and its notation that
$$(4.8)\qquad \mathbb{E}F\Big(\Big\|\sum_i \varepsilon_i x_i\Big\|\Big) \le \mathbb{E}F\Big(\Big(\frac{\pi}{2}\Big)^{1/2}\Big\|\sum_i g_i x_i\Big\|\Big)\,.$$
Hence Gaussian averages always dominate the corresponding Rademacher ones. In particular, from the integrability properties of Gaussian series (Corollary 3.2), if the series $\sum_i g_i x_i$ is convergent, so is $\sum_i \varepsilon_i x_i$.

One might wonder whether a converse inequality or implication holds true. Letting $F(t) = t$ for simplicity, the contraction principle applied conditionally on $(g_i)$, assumed to be independent from $(\varepsilon_i)$, yields
$$\mathbb{E}_\varepsilon\Big\|\sum_{i=1}^N \varepsilon_i g_i x_i\Big\| \le \max_{i\le N}|g_i|\ \mathbb{E}_\varepsilon\Big\|\sum_{i=1}^N \varepsilon_i x_i\Big\|$$
for any finite sequence $(x_i)_{i\le N}$ in a Banach space $B$, where we recall that $\mathbb{E}_\varepsilon$ is partial integration with respect to $(\varepsilon_i)$. If we now recall from (3.13) that $\mathbb{E}\max_{i\le N}|g_i| \le K(\log(N+1))^{1/2}$ for some numerical constant $K$, we see by integrating the previous inequality that
$$(4.9)\qquad \mathbb{E}\Big\|\sum_{i=1}^N g_i x_i\Big\| \le K\big(\log(N+1)\big)^{1/2}\,\mathbb{E}\Big\|\sum_{i=1}^N \varepsilon_i x_i\Big\|\,.$$
This is not the converse of inequality (4.8) since the constant depends on the number of elements $x_i$ which are considered. (4.9) is actually best possible in general Banach spaces, as is shown by the example of the canonical basis of $\ell_\infty^N$ together with the left hand side of (3.14). This example is however extremal, in the sense that if a Banach space does not contain subspaces isomorphic to the finite dimensional subspaces $\ell_\infty^n$ uniformly, then (4.9) holds with a constant independent of $N$ (but depending on the Banach space). We shall come back to this later on in Chapter 9 (see (9.12)).

So far, we retain that Gaussian averages dominate Rademacher ones and that the converse is not true in general, or only holds in the form (4.9). The next lemma is yet another form of the contraction principle, under comparison in probability.

Lemma 4.6. Let $F : \mathbb{R}_+ \to \mathbb{R}_+$ be convex. Let further $(\eta_i)$ and $(\xi_i)$ be two symmetric sequences of real random variables such that for some constant $K \ge 1$ and all $i$ and $t > 0$,
$$\mathbb{P}\{|\eta_i| > t\} \le K\,\mathbb{P}\{|\xi_i| > t\}\,.$$
Then, for any finite sequence $(x_i)$ in a Banach space,
$$\mathbb{E}F\Big(\Big\|\sum_i \eta_i x_i\Big\|\Big) \le \mathbb{E}F\Big(K\Big\|\sum_i \xi_i x_i\Big\|\Big)\,.$$

Proof. Let $(\delta_i)$ be independent of $(\eta_i)$ such that $\mathbb{P}\{\delta_i = 1\} = 1 - \mathbb{P}\{\delta_i = 0\} = 1/K$ for all $i$. Then, for every $t > 0$,
$$\mathbb{P}\{|\delta_i\eta_i| > t\} \le \mathbb{P}\{|\xi_i| > t\}\,.$$
Taking inverses of the distribution functions, it is easily seen, and classical, that the sequences $(\delta_i\eta_i)$ and $(\xi_i)$ can be realized on some probability space in such a way that, almost surely, $|\delta_i\eta_i| \le |\xi_i|$ for all $i$. From the contraction principle and the symmetry assumption it follows that
$$\mathbb{E}F\Big(\Big\|\sum_i \delta_i\eta_i x_i\Big\|\Big) \le \mathbb{E}F\Big(\Big\|\sum_i \xi_i x_i\Big\|\Big)\,.$$
The proof is then completed via Jensen's inequality with respect to the sequence $(\delta_i)$ since $\mathbb{E}\delta_i = 1/K$.

Notice that if we only have in the preceding lemma that $\mathbb{P}\{|\eta_i| > t\} \le K\,\mathbb{P}\{|\xi_i| > t\}$ for all $t \ge t_0 > 0$, then the conclusion is somewhat weakened: we have
$$\mathbb{E}F\Big(\Big\|\sum_i \eta_i x_i\Big\|\Big) \le \frac12\,\mathbb{E}F\Big(2Kt_0\Big\|\sum_i \varepsilon_i x_i\Big\|\Big) + \frac12\,\mathbb{E}F\Big(2K\Big\|\sum_i \xi_i x_i\Big\|\Big)\,.$$
Indeed, if
$$\eta_i' = t_0\,\varepsilon_i I_{\{|\eta_i|\le t_0\}} + \eta_i I_{\{|\eta_i| > t_0\}}$$
where $(\varepsilon_i)$ is an independent Rademacher sequence, then, for every $t > 0$, the couple $(\eta_i', \xi_i)$ satisfies the hypothesis of Lemma 4.6. Use then convexity and the contraction principle to get rid of the indicator functions.

4.3. Integrability and tail behavior of Rademacher series

On the basis of the scalar results described in the first paragraph, we now investigate integrability properties of Rademacher series with vector valued coefficients. The typical object of study is a convergent series $\sum_i \varepsilon_i x_i$, where $(x_i)$ is a sequence in a Banach space $B$; that is, $\sum_i \varepsilon_i x_i$ is almost surely (or in probability) convergent. This defines a Radon random variable in $B$. Motivated by the Gaussian study of the previous chapter, there is a somewhat larger setting corresponding to what could be called almost surely bounded Rademacher processes. In other words, for some set $T$, assumed to be countable for simplicity, let $(x_i)$ be a sequence of functions on $T$ such that $\sum_i x_i(t)^2 < \infty$ for all $t$. Assuming that $\sum_i \varepsilon_i x_i(t)$ is almost surely convergent for all $t$ and that
$$\sup_{t\in T}\Big|\sum_i \varepsilon_i x_i(t)\Big| < \infty$$
almost surely, we are interested in the integrability properties and tail behavior of this almost surely finite supremum.

As in the previous chapter, in order to unify the exposition, we assume that we are given a Banach space $B$ such that for some countable subset $D$ in the unit ball of $B'$, $\|x\| = \sup_{f\in D}|f(x)|$ for all $x$ in $B$. We deal with a random variable $X$ with values in $B$ such that there exists a sequence $(x_i)$ of points in $B$ with $\sum_i f^2(x_i) < \infty$ for every $f$ in $D$, and for which $(f_1(X), \ldots, f_N(X))$ has the same distribution as $(\sum_i \varepsilon_i f_1(x_i), \ldots, \sum_i \varepsilon_i f_N(x_i))$ for every finite subset $\{f_1, \ldots, f_N\}$ of $D$. We then speak of $X$ as a vector valued Rademacher series (although this terminology is somewhat improper) or almost surely bounded Rademacher process. For such an $X$ we investigate integrability and tail behavior of $\|X\|$. The size of the tail $\mathbb{P}\{\|X\| > t\}$ will be measured in terms of two parameters similar to the ones used in the Gaussian case. As for Gaussian random vectors, we consider indeed
$$\sigma = \sigma(X) = \sup_{f\in D}\big(\mathbb{E}f^2(X)\big)^{1/2} = \sup_{f\in D}\Big(\sum_i f^2(x_i)\Big)^{1/2} = \sup_{|h|\le1}\sup_{f\in D}\Big|\sum_i h_i f(x_i)\Big|$$
(where $h = (h_i) \in \ell_2$). Recall that if $X = \sum_i \varepsilon_i x_i$ is almost surely convergent in an (arbitrary) Banach space $B$, defining thus a Radon random variable, we can let simply $\sigma(X) = \sup_{\|f\|\le1}(\mathbb{E}f^2(X))^{1/2}$. It is easy to see that $\sigma$ is finite, actually controlled by some quantity associated to the $L_0$-topology of the norm $\|X\|$: if $M$ is for example such that $\mathbb{P}\{\|X\| > M\} \le 1/8$, we have that $\sigma \le 2\sqrt2\,M$.

To see this, recall first that by (4.3), for any $f$ in $D$,
$$\Big(\mathbb{E}\Big|\sum_i \varepsilon_i f(x_i)\Big|^2\Big)^{1/2} \le \sqrt2\,\mathbb{E}\Big|\sum_i \varepsilon_i f(x_i)\Big|\,.$$
It then follows from Lemma 4.2 and the definition of $M$ that
$$\Big(\sum_i f^2(x_i)\Big)^{1/2} = \Big(\mathbb{E}\Big|\sum_i \varepsilon_i f(x_i)\Big|^2\Big)^{1/2} \le 2\sqrt2\,M$$
and thus our preceding claim.

The tail behavior and integrability properties of $\|X\|$ are measured in terms of this number $\sigma$, supremum of weak moments, and some quantity related to the $L_0$-topology of the norm (strong moments), median or expectation. The next statement summarizes more or less the various results around this question. The main ingredient is the isoperimetric Theorem 1.3 and the related concentration inequality.

Theorem 4.7. Let $X$ be a Rademacher series in $B$ as defined before, with corresponding $\sigma = \sigma(X)$. Let moreover $M = M(X)$ denote a median of $\|X\|$. Then, for every $t > 0$,
$$(4.10)\qquad \mathbb{P}\{|\|X\| - M| > t\} \le 4\exp(-t^2/8\sigma^2)\,.$$
In particular, there exists $\lambda > 0$ such that $\mathbb{E}\exp\lambda\|X\|^2 < \infty$, and all moments of $X$ are equivalent: that is, for any $0 < p, q < \infty$, there is a constant $K_{p,q}$ depending on $p, q$ only such that
$$\|X\|_p \le K_{p,q}\|X\|_q\,.$$

Proof. Recall Haar measure $\mu$ on $\{-1,+1\}^{\mathbb{N}}$.

The function $\varphi$ on $\mathbb{R}^{\mathbb{N}}$ defined by $\varphi(\theta) = \sup_{f\in D}|\sum_i \theta_i f(x_i)|$, $\theta = (\theta_i)$, is $\mu$-almost everywhere finite by definition of $X$. It is convex and Lipschitzian on $\ell_2$ with Lipschitz constant $\sigma$ since
$$|\varphi(\theta) - \varphi(\theta')| \le \sup_{f\in D}\Big|\sum_i (\theta_i - \theta_i')f(x_i)\Big| \le \sigma\,|\theta - \theta'|\,.$$
Inequality (4.10) is then simply the concentration inequality (1.10), issued from Theorem 1.3, applied to this convex Lipschitzian function on $(\mathbb{R}^{\mathbb{N}}, \mu)$.

Alternatively, one may obtain (4.10) directly from Theorem 1.3 by a simple finite dimensional approximation, first through a finite supremum in $f$, then through finite sums.

An integration by parts on the basis of (4.10) already ensures that $\mathbb{E}\exp\lambda\|X\|^2 < \infty$ for all $\lambda > 0$ small enough, namely less than $1/8\sigma^2$. Recall, as is usual in this setting, that we also have, for all $t > 0$,
$$(4.11)\qquad \mathbb{P}\{\|X\| > M + t\} \le 2\exp(-t^2/8\sigma^2)\,.$$
Concerning the moment equivalences, if $M'$ is chosen to satisfy $\mathbb{P}\{\|X\| > M'\} \le 1/8$, we know that $\sigma \le 2\sqrt2\,M'$, and, from (4.11) and $M \le M'$, $\mathbb{P}\{\|X\| > M' + t\} \le 2\exp(-t^2/8\sigma^2)$ for all $t > 0$. Integrating by parts,
$$\mathbb{E}\|X\|^p \le M'^p + \int_0^\infty \mathbb{P}\{\|X\| > M' + t\}\,dt^p \le M'^p + K_p\sigma^p \le K'_p M'^p\,.$$
Since $M' \le (8\,\mathbb{E}\|X\|^q)^{1/q}$ for every $0 < q < \infty$, the claim of the theorem is established. Note that we can take $K_{p,2} \le K\sqrt p$ ($p \ge 2$) for some numerical constant $K$. Note also that (4.10) (or rather (4.11)) implies the weaker but sometimes convenient inequality, which corresponds perhaps more directly to the subgaussian inequality (4.1): for every $t > 0$,
$$(4.12)\qquad \mathbb{P}\{\|X\| > t\} \le 2\exp(-t^2/32\,\mathbb{E}\|X\|^2)\,.$$
For the proof, use (4.11) together with the facts that $M \le (2\,\mathbb{E}\|X\|^2)^{1/2}$ and $\sigma^2 \le \mathbb{E}\|X\|^2$.

It is worthwhile mentioning that the moment equivalences of Theorem 4.7 (due to J.-P. Kahane [Ka1]) provide an alternate proof of Khintchine's inequalities (with the right order of magnitude of the constants when $p \to \infty$). They are therefore sometimes referred to in the literature as the Khintchine-Kahane inequalities.
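In a finite dimensional example, the concentration inequality (4.10) can be checked exhaustively. In the sketch below, $X = \sum_i \varepsilon_i x_i$ in $\mathbb{R}^2$ with the Euclidean norm, so that $\sigma$ is the largest singular value of the matrix with rows $x_i$; the vectors are arbitrary test data:

```python
from itertools import product
from math import exp, hypot, sqrt

xs = [(1.0, 0.2), (0.4, -0.9), (-0.5, 0.7), (0.8, 0.3), (0.1, 1.0)]

# all 2^n values of ||sum_i eps_i x_i||
norms = sorted(
    hypot(sum(e * x[0] for e, x in zip(eps, xs)),
          sum(e * x[1] for e, x in zip(eps, xs)))
    for eps in product((-1, 1), repeat=len(xs)))
M = (norms[len(norms) // 2 - 1] + norms[len(norms) // 2]) / 2  # a median

# sigma = sup_{|u|=1} (sum_i <u, x_i>^2)^{1/2}: largest singular value,
# obtained here from the 2x2 eigenproblem of the Gram matrix.
a = sum(x[0] * x[0] for x in xs)
b = sum(x[0] * x[1] for x in xs)
c = sum(x[1] * x[1] for x in xs)
sigma = sqrt((a + c + sqrt((a - c) ** 2 + 4 * b * b)) / 2)

# concentration inequality (4.10) at a few levels t
for t in (0.5, 1.0, 2.0, 4.0):
    lhs = sum(abs(v - M) > t for v in norms) / len(norms)
    assert lhs <= 4 * exp(-t * t / (8 * sigma ** 2))
```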

For an almost surely convergent series $X = \sum_i \varepsilon_i x_i$ in $B$, it is very easy to see that the exponential square integrability result of Theorem 4.7 can be refined into $\mathbb{E}\exp\lambda\|X\|^2 < \infty$ for all $\lambda > 0$. Set indeed $X_N = \sum_{i=1}^N \varepsilon_i x_i$ for each $N$. $X_N$ converges to $X$ almost surely and in $L_2(B)$ by Theorem 4.7, so that in particular $\sigma(X - X_N) \to 0$. Let then $\lambda > 0$ and choose an integer $N$ such that the median of $\|X - X_N\|$ is less than 1 (say) and such that $8\lambda\sigma(X - X_N)^2 < 1$. It follows from (4.11) that for every $t > 0$,
$$\mathbb{P}\{\|X - X_N\| > t + 1\} \le 2\exp\big(-t^2/8\sigma(X - X_N)^2\big)\,.$$
Hence, by the choice of $N$, $\mathbb{E}\exp\lambda\|X - X_N\|^2 < \infty$, from which the claim follows since $\|X_N\|$ is bounded.

It is still true that the preceding observation holds in the more general setting of almost surely bounded Rademacher processes.

Theorem 4.8. Let $X$ be a vector valued Rademacher series as in Theorem 4.7. Then
$$\mathbb{E}\exp\lambda\|X\|^2 < \infty \quad\text{for all } \lambda > 0\,.$$

The proof is however somewhat more complicated and makes use of a variation on the converse subgaussian inequality (4.2). It relies on the following lemma.

Lemma 4.9. There is a numerical constant $K$ with the following property. Let $(\alpha_i)_{i\ge1}$ be a decreasing sequence of positive numbers such that $\sum_i \alpha_i^2 < \infty$. If $t \ge K(\sum_i \alpha_i^2)^{1/2}$, define $n$ to be the smallest integer such that $\sum_{i=1}^n \alpha_i > t$. Then
$$\mathbb{P}\Big\{\Big|\sum_i \varepsilon_i\alpha_i\Big| > t\Big\} \ge \frac12\exp\Big(-Kt^2\Big/\sum_{i\ge n}\alpha_i^2\Big)\,.$$

Proof. Let $\sigma = (\sum_{i\ge n}\alpha_i^2)^{1/2}$ and let $K \ge 1$ be the numerical constant of (4.2). We distinguish between two cases. If $n \le 2Kt^2/\sigma^2$, by definition of $n$,
$$\mathbb{P}\Big\{\sum_{i=1}^n \varepsilon_i\alpha_i > t\Big\} \ge 2^{-n} \ge \exp(-2Kt^2/\sigma^2)\,.$$
If $n > 2Kt^2/\sigma^2$, by definition of $n$ and since $\alpha_i \le t$ for all $i$,
$$\max_{i\ge n}\alpha_i = \alpha_n \le \frac1n\sum_{i=1}^n \alpha_i \le \frac{2t}{n} \le \frac{\sigma^2}{Kt}\,.$$
Therefore, since $K$ is the numerical constant of (4.2),
$$\mathbb{P}\Big\{\sum_{i\ge n}\varepsilon_i\alpha_i > t\Big\} \ge \exp(-Kt^2/\sigma^2)\,.$$
Lemma 4.9 then follows by Lévy's inequality (2.6), changing $K$ into $2K$.

To establish Theorem 4.8 we show a quantitative version of the result. Namely, for every $\lambda > 0$ there exists $\varepsilon = \varepsilon(\lambda) > 0$ such that if $M$ satisfies $\mathbb{P}\{\|X\| > M\} < \varepsilon$, then, for all $t > 0$,
$$(4.13)\qquad \mathbb{P}\{\|X\| > KM(t+1)\} \le 2\exp(-\lambda t^2)$$
for some numerical $K$.

If $\mathbb{P}\{\|X\| > M\} \le 1/8$, we know that $\sigma = \sigma(X) \le 2\sqrt2\,M$. We thus assume that $\varepsilon < 1/8$. Let then $M' = 2\sqrt2\,KM$ where $K \ge 1$ is the constant of Lemma 4.9. By this lemma (applied, for each $f$ in $D$, to the non-increasing rearrangement of $(|f(x_i)|)$ with $t = M'$), there exist sequences $(u_i(f))$ and $(v_i(f))$ such that $f(x_i) = u_i(f) + v_i(f)$ for all $i$, with the following properties:
$$\sum_i |u_i(f)| \le M'\,,\qquad \exp\Big(-KM'^2\Big/\sum_i v_i(f)^2\Big) \le 2\varepsilon\,.$$
Hence
$$\|X\| \le M' + \sup_{f\in D}\Big|\sum_i \varepsilon_i v_i(f)\Big|$$
and the Rademacher process $(\sum_i \varepsilon_i v_i(f))_{f\in D}$ satisfies the following properties:
$$\mathbb{P}\Big\{\sup_{f\in D}\Big|\sum_i \varepsilon_i v_i(f)\Big| > 2M'\Big\} < \varepsilon < \frac12$$
and
$$\sup_{f\in D}\sum_i v_i(f)^2 \le KM'^2\Big/\log\frac1{2\varepsilon}\,.$$
Given $\lambda > 0$, choose $\varepsilon = \varepsilon(\lambda) > 0$ small enough in order that $\lambda K(\log\frac1{2\varepsilon})^{-1}$ be smaller than $1/8$. Then, from (4.11), for all $t > 0$,
$$\mathbb{P}\Big\{\sup_{f\in D}\Big|\sum_i \varepsilon_i v_i(f)\Big| > M'(t+2)\Big\} \le 2\exp(-\lambda t^2)\,.$$
Hence
$$\mathbb{P}\{\|X\| > M'(t+3)\} \le 2\exp(-\lambda t^2)$$
which gives (4.13).

We can now easily conclude the proof of Theorem 4.8. For each $N$, set $Z_N = \sup_{f\in D}|\sum_{i>N}\varepsilon_i f(x_i)|$. $(Z_N)$ defines a reverse submartingale with respect to the family of $\sigma$-algebras generated by $\varepsilon_{N+1}, \varepsilon_{N+2}, \ldots$, $N \in \mathbb{N}$. Hence, since $\sup_N \mathbb{E}Z_N \le \mathbb{E}\|X\| < \infty$, $(Z_N)$ is almost surely convergent. Its limit is measurable with respect to the tail $\sigma$-algebra, and therefore degenerate. Hence, there exists $M < \infty$ such that for all $\varepsilon > 0$ one can find $N$ large enough such that $\mathbb{P}\{Z_N > M\} < \varepsilon$. Apply then (4.13) to $Z_N$ to easily conclude the proof of Theorem 4.8, since $\|\sum_{i=1}^N \varepsilon_i x_i\|$ is bounded.

Going back to the moment equivalences of Theorem 4.7, it is interesting to point out that these inequalities contain the corresponding ones for Gaussian averages. This can be seen from a simple argument involving the central limit theorem. Denote by $(\varepsilon_{ij})$ a doubly indexed Rademacher sequence. By Theorem 4.7, for any finite sequence $(x_i)$ in $B$, any $n$ and $0 < p, q < \infty$,
$$\Big\|\sum_i\Big(\frac1{\sqrt n}\sum_{j=1}^n \varepsilon_{ij}\Big)x_i\Big\|_p \le K_{p,q}\Big\|\sum_i\Big(\frac1{\sqrt n}\sum_{j=1}^n \varepsilon_{ij}\Big)x_i\Big\|_q\,.$$
When $n$ tends to infinity, $(\frac1{\sqrt n}\sum_{j=1}^n \varepsilon_{ij})$ converges in distribution to a standard normal. It follows that
$$\Big\|\sum_i g_i x_i\Big\|_p \le K_{p,q}\Big\|\sum_i g_i x_i\Big\|_q\,.$$
From the right order of magnitude of the constants $K_{p,q}$ as functions of $p$ and $q$ also follows the exponential square integrability of Gaussian random vectors (Lemma 3.7).

As yet another remark, we would like to outline a different approach to the conclusions of Theorem 4.7, based on the Gaussian isoperimetric inequality and contractions of Gaussian measures (see (1.7) in Chapter 1). Let $(u_i)$ denote a sequence of independent random variables uniformly distributed on $[-1,+1]$. As described by (1.7), there is an isoperimetric inequality of Gaussian type for these measures. As in the previous chapter (cf. (3.2)), we can then obtain, for example, that for any (finite) sequence $(x_i)$ in $B$ and any $t > 0$,
$$(4.14)\qquad \mathbb{P}\Big\{\Big\|\sum_i u_i x_i\Big\| > 2\,\mathbb{E}\Big\|\sum_i u_i x_i\Big\| + t\Big\} \le \exp(-t^2/2\sigma^2)$$

where $\sigma$ is as before, i.e. $\sigma = \sup_{f\in D}(\sum_i f^2(x_i))^{1/2}$. We now would like to apply the contraction principle in order to replace the sequence $(u_i)$ in (4.14) by the Rademacher sequence. To perform this we need to translate (4.14) into some moment inequalities, in order to be able to apply Theorem 4.4 and Lemma 4.5. We make use of the following easy equivalence (in the spirit of Lemma 3.7).

Lemma 4.10. Let $Z$ be a positive random variable and $a$, $b$ be positive numbers. The following are equivalent:
(i) there is a constant $K > 0$ such that for all $t > 0$,
$$\mathbb{P}\{Z > K(a + bt)\} \le K\exp(-t^2/K)\,;$$
(ii) there is a constant $K > 0$ such that for all $p \ge 1$,
$$\|Z\|_p \le K(a + b\sqrt p)\,.$$
Further, the constants in (i) and (ii) only depend on each other.

From (4.14) and this lemma it follows that for some numerical constant $K$,
$$\Big\|\sum_i u_i x_i\Big\|_p \le K\Big(\Big\|\sum_i u_i x_i\Big\|_1 + \sigma\sqrt p\Big)$$
for all $p \ge 1$. The contraction principle (Theorem 4.4 and Lemma 4.5) applies to give ($\mathbb{E}|u_i| = 1/2$)
$$\Big\|\sum_i \varepsilon_i x_i\Big\|_p \le 2K\Big(\Big\|\sum_i \varepsilon_i x_i\Big\|_1 + \sigma\sqrt p\Big)\,.$$
Going back to a tail estimate through Lemma 4.10, we get
$$(4.15)\qquad \mathbb{P}\Big\{\Big\|\sum_i \varepsilon_i x_i\Big\| > K\Big(\mathbb{E}\Big\|\sum_i \varepsilon_i x_i\Big\| + t\Big)\Big\} \le K\exp(-t^2/K\sigma^2)$$
for some numerical constant $K > 0$ and all $t > 0$. This inequality easily extends to infinite sums and bounded Rademacher processes (start with a norm given by a finite supremum and pass to the limit). It is possible to obtain from it all the conclusions of Theorem 4.7. The most important difference however is that (4.10) expresses a concentration property, while (4.15) is only a deviation inequality. However, each time (4.10) is used only as a deviation inequality (i.e. in the form of (4.11)), (4.15) can be used equivalently.

4.4. Integrability of Rademacher chaos

Let us consider the canonical representation of the Rademacher functions $(\varepsilon_i)$ as the coordinate maps on $\Omega = \{-1,+1\}^{\mathbb{N}}$ equipped with the natural product probability measure $\mu = \big(\frac12\delta_{-1} + \frac12\delta_{+1}\big)^{\otimes\mathbb{N}}$.

in the form of (4.15) an be used equivalently. 4. Integrability of Radema her haos Let us onsider the anoni al representation of the Radema her fun tions ("i ) as the oordinate maps on = f 1.10) is only used as a deviation inequality (i.109 ea h time (4. then (4. For Q any .e. +1gIN equipped with the natural produ t probability measure = ( 21 Æ 1 + 12 Æ+1 ) IN .11)).4.

de.nite subset A of IN .

= 1 ). A i2A IN . jAj < 1g de. It is known that the Walsh system fwA .ne wA = "i ( w.

For 0 " 1 . introdu e the operator T (") : L2 () ! L2 () de.nes an orthonormal basis of L2 () .

Regrouping. for 1 < p < q < 1 and " [(p 1)=(q 1)℄1=2 . T (") maps Lp () into Lq () with norm 1 . R P A fun tion f in L2 () an be written as f = wA fA where fA = fwA d and the sum runs over A all A IN . jAj < 1 . Sin e. as in the Gaussian ase. T (") is a onvolution operator. it extends to a positive ontra tion on all Lp () .ned by T (")wA = "jAjwA for A IN . Namely. Chaos of degree d are hara terized by the i= 6 j fa t that the a tion of the operator T (") is multipli ation by "d . One striking property of the operator T (") is the hyper ontra tivity property similar to the one observed for the Hermite semigroup in the pre eding hapter. i. jAj < 1 . Chaos of degree 1 are simply Radema her series "i i . [Be ℄). i P haos of degree 2 quadrati series of the type "i "j ij . It implies moreover the orresponding Gaussian hyper ontra tivity using the entral limit theorem ([Gro℄. et . 1 p < 1 . for all f in Lp () kT (")f kq kf kp : (4:16) This property an be dedu ed from a sharp two point inequality together with a onvolution argument. we an write f= 1 X d=0 0 X jAj=d 1 wA fA A = 1 X d=0 Qd f : P Qdf is alled the haos of degree or order d of f . that is T (")Qdf = "dQd f : .e.

this type of inequalities implies strong integrability properties of Qd f . As for Gaussian haos. with respe t to the Gaussian ase. we however would like to omplete these integrability results with some more pre ise tail estimates. 2 in a setting similar to the one developed in the Gaussian ase. We keep the setting of the pre eding paragraph with a Bana h spa e B for whi h there exists a ountable subset D of the unit ball of B 0 su h that kxk = sup jf (x)j for all x in B . The idea of this study will be to follow the approa h to Gaussian haos using isoperimetri methods in the form of Theorem 1. This approa h extends to haos with values in a Bana h spa e providing in parti ular a dierent proof of some of the results of the pre eding se tion for haos of order 1 . We brie y dis uss the general ase at the end of the se tion.16) with p = 2 . we have that kQdf kq (q 1)d=2kQdf k2 : As we know. However.110 Using (4. We say that a random variable f 2D X with values in B is a Radema her haos. For simpli ity we only deal with the haos of degree. q > 2 and " = (q 1) 1=2 .j zero in whi h ase the results are more omplete. Let thus X be a Radema her haos (of order 2 ) as just de. or order. re alling Lemma 3. some onvexity and de oupling results appear to be slightly more ompli ated here and we will try to detail some of these diÆ ulties. it follow indeed as in the Gaussian ase that IE exp jQd f j2=d < 1 for some (a tually all) > 0 . We assume for simpli ity that the diagonal terms xii are i.3. This is the obje t of what follows where we one more time make use of isoperimetri methods. if there is a sequen e (xij ) in B su h that P "i "j f (xij ) is almost surely onverent (or only in probability) for all f in D and su h that (f (X ))f 2D i.7.j P has the same distribution as ( "i "j f (xij ))f 2D . of order 2 thus.

ned. Our aim will be to try to .

These are similar to the ones P used in the Gaussian setting.nd estimates of the tail IPfkX k > tg in terms of some parameters of the distribution of X . Let us onsider .

rst the "de oupled" haos Y = ( "i "0j f (yij ))f 2D where ("0j ) i. for example 63 IPfkX k M g > : 64 . Let further M be a number su h that IPfkX k M g is large enough.j is an independent opy of ("i ) and yij = xij + xji .

Let also $m$ be such that
$$\mathbb{P}\Big\{\sup_{|h|\le1}\sup_{f\in D}\Big|\sum_{i,j}\varepsilon_i h_j f(y_{ij})\Big| \le m\Big\} > \frac{15}{16}$$
and set
$$\sigma = \sigma(X) = \sup_{|h|\le1}\sup_{f\in D}\Big|\sum_{i,j}h_i h_j f(x_{ij})\Big|\,.$$

To this aim we use a de oupling argument whi h will also be useful in the proof of the main result below. Let us .ned.

rst assume we deal with a norm k k given by a .

On e the right estimates are established we will simply need to in rease these supremum to the norm.nite supremum. For ea h N . for all N large enough (re all the norm i. If M 0 is su h that IPfkX k M 0 g > 127=128 . let us set further N P XN = "i "j xij .j =1 is a .

: : : . Let I f1.nite supremum) IPfkXN k M 0 g 127=128 . N g and let (i ) be de.

A. series.j 62I spa es ( . for some numeri al onstant K . IP0 ) respe tively. We laim that.e.j =1 8 < X IP "i "j yij : i2I. i = +1 if i 62 I .j 62J .j 62I 9 = 63 M . "i i "j j xij i.j 62I P as "i "0j yij . By symmetry IP and thus by dieren e 8 < kXN k : N X M. A0 . : : i2I.j 62I KM 0 2 : This a tually simply follows from Fubini's theorem and the integrability results for one-dimensional haos. we are now in a position to "de ouple" in the sense that "i "j yij has the same distribution i2I. Let us assume for larity that ("i ) and ("0j ) are onstru ted on dierent probability i2I. 64 9 = M . . 63 : 64 P Of ourse. IP) and ( 0 . Let indeed 9 9 8 8 < = = < X A = ! : IP0 "i (!)"0j yij M 0 7=8 : .ned as i = 1 if i 2 I . (4:17) 2 X IE "i "0j yij i2I. i.

17) sin e IP(A) 7=8 .7 applied to the sum in "0j implies that 2 X IE0 "i (!)"0j yij i2I. P i.j =1 KM 0 2 : From a Cau hy sequen e argument it then easily follows that for ea h f in D . from (4. But now.:::.j 62J KM 0 2 for some numeri al K .j =1 I f1.112 Then IP(A) 7=8 . IP0 .j 62J Therefore.j "i "0j f (yij ) is onvergent in L2 (and almost surely by Fubini). Hen e Y is well de. the same result applied to the sum in "i but in L2( 0 . B ) implies the announ ed laim (4. Theorem 4. For ! in A .N g i2I. We note then that 0 N X 1 X X 1 "i "0j yij A : "i "0j yij = 4 N 2 i. 2 N X IE "i "0j yij i.17).

ned and. in reasing the .

also satis.nite supremum to the norm.
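The identity used above rests on a counting fact: an off-diagonal pair $(i,j)$ satisfies $i \in I$, $j \notin I$ for exactly $2^{N-2}$ of the $2^N$ subsets $I$ of $\{1,\ldots,N\}$, whence the factor $4/2^N$. A quick exhaustive check with an arbitrary integer array having zero diagonal:

```python
from itertools import product

N = 4
z = [[(i + 1) * 10 + (j + 1) if i != j else 0 for j in range(N)]
     for i in range(N)]                        # arbitrary, zero diagonal

full = sum(z[i][j] for i in range(N) for j in range(N))

acc = 0
for mask in product((0, 1), repeat=N):         # mask[i] == 1 encodes i in I
    acc += sum(z[i][j] for i in range(N) for j in range(N)
               if mask[i] == 1 and mask[j] == 0)

# sum_{i != j} z_ij = (4 / 2^N) * sum_I sum_{i in I, j not in I} z_ij
assert 4 * acc == full * 2 ** N
```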

To control $m$ and $\sigma$, observe that, by independence and duality in $h$,
$$\mathbb{E}\sup_{|h|\le1}\sup_{f\in D}\Big|\sum_{i,j}\varepsilon_i h_j f(y_{ij})\Big|^2 \le \mathbb{E}\|Y\|^2\,,$$
while
$$\sigma \le \sup_{|h|,|k|\le1}\sup_{f\in D}\Big|\sum_{i,j}h_i k_j f(y_{ij})\Big| \le 4\sigma\,.$$
Further, by Theorem 4.7 again (although for a different norm), $m$ may be taken to be of the order of
$$\Big(\mathbb{E}\sup_{|h|\le1}\sup_{f\in D}\Big|\sum_{i,j}\varepsilon_i h_j f(y_{ij})\Big|^2\Big)^{1/2}\,,$$
and thus $\sigma \le Km \le K^2M'$ for some numerical constant $K$. In particular, moreover, we will be allowed to deal with finite sums in the estimates we are looking for, since the preceding scheme and these inequalities justify all the necessary approximations.

After having des ribed the parameters. we are now ready to state and prove the tail behavior of IPfkX k > tg where X is a haos as before.nite sums in the estimates we are looking for sin e the pre eding s heme and these inequalities justify all the ne essary approximations. . de oupling and approximation arguments we will use.

for every t > 0 . m.113 P Theorem 4. IPfkX k > 2(M + mt + t2 )g 20 exp( t2 =144) : Moreover. we an assume that we deal with a . Let X = ( "i "j f (xi ))f 2D be a Radema her haos with xii = 0 for all i and let i. N P Proof. As announ ed. IE exp kX k < 1 for all > 0 . Then. be its parameters as just des ribed.11.j M.

For simpli ity in the i2I. we thus assume for this step that yij = 0 if i 62 I or j 2 I .j =1 the tail estimate.nite sum X = "i "j xij for the proof of i. N g .N g i2I. the tail probability of k P "i "j yij k .j62I i j ij f1.j 62I notation. by de oupling and We .:::. Re all that. Re all that 0 1 X X 1 X =2 N ""y A: 2 I f1. : : : .

First estimate. Let, for $t>0$,

$$B=\Bigl\{\omega\,;\ \mathbb{P}'\Bigl\{\Bigl\|\sum_{i,j}\varepsilon_i(\omega)\varepsilon_j'y_{ij}\Bigr\|\le M\Bigr\}\ge\frac78\ \text{ and }\ \sup_{|h|\le1}\Bigl|\sum_{i,j}\varepsilon_i(\omega)h_jy_{ij}\Bigr|\le3m\Bigr\}\,,$$

so that, by Chebyshev's inequality and Fubini's theorem, $\mathbb{P}(B)\ge3/4$. If $\omega\in B$, we are in a position to apply the one-dimensional integrability results to the sum in $\varepsilon_j'$, since we control the corresponding parameters. It therefore follows from Theorem 1.3 that, for $t>0$,

$$\mathbb{P}(A_1)\ \ge\ \mathbb{P}(B)\bigl(1-2\exp(-t^2/8)\bigr)
\quad\text{where}\quad
A_1=\Bigl\{(\omega,\omega')\,;\ \Bigl\|\sum_{i,j}\varepsilon_i(\omega)\varepsilon_j'(\omega')y_{ij}\Bigr\|\le M+mt\Bigr\}\,.$$

Also from Theorem 1.3, using the simple fact that $\sup_{|h|\le1,\,|k|\le1}|\sum_{i,j}h_ik_jy_{ij}|\le m$, we see that, for $t>0$,

$$\mathbb{P}'\Bigl\{\sup_{|h|\le1}\Bigl|\sum_{i,j}h_i\varepsilon_j'y_{ij}\Bigr|>m+4t\Bigr\}\ \le\ 8\exp(-t^2/18)\,.$$

Let then

$$B'=\Bigl\{\omega'\,;\ \sup_{|h|\le1}\Bigl|\sum_{i,j}h_i\varepsilon_j'(\omega')y_{ij}\Bigr|\le m+4t\Bigr\}\,,$$

so that $\mathbb{P}'(B')\ge1-8\exp(-t^2/18)$. If $\omega'$ is in $B'$, we control the one-dimensional parameters in the summation with respect to $\varepsilon_i$; setting, for every $t>0$,

$$A=\Bigl\{(\omega,\omega')\,;\ \Bigl\|\sum_{i,j}\varepsilon_i(\omega)\varepsilon_j'(\omega')y_{ij}\Bigr\|\le M+2mt+4t^2\Bigr\}\,,$$

and since $M+2mt+4t^2=M+mt+(m+4t)t$, we have by Fubini's theorem that $\mathbb{P}\otimes\mathbb{P}'(A)\ge1-10\exp(-t^2/18)$. Summarizing, after a change of $t$ into $t/2$, we have obtained that, for every $I\subset\{1,\dots,N\}$ and every $t>0$,

$$(4.18)\qquad \mathbb{P}\Bigl\{\Bigl\|\sum_{i\in I,\,j\notin I}\varepsilon_i\varepsilon_jy_{ij}\Bigr\|>M+tm+t^2\Bigr\}\ \le\ 10\exp(-t^2/72)\,.$$

If we now recall that

$$\sum_{i,j=1}^N\varepsilon_i\varepsilon_jx_{ij}\ =\ \frac1{2^N}\sum_{I\subset\{1,\dots,N\}}2\sum_{i\in I,\,j\notin I}\varepsilon_i\varepsilon_jy_{ij}\,,$$

we are basically left to show that the preceding tail estimate is stable by convex combination. This is however easily proved. Let $u=u(t)=t^2+mt$ and denote by $t=t(u)$ the inverse function (on $\mathbb{R}_+$). The function $\psi(u)=\exp(t(u)^2/144)-1$ is convex increasing with $\psi(0)=0$ (elementary computation). By the preceding tail estimate (4.18) and integration by parts, for every $I$,

$$\mathbb{E}\,\psi\Bigl(\Bigl(\Bigl\|\sum_{i\in I,\,j\notin I}\varepsilon_i\varepsilon_jy_{ij}\Bigr\|-M\Bigr)_+\Bigr)\ \le\ 10\,,$$

hence by convexity

$$\mathbb{E}\,\psi\Bigl(\Bigl(\frac12\|X\|-M\Bigr)_+\Bigr)\ \le\ 10\,.$$

Thus, for every $t>0$,

$$\mathbb{P}\Bigl\{\Bigl(\frac12\|X\|-M\Bigr)_+>mt+t^2\Bigr\}\ \le\ \frac{10}{\exp(t^2/144)-1}\,,$$

from which the tail estimate of the theorem follows. Note that the somewhat unsatisfactory factor $2$ came in only at the very end, from the decoupling formula.

Concerning the exponential integrability, we already know that $\mathbb{E}\exp\alpha\|X\|<\infty$ for some $\alpha>0$. To establish that this holds for every $\alpha>0$, it clearly suffices, by the decoupling argument developed above (and Fatou's lemma), to show this integrability result for the decoupled chaos $Y$. Let

$$Z_N=\Bigl\|\sum_{i,j>N}\varepsilon_i\varepsilon_j'y_{ij}\Bigr\|\,.$$

By the reverse submartingale theorem, $Z_N$ converges almost surely to some degenerate distribution. Using then (4.18) and Fubini's theorem as in the previous proof of the tail estimate, together with the proof of Theorem 4.8 and integration by parts, we find that there exists $M>0$ such that for every $\delta>0$ one can find $N$ large enough with

$$\mathbb{P}\{Z_N>KM(t+1)\}\ \le\ K\exp(-t)\quad\text{for all }t>0\,,$$

$K$ numerical. By the one-dimensional integrability properties, the proof of Theorem 4.11 is then easily completed.
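The decoupling device used in this proof is easy to experiment with numerically. The sketch below (an illustrative 4×4 real coefficient matrix, with the absolute value playing the role of the norm; none of it is from the text) compares, by exact enumeration over all sign patterns, the first absolute moments of a chaos Σ_{i≠j} ε_iε_j a_ij and of its decoupled version Σ_{i≠j} ε_iε'_j a_ij.

```python
import itertools

def chaos_moment(a, decoupled):
    """Exact E|sum_{i!=j} e_i e'_j a_ij| by enumerating all sign patterns.

    With decoupled=False the second sign sequence equals the first one
    (ordinary chaos); with decoupled=True it ranges independently.
    """
    n = len(a)
    signs = list(itertools.product((-1, 1), repeat=n))
    total, count = 0.0, 0
    for e in signs:
        for e2 in (signs if decoupled else [e]):
            s = sum(e[i] * e2[j] * a[i][j]
                    for i in range(n) for j in range(n) if i != j)
            total += abs(s)
            count += 1
    return total / count

# illustrative coefficients (zero diagonal plays no role since i != j)
a = [[0, 1, 2, 0],
     [1, 0, 1, 3],
     [0, 2, 0, 1],
     [1, 1, 2, 0]]

m_coupled = chaos_moment(a, decoupled=False)
m_decoupled = chaos_moment(a, decoupled=True)
print(m_coupled, m_decoupled)
```

As the general decoupling inequalities predict, the two quantities are of the same order (they agree within universal constants).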

We conclude this part on Rademacher chaos with a few words on the case where the diagonal elements are non-zero. Assume we are given a finite sequence $(x_{ij})$ in a Banach space $B$. We just learned that there is a numerical constant $K$ such that for all $p\ge1$,

$$\Bigl\|\sum_{i\ne j}\varepsilon_i\varepsilon_jx_{ij}\Bigr\|_p\ \le\ Kp\,\Bigl\|\sum_{i\ne j}\varepsilon_i\varepsilon_jx_{ij}\Bigr\|_1$$

(cf. (4.7)). Denote by $(\varepsilon_i')$ an independent Rademacher sequence. Then, by independence and Jensen's inequality,

$$\Bigl\|\sum_{i\ne j}\varepsilon_i\varepsilon_jx_{ij}\Bigr\|_1
\ \le\ \Bigl\|\sum_{i\ne j}(\varepsilon_i\varepsilon_j-\varepsilon_i'\varepsilon_j')x_{ij}\Bigr\|_1
\ =\ \Bigl\|\sum_{i,j}(\varepsilon_i\varepsilon_j-\varepsilon_i'\varepsilon_j')x_{ij}\Bigr\|_1
\ \le\ 2\,\Bigl\|\sum_{i,j}\varepsilon_i\varepsilon_jx_{ij}\Bigr\|_1\,.$$

Therefore, by difference, since $\sum_ix_{ii}=\sum_{i,j}\varepsilon_i\varepsilon_jx_{ij}-\sum_{i\ne j}\varepsilon_i\varepsilon_jx_{ij}$,

$$\Bigl\|\sum_ix_{ii}\Bigr\|\ \le\ 3\,\Bigl\|\sum_{i,j}\varepsilon_i\varepsilon_jx_{ij}\Bigr\|_1\,.$$

Hence, for every $p\ge1$,

$$\Bigl\|\sum_{i,j}\varepsilon_i\varepsilon_jx_{ij}\Bigr\|_p\ \le\ (3+2Kp)\,\Bigl\|\sum_{i,j}\varepsilon_i\varepsilon_jx_{ij}\Bigr\|_1\,,$$

from which we deduce similar integrability properties for chaos with non-zero diagonal terms. We do not know however whether the tail estimate of Theorem 4.11 extends to this setting.

4.5. Comparison theorems

The norm of a (finite) Rademacher sum $\|\sum_{i=1}^N\varepsilon_ix_i\|$ with coefficients in a Banach space $B$ is the supremum of a Rademacher process. For the purposes of this study, one convenient representation is

$$\Bigl\|\sum_{i=1}^N\varepsilon_ix_i\Bigr\|=\sup_{t\in T}\Bigl|\sum_{i=1}^N\varepsilon_it_i\Bigr|\,,\qquad t=(t_1,\dots,t_N)\in T\,,$$

where $T$ is the (bounded) subset of $\mathbb R^N$ defined by $T=\{t=(f(x_i))_{i\le N}\,;\ f\in B'\,,\ \|f\|\le1\}$. We therefore present the results on comparison for Rademacher processes $\sum_{i=1}^N\varepsilon_it_i$ of this type.

We learned in the preceding chapter how Gaussian processes can be compared by means of their $L_2$-metrics. While this is not completely possible for Rademacher processes, one can however investigate analogs of some of the usual consequences of the Gaussian comparison theorems. More precisely, we establish a comparison theorem for Rademacher averages when coordinates are contracted, and we prove a version of Sudakov's minoration inequality in this context. Both results are due to the second author.

We start with the comparison theorem. A map $\varphi:\mathbb R\to\mathbb R$ is called a contraction when $|\varphi(s)-\varphi(t)|\le|s-t|$ for all $s,t\in\mathbb R$. If $h$ is a map on some set $T$, we set for simplicity (and with some abuse) $\|h(t)\|_T=\|h\|_T=\sup_{t\in T}|h(t)|$.

**Theorem 4.12.** Let $F:\mathbb R_+\to\mathbb R_+$ be convex and increasing. Let further $\varphi_i:\mathbb R\to\mathbb R$, $i\le N$, be contractions such that $\varphi_i(0)=0$. Then, for any bounded subset $T$ in $\mathbb R^N$,

$$\mathbb EF\Bigl(\frac12\Bigl\|\sum_{i=1}^N\varepsilon_i\varphi_i(t_i)\Bigr\|_T\Bigr)\ \le\ \mathbb EF\Bigl(\Bigl\|\sum_{i=1}^N\varepsilon_it_i\Bigr\|_T\Bigr)\,.$$

Before turning to the proof, note the following. The numerical constant $\frac12$ is optimal, as can be seen from the example of the subset $T$ of $\mathbb R^2$ consisting of the points $(1,1)$ and $(-1,-1)$, with $\varphi_1(x)=x$, $\varphi_2(x)=|x|$ and $F(x)=x$. One typical application of Theorem 4.12 is of course given when $\varphi_i(x)=|x|$ for all $i$. Another one, which will be useful in the sequel, is the following. If $(x_i)_{i\le N}$ are points in a Banach space, then

$$(4.19)\qquad \mathbb E\sup_{\|f\|\le1}\Bigl|\sum_{i=1}^N\varepsilon_if(x_i)^2\Bigr|\ \le\ 4\,\mathbb E\Bigl\|\sum_{i=1}^N\varepsilon_i\|x_i\|x_i\Bigr\|\,.$$

(Recall that by the contraction principle we can replace the right hand side by $4\max_{i\le N}\|x_i\|\,\mathbb E\|\sum_{i=1}^N\varepsilon_ix_i\|$.) To deduce (4.19) from the theorem, let simply $T$ be as before with $x_i$ replaced by $\|x_i\|x_i$, i.e. $T=\{t=(\|x_i\|f(x_i))_{i\le N}\,;\ \|f\|\le1\}$, and take $\varphi_i(s)=\min(s^2/2\|x_i\|^2\,,\,\|x_i\|^2/2)$, $s\in\mathbb R$; note that $|f(x_i)|\le\|x_i\|$, so that $\varphi_i(t_i)=f(x_i)^2/2$.

As we mentioned before, theorems like Theorem 4.12 for Gaussian averages follow from the Gaussian comparison properties. The Rademacher case involves independent (and conceptually simpler) proofs, to which we now turn.
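With $F(x)=x$, Theorem 4.12 can be sanity-checked by exact enumeration over the $2^N$ sign patterns for a small finite $T\subset\mathbb R^N$; the particular $T$ and contractions below are arbitrary illustrative choices, not from the text.

```python
import itertools, random

def rademacher_norm_mean(T, phi=None):
    """Exact E sup_{t in T} |sum_i eps_i phi_i(t_i)| over all sign patterns."""
    N = len(T[0])
    if phi is None:
        phi = [lambda s: s] * N   # identity: the right-hand side of the theorem
    patterns = list(itertools.product((-1, 1), repeat=N))
    total = 0.0
    for eps in patterns:
        total += max(abs(sum(e * f(ti) for e, f, ti in zip(eps, phi, t)))
                     for t in T)
    return total / len(patterns)

random.seed(0)
N = 6
T = [tuple(random.uniform(-1, 1) for _ in range(N)) for _ in range(8)]
# contractions with phi_i(0) = 0: absolute value, and a clipped identity
phi = [abs if i % 2 == 0 else (lambda s: max(-0.5, min(0.5, s)))
       for i in range(N)]

lhs = rademacher_norm_mean(T, phi)   # E || sum eps_i phi_i(t_i) ||_T
rhs = rademacher_norm_mean(T)        # E || sum eps_i t_i ||_T
assert 0.5 * lhs <= rhs + 1e-12      # the factor-1/2 comparison of Theorem 4.12
```

The assertion holds for every choice of finite $T$ and contractions vanishing at $0$; the enumeration makes the two expectations exact rather than Monte Carlo estimates.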

Proof of Theorem 4.12. We first show that if $G:\mathbb R\to\mathbb R$ is convex and increasing, then

$$(4.20)\qquad \mathbb EG\Bigl(\sup_{t\in T}\sum_{i=1}^N\varepsilon_i\varphi_i(t_i)\Bigr)\ \le\ \mathbb EG\Bigl(\sup_{t\in T}\sum_{i=1}^N\varepsilon_it_i\Bigr)\,.$$

By conditioning and iteration, it suffices to show that if $T$ is a subset of $\mathbb R^2$ and $\varphi$ a contraction on $\mathbb R$ such that $\varphi(0)=0$, then

$$\mathbb EG\bigl(\sup_{t\in T}(t_1+\varepsilon_2\varphi(t_2))\bigr)\ \le\ \mathbb EG\bigl(\sup_{t\in T}(t_1+\varepsilon_2t_2)\bigr)\qquad (t=(t_1,t_2))\,.$$

Choosing $t$ and $s$ in $T$ (almost) achieving the two suprema on the left, the left-hand side is

$$I=\frac12G(t_1+\varphi(t_2))+\frac12G(s_1-\varphi(s_2))\,,$$

while the right-hand side is always larger than $\frac12G(t_1+t_2)+\frac12G(s_1-s_2)$. We may assume that

$$(*)\ \ t_1+\varphi(t_2)\ge s_1+\varphi(s_2)\qquad\text{and}\qquad(**)\ \ s_1-\varphi(s_2)\ge t_1-\varphi(t_2)\,.$$

We distinguish between the following cases.

1st case: $s_2\ge0$, $t_2\ge0$. Assume to begin with that $s_2\le t_2$. We show that

$$(4.21)\qquad 2I\ \le\ G(t_1+t_2)+G(s_1-s_2)\,.$$

Set $a=s_1-\varphi(s_2)$, $b=s_1-s_2$, $a'=t_1+t_2$, $b'=t_1+\varphi(t_2)$, so that we would like to prove that $G(a)-G(b)\le G(a')-G(b')$. Since $\varphi$ is a contraction with $\varphi(0)=0$ and $s_2\ge0$, we have $|\varphi(s_2)|\le s_2$, hence $a-b=s_2-\varphi(s_2)\ge0$ and $a\ge b$. Further, again by contraction and since $s_2\le t_2$,

$$a-b=s_2-\varphi(s_2)\ \le\ t_2-\varphi(t_2)=a'-b'\,.$$

Moreover, by $(*)$ and $\varphi(s_2)\ge-s_2$, we get $b'=t_1+\varphi(t_2)\ge s_1+\varphi(s_2)\ge s_1-s_2=b$. Since $G$ is convex and increasing, setting $x=a-b\ge0$ and using $b\le b'$,

$$G(a)-G(b)\ \le\ G(b'+(a-b))-G(b')\,.$$

Using that $b'+a-b\le a'$ then yields the announced claim (4.21).

2nd case: $s_2\ge0$, $t_2\ge0$, $t_2\le s_2$. The argument is similar, changing $s$ into $t$ and $\varphi$ into $-\varphi$.

3rd case: $s_2\le0$, $t_2\ge0$. Since $\varphi(t_2)\le t_2$ and $-\varphi(s_2)\le-s_2$, we have directly that $2I\le G(t_1+t_2)+G(s_1-s_2)$ and the result follows.

4th case: $s_2\le0$, $t_2\le0$. It is completely similar to the preceding one.

This completes the proof of (4.20). We now conclude the proof of the theorem. By convexity, and since $(-\varepsilon_i)$ has the same distribution as $(\varepsilon_i)$,

$$\mathbb EF\Bigl(\frac12\Bigl\|\sum_{i=1}^N\varepsilon_i\varphi_i(t_i)\Bigr\|_T\Bigr)
\ \le\ \frac12\,\mathbb EF\Bigl(\Bigl(\sup_{t\in T}\sum_{i=1}^N\varepsilon_i\varphi_i(t_i)\Bigr)^+\Bigr)
+\frac12\,\mathbb EF\Bigl(\Bigl(\sup_{t\in T}\Bigl(-\sum_{i=1}^N\varepsilon_i\varphi_i(t_i)\Bigr)\Bigr)^+\Bigr)
\ =\ \mathbb EF\Bigl(\Bigl(\sup_{t\in T}\sum_{i=1}^N\varepsilon_i\varphi_i(t_i)\Bigr)^+\Bigr)\,.$$

Applying (4.20) to $G=F((\,\cdot\,)^+)$, which is convex and increasing on $\mathbb R$, then immediately yields the conclusion. The proof of Theorem 4.12 is complete.

After comparison properties, we describe in the last part of this chapter a version of the Sudakov minoration inequality (Theorem 3.18) for Rademacher processes. If $T$ is a (bounded) subset of $\mathbb R^N$, set

$$r(T)=\mathbb E\sup_{t\in T}\Bigl|\sum_{i=1}^N\varepsilon_it_i\Bigr|\,.$$

Denote by $d_2(s,t)=|s-t|$ the Euclidean metric on $\mathbb R^N$, and recall that $N(T,d_2,\varepsilon)$ is the minimal number of (open) balls of radius $\varepsilon>0$ in the metric $d_2$ sufficient to cover $T$. Equivalently, $N(T,d_2,\varepsilon)=N(T,\varepsilon B_2)$,

where we denote by $N(A,B)$ the minimal number of translates of $B$ by elements of $A$ necessary to cover $A$, and where $B_2$ is the Euclidean unit ball (open). (We do not specify later whether the balls are open or closed, since this distinction clearly does not affect the various results.) The next result is the main step in the proof of Sudakov's minoration inequality for Rademacher processes.

**Proposition 4.13.** There is a numerical constant $K$ such that, for any $\varepsilon>0$, if $T$ is a subset of $\mathbb R^N$ such that $\max_{i\le N}|t_i|\le\varepsilon^2/Kr(T)$ for any $t\in T$, then

$$\varepsilon(\log N(T,d_2,\varepsilon))^{1/2}\ \le\ Kr(T)\,.$$

Proof. As an intermediary step, we first show that when $T\subset B_2$ and $\max_{i\le N}|t_i|\le1/Kr(T)$ for all $t\in T$, then

$$(4.23)\qquad \bigl(\log N(T,\tfrac12B_2)\bigr)^{1/2}\ \le\ Kr(T)$$

where $K$ is a numerical constant. Let $g$ be a standard normal variable. For $s>0$, set $h=gI_{\{|g|>s\}}$. The first simple observation of this proof is that, whenever $\lambda\le s/4$,

$$(4.24)\qquad \mathbb E\exp\lambda h\ \le\ 1+16\lambda^2\exp(-s^2/32)\ \le\ \exp\bigl[16\lambda^2\exp(-s^2/32)\bigr]\,.$$

For a proof, consider for example $f(\lambda)=\mathbb E(\exp\lambda h)$. Since $f(0)=1$ and $f'(0)=0$, it will be sufficient to check that $f''(\lambda)\le32\exp(-s^2/32)$ whenever $\lambda\le s/4$. By definition of $h$ and a change of variables,

$$f''(\lambda)=\mathbb E(h^2\exp\lambda h)=\int_{|x|>s}x^2e^{\lambda x}e^{-x^2/2}\,\frac{dx}{\sqrt{2\pi}}
\ =\ e^{\lambda^2/2}\int_{|x+\lambda|>s}(x+\lambda)^2e^{-x^2/2}\,\frac{dx}{\sqrt{2\pi}}\,.$$

When $0\le\lambda\le s/4$ and $s<|x+\lambda|$, we have $|x|\ge s/2$ and $|x+\lambda|\le|x|+s/2\le2|x|$, so that

$$\mathbb E(h^2\exp\lambda h)\ \le\ 4e^{\lambda^2/2}\int_{|x|>s/2}x^2e^{-x^2/2}\,\frac{dx}{\sqrt{2\pi}}
\ \le\ 32\exp\Bigl(\frac{\lambda^2}2-\frac{s^2}{16}\Bigr)\ \le\ 32\exp\Bigl(-\frac{s^2}{32}\Bigr)$$

(the negative $\lambda$'s being treated symmetrically),

which gives (4.24).

Let us now show (4.23). There is nothing to prove if $T\subset\frac12B_2$, so that we may assume that there is an element $t$ of $T$ with $|t|>1/2$; then $r(T)\ge1/2\sqrt2$. By definition of $N(T,\frac12B_2)$, there exists a subset $U$ of $T$ of cardinality $N(T,\frac12B_2)$ such that $d_2(u,v)\ge1/2$ whenever $u,v$ are distinct elements of $U$. Let us then consider the Gaussian process $(\sum_{i=1}^Ng_iu_i)_{u\in U}$, where $(g_i)$ is an orthogaussian sequence. As a consequence of Sudakov's minoration inequality (Theorem 3.18) and of the Gaussian integrability properties (Corollary 3.2), there is a numerical constant $K'\ge1$ such that

$$\mathbb P\Bigl\{\sup_{u\in U}\Bigl|\sum_{i=1}^Ng_iu_i\Bigr|>(K')^{-1}(\log\operatorname{Card}U)^{1/2}\Bigr\}\ \ge\ \frac12\,.$$

We use a kind of a priori estimate argument. Let $K=(100K')^2$ and assume that $\max_{i\le N}|t_i|\le1/Kr(T)$ for every $t\in T$. We claim that whenever $\lambda\ge1$ satisfies $(\log\operatorname{Card}U)^{1/2}\le\lambda$, then

$$\mathbb P\Bigl\{\sup_{u\in U}\Bigl|\sum_{i=1}^Ng_iu_i\Bigr|>\frac K{2K'}\,r(T)\Bigr\}\ <\ \frac12\,.$$

Intersecting the probabilities, this forces $(K')^{-1}(\log\operatorname{Card}U)^{1/2}\le(K/2K')\,r(T)$, that is $(\log\operatorname{Card}U)^{1/2}\le Kr(T)$, which is the conclusion (4.23). (If $(\log\operatorname{Card}U)^{1/2}$ were larger than the admissible $\lambda$'s, one would first apply the claim to a subset $U'$ of $U$ with $\log\operatorname{Card}U'\le\lambda^2$, reaching the same contradiction; this is the a priori part of the argument.)

For all $i$, set $h_i=g_iI_{\{|g_i|>s\}}$ and $k_i=g_i-h_i$, so that $|k_i|\le s$. By the triangle inequality, and since $U\subset T\subset B_2$,

$$\mathbb P\Bigl\{\sup_{u\in U}\Bigl|\sum_{i=1}^Ng_iu_i\Bigr|>\frac K{2K'}r(T)\Bigr\}
\ \le\ \mathbb P\Bigl\{\sup_{u\in U}\Bigl|\sum_{i=1}^Nk_iu_i\Bigr|>\frac K{4K'}r(T)\Bigr\}
+\mathbb P\Bigl\{\sup_{u\in U}\Bigl|\sum_{i=1}^Nh_iu_i\Bigr|>\frac K{4K'}r(T)\Bigr\}\,.$$

The first term is controlled by the contraction principle since $|k_i|\le s$. For the second, since the coordinates of the elements of $U$ are bounded by $1/Kr(T)$, (4.24) and Chebyshev's exponential inequality show that, for any $\lambda\le sKr(T)/4$,

$$\mathbb P\Bigl\{\sup_{u\in U}\Bigl|\sum_{i=1}^Nh_iu_i\Bigr|>\frac K{4K'}r(T)\Bigr\}
\ \le\ 2\operatorname{Card}U\exp\Bigl[-\lambda\frac K{4K'}r(T)+16\lambda^2\exp\Bigl(-\frac{s^2}{32}\Bigr)\Bigr]
\ \le\ 2\exp\Bigl[\lambda^2-\lambda\frac K{4K'}r(T)+16\lambda^2\exp\Bigl(-\frac{s^2}{32}\Bigr)\Bigr]\,.$$

Let then $s=K/10K'$ and $\lambda=K^2r(T)/40K'$ (so that indeed $\lambda=sKr(T)/4$). If we recall that $\lambda\ge1$, $r(T)\ge1/2\sqrt2$ and $K=(100K')^2\ge10^4$, an elementary computation shows that the preceding probability is made less than $1/2$, which was the announced claim.

To reach the full conclusion of the proposition, we use a simple iteration procedure. For each $t$ and $\delta>0$, denote by $B_2(t,\delta)$ the Euclidean ball with center $t$ and radius $\delta$. Let $\varepsilon>0$ and let $k$ be an integer such that $2^{-k}<\varepsilon\le2^{-k+1}$. Then $N(T,d_2,\varepsilon)=N(T,\varepsilon B_2)\le N(T,2^{-k}B_2)$, and clearly

$$N(T,2^{-k}B_2)\ \le\ \prod_{\ell\le k}\,\sup_{t\in T}\,N\bigl(T\cap B_2(t,2^{-\ell+1}),2^{-\ell}B_2\bigr)\,.$$

By homogeneity, (4.23) tells us that

$$N\bigl(T\cap B_2(t,2^{-\ell+1}),2^{-\ell}B_2\bigr)\ \le\ \exp\bigl(K2^{2\ell-2}r(T)^2\bigr)\,.$$

Hence

$$N(T,d_2,\varepsilon)\ \le\ \exp\Bigl(K\sum_{\ell\le k}2^{2\ell-2}r(T)^2\Bigr)\ \le\ \exp\Bigl(4K\,\frac{r(T)^2}{\varepsilon^2}\Bigr)\,.$$

Proposition 4.13 is therefore established.
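For a finite $T$, the covering numbers $N(T,d_2,\varepsilon)$ appearing in Proposition 4.13 can be estimated by the standard greedy device: a maximal $\varepsilon$-separated subset of $T$ is automatically an $\varepsilon$-net. The code below is only an illustration on random points (not the book's construction).

```python
import math, random

def greedy_net(T, eps):
    """Greedy maximal eps-separated subset of T.

    Every point of T lies within eps of some selected point, so the
    returned set is an eps-net and its size upper-bounds N(T, d2, eps).
    """
    net = []
    for t in T:
        if all(math.dist(t, u) >= eps for u in net):
            net.append(t)
    return net

random.seed(1)
T = [tuple(random.uniform(-1, 1) for _ in range(5)) for _ in range(200)]
# doubling scales: fewer balls are needed as the radius grows
sizes = [len(greedy_net(T, eps)) for eps in (0.25, 0.5, 1.0, 2.0, 8.0)]
print(sizes)
```

Since the diameter of $[-1,1]^5$ is $2\sqrt5<8$, the coarsest scale needs a single ball; the net sizes decrease as the radius doubles, mirroring the behavior of $N(T,d_2,\varepsilon)$ in the iteration argument above.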

The previous proposition yields a first version, for Rademacher processes, of Sudakov's minoration, which however involves a factor depending on the dimension. It can be stated as follows.

**Corollary 4.14.** Let $T$ be a subset of $\mathbb R^N$. Then, for all $\varepsilon>0$,

$$\varepsilon(\log N(T,d_2,\varepsilon))^{1/2}\ \le\ Kr(T)\Bigl(\log\Bigl(2+\frac{\sqrt N\,\varepsilon}{r(T)}\Bigr)\Bigr)^{1/2}$$

where $K$ is some numerical constant.

Proof. As before, let us first assume that $T\subset B_2$ and estimate $N(T,\frac12B_2)$. Denote by $K_1$ the numerical constant in Proposition 4.13 and by $B_\infty$ the unit ball for the sup-norm in $\mathbb R^N$. We can write that

$$N\Bigl(T,\frac12B_2\Bigr)\ \le\ N\Bigl(T,\frac1{K_1r(T)}B_\infty\Bigr)\,N\Bigl((T-T)\cap\frac2{K_1r(T)}B_\infty,\frac12B_2\Bigr)
\ \le\ N\Bigl(B_2,\frac1{K_1r(T)}B_\infty\Bigr)\,N\Bigl((T-T)\cap\frac2{K_1r(T)}B_\infty,\frac12B_2\Bigr)\,.$$

It is known (see [Schu], Theorem 1) that

$$\Bigl(\log N\Bigl(B_2,\frac1{K_1r(T)}B_\infty\Bigr)\Bigr)^{1/2}\ \le\ K_2\,r(T)\Bigl(\log\Bigl(2+\frac{\sqrt N}{r(T)}\Bigr)\Bigr)^{1/2}$$

where $K_2$ is numerical (it can be assumed that $r(T)$ is bounded below). The second factor is estimated by Proposition 4.13, since the coordinates of the elements of $(T-T)\cap(2/K_1r(T))B_\infty$ are small enough. Combining with Proposition 4.13, we get that for some numerical $K_3$,

$$\Bigl(\log N\Bigl(T,\frac12B_2\Bigr)\Bigr)^{1/2}\ \le\ K_3\,r(T)\Bigl(\log\Bigl(2+\frac{\sqrt N}{r(T)}\Bigr)\Bigr)^{1/2}\,.$$

We can then use an iteration argument similar to the one used in Proposition 4.13 to obtain the inequality of the corollary. The proof is complete.

We now turn to another version of Sudakov's minoration. The example of $T$ consisting of the canonical basis of $\mathbb R^N$, for which clearly $r(T)=1$ and $N(T,\frac12B_2)=N$, indicates that Sudakov's minoration for Gaussians cannot extend literally to Rademachers. This suggests the possibility of some interpolation and of a minoration involving both $B_2$ and $B_\infty$, the unit balls of $\ell_2^N$ and $\ell_\infty^N$ respectively. This is the conclusion of the next statement.

**Theorem 4.15.** Let $T$ be a (bounded) subset of $\mathbb R^N$ and let $r(T)=\mathbb E\sup_{t\in T}|\sum_{i=1}^N\varepsilon_it_i|$. There exists a numerical constant $K$ such that if $\varepsilon>0$ and if $D=Kr(T)B_\infty+\varepsilon B_2$, then

$$\varepsilon(\log N(T,D))^{1/2}\ \le\ Kr(T)$$

where we recall that $N(T,D)$ is the minimal number of translates of $D$ by elements of $T$ necessary to cover $T$.

The idea of the proof is to use Proposition 4.13 by changing the strange balls $D$ into $\ell_2$-balls for another $T$.

Proof. Let $K_1$ be the constant of the conclusion of Proposition 4.13 and set $K=3K_1$, which we would like to fit the statement of Theorem 4.15. Set $a=\varepsilon^2/Kr(T)$ and let $M$ be an integer bigger than $r(T)/a$. Define a map $\varphi$ in the following way:

$$\varphi:[-Ma,+Ma]\to\mathbb R^{[-M,M]}\,,\qquad u\mapsto(\varphi(u)_j)_{-M\le j\le M}\,,$$

where $\varphi(u)_j$ is defined according to the following rule: if $u\in[0,Ma]$ and $k=k(u)$ is the integer part of $u/a$, we set

$$\varphi(u)_j=a\ \text{ for }1\le j\le k\,,\qquad\varphi(u)_{k+1}=u-ka\,,\qquad\varphi(u)_j=0\ \text{ for all other values of }j\,;$$

if $u\in[-Ma,0]$, we let $\varphi(u)_j=-\varphi(-u)_{-j}$. We mention some elementary properties of $\varphi$. First, for every $u,u'$ in $[-Ma,Ma]$,

$$(4.25)\qquad \sum_{j=-M}^M|\varphi(u)_j-\varphi(u')_j|=|u-u'|\,.$$

Another elementary property is the following. Suppose we are given $u,u'$ in $[-Ma,Ma]$ and assume $u'<u$. Define $v'\le v$ in the following way: if $u\ge0$, we let $v=k(u)a$, and if $u<0$, $v=(k(u)-1)a$; and $v'=(k(u')+1)a$ or $v'=k(u')a$ according to whether $u'\ge0$ or $u'<0$. We then have that

$$(4.26)\qquad \sum_{j=-M}^M|\varphi(u)_j-\varphi(u')_j|^2\ =\ |u-v|^2+|u'-v'|^2+a|v-v'|\,.$$

Once these properties have been observed, let

$$\Phi:T\to(\mathbb R^{[-M,M]})^N\,,\qquad t=(t_i)_{i\le N}\mapsto\bigl((\varphi(t_i)_j)_{-M\le j\le M}\bigr)_{i\le N}\,.$$

$\Phi$ is of course well defined since, if $t\in T$, then $|t_i|\le r(T)\le Ma$ for every $i=1,\dots,N$. Consider now a doubly indexed Rademacher sequence $(\varepsilon_{ij})$ and another one $(\varepsilon_i')$, which we assume to be independent. Then, by symmetry,

$$r(\Phi(T))=\mathbb E\sup_{t\in T}\Bigl|\sum_{i=1}^N\sum_{j=-M}^M\varepsilon_{ij}\varphi(t_i)_j\Bigr|
=\mathbb E\sup_{t\in T}\Bigl|\sum_{i=1}^N\varepsilon_i'\Bigl(\sum_{j=-M}^M\varepsilon_{ij}\varphi(t_i)_j\Bigr)\Bigr|\,.$$

Now, by (4.25), for every choice of $(\varepsilon_{ij})$, every $i$ and every $t,t'$ in $T$,

$$\Bigl|\sum_{j=-M}^M\varepsilon_{ij}\varphi(t_i)_j-\sum_{j=-M}^M\varepsilon_{ij}\varphi(t_i')_j\Bigr|
\ \le\ \sum_{j=-M}^M|\varphi(t_i)_j-\varphi(t_i')_j|\ =\ |t_i-t_i'|\,.$$

This means that, with respect to $(\varepsilon_i')$, we are in a position to apply the comparison Theorem 4.12, from which we get that $r(\Phi(T))\le3r(T)$. Now, by construction, every coordinate of an element of $\Phi(T)$ satisfies

$$|\varphi(t_i)_j|\ \le\ a\ =\ \frac{\varepsilon^2}{3K_1r(T)}\ \le\ \frac{\varepsilon^2}{K_1r(\Phi(T))}\,.$$

Hence, by Proposition 4.13 applied to $\Phi(T)$ in $(\mathbb R^{[-M,M]})^N$,

$$\varepsilon(\log N(\Phi(T),\varepsilon B_2))^{1/2}\ \le\ K_1r(\Phi(T))\ \le\ Kr(T)\,.$$

Now this implies the conclusion; indeed, if $t,t'$ are such that $|\Phi(t)-\Phi(t')|\le\varepsilon$, then, by (4.26),

$$t\ \in\ t'+Kr(T)B_\infty+\varepsilon B_2\,.$$

Hence $N(T,D)\le N(\Phi(T),\varepsilon B_2)$, and the proof of Theorem 4.15 is therefore complete.

We conclude this chapter with a remark on tensorization of Rademacher series.
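The staircase map φ used in the proof above is easy to implement, and its ℓ¹-isometry property (4.25) can be checked numerically. The sketch below follows the rule just stated (the positive part of u fills coordinates 1..k with a, negative u is handled by the stated symmetry); the parameters a and M are arbitrary.

```python
import math

def phi(u, a, M):
    """Coordinates (phi(u)_j)_{-M <= j <= M} of the staircase map, as a dict."""
    coords = {j: 0.0 for j in range(-M, M + 1)}
    if u < 0:
        neg = phi(-u, a, M)                 # phi(u)_j = -phi(-u)_{-j}
        for j in range(-M, M + 1):
            coords[j] = -neg[-j]
        return coords
    k = int(u // a)                          # integer part of u/a
    for j in range(1, k + 1):
        coords[j] = a
    if k + 1 <= M:
        coords[k + 1] = u - k * a
    return coords

def l1_dist(u, v, a, M):
    cu, cv = phi(u, a, M), phi(v, a, M)
    return sum(abs(cu[j] - cv[j]) for j in range(-M, M + 1))

a, M = 0.25, 20     # requires |u| <= M*a = 5 for the inputs below
for u, v in [(1.3, 0.4), (2.0, -1.7), (-0.9, -0.1), (0.0, 3.1)]:
    assert math.isclose(l1_dist(u, v, a, M), abs(u - v), abs_tol=1e-9)
```

The assertion is exactly (4.25): the map sends the interval $[-Ma,Ma]$ isometrically into $\ell_1^{[-M,M]}$, which is what makes the comparison with Theorem 4.12 possible.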

As in the Gaussian case (and we follow here the notations introduced in Chapter 3), we ask ourselves if, given $(x_i)$ and $(y_i)$ in Banach spaces $E$ and $F$ respectively such that $\sum_i\varepsilon_ix_i$ and $\sum_i\varepsilon_iy_i$ are both almost surely convergent, the double sum $\sum_{i,j}\varepsilon_{ij}x_i\otimes y_j$ is also almost surely convergent in the injective tensor product $E\check\otimes F$. For a large class of Banach spaces (which will be described in Chapter 9 as the spaces having a finite cotype), convergence of Rademacher series $\sum_i\varepsilon_ix_i$ and of the corresponding Gaussian series $\sum_ig_ix_i$ are equivalent. Therefore, if $E$ and $F$ have this property, according to Theorem 3.20, the answer to the preceding question is yes. What we would like to briefly point out here is that this is not the case in general.

Let $(x_i)_{i\le N}$ (resp. $(y_i)_{i\le N}$) be a finite sequence in $E$ (resp. $F$) and set $X=\sum_{i=1}^N\varepsilon_ix_i$ (resp. $Y=\sum_{i=1}^N\varepsilon_iy_i$). Recall $\sigma(X)=\sup_{\|f\|\le1}(\sum_{i=1}^Nf^2(x_i))^{1/2}$ (resp. $\sigma(Y)=\sup_{\|f\|\le1}(\sum_{i=1}^Nf^2(y_i))^{1/2}$). Then, for the Rademacher average $\sum_{i,j=1}^N\varepsilon_{ij}x_i\otimes y_j$ in $E\check\otimes F$, we have

$$(4.27)\qquad \mathbb E\Bigl\|\sum_{i,j=1}^N\varepsilon_{ij}x_i\otimes y_j\Bigr\|\ \le\ K(\log(N+1))^{1/2}\bigl(\sigma(X)\mathbb E\|Y\|+\sigma(Y)\mathbb E\|X\|\bigr)$$

where $K$ is numerical. This inequality is an immediate consequence of Theorem 3.20 and (4.8). The point of this observation is that (4.27) is best possible in general. To see it, let $E=\ell_\infty^N$ and $F=\mathbb R$, let $x_i$, $i=1,\dots,N$, be the elements of the canonical basis, and let $y_i=N^{-1/2}$. Then clearly $\sigma(X)=\sigma(Y)=1$, $\mathbb E\|X\|=1$ and $\mathbb E\|Y\|\le1$. However, by definition of the tensor product norm,

$$\mathbb E\Bigl\|\sum_{i,j=1}^N\varepsilon_{ij}x_i\otimes y_j\Bigr\|=\mathbb E\max_{i\le N}\frac1{\sqrt N}\Bigl|\sum_{j=1}^N\varepsilon_{ij}\Bigr|\,,$$

and this quantity turns out to be of the order of $(\log N)^{1/2}$. Indeed, by (4.2), for some numerical $K>0$,

$$(4.28)\qquad \mathbb P\Bigl\{\frac1{\sqrt N}\Bigl|\sum_{i=1}^N\varepsilon_i\Bigr|>K^{-1}(\log N)^{1/2}\Bigr\}\ \ge\ \frac1N$$

(at least for all $N$ large enough). But then

$$\mathbb E\max_{i\le N}\frac1{\sqrt N}\Bigl|\sum_{j=1}^N\varepsilon_{ij}\Bigr|
\ \ge\ \int_0^{K^{-1}(\log N)^{1/2}}\mathbb P\Bigl\{\max_{i\le N}\frac1{\sqrt N}\Bigl|\sum_{j=1}^N\varepsilon_{ij}\Bigr|>t\Bigr\}dt$$
$$=\ \int_0^{K^{-1}(\log N)^{1/2}}\Bigl[1-\Bigl(1-\mathbb P\Bigl\{\frac1{\sqrt N}\Bigl|\sum_{j=1}^N\varepsilon_j\Bigr|>t\Bigr\}\Bigr)^N\Bigr]dt
\ \ge\ \Bigl(1-\frac1e\Bigr)K^{-1}(\log N)^{1/2}\,,$$

which proves our claim.

Notes and references

The name of Bernoulli is historically more appropriate for a random variable taking the values $\pm1$ with equal probability. Strictly speaking, the Rademacher sequence is the sequence on $[0,1]$ defined by $r_i(t)=\operatorname{sign}(\sin(2^i\pi t))$ ($i\ge1$). We decided to use the terminology of Rademacher sequence since it is the commonly used one in the field as well as in the Geometry of Banach spaces.

The best constants in the (real) Khintchine inequalities were obtained by U. Haagerup [Ha]; see [Sz] for the case $p=1$ in (4.3). Lemma 4.2 is in the spirit of the Paley-Zygmund inequality (cf. [Ka1]). Lemma 4.3 has been observed in [R-S] and [Pi10] and used in Probability in Banach spaces in [M-P2] (cf. [Ka1], Chapter 13). The contraction principle has been discovered by J.-P. Kahane [Ka1]; some further extensions have been obtained by J. Hoffmann-Jorgensen [HJ1], [HJ2], [HJ3]. Lemma 4.6 is taken from [J-M2].

In [Ka1] (first edition), J.-P. Kahane showed that an almost surely convergent Rademacher series $X=\sum_i\varepsilon_ix_i$ with coefficients in a Banach space satisfies $\mathbb E\exp\alpha\|X\|<\infty$ for all $\alpha>0$ and has all its moments equivalent. S. Kwapien [Kw3] improved this integrability to $\mathbb E\exp\alpha\|X\|^2<\infty$ for some $\alpha>0$ (Theorem 4.7). The proof of this result presented here is different and is based on isoperimetry; it uses Lemma 4.5 and Theorem 4.9, which was noticed independently in [MS2]. The hypercontractivity inequality (4.16) was established by L. Gross (as a logarithmic Sobolev inequality) and W. Beckner (as a two point inequality); its interest for the integrability of Rademacher chaos was pointed out by C. Borell [Bo5]. Complete details may be found in [Pi4], where the early contribution of A. Bonami [Bon] is pointed out. Theorem 4.8 on bounded Rademacher processes is perhaps new. The decoupling argument used in the proof of Theorem 4.11 is inspired from [B-T1]; general results on the decoupling tool may be found in [K-S], [MC-T1], [MC-T2], [Kw4], [Zi3].

The comparison Theorem 4.12 is due to the second named author and first appeared in [L-T4] (the proof presented here being simpler). Proposition 4.13 and Theorem 4.15 are recent results of the second author, while Corollary 4.14 is essentially in [C-P] (see also [Pa] for some earlier related results). Theorem 4.15 is in particular applied in [Ta18]. (4.27) and the fact that it is best possible belong to the folklore.

Chapter 5. Stable random variables

5.1. Representation of stable random variables
5.2. Integrability and tail behavior
5.3. Comparison theorems
Notes and references

After Gaussian variables and Rademacher series, we investigate in this chapter another important class of random variables and vectors, namely stable random variables. Stable random variables appear as fundamental in Probability Theory and, as will be seen later, also play a role in structure theorems of Banach spaces. The literature is rather extensive on this topic and we only concentrate here on the parts of the theory which will be of interest and use to us in the sequel; in particular, we do not attempt to study stable measures in the natural, more general setting of infinitely divisible distributions, and we refer to [Ar-G2] and [Li] for such a study. We only concentrate here on the aspects of stable distributions analogous to those developed in the preceding chapters on Gaussian and Rademacher variables. In particular, our study is based on a most useful representation of stable random variables detailed in the first paragraph. The second one examines integrability properties and tail behavior of norms of infinite dimensional stable random variables. Finally, the last section is devoted to some comparison theorems.

We recall that $L_{p,\infty}=L_{p,\infty}(\Omega,\mathcal A,\mathbb P)$ denotes the space of all real random variables $X$ on $(\Omega,\mathcal A,\mathbb P)$ such that

$$\|X\|_{p,\infty}=\Bigl(\sup_{t>0}t^p\,\mathbb P\{|X|>t\}\Bigr)^{1/p}<\infty\,.$$

For $0<p<\infty$, $\|\cdot\|_{p,\infty}$ is only a quasi-norm, but it is equivalent to a norm when $p>1$: take for example

$$(5.1)\qquad N_p(X)=\sup\Bigl\{\mathbb P(A)^{-1/q}\int_A|X|\,d\mathbb P\,;\ A\in\mathcal A\,,\ \mathbb P(A)>0\Bigr\}\,,$$

where $q=p/(p-1)$ is the conjugate of $p$, for which we have that, for all $X$,

$$\|X\|_{p,\infty}\ \le\ N_p(X)\ \le\ q\,\|X\|_{p,\infty}\,.$$

A random variable $X$ in $L_p$ is of course in $L_{p,\infty}$, satisfying even $\lim_{t\to\infty}t^p\,\mathbb P\{|X|>t\}=0$; conversely, the space of all random variables having this limit $0$ is the closure in the $L_{p,\infty}$-norm of the step random variables. Recall also the comparisons with the $L_r$-norms (integration by parts): for every $r>p$ and every $X$,

$$\|X\|_{p,\infty}\le\|X\|_p\qquad\text{and}\qquad \|X\|_p\le\Bigl(\frac r{r-p}\Bigr)^{1/p}\|X\|_{r,\infty}\,.$$

Finally, if $B$ is a Banach space, we denote by $L_{p,\infty}(B)$ the space of all random variables $X$ with values in $B$ such that $\|X\|\in L_{p,\infty}$.

Let $0<p\le2$. A real valued random variable $X$ is called $p$-stable if, for some $\sigma\ge0$, its Fourier transform is of the form

$$\mathbb E\exp(itX)=\exp(-\sigma^p|t|^p/2)\,,\qquad t\in\mathbb R\,.$$

(As in the Gaussian case, we only consider symmetric stable random variables.)
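The comparison with the $L_r$-norms quoted above follows from a one-line integration by parts; as a quick check, writing $a=\|X\|_{r,\infty}$ with $r>p$ and using $\mathbb P\{|X|>t\}\le\min(1,a^r/t^r)$:

```latex
\mathbb{E}|X|^p \;=\; \int_0^\infty p\,t^{p-1}\,\mathbb{P}\{|X|>t\}\,dt
\;\le\; \int_0^a p\,t^{p-1}\,dt \;+\; a^r\!\int_a^\infty p\,t^{p-1-r}\,dt
\;=\; a^p\Bigl(1+\frac{p}{r-p}\Bigr) \;=\; \frac{r}{r-p}\,\|X\|_{r,\infty}^p\,,
```

so that $\|X\|_p\le(r/(r-p))^{1/p}\|X\|_{r,\infty}$, while $\|X\|_{p,\infty}\le\|X\|_p$ is simply Chebyshev's inequality.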

$\sigma=\sigma_p=\sigma_p(X)$ is called the parameter of the stable random variable $X$ with index $p$. If $\sigma=1$, $X$ is called standard. A $2$-stable random variable with parameter $\sigma$ is just Gaussian with variance $\sigma^2$. Further, when we speak of a standard $p$-stable sequence $(\theta_i)$, we always mean a sequence of independent standard $p$-stable random variables $\theta_i$ (the index $p$ will be clear from the context).

Despite the analogy in the definition, the case $p=2$, corresponding to Gaussian variables, and the case $0<p<2$ present some quite important differences. For example, while stable distributions have densities, these cannot be easily expressed in general. Further, if Gaussian variables have exponential moments, a non-zero $p$-stable random variable $X$, $0<p<2$, is not even in $L_p$. It can however be shown that $\|X\|_{p,\infty}<\infty$ and actually that

$$(5.2)\qquad \lim_{t\to\infty}t^p\,\mathbb P\{|X|>t\}=c_p\sigma^p$$

where $\sigma$ is the parameter of $X$ and $c_p>0$ only depends on $p$ (cf. e.g. [Fe1]). $X$ has therefore moments of order $r$ for every $r<p$, and $\|X\|_r=\gamma_{p,r}\sigma$ where $\gamma_{p,r}$ depends on $p$ and $r$ only, which is analogous to what we learned in the Gaussian case but with a smaller spectrum in $r$.

Stable random variables are characterized by their fundamental "stability" property (from which they draw their name): if $(\theta_i)$ is a standard $p$-stable sequence, then, for any finite sequence $(\alpha_i)$ of real numbers, $\sum_i\alpha_i\theta_i$ has the same distribution as $(\sum_i|\alpha_i|^p)^{1/p}\theta_1$. In particular, for any $r<p$,

$$\Bigl\|\sum_i\alpha_i\theta_i\Bigr\|_r=\gamma_{p,r}\Bigl(\sum_i|\alpha_i|^p\Bigr)^{1/p}\,,$$

so that the span in $L_r$, $r<p$, of $(\theta_i)$ is isometric to $\ell_p$. This property is of fundamental interest in the study of $\ell_p^n$-subspaces of Banach spaces (cf. Chapter 9).

In this order of ideas and among the consequences of (5.2), we would like to note further that if $(\theta_i)$ is a standard $p$-stable sequence with $0<p<2$ and $(\alpha_i)$ a sequence of real numbers such that $\sup_i|\alpha_i\theta_i|<\infty$ almost surely, then $\sum_i|\alpha_i|^p<\infty$. Indeed, by the Borel-Cantelli lemma (cf. Lemma 2.6), for some $M'>0$,

$$\sum_i\mathbb P\{|\alpha_i\theta_i|>M'\}<\infty\,.$$

It already follows that $(\alpha_i)$ is bounded, i.e. $\sup_i|\alpha_i|\le M''$ for some $M''$. By (5.2), there exists $t_0$ such that for all $t\ge t_0$,

$$\mathbb P\{|\theta_1|>t\}\ \ge\ \frac{c_p}{2t^p}\,.$$

Hence, letting $M=\max(M',t_0M'')$, we have that

$$\infty\ >\ \sum_i\mathbb P\{|\alpha_i\theta_i|>M\}\ \ge\ \frac{c_p}{2M^p}\sum_i|\alpha_i|^p\,.$$

This kind of result is of course completely different in the case $p=2$, for which we recall (see (3.7)), as an example and for the matter of comparison, that if $(g_i)$ is an orthogaussian sequence, then

$$\limsup_{i\to\infty}\frac{|g_i|}{(2\log(i+1))^{1/2}}=1\qquad\text{almost surely}\,.$$

A random variable $X=(X_1,\dots,X_N)$ with values in $\mathbb R^N$ is $p$-stable if each linear combination $\sum_{i=1}^N\alpha_iX_i$ is a real $p$-stable variable. Similarly, a Radon random variable $X$ with values in a Banach space $B$ is $p$-stable if $f(X)$ is $p$-stable for every $f$ in $B'$. A random process $X=(X_t)_{t\in T}$ indexed by a set $T$ is called $p$-stable if, for every $t_1,\dots,t_N$ in $T$, $(X_{t_1},\dots,X_{t_N})$ is a $p$-stable random vector. By their very definition, all these $p$-stable random vectors satisfy the fundamental and characteristic stability property of stable distributions: $X$ is $p$-stable if and only if, whenever $X_i$ are independent copies of $X$, $\sum_i\alpha_iX_i$ has the same distribution as $(\sum_i|\alpha_i|^p)^{1/p}X$ for every finite sequence $(\alpha_i)$ of real numbers.

5.1. Representation of stable random variables

It will almost always be assumed in the sequel that $0<p<2$; the case $p=2$, corresponding to Gaussian variables, was investigated previously. $p$-stable ($0<p<2$), finite or infinite dimensional, random variables can be given (in distribution) a series representation. This representation is a most valuable tool in the study of stable random variables, and we use it almost automatically each time we deal with stable distributions. Indeed, it almost allows one to think of stable variables as sums of nicely behaved independent random variables, for which a large variety of tools is available. This representation can be thought of as some central limit theorem with stable limits, and we will actually have the opportunity to verify this observation in the sequel.

To introduce this representation, we first investigate the scalar case. We need some notations, which will always have the same meaning throughout the book. Let $(\lambda_i)$ be independent random variables with common exponential distribution $\mathbb P\{\lambda_i>t\}=e^{-t}$, $t\ge0$. Set

$$\Gamma_j=\sum_{i=1}^j\lambda_i\,,\qquad j\ge1\,.$$

The sequence $(\Gamma_j)_{j\ge1}$ defines the successive times of the jumps of a standard Poisson process (cf. [Fe1]). As is easy to see,

$$\mathbb P\{\Gamma_j\le t\}=\int_0^t\frac{x^{j-1}}{(j-1)!}\,e^{-x}\,dx\,,\qquad j\ge1\,,\ t\ge0\,.$$

In particular, for all $0<p<\infty$,

$$\mathbb P\{\Gamma_j^{-1/p}>t\}\ \le\ \frac1{j!\,t^{pj}}\,,\qquad t>0\,.$$

It already follows that

$$(5.3)\qquad \lim_{t\to\infty}t^p\,\mathbb P\{\Gamma_1^{-1/p}>t\}=1\,,$$

while, for $j\ge2$, $\lim_{t\to\infty}t^p\,\mathbb P\{\Gamma_j^{-1/p}>t\}=0$ and

$$(5.4)\qquad \|\Gamma_j^{-1/p}\|_p<\infty$$

(actually $\|\Gamma_j^{-1/p}\|_r<\infty$ for any $r<pj$). By the strong law of large numbers, $\Gamma_j/j\to1$ almost surely. A powerful method will be to replace $\Gamma_j$ by $j$, which is non-random. We already quote at this stage a first observation: for any $\alpha>0$ and $j>\alpha$,

$$(5.5)\qquad \mathbb E(\Gamma_j^{-\alpha})=\frac{\Gamma(j-\alpha)}{\Gamma(j)}\ \sim\ j^{-\alpha}\quad(j\to\infty)\,,$$

as can easily be seen from Stirling's formula ($\Gamma(\cdot)$ is the gamma function). Denote further by $(\eta_j)$ independent copies of a random variable $\eta$, assumed to be independent from the sequence $(\Gamma_j)$. Provided with these easy observations, the series representation of $p$-stable random variables can be formulated as follows.

**Theorem 5.1.** Let $0<p<2$ and let $\eta$ be a symmetric real random variable such that $\mathbb E|\eta|^p<\infty$. Then the almost surely convergent series

$$X=\sum_{j=1}^\infty\Gamma_j^{-1/p}\eta_j$$

defines a $p$-stable random variable with parameter $\sigma=c_p^{-1/p}\|\eta\|_p$ (where $c_p$ has been introduced in (5.2)).
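For $p=1$ and $\eta$ a Rademacher sign, Theorem 5.1 is straightforward to simulate: $\Gamma_j$ is a running sum of independent standard exponentials, and $X=\sum_j\Gamma_j^{-1}\eta_j$ is then $1$-stable (Cauchy). The sketch below truncates the series at an arbitrary $J$ (for $p=1$ the remainder $\sum_{j>J}\Gamma_j^{-1}\eta_j$ is negligible) and uses arbitrary sample sizes.

```python
import random, statistics

def lepage_sample(p, J, rng):
    """One draw of the truncated series sum_{j<=J} Gamma_j^{-1/p} eta_j,
    with eta_j Rademacher signs (cf. Theorem 5.1)."""
    gamma, x = 0.0, 0.0
    for _ in range(J):
        gamma += rng.expovariate(1.0)        # successive Poisson jump times
        x += rng.choice((-1, 1)) * gamma ** (-1.0 / p)
    return x

rng = random.Random(42)
sample = [lepage_sample(1.0, 1000, rng) for _ in range(3000)]
med = statistics.median([abs(x) for x in sample])
print(med)
```

Since $\lim_{t\to\infty}t\,\mathbb P\{|X|>t\}=\mathbb E|\eta|=1$ here (this is exactly the limit identified in the proof of Theorem 5.1), $X$ is a symmetric Cauchy variable of scale $\pi/2$, so the sample median of $|X|$ should come out near $\pi/2\approx1.57$.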

133 Proof. Let us .

rst onvin e ourselves that the sum de.

we prove a little bit more than ne essary but whi h will be useful later in this proof. Let us show indeed that N X 1=p lim sup j j j0 !1 N j0 j =j0 (5:6) p.ning X is almost surely onvergent. To this aim.1 If this is satis.

2 j0 N . Sin e ( j 1=p j ) is a symmetri sequen e. and thus in probability. the series onverges almost surely by It^o-Nisio's theorem.2. 8. An alternate proof making use of some of the material introdu ed later in this hapter is given at the end of Se tion 5. r < p .ed.6). the sum 1 P j =1 1=p j = 0: is in parti ular onvergent in all Lr . For every t > 0 . Let us thus establish (5.

.

9 .

N <.

.

X = .

1=p .

IP .

.

> t j j .

:.

. .

jj j > tj 1=p g X j j0 IE( j 2=p j2 Ifjj jtj1=p g ) : 1 IPfjj > tj 1=p g p IE(jjp Ifjj>tj01=p g ) t j j0 X while.1 "p + C 0 a2 "2 p j0(2=p) 1 + C 0 IE(jjp Ifjj>ag ) + (C 0 + 1)IE(jjp Ifjj>"j01=p g ) : Sin e IEjjp < 1 . a > 0 and j0 large enough. C 0 . jj j > tj 1=p g + t12 IPf9j j0 . provided j0 is large enough. j =j0 Clearly IPf9j j0 . by (5. 0 1 1 X C 2 X 2=p 2=p 2 IE( I ) IE jj j Ifjjjp =tp g A 1 =p fj j tj g j j j t2 jj0 t2 j j0 0 Ct2 (2=p1 ) 1 IE(jj2 Ifjjtj01=p g ) + C 0 vertp IE(jjp Ifjj>tj01=p g ) : j0 The on lusion now follows: for every " > 0 . for some onstants C. we have obtained that p N X 1=p sup j j N j0 j =j0 p. we an let j0 tend to in.5).

nity. then a also. In order to establish the theorem. we show that X satis. and then " to 0 to get the on lusion.

es the hara teristi property of p -stable random variables. for all real numbers 1 . 1 X1 + 2 X2 has the same distribution as (j1 jp + j2 jp )1=p X : . namely that if X1 and X2 are independent opies of X . 2 .

j 1 . i = 1. Consider the non-de reasing rearrangement f j . 2 are independent opies of f( = ji jp . j 1g orresponds then to the sequen e of the su essive times of the jumps of the pro ess N 1 + N 2 . 2 . j =1 j )j 1 . 2g . Hen e f j . The sequen e f ji =ai . We have therefore the following equalities in distribution: Write Xi = 1 X1 + 2 X2 = 1 X j =1 1 X 1=p ( j ) 1=p j = (a1 + a2 )1=p j = (j1 jp + j2 jp )1=p X : j j =1 Hen e X is p -stable. (j )j 1 g . 2 . i = 1. j 1g has the same distribution as the sequen e f j =(a1 + a2 ) .134 1 P 1=p ji ji . where f( ji )j1 . (ji )j1 g for i = 1. j 1g orresponds to the su essive times of the jumps of a Poisson pro ess N i of parameter ai . The . j 1g of the ountable set f ji =ai . But N 1 + N 2 is a Poisson pro ess of parameter a1 + a2 . It is easily seen that f j . j 1g . Set ai i = 1.

The final step of the proof consists in showing that $X$ has parameter $c_p^{-1}\|\eta\|_p$. To this aim we identify the limit (5.2) and make use of the fact established in the first part of this proof. We have indeed from (5.6) and (5.4) that
$$\lim_{t\to\infty}t^p\,\mathbb{P}\Big\{\Big|\sum_{j=2}^{\infty}\Gamma_j^{-1/p}\eta_j\Big|>t\Big\} = 0\,.$$
Now from (5.3) and independence, we see that
$$\lim_{t\to\infty}t^p\,\mathbb{P}\big\{\Gamma_1^{-1/p}|\eta_1|>t\big\} = \mathbb{E}|\eta|^p\,.$$
Hence combining these observations yields
$$\lim_{t\to\infty}t^p\,\mathbb{P}\{|X|>t\} = \mathbb{E}|\eta|^p$$
and by comparison with (5.2) we indeed get that $X$ has parameter $c_p^{-1}\|\eta\|_p$. The proof of Theorem 5.1 is complete.

After the scalar case we now attack the case of infinite dimensional stable random vectors and processes.

The key tool in this investigation is the concept of spectral measure. The spectral measure of a stable variable arises from the general theory of Lévy-Khintchine representations of infinitely divisible distributions. We do not follow this approach here but rather outline, for the modest purposes of our study, a somewhat weaker but simpler description of spectral measures of stable distributions in infinite dimension. It will cover the applications we have in mind and explains the flavor of the result. We refer to [Ar-G2] and [Li] for the more general infinitely divisible theory.

We state and prove the existence of a spectral measure of a stable distribution in the context of a random process $X=(X_t)_{t\in T}$ indexed by a countable set $T$, in order to avoid measurability questions. This is anyway the basic result from which the necessary corollaries are easily deduced.

Theorem 5.2. Let $0<p<2$ and let $X=(X_t)_{t\in T}$ be a $p$-stable process indexed by a countable set $T$. There exists a positive finite measure $m$ on $\mathbb{R}^T$ (equipped with its cylindrical $\sigma$-algebra) such that for every finite sequence $(\alpha_j)$ of real numbers
$$\mathbb{E}\exp\Big(i\sum_j\alpha_jX_{t_j}\Big) = \exp\Big(-\frac{1}{2}\int_{\mathbb{R}^T}\Big|\sum_j\alpha_jx_{t_j}\Big|^p\,dm(x)\Big)\,.$$
$m$ is called a spectral measure of $X$ (it is not necessarily unique).

Proof. Before the proof, let us just mention that in the case $p=2$ we can simply take for $m$ the distribution of the Gaussian process $X$.

In a first step, assume that $T$ is a finite set $\{t_1,\dots,t_N\}$. Recall that if $\theta$ is real $p$-stable with parameter $\sigma$, and $r<p$, then $\|\theta\|_r=c_{p,r}\,\sigma$. It follows that for every $\alpha=(\alpha_1,\dots,\alpha_N)$ in $\mathbb{R}^N$,
$$\mathbb{E}\exp\Big(i\sum_{j=1}^{N}\alpha_jX_{t_j}\Big) = \exp\Big(-\frac{1}{2}\,\sigma(\alpha)^p\Big) = \exp\Big[-\frac{1}{2}\,c_{p,r}^{-p}\Big(\mathbb{E}\Big|\sum_{j=1}^{N}\alpha_jX_{t_j}\Big|^r\Big)^{p/r}\Big]$$
where $\sigma(\alpha)$ denotes the parameter of $\sum_{j=1}^{N}\alpha_jX_{t_j}$. For every $r<p$, define then a positive finite measure $m_r$ on the unit sphere $S$ for the sup norm $\|\cdot\|$ on $\mathbb{R}^N$ by setting, for every bounded measurable function $\varphi$ on $S$,
$$\int_S\varphi(y)\,dm_r(y) = c_{p,r}^{-r}\int_{\mathbb{R}^N}\varphi\Big(\frac{x}{\|x\|}\Big)\|x\|^r\,d\mathbb{P}_X(x)$$
where $\mathbb{P}_X$ is the law of $X=(X_{t_1},\dots,X_{t_N})$. Hence, for any $\alpha=(\alpha_1,\dots,\alpha_N)$ in $\mathbb{R}^N$,
$$\mathbb{E}\exp\Big(i\sum_{j=1}^{N}\alpha_jX_{t_j}\Big) = \exp\Big[-\frac{1}{2}\Big(\int_S\Big|\sum_{j=1}^{N}\alpha_jx_j\Big|^r\,dm_r(x)\Big)^{p/r}\Big]\,.$$
Now, the total mass $|m_r|$ of $m_r$ is easily seen to be majorized by
$$|m_r| \le \Big(\inf_{x\in S}\sum_{j=1}^{N}|x_j|^r\Big)^{-1}\sum_{j=1}^{N}\sigma(e_j)^r$$
where $e_j$, $1\le j\le N$, are the unit vectors of $\mathbb{R}^N$. Therefore $\sup_{r<p}|m_r|<\infty$. Let then $m$ be a cluster point (in the weak-star sense) of $(m_r)_{r<p}$; $m$ is a positive finite measure which is clearly a spectral measure of $X$. This proves the theorem in this finite dimensional case.

nite dimensional ase. : : : g . there exists a sequen e (at )t2T of positive numbers su h that if Zt = at Xt . Assume now that T = ft1 . the p -stable pro ess Z = (Zt )t2T satis. Indeed. It is not diÆ ult to see that we may assume the stable pro ess X be almost surely bounded. t2 . by the integrability property (5.2) and the BorelCantelli lemma. if this is not the ase.

de.es sup jZt j < 1 almost surely. If we an then onstru t a spe tral measure m0 t2T on IRT for Z .

ne m in su h a way that for any bounded measurable fun tion ' on IRT depending only on .

nitely many oordinates. Then the positive . Z '(x)dm(x) = T IR Z IR ' T xt at dm0 (x) where x = (xt ) 2 IRT .

We therefore assume that X is almost surely bounded. the pre eding .nite measure m on IRT is a spe tral measure for the p -stable pro ess X . For ea h N .

1 indi ates that (Xt1 . : : : . XtN ) has the same distribution as 1 X 1=p p jmN j1=p "j YjN : j j =1 Our next step is to show that sup jmN j < 1 . ("j ) being assumed independent. we an N hoose to this aim a . Theorem 5. then. : : : . the sequen es (YjN ) . If we re all the sequen e ( j ) of the representation and if we let ("j ) denote a Radema her sequen e.nite dimensional step provides us with a spe tral measure mN on entrated on the unit sphere of `N 1 of the N N random ve tor (Xt1 . Denote by (Yj ) independent random variables distributed like mN =jmN j . Sin e X is assumed to be almost surely bounded. ( j ) . XtN ) in IR .

1 2 2IPfsup jXt j > ug 2IPfmax jX j > ug iN ti t2T IPf p jmN j1=p 1 1=p > ug : .7) and the t2T pre eding representations. By Levy's inequality (2. for every N .nite number u su h that IPfsup jXt j > ug 1=4 . it follows that.

m is immediately seen to ful.137 We dedu e that jmN j p p up and thus that sup jmN j < 1 sin e u has been hosen independently of N N . if m denotes then a luster point (in the weak-star sense) of the bounded sequen e (mN ) of positive measures. As before.
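The representation used in this proof is easy to simulate. The following Python sketch is an illustration and not part of the text: it generates the jump times $\Gamma_j$ as partial sums of independent standard exponential variables and forms a truncated scalar series $\sum_{j\le J}\Gamma_j^{-1/p}\varepsilon_j$ with Rademacher signs $\varepsilon_j$; the normalizing constant $c_p$ is omitted since it only rescales the parameter, and the truncation level and seed are arbitrary choices.

```python
import random

def truncated_stable_series(p, n_terms=5000, seed=1):
    # Gamma_j: successive jump times of a standard Poisson process,
    # i.e. partial sums of independent standard exponentials.
    # eps_j: Rademacher signs. Returns the truncated sum
    # sum_{j <= n_terms} Gamma_j^(-1/p) * eps_j (constant c_p omitted),
    # together with Gamma_J / J, which the strong law drives to 1.
    rng = random.Random(seed)
    gamma, total = 0.0, 0.0
    for _ in range(n_terms):
        gamma += rng.expovariate(1.0)
        sign = 1.0 if rng.random() < 0.5 else -1.0
        total += sign * gamma ** (-1.0 / p)
    return total, gamma / n_terms

value, ratio = truncated_stable_series(1.5)
print(value, ratio)
```

The second returned quantity illustrates the law of large numbers $\Gamma_j/j\to1$ that underlies the comparison of $(\Gamma_j)$ with $(j)$ made below.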

As an immediate corollary to Theorems 5.1 and 5.2 we can now state the following:

Corollary 5.3. Let $0<p<2$ and let $X=(X_t)_{t\in T}$ be a $p$-stable random process indexed by a countable set $T$ with spectral measure $m$. Let $(Y_j)$ be a sequence of independent random variables distributed like $m/|m|$ (in $\mathbb{R}^T$). Let further $(\eta_j)$ be real independent symmetric random variables with the same law as $\eta$ where $\mathbb{E}|\eta|^p<\infty$, and assume the sequences $(Y_j)$, $(\eta_j)$, $(\Gamma_j)$ to be independent. Then the random process $X=(X_t)_{t\in T}$ has the same distribution as
$$\Big(c_p\|\eta\|_p^{-1}|m|^{1/p}\sum_{j=1}^{\infty}\Gamma_j^{-1/p}\eta_jY_j(t)\Big)_{t\in T}\,.$$

It is in fact convenient in many problems to work with a spectral measure concentrated on the unit sphere (or only the ball) of $\ell_\infty(T)$. To this aim, observe that if $m$ is a spectral measure of $X$ satisfying $\int\|x\|^p\,dm(x)<\infty$, where we have denoted by $\|\cdot\|$ the $\ell_\infty(T)$-norm, let $m_1$ be the image of the measure $\|x\|^p\,dm(x)$ by the map $x\to x/\|x\|$. Then $m_1$ is a spectral measure of $X$ concentrated on the unit sphere of $\ell_\infty(T)$ with total mass
$$|m_1| = \int\|x\|^p\,dm(x)\,.$$

Remark 5.4. If $m$ is a spectral measure of an almost surely bounded $p$-stable process $X=(X_t)_{t\in T}$, then necessarily $\int\|x\|^p\,dm(x)<\infty$. This can be seen for example from the preceding representation: by Lévy's inequalities applied conditionally on $(\Gamma_j)$, we must have that $\sup_{j\ge1}\|Y_j\|/j^{1/p}<\infty$ almost surely. Now recall that from the strong law of large numbers $\Gamma_j/j\to1$ with probability one, and therefore we also have that $\sup_{j\ge1}\|Y_j\|/\Gamma_j^{1/p}<\infty$. The claim thus follows from the Borel-Cantelli lemma and the fact that the independent random variables $Y_j$ are distributed like $m/|m|$. Actually, a close inspection of the proof of Theorem 5.2 shows that for a bounded process we directly constructed a spectral measure concentrated on the unit ball of $\ell_\infty(T)$.

It can be shown that a symmetric spectral measure on the unit sphere of $\ell_\infty(T)$ is unique; this actually follows from (5.10) below (at least in the Radon case). This uniqueness is however rather irrelevant for our purposes. By analogy with the scalar case, we can define the parameter of $X$ by
$$(5.7)\qquad \sigma_p(X) = |m_1|^{1/p} = \Big(\int\|x\|^p\,dm(x)\Big)^{1/p}\,.$$
Note that by uniqueness of $m_1$, $\sigma_p(X)$ is well defined (cf. also (5.11)).

The terminology of parameter extends the real case. This parameter plays sometimes roles analogous to the $\sigma$'s encountered in the study of Gaussian and Rademacher variables; it is however quite different in nature, as will become apparent later on. We now inspect the consequences of the preceding results in the case of a $p$-stable Radon random variable in a Banach space.

Corollary 5.5. Let $X$ be a $p$-stable ($0<p<2$) Radon random variable with values in a Banach space $B$. Then there exists a positive finite Radon measure $m$ on $B$ satisfying $\int\|x\|^p\,dm(x)<\infty$ such that for every $f$ in $B'$
$$\mathbb{E}\exp\big(if(X)\big) = \exp\Big(-\frac{1}{2}\int_B|f(x)|^p\,dm(x)\Big)\,.$$
Further, if $(Y_j)$ is a sequence of independent random variables distributed like $m/|m|$, and $(\eta_j)$ a sequence of real symmetric random variables with the same law as $\eta$ where $\mathbb{E}|\eta|^p<\infty$, the sequences $(\Gamma_j)$, $(\eta_j)$, $(Y_j)$ being assumed independent of each other, the series
$$c_p\|\eta\|_p^{-1}|m|^{1/p}\sum_{j=1}^{\infty}\Gamma_j^{-1/p}\eta_jY_j$$
converges almost surely in $B$ and is distributed as $X$.

Proof. We may and do assume $B$ to be separable. Let $D$ be countable and weakly dense in the unit ball of $B'$. By Corollary 5.3, there exists a positive finite measure $M$ on the unit ball of $\ell_\infty(D)$ such that if $(Y_j^f)_{f\in D}$ are independent and distributed like $M/|M|$, and independent of $(\Gamma_j)$ and $(\eta_j)$, then $(f(X))_{f\in D}$ has the same distribution as
$$\Big(c_p\|\eta\|_p^{-1}|M|^{1/p}\sum_{j=1}^{\infty}\Gamma_j^{-1/p}\eta_jY_j^f\Big)_{f\in D}\,.$$
Let $(x_n)$ be a dense sequence in $B$ and denote, for each $n$, by $F_n$ the subspace generated by $x_1,\dots,x_n$. By Lévy's inequality (2.7), applied conditionally on the sequence $(\Gamma_j)$ to the norms $\inf_{z\in F_n}\sup_{f\in D}|f(\cdot)-f(z)|$, we get that for all $n$ and $\varepsilon>0$,
$$2\,\mathbb{P}\Big\{\inf_{z\in F_n}\|X-z\|>\varepsilon\Big\} \ge \mathbb{P}\Big\{c_p\|\eta\|_p^{-1}|M|^{1/p}\Gamma_1^{-1/p}\inf_{z\in F_n}\sup_{f\in D}\big|\eta_1Y_1^f-f(z)\big|>\varepsilon\Big\}\,.$$
Since $X$ is a Radon variable in $B$, the left hand side of this inequality can be made, for every $\varepsilon>0$, arbitrarily small for all $n$ large enough. It easily follows that we can define a random variable with values in $B$, call it $Y$, such that $f(Y)=Y_1^f$ almost surely for all $f$ in $D$. The same argument as before through Lévy's inequalities indicates that $Y$ is Radon and that $\mathbb{E}\|Y\|^p<\infty$. As in (5.4), the law of $Y$ is, up to a multiplicative factor, a spectral measure of $X$, and we have, by Remark 5.4, that $\int\|x\|^p\,dm(x)<\infty$. The convergence of the series representation indeed takes place in $B$ by the Itô-Nisio theorem (Theorem 2.4), by density of $D$. Corollary 5.5 is therefore established.

A typical example of a Radon $p$-stable random variable with values in a Banach space $B$ is of course given by an almost surely convergent series $X=\sum_i\theta_ix_i$ where $(\theta_i)$ is a standard $p$-stable sequence and $(x_i)$ a sequence in $B$. In this case, the spectral measure is discrete and can be explicitly described. We learned in the beginning of this chapter that, when $0<p<2$, since necessarily $\sup_i\|\theta_ix_i\|<\infty$ almost surely, we have $\sum_i\|x_i\|^p<\infty$. Let then $m$ be given by
$$m = \sum_i\frac{\|x_i\|^p}{2}\Big(\delta_{x_i/\|x_i\|}+\delta_{-x_i/\|x_i\|}\Big)\,.$$
Then $m$ is a spectral measure for $X$, symmetric and concentrated on the unit sphere of $B$; it is then unique (cf. (5.10) below). (Following Remark 5.4, we can more generally choose the spectral measure to be symmetrically distributed on the unit sphere of $B$.) We note in this case that the parameter $\sigma_p(X)$ of $X$ (cf. (5.7)) is simply
$$\sigma_p(X) = \Big(\sum_i\|x_i\|^p\Big)^{1/p} < \infty\,.$$
This property induces a rather deep difference with the Gaussian situation, in which $\sum_i\|x_i\|^2$ is not necessarily finite when $\sum_ig_ix_i$ converges. As yet another difference, note that while convergent series $\sum_ig_ix_i$ completely describe the class of Gaussian Radon random vectors, this is no longer the case when $p<2$ (as soon as the spectral measure is no longer discrete).
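For this discrete example, the identities above can be checked mechanically. The following Python sketch is illustrative only: the vectors $x_i$ in $\mathbb{R}^2$ and the exponent $p$ are hypothetical choices. It computes $\sigma_p(X)=(\sum_i\|x_i\|^p)^{1/p}$ and verifies that $\int|f(x)|^p\,dm(x)=\sum_i|f(x_i)|^p$ for a linear functional $f$, which is what the characteristic functional of Corollary 5.5 requires of the discrete spectral measure.

```python
import math

# Hypothetical data: X = sum_i theta_i x_i in R^2 with these vectors.
xs = [(1.0, 0.0), (0.0, 2.0), (0.6, 0.8)]
p = 1.2
norms = [math.hypot(a, b) for a, b in xs]

# Parameter sigma_p(X) = (sum_i ||x_i||^p)^(1/p); its p-th power is the
# total mass |m| of the discrete spectral measure.
sigma_p = sum(n ** p for n in norms) ** (1.0 / p)

def spectral_integral(f):
    # integral of |f|^p against m = sum_i (||x_i||^p / 2)(delta_{+u_i} + delta_{-u_i})
    # with u_i = x_i / ||x_i||.
    total = 0.0
    for (a, b), n in zip(xs, norms):
        u = (a / n, b / n)
        total += (n ** p / 2.0) * (abs(f(u)) ** p + abs(f((-u[0], -u[1]))) ** p)
    return total

f = lambda v: 0.3 * v[0] - 1.1 * v[1]  # a linear functional on R^2
lhs = spectral_integral(f)
rhs = sum(abs(0.3 * a - 1.1 * b) ** p for a, b in xs)
print(abs(lhs - rhs) < 1e-9, sigma_p)
```

Since $f$ is linear, each mass point contributes $\|x_i\|^p|f(x_i/\|x_i\|)|^p=|f(x_i)|^p$, which is exactly what the check confirms.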

The representation now clearly indicates what kind of properties would be desirable. The representation theorems were described with a sequence $(\eta_j)$ of independent identically distributed real symmetric random variables with $\mathbb{E}|\eta_j|^p<\infty$. Two choices are of particular interest. First is the simple case of a Rademacher sequence. A second choice is an orthogaussian sequence. It then appears that $p$-stable vectors and processes may be seen as conditionally Gaussian. Various Gaussian results can then be used to yield, after integration, similar consequences for stables; the series representation might then be thought of as a kind of substitute for this property. This main idea, and the two preceding choices, will be used extensively in the subsequent study of $p$-stable random variables and processes.

To conclude this section, we come back to the comparison of $(\Gamma_j)$ with $(j)$ initiated in the beginning. First, as an easy consequence of $\Gamma_j/j\to1$ almost surely and the contraction principle (applied conditionally) in the form of Theorem 4.4, it is plain that, in the previous notations, $\sum_{j=1}^{\infty}\Gamma_j^{-1/p}\eta_jY_j$ converges almost surely if and only if $\sum_{j=1}^{\infty}j^{-1/p}\eta_jY_j$ does. The next two observations will be used as quantitative versions of this result. We have that
$$(5.8)\qquad \Big\|\sup_{j\ge1}\Big(\frac{j}{\Gamma_j}\Big)^{1/p}\Big\|_{p,\infty} \le K_p$$
for some $K_p<\infty$, and similarly with $(\Gamma_j/j)^{1/p}$ (for which actually all moments exist); this is a simple consequence of the expression of $\mathbb{P}\{\Gamma_j\le t\}$ and Stirling's formula. More important perhaps is the following:
$$(5.9)\qquad \sum_{j\ge2}\big\|\Gamma_j^{-1/p}-j^{-1/p}\big\|_p^{p\wedge1} < \infty\,.$$
Note that the sum in (5.9) starts from $j=2$ in order for $\Gamma_j^{-1/p}$ to be in $L_p$. It suffices to show that for all $j$ large enough
$$\mathbb{E}\big|\Gamma_j^{-1/p}-j^{-1/p}\big|^p \le K_p\,j^{-(p/2+1)}\,.$$
Writing $\mathbb{E}|\Gamma_j^{-1/p}-j^{-1/p}|^p=j^{-1}\,\mathbb{E}|1-(j/\Gamma_j)^{1/p}|^p$ and splitting according to $\{\Gamma_j<j/2\}$ and its complement, we get, by the Cauchy-Schwarz inequality on the first event and the inequality $|1-x^{1/p}|\le K_p|1-x|$, valid for $0\le x\le2$, on the second,
$$\mathbb{E}\big|\Gamma_j^{-1/p}-j^{-1/p}\big|^p \le \frac{1}{j}\Big(\mathbb{E}\Big|1-\Big(\frac{j}{\Gamma_j}\Big)^{1/p}\Big|^{2p}\Big)^{1/2}\mathbb{P}\Big\{\Gamma_j<\frac{j}{2}\Big\}^{1/2} + \frac{K_p^p}{j}\Big(\mathbb{E}\Big|1-\frac{\Gamma_j}{j}\Big|^2\Big)^{p/2}\,.$$
Since $\mathbb{E}(\Gamma_j-j)^2=j$, the second term is at most $K_p^pj^{-(p/2+1)}$, while $\mathbb{P}\{\Gamma_j<j/2\}$ decreases exponentially in $j$ so that the first term is of smaller order; the claim follows.

5.2. Integrability and tail behavior

We investigate the integrability properties of $p$-stable Radon random variables and almost surely bounded processes, $0<p<2$. We already know from the real case the severe limitations compared to the Gaussian case $p=2$. This study could be based entirely on the representation and the results of the next chapter on sums of independent random variables, substituting, as we just learned, $\Gamma_j$ by $j$. We use the representation, combined with some results of the next chapter, for some precise information on tail behaviors. There is however a first, a priori simple, result which will be convenient to record; this is Proposition 5.6 below. As usual, in order to unify our statements on Radon random variables and bounded processes, let us assume we are given a Banach space $B$ with a countable subset $D$ in the unit ball of $B'$ such that $\|x\|=\sup_{f\in D}|f(x)|$ for all $x$ in $B$.

$X$ is a random variable with values in $B$ if $f(X)$ is measurable for every $f$ in $D$. It is $p$-stable if each finite linear combination $\sum_i\alpha_if_i(X)$, $\alpha_i\in\mathbb{R}$, $f_i\in D$, is a real $p$-stable random variable. As in the previous chapters, we show that the moments of $X$ are controlled by some parameter of the $L_0(B)$ topology.

Proposition 5.6. Let $0<p<2$ and let $X$ be a $p$-stable random variable in $B$. Then $\|X\|_{p,\infty}<\infty$. Furthermore, for every $r<p$, there exists $K_{p,r}$ such that for every $p$-stable variable $X$
$$K_{p,r}^{-1}\|X\|_r \le \|X\|_{p,\infty} \le K_{p,r}\|X\|_r\,.$$
In particular, all the moments of $X$ of order $r<p$ are equivalent, and equivalent to $\|X\|_{p,\infty}$.

Proof. Let $(X_i)$ be independent copies of $X$; for each $N$, $N^{-1/p}\sum_{i=1}^{N}X_i$ has the same distribution as $X$. Let indeed $t_0$ be such that $\mathbb{P}\{\|X\|>t_0\}\le1/4$. We get from Lévy's inequality (2.7) that
$$\frac{1}{4} \ge \mathbb{P}\{\|X\|>t_0\} = \mathbb{P}\Big\{\Big\|\sum_{i=1}^{N}X_i\Big\|>t_0N^{1/p}\Big\} \ge \frac{1}{2}\,\mathbb{P}\big\{\max_{i\le N}\|X_i\|>t_0N^{1/p}\big\}\,.$$
By Lemma 2.6 and identical distribution it follows that $\mathbb{P}\{\|X\|>t_0N^{1/p}\}\le1/N$, which therefore holds for every $N\ge1$, so that $\|X\|_{p,\infty}\le2^{1/p}t_0$. To show the moment equivalences, simply note that for $0<r<p$, if $t_0=(4\mathbb{E}\|X\|^r)^{1/r}$, then
$$\mathbb{P}\{\|X\|>t_0\} \le \frac{1}{t_0^r}\,\mathbb{E}\|X\|^r \le \frac{1}{4}\,,$$
and conclude by a trivial interpolation.

As a consequence of this proposition, we see that if $(X_n)$ is a sequence of $p$-stable random variables converging almost surely, or only in probability, to $X$, then $(X_n)$ also converges in $L_{p,\infty}$ and therefore in $L_r$ for every $r<p$. This follows from the preceding moment equivalences together with Lemma 4.2, or directly from the proof of Proposition 5.6: indeed, for every $\varepsilon>0$, $\mathbb{P}\{\|X_n-X\|>\varepsilon\}$ can be made smaller than $1/4$ for all $n$ large enough, and then $\|X_n-X\|_{p,\infty}\le2^{1/p}\varepsilon$.

Integrability properties of infinite dimensional $p$-stable random vectors are thus similar to the finite dimensional ones. This observation can be pushed further to obtain that (5.2) also extends. For notational convenience and simplicity in the exposition, we present this result in the setting of Radon random variables, but everything goes through to the case of almost surely bounded stable processes.

Let therefore $X$ be a $p$-stable Radon random variable with values in a Banach space $B$. According to Corollary 5.5 (and Remark 5.4), let $m$ be a spectral measure of $X$ symmetrically distributed on the unit sphere of $B$. Then, for every measurable set $A$ in the unit sphere of $B$ such that $m(\partial A)=0$, where $\partial A$ is the boundary of $A$,
$$(5.10)\qquad \lim_{t\to\infty}t^p\,\mathbb{P}\Big\{\|X\|>t\,,\ \frac{X}{\|X\|}\in A\Big\} = c_p^p\,m(A)\,.$$
This shows in particular the uniqueness of such a spectral measure, as announced in Remark 5.4. If we recall the parameter $\sigma_p(X)=|m|^{1/p}$ of $X$ (cf. (5.7)), we have in particular that
$$(5.11)\qquad \lim_{t\to\infty}t^p\,\mathbb{P}\{\|X\|>t\} = c_p^p\,\sigma_p(X)^p\,.$$
The proof is based on the representation and mimics the last argument in the proof of Theorem 5.1. To better describe the idea of the proof of (5.10), let us first establish the particular case (5.11).

By homogeneity, assume that $c_p|m|^{1/p}=1$. Let $(Y_j)$ be independent random variables distributed like $m/|m|$; since $m$ is symmetric, $X$ has the same distribution as $Z=\sum_{j=1}^{\infty}\Gamma_j^{-1/p}Y_j$. The main observation is that
$$\mathbb{E}\Big\|\sum_{j=2}^{\infty}\Gamma_j^{-1/p}Y_j\Big\|^p < \infty\,.$$
We use here a result of the next chapter on sums of independent random variables: anticipating, we may invoke Theorem 6.11 to see that this indeed holds, since, $Y_1$ being concentrated on the unit sphere of $B$, it is enough by (5.9) to have $\sum_{j\ge2}\mathbb{E}\|\Gamma_j^{-1/p}Y_j-j^{-1/p}Y_j\|^p<\infty$. Hence it follows that
$$(5.12)\qquad \lim_{t\to\infty}t^p\,\mathbb{P}\Big\{\Big\|\sum_{j=2}^{\infty}\Gamma_j^{-1/p}Y_j\Big\|>t\Big\} = 0\,.$$
By (5.3), combining with (5.12), we get that
$$\lim_{t\to\infty}t^p\,\mathbb{P}\Big\{\Big\|\sum_{j=1}^{\infty}\Gamma_j^{-1/p}Y_j\Big\|>t\Big\} = 1\,,$$
hence the result (5.11).

We next turn to (5.10). We only establish that, for every closed subset $F$ of the unit sphere of $B$,
$$\limsup_{t\to\infty}t^p\,\mathbb{P}\Big\{\|X\|>t\,,\ \frac{X}{\|X\|}\in F\Big\} \le \mathbb{P}\{Y_1\in F\}\,.$$
The corresponding lower bound for open sets is established similarly, yielding thus (5.10) since $Y_1$ is distributed like $m/|m|$. For every $\varepsilon>0$, we set $F_\varepsilon=\{x\in B\,;\ \exists\,y\in F\,,\ \|x-y\|\le\varepsilon\}$. For every $\varepsilon,t>0$,
$$\mathbb{P}\Big\{\|X\|>t\,,\ \frac{X}{\|X\|}\in F\Big\} \le \mathbb{P}\Big\{\|Z\|>t\,,\ \frac{Z}{\|Z\|}\in F\,,\ \Big\|\sum_{j=2}^{\infty}\Gamma_j^{-1/p}Y_j\Big\|\le\varepsilon t\Big\} + \mathbb{P}\Big\{\Big\|\sum_{j=2}^{\infty}\Gamma_j^{-1/p}Y_j\Big\|>\varepsilon t\Big\}\,.$$
By (5.12) we need only concentrate on the first probability on the right of this inequality. Assume thus that $\|Z\|>t$, $Z/\|Z\|\in F$ and $\|\sum_{j\ge2}\Gamma_j^{-1/p}Y_j\|\le\varepsilon t$. Since
$$Z = \Gamma_1^{-1/p}Y_1 + \sum_{j=2}^{\infty}\Gamma_j^{-1/p}Y_j\,,$$
we deduce from the triangle inequality that $\Gamma_1^{-1/p}\ge\|Z\|-\varepsilon t>(1-\varepsilon)t$, and, since also $(1-\varepsilon)t<\|Z\|\le\Gamma_1^{-1/p}+\varepsilon t$, that $\|Y_1-Z/\|Z\|\,\|$ is bounded by a numerical multiple of $\varepsilon$; hence $Y_1\in F_{2\varepsilon}$ for all $\varepsilon$ small enough (changing $\varepsilon$ into a multiple of it if necessary). Summarizing,
$$\mathbb{P}\Big\{\|X\|>t\,,\ \frac{X}{\|X\|}\in F\Big\} \le \mathbb{P}\big\{\Gamma_1^{-1/p}>(1-\varepsilon)t\,,\ Y_1\in F_{2\varepsilon}\big\} + \mathbb{P}\Big\{\Big\|\sum_{j=2}^{\infty}\Gamma_j^{-1/p}Y_j\Big\|>\varepsilon t\Big\}\,.$$
By independence of $\Gamma_1$ and $Y_1$, (5.3) and (5.12), it follows that for every $\varepsilon>0$
$$\limsup_{t\to\infty}t^p\,\mathbb{P}\Big\{\|X\|>t\,,\ \frac{X}{\|X\|}\in F\Big\} \le (1+\varepsilon)\,\mathbb{P}\{Y_1\in F_{2\varepsilon}\} + \varepsilon\,.$$
Since $\varepsilon>0$ is arbitrary and $F$ is closed, this proves the claim.

Along the same line of ideas, it is possible to obtain from the representation a concentration inequality for $\|X\|$ around its expectation ($p>1$). The argument relies on a concentration idea for sums of independent random variables presented in Section 6.3, but we already explain the result here. For simplicity, we deal as before with Radon random variables and with the case $p>1$; the case $0<p\le1$ can be discussed similarly with $\|X\|_r$, $r<p$, instead of $\mathbb{E}\|X\|$.

Proposition 5.7. Let $1<p<2$ and let $X$ be a $p$-stable Radon random variable with values in a Banach space $B$. Then, for all $t>0$,
$$\mathbb{P}\big\{\big|\,\|X\|-\mathbb{E}\|X\|\,\big|>t\big\} \le C_p\,\frac{\sigma_p(X)^p}{t^p}$$
where $C_p>0$ only depends on $p$, $\sigma_p(X)$ denoting the parameter of $X$.

Proof. Let $(Y_j)$ be independent identically distributed random variables in the unit sphere of $B$ such that $X$ has the law of $c_p\sigma_p(X)Z$ where $Z=\sum_{j=1}^{\infty}\Gamma_j^{-1/p}Y_j$ is almost surely convergent in $B$ (Corollary 5.5). By the triangle inequality,
$$\big|\,\|Z\|-\mathbb{E}\|Z\|\,\big| \le \sum_{j=1}^{\infty}\big|\Gamma_j^{-1/p}-j^{-1/p}\big| + \mathbb{E}\sum_{j=1}^{\infty}\big|\Gamma_j^{-1/p}-j^{-1/p}\big| + \Big|\,\Big\|\sum_{j=1}^{\infty}j^{-1/p}Y_j\Big\|-\mathbb{E}\Big\|\sum_{j=1}^{\infty}j^{-1/p}Y_j\Big\|\,\Big|\,.$$
By (5.9) ($p>1$),
$$\sum_{j=2}^{\infty}\big\|\Gamma_j^{-1/p}-j^{-1/p}\big\|_p < \infty\,,$$
while only $\|\Gamma_1^{-1/p}-1\|_{p,\infty}<\infty$; hence the first two terms on the right are controlled in $L_{p,\infty}$. In order to estimate the deviation of $\|\sum_{j}j^{-1/p}Y_j\|$ from its expectation, we can use the (martingale) quadratic inequality
$$\mathbb{E}\Big(\Big\|\sum_{j=1}^{\infty}j^{-1/p}Y_j\Big\|-\mathbb{E}\Big\|\sum_{j=1}^{\infty}j^{-1/p}Y_j\Big\|\Big)^2 \le \sum_{j=1}^{\infty}j^{-2/p}$$
and $\sum_{j=1}^{\infty}j^{-2/p}<\infty$ since $2/p>1$. Combining these various estimates, the proof is easily completed.

As a parenthesis, note that if $X$ is a $p$-stable random variable with parameter $\sigma_p(X)$, applying Lévy's inequalities to the representation yields
$$(5.13)\qquad \sigma_p(X) \le K_p\|X\|_{p,\infty}$$
for some constant $K_p$ depending only on $p$. Thus, the "strong" norm of a $p$-stable variable always dominates its parameter $\sigma_p(X)$. Let us already mention that (5.13) is two-sided when $0<p<1$; this follows again from the representation together with the fact that $\sum_{j=1}^{\infty}\Gamma_j^{-1/p}<\infty$ almost surely when $p<1$. (It is also a consequence of (5.11).) We will see later how this is no longer the case when $1\le p<2$.

We conclude this paragraph with some useful inequalities for real valued independent random variables which do not seem at first sight to be connected with stable distributions.

They will however be very helpful later on in the study of various questions involving stable distributions, as in Chapters 9 and 13. They also allow to evaluate some interesting norms of $p$-stable variables. The basic inequality we present is contained in the following lemma.

Lemma 5.8. Let $0<p<\infty$ and let $(Z_i)$ be independent positive random variables. Recall that for $0<p<\infty$ and a sequence $(\alpha_i)_{i\ge1}$ of real numbers,
$$\|(\alpha_i)\|_{p,\infty} = \Big(\sup_{t>0}t^p\,\mathrm{Card}\{i\,;\ |\alpha_i|>t\}\Big)^{1/p} = \sup_{i\ge1}i^{1/p}\alpha_i^*$$
where $(\alpha_i^*)_{i\ge1}$ is the non-increasing rearrangement of the sequence $(|\alpha_i|)_{i\ge1}$. Then
$$\sup_{t>0}t^p\,\mathbb{P}\big\{\|(Z_i)\|_{p,\infty}>t\big\} \le 2e\,\sup_{t>0}t^p\sum_i\mathbb{P}\{Z_i>t\}\,.$$

Proof. By homogeneity (replacing $Z_i$ by $Z_i^p$) it suffices to deal with the case $p=1$. If $(Z_n^*)$ denotes the non-increasing rearrangement of the sequence $(Z_i)$, then $Z_n^*>u$ if and only if $\sum_iI_{\{Z_i>u\}}\ge n$. Hence, by Lemma 2.6, if $a=\sum_i\mathbb{P}\{Z_i>u\}$, then $\mathbb{P}\{Z_n^*>u\}\le(ea/n)^n$. Now,
$$\mathbb{P}\big\{\|(Z_i)\|_{1,\infty}>t\big\} = \mathbb{P}\big\{\sup_{n\ge1}nZ_n^*>t\big\} \le \sum_{n=1}^{\infty}\mathbb{P}\Big\{Z_n^*>\frac{t}{n}\Big\} \le \sum_{n=1}^{\infty}\Big(\frac{e}{n}\sum_i\mathbb{P}\Big\{Z_i>\frac{t}{n}\Big\}\Big)^n\,.$$
Assuming by homogeneity that $\sup_{u>0}u\sum_i\mathbb{P}\{Z_i>u\}\le1$, we see that if $t>2e$,
$$\mathbb{P}\big\{\|(Z_i)\|_{1,\infty}>t\big\} \le \sum_{n=1}^{\infty}\Big(\frac{e}{t}\Big)^n \le \frac{2e}{t}\,,$$
while this inequality is trivial for $t\le2e$. Lemma 5.8 is therefore established.

Note that the $t^{-p}$ tail of $\mathbb{P}\{\sup_nn^{1/p}Z_n^*>t\}$ is actually given by the largest term $Z_1^*$. The next terms are smaller, and the preceding proof indicates indeed that for all $t>0$ and every integer $k\ge1$,
$$(5.14)\qquad \mathbb{P}\big\{\sup_{n\ge k}n^{1/p}Z_n^*>t\big\} \le \Big(\frac{2e}{t^p}\sup_{u>0}u^p\sum_i\mathbb{P}\{Z_i>u\}\Big)^k\,.$$
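The quantity $\|(\alpha_i)\|_{p,\infty}=\sup_ii^{1/p}\alpha_i^*$ appearing in Lemma 5.8 is straightforward to compute. The following minimal Python sketch is illustrative and not from the text; the test sequence $\alpha_i=i^{-1/p}$ is chosen because it saturates the supremum at every index, so its weak norm equals $1$.

```python
def weak_lp_norm(alphas, p):
    # ||(alpha_i)||_{p,infty} = sup_i i^(1/p) * alpha_i^*, where (alpha_i^*)
    # is the non-increasing rearrangement of (|alpha_i|).
    rearranged = sorted((abs(a) for a in alphas), reverse=True)
    return max((i + 1) ** (1.0 / p) * a for i, a in enumerate(rearranged))

p = 1.5
alphas = [(i + 1) ** (-1.0 / p) for i in range(1000)]
print(abs(weak_lp_norm(alphas, p) - 1.0) < 1e-9)  # True
```

This is exactly the norm in which the rearranged terms $\Gamma_j^{-1/p}$ of the representation are controlled by Corollary 5.9 below.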

Motivated by the representation of stable variables, the preceding lemma has an interesting consequence in the case where $Z_i=Y_i/i^{1/p}$, where $(Y_i)$ is a sequence of independent identically distributed random variables.

Corollary 5.9. Let $0<p<\infty$ and let $Y$ be a positive random variable such that $\mathbb{E}Y^p<\infty$. Let $(Y_i)$ be independent copies of $Y$ and set $Z_i=Y_i/i^{1/p}$, $i\ge1$. Then, for every $t>0$ and every integer $n\ge1$,
$$\mathbb{P}\big\{n^{1/p}Z_n^*>e^{1/p}\|Y\|_pt\big\} \le \frac{1}{t^{np}}\,.$$
Further,
$$\mathbb{P}\big\{\sup_{n\ge k}n^{1/p}Z_n^*>(2e)^{1/p}\|Y\|_pt\big\} \le \frac{1}{t^{kp}}$$
for all $k\ge1$ and $t>0$.

Proof. Note that for every $u>0$,
$$\sum_{i=1}^{\infty}\mathbb{P}\{Z_i>u\} = \sum_{i=1}^{\infty}\mathbb{P}\big\{Y>ui^{1/p}\big\} \le \frac{1}{u^p}\,\mathbb{E}Y^p\,.$$
Hence the first inequality simply follows from $\mathbb{P}\{Z_n^*>u\}\le(ea/n)^n$ (Lemma 2.6) with $u=(e/n)^{1/p}\|Y\|_pt$. The second inequality follows similarly from Lemma 5.8 and (5.14).

Let us note that the previous simple statements yield an alternate proof of (5.5) and (5.6). Let us simply indicate how to establish that $\|\sum_{j=1}^{\infty}j^{-1/p}\eta_j\|_{p,\infty}<\infty$ whenever $\mathbb{E}|\eta|^p<\infty$ with these tools. For every $t>0$,
$$\mathbb{P}\Big\{\Big|\sum_{j=1}^{\infty}\Gamma_j^{-1/p}\eta_j\Big|>3t\Big\} \le \mathbb{P}\Big\{\Big|\sum_{j=1}^{\infty}(\Gamma_j^{-1/p}-j^{-1/p})\eta_j\Big|>t\Big\} + \mathbb{P}\Big\{\Big|\sum_{j=1}^{\infty}j^{-1/p}\eta_j\Big|>2t\Big\}\,.$$
By (5.9) and (5.3), the first probability on the right of this inequality is of the order of $t^{-p}$. We are thus left with $\mathbb{P}\{|\sum_{j=1}^{\infty}j^{-1/p}\eta_j|>2t\}$. Set $Z_j=\eta_j/j^{1/p}$, $j\ge1$, and let $(Z_j^*)$ be the non-increasing rearrangement of the sequence $(|Z_j|)$. Denote further by $(\varepsilon_j)$ a Rademacher sequence independent of $(\eta_j)$. Then, by symmetry and identical distribution,
$$\mathbb{P}\Big\{\Big|\sum_{j=1}^{\infty}j^{-1/p}\eta_j\Big|>2t\Big\} = \mathbb{P}\Big\{\Big|\sum_{j=1}^{\infty}\varepsilon_jZ_j^*\Big|>2t\Big\} \le \mathbb{P}\{Z_1^*>t\} + \mathbb{P}\Big\{\Big|\sum_{j=2}^{\infty}\varepsilon_jZ_j^*\Big|>t\Big\}$$
and thus
$$\mathbb{P}\Big\{\Big|\sum_{j=1}^{\infty}j^{-1/p}\eta_j\Big|>2t\Big\} \le \mathbb{P}\{Z_1^*>t\} + \mathbb{P}\big\{\sup_{j\ge2}j^{1/p}Z_j^*>t\big\} + \mathbb{P}\Big\{\Big|\sum_{j=2}^{\infty}\varepsilon_jZ_j^*\Big|>t\,,\ \sup_{j\ge2}j^{1/p}Z_j^*\le t\Big\}\,.$$
Recall that $\mathbb{E}|\eta_j|^p<\infty$ and $Z_j=\eta_j/j^{1/p}$. By Corollary 5.9, the first two terms in the last estimate are of the order of $t^{-p}$. Concerning the third one, we can use for example the subgaussian inequality (4.1) conditionally on the sequence $(Z_j^*)$ and find in this way a bound of the order of $\exp(-K_pt)$. This shows indeed that $\|\sum_{j=1}^{\infty}j^{-1/p}\eta_j\|_{p,\infty}<\infty$. The limit (5.6) can be established in the same way.

Lemma 5.8 has some further interesting consequences for the evaluation of the norm of certain stable vectors. Consider a standard $p$-stable sequence $(\theta_i)$ ($0<p<2$) and $(\alpha_i)$ a finite (for simplicity) sequence of real numbers.

We might be interested in the $p$-stable random variable in $\ell_r$, $0<r\le\infty$, whose coordinates are $(\alpha_i\theta_i)$; in other words, in $(\sum_i|\alpha_i\theta_i|^r)^{1/r}$ as a function of the $\alpha_i$'s. Lemma 5.8 indicates to start with that
$$(5.15)\qquad \big\|\,\|(\alpha_i\theta_i)\|_{p,\infty}\big\|_{p,\infty} \le K_p\Big(\sum_i|\alpha_i|^p\Big)^{1/p}\,.$$
This inequality is in fact two-sided. For the case $r=\infty$ we have indeed that
$$(5.16)\qquad \big\|\sup_i|\alpha_i\theta_i|\big\|_{p,\infty} \simeq \Big(\sum_i|\alpha_i|^p\Big)^{1/p}$$
where the equivalence sign means a two-sided inequality up to a constant $K_p$ depending on $p$ only. The right side follows from (5.15), while for the left one we may assume by homogeneity that $\sum_i|\alpha_i|^p=1$; we note that, if $t$ is large enough and satisfies $\mathbb{P}\{\sup_i|\alpha_i\theta_i|>t\}\le1/2$, then, by Lemma 2.6,
$$\mathbb{P}\big\{\sup_i|\alpha_i\theta_i|>t\big\} \ge \frac{1}{2}\sum_i\mathbb{P}\{|\alpha_i\theta_i|>t\} \ge \frac{1}{2K_pt^p}\sum_i|\alpha_i|^p\,,$$
from which (5.16) clearly follows. Since for $r>p$
$$\sup_i|\alpha_i\theta_i| \le \Big(\sum_i|\alpha_i\theta_i|^r\Big)^{1/r} \le \Big(\sum_jj^{-r/p}\Big)^{1/r}\big\|(\alpha_i\theta_i)\big\|_{p,\infty}\,,$$
it is plain that (5.16) extends for $r>p$ into
$$(5.17)\qquad \Big\|\Big(\sum_i|\alpha_i\theta_i|^r\Big)^{1/r}\Big\|_{p,\infty} \simeq \Big(\sum_i|\alpha_i|^p\Big)^{1/p}$$
where the equivalence is up to $K_{p,r}$ depending on $p,r$ only. When $r<p$, we can simply use Proposition 5.6 and the moment equivalences to see that
$$(5.18)\qquad \Big\|\Big(\sum_i|\alpha_i\theta_i|^r\Big)^{1/r}\Big\|_{p,\infty} \simeq \Big(\sum_i|\alpha_i|^r\Big)^{1/r}\,.$$
We are thus left with the slightly more complicated case $r=p$. It states that
$$(5.19)\qquad \Big\|\Big(\sum_i|\alpha_i\theta_i|^p\Big)^{1/p}\Big\|_{p,\infty}^p \simeq \sum_i|\alpha_i|^p\Big[1+\log\Big(\frac{(\sum_j|\alpha_j|^p)^{1/p}}{|\alpha_i|}\Big)\Big]\,.$$
We only prove the upper bound; the lower bound is proved similarly and we thus leave it to the interested reader. Assume by homogeneity that $\sum_i|\alpha_i|^p=1$. For every $t>0$,
$$\mathbb{P}\Big\{\sum_i|\alpha_i\theta_i|^p>t^p\Big\} \le \mathbb{P}\Big\{\sum_i|\alpha_i\theta_i|^pI_{\{|\alpha_i\theta_i|\le t\}}>t^p\Big\} + \mathbb{P}\big\{\sup_i|\alpha_i\theta_i|>t\big\}\,.$$
By (5.16) we need only be concerned with the first probability on the right of this inequality. By integration by parts, it is easily seen that there exists a constant $K_p$ large enough such that if
$$t^p \ge K_p\sum_i|\alpha_i|^p\Big(1+\log\frac{1}{|\alpha_i|}\Big)\,,$$
then
$$\sum_i\mathbb{E}\big(|\alpha_i\theta_i|^pI_{\{|\alpha_i\theta_i|\le t\}}\big) \le \frac{t^p}{2}\,.$$
Therefore, for such $t$, by Chebyshev's inequality,
$$\mathbb{P}\Big\{\sum_i|\alpha_i\theta_i|^pI_{\{|\alpha_i\theta_i|\le t\}}>t^p\Big\} \le \mathbb{P}\Big\{\sum_i\Big[|\alpha_i\theta_i|^pI_{\{|\alpha_i\theta_i|\le t\}}-\mathbb{E}\big(|\alpha_i\theta_i|^pI_{\{|\alpha_i\theta_i|\le t\}}\big)\Big]>\frac{t^p}{2}\Big\} \le \frac{4}{t^{2p}}\sum_i\mathbb{E}\big(|\alpha_i\theta_i|^{2p}I_{\{|\alpha_i\theta_i|\le t\}}\big)\,.$$

Integrating again by parts, this quantity is seen to be less than $8\|\theta_1\|_{p,\infty}^pt^{-p}$. If we put together all these informations, we see that they yield the upper bound in (5.19). This line of investigation will also be the key idea in Section 12.3.

5.3. Comparison theorems

In this section, we are concerned with comparisons of stable processes analogous to the ones described for Gaussian and Rademacher processes. It will appear that these cannot in general be extended to the stable setting; however, several interesting results are still available. All of them are based on the observation at the end of Section 5.1, namely that stable variables can be represented as conditionally Gaussian. Gaussian techniques can then be used to yield some positive consequences for $p$-stable variables.

We have noticed that inequality (5.13) is two-sided for $0<p<1$. In some sense, therefore, the study of $p$-stable variables with $0<p<1$ is not really interesting, since the parameter, that is the mere existence of a spectral measure $m$ satisfying $\int\|x\|^p\,dm(x)<\infty$ (cf. (5.4)), completely describes boundedness and size of the variable. Things are quite different when $1\le p<2$, as the following example shows. We assume that $1<p<2$, although the case $p=1$ can be treated completely similarly. Consider the $p$-stable random variable $X$ in $\mathbb{R}^N$ equipped with the sup-norm given by the representation $\sum_{j=1}^{\infty}\Gamma_j^{-1/p}Y_j$, where the $Y_j$ are independent and distributed like the Haar measure $\mu_N$ on $\{-1,+1\}^N$. By definition, the parameter $\sigma_p(X)$ of (5.7) is $1$. Let us show however that $\|X\|_{p,\infty}$ is of the order of $(\log N)^{1/q}$ (for $N$ large), where $q$ is the conjugate of $p$ ($\log\log N$ when $p=1$). We only prove the lower bound, the upper bound being similar.

First note that, by the contraction principle applied conditionally on $(\Gamma_j)$, together with (5.8), from which $\mathbb{E}\sup_j(\Gamma_j/j)^{1/p}\le K_p$ for some $K_p$ depending on $p$ only,
$$\mathbb{E}\Big\|\sum_{j=1}^{\infty}j^{-1/p}Y_j\Big\| \le K_p\,\mathbb{E}\|X\|\,.$$
Since the coordinates of the $Y_j$'s are independent signs, and since we consider $\mathbb{R}^N$ with the sup-norm,
$$\mathbb{E}\Big\|\sum_{j=1}^{\infty}j^{-1/p}Y_j\Big\| = \mathbb{E}\max_{i\le N}|Z_i|$$
where $Z_1,\dots,Z_N$ are independent copies of the real random variable $Z=\sum_{j=1}^{\infty}j^{-1/p}\varepsilon_j$, $(\varepsilon_j)$ a Rademacher sequence; so that we simply have to bound this maximum from below. For $t>0$, let $\ell$ be the smallest integer such that $(\ell+1)^{1/q}>t$. With probability $2^{-(\ell+1)}$ we have $\varepsilon_j=+1$ for every $j\le\ell+1$, and
$$\sum_{j=1}^{\ell+1}j^{-1/p} \ge (\ell+1)\,(\ell+1)^{-1/p} = (\ell+1)^{1/q} > t\,,$$
so that, by Lévy's inequality (2.7),
$$\mathbb{P}\{|Z|>t\} \ge \frac{1}{2}\,\mathbb{P}\Big\{\Big|\sum_{j=1}^{\ell+1}j^{-1/p}\varepsilon_j\Big|>t\Big\} \ge \frac{1}{2}\,2^{-(\ell+1)} \ge \frac{1}{12}\exp(-t^q)\,.$$
Let then $t=2\,\mathbb{E}\max_{i\le N}|Z_i|$, so that in particular $\mathbb{P}\{\max_{i\le N}|Z_i|>t\}\le1/2$. By Lemma 2.6 we then have $N\,\mathbb{P}\{|Z|>t\}\le2$, and from the preceding lower bound it follows that $t\ge K_p^{-1}(\log N)^{1/q}$ for some $K_p>0$. Therefore we have obtained that
$$\|X\|_{p,\infty} \ge K_p^{-1}(\log N)^{1/q}$$
while $\sigma_p(X)=1$. This clearly indicates what the differences can be between $\|X\|_{p,\infty}$ and the parameter $\sigma_p(X)$ of a finite dimensional $p$-stable random variable $X$ for $1\le p<2$. According to the previous observations, the next results on comparison theorems and Sudakov's minoration for $p$-stable random variables are restricted to the case $1\le p<2$.
We . the next results on omparison theorems and Sudakov's minoration for p -stable random variables are restri ted to the ase 1 p < 2 .nite dimensional p -stable random variable X for 1 p < 2 . A ording to the previous observations.

One of the ideas of the Gaussian omparison theorems was that if one an ompare dX and dY .rst address the question of omparison properties in the form of Slepian's lemma for stable random ve tors. YN ) in IRN . j ) (resp. N ) ( (i ) is a standard p -stable sequen e). Consider two p -stable. : : : . Thus dX and dY are equivalent. then one should be able to ompare the distributions or averages of max Xi and max Yi ( f. i 6= j . we know that IE max Xi Kp (log N )1=q iN . j N . 1 p < 2 . : : : .15). j ) = 21=p . XN ) . the following iN iN simple example furnishes a very negative result to begin with. j ) ) the parameter of the real p -stable variable Xi Xj (resp. However. It is easily seen that dX (i. 1 i . Yi Yj ). Corollary 3. By analogy with the Gaussian ase denote by dX (i. dY (i. random ve tors X = (X1 . Y = (Y1 . : : : . In the stable ase with p < 2 . assuming for example that p > 1 . Let X be as in the pre eding example and take Y to be the anoni al p -stable ve tor in IRN given by Y = (1 . j ) = 21=q while dY (i.14 and Theorem 3.

while, as a consequence of (5.16), IE max_{i≤N} θ_i ≥ K_p^{−1} N^{1/p} (at least for all N large enough; compare IE max_{i≤N} θ_i and IE max_{i≤N} |θ_i|). One can thus measure on this example the gap which may arise in comparison theorems for p-stable vectors when p < 2.

Nevertheless some positive results remain. This is for example the case for Sudakov's minoration (Theorem 3.18). If X = (X_t)_{t∈T} is a p-stable process (1 ≤ p < 2) indexed by some set T, denote as before by d_X(s,t), s, t ∈ T, the parameter of the p-stable real random variable X_s − X_t. Since ‖X_s − X_t‖_r = c_{p,r} d_X(s,t), r < p, d_X (d_X^r if p = 1) defines a pseudo-metric on T. Recall that N(T, d_X, ε) is the smallest number of open balls of radius ε > 0 in the metric d_X which cover T (possibly infinite). As is usual in similar contexts, we simply let here

    ‖ sup_{t∈T} |X_t| ‖_{p,∞} = sup{ ‖ sup_{t∈F} |X_t| ‖_{p,∞} ; F finite in T }.

We then have the following extension of Sudakov's minoration. (This minoration will be improved in Section 12.2.) The idea of the proof is to represent X as conditionally Gaussian and then to apply the Gaussian inequalities.
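As an aside, one-dimensional symmetric p-stable variables can be simulated directly. The sketch below uses the Chambers–Mallows–Stuck method (an ingredient of this illustration, not a construction from the text) and checks the Paretian tail IP{|θ| > t} ≈ c t^{−p} that underlies the parameter computations of this chapter.

```python
import math, random

def sym_stable(p, rng):
    # Chambers-Mallows-Stuck sampler for a standard symmetric p-stable
    # variable (0 < p < 2, p != 1): theta uniform on (-pi/2, pi/2), W ~ Exp(1).
    theta = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    return (math.sin(p * theta) / math.cos(theta) ** (1.0 / p)
            * (math.cos((1.0 - p) * theta) / w) ** ((1.0 - p) / p))

rng = random.Random(1)
p = 1.5
sample = [sym_stable(p, rng) for _ in range(200_000)]
for t in (5.0, 10.0):
    frac = sum(abs(x) > t for x in sample) / len(sample)
    # t^p * P{|X| > t} is roughly constant for large t (Paretian tail)
    print(t, t ** p * frac)
```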

Theorem 5.10. There is a constant K_p depending only on p > 1 such that if q is the conjugate of p and if X = (X_t)_{t∈T} is a p-stable random process with 1 ≤ p < 2 and associated pseudo-metric d_X, then, for every ε > 0,

    ε ( log N(T, d_X, ε) )^{1/q} ≤ K_p ‖ sup_{t∈T} |X_t| ‖_{p,∞}.

When p = 1, the lower bound has to be replaced by ε log⁺ log N(T, d_X, ε). In both cases, if X is almost surely bounded, (T, d_X) is totally bounded.

Proof. We only show the result when 1 < p < 2; the case p = 1 seems to require independent deeper tools and we refer to [Ta8] for this investigation. Let N ≤ N(T, d_X, ε); there exists U ⊂ T with cardinality N such that d_X(s,t) > ε for s ≠ t in U. Consider the p-stable process (X_t)_{t∈U} and let m be a spectral measure of this random vector (in IR^N thus). Let (Y_j) be independent and distributed like m/|m| and let further (g_j) denote an orthogaussian sequence. As usual in the representation, the sequences (Y_j), (Γ_j), (g_j) are independent. From Corollary 5.5, (X_t)_{t∈U} has the same distribution as

    c_p ‖g_1‖_p^{−1} |m|^{1/p} Σ_{j=1}^{∞} Γ_j^{−1/p} g_j Y_j.

Related to this representation, we introduce random distances (on U). Denote by ω the randomness in the sequences (Y_j) and (Γ_j) and set, for each such ω and s, t in U,

    d_ω(s,t) = c_p ‖g_1‖_p^{−1} |m|^{1/p} ( IE_g | Σ_{j=1}^∞ Γ_j^{−1/p}(ω) g_j ( Y_j(ω,s) − Y_j(ω,t) ) |² )^{1/2}
             = c_p ‖g_1‖_p^{−1} |m|^{1/p} ( Σ_{j=1}^∞ Γ_j^{−2/p}(ω) | Y_j(ω,s) − Y_j(ω,t) |² )^{1/2}

where IE_g denotes partial integration with respect to the Gaussian sequence (g_j). Now recall that, for all λ in IR and s, t in U,

    IE exp( iλ(X_s − X_t) ) = exp( −(1/2) |λ|^p d_X^p(s,t) ) = IE exp( −(λ²/2) d_ω²(s,t) ).

It follows that for every u and λ > 0 and s, t in U,

    IP{ d_ω(s,t) ≤ u d_X(s,t) } = IP{ exp( −(λ²/2) d_ω²(s,t) ) ≥ exp( −(λ²/2) u² d_X²(s,t) ) }
                                ≤ exp( (λ²/2) u² d_X²(s,t) − (1/2) λ^p d_X^p(s,t) ).

Minimizing over λ > 0, namely taking λ = [ (p/2) u^{−2} d_X^{p−2}(s,t) ]^{1/(2−p)}, yields

(5.20)    IP{ d_ω(s,t) ≤ u d_X(s,t) } ≤ exp( −β u^{−α} )

where 1/α = 1/p − 1/2 and β = β_p > 0 depends on p only. From (5.20) we thus get that for all u > 0,

    IP{ ∃ s ≠ t in U : d_ω(s,t) ≤ u d_X(s,t) } ≤ ( Card U )² exp( −β u^{−α} ).

Choose u > 0 in order that this probability is less than 1/2 (say); more precisely, take u = ( β / log(2N²) )^{1/α} where N = Card U. Hence, on a set Ω₀ of ω's of probability bigger than 1/2, d_ω(s,t) ≥ u d_X(s,t) > εu for all s ≠ t in U (recall that d_X(s,t) > ε for s ≠ t in U). Accordingly, by Sudakov's minoration for Gaussian processes (Theorem 3.18), for some numerical constant K and all ω in Ω₀,

    (1/K) (εu/2) (log N)^{1/2} ≤ c_p ‖g_1‖_p^{−1} |m|^{1/p} IE_g sup_{t∈U} | Σ_{j=1}^∞ Γ_j^{−1/p}(ω) g_j Y_j(ω,t) |

(since N(U, d_ω, εu/2) ≥ N). Now, by partial integration and since IP(Ω₀) ≥ 1/2,

    IE sup_{t∈U} |X_t| ≥ c_p ‖g_1‖_p^{−1} |m|^{1/p} ∫_{Ω₀} IE_g sup_{t∈U} | Σ_{j=1}^∞ Γ_j^{−1/p} g_j Y_j(t) | dIP ≥ (1/4K) εu (log N)^{1/2}.

By the choice of u and Proposition 5.6, it follows that ‖ sup_{t∈U} |X_t| ‖_{p,∞} ≥ K_p^{−1} ε (log N)^{1/q} for some constant K_p depending on p only. Since N ≤ N(T, d_X, ε) was arbitrary, the proof is seen to be complete.

There is also an extension of Corollary 3.19 whose proof is completely similar: if X = (X_t)_{t∈T} is a p-stable random process, 1 ≤ p < 2, with almost all trajectories bounded and continuous on (T, d_X) (or having a version with these properties), then

(5.21)    lim_{ε→0} ε ( log N(T, d_X, ε) )^{1/q} = 0 ,  p > 1

(and lim_{ε→0} ε log⁺ log N(T, d_X, ε) = 0 when p = 1).

In the last part of this chapter, we briefly investigate tensor products of stable random variables, analogous to the ones studied for Gaussian and Rademacher series with vector valued coefficients. One corresponding question would be the following: if (x_i) is a sequence in a Banach space E and (y_j) a sequence in a Banach space F such that Σ_i θ_i x_i and Σ_j θ_j y_j are both almost surely convergent, where (θ_i) is a standard p-stable sequence, is it the same for Σ_{i,j} θ_{ij} x_i ⊗ y_j in the injective tensor product E ⊗ F of the Banach spaces E and F, where (θ_{ij}) is a doubly indexed standard p-stable sequence? Theorem 3.20 has provided a positive answer in the case p = 2, and the object of what follows will be to show that this remains valid when p < 2. We have however to somewhat widen this study since, contrary to the Gaussian case, not all p-stable Radon random variables in a Banach space can be represented as a convergent series of the type Σ_i θ_i x_i; we use instead spectral measures. Let thus 0 < p < 2 and let U and V be p-stable Radon random variables with values in E and F respectively. Denote by m_U (resp. m_V) the symmetric spectral measure of U (resp. V) concentrated on the unit sphere of E (resp. F). One can then define naturally the symmetric measure m_U ⊗ m_V on the unit sphere of E ⊗ F. Is this measure the spectral measure of some p-stable random variable with values in E ⊗ F? The next theorem describes the positive answer to this question.
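To fix ideas about the norm on which this measure lives: under the injective tensor norm, an elementary tensor y ⊗ z has norm ‖y‖·‖z‖. Identifying y ⊗ z with the rank-one matrix y zᵀ on Euclidean spaces, this is the operator norm — a small illustrative check (not from the text):

```python
import math, random

rng = random.Random(3)
y = [rng.gauss(0.0, 1.0) for _ in range(5)]
z = [rng.gauss(0.0, 1.0) for _ in range(7)]

def norm(v):
    return math.sqrt(sum(c * c for c in v))

# For M = y z^T, ||M u|| = |z . u| * ||y||; the sup over unit vectors u is
# attained at u = z/||z|| and equals ||y|| * ||z||.
best = 0.0
for _ in range(1000):
    u = [rng.gauss(0.0, 1.0) for _ in range(7)]
    dot = abs(sum(zi * ui for zi, ui in zip(z, u))) / norm(u)
    best = max(best, dot * norm(y))
exact = norm(y) * norm(z)
print(best, exact)
```

Random unit directions u give values at most ‖y‖‖z‖, with the bound attained in the direction of z.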

Theorem 5.11. Let 0 < p < 2 and let U and V be p-stable Radon random variables with values in Banach spaces E and F respectively, and let m_U and m_V be the respective symmetric spectral measures on the unit spheres of E and F. Then there exists a p-stable Radon random variable W with values in E ⊗ F with spectral measure m_U ⊗ m_V. Moreover, for some constant K_p depending on p only,

    ‖W‖_{p,∞} ≤ K_p ( σ_p(U) ‖V‖_{p,∞} + σ_p(V) ‖U‖_{p,∞} ).

Proof. The idea is again to use Gaussian randomization and to benefit conditionally from the Gaussian comparison theorems. Let (Y_j) (resp. (Z_j)) be independent random variables with values in E (resp. F) distributed like m_U/|m_U| (resp. m_V/|m_V|). From Corollary 5.5,

    U′ = Σ_{j=1}^∞ Γ_j^{−1/p} g_j Y_j   and   V′ = Σ_{j=1}^∞ Γ_j^{−1/p} g_j Z_j

are almost surely convergent in E and F respectively, where (g_j) is an orthogaussian sequence and, as usual, the sequences (Γ_j), (g_j), (Y_j) and (Z_j) are independent. Our aim is to show that

    W′ = Σ_{j=1}^∞ Γ_j^{−1/p} g_j Y_j ⊗ Z_j

is almost surely convergent in E ⊗ F and satisfies

(5.22)    ‖W′‖_{p,∞} ≤ K_p ( ‖U′‖_{p,∞} + ‖V′‖_{p,∞} ).

Since σ_p(U) = |m_U|^{1/p}, σ_p(V) = |m_V|^{1/p} and σ_p(W) = |m_U|^{1/p}|m_V|^{1/p}, W′ induces a p-stable Radon random variable W with values in E ⊗ F and with spectral measure m_U ⊗ m_V; homogeneity and the normalization in Corollary 5.5 then easily lead to the conclusion. We establish inequality (5.22) for sums U′, V′, W′ as before but only for finitely many terms in the summations, simply indicated by Σ_j; convergence will then follow by a simple limiting argument from (5.22). Let f, f′ be in the unit ball of E′ and h, h′ in the unit ball of F′. Since ‖Y_j‖ = ‖Z_j‖ = 1 for every j, with probability one,

    | f(Y_j)h(Z_j) − f′(Y_j)h′(Z_j) | ≤ | f(Y_j) − f′(Y_j) | + | h(Z_j) − h′(Z_j) |.

Let (g_j′) be another orthogaussian sequence independent from (g_j) and denote by IE_g conditional integration with respect to those sequences. By the preceding inequality,

    ( IE_g | Σ_j Γ_j^{−1/p} g_j [ f⊗h(Y_j ⊗ Z_j) − f′⊗h′(Y_j ⊗ Z_j) ] |² )^{1/2}
        ≤ ( 2 IE_g | Σ_j Γ_j^{−1/p} g_j ( f(Y_j) − f′(Y_j) ) + Σ_j Γ_j^{−1/p} g_j′ ( h(Z_j) − h′(Z_j) ) |² )^{1/2}.

It therefore follows from the Gaussian comparison theorems, in the form for example of Corollary 3.14 (or Theorem 3.15), that, almost surely in (Γ_j), (Y_j), (Z_j),

    IE_g ‖ Σ_j Γ_j^{−1/p} g_j Y_j ⊗ Z_j ‖ ≤ √2 ( 2 IE_g ‖ Σ_j Γ_j^{−1/p} g_j Y_j ‖ + 2 IE_g ‖ Σ_j Γ_j^{−1/p} g_j′ Z_j ‖ ).

Integrating, and using the moment equivalences of both Gaussian and stable random vectors, concludes in this way the proof of Theorem 5.11.

We mention, to conclude, some comments and open questions on Theorem 5.11. One may introduce weak moments similar to the Gaussian case,

    σ_p(U) = sup_{‖f‖≤1} ( ∫ |f(x)|^p dm_U(x) )^{1/p},

and similarly for V. A natural lower bound for ‖W‖_{p,∞} would then be, besides σ_p(U)σ_p(V), the quantity σ_p(U)‖V‖_{p,∞} + σ_p(V)‖U‖_{p,∞}. While ‖W‖_{p,∞} always dominates σ_p(U)σ_p(V), it is not true in general that the inequality of Theorem 5.11 can be reversed. The works [G-M-Z] and [M-T] have actually shown a variety of examples with different sizes of ‖W‖_{p,∞}; neither this lower bound nor the upper bound of the theorem seems to provide the exact weight of ‖W‖_{p,∞}, and the definitive result, if any, should be in between. This question is still under study.

Notes and references

As announced, our exposition of stable distributions and random variables in infinite dimensional spaces is quite restricted. More general expositions based on infinitely divisible distributions, Lévy measures, Lévy-Khintchine representations and related central limit theorems for triangular arrays may be found in the treatises [Ar-G2] and [Li], to which we actually also refer for more accurate references and historical background. The survey article [Wer] presents a sample of the topics studied in the rather extensive literature on stable distributions. The theory of stable laws was constructed by P. Lévy [Le1]; the few facts presented here as an introduction may be found in the classical books on Probability theory, like [Fe1], etc. The various introductory comments collect informations taken from [E-F].

Theorem 5.1 is taken from [Pi16] (see also [M-P2]). Proposition 5.2 and the existence of spectral measures are due to P. Lévy [Le1] (who actually dealt with the Euclidean sphere); our exposition follows [B-DC-K]. Proposition 5.4 and the uniqueness of the symmetric spectral measure concentrated on the unit sphere follow from the more general results on uniqueness of Lévy measures for Banach space valued random variables (cf. [Ar-G2], [Li] for the details). The study of stable measures on infinite dimensional spaces started with [Ja1] and [Kue1] in Hilbert space and was then extended to more general spaces by many authors (cf. [Ar-G2], [Li], [Ma3]). The representation of p-stable random variables, 0 < p < 2, goes back to the work of P. Lévy and was revived recently by R. LePage, M. Woodroofe and J. Zinn [LP-W-Z]; see [LP] for the history of this representation, and [Ro3] for a recent and new representation. Our exposition follows the work by M. B. Marcus and G. Pisier [M-P2]. Proposition 5.6 is due to A. de Acosta [A 2]. Lemma 5.7 is taken from [G-M-Z]. Theorem 5.8 is due to M. B. Marcus and G. Pisier [M-P2], with a simplified proof by J. Zinn (cf. [M-Zi], [Pi16]); a prior result for series Σ_i θ_i x_i was established by J. Hoffmann-Jørgensen [HJ1] (see also [HJ3] and Chapter 6). (5.9) was noticed in [Pi12]. More on L_{p,∞}-spaces (and interpolation spaces L_{p,q}) may be found e.g. in [S-W].

Comparison theorems for stable random variables intrigued many people, and it was probably known for a long time that Slepian's lemma does not extend to p-stable variables with 0 < p < 2 (cf. the paper [M-Z]). A. de Acosta [A 3] also established the limit (5.11) (by a different method however) while the full conclusion (5.10) was proved by A. Araujo and E. Giné [Ar-G1]. See also the work [A-A-G]. The equivalences (5.16)-(5.19) were described by L. Schwartz [Schw1]. Our exposition of Sudakov's minoration for p-stable processes follows the work by M. B. Marcus and G. Pisier [M-P2]; Theorem 5.10 is theirs. Theorem 5.11 on tensor products of stable distributions was established by E. Giné, M. B. Marcus and J. Zinn [G-M-Z], and further investigated in [M-T].

Chapter 6. Sums of independent random variables

6.1 Symmetrization and some inequalities on sums of independent random variables
6.2 Integrability of sums of independent random variables
6.3 Concentration and tail behavior
Notes and references

Sums of independent random variables already appeared in the preceding chapters in concrete situations (Gaussian and Rademacher averages, representation of stable random variables). On the intuitive basis of the central limit theorems which approximate normalized sums of independent random variables by smooth limiting distributions (Gaussian, stable), one would expect that results similar to those presented previously should hold in one sense or another for sums of independent random variables. The results presented in this chapter go in this direction, and the reader will recognize in this general setting the topics covered before: integrability properties, concentration, moment equivalences, tail behavior, etc. We will mainly describe ideas and techniques which go from simple but powerful observations, like symmetrization (randomization) techniques, to more elaborate results like those obtained from the isoperimetric inequality for product measures of Theorem 1.4. Many results presented in this chapter will be of basic use in the study of limit theorems later. Section 6.1 is concerned with symmetrization; Section 6.2 continues with Hoffmann-Jørgensen's inequalities and moment equivalences of sums of independent random variables; in the last and main section, martingale and isoperimetric methods are developed in this context.

Let us emphasize that the infinite dimensional setting is characterized by the lack of the orthogonality property

    IE| Σ_i X_i |² = Σ_i IE|X_i|²

where (X_i) is a finite sequence of independent mean zero real random variables. This type of identity or equivalence extends to finite dimensional random vectors and even to Hilbert space valued random variables, but does not in general for arbitrary Banach space valued random variables (cf. Chapter 9). With respect to the classical theory, which is developed under this orthogonality property, the study of sums of independent Banach space valued random variables undertaken here requires in particular to circumvent this difficulty. Besides the difficult control in probability (that will be discussed further in this book), the various tools introduced in this chapter allow a more than satisfactory extension of the classical theory of sums of independent random variables. Actually, many ideas clarify the real case (for example, the systematic use of symmetrization-randomization) and, as for the isoperimetric approach to exponential inequalities (Section 6.3), go beyond the known results.

Since we need not be concerned here with tightness properties, we present the various results in the setting introduced in Chapter 2 and already used in the previous chapters. That is, let B be a Banach space such that there exists a countable subset D of the unit ball of the dual space such that ‖x‖ = sup_{f∈D} |f(x)| for all x in B. We say that a map X from some probability space (Ω, A, IP) into B is a random variable

if f(X) is measurable for each f in D. We recall that this definition covers the case of Radon random variables, or equivalently Borel random variables taking their values in a separable Banach space.

6.1 Symmetrization and some inequalities on sums of independent random variables

One simple but basic idea in the study of sums of independent random variables is the concept of symmetrization. If X is a random variable, one can construct a symmetric random variable which is "near" X by looking at X̃ = X − X′ where X′ denotes an independent copy of X (constructed on some different probability space (Ω′, A′, IP′)). The distributions of X and X − X′ are indeed closely related. For example, for any t, a > 0,

(6.1)    IP{‖X‖ ≤ a} IP{‖X‖ > t + a} ≤ IP{‖X − X′‖ > t}.

This is of particular interest when for example a is chosen such that IP{‖X‖ ≤ a} ≥ 1/2, in which case it follows that

    IP{‖X‖ > t + a} ≤ 2 IP{‖X − X′‖ > t}.

It also follows in particular that IE‖X‖^p < ∞ (0 < p < ∞) if and only if IE‖X − X′‖^p < ∞. Actually (6.1) is somewhat too crude in various applications and it is worthwhile mentioning here the following improvements: for t, a > 0,

(6.2)    inf_{f∈D} IP{|f(X)| ≤ a} IP{‖X‖ > t + a} ≤ IP{‖X − X′‖ > t}.

For the proof, let ω be such that ‖X(ω)‖ > t + a; then, for some h in D, |h(X(ω))| > t + a, so that ‖X(ω) − X′‖ ≥ |h(X(ω))| − |h(X′)| > t whenever |h(X′)| ≤ a. Hence

    inf_{f∈D} IP′{|f(X′)| ≤ a} ≤ IP′{‖X(ω) − X′‖ > t}.

Integrating with respect to ω then yields (6.2). Similarly, one can show that for t, a > 0,

(6.3)    IP{‖X‖ > t + a} ≤ IP{‖X − X′‖ > t} + sup_{f∈D} IP{|f(X)| > a}.

When dealing with a sequence (X_i)_{i∈IN} of independent random variables, we construct an associated sequence of independent and symmetric random variables by setting, for each i, X̃_i = X_i − X_i′ where (X_i′) is an independent copy of the sequence (X_i). Recall that (X̃_i) is then a symmetric sequence in the sense that it has the same distribution as (ε_i X̃_i), where (ε_i) is a Rademacher sequence which is independent of (X_i) and (X_i′).
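Inequality (6.1) lends itself to a quick simulation check. The sketch below is illustrative only (a real-valued exponential variable, so the norm is the absolute value; parameters are arbitrary choices, not from the text):

```python
import random

# Numerical sanity check of the symmetrization inequality (6.1):
#   P{|X| <= a} P{|X| > t+a} <= P{|X - X'| > t}
rng = random.Random(4)
n = 100_000
xs = [rng.expovariate(1.0) for _ in range(n)]
xps = [rng.expovariate(1.0) for _ in range(n)]   # independent copy X'
t, a = 2.0, 1.0
lhs = (sum(x <= a for x in xs) / n) * (sum(x > t + a for x in xs) / n)
rhs = sum(abs(x - xp) > t for x, xp in zip(xs, xps)) / n
print(lhs, rhs)
```

Here X − X′ is a symmetric (Laplace) variable, and the empirical left side stays well below the right side.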

161 that it has the same distribution as (i Xei ) where (i ) is a Radema her sequen e independent of (Xi ) and (Xi0 ) . Sin e weak onvergen e is involved in this statement we restri t for simpli ity to the ase of Radon random variables. (Sn0 ) onverges weakly i=1 to S 0 whi h has the same distribution as S . A0 . IP (resp.4) where symmetrization proves its eÆ ien y. The following are equivalent: i=1 (i) the sequen e (Sn ) onverges almost surely. we an randomize by independent hoi es of signs symmetri sequen es (Xi ) . (iii) (Sn ) onverges weakly. (Xi ) ). Suppose that (Sn ) onverges weakly to some random variable S . Set Sen = Sn Sn0 . n 1 . Se = S S 0 de.1. On some dierent probability n P spa e ( 0 . Theorem 2. Let us start for example with the Levy-It^o-Nisio theorem for independent but not ne essarily symmetri variables ( f. onsider a opy (Xi0 ) of the sequen e (Xi ) and set Sn0 = Xi0. IPX ) partial integration with respe t to (i ) (resp. we denote by IE . A ordingly and following Chapter 2. IP0 ) . (ii) (Sn ) onverges in probability. Set Sn = Xi . The fa t that the symmetri sequen e (Xei ) built over (Xi ) is useful in the study of (Xi ) an be illustrated in dierent ways. That is. Let (Xi ) be a sequen e of independent Borel random variables with values in a separable n P Bana h spa e B . Proof. IEX . The equivalen e between (i) and (ii) however holds in our general setting. Theorem 6.

By dieren e. taking hara teristi fun tionals. Sen ! Se almost surely.3) are one of the main ingredients in the proof of the It^o-Nisio's theorem in the symmetri al ase. it follows from these two observations that (Sn0 (!0 )) is a relatively ompa t sequen e in B . by the result for symmetri sequen es (Theorem 2.4). Further. there exists by Fubini's theorem !0 in 0 su h that Sn Sn0 (!0 ) ! S S 0 (!0 ) almost surely: On the other hand Sn ! S weakly.ned on 0 . exp(if (Sn0 (!0 ))) ! exp(if (S 0(!0 )): Hen e f (Sn0 (!0 )) ! f (S 0 (!0 )) and thus Sn0 (!) onverges in B to S 0 (!0 ) . While Levy's inequalities (Proposition 2. In parti ular. one an a tually also prove dire tly the pre eding statement using instead . for every f in B 0 . Therefore Sn ! S almost surely and the theorem is proved. Sin e Sen ! Se weakly.

Re all that X is entered if IEf (X ) = 0 for all f in D . Let = inf fk N .3. Hen e. k N k=1 then kSN k > t . Lemma 6. Let (Xi )iN be independent random variables in B and set Sk = Xi . As always. kSk k > s + tg ( +1 if no su h k exists).2. IPfkSN k > tg : IPf max kSk k > s + tg kN 1 max IPfkSN Sk k > sg kN Proof. : : : . k P Lemma 6. as usual. kSN Sk k sg kinf IPfkSN Sk k sg N N X k=1 IPf = kg whi h is the result. i=1 for every s. ("i ) is a Radema her sequen e independent of (Xi ) . N X IPfkSN k > tg = k=1 N X k=1 IPf = k. Let F : IR+ ! IR+ be onvex. Xk and IPf = kg = IPf max kSk k > s + tg . kSN k > tg IPf = k.162 a similar inequality known as Ottaviani's inequality. Its proof follows the pattern of the proof of Levy's inequalities. k N . f = kg only N P depends on X1 . by independen e. Then for any . The symmetrization pro edure is illustrated further in the next trivial lemma. When = k and kSN Sk k s . Then. t > 0 . Then.

nite sequen e (Xi ) of independent mean zero random variables in B su h that IEF (kXi k) < 1 for all i . ! 1 X IEF "i Xi 2 i IEF ! X Xi i IEF ! X 2 "i Xi : i Proof. we have IEF ! X Xi i IEF Conversely. Jensen's inequality and zero mean ( f. (2. and then by onvexity. Then. ! 1 X "i Xi IEF 2 i ! X Xei i = IEF ! ! X "i Xei i IEF ! X X IEF 12 "i Xei = IEF 21 Xei i i ! X 2 "i Xi : IEF i ! X Xi i : . by Fubini. Re all Xei = Xi Xi0 and let ("i ) be a Radema her sequen e independent from (Xi ) and (Xi0 ) . by the same arguments.5)).

163 The lemma is proved. we have similarly that IEF and also IEF . Note that when the variables Xi are not entered.

.

X sup .

.

f (Xi ) f 2D .

.

! .

IEf (Xi ).

.

.

i .

.

X sup .

.

"i (f (Xi ) f 2D .

i IEF .

! .

IEf (Xi )).

.

.

Proposition 6. IEF ! X 2 "i Xi i ! X 2 Xi i : Symmetrization thus indi ates how results on symmetri random variables an be transferred to general ones. Kanter [Kan℄.4. We leave it if ne essary to the interested reader to extend them to the ase of general (or mean zero) independent random variables by the te hniques just presented. Before turning to the main obje t of this hapter. It is due to M. we would like to brie y mention in passing a on entration inequality often useful when for example Levy's inequalities do not readily apply. we therefore mainly on entrate only on symmetri al distributions for whi h results are usually learer and easier to state. In the sequel of this hapter. Let (Xi ) be a .

Let (Xi ) be a . Then. for any x in B and t > 0 .5. The se ond one will prove extremely useful in the sequel. the ontra tion and omparison properties des ribed for Radema her averages in Chapter 4 an be extended to this more general setting. ( X IP Xi i x ) t ! X 3 1 + IPfkXi k > tg 2 i 1 2 : Sin e symmetri sequen es of random variables an be randomized by an independent Radema her sequen e. Lemma 6.nite sequen e of independent symmetri random variables with values in B . The next two lemmas are easy instan es of this pro edure.

if ji j ji j almost surely for all i . and similarly for i . Then. Let further (i ) and (i ) be a real random variables su h that i = 'i (Xi ) where 'i : IR ! IR is symmetri (even).nite symmetri sequen e of random variables with values in B . IEF (k X i i Xi k) IEF (k X i i Xi k): . for any onvex fun tion F : IR+ ! IR+ (and under appropriate integrability).

By the symmetry assumption on the 'i 's and Fubini's theorem X X IEF (k i Xi k) = IEX IE F (k "i i Xi k): i i By the ontra tion prin iple (Theorem 4. for every t > 0 . Proof. IPfk X i Xi k > tg 2IPfk i X i i Xi k > tg: These inequalities in parti ular apply when i = IfXi 2Ai g 1 i where the sets Ai are symmetri in B (in parti ular Ai = fkxk ai g ).164 We also have that.4) IE" F (k X i "i i Xi k) leIE" F (k X i "i i Xi k) from whi h the . The sequen e (Xi ) has the same distribution as ("i Xi ) .

Let (Xi ) be arbitrary random variables in B . The se ond is established similarly using (4. Let F : IR+ ! IR+ be onvex and in reasing.rst inequality of the lemma follows.7). .6. Then. Lemma 6. if IEF (kXi k) < 1 .

.

! .

X .

1 sup .

.

"i jf (Xi )j .

.

IEF .

2 f 2D .

i (6:4) IEF (k X i "i Xi k): When the Xi 's are independent and symmetri in L2 (B ) . (6.5). we also have that X (6:5) IE sup ( f 2D i ! f 2 (Xi )) sup X f 2D i X ) + 8IE Xi i IEf 2 (X i Xi : k k Proof.4) is simply Theorem 4.12 applied onditionally. In order to establish (6. write IE sup f 2D X i Lemma 6.3 shows that IE !! f 2 (Xi ) sup f 2D i .

.

X sup .

.

f 2 (Xi ) f 2D .

i and. IE X IEf 2 (Xi ) + IE . by (4.19).

! .

IEf 2 (Xi ).

.

.

.

.

! .

X .

.

sup .

"i f 2 (Xi ).

.

.

f 2D .

i .

.

X sup .

.

f 2 (Xi ) f 2D .

i .

! .

IEf 2 (Xi ).

.

: .

.

.

! .

X .

.

sup .

"i f 2 (Xi ).

.

.

f 2D .

2IE i X 4IE "i Xi i Xi : k k .

t > 0 . An important result is a set of inequalities due to J. By de. Let (Xi )iN be independent random variables with values in B . for s. IPfmax kSk k > 3t + sg (IPfmax kSk k > tg)2 + IPfmax kXi k > sg: iN k N k N (6:6) If the variables are symmetri . Set Sk = Xi . whi h are the most powerful ones.7.2 Integrability of sums of independent random variables Integrability of sums of independent ve tor valued random variables are based on various inequalities.165 The lemma is thus established. k P Proposition 6. IPfkSN k > 2t + sg 4(IPfkSN k > tg)2 + IPfmax kXi k > sg: iN (6:7) Proof. kSj k > tg . t > 0 . While isoperimetri methods. we present here some more lassi al and easier ideas. For every s. Some of its various onsequen es are presented in the subsequent theorems. Homann-Jrgensen whi h is the ontent of the next statement. i=1 k N . 6. Let = inf fj N . will be des ribed in the next se tion.

by independen e.7). : : : . N yields (6. Xj and fmax kSk k > tg = f = j g (disjoint union). for every j = 1. kSk k t if k < j and when kN j =1 kj kSk k t + kXj k + kSk Sj k so that in any ase max kSk k t + max kXi k + max kSk iN k N j<kN Sj k: Hen e. IPf = j. a summation over j = 1. N .6). . max kXi k > sg + IPf = j gIPfj<k max kSk iN N Sj k > 2tg: Sin e max kSk Sj k 2 max kSk k . max kSk k > 3t + sg kN IPf = j. kSN k kSj 1 k + kXj k + kSN Sj k. : : : . : : : . j<kN kN Con erning (6. f = j g only depends on the random variables N P X1 .nition. On f = j g .

kSN k > 2t + sg IPf = j. max kXi k > sg + IPf = j gIPfkSN Sj k > tg: iN Making use of Levy's inequality (2. The pre eding inequalities are mainly used with s = t . As a . Their main interest and usefulness stem from the squared probability whi h make them lose in a sense to exponential inequalities (see below).7).6) for symmetri variables and summing over j yields then (6. The proposition is proved.166 so that IPf = j.

9). the next proposition is still a te hni al step before the integrability statements. We only show (6. Namely. Proposition 6. provided the same holds for the maximum of the individual summands.8. for t0 = inf ft > 0. if the sums are ontrolled in probability (L0 ) they are also ontrolled in Lp .6). Let u > t0 . Then. k N . the proof of (6. then IEkSN kp 2 3p IE max kXi kp + 2(3t0)p : iN Proof. Set k P Sk = Xi . Radema her and stable ve tor valued random variables. It however already expresses in this general ontext of sums of independent random variables a property similar to the one presented in the pre eding hapters on Gaussian. p > 0 .7). Applied to Gaussian. k N i=1 (6:8) IE max kSk kp 2:4pIE max kXi kp + 2(4t0 )p : iN kN If the Xi 's are moreover symmetri . Let 0 < p < 1 and let (Xi )iN be independent random variables in Lp(B ) . and t0 = inf ft > 0. IPfkSN k > tg (8:3p ) (6:9) 1g . the next proposition a tually gives rise to new proofs of the moment equivalen es for these parti ular sums of independent random variables.8) being similar using (6.rst onsequen e of the pre eding inequalities. Radema her or stable averages. By integration by parts and (6. Z 1 p p IEkSN k = 3 IPfkSN k > 3tgdtp 0 Z u Z 1 p =3 + IPfkSN k > 3tgdtp 0 uZ Z 1 1 2 p p p p IPfmax kXi k > tgdtp (IPfkSN k > tg) dt + 3 (3u) + 4 3 iN u u Z 1 (3u)p + 4 3p IPfkSN k > ug IPfkSN k > tgdtp + 3p IE max kXi kp i N 0 2(3u)p + 2 3pIE max k Xi kp iN . IPfmax kSk k > tg (2 4p ) 1 g .

Let p > 0 and let (Zi ) be a . Sin e this holds for arbitrary u > t0 the proposition is established. Before introdu ing this result we need a simple lemma on moments of maximum of independent random variables whi h is of independent interest. It is a tually possible to obtain a true equivalen e of moments for sums of independent symmetri random variables.167 sin e 4 3p IPfkSN k > ug 1=2 by the hoi e of u . Lemma 6.9. The formulation is however somewhat te hni al sin e it involves trun ation.

Turning to the left side we use Lemma 2.6: the de. The right hand side is trivial (and a tually holds for any Æ0 > 0 ). Use integration by parts. IPfZi > tg g . Then i X 1 Z IPfZi > tgdtp Æ0 XZ 1 p p IE max Zi Æ0 + IPfZi > tgdtp : (1 + ) 1 Æ0p + (1 + ) 1 lim itsi i Æ0 i Proof. Given P > 0 . let Æ0 = inf ft > 0.nite sequen e of positive random variables in Lp .

10. Let 0 < p.9 then learly follows. q < 1 . Let (Xi ) be a . Proposition 6. The announ ed equivalen e of moments is as follows.nition of Æ0 then indi ates that IPfmax Zi > tg i ( 1 P IPfZ i i (1 + ) 1 (1 + ) > tg if t > Æ0 if t Æ0 : Lemma 6.

1g X i Xi IfkXi kÆ0 g kq 1 and where the sign aKg p. Then. By the triangle inequality IEk X i Xi kp 2p IEk X i Xi IfkXi kÆ0 g kp + 2p IEk X i Xi IfkXi k>Æ0 g kp : .q b means that Kp. IPfkXik > tg (8:3p ) i Kp.q k max i P where Æ0 = inf ft > 0. q only.q depending on p.q b a Proof. k X i Xi kp Kg kXi k kp + k p. for some onstant Kp.nite sequen e of independent and symmetri random variables in Lp (B ) .q b .

by de.168 If we apply (6.9) to the se ond term of the right side of this inequality we see that.

nition of Æ0 . we an take t0 = 0 there so that IEk X i Xi IfkXi k>Æ0 g kp 2 3p IE max kXi kp : i Turning to the .

8.rst term and applying again Proposition 6. we an take t0 = (8 3p IEk X i Xi IfkXi kÆ0 g kq )1=q : The .

9 the fa t that IE max kXi kp (1 + ) 1 Æ0p i with = (8:3p ) 1. Sn = Xi . To prove the reverse inequality note that. by (6. if sup kSn k < 1 almost surely.rst half of the proposition follows.11. if (Sn ) onverges almost surely. X i Xi IfkXi kÆ0 g kq 2 3q Æ0q + 2:(3t0 )q t0 = (8 3q IEk X i Xi kp )1=p : If we then draw from Lemma 6. Set.9) again. we have equivalen e n i=1 between: (i) IE sup kSnkp < 1 . Let (Xi )i2IN be a sequen e of independent random variables with values in B . the proof will be omplete sin e we know from Levy's inequality (2.7) that IE max kXi kp 2IEk i X i Xi kp : We now summarize in various integrability theorems for sums of independent ve tor valued random variables the pre eding powerful inequalities and arguments. n (ii) IE sup kXnkp < 1 . IEk where we an hoose as t0 . Then. (i) and (ii) are also equivalent to (iii) IEk P i Xi kp < 1 . Theorem 6. n Further. as n P usual. n 1 . Let also 0 < p < 1 .

169 and in this ase (Sn ) onverges also in Lp . Let N be . Proof. That (i) implies (ii) is obvious.

By Proposition 6.8, t₀ being fixed as defined there,

IE max_{n≤N} ‖Sₙ‖^p ≤ 2·4^p IE max_{i≤N} ‖X_i‖^p + 2(4t₀)^p .

Since IP{ sup_n ‖Sₙ‖ < ∞ } = 1, there is M > 0 such that t₀ ≤ M independently of N. Letting N tend to infinity shows (ii) ⇒ (i); (i) ⇒ (ii) holds in general by an easy symmetrization argument based on (6.1). The assertion relative to (iii) follows from Lévy's inequalities for symmetric random variables.

Corollary 6.12. Let (X_i) be independent random variables with values in B and set, as usual, Sₙ = Σ_{i=1}^n X_i. Let (aₙ) be an increasing sequence of positive numbers tending to infinity. If sup_n ‖Sₙ‖/aₙ < ∞ almost surely, then, for any 0 < p < ∞, the following are equivalent:

(i) IE sup_n ( ‖Sₙ‖/aₙ )^p < ∞ ;
(ii) IE sup_n ( ‖Xₙ‖/aₙ )^p < ∞ .

Proof. We define a new sequence (Y_i) of independent random variables with values in the Banach space ℓ_∞(B) of all bounded sequences x = (xₙ) with sup-norm ‖x‖ = sup_n ‖xₙ‖ by setting

Y_i = ( 0, …, 0, X_i/a_i, X_i/a_{i+1}, X_i/a_{i+2}, … )

where there are i − 1 zeroes to start with. Clearly ‖Y_i‖ = ‖X_i‖/a_i for all i, and

sup_n ‖ Σ_{i=1}^n Y_i ‖ = sup_n ‖Sₙ‖/aₙ .

Apply then Theorem 6.11 to the sequence (Y_i) in ℓ_∞(B).

Remark 6.13. It is amusing to note that Hoffmann-Jørgensen's inequalities and Theorem 6.11 describe independence well enough to contain the Borel-Cantelli lemma! Indeed, if (A_i) is a sequence of independent sets, we get from Theorem 6.11 that if Σ_i I_{A_i} converges almost surely, then IE( Σ_i I_{A_i} ) = Σ_i IP(A_i) < ∞; this corresponds to the independent portion of the Borel-Cantelli lemma.

Provided with the preceding material, let us now consider an almost surely convergent series S = Σ_i X_i of independent symmetric, or only mean zero uniformly bounded, random variables with values in B, and let us try to investigate the integrability properties of ‖S‖. Assume more precisely that the X_i's are symmetric and that ‖X_i‖_∞ ≤ a < ∞ for all i. For each N, set S_N = Σ_{i=1}^N X_i. By (6.7), for every t > 0,

IP{ ‖S_N‖ > 2t + a } ≤ ( 2 IP{ ‖S_N‖ > t } )² .

Let t₀, to be specified in a moment, and define the sequence tₙ = 2ⁿ(t₀ + a) − a, n ≥ 1. Then, for every n,

IP{ ‖S_N‖ > tₙ } ≤ ( 2 IP{ ‖S_N‖ > t_{n−1} } )² ,

and, iterating, for every n,

IP{ ‖S_N‖ > tₙ } ≤ 2^{2^{n+1}−2} ( IP{ ‖S_N‖ > t₀ } )^{2ⁿ} .

Since S = Σ_i X_i is almost surely convergent, by convergence there exists t₀ such that, for every N, IP{ ‖S_N‖ > t₀ } ≤ 1/8, and thus, for every n,

IP{ ‖S_N‖ > 2ⁿ(t₀ + a) } ≤ 2^{−2ⁿ} .

It easily follows that, for some λ > 0, sup_N IE exp λ‖S_N‖ < ∞, and, by Fatou's lemma, that IE exp λ‖S‖ < ∞. The preceding inequality indicates that this can easily be improved into the same property for all λ > 0. Note actually that sup_N IE exp λ‖S_N‖ < ∞ as soon as the sequence (S_N) is stochastically bounded.

The preceding iteration procedure may be compared to the proof of (3.5) in the Gaussian case. To complement it, let us present a somewhat neater argument, if only a small variation, for the same result. It actually yields an inequality in the form of Kolmogorov's converse inequality which, applied to power functions, yields an alternate proof of Proposition 6.8 (cf. Remark 6.15 below) and completes our familiarity with the technique.

Proposition 6.14. Let (X_i)_{i≤N} be independent and symmetric random variables with values in B. Assume that ‖X_i‖_∞ ≤ a for all i ≤ N. Then, for every λ, t > 0,

IE exp λ‖S_N‖ ≤ exp(λt) + 2 exp( λ(t + a) ) IP{ ‖S_N‖ > t } IE exp λ‖S_N‖ .

Proof. Let, as usual, τ = inf{ k ≤ N ; ‖S_k‖ > t } where S_k = Σ_{i=1}^k X_i, k ≤ N. We can write

IE exp λ‖S_N‖ ≤ exp(λt) + Σ_{k=1}^N ∫_{{τ=k}} exp λ‖S_N‖ dIP .

On the set {τ = k}, ‖S_N‖ ≤ ‖S_{k−1}‖ + ‖X_k‖ + ‖S_N − S_k‖ ≤ t + a + ‖S_N − S_k‖, so that, by independence,

∫_{{τ=k}} exp λ‖S_N‖ dIP ≤ exp( λ(t + a) ) IP{τ = k} IE exp λ‖S_N − S_k‖ .

By Jensen's inequality and the mean zero property, IE exp λ‖S_N − S_k‖ ≤ IE exp λ‖S_N‖, and, summing over k,

Σ_{k=1}^N IP{τ = k} = IP{ max_{k≤N} ‖S_k‖ > t } ≤ 2 IP{ ‖S_N‖ > t }

where we have used Lévy's inequality (2.6). The proof is complete.

Indeed, if we choose t in Proposition 6.14 satisfying IP{ ‖S_N‖ > t } ≤ (2e)^{−1} for all N, and let λ = (t + a)^{−1}, we simply get that

sup_N IE exp λ‖S_N‖ ≤ 2 exp(λt) < ∞ ,

so that we recover the fact that IE exp λ‖S‖ < ∞ for some (actually all) λ > 0. This exponential integrability result is not quite satisfactory since it is known that, for real random variables, under the assumption of the last proposition, IE exp λ|S| log⁺|S| < ∞ for some λ > 0. This result on the line is one instance of the Poisson behavior of general sums of independent (bounded) random variables as opposed to the normal behavior of more specialized ones, like Rademacher averages. It originates in the sharp quadratic real exponential inequalities (see for example (6.10) below). These real results can however be extended to the vector valued case; we use to this aim isoperimetric methods to obtain sharp exponential estimates for sums of independent random variables. (Note that there is an easy analog when the variables are only centered.)

Remark 6.15. As announced, the proof of Proposition 6.14 applied to power functions yields an alternate proof of the inequalities of Proposition 6.8. It actually yields an inequality in the form of Kolmogorov's converse inequality: as a consequence of Proposition 6.14, for all t > 0 and every 1 ≤ p < ∞,

IE ‖S_N‖^p ≤ [ 1 − 2·3^p IP{ ‖S_N‖ > t } ]^{−1} 2^{2p} ( t^p + IE max_{i≤N} ‖X_i‖^p )

whenever 2·3^p IP{ ‖S_N‖ > t } < 1.
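The doubling iteration above is elementary to check numerically. The following sketch (an added illustration, not part of the original argument) iterates the bound pₙ ≤ (2p_{n−1})² starting from p₀ = 1/8 and verifies the claimed decay 2^{−2ⁿ}:

```python
# Iterate the bound p_n <= (2 * p_{n-1})**2 starting from p_0 = 1/8
# and compare with the claimed doubly exponential decay 2**(-2**n).
def iterate_bound(p0=0.125, steps=5):
    bounds = [p0]
    for _ in range(steps):
        bounds.append((2 * bounds[-1]) ** 2)
    return bounds

for n, p in enumerate(iterate_bound()):
    assert p <= 2.0 ** -(2 ** n)  # p_n <= 2^{-2^n}
```

Replacing p₀ = 1/8 by any smaller value only improves the decay, which is consistent with stochastic boundedness of (S_N) being sufficient.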

6.3. Concentration and tail behavior

This section is mainly devoted to applications of isoperimetric methods (Theorem 1.4) to integrability and tail behavior of sums of independent random variables. One of the objectives will be to try to extend to the infinite dimensional setting the classical (quadratic) exponential inequalities like those of Bernstein, Kolmogorov (Lemma 1.6), Prokhorov, Bennett, Hoeffding, Yurinskii, etc. Of course, the lack of orthogonality forces us to investigate new arguments, like therefore isoperimetry. Before turning to the isoperimetric argument in this study, and for the matter of comparison, we would like to present some results based on martingales. Although these do not seem to be powerful enough in general for the integrability and tail behavior questions we have in mind, they are however rather simple and quite useful in many situations. They also present the advantage of being formulated as concentration inequalities, some of which will be useful in this form in Chapter 9.

To state one, let us consider Bennett's inequality. Let (X_i) be a finite sequence of independent mean zero real random variables such that ‖X_i‖_∞ ≤ a for all i. Then, if b² = Σ_i IEX_i², for all t > 0,

(6.10)  IP{ Σ_i X_i > t } ≤ exp[ t/a − ( t/a + b²/a² ) log( 1 + at/b² ) ] .

This inequality is rather typical of the tail behavior of sums of independent random variables. This behavior is variable depending on the relative sizes of t and the ratio b²/a. Since log(1 + x) ≥ x − x²/2 when 0 ≤ x ≤ 1, (6.10) implies that, if t ≤ b²/a,

IP{ Σ_i X_i > t } ≤ exp( − t²/2b² + at³/2b⁴ )

which is further less than exp(−t²/4b²) if t ≤ b²/2a (for example). On the other hand, for all t > 0,

IP{ Σ_i X_i > t } ≤ exp[ − (t/a)( log( 1 + at/b² ) − 1 ) ]

which is sharp for large values of t (bigger than b²/a). These two inequalities actually describe the classical normal and Poisson type behaviors of sums of independent random variables according to the size of t with respect to b²/a.
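The normal-regime consequence of (6.10) just derived can be verified numerically. The sketch below (an added illustration; the values of a and b² are arbitrary) evaluates the right hand side of (6.10) and checks the two stated majorizations:

```python
import math

# Right hand side of (6.10) and the two normal-regime majorizations derived
# from it; a and b2 below are arbitrary illustrative values.
def bennett_bound(t, a, b2):
    return math.exp(t / a - (t / a + b2 / a**2) * math.log(1 + a * t / b2))

a, b2 = 1.0, 4.0
for t in [0.5, 1.0, 2.0, 3.0, 4.0]:      # t <= b2/a
    normal = math.exp(-t**2 / (2 * b2) + a * t**3 / (2 * b2**2))
    assert bennett_bound(t, a, b2) <= normal + 1e-12
for t in [0.5, 1.0, 2.0]:                # t <= b2/(2a)
    assert bennett_bound(t, a, b2) <= math.exp(-t**2 / (4 * b2)) + 1e-12
```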

The key observation in order to use (real) martingale inequalities in the study of sums of independent vector valued random variables relies on the following simple but extremely useful observation of V. V. Yurinskii. Let (X_i)_{i≤N} be integrable random variables in B. Denote by A_i the σ-algebra generated by the variables X₁, …, X_i, i ≤ N, and by A₀ the trivial algebra. Write as usual S_N = Σ_{i=1}^N X_i and set, for each i ≤ N,

d_i = IE^{A_i} ‖S_N‖ − IE^{A_{i−1}} ‖S_N‖ .

Then (d_i)_{i≤N} defines a real martingale difference sequence (IE^{A_{i−1}} d_i = 0) and

‖S_N‖ − IE‖S_N‖ = Σ_{i=1}^N d_i .

We then have:

Lemma 6.16. Assume the random variables X_i are independent. Then, in the preceding notation, almost surely for every i ≤ N,

|d_i| ≤ ‖X_i‖ + IE‖X_i‖ .

Further, if the X_i's are in L₂(B), we also have

‖ IE^{A_{i−1}} d_i² ‖_∞ ≤ IE‖X_i‖² .

Proof. Independence ensures that d_i = ( IE^{A_i} − IE^{A_{i−1}} )( ‖S_N‖ − ‖S_N − X_i‖ ) and the first inequality of the lemma already follows from the triangle inequality since

|d_i| ≤ ( IE^{A_i} + IE^{A_{i−1}} )( ‖X_i‖ ) = ‖X_i‖ + IE‖X_i‖ .

Since conditional expectation is a projection in L₂, the same argument leads to the second inequality of the lemma.

The philosophy of the preceding observation is that the deviation of the norm of a sum S_N of independent random vectors X_i from its expectation IE‖S_N‖ can be written as a real martingale whose differences d_i are nearly exactly controlled by the norms of the corresponding individual summands X_i of S_N. In a sense therefore, up to IE‖S_N‖, ‖S_N‖ is as good as a real martingale with differences comparable to ‖X_i‖. Typical in this regard is the following quadratic inequality, immediate consequence of Lemma 6.16 and the orthogonality of martingale differences:

(6.11)  IE | ‖S_N‖ − IE‖S_N‖ |² ≤ Σ_{i=1}^N IE‖X_i‖² .
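Since (6.11) holds for any finite collection of independent symmetric vectors, it can be verified exhaustively in a toy case. The following sketch (an added illustration; the three fixed vectors are arbitrary) enumerates all sign choices in (ℝ², sup-norm) and checks the variance bound:

```python
import itertools

# Exhaustive check of (6.11) for three independent symmetric vectors
# X_i = eps_i * x_i in (R^2, sup-norm), eps_i = +/-1 with probability 1/2.
xs = [(1.0, 0.5), (0.3, 2.0), (1.5, 1.0)]      # arbitrary fixed vectors

def sup_norm(v):
    return max(abs(c) for c in v)

outcomes = list(itertools.product([-1, 1], repeat=len(xs)))
norms = [sup_norm(tuple(sum(e * x[j] for e, x in zip(eps, xs))
                        for j in range(2)))
         for eps in outcomes]
mean = sum(norms) / len(norms)
variance = sum((n - mean) ** 2 for n in norms) / len(norms)
bound = sum(sup_norm(x) ** 2 for x in xs)      # sum of E||X_i||^2 = ||x_i||^2
assert variance <= bound
```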

Of course, on the line or even in Hilbert space, if the variables are centered, this inequality (actually an equality) holds true without the centering factor IE‖S_N‖, by orthogonality. This remark is one rather important feature of the study of sums of independent Banach space valued random variables (we will find it also in the isoperimetric approach developed later). It shows how, when a control in expectation (or only in probability, by Proposition 6.8) of a sum of independent random variables is given, then one can expect, using some of the classical real arguments, an almost sure control. This was already the line of the integrability theorems discussed in Section 6.2, and those of this section will be similar. In the next two chapters on strong limit theorems for sums of independent random variables, this will lead to various equivalences between almost sure and in probability limiting properties under necessary (and classical) moment assumptions on the individual summands. As in the Gaussian, Rademacher and stable cases, there is then to know how to control in probability (or weakly) sums of independent random variables. On the line or in finite dimensional spaces, this is easily accomplished by orthogonality and moment conditions. This is much more difficult in the infinite dimensional setting and may be considered a main problem of the theory. It will be studied in some instances in the second part of this work, in Chapters 9, 10 and 14 in particular.

From Lemma 6.16, the various martingale inequalities of Chapter 1, Section 1.3, can be applied to ‖S_N‖ − IE‖S_N‖, yielding concentration properties for norms of sums S_N = Σ_{i=1}^N X_i of independent random variables around their expectation, in terms of the size of the summands X_i. Let us record some of them at this stage. Lemma 1.6 (together thus with Lemma 6.16) shows that if a = max_{i≤N} ‖X_i‖_∞ and b ≥ ( Σ_{i=1}^N IE‖X_i‖² )^{1/2}, for all t > 0,

(6.12)  IP{ | ‖S_N‖ − IE‖S_N‖ | > t } ≤ 2 exp( − t² / ( 2b² + 2at ) ) .

One can also prove a martingale version of (6.10) which, applied to ‖S_N‖ − IE‖S_N‖, yields

(6.13)  IP{ | ‖S_N‖ − IE‖S_N‖ | > t } ≤ 2 exp[ t/2a − ( t/2a + b²/4a² ) log( 1 + 2at/b² ) ] .

Lemma 1.7 indicates in the same way that if 1 < p < 2, q = p/(p − 1) and a = sup_{i≥1} i^{1/p} ‖X_i‖_∞ is assumed to be finite, then, for all t > 0,

(6.14)  IP{ | ‖S_N‖ − IE‖S_N‖ | > t } ≤ 2 exp( − t^q / C_q a^q )

where C_q > 0 only depends on q. (6.14) is of particular interest when X_i = ε_i x_i, where (ε_i) is a Rademacher sequence and (x_i) a finite sequence in B. We then have,

for all t > 0,

(6.15)  IP{ | ‖ Σ_i ε_i x_i ‖ − IE‖ Σ_i ε_i x_i ‖ | > t } ≤ 2 exp( − t^q / C_q ‖(x_i)‖^q_{p,∞} )

where ‖(x_i)‖_{p,∞} = ‖( ‖x_i‖ )‖_{p,∞}. This inequality may of course be compared to the concentration property of Theorem 4.7 as well as to the real inequalities described in the first section of Chapter 4. Note that we already used (6.14) in the preceding chapter to prove a concentration inequality for stable random variables (Proposition 5.7). The previous two inequalities will be helpful both in this chapter and in Chapter 9 where, in particular, concentration will be required for the construction of ℓⁿ_p-subspaces, 1 < p < 2, of Banach spaces.

We now turn to the main part of this chapter with the applications to sums of independent random variables of the isoperimetric inequality for product measures of Theorem 1.4. This isoperimetric approach, a priori very different from the classical tools (although similarities can and will be detected), seems to subsume in general the usual arguments. The isoperimetric inequality appears as a powerful tool which will be shown to be efficient in many situations. It will allow us in particular to complete the study of integrability properties of sums of independent random variables started in the preceding section, and to investigate almost sure limit theorems in the next chapters.

Let us first briefly recall Theorem 1.4 and (1.13). Let N be an arbitrary but fixed integer and let X = (X_i)_{i≤N} be a sample of independent random variables with values in B. (In this setting, we may simply equip B with the σ-algebra generated by the linear functionals f ∈ D.) For A measurable in the product space B^N = { x = (x_i)_{i≤N} ; x_i ∈ B } and for integers q, k, set

H(A, q, k) = { x ∈ B^N ; ∃ x¹, …, x^q ∈ A , Card{ i ≤ N ; x_i ∉ { x¹_i, …, x^q_i } } ≤ k } .

Then, if IP{X ∈ A} ≥ 1/2 and k ≥ q,

(6.16)  IP*{ X ∈ H(A, q, k) } ≥ 1 − ( K₀/q )^k .

Recall that, for convenience, the numerical constant K₀ is assumed to be an integer. On H(A, q, k), the sample X is controlled by a finite number q of points in A provided k values are neglected. In applications to sums of independent random variables, the k values for which the isoperimetric inequality does not provide any control may be thought of as the largest elements (in norm) of the sample, and the isoperimetric inequality (6.16) precisely estimates, with an exponential decay in k, the probability that this is realized. We will see how, up to the control of the large values, the isoperimetric inequality provides optimal estimates of the tail behavior of sums of independent random variables. This observation is actually the conducting rod to the subsequent developments. As for vector valued Gaussian variables and Rademacher series, several parameters are used to measure the tail of sums of independent random variables. These involve some quantity in the L₀ topology (median or expectation), information on weak moments, and estimates on large values.

Let us now state and prove the tail estimate on sums of independent random variables we draw from the isoperimetric inequality (6.16). It deals with symmetric variables since Rademacher randomization and conditional use of tail estimates for Rademacher averages are an essential complement to the approach. It may be compared to the real inequalities presented at the beginning of the section, although we do not explicit this comparison since the vector valued case induces several complications. Various arguments in both this chapter and the next one however provide the necessary methodology and tools towards this goal.
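The definition of H(A, q, k) is purely combinatorial, and it may help to see it in a toy finite setting. In the sketch below (an added illustration with B = {0, 1}; the points are arbitrary), a sample x belongs to H(A, q, k) when, except on at most k coordinates, every coordinate of x is copied from one of q points of A:

```python
# Membership test for H(A, q, k) in a toy finite setting (B = {0, 1}):
# x is in H(A, q, k) if some q points of A cover all but at most k
# of its coordinates.
def in_H(x, points, k):
    uncovered = sum(1 for i, xi in enumerate(x)
                    if all(y[i] != xi for y in points))
    return uncovered <= k

x  = (1, 0, 1, 1, 0, 0)
y1 = (1, 0, 0, 0, 0, 0)          # matches x on coordinates 0, 1, 4, 5
y2 = (0, 1, 1, 1, 1, 1)          # matches x on coordinates 2, 3
assert in_H(x, [y1, y2], k=0)    # q = 2 points cover every coordinate
assert not in_H(x, [y2], k=3)    # y2 alone leaves 4 coordinates uncovered
```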

If (X_i)_{i≤N} is a finite sequence of random variables, we denote by ( ‖X_i‖* )_{i≤N} the non-increasing rearrangement of ( ‖X_i‖ )_{i≤N}.

Theorem 6.17. Let (X_i)_{i≤N} be independent and symmetric random variables with values in B. Then, for any integers k ≥ q and real numbers s, t > 0,

(6.17)  IP{ ‖ Σ_{i=1}^N X_i ‖ > 8qM + 2s + t } ≤ ( K₀/q )^k + IP{ Σ_{i=1}^k ‖X_i‖* > s } + 2 exp( − t²/128qm² )

where

M = IE ‖ Σ_{i=1}^N u_i ‖ ,  m = IE sup_{f∈D} ( Σ_{i=1}^N f²(u_i) )^{1/2}

and u_i = X_i I_{{‖X_i‖ ≤ s/k}}, i ≤ N.

Before turning to the proof of Theorem 6.17, let us make some comments in order to clarify the statement. First note that M and m, defined from the truncated random variables u_i, are easily majorized (using the contraction principle for M) by the same expressions with X_i instead of u_i (provided X_i ∈ L₂(B)); (6.17) is often good enough for applications in this simpler form. Note also that when we need not be concerned with truncations, for example if we deal with bounded variables, then in (6.17) the parameter 2s in the left hand side can be improved to s. This is completely clear from the proof below and is sometimes useful, as we will see in Theorem 6.19. The reader recognizes in the first two terms on the right of (6.17) the isoperimetric bound (6.16) and the largest values. M corresponds to a control in probability of the sum, m to weak moment estimates. Actually, m can be used in several ways, summarized in the following main three estimates. First, by (4.5) and the contraction principle,

(6.18)  m² ≤ sup_{f∈D} Σ_{i=1}^N IE f²(u_i) + 8Ms/k .

Then,

(6.19)  m² ≤ 2M² ,

while, trivially,

(6.20)  m² ≤ Σ_{i=1}^N IE‖u_i‖² .

(6.18) corresponds basically to the sharpest estimate and will prove efficient in limit theorems for example. The two others, both of which include the real valued situation, are convenient when no weak moment assumptions need to be taken into account.

Proof. We decompose the proof step by step.

1st step. This is an elementary observation on truncation and large values. If (X_i)_{i≤N} are (actually arbitrary) random variables and if s ≥ Σ_{i=1}^k ‖X_i‖*, then

‖ Σ_{i=1}^N X_i ‖ ≤ s + ‖ Σ_{i=1}^N u_i ‖

where we recall that u_i = X_i I_{{‖X_i‖ ≤ s/k}}. Indeed, if J denotes the set of integers i ≤ N such that ‖X_i‖ > s/k, then Card J ≤ k, since if not this would contradict Σ_{i=1}^k ‖X_i‖* ≤ s. Then

‖ Σ_{i=1}^N X_i ‖ ≤ ‖ Σ_{i∈J} X_i ‖ + ‖ Σ_{i∉J} X_i ‖ ≤ Σ_{i=1}^k ‖X_i‖* + ‖ Σ_{i=1}^N u_i ‖

which gives the result. Recall that if no truncations are needed, this step can be omitted.
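The first step is a deterministic statement about numbers, so a scalar sketch may clarify it (an added illustration; the values are arbitrary): if s bounds the sum of the k largest |x_i|, then at most k terms exceed s/k, and the full sum differs from the truncated one by at most s.

```python
# Scalar illustration of the 1st step: if s bounds the sum of the k largest
# |x_i|, at most k terms exceed s/k and |sum x_i| <= s + |sum of truncated|.
xs = [9.0, -4.0, 2.5, -1.5, 0.5, 0.25, -0.25]
k, s = 3, 16.0                   # sum of the 3 largest |x_i| is 15.5 <= s
largest = sorted((abs(x) for x in xs), reverse=True)[:k]
assert sum(largest) <= s

big = [x for x in xs if abs(x) > s / k]        # the neglected large values
assert len(big) <= k
truncated = [x for x in xs if abs(x) <= s / k]
assert abs(sum(xs)) <= s + abs(sum(truncated))
```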

2nd step: application of the isoperimetric inequality. By symmetry and independence, recall that the sample X = (X_i)_{i≤N} has the same distribution as (ε_i X_i)_{i≤N} where (ε_i) is a Rademacher sequence independent of X. Suppose that we are given A in B^N such that IP{X ∈ A} ≥ 1/2. If X ∈ H = H(A, q, k), there exist, by definition, j ≤ k and x¹, …, x^q ∈ A such that {1, …, N} = {i₁, …, i_j} ∪ I where I = ∪_{ℓ=1}^q { i ≤ N ; X_i = x^ℓ_i }. Together with the first step, we can then write, if s ≥ Σ_{i=1}^k ‖X_i‖*,

‖ Σ_{i=1}^N ε_i X_i ‖ ≤ s + ‖ Σ_{i=1}^N ε_i u_i ‖ ≤ s + ‖ Σ_{ℓ=1}^j ε_{i_ℓ} u_{i_ℓ} ‖ + ‖ Σ_{i∈I} ε_i u_i ‖ ≤ 2s + ‖ Σ_{i∈I} ε_i u_i ‖ .

From the isoperimetric inequality (6.16) we then clearly get that, for k ≥ q and t > 0,

(6.21)  IP{ ‖ Σ_{i=1}^N X_i ‖ > 2s + t } ≤ ( K₀/q )^k + IP{ Σ_{i=1}^k ‖X_i‖* > s } + ∫_{{X∈H}} IP_ε{ ‖ Σ_{i∈I} ε_i u_i ‖ > t } dIP_X .

There is here a slight abuse in notation since {X ∈ H} need not be measurable and we should be somewhat more careful in this application of Fubini's theorem. This is however irrelevant, and for simplicity we skip these details.

3rd step: choice of A and conditional estimates on Rademacher averages. This is the place where randomization appears to be crucial. On the basis of (6.21), we are now interested in conditional estimates of IP_ε{ ‖ Σ_{i∈I} ε_i u_i ‖ > t } with some appropriate choice for A. The tail inequality on Rademacher averages we use is inequality (4.11) in the following form: if (x_i) is a finite sequence in B and σ = sup_{f∈D} ( Σ_i f²(x_i) )^{1/2}, then, for any t > 0,

(6.22)  IP{ ‖ Σ_i ε_i x_i ‖ > 2 IE ‖ Σ_i ε_i x_i ‖ + t } ≤ 2 exp( − t²/8σ² ) .
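On the line, σ² = Σ_i x_i², and (6.22) can be checked exhaustively for a small sequence. A minimal sketch (an added illustration; the x_i are arbitrary):

```python
import itertools
import math

# Exhaustive check of the scalar case of (6.22): sigma^2 = sum x_i^2 and
# P{ |sum eps_i x_i| > 2 E|sum eps_i x_i| + t } <= 2 exp(-t^2 / (8 sigma^2)).
xs = [1.0, 0.7, 0.4, 0.25, 0.1]                # arbitrary finite sequence
sigma2 = sum(x * x for x in xs)
signs = list(itertools.product([-1, 1], repeat=len(xs)))
sums = [abs(sum(e * x for e, x in zip(eps, xs))) for eps in signs]
mean = sum(sums) / len(sums)

for t in [0.1 * j for j in range(1, 40)]:
    lhs = sum(1 for v in sums if v > 2 * mean + t) / len(sums)
    assert lhs <= 2 * math.exp(-t * t / (8 * sigma2))
```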

(With some worse constants, we could also use the subgaussian inequality (4.1).) Of course, this kind of inequality is simpler on the line, and the interested reader is perhaps invited to consider this simpler case to start with. Provided with this inequality, let us consider A = A₁ ∩ A₂ where

A₁ = { x = (x_i)_{i≤N} ; IE ‖ Σ_{i=1}^N ε_i x_i I_{{‖x_i‖ ≤ s/k}} ‖ ≤ 4M } ,

A₂ = { x = (x_i)_{i≤N} ; sup_{f∈D} ( Σ_{i=1}^N f²(x_i) I_{{‖x_i‖ ≤ s/k}} )^{1/2} ≤ 4m } .

The very definitions of M and m clearly show that IP{X ∈ A} ≥ 1/2, so that we are in a position to apply (6.21). Observe now that Rademacher averages are monotone in the sense that IE ‖ Σ_{i∈J} ε_i x_i ‖ is an increasing function of J ⊂ ℕ. We use this property in the following way. By definition of I, we can fix, for each i ∈ I, 1 ≤ ℓ(i) ≤ q with X_i = x^{ℓ(i)}_i. Let I_ℓ = { i ; ℓ(i) = ℓ }, 1 ≤ ℓ ≤ q. We have that

IE_ε ‖ Σ_{i∈I} ε_i u_i ‖ ≤ Σ_{ℓ=1}^q IE_ε ‖ Σ_{i∈I_ℓ} ε_i u_i ‖ = Σ_{ℓ=1}^q IE ‖ Σ_{i∈I_ℓ} ε_i x^ℓ_i I_{{‖x^ℓ_i‖ ≤ s/k}} ‖ .

Then, by monotonicity of Rademacher averages and the definition of A (recall that x¹, …, x^q belong to A), it follows that

IE_ε ‖ Σ_{i∈I} ε_i u_i ‖ ≤ Σ_{ℓ=1}^q IE ‖ Σ_{i=1}^N ε_i x^ℓ_i I_{{‖x^ℓ_i‖ ≤ s/k}} ‖ ≤ 4qM .

Similarly, but with sums of squares (that are also monotone),

sup_{f∈D} Σ_{i∈I} f²(u_i) ≤ 16qm² .

Theorem 6.17 now clearly follows from these observations combined with (6.21) and the estimate (6.22) on Rademacher averages.

Remark 6.18. One of the key observations in the third step of the preceding proof is the monotonicity of Rademacher averages. It might be interesting to describe what the isoperimetric inequality directly produces when applied to conditional averages IE_ε ‖ Σ_{i=1}^N ε_i X_i ‖, which thus satisfy this basic unconditionality property. Let therefore (X_i)_{i≤N} be independent symmetric variables in L₁(B) and set M = IE ‖ Σ_{i=1}^N X_i ‖. Then, for every k ≥ q and s > 0,

(6.23)  IP_X{ IE_ε ‖ Σ_{i=1}^N ε_i X_i ‖ > 2qM + s } ≤ ( K₀/q )^k + IP{ Σ_{i=1}^k ‖X_i‖* > s } .

For the proof we let A = { x = (x_i)_{i≤N} ; IE ‖ Σ_{i=1}^N ε_i x_i ‖ ≤ 2M } so that, by symmetry, IP{X ∈ A} ≥ 1/2. If X ∈ H(A, q, k), there exist j ≤ k and x¹, …, x^q in A such that {1, …, N} = {i₁, …, i_j} ∪ I where I = ∪_{ℓ=1}^q { i ≤ N ; X_i = x^ℓ_i }. Then, as in the third step,

IE_ε ‖ Σ_{i=1}^N ε_i X_i ‖ ≤ Σ_{i=1}^k ‖X_i‖* + IE_ε ‖ Σ_{i∈I} ε_i X_i ‖ ≤ Σ_{i=1}^k ‖X_i‖* + Σ_{ℓ=1}^q IE ‖ Σ_{i=1}^N ε_i x^ℓ_i ‖ ≤ Σ_{i=1}^k ‖X_i‖* + 2qM

by monotonicity of Rademacher averages and the definition of A. (6.23) then simply follows from (6.16). Note that the same applies to sums of independent real positive random variables, since these share this monotonicity property, and this case is actually one first instructive example for the understanding of the technique.

We next turn to several applications of Theorem 6.17. We first solve the integrability question of almost surely convergent series of independent centered bounded random variables. Proposition 6.14 provided an exponential integrability. It is however known (cf. e.g. [Sto]) that in the real case these series are integrable with respect to the function exp(x log⁺ x). This order of integrability is furthermore best possible. Take indeed (X_i)_{i≥1} to be a sequence of independent random variables such that, for each i ≥ 1, X_i = ±1 with equal probability (2i²)^{−1} and X_i = 0 with probability 1 − i^{−2}. Then Σ_i IEX_i² < ∞, so that S = Σ_i X_i converges almost surely; however, if S_N = Σ_{i=1}^N X_i and λ, ε > 0,

IE exp( λ|S_N| (log⁺|S_N|)^{1+ε} ) = Σ_{k=0}^N exp( λk (log⁺k)^{1+ε} ) IP{ |S_N| = k } ≥ exp( λN (log N)^{1+ε} ) Π_{i=1}^N (2i²)^{−1}

which thus goes to infinity with N. The following theorem extends to the vector valued case this strong integrability property.

Theorem 6.19. Let (X_i) be independent and symmetric random variables with values in B such that S = Σ_i X_i converges almost surely and ‖X_i‖_∞ ≤ a for all i. Then, for every λ < 1/a,

IE exp( λ ‖S‖ log⁺ ‖S‖ ) < ∞ .

If the X_i's are merely centered, this holds for all λ < 1/2a.

Proof. For every N, set S_N = Σ_{i=1}^N X_i. We show that sup_N IE exp λ‖S_N‖ log⁺‖S_N‖ < ∞, which is enough by Fatou's lemma. We already know from Theorem 6.11 that sup_N IE‖S_N‖ = M < ∞ (this can also be deduced directly from the isoperimetric inequality, so that this proof is actually self-contained). We use (6.17) together with (6.19) and the comment after Theorem 6.17 concerning truncation to see that, for all integers k ≥ q and all real numbers s, t > 0,

IP{ ‖S_N‖ > 8qM + s + t } ≤ ( K₀/q )^k + IP{ Σ_{i=1}^k ‖X_i‖* > s } + 2 exp( − t²/256qM² ) .

Since Σ_{i=1}^k ‖X_i‖* ≤ ka, we get, for every N,

IP{ ‖S_N‖ > 8qM + ka + t } ≤ ( K₀/q )^k + 2 exp( − t²/256qM² ) .

Let ε > 0. For u > 0 large enough, set t = εu, k = [(1 − 2ε)a^{−1}u] (integer part) and

q = [ ε² a u / 256 M² log u ] .

Then, for u ≥ u₀(M, a, ε) large enough, 8qM + ka + t ≤ u, k ≥ q and

IP{ ‖S_N‖ > u } ≤ IP{ ‖S_N‖ > 8qM + ka + t } ≤ exp( − (1 − 3ε) a^{−1} u log u ) + 2 exp( − a^{−1} u log u ) .

Since this is uniform in N, the conclusion follows. If the random variables are only centered, use for example the symmetrization Lemma 6.3. Note again that sup_N IE exp λ‖S_N‖ log⁺‖S_N‖ < ∞ as soon as the sequence (S_N) is bounded in probability.
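The choice of parameters in the preceding proof can be checked numerically. The sketch below (an added illustration with the hypothetical values a = M = 1 and ε = 0.1) verifies that, for large u, the threshold 8qM + ka + t stays below u while the Gaussian-type exponent t²/256qM² is of the order of u log u:

```python
import math

# Bookkeeping for the parameters of the proof, with illustrative values
# a = M = 1 and eps = 0.1: t = eps*u, k = [(1 - 2 eps) u / a] and
# q = smallest integer >= eps^2 * a * u / (256 M^2 log u).
a = M = 1.0
eps = 0.1
for u in [1e6, 1e7, 1e8]:
    t = eps * u
    k = math.floor((1 - 2 * eps) * u / a)
    q = math.ceil(eps**2 * a * u / (256 * M**2 * math.log(u)))
    assert 8 * q * M + k * a + t <= u                             # below u
    assert t**2 / (256 * q * M**2) >= 0.9 * u * math.log(u) / a  # ~ u log u
```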

Theorem 6.19 is closely related to the best order of growth, as a function of p, of the constants in the L_p-inequalities of J. Hoffmann-Jørgensen (Proposition 6.8). This is the content of the next statement.

Theorem 6.20. There is a universal constant K such that, for all p > 1 and all finite sequences (X_i) of independent mean zero random variables in L_p(B),

‖ Σ_i X_i ‖_p ≤ K (p / log p) ( ‖ Σ_i X_i ‖_1 + ‖ max_i ‖X_i‖ ‖_p ) .

Proof. We may and do assume the X_i's, i ≤ N, to be symmetric. (With some worse irrelevant numerical constants, the same result may be obtained, perhaps more simply, from the successive application of Lemmas 2.5 and 2.6.) If r is an integer, we set X_N^{(r)} = ‖X_j‖ whenever ‖X_j‖ is the r-th maximum of the sample ( ‖X_i‖ )_{i≤N} (ties being broken by the index), and X_N^{(r)} = 0 if r > N. Theorem 6.17 and (6.19) indicate that, for k ≥ q and s, t > 0, if M = ‖ Σ_{i=1}^N X_i ‖_1,

(6.24)  IP{ ‖ Σ_{i=1}^N X_i ‖ > 8qM + 2s + t } ≤ ( K₀/q )^k + IP{ Σ_{r=1}^k X_N^{(r)} > s } + 2 exp( − t²/256qM² ) .

To establish Theorem 6.20, we may assume by homogeneity that M ≤ 1 and ‖X_N^{(1)}‖_p ≤ 1 (where X_N^{(1)} = max_{i≤N} ‖X_i‖). One easily sees that, for all u > 0,

IP{ X_N^{(r)} > u } ≤ IP{ max_{i≤N} ‖X_i‖ > u } IP{ X_N^{(r−1)} > u } ,

from which we deduce, iterating, that IP{ X_N^{(r)} > u } ≤ ( IP{ X_N^{(1)} > u } )^r ≤ u^{−rp}. In particular, therefore, IP{ X_N^{(1)} > u } ≤ u^{−p} for all u > 0.
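For independent and identically distributed summands, the induction bound IP{ X_N^{(r)} > u } ≤ IP{ max ‖X_i‖ > u } · IP{ X_N^{(r−1)} > u } can be tested exactly, since the number of summands exceeding a level is then binomial. A minimal sketch (an added illustration; N and the tail probabilities are arbitrary):

```python
from math import comb

def binom_tail(N, p, r):
    """Exact P{ Bin(N, p) >= r }: r or more of N i.i.d. norms exceed a level."""
    return sum(comb(N, j) * p**j * (1 - p)**(N - j) for j in range(r, N + 1))

N = 8
for p in [0.05, 0.2, 0.5]:
    p_max = binom_tail(N, p, 1)            # P{ max exceeds the level }
    for r in range(2, N + 1):
        # induction step of the proof, then its iterated form
        assert binom_tail(N, p, r) <= p_max * binom_tail(N, p, r - 1) + 1e-15
        assert binom_tail(N, p, r) <= p_max**r + 1e-15
```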

Let u ≥ 1 be fixed and let k be the smallest integer ≥ u. We have IP{ X_N^{(2)} > u^{2/3} } ≤ u^{−4p/3} and, if ℓ is the smallest integer such that 2^ℓ ≥ u², IP{ X_N^{(ℓ)} > 2 } ≤ 2^{−ℓp} ≤ u^{−2p} ≤ u^{−4p/3}. Hence the complement of the set { X_N^{(1)} ≤ u , X_N^{(2)} ≤ u^{2/3} , X_N^{(ℓ)} ≤ 2 } has a probability smaller than IP{ X_N^{(1)} > u } + 2u^{−4p/3}. Further, on this set,

Σ_{r=1}^k X_N^{(r)} ≤ u + ℓu^{2/3} + 2(k − 1) ≤ Cu

for some constant C. We now apply (6.24) with q the smallest integer ≥ √u, s = Cu and t = u; it follows from the preceding that

IP{ ‖ Σ_{i=1}^N X_i ‖ > 2(C + 10)u } ≤ exp( − u log( √u/K₀ ) ) + IP{ X_N^{(1)} > u } + 5u^{−4p/3} + 2 exp( − u^{3/2}/256 ) .

Standard computations using the integration by parts formula then give the constant p/log p in the inequality of the theorem. The proof is complete.

The preceding moment inequalities can also be investigated for exponential functions. Recall that for 0 < α < ∞ we let ψ_α = exp(x^α) − 1 (linear near the origin when 0 < α < 1 in order for ψ_α to be convex) and denote by ‖·‖_{ψ_α} the norm of the Orlicz space L_{ψ_α}. We then have

Theorem 6.21. There is a constant K depending on α only such that, for all finite sequences (X_i) of independent mean zero random variables in L_{ψ_α}(B): if 0 < α ≤ 1,

(6.25)  ‖ Σ_i X_i ‖_{ψ_α} ≤ K ( ‖ Σ_i X_i ‖_1 + ‖ max_i ‖X_i‖ ‖_{ψ_α} ) ,

and if 1 < α ≤ 2,

(6.26)  ‖ Σ_i X_i ‖_{ψ_α} ≤ K ( ‖ Σ_i X_i ‖_1 + ( Σ_i ‖ ‖X_i‖ ‖_{ψ_α}^β )^{1/β} )

where 1/α + 1/β = 1.

Proof. We only give the proof of (6.26); similar (and even simpler when 0 < α < 1) arguments are used for (6.25) (cf. [Ta11]). By Lemma 6.3 we reduce to symmetric random variables (X_i)_{i≤N}. We set d_i = ‖ ‖X_i‖ ‖_{ψ_α} so that IP{ ‖X_i‖ > u } ≤ 2 exp( −(u/d_i)^α ). By homogeneity we can assume that ‖ Σ_{i=1}^N X_i ‖_1 ≤ 1 and Σ_i d_i^β ≤ 1, and there is no loss in generality to assume the sequence (d_i)_{i≤N} decreasing. Set

γ_i = Σ_j 2^{−|j−i|/2} 2^j d_{2^j}^β

so that 2^{−1/2} γ_i ≤ γ_{i+1} ≤ 2^{1/2} γ_i for i ≥ 1 and Σ_i γ_i ≤ 10 (since, by monotonicity, 2^j d_{2^j}^β ≤ 2 Σ_{2^{j−1} < i ≤ 2^j} d_i^β). Let η_i = ( 2^{−i} γ_i )^{1/β}, so that 2^{−3/2β} η_i ≤ η_{i+1} ≤ 2^{−1/2β} η_i. Then Σ_i 2^i η_i^β ≤ 10 and d_j ≤ η_i for j ≥ 2^i. We also observe that, by Hölder's inequality, Σ_{ℓ≤n} 2^ℓ η_ℓ (4 log 4^{ℓ+1})^{1/α} ≤ C (2ⁿ n)^{1/α} where C depends on α only.

It will be enough to show that, for some constants a and s depending on α only, if u is large enough,

(6.27)  IP{ Σ_{r≤u} X_N^{(r)} > au } ≤ exp( −u^α ) .

Indeed, since IP{ X_N^{(1)} > u } ≤ 2 exp( −u^α ), if we then take, in (6.24), q to be 2K₀, k of the order of u, and s and t of the order of u, we obtain the tail estimate corresponding to (6.26).

In order to establish (6.27), it is actually enough to show that, when u ≥ 2^s,

IP{ Σ_{2^s ≤ r ≤ u} X_N^{(r)} > au } ≤ exp( −u^α ) .

Fix u (large enough) and denote by n the largest integer such that 2ⁿ ≤ u. Let

L = { s ≤ ℓ ≤ n ; X_N^{(2^ℓ)} > η_ℓ ( 4 log 4^{ℓ+1} )^{1/α} } .

Suppose that Σ_{2^s ≤ r ≤ u} X_N^{(r)} > au. Since Σ_{2^s ≤ r ≤ u} X_N^{(r)} ≤ Σ_{ℓ=s}^n 2^ℓ X_N^{(2^ℓ)} and since, by the preceding observation, the terms with ℓ ∉ L contribute at most C(u log u)^{1/α} ≤ au/2 for u large enough (recall α > 1), we then have

Σ_{ℓ∈L} 2^ℓ X_N^{(2^ℓ)} > au/2 .

For ℓ ∈ L, we can then find a number a_ℓ, with a_ℓ/η_ℓ a power of 2^{1/2}, such that X_N^{(2^ℓ)} > a_ℓ, a_ℓ ≥ η_ℓ (4 log 4^{ℓ+1})^{1/α} and Σ_{ℓ∈L} 2^ℓ a_ℓ > au/4. If X_N^{(2^ℓ)} > a_ℓ for every ℓ ∈ L, there exist disjoint subsets J_ℓ, ℓ ∈ L, of {1, …, N} such that Card J_ℓ ≥ 2^{ℓ−1} and ‖X_i‖ > a_ℓ for i ∈ J_ℓ. Set I_ℓ = J_ℓ \ {1, …, 2^{ℓ−2}}. We clearly have

IP{ ∀ℓ ∈ L , X_N^{(2^ℓ)} > a_ℓ } ≤ Σ Π_{ℓ∈L} Π_{i∈I_ℓ} IP{ ‖X_i‖ > a_ℓ }

where the summation is taken over all possible choices of the (I_ℓ)_{ℓ∈L}. We know that IP{ ‖X_i‖ > a_ℓ } ≤ 2 exp( −(a_ℓ/d_i)^α ), so that

Σ_{2^{ℓ−2} < i ≤ N} IP{ ‖X_i‖ > a_ℓ } ≤ Σ_{j ≥ ℓ−2} 2^{j+1} exp( −(a_ℓ/η_j)^α ) .

Since ℓ ∈ L, (a_ℓ/η_ℓ)^α ≥ (a_ℓ/η_ℓ)^α/2 + log 4^{ℓ+2}, and since η_{j+1} ≤ 2^{−1/2β} η_j,

(a_ℓ/η_j)^α ≥ (a_ℓ/η_ℓ)^α/4 + log 4^{j+2}   for j ≥ ℓ − 2 ,

provided s has been taken large enough. It follows that

Σ_{j ≥ ℓ−2} 2^{j+1} exp( −(a_ℓ/η_j)^α ) ≤ Σ_{j ≥ ℓ−2} 2^{−j−1} exp( − ¼ (a_ℓ/η_ℓ)^α ) ≤ exp( − ¼ (a_ℓ/η_ℓ)^α ) .

Hence, counting the choices of the I_ℓ's,

IP{ ∀ℓ ∈ L , X_N^{(2^ℓ)} > a_ℓ } ≤ exp( − (1/8) Σ_{ℓ∈L} 2^ℓ (a_ℓ/η_ℓ)^α ) .

By Hölder's inequality,

Σ_{ℓ∈L} 2^ℓ a_ℓ ≤ ( Σ_{ℓ∈L} 2^ℓ (a_ℓ/η_ℓ)^α )^{1/α} ( Σ_{ℓ∈L} 2^ℓ η_ℓ^β )^{1/β}

so that, since Σ_ℓ 2^ℓ η_ℓ^β ≤ 10 and Σ_{ℓ∈L} 2^ℓ a_ℓ > au/4,

Σ_{ℓ∈L} 2^ℓ (a_ℓ/η_ℓ)^α ≥ ( au/40 )^α .

If we take for example a = 640, we get that

IP{ ∀ℓ ∈ L , X_N^{(2^ℓ)} > a_ℓ } ≤ exp( −2u^α ) .

The number of possible choices for the sequence (a_ℓ)_{ℓ≤n} is less than exp(u^α) for u large enough. From this, (6.27) follows and the proof of (6.26) is complete.

The preceding proof shows that the constant K in (6.26) is bounded in any interval [1 + δ, 2], δ > 0.

Remark 6.22. Applying (6.26) to independent copies of a Gaussian random variable, one can then deduce from this observation (and a finite dimensional approximation) that vector valued Gaussian variables are in L_{ψ₂}(B).

To conclude this chapter, we would like to mention that the previous statements are only a few examples of the seemingly large number of variations that one can try with the isoperimetric approach and estimates on large values. As an illustration, let us mention a few more ideas. The first remark is that, in the third step of the proof of Theorem 6.17, possibly other inequalities on Rademacher averages can be used. One may think for example of (6.14) directly (which was obtained through martingale methods). Actually, in the hypotheses of Theorem 6.21, we already get in this way an interesting inequality in the spirit of Theorem 6.21 which deals with the case α > 2. Namely, for 2 < α < ∞ and 1/α + 1/β = 1,

(6.28)  ‖ Σ_i X_i ‖_{ψ_α} ≤ K ( ‖ Σ_i X_i ‖_1 + ‖ ( ‖X_i‖_∞ ) ‖_{β,1} ) .

Using estimates on large values similar to the ones put forward in the proof of Theorem 6.21, it is possible to improve (6.28) by the isoperimetric method into

(6.29)  ‖ Σ_i X_i ‖_{ψ_α} ≤ K ( ‖ Σ_i X_i ‖_1 + ‖ ( ‖X_i‖_{ψ_s} ) ‖_{β,1} )

for 2 < α < s ≤ ∞ (see [Ta11]).

Another remark concerns the possibility, in the three step procedure of the proof of Theorem 6.17, to let s, and at some point t too, be random. One random bound for the largest values is given by the inequality

Σ_{i=1}^k ‖X_i‖* ≤ β k^{1/β} ‖ (X_i) ‖_{α,∞} ,  1 < α < ∞ ,  1/α + 1/β = 1 ,

where ‖(X_i)‖_{α,∞} denotes, as usual, the weak-ℓ_α norm of the sequence ( ‖X_i‖ ). With this choice of s, one can show the following inequalities, in which (X_i) is a finite sequence of independent symmetric vector valued random variables. If 2 < α < ∞, for some constant K depending only on α and all t > 0,

(6.30)  IP{ ‖ Σ_i X_i ‖ > t ( ‖ Σ_i X_i ‖_1 + ‖ (X_i) ‖_{α,∞} ) } ≤ K exp( − t^β/K ) ;

if 1 < α < 2,

(6.31)  IP{ ‖ Σ_i X_i ‖ > K ‖ Σ_i X_i ‖_1 + t ‖ (X_i) ‖_{α,∞} } ≤ K exp( − t^β/K ) .

To illustrate these inequalities, let us prove the second one; it uses (6.15), as opposed to the first one which uses the usual quadratic inequality (Theorem 4.7).

We start with (6.21), actually without truncated variables since we need not be concerned with truncation here. We set there s to be random and equal to β k^{1/β} ‖(X_i)‖_{α,∞}. Take further t = 2qM + k^{1/β} ‖(X_i)‖_{α,∞}, also random therefore, where M = ‖ Σ_i X_i ‖_1. It follows that, for k ≥ q,

IP{ ‖ Σ_i X_i ‖ > 2qM + (2β + 1) k^{1/β} ‖(X_i)‖_{α,∞} } ≤ ( K₀/q )^k + ∫_{{X∈H}} IP_ε{ ‖ Σ_{i∈I} ε_i X_i ‖ > 2qM + k^{1/β} ‖(X_i)‖_{α,∞} } dIP_X .

Let A = { (x_i) ; IE ‖ Σ_i ε_i x_i ‖ ≤ 2M } so that IP{X ∈ A} ≥ 1/2. By definition of I and monotonicity of Rademacher averages,

IE_ε ‖ Σ_{i∈I} ε_i X_i ‖ ≤ 2qM .

Now, by inequality (6.15) applied conditionally on the X_i's,

IP_ε{ ‖ Σ_{i∈I} ε_i X_i ‖ > 2qM + k^{1/β} ‖(X_i)‖_{α,∞} } ≤ 2 exp( − k/C ) .

Letting for example q = 2K₀, we then clearly deduce (6.31). The proof of (6.30) is similar.

We conclude here these applications, although many different techniques might now be combined from what we learned to yield some new useful inequalities. We hope the interested reader has now enough information to establish from the preceding ideas the tools he will need in his own study.

Notes and references

The study of sums of independent Banach space valued random variables and the introduction of symmetrization ideas were initiated by the work of J.-P. Kahane [Ka1] and J. Hoffmann-Jørgensen [HJ1], [HJ2]. Sections 6.1 and 6.2 basically follow their work (cf. also [HJ3]). Lemma 6.1 is due to K. Itô and M. Nisio [I-N] (cf. Chapter 2). Lemma 6.2 is usually referred to as Ottaviani's inequality and is part of the folklore. The inequality (6.2) has been noticed in [Ale1] and [V-C1]. Proposition 6.4 was established by M. Kanter [Kan] as an improvement of various previous concentration inequalities of the same type, extending the classical result of P. Lévy on the line; it turns out to be rather useful in some cases (cf. Section 10.3). Lemmas 6.5 and 6.6 are easy variations and extensions of the contraction principle and comparison properties (see further [J-M2]). Proposition 6.7 is due to J. Hoffmann-Jørgensen [HJ2], who used it to establish Theorem 6.11; Proposition 6.8 is implicit in his proof. Applied to a sum Σ_i θ_i x_i where (θ_i) is a standard p-stable sequence, it yielded historically the first integrability and moment equivalence results for vector valued stable random variables. Iteration of Hoffmann-Jørgensen's inequality has been used in [J-M2], [Pi3] and [Kue5] for example. Lemma 6.9 is classical and is taken in this form from [G-Z1]; in this paper, E. Giné and J. Zinn establish the moment equivalences of sums of independent (symmetric) random variables of Proposition 6.10. Remark 6.13 has been mentioned to us by J. Zinn. Proposition 6.14 is due to A. de Acosta [Ac4], while the content of Remark 6.15 is taken from the paper [A-S].

Integrability of sums of independent bounded vector valued random variables has been studied by many authors. Among the numerous real (quadratic) exponential inequalities, let us mention the ones by S. N. Bernstein (cf. [Sto]), A. N. Kolmogorov [Ko1] (cf. [Sto]), Y. V. Prokhorov [Pro2] (cf. [Sto]), G. Bennett [Ben], W. Hoeffding [Ho], etc. We refer to the interesting paper [Ho] for comparison between these inequalities and further developments. See also an inequality by D. K. Fuk and S. V. Nagaev [F-N] (cf. [Na2], [Yu2]).

Lemma 6.16 is due to V. V. Yurinskii [Yu1] (see also [Yu2]). Inequality (6.11) (and some others) has been put forward in [Ac6]. On the line, inequalities (6.13) and (6.14) may be found respectively in [Ac5] and [Pi16] (see also [Pi12]). There is also a version of Prokhorov's arcsinh inequality noticed in [J-S-Z] which may be applied similarly to ‖S_N‖ − IE‖S_N‖. A. de Acosta [Ac5] used (6.13) to establish Theorem 6.19 in cotype 2 spaces. In the contribution [Led6] (see [L-T2]), in the context of the law of the iterated logarithm, a Gaussian randomization argument is used to decompose the study of sums of independent random variables into two parts: one for which the Gaussian (or Rademacher) concentration properties can be applied conditionally, and a second one which is enriched by an unconditionality property (monotonicity of Rademacher averages). This kind of decomposition is one important feature of the isoperimetric approach (cf. Remark 6.18), and Theorem 1.4 was motivated by this unconditionality property. Our exposition here follows [Ta11]. Theorem 6.19 is in [Ta11] and extends the prior result of [Ac5]. Theorem 6.20 was obtained in [J-S-Z], where a thorough relation to exponential integrability is given. Recently, an alternate proof of Theorem 6.20 has been given by S. Kwapień and J. Szulga [K-S] using very interesting hypercontractive estimates in the spirit of what we discussed in the Gaussian and Rademacher cases (Sections 3.2 and 4.4). Theorem 6.21 is taken from [Ta11] and improves upon [Kue5]. Inequality (6.29) is also in [Ta11]. Inequalities (6.30) and (6.31) extend to the vector valued case various ideas of [He7], [He8] (see further Lemma 14.4).

Chapter 7. The strong law of large numbers

7.1 A general statement for strong limit theorems
7.2 Examples of laws of large numbers

Notes and references

In this chapter and the next one, we present the strong law of large numbers and the law of the iterated logarithm respectively for sums of independent Banach space valued random variables. We only investigate extensions to vector valued random variables of some of the classical limit theorems, like here the laws of large numbers of Kolmogorov and Prokhorov. One main feature of the results we present is the equivalence, under classical moment conditions, of the almost sure limit theorem and the corresponding property in probability. In a sense, this can be seen as yet another instance in which the theory is broken into two parts: under a statement in probability (weak statement), prove almost sure properties; then try to understand the weak statement. It is one of the main difficulties of the vector valued setting to control boundedness in probability or tightness of a sum of independent random variables. On the line, this is usually done with orthogonality and moment conditions. In general spaces, one has to either put conditions on the Banach space or to use empirical process methods. Some of these questions will be discussed in the sequel of the work, starting with Chapter 9, especially in the context of the central limit theorem which forms the typical example of a weak statement. As announced, in this study the isoperimetric approach of Section 6.3 demonstrates its efficiency.

In the first part of this chapter, we study a general statement for almost sure limit theorems for sums of independent random variables. It is directly drawn from the isoperimetric approach of Section 6.3 and already presents some interest for real random variables. We introduce it with generalities on strong limit theorems like symmetrization (randomization) and blocking arguments. The second paragraph is devoted to applications to concrete examples like the independent and identically distributed (iid) Kolmogorov and Marcinkiewicz-Zygmund strong laws of large numbers and the laws of Kolmogorov, Prokhorov, Brunk, etc. Apart from one example where Radon random variables will be useful, we can adopt the setting of the last chapter and deal with a Banach space $B$ for which there is a countable subset $D$ of the unit ball of the dual space $B'$ such that $\|x\| = \sup_{f \in D} |f(x)|$ for all $x \in B$; $X$ is a random variable with values in $B$ if $f(X)$ is measurable for all $f$ in $D$. When $(X_i)_{i \in \mathbb{N}}$ is a sequence of (independent) random variables in $B$, we set, as usual, $S_n = X_1 + \cdots + X_n$, $n \ge 1$.

7.1 A general statement for strong limit theorems

Let $(X_i)_{i \in \mathbb{N}}$ be a sequence of independent random variables with values in $B$. Let also $(a_n)$ be a sequence of positive numbers increasing to infinity. We study the almost sure behavior of the sequence $(S_n/a_n)$. As described before, such a study in the infinite dimensional setting can only be developed reasonably if one assumes some (necessary) boundedness or convergence in probability. Recall that a sequence $(Y_n)$ is bounded in probability if for every $\varepsilon > 0$ there exists $A > 0$ such that for all $n$,
$$P\{\|Y_n\| > A\} < \varepsilon.$$
This kind of hypothesis will be common to all limit theorems discussed here. It allows in particular a simple symmetrization procedure summarized in the next trivial lemma.

Lemma 7.1. Let $(Y_n)$, $(Y_n')$ be independent sequences of random variables such that the sequence $(Y_n - Y_n')$ is almost surely bounded (resp. convergent to 0) and $(Y_n')$ is bounded (resp. convergent to 0) in probability. Then $(Y_n)$ is almost surely bounded (resp. convergent to 0). More quantitatively, if for some numbers $M$ and $A$,
$$\limsup_{n \to \infty} \|Y_n - Y_n'\| \le M \ \text{almost surely}$$
and
$$\limsup_{n \to \infty} P\{\|Y_n'\| > A\} < 1,$$
then $\limsup_{n \to \infty} \|Y_n\| \le 2M + A$ almost surely.

Given $(X_i)$, let then $(X_i')$ denote an independent copy of the sequence $(X_i)$ and set, for each $i$, $\widetilde{X}_i = X_i - X_i'$, defining thus independent and symmetric random variables. Lemma 7.1 tells us that, under appropriate assumptions in probability on $(S_n/a_n)$, it is enough to study $(\sum_{i=1}^{n} \widetilde{X}_i / a_n)$, reducing to symmetric random variables. From now on, we therefore only describe the various results and the general theorem we have in mind in the symmetrical case; it would however be possible, with some care, to study the general case. This avoids several unnecessary complications about centerings. As we learned, for sums of independent random variables, properties in probability are often equivalent to properties in $L_p$, which are usually more convenient; this is shown by Hoffmann-Jørgensen's inequalities (Proposition 6.8), on which relies the following useful lemma.
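The symmetrization device $\widetilde{X}_i = X_i - X_i'$ can be illustrated numerically. The following sketch (plain Python; the exponential distribution, sample size and threshold are arbitrary choices for the demonstration, not taken from the text) checks that the symmetrized variables are symmetrically distributed and centered, even though the original variables are neither.

```python
import random

random.seed(0)
n = 20000

# a non-symmetric, non-centered sample (exponential with mean 1)
X  = [random.expovariate(1.0) for _ in range(n)]
Xp = [random.expovariate(1.0) for _ in range(n)]  # independent copy (X')

# symmetrized variables: X~ = X - X'
Xs = [x - xp for x, xp in zip(X, Xp)]

# symmetry: the two tails of X~ at level t carry (nearly) equal mass
t = 1.0
p_plus  = sum(1 for v in Xs if v > t) / n
p_minus = sum(1 for v in Xs if v < -t) / n

mean_orig = sum(X) / n    # close to 1: X is far from centered
mean_symm = sum(Xs) / n   # close to 0: X~ is centered by construction
print(p_plus, p_minus, mean_orig, mean_symm)
```

Nothing here is specific to the real line: coordinatewise subtraction of an independent copy is exactly the operation performed on the $B$-valued sequence.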

Lemma 7.2. Let $(X_i)$ be independent and symmetric random variables with values in $B$. If the sequence $(S_n/a_n)$ is bounded (resp. convergent to 0) in probability, then, for any $p > 0$ and any bounded sequence $(\gamma_n)$ of positive numbers, the sequence
$$\Big( E\Big\| \sum_{i=1}^{n} X_i I_{\{\|X_i\| \le \gamma_n a_n\}} \Big/ a_n \Big\|^p \Big)$$
is bounded (resp. convergent to 0).

Proof. We only show the convergence statement. Let $(\gamma_n)$ be bounded by $\gamma$. By inequality (6.9), for each $n$,
$$E\Big\|\sum_{i=1}^{n} X_i I_{\{\|X_i\| \le \gamma_n a_n\}}\Big\|^p \le 2 \cdot 3^p\, E\big(\max_{i \le n} \|X_i\|^p I_{\{\|X_i\| \le \gamma_n a_n\}}\big) + 2\,(3 t_0(n))^p$$
where
$$t_0(n) = \inf\Big\{ t > 0 \,;\, P\Big\{\Big\|\sum_{i=1}^{n} X_i I_{\{\|X_i\| \le \gamma_n a_n\}}\Big\| > t\Big\} \le (8 \cdot 3^p)^{-1} \Big\}.$$
When $(S_n/a_n)$ converges to 0 in probability, using the contraction principle in the form of Lemma 6.5, $t_0(n)$ is seen to be smaller than $\varepsilon a_n$ for all $n$ large enough. Concerning the maximum term, by integration by parts and Lévy's inequality (2.7), for each $n$,
$$E\big(\max_{i \le n}\|X_i\|^p I_{\{\|X_i\| \le \gamma_n a_n\}}\big) \le \int_0^{\gamma a_n} P\big\{\max_{i \le n}\|X_i\| > t\big\}\, dt^p \le 2\, a_n^p \int_0^{\gamma} P\{\|S_n\| > t a_n\}\, dt^p.$$
The conclusion follows by dominated convergence.

A classical and important observation in the study of strong limit theorems for sums $S_n$ of independent random variables is that they can be developed in quite general situations through blocks of exponential size. More precisely, assume there exists a subsequence $(a_{m_n})$ of $(a_n)$ such that for each $n$
$$(7.1)\qquad a_{m_n} \le a_{m_{n+1}} \le C\, a_{m_n + 1}$$
where $1 < C < \infty$. This hypothesis is by no means restrictive since it can be shown (cf. [Wi]) that for any fixed $M > 1$ one can find a strictly increasing sequence $(m_n)$ of integers such that the preceding holds with $C = M^3$. We thus assume throughout this section that (7.1) holds for some subsequence $(m_n)$ and define, for each $n$, $I(n)$ as the set of integers $\{m_{n-1} + 1, \ldots, m_n\}$. The next lemma then describes the reduction to blocks in the study of the almost sure behavior of $(S_n/a_n)$.

Lemma 7.3. Let $(X_i)$ be independent and symmetric random variables. The sequence $(S_n/a_n)$ is almost surely bounded (resp. convergent to 0) if and only if the same holds for $(\sum_{i \in I(n)} X_i / a_{m_n})$.

Proof. We only show the convergence statement, and similarly for boundedness. By the Borel-Cantelli lemma and the Lévy inequality for symmetric random variables (2.6), the sequence
$$\Big( \sup_{k \in I(n)} \Big\| \sum_{i = m_{n-1}+1}^{k} X_i \Big/ a_{m_n} \Big\| \Big)$$
converges almost surely to 0. Hence, for almost all $\omega$ and for all $\varepsilon > 0$, there exists $\ell_0$ such that for all $\ell \ge \ell_0$,
$$\sup_{k \in I(\ell)} \Big\| \sum_{i = m_{\ell-1}+1}^{k} X_i(\omega) \Big\| \le \varepsilon\, a_{m_\ell}.$$
Let now $n$ and $j \ge \ell_0$ be such that $m_{j-1} < n \le m_j$. Then
$$\|S_n(\omega)\| \le \|S_{m_{\ell_0 - 1}}(\omega)\| + \sum_{\ell = \ell_0}^{j-1} \Big\|\sum_{i \in I(\ell)} X_i(\omega)\Big\| + \Big\|\sum_{i = m_{j-1}+1}^{n} X_i(\omega)\Big\| \le \|S_{m_{\ell_0 - 1}}(\omega)\| + \varepsilon \sum_{\ell = 1}^{j} a_{m_\ell} \le \|S_{m_{\ell_0 - 1}}(\omega)\| + \varepsilon\, C' a_n$$
where we have used (7.1) and the exponential growth of the blocks to bound $\sum_{\ell \le j} a_{m_\ell}$ by $C' a_{m_j}$, $C' < \infty$. Since $a_n \to \infty$ and $\varepsilon > 0$ is arbitrary, the conclusion follows.

As a corollary to Lemmas 7.1 and 7.3 we can state the following equivalence for general independent variables.

Corollary 7.4. Let $(X_i)$ be independent random variables with values in $B$. Then $S_n/a_n \to 0$ almost surely if and only if $S_n/a_n \to 0$ in probability and $S_{m_n}/a_{m_n} \to 0$ almost surely.

After these preliminaries, we are in a position to describe the general result about the almost sure behavior of $(S_n/a_n)$. By symmetrization and Lemma 7.1, and thanks to the Borel-Cantelli lemma, we have to study convergence of series of the type
$$\sum_n P\Big\{ \Big\| \sum_{i \in I(n)} X_i \Big\| > \varepsilon\, a_{m_n} \Big\}.$$
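The blocking reduction rests on a purely deterministic triangle-inequality step: on the blocks, any partial sum $S_k$ is controlled by the earlier block sums and the oscillation inside the current block. A small sketch (the Gaussian summands and the choice of $N = 12$ dyadic blocks are arbitrary demo assumptions) verifying this bound numerically:

```python
import random

random.seed(1)
N = 12                     # blocks I(n) = {2^(n-1)+1, ..., 2^n}, n = 1, ..., N
total = 2 ** N
X = [0.0] + [random.gauss(0.0, 1.0) for _ in range(total)]   # X[1..total]

# prefix sums S_k = X_1 + ... + X_k
S = [0.0] * (total + 1)
for k in range(1, total + 1):
    S[k] = S[k - 1] + X[k]

# block sums T_n over I(n), and the oscillation within each block
T = [S[2 ** n] - S[2 ** (n - 1)] for n in range(1, N + 1)]
osc = [max(abs(S[k] - S[2 ** (n - 1)]) for k in range(2 ** (n - 1) + 1, 2 ** n + 1))
       for n in range(1, N + 1)]

# deterministic check: for 2^(n-1) < k <= 2^n,
#   |S_k| <= |S_1| + sum of |T_l| over earlier blocks + in-block oscillation
ok = True
for n in range(1, N + 1):
    bound = abs(S[1]) + sum(abs(t) for t in T[: n - 1]) + osc[n - 1]
    for k in range(2 ** (n - 1) + 1, 2 ** n + 1):
        ok = ok and (abs(S[k]) <= bound + 1e-9)
print(ok)
```

The probabilistic content of the lemma is precisely that the block sums and in-block oscillations on the right-hand side can be controlled by Borel-Cantelli and Lévy's inequality.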

The announced result can now be stated. The sufficient conditions we describe are obtained from the isoperimetric inequality for product measures in the form of Theorem 6.17. Recall that $K_0$ is the absolute constant of the isoperimetric inequality (6.16). If $r$ is an integer, we set $X_{I(n)}^{(r)} = \|X_j\|$ whenever $\|X_j\|$ is the $r$-th maximum of the sample $(\|X_i\|)_{i \in I(n)}$ (breaking ties by priority of index, and setting $X_{I(n)}^{(r)} = 0$ if $r > \operatorname{Card} I(n)$).

Theorem 7.5. Let $(X_i)$ be a sequence of independent and symmetric random variables with values in $B$. Assume there exist an integer $q \ge 2K_0$ and a sequence $(k_n)$ of integers such that the following hold:
$$(7.2)\qquad \sum_n \Big(\frac{K_0}{q}\Big)^{k_n} < \infty,$$
$$(7.3)\qquad \sum_n P\Big\{ \sum_{r=1}^{k_n} X_{I(n)}^{(r)} > \varepsilon\, a_{m_n} \Big\} < \infty$$
for some $\varepsilon > 0$, and, with
$$M_n = E\Big\|\sum_{i \in I(n)} X_i I_{\{\|X_i\| \le \varepsilon a_{m_n}/k_n\}}\Big\|,$$
$L = \limsup_{n \to \infty} M_n / a_{m_n} < \infty$. Set then, for each $n$,
$$\sigma_n = \sup_{f \in D} \Big( \sum_{i \in I(n)} E\big(f^2(X_i) I_{\{\|X_i\| \le \varepsilon a_{m_n}/k_n\}}\big) \Big)^{1/2}$$
and assume that, for some $\delta > 0$,
$$(7.4)\qquad \sum_n \exp\big( - \delta^2 a_{m_n}^2 / \sigma_n^2 \big) < \infty.$$
Then we have
$$(7.5)\qquad \sum_n P\Big\{ \Big\| \sum_{i \in I(n)} X_i \Big\| > 10^2\, \theta(\varepsilon, \delta, q, L)\, a_{m_n} \Big\} < \infty$$
where
$$\theta(\varepsilon, \delta, q, L) = \varepsilon + qL + (\varepsilon L + \delta^2)^{1/2}\Big( q \log \frac{q}{K_0} \Big)^{1/2} \le \varepsilon + qL + q(\varepsilon L + \delta^2)^{1/2}.$$
Conversely, if the sequence $(S_n/a_n)$ is almost surely bounded, this is also the case for $(\max_{i \le n} \|X_i\| / a_n)$. This necessary condition on the norm of the individual summands $X_i$ is unfortunately not powerful enough in general and has to be complemented with information on the successive maxima of the sample $(\|X_1\|, \ldots, \|X_n\|)$. Once this is given, the last sufficient condition deals with weak moment assumptions which are kind of optimal: if (7.2) and (7.3) are satisfied and (7.5) holds for some (resp. all) $\varepsilon > 0$, then $L < \infty$ (resp. $L = 0$) and (7.4) holds for some (resp. all) $\delta > 0$.

Proof. There is nothing to prove concerning the sufficiency part of this theorem, which readily follows from the inequality of Theorem 6.17 together with (6.18) applied to the sample $(X_i)_{i \in I(n)}$ with $k = k_n$ ($\le q$ for $n$ large enough), $s = \varepsilon a_{m_n}$ and
$$t = 10^2 (\varepsilon L + \delta^2)^{1/2} \Big( q \log \frac{q}{K_0} \Big)^{1/2} a_{m_n}.$$
(Of course, the numerical constant $10^2$ is not the best one, just a convenient number.) The necessity part concerning $L$ is contained in Lemma 7.2. The necessity of (7.4) is based on Kolmogorov's exponential minoration inequality (Lemma 8.1 below); recall the parameter and constants $\varepsilon(\cdot)$, $K(\cdot)$ therein, arbitrary but fixed for our purposes. Assume that for all $\varepsilon > 0$ (the case for some $\varepsilon > 0$ is similar)
$$\sum_n P\Big\{\Big\|\sum_{i \in I(n)} X_i\Big\| > \varepsilon\, a_{m_n}\Big\} < \infty.$$
Set, for simplicity, $X_i^n = X_i I_{\{\|X_i\| \le \varepsilon a_{m_n}/k_n\}}$, $i \in I(n)$, and choose, for each $n$, $f_n$ in $D$ such that
$$\sigma_n^2 \le 2 \sum_{i \in I(n)} E f_n^2(X_i^n) \quad (\le 2\sigma_n^2).$$
By the contraction principle (second part of Lemma 6.5), we still have that
$$\sum_n P\Big\{ \sum_{i \in I(n)} f_n(X_i^n) > \varepsilon\, a_{m_n} \Big\} < \infty.$$
Let $\delta > 0$. By (7.2) it is enough to check (7.4) for the integers $n$ satisfying $\delta^2 a_{m_n}^2 \le k_n \sigma_n^2 \log(q/K_0)$, since for the remaining ones $\exp(-\delta^2 a_{m_n}^2/\sigma_n^2) \le (K_0/q)^{k_n}$; let us therefore assume this holds. Let $\theta > 0$ be fixed and choose $\varepsilon > 0$ small enough in order that $(1+\theta)^2 \varepsilon^2 \le \delta^2$ and $2\varepsilon \le \varepsilon(\theta)$, so that
$$(\varepsilon a_{m_n})(\varepsilon a_{m_n}/k_n) \le \varepsilon(\theta) \sum_{i \in I(n)} E f_n^2(X_i^n)$$
for the integers $n$ under consideration. Since $L = 0$, it follows from Lemma 7.2 and orthogonality that, for all $n$ large enough,
$$\varepsilon\, a_{m_n} \le K(\theta) \Big( \sum_{i \in I(n)} E f_n^2(X_i^n) \Big)^{1/2} \frac{a_{m_n}}{\sigma_n}.$$
Lemma 8.1 then exactly implies that, for all $n$ large enough,
$$P\Big\{ \sum_{i \in I(n)} f_n(X_i^n) > \varepsilon\, a_{m_n} \Big\} \ge \exp\big( -(1+\theta)^2 \varepsilon^2 a_{m_n}^2 / \sigma_n^2 \big),$$
which gives the result since $(1+\theta)^2 \varepsilon^2 \le \delta^2$. This completes the proof of Theorem 7.5.

Theorem 7.5 expresses that, under (7.2) and (7.3), some necessary and sufficient conditions involving the behavior in probability or expectation and weak moments can be given to describe the almost sure behavior of $(S_n/a_n)$. Conditions (7.2) and (7.3) look however rather technical and it would be desirable to find, if possible, simple, or at least easy to handle, hypotheses on $(X_i)$ in order that these conditions be fulfilled. There could be many ways to do this. We suggest a possible one in terms of the probabilities $P\{\max_{i \in I(n)} \|X_i\| > t\}$ (or $\sum_{i \in I(n)} P\{\|X_i\| > t\}$). This is the purpose of the next lemma. The idea is simply that if the largest element of the sample $(X_i)_{i \in I(n)}$ is exactly estimated by (7.6), the $2s$-th largest one is already small enough so that quite a big number of the values after it are under control. No vector valued structure is of course involved here.

Lemma 7.6. In the notation of Theorem 7.5, assume that, for some $u > 0$,
$$(7.6)\qquad \sum_n P\big\{ \max_{i \in I(n)} \|X_i\| > u\, a_{m_n} \big\} < \infty,$$
and that, for some $v > 0$, all $n$ and $t$, $0 < t \le 1$,
$$(7.7)\qquad P\big\{ \max_{i \in I(n)} \|X_i\| > t v\, a_{m_n} \big\} \le \delta_n \exp\Big( \frac{1}{t} \Big)$$
where $\sum_n \delta_n^s < \infty$ for some integer $s$. Then, for each $q > K_0$, there exists a sequence $(k_n)$ of integers such that $\sum_n (K_0/q)^{k_n} < \infty$ and satisfying
$$\sum_n P\Big\{ \sum_{r=1}^{k_n} X_{I(n)}^{(r)} > \Big( 2s\, u + v \Big( \log \frac{q}{K_0} \Big)^{-1} \Big) a_{m_n} \Big\} < \infty.$$

Proof. If $P\{\max_{i \in I(n)} \|X_i\| > t v a_{m_n}\} \le 1/2$, and since $X_{I(n)}^{(2s)} > t v a_{m_n}$ if and only if $\operatorname{Card}\{i \in I(n) \,;\, \|X_i\| > t v a_{m_n}\} \ge 2s$, we have by Lemmas 2.5 and 2.6
$$P\big\{ X_{I(n)}^{(2s)} > t v a_{m_n} \big\} \le \Big( \sum_{i \in I(n)} P\{\|X_i\| > t v a_{m_n}\} \Big)^{2s} \le \Big( 2\, P\big\{\max_{i \in I(n)} \|X_i\| > t v a_{m_n}\big\} \Big)^{2s} \le \Big( 2 \delta_n \exp \frac{1}{t} \Big)^{2s}$$
where we have used hypothesis (7.7) in the last inequality (one can also use the small trick on large values shown in the proof of Theorem 6.20). The choice of $t = t(n) = (\log 1/\sqrt{\delta_n})^{-1}$ bounds the previous probability by $(4\delta_n)^s$ which, by hypothesis, is the term of a summable series. Define then $k_n$ to be the integer part of
$$2s \Big( \log \frac{q}{K_0} \Big)^{-1} \log \frac{1}{\sqrt{\delta_n}}.$$
It is plain that $\sum_n (K_0/q)^{k_n} < \infty$. Now, for every $n$,
$$P\Big\{ \sum_{r=1}^{k_n} X_{I(n)}^{(r)} > \Big( 2s\, u + v \Big( \log \frac{q}{K_0} \Big)^{-1} \Big) a_{m_n} \Big\} \le P\big\{ X_{I(n)}^{(1)} > u\, a_{m_n} \big\} + P\big\{ X_{I(n)}^{(2s)} > t(n) v\, a_{m_n} \big\}.$$
The lemma is therefore established.

Remark 7.7. Assume that (7.7) of Lemma 7.6 is strengthened into
$$P\big\{ \max_{i \in I(n)} \|X_i\| > t v\, a_{m_n} \big\} \le \frac{\delta_n}{t^p}$$
for some $v > 0$ and some $p > 0$, all $n$ and $t$, $0 < t \le 1$. Then the preceding proof can be easily improved, noting that, for every $n$,
$$\sum_{r=1}^{k_n} X_{I(n)}^{(r)} \le 2s\, X_{I(n)}^{(1)} + k_n X_{I(n)}^{(2s)},$$
to yield that the conclusion of Lemma 7.6 holds with a sequence $(k_n)$ satisfying $\sum_n k_n^{-2ps} < \infty$ (or even $\sum_n k_n^{-p's} < \infty$ for $p' > p$). This observation is sometimes useful.

When the independent random variables $X_i$ are identically distributed and the normalizing sequence $(a_n)$ is regular enough, the first condition in Lemma 7.6 actually implies the second. This is the purpose of the next lemma. The subsequence $(m_n)$ is chosen to be $m_n = 2^n$ for all $n$.

Lemma 7.8. Assume $(X_i)$ is a sequence of independent and identically distributed (like $X$) random variables. Let $(a_n)$ be such that for some $p > 0$ and all $k \le n$,
$$a_{2^n}^p \ge 2^{n-k} a_{2^k}^p.$$
This regularity condition on $(a_n)$ contains the cases of $a_n = n^{1/p}$, $a_n = (2nLLn)^{1/2}$, etc., which are the basic examples we have in mind for applications. Then, if for some $u > 0$,
$$\sum_n 2^n P\{\|X\| > u\, a_{2^n}\} < \infty,$$

we have, for all $n$ and $0 < t \le 1$,
$$P\big\{ \max_{i \in I(n)} \|X_i\| > t u\, a_{2^n} \big\} \le 2^n P\{\|X\| > t u\, a_{2^n}\} \le \frac{1}{t^{2p}}\, \delta_n$$
where $\sum_n \delta_n < \infty$. Notice that Lemma 7.8 enters the setting of Remark 7.7.

Proof. For each $n$ set $\alpha_n = 2^n P\{\|X\| > u\, a_{2^n}\}$. There exists a sequence $(\beta_n)$ such that $\beta_n \ge \alpha_n$ and $\beta_n \le 2\beta_{n+1}$ for every $n$ and satisfying $\sum_n \beta_n < \infty$. Let $0 < t \le 1$, and let $k \ge 1$ be such that $2^{-k} \le t^p \le 2^{-k+1}$. If $k < n$, by the regularity assumption,
$$2^n P\{\|X\| > t u\, a_{2^n}\} \le 2^k \alpha_{n-k} \le 2^k \beta_{n-k} \le 2^{2k} \beta_n \le 4 t^{-2p} \beta_n.$$
If $k \ge n$,
$$2^n P\{\|X\| > t u\, a_{2^n}\} \le 2^n \le 4 t^{-2p}\, 2^{-n}.$$
The conclusion follows with $\delta_n = 4 \max(\beta_n, 2^{-n})$.

7.2 Examples of laws of large numbers

This section is devoted to applications of the preceding general Theorem 7.5 to some classical strong laws of large numbers for Banach space valued random variables. Issued from sharp isoperimetric methods, this general result is actually already of interest on the line, as will be clear from some of the statements we will obtain here. The applications we present deal with the independent and identically distributed (iid) SLLN and the SLLN of Kolmogorov and Prokhorov as typical and classical examples. Further applications can easily be imagined. The results we present follow rather easily from Theorem 7.5. We do not seek the greatest generality in normalizing sequences and (almost) only deal with the classical strong law of large numbers given by $a_n = n$. When thus $a_n = n$, we may simply take $m_n = 2^n$ as the blocking subsequence. As usual, we say that a sequence $(X_i)$ satisfies the strong law of large numbers (in short SLLN) if $S_n/n \to 0$ almost surely. We sometimes speak of the weak law of large numbers, meaning that $S_n/n \to 0$ in probability.

The first theorem is the vector valued version of the SLLN of Kolmogorov and Marcinkiewicz-Zygmund for iid random variables. Although this result can be deduced from Theorem 7.5, there is a simpler argument

indeed, based on Lemma 6.16 and the martingale representation of $\|S_n\| - E\|S_n\|$. In order however to demonstrate the universal character of the isoperimetric approach, we present the two proofs.

Theorem 7.9. Let $0 < p < 2$. Let $(X_i)$ be a sequence of iid random variables distributed like $X$ with values in $B$. Then
$$\frac{S_n}{n^{1/p}} \to 0 \ \text{almost surely}$$
if and only if
$$E\|X\|^p < \infty \quad \text{and} \quad \frac{S_n}{n^{1/p}} \to 0 \ \text{in probability}.$$

Proof. Necessity is obvious: if $S_n/n^{1/p} \to 0$ almost surely, then $X_n/n^{1/p} \to 0$ almost surely, from which it follows by the Borel-Cantelli lemma and identical distribution that
$$\sum_n P\{\|X\| > n^{1/p}\} < \infty,$$
which is equivalent to $E\|X\|^p < \infty$. Turning to sufficiency, it is enough, by Lemma 7.1, to prove the conclusion for the symmetric variable $X - X'$ where $X'$ denotes an independent copy of $X$. Since $X - X'$ satisfies the same conditions as $X$, we can assume without loss of generality $X$ itself to be symmetric.

1st proof. By Lemma 7.3, or directly by Lévy's inequality (2.6), it suffices to show that for every $\varepsilon > 0$
$$\sum_n P\Big\{\Big\|\sum_{i \in I(n)} X_i\Big\| > \varepsilon\, 2^{n/p}\Big\} < \infty$$
where $I(n) = \{2^{n-1}+1, \ldots, 2^n\}$. For each $n$, set $u_i = u_i(n) = X_i I_{\{\|X_i\| \le 2^{n/p}\}}$, $i \in I(n)$. We have
$$\sum_n P\{\exists\, i \in I(n) : u_i \ne X_i\} \le \sum_n 2^n P\{\|X\| > 2^{n/p}\}$$
which is finite under $E\|X\|^p < \infty$. It therefore suffices to show that for every $\varepsilon > 0$
$$\sum_n P\Big\{\Big\|\sum_{i \in I(n)} u_i\Big\| > \varepsilon\, 2^{n/p}\Big\} < \infty.$$
By Lemma 7.2, we know that
$$\lim_{n \to \infty} \frac{1}{2^{n/p}}\, E\Big\|\sum_{i \in I(n)} u_i\Big\| = 0.$$
Hence, it is sufficient to prove that for all $\varepsilon > 0$
$$\sum_n P\Big\{\Big\|\sum_{i \in I(n)} u_i\Big\| - E\Big\|\sum_{i \in I(n)} u_i\Big\| > \varepsilon\, 2^{n/p}\Big\} < \infty.$$
By the quadratic inequality (6.11) and identical distribution, the preceding sum is less than
$$\frac{1}{\varepsilon^2}\sum_n \frac{1}{2^{2n/p}} \sum_{i \in I(n)} E\|u_i\|^2 \le \frac{1}{\varepsilon^2}\sum_n \frac{1}{2^{n(2/p-1)}}\, E\big(\|X\|^2 I_{\{\|X\| \le 2^{n/p}\}}\big) = \frac{1}{\varepsilon^2}\, E\Big( \|X\|^2 \sum_n 2^{-n(2/p-1)} I_{\{2^n \ge \|X\|^p\}} \Big)$$
which is finite under $E\|X\|^p < \infty$ since $2/p - 1 > 0$. This concludes the first proof.

2nd proof. We apply the general Theorem 7.5, with Lemmas 7.6 and 7.8 for the control of the large values. As before, $E\|X\|^p < \infty$ is equivalent to saying that for every $\varepsilon > 0$
$$\sum_n 2^n P\{\|X\| > \varepsilon\, 2^{n/p}\} < \infty.$$
Let $\varepsilon > 0$ be fixed. By Lemmas 7.6 and 7.8, with $q = 2K_0$, there is a sequence $(k_n)$ of integers such that $\sum_n 2^{-k_n} < \infty$ and
$$\sum_n P\Big\{ \sum_{r=1}^{k_n} X_{I(n)}^{(r)} > 5\varepsilon\, 2^{n/p} \Big\} < \infty.$$
Apply then Theorem 7.5. As before, we can take $L = 0$ by Lemma 7.2. To check condition (7.4), note that
$$\sigma_n^2 \le 2^n E\big(\|X\|^2 I_{\{\|X\| \le 2^{n/p}\}}\big)$$
(at least for all $n$ large enough). The same computation as in the first proof shows that
$$\sum_n \frac{\sigma_n^2}{2^{2n/p}} < \infty$$
under $E\|X\|^p < \infty$, so that (7.4) will hold for every $\delta > 0$. The conclusion of Theorem 7.5 then tells us that, for all $\delta > 0$,
$$\sum_n P\Big\{ \Big\|\sum_{i \in I(n)} X_i\Big\| > 10^2 (5\varepsilon + 2K_0\delta)\, 2^{n/p} \Big\} < \infty.$$
The proof is therefore complete.

At this point, we open a digression on the hypothesis $S_n/n^{1/p} \to 0$ in probability in the preceding theorem, on which we shall actually come back in Chapter 9 on type and cotype of Banach spaces. Let us assume we deal here with a Borel random variable $X$ with values in a separable Banach space $B$. It is known and very easy to show that in finite dimensional spaces, when $E\|X\|^p < \infty$, $0 < p < 2$ (and $EX = 0$ for $1 \le p < 2$), then
$$\frac{S_n}{n^{1/p}} \to 0 \ \text{in probability}.$$
Let us briefly sketch the proof for real random variables. For all $\varepsilon, \delta > 0$,
$$P\{|S_n| > 2\varepsilon n^{1/p}\} \le n P\{|X| > \delta n^{1/p}\} + P\Big\{ \Big| \sum_{i=1}^{n} X_i I_{\{|X_i| \le \delta n^{1/p}\}} \Big| > 2\varepsilon n^{1/p} \Big\}.$$
If $0 < p < 1$,
$$\Big| \sum_{i=1}^{n} E\big(X_i I_{\{|X_i| \le \delta n^{1/p}\}}\big) \Big| \le n\, E\big(|X| I_{\{|X| \le \delta n^{1/p}\}}\big) \le n\, E|X|^p\, (\delta n^{1/p})^{1-p};$$
choose then $\delta = \delta(\varepsilon) > 0$ such that $E|X|^p\, \delta^{1-p} = \varepsilon$. If $1 \le p < 2$, by centering,
$$\Big| \sum_{i=1}^{n} E\big(X_i I_{\{|X_i| \le \delta n^{1/p}\}}\big) \Big| \le n\, E\big(|X| I_{\{|X| > \delta n^{1/p}\}}\big)$$
which can be made smaller than $\varepsilon n^{1/p}$ for all $n$ large enough. Hence, in any case, for $n$ large, we can center and write
$$P\Big\{ \Big| \sum_{i=1}^{n} X_i I_{\{|X_i| \le \delta n^{1/p}\}} \Big| > 2\varepsilon n^{1/p} \Big\} \le P\Big\{ \Big| \sum_{i=1}^{n} \big( X_i I_{\{|X_i| \le \delta n^{1/p}\}} - E(X_i I_{\{|X_i| \le \delta n^{1/p}\}}) \big) \Big| > \varepsilon n^{1/p} \Big\} \le \frac{1}{\varepsilon^2 n^{2/p}} \sum_{i=1}^{n} E\big(|X_i|^2 I_{\{|X_i| \le \delta n^{1/p}\}}\big)$$
by Chebyshev's inequality. Now, by integration by parts,
$$n^{1-2/p}\, E\big(|X|^2 I_{\{|X| \le \delta n^{1/p}\}}\big) \le \int_0^{\delta} \big( t n^{1/p} \big)^p\, P\{|X| > t n^{1/p}\}\, \frac{dt^2}{t^p}$$
so that the conclusion follows by dominated convergence since $\lim_{u \to \infty} u^p P\{|X| > u\} = 0$ under $E|X|^p < \infty$. Note indeed that if $X$ is real symmetric (for example), then $S_n/n^{1/p} \to 0$ in probability as soon as $\lim_{t \to \infty} t^p P\{|X| > t\} = 0$ (which is actually necessary and sufficient). We shall come back to this and extensions to the vector valued setting in Chapter 9.

From the preceding observation, a finite dimensional approximation argument shows that, in arbitrary separable Banach spaces, when $0 < p \le 1$, the integrability condition $E\|X\|^p < \infty$ (and $X$ has mean zero for $p = 1$) also implies that $S_n/n^{1/p} \to 0$ in probability. Indeed, since $X$ is Radon and $E\|X\|^p < \infty$, we can choose for every $\varepsilon > 0$ a finite valued random variable $Y$ (with mean zero when $p = 1$ and $EX = 0$) such that $E\|X - Y\|^p < \varepsilon$. Letting $T_n$ denote the partial sums associated to independent copies of $Y$, we have, by the triangle inequality and since $p \le 1$,
$$E\|S_n - T_n\|^p \le n\, E\|X - Y\|^p \le \varepsilon n.$$
$Y$ being a finite dimensional random variable, $T_n/n^{1/p} \to 0$ in probability and the claim follows immediately.

In particular, we recover the extension to separable Banach spaces of the classical iid SLLN of Kolmogorov. Theorem 7.9 indeed states in this case (i.e. $0 < p \le 1$) that $S_n/n^{1/p} \to 0$ almost surely if and only if $E\|X\|^p < \infty$ (and $EX = 0$ for $p = 1$).

Corollary 7.10. Let $X$ be a Borel random variable with values in a separable Banach space $B$. Then
$$\frac{S_n}{n} \to 0 \ \text{almost surely}$$
if and only if $E\|X\| < \infty$ and $EX = 0$.

The proof of this result presented as a corollary of Theorems 7.5 and 7.9 is of course too complicated, and a rather elementary direct proof can be given (see e.g. [HJ3]). It is however instructive to deduce it from general methods.

The preceding elementary approximation argument does not extend to the case $1 < p < 2$. It would require an inequality like
$$E\Big\| \sum_i Y_i \Big\|^p \le C \sum_i E\|Y_i\|^p$$
for every finite sequence $(Y_i)$ of independent centered random variables with values in $B$, $C$ depending on $B$ and $p$. Such an inequality does not hold in general Banach spaces and actually defines the spaces of type $p$ discussed later in Chapter 9. Mimicking the preceding argument, we can however already announce that in (separable) spaces of type $p$, $1 < p < 2$, $S_n/n^{1/p} \to 0$ almost surely if and only if $E\|X\|^p < \infty$ and $EX = 0$. We shall see how this property is actually characteristic of type $p$ spaces (see Theorem 9.21 below).

The following example completes this discussion, showing in particular that $S_n/n^{1/p}$, $1 < p < 2$, cannot in general tend to 0 even if $X$ has very strong integrability properties. The example is adapted for further purposes (the constructed random variable is moreover pregaussian in the sense of Section 9.3).

Example 7.11. In $c_0$, the separable Banach space of all real sequences tending to 0 equipped with the sup-norm, there exists, for every decreasing sequence $(\lambda_n)$ of positive numbers tending to 0, an almost surely bounded and symmetric random variable $X$ such that $(S_n/n\lambda_n)$ does not tend to 0 in probability.

Proof. Let $(\eta_k)_{k \ge 1}$ be independent with distribution
$$P\{\eta_k = +1\} = P\{\eta_k = -1\} = \frac{1}{2}\big(1 - P\{\eta_k = 0\}\big) = \frac{1}{2\sqrt{\log(k+1)}}\,.$$
Define $\beta_k = \lambda_n$ whenever $2^{n-1} \le k < 2^n$ and take $X$ to be the random variable in $c_0$ with coordinates $(\beta_k \eta_k)_{k \ge 1}$. Then $X$ is clearly symmetric and, as is easily seen, almost surely bounded. However, $(S_n/n\lambda_n)$ does not tend to 0 in probability. Indeed, denote by $(\eta_{k,i})_i$ independent copies of $(\eta_k)$; if $(e_k)$ is the canonical basis of $c_0$,
$$\frac{S_n}{n\lambda_n} = \sum_{k=1}^{\infty} \frac{\beta_k}{n\lambda_n} \Big( \sum_{i=1}^{n} \eta_{k,i} \Big) e_k\,.$$
Set further, for each $n$,
$$E_{n,k} = \bigcap_{i=1}^{n} \{\eta_{k,i} = 1\}\,, \qquad A_n = \bigcup_{k \le 2^n} E_{n,k}\,.$$
Clearly $P(E_{n,k}) = \big(2\sqrt{\log(k+1)}\big)^{-n}$ and, by independence,
$$P(A_n) = 1 - \prod_{k \le 2^n} \big( 1 - P(E_{n,k}) \big)\,.$$
Therefore, $P(A_n) \to 1$. Now, on $A_n$, since $(\lambda_n)$ is decreasing, $\beta_k \ge \lambda_n$ for $k \le 2^n$, and hence
$$\|S_n\| \ge \max_{k \le 2^n} \beta_k \Big| \sum_{i=1}^{n} \eta_{k,i} \Big| \ge n \lambda_n\,.$$
Hence, for every $0 < \varepsilon < 1$,
$$\liminf_{n \to \infty} P\Big\{ \frac{\|S_n\|}{n\lambda_n} > \varepsilon \Big\} \ge \liminf_{n \to \infty} P(A_n) = 1$$
which establishes the claim.

After having extended to the vector valued setting the iid SLLN, we turn to the SLLN for independent but not necessarily identically distributed random variables. Here again, we restrict to classical statements like the SLLN of Kolmogorov.
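The classical SLLN of Kolmogorov mentioned here rests on the variance condition $\sum_i E X_i^2 / i^2 < \infty$ for independent mean zero summands. The sketch below (the variance profile $\operatorname{Var}(X_i) = i^{1/2}$ is an arbitrary choice satisfying the condition, not taken from the text) checks numerically that the partial sums of this series stay bounded while $\operatorname{Var}(S_n/n) = \sum_{i \le n} \operatorname{Var}(X_i)/n^2$ tends to 0, which is the mechanism behind the almost sure convergence:

```python
def kolmogorov_series(n_terms, var_exponent=0.5):
    # partial sums of sum_i Var(X_i)/i^2 with Var(X_i) = i**var_exponent;
    # the terms are i**(var_exponent - 2), summable when var_exponent < 1
    return sum(i ** var_exponent / i ** 2 for i in range(1, n_terms + 1))

partials = [kolmogorov_series(n) for n in (10, 100, 1000, 10000)]

# variance of S_n/n under the same profile: should tend to 0
var_ratio = [sum(i ** 0.5 for i in range(1, n + 1)) / n ** 2 for n in (10, 100, 1000)]
print(partials, var_ratio)
```

Convergence of the variance series (here the partial sums approach a finite limit below 2.62) is exactly what feeds the Borel-Cantelli/blocking machinery of the preceding section in the scalar case.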

for ea h Æ > 0 . : : : . Kolmogorov dedu ed his idd statement. 1 IPf max kXi k > tv2n g Æn exp i2I (n) t (7:10) where (7:11) P s Æ n n < 1 for some s > 0 . together with a trun ation argument. Sn n Then the SLLN holds. Theorem 7. all n and t. If we de.6. Sn =n ! 0 almost surely. This SLLN states that if (Xi ) is a sequen e of independent mean zero real random variables su h that X IEX 2 i 2 < 1.e. Let (Xi ) be a sequen e of independent random variables with values in B . It is hara terized by a areful balan e between onditions on the norm of the Xi 's and assumptions on their weak moments. The subsequent orollary is perhaps a more pra ti al result. i i then the SLLN holds. We take again our framework of non ne essarily Radon random variables. and that. Assume that (7:8) Xi i ! 0 almost surely Sn n ! 0 in probability: and (7:9) Assume further that for some v > 0 . We simply apply Theorem 7.e.5 and Lemma 7. X 0 X 1 exp Æ22n = sup IE(f 2 (Xi )IfkXi k2n g )A < 1 f 2 D n i2I (n) (where we re all that I (n) = f2n 1 + 1. i. 0 < t 1 . i. ! 0 almost surely: Proof. From this result. 2n g ).204 like the SLLN of Kolmogorov. The next theorem points toward a general extension of this result already of interest on the line.12.

ne Yi = Xi IfkXi kig . If (Xi0 ) is an independent opy of the sequen e (Xi ) . We therefore assume that kXi k i almost surely. almost surely Yi = Xi for every i large enough so that it learly suÆ es to prove the result for the sequen e (Yi ) instead of (Xi ) . the sequen e of symmetri variables (Xi Xi0) will satisfy the same kind of hypotheses . by (7.8).

6 and Theorem 7.1. 9 = > "2n < 1 .3.5. the SLLN is satis.10 but with (7.5. for every u > 0 .205 as (Xi ) . Æ > 0 and all q 2K0 .2). Under the hypotheses of Theorem 7.9) (Lemma 7.10) repla ed by X (7:100) 0 n 1s 1 IEkXi kp A < 1 2np i2I (n) X for some p > 0 and s > 0 . By (7. Sin e Xi =i ! 0 almost surely. hen e the on lusion by Lemma 7. we an take L = 0 thanks to (7.13. we an therefore redu e to the ase of a symmetri sequen e (Xi ) . X IPf max kXi k > u2ng < 1: i2I (n) n Summarizing the on lusions of Lemma 7. In Theorem 7. whatever the hoi e of (kn ) will be. 8 < X IP Xi : n i2I (n) X It follows obviously that " q > 102 2s u + v log K0 8 < X IP Xi : n i2I (n) X 1! # 9 = + qÆ 2n < 1: . and for s assumed to be an integer. Corollary 7. for every " > 0 . for all u.9) and Lemma 7.

9) des ribing the usual assumption in probability on the sequen e (Sn ) .10 0 ) an also be IPfkXik > tg: In Theorem 7.12 follows.12 (and Corollary 7. (7.8) and (7. X i2I (n) IPfkXi k > tv2n g 1 1 X IEkXi kp (tv)p 2np i2I (n) X C (p) exp 1t v1p 21np IEkXi kp i2I (n) from whi h (7.10) of Theorem 7. Proof. if one wishes it. for every n .9) are of ourse ne essary. For real entered random variables. this ondition . onditions (7. Note that the sums repla ed.13). by expressions of the type sup tp t>0 X i2I (n) P i2I (n) IEkXi kp in (7. Simply note that.ed.

206 (7.9) is automati ally satis.

5 and 7. simply note that for p 2 0 12=p X 1 X 1 IEf 2 (Xi ) n(p=2+1) IEjf (Xi )jp A 2 n 2 i2I (n) 2 i2I (n) for every n and f in D . it might be obtained simpler from Lemma 6. As easily as we obtained Theorem 7. i. ip i the SLLN holds. it is legitimate (and we used it in the proof of Theorem 7. Along the same line of ideas. ! 0 almost surely.8). i i X then the SLLN holds. This is sometimes onvenient when various statements have to be ompared. To in lude this result. for example. i. If for some 1p2 X IEkXi kp < 1.10 0 )) holds then under the stronger ondition IEkXi kp <1 ip i X whi h is in this ase seen to be weaker and weaker as p in reases.12 from the pre eding se tion. One feature of the isoperimetri approa h at the basis of Theorems 7. (7.8) and (7. We still work under onditions (7.12 is a ommon treatment of the SLLN of Kolmogorov and Prokhorov. if and only if the weak law of large numbers holds. under (7.14.11) but reinfor e (7.) Corollary 7.e.10) (via (7. see [K-Z℄.11) and the real statement su h obtained is sharp. (Sin e no weak moments are involved.11) provided p 2 and we have therefore the following orollary.12) to assume that kXi k1 i for all i .8) in kXi k1 i=LLi for ea h i .12 also ontains extensions of Brunk's SLLN. we get an extension of Prokhovov's theorem to the ve tor valued ase. Brunk's theorem in the real ase states that if (Xi ) are independent with mean zero satisfying IEjXi jp p=2+1 < 1 for some p 2. Let (Xi ) be a sequen e of independent random variables with values in B .e. Note also that.ed under (7.16.9) and (7. Sn =n Sn =n ! 0 in probability. Then this ondition implies (7. Theorem 7.

207 where LLt = L(Lt) and Lt = max(1. t 0 . log t). This boundedness assumption provides the exa t bound on large values and a tually .

11) be omes ne essary. the proof follows the ne essity portion in Theorem 7. for ea h n and t . 0 < t 1 . kXi k1 i=LLi: Then the SLLN is satis. Assume that. for every i . ondition (7.5 and we therefore do not detail it. Indeed. Corollary 7.10) of Theorem 7. Let (Xi ) be a sequen e of independent random variables with values in B .ts (7. IPf max kXi i2I (n) k > 2t2ng Æ n exp 1 t with Æn = exp( 2LL2n) whi h is summable.15. We thus obtain as a last orollary the following version of Prokhorov's SLLN.12. Note that under the pre eding boundedness assumption on the Xi 's.

Notes and references

Various expositions on the strong law of large numbers (SLLN) for sums of independent real random variables may be found in the classical works, e.g. [Gn-K], [Re], [Sto], [Pe], [Le1], etc. In particular, Lemmas 7.1 and 7.3 are clearly presented in [Sto], and the vector valued situation does not make any difference. This chapter, based on the isoperimetric approach of [Ta11], [Ta9] presented in Section 6.3, follows the paper [L-T5]. In particular, Lemmas 7.5 and 7.6 are taken from there.

The classical iid SLLN of Kolmogorov in separable Banach spaces (Corollary 7.10) was established by E. Mourier back in the early fifties [Mo] (see also [F-M1], [F-M2]). A simple proof may be found in [HJ3]. The non-separable version of this result, which is not discussed in this text, has recently given rise to many developments related to measure theory; see for example [HJ5], [Ta3] and, in the context of empirical processes, [V-C1], [V-C2], [G-Z2].

The extension of the Marcinkiewicz-Zygmund SLLN (Theorem 7.9) is due independently to A. de Acosta [A6] and to T. A. Azlarov and N. A. Volodin [A-V]. The real valued statement can be obtained as a consequence of the Fuk-Nagaev inequality (cf. [F-N], [Na1]). Let us mention that a suitable vector valued version of the SLLN of S. V. Nagaev [Na1], [Na2] seems still to be found; in a special class of Banach spaces however, cf. [Yu2]. Theorem 7.14 is due to J. Kuelbs and J. Zinn [K-Z], which extended results of A. Beck [Be] and of J. Hoffmann-Jørgensen and G. Pisier [HJ-P] in special classes of spaces (cf. Chapter 9). The work of J. Kuelbs and J. Zinn was important in realizing that, under an assumption in probability, no conditions have to be imposed on the spaces; see [Al2], [Al3] in this regard, and also [He5], [He6] for further developments. Corollary 7.11 is taken from [C-T1]; Theorem 7.12 comes from [L-T5]. Brunk's SLLN appeared in [Br] and was first investigated in Banach spaces in [Wo3]. Extensions of Prokhorov's SLLN [Pro2] were undertaken in [K-Z] and [Al1] (where in particular necessity was shown), the final result being obtained in [L-T4] (see also [L-T5]). Applications of the isoperimetric method to strong limit theorems for trimmed sums of iid random variables are further described in [L-T5].
Chapter 8. The law of the iterated logarithm

8.1 The law of the iterated logarithm of Kolmogorov
8.2 The law of the iterated logarithm of Hartman-Wintner-Strassen
8.3 On the identification of the limits
Notes and references
Chapter 8. The law of the iterated logarithm

This chapter is devoted to the classical laws of the iterated logarithm of Kolmogorov and of Hartman-Wintner-Strassen in the vector valued setting. These extensions both enlighten the scalar statements and describe various new interesting phenomena in the infinite dimensional setting. As the law of large numbers and the central limit theorem, the law of the iterated logarithm (in short LIL) is a vast subject in Probability theory. We only concentrate here on the classical (but typical) forms of the LIL for sums of independent Banach space valued random variables. As in the previous chapter on the strong law of large numbers, the isoperimetric approach proves to be an efficient tool in this study. The main results described here show again how the strong almost sure statement of the law of the iterated logarithm reduces to the corresponding (necessary) one in probability.

We first describe the extension of Kolmogorov's LIL. In Section 8.2, starting from the real case, we describe the Hartman-Wintner-Strassen form of the (iid) LIL in Banach space and characterize the random variables which satisfy it. A last survey paragraph is devoted to a discussion of various results and questions about identification of the limits in the vector valued LIL. In all this chapter, if $(X_i)_{i\in\mathbb{N}}$ is a sequence of random variables, we set, as usual, $S_n = X_1 + \cdots + X_n$, $n \ge 1$. Recall also that $LL$ denotes the iterated logarithm function, that is, $LLt = L(Lt)$ where $Lt = \max(1, \log t)$, $t \ge 0$.

8.1 The law of the iterated logarithm of Kolmogorov

Let $(X_i)_{i\in\mathbb{N}}$ be a sequence of independent real mean zero random variables such that $\mathbb{E}X_i^2 < \infty$ for all $i$. Set, for each $n$, $s_n = (\sum_{i=1}^n \mathbb{E}X_i^2)^{1/2}$ and assume that the sequence $(s_n)$ increases to infinity. Assume further that, for some sequence $(\eta_i)$ of positive numbers tending to $0$,
$$\|X_i\|_\infty \le \eta_i\, s_i / (LL s_i^2)^{1/2} \quad \text{for every } i .$$
Then Kolmogorov's LIL states that, with probability one,
$$(8.1)\qquad \limsup_{n\to\infty} \frac{S_n}{(2 s_n^2\, LL s_n^2)^{1/2}} = 1 .$$
The proof of the upper bound in (8.1) is based on the exponential inequality of Lemma 1.6 applied to the sums $S_n$ of independent mean zero random variables. The lower bound, somewhat more complicated, relies on Kolmogorov's converse exponential inequality described in the following lemma. Its proof (cf. [Sto]) is a precise amplification of the argument leading to (4.2).

Lemma 8.1. Let $(X_i)$ be a finite sequence of independent mean zero real random variables such that $\|X_i\|_\infty \le a$ for all $i$. Then, for every $\gamma > 0$, there exist positive numbers $K(\gamma)$ (large enough) and $\varepsilon(\gamma)$ (small enough), depending on $\gamma$ only, such that, for every $t$ satisfying $t \ge K(\gamma)\, b$ and $ta \le \varepsilon(\gamma)\, b^2$ where $b = (\sum_i \mathbb{E}X_i^2)^{1/2}$,
$$\mathbb{P}\Big\{ \sum_i X_i > t \Big\} \ge \exp\big[-(1+\gamma)\, t^2 / 2b^2\big] .$$

The next theorem presents the extension to Banach space valued random variables of the LIL of Kolmogorov. This extension involves a careful balance between conditions on the norms of the random variables and weak moment assumptions. Since tightness properties are unessential in this first section, we describe the result in our setting of a Banach space $B$ for which there exists a countable subset $D$ of the unit ball of the dual space such that $\|x\| = \sup_{f \in D} |f(x)|$ for all $x$ in $B$; $X$ is a random variable with values in $B$ if $f(X)$ is measurable for every $f$ in $D$.

Theorem 8.2. Let $B$ be as before and let $(X_i)_{i\in\mathbb{N}}$ be a sequence of independent random variables with values in $B$ such that $\mathbb{E}f(X_i) = 0$ and $\mathbb{E}f^2(X_i) < \infty$ for each $i$ and $f$ in $D$. Set, for each $n$,
$$s_n = \sup_{f\in D}\Big(\sum_{i=1}^n \mathbb{E}f^2(X_i)\Big)^{1/2},$$
assumed to increase to infinity. Assume further that, for some sequence $(\eta_i)$ of positive numbers tending to $0$ and all $i$,
$$(8.2)\qquad \|X_i\|_\infty \le \eta_i\, s_i / (2\, LL s_i^2)^{1/2} .$$
Then, if the sequence $(S_n / (2 s_n^2 LL s_n^2)^{1/2})$ converges to $0$ in probability, with probability one,
$$(8.3)\qquad \limsup_{n\to\infty} \frac{\|S_n\|}{(2 s_n^2\, LL s_n^2)^{1/2}} = 1 .$$

This type of statement clearly shows in what direction the extension of the real result has to be understood. The proof of Theorem 8.2 could seem to be somewhat involved. Let us mention however that the proof that the limsup in (8.3) is finite (less than some numerical constant) is rather easy on the basis of the isoperimetric approach of Section 6.3, and this will be accomplished in a first step. The fact that it is actually less than $1$ then requires some technicalities. The lower bound reproduces the real case.
Proof. To simplify the notation, set, for each $n$, $u_n = (2LLs_n^2)^{1/2}$. As announced, we first show that
$$(8.4)\qquad \limsup_{n\to\infty} \frac{\|S_n\|}{s_n u_n} \le M \quad\text{almost surely}$$
for some numerical constant $M$. To this aim, we can assume by Lemma 7.1 that we deal with symmetric variables, replacing $(X_i)$ by $(X_i - X_i')$ where $(X_i')$ is an independent copy of the sequence $(X_i)$, since (by centering, cf. (2.5))
$$s_n \le \sup_{f\in D}\Big(\sum_{i=1}^n \mathbb{E}f^2(X_i - X_i')\Big)^{1/2} \le 2 s_n .$$
For each $n$, define $m_n$ as the smallest integer $m$ such that $s_m > 2^n$; it is easily seen that $s_{n+1}/s_n \to 1$, so that, at least for $n$ large enough, $s_{m_n}$ is of the order of $2^n$ and $s_{m_{n+1}}/s_{m_n} \le 2$. By the Borel-Cantelli lemma and Lévy's inequality (2.6) (increasing $M$), in order to establish (8.4) it suffices to show that
$$\sum_n \mathbb{P}\Big\{\max_{m_{n-1}<m\le m_n} \frac{\|S_m\|}{s_m u_m} > M\Big\} < \infty ,$$
and, using the preceding, that
$$\sum_n \mathbb{P}\{\|S_{m_n}\| > M s_{m_n} u_{m_n}\} < \infty .$$
We make use of the isoperimetric inequality of Theorem 6.17, which we apply, for each $n$, to the sample of independent and symmetric random variables $(X_i)_{i\le m_n}$. Assuming for simplicity that the sequence $(\eta_i)$ is bounded by $1$, we have by (8.2), for $k = [u_{m_n}^2]+1$,
$$\sum_{i=1}^{k} \|X_i\| \le (u_{m_n}^2 + 1)\,\frac{s_{m_n}}{u_{m_n}} \le 2\, s_{m_n} u_{m_n} .$$
Since the sequence $(S_n/s_nu_n)$ converges to $0$ in probability and the variables are symmetric, $\mathbb{E}\|S_n\|/s_nu_n \to 0$, so that $m^2$ in Theorem 6.17 can be bounded by $2 s_{m_n}^2$ for $n$ large. For these choices, we take there $q = 2K_0$ and $s = t = 20\sqrt{q}\, s_{m_n} u_{m_n}$. It thus follows from (6.17) together with (6.18) that, for large $n$,
$$\mathbb{P}\{\|S_{m_n}\| > (60\sqrt{2}\,K_0 + 1)\, s_{m_n} u_{m_n}\} \le 2^{-u_{m_n}^2} + 2\exp(-u_{m_n}^2)\,,$$
which gives the result since $u_{m_n}^2 \ge 2LL4^n$. Note that this proof establishes (8.4) as soon as the sequence $(S_n/s_nu_n)$ is only bounded in probability.

We now turn to the more delicate proof that the limsup is actually equal to $1$, and begin by showing that it is less than $1$. Observe first that, by symmetrization, since $S_n/s_nu_n \to 0$ in probability,
$$(8.5)\qquad \lim_{n\to\infty} \frac{1}{s_n u_n}\,\mathbb{E}\Big\|\sum_{i=1}^n \varepsilon_i X_i\Big\| = 0$$
where, as usual, $(\varepsilon_i)$ denotes a Rademacher sequence independent of $(X_i)$. For $\lambda > 1$, let now $m_n$ be, for each $n$, the smallest $m$ such that $s_m > \lambda^n$. To establish the claim, it will be sufficient to show that, for every $\varepsilon > 0$ and $\lambda > 1$,
$$(8.6)\qquad \sum_n \mathbb{P}\{\|S_{m_n}\| > (1+\varepsilon)\, s_{m_n} u_{m_n}\} < \infty .$$
Indeed, let $\delta > 0$; choosing $\varepsilon > 0$ and $\lambda > 1$ such that $(1+\delta)\,s_{m_{n-1}}u_{m_{n-1}} \ge (1+\varepsilon)^{-1}(1+2\delta)\,s_{m_n}u_{m_n}$ for all $n$ large enough, a simple use of Ottaviani's inequality (Lemma 6.2) shows that
$$\mathbb{P}\Big\{\max_{m_{n-1}<m\le m_n}\frac{\|S_m\|}{s_m u_m} > 1+2\delta\Big\} \le \mathbb{P}\Big\{\max_{m_{n-1}<m\le m_n}\|S_m\| > (1+2\delta)\,s_{m_{n-1}}u_{m_{n-1}}\Big\} \le 2\,\mathbb{P}\{\|S_{m_n}\| > (1+\delta)\,s_{m_{n-1}}u_{m_{n-1}}\}\,,$$
a quantity controlled by (8.6); the Borel-Cantelli lemma then yields $\limsup_n \|S_n\|/s_nu_n \le 1$ almost surely.
So it is enough to show (8.6). The proof is based on a finite dimensional approximation argument through some entropy estimate. Let now $\varepsilon > 0$ and $\lambda > 1$ be fixed. For $f, g$ in $D$ and every $n$, set
$$d_{n2}(f,g) = \Bigg(\frac{1}{s_{m_n}^2}\sum_{i=1}^{m_n}\mathbb{E}(f-g)^2(X_i)\Bigg)^{1/2} .$$
Recall that $N(D, d_{n2}, \varepsilon)$ denotes the minimal number of elements $g$ in $D$ such that, for every $f$ in $D$, there exists such a $g$ with $d_{n2}(f,g) < \varepsilon$. For every $n$, define
$$\alpha_n = \frac{1}{s_{m_n}u_{m_n}}\,\mathbb{E}\Big\|\sum_{i=1}^{m_n}\varepsilon_i X_i\Big\|$$
which tends to $0$ when $n$ goes to infinity by (8.5).

Lemma 8.3. For every $n$ large enough,
$$N(D, d_{n2}, \varepsilon) \le \exp(\alpha_n u_{m_n}^2) .$$

Proof. Suppose this is not the case. Then, infinitely often in $n$, there exists $D_n \subset D$ with
$$\mathrm{Card}\,D_n = [\exp(\alpha_n u_{m_n}^2)] + 1$$
such that $d_{n2}(f,g) \ge \varepsilon$ for all $f \ne g$ in $D_n$. By Lemma 1.6 and (8.2), for $n$ large enough and every $h = f-g$, $f \ne g$ in $D_n$,
$$\mathbb{P}\Bigg\{\frac{1}{s_{m_n}^2}\sum_{i=1}^{m_n} h^2(X_i) < \frac{\varepsilon^2}{2}\Bigg\} \le \exp(-u_{m_n}^2)\,,$$
so that, since $(\mathrm{Card}\,D_n)^2\exp(-u_{m_n}^2) \le 1/4$ for these $n$, infinitely often in $n$,
$$\mathbb{P}\Bigg\{\forall\, f \ne g \in D_n\,,\ \frac{1}{s_{m_n}^2}\sum_{i=1}^{m_n}(f-g)^2(X_i) > \frac{\varepsilon^2}{2}\Bigg\} \ge \frac{3}{4}\, .$$
We would like to apply, conditionally on the $X_i$'s, the Sudakov type minoration inequality put forward in Proposition 4.13. To this aim, note first that, by (8.2), $\max_{i\le m_n}\|X_i\|_\infty \le s_{m_n}/u_{m_n}$, so that the hypotheses of Proposition 4.13, with $K$ its numerical constant, are satisfied for $n$ large. This proposition then shows that, with probability bigger than $1/2$, infinitely often in $n$,
$$\mathbb{E}_\varepsilon\Big\|\sum_{i=1}^{m_n}\varepsilon_i X_i\Big\| \ge \frac{\varepsilon\, s_{m_n}}{2\sqrt 2\, K}\,(\log \mathrm{Card}\,D_n)^{1/2} \ge \frac{\varepsilon\,\sqrt{\alpha_n}}{2\sqrt 2\, K}\, s_{m_n} u_{m_n}\, .$$
Integrating, it follows that, infinitely often in $n$, $\alpha_n \ge \varepsilon\sqrt{\alpha_n}/4\sqrt 2\, K$, that is $\sqrt{\alpha_n} \ge \varepsilon/4\sqrt 2\, K$, which leads to a contradiction since $\alpha_n \to 0$. The proof of Lemma 8.3 is complete.

We can now establish (8.6). According to Lemma 8.3, for each $n$ (large enough) and $f$ in $D$, we denote by $g_n(f)$ an element of $D$ such that $d_{n2}(f, g_n(f)) < \varepsilon$, in such a way that the set $D_n$ of all the $g_n(f)$'s has a cardinality less than $\exp(\alpha_n u_{m_n}^2)$. We write
$$\Big\|\sum_{i=1}^{m_n} X_i\Big\| \le \sup_{g\in D_n}\Big|\sum_{i=1}^{m_n} g(X_i)\Big| + \sup_{h\in D_n'}\Big|\sum_{i=1}^{m_n} h(X_i)\Big|$$
where $D_n' = \{f - g_n(f)\,;\ f\in D\}$. The main observation concerning $D_n'$ is that
$$\sup_{h\in D_n'}\Bigg(\sum_{i=1}^{m_n}\mathbb{E}h^2(X_i)\Bigg)^{1/2} \le \varepsilon\, s_{m_n}\, .$$
It is then an easy exercise to see how the proof of the first part can be reproduced (and adapted to the case of a norm of the type $\sup_{h\in D_n'}|h(\cdot)|$, thus depending on $n$) to yield that, for some numerical constant $M$,
$$\sum_n \mathbb{P}\Bigg\{\sup_{h\in D_n'}\Big|\sum_{i=1}^{m_n} h(X_i)\Big| > M\varepsilon\, s_{m_n}u_{m_n}\Bigg\} < \infty .$$
The proof of (8.6) will therefore be complete if we show that
$$\sum_n \mathbb{P}\Bigg\{\sup_{g\in D_n}\Big|\sum_{i=1}^{m_n} g(X_i)\Big| > (1+\varepsilon)\, s_{m_n}u_{m_n}\Bigg\} < \infty .$$
But now, as in the real case, we have by Lemma 1.6, for all $n$ large enough (in order to efficiently use (8.2) and $\eta_i \to 0$, first neglecting the first terms of the summation),
$$\mathbb{P}\Bigg\{\sup_{g\in D_n}\Big|\sum_{i=1}^{m_n} g(X_i)\Big| > (1+\varepsilon)\, s_{m_n}u_{m_n}\Bigg\} \le 2\,\mathrm{Card}\,D_n\,\exp\big(-(1+\varepsilon)\,LLs_{m_n}^2\big)\,,$$
hence the result since $\mathrm{Card}\,D_n \le \exp(2\alpha_n LLs_{m_n}^2)$ and $\alpha_n \to 0$.

In the last part of this proof, we show that
$$\limsup_{n\to\infty}\frac{\|S_n\|}{s_n u_n} \ge 1 \quad\text{almost surely}\,,$$
simply reproducing (more or less) the real case based on the exponential minoration inequality of Lemma 8.1. By the zero-one law (cf. [Sto]), the limsup we study is almost surely non-random. Recall that, for $\lambda > 1$, we let $m_n = \inf\{m\,;\ s_m > \lambda^n\}$; by the upper bound we just established, we have (for example) that
$$\mathbb{P}\{\|S_{m_n}\| \le 2 s_{m_n}u_{m_n}\ \text{for all $n$ large enough}\} = 1 .$$
Suppose it can be proved that, for every $\varepsilon > 0$ and all $\lambda > 1$ large enough,
$$(8.7)\qquad \mathbb{P}\Bigg\{\Big\|\sum_{i\in I(n)} X_i\Big\| > (1-\varepsilon)\,\big(2\,\sigma_n^2\, LL\,\sigma_n^2\big)^{1/2}\ \text{i.o. in } n\Bigg\} = 1$$
where $I(n)$ denotes the set of integers $m_{n-1} < i \le m_n$ and $\sigma_n^2 = \sup_{f\in D}\sum_{i\in I(n)}\mathbb{E}f^2(X_i)$. Then, on a set of probability one, infinitely often in $n$,
$$\|S_{m_n}\| \ge \Big\|\sum_{i\in I(n)}X_i\Big\| - \|S_{m_{n-1}}\| \ge (1-\varepsilon)\big[2(s_{m_n}^2 - s_{m_{n-1}}^2)\,LL(s_{m_n}^2 - s_{m_{n-1}}^2)\big]^{1/2} - 2\, s_{m_{n-1}}u_{m_{n-1}}$$
since $\sigma_n^2 \ge s_{m_n}^2 - s_{m_{n-1}}^2$. For $n$ large, this lower bound behaves like
$$\Big[(1-\varepsilon)^2\Big(1 - \frac{1}{\lambda^2}\Big)\Big]^{1/2} s_{m_n}u_{m_n} - \frac{2}{\lambda}\, s_{m_n} u_{m_n}\,,$$
which, for $\lambda$ large enough and $\varepsilon > 0$ arbitrarily small, is arbitrarily close to $s_{m_n}u_{m_n}$. Since the limsup is almost surely non-random, this will therefore show that it is $\ge 1$ almost surely, hence the conclusion.

Let us prove (8.7). For each $n$, let $f_n$ in $D$ be such that
$$\sum_{i\in I(n)}\mathbb{E}f_n^2(X_i) \ge (1-\varepsilon)\,\sup_{f\in D}\sum_{i\in I(n)}\mathbb{E}f^2(X_i)\, .$$
We can now apply Lemma 8.1 to the independent centered real random variables $f_n(X_i)$, $i\in I(n)$. Taking there $\gamma = \varepsilon/(1-\varepsilon)$, for all $n$ large enough,
$$\mathbb{P}\Bigg\{\sum_{i\in I(n)}f_n(X_i) > (1-\varepsilon)\Bigg(2\sum_{i\in I(n)}\mathbb{E}f_n^2(X_i)\;LL\sum_{i\in I(n)}\mathbb{E}f_n^2(X_i)\Bigg)^{1/2}\Bigg\} \ge \exp\Bigg(-(1-\varepsilon)\,LL\sum_{i\in I(n)}\mathbb{E}f_n^2(X_i)\Bigg)\,,$$
where it has been used that
$$s_{m_n} \le \sup_{f\in D}\Bigg(\sum_{i\in I(n)}\mathbb{E}f^2(X_i)\Bigg)^{1/2} + s_{m_{n-1}} \le \frac{1}{1-\varepsilon}\Bigg(\sum_{i\in I(n)}\mathbb{E}f_n^2(X_i)\Bigg)^{1/2} + s_{m_{n-1}}$$
and that the ratio $s_{m_{n-1}}/s_{m_n}$ is small for $\lambda$ large. These lower bounds are not summable in $n$, and since the events in (8.7) concern the disjoint blocks $I(n)$ of independent variables, the Borel-Cantelli lemma (independent case) yields the conclusion. Theorem 8.2 is thus established.

8.2 The law of the iterated logarithm of Hartman-Wintner-Strassen
Having described the fundamental LIL of Kolmogorov for sums of independent random variables and its extension to the vector valued case, we now turn to the independent and identically distributed (iid) LIL and the results of Hartman-Wintner and Strassen. Let $X$ be a random variable; in the rest of the chapter, $(X_i)_{i\in\mathbb{N}}$ will always denote a sequence of independent copies of $X$. The basic normalization sequence is here
$$a_n = (2nLLn)^{1/2}\,,$$
which is seen to correspond to the sequence in (8.1): when $\mathbb{E}X^2 < \infty$, the sequence $(a_n)$ stabilizes the partial sums $S_n$. The LIL of Hartman and Wintner states that, if $X$ is a real random variable such that $\mathbb{E}X = 0$ and $\mathbb{E}X^2 = \sigma^2 < \infty$, then, with probability one,
$$(8.8)\qquad \limsup_{n\to\infty}\frac{S_n}{a_n} = -\liminf_{n\to\infty}\frac{S_n}{a_n} = \sigma\, .$$
Conversely, if the sequence $(S_n/a_n)$ is almost surely bounded, then $\mathbb{E}X^2 < \infty$ (and $\mathbb{E}X = 0$, trivially by the SLLN).

P. Hartman and A. Wintner deduced their result from Kolmogorov's LIL using a (clever) truncation argument. To this aim, one can find a sequence $(\delta_i)$ of positive numbers tending to $0$ such that
$$(8.9)\qquad \sum_i \frac{1}{LLi}\,\mathbb{P}\big\{|X| > \delta_i\,(i/LLi)^{1/2}\big\} < \infty .$$
Set then, for each $i$,
$$Y_i = X_i I_{\{|X_i| \le \delta_i (i/LLi)^{1/2}\}} - \mathbb{E}\big(X_i I_{\{|X_i| \le \delta_i (i/LLi)^{1/2}\}}\big)\,, \qquad Z_i = X_i - Y_i .$$
Since the $Y_i$'s are bounded at a level corresponding to the application of Kolmogorov's LIL, (8.1) already gives that
$$\limsup_{n\to\infty}\frac{1}{a_n}\sum_{i=1}^n Y_i = \sigma \quad\text{almost surely}.$$
The proof then consists in showing that the contribution of the $Z_i$'s is negligible.
To this aim, simply observe that, by Cauchy-Schwarz's inequality, for every $n$,
$$\frac{1}{a_n}\Bigg|\sum_{i=1}^{n} X_i I_{\{|X_i| > \delta_i (i/LLi)^{1/2}\}}\Bigg| \le \Bigg(\frac{1}{n}\sum_{i=1}^{n} X_i^2\Bigg)^{1/2}\Bigg(\frac{1}{2LLn}\sum_{i=1}^{n} I_{\{|X_i| > \delta_i (i/LLi)^{1/2}\}}\Bigg)^{1/2} .$$
The first root on the right of this inequality defines an almost surely bounded (convergent!) sequence by the SLLN and $\mathbb{E}X^2 < \infty$; the second one converges to $0$ by Kronecker's lemma (cf. [Sto]) and (8.9). Since the centerings in $(Z_i)$ are taken into account similarly, it follows that
$$\lim_{n\to\infty}\frac{1}{a_n}\sum_{i=1}^n Z_i = 0 \quad\text{almost surely}$$
and therefore (8.8) holds.

The necessity of $\mathbb{E}X^2 < \infty$ can be obtained from a simple symmetry argument. Assume the sequence $(S_n/a_n)$ is almost surely bounded. By symmetrization, we may and do assume $X$ to be symmetric.
Let $c > 0$ and define
$$\widetilde X = X I_{\{|X| \le c\}} - X I_{\{|X| > c\}} .$$
Since $X$ is symmetric, $\widetilde X$ has the same distribution as $X$. By the zero-one law, there is a finite number $M$ such that, with probability one, $\limsup_n |S_n|/a_n = M < \infty$. Now $2XI_{\{|X|\le c\}} = X + \widetilde X$ and, since $\widetilde X$ has the same law as $X$, with probability one,
$$2\limsup_{n\to\infty}\frac{1}{a_n}\Bigg|\sum_{i=1}^n X_i I_{\{|X_i|\le c\}}\Bigg| \le 2M \quad\text{almost surely}.$$
Since $\mathbb{E}(X^2 I_{\{|X|\le c\}}) < \infty$ and $X$ is symmetric, the LIL of Hartman-Wintner (8.8) applies to $XI_{\{|X|\le c\}}$ and it follows that
$$\mathbb{E}\big(X^2 I_{\{|X|\le c\}}\big) \le M^2 .$$
Letting $c$ tend to infinity indeed implies that $\mathbb{E}X^2 < \infty$.

While P. Hartman and A. Wintner used the rather deep result of A. N. Kolmogorov, simpler proofs of (8.8), which even produce more, have since been obtained. As an illustration, we would like to describe intuitively, in a direct way, why the limsup in (8.8) should be finite when $\mathbb{E}X = 0$ and $\mathbb{E}X^2 < \infty$. The idea is based on randomization by Rademacher random variables, a tool extensively used throughout this book, and on the study of $(S_n)$ through blocks of exponential size. It explains rather easily the common steps and features of LIL results, like, for example, the fundamental use of exponential bounds of Gaussian type (Lemma 1.6 in Kolmogorov's LIL). It suffices for this (modest) purpose to treat the case of a symmetric random variable.
We may then assume, as usual, that $(X_i)$ has the same distribution as $(\varepsilon_i X_i)$, where $(\varepsilon_i)$ is a Rademacher sequence independent of $(X_i)$. By Lévy's inequality (2.6) for sums of independent symmetric random variables and the Borel-Cantelli lemma, it is enough to find a finite number $M$ such that
$$\sum_n \mathbb{P}\Bigg\{\Big|\sum_{i=1}^{2^n}\varepsilon_i X_i\Big| > M a_{2^n}\Bigg\} < \infty .$$
Since $\mathbb{E}X^2 < \infty$, $X^2$ satisfies the law of large numbers and hence, by the Borel-Cantelli lemma again (independent case),
$$\sum_n \mathbb{P}\Bigg\{\sum_{i=1}^{2^n} X_i^2 > 2^{n+1}\,\mathbb{E}X^2\Bigg\} < \infty .$$
We now simply write that, for every $n$,
$$\mathbb{P}\Bigg\{\Big|\sum_{i=1}^{2^n}\varepsilon_i X_i\Big| > M a_{2^n}\Bigg\} \le \mathbb{P}\Bigg\{\sum_{i=1}^{2^n} X_i^2 > 2^{n+1}\mathbb{E}X^2\Bigg\} + \mathbb{P}\Bigg\{\Big|\sum_{i=1}^{2^n}\varepsilon_i X_i\Big| > M a_{2^n}\,,\ \sum_{i=1}^{2^n} X_i^2 \le 2^{n+1}\mathbb{E}X^2\Bigg\} .$$
The classical subgaussian estimate (4.1) applies conditionally on the sequence $(X_i)$ to show that the second probability on the right of the preceding inequality is less than
$$\int_{\{\sum_{i\le 2^n} X_i^2 \le 2^{n+1}\mathbb{E}X^2\}} 2\exp\Bigg(-\,M^2\, 2^n LL2^n \Big/ \sum_{i=1}^{2^n}X_i^2\Bigg)\, d\mathbb{P} \le 2\exp\big(-M^2 LL2^n/2\,\mathbb{E}X^2\big)\, .$$
If we then choose $M^2 > 2\,\mathbb{E}X^2$, these bounds are summable in $n$ and the claim is established.

This simple approach, which reduces in a sense the iid LIL to the SLLN through Gaussian exponential estimates, can be pushed further in order to show the necessity of $\mathbb{E}X^2 < \infty$ when the sequence $(S_n/a_n)$ is almost surely bounded. One can use the converse subgaussian inequality (4.2). Alternatively, and without going into the details, it is not difficult to see that if $(g_i)$ is an orthogaussian sequence independent of $(X_i)$,
the two non-random limsup's
$$\limsup_{n\to\infty}\frac{1}{a_n}\Big|\sum_{i=1}^n X_i\Big| \qquad\text{and}\qquad \limsup_{n\to\infty}\frac{1}{a_n}\Big|\sum_{i=1}^n g_i X_i\Big|$$
are equivalent. Independence and the stability properties of $(g_i)$, expressed for example by
$$\limsup_{n\to\infty}\frac{|g_n|}{(2\log n)^{1/2}} = 1 \quad\text{almost surely}\,,$$
can then easily be used to check the necessity of $\mathbb{E}X^2 < \infty$ (cf. [L-T2]). These rather easy ideas describe some basic facts in the study of the LIL: exponential estimates of Gaussian type, blocking arguments, the connection of the LIL with the law of large numbers (for squares) and of course with the central limit theorem through the introduction of Gaussian randomization. In fact, the LIL can be thought of as some almost sure form of the central limit theorem. The framework of these elementary observations will lead later to the infinite dimensional LIL.

The preceding sketchy proof of Hartman-Wintner's LIL of course only provides qualitative results and not the exact value of the limsup in (8.8). Simple proofs of (8.8) have been given in the literature; they include the more precise, so-called Strassen's form of the LIL which states that, with probability one,
$$(8.10)\qquad \lim_{n\to\infty} d\Big(\frac{S_n}{a_n}\,,\ [-\sigma, \sigma]\Big) = 0$$
and
$$(8.11)\qquad C\Big(\frac{S_n}{a_n}\Big) = [-\sigma, \sigma]\,,$$
where $d(x, A) = \inf\{|x-y|\,;\ y\in A\}$ is the distance of the point $x$ to the set $A$ and where $C(x_n)$ denotes the set of limit points of the sequence $(x_n)$, i.e. $C(x_n) = \{x\in\mathbb{R}\,;\ \liminf_n |x_n - x| = 0\}$. Property (8.10) follows rather easily from the LIL of Hartman-Wintner; the full property (8.11) is more delicate, and various arguments can be used in order to establish it. Strassen's approach used Brownian motion and the Skorokhod embedding of a sequence of iid random variables in Brownian paths. Among others, we will obtain (8.10) and (8.11) in the more general context of Banach space valued random variables below.

Our objective in the sequel of this chapter will now be to investigate the iid LIL for vector valued random variables. We start by describing what can be understood as a LIL for independent and identically distributed Banach space valued random variables. We deal until the end of the chapter with Radon random variables and even, for more convenience, with separable Banach spaces, although various conclusions still hold in our usual more general setting; these will be indicated in remarks. For Radon variables, the picture is probably the most complete and satisfactory, and we adopt this framework in order not to obscure the main scheme. Let therefore $B$ denote a separable Banach space and let $X$ be a Borel random variable with values in $B$; as usual, $(X_i)$ is a sequence of independent copies of $X$ and $S_n = X_1 + \cdots + X_n$.
According to the LIL of Hartman-Wintner, we can say that $X$ satisfies the LIL with respect to the classical normalizing sequence $a_n = (2nLLn)^{1/2}$ if the non-random limit (zero-one law)
$$(8.12)\qquad \Lambda(X) = \limsup_{n\to\infty}\frac{\|S_n\|}{a_n}$$
is finite. (If $X$ is non-degenerate, $\Lambda(X) > 0$ by the scalar case.) We will actually define this property as the bounded LIL. Indeed, we might as well say that $X$ satisfies the LIL whenever the sequence $(S_n/a_n)$ is almost surely relatively compact in $B$, since this means the same in finite dimensional spaces. We will say then that $X$ satisfies the compact LIL, and it will turn out that, in infinite dimension, bounded and compact LIL are not equivalent. Actually, Strassen's formulation (8.10) and (8.11) even suggests a third definition: $X$ satisfies the LIL if there is a compact convex symmetric set $K$ in $B$ such that, almost surely,
$$(8.13)\qquad \lim_{n\to\infty} d\Big(\frac{S_n}{a_n}\,,\ K\Big) = 0$$
and
$$(8.14)\qquad C\Big(\frac{S_n}{a_n}\Big) = K$$
where $d(x, K) = \inf\{\|x - y\|\,;\ y\in K\}$ and $C(S_n/a_n)$ denotes the set of cluster points of the sequence $(S_n/a_n)$. It is a nontrivial result that the compact LIL and this definition actually coincide.
we would like to study what the limit set K should be. It will turn out to be the unit ball of the so- alled reprodu ing kernel Hilbert spa e asso iated to the ovarian e stru ture of X . some of whi h go ba k to the Gaussian setting as des ribed in Chapter 3.nition a tually oin ide. We sket h its onstru tion and properties. Before des ribing pre isely this result. X is therefore a .

Re all the separability allows to assume A ountably generated and thus L2 ( .xed Borel random variable on some probability spa e ( . A. that these hypotheses are natural in the ontext of the LIL sin e if for example X satis. A. IP) separable. Let us observe. IP) with values in the separable Bana h spa e B . Suppose that for all f in B 0 . IEf (X ) = 0 and IEf 2 (X ) < 1 . as a remark.

(8:15) (X ) = sup (IEf 2 (X ))1=2 < 1 : kf k1 Indeed. (f (Sn =an )) is almost surely bounded and therefore IEf (X ) = 0 and IEf 2 (X ) < 1 by the s alar ase. if we onsider the operator A = AX de. for ea h f in B 0 .es the bounded LIL. Under these hypotheses.

Let A = AX denote the adjoint of . IP) .ned as A : B 0 ! L2 = L2 ( . A. Af = f (X ) . then kAk = (X ) and A is bounded by an easy losed graph argument.

Note .223 A .

rst that sin e X de.

Indeed.nes a Radon random variable. A a tually maps L2 into B B 00 ( f. Se tion 2. If is in L2 . there exists a sequen e (Kn ) of ompa t sets in B su h that IP[X 62 Kn g ! 0 .1). A (IfX 2Kn g ) belongs to B sin e it an be identi.

In parti ular. But IE(XIfX 2Kn g ) onverges to IE(X ) (weak integral) in B 00 sin e sup f (IE(XIfX 62Kn g )) (X )(IE( 2 IfX 62Kng ))1=2 ! 0 : kf k1 Hen e A = IE(X ) belongs to B . f (x) = IEf (X )g(X ) . g in B 0 . Observe further that for any x in H . g in B 0 . iX . The word "reprodu ing" stems from the fa t that H reprodu es the ovarian e of X in the sense that for f. K = fx 2 B . of the image of B 0 by the omposition S = A A : B ! B 0 . iX transferred from L2 : if Denote by H = HX the (separable) Hilbert spa e A (L2 ) equipped with h. iL2 = dIP : h. iX . if X and Y are random variables with the same ovarian e stru ture.e. k k2 1g . On the image A (L2 ) B of L2 by A . IEf (X )g(X ) = IEf (Y )g(Y ) for all f. i. whi h thus de. i.ed with the expe tation (in the strong sense) IE(XI[X 2Kn g ) . if x = A (g(X )) 2 H . this reprodu ing property implies that HX = HY . x = IE(X ) . we also have that H is the ompletion. A iX = h. onsider the s alar produ t . (8:16) kxk (X )hx.e. H is alled the reprodu ing kernel Hilbert spa e asso iated to the ovarian e stru ture of X . with respe t to the s alar produ t h. Z hA . 2 L2 . xiX : Denote by K = KX the losed unit ball of H . Note that sin e A(B 0 )? = KerA .

and by separability this an be a hieved by taking only a (well- hosen) sequen e (fk ) in B 0 . it is easily veri. K is separable in B by (8. f (x) kf (X )k2 for all f in B 0 g . K is weakly ompa t and therefore also losed for the topology of the norm on B .nes a bounded onvex symmetri set in B . we also have that K = fx 2 B . As the image of the unit ball of L2 by A .16). By the Hahn-Bana h theorem. Further.

x2K (X ) = sup kxk : x2K While K is weakly ompa t. kf (X )k2 = sup f (x) . The next easy lemma des ribes for further referen es equivalent instan es for K to be ompa t. it is not always ompa t. .ed that for any f in B 0 .

Lemma 8.4. The following are equivalent:
(i) $K$ is compact;
(ii) $A$ (resp. $A^*$) is compact;
(iii) $S = A^*A$ is compact;
(iv) the covariance function $T(f,g) = \mathbb{E}f(X)g(X)$ is weakly sequentially continuous;
(v) the family of real random variables $\{f^2(X)\,;\ \|f\|\le 1\}$ is uniformly integrable.

Proof. (i) and (ii) are clearly equivalent and imply (iii). To see that (iv) holds under (iii), it suffices to show that $\|f_n(X)\|_2 \to 0$ when $f_n \to 0$ weakly in $B'$. By the uniform boundedness principle, we may assume that $\|f_n\| \le 1$ for all $n$. The compactness of $S$ ensures that we can extract from the sequence $(x_n)$ defined by $x_n = \mathbb{E}(X f_n(X))$ a subsequence, still denoted by $(x_n)$, convergent to some $x$. But then
$$\mathbb{E}f_n^2(X) = f_n(x_n) \le \|x_n - x\| + |f_n(x)| \to 0 .$$
Assume (v) is not satisfied. Then there exist $\varepsilon > 0$ and a sequence $(c_n)$ of positive numbers increasing to infinity such that, for every $n$,
$$\sup_{\|f\|\le 1}\int_{\{\|X\| > c_n\}} f^2(X)\,d\mathbb{P} \ge \sup_{\|f\|\le 1}\int_{\{|f(X)| > c_n\}} f^2(X)\,d\mathbb{P} > \varepsilon .$$
Hence, for every $n$, one can find $f_n$, $\|f_n\|\le 1$, such that
$$\int_{\{\|X\| > c_n\}} f_n^2(X)\,d\mathbb{P} > \varepsilon .$$
Extract then from the sequence $(f_n)$ in the unit ball of $B'$ a weakly convergent subsequence, still denoted $(f_n)$, $f_n \to f$. By (iv), $f_n(X) \to f(X)$ in $L_2$, and this clearly reaches a contradiction since
$$\lim_{n\to\infty}\int_{\{\|X\| > c_n\}} f^2(X)\,d\mathbb{P} = 0 .$$
Thus (iv) implies (v). Finally, (v) easily implies (ii): indeed, if $(f_n)$ is a sequence in the unit ball of $B'$, for some subsequence (still denoted $(f_n)$) and some $f$, $f_n \to f$ weakly, so $f_n(X) \to f(X)$ almost surely, and hence in $L_2$ by uniform integrability; $A$ is therefore compact. Note that when $\mathbb{E}\|X\|^2 < \infty$ (in the case of Gaussian variables for example), $K$ is compact. The proof of Lemma 8.4 is complete.

As simple examples, if $B = \mathbb{R}^N$ and the covariance matrix of $X$ is the identity matrix, $K$ is simply the Euclidean unit ball of $\mathbb{R}^N$. When $X$ follows the Wiener distribution on $C[0,1]$, $H$ can be identified with the so-called Cameron-Martin Hilbert space of the absolutely continuous elements $x$ in $C[0,1]$ such that $x(0) = 0$ and $\int_0^1 x'(t)^2\,dt < \infty$, and $K$ is known in this case as Strassen's limit set.

Having described the natural limit set in (8.13) and (8.14), we now present the theorem, due to J. Kuelbs, connecting the definition of the compact LIL with (8.13) and (8.14).
Theorem 8.5. Let $X$ be a Borel random variable with values in a separable Banach space $B$. If the sequence $(S_n/a_n)$ is almost surely relatively compact in $B$, then, with probability one,
$$\lim_{n\to\infty} d\Big(\frac{S_n}{a_n}\,, K\Big) = 0 \qquad\text{(and)}\qquad C\Big(\frac{S_n}{a_n}\Big) = K$$
where $K = K_X$ is the unit ball of the reproducing kernel Hilbert space associated to the covariance structure of $X$, and $K$ is compact. Conversely, if the preceding holds for some compact set $K$, then $X$ satisfies the compact LIL and $K = K_X$.

According to this theorem, when we speak of the compact LIL, we mean one of the equivalent properties of this statement. As we have seen, when $X$ satisfies the bounded LIL, which is always the case under one of the properties of Theorem 8.5, $K = K_X$ is well defined.

Proof. Let us first show that, with probability one, $C(S_n/a_n) \subset K$. As was observed in the definition of $K$, there is a sequence $(f_k)$ in $B'$ such that a point $x$ belongs to $K$ as soon as
$$(8.17)\qquad f_k(x) \le \|f_k(X)\|_2 \quad\text{for all } k .$$
Denote by $\Omega_0$ the set of full probability (by the scalar LIL) of the $\omega$'s such that, for every $k$,
$$\limsup_{n\to\infty}\frac{|f_k(S_n(\omega))|}{a_n} \le \|f_k(X)\|_2 .$$
So if $x \in C(S_n(\omega)/a_n)$ and $\omega\in\Omega_0$, $x$ clearly satisfies (8.17) and therefore belongs to $K$. This first property easily implies that
$$(8.18)\qquad \lim_{n\to\infty} d\Big(\frac{S_n}{a_n}\,, K\Big) = 0$$
with probability one. Indeed, if this is not the case, one can find, by relative compactness of $(S_n/a_n)$, a subsequence of $(S_n/a_n)$ converging to some point exterior to $K$, and this is impossible as we have just seen.

We are thus left with the proof that $C(S_n/a_n) = K$. To this aim, it suffices, by density, to show that any $x$ in $K$ belongs almost surely to $C(S_n/a_n)$. Let us assume first that $B$ is finite dimensional. Since the covariance matrix of $X$ is symmetric positive definite, it may be diagonalized in some orthonormal basis; we are therefore reduced to the case where $B$ is Euclidean and $K$ is its unit ball. Let then $x$ be in $B$ with $|x| = 1$ and let $\varepsilon > 0$. By Hartman-Wintner's LIL (8.8), along a subsequence,
$$\Big\langle \frac{S_n}{a_n}\,, x\Big\rangle \ge \|\langle X, x\rangle\|_2 - \varepsilon = 1 - \varepsilon\,,$$
while, by (8.18), for $n$ large enough, $|S_n/a_n|^2 \le 1+\varepsilon$.
Hence, along this subsequence and for $n$ large,
$$\Big|\frac{S_n}{a_n} - x\Big|^2 = \Big|\frac{S_n}{a_n}\Big|^2 - 2\Big\langle\frac{S_n}{a_n}\,, x\Big\rangle + |x|^2 \le 1+\varepsilon - 2(1-\varepsilon) + 1 = 3\varepsilon\,,$$
and therefore $x \in C(S_n/a_n)$ almost surely. To reach the interior points of $K$, let $x$ be in $K$ with $|x| = \rho < 1$. We climb in dimension and consider the random variable in $B\times\mathbb{R}$ given by $Y = (X, \varepsilon)$, where $\varepsilon$ is a Rademacher variable independent of $X$. Then $y = (x, (1-\rho^2)^{1/2})$ belongs to the unit sphere associated to $Y$ and thus, by the preceding step applied to the cluster set associated to $Y$, is a cluster point; by projection, $x \in C(S_n/a_n)$ almost surely. We now complete the proof of Theorem 8.5.
We use the preceding finite dimensional result to get the full conclusion concerning the cluster set $C(S_n/a_n)$. When $X$ satisfies the LIL, it also satisfies the strong law of large numbers and therefore $\mathbb{E}\|X\| < \infty$. There exists then an increasing sequence $(\mathcal{A}_N)$ of finite sub-$\sigma$-algebras such that $X^N = \mathbb{E}^{\mathcal{A}_N}X$ converges almost surely and in $L_1(B)$ to $X$.

Note that if $(S_n/a_n)$ is almost surely relatively compact, property (iv) of Lemma 8.4 is fulfilled. Indeed, if $f_k \to 0$ weakly, by compactness,
$$\lim_{k\to\infty}\limsup_{n\to\infty}\Big|f_k\Big(\frac{S_n}{a_n}\Big)\Big| = 0$$
with probability one; since, by the scalar LIL, $\limsup_n |f_k(S_n/a_n)| = \|f_k(X)\|_2$, it follows that $\|f_k(X)\|_2 \to 0$, which gives (iv). By Lemma 8.4, $K$ is therefore compact, or equivalently, $\{f^2(X)\,;\ \|f\|\le 1\}$ is uniformly integrable. There exists therefore (Lemma of La Vallée Poussin, cf. e.g. [Me]) a positive convex function $\phi$ on $\mathbb{R}_+$ with $\lim_{t\to\infty}\phi(t)/t = \infty$ such that
$$\sup_{\|f\|\le 1}\mathbb{E}\,\phi\big(f^2(X)\big) < \infty .$$
By Jensen's inequality, it follows that the family $\{f^2(X - X^N)\,;\ \|f\|\le 1\,,\ N\in\mathbb{N}\}$ is also uniformly integrable. This can be used to obtain that $\sigma(X - X^N) \to 0$ when $N\to\infty$ (where $\sigma(\cdot)$ is defined in (8.15)).

Let now $x$ be in $K$: $x = A^*\xi$, $\|\xi\|_2 \le 1$. If $x^N = \mathbb{E}(X^N\xi)$, then $\|x - x^N\| \le \sigma(X - X^N)\,\|\xi\|_2 \to 0$. Now, we simply write, by the triangle inequality, that for every $N$,
$$\liminf_{n\to\infty}\Big\|\frac{S_n}{a_n} - x\Big\| \le \liminf_{n\to\infty}\Big\|\frac{S_n^N}{a_n} - x^N\Big\| + \Lambda(X - X^N) + \|x - x^N\|$$
where $S_n^N$ are the partial sums associated to $X^N$. Since the $X^N$'s are finite dimensional, they satisfy the compact LIL, and, by the previous step in finite dimension, $\liminf_n \|S_n^N/a_n - x^N\| = 0$ almost surely for each $N$. One then checks, using the compact LIL of the finite dimensional $X^N$'s and the uniform integrability above, that $\Lambda(X - X^N) \to 0$ as $N\to\infty$. Letting $N$ tend to infinity then shows that $x \in C(S_n/a_n)$, which completes the proof of Theorem 8.5.
The preceding discussion and Theorem 8.5 present the definitions of the LIL of Hartman-Wintner-Strassen for Banach space valued random variables. We now would like to turn to the crucial question of knowing when a random variable X with values in a Banach space B satisfies the bounded or compact LIL in terms of minimal conditions, depending if possible only on the distribution of X. If B is the line or, more generally, a finite dimensional space, X satisfies the bounded or compact LIL if and only if IEX = 0 and IE‖X‖² < ∞. However, already in Hilbert space, examples disprove the equivalence between bounded and compact LIL in the infinite dimensional setting. Further, while these conditions are sufficient, the integrability IE‖X‖² < ∞ is no longer necessary; it happens conversely that in some spaces there are almost surely bounded (mean zero) random variables which do not satisfy the LIL. All these examples will actually become clear on the final characterization. They however pointed out historically the difficulty in finding what this characterization should be. The issue is based in particular on a careful examination of the necessary conditions for a Banach space valued random variable to satisfy the LIL, which we now would like to describe.

Assume first that X satisfies the bounded LIL in B. Then clearly, for each f in B', f(X) satisfies the scalar LIL and thus IEf(X) = 0 and IEf²(X) < ∞. These weak integrability conditions are complemented by a necessary integrability property on the norm: indeed, it is necessary that the sequence (X_n/a_n) is bounded almost surely and thus, by independence, the Borel-Cantelli lemma and identical distribution, for some finite M,

   Σ_n IP{‖X‖ > M a_n} < ∞.

As is easily seen, this turns out to be equivalent to the integrability condition

   IE(‖X‖²/LL‖X‖) < ∞.

These are the best moment conditions which can be deduced from the bounded LIL; in finite dimension, the weak L₂-integrability of course implies the strong L₂-integrability. This unfortunate fact forces, in order to expect some characterization, to complete the preceding integrability conditions, which depend only on the distribution of X, by some condition involving the laws of the partial sums S_n instead only of X. As we mentioned it however, this can be avoided in some spaces, but it is necessary to proceed along these lines in general, as actually could have been expected from the previous chapters. The third necessary condition is then simply (and trivially) that the sequence (S_n/a_n) should be bounded in probability. As we will see, it reduces in a sense, under necessary and natural moment conditions, the almost sure behavior of the sequence (S_n/a_n) to its behavior in probability. Let us note that the stochastic boundedness (and a fortiori convergence to 0) of the sequence (S_n/a_n) also contains the fact that X is centered. (To see this, use the analog of Lemma 10.1 for the normalization a_n.)

Before stating this characterization, let us complete the discussion on necessary conditions by the case of the compact LIL. By Theorem 8.5, under the compact LIL, K = K_X should be compact, or equivalently {f²(X) ; f ∈ B', ‖f‖ ≤ 1} uniformly integrable (this can also be proved directly, as is clear from the last part of the proof of Theorem 8.5). We keep that IE(‖X‖²/LL‖X‖) < ∞; but under the compact LIL it is necessary that the sequence (S_n/a_n) is not only bounded in probability, but convergent to 0 in probability: indeed, the sequence of the laws of S_n/a_n is necessarily tight with 0 as only possible limit point, since IEf(X) = 0 and IEf²(X) < ∞ for all f in B' (as is easily seen, for example, from the central limit theorem). It is remarkable that this easily obtained set of necessary conditions is also sufficient for X to satisfy the bounded LIL. We can now present the characterization of random variables satisfying the bounded or compact LIL.
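The equivalence between the Borel-Cantelli condition and the stated moment condition rests on the slow variation of the iterated logarithm; it can be sketched as follows (a heuristic with constants absorbed, not the formal argument of the text):

```latex
\sum_{n\ge 1}\mathbb{P}\{\|X\|>Ma_n\}
  \;=\; \mathbb{E}\,\#\bigl\{n\ge 1:\ \|X\|>M(2n\,LLn)^{1/2}\bigr\}
  \;\asymp\; \mathbb{E}\Bigl(\frac{\|X\|^{2}}{2M^{2}\,LL\|X\|}\Bigr),
```

since the event ‖X‖ > M a_n amounts to n being of order at most ‖X‖²/(2M² LL‖X‖). This is why every finite M > 0 yields one and the same condition IE(‖X‖²/LL‖X‖) < ∞.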

Theorem 8.6. Let X be a Borel random variable with values in a separable Banach space B. In order that X satisfy the bounded LIL, it is necessary and sufficient that the following conditions are fulfilled:
(i) IE(‖X‖²/LL‖X‖) < ∞;
(ii) for each f in B', IEf(X) = 0 and IEf²(X) < ∞;
(iii) the sequence (S_n/a_n) is bounded in probability.
In order that X satisfy the compact LIL, it is necessary and sufficient that (i) holds and that (ii) and (iii) are replaced by
(ii') IEX = 0 and {f²(X) ; f ∈ B', ‖f‖ ≤ 1} is uniformly integrable;
(iii') S_n/a_n → 0 in probability.

Proof. Necessity has been discussed above. The proof of the sufficiency for the bounded LIL is the main point of the whole theorem. We will show indeed that for some numerical constant M,

(8.19)   Λ(X) = lim sup_{n→∞} ‖S_n‖/a_n ≤ M(σ(X) + L(X))

for all symmetric random variables X satisfying (i), where

   σ(X) = sup_{‖f‖≤1} (IEf²(X))^{1/2}   and   L(X) = lim sup_{n→∞} (1/a_n) IE‖Σ_{i=1}^n X_i I_{{‖X_i‖≤a_n}}‖.

Since σ(X) < ∞ under (ii) ((8.15)), and since (iii) implies that L(X) < ∞ by Lemma 7.2, this inequality (8.19) contains the bounded LIL, at least for symmetric random variables, but actually also in general by symmetrization and Lemma 7.1. Indeed, by Lemma 7.1, the inequality also holds in this form for non-necessarily symmetric random variables satisfying (i) and (iii'), in which case L(X) can be chosen to be 0 by (iii'). From (8.19) also follows the compact version. More precisely, if F denotes a finite dimensional subspace of B and T = T_F the quotient map B → B/F, we get from (8.19), as in the proof of Theorem 8.5, that Λ(T(X)) ≤ Mσ(T(X)). Under (ii'), σ(T(X)) can be made arbitrarily small with large enough F (show for example, as in the proof of Theorem 8.5, that σ(X − X^N) → 0 where X^N = IE^{A_N}X). This estimate applied to quotient norms by finite dimensional subspaces then yields the conclusion: the sequence (S_n/a_n) appears as being arbitrarily close to some bounded set in the finite dimensional subspace F, and is therefore relatively compact (Lemma 2.2). Hence X satisfies the compact LIL under (i), (ii') and (iii').

We are thus left with the proof of (8.19). To this aim, we use the isoperimetric approach as developed for example in the preceding chapter on the SLLN. We intend more precisely to apply Theorem 7.5. In order to verify the first set of conditions there, we employ Lemmas 7.6 and 7.8. The integrability condition IE(‖X‖²/LL‖X‖) < ∞ is equivalent to saying that for every ε > 0,

   Σ_n 2^n IP{‖X‖ > ε a_{2^n}} < ∞.

Let ε > 0 be fixed. Setting together the conclusions of Lemmas 7.6 and 7.8, we see that (taking q = 2K₀) there exists a sequence of integers (k_n) such that Σ_n 2^{−k_n} < ∞ for which

   Σ_n IP{ Σ_{r=1}^{k_n} X^{(r)}_{2^{n−1}} > 5ε a_{2^n} } < ∞

where X^{(r)}_{2^{n−1}} is the r-th largest element of the sample (‖X_i‖)_{i≤2^{n−1}}. The quantity L in Theorem 7.5 is less than L(X) (contraction principle) and, for each n, σ_n² ≤ 2^n σ(X)², so that (7.4) is satisfied with for example δ = σ(X). The conclusion of Theorem 7.5, with q = 2K₀, is then that

   Σ_n IP{ ‖S_{2^{n−1}}‖ > 10²[ε + 2K₀(σ(X) + L(X) + (5εL(X))^{1/2})] a_{2^n} } < ∞.

Since ε > 0 is arbitrary and a_{2^n}/a_{2^{n−1}} → √2, inequality (8.19) follows from the Borel-Cantelli lemma and the maximal inequality of Lévy (2.6). Theorem 8.6 is thus established.

While Theorem 8.6 provides a rather complete characterization of random variables satisfying the LIL, hypotheses on the distribution of the partial sums rather than only on the variable have to be used. It is worthwhile to point out at this stage that in a special class of Banach spaces it is possible to get rid of these assumptions and state the characterization in terms only of the moment conditions (i) and (ii) (or (ii')). Anticipating on the next chapter as we did for the law of large numbers, say that a Banach space B is of type 2 if there is a constant C such that for any finite sequence (Y_i) of independent mean zero random variables with values in B, we have

   IE‖Σ_i Y_i‖² ≤ C Σ_i IE‖Y_i‖².
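In a Hilbert space the type 2 inequality even holds with C = 1 for Rademacher sums: averaging over the signs kills the cross terms, so IE‖Σ_i ε_i v_i‖² = Σ_i ‖v_i‖². A quick exact check by enumeration (the vectors below are arbitrary illustrative choices):

```python
import itertools

def avg_sq_norm_of_signed_sum(vectors):
    """E||sum_i eps_i v_i||^2 over all Rademacher sign patterns (exact average)."""
    dim = len(vectors[0])
    total = 0.0
    count = 0
    for signs in itertools.product((-1, 1), repeat=len(vectors)):
        # coordinates of the signed sum for this sign pattern
        coords = [sum(s * v[k] for s, v in zip(signs, vectors)) for k in range(dim)]
        total += sum(c * c for c in coords)
        count += 1
    return total / count

vs = [(1.0, 2.0), (0.5, -1.0), (3.0, 0.0)]
lhs = avg_sq_norm_of_signed_sum(vs)           # E||sum eps_i v_i||^2
rhs = sum(sum(c * c for c in v) for v in vs)  # sum ||v_i||^2
assert abs(lhs - rhs) < 1e-9
```

The orthogonality of independent signs in L₂ is exactly what fails in a general Banach norm, which is why the type 2 property is a genuine restriction on the space.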

Hilbert spaces are clearly of type 2; further examples are discussed in Chapter 9. In type 2 spaces the integrability condition IE(‖X‖²/LL‖X‖) < ∞ implies, if IEX = 0, that S_n/a_n → 0 in probability, and hence the nicer form of Theorem 8.6 in this case. Let us prove this implication.

Lemma 8.7. Let X be a mean zero random variable with values in a type 2 Banach space B such that IE(‖X‖²/LL‖X‖) < ∞. Then S_n/a_n → 0 in probability.

Proof. We show that if X is symmetric and IE(‖X‖²/LL‖X‖) < ∞, then IE‖S_n‖/a_n → 0, which, by Lemma 6.3, implies the lemma. For each n,

   IE‖S_n‖ ≤ IE‖Σ_{i=1}^n X_i I_{{‖X_i‖≤a_n}}‖ + n IE(‖X‖ I_{{‖X‖>a_n}}).

A simple integration by parts shows that, under IE(‖X‖²/LL‖X‖) < ∞,

   lim_{n→∞} (n/a_n) IE(‖X‖ I_{{‖X‖>a_n}}) = 0.

By the type 2 inequality (and symmetry),

   (1/a_n) IE‖Σ_{i=1}^n X_i I_{{‖X_i‖≤a_n}}‖ ≤ ( (C/2LLn) IE(‖X‖² I_{{‖X‖≤a_n}}) )^{1/2}.

For each t > 0, the right hand side squared of this inequality is seen to be smaller than

   Ct²/2LLn + (C/2LLn) IE(‖X‖² I_{{t<‖X‖≤a_n}}) ≤ Ct²/2LLn + C IE( (‖X‖²/LL‖X‖) I_{{‖X‖>t}} ).

Letting n and then t go to infinity concludes the proof.

As announced, Lemma 8.7 implies the next corollary.

Corollary 8.8. Let X be a Borel random variable with values in a separable type 2 Banach space B. Then X satisfies the bounded (resp. compact) LIL if and only if IE(‖X‖²/LL‖X‖) < ∞ and IEf(X) = 0, IEf²(X) < ∞ for all f in B' (resp. {f²(X) ; f ∈ B', ‖f‖ ≤ 1} is uniformly integrable).

Remark 8.9. Theorem 8.6 has been presented in the context of Radon random variables. Its proof however clearly indicates some possible extensions to more general settings of random variables, as the one we usually adopt in this text. This is in particular completely obvious for the bounded version which, as in the case of Kolmogorov's LIL, does not require any approximation argument. With some precautions, extensions of the compact LIL to this setting can also be imagined. We leave this to the interested reader.

8.3. On the identification of the limits

In this last paragraph, we would like to describe various results and examples on the limits of the sequence (S_n/a_n) in the bounded form of the LIL for Banach space valued random variables. We learned from Theorem 8.5 that when the sequence (S_n/a_n) is almost surely relatively compact in B, then, with probability one,

(8.20)   lim_{n→∞} d(S_n/a_n, K) = 0

and

(8.21)   C(S_n/a_n) = K

where K = K_X is the unit ball of the reproducing kernel Hilbert space associated to the covariance structure of X, and K is compact in this case. In particular also,

(8.22)   Λ(X) = lim sup_{n→∞} ‖S_n‖/a_n = σ(X) = sup_{‖f‖≤1} (IEf²(X))^{1/2}

(recall σ(X) = sup_{x∈K} ‖x‖). One might now be interested in knowing if these properties still hold, or what they become, when for example X only satisfies the bounded LIL and not the compact one, or even, for (8.21), if X is just such that IEf(X) = 0 and IEf²(X) < ∞ for all f in B' (in order for K to be well defined).

We start with the remarkable results of K. Alexander [Ale3] on the cluster set. To put the question in clearer perspective, let us mention the example (cf. [Kue6]) of a bounded random variable satisfying the bounded LIL but for which the cluster set C(S_n/a_n) is empty. Further examples of pathological situations have been observed in the literature. We would like here to briefly describe some positive results as well as some problems left open.

As we have seen in the first part of the proof of Theorem 8.5, almost surely C(S_n/a_n) ⊂ K where K = K_X. From an easy zero-one law, it can be shown that the cluster set C(S_n/a_n) is almost surely non-random. As a main result, it is shown in [Ale3] that C(S_n/a_n) can actually only be empty or of the form αK for some α in [0,1]. It can be K, and can also be empty as alluded to above. Moreover, a series condition involving the laws of the partial sums S_n determines the value of α, and examples are given in [Ale4] showing that every value of α indeed occurs. More precisely, we have the following theorem, which we state without proof, referring to [Ale3].

Theorem 8.10. Let X be a Borel random variable with values in a separable Banach space B such that IEf(X) = 0 and IEf²(X) < ∞ for every f in B'. Let

   α² = sup{ β ≥ 0 ; Σ_n n^{−β} IP{ ‖S_m/a_m‖ < ε for some 2^{n−1} < m ≤ 2^n } = ∞ for all ε > 0 }

whenever this set is not empty. Then C(S_n/a_n) = αK.

These results settle the nature of the cluster set C(S_n/a_n), in particular when this set is empty or when S_n/a_n → 0 in probability. Similar questions can of course be asked concerning (8.20) and (8.22). Although the results are less complete here, one positive fact is available. We have of course to assume here that X satisfies the bounded LIL, that is, by Theorem 8.6, that IE(‖X‖²/LL‖X‖) < ∞, σ(X) < ∞ and (S_n/a_n) is bounded in probability. It turns out that when this last condition is strengthened into S_n/a_n → 0 in probability, one can prove (8.20) and (8.22) with K = K_X, compact or not. This is the object of the following theorem.

Its proof amplifies some of the techniques of the proof of Theorem 8.2 and provides a rather complete description of the limits in this case. As we will see, the situation may be quite different in general.

Theorem 8.11. Let X be a Borel random variable with values in a separable Banach space B. Assume that IEX = 0, IE(‖X‖²/LL‖X‖) < ∞, σ(X) = sup_{‖f‖≤1}(IEf²(X))^{1/2} < ∞ and that S_n/a_n → 0 in probability. Then we have

(8.23)   Λ(X) = lim sup_{n→∞} ‖S_n‖/a_n = σ(X) almost surely.

Moreover,

(8.24)   lim_{n→∞} d(S_n/a_n, K) = 0   and   C(S_n/a_n) = K

with probability one, where K = K_X is the unit ball of the reproducing kernel Hilbert space associated to the covariance structure of X.

Proof. It is enough to prove (8.23): indeed, replacing the norm of B by the gauge of K + εB₁, where B₁ is the unit ball of B, it is easily seen that then d(S_n/a_n, K) → 0. Identification of the cluster set follows from Theorem 8.10 since S_n/a_n → 0 in probability. To establish (8.23), by homogeneity and the real LIL, we need only show that Λ(X) ≤ 1 when σ(X) = 1. As in the proof of Theorem 8.2 (see (8.6)), by the Borel-Cantelli lemma and Ottaviani's inequality (Lemma 6.2), it suffices to prove that for all ε > 0 and ρ > 1,

   Σ_n IP{ ‖Σ_{i=1}^{m_n} X_i‖ > (1 + ε)a_{m_n} } < ∞

where m_n = [ρ^n], n ≥ 1 (integer part). Let then 0 < ε ≤ 1 and ρ > 1 be fixed. As in the proof of Theorem 8.2, for every integer n and f, h in the unit ball U of B', set

   d_{n,2}(f,h) = ( IE((f − h)²(X) I_{{‖X‖ ≤ a_{m_n}}}) )^{1/2}.

Let further N(U, d_{n,2}, ε) be the minimal number of elements h in U such that for every f in U there exists such an h with d_{n,2}(f,h) < ε. We first need an estimate of the size of these entropy numbers when n varies. However, with respect to Lemma 8.3, it does not seem possible to use the Sudakov minoration for Rademacher processes since the truncations do not appear to fit the right levels. Instead rather, we will use the Gaussian minoration through an improved randomization property which is made possible by the fact that we are working with identically distributed random variables. Let respectively (ε_i) and (g_i) be Rademacher and standard Gaussian sequences independent of (X_i). Under the assumptions of the theorem, we have that

(8.25)   lim_{n→∞} (1/a_n) IE‖Σ_{i=1}^n ε_i X_i‖ = lim_{n→∞} (1/a_n) IE‖Σ_{i=1}^n g_i X_i‖ = 0.

If X is symmetric, the limit on the left of (8.25) is seen to be 0 using Lemma 7.2 (since S_n/a_n → 0 in probability) and the elementary fact that lim_{n→∞} n a_n^{−1} IE(‖X‖ I_{{‖X‖ > a_n}}) = 0 under IE(‖X‖²/LL‖X‖) < ∞. By symmetrization, (8.25) also holds in general since X is centered (Lemma 6.3). Concerning Gaussian randomization, we refer to Proposition 10.4 below and the comments thereafter. Using the latter property and the Gaussian Sudakov minoration, the proof of Lemma 8.3 is trivially modified to this setting to yield the existence of a sequence (δ_n) of positive numbers tending to 0 such that for all n large enough

(8.26)   N(U, d_{n,2}, ε) ≤ exp(δ_n LL m_n).

According to this result, we denote, for each n and f in U, by h_n(f) an element of U such that d_{n,2}(f, h_n(f)) < ε, in such a way that the set U_n of all h_n(f) has a cardinality less than exp(δ_n LL m_n). We can write that

   ‖Σ_{i=1}^{m_n} X_i‖ ≤ sup_{f∈U_n} |Σ_{i=1}^{m_n} f(X_i)| + sup_{h∈V_n} |Σ_{i=1}^{m_n} h(X_i)|

where V_n = {f − h_n(f) ; f ∈ U} ⊂ 2U. The main observation concerning V_n is that IE(h²(X) I_{{‖X‖ ≤ a_{m_n}}}) ≤ ε² for all h in V_n and all n. Although the proof of Theorem 8.6 and (8.19) through Theorem 7.5 is described in the setting of a single true norm of a Banach space, it is clear that it also applies to more general norms which might depend on n on the block of size m_n. In this way, it is just a mere exercise to verify that for some numerical constant C,

(8.27)   Σ_n IP{ sup_{h∈V_n} |Σ_{i=1}^{m_n} h(X_i)| > Cε a_{m_n} } < ∞.

We are thus left to show that, for some C > 0,

   Σ_n IP{ sup_{f∈U_n} |Σ_{i=1}^{m_n} f(X_i)| > (1 + Cε)a_{m_n} } < ∞.

Let δ = δ(ε) > 0 be specified in a moment and set, for each n, c_n = δm_n/a_{m_n}. Define, for each n, i ≤ m_n and f in U,

   Y_i(f,n) = max(−c_n, min(f(X_i), c_n)) − IE( max(−c_n, min(f(X_i), c_n)) ).

Note that |Y_i(f,n)| ≤ 2c_n and IE(Y_i(f,n)²) ≤ 1. By Lemma 1.6 applied to the sum of the independent mean zero random variables Y_i(f,n), i ≤ m_n, it follows that

   IP{ sup_{f∈U_n} |Σ_{i=1}^{m_n} Y_i(f,n)| > (1 + ε)a_{m_n} } ≤ 2 Card U_n exp(−(1 + ε)LL m_n)

provided δ = δ(ε) > 0 is small enough in order that exp(2(1 + ε)δ) ≤ 1 + ε. Since Card U_n ≤ exp(δ_n LL m_n) by (8.26), where δ_n → 0, it thus already follows that

(8.28)   Σ_n IP{ sup_{f∈U_n} |Σ_{i=1}^{m_n} Y_i(f,n)| > (1 + ε)a_{m_n} } < ∞.

Consider now Z_i(f,n) = f(X_i) − Y_i(f,n), i ≤ m_n, f ∈ U. Note that, by centering of f(X_i),

   IE|Z_i(f,n)| ≤ 2 IE(‖X‖ I_{{‖X‖ > c_n}}).

The integrability condition IE(‖X‖²/LL‖X‖) < ∞ is equivalent to saying that Σ_n γ_n < ∞ where

   γ_n = m_n IP{‖X‖ > c_n}/(LL m_n)²

(elementary verification). There exists a sequence (β_n) such that β_n ≥ γ_n, Σ_n β_n < ∞ and satisfying the regularity property β_{n+1} ≤ ρ^{1/3} β_n for every n (recall that ρ > 1). It is then easily seen that for all n,

   IE(‖X‖ I_{{‖X‖ > c_n}}) ≤ Σ_{ℓ≥n} c_{ℓ+1} IP{‖X‖ > c_ℓ} ≤ C₁(ρ,δ) β_n (LL m_n)^{3/2}/m_n^{1/2} ≤ C₂(ρ,δ) β_n LL m_n · a_{m_n}/m_n

for some C₁(ρ,δ), C₂(ρ,δ) > 0. Consider the set of integers L = {n ; 2C₂(ρ,δ) β_n LL m_n ≤ ε}. The preceding estimate indicates that for all n ∈ L, f ∈ U and i ≤ m_n,

   IE|Z_i(f,n)| ≤ ε a_{m_n}/m_n.

We now use this property to show that if n ∈ L is large enough,

(8.29)   IE( sup_{f∈U} Σ_{i=1}^{m_n} |Z_i(f,n)| ) ≤ 2ε a_{m_n}.

Indeed, by Lemma 6.3 (symmetrization),

   IE( sup_{f∈U} Σ_{i=1}^{m_n} |Z_i(f,n)| ) ≤ m_n sup_{f∈U, i≤m_n} IE|Z_i(f,n)| + 2 IE( sup_{f∈U} |Σ_{i=1}^{m_n} ε_i |Z_i(f,n)|| )

and, by the contraction principle (Theorem 4.12), (8.25) implies that

   lim_{n→∞} (1/a_{m_n}) IE( sup_{f∈U} |Σ_{i=1}^{m_n} ε_i |Z_i(f,n)|| ) = 0

from which the announced property (8.29) follows. The main interest in the introduction of the absolute values in (8.29) is that it allows a simple use of the isoperimetric inequality (6.16): it provides us indeed with the crucial monotonicity property (cf. Remark 6.18 about positive random variables). More precisely, let n ∈ L and set

   A = { ω ∈ Ω ; sup_{f∈U} Σ_{i=1}^{m_n} |Z_i(f,n)(ω)| ≤ 4ε a_{m_n} }.

Then IP(A) ≥ 1/2 by (8.29) (at least for n large). Now, if, for ω ∈ Ω, there exist ω₁, …, ω_q in A such that X_i(ω) ∈ {X_i(ω₁), …, X_i(ω_q)} except perhaps for at most k values of i ≤ m_n, then

   sup_{f∈U} Σ_{i=1}^{m_n} |Z_i(f,n)(ω)| ≤ Σ_{i=1}^k ‖Z_i(ω)‖* + Σ_{ℓ=1}^q sup_{f∈U} Σ_{i=1}^{m_n} |Z_i(f,n)(ω_ℓ)| ≤ Σ_{i=1}^k ‖Z_i(ω)‖* + 4qε a_{m_n}

where (‖Z_i‖*) denotes the non-increasing rearrangement of (‖X_i‖ + IE‖X_i‖)_{i≤m_n}. Hence, the isoperimetric inequality (6.16) ensures that for k ≥ q,

   IP{ sup_{f∈U} Σ_{i=1}^{m_n} |Z_i(f,n)| > (1 + 4q)ε a_{m_n} } ≤ (K₀/q)^k + IP{ Σ_{i=1}^k ‖Z_i‖* > ε a_{m_n} }.

If we now choose q = 2K₀ and k = k_n as in the proof of Theorem 8.6, using the integrability condition IE(‖X‖²/LL‖X‖) < ∞, we get that

(8.30)   Σ_{n∈L} IP{ sup_{f∈U} Σ_{i=1}^{m_n} |Z_i(f,n)| > (1 + 8K₀)ε a_{m_n} } < ∞.

Combining (8.28) and (8.30), we see that in order to establish the announced series estimate and conclude the proof of the theorem, we have to show that for some numerical C > 0,

(8.31)   Σ_{n∉L} IP{ sup_{f∈U_n} |Σ_{i=1}^{m_n} f(X_i)| > Cε a_{m_n} } < ∞.

( IP i=1 ) > C"amn < 1 : We follow very mu h the pattern of the ase n 2 L . and de. Let now 0n = mn =4"amn .

n) . Zi (f. Exa tly as what we des ribed before for Zi (f. we an get from the isoperimetri approa h that X (8:32) n ( mn X ) jZ 0 (f. n) . n)j 8"amn =mn . We observe. that IEjZi0 (f. n) as Yi (f. the exponential inequality of Lemma 1. n)j > C"am IP sup i f 2U i=1 n < 1: Con erning Yi0 (f. n) .6 shows that . n) before but with 0n instead of n . n) .ne Yi0 (f. Zi0 (f. now be ause (X ) 1 . i mn .

.

mn .

X .

.

sup .

n). Yi0 (f.

.

.

f 2Un .

( IP i=1 ) > "amn 2 CardUn exp( "2 (2 pe)LLmn ) 2 exp 2 " 4 n LLmn .

LLmn > "(2C2 (.26). n we learly get that . if n 62 L . Sin e n ! 0 . Æ) n ) 1 where n < 1 .238 P where we have used (8. Now.

.

( ) mn .

X .

X IP sup .

.

n). Yi0 (f.

.

> "amn < 1 .

f 2Un .

which, together with (8.32), yields (8.31). This completes the proof of Theorem 8.11.

Theorem 8.11 settles the question of the identification of the limits when S_n/a_n → 0 in probability. Very little however is known at the present time about the limit (8.22), or just only (8.20), in case of the bounded LIL, that is when (S_n/a_n) is only bounded in probability (cf. Theorem 8.6). Let us note that K. Alexander [Ale4] showed that when lim_{n→∞} d(S_n/a_n, K) = 0, then necessarily C(S_n/a_n) = K. The limsup Λ(X) need not be equal to σ(X) and has to take into account the stochastic boundedness of (S_n/a_n), for example through Γ(X) = lim sup_{n→∞} IE‖S_n/a_n‖ (cf. [A-K-L]). One might wonder what function of σ(X) and Γ(X) (or some other quantity equivalent to Γ(X)) Λ(X) could be. We believe that this study could lead to some rather intricate situations, as suggested by the following example, with which we close this chapter. We construct here an example showing that the condition S_n/a_n → 0 in probability in Theorem 8.11 is not necessary for the limit lim_{n→∞} d(S_n/a_n, K) to be almost surely 0.

Example 8.12. There exists a random variable X satisfying the bounded LIL in the Banach space c₀ such that

   lim_{n→∞} d(S_n/a_n, K) = 0

with probability one, where K = K_X is the unit ball of the reproducing kernel Hilbert space associated to the covariance structure of X, but for which S_n/a_n does not converge in probability to 0.

The construction of this example is based on the following preliminary study, which appears to be kind of canonical and could possibly be useful for related constructions. It will be convenient for this study, as well as for the example itself, to use the language and notations of empirical processes (cf. Chapter 14). Let I be a subinterval of [0,1] of length b divided into p equal subintervals. Consider the class G of functions on [0,1] defined by

   G = { I_A ; A is the union of h subintervals (of I) }.

It is implicit that p and h are large enough for all the computations we will develop. We denote by (X_i) a sequence of independent variables uniformly distributed on [0,1] and, with some abuse, study, for every n,

   IE‖Σ_{i=1}^n ε_i f(X_i)‖_G = IE sup_{f∈G} |Σ_{i=1}^n ε_i f(X_i)|

where, as usual, (ε_i) is a Rademacher sequence independent of (X_i).
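For this class G, the supremum is explicitly computable: writing s_j for the signed count of the sample in the j-th subinterval, a union A of exactly h subintervals maximizes |Σ_{j∈A} s_j| by taking either the h largest or the h smallest s_j. A small brute-force check of this reduction (all parameters below are illustrative):

```python
import itertools
import random

def sup_over_G(signed_counts, h):
    """sup over unions A of exactly h subintervals of |sum_{j in A} s_j|."""
    s = sorted(signed_counts)
    return max(sum(s[-h:]), -sum(s[:h]))

def sup_brute_force(signed_counts, h):
    p = len(signed_counts)
    return max(abs(sum(signed_counts[j] for j in A))
               for A in itertools.combinations(range(p), h))

random.seed(1)
counts = [random.randint(-4, 4) for _ in range(8)]
assert sup_over_G(counts, 3) == sup_brute_force(counts, 3)
```

This sorted-prefix/suffix structure is what makes the entropy of G manageable (Card G ≤ p^h) while still producing large suprema when many points fall in I.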

Note first that, obviously, for all f in G,

   |Σ_{i=1}^n ε_i f(X_i)| ≤ Card{ i ≤ n ; X_i ∈ I }

so that

(8.33)   IE‖Σ_{i=1}^n ε_i f(X_i)‖_G ≤ bn.

Let us now try to improve this general estimate for relative values of n. To this aim, we use Bennett's inequality (6.10) from which we easily deduce that, for all f in G and all t > 0,

   IP{ |Σ_{i=1}^n ε_i f(X_i)| > t } ≤ 2 exp( −(t/C₁)ψ(t/nσ²) )

where σ² = hb/p, ψ(u) = u when 0 < u ≤ e, ψ(u) = e log u when u ≥ e, and C₁ is some (large) numerical constant. Since Card G ≤ p^h = exp(h log p) and ψ is increasing,

   IP{ ‖Σ_{i=1}^n ε_i f(X_i)‖_G > t } ≤ 2 exp( h log p − (t/C₁)ψ(t/nσ²) ).

Consider now t₀ such that t₀ψ(t₀/nσ²)/C₁ ≥ h log p (≥ 1); by integration by parts and the definition of t₀,

   IE‖Σ_{i=1}^n ε_i f(X_i)‖_G ≤ 3t₀.

It is easily verified that when n ≥ C₁ p log p/e²b, one may take t₀ to be C₁h(n b log p/p)^{1/2}, whereas when n ≤ C₁ p log p/e²b, it follows that we can take

   t₀ = C₁ e h log p / log( C₁ e p log p / bn ).

Combining with (8.33), we have thus obtained that:

(8.34)   (1/√n) IE‖Σ_{i=1}^n ε_i f(X_i)‖_G ≤ b√n   if n ≤ h,

(8.35)   (1/√n) IE‖Σ_{i=1}^n ε_i f(X_i)‖_G ≤ 3C₁ e h log p / ( √n log(C₁ e p log p/bn) )   if h ≤ n ≤ C₁ p log p/e²b,

(8.36)   (1/√n) IE‖Σ_{i=1}^n ε_i f(X_i)‖_G ≤ 3C₁ h (b log p/p)^{1/2}   if n ≥ C₁ p log p/e²b.

Since the right hand side of (8.35) is decreasing in n (for the values of n considered), we obtain a first consequence of this investigation.

Corollary 8.13. If m ≥ h/b, then

   sup_{n≥m} (1/a_n) IE‖Σ_{i=1}^n ε_i f(X_i)‖_G ≤ C max( h log p / ( √m log(C₁ e p log p/bm) (LL(h/b))^{1/2} ) , h (b log p / (p LL(h/b)))^{1/2} ).

Corollary 8.14. If m ≥ h/b and, in addition, C₁p ≥ m², then

   sup_{n≥m} (1/a_n) IE‖Σ_{i=1}^n ε_i f(X_i)‖_G ≤ C max( h log p / ( √m log(C₁ e p log p/bm) (LL(h/b))^{1/2} ) , (hb/LL(h/b))^{1/2} ).

To obtain a control which is uniform in n, consider the bound (8.34) for n ≤ h/b and the previous corollary.

Corollary 8.15. If C₁p ≥ h², then

   sup_n (1/a_n) IE‖Σ_{i=1}^n ε_i f(X_i)‖_G ≤ C (hb/LL(h/b))^{1/2}.

Provided with this preliminary study and the preceding statements, we now start the construction of Example 8.12. C denotes below some numerical constant possibly varying from line to line. Consider increasing sequences of integers (n(q)), (p(q)), (s(q)) to be specified later on. Let I_q, J_q, q ∈ IN, be disjoint intervals in [0,1] where, for each q, I_q has length b(q) = LLn(q)/n(q) and J_q has length b(q)/s(q). We divide I_q in p(q) equal subintervals and denote by the intervals of I_q the family of these subintervals. We set

   a(q) = a_{n(q)} = (2n(q)LLn(q))^{1/2}   and   φ(q) = (1/10) a(q)/LLn(q).

We set further, always for all q,

   G_q = { φ(q) I_A ; A is union of [n(q)b(q)] intervals of I_q }

where [·] is the integer part function. We note that for every f in G_q,

   ‖f‖₂ = d(q)   where   d(q)² = φ(q)² [n(q)b(q)] b(q)/p(q)

which is equivalent, for q large, to 2LLn(q)/10²p(q). Let T_q be the affine map from J_q onto I_q. For f with support in I_q and constant on the intervals of I_q, set

   U_q(f) = f + ( (1 − d(q)²)/d(q)² )^{1/2} √(s(q)) f∘T_q

so that ‖U_q(f)‖₂ = ‖f‖₂/d(q). We set further F_q = {U_q(f) ; f ∈ G_q}, G'_q = {U_q(f) − f ; f ∈ G_q}. In particular ‖f‖₂ = 1 if f ∈ F_q. Let F = ∪_q F_q; observe that F is countable. Let X be the map from [0,1] × {−1,+1} into c₀(F) defined by X(x,ε) = (εf(x))_{f∈F}. Since the intervals I_q, J_q are disjoint, X actually takes its values in the space of finite sequences. We will show that one can choose appropriately the sequences (n(q)), (p(q)), (s(q)) such that

(8.37)   (1/a_{n(q)}) ‖Σ_{i=1}^{n(q)} ε_i f(X_i)‖_F does not tend to 0 in probability

and, for all u > 0,

(8.38)   lim sup_{n→∞} (1/a_n) ‖Σ_{i=1}^n ε_i f(X_i)‖_{F_u} ≤ 1 almost surely

where F_u = { f ∈ u Conv F ; ‖f‖₂ ≤ 1 }. (The notation ‖·‖_F has the same meaning as in the preliminary study.)

Before turning to the somewhat technical details of the construction, let us show why (8.37) and (8.38) give rise to Example 8.12. That S_n/a_n does not tend to 0 in probability follows from (8.37). To establish that lim_{n→∞} d(S_n/a_n, K) = 0 almost surely where K = K_X, it suffices to show (as in Theorem 8.11) that for every ε > 0,

   lim sup_{n→∞} |‖S_n/a_n‖| ≤ 1 with probability one,

where |‖·‖| is the gauge of K + εB₁, B₁ the unit ball of c₀(F). But the unit ball of the dual norm is V = { g ∈ ℓ₁(F) ; ‖g‖ ≤ 1/ε, IEg(X)² ≤ 1 } so that it thus suffices to have that

   lim sup_{n→∞} sup_{g∈V} (1/a_n) |Σ_{i=1}^n ε_i g(X_i)| ≤ 1 almost surely.

Let V₀ be the elements of V with finite support. Then (8.38) exactly means that

   lim sup_{n→∞} sup_{g∈V₀} (1/a_n) |Σ_{i=1}^n ε_i g(X_i)| ≤ 1 almost surely

set Hq = fg 2 2q Conv( F` ) . Hq is a onvex `q set of . b(q) 2q p(q 1) (whi h is possible sin e (q) = a(q)=10LLn(q) and b(q) = LLn(q)=n(q) ) and r(q 1) a(q) ar(q 1) n(q) (8:40) 2 q: We then take p(q) suÆ iently large su h that p(q) n(q)4 : (8:41) Set m(q) = [(2 q p(q))1=2 ℄ and hoose then s(q) large enough so that (8:42) s(q) 2q m(q)b(q) . To this aim. The ase q = 1 . Let us therefore assume that r(q 1) has been onstru ted. the on lusion follows.37) and (8. LL (q) 2q . Let us now turn to the onstru tion and proof of (8. similar to the general ase is left to the reader. s(q)d(q)2 1 and (what is a tually a stronger ondition) p LL s(q) p(q) : (8:43) S We are left with the hoi e of r(q) . kgk2 1g . We will a tually onstru t (by indu tion) the sequen es (p(q)) and (s(q)) from sequen es (r(q)) and (m(q)) su h that r(q 1) < n(q) < m(q) < r(q) (where ea h stri t inequality a tually means mu h bigger). We then take n(q) large enough in order that (8:39) LLn(q) 24q .38).242 and sin e V0 is easily seen to be dense in norm in V .

There exists a .nite dimension.

6). easily implies that one an . The exponential inequality of Kolmogorov (Lemma 1.nite set Hq0 su h that Hq ConvHq0 and kgk2 (1 2 q 1 ) 1=2 for g 2 Hq0 . applied to ea h g in Hq0 .

for all n r(q) and all t with an =2 t 2an . >t t2 (1 2 q ) 2n .nd r(q) su h that. (8:44) 8 n < X IP "i f (Xi ) : i=1 Hq 9 = 2 exp .

243 and n X IE "i f (Xi ) (8:45) Hq i=1 C pn where C is numeri al. is the omplement of Tq1 . This ompletes the onstru tion (by indu tion) of the various sequen es of integers we will work with. if Tq1. 1 b(q) IP(Tq1. Let Tq1 be the event Tq1 = f8i m(q) . ea h interval of Iq ontains at most one Xi g : Clearly.38).37) and (8. by de. ) m(q)(m(q) 1)p(q) 2 p(q) 2 and thus. We an now turn to the proofs of (8.

there are at least [n(q)b(q)℄ points Xi . ) m(q)b(q)=s(q) 2 q by (8. IP(Tq1. on a set of probability bigger than 1=3 (for example). i n(q) . let Tq0 = f8i m(q) .42). ) 2 q .37). We an then already show (8. Xi 62 Jq g : Then IP(Tq0 .nition of m(q) and the fa t that b(q) 1 . From these estimates indeed. in Iq whi h are in dierent intervals of Iq and not in Jq . There . Similarly. for all q large enough.

.

exists therefore a union A of [n(q)b(q)℄ intervals of Iq su h that (q)n(q)b(q) = a(q)=10 . 8 > n(q) < X IP "i f (Xi ) > : i=1 . it follows that. for all q large enough.

.

nP (q) .

"i IA (Xi ).

.

.

.

.

Let us . Sin e 9 > = a(q)=40> 13 . We now turn to (8. i=1 12 [n(q)b(q)℄ .37) learly follows. Fq for whi h (8.38) whi h is of ourse the most diÆ ult part.

As in the proof of Theorem 8. kf k2 1g . (8:46) lim sup n!1 (n) X " f (X ) a(n) i=1 i i 1 Fu almost surely .11.x u > 0 and re all that Fu = ff 2 uC onvF . > 1. it suÆ e to show that for all > 1 .

r(q)℄ (as subsets of integers) and q study separately lim sup and lim sup . Let us .244 S where (n) = [n ℄ . We set IN1 = [r(q q S 1). m(q)℄ and IN2 = [m(q).

Let us first consider the limsup when ϕ(n) ∈ IN2. We make use for it of the proof of Theorem 8.11, from which we know that we will get a limsup less than 1 under the conditions IE(g²/LLg²) < ∞, where g = ‖f‖_F = sup_{f∈F} |f|, and

(8.47)   lim_{n∈IN2} (1/a_n) IE ‖ Σ_{i=1}^n ε_i f(X_i) ‖_F = 0 .

To check the integrability condition, let g_q = sup_{f∈F_q} |f|; the g_q have disjoint supports and, for all large q,

   g_q ≤ α(q) I_{I_q} + (s(q)^{1/2}/d(q)) I_{J_q} ,

so that

   IE(g_q²/LLg_q²) ≤ α(q)² b(q)/LLα(q) + s(q)/(d(q)² LLs(q)) ≤ α(q)² b(q)/LLα(q) + p(q)/(LLn(q) LLs(q)) ,

which gives rise to a summable series by (8.39) and (8.45). We now turn to (8.47). Note that F ⊂ H_q ∪ F_q ∪ ∪_{ℓ>q} F_ℓ, so that (cf. (8.33))

   (1/a_n) IE ‖ Σ_{i=1}^n ε_i f(X_i) ‖_F ≤ (1/a_n) IE ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{H_q} + (1/a_n) IE ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{F_q} + (1/a_n) IE ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{∪_{ℓ>q} F_ℓ} = (I) + (II) + (III) .

(I) has a limit zero. Concerning (III), let n ∈ [m(q), r(q)]; for ℓ > q we have that

   IE ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{G_ℓ} ≤ n α(ℓ)b(ℓ) ≤ n a(ℓ)/n(ℓ)   and   IE ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{G′_ℓ} ≤ n (s(ℓ)/d(ℓ)) b(ℓ) ,

so that, since n ≤ r(q),

   (1/a_n) IE ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{F_ℓ} ≤ (n/a_n)(a(ℓ)/n(ℓ)) ≤ 2^{−ℓ+1}

by (8.42), and therefore (III) ≤ 2^{−q+2}. Hence, as before, (III) also has a limit which is 0. We are left with (II), which we evaluate by the preliminary study; we write that

   (II) ≤ (1/a_n) IE ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{G_q} + (1/a_n) IE ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{G′_q} = (IV) + (V) .

For (IV), using Corollary 8.15 with b = b(q)/s(q), h = [n(q)b(q)] and p = p(q) (the definition of m(q), m(q) = [(2^{−q} p(q))^{1/2}], shows that we are in a position to apply Corollary 8.15), we get, for some numerical constant C and all q large enough,

   (IV) ≤ C max( n(q)^{−1/2} , m(q)^{−1/2} (n(q) log p(q)/p(q))^{1/2} ) .

By the choice of p(q) ≤ n(q)⁴, we deduce that (IV) tends to 0. The control of (V) is similar by (8.43), using Corollary 8.14. We have therefore established in this way that (8.47) holds and thus (8.46) along ϕ(n) ∈ IN2.

We now establish (8.46) when ϕ(n) ∈ IN1. In this last part of the proof, we let b = b(q), h = [n(q)b(q)] and p = p(q); note that, for q large enough, m(q) ≤ 2^{−q−1} n(q)². For each q, consider Tq = ∩_{i=1}^5 Tqⁱ where

   Tq¹ = { ∀ i ≤ m(q), X_i ∉ J_q ∪ ∪_{ℓ>q} (I_ℓ ∪ J_ℓ) } ,
   Tq² = { ∀ i ≤ m(q), X_i ∉ I_q } ,
   Tq³ = { for 2^{−2q} n(q) ≤ n ≤ m(q), Card{ i ≤ n ; X_i ∈ I_q } ≤ 2e² n b(q) } ,
   Tq⁴ = { for 2^{−q} n(q)/LLn(q) ≤ n ≤ 2^{−2q} n(q), Card{ i ≤ n ; X_i ∈ I_q } ≤ e² 2^{−q+1} a_n/α(q) } ,
   Tq⁵ = { ∀ i ≤ 2^{−q} n(q)/LLn(q), each interval of I_q contains at most one X_i } .

We would like to show that Σ_q IP(Tqᶜ) < ∞, for which it suffices to prove that Σ_q IP(Tqⁱᶜ) < ∞, i = 1, …, 5, where Tqⁱᶜ is the complement of Tqⁱ. In these estimates we use the binomial bound (1.16), where B(n, ρ) is the number of successes in a run of n Bernoulli trials with probability of success ρ; it implies in particular that, when t ≥ e² (for example) and tρ < 1,

(8.48)   IP{ B(n, ρ) ≥ tρn } ≤ exp( −tρn log(t/e) ) ≤ exp( −tρn ) .

We have already seen that IP(Tq¹ᶜ) ≤ 2^{−q}: indeed, when ℓ ≤ q, IP{ ∃ i ≤ m(q) ; X_i ∈ J_ℓ } ≤ m(q)s(ℓ) by (8.39), while, when ℓ > q, IP{ ∃ i ≤ m(q) ; X_i ∈ I_ℓ } ≤ m(q)b(ℓ) ≤ p(q)b(ℓ) ≤ p(ℓ−1)b(ℓ) ≤ 2^{−ℓ} by (8.42). Hence also IP(Tq²ᶜ) ≤ 2^{−q}. For i = 3, take in (8.48) ρ = b(q) and t = e² (q large); it follows that

   IP(Tq³ᶜ) ≤ Σ_{ℓ≥0} exp( −e² 2^{−2q+ℓ} n(q)b(q) ) .

Since n(q)b(q) = LLn(q) ≥ 2^{4q} by (8.39), at least for q large enough, one verifies that Σ_q IP(Tq³ᶜ) < ∞.
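As a quick numerical sanity check of the binomial bound (8.48), one can compare the exact binomial tail with the exponential estimate. The sketch below is not part of the original argument; the parameters (n, ρ, t) are arbitrary illustrative choices, with t ≥ e² as required.

```python
from math import comb, exp, ceil

def binom_tail(n, rho, k):
    """Exact IP{B(n, rho) >= k} for B(n, rho) binomial with parameters n, rho."""
    return sum(comb(n, i) * rho**i * (1 - rho)**(n - i) for i in range(k, n + 1))

# Check IP{B(n, rho) >= t*rho*n} <= exp(-t*rho*n) for t >= e^2 (~7.39),
# with a few illustrative parameter choices.
for n, rho, t in [(200, 0.05, 8.0), (500, 0.02, 10.0), (1000, 0.01, 8.0)]:
    k = ceil(t * rho * n)
    tail = binom_tail(n, rho, k)
    bound = exp(-t * rho * n)
    assert tail <= bound, (tail, bound)
    print(f"n={n}, rho={rho}, t={t}: tail={tail:.3e} <= bound={bound:.3e}")
```

The margin is large: the sharper intermediate bound exp(−tρn log(t/e)) already improves on exp(−tρn) as soon as t > e².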

Concerning Tq⁴, set v(ℓ) = [2^{−q+ℓ} n(q)/LLn(q)]. Then

   Tq⁴ ⊃ ∩ { Card{ i ≤ v(ℓ) ; X_i ∈ I_q } ≤ e² 2^{−q} a_{v(ℓ)}/α(q) } ,

where the intersection is over all ℓ ≥ 0 such that v(ℓ) ≤ 2^{−2q} n(q). Taking in (8.48) ρ = b(q), n = v(ℓ) and t = e² 2^{−q} a_{v(ℓ)}/v(ℓ)α(q)b(q) ≥ e², we get

   IP(Tq⁴ᶜ) ≤ Σ_{ℓ≥0} exp( −e² 2^{−q} a_{v(ℓ)}/α(q) ) .

Since a_{v(ℓ)}/α(q) ≥ 2^{q/2+ℓ/2} (LLn(q))^{1/2}, one concludes as for i = 3 that Σ_q IP(Tq⁴ᶜ) < ∞. Concerning Tq⁵, one need simply note that each interval of I_q has length b(q) = LLn(q)/n(q), and a direct computation yields the summability in this case as well. We have thus established that Σ_q IP(Tqᶜ) < ∞.

For every q (large enough), set now

   P_q = IP{ ‖ Σ_{i=1}^{ϕ(n)} ε_i f(X_i) ‖_{F_u} > γ a(ϕ(n)) for some r(q−1) ≤ ϕ(n) ≤ m(q) ; Tq } .

The proof will be completed once we will have shown that Σ_q P_q < ∞. Take q large enough so that 2^q > u. On Tq, none of the X_i's, i ≤ ϕ(n) ≤ m(q), is in J_ℓ for ℓ ≤ q or I_ℓ for ℓ > q (definition of Tq²).

On the other hand, the restriction of a function of F_u to I_q ∪ J_q belongs to F_{q,u} = { f ∈ u Conv F_q ; ‖f‖_2 ≤ 1 }, and its restriction to ∪_{ℓ<q} (I_ℓ ∪ J_ℓ) is in H_{q−1}. We thus have that

(8.49)   ‖ Σ_{i=1}^{ϕ(n)} ε_i f(X_i) ‖_{F_u} ≤ ‖ Σ_{i=1}^{ϕ(n)} ε_i f(X_i) ‖_{H_{q−1}} + ‖ Σ_{i=1}^{ϕ(n)} ε_i f(X_i) ‖_{F_{q,u}} .

Lemma 8.16. For every ε > 0, there exists q₀ = q₀(ε) such that for all q ≥ q₀ and r(q−1) ≤ ϕ(n) ≤ m(q), one can find numbers β(n, q) with the following properties:

(8.50)   on Tq ,   ‖ Σ_{i=1}^{ϕ(n)} ε_i f(X_i) ‖_{F_{q,u}} ≤ β(n, q) ≤ a(ϕ(n))/3 ;
(8.51)   there exists M = M(ε, q₀) such that Card{ n ; β(n, q) > ε a(ϕ(n)) } ≤ M ;
(8.52)   the corresponding exceptional terms form a summable series over q.

Before proving this lemma, let us show how to conclude from it. By (8.44), we may now use (8.49) and (8.50) and estimate the probability

   IP{ ‖ Σ_{i=1}^{ϕ(n)} ε_i f(X_i) ‖_{H_{q−1}} > a(ϕ(n)) − β(n, q) ; Tq }

by

   2 exp( −(1/2)(a(ϕ(n)) − β(n, q))²/ϕ(n) (1 − 2^{−q+1}) ) ,

using that ϕ(n) ≥ r(q−1).

In (8.50) and (8.51), take ε > 0 to be such that γ − ε > 1. If β(n, q) ≤ ε a(ϕ(n)), then a(ϕ(n)) − β(n, q) ≥ (γ − ε − 1 + 1) a(ϕ(n)) − ε a(ϕ(n)) ≥ (γ − ε) a(ϕ(n)) − a(ϕ(n)) + a(ϕ(n)), and the preceding bound gives rise to a summable series by (8.39). If β(n, q) > ε a(ϕ(n)), the number of such n is controlled by M and thus, using (8.52), for each q (≥ q₀) there is a contribution of at most

   M exp( −(1/18) LLn(q−1) ) ,

which is summable by (8.39). Therefore Σ_q P_q < ∞. The crucial point is thus the following.

Proof of Lemma 8.16. If g ∈ F_{q,u}, let f be the restriction of g to I_q. Then g = U_q f, and 1 ≥ ‖U_q f‖_2 = ‖f‖_2/d(q), so that ‖f‖_2 ≤ d(q). (The first reader that will have comprehensively reached this point of this construction is invited, first, to courageously pursue his effort until the end, and then to contact the authors (the second one!) for a reward as a token of his perseverance.) Let n be

fixed, with n ≤ m(q), and assume we are on Tq. Let N = Card{ i ≤ n ; X_i ∈ I_q }. Since on Tq the X_i's are in distinct intervals of I_q, by Cauchy-Schwarz,

   Σ_{i=1}^n |f(X_i)| ≤ N^{1/2} ( Σ_{I∈I_q} (value of f on I)² )^{1/2}

where, f being constant on the intervals of I_q of length b(q),

   Σ_{I∈I_q} (value of f on I)² = ‖f‖_2²/b(q) ≤ d(q)²/b(q) .

When n ≥ 2^{−2q} n(q), by Tq³, N ≤ 2e² n b(q) ≤ 20 n b(q), and we get that

   Σ_{i=1}^n |f(X_i)| ≤ 25 ( n LLn(q) )^{1/2} ,

and therefore in this case (where it is assumed that n(q) is so large that LL(2^{−2q} n(q)) ≥ (1/2) LLn(q))

(8.53)   (1/a_n) ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{F_{u,q}} ≤ 5u ( LLn(q)/LLn )^{1/2} .

When 2^{−q} n(q)/LLn(q) ≤ n ≤ 2^{−2q} n(q), by Tq⁴,

   Card{ i ≤ n ; X_i ∈ I_q } ≤ 20 · 2^{−q} a_n/α(q) ,

and therefore, since u α(q) n(q) b(q) = u a(q)/10, one obtains similarly the bounds

(8.54)   (1/a_n) ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{F_{u,q}} ≤ 2u (n/a_n)(a(q)/n(q))   when n ≤ n(q) ,
(8.55)   (1/a_n) ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{F_{u,q}} ≤ 2u ( n/n(q) )^{1/2}   when 2^{−2q} n(q) ≤ n ≤ n(q)

(where it is assumed as before that q is large enough so that LL(2^{−q} n(q)) ≥ (1/2) LLn(q)), and

(8.56)   (1/a_n) ‖ Σ_{i=1}^n ε_i f(X_i) ‖_{F_{u,q}} ≤ 20u 2^{−q}   when n ≤ 2^{−2q} n(q) .

Recall u is fixed. By (8.53)-(8.56), we can then simply take

   β(n, q) = 20u 2^{−q} a(ϕ(n))   if ϕ(n) ≤ 2^{−2q} n(q) ,
   β(n, q) = min( (2/3) a(ϕ(n)) , 2u (ϕ(n)/n(q))^{1/2} a(ϕ(n)) )   if 2^{−2q} n(q) ≤ ϕ(n) ≤ n(q) ,
   β(n, q) = min( (2/3) a(ϕ(n)) , 5u (LLn(q)/LLϕ(n))^{1/2} a(ϕ(n)) )   if ϕ(n) ≥ n(q) ,

and the numbers β(n, q) so defined satisfy all the required properties. This completes the proof of Lemma 8.16 and therefore of Example 8.12.

Notes and references

Before reaching the rather complete description we give in this chapter, the study of the law of the iterated logarithm (LIL) for Banach space valued random variables went through several stages of partial results and

understanding. We do not try here to give a detailed account of the contributions of all the authors but rather concentrate only on the main steps of the history of these results. Let us mention that the LIL is a vast subject of Probability theory of which only one particular aspect is developed here; for a survey of various LIL topics, we refer to [Bin]. The exposition of this chapter is basically taken from the papers [L-T2] and [L-T5].

The LIL of A. N. Kolmogorov appeared in 1929 [Ko] and is of extraordinary accuracy for the time. Kolmogorov used sharp upper and lower exponential inequalities (carefully described in [Sto]) presented here as Lemma 1.6 and Lemma 8.2. The extension to the vector valued setting in the form of Theorem 8.1 first appeared in [L-T4] with a limsup only finite; the best result presented here is new.

The independent and identically distributed scalar LIL is due to P. Hartman and A. Wintner [H-W] (with the proof sketched in the text), who extended previous early results in particular by Hardy and Littlewood and by Khintchine. Necessity was established by V. Strassen [St2]. The simple qualitative proof by randomization seems to be part of the folklore; the simple argument presented here is taken from [Fe2]. For the converse using Gaussian randomization, we refer to [L-T2]. The first form of the vector valued extension is due to J. Kuelbs [Kue4]. Strassen's paper [St1] is a milestone in the study of the LIL and strongly influenced the infinite dimensional developments.

Various simple proofs of the Hartman-Wintner-Strassen scalar LIL are now available; see e.g. [A 7] and the references therein. The setting up of the framework of the study of the LIL for Banach space valued random variables was undertaken in the early seventies by J. Kuelbs (cf. e.g. [Kue2]). The definition of the reproducing kernel Hilbert space of a weak-L2 random variable and Lemma 8.4 on compactness of its unit ball combine observations of [Kue3], [Pi3] and [G-K-Z]. Theorem 8.5 belongs to him [Kue3] (with however a somewhat too strong hypothesis removed in [Pi3]).

The progresses leading to the final characterization of Theorem 8.6 were numerous. To mention some, R. LePage first showed the result for Gaussian random variables, and G. Pisier [Pi3] established the LIL for square integrable random variables satisfying the central limit theorem, a condition weakened later into the boundedness or convergence to 0 in probability of (S_n/a_n) by J. Kuelbs [Kue4].

The first real characterization of the LIL in some infinite dimensional spaces is due to V. Goodman, J. Kuelbs and J. Zinn [G-K-Z] in Hilbert space, after a preliminary investigation by G. Pisier and J. Zinn [P-Z]. Then, successively, several authors extended the conclusion to various classes of smooth normed spaces [A-K], [He2], [Led2], [Led5], where the case of type 2 spaces (Corollary 8.8) was settled. The final characterization was obtained in [L-T2] using a Gaussian randomization technique already suggested in [Pi2] and put forward in the unpublished manuscript [Led6], with in particular results of [Pi3] and [G-K-Z]. Lemma 8.7 is taken from [G-K-Z]. The short proof given here based on the isoperimetric approach of Section 6.3 is taken from [L-T5]. Its consequence to the relation between LIL and CLT is discussed in Chapter 10.

The remarkable results on the cluster set C(S_n/a_n) presented here as Theorem 8.10 are due to K. Alexander [Ale3], [Ale4]. Among other results, he further showed that C(S_n/a_n) is almost surely empty if it does not contain 0 or, equivalently, if (and only if) lim inf_{n→∞} IE‖S_n‖/a_n > 0 when IE‖X‖² < ∞. Preliminary observations appeared in [Kue6], [G-K-Z] and in [A-K], where it was first shown that C(S_n/a_n) = K when S_n/a_n → 0 in probability (a result to which the first author of this monograph had some contribution). Example 8.11 is taken from [L-T5]. Some observations on possible values of Λ(X) in case of the bounded LIL may be found in [A-K-L]. Theorem 8.12, in the spirit of [Ale4], is new and due to the second author.

Chapter 9. Type and cotype of Banach spaces

9.1 ℓpⁿ-subspaces of Banach spaces
9.2 Type and cotype
9.3 Some probabilistic statements in presence of type and cotype
Notes and references

Chapter 9. Type and cotype of Banach spaces

The notion of type of a Banach space already appeared in the last chapters on the law of large numbers and the law of the iterated logarithm. We observed there that, in quite general situations, almost sure properties can be reduced to properties in probability or in Lp, 0 ≤ p < ∞. Starting with this chapter, we will now study the possibility of a control in probability or in the weak topology of probability distributions of sums of independent random variables. On the line or in finite dimensional spaces, such a control is usually easily verified through moment conditions by the orthogonality property

   IE | Σ_i X_i |² = Σ_i IE|X_i|² ,

where (X_i) is a finite sequence of independent mean zero real random variables (in L2). This property extends to Hilbert space, with the norm instead of absolute values, but does not extend to general Banach spaces. This observation already indicates the difficulties in showing tightness or boundedness in probability of sums of independent vector valued random variables. In some classes of Banach spaces, this general question has however reasonable answers. This will be in particular illustrated in the next chapter on the central limit theorem, which is indeed one typical example of this tightness question.
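The orthogonality identity above can be checked exactly by enumeration for the simplest mean zero variables, X_i = ε_i a_i with Rademacher signs ε_i; the following sketch (with arbitrarily chosen coefficients a_i) does this — the cross terms IE(X_i X_j), i ≠ j, cancel and only the variances survive.

```python
from itertools import product

# Independent mean-zero variables X_i = eps_i * a_i with eps_i = +/-1
# equiprobable; enumerate all sign patterns to compute IE|sum X_i|^2 exactly.
a = [3.0, -1.5, 2.0, 0.5]
n = len(a)

second_moment = 0.0
for signs in product([-1, 1], repeat=n):
    s = sum(e * x for e, x in zip(signs, a))
    second_moment += s * s / 2**n   # each sign pattern has probability 2^{-n}

sum_of_variances = sum(x * x for x in a)   # IE|X_i|^2 = a_i^2
assert abs(second_moment - sum_of_variances) < 1e-9
print(second_moment, sum_of_variances)    # both equal 9 + 2.25 + 4 + 0.25 = 15.5
```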

This classification of Banach spaces is based on the concepts of type and cotype. These notions have their origin in the preceding orthogonality property, which they extend in various ways. They are closely related to geometric properties of Banach spaces, of containing or not subspaces isomorphic to ℓpⁿ. Starting with Dvoretzky's theorem on spherical sections of convex bodies, we describe these relations in the first paragraph of this chapter as a geometrical background.

A short exposition of some general properties of type and cotype is given in Section 9.2. In the last section, we come back to our starting question and investigate some results on sums of independent random variables in spaces with some type or cotype. In particular, we complete some results on the law of large numbers as announced in Chapter 7. Pregaussian random variables and stable random variables in spaces of stable type are also discussed. We also briefly discuss spaces which do not contain c₀.

9.1 ℓpⁿ-subspaces of Banach spaces

Given 1 ≤ p ≤ ∞, recall that ℓpⁿ denotes IRⁿ equipped with the norm (Σ_{i=1}^n |α_i|^p)^{1/p} (max_{i≤n} |α_i| if p = ∞), α = (α_1, …, α_n) ∈ IRⁿ. When 1 ≤ p ≤ ∞, n is an integer and ε > 0, a Banach space B is said to contain a subspace (1 + ε)-isomorphic to ℓpⁿ if there exist x_1, …, x_n in B such that for all α = (α_1, …, α_n) in IRⁿ

   ( Σ_{i=1}^n |α_i|^p )^{1/p} ≤ ‖ Σ_{i=1}^n α_i x_i ‖ ≤ (1 + ε) ( Σ_{i=1}^n |α_i|^p )^{1/p}

(max_{i≤n} |α_i| if p = ∞). B contains ℓpⁿ's uniformly if it contains subspaces (1 + ε)-isomorphic to ℓpⁿ for all n and ε > 0. The purpose of this paragraph is to present some results which relate the set of p's for which a Banach space contains ℓpⁿ's to some probabilistic inequalities satisfied by the norm of B.

The fundamental result at the basis of this theory is the following theorem of A. Dvoretzky [Dv].

Theorem 9.1. Any infinite dimensional Banach space B contains ℓ₂ⁿ's uniformly.

It should be mentioned that the various subspaces isomorphic to ℓ₂ⁿ do not form a net and therefore cannot in general be patched together to form an infinite dimensional Hilbertian subspace of B. We shall give two different proofs of Theorem 9.1, both of them based on Gaussian variables and their properties described in Chapter 3. The first one uses isoperimetric and concentration inequalities (Section 3.1), the second comparison theorems (Section 3.3). They actually yield a stronger finite dimensional version of Theorem 9.1, which may be interpreted geometrically as a result on almost spherical sections of convex bodies (cf. [Mi-S], [F-L-M], [Pi18]). This finite dimensional statement is the following.

Theorem 9.2. For each ε > 0 there exists η(ε) > 0 such that every Banach space B of dimension N contains a subspace (1 + ε)-isomorphic to ℓ₂ⁿ where n = [η(ε) log N].

Theorem 9.1 clearly follows from this result. It will be convenient to immediately interpret this result in terms of Gaussian variables. Recall from Chapter 3 that if X is a Gaussian Radon random variable with values in a Banach space B, we set σ(X) = sup_{‖f‖≤1} (IEf²(X))^{1/2}, and X has strong moments of all orders (Corollary 3.2). We may then consider the "dimension" (or "concentration dimension") of a Gaussian variable X as the ratio

   d(X) = IE‖X‖² / σ(X)² .

Note that d(X) depends on both X and the norm of B. For example, if X follows the canonical distribution γ_N in IRᴺ and if B = ℓ₂ᴺ, then σ(X) = 1 and IE‖X‖² = N, so that d(X) = N. We already mentioned in Chapter 3 that the strong moments of a Gaussian vector valued variable are usually much bigger than the weak moments σ(X); recall for example that if B = ℓ∞ᴺ, then σ(X) is still 1 but IE‖X‖² is of the order of log N (cf. (3.7) and (3.14)). One of the conclusions of the Dvoretzky-Rogers lemma below is that the case of ℓ∞ᴺ is extremal. Since all moments of X are equivalent, and equivalent to the median M(X) of X, the replacement of IE‖X‖² by (IE‖X‖)² or M(X)² in d(X) gives rise to a dimension equivalent, up to numerical constants, to d(X). We use freely this observation below.

In the two proofs of Theorem 9.2 we will give, we use a crucial intermediate result known as the Dvoretzky-Rogers lemma. Let us state the result without proof (cf. [D-R], [F-L-M], [Mi-S], [Pi16], [TJ]).

Lemma 9.3. Let B be a Banach space of dimension N and let N′ = [N/2] (integer part). There exist points (x_i)_{i≤N′} in B such that ‖x_i‖ ≥ 1/2 for all i ≤ N′ and satisfying

(9.1)   ‖ Σ_{i=1}^{N′} α_i x_i ‖ ≤ ( Σ_{i=1}^{N′} |α_i|² )^{1/2}

for all α = (α_1, …, α_{N′}) in IR^{N′}. In particular, there exists a Gaussian random vector X with values in B whose dimension d(X) is larger than c log N, where c > 0 is some numerical constant.
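The two extreme examples of the concentration dimension mentioned above (d(X) ≈ N in ℓ₂ᴺ, d(X) ≈ 2 log N in ℓ∞ᴺ) can be estimated by simulation. The sketch below is only an illustration under the stated setup (canonical Gaussian, for which σ(X) = 1 in both norms); the sample sizes are arbitrary.

```python
import random, math

random.seed(0)
N, trials = 400, 1000

def mean_sq_norm(norm):
    """Monte Carlo estimate of IE||X||^2 for X canonical Gaussian in R^N."""
    tot = 0.0
    for _ in range(trials):
        g = [random.gauss(0.0, 1.0) for _ in range(N)]
        tot += norm(g) ** 2
    return tot / trials

# sigma(X) = 1 for both norms here, so d(X) is just IE||X||^2.
d_l2 = mean_sq_norm(lambda g: math.sqrt(sum(x * x for x in g)))   # ~ N
d_linf = mean_sq_norm(lambda g: max(abs(x) for x in g))           # ~ 2 log N

print(round(d_l2, 1), round(d_linf, 1), round(2 * math.log(N), 1))
assert 0.9 * N < d_l2 < 1.1 * N
assert math.log(N) < d_linf < 4 * math.log(N)
```

For N = 400 this gives d(X) close to 400 for the Euclidean norm against roughly 2 log 400 ≈ 12 for the sup-norm, in line with the extremality of ℓ∞ᴺ.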

It is easily seen how the second assertion of the lemma follows from the first one. Let indeed X = Σ_{i=1}^{N′} g_i x_i where (g_i) is orthogaussian. By (9.1),

   σ(X) = sup_{‖f‖≤1} (IEf²(X))^{1/2} = sup_{‖f‖≤1} ( Σ_{i=1}^{N′} f²(x_i) )^{1/2} = sup_{|α|≤1} ‖ Σ_{i=1}^{N′} α_i x_i ‖ ≤ 1 .

On the other hand, since ‖x_i‖ ≥ 1/2 for all i ≤ N′, by Lévy's inequality (2.7),

   IE‖X‖² ≥ (1/8) IE max_{i≤N′} |g_i|² ≥ c log N .

Provided with this tool, we can now attack the proofs of Theorem 9.2.

1st proof of Theorem 9.2. It is based on the concentration properties of the norm of a Gaussian random vector around its median or mean, and on stability properties of Gaussian distributions. We use Lemma 3.1, but the simpler inequality (3.2) can be used completely similarly. In a first step, we need two technical lemmas to discretize the problem of finding ℓ₂ⁿ-subspaces. We state them in a somewhat more general setting for further purposes. A δ-net (δ > 0) of a set A in (B, ‖·‖) is a set S such that for every x in A one can find y in S with ‖x − y‖ ≤ δ.

Lemma 9.4. Let n be an integer and let |||·||| be a norm on IRⁿ. For each ε > 0 there is δ = δ(ε) > 0 with the following property: let S be a δ-net of the unit sphere of (IRⁿ, |||·|||); if for some x_1, …, x_n in a Banach space B we have

   1 − δ ≤ ‖ Σ_{i=1}^n α_i x_i ‖ ≤ 1 + δ   for all α in S,

then, for all α in IRⁿ,

   (1 + ε)^{−1/2} |||α||| ≤ ‖ Σ_{i=1}^n α_i x_i ‖ ≤ (1 + ε)^{1/2} |||α||| .

Proof. By homogeneity, we can and do assume that |||α||| = 1. By definition of S, there exists α⁰ in S such that |||α − α⁰||| ≤ δ. Iterating the procedure, we get that α = Σ_{j≥0} λ_j αʲ with αʲ ∈ S and |λ_j| ≤ δʲ for all j ≥ 0. Hence

   ‖ Σ_{i=1}^n α_i x_i ‖ ≤ Σ_{j≥0} δʲ ‖ Σ_{i=1}^n α_iʲ x_i ‖ ≤ (1 + δ)/(1 − δ)   (δ < 1).

In the same way, ‖ Σ_{i=1}^n α_i x_i ‖ ≥ (1 − 3δ)/(1 − δ). It therefore suffices to choose δ appropriately, as a function of ε > 0 only, in order to get the conclusion.

The size of δ-nets of spheres in finite dimension is easily estimated in terms of δ > 0 and the dimension, as shown in the next lemma, which follows from a simple volume argument.

Lemma 9.5. Let |||·||| be any norm on IRⁿ. There is a δ-net S of the unit sphere of (IRⁿ, |||·|||) of cardinality less than (1 + 2/δ)ⁿ ≤ exp(2n/δ).

Proof. Let U denote the unit ball of (IRⁿ, |||·|||) and let (x_i)_{i≤m} be maximal in the unit sphere of U under the relations |||x_i − x_j||| > δ for i ≠ j. Then the balls x_i + (δ/2)U are disjoint and are all contained in (1 + δ/2)U. By comparing the volumes, we get that m(δ/2)ⁿ ≤ (1 + δ/2)ⁿ, from which the lemma follows.

We are now in a position to perform the first proof of Theorem 9.2.
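The cardinality bound of Lemma 9.5 can be observed numerically: any δ-separated set on the unit sphere has at most (1 + 2/δ)ⁿ points, whatever construction produced it. The sketch below (an illustration only; the greedy construction and the parameters n, δ are arbitrary choices) builds a δ-separated set on the Euclidean sphere from random samples and checks the bound.

```python
import random, math

random.seed(1)
n, delta = 4, 0.5

def random_sphere_point(n):
    """Uniform point on the Euclidean unit sphere of R^n (normalized Gaussian)."""
    g = [random.gauss(0.0, 1.0) for _ in range(n)]
    r = math.sqrt(sum(x * x for x in g))
    return [x / r for x in g]

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# Greedily build a delta-separated set on the sphere: keep a sample point
# only if it is more than delta away from all points kept so far.
net = []
for _ in range(5000):
    p = random_sphere_point(n)
    if all(dist(p, q) > delta for q in net):
        net.append(p)

# Volume comparison (Lemma 9.5): any delta-separated set has cardinality
# at most (1 + 2/delta)^n; a maximal such set is also a delta-net.
bound = (1 + 2 / delta) ** n
print(len(net), bound)
assert len(net) <= bound
```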

By Lemma 9.3, there exists a Gaussian variable X with values in B whose dimension is larger than c log N. Let M = M(X) denote the median of ‖X‖. Since (Corollary 3.2) M is equivalent to (IE‖X‖²)^{1/2}, we also have that M ≥ c (log N)^{1/2} σ(X), where c > 0 is numerical (possibly different from the preceding). Let n be an integer, to be specified, and consider independent copies X_1, …, X_n of X. With positive probability, the sample (X_1, …, X_n) will be shown to span a subspace almost isometric to ℓ₂ⁿ. The main argument is the concentration property of Gaussian vectors. More precisely, the basic rotational invariance property of Gaussian distributions indicates that if Σ_{i=1}^n α_i² = 1, then Σ_{i=1}^n α_i X_i has the same distribution as X. In particular, by Lemma 3.1, for all t > 0 and α = (α_1, …, α_n) with Σ_{i=1}^n α_i² = 1,

   IP{ | ‖ Σ_{i=1}^n α_i X_i ‖ − M | > t } ≤ exp( −t²/2σ(X)² ) .
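The concentration inequality just used can be watched in simulation. The sketch below (an illustration only; sample sizes are arbitrary, and the two-sided factor 2 makes the stated bound safely conservative) estimates the fluctuation of ‖X‖ around its empirical median for a canonical Gaussian in Euclidean space, where σ(X) = 1.

```python
import random, statistics, math

random.seed(3)
N, trials = 50, 4000

# ||X|| for X the canonical Gaussian in R^N (Euclidean norm); sigma(X) = 1.
norms = []
for _ in range(trials):
    norms.append(math.sqrt(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(N))))
med = statistics.median(norms)

for t in (0.5, 1.0, 1.5, 2.0):
    freq = sum(abs(r - med) > t for r in norms) / trials
    bound = 2 * math.exp(-t * t / 2)   # two-sided Gaussian concentration
    print(f"t={t}: observed {freq:.4f} <= bound {bound:.4f}")
    assert freq <= bound
```

Note that the observed deviations do not depend on the dimension N: this is exactly the dimension-free character of Gaussian concentration exploited in the proof.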

Let now ε > 0 be fixed and choose δ = δ(ε) > 0 according to Lemma 9.4. Let furthermore S be a δ-net of the unit sphere of ℓ₂ⁿ which can be chosen of cardinality less than exp(2n/δ) (Lemma 9.5). Taking t = δM, the preceding inequality implies that

   IP{ ∃ α ∈ S : | ‖ Σ_{i=1}^n α_i (X_i/M) ‖ − 1 | > δ } ≤ (Card S) exp( −δ²M²/2σ(X)² ) ≤ exp( 2n/δ − δ²c² log N/2 )

since M ≥ c (log N)^{1/2} σ(X). Assuming N is large enough (otherwise there is nothing to prove), choose n = [η log N] for η = η(δ) = η(ε) small enough in order that the preceding probability is strictly less than 1. It follows that there exists an ω such that for all α in S

   1 − δ ≤ ‖ Σ_{i=1}^n α_i (X_i(ω)/M) ‖ ≤ 1 + δ .

Hence, for x_i = X_i(ω)/M, i ≤ n, we are in the hypotheses of Lemma 9.4, so that the conclusion readily follows.

The second proof is shorter and neater, but relies on the somewhat more involved tools of Theorem 3.16 and Corollary 3.21, while isoperimetric arguments were used in the first proof only through concentration inequalities which, as we know, can actually be established rather easily. The discretization lemmas are not necessary in this second approach.

2nd proof of Theorem 9.2. If X is a Gaussian random variable with values in B, denote by X_1, …, X_n independent copies of X. Set

   φ = inf_{|α|=1} ‖ Σ_{i=1}^n α_i X_i ‖ ,   ψ = sup_{|α|=1} ‖ Σ_{i=1}^n α_i X_i ‖ .

Clearly 0 ≤ φ ≤ ψ and, by Corollary 3.21,

   IE‖X‖ − √n σ(X) ≤ IEφ ≤ IEψ ≤ IE‖X‖ + √n σ(X) .

It follows that for some ω, if IE‖X‖ > √n σ(X),

   ψ(ω)/φ(ω) ≤ ( IE‖X‖ + √n σ(X) ) / ( IE‖X‖ − √n σ(X) ) .

Choose now X in B according to Lemma 9.3, so that IE‖X‖ ≥ c (log N)^{1/2} σ(X) for some numerical c > 0. Then, if ε > 0 and n = [η(ε) log N] where η(ε) > 0 can be chosen depending on ε > 0 only,

   ( IE‖X‖ + √n σ(X) ) / ( IE‖X‖ − √n σ(X) ) ≤ ( c(log N)^{1/2} + √n ) / ( c(log N)^{1/2} − √n ) ≤ 1 + ε ,

so that ψ(ω) ≤ (1 + ε)φ(ω). By definition of φ and ψ, for every α with |α| = 1,

   φ(ω) ≤ ‖ Σ_{i=1}^n α_i X_i(ω) ‖ ≤ ψ(ω) .

Hence, setting x_i = X_i(ω)/φ(ω), i ≤ n, for every |α| = 1,

   1 ≤ ‖ Σ_{i=1}^n α_i x_i ‖ ≤ 1 + ε ,

which means, by homogeneity, that B contains a subspace (1 + ε)-isomorphic to ℓ₂ⁿ. The proof is complete.

Let us note that this second proof provides a dependence of η(ε) in Theorem 9.2 of the order of βε², where β is numerical. This is best possible. Recently, G. Schechtman [Sch5] has shown how the more classical isoperimetric approach of the first proof can be modified so as to yield also this dependence. According thus to Theorem 9.1, every infinite dimensional Banach space contains ℓ₂ⁿ's uniformly.

Clearly, this does not extend to ℓpⁿ's for p ≠ 2, as can be seen from the example of Hilbert space. Related to this question, note that if, for 0 < p ≤ 2, (θ_i) denotes a sequence of independent standard p-stable random variables defined on some probability space (Ω, A, IP), then, when p = 2, (θ_i) (which is then simply an orthogaussian sequence) spans ℓ₂ in Lq = Lq(Ω, A, IP) for all q, whereas for p < 2, (θ_i) spans ℓp in Lq only for q < p. It is remarkable that, at least in the case 1 ≤ p < 2, the set of p's for which a Banach space B contains ℓpⁿ's uniformly can be characterized through a probabilistic inequality satisfied by the norm of B. This is what we would like to describe now in the rest of this section. The case p > 2 will be shortly presented once the notion of cotype has been introduced in the next paragraph.

Let (θ_i) denote as usual a sequence of independent standard p-stable random variables. For 1 ≤ p ≤ 2, a Banach space B is said to be of stable type p if there is a constant C such that for every finite sequence (x_i) in B

(9.2)   ‖ Σ_i θ_i x_i ‖_{p,∞} ≤ C ( Σ_i ‖x_i‖^p )^{1/p} .
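Standard symmetric p-stable variables can be simulated; the sketch below uses the Chambers–Mallows–Stuck representation, which is a standard sampling method and not part of the text above. As a sanity check (under this assumed normalization), for p = 2 the formula reduces to 2 sin(θ)√W, a centered Gaussian with variance 2, i.e. characteristic function exp(−t²).

```python
import math, random

random.seed(2)

def sym_stable(p):
    """One draw of a standard symmetric p-stable variable, 0 < p <= 2,
    via the Chambers-Mallows-Stuck representation."""
    theta = random.uniform(-math.pi / 2, math.pi / 2)
    w = random.expovariate(1.0)
    if p == 1.0:
        return math.tan(theta)          # standard Cauchy
    return (math.sin(p * theta) / math.cos(theta) ** (1 / p)
            * (math.cos((1 - p) * theta) / w) ** ((1 - p) / p))

# Sanity check at p = 2: mean ~ 0 and variance ~ 2.
sample = [sym_stable(2.0) for _ in range(20000)]
mean = sum(sample) / len(sample)
var = sum(x * x for x in sample) / len(sample)
print(round(mean, 3), round(var, 3))
assert abs(mean) < 0.05 and 1.8 < var < 2.2
```

For p < 2 the draws have heavy tails with infinite variance, which is exactly why the weak moment ‖·‖_{p,∞} appears on the left of (9.2).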

The integrability properties and moment equivalences of stable random vectors (Chapter 5) tell us that an equivalent definition of the stable type property is obtained when ‖·‖_{p,∞} is replaced by any ‖·‖_r, r < p. Further, in terms of infinite sequences and using a closed graph argument, B is of stable type p if and only if Σ_i θ_i x_i converges almost surely whenever Σ_i ‖x_i‖^p < ∞. (As we know, the existence of the spectral measure of the stable variable Σ_i θ_i x_i determines its convergence, cf. (5.13); this is automatic when 0 < p < 1, and this is why the range of the stable type is 1 ≤ p ≤ 2.)

A Banach space B of stable type p is also of stable type p′ for every p′ < p. This is contained in Proposition 9.12 below, but we may briefly anticipate one argument in this regard here. We may assume that p > 1. Then, as a consequence of the contraction principle (Lemma 4.5), for some C, r > 0 and all finite sequences (x_i) in B,

(9.3)   ( IE ‖ Σ_i ε_i x_i ‖^r )^{1/r} ≤ C ( Σ_i ‖x_i‖^p )^{1/p} .

Let now (θ_i) be a standard p′-stable sequence, p′ < p. Since (θ_i) has the same distribution as (ε_i |θ_i|), this inequality applied conditionally yields

   IE ‖ Σ_i θ_i x_i ‖^r ≤ C^r IE ‖ (θ_i x_i) ‖_p^r ≤ C′^r ‖ (x_i) ‖_{p′,∞}^r ,

where C′ = C′(r, p, p′), using now Lemma 5.8 and the basic fact that ‖(θ_i)‖_{p′,∞} < ∞ (choose r < p′). Since p′ < p, B is seen to be of stable type p′.

As a consequence of the preceding, there is some interest to consider the number

(9.4)   p(B) = sup{ p ; B is of stable type p } .

Examples of spaces of stable type p will be given in the next section, once the general theory of type and cotype has been developed. Let us however mention an important example at this stage. Let 1 ≤ q < 2 and let Lq = Lq(S, Σ, μ) on some measure space (S, Σ, μ). Then Lq is of stable type p for all p < q. This can be seen for example from the preceding: indeed, a simple use of Fubini's theorem together with Khintchine's inequalities shows that (9.3) holds with r = p = q, and it then follows as above that Lq is of stable type p for every p < q. Unless finite dimensional, Lq is however not of stable type q: indeed, according to (5.19), the canonical basis of ℓq cannot satisfy the q-stable type inequality (9.2). Since the stable type property clearly only depends on the collection of the finite dimensional subspaces of a given Banach space, we have similarly that a Banach space containing ℓpⁿ's uniformly (1 ≤ p < 2) is not of stable type p.

The following theorem expresses the striking fact that the converse also holds.

Theorem 9.6. Let 1 ≤ p < 2. A Banach space B contains ℓpⁿ's uniformly if and only if B is not of stable type p.

Before turning to its proof, let us first state some important and useful consequences of Theorem 9.6. The first one expresses that the set of p's for which a Banach space is of stable type p is open.

Corollary 9.7. Let 1 ≤ p < 2. If a Banach space is of stable type p, it is also of stable type p₁ for some p₁ > p.

Proof. It is rather elementary to check that the set of p's in [1, 2] for which a Banach space contains ℓpⁿ's uniformly is a closed subset of [1, 2]. Therefore its complement is open, and Theorem 9.6 allows to conclude.

The second consequence answers the question addressed in the light of Dvoretzky's theorem for the p's such that 1 ≤ p ≤ 2. Recall p(B) of (9.4).

Corollary 9.8. The set of p's of [1, 2] for which an infinite dimensional Banach space B contains ℓpⁿ's uniformly is equal to [p(B), 2].

Proof. If p(B) = 2, B contains ℓ₂ⁿ's uniformly by Theorem 9.1. By definition of p(B) and Corollary 9.7, if p(B) ≤ p < 2, B is not of stable type p and therefore contains ℓpⁿ's uniformly, whereas for p < p(B), B is of stable type p and Theorem 9.6 applies again.

We next turn to the proof of Theorem 9.6 and, as for Theorem 9.1, will deduce this result from some stronger finite dimensional version. If B is a Banach space and 1 < p < 2, denote by STp(B) the smallest constant C for which (9.2), with the L1-norm on the left (yielding, as we know, an equivalent condition), holds.

Theorem 9.9. Let 1 < p < 2 and let q = p/(p − 1) be the conjugate of p. For each ε > 0 there exists β_p(ε) > 0 such that every Banach space B of stable type p contains a subspace (1 + ε)-isomorphic to ℓpⁿ where n = [β_p(ε) STp(B)^q].

This statement clearly contains Theorem 9.6. Indeed, if 1 < p < 2 and B is not of stable type p, then STp(B) = ∞ and one can find finite dimensional subspaces of B with corresponding stable type constant arbitrarily large; hence, by Theorem 9.9, B contains ℓpⁿ's uniformly. Since the set of p's of [1, 2] for which B contains ℓpⁿ's uniformly is closed, we reach the case p = 1 as well. That Banach spaces containing ℓpⁿ's uniformly are not of stable type p has been discussed prior to Theorem 9.6.

Proof of Theorem 9.9. It will follow the pattern of the first proof of Theorem 9.2 and will rely, through the series representation of stable random vectors, on the concentration inequality (6.14) for sums of independent random variables. Let S = (1/2) STp(B); by definition of STp(B), there exist non-zero x_1, …, x_N in B such that

   Σ_{i=1}^N ‖x_i‖^p = 1   and   IE ‖ Σ_{i=1}^N θ_i x_i ‖ > S

(recall (θ_i) is a standard p-stable sequence). (This is in a sense the step corresponding to Lemma 9.3 in Theorem 9.2, but whereas Lemma 9.3 holds true in any Banach space, the stable type constant enters the question here.) Let Y, with values on the unit sphere of B, be defined as Y = ±x_i/‖x_i‖ with probability ‖x_i‖^p/2, i ≤ N. Let (Y_j) be independent copies of Y. Then, as a consequence of Corollary 5.5, X = Σ_{i=1}^N θ_i x_i has the same distribution as c_p^{1/p} Σ_{j=1}^∞ Γ_j^{−1/p} Y_j. Consider now

   Z = Σ_{j=1}^∞ j^{−1/p} Y_j

and let (X_i) (resp. (Z_i)) denote independent copies of X (resp. Z). Let furthermore α = (α_1, …, α_n) in IRⁿ be such that Σ_{i=1}^n |α_i|^p = 1. We

rst laim that inequality (6. i=1 (9:5) (.14) implies that for every t > 0 .

n .

X .

IP .

i Zi .

i=1 .

n X .

IE i Zi .

.

.

has the same distribution as 1 X k=1 . if we have independent opies (Yj.i are iid and symmetri . sin e the Yj.i )i of the sequen e (Yj ) .i whi h. n X i=1 i Zi = 1X n X j =1 i=1 i j 1=p Yj. i=1 ) >t 2 exp( tq =Cq ): Indeed.

k Yk .

262 where (.

j 1 . sin e ji jp = 1 . n P 1 i ng . It is easily seen.k )k1 is the non-in reasing rearrangement of the doubly-indexed olle tion fji jj 1=p . that .

Indeed. there is no more exa tly the ase for the Zi 's. the idea of the proof is exa tly the same as in the proof of Theorem 9. This would however be needed in view of (9. so that (6.5).5). while the Xi 's are stable random variables and n P therefore.k k 1=p . From (9. by the fundamental stability property. However.5). i=1 n X IE i Xi (9:6) i=1 = IEkX k > p 1=p S. for ji jp = 1 .2 and the subspa e isomorphi to `np will be found at random.14) applied to the pre eding i=1 sum of independent random variables indeed yields (9. But Z is a tually lose enough to X so that this stability property an almost be safed for the Zi 's.8) that 1 X IE Yj ( j 1=p j =1 j 1=p ) 1 X j =1 IEj j 1=p j 1=p j = Dp where Dp is a . we know from (5.

Hence, by the triangle inequality and Hölder's inequality, for Σ_{i=1}^n |α_i|^p = 1,

   | c_p^{−1/p} IE ‖ Σ_{i=1}^n α_i X_i ‖ − IE ‖ Σ_{i=1}^n α_i Z_i ‖ | ≤ D_p Σ_{i=1}^n |α_i| ≤ D_p n^{1/q} .

Let now δ > 0 and impose, as a first condition on n, that

(9.7)   D_p n^{1/q} ≤ δ c_p^{−1/p} S/2 .

By (9.6), setting M = c_p^{−1/p} IE‖X‖, we see that

   | IE ‖ Σ_{i=1}^n α_i Z_i ‖ − M | ≤ δM/2 .

Hence (9.5) for t = δM/2 yields

   IP{ | ‖ Σ_{i=1}^n α_i Z_i ‖ − M | > δM } ≤ 2 exp( −δ^q M^q/2^q C_q ) ≤ 2 exp( −δ^q S^q/2^q c_p^{q/p} C_q ) .

The proof is almost complete. Let R be a δ-net of the unit sphere of ℓpⁿ, which can be chosen, according to Lemma 9.5, with a cardinality less than exp(2n/δ). Then

.

( ) ! n .

X .

2n Æq S q .

.

: IP 8 2 R. .

i Zi M .

ÆM 1 2 exp .

.

Take then = (Æ) = (") > 0 su h that if n = [STp (B )q ℄ (STp (B ) being assumed large enough otherwise there is nothing to prove).4. (9. Æ 2q q=p p Cq i=1 Given " > 0 hoose Æ = Æ(") > 0 small enough a ording to Lemma 9.7) holds and the pre eding probability is stri tly positive. It follows then that one an .

It follows then that one can find an $\omega$ such that for every $\alpha$ in $\mathbb{R}^n$
$$ (1+\varepsilon)^{-1/2}\Big(\sum_{i=1}^n|\alpha_i|^p\Big)^{1/p} \le \frac{1}{M}\Big\|\sum_{i=1}^n \alpha_i Z_i(\omega)\Big\| \le (1+\varepsilon)^{1/2}\Big(\sum_{i=1}^n|\alpha_i|^p\Big)^{1/p}, $$
which gives the conclusion. Theorem 9.9 is thus established.

9.2 Type and cotype

In the preceding section, we discovered how the probabilistic conditions of stable type are related to some geometric properties of Banach spaces. We start in this paragraph a systematic study of related probabilistic conditions named type and cotype (or Rademacher type and cotype). As usual, $(\varepsilon_i)$ denotes a Rademacher sequence. Let $1 \le p < \infty$. A Banach space $B$ is said to be of type $p$ (or Rademacher type $p$) if there is a constant $C$ such that for all finite sequences $(x_i)$ in $B$,
$$ \Big\|\sum_i \varepsilon_i x_i\Big\|_p \le C\Big(\sum_i \|x_i\|^p\Big)^{1/p}. \tag{9.8} $$
From the triangle inequality, every Banach space is of type 1. On the other hand, Khintchine's inequalities indicate that the definition makes sense only for $p \le 2$. Note moreover that the Khintchine-Kahane inequalities, in the form of moment equivalences of Rademacher series (Theorem 4.7), show that replacing the $p$-th moment of $\|\sum_i \varepsilon_i x_i\|$ by any other moment leads to an equivalent definition.

Furthermore, by a closed graph argument, $B$ is of type $p$ if and only if $\sum_i \varepsilon_i x_i$ converges almost surely when $\sum_i \|x_i\|^p < \infty$. Let $1 \le q \le \infty$. A Banach space $B$ is called of (Rademacher) cotype $q$ if there is a constant $C$ such that for all finite sequences $(x_i)$ in $B$,
$$ \Big(\sum_i \|x_i\|^q\Big)^{1/q} \le C\,\Big\|\sum_i \varepsilon_i x_i\Big\|_q \qquad \Big(\sup_i \|x_i\| \le C\,\Big\|\sum_i \varepsilon_i x_i\Big\|_q \ \text{when } q = \infty\Big). \tag{9.9} $$

By Lévy's inequalities (Proposition 2.7, (2.7)), or actually some easy direct argument based on the triangle inequality, every Banach space is of infinite cotype, whereas, by Khintchine's inequalities, the definition of cotype $q$ reduces actually to $q \ge 2$. The same comments as for the type apply: any moment of the Rademacher average in (9.9) leads to an equivalent definition, and $B$ is of cotype $q$ if and only if the almost sure convergence of the series $\sum_i \varepsilon_i x_i$ implies $\sum_i \|x_i\|^q < \infty$.

It is clear from the preceding comments that a Banach space of type $p$ (resp. cotype $q$) is also of type $p'$ for every $p' \le p$ (resp. of cotype $q'$ for every $q' \ge q$). Thus the "best" possible spaces in terms of the type and cotype conditions are the spaces of type 2 and cotype 2. Hilbert spaces have this property since, by orthogonality,
$$ \mathbb{E}\Big\|\sum_i \varepsilon_i x_i\Big\|^2 = \sum_i \|x_i\|^2. $$
It is a basic result of the theory, due to S. Kwapień [Kw1], that the converse (up to isomorphism) also holds.

Theorem 9.10. A Banach space is of type 2 and cotype 2 if and only if it is isomorphic to a Hilbert space.

If a Banach space is of type $p$ and cotype $q$, so are all its subspaces, with the same constants. Actually, the type and cotype properties are seen to depend only on the collection of the finite dimensional subspaces.
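The orthogonality identity for Hilbert spaces can be confirmed by exact enumeration of all $2^n$ sign patterns; a small sketch with arbitrary illustrative vectors in $\mathbb{R}^3$:

```python
from itertools import product

def rademacher_second_moment(xs):
    # E || sum_i eps_i x_i ||^2 computed exactly over the 2^n sign patterns.
    n, dim, total = len(xs), len(xs[0]), 0.0
    for eps in product((-1, 1), repeat=n):
        v = [sum(e * x[d] for e, x in zip(eps, xs)) for d in range(dim)]
        total += sum(c * c for c in v)
    return total / 2 ** n

xs = [(1.0, 2.0, 0.0), (0.5, -1.0, 3.0), (2.0, 0.0, 1.0)]
lhs = rademacher_second_moment(xs)
rhs = sum(sum(c * c for c in x) for x in xs)  # sum_i ||x_i||^2
print(abs(lhs - rhs) < 1e-9)  # True: type 2 and cotype 2 constants are both 1 here
```

The cross terms $\mathbb{E}\varepsilon_i\varepsilon_j = 0$ for $i \ne j$ are what makes the identity exact.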

It is not difficult to verify that quotients of a Banach space of type $p$ are also of type $p$. This is no longer true however for the cotype, as is clear for example from the fact that every Banach space can be realized as a quotient of $L_1$ and that (see below) $L_1$ is of best possible cotype, cotype 2.

To mention examples, Fubini's theorem and Khintchine's inequalities can easily be used to determine the type and cotype of the $L_p$-spaces. Let $(S, \Sigma, \mu)$ be a measure space and let $L_p = L_p(S, \Sigma, \mu)$, $1 \le p < \infty$. Then $L_p$ is of type $p$ when $p \le 2$ and of type 2 for $p \ge 2$. Let us briefly check these assertions. Assume that $1 \le p < \infty$ and let $(x_i)$ be a finite sequence in $L_p$. Using Lemma 4.1, we can write that
$$ \mathbb{E}\Big\|\sum_i \varepsilon_i x_i\Big\|_p^p = \int_S \mathbb{E}\Big|\sum_i \varepsilon_i x_i(s)\Big|^p d\mu(s) \le B_p^p \int_S \Big(\sum_i |x_i(s)|^2\Big)^{p/2} d\mu(s). $$
If $p \le 2$,
$$ \int_S \Big(\sum_i |x_i(s)|^2\Big)^{p/2} d\mu(s) \le \int_S \sum_i |x_i(s)|^p\, d\mu(s) = \sum_i \|x_i\|_p^p, $$
whereas, when $p \ge 2$, by the triangle inequality (in $L_{p/2}$),
$$ \Big(\int_S \Big(\sum_i |x_i(s)|^2\Big)^{p/2} d\mu(s)\Big)^{2/p} \le \sum_i \Big(\int_S |x_i(s)|^p\, d\mu(s)\Big)^{2/p} = \sum_i \|x_i\|_p^2. $$
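The two scalar facts underlying this computation, subadditivity of $t \mapsto t^{p/2}$ for $p \le 2$ and the triangle inequality in $L_{p/2}$ for $p \ge 2$, can be spot-checked numerically (the values below are illustrative):

```python
def concavity_step(cs, p):
    # (sum_i c_i)^(p/2) <= sum_i c_i^(p/2) for 0 < p <= 2 and c_i >= 0
    return sum(cs) ** (p / 2) <= sum(c ** (p / 2) for c in cs) + 1e-12

def minkowski_step(rows, p):
    # Triangle inequality in L_{p/2} (p >= 2) over a finite measure space:
    # || sum_i c_i ||_{p/2} <= sum_i || c_i ||_{p/2}, where c_i(s) = |x_i(s)|^2
    r = p / 2
    lhs = sum(sum(row[s] for row in rows) ** r for s in range(len(rows[0]))) ** (1 / r)
    rhs = sum(sum(v ** r for v in row) ** (1 / r) for row in rows)
    return lhs <= rhs + 1e-12

print(concavity_step([0.4, 1.3, 0.02], p=1.5), minkowski_step([[0.4, 1.3], [0.7, 0.1]], p=3.0))
```

Each call prints True; both inequalities hold for all admissible inputs, not just these samples.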

By considering the canonical basis of $\ell_p$, one can easily show that the preceding result cannot be improved, i.e., for $p \le 2$, an infinite dimensional $L_p$ is of no type $p' > p$. It can be shown similarly that $L_p$ is of cotype $p$ for $p \ge 2$ and of cotype 2 for $p \le 2$ (and nothing better). Note that $L_1$ is of cotype 2, as mentioned previously. We are left with the case $p = \infty$. It is obvious on the canonical basis that $\ell_\infty$ is of no type $p > 1$ and that $c_0$ (or $\ell_\infty$) is of no cotype $q < \infty$. Since $L_\infty$ contains isometrically any separable Banach space, $L_\infty$ is of type 1 and cotype $\infty$ and nothing more, and similarly $c_0$. In the same way, $C(S)$, the space of continuous functions on a compact metric space $S$ with the sup norm, has no non-trivial type or cotype; this applies in particular to $\ell_\infty$ and $c_0$. Using the moment equivalences of vector valued Rademacher averages (Theorem 4.7) instead of Khintchine's inequalities, the preceding examples can easily be generalized: for $1 \le r < \infty$, if a Banach space $B$ is of type $p$ and cotype $q$, then $L_r(S, \Sigma, \mu; B)$ is of type $\min(r, p)$ and of cotype $\max(r, q)$.

Let us mention further that the type and cotype properties appear as dual notions. Let $B$ be a Banach space of type $p$; then its dual space $B'$ is of cotype $q = p/(p-1)$. To check this, let $(x_i')$ be a finite sequence in $B'$. For each $\varepsilon > 0$ and each $i$, let $x_i$ in $B$, $\|x_i\| = 1$, be such that $x_i'(x_i) = \langle x_i', x_i\rangle \ge (1-\varepsilon)\|x_i'\|$, where $\langle\cdot,\cdot\rangle$ is duality. We then have
$$ (1-\varepsilon)\sum_i \|x_i'\|^q \le \sum_i \langle x_i', x_i\rangle\,\|x_i'\|^{q-1} = \mathbb{E}\Big\langle \sum_j \varepsilon_j x_j',\ \sum_i \varepsilon_i x_i \|x_i'\|^{q-1}\Big\rangle. $$
Hence, by Hölder's inequality (assuming $p > 1$) and the type $p$ property of $B$,
$$ (1-\varepsilon)\sum_i \|x_i'\|^q \le \Big\|\sum_i \varepsilon_i x_i \|x_i'\|^{q-1}\Big\|_p\,\Big\|\sum_j \varepsilon_j x_j'\Big\|_q \le C\Big(\sum_i \|x_i'\|^{p(q-1)}\Big)^{1/p}\Big\|\sum_j \varepsilon_j x_j'\Big\|_q. $$
Since $p(q-1) = q$, it follows that $B'$ is of cotype $q$ (with constant $C$). The converse assertion is not true in general, since $\ell_1$ is of cotype 2 but $\ell_\infty$ is of no type $p > 1$. A deep result of G. Pisier [Pi11] implies that the cotype is dual to the type if the Banach space does not contain $\ell_1^n$'s uniformly (i.e., if it is of some non-trivial type).
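That $\ell_1$ has no type $p > 1$ and $c_0$ no finite cotype is visible on the canonical bases by a finite computation: for every choice of signs, $\|\sum_{i\le n}\varepsilon_i e_i\|_1 = n$ against $(\sum_i\|e_i\|^p)^{1/p} = n^{1/p}$, while the sup norm of the same sum is identically 1 against $n^{1/q}$. A short sketch (the dimension is an arbitrary choice):

```python
from itertools import product

def avg_norm(n, norm):
    # E || sum_i eps_i e_i || by exact enumeration; the coordinates of the sum
    # are just the signs themselves.
    return sum(norm(eps) for eps in product((-1, 1), repeat=n)) / 2 ** n

n = 6
l1 = avg_norm(n, lambda v: sum(abs(c) for c in v))    # = n, so no type p > 1
sup = avg_norm(n, lambda v: max(abs(c) for c in v))   # = 1, so no cotype q < infinity
print(l1 == n and sup == 1)  # True
```

Letting $n \to \infty$ shows that no fixed constant $C$ can work in (9.8) or (9.9).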

Recall that a Banach space $B$ is said to be of stable type $p$ if there is a constant $C$ such that for all finite sequences $(x_i)$ in $B$,
$$ \Big\|\sum_i \theta_i x_i\Big\|_{p,\infty} \le C\Big(\sum_i \|x_i\|^p\Big)^{1/p}, $$
where $(\theta_i)$ is a sequence of independent standard $p$-stable variables.

Proposition 9.12. Let $1 \le p < 2$ and let $B$ be a Banach space. Then we have:
(i) If $B$ is of stable type $p$, it is of type $p$.
(ii) If $B$ is of type $p$, it is of stable type $p'$ for every $p' < p$.
(iii) $B$ is of stable type $p$ if and only if there is a constant $C$ such that for all finite sequences $(x_i)$ in $B$,
$$ \mathbb{E}\Big\|\sum_i \varepsilon_i x_i\Big\| \le C\,\|(x_i)\|_{p,\infty}. $$

Proof. Both (i) and (ii) follow from the more difficult claim (iii), but they can be given simple direct proofs. Concerning (i), we may assume $p > 1$, so that $\mathbb{E}|\theta_i| < \infty$, and (i) then follows from Lemma 4.5. For (ii), let $1 < p' < p$ and let $(\xi_i)$ denote a standard $p'$-stable sequence. Recall that, since $p > p'$, for any sequence $(\alpha_i)$ of scalars,
$$ \Big(\sum_i |\alpha_i|^p\Big)^{1/p} \le \Big(\frac{p}{p-p'}\Big)^{1/p}\,\|(\alpha_i)\|_{p',\infty}. $$
Applying conditionally the type $p$ inequality and the preceding, we get
$$ \mathbb{E}\Big\|\sum_i \xi_i x_i\Big\| = \mathbb{E}\Big\|\sum_i \varepsilon_i |\xi_i| x_i\Big\| \le C\,\mathbb{E}\Big(\sum_i |\xi_i|^p \|x_i\|^p\Big)^{1/p} \le C\,\mathbb{E}\big(\|(|\xi_i|\,\|x_i\|)_i\|_{p',\infty}\big). $$
We then conclude by Lemma 5.8, since $p' > 1$ and $\|\theta\|_{p',\infty} < \infty$. The "if" part of (iii) reproduces the proof of (ii) we just gave (with $p' = p$). Conversely, if $B$ is of stable type $p$, we know from Theorem 9.6 that $B$ is also of stable type $p_1$ for some $p_1 > p$, hence of type $p_1$ by (i). By the comparison between the $\ell_{p,1}$ and $\ell_{p,\infty}$ norms, the proof is complete.

Note, as a consequence of this proposition and Corollary 9.7, that a Banach space $B$ is of some type $p > 1$, or equivalently of stable type 1 or of stable type $p$ for some $p > 1$, if and only if $B$ does not contain $\ell_1^n$'s uniformly (recall that $\ell_1$ is of no type $p > 1$). Further, $p(B)$ introduced in (9.4) is also given by
$$ p(B) = \sup\{p;\ B \text{ is of type } p\}. $$
As a consequence of Proposition 9.12, note the following version of Proposition 9.11 for the stable type.

Proposition 9.13. Let $1 \le p < 2$. A Banach space $B$ is of stable type $p$ if and only if there is a constant $C$ such that for every finite sequence $(X_i)$ of independent symmetric Radon random variables in $L_{p,\infty}(B)$,
$$ \Big\|\sum_i X_i\Big\|_{p,\infty}^p \le C\,\sup_{t>0}\,t^p\sum_i \mathbb{P}\{\|X_i\| > t\}, $$
or, equivalently,
$$ \Big\|\sum_i X_i\Big\|_{p,\infty}^p \le C\sum_i \|X_i\|_{p,\infty}^p. $$

Proof. The "if" part follows by simply letting $X_i = \theta_i x_i$ in the second inequality. We establish the first inequality (which clearly implies the second) in spaces of stable type $p$. Assume, by homogeneity, that $\sup_{t>0} t^p \sum_i \mathbb{P}\{\|X_i\| > t\} \le 1$. Since $B$ is of stable type $p$, it is also, by Theorem 9.6 and (i), of type $p'$ for some $p' > p$. Let $t > 0$. Since $t^p\,\mathbb{P}\{\max_i \|X_i\| > t\} \le 1$,
$$ t^p\,\mathbb{P}\Big\{\Big\|\sum_i X_i\Big\| > 2t\Big\} \le 1 + t^p\,\mathbb{P}\Big\{\Big\|\sum_i X_i I_{\{\|X_i\|\le t\}}\Big\| > t\Big\} \le 1 + C\,t^{p-p'}\sum_i \mathbb{E}\big(\|X_i\|^{p'} I_{\{\|X_i\|\le t\}}\big). $$
Now
$$ \sum_i \mathbb{E}\big(\|X_i\|^{p'} I_{\{\|X_i\|\le t\}}\big) \le \int_0^t \sum_i \mathbb{P}\{\|X_i\| > s\}\,ds^{p'} \le \int_0^t s^{-p}\,ds^{p'} = \frac{p'}{p'-p}\,t^{p'-p}, $$
and the conclusion follows.

We next turn to the case of a 2-stable standard sequence, that is, an orthogaussian sequence $(g_i)$, which will lead us in the same way to some questions analogous to the preceding ones in the context of the cotype. We first complete the case of the type and orthogaussian sequences, which is the simplest. Since Gaussian averages always dominate Rademacher averages ((4.8)), if in a Banach space $B$
$$ \mathbb{E}\Big\|\sum_i g_i x_i\Big\|^p \le C\sum_i \|x_i\|^p $$
for some constant $C$ and all finite sequences $(x_i)$, then $B$ must be of type $p$. Conversely, if $B$ is of type $p$, this inequality holds by Proposition 9.11 (applied conditionally). In particular, stable type 2 and Rademacher type 2 are equivalent notions.
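The scalar comparison invoked in the proof of (ii), namely $\big(\sum_i|\alpha_i|^p\big)^{1/p} \le (p/(p-p'))^{1/p}\sup_k k^{1/p'}\alpha_k^*$ for $p' < p$ (with $(\alpha_k^*)$ the non-increasing rearrangement), can be checked directly on samples; the values are illustrative:

```python
def weak_quasi_norm(alpha, pp):
    # sup_k k^(1/p') * (k-th largest |alpha_i|)
    a = sorted((abs(x) for x in alpha), reverse=True)
    return max((k + 1) ** (1.0 / pp) * v for k, v in enumerate(a))

def lp_vs_weak(alpha, p, pp):
    lp = sum(abs(x) ** p for x in alpha) ** (1.0 / p)
    bound = (p / (p - pp)) ** (1.0 / p) * weak_quasi_norm(alpha, pp)
    return lp <= bound + 1e-12

print(lp_vs_weak([3.0, 0.1, 0.5, 2.0, 0.25], p=1.8, pp=1.2))  # True
```

The constant comes from comparing $\sum_k k^{-p/p'}$ with the integral $\int_1^\infty x^{-p/p'}dx$.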

On the other hand, Rademacher averages do not always dominate the corresponding Gaussian ones, in particular in $\ell_\infty$; we have seen however, in a discussion after Lemma 4.5, what can be said conversely (see also (9.12) below). Turning to the cotype, if in a Banach space $B$
$$ \sum_i \|x_i\|^q \le C\,\mathbb{E}\Big\|\sum_i g_i x_i\Big\|^q \tag{9.10} $$
for some constant $C$ and all finite sequences $(x_i)$, this inequality does not readily imply that $B$ is of cotype $q$. This is however true, and we now would like to describe, mainly without proofs, some of the deep steps leading to this conclusion. The next proposition already covers various applications. Recall that for a (real) random variable $\xi$ we set
$$ \|\xi\|_{q,1} = \int_0^\infty \big(\mathbb{P}\{|\xi| > t\}\big)^{1/q}\,dt. $$
Note that if $s > q$, then $\|\xi\|_{q,1} \le \big(s/(s-q)\big)\|\xi\|_s$.

Proposition 9.14. Let $r \ge 1$ and let $(\xi_i)$ be a sequence of independent symmetric real random variables distributed like $\xi$. If $B$ is a Banach space of cotype $q_0 < \infty$ and if $q = \max(r, q_0)$, there is a constant $C$ such that for all finite sequences $(x_i)$ in $B$,
$$ \Big\|\sum_i \xi_i x_i\Big\|_r \le C\,\|\xi\|_{q,1}\,\Big\|\sum_i \varepsilon_i x_i\Big\|_r. $$
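The comparison $\|\xi\|_{q,1} \le (s/(s-q))\|\xi\|_s$ for $s > q$ can be verified exactly for discrete variables, since the integral defining $\|\xi\|_{q,1}$ is then a finite sum; the distribution below is an illustrative assumption:

```python
def lorentz_q1(dist, q):
    # || xi ||_{q,1} = int_0^inf P(|xi| > t)^(1/q) dt, computed piecewise for a
    # discrete |xi| given as [(value, probability), ...] with distinct values.
    pts = sorted(dist)
    out, prev = 0.0, 0.0
    for i, (v, _) in enumerate(pts):
        tail = sum(p for _, p in pts[i:])  # P(|xi| > t) for t in [prev, v)
        out += (v - prev) * tail ** (1.0 / q)
        prev = v
    return out

dist = [(0.5, 0.5), (2.0, 0.3), (5.0, 0.2)]
q, s = 2.0, 3.0
ls_norm = sum(p * v ** s for v, p in dist) ** (1.0 / s)
print(lorentz_q1(dist, q) <= (s / (s - q)) * ls_norm + 1e-12)  # True
```

The bound follows by splitting the integral at $t = \|\xi\|_s$ and using Chebyshev's inequality on the tail.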

Proof. Let $A$ be measurable and such that $\mathbb{P}(A) > 0$. Set $\varphi = I_A$; on some (rich enough) probability space, consider independent copies $(\varphi_i)$ of $\varphi$ and assume first that $\xi_i = \varepsilon_i\varphi_i$, where $(\varepsilon_i)$ is an independent Rademacher sequence. We show that in this case, for some constant $C$,
$$ \Big\|\sum_i \xi_i x_i\Big\|_r \le C\big(\mathbb{P}(A)\big)^{1/q}\,\Big\|\sum_i \varepsilon_i x_i\Big\|_r. \tag{9.11} $$
By an easy approximation and the contraction principle, we may and do assume that $\mathbb{P}(A) = 1/N$ for some integer $N$. Let then $\{A_1,\dots,A_N\}$ be a partition of the probability space into sets of probability $1/N$ with $A_1 = A$, and let $(\varphi_i^j)_i$ be independent copies of $I_{A_j}$ for all $j \le N$. Using that $L_r(B)$ is of cotype $q$, with constant $C$ say, we see that
$$ \Big(\sum_{j=1}^N \Big\|\sum_i \varepsilon_i \varphi_i^j x_i\Big\|_r^q\Big)^{1/q} \le C\,\Big\|\sum_{j=1}^N \varepsilon_j' \Big(\sum_i \varepsilon_i \varphi_i^j x_i\Big)\Big\|_r, $$
where $(\varepsilon_j')$ is another independent Rademacher sequence. The left hand side of this inequality is just $N^{1/q}\|\sum_i \xi_i x_i\|_r$. Since $|\sum_j \varepsilon_j' \varphi_i^j| = 1$ for every $i$, by symmetry the right side is $C\|\sum_i \varepsilon_i x_i\|_r$. Thus inequality (9.11) holds.

We can then conclude the proof of the proposition. Note that
$$ |\xi_i| = \int_0^\infty I_{\{|\xi_i| > t\}}\,dt. $$
For every $t > 0$, by (9.11),
$$ \Big\|\sum_i \varepsilon_i I_{\{|\xi_i| > t\}}\, x_i\Big\|_r \le C\big(\mathbb{P}\{|\xi| > t\}\big)^{1/q}\,\Big\|\sum_i \varepsilon_i x_i\Big\|_r. $$
Therefore, by the triangle inequality,
$$ \Big\|\sum_i \varepsilon_i |\xi_i|\, x_i\Big\|_r \le C\int_0^\infty \big(\mathbb{P}\{|\xi| > t\}\big)^{1/q} dt\,\Big\|\sum_i \varepsilon_i x_i\Big\|_r, $$
from which the conclusion follows, since $(\varepsilon_i|\xi_i|)$ has the same distribution as $(\xi_i)$.

Before turning back to the discussion leading to Proposition 9.14, let us note that this result has a dual version for the type. Namely:

Proposition 9.15. Let $r \ge 1$ and let $(\xi_i)$ be a sequence of independent symmetric real random variables distributed like $\xi$. If $B$ is a Banach space of type $p_0$ and if $p = \min(r, p_0)$, there is a constant $C$ such that for all finite sequences $(x_i)$ in $B$,
$$ \|\xi\|_{p,\infty}\,\Big\|\sum_i \varepsilon_i x_i\Big\|_r \le C\,\Big\|\sum_i \xi_i x_i\Big\|_r. $$
The proof is entirely similar to the proof of Proposition 9.14; in the last step, simply use the contraction principle to see that for every $t > 0$,
$$ \Big\|\sum_i \varepsilon_i I_{\{|\xi_i| > t\}}\, x_i\Big\|_r \le \frac1t\,\Big\|\sum_i \varepsilon_i |\xi_i|\, x_i\Big\|_r. $$
This result thus appears as an improvement, in spaces of some type, of the usual contraction principle, in which $\|\xi\|_1$ appears on the left (Lemma 4.5).

It should be noticed that Propositions 9.14 and 9.15 are optimal in the sense that they characterize cotype $q_0$ and type $p_0$ whenever $r = q_0$, resp. $r = p_0$. Indeed, if $x_1,\dots,x_N$ are points in a Banach space and if $A$ is such that $\mathbb{P}(A) = 1/N$, let $(\varphi_i)$ be independent copies of $I_A$ and $\xi_i = \varepsilon_i\varphi_i$. Then clearly
$$ \mathbb{E}\Big\|\sum_{i=1}^N \xi_i x_i\Big\|^r = \mathbb{E}\Big\|\sum_{i=1}^N \varepsilon_i \varphi_i x_i\Big\|^r \ge \sum_{i=1}^N \int_{\{\varphi_i = 1,\ \varphi_j = 0\ \forall j \ne i\}} \|x_i\|^r\,d\mathbb{P} = \frac1N\Big(1 - \frac1N\Big)^{N-1}\sum_{j=1}^N \|x_j\|^r. $$

This last quantity is of the order of $N^{-1}\sum_{i=1}^N \|x_i\|^r$, which easily shows the above claim.

Turning back to the question behind Proposition 9.14, we therefore know that if $B$ has some finite cotype, inequality (9.10) will imply that $B$ is of cotype $q$. Inequality (9.10) actually easily implies that $B$ cannot contain $\ell_\infty^n$'s uniformly (simply because it cannot hold uniformly for the canonical basis of $\ell_\infty^n$). A deep result of B. Maurey and G. Pisier [Mau-Pi] shows that this last property characterizes spaces having a non-trivial cotype. The theorem, which is the counterpart for the cotype of the results detailed previously for the type, can be stated as follows. We refer to [Mau-Pi], [Mi-S], [Mi-Sh] for proofs; a more probabilistic proof of this theorem is perhaps still to be found.

Theorem 9.16. A Banach space $B$ is of cotype $q$ for some $q < \infty$ if and only if $B$ does not contain $\ell_\infty^n$'s uniformly. More precisely, if $q(B) = \inf\{q;\ B \text{ is of cotype } q\}$ and $B$ is infinite dimensional, then $B$ contains $\ell_q^n$'s uniformly for $q = q(B)$.

This completes the $\ell_p^n$-subspaces question for $p > 2$, although the set of $p > 2$ for which a given Banach space contains $\ell_p^n$'s uniformly seems to be rather arbitrary. Summarizing, in particular, some of the (dual) conclusions of Corollary 9.8 and Theorem 9.16, we retain that an (infinite dimensional) Banach space has some non-trivial type (resp. cotype) if and only if it does not contain $\ell_1^n$'s (resp. $\ell_\infty^n$'s) uniformly. Further, combining Theorem 9.16 with Proposition 9.14, if a Banach space $B$ does not contain $\ell_\infty^n$'s uniformly, there is a constant $C$ such that for all finite sequences $(x_i)$ in $B$,
$$ \mathbb{E}\Big\|\sum_i g_i x_i\Big\| \le C\,\mathbb{E}\Big\|\sum_i \varepsilon_i x_i\Big\|. \tag{9.12} $$
This is thus an improvement, in those spaces, over the in general best possible inequality (4.9). Conversely, if (9.12) holds in a Banach space $B$, then $B$ does not contain $\ell_\infty^n$'s uniformly. By Proposition 9.14, this characterization easily extends to more general sequences of independent random variables than the orthogaussian sequence.

To conclude this section, we would like to briefly indicate the (easy) extension of the notions of type and cotype to operators between Banach spaces. A linear operator $u : E \to F$ between two Banach spaces $E$ and $F$ is said to be of (Rademacher) type $p$, $1 \le p \le 2$, if there is a constant $C$ such that for all finite sequences $(x_i)$ in $E$,
$$ \Big\|\sum_i \varepsilon_i u(x_i)\Big\|_p \le C\Big(\sum_i \|x_i\|^p\Big)^{1/p}. $$
Similarly, $u$ is said to be of cotype $q$ if
$$ \Big(\sum_i \|x_i\|^q\Big)^{1/q} \le C\,\Big\|\sum_i \varepsilon_i u(x_i)\Big\|_q. $$
Some of the easy properties of type and cotype clearly extend without modification to operators; this is in particular trivially the case for Proposition 9.11, which we use freely below. One can also consider operators of stable type but, on the basis for example of Proposition 9.12, one may consider (possibly) different definitions. We can say that $u : E \to F$ is of stable type $p$, $1 \le p < 2$, if for some constant $C$ and all finite sequences $(x_i)$ in $E$,
$$ \Big\|\sum_i \theta_i u(x_i)\Big\|_{p,\infty} \le C\Big(\sum_i \|x_i\|^p\Big)^{1/p}, \tag{9.13} $$
where $(\theta_i)$ is a standard $p$-stable sequence. We can also say that it is $p$-stable if
$$ \Big\|\sum_i \varepsilon_i u(x_i)\Big\|_p \le C\,\|(x_i)\|_{p,\infty}. \tag{9.14} $$
(9.13) and (9.14) are thus equivalent for the identity operator of a given Banach space but, for lack of a geometrical characterization analogous to Theorem 9.6, these two definitions are actually different in general. We refer to [P-R1] for a discussion of this difference, as well as of related definitions.

9.3. Some probabilistic statements in presence of type and cotype

In this last paragraph, we try to answer some of the questions we started with. As we know, this question is motivated by the strong limit theorems which were reduced in the preceding chapters to weak statements, as well as by the central limit theorem investigated in the next chapter. We will namely establish tightness and convergence in probability of various sums of independent random variables taking their values in Banach spaces having some type or cotype. We thus revisit now the strong laws of Kolmogorov and of Marcinkiewicz-Zygmund. Type 2 and cotype 2 will also be examined in their relations to pregaussian random variables, as well as to spectral measures of stable distributions in spaces of stable type. Finally, we present some results, not directly related to type and cotype however, on almost sure boundedness and convergence of sums of independent random variables in spaces which do not contain isomorphic copies of $c_0$. As announced, since we will be dealing with tightness and weak convergence properties, we only consider in this chapter Radon random variables.

We start with the SLLN of Kolmogorov for independent random variables. We have seen in Corollary 7.14 that if $(X_i)$ is a sequence of independent random variables with values in a Banach space such that, for some $1 \le p \le 2$,
$$ \sum_i \frac{\mathbb{E}\|X_i\|^p}{i^p} < \infty, \tag{9.15} $$
then the SLLN holds if and only if the weak law of large numbers holds, i.e. $S_n/n \to 0$ in probability. In type $p$ spaces, and actually only in them, the series condition (9.15) implies the weak law $S_n/n \to 0$ in probability (provided the variables are centered), hence the SLLN. This is the conclusion of the next theorem.

Theorem 9.17. Let $1 \le p \le 2$. A Banach space $B$ is of type $p$ if and only if, for every sequence $(X_i)$ of independent mean zero (or only symmetric) Radon random variables with values in $B$ satisfying (9.15), $S_n/n \to 0$ almost surely.

Proof. Assume first that $B$ is of type $p$. By Corollary 7.14, we need only show that $S_n/n \to 0$ in probability when $\sum_i \mathbb{E}\|X_i\|^p/i^p < \infty$ and the $X_i$'s have mean zero. But this is rather trivial under the type $p$ condition. Indeed, by Proposition 9.11, for some constant $C$ and all $n$,
$$ \mathbb{E}\Big\|\frac{S_n}{n}\Big\|^p \le \frac{C^p}{n^p}\sum_{i=1}^n \mathbb{E}\|X_i\|^p. $$
The result then follows from the classical Kronecker lemma (cf. [Sto]).

To prove the converse, we assume the SLLN property for random variables of the form $X_i = \varepsilon_i x_i$, where $x_i \in B$ and $(\varepsilon_i)$ is a Rademacher sequence. Thus, if $\sum_i \|x_i\|^p/i^p < \infty$, we know that $\sum_{i=1}^n \varepsilon_i x_i/n \to 0$ almost surely, and also in $L_1(B)$ (or $L_r(B)$ for any $r < \infty$) by Theorem 4.7 together with Lemma 4.2. Hence, by the closed graph theorem, for some constant $C$,
$$ \sup_n \frac1n\,\mathbb{E}\Big\|\sum_{i=1}^n \varepsilon_i x_i\Big\| \le C\Big(\sum_i \frac{\|x_i\|^p}{i^p}\Big)^{1/p} $$
for every sequence $(x_i)$ in $B$. Given $y_1,\dots,y_m$ in $B$, apply this inequality to the sequence $(x_i)$ defined by $x_i = 0$ if $i \le m$ or $i > 2m$, $x_{m+1} = y_1$, $x_{m+2} = y_2,\dots,x_{2m} = y_m$. We get that
$$ \mathbb{E}\Big\|\sum_{i=1}^m \varepsilon_i y_i\Big\| \le 2C\Big(\sum_{i=1}^m \|y_i\|^p\Big)^{1/p}, $$
and $B$ is therefore of type $p$. Theorem 9.17 is thus established.

As a consequence of the preceding and of Corollary 9.8, we can state:

Corollary 9.18. A Banach space $B$ is of type $p$ for some $p > 1$ if and only if every sequence $(X_i)$ of independent symmetric uniformly bounded Radon random variables with values in $B$ satisfies the SLLN.

Proof. Necessity follows from Theorem 9.17. Conversely, it suffices to prove that if $B$ is of no type $p > 1$, there exists a bounded sequence $(x_i)$ such that $(\sum_{i=1}^n \varepsilon_i x_i/n)$ does not converge almost surely, or in $L_1(B)$. By Corollary 9.8, together with Proposition 9.12, if $B$ has no non-trivial type, then $B$ contains $\ell_1^n$'s uniformly. Hence, for every $n$, there exist $y_1^n,\dots,y_{2^n}^n$ in $B$ such that $\|y_i^n\| \le 1$, $i = 1,\dots,2^n$, and
$$ \mathbb{E}\Big\|\sum_{i=1}^{2^n} \varepsilon_i y_i^n\Big\| \ge 2^{n-1}. $$
Define now $(x_i)$ by letting $x_i = y_j^n$, $j = i + 1 - 2^n$, $2^n \le i < 2^{n+1}$. It follows by Jensen's inequality that if $2^n \le m < 2^{n+1}$,
$$ \frac1m\,\mathbb{E}\Big\|\sum_{i=1}^m \varepsilon_i x_i\Big\| \ge \frac{1}{2^{n+1}}\,\mathbb{E}\Big\|\sum_{i=1}^{2^n} \varepsilon_i y_i^n\Big\| \ge \frac14. $$
This proves Corollary 9.18.

Further results of Chapter 7 can be interpreted similarly under type conditions. We do not detail everything, but would like to describe the independent and identically distributed (iid) case, following the discussion next to Theorem 7.9. To this aim, it is convenient to record that Proposition 9.11 is still an equivalence, for the type, when the random variables $X_i$ have the same distribution. This is the content of the following statement.

Proposition 9.19. Let $1 \le p \le 2$ and let $B$ be a Banach space. Assume there is a constant $C$ such that for every finite sequence $(X_1,\dots,X_N)$ of iid symmetric Radon random variables in $L_p(B)$,
$$ \mathbb{E}\Big\|\sum_{i=1}^N X_i\Big\| \le C N^{1/p}\big(\mathbb{E}\|X_1\|^p\big)^{1/p}. $$
Then $B$ is of type $p$.

Proof. Let $(\varphi_j)_{j\le N}$ be real symmetric random variables with disjoint supports such that for every $j = 1,\dots,N$,
$$ \mathbb{P}\{\varphi_j = 1\} = \mathbb{P}\{\varphi_j = -1\} = \frac12\big(1 - \mathbb{P}\{\varphi_j = 0\}\big) = \frac{1}{2N}. $$
Let $(x_j)_{j\le N}$ be points in $B$. Then $X = \sum_{j=1}^N \varphi_j x_j$ is such that $\mathbb{E}\|X\|^p = \sum_{j=1}^N \|x_j\|^p/N$, so that it is enough to show that if $X_1,\dots,X_N$ are independent copies of $X$,
$$ \mathbb{E}\Big\|\sum_{i=1}^N X_i\Big\| \ge \beta\,\mathbb{E}\Big\|\sum_{j=1}^N \varepsilon_j x_j\Big\| $$
for some $\beta > 0$. To this aim, denote by $(\varphi_j^i)$, $i \le N$, independent copies of $(\varphi_j)$, assumed to be independent from a Rademacher sequence $(\varepsilon_j)$. By symmetry,
$$ \mathbb{E}\Big\|\sum_{i=1}^N X_i\Big\| = \mathbb{E}\Big\|\sum_{j=1}^N \varepsilon_j\Big(\sum_{i=1}^N \varphi_j^i\Big) x_j\Big\| $$
and therefore, by Lemma 4.5 for symmetric sequences,
$$ \mathbb{E}\Big\|\sum_{i=1}^N X_i\Big\| \ge \mathbb{E}\Big|\sum_{i=1}^N \varphi_1^i\Big|\cdot\mathbb{E}\Big\|\sum_{j=1}^N \varepsilon_j x_j\Big\|. $$

N X IE Xi i=1 . : : : . XN ) of iid symmetri Radon random variables in Lp (B ) . N X IE Xi i=1 N X IE "j xj j =1 for some > 0 . N X IE Xi i=1 CN 1=p (IEkX1 kp )1=p : Then B is of type p . assumed to be independent from a Radema her sequen e ("i ) . by Lemma 4. : : : . : : : . i N . Let ('j )jN be real symmetri random variables with disjoint supports su h that for every j = 1. denote by ('ij ).nite sequen e (X1 . By symmetry N X IE Xi i=1 N X = IE "j j =1 N X i=1 'ij ! xj and therefore. XN are independent opies of X . independent opies of ('j ) .5 for symmetri sequen es. To this aim. N . 1 1 IPf'j = 1g = IPf'j = 1g = (1 IPf'j = 0g) = : 2 2N N P N P Let (xj )jN be points in B . Then X = 'j xj is su h that IEkX kp = kxj kp =N so that it is enough j =1 j =1 to show that if X1 . Proof.

.

! N N .

X .

X .

.

i IE .

'1 .

IE "j xj : .

.

. by symmetry and Khint hine's inequalities ((4.3)). i=1 j =1 But now.

.

N .

X .

.

IE .

'i1 .

.

= .

.

i=1 .

.

n .

X .

.

IE .

"i 'i1 .

.

.

.

i=1 Sin e IP we get that 0 1 0 1 !1=2 !1=2 N N X 1 1 X p IE j'i1 j2 A = p IE j'i1 j A : 2 2 i=1 i=1 ( N X i=1 ) j'i1 j = 0 .

.

N .

X .

.

IE .

'i1 .

.

.

.

i=1 = 1 p1 1 2 1 N N 1 e 1e . 13 . .
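The chain of bounds just obtained, $\mathbb{E}|\sum_i \varphi_1^i| \ge (1-(1-1/N)^N)/\sqrt2 \ge (1-1/e)/\sqrt2 \ge 1/3$, can be verified by exact enumeration for a small $N$ (the choice $N = 7$ is arbitrary):

```python
from itertools import product

def abs_mean(N):
    # E | sum_i phi_i | for iid phi_i = +1 or -1 w.p. 1/(2N) and 0 w.p. 1 - 1/N
    p1, p0 = 1.0 / (2 * N), 1.0 - 1.0 / N
    total = 0.0
    for phis in product((-1, 0, 1), repeat=N):
        prob = 1.0
        for v in phis:
            prob *= p0 if v == 0 else p1
        total += prob * abs(sum(phis))
    return total

N = 7
lower = (1 - (1 - 1.0 / N) ** N) / 2 ** 0.5
print(abs_mean(N) >= lower and lower >= 1.0 / 3)  # True
```

The $3^N$-fold enumeration is exact, so this checks the Khintchine step as well as the lower bound on the constant.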

Hence the announced claim holds with $\beta = 1/3$, and the proof of Proposition 9.19 is complete.

There is an analogous statement for the cotype, but the proof involves the deeper tool of Theorem 9.16. The main idea of the proof is the so-called "Poissonization" technique.

Proposition 9.20. Let $2 \le q < \infty$ and let $B$ be a Banach space. Assume there is a constant $C$ such that for every finite sequence $(X_1,\dots,X_N)$ of iid symmetric Radon random variables in $L_q(B)$,
$$ N\,\mathbb{E}\|X_1\|^q \le C\,\mathbb{E}\Big\|\sum_{i=1}^N X_i\Big\|^q. $$
Then $B$ is of cotype $q$.

Proof. Let $x_1,\dots,x_N$ be points in $B$ and let $X$ have distribution $(2N)^{-1}\sum_{i=1}^N(\delta_{x_i} + \delta_{-x_i})$, so that $\mathbb{E}\|X\|^q = \sum_{i=1}^N \|x_i\|^q/N$. Take $(X_1,\dots,X_N)$ to be independent copies of $X$. Let $(N_i)_{i\le N}$ be independent Poisson random variables with parameter 1, independent of the $X_i$'s. Let further $(X_{i,j})$, $i \le N$, $j \ge 1$, be independent copies of $X$, independent from $(N_i)$, and set $X_{i,0} = 0$ for each $i$. Then, as is easy to check on characteristic functionals,
$$ \sum_{i=1}^N \sum_{j=0}^{N_i} X_{i,j} \quad\text{has the same distribution as}\quad \sum_{i=1}^N \widetilde N_i\,x_i, $$
where $\widetilde N_i = N_i(1/2) - N_i'(1/2)$ and $N_i(1/2)$, $N_i'(1/2)$, $i \le N$, are independent Poisson random variables with parameter $1/2$. Now, by Jensen's inequality conditionally on $(N_i)$ (cf. (2.5)),
$$ \mathbb{E}\Big\|\sum_{i=1}^N \sum_{j=0}^{N_i} X_{i,j}\Big\|^q \ge \mathbb{E}\Big\|\sum_{i=1}^N \sum_{j=0}^{N_i\wedge1} X_{i,j}\Big\|^q. $$
Further, since $X_{i,0} = 0$ for each $i$,
$$ \mathbb{E}\Big\|\sum_{i=1}^N \sum_{j=0}^{N_i\wedge1} X_{i,j}\Big\|^q = \mathbb{E}\Big\|\sum_{i=1}^N \delta_i X_i\Big\|^q, $$
where the $\delta_i$, $i \le N$, are independent, independent from $(X_i)$, with $\mathbb{P}\{\delta_i = 0\} = 1 - \mathbb{P}\{\delta_i = 1\} = e^{-1}$ (since $\mathbb{P}\{N_i \wedge 1 = 0\} = e^{-1}$). Hence, by Jensen's inequality again,
$$ \mathbb{E}\Big\|\sum_{i=1}^N \delta_i X_i\Big\|^q \ge (1 - e^{-1})^q\,\mathbb{E}\Big\|\sum_{i=1}^N X_i\Big\|^q \ge e^{-q}\,\mathbb{E}\Big\|\sum_{i=1}^N X_i\Big\|^q. $$
Summarizing, we have obtained that
$$ \sum_{i=1}^N \|x_i\|^q = N\,\mathbb{E}\|X_1\|^q \le C\,\mathbb{E}\Big\|\sum_{i=1}^N X_i\Big\|^q \le C e^q\,\mathbb{E}\Big\|\sum_{i=1}^N \widetilde N_i x_i\Big\|^q. $$
Now this inequality clearly cannot hold for all finite sequences $(x_i)$ in a Banach space $B$ which contains $\ell_\infty^n$'s uniformly, since it does not hold uniformly for the canonical basis of $\ell_\infty^n$. Therefore, by Theorem 9.16, $B$ is of finite cotype and we are then in a position to apply Proposition 9.14, since $\mathbb{E}|\widetilde N_i|^p < \infty$ for all $p$. The proof is complete.

We now come back, after this digression, to the main application of Proposition 9.19: we namely investigate the relationship between the type condition and the iid SLLN of Marcinkiewicz-Zygmund. If $X$ is a Radon random variable with values in a Banach space $B$, $(X_i)$ denotes below a sequence of independent copies of $X$ and, as usual, $S_n = X_1 + \cdots + X_n$, $n \ge 1$. In Theorem 7.9, we have seen that, for $1 \le p < 2$, $S_n/n^{1/p} \to 0$ almost surely if and only if $\mathbb{E}\|X\|^p < \infty$ and $S_n/n^{1/p} \to 0$ in probability. Moreover, in type $p$ spaces, as already discussed thereafter, $S_n/n^{1/p} \to 0$ in probability when $\mathbb{E}\|X\|^p < \infty$ and $\mathbb{E}X = 0$. We now show that this property is characteristic of type $p$ spaces.

Theorem 9.21. Let $1 \le p < 2$ and let $B$ be a Banach space. The following are equivalent:
(i) $B$ is of type $p$;
(ii) for every Radon random variable $X$ with values in $B$, $S_n/n^{1/p} \to 0$ almost surely if and only if $\mathbb{E}\|X\|^p < \infty$ and $\mathbb{E}X = 0$.

Note of course that, since every Banach space is of type 1, we recover Corollary 7.10.

Proof. Let us briefly recall the argument leading to (i) $\Rightarrow$ (ii), already discussed after Theorem 7.9. Under the type assumption and the moment conditions $\mathbb{E}\|X\|^p < \infty$ and $\mathbb{E}X = 0$, the sequence $(S_n/n^{1/p})$ was shown to be arbitrarily close to a finite dimensional sequence, and thus tight. Since, for every linear functional $f$, $f(S_n/n^{1/p}) \to 0$ in probability, it follows that $S_n/n^{1/p} \to 0$ in probability, and we can conclude the proof of (i) $\Rightarrow$ (ii) by Theorem 7.9.
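In scalar form, the Poissonization identity used in the preceding proof says that $\sum_{j\le N(1)} X_j$, with $N(1)$ Poisson of parameter 1 and $X_j = \pm1$ with probability $1/2$, has the law of $N(1/2) - N'(1/2)$. This can be checked by exact truncated computation; the truncation level is an assumption of the sketch:

```python
from math import comb, exp, factorial

def pois(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

M = 40  # truncation level for the Poisson counts (the neglected tail is tiny)

def compound_pmf(k):
    # law of sum_{j <= N} X_j with N ~ Poisson(1) and X_j = +/-1 w.p. 1/2
    return sum(pois(n, 1.0) * comb(n, (n + k) // 2) / 2 ** n
               for n in range(abs(k), M) if (n + k) % 2 == 0)

def skellam_pmf(k):
    # law of N(1/2) - N'(1/2) with independent Poisson(1/2) counts
    return sum(pois(m, 0.5) * pois(m - k, 0.5) for m in range(max(k, 0), M))

print(max(abs(compound_pmf(k) - skellam_pmf(k)) for k in range(-5, 6)) < 1e-12)  # True
```

The identity is the thinning property of the Poisson law: the $+1$ and $-1$ jumps form two independent Poisson(1/2) streams.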

Conversely, let us first show that when $\mathbb{E}\|X\|^p < \infty$, $\mathbb{E}X = 0$ and $S_n/n^{1/p} \to 0$ almost surely (or only in probability), the sequence $(S_n/n^{1/p})$ is bounded in $L_1(B)$. By a symmetrization argument, it is enough to treat the case of a symmetric variable $X$. Since $X$ is centered, we already know, by Lemma 7.2, that
$$ \sup_n \frac{1}{n^{1/p}}\,\mathbb{E}\Big\|\sum_{i=1}^n X_i I_{\{\|X_i\|\le n^{1/p}\}}\Big\| < \infty. $$
Moreover, under $\mathbb{E}\|X\|^p < \infty$, it is easily seen by integration by parts that
$$ \frac{1}{n^{1/p}}\,\mathbb{E}\Big\|\sum_{i=1}^n X_i I_{\{\|X_i\| > n^{1/p}\}}\Big\| \le n^{1-1/p}\,\mathbb{E}\big(\|X\|\, I_{\{\|X\| > n^{1/p}\}}\big) $$
is uniformly bounded in $n$. The claim follows. (One can also invoke for this result the version of Corollary 10.2 below for $n^{1/p}$.) By the closed graph theorem, there exists therefore a constant $C$ such that for all centered random variables $X$ with values in $B$,
$$ \sup_n \frac{1}{n^{1/p}}\,\mathbb{E}\|S_n\| \le C\big(\mathbb{E}\|X\|^p\big)^{1/p}. $$
We conclude that $B$ is of type $p$ by Proposition 9.19.

The preceding theorem has an analog for the stable type. Let us briefly state this result and sketch its proof.

Theorem 9.22. Let $1 \le p < 2$ and let $B$ be a Banach space. The following are equivalent:
(i) $B$ is of stable type $p$;
(ii) for every symmetric Radon random variable $X$ with values in $B$, $S_n/n^{1/p} \to 0$ in probability if and only if $\lim_{t\to\infty} t^p\,\mathbb{P}\{\|X\| > t\} = 0$.

Proof. We have noticed, next to Theorem 7.9, that (ii) holds true in finite dimensional spaces. The implication (i) $\Rightarrow$ (ii) is then simply based on Proposition 9.13 and a finite dimensional argument as in Theorem 9.21.

We now would like to investigate some onsequen es related to the next hapter on the entral limit theorem. IEf (X ) = 0 and IEf 2 (X ) < 1 . indeed.7) and Lemma 2. for ea h " > 0 and all n large enough. Some more appli ations of the type ondition in ase of the law of the iterated logarithm have been des ribed in Chapter 8 (Corollary 8. 1 4 IPfkSnk > "n1=p g 21 IPfmax kXi k > "n1=p g 14 nIPfkX k > "n1=p g: in The impli ation (ii) ) (i) is obtained as in the last theorem via (iii) of Proposition 9.21.6. is said to be pregaussian if there exists a Gaussian Radon variable X in B with the same .12 and some are in the losed graph argument. A Radon random variable X with values in a Bana h spa e B su h that for every f in B 0 .8) and we need not re all them here. They deal with pregaussian variables.nite dimensional argument as in p 1=p Theorem 9. That tlim !1 t IPfkX k > tg = 0 when Sn =n ! 0 in probability is a simple onsequen e of Levy's inequality (2.

g in B 0 . The following easy lemma is useful in this study. we may assume that it is onstru ted on some probability spa e ( . IP) is separable and we an . Proof.23. IEkG(Y )kp 2IEkG(X )kp . IP) with A ountably generated. In parti ular. Lemma 9.e. A. i. A. The on ept of pregaussian variables and their integrability properties are losely related to type 2 and otype 2 . IEf 2 (Y ) IEf 2 (X ) (and IEf (Y ) = 0 ). IEf (X )g(X ) = IEf (G)g(G) (or just IEf 2(X ) = IEf 2 (G) ). we denote with some abuse by G(X ) a Gaussian variable with the same ovarian e stru ture as the pregaussian variable X .279 ovarian e stru ture as X . Sin e the distribution of a Gaussian variable is entirely determined by the ovarian e stru ture. L2 ( . Sin e Y is Radon. Then Y is pregaussian and. Let X be a pregaussian Radon random variable with values in a Bana h spa e B and with asso iated Gaussian G(X ) . for all p > 0 . for all f. Let Y be a Radon random variable in B su h that for all f in B 0 .

Sin e Y is Radon ( f. A.1). IP) .nd a ountable orthonormal basis (hi )i1 of L2( . IE(hi Y ) de. Se tion 2.

1 p < 1 .11).23 and its proof is a tually not needed as follows from the deeper result des ribed after (3. To introdu e to what follows. g standard normal.nes an element of B for n P every i . then XIfX 2Ag is also pregaussian. Further. for every n and f in B 0 . note also that the sum of two pregaussian variables is also pregaussian. By Bessel's i=1 inequality. Note that the fa tor 2 in Lemma 9.23. Let X = (Xk )k1 be weakly entered and square integrable with values in `p . It follows that if 1 X (9:16) (IEjXk j2 )p=2 < 1. let now Gn = gi IE(hi Y ) where (gi ) is an orthogaussian sequen e. G is the natural andidate for G(X ) . Sin e IPfkG(Y )k > tg 2IPfkG(X )k > tg for all t > 0 also follows from (3. the limit is unique and (Gn ) onverges almost surely (It^o-Nisio) and in L2(B ) to a Gaussian Radon random variable G(Y ) in B with the same ovarian e as Y .11). IEf 2(Gn ) IEf 2 (Y ) IEf 2 (X ) = IEf 2 (G(X )): Using (3. Note that IEjGk jp = p (IEjGk j2 )p=2 = p (IEjXk j2 )p=2 where p = IEjgjp . k=1 . As a onsequen e of Lemma 9. ` . let us brie y hara terize pregaussian variables in the sequen e spa es `p . Sin e IEf 2(Gn ) ! IEf 2 (Y ) . Let G = (Gk )k1 be a Gaussian sequen e of real variables with ovarian e stru ture determined by IEGk G` = IEXk X` for all k. For every n .11). if X is pregaussian and if A is a Borel set su h that XIfX 2Ag has still mean zero. the proof is omplete. the Gaussian sequen e (Gn ) is seen to be tight in B .

then G is seen to define a Gaussian random variable with values in ℓ_p with the same covariance structure as X and such that

    IE‖G‖^p = Σ_{k=1}^∞ IE|G_k|^p = γ_p Σ_{k=1}^∞ (IE|X_k|^2)^{p/2} < ∞.

Therefore X = (X_k)_{k≥1} in ℓ_p, 1 ≤ p < ∞, is pregaussian if and only if (9.16) holds. More generally, by a simple approximation, it can be shown that X with values in L_p = L_p(S, Σ, μ), 1 ≤ p < ∞, where μ is σ-finite, is pregaussian if and only if (it has weak mean zero and)

    ∫_S (IE|X(s)|^2)^{p/2} dμ(s) < ∞.
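As a quick numerical companion to this characterization, the constant γ_p = IE|g|^p can be computed by quadrature and compared with its closed form 2^{p/2} Γ((p+1)/2)/√π, and condition (9.16) becomes a plain summability test. The sketch below is only illustrative; the sequence of second moments fed to it is hypothetical.

```python
import math

def gamma_p(p, h=1e-3, cutoff=12.0):
    """gamma_p = E|g|^p for g standard normal, by trapezoidal quadrature."""
    n = int(cutoff / h)
    xs = [i * h for i in range(n + 1)]
    vals = [(x ** p) * math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in xs]
    # integrand is even, so integrate over [0, cutoff] and double
    return 2 * h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def gamma_p_closed(p):
    """Closed form E|g|^p = 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)."""
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

def partial_916(second_moments, p):
    """Truncated version of the series in (9.16): sum_k (E|X_k|^2)^(p/2)."""
    return sum(m ** (p / 2) for m in second_moments)
```

For p = 2 one recovers γ₂ = 1, and a hypothetical sequence with IE|X_k|² = k^{−a} satisfies (9.16) exactly when ap/2 > 1.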

The next two propositions are the main observations on pregaussian random variables and their relations to type 2 and cotype 2.

Proposition 9.24. Let B be a Banach space. Then B is of type 2 if and only if every mean zero Radon random variable X with values in B such that IE‖X‖^2 < ∞ is pregaussian, and we have the inequality

    IE‖G(X)‖^2 ≤ C IE‖X‖^2

for some constant C depending only on the type 2 constant of B. The equivalence is still true when IE‖X‖^2 < ∞ is replaced by X bounded.

Proposition 9.25. Let B be a Banach space. Then B is of cotype 2 if and only if each pregaussian (Radon) random variable X with values in B satisfies

IE‖X‖^2 < ∞, and we have the inequality

    IE‖X‖^2 ≤ C IE‖G(X)‖^2

for some constant C depending only on the cotype 2 constant of B. The equivalence is still true when IE‖X‖^2 < ∞ is replaced by IE‖X‖^p < ∞ for some 0 < p ≤ 2.

Proof of Proposition 9.24. We know from Proposition 9.11 that if B is of type 2, for some C > 0 and all finite sequences (x_i) in B,

    IE‖ Σ_i g_i x_i ‖^2 ≤ C Σ_i ‖x_i‖^2 .
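In a Hilbert space this type 2 inequality is an identity with C = 1: expanding the square, the cross terms vanish since IE g_i g_j = 0 for i ≠ j. A small sketch (with illustrative vectors in the Euclidean plane) checks the exact computation against a seeded Monte Carlo average.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def exact_second_moment(xs):
    """E||sum_i g_i x_i||^2 = sum_i ||x_i||^2 in Euclidean space:
    cross terms E(g_i g_j) vanish for i != j."""
    return sum(dot(x, x) for x in xs)

def mc_second_moment(xs, trials=20000, seed=0):
    """Seeded Monte Carlo estimate of E||sum_i g_i x_i||^2."""
    rng = random.Random(seed)
    dim = len(xs[0])
    total = 0.0
    for _ in range(trials):
        s = [0.0] * dim
        for x in xs:
            g = rng.gauss(0.0, 1.0)
            for j in range(dim):
                s[j] += g * x[j]
        total += dot(s, s)
    return total / trials

xs = [(1.0, 0.0), (0.5, 0.5), (0.0, 2.0)]  # illustrative finite sequence
```

Here the exact value is 1 + 0.5 + 4 = 5.5, which the simulation reproduces up to sampling error.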

Let now X with IEX = 0 and IE‖X‖^2 < ∞. There exists an increasing sequence (A_N) of finite σ-algebras such that, if X^N = IE^{A_N} X, then X^N → X almost surely and in L2(B). Since A_N is finite, X^N can be written as a finite sum Σ_i x_i I_{A_i} with the A_i disjoint. Then

    G(X^N) = Σ_i g_i (IP(A_i))^{1/2} x_i   and   IE‖X^N‖^2 = Σ_i ‖x_i‖^2 IP(A_i) ,

so that

    IE‖G(X^N)‖^2 ≤ C IE‖X^N‖^2 ≤ C IE‖X‖^2 ,

which thus holds for every N. By Skorokhod's theorem (cf. Section 2), the sequence (G(X^N)) of Gaussian random variables is seen to be tight. Since further IEf^2(G(X^N)) = IEf^2(X^N) → IEf^2(X) for every f in B', (G(X^N)) necessarily converges weakly to some Gaussian variable G(X) with the same covariance structure as X, and one obtains that IE‖G(X)‖^2 ≤ C IE‖X‖^2. This establishes the

first part of Proposition 9.24.

Conversely, since the type 2 inequality also holds for quotient norms (with the same constant), it is sufficient to show that if every centered variable X such that ‖X‖_∞ ≤ 1 is pregaussian, then B is of type 2. Let (x_i) be a sequence in B such that Σ_i ‖x_i‖^2 = 1. Consider X with distribution

    X = ± x_i/‖x_i‖   with probability ‖x_i‖^2 / 2 .

By hypothesis X is pregaussian and G(X) must be Σ_i g_i x_i, which therefore defines a convergent series. So is then Σ_i ε_i x_i and the proof is complete.

Proof of Proposition 9.25. Assume

first that B is of cotype 2. Then, for some constant C and all finite sequences (x_i) in B,

(9.17)    Σ_i ‖x_i‖^2 ≤ C IE‖ Σ_i g_i x_i ‖^2 .

(This is a priori weaker than the cotype 2 inequality with Rademachers; see (9.10) and Remark 9.26 below.) Given X pregaussian with values in B, let Y = ε X I_{‖X‖ ≤ t} where t > 0 and ε is a Rademacher random variable independent of X. By Lemma 9.23, Y is again pregaussian and IE‖G(Y)‖^2 ≤ 2 IE‖G(X)‖^2. Arguing then exactly as in the first part of the proof of Proposition 9.24, by finite dimensional approximation, one obtains that

    IE‖Y‖^2 ≤ C IE‖G(Y)‖^2 ≤ 2C IE‖G(X)‖^2 .

Since t > 0 is arbitrary, it follows that IE‖X‖^2 < ∞ and the inequality of the proposition holds.

Turning to the converse, let us

first show that, given 0 < p ≤ 2, if every pregaussian variable X in B satisfies IE‖X‖^p < ∞, then (9.17) holds. We actually show that if Σ_i g_i x_i converges almost surely, then Σ_i ‖x_i‖^2 < ∞. We assume p < 2, which is the most difficult case, and set r = 2/(2 − p). Let (β_i) be positive numbers such that Σ_i β_i^r = 1 and define X by:

    X = ± β_i^{(1−r)/p} x_i   with probability β_i^r / 2 .

It is easily seen that G(X) is precisely Σ_i g_i x_i and therefore, by hypothesis,

    IE‖X‖^p = Σ_i β_i ‖x_i‖^p < ∞ .

Since this holds for each such sequence (β_i), by duality it must be that Σ_i ‖x_i‖^2 < ∞, which is the announced claim.

To conclude the proof, let us show how (9.17) implies that B is of cotype 2.
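The exponent bookkeeping behind this duality step can be checked mechanically: with r = 2/(2 − p), the weight β_i^{(1−r)/p} cancels in the covariance (so G(X) is indeed Σ_i g_i x_i), while the conjugate index of r turns ‖x_i‖^p into ‖x_i‖^2. A throwaway verification:

```python
def exponents(p):
    """For 0 < p < 2 and r = 2/(2 - p), return r, the covariance exponent
    2*(1 - r)/p + r (which must vanish), and the conjugate index r' = r/(r-1)."""
    r = 2.0 / (2.0 - p)
    cov_exp = 2.0 * (1.0 - r) / p + r
    r_conj = r / (r - 1.0)
    return r, cov_exp, r_conj
```

For instance p = 1 gives r = 2 and r' = 2, and in general r' = 2/p, so that (‖x_i‖^p)^{r'} = ‖x_i‖^2 as used above.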

Recall first that we proved before that (9.17) implies

(9.18)    IE‖X‖^2 ≤ 2C IE‖G(X)‖^2

for every pregaussian variable X in B. Let (x_i) be a finite sequence in B. For every t > 0,

    ( IE‖ Σ_i g_i x_i ‖^2 )^{1/2} ≤ ( IE‖ Σ_i g_i I_{|g_i| ≤ t} x_i ‖^2 )^{1/2} + ( IE‖ Σ_i g_i I_{|g_i| > t} x_i ‖^2 )^{1/2}
                                 ≤ t ( IE‖ Σ_i ε_i x_i ‖^2 )^{1/2} + ( IE‖ Σ_i g_i I_{|g_i| > t} x_i ‖^2 )^{1/2} ,

where we have used the contraction principle. If we now apply (9.18) to X = Σ_i g_i I_{|g_i| > t} x_i, we see that

    IE‖ Σ_i g_i I_{|g_i| > t} x_i ‖^2 ≤ 2C IE( |g|^2 I_{|g| > t} ) IE‖ Σ_i g_i x_i ‖^2 .

Choose now t > 0 small enough in order for IE(|g|^2 I_{|g|>t}) to be less than (8C)^{−1}. Together with (9.17), we then obtain that

    Σ_i ‖x_i‖^2 ≤ C IE‖ Σ_i g_i x_i ‖^2 ≤ 4 t^2 C IE‖ Σ_i ε_i x_i ‖^2 .

Proposition 9.25 is established.

Remark 9.26. The preceding proof indicates in particular that a Banach space B is of cotype 2 if and only if, for all finite sequences (x_i) in B,

    Σ_i ‖x_i‖^2 ≤ C IE‖ Σ_i g_i x_i ‖^2 .

This can also be obtained by the conjunction of Proposition 9.14 and Theorem 9.16. The preceding direct proof is however simpler in this case since it does not use the deep Theorem 9.16.

After pregaussian random variables, we discuss some results on spectral measures of stable distributions in spaces of stable type. Recall the fundamental Theorem 9.16: if B is a Banach space of stable type p, 1 ≤ p < 2, and if (x_i) is a sequence in B such that Σ_i ‖x_i‖^p < ∞, then the series Σ_i θ_i x_i converges almost surely and defines a p-stable Radon random variable X in B. In other words, if m is the finite discrete measure

    m = Σ_i ‖x_i‖^p ( δ_{−x_i/‖x_i‖} + δ_{+x_i/‖x_i‖} ) / 2 ,

m is the spectral measure of a p-stable random vector X in B. Moreover,

    ‖X‖_{p,∞} ≤ C |m|^{1/p} = C ( ∫ ‖x‖^p dm(x) )^{1/p}

for some constant C depending only on the stable type p property of B. Recall from Chapter 5 that m, symmetrically distributed on the unit sphere of B, is unique. Recall further (cf. Corollary 5.5) that if X is a p-stable Radon random variable with values in a Banach space B, there exists a spectral measure m of X such that ∫ ‖x‖^p dm(x) < ∞. The parameter σ_p(X) of X is defined as σ_p(X) = (∫ ‖x‖^p dm(x))^{1/p} (which is unique among all possible spectral measures) and we always have (cf. (5.13))

    K_p^{−1} σ_p(X) ≤ ‖X‖_{p,∞} .

This inequality is two-sided when 0 < p < 1, and what we have just seen is that, when 1 ≤ p < 2 and Σ_i ‖x_i‖^p < ∞ in a Banach space of stable type p, X = Σ_i θ_i x_i converges almost surely and ‖X‖_{p,∞} ≤ C σ_p(X).
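For the discrete measure m above, all atoms sit on the unit sphere, so ∫ ‖x‖^p dm = |m| = Σ_i ‖x_i‖^p and σ_p(X) = (Σ_i ‖x_i‖^p)^{1/p}. A minimal computation (the vectors are illustrative only):

```python
import math

def norm(x):
    return math.sqrt(sum(c * c for c in x))

def discrete_spectral_measure(xs, p):
    """Atoms +-x/||x|| with mass ||x||^p / 2 each; returns (atoms, sigma_p)."""
    atoms = []
    total = 0.0
    for x in xs:
        w = norm(x) ** p
        u = tuple(c / norm(x) for c in x)
        atoms.append((u, w / 2))                       # +x / ||x||
        atoms.append((tuple(-c for c in u), w / 2))    # -x / ||x||
        total += w
    sigma_p = total ** (1.0 / p)
    return atoms, sigma_p
```

With xs = [(3, 4), (0, 1)] and p = 1, the total mass is 5 + 1 = 6 and σ₁ = 6, all atoms lying on the unit circle.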

This property actually extends to general measures on Banach spaces of stable type p.

Theorem 9.27. Let 1 ≤ p < 2. A Banach space B is of stable type p if and only if every positive finite Radon measure m on B such that ∫ ‖x‖^p dm(x) < ∞ is the spectral measure of a p-stable Radon random vector X in B. Furthermore, if this is the case, there exists a constant C such that

    ‖X‖_{p,∞} ≤ C ( ∫ ‖x‖^p dm(x) )^{1/p} .

Proof. The choice of a discrete measure m as above proves the "if" part of the statement. Suppose now that B is of stable type p. Let m₁ denote the image of the measure ‖x‖^p dm(x) by the map x → x/‖x‖, and let further (Y_j) be independent random variables distributed like m₁/|m₁|. The natural candidate for X is given by the series representation (cf. Chapter 5)

    |m₁|^{1/p} Σ_{j=1}^∞ Γ_j^{−1/p} ε_j Y_j .

In order to show that this series converges, note the following: since B is of stable type p, by Corollary 9.7 together with Proposition 9.12 (i), B is of Rademacher type p₁ for some p₁ > p. Therefore

    IE‖ Σ_{j=1}^∞ Γ_j^{−1/p} ε_j Y_j ‖ ≤ C ( Σ_{j=1}^∞ j^{−p₁/p} )^{1/p₁} < ∞ ,

from which the required convergence easily follows. X thus defines a p-stable random variable with spectral measure m₁, and therefore also m. The inequality of the theorem follows from the same argument (and from some of the elementary material in Chapter 5).

As yet another application, let us briefly mention an alternate approach to p-stable random variables in Banach spaces of stable type p. This approach goes through stochastic integrals and follows a classical construction of S. Kakutani. Let (S, Σ, m) be any measure space. Define a p-stable random measure M based on (S, Σ, m) in the following way: (M(A))_{A∈Σ} is a collection of real random variables such that, for every A, M(A) is p-stable with parameter m(A)^{1/p} and, whenever (A_i) are disjoint, the sequence (M(A_i)) is independent. For a step function φ of the form φ = Σ_i α_i I_{A_i}, α_i ∈ IR, A_i ∈ Σ disjoint, the stochastic integral ∫ φ dM is well-defined as

    ∫ φ dM = Σ_i α_i M(A_i) .

It is a p-stable random variable with parameter ‖φ‖_p = (∫ |φ|^p dm)^{1/p}. Therefore, by a density argument, it is easy to define the stochastic integral ∫ φ dM for any φ in L_p(S, Σ, m). The question now arises of the possibility of this construction for functions φ taking their values in a Banach space B. As expected, the class of Banach spaces B of stable type p is exactly the one in which the preceding stochastic integral ∫ φ dM can be defined whenever ∫ ‖φ‖^p dm < ∞. In case φ is the identity map on B, ∫ φ dM is a p-stable random variable in B with spectral measure m, so that we recover the conclusion of Theorem 9.27.
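The scalar construction can be sketched directly: simulate M(A_i) as independent symmetric p-stable variables with scale m(A_i)^{1/p} (here via the standard Chambers–Mallows–Stuck recipe for symmetric stable laws), and form ∫ φ dM = Σ_i α_i M(A_i); the parameter ‖φ‖_p is a deterministic computation. The sets and weights below are purely illustrative.

```python
import math, random

def sym_stable(p, scale, rng):
    """Symmetric p-stable sample, Chambers-Mallows-Stuck method (0 < p <= 2)."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    x = (math.sin(p * u) / (math.cos(u) ** (1.0 / p))
         * (math.cos((1.0 - p) * u) / w) ** ((1.0 - p) / p))
    return scale * x

def stochastic_integral(alphas, masses, p, seed=1):
    """integral of phi dM for the step function phi = sum_i alphas[i] 1_{A_i},
    with m(A_i) = masses[i]; returns (one sample, parameter ||phi||_p)."""
    rng = random.Random(seed)
    sample = sum(a * sym_stable(p, m ** (1.0 / p), rng)
                 for a, m in zip(alphas, masses))
    param = sum(abs(a) ** p * m for a, m in zip(alphas, masses)) ** (1.0 / p)
    return sample, param
```

For p = 1 the generator reduces to a scaled Cauchy variable, and the parameter of Σ_i α_i M(A_i) is (Σ_i |α_i|^p m(A_i))^{1/p} = ‖φ‖_p as in the text.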

Proposition 9.28. Let 1 ≤ p < 2. A Banach space B is of stable type p if and only if, for any measure space (S, Σ, m) and any p-stable random measure M based on (S, Σ, m), the stochastic integral ∫ φ dM with ∫ ‖φ‖^p dm < ∞ defines a p-stable Radon random variable with values in B and

    ‖ ∫ φ dM ‖_{p,∞} ≤ C ( ∫ ‖φ‖^p dm )^{1/p} .

Proof. Sufficiency is embedded in Theorem 9.27 with the choice for φ of the identity on (B, m). Now, if B is of stable type p and if φ is a step function Σ_i x_i I_{A_i} with x_i in B and A_i mutually disjoint, then

    ∫ φ dM = Σ_i x_i M(A_i) ,

which is equal in distribution to Σ_i m(A_i)^{1/p} θ_i x_i, so that

    ‖ ∫ φ dM ‖_{p,∞} ≤ C ( Σ_i m(A_i) ‖x_i‖^p )^{1/p} = C ( ∫ ‖φ‖^p dm )^{1/p} .

Hence the map φ → ∫ φ dM can be extended to a bounded operator from L_p(m; B) into L_{p,∞}(B), and a fortiori into L₀(B). This concludes the proof of Proposition 9.28.

We conclude this chapter with a result on almost sure boundedness and convergence of sums of independent random variables in spaces which do not contain subspaces isomorphic to the space c₀ of all scalar sequences convergent to 0. This result is not directly related to type and cotype since these are local properties, in the sense that they only depend on the collection of finite dimensional subspaces of the given space, whereas the property discussed here involves infinite dimensional subspaces.

On the line, if (X_i) is a sequence of independent real symmetric random variables such that the sequence (S_n) of the partial sums is almost surely bounded, i.e. IP{sup_n |S_n| < ∞} = 1, it is well-known that (S_n) is almost surely convergent. This is actually contained in one of the steps of the proof of Theorem 2.4, but let us emphasize here the argument; it is natural and convenient for further purposes to record this result at this stage. Let (ε_i) be a Rademacher sequence independent of (X_i) and recall the partial integration notations IP_ε, IE_ε, IP_X, IE_X. By symmetry and the assumption, for every δ > 0, there exists a finite number M such that, for all n,

    IP{ | Σ_{i=1}^n ε_i X_i | ≤ M } ≥ 1 − δ^2 .
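The boundedness that this inequality quantifies is easy to sample. With the illustrative choice X_i = ε_i / i (so that Σ_i X_i² < ∞, as the argument below will force), seeded Rademacher paths of the partial sums stay uniformly bounded:

```python
import random

def partial_sup(weights, rng):
    """sup_n | sum_{i<=n} eps_i * weights[i] | along one Rademacher path."""
    s, sup = 0.0, 0.0
    for w in weights:
        s += w * rng.choice((-1.0, 1.0))
        sup = max(sup, abs(s))
    return sup

weights = [1.0 / i for i in range(1, 5001)]  # sum of squares converges
sups = [partial_sup(weights, random.Random(seed)) for seed in range(20)]
```

Typical values of sup_n |S_n| here are of order 1, in line with IE(sup_n S_n²) being controlled by Σ_i 1/i² via Doob's inequality.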

Let n be fixed for a moment. By Fubini's theorem, if

    A = { ω ; IP_ε{ | Σ_{i=1}^n ε_i X_i(ω) | ≤ M } ≥ 1 − δ } ,

then IP_X(A) ≥ 1 − δ. Now, by Lemma 4.2 and (4.3), if δ ≤ 1/8, for every ω in A,

    ( Σ_{i=1}^n X_i(ω)^2 )^{1/2} = ( IE_ε | Σ_{i=1}^n ε_i X_i(ω) |^2 )^{1/2} ≤ 2√2 M .

Thus, for all n,

    IP{ ( Σ_{i=1}^n X_i^2 )^{1/2} ≤ 2√2 M } ≥ 1 − δ .

It follows that Σ_i X_i^2 < ∞ almost surely. Hence, by Fubini's theorem, Σ_i ε_i X_i converges almost surely and the claim follows. It can easily be shown that this argument extends, for example, to Hilbert space valued random variables. Actually, this property is satisfied for random variables taking their values in, and only in, Banach spaces which do not contain subspaces isomorphic to c₀. This is the content of the next theorem. A sequence (Y_n) of random variables is almost surely bounded if IP{sup_n ‖Y_n‖ < ∞} = 1.

Theorem 9.29. Let B be a Banach space. The following are equivalent:
(i) B does not contain subspaces isomorphic to c₀;
(ii) for every sequence (X_i) of independent symmetric Radon random variables in B, the almost sure boundedness of the sequence (S_n) of the partial sums implies its convergence;
(iii) for every sequence (x_i) in B, if ( Σ_{i=1}^n ε_i x_i ) is almost surely bounded, Σ_i ε_i x_i converges almost surely;
(iv) for every sequence (x_i) in B, if ( Σ_{i=1}^n ε_i x_i ) is almost surely bounded, x_i → 0.

Proof. The main point in this proof is the equivalence between (i) and (iv). The implications (ii) ⇒ (iii) ⇒ (iv) are obvious, and (iv) ⇒ (i) is clear by the choice of x_i = e_i, the canonical basis of c₀. Let us show that (iv) ⇒ (ii). Let (X_i) in B with sup_n ‖S_n‖ < ∞ with probability one and let (ε_i) be a Rademacher sequence independent of (X_i). By symmetry and Fubini's theorem, sup_n ‖ Σ_{i=1}^n ε_i X_i(ω) ‖ < ∞ almost surely, for almost all ω on the probability space supporting the X_i's. Hence, by (iv), X_i → 0 almost surely; similarly, taking blocks over any strictly increasing sequence (n_k) of integers, S_{n_{k+1}} − S_{n_k} → 0 in probability when k → ∞. By Lévy's inequality (2.6), (S_n) is then a Cauchy sequence in probability, and thus convergent in probability; therefore (ii) holds by Theorem 2.4. Let us then show the converse implication.
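The choice x_i = e_i used for (iv) ⇒ (i) can be made concrete: in c₀, every signed partial sum of the canonical basis has sup-norm exactly 1, so (Σ_{i≤n} ε_i e_i) is surely bounded, while ‖e_i‖ = 1 does not tend to 0 and the series cannot converge. A direct check with sup-norm arithmetic (finitely many coordinates suffice):

```python
def sup_norm(coords):
    """Sup norm of a c0 vector given by its finitely many nonzero coordinates."""
    return max(abs(c) for c in coords) if coords else 0

def signed_partial_sum(signs):
    """sum_{i<=n} eps_i e_i in c0: coordinate i is just the sign eps_i."""
    return list(signs)

signs = [1, -1, -1, 1, 1, -1, 1, -1]  # an arbitrary sign pattern
norms = [sup_norm(signed_partial_sum(signs[:n])) for n in range(1, len(signs) + 1)]
```

Whatever the signs, every partial sum has norm 1, illustrating boundedness without any decay of the summands.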

We proceed by contradiction. Let (x_i) be a sequence in B such that inf_i ‖x_i‖ > 0 and IP{sup_n ‖ Σ_{i=1}^n ε_i x_i ‖ < ∞} = 1. We may assume that the probability space (Ω, A, IP) is the canonical product space {−1, +1}^IN equipped with its natural σ-algebra and product measure. It is easy to see that, for every C in A,

    lim_{i→∞} IP( C ∩ {ε_i = −1} ) = lim_{i→∞} IP( C ∩ {ε_i = +1} ) = IP(C)/2 .

Let us pick M < ∞ so that IP(A) > 1/2 where

    A = { sup_n ‖ Σ_{i=1}^n ε_i x_i ‖ ≤ M } .

By the previous observation, we can define inductively an increasing sequence of integers (n_i) such that, for every sequence of signs (a_i) and every k,

(9.19)    IP( A ∩ {ε_{n_1} = a_1} ∩ ⋯ ∩ {ε_{n_k} = a_k} ) > 2^{−k−1} .

Put ε'_i = −ε_i if i is one of the n_j's, ε'_i = ε_i if not. The sequences (ε_i) and (ε'_i) are equidistributed. Therefore, if

    A' = { sup_n ‖ Σ_{i=1}^n ε'_i x_i ‖ ≤ M } ,

(9.19) also holds for A' with respect to (ε'_i). Since ε_{n_i} = −ε'_{n_i} and IP{ε_{n_i} = a_i , i ≤ k} = 2^{−k}, it follows by intersection that there is an ω in A ∩ A' such that ε_{n_i}(ω) = a_i for all i = 1, …, k. Thus

    ‖ Σ_{i=1}^k a_i x_{n_i} ‖ = (1/2) ‖ Σ_{j=1}^{n_k} ε_j(ω) x_j + Σ_{j=1}^{n_k} ε'_j(ω) x_j ‖ ≤ M .

Since the integer k and the signs a_1, a_2, …, a_k are arbitrary, this inequality implies that the series Σ_i x_{n_i} is weakly unconditionally convergent, that is Σ_i |f(x_{n_i})| < ∞ for every f in B', while inf_i ‖x_{n_i}‖ > 0. The conclusion is then obtained from the following classical result on basic sequences in Banach spaces.

Lemma 9.30. Let (y_i) be a sequence in a Banach space B such that, for every f in B', Σ_i |f(y_i)| < ∞, and such that inf_i ‖y_i‖ > 0. Then there exists a subsequence (y_{i_k}) of (y_i) which is equivalent to the canonical basis of c₀, in the sense that, for some C > 0 and all finite sequences (α_k) of real numbers,

    C^{−1} max_k |α_k| ≤ ‖ Σ_k α_k y_{i_k} ‖ ≤ C max_k |α_k| .

Proof. As a consequence of the hypotheses, we know in particular that inf_i ‖y_i‖ > 0 while y_i → 0 weakly. Since Σ_i |f(y_i)| < ∞ for all f in B', by the closed graph theorem, for some C and all f in B',

    Σ_i |f(y_i)| ≤ C ‖f‖ .

It is then a well-known and important result (cf. e.g. [Li-T1], p. 5) that one can extract a subsequence (y_{i_k}) which is basic, in the sense that every element y in the span of (y_{i_k}) can be written uniquely as y = Σ_k α_k y_{i_k} for some sequence of scalars (α_k). Necessarily α_k → 0 since inf_k ‖y_{i_k}‖ > 0 and, by another application of the closed graph theorem, we already have the lower inequality in the statement of the lemma. The conclusion is then obvious: for all finite sequences (α_k) of scalars,

    ‖ Σ_k α_k y_{i_k} ‖ = sup_{‖f‖≤1} | Σ_k α_k f(y_{i_k}) | ≤ max_k |α_k| sup_{‖f‖≤1} Σ_k |f(y_{i_k})| ≤ C max_k |α_k| .

This proves the lemma, which thus concludes the proof of Theorem 9.29.

Remark 9.31. The various assertions of Theorem 9.29 are obviously equivalent to saying that if sup_n ‖S_n‖ < ∞ almost surely, then X_i → 0 almost surely. As a consequence of Theorem 9.29, and more precisely of its proof, if (X_i) is a sequence of independent symmetric Radon random variables with values in a Banach space B such that sup_n ‖S_n‖ < ∞ almost surely but (S_n) does not converge, there exist an ω and a subsequence (i_k) = (i_k(ω)) such that (X_{i_k}(ω)) is equivalent to the canonical basis of c₀. Indeed, by Fubini's theorem, there exists an ω on the space supporting the X_i's such that sup_n ‖ Σ_{i=1}^n ε_i X_i(ω) ‖ < ∞ almost surely but inf_i ‖X_i(ω)‖ > 0 along a subsequence. The remark then follows from the proof of the implication (i) ⇒ (iv).

Notes and references

This chapter reproduces, although not up to the original, parts of the excellent notes [Pi16] by G. Pisier, where the interested reader can

find more Banach space theory oriented results and in particular quantitative dimensional results. For a more operator theoretical point of view, see [Pie]. A complete exposition of type and cotype and their relations to the local theory of Banach spaces is the book [Mi-S] by V.D. Milman and G. Schechtman. (A more recent "volumic" description of local theory is the book [Pi18] by G. Pisier.) The Lecture Notes [Sch3] by L. Schwartz survey much of the connections between Probability and Geometry in Banach spaces until 1980. See also the exposition [Wo2] by W.A. Woyczynski. We refer to these works for accurate references.

Dvoretzky's theorem was established in 1961 [Dv]. The new proof by V.D. Milman [Mi1], using isoperimetric methods and amplified later on in the paper [F-L-M], considerably influenced the developments of the local theory of Banach spaces. A detailed account on applications of isoperimetric inequalities and concentration of measure phenomena to the Geometry of Banach spaces may be found in [Mi-S]. The "concentration dimension" of a Gaussian variable was introduced by G. Pisier [Pi16] (see also [Pi18]) in a Gaussian version of Dvoretzky's theorem, and the first proof of Theorem 9.2 is taken from [Pi16]. The second proof is due to Y. Gordon [Gor1], [Gor2]. The Dvoretzky-Rogers lemma appeared in [D-R]; various simple proofs are given in the modern literature, e.g. [F-L-M], [Mi-S], [TJ2], [Pi15].

The notions of type and cotype of Banach spaces were explicitely introduced by B. Maurey in the Maurey-Schwartz Seminar 1972/73 (see also [Mau1]) and independently by J. Hoffmann-Jørgensen [HJ1] (cf. [Pi16]). The fundamental Theorems 9.6 and 9.16 are due to B. Maurey and G. Pisier [Mau-Pi], with an important contribution by J.-L. Krivine [Kr]. The proof of Theorem 9.6 through stable distributions and their representation is due to G. Pisier [Pi12] and was motivated by the results of W.B. Johnson and G. Schechtman on embedding ℓ_p^m into ℓ_1^n [J-S1]; embeddings via stable variables had already been used in [B-DC-K]. Theorem 9.10 is due to S. Kwapien [Kw1]. We learned its optimality as well as its dual version (Proposition 9.15) from G. Schechtman and J. Zinn (personal communication). Proposition 9.12 (iii) was known since the paper [M-P2], in which Lemma 5.8 is established. Proposition 9.13 comes from [Ro1] (improved in this form in [Led4] and [Pi16]); Proposition 9.14 is however due to S. Kwapien [Pi16]. Comparison of averages with symmetric random variables in Banach spaces not containing ℓ_1^n's is described in [Mau-Pi]. Operators of stable type and their possible different definitions (in the context of Probability in Banach spaces) are examined in [P-R1].

The relation of the strong law of large numbers (SLLN) with geometric convexity conditions goes back to the origin of Probability in Banach spaces. In 1962, A. Beck [Be] showed that the SLLN of Corollary 9.18 holds if and only if B is B-convex: a Banach space B is called B-convex if, for some ε > 0 and some integer n, for all sequences (x_i)_{i≤n} in the unit ball of B, one can find a choice of signs (ε_i)_{i≤n}, ε_i = ±1, with ‖ Σ_{i=1}^n ε_i x_i ‖ ≤ (1 − ε)n. This property was identified to B not containing ℓ_1^n's in [Gi] and then completely elucidated with the concept of type by G. Pisier [Pi1]. Note the prior contribution [Wo1] in smooth normed spaces. More SLLN's are discussed in [Wo2], [HJ-P]. Theorem 9.19 was observed in [Pi3] while Proposition 9.17 is due to J. Hoffmann-Jørgensen and G. Pisier [HJ1], [Pi1]. Theorem 9.20 is taken from the paper [A-G-M-Z], partly motivated by the results of M.B. Marcus and W.A. Woyczynski [M-W] on weak laws of large numbers and stable type (Theorem 9.22, that this study actually extends, but [M-W] goes beyond this statement). A. de Acosta established Theorem 9.21 in [Ac6]; more on "Poissonization" may be found there as well as in, e.g., [A-A-G] and [Ar-G2]. See also [Wo3].

Lemma 9.23 is part of the folklore on pregaussian covariances. Propositions 9.24 and 9.25 have been noticed by many authors, e.g. [Ja2], [C-T2], [Ar-G2] (through the central limit theorem), etc. The characterization (9.16) goes back to [Va1]. Theorem 9.27 has been deduced by several authors from more general statements on Lévy measures and their integrability properties (cf. [Ar-G2], [Li]). The proof through the representation is borrowed from [Pi16], as is the approach through stochastic integrals (see also [M-P3], [Ro2]). Note that while L_p-spaces, 1 ≤ p < 2, are not of stable type p, one can still describe spectral measures of p-stable random variables in L_p. One can show for example in this way that if m is a (say) probability measure on the unit sphere of ℓ_p and if Y = (Y_k) has law m, then m is the spectral measure of a p-stable random variable in ℓ_p, 1 ≤ p < 2, if and only if

    Σ_k IE[ |Y_k|^p ( 1 + log^+ ( |Y_k| / ‖Y_k‖_p ) ) ] < ∞ ,

where ‖Y_k‖_p = (IE|Y_k|^p)^{1/p}. The proof goes again through the representation together with similar arguments (cf. Chapter 5). This has been known by S. Kwapien and G. Pisier for some time and presented in [C-R-W] and [G-Z1]. That S_n converges almost surely when sup_n |S_n| < ∞ for real random variables is classically deduced from Kolmogorov's converse inequality (and the three series theorem). Theorem 9.29 on Banach spaces not containing c₀ is due to J. Hoffmann-Jørgensen [H-J2] and S. Kwapien [Kw2] (for the main implication (i) ⇒ (iv)). Lemma 9.30 goes back to [B-P] (cf. [Li-T1]); see also [A-G] (attributed to X. Fernique).

Chapter 10. The central limit theorem

10.1 Some general facts about the central limit theorem
10.2 Some central limit theorems in certain Banach spaces
10.3 A small ball criterion for the central limit theorem
Notes and references

Chapter 10. The central limit theorem

The study of strong limit theorems for sums of independent random variables, like the strong law of large numbers or the law of the iterated logarithm in the preceding chapters, showed that in Banach spaces these can only be reasonably understood when the corresponding weak property, that is tightness or convergence in probability, is realized. It was shown indeed that under natural moment conditions, the strong statements actually reduce to the corresponding weak ones. On the line or in finite dimensional spaces, these moment conditions usually automatically ensure the weak limiting property; this is no more the case in general Banach spaces. There is some point, therefore, to attempt to investigate one typical tightness question in Banach space. One such example is provided by the central limit theorem (in short CLT), of course one of the main topics in Probability theory. We only investigate here the very classical CLT for sums of independent and identically distributed random variables with normalization √n. This framework is actually rich enough already to analyze the main questions, and its study will indicate the typical problems and difficulties in achieving tightness in Banach spaces. Let us mention that this study of the classical CLT will be further developed in the empirical process framework in Chapter 14.

In the first section of this chapter, we present some general facts on the CLT. In the second one, we make use of the type and cotype conditions to extend to certain classes of Banach spaces the classical characterization of the CLT. In the last paragraph, we describe a small ball criterion, which might be of independent interest, as well as an almost sure randomized CLT of possible application in Statistics. In the whole chapter we deal with Radon random variables, and actually, for more convenience, with Borel random variables with values in a separable Banach space. Some results extend, with only minor modifications, to our usual more general setting, like for example the results of Section 10.3. We leave this to the interested reader.

10.1 Some general facts about the central limit theorem

We start with the fundamental definition of the central limit property. Let thus B denote a separable Banach space. If X is a (Borel) random variable with values in B, we denote by (X_i) a sequence of independent copies of X, and set, as usual, S_n = X_1 + ⋯ + X_n, n ≥ 1. X is said to satisfy the central limit theorem (CLT) in B if the sequence (S_n/√n) converges weakly in B.

Once this definition has been given, one of the main questions is of course to decide when a random variable X satisfies the CLT, and if possible in terms only of the distribution of X. It is well-known that on the line a random variable X satisfies the CLT if and only if IEX = 0 and IEX^2 < ∞, and if X satisfies the CLT, the sequence (S_n/√n) converges weakly to a normal distribution with mean zero and variance IEX^2.

Once IEX^2 < ∞ has been shown, the centering will be obvious from the strong law of large numbers, and the sufficiency part can be established by various methods, for example Lévy's method of characteristic functions or Lindeberg's truncation approach. We would like to outline here the necessity of IEX^2 < ∞, which is particularly clear using the methods developed so far in this book (and somewhat annoying by the usual ones). Replacing X by X − X', where X' is an independent copy of X, we can assume without loss of generality that X is symmetric. Let us show more precisely that if the sequence (S_n/√n) is stochastically bounded (of course necessary for X to satisfy the CLT), then IEX = 0 and IEX^2 < ∞. For every n ≥ 1 and i = 1, …, n, let

    u_i = u_i(n) = (X_i/√n) I_{|X_i| ≤ √n} .

By the contraction principle (Lemma 6.5), for any t > 0,

    IP{ | Σ_{i=1}^n u_i | > t } ≤ 2 IP{ |S_n/√n| > t } .

By hypothesis, choose t = t₀ independent of n such that the right side of this inequality is less than 1/72. By Proposition 6.8, we get that

    IE| Σ_{i=1}^n u_i |^2 ≤ 18 (1 + t₀^2)

uniformly therefore in n. Hence, by orthogonality and identical distribution,

(10.1)    IE|X|^2 I_{|X| ≤ √n} ≤ 18 (1 + t₀^2)

and thus the result when n tends to infinity.
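The truncated second moments in (10.1) are computable for explicit laws, which makes the necessity argument concrete. For a hypothetical symmetric X with IP{|X| = k} proportional to k^{−3} (so that IEX² = Σ_k c/k = ∞), IE|X|² I_{|X| ≤ √n} grows like log √n, so no uniform bound of the form (10.1) can hold and such an X cannot satisfy the CLT:

```python
import math

def truncated_second_moment(n, kmax=100000):
    """E |X|^2 1_{|X| <= sqrt(n)} for P{|X| = k} = c * k^(-3), k = 1..kmax."""
    c = 1.0 / sum(k ** -3 for k in range(1, kmax + 1))
    cut = math.isqrt(n)
    # k^2 * k^(-3) = 1/k, so the truncated moment is c * H_cut (harmonic sum)
    return c * sum(1.0 / k for k in range(1, min(cut, kmax) + 1))

moments = [truncated_second_moment(4 ** j) for j in range(2, 10)]
```

The values increase without bound along the doubling truncation levels, exhibiting the failure of (10.1) for this heavy-tailed example.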

The sufficiency of the conditions IEX = 0 and IE‖X‖^2 < ∞ for a random variable X to satisfy the CLT clearly extends to the case where X takes values in a finite dimensional space, and it is not too difficult to see that this extends also to Hilbert space. Concerning necessity, we note that the preceding argument allows to conclude that IEX = 0 and IE‖X‖^2 < ∞ for any random variable X satisfying the CLT in a Banach space of cotype 2: indeed, the orthogonality property leading to (10.1) is just the cotype 2 inequality. There are however spaces in which the CLT does not necessarily imply that IE‖X‖^2 < ∞. Rather than to give an example at this stage, we refer to the forthcoming Proposition 10.8, in which we will actually realize that IE‖X‖^2 < ∞ is necessary for X to satisfy the CLT (in and) only in cotype 2 spaces.

If X satisfies the CLT in B however, for any linear functional f in B', the scalar random variable f(X) satisfies the CLT with limiting Gaussian with variance IEf^2(X) < ∞. Hence the sequence (S_n/√n) actually converges weakly to a Gaussian random variable G = G(X) with the same covariance structure as X. In other words, and in the terminology of Chapter 9, a random variable X satisfying the CLT is necessarily pregaussian. By Proposition 9.25, we can then recover in particular that a random variable X with values in a Banach space B of cotype 2 satisfying the CLT is such that IE‖X‖^2 < ∞.

We mentioned above that in general a random variable satisfying the CLT does not necessarily have a strong second moment. The gap however induces conceptually a rather deep difference. What can then be said on the integrability properties of the norm of X when X satisfies the CLT in an arbitrary Banach space? The next lemma describes the best possible result in this direction. It shows that if the strong second moment is not always available, nevertheless a close property holds.

Lemma 10.1. Let X be a random variable in B satisfying the CLT. Then X has mean zero and

    lim_{t→∞} t^2 IP{ ‖X‖ > t } = 0 .

In particular, IE‖X‖^p < ∞ for every 0 < p < 2.

Proof. Replacing X by X − X', where X' is an independent copy of X, and with some trivial desymmetrization argument, we need only consider the case of a symmetric random variable X. The mean zero property follows from the second result together with the law of large numbers. Let 0 < ε ≤ 1. For any t > 0,

    lim sup_{n→∞} IP{ ‖S_n‖/√n > t } ≤ IP{ ‖G‖ ≥ t } ,

where G = G(X) denotes the limiting Gaussian distribution of the sequence (S_n/√n). Since G is Gaussian, IE‖G‖^2 < ∞ and we can therefore find t₀ = t₀(ε) (≥ 3) large enough so that 2 IP{‖G‖ ≥ t₀} ≤ ε t₀^{−2}. (We are thus actually only using that lim_{t→∞} t^2 IP{‖G‖ > t} = 0, the property we would like to establish for X.)

Hence, there exists n₀ = n₀(ε) such that for all n ≥ n₀,

    IP{ ‖S_n‖/√n > t₀ } ≤ 2 ε t₀^{−2} .

By Lévy's inequality (2.7) for symmetric random variables,

    IP{ max_{i≤n} ‖X_i‖ > t₀ √n } ≤ 4 ε t₀^{−2}  ( ≤ 1/2 ) .

By Lemma 2.6,

    n IP{ ‖X‖ > t₀ √n } ≤ 8 ε t₀^{−2} ,

which therefore holds for all n ≥ n₀. Let now t be such that t₀ √n ≤ t < t₀ √(n+1) for some n ≥ n₀. Then

    t^2 IP{ ‖X‖ > t } ≤ t₀^2 (n+1) IP{ ‖X‖ > t₀ √n } ≤ 16 ε ,

that is, lim sup_{t→∞} t^2 IP{‖X‖ > t} ≤ 16 ε. Since ε > 0 is arbitrarily small, this proves the lemma.

It is useful to observe that the preceding argument can also be applied to S_k/√k instead of X, for each fixed k, with bounds uniform in k.

Indeed, if Y₁, …, Y_m denote independent copies of S_k/√k, then Σ_{i=1}^m Y_i/√m has the same distribution as S_{mk}/√(mk) and the argument remains the same. In this way, we can state the following corollary to the proof of Lemma 10.1 and the preceding observation.

Corollary 10.2. If X satisfies the CLT, then

    lim_{t→∞} t^2 sup_n IP{ ‖S_n‖/√n > t } = 0 .

In particular, for any 0 < p < 2,

    sup_n IE( ‖S_n‖/√n )^p < ∞ .

It will be convenient for the sequel to retain a simple quantitative version of this result, that immediately follows from the proof of Lemma 10.1 and the preceding observation: namely, if, for some ε > 0,

(10.2)    sup_n IP{ ‖S_n‖/√n > ε } ≤ 1/8 ,

then sup_n IE‖S_n/√n‖ ≤ 20 ε.

Let us mention further that the preceding argument leading to Corollary 10.2 extends to more general normalization sequences (a_n), like, e.g., a_n = n^{1/p}, 0 < p < 2, or a_n = (2nLLn)^{1/2}, since the only property really used is that a_{mk} ≤ C a_m a_k for some constant C. Finally, as an alternate approach to the proof of Corollary 10.2, one can use Hoffmann-Jørgensen's inequalities (combine to this aim Proposition 6.8 and Lemma 7.2). In the following, we adopt the notation

    CLT(X) = sup_n IE‖ S_n/√n ‖ .

By Corollary 10.2, CLT(X) < ∞ when X satisfies the CLT. It will be seen below that CLT(·) defines a norm on the linear space of all random variables satisfying the CLT.

At this point, we would like to open a parenthesis and mention a few words about what can be called the bounded form of the CLT. We can say indeed that a random variable X with values in B satisfies the bounded CLT if the sequence (S_n/√n) is bounded in probability, that is, if for each ε > 0 there is a positive finite number A such that

    sup_n IP{ ‖S_n‖/√n > A } < ε .

The proofs of Lemma 10.1 and Corollary 10.2 of course carry over to conclude that if X satisfies the bounded CLT,

The bounded CLT does not in general imply that $X$ is pregaussian. Since however $\mathbb{E}f(X) = 0$ and $\mathbb{E}f^2(X) < \infty$ for every $f$ in $B'$, there exists by the finite dimensional CLT a Gaussian process $G = (G_f)_{f\in B_1'}$ indexed by the unit ball $B_1'$ of $B'$ with the same covariance structure as $X$. Moreover, $G$ is almost surely bounded since, by the finite dimensional CLT and convergence of moments (using Skorokhod's theorem on weak convergence),
$$\mathbb{E}\sup_{f\in B_1'} |G_f| \le \mathrm{CLT}(X) < \infty\,.$$
In particular, by Sudakov's minoration (Theorem 3.18), the family $\{f(X);\ f\in B_1'\}$ is relatively compact in $L_2$. However, this Gaussian process $G$ need not define in general a Radon probability distribution on $B$.

As we have seen to start with, on the line, the bounded and true CLT are equivalent. As will follow from subsequent results, this equivalence actually extends to Hilbert spaces and even cotype 2 spaces (and, similarly, $\mathbb{E}\|X\|^2 < \infty$ in cotype 2 spaces). It is not difficult however to see that this equivalence already fails in $L_p$-spaces when $p > 2$. Rather than to detail an example at this stage, we refer to Theorem 10.10 below, where a characterization of the CLT and of the bounded CLT in $L_p$-spaces will clearly indicate the difference between these two properties. It is an open problem to characterize those Banach spaces in which bounded and true CLT are equivalent.

Let us consider indeed the following easy but meaningful example. Denote by $(e_k)_{k\ge1}$ the canonical basis of $c_0$ and consider the random variable $X$ with values in $c_0$ defined by
$$(10.3)\qquad X = \sum_k \frac{\varepsilon_k\, e_k}{(2\log(k+1))^{1/2}}$$
where $(\varepsilon_k)$ is a Rademacher sequence.

Let us sketch that $X$ satisfies the bounded CLT. For every $A > 0$, by independence and identical distribution,
$$\mathbb{P}\Big\{\frac{\|S_n\|}{\sqrt n} > A\Big\} = 1 - \prod_k\left(1 - \mathbb{P}\Big\{\Big|\sum_{i=1}^n \frac{\varepsilon_i}{\sqrt n}\Big| > A(2\log(k+1))^{1/2}\Big\}\right).$$
From the subgaussian inequality (4.1),
$$\mathbb{P}\Big\{\Big|\sum_{i=1}^n \frac{\varepsilon_i}{\sqrt n}\Big| > A(2\log(k+1))^{1/2}\Big\} \le 2\exp\big(-A^2\log(k+1)\big)\,.$$
Therefore, if $A$ is large enough independently of $n$,
$$\mathbb{P}\Big\{\frac{\|S_n\|}{\sqrt n} > A\Big\} \le 4\sum_k \exp\big(-A^2\log(k+1)\big)\,,$$
from which it clearly follows that $X$ satisfies the bounded CLT. However, $X$ does not satisfy the CLT in $c_0$ since $X$ is not pregaussian. Indeed, the natural Gaussian structure with the same covariance as $X$ should be given by
$$G = \sum_k \frac{g_k\, e_k}{(2\log(k+1))^{1/2}}$$
where $(g_k)$ is an orthogaussian sequence. But we know that
$$\limsup_{k\to\infty} \frac{|g_k|}{(2\log(k+1))^{1/2}} = 1$$
almost surely.
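The almost sure behaviour of the Gaussian coefficients invoked here can be checked directly; the following is a minimal sketch of the standard Borel-Cantelli argument behind the stated limsup.

```latex
% For an orthogaussian sequence (g_k), the Gaussian tail estimate gives, as u -> oo,
%   P{|g_k| > u} = exp(-u^2/2 (1 + o(1))).
% Hence, for fixed 0 < delta < 1,
%   sum_k P{ |g_k| > (1+delta)(2 log(k+1))^{1/2} } <= C sum_k (k+1)^{-(1+delta)^2} < oo,
%   sum_k P{ |g_k| > (1-delta)(2 log(k+1))^{1/2} } >= c sum_k (k+1)^{-(1-delta)^2} = oo,
% and, by the two Borel-Cantelli lemmas (the g_k being independent),
\limsup_{k\to\infty} \frac{|g_k|}{(2\log(k+1))^{1/2}} = 1 \quad \text{almost surely.}
% In particular the coefficients of G do not tend to 0, so G cannot define a
% (Radon) random variable with values in c_0.
```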

Therefore $G$ is a bounded Gaussian process but does not define a Gaussian random variable with values in $c_0$. The choice of $c_0$ in this example is not casual, as shown by the next result.

Proposition 10.3. Let $B$ be a separable Banach space. In order that every random variable $X$ with values in $B$ satisfying the bounded CLT be pregaussian, it is necessary and sufficient that $B$ does not contain an isomorphic copy of $c_0$.

Proof. Example (10.3) provides necessity. Assume $B$ does not contain $c_0$ and let $X$ be defined on $(\Omega, \mathcal{A}, \mathbb{P})$ satisfying the bounded CLT in $B$. Since $\mathbb{E}\|X\| < \infty$, there exists a sequence $(\mathcal{A}_N)$ of finite sub-$\sigma$-algebras of $\mathcal{A}$ such that, if $X^N = \mathbb{E}^{\mathcal{A}_N} X$, the sequence $(X^N)$ converges almost surely and in $L_1(B)$ to $X$. By Jensen's inequality, $\mathrm{CLT}(X^N) \le \mathrm{CLT}(X)$ for every $N$. Set $Y^N = X^N - X^{N-1}$ ($X^0 = 0$). Since, for each $N$, $Y^N$ is finite valued, $Y^N$ is pregaussian. Denote by $(G^N)$ independent Gaussian random variables in $B$ such that $G^N$ has the same covariance structure as $Y^N$. As is easily seen, for every $f$ in $B'$ and every $N$,
$$\mathbb{E}f^2\Big(\sum_{i=1}^N G^i\Big) = \mathbb{E}f^2(X^N)\,.$$
Hence, for every $N$, by the finite dimensional CLT (and convergence of moments),
$$\mathbb{E}\Big\|\sum_{i=1}^N G^i\Big\| \le \mathrm{CLT}(X^N) \le \mathrm{CLT}(X)\,.$$
Thus the sequence $\big(\sum_{i=1}^N G^i\big)$ is almost surely bounded. Since $B$ does not contain $c_0$, it converges almost surely (Theorem 9.29) to a Gaussian random variable $G$ which satisfies $\mathbb{E}f^2(G) = \mathbb{E}f^2(X)$ for every $f$ in $B'$. Hence $X$ is pregaussian.

After this short digression on the bounded CLT, we come back to the general study of the CLT.

We first recall from Chapter 2 some of the criteria which might be used in order to establish weak convergence of the sequence $(S_n/\sqrt n)$. From the finite dimensional CLT and Theorem 2.1, a random variable $X$ with values in a separable Banach space $B$ satisfies the CLT if and only if for each $\varepsilon > 0$ one can find a compact set $K$ in $B$ such that
$$\mathbb{P}\Big\{\frac{S_n}{\sqrt n} \in K\Big\} \ge 1 - \varepsilon \quad\text{for every } n$$
(or only every $n$ large enough). Alternatively, and in terms of finite dimensional approximation, a random variable $X$ with values in $B$ such that $\mathbb{E}f(X) = 0$ and $\mathbb{E}f^2(X) < \infty$ for every $f$ in $B'$ (i.e. such that $f(X)$ satisfies the scalar CLT for every $f$) satisfies the CLT if and only if for every $\varepsilon > 0$ there is a finite dimensional subspace $F$ of $B$ such that
$$(10.4)\qquad \mathbb{P}\Big\{\Big\|T\Big(\frac{S_n}{\sqrt n}\Big)\Big\| > \varepsilon\Big\} < \varepsilon \quad\text{for every } n$$
(or only $n$ large enough), where $T = T_F$ denotes the quotient map $B \to B/F$; equivalently, $\mathrm{CLT}(T(X)) < \varepsilon$. Note that such a property is realized as soon as there exists, for every $\varepsilon > 0$, a step mean zero random variable $Y$ such that $\mathrm{CLT}(X - Y) < \varepsilon$. (The converse actually also holds, cf. [Pi3].) Note further from these considerations that $X$ satisfies the CLT as soon as for each $\varepsilon > 0$ there is a random variable $Y$ satisfying the CLT such that $\mathrm{CLT}(X - Y) < \varepsilon$. By (10.2) in particular, the linear space of all random variables satisfying the CLT equipped with the norm $\mathrm{CLT}(\cdot)$ defines a Banach space.

Before turning to the next section, let us continue with these easy observations and mention some comments about symmetrization. By centering and Jensen's inequality on (10.4) for example, clearly, $X$ satisfies the CLT if and only if $X - X'$ does, where $X'$ is an independent copy of $X$. When trying to establish that a random variable $X$ satisfies the CLT, it will thus basically be enough to deal with a symmetric $X$.
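The two inequalities behind this symmetrization step can be recorded explicitly; the following is a short sketch using only Jensen's inequality (applied conditionally) and the triangle inequality.

```latex
% Let X' be an independent copy of X, with partial sums S'_n. If E X = 0, then,
% by Jensen's inequality applied conditionally on the sequence (X_i),
\mathbb{E}\Big\|\frac{S_n}{\sqrt n}\Big\|
  = \mathbb{E}\Big\|\,\mathbb{E}\Big(\frac{S_n - S'_n}{\sqrt n}\;\Big|\;(X_i)\Big)\Big\|
  \le \mathbb{E}\Big\|\frac{S_n - S'_n}{\sqrt n}\Big\|\,,
% while the triangle inequality bounds the symmetrized sums in the other direction,
% so that, uniformly in n,
\mathrm{CLT}(X) \le \mathrm{CLT}(X - X') \le 2\,\mathrm{CLT}(X)\,.
```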

In the same spirit, we also see from Lemma 6.3 that $X$ satisfies the CLT if and only if $\varepsilon X$ does, where $\varepsilon$ denotes a Rademacher random variable independent of $X$. This property is actually one example of a general randomization argument which might be worthwhile to detail at this point. If $0 < p, q < \infty$, denote by $L_{p,q}$ the space of all real random variables $\xi$ such that
$$\|\xi\|_{p,q} = \Big( q \int_0^\infty \big(t^p\,\mathbb{P}\{|\xi| > t\}\big)^{q/p}\,\frac{dt}{t} \Big)^{1/q} < \infty\,.$$
$L_{p,p}$ is just $L_p$ by the usual integration by parts formula, and $L_{p,q_1} \subset L_{p,q_2}$ if $q_1 \le q_2$.

Proposition 10.4. Let $X$ be a random variable with values in $B$ such that $\mathrm{CLT}(X) < \infty$ (in particular $\mathbb{E}X = 0$). Let further $\xi$ be a non-zero real random variable in $L_{2,1}$ independent of $X$. Then
$$\frac12\,\mathbb{E}|\xi|\;\mathrm{CLT}(X) \le \mathrm{CLT}(\xi X) \le 2\,\|\xi\|_{2,1}\,\mathrm{CLT}(X)\,.$$
In particular, $\xi X$ satisfies the CLT (and $\mathbb{E}X = 0$) if and only if $X$ does.

Proof. The second assertion follows from (10.4) and the inequalities applied to $T(X)$ for quotient maps $T$. We first prove the right hand side inequality. Assume to begin with that $\xi$ is the indicator function $I_A$ of some set $A$. Then clearly, by independence and identical distribution, for every $n$,
$$\mathbb{E}\Big\|\sum_{i=1}^n \xi_i X_i\Big\| = \mathbb{E}\Big\|\sum_{i=1}^{S_n(\xi)} X_i\Big\| \le \mathbb{E}\big((S_n(\xi))^{1/2}\big)\,\mathrm{CLT}(X) \le \sqrt{n\,\mathbb{P}(A)}\;\mathrm{CLT}(X)$$
where the $\xi_i$ are independent copies of $\xi$ and $S_n(\xi) = \xi_1 + \dots + \xi_n$. Hence, in this case,
$$\mathrm{CLT}(\xi X) \le \sqrt{\mathbb{P}(A)}\;\mathrm{CLT}(X)\,.$$
Now, classical extremal properties of indicators in the spaces $L_{p,q}$ yield the conclusion. Supposing first $\xi \ge 0$, let, for each $\varepsilon > 0$,
$$\xi^{(\varepsilon)} = \sum_{k=1}^\infty \varepsilon k\, I_{\{\varepsilon(k-1) < \xi \le \varepsilon k\}} = \sum_{k=1}^\infty \varepsilon\, I_{\{\xi > \varepsilon(k-1)\}}\,.$$
By the triangle inequality and the preceding,
$$\mathrm{CLT}(\xi^{(\varepsilon)} X) \le \sum_{k=1}^\infty \varepsilon\,\big(\mathbb{P}\{\xi > \varepsilon(k-1)\}\big)^{1/2}\,\mathrm{CLT}(X) \le \big(\varepsilon + \|\xi\|_{2,1}\big)\,\mathrm{CLT}(X)\,.$$
Letting $\varepsilon$ tend to 0 yields the result in this case. The general one follows by writing $\xi = \xi^+ - \xi^-$.

To establish the reverse inequality, note first that, by centering and Lemma 6.3,
$$\mathrm{CLT}(\varepsilon\xi X) \le 2\,\mathrm{CLT}(\xi X)$$
where $\varepsilon$ is a Rademacher variable independent of $\xi$ and $X$. Use then the contraction principle conditionally on $X$, in the form of Lemma 4.5, to get $\mathbb{E}|\xi|\,\mathrm{CLT}(X) \le \mathrm{CLT}(\varepsilon\xi X)$. The conclusion follows.

Proposition 10.4 in particular applies when $\xi$ is a standard normal variable. Therefore, the normalized sums $S_n/\sqrt n$ can be regarded as conditionally Gaussian, and several Gaussian tools and techniques can be used. This will be one of the arguments in the empirical process approach to the CLT developed in Chapter 14. Note further that this Gaussian randomization is similar to the one put forward in the series representation of stable random variables. This relation to stables is not fortuitous, and the difficulty in proving tightness in the CLT resembles in some sense the difficulty in showing existence of a $p$-stable random vector, $0 < p < 2$, with a given spectral measure. Let us mention also that, while for general sums of independent random variables Gaussian randomization is heavier than Rademacher randomization (cf. (4.9)), the crucial point in Proposition 10.4 is that we are dealing with independent identically distributed random variables. The condition $\xi \in L_{2,1}$ has been shown to be best possible in general [L-T1], although $L_2$ is (necessary and) sufficient in various classes of spaces; this $L_2$-multiplication property is perhaps related to some geometrical properties of the underlying Banach space.
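The extremal role of indicators invoked in the proof can be made explicit by a direct computation with the definition of the $L_{p,q}$ norm; this is a minimal sketch.

```latex
% Let xi = I_A with P(A) = a. Then P{|xi| > t} = a for 0 <= t < 1 and 0 for t >= 1, so
\|I_A\|_{p,q}
  = \Big( q \int_0^1 \big(t^p a\big)^{q/p}\,\frac{dt}{t} \Big)^{1/q}
  = \Big( q\, a^{q/p} \int_0^1 t^{q-1}\,dt \Big)^{1/q}
  = a^{1/p} = \mathbb{P}(A)^{1/p}\,,
% independently of q. In particular ||I_A||_{2,1} = P(A)^{1/2}, which is exactly the
% quantity produced by the first step of the proof of Proposition 10.4.
```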

Note finally that the argument of the proof of Proposition 10.4 is not really limited to the normalization $\sqrt n$ of the CLT, and that similar statements can be obtained in the case, for example, of the iid laws of large numbers with normalization $n^{1/p}$, $0 < p < 2$, and in the case of the law of the iterated logarithm.

10.2. Some central limit theorems in certain Banach spaces

In this paragraph, we try to find conditions on the distribution only of a Borel random variable $X$ with values in a separable Banach space $B$ in order that it satisfy the CLT. As we know, this is a difficult question in general spaces, and at the present time it has a clear cut answer only for special classes of Banach spaces. In this section, we present a sample of these results. We start with some examples and negative facts in order to set up the framework of the study.

Let us first mention that a Gaussian random variable clearly satisfies the CLT. A random variable $X$ with values in a finite dimensional Banach space $B$ satisfies the CLT if and only if $\mathbb{E}X = 0$ and $\mathbb{E}\|X\|^2 < \infty$. As will be shown below, this equivalence extends to infinite dimensional Hilbert spaces, but actually only to them! In general, very bad situations can occur, and strong assumptions on the distribution of a random variable $X$ have no reason to ensure that $X$ satisfies the CLT. For example, the random variable in $c_0$ defined by (10.3) is symmetric and almost surely bounded but does not satisfy the CLT; it fails the CLT since it is not pregaussian. But if we go back to Example 7.11, we have a bounded symmetric pregaussian (elementary verification) variable in $c_0$ which does not satisfy the bounded CLT, hence the CLT. (In [Ma-Pl], there is even an example of a bounded symmetric pregaussian random variable in $c_0$ satisfying the bounded CLT but failing the CLT.) Even the fact that these examples are constructed in $c_0$, a space with "bad" geometrical properties (of no non-trivial type or cotype), is not restrictive. There exist indeed spaces of type $2-\varepsilon$ and cotype $2+\varepsilon$ for every $\varepsilon > 0$ in which one can find bounded pregaussian random variables failing the CLT [Led3].

In spite of these negative examples, some positive results can be obtained. As indicated by the last mentioned example, spaces of type 2 and/or cotype 2 play a special role. This is also made clear by Propositions 9.24 and 9.25, connecting type 2 and cotype 2 with pregaussian random variables and some inequalities involving those.

The first theorem extends to type 2 spaces the sufficient conditions on the line for the CLT.

Theorem 10.5. Let $X$ be a mean zero random variable such that $\mathbb{E}\|X\|^2 < \infty$, with values in a separable Banach space $B$ of type 2. Then $X$ satisfies the CLT. Conversely, if in a (separable) Banach space $B$ every random variable $X$ such that $\mathbb{E}X = 0$ and $\mathbb{E}\|X\|^2 < \infty$ satisfies the CLT, then $B$ must be of type 2.

Proof. The definition of type 2 in the form of Proposition 9.11 immediately implies that, for $X$ such that $\mathbb{E}X = 0$ and $\mathbb{E}\|X\|^2 < \infty$,
$$(10.5)\qquad \mathrm{CLT}(X) \le C\,(\mathbb{E}\|X\|^2)^{1/2}\,.$$
If, given $\varepsilon > 0$, we choose a mean zero random variable $Y$ with finite range such that $\mathbb{E}\|X - Y\|^2 < \varepsilon^2/C^2$, the preceding inequality applied to $X - Y$ yields $\mathrm{CLT}(X - Y) < \varepsilon$, and thus $X$ satisfies the CLT ((10.4)). Conversely, if, in $B$, each mean zero random variable $X$ with $\mathbb{E}\|X\|^2 < \infty$ satisfies the CLT, such a random variable is necessarily pregaussian. The second part of the theorem therefore simply follows from the corresponding one in Proposition 9.24. Alternatively, one can invoke a closed graph argument to obtain (10.5) and apply then Proposition 9.19.

We would like to mention at this stage, for further purposes, that Theorem 10.5 easily extends to operators of type 2. Let us state, for later reference, the following corollary to the proof of Theorem 10.5.

Corollary 10.6. Let $u : E \to F$ be an operator of type 2 between two separable Banach spaces $E$ and $F$, and let $X$ be a random variable with values in $E$ such that $\mathbb{E}X = 0$ and $\mathbb{E}\|X\|^2 < \infty$. Then the random variable $u(X)$ satisfies the CLT in $F$.

The next statement is the dual result of Theorem 10.5 for cotype 2 spaces.
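The chain of inequalities behind (10.5) can be traced through the standard symmetrization and type 2 steps; the following is a sketch, with $C$ denoting the type 2 constant of $B$.

```latex
% With (eps_i) a Rademacher sequence independent of (X_i), symmetrization
% (Lemma 6.3) and the type 2 inequality E|| sum eps_i x_i ||^2 <= C^2 sum ||x_i||^2,
% applied conditionally, give
\mathbb{E}\Big\|\frac{S_n}{\sqrt n}\Big\|
  \le \frac{2}{\sqrt n}\,\mathbb{E}\Big\|\sum_{i=1}^n \varepsilon_i X_i\Big\|
  \le \frac{2}{\sqrt n}\Big(\mathbb{E}\Big\|\sum_{i=1}^n \varepsilon_i X_i\Big\|^2\Big)^{1/2}
  \le \frac{2C}{\sqrt n}\Big(\sum_{i=1}^n \mathbb{E}\|X_i\|^2\Big)^{1/2}
  = 2C\,(\mathbb{E}\|X\|^2)^{1/2}
% uniformly in n, which is (10.5) (with 2C in place of C).
```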

Theorem 10.7. Let $X$ be a pregaussian random variable with values in a separable cotype 2 Banach space $B$. Then $X$ satisfies the CLT. Conversely, if in a (separable) Banach space $B$ any pregaussian random variable satisfies the CLT, then $B$ must be of cotype 2.

Proof. By Proposition 9.25, in a cotype 2 space,
$$\mathbb{E}\|X\|^2 \le C\,\mathbb{E}\|G(X)\|^2 < \infty$$
for any pregaussian random variable $X$ with associated Gaussian variable $G(X)$. Now, for each $n$, $S_n/\sqrt n$ is pregaussian and associated to $G(X)$ too. Hence
$$\mathrm{CLT}(X) \le \big(C\,\mathbb{E}\|G(X)\|^2\big)^{1/2}\,.$$
Let now $X^N = \mathbb{E}^{\mathcal{A}_N} X$ where $(\mathcal{A}_N)$ is a sequence of finite $\sigma$-algebras generating the $\sigma$-algebra of $X$. Then $(X^N)$ converges almost surely and in $L_2(B)$ to $X$. For each $N$, $X - X^N$ is still pregaussian and, since
$$\mathbb{E}f^2(X - X^N) \le 2\,\mathbb{E}f^2(X) = 2\,\mathbb{E}f^2(G(X))$$
for every $f$ in $B'$, it follows from (3.11) that the sequence $(G(X - X^N))$ is tight (since $G(X)$ is). It can only converge to 0 since, for every $f$, $\mathbb{E}f^2(X - X^N) \to 0$. By the Gaussian integrability properties (Corollary 3.2), for each $\varepsilon > 0$ one can find an $N$ such that $\mathbb{E}\|G(X - X^N)\|^2 < \varepsilon^2/C$. Thus $\mathrm{CLT}(X - X^N) < \varepsilon$ and $X$ satisfies the CLT by the tightness criterion described in Section 10.1. Theorem 10.7 is thus established.

In cotype 2 spaces, random variables satisfying the CLT thus have a strong second moment (because they are pregaussian; we need then simply recall one assertion of Proposition 9.25). This actually only happens in cotype 2 spaces, as shown by the next statement, which complements Theorem 10.7.

Proposition 10.8. If in a (separable) Banach space $B$ every random variable $X$ satisfying the CLT is in $L_2(B)$, then $B$ is of cotype 2.

Proof. Recall that a random variable $X$ satisfying the CLT is such that $\mathbb{E}\|X\|^p < \infty$, $p < 2$ (Lemma 10.1). Assume $B$ is not of cotype 2.

Since (cf. Remark 9.26) the cotype 2 definitions with either Gaussian or Rademacher averages are equivalent, there exists a sequence $(x_j)$ in $B$ such that $\sum_j g_j x_j$ converges almost surely but for which $\sum_j \|x_j\|^2 = \infty$. On some suitable probability space, consider
$$X = \sum_{j=1}^\infty 2^{j/2}\, I_{A_j}\, g_j\, x_j$$
where the $A_j$ are disjoint sets with $\mathbb{P}(A_j) = 2^{-j}$, independent from the orthogaussian sequence $(g_j)$. Then $\mathbb{E}\|X\|^2 = \sum_j \|x_j\|^2 = \infty$. Let us show however that $X$ satisfies the CLT, which leads therefore to a contradiction and proves the proposition.

Let $(g_j^i)$ be independent copies of $(g_j)$ and $(A_j^i)$ independent copies of $(A_j)$, all of them assumed to be independent. For every $n$, conditionally on the $A_j^i$'s, $S_n/\sqrt n$ has the same distribution as
$$\sum_j 2^{j/2}\Big(\frac1{\sqrt n}\sum_{i=1}^n I_{A_j^i}\, g_j^i\Big) x_j\,,$$
and, conditionally on $(A_j^i)$, the Gaussian variables $\sum_{i=1}^n I_{A_j^i} g_j^i$ have a variance close to $n\,2^{-j}$. For every $n$, let $N(n)$ be the smallest integer such that $2^{N(n)} \ge 2^5 n$, and let
$$\Omega_0 = \bigcup_{j=N(n)+1}^\infty\, \bigcup_{i=1}^n A_j^i\,.$$
$\Omega_0$ only depends on the $A_j^i$'s and $\mathbb{P}(\Omega_0) \le 2^{-5}$; on the complement of $\Omega_0$, only the terms with $j \le N(n)$ contribute to $S_n/\sqrt n$. We now use the law of large numbers to show that, for every $n$ and every $t > 0$,
$$\mathbb{P}\Big\{\bigcup_{j=1}^{N(n)}\Big\{\sum_{i=1}^n \big(I_{A_j^i} - 2^{-j}\big) > t\, n\, 2^{-j}\Big\}\Big\} \le \sum_{j=1}^{N(n)} \frac{2^j}{t^2 n} \le \frac{2^7}{t^2}\,.$$
Let us take $t = 2^6$. Hence there is a set of probability bigger than $1 - 2^{-4}$ such that, conditionally on the $A_j^i$'s, $S_n/\sqrt n$ has the same distribution as $\sum_{j=1}^{N(n)} \xi_j x_j$ where the $\xi_j$ are independent normal random variables with variances less than $2^6 + 1 \le 2^8$. Hence, for every $n$ and $s > 0$,
$$\mathbb{P}\Big\{\Big\|\frac{S_n}{\sqrt n}\Big\| > s\Big\} \le 2^{-4} + \mathbb{P}\Big\{\Big\|\sum_{j=1}^{N(n)} \xi_j x_j\Big\| > s\Big\} \le 2^{-4} + \frac{2^4}{s}\,\mathbb{E}\Big\|\sum_j g_j x_j\Big\|$$
where we have used Markov's inequality and the contraction principle in the last step. If we now choose $s = 2^8\,\mathbb{E}\|\sum_j g_j x_j\|$ (for example), the right hand side is at most $2^{-4} + 2^{-4} = 1/8$, and we deduce from (10.2) that
$$\mathrm{CLT}(X) \le 20\cdot2^8\,\mathbb{E}\Big\|\sum_{j=1}^\infty g_j x_j\Big\|\,.$$
It is now easy to conclude that $X$ satisfies the CLT: the same inequality, when a finite number of the $x_j$ are replaced by 0, allows indeed to organize a finite dimensional approximation of $X$ in the $\mathrm{CLT}(\cdot)$-norm. $X$ therefore satisfies the CLT, and the proof is thereby completed.

In the spirit of the preceding proof, and as a concrete example, let us note in passing that a convergent Rademacher series $X = \sum_i \varepsilon_i x_i$ satisfies the CLT if and only if the corresponding Gaussian series $\sum_i g_i x_i$ converges. Necessity is obvious since $\sum_i g_i x_i$ is a Gaussian variable with the same covariance as $X$. Concerning sufficiency, if $(\varepsilon_{ij})$ and $(g_{ij})$ are respectively doubly-indexed Rademacher and orthogaussian sequences, and starting with a finite sequence $(x_i)$, for every $n$, by (4.8),
$$\mathbb{E}\Big\|\frac{S_n}{\sqrt n}\Big\| = \mathbb{E}\Big\|\sum_i\Big(\sum_{j=1}^n \frac{\varepsilon_{ij}}{\sqrt n}\Big) x_i\Big\| \le \Big(\frac\pi2\Big)^{1/2}\mathbb{E}\Big\|\sum_i\Big(\sum_{j=1}^n \frac{g_{ij}}{\sqrt n}\Big) x_i\Big\| = \Big(\frac\pi2\Big)^{1/2}\mathbb{E}\Big\|\sum_i g_i x_i\Big\|$$
where the last step follows from the Gaussian rotational invariance. By approximation, $X$ is then easily seen to satisfy the CLT when $\sum_i g_i x_i$ converges almost surely.

The preceding results are perhaps the most satisfactory ones on the CLT in Banach spaces, although they actually concern rather small classes of spaces. If we recall (Theorem 9.10) that a Banach space of type 2 and cotype 2 is isomorphic to a Hilbert space, the conjunction of Theorem 10.5 and Proposition 10.8 yields an isomorphic characterization of Hilbert space by the CLT.

Corollary 10.9. A (separable) Banach space $B$ is isomorphic to a Hilbert space if and only if, for every random variable $X$ with values in $B$, the conditions $\mathbb{E}X = 0$ and $\mathbb{E}\|X\|^2 < \infty$ are necessary and sufficient for $X$ to satisfy the CLT.

when X satis.306 IEX = 0 and IEkX k2 < 1 are suÆ ient for X to satisfy the CLT in a type 2 spa e. As we have seen. in general. therefore Hilbert). the integrability ondition IEkX k2 < 1 need not onversely be ne essary (it is only in otype 2 spa es.

e. similar in some sense to the type and otype inequalities.. X pregaussian and tlim !1 t IPfkX k > tg = 0 .es the CLT. but whi h ombines moment assumptions and the pregaussian hara ter.1). let us say that a separable Bana h spa e B satis. 2 i. lim t2 IPfkX k > tg = 0 : t!1 It is therefore of some interest to try to understand the spa es in whi h the best possible ne essary onditions. One onvenient way to investigate this lass of spa es is an inequality. one only knows that (Lemma 10. More pre isely. are also suÆ ient for a random variable X to >satisfy the CLT.

1 p < 1 . if there is a onstant C su h that for any .es the inequality Ros(p) .

P. Theorem 10.10. It an be shown a tually that the spa es of otype 2 are the only ones with this property ( f. Let B be a separable Bana h spa e satisfying Ros(p) for some p > 2 . Then. a 2 random variable X with values in B satis.8 and the observation that IEj X i X G(Xi )jp = Cp ( i X IEG(Xi )2 )p=2 = Cp ( i IEXi2 )p=2 = Cp (IEj X i Xi j2 )p=2 : The same argument together with Proposition 9. Proposition 6.nite sequen e (Xi ) of independent pregausssian random variables with values in B with asso iated Gaussian variables (G(Xi )) (whi h may be assumed to be independent) IEk (10:6) X i Xi kp C X i IEkXi kp + IEk X i G(Xi )kp ! : This inequality is the ve tor valued version of an inequality dis overed by H.6) holds on the line for any p is easily dedu ed from. [Led4℄). The main interest of the inequality Ros(p) in onne tion with the CLT lies in the following observation. 1 p < 1 . for example. Rosenthal (hen e the appellation) on the line. That (10.25 shows that otype 2 spa es also satisfy Ros(p) for every p .

The ne essity has been dis ussed in Se tion 10. as we know.es the CLT if and only if it is pregaussian and tlim !1 t IPfkX k > tg = 0 . Turning to suÆ ien y. in any spa e. our aim is to show that for any symmetri pregaussian X with values in B (10:7) CLT (X ) C (kX k2.1 and holds.1 + IEkG(X )k) . Proof.

Indeed, given $X$ symmetric, pregaussian and such that $\lim_{t\to\infty} t^2\,\mathbb{P}\{\|X\| > t\} = 0$, and given $\varepsilon > 0$, let us first choose $t$ large enough in order that, if $Y = X I_{\{\|X\| \le t\}}$ (which is still pregaussian by Lemma 9.23),
$$\|X - Y\|_{2,\infty} + \mathbb{E}\|G(X - Y)\| < \frac{\varepsilon}{2C}\,.$$
This can be obtained since $\lim_{t\to\infty} t^2\,\mathbb{P}\{\|X\| > t\} = 0$ and $\lim_{t\to\infty} \mathbb{E}f^2(X I_{\{\|X\| > t\}}) = 0$ for every $f$ in $B'$. Consider then $(\mathcal{A}_N)$, a sequence of finite $\sigma$-algebras generating the $\sigma$-algebra of $X$, and set $Y^N = \mathbb{E}^{\mathcal{A}_N} Y$. Then $Y^N \to Y$ almost surely and in $L_2(B)$ and, as usual, $G(Y - Y^N) \to 0$ in $L_2(B)$. Applying (10.7) to $X - Y$ and to $Y - Y^N$ for $N$ large enough yields $\mathrm{CLT}(X - Y^N) < \varepsilon$, and therefore $X$ satisfies the CLT from this finite dimensional approximation. If $X$ is not symmetric, replace it for example by $\varepsilon X$ where $\varepsilon$ is a Rademacher random variable independent of $X$.

It therefore suffices indeed to establish (10.7). Assume by homogeneity that $\|X\|_{2,\infty} \le 1$. For each $n$,
$$\frac1{\sqrt n}\,\mathbb{E}\Big\|\sum_{i=1}^n X_i\Big\| \le \mathbb{E}\Big\|\sum_{i=1}^n u_i\Big\| + \sqrt n\,\mathbb{E}\big(\|X\|\, I_{\{\|X\| > \sqrt n\}}\big)$$
where $u_i = u_i(n) = n^{-1/2} X_i\, I_{\{\|X_i\| \le \sqrt n\}}$, $i = 1,\dots,n$. Since $\|X\|_{2,\infty} \le 1$, by integration by parts it is easily seen that
$$\sup_n \sqrt n\,\mathbb{E}\big(\|X\|\, I_{\{\|X\| > \sqrt n\}}\big) \le 2\,.$$
Applying the inequality $\mathrm{Ros}(p)$, $p > 2$, to the $u_i$'s, we get that
$$\mathbb{E}\Big\|\sum_{i=1}^n u_i\Big\|^p \le C\big(n\,\mathbb{E}\|u_1\|^p + 2\,\mathbb{E}\|G(X)\|^p\big)$$
since the $u_i$'s are pregaussian and $\mathbb{E}\|\sum_{i=1}^n G(u_i)\|^p \le 2\,\mathbb{E}\|G(X)\|^p$ (Lemma 9.23). But now, since $p > 2$,
$$n\,\mathbb{E}\|u_1\|^p = n^{1-p/2}\,\mathbb{E}\big(\|X\|^p I_{\{\|X\| \le \sqrt n\}}\big) \le n^{1-p/2}\int_0^{\sqrt n} \mathbb{P}\{\|X\| > t\}\,dt^p \le n^{1-p/2}\int_0^{\sqrt n} \frac{dt^p}{t^2} = \frac{p}{p-2}\,.$$
(10.7) thus follows (recall that Gaussian random vectors have all their moments equivalent). Theorem 10.10 is established.

Type 2 spaces of course satisfy $\mathrm{Ros}(2)$, but the important property in Theorem 10.10 is $\mathrm{Ros}(p)$ for $p > 2$. We already noticed that cotype 2 spaces verify $\mathrm{Ros}(p)$ for all $p$, in particular the $L_p$-spaces with $1 \le p \le 2$. When $2 < p < \infty$, $L_p$ satisfies $\mathrm{Ros}(p)$ for the corresponding $p$. This follows from the real inequality together with Fubini's theorem. Indeed, if $(X_i)$ are independent pregaussian random variables in $L_p = L_p(S,\Sigma,\mu)$, where $(S,\Sigma,\mu)$ is $\sigma$-finite,
$$\mathbb{E}\Big\|\sum_i X_i\Big\|^p = \int_S \mathbb{E}\Big|\sum_i X_i(s)\Big|^p d\mu(s) \le C \int_S \Big( \sum_i \mathbb{E}|X_i(s)|^p + \mathbb{E}\Big|\sum_i G(X_i)(s)\Big|^p \Big) d\mu(s) = C\Big( \sum_i \mathbb{E}\|X_i\|^p + \mathbb{E}\Big\|\sum_i G(X_i)\Big\|^p \Big)\,.$$
When $2 < p < \infty$, $L_p$ is of type 2 and enters the setting of Theorem 10.5, but since it satisfies $\mathrm{Ros}(p)$ and $p > 2$, we can also apply the more precise Theorem 10.10. Together with the characterization (9.16) of pregaussian structures, the CLT is therefore completely understood in the $L_p$-spaces, $1 \le p < \infty$. One might wonder from the preceding about some more examples of spaces satisfying $\mathrm{Ros}(p)$ for some $p > 2$, especially among the class of type 2 spaces. It can be shown that Banach lattices which are $r$-convex and $s$-concave for some $r > 2$ and $s < \infty$ (cf. [Li-T2]) belong to this class. However, already $\ell_2(\ell_r)$, $r > 2$, which is of type 2, verifies $\mathrm{Ros}(p)$ for no $p > 2$.
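The Gaussian identity used above to reduce (10.6) on the line to Rosenthal's inequality is just the stability of Gaussian laws; the following is a minimal sketch, with $g$ a standard normal variable.

```latex
% If the G(X_i) are independent centered Gaussian with variances sigma_i^2 = E X_i^2,
% their sum is Gaussian with variance sum_i sigma_i^2, so that, with C_p = E|g|^p,
\mathbb{E}\Big|\sum_i G(X_i)\Big|^p
  = \mathbb{E}|g|^p \Big(\sum_i \mathbb{E}X_i^2\Big)^{p/2}
  = C_p\Big(\sum_i \mathbb{E}X_i^2\Big)^{p/2}.
% Hence, on the line, (10.6) is exactly Rosenthal's inequality
%   E|sum_i X_i|^p <= C ( sum_i E|X_i|^p + ( sum_i E X_i^2 )^{p/2} ).
```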

Actually, this is true of $\ell_2(B)$ as soon as $B$ is a Banach space which is not of cotype 2. To see this, note that if $B$ is not of cotype 2, one can find, for every $\varepsilon > 0$, a pregaussian random variable $Y$ in $B$ such that $\|Y\| = 1$ almost surely and $\mathbb{E}\|G(Y)\|^2 < \varepsilon$. Consider then independent copies $Y_i$ of $Y$ and set $X_i = Y_i e_i$ in $\ell_2(B)$, where $(e_i)$ is the canonical basis. Assume then that $\ell_2(B)$ satisfies $\mathrm{Ros}(p)$ for some $p > 2$ and apply this inequality to the sample $(X_i)_{i\le N}$. Since Gaussian moments are all equivalent, and since $G(X_i) = G(Y_i) e_i$, we should have, for some constant $C$ and all $N$,
$$\mathbb{E}\Big\|\sum_{i=1}^N X_i\Big\|^p \le C\Big( \sum_{i=1}^N \mathbb{E}\|X_i\|^p + \Big(\mathbb{E}\Big\|\sum_{i=1}^N G(X_i)\Big\|^2\Big)^{p/2} \Big)\,.$$
That is,
$$N^{p/2} \le C\big(N + (N\,\mathbb{E}\|G(Y)\|^2)^{p/2}\big) \le C\big(N + (\varepsilon N)^{p/2}\big)\,.$$
Hence, if $\varepsilon$ is small enough and $N$ tends to infinity, this leads to a contradiction. Thus, finding spaces satisfying $\mathrm{Ros}(p)$ for $p > 2$, and so the CLT under the best possible necessary conditions, seems a difficult task. One interesting problem in this context would be to know whether Theorem 10.10 has some converse, that is, if in a Banach space $B$ the conditions $X$ pregaussian and $\lim_{t\to\infty} t^2\,\mathbb{P}\{\|X\| > t\} = 0$ are sufficient for $X$ to satisfy the CLT, does $B$ satisfy $\mathrm{Ros}(p)$ for some $p > 2$? This could be in analogy with theorems on the laws of large numbers and type of Banach spaces (cf. Corollary 9.18).

As a remark, we would like to briefly come back at this stage to the bounded CLT and show that the bounded CLT and the true CLT already differ in $L_p$ for $p > 2$. It is clear from the proof of Theorem 10.10 (cf. (10.7)) that a pregaussian variable $X$ in a $\mathrm{Ros}(p)$-space, $p > 2$, such that $\|X\|_{2,\infty} < \infty$ (i.e. $\sup_{t>0} t^2\,\mathbb{P}\{\|X\| > t\} < \infty$) satisfies the bounded CLT. To prove our claim, it is therefore simply enough to construct a pregaussian random variable $X$ in $\ell_p$, $2 < p < \infty$, such that
$$(10.8)\qquad 0 < \limsup_{t\to\infty} t^2\,\mathbb{P}\{\|X\| > t\} < \infty\,.$$
To this aim, let $N$ be an integer valued random variable such that $\mathbb{P}\{N = i\} = c\, i^{-1-2/p}$, $i \ge 1$ ($c$ the normalizing constant), let $(\varepsilon_i)$ be a Rademacher sequence independent of $N$, and consider $X$ in $\ell_p$ given by
$$X = \sum_{i=1}^\infty \varepsilon_i\, I_{\{N^2 \le i < N^2+N\}}\, e_i$$
where $(e_i)$ is the canonical basis of $\ell_p$. Then $\|X\| = N^{1/p}$ and (10.8) clearly holds. We are left with showing that $X$ is pregaussian, and we use (9.16); that is, it suffices to show that
$$\sum_{i=1}^\infty \big(\mathbb{P}\{N^2 \le i < N^2+N\}\big)^{p/2} < \infty\,,$$
but this is clear since, by definition of $N$, $\mathbb{P}\{N^2 \le i < N^2+N\}$ is of the order of $i^{-1/2-1/p}$ when $i \to \infty$, and $p > 2$.
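The order of magnitude just claimed can be checked directly; this is a sketch, with $c$ the normalizing constant of the law of $N$ as above.

```latex
% For fixed i, the condition N^2 <= i < N^2 + N can hold for at most one value
% n = n(i) of N, and then n(i) ~ i^{1/2} as i -> infinity. Hence
\mathbb{P}\{N^2 \le i < N^2+N\} = \mathbb{P}\{N = n(i)\}
  = c\, n(i)^{-1-2/p} \sim c\, i^{-\frac12-\frac1p}
% (and = 0 when no such n exists), so that
\sum_{i\ge1} \big(\mathbb{P}\{N^2 \le i < N^2+N\}\big)^{p/2}
  \le C \sum_{i\ge1} i^{-\frac p4-\frac12} < \infty
% since p > 2 implies p/4 + 1/2 > 1; this is the pregaussian condition (9.16).
```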

In conclusion to this section, we present some remarks on the relation between the CLT and the LIL (for simplicity, we understand here by LIL only the compact law of the iterated logarithm) in Banach spaces. On the line and in finite dimensional spaces, CLT and LIL are of course equivalent, that is, a random variable $X$ satisfies the CLT if and only if it satisfies the LIL, since they are both characterized by the moment conditions $\mathbb{E}X = 0$ and $\mathbb{E}\|X\|^2 < \infty$. However, the conjunction of Corollary 10.9 and Corollary 8.8 indicates that this equivalence already fails in infinite dimensional Hilbert space, where one can find a random variable satisfying the LIL but failing the CLT. This observation, together with Dvoretzky's theorem (Theorem 9.1), actually shows that the implication LIL $\Rightarrow$ CLT only holds in finite dimensional spaces. Indeed, if $B$ is an infinite dimensional Banach space in which every random variable satisfying the LIL also satisfies the CLT, then, by a closed graph argument, for some constant $C$ and every $X$ in $B$, $\mathrm{CLT}(X) \le C\,\Lambda(X)$, where we recall that $\Lambda(X) = \limsup_{n\to\infty} \|S_n\|/a_n$ (non random). By Theorem 9.1, and by approximation, the same inequality would then hold for all step random variables with values in a Hilbert space. But this is impossible, as we have just seen. Hence we can state:

Theorem 10.11. Let $B$ be a separable Banach space in which every random variable satisfying the LIL also satisfies the CLT. Then $B$ is finite dimensional.

Concerning the implication CLT $\Rightarrow$ LIL, a general statement is available. Indeed, if $X$ satisfies the CLT, then trivially $S_n/a_n \to 0$ in probability and, since $X$ is pregaussian, the unit ball of the reproducing kernel Hilbert space associated to $X$ is compact. The characterization of the LIL in Banach spaces (Theorem 8.6) then yields the following theorem.

Theorem 10.12. Let $X$ be a random variable with values in a separable Banach space $B$ satisfying the CLT. Then $X$ satisfies the LIL if and only if $\mathbb{E}(\|X\|^2/LL\|X\|) < \infty$.

The moment condition $\mathbb{E}(\|X\|^2/LL\|X\|) < \infty$ is of course necessary in this statement since it is not comparable to the tail behavior $\lim_{t\to\infty} t^2\,\mathbb{P}\{\|X\| > t\} = 0$ necessary for the CLT. Despite this generally satisfactory result, the question of the implication CLT $\Rightarrow$ LIL is not solved for all that. Theorem 10.12 indicates that the spaces in which random variables satisfying the CLT also satisfy the LIL are exactly those in which the CLT implies the integrability property $\mathbb{E}(\|X\|^2/LL\|X\|) < \infty$. This is of course the case for cotype 2 spaces, but the characterization of the CLT in $L_p$-spaces shows that $L_p$ with $p > 2$ does not satisfy this property. An argument similar to the one used for Theorem 10.11, but this time with Theorem 9.16 instead of Dvoretzky's theorem, shows then that the spaces satisfying CLT $\Rightarrow$ LIL are necessarily of cotype $2+\varepsilon$ for every $\varepsilon > 0$. But a final characterization has still to be obtained.

10.3. A small ball criterion for the central limit theorem

Namely. We have noti ed. k S k n lim inf IP p < " > 0 : n!1 n (10:9) It turns out that. for every " > 0 . The result therefore presents some interest from a theoreti al point of view. The idea of its proof an be used further for an almost sure randomized version of the CLT. involves in its elaboration several interesting arguments and ideas developed throughout this book. prior to Theorem 3.3. and 2 if the ne essary tail ondition tlim !1 t IPfkX k > tg = 0 holds. Surprisingly. If (10. then X satis. Re all that we deal in all this hapter with a separable Bana h spa e B . It therefore follows that if X is a Borel random variable satisfying the CLT in B . ea h ball entered at the origin has a positive mass for the distribution of G . then it is Radon. onversely. we develop a riterion for the CLT whi h.311 In this last paragraph. if a Gaussian ylindri al measure harges ea h ball entered at the origin. while ertainly somewhat diÆ ult to verify in pra ti e.9) holds for every " > 0 . this onverse extends to the CLT. that for a Gaussian Radon random variable G .

It an thus be stated as follows. Let X be a random variable with values in a separable Bana h spa e B . This is theoreti al small ball riterion for the CLT. Theorem 10. Then X satis.13.es the CLT.

es the CLT if and only if the following two properties are satis.

(ii) does not ne essarily imply (i) ( f. for n large enough. already implies the bounded CLT.4. (ii).4. p (ii) for ea h " > 0 . n!1 Before turning to the proof of this result. Let Y1 . Repla ing X by X X 0 where X 0 is an independent opy of p X . i=1 (") 2 ( ) ! 1=2 m X IP k Yi k < "pm 32 1 + IPfkYi k > "pmg i=1 i=1 m X . i. and we would like to detail this point. However. Sin e m P p p Yi = m has the same distribution as Smn = mn . and for one " > 0 only. the tail ondition (i) annot be suppressed in general.ed: 2 (i) tlim !1 t IPfkX k > tg = 0 . (i) and (ii) are ne essary. one of whi h will be of help in the proof. Indeed. it is enough to deal with the symmetri al ase. that is the sequen e p > (Sn = n) is bounded in probability (and therefore (ii) implies sup t2 IPfkX k > tg < 1 ). (") = lim inf IPfkSn= nk < "g > 0 . Ym be independent opies of Sn = n .e. we would like to mention a few fa ts. : : : . and best possible. by Proposition 6. As we have seen. [L-T3℄). This laim is t>0 based on the inequality of Proposition 6.

As announced, therefore, for all $m$ and all $n$ large enough,
$$m\,\mathbb{P}\Big\{\Big\|\frac{S_n}{\sqrt n}\Big\| > \varepsilon\sqrt m\Big\} \le \frac{9}{\gamma(\varepsilon)^2}\,,$$
so that the sequence $(S_n/\sqrt n)$ is bounded in probability. All that was noticed in Section 10.1 when we discussed the bounded CLT therefore applies here. In particular, $\mathrm{CLT}(X) < \infty$ and, while $X$ is not necessarily pregaussian, at least there is a bounded Gaussian process with the same covariance structure as $X$, and the family $\{f(X)\,;\ f \in B',\ \|f\| \le 1\}$ is totally bounded in $L_2$.

Let us note that the preceding argument based on Proposition 6.4 shows similarly that a random variable $X$ satisfies the CLT if there is a compact set $K$ in $B$ such that
$$\liminf_{n\to\infty}\,\mathbb{P}\Big\{\frac{S_n}{\sqrt n} \in K\Big\} > 0\,.$$
This improved version of the usual tightness criterion may be useful to understand the intermediate level of Theorem 10.13.

Proof of Theorem 10.13. Replacing as usual $X$ by $X - X'$, we may and do assume that $X$ is symmetric. The sequence $(X_i)$ of independent copies of $X$ then has the same distribution as $(\varepsilon_i X_i)$, where $(\varepsilon_i)$ is a Rademacher sequence independent of $(X_i)$. Recall that we denote by $\mathbb{P}_\varepsilon$, $\mathbb{E}_\varepsilon$ (resp. $\mathbb{P}_X$, $\mathbb{E}_X$) conditional probability and expectation with respect to the sequence $(X_i)$ (resp. $(\varepsilon_i)$). We show that, given $\delta > 0$, one can find a finite dimensional subspace $F$ of $B$ with quotient map $T = T_F : B \to B/F$ such that
$$(10.10)\qquad \limsup_{n\to\infty}\,\mathbb{P}_X\Big\{\mathbb{E}_\varepsilon\Big\|\sum_{i=1}^n \varepsilon_i\,\frac{T(X_i)}{\sqrt n}\Big\| > C\delta\Big\} \le \delta$$
for some numerical constant $C$. The proof is based on the isoperimetric inequality for product measures of Theorem 1.4, which was one of the main tools in the study of strong limit theorems and which proves to be also of some interest in weak statements like, here, the CLT. We use here the full statement of Theorem 1.4 and not only (1.18). The main step in this proof will be to show that if
$$A = \Big\{x = (x_i)_{i\le n} \in B^n\,;\ \mathbb{E}_\varepsilon\Big\|\sum_{i=1}^n \varepsilon_i\,\frac{T(x_i)}{\sqrt n}\Big\| \le 2\delta\Big\}\,,$$
then, for each $\delta > 0$, there exists a finite dimensional subspace $F$ such that, if $T = T_F$,
$$(10.11)\qquad \mathbb{P}\big\{(X_i)_{i\le n} \in A\big\} \ge \alpha(\delta) > 0$$

for all $n$ large enough, where $\alpha(\delta) > 0$ depends only on $\delta > 0$. Let us show how to conclude when (10.11) is satisfied. Recall that if $(X_i)_{i\le n} \in H(A,q,k)$, there exist $j \le k$ and $x^1, \ldots, x^q$ in $A$ such that $\{1,\ldots,n\} = \{i_1,\ldots,i_j\} \cup I$, where $I = \bigcup_{\ell=1}^q \{i \le n\,;\ X_i = x^\ell_i\}$. Theorem 1.4 tells us that, for some numerical constant $K$,
$$\mathbb{P}\big\{(X_i)_{i\le n} \notin H(A,q,k)\big\} \le \Big(\frac{K}{q}\Big(\frac{\log(1/\alpha(\delta))}{k} + 1\Big)\Big)^{k}\,.$$
Let us choose $q$ to be $2K$, which we might choose to be an integer for convenience, and take then $k = k(\delta)$ large enough, depending on $\delta > 0$ only, in order for the preceding probability to be less than $\delta/2$. If $(X_i)_{i\le n} \in H(A,q,k)$, by monotonicity of Rademacher averages,
$$\mathbb{E}_\varepsilon\Big\|\sum_{i=1}^n \varepsilon_i\,\frac{T(X_i)}{\sqrt n}\Big\|
\le k\,\max_{i\le n}\frac{\|X_i\|}{\sqrt n} + \mathbb{E}_\varepsilon\Big\|\sum_{i\in I} \varepsilon_i\,\frac{T(X_i)}{\sqrt n}\Big\|
\le k\,\max_{i\le n}\frac{\|X_i\|}{\sqrt n} + \sum_{\ell=1}^q \mathbb{E}_\varepsilon\Big\|\sum_{i=1}^n \varepsilon_i\,\frac{T(x^\ell_i)}{\sqrt n}\Big\|
\le k\,\max_{i\le n}\frac{\|X_i\|}{\sqrt n} + 2q\delta\,.$$
Hence
$$\mathbb{P}_X\Big\{\mathbb{E}_\varepsilon\Big\|\sum_{i=1}^n \varepsilon_i\,\frac{T(X_i)}{\sqrt n}\Big\| > (2q+1)\delta\Big\}
\le \mathbb{P}\big\{(X_i)_{i\le n} \notin H(A,q,k)\big\} + \mathbb{P}\Big\{k\,\max_{i\le n}\frac{\|X_i\|}{\sqrt n} > \delta\Big\}\,.$$
Now, since, by (i),
$$\lim_{n\to\infty}\,\mathbb{P}\Big\{\max_{i\le n}\|X_i\| > \delta\sqrt n/k(\delta)\Big\} = 0\,,$$
(10.10) is satisfied and the CLT for $X$ will hold.

We have therefore to establish (10.11). Since $\|T\| \le 1$, we write, for every $n$,
$$\mathbb{P}_X\Big\{\mathbb{E}_\varepsilon\Big\|\sum_{i=1}^n \varepsilon_i\,\frac{T(X_i)}{\sqrt n}\Big\| \le 2\delta\Big\}
\ge \mathbb{P}\Big\{\Big\|\sum_{i=1}^n \varepsilon_i\,\frac{X_i}{\sqrt n}\Big\| < \delta\Big\}
- \mathbb{P}\Big\{\mathbb{E}_\varepsilon\Big\|\sum_{i=1}^n \varepsilon_i\,\frac{T(X_i)}{\sqrt n}\Big\| - \Big\|\sum_{i=1}^n \varepsilon_i\,\frac{T(X_i)}{\sqrt n}\Big\| > \delta\Big\}\,.$$

Setting, for every $n$,
$$u_i = u_i(n) = \frac{X_i}{\sqrt n}\,I_{\{\|X_i\| \le \eta(\delta)\sqrt n\}}\,,\qquad i = 1, \ldots, n\,,$$
where $\eta(\delta) > 0$ is to be specified, it is actually enough, by (i) and the preceding inequality together with (ii), to check that
$$(10.12)\qquad \limsup_{n\to\infty}\,\mathbb{P}\Big\{\mathbb{E}_\varepsilon\Big\|\sum_{i=1}^n \varepsilon_i T(u_i)\Big\| - \Big\|\sum_{i=1}^n \varepsilon_i T(u_i)\Big\| > \delta\Big\} < \gamma(\delta)$$
for some appropriate choice of $T$. To this aim, we use the concentration properties of Rademacher averages (Theorem 4.7). Conditionally on $(X_i)$, denote by $M$ a median of $\|\sum_{i=1}^n \varepsilon_i T(u_i)\|$ and let
$$\sigma = \sup_{\|f\|\le 1}\Big(\sum_{i=1}^n f^2(T(u_i))\Big)^{1/2}$$
where the supremum runs here over the unit ball of the dual space of $B/F$, for an $F$ to be chosen. By Theorem 4.7, for every $t > 0$,
$$\mathbb{P}_\varepsilon\Big\{\Big|\Big\|\sum_{i=1}^n \varepsilon_i T(u_i)\Big\| - M\Big| > t\Big\} \le 4\exp\Big(-\frac{t^2}{32\sigma^2}\Big) \le \frac{128\,\sigma^2}{t^2}\,.$$
We have however to switch to expectations instead of medians, but this is easy: integrating by parts, $|\mathbb{E}_\varepsilon\|\sum_{i=1}^n \varepsilon_i T(u_i)\| - M|$ is bounded by a multiple of $\sigma$, and one gets that, if $\sigma \le \delta/24$,
$$\mathbb{P}_\varepsilon\Big\{\mathbb{E}_\varepsilon\Big\|\sum_{i=1}^n \varepsilon_i T(u_i)\Big\| - \Big\|\sum_{i=1}^n \varepsilon_i T(u_i)\Big\| > \delta\Big\} \le \frac{128\,\sigma^2}{\delta^2}\,.$$
Thus, integrating with respect to $\mathbb{P}_X$ and using Chebyshev's inequality on the set $\{\sigma > \delta/24\}$,
$$(10.13)\qquad \mathbb{P}\Big\{\mathbb{E}_\varepsilon\Big\|\sum_{i=1}^n \varepsilon_i T(u_i)\Big\| - \Big\|\sum_{i=1}^n \varepsilon_i T(u_i)\Big\| > \delta\Big\}
\le \mathbb{P}\Big\{\sigma > \frac{\delta}{24}\Big\} + \frac{128}{\delta^2}\,\mathbb{E}\sigma^2
\le \frac{10^3}{\delta^2}\,\mathbb{E}\sigma^2\,.$$
To prove (10.12), we have thus just simply to show that $\mathbb{E}\sigma^2$ can be made arbitrarily small, independently of $n$, for a well chosen, large enough, finite dimensional subspace $F$ of $B$. Recall from the discussion prior to this proof that, under (ii), $\mathrm{CLT}(X) < \infty$ and that this implies that $\{f(X)\,;\ f \in B',\ \|f\| \le 1\}$ is relatively compact in $L_2$. We use Lemma 6.6 (6.5) (and the contraction principle) to see that, for all $n$,
$$\mathbb{E}\sigma^2 = \mathbb{E}\sup_{\|f\|\le 1}\sum_{i=1}^n f^2(T(u_i)) \le \sup_{\|f\|\le 1}\mathbb{E}f^2(T(X)) + 8\,\eta(\delta)\,\mathrm{CLT}(X)\,.$$
Choose then $\eta(\delta) > 0$ to be less than $\delta^2\gamma(\delta)/16\cdot 10^3\,\mathrm{CLT}(X)$, and choose also $T : B \to B/F$ associated to some large enough finite dimensional subspace $F$ of $B$ in order that
$$\sup_{\|f\|\le 1}\mathbb{E}f^2(T(X)) \le \delta^2\gamma(\delta)/2\cdot 10^3\,.$$
According to (10.13), we see then that (10.12) is satisfied.

Theorem 10.13 is therefore established in this way.

We conclude this chapter with an almost sure randomized version of the CLT, which is similar in nature to the preceding one. The interest in such a result lies in its proof itself, and in possible statistical applications. It might be worthwhile to recall Proposition 10.4 before the statement, which describes, for a Banach space valued random variable $X$ and a real random variable $\xi$ independent of $X$, the randomization properties of the product $\xi X$ with respect to the CLT. As there also, the sequences $(X_i)$ and $(\xi_i)$ are understood to be independent (constructed on different probability spaces), and $(X_i)$ (resp. $(\xi_i)$) are independent copies of $X$ (resp. $\xi$).

Theorem 10.14. Let $X$ be a mean zero random variable with values in a separable Banach space $B$ and let $\xi$ be a real random variable in $L_{2,1}$, independent of $X$, such that $\mathbb{E}\xi = 0$ and $\mathbb{E}\xi^2 = 1$. The following are equivalent:

(i) $\mathbb{E}\|X\|^2 < \infty$ and $X$ satisfies the CLT;

(ii) for almost every $\omega$ on the probability space supporting the $X_i$'s, the sequence $\sum_{i=1}^n \xi_i X_i(\omega)/\sqrt n$ converges in distribution.

In either case, the limit of $\sum_{i=1}^n \xi_i X_i(\omega)/\sqrt n$ does not depend on $\omega$ and is distributed like $G(X)$, the Gaussian distribution with the same covariance structure as $X$.

Proof. It is plain that under (ii) the product $\xi X$ satisfies the CLT. Hence, since $X$ has mean zero and $\xi \not\equiv 0$, $X$ also satisfies the CLT by Proposition 10.4. To show that $\mathbb{E}\|X\|^2 < \infty$, replacing $\xi$ by $\xi - \xi'$ where $\xi'$ is an independent copy of $\xi$, we may assume $\xi$ to be symmetric. For almost every $\omega$, the sequence $\sum_{i=1}^n \xi_i X_i(\omega)/\sqrt n$ is bounded in probability; hence, by Lévy's inequality (2.7), the same holds for the sequence $(|\xi_n|\,\|X_n(\omega)\|/\sqrt n)$, and it follows that, for almost all $\omega$,
$$\sup_n \frac{\|X_n(\omega)\|}{\sqrt n} < \infty$$
since $\xi$ is non-zero. Therefore, by independence and the Borel–Cantelli lemma, $\mathbb{E}\|X\|^2 < \infty$. This proves the implication (ii) $\Rightarrow$ (i).

The main tool in the proof of the converse implication (i) $\Rightarrow$ (ii) is the following lemma, which may be considered as some vector valued extension of a strong law of large numbers for squares.

Lemma 10.15. In the setting of Theorem 10.14, set
$$M = \limsup_{n\to\infty}\frac{1}{\sqrt n}\,\mathbb{E}\Big\|\sum_{i=1}^n \xi_i X_i\Big\|\,,$$
assumed to be finite. Then, almost surely,
$$\limsup_{n\to\infty}\frac{1}{\sqrt n}\,\mathbb{E}_\xi\Big\|\sum_{i=1}^n \xi_i X_i\Big\| \le K\,\limsup_{n\to\infty}\frac{1}{\sqrt n}\,\mathbb{E}\Big\|\sum_{i=1}^n \xi_i X_i\Big\|$$
for some numerical constant $K$, where, as usual, $\mathbb{E}_\xi$ denotes partial integration with respect to the sequence $(\xi_i)$.
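The Borel–Cantelli step used above in the implication (ii) $\Rightarrow$ (i) can be spelled out as follows (a sketch; $c$ is an auxiliary level, not a constant from the text):

```latex
% If sup_n ||X_n||/sqrt(n) < infinity almost surely, there is a finite c > 0 with
% P{ ||X_n|| > c sqrt(n) infinitely often } = 0. Since the X_n are independent,
% the second Borel--Cantelli lemma forces the series of probabilities to converge:
\sum_{n\ge 1} \mathbb{P}\{\|X\| > c\sqrt n\} < \infty\,.
% Comparing this series with the integral representation of the second moment,
\mathbb{E}\|X\|^2 = \int_0^\infty \mathbb{P}\{\|X\|^2 > s\}\,ds
  \;\le\; c^2\Big(1 + \sum_{n\ge 1}\mathbb{P}\{\|X\| > c\sqrt n\}\Big) < \infty\,.
```

The last inequality follows from the substitution $s = c^2 t$ and the monotonicity of $t \mapsto \mathbb{P}\{\|X\| > c\sqrt t\}$.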

Proof of Lemma 10.15. Since $\mathbb{E}\xi = 0$, by Lemma 6.3 and monotonicity of the averages, replacing $\xi$ by $\xi - \xi'$, we may and do assume $\xi$ to be symmetric. Note that $\mathbb{E}|\xi| \le 1$. By the Borel–Cantelli lemma and by definition of $M$, it suffices to show that, for some $K$ and all $\varepsilon > 0$,
$$\sum_n \mathbb{P}_X\Big\{\mathbb{E}_\xi\Big\|\sum_{i=1}^{2^n} \xi_i X_i\Big\| > K(M+\varepsilon)2^{n/2}\Big\} < \infty\,,$$
or further that
$$\sum_n \mathbb{P}_X\Big\{\mathbb{E}_\xi\Big\|\sum_{i=1}^{2^n} \xi_i X_i\Big\| > K\,\mathbb{E}\Big\|\sum_{i=1}^{2^n} \xi_i X_i\Big\| + \varepsilon 2^{n/2}\Big\} < \infty\,.$$
To show this, we make use of the isoperimetric approach based on Theorem 1.4 developed in Section 6.3. Since $\mathbb{E}\|X\|^2 < \infty$, by Lemmas 7.6 and 7.8 there exists a sequence $(k_n)$ of integers such that $\sum_n 2^{-k_n} < \infty$ and
$$\sum_n \mathbb{P}\Big\{\sum_{i=1}^{k_n} \|X_i\|^* > \varepsilon 2^{n/2}\Big\} < \infty$$
where $(\|X_i\|^*)$ is the non-increasing rearrangement of $(\|X_i\|)_{i\le 2^n}$. We now make use of Remark 6.18, which applies similarly to averages in the symmetric sequence $(\xi_i)$.

Hence, by (6.23), adapted to $(\xi_i)$ with $q = 2K_0$ and $k = k_n$ ($k_n \ge q$ for $n$ large enough),
$$\mathbb{P}_X\Big\{\mathbb{E}_\xi\Big\|\sum_{i=1}^{2^n} \xi_i X_i\Big\| > 2q\,\mathbb{E}\Big\|\sum_{i=1}^{2^n} \xi_i X_i\Big\| + \varepsilon 2^{n/2}\Big\}
\le 2^{-k_n} + \mathbb{P}\Big\{\sum_{i=1}^{k_n} \|X_i\|^* > \varepsilon 2^{n/2}\Big\}\,.$$
Letting $K = 2q = 4K_0$, the proof of Lemma 10.15 is complete.

We now conclude the proof of the theorem. Recall from Proposition 10.4 that $\mathrm{CLT}(\xi X) \le 2\|\xi\|_{2,1}\,\mathrm{CLT}(X)$. Since $X$ satisfies the CLT, for every $k \ge 1$ there exists a finite dimensional subspace $F_k$ of $B$ such that, if $T_k = T_{F_k} : B \to B/F_k$,
$$\mathrm{CLT}(T_k(X)) \le \frac{1}{2Kk\,\|\xi\|_{2,1}}\,.$$
If we now apply Lemma 10.15 to $T_k(X)$ for each $k$, there exists $\Omega_k$ with $\mathbb{P}(\Omega_k) = 1$ such that for all $\omega$ in $\Omega_k$
$$\limsup_{n\to\infty}\frac{1}{\sqrt n}\,\mathbb{E}_\xi\Big\|\sum_{i=1}^n \xi_i T_k(X_i(\omega))\Big\| \le \frac{1}{k}\,.$$
Let also $\Omega_0$ be the set of full probability obtained when Lemma 10.15 is applied to $X$ itself. Let now $\Omega' = \Omega_0 \cap \bigcap_k \Omega_k$, so that $\mathbb{P}(\Omega') = 1$, and let $\omega \in \Omega'$. For each $\varepsilon > 0$, there exists a finite dimensional subspace $F$ of $B$ such that, if $T = T_F$,
$$\limsup_{n\to\infty}\frac{1}{\sqrt n}\,\mathbb{E}_\xi\Big\|\sum_{i=1}^n \xi_i T(X_i(\omega))\Big\| < \varepsilon^2\,.$$
Hence, if $n \ge n_0(\varepsilon)$,
$$\mathbb{P}_\xi\Big\{\Big\|T\Big(\sum_{i=1}^n \xi_i X_i(\omega)/\sqrt n\Big)\Big\| > \varepsilon\Big\} \le \varepsilon\,.$$
It follows that the sequence $(\sum_{i=1}^n \xi_i X_i(\omega)/\sqrt n)$ is tight (it is bounded in probability since $\omega \in \Omega_0$). We conclude the proof by identifying the limit. Using basically that $\sum_{i=1}^n f^2(X_i)/n \to \mathbb{E}f^2(X)$ almost surely, it is not difficult to see, by the Lindeberg CLT (cf. e.g. [Ar-G2]), that for every $f$ in $B'$ there exists a set $\Omega_f$ of probability one such that, for all $\omega \in \Omega_f$, $(\sum_{i=1}^n \xi_i f(X_i(\omega))/\sqrt n)$ converges in distribution to a normal variable with variance $\mathbb{E}f^2(X)$. The proof is then easily completed by considering a weakly dense countable subset in $B'$. Theorem 10.14 is established.

Notes and references

This chapter only concentrates on the classical central limit theorem for sums of identically distributed random variables under the normalization $\sqrt n$. We would like to refer to the book of A. Araujo and E. Giné [Ar-G2] for a more complete account on the general CLT for real and Banach space valued random variables, as well as for precise and detailed historical and recent references. See also the nice paper [A-A-G]. We note further that some more results on the CLT, using empirical processes methods, a topic not covered here, will be presented in Chapter 14.

Starting with Donsker's invariance principle [Do], the study of the CLT for Banach space valued random variables was initiated by E. Mourier [Mo] and R. Fortet and E. Mourier [F-M1], [F-M2]; see also L. Le Cam [LC]. The extension to Hilbert spaces of the classical CLT was obtained by S. R. S. Varadhan [Var]. A further extension in some smooth spaces, anticipating type 2, is described in [F-M1], [F-M2]. A decisive step was accomplished by J. Hoffmann-Jørgensen and G. Pisier [HJ-P] with Theorem 10.5, and by N. Jain [Ja1], [Ja2]; see also [HJ3], [HJ4]. The proof we present of the necessity of $\mathbb{E}X^2 < \infty$ on the line, and similarly of $\mathbb{E}\|X\|^2 < \infty$ in cotype 2 spaces, is due to N. Jain [Ja2], who also showed necessity of $\|X\|_{2,1} < \infty$ in any Banach space [Ja1]. Our proof is Pisier's, and the best possibility of the condition $\|X\|_{2,1} < \infty$ was then noticed independently in [A-A-G] and [P-Z]. Examples of bounded random variables in $C[0,1]$ failing the CLT were provided in [D-S] by R. Dudley and V. Strassen.

Proposition 10.4 has been known for some time independently by X. Fernique, G. Pisier and J. Zinn; this proposition is also due independently to D. Aldous [Ald]. The randomization property of Proposition 10.3 is due to G. Pisier and was put forward in the paper [G-Z2]. Its interest in the study of the CLT in $L_p$-spaces was put forward in [G-M-Z] and further by G. Pisier and J. Zinn [P-Z]. The characterization of the CLT in $L_p$ (and more general Banach lattices) goes back to [P-Z] (cf. also [G-Z1], [Kue2]); some further CLTs in $\ell_p(\ell_q)$ are investigated in this last paper [G-Z1]. The example on the CLT and the bounded CLT in $\ell_p$, $1 \le p < 2$, and in $L_p$, $2 < p < \infty$, is taken from [P-Z]; on the line, the result was noticed in [J-O]. That the best possible necessary conditions for the CLT are not sufficient in $\ell_2(B)$ when $B$ is not of cotype 2 is due to J. Zinn [Zi2]. Rosenthal's inequality appeared in [Ros]; an attempt of a systematic study of Rosenthal's inequality for vector valued random variables is undertaken in [Led4]. The improved Lemma 10.1 was shown in [L-T1]. Corollary 10.2 was observed in [Pi3], from which the comments on the implication CLT $\Rightarrow$ LIL are also taken. We also mention the recent book by V. Paulauskas and A. Račkauskas [P-R2] on rates of convergence in the vector valued CLT.

G. Pisier [Pi3] established Theorem 10.12 assuming a strong moment and with a proof containing the essential step of the general case, in which the argument for Theorem 10.10 is explained. The final result is due independently to V. Goodman, J. Kuelbs and J. Zinn [G-K-Z] and B. Heinkel [He2]. Theorem 10.11 is due to G. Pisier and J. Zinn [P-Z] thanks to several early results on the CLT and the LIL in $L_p$-spaces, $2 \le p < \infty$. The small ball criterion (Theorem 10.13) was obtained in [L-T3]; the proof of Theorem 10.13 with the isoperimetric approach is new. We learned how to use Kanter's inequality (Proposition 6.4) in this study from X. Fernique. Theorem 10.14, and in particular Lemma 10.15, is due to J. Zinn and the authors and also appeared in [L-T3] (with a different proof). Its proof has been used recently in bootstrapping of empirical measures by E. Giné and J. Zinn [G-Z4].

Chapter 11. Regularity of random processes

11.1 Regularity of random processes under metric entropy conditions
11.2 Regularity of random processes under majorizing measure conditions
11.3 Examples of applications
Notes and references

Chapter 11. Regularity of random processes

In Chapter 9 we described how certain conditions on Banach spaces can ensure the existence and tightness of some probability measures. For example, if $(x_i)$ is a sequence in a type 2 Banach space $B$ such that $\sum_i \|x_i\|^2 < \infty$, then the series $\sum_i g_i x_i$ is almost surely convergent and defines a Gaussian Radon random variable with values in $B$. These conditions were further used in Chapters 9 and 10 to establish tightness properties of sums of independent random variables, especially in the context of central limit theorems. In this chapter, another approach to existence and tightness of certain measures is taken, in the framework of random functions and processes. Given a random process $X = (X_t)_{t\in T}$ indexed by some set $T$, we investigate sufficient conditions for almost sure boundedness or continuity of the sample paths of $X$ in terms of the "geometry" (in the metrical sense) of $T$. By geometry, we mean some metric entropy or majorizing measure condition which estimates the size of $T$ as a function of some parameters related to $X$.

The setting of this study has its roots in a celebrated theorem of Kolmogorov which gives sufficient conditions for the continuity of processes $X$ indexed by a compact subset of $\mathbb{R}$ in terms of a Lipschitz condition on the increments $X_s - X_t$ of the processes. Under this type of incremental conditions on the processes, this result was extended to processes indexed by regular subsets $T$ of $\mathbb{R}^N$ and then further to abstract index sets $T$. In this chapter, we present several results in this general abstract setting. The first section deals with the metric entropy condition; the results, which naturally extend the more classical ones, are rather easy to prove and to use, but can nevertheless be shown to be sharp in many respects. The second paragraph investigates majorizing measure conditions, which are more precise than entropy conditions as they take more into account the local geometry of the index set. That majorizing measures are a key notion will be shown in the next chapter on Gaussian processes. In Section 11.3, we present important examples of applications to Gaussian, Rademacher and chaos processes.
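The type 2 example recalled above can be quantified in two lines (a sketch; $C$ denotes the type 2 constant of $B$ for Gaussian averages, an assumption of this illustration):

```latex
% In a type 2 space B, the Gaussian averages of the tails of the series satisfy
\mathbb{E}\Big\|\sum_{i\ge n} g_i x_i\Big\|^2 \;\le\; C^2 \sum_{i\ge n} \|x_i\|^2 \;\xrightarrow[n\to\infty]{}\; 0\,,
% so the partial sums form a Cauchy sequence in L_2(B); combined with the
% Ito--Nisio theorem, the series converges almost surely and its law is a
% Gaussian Radon measure on B.
```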

These sufficient entropy or majorizing measure conditions for sample boundedness or continuity are utilized in the proofs in a rather similar manner: the main idea is indeed based on the rather classical covering technique and chaining argument already contained in Kolmogorov's theorem.

Common to this chapter is the datum of a random process $X = (X_t)_{t\in T}$, that is a collection $(X_t)$ of random variables indexed by some parameter set $T$, which we assume to be a metric or pseudo-metric space. By pseudo-metric, recall, we mean that $T$ is equipped with a distance $d$ which does not necessarily separate points ($d(s,t) = 0$ does not always imply $s = t$). Our main objective is thus to find sufficient conditions in order for $X$ to be almost surely bounded or continuous,

or to possess a version with these properties. We usually work with processes $X = (X_t)_{t\in T}$ which are in $L_p$, $1 \le p < \infty$, or, more generally, in some Orlicz space $L_\psi$, i.e. $\|X_t\|_\psi < \infty$ for all $t$. Concerning almost sure boundedness, and according to what was described in Chapter 2, we therefore simply understand suprema like $\sup_{t\in T} X_t$, $\sup_{t\in T}|X_t|$, $\sup_{s,t\in T}|X_s - X_t|$, etc., as lattice suprema in $L_\psi$; for example,
$$\mathbb{E}\sup_{s,t\in T}|X_s - X_t| = \sup\Big\{\mathbb{E}\sup_{s,t\in F}|X_s - X_t|\,;\ F\ \text{finite in}\ T\Big\}\,.$$
We avoid in this way the usual measurability questions and moreover reduce the estimates we will establish to the case of a finite parameter set $T$.

We could of course also use separable versions, which we do anyway in the study of sample continuity.

One more word before turning to the object of this chapter. In the theorems below, we usually bound the quantity $\mathbb{E}\sup_{s,t\in T}|X_s - X_t|$ (for $T$ finite for simplicity). Of course,
$$\mathbb{E}\sup_{s,t\in T}|X_s - X_t| = \mathbb{E}\sup_{s,t\in T}(X_s - X_t)$$
which is also equal to $2\,\mathbb{E}\sup_{t\in T} X_t$ if the process is symmetric (i.e., the distributions in $\mathbb{R}^T$ of $X$ and $-X$ are the same; this is the case, for example, of Gaussian processes). The example of $T$ being reduced to one point shows that the estimates we establish do not hold in general for $\mathbb{E}\sup_{t\in T}|X_t|$ rather than $\mathbb{E}\sup_{t\in T} X_t$ or $\mathbb{E}\sup_{s,t\in T}|X_s - X_t|$. We also have that, for every $t_0$ in $T$,
$$\mathbb{E}\sup_{t\in T} X_t \le \mathbb{E}\sup_{t\in T}|X_t| \le \mathbb{E}|X_{t_0}| + \mathbb{E}\sup_{s,t\in T}|X_s - X_t|$$
and, when $X$ is symmetric,
$$\mathbb{E}\sup_{t\in T} X_t \le \mathbb{E}\sup_{t\in T}|X_t| \le \mathbb{E}|X_{t_0}| + 2\,\mathbb{E}\sup_{t\in T} X_t\,.$$
These inequalities are used freely below. The supremum notations will also often be shortened into $\sup_t$ or $\sup_{s,t}$.

Recall that a Young function $\psi$ is a convex increasing function on $\mathbb{R}_+$ with $\lim_{t\to\infty}\psi(t) = \infty$ and $\psi(0) = 0$. The Orlicz space $L_\psi = L_\psi(\Omega, \mathcal{A}, \mathbb{P})$ associated to $\psi$ is defined as the space of all real random variables $Z$ on $(\Omega, \mathcal{A}, \mathbb{P})$ such that $\mathbb{E}\psi(|Z|/c) < \infty$ for some $c > 0$. Recall furthermore that it is a Banach space for the norm
$$\|Z\|_\psi = \inf\{c > 0\,;\ \mathbb{E}\psi(|Z|/c) \le 1\}\,.$$
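As an illustration of the Orlicz norm (not from the text: a standard Gaussian computation with the exponential Young function $\psi_2$ used later in this chapter), take $\psi_2(x) = e^{x^2} - 1$ and $g$ standard normal:

```latex
% For c^2 > 2, a direct Gaussian integral gives
\mathbb{E}\,\psi_2(|g|/c) = \mathbb{E}\,e^{g^2/c^2} - 1 = (1 - 2/c^2)^{-1/2} - 1\,,
% which is <= 1 exactly when (1 - 2/c^2)^{-1/2} <= 2, i.e. when c^2 >= 8/3. Hence
\|g\|_{\psi_2} = \inf\{c > 0\,;\ \mathbb{E}\psi_2(|g|/c) \le 1\} = \sqrt{8/3}\,.
% Membership in L_{psi_2} amounts to subgaussian (Gaussian-type) tails.
```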

The general question we study in the first two sections of this chapter is the following: given a Young function $\psi$ and a random process $X = (X_t)_{t\in T}$ indexed by $(T,d)$ and in $L_\psi$ (i.e. $\|X_t\|_\psi < \infty$ for all $t$) satisfying the Lipschitz condition in $L_\psi$
$$(11.1)\qquad \|X_s - X_t\|_\psi \le d(s,t)\quad\text{for all}\ s,t \in T\,,$$
find then estimates of $\sup_t X_t$ and sufficient conditions for sample boundedness or continuity of $X$ in terms of "the geometry of $(T,d,\psi)$". By this we mean the size of $T$ measured in terms of $d$ and $\psi$. Note that we could take as pseudo-metric $d$ the one given by the process itself, $d(s,t) = \|X_s - X_t\|_\psi$, so that $T$ may be measured in terms of $X$. Our main geometric measures of $(T,d,\psi)$ are the metric entropy condition and the majorizing measure condition. The main idea will be to convey, through the incremental conditions (11.1), boundedness and continuity of $X$ in the function space $L_\psi$ to the corresponding almost sure properties. This will be accomplished in a chaining argument.

Let us note before turning to these results that the study of the continuity (actually uniform continuity) will always follow rather easily from the various bounds established for boundedness, which thus appears as the main case of this investigation. Let us also mention that the fact that we are working with pseudo-metrics rather than metrics is not really important, since we can always identify two points $s$ and $t$ in $T$ such that $d(s,t) = 0$: under (11.1), $X_s = X_t$ almost surely. For simplicity, one can therefore reduce everything, if one wishes, to the case of $(T,d)$ metric. Further, all the conditions we will use, entropy or majorizing measures, imply that $(T,d)$ is totally bounded, a property which will always be satisfied under all the conditions we will deal with. This situation is rather classical and we already met it, for example, in Chapters 7-9 in the study of limit theorems.

11.1. Regularity of random processes under metric entropy conditions

Let $(T,d)$ be a pseudo-metric space. For each $\varepsilon > 0$, denote by $N(T,d;\varepsilon)$ the smallest number of open balls of radius $\varepsilon > 0$ in the pseudo-metric $d$ which form a covering of $T$. (Recall we could work equivalently with closed balls.) $T$ is totally bounded for $d$ if and only if $N(T,d;\varepsilon) < \infty$ for every $\varepsilon > 0$.
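As a simple illustration of these covering numbers (not from the text), consider the unit interval with its usual metric, where the entropy numbers grow like $1/\varepsilon$ and the entropy integrals used below converge for the exponential Young functions:

```latex
% For T = [0,1] and d(s,t) = |s - t|, an open ball of radius eps is an interval
% of length 2*eps, so
N([0,1], d; \varepsilon) \le \frac{1}{2\varepsilon} + 1\,,
% and, with psi_2(x) = exp(x^2) - 1, hence psi_2^{-1}(y) = sqrt(log(1+y)),
\int_0^1 \psi_2^{-1}\big(N([0,1], d;\varepsilon)\big)\,d\varepsilon
  \;\le\; \int_0^1 \sqrt{\log\Big(2 + \frac{1}{2\varepsilon}\Big)}\,d\varepsilon \;<\; \infty\,,
% the integrand growing only like sqrt(log(1/eps)) as eps -> 0.
```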

.ed under all the onditions we will deal with. s.e. t). D = supfd(s. Denote further by D = D(T ) the diameter of (T. d) i. t 2 T g .

324 (.

nite or in.

Theorem 11.1. Let $\psi$ be a Young function and let $X = (X_t)_{t\in T}$ be a random process in $L_\psi$ such that for all $s,t$ in $(T,d)$
$$\|X_s - X_t\|_\psi \le d(s,t)\,.$$
Then, if $\int_0^D \psi^{-1}(N(T,d;\varepsilon))\,d\varepsilon < \infty$, $X$ is almost surely bounded and we actually have
$$\mathbb{E}\sup_{s,t\in T}|X_s - X_t| \le 8\int_0^D \psi^{-1}(N(T,d;\varepsilon))\,d\varepsilon\,.$$

It is clear that the convergence of the entropy integral is understood when $\varepsilon \to 0$. The numerical constant 8 is without any special meaning. We will actually prove a somewhat better result whose proof is not more difficult.

Theorem 11.2. Let $\psi$ be a Young function and let $X = (X_t)_{t\in T}$ be a random process in $L_1 = L_1(\Omega, \mathcal{A}, \mathbb{P})$ such that for all measurable sets $A$ in $\Omega$ and all $s,t$ in $(T,d)$,
$$\int_A |X_s - X_t|\,d\mathbb{P} \le d(s,t)\,\mathbb{P}(A)\,\psi^{-1}\Big(\frac{1}{\mathbb{P}(A)}\Big)\,.$$
Then, for all measurable sets $A$,
$$\int_A \sup_{s,t\in T}|X_s - X_t|\,d\mathbb{P} \le 8\,\mathbb{P}(A)\int_0^D \psi^{-1}\big(\mathbb{P}(A)^{-1}N(T,d;\varepsilon)\big)\,d\varepsilon\,.$$

Note that this statement does not really concern $\psi$ but rather the function $u\,\psi^{-1}(1/u)$, $0 < u \le 1$, and should perhaps be preferably stated in this way. Before proving Theorem 11.2, let us explain the advantages of its formulation and how it includes Theorem 11.1. For the latter, simply note that, when (11.1) holds, by convexity and Jensen's inequality,
$$\frac{1}{\mathbb{P}(A)}\int_A \frac{|X_s - X_t|}{d(s,t)}\,d\mathbb{P}
\le \psi^{-1}\Big(\frac{1}{\mathbb{P}(A)}\int_A \psi\Big(\frac{|X_s - X_t|}{d(s,t)}\Big)d\mathbb{P}\Big)
\le \psi^{-1}\Big(\frac{1}{\mathbb{P}(A)}\Big)\,,$$
so that the hypothesis of Theorem 11.2 is satisfied, and its conclusion with $A = \Omega$ gives Theorem 11.1.

Conversely, it should be noted that if $Z$ is a positive random variable such that for every measurable set $A$
$$\int_A Z\,d\mathbb{P} \le \mathbb{P}(A)\,\psi^{-1}\Big(\frac{1}{\mathbb{P}(A)}\Big)\,,$$
then, letting $A = \{Z > u\}$ and using Chebyshev's inequality, we have
$$\mathbb{P}\{Z > u\} \le \frac{1}{u}\int_{\{Z>u\}} Z\,d\mathbb{P} \le \frac{1}{u}\,\mathbb{P}\{Z > u\}\,\psi^{-1}\Big(\frac{1}{\mathbb{P}\{Z > u\}}\Big)$$
so that, for every $u > 0$,
$$\mathbb{P}\{Z > u\} \le \frac{1}{\psi(u)}\,.$$
For Young functions of exponential type, the latter is equivalent to saying that $\|Z\|_\psi < \infty$. For power type functions, as is easily seen by integration by parts, it includes for example conditions in weak $L_p$-spaces; however, it is less restrictive, and this is why Theorem 11.2 is more general in its hypotheses. Let $1 < p < \infty$ and assume indeed we have a random process $X$ such that for all $s,t$ in $T$
$$(11.2)\qquad \|X_s - X_t\|_{p,\infty} \le d(s,t)\,.$$
Then, for every measurable set $A$ and all $s,t$,
$$\int_A |X_s - X_t|\,d\mathbb{P} \le q\,d(s,t)\,\mathbb{P}(A)^{1/q} = q\,d(s,t)\,\mathbb{P}(A)\Big(\frac{1}{\mathbb{P}(A)}\Big)^{1/p}$$
where $q = p/(p-1)$ is the conjugate of $p$. This for the advantages in the hypotheses. Concerning the conclusion, the formulation of Theorem 11.2 allows one to obtain directly some quite sharp integrability and tail estimates of the sup-norm of the process $X$, much in the spirit of those described in the first part of the book. If, for example, $\psi$ is such that for some constant $C = C_\psi$ and all $x, y \ge 1$
$$\psi^{-1}(xy) \le C\,\psi^{-1}(x)\,\psi^{-1}(y)$$
(which is the case for example when $\psi(x) = x^p$, $1 \le p < \infty$), then the conclusion of Theorem 11.2 is that for all measurable sets $A$
$$\int_A \sup_{s,t}|X_s - X_t|\,d\mathbb{P} \le 8\,C\,E\,\mathbb{P}(A)\,\psi^{-1}\Big(\frac{1}{\mathbb{P}(A)}\Big)$$

where
$$E = E(T,d;\psi) = \int_0^D \psi^{-1}(N(T,d;\varepsilon))\,d\varepsilon\,,$$
assumed of course to be finite. Hence, by Chebyshev's inequality as before, for every $u > 0$,
$$(11.3)\qquad \mathbb{P}\Big\{\sup_{s,t}|X_s - X_t| > u\Big\} \le \frac{1}{\psi\big(u/8CE\big)}\,.$$
This applies in particular to the preceding setting of (11.2) for $\psi(x) = x^p$, $1 < p < \infty$, in which case we get that
$$\Big\|\sup_{s,t}|X_s - X_t|\Big\|_{p,\infty} \le 8qCE\,.$$
If $\psi$ is an exponential function $\psi_q(x) = \exp(x^q) - 1$, then (11.3) immediately implies that
$$\Big\|\sup_{s,t}|X_s - X_t|\Big\|_{\psi_q} \le C_q E$$
for some constant $C_q$ depending only on $q$. In case we have $\|X_s - X_t\|_p \le d(s,t)$ instead of (11.2), we end up with a small gap in this order of ideas: we conclude to a degree of integrability for the supremum which is exactly the same as the one we started with on the individual increments $X_s - X_t$. This can however easily be repaired. Indeed, if $Z$ is a positive random variable such that $\|Z\|_q = 1$ and if we set $\mathbb{Q}(A) = \int_A Z^q\,d\mathbb{P}$, then, from the assumption, for every measurable $A$ and all $s,t$,
$$\int_A |X_s - X_t|\,d\mathbb{Q} \le d(s,t)\,\mathbb{Q}(A)^{1/q}\,,$$
so that applying Theorem 11.2 with respect to $\mathbb{Q}$ yields
$$\int \sup_{s,t}|X_s - X_t|\,d\mathbb{Q} \le 8CE$$
from which, by uniformity in $\mathbb{Q}$, it follows that
$$\Big\|\sup_{s,t}|X_s - X_t|\Big\|_p \le 8CE\,.$$
In this case, actually, we conclude to a degree of integrability for the supremum which is exactly the same as the one we started with on the individuals $(X_s - X_t)$.

Assume $\psi$ satisfies now
$$\psi^{-1}(xy) \le C\big(\psi^{-1}(x) + \psi^{-1}(y)\big)$$
for all $x, y \ge 1$. The functions $\psi_q$ satisfy this inequality. In this situation, Theorem 11.2 indicates that for every measurable set $A$
$$\int_A \sup_{s,t}|X_s - X_t|\,d\mathbb{P} \le 8\,C\,\mathbb{P}(A)\Big(D\,\psi^{-1}\Big(\frac{1}{\mathbb{P}(A)}\Big) + E\Big)\,.$$
This easily implies, for every $u > 0$,
$$(11.4)\qquad \mathbb{P}\Big\{\sup_{s,t}|X_s - X_t| > 8C(E + u)\Big\} \le \frac{1}{\psi(u/D)}\,.$$
This is an inequality of the type we obtained for bounded Gaussian or Rademacher processes in Chapters 3 and 4, with two parameters: $E$, the entropy integral, which measures some information on the sup-norm in $L_p$, $0 \le p < \infty$, and the diameter $D$, which may be assimilated to weak moments. For the purpose of comparison, note that, obviously,
$$E = \int_0^D \psi^{-1}(N(T,d;\varepsilon))\,d\varepsilon \ge \psi^{-1}(1)\,D$$
($\psi^{-1}(1) > 0$ by convexity) and $E$ is much bigger than $D$ in general.

We now prove Theorem 11.2.

Proof of Theorem 11.2. It is enough to prove the inequality of the statement with $T$ finite. Let $\ell_0$ be the largest integer $\ell$ (in $\mathbb{Z}$) such that $2^{-\ell} \ge D$. By induction, for every $\ell \ge \ell_0$, let $T_\ell \subset T$ be of cardinality $N(T,d;2^{-\ell})$ such that the balls $\{B(t, 2^{-\ell})\,;\ t \in T_\ell\}$ form a covering of $T$; let also $\ell_1$ be the smallest integer $\ell$ such that the open balls $B(t, 2^{-\ell})$ of center $t$ and radius $2^{-\ell}$ in the metric $d$ are reduced to exactly one point. Since $2^{-\ell_0} \ge D$, $T_{\ell_0}$ is reduced to one point; call it $t_0$. Define maps $h_\ell : T_\ell \to T_{\ell-1}$, $\ell_0 < \ell \le \ell_1$, such that $t \in B(h_\ell(t), 2^{-\ell+1})$ for every $t$ in $T_\ell$. Set then $k_\ell : T \to T_\ell$, $k_\ell = h_{\ell+1}\circ\cdots\circ h_{\ell_1}$, $\ell_0 \le \ell \le \ell_1$ ($k_{\ell_1} =$ identity). We can then write the fundamental chaining identity which is at the basis of most of the results on boundedness and continuity of processes under conditions on increments: for every $t$ in $T$,
$$X_t - X_{t_0} = \sum_{\ell=\ell_0+1}^{\ell_1}\big(X_{k_\ell(t)} - X_{k_{\ell-1}(t)}\big)\,.$$
It follows that
$$\sup_{s,t\in T}|X_s - X_t| \le 2\sum_{\ell=\ell_0+1}^{\ell_1}\sup_{t\in T}\big|X_{k_\ell(t)} - X_{k_{\ell-1}(t)}\big|\,.$$

Zi dIP IP(A) 1 Z max Zi dIP IP(A) A iN 1 1 : IP(A) N : IP(A) Proof. 2 ` )) : . t 2 T g N (T. d. by onstru tion. Z sup jXs A s. Lemma 11. This lemma of ourse des ribes one of the key points iN of the entropy approa h through the intervention of the ardinality N . Hen e the hypothesis indi ates that for (t) jdIP 2 `+1 IP(A) 1 1 : IP(A) The on lusion will then easily follow from the following elementary lemma. k` 1 (t)) every t . d.3. Let (Ai )iN be a (measurable) partition of A su h that Zi = max Zj on Ai .t Xt jdIP 2 `1 Z X sup jXk` (t) `=`0 +1 A t 4IP(A) X `>`0 2 ` Xk` 1 (t) jdIP 1 (IP(A) 1 N (T. `0 < ` `1 and A measurable Z A jXk` (t) Xk` Xk` 1 (t) ). d(k` (t). 2 `) : 2 1 `+1 for every t . for every measurable set A .2.xed ` . Let (Zi )iN be positive random variables on some probability spa e ( . observe that Cardf(Xk` (t) Further. the lemma implies that for any measurable set A . The lemma is proved. Note that the lemma applies when max kZi k 1 . IP) su h that for all i N and all measurable set A in . Then j N Z N Z X N X max Zi dIP = Zi dIP IP(Ai ) A iN i=1 Ai i=1 IP(A) where we have used in the last step that 1 is on ave and that 1 1 N P i=1 1 IP(Ai ) N IP(A) IP(Ai ) = IP(A) . Z A Then. Together with the haining inequality. We an now on lude the proof of Theorem 11. A.

"))d" 1 1 (IP(A) 1 N (T. "))d" 0 by de. d. d. 2 ` )) 2 2 ` XZ 2 2 ` `>`0 Z D 1 (IP(A) 1 N (T. d.329 The on lusion follows from a simple omparison between series and integral: X `>`0 2 ` 1 (IP(A) 1 N (T.

(Note that similar simple omparison between series and integral will be used frequently throughout this hapter and the next ones.2 is therefore established.4. Remark 11.nition of `0 . usually without any further omments.) Theorem 11. for every ` . What was thus used in this proof is the existen e. of a .

Z (11:5) A jXt Xt` jdIP 2 `IP(A) 1 1 + M` IP(A) IP(A) P where (M` ) is a sequen e of positive numbers satisfying M` < 1 . Z A jXs Xt jdIP d(s. t` ) as before and every measurable set A . every (t. t` ) 2 ` . t in T . for some C . there exists t` 2 T` with d(t. and with the property X ` 2 ` 1( Card T`) < 1 : It should be noted further that the pre eding proof also applies to the random fun tions X = (Xt )t2T with the following property: for every ` ( `0 ).nite subset T` of T su h that. for all measurable sets A and all s. for every t . t)IP(A)( 1( 1 ) + C ) :℄ IP(A) The . [This property is in parti ular realized ` when.

t) = kXs Xt k indu ed by X itself.1 and 11. We olle t here some further easy observations whi h will be useful next. Further. This simple observation an be useful as will be illustrated in Se tion 11.3. Remark 11. "))d" + 2IP(A) X ``0 M` where we re all that `0 is the largest integer ` su h that 2 ` D .2 (and Remark 11. sin e the hypotheses and on lusions of these theorems . First note that in Theorems 11.5. d.t2T Xt jdIP 8IP(A) Z D 0 1 (IP(A) 1 N (T. The proof is straightforward.4 too) we might have as well equipped T with the pseudo-metri d(s.nal bound of ourse involves then this quantity in the form Z sup jXs A s.

By random pro ess with values in a Bana h spa e. This in ludes D(s. The one we mentioned will be useful to in lude some lassi al statements. d) .6.330 a tually only involve the in rements Xs Xt in absolute values. "))d" < 1 . jXs Xt j) for example. 0 < 1 . t) with probability one. IP) su h that for all measurable set A in and all s. u in T . and D(s. We now present the ontinuity theorem in the pre eding framework. d. if Z D 0 1 (N (T. Moreover. t) = kXs Xt k (or some of the pre eding variations). t in (T. A tually we ould also in lude in this way random pro ess with values in a Bana h spa e by setting D(s. More extensions of this type an be obtained. and sin e the only property used on them is that they satisfy the triangle inequality. we leave them to the interested reader. t) = jXs Xt j . t. s) D(s.3.t2T on T T su h that for all s. Let be a Young fun tion and let X = (Xt )t2T be a random pro ess in L1 = L1 ( . we simply mean here a olle tion X = (Xt )t2T of random variables with values in a Bana h spa e. the pre eding results may be trivially extended to the setting of random distan es. An example will be dis ussed in Se tion 11. t)IP(A) : IP(A) A Then. D(s. random pro esses (D(s. Theorem 11. s) = 0 D(s. that is. A. d) . Xe satis. u) + D(u. t))s. X admits a version Xe with almost all sample paths bounded and (uniformly) ontinuous on (T. Z 1 1 jXs Xt jdIP d(s. t) = D(t. t) = min(1.

there exists > 0 . depending only on " and the .es the following property: for ea h " > 0 .

itself su h that IE sup jXes d(s.niteness of the entropy integral but not on the pro ess X . We take the notations of the proof of Theorem 11.t)< Xet j < " : Proof. The main point is to show that. when T is .2.

d. for every > 0 and `0 ` `1 . (11:6) IE sup jXs d(s. 2 ` )2 ) + 8 X m>` 2 m 1 (N (T.nite. d.t)< Xt j 1 (N (T. 2 m )) : Let and ` be .

xed. The proof of Theorem 11.2 indi ates that the haining identity Xt Xk` (t) = `1 X (Xkm (t) m=`+1 Xkm 1 (t) ) .

we . d. k` (v) = yg : If (x. y) 2 T` T` . v in T su h that d(u. y) 2 U .331 implies that IE sup jXt t2T Xk` (t) j 2 X m>` 2 m 1 (N (T. 2 m )) : Let now U = f(x. v) < and k` (u) = x . 9u.

y . k` (vx.y j + jXux. y = k` (t) . t be arbitrary in T satisfying d(s.y j 1 ( Card U ) (x.y Xvx.y ) = y and d(ux.y ) = k` (s) = x and similarly for y . We an now on lude the proof of the theorem.y)2U where we have used that k` (ux.y j + 4 sup jXr Xk` (r)j r 2T (x. vx.y Xk` (t) j + jXk` (t) Xt j sup jXux. By Lemma 11. d. We an then write by the triangle inequality jXs Xt j jXs Xk` (s) j + jXk` (s) Xux. We have obtained by (11.6) that.y)2U 1 (N (T.y Xvx. We then have learly (11. under the .y Xvx.y j + jXvx.y ) < .y su h that k` (ux.y . y) 2 U . IE sup jXux.y ) = x . Set x = k` (s) . Clearly (x. t) < . vx.6). 2 `)2 ) : Let now s.x ux.3.

We can now conclude the proof of the theorem. By (11.6), under the finiteness of the entropy integral, for each ε > 0 there exists η > 0, depending only on ε and T, d, ψ, such that, for every finite, and thus also every countable, subset S of T,

E sup_{s,t∈S, d(s,t)<η} |X_s − X_t| < ε .

Since (T,d) is totally bounded, there exists S countable and dense in T. Set then X̃_t = X_t if t ∈ S and X̃_t = lim X_s otherwise, where this limit, in probability or in L_1, is taken for s → t, s ∈ S. Then (X̃_t)_{t∈T} is clearly a version of X which satisfies all the required properties. To see in particular that (X̃_t)_{t∈T} has uniformly continuous sample paths on (T,d), let, for each n, η_n > 0 be such that

E sup_{d(s,t)<η_n} |X̃_s − X̃_t| < 4^{−n} .

Then, if A_n = { sup_{d(s,t)<η_n} |X̃_s − X̃_t| > 2^{−n} }, we have Σ_n P(A_n) < ∞ and the claim follows from the Borel–Cantelli lemma. The proof of Theorem 11.6 is complete.

It is plain that Remarks 11.4 and 11.5 also apply in the context of Theorem 11.6. Theorem 11.6 further has some easy consequences for tightness results. Let (T,d) be a compact metric space and denote by C(T) the Banach space of continuous functions on T equipped with the sup-norm. By Prokhorov's criterion and the Arzelà–Ascoli characterization of compact sets in C(T) (cf. [Bi]), it is easily seen that a family 𝒳 of random variables X = (X_t)_{t∈T} with continuous sample paths is relatively compact in the weak topology of probability distributions on C(T) as soon as, for some t, {X_t ; X ∈ 𝒳} is relatively compact as real random variables and, for each ε > 0, there is η > 0 such that for every X in 𝒳,

E sup_{d(s,t)<η} |X_s − X_t| < ε .

But this last condition is exactly what is provided by Theorem 11.6 under the entropy condition, with the dependence of η > 0 that we carefully described there. We thus have the following consequence, which will be of interest in the study of the central limit theorem in Chapter 14.

Corollary 11.7. Let (T,d) be compact and let ψ be a Young function. Assume that

∫_0^D ψ^{−1}(N(T,d,ε)) dε < ∞ .

Let 𝒳 be a family of separable random processes X = (X_t)_{t∈T} in L_1 = L_1(Ω, 𝒜, P) such that for all s, t in T and A measurable,

∫_A |X_s − X_t| dP ≤ d(s,t) P(A) ψ^{−1}(1/P(A)) .

Then each element of 𝒳 defines a tight probability distribution on C(T), and 𝒳 is weakly relatively compact if and only if, for some t ∈ T, {X_t ; X ∈ 𝒳} is weakly relatively compact (as measures on ℝ).

As a first application, we would like in particular to indicate how the preceding results contain the continuity theorem of Kolmogorov. We state it in its classical and usual form, although its various sharpenings can be deduced similarly.

Corollary 11.8. Let X = (X_t)_{t∈[0,1]} be a separable random process indexed by [0,1] such that for some α > 0, some p ≥ 1 and all s, t in [0,1],

E|X_s − X_t|^p ≤ |s − t|^{1+α} .

Then X has almost surely bounded and continuous sample paths.

Proof. We apply Theorem 11.6 together with Remark 11.5 and distinguish between two cases. If 1 + α ≤ p, then ‖X_s − X_t‖_p ≤ d(s,t) where d is the metric d(s,t) = |s − t|^{(1+α)/p} ((1+α)/p ≤ 1). As is obvious, N([0,1],d,ε) is of the order of ε^{−p/(1+α)}, and since 1/(1+α) < 1 the corresponding entropy integral with ψ(x) = x^p in Theorem 11.6 is finite; the conclusion follows in this case. When p < 1 + α, then, for all s, t in [0,1], ‖X_s − X_t‖_p ≤ d(s,t)^β where β = (1+α)/p > 1 and here d(s,t) = |s − t|. Apply then again Theorem 11.6, with this time Remark 11.5 in the setting of random distances: indeed, |X_s − X_t| defines a random distance, and here N([0,1],d,ε) is of the order of ε^{−1}, so that the corresponding entropy integral is again convergent. The proof is complete.
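To make the hypothesis of Corollary 11.8 concrete, recall that for Brownian motion E|B_s − B_t|^4 = 3|s − t|^2, so the condition holds with p = 4 and α = 1. The sketch below is a simulation, not part of the text; it checks this fourth-moment identity empirically on sampled increments.

```python
import random, math

random.seed(1)
n, paths = 512, 4000
dt = 1.0 / n

# estimate E|B_s - B_t|^4 for a few gaps |s - t| = gap * dt
est = {}
for gap in (8, 32, 128):
    acc = 0.0
    for _ in range(paths):
        inc = random.gauss(0.0, math.sqrt(gap * dt))  # B_s - B_t ~ N(0, |s-t|)
        acc += inc ** 4
    est[gap] = acc / paths

for gap, m4 in est.items():
    h = gap * dt
    # Kolmogorov hypothesis with p = 4, alpha = 1: E|B_s - B_t|^4 = 3 h^2
    assert abs(m4 / (3 * h * h) - 1.0) < 0.25
```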

While the preceding evaluations seem quite easy, they however appear to be sharp in various instances. The case of Gaussian processes, treated in Section 11.3 and in the next chapter, is a first example. Other examples concerning processes indexed by regular subsets of ℝ^N were treated in the literature (see e.g. [Ib], [Ha], [H-K], [Pi9], [Ta12], etc.). The necessity results do not concern one single process but rather the whole family of processes satisfying the same incremental condition. They basically indicate that, if every random process satisfying a certain Lipschitz condition is almost surely bounded or continuous, then a corresponding integral is convergent. This is the natural formulation of the necessity results. Only in the Gaussian case can the necessary conditions concern one single process, due to the comparison theorems. We shall come back to this in Chapter 12.

11.2. Regularity of random processes under majorizing measure conditions

One main feature (and weakness) of the entropy condition is that it gives the same "weight" to each piece of T. This does not present any inconvenience if T is in some sense homogeneous; we will see how this is the case in Chapter 13, and how metric entropy is best possible (necessary) for some processes in such a homogeneous setting (cf. also the closing comments of Section 11.1). In general, however, one has rather to think of some geometric measure of T which takes into account the possible lack of homogeneity of T. One way to handle this is the concept of majorizing measures.

Given a pseudo-metric space (T,d) and a Young function ψ as before, say that a probability measure m on T is a majorizing measure for (T,d,ψ) if

(11.7)   γ_m(T,d,ψ) = sup_{t∈T} ∫_0^D ψ^{−1}(1/m(B(t,ε))) dε < ∞

where B(t,ε) is the open ball in the d-metric of center t and radius ε > 0. (Again, we could essentially equivalently use closed balls.) We thus call (11.7) a majorizing measure condition, as opposed to the entropy condition studied before. This definition clearly gives a way to take local properties of the geometry of T more into account. Our aim in this section will be to show how one can control random processes satisfying Lipschitz conditions under a majorizing measure condition, as we did before with entropy, and actually in a more efficient way.

We start with some remarks for a better comprehension of condition (11.7) and for comparison with the results of Section 11.1. Let m be a probability measure on (T,d) and let D = D(T) be the diameter of (T,d). If s and s' are two points in T with d(s,s') = 2a > 0, the open balls B(s,a) and B(s',a) are disjoint, so that one of these two balls, say B(s,a), has measure less than or equal to 1/2. Therefore

(11.8)   γ_m(T,d,ψ) ≥ (1/2) ψ^{−1}(2) D .

Also, as for entropy, if m is a majorizing measure satisfying (11.7), then (T,d) is totally bounded and actually

(11.9)   sup_{ε>0} ε ψ^{−1}(N(T,d,ε)) ≤ 2 γ_m(T,d,ψ) .

The proof of this easy fact is already instructive of the way majorizing measures are used. Let N ≤ N(T,d,ε); there exist t_1, ..., t_N such that d(t_i,t_j) ≥ ε for all i ≠ j. By definition of γ_m(T,d,ψ), for each i ≤ N,

γ_m(T,d,ψ) ≥ ∫_0^{ε/2} ψ^{−1}(1/m(B(t_i,u))) du ≥ (ε/2) ψ^{−1}(1/m(B(t_i,ε/2))) ,

that is, m(B(t_i,ε/2)) ≥ [ψ(2γ_m(T,d,ψ)/ε)]^{−1}. Since the balls B(t_i,ε/2), i ≤ N, are disjoint and m is a probability, it follows that N ≤ ψ(2γ_m(T,d,ψ)/ε), which is the result (11.9).
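The quantities appearing in (11.9) are easy to experiment with. The sketch below is an illustration, under the assumption of a greedy separated-set construction for N(T,d,ε); it computes covering-type numbers for a grid in [0,1] and checks the bound sup_ε ε ψ^{−1}(N(T,d,ε)) ≤ 2 γ_m for ψ_1(x) = e^x − 1 and m the uniform measure on the grid.

```python
import math

n = 200
T = [i / n for i in range(n + 1)]        # grid in [0, 1], d(s, t) = |s - t|
psi_inv = lambda x: math.log(1.0 + x)    # inverse of psi_1(x) = e^x - 1

def packing_number(points, eps):
    """Size of a greedy eps-separated set, used here as N(T, d, eps)."""
    chosen = []
    for p in points:
        if all(abs(p - c) >= eps for c in chosen):
            chosen.append(p)
    return len(chosen)

def ball_mass(t, eps):                   # uniform measure of the open ball B(t, eps)
    return sum(1 for s in T if abs(s - t) < eps) / len(T)

# gamma_m(T, d, psi): sup over t of the integral in (11.7), by Riemann sum
eps_grid = [k / 100 for k in range(1, 101)]
gamma = max(sum(psi_inv(1.0 / ball_mass(t, e)) for e in eps_grid) / 100
            for t in (0.0, 0.25, 0.5, 1.0))   # sup is attained near the edges here

# check (11.9): eps * psi_inv(N(T, d, eps)) <= 2 * gamma for all eps
lhs = max(e * psi_inv(packing_number(T, e)) for e in eps_grid)
assert lhs <= 2 * gamma
```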
More important now is to observe that entropy conditions are stronger than majorizing measure conditions; that is, under the entropy condition there is a probability measure m on T such that

(11.10)   sup_{t∈T} ∫_0^D ψ^{−1}(1/m(B(t,ε))) dε ≤ K ∫_0^D ψ^{−1}(N(T,d,ε)) dε

where K is some numerical constant. This can be established in great generality (cf. [Ta12]), but we actually only prove it here in a special case. The general study of bounds on stochastic processes using majorizing measures indeed runs into many technicalities which we decided not to enter, for the simplicity of the exposition. We will thus restrict this study to the case of a special class of Young functions for which things become simpler. For the rest of this paragraph, ψ is a Young function such that, for some constant C and all x, y ≥ 1,

(11.11)   ψ^{−1}(xy) ≤ C ( ψ^{−1}(x) + ψ^{−1}(y) )

and

∫_0^1 ψ^{−1}(1/x) dx < ∞ .

This class covers the main examples we have in mind, namely the exponential Young functions ψ_q(x) = exp(x^q) − 1, 1 ≤ q < ∞ (and also ψ(x) = exp(exp x) − e). This restriction however does not hide the main idea and interest of the majorizing measure technique. As already mentioned, this study can be conducted in a rather large generality and we refer to [Ta12] where this program is performed.

Let us now show how in this case we may prove (11.10). For simplicity in the notation, let us assume moreover that C = 1 in (11.11) (which is actually easily seen not to be a restriction). Let, as usual, ℓ_0 be the largest integer with 2^{−ℓ_0} ≥ D where D is the diameter of (T,d). For every ℓ > ℓ_0, let T_ℓ ⊂ T denote the set of the centers of a minimal family of balls B(t,2^{−ℓ}), t ∈ T_ℓ, covering T; by definition, Card T_ℓ = N(T,d,2^{−ℓ}). Consider then the probability measure m on T given by

m = Σ_{ℓ>ℓ_0} 2^{−ℓ+ℓ_0} ( N(T,d,2^{−ℓ}) )^{−1} Σ_{t∈T_ℓ} δ_t

where δ_t is Dirac measure at t. Clearly, for every t and every ℓ > ℓ_0,

m(B(t,2^{−ℓ})) ≥ 2^{−ℓ+ℓ_0} ( N(T,d,2^{−ℓ}) )^{−1} .

Hence

∫_0^D ψ^{−1}(1/m(B(t,ε))) dε ≤ Σ_{ℓ>ℓ_0} 2^{−ℓ} ψ^{−1}(1/m(B(t,2^{−ℓ}))) ≤ Σ_{ℓ>ℓ_0} 2^{−ℓ} ψ^{−1}( 2^{ℓ−ℓ_0} N(T,d,2^{−ℓ}) ) .

Using (11.11), this is then estimated by

Σ_{ℓ>ℓ_0} 2^{−ℓ} ψ^{−1}(2^{ℓ−ℓ_0}) + Σ_{ℓ>ℓ_0} 2^{−ℓ} ψ^{−1}( N(T,d,2^{−ℓ}) ) ≤ 4D ∫_0^{1/2} ψ^{−1}(1/x) dx + 2 ∫_0^D ψ^{−1}(N(T,d,ε)) dε ,

and since D ψ^{−1}(1) ≤ ∫_0^D ψ^{−1}(N(T,d,ε)) dε, the announced claim (11.10) follows; note that the constant K however depends on ∫_0^1 ψ^{−1}(1/x) dx.

Let us note that the preceding proof actually shows that when ∫_0^D ψ^{−1}(N(T,d,ε)) dε < ∞, m is a majorizing measure which satisfies, in addition to (11.10),

(11.12)   lim_{η→0} sup_{t∈T} ∫_0^η ψ^{−1}(1/m(B(t,ε))) dε = 0 .

This condition is the one which enters in obtaining continuity properties of stochastic processes, as opposed to (11.7) which deals with boundedness. We therefore sometimes speak of bounded, resp. continuous, majorizing measure conditions. This possibility of giving weaker sufficient conditions for sample boundedness than for sample continuity is another advantage of majorizing measures over entropy.

Let us now enter the heart of the matter and show how majorizing measures are used to control processes. Our approach will be to associate to each majorizing measure a (non-unique) ultrametric distance δ finer than the original distance d and for which there still exists a majorizing measure. This then allows the use of chaining arguments exactly as with entropy conditions. Alternatively, it can be seen as a way to discretize majorizing measures. This ultrametric structure is at the basis of the understanding of majorizing measures and will appear as the key notion in the study of necessity in the next section. This program is accomplished in Propositions 11.10 and 11.12 below. We however start with a simple lemma which allows us to conveniently reduce to a finite index set T.
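The measure m built from the nets T_ℓ above is easy to reproduce numerically. The sketch below is an illustration only, assuming greedy (rather than minimal) nets and ψ_1(x) = e^x − 1; it constructs m on a grid in [0,1] and checks that its majorizing measure integral is bounded by a modest multiple of the entropy integral, in the spirit of (11.10). The total mass of the constructed m is slightly below 1; a true probability measure would only make the left side smaller.

```python
import math

psi_inv = lambda x: math.log(1.0 + x)          # inverse of psi_1(x) = e^x - 1
n = 256
T = [i / n for i in range(n + 1)]              # grid in [0, 1], diameter D = 1

def net(points, r):
    centers = []
    for p in points:
        if all(abs(p - c) >= r for c in centers):
            centers.append(p)
    return centers

L = 8                                          # scales 2**-1, ..., 2**-L (l_0 = 0 here)
nets = {l: net(T, 2.0 ** (-l)) for l in range(1, L + 1)}

# m = sum_l 2**-l * (1 / Card T_l) * sum_{t in T_l} delta_t
mass = {}
for l, centers in nets.items():
    for c in centers:
        mass[c] = mass.get(c, 0.0) + 2.0 ** (-l) / len(centers)

def m_ball(t, eps):
    return sum(w for c, w in mass.items() if abs(c - t) < eps)

scales = [2.0 ** (-l) for l in range(1, L + 1)]
# discretized majorizing measure integral (11.7) and entropy integral
mm_int = max(sum(e * psi_inv(1.0 / m_ball(t, e)) for e in scales)
             for t in (0.0, 0.5, 1.0))
ent_int = sum(e * psi_inv(len(net(T, e))) for e in scales)
assert mm_int <= 8 * ent_int                   # a (generous) numerical form of (11.10)
```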
Lemma 11.9. Let (T,d) be a pseudo-metric space with diameter D(T). Let m be a probability measure on (T,d) and recall that we set

γ_m(T,d,ψ) = sup_{t∈T} ∫_0^{D(T)} ψ^{−1}(1/m(B(t,ε))) dε .

Then, if A is a finite (or compact) subset of T, there is a probability measure μ on (A,d) such that

γ_μ(A,d,ψ) = sup_{t∈A} ∫_0^{D(A)} ψ^{−1}(1/μ(B_A(t,ε))) dε ≤ 2 sup_{t∈A} ∫_0^{D(A)/2} ψ^{−1}(1/m(B(t,ε))) dε ,

where D(A) is the diameter of (A,d) and B_A(t,ε) is the ball in A of center t and radius ε > 0. In particular, γ_μ(A,d,ψ) ≤ 2 γ_m(T,d,ψ).

Proof. For t in T, set d(t,A) = inf{ d(t,y) ; y ∈ A } and take φ(t) in A with d(t,φ(t)) = d(t,A). Set μ = φ(m), so that μ is supported by A. Fix x in A. For t in B(x,ε), we have d(t,φ(t)) = d(t,A) ≤ d(t,x) and thus d(x,φ(t)) ≤ 2d(x,t) < 2ε. It follows that φ(B(x,ε)) ⊂ B_A(x,2ε) and thus μ(B_A(x,2ε)) ≥ m(B(x,ε)). The proof is easily completed.

Recall that a metric space (U,δ) is ultrametric if δ satisfies the improved triangle inequality

δ(u,v) ≤ max( δ(u,w), δ(w,v) ) ,   u, v, w ∈ U .

The main feature of ultrametric spaces is that two balls of the same radius are either disjoint or equal. The next proposition deals with the nice functions ψ satisfying (11.11); we recall however that, at the expense of (severe) complications, a similar study can be driven in the general setting (cf. [Ta12]).

Proposition 11.10. Let ψ satisfy (11.11) and let (T,d) be a finite metric space with diameter D. Let m be a probability measure on (T,d) and recall that we set

γ_m(T,d,ψ) = sup_{t∈T} ∫_0^D ψ^{−1}(1/m(B(t,ε))) dε .

Then there exist an ultrametric distance δ on T such that d(s,t) ≤ δ(s,t) for all s, t in T, and a probability measure μ on (T,δ), such that

γ_μ(T,δ,ψ) ≤ K γ_m(T,d,ψ)

where K is a constant depending only on ψ.

Proof. Assume T = (t_i). Let ℓ_0 be the largest integer ℓ such that 4^{−ℓ} ≥ D and let ℓ_1 be the smallest one such that the balls B(t,4^{−ℓ_1}) of center t and radius 4^{−ℓ_1} in the metric d are reduced to exactly one point. Set T_{ℓ_1,i} = {t_i} for every i. For every ℓ = ℓ_1 − 1, ..., ℓ_0, we construct by induction on i ≥ 1 points x_{ℓ,i} and subsets T_{ℓ,i} of T as follows: setting T_{ℓ,0} = ∅, we choose x_{ℓ,i} with

m(B(x_{ℓ,i}, 4^{−ℓ})) = max { m(B(x, 4^{−ℓ})) ; x ∉ T_{ℓ,j} , ∀ j < i }

and set

T_{ℓ,i} = ∪ { T_{ℓ+1,k} ; T_{ℓ+1,k} ⊄ T_{ℓ,j} for all j < i , B(x_{ℓ+1,k}, 4^{−ℓ−1}) ∩ B(x_{ℓ,i}, 4^{−ℓ}) ≠ ∅ } .

Define then δ(s,t) = 4^{−ℓ+2} where ℓ is the largest integer such that s and t belong to the same T_{ℓ,i} for some i; δ is clearly an ultrametric distance. By decreasing induction on ℓ, it is easily verified that the diameter of each set T_{ℓ,i} is less than 4^{−ℓ+2}. It then clearly follows by definition of δ that d(s,t) ≤ δ(s,t) for all s, t in T. For each ℓ, (T_{ℓ,i})_i forms the family of the δ-balls of radius 4^{−ℓ+2}. By construction, the balls B(x_{ℓ,i}, 4^{−ℓ}) are disjoint when i varies, so that Σ_i m(B(x_{ℓ,i}, 4^{−ℓ})) ≤ 1. Let t_{ℓ,i} be a fixed point in T_{ℓ,i} and consider

μ_0 = Σ_{ℓ=ℓ_0}^{ℓ_1} Σ_i 4^{−ℓ+ℓ_0−1} m(B(x_{ℓ,i}, 4^{−ℓ})) δ_{t_{ℓ,i}}

where δ_t is Dirac measure at t. By the preceding, μ_0 has total mass at most 1, and there is thus a probability measure μ ≥ μ_0 on T. By the maximality in the choice of the points x_{ℓ,i}, note that, if t ∈ T_{ℓ,i},

(11.13)   μ(T_{ℓ,i}) ≥ 4^{−ℓ+ℓ_0−1} m(B(t, 4^{−ℓ})) .

We evaluate γ_μ(T,δ,ψ) using (11.13) and the properties of ψ. Let t be fixed and, for each ℓ, let T_{ℓ,i} be the δ-ball of radius 4^{−ℓ+2} containing t. By (11.13) and (11.11),

Σ_{ℓ=ℓ_0}^{ℓ_1} 4^{−ℓ} ψ^{−1}(1/μ(T_{ℓ,i})) ≤ Σ_{ℓ≥ℓ_0} 4^{−ℓ} ψ^{−1}(4^{ℓ−ℓ_0+1}) + Σ_{ℓ≥ℓ_0} 4^{−ℓ} ψ^{−1}(1/m(B(t,4^{−ℓ}))) .

By definition of ℓ_0 (and (11.8)), this easily implies the conclusion. The proof is complete.
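The partition-tree structure underlying this proof can be mimicked in code. The sketch below is a simplification of the construction above, not a faithful implementation: it builds nested partitions of a finite set by greedy nets, defines δ(s,t) from the finest scale at which s and t share a cell, and verifies that δ is an ultrametric dominating d.

```python
import random

random.seed(2)
T = [random.random() for _ in range(60)]       # finite metric space in [0,1], d = |s - t|

def partition(r, parent_cells):
    """Refine each parent cell into groups around greedy net centers at scale r."""
    cells = []
    for cell in parent_cells:
        centers, groups = [], {}
        for p in sorted(cell):
            c = next((c for c in centers if abs(p - c) < r), None)
            if c is None:
                centers.append(p)
                c = p
            groups.setdefault(c, []).append(p)
        cells.extend(groups.values())
    return cells

levels = [[list(T)]]                           # level 0: one cell (4**0 >= diameter)
for l in range(1, 6):
    levels.append(partition(4.0 ** (-l), levels[-1]))

def delta(s, t):
    for l in range(len(levels) - 1, -1, -1):   # largest l with s, t in the same cell
        if any(s in cell and t in cell for cell in levels[l]):
            return 4.0 ** (-l + 1)             # cell diameter at level l is < 4**(-l+1)
    return 4.0

for _ in range(300):
    s, t, u = (random.choice(T) for _ in range(3))
    assert abs(s - t) <= delta(s, t) + 1e-12                     # delta dominates d
    assert delta(s, t) <= max(delta(s, u), delta(u, t)) + 1e-12  # ultrametric inequality
```

The ultrametric inequality is automatic because the partitions are nested: if s, u and u, t each share a cell at some level, then s and t share a cell at that same level.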

Remark 11.11. If (T,δ) is ultrametric and ψ satisfies (11.11), then, perhaps more important than the probability measure we constructed (although it is actually equivalent), is the datum of a family of weights α(B,ℓ) ≥ 0, B ∈ B_ℓ, where, for every ℓ, B_ℓ is the family of the balls B of radius 2^{−ℓ} (or 4^{−ℓ}, to agree with the proof of Proposition 11.10), such that Σ_{B∈B_ℓ} α(B,ℓ) ≤ 1 (the measures m(B(x_{ℓ,i},4^{−ℓ})) in the preceding proof). Proposition 11.10 may further be expressed in the following "discretized" formulation. Denote by ℓ_0 the largest integer ℓ such that 2^{−ℓ} ≥ D; then, for all ℓ ≥ ℓ_0, there exist finite sets T_ℓ in T and maps q_ℓ : T → T_ℓ such that q_{ℓ−1} ∘ q_ℓ = q_{ℓ−1} and d(t,q_ℓ(t)) ≤ 2^{−ℓ} for every t and ℓ, and a discrete probability measure μ on ∪{T_ℓ ; ℓ ≥ ℓ_0} satisfying

sup_{t∈T} Σ_{ℓ≥ℓ_0} 2^{−ℓ} ψ^{−1}(1/μ({q_ℓ(t)})) ≤ K γ_m(T,d,ψ)

where K only depends on ψ. Indeed, if δ is the ultrametric structure obtained in Proposition 11.10, for every ℓ ≥ ℓ_0 denote by B_ℓ the family of the δ-balls of radius 2^{−ℓ}. For every t ∈ T, there is a unique element B of B_ℓ with t ∈ B; let then q_ℓ(t) be one fixed point of B and let μ_ℓ({q_ℓ(t)}) = μ(B). The probability measure Σ_{ℓ≥ℓ_0} 2^{−ℓ+ℓ_0−1} μ_ℓ fulfills the conditions of the claim.

Provided with the preceding results, we now present sufficient conditions, in terms of majorizing measures, for a random process to be almost surely bounded or continuous. The results are the analogs (actually improvements) of Theorems 11.2 and 11.6 of Section 11.1 dealing with entropy.

We first establish the main result for a general Young function ψ, in the case of an ultrametric index set, and then deduce the general case from Proposition 11.10 for a function ψ satisfying (11.11). As an alternate approach, one may use the preceding discretization (Remark 11.11); this, however, does not really clarify the steps in which the property (11.11) of ψ is used. We refer to [Ta12] for more details on this point and for a more general study for arbitrary Young functions ψ.

Proposition 11.12. Let ψ be an arbitrary Young function and let X = (X_t)_{t∈T} be a random process in L_1 = L_1(Ω, 𝒜, P) indexed by a finite ultrametric space (T,δ) such that, for all s, t in T and all measurable sets A in 𝒜,

∫_A |X_s − X_t| dP ≤ δ(s,t) P(A) ψ^{−1}(1/P(A)) .

Then, for any probability measure μ on (T,δ) and any measurable set A,

∫_A sup_{s,t∈T} |X_s − X_t| dP ≤ K P(A) sup_{t∈T} ∫_0^D ψ^{−1}( 1/(P(A) μ(B(t,ε))) ) dε

where K is numerical and D is the diameter of (T,δ).

Proof. Set T = {t_i} and let A be in 𝒜. Let (A_i) be a measurable partition of A such that, on A_i,

sup_{t∈T} |X_t − X_x| = |X_{t_i} − X_x|

where x is one (arbitrary) fixed point of T. Thus

∫_A sup_{t∈T} |X_t − X_x| dP = Σ_i ∫_{A_i} |X_{t_i} − X_x| dP .

Let ℓ_0 be the largest integer ℓ such that 2^{−ℓ} ≥ D and, for ℓ ≥ ℓ_0, denote by B_ℓ the family of the balls of radius 2^{−ℓ}. For every B in B_ℓ, we fix x(B) ∈ B, and take x(T) = x. For every t in T, we let π_ℓ(t) = x(B(t,2^{−ℓ})). The usual chaining identity then yields, for every t,

|X_t − X_x| ≤ Σ_{ℓ>ℓ_0} |X_{π_ℓ(t)} − X_{π_{ℓ−1}(t)}| .

For every B in B_ℓ, ℓ > ℓ_0, set A_B = ∪{A_i ; t_i ∈ B}, and denote by B' the element of B_{ℓ−1} containing B. We can write

∫_A sup_{t∈T} |X_t − X_x| dP ≤ Σ_i Σ_{ℓ>ℓ_0} ∫_{A_i} |X_{π_ℓ(t_i)} − X_{π_{ℓ−1}(t_i)}| dP
   = Σ_{ℓ>ℓ_0} Σ_{B∈B_ℓ} Σ_{t_i∈B} ∫_{A_i} |X_{π_ℓ(t_i)} − X_{π_{ℓ−1}(t_i)}| dP
   = Σ_{ℓ>ℓ_0} Σ_{B∈B_ℓ} ∫_{A_B} |X_{x(B)} − X_{x(B')}| dP .

Hence, from the hypothesis, and since δ(x(B),x(B')) ≤ 2^{−ℓ+1},

∫_A sup_{t∈T} |X_t − X_x| dP ≤ Σ_{ℓ>ℓ_0} Σ_{B∈B_ℓ} 2^{−ℓ+1} P(A_B) ψ^{−1}(1/P(A_B)) .

Let now μ be a probability measure on (T,δ) and set

M = sup_{t∈T} Σ_{ℓ≥ℓ_0} 2^{−ℓ} ψ^{−1}( 1/(P(A) μ(B(t,2^{−ℓ}))) ) ≤ 2 sup_{t∈T} ∫_0^D ψ^{−1}( 1/(P(A) μ(B(t,ε))) ) dε .

Integrating with respect to μ yields

Σ_{ℓ≥ℓ_0} 2^{−ℓ} Σ_{B∈B_ℓ} μ(B) ψ^{−1}( 1/(P(A) μ(B)) ) ≤ M ,

while integrating with respect to the measure ν on T such that ν({t_i}) = P(A_i) yields

Σ_{ℓ≥ℓ_0} 2^{−ℓ} Σ_{B∈B_ℓ} P(A_B) ψ^{−1}( 1/(P(A) μ(B)) ) ≤ P(A) M .

We next observe the following. Since ψ(u) ≤ u ψ'(u), the function u ↦ u ψ^{−1}(1/u) is increasing; hence, if P(A_B) ≤ P(A)μ(B),

P(A_B) ψ^{−1}(1/P(A_B)) ≤ P(A)μ(B) ψ^{−1}( 1/(P(A)μ(B)) ) .

If P(A_B) ≥ P(A)μ(B), we have simply that

P(A_B) ψ^{−1}(1/P(A_B)) ≤ P(A_B) ψ^{−1}( 1/(P(A)μ(B)) ) .

Assembling this observation with what we obtained previously yields

∫_A sup_{t∈T} |X_t − X_x| dP ≤ 8 P(A) M ,

from which the conclusion follows. Proposition 11.12 is established.

Together with Lemma 11.9 and Proposition 11.10, the preceding basic result yields the following general theorem for functions ψ satisfying (11.11).

Theorem 11.13. Let ψ be a Young function satisfying (11.11) and let X = (X_t)_{t∈T} be a random process in L_1 = L_1(Ω, 𝒜, P) indexed by the pseudo-metric space (T,d) such that, for all s, t in T and all measurable sets A in 𝒜,

∫_A |X_s − X_t| dP ≤ d(s,t) P(A) ψ^{−1}(1/P(A)) .

Then, for any probability measure m on (T,d) and any measurable set A,

∫_A sup_{s,t∈T} |X_s − X_t| dP ≤ K P(A) ( D ψ^{−1}(1/P(A)) + sup_{t∈T} ∫_0^D ψ^{−1}( 1/(P(A) m(B(t,ε))) ) dε )

where D = D(T) is the diameter of (T,d) and K only depends on ψ. In particular,

E sup_{s,t∈T} |X_s − X_t| ≤ K sup_{t∈T} ∫_0^D ψ^{−1}(1/m(B(t,ε))) dε .

Let us mention that the various comments following Theorem 11.2, about integrability and tail behavior of the supremum of the processes under study, can be repeated similarly from Theorem 11.13.
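As a sanity check of the boundedness statement in Theorem 11.13, one can compare both sides for Brownian motion on [0,1], whose increments satisfy (up to a numerical constant) a ψ_2-type Lipschitz condition in the metric d(s,t) = |s − t|^{1/2}. The sketch below is a simulation; the choice of Lebesgue measure for m is only a convenient guess. It estimates E sup_{s,t}|X_s − X_t| and the majorizing measure integral with ψ_2^{−1}(x) = (log(1+x))^{1/2}, and checks that their ratio is a modest constant.

```python
import random, math

random.seed(3)

# Monte Carlo estimate of E sup_{s,t} |B_s - B_t| = E (max - min) on [0,1]
n, paths, acc = 512, 400, 0.0
for _ in range(paths):
    b, lo, hi = 0.0, 0.0, 0.0
    for _ in range(n):
        b += random.gauss(0.0, math.sqrt(1.0 / n))
        lo, hi = min(lo, b), max(hi, b)
    acc += hi - lo
e_range = acc / paths                          # should be near 2*sqrt(2/pi) ~ 1.60

# majorizing measure integral for m = Lebesgue on [0,1], d(s,t) = sqrt(|s-t|):
# m(B(t, eps)) >= min(eps**2, 1), which upper-bounds the integral uniformly in t
psi2_inv = lambda x: math.sqrt(math.log(1.0 + x))
grid = [k / 1000 for k in range(1, 1001)]
gamma = sum(psi2_inv(1.0 / min(e * e, 1.0)) for e in grid) / 1000

ratio = e_range / gamma                        # the "effective K" of Theorem 11.13
assert 0.1 < ratio < 10.0
```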

In particular, simply replace the entropy integral by the corresponding majorizing measure integral. For example, as an analog of (11.4), we have in the setting of Theorem 11.13 that for every u > 0,

(11.14)   P{ sup_{s,t∈T} |X_s − X_t| > K(γ + u) } ≤ ( ψ(u/D) )^{−1}

where γ = γ_m(T,d,ψ) = sup_{t∈T} ∫_0^D ψ^{−1}(1/m(B(t,ε))) dε.

The next result concerns continuity of random processes under majorizing measure conditions. As announced, the majorizing measure condition has to be strengthened into (11.12). It is the analog of Theorem 11.6, and its proof simply requires an appropriate adaptation of the proof of Theorem 11.6.

Theorem 11.14. Let ψ be a Young function satisfying (11.11) and let X = (X_t)_{t∈T} be a random process in L_1 = L_1(Ω, 𝒜, P) such that for all s, t in (T,d) and all measurable sets A in 𝒜,

∫_A |X_s − X_t| dP ≤ d(s,t) P(A) ψ^{−1}(1/P(A)) .

Assume there is a probability measure m on (T,d) such that

lim_{η→0} sup_{t∈T} ∫_0^η ψ^{−1}(1/m(B(t,ε))) dε = 0 .

Then X admits a version X̃ with almost all sample paths (uniformly) continuous on (T,d). Moreover, X̃ satisfies

the following property: for each ε > 0, there exists η > 0 depending only on the preceding limit, i.e. only on T, d, ψ, m, but not on X, such that

E sup_{d(s,t)<η} |X̃_s − X̃_t| < ε .

Proof. We simply sketch the steps of the proof of Theorem 11.6 in the majorizing measure setting. For each η > 0, set

ω(η) = sup_{t∈T} ∫_0^η ψ^{−1}(1/m(B(t,ε))) dε ,

so that lim_{η→0} ω(η) = 0. Fix η > 0. If A is a finite subset of T, we know from Lemma 11.9, or more precisely from its proof, that there is a probability measure μ on (A,d) such that

sup_{t∈A} ∫_0^η ψ^{−1}(1/μ(B_A(t,ε))) dε ≤ 2 ω(η/2) ≤ 2 ω(η) .

This observation allows us to assume that T is finite in what follows. Let ℓ be the largest integer such that 2^{−ℓ} ≥ η. We can then simply repeat in this case the argument leading to (11.6) in the proof of Theorem 11.6, adapted to the present setting, to get that

(11.15)   E sup_{d(s,t)<η} |X_s − X_t| ≤ η ψ^{−1}( N(T,d,2^{−ℓ})^2 ) + K ω(η) ,

K depending then on ψ. Since (T,d) is totally bounded under the majorizing measure condition (recall (11.9)), the proof of Theorem 11.14 is completed exactly as the one of Theorem 11.6. We leave the details to the interested reader.

Note that the precise dependence of η on ε in the last assertion of Theorem 11.14 can be made explicit from (11.15). Note further that a deviation inequality of the type (11.14) may be obtained from (11.15) for the supremum over d(s,t) < η, with ω(η) replacing the diameter. Various remarks developed in Section 11.1 in the context of entropy apply similarly in the setting of majorizing measure conditions. This is in particular the case for the majorizing measure version of Corollary 11.7, which we need not state since it is completely similar; we however use it freely below. This is also the case for example with Remark 11.4, which might be worthwhile to detail in this context. Here is its analog; for simplicity, and since this will be sufficient in our applications, we only state it when (T,d) is ultrametric, Lemma 11.9 allowing then to extend this property to the case of a general index set T.

Remark 11.15. In the setting of Theorem 11.13, assume the process X = (X_t)_{t∈T} satisfies the following weaker assumption: for every t ∈ T and every integer ℓ, for every measurable set A and every s in the ball of center t and radius 2^{−ℓ},

(11.16)   ∫_A |X_s − X_t| dP ≤ 2^{−ℓ} P(A) ψ^{−1}(1/P(A)) + M_ℓ(t) P(A)

where (M_ℓ(t)) is a sequence of positive numbers such that

sup_{t∈T} Σ_ℓ M_ℓ(t) < ∞ .

Then, under the majorizing measure condition, the conclusion of Theorem 11.13 holds similarly, i.e. X is almost surely bounded; the quantitative bounds of course then involve the preceding quantity. As for Remark 11.4, this extension to processes satisfying (11.16) follows directly from the proof of Theorem 11.13. In the case of Theorem 11.14 dealing with continuity, the condition on (M_ℓ(t)) has to be strengthened into

lim_{ℓ_0→∞} sup_{t∈T} Σ_{ℓ≥ℓ_0} M_ℓ(t) = 0 .

We will see in the next paragraph how these simple observations can be rather useful in various applications.

11.3. Examples of applications

In this last paragraph of the chapter, we present some (rather important) examples to which the preceding results can be applied. They concern Gaussian and Rademacher processes and their corresponding chaos processes. In particular, the sufficient conditions we describe in order that a Gaussian process be sample bounded or continuous may be considered as the first part of the study of the regularity of Gaussian processes; the second part, devoted to necessity, is the object of the next chapter.

Before entering these examples, let us briefly indicate an elementary but convenient remark. We deal here with the Young functions ψ_q(x) = exp(x^q) − 1, 1 ≤ q < ∞, for which ψ_q^{−1}(x) = (log(1+x))^{1/q}. Note that we can write indifferently, in the entropy or majorizing measure conditions, the integrals up to D or up to ∞, since N(T,d,ε) = 1 if ε ≥ D. For the entropy condition, it is clear that

∫_0^∞ (log N(T,d,ε))^{1/q} dε ≥ (log 2)^{1/q} D

and thus

(11.17)   ∫_0^D ψ_q^{−1}(N(T,d,ε)) dε ≤ 3 ∫_0^∞ (log N(T,d,ε))^{1/q} dε ;

a reverse inequality (with constant 1) holds trivially. Similarly, for the majorizing measure condition, we have, for every probability measure m on T,

(11.18)   sup_{t∈T} ∫_0^D ψ_q^{−1}(1/m(B(t,ε))) dε ≤ 4 sup_{t∈T} ∫_0^∞ ( log(1/m(B(t,ε))) )^{1/q} dε ,

and trivially also a reverse inequality. The point of this observation is that we can deal equivalently, in the entropy or majorizing measure conditions, with the function (log x)^{1/q}. A similar property of course also holds for the continuous majorizing measure conditions. We can further deal with ψ(x) = exp(exp x) − e and replace ψ^{−1}(x) by log(1 + log x), or by the more commonly used log_+ log x (provided the diameter is taken into account in the inequalities). Accordingly, we use freely below either ψ_q^{−1}(x) or (log x)^{1/q} depending on the context and/or historical references; (log x)^{1/q} will be used most often.
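The interchangeability of ψ_q^{−1}(x) and (log x)^{1/q} in these integrals is easy to confirm numerically. The sketch below assumes the model covering function N(ε) = ⌈1/ε⌉ on (0,1); it compares the two integrands pointwise and checks that their ratio stays between 1 and 2^{1/q}.

```python
import math

def psi_q_inv(x, q):                 # inverse of psi_q(y) = exp(y**q) - 1
    return math.log(1.0 + x) ** (1.0 / q)

grid = [k / 1000 for k in range(1, 1000)]
for q in (1, 2):
    for e in grid:
        N = math.ceil(1.0 / e)      # model covering function, N >= 2 on (0, 1)
        a = psi_q_inv(N, q)                        # psi_q^{-1}(N)
        b = math.log(N) ** (1.0 / q)               # (log N)^{1/q}
        # log N <= log(1 + N) <= log(N^2) = 2 log N whenever N >= 2
        assert b <= a <= 2.0 ** (1.0 / q) * b + 1e-12
```

Integrating these pointwise comparisons over ε gives the equivalence of the two entropy integrals, up to constants as in (11.17).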

Let X = (X_t)_{t∈T} be a Gaussian process. Recall from Chapter 3 that by this we mean that the distribution of any finite dimensional random vector (X_{t_1}, ..., X_{t_N}), t_i ∈ T, is Gaussian. The distribution of X is therefore completely determined by its covariance structure E X_s X_t, s, t ∈ T. Set

d_X(s,t) = ‖X_s − X_t‖_2 ,   s, t ∈ T .

The knowledge of the covariance structure implies a complete knowledge of this L_2-metric d_X, and conversely, at least if E X_t^2, t ∈ T, is known. d_X is therefore a natural pseudo-metric on the index set T associated to the Gaussian process X. The question of course arises of knowing under which condition(s) on its covariance structure the Gaussian process X is (or admits a version which is) almost surely bounded or continuous. In this order of ideas, we might try to understand how the "geometry" of (T,d_X) describes boundedness or continuity properties of the sample paths of X. The results of the preceding sections provide a rather precise description of the situation (they will actually be shown to be best possible in the next chapter).

To start with, however, let us first mention that the comparison theorems of Section 3.3 can also be efficient in this study. Indeed, if X and Y are two Gaussian processes such that d_Y(s,t) ≤ d_X(s,t) for all s, t, and if X has nice regularity properties, boundedness or continuity of the sample paths, then by the results of Section 3.3 these can be "transferred" to Y. This is clear for boundedness by the integrability properties of the supremum of Gaussian processes. For continuity, we can use the following lemma.

Lemma 11.16. Let Y = (Y_t)_{t∈T} be a Gaussian process and let d be a metric on T such that d_Y(s,t) ≤ d(s,t) for all s, t. Then, for every η > 0,

E sup_{d(s,t)<η} |Y_s − Y_t| ≤ K ( sup_{u∈T} E sup_{d(t,u)<η} |Y_t − Y_u| + η (log N(T,d,η))^{1/2} )

where K is numerical.

Proof. Fix η > 0 and let N = N(T,d,η) (assumed to be finite and larger than 2). Let U = (u_1, ..., u_N) in T be such that the d-balls of radius η and centers u_i cover T. Clearly

sup_{d(s,t)<η} |Y_s − Y_t| ≤ 2 max_{u∈U} ( sup_{d(t,u)<η} |Y_t − Y_u| ) + max_{u,v∈U, d(u,v)<3η} |Y_u − Y_v| .

By (3.6),

E max_{u∈U} ( sup_{d(t,u)<η} |Y_t − Y_u| ) ≤ 2 max_{u∈U} E sup_{d(t,u)<η} |Y_t − Y_u| + 3η (log N)^{1/2} .

Similarly, by (3.6) and the fact that d_Y(u,v) ≤ d(u,v),

E max_{u,v∈U, d(u,v)<3η} |Y_u − Y_v| ≤ 3η (log N^2)^{1/2} ,

and the lemma is proved.

The preceding claim then follows immediately from this lemma when d = d_X, using Corollary 3.19. Note that the Gaussian comparison properties are only used in this approach through Corollary 3.19, that is, through Sudakov's minoration.

Let therefore X be a Gaussian process with associated pseudo-metric d_X. The integrability properties of Gaussian variables indicate that, for all s, t in T,

‖X_s − X_t‖_{ψ_2} ≤ 2 d_X(s,t) .

We are thus immediately in the setting of processes satisfying a Lipschitz condition as studied in the previous sections. The next two statements are then direct consequences of the results obtained there. The first one, which deals with entropy, is known as Dudley's theorem.

Theorem 11.17. Let X = (X_t)_{t∈T} be a Gaussian process. Then

E sup_{t∈T} X_t ≤ 24 ∫_0^∞ (log N(T,d_X,ε))^{1/2} dε .

Further, if this entropy integral is convergent, X has a version with almost all sample paths (uniformly) continuous on (T,d_X). (The numerical constant has no reason to be sharp.)
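Dudley's bound can be evaluated by hand for Brownian motion on [0,1], where d_X(s,t) = |s − t|^{1/2}, so that N(T,d_X,ε) is of the order of ε^{−2} for small ε. The sketch below is a simulation; the covering-number formula ⌈1/(2ε²)⌉ is a guessed model rather than an exact count. It compares the Monte Carlo value of E sup_t B_t, which equals √(2/π) ≈ 0.80, with the entropy integral bound of Theorem 11.17.

```python
import random, math

random.seed(4)

# Monte Carlo: E sup_{[0,1]} B_t = sqrt(2/pi) ~ 0.798
n, paths, acc = 1024, 500, 0.0
for _ in range(paths):
    b, hi = 0.0, 0.0
    for _ in range(n):
        b += random.gauss(0.0, math.sqrt(1.0 / n))
        hi = max(hi, b)
    acc += hi
e_sup = acc / paths

# Dudley integral: 24 * int_0^infty sqrt(log N(eps)) d eps, N(eps) ~ ceil(1/(2 eps^2))
grid = [k / 2000 for k in range(1, 2000)]
integral = 0.0
for e in grid:
    N = max(1, math.ceil(1.0 / (2.0 * e * e)))
    if N > 1:
        integral += math.sqrt(math.log(N)) / 2000
bound = 24.0 * integral

assert 0.6 < e_sup < 1.0       # close to sqrt(2/pi), with some discretization bias
assert e_sup < bound           # Dudley's inequality holds with a lot of room
```

The large gap between the two sides reflects the generous constant 24 rather than any failure of sharpness in the entropy integral itself.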

Theorem 11.18. Let X = (X_t)_{t∈T} be a Gaussian process. Then, for some numerical constant K and any probability measure m on (T,d_X),

E sup_{t∈T} X_t ≤ K sup_{t∈T} ∫_0^∞ ( log(1/m(B(t,ε))) )^{1/2} dε .

Further, if m satisfies

lim_{η→0} sup_{t∈T} ∫_0^η ( log(1/m(B(t,ε))) )^{1/2} dε = 0 ,

X admits a version with almost all sample paths (uniformly) continuous on (T,d_X).

Note that if we are asked for continuity properties of a Gaussian process X with respect to another metric d for which T is compact, we need simply assume in addition that d_X is continuous on (T,d), in other words that X is continuous in L_2 (or in probability). Actually, if (T,d) is a compact metric space, a Gaussian process X = (X_t)_{t∈T} is continuous on (T,d) if and only if it is continuous on (T,d_X) and d_X is continuous on (T,d). Sufficiency is obvious. Conversely, if X is d-continuous, so is d_X. Fix ε > 0 and, for η > 0, let A_η = { (s,t) ∈ T × T ; d_X(s,t) ≤ η }. This is a closed set in T × T and A_0 = ∩_{η>0} A_η. By compactness, there are η > 0 and a finite set A'_0 ⊂ A_0 such that, whenever (s,t) ∈ A_η, there exists (s',t') ∈ A'_0 with d(s,s') ≤ ε, d(t,t') ≤ ε. We have, by the triangle inequality,

|X_s − X_t| ≤ |X_s − X_{s'}| + |X_{s'} − X_{t'}| + |X_{t'} − X_t| .

Since (s',t') ∈ A_0, X_{s'} = X_{t'} with probability one. It follows that

E sup_{d_X(s,t)≤η} |X_s − X_t| ≤ 2 E sup_{d(s,t)≤ε} |X_s − X_t| .

By the integrability properties of Gaussian random vectors, the right-hand side of this inequality goes to 0 with ε, and thus the left-hand side with η. It follows that X is d_X-uniformly continuous.

Recall that Theorem 11.18 is more general than Theorem 11.17 ((11.10)). As will be discussed in Chapter 12, the existence of a majorizing measure as in Theorem 11.18 will be shown to be necessary for X to be bounded or continuous. Theorem 11.17 may in turn be compared to Sudakov's minoration (Theorem 3.18). It is remarkable that these two theorems, drawn from the rather general results of the previous sections, which apply to large classes of processes, are actually sharp in this Gaussian setting.

Closely related to Gaussian processes are Rademacher processes. Following Chapter 4, we say that a process X = (X_t)_{t∈T} is a Rademacher process if there exists a sequence (x_i(t)) of functions on T such that for every t, X_t = Σ_i ε_i x_i(t), assumed to converge almost surely (i.e. Σ_i x_i(t)^2 < ∞). Recall that (ε_i) denotes a Rademacher sequence, that is, a sequence of independent random variables taking the values ±1 with probability 1/2. The basic observation is that, according to the subgaussian inequality (4.1), ‖X_s − X_t‖_{ψ_2} ≤ 5 ‖X_s − X_t‖_2 for all s, t in T. The preceding Theorems 11.17 and 11.18 therefore also apply to Rademacher processes. In particular, for any probability measure m on T equipped with the pseudo-metric d(s,t) = ( Σ_i |x_i(s) − x_i(t)|^2 )^{1/2}, we have

(11.19)   E sup_t Σ_i ε_i x_i(t) ≤ K sup_t ∫_0^∞ ( log(1/m(B(t,ε))) )^{1/2} dε

for some numerical constant K.

The Gaussian results actually apply to the general class of the so-called subgaussian processes. A centered process X = (X_t)_{t∈T} is said to be subgaussian with respect to a metric or pseudo-metric d on T if, for all s, t in T and every λ in ℝ,

(11.20)   E exp( λ (X_s − X_t) ) ≤ exp( λ^2 d(s,t)^2 / 2 ) .

Hence, by Chebyshev's inequality, for all u, λ > 0,

P{ |X_s − X_t| > u } ≤ 2 exp( −λu + λ^2 d(s,t)^2 / 2 ) .

Minimizing over λ (λ = u/d(s,t)^2) yields

P{ |X_s − X_t| > u } ≤ 2 exp( −u^2 / 2d(s,t)^2 )

for all u > 0. In particular, if X is subgaussian with respect to d, then ‖X_s − X_t‖_{ψ_2} ≤ 5 d(s,t) for all s, t in T. Gaussian and Rademacher processes are subgaussian with respect to (a multiple of) their associated L_2-metric.
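For a Rademacher sum X = Σ ε_i a_i the subgaussian tail bound above reads P{|Σ ε_i a_i| > u} ≤ 2 exp(−u²/2‖a‖₂²), which is Hoeffding's inequality. The sketch below checks it exactly, by enumeration over all sign patterns, for a small illustrative coefficient vector.

```python
import itertools, math

a = [0.9, 0.7, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2, 0.1, 0.1]   # illustrative coefficients
norm2 = sum(x * x for x in a)                             # ||a||_2^2

signs = list(itertools.product((-1, 1), repeat=len(a)))
total = len(signs)

for u in (0.5, 1.0, 1.5, 2.0, 2.5):
    # exact probability P(|sum eps_i a_i| > u) over all 2^n sign patterns
    p = sum(1 for eps in signs
            if abs(sum(e * x for e, x in zip(eps, a))) > u) / total
    assert p <= 2.0 * math.exp(-u * u / (2.0 * norm2))
```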

349

This is the property we use on subgaussian processes. Actually, elementary computations on the basis of a series expansion of the exponential function show that if \(Z\) is a real mean zero random variable such that \(\|Z\|_{\psi_2} \le 1\), then, for all \(\lambda\) in \(\mathbb{R}\),
\[
\mathbb{E}\exp(\lambda Z) \le \exp\bigl(C \lambda^2\bigr)
\]
where \(C\) is numerical. Therefore, changing \(d\) by some multiple of it shows that the subgaussian definition (11.20) is equivalent to saying that \(\|X_s - X_t\|_{\psi_2} \le d(s,t)\) for all \(s, t\) in \(T\), or also that
\[
\mathbb{P}\{|X_s - X_t| > u\} \le C \exp\bigl(-u^2 / C d(s,t)^2\bigr)
\]
for some constant \(C\) and all \(u > 0\). We use this freely below.

Theorems 11.17 and 11.18 apply similarly to subgaussian processes. What we will actually use in applications concerning subgaussian processes (in Chapter 14) is the majorizing measure version of Corollary 11.7 for families of subgaussian processes. Let us record at this stage the following statement for further reference.

Lemma 11.19. Assume there is a probability measure \(m\) on \((T,d)\) such that
\[
\lim_{\eta \to 0}\, \sup_{t \in T} \int_0^\eta \Bigl(\log \frac{1}{m(B(t,\varepsilon))}\Bigr)^{1/2} d\varepsilon = 0 \, .
\]
Then, for each \(\varepsilon > 0\), there exists \(\eta > 0\) such that for every (separable) process \(X = (X_t)_{t \in T}\) which is subgaussian with respect to \(d\),
\[
\mathbb{E}\sup_{d(s,t) < \eta} |X_s - X_t| < \varepsilon \, .
\]

Turning back to Rademacher processes \(X = (X_t)_{t \in T}\), \(X_t = \sum_i \varepsilon_i x_i(t)\), we have that \(\|X_s - X_t\|_2 = \bigl(\sum_i |x_i(s) - x_i(t)|^2\bigr)^{1/2}\). In Section 4.1, we learned estimates of \(\|X_s - X_t\|_{\psi_q}\), \(2 < q \le \infty\), for other metrics than this \(\ell_2\)-metric, namely \(\ell_{p,\infty}\)-metrics. For \(1 \le p < 2\), let
\[
d_{p,\infty}(s,t) = \bigl\| (x_i(s) - x_i(t)) \bigr\|_{p,\infty} \, .
\]
These results then yield further entropy or majorizing measure bounds for Rademacher processes in terms of \(\psi_q\) and these metrics. Since the quantities \(\|(x_i(s) - x_i(t))\|_{p,\infty}\) need not be distances in general, the proof is actually carried over, as announced, with the true metrics \(\|X_s - X_t\|_{\psi_q}\).

Proposition 11.20. Let \(X = (X_t)_{t \in T}\) be a Rademacher process, \(X_t = \sum_i \varepsilon_i x_i(t)\). For \(1 < p < 2\) and any probability measure \(m\) on \(T\),
\[
\mathbb{E}\sup_t \sum_i \varepsilon_i x_i(t) \le K_p \sup_{t \in T} \int_0^\infty \Bigl(\log \frac{1}{m(B(t,\varepsilon))}\Bigr)^{1/q} d\varepsilon
\]

350

where \(K_p\) only depends on \(p\) and \(q = p/(p-1)\) is the conjugate of \(p > 1\). Further, when \(p = 1\),
\[
\mathbb{E}\sup_t \sum_i \varepsilon_i x_i(t) \le K \sup_{t \in T} \int_0^\infty \log\Bigl(1 + \log \frac{1}{m(B(t,\varepsilon))}\Bigr) d\varepsilon \, .
\]

After these classical and important examples, we now investigate some more specialized ones. They concern Gaussian processes with vector values and Gaussian chaos. One of the main interests of these applications is the use of Remarks 11.4 and 11.15 concerning processes satisfying (11.5) or (11.16). We say Gaussian, but actually these applications are exactly the same for Rademacher processes on the basis of the corresponding results in Chapter 4 and the previous discussion. We leave it to the interested reader to translate the results to the Rademacher case. In order to put the results in a clearer perspective, we decided to present the first application, to vector valued Gaussian processes, using the tool of entropy and Remark 11.4, and the second one using majorizing measures. It is indeed fruitful to first analyze the questions in terms of entropy. Of course, Theorem 11.21 below also holds under the corresponding majorizing measure conditions.

We do not seek the greatest generality in the definition of processes with vector values. Assume simply we are given a separable Banach space \(B\) and a family \(X = (X_t)_{t \in T}\) of Borel random variables \(X_t\) with values in \(B\) indexed by \(T\). \(X\) is Gaussian if each

finite sample \((X_{t_1}, \ldots, X_{t_N})\), \(t_i \in T\), is Gaussian in \(B^N\). We may then ask similarly for almost sure boundedness or continuity properties of the sample paths of \(X = (X_t)_{t \in T}\) in \(B\). As a first simple observation, set, for all \(s, t\) in \(T\),
\[
d_X(s,t) = \|X_s - X_t\|_2 = \bigl(\mathbb{E}\|X_s - X_t\|^2\bigr)^{1/2} \, .
\]
From (3.5), we have that \(\|X_s - X_t\|_{\psi_2} \le \sqrt{8}\, d_X(s,t)\) for all \(s, t\) in \(T\). We can then make use of Remark 11.5 to realize that if
\[
(11.21)\qquad \int_0^\infty \bigl(\log N(T, d_X, \varepsilon)\bigr)^{1/2} d\varepsilon < \infty ,
\]
then \(X\) has a version with almost all sample paths bounded and continuous on \((T, d_X)\).

1 that. t in T . t)IP(A) : IP(A) We are therefore in a position to make use of the general setting developed in the previous se tions. as announ ed. IPfkXs Xt k > 2dX (s. Then. In this way. if 1 Z 0 (log N (T. P ` Proof.21. It improves upon (11. This inequality (3. These involve. X (s. t) 2 from whi h it follows that for every measurable set A Z (11:22) A kXs Xt kdIP 2X (s. t) + uX (s. t 2 T . Let us . t)g exp( u2 =2) : This is (basi ally) equivalent to say that k(kXs Xt k 2dX (s.351 The metri dX in (3.1. Re all the weak and strong distan es X and dX on T introdu ed above. "))1=2 d" < 1 and 1 Z 0 log+ log N (T. Then we know from Lemma 3. dX . Let X = (Xt )t2T be a Gaussian pro ess with values in a separable Bana h spa e B . besides what an be alled the "strong" parameter dX .15. Let us set indeed. t)IP(A) 2 1 1 + 2dX (s. t))+ k 2X (s.5) is indeed a onsequen e of the pre ise deviation inequalities for norms of Gaussian random ve tors in the form of Lemma 3. ")d" < 1 . for all s.4 and 11. and for all u > 0 . it is des ribed in the setting of entropy for a somewhat learer pi ture of the argument but also holds under the orresponding majorizing measures onditions. X has a version with almost surely bounded and ontinuous paths on (T. and in parti ular of Remarks 11. a "weak" parameter. s.21).5) is however too strong. t) = (Xs Xt ) = sup (IEf 2 (Xs kf k1 Xt ))1=2 . dX ) . X . we obtain the following result. Theorem 11.

dX . a` ))1=2 < 1 : .rst show that there exists a sequen e (a` ) of positive numbers su h that a` < 1 and X ` 2 `(log N (T.

dX . 2 k ))1=2 for every k . De. dX . ")d" < 1 .352 R Set bk = (log N (T. we have P k 2 k log+ bk < 1 . Sin e 01 log+ log N (T.

a` ))1=2 2 `k +1 bk k whi h is . dX .ne `k = [log+2 (2k bk )℄ + 1 where [℄ is integer part and log2 the logarithm of base 2 . Clearly X ` Further X ` a` X k 2 k `k+1 < 1 : X 2 `(log N (T. We let then a` to be 2 k for all ` with `k < ` `k+1 .

nite too by de.

For each \(\ell\), let now \(A_\ell\) be minimal in \(T\) such that the \(d_X\)-balls with centers in \(A_\ell\) and radius \(a_\ell\) cover \(T\). If \(D\) is such a ball, let further \(C_\ell(D)\) be minimal in \(D\) such that the \(\sigma_X\)-balls with centers in \(C_\ell(D)\) and radius \(2^{-\ell}\) cover \(D\). Set then, for every \(\ell\), \(T_\ell = \bigcup_D C_\ell(D)\). We have
\[
\log \mathrm{Card}\, T_\ell \le \log N(T, \sigma_X, 2^{-\ell}) + \log N(T, d_X, a_\ell) \, .
\]
Further, by construction, there exists, for every \(t \in T\) and every \(\ell\), \(t_\ell\) in \(T_\ell\) such that \(\sigma_X(t, t_\ell) \le 2^{-\ell}\) and \(d_X(t, t_\ell) \le 2 a_\ell\). If we now use (11.22) and recall that \(\sum_\ell a_\ell < \infty\), we see that we are exactly in the setting of Remark 11.4. The conclusion therefore follows since, as we noticed, this remark applies similarly to continuity.

Our last application deals with chaos processes. Gaussian and Rademacher chaos were introduced in Chapters 3 and 4 respectively, where their integrability and tail behavior properties were investigated. As in those chapters, we restrict here to chaos of order 2, and we deal moreover only with real valued chaos; we indicate at the end how the application can be amplified to more general cases. As before, and finally, we only deal with the Gaussian case; with the corresponding results in Chapter 4, the theorem we will obtain applies similarly to Rademacher chaos.

Recall that \((g_i)\) denotes an orthogaussian sequence. Following Section 3.2, \(X = (X_t)_{t \in T}\) is a Gaussian chaos process of order 2 if there is a sequence \((x_{ij}(t))\) of (real) functions on \(T\) such that, for all \(t\), \(X_t = \sum_{i,j} g_i g_j x_{ij}(t)\), where the sum is almost surely convergent. We introduce two distances \(d_1\) and \(d_2\) on \(T\) by setting
\[
d_1(s,t) = \|X_s - X_t\|_2
\quad\text{and}\quad
d_2(s,t) = \sup_{|h| \le 1} \Bigl| \sum_{i,j} h_i h_j \bigl(x_{ij}(s) - x_{ij}(t)\bigr) \Bigr|
\]
for \(s, t \in T\).

353

With respect to Section 3.2, we do not consider the third parameter since we have seen there that, for real sequences, the associated decoupled chaos is equivalent to \(X\) (at least if the diagonal terms are zero). It follows from Lemma 3.8 and the comments introducing it that there is a numerical constant \(K\) such that, for all \(s, t \in T\) and all \(u > 0\),
\[
(11.23)\qquad \mathbb{P}\bigl\{ |X_s - X_t| > u\, d_1(s,t) + u^2 d_2(s,t) \bigr\} \le K \exp(-u^2/K) \, .
\]
Since, for real sequences, \(d_2(s,t) \le K d_1(s,t)\), (11.23) implies in particular that
\[
\mathbb{P}\bigl\{ |X_s - X_t| > u\, d_1(s,t) \bigr\} \le K \exp(-u/K)
\]
for every \(u > 0\) (and some possibly different \(K\)). We could then apply the results of the preceding sections and show boundedness and continuity of \(X\) in terms of the only distance \(d_1\) with respect to the Young function \(\psi_1(x) = \exp(x) - 1\). However, the incremental estimates (11.23), involving the two distances \(d_1\) and \(d_2\), are more precise and lead to sharper conditions. To this aim, note that (11.23) implies that if \(a \ge d_2(s,t)\), then, for all \(u > 0\),
\[
\mathbb{P}\bigl\{ |X_s - X_t| > a^{-1} d_1^2(s,t) + 2au \bigr\} \le K \exp(-u/K)
\]
(since \(\sqrt{u}\, d_1(s,t) \le a^{-1} d_1^2(s,t) + au\)). Hence, for some possibly different numerical constant \(K\),
\[
\bigl\| \bigl( |X_s - X_t| - a^{-1} d_1^2(s,t) \bigr)_+ \bigr\|_{\psi_1} \le K a \, .
\]
Therefore, for every measurable set \(A\),
\[
(11.24)\qquad \int_A |X_s - X_t|\, d\mathbb{P} \le K\, \mathbb{P}(A) \Bigl( a \Bigl( 1 + \log \frac{1}{\mathbb{P}(A)} \Bigr) + a^{-1} d_1^2(s,t) \Bigr) \, .
\]
These relations put us in the right situation to apply Theorems 11.13 and 11.14 together with Remark 11.15. We can now state our result on almost sure boundedness and continuity of Gaussian chaos processes.

Theorem 11.22. Let \(X_t = \sum_{i,j} g_i g_j x_{ij}(t)\), \(t \in T\), be a Gaussian chaos process of order 2 as just described, with associated metrics \(d_1\) and \(d_2\). Assume there exist probability measures \(m_1\) and \(m_2\) on \(T\) such that respectively
\[
\lim_{\eta \to 0}\, \sup_{t \in T} \int_0^\eta \Bigl(\log \frac{1}{m_1(B_1(t,\varepsilon))}\Bigr)^{1/2} d\varepsilon = 0
\]

354

and
\[
\lim_{\eta \to 0}\, \sup_{t \in T} \int_0^\eta \log \frac{1}{m_2(B_2(t,\varepsilon))}\, d\varepsilon = 0
\]
where \(B_i(t,\varepsilon)\) is the \(d_i\)-ball of center \(t\) and radius \(\varepsilon > 0\) ( \(i = 1, 2\) ). Then \(X = (X_t)_{t \in T}\) admits a version with almost all sample paths bounded and continuous on \((T, d_1)\).

Proof. We only show that if \(T\) is finite and if \(M\) is a number such that
\[
\sup_{t \in T} \int_0^\infty \Bigl(\log \frac{1}{m_i(B_i(t,\varepsilon))}\Bigr)^{i/2} d\varepsilon \le M , \qquad i = 1, 2 ,
\]
then
\[
\mathbb{E}\sup_{s,t \in T} |X_s - X_t| \le KM
\]
for some numerical \(K\). From this and the material discussed in the preceding section, it is not difficult to deduce the full conclusion of the statement.

Let thus \(T\) be finite and \(M\) be as before. According to Proposition 11.10, we may and do assume that \(d_1\) and \(d_2\) are ultrametric. For every \(t\) and \(\ell\), set
\[
\varphi(t,\ell) = 2^{-\ell} \Bigl(\log \frac{1}{m_1(B_1(t, 2^{-\ell}))}\Bigr)^{1/2}
\]
and denote by \(k(t,\ell)\) the largest integer \(k\) such that \(2^{2j} \varphi(t, \ell + j) \le 1\) for all \(j \le k\). We may observe that
\[
(11.25)\qquad \sum_\ell 2^{-2k(t,\ell)} \le 8KM \, .
\]
To show this, note first that, by definition of \(k(t,\ell)\), \(2^{-2k(t,\ell)} \le 4\, \varphi\bigl(t, \ell + k(t,\ell) + 1\bigr)\). For every \(t\) and \(n\), let \(L_n = \{\ell ;\ \ell + k(t,\ell) = n\}\); if \(\ell \ne \ell'\) are elements of \(L_n\), then \(k(t,\ell) \ne k(t,\ell')\), from which it follows that \(\sum_{\ell \in L_n} 2^{-2k(t,\ell)} \le 2 \cdot 2^{-2k(t,\ell_0)}\) where \(k(t,\ell_0) = \min\{k(t,\ell) ;\ \ell \in L_n\}\). (11.25) then clearly follows and implies in particular that
\[
(11.26)\qquad \sum_\ell \varphi\bigl(t, \ell + k(t,\ell)\bigr) \le 8KM \, .
\]
As another property of the integers \(k(t,\ell)\), note, as is easily checked, that \(\ell + k(t,\ell)\) is increasing in \(\ell\).

For every \(t\) and \(\ell\), consider the subset \(H(t, 2^{-2\ell})\) of \(T\) consisting of the union of those balls \(C\) of radius \(2^{-\ell - k(t,\ell)}\)

355

included in \(B_1(t, 2^{-\ell - k(t,\ell)})\) such that \(k(s,\ell) = k(t,\ell)\) for all \(s\) in \(C\) (as is easily checked, the latter property only depends on \(C\), and not on \(s\) in \(C\)). The subsets \(H(t, 2^{-2\ell})\), \(t \in T\), form the family of the balls of radius \(2^{-2\ell}\) for the (ultrametric) distance \(\delta_0\) given by
\[
\delta_0(s,t) = 2^{-2\ell} \quad\text{where}\quad \ell = \sup\bigl\{ j ;\ \exists\, u \text{ such that } s, t \in H(u, 2^{-2j}) \bigr\} ;
\]
in particular \(H(t, 2^{-2\ell-2}) \subset H(t, 2^{-2\ell})\) since \(\ell + k(t,\ell)\) is increasing in \(\ell\). Let \(B(t, 2^{-2\ell})\) be the ball of center \(t\) and radius \(2^{-2\ell}\) for the distance \(\delta = \max(\delta_0, d_2)\); such a ball is the intersection of a \(\delta_0\)-ball of radius \(2^{-2\ell}\) and a \(d_2\)-ball of radius \(2^{-2\ell}\). Let now \(s\) and \(t\) be such that \(\delta(s,t) \le 2^{-2\ell}\). Then, by construction, \(d_1(s,t) \le 2^{-\ell - k(t,\ell)}\) and \(d_2(s,t) \le 2^{-2\ell}\). Hence, from (11.24) (with \(a = 2^{-2\ell}\)), for every measurable set \(A\),
\[
\int_A |X_s - X_t|\, d\mathbb{P} \le K\, 2^{-2\ell}\, \mathbb{P}(A) \Bigl(1 + \log \frac{1}{\mathbb{P}(A)}\Bigr) + 2^{-2k(t,\ell)}\, \mathbb{P}(A) \, .
\]
Recall now from the assumptions that, for all \(t\),
\[
(11.27)\qquad \sum_\ell 2^{-2\ell} \log \frac{1}{m_2(B_2(t, 2^{-2\ell}))} \le KM \, .
\]
To each ball \(B(t, 2^{-2\ell})\) we can associate the weight
\[
2^{-k(t,\ell) + k_0}\, m_1\bigl(B_1(t, 2^{-2\ell})\bigr)\, m_2\bigl(B_2(t, 2^{-2\ell})\bigr)
\]
where \(k_0\) is the smallest possible value of \(k(t,\ell)\). One can then construct a probability measure \(m\) on \((T, \delta)\) such that, for all \(t\) in \(T\),
\[
\sum_\ell 2^{-2\ell} \log \frac{1}{m(B(t, 2^{-2\ell}))} \le KM
\]
for some numerical \(K\). We obtain in this way a family of weights as described in Remark 11.15 (and (11.16)), and, by (11.25), (11.26) and (11.27), we see that we are exactly in the situation described by Remark 11.15. We therefore conclude the proof of Theorem 11.22 in this way.

Let us mention, to conclude this chapter, that the previous theorem might be extended to vector valued chaos processes, that is processes \(X_t = \sum_{i,j} g_i g_j x_{ij}(t)\) where the functions \(x_{ij}(t)\) take their values in a Banach space. According to the study of Section 3.2, three distances would then be involved, with different entropy or majorizing measure conditions on each of them ( \(d + 1\) distances for chaos of order \(d\)! ). We do not pursue in this direction.

356

Notes and references

Various references have presented during the past years the theory of random processes on abstract index sets and their regularity properties as developed in this chapter. Expositions on Gaussian processes have been given in particular in the course [Fer4] by X. Fernique, in the landmark paper [Du2] by R. Dudley, and by N. C. Jain and M. B. Marcus [J-M3]. General sufficient conditions for non-Gaussian processes satisfying incremental Orlicz conditions to be almost surely bounded or continuous are presented in the notes [Pi13] by G. Pisier. Our exposition is based on [Pi13] and the recent work [Ta12]. We refer to these authors for more accurate references, and in particular to [Du2] for a careful historical description of the early developments of the theory of Gaussian (and non-Gaussian too) processes.

The notion of \(\varepsilon\)-entropy goes back to Kolmogorov. Credit for the introduction of \(\varepsilon\)-entropy applied to regularity of processes goes to V. Strassen (see [Du1]) and V. N. Sudakov [Su1], [Su4]. The study of random processes under metric entropy conditions actually started with the Gaussian results of Section 11.3, where Theorem 11.17 is established. The landmark paper [Du1] by R. Dudley introduced this fundamental abstract framework in the field. It was only slowly realized afterwards that the Gaussian structure of this result relies only on the appropriate integrability properties of the increments \(X_s - X_t\), and that the technique could be extended to large classes of non-Gaussian processes. On the basis of Kolmogorov's continuity theorem (which already contains the fundamental chaining argument) and this observation, several authors investigated sufficient conditions for boundedness and continuity of processes whose increments \(X_s - X_t\) are nicely controlled. Among the various articles, let us mention (see also [Du2]) [De], [Bou], [Ha], [H-K], [N-N], [Ib] and [Kô], [Pi9] on the important case of increments in \(L_p\), and [J-M1] (on subgaussian processes, see also [J-M3]).

The general Theorem 11.1 is due to G. Pisier [Pi13] (on the basis of [Pi9], thanks to an observation of X. Fernique). Its refined version Theorem 11.2 is equivalent to the (perhaps somewhat unorthodox) formulation of [Fer7]. The tail behaviors deduced from this statement were precisely analyzed in some cases in [Ale1] in the context of empirical processes (see also [Fer7], [We]). Kolmogorov's theorem (Corollary 11.8) may be found e.g. in [Slu], [Ne1], [Bi]. The uniform continuity and compactness results (Theorem 11.6 and Corollary 11.7) simplify prior proofs by X. Fernique [Fer11].

We refer to the survey paper [He3] for a history of majorizing measures. In [G-R-R], A. Garsia, E. Rodemich and H. Rumsey establish a real variable lemma using integral bounds involving majorizing measures to be applied to regularity of random processes. This lemma was further used and refined in [Gar] and [G-R] and usually provides interesting moduli of continuity. These arguments have been proved useful in stochastic calculus by several authors (see e.g. [B-Y], [S-V], [Yo], [DM], etc.).

357

Let us note that our approach to majorizing measures is not completely similar to this real variable lemma; our concerns go more to integrability and tail behaviors rather than moduli of continuity. More precisely, the technique of [G-R-R] allows for example to show the following non-random property. Let \(f\) be real (continuous) on some metric space \((T,d)\) and let \(\psi\) be a Young function. Given a probability measure \(m\) on \((T,d)\), denote by \(\|\tilde f\|_{L_\psi(m \times m)}\) the Orlicz norm with respect to \(\psi\) of
\[
\tilde f(s,t) = \frac{|f(s) - f(t)|}{d(s,t)}\, I_{\{d(s,t) \ne 0\}}
\]
in the product space \((T \times T, m \times m)\). Then one can show (cf. [He3]) that for all \(s, t\) in \(T\),
\[
|f(s) - f(t)| \le 20\, \|\tilde f\|_{L_\psi(m \times m)} \sup_{u \in T} \int_0^{d(s,t)/2} \psi^{-1}\Bigl(\frac{1}{m(B(u,\varepsilon))^2}\Bigr)\, d\varepsilon \, .
\]
Note the square of \(m(B(u,\varepsilon))\) in the majorizing measure integral. While this square is irrelevant when \(\psi\) has exponential growth, this is not the case in general, and a main concern of the paper [Ta12] is to remove this square when it is not needed. In concrete situations, the evaluation of the integral (usually for Lebesgue measure on some compact subset of \(\mathbb{R}^N\)) yields various moduli of continuity.

On the basis of the seminal result of [G-R-R], C. Preston [Pr1], [Pr2] developed the concept of majorizing measures and basically obtained Theorem 11.18. However, in his main statement in [Pr1], Preston unnecessarily restricts his hypotheses; he was apparently not aware of the power of the present formulation, which was put forward by X. Fernique who completely established Theorem 11.18 [Fer4]. Fernique developed in the mean time a somewhat different point of view based on duality of Orlicz norms (cf. [Fer4]). Theorem 11.17 is due to R. Dudley [Du1], [Du2], after early observations and contributions by V. Strassen and V. N. Sudakov, while Theorem 11.18 is due to X. Fernique [Fer4]. Our exposition is taken from [Ta12], to which we actually refer for a more complete exposition involving general Young functions \(\psi\). The ultrametric structure and discretization procedure (Proposition 11.10 and Corollary 11.12), implicit in [Ta5], are described in [A-G-O-Z]. Lemma 11.16 is taken from [Fer8] (to show a result of [M-S2]). Various comments on regularity of Gaussian processes are taken from these references as well as from [M-S1], [M-S2] (where Slepian's lemma is introduced in this study), [Fer3], [Fer4], [He1], [He3], [Ad], [Ta5], [Ta7] and [Ta12], and in particular the recent connections between regularity of (stationary) Gaussian processes and regularity of local times of Lévy processes put forward by M. Barlow [Bar1], [Bar2], [B-H].

Subgaussian pro esses have been emphasized in their relation to the entral limit theorem in [J-M1℄.22 is due to the se ond author. see also [Ta19℄. It be omes very natural in the new presentation of majorizing measures introdu ed in [Ta18℄. Chapter 14).22 on regularity of Gaussian and Radema her haos pro esses improves observations of [Bo6℄.358 show a result of [M-S2℄).20 omes from [M-P2℄ (see also [M-P3℄) and will be ru ial in Chapter 13. Lemma 11. . The new te hnique of using simultaneously dierent majorizing measure onditions for dierent metri s in Theorems 11. [He1℄. Theorem 11.21 and 11. Let us mention here a volumi approa h to regularity of Gaussian pro esses in the paper [Mi-P℄ where lo al theory of Bana h spa es is used to prove a onje ture of [Du1℄. [J-M3℄ ( f.

2 Ne essary onditions for boundedness and ontinuity of stable pro esses 12.359 Chapter 12. Regularity of Gaussian and stable pro esses 12.1 Regularity of Gaussian pro esses 12.3 Appli ations and onje tures on Radema her pro esses Notes and referen es .

we presented suÆ ient metri entropy and majorizing measure onditions for sample boundedness and ontinuity of random pro esses satisfying in remental onditions. We will see indeed. In parti ular. This hara terization is performed in the . as one of the main results.3. The main on ern of this hapter is ne essity. This hara terization thus provides a omplete understanding of the regularity properties of Gaussian paths.360 Chapter 12. The arguments of proof rely heavily on the basi ultrametri stru ture whi h lies behind a majorizing measure ondition. that the suÆ ient majorizing measure ondition in order for a Gaussian pro ess to be almost surely bounded or ontinuous is a tually also ne essary. these results were applied to Gaussian pro esses in Se tion 11. Regularity of Gaussian and stable pro esses In the pre eding hapter.

3). the whole family of pro esses satisfying in remental onditions with respe t to the same Young fun tion and metri d . The diÆ ult subje t of Radema her pro esses is dis ussed through some onje tures in the very last part. d) . Let us mention at this point that the study of ne essity for non-Gaussian pro esses in this framework involves.2. 12. We refer the interested reader to [Ta12℄ for a study of ne essity for general pro esses in this setting of in remental onditions.1. Regularity of Gaussian pro esses Re all that a random pro ess X = (Xt )t2T is said to be Gaussian if ea h . is the subje t of Se tion 12. it is suÆ ien y whi h appears to be more diÆ ult in the stable ase. rather than one given pro ess.rst se tion whi h is ompleted by some equivalent formulations of the main result. as we will see. This hapter is ompleted with appli ations to subgaussian pro esses and some remarkable type properties of the inje tion map Lip(T ) ! C (T ) when there is a Gaussian or stable majorizing measure on (T. In the Gaussian ase. onfound those two situations and things be ome simpler. 1 p < 2 . The series representation and onditional use of Gaussian te hniques allow here to des ribe ne essary onditions as for Gaussian pro esses. This extension to p -stable pro esses. whi h appear as a ornerstone in this study of ne essity. A noti eable ex eption is however the ase of stable pro esses. Slepian's lemma and the omparison theorems (Se tion 3.

is a real Gaussian variable. The distribution of the Gaussian pro ess X is i therefore ompletely determined by its ovarian e stru ture IEXs Xt . to study the regularity properties of X . s. ti 2 T . i 2 IR . s. As we know. it is fruitful to analyze the geometry of T for the indu ed L2 -pseudo-metri dX (s. P . t) = kXs Xt k2 .nite linear ombination i Xti . t 2 T . t 2 T .

the majorizing measure integral on the right of (12. for some probability measure m . ") is the open ball of enter t and radius " > 0 in the pseudo-metri dX and X has almost surely bounded sample paths if. dX ) .18 that for any probability measure m on (T. (12:1) 1 Z IE sup Xt K sup t2T t2T 0 log 1 m(B (t.361 We have seen in Theorem 11.1) is . ")) 1=2 d" where B (t.

Re all further that IE sup Xt is simply t2T understood here as IE sup Xt = supfIE sup Xt .nite. F . There is a similar result about ontinuity.

2) and (12. "))1=2 d" : Now. whi h de. There is however a small gap whi h may be put forward by the example of an independent Gaussian sequen e (Yn ) su h that kYnk2 = (log(n + 1)) 1=2 for all n .10).3) appear to be rather lose from ea h other. we have seen in Theorem 3. This sequen e.17) (12:2) IE sup Xt K t2T 1 Z 0 (log N (T.nite in T g : t2T t2F K is further some numeri al onstant whi h may vary from line to line in what follows. (12.1) ontains the familiar entropy bound (Theorem 11. dX . dX . "))1=2 K IE sup Xt : ">0 t2T These two bounds (12.18 a lower bound in terms of entropy numbers whi h indi ates that (12:3) sup "(log N (T. By (11.

")) t2T t2T 0 Z .10).nes an almost surely bounded pro ess by (3. the majorizing measure integral of (12. the metri entropy ondition does hara terize almost sure boundedness and ontinuity of Gaussian pro esses. we will now show that the minoration (12.7).2) and (12.1) lies in between the entropi bounds (12. dX ) . shows that boundedness of Gaussian pro esses annot be hara terized by the metri entropy integral in (12. majorizing measures provide a way to take into a ount the possible la k of homogeneity of (T. As a main result. Further. by (11. We will see in the next hapter that in. As we know.9) and (11.2). The reason for this failure is due to the possible la k of homogeneity of T for the metri dX .3) an be improved into (12:4) 1=2 1 1 sup d" K IE sup Xt log m(B (t.3). a homogeneous setting.

1℄ with h(1) = 0 and xlim !0 h(x) = 1 . To unify the exposition. together with (12. y in (0. or (log(1=x))1=2 ( f. let us thus onsider a stri tly de reasing fun tion h on (0. The proof of this result.362 for some probability measure m on T . The majorizing measure onditions whi h will be in luded in this study on ern in parti ular the ones asso iated to the Young fun tions q (x) = exp(xq ) 1 .18)). existen e of a majorizing measure for the fun tion 2 1 . 1℄ (12:5) h(xy) h(x) + h(y) : [This ondition may be weakened into h(xy) h(x) + h(y) for some positive and the reader an veri. ompletely hara terizes boundedness of Gaussian pro esses X in terms only of their asso iated L2 -metri dX . to whi h we now turn. We assume that for all x. (11. Hen e. requires a rather involved study of majorizing measures. This will be a omplished in an abstra t setting whi h we now des ribe.1). There is a similar result for ontinuity.

the numeri al onstant of Theorem 12. the main examples we have in mind with this fun tion h are the examples of hq (x) = (log(1=x))1=q . Let us mention that we never attempt to .ed that the subsequent arguments go through in this ase.℄ As announ ed. and also h1 (x) = log(1 + log(1=x)) . 1 q < 1 .5 depending then on h .

we adopt the onvention that B (t. d) = inf m (T. d) = inf m (T ) . d) = sup t2T Z 0 1 h(m(B. i.e. and throughout this study. (t. t 2 T g : Given a probability measure m on the metri spa e (T. d) is a metri spa e. Here. t) .nd sharp numeri al onstants. If (T. ") is the open ball of enter t and radius " > 0 in (T. but always use rude. when no ambiguity arises. D = D(T ) = supfd(s. We also let (T ) = (T. "))d" where B (t. but simple. d) . ") denotes the ball for the distan e on the spa e whi h ontains t . re all we denote by D = D(T ) its diameter. s. let m (T ) = m (T. bounds. d) .

363 where the in.

i.e. d) refers to the quantity asso iated to the metri spa e (A. For a subspa e A of T . (A) = inf m (A) where the in. (A) = (A. d) .mum is taken over all probability measures m on T .

Æ(v. w in U we have Æ(u.mum is taken over the probability measures supported by A . and until further noti e. Æ) is alled ultrametri if for u. we assume that all the metri spa es are . w) max(Æ(u. w)) : The ni e feature of ultrametri spa es is that two balls of the same radius are either identi al or disjoint. v. From now on. v). Re all that a metri spa e (U.

say that a map ' from U onto T is a ontra tion if d('(u). Given a metri spa e (T. v in U . De. '(v)) Æ(u. v) for u. d) and an ultrametri spa e (U. Æ) .nite.

ne the fun tional (T ) = (T. d) by (T ) = inf f (U ) . U is ultrametri and T is the image of U by a ontra tiong : Although (T ) omes .

the quantity (T ) is easier to manipulate and yields stronger results.rst in mind as a way to measure the size of T . We .

(i) (T ) (T ) .1. The following hold under the pre eding notations. (v) (T ) = inf f (U ). then (A) 2 (T ) . then (A) (U ) . D(U ) D(T ) and T is the image of U by a ontra tion and this in. Lemma 12. (iii) If U is ultrametri and A U . U is ultrametri .rst olle t some simple fa ts. (ii) If A T . then (A) (T ) . (iv) If A T .

") . ")) (B (u. take a(t) in A with d(t. A) . . ")) B (u. ")) . (i) Let ' be a ontra tion from U onto T .9 but let us brie y re all the argument. that ' 1 (B ('(u). g (vi) D(T ) 2[h(1=2)℄ 1 (T ) . sin e ' is a ontra tion. a(t)) = d(t. For t in T . a probability measure on U and m = '() . we have. " > 0 . so (T ) (U ) sin e is arbitrary. Sin e h is de reasing and ' onto. so m(B ('(u). we get m (T ) (U ) . (ii) This has already been shown in Lemma 11. For u in U . therefore (T ) (T ) sin e U and ' are arbitrary.mum is attained. Proof.

Æ1 ) (U. (iii) With the notations of the proof of (ii). Æ) is ultrametri and ' is a ontra tion from U onto T . The last assertion follows by a standard ompa tness argument. 2")) m(B (x. a(t)) d(t. say the . t). By (iii). t) . Then (U. v). a(t)) d(x. Sin e = a(m) . (U. By the argument of (i). it follows that (B (x. so d(t. v) = min(Æ(u. Æ) . Fix x in d(t. Æ1) is ultrametri and ' is still a ontra tion from (U. we get that (A) (' 1 (A)) (U ) and thus (A) (T ) . (iv) Let U be ultrametri and let ' be a ontra tion from U onto T . d(t. the ultrametri ity gives d(x. t) and thus (A) (U ) in this ase. a(t)) 2d(x. x) . ")) . (A) 2 m(T ) whi h gives the results. Æ1 ) onto T . =2) and B (t. A . x) and d(x. =2) are disjoint so that if m is a probability measure on T . a(t)) max(d(x.364 Let m be a probability measure on T and let = a(m) so that is supported by A . t) . For t in T . onsider the distan e Æ1 on U given by Æ1 (u. The balls B (s. Hen e. by a hange of variables. (vi) Take two points s and t in T and let = d(s. we have d(t. one of these balls. A) (v) If (U. D(T )) .

t are arbitrary and h(1=2) > 0 by (12. Let T be a . Therefore m (T ) Z =2 0 1 h(m(B (s.5). The proof of the lemma is omplete. s. Lemma 12. ")))d" h( ) 2 2 from whi h the result follows sin e m. It exhibits a behavior of that resembles a strong form of subadditivy.2.rst. has a measure less than 1=2 . The next lemma is one of the key tools of this investigation.

nite metri spa e with diameter D = D(T ) . Suppose that we have a .

1 (v). De. Æi ) of diameter less than D . Then. there exists an ultrametri spa e (Ui . Let U be the disjoint sum of the spa es (Ui )in . From Lemma 12. for every i = 1. (T ) max[(Ai ) + D(T )h(ai )℄ : in Proof. : : : . an with n P i=1 ai 1 . : : : . : : : . n .nite overing A1 . a ontra tion 'i from Ui onto Ai and a probability measure i on Ui su h that (Ai ) = i (Ui ) (or arbitrarily lose). An of T . for every positive numbers a1 .

v) whenever u. n P Consider the positive measure 0 on U given by 0 = ai i . Æ) is ultrametri and the map ' from U onto T given by '(u) = 'i (u) for u in Ui is a ontra tion. and Æ(u. v) = Æi (u. there is a probability on i=1 . Then (U.ne the distan e Æ on U by Æ(u. v) = D otherwise. v belong to the same Ui . Sin e j0 j 1 .

"))) h(0 (B (u. If (T. ")))d" + Dh(ai ) 0 (Ai ) + Dh(ai ) : (U ) (U ) max[(Ai ) + Dh(ai )℄ in from whi h the on lusion follows.5). d) is a . "))) h(i (B (u. ")))d" h(i (B (u. "))) + h(ai ) : It follows that 1 Z 0 h((B (u. ")))d" = Therefore Z D Z0 1 h((B (u. The next lemma is the basi step in the subsequent onstru tion. h((B (u. By (12.365 U with 0 . Take then u in U and let i be su h that u 2 Ui . "))) h(ai i (B (u.

nite metri spa e. for every integer k (in ZZ ). . let.

k (T ) = (T ) sup (B (x.3. 6 k )) : x2T Lemma 12. d) be a . Let (T.

nite metri spa e of diameter less than 6 k . We are ne essarily in one of the following two ases: (i) either there exists a subset S of T of diameter less than 6 k (12:6) satisfying .

6 k 2 )) = maxf(B (x. xi 1 have been onstru ted. in T in the following manner: (B (x1 . By indu tion. xj ) 3 6 k 2 g : . we onstru t points xi . : : : .k+2 (S ) 2((T ) (S )) . and. i 1 . Suppose that (i) does not hold. 6 k 2 )) . 6 k 2 )) is maximal. (ii) or there exists balls (Bi )1iN of radius 6 k 3 6 k 2 su h that (12:7) 1 2 with enters at mutual distan e larger than 1 for all i . if x1 . 8j < i . (Bi ) (T ) 6 k+1 h 1+N : Proof. we take xi su h that (B (xi . d(x.

. ne essarily. set then [ Si = B (xi . 3 6 k 2 )n B (xj .366 for every i . for any i . 3 6 k 2 ) : j<i Sin e (i) does not hold.

By onstru tion .k+2 (Si ) 2((T ) (Si ) (B (xi . 6 k 2 )) . for all i . (Si )) . Thus.

6 k 2 )) = (Si ) .k+2 (Si ) = (B (xi .

Hen e. They an be assumed to be ordered in su h a way that the sequen e P ((Si )) is de reasing. : : : .k+2 (Si ) (12:8) (Si ) 2((T ) (Si )) (T ) 3((T ) (Si )) : The union of the Si 's overs T .7) is satis. 6 k 2 )) (T ) 6 k+1 h 1 1 + CardI : Letting Bi = B (xi . h((i0 + 1) 2 ) 2h((i0 + 1) 1 ) .9) that for all i in I . we see from (12.8) yields that for all i in I .2. (B (xi .10) with (12. (12:10) (Si ) (T ) 2 6 k h 1 1 + CardI : Combining (12. Therefore. there exists i1 i0 1 su h that (12:9) (Si0 ) (T ) 6 k h((i0 + 1) 2 ) : By (12. if I = f1. sin e ((Si )) is de reasing. by Lemma 12. 6 k 2 ) for i in I and N = CardI shows that we are in ase (ii) of the statement and that (12. i0g .5). we have ai 1 . If we let ai = (i + 1) 2 .

b) . For two subsets A.3 and onstru t in this way subsets (a tually balls) whi h are well separated and whose fun tionals are big enough and arry enough information on (T ) itself.ed. We now perform the main onstru tion. Iterative use of this proposition gives raise to a "tree" and an ultrametri stru ture. B ) = inf fd(a. a 2 A . d) . . we exhaust it with the alternative of Lemma 12. B of a metri spa e (T. Lemma 12. let d(A. b 2 B g .3 is established. Given a metri spa e T .
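The greedy selection of well-separated centers in the proof of Lemma 12.3 can be sketched numerically. In the following illustrative Python sketch the functional $\alpha$ is replaced by the cardinality of the ball — a stand-in chosen only so the example runs, not the functional of the text — and the scale $6^{-k-2}$ is replaced by a generic radius `r`.

```python
def greedy_centers(points, dist, r, alpha):
    """Greedily pick centers x_1, x_2, ... maximizing the ball functional
    alpha(B(x, r)) among the points at distance >= 3*r from all previously
    chosen centers (the selection rule in the proof of Lemma 12.3)."""
    centers = []
    candidates = list(points)
    while candidates:
        # candidate whose ball B(x, r) has the largest functional value
        x = max(candidates,
                key=lambda x: alpha([y for y in points if dist(x, y) <= r]))
        centers.append(x)
        # keep only points far (>= 3r) from every chosen center
        candidates = [y for y in candidates
                      if all(dist(z, y) >= 3 * r for z in centers)]
    return centers

# toy example on the line, alpha = cardinality of the ball
pts = [0.0, 0.1, 0.2, 1.0, 1.05, 2.5]
d = lambda a, b: abs(a - b)
centers = greedy_centers(pts, d, r=0.15, alpha=len)
```

The chosen centers are automatically at mutual distance at least $3r$, which is what makes the sets $S_i$ of the proof disjoint.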

Proposition 12.4. Let $(T,d)$ be a finite metric space of diameter less than $6^{-k}$. There exist an integer $\ell \ge k$ and subsets $(B_i)_{1\le i\le N}$ of $T$, $N\ge 1$, of diameter less than $6^{-\ell-1}$ such that $d(B_i,B_j) \ge 6^{-\ell-2}$ for $i\ne j$ and, for all $i$,

$$(12.11)\qquad \alpha(B_i) \ge \alpha(T) - 2\cdot 6^{-\ell+1}\,h\Big(\frac{1}{1+N}\Big)\,.$$

Proof. By induction, we construct a decreasing sequence $(T_m)$ of subsets of $T$ such that $T_0 = T$, $D(T_m) \le 6^{-k-m}$ and

$$(12.12)\qquad \alpha_{k+m+1}(T_m) \ge 2\big(\alpha(T_{m-1}) - \alpha(T_m)\big)$$

for all $m\ge 1$: as long as we are in case (i) of Lemma 12.3, we simply take for $T_m$ the subset $S$ given there. The construction stops (since $T$ is finite) for some $m = n$, and in $T_n$ we are necessarily in case (ii) of Lemma 12.3 (since if not we would be able to continue the exhaustion).

We first note that, for all $m\le n$,

$$(12.13)\qquad \alpha(T) - \alpha(T_m) \le \alpha_{k+m+1}(T_m)\,.$$

Indeed, this is clearly the case for $m=1$ (since we even have in this case that $\alpha_{k+2}(T_1)\ge 2(\alpha(T)-\alpha(T_1))$). Assume then that (12.13) is satisfied for $m$ and let us show it for $m+1$. We have by (12.12) that

$$\alpha_{k+m+2}(T_{m+1}) \ge 2\big(\alpha(T_m)-\alpha(T_{m+1})\big) \ge \alpha(T_m)-\alpha(T_{m+1}) + \alpha_{k+m+1}(T_m)$$

since $\alpha(T_m)-\alpha(T_{m+1}) \ge \alpha_{k+m+1}(T_m)$ by definition of the functional $\alpha_{k+m+1}$ (the set $T_{m+1}$ has diameter less than $6^{-k-m-1}$). Thus, by the induction hypothesis,

$$\alpha_{k+m+2}(T_{m+1}) \ge \alpha(T_m)-\alpha(T_{m+1}) + \alpha(T)-\alpha(T_m) = \alpha(T)-\alpha(T_{m+1})$$

and (12.13) indeed holds. Set now $\ell = k+n$. Since in $T_n$ we are in case (ii) of Lemma 12.3, we can find balls $(B_i')_{1\le i\le N}$ (of $T_n$) of radius $6^{-\ell-2}$ with centers at mutual distance larger than $3\cdot 6^{-\ell-2}$ such that, for all $i$,

$$(12.14)\qquad \alpha(B_i'\cap T_n) \ge \alpha(T_n) - 6^{-\ell+1}\,h\Big(\frac{1}{1+N}\Big)\,.$$

Since each $B_i'\cap T_n$ is contained in a ball of radius $6^{-\ell-1}$, we have $\alpha(B_i'\cap T_n) \le \alpha(T_n) - \alpha_{\ell+1}(T_n)$, so that (12.14) shows that $\alpha_{\ell+1}(T_n) \le 6^{-\ell+1}h(1/(1+N))$; combining (12.13) (for $m=n$) and (12.14) then yields that

$$\alpha(B_i'\cap T_n) \ge \alpha(T) - 2\cdot 6^{-\ell+1}\,h\Big(\frac{1}{1+N}\Big)$$

and the proof of Proposition 12.4 is complete with $B_i = B_i'\cap T_n$.

Let $U$ be an ultrametric space. For $x$ in $U$ and $k\in\mathbb{Z}$, let $B(x,6^{-k})$ be the ball of $U$ with center $x$ and radius $6^{-k}$, and let $N_k(x)$ be the number of disjoint balls of radius $6^{-k-1}$ which are contained in $B(x,6^{-k})$. Define

$$\lambda_x(U) = \sum_{k\in\mathbb{Z}} 6^{-k}\,h\big(1/N_k(x)\big)\,,\qquad \lambda(U) = \inf_{x\in U}\lambda_x(U)\,.$$

We note that if $D(U) \le 6^{-k_0}$ and $B(x,6^{-k_1}) = \{x\}$ for all $x$, then

$$\lambda_x(U) = \sum_{k_0\le k<k_1} 6^{-k}\,h\big(1/N_k(x)\big)\,.$$

We can now state and prove the main conclusion of the preceding construction.

Theorem 12.5. There is a numerical constant $K$ with the following property: for each function $h$ satisfying (12.5) and each finite metric space $(T,d)$, there exist an ultrametric space $(U,\delta)$ and a map $\varphi : U\to T$ such that the following conditions hold:

$$\alpha(T) \le K\lambda(U)\,,\qquad \delta(u,v) \le d\big(\varphi(u),\varphi(v)\big) \le 6^3\,\delta(u,v) \quad\text{for all } u,v \text{ in } U\,.$$

Proof. We intend to prove the theorem with $K = 4\cdot 6^3$. Let $k_0$ be the largest integer (in $\mathbb{Z}$) with $6^{-k_0} \ge D(T)$. Consider two points $u$, $v$ of $T$ with $d(u,v) = D(T)$. The space $U = (\{u,v\},d)$ is ultrametric and the canonical injection $\varphi$ from $U$ in $T$ satisfies $6^{-k_0-1} \le d(\varphi(u),\varphi(v)) \le 6^{-k_0}$. The balls $B(u,6^{-k_0-1})$, $B(v,6^{-k_0-1})$ are disjoint, so we have

$$\lambda(U) \ge 6^{-k_0-1}h(1/2) \ge 6^{-1}h(1/2)\,D(T)\,.$$

By the preceding, the result holds unless $\alpha(T) \ge 4\cdot 6^2 h(1/2)\,D(T)$. It thus remains to prove the theorem in that case only.

By induction over $k\ge k_0$, we construct a family $\mathcal{B}$ of subsets $A$ of $T$ in the following way. The construction starts with $A = T$, and each step is performed by an application of Proposition 12.4 to each element of $\mathcal{B}$ obtained at the step before. That is, if $A\in\mathcal{B}$ has diameter less than $6^{-k}$, there exist integers $k(A)\ge k$ and $N(A)\ge 1$, and subsets $(B_i(A))_{1\le i\le N(A)}$ of $A$ of diameter less than $6^{-k(A)-1}$ such that $d(B_i(A),B_j(A)) \ge 6^{-k(A)-2}$, with the following property: for all $i\le N(A)$,

$$(12.15)\qquad \alpha\big(B_i(A)\big) \ge \alpha(A) - 2\cdot 6^{-k(A)+1}\,h\Big(\frac{1}{1+N(A)}\Big)\,.$$

The construction stops when each element $A$ of $\mathcal{B}$ is reduced to exactly one point and we denote by $U$ the collection of points of $T$ obtained in this way. For $u$, $v$ in $U$, $u\ne v$, there exists $A$ in $\mathcal{B}$ such that for two different $B_i(A)$, $B_j(A)$, $u\in B_i(A)$, $v\in B_j(A)$. We then set $\delta(u,v) = 6^{-k(A)-2}$. $\delta$ is ultrametric on $U$ and, if $\varphi$ is the canonical injection map from $U$ into $T$, we have by construction that $\delta(u,v) \le d(\varphi(u),\varphi(v))$; moreover, since the diameter of $\bigcup_i B_i(A)$ is less than $6^{-k(A)+1}$, $d(\varphi(u),\varphi(v)) \le 6^3\delta(u,v)$.

Fix $x$ in $U$. Denote by $(A_\ell)_{\ell\ge 1}$ the decreasing sequence of elements of $\mathcal{B}$ that contain $x$, $A_1 = T$. By (12.15), for every $\ell\ge 1$,

$$\alpha(A_{\ell+1}) \ge \alpha(A_\ell) - 2\cdot 6^{-k(A_\ell)+1}\,h\Big(\frac{1}{1+N(A_\ell)}\Big)\,.$$

Since $\alpha(\{x\}) = 0$, summation of those inequalities yields

$$(12.16)\qquad \alpha(T) \le 12\sum_{\ell\ge 1} 6^{-k(A_\ell)}\,h\Big(\frac{1}{1+N(A_\ell)}\Big)\,.$$

For $k\ge k_0+3$, let $B(x,6^{-k})$ be the $\delta$-ball of $U$ with center $x$ and radius $6^{-k}$. By definition of $\delta$, if $k = k(A_\ell)+2$ for some $\ell$, then $N_k(x) = N(A_\ell)$, while if there is no such $\ell$, $N_k(x) = 1$. Hence, from (12.16) and (12.5) (since $1+N(A_\ell) \le 2N(A_\ell)$, $h(1/(1+N(A_\ell))) \le h(1/2)+h(1/N(A_\ell))$), we get that

$$\alpha(T) \le 12\Big(h(1/2)\sum_{k>k_0}6^{-k} + 6^2\lambda_x(U)\Big) \le 12\big(6h(1/2)D(T) + 6^2\lambda_x(U)\big)\,.$$

Since we are in the case $\alpha(T) \ge 4\cdot 6^2 h(1/2)D(T)$, it follows that $\alpha(T) \le 4\cdot 6^3\lambda_x(U)$. Since $x$ is arbitrary in $U$, we have $\alpha(T) \le 4\cdot 6^3\lambda(U)$, which is the announced claim.

Provided with the abstract Theorem 12.5, we can now prove the existence of majorizing measures for bounded Gaussian processes (at least, to start with, indexed by a finite set). In the rest of this section, $h(x) = (\log(1/x))^{1/2}$.
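The distance $\delta$ produced by the construction of Theorem 12.5 depends only on the last level of the tree $\mathcal{B}$ at which two points lie in a common set. A minimal Python sketch (nested partitions standing in for the family $\mathcal{B}$, with the scaling $6^{-(k+2)}$ mimicking $\delta(u,v) = 6^{-k(A)-2}$; the letters and block names are hypothetical) checks the strong triangle inequality:

```python
import itertools

def ultrametric_from_partitions(partitions):
    """partitions[k] is a list of disjoint blocks (sets), each level refining
    the one before; delta(u, v) = 6**-(k+2) where k is the last level at
    which u and v share a block (mimicking delta of Theorem 12.5)."""
    def delta(u, v):
        if u == v:
            return 0.0
        last = -1
        for k, part in enumerate(partitions):
            if any(u in blk and v in blk for blk in part):
                last = k
            else:
                break
        return 6.0 ** -(last + 2)
    return delta

parts = [
    [{'a', 'b', 'c', 'd'}],            # level 0: one block
    [{'a', 'b'}, {'c', 'd'}],          # level 1
    [{'a'}, {'b'}, {'c'}, {'d'}],      # level 2: singletons
]
delta = ultrametric_from_partitions(parts)
pts = 'abcd'
# the strong triangle inequality holds for every triple
ok = all(delta(u, w) <= max(delta(u, v), delta(v, w))
         for u, v, w in itertools.product(pts, repeat=3))
```

Points separated earlier in the tree are farther apart, exactly as for the balls $B_i(A)$ of the construction.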

Theorem 12.6. Let $X = (X_t)_{t\in T}$ be a Gaussian process indexed by a finite set $T$, and provide $T$ with the canonical distance $d_X(s,t) = \|X_s - X_t\|_2$. Then

$$\alpha(T,d_X) \le K\,\mathbb{E}\sup_{t\in T}X_t$$

where $K$ is a numerical constant.

The fact here that $d_X$ does not possibly separate all points of $T$ is no problem: simply identify $s$ and $t$ such that $d_X(s,t) = 0$; the new index set $\widetilde{T}$ obtained in this way is such that $(\widetilde{T},d_X) = (T,d_X)$ and $\mathbb{E}\sup_{t\in T}X_t = \mathbb{E}\sup_{t\in\widetilde{T}}X_t$. Let $U$, $\varphi$ be as given by the application of Theorem 12.5 to the space $(T,d_X)$ (or $(\widetilde{T},d_X)$). We note that for $u$, $v$ in $U$, $d_X(\varphi(u),\varphi(v)) \ge \delta(u,v)$, so that it is enough to show that $\lambda(U) \le K\,\mathbb{E}\sup_{u\in U}X_{\varphi(u)}$, and the theorem is a consequence of the following result that we single out for future reference. It is at this point, actually the unique place in this study, that the Gaussian structure, through the comparison theorems based on Slepian's lemma, plays its key role.

Proposition 12.7. Let $(U,\delta)$ be a finite ultrametric space. Then, for each Gaussian process $X = (X_u)_{u\in U}$ such that $d_X(u,v) \ge \delta(u,v)$ whenever $u$, $v\in U$, we have $\lambda(U) \le K\,\mathbb{E}\sup_{u\in U}X_u$ where $K$ is numerical.

Proof. Let $k_0$ be the largest integer such that $6^{-k_0} \ge D(U)$. For $k>k_0$, let $\mathcal{B}_k$ be the collection of the balls of radius $6^{-k}$ of $U$, and let $\mathcal{B} = \bigcup_{k>k_0}\mathcal{B}_k$. Consider an independent family $(g_B)_{B\in\mathcal{B}}$ of standard normal variables. For $u$ in $U$ and $k>k_0$, we write simply $g_{u,k} = g_{B(u,6^{-k})}$. We let further

$$Z_u = \sum_{k>k_0} 6^{-k} g_{u,k}\,.$$

Take $u$, $v$ in $U$ and let $\ell$ be the largest integer such that $\delta(u,v) \le 6^{-\ell}$. Then $B(u,6^{-k}) = B(v,6^{-k})$ for $k\le\ell$, so that $Z_u - Z_v = \sum_{k>\ell}6^{-k}(g_{u,k}-g_{v,k})$. It follows that

$$\|Z_u - Z_v\|_2 \le \sqrt{2}\sum_{k>\ell}6^{-k} \le 2\cdot 6^{-\ell-1} \le 2\delta(u,v) \le 2d_X(u,v)\,.$$

Corollary 3.14 then shows that it is enough to establish that $\lambda(U) \le A\,\mathbb{E}\sup_{u\in U}Z_u$ for some constant $A$. We take $A$ such that $(\log N)^{1/2} \le A\,\mathbb{E}\max_{i\le N}g_i$ for all $N$, where $(g_i)$ is an orthogaussian sequence. By induction over $n$, we establish the following statement:

$(H_n)$ If $U$ has diameter at most $6^{-k_0}$ and if, for each $x$ in $U$, $B(x,6^{-k_0-n}) = \{x\}$, then $\lambda(U) \le A\,\mathbb{E}\sup_{u\in U}Z_u$.

For $n = 0$, $U$ contains only one point so that $\lambda(U) = 0$ and $(H_0)$ holds. Let us assume that $(H_n)$ holds and let us prove $(H_{n+1})$. We enumerate $\mathcal{B}_{k_0+1}$ as $\{B_1,\dots,B_q\}$. For $i\le q$, let

$$\Omega_i = \{\forall p\le q,\ p\ne i,\ g_{B_p} < g_{B_i}\}\,.$$

For $u$ in $U$, define

$$Z_u' = \sum_{k>k_0+1}6^{-k}g_{u,k} = Z_u - 6^{-k_0-1}g_{u,k_0+1}\,.$$

For $i\le q$, consider a measurable map $\xi_i$ from $\Omega$ to $B_i$ that satisfies $Z'_{\xi_i} = \sup_{u\in B_i}Z_u'$. Define now a measurable map $\xi$ from $\Omega$ to $U$ by $\xi(\omega) = \xi_i(\omega)$ for $\omega$ in $\Omega_i$. We have

$$\mathbb{E}\sup_{u\in U}Z_u \ge \mathbb{E}Z_\xi = \sum_{i\le q}\mathbb{E}\big(I_{\Omega_i}Z_{\xi_i}\big) \ge \sum_{i\le q}\mathbb{E}\big(I_{\Omega_i}(6^{-k_0-1}g_{B_i} + Z'_{\xi_i})\big) = 6^{-k_0-1}\sum_{i\le q}\mathbb{E}\big(I_{\Omega_i}g_{B_i}\big) + \sum_{i\le q}\mathbb{E}\big(I_{\Omega_i}Z'_{\xi_i}\big)\,.$$

Now

$$\sum_{i\le q}\mathbb{E}\big(I_{\Omega_i}g_{B_i}\big) = \mathbb{E}\max_{i\le q}g_{B_i} \ge A^{-1}(\log q)^{1/2}\,.$$

Further, for every $i$, the independence of the variables $(g_B)_{B\in\mathcal{B}}$ shows that $I_{\Omega_i}$ and $Z'_{\xi_i}$ are independent, and thus

$$\mathbb{E}\big(I_{\Omega_i}Z'_{\xi_i}\big) = \mathbb{P}(\Omega_i)\,\mathbb{E}Z'_{\xi_i} = \frac{1}{q}\,\mathbb{E}Z'_{\xi_i}\,.$$

By the induction hypothesis, for every $i$, $A\,\mathbb{E}Z'_{\xi_i} = A\,\mathbb{E}\sup_{u\in B_i}Z_u' \ge \lambda(B_i)$. Since the definition of $\lambda$ makes it clear that for each $i$

$$\lambda(U) \le \lambda(B_i) + 6^{-k_0-1}(\log q)^{1/2}\,,$$

the proof is complete.

Theorem 12.6 proves the existence of a majorizing measure for Gaussian processes when the index set is finite, since the majorizing measure integral is controlled by the functional $\alpha(T)$ (Lemma 12.1). We now deduce from this finite case the existence of a majorizing measure for almost surely bounded general Gaussian processes.
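The tree-indexed process $Z_u = \sum_k 6^{-k} g_{B(u,6^{-k})}$ of the proof of Proposition 12.7 can be examined exactly, since $\|Z_u - Z_v\|_2$ is determined by the levels at which the balls of $u$ and $v$ differ. A small sketch (with a hypothetical three-level tree of balls; the names `B1`, `B11`, ... are illustrative only) verifies the comparison $\|Z_u - Z_v\|_2 \le 2\delta(u,v)$ used before applying Slepian's lemma:

```python
import math

# balls at level k form a nested partition; an independent standard Gaussian
# g_B is attached to each ball and Z_u = sum_k 6**-k * g_{B(u, 6**-k)}.
# For u != v the coefficients of shared balls cancel, so
# ||Z_u - Z_v||_2**2 = 2 * sum of 6**-2k over the non-shared levels.

levels = {  # ball containing each point, per level k = 1, 2, 3 (hypothetical)
    1: {'a': 'B1', 'b': 'B1', 'c': 'B2'},
    2: {'a': 'B11', 'b': 'B12', 'c': 'B21'},
    3: {'a': 'B111', 'b': 'B121', 'c': 'B211'},
}

def dist_Z(u, v):
    # L2 distance of Z_u - Z_v from the independent-Gaussian coefficients
    var = sum(2 * (6.0 ** -k) ** 2
              for k, balls in levels.items() if balls[u] != balls[v])
    return math.sqrt(var)

def delta(u, v):
    # ultrametric distance: 6**-l, l the largest level where the balls agree
    if u == v:
        return 0.0
    l = max([0] + [k for k, balls in levels.items() if balls[u] == balls[v]])
    return 6.0 ** -l
```

Points that separate higher in the tree are farther apart for both distances, which is the mechanism exploited in the induction $(H_n)$.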

The use of the functional $\alpha$ actually yields a seemingly stronger, but equivalent (see Remark 12.11), statement with, in this form, some interesting consequences to be developed next. Anticipating on the next section, let us mention that the following two theorems, as well as their consequences, on necessary conditions for boundedness and continuity of sample paths of Gaussian processes actually hold similarly for other processes once a statement for $T$ finite analogous to Theorem 12.6 can be established for them. This is the procedure which will indeed be followed for stable processes in the next section. We therefore write the proofs below with a general function $h$. Metric spaces are no longer always finite.

Theorem 12.8. Consider a bounded Gaussian process $X = (X_t)_{t\in T}$. Then, there exists a probability measure $m$ on $(T,d_X)$ such that

$$\sup_{t\in T}\int_0^\infty\Big(\log\frac{1}{\sup\{m(\{s\})\,;\ d_X(s,t)<\varepsilon\}}\Big)^{1/2}d\varepsilon \le K\,\mathbb{E}\sup_{t\in T}X_t$$

where $K$ is a numerical constant.

Proof. Theorem 12.6 shows that for each finite subset $F$ of $T$,

$$\alpha(F) \le K\,\mathbb{E}\sup_{t\in F}X_t \le K\,\mathbb{E}\sup_{t\in T}X_t\,.$$

It is hence enough to show that if $\alpha = \sup\{\alpha(F)\,;\ F\subset T,\ F\ \text{finite}\} < \infty$, there is a probability measure $m$ on $T$ such that the left hand side of the inequality of the theorem is less than $K\alpha$. Since $X$ is almost surely bounded, $N(T,d_X,\varepsilon) < \infty$ for every $\varepsilon > 0$ (Theorem 3.18). Denote by $k_0$ the largest integer with $2^{-k_0} \ge D(T)$. For $\ell \ge k_0$, let $T_\ell$ be a finite subset of $T$ such that each point of $T$ is within a distance $2^{-\ell}$ of a point of $T_\ell$, and consider a map $a_\ell$ from $T$ to $T_\ell$ such that $d_X(t,a_\ell(t)) \le 2^{-\ell}$. For every $\ell$, we know that $\alpha(T_\ell) \le \alpha$. So there exist an ultrametric space $(U_\ell,\delta_\ell)$, a contraction $\varphi_\ell$ from $U_\ell$ onto $T_\ell$, and a probability $m_\ell$ on $U_\ell$ such that, for every $u$ in $U_\ell$,

$$(12.17)\qquad \sum_{k>k_0}2^{-k}h\big(m_\ell(B(u,2^{-k}))\big) \le 2\alpha\,.$$

To each ball $B$ in an ultrametric space $U$, we associate a point $v(B)$ of $B$. Let $\mathcal{B}_k$ be the family of balls of radius $2^{-k}$ of $U_\ell$. For each $k$, denote by $m_{\ell k}$ the probability measure on $T$ that, for each $B$ in $\mathcal{B}_k$, assigns mass $m_\ell(B)$ to the point $a_k(\varphi_\ell(v(B)))$. We note that $m_{\ell k}$ is supported by $T_k$.

Fix $t$ in $T$. Choose $t_0$ in $T_\ell$ with $d_X(t,t_0) \le 2^{-\ell}$ and take $u$ in $U_\ell$ such that $\varphi_\ell(u) = t_0$. Since $\varphi_\ell$ is a contraction, $\delta_\ell(u,v(B(u,2^{-k}))) \le 2^{-k}$ implies $d_X(t_0,\varphi_\ell(v(B(u,2^{-k})))) \le 2^{-k}$, so

$$d_X\big(t,a_k(\varphi_\ell(v(B(u,2^{-k}))))\big) \le 2^{-k+1} + 2^{-\ell}\,.$$

We set $t_{\ell k} = a_k(\varphi_\ell(v(B(u,2^{-k}))))$, so that $d_X(t,t_{\ell k}) \le 2^{-k+1}+2^{-\ell}$ and $m_{\ell k}(\{t_{\ell k}\}) \ge m_\ell(B(u,2^{-k}))$. It follows from (12.17) that

$$\sum_{k>k_0}2^{-k}h\big(m_{\ell k}(\{t_{\ell k}\})\big) \le 2\alpha\,.$$

Let $\mathcal{U}$ be an ultrafilter in $\mathbb{N}$. Since $t_{\ell k}$ belongs to the finite set $T_k$, the limit $t_k = \lim_{\ell\to\mathcal{U}}t_{\ell k}$ exists and $d_X(t,t_k) \le 2^{-k+1}$. Since $m_{\ell k}$ is supported by the finite set $T_k$, the limit $m_k = \lim_{\ell\to\mathcal{U}}m_{\ell k}$ exists and thus, for each $t$ in $T$,

$$\sum_{k>k_0}2^{-k}h\big(m_k(\{t_k\})\big) \le 2\alpha\,.$$

Let $m = \sum_{k>k_0}2^{k_0-k}m_k$, so $m$ is a probability on $T$. By (12.5),

$$h\big(m(\{t_k\})\big) \le h\big(2^{k_0-k}m_k(\{t_k\})\big) \le h(2^{k_0-k}) + h\big(m_k(\{t_k\})\big)\,.$$

It follows that

$$\sum_{k>k_0}2^{-k}h\big(m(\{t_k\})\big) \le \sum_{k>k_0}2^{-k}h(2^{k_0-k}) + 2\alpha \le KD(T) + 2\alpha$$

where we have used (12.5) and $2^{-k_0} \le 2D(T)$. Recall from Lemma 12.1 that $D(T) \le K\alpha$. The conclusion then follows since, for $\varepsilon \ge 2^{-k+1}$, $\sup\{m(\{s\})\,;\ d_X(s,t)<\varepsilon\} \ge m(\{t_k\})$.

We now present the necessary majorizing measure condition for almost sure continuity of Gaussian processes.

Theorem 12.9. Consider a Gaussian process $X = (X_t)_{t\in T}$ that is almost surely bounded and continuous (or that admits a version which is almost surely bounded and continuous) on $(T,d_X)$. Then, there exists a probability measure $m$ on $(T,d_X)$ such that

$$\lim_{\eta\to 0}\ \sup_{t\in T}\int_0^\eta\Big(\log\frac{1}{\sup\{m(\{s\})\,;\ d_X(s,t)<\varepsilon\}}\Big)^{1/2}d\varepsilon = 0\,.$$

i where we have set . (an ) t2Bn.i IE sup Xt = IE sup (Xt t2Bn.i t2Bn.

i the diameter of Bn.i (12:18) 0 h(supfmn. t) < "g)d" K.8 shows that there is a probability measure mn.i (that an be t2T dX (s. dX (s. () = sup IE sup jXs Xt j .i (fsg).i we have Z dn. Denote by dn.i on Bn.t)< smaller than 2an ). Theorem 12.i su h that for ea h t in Bn.

i(fsg). t) < "g From (12. an ) .i .18) therefore. dX (s. (an ) : Let m0n. so 2an . We note that if t 2 Bn. t) < "g)d" Z dn. Fix t in T . Z 0 h(supfm(fsg). 1 supfm(fsg). dX (s.i (fsg).i . for " min(dn.i ) and let X X m0 = n 2 p(n) 1 m0n.i . 0 < D . t) < "g)d" + h 1 2 2n p(n) K. There is a probability m on T su h that m m0 . Let n be the smallest integer with an .i . dX (s.i) + mn.i = 12 (Æt(n. n1 ip(n) so jm0 j 1 . t) < "g : 2n p(n) Also.i 0 1 : 2n2p(n) h(supfmn. if t 2 Bn. dX (s. supfm(fsg). t) < "g 2 supfm0n. dX (s.

(an ) + h p(1n) h 2n1 2 K.

Now. 2 ) 1 ) + h( 12 (log 2)2 (log D ) 2 ) sin e an 2an . if the Gaussian pro ess X is ontinuous on (T. () + h(N (T. dX . lim !0 . dX ) .

Further. ) 1 ) = 0 : !0 .19 that (12:19) lim h(N (T. () = 0 by the 1 = 2 integrability properties of Gaussian random ve tors. if h(x) = (log(1=x)) . we see from Corollary 3. dX .

In the last part of this se tion. If (and only if) X is ontinuous. The two pre eding theorems therefore des ribe ne essary majorizing measure onditions for a Gaussian pro ess to have bounded or ontinuous sample paths that. Let = sup(IEjXt j2 )1=2 and t2T M = IE sup jXt j . we des ribe a onsequen e of this result in terms of a onvex hull representation of Gaussian pro esses. it de. Let X = (Xt )t2T be a bounded Gaussian pro ess. ea h Yn is a linear ombination of at most two variables of the type Xt .9. Moreover. by the Borel-Cantelli lemma and Gaussian tail. Theorem 12. we an write Xt = where n (t) 0 . let us mention that if (Yn ) is as in the theorem. (Yn ) an 1=2 be hosen su h that nlim !1(log n) kYn k2 = 0 .18.10.375 These observations on lude the proof of Theorem 12. together with Theorem 11. Then there exists a Gaussian sequen e (Yn )n1 with t2T kYn k2 KM (log n + M 2 =2 ) 1=2 su h that. P n1 X n1 n (t)Yn n (t) 1 and the series onverges almost surely and in L2 . thus provide a omplete des ription of regularity properties of Gaussian pro esses. for ea h t in T . Before the proof.

The representation of Theorem 12. We shall partially ome ba k to this at the end of the se tion. We even have that.nes an almost surely bounded sequen e. for some numeri al onstant K1 . at least t2T n1 qualitatively.10 1=2 also implies. that X is ontinuous when nlim !1(log n) kYn k2 = 0 . the ontinuous ase being obtained with some easy modi. Proof. IPfsup jYn j > K1 (M + u)g K1 exp( u2 ) n1 for all u > 0 . we note in parti ular that the majorizing measure theorem ontains. if K is su h that kYn k2 KM (log n + M 2 =2 ) 1=2 in Theorem 12. Se tion 3.1). the tail behavior of isoperimetri nature of norms of Gaussian random ve tors ( f.10. with a little more eort. we have X IPfsup jYn j > 2K (M + u)g 2 exp( 4K 2(M + u)2 =2kYnk22 ) n1 n1 2 exp( u2 ) X n1 n 2: Sin e sup jXt j sup jYn j . We only show the assertion on erning boundedness. Indeed.

It . Let k0 be the largest integer with 2 k0 D(T ) . ations on the basis of Theorem 12.9.

we pi k tk su h that dX (t.8 that there is a probability measure m on T su h that for ea h t in T . tk ) < 2 k and m(ftk g) = supfm(fsg). Thus. From (12.376 follows from Theorem 12. k k0 . t) < 2 k g : We an assume that tk0 does not depend on t . we see in parti ular that 2 k h(m(ftk g)) KM . dX (s. t) < 2 k g) KM : kk0 For t in T . for ea h t in T .20). tk belongs to the . we have X (12:20) 2 k h(supfm(fsg). dX (s.

it follows from (12. m(fsg) h 1 (2k KM )g : Let bk = 2 k+k0 1 h 1 (M=D) . Using the fa t that D 2 (2)1=2 M .nite set Ak = fs 2 T .5) and (12. k k0 . For ea h t in T . we de.20) that X (12:21) kk0 2 k h(bk m(ftk g)) K1 M for some onstant K1 .

ne at. P kk0 at.k 3K1M . De.k = 2 k (h(bk m(ftk g)) + h(bk+1 m(ftk+1 g))) : Form (12.21).

k for t in T .k = 6K1 Mat. tk+1 belong to the . for k k0 .ne then. zt.k1(Xtk+1 Xtk ) : Let Zk be the set of all zt. Sin e tk .

nite set Ak+1 . Zk is .

tk ) 3 2 k so.k k2 " . Let Z be the union of the sets Zk for k > k0 . we have at. tk+1 ) + dX (t. Fix " > 0 . We note that kXtk +1 Xtk k2 dX (t. if kzt.k 9 2 k K1 M=" and thus h(bk m(ftk g)) + h(bk+1 m(ftk+1 )) 9K1M=" : 1 .nite.

the proof of Theorem 12. Therefore Cardfz 2 Zk . For ea h t in T . kz k2 kYn k2 g h 1 M D h 1 9K1M kYn k2 1 #2 so that 1 1 kYnk2 9K1M M +h p : D n (12:22) Sin e D 2 . this shows that there are at most 2 k+k0 1 h 1 M D h 1 1 9K1M " possible hoi es for either tk or tk+1 when kzt.k 3K1M . kz k2 "g 2 and 2k+2k0 2 " Cardfz 2 Z . " n Cardfz 2 Z .377 This implies that m(ftk g). kz k2 "g h 1 " h 1 M D h M D 1 h 1 9K1M " 9K1M " 1 #2 1 #2 : We an thus index Z as a sequen e (Yn )n1 su h that kYn k2 does not in rease.k zt. For ea h n . In parti ular.22) implies that.10 is omplete. this implies that Xt Xtk0 = n (t)Yn where n (t) 1=2 and where the n1 n1 kk0 series onverges almost surely sin e (Yn ) is bounded. the Gaussian sequen e (Yn ) is bounded almost surely. Sin e . Sin e kXtk0 k2 . (12. for h(x) = (log(1=x))1=2 . kYn k2 18K1M (log n + M 2 =2 ) 1=2 . we have Xt Xtk0 = P X kk0 (Xtk+1 Xtk ) = X kk0 P (6K1M ) 1 at. m(ftk+1 g) 2k k0 +1 h 1 1 M D h 1 9K1M " : Sin e m is a probability.k k2 " . for all n 1 . by the Borel-Cantelli lemma.k : P at.
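The functions $h_q(x) = (\log(1/x))^{1/q}$ that appear next satisfy the product-subadditivity $h(xy) \le h(x)+h(y)$ of (12.5), used repeatedly above; this is immediate from the subadditivity of $t\mapsto t^{1/q}$ for $q\ge 1$. A quick numerical check (the sampling ranges are arbitrary):

```python
import math
import random

def h_q(x, q):
    # h_q(x) = (log(1/x))**(1/q), the functions of Remark 12.11
    return math.log(1.0 / x) ** (1.0 / q)

random.seed(2)
ok = True
for _ in range(1000):
    q = random.uniform(1.0, 4.0)
    x = random.uniform(1e-6, 1.0)
    y = random.uniform(1e-6, 1.0)
    # subadditivity under products, property (12.5): h(xy) <= h(x) + h(y)
    if h_q(x * y, q) > h_q(x, q) + h_q(y, q) + 1e-9:
        ok = False
```

Note also $h_q(1) = 0$, which is what makes full-mass balls contribute nothing to the majorizing measure sums.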

Remark 12.11. This remark is in order to show that several of the techniques developed in the proofs of the preceding theorems go beyond the Gaussian case and apply to a rather general setting. For simplicity, let us consider the family of Young functions $\psi_q$, $1\le q<\infty$ (and associated functions $h_q(x) = (\log(1/x))^{1/q}$), although some more general functions may be imagined. If $(T,d)$ is a metric space and $m$ is a probability measure on $(T,d)$, set, for $1\le q<\infty$,

$$\gamma_m^{(q)}(T) = \gamma_m^{(q)}(T,d) = \sup_{t\in T}\int_0^\infty h_q\big(m(B(t,\varepsilon))\big)\,d\varepsilon$$

and

$$\gamma^{(q)}(T) = \gamma^{(q)}(T,d) = \inf_m\gamma_m^{(q)}(T,d)$$

where the infimum is taken over all probability measures $m$ on $T$. $K_q$ denotes below a constant depending only on $q$ and not necessarily the same at each occurrence. Our first observation, which we deduce from the proof of Theorem 12.8, is that there is a probability $m$ on $(T,d)$ such that

$$\sup_{t\in T}\int_0^\infty h_q\big(\sup\{m(\{s\})\,;\ d(s,t)<\varepsilon\}\big)\,d\varepsilon \le K_q\,\gamma^{(q)}(T,d)\,.$$

(It might be useful to recall (11.18) at this point and the easy comparison between $\psi_q^{-1}$ and $h_q$.) If $T$ is finite, Proposition 11.10 indeed indicates that $\alpha(T,d) \le K_q\,\gamma^{(q)}(T,d)$ (where $\alpha$ is defined with $h = h_q$). There is a similar result about continuous majorizing measures. From this observation, let us mention further the following one. Assume again that $\gamma^{(q)}(T,d) < \infty$ and consider a stochastic process $X = (X_t)_{t\in T}$ continuous in $L_1$ (say) on $(T,d)$, more precisely such that $\|X_s - X_t\|_1 \le d(s,t)$ for all $s$, $t$ in $T$. Then, the proof of Theorem 12.10 shows that there is a sequence $(Y_n)$ of random variables such that, for every $n$,

$$\|Y_n\|_1 \le K_q\,\gamma^{(q)}(T,d)\,h_q\Big(\frac{1}{\sqrt n}\Big)^{-1}$$

and such that, for every $t$ in $T$, $X_t = \sum_n\lambda_n(t)Y_n$ where $\lambda_n(t)\ge 0$, $\sum_n\lambda_n(t)\le 1$, and the series converges almost surely and in $L_1$. The main point in the Gaussian case is of course that $\gamma^{(2)}(T,d_X) \le K\,\mathbb{E}\sup_{t\in T}X_t$.

It is interesting to interpret Theorem 12.10 (and similar comments may be given in the context of the observations of Remark 12.11) as a result about subsets of Hilbert space associated to bounded Gaussian processes. Let $X = (X_t)_{t\in T}$ be a bounded Gaussian process on $(\Omega,\mathcal{A},\mathbb{P})$. Let $H = L_2(\Omega,\mathcal{A},\mathbb{P})$ and identify $T$ with the subset of $H$ consisting of the family $(X_t)_{t\in T}$. Then, for some $M>0$ and all $n$, Theorem 12.10 proves the existence of a sequence $(y_n)$ in $H$ such that $|y_n| \le M(\log(n+1))^{-1/2}$ and $T\subset\mathrm{Conv}(y_n)$. Let us rewrite this observation as a perhaps more geometrical statement about the finite dimensional Hilbert space. Consider $H$ of dimension $N$ and denote by $\sigma$ the normalized rotation invariant measure on its unit sphere. For a subset $T$ of $H$, consider

$$V(T) = \int\sup_{y\in T}\big|\langle x,y\rangle\big|\,d\sigma(x)\,.$$

This quantity has been studied in geometry under the name of mixed volume and plays an important role in the local theory of Banach spaces. Fix an orthonormal basis $(e_i)_{i\le N}$ of $H$. For $t$ in $H$, let $X_t = \sum_{i=1}^N g_i\langle t,e_i\rangle$ where $(g_i)$ is a standard Gaussian sequence. Since the distribution of $(g_i)_{i\le N}$ is rotation invariant,

$$\ell(T) = \mathbb{E}\sup_{t\in T}|X_t| = \mathbb{E}\Big(\sum_{i=1}^N g_i^2\Big)^{1/2}\int\sup_{y\in T}\big|\langle x,y\rangle\big|\,d\sigma(x)$$

and thus $K^{-1}N^{1/2}V(T) \le \ell(T) \le KN^{1/2}V(T)$ for some numerical $K$. Define now

$$C(T) = \inf\big\{a>0\,;\ \exists(y_n)\ \text{in}\ H,\ |y_n|\le a(\log(n+1))^{-1/2},\ T\subset\mathrm{Conv}(y_n)\big\}\,.$$

Then Theorem 12.10 can be reformulated as

$$(12.23)\qquad K^{-1}N^{-1/2}C(T) \le V(T) \le KN^{-1/2}C(T)\,.$$

As a closing remark, note that if we go back to the tail estimate (11.14) for the Young function $\psi = \psi_2$ associated to Gaussian processes, we see that the processes techniques and the existence of majorizing measures yield deviation inequalities with the two parameters similar to those obtained from isoperimetric considerations in Chapter 3. Such inequalities can also be deduced, as we have seen, from Theorem 12.10 and elementary considerations on Gaussian sequences.

12.2 Necessary Conditions for Boundedness and Continuity of Stable Processes

Let $X = (X_t)_{t\in T}$ be a $p$-stable process, $0<p<2$. Recall that by this we mean that every finite linear combination $\sum_i\alpha_iX_{t_i}$, $\alpha_i\in\mathbb{R}$, $t_i\in T$, is a $p$-stable real random variable. As in the Gaussian case, we may consider the associated pseudo-metric $d_X$ on $T$ given by

$$d_X(s,t) = \sigma(X_s - X_t)\,,\qquad s,t\in T\,,$$

where $\sigma(X_s - X_t)$ is the parameter of the real stable random variable $X_s - X_t$ (cf. Chapter 5). Contrary to the Gaussian case, the pseudo-metric $d_X$ does not entirely determine the distribution, and therefore the regularity properties, of a $p$-stable process $X$ when $0<p<2$. The distribution of $X$ is however determined by its spectral measure (Theorem 5.2), and, as we already noted in Chapter 5 through the representation, the existence and finiteness of a spectral measure for a $p$-stable process with $0<p<1$ already ensures its almost sure boundedness and continuity. This is no more the case for $1\le p<2$, on which we concentrate here. As we mentioned, sufficient conditions for sample boundedness or continuity of $p$-stable processes, $1\le p<2$, may be obtained from Theorem 11.2 at some weak level, since, for example, if $1<p<2$, $\|X_s - X_t\|_{p,\infty} \le C_p\,d_X(s,t)$.
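Returning for a moment to the quantity $\ell(T) = \mathbb{E}\sup_{t\in T}|\langle g,t\rangle|$ of the preceding discussion: it is easy to estimate by simulation. A sketch for $T$ the canonical basis of $\mathbb{R}^{16}$, where $\ell(T) = \mathbb{E}\max_{i\le 16}|g_i|$ is of order $(2\log 16)^{1/2}$ (the dimension and sample size are arbitrary choices for the example):

```python
import random

random.seed(1)

def ell(T, n_samples=4000):
    """Monte Carlo estimate of l(T) = E sup_{t in T} |<g, t>| for finite T
    in R^N, with g a standard Gaussian vector."""
    N = len(T[0])
    total = 0.0
    for _ in range(n_samples):
        g = [random.gauss(0.0, 1.0) for _ in range(N)]
        total += max(abs(sum(gi * ti for gi, ti in zip(g, t))) for t in T)
    return total / n_samples

basis = [[1.0 if i == j else 0.0 for j in range(16)] for i in range(16)]
est = ell(basis)   # E max_{i<=16} |g_i|, of order sqrt(2 log 16)
```

Comparing such an estimate with $N^{1/2}V(T)$ illustrates the equivalence $\ell(T)\asymp N^{1/2}V(T)$ stated above.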

For $1\le p<2$, the finiteness of the entropy integral $\int_0^{D(T)}\big(N(T,d_X,\varepsilon)\big)^{1/p}\,d\varepsilon$ implies that $X$ has a version with almost all paths bounded and continuous on $(T,d_X)$. Trying to characterize regularity properties of $p$-stable processes, $1\le p<2$, seems to be a difficult question, still under study; the paper [Ta13] reflects for example some of the main problems. If a characterization in terms of $d_X$ is therefore hopeless, nevertheless a best possible necessary majorizing measure condition for $(T,d_X)$, similar to the one of the Gaussian case, exists for almost sure boundedness and continuity of $p$-stable processes, $1\le p<2$. It is the purpose of this paragraph to describe this result. The difference with the Gaussian setting is that this necessary condition is far from being sufficient.

The main result of this section is the extension to $p$-stable processes, $1\le p<2$, of Theorems 12.8 and 12.9. For $1\le p\le 2$, let $q$ be the conjugate of $p$; recall we set $h_q(x) = (\log(1/x))^{1/q}$, $0<x\le 1$, and further $h_\infty(x) = \log(1+\log(1/x))$. These functions satisfy (12.5) and (12.6) and enter the setting of the abstract analysis of the preceding section.

Theorem 12.12. Let $1\le p<2$ and let $X = (X_t)_{t\in T}$ be a sample bounded $p$-stable process. Then there is a probability measure $m$ on $(T,d_X)$ such that

$$\sup_{t\in T}\int_0^\infty h_q\big(\sup\{m(\{s\})\,;\ d_X(s,t)<\varepsilon\}\big)\,d\varepsilon \le K_p\,\Big\|\sup_{s,t\in T}|X_s - X_t|\Big\|_{p,\infty}$$

where $K_p$ only depends on $p$. If, moreover, $X$ has (or admits a version with) almost surely continuous sample paths on $(T,d_X)$, there exists a probability measure $m$ on $(T,d_X)$ such that

$$\lim_{\eta\to 0}\ \sup_{t\in T}\int_0^\eta h_q\big(\sup\{m(\{s\})\,;\ d_X(s,t)<\varepsilon\}\big)\,d\varepsilon = 0\,.$$

As we noted (Remark 12.11), Theorem 12.10 has an extension to the stable case. In particular, provided with such a result,

$$\sup_{\varepsilon>0}\varepsilon\big(\log N(T,d_X,\varepsilon)\big)^{1/q} \le K_p\,\Big\|\sup_{t\in T}|X_t|\Big\|_{p,\infty}\,.$$

This observation puts into light the gap between this necessity result and sufficient conditions for stable processes to be bounded or continuous, simply because if $(\theta_n)$ is a standard $p$-stable sequence with $1<p<2$ (say), $(\theta_n/(\log n)^{1/q})$ is not almost surely bounded. To put Theorem 12.12 in perspective, let us mention however that we will see in Chapter 13 that, for some special classes of stationary $p$-stable processes (strongly stationary or harmonizable processes), the necessary conditions of Theorem 12.12 are also sufficient.

We only give the proof of Theorem 12.12 for $p>1$. The proof when $p = 1$ follows the same pattern with however some further (delicate) arguments inherent to this case; we restrict for clarity to $p>1$, referring to [Ta8] for the complete result. In the following, therefore, $1<p<2$.

To describe the idea of the proof, let us go back to Chapter 5. There we saw how a Sudakov type minoration holds for $p$-stable processes. The main idea was to realize the stable process $X = (X_t)_{t\in T}$ as conditionally Gaussian by the series representation of stable variables. We need not go back to all the details here and refer to the proof of Theorem 5.10 for this point. It was described there how $(X_t)_{t\in T}$ has the same distribution as $(X_t^\omega)_{t\in T}$ defined on $\Omega\times\Omega'$ where, for each $\omega$ in $\Omega$, $(X_t^\omega)_{t\in T}$ is Gaussian. If $d_\omega$ is the canonical metric associated to the Gaussian process $(X_t^\omega)_{t\in T}$, the main tool was the comparison between the random distances $d_\omega$ and $d_X$ given by (cf. (5.20))

$$(12.24)\qquad \mathbb{P}\big\{\omega\,;\ d_\omega(s,t) \le \varepsilon\,d_X(s,t)\big\} \le \exp\big(-c\,\varepsilon^{-\beta}\big)\quad\text{for all } s,t \text{ in } T,\ \varepsilon>0\,,$$

where $1/\beta = 1/p - 1/2$ and $c$ only depends on $p$. From (12.24), the idea was to "transfer" Gaussian minoration inequalities of Sudakov's type on the random distances $d_\omega$ into similar ones for $d_X$. The idea of the proof of Theorem 12.12 is similar, trying to transfer the stronger minoration results of Section 12.1. It turns out unfortunately that it does not seem possible to use mixtures of majorizing measures. Instead, we are going to use the machinery of ultrametric structures of the last section to reduce the proof to the simpler, yet non-trivial, following property.

Theorem 12.13. Let $1<p<2$ and let $X = (X_t)_{t\in T}$ be a $p$-stable process indexed by a finite set $T$. Provide $T$ with the canonical distance $d_X$. Then

$$\alpha(T,d_X) \le K_p\,\Big\|\sup_{s,t\in T}|X_s - X_t|\Big\|_{p,\infty}$$

where $K_p$ only depends on $p$ and the functional $\alpha$ is the one used in Section 12.1 with $h = h_q$. This result is the analog of the Gaussian Theorem 12.6 in this stable case; with similar proofs as in the Gaussian case, Theorem 12.12 follows from Theorem 12.13 (for continuity, note that (12.19) holds with $h = h_q$ by (5.21)). We therefore concentrate now on the proof of the latter.

Recall that if $(U,\delta)$ is ultrametric, we denote by $\mathcal{B}_k$ the family of balls of $U$ of radius $6^{-k}$ ($k\in\mathbb{Z}$) and, for $x$ in $U$, by $N_k(x) = N(x,k)$ the number of disjoint balls of $\mathcal{B}_{k+1}$ which are contained in $B(x,6^{-k})$. Let $k_0$ be the largest integer such that $\mathrm{Card}\,\mathcal{B}_{k_0} = 1$. With $h = h_q$, we set

$$\lambda_x^{(q)}(U) = \sum_{k\ge k_0}6^{-k}\big(\log N(x,k)\big)^{1/q}\,,\qquad \lambda^{(q)}(U) = \inf_{x\in U}\lambda_x^{(q)}(U)\,.$$

By Theorem 12.5 applied to $(T,d_X)$ and $h = h_q$, we see that, in order to establish Theorem 12.13, it is enough to prove the analog of Proposition 12.7 in this stable case.
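The Sudakov-type quantity $\sup_\varepsilon\varepsilon(\log N(T,d_X,\varepsilon))^{1/q}$ just mentioned can be evaluated for finite sets with a greedy covering bound. An illustrative sketch (the grid of $\varepsilon$ values and the greedy covering bound are assumptions of the example, not part of the text):

```python
import math

def covering_number(points, dist, eps):
    # greedy upper bound on the covering number N(T, d, eps)
    uncovered = list(points)
    n = 0
    while uncovered:
        c = uncovered[0]                 # next center: first uncovered point
        uncovered = [p for p in uncovered if dist(p, c) > eps]
        n += 1
    return n

def sudakov_functional(points, dist, q, eps_grid):
    # sup over eps of eps * (log N(T, d, eps))**(1/q)
    best = 0.0
    for eps in eps_grid:
        n = covering_number(points, dist, eps)
        if n > 1:
            best = max(best, eps * math.log(n) ** (1.0 / q))
    return best

pts = [i / 10.0 for i in range(11)]      # 0.0, 0.1, ..., 1.0
d = lambda a, b: abs(a - b)
val = sudakov_functional(pts, d, q=2.0, eps_grid=[0.05, 0.1, 0.2, 0.4])
```

For $q = 2$ this is the Gaussian Sudakov functional; for the conjugate exponent $q$ of $p$ it is the weak minoration discussed above.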

13 have already proved their usefulness in some other ontexts.v2U Sin e the arguments of the proof of (12.nite ultrametri spa e and X = (Xu )u2U a p -stable pro ess with dX (12:25) Æ .25) and Theorem 12.1 : u. then we have (q) (U ) Kp k sup jXu Xv jkp. we will try to detail (at least the .

v2U . Let us set M = k sup jXu Xv jkp.rst steps of) this study in a possible general ontext.1 : u.

10). By Fubini's theorem and de.383 X is onditionally Gaussian and we denote by X ! = (Xu! )u2U . the onditional Gaussian pro esses (see the proof of Theorem 5. for every ! in .

25) holds. To prove (12. and all u. d! (x. it will be more onvenient to use the fun tion h02 (t) = h2 (t=3) (the hoi e of 3 is rather arbitrary) whi h is onvex on (0. 1℄ . there exists a set 1 with IP( 1 ) 1=2 and su h that for ! in 1 we have 1 IPf sup jXu! Xv! j > 4M g : 2 u. v) . v) . and therefore Theorem 12. v) Æ(u. Sin e (12. v) . d! (u. The philosophy of the approa h is that a large value of (q) (U ) means that (U. we an expe t that U will be big with respe t to d! for most values of ! . K2 . v) = kXu! Xv! k2 . K3 depending on p on ly. Æ) is big in an appropriate sense. As announ ed. It is easily seen that K4 M so that (12.24) means that d! (u. we have no information about the joint behavior of (d! (u. Let us therefore assume that we have a family (d! ) of random distan es on (U.v2U By the integrability properties of norms of Gaussian pro esses (Corollary 3. The fun tion h2 (t) = (log(1=t))1=2 is onvex for t e 1=2 but not for 0 < t 1 . v) behaves ompared to dX (u. (12:28) IPf! . y) < "))d" + K3 : We observe that h02 (t) h2 (t) + (log 3)1=2 . v)g (") : .24) tells us pre isely how d! (u. v0 )) . v0 ) . so ombining with (12. For that reason. su h that Z 1 (12:26) sup h2 (! (y 2 U . not mu h smaller than dX (u. there exists a probability ! on (U. if we take another ouple (u0 .25). d! ) where d! (u.6 we know the following: for ! in 1 . we have (12:27) (q) (U ) K1 sup x2U Z K2 0 h02 ((y 2 U . and onstants K1 .27). The onstru tion is made rather deli ate however by the following feature: while (12. v) "Æ(u. we get that (q) (U ) K1(KM + K2 (log 3)1=2 ) + K3 . we will try to develop the proof of (12. d! (u0 . su h that for ! in 2 and ea h probability measure on U .2) and Theorem 12. y) < "))d" KM : x2U 0 where K is a numeri al onstant.25) in a possible general framework. We thus establish now (12. we will exhibit a subset 2 of with IP( 2 ) 3=4 (for example).13. v) is. most of the time. v in U and " > 0 . d! (x. 
Æ) su h that for some stri tly in reasing fun tion on IR+ with "lim !0 (") = 0 .26) sin e IP( 1 \ 2 ) > 0 .nition of M .

Let further b.27). ! Equivalently. d! (x. (fxg) = Bk . To show (12. it will be enough to show that for some probability measure on U K5 1 (12:29) Z U d(x)x(q) (U ) K1 Z U d(x) Z K2 0 h02 ((y 2 U . the stable ase orresponds to (") = exp( " ) where 1= = 1=p 1=2 . > 0 and de. y) < "))d" + K3 : We hoose as a onvenient probability measure the following: is homogeneous in the sense that the mass of any ball of radius 6 k is devided evenly among all the balls of radius 6 k 1 that it ontains. for any x in U .384 By (12.24).

Let now k k0 be .ne Q kk0 N (x. k) 1 .

i) the number of elements of Bi+1 ontained in B 0 . we have N (B. b. i) whenever x belongs to B . so that in parti ular N (B. one he ks immediately by Fubini's theorem and Chebyshev's inequality that (12:30) IP(A(B1 . k) is hosen as (12:31) b = 1 ((2N (B. B1 6= B2 . B2 in B1 . (2N (B. ) = f! 2 . i) = N (x. We denote by N (B. k) is the number of elements of Bk+1 that are ontained in B . B2 ) = A(B1 . There exist therefore two balls B1 . B1 . Let B1 6= B2 in A(B1 . there is a unique B 0 2 Bi that ontains B . B 0 2 Bk0 . B2 . ((x. b. In parti ular. k) > 1 . y) b6 k ) (B1 )(B2 )g : Under (12. k)) 2 ) where b = b(B. k) > 2 N (B. if i k0 k and B 2 Bk . . N (B. B B 0 . i) . B2 B . We denote by Bk0 the subset of Bk that onsists of the balls B in Bk for whi h Y N (B. y) 2 B1 B2 . k)) 6 ) : Bk+1 . i) = N (B 0 . Also. i) : i<k Let B in Bk0 . )) (b) : For B 2 Bk and i k . B2 .xed. We onsider the event C (B. so N (B.28). B2 . d! (x. b.

Assume now that we have proved the lemma for k + 1 and let D in Bk . B2 ) where the union is taken over all hoi es of j k . k)) 4. B B1 . Assume . and the lemma holds. Lemma 12. Under the previous notations. B1 6= B2 . i)) 2 : Proof. B1 . we set A(D) = .385 It follows from (12. If k is large enough that D has only one point. we onsider the event A(D) = [ C (B. B1 . IP(A(D)) (2 Y i<k N (D. B2 )) (2N (B. . then A(D) = . B2 in Bj+1 .14.30) that IP(C (B. B1 . B in Bj0 .) D . B2 B . (If no su h hoi e is possible. For D in Bk . The proof goes by de reasing indu tion over k .

rst that D 2 Bk0 . D0 2 Bk+1 . B1 . IP(A(D)) N (D. 1 IP(A2 ) N (D. Let [ fA(D0 ) . B2 Dg : A1 = We have A(D) = A1 [ A2 and by the indu tion hypothesis. A(D) = A1 . i)) 2 : 4. B2 2 Bk+1 . B2 )) (2N (B. Thus. Y ik N (D. k)) Y ik N (D. B2 ) . D0 Dg . i)) 2 12 (2 Y i<k N (D. B1 . k) 2 . so N (D. i)) 2 : i<k The result therefore follows in this ase. with the same notation as before. i)) 2 : . k)) 4 2 Y 12 N (D. i)) 2 (2 Y i<k N (D. B1 6= B2 . by the indu tion hypothesis. sin e N (D. If D 62 Bk0 .14. IP(A1 ) N (D. k)(2 Using that IP(C (D. B1 . k)(2 This ompletes the proof of Lemma 12. k) 2 . k) 2 12 (2 N (D. B1 . k)2 (2N (D. [ A2 = fC (D.

Let now $\Omega_2 = \Omega \setminus A(U)$, so that $\mathrm{IP}(\Omega_2) \ge 3/4$. Let us fi-

k)) (B1 )(B2 )=4N (B. d! (x1 . k) where b(B. we have (D) = (B )=N (B. for ` = 1. (B` ) (x 2 B` . k)g : We then have that. however. 2 . D B . z ) a(B. x2 ) 2a(B. k) For x1 . For B in Bk0 . k) . B1 . It follows that there are at least two balls B1 .31). k) is given by (12. so we have ((x1 . y 2 Hx ) (12:32) 3(B ) : 2N (B.x furthermore ! in 2 . B1 . d! (x. k) Indeed. k) = 6 k 2 b(B. x2 ) 2a(B. d! (x1 . for all y in U . we set a(B. k) . suppose otherwise and let y in U su h that (12. For x in B . y 2 Hx ) : 2N (B. x2 in Hy . (x 2 B . x2 ) 2 B1 B2 .32) does not hold. B2 of Bk+1 . B2 B su h that. set Hx = fz 2 U . B2 ) (by de. ontradi ts the fa t that ! 62 C (B. For D in Bk+1 . k)2 : This.

Sin e the fun tion h02 is onvex.nition of 2 ) and thus shows (12. It follows from (12.32).32) that 0 de. Let be a probability measure on U . y 2 Hx )=(B ) . we have I (B ) = Z Z 1 h0 ((Hx ))d(x) h02 ( g d) (B ) B 2 where g(y) = (x 2 B .

I (B ) (log(2N (B. k)I (B ) 6 k 2 b(B. let us enumerate as k1 (x) < < k`(x) (x) the indexes k su h that B (x. let (x. k))1=2 . by a(B. k` (x)) : . In parti ular therefore (12:33) g 3=2N (B. k)(log(2N (B. 6 k ) 2 Bk0 . Note that k1 (x) = k0 . so that. k) . 6 k` (x) ) . k))1=2 : For x in U . For ` `(x) .nition of h02 . `) = a(B (x.

34) dominates X (12:35) 6 k 2 b(B. y) a(B. d! (x.387 We have Z X (x. k)(log(2N (B. we see that the latter quantity (12. Observe that for ` < `(x) . h! (x. y) < (x. `)=8 (by de.33). By (12. k)))d(x) where the summation is taken over ea h value of k and ea h B in Bk0 . k)h02 ((y 2 U . `)))d(x) U ``(x) (12:34) = XZ B a(B. ` +1) (x. `)h02 ((y 2 U . k)))1=2 (B ) where the summation is over the same range. (x.

y) < "))d" : Let us summarize in a statement what we have obtained so far in this general approa h. y) < "))d" 6 k 1 ((2N (B. 1) 1 (1=2) . It follows that. `))) 2 Z 0 1 (1=2) h02 ((y 2 U . Let (U. Also. Æ) su h that for some in reasing fun tion on IR+ su h that "lim !0 (") = 0 and all u. Æ) be an ultrametri spa e and (d! ) a family of random distan es on (U. there exists 0 with IP( 0 ) 3=4 su h that for all ! in 0 and every probability measure on U . IPf! . v)g (") : Then. Proposition 12. . in the previous notations. Z U d(x) Z 0 2 1 (1=2) 7 X h02 ((y 2 U . `)h02 ((y 2 U . (x. d! (x. y) < (x. d! (x. v in U and " >0. d! (u. k)))1=2 (B ) where the summation is taken over ea h value of k and ea h B in Bk0 . X ``(x) (x. v) "Æ(u. for every x . k)) 6 )(log(2N (B. This statement ould possibly be of some use in a related ontext.nition of Bk0 and sin e is in reasing). d! (x.15.

31).13. we an now omplete the proof of Theorem 12. we need simply . k)) b(B.24).388 From this general result.29) (with K2 = ( = log 2)1= ).13 with the hoi e of given by (12. in order to establish (12. 1= 6 log(2N (B.15 is simply R 2 7 (6= ) 1= U x (U )d(x) where x (U ) = X ``(x) 6 k`(x) (log(2N (x. we take (") = exp( " ) . from (12. By (12. Then. k` (x)))1=q : Therefore. the right side of the inequality of Proposition 12. k) = : Sin e 1=2 1= = 1=q .24). Proof of Theorem 12.

ks (x))2 i k1 (x) : : : N (x. s < `(x) and k0 = k1 (x) < < ks (x) i < ks+1 (x) . Therefore.29) and on ludes the proof of Theorem 12. This shows (12. For x in U . k` (x)))1=q : A simple omputation then shows that there are onstants K6. i) 22 i ks (x) N (x. Appli ations and onje tures on Radema her pro esses The . (log N (x. k1 (x))2 as is shown by immediate indu tion over i .3. 12. i))1=q (2i k0 log 2)1=q + s X `=1 (2i k` (x) log N (x.13. we have i k0 N (x. K7 su h that x(q) (U ) K6 x (U ) + K7 .nd the appropriate lower bound for x (U ) .

Recall from Section 11.3 ((11.20)) that a centered process $Y = (Y_t)_{t \in T}$ is subgaussian with respect to a metric or pseudo-metric $d$ if for all $s, t$ in $T$ and $\lambda$ in $\mathbb{R}$,
$$\mathrm{IE}\exp\lambda(Y_s - Y_t) \le \exp\Big(\frac{\lambda^2}{2}\,d^2(s,t)\Big)\,.$$
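A standard consequence of this definition, worth keeping in mind for what follows (an elementary derivation, not spelled out in the text): the exponential moment bound self-improves into a Gaussian-type tail estimate by Chebyshev's inequality.

```latex
% For u > 0 and \lambda > 0, Chebyshev's inequality and the subgaussian bound give
\mathrm{IP}\{Y_s - Y_t > u\}
  \le e^{-\lambda u}\,\mathrm{IE}\exp\lambda(Y_s - Y_t)
  \le \exp\Big(-\lambda u + \frac{\lambda^2}{2}\,d^2(s,t)\Big)\,;
% the optimal choice \lambda = u/d^2(s,t), and the same bound applied to Y_t - Y_s, yield
\mathrm{IP}\{|Y_s - Y_t| > u\}
  \le 2\exp\Big(-\frac{u^2}{2\,d^2(s,t)}\Big)\,, \qquad u > 0\,.
```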

As a consequence of the general study of Chapter 11, we noted there that the sufficient conditions in order for a Gaussian process to have bounded or continuous sample paths also apply to subgaussian processes. That is, if $m$ is a probability measure on $(T,d)$, then
$$\mathrm{IE}\sup_{t\in T} Y_t \le K \sup_{t\in T}\int_0^\infty \Big(\log\frac{1}{m(B(t,\varepsilon))}\Big)^{1/2}\,d\varepsilon$$
where $K$ is numerical. But now, as a consequence of the main result of Section 12.1, and in particular Theorem 12.8, we see that if $d$ is the pseudo-metric of a Gaussian process $X = (X_t)_{t\in T}$, then $\mathrm{IE}\sup_{t\in T} Y_t \le K\,\mathrm{IE}\sup_{t\in T} X_t$ for some numerical constant $K$. We have thus the following statement. Its second part is proved similarly.

Theorem 12.16. Let $X = (X_t)_{t\in T}$ be a Gaussian process and $Y = (Y_t)_{t\in T}$ be subgaussian with respect to the canonical distance $d_X$ associated to $X$. Then
$$\mathrm{IE}\sup_{t\in T} Y_t \le K\,\mathrm{IE}\sup_{t\in T} X_t$$
where $K$ is a numerical constant. In particular, $Y$ is almost surely bounded if $X$ is. Further, if $X$ is continuous, $Y$ has a version with almost all sample paths continuous.

The second application concerns some remarkable type properties of the canonical injection map $j : \mathrm{Lip}(T) \to C(T)$ first considered by J. Zinn. Let $(T,d)$ be any compact metric space with diameter $D = D(T)$. Denote by $C(T)$ the space of continuous functions on $T$ with the sup-norm $\|\cdot\|_\infty$ and by $\mathrm{Lip}(T)$ the space of Lipschitz functions $f$ on $(T,d)$ provided with the norm
$$\|f\|_{\mathrm{Lip}} = D^{-1}\|f\|_\infty + \sup_{s \ne t}\frac{|f(s) - f(t)|}{d(s,t)}\,.$$
Consider the canonical injection map $j : \mathrm{Lip}(T) \to C(T)$. For every $1 \le p \le 2$, let $(\theta_i)$ be a standard $p$-stable sequence (if $p = 2$, $(\theta_i) = (g_i)$, the orthogaussian sequence). Denote by $T_p(j)$ the smallest constant $C$ such that for every finite sequence $(x_i)$ in $\mathrm{Lip}(T)$,
$$\Big\|\big\|\sum_i \theta_i\, j(x_i)\big\|_\infty\Big\|_{p,\infty} \le C \Big(\sum_i \|x_i\|_{\mathrm{Lip}}^p\Big)^{1/p}\,.$$
Denote further by $T'_p(j)$, $1 \le p \le 2$, the smallest constant $C$ such that
$$\mathrm{IE}\Big\|\sum_i \varepsilon_i\, j(x_i)\Big\|_\infty \le C\,\begin{cases}\big(\sum_i \|x_i\|_{\mathrm{Lip}}^2\big)^{1/2} & \text{if } p = 2\,,\\ \big\|(\|x_i\|_{\mathrm{Lip}})_i\big\|_{p,1} & \text{if } 1 \le p < 2\,.\end{cases}$$
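For the reader's convenience, and as a notational assumption on our part (the text takes these sequence norms from earlier chapters without restating them): for a sequence $(\alpha_i)$ with non-increasing rearrangement $(\alpha_i^{*})$, the two norms entering the definitions of $T_p(j)$ and $T'_p(j)$ are, up to numerical equivalence,

```latex
\|(\alpha_i)\|_{p,\infty} = \sup_{i\ge 1}\, i^{1/p}\,\alpha_i^{*}
  \quad \text{(weak-}\ell_p\text{ quasi-norm)}\,,
\qquad
\|(\alpha_i)\|_{p,1} = \sum_{i\ge 1} i^{1/p-1}\,\alpha_i^{*}
  \quad \text{(Lorentz }\ell_{p,1}\text{ norm)}\,.
```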

390 The introdu tion of those two type onstants is motivated by the two possible de.

From this se tion a tually.nitions of p -stable operators as des ribed at the end of Se tion 9. The se ond inequality is simply that Gaussian averages "dominate" the orresponding Radema her ones. that is. while the . K 1T2 (j ) T20 (j ) KT2(j ) . for some numeri al onstant K . it an be seen that T2 (j ) and T20 (j ) are equivalent.2.

one portion of the equivalen e (iii) in Proposition 9. the type onstants Tp and Tp0 of general operators between Bana h spa es are not equivalent when 1 p < 2 .rst inequality is obtained by partial integration and moment equivalen es of Gaussian averages.12). In general however. What we will dis over however is that. For the same reason as the latter. and their . for this parti ular operator j . Tp (j ) Kp Tp0 (j ) ( f. for 1 p < 2 . Tp (j ) and Tp0 (j ) are equivalent for every 1 p 2 .

finiteness equivalent to the existence of a majorizing measure condition $\gamma^{(q)}(T,d)$ on $(T,d)$ for $h_q(x) = (\log(1/x))^{1/q}$ where $q$ is the conjugate of $p$ ( $h_1(x) = \log(1 + \log(1/x))$ if $p = 1$ ). Recall that if $m$ is a probability measure on $(T,d)$, we let
$$\gamma^{(q)}_m(T,d) = \sup_{t\in T}\int_0^\infty h_q\big(m(B(t,\varepsilon))\big)\,d\varepsilon\,,$$
and set $\gamma^{(q)}(T,d) = \inf_m \gamma^{(q)}_m(T,d)$ where the infi-

We .17. Theorem 12. d) Kp Tp (j ) : Proof.mum runs over all probability measures m on T . There is a onstant Kp depending only on p su h that Kp 1 Tp0 (j ) (q) (T. Let 1 p 2 . We then have the following theorem.

Let (xi ) be a .rst prove the left hand side inequality for p = 2 .

t) = kXs Xt k2 = ( jxi (s) xi (t)j2 )1=2 . IEk X X "i j (xi )k1 = IE sup j "i xi (t)j t2T i i sup t2T X i xi (t)2 !1=2 + K (2)(T.19)). kxi k2Lip = 1 . t 2 T . Then. and set Pi i dX (s. Let X = (Xt )t2T where Xt = "i xi (t) .nite sequen e in Lip(T ) P P su h that. dX ) : . by homogeneity. for some numeri al onstant K . i (11. sin e X is subgaussian with respe t to dX ( f.
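For orientation (an elementary check, not from the text): in the simplest instance where $T$ is finite with $\mathrm{Card}\,T = N$, the uniform probability measure $m$ on $T$ satisfies $m(B(t,\varepsilon)) \ge 1/N$ for every $t$ and $\varepsilon > 0$, so the majorizing measure quantity on the right hand side is at most of entropy size:

```latex
\gamma^{(2)}(T,d_X) \le \gamma^{(2)}_m(T,d_X)
  = \sup_{t\in T}\int_0^{D}\Big(\log\frac{1}{m(B(t,\varepsilon))}\Big)^{1/2} d\varepsilon
  \le D\,(\log N)^{1/2}\,,
```

where $D$ is the diameter of $(T,d_X)$, in accordance with the classical bound for the expected maximum of $N$ subgaussian variables.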

by de.391 Now.

d) K 0 (2) (T. for every s. On the other hand.20. A T .8 that (q) (T ) Kp supf(A) . Using Lemma 11. (P xi (t)2 )1=2 D and dX (s. d) where we have used Lemma 12. t) . Thus i IEk X i "i j (xi )k1 D + K (2)(T. A . we have shown in the proof of Theorem 12. then Tp (j 0 ) Tp (j ) . t in T . Let us now establish that (q) (T. the proof is entirely similar for 1 p < 2 . t) d(s. The result thus follows by homogeneity. d) Kp Tp (j ) . It is easy to see that if A is a subset of T and j 0 the anoni al inje tion from Lip(A) into C (A) .1 (vi) in the last step.nition of k k Lip .

we do not spe ify q in ). when T is .niteg (where. for simpliti y. it is enough to show that. Hen e.

Here, $K_p$ denotes some constant depending only on $p$ and not necessarily the same in each occurrence. We claim that it is enough to show that $\gamma(T) \le K_p T_p(j)$ when $(T,d)$ is fi-

nite and ultrametri . there is a P .

then the anoni al distan e dX of the p -stable pro ess X = (Xt )t2T satis.nite family (xi ) in Lip(T ) with ( kxi kpLip )1=p 128 (for example!) and su h that if. i P Xt = i xi (t) . for t in T .

25) also holds for p = 1 . denote by Bk the family of balls of T of radius S Bk .5 and t2T Proposition 12.7 for p = 2 . i Indeed. the result. Sin e T is . by the onjun tion of Theorem 12. and.) Let k0 be the largest su h that 4 k0 D . For k k0 . 4 k .1 . [Ta8℄. or (12. f. we would then have 128Tp(j ) k sup jXt jkp. (We use here the fa t that (12.es dX d .25) for 1 p < 2 .

nite. 4 k1 ) = fxg for every x in T . 1gB . there exists k1 su h that B (x. we de. Let B = k0 kk1 For " = ("B )B2B 2 E = f0.

t in T and let ` be the largest su h that k>k d(s. t) 4 ` .ne '" = k1 X X k=k0 +1 B 2Bk 4 k "B IB : P 4 k 2D . It follows that We note that k'" k1 0 j'" (s) '" (t)j X k>` 4 k 2d(s. t) : . If B 2 Bm for some m ` . we have IB (s) = IB (t) . Consider now s.

The de.392 This shows that k'" k Lip 4 .

we have j'" (s) '" (t)j 12 4 ` 1 j" B1 (s) "B2 (t)j : Set N = 2 CardB . It follows from the pre eding inequality that 1X j' (s) '" (t)jp N "2E " Set then x" = 32N 1=p'" . The family (x" )"2E is a . B2 = B (t.nition of ` shows that the two balls B1 = B (s. t) : 2 E . 4 ` 1) . we have that j'" (s) '" (t)j 4 ` 1 j" 4 ` 1 j" "B2 (t)j B1 (s) "B2 (t)j B1 (s) X 4 k k>`+1 1 ` 1 4 : 2 Sin e j"B1 (s) "B2 (t)j is zero or one. Sin e they belong to B`+1 . " P 1=p !1=p 211=p 12 4 ` 1 321 d(s. 4 ` 1 ) are dierent.

We on lude this hapter with some observations and onje tures on Radema her pro esses. Further. t) d(s.17. this on ludes the proof of Theorem 12.nite family of elements of Lip(T ) su h that kx" kpLip 128 . Re all . t) . then we have just shown that dX (s. t 2 T . t in T . "2E As announ ed. if (" )"2E is an independent family of standard p -stable random " P variables and if Xt = " x" (t) . for all s.

Boundedness of the Gaussian pro ess X is then hara terized by the majorizing measure ondition (2) (T ) for the Hilbertian distan e j j on T `2 . we mean that for some sequen e (xi (t)) P P of fun tions on T . dX ) < 1 .19). assumed to be almost surely onvergent (i. t) = kXs Xt k2 . By Radema her pro ess X = (Xt )t2T indexed by a set T . By (11.rst that the various results on Gaussian pro esses des ribed in Se tion 12. A Radema her pro ess is subgaussian. we may identify i T with the subset of `2 onsisting of the elements (xi (t)) .e. t 2 T . If dX (s. xi (t)2 < 1 ). it is onvenient (and fruitfull too) to identify as before T with a subset of `2 .1 an be formulated in the language of subsets of Hilbert spa e (as.10). where (xi (t)) is a sequen e of fun tions on T . we thus know that r(T ) < 1 whenever there is a probability measure m on T su h that (2) m (T. For example. In order to present our observations. t in T . Xt = "i xi (t) . next to Theorem 12. for example. As i i in Chapter 4. if X = (Xt )t2T is a P Gaussian pro ess given by Xt = gi xi (t) . we an write that r(T ) jT j + K (2) (T ) . let r(T ) = IE sup jXt j . t2T s.
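The subgaussian property of Rademacher processes invoked here is the classical Hoeffding bound (an elementary verification, not spelled out in the text): since $\mathrm{IE}\,e^{\lambda\varepsilon_i a_i} = \cosh(\lambda a_i) \le e^{\lambda^2 a_i^2/2}$, independence gives

```latex
\mathrm{IE}\exp\Big(\lambda\sum_i \varepsilon_i a_i\Big)
  = \prod_i \cosh(\lambda a_i)
  \le \exp\Big(\frac{\lambda^2}{2}\sum_i a_i^2\Big)\,,
  \qquad \lambda \in \mathbb{R}\,,
```

applied to $a_i = x_i(s) - x_i(t)$, so that a Rademacher process is indeed subgaussian with respect to the Euclidean distance $|s - t|$ on $T \subset \ell_2$.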

`2 with respe t to From this result.10). one may look equivalently for a subset A of `2 su h that A Conv(yn ) where (yn ) is a sequen e in `2 su h that. for all n . Sin e r(T ) sup jxi (t)j . This result would ompletely hara terize boundedness of Radema her pro ess. Theorem 4. for some t2T i numeri al onstant K . one might wonder. Let us all M -de omposable a Radema her pro ess X = (Xt )t2T .393 where jT j = sup jtj . Denote P by B1 the unit ball of `1 `2 . and (2)(T ) is as before the majorizing measure ondition on T t2T the anoni al metri j j on `2 and K some numeri al onstant. or rather T identi. jyn j Kr(T )(log(n+1)) 1=2 . By the onvex hull representation of subsets A for whi h (2)(A) is ontrolled (Theorem 12. as for Gaussian pro esses.15 supports this onje ture. T Kr(T )B1 + A where A is a subset of `2 su h that (2) (A) Kr(T ) . for some possible ne essary majorizing measure ondition in order for a Radema her pro ess to be almost surely bounded (or ontinuous). a natural onje ture would be that.

ed with a subset of `2 . As a . (12:36) T MB1 + A and (2)(A) M : In the last part of this hapter. for some M and some A in `2 . su h that. we would like to brie y des ribe some examples of M -de omposable Radema her pro esses whi h go in the dire tion of the pre eding onje ture.

That is. where 1 p < 2 and q is the onjugate of p . let us onsider the ase of a subset T of `2 for whi h there is a probability measure m (q) su h that m (T.1 = sup i1=p zin M (log(n + 1)) i1 1=q . kz nkp. kkp. if M = Kp (q) (T.1 M (log(n + 1)) 1=q ( M log(1 + log n) if q = 1 ) and T Conv(z n ) .1 ) where Kp only depends on p . k kp. As a onsequen e of Lemma 11. q = 1 being similar.rst example. there exists a sequen e (z n ) in `2 su h that.1) < 1 . let us restri t for simpli ity to the ase q < 1 . kz nkp. For every n . T Kp0 MB1 + Conv(yn ) where Kp0 only depends on p . T is Kp0 M -de omposable.11. su h that. We show that there exists then a sequen e (yn ) in `2 with jyn j Kp0 M (log(n + 1)) 1=2 for all n . To he k this assertion.20 and Remark 12.

for ea h n . Let i0 = i0 (n) be the largest integer i 1 su h that i (M=q)q log(n + 1) so that i0 X (12:37) i=1 zin M : Let then.394 where (zin) denotes the non-in reasing rearrangement of (jzin j) . yn be the element of IRIN de.

The next proposition des ribes a more general result based on Lemma 4.ned by yin = zin if zin otherwise. Clearly X jyn j2 = (zin )2 Kp00 M 2 (log(n + 1)) 1 62 fz1n. a ording to the onje ture.18. zin g .37). More pre isely. k kp. ould possibly des ribe all Radema her pro esses. Hen e.1 ) < 1 . yin = 0 0 i>i0 and the laims follows by (12. Then T is KM -de omposable where K is numeri al. one an .9. : : : . T is de omposable. Proposition 12. It deals with a natural lass of bounded Radema her pro esses whi h. under a majorizing measure ondition of the type (q) (T. Let (an ) be a sequen e in `2 su h that for some M X n IPfj 1 "i ani j > M g : 8 i X Let T Conv(an ) .

un and vn in `2 su h that an = un + vn and i X i junij M1 and exp K1M12 jvn j2 2IPfj X i "i ani j > M1 g : If. it follows from the se ond of these inequalities and the hypothesis that N (") 41 exp(K1 M12="2 ) . We an therefore rearrange (vn ) as a sequen e (yn ) su h that jyn j does not in rease. Denote by K1 1 the numeri al onstant of p Lemma 4. p P Proof.2. there exist. for ea h n . jvn j "g . In parti ular. jynj KM (log(n + 1)) 1=2 with the property that T KMB1 + Conv(yn ) . N Cardfn . N (") = Cardfn . for all n .9 and set M1 = 2 2K1 M . for " > 0 . ( (ani )2 )1=2 2 2M for all n .nd a sequen e (yn ) in `2 su h that. By Lemma 4. jvn j jyN jg 1 K M2 exp 1N 21 : 4 jy j . By this lemma. for every N .

Hence, for all $N$, $K_1 M_1^2/|y_N|^2 \ge \log(4N)$, that is, $|y_N| \le K M (\log(N+1))^{-1/2}$ for all $N$ for some numerical constant $K$. The conclusion immediately follows since $\mathrm{Conv}(v_n) = \mathrm{Conv}(y_n)$.

Notes and References

The main results of this chapter are taken from the two papers [Ta5] and [Ta8] where existence of majorizing measures for bounded Gaussian and $p$-stable processes, $1 \le p < 2$, has been established. This chapter basically compiles, with some omissions, these articles. References on Gaussian processes (in the spirit of what is developed in this chapter) presenting the results at their stage of developments are the article by R. M. Dudley [Du2], the well-known notes by X. Fernique [Fer4] and the paper by N. C. Jain and M. B. Marcus [J-M3]. A recent account on both continuity and extrema is the set of notes by R. J. Adler [Ad]. The fi-

yield the new proof of Theorem 12. Fernique [Fer4℄.8 and 12. that is dierent from the original proof of [Ta5℄. a tually ontrol of subsets of Hilbert spa e. The result orresponds to Theorems 12.12 (properly reformulated) to a large lass of in. (That the entropy integral is not ne essary in general was known sin e the examples of [Du1℄. and that is somewhat simpler and more onstru tive. This limit to the ase of Gaussian or p -stable pro esses. Let us mention also a "volumi " approa h to regularity of Gaussian pro esses.7.niteness of Dudley's entropy ondition for almost surely bounded stationary Gaussian pro esses is due to X. The ideas of this result.5 is that it an help to understand only pro esses whi h are essentially des ribed by a single distan e. the use of Slepian's lemma by the use of Sudakov minoration and of on entration of measure (expressed by Borell inequality in the Gaussian ase). this theorem is extended to a ase where the single distan e is repla ed by a family of distan es. He onje tured ba k in 1974 the validity of the general ase. Fernique also established in [Fer5℄ the a posteriori important ase of existen e of a majorizing measure in ase of an ultrametri index set.5 presented here. in the papers [Du1℄ and [Mi-P℄ (see also [Pi18℄). In the re ent work [Ta18℄. The use of these tools allow in [Ta18℄ to extend Theorem 12. This result has thus been obtained in [Ta5℄. in the proof of Proposition 12. One limitation of Theorem 12. when restri ted to the ase of one single distan e.) X. Another ontribution of [Ta18℄ is a new method to repla e.9 in an homogeneous setting and will be detailed in the next hapter.


16 was known for a long time to follow from the majorizing measure onje ture.396 Theorem 12. Its proof is a tually rather indire t and it would be desirable to .

Zinn [Zi1℄ to onne t the JainMar us entral limit theorem for Lips hitz pro esses [J-M1℄ to the general type 2 theory of the CLT ( f. in [Ta19℄ Theorem 12. . The still open i i ase of Radema her pro esses would orrespond to the ase " = 1 ". Building on the ideas of [Ta18℄. and an independent identi ally distributed sequen e (i ) . This would yield a new approa h to the results of Se tion 12.nd a dire t argument. where the law of i has a density a e jxj with respe t to Lebesgue's measure ( a is the normalizing fa tor). also [M-P2℄. Consider 1 < 1 . The type 2 property of the anoni al inje tion map j : Lip(T ) ! C (T ) has been put into light by J.1. Pisier [Pi6℄ and des ribed by the results of [Ta12℄. Chapter 14).8 is (properly reformulated) extended to the pro esses (Xt )t2T . and where (xi (t)) is a sequen e of fun tions on T su h that x2i (t) < 1 . Theorem 12. its validity a tually implies onversely the existen e of majorizing measures.17 was obtained in [Ta5℄. where P P Xt = i xi (t) . The majorizing measure version of Zinn's map for p = 2 was noti ed in [He1℄ and in [Ju℄ for 1 p < 2 ( f. As indi ated by G. [M-P3℄).

Chapter 13. Random Fourier series

13.1. Stationarity and entropy
13.2. Random Fourier series
13.3. Stable random Fourier series and strongly stationary processes
13.4. Vector valued random Fourier series
Notes and references

Chapter 13. Random Fourier series

In Chapter 11, we evaluated random processes indexed by an arbitrary index set $T$. We now take advantage of some homogeneity properties of $T$ and investigate in this setting, using the general conclusions of Chapters 11 and 12, the more concrete processes of random Fourier series. The tools developed so far lead indeed to a definitive treatment of those objects with applications to Harmonic Analysis. Our main reference for this chapter is the work by M. B. Marcus and G. Pisier [M-P1], [M-P2], to which we refer for an historical background and accurate references and priorities.

In a first paragraph, we briefly indicate how majorizing measure and entropy conditions coincide in an homogeneous setting. We can therefore deal next only with the simpler minded entropy conditions. In the second section, we investigate and characterize in this way almost sure boundedness and continuity of large classes of random Fourier series. The case of stable random Fourier series and strongly stationary processes is studied next with the conclusions of Section 12.2. We conclude the chapter with some results and comments on random Fourier series with vector valued coefficients. Let us note that we usually deal in this chapter with complex Banach spaces and use, often without further notice, the trivial extensions to the complex case of various results (like e.g. the contraction principle).

13.1. Stationarity and entropy

In this paragraph, using necessity of the majorizing measure (entropy) condition for boundedness and continuity of (stationary) Gaussian processes, we show that on translation invariant metrics, majorizing measure and entropy conditions are the same. We shall adopt (throughout this chapter) the following homogeneous setting. Let $G$ be a locally compact Abelian group with unit element $0$. Let $\lambda(\cdot) = |\cdot|$ be the normalized translation invariant (Haar) measure on $G$. Consider furthermore a metric or pseudo-metric $d$ on $G$ which is translation invariant in the sense that $d(u+s, u+t) = d(s,t)$ for all $u, s, t$ in $G$. Let finally $T$ be a compact subset of nonempty interior of $G$. Recall that $N(T,d,\varepsilon)$ denotes the minimal number of open balls (with centers in $T$) of radius $\varepsilon > 0$ in the pseudo-metric $d$ necessary to cover $T$. More generally, for subsets $A, B$ of $G$, denote by $N(A,B)$ the minimal number of translates of $B$ (by elements of $A$) necessary to cover $A$, i.e.
$$N(A,B) = \inf\Big\{N \ge 1\,;\ \exists\, t_1, \ldots, t_N \in A\,,\ A \subset \bigcup_{i=1}^N (t_i + B)\Big\}\,.$$

Here $t + B = \{t + s\,;\ s \in B\}$. We let similarly, for subsets $A, B$ of $G$, $A + B = \{s + t\,;\ s \in A\,,\ t \in B\}$ and define in the same way $A - B$. We set $B(0,\varepsilon) = \{t \in G\,;\ d(t,0) < \varepsilon\}$, $\varepsilon > 0$, and $T' = T + T$, $T'' = T + T' = T + T + T$. The following lemma is an elementary statement on the preceding covering numbers which will be useful in various parts of this chapter.

Lemma 13.1. Under the previous notations, we have:
(i) if $B = T' \cap B(0,\varepsilon)$, then $N(T,d,\varepsilon) = N(T, B(0,\varepsilon)) = N(T,B)$;
(ii) $N(T,d,2\varepsilon) \le N(T,\ B(0,\varepsilon) - B(0,\varepsilon))$;
(iii) if $A \subset G$, $N(T,A) \ge |T|/|A|$;
(iv) if $A \subset T'$, $N(T,\ A - A) \le |T''|/|A|$.

Proof. (i) follows immediately from the defi-

Hen e. there exists ti in T su h that t 2 ti + B (0. for some i = 1. v 2 B (0. B (0. ")) . d(t. A) < 1 . tM g be maximal in T under the onditions (ti + A) \ (tj + A) = . this means that t = ti + u v where u.1 is omplete. v in A . A A)jAj : The proof of Lemma 13. Then. This implies i=1 that M N (T. ti ) = d(u v. The idea is simply that if a majorizing measure exists then Haar measure is also a majorizing measure from whi h the on lusion then follows from the pre eding lemma. 0) + d(0. 2") N (T. N S if T (ti + A) . Let ft1. by maximality. (iv) Assume jAj > 0 . Therefore t 2 ti + A A and T (ti + A A) . . A A) . (ii) If. (t + A) \ (ti + A) 6= . If t 2 T . ") . (iii) It is enough to onsider the ase N (T. 0) d(u. ") . Now (ti + A)iM are disjoint in T 00 = T + T 0 and thus jT 00 j M X i=1 jti + Aj = M jAj N (T. t + u = ti + v for some u. ") B (0. d. ") B (0. by translation invarian e. . for t in T . : : : . : : : . M S Hen e. M . Provided with this lemma we an ompare majorizing measure and entropy onditions in translation invariant situations.nitions. i=1 jT j N X i=1 jti + Aj = N jAj whi h gives the result. v) < 2" so that N (T. 8i 6= j .

Provided with this lemma, we can compare majorizing measure and entropy conditions in translation invariant situations. The idea is simply that if a majorizing measure exists, then Haar measure is also a majorizing measure, from which the conclusion then follows from the preceding lemma.

Proposition 13.2. Let $\psi$ be a Young function. In the preceding notations, let $m$ be a probability measure on $T \subset G$ and denote by $D = D(T)$ the $d$-diameter of $T$. Then
$$\frac{1}{2}\int_0^D \psi^{-1}\Big(\frac{|T|}{|T''|}\,N(T,d,\varepsilon)\Big)\,d\varepsilon \le \sup_{t\in T}\int_0^D \psi^{-1}\Big(\frac{1}{m(B(t,\varepsilon))}\Big)\,d\varepsilon\,.$$
Conversely, if $\lambda_{T''}$ denotes restricted normalized Haar measure on $T'' = T + T + T$, then
$$\sup_{t\in T}\int_0^\infty \psi^{-1}\Big(\frac{1}{\lambda_{T''}(B(t,\varepsilon))}\Big)\,d\varepsilon \le \int_0^\infty \psi^{-1}\Big(\frac{|T''|}{|T|}\,N(T,d,\varepsilon)\Big)\,d\varepsilon\,. \eqno(13.1)$$
(Note the complete equivalence when $T = G$ is compact.)

Proof. Let us denote by $M$ the right hand side of the inequality that we have to establish. Since $\psi^{-1}(1/x)$ is convex, by Jensen's inequality,
$$M \ge \int_0^D \frac{1}{|T|}\int_T \psi^{-1}\Big(\frac{1}{m(B(t,\varepsilon))}\Big)\,d\lambda(t)\,d\varepsilon \ge \int_0^D \psi^{-1}\Big(\frac{|T|}{\int_T m(B(t,\varepsilon))\,d\lambda(t)}\Big)\,d\varepsilon\,.$$
Now, by Fubini's theorem and translation invariance,
$$\int_T m(B(t,\varepsilon))\,d\lambda(t) = \int |T \cap B(s,\varepsilon)|\,dm(s) \le |T' \cap B(0,\varepsilon)|\,.$$
Hence
$$M \ge \int_0^D \psi^{-1}\Big(\frac{|T|}{|T' \cap B(0,\varepsilon)|}\Big)\,d\varepsilon\,.$$
To conclude, observe that, by (ii) and (iv) of Lemma 13.1,
$$N(T,d,2\varepsilon) \le \frac{|T''|}{|T' \cap B(0,\varepsilon)|}$$
so that
$$M \ge \int_0^D \psi^{-1}\Big(\frac{|T|}{|T''|}\,N(T,d,2\varepsilon)\Big)\,d\varepsilon \ge \frac{1}{2}\int_0^D \psi^{-1}\Big(\frac{|T|}{|T''|}\,N(T,d,\varepsilon)\Big)\,d\varepsilon\,.$$
To show (13.1), observe that, for $t$ in $T$, $t + (T' \cap B(0,\varepsilon)) \subset T'' \cap B(t,\varepsilon)$, so that, by translation invariance and (i) and (iii) of Lemma 13.1,
$$\lambda_{T''}(B(t,\varepsilon)) \ge \frac{|T' \cap B(0,\varepsilon)|}{|T''|} \ge \frac{|T|}{|T''|\,N(T,d,\varepsilon)}$$
from which Proposition 13.2 follows.

Proposition 13.2 allows to state in terms of entropy the characterization of almost sure boundedness and continuity of stationary Gaussian processes. Before stating the result, it is convenient to introduce some notations concerning entropy integrals. Let $(T,d)$ be a pseudo-metric space with diameter $D$. For $1 \le q < \infty$, we let
$$E^{(q)}(T,d) = \int_0^\infty \big(\log N(T,d,\varepsilon)\big)^{1/q}\,d\varepsilon\,.$$
When $q = \infty$, we let
$$E^{(\infty)}(T,d) = \int_0^\infty \log\big(1 + \log N(T,d,\varepsilon)\big)\,d\varepsilon\,.$$
As usual, the integral is taken from $0$ to $\infty$ but of course stops at $D$. A process $X = (X_t)_{t\in G}$ indexed by $G$ is stationary if, for every $u$ in $G$, $(X_{u+t})_{t\in G}$ has the same distribution as $X$. If $X$ is a stationary Gaussian process, its corresponding $L_2$-metric $d_X(s,t) = \|X_s - X_t\|_2$ is translation invariant. As a corollary to the majorizing measure characterization of boundedness and continuity of Gaussian processes described in Chapter 12, together with Proposition 13.2, we can state:

Theorem 13.3. Let $G$ be a locally compact Abelian group and let $T$ be a compact metrizable subset of $G$ of nonempty interior. Let $X = (X_t)_{t\in G}$ be a stationary Gaussian process indexed by $G$. Then $X$ has a version with almost all bounded and continuous sample paths on $T \subset G$ if and only if $d_X$ is continuous on $G \times G$ and $E^{(2)}(T,d_X) < \infty$. Moreover, there is a numerical constant $K$ such that
$$K^{-1}\big(E^{(2)}(T,d_X) - L(T)^{1/2}D(T)\big) \le \mathrm{IE}\sup_{t\in T} X_t \le K\,E^{(2)}(T,d_X)$$
where $D(T)$ is the diameter of $(T,d_X)$ and $L(T) = \log(|T''|/|T|)$. Note that if $T = G$ is compact, $L(T) = 0$.

Sufficiency in this theorem is simply Dudley's majoration (Theorem 11.17). (It might be useful to recall at this point the simple comparison (11.17).) Necessity and the left side inequality follow from Theorem 12.9 together with Proposition 13.2 and the continuity of $d_X$. Note the following dichotomy contained in the statement of Theorem 13.3: stationary Gaussian processes are either almost surely continuous or almost surely unbounded (according to the fi-

Indeed.niteness or not of the entropy integral). let a tually (Xt )t2G be any stationary pro ess. The hoi e of the ompa t set T does not ae t the qualitative on lusion of Theorem 13.3. If T1 and T2 are two ompa t subsets of G with nonempty .

402 interiors. This is obvious by stationarity sin e ea h of the sets T1 and T2 an be overed by . then (Xt )t2T1 has a version with ontinuous sample paths if and only if (Xt )t2T2 does.

if G is the union of a ountable family of ompa t sets. "))1=2 d" < 1 : This an be shown by either repeating the arguments of the proof of Theorem 13. Boundedness (latti e-boundedness) of the stationary Gaussian pro ess of Theorem 13.3 or by using the Bohr ompa ti.nitely many translates of the other.3 over all G holds if and only if 1 Z 0 (log N (G. Consequently. then (Xt )t2T has a version with ontinuous sample paths if and only if the entire pro ess (Xt )t2G does. This applies in the most important ases su h as G = IRn . dX .

where T''' = T + T + T + T. We refer to [M-P1], p. 134-138, for more details on these comments.

Theorem 13.3 is one of the main ingredients in the study of random Fourier series. The proof of this fact indicates further that, under the preceding entropy condition, X has a version which is an almost periodic function on G; more precisely, under this condition, a stationary Gaussian process X = (X_t)_{t∈G} indexed by G admits an expansion as a series

    X_t = Σ_n g_n Re(a_n γ_n(t)) + Σ_n g'_n Im(a_n γ_n(t)) ,   t ∈ G ,

where (a_n) is a sequence of complex numbers in ℓ_2, (γ_n) a sequence of continuous characters of G and (g_n), (g'_n) independent standard Gaussian sequences.

Before turning to this topic in the subsequent section, it is convenient, for comparison and possible estimates in concrete situations, to mention an equivalent of the entropy integral E^(q)(T, d) for a translation invariant metric d. Assume for simplicity that T is a symmetric subset of G and let σ(t) = d(t, 0), t ∈ T'. Consider the non-decreasing rearrangement σ̄ of σ on [0, |T'|] (with respect to T'): for 0 < δ ≤ |T'|,

    σ̄(δ) = sup{ ε > 0 ; |T' ∩ B(0, ε)| < δ } .

For 1 < p ≤ 2, let

    I^(p)(T, d) = ∫_0^{|T'|} σ̄(ε) ( log(4|T'''|/ε) )^{1/p} dε/ε .

By Lemma 13.1 and elementary arguments it can be shown that, if q is the conjugate of p, E^(q)(T, d) and I^(p)(T, d) are essentially of the same order. Namely, for some constant K_p > 0 only depending on p,

    K_p^{-1} ( D + I^(p)(T, d) ) ≤ E^(q)(T, d) ≤ K_p ( D + I^(p)(T, d) )

where D is the diameter of (T, d). A similar result holds for p = 1 (and q = ∞).

13.2. Random Fourier series

In this paragraph, we take advantage of Theorem 13.3 to study a class of random Fourier series and develop some applications. The interested reader will find in the book [M-P1] a general and historical introduction to random Fourier series. We more or less only concentrate here on one typical situation and refer to [M-P1] for the non-Abelian case as well as for further aspects.

Throughout this section, we only consider Abelian groups. Let us fix the following notations. G is a locally compact Abelian group with identity element 0 and Haar measure λ(·) = |·|. Let Γ be the dual group of all characters of G (i.e., γ is a continuous complex valued function on G such that |γ(t)| = 1 and γ(s)γ(t) = γ(s + t) for all s, t in G). We denote by V a compact neighborhood of 0 (by translation invariance, the results would apply to all compact sets with non-empty interior). As a concrete example of this setting, one may consider the compact group TT = IR/ZZ (identified with [0, 2π[) with ZZ as dual group. Fix a countable subset A of Γ. Since the main interest will be the study of random Fourier series with spectrum in A, and since A generates a closed separable subgroup of Γ, we can assume without restricting the generality that Γ itself is separable. In particular, all the compact subsets of G are metrizable. V ⊂ G being as before, we agree to denote by ||·|| = sup_{t∈V} |·| the sup-norm on the Banach space C(V) of all continuous complex valued functions on V (or on G when V = G).

Let (a_γ)_{γ∈A} be complex numbers such that Σ_{γ∈A} |a_γ|² < ∞, and let (g_γ)_{γ∈A} be a standard Gaussian sequence. (We sometimes omit γ ∈ A in the summation symbol, only indicated then by γ.) Following the comments at the end of Theorem 13.3, let us first consider the Gaussian process X = (X_t)_{t∈G} given by

(13.2)    X_t = Σ_{γ∈A} a_γ g_γ γ(t) ,   t ∈ G .
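As a small numerical illustration of the structure just described, the following sketch (with hypothetical coefficients of our own choosing) computes the L_2 metric associated to a Gaussian random Fourier series on the torus G = IR/ZZ with characters γ_n(t) = e^{2πint}, and checks that it is translation invariant, as claimed below.

```python
import cmath
import math

# Hypothetical coefficients a_n = 1/n^2 over the characters gamma_n(t) = e^{2*pi*i*n*t}
# of the torus G = R/Z; purely illustrative, not the book's notation.
coeffs = {n: 1.0 / n**2 for n in range(1, 50)}

def d_X(s, t):
    # L2 pseudo-metric of the series: (sum_n |a_n|^2 |gamma_n(s) - gamma_n(t)|^2)^{1/2}
    return math.sqrt(sum(abs(a)**2 *
                         abs(cmath.exp(2j*math.pi*n*s) - cmath.exp(2j*math.pi*n*t))**2
                         for n, a in coeffs.items()))

# Translation invariance: d_X(s+u, t+u) = d_X(s, t), because
# |gamma(s+u) - gamma(t+u)| = |gamma(u)| |gamma(s) - gamma(t)| and |gamma(u)| = 1.
print(d_X(0.1, 0.3), d_X(0.1 + 0.25, 0.3 + 0.25))
```

The two printed values agree up to rounding, which is exactly the translation invariance used repeatedly in this section.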

The question of course arises of the almost sure continuity or uniform convergence of the series X on V. Since (g_γ) is a symmetric sequence, these two properties are equivalent: if X is continuous (or admits a continuous version) on V, then, by Itô-Nisio's theorem (Theorem 2.4), X converges uniformly (for any ordering of A).

The process defined by (13.2) is a (complex valued) Gaussian process indexed by G with associated L_2 metric

    d_X(s, t) = ( Σ_{γ∈A} |a_γ|² |γ(s) - γ(t)|² )^{1/2} ,   s, t ∈ G ,

which is translation invariant. Since X is complex valued, to enter the setting of Theorem 13.3, we use the following. If we let

    X'_t = Σ_{γ∈A} g_γ Re(a_γ γ(t)) + Σ_{γ∈A} g'_γ Im(a_γ γ(t)) ,   t ∈ G ,

where (g'_γ) is an independent copy of (g_γ), then X' = (X'_t)_{t∈G} is a real stationary Gaussian process such that d_{X'} = d_X, and the series X and X' converge uniformly almost surely simultaneously. That is, X' admits a version with almost all sample paths continuous on V, and therefore converges uniformly almost surely on V, if and only if E^(2)(V, d_{X'}) = E^(2)(V, d_X) < ∞, and thus the same holds for X. As a consequence of Theorem 13.3, we have the following statement.

Theorem 13.4. Let X be the random Fourier series of (13.2). The following are equivalent:

(i) for some (all) ordering of A, the series X converges uniformly almost surely on V;

(ii) sup_F IE|| Σ_{γ∈F} a_γ g_γ γ ||² < ∞, the supremum running over the finite subsets F of A;

(iii) E^(2)(V, d_X) < ∞.

Further, for some numerical constant K,

(13.3)    K^{-1} ( E^(2)(V, d_X) + ( Σ_{γ∈A} |a_γ|² )^{1/2} ) ≤ (IE||X||²)^{1/2} ≤ K ( E^(2)(V, d_X) + L(V)^{1/2} ( Σ_{γ∈A} |a_γ|² )^{1/2} )

where we recall that L(V) = log(|V''|/|V|), V'' = V + V + V, and ||X|| = sup_{t∈V} |X_t|.

Proof. If X converges uniformly in some ordering, then (ii) is satisfied by the integrability properties of Gaussian random vectors (Corollary 3.2) and conditional expectation. Let F be finite in A and denote by X^F the finite sum Σ_{γ∈F} a_γ g_γ γ. Then, as a consequence of Theorem 13.3 (considering as indicated before the natural associated real stationary Gaussian series), we have that (13.3) holds for X^F, with thus a numerical constant K independent of F. Then (iii) holds under (ii) by increasing F to A, and (13.3) holds in the same way for any ordering of A. Finally, if E^(2)(V, d_X) < ∞, X converges in L_2 with respect to the uniform norm on C(V): to see this, simply use a Cauchy argument in inequality (13.3) and dominated convergence in the entropy integral; X then converges almost surely by Itô-Nisio's theorem. Note that we recover from this statement the equivalence between almost sure boundedness and continuity of Gaussian random Fourier series.

As the main question of this study, we shall be interested in similar results and estimates when the Gaussian sequence (g_γ)_{γ∈A} is replaced by a Rademacher sequence (ε_γ)_{γ∈A} or, more generally, by some symmetric sequence (ξ_γ)_{γ∈A} of real random variables. While we are dealing with Fourier series with complex coefficients, for simplicity however we only consider real probabilistic structures; the complex case can actually easily be deduced. Recall that, by the symmetry assumption, the sequence (ξ_γ) can be replaced by (ε_γ ξ_γ) where (ε_γ) is an independent Rademacher sequence. This is the setting we will adopt. By a standard symmetrization procedure, the results apply similarly to a sequence (ξ_γ) of independent mean zero random variables (cf. Lemma 6.3).

Assume therefore that we are given a sequence of complex numbers (a_γ)_{γ∈A} and a sequence (ξ_γ)_{γ∈A} of real random variables satisfying

    Σ_{γ∈A} |a_γ|² IE|ξ_γ|² < ∞ .

Consider the process Y = (Y_t)_{t∈G} defined by

(13.4)    Y_t = Σ_{γ∈A} a_γ ε_γ ξ_γ γ(t) ,   t ∈ G ,

where, as before, (ε_γ) is a Rademacher sequence independent of (ξ_γ). We associate to this process the L_2 pseudo-metric

    d_Y(s, t) = ( Σ_{γ∈A} |a_γ|² IE|ξ_γ|² |γ(s) - γ(t)|² )^{1/2} ,   s, t ∈ G ,

which is translation invariant. If V is as usual a compact neighborhood of 0, we shall be interested, as previously in the Gaussian case, in the almost sure uniform convergence on V of the random Fourier series Y of (13.4). The main objective is to try to obtain bounds similar to the ones of the Gaussian random Fourier series ((13.2)). The basic idea consists in first writing the best possible entropy estimates conditionally on the sequence (ξ_γ) (that is, along the Rademacher sequence) and then integrating, making use of the translation invariance properties.

The first step is simply the content of the following lemma, which is the entropic (and complex) version of (11.19) and Lemma 11.20.

Lemma 13.5. Let x_i(t) be (complex) functions on a set T such that Σ_i |x_i(t)|² < ∞ for every t in T. Let 1 ≤ p ≤ 2 and equip T with the pseudo-metric (functional) d_p defined by

    d_2(s, t) = ( Σ_i |x_i(s) - x_i(t)|² )^{1/2}   if p = 2 ,
    d_p(s, t) = || (x_i(s) - x_i(t))_i ||_{p,∞}   if 1 ≤ p < 2 .

Then, for some K_p depending only on p,

    ( IE sup_{t∈T} | Σ_i ε_i x_i(t) |² )^{1/2} ≤ K_p ( D + E^(q)(T, d_p) )

where D = sup_{t∈T} ( Σ_i |x_i(t)|² )^{1/2} or sup_{t∈T} ||(x_i(t))_i||_{p,∞} according as p = 2 or p < 2, and where q = p/(p-1) is the conjugate of p. Further, if E^(q)(T, d_p) < ∞, the process Σ_i ε_i x_i(t) has a version with continuous paths on (T, d_p).

As announced, this lemma applied conditionally, together with the translation invariance property, allows to evaluate random Fourier series of the type (13.4). The main result in this direction is the following theorem. Recall that for V ⊂ G we set L(V) = log(|V''|/|V|) where V'' = V + V + V.

Theorem 13.6. For some numerical constant K,

    (IE||Y||²)^{1/2} ≤ K ( (1 + L(V)^{1/2}) ( Σ_{γ∈A} |a_γ|² IE|ξ_γ|² )^{1/2} + E^(2)(V, d_Y) ) .

In particular, if E^(2)(V, d_Y) < ∞, the random Fourier series Y of (13.4) converges uniformly on V with probability one.
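The entropy numbers N(V, d, ε) driving these bounds can be estimated numerically in simple cases. The following sketch (with toy coefficients of our own choosing, on a discretized torus) upper-bounds covering numbers by a greedy maximal-separated-set construction and evaluates a dyadic proxy for the entropy integral E^(2)(V, d); it is an illustration of the quantities involved, not an implementation from the book.

```python
import cmath
import math

# Toy translation-invariant metric on a grid of the torus (hypothetical coefficients).
coeffs = {n: 2.0**(-n) for n in range(1, 12)}

def d(s, t):
    return math.sqrt(sum(a * a *
                         abs(cmath.exp(2j*math.pi*n*s) - cmath.exp(2j*math.pi*n*t))**2
                         for n, a in coeffs.items()))

grid = [k / 200 for k in range(200)]

def covering_number(eps):
    # Greedy maximal eps-separated set: its d-balls of radius eps cover the grid,
    # so its cardinality is an upper bound for N(grid, d, eps).
    centers = []
    for t in grid:
        if all(d(t, c) > eps for c in centers):
            centers.append(t)
    return len(centers)

# Covering numbers grow as eps = 2^{-n} shrinks, and the dyadic series
# sum_n 2^{-n} (log N(2^{-n}))^{1/2}, mimicking E^{(2)}(V, d), stays bounded here.
Ns = [covering_number(2.0**(-n)) for n in range(8)]
entropy_sum = sum(2.0**(-n) * math.sqrt(math.log(max(Ns[n], 1))) for n in range(8))
```

For coefficients decaying geometrically as above, the dyadic entropy sum converges quickly, consistent with the uniform convergence criterion of Theorem 13.6.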

Proof. There is no loss of generality to assume by homogeneity that (for example)

    ( Σ |a_γ|² IE|ξ_γ|² )^{1/2} = 1 .

Denote by Ω₀ the set, of probability one, which only depends on the sequence (ξ_γ), of all ω's for which Σ |a_γ|² |ξ_γ(ω)|² < ∞. For ω in Ω₀, introduce the translation invariant pseudo-metric

    d_ω(s, t) = ( Σ |a_γ|² |ξ_γ(ω)|² |γ(s) - γ(t)|² )^{1/2} ,   s, t ∈ G .

Clearly, IEd_ω²(s, t) = d_Y²(s, t) for every s, t in G. Set also

    D(ω) = 2 ( Σ |a_γ|² |ξ_γ(ω)|² )^{1/2} < ∞ .

We may as well assume that D(ω) > 0 for all ω in Ω₀. Conditionally on the set Ω₀, we now apply Lemma 13.5 (for p = 2). We get that, for every ω in Ω₀,

(13.5)    ( IE_ε sup_{t∈V} | Σ a_γ ε_γ ξ_γ(ω) γ(t) |² )^{1/2} ≤ K ( D(ω) + ∫_0^{D(ω)} ( log N(V, d_ω, ε) )^{1/2} dε ) ,

IE_ε denoting as usual partial integration with respect to the Rademacher sequence (ε_γ). The idea is now to integrate this inequality with respect to the sequence (ξ_γ) and use the translation invariance properties to conclude. To this aim, let us set, for every integer n ≥ 1,

    W_n = { t ∈ V' ; d_Y(t, 0) ≤ 2^{-n} }   where V' = V + V ,

and define a sequence (b_n(ω))_{n≥0} of positive numbers by letting b₀(ω) = D(ω) and, for n ≥ 1,

    b_n(ω) = max( 2^{-n} , 4 ∫_{W_n} d_ω(t, 0) dt / |W_n| )   ( = 2^{-n} if |W_n| = 0 ) .

Observe that, for every n ≥ 1,

    (IEb_n²)^{1/2} ≤ 2^{-n} + 4 ( ∫_{W_n} IEd_ω²(t, 0) dt / |W_n| )^{1/2} ≤ 5 · 2^{-n}

since IEd_ω²(t, 0) = d_Y²(t, 0) ≤ 2^{-2n} on W_n; in particular b_n(·) → 0 almost surely. Let us denote by Ω₁ the set of full probability consisting of Ω₀ and of the set on which the sequence (b_n) converges to 0. For ω in Ω₁ and n ≥ 1, set

    B(n, ω) = { t ∈ W_n ; d_ω(t, 0) ≤ b_n(ω)/2 } .

Then, by Fubini's theorem and the definition of b_n(ω), for every n ≥ 1,

(13.6)    |B(n, ω)| ≥ |W_n| / 2 .

We are now ready to integrate in ω inequality (13.5). To this aim, we make use of the following elementary lemma.

Lemma 13.7. Let f : (0, D] → IR₊ be decreasing and let (b_n) be a sequence of positive numbers with b₀ = D and b_n → 0. Then

    ∫_0^D f(x) dx ≤ Σ_{n=0}^∞ b_n f(b_{n+1}) .

By this lemma, for ω in Ω₁,

    ∫_0^{D(ω)} ( log N(V, d_ω, ε) )^{1/2} dε ≤ Σ_{n=0}^∞ b_n(ω) ( log N(V, d_ω, b_{n+1}(ω)) )^{1/2} .

Furthermore, for ω in Ω₁ and n ≥ 0, by Lemma 13.1 and (13.6), we can write

    N(V, d_ω, b_{n+1}(ω)) ≤ |V''| / |B(n+1, ω)| ≤ 2 |V''| / |W_{n+1}| ≤ 2 (|V''|/|V|) N(V, d_Y, 2^{-n-1}) .

Hence (13.5) reads as

    ( IE_ε sup_{t∈V} | Σ a_γ ε_γ ξ_γ(ω) γ(t) |² )^{1/2} ≤ K ( D(ω) + Σ_{n=0}^∞ b_n(ω) ( log 2 (|V''|/|V|) N(V, d_Y, 2^{-n-1}) )^{1/2} )

for every ω in Ω₁. Since (IEb_n²)^{1/2} ≤ 5 · 2^{-n}, we have simply to integrate with respect to ω and we then get, for some numerical constant K,

    ( IE sup_{t∈V} |Y_t|² )^{1/2} ≤ K ( 4 + 5 Σ_{n=0}^∞ 2^{-n} ( log 2 (|V''|/|V|) N(V, d_Y, 2^{-n-1}) )^{1/2} )

which yields the inequality of the theorem (recall the normalization ( Σ |a_γ|² IE|ξ_γ|² )^{1/2} = 1). A Cauchy argument together with dominated convergence in the entropy integral and Itô-Nisio's theorem then proves the almost sure uniform convergence of Y on V when E^(2)(V, d_Y) < ∞. The proof of Theorem 13.6 is complete.

Theorem 13.6 expresses a bound for the random Fourier series Y very similar to the one described previously for Gaussian Fourier series. Actually, the comparison between Theorem 13.6 and the lower bound in (13.3) indicates that, by the triangle inequality, for some numerical constant K,

(13.7)    ( IE|| Σ a_γ ε_γ ξ_γ γ ||² )^{1/2} ≤ K (1 + L(V)^{1/2}) ( IE|| Σ a_γ (IE|ξ_γ|²)^{1/2} g_γ γ ||² )^{1/2} .

(What we get actually is that

    ( IE|| Σ a_γ ε_γ ξ_γ γ ||² )^{1/2} ≤ K ( (1 + L(V)^{1/2}) ( Σ |a_γ|² IE|ξ_γ|² )^{1/2} + ( IE|| Σ a_γ (IE|ξ_γ|²)^{1/2} g_γ γ ||² )^{1/2} )

but we will not use this improved formulation in the sequel.) In particular, if sup_γ IE|ξ_γ|² < ∞, we see by the contraction principle that

(13.8)    ( IE|| Σ a_γ ε_γ ξ_γ γ ||² )^{1/2} ≤ K (1 + L(V)^{1/2}) sup_γ (IE|ξ_γ|²)^{1/2} ( IE|| Σ a_γ g_γ γ ||² )^{1/2} .

Recall that L(V) ≥ 0.
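The elementary Lemma 13.7 invoked in the proof above is easy to verify numerically. The sketch below (our own toy choices) takes f(x) = x^{-1/2}, which is decreasing and integrable at 0, and the dyadic sequence b_n = D·2^{-n}, and checks that the integral is indeed dominated by the sum Σ b_n f(b_{n+1}).

```python
import math

# Numeric check of the lemma: for decreasing f on (0, D] and positive b_n with
# b_0 = D and b_n -> 0, one has  int_0^D f(x) dx <= sum_n b_n f(b_{n+1}).
# Illustrative choices: f(x) = x^{-1/2} and b_n = D * 2^{-n}.
D = 1.0
f = lambda x: 1.0 / math.sqrt(x)

# Exact integral: int_0^1 x^{-1/2} dx = 2.
integral = 2.0

# The dominating sum: sum_{n>=0} b_n f(b_{n+1}) = sum_n 2^{-n} * 2^{(n+1)/2}
#                   = sqrt(2) / (1 - 2^{-1/2})  (about 4.83).
bound = sum(D * 2.0**(-n) * f(D * 2.0**(-n - 1)) for n in range(60))
```

The point of the lemma is precisely that a random decreasing-to-zero scale sequence (b_n) can replace the deterministic dyadic one in the entropy integral, at the cost of the factor b_n f(b_{n+1}) per dyadic level.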

One might of course wonder at this stage when an inequality like (13.8) can be reversed. By the contraction principle in the form of Lemma 4.5,

    ( IE|| Σ a_γ ε_γ ξ_γ γ ||² )^{1/2} ≥ inf_γ IE|ξ_γ| ( IE|| Σ a_γ ε_γ γ ||² )^{1/2} .

If we try to exploit this information in order to reverse inequality (13.8), we are faced with the question of knowing whether the Rademacher series Σ a_γ ε_γ γ dominates in the sup-norm the corresponding Gaussian series Σ a_γ g_γ γ. We know ((4.8)) that the converse is always satisfied, i.e.

(13.9)    ( IE|| Σ a_γ ε_γ γ ||² )^{1/2} ≤ K ( IE|| Σ a_γ g_γ γ ||² )^{1/2}

(with K = (π/2)^{1/2}), but the other way does not hold in general, unless we deal with a Banach space of finite cotype (cf. (4.14)). Although C(V) is of no finite cotype, the more general estimates (13.7) or (13.8) that we have obtained actually show that the inequality we are looking for is satisfied here, due to the particular structure of the vector valued coefficients a_γ γ. The following proposition is the announced result of the equivalence of Gaussian and Rademacher random Fourier series. Further, this property is indeed related to some cotype 2 Banach space of remarkable interest which we discuss next. The proof of this result is similar to the argument used in the proof of Proposition 9.25 showing that the Rademacher and Gaussian cotype 2 definitions are equivalent.

Proposition 13.8. For some numerical constant K,

    ( IE|| Σ a_γ g_γ γ ||² )^{1/2} ≤ K (1 + L(V)^{1/2}) ( IE|| Σ a_γ ε_γ γ ||² )^{1/2} .

In particular, Σ a_γ g_γ γ and Σ a_γ ε_γ γ converge uniformly almost surely simultaneously. (Recall that by (13.9) the reverse inequality is satisfied with a numerical constant independent of V. Actually a smaller function of L(V) would suffice but we need not be concerned with this here.)

Proof. The second assertion follows from the inequalities, a Cauchy argument and the integrability properties of both Gaussian and Rademacher series. We can thus assume that we deal with finite sums. Let λ > 0 to be specified. By the triangle inequality,

    ( IE|| Σ a_γ g_γ γ ||² )^{1/2} ≤ ( IE|| Σ a_γ g_γ I_{{|g_γ| ≤ λ}} γ ||² )^{1/2} + ( IE|| Σ a_γ g_γ I_{{|g_γ| > λ}} γ ||² )^{1/2} .

By the contraction principle, the first term on the right of this inequality is smaller than

    λ ( IE|| Σ a_γ ε_γ γ ||² )^{1/2} .

To the second, we apply (13.7) to see that it is bounded by

    K (1 + L(V)^{1/2}) ( IE|g|² I_{{|g| > λ}} )^{1/2} ( IE|| Σ a_γ g_γ γ ||² )^{1/2}

where g is a standard normal variable. Let us then choose λ > 0 in order that

    K (1 + L(V)^{1/2}) ( IE|g|² I_{{|g| > λ}} )^{1/2} ≤ 1/2 .

This can be achieved with λ of the order of 1 + L(V)^{1/2}, from which the proposition then follows.

As a corollary of Theorem 13.6 and Proposition 13.8, we can now summarize the results on random Fourier series Y = (Y_t)_{t∈G} of the type

    Y_t = Σ_{γ∈A} a_γ ξ_γ γ(t) ,   t ∈ G ,

where Σ |a_γ|² < ∞ and (ξ_γ) is a symmetric sequence of real random variables such that sup_γ IE|ξ_γ|² < ∞ and inf_γ IE|ξ_γ| > 0. Recall that, by the symmetry assumption, (ξ_γ) has the same distribution as (ε_γ ξ_γ) where (ε_γ) is an independent Rademacher sequence.

Corollary 13.9. Let V be as usual a compact symmetric neighborhood of the unit of G, and let Y = (Y_t)_{t∈G} be as just defined with associated metric

    d(s, t) = ( Σ |a_γ|² |γ(s) - γ(t)|² )^{1/2} ,   s, t ∈ G ,

which is translation invariant. Then Y converges uniformly on V almost surely if and only if the entropy integral E^(2)(V, d) is finite. Furthermore, for some numerical constant K,

    ( K (1 + L(V)^{1/2}) )^{-1} inf_γ IE|ξ_γ| ( ( Σ |a_γ|² )^{1/2} + E^(2)(V, d) )
        ≤ ( IE||Y||² )^{1/2}
        ≤ K (1 + L(V)^{1/2}) sup_γ (IE|ξ_γ|²)^{1/2} ( ( Σ |a_γ|² )^{1/2} + E^(2)(V, d) ) .

Note in particular that Y converges uniformly almost surely if and only if the associated Gaussian Fourier series Σ a_γ g_γ γ does (a special case of which discussed previously being of course constituted by the choice for (ξ_γ) of a Rademacher sequence). In particular, boundedness and continuity are equivalent. Let us mention further that several of the comments following Theorem 13.4 hold in the same way, and that the equivalences of Theorem 13.3 apply similarly in the context of Corollary 13.9.

The following is a consequence of Corollary 13.9.

Corollary 13.10. Let Y be as in Corollary 13.9 with IE|ξ_γ|² = 1 for all γ in A. Then (Y_t)_{t∈V} satisfies the central limit theorem in C(V) if and only if E^(2)(V, d) < ∞.

Proof. ( Σ a_γ g_γ γ(t) )_{t∈V} is a Gaussian process with the same covariance structure as (Y_t)_{t∈V}. If (Y_t)_{t∈V} satisfies the CLT it is necessarily pregaussian, so that E^(2)(V, d) < ∞ by (13.3). To prove sufficiency, consider independent copies of (Y_t)_{t∈V} associated to independent copies (ξ_γ^i) of the sequence (ξ_γ). Since, for all γ,

    ( IE | n^{-1/2} Σ_{i=1}^n ξ_γ^i |² )^{1/2} = (IE|ξ_γ|²)^{1/2} = 1 ,

the right hand side of the inequality of Corollary 13.9 together with an approximation argument shows that (Y_t)_{t∈V} satisfies the CLT in C(V) (cf. (10.4)).
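The scalar case of the Rademacher–Gaussian comparison (13.9) can be checked exactly by enumeration. For real scalars, IE|Σ g_i a_i| = (2/π)^{1/2} (Σ a_i²)^{1/2}, so (π/2)^{1/2} times the Gaussian first moment is just the ℓ_2 norm; the sketch below (with coefficients of our own choosing) verifies that the Rademacher first moment, computed exactly over all sign patterns, is dominated by it.

```python
import itertools
import math

# Exact check (by enumeration over all 2^n sign patterns) of the scalar case of
# (13.9):  E|sum eps_i a_i| <= (pi/2)^{1/2} E|sum g_i a_i| = (sum a_i^2)^{1/2}.
a = [3.0, 1.0, 0.5, 0.25, 2.0]          # illustrative coefficients

rademacher_mean = sum(
    abs(sum(e * x for e, x in zip(eps, a)))
    for eps in itertools.product((-1.0, 1.0), repeat=len(a))
) / 2 ** len(a)

l2_norm = math.sqrt(sum(x * x for x in a))   # = (pi/2)^{1/2} * Gaussian first moment
```

The domination holds with room to spare here; the text's point is that the reverse domination fails in general sup-norm settings, except under the extra L(V)^{1/2} factor of Proposition 13.8.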

Let us assume here for simplicity that G is (Abelian and) compact. Denote by Γ its discrete dual group. Turning back to Proposition 13.8 and its proof, let us now explicit a remarkable Banach algebra of cotype 2 whose cotype 2 property actually basically amounts to inequality (13.8). Introduce the space C_{a.s.} = C_{a.s.}(G) of all sequences of complex numbers a = (a_γ)_{γ∈Γ} in ℓ_2 such that the Gaussian Fourier series

    Σ_γ a_γ g_γ γ(t) ,   t ∈ G ,

converges uniformly on G almost surely. By what was described so far, we know that we get the same definition when the orthogaussian sequence (g_γ) is replaced by a Rademacher sequence (ε_γ), or even by some more general symmetric sequence which enters the framework of Corollary 13.9. Alternatively, as opposed to the preceding probabilistic definition, C_{a.s.} is characterized by E^(2)(G, d) < ∞ where

    d(s, t) = ( Σ_γ |a_γ|² |γ(s) - γ(t)|² )^{1/2} ,   s, t ∈ G ,

which thus provides a metric description of C_{a.s.}.

Equip now the space C_{a.s.} with the norm

    [[a]] = ( IE|| Σ_γ a_γ g_γ γ ||² )^{1/2}

for which it becomes a Banach space. By (13.9) and Proposition 13.8, an equivalent norm is obtained when (g_γ) is replaced by (ε_γ). In the same way, moment equivalences of both Gaussian and Rademacher series allow to consider L_p-norms for 1 ≤ p ≠ 2 < ∞. Convenient also is to observe that

(13.10)    (1/4) ( [[Re(a)]] + [[Im(a)]] ) ≤ [[a]] ≤ 2 ( [[Re(a)]] + [[Im(a)]] )

where Re(a) = (Re(a_γ))_γ, Im(a) = (Im(a_γ))_γ. The right hand side is clear by the triangle inequality and the contraction principle. For the left hand side, note that if (b_γ) and (α_γ) are complex numbers such that |α_γ| = 1 for all γ, then

    ( IE|| Σ α_γ b_γ g_γ γ ||² )^{1/2} ≤ 2 ( IE|| Σ b_γ g_γ γ ||² )^{1/2} .

Replacing α_γ by ᾱ_γ and b_γ by α_γ b_γ, we get that, since |α_γ| = 1,

    ( IE|| Σ b_γ g_γ γ ||² )^{1/2} ≤ 2 ( IE|| Σ α_γ b_γ g_γ γ ||² )^{1/2} .

For α_γ = ā_γ/|a_γ| and b_γ = Re(a_γ), Im(a_γ), (13.10) easily follows using one more time the contraction principle.

It is remarkable that the space C_{a.s.}, which arises from a sup-norm, has nice cotype properties. This is the content of the following proposition which, as announced, basically amounts to inequality (13.8).

Proposition 13.11. The space C_{a.s.} is of cotype 2.

Proof. We need to show that there is some constant C such that if a¹, ..., a^N are elements of C_{a.s.}, then

    Σ_{i=1}^N [[a^i]]² ≤ C IE[[ Σ_{i=1}^N ε_i a^i ]]² .

By observation (13.10), we need only prove this for real elements a¹, ..., a^N. Consider then the element a of C_{a.s.} defined for every γ in Γ by

    a_γ = ( Σ_{i=1}^N |a^i_γ|² )^{1/2} .

By Jensen's inequality and (4.3), it is clear that IE[[ Σ_{i=1}^N ε_i a^i ]]² ≥ (1/2) [[a]]², so that we have only to show that Σ_{i=1}^N [[a^i]]² ≤ C [[a]]². We simply deduce this from (13.8). Independently of the basic orthogaussian sequence (g_γ), let A₁, ..., A_N be disjoint sets with equal probability 1/N and let, for γ in Γ,

    ξ_γ = √N Σ_{i=1}^N (a^i_γ / a_γ) I_{A_i} .

Clearly

    Σ_{i=1}^N [[a^i]]² = IE|| Σ_γ a_γ ξ_γ g_γ γ ||² .

Since IE|ξ_γ|² = 1 for every γ, the conclusion follows from (13.8). Proposition 13.11 is therefore established.

It is interesting to present some remarkable subspaces of the Banach space C_{a.s.} = C_{a.s.}(G). Let G be as before a compact Abelian group. A subset Λ of the dual group Γ of G is called a Sidon set if there is a constant C such that for every finite sequence (c_γ)_{γ∈Λ} of complex numbers

    Σ_{γ∈Λ} |c_γ| ≤ C || Σ_{γ∈Λ} c_γ γ || .

Since the reverse inequality (with C = 1) is clearly satisfied, {γ ; γ ∈ Λ} generates, when Λ is a Sidon set, a subspace of C(G) isomorphic to ℓ_1. A typical example is provided by a lacunary sequence (in the sense of Hadamard) in ZZ, the dual group of the torus group; the Rademacher sequence in the Cantor group is another example of a Sidon set (cf. Section 4.1). If a = (a_γ)_{γ∈Γ} is a sequence of complex numbers vanishing outside some Sidon set Λ of Γ, then the norm [[a]] is equivalent to the ℓ_1-norm Σ |a_γ|. On the other hand, C_{a.s.} is of no type p > 1 (since this is not the case for ℓ_1). As we have seen in Chapter 4, ℓ_1 is of cotype 2, and it is remarkable that the norm [[·]] preserves this property.

Any function f in L_2(G) admits a Fourier expansion Σ_γ f̂(γ) γ which converges to f in L_2(G). We denote by [[f̂]] the norm in C_{a.s.} of the Fourier transform f̂ = (f̂(γ))_{γ∈Γ} of f. The consideration of the space C_{a.s.} gives rise to another interesting observation. We first note the following. Let F be a complex valued function on CI such that

    |F(x) - F(y)| ≤ |x - y|   and   |F(x)| ≤ 1   for all x, y ∈ CI .

Let f be a function in C(G) such that f̂ belongs to C_{a.s.}. Then the Fourier transform ĥ of h = F ∘ f belongs to C_{a.s.} and, for some numerical constant K₁ ≥ 1,

(13.11)    [[ĥ]] ≤ K₁ ( ||f|| + [[f̂]] ) .
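The Sidon inequality discussed above has one direction that is immediate: the sup-norm of a trigonometric polynomial never exceeds the ℓ_1 norm of its coefficients (the "C = 1" inequality). The sketch below, with random coefficients of our own choosing on a Hadamard-lacunary spectrum {1, 2, 4, ...}, verifies that trivial direction numerically and records the ratio, which for a Sidon set stays bounded as a numerical lower estimate of the Sidon constant.

```python
import cmath
import math
import random

# Hadamard-lacunary spectrum in Z (ratio 2), a classical example of a Sidon set.
random.seed(0)
freqs = [2**k for k in range(10)]
c = [random.uniform(-1, 1) + 1j * random.uniform(-1, 1) for _ in freqs]

l1 = sum(abs(z) for z in c)

# sup-norm of f(t) = sum_k c_k e^{2*pi*i*n_k*t}, sampled on a fine grid of the torus.
sup = max(abs(sum(z * cmath.exp(2j * math.pi * n * t / 4096)
                  for z, n in zip(c, freqs)))
          for t in range(4096))

sidon_ratio = l1 / sup   # >= 1 always; bounded over all polynomials iff the set is Sidon
```

The nontrivial content of Sidonicity is the reverse bound, uniform over all coefficient choices; the grid computation here only illustrates the quantities involved.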

The proof of (13.11) is an easy consequence of the comparison theorems for Gaussian processes. For any t in G, set f_t(x) = f(t + x), x ∈ G. If (X_t) is the Gaussian process X_t = Σ_γ f̂(γ) g_γ γ(t), then

    ||X_s - X_t||_2 = ||f_s - f_t||_2

where the L_2-norm on the left is understood with respect to the Gaussian sequence (g_γ) and the one on the right with respect to Haar measure on G. Since F is 1-Lipschitz and bounded by 1, we have that ||h_s - h_t||_2 ≤ ||f_s - f_t||_2 and ||h||_2 ≤ 1, and the inequality (13.11) then follows from (for example) Corollary 3.14 (in some complex formulation).

Let B be the space of the functions f in C(G) whose Fourier transform f̂ belongs to C_{a.s.} = C_{a.s.}(G). Equip B with the norm

    |||f||| = K₁ ( 3 ||f|| + [[f̂]] ) ,

where K₁ is the numerical constant which appears in (13.11) and where we recall that ||·|| is the sup-norm (on G), for which it becomes a Banach space. The trigonometric polynomials are dense in B; this can be proved directly or as a consequence of (13.11). B is actually a Banach algebra for the pointwise product such that |||fh||| ≤ |||f||| |||h||| for all f, h in B. Indeed, let F be defined as

    F(x) = x²/2   if |x| ≤ 1 ,   F(x) = x²/(2|x|²)   if |x| > 1 .

Then, as a corollary of (13.11) applied with this F, and since 4fh = (f + h)² - (f - h)², if f, h are in B with ||f||, ||h|| ≤ 1,

    [[ (fh)^ ]] ≤ 2 K₁ ( 2 + [[f̂]] + [[ĥ]] ) .

The inequality |||fh||| ≤ |||f||| |||h||| then follows from the definition of |||·|||.

Let A(G) be the algebra of the elements of C(G) whose Fourier transform is absolutely summable. The preceding algebra B is an example of a (strongly homogeneous) Banach algebra with A(G) ⊊ B ⊊ C(G) on which all Lipschitzian functions of order 1 operate. One might wonder for a minimal algebra with these properties. In this order of ideas, the following algebra might be of some interest. Let B̃ be the space of the functions f in C(G) such that

    sup { IE sup_{t∈G} | Σ_{i=1}^n a_i g_i f_t(x_i) | ; n ≥ 1 , x₁, ..., x_n ∈ G , Σ_{i=1}^n a_i² ≤ 1 } < ∞ .

It is not difficult to see that this quantity (at the exception perhaps of a numerical factor) defines a norm on B̃ for which B̃ is a Banach algebra with A(G) ⊊ B̃ ⊊ C(G) on which all 1-Lipschitzian functions operate. Further, B̃ is smaller than B. To see it, let (Z_i) be independent random variables with values in G and common distribution the normalized Haar measure on G. Then, if f ∈ B̃,

    sup_n (1/√n) IE sup_{t∈G} | Σ_{i=1}^n g_i f_t(Z_i) | < ∞ .

Therefore, by the central limit theorem in finite dimension, it follows that the Gaussian process with L_2 metric

    ( IE | f_s(Z₁) - f_t(Z₁) |² )^{1/2} = ||f_s - f_t||_2 ,   s, t ∈ G ,

is almost surely bounded. But then (Theorem 13.4) f̂ belongs to C_{a.s.}, which yields the claim. A deeper analysis of the Banach algebra B̃ is still to be done. Is it in particular the smallest on which the 1-Lipschitzian functions operate?

13.3. Stable random Fourier series and strongly stationary processes

In the preceding sections, we investigated stationary Gaussian processes and random Fourier series of the type Σ a_γ ξ_γ γ where (ξ_γ) is a symmetric sequence of real random variables satisfying basically sup_γ IE|ξ_γ|² < ∞. We shall be interested here in possible extensions to the stable case and to random Fourier series as before when (ξ_γ) satisfies some weaker moment assumptions (a typical example of both topics being given by the choice, for (ξ_γ), of a standard p-stable sequence).

Throughout this section, G denotes a locally compact Abelian group with dual group Γ, identity 0 and Haar measure λ(·) = |·|. If X = (X_t)_{t∈G} is a stationary Gaussian process continuous in L_2, by Bochner's theorem, there is a measure m on Γ which represents the covariance of X in the sense that, for all finite sequences of real numbers (λ_j) and (t_j) in G,

    IE | Σ_j λ_j X_{t_j} |² = ∫ | Σ_j λ_j γ(t_j) |² dm(γ) .

Let 0 < p ≤ 2. Say that a p-stable process X = (X_t)_{t∈G} indexed by G is strongly stationary, or harmonizable, if there is a finite positive Radon measure m concentrated on Γ such that, for all finite sequences (λ_j) of real numbers and (t_j) of elements of G,

(13.12)    IE exp( i Σ_j λ_j X_{t_j} ) = exp( - (1/2) ∫ | Σ_j λ_j γ(t_j) |^p dm(γ) ) .

Going back to the spectral representation of stable processes (Theorem 5.2), we thus assume in this definition of strongly stationary stable processes a special property of the spectral measure m, namely to be concentrated on the dual group Γ of G. This property is motivated by several facts, some of which will become clear in the subsequent developments. In particular, strongly stationary stable processes are stationary in the usual sense. Contrary to the Gaussian case, however, not all stationary stable processes are strongly stationary, and we refer to [M-P2] for more on this.

It is worthwhile mentioning an example. Let θ̃ be a complex stable random variable such that, as a variable in IR², θ̃ has for spectral measure the uniform distribution on the unit circle. That is, if θ̃ = θ₁ + iθ₂ = (θ₁, θ₂) ∈ CI and c = c₁ + ic₂ = (c₁, c₂),

(13.13)    IE exp( i Re(c̄ θ̃) ) = IE exp( i (c₁θ₁ + c₂θ₂) ) = exp( - (1/2) (σ⁰_p)^p |c|^p )

where σ⁰_p = ( ∫_0^{2π} |cos x|^p dx / 2π )^{1/p}. (Only in the Gaussian case, p = 2, are θ₁ and θ₂ necessarily independent.) This definition is one 2-dimensional extension of the real stable variables, for which spectral measures are concentrated on {-1, +1}. Let A be a countable subset of Γ. Let further (θ̃_γ)_{γ∈A} be a sequence of independent variables distributed as θ̃ and (a_γ)_{γ∈A} be complex numbers with Σ |a_γ|^p < ∞. Then

    X_t = Σ_{γ∈A} a_γ θ̃_γ γ(t) ,   t ∈ G ,

defines a complex strongly stationary p-stable process; its real and imaginary parts are real strongly stationary p-stable processes in the sense of definition (13.12). The spectral measure is discrete in this case. Strongly stationary stable processes therefore include random Fourier series with stable coefficients. Since, up to a constant depending on p only, Re θ̃ and Im θ̃ are standard p-stable real variables, this study can be shown to include similarly random Fourier series of the type Σ a_γ θ_γ γ where (θ_γ) is a (real) standard p-stable sequence. We shall come back to this from a somewhat simpler point of view later.

In the first part of this section, we extend to strongly stationary p-stable processes, 1 ≤ p < 2, the Gaussian characterization of Theorem 13.3. Necessity will follow easily from Section 12.2; sufficiency uses the series representation of stable processes (Corollary 5.1) together with ideas developed in the preceding section on random Fourier series (conditional estimates and integration in presence of translation invariant metrics). We exclude from such a study the case 0 < p < 1 since the only property that m is finite already ensures that a p-stable process, 0 < p < 1, has a version with bounded and continuous paths. It is thus assumed henceforth that 1 ≤ p < 2, the case p = 2 having already been studied.

Let X = (X_t)_{t∈G} be a strongly stationary p-stable process with 1 ≤ p < 2 and with spectral measure m as in (13.12). For s, t in G, denote by d_X(s, t) the parameter of the real p-stable variable X_s - X_t, that is

    d_X(s, t) = ( ∫ |γ(s) - γ(t)|^p dm(γ) )^{1/p} .

d_X defines a translation invariant pseudo-metric; d_X is continuous on V × V (by dominated convergence) (and thus also on G × G). Let V be a fixed compact neighborhood of the unit element 0 of G. We shall always assume that V is metrizable.

We know from Theorem 12.12 a general necessary condition for boundedness of p-stable processes. Together with Proposition 13.2, it yields that if X has a version with almost all sample paths bounded on V, then E^(q)(V, d_X) < ∞ where q = p/(p-1) is the conjugate of p. Moreover, for some constant K_p depending only on p,

(13.14)    || sup_{t∈V} |X_t| ||_{p,∞} ≥ K_p^{-1} ( E^(q)(V, d_X) + L(V)^{1/q} D(V) )

where D(V) is the diameter of (V, d_X) and L(V) = log(|V''|/|V|) (V'' = V + V + V). (When q = ∞ we agree that L(V)^{1/q} = log(1 + log(|V''|/|V|)).) This settles the necessity portion of this study of almost sure boundedness and continuity of strongly stationary p-stable processes. In the following, || sup_{t∈V} |X_t| ||_{p,∞} will be denoted for simplicity by ||X||_{p,∞}.

We now turn to sufficiency and show, as a main result, that the preceding necessary entropy condition is also sufficient.

Theorem 13.12. Let X = (X_t)_{t∈G} be a strongly stationary p-stable process, 1 ≤ p < 2. Then X has a version with almost all sample paths bounded and continuous on V ⊂ G if and only if E^(q)(V, d_X) < ∞. Furthermore, there is a constant K_p depending on p only such that

    K_p^{-1} ( |m|^{1/p} + E^(q)(V, d_X) + L(V)^{1/q} D(V) ) ≤ ||X||_{p,∞} ≤ K_p ( (1 + L(V)^{1/q}) |m|^{1/p} + E^(q)(V, d_X) )

where m is the spectral measure of X and D(V) the diameter of (V, d_X).

Proof. Necessity and the left hand side inequality of the theorem have been discussed above. We only prove the result for 1 < p < 2; the case p = 1 can basically be obtained similarly, with however some more care, and we refer to [Ta14] for the case p = 1. Let q = p/(p-1). By homogeneity, we assume that |m| = 1. The proof makes use of various arguments developed in Section 13.2. Recall the series representation of Corollary 5.1. Let (Y_j) be independent random variables distributed as m/|m|, and let (w_j) be independent complex random variables all of them uniformly distributed on the unit circle of IR². Assume the sequences (ε_j), (Γ_j), (w_j), (Y_j) to be independent ((Γ_j) being as in Corollary 5.1). Then X has the same distribution as

    ( (σ⁰_p)^{-1} |m|^{1/p} Σ_{j=1}^∞ ε_j Γ_j^{-1/p} Re( w_j Y_j(t) ) )_{t∈G} ,

where σ⁰_p appears in (13.13). For simplicity in the notation, we denote again by X this representation. Exactly as in the proof of Theorem 13.6, we will show that this series has a version with almost all sample paths bounded and continuous, satisfying moreover the inequality of the theorem.

Since the Y_j's take their values in Γ, the dual group of G, for every ω of the probability space supporting the sequences (Γ_j), (w_j) and (Y_j), we may consider the translation invariant functional

    d_ω(s, t) = || ( Γ_j^{-1/p}(ω) Re( w_j(ω) (Y_j(ω, s) - Y_j(ω, t)) ) )_j ||_{p,∞} ,   s, t ∈ G .

Conditionally on (Γ_j), (w_j) and (Y_j), we then apply Lemma 13.5. Indeed, since || Re( w₁ (Y₁(s) - Y₁(t)) ) ||_p = σ⁰_p d_X(s, t) (recall |m| = 1), we deduce from (5.8) and Corollary 5.9 that

(13.15)    IE d_ω(s, t) ≤ K_p d_X(s, t)

for all s, t (K_p denotes here and below some constant only depending on 1 < p < 2). Further, from exactly the same tools,

    IE sup_{ρ(s,t)<ε ; s,t∈V} d_ω(s, t) ≤ K_p IE sup_{ρ(s,t)<ε ; s,t∈V} |Y₁(s) - Y₁(t)|

for all ε > 0, where ρ is a metric for which V is compact. Once this has been observed, it is easy to verify that d_ω is, for almost all ω, continuous on V × V. By Lemma 13.5, letting D(ω) = 2 ||(Γ_j^{-1/p}(ω))_j||_{p,∞},

(13.16)    IE_ε sup_{t∈V} | Σ_{j=1}^∞ Γ_j^{-1/p}(ω) ε_j Re( w_j(ω) Y_j(ω, t) ) | ≤ K_p ( D(ω) + ∫_0^{D(ω)} ( log N(V, d_ω, ε) )^{1/q} dε )

and, if this entropy integral is finite, Σ_j Γ_j^{-1/p}(ω) ε_j Re(w_j(ω) Y_j(ω, t)), t ∈ V, has a version with almost all (with respect to (ε_j)) sample paths continuous (since d_ω is continuous on V × V). Furthermore, W_n and b_n(ω) being as in the proof of Theorem 13.6, we have, from Lemma 13.7 and exactly in the same way,

    ∫_0^{D(ω)} ( log N(V, d_ω, ε) )^{1/q} dε ≤ Σ_{n=0}^∞ b_n(ω) ( log 2 (|V''|/|V|) N(V, d_X, 2^{-n-1}) )^{1/q} ,

while from (13.15) it follows that IEb_n ≤ K_p 2^{-n}. Integrating with respect to ω, it follows that, when E^(q)(V, d_X) < ∞, for almost all ω the entropy integral on the left is finite, and thus, by the preceding, Σ_j Γ_j^{-1/p}(ω) ε_j Re(w_j(ω) Y_j(ω, t)) has a version with continuous sample paths with respect to (ε_j). Therefore, by Fubini's theorem and the representation, X has a version with continuous paths on V. Furthermore, integrating (13.16) together with the fact that IEb_n ≤ K_p 2^{-n} yields

    IE||X|| ≤ K_p ( 1 + L(V)^{1/q} + E^(q)(V, d_X) ) .

Since IE||X|| is equivalent to ||X||_{p,∞} (Proposition 5.6), the proof of Theorem 13.12 is thus complete (recall we have assumed by homogeneity that |m| = 1).

Motivated by the previous result, we further investigate in the second part of this section random Fourier series, with the objective of enlarging the conclusions of Theorem 13.6 or Corollary 13.9 to the case of a sequence (ξ_γ) there not necessarily in L_2. One typical example is a stable random Fourier series Σ a_γ θ_γ γ where (θ_γ) is a standard p-stable sequence; we have seen that this example can be shown to enter the previous setting. We now present some natural extensions in the context of random Fourier series.

As in Section 13.2, G is a locally compact Abelian group with unit 0 and dual group Γ, V a fixed compact symmetric neighborhood of 0 and A a countable subset of Γ. Let 1 ≤ p < 2. Let (a_γ)_{γ∈A} be a sequence of complex numbers such that Σ |a_γ|^p < ∞ and let (ξ_γ)_{γ∈A} be a sequence of independent and symmetric real random variables.
Furthermore, integrating (13.16) together with the fact that IE b_n ≤ K_p 2^n yields

IE‖X‖ ≤ K_p (1 + L(V)^{1/q} + E^{(q)}(V, d_X)).

Since IE‖X‖ is equivalent to ‖X‖_{p,∞} (Proposition 5.1 and (5.2)), the proof of Theorem 13.12 is thus complete (recall we have assumed by homogeneity that |m| = 1).

Motivated by the previous result, we further investigate in the second part of this section random Fourier series, with the objective of enlarging the conclusions of Theorem 13.6 or Theorem 13.12. In particular, we would like to study the case of a sequence (ξ_γ) not necessarily in L₂. One typical example is a p-stable random Fourier series Σ_γ θ_γ a_γ γ where (θ_γ) is a standard p-stable sequence; we have seen that, by Fubini's theorem and the representation, this example can be shown to enter the previous setting. We now present some natural extensions in the context of random Fourier series.

As in Section 13.2, G is a locally compact Abelian group with unit 0 and dual group Γ, V a compact symmetric neighborhood of 0 and A a countable subset of Γ. Let 1 ≤ p < 2. Let (a_γ)_{γ∈A} be a sequence of complex numbers such that Σ_γ |a_γ|^p < ∞ and let (ξ_γ)_{γ∈A} be a sequence of independent and symmetric real random variables. We are interested in the almost sure uniform convergence of the random Fourier series Y = (Y_t)_{t∈G} where

(13.17)  Y_t = Σ_{γ∈A} ξ_γ a_γ γ(t),  t ∈ G.

The technique of proof of Theorems 13.6 and 13.12 enables us to extend the results of Section 13.2 to random variables which do not have finite second moments, in terms of the translation invariant pseudo-metric

d(s, t) = (Σ_{γ∈A} |a_γ|^p |γ(s) − γ(t)|^p)^{1/p},  s, t ∈ G.
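Since the pseudo-metric d above is given by a concrete formula, it can be evaluated numerically. The sketch below is not from the text: the circle group G = ℝ/2πℤ with characters γ_n(t) = e^{int}, the truncation to 50 characters, and the coefficients a_n = 2^{−n} are all illustrative assumptions. It checks the translation invariance that makes d a natural metric here.

```python
import numpy as np

# Translation-invariant pseudo-metric
#   d(s, t) = ( sum_n |a_n|^p |gamma_n(s) - gamma_n(t)|^p )^(1/p)
# on G = R/2piZ with characters gamma_n(t) = exp(i n t).  The exponent p,
# the coefficients a_n = 2^{-n} and the cut-off at 50 terms are arbitrary
# illustrative choices (any summable (|a_n|^p) would do).
P = 1.5
NS = np.arange(1, 51)          # finitely many characters for the sketch
A = 2.0 ** (-NS)               # |a_n|^p summable

def d(s: float, t: float) -> float:
    diffs = np.abs(np.exp(1j * NS * s) - np.exp(1j * NS * t))
    return float(np.sum(A ** P * diffs ** P) ** (1.0 / P))

# Translation invariance: d(s + u, t + u) = d(s, t) for every u in G.
s, t, u = 0.3, 1.1, 0.7
print(abs(d(s + u, t + u) - d(s, t)))   # ~0 up to rounding
print(d(s, s))                          # a pseudo-metric: d(s, s) = 0
```

The invariance is exactly what allows the covering numbers N(V, d, ε), and hence the entropy functional E^{(q)}(V, d), to be estimated via Haar measure in the proofs above.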

Theorem 13.13. Let 1 ≤ p < 2 and q = p/(p − 1). Assume that sup_γ ‖ξ_γ‖_{p,1} < ∞ and that E^{(q)}(V, d) < ∞ where d is defined above. Then the random Fourier series Y of (13.17) converges uniformly on V with probability one. Moreover, for some constant K_p depending on p only,

‖Y‖_{p,∞} ≤ K_p sup_γ ‖ξ_γ‖_{p,1} (1 + L(V)^{1/q}) ((Σ_{γ∈A} |a_γ|^p)^{1/p} + E^{(q)}(V, d))

(where ‖Y‖_{p,∞} = ‖sup_{t∈V} |Y_t|‖_{p,∞}).

Proof. It is entirely similar (actually somewhat simpler) to the proofs of Theorems 13.6 and 13.12, so that we only mention a few observations. By independence and symmetry, Y can be replaced by Σ_{γ∈A} ξ_γ a_γ ε_γ γ(t) where (ε_γ) is a Rademacher sequence independent of (ξ_γ). Since the ξ_γ are independent, we can integrate with respect to ω, conditionally on (ξ_γ), with respect to the metric

d_ω(s, t) = ‖(|a_γ ξ_γ(ω)(γ(s) − γ(t))|)_{γ∈A}‖_{p,∞};

to integrate d_ω, we then use Lemma 13.8 and the hypothesis sup_γ ‖ξ_γ‖_{p,1} < ∞. The proof is completed similarly. Note that if the sequence (ξ_γ) is only a symmetric sequence, since the ξ_γ need not be independent, we simply use that

d_ω(s, t) ≤ (Σ_γ |a_γ|^p |ξ_γ(ω)|^p |γ(s) − γ(t)|^p)^{1/p};

then the preceding theorem holds similarly but with sup_γ ‖ξ_γ‖_p instead of the L_{p,1} moments. Further, if the tails of the random variables ξ_γ are close to the tail of a standard p-stable variable, then the entropy condition E^{(q)}(V, d) < ∞ is also necessary for the random Fourier series Y of (13.17) to be almost surely bounded. More precisely, assume that for some u₀ > 0 and some δ > 0, IP{|ξ_γ| > u} ≥ δ u^{−p} for all u > u₀ and all γ; that is,

inf_γ IP{|ξ_γ| > u}/IP{|θ| > u} is bounded below for u sufficiently large, θ denoting a standard p-stable variable. Then, if the random Fourier series Y of (13.17) converges uniformly almost surely, the same holds, by comparison, for the corresponding p-stable series Σ_γ θ_γ a_γ γ, and Theorem 13.12 then yields the announced claim. This approach can of course also be used to deduce Theorem 13.13 from Theorem 13.12; but now we deal with a stable process and the complex version of (13.14). Similarly for the equivalences of Theorem 13.12. Therefore, the various comments developed in the Gaussian setting next to Theorem 13.3 also apply in this p-stable case.

13.4. Vector valued random Fourier series

In the last part of this chapter, we present some applications of the previous results to vector valued stationary processes and random Fourier series. The results are still fragmentary and only concern so far Gaussian variables.

Let B be a separable Banach space with dual space B′. Recall that by a process X = (X_t)_{t∈T} with values in B we simply mean a family such that, for each t, X_t is a Borel random variable with values in B. Since B is separable, X is Gaussian if for every t₁, …, t_N in T, (X_{t₁}, …, X_{t_N}) is Gaussian (in B^N); this is equivalent to saying that for every f in B′ the real process (f(X_t))_{t∈T} is Gaussian. As in the preceding sections, let G be a locally compact Abelian group with identity 0 and dual group of characters Γ; fix also a compact metrizable neighborhood V of 0. A process X = (X_t)_{t∈G} indexed by G with values in B is said to be stationary if, for every u in G, (X_{u+t})_{t∈G} has the same distribution (on B^G) as X; equivalently, as in the real case, for every f in B′ the real process (f(X_t))_{t∈G} is stationary.

Almost sure boundedness and continuity of vector valued stationary Gaussian processes may be characterized rather easily through the corresponding properties along linear functionals. This is the content of the following statement. While this result is related to the tensorization of Gaussian measures studied in Section 3.3, it does not seem possible to deduce it from the comparison theorems based on Slepian's lemma; instead we use majorizing measures and the deep results of Chapter 12. Recall that L(V) = log(|V″|/|V|), V″ = V + V + V.

Theorem 13.14. Let X = (X_t)_{t∈G} be a stationary Gaussian process with values in B. Then, for some numerical constant K,

(1/2)(IE‖X₀‖ + sup_{‖f‖≤1} IE sup_{t∈V} |f(X_t)|) ≤ IE sup_{t∈V} ‖X_t‖ ≤ K (1 + L(V)^{1/2})(IE‖X₀‖ + sup_{‖f‖≤1} IE sup_{t∈V} |f(X_t)|).

Further, as in the real case, X has a version with almost all sample paths continuous on V if and only if ‖X_s − X_t‖₂ is continuous on V × V and the quantities sup_{‖f‖≤1} IE sup_{t∈U} |f(X_t)| tend to 0 along a basis (U) of compact neighborhoods of 0; equivalently, in entropic form, X is continuous on V if and only if ‖X_s − X_t‖₂ is continuous on V × V and

lim_{η→0} sup_{‖f‖≤1} ∫₀^η (log N(V, d_{f(X)}, ε))^{1/2} dε = 0

where d_{f(X)}(s, t) = ‖f(X_s) − f(X_t)‖₂, s, t ∈ G, f ∈ B′.

Proof. Let B₁′ be the unit ball of B′. We only show boundedness and the inequalities of the theorem; continuity follows similarly together with the preceding observation. We consider X as a process indexed by B₁′ × G: for f, g in B₁′ and s, t in G set

d((f, s), (g, t)) = ‖f(X_s) − g(X_t)‖₂.

By the triangle inequality and stationarity,

d((f, s), (g, t)) ≤ d_{X₀}(f, g) + d_{f(X)}(s, t)

where d_{X₀}(f, g) = ‖f(X₀) − g(X₀)‖₂. By Theorem 12.9, the real Gaussian process X₀ = (f(X₀))_{f∈B₁′} indexed by B₁′ has a majorizing measure, that is, there exists a probability measure m on (B₁′, d_{X₀}) such that

(13.18)  sup_{f∈B₁′} ∫₀^∞ (log 1/m(B(f, ε)))^{1/2} dε ≤ K IE sup_{f∈B₁′} f(X₀) = K IE‖X₀‖

where, as in Chapter 12, B(f, ε) is the ball of radius ε with respect to the metric of the space which contains its center f. (We use further this convention about balls in metric spaces below.) Recall that V″ = V + V + V and let λ_{V″} be normalized Haar measure restricted to V″ ⊂ G. We intend to bound X, considered as a (real) process on B₁′ × V, by the majorizing measure integral for m ⊗ λ_{V″} (on B₁′ × V″ ⊃ B₁′ × V): by Theorem 11.18 (which applies similarly for the function (log(1/x))^{1/2}),

(13.19)  IE sup_{(f,t)∈B₁′×V} f(X_t) ≤ K sup_{(f,t)∈B₁′×V} ∫₀^∞ (log 1/(m ⊗ λ_{V″})(B((f, t), ε)))^{1/2} dε.

To control the integral on the right of (13.19), note the following. For all (f, t) in B₁′ × V and all ε > 0,

(m ⊗ λ_{V″})(B((f, t), 2ε)) ≥ m(B(f, ε)) λ_{V″}(B_{d_{f(X)}}(t, ε))

where B_{d_{f(X)}}(t, ε) is the ball with respect to the metric d_{f(X)}. It follows that

IE sup_{(f,t)∈B₁′×V} f(X_t) ≤ 2K ( sup_{f∈B₁′} ∫₀^∞ (log 1/m(B(f, ε)))^{1/2} dε + sup_{(f,t)∈B₁′×V} ∫₀^∞ (log 1/λ_{V″}(B_{d_{f(X)}}(t, ε)))^{1/2} dε ).

The first term on the right of this inequality is controlled by (13.18). We use (13.1) (which applies similarly with the function (log(1/x))^{1/2}) to see that the second term is smaller than or equal to

sup_{f∈B₁′} ∫₀^∞ (log (|V″|/|V|) N(V, d_{f(X)}, ε))^{1/2} dε.

Summarizing, we get from Theorem 13.3 that, for some numerical constant K,

IE sup_{t∈V} ‖X_t‖ ≤ K (IE‖X₀‖ + sup_{f∈B₁′} IE sup_{t∈V} |f(X_t)| + L(V)^{1/2} sup_{f∈B₁′} (IE f²(X₀))^{1/2}).

This inequality is stronger than the upper bound of the theorem. The minoration inequality is obvious. The proof is, therefore, complete.

One interesting application of Theorem 13.14 concerns Gaussian random Fourier series with vector valued coefficients.
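The "obvious" minoration in Theorem 13.14 can be traced in one line each; this is a routine verification, not spelled out in the text.

```latex
% For every f with \|f\|\le 1 and every t \in V we have |f(X_t)| \le \|X_t\|, hence
\sup_{\|f\|\le 1}\,\mathbb{E}\sup_{t\in V}|f(X_t)| \;\le\; \mathbb{E}\sup_{t\in V}\|X_t\| ;
% by stationarity, \mathbb{E}\|X_0\| = \mathbb{E}\|X_t\| for each fixed t, so also
\mathbb{E}\|X_0\| \;\le\; \mathbb{E}\sup_{t\in V}\|X_t\| .
% Averaging the two inequalities gives the left-hand side of the theorem:
\tfrac12\Big(\mathbb{E}\|X_0\| + \sup_{\|f\|\le 1}\mathbb{E}\sup_{t\in V}|f(X_t)|\Big)
\;\le\; \mathbb{E}\sup_{t\in V}\|X_t\| .
```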

We assume that B is a complex Banach space and consider a fixed countable subset A of the dual group Γ of characters of G. Let (g_γ)_{γ∈A} be an orthogaussian sequence and (x_γ)_{γ∈A} be a sequence of elements of B. Suppose that the series Σ_γ g_γ x_γ is convergent. Define then (using the contraction principle) the Gaussian random Fourier series X = (X_t)_{t∈G} by

(13.20)  X_t = Σ_{γ∈A} g_γ x_γ γ(t),  t ∈ G.

As in the scalar case, one might wonder about the almost sure uniform convergence of the series (13.20) (in the sup-norm sup_{t∈V} ‖·‖) or, equivalently (by Itô–Nisio's theorem), the almost sure continuity of the process X on V. Theorem 13.14 implies the following.

Corollary 13.15. In the preceding notations, there is a numerical constant K such that

(1/2)(IE‖Σ_{γ∈A} g_γ x_γ‖ + sup_{‖f‖≤1} IE sup_{t∈V} |Σ_{γ∈A} f(x_γ) g_γ γ(t)|)
  ≤ IE sup_{t∈V} ‖Σ_{γ∈A} g_γ x_γ γ(t)‖
  ≤ K (1 + L(V)^{1/2})(IE‖Σ_{γ∈A} g_γ x_γ‖ + sup_{‖f‖≤1} IE sup_{t∈V} |Σ_{γ∈A} f(x_γ) g_γ γ(t)|).

Further, Σ_{γ∈A} g_γ x_γ γ converges uniformly almost surely if and only if

lim sup_{‖f‖≤1} IE sup_{t∈V} |Σ_{γ∉F} f(x_γ) g_γ γ(t)| = 0

where the limit is taken over the finite sets F increasing to A.

The scalar case investigation of Sections 13.2 and 13.3 invites us to consider the same convergence question for vector valued random Fourier series of the type (13.20) when, for example, the Gaussian sequence (g_γ) is replaced by a Rademacher sequence (ε_γ) or a standard p-stable sequence (θ_γ), 1 ≤ p < 2. By the equivalence of scalar Gaussian and Rademacher Fourier series (Proposition 13.8), it is plain from Corollary 13.15 that a Rademacher series Σ ε_γ x_γ γ is characterized as the corresponding Gaussian one provided Σ ε_γ x_γ and Σ g_γ x_γ converge simultaneously. We know that this holds for all sequences (x_γ) in B if and only if B is of finite cotype (Theorem 9.16), but in general Σ ε_γ x_γ is only dominated by Σ g_γ x_γ ((4.8)). We however conjecture that Corollary 13.15 and its inequality also hold when (g_γ) is replaced by (ε_γ) (note of course that the left hand side inequality is then trivial). This conjecture is supported by the fact that it holds for Rademacher processes which are Kr(T)-decomposable in the terminology of Section 12.3; this is checked immediately by reproducing the argument of the proof of Theorem 13.14. The case of a standard p-stable sequence, 1 ≤ p < 2, in Corollary 13.15 is also open. These questions are not yet answered.

To conclude, we analyze, for general random Fourier series with vector valued coefficients, the following dichotomy: stationary real Gaussian (and strongly stationary p-stable) processes are either continuous or unbounded. This follows from the characterizations we described and extends to the classes of random Fourier series studied there.

Let A be a countable subset of Γ, let (x_γ)_{γ∈A} be a sequence in a complex Banach space B and let (ξ_γ)_{γ∈A} be independent symmetric real random variables. Assuming that Σ_γ ξ_γ x_γ is almost surely convergent for one (or, by symmetry, all) ordering of A, consider the random Fourier series X = (X_t)_{t∈G} given by

(13.21)  X_t = Σ_{γ∈A} ξ_γ x_γ γ(t),  t ∈ G.

Let V be as usual a compact neighborhood of 0 in G. The random Fourier series X is said to be almost surely uniformly convergent if for some (or all, by Itô–Nisio's theorem) ordering A = {γ_n, n ≥ 1} the partial sums Σ_{n=1}^N ξ_{γ_n} x_{γ_n} γ_n, N ≥ 1, converge uniformly almost surely; equivalently, X defines an almost surely continuous process on V with values in B. Similarly, X is said to be almost surely uniformly bounded if for some ordering of A = {γ_n, n ≥ 1} the preceding partial sums are almost surely uniformly bounded with respect to the norm sup_{t∈V} ‖·‖. Since (ξ_γ) is a symmetric sequence, by Lévy's inequalities, the preceding is independent of the ordering of A and is equivalent to the boundedness of X as a process on V.
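The Gaussian–Rademacher comparability behind these statements can be probed numerically. The Monte Carlo sketch below is entirely illustrative and not from the text: the coefficients a_n = 1/n, the cosine characters, the grid and the sample sizes are all assumptions; the contraction principle only guarantees that the Rademacher sup-norm expectation is at most √(π/2) times the Gaussian one.

```python
import numpy as np

# Compare E sup_t |sum_n xi_n a_n cos(n t)| for an orthogaussian sequence (g_n)
# versus a Rademacher sequence (eps_n).  All parameters are illustrative.
rng = np.random.default_rng(0)
N, REPS = 64, 2000
a = 1.0 / np.arange(1, N + 1)
ts = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
basis = np.cos(np.outer(np.arange(1, N + 1), ts))     # shape (N, len(ts))

def mean_sup(draws: np.ndarray) -> float:
    # draws: (REPS, N) coefficient samples; the grid maximum approximates sup_t
    return float(np.abs(draws @ (a[:, None] * basis)).max(axis=1).mean())

g_mean = mean_sup(rng.standard_normal((REPS, N)))
r_mean = mean_sup(rng.choice([-1.0, 1.0], size=(REPS, N)))
ratio = r_mean / g_mean
print(round(ratio, 3))  # dimensionless ratio of the two expected sup-norms
```

The two quantities come out comparable in size, as the scalar equivalence (Proposition 13.8) predicts.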

The next theorem describes how these two properties are equivalent for scalar random Fourier series of the type (13.21), and similarly for B-valued coefficients when B does not contain an isomorphic copy of c₀.

Theorem 13.16. For a Banach space B, the following are equivalent:
(i) B does not contain subspaces isomorphic to c₀;
(ii) every almost surely uniformly bounded random Fourier series of the type (13.21) with coefficients in B is almost surely uniformly convergent.

Proof. The proof is based on Theorem 9.29, which is identical for complex Banach spaces. If (ii) does not hold, there exists a series (13.21) such that, for some ordering A = {γ_n, n ≥ 1}, Σ_n ξ_{γ_n} x_{γ_n} γ_n is an almost surely bounded series which does not converge. Let us set for simplicity ξ_n = ξ_{γ_n} and x_n = x_{γ_n} for all n. By Remark 9.31, there exist ω and a sequence (n_k) such that (ξ_{n_k}(ω) x_{n_k} γ_{n_k}) is equivalent in the norm sup_{t∈V} ‖·‖ to the canonical basis of c₀; in particular, for some constant C and every finite sequence (λ_k) of complex numbers with |λ_k| ≤ 1,

‖Σ_k λ_k ξ_{n_k}(ω) x_{n_k} γ_{n_k}‖ ≤ C,  and  inf_k ‖ξ_{n_k}(ω) x_{n_k} γ_{n_k}‖ > 0.

Note that

‖ξ_{n_k}(ω) x_{n_k} γ_{n_k}‖ = sup_{t∈V} ‖ξ_{n_k}(ω) x_{n_k} γ_{n_k}(t)‖

so that, in particular (since γ_{n_k}(0) = 1), inf_k ‖ξ_{n_k}(ω) x_{n_k}‖ > 0. In the same way, we can then apply Lemma 9.30 to extract a further subsequence from (ξ_{n_k}(ω) x_{n_k}) which will be equivalent to the canonical basis of c₀. This shows that (i) ⇒ (ii).

To prove the converse implication, we exhibit a random Fourier series of the type (13.21) (actually Gaussian) with coefficients in c₀ which is bounded but not uniformly convergent. Let G be the compact Cantor group {−1, +1}^IN and set V = G. The characters on G consist of the Rademacher coordinate maps ε_n(t), t ∈ G. On some (different) probability space, let (g_n) be a standard Gaussian sequence. For every n, let further I(n) denote the set of integers {2^n + 1, …, 2^{n+1}}. Define then X = (X_t)_{t∈G} by

X_t = Σ_n 2^{−n} (Σ_{i∈I(n)} g_i ε_i(t)) e_n,  t ∈ G,

where (e_n) is the canonical basis of c₀. X is a (Gaussian) random Fourier series with values in c₀. It is almost surely bounded. To see it, note that

sup_N sup_{t∈G} ‖Σ_{n=1}^N 2^{−n} (Σ_{i∈I(n)} g_i ε_i(t)) e_n‖ = sup_n sup_{t∈G} 2^{−n} |Σ_{i∈I(n)} g_i ε_i(t)| = sup_n 2^{−n} Σ_{i∈I(n)} |g_i|

(where we have used that (ε_n) generates ℓ₁ in L_∞(G)). Now

sup_n 2^{−n} Σ_{i∈I(n)} |g_i| < ∞ almost surely.

Indeed, sup_n 2^{−n} Σ_{i∈I(n)} IE|g_i| < ∞ and, by Chebyshev's inequality,

IP{Σ_{i∈I(n)} (|g_i| − IE|g_i|) > 2^n} ≤ 2^{−n};

hence the claim by the Borel–Cantelli lemma. From exactly the same argument, the terms 2^{−n} Σ_{i∈I(n)} |g_i| do not tend to 0 almost surely, so that X is not almost surely uniformly convergent. The proof of Theorem 13.16 is complete.

Notes and References

The main references for this chapter are the book [M-P1] by M. B. Marcus and G. Pisier and their paper [M-P2] (see also [M-P3]). The complex probabilistic structures are carefully described in [M-P2]. Random Fourier series go back to Paley and Zygmund and to Salem and Zygmund (see [Ka1]). Kahane's ideas [Ka1] significantly contributed to the neat achievements of [M-P1].
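The two quantitative facts behind the c₀ counterexample above — boundedness of sup_n 2^{−n} Σ_{i∈I(n)} |g_i|, and the failure of these terms to tend to 0 — can be watched numerically. A small illustrative simulation, not from the text (the seed and the number of levels are arbitrary choices):

```python
import numpy as np

# With I(n) of cardinality 2^n and (g_i) standard normal, the coordinate
# norms 2^{-n} sum_{i in I(n)} |g_i| are sample means of 2^n half-normal
# variables: they stay bounded (Borel-Cantelli) yet converge to
# E|g| = sqrt(2/pi) > 0, so the series terms do not go to 0 in sup-norm
# and uniform convergence fails.
rng = np.random.default_rng(1)
levels = [np.abs(rng.standard_normal(2 ** n)).mean() for n in range(15)]
print(max(levels))      # bounded over n
print(levels[-1])       # close to sqrt(2/pi) ~ 0.798 by the law of large numbers
```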

Theorem 13.3 is due to X. Fernique [Fer4] (with of course a direct entropic proof); it is the translation invariant version of the results of Section 12.3, extended later in [M-P1]. (See also [J-M3] for an exposition of this result more in the setting of random Fourier series.) The equivalence of boundedness and continuity of stationary Gaussian processes was known previously as Belyaev's dichotomy [Bel] (see also [J-M3]); a similar result for random Fourier series was proved by Billard (see [Ka1]). The basic Theorem 13.6 (in the case G = IR) is due to M. B. Marcus [Ma1]. The proof we present is somewhat different and simpler: it does not use non-decreasing rearrangements as presented in [M-P1] (see also [Fer6]) and has been put forward in [Ta14]. The equivalence between Gaussian and Rademacher random Fourier series was put forward in [Pi6] and [M-P1]; it is the key point in the subsequent investigation of random Fourier series.

The results of Section 13.3 are taken from [M-P2] for the case 1 < p < 2. The picture is completed in [Ta14] with the case p = 1 (and with a proof which inspired the proofs presented here). The remarkable Banach space C_{a.s.}(G) and the associated Banach algebra C_{a.s.}(G) ∩ C(G) have been investigated by G. Pisier [Pi6], [Pi7], who further provided a harmonic analysis description of C_{a.s.}(G) as the predual of a space of Fourier multipliers. A Sidon set generates a subspace isomorphic to ℓ₁ in C(G). As a remarkable result, it was shown conversely by J. Bourgain and V. Milman [B-M] that if a subset of Γ is such that the subspace of C(G) of all functions whose Fourier transform is supported by it is of finite cotype (i.e. does not contain ℓ^n_∞'s uniformly), then it must be a Sidon set. (A prior contribution assuming cotype 2 is due to G. Pisier [Pi5].)

For further conclusions on random Fourier series, in particular in non-Abelian groups, examples and quantitative estimates, we refer to [M-P1], [M-P2]. Various central limit theorems (Corollary 13.10) for random Fourier series can be established [M-P1], with applications of the techniques to the empirical characteristic function [Ma2]; a law of the iterated logarithm for the empirical characteristic function can also be proved [Led1]. Extensions to random Fourier series with infinitely divisible coefficients and to radial processes are studied in [Ma4], [La]. Further extensions to very general random Fourier series and harmonic processes are obtained in [Ta17]. In particular, it is shown there how random Fourier quadratic forms with either Rademacher or standard Gaussian sequences converge simultaneously; Gaussian and Rademacher random Fourier quadratic forms, related to the results of this chapter, are studied and characterized in [L-M] with the results of Sections 13.2 and 13.4.

The study of stationary vector valued Gaussian processes was initiated by X. Fernique, to whom Theorem 13.14 is due [Fer10] (see also [Fer12]); he further extended this result in [Fer13]. Theorem 13.14 was recently used in [I-M-M-T-Z] and [Fer14]. Since the conclusion does not involve majorizing measures, one might wonder for a proof that does not use this tool. Finally, Theorem 13.16 is perhaps new.

Chapter 14. Empirical process methods in Probability in Banach spaces

14.1. The central limit theorem for Lipschitz processes
14.2. Empirical processes and random geometry
14.3. Vapnik-Chervonenkis classes of sets
Notes and references

Chapter 14. Empirical process methods in Probability in Banach spaces

The purpose of this chapter is to present applications of the random process techniques developed so far to infinite dimensional limit theorems, and in particular the central limit theorem (CLT). More precisely, we will be interested for example in the CLT in the space C(T) of continuous functions on a compact metric space T. Since C(T) is not well behaved with respect to the type or cotype 2 properties, we will rather have to seek nice classes of random variables in C(T) for which a central limit property can be established. This point of view leads us to enlarge this framework and to investigate limit theorems for empirical measures or processes. Random geometric descriptions of the CLT may then be produced through this approach, as well as complete descriptions for nice classes of functions (indicator functions of some sets) on which the empirical processes are indexed. While these random geometric descriptions do not solve the central limit problem in infinite dimension (and are probably of little use in applications), they however clearly describe the main difficulties inherent to the problem from the empirical point of view.

We do not try to give here a complete account on empirical processes and their limiting properties but rather concentrate on some useful methods and ideas related to the material already discussed in this book. The examples of techniques we chose to present are borrowed from the work by R. Dudley [Du4], [Du5] and E. Giné and J. Zinn [G-Z2], [G-Z3], and we actually refer the interested reader to these authors for a complete exposition.

The first section of this chapter presents various results on the CLT for subgaussian and Lipschitz processes in C(T) under metric entropy or majorizing measure conditions. In the second section, we introduce the language of empirical processes and discuss the effect of pregaussianness in two cases: the first one concerns uniformly bounded classes while the second provides a random geometric description of Donsker classes, i.e. classes for which the CLT holds. Vapnik-Chervonenkis classes of sets form the matter of Section 14.3 where it is shown how these classes satisfy the classical limit properties uniformly over all probability measures, and are actually characterized in this way.

14.1. The central limit theorem for Lipschitz processes

Let (T, d) be a compact metric space and denote by C(T) the separable Banach space of all continuous functions on T equipped with the sup-norm ‖·‖∞. A Borel random variable X with values in C(T) may be denoted in the processes notation as X = (X_t)_{t∈T} = (X(t))_{t∈T}, and (X(t))_{t∈T} has all its sample paths continuous on (T, d). If X is a random variable, we denote as usual by (X_i) a sequence of independent copies of X and let S_n = X₁ + ⋯ + X_n, n ≥ 1.

A subset K of C(T) is relatively compact if and only if it is bounded and uniformly equicontinuous (Arzelà–Ascoli). Equivalently, this is the case if there exist t₀ in T and a finite number M such that |x(t₀)| ≤ M for all x in K and, for each ε > 0, there exists δ = δ(ε) > 0 such that |x(s) − x(t)| < ε for all x in K and all s, t in T with d(s, t) < δ. Combining with Prokhorov's Theorem 2.1 and the finite dimensional CLT, it follows that a random variable X = (X(t))_{t∈T} satisfies the CLT in C(T) if and only if IEX(t) = 0 and IEX(t)² < ∞ for all t and, for each ε > 0, there is δ = δ(ε) > 0 such that

(14.1)  lim sup_{n→∞} IP{ sup_{d(s,t)<δ} |S_n(s)/√n − S_n(t)/√n| > ε } < ε.

Since the space C(T) has no non-trivial type or cotype and does not satisfy any kind of Rosenthal's inequality (cf. Chapter 10), the results that we can expect on the CLT in C(T) can only concern special classes of random variables. We concentrate on the classes of subgaussian and Lipschitz variables, the first of which naturally extends the class of Gaussian variables (which trivially satisfy the CLT).
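A Rademacher average is the simplest subgaussian object, and for small n its subgaussian tail bound can be verified exactly by enumerating all sign patterns. This is an illustrative check, not from the text; the coefficients below are an arbitrary fixed choice.

```python
import itertools
import math

# Exact verification (by enumerating all 2^n sign patterns) of the
# subgaussian tail bound  P(|sum_i eps_i a_i| > u ||a||_2) <= 2 exp(-u^2/2)
# for a Rademacher sequence (eps_i).  The coefficients a_i are arbitrary.
a = [0.9, 0.4, 1.3, 0.2, 0.7, 1.1, 0.5, 0.8, 0.3, 1.0]
norm = math.sqrt(sum(x * x for x in a))
sums = [abs(sum(e * x for e, x in zip(signs, a)))
        for signs in itertools.product((-1, 1), repeat=len(a))]

for u in (0.5, 1.0, 1.5, 2.0):
    prob = sum(s > u * norm for s in sums) / len(sums)
    assert prob <= 2.0 * math.exp(-u * u / 2.0), (u, prob)
print("subgaussian tail bound holds for all tested u")
```

This conditional subgaussian behavior of sign averages is exactly what drives the treatment of Lipschitz variables later in this section.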

Recall that a centered process X = (X(t))_{t∈T} is said to be subgaussian with respect to a metric d on T if for all real numbers λ and all s, t in T,

IE exp(λ(X(s) − X(t))) ≤ exp(λ² d(s, t)²/2).

Changing if necessary d into a multiple of it, we may require equivalently that ‖X(s) − X(t)‖_{ψ₂} ≤ d(s, t) for all s, t in T (or that IP{|X(s) − X(t)| > u d(s, t)} ≤ C exp(−u²/C) for all u > 0 and some constant C). We have seen in Section 11.3 that if (T, d) satisfies the majorizing measure condition

lim_{η→0} sup_{t∈T} ∫₀^η (log 1/m(B(t, ε)))^{1/2} dε = 0

for some probability measure m on T, then the subgaussian process X has a version with almost all sample paths continuous on (T, d). It therefore defines (actually its version, which we denote in the same way) a Radon random variable in C(T). Note that, by the main result of Chapter 12 and the existence of majorizing measures for bounded and continuous Gaussian processes, the preceding condition is (essentially) equivalent to the existence of a Gaussian random variable G in C(T) such that d(s, t) ≤ ‖G(s) − G(t)‖₂ for all s, t in T.

Now, under one of these (equivalent) assumptions, it is easily seen that the subgaussian process X also satisfies the CLT in C(T). Indeed, by independence and identical distribution of the summands, S_n/√n is seen, for every n, to be also subgaussian with respect to d. Then, from Proposition 11.19, we deduce that for every ε > 0 one can find δ > 0 depending only on ε, T, d such that, uniformly in n,

IE sup_{d(s,t)<δ} |S_n(s)/√n − S_n(t)/√n| < ε.

Hence, X satisfies the CLT by (14.1). We have therefore the following result.

Theorem 14.1. Let X be a Borel random variable in C(T) which is subgaussian with respect to d. Assume there is a probability measure m on (T, d) such that

lim_{η→0} sup_{t∈T} ∫₀^η (log 1/m(B(t, ε)))^{1/2} dε = 0.

Then X satisfies the CLT.

We turn to the second class of random variables in C(T) we will study here, the Lipschitz random variables. They will be shown to be conditionally subgaussian and will therefore satisfy the CLT under conditions similar to the ones used for subgaussian variables.

.

pnSn (t) .

.

< " : IE sup d(s.t)< Hen e. X satis.

We have therefore the following result.1).es the CLT by (14. d) su h that lim sup !0 t2T Z 0 log 1 m(B (t. Let X be a Borel random variable in C (T ) whi h is subgaussian with respe t to d . Theorem 14. Assume there is a probability measure m on (T.1. ")) 1=2 d" = 0 : Then X satis.

We turn to the se ond lass of random variables in C (T ) we will study here whi h are the Lips hitz random variables.es the CLT. One . They will be shown to be onditionally subgaussian and will therefore satisfy the CLT under onditions similar to the ones used for subgaussian variables.

t in T . d) satis. jX (!. Assume there is a positive random variable M in L2 su h that for all ! and all s. Let X be a Borel random variable in C (T ) su h that IEX (t) = 0 and IEX (t)2 < 1 for all t in T . t) : Then.2. s) X (!. t)j M (!)d(s.rst and main result is the following theorem. Theorem 14. if (T.

d) .es the majorizing measure ondition lim sup Z !0 t2T 0 1 log m(B (t. X satis. ") 1=2 d" = 0 for some probability measure m on (T.

and (Xi ) . It should a tually be similar to the proof of Theorem 14.2 we give is slightly more ompli ated than it should be.es the CLT in C (T ) . We hose this exposition in order to in lude in the same pattern Theorem 14. Let X .9) that d is the L2 -pseudo-metri of a Gaussian random variable in C (T ) .5 below. Proof. Re all that we may assume equivalently (Theorem 12.3. We would like to mention that the exposition of the proof of Theorem 14. be de.

Thus. A. or a simple symmetrization argument. (Xi ) has the same distribution .4. By Proposition 10. we may and do assume that X is symmetri ally distributed.ned on ( . IP) .

and all s.434 as ("i Xi ) where ("i ) is a Radema her sequen e onstru ted on some dierent probability spa e. t in T . ( IP" p1n . There is further a sequen e (Mi ) of independent opies of M su h that jXi (!. t) for all i . By the subgaussian inequality (4. for every ! . s) Xi (!. t)j Mi (!)d(s.1). all ! . t in T . every integer n and every u > 0 . and all s.

n .

X .

"i (Xi (!. s) .

.

i=1 .

.

Xi (!. t)).

.

.

) >u 0 B 2 exp B 0 1 n 2 P n i=1 jXi (!. as usual. integration with respe t to ("i ) . t)j2 2 2 n d(s. Let then a > 0 to be spe i. where IP" is. s) Xi (!. t) C C A 1 u2 B 2 exp B u2 n P i=1 Mi (!)2 C C A .

2. t in T . IE sup jY n (s) Y n (t)j < aÆ : (14:2) d(s. for some numeri al onstant K . n 1 X ): Y n (t) = p " X (t)I( P n 2 n i=1 i i Mj a2 n j=1 From the pre eding. we know from Proposition 11. all n and u > 0 .t)< Mj2 > a2 n . it learly follows that for all s. Therefore. t)g 2 exp( u2=2) : That is to say.19 that for all Æ > 0 there exists > 0 depending only on Æ. the pro esses ((Ka) 1 Y n (t))t2T are subgaussian with respe t to d . Hen e IP n P j =1 ( IP sup d(s. under the majorizing measure ondition of the theorem.ed and set for every integer n and every t in T . uniformly in n .t)< It is now easy to on lude Theorem 14. IPfjY n (s) Y n (t)j > a ud(s. m su h that. Fix " > 0 and let a = a(") > 0 be su h that ( the proof of ) a2 2IEM 2 =" . T. d.

.

Sn (s) .

.

"=2 for all n . For all > 0 . we an write .

) .

pnSn (t) .

.

> " 2" + IPf sup jY n (s) Y n (t)j > "g d(s.t)< " 1 + IE sup jY n (s) Y n (t)j : 2 " d(s.t)< .

435 If we then hoose = (") > 0 small enough in order for (14.2) to be satis.

we .ed with Æ = "2 =2a .

nd that X satis.

The proof of Theorem 14.es (14. Re all we denote by Lip(T ) the spa e of Lips hitz fun tions x on T equipped with the norm kxk Lip = D 1 kxk1 + sup jx(sd)(s. then we an only on lude in general to the bounded CLT for the Lips hitz variable X . A tually. it is interesting to mention that Theorem 14.2 on Lips hitz random variables an be related to the general results on the CLT in type 2 spa es of Chapter 10.2 is weakened into the orresponding bounded one. d) for the fun tion (log 1=u)1=2 .17 that if there is a (bounded) majorizing measure on (T. We have seen in Theorem 12. Note that if the ontinuous majorizing measure ondition in Theorem 14. That the ontinuous majorizing measure ondition is ne essary is made lear by the example of the random variable X = ("n =(log(n + 1))1=2 ) on C (IN [ f1g) whi h is Lips hitzian with respe t to the distan e of the bounded. but not ontinuous. tx)(t)j s= 6 t where D = D(T ) is the diameter of (T. Although C (T ) is not of type 2 . it rather on erns operators of type 2 and more pre isely the anoni al inje tion map j : Lip(T ) ! C (T ) investigated in Se tion 12.1) and therefore the CLT. Gaussian sequen e (gn =(log(n + 1))1=2 ) .2 is omplete. then j is an operator of type 2 and that its type 2 onstant T2 (j ) satis.3. d) .

2.11 for operators. one might wish to use the CLT result for operators of type 2 (Corollary 10. For example. from Proposition 9. Then IEkX k2Lip < 1 and sin e j is type 2 . This an be turned around in several ways. Let now X be Lips hitzian with respe t to d as in Theorem 14.6). d) for some numeri al onstant K . There is however a small problem here sin e Lip(T ) need not be separable and X not a Radon random variable in this spa e. X already satis. (14:3) n X p1n IE j (Xi ) i=1 1 2T2(j )(IEkX k2Lip )1=2 : In parti ular.es T2 (j ) K (2)(T. we already have that. for every n.

Now.es the bounded CLT in C (T ) .17 an be modi. it is not diÆ ult to see that the proof of Theorem 12. ")) 1=2 d" = 0 . (14:4) lim sup !0 t2T Z 0 log 1 m(B (t. d) su h that. if there is a probability measure m on (T.

ed to show that for every " > 0 there exists a .

nite dimensional subspa e F of C (T ) su h that if TF is the quotient map C (T ) ! C (T )=F . then .

Sin e the balls for the norm in Lip(T. As an alternate.1 su h that for all ! and all s. d) su h that lim sup !0 t2T Z 0 log 1 m(B (t. t) : Then. t) ! 0 and for whi h still (2) (T. s) X (!. i. d0 ) .1). d0 ) . The proof relies on inequality (6.1 ould possibly be weakened into M in L2. it was onje tured for some time that the hypothesis M in L2 in Theorem 14. X satis. Theorem 14.436 T2 (TF Æ j ) < " . t in T jX (!. sup t2 IPfM > t>0 tg < 1 . Assume there is a positive random variable M in L2. but also somewhat umbersome argument. Sin e a random variable X satisfying the CLT in a Bana h spa e B does not ne essarily verify IEkX k2 < 2 1 but rather tlim !1 t IPfkX k > tg = 0 ( f. d0 ) < 1 .2 takes its values in some separable subspa e of Lip(T. if there is a probability measure m on (T. d) are ompa t in Lip(T. the Lips hitz random variable X of Theorem 14. one an show that under (14.3. Lemma 10.e. t) ! 0 when d(s.3) to TF Æ j then easily yields the CLT. The next result shows how this is indeed the ase. t)=d0 (s. It is assumed expli itely that X is pregaussian sin e this does not follow anymore from the Lips hitz assumption when M is not in L2 .6 an then be applied. In this last step however.1 . t)j M (!)d(s. Applying (14.2.4) there exists a distan e d0 on T su h that d(s.30) and Lemma 5. Let X be a pregaussian random variable in C (T ) . ")) 1=2 d" = 0 .8. Corollary 10. this approa h basi ally amounts to the original proof of Theorem 14.

es the CLT in C (T ) . Proof. We .

t) = kXs Xt k2 . there is a probability measure m0 on (T. by Theorem 12. By the omments next to Theorem 11.9. There exists a Gaussian variable in C (T ) with L2 -metri dX (s. this Gaussian pro ess is also ontinuous with respe t to dX and thus.18.rst need transform the (ne essary) pregaussian property into a majorizing measure ondition. dX ) whi h satis.

t0 )) satis. t). We would like to have this property for the maximum of those two distan es d and dX . Clearly.es the same majorizing measure ondition as m on (T. d) . (s0 . = m m0 on T T equipped with the metri de((s. dX (t. t0 )) = max(d(s. s0 ).

es lim sup !0 T T Z 0 log 1 (B ((s. t). ")) 1=2 d" = 0 : .

t) in T T . one an .437 We now simply proje t on the diagonal. For ea h ouple (s.

t)) 2de((s. ('(s. u) are both < " . t) in T su h that de((s. '(s. Then.nd (by a ompa tness argument) a point '(s. u) and dX (t. t). if d(s. it follows by de. t). u)) for all u in T . t). (u.

m e (B (u. we may and do assume in the following that dX d . to state and prove again the inequality we will need. repla ing d by max(d. t) that d('(s. 3")) where B (u. t). 3") is the ball in T of enter u and radius 3" for the metri max(d. u) are < 3" . Instead of refering to (6. ") ' 1 (B (u.4. We an now turn to the proof of Theorem 14. it is simpler.30). letting m e = '() . Hen e B ((u. ")) 1=2 d" = 0 : It follows from this dis ussion that.nition of '(s. ")) and thus lim sup Z !1 t2T 0 1 log m e (B (t. u). 3")) (B ((u.2. Let (Zi ) be a . u). dX ) . u) and dX ('(s. dX ) . sin e we will only be on erned with real variables. Lemma 14. t). Therefore.

Then, for all u > 0,

IP{ |Σ_i Z_i| > u(4‖(Z_i)‖_{2,1} + σ) } ≤ 4 exp(−u²/6)

where σ = (Σ_i IE Z_i²)^{1/2}.

Proof. By definition of ‖(Z_i)‖_{2,1} = ∫_0^∞ (Σ_i IP{|Z_i| > t})^{1/2} dt, if we let A = ‖(Z_i)‖_{2,1}/u, then

A (Σ_i IP{|Z_i| > A})^{1/2} ≤ ∫_0^A (Σ_i IP{|Z_i| > t})^{1/2} dt ≤ ‖(Z_i)‖_{2,1}

so that Σ_i IP{|Z_i| > A} ≤ u². For this choice of A (or actually any A > 0), we can write by the triangle inequality

|Σ_i Z_i| ≤ |Σ_i Z_i I_{{|Z_i| ≤ A}}| + Σ_i |Z_i| I_{{|Z_i| > A}}.

Let us now observe that

Σ_i IE |Z_i| I_{{|Z_i| > A}} ≤ A Σ_i IP{|Z_i| > A} + ∫_A^∞ Σ_i IP{|Z_i| > t} dt ≤ 2u‖(Z_i)‖_{2,1}

since (Σ_i IP{|Z_i| > t})^{1/2} ≤ u for t ≥ A. Hence, by symmetry of the variables Z_i and the contraction principle in the form of (4.7) (applied conditionally on the Z_i's),

IP{ |Σ_i Z_i| > u(4‖(Z_i)‖_{2,1} + σ) } ≤ 2 IP{ |Σ_i ε_i (|Z_i| ∧ A)| > u(2Au + σ) }

where (ε_i) is an independent Rademacher sequence and where we have used that 2Au² = 2u‖(Z_i)‖_{2,1}. The variables ε_i(|Z_i| ∧ A) are independent and symmetric, bounded by A, and Σ_i IE(|Z_i| ∧ A)² ≤ σ². We need now simply apply Kolmogorov's inequality (Lemma 1.6) to get the result. Lemma 14.4 is proved.

Provided with this lemma, the proof of Theorem 14.3 is very much like the proof of Theorem 14.2, substituting the inequality of Lemma 14.4 for the subgaussian inequality. Since M is in L_{2,1}, by Lemma 5.8, one can find, for each ε > 0, a = a(ε) > 0 such that, for every n,

IP{ ‖(M_j)_{j≤n}‖_{2,1} > a√n } ≤ ε/2.

Let, for all n and t in T,

Y^n(t) = (1/√n) Σ_{i=1}^n ε_i X_i(t) I_{{‖(M_j)_{j≤n}‖_{2,1} ≤ a√n}}.

Lemma 14.4 implies that for all s, t in T and all u > 0,

IP{ |Y^n(s) − Y^n(t)| > (4a + 1) u d(s,t) } ≤ 4 exp(−u²/6)

since |X_i(s) − X_i(t)| ≤ M_i d(s,t) and ‖X_i(s) − X_i(t)‖_2 ≤ d(s,t) for all i and s, t in T. From this result, the proof of Theorem 14.3 is completed exactly as the proof of Theorem 14.2 by the subgaussian results (Proposition 11.19).

We conclude this section with an analogous study of some spectral measures of p-stable random vectors in C(T). We already know the close relationships between the Gaussian CLT and the question of existence of a stable random vector with given spectral measure. The next example is another instance of this observation.
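The functional ‖(Z_i)‖_{2,1} = ∫_0^∞ (Σ_i IP{|Z_i| > t})^{1/2} dt that governs Lemma 14.4 and the truncation above is elementary to evaluate numerically. The following Python sketch is an illustration of ours (not part of the text; the function name and the uniform example are ours): for n i.i.d. variables uniform on [−1,1], the tail is IP{|Z_i| > t} = 1 − t on [0,1], so the norm equals √n ∫_0^1 (1−t)^{1/2} dt = (2/3)√n.

```python
def norm_2_1(tail_probs, grid):
    # ||(Z_i)||_{2,1} = integral_0^inf (sum_i P{|Z_i| > t})^(1/2) dt,
    # approximated here by the trapezoidal rule on `grid`.
    total = 0.0
    for a, b in zip(grid, grid[1:]):
        fa = sum(p(a) for p in tail_probs) ** 0.5
        fb = sum(p(b) for p in tail_probs) ** 0.5
        total += 0.5 * (fa + fb) * (b - a)
    return total

# n = 9 i.i.d. variables uniform on [-1, 1]: closed form (2/3) * sqrt(9) = 2
n = 9
tails = [lambda t: max(0.0, 1.0 - t)] * n
grid = [i / 1000 for i in range(1001)]
print(norm_2_1(tails, grid))  # close to 2.0
```

The quadrature agrees with the closed form to three decimals, which is all the illustration needs.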

Given a positive finite Radon measure μ on C(T), we would like to determine conditions under which μ is the spectral measure of some p-stable random variable in C(T) with 1 ≤ p < 2 (recall the case p < 1 is trivial, cf. Chapter 5). Since this seems a difficult task in general, we consider, as for the CLT, the particular case corresponding to Lipschitz processes. Assume for simplicity (and without any loss of generality) that μ is a probability measure so that it is the distribution of a random variable Y in C(T).

Theorem 14.5. Let 1 ≤ p < 2 and q = p/(p−1). Let Y = (Y(t))_{t∈T} be a random variable in C(T) such that IE|Y(t)|^p < ∞ for all t and such that for all ω and all s, t in T,

|Y(ω,s) − Y(ω,t)| ≤ M(ω) d(s,t)

for some positive random variable M in L_p. Assume there is a probability measure m on (T,d) such that

lim_{η→0} sup_{t∈T} ∫_0^η ( log 1/m(B(t,ε)) )^{1/q} dε = 0

(if q = ∞, use the function log⁺ log instead). Then, the distribution of Y is the spectral measure of a p-stable random variable with values in C(T).

Proof. It is similar to the proof of Theorem 14.2. For notational convenience, we restrict ourselves to the case q < ∞. Recall the series representation of stable random vectors and processes (Corollary 5.3). Let (Y_j) be independent copies of Y, (ε_j) be a Rademacher sequence and assume as usual that (Γ_j), (ε_j), (Y_j) are independent. For each t, since IE|Y(t)|^p < ∞, the series Σ_{j=1}^∞ Γ_j^{−1/p} ε_j Y_j(t) is almost surely convergent (and defines a p-stable real random variable). It will be enough to show that for each ε > 0 one can find δ > 0 such that

(14.5) IE sup_{d(s,t)<δ} | Σ_{j=1}^∞ Γ_j^{−1/p} ε_j (Y_j(s) − Y_j(t)) | < ε.

The series Σ_{j=1}^∞ Γ_j^{−1/p} ε_j Y_j is then seen to be convergent almost surely and in L_1 in C(T) (Itô-Nisio theorem). By Corollary 5.5, Σ_{j=1}^∞ Γ_j^{−1/p} ε_j Y_j therefore defines there a p-stable random variable with spectral measure μ, hence the conclusion.

To establish (14.5), we first note that, by independence, the contraction principle and (5.8),

IE sup_{d(s,t)<δ} | Σ_{j=1}^∞ Γ_j^{−1/p} ε_j (Y_j(s) − Y_j(t)) | ≤ IE sup_{j≥1} ( j^{1/p} Γ_j^{−1/p} ) · IE sup_{d(s,t)<δ} | Σ_{j=1}^∞ j^{−1/p} ε_j (Y_j(s) − Y_j(t)) | ≤ K_p IE sup_{d(s,t)<δ} | Σ_{j=1}^∞ j^{−1/p} ε_j (Y_j(s) − Y_j(t)) |

where K_p only depends on p. Using Lemma 1.7, for every ω on the space supporting (Y_j), and every s, t in T and u > 0,

IP_ε{ | Σ_{j=1}^∞ j^{−1/p} ε_j (Y_j(ω,s) − Y_j(ω,t)) | > u } ≤ 2 exp( − u^q / C_q d(s,t)^q ‖(j^{−1/p} M_j(ω))‖_{p,∞}^q )

where (M_j) is a sequence of independent copies of M and where we have used that |Y_j(ω,s) − Y_j(ω,t)| ≤ M_j(ω) d(s,t). Under the majorizing measure condition of the statement, we deduce from Theorem 11.14 that, for each ε > 0, one can find δ > 0 such that, uniformly in ω,

IE_ε sup_{d(s,t)<δ} | Σ_{j=1}^∞ j^{−1/p} ε_j (Y_j(ω,s) − Y_j(ω,t)) | ≤ ε ‖(j^{−1/p} M_j(ω))‖_{p,∞}.

Integrating with respect to ω using Corollary 5.9 implies (14.5) and, as announced, the conclusion.

14.2. Empirical processes and random geometry

In this section, we examine the CLT through yet another angle, namely by empirical process methods. We actually only present a short overview of these empirical techniques with, in particular, a random geometric characterization of classes for which the central limit property holds. We refer to [Du5] and [G-Z3] for some of the basics of the theory as well as for a more detailed investigation.

We first introduce the empirical process language. Let (S,S) be a measurable space. In this section (and the next one), (X_i) will denote, unless otherwise indicated, a sequence of independent random variables defined on some probability space (Ω,A,IP) with values in S and with common law P. We will also use randomizing sequences like Rademacher or standard Gaussian sequences (ε_i) or (g_i) and denote accordingly by IP_ε, IE_ε, IP_g, IE_g partial integration with respect to (ε_i) or (g_i). If P is a probability on (S,S), the empirical measures P_n associated to P are defined as the random measures on S given by

P_n(ω) = (1/n) Σ_{i=1}^n δ_{X_i(ω)}, ω ∈ Ω, n ∈ IN

(recall the X_i's have common law P).

For 0 ≤ p ≤ ∞, L_p = L_p(P) is understood to be L_p(S,S,P;IR) (we write L_p or L_p(P) depending on the context and the necessity of specifying the underlying probability P). ‖f‖_p denotes the L_p-norm (1 ≤ p ≤ ∞) of the measurable function f on S and d_p(f,g) = ‖f − g‖_p its associated metric. We need also consider the random spaces L_p(P_n), 1 ≤ p < ∞, with their norms

‖f‖_{n,p} = ( (1/n) Σ_{i=1}^n |f(X_i)|^p )^{1/p}

where f is a function on S, and denote by d_{n,p} the associated random distances. If f is in L_1 = L_1(P), we denote further P(f) = E(f) = ∫ f dP.

By class of functions on S, we will always mean here a family F of (real) measurable functions f on (S,S) such that ‖f(x)‖_F = sup_{f∈F} |f(x)| < ∞ for all x in S. (For any family (a(f))_{f∈F} of numbers indexed by F, we set, with some abuse, ‖a(f)‖_F = sup_{f∈F} |a(f)|.) Given P on (S,S), the (centered) empirical processes based on P and indexed by a class F ⊂ L_1(P) are defined as

(P_n − P)(f) = (1/n) Σ_{i=1}^n ( f(X_i) − P(f) ), f ∈ F, n ∈ IN.

As always in this book, in order not to hide the main ideas we intend to emphasize here, we do not enter the various and possibly intricate measurability questions that the study of empirical processes raises; we shall assume all classes F to be countable. We could instead require a separability assumption on the processes ((P_n − P)(f))_{f∈F}. Since we are assuming that ‖f(x)‖_F < ∞ for every x in S, the maps f → f(X_i), i ∈ IN, define random elements in the space ℓ_∞(F) of bounded functions F → IR equipped with the sup-norm ‖·‖_F. In this study of empirical processes, we are therefore dealing with random variables taking their values in the non-separable (unless F is finite) Banach space ℓ_∞(F), entering, since we are assuming F countable, our general setting of infinite dimensional random variables (cf. Section 2.3). Limit properties are of course the main topic in the study of empirical processes as a way to approximate a given law P by empirical data P_n. Many results presented throughout this book therefore apply in this empirical setting. We have the following definitions.
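Before the definitions, here is a small computational illustration of the objects just introduced (the code and names are ours, not from the text): it evaluates P_n(f), the random norm ‖f‖_{n,2}, and, anticipating the Glivenko-Cantelli discussion below, the uniform deviation sup_{0≤t≤1} |P_n([0,t]) − t| for P uniform on [0,1].

```python
import random

def empirical_mean(f, sample):
    # P_n(f) = (1/n) sum_i f(X_i)
    return sum(f(x) for x in sample) / len(sample)

def empirical_norm(f, sample, p=2):
    # ||f||_{n,p} = ((1/n) sum_i |f(X_i)|^p)^(1/p)
    n = len(sample)
    return (sum(abs(f(x)) ** p for x in sample) / n) ** (1.0 / p)

def uniform_deviation(sample):
    # sup_t |P_n([0,t]) - t| for P uniform on [0,1]; the sup is
    # attained at the order statistics of the sample.
    xs = sorted(sample)
    n = len(xs)
    return max(max(abs(i / n - x), abs((i + 1) / n - x))
               for i, x in enumerate(xs))

random.seed(0)
sample = [random.random() for _ in range(10000)]  # X_i i.i.d. uniform on [0,1]
print(empirical_mean(lambda x: x, sample))   # close to P(f) = 1/2
print(empirical_norm(lambda x: x, sample))   # close to ||f||_2 = 3 ** -0.5
print(uniform_deviation(sample))             # small, of order n ** -0.5
```

With 10000 points the uniform deviation is already of order 10⁻², a numerical face of the Glivenko-Cantelli theorem recalled next.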

nitions. or P satis. A lass F as before is said to be a Glivenko-Cantelli lass for P .

with probability one.es the strong law of large numbers uniformly on F . lim kP (f ) P (f )kF = 0 : n!1 n This de. if.

nition extends the lassi al result due to Glivenko and Cantelli a ording to whi h the lass F of the indi ator fun tions of the intervals [0. is a Glivenko-Cantelli lass for every probability P on . 0 t 1 . t℄ .

the de.442 [0. 1℄ . Sin e weak onvergen e is involved.

Write for onvenien e n = n(Pn P ) . n 2 IN . Then a lass F of fun tions on S is said to be a Donsker lass for P .nition of the entral limit property in this non-separable p framework requires some more are. or P satis.

lim n!1 Z '(n )dIP = Z 'd P : The use of the upper integral takes into a ount the measurability questions.es the entral limit theorem uniformly on F . if there is a Gaussian Radon probability measure P on `1 (F ) su h that for every real bounded ontinuous fun tion ' on `1 (F ) . By the .

If this property is realized the lass F is said to be P -pregaussian so that a P -Donsker lass is of ourse P -pregaussian. g 2 F ( f. f. P being Radon on `1 (F ) is equivalent to say that GP admits a version with almost all sample paths bounded and ontinuous on F with respe t to the metri k(f P (f )) (g P (g))k2 .nite dimensional CLT. As before. [G-Z3℄). g 2 F : Further. these de. f. the probability measure P is the law of a Gaussian pro ess GP indexed by F with ovarian e given by IEGP (f )GP (g) = P (fg) P (f )P (g) .

We note for further purposes that if GP is ontinuous in the previous sense and kP (f )kF < 1 there exists a Gaussian pro ess WP with L2 -metri given by IEjWP (f ) WP (g)j2 = kf gk22 = d2 (f. 0 t 1 . We may simply take for example WP (f ) = GP (f ) + P (f ) where is a standard normal variable independent of GP . 1℄ ). the Gaussian pro ess GP appears as a generalization of the Brownian bridge (with P Lebesgue measure on [0. d2 ) .nitions extends the lassi al Kolmogorov-Smirnov-Donsker theorem for the lass F of the indi ator fun tions of the intervals [0. f. whi h is almost surely ontinuous on (F . t℄ . g in F (the analog of the Brownian motion). To on lude . g)2 .

nally this set of de.

we let F = ff g . g 2 F . the pre eding se tion). Sin e we will basi ally only be on erned with the CLT here. [G-Z2℄. For every > 0 . we leave this to the interested reader ( f.g. Proposition 10. We refer to [Du4℄.nitions. [D-P℄). [G-Z3℄ for omplete des ription and proof of the following statement whi h extends (14. e. d2 (f. g) < g . As for the CLT in the spa e of ontinuous fun tions ( f. we should introdu e Strassen lasses satisfying the law of the iterated logarithm. . [Du5℄. [K-D℄. a lass F is a P -Donsker lass if and only if the pro esses n satisfy a Prokhorov type asymptoti equi ontinuity ondition. It is already expressed in its randomized version ( f. [Du5℄.1) to this empiri al framework.4) whi h will be useful in the sequel. f.
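The covariance structure IE G_P(f)G_P(g) = P(fg) − P(f)P(g) is easy to check by simulation. In the sketch below (an illustration of ours, with hypothetical names), F consists of the indicators of two intervals [0,s] and [0,t] under the uniform distribution on [0,1], for which G_P is the Brownian bridge and the covariance is min(s,t) − st; note that IE ν_n(f) ν_n(g) equals this covariance exactly for every n, so a Monte Carlo average over independent samples should reproduce it.

```python
import random

def nu_n(f, sample, mean):
    # nu_n(f) = sqrt(n) * (P_n - P)(f), with P(f) = mean given in closed form
    n = len(sample)
    return sum(f(x) - mean for x in sample) / n ** 0.5

random.seed(1)
s, t, n, reps = 0.3, 0.6, 50, 5000
f = lambda x: 1.0 if x <= s else 0.0   # indicator of [0, s]
g = lambda x: 1.0 if x <= t else 0.0   # indicator of [0, t]
acc = 0.0
for _ in range(reps):
    sample = [random.random() for _ in range(n)]
    acc += nu_n(f, sample, s) * nu_n(g, sample, t)
cov = acc / reps
print(cov)  # close to min(s, t) - s * t = 0.12
```

The estimate matches the Brownian-bridge covariance 0.12 up to Monte Carlo error.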

Theorem 14.6. Let F be a class of functions on (S,S,P) such that ‖P(f)‖_F < ∞. Then F is a Donsker class for P if and only if (F, d_2) is totally bounded and for every ε > 0 there exists δ > 0 such that

lim sup_{n→∞} IP{ ‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_{F_δ} > ε } < ε.

The equivalence holds similarly if the Rademacher sequence (ε_i) is replaced by an orthogaussian sequence (g_i).

From the integrability properties in the CLT, note that if F is a P-Donsker class we also have that

(14.6) lim_{δ→0} lim sup_{n→∞} IE‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_{F_δ} = 0

and similarly with (g_i) in place of (ε_i).

Provided with these definitions and observations, we now turn to the two results on Donsker classes we intend to present. The first one describes the effect of pregaussianness on the equicontinuity condition of Theorem 14.6 for uniformly bounded classes of functions. For every ε > 0 and integer n, F_{ε,n} denotes F_δ for δ = (ε/√n)^{1/2}.

Theorem 14.7. Let F be a uniformly bounded class of functions on (S,S,P). Then F is a P-Donsker class if and only if it is P-pregaussian and, for some (or all) ε > 0,

‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_{F_{ε,n}} → 0 in probability.

Proof. Only sufficiency requires a proof. It combines Sudakov's minoration with real exponential bounds. Assume without loss of generality that ‖f‖_∞ ≤ 1 for all f in F. Let ε > 0 be fixed. Since F is P-pregaussian, we know that W_P is a Gaussian process which has a continuous version on (F, d_2). Therefore, by Sudakov's minoration (Corollary 3.19),

lim_{n→∞} α_n(ε) = 0 where α_n(ε) = (ε/√n)^{1/2} ( log N(F, d_2 ; (1/2)(ε/√n)^{1/2}) )^{1/2}.

By definition of the entropy numbers, there exists G = G(ε,n) maximal in F with respect to the relations d_2(f,g) ≥ (ε/√n)^{1/2}, f ≠ g in G, such that

(14.7) Card G ≤ exp( α_n(ε)² √n / ε ).

By maximality, for every f in F there exists g in G satisfying d_2(f,g) < (ε/√n)^{1/2}. Therefore, we can write for all δ > 0,

IP{ ‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_F > 3δ } ≤ 2 IP{ ‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_G > δ } + IP{ ‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_{F_{ε,n}} > δ }.

Since, by hypothesis, the last probability tends to 0, it is enough to show that for all δ > 0,

(14.8) lim_{ε→0} lim sup_{n→∞} IP{ ‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_G > δ } = 0.

Set now, for every n,

A(ε,n) = { ∀ f ≠ g in G = G(ε,n), d_{n,2}(f,g) ≤ 2 d_2(f,g) }

where we recall that the d_{n,2}(f,g) are the random distances (Σ_{i=1}^n (f−g)²(X_i)/n)^{1/2}. Let h = f − g, f ≠ g in G; then ‖h‖_∞ ≤ 2 since F is uniformly bounded by 1, and ‖h‖_2 ≥ (ε/√n)^{1/2} by definition of G. By Lemma 1.6, for every n,

IP{ ‖h‖_{n,2} > 2‖h‖_2 } ≤ IP{ Σ_{i=1}^n (h²(X_i) − IEh²(X_i)) > 3n‖h‖_2² } ≤ exp( −n‖h‖_2²/50 ) ≤ exp( −ε√n/50 ).

Hence, by (14.7),

(14.9) lim sup_{n→∞} IP(A(ε,n)ᶜ) ≤ lim sup_{n→∞} (Card G(ε,n))² exp( −ε√n/50 ) = 0.

For each n and ω in A(ε,n), consider the Gaussian process

Z_{ω,n}(f) = (1/√n) Σ_{i=1}^n g_i f(X_i(ω)), f ∈ G.

Since ω ∈ A(ε,n), clearly IE_g |Z_{ω,n}(f) − Z_{ω,n}(f')|² ≤ 4 d_2(f,f')². Now W_P has d_2 as associated L_2-metric and possesses a continuous version on (F, d_2). It then clearly follows, from Lemma 11.16 for example, that

lim_{ε→0} IE_g ‖Z_{ω,n}(f)‖_G = 0

uniformly in n and ω in A(ε,n). Standard comparison of Rademacher averages to Gaussian averages combined with (14.9) then implies (14.8) and thus the conclusion. Theorem 14.7 is established.

The second result of this section investigates further the influence of pregaussianness in the study of Donsker classes F (no longer necessarily uniformly bounded). While we only used Sudakov's minoration before, we now take advantage of the existence of majorizing measures (Chapter 12). The result we present indicates rather precisely how the pregaussian property actually controls a whole "portion" of F; in the remaining part, no cancellation (one of the main features of the study of sums of independent random variables) occurs. We first give a quantitative rather than a qualitative statement.

For clarity, we write F ⊂ F_1 + F_2 to signify that each f in F can be written as f_1 + f_2 where f_1 ∈ F_1, f_2 ∈ F_2. For classes of functions F, recall the Gaussian process W_P = (W_P(f))_{f∈F}.

Theorem 14.8. Let P be a probability on (S,S) and F a class of functions in L_2 = L_2(P). There is a numerical constant K with the following property: for every P-pregaussian class F such that ‖f(·)‖_F ∈ L_1 = L_1(P) and for every n, there exist classes F_1^n, F_2^n in L_2 = L_2(P) such that F ⊂ F_1^n + F_2^n and

(i) IE‖ Σ_{i=1}^n |f(X_i)|/√n ‖_{F_1^n} ≤ K ( IE‖W_P(f)‖_F + IE‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_F ) ;

(ii) IE‖ Σ_{i=1}^n g_i f(X_i)/√n ‖_{F_2^n} ≤ K IE‖W_P(f)‖_F.

Proof. We may and do assume that F is a finite class. By Theorem 12.6, there exists an ultrametric distance δ ≥ d_2 on F and a probability measure m on (F, δ) such that

(14.10) sup_{f∈F} ∫_0^∞ ( log 1/m(B(f,ε)) )^{1/2} dε ≤ K IE‖W_P(f)‖_F

where the B(f,ε) are the balls for the metric δ. K is further some numerical constant, possibly changing from line to line below, and eventually yielding the constant of the statement. We use (14.10) as in Proposition 11.10. Denote by ℓ_0 the largest ℓ for which 2^{−ℓ} ≥ D where D is the d_2-diameter of F. For every ℓ ≥ ℓ_0, let B_ℓ be the family of δ-balls of radius 2^{−ℓ}. For every f in F, there is a unique element B of B_ℓ with f ∈ B. Let then π_ℓ(f) be one fixed point of B and let μ_ℓ({π_ℓ(f)}) = m(B). Let further μ = Σ_{ℓ≥ℓ_0} 2^{−ℓ+ℓ_0−1} μ_ℓ which defines a probability measure. We note that d_2(f, π_ℓ(f)) ≤ 2^{−ℓ} for all f and ℓ and that π_{ℓ−1} ∘ π_ℓ = π_{ℓ−1}. From (14.10),

(14.11) sup_{f∈F} Σ_{ℓ≥ℓ_0} 2^{−ℓ} ( log 1/μ({π_ℓ(f)}) )^{1/2} ≤ K IE‖W_P(f)‖_F

(where we have used the definition of ℓ_0). Let now n be fixed (so that we do not specify it every time we should). For every f in F and ℓ > ℓ_0, set

a(f,ℓ) = √n 2^{−ℓ} ( 2^{ℓ−ℓ_0} + log 1/μ({π_ℓ(f)}) )^{−1/2}.

Given f in F and x ∈ S, let

ℓ(x,f) = sup{ ℓ ; ∀ j, ℓ_0 < j ≤ ℓ, |π_j(f)(x) − π_{j−1}(f)(x)| ≤ a(f,j) }.

Define then f_2 by f_2(x) = π_{ℓ(x,f)}(f)(x) and f_1 = f − f_2, and let, with the obvious abuse in notation, F_1 = F_1^n = {f_1 ; f ∈ F} and F_2 = F_2^n = {f_2 ; f ∈ F}. The classes F_1^n and F_2^n are the classes of the expected decomposition and we thus would like to show that they satisfy (i) and (ii) respectively.

We start with (ii). Set F_2 − F_2 = {f_2 − f_2' ; f_2, f_2' ∈ F_2} and u = IE‖W_P(f)‖_F. We work with F_2 − F_2 rather than F_2 since the process bounds of Chapter 11 are usually stated in this way. We evaluate, for every t > 0 (or only t ≥ t_0 large enough), the probability

IP{ IE_g ‖ Σ_{i=1}^n g_i f(X_i)/√n ‖_{F_2−F_2} > tu }.

In a first step, let us show that this probability is less than IP(A(t)ᶜ) where, for K_2 to be specified later,

A(t) = { ∀ ℓ > ℓ_0, ∀ f ∈ F, ‖ (π_ℓ(f) − π_{ℓ−1}(f)) I_{{|π_ℓ(f)−π_{ℓ−1}(f)| ≤ a(f,ℓ)}} ‖_{n,2} ≤ K_2^{−1} t 2^{−ℓ} }

(recall the random norms and distances ‖·‖_{n,2}, d_{n,2}). Let f, f' in F and denote by j the largest ℓ with π_ℓ(f) = π_ℓ(f'). Then ℓ(x,f) ≥ j if and only if ℓ(x,f') ≥ j, in which case we can write for every x in S that

f_2(x) − f_2'(x) = ( π_{ℓ(x,f)}(f)(x) − π_j(f)(x) ) − ( π_{ℓ(x,f')}(f')(x) − π_j(f')(x) ),

while f_2(x) = f_2'(x) otherwise. It follows that, on the set A(t),

d_{n,2}(f_2, f_2') ≤ Σ_{ℓ>j} ‖(π_ℓ(f) − π_{ℓ−1}(f)) I_{{|π_ℓ(f)−π_{ℓ−1}(f)| ≤ a(f,ℓ)}}‖_{n,2} + Σ_{ℓ>j} ‖(π_ℓ(f') − π_{ℓ−1}(f')) I_{{|π_ℓ(f')−π_{ℓ−1}(f')| ≤ a(f',ℓ)}}‖_{n,2} ≤ K_2^{−1} t 2^{−j+2} ≤ 8 K_2^{−1} t δ(f,f')

(by definition of δ and j). Therefore, on the set A(t), it follows from the majorizing measure bound of Theorem 11.18 and (14.10) that, for K_2 well chosen,

(14.12) IP{ IE_g ‖ Σ_{i=1}^n g_i f(X_i)/√n ‖_{F_2−F_2} > tu } ≤ IP(A(t)ᶜ).

We need therefore evaluate IP(A(t)ᶜ). To this aim, we use exponential inequalities in the form, for example, of Lemma 1.6. Note that ‖π_ℓ(f) − π_{ℓ−1}(f)‖_2 ≤ 3·2^{−ℓ}. Recentering, we deduce from Lemma 1.6 that, for all f in F, all ℓ > ℓ_0 and all t ≥ t_1 large enough (independent of n, f and ℓ),

IP{ ‖(π_ℓ(f) − π_{ℓ−1}(f)) I_{{|π_ℓ(f)−π_{ℓ−1}(f)| ≤ a(f,ℓ)}}‖_{n,2} > K_2^{−1} t 2^{−ℓ} } ≤ exp( −t n 2^{−2ℓ} a(f,ℓ)^{−2} ).

By definition of a(f,ℓ), this probability is estimated by exp(−t 2^{ℓ−ℓ_0}) μ({π_ℓ(f)})^t. Summing over the atoms of the measures μ_ℓ and over ℓ > ℓ_0, and using that π_{ℓ−1} ∘ π_ℓ = π_{ℓ−1} and that μ is a probability, we have obtained that

IP(A(t)ᶜ) ≤ Σ_{ℓ>ℓ_0} exp( −t 2^{ℓ−ℓ_0} ) ≤ exp(−t)

as soon as t ≥ max(t_1, t_2) where t_2 is numerical. Therefore, if we let t_0 = t_1 + t_2, by (14.12) and integration by parts,

IE‖ Σ_{i=1}^n g_i f(X_i)/√n ‖_{F_2−F_2} ≤ K t_0 u.

It is then easy to conclude: for every f in F, ‖f_2‖_2 ≤ ‖f‖_2 + 2^{−ℓ_0} ≤ 5u, from which (ii) immediately follows.

The main observation to establish (i) is the following: since, by definition of ℓ(x,f), |π_{ℓ(x,f)+1}(f)(x) − π_{ℓ(x,f)}(f)(x)| > a(f, ℓ(x,f)+1), we have, for every f_1 in F_1,

‖f_1‖_1 ≤ Σ_{ℓ≥ℓ_0} IE( |f − π_ℓ(f)| I_{{|π_{ℓ+1}(f)−π_ℓ(f)| > a(f,ℓ+1)}} ).

By Cauchy-Schwarz and Chebyshev's inequalities,

‖f_1‖_1 ≤ Σ_{ℓ≥ℓ_0} a(f,ℓ+1)^{−1} ‖f − π_ℓ(f)‖_2 ‖π_{ℓ+1}(f) − π_ℓ(f)‖_2 ≤ Σ_{ℓ≥ℓ_0} a(f,ℓ+1)^{−1} 3·2^{−2ℓ−1} ≤ K_1 u n^{−1/2}

for some numerical constant K_1, where the last inequality is (14.11). It is then easy to conclude. Set

v = IE‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_F.

Since F_1 ⊂ F − F_2, we already know from (ii) that

IE‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_{F_1} ≤ v + Ku.

From the comparison properties for Rademacher averages (Theorem 4.12),

IE‖ Σ_{i=1}^n ε_i |f(X_i)|/√n ‖_{F_1} ≤ 2(v + Ku)

and further, by Lemma 6.3,

IE‖ Σ_{i=1}^n ( |f(X_i)| − IE|f(X_i)| )/√n ‖_{F_1} ≤ 4(v + Ku).

Since ‖f_1‖_1 ≤ K_1 u n^{−1/2} for all f_1 in F_1, (i) immediately follows. The proof of Theorem 14.8 is therefore complete.

It is worthwhile mentioning that the proof of Theorem 14.8 actually yields more than its statement. We have shown indeed that, with high probability, the class F_2^n equipped with the random distances d_{n,2} is a Lipschitz image of (F, δ), which is controlled by the pregaussian hypothesis. In this sense, Theorem 14.8 may appear as a random geometric description of the central limit property. If F is P-Donsker, then F is decomposed in two classes, the first for which the random distances d_{n,2} are controlled by the (necessary) P-pregaussian property, the second being controlled in the ‖·‖_{n,1} random norms for which no cancellation occurs. Note that the levels of truncation chosen in the proof of Theorem 14.8 for this decomposition correspond, when F is reduced to one point, to the classical level √n. Conversely, such a decomposition clearly contains the Donsker property. To draw a possible qualitative version of Theorem 14.8, let us state the following (see also [Ta4], [G-Z3]).

Corollary 14.9. Let F be P-pregaussian. Recall F_δ = {f − g ; f, g ∈ F, d_2(f,g) < δ}. Then, for all δ > 0 and every integer n, one can find classes F_1^n(δ), F_2^n(δ) in L_2(P) such that F_δ ⊂ F_1^n(δ) + F_2^n(δ) and

lim_{δ→0} lim sup_{n→∞} IE‖ Σ_{i=1}^n g_i f(X_i)/√n ‖_{F_2^n(δ)} = 0

and

lim sup_{δ→0} lim sup_{n→∞} IE‖ Σ_{i=1}^n |f(X_i)|/√n ‖_{F_1^n(δ)} ≤ K lim sup_{δ→0} lim sup_{n→∞} IE‖ Σ_{i=1}^n ε_i f(X_i)/√n ‖_{F_δ}

(where K is numerical). In particular, F is P-Donsker if and only if

lim_{δ→0} lim sup_{n→∞} IE‖ Σ_{i=1}^n |f(X_i)|/√n ‖_{F_1^n(δ)} = 0.

14.3. Vapnik-Chervonenkis classes of sets

While the previous section dealt with random characterizations of Donsker classes, this paragraph is devoted to the study of nice classes of indicator functions for which the classical limit theorems can be established. These classes of sets are the so-called Vapnik-Chervonenkis classes which naturally extend the case of the intervals [0,t], 0 ≤ t ≤ 1, on [0,1]. As we will see moreover, the limit properties of empirical processes indexed by Vapnik-Chervonenkis classes actually hold uniformly over all probability distributions.

Let S be a set and C be a class of subsets of S. Let A be a subset of S of cardinality k. Say that C shatters A if each subset of A is the trace of an element of C, i.e.

Card(C ∩ A) = 2^k where C ∩ A = {C ∩ A ; C ∈ C}.

Say that C is a Vapnik-Chervonenkis class (VC class in short) if there exists an integer k ≥ 1 such that no subset A of S of cardinality k is shattered by C, i.e., for all A ⊂ S with Card A = k, we have Card(C ∩ A) < 2^k. Denote by v(C) the smallest k with this property. The class C = {[0,t] ; 0 ≤ t ≤ 1} in [0,1] is a VC class with v(C) = 2.

The following result is the most striking fact about VC classes. It indicates in particular that we pass from the a priori information that Card(C ∩ A) < 2^n to a polynomial growth of this cardinality.

Proposition 14.10. Let C be a VC class in S and let v = v(C). Then, for any finite subset A of S,

Card(C ∩ A) ≤ Card{ B ⊂ A ; Card B < v }.

In particular, if Card A = n and n ≥ v,

Card(C ∩ A) ≤ (en/v)^v.

We may note that if B ⊂ A is such that Card(C ∩ B) = 2^{Card B}, then Card B < v. The second part of the proposition follows from the fact that Card{B ⊂ A ; Card B < v} = Σ_{j<v} (n choose j) and an easy estimate of the latter. Proposition 14.10 therefore follows from the more general following result (by letting U = C ∩ A) whose proof uses rearrangement techniques.

Proposition 14.11. Let A be a finite set and U a class of subsets of A. Then

Card U ≤ Card{ B ⊂ A ; B is shattered by U }.

Proof. The idea is to find a simple operation (symmetrization) that will make U more regular while at the same time not decreasing the number of sets shattered by U. One then applies this operation until the set U is so regular that the result is obvious. Given x in A, we define T_x(U) = {T_x(U) ; U ∈ U} where, for U in U, T_x(U) = U\{x} if x ∈ U and U\{x} ∉ U, and T_x(U) = U otherwise. The first observation is that

(14.13) Card T_x(U) = Card U.

To show this, it suffices to establish that T_x is one-to-one on U. Suppose that T_x(U_1) = T_x(U_2) for U_1, U_2 in U. Since U_1 = U_2 when x ∈ T_x(U_1), let us assume that x ∉ T_x(U_1) and U_1 ≠ U_2 and proceed to a contradiction. Suppose for example that x ∈ U_1, x ∉ U_2. Then, by definition of the operation T_x, U_1\{x} = T_x(U_1) = T_x(U_2) = U_2, so U_1\{x} = U_2 ∈ U; but then T_x(U_1) = U_1, so x ∈ T_x(U_1), which is a contradiction. If x belongs to both U_1 and U_2, then U_1\{x} = U_2\{x} and thus U_1 = U_2, a contradiction again. This shows (14.13).

Let us now establish that if T_x(U) shatters B, then U shatters B. If x ∉ B, T_x(U) and U have the same trace on B, so U shatters B. If x ∈ B, for B' ⊂ B\{x} there is T in T_x(U) such that T ∩ B = B' ∪ {x}. Since x ∈ T, T is of the form T_x(U) for some U in U for which both U and U\{x} belong to U; then U ∩ B = B' ∪ {x} and (U\{x}) ∩ B = B', so U shatters B.

We can now conclude the proof of the proposition. Let w(U) = Σ_{U∈U} Card U. Let U' be obtained from U by applications of some transformations T_x and such that w(U') is minimal. Then, for each U in U' and x in U, we must have U\{x} ∈ U', for otherwise w(T_x(U')) < w(U'). This means that U' is hereditary in the sense that if B' ⊂ B ∈ U', then B' ∈ U'. In particular, U' shatters each set it contains so that the result of the proposition is obvious for U'. Since, by (14.13), Card U' = Card U and U shatters more sets than U', the proof is complete.

Let now (S,S) be a measurable space and consider a class C ⊂ S. If Q is a probability measure on (S,S), we let, for A, B in S,

d_Q(A,B) = (Q(A Δ B))^{1/2} = ‖I_A − I_B‖_2

(where A Δ B = (A ∩ Bᶜ) ∪ (Aᶜ ∩ B) and where the norm ‖·‖_2 is understood with respect to Q). Recall the entropy numbers N(C, d_Q ; ε). The next theorem is a consequence of Proposition 14.10 and appears as the fundamental property in the study of limit theorems for empirical processes indexed by VC classes.

Theorem 14.12. Let C ⊂ S be a VC class. There is a numerical constant K such that for all probability measures Q on (S,S) and all 0 < ε ≤ 1,

log N(C, d_Q ; ε) ≤ K v(C) (1 + log(1/ε)).

Proof. Let Q and ε be fixed. Let N be such that there exist A_1, …, A_N in C with d_Q(A_k, A_ℓ) ≥ ε for all k ≠ ℓ. If (X_i) are independent random variables distributed according to Q, then, for k ≠ ℓ and every integer M,

IP{ (A_k Δ A_ℓ) ∩ {X_1, …, X_M} = ∅ } ≤ (1 − ε²)^M

since IP{X_i ∉ A_k Δ A_ℓ} ≤ 1 − ε². Therefore, if M is chosen such that N²(1 − ε²)^M < 1, with positive probability all the symmetric differences meet the sample, and there exist therefore points x_1, …, x_M in S such that, for k ≠ ℓ, (A_k Δ A_ℓ) ∩ {x_1, …, x_M} ≠ ∅. The traces of the A_k's on {x_1, …, x_M} are then distinct and Proposition 14.10 indicates that, at least when M ≥ v,

N ≤ (eM/v)^v where v = v(C).

Take M = [2ε^{−2} log N] + 1 so that N²(1 − ε²)^M < 1. Assume first that log N ≥ v (≥ 1) so that v ≤ M ≤ 4ε^{−2} log N. Then the inequality N ≤ (eM/v)^v yields

log N ≤ v log(4e/ε²) + v log( log N / v ) ≤ v log(4e/ε²) + (1/e) log N

where we have used that log x ≤ x/e, x > 0. It follows that log N ≤ 2v log(4e/ε²), an inequality which is also satisfied when log N ≤ v since ε ≤ 1. Since N ≤ N(C, d_Q ; ε) is arbitrary, Theorem 14.12 is established.

The next statement is one of the main results on VC classes. It expresses that VC classes are Donsker for all underlying probability distributions.

Theorem 14.13. Let C ⊂ S be a VC class. Then C is a Donsker class for every probability measure P on (S,S).

Proof. Let P be a probability measure on (S,S). To agree with the notations and language of the previous section, we identify a class C ⊂ S with the class F of indicator functions I_C, C ∈ C, and thus write ‖·‖_C for ‖·‖_F. We also assume that we deal only with countable classes C in order to avoid the usual measurability questions. This condition may be replaced by some appropriate separability assumption. As a first simple consequence of Theorem 14.12, C is totally bounded in L_2 = L_2(P). By Theorem 14.6, it therefore suffices to show that

(14.14) lim_{δ→0} lim sup_{n→∞} IP{ sup_{d_P(C,D)<δ} | Σ_{i=1}^n ε_i (I_C − I_D)(X_i)/√n | > ε } = 0

for all ε > 0 (where, of course, C and D belong to C). Recall the empirical measures P_n(ω) = Σ_{i=1}^n δ_{X_i(ω)}/n, ω ∈ Ω, n ∈ IN. For every ω and n, the random (in the Rademacher sequence (ε_i)) process

( Σ_{i=1}^n ε_i I_C(X_i(ω))/√n )_{C∈C}

is subgaussian with respect to d_{P_n(ω)}. Since Theorem 14.12 actually implies that

lim_{η→0} sup_Q ∫_0^η ( log N(C, d_Q ; ε) )^{1/2} dε = 0,

it is plain that Proposition 11.19 yields a conclusion which is uniform over the distances d_{P_n(ω)}, in the sense that for every ε > 0 there exists δ = δ(ε) > 0 such that for all n and ω,

(14.15) IE_ε sup_{d_{P_n(ω)}(C,D)<δ} | (1/√n) Σ_{i=1}^n ε_i (I_C − I_D)(X_i(ω)) | < ε²

where, as usual, IE_ε is partial integration with respect to (ε_i). From this result, the proof will be completed if the random distances d_{P_n(ω)} can be replaced by d_P, at least on a large enough set of ω's. In the same way as we have (14.15), we also have that, if C is VC,

sup_n (1/√n) IE‖ Σ_{i=1}^n ε_i I_C(X_i) ‖_C < ∞.

By Lemma 6.3 (actually the subsequent comment), we also have that

sup_n (1/√n) IE‖ Σ_{i=1}^n ( I_C(X_i) − P(C) ) ‖_C < ∞.

The same property holds for C Δ C = {C Δ C' ; C, C' ∈ C} since it is also a VC class. In particular IE‖(P_n − P)(C Δ C')‖_{CΔC} → 0; hence there exists n_0 such that IP(A(n)) ≤ ε for all n ≥ n_0 where

A(n) = { ‖(P_n − P)(C Δ C')‖_{CΔC} > δ²/4 }.

We can then easily conclude: for all n ≥ n_0,

IP{ sup_{d_P(C,D)<δ/2} | Σ_{i=1}^n ε_i (I_C − I_D)(X_i)/√n | > ε } ≤ IP{ sup_{d_{P_n}(C,D)<δ} | Σ_{i=1}^n ε_i (I_C − I_D)(X_i)/√n | > ε } + IP(A(n)) ≤ 2ε

since, on the complement of A(n), d_{P_n}(C,D)² ≤ d_P(C,D)² + δ²/4 < δ² whenever d_P(C,D) < δ/2, and since, by (14.15) and Chebyshev's inequality, the first probability on the right is less than ε. This gives (14.14) so that Theorem 14.13 is established.

The preceding proof based on the key Theorem 14.12 actually carries more information than the actual statement of Theorem 14.13. It indeed indicates a uniformity property over all probability measures P. With the same argument leading to (14.15), and as we actually used it in the proof, there is a numerical constant K such that if C is VC,

(14.16) sup_n IE‖ν_n(C)‖_C ≤ K v(C)^{1/2}

for all probability measures P on (S,S), where we recall that ν_n = √n(P_n − P). (Recall that since we are dealing with indicators, the corresponding random variables in ℓ_∞(C) are uniformly bounded.) This property implies a uniform strong law of large numbers in the sense that for some sequence (a_n) of positive numbers decreasing to 0 and any P,

sup_n (1/a_n) ‖(P_n − P)(C)‖_C < ∞ almost surely.

This may be obtained for example from Theorem 8.6 (or the "bounded" version of Theorem 10.12) which yield the best possible a_n = (LLn/n)^{1/2}. One may also invoke the SLLN result in the form of Theorem 7.9 for example.

2. D 2 C . sup IEkn (C )kC M (14:17) n for all probability measures P on (S. If we then re all the Gaussian pro esses GP with ovarian e P (C \ D) P (C )P (D) . C. we also have.nite M . by the . introdu ed in Se tion 14. S ) .

if P = k1 Æxi where x1 . that IEkGP (C )kC M (14:18) k P for all P . : : : . C 2C. GP an be realized as i=1 GP (C ) = p1 k X k i=1 gi (Æxi (C ) P (C )) .nite dimensional CLT. . In parti ular. xk are points in S .

19) when k is large enough. there exists a subset A = fx1 . We may assume that supn (an n) 1 < 1 . the sequen e (k(Pn P )(C )kC =an) is bounded in probability.) The next proposition strengthens this on lusion. xk g of S of ardinality k su h that Card(C \ A) = 2k . : : : .17). Let C be a ( ountable) lass in S . it is a tually enough to have that C is P -pregaussian for every P . Proposition 14. (P ) = sup n nan i=1 C 1 1 + 2 sup IEk(Pn a nan n n (P ) sup p n P )(C )kC : C : . for all = (1 . n X IE "i (IC (Xi ) i=1 Therefore.14 below.18).14. (By an argument lose in spirit to the losed graph theorem as in Proposition 14. Therefore C is ne essarily a VC lass under (14.8). for every k . Then. for all n . Then C is a VC lass. as is obvious. showing in parti ular that the on lusion is not restri ted to the CLT. from (14. Assume there exists a de reasing sequen e of positive numbers (an ) tending to 0 su h that for every probability P on (S.455 where (gi ) is a standard normal sequen e. 1 sup IEk(Pn P )(C )kC < 1 : n an From Lemma 6. we see that. k ) in IRk k X (14:20) i=1 ji j k X 2 i Æxi (C ) i=1 C : Integrating this inequality along the orthogaussian sequen e (gi ) leads to a ontradi tion with (14.18) and thus a fortiori under (14.3. : : : . By Homann-Jrgensen's inequalities (Proposition 6. Therefore. if we set we learly get that P (C )) C n X 2IE (IC (Xi ) i=1 P (C )) n 1 X IE "i IC (Xi ) . Therefore. p Proof. k X IE gi Æxi (C ) (14:19) i=1 p (M + 1) k : C Suppose now that C is not a VC lass. S ) .

Hence Λ(P) < ∞ for all P. We would like to show actually that there is some constant M such that

(14.21)    Λ(P) ≤ M   for all probabilities P.

To this aim, let us first show that if P⁰ and P¹ are two probability measures on (S, 𝒮), 0 ≤ λ ≤ 1, and if P = λP⁰ + (1 − λ)P¹, then Λ(P) ≥ λΛ(P⁰). Indeed, let (X_i⁰) (resp. (X_i¹)) be independent random variables with common distribution P⁰ (resp. P¹). Let further (δ_i) be independent of everything that was introduced before and consist of independent random variables with law IP{δ_i = 0} = 1 − IP{δ_i = 1} = λ. Then (X_i) has the same distribution as the sequence obtained by taking X_i⁰ when δ_i = 0 and X_i¹ when δ_i = 1. By the contraction principle, conditionally on (δ_i),

IE‖ Σ_{i=1}^n ε_i I_C(X_i⁰) I_{{δ_i = 0}} ‖_C ≤ IE‖ Σ_{i=1}^n ε_i I_C(X_i) ‖_C.

Jensen's inequality with respect to (δ_i) then yields

λ IE‖ Σ_{i=1}^n ε_i I_C(X_i⁰) ‖_C ≤ IE‖ Σ_{i=1}^n ε_i I_C(X_i) ‖_C

and thus the announced inequality Λ(P) ≥ λΛ(P⁰). This observation easily implies (14.21). Indeed, if (14.21) is not realized, one can

find a sequence (P^k) of probabilities on (S, 𝒮) such that Λ(P^k) ≥ 4^k for all k. If we then let P = Σ_{k≥1} 2^{-k} P^k, the preceding observation shows that Λ(P) ≥ 2^{-k} Λ(P^k) ≥ 2^k for all k, contradicting Λ(P) < ∞. We can now conclude the proof of Proposition 14.14. If C is not VC, there exists, for all k, A = {x_1, …, x_k} in S such that (14.20) holds. Take then P = k^{-1} Σ_{i=1}^k δ_{x_i}. Fix n large enough so that n > 4M n a_n, which is possible since a_n → 0, and consider k > 2n² in order that IP(Ω₀) ≥ 1/2 where Ω₀ = {∀ i ≠ j ≤ n, X_i ≠ X_j}. Then, since Λ(P) ≤ M, we can write by (14.20) that

M n a_n ≥ IE‖ Σ_{i=1}^n ε_i I_C(X_i) ‖_C ≥ ∫_{Ω₀} ‖ Σ_{i=1}^n ε_i I_C(X_i) ‖_C dIP ≥ (n/2) IP(Ω₀) ≥ n/4

which is a contradiction. The proof of Proposition 14.14 is complete.

As in Section 14.1, the nice limit properties of empirical processes indexed by VC classes may be related to the type property of a certain operator between Banach spaces. Let C ⊂ 𝒮. Denote by M(S, 𝒮) the Banach space of bounded measures on (S, 𝒮) equipped with the norm ‖μ‖ = |μ|(S), and consider the operator j : M(S, 𝒮) → ℓ∞(C) defined by j(μ) = (μ(C))_{C∈C}. We denote by T₂(j) the type 2 constant of j, that is the smallest constant C (provided there exists one) such that for all finite sequences (μ_i) in M(S, 𝒮),

IE‖ Σ_i ε_i j(μ_i) ‖_C ≤ C ( Σ_i ‖μ_i‖² )^{1/2}.

We establish that C is a VC class if and only if j is an operator of type 2. More precisely, we have the following result.

Theorem 14.15. In the preceding notations, for some numerical constant K,

K^{-1} (v(C))^{1/2} ≤ T₂(j) ≤ K (v(C))^{1/2}.

Proof. To prove the type 2 inequality, let (μ_i) be a finite sequence in M(S, 𝒮). We may assume that the measures μ_i are positive and, by homogeneity, that Σ_i ‖μ_i‖² = 1.

Set Q = Σ_i ‖μ_i‖ μ_i. Then Q is a probability measure on (S, 𝒮) and we clearly have that

( Σ_i |μ_i(C) − μ_i(D)|² )^{1/2} ≤ d_Q(C, D)   for all C, D ∈ C,

where d_Q(C, D) = Q(C Δ D)^{1/2}. Since T₂(j) ≥ 1, we can assume that v = v(C) ≥ 2. The entropy version of inequality (11.19) together with Theorem 14.12 applied to the process Σ_i ε_i μ_i(C), C ∈ C, where C is VC, yields

IE‖ Σ_i ε_i j(μ_i) ‖_C ≤ 1 + K ∫₀¹ ( log N(C, d_Q, ε) )^{1/2} dε ≤ 1 + K′ (v(C))^{1/2}

where K, K′ are numerical constants. Since v(C) ≥ 1, the right side inequality of the theorem follows. To prove the converse inequality, set v = v(C). By definition of v, there exists A = {x_1, …, x_{v−1}} in S such that Card(C ∩ A) = 2^{v−1}. Then, by the type 2 inequality,

IE‖ Σ_{i=1}^{v−1} ε_i j(δ_{x_i}) ‖_C ≤ T₂(j)(v − 1)^{1/2},

and, by (14.20), the left hand side is at least (v − 1)/2. Therefore

(v − 1)/2 ≤ T₂(j)(v − 1)^{1/2},

so that T₂(j) ≥ (v − 1)^{1/2}/2 ≥ v^{1/2}/4, which completes the proof of Theorem 14.15.

From this result together with the limit theorems for sums of independent random variables with values in Banach spaces with some type (or for operators with some type) (cf. Chapters 7, 8, 9, 10), one can essentially deduce again Theorem 14.13 and the consequent strong limit theorems for empirical processes indexed by VC classes. Moreover, one can note that C is actually a VC class as soon as the associated operator j has some type p > 1, simply because property (14.20) makes it clear that the notion of VC class is related to ℓ₁^n spaces. Finally, the following statement is the Banach space theory formulation of the previous investigation.

Theorem 14.16. Let x_1, …, x_n be functions on some set T taking the values ±1. Let

r(T) = IE sup_{t∈T} | Σ_{i=1}^n ε_i x_i(t) |.

There exists a numerical constant K such that for all k with k ≤ r(T)²/Kn, one can find m_1 < m_2 < ⋯ < m_k in {1, …, n} such that the set of values {(x_{m_1}(t), …, x_{m_k}(t)); t ∈ T} is exactly {−1, +1}^k. In other words, the subsequence x_{m_1}, …, x_{m_k} generates in ℓ∞(T) a subspace isometric to ℓ₁^k.

Proof. Let

M = IE sup_{t∈T} | Σ_{i=1}^n ε_i (1 + x_i(t))/2 |,

and consider the class C of subsets of {1, …, n} of the form {i ≤ n; x_i(t) = 1}, t ∈ T. Theorem 14.15 applied to this class yields

M ≤ T₂(j) √n ≤ K (n v(C))^{1/2}.

Therefore, by definition of v(C), the conclusion of the theorem is fulfilled for all k < M²/K²n. Note that r(T) ≤ 2M + √n. Since we may assume M ≥ √n (otherwise there is nothing to prove), we have r(T) ≤ 3M, and we see that when k ≤ r(T)²/K′n for some large enough numerical constant K′, then k < M²/K²n, in which case we already know the conclusion holds.

Notes and References

As announced, this chapter only presents a few examples of the empirical process methods and their applications. Some general references on empirical processes are the notes and books [Ga], [Du5], [Pol], [G-Z3], [Sa]. The interested reader will complete appropriately this chapter and this short discussion with those references and the papers cited there. Our framework is essentially the one put forward in the work of E. Giné and J. Zinn [G-Z2], [G-Z3], itself initiated by R. Dudley [Du4]. The equicontinuity criterion for Donsker classes is due to R. Dudley [Du4]. Giné and Zinn made clever use of the Gaussian randomization and the Gaussian process techniques together with exponential bounds to achieve remarkable progress in the understanding of the Donsker property. We refer to [G-Z3] for an alternate exposition and more details.

The central limit theorems for subgaussian and Lipschitz processes (Theorems 14.1 and 14.2) are due to N. Jain and M. Marcus [J-M1]. These authors worked under the metric entropy condition; the majorizing measure version of these results was obtained in [He1], improving upon [Led1]. The analog of Theorem 14.2 for random variables in c₀ is studied in [Pau]. The simple proof of the (weaker) Theorem 14.4 is taken from [M-P3]. R. Dudley and V. Strassen [D-S] introduced entropy in this study and established Theorem 14.2 with M uniformly bounded. Another prior partial result on Theorem 14.2 may be found in [Gin] (where the technique of proof is actually close to the nowadays bracketing arguments; see below). Theorem 14.3 and the inequality of Lemma 14.5 are due to B. Heinkel [He7] (see also [He8]). A stronger version of Theorem 14.3 is established in [A-G-O-Z], where it is shown that the conclusion actually holds for local Lipschitz processes, namely processes X in C(T) such that for all t in T and ε > 0,

‖ sup_{d(s,t)<ε} |X_s − X_t| ‖_{2,1} ≤ ε.

(The arguments of proof are related to Theorem 14.8 and the truncations used there.) Bracketing was initiated in [Du4]; further results are obtained in [Oss]. The proof relies on bracketing techniques in the context of empirical processes; a similar decomposition is developed from which the local Lipschitz condition provides the control of the corresponding L¹(P_n)-portion. In [Zi1], J. Zinn connects the Jain-Marcus CLT for Lipschitz processes with the type 2 property of the operator j : Lip(T) → C(T). Theorem 14.6, motivated by the investigation and prior results in [G-Z2], and its randomized version (as stated here as Theorem 14.7) are theirs [G-Z2]; the randomized version (Theorem 14.6) is taken from [G-Z2]. Theorem 14.8 and the random geometric description of Donsker classes have been obtained in [Ta4]. In those last two articles, analogous results for the LIL are discussed; cf. also [A-G-Z], [He4], [L-T4].

Further statements in this spirit appear in [L-T4], [Ta6] (see also [V-C2]). Vapnik-Chervonenkis classes of sets were introduced in [V-C1] and were shown there to satisfy uniform laws of large numbers. CLT, LIL and invariance principle for VC classes have been established respectively in [Du4], [K-D], [D-P]. Donsker classes of sets are investigated in [G-Z2], [G-Z3], [Zi4]. Theorem 14.12 on the uniform entropy control of VC classes has been observed by R. Dudley [Du4], to whom Theorem 14.13 is due. Theorem 14.10 was established, independently, in [V-C1], [Sh], [Fr]; Lemma 14.11 seems to be due to P. Frankl [Fr]. That VC classes are actually characterized by uniform limit properties of the empirical measures was noticed in a particular case (for the Donsker property) in [D-D] and completely understood via the map j and Theorem 14.15 by G. Pisier [Pi14]; cf. also V. D. Milman for some results connected with type 2 maps. Our exposition of this section is based on the observations by G. Pisier [Pi14], and Theorem 14.16 is also taken from [Pi14]. Universal Donsker classes of functions do not seem to have similar nice descriptions; see [A-T] for the corresponding LIL result. Let us also mention, to conclude, the extension of the VC definition to classes of functions (VC graphs), with in particular a nice characterization of the Donsker property [Ale2] in the spirit of the best possible conditions for the CLT in Banach spaces (cf. Chapter 10).

Chapter 15. Applications to Banach Space Theory

15.1. Subspaces of Small Codimension
15.2. Conjectures on Sudakov's Minoration for Chaos
15.3. An Inequality of J. Bourgain
15.4. Invertibility of Submatrices
15.5. Embedding Subspaces of L_p into ℓ_p^N
15.6. Majorizing Measures on Ellipsoids
15.7. Cotype of the Canonical Injection ℓ₁^N → L_{2,1}
15.8. Miscellaneous Problems
Notes and References

Chapter 15. Applications to Banach Space Theory

This last chapter emphasizes some applications of isoperimetric methods and process techniques of Probability in Banach spaces to the local theory of Banach spaces. They demonstrate the power of probabilistic ideas in this context. The applications we present are only a sample of some of the recent developments in local theory of Banach spaces (and we refer to the lists of references and seminars and proceedings for further main examples in the historical developments). This chapter is organized along its subtitles, of rather independent content, some treated with details as in the first sections, the others in the last paragraph on miscellaneous problems. Several questions and conjectures are presented in addition.

15.1. Subspaces of Small Codimension

Before turning to the object of this first section, it is convenient to briefly present a covering lemma in the spirit of Lemma 9.5 which will be of help here. We denote by B₂ = B₂^N the Euclidean unit ball of IR^N.

Lemma 15.1. Let N be fixed. There exists a subset H of 2B₂^N of cardinality at most 5^N such that B₂^N ⊂ Conv H.

Proof. For 0 < δ < 1, let H̃ be maximal in B₂ such that |x − y| > δ for all x ≠ y in H̃. Then the balls of radius δ/2 with centers in H̃ are disjoint and contained in (1 + δ/2)B₂. Comparing the volumes yields

Card H̃ (δ/2)^N vol B₂ ≤ (1 + δ/2)^N vol B₂

so that Card H̃ ≤ (1 + 2/δ)^N. By maximality of H̃, it is easily seen that each x in B₂ can be written as x = Σ_{k=0}^∞ δ^k h_k where (h_k) ⊂ H̃. Take then δ = 1/2, H = 2H̃, and the lemma follows.

As a consequence of this lemma, note that the set H̃ = ½H of the preceding proof is such that H̃ ⊂ B₂^N, Card H̃ ≤ 5^N and

|x| ≤ 2 sup_{h∈H̃} |⟨x, h⟩|   for all x in IR^N.
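The volume comparison in the proof can be checked numerically: any (1/2)-separated subset of B₂² has at most (1 + 2/(1/2))² = 25 points. The following sketch (a greedy packing over random sample points, an illustrative construction not from the text) verifies the bound in dimension 2.

```python
import math
import random

def greedy_separated(points, delta):
    # greedily extract a delta-separated subset (a packing)
    kept = []
    for p in points:
        if all(math.dist(p, q) > delta for q in kept):
            kept.append(p)
    return kept

random.seed(0)
# random points in the Euclidean unit ball of R^2 (rejection sampling)
pts = []
while len(pts) < 2000:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y <= 1:
        pts.append((x, y))

net = greedy_separated(pts, 0.5)
# volume bound: disjoint balls of radius 1/4 inside the 1.25-ball
# force Card <= (1.25 / 0.25)^2 = 25
print(len(net) <= 25)  # True
```

The bound holds for any sample: the balls of radius δ/2 around kept points are disjoint and contained in (1 + δ/2)B₂, exactly as in the proof above.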

Let T be a convex body of IR^N, that is, T is a compact convex symmetric (about the origin) subset of IR^N with nonempty interior. As usual, denote by (g_i) an orthogaussian sequence. We let (as in Chapter 3)

ℓ(T) = IE sup_{t∈T} | Σ_{i=1}^N g_i t_i | = ∫_{IR^N} sup_{t∈T} |⟨x, t⟩| dγ_N(x)

where t = (t₁, …, t_N) ∈ IR^N and γ_N is the canonical Gaussian measure on IR^N.

Theorem 15. 1 i N . x. we denote by G the random operator IRN ! IRk with matrix (gij ) . If S is a subset of IRN . For every x in IRN and all u > 0.2. 1 j k . a family of independent standard normal variables. diamS = supfjx yj . (15:1) IPfjG(x)j < ujxjg eu2 k=2 : k k P We may assume by homogeneity that jxj = 1 . If F is a ve tor subspa e of a ve tor spa e B . For every i=1 > 0. Then jG(x)j2 has the distribution of gi2 . There exists a numeri al onstant K su h that for all k N `(T ) IP diam(T \ KerG) K p 1 exp( k=K ) : k In parti ular. the se ond one the Gaussian omparison theorems. k N . there exists a subspa e F of IRN of odimension k (obtained as F = KerG(!) for some appropriate ! ) su h that `(T ) diam(T \ F ) K p : k 1st proof of Theorem 15.2. For gij .rst one uses isoperimetry and Sudakov's minoration. Let T be a onvex body in IRN . We start with an elementary observation. y 2 S g . the odimension of F in B is the dimension of the quotient spa e B=F . ! k=2 k X 1 2 2 k : IE exp gi = (IE exp( g1 )) = 1 + 2 i=1 .

there exists H in B2k of ardinality less than 5k su h that jG(x)j 2 sup jhG(x).3. hij > 2`(T ) + g exp : 2 8R2 x2T By the Gaussian rotational invarian e and de.1 (or rather its immediate onsequen e). hij h2H for all x in IRN .1) follows. It is based on the isoperimetri type inequalities for Gaussian measures. x2T Proof. Then next lemma is the main step of this proof. It therefore suÆ es to show that for every jhj 1 . for every integer k and u > 0 . IPfjG(x)j2 ( < u2 g = IP exp k X i=1 ! gi2 ) > exp( u2 ) k=2 1 exp(u2 ) : 1 + 2 Choose then for example = k=2u2 and (15. By Lemma 15. Under the previous notations. u2 IPfsup jG(x)j > 4`(T ) + ug 5k exp 8R2 x2T where R = sup jxj . Lemma 15.464 Therefore. u u2 (15:2) IPfsup jhG(x).

hi)x2T is distributed as N P jhj gi xi . Then (15.2) for example an be used equivalently. (Of ourse.) Lemma 15.1. the pro ess (hG(x).2) is a dire t onsequen e of Lemma 3. We an now on lude this .3 is established. we need not be on erned i=1 x2T here with sharp onstants in the exponential fun tion and the simpler inequality (3.nition of G .

rst proof of Theorem 15.18). sup "(log N (T. By Sudakov's minoration (Theorem 3. there exists a subset S of T of ardinality less than exp(K12 k) su h that the Eu lidean balls of radius k 1=2 `(T ) and enters in S over T . Let T = 2T \ (k 1=2 `(T )B2N ) and de. Therefore. "B2N ))1=2 K1 `(T ) ">0 for some numeri al onstant K1 .2.

ne the random variables jG(x)j : A = sup jG(x)j . B = inf x2S jxj x2T .

x y)j `(T ) jxj jyj + `p(T ) jGB(!. B K2 1 kg has a probability larger than 1 2 exp( 2k) 1 exp( k) . IPfA > 14`(T )g exp( 2k) : On the other hand. IPfA > (8 + u)`(T )g 5k exp ku2 . if u = 6 for example (k 1 ). it follows that p IPfB < K2 1 kg exp( 2k) : p Therefore. the set fA 14`(T ). Sin e G(!. we an write that y)j `(T ) jG(!. On this set of ! 's.3 applied to T . for every u > 0 . by (15. IPfB < ug exp(K12 k) p eu2 k=2 : k If we hoose here u = K2 1 k where K2 = exp(K12 + 3) . 8 so that. This ompletes the . + p = + p (! ) B (!) k k p A(!) `(T ) + p B (!) k k (14K2 + 1) `p(T ) : k Hen e diam(T \ KerG(!)) K`(T )= k with K = 2(14K2 +1) . x) = 0 .1). take x 2 T \ KerG(!) .465 By Lemma 15. There exists y in S with jx yj k 1=2 `(T ) .

Let S be a losed subset of the unit sphere S2N 1 of IRN . Let (gi ) .2. It is based on Corollay 3. (gi0 ) be independent orthogaussian sequen es.2.rst proof of Theorem 15.13. de. 2nd Proof of Theorem 15. For x in S and y in the Eu lidean unit sphere S2k 1 of IRk .

y = hG(x). yi + g = N X k X i=1 j =1 gij xi yj + g where g is a standard normal variable independent of (gij ) and Yx.ne the two Gaussian pro esses Xx.y = N X i=1 gi xi + k X j =1 gj0 yj : .

x0 in S and y.y Xx0 .y0 ) = (1 hx. y0 in S2k 1. IE(Xx. for every > 0.y > g : x2S y2S k 1 x2S y2S k 1 2 2 (15:3) By de.y Yx0 .13.466 It is easy to see that for all x. IPf inf sup Yx.y > g IPf inf sup Xx. x0 i)(1 hy. y0 i) so that this dieren e is always non-negative and equal to 0 when x = x0 . By Corollary 3.y0 ) IE(Yx.

2 P j =1 k gj !1=2 = IEZk2 .y and Yx.nition of Xx. 102 sin e IEZk4 (k + 1)2 (elementary omputations). it follows that the right hand side of this inequality is majorized by 1 IPf inf jG(x)j + g > g IPf inf jG(x)j > 0g + exp( 2 =2) x2S x2S 2 while the left hand side is bounded below by 80 11=2 > k < X IP gj. if = k=20 . by the pre eeding. Hen e. we get that 1 k 1 2 2 IPfZk k=10 g 1 2 10 k + 1 2 at least if k 3 something we may assume without any loss of generality (in reasing if ne essary K in the p on lusion of the theorem).2 A > : j =1 sup N X x2S i=1 9 > = gi xi > > . the left side of (15.3) is larger than 1 2 p20 IE sup N X k x2S i=1 gi xi : . ( IPfZk > 2g IP sup N X x2S i=1 ) gi xi > N X IPfZk > 2g 1 IE sup gi xi x2S i=1 where we have set Zk = k .y . We an write Z k + Zk2 dIP 102 fZk2 k=102 g k + (IEZk4 )1=2 (IPfZk2 k=102g)1=2 .

S \ F = . There exists therefore an ! su h that jG(!.3) (with = k=20 ) that p20 `(rT ) 12 exp( k=800) : 1 IPf inf jG(x)j > 0g x2S 2 k p Hen e. if we hoose r = K`(T )= k for K numeri al large enough. we see from the pre eding inequality that IPf inf jG(x)j > 0g > 0 . . if we let F = KerG(!) (whi h is of odimension k ).Let then S = r 1 T \ S2N 1 467 p where r > 0 . x)j > 0 for all x in S . That x2S is. By de. We have obtained from (15.

. We indeed improve the minoration of the right hand side of (15. ( IPf inf sup Yx. On the other hand.nition of S = r 1 T \ S2N 1 . if S is as before r 1 T \ S2N M N X 2 gi xi `(T ) r x2S i=1 2IE sup 1. To improve the pre eding argument into the quantitative estimate. p this implies that. For every x2S i=1 ) N X sup gi xi > x2S i=1 ( IPfZk m g IP sup N X x2S i=1 ( 1 IPfm Zk > g IP sup ) gi xi > m 2 N X x2S i=1 By Lemma 3. The qualitative on lusion of the theorem already follows. Let m and M be respe tively medians of Zk = ( k .2 P g )1=2 j =1 > 0.1.3) in the following way. for every x in T \ F . this is larger than 1 1 exp( 2 =2) 2 1 exp[ (m M 2 2)2 =2℄ : Let us then hoose = (m M )=3 to get the lower bound 1 exp[ (m M )2 =18℄ and (15. we n eed simply ombine it with on entration properties.3) thus reads as IPf inf jG(X )j > 0g 1 x2S p 3 expf (m M )2 =18℄ : 2 We have seen previously that m k=10 . if m M j ) gi xi > M + (m M 2) : 2 > 0 . jxj < r = K`(T )= k .y > g IP Zk x2S y2S k 1 2 N P and sup gi xi .

1. Note that this se ond proof may be rather simpli.468 so that IPf diam(T \ KerG) 2rg IPf inf jG(x)j > 0g x2S 2 p 1 32 exp 4 181 10k 2`(rT ) !2 3 5 : The on lusion of the theorem follows.2) may be used instead of Lemma 3. Let us observe again that the simpler inequality (3.

y = hG(x).y so that this dieren e is always 1. y0i) 0 and equal to 0 if x = x0 . x0 in S and y. N X i=1 gi xi + jxj k X j =1 gj0 yj : we have Xx0 . y0 i 2jxj jx0 j(1 hy. x0 i(1 jxj2 + jx0 j2 2jxj jx0 jhy.y Yx0 . By Theorem 3.ed yielding moreover best onstants in the statement of the theorem by the use of Theorem 3. y0 i 2hx.y x2S y2S k 1 x2S y2S k 1 2 2 and hen e.y0 j2 IEjXx. Take Xx. this implies that IE inf sup Yx.y = For all x. by de.y IE inf sup Xx. y0 in S2k IEjYx. yi and Yx. y0 i) hy.16.16.y0 j2 = jxj2 + jx0 j2 2jxj jx0 jhy.

y and Yx. improved numeri al onstants. ak is of the order of j =1 that 1 IPf inf jG(x)j = 0g IPf inf jG(x)j IE inf jG(x)j `(T ) ak g x2S x2S x2S r p k . with.nition of Xx.) by " 1 exp a 2 k `(T ) r 2 # : As before. Write then whi h is majorized using on entration ((1. this yields the on lusion of the theorem. as announ ed.6) e. . By the law of large numbers. 1 `(T ) r k P where ak = IEZk = IE(( gj0 2 )1=2 ) .g.y and with S = r 1 T \ S2N IE inf jG(x)j ak x2S 1.

While we have studied in Chapter 3 the integra- bility properties and tail behavior of sup j little seems to be known on the "metri geometri " t2T i. .469 15. A. IP) ) and onsider the Gaussian haos pro ess P i. Let further ! (gi ) be an orthogaussian sequen e (on ( .j ij !1=2 < 1 . Let T be a subset of `2 (IN IN) .j gi gj tij P t2T gi gj tij j . Conje tures on Sudakov's minoration for haos Denote by `2 (ININ) the spa e of all sequen es t = (tij ) indexed by ININ su h that jtj = P 2 t i.j onditions on T equivalent to this supremum to be almost surely .2.

Unfortunately. after de oupling. as a mixture of Gaussian pro esses. A0 . this approa h seems to be doomed to failure: the random distan es 0 1 d!0 (s.e. given P P !0 . to study. The suÆ ient onditions of Theorem 11. One approa h ould be to view the pre eding haos!! pro ess. t) = . IP0 ) .22 are too strong and by no way ne essary.nite (if there are any?). gi gj0 (!0 )tij where (gj0 ) is a standard Gaussian sequen e onstru ted on some dierent i j t2T probability spa e ( 0 . i.

.

X .

X 0 0 B .

gj (! )(sij .

i .

j .

2 1=2 .

.

tij ).

.

C A .

Let us redu e to de oupled haos and set `(2) (T ) = .e. t) < "g is very small for " > 0 small. d!0 (s. i. that IP0 f!0. do not have the property that was essential in Se tion 12.2.

.

.

.

.

X .

0 .

IE sup .

gi gj tij .

.

t2T .

j . i.

2 we know that.1.) By the study of Se tion 13. this redu tion to the de oupled setting is no loss of generality in the understanding of when T de. at least for symmetri (tij ) . the 2 in `(2) (T ) indi ates that we are dealing with haos of order 2 . : (With respe t to the notation of Se tion 15.

nes an almost surely bounded haos pro ess. A .

there are two natural distan es to onsider. The .3 on suÆ ien y. As we dis ussed it in Se tion 11.rst step in this study would be an analog of Sudakov's minoration.

h0 ij = sup jthj jhj.jh j1 where th = P j ! hj tij i .rst one is the usual L2 -metri js tj and the se ond one is the inje tive metri or norm given by ktk_ = sup0 jhth. Clearly ktk_ jtj . jhj1 .

470 We .

Denote. for n .rst investigate an instru tive example.

Clearly .xed. Let T be the unit ball of `2 (n n) for the inje tive norm k k_ . by `2 (n n) the subset of `2 (IN IN) onsisting of the elements (tij ) for whi h tij = 0 if i or j > n .

.

.

n .

.

X .

sup .

.

gi gj0 tij .

.

t2T .

i.j =1 .

Let ("ij )1i . Let T be as before. by the subgaussian inequality (4. (ii) is obvious by volume onsiderations. Proof. n))1=2 n . We give a simple probabilisti proof of (i).4. T = ft 2 `2 (n n). There is a numeri al onstant > 0 (independent of n ) su h that p (i) (log N (T. 12 ))1=2 n . n X i=1 11=2 !1=2 0 n X gi2 gj0 2 A j =1 so that `(2) (T ) n . jh0 j 1 . (ii) (log N (T. . For jhj . jn be a doubly-indexed Radema her sequen e. j j.1). Proposition 15. ktk_ 1g . for every u > 0 . k k_ .

9 8.

.

= <.

.

X .

u2 IP .

.

hi h0j "ij .

.

> u 2 exp : . :.

2 .

By the pre eding. there exists H CardH 5n . sin e IP 8 < 8h.1. h0 2 H . i. 2B2n . : .j By Lemma 15. CardH 5n su h that B2n ConvH .

.

.

.

.

X .

0 .

h h " i j ij .

.

.

.

i.j .

2 p By de. 9 p= 1 >4 n : .

.4 is omplete.nition of kk_ and H . one needs at least 12 exp( n2 ) balls p of radius n=8 to over T . it follows that IPfk("ij )k_ 4 ng 1=2 . be a family of 1 . That is IPfj("ij ij )j n=2g exp( n2 ) . The proof of Proposition 15. j n . Therefore. 1 i. Re entering. IP 8 <X : i. Lemma 1. So one needs at least 21 exp( n2 ) balls of radius n=2 in the p p Eu lidean metri j j to over fk("ij )k_ 4 ng 4 nT . Let ij .6 implies that for some > 0 .j 9 j"ij ij j2 n2 = exp( n2 ) : 4. Then (j"ij ij j2 ) is a sequen e of random variables taking the values 0 and 1 with probability 1=2 .

for " = n . k k_ . j j. Proposition 15. "))1=4 3=2 3=2 `(2) (T ) .e. as we have seen. `(2)(T ) n . "))1=4 K`(2)(T ) (15:4) ">0 and sup "(log N (T. "))1=2 n `(2)(T ) 2 2 sin e. sup "(log N (T. (gj0 ) . (geij ) be independent standard Gaussian sequen es. Re ently (15. "(log N (T. k k_ . where it is also proved that for Æ > 2 sup "(log N (T.471 p It follows from this proposition that. "))1=2 K`(2)(T ) ">0 for some numeri al K .5. Let (gi ) . that for any subset T in `2 (IN IN) . and. "))1=Æ K (Æ)`(2) (T ) : ">0 We would like now to present a simple result that is somewhat related to these questions.4) has been proved in [Ta16℄ (relying on an appropriate version of Theorem 15. if (xij )1i. It is natural to onje ture that these inequalities are best possible. i. k k_ . for " = 1=2 . j j.2).jn is a . Then. "(log N (T.

if ("i ) is a Radema her sequen e independent of (geik ) . (gj0 ) . We simply write that 1=2 2 i.k=1 : .j.j =1 n pnIE X g g0 x gik gj0 xij e i.k=1 n X = IE "k geik "j gj0 xij i.k=1 n pnIE X 0 g g x = IE i j ij n X : : By symmetry. n X IE geik gj0 xij i. (15:4) n X IE geij xij i.j.nite sequen e in a Bana h spa e.j =1 Proof.j.j =1 i j ij i.

j.j =1 : The ontra tion prin iple in (gj0 ) ( f.472 Jensen's inequality with respe t to partial integration in ("i ) shows then that n X IE geik gj0 xij i. Let (i ) be independent random variables with ommon distribution IPfi = 1g = 1 IPfi = 0g = Æ . "B2N ))1=2 d" (assumed to be .8)) and symmetry then imply the result.3.k=1 n X IE gik gj0 xij e i. An inequality of J. J. Bourgain In his remarkable re ent work on (p) -sets [Bour℄. Let T be a subset of IRN + and set E= 1 Z 0 0 < Æ < 1: (log N (T. (4. Bourgain establishes the following inequality. 15.

Under the previous notations. tN ) .nite).6.10) that for some onstant K and all u > 0 ( (15:5) " p 1 IP sup max i ti > u + K R Æm + log Æ t2T CardI m i2I X K exp 1=2 #) E u2 1 log : KR2 Æ P We will establish (15.5) using the entropi bound (11. t2T Theorem 15. : : : . with respe t to the norm X max i ti : CardI m i2I . i2I CardI m . X i ti sup max t2T CardI m i2I p KR " p p 1 Æm + p log 1=2 # Æ 1 + K log Æ 1=2 E: The inequality of the theorem is equivalent to say (Lemma 4. t 2 T . : : : . Set further R = sup jtj . An element t in IRN has oordinates t = (t1 . N g . I f1.4) for the ve tor valued pro ess i ti . there is a numeri al onstant K su h that for all p 1 and 1 m N .

1 i 1 . k(ti )k2. we also have that k(i ti )k2. if t = (t1 . IPfZi > ug = IP Sin e k(ti )k2. IPfZi > ug (15:6) 0 1 .473 To this aim.5). Lemma 15. and i = 0 or 1 . tN ) 2 IRN .1 1 .7. that is Zi i 1 Æ = 2: u2 u eÆ i : u2 i Without loss of generality.1 jtj . N X j =1 8 N <X : j =1 9 = Ifj tj >ug i . Re all that k(ti )k2. By the binomial estimate (Lemma 2. we an assume that m = 2p .1 denotes the weak `2 norm of the sequen e (ti )iN .1 1 . 1i IPfj tj > ugA : p IPf1 > u j g IPf1 > u2 j g IE Therefore. Let t in IRN + with k(ti )k2. Let u > 0 be .1 p onstant K su h that for all u K Æm . : : : . IPfj tj > ug = N X j =1 N X j =1 ei N X j =1 1=2 . There is a numeri al ) i ti > u K exp u2 1 log : K Æ Proof: Let (Zi )iN be the non-in reasing rearrangement of the sequen e (i ti )iN so that m X X max i ti = Zi : CardI m i=1 i2I Note that sin e k(ti )k2. for all integers i N and all u > 0 . ( IP X max Cardm i2I 1 and let also 1 m N . As usual. the next ru ial lemma will indi ate that the in rements of this pro ess satisfy the appropriate regularity ondition. for every i and u > 0 .

if j 1 .) We observe that. 2j X i=1 Zi 2j X u i 1=2 2 2j=2 : 2 i=1 .xed. Let j 1 be the largest su h that u2 2e1042j and take j = 0 if u2 < 2e104 . (We do not attempt to deal with sharp numeri al onstants.

`=j 2`Z2` p P `=j 9 = u` u=2 .474 We also observe that X i>2j Zi p X `=j 2` Z2` : It follows from these two observations that. if u K Æm where K 103 . w` ) . It is lear that P `j p v` u=4 while. u w` 3 10(Æ2p+1)1=2 : 4 `p X Hen e we have that (15:7) p P `=j IP u` u=2 . By the binomial estimates (15. log p X exp `=j 2` log u2` eÆ2` : x x 1 log : Æ 4 Æ Therefore. for all j 0 . the .6). and set u` = max(v` . w` = 10(Æ2`)1=2 . IP ( m X i=1 ) Zi > u 8 p <X IP : `=j whenever (u` ) are positive numbers su h that p X u > IPf2` > Z2` > u`g 2 . ( m X i=1 ) Zi > u ` p X eÆ2` 2 u2` `=j Note that for 2Æ x 4 . j ` p . Let v` = 10 2u2 (` j)=2 .

7) gives raise to an exponent of the form 2j log u2j eÆ2j ! 2j log provided that vj2 eÆ2j ! 2Æ = 2j log u2 e1042j u2 e104Æ2j 2 14 eu104 log 1Æ 4: p This learly holds by de.rst term (` = j ) of the sum on the right of (15.

Hen e " (15:8) for K 4e104 .nition of j 1 and also if j = 0 sin e u K Æm ( K 103 ). exp 2j log u2j eÆ2j !# exp u2 1 log K Æ .

then u` = w` and this inequality is trivially satis. 5 eÆ2` 1 2 35 log eÆu2` ` 11 : If u` 1 = w` 1 .475 We now would like to show that for every ` > j 2` log (15:9) that is. log u2` eÆ2` u2` eÆ2` u2` 1 6 ` 1 2 log .

ed sin e by de.

If u` 1 = v` 1 .9) is thus satis. (15. the interior of the logarithm is onstant.nition of w` . we note the following: u2` eÆ2` 2 2 2 2 eÆv`2` = 14 eÆv2` ` 1 1 = 41 eÆu2` ` 1 1 14 eÆw2`` 11 23 : Using that log(x=4) (3 log x)=5 when x 25 .

7. t in IRN . For s. we have that ( IP m X i=1 ) Zi > u X k 0 X k 1 " k k u2 1 log 3 K Æ exp exp 2 exp 6 5 u2 1 log 3K Æ u2 1 log K Æ # at least if u2 log(1=Æ) 3K .7).8) and (15. The inequality of Lemma 15.9). (15. We an now on lude the proof of Lemma 15. By (15.ed.6. Proof of Theorem 15.7 follows with (for example) K = 12e104 sin e there is nothing to prove if u2 log(1=Æ) 3K . de.

Proof of Theorem 15.6. For s, t in IR^N, define the random distance

D(s, t) = ( log(1/δ) )^{1/2} max_{Card I ≤ m} | Σ_{i∈I} η_i (s_i − t_i) |.

From the preceding, Lemma 15.7 (applied to (s − t)/|s − t|) implies that for all s, t and all u ≥ K( mδ log(1/δ) )^{1/2},

IP{ D(s, t) > u |s − t| } ≤ K exp( −u²/K ).

(K denotes here a numerical constant, not necessarily the same each time it appears.) It follows that, for all measurable sets A and all s, t,

∫_A D(s, t) dIP ≤ K IP(A) |s − t| [ ψ₂^{-1}( 1/IP(A) ) + ( mδ log(1/δ) )^{1/2} ]

where we recall that ψ₂(x) = exp(x²) − 1. We are therefore in a position to apply Remark 11.4 to the random distance D(s, t); it yields, for all measurable sets A,

∫_A sup_{s,t∈T} D(s, t) dIP ≤ K IP(A) [ R ψ₂^{-1}( 1/IP(A) ) + R ( mδ log(1/δ) )^{1/2} + E ].

Since for every t we also have that

IP{ D(0, t) > u + KR ( mδ log(1/δ) )^{1/2} } ≤ K exp( −u²/KR² )

for all u > 0, it follows that, for all u > 0,

IP{ sup_{t∈T} D(0, t) > u + K[ R( ( mδ log(1/δ) )^{1/2} + 1 ) + E ] } ≤ K exp( −u²/KR² ),

which, by the definition of D, amounts to (15.5). Note of course that we can replace the entropy integral by sharper majorizing measure conditions. The proof of Theorem 15.6 is thus complete.

15.4. Invertibility of Submatrices

This section is devoted to the exposition of one simple but significant result of the work [B-T1] (see also [B-T2]) by J. Bourgain and L. Tzafriri. Let A be a real N × N matrix considered as a linear operator on IR^N. Denote by ‖A‖ = ‖A‖_{2→2} its norm as an operator ℓ₂^N → ℓ₂^N. If σ is a subset of {1, …, N}, denote by R_σ the restriction operator, that is the projection from IR^N onto the span of the unit vectors (e_i)_{i∈σ}, where (e_i) is the canonical basis of IR^N; R_σ^t is the transpose operator. By restricted invertibility of A, we mean the existence of a subset σ of {1, …, N}, with cardinality of the order of N, such that R_σ A R_σ^t is an isomorphism. If the diagonal of A is the identity matrix I, this will be achieved by simply constructing the set σ such that ‖R_σ(A − I)R_σ^t‖ < 1/2.

The following statement is the main result we present.

Theorem 15.8. There is a numerical constant K with the following property: for every 0 < δ < K^{-1} and N ≥ K/δ, whenever A = (a_{ij}) is an N × N matrix with a_{ii} = 0 for all i, there exists a subset σ of {1, …, N} such that Card σ ≥ δN and

‖R_σ A R_σ^t‖ ≤ K √δ ‖A‖.

The following restricted invertibility statement is an immediate consequence of Theorem 15.8.

Corollary 15.9. There is a numerical constant K with the following property: for every 0 < ε ≤ 1 and N ≥ Kε^{-2}, whenever A is an N × N matrix with only 1's on the diagonal, there exists a subset σ of {1, …, N} of cardinality Card σ ≥ ε²N/K such that R_σ A R_σ^t restricted to R_σ ℓ₂^N is invertible and its inverse satisfies

‖(R_σ A R_σ^t)^{-1}‖ < 1 + ε.

The following proposition, whose proof is based on a random choice argument, is the main step in the proof of Theorem 15.8.

Proposition 15.10. Let 0 < δ < 1 and N ≥ 8/δ. Then, for all N × N matrices A = (a_{ij}) with a_{ii} = 0, there exists σ ⊂ {1, …, N} such that Card σ ≥ δN/2 and

‖R_σ A R_σ^t‖_{2→∞} ≤ 50 δ ‖A‖ √N

where ‖·‖_{2→∞} is the operator norm ℓ₂^M → ℓ∞^M (M = Card σ).

Before proving this proposition, let us show how to establish Theorem 15.8 from this result. We need the following second step, which clarifies the passage through the norm ‖·‖_{2→∞}.

Proposition 15.11. Consider u : ℓ₂^n → ℓ∞^m. There exists a subset σ of {1, …, m} with Card σ ≥ m/2 such that

‖R_σ u‖_{2→2} ≤ 2 m^{-1/2} ‖u‖_{2→∞}.

Proof. Consider u* : ℓ₁^m → ℓ₂^n; ‖u*‖_{1→2} = ‖u‖_{2→∞}. By the "little Grothendieck theorem" (cf. [Pi15]), if π₂(u*) is the 2-summing norm of u*,

π₂(u*) ≤ (π/2)^{1/2} ‖u*‖_{1→2}.

Therefore, by Pietsch's factorization theorem (cf. [Pie], [Pi15]), there exist (λ_i)_{i≤m}, λ_i ≥ 0, Σ_{i=1}^m λ_i = 1, such that for all x = (x₁, …, x_m) in ℓ₁^m,

|u*(x)| ≤ π₂(u*) ( Σ_{i=1}^m λ_i x_i² )^{1/2}.

Let σ = {i ≤ m; λ_i ≤ 2/m}. Then Card σ ≥ m/2 and, from the preceding,

‖R_σ u‖_{2→2} = ‖u* R_σ^t‖_{2→2} ≤ (2/m)^{1/2} π₂(u*) ≤ 2 m^{-1/2} ‖u‖_{2→∞},

from which Proposition 15.11 follows.

These two propositions clearly imply the theorem. If $N \ge 8/\delta$ and $A$ is an $N\times N$ matrix with $a_{ii} = 0$, there exists, by Proposition 15.10, $\sigma$ in $\{1,\dots,N\}$ such that $\mathrm{Card}\,\sigma \ge \delta N/2$ and
$$\|R_\sigma A R_\sigma^t\|_{2\to1} \le 50\,\delta\|A\|\sqrt{N}.$$
Apply then Proposition 15.11 to $u = R_\sigma A R_\sigma^t$. It follows that one can find $\sigma' \subset \sigma$ with $\mathrm{Card}\,\sigma' \ge \frac12\,\mathrm{Card}\,\sigma \ge \delta N/4$ and
$$\|R_{\sigma'} A R_{\sigma'}^t\|_{2\to2} \le \Big(\frac{\pi}{\mathrm{Card}\,\sigma}\Big)^{1/2}50\,\delta\|A\|\sqrt{N} \le 50(2\pi\delta)^{1/2}\|A\|.$$
Theorem 15.8 immediately follows.

Proof of Proposition 15.10. We use a decoupling argument in the following form. Let $0 < \delta < 1$ and let $(\delta_i)_{i\le N}$ be independent random variables with distribution
$$\mathbb{P}\{\delta_i = 1\} = 1 - \mathbb{P}\{\delta_i = 0\} = \delta.$$
Let $A(\omega)$ be the random operator $\mathbb{R}^N \to \mathbb{R}^N$ with matrix $(\delta_i(\omega)\delta_j(\omega)a_{ij})$. For $I \subset \{1,\dots,N\}$, let
$$B_I = \mathbb{E}\Big\|\sum_{i\notin I,\,j\in I}\delta_i\delta_j a_{ij}\,e_i\otimes e_j\Big\|_{2\to1}.$$
Since
$$\frac{1}{2^N}\sum_I\sum_{i\notin I,\,j\in I}a_{ij}\,e_i\otimes e_j = \frac14\sum_{i,j}a_{ij}\,e_i\otimes e_j$$
(recall that $a_{ii} = 0$), we have that
$$(15.10)\qquad \mathbb{E}\|A(\omega)\|_{2\to1} \le \frac{4}{2^N}\sum_I B_I.$$
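The random selection used in this proof is easy to simulate. The sketch below is ours (an illustration with arbitrary parameter values, not part of the original text): it draws the selectors $\delta_i$, forms $\sigma = \{i\,;\ \delta_i = 1\}$, and checks the bound of Proposition 15.10, computing $\|B\|_{2\to1}$ exactly through the identity $\|B\|_{2\to1} = \max_{s\in\{-1,1\}^m}|B^t s|_2$ (feasible because $\mathrm{Card}\,\sigma$ is small here).

```python
import itertools
import numpy as np

def norm_2_to_1(B):
    # ||B||_{2->1} = sup_{|h|_2 <= 1} ||B h||_1 = max over sign vectors s of |B^T s|_2
    m = B.shape[0]
    best = 0.0
    for signs in itertools.product([-1.0, 1.0], repeat=m):
        best = max(best, float(np.linalg.norm(B.T @ np.array(signs))))
    return best

rng = np.random.default_rng(0)
N, delta = 18, 0.25
A = rng.standard_normal((N, N))
np.fill_diagonal(A, 0.0)                      # a_ii = 0 as in the proposition
opnorm = np.linalg.norm(A, 2)                 # ||A||_{2->2}

best = np.inf
for _ in range(20):                           # the bound holds with probability > 1/2 per draw
    keep = rng.random(N) < delta              # independent selectors delta_i
    sigma = np.flatnonzero(keep)
    if sigma.size == 0:
        continue
    best = min(best, norm_2_to_1(A[np.ix_(sigma, sigma)]))

print(best, 50 * delta * opnorm * np.sqrt(N))
```

The enumeration over sign patterns is exponential in $\mathrm{Card}\,\sigma$, which is why a small $\delta N$ is used in this sketch.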

We therefore estimate $B_I$ for each fixed $I \subset \{1,\dots,N\}$. We have that
$$\Big\|\sum_{i\notin I,\,j\in I}\delta_i\delta_j a_{ij}\,e_i\otimes e_j\Big\|_{2\to1} = \sup_{|h|\le1}\sum_{i\notin I}\delta_i\Big|\sum_{j\in I}\delta_j h_j a_{ij}\Big|.$$
For every $i$ and $h$, let $d_i(h) = \sum_{j\in I}\delta_j h_j a_{ij}$. Therefore
$$B_I = \mathbb{E}\sup_{|h|\le1}\sum_{i\notin I}\delta_i|d_i(h)|.$$
Let $(\delta_i')$ be an independent copy of $(\delta_i)$. Working conditionally on $\delta_j$, $j \in I$, we get by centering and Jensen's inequality that
$$B_I \le \mathbb{E}\sup_{|h|\le1}\Big|\sum_{i\notin I}(\delta_i - \mathbb{E}\delta_i')|d_i(h)|\Big| + \delta\sup_{|h|\le1}\sum_{i\notin I}|d_i(h)| \le \mathbb{E}\sup_{|h|\le1}\Big|\sum_{i\notin I}(\delta_i - \delta_i')|d_i(h)|\Big| + \delta\sup_{|h|\le1}\sum_{i\notin I}|d_i(h)|.$$
By Cauchy-Schwarz,
$$\sup_{|h|\le1}\sum_{i\notin I}|d_i(h)| \le \|A\|\sqrt{N}.$$
On the other hand, by symmetry, if $(\varepsilon_i)$ is a Rademacher sequence independent of $(\delta_i)$,
$$\mathbb{E}\sup_{|h|\le1}\Big|\sum_{i\notin I}(\delta_i - \delta_i')|d_i(h)|\Big| \le 2\,\mathbb{E}\sup_{|h|\le1}\Big|\sum_{i\notin I}\varepsilon_i\delta_i|d_i(h)|\Big|.$$
By the comparison properties of Rademacher averages (Theorem 4.12), we have further that
$$\mathbb{E}\sup_{|h|\le1}\Big|\sum_{i\notin I}\varepsilon_i\delta_i|d_i(h)|\Big| \le 2\,\mathbb{E}\sup_{|h|\le1}\Big|\sum_{i\notin I}\varepsilon_i\delta_i d_i(h)\Big|.$$
Summarizing, we have obtained so far that
$$B_I \le 4\,\mathbb{E}\sup_{|h|\le1}\Big|\sum_{i\notin I}\varepsilon_i\delta_i d_i(h)\Big| + \delta\|A\|\sqrt{N}.$$
Going back to the definition of $d_i(h)$, we see that
$$\sup_{|h|\le1}\Big|\sum_{i\notin I}\varepsilon_i\delta_i d_i(h)\Big|^2 = \sum_{j\in I}\delta_j\Big(\sum_{i\notin I}\varepsilon_i\delta_i a_{ij}\Big)^2.$$
If we integrate this expression with respect to the Rademacher sequence and then with respect to the variables $\delta_i$, we obtain, since $\sum_i a_{ij}^2 \le \|A\|^2$ for every $j$,
$$\mathbb{E}\sup_{|h|\le1}\Big|\sum_{i\notin I}\varepsilon_i\delta_i d_i(h)\Big|^2 \le \delta^2\sum_{j\in I}\sum_{i\notin I}a_{ij}^2 \le \delta^2\|A\|^2 N.$$
We finally get that
$$B_I \le 4\big(\delta^2\|A\|^2N\big)^{1/2} + \delta\|A\|\sqrt{N} \le 5\,\delta\|A\|\sqrt{N}.$$
This estimate holds for any $I \subset \{1,\dots,N\}$. Hence, by (15.10),
$$\mathbb{E}\|A(\omega)\|_{2\to1} \le 20\,\delta\|A\|\sqrt{N}.$$
In particular,
$$\mathbb{P}\big\{\|A(\omega)\|_{2\to1} \le 50\,\delta\|A\|\sqrt{N}\big\} > \frac12.$$
Since clearly
$$\mathbb{P}\Big\{\sum_{i=1}^N\delta_i \ge \frac{\delta N}{2}\Big\} > \frac12$$
(recall that $N \ge 8/\delta$), there exists $\omega$ in the intersection of these two events. The set $\sigma = \{i \le N\,;\ \delta_i(\omega) = 1\}$ then fulfills the conclusion of Proposition 15.10. The proof is complete.

15.5 Embedding subspaces of $L_p$ into $\ell_p^N$

This section is devoted to the following problem of the local theory of Banach spaces. Given an $n$-dimensional subspace $E$ of $L_p = L_p([0,1],dt)$, $1 \le p < \infty$, and $\varepsilon > 0$, what is the smallest $N = N(E,\varepsilon)$ such that there is a subspace $F$ of $\ell_p^N$ with $d(E,F) \le 1+\varepsilon$? Here $d(E,F)$ is the Banach-Mazur distance between Banach spaces, defined as the infimum of $\|U\|\,\|U^{-1}\|$ over all isomorphisms $U : E \to F$ (more precisely, the logarithm of $d$ is a distance); $d(E,F) = 1$ if $E$ and $F$ are isometric. Partial results in case $E = \ell_2^n$ or $E = \ell_p^n$, $1 \le p < 2$, follow from the general study of Dvoretzky's theorem and the stable type in Chapter 9. The results we present here are taken from [B-L-M] and [Ta15]. They are at this point different when $p = 1$ or $p > 1$, so that we distinguish these cases in two separate statements.

In the first one, we need the concept of the $K$-convexity constant [Pi11], [Pi16]. If $E$ is a (finite dimensional) Banach space, denote by $K(E)$ the norm of the natural projection from $L_2(E) = L_2(\Omega,\mathcal{A},\mathbb{P};E)$ onto the span of the functions $\sum_i\varepsilon_ix_i$ where $(\varepsilon_i)$ is a Rademacher sequence on $(\Omega,\mathcal{A},\mathbb{P})$. Thus, if $f \in L_2(E)$,
$$\Big\|\sum_i\varepsilon_i\,\mathbb{E}(\varepsilon_if)\Big\|_{L_2(E)} \le K(E)\,\|f\|_{L_2(E)}.$$
It has been noticed in [TJ2] that the same inequality then holds when $(\varepsilon_i)$ is replaced by a standard Gaussian sequence $(g_i)$. We use this freely below. It is known further, and due to G. Pisier [Pi8], that $K(E) \le C(\log(n+1))^{1/2}$ if $E$ is a subspace of dimension $n$ of $L_1$ ($C$ numerical). We can now state the two main results.

Theorem 15.12. There are numerical constants $\varepsilon_0$ and $C$ such that for any $n$-dimensional subspace $E$ of $L_1$ and any $0 < \varepsilon < \varepsilon_0$, we have
$$N(E,\varepsilon) \le CK(E)^2\varepsilon^{-2}n.$$
In particular, $N(E,\varepsilon) \le C\varepsilon^{-2}n\log(n+1)$.

Theorem 15.13. Let $1 < p < \infty$. There are numerical constants $\varepsilon_0$ and $C$ such that for any $n$-dimensional subspace $E$ of $L_p$ and any $0 < \varepsilon < \varepsilon_0$, we have
$$N(E,\varepsilon) \le Cp\,\varepsilon^{-2}n^{p/2}(\log n)^2\log(\varepsilon^{-1}n) \quad\text{if } p \ge 2$$
and
$$N(E,\varepsilon) \le C\varepsilon^{-2}n(\log n)^2\max\Big(\frac{1}{p-1},\,\log(\varepsilon^{-1}n)\Big) \quad\text{if } 1 < p \le 2.$$

Common to the proofs of these theorems is a basic random choice argument.
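The $K$-convexity constant just defined is the norm of the Rademacher projection. In the scalar-valued (Hilbert space) case this projection is an orthogonal projection in $L_2$ of the discrete cube and thus has norm one; this can be checked by direct enumeration. The following small sketch is ours (illustration only; the dimension and the random function are arbitrary):

```python
import itertools
import numpy as np

d = 4
# all 2^d points eps of the discrete cube {-1,1}^d
cube = np.array(list(itertools.product([-1.0, 1.0], repeat=d)))
rng = np.random.default_rng(3)
f = rng.standard_normal(len(cube))        # a scalar function f(eps) on the cube

coeffs = cube.T @ f / len(cube)           # E(eps_i f): degree-1 Walsh coefficients
Rf = cube @ coeffs                        # Rademacher projection sum_i eps_i E(eps_i f)

def l2(g):                                # L_2 norm under the uniform measure on the cube
    return float(np.sqrt(np.mean(g ** 2)))

print(l2(Rf), l2(f))                      # ||Rf||_2 <= ||f||_2, i.e. K = 1 for scalars
```

For a genuinely infinite dimensional Banach space $E$ the same projection acts on $E$-valued $f$ and its norm $K(E)$ can grow like $(\log(n+1))^{1/2}$, as in the Pisier bound quoted above.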

We first develop it in general. We agree to denote below by $\|\cdot\|_p = \|\cdot\|_{L_p(\mu)}$, $1 \le p < \infty$, the norm of $L_p(S,\Sigma,\mu)$ where $\mu$ is a $\sigma$-finite measure on $(S,\Sigma)$ (hopefully with no confusion when $(S,\Sigma,\mu)$ varies).

We first observe that, given $\varepsilon > 0$, a finite dimensional subspace $E$ of $L_p = L_p([0,1],dt)$ is always at a distance (in the sense of Banach-Mazur) less than $1+\varepsilon$ from a subspace of $\ell_p^M$ for some (however large) $M$. This can be seen in a number of ways, e.g. using that $\|f - \mathbb{E}^kf\|_p \to 0$ where $\mathbb{E}^k$ denotes the conditional expectation with respect to the $k$-th dyadic subalgebra of $[0,1]$. From this observation, we concentrate on subspaces $E$ of dimension $n$ of $\ell_p^M$ for some fixed $M$.

To embed $E$ into $\ell_p^{M_1}$ where $M_1$ is of the order of $M/2$, we will, for each coordinate, flip a coin and disregard the coordinate if "head" comes up. More formally, consider a sequence $\varepsilon = (\varepsilon_i)$ of Rademacher random variables and consider the operator $U_\varepsilon : \ell_p^M \to \ell_p^M$ given by: if $x = (x_i)_{i\le M} \in \ell_p^M$, $U_\varepsilon(x) = ((1+\varepsilon_i)^{1/p}x_i)_{i\le M}$. We note that $U_\varepsilon(\ell_p^M)$ is isometric to $\ell_p^{M_1}$ where $M_1 = \mathrm{Card}\{i \le M\,;\ \varepsilon_i = 1\}$ and, with probability $\ge 1/2$, we have $M_1 \le M/2$. We will give conditions under which $U_\varepsilon$ is, with large probability, almost an isometry when restricted to $E$. For $x$ in $\ell_p^M$, consider the random variable
$$Z_x = \|U_\varepsilon(x)\|_p^p - \|x\|_p^p = \sum_{i=1}^M(1+\varepsilon_i)|x_i|^p - \sum_{i=1}^M|x_i|^p = \sum_{i=1}^M\varepsilon_i|x_i|^p.$$
Let $E_1$ denote the unit ball of $E$ and set
$$A_E = \sup_{x\in E_1}\Big|\sum_{i=1}^M\varepsilon_i|x_i|^p\Big|.$$
The restriction $R_\varepsilon$ of $U_\varepsilon$ to $E$ satisfies $\|R_\varepsilon\| \le (1+A_E)^{1/p}$ and $\|R_\varepsilon^{-1}\| \le (1-A_E)^{-1/p}$, so that, when $A_E \le 1/2$,
$$d\big(E,\,U_\varepsilon(E)\big) \le 1 + \frac{12\,A_E}{p}.$$
Since $\mathbb{P}\{A_E \le 3\,\mathbb{E}A_E\} \ge 2/3$, we have therefore obtained:

Proposition 15.14. Let $E$ be a subspace of $\ell_p^M$, $1 \le p < \infty$, of dimension $n$ and let $A_E$ be as before. If $\mathbb{E}A_E \le 1/6$, there exists $M_1 \le M/2$ and a subspace $H$ of $\ell_p^{M_1}$ such that
$$d(E,H) \le 1 + \frac{36\,\mathbb{E}A_E}{p}.$$

In order to apply successfully this result, one must therefore ensure that $\mathbb{E}A_E$ is small. The next lemma is one of the useful tools in this order of ideas. It is an immediate consequence of a result of D. Lewis [Lew].

Lemma 15.15. Let $E$ be an $n$-dimensional subspace of $\ell_p^M$, $1 \le p < \infty$. Then one can find a probability measure $\mu$ on $\{1,\dots,M\}$ and a subspace $F$ of $L_p(\mu)$ isometric to $E$ which admits a basis $(\varphi_j)_{j\le n}$, orthogonal in $L_2(\mu)$, such that $\sum_{j=1}^n\varphi_j^2 = n$ and $\|\varphi_j\|_2 = 1$ for all $j \le n$.

In this lemma, if we split each atom of $\mu$ of mass $a \ge 2/M$ in $[aM/2]+1$ equal pieces, we can assume that each atom of $\mu$ has mass $\le 2/M$ and that $\mu$ is now supported by $\{1,\dots,M'\}$ where $M'$ is an integer less than $3M/2$. Also we can assume that $\mu_i = \mu(\{i\}) > 0$ for all $i \le M'$. We always use Lemma 15.15 in this convenient form below.

Let $F$ be as before and $F_1$ be the unit ball of $F$ in $L_p(\mu)$. Our main task in the proof of both theorems will be to show that
$$(15.11)\qquad \mathcal{E} = \mathbb{E}\sup_{x\in F_1}\Big|\sum_{i=1}^{M'}\mu_i\varepsilon_i|x(i)|^p\Big|$$
can be nicely estimated (recall that $\mu_i = \mu(\{i\})$). If we denote by $G \subset \ell_p^{M'}$ the image of $F$ by the map
$$L_p(\mu) \to \ell_p^{M'},\qquad x = (x(i))_{i\le M'} \mapsto \big(\mu_i^{1/p}x(i)\big)_{i\le M'},$$
then $\mathbb{E}A_G = \mathcal{E}$ and $G$ is isometric to $F$. Applying Proposition 15.14 to $G$ in $\ell_p^{M'}$, we see that if $\mathcal{E} \le 1/6$, there exists a subspace $H$ of $\ell_p^{M_1}$ with $M_1 \le M'/2 \le 3M/4$ such that
$$(15.12)\qquad d(E,H) = d(F,H) = d(G,H) \le 1 + \frac{36\,\mathcal{E}}{p}.$$
This is the procedure we will use to establish Theorems 15.12 and 15.13. When $p = 1$, we estimate $\mathcal{E}$ by the $K$-convexity constant $K(E)$ of $E$ while, when $p > 1$, we use Dudley's entropy majorizing result (Theorem 11.17) to suitably bound $\mathcal{E}$.
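The random sparsification $U_\varepsilon$ is simple to experiment with. The sketch below is ours (an illustration, with arbitrary sizes); it specializes to $p = 2$, where $A_E$ is exactly computable as the spectral norm of $Q^t\,\mathrm{diag}(\varepsilon)\,Q$ when $E$ is the column span of a matrix $Q$ with orthonormal columns, and where the identity $\|U_\varepsilon x\|_2^2 - \|x\|_2^2 = Z_x$ can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 400, 5
# E = column span of Q: an n-dimensional subspace of l_2^M (the p = 2 case)
Q, _ = np.linalg.qr(rng.standard_normal((M, n)))     # orthonormal columns
eps = rng.choice([-1.0, 1.0], size=M)                # Rademacher signs

# For p = 2: Z_x = sum_i eps_i x_i^2, so
# A_E = sup_{|c| = 1} |c^T (Q^T diag(eps) Q) c| = spectral norm of Q^T diag(eps) Q
A_E = np.linalg.norm(Q.T @ (eps[:, None] * Q), 2)

x = Q @ rng.standard_normal(n)
Ux = np.sqrt(1.0 + eps) * x                          # U_eps scales coordinate i by (1+eps_i)^{1/2}
print(A_E, abs(np.dot(Ux, Ux) - np.dot(x, x)) / np.dot(x, x))
```

Since $\|Q\| = 1$ and $\|\mathrm{diag}(\varepsilon)\| = 1$, one always has $A_E \le 1$ here, and typically $A_E$ is of order $(n/M)^{1/2}$, in line with the estimates that follow.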

We turn to the proof of Theorem 15.12. The main point is the following proposition.

Proposition 15.16. Let $p = 1$ and let $\mathcal{E}$ be as obtained in (15.11). Then
$$\mathcal{E} \le CK(E)\Big(\frac{n}{M}\Big)^{1/2}$$
for some numerical constant $C$.

Proof. By an application of the comparison properties for Rademacher averages (Theorem 4.12) and comparison with Gaussian averages (4.8),
$$\mathcal{E} = \mathbb{E}\sup_{x\in F_1}\Big|\sum_{i=1}^{M'}\mu_i\varepsilon_i|x(i)|\Big| \le 2\,\mathbb{E}\sup_{x\in F_1}\Big|\sum_{i=1}^{M'}\mu_i\varepsilon_ix(i)\Big| \le C_0\,\mathbb{E}\sup_{x\in F_1}\Big|\sum_{i=1}^{M'}\mu_ig_ix(i)\Big|.$$
(We may of course also invoke the Gaussian comparison theorems; Theorem 4.12 is however simpler.) We denote by $\langle\cdot,\cdot\rangle$ the scalar product in $L_2(\mu)$. We first show that
$$(15.13)\qquad \mathbb{E}\sup_{x\in F_1}\Big\langle\sum_{j=1}^ng_j\varphi_j,\,x\Big\rangle \le n^{1/2}K(E).$$
There exists $f$ in $L_1(\Omega,\mathcal{A},\mathbb{P};F_1)$, of norm at most $1$, such that
$$\mathbb{E}\sup_{x\in F_1}\Big\langle\sum_{j=1}^ng_j\varphi_j,\,x\Big\rangle = \mathbb{E}\Big\langle\sum_{j=1}^ng_j\varphi_j,\,f\Big\rangle = \sum_{j=1}^n\big\langle\varphi_j,\,\mathbb{E}(g_jf)\big\rangle.$$
Setting $y_j = \mathbb{E}(g_jf)$, we have, by definition of $K(F) = K(E)$,
$$\Big\|\sum_{j=1}^ng_jy_j\Big\|_{L_2(F)} \le K(E).$$
Now
$$\Big\|\sum_{j=1}^ng_jy_j\Big\|_{L_2(F)}^2 = \mathbb{E}\int\Big|\sum_{j=1}^ng_jy_j(t)\Big|^2d\mu(t) = \int\Big(\sum_{j=1}^ny_j(t)^2\Big)d\mu(t).$$
Since $\sum_{j=1}^n\varphi_j^2 = n$, we can write by Cauchy-Schwarz that
$$\sum_{j=1}^n\langle\varphi_j,y_j\rangle = \int\Big(\sum_{j=1}^n\varphi_j(t)y_j(t)\Big)d\mu(t) \le n^{1/2}\bigg(\int\Big(\sum_{j=1}^ny_j(t)^2\Big)d\mu(t)\bigg)^{1/2} \le n^{1/2}K(E),$$
and (15.13) follows. Set $v_i = I_{\{i\}}$, so that $(\mu_i^{-1/2}v_i)_{i\le M'}$ is an orthonormal basis of $L_2(\mu)$ ($\mu_i = \mu(\{i\})$). Since $(\varphi_j)_{j\le n}$ is an orthonormal basis of $F$ in $L_2(\mu)$, the rotational invariance of the Gaussian distribution indicates that
$$(15.14)\qquad \mathbb{E}\sup_{x\in F_1}\Big\langle\sum_{i=1}^{M'}\mu_i^{-1/2}g_iv_i,\,x\Big\rangle = \mathbb{E}\sup_{x\in F_1}\Big\langle\sum_{j=1}^ng_j\varphi_j,\,x\Big\rangle \le n^{1/2}K(E).$$
Finally, since $\sum_i\mu_ig_ix(i) = \langle\sum_ig_iv_i,x\rangle$ and $\mu_i \le 2/M$ for all $i$, by the contraction principle,
$$\mathbb{E}\sup_{x\in F_1}\Big\langle\sum_{i=1}^{M'}g_iv_i,\,x\Big\rangle \le \max_{i\le M'}\mu_i^{1/2}\ \mathbb{E}\sup_{x\in F_1}\Big\langle\sum_{i=1}^{M'}\mu_i^{-1/2}g_iv_i,\,x\Big\rangle \le \Big(\frac{2n}{M}\Big)^{1/2}K(E).$$
Proposition 15.16 follows.

We establish Theorem 15.12 by a simple iteration of the preceding proposition. We show that, given $\varepsilon > 0$, there exists a subspace $H$ of $\ell_1^N$ such that $d(E,H) \le 1 + C\varepsilon$ and $N \le CK(E)^2\varepsilon^{-2}n$ where $C$ is numerical. Note that for any Banach spaces $X,Y$, $K(Y) \le d(X,Y)^2K(X)$.

As we have seen, given $\varepsilon > 0$, there exists $M$ and a subspace $H^0$ of $\ell_1^M$ with $d(E,H^0) \le 1+\varepsilon$. In particular, $K(H^0) \le (1+\varepsilon)^2K(E)$. Let $C_1$ be the constant of Proposition 15.16, set $C_2 = 288C_1$ and assume that $\varepsilon \le 10^{-2}$. We can assume further that $M \ge C_2^2K(E)^2\varepsilon^{-2}n$, otherwise we simply take $H = H^0$. By induction, we construct a sequence of integers $(M_j)_{j\ge0}$, $M_0 = M$, satisfying $M_{j+1} \le 3M_j/4$, and subspaces $H^j$ of $\ell_1^{M_j}$ such that, for all $j \ge 0$,
$$d(H^j,H^{j+1}) \le 1 + C_2K(E)\Big(\frac{n}{M_j}\Big)^{1/2},$$
and we stop at $j_0$, the first $j$ for which $M_{j_0} \le C_2^2K(E)^2\varepsilon^{-2}n$. Note that, for $j < j_0$, since $M_j \le (3/4)^{j-\ell}M_\ell$ and $C_2K(E)(n/M_j)^{1/2} < \varepsilon$,
$$(15.15)\qquad \prod_{\ell=0}^j\bigg(1 + C_2K(E)\Big(\frac{n}{M_\ell}\Big)^{1/2}\bigg) \le \prod_{\ell=0}^j\bigg(1 + C_2K(E)\Big(\frac34\Big)^{(j-\ell)/2}\Big(\frac{n}{M_j}\Big)^{1/2}\bigg) \le e^{10\varepsilon} \le 2$$
(since $\varepsilon \le 10^{-2}$). Indeed, suppose that $j < j_0$ and that $H^0,\dots,H^j$ have been constructed. It follows in particular from (15.15) that
$$K(H^j) \le d(E,H^j)^2K(E) \le (1+\varepsilon)^2e^{20\varepsilon}K(E) \le 8K(E),$$
and thus
$$C_1K(H^j)\Big(\frac{n}{M_j}\Big)^{1/2} \le 8C_1K(E)\Big(\frac{n}{M_j}\Big)^{1/2} \le \frac16,$$
so that, by Proposition 15.16 and (15.12), there exist $M_{j+1} \le 3M_j/4$ and a subspace $H^{j+1}$ of $\ell_1^{M_{j+1}}$ with
$$d(H^j,H^{j+1}) \le 1 + 36\,C_1K(H^j)\Big(\frac{n}{M_j}\Big)^{1/2} \le 1 + C_2K(E)\Big(\frac{n}{M_j}\Big)^{1/2}.$$
This proves the induction procedure. The result then follows: by (15.15),
$$d(E,H^{j_0}) \le (1+\varepsilon)\prod_{j=0}^{j_0-1}\bigg(1 + C_2K(E)\Big(\frac{n}{M_j}\Big)^{1/2}\bigg) \le (1+\varepsilon)e^{10\varepsilon},$$
while $M_{j_0} \le C_2^2K(E)^2\varepsilon^{-2}n$. The proof of Theorem 15.12 is therefore complete.

We now turn to the case $p > 1$ and the proof of Theorem 15.13.
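The bookkeeping in this iteration is elementary to check numerically. In the sketch below (ours; the parameter values are illustrative and we normalize the constant of Proposition 15.16 to $C_1 = 1$, so $C_2 = 288$), we run the recursion $M_{j+1} = \lceil 3M_j/4\rceil$ down to the stopping level and verify the product bound of (15.15):

```python
import math

eps, K, n = 1e-2, 3.0, 100        # illustrative values: eps <= 10^-2, K = K(E), n = dim E
C2 = 288                           # C2 = 288 * C1 with C1 = 1 (assumed normalization)
stop = C2 ** 2 * K ** 2 * n / eps ** 2   # stopping threshold for M_j

M = 10 ** 13                       # initial M, taken above the threshold
prod = 1.0
while M > stop:
    prod *= 1.0 + C2 * K * math.sqrt(n / M)   # distance factor d(H^j, H^{j+1})
    M = math.ceil(3 * M / 4)

print(prod, math.exp(10 * eps))    # accumulated distortion vs. the bound e^{10 eps}
```

Each factor is below $1+\varepsilon$ and the factors decrease geometrically going up the chain, which is why the accumulated distortion stays bounded by $e^{10\varepsilon}$ no matter how large the initial $M$ is.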

Proof of Theorem 15.13. We first need the following easy consequences of Lemma 15.15.

Lemma 15.17. Let $F$ be as given by Lemma 15.15. If $x \in F$, then
$$\max_{i\le M'}|x(i)| \le n^{\max(1/p,1/2)}\|x\|_p.$$
Further, for all $r \ge 1$,
$$(15.16)\qquad \mathbb{E}\Big\|\sum_{j=1}^ng_j\varphi_j\Big\|_r \le Cr^{1/2}n^{1/2}$$
where $C$ is numerical.

Proof. If $x = \sum_{j=1}^n\alpha_j\varphi_j$, then, for all $i \le M'$, by Cauchy-Schwarz,
$$|x(i)| \le \Big(\sum_{j=1}^n\alpha_j^2\Big)^{1/2}\Big(\sum_{j=1}^n\varphi_j(i)^2\Big)^{1/2} = n^{1/2}\|x\|_2.$$
This already gives the result for $p \ge 2$ since then $\|x\|_2 \le \|x\|_p$. When $p \le 2$,
$$\|x\|_2^2 \le \max_i|x(i)|^{2-p}\,\|x\|_p^p,$$
and thus $\|x\|_2 \le n^{(1/p)-(1/2)}\|x\|_p$, so that the first claim in Lemma 15.17 is established in the case $p \le 2$ as well. The second claim immediately follows from the fact that $\sum_{j=1}^n\varphi_j^2 = n$ (for each $t$, $\sum_jg_j\varphi_j(t)$ is Gaussian with variance $n$) and Jensen's inequality.

Recall that $\mu_i = \mu(\{i\}) > 0$ and $\mu_i \le 2/M$, $i \le M'$. Let $J = \{i \le M'\,;\ \mu_i > 1/M^2\}$. Then, by Lemma 15.17,
$$\mathbb{E}\sup_{x\in F_1}\Big|\sum_{i\notin J}\mu_i\varepsilon_i|x(i)|^p\Big| \le n^{\max(1,p/2)}\sum_{i\notin J}\mu_i \le \frac{3\,n^{\max(1,p/2)}}{2M}.$$
Hence, by the triangle inequality and the contraction principle (recall $\mu_i^{1/2} \le (2/M)^{1/2}$),
$$(15.17)\qquad \mathcal{E} \le \frac{3\,n^{\max(1,p/2)}}{2M} + \Big(\frac2M\Big)^{1/2}\mathbb{E}\sup_{x\in F_1}\Big|\sum_{i\in J}\mu_i^{1/2}\varepsilon_i|x(i)|^p\Big|.$$
Our aim is to study the process $\sum_{i\in J}\mu_i^{1/2}\varepsilon_i|x(i)|^p$, $x \in F_1$, with associated pseudo-metric
$$\delta(x,y) = \bigg(\sum_{i\in J}\mu_i\big(|x(i)|^p - |y(i)|^p\big)^2\bigg)^{1/2},$$
and use Dudley's entropy integral to bound it appropriately. We therefore need to evaluate various entropy numbers. $J$ being fixed as before, we set $\|x\|_J = \max_{i\in J}|x(i)|$ and agree to denote by $B_J$ the corresponding unit ball. In these notations, for $x = (x(i))_{i\le M'}$ in $L_p(\mu)$, let us first observe the following. If $x,y \in F$, $\|x\|_p,\|y\|_p \le 1$, then
$$(15.18)\qquad \delta(x,y) \le 2p\,n^{(p-2)/4}\|x-y\|_J \quad\text{if } p \ge 2$$
and
$$(15.19)\qquad \delta(x,y) \le 6\,\|x-y\|_J^{p/2} \quad\text{if } p \le 2.$$
For a proof, observe that if $a,b \ge 0$,
$$|a^p - b^p| \le p\,(a^{p-1} + b^{p-1})|a-b|.$$
Thus
$$\delta(x,y)^2 \le 2p^2\sum_{i\in J}\mu_i\max\big(|x(i)|^{2p-2},|y(i)|^{2p-2}\big)|x(i)-y(i)|^2.$$
If $p \ge 2$, using Lemma 15.17 and $\|x\|_p \le 1$, $|x(i)|^{2p-2} \le |x(i)|^p\big(\max_j|x(j)|\big)^{p-2} \le |x(i)|^p\,n^{(p-2)/2}$. Hence
$$\delta(x,y)^2 \le 2p^2\,n^{(p-2)/2}\,\|x-y\|_J^2\sum_{i\in J}\mu_i\big(|x(i)|^p + |y(i)|^p\big) \le 4p^2\,n^{(p-2)/2}\,\|x-y\|_J^2,$$
and (15.18) is satisfied. When $p \le 2$, since $|x(i)-y(i)|^2 \le \|x-y\|_J^p\,|x(i)-y(i)|^{2-p}$ on $J$, Hölder's inequality with the conjugate exponents $p/(2p-2)$ and $p/(2-p)$ shows that
$$\sum_{i\in J}\mu_i\max\big(|x(i)|^{2p-2},|y(i)|^{2p-2}\big)|x(i)-y(i)|^2 \le \|x-y\|_J^p\big(\|x\|_p^p + \|y\|_p^p\big)^{(2p-2)/p}\,\|x-y\|_p^{2-p} \le 4\,\|x-y\|_J^p,$$
so that $\delta(x,y)^2 \le 8p^2\,\|x-y\|_J^p \le 36\,\|x-y\|_J^p$, which is (15.19).

.

.

.

M0 .

.

X 1=2 .

C IE sup .

i gi x(i).

.

r0 . x2BF.

i=1 .

18. One the other hand. Sin e p 2 .r eBJ if r = log M 2 . BF. 0 M X i=1 i jx(i)jr 1 and thus jx(i)jr i 1 M 2 if i 2 J .p . if x 2 BF. Therefore. uBJ ) N (BF.2 . BF.2 .p BF. u N (BF.r for some r . xi x2BF. : As in (15. for some numeri al onstant C and all r > 1 and u > 0 (15:20) log N (BF. the rotational invarian e of Gaussian measures implies that the expe tation on the right of the pre eding inequality is equal to n X IE sup h gj n1=2 j . uBF.17 shows that.14). for this hoi e of r . the se ond assertion in Lemma 15. sin e (n1=2 j )jn is an orthonormal basis of F .2 .r ) C rn : u2 It is then easy to on lude the proof of Proposition 15.r ) e . Hen e.r0 j =1 n X = n1=2 IE gj j j =1 : r Therefore. BF.

un 1=2 BF. Z 2pn Z 1 1 = 2 ( p 2) = 4 (log N (BF.12 (but simpler) then yields the on lusion in ase p 2 .19.17). by Dudley's majoration theorem (Theorem 11. Æ. if n M . log M u p 1 where C is numeri al.18. by simple volume onsiderations.p ) 1 + u Together with Proposition 15.p BJ (Lemma 15. 1 C log N (BF. u))1=2 du [Cp2 np=2 (log n)2 log M ℄1=2 : Therefore.490 and the on lusion follows from (15. we get that Z 2pn Z 1 = 2 (log N (BF. for every 0 < u 2n1=p . Let 1 < p 2 and let F Lp () be as given by Lemma 15. u)) du 2pn 0 0 Then note that sin e n 1=2 BF. uBJ ))1=2 du : (log N (F1 . Proposition 15.p .p . E p=2 n Cp2 M (log n)2 log M 1=2 : An iteration on the basis of (15.13. It relies on the following analog of Proposition 15.17). uBJ )) du 1 p n : 2 n 1=2 n log 1 + du u 0 p Z 2 n du + (Cn log M )1=2 : u 1 0 It follows that for some numeri al onstant C .15.p . .p . 1 Z 0 (log N (F1 . Then.p . uBJ ) N (BF. We turn next to the ase 1 < p 2 .18).12) similar to the one performed in the proof of Theorem 15.18 that will basi ally be obtained by duality (after an interpolation argument).17) and by (15. p 2 n N (BF. Let us now show how to dedu e the portion p 2 of Theorem 15.20). By (15. Æ. uBJ ) p n max .

Proof. Let $q = p/(p-1)$ be the conjugate of $p$. Let $v \le 2n^{(2-p)/2p}$, let $h \ge q$ and define $0 < \theta \le 1$ by
$$\frac1q = \frac{1-\theta}{2} + \frac\theta h.$$
For $x,y$ in $B_{F,2}$, by Hölder's inequality, $\|x-y\|_q \le \|x-y\|_2^{1-\theta}\|x-y\|_h^\theta \le 2^{1-\theta}\|x-y\|_h^\theta$, so that if $\|x-y\|_h \le (v/2)^{1/\theta}$ then $\|x-y\|_q \le v$. Hence, by (15.20) and the properties of entropy numbers,
$$\log N(B_{F,2},\,vB_{F,q}) \le \log N\Big(B_{F,2},\,\Big(\frac v2\Big)^{1/\theta}B_{F,h}\Big) \le Chn\Big(\frac2v\Big)^{2/\theta}.$$
Here and below, $C$ denotes some numerical constant, not necessarily the same each time it appears. Let us then choose $h = h(n) = \max(q,\log n)$. Since $\frac1q - \frac12 = -\frac{2-p}{2p}$, we have $\frac2\theta = \frac{2p}{2-p}\Big(1 - \frac2h\Big)$, and it follows (recall $v \le 2n^{(2-p)/2p}$, so that $(v/2)^{4p/(2-p)h} \le n^{2/h} \le e^2$) that
$$\log N(B_{F,2},\,vB_{F,q}) \le Chn\,n^{2/h}\Big(\frac2v\Big)^{2p/(2-p)} \le Chn\,v^{-2p/(2-p)},$$
where the constant $C$ may be chosen independent of $p$. The proof of (3.16), of the equivalence of Sudakov and dual Sudakov inequalities, indicates that we have in the same way
$$(15.21)\qquad \log N(B_{F,p},\,vB_{F,2}) \le Chn\,v^{-2p/(2-p)}.$$
Now, for $u,v > 0$ and $r = 2\log M$ (so that $B_{F,r} \subset e\,B_J$ as before),
$$\log N(B_{F,p},\,uB_J) \le \log N(B_{F,p},\,vB_{F,2}) + \log N\Big(B_{F,2},\,\frac{u}{ev}B_{F,r}\Big).$$
Let us choose $v$ of the order of $u^{(2-p)/2}$ (which is legitimate since $u \le 2n^{1/p}$ implies $v \le 2n^{(2-p)/2p}$). By (15.21) applied to the first term on the right of the preceding inequality and (15.20) (for $r = 2\log M$) applied to the second, we get that, for some numerical constant $C$ (recall that $1 < p \le 2$),
$$\log N(B_{F,p},\,uB_J) \le \frac{Cn}{u^p}\,(h + \log M).$$
Since we may assume that $n \le M$, by definition of $h = h(n) = \max(q,\log n)$, the proof of Proposition 15.19 is complete.

The preceding proposition, together with Dudley's theorem (via (15.19) and (15.17)), ensures similarly that
$$\mathcal{E} \le \bigg(\frac{Cn(\log n)^2}{M}\max\Big(\frac{1}{p-1},\,\log M\Big)\bigg)^{1/2}.$$
An iteration on the basis of (15.12) concludes the proof in the same way. The proof of Theorem 15.13 is complete.

15.6 Majorizing measures on ellipsoids

Consider a sequence $(y_n)$ in a Hilbert space $H$ such that $|y_n| \le (\log(n+1))^{-1/2}$ for all $n$ and set $T = \{y_n\,;\ n \ge 1\}$. We know (cf. Section 12.3) that the canonical Gaussian process $X_t = \sum_ig_it_i$, $t = (t_i) \in T \subset H$, is almost surely bounded. The same holds of course if $X$ is indexed by $\mathrm{Conv}\,T$, and there is therefore a majorizing measure on $\mathrm{Conv}\,T$. However, no explicit construction is known. The difficulty is that this majorizing measure should depend on the relative position of the $y_n$'s, not just on their lengths. More generally, we can ask: given a set $A$ in $H$ with a majorizing measure, construct a majorizing measure on $\mathrm{Conv}\,A$. Of a similar nature: given sets $A_i$, $i \le n$, each with a majorizing measure, construct a majorizing measure on $\sum_{i=1}^nA_i$. As an example of construction, we construct in this section explicit majorizing measures on ellipsoids. The construction is based on the evaluation of the entropy integral for products of balls.

In $H = \ell_2$, let $(a_i)$ be a sequence of positive numbers with $\sum_ia_i^2 \le 1$, and consider the ellipsoid
$$\mathcal{E} = \bigg\{x \in H\,;\ \sum_i\frac{x_i^2}{a_i^2} \le 1\bigg\}.$$

Let $(g_i)$ be an orthogaussian sequence. Since the supremum over $\mathcal{E}$ of $\sum_ig_ix_i$ is $\big(\sum_ia_i^2g_i^2\big)^{1/2}$ by Cauchy-Schwarz, so that
$$\mathbb{E}\sup_{x\in\mathcal{E}}\Big|\sum_ig_ix_i\Big|^2 = \mathbb{E}\sum_ig_i^2a_i^2 = \sum_ia_i^2 \le 1,$$
we know by Theorem 12.8 that there is a majorizing measure (for the function $(\log(1/x))^{1/2}$) on $\mathcal{E}$ with respect to the $\ell_2$ metric (that can actually be made into a continuous majorizing measure). One may wonder for an explicit description of such a majorizing measure. This is the purpose of this section.

What will actually be explicit is the following. Let $I_k$, $k \ge 0$, be the disjoint sets of integers given by
$$I_k = \{i\,;\ 2^{-k-1} < a_i \le 2^{-k}\},$$
and consider the ellipsoid
$$\mathcal{E}' = \bigg\{x \in H\,;\ \sum_k2^{2k}\sum_{i\in I_k}x_i^2 \le 1\bigg\} \supset \mathcal{E}.$$
Note that, since $\sum_ia_i^2 \le 1$, $\sum_k2^{-2k}\mathrm{Card}\,I_k \le 4$. We will exhibit a probability measure $m$ on $\sqrt3\,\mathcal{E}'$ such that, for all $x$ in $\mathcal{E}$,
$$(15.22)\qquad \int_0^1\Big(\log\frac{1}{m(B(x,\varepsilon))}\Big)^{1/2}d\varepsilon \le C$$
where $C$ is numerical and $B(x,\varepsilon)$ is the (Hilbertian) ball of center $x$ and radius $\varepsilon > 0$. It can be shown further to be a continuous majorizing measure; we leave this to the interested reader. Since $\mathcal{E} \subset \mathcal{E}'$, we can then use Lemma 11.9 to obtain a majorizing measure on $\mathcal{E}$ (however less explicit).

Let $U$ be the set of all sequences of integers $n = (n_k)_{k\ge0}$ such that $n_k \ge -k$ for all $k$ and $\sum_k2^{-n_k} \le 3$. For such a sequence $n = (n_k)$, let $\Delta(n)$ be the product of balls
$$\Delta(n) = \bigg\{x \in H\,;\ \forall k,\ \sum_{i\in I_k}2^{2k}x_i^2 \le 2^{-n_k}\bigg\}.$$
The family $\{\Delta(n)\,;\ n \in U\}$ covers $\mathcal{E}'$. Indeed, given $x$ in $\mathcal{E}'$ and $k$ an integer, let $m_k$ be the integer such that
$$2^{-m_k-1} < 2^{2k}\sum_{i\in I_k}x_i^2 \le 2^{-m_k};$$
then $x \in \Delta(n)$ with $n = (n_k)$, $n_k = \min(k,m_k)$. Note conversely that $\Delta(n) \subset \sqrt3\,\mathcal{E}'$ for every $n$ in $U$.

The main step in this construction will be to show that the products of balls $\Delta(n)$ satisfy the entropy conditions uniformly over all $n$ in $U$, that is, for some constant $C$ and all $n$ in $U$,
$$(15.23)\qquad \sum_p2^{-p}\big(\log N(\Delta(n),\,2^{-p}B_2)\big)^{1/2} \le C$$
(where $B_2$ is the unit ball of $\ell_2$). Let therefore $n = (n_k)$ be a fixed element of $U$. Set, for all $k$, $\ell_k = n_k + 2k$, and consider thus the product of balls
$$\Delta = \Delta(n) = \bigg\{x \in H\,;\ \forall k,\ \sum_{i\in I_k}x_i^2 \le 2^{-\ell_k}\bigg\}.$$
Since $\sum_k2^{-2k}\mathrm{Card}\,I_k \le 4$ and $\sum_k2^{-n_k} \le 3$, by Cauchy-Schwarz,
$$(15.24)\qquad \sum_k\big(2^{-\ell_k}\mathrm{Card}\,I_k\big)^{1/2} \le 2\sqrt3.$$
In addition, by definition of $\ell_k$, we may assume that the sequence $(2^{\ell_k}\mathrm{Card}\,I_k)$ is increasing. This property will allow us to easily select the balls with small entropy in the product $\Delta$. For any integer $p$, let $k(p)$ be the smallest $k$ such that
$$\sum_{k\ge k(p)}2^{-\ell_k} \le 2^{-2p-2}.$$
Then
$$N(\Delta,\,2^{-p}B_2) \le N(\Delta_p,\,2^{-p-1}B_2)$$
where $\Delta_p$ is the product $\prod_{k<k(p)}B(k)$ of the finite dimensional Euclidean balls
$$B(k) = \Big\{(x_i)_{i\in I_k}\,;\ \sum_{i\in I_k}x_i^2 \le 2^{-\ell_k}\Big\}.$$
We agree further to denote in the same way by $B_2$ the unit ball of $\ell_2$ and of the finite dimensional space $\ell_2^N$ for the corresponding dimension (clear from the context). Let us now recall that, by volume considerations, for every $\varepsilon > 0$,
$$(15.25)\qquad N\big(B(k),\,\varepsilon B_2\big) \le \Big(1 + \frac{2\cdot2^{-\ell_k/2}}{\varepsilon}\Big)^{\mathrm{Card}\,I_k}.$$
It is easily seen further that if $(\varepsilon_k)_{k<k(p)}$ are positive numbers with $\sum_{k<k(p)}\varepsilon_k^2 \le 2^{-2p-2}$, then
$$(15.26)\qquad N\big(\Delta_p,\,2^{-p-1}B_2\big) \le \prod_{k<k(p)}N\big(B(k),\,\varepsilon_kB_2\big).$$
Let us choose in this inequality $\varepsilon_k^2 = 2^{-\ell_k}2^{-4(p-j)-4}$ where $j$ is such that $k(j-1) \le k < k(j)$. By definition of $k(j-1)$,
$$\sum_{k<k(p)}\varepsilon_k^2 \le \sum_{j\le p}2^{-4(p-j)-4}\sum_{k\ge k(j-1)}2^{-\ell_k} \le \sum_{j\le p}2^{-4(p-j)-4}\,2^{-2j} \le 2^{-2p-2}.$$
We are thus in a position to apply (15.26). Together with (15.25), it yields that
$$\log N(\Delta,\,2^{-p}B_2) \le \sum_{j\le p}\bigg(\sum_{k(j-1)\le k<k(j)}\mathrm{Card}\,I_k\bigg)\log\big(2^{2(p-j)+4}\big).$$
It follows that for some numerical constant $C$,
$$(15.27)\qquad \sum_p2^{-p}\big(\log N(\Delta,\,2^{-p}B_2)\big)^{1/2} \le C\sum_ju_j \quad\text{where}\quad u_j = 2^{-j}\bigg(\sum_{k(j-1)\le k<k(j)}\mathrm{Card}\,I_k\bigg)^{1/2}.$$
Since $(2^{\ell_k}\mathrm{Card}\,I_k)$ is increasing,
$$\sum_{k(j-1)\le k<k(j)}\mathrm{Card}\,I_k \le \big(2^{\ell_{k(j)-1}}\mathrm{Card}\,I_{k(j)-1}\big)\sum_{k\ge k(j-1)}2^{-\ell_k} \le 2^{-2j}\,2^{\ell_{k(j)-1}}\mathrm{Card}\,I_{k(j)-1},$$
so that $u_j \le 2^{-2j}\big(2^{\ell_{k(j)-1}}\mathrm{Card}\,I_{k(j)-1}\big)^{1/2}$. On the other hand, by definition of $k(j)$ and $k(j+1)$,
$$2^{-\ell_{k(j)-1}} + \sum_{k(j)-1<k<k(j+1)}2^{-\ell_k} > 2^{-2j-3},$$
and, using the monotonicity of $(2^{\ell_k}\mathrm{Card}\,I_k)$ once more, this implies that
$$u_j \le 2^4\sum_{k(j)-1\le k<k(j+1)}\big(2^{-\ell_k}\mathrm{Card}\,I_k\big)^{1/2}.$$
Summing over $j$ (each $k$ occurs in at most two of these ranges) and using (15.24), we get
$$(15.28)\qquad \sum_ju_j \le 2^6\sqrt3,$$
and the announced claim (15.23) follows from (15.27).

We now make use of this result to exhibit a concrete majorizing measure satisfying (15.22). For every $j \ge 0$, let $\varphi_j$ be the restriction map from $\mathbb{N}^{\mathbb{N}}$ onto $\mathbb{N}^{j+1}$, $\varphi_j((n_k)_k) = (n_k)_{k\le j}$. Set $U_j = \varphi_j(U)$ and note that $\mathrm{Card}\,U_j \le (j+1)!$.

n) . n)) and radius 2 j over (n) . 8i 2 Ik . X i2Ik 22k x2i 2 nk . ) 8k > j .nite dimensional) produ t of balls ( (n) = x 2 H . n)) be a family of points in (n) of ardinality N ((n) . xi = 0 : For every j and n in Uj . su h that the Eu lidean balls of enter in (x(j. 2 j B2 ) . Consider then m= X X j 0 n2Uj (2j+1 (j + 1)!N ((n). 2 j B2 )) 1 Æx(j. let then (x(j. 8k j .

For every j 0 . there exists n in U with x 2 (n) . if x is in E E 0 . Now. by onstru tion. one an . m is a measure on 3E 0 su h that. Sin e (n) 3E 0 for every n in U .497 p p where Æx is point mass at x . jmj 1 .

Then.nd x(j ) in ('j (n)) with jx x(j )j 2 j . by de.

by (15. a result due to S.1 . "B2 ) : Hen e.7. the announ ed result (15.e. Montgomery-Smith [MS1℄. 2 j )) (2j+1 (j + 1)!N (('j (n)).22) is learly established. Consider an operator T from C (S ) to a Bana h spa e B . 15.nition of m .23) (and E 0 B2 ). 2 j+1 )) m(B (x(j ). we develop a onsequen e of Sudakov's minoration for Radema her averages (Theorem 4.1 In this se tion. Let S be a ompa t metri spa e and C (S ) be the Bana h spa e of ontinuous fun tions on S equipped with the supnorm k k1 . This on ludes the expli it onstru tion of a majorizing measure on ellipsoids we wanted to present. for every " > 0 . Cotype of the anoni al inje tion `N1 ! L2.15) to the otype properties of the inje tion `N 1 ! L2. m(B (x. N (('j (n)). C2 (T ) is the smallest number C su h that X i kT (xi )k2 !1=2 X C IE "i xi 1 i for all . 2 j B2 )) 1 : Note that. J. i. "B2 ) N ((n). We denote by C2 (T ) the Radema her otype 2 onstant of T .

it is (2. we have that (15:29) X i kT (xi )k2 !1=2 C sup X s2S i jxi (s)j : This is expressed by saying that T is (2. Hen e. satis. if T : C (S ) ! B is of otype 2 . 1) -summing (i. In parti ular. [Pie℄).e.nite sequen es (xi ) in C (S ) . 1) -summing ( f.

we an de.es (15.29)) with a (2. We re all that for a measurable fun tion x on a probability spa e (S. ) . 1) -summing onstant less than C2 (T ) . .

1 = 1 Z 0 ((jxj > t))1=2 dt : .ne a quasi-norm by kxk2.

498 This quasi-norm de.

It is known (and easy to see.1 as de. [S-W℄) that k k2. for simpli ity.nes the Lorentz spa e L2. f. we will use below that kk2.1 () .1 is equivalent to a norm and.

1 () ). is (2. Indeed. by Cau hy-S hwartz. if (xi ) is a . for any . 1) -summing.1 () (or L1 () ! L2. The anoni al inje tion C (S ) ! L2.1 = ((jxj > t))1=2 dt 0 kxk1 Z 0 1 (jxj > t)dt = kxk1 Z jxjd : Hen e.ned above behaves like a norm. !2 Z kxk1 kxk22.

We nonetheless have the following rather remarkable result of [MS1℄. Theorem 15. there exists a probability measure on S su h that for all x in C (S ) .29)) fa tors through L2. The answer to this question turns out to be no [Ta10℄.e.e.1 . Pisier has shown onversely [Pi17℄ that an operator T : C (S ) ! B that is (2.℄ It is a natural question whether the anoni al inje tion C (S ) ! L2. i. where K is a numeri al onstant.1 () is also of otype 2 .nite sequen e in C (S ) . kT (x)k KC kxk2.1 X i Z kxi k1 jxi jd sup s X i !2 jxi (s)j : [G.20.1() . Consider a probability spa e (S. . ) where is assumed to be . satisfying (15. 1) -summing (i. X i kxi k22.

nite with N atoms.1() satis. Then the otype 2 onstant C2 (j ) of the anoni al inje tion j : L1 () ! L2.

20 the term log(1 + log N ) an be repla ed by (log(1 + log N ))1=2 . . We onje ture that in Theorem 15. When is uniform measure on N points. From the subsequent proof. this would be the ase if the onje ture on Radema her pro esses presented at the end of Chapter 12 were true.es C2 (j ) K log(1 + log N ) where K is numeri al. it an be shown that C2 (j ) K 1(log(1 + log N ))1=2 .

499 Proof. Consider a .20 relies on Sudakov's minoration for Radema her pro esses (Theorem 4.15) that an be reformulated as follows. P Lemma 15.21. Our proof of Theorem 15.

) where is .nite sequen e (xi ) in L1 (S. .

nite su h that IEk "i xi k1 i k 1 . Then. one an . for all k .

xi = x0i + yi + zi where X 0 IE "i xi 1 i X i and yi j j X 2 zi i 1 = sup = sup s 1 X X i i 1. Proof. jyi (s)j K1 zi (s)2 K1 2 k=2 for some numeri al onstant K1 . for all i . There is no loss of generality to assume that S is .nd a subalgebra 0 of that has at most 22 atoms and 0 -measurable fun tions x0i on S su h that.

the subset f(xi (s))iM . By Theorem 4.nite and that is the algebra of all its subsets. Thus. Assume (xi ) = (xi )iM . s 2 S g of IRM an be k overed by 22 translates of the ball K1 (B1 + 2 k=2 B2 ) where B1 = B1M and B2 = B2M are respe tively k the `1 and `2 unit balls of IRM .15. we an .

set then x0i (s) = xi (sA ) . The on lusion is then obvious. (xi (s) xi (sA ))iM 2 K1 (B1 + 2 k=2 B 2) : For s in A .nd a partition of S in at most 22 sets A su h that.1 !1=2 K1 : !1=2 : . 1) -summing already indi ates that i X i kyi k22.1 !1=2 X i kx0i k22. the fa t that j is (2.1 !1=2 + X i kzik22.1 !1=2 + X i kyi k22. there exists sA 2 S with 8s 2 A . As a onsequen e of this lemma. given A . note that by the triangle inequality X (15:30) Sin e i kxi k22.1 k P jyi jk1 K1 .

then X i kxi k22.500 The main obje tive will be to establish the following fa t.22. Lemma 15.1 !1=2 X K2 (log(N + 1))1=2 i 1=2 2 xi 1 for all . If has N atoms.

it follows together with Lemma 15. ) where K2 is numeri al. (15.1 + K1 + K1 K2 2 k=2 (log(N + 1))1=2 k where (x0i ) are fun tions whi h are measurable with respe t to a subalgebra 0 of with at most 22 P k+1 atoms and satisfy IEk "i x0i k1 1 .1 !1=2 + K3 for some numeri al K3 .31) reads as i X i kxi k22.1 !1=2 X i kx0 k2 !1=2 i 2. . (15:31) X i kxi k22.21 and (15. If has less than 22 atoms. P On e this has been established.1 !1=2 X i kx0i k22. for all k . Iterative use of this property shows that if (xi ) is a .30) that if IEk "i xi k1 i 1 .nite sequen es (xi ) of fun tions on (S.

22. We are thus left with the proof of Lemma 15.1 !1=2 K (k + 1) : This shows indeed Theorem 15. kxk2.22. for some numeri al onstant K . Proof of Lemma 15. if x is in L2.1() .1 2(1 + log(a 1=2 ))1=2 kxk 2: Indeed. and ea h atom of has mass a > 0 . then (15:32) kxk2. .20. First.nite sequen e in L1 (S.1 1 + 1+ Z a 1=2 1 1 Z 0 dt (t(jxj > t))1=2 p t 1=2 t(jxj > t)dt Z a 1=2 1 dt t !1=2 . It relies on two observations. assuming kxk2 = 1 . we have kxk1 a1=2 . i X i kxi k22. ) P k where has less than 22 atoms with IEk "i xi k 1 . then. Hen e.

The second observation is as follows. Let $\nu$ be the probability measure on $(S,\Sigma)$ that assigns mass $1/N$ to each atom of $\Sigma$, and let $\bar\nu = \frac12(\nu + \mu)$. Each atom of $\Sigma$ has mass at least $1/2N$ for $\bar\nu$, so that, by (15.32),

$$\|x\|_{L_{2,1}(\bar\nu)} \le 2\big(1 + \log\sqrt{2N}\big)^{1/2}\|x\|_{L_2(\bar\nu)}\,.$$

Since $\mu(|x|>t) \le 2\bar\nu(|x|>t)$, we have $\|x\|_{L_{2,1}(\mu)} \le \sqrt2\,\|x\|_{L_{2,1}(\bar\nu)}$. Now, if $\sum_i x_i(s)^2 \le 1$ for all $s$, then $\sum_i \|x_i\|^2_{L_2(\bar\nu)} \le 1$, from which the conclusion then clearly follows. The lemma is established.

15.8 Miscellaneous problems

In this last section, we present various problems on (or related to) the topics developed in this book. Some of them have already been explained in their context so that we only briefly recall them here.

Problem 1. Consider a Gaussian measure $\mu$ on a separable Banach space with unit ball $B_1$, and consider the function $F(t) = \mu(tB_1)$ on $\mathbb{R}_+$. One may wonder how regular $F$ is. It follows from (1.2) that $\log F$ is concave, thus it has right and left derivatives at every point. Is it actually differentiable? It is shown in [By] that these derivatives are equal, so that $F$ is indeed differentiable. This is a fascinating fact, since a priori one would think that the regularity of $F$ should be related to the speed at which $F(t)$ goes to zero when $t \to 0$. In all the examples we know, the function $F$ has an analytic extension in a sector $|\arg z| < \alpha$ (think for example of Wiener measure on $C[0,1]$).

Problem 2. Does inequality (1.1) (Chapter 1) of A. Ehrhard [Eh1],

$$\Phi^{-1}\big(\gamma_N(\lambda A + (1-\lambda)B)\big) \ge \lambda\,\Phi^{-1}(\gamma_N(A)) + (1-\lambda)\,\Phi^{-1}(\gamma_N(B))\,,$$

hold for all Borel sets $A$, $B$ of $\mathbb{R}^N$ (and not only convex ones)? (The proof of [Ta2] is erroneous.)

Problem 3. Consider a locally convex Hausdorff topological vector space $E$ and a Radon measure $\mu$ on $E$ equipped with its Borel $\sigma$-algebra that is Gaussian in the sense that the law of every continuous linear functional is Gaussian under $\mu$. It is known that there is a compact metrizable set $K$ with $\mu(K) > 0$. But does there exist a compact convex set $K$ for which $\mu(K) > 0$? An equivalent formulation is the following. Consider the canonical Gaussian measure $\gamma_N$ on $\mathbb{R}^N$ and a compact set $B \subset \mathbb{R}^N$ such that $\gamma_N(B) > 0$. Let $E$ be the linear space generated by $B$, i.e. $E = \bigcup_n B_n$ where

$$B_n = \underbrace{[-n,n]B + \cdots + [-n,n]B}_{n\ \text{times}}$$

(and $[-n,n]B = \{\lambda x\,;\ |\lambda| \le n\,,\ x \in B\}$, $D + D = \{x + y\,;\ x, y \in D\}$). Is it true that for some compact convex set $A$ with $\gamma_N(A) > 0$ we have $A \subset E$? It is not difficult to see that this problem is equivalent to the following. Does there exist $n$ with the following property: whenever $B \subset \mathbb{R}^N$ is such that $\gamma_N(B) \ge 1/2$, then $B_n$ contains a convex set $A$ for which $\gamma_N(A) \ge 1/2$? It does not seem to be known whether $n = 3$ works.

Problem 4. Following Corollary 8.8 (Chapter 8), are type 2 spaces the only spaces $B$ in which the conditions $\mathbb{E}(\|X\|^2/LL\|X\|) < \infty$ and $\mathbb{E}f(X) = 0$, $\mathbb{E}f^2(X) < \infty$ for all $f$ in $B'$ are also sufficient for $X$ to satisfy the bounded (say) LIL? It can be seen from Proposition 9.19 that a Banach space with this property is necessarily of type $2 - \varepsilon$ for all $\varepsilon > 0$ (see [Pi3]).

Problem 5. Theorem 8.10 completely describes the cluster set $C(S_n/a_n)$ of the LIL sequence in Banach space. Using in particular this result, Theorem 8.11 provides a complete picture of the limits in the LIL when $S_n/a_n \to 0$ in probability. One might wonder what these limits become when $(S_n/a_n)$ is only bounded in probability, that is when $X$ only satisfies the bounded LIL. What is in particular $\Lambda(X) = \limsup_{n\to\infty} \|S_n\|/a_n$? Example 8.12 suggests that this investigation could be difficult.

Problem 6. Try to characterize Banach spaces in which every random variable satisfying the CLT also satisfies the LIL. We have seen in Chapter 10, after Theorem 10.12, that cotype 2 spaces have this property, while conversely, if a Banach space $B$ satisfies this implication, $B$ is necessarily of cotype $2 + \varepsilon$ for every $\varepsilon > 0$ [Pi3]. Are cotype 2 spaces the only ones?

Problem 7. Theorem 10.10 indicates that in a Banach space $B$ satisfying the inequality $\mathrm{Ros}(p)$ for some $p > 2$, a random variable $X$ satisfies the CLT if and only if it is pregaussian and $\lim_{t\to\infty} t^2\,\mathbb{P}\{\|X\| > t\} = 0$. Is it true that, in a Banach space $B$, if these best possible necessary conditions for the CLT are also sufficient, then $B$ is of $\mathrm{Ros}(p)$ for some $p > 2$? This could be in analogy with the law of large numbers and the type of a Banach space (Corollary 9.18).

Problem 8. More generally on Chapter 10 (and Chapter 14), try to understand when an infinite dimensional random variable satisfies the CLT. This is of course related to one of the main questions of Probability in Banach spaces: how to achieve efficiently tightness of a sum of independent random variables in terms for example of the individual summands?

Problem 9. Try to characterize almost sure boundedness and continuity of Gaussian chaos processes of order $d \ge 2$. See Section 11.3 and Section 15.2 in this chapter for more details and some (very) partial results. As conjectured in Section 15.2, an analog of Sudakov's minoration would be a first step in this investigation. Recently, some positive results in this direction have been obtained by the second author [Ta16].

Problem 10. Almost sure boundedness and continuity of Gaussian processes are now understood via the tool of majorizing measures (Section 12.1). Recall the problem, described in Section 15.6 of this chapter, of the explicit construction of a majorizing measure on $\mathrm{Conv}\,T$ when there is one on $T$.

Problem 11. Try now to understand boundedness and continuity of $p$-stable processes when $1 \le p < 2$, since the necessary majorizing measure conditions of Section 12.2 are no more sufficient when $p < 2$. In particular, what are the additional conditions to investigate? From the series representation of stable processes, this question is closely related to Problem 8. The paper [Ta13] describes some of the difficulties in such an investigation.

Problem 12. Is it possible to characterize boundedness (and continuity) of Rademacher processes as conjectured in Section 12.3?

Problem 13. Is there a minimal Banach algebra $B$ with $A(G) \subsetneq B \subsetneq C(G)$ on which all Lipschitzian functions of order 1 operate? What is the contribution to this question of the algebra $\tilde B$ discussed at the end of Section 13.3? Concerning further this algebra $\tilde B$, try to describe it from an Harmonic Analysis point of view, as was done for $C_{\mathrm{a.s.}}$ by G. Pisier [Pi6].

Problem 14. In the random Fourier series notations of Chapter 13, is it true that

$$\mathbb{E}\sup_{t\in V}\Big\|\sum_\gamma \xi_\gamma x_\gamma \gamma(t)\Big\| \le K\Big[\,\mathbb{E}\Big\|\sum_\gamma \xi_\gamma x_\gamma\Big\| + \sup_{\|f\|\le1}\mathbb{E}\sup_{t\in V}\Big|\sum_\gamma \xi_\gamma f(x_\gamma)\gamma(t)\Big|\,\Big]$$

for every (finite) sequence $(x_\gamma)$ in a Banach space $B$ when $(\xi_\gamma)$ is either a Rademacher sequence $(\varepsilon_\gamma)$ or a standard $p$-stable sequence $(\theta_\gamma)$, $1 < p < 2$ (and also $p = 1$ but for moments less than 1)? The constant $K$ may depend on $V$ as in the Gaussian case (Corollary 13.15).

Notes and references

Theorem 15.2 originates in the work of V. D. Milman on almost Euclidean subspaces of a quotient (cf. [Mi2], [Mi-S], [Pi18]). V. D. Milman used indeed a (weaker) version of this result to establish the remarkable fact that if $B$ is a Banach space of dimension $N$, there is a subspace of a quotient of $B$ of dimension $[\eta(\varepsilon)N]$ which is $(1+\varepsilon)$-isomorphic to a Hilbert space. A. Pajor and N. Tomczak-Jaegermann [P-TJ] improved Milman's estimate and established Theorem 15.2 using the isoperimetric inequality on the sphere and Sudakov's minoration. The first proof presented here is the Gaussian version of their argument and is taken from [Pi18]. The second proof is due to Y. Gordon [Gor4], with quantitative improvements kindly communicated to us by the author.

Section 15.3 presents a different proof of the sharp inequality that J. Bourgain [Bour] uses in his deep investigation of $\Lambda(p)$-sets. Proposition 15.5 was shown to us by G. Pisier. Theorem 15.8 is taken from the work by J. Bourgain and L. Tzafriri [B-T1] (see also [B-T2] for more recent information). The simplification in the proof of Proposition 15.10 was noticed by J. Bourgain in the light of some arguments used in [Ta15].

Embedding subspaces of $L_p$ into $\ell_p^N$ was considered in special cases in [F-L-M], [J-S1], [Sch3], [Sch4], [Pi12] (among others). In particular, a breakthrough was made in [Sch4] by G. Schechtman, who used empirical distributions to obtain various early general results. Schechtman's method was refined and combined with deep facts from Banach space theory by J. Bourgain, J. Lindenstrauss and V. D. Milman [B-L-M]. In [Ta15], a simple random choice argument is introduced that simplifies the probabilistic part of the proofs of [B-L-M]; Theorem 15.12 was obtained in this way in [Ta15]. The entropy computations of the proof of Theorem 15.13 are taken from [B-L-M]. The crucial Lemma 15.15 is taken from the work by D. Lewis [Lew]. It is not known if the $K$-convexity constant $K(E)$ is necessary.

The proof of the existence of a majorizing measure on ellipsoids was the first step of the second author on the way to his general solution of the majorizing measure problem (Chapter 12). A refinement of this result (with a simplified proof) can be found in [Ta20], where it is shown to imply several very sharp discrepancy results. The results of Section 15.7 are due to S. J. Montgomery-Smith [MS1] (which contains further developments); the proofs are somewhat simpler than those of [MS1] and are taken from [MS-T].