The Convergence of The Sums of Independent Random Variables Under The Sub-Linear Expectations
Acta Mathematica Sinica, English Series
Mar., 2020, Vol. 36, No. 3, pp. 224–244
Published online: February 15, 2020
https://doi.org/10.1007/s10114-020-8508-0
http://www.ActaMath.com
© Springer-Verlag GmbH Germany & The Editorial Office of AMS 2020
Li Xin ZHANG
School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, P. R. China
E-mail : stazlx@zju.edu.cn
the law of the iterated logarithm (Chen [2], Zhang [14]). For the convergence of the infinite series Σ_{n=1}^∞ X_n, Xu and Zhang [12] recently gave sufficient conditions for the almost sure convergence of independent random variables under the sub-linear expectation via a three-series theorem. In this paper, we will consider the necessity of these conditions and the equivalence of almost sure convergence, convergence in capacity and convergence in distribution.
In the classical probability space, the Lévy maximal inequalities are basic to the study of the almost sure behavior of sums of independent random variables, and a key to showing that the convergence in probability of Σ_{n=1}^∞ X_n implies its almost sure convergence. We will establish Lévy-type inequalities under the sub-linear expectation. For showing that the convergence in distribution of Σ_{n=1}^∞ X_n implies its convergence in probability, the characteristic function is a basic tool. But, under the sub-linear expectation, there is no such tool. We will find a new way to show a similar implication under the sub-linear expectations based on a Kolmogorov-type maximal inequality.
As for the central limit theorem, it is well known that finite variance and zero mean are sufficient and necessary for Σ_{k=1}^n X_k/√n to converge in distribution to a normal random variable if {X_n; n ≥ 1} is a sequence of independent and identically distributed random variables on a classical probability space (Ω, F, P). Under the sub-linear expectation, Peng [8, 9] proved the central limit theorem under a finite (2+α)-th moment. By applying a moment inequality and the truncation method, Zhang [14] and Lin and Zhang [6] showed that the moment condition can be weakened to a finite second moment. A natural question is whether the finite second moment is necessary. In this paper, by applying the maximal inequalities, we will obtain sufficient and necessary conditions for the central limit theorem.
In the remainder of the section, we state some notations. In the next section, we will
establish the maximal inequalities for random variables under the sub-linear expectation. The
results on the convergence of the infinite series of random variables will be given in Section 3.
The sufficient and necessary conditions for the central limit theorem are given in Section 4.
We use the framework and notations of Peng [8]. Let (Ω, F) be a given measurable space and let H be a linear space of real functions defined on (Ω, F) such that if X_1, . . . , X_n ∈ H then φ(X_1, . . . , X_n) ∈ H for each φ ∈ C_{l,Lip}(R^n), where C_{l,Lip}(R^n) denotes the linear space of local Lipschitz functions φ satisfying
|φ(x) − φ(y)| ≤ C(1 + |x|^m + |y|^m)|x − y|, ∀x, y ∈ R^n,
for some C > 0 and m ∈ N depending on φ.
Definition 1.1 A sub-linear expectation E on H is a function E : H → R̄ satisfying, for all X, Y ∈ H,
(a) Monotonicity: if X ≥ Y then E[X] ≥ E[Y];
(b) Constant preserving: E[c] = c;
(c) Sub-additivity: E[X + Y] ≤ E[X] + E[Y] whenever E[X] + E[Y] is not of the form +∞ − ∞ or −∞ + ∞;
(d) Positive homogeneity: E[λX] = λE[X], λ ≥ 0.
Here R̄ = [−∞, ∞]. The triple (Ω, H, E) is called a sub-linear expectation space. Given a sub-linear expectation E, let us denote the conjugate expectation Ê of E by
Ê[X] := −E[−X], ∀X ∈ H.
From the definition, it is easily shown that Ê[X] ≤ E[X], E[X + c] = E[X] + c and E[X − Y] ≥ E[X] − E[Y] for all X, Y ∈ H with E[Y] being finite. Further, if E[|X|] is finite, then E[X] and Ê[X] are both finite.
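The canonical way to generate such a sub-linear expectation (cf. Peng [8]) is as an upper expectation over a family of linear expectations. The following numerical sketch is ours, not the paper's: the three-point sample space and the probability weights are invented purely for illustration.

```python
# Illustrative sketch: a sub-linear expectation generated as the upper
# expectation E[X] = sup_P E_P[X] over a finite family of probabilities.
# The sample space and the weights below are invented for illustration only.
Omega = [0, 1, 2]
family = [
    [0.2, 0.3, 0.5],
    [0.5, 0.3, 0.2],
    [1.0 / 3, 1.0 / 3, 1.0 / 3],
]

def E(X):
    """Upper expectation: the supremum of the linear expectations E_P[X]."""
    return max(sum(p * X(w) for p, w in zip(P, Omega)) for P in family)

def E_conj(X):
    """Conjugate expectation Ê[X] := -E[-X]."""
    return -E(lambda w: -X(w))

X = lambda w: w * w - 1.0
Y = lambda w: 2.0 - w

# Properties from the text: Ê[X] <= E[X]; E[X + c] = E[X] + c;
# sub-additivity E[X + Y] <= E[X] + E[Y]; positive homogeneity.
assert E_conj(X) <= E(X)
assert abs(E(lambda w: X(w) + 2.5) - (E(X) + 2.5)) < 1e-12
assert E(lambda w: X(w) + Y(w)) <= E(X) + E(Y) + 1e-12
assert abs(E(lambda w: 3.0 * X(w)) - 3.0 * E(X)) < 1e-12
```

The checks mirror properties (b)–(d) of Definition 1.1 and the inequality Ê[X] ≤ E[X] noted above.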
Definition 1.2 (i) Let X_1 and X_2 be two n-dimensional random vectors defined respectively in sub-linear expectation spaces (Ω_1, H_1, E_1) and (Ω_2, H_2, E_2). They are called identically distributed, denoted by X_1 =^d X_2, if
E_1[φ(X_1)] = E_2[φ(X_2)], ∀φ ∈ C_{b,Lip}(R^n),
where C_{b,Lip}(R^n) is the space of bounded Lipschitz functions.
(ii) In a sub-linear expectation space (Ω, H, E), a random vector Y = (Y_1, . . . , Y_n), Y_i ∈ H, is said to be independent of another random vector X = (X_1, . . . , X_m), X_i ∈ H, under E if
E[φ(X, Y)] = E[E[φ(x, Y)]|_{x=X}], ∀φ ∈ C_{b,Lip}(R^m × R^n).
Random variables {X_n; n ≥ 1} are said to be independent if X_{i+1} is independent of (X_1, . . . , X_i) for each i ≥ 1.
In Peng [8–10], the space of test functions φ is C_{l,Lip}(R^n). Here, the test function φ in the definition is limited to the space of bounded Lipschitz functions. When the considered random variables have finite moments of each order, i.e., E[|X|^p] < ∞ for each p > 0, the space of test functions C_{b,Lip}(R^n) can be equivalently extended to C_{l,Lip}(R^n).
A function V : F → [0, 1] is called a capacity if V(∅) = 0, V(Ω) = 1 and V(A ∪ B) ≤ V(A) + V(B) for all A, B ∈ F. Let (Ω, H, E) be a sub-linear expectation space. We denote a pair (V, v) of capacities by
V(A) := inf{E[ξ] : I_A ≤ ξ, ξ ∈ H}, v(A) := 1 − V(A^c), ∀A ∈ F,
where A^c is the complement set of A. Then
E[f] ≤ V(A) ≤ E[g], Ê[f] ≤ v(A) ≤ Ê[g], if f ≤ I_A ≤ g, f, g ∈ H.
It is obvious that V is sub-additive, i.e., V(A ∪ B) ≤ V(A) + V(B). Further, if X is not in H, we define E[X] by
E[X] := inf{E[Y] : X ≤ Y, Y ∈ H}.
Then V(A) = E[I_A].
Definition 1.3 (I) A function V : F → [0, 1] is said to be countably sub-additive if
V(⋃_{n=1}^∞ A_n) ≤ Σ_{n=1}^∞ V(A_n), ∀A_n ∈ F.
2 Maximal Inequalities
In this section, we establish several inequalities for maximal sums. The first one is the Lévy maximal inequality.
Lemma 2.1 Let X_1, . . . , X_n be independent random variables in a sub-linear expectation space (Ω, H, E), S_k = Σ_{i=1}^k X_i, and let 0 < α < 1 be a real number. If there exist real constants β_{n,k} such that
V(S_k − S_n ≥ β_{n,k} + ε) ≤ α for all ε > 0 and k = 1, . . . , n,
then
(1 − α)V(max_{k≤n}(S_k − β_{n,k}) > x + ε) ≤ V(S_n > x) for all x > 0, ε > 0. (2.1)
If there exist real constants β_{n,k} such that
V(|S_k − S_n| ≥ β_{n,k} + ε) ≤ α for all ε > 0 and k = 1, . . . , n,
then
(1 − α)V(max_{k≤n}(|S_k| − β_{n,k}) > x + ε) ≤ V(|S_n| > x) for all x > 0, ε > 0. (2.2)
Proof We only give the proof of (2.1) since the proof of (2.2) is similar. Let g_ε(x) be a function with
g_ε ∈ C_b^1(R) and I{x ≥ ε} ≤ g_ε(x) ≤ I{x ≥ ε/2} for all x, (2.3)
where 0 < ε < 1/2. Denote Z_k = g_ε(S_k − β_{n,k} − x), Z_0 = 0, η_0 = 1 and η_k = Π_{i=1}^k (1 − Z_i). Then S_n − S_m is independent of (Z_1, . . . , Z_m), and
(1 − α)I{max_{k≤n}(S_k − β_{n,k}) > x + ε}
= (1 − α)[1 − Π_{k=1}^n I{S_k − β_{n,k} − x ≤ ε}]
≤ (1 − α)[1 − η_n] = (1 − α) Σ_{m=1}^n η_{m−1} Z_m
= Σ_{m=1}^n η_{m−1} Z_m I{S_m − S_n < β_{n,m} + ε/2}
  + Σ_{m=1}^n η_{m−1} Z_m [1 − α − I{S_m − S_n < β_{n,m} + ε/2}]
≤ Σ_{m=1}^n η_{m−1} Z_m I{S_n > x} + Σ_{m=1}^n η_{m−1} Z_m [−α + I{S_m − S_n ≥ β_{n,m} + ε/2}]
≤ I{S_n > x} + Σ_{m=1}^n η_{m−1} Z_m [−α + g_{ε/2}(S_m − S_n − β_{n,m})],
where the second inequality above is due to the fact that on the event {Z_m ≠ 0} ∩ {S_m − S_n < β_{n,m} + ε/2} we have S_n ≥ S_m − (S_m − S_n) > x. Note
E[g_{ε/2}(S_m − S_n − β_{n,m})] ≤ V(S_m − S_n ≥ β_{n,m} + ε/4) ≤ α.
By the independence,
E[η_{m−1} Z_m [−α + g_{ε/2}(S_m − S_n − β_{n,m})]]
= E[η_{m−1} Z_m {−α + E[g_{ε/2}(S_m − S_n − β_{n,m})]}] ≤ 0.
By the sub-additivity of E, it follows that
(1 − α)V(max_{k≤n}(S_k − β_{n,k}) > x + ε)
≤ V(S_n > x) + Σ_{m=1}^n E[η_{m−1} Z_m [−α + g_{ε/2}(S_m − S_n − β_{n,m})]]
≤ V(S_n > x).
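For orientation, in the classical case where V is a probability measure and the X_i are symmetric, the hypothesis of (2.1) holds with β_{n,k} = 0 and α = 1/2 (P(S_k − S_n ≥ ε) ≤ 1/2 by symmetry), and (2.1) then yields the familiar one-sided Lévy bound P(max_{k≤n} S_k > x) ≤ 2P(S_n > x). A small exact-enumeration check of this classical consequence for fair ±1 signs — our own toy setup, not part of the paper:

```python
from itertools import product

n = 8
total = 2 ** n

def prob(event):
    """Exact probability under independent fair ±1 signs (full enumeration)."""
    return sum(1 for s in product([-1, 1], repeat=n) if event(s)) / total

def running_max(s):
    """max_{k<=n} S_k for the sign sequence s."""
    best, acc = -n, 0
    for step in s:
        acc += step
        best = max(best, acc)
    return best

# Classical one-sided Lévy inequality: P(max_k S_k > x) <= 2 P(S_n > x).
for x in [0, 1, 2, 3]:
    lhs = prob(lambda s, x=x: running_max(s) > x)
    rhs = 2 * prob(lambda s, x=x: sum(s) > x)
    assert lhs <= rhs + 1e-12
```

Because the probabilities are computed by exhaustive enumeration, the check is exact rather than a Monte Carlo approximation.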
Write B_n² = Σ_{k=1}^n E[X_k²]. It follows that
1 − (x + ε + c)²/B_n² + (2/B_n²) Σ_{k=1}^n (X_k² − E[X_k²])η_{k−1}
≤ 1 − η_n + (2/B_n²) Σ_{k=1}^n [X_k S_{k−1}⁻ η_{k−1} − X_k S_{k−1}⁺ η_{k−1}].
Note
E[X_k S_{k−1}⁻ η_{k−1}] = E[X_k] E[S_{k−1}⁻ η_{k−1}] ≤ (x + ε)(E[X_k])⁺,
E[−X_k S_{k−1}⁺ η_{k−1}] = E[−X_k] E[S_{k−1}⁺ η_{k−1}] ≤ (x + ε)(E[−X_k])⁺
and
E[Σ_{k=1}^n (X_k² − E[X_k²])η_{k−1}]
= E[E[Σ_{k=1}^n (X_k² − E[X_k²])η_{k−1} | X_1, . . . , X_{n−1}]]
= E[Σ_{k=1}^{n−1} (X_k² − E[X_k²])η_{k−1} + η_{n−1} E[X_n² − E[X_n²]]]
= E[Σ_{k=1}^{n−1} (X_k² − E[X_k²])η_{k−1}]
= · · · = 0, (2.6)
where the second equality is due to the facts that η_{n−1} and Σ_{k=1}^{n−1} (X_k² − E[X_k²])η_{k−1} are functions of X_1, . . . , X_{n−1}, and X_n is independent of (X_1, . . . , X_{n−1}) (Definition 1.2 (ii)).
It follows that
1 − (x + ε + c)²/B_n² − 2(x + ε) Σ_{k=1}^n {(E[X_k])⁺ + (E[−X_k])⁺}/B_n² ≤ E[1 − η_n]
≤ V(max_{k≤n} |S_k| > x).
≤ S_n η_n + (x + ε + c)[1 − η_n]
≤ (x + ε)η_n + (x + ε + c)[1 − η_n]
≤ x + ε + c.
Write e_n = Σ_{k=1}^n E[X_k]. It follows that
1 − (x + ε + c)/e_n + (1/e_n) Σ_{k=1}^n (X_k − E[X_k])η_{k−1} ≤ 1 − η_n.
Note
E[Σ_{k=1}^n (X_k − E[X_k])η_{k−1}] = E[Σ_{k=1}^{n−1} (X_k − E[X_k])η_{k−1}] = · · · = 0,
then
V(ω : lim_{n→∞} S_n(ω) ≠ S(ω)) = 0. (3.2)
When (3.2) holds, we say that Σ_{n=1}^∞ X_n converges almost surely in capacity, and when (3.1) holds, we say that Σ_{n=1}^∞ X_n converges in capacity.
(ii) If V is continuous, then (3.2) implies (3.1).
The second theorem gives the equivalence between convergence in capacity and convergence in distribution.
Theorem 3.2 Let {X_n; n ≥ 1} be a sequence of independent random variables in a sub-linear expectation space (Ω, H, E), S_n = Σ_{k=1}^n X_k.
(i) If there is a random variable S in the measurable space (Ω, F) such that (3.1) holds, then
E[φ(S_n)] → E[φ(S)], φ ∈ C_b(R). (3.4)
When (3.4) holds, we say that Σ_{n=1}^∞ X_n converges in distribution.
(ii) Suppose that there is a sub-linear expectation space (Ω̃, H̃, Ẽ) and a random variable S̃ on it such that S̃ is tight under Ẽ, i.e., Ṽ(|S̃| > x) → 0 as x → ∞, and
E[φ(S_n)] → Ẽ[φ(S̃)], φ ∈ C_b(R), (3.5)
Conversely, if Sn is a Cauchy sequence in capacity V, then (S1), (S2) and (S3) will hold for all
c > 0.
From Theorem 3.3, we have the following corollary, which is a three-series theorem on the sufficient and necessary conditions for the almost sure convergence of Σ_{n=1}^∞ X_n.
Corollary 3.4 Let {X_n; n ≥ 1} be a sequence of independent random variables in (Ω, H, E). Suppose that V is countably sub-additive. Then Σ_{n=1}^∞ X_n will converge almost surely in capacity if the three conditions (S1), (S2) and (S3) in Theorem 3.3 hold for some c > 0. Conversely, if V is continuous and Σ_{n=1}^∞ X_n converges almost surely in capacity, then (S1), (S2) and (S3) will hold for all c > 0.
The sufficiency of (S1), (S2) and (S3) is proved by Xu and Zhang [12], and also follows from Theorem 3.3 and the second part of the conclusion of Theorem 3.2 (ii). The necessity follows from Theorem 3.3 and Theorem 3.1 (ii).
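When the sub-linear expectation is an ordinary (linear) expectation, Corollary 3.4 reduces to Kolmogorov's classical three-series theorem. As a toy illustration (entirely our own numbers, not from the paper), take classical X_n = ±1/n with fair signs and truncation level c = 1: (S1) every term V(|X_n| ≥ c) vanishes, (S2) the means of the truncated variables are 0 by symmetry, and (S3) the truncated variances sum to Σ 1/n² < ∞, so Σ X_n converges almost surely.

```python
import math

c = 1.0          # truncation level; X_n^c = X_n since |X_n| = 1/n <= 1
N = 10000        # partial-sum horizon for the numerical check

# (S1): sum of V(|X_n| >= c) -- every term is 0 because |X_n| = 1/n <= c.
s1 = sum(1.0 for n in range(1, N + 1) if 1.0 / n > c)
# (S2): sums of E[X_n^c] and E[-X_n^c] -- both 0 by the symmetry of the signs.
s2 = 0.0
# (S3): sum of E[(X_n^c - E[X_n^c])^2] = sum of 1/n^2, a convergent series.
s3 = sum(1.0 / n ** 2 for n in range(1, N + 1))

assert s1 == 0.0 and s2 == 0.0
assert s3 < math.pi ** 2 / 6   # partial sums stay below the full sum π²/6
```

All three series are finite, which is exactly what the sufficiency half of the corollary requires in the classical special case.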
To prove Theorems 3.1 and 3.2, we need some more lemmas. The first lemma is a version of Theorem 9 of Peng [10].
Lemma 3.5 Let {Y_n; n ≥ 1} be a sequence of d-dimensional random variables in a sub-linear expectation space (Ω, H, E). Suppose that {Y_n} is asymptotically tight, i.e.,
lim sup_{n→∞} E[I_{{‖Y_n‖ ≤ x}^c}] = lim sup_{n→∞} V(‖Y_n‖ > x) → 0 as x → ∞.
Then for any subsequence {Y_{n_k}} of {Y_n}, there exist a further subsequence {Y_{n_k′}} of {Y_{n_k}} and a sub-linear expectation space (Ω̃, H̃, Ẽ) with a d-dimensional random variable Ỹ on it such that
E[φ(Y_{n_k′})] → Ẽ[φ(Ỹ)] for any φ ∈ C_b(R^d).
Then Ẽ is a sub-linear expectation on the function space C_b(R^d) and is tight in the sense that for any ε > 0, there is a compact set K = {x : ‖x‖ ≤ M} for which Ẽ[I_{K^c}] < ε. With the same argument as in the proof of Theorem 9 of Peng [10], there is a countable subset {φ_j} of C_b(R^d) such that for each φ ∈ C_b(R^d) and any ε > 0 one can find a φ_j satisfying (3.7).
On the other hand, for each φ_j, the sequence E[φ_j(Y_n)] is bounded and so has a Cauchy subsequence. Note that the set {φ_j} is countable. By the diagonal choice method, one can find a sequence {n_k} ⊂ {n} such that E[φ_j(Y_{n_k})] is a Cauchy sequence for each φ_j. Now, we show that E[φ(Y_{n_k})] is a Cauchy sequence for any φ ∈ C_b(R^d). For any ε > 0, choose a φ_j such that (3.7) holds. Then
|E[φ(Y_{n_k})] − E[φ(Y_{n_l})]| ≤ |E[φ_j(Y_{n_k})] − E[φ_j(Y_{n_l})]|
  + E[|φ(Y_{n_k}) − φ_j(Y_{n_k})|] + E[|φ(Y_{n_l}) − φ_j(Y_{n_l})|].
Hence E[φ(Y_{n_k})] is a Cauchy sequence for any φ ∈ C_b(R^d), and then
lim_{k→∞} E[φ(Y_{n_k})] exists and is finite for any φ ∈ C_b(R^d). (3.8)
Then (Ω̃, H̃, Ẽ) is a sub-linear expectation space. Define the random variable Ỹ by Ỹ(x) = x, x ∈ Ω̃. From (3.8) it follows that
lim_{k→∞} E[φ(Y_{n_k})] = Ẽ[φ(Ỹ)] for any φ ∈ C_b(R^d).
max_{k≤n} V(|S_k| > x_0) ≤ max_{k≤n} V(|X_1 + S_k| > x_0/2) + V(|X_1| > x_0/2)
≤ max_{k≤n} E[g_{1/2}(|X_1 + S_k|/x_0)] + E[g_{1/2}(|X_1|/x_0)]
= 2E[g_{1/2}(|X|/x_0)]
≤ 2V(|X| ≥ x_0/4)
< 1/4. (3.9)
The above inequality holds for all n, which is impossible unless q = 0. So we conclude that q = 0. The resulting bound contradicts (3.10) when n > 12x_0/E[Y_1]. Hence, E[Y_1] ≤ 0. Similarly, E[−Y_1] ≤ 0. We conclude that E[Y_1] = E[−Y_1] = 0. Now, if E[Y_1²] ≠ 0, then by Lemma 2.2 (i) we have
V(max_{k≤n} |S_k| ≥ 3x_0) ≥ 1 − (3x_0 + 5x_0)²/(n E[Y_1²]),
which contradicts (3.10) when n > 96x_0²/E[Y_1²]. We conclude that E[Y_1²] = 0.
Finally, for any ε > 0 (ε < 5x_0),
V(|Y_1| ≥ ε) ≤ E[Y_1² ∧ (5x_0)²]/ε² ≤ E[Y_1²]/ε² = 0.
The proof is complete.
Proof of Theorem 3.1 (i) Let ε_k = 1/2^k, δ_k = 1/4^k. By (3.1), there exists a sequence n_1 < n_2 < · · · < n_k → ∞ such that
V(|S_n − S_m| ≥ ε_k) ≤ δ_k for all n, m ≥ n_k, (3.12)
and hence
Σ_{k=m}^∞ V(|S_{n_{k+1}} − S_{n_k}| ≥ ε_k) ≤ Σ_{k=m}^∞ δ_k → 0 as m → ∞.
By (3.12), max_{n≥n_k} V(|S_n − S_{n_{k+1}}| ≥ 2ε_k) < 2δ_k < 1/2. Applying the Lévy inequality (2.2) yields
V(max_{n_k≤n≤n_{k+1}} |S_n − S_{n_k}| > 5ε_k) ≤ 2V(|S_{n_{k+1}} − S_{n_k}| > 2ε_k) < 4δ_k. (3.13)
Hence
V(⋃_{k=m}^∞ {max_{n_k≤n≤n_{k+1}} |S_n − S_{n_k}| ≥ 5ε_k}) ≤ Σ_{k=m}^∞ V(max_{n_k≤n≤n_{k+1}} |S_n − S_{n_k}| ≥ 5ε_k)
≤ 4 Σ_{k=m}^∞ δ_k → 0 as m → ∞.
It follows that
V(lim sup_{n→∞} |S_n − S| > 0) ≤ V(lim sup_{k→∞} |S_{n_k} − S| > 0)
  + V(lim sup_{k→∞} max_{n_k≤n≤n_{k+1}} |S_n − S_{n_k}| > 0) = 0.
(3.2) follows.
(ii) From (3.2) and the continuity of V, it follows that for any ε > 0,
0 ≥ V(⋂_{n=1}^∞ ⋃_{m=n}^∞ {|S_m − S| ≥ ε}) = lim_{n→∞} V(⋃_{m=n}^∞ {|S_m − S| ≥ ε}) ≥ lim sup_{n→∞} V(|S_n − S| ≥ ε).
By letting n → ∞ and the arbitrariness of ε > 0, we obtain (3.4). Now, suppose that φ is a bounded continuous function. Then for any N > 1, φ((−N) ∨ x ∧ N) is a bounded uniformly continuous function. Hence
lim_{n→∞} E[φ((−N) ∨ S_n ∧ N)] = E[φ((−N) ∨ S ∧ N)]
and
lim sup_{n→∞} |E[φ((−N) ∨ S_n ∧ N)] − E[φ(S_n)]| ≤ 2 sup_x |φ(x)| lim sup_{n→∞} E[g_1(|S_n| − N + 1)],
where g_ε is defined as in (2.3). Hence, (3.4) holds for any bounded continuous function φ.
(ii) Note
V(|S_n − S_m| ≥ 2x) ≤ V(|S_n| ≥ x) + V(|S_m| ≥ x).
Write Y_{n,m} = (S_n, S_m − S_n); then the sequence {Y_{n,m}; m ≥ n} is asymptotically tight. By Lemma 3.5, for any subsequence (n_k, m_k) of (n, m), there is a further subsequence (n_k′, m_k′) of (n_k, m_k) and a sub-linear expectation space (Ω̃, H̃, Ẽ) with a random vector Y = (Y_1, Y_2) such that
E[φ(Y_{n_k′, m_k′})] → Ẽ[φ(Y)], φ ∈ C_b(R²). (3.14)
Note that S_{m_k′} − S_{n_k′} is independent of S_{n_k′}. By Lemma 4.4 of Zhang [13], Y_2 is independent of Y_1 under Ẽ. Let φ ∈ C_{b,Lip}(R). By (3.14),
E[φ(S_{m_k′})] → Ẽ[φ(Y_1 + Y_2)], E[φ(S_{n_k′})] → Ẽ[φ(Y_1)] (3.15)
and
E[φ(S_{m_k′} − S_{n_k′})] → Ẽ[φ(Y_2)]. (3.16)
Hence, by Lemma 3.6, we obtain Ṽ(|Y_2| ≥ ε) = 0 for all ε > 0. By choosing φ ∈ C_{b,Lip}(R) such that I{|x| ≥ ε} ≤ φ(x) ≤ I{|x| ≥ ε/2} in (3.16), we have V(|S_{m_k′} − S_{n_k′}| ≥ ε) → 0.
So, we conclude that for any subsequence (n_k, m_k) of (n, m), there is a further subsequence (n_k′, m_k′) of (n_k, m_k) such that V(|S_{m_k′} − S_{n_k′}| ≥ ε) → 0 for all ε > 0. Hence there exist n_1 < n_2 < · · · such that
V(|S_{n_{k+1}} − S_{n_k}| ≥ ε_k) ≤ δ_k.
Let A = {ω : Σ_{k=1}^∞ |S_{n_{k+1}} − S_{n_k}| < ∞}. Then
V(A^c) ≤ V(⋃_{k=K}^∞ {|S_{n_{k+1}} − S_{n_k}| ≥ ε_k})
≤ Σ_{k=K}^∞ V(|S_{n_{k+1}} − S_{n_k}| ≥ ε_k)
≤ Σ_{k=K}^∞ δ_k → 0 as K → ∞.
V(|S_n − S_{n_k}| ≥ ε) → 0 as n, n_k → ∞. Hence
V(|S_n − S| ≥ ε) ≤ V(|S_n − S_{n_k}| ≥ ε/2) + V(|S − S_{n_k}| ≥ ε/2) → 0.
By (2.5),
Σ_{k=1}^n E[X_k] ≤ (x + c)/(1 − β) for all x ≥ x_0, n ≥ n_0.
So Σ_{k=1}^∞ E[X_k] is convergent. Similarly, Σ_{k=1}^∞ E[−X_k] is convergent.
Now, by (2.4),
Σ_{k=1}^n E[X_k²] ≤ [(x + c)² + 2x Σ_{k=1}^n {(E[X_k])⁺ + (E[−X_k])⁺}]/(1 − β)
≤ [(x + c)² + 2x Σ_{k=1}^∞ {E[X_k] + E[−X_k]}]/(1 − β) for all x ≥ x_0, n ≥ n_0.
So Σ_{n=1}^∞ E[X_n²] is convergent. The proof is complete.
Proof of Theorem 3.3 (i) By Lemma 2.3 and the condition (S3),
V(S_n − S_m − Σ_{k=m+1}^n E[X_k] ≥ ε) ≤ C Σ_{k=m+1}^n E[(X_k − E[X_k])²]/ε² → 0 as n ≥ m → ∞.
The convergence of Σ_{n=1}^∞ E[X_n] implies Σ_{k=m+1}^n E[X_k] → 0. It follows that
lim_{n≥m→∞} V(S_n − S_m ≥ ε) = 0 for all ε > 0.
On the other hand, note E[X_k] + E[−X_k] ≥ 0. The condition (S2) implies
Σ_{k=1}^∞ (E[X_k] + E[−X_k]) < ∞,
and then Σ_{k=1}^∞ (E[X_k] + E[−X_k])² < ∞. Hence, by the condition (S3) and the fact that
E[(−X_k − E[−X_k])²] ≤ E[(X_k − E[X_k])²] + (E[X_k] + E[−X_k])²,
we have
Σ_{n=1}^∞ E[(−X_n − E[−X_n])²] < ∞.
By considering −X_n instead of X_n, we have
lim_{n≥m→∞} V(−S_n + S_m ≥ ε) = 0 for all ε > 0.
Then
lim_{n≥m→∞} V(max_{m≤k≤n} |X_k| ≥ c) = 0 for all c > 0. (3.20)
Write v_k = V(|X_k| ≥ 2c). Similar to (3.11), we have for m_0 large enough and all n ≥ m ≥ m_0,
1/3 > V(max_{m≤k≤n} |X_k| ≥ c) ≥ 1 − 2/(Σ_{k=m+1}^n v_k).
It follows that Σ_{k=1}^∞ v_k < ∞. The condition (S1) is satisfied for all c > 0.
Next, we consider (S3). Write X_n^c = (−c) ∨ X_n ∧ c and S_n^c = Σ_{k=1}^n X_k^c. Note that on the event {max_{m≤k≤n} |X_k| < c}, X_k^c = X_k, k = m + 1, . . . , n. By (3.19) and (3.20),
lim_{n≥m→∞} V(max_{m≤k≤n} |S_k^c − S_m^c| > ε) = 0 for all ε > 0. (3.21)
Let Y_1, Y_1′, Y_2, Y_2′, . . . , Y_n, Y_n′, . . . be independent random variables under the sub-linear expectation E with Y_k =^d Y_k′ =^d X_k^c, k = 1, 2, . . .. Then
{Y_{m+1}, . . . , Y_n} =^d {Y_{m+1}′, . . . , Y_n′} =^d {X_{m+1}^c, . . . , X_n^c}.
Let T_k = Σ_{i=1}^k Y_i and T_k′ = Σ_{i=1}^k Y_i′. By (3.21),
lim_{n≥m→∞} V(max_{m≤k≤n} |T_k − T_m| > ε) = lim_{n≥m→∞} V(max_{m≤k≤n} |T_k′ − T_m′| > ε) = 0 for all ε > 0. (3.22)
Write Ŷ_n = Y_n − Y_n′ and T̂_n = Σ_{k=1}^n Ŷ_k. Then {Ŷ_n; n ≥ 1} is a sequence of independent random variables with V(|Ŷ_n| > 3c) = 0. Without loss of generality, we can assume |Ŷ_n| ≤ 3c, for otherwise we can replace Ŷ_n by (−3c) ∨ Ŷ_n ∧ (3c). By (3.22),
lim_{n≥m→∞} V(max_{m≤k≤n} |T̂_k − T̂_m| > 2ε) = 0 for all ε > 0.
Note E[−Ŷ_k] = E[Ŷ_k] = E[X_k^c] + E[−X_k^c] ≥ 0. By Lemma 3.7,
Σ_{n=1}^∞ (E[X_n^c] + E[−X_n^c]) and Σ_{n=1}^∞ E[Ŷ_n²] are convergent.
Note
E[Ŷ_n² | Y_n] ≥ (Y_n − E[Y_n])² + E[(Y_n′ − E[Y_n])²] + 2(Y_n − E[Y_n])⁻ Ê[Y_n′ − E[Y_n]].
So
E[Ŷ_n²] ≥ 2E[(X_n^c − E[X_n^c])²] − 2{E[X_n^c] + E[−X_n^c]} E[(X_n^c − E[X_n^c])⁻]
≥ 2E[(X_n^c − E[X_n^c])²] − 2c{E[X_n^c] + E[−X_n^c]}.
It follows that
Σ_{n=1}^∞ E[(X_n^c − E[X_n^c])²] < ∞. (3.23)
Since E[(−X_n^c − E[−X_n^c])²] ≤ E[(X_n^c − E[X_n^c])²] + (E[X_n^c] + E[−X_n^c])², we also have
Σ_{n=1}^∞ E[(−X_n^c − E[−X_n^c])²] < ∞.
where G(α) = ½(σ̄²α⁺ − σ²α⁻).
That ξ is a normally distributed random variable is equivalent to the following: if ξ′ is an independent copy of ξ (i.e., ξ′ is independent of ξ and ξ′ =^d ξ), then
E[φ(αξ + βξ′)] = E[φ(√(α² + β²) ξ)], ∀φ ∈ C_{l,Lip}(R) and ∀α, β ≥ 0, (4.2)
(Definition II.1.4 and Example II.1.13 of Peng [9]). We also write η =^d N(0, [σ², σ̄²]) if η =^d ξ (as defined in Definition 1.2 (i)) and ξ ∼ N(0, [σ², σ̄²]) (as defined in Definition 4.1). By definition, η =^d N(0, [σ², σ̄²]) if and only if for any φ ∈ C_{b,Lip}(R), the function u(x, t) = E[φ(x + √t η)] (x ∈ R, t ≥ 0) is the unique viscosity solution of the equation (4.1). In the sequel, without loss of generality, we assume that the sub-linear expectation spaces (Ω, H, E) and (Ω̃, H̃, Ẽ) are the same.
Let {X_n; n ≥ 1} be a sequence of independent and identically distributed random variables in a sub-linear expectation space (Ω, H, E), S_n = Σ_{k=1}^n X_k. Peng [8, 9] proved that, if E[X_1] = E[−X_1] = 0 and E[|X_1|^{2+α}] < ∞ for some α > 0, then
lim_{n→∞} E[φ(S_n/√n)] = E[φ(ξ)], ∀φ ∈ C_b(R), (4.3)
where ξ ∼ N(0, [σ², σ̄²]), σ̄² = E[X_1²] and σ² = Ê[X_1²]. Zhang [14] showed that E[|X_1|^{2+α}] < ∞ can be weakened to E[(X_1² − c)⁺] → 0 as c → ∞ by applying the moment inequalities for sums of independent random variables and the truncation method. A natural question is whether E[X_1²] < ∞ and E[X_1] = E[−X_1] = 0 are sufficient and necessary for (4.3). The following theorem is our main result.
Theorem 4.2 Let {X_n; n ≥ 1} be a sequence of independent and identically distributed random variables in a sub-linear expectation space (Ω, H, E), S_n = Σ_{k=1}^n X_k. Suppose that
(i) lim_{c→∞} E[X_1² ∧ c] is finite;
(ii) x² V(|X_1| ≥ x) → 0 as x → ∞;
(iii) lim_{c→∞} E[(−c) ∨ X_1 ∧ c] = lim_{c→∞} E[(−c) ∨ (−X_1) ∧ c] = 0.
Write σ̄² = lim_{c→∞} E[X_1² ∧ c] and σ² = lim_{c→∞} Ê[X_1² ∧ c]. Then for any φ ∈ C_b(R),
lim_{n→∞} E[φ(S_n/√n)] = E[φ(ξ)], (4.4)
where ξ ∼ N(0, [σ², σ̄²]).
Conversely, if (4.4) holds for any φ ∈ C_b¹(R) and a random variable ξ with x² V(|ξ| ≥ x) → 0 as x → ∞, then (i), (ii) and (iii) hold and ξ =^d N(0, [σ², σ̄²]).
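In the degenerate classical case σ² = σ̄², Theorem 4.2 contains the ordinary central limit theorem. For fair ±1 signs, conditions (i)–(iii) hold trivially with σ² = σ̄² = 1, the G-normal limit collapses to N(0, 1), and the convergence in (4.4) can be checked by exact binomial enumeration. A numerical sketch with a test function of our own choosing (not from the paper):

```python
import math
from math import comb

def E_phi_scaled(n, phi):
    """Exact E[phi(S_n/sqrt(n))] for independent fair ±1 signs."""
    return sum(comb(n, k) / 2.0 ** n * phi((2 * k - n) / math.sqrt(n))
               for k in range(n + 1))

def E_phi_normal(phi, grid=20000, L=10.0):
    """Midpoint-rule approximation of E[phi(Z)], Z ~ N(0, 1)."""
    h = 2 * L / grid
    total = 0.0
    for i in range(grid):
        x = -L + (i + 0.5) * h
        total += phi(x) * math.exp(-x * x / 2) / math.sqrt(2 * math.pi) * h
    return total

phi = lambda x: 1.0 / (1.0 + x * x)      # bounded, Lipschitz test function
target = E_phi_normal(phi)
err_small = abs(E_phi_scaled(50, phi) - target)
err_large = abs(E_phi_scaled(800, phi) - target)
assert err_large < err_small             # the binomial expectation approaches
assert err_large < 1e-2                  # the Gaussian one as n grows
```

Because S_n for ±1 signs has an explicit binomial law, E[φ(S_n/√n)] is computed exactly; only the Gaussian expectation is discretized.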
Before proving the theorem, we give some remarks on the conditions. Note that E[X_1² ∧ c] and Ê[X_1² ∧ c] are non-decreasing in c. So, σ̄² and σ² are well defined and nonnegative, and are finite if the condition (i) is satisfied. It is easily seen that, for c_1 > c_2 > 0,
|E[X_1^{c_1}] − E[X_1^{c_2}]| ≤ E[(|X_1| ∧ c_1 − c_2)⁺] ≤ σ̄²/c_2, (4.5)
where X_1^c := (−c) ∨ X_1 ∧ c. So, the condition (i) implies that lim_{c→∞} E[X_1^c] and lim_{c→∞} E[−X_1^c] exist and are finite.
If E is a continuous sub-linear expectation, i.e., E[X_n] ↗ E[X] whenever 0 ≤ X_n ↗ X, and E[X_n] ↘ 0 whenever X_n ↘ 0 with E[X_n] < ∞, then (i) is equivalent to E[X_1²] < ∞, (iii) is equivalent to E[X_1] = E[−X_1] = 0, and (ii) is automatically implied by E[X_1²] < ∞. In general, the condition E[X_1²] < ∞ and (i) with (ii) do not imply each other. However, it is easily verified that, if E[(X_1² − c)⁺] → 0 as c → ∞, then (i) and (ii) are satisfied and (iii) is equivalent to E[X_1] = E[−X_1] = 0.
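To see concretely that (i)–(ii) can hold while the second moment is infinite, here is a classical (linear-expectation) example with an invented tail — none of this is from the paper: take P(|X| > x) = 1/(x² log x) for x ≥ e. Then x²V(|X| ≥ x) = 1/log x → 0, so (ii) holds, while E[X² ∧ c²] = ∫₀^c 2x P(|X| > x) dx = e² + 2 log log c grows without bound, so E[X²] = ∞.

```python
import math

def tail(x):
    """P(|X| > x) for the illustrative heavy-tailed law."""
    return 1.0 if x < math.e else 1.0 / (x * x * math.log(x))

# Condition (ii): x^2 * tail(x) = 1/log(x) -> 0 as x -> infinity.
assert tail(1e6) * 1e6 ** 2 < 0.08
assert tail(1e12) * 1e12 ** 2 < 0.04

# Truncated second moment E[X^2 ∧ c^2] = ∫_0^c 2x P(|X| > x) dx; for c >= e
# this evaluates in closed form to e^2 + 2 log log c, which is unbounded in c.
def truncated_second_moment(c):
    return math.e ** 2 + 2 * math.log(math.log(c))

assert truncated_second_moment(1e12) > truncated_second_moment(1e6) > truncated_second_moment(1e3)
```

The growing truncated second moment is exactly the classical counterpart of condition (i) failing to deliver a finite E[X_1²].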
To prove Theorem 4.2, we need one more lemma.
Lemma 4.3 Let X_{n1}, . . . , X_{nn} be independent random variables in a sub-linear expectation space (Ω, H, E) with
(1/√n) Σ_{k=1}^n {|E[X_{nk}]| + |E[−X_{nk}]|} → 0,
(1/n) Σ_{k=1}^n {|E[X_{nk}²] − σ̄²| + |Ê[X_{nk}²] − σ²|} → 0
and
(1/n^{3/2}) Σ_{k=1}^n E[|X_{nk}|³] → 0.
Then
lim_{n→∞} E[φ(Σ_{k=1}^n X_{nk}/√n)] = E[φ(ξ)], ∀φ ∈ C_b(R),
where ξ ∼ N(0, [σ², σ̄²]).
This lemma can be proved by refining the arguments of Li and Shi [5], and also follows from the Lindeberg central limit theorem [16]. We omit the proof here.
Proof of Theorem 4.2 We first prove the sufficiency part, i.e., (i), (ii) and (iii) ⇒ (4.4). Let X_{nk} = (−√n) ∨ X_k ∧ √n. Then for any ε > 0,
(1/n^{3/2}) Σ_{k=1}^n E[|X_{nk}|³] = n^{−1/2} E[|X_{n1}|³] ≤ ε σ̄² + n V(|X_1| ≥ ε√n) → 0
as n → ∞ and then ε → 0, by the condition (ii). Also,
(1/n) Σ_{k=1}^n {|E[X_{nk}²] − σ̄²| + |Ê[X_{nk}²] − σ²|} = |E[X_1² ∧ n] − σ̄²| + |Ê[X_1² ∧ n] − σ²| → 0,
(4.4) is proved.
Now, we consider the necessity part. Letting φ = g_ε(|x| − t) yields
lim sup_{n→∞} V(|S_n|/√n ≥ t + ε) ≤ V(|ξ| ≥ t) for all t > 0, ε > 0.
So
lim sup_{n≥m→∞} max_{m≤k,l≤n} V(|S_k − S_l|/√n ≥ 2t + ε) ≤ 2V(|ξ| ≥ t) for all t > 0, ε > 0.
Choose t_0 such that V(|ξ| ≥ t_0) < 1/32. Applying the Lévy maximal inequality (2.2) yields
lim sup_{n≥m→∞} V(max_{m≤k≤n} |S_k − S_m|/√n ≥ 4t) < (64/31) V(|ξ| ≥ t) for all t > t_0. (4.6)
Hence
lim sup_{n≥m→∞} V(max_{m≤k≤n} |X_k|/√n ≥ 8t) < (64/31) V(|ξ| ≥ t) for all t > t_0. (4.7)
Let t_1 > t_0 and m_0 be such that
V(max_{m≤k≤n} |S_k − S_m|/√n > 4t_1) < 2/31 for all n ≥ m ≥ m_0 (4.8)
and
V(max_{m≤k≤n} |X_k|/√n > 8t_1) < 4/31 for all n ≥ m ≥ m_0. (4.9)
Hence (n − m)(E[Y_{n1}])⁺ ≤ 15t_1. Similarly, (n − m)(E[−Y_{n1}])⁺ ≤ 15t_1. Hence, by Lemma 2.2 (i), it follows that
1/5 > 1 − [(4t_1 + 8t_1)² + 8t_1{(n − m)(E[Y_{n1}])⁺ + (n − m)(E[−Y_{n1}])⁺}]/((n − m)E[Y_{n1}²]),
and so
lim_{c→∞} E[X_1² ∧ c] = lim_{n→∞} n E[Y_{n1}²] ≤ (5/2)(12² + 240)t_1².
(i) is proved. Note that (i) implies that lim_{c→∞} E[X_1^c] exists and is finite. Then
lim_{c→∞} E[X_1^c] = lim sup_{n→∞} √n E[Y_{n1}] ≤ lim sup_{n→∞} 30t_1/√n = 0.
Similarly, lim_{c→∞} E[−X_1^c] exists, is finite and not positive. Note E[−X_1^c] + E[X_1^c] ≥ 0. Hence (iii) follows.
Finally, we show (ii). For any given 0 < ε < 1/2, by the condition x²V(|ξ| ≥ x) → 0, one can choose t_1 > t_0 such that (64/31)V(|ξ| ≥ t_1) ≤ ε/(9³t_1²) < 1/2. Then by (4.7), there is m_0 such that
V(max_{m≤k≤n} |X_k|/√n ≥ 8t_1) < ε/(9³t_1²), n ≥ m ≥ m_0.
Choose Z_k = g(|X_k|/(8t_1√n) − 1) with g as in (2.3) such that I{|X_k| ≥ 9t_1√n} ≤ Z_k ≤ I{|X_k| ≥ 8t_1√n}. Let q_n = V(|X_1| ≥ 9t_1√n). Then
V(max_{m≤k≤n} |X_k|/√n ≥ 8t_1) ≥ 1 − E[Π_{k=m+1}^n (1 − Z_k)]
= 1 − Π_{k=m+1}^n (1 − E[Z_k])
≥ 1 − e^{−(n−m)q_n}.
It follows that
n V(|X_1| ≥ 9t_1√n) ≤ 2(n − m)q_n < 2 × 2 × ε/(9³t_1²) for m = [n/2] ≥ m_0.
Hence
(9t_1√n)² V(|X_1| ≥ 9t_1√n) < 4ε/9, n ≥ 2m_0.
When x ≥ 9t_1√(2m_0), there is n such that 9t_1√n ≤ x ≤ 9t_1√(n+1). Then
x² V(|X_1| ≥ x) ≤ (9t_1√(n+1))² V(|X_1| ≥ 9t_1√n) ≤ 8ε/9.
It follows that lim sup_{x→∞} x² V(|X_1| ≥ x) < ε. (ii) is proved. The proof is now complete.
Remark 4.4 From the proof, we can find that
lim_{x→∞} lim sup_{n→∞} V(|S_n|/√n ≥ x) = 0
implies (i) and (ii). One may conjecture that:
(C1) if (4.4) holds for any φ ∈ C_b¹(R) and a tight random variable ξ (i.e., V(|ξ| ≥ x) → 0 as x → ∞), then (i), (ii) and (iii) hold and ξ =^d N(0, [σ², σ̄²]).
An equivalent conjecture is that:
(C2) if ξ and ξ′ are independent and identically distributed tight random variables, and
E[φ(αξ + βξ′)] = E[φ(√(α² + β²) ξ)], ∀φ ∈ C_b(R) and ∀α, β ≥ 0, (4.11)
then ξ =^d N(0, [σ², σ̄²]), where σ̄² = lim_{c→∞} E[ξ² ∧ c] and σ² = lim_{c→∞} Ê[ξ² ∧ c].
It should be noted that the conditions (4.2) and (4.11) are different. The condition (4.2) implies that ξ has finite moments of each order, but no information about the moments of ξ is hidden in (4.11). As in Theorem 4.2, the conjecture (C2) is true when x²V(|ξ| ≥ x) → 0 as x → ∞. In fact, let X_1, X_2, . . . be independent random variables with X_k =^d ξ. Then by (4.11), S_n/√n =^d ξ. By the necessity part of Theorem 4.2, the conditions (i), (ii) and (iii) are satisfied. Then by the sufficiency part of the theorem, ξ =^d N(0, [σ², σ̄²]). We do not know whether the conjectures (C1) and (C2) are true without assuming any moment conditions. It is very possible that they are not true in general. But finding a counterexample is not an easy task.
Acknowledgements We thank the referees for their time and helpful comments.
References
[1] Chen, Z. J.: Strong laws of large numbers for sub-linear expectation. Sci. China Math., 59(5), 945–954
(2016)
[2] Chen, Z. J., Hu, F.: A law of the iterated logarithm under sublinear expectations. Journal of Financial
Engineering, 1, No.2 (2014)
[3] Chen, Z. J., Wu, P. Y., Li, B. M.: A strong law of large numbers for non-additive probabilities. Internat.
J. Approx. Reason., 54(3), 365–377 (2013)
[4] Hu, C.: Strong laws of large numbers for sublinear expectation under controlled 1st moment condition.
Chin. Ann. Math. Ser. B, 39(5), 791–804 (2018)
[5] Li, M., Shi, Y. F.: A general central limit theorem under sublinear expectations. Sci. China Math., 53(8),
1989–1994 (2010)
[6] Lin, Z. Y., Zhang, L. X.: Convergence to a self-normalized G-Brownian motion. Probability, Uncertainty
and Quantitative Risk, 2, Art. No. 4, 2017
[7] Peng, S. G.: G-expectation, G-Brownian motion and related stochastic calculus of Ito type. In: Proceedings
of the 2005 Abel Symposium. Springer, Berlin-Heidelberg, 541–567, 2007
[8] Peng, S. G.: A new central limit theorem under sublinear expectations. arXiv:0803.2656v1 (2008)
[9] Peng, S. G.: Nonlinear expectations and stochastic calculus under uncertainty. arXiv:1002.4546 [math.PR]
(2010)
[10] Peng, S. G.: Tightness, weak compactness of nonlinear expectations and application to CLT.
arXiv:1006.2541 [math.PR] (2010)
[11] Peng, S. G.: Survey on normal distributions, central limit theorem, Brownian motion and the related
stochastic calculus under sublinear expectations. Sci. China Math., 52(7), 1391–1411 (2009)
[12] Xu, J. P., Zhang, L. X.: Three series theorem for independent random variables under sub-linear expecta-
tions with applications. Acta Math. Sin., Engl. Ser., 35(2), 172–184 (2019)
[13] Zhang, L. X.: Donsker’s invariance principle under the sub-linear expectation with an application to Chung’s
law of the iterated logarithm. Comm. Math. Stat., 3(2), 187–214 (2015)
[14] Zhang, L. X.: Exponential inequalities under the sub-linear expectations with applications to laws of the
iterated logarithm. Sci. China Math., 59(12), 2503–2526 (2016)
[15] Zhang, L. X.: Rosenthal’s inequalities for independent and negatively dependent random variables under
sub-linear expectations with applications. Sci. China Math., 59(4), 751–768 (2016)
[16] Zhang, L. X.: Lindeberg’s central limit theorems for martingale like sequences under nonlinear expectations.
To appear in Sci. China Math., arXiv:1611.0619 (2019)
[17] Zhang, L. X., Lin, J. H.: Marcinkiewicz’s strong law of large numbers for nonlinear expectations. Statist.
Probab. Lett., 137, 269–276 (2018)