
A note on the convergence rate of Peng's law of large numbers under sublinear expectations

arXiv:2107.02465v1 [math.PR] 6 Jul 2021

Mingshang Hu¹, Xiaojuan Li¹ and Xinpeng Li²†

Abstract
This short note provides a new and simple proof of the convergence rate for Peng’s law
of large numbers under sublinear expectations, which improves the corresponding results
in Song [15] and Fang et al. [3].

Keywords Law of large numbers, rate of convergence, sublinear expectation


AMS Subject Classification 60F05

1 Introduction
The first law of large numbers (LLN for short) on a sublinear expectation space was proved by
Peng in 2007 for uncorrelated random variables on arXiv (math.PR/0702358v1); see also Peng
[14]. The notions of independence and identical distribution (i.i.d. for short) were initiated in
[14], and the more general form of the LLN for i.i.d. sequences was proved by Peng [13]:
Let $\{X_i\}_{i=1}^{\infty}$ be an i.i.d. sequence on a sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ with $\overline{\mu} = \hat{\mathbb{E}}[X_1]$
and $\underline{\mu} = -\hat{\mathbb{E}}[-X_1]$; we further assume that $\lim_{\lambda \to +\infty} \hat{\mathbb{E}}[(|X_1| - \lambda)^{+}] = 0$. Then
$$\lim_{n \to \infty} \hat{\mathbb{E}}\left[\varphi\left(\frac{X_1 + \cdots + X_n}{n}\right)\right] = \max_{\underline{\mu} \le r \le \overline{\mu}} \varphi(r), \qquad (1)$$
where $\varphi$ is a continuous function satisfying the linear growth condition.


Song [15] gives the following error estimate for Peng's LLN via Stein's method:
$$\sup_{|\varphi|_{\mathrm{Lip}} \le 1} \left| \hat{\mathbb{E}}\left[\varphi\left(\frac{S_n}{n}\right)\right] - \max_{\underline{\mu} \le r \le \overline{\mu}} \varphi(r) \right| \le \frac{C}{\sqrt{n}}, \qquad (2)$$
where $|\varphi|_{\mathrm{Lip}} \le 1$ means that the Lipschitz constant of $\varphi$ does not exceed $1$, and $C$ is a constant
depending only on $\hat{\mathbb{E}}[X_1^2]$. The corresponding proof in [15] is based on smooth approximations
of a nonlinear partial differential equation.
In this short note, we will provide a simple and purely probabilistic proof of (2). One basic
tool in our proof is Chatterji's inequality from classical probability theory (see Chatterji [1]),
which says that
$$E[|X_1 + \cdots + X_n|^{p}] \le 2^{2-p} \sum_{j=1}^{n} E[|X_j|^{p}]$$
for a martingale-difference sequence with $\max_{1 \le j \le n} E[|X_j|^{p}] < \infty$, where $p \in [1, 2]$. In particular,
we show that the constant $C$ in (2) can be chosen as the upper standard deviation of $X_1$, i.e.,
$C = \overline{\sigma}(X_1) := \inf_{\mu \in [\underline{\mu}, \overline{\mu}]} \hat{\mathbb{E}}[|X_1 - \mu|^2]^{\frac{1}{2}}$.

¹ Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan 250100, China.
² Research Center for Mathematics and Interdisciplinary Sciences, Shandong University, Qingdao 266237, China.
† Corresponding author. E-mail: lixinpeng@sdu.edu.cn
∗ Project supported by National Key R&D Program of China (No. 2018YFA0703900) and NSF (No. 11601281, No. 11671231).
The remainder of this paper is organized as follows. Section 2 describes some basic concepts
and notations of the sublinear expectation theory. The main results of this note with the simple
proof are provided in Section 3.
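The Chatterji inequality quoted above can be illustrated numerically. The following Monte Carlo sketch (ours, not part of the paper's proof) estimates both sides for an i.i.d. mean-zero sequence, which is in particular a martingale-difference sequence:

```python
import numpy as np

# Monte Carlo check of E[|X_1+...+X_n|^p] <= 2^(2-p) * sum_j E[|X_j|^p], p in [1,2],
# for i.i.d. standard normal X_j (mean zero, hence martingale differences).
rng = np.random.default_rng(0)
n, p, trials = 50, 1.5, 100_000
X = rng.standard_normal((trials, n))
lhs = np.mean(np.abs(X.sum(axis=1)) ** p)         # estimate of E[|S_n|^p]
rhs = 2 ** (2 - p) * n * np.mean(np.abs(X) ** p)  # 2^(2-p) * sum_j E[|X_j|^p]
print(lhs <= rhs)                                  # True
```

For the normal case the bound is far from tight; what matters for the proof of Theorem 3.1 is its validity for arbitrary martingale-difference sequences.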

2 Preliminaries
We recall some basic notions and results in the theory of sublinear expectations. The readers
may refer to [7, 10, 11, 12, 13] for more details.
Let Ω be a given set and let H be a linear space of real functions defined on Ω such that
c ∈ H for all constants c and |X| ∈ H if X ∈ H. We further suppose that if X1 , . . . , Xn ∈ H,
then ϕ(X1 , · · · , Xn ) ∈ H for each ϕ ∈ Cb.Lip (Rn ), where Cb.Lip (Rn ) denotes the space of
bounded and Lipschitz functions on Rn . H is considered as the space of random variables.
X = (X1 , . . . , Xn ), Xi ∈ H, is called an n-dimensional random vector.

Definition 2.1 A sublinear expectation Ê on H is a functional Ê : H → R satisfying the


following properties: for all X, Y ∈ H, we have

(a) Monotonicity: Ê[X] ≥ Ê[Y ] if X ≥ Y .

(b) Constant preserving: Ê[c] = c for c ∈ R.

(c) Sub-additivity: Ê[X + Y ] ≤ Ê[X] + Ê[Y ].

(d) Positive homogeneity: Ê[λX] = λÊ[X] for λ ≥ 0.

The triple (Ω, H, Ê) is called a sublinear expectation space.
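A sublinear expectation can be realized concretely as an upper expectation over a family of probability measures (cf. Theorem 2.1 below). The following toy sketch, with a three-point $\Omega$ and a two-measure family of our own choosing, checks properties (a)–(d):

```python
import numpy as np

# E_hat[X] = max over a finite family of probability vectors on a 3-point Omega.
family = [np.array([0.2, 0.3, 0.5]), np.array([0.5, 0.25, 0.25])]
E_hat = lambda X: max(float(P @ X) for P in family)

X = np.array([1.0, -2.0, 3.0])
Y = np.array([0.5, -2.5, -1.0])                      # Y <= X pointwise

print(E_hat(X) >= E_hat(Y))                          # (a) monotonicity
print(np.isclose(E_hat(np.full(3, 7.0)), 7.0))       # (b) constant preserving
print(E_hat(X + Y) <= E_hat(X) + E_hat(Y) + 1e-12)   # (c) sub-additivity
print(np.isclose(E_hat(3.0 * X), 3.0 * E_hat(X)))    # (d) positive homogeneity
```

All four checks print True; sub-additivity reflects that a maximum of sums never exceeds the sum of maxima.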


Denote by Ĥ the completion of H under the norm ||X|| := Ê[|X|]. Noting that |Ê[X] −
Ê[Y ]| ≤ Ê[|X − Y |], Ê[·] can be continuously extended to Ĥ. One can check that (Ω, Ĥ, Ê) is
still a sublinear expectation space, which is called a complete sublinear expectation space. In
the following, we always suppose that (Ω, H, Ê) is complete.
Definition 2.2 Let $X$ and $Y$ be two random variables on $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$. $X$ and $Y$ are called
identically distributed, denoted by $X \stackrel{d}{=} Y$, if for each $\varphi \in C_{b.Lip}(\mathbb{R})$,
$$\hat{\mathbb{E}}[\varphi(X)] = \hat{\mathbb{E}}[\varphi(Y)].$$



Definition 2.3 Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of random variables on $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$. $X_n$ is said
to be independent of $(X_1, \ldots, X_{n-1})$ under $\hat{\mathbb{E}}$ if for each $\varphi \in C_{b.Lip}(\mathbb{R}^n)$,
$$\hat{\mathbb{E}}\left[\varphi(X_1, \ldots, X_n)\right] = \hat{\mathbb{E}}\Big[\hat{\mathbb{E}}\left[\varphi(x_1, \ldots, x_{n-1}, X_n)\right]\big|_{(x_1, \ldots, x_{n-1}) = (X_1, \ldots, X_{n-1})}\Big].$$
The sequence of random variables $\{X_n\}_{n=1}^{\infty}$ is said to be independent if $X_{n+1}$ is independent
of $(X_1, \ldots, X_n)$ for each $n \ge 1$.
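Independence under $\hat{\mathbb{E}}$ is order-dependent and strictly stronger than maximizing over one common model. In the following toy sketch (ours, not the paper's), $X_1, X_2$ take values in $\{-1, +1\}$ with $P(X_i = 1)$ only known to lie in $[0.3, 0.7]$; the nested evaluation of $\hat{\mathbb{E}}[X_1 X_2]$ exceeds the best value over classical i.i.d. models with a single common parameter:

```python
# Toy illustration of Definition 2.3: X1, X2 in {-1, +1}, P(X_i = 1) ambiguous
# in [0.3, 0.7].  Under sublinear independence, E_hat[phi(X1, X2)] is computed
# by nesting: freeze x1, maximize over the law of X2, then maximize over X1.
ps = [0.3 + 0.01 * k for k in range(41)]   # grid over the ambiguous parameter p

def E_p(psi, p):                            # classical expectation when P(X=1)=p
    return p * psi(1) + (1 - p) * psi(-1)

def E_hat(psi):                             # sublinear expectation of psi(X)
    return max(E_p(psi, p) for p in ps)

phi = lambda x1, x2: x1 * x2
nested = E_hat(lambda x1: E_hat(lambda x2: phi(x1, x2)))
one_p = max(E_p(lambda x1: E_p(lambda x2: phi(x1, x2), p), p) for p in ps)
print(round(nested, 6), round(one_p, 6))    # 0.4 0.16
```

The gap ($0.4 > 0.16$) reflects that under $\hat{\mathbb{E}}$-independence the ambiguous law of $X_2$ may react to the realized value of $X_1$.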
The following representation theorem is useful in the theory of sublinear expectations.
Theorem 2.1 Let $X = (X_1, \ldots, X_n)$ be an $n$-dimensional random vector on $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$. Set
$$\mathcal{P} = \left\{P : P \text{ is a probability measure on } (\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n)),\ E_P[\varphi] \le \hat{\mathbb{E}}[\varphi(X)] \text{ for } \varphi \in C_{b.Lip}(\mathbb{R}^n)\right\}. \qquad (3)$$
Then $\mathcal{P}$ is weakly compact, and for each $\varphi \in C_{b.Lip}(\mathbb{R}^n)$,
$$\hat{\mathbb{E}}[\varphi(X)] = \max_{P \in \mathcal{P}} E_P[\varphi]. \qquad (4)$$
Moreover, if $\max_{1 \le i \le n} \hat{\mathbb{E}}[|X_i|^r] < \infty$ for some $r > 1$, then for each $\varphi \in C_{Lip}(\mathbb{R}^n)$,
$$\hat{\mathbb{E}}[\varphi(X)] = \max_{P \in \mathcal{P}} E_P[\varphi], \qquad (5)$$
where $C_{Lip}(\mathbb{R}^n)$ denotes the space of Lipschitz functions on $\mathbb{R}^n$.


The representation (4) is obtained from Theorem 10 in Hu and Li [5] (see also [2, 6]). Similar
to Lemma 2.4.12 in Peng [13], it can be extended to (5) under the higher moment condition.
For each given positive integer $n$, consider the measurable space $(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n))$ and define
$$\tilde{X}_i(x) = x_i \ \text{ for } x = (x_1, \ldots, x_n) \in \mathbb{R}^n, \quad \mathcal{F}_i = \sigma(\tilde{X}_1, \ldots, \tilde{X}_i) = \{A \times \mathbb{R}^{n-i} : A \in \mathcal{B}(\mathbb{R}^i)\}, \ 1 \le i \le n, \quad \mathcal{F}_0 = \{\emptyset, \mathbb{R}^n\}. \qquad (6)$$

The following proposition was initiated by Li [8] (see also [4, 9]).
Proposition 2.1 Let $X = (X_1, \ldots, X_n)$ be an $n$-dimensional random vector on $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$,
and let $\mathcal{P}$, $\tilde{X}_i$ and $\mathcal{F}_i$, $1 \le i \le n$, be defined as in (3) and (6) respectively. If $X_{j+1}$ is independent of
$(X_1, \ldots, X_j)$ for some $j \ge 1$ and $\hat{\mathbb{E}}\big[|X_{j+1}|^{1+\alpha}\big] < \infty$ for some $\alpha > 0$, then, for each $P \in \mathcal{P}$,
we have
$$-\hat{\mathbb{E}}[-X_{j+1}] \le E_P[\tilde{X}_{j+1} \,|\, \mathcal{F}_j] \le \hat{\mathbb{E}}[X_{j+1}], \quad P\text{-a.s.} \qquad (7)$$

Proof. Set $B = \{E_P[\tilde{X}_{j+1} | \mathcal{F}_j] > \hat{\mathbb{E}}[X_{j+1}]\} \in \mathcal{F}_j$. If $P(B) > 0$, then we can find a compact
set $F \subset \mathbb{R}^j$ such that $F \times \mathbb{R}^{n-j} \subset B$ and $P(F \times \mathbb{R}^{n-j}) > 0$. By Tietze's extension theorem,
there exists a sequence $\{\varphi_k\}_{k=1}^{\infty} \subset C_{b.Lip}(\mathbb{R}^j)$ such that $0 \le \varphi_k \le 1$ and $\varphi_k(\tilde{X}_1, \ldots, \tilde{X}_j) \downarrow I_F(\tilde{X}_1, \ldots, \tilde{X}_j)$. Let $f(x_{j+1}) = [(x_{j+1} - \hat{\mathbb{E}}[X_{j+1}]) \wedge N] \vee (-N)$; then
$$\varphi_{k,N}(x_1, \ldots, x_n) = \varphi_k(x_1, \ldots, x_j) f(x_{j+1}) \in C_{b.Lip}(\mathbb{R}^n),$$
and for each $N \ge 1$, we get
$$E_P[\varphi_{k,N}(\tilde{X}_1, \ldots, \tilde{X}_n)] \le \hat{\mathbb{E}}[\varphi_{k,N}(X_1, \ldots, X_n)] = \hat{\mathbb{E}}\big[\varphi_k(X_1, \ldots, X_j)\,\hat{\mathbb{E}}[f(X_{j+1})]\big] \le \hat{\mathbb{E}}[f(X_{j+1})]$$
$$= \hat{\mathbb{E}}[f(X_{j+1})] - \hat{\mathbb{E}}\big[X_{j+1} - \hat{\mathbb{E}}[X_{j+1}]\big] \le \frac{1}{N^{\alpha}} \hat{\mathbb{E}}\big[|X_{j+1} - \hat{\mathbb{E}}[X_{j+1}]|^{1+\alpha}\big].$$
Letting $N \to \infty$ first and then $k \to \infty$, we obtain
$$E_P\big[I_F(\tilde{X}_1, \ldots, \tilde{X}_j)(\tilde{X}_{j+1} - \hat{\mathbb{E}}[X_{j+1}])\big] \le 0,$$
which contradicts
$$E_P\big[I_F(\tilde{X}_1, \ldots, \tilde{X}_j)(\tilde{X}_{j+1} - \hat{\mathbb{E}}[X_{j+1}])\big] = E_P\big[I_F(\tilde{X}_1, \ldots, \tilde{X}_j)(E_P[\tilde{X}_{j+1} | \mathcal{F}_j] - \hat{\mathbb{E}}[X_{j+1}])\big] > 0,$$
since $F \times \mathbb{R}^{n-j} \subset B$ and $P(F \times \mathbb{R}^{n-j}) > 0$.
Thus $P(B) = 0$ and the right-hand inequality of (7) holds; the left-hand one follows by considering $-X_{j+1}$. □
3 Main result
Now we give the following convergence rate of Peng’s LLN.
Theorem 3.1 Let $\{X_i\}_{i=1}^{\infty}$ be independent random variables on a sublinear expectation
space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ with $\hat{\mathbb{E}}[X_i] = \overline{\mu}$ and $-\hat{\mathbb{E}}[-X_i] = \underline{\mu}$ for $i \ge 1$. Let $S_n = X_1 + \cdots + X_n$. We
further assume that there exists $\alpha \in (0, 1]$ such that
$$C_{\alpha} = \sup_{i \ge 1} \hat{\mathbb{E}}\big[|X_i|^{1+\alpha}\big] < \infty.$$
Then, for each $\varphi \in C_{Lip}(\mathbb{R})$ with Lipschitz constant $L_{\varphi}$, we have
$$\left| \hat{\mathbb{E}}\left[\varphi\left(\frac{S_n}{n}\right)\right] - \max_{r \in [\underline{\mu}, \overline{\mu}]} \varphi(r) \right| \le L_{\varphi} \left(\frac{4C_{\alpha}}{n^{\alpha}}\right)^{\frac{1}{1+\alpha}}.$$
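As a degenerate sanity check (ours, not part of the paper): when $\underline{\mu} = \overline{\mu}$ the sublinear expectation reduces to a classical one and the theorem bounds the classical LLN error. With $X_i \sim N(0,1)$, $\varphi(x) = |x|$ (so $L_{\varphi} = 1$) and $\alpha = 1$ (so $C_1 = E[X_i^2] = 1$), the bound reads $E[|S_n/n|] \le (4/n)^{1/2}$:

```python
import numpy as np

# Classical special case of Theorem 3.1: mu_low = mu_up = 0, X_i ~ N(0,1),
# phi(x) = |x|, alpha = 1, C_1 = 1.  Bound: E[phi(S_n/n)] <= (4*C_1/n)^(1/2).
# We sample S_n/n directly from its exact law N(0, 1/n).
rng = np.random.default_rng(1)
for n in (10, 100, 1000):
    sample_means = rng.standard_normal(200_000) / np.sqrt(n)  # law of S_n/n
    lhs = np.mean(np.abs(sample_means))   # E[phi(S_n/n)]; max phi = phi(0) = 0
    bound = (4.0 / n) ** 0.5
    print(n, lhs <= bound, round(lhs / bound, 2))  # ratio ~ sqrt(2/pi)/2 ~ 0.4
```

The true error here is $\sqrt{2/(\pi n)}$, so the $n^{-1/2}$ rate of the bound is of the correct order in this case.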

Proof. For each fixed $n \ge 1$, we use the notations $\mathcal{P}$, $\tilde{X}_i$ and $\mathcal{F}_i$ as in (3) and (6).
For each given $P \in \mathcal{P}$, set $\tilde{S}_n = \sum_{i=1}^{n} \tilde{X}_i$ and $\tilde{S}_n^P = \sum_{i=1}^{n} E_P[\tilde{X}_i | \mathcal{F}_{i-1}]$. By Proposition 2.1,
we know $\underline{\mu} \le \frac{\tilde{S}_n^P}{n} \le \overline{\mu}$, $P$-a.s., which implies that
$$E_P\left[\varphi\left(\frac{\tilde{S}_n}{n}\right)\right] - \max_{r \in [\underline{\mu}, \overline{\mu}]} \varphi(r) \le E_P\left[\varphi\left(\frac{\tilde{S}_n}{n}\right) - \varphi\left(\frac{\tilde{S}_n^P}{n}\right)\right] \le \frac{L_{\varphi}}{n} \left(E_P\left[|\tilde{S}_n - \tilde{S}_n^P|^{1+\alpha}\right]\right)^{\frac{1}{1+\alpha}}.$$
Since $\{\tilde{X}_i - E_P[\tilde{X}_i | \mathcal{F}_{i-1}]\}_{i=1}^{n}$ is a martingale-difference sequence, by Chatterji's inequality,
$$E_P\left[|\tilde{S}_n - \tilde{S}_n^P|^{1+\alpha}\right] \le 2^{1-\alpha} \sum_{i=1}^{n} E_P\left[\left|\tilde{X}_i - E_P[\tilde{X}_i | \mathcal{F}_{i-1}]\right|^{1+\alpha}\right] \le 2^{1-\alpha} \sum_{i=1}^{n} 2^{\alpha}\left(E_P\left[|\tilde{X}_i|^{1+\alpha}\right] + E_P\left[\left|E_P[\tilde{X}_i | \mathcal{F}_{i-1}]\right|^{1+\alpha}\right]\right) \le 4nC_{\alpha}. \qquad (8)$$
Thus, by Theorem 2.1, we obtain
$$\hat{\mathbb{E}}\left[\varphi\left(\frac{S_n}{n}\right)\right] - \max_{r \in [\underline{\mu}, \overline{\mu}]} \varphi(r) = \max_{P \in \mathcal{P}} E_P\left[\varphi\left(\frac{\tilde{S}_n}{n}\right)\right] - \max_{r \in [\underline{\mu}, \overline{\mu}]} \varphi(r) \le L_{\varphi}\left(\frac{4C_{\alpha}}{n^{\alpha}}\right)^{\frac{1}{1+\alpha}}.$$

On the other hand, there exists $\mu^* \in [\underline{\mu}, \overline{\mu}]$ such that $\varphi(\mu^*) = \max_{r \in [\underline{\mu}, \overline{\mu}]} \varphi(r)$ for the fixed $\varphi \in C_{Lip}(\mathbb{R})$. By Theorem 2.1, we can find $P_{i,1}, P_{i,2} \in \mathcal{P}$ such that $E_{P_{i,1}}[\tilde{X}_i] = \overline{\mu}$ and $E_{P_{i,2}}[\tilde{X}_i] = \underline{\mu}$.
For $i \le n$, if $\overline{\mu} = \underline{\mu}$, we define $P_i = P_{i,1}$. Otherwise, we define
$$P_i = \frac{\mu^* - \underline{\mu}}{\overline{\mu} - \underline{\mu}}\, P_{i,1} + \frac{\overline{\mu} - \mu^*}{\overline{\mu} - \underline{\mu}}\, P_{i,2}.$$
One can check that $P_i \in \mathcal{P}$ and $E_{P_i}[\tilde{X}_i] = \mu^*$ for $i \le n$, and $\tilde{X}_1, \ldots, \tilde{X}_n$ are independent under
$P^*$, where $P^*$ is defined by
$$P^* = \bigotimes_{i=1}^{n} P_i|_{\sigma(\tilde{X}_i)}.$$
We can verify that $P^* \in \mathcal{P}$ and $E_{P^*}[\tilde{X}_i] = E_{P^*}[\tilde{X}_i | \mathcal{F}_{i-1}] = \mu^*$. Similar to the above proof, we have
$$\hat{\mathbb{E}}\left[\varphi\left(\frac{S_n}{n}\right)\right] - \max_{r \in [\underline{\mu}, \overline{\mu}]} \varphi(r) \ge E_{P^*}\left[\varphi\left(\frac{1}{n}\sum_{i=1}^{n} \tilde{X}_i\right) - \varphi(\mu^*)\right] \ge -L_{\varphi}\left(\frac{4C_{\alpha}}{n^{\alpha}}\right)^{\frac{1}{1+\alpha}}.$$
The proof is completed. □


In particular, if $\alpha = 1$, we can give a more precise estimate. Noting that
$$E_P\left[\left(\tilde{X}_i - E_P[\tilde{X}_i | \mathcal{F}_{i-1}]\right)^2\right] \le \inf_{\mu \in [\underline{\mu}, \overline{\mu}]} E_P\left[(\tilde{X}_i - \mu)^2\right] \le \inf_{\mu \in [\underline{\mu}, \overline{\mu}]} \hat{\mathbb{E}}\left[(X_i - \mu)^2\right], \qquad (9)$$
we immediately have the following corollary, which generalizes the result in Song [15].
Corollary 3.1 Let $\{X_i\}_{i=1}^{\infty}$ be an i.i.d. sequence on a sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$
with $\overline{\mu} = \hat{\mathbb{E}}[X_1]$, $\underline{\mu} = -\hat{\mathbb{E}}[-X_1]$ and $\hat{\mathbb{E}}[|X_1|^2] < \infty$. Then
$$\sup_{|\varphi|_{\mathrm{Lip}} \le 1} \left| \hat{\mathbb{E}}\left[\varphi\left(\frac{S_n}{n}\right)\right] - \max_{r \in [\underline{\mu}, \overline{\mu}]} \varphi(r) \right| \le \frac{\overline{\sigma}(X_1)}{\sqrt{n}},$$
where $\overline{\sigma}^2(X_1) = \inf_{\mu \in [\underline{\mu}, \overline{\mu}]} \hat{\mathbb{E}}[|X_1 - \mu|^2]$ is called the upper variance in Walley [16].
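To make the inf–max structure of $\overline{\sigma}(X_1)$ concrete, consider the following example of ours (a simplified stand-in for the representing set $\mathcal{P}$): $X_1 \sim N(\theta, 1)$ with $\theta \in [-1, 1]$, so $\underline{\mu} = -1$, $\overline{\mu} = 1$ and $\hat{\mathbb{E}}[(X_1 - \mu)^2] = \max_{\theta}[(\theta - \mu)^2 + 1]$. The infimum over $\mu$ is attained at $\mu = 0$, giving $\overline{\sigma}(X_1) = \sqrt{2}$:

```python
import numpy as np

# Numerical inf-max computation of the upper standard deviation
# sigma_bar(X1) = inf_mu (max_theta E_theta[(X1 - mu)^2])^(1/2)
# for the family X1 ~ N(theta, 1), theta in [-1, 1].
thetas = np.linspace(-1.0, 1.0, 201)
mus = np.linspace(-1.0, 1.0, 201)
# E_theta[(X1 - mu)^2] = (theta - mu)^2 + 1 in closed form for N(theta, 1)
second_moment = (thetas[None, :] - mus[:, None]) ** 2 + 1.0
sigma_bar = np.sqrt(second_moment.max(axis=1).min())  # inf over mu, max over theta
print(round(float(sigma_bar), 6))                      # 1.414214
```

Note that $\overline{\sigma}(X_1) = \sqrt{2}$ is strictly smaller than the worst-case second moment $\max_{\theta} E_{\theta}[X_1^2]^{1/2} = \sqrt{2}$ evaluated at the mean-reference $\mu = 0$ only because the infimum centers the deviation optimally over $[\underline{\mu}, \overline{\mu}]$.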
Remark 3.1 Under the assumptions of Corollary 3.1 and with the upper variance defined as $\overline{\sigma}^2 := \sup_{P \in \mathcal{P}} E_P[(\tilde{X}_1 - E_P[\tilde{X}_1])^2]$, Fang et al. [3] obtained the following LLN with rate of convergence:
$$\hat{\mathbb{E}}\left[d^2_{[\underline{\mu}, \overline{\mu}]}\left(\frac{S_n}{n}\right)\right] \le \frac{2\left[\overline{\sigma}^2 + (\overline{\mu} - \underline{\mu})^2\right]}{n}, \qquad (10)$$
where $d_{[\underline{\mu}, \overline{\mu}]}(x) = \inf_{y \in [\underline{\mu}, \overline{\mu}]} |x - y|$. For each $P \in \mathcal{P}$, by (8) and (9),
$$E_P\left[d^2_{[\underline{\mu}, \overline{\mu}]}\left(\frac{\tilde{S}_n}{n}\right)\right] \le \frac{1}{n^2}\, E_P\left[\left(\tilde{S}_n - \tilde{S}_n^P\right)^2\right] \le \frac{\overline{\sigma}^2}{n}.$$
Thus (10) can be improved to
$$\hat{\mathbb{E}}\left[d^2_{[\underline{\mu}, \overline{\mu}]}\left(\frac{S_n}{n}\right)\right] \le \frac{\overline{\sigma}^2}{n}.$$
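In the degenerate classical case $\underline{\mu} = \overline{\mu} = \mu$ one has $d_{[\underline{\mu}, \overline{\mu}]}(S_n/n) = |S_n/n - \mu|$, and the improved bound becomes $\mathrm{Var}(S_n/n) \le \mathrm{Var}(X_1)/n$, which classically holds with equality. A quick Monte Carlo sanity check of ours:

```python
import numpy as np

# Degenerate classical check: when mu_low = mu_up = mu, the improved bound
# E_hat[d^2(S_n/n)] <= sigma_bar^2 / n collapses to the exact identity
# Var(S_n/n) = Var(X_1)/n, so the constant sigma_bar^2 is attained.
# X_i i.i.d. uniform on [0, 1]: mu = 1/2, Var(X_1) = 1/12.
rng = np.random.default_rng(2)
n, trials = 50, 100_000
sample_means = rng.random((trials, n)).mean(axis=1)
emp = np.mean((sample_means - 0.5) ** 2)   # Monte Carlo E[(S_n/n - mu)^2]
exact = (1.0 / 12.0) / n                   # Var(X_1)/n
print(abs(emp - exact) < 1e-4)             # True up to Monte Carlo error
```

This suggests the constant $\overline{\sigma}^2$ in the improved bound cannot be reduced in general.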

References
[1] S. Chatterji, An Lp -convergence theorem, Ann. Math. Statist., 40(3) (1969), 1068-1070.

[2] L. Denis, M. Hu, S. Peng, Function spaces and capacity related to a sublinear expectation:
application to G-Brownian motion paths, Potential Anal., 34 (2011), 139-161.

[3] X. Fang, S. Peng, Q. Shao, Y. Song, Limit theorems with rate of convergence under
sublinear expectations, Bernoulli, 25 (2019), 2564-2596.

[4] X. Guo and X. Li, On the laws of large numbers for pseudo-independent random variables
under sublinear expectation, Statist. and Probab. Lett., 172, (2021), 109042.

[5] M. Hu, X. Li, Independence Under the G-Expectation Framework, J. Theor. Probab., 27
(2014), 1011-1020.

[6] M. Hu, S. Peng, On representation theorem of G-expectations and paths of G-Brownian
motion, Acta Math. Appl. Sin. Engl. Ser., 25 (2009), 539-546.

[7] M. Hu, S. Peng, G-Lévy processes under sublinear expectations, Probab. Uncertain. Quant.
Risk, 6 (2021), 1-22.
[8] X. Li, Sublinear expectations and its applications in game theory, PhD thesis, Shandong
University, 2013.

[9] X. Li, G. Zong, On the necessary and sufficient conditions for Peng’s law of large numbers
under sublinear expectations, preprint, 2020.

[10] S. Peng, G-expectation, G-Brownian Motion and Related Stochastic Calculus of Itô type,
Stochastic analysis and applications, Abel Symp., Vol. 2, Springer, Berlin, 2007, 541-567.

[11] S. Peng, Multi-dimensional G-Brownian motion and related stochastic calculus under G-
expectation, Stochastic Process. Appl., 118 (2008), 2223-2253.

[12] S. Peng, Survey on normal distributions, central limit theorem, Brownian motion and
the related stochastic calculus under sublinear expectations, Science in China, Series A.
Mathematics, 52 (2009), 1391-1411.

[13] S. Peng, Nonlinear Expectations and Stochastic Calculus under Uncertainty, Springer
(2019).

[14] S. Peng, Law of large numbers and central limit theorem under nonlinear expectations,
Probab. Uncertain. Quant. Risk, 4 (2019), 1-8.

[15] Y. Song, Stein’s Method for Law of Large Numbers under Sublinear Expectations,
arXiv:1904.0467v1 (2019).

[16] P. Walley, Statistical Reasoning with Imprecise Probabilities, Chapman & Hall (1991).
