arXiv:math/0601035v2 [math.PR] 31 Dec 2006

G-Expectation, G-Brownian Motion and
Related Stochastic Calculus of Itô Type
Shige PENG

Institute of Mathematics, Fudan University


Institute of Mathematics
Shandong University
250100, Jinan, China
peng@sdu.edu.cn
1st version: arXiv:math.PR/0601035 v1 3 Jan 2006
Abstract. We introduce a notion of nonlinear expectation, the G-expectation,
generated by a nonlinear heat equation with a given infinitesimal generator $G$. We
first discuss the notion of the G-standard normal distribution. With this nonlinear
distribution we can introduce our G-expectation, under which the canonical process is
a G-Brownian motion. We then establish the related stochastic calculus, especially
stochastic integrals of Itô's type with respect to our G-Brownian motion, and derive
the related Itô's formula. We also give the existence and uniqueness of stochastic
differential equations under our G-expectation. As compared with our previous
framework of g-expectations, the theory of G-expectation is intrinsic in the sense
that it is not based on a given (linear) probability space.

Keywords: g-expectation, G-expectation, G-normal distribution, BSDE, SDE,
nonlinear probability theory, nonlinear expectation, Brownian motion, Itô's
stochastic calculus, Itô's integral, Itô's formula, Gaussian process, quadratic
variation process

MSC 2000 Classification Numbers: 60H10, 60H05, 60H30, 60J60, 60J65,
60A05, 60E05, 60G05, 60G51, 35K55, 35K15, 49L25

The author thanks the Natural Science Foundation of China for partial support,
grant No. 10131040. He thanks the anonymous referees for their constructive suggestions,
as well as Juan Li for her corrections of typos. Special thanks are due to the organizers of the
memorable Abel Symposium 2005 for their warm hospitality and excellent work (see also this
paper in: http://abelsymposium.no/2005/preprints).
1 Introduction
In 1933 Andrei Kolmogorov published his Foundations of the Theory of Probability
(Grundbegriffe der Wahrscheinlichkeitsrechnung), which set out the axiomatic
basis for modern probability theory. The whole theory is built on the Measure
Theory created by Émile Borel and Henri Lebesgue and profoundly developed
by Radon and Fréchet. The triple $(\Omega, \mathcal{F}, P)$, i.e., a measurable space
$(\Omega, \mathcal{F})$ equipped with a probability measure $P$, has become a standard notion which
appears in most papers of probability and mathematical finance. The second
important notion, which is in fact of a standing equivalent to that of the probability measure
itself, is the notion of expectation. The expectation $E[X]$ of an $\mathcal{F}$-measurable
random variable $X$ is defined as the integral $\int_\Omega X \, dP$. A very original idea of
Kolmogorov's Grundbegriffe is to use the Radon-Nikodym theorem to introduce the
conditional probability and the related conditional expectation under a given
$\sigma$-algebra $\mathcal{G} \subset \mathcal{F}$. It is hard to imagine the present state of the art of probability
theory, especially of stochastic processes, e.g., martingale theory, without such a
notion of conditional expectation. A given time information $(\mathcal{F}_t)_{t \geq 0}$ is so
ingeniously and consistently combined with the related conditional expectations
$E[X|\mathcal{F}_t]$, $t \geq 0$. Itô's calculus, i.e., Itô's integration, Itô's formula and Itô's equation
introduced since 1942 [24], is, I think, the most beautiful discovery on this ground.
A very interesting problem is to develop a nonlinear expectation $E[\cdot]$ under
which we still have such a notion of conditional expectation. A notion of
g-expectation was introduced by Peng, 1997 ([35] and [36]), in which the conditional
expectation $E_g[X|\mathcal{F}_t]$, $t \geq 0$, is the solution of a backward stochastic differential
equation (BSDE), within the classical framework of Itô's calculus, with
$X$ as its given terminal condition and with a given real function $g$ as the generator
of the BSDE, driven by a Brownian motion defined on a given probability
space $(\Omega, \mathcal{F}, P)$. It is completely and perfectly characterized by the function $g$.
The above conditional expectation is characterized by the following well-known
condition:
\[
E_g[E_g[X|\mathcal{F}_t] I_A] = E_g[X I_A], \quad \forall A \in \mathcal{F}_t.
\]
Since then many results have been obtained in this subject (see, among others,
[4], [5], [6], [7], [11], [12], [8], [9], [25], [26], [37], [41], [42], [44], [46], [27]).
In [40] (see also [39]), we have constructed a kind of filtration-consistent nonlinear
expectation through the so-called nonlinear Markov chain. As compared
with the framework of g-expectation, the theory of G-expectation is intrinsic,
a meaning similar to that of intrinsic geometry, in the sense that it is not based
on a classical probability space given a priori.
In this paper, we concentrate on a concrete case of the above situation
and introduce a notion of G-expectation which is generated by a very simple
one-dimensional fully nonlinear heat equation, called the G-heat equation, whose
coefficient has only one parameter more than the classical heat equation considered
since Bachelier 1900 and Einstein 1905 to describe Brownian motion. But
this slight generalization changes the whole picture. Firstly, a random variable
$X$ with G-normal distribution is defined via the heat equation. With this
single nonlinear distribution we manage to introduce our G-expectation, under
which the canonical process is a G-Brownian motion.
We then establish the related stochastic calculus, especially stochastic integrals
of Itô's type with respect to our G-Brownian motion. A new type of Itô's
formula is obtained. We have also established the existence and uniqueness of
stochastic differential equations under our G-stochastic calculus.
In this paper we concentrate on 1-dimensional G-Brownian motion.
But our method of [40] can be applied to the multi-dimensional G-normal
distribution, G-Brownian motion and the related stochastic calculus. This will
be given in [43].
Recently a new type of second order BSDE was proposed to give a probabilistic
approach to fully nonlinear second order PDEs, see [10]. In finance, a type
of uncertain volatility model, in which the PDE of Black-Scholes type is modified
to a fully nonlinear model, was introduced in [3] and [29]. A point of view of nonlinear
expectation and conditional expectation was proposed in [39] and [40]. When
I presented the result of this paper at the Workshop on Risk Measures in Evry,
July 2006, I met Laurent Denis and got to learn his interesting work, joint with
Martini, on volatility model uncertainty [16]. See also our forthcoming paper
[17] for the pathwise analysis of G-Brownian motion.
As indicated in Remark 3, the nonlinear expectations discussed in this paper
are equivalent to the notion of coherent risk measures. This, with the related
conditional expectations $E[\cdot|\mathcal{F}_t]$, $t \geq 0$, makes a dynamic risk measure: the
G-risk measure.
This paper is organized as follows: in Section 2, we recall the framework
established in [40] and adapt it to our objective. In Section 3 we introduce the
1-dimensional standard G-normal distribution and discuss its main properties. In
Section 4 we introduce 1-dimensional G-Brownian motion, the corresponding
G-expectation and their main properties. We then establish the stochastic integral
with respect to our G-Brownian motion of Itô's type and the corresponding
Itô's formula in Section 5, and the existence and uniqueness theorem of SDEs
driven by G-Brownian motion in Section 6.
2 Nonlinear expectation: a general framework
We briefly recall the notion of nonlinear expectations introduced in [40]. Following
Daniell (see Daniell 1918 [14]) in his famous Daniell's integration, we
begin with a vector lattice. Let $\Omega$ be a given set and let $\mathcal{H}$ be a vector lattice of
real functions defined on $\Omega$ containing 1, namely, $\mathcal{H}$ is a linear space such that
$1 \in \mathcal{H}$ and that $X \in \mathcal{H}$ implies $|X| \in \mathcal{H}$. $\mathcal{H}$ is a space of random variables. We
assume the functions in $\mathcal{H}$ are all bounded. Notice that
\[
a \wedge b = \min\{a, b\} = \frac{1}{2}(a + b - |a - b|), \quad a \vee b = -[(-a) \wedge (-b)].
\]
Thus $X, Y \in \mathcal{H}$ implies that $X \wedge Y$, $X \vee Y$, $X^+ = X \vee 0$ and $X^- = (-X)^+$ are
all in $\mathcal{H}$.
Definition 1  A nonlinear expectation $E$ is a functional $\mathcal{H} \mapsto \mathbb{R}$ satisfying
the following properties:
(a) Monotonicity: if $X, Y \in \mathcal{H}$ and $X \geq Y$, then $E[X] \geq E[Y]$.
(b) Preserving of constants: $E[c] = c$.
In this paper we are interested in the expectations which satisfy
(c) Sub-additivity (or the self-dominated property):
\[
E[X] - E[Y] \leq E[X - Y], \quad \forall X, Y \in \mathcal{H}.
\]
(d) Positive homogeneity: $E[\lambda X] = \lambda E[X]$, $\forall \lambda \geq 0$, $X \in \mathcal{H}$.
(e) Constant translatability: $E[X + c] = E[X] + c$.
Remark 2  The above condition (d) has an equivalent form: $E[\lambda X] = \lambda^+ E[X] + \lambda^- E[-X]$.
This form will be very convenient for the conditional expectations
studied in this paper (see (vi) of Proposition 16).
Remark 3  We recall that the notion of the above expectations satisfying (c)-(e) was
systematically introduced by Artzner, Delbaen, Eber and Heath [1], [2], in the
case where $\Omega$ is a finite set, and by Delbaen [15] in the general situation, with the
notion of risk measure: $\rho(X) = E[-X]$. See also Huber [23] for an even earlier
study of this notion $E$ (called the upper expectation $E^*$ in Ch. 10 of [23]) in the
case where $\Omega$ is a finite set. See Rosazza Gianin [46] or Peng [38], El Karoui & Barrieu
[18], [19] for dynamic risk measures using g-expectations. Super-hedging and super-pricing
(see [20] and [21]) are also closely related to this formulation.
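The sublinear structure (b)-(e) can be made concrete on a finite $\Omega$. The following minimal Python sketch (a purely illustrative addition of ours, not from the original text; the three probability weight vectors are arbitrary choices) realizes $E$ as a supremum of linear expectations, which is exactly the representation behind the coherent risk measures of the above remark, and numerically checks the properties of Definition 1:

```python
import numpy as np

# Hypothetical scenario weights: each row is a probability vector on a
# 3-point sample space; E[X] = max over rows of the linear expectation.
scenarios = np.array([
    [0.2, 0.3, 0.5],
    [0.4, 0.4, 0.2],
    [1/3, 1/3, 1/3],
])

def E(X):
    """Sublinear expectation E[X] = max_P E_P[X]."""
    return float(np.max(scenarios @ np.asarray(X, dtype=float)))

X = np.array([1.0, -2.0, 3.0])
Y = np.array([0.5, 1.0, -1.0])

# (b) constant preserving, (c) sub-additivity, (d) positive homogeneity,
# (e) constant translatability:
assert abs(E(7.0 * np.ones(3)) - 7.0) < 1e-12
assert E(X + Y) <= E(X) + E(Y) + 1e-12
assert abs(E(2.5 * X) - 2.5 * E(X)) < 1e-12
assert abs(E(X + 4.0) - (E(X) + 4.0)) < 1e-12
print("E[X] =", E(X))
```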
Remark 4  We observe that $\mathcal{H}_0 = \{X \in \mathcal{H},\ E[|X|] = 0\}$ is a linear subspace
of $\mathcal{H}$. To take $\mathcal{H}_0$ as our null space, we introduce the quotient space $\mathcal{H}/\mathcal{H}_0$.
Observe that, for every $\{X\} \in \mathcal{H}/\mathcal{H}_0$ with a representative $X \in \mathcal{H}$, we can
define an expectation $E[\{X\}] := E[X]$ which still satisfies (a)-(e) of Definition
1. Following [40], we set $\|X\| := E[|X|]$, $X \in \mathcal{H}/\mathcal{H}_0$. It is easy to check that
$\mathcal{H}/\mathcal{H}_0$ is a normed space under $\|\cdot\|$. We then extend $\mathcal{H}/\mathcal{H}_0$ to its completion $[\mathcal{H}]$
under this norm. $([\mathcal{H}], \|\cdot\|)$ is a Banach space. The nonlinear expectation $E[\cdot]$
can also be continuously extended from $\mathcal{H}/\mathcal{H}_0$ to $[\mathcal{H}]$, and it satisfies (a)-(e).
For any $X \in \mathcal{H}$, the mappings
\[
X^+(\omega): \mathcal{H} \mapsto \mathcal{H} \quad \text{and} \quad X^-(\omega): \mathcal{H} \mapsto \mathcal{H}
\]
satisfy
\[
|X^+ - Y^+| \leq |X - Y| \quad \text{and} \quad |X^- - Y^-| = |(-X)^+ - (-Y)^+| \leq |X - Y|.
\]
Thus they are both contraction mappings under $\|\cdot\|$ and can be continuously
extended to the Banach space $([\mathcal{H}], \|\cdot\|)$.
We define the partial order in this Banach space.
Definition 5  An element $X$ in $([\mathcal{H}], \|\cdot\|)$ is said to be nonnegative, or $X \geq 0$,
$0 \leq X$, if $X = X^+$. We also denote by $X \geq Y$, or $Y \leq X$, if $X - Y \geq 0$.

It is easy to check that $X \geq Y$ and $Y \geq X$ imply $X = Y$ in $([\mathcal{H}], \|\cdot\|)$.
The nonlinear expectation $E[\cdot]$ can be continuously extended to $([\mathcal{H}], \|\cdot\|)$, on
which (a)-(e) still hold.
3 G-normal distributions
For a given positive integer $n$, we denote by $lip(\mathbb{R}^n)$ the space of all bounded
and Lipschitz real functions on $\mathbb{R}^n$. In this section $\mathbb{R}$ is considered as $\Omega$ and
$lip(\mathbb{R})$ as $\mathcal{H}$.
In the classical linear situation, a random variable $X(x) = x$ with standard
normal distribution, i.e., $X \sim N(0, 1)$, can be characterized by
\[
E[\phi(X)] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{x^2}{2}} \phi(x) \, dx, \quad \forall \phi \in lip(\mathbb{R}).
\]
It is known since Bachelier 1900 and Einstein 1905 that $E[\phi(X)] = u(1, 0)$, where
$u = u(t, x)$ is the solution of the heat equation
\[
\partial_t u = \frac{1}{2} \partial^2_{xx} u \tag{1}
\]
with Cauchy condition $u(0, x) = \phi(x)$.
In this paper we set $G(a) = \frac{1}{2}(a^+ - \sigma_0^2 a^-)$, $a \in \mathbb{R}$, where $\sigma_0 \in [0, 1]$ is fixed.
Definition 6  A real valued random variable $X$ with the standard G-normal
distribution is characterized by its G-expectation defined by
\[
E[\phi(X)] = P^G_1(\phi) := u(1, 0), \quad \phi \in lip(\mathbb{R}) \mapsto \mathbb{R},
\]
where $u = u(t, x)$ is a bounded continuous function on $[0, \infty) \times \mathbb{R}$ which is the
(unique) viscosity solution of the following nonlinear parabolic partial differential
equation (PDE):
\[
\partial_t u - G(\partial^2_{xx} u) = 0, \quad u(0, x) = \phi(x). \tag{2}
\]
In case no confusion is caused, we often call the functional $P^G_1(\cdot)$ the standard
G-normal distribution. When $\sigma_0 = 1$, the above PDE becomes the standard
heat equation (1) and thus this G-distribution is just the classical normal
distribution $N(0, 1)$:
\[
P^G_1(\phi) = P_1(\phi) := \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{x^2}{2}} \phi(x) \, dx.
\]
Remark 7  The function $G$ can be written as $G(a) = \frac{1}{2} \sup_{\sigma_0 \leq \sigma \leq 1} \sigma^2 a$, thus
the nonlinear heat equation (2) is a special kind of Hamilton-Jacobi-Bellman
equation. The existence and uniqueness of (2) in the sense of viscosity solutions
can be found in, for example, [13], [22], [34], [47], and in [28] for $C^{1,2}$-solutions if
$\sigma_0 > 0$ (see also [32] for elliptic cases). Readers who are unfamiliar with the
notion of viscosity solutions of PDEs can simply consider, throughout the whole
paper, the case $\sigma_0 > 0$, under which the solution $u$ becomes a classical smooth function.
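As a numerical illustration of Definition 6 (a sketch of ours under arbitrary discretization choices, not part of the original development), one can approximate the viscosity solution of the G-heat equation (2) by an explicit finite-difference scheme and read off $E[\phi(X)] = u(1, 0)$; for a convex $\phi$ the result should agree with the classical $N(0,1)$ expectation, in line with Proposition 11 below:

```python
import numpy as np

# Explicit finite differences for du/dt = G(d^2u/dx^2),
# G(a) = (a^+ - sigma0^2 * a^-) / 2, on a truncated domain.
def G(a, sigma0):
    return 0.5 * (np.maximum(a, 0.0) - sigma0**2 * np.maximum(-a, 0.0))

def g_expectation(phi, sigma0, T=1.0, L=6.0, nx=601):
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    dt = 0.25 * dx**2              # explicit-scheme stability margin
    u = phi(x)
    t = 0.0
    while t < T:
        h = min(dt, T - t)
        d2 = np.zeros_like(u)
        d2[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + h * G(d2, sigma0)  # boundary values frozen
        t += h
    return u[nx // 2]              # u(T, 0)

# phi(x) = x^2 is unbounded but convex; the truncation error near the
# origin is negligible, and the value should be roughly 1.0 (variance of
# N(0,1)) for any sigma0.
print(g_expectation(lambda x: x**2, sigma0=0.5))
```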
Remark 8  It is known that $u(t, \cdot) \in lip(\mathbb{R})$ (see e.g. [47] Ch. 4, Prop. 3.1 or [34]
Lemma 3.1 for the Lipschitz continuity of $u(t, \cdot)$, or Lemma 5.5 and Proposition
5.6 in [39] for a more general conclusion). The boundedness follows simply from the
comparison theorem (or maximum principle) of this PDE. It is also easy to
check that, for a given $\psi \in lip(\mathbb{R}^2)$, $P^G_1(\psi(x, \cdot))$ is still a bounded and Lipschitz
function in $x$.
In general situations we have, from the comparison theorem of PDEs,
\[
P^G_1(\phi) \geq P_1(\phi), \quad \forall \phi \in lip(\mathbb{R}). \tag{3}
\]
The corresponding normal distribution with mean $\bar{x} \in \mathbb{R}$ and variance $t > 0$
is $P^G_1(\phi(\bar{x} + \sqrt{t} \times \cdot))$. Just like the classical situation, we have

Lemma 9  For each $\phi \in lip(\mathbb{R})$, the function
\[
u(t, x) = P^G_1(\phi(x + \sqrt{t} \times \cdot)), \quad (t, x) \in [0, \infty) \times \mathbb{R}, \tag{4}
\]
is the solution of the nonlinear heat equation (2) with the initial condition
$u(0, \cdot) = \phi(\cdot)$.
Proof. Let $u \in C([0, \infty) \times \mathbb{R})$ be the viscosity solution of (2) with $u(0, \cdot) =
\phi(\cdot) \in lip(\mathbb{R})$. For a fixed $(\bar{t}, \bar{x}) \in (0, \infty) \times \mathbb{R}$, we denote $\bar{u}(t, x) = u(t \times \bar{t}, x\sqrt{\bar{t}} + \bar{x})$.
Then $\bar{u}$ is the viscosity solution of (2) with the initial condition
$\bar{u}(0, x) = \phi(x\sqrt{\bar{t}} + \bar{x})$. Indeed, let $\psi$ be a $C^{1,2}$ function on $(0, \infty) \times \mathbb{R}$ such that
$\psi \geq \bar{u}$ (resp. $\psi \leq \bar{u}$) and $\psi(\tau, \xi) = \bar{u}(\tau, \xi)$ for a fixed $(\tau, \xi) \in (0, \infty) \times \mathbb{R}$. We
have $\psi(\frac{t}{\bar{t}}, \frac{x - \bar{x}}{\sqrt{\bar{t}}}) \geq u(t, x)$, for all $(t, x)$, and
\[
\psi\left(\frac{t}{\bar{t}}, \frac{x - \bar{x}}{\sqrt{\bar{t}}}\right) = u(t, x), \quad \text{at } (t, x) = (\tau\bar{t},\ \xi\sqrt{\bar{t}} + \bar{x}).
\]
Since $u$ is the viscosity solution of (2), at the point $(t, x) = (\tau\bar{t},\ \xi\sqrt{\bar{t}} + \bar{x})$ we
have
\[
\frac{\partial \psi(\frac{t}{\bar{t}}, \frac{x - \bar{x}}{\sqrt{\bar{t}}})}{\partial t} - G\left(\frac{\partial^2 \psi(\frac{t}{\bar{t}}, \frac{x - \bar{x}}{\sqrt{\bar{t}}})}{\partial x^2}\right) \leq 0 \quad (\text{resp. } \geq 0).
\]
But since $G$ is positively homogeneous, i.e., $G(\lambda a) = \lambda G(a)$ for $\lambda \geq 0$, we thus derive
\[
\left(\frac{\partial \psi(t, x)}{\partial t} - G\left(\frac{\partial^2 \psi(t, x)}{\partial x^2}\right)\right)\Big|_{(t, x) = (\tau, \xi)} \leq 0 \quad (\text{resp. } \geq 0).
\]
This implies that $\bar{u}$ is the viscosity subsolution (resp. supersolution) of (2).
According to the definition of $P^G_1(\cdot)$ we obtain (4). $\Box$
Definition 10  We denote
\[
P^G_t(\phi)(x) = P^G_1(\phi(x + \sqrt{t} \times \cdot)) = u(t, x), \quad (t, x) \in [0, \infty) \times \mathbb{R}. \tag{5}
\]

From the above lemma, for each $\phi \in lip(\mathbb{R})$, we have the following Kolmogorov-Chapman
chain rule:
\[
P^G_t(P^G_s(\phi))(x) = P^G_{t+s}(\phi)(x), \quad s, t \in [0, \infty),\ x \in \mathbb{R}. \tag{6}
\]
Such a type of nonlinear semigroup was studied in Nisio 1976 [30], [31].
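A quick numerical sanity check of the chain rule (6), reusing the same explicit discretization of (2) as in the sketch after Remark 7 (grid sizes and the test function are arbitrary choices of ours):

```python
import numpy as np

# Evolve the G-heat equation for time T from initial data u0 on a grid.
def evolve(u0, sigma0, T, dx):
    u, t, dt = u0.copy(), 0.0, 0.25 * dx**2
    while t < T:
        h = min(dt, T - t)
        d2 = np.zeros_like(u)
        d2[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u += h * 0.5 * (np.maximum(d2, 0) - sigma0**2 * np.maximum(-d2, 0))
        t += h
    return u

x = np.linspace(-8, 8, 801); dx = x[1] - x[0]
phi = np.tanh(x)                     # a bounded Lipschitz test function
sigma0, s, t = 0.5, 0.3, 0.7
two_step = evolve(evolve(phi, sigma0, s, dx), sigma0, t, dx)   # P_t(P_s(phi))
one_step = evolve(phi, sigma0, s + t, dx)                      # P_{t+s}(phi)
print(np.max(np.abs(two_step - one_step)[200:-200]))  # small away from boundary
```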
Proposition 11  For each $t > 0$, the G-normal distribution $P^G_t$ is a nonlinear
expectation on $\mathcal{H} = lip(\mathbb{R})$, with $\Omega = \mathbb{R}$, satisfying (a)-(e) of Definition 1. The
corresponding completion space $[\mathcal{H}] = [lip(\mathbb{R})]_t$ under the norm $\|\phi\|_t :=
P^G_t(|\phi|)(0)$ contains $\phi(x) = x^n$, $n = 1, 2, \dots$, as well as $x^n \psi$, $\psi \in lip(\mathbb{R})$, as its
special elements. Relation (5) still holds. We also have the following properties:
(1) Central symmetry: $P^G_t(\phi(\cdot)) = P^G_t(\phi(-\cdot))$;
(2) For each convex $\phi \in [lip(\mathbb{R})]$ we have
\[
P^G_t(\phi)(0) = \frac{1}{\sqrt{2\pi t}} \int_{-\infty}^{\infty} \phi(x) \exp\left(-\frac{x^2}{2t}\right) dx;
\]
for each concave $\phi$ we have, for $\sigma_0 > 0$,
\[
P^G_t(\phi)(0) = \frac{1}{\sqrt{2\pi t}\,\sigma_0} \int_{-\infty}^{\infty} \phi(x) \exp\left(-\frac{x^2}{2t\sigma_0^2}\right) dx,
\]
and $P^G_t(\phi)(0) = \phi(0)$ for $\sigma_0 = 0$. In particular, we have
\[
P^G_t((x)_{x \in \mathbb{R}}) = 0, \quad P^G_t((x^{2n+1})_{x \in \mathbb{R}}) = P^G_t((-x^{2n+1})_{x \in \mathbb{R}}), \quad n = 1, 2, \dots,
\]
\[
P^G_t((x^2)_{x \in \mathbb{R}}) = t, \quad -P^G_t((-x^2)_{x \in \mathbb{R}}) = \sigma_0^2 t.
\]
Remark 12  Corresponding to the above four expressions, a random variable $X$ with the
G-normal distribution $P^G_t$ satisfies
\[
E[X] = 0, \quad E[X^{2n+1}] = E[-X^{2n+1}],
\]
\[
E[X^2] = t, \quad -E[-X^2] = \sigma_0^2 t.
\]
See the next section for a detailed study.
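These expressions can be checked directly, since for convex (resp. concave) $\phi$ the G-normal reduces, by Proposition 11, to the classical normal with variance $t$ (resp. $\sigma_0^2 t$). A small quadrature sketch (our own illustration; the grid parameters are arbitrary):

```python
import numpy as np

# Riemann-sum quadrature of a Gaussian integral with given variance.
def gauss_int(phi, var, n=200001, L=50.0):
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    w = np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return float(np.sum(phi(x) * w) * dx)

t, sigma0 = 2.0, 0.5
print(gauss_int(lambda x: x**2, t))                # E[X^2]   = t = 2.0
print(-gauss_int(lambda x: -x**2, sigma0**2 * t))  # -E[-X^2] = sigma0^2 t = 0.5
print(gauss_int(lambda x: x, t))                   # E[X]     = 0
```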
4 1-dimensional G-Brownian motion under G-expectation
In the rest of this paper, we denote by $\Omega = C_0(\mathbb{R}^+)$ the space of all $\mathbb{R}$-valued
continuous paths $(\omega_t)_{t \in \mathbb{R}^+}$ with $\omega_0 = 0$, equipped with the distance
\[
\rho(\omega^1, \omega^2) := \sum_{i=1}^{\infty} 2^{-i} \left[ \left( \max_{t \in [0, i]} |\omega^1_t - \omega^2_t| \right) \wedge 1 \right].
\]
We set, for each $t \in [0, \infty)$,
\[
\mathcal{W}_t := \{\omega_{\cdot \wedge t} : \omega \in \Omega\},
\]
\[
\mathcal{F}_t := \mathcal{B}_t(\mathcal{W}) = \mathcal{B}(\mathcal{W}_t), \quad
\mathcal{F}_{t+} := \mathcal{B}_{t+}(\mathcal{W}) = \bigcap_{s > t} \mathcal{B}_s(\mathcal{W}), \quad
\mathcal{F} := \bigvee_{s > 0} \mathcal{F}_s.
\]
$(\Omega, \mathcal{F})$ is the canonical space equipped with the natural filtration, and $\omega =
(\omega_t)_{t \geq 0}$ is the corresponding canonical process.
For each fixed $T \geq 0$, we consider the following space of random variables:
\[
L^0_{ip}(\mathcal{F}_T) := \{X(\omega) = \phi(\omega_{t_1}, \dots, \omega_{t_m}),\ m \geq 1,\ t_1, \dots, t_m \in [0, T],\ \phi \in lip(\mathbb{R}^m)\}.
\]
It is clear that $L^0_{ip}(\mathcal{F}_t) \subseteq L^0_{ip}(\mathcal{F}_T)$, for $t \leq T$. We also denote
\[
L^0_{ip}(\mathcal{F}) := \bigcup_{n=1}^{\infty} L^0_{ip}(\mathcal{F}_n).
\]
Remark 13  It is clear that $lip(\mathbb{R}^m)$, and then $L^0_{ip}(\mathcal{F}_T)$ and $L^0_{ip}(\mathcal{F})$, are vector
lattices. Moreover, since $\phi, \psi \in lip(\mathbb{R}^m)$ implies $\phi \cdot \psi \in lip(\mathbb{R}^m)$, thus $X, Y \in
L^0_{ip}(\mathcal{F}_T)$ implies $X \cdot Y \in L^0_{ip}(\mathcal{F}_T)$.

We will consider the canonical space and set $B_t(\omega) = \omega_t$, $t \in [0, \infty)$, for
$\omega \in \Omega$.
Definition 14  The canonical process $B$ is called a G-Brownian motion under
a nonlinear expectation $E$ defined on $L^0_{ip}(\mathcal{F})$ if for each $T > 0$, $m = 1, 2, \dots$,
and for each $\phi \in lip(\mathbb{R}^m)$, $0 \leq t_1 < \dots < t_m \leq T$, we have
\[
E[\phi(B_{t_1}, B_{t_2} - B_{t_1}, \dots, B_{t_m} - B_{t_{m-1}})] = \phi_m,
\]
where $\phi_m \in \mathbb{R}$ is obtained via the following procedure:
\[
\phi_1(x_1, \dots, x_{m-1}) = P^G_{t_m - t_{m-1}}(\phi(x_1, \dots, x_{m-1}, \cdot));
\]
\[
\phi_2(x_1, \dots, x_{m-2}) = P^G_{t_{m-1} - t_{m-2}}(\phi_1(x_1, \dots, x_{m-2}, \cdot));
\]
\[
\vdots
\]
\[
\phi_{m-1}(x_1) = P^G_{t_2 - t_1}(\phi_{m-2}(x_1, \cdot));
\]
\[
\phi_m = P^G_{t_1}(\phi_{m-1}(\cdot)).
\]
The related conditional expectation of $X = \phi(B_{t_1}, B_{t_2} - B_{t_1}, \dots, B_{t_m} - B_{t_{m-1}})$
under $\mathcal{F}_{t_j}$ is defined by
\[
E[X|\mathcal{F}_{t_j}] = E[\phi(B_{t_1}, B_{t_2} - B_{t_1}, \dots, B_{t_m} - B_{t_{m-1}})|\mathcal{F}_{t_j}]
= \phi_{m-j}(B_{t_1}, \dots, B_{t_j} - B_{t_{j-1}}). \tag{7}
\]
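The backward procedure of Definition 14 can be carried out numerically. The following rough sketch (an illustration of ours with hypothetical grid sizes and test function, not from the paper) computes $E[\phi(B_{t_1}, B_{t_2} - B_{t_1})]$ for $m = 2$ by two nested finite-difference solves of the G-heat equation:

```python
import numpy as np

# Solve the G-heat equation for time T from grid data psi and return u(T, 0).
def pde_value_at_0(psi_on_grid, sigma0, T, x):
    dx = x[1] - x[0]
    u, t, dt = psi_on_grid.astype(float), 0.0, 0.25 * dx**2
    while t < T:
        h = min(dt, T - t)
        d2 = np.zeros_like(u)
        d2[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + h * 0.5 * (np.maximum(d2, 0) - sigma0**2 * np.maximum(-d2, 0))
        t += h
    return u[len(x) // 2]

x = np.linspace(-6, 6, 241)
sigma0, t1, t2 = 0.5, 0.5, 1.0
phi = lambda x1, y: np.tanh(x1) * np.cos(y)    # bounded Lipschitz test case
# phi_1(x1) = P^G_{t2-t1}(phi(x1, .)), then E = P^G_{t1}(phi_1).
phi1 = np.array([pde_value_at_0(phi(x1, x), sigma0, t2 - t1, x) for x1 in x])
print(pde_value_at_0(phi1, sigma0, t1, x))     # E[phi(B_t1, B_t2 - B_t1)]
```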
It is proved in [40] that $E[\cdot]$ consistently defines a nonlinear expectation on
the vector lattice $L^0_{ip}(\mathcal{F}_T)$ as well as on $L^0_{ip}(\mathcal{F})$ satisfying (a)-(e) in Definition
1. It follows that $E[|X|]$, $X \in L^0_{ip}(\mathcal{F}_T)$ (resp. $L^0_{ip}(\mathcal{F})$), forms a norm and
that $L^0_{ip}(\mathcal{F}_T)$ (resp. $L^0_{ip}(\mathcal{F})$) can be continuously extended to a Banach space,
denoted by $L^1_G(\mathcal{F}_T)$ (resp. $L^1_G(\mathcal{F})$). For each $0 \leq t \leq T < \infty$, we have
$L^1_G(\mathcal{F}_t) \subseteq L^1_G(\mathcal{F}_T) \subset L^1_G(\mathcal{F})$. It is easy to check that, in $L^1_G(\mathcal{F}_T)$ (resp. $L^1_G(\mathcal{F})$),
$E[\cdot]$ still satisfies (a)-(e) in Definition 1.
Definition 15  The expectation $E[\cdot]: L^1_G(\mathcal{F}) \mapsto \mathbb{R}$ introduced through the above
procedure is called G-expectation. The corresponding canonical process $B$ is
called a G-Brownian motion under $E[\cdot]$.

For a given $p > 1$, we also denote $L^p_G(\mathcal{F}) = \{X \in L^1_G(\mathcal{F}),\ |X|^p \in L^1_G(\mathcal{F})\}$.
$L^p_G(\mathcal{F})$ is also a Banach space under the norm $\|X\|_p := (E[|X|^p])^{1/p}$. We have
(see Appendix)
\[
\|X + Y\|_p \leq \|X\|_p + \|Y\|_p
\]
and, for each $X \in L^p_G$, $Y \in L^q_G$ with $\frac{1}{p} + \frac{1}{q} = 1$,
\[
\|XY\| = E[|XY|] \leq \|X\|_p \|Y\|_q.
\]
With this we have $\|X\|_p \leq \|X\|_{p'}$ if $p \leq p'$.
We now consider the conditional expectation introduced in (7). For each
fixed $t = t_j \leq T$, the conditional expectation $E[\cdot|\mathcal{F}_t]: L^0_{ip}(\mathcal{F}_T) \mapsto L^0_{ip}(\mathcal{F}_t)$ is a
continuous mapping under $\|\cdot\|$ since $E[E[X|\mathcal{F}_t]] = E[X]$, $\forall X \in L^0_{ip}(\mathcal{F}_T)$, and
\[
E[E[X|\mathcal{F}_t] - E[Y|\mathcal{F}_t]] \leq E[X - Y],
\]
\[
\|E[X|\mathcal{F}_t] - E[Y|\mathcal{F}_t]\| \leq \|X - Y\|.
\]
It follows that $E[\cdot|\mathcal{F}_t]$ can also be extended as a continuous mapping $L^1_G(\mathcal{F}_T) \mapsto
L^1_G(\mathcal{F}_t)$. If the above $T$ is not fixed, then we can obtain $E[\cdot|\mathcal{F}_t]: L^1_G(\mathcal{F}) \mapsto L^1_G(\mathcal{F}_t)$.
Proposition 16  We list the properties of $E[\cdot|\mathcal{F}_t]$ that hold in $L^0_{ip}(\mathcal{F}_T)$ and still
hold for $X, Y \in L^1_G(\mathcal{F})$:
(i) $E[X|\mathcal{F}_t] = X$, for $X \in L^1_G(\mathcal{F}_t)$, $t \leq T$.
(ii) If $X \geq Y$, then $E[X|\mathcal{F}_t] \geq E[Y|\mathcal{F}_t]$.
(iii) $E[X|\mathcal{F}_t] - E[Y|\mathcal{F}_t] \leq E[X - Y|\mathcal{F}_t]$.
(iv) $E[E[X|\mathcal{F}_t]|\mathcal{F}_s] = E[X|\mathcal{F}_{t \wedge s}]$, $E[E[X|\mathcal{F}_t]] = E[X]$.
(v) $E[X + \eta|\mathcal{F}_t] = E[X|\mathcal{F}_t] + \eta$, $\eta \in L^1_G(\mathcal{F}_t)$.
(vi) $E[\eta X|\mathcal{F}_t] = \eta^+ E[X|\mathcal{F}_t] + \eta^- E[-X|\mathcal{F}_t]$, for each bounded $\eta \in L^1_G(\mathcal{F}_t)$.
(vii) For each $X \in L^1_G(\mathcal{F}^t_T)$, $E[X|\mathcal{F}_t] = E[X]$,
where $L^1_G(\mathcal{F}^t_T)$ is the extension, under $\|\cdot\|$, of $L^0_{ip}(\mathcal{F}^t_T)$ which consists of random
variables of the form $\phi(B_{t_1} - B_t, B_{t_2} - B_{t_1}, \dots, B_{t_m} - B_{t_{m-1}})$, $m = 1, 2, \dots$,
$\phi \in lip(\mathbb{R}^m)$, $t_1, \dots, t_m \in [t, T]$. Condition (vi) is the positive homogeneity, see
Remark 2.
Definition 17  An $X \in L^1_G(\mathcal{F})$ is said to be independent of $\mathcal{F}_t$ under the
G-expectation $E$ for some given $t \in [0, \infty)$, if for each real function $\phi$ suitably
defined on $\mathbb{R}$ such that $\phi(X) \in L^1_G(\mathcal{F})$ we have
\[
E[\phi(X)|\mathcal{F}_t] = E[\phi(X)].
\]
Remark 18  It is clear that all elements in $L^1_G(\mathcal{F})$ are independent of $\mathcal{F}_0$. Just
as in the classical situation, the increment of G-Brownian motion $(B_{t+s} - B_s)_{t \geq 0}$
is independent of $\mathcal{F}_s$. In fact it is a new G-Brownian motion since, just as in the
classical situation, the increments of $B$ are identically distributed.
Example 19  For each $0 \leq s \leq t$, we have $E[B_t - B_s|\mathcal{F}_s] = 0$ and, for
$n = 1, 2, \dots$,
\[
E[|B_t - B_s|^n | \mathcal{F}_s] = E[|B_{t-s}|^n] = \frac{1}{\sqrt{2\pi(t-s)}} \int_{-\infty}^{\infty} |x|^n \exp\left(-\frac{x^2}{2(t-s)}\right) dx.
\]
But we have
\[
E[-|B_t - B_s|^n | \mathcal{F}_s] = E[-|B_{t-s}|^n] = -\sigma_0^n E[|B_{t-s}|^n].
\]
Exactly as in classical cases, we have
\[
E[(B_t - B_s)^2|\mathcal{F}_s] = t - s, \quad E[(B_t - B_s)^4|\mathcal{F}_s] = 3(t - s)^2,
\]
\[
E[(B_t - B_s)^6|\mathcal{F}_s] = 15(t - s)^3, \quad E[(B_t - B_s)^8|\mathcal{F}_s] = 105(t - s)^4,
\]
\[
E[|B_t - B_s||\mathcal{F}_s] = \frac{\sqrt{2(t - s)}}{\sqrt{\pi}}, \quad
E[|B_t - B_s|^3|\mathcal{F}_s] = \frac{2\sqrt{2}(t - s)^{3/2}}{\sqrt{\pi}},
\]
\[
E[|B_t - B_s|^5|\mathcal{F}_s] = \frac{8\sqrt{2}(t - s)^{5/2}}{\sqrt{\pi}}.
\]
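Since $|x|^n$ is convex, these upper moments coincide with the classical Gaussian moments at the extreme volatility $\sigma = 1$, while the lower expectations correspond to $\sigma_0$. A Monte Carlo sketch of ours for the fourth moment (an illustration, not a statement from the paper):

```python
import numpy as np

# |x|^4 is convex, so E[|B_t - B_s|^4] is attained at sigma = 1 and
# -E[-|B_t - B_s|^4] at sigma = sigma0.
rng = np.random.default_rng(0)
t_minus_s, sigma0, n = 1.0, 0.5, 4
z = rng.standard_normal(2_000_000)
upper = np.mean(np.abs(np.sqrt(t_minus_s) * z) ** n)           # ~ 3(t-s)^2 = 3
lower = np.mean(np.abs(sigma0 * np.sqrt(t_minus_s) * z) ** n)  # ~ sigma0^4 * 3
print(upper, lower)   # roughly 3.0 and 0.1875
```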
Example 20  For each $n = 1, 2, \dots$, $0 \leq s \leq t \leq T$ and $X \in L^1_G(\mathcal{F}_s)$, since
$E[B^{2n-1}_{T-t}] = E[-B^{2n-1}_{T-t}]$, we have, by (vi) of Proposition 16,
\[
E[X(B_T - B_t)^{2n-1}] = E\big[X^+ E[(B_T - B_t)^{2n-1}|\mathcal{F}_t] + X^- E[-(B_T - B_t)^{2n-1}|\mathcal{F}_t]\big]
= E[|X|] \cdot E[B^{2n-1}_{T-t}],
\]
\[
E[X(B_T - B_t)|\mathcal{F}_s] = E[-X(B_T - B_t)|\mathcal{F}_s] = 0.
\]
We also have
\[
E[X(B_T - B_t)^2|\mathcal{F}_t] = X^+(T - t) - \sigma_0^2 X^-(T - t).
\]
Remark 21  It is clear that we can define an expectation $E^0[\cdot]$ on $L^0_{ip}(\mathcal{F})$ in the
same way as in Definition 14 with the standard normal distribution $P_1(\cdot)$ in the
place of $P^G_1(\cdot)$. Since $P_1(\cdot)$ is dominated by $P^G_1(\cdot)$ in the sense $P_1(\phi) - P_1(\psi) \leq
P^G_1(\phi - \psi)$, then $E^0[\cdot]$ can be continuously extended to $L^1_G(\mathcal{F})$. $E^0[\cdot]$ is a linear
expectation under which $(B_t)_{t \geq 0}$ behaves as a Brownian motion. We have
\[
E^0[X] \leq E[X], \quad \forall X \in L^1_G(\mathcal{F}). \tag{8}
\]
In particular, $E[B^{2n-1}_{T-t}] = E[-B^{2n-1}_{T-t}] \geq E^0[B^{2n-1}_{T-t}] = 0$. Such a kind of extension
under a domination relation was discussed in detail in [40].
The following property is very useful.

Proposition 22  Let $X, Y \in L^1_G(\mathcal{F})$ be such that $E[Y] = -E[-Y]$ (thus $E[Y] =
E^0[Y]$); then we have
\[
E[X + Y] = E[X] + E[Y].
\]
In particular, if $E[Y] = E[-Y] = 0$, then $E[X + Y] = E[X]$.

Proof. It is simply because we have $E[X + Y] \leq E[X] + E[Y]$ and
\[
E[X + Y] \geq E[X] - E[-Y] = E[X] + E[Y]. \qquad \Box
\]
Example 23  We have
\[
E[B^2_t - B^2_s|\mathcal{F}_s] = E[(B_t - B_s + B_s)^2 - B^2_s|\mathcal{F}_s]
= E[(B_t - B_s)^2 + 2(B_t - B_s)B_s|\mathcal{F}_s] = t - s,
\]
since $2(B_t - B_s)B_s$ satisfies the condition for $Y$ in Proposition 22, and
\[
E[(B^2_t - B^2_s)^2|\mathcal{F}_s] = E[\{(B_t - B_s + B_s)^2 - B^2_s\}^2|\mathcal{F}_s]
\]
\[
= E[\{(B_t - B_s)^2 + 2(B_t - B_s)B_s\}^2|\mathcal{F}_s]
\]
\[
= E[(B_t - B_s)^4 + 4(B_t - B_s)^3 B_s + 4(B_t - B_s)^2 B^2_s|\mathcal{F}_s]
\]
\[
\leq E[(B_t - B_s)^4] + 4E[|B_t - B_s|^3]\,|B_s| + 4(t - s)B^2_s
\]
\[
= 3(t - s)^2 + 8\sqrt{2/\pi}\,(t - s)^{3/2}|B_s| + 4(t - s)B^2_s.
\]
5 Itô's integral of G-Brownian motion
5.1 Bochner's integral
Definition 24  For $T \in \mathbb{R}^+$, a partition $\pi_T$ of $[0, T]$ is a finite ordered subset
$\pi = \{t_1, \dots, t_N\}$ such that $0 = t_0 < t_1 < \dots < t_N = T$. We denote
\[
\mu(\pi_T) = \max\{|t_{i+1} - t_i| : i = 0, 1, \dots, N - 1\}.
\]
We use $\pi^N_T = \{t^N_0 < t^N_1 < \dots < t^N_N\}$ to denote a sequence of partitions of $[0, T]$
such that $\lim_{N \to \infty} \mu(\pi^N_T) = 0$.

Let $p \geq 1$ be fixed. We consider the following type of simple processes: for
a given partition $\{t_0, \dots, t_N\} = \pi_T$ of $[0, T]$, we set
\[
\eta_t(\omega) = \sum_{j=0}^{N-1} \xi_j(\omega) I_{[t_j, t_{j+1})}(t),
\]
where $\xi_i \in L^p_G(\mathcal{F}_{t_i})$, $i = 0, 1, 2, \dots, N - 1$, are given. The collection of these
processes is denoted by $M^{p,0}_G(0, T)$.
Definition 25  For an $\eta \in M^{1,0}_G(0, T)$ with $\eta_t = \sum_{j=0}^{N-1} \xi_j(\omega) I_{[t_j, t_{j+1})}(t)$, the
related Bochner integral is
\[
\int_0^T \eta_t(\omega) \, dt = \sum_{j=0}^{N-1} \xi_j(\omega)(t_{j+1} - t_j).
\]

Remark 26  We set, for each $\eta \in M^{1,0}_G(0, T)$,
\[
\tilde{E}_T[\eta] := \frac{1}{T} \int_0^T E[\eta_t] \, dt = \frac{1}{T} \sum_{j=0}^{N-1} E[\xi_j(\omega)](t_{j+1} - t_j).
\]
It is easy to check that $\tilde{E}_T: M^{1,0}_G(0, T) \mapsto \mathbb{R}$ forms a nonlinear expectation
satisfying (a)-(e) of Definition 1. By Remark 4, we can introduce a natural
norm $\|\eta\|^1_T = \tilde{E}_T[|\eta|] = \frac{1}{T} \int_0^T E[|\eta_t|] \, dt$. Under this norm $M^{1,0}_G(0, T)$ can be
continuously extended to $M^1_G(0, T)$, which is a Banach space.
Definition 27  For each $p \geq 1$, we will denote by $M^p_G(0, T)$ the completion of
$M^{p,0}_G(0, T)$ under the norm
\[
\left( \frac{1}{T} \int_0^T E[|\eta_t|^p] \, dt \right)^{1/p} = \left( \frac{1}{T} \sum_{j=0}^{N-1} E[|\xi_j(\omega)|^p](t_{j+1} - t_j) \right)^{1/p}.
\]
We observe that
\[
E\left[\left| \int_0^T \eta_t(\omega) \, dt \right|\right] \leq \sum_{j=0}^{N-1} \|\xi_j(\omega)\|(t_{j+1} - t_j) = \int_0^T E[|\eta_t|] \, dt.
\]
We then have

Proposition 28  The linear mapping $\int_0^T \eta_t(\omega) \, dt: M^{1,0}_G(0, T) \mapsto L^1_G(\mathcal{F}_T)$ is
continuous and thus can be continuously extended to $M^1_G(0, T) \mapsto L^1_G(\mathcal{F}_T)$.
We still denote this extended mapping by $\int_0^T \eta_t(\omega) \, dt$, $\eta \in M^1_G(0, T)$. We have
\[
E\left[\left| \int_0^T \eta_t(\omega) \, dt \right|\right] \leq \int_0^T E[|\eta_t|] \, dt, \quad \forall \eta \in M^1_G(0, T). \tag{9}
\]
Since $M^p_G(0, T) \subset M^1_G(0, T)$ for $p \geq 1$, this definition makes sense for $\eta \in M^p_G(0, T)$.
5.2 Itô's integral of G-Brownian motion
Definition 29  For each $\eta \in M^{2,0}_G(0, T)$ of the form $\eta_t(\omega) = \sum_{j=0}^{N-1} \xi_j(\omega) I_{[t_j, t_{j+1})}(t)$,
we define
\[
I(\eta) = \int_0^T \eta(s) \, dB_s := \sum_{j=0}^{N-1} \xi_j (B_{t_{j+1}} - B_{t_j}).
\]
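Along a single path, Definition 29 is just a Riemann-type sum with left endpoints. The following sketch (our own illustration; the path is simulated under one hypothetical volatility scenario $\sigma(t) \in [\sigma_0, 1]$, one of the scenarios over which the G-expectation takes a supremum) evaluates $I(\eta)$ for $\eta_t = B_t$ and also anticipates identity (12) of the next subsection:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, sigma0 = 1.0, 1000, 0.5
t = np.linspace(0.0, T, N + 1)
sigma = sigma0 + (1 - sigma0) * np.abs(np.sin(5 * t[:-1]))  # arbitrary scenario
dB = sigma * np.sqrt(T / N) * rng.standard_normal(N)
B = np.concatenate([[0.0], np.cumsum(dB)])

eta = B[:-1]                        # eta_j = B_{t_j}, an adapted simple process
ito_integral = np.sum(eta * dB)     # sum_j eta_j (B_{t_{j+1}} - B_{t_j})
# Telescoping gives exactly: int B dB = B_T^2/2 - (sum of squared increments)/2,
# which is (12) with <B>_T approximated by sum dB^2.
print(ito_integral, 0.5 * B[-1]**2 - 0.5 * np.sum(dB**2))
```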
Lemma 30  The mapping $I: M^{2,0}_G(0, T) \mapsto L^2_G(\mathcal{F}_T)$ is a linear continuous
mapping and thus can be continuously extended to $I: M^2_G(0, T) \mapsto L^2_G(\mathcal{F}_T)$.
In fact we have
\[
E\left[ \int_0^T \eta(s) \, dB_s \right] = 0, \tag{10}
\]
\[
E\left[ \left( \int_0^T \eta(s) \, dB_s \right)^2 \right] \leq \int_0^T E[(\eta(t))^2] \, dt. \tag{11}
\]

Definition 31  We define, for a fixed $\eta \in M^2_G(0, T)$, the stochastic integral
\[
\int_0^T \eta(s) \, dB_s := I(\eta).
\]
It is clear that (10), (11) still hold for $\eta \in M^2_G(0, T)$.
Proof of Lemma 30. From Example 20, for each $j$,
\[
E[\xi_j (B_{t_{j+1}} - B_{t_j})|\mathcal{F}_{t_j}] = 0.
\]
We have
\[
E\left[ \int_0^T \eta(s) \, dB_s \right] = E\left[ \int_0^{t_{N-1}} \eta(s) \, dB_s + \xi_{N-1}(B_{t_N} - B_{t_{N-1}}) \right]
\]
\[
= E\left[ \int_0^{t_{N-1}} \eta(s) \, dB_s + E[\xi_{N-1}(B_{t_N} - B_{t_{N-1}})|\mathcal{F}_{t_{N-1}}] \right]
= E\left[ \int_0^{t_{N-1}} \eta(s) \, dB_s \right].
\]
We then can repeat this procedure to obtain (10). We now prove (11):
\[
E\left[ \left( \int_0^T \eta(s) \, dB_s \right)^2 \right]
= E\left[ \left( \int_0^{t_{N-1}} \eta(s) \, dB_s + \xi_{N-1}(B_{t_N} - B_{t_{N-1}}) \right)^2 \right]
\]
\[
= E\left[ \left( \int_0^{t_{N-1}} \eta(s) \, dB_s \right)^2
+ E\left[ 2\left( \int_0^{t_{N-1}} \eta(s) \, dB_s \right) \xi_{N-1}(B_{t_N} - B_{t_{N-1}})
+ \xi^2_{N-1}(B_{t_N} - B_{t_{N-1}})^2 \,\Big|\, \mathcal{F}_{t_{N-1}} \right] \right]
\]
\[
= E\left[ \left( \int_0^{t_{N-1}} \eta(s) \, dB_s \right)^2 + \xi^2_{N-1}(t_N - t_{N-1}) \right].
\]
Thus $E[(\int_0^{t_N} \eta(s) \, dB_s)^2] \leq E[(\int_0^{t_{N-1}} \eta(s) \, dB_s)^2] + E[\xi^2_{N-1}](t_N - t_{N-1})$. We
then repeat this procedure to deduce
\[
E\left[ \left( \int_0^T \eta(s) \, dB_s \right)^2 \right] \leq \sum_{j=0}^{N-1} E[(\xi_j)^2](t_{j+1} - t_j) = \int_0^T E[(\eta(t))^2] \, dt. \qquad \Box
\]
We list some main properties of Itô's integral of G-Brownian motion.
We denote, for some $0 \leq s \leq t \leq T$,
\[
\int_s^t \eta_u \, dB_u := \int_0^T I_{[s,t]}(u) \eta_u \, dB_u.
\]
We have
Proposition 32  Let $\eta, \theta \in M^2_G(0, T)$ and let $0 \leq s \leq r \leq t \leq T$. Then in
$L^1_G(\mathcal{F}_T)$ we have
(i) $\int_s^t \eta_u \, dB_u = \int_s^r \eta_u \, dB_u + \int_r^t \eta_u \, dB_u$,
(ii) $\int_s^t (\alpha \eta_u + \theta_u) \, dB_u = \alpha \int_s^t \eta_u \, dB_u + \int_s^t \theta_u \, dB_u$, if $\alpha$ is bounded and in $L^1_G(\mathcal{F}_s)$,
(iii) $E[X + \int_r^T \eta_u \, dB_u \,|\, \mathcal{F}_s] = E[X|\mathcal{F}_s]$, $\forall X \in L^1_G(\mathcal{F})$.
5.3 Quadratic variation process of G-Brownian motion
We now study a very interesting process of the G-Brownian motion. Let $\pi^N_t$,
$N = 1, 2, \dots$, be a sequence of partitions of $[0, t]$. We consider
\[
B^2_t = \sum_{j=0}^{N-1} [B^2_{t^N_{j+1}} - B^2_{t^N_j}]
= \sum_{j=0}^{N-1} 2B_{t^N_j}(B_{t^N_{j+1}} - B_{t^N_j}) + \sum_{j=0}^{N-1} (B_{t^N_{j+1}} - B_{t^N_j})^2.
\]
As $\mu(\pi^N_t) \to 0$, the first term on the right side tends to $2\int_0^t B_s \, dB_s$. The second
term must converge. We denote its limit by $\langle B \rangle_t$, i.e.,
\[
\langle B \rangle_t = \lim_{\mu(\pi^N_t) \to 0} \sum_{j=0}^{N-1} (B_{t^N_{j+1}} - B_{t^N_j})^2 = B^2_t - 2\int_0^t B_s \, dB_s. \tag{12}
\]
By the above construction, $\langle B \rangle_t$, $t \geq 0$, is an increasing process with $\langle B \rangle_0 =
0$. We call it the quadratic variation process of the G-Brownian motion
$B$. It perfectly characterizes the part of uncertainty, or ambiguity, of
G-Brownian motion. It is important to keep in mind that $\langle B \rangle_t$ is not a
deterministic process unless $\sigma_0 = 1$, i.e., unless $B$ is a classical Brownian motion.
In fact we have
Lemma 33  We have, for each $0 \leq s \leq t < \infty$,
\[
E[\langle B \rangle_t - \langle B \rangle_s \,|\, \mathcal{F}_s] = t - s, \tag{13}
\]
\[
E[-(\langle B \rangle_t - \langle B \rangle_s) \,|\, \mathcal{F}_s] = -\sigma_0^2 (t - s). \tag{14}
\]
Proof. By the definition of $\langle B \rangle$ and Proposition 32-(iii),
\[
E[\langle B \rangle_t - \langle B \rangle_s \,|\, \mathcal{F}_s] = E\left[ B^2_t - B^2_s - 2\int_s^t B_u \, dB_u \,\Big|\, \mathcal{F}_s \right]
= E[B^2_t - B^2_s \,|\, \mathcal{F}_s] = t - s.
\]
The last step can be checked as in Example 23. We then have (13). (14) can be
proved analogously with the consideration of $E[-(B^2_t - B^2_s)|\mathcal{F}_s] = -\sigma_0^2(t - s)$. $\Box$
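At the path level, $\langle B \rangle_t$ accumulates the realized squared volatility, so under any scenario it stays between $\sigma_0^2 t$ and $t$; (13)-(14) express this at the level of expectations. A simulation sketch of ours (the volatility path is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, sigma0 = 1.0, 200_000, 0.5
dt = T / N
t = np.linspace(0.0, T, N + 1)
sigma = sigma0 + (1 - sigma0) * (t[:-1] > 0.5)   # volatility switches at t = 0.5
dB = sigma * np.sqrt(dt) * rng.standard_normal(N)
qv = np.sum(dB**2)                               # approximates <B>_T as in (12)
# qv is close to the accumulated squared volatility, and lies in
# [sigma0^2 * T, T]:
print(qv, np.sum(sigma**2) * dt, sigma0**2 * T, T)
```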
To define the integration of a process $\eta \in M^1_G(0, T)$ with respect to $d\langle B \rangle$,
we first define a mapping:
\[
Q_{0,T}(\eta) = \int_0^T \eta(s) \, d\langle B \rangle_s := \sum_{j=0}^{N-1} \xi_j (\langle B \rangle_{t_{j+1}} - \langle B \rangle_{t_j}):
M^{1,0}_G(0, T) \mapsto L^1(\mathcal{F}_T).
\]

Lemma 34  For each $\eta \in M^{1,0}_G(0, T)$,
\[
E[|Q_{0,T}(\eta)|] \leq \int_0^T E[|\eta_s|] \, ds. \tag{15}
\]
Thus $Q_{0,T}: M^{1,0}_G(0, T) \mapsto L^1(\mathcal{F}_T)$ is a continuous linear mapping. Consequently,
$Q_{0,T}$ can be uniquely extended to $M^1_G(0, T)$. We still denote this mapping by
\[
\int_0^T \eta(s) \, d\langle B \rangle_s = Q_{0,T}(\eta), \quad \eta \in M^1_G(0, T).
\]
We still have
\[
E\left[\left| \int_0^T \eta(s) \, d\langle B \rangle_s \right|\right] \leq \int_0^T E[|\eta_s|] \, ds, \quad \forall \eta \in M^1_G(0, T). \tag{16}
\]
Proof. By applying Lemma 33, (15) can be checked as follows:
\[
E\left[\left| \sum_{j=0}^{N-1} \xi_j (\langle B \rangle_{t_{j+1}} - \langle B \rangle_{t_j}) \right|\right]
\leq \sum_{j=0}^{N-1} E\big[|\xi_j| \cdot E[\langle B \rangle_{t_{j+1}} - \langle B \rangle_{t_j} \,|\, \mathcal{F}_{t_j}]\big]
\]
\[
= \sum_{j=0}^{N-1} E[|\xi_j|](t_{j+1} - t_j) = \int_0^T E[|\eta_s|] \, ds. \qquad \Box
\]
A very interesting point of the quadratic variation process $\langle B \rangle$ is that, just like
the G-Brownian motion $B$ itself, the increment $\langle B \rangle_{t+s} - \langle B \rangle_s$ is independent
of $\mathcal{F}_s$ and identically distributed like $\langle B \rangle_t$. In fact we have
Lemma 35  For each fixed $s \geq 0$, $(\langle B \rangle_{s+t} - \langle B \rangle_s)_{t \geq 0}$ is independent of $\mathcal{F}_s$. It is
the quadratic variation process of the Brownian motion $B^s_t = B_{s+t} - B_s$, $t \geq 0$,
i.e., $\langle B \rangle_{s+t} - \langle B \rangle_s = \langle B^s \rangle_t$. We have
\[
E[\langle B^s \rangle^2_t \,|\, \mathcal{F}_s] = E[\langle B \rangle^2_t] = t^2 \tag{17}
\]
as well as
\[
E[\langle B^s \rangle^3_t \,|\, \mathcal{F}_s] = E[\langle B \rangle^3_t] = t^3, \quad
E[\langle B^s \rangle^4_t \,|\, \mathcal{F}_s] = E[\langle B \rangle^4_t] = t^4.
\]
Proof. The independence is simply from
\[
\langle B \rangle_{s+t} - \langle B \rangle_s = B^2_{t+s} - 2\int_0^{s+t} B_r \, dB_r - \left[ B^2_s - 2\int_0^s B_r \, dB_r \right]
\]
\[
= (B_{t+s} - B_s)^2 - 2\int_s^{s+t} (B_r - B_s) \, d(B_r - B_s) = \langle B^s \rangle_t.
\]
We set $\phi(t) := E[\langle B \rangle^2_t]$. Then
\[
\phi(t) = E\left[ \left( (B_t)^2 - 2\int_0^t B_u \, dB_u \right)^2 \right]
\leq 2E[(B_t)^4] + 8E\left[ \left( \int_0^t B_u \, dB_u \right)^2 \right]
\]
\[
\leq 6t^2 + 8\int_0^t E[(B_u)^2] \, du = 10t^2.
\]
This also implies $E[(\langle B \rangle_{t+s} - \langle B \rangle_s)^2] = \phi(t) \leq 10t^2$. Thus
\[
\phi(s + t) = E[\{\langle B \rangle_s + (\langle B \rangle_{s+t} - \langle B \rangle_s)\}^2]
\leq E[(\langle B \rangle_s)^2] + E[(\langle B^s \rangle_t)^2] + 2E[\langle B \rangle_s \langle B^s \rangle_t]
\]
\[
= \phi(s) + \phi(t) + 2E[\langle B \rangle_s E[\langle B^s \rangle_t \,|\, \mathcal{F}_s]]
= \phi(s) + \phi(t) + 2st.
\]
We set $\delta_N = t/N$, $t^N_k = kt/N = k\delta_N$, for a positive integer $N$. By the above
inequalities,
\[
\phi(t^N_N) \leq \phi(t^N_{N-1}) + \phi(\delta_N) + 2 t^N_{N-1}\delta_N
\leq \phi(t^N_{N-2}) + 2\phi(\delta_N) + 2(t^N_{N-1} + t^N_{N-2})\delta_N \leq \cdots
\]
We then have
\[
\phi(t) \leq N\phi(\delta_N) + 2\sum_{k=0}^{N-1} t^N_k \delta_N \leq 10\frac{t^2}{N} + 2\sum_{k=0}^{N-1} t^N_k \delta_N.
\]
Letting $N \to \infty$, we have $\phi(t) \leq 2\int_0^t s \, ds = t^2$. Thus $E[\langle B \rangle^2_t] \leq t^2$. This, with
$E[\langle B \rangle^2_t] \geq E^0[\langle B \rangle^2_t] = t^2$, implies (17). $\Box$
Proposition 36  Let $0 \leq s \leq t$, $\xi \in L^1_G(\mathcal{F}_s)$. Then
\[
E[X + \xi(B^2_t - B^2_s)] = E[X + \xi(B_t - B_s)^2]
= E[X + \xi(\langle B \rangle_t - \langle B \rangle_s)].
\]

Proof. By (12) and Proposition 22, we have
\[
E[X + \xi(B^2_t - B^2_s)] = E\left[ X + \xi\left( \langle B \rangle_t - \langle B \rangle_s + 2\int_s^t B_u \, dB_u \right) \right]
= E[X + \xi(\langle B \rangle_t - \langle B \rangle_s)].
\]
We also have
\[
E[X + \xi(B^2_t - B^2_s)] = E[X + \xi\{(B_t - B_s)^2 + 2(B_t - B_s)B_s\}]
= E[X + \xi(B_t - B_s)^2]. \qquad \Box
\]
We have the following isometry.

Proposition 37  Let $\eta \in M^2_G(0, T)$. We have
\[
E\left[ \left( \int_0^T \eta(s) \, dB_s \right)^2 \right] = E\left[ \int_0^T \eta^2(s) \, d\langle B \rangle_s \right]. \tag{18}
\]

Proof. We first consider $\eta \in M^{2,0}_G(0, T)$ of the form
\[
\eta_t(\omega) = \sum_{j=0}^{N-1} \xi_j(\omega) I_{[t_j, t_{j+1})}(t)
\]
and thus $\int_0^T \eta(s) \, dB_s := \sum_{j=0}^{N-1} \xi_j (B_{t_{j+1}} - B_{t_j})$. By Proposition 22 we have
\[
E[X + 2\xi_j (B_{t_{j+1}} - B_{t_j}) \xi_i (B_{t_{i+1}} - B_{t_i})] = E[X], \quad \text{for } X \in L^1_G(\mathcal{F}),\ i \neq j.
\]
Thus
\[
E\left[ \left( \int_0^T \eta(s) \, dB_s \right)^2 \right]
= E\left[ \left( \sum_{j=0}^{N-1} \xi_j (B_{t_{j+1}} - B_{t_j}) \right)^2 \right]
= E\left[ \sum_{j=0}^{N-1} \xi^2_j (B_{t_{j+1}} - B_{t_j})^2 \right].
\]
From this and Proposition 36, it follows that
\[
E\left[ \left( \int_0^T \eta(s) \, dB_s \right)^2 \right]
= E\left[ \sum_{j=0}^{N-1} \xi^2_j (\langle B \rangle_{t_{j+1}} - \langle B \rangle_{t_j}) \right]
= E\left[ \int_0^T \eta^2(s) \, d\langle B \rangle_s \right].
\]
Thus (18) holds for $\eta \in M^{2,0}_G(0, T)$. We can then continuously extend the above
equality to the case $\eta \in M^2_G(0, T)$ and prove (18). $\Box$
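Within each fixed volatility scenario the classical Itô isometry holds, and (18) is its analogue under the G-expectation. A Monte Carlo sketch of ours (one arbitrary scenario, with $\eta = B$; it checks only the scenario-wise identity, which is an assumption of this illustration rather than a proof of (18)):

```python
import numpy as np

rng = np.random.default_rng(3)
T, N, M, sigma0 = 1.0, 200, 20_000, 0.5
dt = T / N
t = np.linspace(0.0, T, N + 1)[:-1]
sigma = sigma0 + (1 - sigma0) * np.abs(np.cos(3 * t))   # arbitrary scenario
dB = sigma * np.sqrt(dt) * rng.standard_normal((M, N))
B = np.cumsum(dB, axis=1) - dB                          # left endpoints B_{t_j}
lhs = np.mean(np.sum(B * dB, axis=1) ** 2)              # E[(int B dB)^2]
rhs = np.mean(np.sum(B**2 * dB**2, axis=1))             # E[int B^2 d<B>]
print(lhs, rhs)                                         # close to each other
```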
5.4 Itô's formula for G-Brownian motion
We have the corresponding Itô's formula for $\Phi(X_t)$, where $X$ is a G-Itô process. For
simplicity, we only treat the case where the function $\Phi$ is sufficiently regular.
We first consider a simple situation.

Let $\Phi \in C^2(\mathbb{R}^n)$ be bounded with bounded derivatives and such that
$\{\partial^2_{x^\mu x^\nu} \Phi\}^n_{\mu,\nu=1}$ are uniformly Lipschitz. Let $s \in [0, T]$ be fixed and let
$X = (X^1, \dots, X^n)^T$ be an $n$-dimensional process on $[s, T]$ of the form
\[
X^\nu_t = X^\nu_s + \alpha^\nu (t - s) + \eta^\nu (\langle B \rangle_t - \langle B \rangle_s) + \beta^\nu (B_t - B_s),
\]
where, for $\nu = 1, \dots, n$, $\alpha^\nu$, $\eta^\nu$ and $\beta^\nu$ are bounded elements of $L^2_G(\mathcal{F}_s)$ and
$X_s = (X^1_s, \dots, X^n_s)^T$ is a given $\mathbb{R}^n$-vector in $L^2_G(\mathcal{F}_s)$. Then we have
\[
\Phi(X_t) - \Phi(X_s) = \int_s^t \partial_{x^\nu} \Phi(X_u) \beta^\nu \, dB_u + \int_s^t \partial_{x^\nu} \Phi(X_u) \alpha^\nu \, du \tag{19}
\]
\[
+ \int_s^t \left[ \partial_{x^\nu} \Phi(X_u) \eta^\nu + \frac{1}{2} \partial^2_{x^\mu x^\nu} \Phi(X_u) \beta^\mu \beta^\nu \right] d\langle B \rangle_u.
\]
Here we use the Einstein convention, i.e., each single term with repeated indices
$\mu$ and/or $\nu$ implies the summation.
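Before turning to the proof, here is a path-level numerical check of (19) (our own illustration, not from the paper) for $n = 1$ and $\Phi(x) = x^2$, with $d\langle B \rangle$ approximated by squared increments along one simulated scenario; the constants $\alpha$, $\eta$, $\beta$ are arbitrary test values and $\Phi$ here is unbounded, which we allow for illustration only:

```python
import numpy as np

rng = np.random.default_rng(4)
T, N, sigma0 = 1.0, 100_000, 0.5
dt = T / N
tt = np.linspace(0.0, T, N + 1)[:-1]
sig = sigma0 + (1 - sigma0) * np.abs(np.sin(4 * tt))  # arbitrary scenario
dB = sig * np.sqrt(dt) * rng.standard_normal(N)
dQ = dB**2                                   # increments of <B>
B = np.concatenate([[0.0], np.cumsum(dB)])[:-1]
Q = np.concatenate([[0.0], np.cumsum(dQ)])[:-1]
x0, alpha, eta, beta = 1.0, 0.2, -0.3, 0.8
X = x0 + alpha * tt + eta * Q + beta * B     # left-endpoint values of X

# Left side of (19): Phi(X_T) - Phi(X_0) with Phi(x) = x^2.
lhs = (x0 + alpha*T + eta*(Q[-1] + dQ[-1]) + beta*(B[-1] + dB[-1]))**2 - x0**2
# Right side of (19): Phi_x = 2x, Phi_xx = 2, so the d<B> integrand is
# 2*X*eta + beta^2.
rhs = np.sum(2*X*beta*dB) + np.sum(2*X*alpha*dt) + np.sum((2*X*eta + beta**2)*dQ)
print(lhs, rhs)   # agree up to discretization error
```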
Proof. For each positive integer $N$ we set $\delta = (t - s)/N$ and take the partition
\[
\pi^N_{[s,t]} = \{t^N_0, t^N_1, \dots, t^N_N\} = \{s, s + \delta, \dots, s + N\delta = t\}.
\]
We have
\[
\Phi(X_t) = \Phi(X_s) + \sum_{k=0}^{N-1} [\Phi(X_{t^N_{k+1}}) - \Phi(X_{t^N_k})]
\]
\[
= \Phi(X_s) + \sum_{k=0}^{N-1} \Big[ \partial_{x^\mu} \Phi(X_{t^N_k})(X^\mu_{t^N_{k+1}} - X^\mu_{t^N_k})
+ \frac{1}{2} \big[ \partial^2_{x^\mu x^\nu} \Phi(X_{t^N_k})(X^\mu_{t^N_{k+1}} - X^\mu_{t^N_k})(X^\nu_{t^N_{k+1}} - X^\nu_{t^N_k}) + \eta^N_k \big] \Big], \tag{20}
\]
where
\[
\eta^N_k = \big[ \partial^2_{x^\mu x^\nu} \Phi(X_{t^N_k} + \theta_k(X_{t^N_{k+1}} - X_{t^N_k})) - \partial^2_{x^\mu x^\nu} \Phi(X_{t^N_k}) \big]
(X^\mu_{t^N_{k+1}} - X^\mu_{t^N_k})(X^\nu_{t^N_{k+1}} - X^\nu_{t^N_k}),
\]
with $\theta_k \in [0, 1]$. We have
\[
E[|\eta^N_k|] \leq c\,E[|X_{t^N_{k+1}} - X_{t^N_k}|^3] \leq C[\delta^3 + \delta^{3/2}],
\]
where $c$ is the Lipschitz constant of $\{\partial^2_{x^\mu x^\nu} \Phi\}^n_{\mu,\nu=1}$. Thus $\sum_k E[|\eta^N_k|] \to 0$. The
remaining terms in the summation on the right side of (20) are $\xi^N_t + \zeta^N_t$, with
\[
\xi^N_t = \sum_{k=0}^{N-1} \Big\{ \partial_{x^\mu} \Phi(X_{t^N_k})
\big[ \alpha^\mu (t^N_{k+1} - t^N_k) + \eta^\mu (\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k}) + \beta^\mu (B_{t^N_{k+1}} - B_{t^N_k}) \big]
\]
\[
+ \frac{1}{2} \partial^2_{x^\mu x^\nu} \Phi(X_{t^N_k}) \beta^\mu \beta^\nu (B_{t^N_{k+1}} - B_{t^N_k})(B_{t^N_{k+1}} - B_{t^N_k}) \Big\}
\]
and
\[
\zeta^N_t = \frac{1}{2} \sum_{k=0}^{N-1} \partial^2_{x^\mu x^\nu} \Phi(X_{t^N_k}) \Big\{
\big[ \alpha^\mu (t^N_{k+1} - t^N_k) + \eta^\mu (\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k}) \big]
\big[ \alpha^\nu (t^N_{k+1} - t^N_k) + \eta^\nu (\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k}) \big]
\]
\[
+ 2\big[ \alpha^\mu (t^N_{k+1} - t^N_k) + \eta^\mu (\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k}) \big]
\beta^\nu (B_{t^N_{k+1}} - B_{t^N_k}) \Big\}.
\]
We observe that, for each $u \in [t^N_k, t^N_{k+1})$,
\[
E\Big[ \Big| \partial_{x^\mu} \Phi(X_u) - \sum_{k=0}^{N-1} \partial_{x^\mu} \Phi(X_{t^N_k}) I_{[t^N_k, t^N_{k+1})}(u) \Big|^2 \Big]
= E[|\partial_{x^\mu} \Phi(X_u) - \partial_{x^\mu} \Phi(X_{t^N_k})|^2]
\]
\[
\leq c^2 E[|X_u - X_{t^N_k}|^2] \leq C[\delta + \delta^2].
\]
Thus $\sum_{k=0}^{N-1} \partial_{x^\mu} \Phi(X_{t^N_k}) I_{[t^N_k, t^N_{k+1})}(\cdot)$ tends to $\partial_{x^\mu} \Phi(X_\cdot)$ in $M^2_G(0, T)$. Similarly,
\[
\sum_{k=0}^{N-1} \partial^2_{x^\mu x^\nu} \Phi(X_{t^N_k}) I_{[t^N_k, t^N_{k+1})}(\cdot) \to \partial^2_{x^\mu x^\nu} \Phi(X_\cdot) \quad \text{in } M^2_G(0, T).
\]
Let $N \to \infty$; by the definitions of the integrations with respect to $dt$, $dB_t$ and
$d\langle B \rangle_t$, the limit of $\xi^N_t$ in $L^2_G(\mathcal{F}_t)$ is just the right hand side of (19). By the estimates
of the next remark, we also have $\zeta^N_t \to 0$ in $L^1_G(\mathcal{F}_t)$. We have thus proved (19). $\Box$
Remark 38  We have the following estimates: for $\psi^N \in M^{1,0}_G(0, T)$ such that
$\psi^N_t = \sum_{k=0}^{N-1} \xi^N_{t_k} I_{[t^N_k, t^N_{k+1})}(t)$, and $\pi^N_T = \{0 \leq t_0, \dots, t_N = T\}$ with
$\lim_{N \to \infty} \mu(\pi^N_T) = 0$ and $\sum_{k=0}^{N-1} E[|\xi^N_{t_k}|](t^N_{k+1} - t^N_k) \leq C$, for all $N = 1, 2, \dots$, we have
\[
E\Big[ \Big| \sum_{k=0}^{N-1} \xi^N_k (t^N_{k+1} - t^N_k)^2 \Big| \Big] \to 0,
\]
and, thanks to Lemma 35,
\[
E\Big[ \Big| \sum_{k=0}^{N-1} \xi^N_k (\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k})^2 \Big| \Big]
\leq \sum_{k=0}^{N-1} E\big[ |\xi^N_k| \cdot E[(\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k})^2 \,|\, \mathcal{F}_{t^N_k}] \big]
\]
\[
= \sum_{k=0}^{N-1} E[|\xi^N_k|](t^N_{k+1} - t^N_k)^2 \to 0,
\]
as well as
\[
E\Big[ \Big| \sum_{k=0}^{N-1} \xi^N_k (\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k})(B_{t^N_{k+1}} - B_{t^N_k}) \Big| \Big]
\leq \sum_{k=0}^{N-1} E[|\xi^N_k|]\, E[(\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k})|B_{t^N_{k+1}} - B_{t^N_k}|]
\]
\[
\leq \sum_{k=0}^{N-1} E[|\xi^N_k|]\, E[(\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k})^2]^{1/2}\, E[|B_{t^N_{k+1}} - B_{t^N_k}|^2]^{1/2}
= \sum_{k=0}^{N-1} E[|\xi^N_k|](t^N_{k+1} - t^N_k)^{3/2} \to 0.
\]
We also have
\[
E\Big[ \Big| \sum_{k=0}^{N-1} \xi^N_k (\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k})(t^N_{k+1} - t^N_k) \Big| \Big]
\leq \sum_{k=0}^{N-1} E\big[ |\xi^N_k|(t^N_{k+1} - t^N_k) \cdot E[\langle B \rangle_{t^N_{k+1}} - \langle B \rangle_{t^N_k} \,|\, \mathcal{F}_{t^N_k}] \big]
\]
\[
= \sum_{k=0}^{N-1} E[|\xi^N_k|](t^N_{k+1} - t^N_k)^2 \to 0
\]
and
\[
E\Big[ \Big| \sum_{k=0}^{N-1} \xi^N_k (t^N_{k+1} - t^N_k)(B_{t^N_{k+1}} - B_{t^N_k}) \Big| \Big]
\leq \sum_{k=0}^{N-1} E[|\xi^N_k|](t^N_{k+1} - t^N_k)\, E[|B_{t^N_{k+1}} - B_{t^N_k}|]
\]
\[
= \sqrt{\frac{2}{\pi}} \sum_{k=0}^{N-1} E[|\xi^N_k|](t^N_{k+1} - t^N_k)^{3/2} \to 0.
\]
We now consider a more general form of Itô's formula. Consider
\[
X^\nu_t = X^\nu_0 + \int_0^t \alpha^\nu_s \, ds + \int_0^t \eta^\nu_s \, d\langle B \rangle_s + \int_0^t \beta^\nu_s \, dB_s.
\]

Proposition 39  Let $\alpha^\nu$, $\beta^\nu$ and $\eta^\nu$, $\nu = 1, \dots, n$, be bounded processes in
$M^2_G(0, T)$. Then for each $t \geq s$ we have, in $L^2_G(\mathcal{F}_t)$,
\[
\Phi(X_t) - \Phi(X_s) = \int_s^t \partial_{x^\nu} \Phi(X_u) \beta^\nu_u \, dB_u + \int_s^t \partial_{x^\nu} \Phi(X_u) \alpha^\nu_u \, du \tag{21}
\]
\[
+ \int_s^t \left[ \partial_{x^\nu} \Phi(X_u) \eta^\nu_u + \frac{1}{2} \partial^2_{x^\mu x^\nu} \Phi(X_u) \beta^\mu_u \beta^\nu_u \right] d\langle B \rangle_u.
\]
Proof. We first consider the case where $\alpha$, $\eta$ and $\beta$ are step processes of the
form
\[
\eta_t(\omega) = \sum_{k=0}^{N-1} \xi_k(\omega) I_{[t_k, t_{k+1})}(t).
\]
From the above lemma, it is clear that (21) holds true. Now let
\[
X^{\nu,N}_t = X^\nu_0 + \int_0^t \alpha^{\nu,N}_s \, ds + \int_0^t \eta^{\nu,N}_s \, d\langle B \rangle_s + \int_0^t \beta^{\nu,N}_s \, dB_s,
\]
where $\alpha^N$, $\eta^N$ and $\beta^N$ are uniformly bounded step processes that converge to
$\alpha$, $\eta$ and $\beta$ in $M^2_G(0, T)$ as $N \to \infty$. From the step-process case,
\[
\Phi(X^N_t) - \Phi(X^N_s) = \int_s^t \partial_{x^\nu} \Phi(X^N_u) \beta^{\nu,N}_u \, dB_u + \int_s^t \partial_{x^\nu} \Phi(X^N_u) \alpha^{\nu,N}_u \, du \tag{22}
\]
\[
+ \int_s^t \left[ \partial_{x^\nu} \Phi(X^N_u) \eta^{\nu,N}_u + \frac{1}{2} \partial^2_{x^\mu x^\nu} \Phi(X^N_u) \beta^{\mu,N}_u \beta^{\nu,N}_u \right] d\langle B \rangle_u.
\]
Since
\[
E[|X^{\nu,N}_t - X^\nu_t|^2] \leq 3E\Big[ \Big| \int_0^t (\alpha^{\nu,N}_s - \alpha^\nu_s) \, ds \Big|^2 \Big]
+ 3E\Big[ \Big| \int_0^t (\eta^{\nu,N}_s - \eta^\nu_s) \, d\langle B \rangle_s \Big|^2 \Big]
+ 3E\Big[ \Big| \int_0^t (\beta^{\nu,N}_s - \beta^\nu_s) \, dB_s \Big|^2 \Big]
\]
\[
\leq 3\int_0^T E[(\alpha^{\nu,N}_s - \alpha^\nu_s)^2] \, ds
+ 3\int_0^T E[|\eta^{\nu,N}_s - \eta^\nu_s|^2] \, ds
+ 3\int_0^T E[(\beta^{\nu,N}_s - \beta^\nu_s)^2] \, ds,
\]
we have $X^{\nu,N} \to X^\nu$ in $M^2_G(0, T)$. Furthermore,
\[
\partial_{x^\nu} \Phi(X^N_\cdot) \beta^{\nu,N}_\cdot \to \partial_{x^\nu} \Phi(X_\cdot) \beta^\nu_\cdot, \quad
\partial_{x^\nu} \Phi(X^N_\cdot) \alpha^{\nu,N}_\cdot \to \partial_{x^\nu} \Phi(X_\cdot) \alpha^\nu_\cdot,
\]
\[
\partial_{x^\nu} \Phi(X^N_\cdot) \eta^{\nu,N}_\cdot + \frac{1}{2} \partial^2_{x^\mu x^\nu} \Phi(X^N_\cdot) \beta^{\mu,N}_\cdot \beta^{\nu,N}_\cdot
\to \partial_{x^\nu} \Phi(X_\cdot) \eta^\nu_\cdot + \frac{1}{2} \partial^2_{x^\mu x^\nu} \Phi(X_\cdot) \beta^\mu_\cdot \beta^\nu_\cdot
\]
in $M^2_G(0, T)$. We can then pass to the limit on both sides of (22) and get (21). $\Box$
6 Stochastic differential equations
We consider the following SDE defined on $M^2_G(0, T; \mathbb{R}^n)$:
\[
X_t = X_0 + \int_0^t b(X_s) \, ds + \int_0^t h(X_s) \, d\langle B \rangle_s + \int_0^t \sigma(X_s) \, dB_s, \quad t \in [0, T], \tag{23}
\]
where the initial condition $X_0 \in \mathbb{R}^n$ is given and $b, h, \sigma: \mathbb{R}^n \mapsto \mathbb{R}^n$ are given
Lipschitz functions, i.e., $|\phi(x) - \phi(x')| \leq K|x - x'|$, for each $x, x' \in \mathbb{R}^n$, $\phi = b$, $h$
and $\sigma$. Here the horizon $[0, T]$ can be arbitrarily large. The solution is a process
$X \in M^2_G(0, T; \mathbb{R}^n)$ satisfying the above SDE. We first introduce the following
mapping on a fixed interval $[0, T]$:
\[
\Lambda_\cdot(Y): Y \in M^2_G(0, T; \mathbb{R}^n) \mapsto M^2_G(0, T; \mathbb{R}^n)
\]
by setting $\Lambda_t$ with
\[
\Lambda_t(Y) = X_0 + \int_0^t b(Y_s) \, ds + \int_0^t h(Y_s) \, d\langle B \rangle_s + \int_0^t \sigma(Y_s) \, dB_s, \quad t \in [0, T].
\]
We immediately have

Lemma 40  For each $Y, Y' \in M^2_G(0, T; \mathbb{R}^n)$, we have the following estimate:
\[
E[|\Lambda_t(Y) - \Lambda_t(Y')|^2] \leq C \int_0^t E[|Y_s - Y'_s|^2] \, ds, \quad t \in [0, T],
\]
where $C = 3K^2$.

Proof. This is a direct consequence of the inequalities (9), (11) and (16). $\Box$
We now prove that SDE (23) has a unique solution. We multiply by $e^{-2Ct}$ on
both sides of the above inequality and then integrate over $[0, T]$. It follows
that
\[
\int_0^T E[|\Lambda_t(Y) - \Lambda_t(Y')|^2] e^{-2Ct} \, dt
\leq C \int_0^T e^{-2Ct} \int_0^t E[|Y_s - Y'_s|^2] \, ds \, dt
\]
\[
= C \int_0^T \int_s^T e^{-2Ct} \, dt \, E[|Y_s - Y'_s|^2] \, ds
= (2C)^{-1} C \int_0^T (e^{-2Cs} - e^{-2CT}) E[|Y_s - Y'_s|^2] \, ds.
\]
We then have
\[
\int_0^T E[|\Lambda_t(Y) - \Lambda_t(Y')|^2] e^{-2Ct} \, dt \leq \frac{1}{2} \int_0^T E[|Y_t - Y'_t|^2] e^{-2Ct} \, dt.
\]
We observe that the following two norms are equivalent on $M^2_G(0, T; \mathbb{R}^n)$:
\[
\int_0^T E[|Y_t|^2] \, dt \sim \int_0^T E[|Y_t|^2] e^{-2Ct} \, dt.
\]
From this estimate we can obtain that $\Lambda(Y)$ is a contraction mapping. Consequently,
we have
Theorem 41  There exists a unique solution $X \in M^2_G(0, T; \mathbb{R}^n)$ of the stochastic
differential equation (23).
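A natural discretization of (23) is an Euler-type scheme driven by the increments $dt$, $(dB)^2 \approx d\langle B \rangle$ and $dB$. The following sketch (our own illustration; the coefficients and the volatility scenario are arbitrary Lipschitz test choices, not from the paper) simulates one path:

```python
import numpy as np

rng = np.random.default_rng(5)
T, N, sigma0 = 1.0, 10_000, 0.5
dt = T / N
b = lambda x: -x                      # drift coefficient
h = lambda x: 0.1 * np.cos(x)         # d<B> coefficient
s = lambda x: 1.0 + 0.5 * np.sin(x)   # dB coefficient
tt = np.linspace(0.0, T, N + 1)[:-1]
sig = sigma0 + (1 - sigma0) * np.abs(np.sin(7 * tt))  # arbitrary scenario
dB = sig * np.sqrt(dt) * rng.standard_normal(N)

X = np.empty(N + 1)
X[0] = 1.0
for k in range(N):
    # Euler step: dX = b(X) dt + h(X) d<B> + s(X) dB, with d<B> ~ dB^2.
    X[k + 1] = X[k] + b(X[k]) * dt + h(X[k]) * dB[k]**2 + s(X[k]) * dB[k]
print(X[-1])
```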
7 Appendix
For $r > 0$ and $1 < p, q < \infty$ with $\frac{1}{p} + \frac{1}{q} = 1$, we have
\[
|a + b|^r \leq \max\{1, 2^{r-1}\}(|a|^r + |b|^r), \quad \forall a, b \in \mathbb{R}, \tag{24}
\]
\[
|ab| \leq \frac{|a|^p}{p} + \frac{|b|^q}{q}. \tag{25}
\]
Proposition 42
\[
E[|X + Y|^r] \leq C_r (E[|X|^r] + E[|Y|^r]), \tag{26}
\]
\[
E[|XY|] \leq E[|X|^p]^{1/p} \cdot E[|Y|^q]^{1/q}, \tag{27}
\]
\[
E[|X + Y|^p]^{1/p} \leq E[|X|^p]^{1/p} + E[|Y|^p]^{1/p}. \tag{28}
\]
In particular, for $1 \leq p < p'$, we have $E[|X|^p]^{1/p} \leq E[|X|^{p'}]^{1/p'}$.
Proof. (26) follows from (24). We set
\[
\xi = \frac{X}{E[|X|^p]^{1/p}}, \quad \eta = \frac{Y}{E[|Y|^q]^{1/q}}.
\]
By (25) we have
\[
E[|\xi \eta|] \leq E\left[ \frac{|\xi|^p}{p} + \frac{|\eta|^q}{q} \right]
\leq E\left[ \frac{|\xi|^p}{p} \right] + E\left[ \frac{|\eta|^q}{q} \right]
= \frac{1}{p} + \frac{1}{q} = 1.
\]
Thus (27) follows. We now prove (28):
\[
E[|X + Y|^p] = E[|X + Y| \cdot |X + Y|^{p-1}]
\leq E[|X| \cdot |X + Y|^{p-1}] + E[|Y| \cdot |X + Y|^{p-1}]
\]
\[
\leq E[|X|^p]^{1/p} \cdot E[|X + Y|^{(p-1)q}]^{1/q}
+ E[|Y|^p]^{1/p} \cdot E[|X + Y|^{(p-1)q}]^{1/q}.
\]
We observe that $(p - 1)q = p$. Thus we have (28). $\Box$
References
[1] Artzner, Ph., F. Delbaen, J.-M. Eber, and D. Heath (1997), Thinking Coherently, RISK 10, November, 68-71.

[2] Artzner, Ph., F. Delbaen, J.-M. Eber, and D. Heath (1999), Coherent Measures of Risk, Mathematical Finance 9.

[3] Avellaneda, M., Levy, A. and Paras, A. (1995), Pricing and hedging derivative securities in markets with uncertain volatilities, Appl. Math. Finance 2, 73-88.

[4] Briand, Ph., Coquet, F., Hu, Y., Memin, J. and Peng, S. (2000), A converse comparison theorem for BSDEs and related properties of g-expectations, Electron. Comm. Probab. 5.

[5] Chen, Z. (1998), A property of backward stochastic differential equations, C. R. Acad. Sci. Paris Ser. I Math. 326(4), 483-488.

[6] Chen, Z. and Epstein, L. (2002), Ambiguity, risk and asset returns in continuous time, Econometrica 70(4), 1403-1443.

[7] Chen, Z., Kulperger, R. and Jiang, L. (2003), Jensen's inequality for g-expectation: part 1, C. R. Acad. Sci. Paris, Ser. I 337, 725-730.

[8] Chen, Z. and Peng, S. (1998), A nonlinear Doob-Meyer type decomposition and its application, SUT Journal of Mathematics (Japan) 34(2), 197-208.

[9] Chen, Z. and Peng, S. (2000), A general downcrossing inequality for g-martingales, Statist. Probab. Lett. 46(2), 169-175.

[10] Cheridito, P., Soner, H.M., Touzi, N. and Victoir, N., Second order backward stochastic differential equations and fully non-linear parabolic PDEs, preprint (pdf-file available in arXiv:math.PR/0509295 v1 14 Sep 2005).

[11] Coquet, F., Hu, Y., Memin, J. and Peng, S. (2001), A general converse comparison theorem for backward stochastic differential equations, C. R. Acad. Sci. Paris, t. 333, Serie I, 577-581.

[12] Coquet, F., Hu, Y., Memin, J. and Peng, S. (2002), Filtration-consistent nonlinear expectations and related g-expectations, Probab. Theory Relat. Fields 123, 1-27.

[13] Crandall, M., Ishii, H. and Lions, P.-L. (1992), User's guide to viscosity solutions of second order partial differential equations, Bulletin of the American Mathematical Society 27(1), 1-67.

[14] Daniell, P.J. (1918), A general form of integral, Annals of Mathematics 19, 279-294.

[15] Delbaen, F. (2002), Coherent Risk Measures (Lectures given at the Cattedra Galileiana at the Scuola Normale di Pisa, March 2000), published by the Scuola Normale di Pisa.

[16] Denis, L. and Martini, C. (2006), A theoretical framework for the pricing of contingent claims in the presence of model uncertainty, The Annals of Applied Probability 16(2), 827-852.

[17] Denis, L. and Peng, S., working paper on: Pathwise analysis of G-Brownian motions and G-expectations.

[18] Barrieu, P. and El Karoui, N. (2004), Pricing, hedging and optimally designing derivatives via minimization of risk measures, preprint, to appear in Contemporary Mathematics.

[19] Barrieu, P. and El Karoui, N. (2005), Pricing, hedging and optimally designing derivatives via minimization of risk measures, preprint.

[20] El Karoui, N. and Quenez, M.C. (1995), Dynamic programming and pricing of contingent claims in incomplete markets, SIAM J. of Control and Optimization 33(1).

[21] El Karoui, N., Peng, S. and Quenez, M.C. (1997), Backward stochastic differential equations in finance, Mathematical Finance 7(1), 1-71.

[22] Fleming, W.H. and Soner, H.M. (1992), Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, New York.

[23] Huber, P.J. (1981), Robust Statistics, John Wiley & Sons.

[24] Itô, Kiyosi (1942), Differential equations determining a Markov process, in Kiyosi Itô: Selected Papers, eds. D.W. Stroock and S.R.S. Varadhan, Springer, 1987; translated from the original Japanese first published in Japan, Pan-Japan Math. Coll. No. 1077.

[25] Jiang, L. (2004), Some results on the uniqueness of generators of backward stochastic differential equations, C. R. Acad. Sci. Paris, Ser. I 338, 575-580.

[26] Jiang, L. and Chen, Z. (2004), A result on the probability measures dominated by g-expectation, Acta Mathematicae Applicatae Sinica, English Series 20(3), 507-512.

[27] Kloppel, S. and Schweizer, M. (2005), Dynamic utility indifference valuation via convex risk measures, working paper (http://www.nccr-nrisk.unizh.ch/media/pdf/wp/WP209-1.pdf).

[28] Krylov, N.V. (1980), Controlled Diffusion Processes, Springer-Verlag, New York.

[29] Lyons, T. (1995), Uncertain volatility and the risk free synthesis of derivatives, Applied Mathematical Finance 2, 117-133.

[30] Nisio, M. (1976), On a nonlinear semigroup attached to optimal stochastic control, Publ. RIMS, Kyoto Univ. 13, 513-537.

[31] Nisio, M. (1976), On stochastic optimal controls and envelope of Markovian semigroups, Proc. of Int. Symp. Kyoto, 297-325.

[32] Øksendal, B. (1998), Stochastic Differential Equations, Fifth Edition, Springer.

[33] Pardoux, E. and Peng, S. (1990), Adapted solution of a backward stochastic differential equation, Systems and Control Letters 14(1), 55-61.

[34] Peng, S. (1992), A generalized dynamic programming principle and Hamilton-Jacobi-Bellman equation, Stochastics and Stochastic Reports 38(2), 119-134.

[35] Peng, S. (1997), Backward SDE and related g-expectation, in Backward Stochastic Differential Equations, Pitman Research Notes in Math. Series No. 364, El Karoui & Mazliak eds., 141-159.

[36] Peng, S. (1997), BSDE and stochastic optimizations, in Topics in Stochastic Analysis, Yan, J., Peng, S., Fang, S. and Wu, L.M., Ch. 2 (Chinese version), Science Publication, Beijing.

[37] Peng, S. (1999), Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer's type, Prob. Theory Rel. Fields 113(4), 473-499.

[38] Peng, S. (2004), Nonlinear expectation, nonlinear evaluations and risk measures, in K. Back, T.R. Bielecki, C. Hipp, S. Peng and W. Schachermayer, Stochastic Methods in Finance Lectures, C.I.M.E.-E.M.S. Summer School held in Bressanone/Brixen, Italy 2003 (eds. M. Frittelli and W. Runggaldier), 143-217, LNM 1856, Springer-Verlag.

[39] Peng, S. (2004), Filtration consistent nonlinear expectations and evaluations of contingent claims, Acta Mathematicae Applicatae Sinica, English Series 20(2), 1-24.

[40] Peng, S. (2005), Nonlinear expectations and nonlinear Markov chains, Chin. Ann. Math. 26B(2), 159-184.

[41] Peng, S. (2004), Dynamical evaluations, C. R. Acad. Sci. Paris, Ser. I 339, 585-589.

[42] Peng, S. (2005), Dynamically consistent nonlinear evaluations and expectations, in arXiv:math.PR/0501415 v1 24 Jan 2005.

[43] Peng, S. (2006), Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, preprint (pdf-file available in arXiv:math.PR/0601699 v1 28 Jan 2006).

[44] Peng, S. and Xu, M. (2003), Numerical calculations to solve BSDE, preprint.

[45] Peng, S. and Xu, M. (2005), g-expectations and the related nonlinear Doob-Meyer decomposition theorem, preprint.

[46] Rosazza Gianin, E. (2002), Some examples of risk measures via g-expectations, preprint, to appear in Insurance: Mathematics and Economics.

[47] Yong, J. and Zhou, X. (1999), Stochastic Controls: Hamiltonian Systems and HJB Equations, Springer-Verlag.
