INTRODUCTION TO STOCHASTIC ANALYSIS

Giuseppe Da Prato

June 22, 2009
Contents

1 Gaussian measures in Hilbert spaces
  1.1 Some concepts of Probability
    1.1.1 Random variables
    1.1.2 Product measures
  1.2 Probability measures in Hilbert spaces
    1.2.1 Mean and covariance
    1.2.2 Finite dimensional projections of measures
  1.3 Gaussian probability measures
    1.3.1 Gaussian probability measures in R
    1.3.2 Gaussian probability measures in R^n
    1.3.3 Gaussian probability measures in H
    1.3.4 Computation of some Gaussian integrals
    1.3.5 The Cameron–Martin space

2 Gaussian random variables
  2.1 Notations
  2.2 Independence
    2.2.1 Independent real variables
    2.2.2 Independent Gaussian random variables
  2.3 Gaussian random variables defined in a Hilbert space
    2.3.1 Affine changes of variables
  2.4 The white noise function
    2.4.1 Equivalence classes of random variables
    2.4.2 Definition of the white noise function

3 Brownian Motion
  3.1 Stochastic Processes
  3.2 Brownian motion
    3.2.1 Construction of a Brownian motion
    3.2.2 Some properties of a Brownian motion
  3.3 Wiener integral
  3.4 Continuity of Brownian motion
  3.5 The standard Brownian motion
    3.5.1 Some properties of C_0
    3.5.2 The Wiener measure and the standard Brownian motion
  3.6 Quadratic variation of the Brownian motion
  3.7 Multidimensional Brownian motions

4 Markov property of the Brownian motion
  4.1 Filtration
    4.1.1 F_t-measurable random variables
  4.2 Stopping times
  4.3 The Brownian motion W(t + τ) − W(τ)
  4.4 Transition semigroup
  4.5 Markov property
    4.5.1 Strong Markov property
  4.6 Some consequences of the strong Markov property
  4.7 Application to partial differential equations
    4.7.1 The Dirichlet problem in the half-line
    4.7.2 The Neumann problem
    4.7.3 The Ventzell problem

5 The Itô integral
  5.1 Definition of Itô's integral
    5.1.1 Itô's integral for elementary processes
    5.1.2 General definition of Itô's integral
  5.2 Itô integral for mean square continuous processes
  5.3 The Itô integral as a stochastic process
  5.4 Itô integral with stopping times
    5.4.1 Stopping times
    5.4.2 Itô's integral with stopping times
  5.5 Multidimensional Itô integrals

6 The Itô formula
  6.1 Introduction
    6.1.1 The Itô formula for unbounded functions
  6.2 Itô formula for a vector valued process

7 Stochastic evolution equations
  7.1 Existence and uniqueness
    7.1.1 Solution of the stochastic differential equation in the space $C_B([s,T];L^{2m}(\Omega;\mathbb R^d))$
    7.1.2 Examples
    7.1.3 Differential stochastic equations with random coefficients
  7.2 Continuous dependence on data
    7.2.1 Continuous dependence on mean square
  7.3 Almost sure continuity and hölderianity of trajectories
  7.4 Differentiability of X(t, s, x) with respect to x
    7.4.1 Existence of X_x(t, s, x)
    7.4.2 Existence of X_xx(t, s, x)
  7.5 Itô differentiability of X(t, s, x) with respect to s
    7.5.1 The deterministic case
    7.5.2 The stochastic case
    7.5.3 Backward Itô's formula

8 Kolmogorov equations
  8.1 The deterministic case
    8.1.1 The autonomous case
  8.2 Stochastic case
  8.3 Basic properties of transition operators
  8.4 Parabolic equations
    8.4.1 Autonomous case
  8.5 Examples

A λ-systems and π-systems

B Conditional expectation
  B.1 Definition
  B.2 Basic properties

C Martingales
  C.1 Definitions
  C.2 The basic inequality for martingales
  C.3 Square integrable martingales

D Fixed points depending on parameters
  D.1 Introduction
  D.2 Gâteaux differentiable mappings
  D.3 The main result

E Fractional Sobolev spaces and regularity of processes
  E.1 Fractional Sobolev spaces on [0, 1]
  E.2 Processes belonging to $W^{\epsilon,2m}(0,T)$
  E.3 Multi dimensional Sobolev spaces and regularity of random fields
Chapter 1

Gaussian measures in Hilbert spaces

We shall denote by $H$ a real separable Hilbert space (with inner product $\langle\cdot,\cdot\rangle$ and norm $|\cdot|$), and by $L(H)$ the Banach algebra of all linear bounded operators $T\colon H\to H$, endowed with the norm
$$\|T\|=\sup_{x\in H,\ |x|=1}|Tx|.$$
We recall that $T\in L(H)$ is said to be symmetric if $\langle Tx,y\rangle=\langle x,Ty\rangle$ for all $x,y\in H$, and positive if $\langle Tx,x\rangle\ge 0$ for all $x\in H$. The set of all symmetric and positive elements of $L(H)$ will be denoted by $L^+(H)$.

Finally, we shall denote by $C_b(H)$ the space of all functions $\varphi\colon H\to\mathbb R$ which are continuous and bounded. $C_b(H)$, endowed with the norm
$$\|\varphi\|_0:=\sup_{x\in H}|\varphi(x)|,$$
is a Banach space.

The next section is devoted to some basic facts from Measure Theory and Probability needed in what follows.
1.1 Some concepts of Probability

1.1.1 Random variables

Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space and let $E$ be a Polish (complete separable metric) space; we shall denote by $\mathcal B(E)$ the σ-algebra generated by all closed (or, equivalently, open) subsets of $E$. The elements of $\mathcal B(E)$ are called Borel sets.

By an $E$-valued random variable on $(\Omega,\mathcal F)$ we mean a mapping
$$X\colon\Omega\to E,\quad\omega\mapsto X(\omega),$$
such that
$$I\in\mathcal B(E)\ \Rightarrow\ X^{-1}(I)\in\mathcal F.$$
The law (or image measure, or push-forward measure) of $X$ is the probability measure $X_\#\mathbb P$ on $(E,\mathcal B(E))$ defined as
$$(X_\#\mathbb P)(I)=\mathbb P(X^{-1}(I)),\quad\forall\,I\in\mathcal B(E).$$
Sometimes we shall use the notation $X_\#\mathbb P=\mathbb P_X$.

Let us prove the following basic change of variables formula.
Theorem 1.1 Let $X$ be an $E$-valued random variable on $(\Omega,\mathcal F,\mathbb P)$ and let $\varphi\colon E\to\mathbb R$ be a nonnegative Borel function. Then we have
$$\int_\Omega\varphi(X(\omega))\,\mathbb P(d\omega)=\int_E\varphi(x)\,(X_\#\mathbb P)(dx). \tag{1.1}$$

Proof. Let first $\varphi=\mathbf 1_I$ with $I\in\mathcal B(E)$, where $\mathbf 1_I$ denotes the characteristic function of $I$ (equal to $1$ if $\omega\in I$ and to $0$ if $\omega\notin I$). In this case we have
$$\varphi(X(\omega))=\mathbf 1_{X^{-1}(I)}(\omega),\quad\forall\,\omega\in\Omega.$$
So,
$$\int_\Omega\varphi(X(\omega))\,\mathbb P(d\omega)=\mathbb P(X^{-1}(I))=(X_\#\mathbb P)(I)=\int_E\varphi(x)\,(X_\#\mathbb P)(dx).$$
Consequently, (1.1) holds for all simple functions $\varphi$ of the form
$$\varphi=\sum_{i=1}^nc_i\,\mathbf 1_{I_i},$$
with $n\in\mathbb N$, $c_1,\dots,c_n\ge 0$ and $I_1,\dots,I_n\in\mathcal B(E)$. Since any nonnegative Borel function is the limit of an increasing sequence of nonnegative simple functions, the conclusion follows from the monotone convergence theorem. $\square$
1.1.2 Product measures

Let $(\Omega_i,\mathcal F_i,\mathbb P_i)$, $i=1,\dots,n$, be probability spaces. Set $\Omega=\prod_{i=1}^n\Omega_i$. A measurable rectangle of $\Omega$ is, by definition, a set of the form $R=\prod_{i=1}^nA_i$ where $A_i\in\mathcal F_i$, $i=1,2,\dots,n$. The σ-algebra generated by all measurable rectangles is called the product σ-algebra of $\mathcal F_1,\dots,\mathcal F_n$; it is denoted by $\bigotimes_{i=1}^n\mathcal F_i$.

For any $R=\prod_{i=1}^nA_i$ we define
$$\mathbb P(R):=\prod_{i=1}^n\mathbb P_i(A_i).$$
One can show that $\mathbb P$ can be uniquely extended to a probability measure on $(\Omega,\mathcal F)$, where $\mathcal F=\bigotimes_{i=1}^n\mathcal F_i$, which is called the product probability of $\mathbb P_1,\mathbb P_2,\dots,\mathbb P_n$.
1.2 Probability measures in Hilbert spaces

1.2.1 Mean and covariance

Let $\mu$ be a probability measure on $(H,\mathcal B(H))$. Assume that $\mu$ has finite first moment,
$$\int_H|x|\,\mu(dx)<+\infty.$$
Then the linear functional $F\colon H\to\mathbb R$ defined as
$$F(h)=\int_H\langle x,h\rangle\,\mu(dx),\quad\forall\,h\in H,$$
is continuous since
$$|F(h)|\le\int_H|x|\,\mu(dx)\,|h|,\quad\forall\,h\in H.$$
By the Riesz representation theorem there exists $m\in H$ such that
$$\langle m,h\rangle=\int_H\langle x,h\rangle\,\mu(dx),\quad\forall\,h\in H.$$
$m$ is called the mean of $\mu$. We shall write
$$m=\int_Hx\,\mu(dx).$$
Assume now that the second moment of $\mu$ is finite,
$$\int_H|x|^2\,\mu(dx)<+\infty$$
(so that the first one is finite as well). Let us consider the bilinear form $G\colon H\times H\to\mathbb R$ defined as
$$G(h,k)=\int_H\langle h,x-m\rangle\langle k,x-m\rangle\,\mu(dx),\quad\forall\,h,k\in H.$$
$G$ is continuous since
$$|G(h,k)|\le\int_H|x-m|^2\,\mu(dx)\,|h|\,|k|,\quad\forall\,h,k\in H.$$
Therefore there is a unique linear bounded operator $Q\in L(H)$ such that
$$\langle Qh,k\rangle=\int_H\langle h,x-m\rangle\langle k,x-m\rangle\,\mu(dx),\quad\forall\,h,k\in H.$$
$Q$ is called the covariance of $\mu$.

In order to state the next result we need the concept of trace class operator. A symmetric and positive operator $Q\in L(H)$ is said to be of trace class if
$$\operatorname{Tr}Q:=\sum_{k=1}^\infty\langle Qe_k,e_k\rangle<+\infty$$
for one (and consequently for any) complete orthonormal system $(e_k)$. One can show that any trace class operator $Q$ is compact and that $\operatorname{Tr}Q$ is the sum of its eigenvalues repeated according to their multiplicity; see e.g. N. Dunford and J. T. Schwartz, Linear Operators. Part II, Interscience, 1964. (It is also possible to define trace class operators which are not symmetric, but we shall not need them in what follows.)

Proposition 1.2 The covariance operator $Q$ of $\mu$ is symmetric, positive and of trace class.

Proof. Symmetry and positivity of $Q$ are clear. To prove that $Q$ is of trace class choose a complete orthonormal system $(e_k)$ in $H$. Then we have
$$\langle Qe_k,e_k\rangle=\int_H|\langle x-m,e_k\rangle|^2\,\mu(dx),\quad k\in\mathbb N.$$
Therefore, by the monotone convergence theorem and the Parseval identity, we find that
$$\operatorname{Tr}Q=\sum_{k=1}^\infty\int_H|\langle x-m,e_k\rangle|^2\,\mu(dx)=\int_H|x-m|^2\,\mu(dx)<+\infty.\qquad\square$$

We shall denote by $L_1^+(H)$ the set of all positive, symmetric operators in $H$ of trace class.

We finally define the Fourier transform $\hat\mu$ of a probability measure $\mu$ by setting
$$\hat\mu(h)=\int_He^{i\langle x,h\rangle}\,\mu(dx),\quad\forall\,h\in H. \tag{1.2}$$
One checks easily that $\hat\mu\colon H\to\mathbb C$ is continuous.
1.2.2 Finite dimensional projections of measures

We are given a probability measure $\mu\in\mathscr P(H)$, where $\mathscr P(H)$ denotes the set of all probability measures on $(H,\mathcal B(H))$. Let $(e_k)$ be a complete orthonormal system in $H$. For any $n\in\mathbb N$ we consider the projection $P_n\colon H\to P_n(H)$ defined as
$$P_nx=\sum_{k=1}^n\langle x,e_k\rangle e_k,\quad x\in H. \tag{1.3}$$
We have $\lim_{n\to\infty}P_nx=x$ for all $x\in H$.

For any $n\in\mathbb N$ we consider the measure $\mu_n:=(P_n)_\#\mu$ defined by
$$\int_H\varphi(P_nx)\,\mu(dx)=\int_{P_n(H)}\varphi(y)\,\mu_n(dy)$$
for all $\varphi\in C_b(P_n(H))$. Thus $\mu_n$ is a probability measure on $(P_n(H),\mathcal B(P_n(H)))$. We shall also consider $\mu_n$ as a probability measure on $(H,\mathcal B(H))$, setting
$$\mu_n(I)=\mu_n(I\cap P_n(H)),\quad\forall\,I\in\mathcal B(H).$$
We want now to show that $\mu$ is determined by the sequence $(\mu_n)$. For this we first need the following result.
Proposition 1.3 Let $\mu,\nu\in\mathscr P(H)$ be such that
$$\int_H\varphi(x)\,\mu(dx)=\int_H\varphi(x)\,\nu(dx),\quad\forall\,\varphi\in C_b(H). \tag{1.4}$$
Then $\mu=\nu$.

Proof. Let $C\subset H$ be closed and let $(\varphi_n)\subset C_b(H)$ be such that

(i) $\lim_{n\to\infty}\varphi_n(x)=\mathbf 1_C(x)$ for all $x\in H$;

(ii) $\|\varphi_n\|_0\le 1$ for all $n\in\mathbb N$.

A sequence $(\varphi_n)\subset C_b(H)$ fulfilling (i) and (ii) is provided by
$$\varphi_n(x)=\begin{cases}1 &\text{if }x\in C,\\ 1-n\,d(x,C) &\text{if }d(x,C)\le\tfrac1n,\\ 0 &\text{if }d(x,C)\ge\tfrac1n.\end{cases}$$
Now, by the dominated convergence theorem and (1.4), it follows that
$$\mu(C)=\lim_{n\to\infty}\int_H\varphi_n\,d\mu=\lim_{n\to\infty}\int_H\varphi_n\,d\nu=\nu(C).$$
Since closed sets generate the Borel σ-algebra of $H$, this implies that $\mu=\nu$. $\square$
We can now prove the announced result.

Proposition 1.4 Let $\mu,\nu\in\mathscr P(H)$. If $(P_n)_\#\mu=(P_n)_\#\nu$ for any $n\in\mathbb N$, we have $\mu=\nu$.

Proof. Let $\varphi\in C_b(H)$. Then, using the dominated convergence theorem and the change of variables formula, we have
$$\int_H\varphi(x)\,\mu(dx)=\lim_{n\to\infty}\int_H\varphi(P_nx)\,\mu(dx)=\lim_{n\to\infty}\int_{P_n(H)}\varphi(\xi)\,((P_n)_\#\mu)(d\xi)$$
and
$$\int_H\varphi(x)\,\nu(dx)=\lim_{n\to\infty}\int_H\varphi(P_nx)\,\nu(dx)=\lim_{n\to\infty}\int_{P_n(H)}\varphi(\xi)\,((P_n)_\#\nu)(d\xi).$$
Since $(P_n)_\#\mu=(P_n)_\#\nu$ by assumption, we conclude that
$$\int_H\varphi(x)\,\mu(dx)=\int_H\varphi(x)\,\nu(dx)$$
for all $\varphi\in C_b(H)$. Therefore, in view of Proposition 1.3, we have $\mu=\nu$. $\square$
As an application of Proposition 1.4 we prove that the Fourier transform of $\mu$ determines $\mu$.

Proposition 1.5 Let $\mu,\nu\in\mathscr P(H)$ be such that $\hat\mu(h)=\hat\nu(h)$ for all $h\in H$. Then $\mu=\nu$.

Proof. We take for granted the result when $H$ is finite-dimensional (see e.g. M. Métivier, Notions fondamentales de la théorie des probabilités, Dunod Université, 1968). In the general case we have, by (1.1), for any $h\in H$ and $n\in\mathbb N$,
$$\hat\mu(P_nh)=\int_He^{i\langle x,P_nh\rangle}\,\mu(dx)=\int_{P_n(H)}e^{i\langle P_n\xi,P_nh\rangle}\,((P_n)_\#\mu)(d\xi)=\widehat{(P_n)_\#\mu}(P_nh)$$
and
$$\hat\nu(P_nh)=\int_He^{i\langle x,P_nh\rangle}\,\nu(dx)=\int_{P_n(H)}e^{i\langle P_n\xi,P_nh\rangle}\,((P_n)_\#\nu)(d\xi)=\widehat{(P_n)_\#\nu}(P_nh).$$
Therefore the measures $(P_n)_\#\mu$ and $(P_n)_\#\nu$ have the same Fourier transforms and so they coincide. The conclusion follows from Proposition 1.4. $\square$
1.3 Gaussian probability measures

We first recall the definition of Gaussian measure on $(\mathbb R,\mathcal B(\mathbb R))$, then we go to the general case.

1.3.1 Gaussian probability measures in R

For any pair of real numbers $(m,q)$ with $m\in\mathbb R$ and $q\ge 0$ we define a probability measure $N_{m,q}$ on $(\mathbb R,\mathcal B(\mathbb R))$ as follows. If $q=0$ we set
$$N_{m,0}=\delta_m,$$
where $\delta_m$ is the Dirac measure at $m$, defined for all $B\in\mathcal B(\mathbb R)$ by
$$\delta_m(B)=\begin{cases}1 &\text{if }m\in B,\\ 0 &\text{if }m\notin B.\end{cases}$$
If $q>0$ we set
$$N_{m,q}(B)=\frac{1}{\sqrt{2\pi q}}\int_Be^{-\frac{(x-m)^2}{2q}}\,dx,\quad\text{for all }B\in\mathcal B(\mathbb R).$$
$N_{m,q}$ is a probability measure since
$$N_{m,q}(\mathbb R)=\frac{1}{\sqrt{2\pi q}}\int_{-\infty}^{+\infty}e^{-\frac{(x-m)^2}{2q}}\,dx=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}e^{-\frac{x^2}{2}}\,dx=1.$$
If $q>0$, $N_{m,q}$ is absolutely continuous with respect to the Lebesgue measure $\lambda_1(dx)=dx$ on $(\mathbb R,\mathcal B(\mathbb R))$ and
$$N_{m,q}(dx)=\frac{1}{\sqrt{2\pi q}}\,e^{-\frac{(x-m)^2}{2q}}\,dx.$$
When $m=0$ we shall write for short $N_q$ instead of $N_{0,q}$.

It is easy to see that $m$ is the mean and $q$ the covariance of $N_{m,q}$. Moreover, its Fourier transform is given by
$$\widehat{N_{m,q}}(h):=\int_{\mathbb R}e^{ihx}\,N_{m,q}(dx)=e^{imh-\frac12qh^2},\quad h\in\mathbb R. \tag{1.5}$$
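As a quick numerical sanity check (an addition, not part of the original notes; it assumes NumPy is available), the following sketch samples from $N_{m,q}$ and compares the empirical mean, covariance and characteristic function with $m$, $q$ and formula (1.5). The values of $m$, $q$ and $h$ below are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check of the mean, covariance and Fourier transform of N_{m,q}.
rng = np.random.default_rng(0)
m, q, h = 1.5, 0.7, 2.0                          # illustrative choices

x = rng.normal(loc=m, scale=np.sqrt(q), size=1_000_000)  # samples of N_{m,q}

print(x.mean())                                  # ~ m
print(((x - m) ** 2).mean())                     # ~ q
print(np.exp(1j * h * x).mean())                 # ~ exp(i m h - q h^2 / 2), cf. (1.5)
print(np.exp(1j * m * h - 0.5 * q * h ** 2))
```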
1.3.2 Gaussian probability measures in R^n

We are going to define a Gaussian measure $N_{m,Q}$ for any $m=(m_1,\dots,m_n)\in\mathbb R^n$ and any $Q\in L^+(\mathbb R^n)$.

Let $Q\in L^+(\mathbb R^n)$ and let $(e_1,\dots,e_n)$ be an orthonormal basis of $\mathbb R^n$ such that $Qe_k=\lambda_ke_k$, $k=1,\dots,n$, for some $\lambda_k\ge 0$. Then we define a probability measure $N_{m,Q}$ on $(\mathbb R^n,\mathcal B(\mathbb R^n))$ by setting
$$N_{m,Q}=\bigotimes_{k=1}^nN_{m_k,\lambda_k}.$$
When $m=0$ we shall write $N_Q$ instead of $N_{m,Q}$ for short.

The proof of the following proposition is easy; it is left to the reader.

Proposition 1.6 Let $m\in\mathbb R^n$, $Q\in L^+(\mathbb R^n)$ and $\mu=N_{m,Q}$. Then we have
$$\int_{\mathbb R^n}x\,\mu(dx)=m,\qquad\int_{\mathbb R^n}\langle y,x-m\rangle\langle z,x-m\rangle\,\mu(dx)=\langle Qy,z\rangle,\quad y,z\in\mathbb R^n.$$
Moreover the Fourier transform of $N_{m,Q}$ is given by
$$\widehat{N_{m,Q}}(h):=\int_{\mathbb R^n}e^{i\langle h,x\rangle}\,\mu(dx)=e^{i\langle m,h\rangle-\frac12\langle Qh,h\rangle},\quad h\in\mathbb R^n.$$
Finally, if the determinant of $Q$ is positive, $N_{m,Q}$ is absolutely continuous with respect to the Lebesgue measure in $\mathbb R^n$ and we have
$$N_{m,Q}(dx)=\frac{1}{\sqrt{(2\pi)^n\det Q}}\,e^{-\frac12\langle Q^{-1}(x-m),x-m\rangle}\,dx.$$
Therefore $m$ is the mean and $Q$ the covariance operator of $N_{m,Q}$.
1.3.3 Gaussian probability measures in H

Let $m\in H$ and $Q\in L_1^+(H)$. We denote by $N_{m,Q}$ the probability measure on $(H,\mathcal B(H))$ of mean $m$, covariance $Q$ and Fourier transform given by
$$\widehat{N_{m,Q}}(h)=e^{i\langle m,h\rangle-\frac12\langle Qh,h\rangle},\quad h\in H. \tag{1.6}$$
One can show that such a measure does exist (see e.g. G. Da Prato, An introduction to infinite-dimensional analysis, Springer-Verlag, Berlin, 2006); it is unique thanks to Proposition 1.5.
1.3.4 Computation of some Gaussian integrals

To compute some integrals with respect to a Gaussian measure $\mu=N_{m,Q}$ in an infinite dimensional Hilbert space $H$ it is useful to reduce the computation to integrals over a sequence $(H_n)$ of finite dimensional vector spaces convergent to $H$, and then to let $n\to\infty$.

More precisely, given $\mu=N_{m,Q}\in\mathscr P(H)$, we shall proceed as follows. Since $Q$ is compact there exist a complete orthonormal system $(e_k)$ in $H$ and a sequence of nonnegative numbers $(\lambda_k)$ such that
$$Qe_k=\lambda_ke_k,\quad\forall\,k\in\mathbb N.$$
For any $n\in\mathbb N$ we set $m_n:=\langle m,e_n\rangle$,
$$P_nx=\sum_{k=1}^n\langle x,e_k\rangle e_k,\quad\forall\,x\in H,$$
and identify $P_n(H)$ with $\mathbb R^n$ through the isomorphism
$$P_n(H)\to\mathbb R^n,\quad x=\sum_{k=1}^n\langle x,e_k\rangle e_k\ \mapsto\ (\langle x,e_1\rangle,\dots,\langle x,e_n\rangle).$$

Exercise 1.7 Prove that
$$\mu_n=(P_n)_\#\mu=\bigotimes_{k=1}^nN_{m_k,\lambda_k}.$$
Hint. Show that the Fourier transform of $\mu_n$ is given by
$$\hat\mu_n(h)=e^{i\sum_{k=1}^nm_kh_k}\,e^{-\frac12\sum_{k=1}^n\lambda_kh_k^2}.$$
We shall assume (which is always true after a rearrangement) that
$$\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n\ge\cdots.$$
To formulate the next result notice that for any $\varepsilon<\frac1{\lambda_1}$ the linear operator $1-\varepsilon Q$ is invertible and $(1-\varepsilon Q)^{-1}$ is bounded. We have in fact, as easily checked,
$$(1-\varepsilon Q)^{-1}x=\sum_{k=1}^\infty\frac{1}{1-\varepsilon\lambda_k}\,\langle x,e_k\rangle e_k,\quad x\in H.$$
In this case we can define the determinant of $1-\varepsilon Q$ by setting
$$\det(1-\varepsilon Q):=\lim_{n\to\infty}\prod_{k=1}^n(1-\varepsilon\lambda_k)=:\prod_{k=1}^\infty(1-\varepsilon\lambda_k).$$

Exercise 1.8 Prove that $\prod_{k=1}^\infty(1-\varepsilon\lambda_k)>0$.
Hint. Write
$$\log\Big(\prod_{k=1}^\infty(1-\varepsilon\lambda_k)\Big)=\sum_{k=1}^\infty\log(1-\varepsilon\lambda_k)$$
and show that the series is convergent since $\sum_{k=1}^\infty\lambda_k<+\infty$.
Proposition 1.9 Let $\varepsilon\in\mathbb R$. Then we have
$$\int_He^{\frac{\varepsilon}{2}|x|^2}\,\mu(dx)=\begin{cases}[\det(1-\varepsilon Q)]^{-1/2}\,e^{\frac{\varepsilon}{2}\langle(1-\varepsilon Q)^{-1}m,m\rangle} &\text{if }\varepsilon<\frac1{\lambda_1},\\[2pt] +\infty &\text{otherwise.}\end{cases} \tag{1.7}$$

Proof. For any $n\in\mathbb N$ we have, taking into account Exercise 1.7,
$$\int_He^{\frac{\varepsilon}{2}|P_nx|^2}\,\mu(dx)=\int_{P_n(H)}e^{\frac{\varepsilon}{2}|P_n\xi|^2}\,\mu_n(d\xi)=\prod_{k=1}^n\int_{\mathbb R}e^{\frac{\varepsilon}{2}\xi_k^2}\,N_{m_k,\lambda_k}(d\xi_k).$$
Since $|P_nx|^2\uparrow|x|^2$ as $n\to\infty$ and, by an elementary computation,
$$\int_{\mathbb R}e^{\frac{\varepsilon}{2}x_k^2}\,N_{m_k,\lambda_k}(dx_k)=\frac{1}{\sqrt{1-\varepsilon\lambda_k}}\,e^{\frac{\varepsilon}{2}\,\frac{m_k^2}{1-\varepsilon\lambda_k}},$$
the conclusion follows from the monotone convergence theorem. $\square$
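Formula (1.7) can be tested numerically in finite dimensions. The following sketch (an addition to the notes, assuming NumPy) takes a diagonal covariance $Q$ on $\mathbb R^4$ and compares a Monte Carlo estimate of the left-hand side of (1.7) with the right-hand side; the eigenvalues, mean and $\varepsilon$ are illustrative choices, with $\varepsilon$ small enough that the estimator has finite variance.

```python
import numpy as np

# Monte Carlo check of (1.7) for N_{m,Q} on R^4 with diagonal Q = diag(lam).
rng = np.random.default_rng(1)
lam = np.array([1.0, 0.5, 0.25, 0.125])          # eigenvalues of Q
m = np.array([0.3, -0.2, 0.1, 0.0])              # mean
eps = 0.4                                        # eps < 1 / lam[0]

x = m + rng.normal(size=(2_000_000, lam.size)) * np.sqrt(lam)  # samples of N_{m,Q}
lhs = np.exp(0.5 * eps * (x ** 2).sum(axis=1)).mean()

rhs = (np.prod(1 - eps * lam) ** -0.5
       * np.exp(0.5 * eps * np.sum(m ** 2 / (1 - eps * lam))))
print(lhs, rhs)   # the two numbers should be close, up to Monte Carlo error
```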
Exercise 1.10 Prove that for all $m\in\mathbb N$
$$J_m:=\int_H|x|^{2m}\,\mu(dx)<\infty$$
and compute $J_m$.
Hint. Notice that $J_m=2^mF^{(m)}(0)$, where
$$F(\varepsilon)=\int_He^{\frac{\varepsilon}{2}|x|^2}\,\mu(dx),\quad\varepsilon>0.$$
Proposition 1.11 We have
$$\int_He^{\langle h,x\rangle}\,\mu(dx)=e^{\langle m,h\rangle}\,e^{\frac12\langle Qh,h\rangle},\quad h\in H. \tag{1.8}$$

Proof. For any $\varepsilon>0$ we have
$$e^{\langle h,x\rangle}\le e^{|x|\,|h|}\le e^{\varepsilon|x|^2}\,e^{\frac1\varepsilon|h|^2}.$$
Choosing $\varepsilon<\frac{1}{2\lambda_1}$ (so that $e^{\varepsilon|x|^2}$ is $\mu$-integrable by Proposition 1.9), we have, by the dominated convergence theorem,
$$\int_He^{\langle h,x\rangle}\,\mu(dx)=\lim_{n\to\infty}\int_He^{\langle h,P_nx\rangle}\,\mu(dx)=\lim_{n\to\infty}\int_{P_n(H)}e^{\langle P_nh,\xi\rangle}\,\mu_n(d\xi)$$
$$=\lim_{n\to\infty}e^{\langle P_nm,h\rangle}\,e^{\frac12\langle QP_nh,P_nh\rangle}=e^{\langle m,h\rangle}\,e^{\frac12\langle Qh,h\rangle}.\qquad\square$$
1.3.5 The Cameron–Martin space

We are given a Gaussian measure $\mu=N_Q$, where $Q\in L_1^+(H)$. We say that $\mu$ is non degenerate if $\operatorname{Ker}Q:=\{x\in H:Qx=0\}=\{0\}$. Thus, if $H$ is finite-dimensional, $\mu$ is non degenerate if and only if $\det Q>0$.

Assume now that $H$ is infinite-dimensional and that $\mu$ is non degenerate. We denote by $(e_k)$ a complete orthonormal system in $H$ such that $Qe_k=\lambda_ke_k$, $k\in\mathbb N$, where $(\lambda_k)$ are the eigenvalues of $Q$, and we set $x_k=\langle x,e_k\rangle$, $k\in\mathbb N$.

We notice that the inverse $Q^{-1}$ of $Q$ (which is well defined since $\operatorname{Ker}Q=\{0\}$) is not continuous because
$$Q^{-1}e_k=\frac{1}{\lambda_k}\,e_k,\quad k\in\mathbb N,$$
and $\lambda_k\to 0$ as $k\to\infty$. Consequently, recalling the closed graph theorem, we see that the range $Q(H)$ does not coincide with $H$. However, it is dense in $H$, as the following lemma shows.
Lemma 1.12 $Q(H)$ is a dense subspace of $H$.

Proof. In fact if $x_0$ is an element of $H$ orthogonal to $Q(H)$, we have
$$\langle Qx,x_0\rangle=\langle x,Qx_0\rangle=0,\quad\forall\,x\in H,$$
which yields $Qx_0=0$, and so $x_0=0$ because $\operatorname{Ker}Q=\{0\}$. $\square$

It is useful to introduce the operator $Q^{1/2}$ defined as
$$Q^{1/2}x=\sum_{k=1}^\infty\sqrt{\lambda_k}\,\langle x,e_k\rangle e_k,\quad x\in H.$$
Its range $Q^{1/2}(H)$ is called the Cameron–Martin space of the measure $\mu$. Arguing as before we see that $Q^{1/2}(H)$ is a subspace of $H$ different from $H$ and dense in $H$. Moreover it is clear that $x\in Q^{1/2}(H)$ if and only if
$$\sum_{k=1}^\infty\lambda_k^{-1}x_k^2<+\infty.$$
It is important to notice that the measure of the Cameron–Martin space is zero.

Proposition 1.13 We have $\mu(Q^{1/2}(H))=0$.

Proof. For any $n,k\in\mathbb N$ set
$$U_n=\Big\{y\in H:\sum_{h=1}^\infty\lambda_h^{-1}y_h^2<n^2\Big\}=\{y\in Q^{1/2}(H):|Q^{-1/2}y|<n\}$$
and
$$U_{n,k}=\Big\{y\in H:\sum_{h=1}^{2k}\lambda_h^{-1}y_h^2<n^2\Big\}.$$
Clearly $U_n\uparrow Q^{1/2}(H)$ as $n\to\infty$ and, for any $n\in\mathbb N$, $U_{n,k}\downarrow U_n$ as $k\to\infty$. So it is enough to show that
$$\mu(U_n)=\lim_{k\to\infty}\mu(U_{n,k})=0. \tag{1.9}$$
We have in fact
$$\mu(U_{n,k})=\int_{\{y:\ \sum_{h=1}^{2k}\lambda_h^{-1}y_h^2<n^2\}}\ \prod_{h=1}^{2k}N_{\lambda_h}(dy_h),$$
which, setting $z_h=\lambda_h^{-1/2}y_h$, is equivalent to
$$\mu(U_{n,k})=\int_{\{z\in\mathbb R^{2k}:\,|z|<n\}}N_{I_{2k}}(dz),$$
where $I_{2k}$ is the identity in $\mathbb R^{2k}$. Let us compute $\mu(U_{n,k})$. Passing to polar coordinates (and using that the total mass of $N_{I_{2k}}$ is $1$) we find
$$\mu(U_{n,k})=\frac{\displaystyle\int_0^ne^{-\frac{r^2}{2}}r^{2k-1}\,dr}{\displaystyle\int_0^{+\infty}e^{-\frac{r^2}{2}}r^{2k-1}\,dr}=\frac{\displaystyle\int_0^{n^2/2}e^{-\rho}\rho^{k-1}\,d\rho}{\displaystyle\int_0^{+\infty}e^{-\rho}\rho^{k-1}\,d\rho}.$$
Therefore
$$\mu(U_{n,k})=\frac{1}{(k-1)!}\int_0^{n^2/2}e^{-\rho}\rho^{k-1}\,d\rho\le\frac{1}{(k-1)!}\int_0^{n^2/2}\rho^{k-1}\,d\rho=\frac{1}{k!}\Big(\frac{n^2}{2}\Big)^k,$$
and (1.9) follows. $\square$
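The following illustrative script (not from the original text; it assumes NumPy) samples a single point $x$ from $N_Q$ with the trace-class choice $\lambda_k=k^{-2}$ and shows that the partial sums $\sum_{h\le n}\lambda_h^{-1}x_h^2$ keep growing, in agreement with Proposition 1.13: a $\mu$-typical $x$ lies outside the Cameron–Martin space.

```python
import numpy as np

# For x sampled from N_Q (lambda_k = k^{-2}, an arbitrary trace-class choice),
# the partial sums sum_{h<=n} lambda_h^{-1} x_h^2 grow roughly like n,
# so x is not in Q^{1/2}(H) with probability one.
rng = np.random.default_rng(2)
k = np.arange(1, 10_001)
lam = k ** -2.0                                  # eigenvalues of Q (trace class)

x = rng.normal(size=lam.size) * np.sqrt(lam)     # coordinates x_k = <x, e_k>
partial = np.cumsum(x ** 2 / lam)                # sum_{h<=n} lambda_h^{-1} x_h^2

for n in (10, 100, 1000, 10_000):
    print(n, partial[n - 1])                     # grows without bound
```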
Chapter 2

Gaussian random variables

2.1 Notations

Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space, $H$ a separable Hilbert space, and $X\colon\Omega\to H$ a random variable such that
$$\int_\Omega|X(\omega)|^2\,\mathbb P(d\omega)<\infty.$$
We denote by $X_\#\mathbb P$ the law of $X$, by $m(X)$ the mean of $X_\#\mathbb P$ and by $Q(X)$ the covariance of $X_\#\mathbb P$.

By the change of variables formula it follows that the Fourier transform of $X_\#\mathbb P$ is given by
$$\widehat{X_\#\mathbb P}(h)=\int_\Omega e^{i\langle X(\omega),h\rangle}\,\mathbb P(d\omega),\quad\forall\,h\in H,$$
and that
$$\langle m(X),h\rangle=\int_\Omega\langle X(\omega),h\rangle\,\mathbb P(d\omega),\quad\forall\,h\in H,$$
and
$$\langle Q(X)h,k\rangle=\int_\Omega\langle X(\omega)-m(X),h\rangle\,\langle X(\omega)-m(X),k\rangle\,\mathbb P(d\omega),\quad\forall\,h,k\in H.$$

Definition 2.1 We say that $X$ is a Gaussian random variable if $X_\#\mathbb P$ is a Gaussian measure, that is, if
$$\widehat{X_\#\mathbb P}(h)=e^{i\langle m(X),h\rangle}\,e^{-\frac12\langle Q(X)h,h\rangle},\quad\forall\,h\in H.$$
In this case we call $m(X)$ the mean and $Q(X)$ the covariance of $X$.
Example 2.2 Let $n\in\mathbb N$ and let $X_1,\dots,X_n$ be real random variables on $(\Omega,\mathcal F,\mathbb P)$. Then $X=(X_1,\dots,X_n)$ is an $\mathbb R^n$-valued random variable. So, $m(X)$ is a vector of $\mathbb R^n$ denoted by $(m(X)_1,\dots,m(X)_n)$ and $Q(X)$ is an $n\times n$ matrix denoted by $(Q(X)_{i,j})$, $i,j=1,\dots,n$.

More precisely, let $(e_1,\dots,e_n)$ be the canonical basis in $\mathbb R^n$. Then for any $k=1,\dots,n$ we have
$$m(X)_k=\langle m(X),e_k\rangle=\int_\Omega X_k(\omega)\,\mathbb P(d\omega)=m(X_k),$$
and for any $j,k=1,\dots,n$ we have
$$Q(X)_{j,k}=\langle Q(X)e_j,e_k\rangle=\int_\Omega(X_j(\omega)-m(X_j))(X_k(\omega)-m(X_k))\,\mathbb P(d\omega).$$
In particular, if $j=k$ we find
$$Q(X)_{k,k}=Q(X_k),\quad k=1,\dots,n.$$
Example 2.3 Assume that $X=(X_1,\dots,X_n)$ is an $n$-dimensional Gaussian random variable. Then $X_1,\dots,X_n$ are real Gaussian random variables. In fact, if $k=1,\dots,n$ and $a\in\mathbb R$ we have
$$\int_\Omega e^{iaX_k(\omega)}\,\mathbb P(d\omega)=\int_\Omega e^{i\langle ae_k,X(\omega)\rangle}\,\mathbb P(d\omega)=e^{i\langle ae_k,m(X)\rangle}\,e^{-\frac12a^2\langle Q(X)e_k,e_k\rangle}=e^{iam(X_k)}\,e^{-\frac12a^2Q(X_k)}.$$
Notice that, conversely, if $X_1,\dots,X_n$ are real Gaussian random variables, then $X=(X_1,\dots,X_n)$ is not necessarily Gaussian.
2.2 Independence

In this section we introduce the basic concept of independence.

2.2.1 Independent real variables

Definition 2.4 Let $n\in\mathbb N$ and let $X_1,\dots,X_n$ be real random variables on $(\Omega,\mathcal F,\mathbb P)$. Consider the $\mathbb R^n$-valued random variable
$$X(\omega)=(X_1(\omega),\dots,X_n(\omega)),\quad\omega\in\Omega.$$
We say that $X_1,\dots,X_n$ are independent if
$$X_\#\mathbb P=\bigotimes_{j=1}^n(X_j)_\#\mathbb P.$$
Let $(X_i)$ be a sequence of real random variables. They are called independent if $X_{i_1},\dots,X_{i_n}$ are independent for any choice of $n$ and of positive integers $i_1<i_2<\cdots<i_n$.
A necessary and sufficient condition for independence is provided by the following proposition.

Proposition 2.5 Let $X_1,\dots,X_n$, $n\in\mathbb N$, be real independent random variables on $(\Omega,\mathcal F,\mathbb P)$. Let moreover $\varphi_1,\dots,\varphi_n$ be nonnegative Borel functions. Then we have
$$\int_\Omega\varphi_1(X_1(\omega))\cdots\varphi_n(X_n(\omega))\,\mathbb P(d\omega)=\int_\Omega\varphi_1(X_1(\omega))\,\mathbb P(d\omega)\cdots\int_\Omega\varphi_n(X_n(\omega))\,\mathbb P(d\omega). \tag{2.1}$$
Conversely, if (2.1) holds for any choice of nonnegative Borel functions $\varphi_1,\dots,\varphi_n$, then $X_1,\dots,X_n$ are independent.

Proof. Set $X=(X_1,\dots,X_n)$ and let $\psi\colon\mathbb R^n\to\mathbb R$ be defined as
$$\psi(\xi_1,\dots,\xi_n)=\varphi_1(\xi_1)\cdots\varphi_n(\xi_n),\quad(\xi_1,\dots,\xi_n)\in\mathbb R^n.$$
Then by the change of variables formula we have, taking into account the independence of $X_1,\dots,X_n$,
$$\int_\Omega\varphi_1(X_1(\omega))\cdots\varphi_n(X_n(\omega))\,\mathbb P(d\omega)=\int_\Omega\psi(X(\omega))\,\mathbb P(d\omega)=\int_{\mathbb R^n}\psi(\xi)\,(X_\#\mathbb P)(d\xi)$$
$$=\int_{\mathbb R}\varphi_1(\xi_1)\,((X_1)_\#\mathbb P)(d\xi_1)\cdots\int_{\mathbb R}\varphi_n(\xi_n)\,((X_n)_\#\mathbb P)(d\xi_n)=\int_\Omega\varphi_1(X_1(\omega))\,\mathbb P(d\omega)\cdots\int_\Omega\varphi_n(X_n(\omega))\,\mathbb P(d\omega).$$
Assume conversely that (2.1) holds for any choice of nonnegative Borel functions $\varphi_1,\dots,\varphi_n$. To prove the independence of $X_1,\dots,X_n$ it is enough to show that
$$(X_\#\mathbb P)(I_1\times\cdots\times I_n)=((X_1)_\#\mathbb P)(I_1)\cdots((X_n)_\#\mathbb P)(I_n),\quad\forall\,I_1,\dots,I_n\in\mathcal B(\mathbb R).$$
But this follows immediately by setting in (2.1)
$$\varphi_i=\mathbf 1_{I_i},\quad i=1,\dots,n.\qquad\square$$
Exercise 2.6 Let $X_1,\dots,X_n$ be real independent random variables on $(\Omega,\mathcal F,\mathbb P)$. Show that
$$\int_\Omega X_1\cdots X_n\,d\mathbb P=\int_\Omega X_1\,d\mathbb P\cdots\int_\Omega X_n\,d\mathbb P$$
and
$$V(X_1+\cdots+X_n)=V(X_1)+\cdots+V(X_n),$$
where $V$ denotes the variance.
The following useful result is left to the reader as an exercise.

Proposition 2.7 Let $X_1,\dots,X_n$ be real random variables on $(\Omega,\mathcal F,\mathbb P)$ and let $X=(X_1,\dots,X_n)$. Then $X_1,\dots,X_n$ are independent if and only if
$$\widehat{X_\#\mathbb P}(h)=\prod_{k=1}^n\widehat{(X_k)_\#\mathbb P}(h_k),\quad\forall\,h=(h_1,\dots,h_n)\in\mathbb R^n.$$
Definition 2.8 Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space and $A_1,\dots,A_n\in\mathcal F$. We say that the sets $A_1,\dots,A_n$ are independent if the random variables $\mathbf 1_{A_1},\dots,\mathbf 1_{A_n}$ are so.

Exercise 2.9 Show that the sets $A_1,\dots,A_n$ are independent if and only if
$$\mathbb P(A_{j_1}\cap\cdots\cap A_{j_k})=\mathbb P(A_{j_1})\cdots\mathbb P(A_{j_k})$$
for all $k=1,\dots,n$ and all choices of $k$ distinct positive integers $j_1,\dots,j_k$ less than or equal to $n$.
Proposition 2.10 Let $X_1,\dots,X_n$ be real independent random variables on $(\Omega,\mathcal F,\mathbb P)$ and let $X=(X_1,\dots,X_n)$. Then the covariance matrix $Q(X)$ is diagonal.

Proof. We have in fact (by Exercise 2.6), for $i\ne j$, $i,j=1,\dots,n$,
$$Q(X)_{i,j}=\int_\Omega(X_i(\omega)-m(X_i))(X_j(\omega)-m(X_j))\,\mathbb P(d\omega)=\int_\Omega(X_i(\omega)-m(X_i))\,\mathbb P(d\omega)\int_\Omega(X_j(\omega)-m(X_j))\,\mathbb P(d\omega)=0.\qquad\square$$
The converse of Proposition 2.10 does not hold in general.
2.2.2 Independent Gaussian random variables

Let $X_1,\dots,X_n$ be real random variables on $(\Omega,\mathcal F,\mathbb P)$ and let $X=(X_1,\dots,X_n)$.
Proposition 2.11 Assume that $X_1,\dots,X_n$ are independent Gaussian random variables. Then $X=(X_1,\dots,X_n)$ is Gaussian.

Proof. In fact, let $h=(h_1,\dots,h_n)\in\mathbb R^n$. Then, taking into account the independence of $X_1,\dots,X_n$,
$$\widehat{X_\#\mathbb P}(h)=\int_\Omega e^{i(X_1(\omega)h_1+\cdots+X_n(\omega)h_n)}\,\mathbb P(d\omega)=\prod_{k=1}^n\int_\Omega e^{iX_k(\omega)h_k}\,\mathbb P(d\omega)$$
$$=e^{i(m(X_1)h_1+\cdots+m(X_n)h_n)}\,e^{-\frac12(Q(X_1)h_1^2+\cdots+Q(X_n)h_n^2)}.\qquad\square$$
Proposition 2.12 Assume that $X_1,\dots,X_n$ are real random variables and that $X=(X_1,\dots,X_n)$ is Gaussian. Then $X_1,\dots,X_n$ are independent if and only if $Q(X)$ is diagonal.

Proof. If $X_1,\dots,X_n$ are independent, the conclusion follows from Proposition 2.10. Assume now that $Q(X)$ is diagonal. By Proposition 2.7 it is enough to show that
$$\widehat{X_\#\mathbb P}(h)=\prod_{k=1}^n\widehat{(X_k)_\#\mathbb P}(h_k)$$
for each $h=(h_1,\dots,h_n)\in\mathbb R^n$. We have in fact
$$\widehat{X_\#\mathbb P}(h)=e^{i\langle m(X),h\rangle}\,e^{-\frac12\langle Q(X)h,h\rangle}=e^{i\langle m(X),h\rangle}\,e^{-\frac12\sum_{k=1}^nQ(X)_{k,k}h_k^2}=e^{i\langle m(X),h\rangle}\,e^{-\frac12\sum_{k=1}^nQ(X_k)h_k^2}=\prod_{k=1}^n\widehat{(X_k)_\#\mathbb P}(h_k).\qquad\square$$
2.3 Gaussian random variables defined in a Hilbert space

We now consider the case when $(\Omega,\mathcal F,\mathbb P)$ coincides with $(H,\mathcal B(H),\mu)$, where $H$ is a separable Hilbert space and $\mu=N_{m,Q}$ with $m\in H$ and $Q\in L_1^+(H)$.
2.3.1 Affine changes of variables

Let $b\in K$ and $A\in L(H,K)$, where $K$ is another separable Hilbert space. Let us consider the affine transformation
$$T(x)=Ax+b,\quad x\in H.$$

Proposition 2.13 $T$ is a Gaussian random variable and its law $T_\#\mu$ is given by $N_{Am+b,\,AQA^*}$, where $A^*$ is the adjoint of $A$.

Proof. We have in fact
$$\int_Ke^{i\langle k,y\rangle}\,T_\#\mu(dy)=\int_He^{i\langle k,T(x)\rangle}\,\mu(dx)=\int_He^{i\langle k,Ax+b\rangle}\,\mu(dx)$$
$$=e^{i\langle k,b\rangle}\int_He^{i\langle A^*k,x\rangle}\,\mu(dx)=e^{i\langle k,Am+b\rangle}\,e^{-\frac12\langle AQA^*k,k\rangle},\quad k\in K.\qquad\square$$
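Proposition 2.13 is easy to verify numerically in finite dimensions. The sketch below (an addition to the notes, assuming NumPy) samples $x\sim N_{m,Q}$ on $\mathbb R^2$, applies an affine map $T(x)=Ax+b$ into $\mathbb R^3$, and compares the empirical mean and covariance of $T(x)$ with $Am+b$ and $AQA^*$; all matrices and vectors are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of Proposition 2.13: if x ~ N_{m,Q} on R^2 and T(x) = A x + b
# with A a 3x2 matrix, then T(x) ~ N_{A m + b, A Q A^T}.
rng = np.random.default_rng(3)
m = np.array([1.0, -2.0])
Q = np.array([[2.0, 0.6], [0.6, 1.0]])           # symmetric, positive
A = np.array([[1.0, 0.0], [2.0, -1.0], [0.5, 0.5]])
b = np.array([0.1, 0.2, 0.3])

x = rng.multivariate_normal(m, Q, size=1_000_000)
y = x @ A.T + b                                  # samples of T(x)

print(y.mean(axis=0), A @ m + b)                 # empirical vs. predicted mean
print(np.cov(y, rowvar=False))                   # ~ A Q A^T
print(A @ Q @ A.T)
```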
Example 2.14 Let $\mu=N_{m,Q}$, $n\in\mathbb N$ and $f_1,\dots,f_n\in H$. Let $F\colon H\to\mathbb R^n$ be defined as
$$F(x):=(\langle x,f_1\rangle,\dots,\langle x,f_n\rangle),\quad x\in H.$$
Then by Proposition 2.13, $F$ is a Gaussian random variable with mean $m(F)$ and covariance $Q(F)$ given by
$$m(F)=F(m)=(\langle m,f_1\rangle,\dots,\langle m,f_n\rangle)$$
and
$$Q(F)=FQF^*.$$
On the other hand, the linear operator $F^*\colon\mathbb R^n\to H$ is given by
$$F^*(\xi)=\sum_{k=1}^nf_k\xi_k,\quad\forall\,\xi=(\xi_1,\dots,\xi_n)\in\mathbb R^n.$$
Therefore
$$QF^*(\xi)=\sum_{k=1}^nQf_k\xi_k,\quad\forall\,\xi=(\xi_1,\dots,\xi_n)\in\mathbb R^n,$$
and
$$FQF^*(\xi)=\Big(\Big\langle\sum_{k=1}^nQf_k\xi_k,f_1\Big\rangle,\dots,\Big\langle\sum_{k=1}^nQf_k\xi_k,f_n\Big\rangle\Big),$$
so that
$$Q(F)_{h,k}=\langle Qf_h,f_k\rangle. \tag{2.2}$$
Therefore the components $F_1,\dots,F_n$ of $F$ are independent if and only if
$$\langle Qf_h,f_k\rangle=0\quad\text{for }h\ne k,\ h,k=1,\dots,n.$$
2.4 The white noise function

In order to define the white noise function (which will play an important role in what follows), we shall deal with equivalence classes of random variables (rather than random variables), which we briefly discuss in the next subsection.

2.4.1 Equivalence classes of random variables

Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space and let $H$ be a separable Hilbert space. We denote by $\mathcal R(H)$ the set of all $H$-valued random variables.

Definition 2.15 We say that $X,Y\in\mathcal R(H)$ are equivalent (and write $X\sim Y$) if
$$\mathbb P(\{\omega\in\Omega:X(\omega)=Y(\omega)\})=1.$$
One can easily check that $\sim$ is an equivalence relation on $\mathcal R(H)$, so that $\mathcal R(H)$ is the disjoint union of its equivalence classes.

We notice that if $X\sim Y$ then the laws of $X$ and $Y$ coincide. In fact, set
$$K=\{\omega\in\Omega:X(\omega)\ne Y(\omega)\},$$
so that $\mathbb P(K)=0$. Since for any $I\in\mathcal B(H)$ we have
$$X^{-1}(I)\subset Y^{-1}(I)\cup K,$$
it follows that $\mathbb P(X^{-1}(I))\le\mathbb P(Y^{-1}(I))$ and, exchanging $X$ and $Y$, we see that $\mathbb P(X^{-1}(I))=\mathbb P(Y^{-1}(I))$.

Consequently, all random variables belonging to a fixed equivalence class $\tilde X$ have the same law, which is called the law of $\tilde X$. In the following we shall not distinguish between a random variable $X$ and the equivalence class $\tilde X$ containing $X$, except when needed.
By $L^p(\Omega,\mathcal F,\mathbb P;H)$, $p\ge 1$, we mean the space of all equivalence classes of random variables $X\colon\Omega\to H$ such that
$$\int_\Omega|X(\omega)|^p\,\mathbb P(d\omega)<+\infty.$$
$L^p(\Omega,\mathcal F,\mathbb P;H)$, endowed with the norm
$$\|X\|_{L^p(\Omega,\mathcal F,\mathbb P;H)}=\Big(\int_\Omega|X(\omega)|^p\,\mathbb P(d\omega)\Big)^{1/p},$$
is a Banach space. We shall write $L^p(\Omega,\mathcal F,\mathbb P;H)=L^p(\Omega,\mathbb P;H)$ for brevity.

We prove now that the limit of a convergent sequence of Gaussian random variables in $L^2(\Omega,\mathbb P;H)$ is Gaussian.
Proposition 2.16 Let $(X_n)\subset L^2(\Omega,\mathbb P;H)$ be a sequence of Gaussian random variables convergent to $X$ in $L^2(\Omega,\mathbb P;H)$. Then $X$ is a Gaussian random variable and
$$\langle m(X),h\rangle=\lim_{n\to\infty}\langle m(X_n),h\rangle,\quad h\in H,$$
and
$$\langle Q(X)h,k\rangle=\lim_{n\to\infty}\langle Q(X_n)h,k\rangle,\quad h,k\in H.$$

Proof. Since $X_n\to X$ in $L^2(\Omega,\mathbb P;H)$ we have
$$\lim_{n\to\infty}\langle m(X_n),h\rangle=\lim_{n\to\infty}\int_\Omega\langle X_n(\omega),h\rangle\,\mathbb P(d\omega)=\int_\Omega\langle X(\omega),h\rangle\,\mathbb P(d\omega)=\langle m(X),h\rangle$$
and
$$\lim_{n\to\infty}\langle Q(X_n)h,k\rangle=\lim_{n\to\infty}\int_\Omega\langle X_n(\omega)-m(X_n),h\rangle\,\langle X_n(\omega)-m(X_n),k\rangle\,\mathbb P(d\omega)$$
$$=\int_\Omega\langle X(\omega)-m(X),h\rangle\,\langle X(\omega)-m(X),k\rangle\,\mathbb P(d\omega)=\langle Q(X)h,k\rangle.$$
Let us show now that $X$ is a Gaussian random variable. We have in fact
$$\int_He^{i\langle y,h\rangle}\,(X_\#\mathbb P)(dy)=\int_\Omega e^{i\langle X(\omega),h\rangle}\,\mathbb P(d\omega)=\lim_{n\to\infty}\int_\Omega e^{i\langle X_n(\omega),h\rangle}\,\mathbb P(d\omega)$$
$$=\lim_{n\to\infty}e^{i\langle m(X_n),h\rangle}\,e^{-\frac12\langle Q(X_n)h,h\rangle}=e^{i\langle m(X),h\rangle}\,e^{-\frac12\langle Q(X)h,h\rangle}.\qquad\square$$
2.4.2 Definition of the white noise function

In this section we assume that the Hilbert space $H$ is infinite dimensional and consider a non degenerate Gaussian measure $\mu=N_Q$ in $H$ ($\operatorname{Ker}Q=\{0\}$). Since $Q$ is compact there exist a complete orthonormal basis $(e_k)$ of $H$ and a sequence of positive numbers $(\lambda_k)$ such that
$$Qe_k=\lambda_ke_k,\quad k\in\mathbb N.$$
Let us define a mapping
$$W\colon Q^{1/2}(H)\to C(H),\quad z\mapsto W_z,$$
where
$$W_z(x)=\langle x,Q^{-1/2}z\rangle,\quad\forall\,x\in H.$$
Here $Q^{1/2}(H)$ is the Cameron–Martin space and $C(H)$ the space of all real continuous functions on $H$.

Lemma 2.17 For all $z_1,z_2\in Q^{1/2}(H)$ we have
$$\int_HW_{z_1}(x)W_{z_2}(x)\,\mu(dx)=\langle z_1,z_2\rangle. \tag{2.3}$$

Proof. We have in fact
$$\int_HW_{z_1}(x)W_{z_2}(x)\,\mu(dx)=\int_H\langle x,Q^{-1/2}z_1\rangle\langle x,Q^{-1/2}z_2\rangle\,\mu(dx)=\langle QQ^{-1/2}z_1,Q^{-1/2}z_2\rangle=\langle z_1,z_2\rangle.\qquad\square$$

Since $Q^{1/2}(H)$ is dense in $H$, the mapping $W$ can be uniquely extended to a mapping from $H$ into $L^2(H,\mu)$, which we still denote by $W$ and call the white noise function.

$W_f$ is linear in the sense that for all $\alpha,\beta\in\mathbb R$ we have
$$W_f(\alpha x+\beta y)=\alpha W_f(x)+\beta W_f(y),\quad x,y\ \ \mu\text{-a.e.}$$

Remark 2.18 Given $z\in H$ (not belonging to $Q^{1/2}(H)$) it would be tempting to define the random variable $W_z$ by setting
$$W_z(x)=\langle Q^{-1/2}x,z\rangle,\quad x\in Q^{1/2}(H).$$
However this definition is meaningless because $\mu(Q^{1/2}(H))=0$ by Proposition 1.13.
Proposition 2.19 Let $z\in H$. Then $W_z$ is a real Gaussian random variable with mean $0$ and covariance $|z|^2$.

Proof. We have to show that
$$\int_He^{i\eta W_z(x)}\,\mu(dx)=e^{-\frac12\eta^2|z|^2},\quad\forall\,\eta\in\mathbb R.$$
Let $(z_n)\subset Q^{1/2}(H)$ be a sequence such that $z_n\to z$ in $H$. Then, by the dominated convergence theorem, we have
$$\int_He^{i\eta W_z(x)}\,\mu(dx)=\lim_{n\to\infty}\int_He^{i\eta\langle Q^{-1/2}z_n,x\rangle}\,\mu(dx)=\lim_{n\to\infty}e^{-\frac12\eta^2|z_n|^2}=e^{-\frac12\eta^2|z|^2}.$$
So the conclusion follows. $\square$

The following generalization of Proposition 2.19 is important.
Proposition 2.20 Let $n\in\mathbb N$ and $z_1,\dots,z_n\in H$. Then $(W_{z_1},\dots,W_{z_n})$ is an $n$-dimensional Gaussian random variable with mean $0$ and covariance operator $Q_z$ given by
$$(Q_z)_{h,k}=\langle z_h,z_k\rangle,\quad h,k=1,\dots,n. \tag{2.4}$$
The random variables $W_{z_1},\dots,W_{z_n}$ are independent if and only if $z_1,\dots,z_n$ are mutually orthogonal.

Proof. Let $(z_1^j),\dots,(z_n^j)$ be $n$ sequences in $Q^{1/2}(H)$ convergent respectively to $z_1,\dots,z_n$ in $H$. Then we have, by the dominated convergence theorem,
$$\int_He^{i(\xi_1W_{z_1}(x)+\cdots+\xi_nW_{z_n}(x))}\,\mu(dx)=\lim_{j\to\infty}\int_He^{i(\xi_1\langle Q^{-1/2}z_1^j,x\rangle+\cdots+\xi_n\langle Q^{-1/2}z_n^j,x\rangle)}\,\mu(dx)$$
$$=\lim_{j\to\infty}\int_He^{i\langle x,\,Q^{-1/2}(\xi_1z_1^j+\cdots+\xi_nz_n^j)\rangle}\,\mu(dx)=\lim_{j\to\infty}e^{-\frac12|\xi_1z_1^j+\cdots+\xi_nz_n^j|^2}$$
$$=e^{-\frac12|\xi_1z_1+\cdots+\xi_nz_n|^2}=e^{-\frac12\sum_{j,k=1}^n\langle z_j,z_k\rangle\xi_j\xi_k}.\qquad\square$$
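A finite-dimensional numerical illustration of Lemma 2.17 and Propositions 2.19–2.20 is sketched below (an addition to the notes, assuming NumPy). It truncates $H$ to $200$ coordinates, takes the illustrative eigenvalues $\lambda_k=k^{-2}$, samples $x\sim N_Q$, and checks that the empirical covariance of $W_{z_1}$ and $W_{z_2}$ is close to $\langle z_1,z_2\rangle$.

```python
import numpy as np

# W_z(x) = <x, Q^{-1/2} z> = sum_k lambda_k^{-1/2} z_k x_k, with x ~ N_Q.
rng = np.random.default_rng(8)
k = np.arange(1, 201)
lam = k ** -2.0                                           # illustrative eigenvalues
x = rng.normal(size=(50_000, lam.size)) * np.sqrt(lam)    # truncated samples of N_Q

z1 = np.exp(-k / 10.0)                                    # two illustrative vectors
z2 = 1.0 / k
W1 = x @ (z1 / np.sqrt(lam))                              # W_{z1}(x)
W2 = x @ (z2 / np.sqrt(lam))                              # W_{z2}(x)

print((W1 * W2).mean(), z1 @ z2)   # ~ <z1, z2>, cf. (2.3)-(2.4)
print(W1.var(), z1 @ z1)           # ~ |z1|^2, cf. Proposition 2.19
```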
Chapter 3

Brownian Motion

3.1 Stochastic Processes

We are given a probability space $(\Omega,\mathcal F,\mathbb P)$. We denote by $\mathbb P^*$ the outer measure of $\mathbb P$. We recall that a null set of $\Omega$ is a set of outer measure zero. For any integrable real random variable $F$ we write
$$\mathbb E(F)=\int_\Omega F(\omega)\,\mathbb P(d\omega).$$
So, in particular, we have
$$F_\#\mathbb P(I)=\mathbb E(\mathbf 1_I(F)),\quad\forall\,I\in\mathcal B(\mathbb R).$$
We say that a property $\pi$ concerning elements of $\Omega$ holds $\mathbb P$-a.s. if the set where $\pi$ does not hold is a null set.

Definition 3.1 A family $X=(X(t))_{t\ge 0}$ of real random variables on $(\Omega,\mathcal F,\mathbb P)$ is called a real stochastic process on $[0,+\infty)$. For any $\omega\in\Omega$, $X(\cdot,\omega)$ is called a trajectory of $X$.

• $X$ is Gaussian if for any $n\in\mathbb N$ and any $0\le t_1<\cdots<t_n$ the $n$-dimensional random variable $(X(t_1),\dots,X(t_n))$ is Gaussian.

• $X$ is continuous if $X(\cdot,\omega)$ is continuous $\mathbb P$-a.s.

• $X$ is $p$-mean continuous, $p\ge 1$, if

(i) $X(t)$ is $p$-integrable for any $t\ge 0$;

(ii) we have
$$\lim_{t\to t_0}\mathbb E[|X(t)-X(t_0)|^p]=0,\quad\forall\,t_0\ge 0. \tag{3.1}$$

We notice that a $p$-mean continuous process is not continuous in general.

We say that two stochastic processes $X$ and $Y$ are equivalent if for all $t\ge 0$ we have
$$X(t,\omega)=Y(t,\omega),\quad\mathbb P\text{-a.s.}$$
When $X$ and $Y$ are equivalent we also say that $Y$ is a version of $X$ (or that $X$ is a version of $Y$).
3.2 Brownian motion

Definition 3.2 A real Brownian motion $B=(B(t))_{t\ge 0}$ on $(\Omega,\mathcal F,\mathbb P)$ is a real stochastic process such that

(i) $B(0)=0$ and, if $0\le s<t$, $B(t)-B(s)$ is a real Gaussian random variable with law $N_{t-s}$;

(ii) if $0<t_1<\cdots<t_n$, the random variables
$$B(t_1),\ B(t_2)-B(t_1),\ \dots,\ B(t_n)-B(t_{n-1})$$
are independent.

We express condition (ii) by saying that $B$ is a process with independent increments.

Lemma 3.3 Let $t,s>0$. Then
$$\mathbb E[B(t)B(s)]=\min\{t,s\}. \tag{3.2}$$

Proof. Let for instance $t>s$. Then we have
$$\mathbb E[B(t)B(s)]=\mathbb E[(B(t)-B(s))B(s)]+\mathbb E[B^2(s)].$$
On the other hand, $B(t)-B(s)$ is independent of $B(s)$, so that
$$\mathbb E[(B(t)-B(s))B(s)]=\mathbb E[B(t)-B(s)]\,\mathbb E[B(s)]=0.$$
Since the law of $B(s)$ is $N_s$, we conclude that $\mathbb E[B(t)B(s)]=s$, as required. $\square$
3.2.1 Construction of a Brownian motion

Consider the probability space $(H,\mathcal B(H),\mu)$, where $H=L^2(0,+\infty)$ and $\mu=N_Q$, $Q$ being an arbitrary (but fixed) non degenerate Gaussian measure in $H$. Define
$$B(t)=W_{\mathbf 1_{[0,t]}},\quad t\ge 0, \tag{3.3}$$
where
$$\mathbf 1_{[0,t]}(s)=\begin{cases}1 &\text{if }s\in[0,t],\\ 0 &\text{otherwise},\end{cases}$$
and $W$ is the white noise function defined in Chapter 2. More precisely, for any $t\ge 0$ we choose an arbitrary element in the equivalence class of $B(t)$, which we still denote by $B(t)$.

Clearly, for any $t\ge 0$, $B(t)$ is a Gaussian random variable with law $N_t$ and, for any $t>s\ge 0$, $B(t)-B(s)=W_{\mathbf 1_{(s,t]}}$ is a Gaussian random variable with law $N_{t-s}$. So $B$ fulfills Definition 3.2(i). Let us prove (ii). Since the system of elements of $H$
$$(\mathbf 1_{[0,t_1]},\mathbf 1_{(t_1,t_2]},\dots,\mathbf 1_{(t_{n-1},t_n]})$$
is orthogonal, we have by Proposition 2.20 that the random variables
$$B(t_1),\ B(t_2)-B(t_1),\ \dots,\ B(t_n)-B(t_{n-1})$$
are independent. Thus (ii) is proved as well.
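In computations one usually simulates a Brownian motion directly from Definition 3.2, by summing independent $N_{t_j-t_{j-1}}$ increments on a grid, rather than through the white noise construction above. The following sketch (an addition to the notes, assuming NumPy) does this on a uniform grid and checks the covariance identity of Lemma 3.3 empirically; the horizon, step size and sample size are illustrative choices.

```python
import numpy as np

# Discrete simulation of Brownian motion from independent N_{dt} increments.
rng = np.random.default_rng(4)
T, n_steps, n_paths = 1.0, 500, 10_000
dt = T / n_steps

dB = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), dB.cumsum(axis=1)], axis=1)

# Empirical check of Lemma 3.3: E[B(t) B(s)] = min(t, s).
s_idx, t_idx = 150, 350                          # s = 0.3, t = 0.7 on this grid
print((B[:, s_idx] * B[:, t_idx]).mean())        # ~ min(0.3, 0.7) = 0.3
```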
3.2.2 Some properties of a Brownian motion

Proposition 3.4 Let $B(t)$, $t\ge 0$, be a Brownian motion on $(\Omega,\mathcal F,\mathbb P)$. Then $B$ is a Gaussian process. Moreover, if $0<t_1<\cdots<t_n$, the law of $(B(t_1),\dots,B(t_n))$ is given by
$$\mathbb P((B(t_1),\dots,B(t_n))\in I)=(2\pi)^{-n/2}\big(t_1(t_2-t_1)\cdots(t_n-t_{n-1})\big)^{-1/2}\int_Ie^{-\frac{\eta_1^2}{2t_1}-\frac{(\eta_2-\eta_1)^2}{2(t_2-t_1)}-\cdots-\frac{(\eta_n-\eta_{n-1})^2}{2(t_n-t_{n-1})}}\,d\eta \tag{3.4}$$
for all $I\in\mathcal B(\mathbb R^n)$.

Proof. Let $0<t_1<\cdots<t_n$ and set
$$X:=(B(t_1),B(t_2)-B(t_1),\dots,B(t_n)-B(t_{n-1})),\qquad Z:=(B(t_1),\dots,B(t_n)).$$
Since the random variables $B(t_1),B(t_2)-B(t_1),\dots,B(t_n)-B(t_{n-1})$ are independent, by Proposition 2.11 it follows that $X$ is an $n$-dimensional Gaussian random variable with mean $0$ and covariance operator
$$Q(X)=\operatorname{diag}(t_1,t_2-t_1,\dots,t_n-t_{n-1}).$$
Now consider the linear mapping $T\in L(\mathbb R^n)$ defined by
$$T(x_1,\dots,x_n)=(x_1,x_1+x_2,\dots,x_1+\cdots+x_n),\quad\forall\,(x_1,\dots,x_n)\in\mathbb R^n.$$
It is clear that $Z=T(X)$. Therefore by Proposition 2.13, $Z$ is Gaussian with mean $0$ and covariance $Q(Z)=TQ(X)T^*$, where $T^*$ is the transpose of $T$.

It remains to show (3.4). If $I\in\mathcal B(\mathbb R^n)$ we have
$$\mathbb P(Z\in I)=(2\pi)^{-n/2}(\det Q(Z))^{-1/2}\int_Ie^{-\frac12\langle(Q(Z))^{-1}\eta,\eta\rangle}\,d\eta.$$
Since $\det T=\det T^*=1$, as easily checked, we have
$$\det Q(Z)=\det Q(X)=t_1(t_2-t_1)\cdots(t_n-t_{n-1}).$$
Moreover, since
$$T^{-1}\eta=(\eta_1,\eta_2-\eta_1,\dots,\eta_n-\eta_{n-1}),$$
we have
$$\langle(Q(Z))^{-1}\eta,\eta\rangle=\langle Q(X)^{-1}T^{-1}\eta,T^{-1}\eta\rangle=\frac{\eta_1^2}{t_1}+\frac{(\eta_2-\eta_1)^2}{t_2-t_1}+\cdots+\frac{(\eta_n-\eta_{n-1})^2}{t_n-t_{n-1}},$$
and so the conclusion follows. $\square$
Proposition 3.5 Let $B(t)$, $t\ge 0$, be a Brownian motion on $(\Omega,\mathcal F,\mathbb P)$. Then $B$ is $p$-mean continuous for all $p\ge 1$.

Proof. It is enough to show the result for $p=2m$, $m\in\mathbb N$. Let $t>t_0\ge 0$. Since $B(t)-B(t_0)$ is a Gaussian random variable with law $N_{t-t_0}$, we have
$$\mathbb E(|B(t)-B(t_0)|^{2m})=\int_{\mathbb R}|\xi|^{2m}\,N_{t-t_0}(d\xi)=\frac{(2m)!}{m!\,2^m}\,(t-t_0)^m.$$
Therefore
$$\lim_{t\to t_0}\mathbb E(|B(t)-B(t_0)|^{2m})=0$$
and the conclusion follows. $\square$
Exercise 3.6 Let $B(t)$ be a Brownian motion on a probability space $(\Omega,\mathcal F,\mathbb P)$. Prove that the following are Brownian motions.

(i) $B_1(t)=B(t+h)-B(h)$, $t\ge 0$, where $h>0$ is given.

(ii) $B_2(t)=\alpha B(\alpha^{-2}t)$, $t\ge 0$, where $\alpha>0$ is given.

(iii) $B_3(t)=tB(1/t)$, $t>0$, $B_3(0)=0$.

(iv) $B_4(t)=-B(t)$, $t\ge 0$.
3.3 Wiener integral

Let $B(t)$, $t\ge 0$, be a Brownian motion on $(\Omega,\mathcal F,\mathbb P)$ and let $f\in L^2(0,T)$ with $T>0$. We want to define the stochastic integral
$$\int_0^Tf(s)\,dB(s).$$
We start with step functions. Let $0=t_0<t_1<\cdots<t_n=T$ and $f_{t_0},f_{t_1},\dots,f_{t_{n-1}}\in\mathbb R$, and set
$$f=\sum_{j=1}^nf_{t_{j-1}}\,\mathbf 1_{(t_{j-1},t_j]}.$$
Then define
$$\int_0^Tf(s)\,dB(s):=\sum_{j=1}^nf_{t_{j-1}}\,(B(t_j)-B(t_{j-1})).$$
Let us prove two basic identities.

Lemma 3.7 We have
$$\mathbb E\Big(\int_0^Tf(s)\,dB(s)\Big)=0 \tag{3.5}$$
and
$$\mathbb E\Big[\Big(\int_0^Tf(s)\,dB(s)\Big)^2\Big]=\sum_{j=1}^n|f_{t_{j-1}}|^2(t_j-t_{j-1})=\int_0^Tf^2(s)\,ds. \tag{3.6}$$

Proof. Identity (3.5) is obvious. Let us prove (3.6). Writing $I(f)=\int_0^Tf(s)\,dB(s)$, we have
$$\mathbb E(|I(f)|^2)=\mathbb E\Big(\sum_{j=1}^n|f_{t_{j-1}}|^2[B(t_j)-B(t_{j-1})]^2\Big)+2\,\mathbb E\Big(\sum_{j<k}f_{t_{j-1}}f_{t_{k-1}}[B(t_j)-B(t_{j-1})][B(t_k)-B(t_{k-1})]\Big). \tag{3.7}$$
Now the conclusion follows taking into account that $B(t_j)-B(t_{j-1})$ is a real Gaussian random variable with law $N_{t_j-t_{j-1}}$ and that $B(t_j)-B(t_{j-1})$ is independent of $B(t_k)-B(t_{k-1})$ for $k\ne j$. $\square$
Denote by $S(0,T)$ the linear space of all step functions. By (3.6) it follows that the linear mapping
$$I\colon S(0,T)\subset L^2(0,T)\to L^2(\Omega,\mathcal F,\mathbb P),\quad f\mapsto I(f)=\int_0^Tf(s)\,dB(s),$$
is continuous. Since $S(0,T)$ is dense in $L^2(0,T)$, it can be uniquely extended to the whole of $L^2(0,T)$. We still denote this extension by $I(f)=\int_0^Tf(s)\,dB(s)$. It is clear that for any $f\in L^2(0,T)$ we have
$$\mathbb E\Big(\int_0^Tf(s)\,dB(s)\Big)=0 \tag{3.8}$$
and
$$\mathbb E\Big[\Big(\int_0^Tf(s)\,dB(s)\Big)^2\Big]=\int_0^Tf^2(s)\,ds. \tag{3.9}$$
The random variable (more precisely, the equivalence class of random variables) $\int_0^Tf(s)\,dB(s)$, which belongs to $L^2(\Omega,\mathcal F,\mathbb P)$, is called the Wiener integral of $f$ on $[0,T]$.

We define in an obvious way the Wiener integral $\int_a^bf(s)\,dB(s)$ for any $a,b\ge 0$. It is easy to see that if $a,b,c\ge 0$ we have
$$\int_a^bf(s)\,dB(s)+\int_b^cf(s)\,dB(s)=\int_a^cf(s)\,dB(s).$$
Exercise 3.8 Let $f,g\in L^2(0,T)$. Show that
$$\mathbb E\Big(\int_0^Tf(s)\,dB(s)\int_0^Tg(s)\,dB(s)\Big)=\int_0^Tf(s)g(s)\,ds.$$
Proposition 3.9 Let $f\in L^2(0,T)$. Then $I(f)=\int_0^Tf(s)\,dB(s)$ is a real Gaussian random variable with law $N_q$, where $q=\int_0^T|f(s)|^2\,ds$.

Proof. It is enough to prove the result for $f$ of the form
$$f=\sum_{i=1}^nf_{t_{i-1}}\,\mathbf 1_{(t_{i-1},t_i]},$$
where $n\in\mathbb N$ and $0=t_0<t_1<\cdots<t_n=T$, so that
$$I(f)=\sum_{i=1}^nf_{t_{i-1}}\,(B(t_i)-B(t_{i-1})).$$
Since the random variables
$$B(t_1),\ B(t_2)-B(t_1),\ \dots,\ B(t_n)-B(t_{n-1})$$
are independent, we have that $I(f)$ is a real Gaussian random variable with law $N_q$, where
$$q=\sum_{i=1}^nf_{t_{i-1}}^2\,(t_i-t_{i-1}).\qquad\square$$
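The Wiener integral can be approximated by the sums used in its definition. The sketch below (not part of the original notes; it assumes NumPy) computes $\sum_jf(t_{j-1})(B(t_j)-B(t_{j-1}))$ for $f(s)=\cos s$ on $[0,1]$ over many simulated paths and checks (3.8) and (3.9); the choice of $f$ and the discretization parameters are illustrative.

```python
import numpy as np

# Discrete approximation of the Wiener integral I(f) = \int_0^T f(s) dB(s).
rng = np.random.default_rng(5)
T, n_steps, n_paths = 1.0, 1000, 10_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
f = np.cos(t[:-1])                               # f evaluated at t_{j-1}

dB = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
I = (f * dB).sum(axis=1)                         # one Wiener integral per path

print(I.mean())                                  # ~ 0, cf. (3.8)
print(I.var())                                   # ~ \int_0^1 cos^2(s) ds, cf. (3.9)
print(0.5 + np.sin(2.0) / 4.0)                   # exact value of the integral
```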
We now show a relation between the white noise function and the Wiener integral.

Example 3.10 We use here the notations of Section 3.2.1. Let $f\in L^2(0,\infty)$. Then we have
$$W_f=\int_0^\infty f(s)\,dB(s). \tag{3.10}$$
It is enough to show (3.10) when
$$f=\sum_{k=1}^nf_{t_{k-1}}\,\mathbf 1_{(t_{k-1},t_k]},$$
where $0\le t_0<\cdots<t_n$. In this case we have in fact
$$\int_0^\infty f(s)\,dB(s)=\sum_{k=1}^nf_{t_{k-1}}\,W_{\mathbf 1_{(t_{k-1},t_k]}}=W_{\sum_{k=1}^nf_{t_{k-1}}\mathbf 1_{(t_{k-1},t_k]}}=W_f.$$

Let $f\colon[0,\infty)\to\mathbb R$ be such that $f\in L^2(0,T)$ for every $T>0$. Let us introduce a stochastic process by setting
$$F(t)=\int_0^tf(s)\,dB(s),\quad\forall\,t\ge 0.$$

Proposition 3.11 The process $F(t)$, $t\ge 0$, is $p$-mean continuous for any $p\ge 1$.
Proof. Let $p=2m$, $m\in\mathbb N$, and $t>t_0\ge 0$. Then by Proposition 3.9 we have that
$$F(t)-F(t_0)=\int_{t_0}^tf(s)\,dB(s)$$
is a real Gaussian random variable with mean $0$ and covariance $\int_{t_0}^tf^2(s)\,ds$. Therefore
$$\mathbb E|F(t)-F(t_0)|^{2m}=\frac{(2m)!}{m!\,2^m}\Big(\int_{t_0}^tf^2(s)\,ds\Big)^m,$$
so that
$$\lim_{t\to t_0}\mathbb E|F(t)-F(t_0)|^{2m}=0.\qquad\square$$
We note finally that if $f\in C^1([0,T])$, then it is possible to express the Wiener integral $\int_0^Tf(s)\,dB(s)$ in terms of a Riemann integral, as the following integration by parts formula shows.

Proposition 3.12 If $f\in C^1([0,T])$ we have
$$\int_0^Tf(s)\,dB(s)=f(T)B(T)-\int_0^Tf'(s)B(s)\,ds,\quad\mathbb P\text{-a.s.} \tag{3.11}$$

Proof. Let $\sigma=\{0=t_0<t_1<\cdots<t_n=T\}$ be a decomposition of $[0,T]$. Then we have
$$I_\sigma(f)=\sum_{k=1}^nf(t_{k-1})(B(t_k)-B(t_{k-1}))$$
$$=\sum_{k=1}^n\big(f(t_k)B(t_k)-f(t_{k-1})B(t_{k-1})\big)-\sum_{k=1}^n\big(f(t_k)-f(t_{k-1})\big)B(t_k)$$
$$=f(T)B(T)-\sum_{k=1}^n\big(f(t_k)-f(t_{k-1})\big)B(t_k)=f(T)B(T)-\sum_{k=1}^nf'(\alpha_k)B(t_k)(t_k-t_{k-1}),$$
where the $\alpha_k$ are suitable numbers in the interval $[t_{k-1},t_k]$, $k=1,\dots,n$. It follows that
$$\lim_{|\sigma|\to 0}I_\sigma(f)=f(T)B(T)-\int_0^Tf'(s)B(s)\,ds,\quad\mathbb P\text{-a.s.}\qquad\square$$
3.4 Continuity of Brownian motion

Let $B(t)$, $t\ge 0$, be a Brownian motion on a probability space $(\Omega,\mathcal F,\mathbb P)$. We are going to show that $B$ possesses a continuous version. To this purpose we shall use a representation formula for $B$ proved in the next proposition.

Proposition 3.13 For any $\alpha\in(0,1/2)$ we have
$$B(t)=\frac{\sin\pi\alpha}{\pi}\int_0^t(t-\sigma)^{\alpha-1}Y_\alpha(\sigma)\,d\sigma, \tag{3.12}$$
where
$$Y_\alpha(\sigma)=\int_0^\sigma(\sigma-s)^{-\alpha}\,dB(s). \tag{3.13}$$
Notice that the Wiener integral $Y_\alpha$ is meaningful since $\alpha\in(0,1/2)$.

Proof. We start from the following elementary identity, valid for any $\alpha\in(0,1)$:
$$\int_s^t(t-\sigma)^{\alpha-1}(\sigma-s)^{-\alpha}\,d\sigma=\frac{\pi}{\sin\pi\alpha},\quad 0\le s\le t. \tag{3.14}$$
To check (3.14) it is enough to set $\sigma=r(t-s)+s$, so that (3.14) becomes
$$\int_0^1(1-r)^{\alpha-1}r^{-\alpha}\,dr=\beta(\alpha,1-\alpha)=\frac{\pi}{\sin\pi\alpha}.$$
Now, since obviously $B(t)=\int_0^tdB(s)$, we can write
$$B(t)=\frac{\sin\pi\alpha}{\pi}\int_0^t\Big(\int_s^t(t-\sigma)^{\alpha-1}(\sigma-s)^{-\alpha}\,d\sigma\Big)dB(s).$$
Exchanging the order of integration (this requires a justification, which is left to the reader) yields
$$B(t)=\frac{\sin\pi\alpha}{\pi}\int_0^t(t-\sigma)^{\alpha-1}\Big(\int_0^\sigma(\sigma-s)^{-\alpha}\,dB(s)\Big)d\sigma.\qquad\square$$

We can now prove the result.

Theorem 3.14 Let $B(t)$, $t\ge 0$, be a Brownian motion on a probability space $(\Omega,\mathcal F,\mathbb P)$. Then $B$ possesses a continuous version.

Proof. Choose a version $Y_\alpha(\cdot,\omega)$ of the stochastic process $Y_\alpha$ which is $2m$-integrable with $2m>1/\alpha$; this is possible in view of Proposition 3.11. Now set
$$B(t,\omega)=\frac{\sin\pi\alpha}{\pi}\int_0^t(t-\sigma)^{\alpha-1}Y_\alpha(\sigma,\omega)\,d\sigma,\quad\forall\,t\ge 0.$$
Then $B(\cdot,\omega)$ is a continuous version of $B$ thanks to the following analytic lemma. $\square$

Lemma 3.15 Let $\alpha\in(0,1/2)$, $m\in\mathbb N$ with $2m>1/\alpha$ and $f\in L^{2m}(0,T)$. Set
$$F(t)=\int_0^t(t-\sigma)^{\alpha-1}f(\sigma)\,d\sigma,\quad t\in[0,T].$$
Then $F\in C([0,T])$.

Proof. By Hölder's inequality we have
$$|F(t)|\le\Big(\int_0^t(t-\sigma)^{(\alpha-1)\frac{2m}{2m-1}}\,d\sigma\Big)^{\frac{2m-1}{2m}}|f|_{L^{2m}(0,T)}. \tag{3.15}$$
(Notice that $(\alpha-1)\frac{2m}{2m-1}>-1$.) Therefore $F\in L^\infty(0,T)$ and $F$ is continuous at $0$. Let us prove that $F$ is continuous on $[\frac{t_0}{2},T]$ for any $t_0\in(0,T]$. Let us set, for $\varepsilon<\frac{t_0}{2}$,
$$F_\varepsilon(t)=\int_0^{t-\varepsilon}(t-\sigma)^{\alpha-1}f(\sigma)\,d\sigma,\quad t\in[0,T].$$
$F_\varepsilon$ is obviously continuous on $[\frac{t_0}{2},T]$. Moreover, using again Hölder's inequality, we find
$$|F(t)-F_\varepsilon(t)|\le\Big(\frac{2m-1}{2m\alpha-1}\Big)^{\frac{2m-1}{2m}}\varepsilon^{\alpha-\frac{1}{2m}}|f|_{L^{2m}(0,T)}.$$
Thus $\lim_{\varepsilon\to 0}F_\varepsilon(t)=F(t)$ uniformly on $[\frac{t_0}{2},T]$, and $F$ is continuous as required. $\square$

Exercise 3.16 Prove that $B$ possesses a Hölder continuous version with any exponent $\beta<1/2$.
3.5 The standard Brownian motion

Let us consider a Brownian motion $B(t)$, $t\ge 0$, on a probability space $(\Omega,\mathcal F,\mathbb P)$ such that $B(\cdot,\omega)$ is continuous for all $\omega\in\Omega$. We denote by $B$ the mapping
$$B\colon\Omega\to C_0,\quad\omega\mapsto B(\cdot,\omega),$$
where $C_0=\{\eta\in C([0,+\infty)):\eta(0)=0\}$.
3.5.1 Some properties of C_0

First we notice that, as easily checked, $C_0$, endowed with the metric
$$d(\eta_1,\eta_2):=\sum_{k=1}^\infty\frac{\|\eta_1-\eta_2\|_k}{2^k(1+\|\eta_1-\eta_2\|_k)},$$
is a complete metric space. Here we have set, for any $k\in\mathbb N$,
$$\|\eta\|_k=\sup\{|\eta(t)|:t\in[0,k]\},\quad\forall\,\eta\in C_0.$$
Let us now consider the σ-algebra $\mathcal B(C_0)$. It is important to notice that $\mathcal B(C_0)$ is generated by the cylindrical subsets of $C_0$, which we shall introduce now.

For $n\in\mathbb N$, $0<t_1<\cdots<t_n$ and $A\in\mathcal B(\mathbb R^n)$ we define
$$C_{t_1,t_2,\dots,t_n;A}:=\{\eta\in C_0:(\eta(t_1),\dots,\eta(t_n))\in A\}.$$
Note that
$$C_{t_1,t_2,\dots,t_n;A}=C_{t_1,t_2,\dots,t_n,t_{n+1},\dots,t_{n+k};A\times\mathbb R^k},\quad k,n\in\mathbb N.$$
Using this identity one can easily see that the family $\mathscr C$ of all cylindrical sets is an algebra. Moreover, the σ-algebra generated by $\mathscr C$ coincides with $\mathcal B(C_0)$, since any ball (with respect to the metric of $C_0$) is a countable intersection of cylindrical sets.
3.5.2 The Wiener measure and the standard Brownian motion

We come back to the mapping
$$B\colon\Omega\to C_0,\quad\omega\mapsto B(\cdot,\omega),$$
and we denote by $Q$ its law, which is a probability measure on $(C_0,\mathcal B(C_0))$. $Q$ is called the Wiener measure on $(C_0,\mathcal B(C_0))$.

So, for any nonnegative Borel mapping $F\colon C_0\to\mathbb R$, $\eta\mapsto F(\eta)$, we have
$$\mathbb E[F(B(\cdot))]=\int_\Omega F(B(\cdot,\omega))\,\mathbb P(d\omega)=\int_{C_0}F(\eta)\,Q(d\eta). \tag{3.16}$$
Some examples of mappings $F$ are the following.
(i) $F(\eta)=g(\eta(t_0))$ for all $\eta\in C_0$, where $g\colon\mathbb R\to\mathbb R$ is nonnegative Borel and $t_0>0$ is given.

(ii) $F(\eta)=G(\eta(t_1),\dots,\eta(t_n))$ for all $\eta\in C_0$, where $G\colon\mathbb R^n\to\mathbb R$ is nonnegative Borel and $t_1,\dots,t_n>0$ are given.

(iii) $F(\eta)=\sup_{t\in[0,1]}|\eta(t)|$ for all $\eta\in C_0$.

Now we define a stochastic process $W(t)$, $t\ge 0$, on $(C_0,\mathcal B(C_0),Q)$ by setting
$$W(t)(\eta)=\eta(t),\quad\eta\in C_0,\ t\ge 0.$$
Proposition 3.17 $W$ is a Brownian motion on $(C_0,\mathcal B(C_0),Q)$, called the standard Brownian motion.

Proof. The proof is straightforward. Let us show for instance that for $t>s\ge 0$, $W(t)-W(s)$ is a Gaussian random variable with law $N_{t-s}$. For this it is enough to show that the Fourier transform of $W(t)-W(s)$,
$$\psi(h):=\int_{C_0}e^{i(\eta(t)-\eta(s))h}\,Q(d\eta),\quad h\in\mathbb R,$$
is given by $e^{-\frac12(t-s)h^2}$, $h\in\mathbb R$. In fact by (3.16) we have
$$\int_{C_0}e^{i(\eta(t)-\eta(s))h}\,Q(d\eta)=\int_\Omega e^{i(B(t,\omega)-B(s,\omega))h}\,\mathbb P(d\omega)=\mathbb E[e^{i(B(t)-B(s))h}]=e^{-\frac12(t-s)h^2},\quad h\in\mathbb R.$$
In an analogous way one can prove that $W(t)$, $t\ge 0$, has independent increments. $\square$
Let us compute the Wiener measure of a cylindrical set.

Proposition 3.18 Let $C_{t_1,t_2,\dots,t_n;A}$ be a cylindrical set. Then we have
$$Q(C_{t_1,t_2,\dots,t_n;A})=\frac{1}{\sqrt{(2\pi)^nt_1(t_2-t_1)\cdots(t_n-t_{n-1})}}\int_Ae^{-\frac{\xi_1^2}{2t_1}-\frac{(\xi_2-\xi_1)^2}{2(t_2-t_1)}-\cdots-\frac{(\xi_n-\xi_{n-1})^2}{2(t_n-t_{n-1})}}\,d\xi.$$

Proof. We simply note that, thanks to (3.16), we have
$$Q(C_{t_1,t_2,\dots,t_n;A})=\mathbb P((B(t_1),\dots,B(t_n))\in A),$$
so that the conclusion follows from Proposition 3.4. $\square$
3.6 Quadratic variation of the Brownian motion

In this section we are given a real continuous Brownian motion $B(t)$, $t\ge 0$, on a probability space $(\Omega,\mathcal F,\mathbb P)$. For any $T>0$ we denote by $\Sigma(0,T)$ the set of all decompositions of $[0,T]$,
$$\sigma=\{0=t_0<t_1<\cdots<t_n=T\}.$$
Then for any $\sigma=\{0=t_0<t_1<\cdots<t_n=T\}\in\Sigma(0,T)$ we set
$$|\sigma|:=\max\{t_k-t_{k-1}:k=1,\dots,n\}.$$
We introduce a partial ordering on $\Sigma(0,T)$ by setting
$$\sigma_1\le\sigma_2\quad\text{if and only if}\quad|\sigma_1|\le|\sigma_2|.$$
Let us now introduce the quadratic variation of the Brownian motion $B$ on $[0,T]$. For any $\sigma=\{0=t_0<t_1<\cdots<t_n=T\}\in\Sigma(0,T)$ we define
$$J_\sigma:=\sum_{k=1}^n|B(t_k)-B(t_{k-1})|^2.$$
Then we prove:

Theorem 3.19 We have
$$\lim_{|\sigma|\to 0}J_\sigma=T\quad\text{in }L^2(\Omega,\mathcal F,\mathbb P).$$
We say that $T$ is the quadratic variation of $B$ on $[0,T]$.
Proof. Since $B(t_k)-B(t_{k-1})$ is a real Gaussian random variable with law $N_{t_k-t_{k-1}}$, we have $\mathbb E(J_\sigma)=T$, and so
$$\mathbb E(|J_\sigma-T|^2)=\mathbb E(J_\sigma^2)-2T\,\mathbb E(J_\sigma)+T^2=\mathbb E(J_\sigma^2)-T^2. \tag{3.17}$$
Moreover
$$\mathbb E|J_\sigma|^2=\mathbb E\Big[\sum_{k=1}^n|B(t_k)-B(t_{k-1})|^2\Big]^2=\mathbb E\sum_{k=1}^n|B(t_k)-B(t_{k-1})|^4+2\sum_{h<k}\mathbb E\,|B(t_h)-B(t_{h-1})|^2\,|B(t_k)-B(t_{k-1})|^2.$$
But we have
$$\mathbb E\sum_{k=1}^n|B(t_k)-B(t_{k-1})|^4=3\sum_{k=1}^n(t_k-t_{k-1})^2, \tag{3.18}$$
and, since $B(t_h)-B(t_{h-1})$ and $B(t_k)-B(t_{k-1})$ are independent for $h<k$,
$$\sum_{h<k}\mathbb E\,|B(t_h)-B(t_{h-1})|^2\,|B(t_k)-B(t_{k-1})|^2=\sum_{h<k}(t_h-t_{h-1})(t_k-t_{k-1}). \tag{3.19}$$
Therefore
$$\mathbb E|J_\sigma|^2=3\sum_{k=1}^n(t_k-t_{k-1})^2+2\sum_{h<k}(t_h-t_{h-1})(t_k-t_{k-1})=2\sum_{k=1}^n(t_k-t_{k-1})^2+\Big(\sum_{k=1}^n(t_k-t_{k-1})\Big)^2=2\sum_{k=1}^n(t_k-t_{k-1})^2+T^2. \tag{3.20}$$
Now, substituting (3.20) into (3.17), we obtain
$$\mathbb E\big(|J_\sigma-T|^2\big)=2\sum_{k=1}^n(t_k-t_{k-1})^2\to 0$$
as $|\sigma|\to 0$. $\square$
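Theorem 3.19 is easy to observe numerically. The following sketch (an addition to the notes, assuming NumPy) computes $J_\sigma$ for uniform partitions of $[0,1]$ of increasing fineness and prints the empirical mean of $J_\sigma$ and the empirical value of $\mathbb E|J_\sigma-T|^2$, which by the proof above equals $2\sum_k(t_k-t_{k-1})^2=2T^2/n$ for the uniform partition with $n$ intervals.

```python
import numpy as np

# Quadratic sums J_sigma over finer and finer uniform partitions of [0, T].
rng = np.random.default_rng(6)
T, n_paths = 1.0, 2_000

for n in (10, 100, 1000, 5000):
    dt = T / n
    dB = rng.normal(scale=np.sqrt(dt), size=(n_paths, n))
    J = (dB ** 2).sum(axis=1)                    # J_sigma for the uniform partition
    print(n, J.mean(), ((J - T) ** 2).mean())    # mean ~ T, L^2 error ~ 2 T^2 / n
```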
An important consequence of Theorem 3.19 is that almost all trajectories of the Brownian motion $B$ do not have bounded variation. (Recall that for $f\colon[0,T]\to\mathbb R$ and any $\sigma=\{0=t_0<t_1<\cdots<t_n=T\}\in\Sigma(0,T)$ one sets $V_\sigma(f)=\sum_{k=1}^n|f(t_k)-f(t_{k-1})|$ and defines $V(f):=\sup_{\sigma\in\Sigma(0,T)}V_\sigma(f)$; $V(f)$ is called the variation of $f$, and $BV(0,T)$ is the set of all functions $f\colon[0,T]\to\mathbb R$ of finite variation.) In other terms, the set
$$V_T:=\{\omega\in\Omega:B(\cdot,\omega)\in BV(0,T)\}$$
has outer probability zero. In fact the following result holds.

Proposition 3.20 We have $\mathbb P^*(V_T)=0$.

Proof. Set
$$\Lambda:=\{\omega\in\Omega:B(\cdot,\omega)\text{ is continuous}\},$$
so that $\mathbb P(\Lambda)=1$ because $B$ is continuous. Since $\lim_{|\sigma|\to 0}J_\sigma=T$ in $L^2(\Omega,\mathcal F,\mathbb P)$, there exist a sequence $(\sigma_n)\subset\Sigma(0,T)$ with $|\sigma_n|\to 0$ and a set $\Lambda_1\in\mathcal F$ such that

(i) $\mathbb P(\Lambda_1)=1$;

(ii) $\lim_{n\to\infty}J_{\sigma_n}(\omega)=T$ for all $\omega\in\Lambda_1$.

We claim that
$$V_T\cap\Lambda\subset\Lambda_1^c. \tag{3.21}$$
By the claim the conclusion will follow, since $\mathbb P(\Lambda_1^c)=0$.

Let us prove the claim. Let $\omega\in V_T\cap\Lambda$. Since $B(\cdot,\omega)$ is uniformly continuous on $[0,T]$, for any $\varepsilon>0$ there exists $\delta_\varepsilon>0$ such that
$$t,s\in[0,T],\ |t-s|<\delta_\varepsilon\ \Longrightarrow\ |B(t,\omega)-B(s,\omega)|<\varepsilon.$$
Consequently, if $n$ is so large that $|\sigma_n|<\delta_\varepsilon$, we have $J_{\sigma_n}(\omega)\le\varepsilon\,V(B(\cdot,\omega))$. Since $\varepsilon$ is arbitrary, $\omega$ cannot belong to $\Lambda_1$. The claim is proved. $\square$
3.7 Multidimensional Brownian motions

Definition 3.21 Let $n\in\mathbb N$ and let $X_1,\dots,X_n$ be stochastic processes on a probability space $(\Omega,\mathcal F,\mathbb P)$. Then $X(t):=(X_1(t),\dots,X_n(t))$, $t\ge 0$, is called an $n$-dimensional stochastic process.

$X_1,\dots,X_n$ are said to be independent if for any $t_1,\dots,t_n\in[0,+\infty)$ the random variables $X_i(t_i)$ are independent.

An $n$-dimensional Brownian motion is an $n$-dimensional stochastic process $B(t):=(B_1(t),\dots,B_n(t))$, $t\ge 0$, such that $B_1,\dots,B_n$ are independent Brownian motions.

Example 3.22 Let us construct an $n$-dimensional Brownian motion. Let $(e_1,\dots,e_n)$ be the canonical basis in $\mathbb R^n$ and choose $\Omega=H=L^2(0,+\infty;\mathbb R^n)$, $\mathcal F=\mathcal B(H)$ and $\mathbb P=N_Q$, where $Q$ is any operator in $L_1^+(H)$ such that $\operatorname{Ker}Q=\{0\}$. Then set
$$B_i(t)=W_{e_i\mathbf 1_{[0,t]}},\quad\forall\,t\ge 0,\ i=1,\dots,n.$$
Then one can check easily that $B(t)=(B_1(t),\dots,B_n(t))$ is an $n$-dimensional Brownian motion.
Let $B$ be a Brownian motion in $\mathbb R^n$. Then the following properties are easily checked.

(i) If $t>s$, $B(t)-B(s)$ is a Gaussian random variable with law $N_{(t-s)I_n}$, where $I_n$ denotes the identity in $\mathbb R^n$.

(ii) $\mathbb E[B_i(t)B_j(t)]=0$ if $i\ne j$.

(iii) We have
$$\mathbb E\big[|B(t)-B(s)|^2\big]=n(t-s). \tag{3.22}$$
Let us check (iii). We have
$$\mathbb E\big[|B(t)-B(s)|^2\big]=\sum_{k=1}^n\mathbb E\big[|B_k(t)-B_k(s)|^2\big]=n(t-s).$$

Exercise 3.23 Prove that for $0\le s<t$ we have
$$\mathbb E\big[|B(t)-B(s)|^4\big]=(2n+n^2)(t-s)^2. \tag{3.23}$$
Exercise 3.24 Let A, C ∈ L(R
d
) and set
Z(t) = e
tA
x +
_
t
0
e
(t−s)A
CdB(s), t ≥ 0.
Prove that the law of Z(t) in R
d
is given by
N
e
tA
x,Q
t
, (3.24)
where
Q
t
=
_
t
0
e
sA
CC

e
sA

ds, (3.25)
where A

and C

are the adjoint of A and C respectively.
Chapter 4

Markov property of the Brownian motion

Let us consider the probability space $(C_0, \mathcal B(C_0), Q)$, where $C_0$ is the complete metric space of all continuous functions $\omega: [0,+\infty)\to\mathbb R$ introduced in Chapter 3 and $Q$ is the Wiener measure. Moreover, let $W(t)$, $t \ge 0$, be the standard Brownian motion in $(C_0,\mathcal B(C_0),Q)$ defined by
$$W(t)(\omega) = \omega(t), \quad \forall\, t \ge 0,\ \omega \in C_0.$$
This chapter is devoted to some sharp properties of the Brownian motion, in particular the Markov and strong Markov properties and the reflection principle. To this purpose we shall introduce some basic concepts such as filtration, stopping time and transition semigroup.

4.1 Filtration

For any $t > 0$ we denote by $\mathcal C_t$ the algebra of all cylindrical sets
$$C_{t_1,\cdots,t_n;A} = \{\omega \in C_0:\ (\omega(t_1),\dots,\omega(t_n)) \in A\} = \{\omega \in C_0:\ (W(t_1),\dots,W(t_n)) \in A\},$$
where $0 \le t_1 < \cdots < t_n$, $t_n \le t$ and $A \in \mathcal B(\mathbb R^n)$. Moreover, we denote by $\mathcal F_t$ the $\sigma$-algebra generated by $\mathcal C_t$. Obviously $\mathcal F_0 = \{\emptyset,\Omega\}$.

The family of $\sigma$-algebras $(\mathcal F_t)_{t\ge0}$ is increasing; it is called the natural filtration of $W$. For any $t > 0$ we define
$$\mathcal F_{t^-} = \sigma\{\mathcal F_{t-\epsilon}:\ \epsilon \in (0,t)\},$$
where $\sigma\big(\bigcup_{\epsilon\in(0,t)}\mathcal F_{t-\epsilon}\big)$ is the $\sigma$-algebra generated by the $\mathcal F_{t-\epsilon}$ for $\epsilon \in (0,t)$, and
$$\mathcal F_{t^+} := \bigcap_{\epsilon>0}\mathcal F_{t+\epsilon}, \quad t \ge 0.$$

Proposition 4.1 For all $t > 0$ we have $\mathcal F_t = \mathcal F_{t^-}$.

Due to Proposition 4.1 we say that the natural filtration $(\mathcal F_t)_{t\ge0}$ is left continuous.

Proof. Let $t > 0$. It is clear that
$$\mathcal F_t \supset \bigcup_{\epsilon\in(0,t)}\mathcal F_{t-\epsilon},$$
so that $\mathcal F_t \supset \mathcal F_{t^-}$. To prove the converse inclusion it is enough to show that $\mathcal C_t \subset \mathcal F_{t^-}$. Let in fact $I = C_{t_1,\cdots,t_n;A} \in \mathcal C_t$, so that $t_n \le t$. If $t_n < t$ then $I \in \mathcal F_{t_n} \subset \mathcal F_{t^-}$, whereas if $t_n = t$ we have
$$I = \lim_{k\to\infty} C_{t_1,\cdots,t_{n-1},\,t-\frac 1k;A} \in \mathcal F_{t^-},$$
so that $I \in \mathcal F_{t^-}$ as well.
Remark 4.2 The filtration $(\mathcal F_t)_{t\ge0}$ is not right continuous, that is $\mathcal F_{t^+} \ne \mathcal F_t$ in general. Let for instance $t = 0$ and consider the sets
$$A_n = \{\omega \in \Omega:\ |\omega(1/n)| \le 1/n\}, \quad n \in \mathbb N.$$
Then $A_n \in \mathcal F_{1/n}$ and the event
$$A := \bigcap_{m\in\mathbb N}\bigcup_{n\ge m} A_n$$
(the event that $|\omega(1/n)| \le 1/n$ for infinitely many $n$) belongs to $\mathcal F_{0^+}$, since $\bigcup_{n\ge m} A_n \in \mathcal F_{1/m}$ for every $m$. Notice that $A$ is neither $\emptyset$ nor $\Omega$ (it contains, for instance, every $\omega$ with $\omega'(0) = 0$), whereas $\mathcal F_0 = \{\emptyset,\Omega\}$, so that $\mathcal F_{0^+} \ne \mathcal F_0$.
4.1.1 $\mathcal F_t$-measurable random variables

We say that a real random variable $X$ is $\mathcal F_t$-measurable if
$$I \in \mathcal B(\mathbb R) \implies X^{-1}(I) \in \mathcal F_t.$$
In this case we also say that $X$ depends on the history of the Brownian motion only up to time $t$. The following lemma will be frequently used.

Lemma 4.3 Let $s_2 > s_1 \ge t > 0$, and let $\varphi$ be a real $\mathcal F_t$-measurable random variable. Then $W(s_2)-W(s_1)$ and $\varphi$ are independent.

Proof. It is enough to show that for any $A \in \mathcal F_t$, $W(s_2)-W(s_1)$ and $1\!\!1_A$ are independent; in other words, that $\mathcal F_t$ coincides with the set $\mathcal D$ defined below:
$$\mathcal D = \{A \in \mathcal F_t:\ 1\!\!1_A \text{ is independent of } W(s_2)-W(s_1)\}.$$
Since $W$ is a process with independent increments, $\mathcal D$ contains the algebra of all cylindrical sets belonging to $\mathcal C_t$ (which is a $\pi$-system). Moreover, $\mathcal D$ is a $\lambda$-system. In fact, if $A \in \mathcal D$ it is obvious that $A^c \in \mathcal D$; moreover, if $(A_n)$ is a sequence in $\mathcal D$ consisting of disjoint sets, one can show easily that $\bigcup_{n=1}^\infty A_n \in \mathcal D$. Now the claim follows from Dynkin's theorem (Theorem A.1 in Appendix A).

The next result shows that $\mathcal F_{0^+}$ contains only trivial sets.

Proposition 4.4 (zero-one law) Assume that $A \in \mathcal F_{0^+}$. Then either $\mathbb P(A) = 1$ or $\mathbb P(A) = 0$.

Proof. Let $A \in \mathcal F_{0^+}$. Denote by $\mathcal G$ the $\sigma$-algebra generated by all sets of the form
$$D_{t_1,\dots,t_n,h;I} = \{\omega \in \Omega:\ (\omega(t_1+h)-\omega(h),\dots,\omega(t_n+h)-\omega(h)) \in I\},$$
where $n \in \mathbb N$, $0 < t_1 < \cdots < t_n$, $h > 0$, $I \in \mathcal B(\mathbb R^n)$. It is clear that $A$ is independent of $\mathcal G$, since $A$ belongs to all $\mathcal F_\epsilon$, $\epsilon > 0$, and $W$ has independent increments. Then we have
$$\mathbb P(A\cap G) = \mathbb P(A)\,\mathbb P(G), \quad \forall\, G \in \mathcal G. \tag{4.1}$$
On the other hand, we claim that $\mathcal G = \mathcal B(C_0)$. To prove the claim it is enough to show that any cylindrical set $C_{t_1,\dots,t_n;I}$ belongs to $\mathcal G$; but this follows from the identity
$$\lim_{j\to\infty} D_{t_1-\frac 1j,\dots,t_n-\frac 1j,\frac 1j;I} = \lim_{j\to\infty}\{\omega\in\Omega:\ (\omega(t_1)-\omega(1/j),\dots,\omega(t_n)-\omega(1/j))\in I\} = C_{t_1,\dots,t_n;I}.$$
Since $\mathcal G = \mathcal B(C_0)$ we can take $G = A$ in (4.1), so that $\mathbb P^2(A) = \mathbb P(A)$, which yields $\mathbb P(A)$ equal to zero or one.

Remark 4.5 For any $t \ge 0$ denote by $\overline{\mathcal F}_t$ the $\sigma$-algebra generated by $\mathcal F_t$ and all null sets of $\Omega$ (called the completion of $\mathcal F_t$). By using Proposition 4.4 one can easily show that $(\overline{\mathcal F}_t)_{t\ge0}$ is both right and left continuous.
4.2 Stopping times

A nonnegative extended (that is, with values in $[0,+\infty]$) random variable $\tau$ in $(C_0,\mathcal B(C_0),Q)$ is called a stopping time with respect to the filtration $(\mathcal F_t)_{t\ge0}$ if
$$\{\tau \le t\} \in \mathcal F_t \quad \text{for all } t \ge 0.$$
To any stopping time $\tau$ we associate the $\sigma$-algebra
$$\mathcal F_\tau := \{A \in \mathcal F:\ A\cap\{\tau\le t\} \in \mathcal F_t \text{ for all } t \ge 0\}.$$
Let us describe the $\sigma$-algebra $\mathcal F_\tau$. For $0 < t_1 < \cdots < t_n$ and $I \in \mathcal B(\mathbb R^n)$ we define
$$C^{(\tau)}_{t_1,\dots,t_n;I} = \{\omega\in\Omega:\ t_n < \tau(\omega),\ (\omega(t_1),\dots,\omega(t_n))\in I\} = C_{t_1,\dots,t_n;I}\cap\{t_n<\tau\}.$$
We claim that $C^{(\tau)}_{t_1,\dots,t_n;I}$ is $\mathcal F_\tau$-measurable. In fact
$$C^{(\tau)}_{t_1,\dots,t_n;I}\cap\{\tau\le t\} = C_{t_1,\dots,t_n;I}\cap\{t_n<\tau\le t\} \in \mathcal F_t.$$
So, the $\sigma$-algebra generated by all the $C^{(\tau)}_{t_1,\dots,t_n;I}$ is included in $\mathcal F_\tau$, and one can show that it coincides with $\mathcal F_\tau$.

If $\tau$ is a stopping time, then $\{\tau > t\}$ and $\{\tau = t\}$ obviously belong to $\mathcal F_t$ for all $t \ge 0$. Moreover, $\tau$ is $\mathcal F_\tau$-measurable. In fact, if $A = \{\tau\le s\}$ we have
$$A\cap\{\tau\le t\} = \{\tau\le t\wedge s\} \in \mathcal F_{t\wedge s} \subset \mathcal F_t.$$
In other words we have $\mathcal F_\tau \supset \sigma(\tau)$, where $\sigma(\tau)$ is the $\sigma$-algebra generated by $\tau$.

Remark 4.6 Let $\tau$ be an extended random variable such that
$$\{\tau < t\} \in \mathcal F_t \quad \text{for all } t \ge 0.$$
Then $\tau$ is not in general a stopping time with respect to $(\mathcal F_t)_{t\ge0}$, but it is a stopping time with respect to the filtration $(\mathcal F_{t^+})_{t\ge0}$. In fact
$$\{\tau\le t\} = \bigcap_{k=1}^\infty\Big\{\tau < t+\frac 1k\Big\} \in \mathcal F_{t^+}.$$

Exercise 4.7 Assume that the nonnegative random variable $\tau$ is discrete, that is $\tau(\Omega) = (\mu_k)_{k\in\mathbb N}$ where $(\mu_k)$ is an increasing sequence of positive numbers. Show that $\tau$ is a stopping time if and only if $\{\tau=\mu_k\}\in\mathcal F_{\mu_k}$ for all $k\in\mathbb N$. Show that in this case $\mathcal F_\tau$ is the $\sigma$-algebra
$$\mathcal F_\tau = \{A\in\mathcal F:\ A\cap\{\tau=\mu_k\}\in\mathcal F_{\mu_k}\text{ for all }k\in\mathbb N\}.$$

Proposition 4.8 Let $\tau$ be a stopping time. Then there exists a decreasing sequence $(\tau_n)$ of discrete stopping times convergent pointwise to $\tau$ and such that $\mathcal F_{\tau_n}\supset\mathcal F_\tau$ for all $n\in\mathbb N$.

Proof. Define, for any $n\in\mathbb N$ and $\omega\in\Omega$,
$$\tau_n(\omega) = \frac{k}{2^n} \quad \text{if } \frac{k-1}{2^n}\le\tau(\omega)<\frac{k}{2^n},\ k\in\mathbb N. \tag{4.2}$$
It is clear that the sequence $(\tau_n)$ is decreasing. Moreover, $\tau_n$ is a stopping time; in fact, if $t = \frac{k}{2^n}$ with $k\in\mathbb N$ we have
$$\{\tau_n = t\} = \Big\{\frac{k-1}{2^n}\le\tau<\frac{k}{2^n}\Big\} \in \mathcal F_t. \tag{4.3}$$
Finally, let $A\in\mathcal F_\tau$, that is
$$A\cap\{\tau\le t\}\in\mathcal F_t, \quad \forall\, t\ge0.$$
Then we have
$$A\cap\Big\{\tau_n=\frac{k}{2^n}\Big\} = A\cap\Big\{\frac{k-1}{2^n}\le\tau<\frac{k}{2^n}\Big\} \in \mathcal F_{\frac{k}{2^n}}, \quad \forall\, k\in\mathbb N,$$
so that $A\in\mathcal F_{\tau_n}$.
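A minimal sketch of the dyadic approximation (4.2), evaluated at a single $\omega$ (the names and the test value are illustrative only): $\tau_n(\omega)$ is the smallest dyadic number $k/2^n$ strictly above $\tau(\omega)$, and the sequence decreases to $\tau(\omega)$.

import math

def dyadic_upper(tau_value: float, n: int) -> float:
    """The discrete stopping time tau_n of (4.2) evaluated at one omega."""
    if math.isinf(tau_value):
        return math.inf
    k = math.floor(tau_value * 2**n) + 1    # the k with (k-1)/2^n <= tau < k/2^n
    return k / 2**n

tau = 0.3781
for n in range(1, 6):
    print(n, dyadic_upper(tau, n))          # 0.5, 0.5, 0.5, 0.4375, 0.40625, ... decreasing to tau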
We want to extend several properties concerning a fixed time $t$ to general stopping times $\tau$. We start by showing that $W_\tau$ is $\mathcal F_\tau$-measurable.

Proposition 4.9 Let $\tau$ be a stopping time and set
$$W_\tau(\omega) = W(\tau(\omega),\omega), \quad \omega\in\Omega.$$
Then $W_\tau$ is $\mathcal F_\tau$-measurable.

Proof. Assume first that $\tau$ is discrete,
$$\tau(\Omega) = \{t_k\}, \quad 0 < t_1 < \cdots < t_k < \cdots,$$
and set $A_k = \{\tau = t_k\}$, $k\in\mathbb N$. Then we have
$$W_\tau(\omega) = W(t_k)(\omega), \quad \forall\,\omega\in A_k,\ k\in\mathbb N.$$
Let $I\in\mathcal B(\mathbb R)$. Then
$$\{W_\tau\in I\}\cap\{\tau\le t\} = \bigcup_{k=1}^\infty\big[\{W_\tau\in I\}\cap\{\tau\le t\}\cap A_k\big] = \bigcup_{k=1}^\infty\big[\{W_{t_k}\in I\}\cap\{\tau\le t\}\cap A_k\big] = \bigcup_{\{k\in\mathbb N:\ t_k\le t\}}\big[\{W_{t_k}\in I\}\cap\{\tau\le t\}\cap A_k\big] \in \mathcal F_t.$$
So, the conclusion holds in this case.

Let now $\tau$ be arbitrary, let $\tau_n$ be defined by (4.2) and set
$$W_{\tau_n}(\omega) = W(\tau_n(\omega),\omega), \quad \omega\in\Omega.$$
Since $W$ is continuous we have
$$\lim_{n\to\infty} W_{\tau_n}(\omega) = W_\tau(\omega), \quad \omega\in\Omega.$$
Fix $t\ge0$. By the previous argument we have
$$\{W_{\tau_n}\in I\}\cap\{\tau_n\le t\}\in\mathcal F_t \quad \text{for all } I\in\mathcal B(\mathbb R). \tag{4.4}$$
Now the conclusion follows letting $n\to\infty$.

Example 4.10 Let $a > 0$ and set$^{(1)}$
$$\tau_a = \inf\{t\ge0:\ W(t) = a\}.$$
Then
$$\{\tau_a > t\} = \bigcap_{s\in[0,t]}\{W(s)<a\} = \bigcap_{s\in[0,t]\cap\mathbb Q}\{W(s)<a\} \in \mathcal F_t.$$
So, $\tau_a$ is a stopping time with respect to the filtration $(\mathcal F_t)_{t\ge0}$.

Let now
$$\tau = \inf\{t\ge0:\ W(t)>a\}.$$
Then we have
$$\{\tau\ge t\} = \bigcap_{s\in[0,t)}\{W(s)\le a\} = \bigcap_{s\in[0,t)\cap\mathbb Q}\{W(s)\le a\} \in \mathcal F_t.$$
Consequently, by Remark 4.6, $\tau$ is a stopping time with respect to the filtration $(\mathcal F_{t^+})_{t\ge0}$.

$^{(1)}$ We use the convention that the infimum of the empty set is $+\infty$.
4.3 The Brownian motion $W(t+\tau)-W(\tau)$

We recall that $W(t+h)-W(h)$, $t\ge0$, is a Brownian motion for any $h>0$. We want now to show that the same holds when $h$ is replaced by a stopping time.

Proposition 4.11 Let $\tau$ be a stopping time. Then
$$C(t) := W(t+\tau)-W(\tau), \quad t\ge0,$$
is a Brownian motion.

Proof. Let us first prove that the law of $C(t)$ is $N_t$. For this it is enough to show that for any $\alpha\in\mathbb R$ we have
$$\mathbb E\big[e^{i\alpha C(t)}\big] = \mathbb E\big[e^{i\alpha(W(t+\tau)-W(\tau))}\big] = e^{-\frac 12\alpha^2 t}, \quad \alpha\in\mathbb R. \tag{4.5}$$
Assume first that $\tau$ is discrete, $\tau(\Omega) = (t_k)$, and set
$$A_i = \{\tau = t_i\}\in\mathcal F_{t_i}, \quad \forall\, i\in\mathbb N.$$
Then we have
$$\mathbb E\big[e^{i\alpha(W(t+\tau)-W(\tau))}\big] = \sum_{i=1}^\infty\int_{A_i} e^{i\alpha(W(t+t_i)-W(t_i))}\,d\mathbb P = \sum_{i=1}^\infty\mathbb E\big[1\!\!1_{A_i}e^{i\alpha(W(t+t_i)-W(t_i))}\big].$$
Since $1\!\!1_{A_i}$ and $W(t+t_i)-W(t_i)$ are independent, it follows that
$$\mathbb E\big[e^{i\alpha(W(t+\tau)-W(\tau))}\big] = \sum_{i=1}^\infty\mathbb P(A_i)\,\mathbb E\big[e^{i\alpha(W(t+t_i)-W(t_i))}\big] = e^{-\frac 12\alpha^2 t},$$
and so (4.5) is proved.

Let now $\tau$ be general and let $(\tau_n)$ be the sequence of discrete stopping times defined by (4.2). We have just proved that
$$\mathbb E\big[e^{i\alpha(W(t+\tau_n)-W(\tau_n))}\big] = e^{-\frac 12\alpha^2 t}, \quad \alpha\in\mathbb R.$$
Now (4.5) follows letting $n$ tend to infinity. By (4.5) it follows that $C(t)$ is a Gaussian random variable with law $N_t$. Proceeding similarly one can prove that the law of $C(t)-C(s)$ with $t>s>0$ is $N_{t-s}$ and that $C(t)$ has independent increments. Continuity of $C(t)$ is obvious.
4.4 Transition semigroup

We shall denote by $B_b(\mathbb R)$ the set of all real, bounded and Borel functions, and by $C_b(\mathbb R)$ the subspace of $B_b(\mathbb R)$ of those functions which are uniformly continuous and bounded on $\mathbb R$.

Given $\varphi\in B_b(\mathbb R)$ we want to study the evolution in time of $\varphi(W(t)+x)$. To this purpose we define the transition semigroup
$$P_t\varphi(x) = \mathbb E[\varphi(W(t)+x)], \quad t\ge0,\ x\in\mathbb R,\ \varphi\in B_b(\mathbb R). \tag{4.6}$$
Since the law of $W(t)+x$ is $N_{x,t}$ we have
$$P_t\varphi(x) = \mathbb E[\varphi(W(t)+x)] = \frac{1}{\sqrt{2\pi t}}\int_{-\infty}^{+\infty}e^{-\frac{(x-y)^2}{2t}}\varphi(y)\,dy = \int_{-\infty}^{+\infty}g_t(x-y)\varphi(y)\,dy, \tag{4.7}$$
where
$$g_t(\xi) = \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{\xi^2}{2t}}, \quad t>0,\ \xi\in\mathbb R. \tag{4.8}$$
We deduce, by an explicit computation, that $P_t$, $t\ge0$, is a semigroup of linear operators in $B_b(\mathbb R)$, that is $P_0 = I$ and
$$P_{t+s} = P_tP_s, \quad \forall\, t,s\ge0.$$
Notice that $P_t$ coincides with the heat semigroup in $\mathbb R$. In fact one checks easily that if $\varphi\in C_b(\mathbb R)$ then the function $u: [0,+\infty)\times\mathbb R\to\mathbb R$, $u(t,x) = P_t\varphi(x)$, is continuous, infinitely differentiable for $t>0$ and fulfills
$$\begin{cases} u_t(t,x) = \tfrac 12\,u_{xx}(t,x), & \forall\, t>0,\ x\in\mathbb R,\\ u(0,x) = \varphi(x), & \forall\, x\in\mathbb R.\end{cases}$$

Remark 4.12 One can show that $u(t,x) = P_t\varphi(x)$, $t\ge0$, $x\in\mathbb R$, is the unique bounded solution of the Cauchy problem above. There is a simple deterministic proof based on the maximum principle and a stochastic proof, which we will present later, based on Itô's formula.
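As a numerical illustration (a sketch only, with an arbitrary test function and grid), $P_t\varphi$ can be computed by convolving $\varphi$ with the heat kernel $g_t$ of (4.8); the code below checks the semigroup law $P_{t+s}\varphi = P_t(P_s\varphi)$ and, for $\varphi = \cos$, the exact value $P_t\cos(x) = e^{-t/2}\cos x$.

import numpy as np

y = np.linspace(-10.0, 10.0, 2001)
dy = y[1] - y[0]
phi = np.cos(y)                                     # a bounded test function on the grid

def g(t, xi):
    return np.exp(-xi**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def P(t, f, x):
    """(P_t f)(x) = int g_t(x - y) f(y) dy, with f sampled on the grid y."""
    x = np.atleast_1d(x)
    return g(t, x[:, None] - y[None, :]) @ f * dy

x = np.linspace(-2.0, 2.0, 5)
semigroup_gap = P(0.5, phi, x) - P(0.3, P(0.2, phi, y), x)   # P_{t+s} vs P_t P_s
exact_gap = P(0.5, phi, x) - np.exp(-0.5 / 2.0) * np.cos(x)  # vs e^{-t/2} cos x
print(np.abs(semigroup_gap).max(), np.abs(exact_gap).max())  # both small, up to grid truncation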
Exercise 4.13 Prove that for $t>s\ge0$,
$$P_{t-s}\varphi(x) = \mathbb E[\varphi(W(t)-W(s)+x)], \quad \varphi\in B_b(\mathbb R),\ x\in\mathbb R. \tag{4.9}$$
4.5 Markov property

In this section we shall use several properties of conditional expectation; they are recalled in Appendix B.

We are here concerned with the stochastic process
$$X(t) = X(t,x) = W(t)+x, \quad t\ge0,$$
where $x\in\mathbb R$.

Proposition 4.14 For any $t>s>0$ and any $\varphi\in B_b(\mathbb R)$ we have
$$\mathbb E[\varphi(X(t))\,|\,\mathcal F_s] = (P_{t-s}\varphi)(X(s)). \tag{4.10}$$
Equivalently,
$$\int_A\varphi(X(t))\,d\mathbb P = \int_A(P_{t-s}\varphi)(X(s))\,d\mathbb P, \quad \forall\, A\in\mathcal F_s. \tag{4.11}$$
Moreover $X(\cdot)$ is a Markov process.

Proof. Set
$$X(t) = W(t)+x = (W(s)+x)+(W(t)-W(s)) =: U+V.$$
Notice that $U$ is $\mathcal F_s$-measurable and $V$ is independent of $\mathcal F_s$. By Proposition B.6 it follows that
$$\mathbb E[\varphi(X(t))\,|\,\mathcal F_s] = \mathbb E[\varphi(U+V)\,|\,\mathcal F_s] = h(U),$$
where (recall Exercise 4.13)
$$h(u) = \mathbb E[\varphi(u+V)] = \mathbb E[\varphi(u+W(t)-W(s))] = P_{t-s}\varphi(u).$$
So, (4.10) is proved.

To prove the last statement notice that by Proposition B.3 we have
$$\mathbb E[\varphi(X(t))\,|\,X(s)] = \mathbb E\big[\mathbb E[\varphi(X(t))\,|\,\mathcal F_s]\,\big|\,X(s)\big] = \mathbb E[P_{t-s}\varphi(X(s))\,|\,X(s)] = P_{t-s}\varphi(X(s)) = \mathbb E[\varphi(X(t))\,|\,\mathcal F_s].$$

Exercise 4.15 Let $s>0$, $\eta$ an $\mathcal F_s$-measurable random variable and $\varphi\in B_b(\mathbb R)$. Show that
$$\mathbb E[\varphi(W(t)+\eta)\,|\,\mathcal F_s] = (P_{t-s}\varphi)(\eta).$$
4.5.1 Strong Markov property

We now consider conditional expectation with respect to $\mathcal F_\tau$, where $\tau$ is a stopping time.

Proposition 4.16 Let $\tau$ be a stopping time, let $t\ge\tau$ and $\varphi\in B_b(\mathbb R)$. Then we have
$$\mathbb E[\varphi(X(t))\,|\,\mathcal F_\tau] = (P_{t-\tau}\varphi)(X(\tau)). \tag{4.12}$$
Equivalently,
$$\int_A\varphi(X(t))\,d\mathbb P = \int_A(P_{t-\tau}\varphi)(X(\tau))\,d\mathbb P, \quad \forall\, A\in\mathcal F_\tau. \tag{4.13}$$

Proof. We set $x=0$ for simplicity, so that $X(t) = W(t)$. Assume first that $\tau$ is discrete, $\tau(\Omega) = (t_k)_{k\in\mathbb N}$. Let $A\in\mathcal F_\tau$. Then we have
$$\int_A(P_{t-\tau}\varphi)(W(\tau))\,d\mathbb P = \sum_{i=1}^\infty\int_{A\cap\{\tau=t_i\}}(P_{t-\tau}\varphi)(W(\tau))\,d\mathbb P = \sum_{i=1}^\infty\int_{A\cap\{\tau=t_i\}}(P_{t-t_i}\varphi)(W(t_i))\,d\mathbb P.$$
Therefore, by (4.10) and taking into account that by the definition of $\mathcal F_\tau$ we have
$$A\cap\{\tau=t_i\}\in\mathcal F_{t_i}, \quad i\in\mathbb N,$$
we can write
$$\int_A(P_{t-\tau}\varphi)(W(\tau))\,d\mathbb P = \sum_{i=1}^\infty\int_{A\cap\{\tau=t_i\}}(P_{t-t_i}\varphi)(W(t_i))\,d\mathbb P = \sum_{i=1}^\infty\int_{A\cap\{\tau=t_i\}}\mathbb E[\varphi(W(t))\,|\,\mathcal F_{t_i}]\,d\mathbb P = \sum_{i=1}^\infty\int_{A\cap\{\tau=t_i\}}\varphi(W(t))\,d\mathbb P = \int_A\varphi(W(t))\,d\mathbb P.$$
Therefore (4.13) is proved.

Let now $\tau$ be an arbitrary stopping time and let $(\tau_n)$ be defined by (4.2). Recall that (Proposition 4.8)
$$\mathcal F_\tau\subset\mathcal F_{\tau_n} \quad \text{for all } n\in\mathbb N.$$
Let $A\in\mathcal F_\tau$. Then by (4.13) it follows that
$$\int_A\varphi(W(t))\,d\mathbb P = \int_A(P_{t-\tau_n}\varphi)(W(\tau_n))\,d\mathbb P \quad \text{for all } A\in\mathcal F_\tau.$$
Now the conclusion follows letting $n\to\infty$.

Property (4.12) is called the strong Markov property of $W$.
4.6 Some consequences of the strong Markov property

In this section we want to determine the laws of the following important random variables:

(i) $T_b = \inf\{t\ge0:\ W(t)=b\}$, $b\in\mathbb R$;

(ii) $M(t) = \max_{s\in[0,t]}W(s)$, $t\ge0$;

(iii) $m(t) = \min_{s\in[0,t]}W(s)$, $t\ge0$.

Notice that
$$\{T_a\le t\} = \{M(t)\ge a\}, \quad t\ge0,\ a\ge0, \tag{4.14}$$
and
$$\{T_a\le t\} = \{m(t)\le a\}, \quad t\ge0,\ a\le0. \tag{4.15}$$
To find the laws of $T_a$ with $a\ge0$ and of $M(t)$, the following lemma is useful.

Lemma 4.17 Let $a\ge0$ and $t\ge0$. Then we have
$$\mathbb P(W(t)\le a,\ M(t)\ge a) = \mathbb P(W(t)\ge a). \tag{4.16}$$

Proof. Taking into account that $\{T_a\le t\} = \{M(t)\ge a\}$, we have
$$\mathbb P(W(t)\le a,\ M(t)\ge a) = \mathbb P(W(t)\le a,\ T_a\le t) = \int_{\{T_a\le t\}}1\!\!1_{(-\infty,a]}(W(t))\,d\mathbb P = \int_{\{T_a\le t\}}\mathbb E\big[1\!\!1_{(-\infty,a]}(W(t))\,|\,\mathcal F_{T_a}\big]\,d\mathbb P,$$
since $\{T_a\le t\}\in\mathcal F_{T_a}$. By the strong Markov property it follows that
$$\mathbb P(W(t)\le a,\ M(t)\ge a) = \int_{\{T_a\le t\}}\big(P_{t-T_a}1\!\!1_{(-\infty,a]}\big)(W(T_a))\,d\mathbb P = \int_{\{T_a\le t\}}\big(P_{t-T_a}1\!\!1_{(-\infty,a]}\big)(a)\,d\mathbb P.$$
On the other hand we have, as is easily checked,
$$P_s1\!\!1_{(-\infty,a]}(a) = P_s1\!\!1_{[a,+\infty)}(a), \quad \forall\, s>0,\ a>0.$$
Therefore
$$\mathbb P(W(t)\le a,\ M(t)\ge a) = \int_{\{T_a\le t\}}\big(P_{t-T_a}1\!\!1_{[a,+\infty)}\big)(a)\,d\mathbb P = \int_{\{T_a\le t\}}\mathbb E\big[1\!\!1_{[a,+\infty)}(W(t))\,|\,\mathcal F_{T_a}\big]\,d\mathbb P = \mathbb P(W(t)\ge a,\ M(t)\ge a) = \mathbb P(W(t)\ge a).$$

Proposition 4.18 (Reflection principle) For all $a\ge0$ we have
$$\mathbb P(M(t)\ge a) = 2\,\mathbb P(W(t)\ge a). \tag{4.17}$$

Proof. Write
$$\mathbb P(M(t)\ge a) = \mathbb P(M(t)\ge a,\ W(t)\le a) + \mathbb P(M(t)\ge a,\ W(t)\ge a).$$
Now, by Lemma 4.17 we have $\mathbb P(M(t)\ge a,\ W(t)\le a) = \mathbb P(W(t)\ge a)$. Moreover, it is clear that $\mathbb P(M(t)\ge a,\ W(t)\ge a) = \mathbb P(W(t)\ge a)$, so the conclusion follows.

By Proposition 4.18 we can easily deduce the expressions of the laws of $M(t)$ and of $T_a$ for all $a\in\mathbb R$.

Corollary 4.19 (Law of $M(t)$) For all $t\ge0$ we have
$$(M(t)_\#\mathbb P)(d\xi) = \frac{2}{\sqrt{2\pi t}}\,e^{-\frac{\xi^2}{2t}}\,1\!\!1_{[0,+\infty)}(\xi)\,d\xi. \tag{4.18}$$

Proof. In fact by Proposition 4.18 we have, for any $a\ge0$,
$$\mathbb P(M(t)\ge a) = 2\,\mathbb P(W(t)\ge a) = \frac{2}{\sqrt{2\pi t}}\int_a^{+\infty}e^{-\frac{\xi^2}{2t}}\,d\xi = \mathbb P(|W(t)|\ge a).$$

Remark 4.20 From Corollary 4.19 it follows that at any fixed time $t$ the law of $M(t)$ coincides with that of $|W(t)|$, though the random variables $M(t)$ and $|W(t)|$ are different; in particular $M(t)$ is increasing in $t$, whereas $|W(t)|$ is not. Obviously the laws of $M(\cdot)$ and $|W(\cdot)|$ on $C_0([0,+\infty))$ are different.

Corollary 4.21 (Law of $T_a$) Let $a\ge0$ and $t\ge0$. Then we have
$$((T_a)_\#\mathbb P)(dt) = \frac{a}{\sqrt{2\pi t^3}}\,e^{-\frac{a^2}{2t}}\,dt. \tag{4.19}$$

Proof. By (4.14) and Proposition 4.18 we have
$$\mathbb P(T_a\le t) = \mathbb P(M(t)\ge a) = \frac{2}{\sqrt{2\pi t}}\int_a^{+\infty}e^{-\frac{\xi^2}{2t}}\,d\xi = \frac{2}{\sqrt{2\pi}}\int_{at^{-1/2}}^{+\infty}e^{-\frac{\eta^2}{2}}\,d\eta.$$
Therefore
$$\frac{d}{dt}\,\mathbb P(T_a\le t) = \frac{a}{\sqrt{2\pi t^3}}\,e^{-\frac{a^2}{2t}},$$
which implies the conclusion.
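The following Monte Carlo sketch (parameters chosen arbitrarily, discrete paths used in place of continuous ones) illustrates the reflection principle (4.17) and the distribution function of $T_a$; the discrete-time maximum slightly underestimates $M(t)$, so the first estimate is a bit below the other two.

import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
t, a, n, n_paths = 1.0, 1.0, 1000, 20000
dt = t / n
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n)), axis=1)
M = W.max(axis=1)                              # running maximum over [0, t], discretized

p_max = np.mean(M >= a)                        # P(M(t) >= a) = P(T_a <= t)
p_refl = 2.0 * np.mean(W[:, -1] >= a)          # 2 P(W(t) >= a)
p_exact = erfc(a / sqrt(2.0 * t))              # closed form of 2 P(W(t) >= a)
print(p_max, p_refl, p_exact)                  # all close to 0.3173, up to discretization bias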
The following results can be proved similarly.

Lemma 4.22 Let $a\le0$ and $t\ge0$. Then we have
$$\mathbb P(W(t)\ge a,\ m(t)\le a) = \mathbb P(W(t)\le a). \tag{4.20}$$

Proposition 4.23 (Reflection principle) For all $a\le0$ we have
$$\mathbb P(m(t)\le a) = 2\,\mathbb P(W(t)\le a). \tag{4.21}$$

Corollary 4.24 (Law of $m(t)$) For all $t\ge0$ we have
$$(m(t)_\#\mathbb P)(d\xi) = \frac{2}{\sqrt{2\pi t}}\,e^{-\frac{\xi^2}{2t}}\,1\!\!1_{(-\infty,0]}(\xi)\,d\xi. \tag{4.22}$$

Corollary 4.25 (Law of $T_a$) Let $a\in\mathbb R$ and $t\ge0$. Then we have
$$((T_a)_\#\mathbb P)(dt) = \frac{|a|}{\sqrt{2\pi t^3}}\,e^{-\frac{a^2}{2t}}\,dt. \tag{4.23}$$
4.7 Application to partial differential equations

For any $x\ge0$ we set in this section
$$\tau_x = \inf\{t\ge0:\ W(t)+x = 0\} = T_{-x}.$$
Moreover we consider the following processes, which take values in $[0,+\infty)$:

(i) $Y(t) = W(t)+x$, $t\in[0,\tau_x]$; $Y(t)$ is called the Brownian motion killed at $0$.

(ii) $U(t) = |W(t)+x|$, $x\ge0$, $t\ge0$; $U(t)$ is called the Brownian motion reflected at $0$.

(iii) $V(t) = W(t\wedge\tau_x)+x$, $t\ge0$; $V(t)$ is called the Brownian motion absorbed at $0$.
4.7.1 The Dirichlet problem in the half-line

We are here concerned with the process $Y(t) = W(t)+x$, $t\in[0,\tau_x]$. Define, for any $\varphi\in B_b([0,+\infty))$,
$$U_t\varphi(x) := u(t,x) := \mathbb E\big[\varphi(W(t)+x)\,1\!\!1_{t\le\tau_x}\big], \quad t\ge0,\ x\ge0. \tag{4.24}$$
We are going to show that $u(t,x)$ is the solution of the Dirichlet problem in $[0,+\infty)$,
$$\begin{cases} u_t(t,x) = \tfrac 12\,u_{xx}(t,x), & x>0,\ t>0,\\ u(t,0) = 0, & t>0,\\ u(0,x) = \varphi(x), & x\ge0.\end{cases} \tag{4.25}$$

Proposition 4.26 We have
$$u(t,x) = \int_0^{+\infty}\big[g_t(x-y)-g_t(x+y)\big]\varphi(y)\,dy, \quad x\ge0,\ t\ge0, \tag{4.26}$$
where $g_t$ is defined by (4.8).

Proof. We have
$$u(t,x) = \mathbb E\big[\varphi(W(t)+x)1\!\!1_{t\le\tau_x}\big] = P_t\varphi(x) - \mathbb E\big[\varphi(W(t)+x)1\!\!1_{t>\tau_x}\big],$$
where $\varphi$ is extended to $\mathbb R$ by setting $\varphi(-x) = \varphi(x)$, $x\ge0$. Write
$$\mathbb E\big[\varphi(W(t)+x)1\!\!1_{t>\tau_x}\big] = \mathbb E\big[\mathbb E[1\!\!1_{t>\tau_x}\varphi(W(t)+x)\,|\,\mathcal F_{\tau_x}]\big] = \mathbb E\big[1\!\!1_{t>\tau_x}\,\mathbb E[\varphi(W(t)+x)\,|\,\mathcal F_{\tau_x}]\big].$$
Now, using the strong Markov property we find that
$$\mathbb E\big[\varphi(W(t)+x)1\!\!1_{t>\tau_x}\big] = \mathbb E\big[1\!\!1_{t>\tau_x}(P_{t-\tau_x}\varphi)(0)\big] =: \mathbb E[\psi(\tau_x)],$$
where
$$\psi(\lambda) = 1\!\!1_{t>\lambda}\,\frac{1}{\sqrt{2\pi(t-\lambda)}}\int_{\mathbb R}e^{-\frac{\xi^2}{2(t-\lambda)}}\varphi(\xi)\,d\xi, \quad \lambda>0.$$
Next, recalling the law of $\tau_x$ (see (4.23)), and using the identity $\frac{x}{\sqrt{2\pi s^3}}e^{-\frac{x^2}{2s}} = -\frac{\partial}{\partial x}g_s(x)$, it follows that
$$\mathbb E\big[\varphi(W(t)+x)1\!\!1_{t>\tau_x}\big] = \int_0^t\Big(\int_{\mathbb R}g_{t-s}(y)\varphi(y)\,dy\Big)\frac{x}{\sqrt{2\pi s^3}}\,e^{-\frac{x^2}{2s}}\,ds = -\frac{\partial}{\partial x}\int_0^t\Big(\int_{\mathbb R}g_{t-s}(y)\varphi(y)\,dy\Big)g_s(x)\,ds = -\frac{\partial}{\partial x}\int_{\mathbb R}G_{x,y}\,\varphi(y)\,dy,$$
where$^{(2)}$
$$G_{x,y} = \int_0^t g_{t-s}(y)g_s(x)\,ds = \frac 12\,\operatorname{Erfc}\Big(\frac{|x|+|y|}{\sqrt{2t}}\Big).$$
Since, for $x>0$,
$$\frac{\partial}{\partial x}G_{x,y} = -\frac{1}{\sqrt{2\pi t}}\,e^{-\frac{(x+|y|)^2}{2t}} = -g_t(x+|y|),$$
we get
$$u(t,x) = \int_{\mathbb R}g_t(x-y)\varphi(y)\,dy - \int_{\mathbb R}g_t(x+|y|)\varphi(y)\,dy,$$
and the conclusion follows, since $\varphi$ is even.

$^{(2)}$ We recall that $\operatorname{Erfc}(a) = \frac{2}{\sqrt\pi}\int_a^{+\infty}e^{-r^2}\,dr$.

It is easy to check, by a direct computation, that if $\varphi\in C_b([0,+\infty))$ then $U_t\varphi(x) = u(t,x)$ is the solution of the Dirichlet problem (4.25). Moreover $U_0 = I$ and $U_{t+s} = U_tU_s$ for all $t,s\ge0$.
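The following is a rough numerical sketch of (4.24)-(4.26) with an arbitrary test function: $u(t,x)$ is computed from the kernel $g_t(x-y)-g_t(x+y)$ and compared with a Monte Carlo estimate of $\mathbb E[\varphi(W(t)+x)1\!\!1_{t\le\tau_x}]$ in which the killing at $0$ is only checked on a discrete grid, so the simulated value is slightly biased upwards.

import numpy as np

rng = np.random.default_rng(3)
t, x = 0.5, 1.0
phi = lambda y: np.exp(-y)                     # bounded on [0, +infinity)

def g(t, xi):
    return np.exp(-xi**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

# u(t, x) from formula (4.26), by a Riemann sum
y = np.linspace(0.0, 20.0, 4001)
dy = y[1] - y[0]
u = np.sum((g(t, x - y) - g(t, x + y)) * phi(y)) * dy

# Monte Carlo for the killed Brownian motion
n, n_paths = 500, 20000
dt = t / n
paths = x + np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n)), axis=1)
alive = paths.min(axis=1) > 0.0                # discretized indicator of {t <= tau_x}
u_mc = np.mean(phi(paths[:, -1]) * alive)

print(u, u_mc)                                 # close; note u(t, 0) = 0 by construction of the kernel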
4.7.2 The Neumann problem

We consider the process
$$U(t) = |W(t)+x|, \quad x\ge0,\ t\ge0.$$
For any $\varphi\in B_b([0,+\infty))$ we set
$$Q_t\varphi(x) = \mathbb E[\varphi(|W(t)+x|)] = (2\pi t)^{-1/2}\int_{\mathbb R}e^{-\frac{|x-y|^2}{2t}}\varphi(|y|)\,dy.$$
Replacing $y$ with $-y$ in the last integral, we see that
$$Q_t\varphi(x) = \int_0^{+\infty}\big[g_t(x-y)+g_t(x+y)\big]\varphi(y)\,dy,$$
where $g_t$ is defined by (4.8).

Now it is easy to check that if $\varphi\in C_b([0,+\infty))$ then $u(t,x) = Q_t\varphi(x)$ is continuous in $[0,\infty)\times[0,\infty)$, infinitely differentiable in $(0,\infty)\times[0,\infty)$ and solves the following Neumann problem:
$$\begin{cases} u_t(t,x) = \tfrac 12\,u_{xx}(t,x), & x\ge0,\ t>0,\\ u_x(t,0) = 0, & t>0,\\ u(0,x) = \varphi(x), & x\ge0.\end{cases}$$
Moreover $Q_0 = I$ and $Q_{t+s} = Q_tQ_s$ for all $t,s\ge0$.
4.7.3 The Ventzell problem

Let us consider the stochastic process
$$V(t) = W(t\wedge\tau_x)+x, \quad t\ge0,$$
where $x\ge0$. Set
$$Z_t\varphi(x) = \mathbb E[\varphi(W(t\wedge\tau_x)+x)], \quad \varphi\in B_b([0,+\infty)),\ x\ge0.$$
So,
$$Z_t\varphi(x) = \int_\Omega\varphi(W(t\wedge\tau_x)+x)\,d\mathbb P = \int_{\{t<\tau_x\}}\varphi(W(t)+x)\,d\mathbb P + \int_{\{t\ge\tau_x\}}\varphi(0)\,d\mathbb P,$$
since $W(\tau_x)+x = 0$. Therefore
$$Z_t\varphi(x) = U_t\varphi(x) + \varphi(0)\,\mathbb P(T_{-x}\le t),$$
where $U_t$ is defined by (4.24). So
$$Z_t\varphi(x) = \int_0^{+\infty}\big[g_t(x-y)-g_t(x+y)\big]\varphi(y)\,dy + \frac{2\varphi(0)}{\sqrt{2\pi t}}\int_x^{+\infty}e^{-\frac{y^2}{2t}}\,dy.$$
If $\varphi\in C_b([0,+\infty))$, setting $u(t,x) = Z_t\varphi(x)$ we see that $u$ is the solution of the Ventzell problem
$$\begin{cases} u_t(t,x) = \tfrac 12\,u_{xx}(t,x), & x\ge0,\ t\ge0,\\ u_{xx}(t,0) = 0, & t\ge0,\\ u(0,x) = \varphi(x), & x\ge0.\end{cases}$$
Moreover $Z_0 = I$ and $Z_{t+s} = Z_tZ_s$ for all $t,s\ge0$.
Chapter 5

The Itô integral

In all this chapter $B$ represents a Brownian motion in a probability space $(\Omega,\mathcal F,\mathbb P)$. Similarly as in Chapter 4, for any $t>0$ we denote by $\mathcal C_t$ the algebra of all cylindrical sets
$$C_{t_1,\cdots,t_n;A} = \{\omega\in C_0:\ (B(t_1),\dots,B(t_n))\in A\},$$
where $0\le t_1<\cdots<t_n$, $t_n\le t$ and $A\in\mathcal B(\mathbb R^n)$. Moreover, we denote by $\mathcal F_t$ the $\sigma$-algebra generated by $\mathcal C_t$ and all $\mathbb P$-null sets of $\Omega$. The family $(\mathcal F_t)_{t\ge0}$ is increasing; it is called the natural filtration of $B$ augmented with the null sets of $\mathbb P$.

We say that a stochastic process $F(t)$, $t\in[0,T]$, is adapted to the Brownian motion $B$ if $F(t)$ is $\mathcal F_t$-measurable for any $t\in[0,T]$.

5.1 Definition of Itô's integral

5.1.1 Itô's integral for elementary processes

Definition 5.1 Let $T>0$. An elementary process $F(t)$, $t\in[0,T]$, in $(\Omega,\mathcal F,\mathbb P)$ is a stochastic process of the form
$$F = \sum_{i=1}^n F_{i-1}\,1\!\!1_{[t_{i-1},t_i)}, \tag{5.1}$$
where $n\in\mathbb N$, $0=t_0<t_1<\cdots<t_n=T$ and $F_i$ is $\mathcal F_{t_i}$-measurable for any $i=0,1,\dots,n-1$.

For any elementary process $F(t)$, $t\in[0,T]$, we define the Itô integral by setting
$$I(F) := \int_0^T F(s)\,dB(s) = \sum_{i=1}^n F_{i-1}\,(B(t_i)-B(t_{i-1})). \tag{5.2}$$
Obviously any elementary process is adapted. This property is needed to prove some basic identities (similar to those obtained for the Wiener integral) which allow one to extend the integral to more general processes.
Proposition 5.2 Assume that $F\in\mathscr E^2_B(0,T)$. Then $I(F)\in L^2(\Omega,\mathcal F,\mathbb P)$ and we have
$$\mathbb E\Big[\int_0^T F(s)\,dB(s)\Big] = 0, \tag{5.3}$$
$$\mathbb E\Big[\Big(\int_0^T F(s)\,dB(s)\Big)^2\Big] = \int_0^T\mathbb E(|F(s)|^2)\,ds. \tag{5.4}$$

Proof. Let us prove (5.3). We have
$$\mathbb E[I(F)] = \sum_{j=1}^n\mathbb E\big[F_{j-1}(B(t_j)-B(t_{j-1}))\big].$$
Since $F_{j-1}$ is $\mathcal F_{t_{j-1}}$-measurable, it is independent of $B(t_j)-B(t_{j-1})$ by Lemma 4.3. Therefore we have
$$\mathbb E[I(F)] = \sum_{j=1}^n\mathbb E[F_{j-1}]\,\mathbb E[B(t_j)-B(t_{j-1})] = 0,$$
and (5.3) is proved.

Let us prove (5.4). We have
$$\mathbb E[|I(F)|^2] = \mathbb E\Big[\sum_{j=1}^n|F_{j-1}|^2[B(t_j)-B(t_{j-1})]^2\Big] + 2\,\mathbb E\Big[\sum_{j<k}F_{j-1}F_{k-1}[B(t_j)-B(t_{j-1})][B(t_k)-B(t_{k-1})]\Big].$$
Notice now that for $j<k$ the random variable
$$F_{j-1}F_{k-1}[B(t_j)-B(t_{j-1})]$$
is $\mathcal F_{t_{k-1}}$-measurable and consequently is independent of $B(t_k)-B(t_{k-1})$. Therefore, taking the expectation, we have
$$\mathbb E\big[F_{j-1}F_{k-1}[B(t_j)-B(t_{j-1})][B(t_k)-B(t_{k-1})]\big] = \mathbb E\big[F_{j-1}F_{k-1}[B(t_j)-B(t_{j-1})]\big]\,\mathbb E[B(t_k)-B(t_{k-1})] = 0.$$
It follows that
$$\mathbb E[|I(F)|^2] = \sum_{j=1}^n\mathbb E[|F_{j-1}|^2]\,(t_j-t_{j-1}),$$
as required.

Exercise 5.3 Let $F,G\in\mathscr E^2_B(0,T)$. Prove that
$$\mathbb E\Big[\int_0^T F(s)\,dB(s)\int_0^T G(s)\,dB(s)\Big] = \int_0^T\mathbb E[F(s)G(s)]\,ds.$$
Hint: use the identity
$$ab = \tfrac 12(a+b)^2 - \tfrac 12 a^2 - \tfrac 12 b^2, \quad a,b\in\mathbb R.$$
5.1.2 General definition of Itô's integral

Let us denote by
$$Z_T := L^2([0,T]\times\Omega,\ \mathcal B(0,T)\times\mathcal F,\ dt\otimes\mathbb P)$$
the Hilbert space of all (equivalence classes of) functions
$$F: [0,T]\times\Omega\to\mathbb R, \quad (t,\omega)\mapsto F(t,\omega),$$
which are measurable with respect to the product $\sigma$-algebra $\mathcal B(0,T)\times\mathcal F$ and such that
$$\|F\|^2_{Z_T} := \mathbb E\int_0^T|F(t,\cdot)|^2\,dt < \infty.$$
The scalar product on $Z_T$ is defined by
$$\langle F,F_1\rangle = \mathbb E\int_0^T F(t,\cdot)F_1(t,\cdot)\,dt.$$
Obviously any elementary process $F$ belongs to $Z_T$.

In view of (5.4), the mapping
$$\mathscr E^2_B(0,T)\subset Z_T\to L^2(\Omega,\mathcal F_T,\mathbb P), \quad F\mapsto\int_0^T F(s)\,dB(s),$$
is an isometry. Therefore it can be uniquely extended to the closure $\overline{\mathscr E^2_B(0,T)}$ of $\mathscr E^2_B(0,T)$ in $Z_T$. Processes belonging to $\overline{\mathscr E^2_B(0,T)}$ are called predictable.

So, the Itô integral can be uniquely defined by extension for any predictable square integrable process $F(t)$, $t\in[0,T]$, and the following properties are fulfilled:
$$\mathbb E\Big[\int_0^T F(s)\,dB(s)\Big] = 0, \tag{5.5}$$
$$\mathbb E\Big[\Big(\int_0^T F(s)\,dB(s)\Big)^2\Big] = \int_0^T\mathbb E(|F(s)|^2)\,ds. \tag{5.6}$$
Moreover, from Exercise 5.3 it follows that if $F$ and $G$ are predictable square integrable processes we have
$$\mathbb E\Big[\int_0^T F(s)\,dB(s)\int_0^T G(s)\,dB(s)\Big] = \int_0^T\mathbb E[F(s)G(s)]\,ds. \tag{5.7}$$
We can define in an obvious way the Itô integral $\int_a^b F(s)\,dB(s)$ over any interval $[a,b]\subset[0,T]$. We have
$$\mathbb E\Big[\int_a^b F(s)\,dB(s)\Big] = 0, \qquad \mathbb E\Big[\Big(\int_a^b F(s)\,dB(s)\Big)^2\Big] = \int_a^b\mathbb E(|F(s)|^2)\,ds.$$
Moreover, for any $a,b,c\in[0,T]$ we have
$$\int_a^c F(s)\,dB(s) = \int_a^b F(s)\,dB(s) + \int_b^c F(s)\,dB(s).$$

Let us now present a characterization of predictable processes (that is, of the space $\overline{\mathscr E^2_B(0,T)}$). Note first that an elementary process is a linear combination of processes of the form
$$F\,1\!\!1_{[a,b)}, \quad \text{with } F\ \mathcal F_a\text{-measurable}.$$
In turn each $F$ can be approximated by linear combinations of characteristic functions of $\mathcal F_a$-measurable sets. So, it is natural to approximate a general predictable process by linear combinations of functions of the form
$$1\!\!1_{A\times[a,b)}, \quad \text{with } A\ \mathcal F_a\text{-measurable}.$$
We call $A\times[a,b)$ a predictable rectangle. We denote by $\mathcal R$ the family of all predictable rectangles and by $\mathscr P$ the $\sigma$-algebra generated by $\mathcal R$; $\mathscr P$ is called the $\sigma$-algebra of all predictable events.

Definition 5.4 A real predictable process in $[0,T]$ is a real random variable in the probability space
$$([0,T]\times\Omega,\ \mathscr P,\ dt\otimes\mathbb P).$$

Proposition 5.5 The closure $\overline{\mathscr E^2_B(0,T)}$ is precisely $L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$.

Proof. Denote by $\Lambda_T$ the closure of $\mathscr E^2_B(0,T)$ in $L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$. Since any element of $L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ can be approximated by a monotonic sequence of simple functions, it is enough to show that $1\!\!1_A\in\Lambda_T$ for any $A\in\mathscr P$. For this we shall use the Dynkin theorem, see Appendix A. We first note that $\mathcal R$ is a $\pi$-system. Then we set
$$\mathcal D = \{A\in\mathscr P:\ 1\!\!1_A\in\Lambda_T\}.$$
We claim that $\mathcal D$ is a $\lambda$-system, i.e. that it fulfills (A.1). Properties (A.1)-(i)-(ii) are clear; let us show (A.1)-(iii). Let $(A_n)\subset\mathcal D$ be mutually disjoint sets and set
$$\phi_n = \sum_{k=1}^n 1\!\!1_{A_k}.$$
Then, by the monotone convergence theorem, $\phi_n\to\phi = 1\!\!1_A$ in $L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$, where $A = \bigcup_{k=1}^\infty A_k$. So, $A\in\mathcal D$ and (A.1)-(iii) is fulfilled. Now the conclusion follows by Theorem A.1.

Exercise 5.6 Let $F\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$, $[s,t]\subset[0,T]$ and let $\varphi\in L^\infty(\Omega,\mathcal F_s,\mathbb P)$. Prove that
$$\varphi\int_s^t F(r)\,dB(r) = \int_s^t\varphi\,F(r)\,dB(r). \tag{5.8}$$

Exercise 5.7 Let $F\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ be such that
$$\int_0^T F(s)\,dB(s) = 0.$$
Show that $F = 0$.
5.2 Itô integral for mean square continuous processes

We shall denote by $C_B([0,T];L^2(\Omega))$ the space of all stochastic processes which are mean square continuous and adapted. We recall that if $F\in C_B([0,T];L^2(\Omega))$ then $F(t)$ is $\mathcal F_t$-measurable for all $t\in[0,T]$ and the mapping
$$[0,T]\to L^2(\Omega,\mathcal F,\mathbb P), \quad t\mapsto F(t),$$
is continuous.

For any decomposition $\sigma = \{t_0,t_1,\dots,t_n\}\in\Sigma(0,T)$ consider the elementary process
$$F_\sigma := \sum_{j=1}^n F(t_{j-1})\,1\!\!1_{[t_{j-1},t_j)}$$
and set
$$I_\sigma(F) := \int_0^T F_\sigma(s)\,dB(s) = \sum_{j=1}^n F(t_{j-1})(B(t_j)-B(t_{j-1})).$$
Clearly $F_\sigma\in\mathscr E^2_B(0,T)$ and, using the continuity of $F$, one can check easily that
$$\lim_{|\sigma|\to0}F_\sigma = F \quad \text{in } L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P). \tag{5.9}$$
Consequently we have
$$\lim_{|\sigma|\to0}I_\sigma(F) = \int_0^T F(s)\,dB(s) \quad \text{in } L^2(\Omega,\mathcal F,\mathbb P). \tag{5.10}$$

Example 5.8 Let us prove that
$$\int_0^T B(t)\,dB(t) = \tfrac 12\,(B^2(T)-T). \tag{5.11}$$
Let $\sigma = \{t_0,t_1,\dots,t_n\}\in\Sigma(0,T)$. Write
$$B(t_{k-1})(B(t_k)-B(t_{k-1})) = B(t_{k-1})B(t_k) - B^2(t_{k-1}) = \tfrac 12\,B^2(t_k) - \tfrac 12\,B^2(t_{k-1}) - \tfrac 12\,(B(t_k)-B(t_{k-1}))^2.$$
Then we have
$$I_\sigma(B) = \tfrac 12\,B^2(T) - \tfrac 12\sum_{k=1}^n(B(t_k)-B(t_{k-1}))^2.$$
Recalling that the quadratic variation of $B$ is $T$ (Theorem 3.19), we deduce that
$$\int_0^T B(t)\,dB(t) = \lim_{|\sigma|\to0}I_\sigma(B) = \tfrac 12\,(B^2(T)-T).$$

Exercise 5.9 Prove that
$$\lim_{|\sigma|\to0}\sum_{k=1}^n B(t_k)(B(t_k)-B(t_{k-1})) = \tfrac 12\,(B^2(T)+T) \quad \text{in } L^2(\Omega,\mathcal F,\mathbb P),$$
and
$$\lim_{|\sigma|\to0}\sum_{k=1}^n B\Big(\frac{t_k+t_{k-1}}{2}\Big)(B(t_k)-B(t_{k-1})) = \tfrac 12\,B^2(T) \quad \text{in } L^2(\Omega,\mathcal F,\mathbb P).$$
Therefore the value of the limit depends on the particular choice of the evaluation points in the integral sums; the Itô integral corresponds to the choice of the left endpoints.
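A quick numerical sketch of Example 5.8 and Exercise 5.9 (one simulated path, arbitrary parameters): the left-point, right-point and midpoint sums of the same increments approach three different limits.

import numpy as np

rng = np.random.default_rng(4)
T, n = 1.0, 100000
# simulate B on a grid of mesh T/(2n), so that midpoints of the coarse grid are available
fine = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / (2 * n)), size=2 * n))))
B = fine[::2]              # B(t_k), k = 0, ..., n
Bmid = fine[1::2]          # B((t_{k-1}+t_k)/2), k = 1, ..., n
dB = np.diff(B)

ito   = np.sum(B[:-1] * dB)      # left points: the Ito sum of Example 5.8
right = np.sum(B[1:]  * dB)      # right points, as in the first limit of Exercise 5.9
mid   = np.sum(Bmid   * dB)      # midpoint times, as in the second limit of Exercise 5.9

print(ito,   0.5 * (B[-1]**2 - T))
print(right, 0.5 * (B[-1]**2 + T))
print(mid,   0.5 * B[-1]**2)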
5.3 The Itô integral as a stochastic process

Let $F\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ and set
$$X(t) = \int_0^t F(s)\,dB(s), \quad t\in[0,T].$$
We first notice that $X(t)$, $t\ge0$, is not in general a process with independent increments (unless $F$ is deterministic); take for instance
$$X(t) = \int_0^t B(s)\,dB(s) = \tfrac 12\,(B^2(t)-t), \quad t\ge0.$$
However, $X(t)$, $t\ge0$, has orthogonal increments (in the sense of $L^2(\Omega,\mathcal F,\mathbb P)$), as the following result shows.

Proposition 5.10 Let $0\le t_1\le t_2\le t_3\le t_4\le T$. Then we have
$$\mathbb E\big[(X(t_2)-X(t_1))(X(t_4)-X(t_3))\big] = 0.$$

Proof. Taking into account (5.7), we have in fact
$$\mathbb E\big[(X(t_2)-X(t_1))(X(t_4)-X(t_3))\big] = \mathbb E\Big[\int_{t_1}^{t_2}F(s)\,dB(s)\int_{t_3}^{t_4}F(s)\,dB(s)\Big] = \mathbb E\Big[\int_0^T 1\!\!1_{[t_1,t_2]}F(s)\,dB(s)\int_0^T 1\!\!1_{[t_3,t_4]}F(s)\,dB(s)\Big] = \int_0^T 1\!\!1_{[t_1,t_2]}1\!\!1_{[t_3,t_4]}\,\mathbb E(F^2(s))\,ds = 0.$$

We are going to show that $X(t)$, $t\ge0$, is mean square continuous, and then that it is a continuous process.

Proposition 5.11 Let $F\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$. Then $X\in C_B([0,T];L^2(\Omega))$.

Proof. We know that for any $t\in[0,T]$, $X(t)\in L^2(\Omega,\mathcal F_t,\mathbb P)$. Moreover, for any $t,t_0\in[0,T]$ we have
$$\mathbb E(|X(t)-X(t_0)|^2) = \Big|\int_{t_0}^t\mathbb E(|F(r)|^2)\,dr\Big|,$$
so that
$$\lim_{t\to t_0}\mathbb E(|X(t)-X(t_0)|^2) = 0.$$
The conclusion follows.

We show now that $X(t)$, $t\ge0$, is a continuous process. For this we first prove that it is a martingale with respect to the filtration $(\mathcal F_t)$ (see Appendix C).

Proposition 5.12 $X(t)$, $t\in[0,T]$, is an $\mathcal F_t$-martingale.

Proof. Let $t>s$. Since
$$X(t)-X(s) = \int_s^t F(r)\,dB(r),$$
we have
$$\mathbb E[X(t)\,|\,\mathcal F_s] = X(s) + \mathbb E\Big[\int_s^t F(r)\,dB(r)\,\Big|\,\mathcal F_s\Big].$$
So, it remains to prove that
$$\mathbb E\Big[\int_s^t F(r)\,dB(r)\,\Big|\,\mathcal F_s\Big] = 0. \tag{5.12}$$
Notice that this is not obvious, since $\int_s^t F(r)\,dB(r)$ is not independent of $\mathcal F_s$ in general$^{(1)}$. It is enough to prove (5.12) when $F$ is an elementary process,
$$F = \sum_{i=1}^n F_{i-1}\,1\!\!1_{[t_{i-1},t_i)},$$
where $s = t_0 < t_1 < \cdots < t_n = t$ and $F_{i-1}\in L^2(\Omega,\mathcal F_{t_{i-1}},\mathbb P)$. In this case, taking into account that $\mathcal F_s\subset\mathcal F_{t_{i-1}}$, we write
$$\mathbb E\Big[\int_s^t F(r)\,dB(r)\,\Big|\,\mathcal F_s\Big] = \sum_{i=1}^n\mathbb E\big[F_{i-1}(B(t_i)-B(t_{i-1}))\,\big|\,\mathcal F_s\big] = \sum_{i=1}^n\mathbb E\Big\{\mathbb E\big[F_{i-1}(B(t_i)-B(t_{i-1}))\,\big|\,\mathcal F_{t_{i-1}}\big]\,\Big|\,\mathcal F_s\Big\} = 0,$$
since $F_{i-1}$ is $\mathcal F_{t_{i-1}}$-measurable and $B(t_i)-B(t_{i-1})$ is independent of $\mathcal F_{t_{i-1}}$. So, (5.12) is proved and the conclusion follows.

$^{(1)}$ Because $F(r)$ contains in general the "history" of the Brownian motion from $0$ to $r$.

We are now ready to prove the continuity of $X$.

Theorem 5.13 Let $F\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ and let
$$X(t) = \int_0^t F(s)\,dB(s), \quad t\in[0,T].$$
Then $X$ has a continuous version and
$$\mathbb E\Big[\sup_{t\in[0,T]}|X(t)|^2\Big] \le 4\int_0^T\mathbb E|F(s)|^2\,ds. \tag{5.13}$$

Proof. Let $(F_n)\subset\mathscr E^2_B(0,T)$ be such that
$$F_n\to F \quad \text{in } L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$$
and set
$$X_n(t) = \int_0^t F_n(s)\,dB(s), \quad n\in\mathbb N,\ t\in[0,T].$$
Since $B(t)$ is continuous, it is clear that $X_n(t)$ is continuous for all $n\in\mathbb N$. Taking into account Proposition 5.12 we see that $X_n(t)-X_m(t)$, $t\in[0,T]$, is a continuous $\mathcal F_t$-martingale. Then by Corollary C.6 it follows that for any $n,m\in\mathbb N$
$$\mathbb E\Big[\sup_{t\in[0,T]}|X_n(t)-X_m(t)|^2\Big] \le 4\,\mathbb E(|X_n(T)-X_m(T)|^2) = 4\,\mathbb E\Big[\int_0^T|F_n(s)-F_m(s)|^2\,ds\Big].$$
Consequently $(X_n(\cdot,\omega))$ is a Cauchy sequence in $C([0,T])$ for almost all $\omega$ (along a suitable subsequence), and its limit, which coincides with $X(\cdot,\omega)$ almost surely, is continuous. Passing to the limit in the analogous inequality for $X_n$ yields (5.13).
5.4 Itô integral with stopping times

5.4.1 Stopping times

We proceed here as in Section 4.2. A nonnegative extended random variable $\tau$ in $(\Omega,\mathcal F,\mathbb P)$ is called a stopping time with respect to the filtration $(\mathcal F_t)_{t\ge0}$ if
$$\{\tau\le t\}\in\mathcal F_t \quad \text{for all } t\ge0.$$
To any stopping time $\tau$ we associate the $\sigma$-algebra
$$\mathcal F_\tau := \{A\in\mathcal F:\ A\cap\{\tau\le t\}\in\mathcal F_t\text{ for all }t\ge0\}.$$
The proofs of the two following propositions are completely similar to those of Propositions 4.8 and 4.9, so they will be omitted.

Proposition 5.14 Let $\tau$ be a stopping time. Then there exists a decreasing sequence $(\tau_n)$ of discrete stopping times convergent pointwise to $\tau$ and such that $\mathcal F_{\tau_n}\supset\mathcal F_\tau$ for all $n\in\mathbb N$.

Proposition 5.15 Let $\tau$ be a stopping time and set
$$B(\tau)(\omega) = B(\tau(\omega))(\omega), \quad \omega\in\Omega.$$
Then $B(\tau)$ is $\mathcal F_\tau$-measurable and $B(t+\tau)-B(\tau)$, $t\ge0$, is a Brownian motion in $(\Omega,\mathcal F,\mathbb P)$.

5.4.2 Itô's integral with stopping times

Let $F\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ and set
$$X(t) = \int_0^t F(s)\,dB(s), \quad t\in[0,T].$$
Let moreover $\tau\le T$ be a stopping time. Define
$$\int_0^\tau F(s)\,dB(s) := X(\tau),$$
where
$$X(\tau)(\omega) = X(\tau(\omega),\omega), \quad \omega\in\Omega.$$
Arguing as in Proposition 5.15 and using the fact that $X(t)$, $t\in[0,T]$, has a continuous version, one can see that $X(\tau)$ is $\mathcal F_\tau$-measurable.

The following result reduces an Itô integral with a stopping time to a usual one between $0$ and $T$.

Proposition 5.16 Let $F\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ and let $\tau\le T$ be a stopping time. Then we have
$$\int_0^\tau F(s)\,dB(s) = \int_0^T 1\!\!1_{\{s<\tau\}}F(s)\,dB(s). \tag{5.14}$$

Proof. It is enough to prove the result when $\tau$ is discrete, of the form
$$\tau(\Omega) = (t_1,t_2,\dots,t_n), \quad 0<t_1<t_2<\cdots<t_n\le T.$$
Set
$$A_i := \{\tau = t_i\}, \quad i=1,\dots,n.$$
Then $A_i\in\mathcal F_{t_i}$, $i=1,\dots,n$. Consider now the stochastic process
$$h(s) = 1\!\!1_{\{s<\tau\}}, \quad s\in[0,T].$$
We have $h(s) = 1$ for $s\in[0,t_1)$. If $s\in[t_1,t_2)$ we have $h(s)(\omega) = 1$ if and only if $\omega\in A_2\cup\cdots\cup A_n$, so that
$$h(s) = 1\!\!1_{A_2\cup\cdots\cup A_n} = 1\!\!1_{A_1^c}.$$
Similarly, if $s\in[t_{k-1},t_k)$ with $k\le n$ we have
$$h(s) = 1\!\!1_{A_k\cup\cdots\cup A_n} = 1\!\!1_{(A_1\cup\cdots\cup A_{k-1})^c}.$$
Then $h$ is predictable and
$$\int_0^T 1\!\!1_{\{s<\tau\}}F(s)\,dB(s) = \int_0^{t_1}F(s)\,dB(s) + 1\!\!1_{A_1^c}\int_{t_1}^{t_2}F(s)\,dB(s) + \cdots + 1\!\!1_{(A_1\cup A_2\cup\cdots\cup A_{n-1})^c}\int_{t_{n-1}}^{t_n}F(s)\,dB(s)$$
$$= X(t_1) + 1\!\!1_{A_1^c}(X(t_2)-X(t_1)) + \cdots + 1\!\!1_{(A_1\cup A_2\cup\cdots\cup A_{n-1})^c}(X(t_n)-X(t_{n-1})) = X(\tau).$$
5.5 Multidimensional Itô integrals

Let $m\in\mathbb N$ be fixed and consider a standard $m$-dimensional Brownian motion
$$B(t) = (B_1(t),\dots,B_m(t)), \quad t\ge0,$$
in the probability space $(\Omega,\mathcal F,\mathbb P)$. Let $(\mathcal F_t)_{t\in[0,T]}$ be the natural filtration of $B$ (augmented with all $\mathbb P$-null sets of $\Omega$).

We shall define the Itô integral for predictable processes with values in $L(\mathbb R^m,\mathbb R^d)$ (that is, such that every matrix element belongs to $L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$). We shall denote this space by $L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P;L(\mathbb R^m,\mathbb R^d))$.

First we need a lemma whose simple proof is left to the reader.

Lemma 5.17 Let $f,g\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$. Then we have
$$\mathbb E\Big[\int_0^T f(s)\,dB_i(s)\int_0^T g(s)\,dB_j(s)\Big] = \delta_{i,j}\int_0^T\mathbb E[f(s)g(s)]\,ds, \quad i,j=1,\dots,m. \tag{5.15}$$

Let now $F\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P;L(\mathbb R^m,\mathbb R^d))$. We define the Itô integral of $F$ as the $d$-dimensional process
$$\Big(\int_0^T F(t)\,dB(t)\Big)_i = \sum_{j=1}^m\int_0^T F_{i,j}(t)\,dB_j(t), \quad i=1,\dots,d.$$

Proposition 5.18 Let $F\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P;L(\mathbb R^m,\mathbb R^d))$. Then we have
$$\mathbb E\,\Big|\int_0^T F(t)\,dB(t)\Big|^2 = \int_0^T\mathbb E\big[\operatorname{Tr}(F(t)F^*(t))\big]\,dt, \tag{5.16}$$
where $\operatorname{Tr}$ denotes the trace.

Proof. Set $I(F) = \int_0^T F(t)\,dB(t)$. Then we have
$$(I(F))_i = \sum_{j=1}^m\int_0^T F_{i,j}(t)\,dB_j(t), \quad i=1,\dots,d.$$
It follows that
$$\mathbb E|I(F)|^2 = \sum_{i=1}^d\mathbb E\Big[\sum_{j=1}^m\int_0^T F_{i,j}(t)\,dB_j(t)\Big]^2$$
and, taking into account (5.15),
$$\mathbb E|I(F)|^2 = \sum_{i=1}^d\sum_{j=1}^m\int_0^T\mathbb E[F_{i,j}(t)^2]\,dt,$$
which yields (5.16).

Remark 5.19 Assume that $d=1$, so that $L(\mathbb R^m;\mathbb R)$ is isomorphic to $\mathbb R^m$ and $F$ becomes a vector $F = (F_1,\dots,F_m)$. In this case we shall write the Itô integral of $F$ as
$$\int_0^T\langle F(s),dB(s)\rangle,$$
and formula (5.16) reduces to
$$\mathbb E\,\Big|\int_0^T\langle F(t),dB(t)\rangle\Big|^2 = \int_0^T\mathbb E|F(t)|^2\,dt. \tag{5.17}$$
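A short Monte Carlo check of the isometry (5.17) for the simple deterministic integrand $F(t) = (\cos t, \sin t)$, with $d=1$, $m=2$; the parameters below are arbitrary, and the stochastic integral is approximated by its left-point sums.

import numpy as np

rng = np.random.default_rng(5)
T, n, n_paths = 1.0, 500, 20000
t = np.linspace(0.0, T, n + 1)[:-1]
F = np.stack([np.cos(t), np.sin(t)], axis=1)           # F(t_k), shape (n, 2)
dB = rng.normal(0.0, np.sqrt(T / n), size=(n_paths, n, 2))

I = np.einsum('kj,pkj->p', F, dB)                       # sum_k <F(t_k), dB_k>, one value per path
lhs = np.mean(I**2)                                     # estimate of E |int <F, dB>|^2
rhs = np.sum(F**2) * (T / n)                            # Riemann sum of int_0^T |F(t)|^2 dt
print(lhs, rhs)                                         # both close to T = 1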
Chapter 6

The Itô formula

6.1 Introduction

Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space, $B$ a real Brownian motion, $(\mathcal F_t)_{t\ge0}$ the natural filtration of $B$ augmented with the null sets of $\mathbb P$, and $\mathscr P$ the $\sigma$-algebra of all predictable events (also augmented with the null sets of $\mathbb P$).

We are given two stochastic processes $b,\sigma\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ and consider the stochastic process
$$X(t) = x + \int_0^t b(s)\,ds + \int_0^t\sigma(s)\,dB(s), \quad t\ge0, \tag{6.1}$$
where $x\in\mathbb R$. $X$ is adapted, continuous and mean square continuous. We set
$$dX(t) = b(t)\,dt + \sigma(t)\,dB(t)$$
and call $dX(t)$ the Itô differential of $X$.

Given a regular real function $\varphi$, we are going to give a meaning to the Itô differential of $\varphi(X(t))$.

We need some notation. For any $k\in\mathbb N$ we denote by $C^k_b(\mathbb R)$ the linear space of all real mappings which are uniformly continuous and bounded together with their derivatives of order less than or equal to $k$. If $\varphi\in C^k_b(\mathbb R)$ we set
$$\|\varphi\|_0 = \sup_{x\in\mathbb R}|\varphi(x)|$$
and
$$\|\varphi\|_k = \|\varphi\|_0 + \sum_{j=1}^k\sup_{x\in\mathbb R}|D^j\varphi(x)|.$$
We shall prove the following Itô formula:
$$\varphi(X(t)) = \varphi(x) + \int_0^t\varphi'(X(s))\sigma(s)\,dB(s) + \int_0^t\Big[\tfrac 12\,\sigma^2(s)\varphi''(X(s)) + b(s)\varphi'(X(s))\Big]\,ds, \quad t\ge0. \tag{6.2}$$
We shall write (6.2) in differential form as
$$d\varphi(X(t)) = \varphi'(X(t))\sigma(t)\,dB(t) + \Big[\tfrac 12\,\sigma^2(t)\varphi''(X(t)) + b(t)\varphi'(X(t))\Big]\,dt, \quad t\ge0, \tag{6.3}$$
or, also, as
$$d\varphi(X(t)) = \varphi'(X(t))\,dX(t) + \tfrac 12\,\sigma^2(t)\varphi''(X(t))\,dt, \quad t\ge0. \tag{6.4}$$

Remark 6.1 One can deduce Itô's formula formally by proceeding as follows. Write $dX = b(t)\,dt + \sigma(t)\,dB$ and
$$d\varphi(X) = \varphi(X+dX)-\varphi(X) = \varphi'(X)\,dX + \tfrac 12\,\varphi''(X)(dX)^2 = \varphi'(X)\,dX + \tfrac 12\,\varphi''(X)\big[b^2(t)(dt)^2 + 2b(t)\sigma(t)\,dt\,dB + \sigma^2(t)(dB)^2\big].$$
Put $(dB)^2 = dt$ and neglect the terms of order greater than $dt$, that is the terms containing $(dt)^2$ and $dt\,dB(t)$. Writing $(dB)^2 = dt$ is justified by Lemma 6.2 below.
The following result on quadratic sums of a process is a generalization of Theorem 3.19.

Lemma 6.2 Let $F\in C_B([0,T];L^2(\Omega,\mathcal F,\mathbb P))$ and let $\eta = \{0=t_0<t_1<\cdots<t_n=T\}\in\Sigma(0,T)$. Then we have
$$\lim_{|\eta|\to0}\sum_{k=1}^n F(t_{k-1})(B(t_k)-B(t_{k-1}))^2 = \int_0^T F(s)\,ds \quad \text{in } L^2(\Omega,\mathcal F,\mathbb P). \tag{6.5}$$

Proof. Set
$$J_\eta := \sum_{k=1}^n F(t_{k-1})(B(t_k)-B(t_{k-1}))^2.$$
It is enough to prove that
$$\lim_{|\eta|\to0}\mathbb E\Big[\Big(J_\eta - \sum_{k=1}^n F(t_{k-1})(t_k-t_{k-1})\Big)^2\Big] = 0, \tag{6.6}$$
since, obviously,
$$\lim_{|\eta|\to0}\sum_{k=1}^n F(t_{k-1})(t_k-t_{k-1}) = \int_0^T F(s)\,ds \quad \text{in } L^2(\Omega,\mathcal F,\mathbb P).$$
To prove (6.6) write
$$\mathbb E\Big[\Big(J_\eta - \sum_{k=1}^n F(t_{k-1})(t_k-t_{k-1})\Big)^2\Big] = \mathbb E\Big[\Big(\sum_{k=1}^n F(t_{k-1})\big[|B(t_k)-B(t_{k-1})|^2-(t_k-t_{k-1})\big]\Big)^2\Big]$$
$$= \sum_{k=1}^n\mathbb E\Big[|F(t_{k-1})|^2\big[|B(t_k)-B(t_{k-1})|^2-(t_k-t_{k-1})\big]^2\Big]$$
$$\quad + 2\sum_{j<k=1}^n\mathbb E\Big[F(t_{j-1})\big[|B(t_j)-B(t_{j-1})|^2-(t_j-t_{j-1})\big]\,F(t_{k-1})\big[|B(t_k)-B(t_{k-1})|^2-(t_k-t_{k-1})\big]\Big].$$
Since the Brownian motion has independent increments, the last sum vanishes, so that
$$\mathbb E\Big[\Big(J_\eta - \sum_{k=1}^n F(t_{k-1})(t_k-t_{k-1})\Big)^2\Big] = \sum_{k=1}^n\mathbb E\big[|F(t_{k-1})|^2\big]\,\mathbb E\Big[\big(|B(t_k)-B(t_{k-1})|^2-(t_k-t_{k-1})\big)^2\Big], \tag{6.7}$$
since $F(t_{k-1})$ and $B(t_k)-B(t_{k-1})$ are independent. Now, taking into account that
$$\mathbb E\big[|B(t_k)-B(t_{k-1})|^2\big] = t_k-t_{k-1}, \qquad \mathbb E\big[|B(t_k)-B(t_{k-1})|^4\big] = 3(t_k-t_{k-1})^2,$$
we have
$$\mathbb E\Big[\Big(J_\eta - \sum_{k=1}^n F(t_{k-1})(t_k-t_{k-1})\Big)^2\Big] = 2\sum_{k=1}^n\mathbb E\big[|F(t_{k-1})|^2\big](t_k-t_{k-1})^2 \le 2|\eta|\sum_{k=1}^n\mathbb E\big[|F(t_{k-1})|^2\big](t_k-t_{k-1}) \to 0$$
as $|\eta|\to0$. The conclusion follows.
Now we are in a position to prove Itô's formula. First we assume that $b$ and $\sigma$ are elementary processes,
$$b = \sum_{i=1}^p b_{i-1}\,1\!\!1_{[\lambda_{i-1},\lambda_i)}, \qquad \sigma = \sum_{i=1}^p\sigma_{i-1}\,1\!\!1_{[\lambda_{i-1},\lambda_i)}, \tag{6.8}$$
where $p\in\mathbb N$, $0=\lambda_0<\lambda_1<\cdots<\lambda_p$ and $b_i$, $\sigma_i$ are $\mathcal F_{\lambda_i}$-measurable for any $i=0,1,\dots,p-1$.

Lemma 6.3 Let $\varphi\in C^2_b(\mathbb R)$, $x\in\mathbb R$, $b$ and $\sigma$ given by (6.8) and $X$ by (6.1). Then identity (6.2) holds.

Proof. Since $C^3_b(\mathbb R)$ is dense in $C^2_b(\mathbb R)$, it is enough to show (6.2) when $\varphi\in C^3_b(\mathbb R)$. We start by proving (6.2) in $[0,t]$ with $t\le\lambda_1$. In this case we have
$$b(t) = b_0, \quad \sigma(t) = \sigma_0, \quad t\in[0,\lambda_1],$$
and
$$X(t) = x + b_0t + \sigma_0B(t), \quad t\in[0,\lambda_1].$$
Let $\eta = \{t_0=0<t_1<\cdots<t_N=t\}$. Then we obviously have
$$\varphi(X(t))-\varphi(x) = \sum_{k=1}^N[\varphi(X(t_k))-\varphi(X(t_{k-1}))].$$
On the other hand, using Taylor's formula we can write
$$\varphi(X(t))-\varphi(x) = \sum_{k=1}^N\varphi'(X(t_{k-1}))(X(t_k)-X(t_{k-1})) + \tfrac 12\sum_{k=1}^N\varphi''(X(t_{k-1}))(X(t_k)-X(t_{k-1}))^2 + R_\eta =: I_1+I_2+I_3. \tag{6.9}$$
Concerning $I_1$ we have
$$I_1 = \sum_{k=1}^N\varphi'(X(t_{k-1}))\big(b_0(t_k-t_{k-1}) + \sigma_0(B(t_k)-B(t_{k-1}))\big).$$
So,
$$\lim_{|\eta|\to0}I_1 = \int_0^t\varphi'(X(s))b(s)\,ds + \int_0^t\varphi'(X(s))\sigma(s)\,dB(s) \quad \text{in } L^2(\Omega,\mathcal F,\mathbb P). \tag{6.10}$$
Concerning $I_2$ we write
$$2I_2 = \sum_{k=1}^N\varphi''(X(t_{k-1}))\,b_0^2\,(t_k-t_{k-1})^2 + 2\sum_{k=1}^N\varphi''(X(t_{k-1}))\,b_0\sigma_0\,(t_k-t_{k-1})(B(t_k)-B(t_{k-1})) + \sum_{k=1}^N\varphi''(X(t_{k-1}))\,\sigma_0^2\,(B(t_k)-B(t_{k-1}))^2 =: I_{2,1}+I_{2,2}+I_{2,3}. \tag{6.11}$$
It is easy to check that
$$\lim_{|\eta|\to0}I_{2,1} = \lim_{|\eta|\to0}I_{2,2} = 0 \quad \text{in } L^1(\Omega,\mathcal F,\mathbb P). \tag{6.12}$$
In fact
$$|I_{2,1}| \le \|\varphi\|_2\,|b_0|^2\sum_{k=1}^N(t_k-t_{k-1})^2 \to 0 \quad \text{as } |\eta|\to0,$$
and$^{(1)}$
$$\mathbb E|I_{2,2}| \le 2\|\varphi\|_2\,|b_0|\,|\sigma_0|\sum_{k=1}^N(t_k-t_{k-1})\,\mathbb E|B(t_k)-B(t_{k-1})| \le 2\|\varphi\|_2\,|b_0|\,|\sigma_0|\sum_{k=1}^N(t_k-t_{k-1})^{3/2} \to 0 \quad \text{as } |\eta|\to0.$$
Moreover, by Lemma 6.2 it follows that
$$\lim_{|\eta|\to0}I_{2,3} = \int_0^t\varphi''(X(s))\sigma^2(s)\,ds \quad \text{in } L^2(\Omega,\mathcal F,\mathbb P). \tag{6.13}$$
So, the conclusion will follow provided
$$\lim_{|\eta|\to0}\mathbb E|R_\eta| = 0. \tag{6.14}$$
Let us prove (6.14). We have
$$R_\eta = \sum_{k=1}^N\int_0^1(1-\xi)\big[\varphi''(\xi_k)-\varphi''(X(t_{k-1}))\big](X(t_k)-X(t_{k-1}))^2\,d\xi,$$
where
$$\xi_k = (1-\xi)X(t_{k-1}) + \xi X(t_k).$$
Since $\varphi\in C^3_b(\mathbb R)$ we have, by the mean value theorem,
$$|\varphi''(\xi_k)-\varphi''(X(t_{k-1}))| \le \|\varphi\|_3\,|X(t_k)-X(t_{k-1})|,$$
so that, bounding $1-\xi$ by $1$, we deduce
$$|R_\eta| \le \|\varphi\|_3\sum_{k=1}^N|X(t_k)-X(t_{k-1})|^3.$$
Consequently
$$|R_\eta| \le 4\|\varphi\|_3\,|b_0|^3\sum_{k=1}^N(t_k-t_{k-1})^3 + 4\|\varphi\|_3\,|\sigma_0|^3\sum_{k=1}^N|B(t_k)-B(t_{k-1})|^3,$$
and so$^{(2)}$
$$\mathbb E(|R_\eta|) \le 4\|\varphi\|_3\,|b_0|^3\sum_{k=1}^N(t_k-t_{k-1})^3 + 4\sqrt{15}\,\|\varphi\|_3\,|\sigma_0|^3\sum_{k=1}^N(t_k-t_{k-1})^{3/2} \to 0$$
as $|\eta|\to0$. The proof is complete when $t\le\lambda_1$. The general case can be treated in the same way, taking into account that $b_{k-1}$ and $\sigma_{k-1}$ are independent of $B(t_k)-B(t_{k-1})$.

$^{(1)}$ Since $\mathbb E|B(t)| \le [\mathbb E|B(t)|^2]^{1/2} = t^{1/2}$.

$^{(2)}$ Since $\mathbb E|B(t)|^3 \le [\mathbb E(B(t)^6)]^{1/2} = \sqrt{15}\,t^{3/2}$.
We finally prove

Theorem 6.4 Let $x\in\mathbb R$, $b,\sigma\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ and $\varphi\in C^2_b(\mathbb R)$. Then identity (6.2) holds for all $t\in[0,T]$.

Proof. Let $(b_j)$ and $(\sigma_j)$ be sequences of elementary processes such that
$$\lim_{j\to\infty}b_j = b, \quad \lim_{j\to\infty}\sigma_j = \sigma \quad \text{in } L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P).$$
Set, for any $j\in\mathbb N$,
$$X_j(t) = x + \int_0^t b_j(s)\,ds + \int_0^t\sigma_j(s)\,dB(s), \quad t\in[0,T]. \tag{6.15}$$
Then we have (see (5.10))
$$\lim_{j\to\infty}X_j = X \quad \text{in } C_B([0,T];L^2(\Omega)).$$
Moreover, by Lemma 6.3 we have
$$\varphi(X_j(t)) = \varphi(x) + \int_0^t\varphi'(X_j(s))\sigma_j(s)\,dB(s) + \int_0^t\Big[\tfrac 12\,\sigma_j^2(s)\varphi''(X_j(s)) + b_j(s)\varphi'(X_j(s))\Big]\,ds. \tag{6.16}$$
Now the conclusion follows by the dominated convergence theorem letting $j\to\infty$.

Taking the expectation in the Itô formula we find a useful identity which allows one to estimate the expectation of $\varphi(X(t))$.

Proposition 6.5 Assume that $x\in\mathbb R$, $b,\sigma\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ and $\varphi\in C^2_b(\mathbb R)$. Let
$$X(t) = x + \int_0^t b(s)\,ds + \int_0^t\sigma(s)\,dB(s), \quad t\in[0,T].$$
Then
$$\mathbb E[\varphi(X(t))] = \varphi(x) + \tfrac 12\,\mathbb E\int_0^t\big[\varphi''(X(s))\sigma^2(s) + 2\varphi'(X(s))b(s)\big]\,ds. \tag{6.17}$$
6.1.1 The Itô formula for unbounded functions

We want now to show that formula (6.17) also holds without the assumption that $\varphi$ is bounded, provided the integrand in the right hand side is summable.

Proposition 6.6 Assume that $x\in\mathbb R$, $b,\sigma\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ and $\varphi\in C^2(\mathbb R)$. Set
$$X(t) = x + \int_0^t b(s)\,ds + \int_0^t\sigma(s)\,dB(s), \quad t\in[0,T], \tag{6.18}$$
and assume in addition that
$$\mathbb E\int_0^t\big|\varphi''(X(s))\sigma^2(s) + 2\varphi'(X(s))b(s)\big|\,ds < +\infty. \tag{6.19}$$
Then $\mathbb E[\varphi(X(t))] < +\infty$ and (6.17) holds.

Example 6.7 Take $\varphi(x) = x^2$. Then condition (6.19) becomes
$$\mathbb E\int_0^t\big|\sigma^2(s) + 2X(s)b(s)\big|\,ds < +\infty,$$
which is clearly fulfilled. Then
$$\mathbb E(|X(t)|^2) = |x|^2 + \mathbb E\int_0^t\big(\sigma^2(s) + 2X(s)b(s)\big)\,ds.$$

Proof of Proposition 6.6. For any $R>0$ consider a function $\varphi_R\in C^2_b(\mathbb R)$ such that
$$\varphi_R(x) = \begin{cases}\varphi(x) & \text{if } |x|\le R,\\ 0 & \text{if } |x|\ge R+1.\end{cases}$$
Then, applying Itô's formula (6.2) to $\varphi_R(X(t))$ yields, for any $R>0$,
$$\varphi_R(X(t))-\varphi_R(x) = \frac 12\int_0^t\big[\varphi_R''(X(s))\sigma^2(s) + 2\varphi_R'(X(s))b(s)\big]\,ds + \int_0^t\varphi_R'(X(s))\sigma(s)\,dB(s). \tag{6.20}$$
Let now $\tau_R$ be the stopping time
$$\tau_R = \begin{cases}\inf\{t\in[0,T]:\ |X(t)|\ge R\} & \text{if } \sup_{t\in[0,T]}|X(t)|\ge R,\\ T & \text{if } \sup_{t\in[0,T]}|X(t)|<R.\end{cases}$$
It is clear that $\tau_R$ is increasing in $R$ and bounded by $T$. We know that $X(\cdot,\omega)$ is continuous for almost all $\omega\in\Omega$. For such an $\omega$, $X(\cdot,\omega)$ attains its maximum, say $M(\omega)$. Then we have $\tau_R(\omega) = T$ for all $R>M(\omega)$. So,
$$\lim_{R\to\infty}\tau_R = T \quad \mathbb P\text{-a.s.} \tag{6.21}$$
Now, in view of Proposition 5.16 we can write
$$\varphi(X(t\wedge\tau_R))-\varphi(x) = \frac 12\int_0^t 1\!\!1_{s<(t\wedge\tau_R)}\big[\varphi''(X(s))\sigma^2(s) + 2\varphi'(X(s))b(s)\big]\,ds + \int_0^t 1\!\!1_{s<(t\wedge\tau_R)}\varphi'(X(s))\sigma(s)\,dB(s). \tag{6.22}$$
Taking the expectation we obtain
$$\mathbb E[\varphi(X(t\wedge\tau_R))] - \varphi(x) = \frac 12\,\mathbb E\int_0^t 1\!\!1_{s<(t\wedge\tau_R)}\big[\varphi''(X(s))\sigma^2(s) + 2\varphi'(X(s))b(s)\big]\,ds. \tag{6.23}$$
Now, by assumption (6.19), by (6.21) and by the dominated convergence theorem, we can let $R\to\infty$, obtaining the conclusion.

As an application of Proposition 6.6 let us estimate $\mathbb E\big(\int_0^T F(s)\,dB(s)\big)^{2m}$, where $F$ is predictable and $m\in\mathbb N$, $m>1$.
Proposition 6.8 Assume that $F\in L^{2m}([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$, $m\in\mathbb N$, and set
$$X(t) = \int_0^t F(s)\,dB(s), \quad t\in[0,T].$$
Then $X\in L^{2m}([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ and we have
$$\mathbb E[|X(T)|^{2m}] \le [m(2m-1)]^m\,T^{m-1}\int_0^T\mathbb E\big[|F(t)|^{2m}\big]\,dt. \tag{6.24}$$

Proof. It is enough to prove (6.24) when $F$ is bounded (because $L^\infty([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$ is dense in $L^{2m}([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$).

We start from the case $m=2$, setting $\varphi(x) = x^4$. Then (6.19) holds, so that by Proposition 6.6 we have
$$\mathbb E[|X(t)|^4] = 6\,\mathbb E\Big[\int_0^t|X(s)|^2|F(s)|^2\,ds\Big].$$
By Hölder's inequality it follows that
$$\mathbb E[|X(t)|^4] \le 6\Big(\mathbb E\int_0^t|X(s)|^4\,ds\Big)^{1/2}\Big(\mathbb E\int_0^t|F(s)|^4\,ds\Big)^{1/2}. \tag{6.25}$$
Integrating between $0$ and $T$ yields
$$\int_0^T\mathbb E|X(t)|^4\,dt \le 6T\Big(\mathbb E\int_0^T|X(t)|^4\,dt\Big)^{1/2}\Big(\mathbb E\int_0^T|F(t)|^4\,dt\Big)^{1/2}, \tag{6.26}$$
from which
$$\int_0^T\mathbb E|X(t)|^4\,dt \le 36\,T^2\int_0^T\mathbb E|F(t)|^4\,dt.$$
Substituting this into (6.25) yields
$$\mathbb E[|X(t)|^4] \le 36\,T\,\mathbb E\int_0^T|F(t)|^4\,dt.$$
So, (6.24) is proved for $m=2$. We can now easily iterate the previous argument, taking successively $m=3,4$ and so on.
6.2 Itô formula for a vector valued process

Let $d,m\in\mathbb N$. Assume that $x\in\mathbb R^d$, $b\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P;\mathbb R^d)$ and $\sigma\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P;L(\mathbb R^m;\mathbb R^d))$. Set
$$X(t) = x + \int_0^t b(s)\,ds + \int_0^t\sigma(s)\,dB(s), \quad t\in[0,T].$$
We are going to prove the following Itô formula:
$$\varphi(X(t)) = \varphi(x) + \int_0^t\langle D\varphi(X(s)),\sigma(s)\,dB(s)\rangle + \int_0^t\Big[\tfrac 12\operatorname{Tr}\big[(\sigma\sigma^*)(s)D^2\varphi(X(s))\big] + \langle b(s),D\varphi(X(s))\rangle\Big]\,ds, \tag{6.27}$$
for all $t\in[0,T]$. We shall write (6.27) in the differential form
$$d\varphi(X(t)) = \langle D\varphi(X(t)),\sigma(t)\,dB(t)\rangle + \Big[\tfrac 12\operatorname{Tr}\big[(\sigma\sigma^*)(t)D^2\varphi(X(t))\big] + \langle b(t),D\varphi(X(t))\rangle\Big]\,dt, \quad t\ge0. \tag{6.28}$$
The proof is similar to that of the one-dimensional case seen before, so we shall only sketch some points of it. Let us start with a preliminary lemma.
Lemma 6.9 Let $f\in C_B([0,T];L^2(\Omega))$ and let $i,j\in\{1,2,\dots,m\}$. Then we have
$$\lim_{|\eta|\to0}\sum_{k=1}^n f(t_{k-1})(B_i(t_k)-B_i(t_{k-1}))(B_j(t_k)-B_j(t_{k-1})) = \delta_{i,j}\int_0^T f(s)\,ds \quad \text{in } L^2(\Omega,\mathcal F,\mathbb P). \tag{6.29}$$

Proof. Let $\eta = \{0=t_0<t_1<\cdots<t_n=T\}$ be a decomposition of $[0,T]$. If $i=j$, (6.29) follows from Lemma 6.2. Let $i\ne j$ and set
$$I^\eta_{i,j} := \sum_{k=1}^n f(t_{k-1})(B_i(t_k)-B_i(t_{k-1}))(B_j(t_k)-B_j(t_{k-1})).$$
Then we have
$$\mathbb E[(I^\eta_{i,j})^2] = \mathbb E\sum_{h,k=1}^n f(t_{h-1})f(t_{k-1})(B_i(t_h)-B_i(t_{h-1}))(B_j(t_h)-B_j(t_{h-1}))(B_i(t_k)-B_i(t_{k-1}))(B_j(t_k)-B_j(t_{k-1}))$$
$$= \mathbb E\sum_{h=1}^n f^2(t_{h-1})(B_i(t_h)-B_i(t_{h-1}))^2(B_j(t_h)-B_j(t_{h-1}))^2 = \sum_{h=1}^n\mathbb E(f^2(t_{h-1}))(t_h-t_{h-1})^2 \to 0$$
as $|\eta|\to0$.
Now we prove Itô's formula when $b$ and $\sigma$ are elementary processes,
$$b = \sum_{i=1}^p b_{i-1}\,1\!\!1_{[\lambda_{i-1},\lambda_i)}, \qquad \sigma = \sum_{i=1}^p\sigma_{i-1}\,1\!\!1_{[\lambda_{i-1},\lambda_i)}, \tag{6.30}$$
where $p\in\mathbb N$, $0=\lambda_0<\lambda_1<\cdots<\lambda_p$, $b_i\in L^2(\Omega,\mathcal F_{\lambda_i},\mathbb P;\mathbb R^d)$ and $\sigma_i\in L^2(\Omega,\mathcal F_{\lambda_i},\mathbb P;L(\mathbb R^m;\mathbb R^d))$, $i=0,1,\dots,p-1$.

Lemma 6.10 Let $\varphi\in C^2_b(\mathbb R^d)$, $x\in\mathbb R^d$ and let $b$ and $\sigma$ be given by (6.30). Then identity (6.27) holds.

Proof. We proceed as in the proof of Lemma 6.3, taking $\varphi\in C^3_b(\mathbb R^d)$ and proving (6.27) in $[0,t]$ with $t\le\lambda_1$. We have
$$b(t) = b_0, \quad \sigma(t) = \sigma_0, \quad t\in[0,\lambda_1],$$
and
$$X(t) = x + b_0t + \sigma_0B(t), \quad t\in[0,\lambda_1].$$
Let $\eta = \{t_0=0<t_1<\cdots<t_N=t\}$. Then we obviously have
$$\varphi(X(t))-\varphi(x) = \sum_{k=1}^N[\varphi(X(t_k))-\varphi(X(t_{k-1}))].$$
On the other hand, by Taylor's formula we can write$^{(3)}$
$$\varphi(X(t))-\varphi(x) = \sum_{k=1}^N\langle D\varphi(X(t_{k-1})),X(t_k)-X(t_{k-1})\rangle + \tfrac 12\sum_{k=1}^N\langle D^2\varphi(X(t_{k-1}))(X(t_k)-X(t_{k-1})),X(t_k)-X(t_{k-1})\rangle + R_\eta =: I_1+I_2+I_3. \tag{6.31}$$
Concerning $I_1$ we have
$$I_1 = \sum_{k=1}^N\langle D\varphi(X(t_{k-1})),\ b_0(t_k-t_{k-1}) + \sigma_0(B(t_k)-B(t_{k-1}))\rangle.$$
So,
$$\lim_{|\eta|\to0}I_1 = \int_0^t\langle D\varphi(X(s)),b(s)\rangle\,ds + \int_0^t\langle D\varphi(X(s)),\sigma(s)\,dB(s)\rangle \quad \text{in } L^2(\Omega,\mathcal F,\mathbb P). \tag{6.32}$$
Concerning $I_2$ we write
$$2I_2 = \sum_{k=1}^N\langle D^2\varphi(X(t_{k-1}))b_0,b_0\rangle(t_k-t_{k-1})^2 + 2\sum_{k=1}^N\langle D^2\varphi(X(t_{k-1}))b_0,\sigma_0(B(t_k)-B(t_{k-1}))\rangle(t_k-t_{k-1})$$
$$\quad + \sum_{k=1}^N\langle D^2\varphi(X(t_{k-1}))\sigma_0(B(t_k)-B(t_{k-1})),\sigma_0(B(t_k)-B(t_{k-1}))\rangle =: I_{2,1}+I_{2,2}+I_{2,3}. \tag{6.33}$$
It is easy to check that
$$\lim_{|\eta|\to0}I_{2,1} = \lim_{|\eta|\to0}I_{2,2} = 0 \quad \text{in } L^1(\Omega,\mathcal F,\mathbb P). \tag{6.34}$$
Moreover, we have
$$I_{2,3} = \sum_{k=1}^N\sum_{i,j=1}^d\sum_{\alpha,\beta=1}^m D^2_{i,j}\varphi(X(t_{k-1}))\,(\sigma_0)_{i,\alpha}(B_\alpha(t_k)-B_\alpha(t_{k-1}))\,(\sigma_0)_{j,\beta}(B_\beta(t_k)-B_\beta(t_{k-1})).$$
Therefore, taking into account Lemma 6.9, we have
$$\lim_{|\eta|\to0}I_{2,3} = \int_0^t\sum_{i,j=1}^d\sum_{\alpha=1}^m D^2_{i,j}\varphi(X(s))\,\sigma_{i,\alpha}(s)\sigma_{j,\alpha}(s)\,ds = \int_0^t\operatorname{Tr}\big[D^2\varphi(X(s))(\sigma\sigma^*)(s)\big]\,ds.$$
Now, proceeding as before, we see that
$$\lim_{|\eta|\to0}\mathbb E|R_\eta| = 0. \tag{6.35}$$
The proof is complete when $t\le\lambda_1$. The general case can be treated in the same way, taking into account that $b_{k-1}$ and $\sigma_{k-1}$ are independent of $B(t_k)-B(t_{k-1})$.

$^{(3)}$ We use the notations $D\varphi(x)h = \langle D\varphi(x),h\rangle$ and $D^2\varphi(x)(h,k) = \langle D^2\varphi(x)h,k\rangle$ for all $x,h,k\in\mathbb R^d$.
Finally, proceeding as we did for the proof of Theorem 6.4, we obtain the following result.

Theorem 6.11 Let $b\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P;\mathbb R^d)$, $\sigma\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P;L(\mathbb R^m;\mathbb R^d))$, $x\in\mathbb R^d$ and $\varphi\in C^2_b(\mathbb R^d)$. Then identity (6.27) holds for any $t\in[0,T]$.

Exercise 6.12 Let $d=1$, $m\in\mathbb N$, $b,\sigma_k\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$, $k=1,\dots,m$. Set
$$X(t) = \int_0^t b(s)\,ds + \sum_{k=1}^m\int_0^t\sigma_k(s)\,dB_k(s).$$
Let $\varphi\in C^2_b(\mathbb R)$. Prove that
$$d\varphi(X(t)) = \varphi'(X(t))\,dX(t) + \tfrac 12\,\varphi''(X(t))|\sigma(t)|^2\,dt, \tag{6.36}$$
where $\sigma(t) = (\sigma_1(t),\dots,\sigma_m(t))$.

Exercise 6.13 Let $d\in\mathbb N$, $m=1$, $b_i,\sigma_i\in L^2([0,T]\times\Omega,\mathscr P,dt\otimes\mathbb P)$, $i=1,\dots,d$. Set
$$dX(t) = b(t)\,dt + \sigma\,dB(t),$$
where $b = (b_1,\dots,b_d)$ and $\sigma = (\sigma_1,\dots,\sigma_d)$. Let moreover $\varphi\in C^2_b(\mathbb R^d)$. Prove that
$$d\varphi(X(t)) = \langle D\varphi(X(t)),dX(t)\rangle + \tfrac 12\,\langle D^2\varphi(X(t))\sigma(t),\sigma(t)\rangle\,dt. \tag{6.37}$$
Chapter 7

Stochastic evolution equations

We are given two positive integers $r,d$ and an $r$-dimensional standard Brownian motion $B(t)$, $t\ge0$, in a probability space $(\Omega,\mathcal F,\mathbb P)$. We denote by $(\mathcal F_t)_{t\ge0}$ the natural filtration of $B$ (augmented with all $\mathbb P$-null sets of $\Omega$).

Let us consider the following integral equation:
$$X(t) = \eta + \int_s^t b(u,X(u))\,du + \int_s^t\sigma(u,X(u))\,dB(u), \quad t\in[s,T], \tag{7.1}$$
where $s\in[0,T)$, $\eta\in L^2(\Omega,\mathcal F_s,\mathbb P;\mathbb R^d)$, $b:[0,T]\times\mathbb R^d\to\mathbb R^d$ and $\sigma:[0,T]\times\mathbb R^d\to L(\mathbb R^r,\mathbb R^d)$. $b$ is called the drift and $\sigma$ the diffusion coefficient of the equation.

We shall write (7.1) in differential form as
$$\begin{cases}dX(t) = b(t,X(t))\,dt + \sigma(t,X(t))\,dB(t),\\ X(s) = \eta.\end{cases} \tag{7.2}$$
By a solution of equation (7.1) on the interval $[s,T]$ we mean a function $X\in C_B([s,T];L^2(\Omega;\mathbb R^d))$ that fulfills equation (7.1).

In order to solve (7.1) we shall use a fixed point argument, based on the identity
$$\mathbb E\,\Big|\int_a^b G(t)\,dB(t)\Big|^2 = \int_a^b\mathbb E\big[\operatorname{Tr}(G(t)G^*(t))\big]\,dt$$
for all $G\in C_B([0,T];L^2(\Omega,L(\mathbb R^r,\mathbb R^d)))$ and $0\le a<b\le T$. This suggests to endow $L(\mathbb R^r,\mathbb R^d)$ with the Hilbert-Schmidt norm, setting
$$\|S\|_{HS} := [\operatorname{Tr}(SS^*)]^{1/2}, \quad S\in L(\mathbb R^r,\mathbb R^d),$$
and to write
$$\mathbb E\,\Big|\int_a^b G(t)\,dB(t)\Big|^2 = \int_a^b\mathbb E\big[\|G(t)\|^2_{HS}\big]\,dt. \tag{7.3}$$
7.1 Existence and uniqueness

The standard assumptions for the well-posedness of problem (7.1) are the following.

Hypothesis 7.1

(i) $b$ and $\sigma$ are continuous on $[0,T]\times\mathbb R^d$.

(ii) There exists $M>0$ such that for all $t\in[0,T]$, $x,y\in\mathbb R^d$ we have
$$|b(t,x)-b(t,y)|^2 + \|\sigma(t,x)-\sigma(t,y)\|^2_{HS} \le M^2|x-y|^2 \tag{7.4}$$
and
$$|b(t,x)|^2 + \|\sigma(t,x)\|^2_{HS} \le M^2(1+|x|^2). \tag{7.5}$$

Notice that, after possibly changing the constant $M$, (7.5) is a consequence of (7.4) and of (i).

Theorem 7.1 Assume that Hypothesis 7.1 holds and let $s\in[0,T)$, $\eta\in L^2(\Omega,\mathcal F_s,\mathbb P;\mathbb R^d)$. Then problem (7.1) has a unique solution
$$X\in C_B([s,T];L^2(\Omega;\mathbb R^d)).$$

Proof. We are going to solve (7.1) by a fixed point argument in the space $C_B := C_B([s,T];L^2(\Omega;\mathbb R^d))$. Define
$$\gamma_1(X)(t) := \int_s^t b(u,X(u))\,du, \quad X\in C_B,\ t\in[s,T],$$
$$\gamma_2(X)(t) := \int_s^t\sigma(u,X(u))\,dB(u), \quad X\in C_B,\ t\in[s,T],$$
and set
$$\gamma(X) := \eta + \gamma_1(X) + \gamma_2(X), \quad X\in C_B.$$
Then equation (7.1) is equivalent to
$$X = \eta + \gamma_1(X) + \gamma_2(X) = \gamma(X). \tag{7.6}$$

Step 1. $\gamma_1$ and $\gamma_2$ map $C_B$ into itself.

Concerning $\gamma_1$ we have, using the Hölder inequality and taking into account (7.5),
$$\mathbb E|\gamma_1(X)(t)|^2 \le (t-s)\int_s^t\mathbb E|b(u,X(u))|^2\,du \le M^2(t-s)\int_s^t\big(1+\mathbb E|X(u)|^2\big)\,du \le M^2(t-s)^2(1+\|X\|^2_{C_B}).$$
Since $\gamma_1(X)(t)$ is $\mathcal F_t$-measurable for all $t\in[s,T]$, $\gamma_1$ maps $C_B$ into itself and
$$\|\gamma_1(X)\|_{C_B} \le M(T-s)(1+\|X\|_{C_B}).$$
Concerning $\gamma_2$ we have, taking into account (7.3) and (7.5),
$$\mathbb E|\gamma_2(X)(t)|^2 = \int_s^t\mathbb E\big(\|\sigma(u,X(u))\|^2_{HS}\big)\,du \le M^2\int_s^t\big(1+\mathbb E|X(u)|^2\big)\,du \le M^2(t-s)(1+\|X\|^2_{C_B}).$$
So we see that $\gamma_2$ maps $C_B$ into itself as well.

Step 2. $\gamma$ is Lipschitz continuous.

Let $X,Y\in C_B$. Using again the Hölder inequality and taking into account (7.4), we have
$$\mathbb E|\gamma_1(X)(t)-\gamma_1(Y)(t)|^2 \le (t-s)\int_s^t\mathbb E|b(u,X(u))-b(u,Y(u))|^2\,du \le (t-s)M^2\int_s^t\mathbb E|X(u)-Y(u)|^2\,du \le (t-s)^2M^2\|X-Y\|^2_{C_B}.$$
Consequently
$$\|\gamma_1(X)-\gamma_1(Y)\|_{C_B} \le M(T-s)\|X-Y\|_{C_B}, \quad X,Y\in C_B. \tag{7.7}$$
Furthermore
$$\mathbb E|\gamma_2(X)(t)-\gamma_2(Y)(t)|^2 = \int_s^t\mathbb E\big(\|\sigma(u,X(u))-\sigma(u,Y(u))\|^2_{HS}\big)\,du \le M^2(t-s)\|X-Y\|^2_{C_B},$$
and so
$$\|\gamma_2(X)-\gamma_2(Y)\|_{C_B} \le M\sqrt{T-s}\,\|X-Y\|_{C_B}, \quad X,Y\in C_B. \tag{7.8}$$
By (7.7) and (7.8) it follows that $\gamma$ maps $C_B$ into itself and
$$\|\gamma(X)-\gamma(Y)\|_{C_B} \le M\big(T-s+\sqrt{T-s}\big)\|X-Y\|_{C_B}$$
for all $X,Y\in C_B$. Now, if $T-s$ is such that
$$M\big(T-s+\sqrt{T-s}\big) \le 1/2, \tag{7.9}$$
$\gamma$ is a $1/2$-contraction on $C_B$, and so it possesses a unique fixed point. If (7.9) does not hold we choose $T_1\in(s,T]$ such that
$$M\big(T_1-s+\sqrt{T_1-s}\big) \le 1/2.$$
Then by the previous argument there is a unique solution to (7.1) on $[s,T_1]$. Now we repeat the argument with $T_1$ replacing $s$ and in a finite number of steps we arrive at the conclusion.
Remark 7.2 By Theorem 5.13 it follows that there exists a version of the solution $X(\cdot,s,\eta)$ which belongs to $L^2(\Omega,C([s,T]))$, and so it is a continuous process.

In the following we shall denote by $X(\cdot,s,\eta)$ the solution of problem (7.1). We shall use Greek letters for stochastic initial data and Latin letters for deterministic ones.

Let us prove the cocycle law.

Proposition 7.3 Assume that Hypothesis 7.1 holds and let $\eta\in L^2(\Omega,\mathcal F_s,\mathbb P;\mathbb R^d)$. Then
$$X(t,s,\eta) = X(t,r,X(r,s,\eta)), \quad 0\le s\le r\le t\le T. \tag{7.10}$$

Proof. Define $Z(t) = X(t,s,\eta)$, $t\in[s,T]$. Then $Z$ solves the problem
$$\begin{cases}dZ(t) = b(t,Z(t))\,dt + \sigma(t,Z(t))\,dB(t),\\ Z(r) = X(r,s,\eta).\end{cases}$$
By the uniqueness part of Theorem 7.1 it follows that
$$Z(t) = X(t,s,\eta) = X(t,r,X(r,s,\eta)),$$
as required.

Remark 7.4 By the contraction principle it follows that the solution $X(t,s,\eta)$ of problem (7.1) can be obtained as the limit of successive approximations. More precisely, define $X_0(t,s,\eta) = \eta$ and, for any $N\in\mathbb N$,
$$X_{N+1}(t,s,\eta) = \eta + \int_s^t b(u,X_N(u,s,\eta))\,du + \int_s^t\sigma(u,X_N(u,s,\eta))\,dB(u). \tag{7.11}$$
Then we have
$$\lim_{N\to\infty}X_N(\cdot,s,\eta) = X(\cdot,s,\eta) \quad \text{in } C_B([s,T];L^2(\Omega;\mathbb R^d)). \tag{7.12}$$
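The following is a discretized sketch of the successive approximations (7.11) for the scalar equation $dX = -X\,dt + \sin(X)\,dB$, $X(0) = 1$; the coefficients are arbitrary choices made here (they satisfy Hypothesis 7.1), all iterates are computed on one fixed simulated Brownian path, and the integrals are replaced by left-point sums, so this only illustrates the contraction, it proves nothing.

import numpy as np

rng = np.random.default_rng(6)
T, n, x0 = 1.0, 2000, 1.0
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)

b = lambda u, x: -x
sigma = lambda u, x: np.sin(x)

X = np.full(n + 1, x0)                        # X_0(t) = eta
for N in range(8):
    drift = np.concatenate(([0.0], np.cumsum(b(None, X[:-1]) * dt)))
    noise = np.concatenate(([0.0], np.cumsum(sigma(None, X[:-1]) * dB)))
    X_new = x0 + drift + noise                # X_{N+1} from (7.11), discretized
    print(N, np.max(np.abs(X_new - X)))       # sup-distance between successive iterates
    X = X_new

The printed distances shrink rapidly, consistently with the fixed point argument behind (7.12).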
The next result, which as we shall see plays an important role in proving that $X(\cdot,s,x)$ is a Markov process, gives some information about the relationship between $X(t,s,\eta)$, $\eta\in L^2(\Omega,\mathcal F_s,\mathbb P;\mathbb R^d)$, and $X(t,s,x)$, $x\in\mathbb R^d$.

Proposition 7.5 Assume that Hypothesis 7.1 holds and that
$$\eta = \sum_{k=1}^n x_k\,1\!\!1_{A_k}, \tag{7.13}$$
where $x_1,\dots,x_n\in\mathbb R^d$ and $A_1,\dots,A_n$ are mutually disjoint sets in $\mathcal F_s$ such that
$$\Omega = \bigcup_{k=1}^n A_k.$$
Then we have
$$X(t,s,\eta) = \sum_{k=1}^n X(t,s,x_k)\,1\!\!1_{A_k}. \tag{7.14}$$

Proof. Let $X_N$ be defined by (7.11). We claim that
$$X_N(t,s,\eta) = \sum_{k=1}^n X_N(t,s,x_k)\,1\!\!1_{A_k}, \quad \forall\, N\in\mathbb N. \tag{7.15}$$
Once (7.15) is proved, the conclusion follows letting $N$ tend to infinity. Let us proceed by recurrence. Equality (7.15) is clear for $N=0$. Assume that it holds for a given $N\in\mathbb N$, so that
$$X_N(t,s,\eta) = X_N(t,s,x_k) \quad \text{in } A_k,\ k=1,\dots,n.$$
Then we have
$$b(u,X_N(u,s,\eta)) = b(u,X_N(u,s,x_k)) \quad \text{in } A_k,\ k=1,\dots,n,$$
$$\sigma(u,X_N(u,s,\eta)) = \sigma(u,X_N(u,s,x_k)) \quad \text{in } A_k,\ k=1,\dots,n,$$
so that
$$b(u,X_N(u,s,\eta)) = \sum_{k=1}^n 1\!\!1_{A_k}\,b(u,X_N(u,s,x_k)), \qquad \sigma(u,X_N(u,s,\eta)) = \sum_{k=1}^n 1\!\!1_{A_k}\,\sigma(u,X_N(u,s,x_k)).$$
Consequently
$$X_{N+1}(t,s,\eta) = \sum_{k=1}^n 1\!\!1_{A_k}\Big[X_0(t,s,x_k) + \int_s^t b(u,X_N(u,s,x_k))\,du + \int_s^t\sigma(u,X_N(u,s,x_k))\,dB(u)\Big] = \sum_{k=1}^n 1\!\!1_{A_k}\,X_{N+1}(t,s,x_k),$$
and (7.15) holds for $N+1$. So, the conclusion follows.
7.1.1 Solution of the stochastic differential equation in the space $C_B([s,T];L^{2m}(\Omega;\mathbb R^d))$

Theorem 7.6 Assume that Hypothesis 7.1 holds and let $m\in\mathbb N$, $s\in[0,T)$, $\eta\in L^{2m}(\Omega,\mathcal F_s,\mathbb P;\mathbb R^d)$. Then problem (7.1) has a unique solution
$$X(\cdot,s,\eta)\in C_B([s,T];L^{2m}(\Omega;\mathbb R^d)).$$
In particular,
$$X(\cdot,s,x)\in C_B([s,T];L^{2m}(\Omega;\mathbb R^d)), \quad \forall\, x\in\mathbb R^d.$$

Proof. We proceed as in the proof of Theorem 7.1 by a fixed point argument in the space
$$C^m_B := C_B([s,T];L^{2m}(\Omega;\mathbb R^d)),$$
using inequality (6.24) proved in Proposition 6.8.
7.1.2 Examples

Example 7.7 Consider the stochastic differential equation
$$dX=AX\,dt+C\,dB(t),\qquad X(0)=x, \qquad (7.16)$$
where $A\in L(\mathbb R^d)$, $C\in L(\mathbb R^r;\mathbb R^d)$ and $x\in\mathbb R^d$. Clearly Theorem 7.1 applies, so that (7.16) has a unique solution $X(t)$, which fulfills the integral equation
$$X(t)=x+A\int_0^t X(s)\,ds+CB(t). \qquad (7.17)$$
Setting
$$Y(t)=\int_0^t X(s)\,ds,\qquad t\in[0,T],$$
$Y$ fulfills the equation
$$Y'(t)=AY(t)+x+CB(t),\qquad Y(0)=0,\ t\in[0,T],$$
which can be easily solved by the method of variation of constants. We obtain
$$Y(t)=\int_0^t e^{(t-s)A}(x+CB(s))\,ds,\qquad t\in[0,T].$$
Substituting $Y(t)$ in (7.17) yields
$$X(t)=A\int_0^t e^{(t-s)A}(x+CB(s))\,ds+x+CB(t).$$
Taking into account that, thanks to Proposition 3.12,
$$\int_0^t e^{(t-s)A}C\,dB(s)=CB(t)+A\int_0^t e^{(t-s)A}CB(s)\,ds,$$
we find
$$X(t)=e^{tA}x+\int_0^t e^{(t-s)A}C\,dB(s). \qquad (7.18)$$
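A quick numerical sanity check of formula (7.18): the sketch below simulates (7.16) with the Euler–Maruyama scheme and compares the result at time $T$ with the variation-of-constants expression evaluated on the same Brownian increments. The matrices $A$, $C$, the initial datum and the grid are illustrative assumptions, and SciPy is assumed available for the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative check of (7.18) for dX = A X dt + C dB(t), X(0) = x  (d = 2, r = 1).
rng = np.random.default_rng(1)
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
C = np.array([[0.3], [0.7]])
x = np.array([1.0, 0.0])

T, n = 1.0, 2000
dt = T / n
t = np.linspace(0.0, T, n + 1)
dB = rng.normal(0.0, np.sqrt(dt), (n, 1))

# Euler–Maruyama approximation of (7.16)
X = x.copy()
for k in range(n):
    X = X + A @ X * dt + C @ dB[k]

# Variation-of-constants formula (7.18): stochastic integral replaced by its Itô sum
stoch = sum(expm((T - t[k]) * A) @ (C @ dB[k]) for k in range(n))
X_formula = expm(T * A) @ x + stoch

print("Euler-Maruyama :", X)
print("formula (7.18) :", X_formula)   # the two should be close for small dt
```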
Example 7.8 Let $r=d=1$ and consider the stochastic differential equation
$$dX=aX\,dt+cX\,dB(t),\qquad X(0)=x, \qquad (7.19)$$
where $a,c,x\in\mathbb R$. Again Theorem 7.1 applies. We want to show that the solution of (7.19) is given by
$$X(t)=e^{\left(a-\frac12 c^2\right)t}\,e^{cB(t)}\,x,\qquad t\ge0. \qquad (7.20)$$
For this we check that $X(t)$ given by (7.20) solves (7.19). Write $X(t)=e^{F(t)}x$, where $F(t)=\left(a-\frac12 c^2\right)t+cB(t)$. Then we have
$$dF(t)=\left(a-\frac12 c^2\right)dt+c\,dB(t)$$
and, by Itô's formula,
$$dX(t)=e^{F(t)}x\,dF(t)+\frac12\,c^2e^{F(t)}x\,dt
=e^{F(t)}x\left[\left(a-\frac12 c^2\right)dt+c\,dB(t)\right]+\frac12\,c^2e^{F(t)}x\,dt
=aX(t)\,dt+cX(t)\,dB(t).$$
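The closed form (7.20) is easy to test numerically. In the sketch below (the values of $a$, $c$, $x$ and the grid size are illustrative assumptions) a Brownian path is simulated on a grid, the Euler–Maruyama approximation of (7.19) is computed along it, and the result is compared with the exact expression (7.20) evaluated on the same path.

```python
import numpy as np

# Geometric Brownian motion: dX = a X dt + c X dB, X(0) = x  (eq. (7.19)).
rng = np.random.default_rng(2)
a, c, x = 0.5, 0.8, 1.0
T, n = 1.0, 100000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))
t = np.linspace(0.0, T, n + 1)

# Euler–Maruyama approximation
X_euler = np.empty(n + 1)
X_euler[0] = x
for k in range(n):
    X_euler[k + 1] = X_euler[k] + a * X_euler[k] * dt + c * X_euler[k] * dB[k]

# Exact solution (7.20) on the same path
X_exact = np.exp((a - 0.5 * c**2) * t + c * B) * x

print("relative error at T:", abs(X_euler[-1] - X_exact[-1]) / abs(X_exact[-1]))
```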
Exercise 7.9 Let $r=1$ and consider the stochastic differential equation
$$dX=AX\,dt+CX\,dB(t),\qquad X(0)=x, \qquad (7.21)$$
where $A,C\in L(\mathbb R^d)$, $x\in\mathbb R^d$ and $AC=CA$. Show that the solution of (7.21) is given by
$$X(t)=e^{t(A-C^2/2)}\,e^{CB(t)}\,x. \qquad (7.22)$$

7.1.3 Differential stochastic equations with random coefficients

In some situations (see Subsections 7.3 and 7.4) one deals with stochastic differential equations having random coefficients,
$$X(t,\omega)=\eta(\omega)+\int_s^t b(u,X(u,\omega),\omega)\,du+\int_s^t\sigma(u,X(u,\omega),\omega)\,dB(u). \qquad (7.23)$$
Here $\eta\in L^2(\Omega,\mathcal F_s;\mathbb R^d)$, $b:[0,T]\times\mathbb R^d\times\Omega\to\mathbb R^d$ and $\sigma:[0,T]\times\mathbb R^d\times\Omega\to L(\mathbb R^r,\mathbb R^d)$ are such that:

Hypothesis 7.2
(i) There exists $M>0$ such that for all $t\in[0,T]$, $x,y\in\mathbb R^d$, $\omega\in\Omega$,
$$|b(t,x,\omega)-b(t,y,\omega)|^2+\|\sigma(t,x,\omega)-\sigma(t,y,\omega)\|_{HS}^2\le M^2|x-y|^2 \qquad (7.24)$$
and
$$|b(t,x,\omega)|^2+\|\sigma(t,x,\omega)\|_{HS}^2\le M^2(1+|x|^2). \qquad (7.25)$$
(ii) For any $Y\in C_B([0,T];L^2(\Omega;\mathbb R^d))$ we have $U\in C_B([0,T];L^2(\Omega;\mathbb R^d))$ and $V\in C_B([0,T];L^2(\Omega;L(\mathbb R^r,\mathbb R^d)))$, where, for all $t\in[0,T]$, $\omega\in\Omega$,
$$U(t,\omega)=b(t,Y(t,\omega),\omega),\qquad V(t,\omega)=\sigma(t,Y(t,\omega),\omega).$$

The following result can be proved as Theorem 7.1.
Theorem 7.10 Assume that Hypothesis 7.2 holds. Let $s\in[0,T)$ and $\eta\in L^2(\Omega,\mathcal F_s;\mathbb R^d)$. Then problem (7.23) has a unique solution
$$X\in C_B([s,T];L^2(\Omega;\mathbb R^d)).$$

Example 7.11 Let $d=1$ and consider the stochastic differential equation
$$dX(t)=X(t)\langle F(t),dB(t)\rangle,\quad t\in[0,T],\qquad X(0)=x, \qquad (7.26)$$
where $F\in C_B([0,T];L^\infty(\Omega;\mathbb R^r))$. Now it is easy to check that Theorem 7.10 applies and so there exists a solution $X$ of (7.26). Let us show that
$$X(t)=e^{-\frac12\int_0^t|F(s)|^2ds+\int_0^t\langle F(s),dB(s)\rangle}\,x,\qquad t\ge0. \qquad (7.27)$$
For this we check that $X(t)$ given by (7.27) solves (7.26). Write $X(t)=e^{H(t)}x$, where
$$H(t)=-\frac12\int_0^t|F(s)|^2\,ds+\int_0^t\langle F(s),dB(s)\rangle.$$
Then we have
$$dH(t)=-\frac12\,|F(t)|^2\,dt+\langle F(t),dB(t)\rangle,\qquad t\ge0.$$
Now by Itô's formula we find
$$dX(t)=e^{H(t)}x\,dH(t)+\frac12\,e^{H(t)}x\,|F(t)|^2\,dt=e^{H(t)}x\,\langle F(t),dB(t)\rangle=X(t)\langle F(t),dB(t)\rangle,\qquad t\ge0.$$
So, (7.27) is proved.
7.2 Continuous dependence on data

7.2.1 Continuous dependence on mean square

We assume here that Hypothesis 7.1 holds. We are going to prove that the solution $X(t,s,\eta)$ of (7.1) is Hölder continuous in $t$ and $s$, and Lipschitz continuous in $\eta$, in mean square. First we show that $\mathbb E|X(t,s,\eta)|^2$ is bounded.

Lemma 7.12 Assume that Hypothesis 7.1 holds. Then for all $s\in[0,T]$ and $\eta\in L^2(\Omega,\mathcal F_s,\mathbb P;\mathbb R^d)$ we have
$$\mathbb E\big[|X(t,s,\eta)|^2\big]\le 3\big[\mathbb E(|\eta|^2)+M^2\big((T-s)^2+(T-s)\big)\big]\,e^{3M^2(T-s+1)(t-s)}. \qquad (7.28)$$

Proof. Writing for short $X(t)=X(t,s,\eta)$, we have
$$\mathbb E(|X(t)|^2)\le 3\,\mathbb E(|\eta|^2)+3\,\mathbb E\left|\int_s^t b(u,X(u))\,du\right|^2+3\int_s^t\mathbb E\big(\|\sigma(u,X(u))\|_{HS}^2\big)\,du.$$
By Hypothesis 7.1(ii) and the Hölder inequality we deduce that
$$\mathbb E(|X(t)|^2)\le 3\,\mathbb E(|\eta|^2)+3M^2(t-s)\int_s^t\big(1+\mathbb E[|X(u)|^2]\big)\,du+3M^2\int_s^t\big(1+\mathbb E[|X(u)|^2]\big)\,du.$$
Consequently
$$\mathbb E(|X(t)|^2)\le 3\,\mathbb E(|\eta|^2)+3M^2\big((T-s)^2+(T-s)\big)+3M^2\big((T-s)+1\big)\int_s^t\mathbb E[|X(u)|^2]\,du.$$
The conclusion follows from the Gronwall lemma. $\Box$

We now study the regularity of $X(t,s,\eta)$ with respect to $t$, $s$, $\eta$. We note that, by Lemma 7.12, there exists a constant $C(T,\mathbb E(|\eta|^2))$ such that
$$\mathbb E\big[|X(t,s,\eta)|^2\big]\le C(T,\mathbb E(|\eta|^2)),\qquad 0\le s<t\le T. \qquad (7.29)$$
We start with the regularity of $X(t,s,\eta)$ with respect to $t$.

Proposition 7.13 Assume that Hypothesis 7.1 holds. Let $0\le s\le t_1<t\le T$ and $\eta\in L^2(\Omega,\mathcal F_s;\mathbb R^d)$. Then there exists a constant $C_1(T,\mathbb E(|\eta|^2))$ such that
$$\mathbb E\big[|X(t,s,\eta)-X(t_1,s,\eta)|^2\big]\le C_1(T,\mathbb E(|\eta|^2))\,(t-t_1). \qquad (7.30)$$

Proof. We have
$$\mathbb E\big[|X(t,s,\eta)-X(t_1,s,\eta)|^2\big]\le 2M^2(t-t_1)\int_{t_1}^t\big(1+\mathbb E[|X(u,s,\eta)|^2]\big)\,du+2M^2\int_{t_1}^t\big(1+\mathbb E[|X(u,s,\eta)|^2]\big)\,du.$$
Consequently,
$$\mathbb E\big[|X(t,s,\eta)-X(t_1,s,\eta)|^2\big]\le 2M^2\big((t-t_1)^2+t-t_1\big)\big(1+C(T,\mathbb E(|\eta|^2))\big),$$
and the conclusion follows. $\Box$

Let us study the regularity of $X(t,s,\eta)$ with respect to $\eta$.

Proposition 7.14 Assume that Hypothesis 7.1 holds, let $0\le s<t\le T$ and $\eta,\zeta\in L^2(\Omega,\mathcal F_s;\mathbb R^d)$. Then
$$\mathbb E\big[|X(t,s,\eta)-X(t,s,\zeta)|^2\big]\le 3\,e^{3M^2(T-s+1)(t-s)}\,\mathbb E(|\eta-\zeta|^2). \qquad (7.31)$$

Proof. We have
$$|X(t,s,\eta)-X(t,s,\zeta)|^2\le 3|\eta-\zeta|^2+3\left|\int_s^t\big(b(u,X(u,s,\eta))-b(u,X(u,s,\zeta))\big)\,du\right|^2+3\left|\int_s^t\big(\sigma(u,X(u,s,\eta))-\sigma(u,X(u,s,\zeta))\big)\,dB(u)\right|^2.$$
Taking expectation and using (7.4) we obtain
$$\mathbb E\big(|X(t,s,\eta)-X(t,s,\zeta)|^2\big)\le 3\,\mathbb E(|\eta-\zeta|^2)+3M^2(T-s+1)\int_s^t\mathbb E\big[|X(u,s,\eta)-X(u,s,\zeta)|^2\big]\,du,$$
and the conclusion follows from the Gronwall lemma. $\Box$

We finally study the regularity of $X(t,s,\eta)$ with respect to $s$.

Proposition 7.15 Assume that Hypothesis 7.1 holds, let $0<s<s_1<t\le T$ and $\eta\in L^2(\Omega,\mathcal F_s,\mathbb P;\mathbb R^d)$. Then there exists a constant $C_{T,\eta}>0$ such that
$$\mathbb E\big[|X(t,s,\eta)-X(t,s_1,\eta)|^2\big]\le C_{T,\eta}\,|s-s_1|. \qquad (7.32)$$

Proof. Taking into account the co-cycle law (7.10), we can write
$$X(t,s,\eta)-X(t,s_1,\eta)=X(t,s_1,X(s_1,s,\eta))-X(t,s_1,\eta).$$
By (7.31) there exists $C_T>0$ such that
$$\mathbb E\big(|X(t,s,\eta)-X(t,s_1,\eta)|^2\big)\le C_T^2\,\mathbb E\big(|X(s_1,s,\eta)-\eta|^2\big)=C_T^2\,\mathbb E\big(|X(s_1,s,\eta)-X(s,s,\eta)|^2\big).$$
The conclusion now follows from (7.30). $\Box$
7.3 Almost sure continuity and Hölder continuity of trajectories

In this section we show that $X(\cdot,s,x)$ belongs to a suitable Sobolev space, whose definition is recalled in Appendix E below. Then the Sobolev embedding theorem (also stated in Appendix E) will imply that $X(\cdot,s,x)$ is Hölder continuous almost surely.

First we need a lemma, which can be proved as Proposition 7.13, using (6.24).

Lemma 7.16 Assume that Hypothesis 7.1 holds. Let $0\le s\le t_1<t\le T$, $x\in\mathbb R^d$ and $m\in\mathbb N$. Then there exists a constant $C_1(T,|x|^2)$ such that
$$\mathbb E\big[|X(t,s,x)-X(t_1,s,x)|^{2m}\big]\le C_1(T,|x|^2)\,(t-t_1)^m. \qquad (7.33)$$

Now from Proposition E.3 and the Sobolev embedding theorem E.1 it follows that

Proposition 7.17 Assume that Hypothesis 7.1 holds. Let $x\in\mathbb R^d$, $0\le s\le t\le T$, $m\in\mathbb N$ and $\epsilon\in(0,1/2)$. Then we have
$$\mathbb E\big[\|X(\cdot,s,x)\|^{2m}_{\epsilon,2m}\big]<+\infty. \qquad (7.34)$$
Moreover, $X(\cdot,s,x)$ belongs to $C^{\epsilon-1/(2m)}([s,T])$ almost surely.

Finally, we consider the almost sure regularity of $X(t,s,\cdot)$. First, arguing as in the proof of Proposition 7.14, we have

Lemma 7.18 Assume that Hypothesis 7.1 holds, let $0\le s<t\le T$ and $x,y\in\mathbb R^d$. Then there is a constant $C(T)>0$ such that
$$\mathbb E\big[|X(t,s,x)-X(t,s,y)|^{2m}\big]\le C(T)\,|x-y|^{2m}. \qquad (7.35)$$

Now from Proposition E.3 it follows that

Proposition 7.19 Assume that Hypothesis 7.1 holds, let $0\le s<t\le T$ and $x,y\in[0,1]^d$. Then for any $m>1$ and $\epsilon\in(0,1)$ we have
$$\mathbb E\big[\|X(t,s,\cdot)\|^{2m}_{\epsilon,2m}\big]<+\infty. \qquad (7.36)$$
Moreover, $X(t,s,\cdot)$ belongs to $C^{\epsilon-d/(2m)}([0,1]^d)$ almost surely.
7.4 Differentiability of X(t, s, x) with respect to x

In this section we assume, besides Hypothesis 7.1, that

Hypothesis 7.3
(i) $D_xb$, $D^2_xb$, $D_x\sigma$ and $D^2_x\sigma$ are continuous on $[0,T]\times\mathbb R^d$.
(ii) We have (1)
$$\sup_{t\in[0,T]}\big([b(t,\cdot)]_2+[\sigma(t,\cdot)]_2\big)<\infty. \qquad (7.37)$$

(1) Recall the notations given at the beginning of Chapter 6.

We set
$$C_B=C_B([s,T]):=C_B([s,T];L^2(\Omega;\mathbb R^d)).$$

7.4.1 Existence of X_x(t, s, x)

Theorem 7.20 Assume that Hypotheses 7.1 and 7.3 hold. Then for any $s\in[0,T]$ the mapping
$$\mathbb R^d\to C_B,\qquad x\mapsto X(\cdot,s,x),$$
is continuously Gâteaux differentiable and its Gâteaux derivative is given by
$$X_x(t,s,x)h=\eta^h(t,s,x),\qquad x,h\in\mathbb R^d, \qquad (7.38)$$
where $\eta^h(t,s,x)$ is the solution to the stochastic differential equation with random coefficients
$$\begin{cases}
d\eta^h(t,s,x)=b_x(t,X(t,s,x))\,\eta^h(t,s,x)\,dt+\sigma_x(t,X(t,s,x))\big(\eta^h(t,s,x),dB(t)\big),\\
\eta^h(s,s,x)=h.
\end{cases} \qquad (7.39)$$

Proof. Note that the coefficients of equation (7.39) fulfill Hypothesis 7.2, so it possesses a unique solution by Theorem 7.10.

To prove the theorem we use Theorem D.6 from Appendix D (with $\Lambda=\mathbb R^d$ and $E=C_B$). We set $C_B=C_B([s,T_1])$ and define a mapping
$$F:\mathbb R^d\times C_B\to C_B,$$
setting
$$[F(x,X)](t):=x+\int_s^t b(r,X(r))\,dr+\int_s^t\sigma(r,X(r))\,dB(r),\qquad t\in[s,T_1], \qquad (7.40)$$
where $T_1>s$ is chosen such that
$$\|F(x,X_1)-F(x,X_2)\|_{C_B}\le\tfrac12\,\|X_1-X_2\|_{C_B}\qquad\text{for all }X_1,X_2\in C_B,\ x\in\mathbb R^d. \qquad (7.41)$$
Then $F$ fulfills Hypothesis D.1, so that it possesses a unique fixed point $X(x)\in C_B$, that is
$$F(x,X(x))=X(x),\qquad x\in\mathbb R^d,$$
which depends continuously on $x$. $X(x)$ coincides with the solution $X(\cdot,s,x)$ of (7.2).

It is not difficult to check that $F$ is continuously Gâteaux differentiable (the straightforward proof is left to the reader) and that for each $x\in\mathbb R^d$, $X,Y\in C_B$ we have
$$F_x(x,X)=I,\qquad [F_X(x,X)Y](t)=\int_s^t b_x(r,X(r))Y(r)\,dr+\int_s^t\sigma_x(r,X(r))Y(r)\,dB(r),\qquad t\in[s,T_1].$$
So, the conclusion follows from Theorem D.6. $\Box$
7.4.2 Existence of X_xx(t, s, x)

We now prove the existence of the second derivative of $X(t,s,x)$ with respect to $x$.

Theorem 7.21 Assume that Hypotheses 7.1 and 7.3 hold. Then the mapping
$$\mathbb R^d\to C_B,\qquad x\mapsto X(\cdot,s,x),$$
is twice differentiable with respect to $x$ in any couple of directions $(h,k)$ in $\mathbb R^d$. Moreover, setting
$$X_{xx}(t,s,x)(h,k)=\zeta^{h,k}(t,s,x),\qquad x,h,k\in\mathbb R^d, \qquad (7.42)$$
$\zeta^{h,k}(t,s,x)$ is the solution to the stochastic differential equation (with random coefficients)
$$\begin{cases}
d\zeta^{h,k}(t,s,x)=b_x(t,X(t,s,x))\,\zeta^{h,k}(t,s,x)\,dt+b_{xx}(t,X(t,s,x))\big(\eta^h(t,s,x),\eta^k(t,s,x)\big)\,dt\\
\qquad\qquad\quad\ +\sigma_x(t,X(t,s,x))\big(\zeta^{h,k}(t,s,x),dB(t)\big)+\sigma_{xx}(t,X(t,s,x))\big(\eta^h(t,s,x),\eta^k(t,s,x),dB(t)\big),\\
\zeta^{h,k}(s,s,x)=0.
\end{cases} \qquad (7.43)$$

We shall prove the theorem when $d=r=1$ for simplicity. We first prove a lemma.

Lemma 7.22 Let $\eta(\cdot,s,x)\in C_B([s,T];L^2(\Omega))$ be the solution of the equation
$$\eta(t,s,x)=1+\int_s^t b_x(r,X(r,s,x))\,\eta(r,s,x)\,dr+\int_s^t\sigma_x(r,X(r,s,x))\,\eta(r,s,x)\,dB(r). \qquad (7.44)$$
Then $\eta(\cdot,s,x)\in C_B([s,T];L^4(\Omega))$ and there exists $C>0$ such that
$$\mathbb E\,|\eta(t,s,x)|^4\le C,\qquad\forall\,s\in[0,T),\ x\in\mathbb R. \qquad (7.45)$$

Proof. We have
$$|\eta(t,s,x)|^4\le 27+27\left|\int_s^t b_x(r,X(r,s,x))\,\eta(r,s,x)\,dr\right|^4+27\left|\int_s^t\sigma_x(r,X(r,s,x))\,\eta(r,s,x)\,dB(r)\right|^4.$$
By using (7.37) and the Hölder inequality we see that there exists a constant $C_1$ such that
$$|\eta(t,s,x)|^4\le 27+C_1\int_s^t|\eta(r,s,x)|^4\,dr+C_1\left|\int_s^t\sigma_x(r,X(r,s,x))\,\eta(r,s,x)\,dB(r)\right|^4.$$
Now, taking expectation on both sides of this inequality and using Corollary 6.8, we find that
$$\mathbb E\,|\eta(t,s,x)|^4\le C_2\Big(1+\int_s^t\mathbb E\,|\eta(r,s,x)|^4\,dr\Big),\qquad 0\le s\le t\le T,\ x\in\mathbb R,$$
where $C_2$ is another constant. The conclusion follows from the Gronwall lemma. $\Box$

Proof of Theorem 7.21. We choose $T_1$ as in (7.41) and $C_B=C_B([s,T_1])$ as before. By Theorem 7.20 we know that $X(t,s,x)$ is differentiable with respect to $x$ and that its derivative $\eta(\cdot,s,x)=X_x(\cdot,s,x)$ belongs to $C_B$ and fulfills equation (7.44). For any $x\in\mathbb R$ we define a linear bounded operator $T(x)$ from $C_B$ into $C_B$ setting, for all $t\in[s,T_1]$,
$$(T(x)Z)(t)=\int_s^t b_x(r,X(r,s,x))Z(r)\,dr+\int_s^t\sigma_x(r,X(r,s,x))Z(r)\,dB(r). \qquad (7.46)$$
Notice that, since $\eta(\cdot,s,x)\in C_B([s,T];L^4(\Omega))$, $T(x)Z$ is differentiable with respect to $x$ for any $Z\in C_B([s,T];L^4(\Omega))$, and we have
$$(T'(x)Z)(t)=\int_s^t b_{xx}(r,X(r,s,x))\,Z(r)\,\eta(r,s,x)\,dr+\int_s^t\sigma_{xx}(r,X(r,s,x))\,Z(r)\,\eta(r,s,x)\,dB(r). \qquad (7.47)$$
Now we write equation (7.44) as
$$\eta(\cdot,s,x)=1+T(x)\eta(\cdot,s,x). \qquad (7.48)$$
By (7.41) it follows that
$$\|T(x)\|_{L(C_B)}\le 1/2,\qquad\forall\,x\in\mathbb R.$$
Thus the solution of (7.48) is given by
$$\eta(\cdot,s,x)=(1-T(x))^{-1}(1). \qquad (7.49)$$
From this identity it is easy to show the existence of $\eta_x(\cdot,s,x)=:\zeta(\cdot,s,x)$. We have in fact, by a straightforward computation,
$$\eta_x(\cdot,s,x)=(1-T(x))^{-1}\big(T'(x)\eta(\cdot,s,x)\big), \qquad (7.50)$$
where
$$T'(x)\eta(\cdot,s,x)(t)=\int_s^t b_{xx}(r,X(r,s,x))\,\eta^2(r,s,x)\,dr+\int_s^t\sigma_{xx}(r,X(r,s,x))\,\eta^2(r,s,x)\,dB(r). \qquad (7.51)$$
Now by (7.50) it follows that
$$\eta_x(t,s,x)-\big(T(x)\eta_x(\cdot,s,x)\big)(t)=\int_s^t b_{xx}(r,X(r,s,x))\,\eta^2(r,s,x)\,dr+\int_s^t\sigma_{xx}(r,X(r,s,x))\,\eta^2(r,s,x)\,dB(r),$$
and the conclusion follows. $\Box$
7.5 Itô differentiability of X(t, s, x) with respect to s

It is useful to recall first some results in the deterministic case.

7.5.1 The deterministic case

Let us consider the problem
$$X'(t)=b(t,X(t)),\quad t\in[s,T],\qquad X(s)=x, \qquad (7.52)$$
under Hypotheses 7.1 and 7.3 with $\sigma=0$. Denote by $X(t,s,x)$ the solution of (7.52). Let us compute $X_s(t,s,x)$ (it is well known that $X(t,s,x)$ is $C^1$ in all variables). Write
$$X(t,s,x)=X(t,r,X(r,s,x)),\qquad t\ge r\ge s. \qquad (7.53)$$
Differentiating (7.53) with respect to $r$ yields
$$0=X_s(t,r,X(r,s,x))+X_x(t,r,X(r,s,x))\,X_t(r,s,x).$$
Setting $r=s$ we find
$$X_s(t,s,x)=-X_x(t,s,x)\,b(s,x),$$
which is equivalent to
$$X(t,s,x)=x+\int_s^t X_x(t,r,x)\,b(r,x)\,dr,\qquad 0\le s\le t\le T. \qquad (7.54)$$
In the next subsection we are going to generalize this formula to the solution $X(t,s,x)$ of (7.2).

7.5.2 The stochastic case

Here we want to study the differentiability of $X(t,s,x)$ with respect to $s$, in a sense to be made precise. A difficulty arises since the process $s\mapsto X(t,s,x)$ is not adapted, because $X(t,s,x)$ is not $\mathcal F_s$-measurable. It happens, however, that for any $s\in[0,T]$, $X(t,s,x)$ is measurable with respect to the $\sigma$-algebra $\mathcal F_s^+$ generated by all sets of the form
$$\{\omega\in\Omega:\ (B(s_1)-B(s),\dots,B(s_n)-B(s))\in A\},$$
where $n\in\mathbb N$, $0\le s\le s_1<\dots<s_n\le T$ and $A\in\mathcal B(\mathbb R^n)$. The family $(\mathcal F_s^+)_{s\in[0,T]}$ is called the future filtration of $B$.

Proposition 7.23 Assume that Hypothesis 7.1 holds. Let $x\in\mathbb R^d$, $s\in[0,T]$. Then $X(t,s,x)$ is $\mathcal F_s^+$-measurable.

Proof. Let $X^N(t,s,x)$ be defined by (7.11), $N\in\mathbb N$. Then $X^1(t,s,x)$ is $\mathcal F_s^+$-measurable. We have in fact
$$X^1(t,s,x)=x+\int_s^t b(u,x)\,du+\int_s^t\sigma(u,x)\,dB(u).$$
Since
$$\int_s^t\sigma(u,x)\,dB(u)=\lim_{|\eta|\to0}\sum_{k=1}^n\sigma(t_{k-1},x)\big(B(t_k)-B(t_{k-1})\big),$$
where $\eta=\{s=t_0<t_1<\dots<t_n=t\}$, then $X^1(t,s,x)$ is $\mathcal F_s^+$-measurable. We end the proof by recurrence. $\Box$
Now we introduce the backward Itô integral for a process which is adapted to the future filtration. For this we need the following result, which can be proved as Lemma 4.3.

Lemma 7.24 Let $t_1<t_2\le s$ and let $\varphi\in L^2(\Omega,\mathcal F_s^+,\mathbb P)$. Then $B(t_2)-B(t_1)$ and $\varphi$ are independent.

We define $C_{B^+}([0,T];L^2(\Omega;L(\mathbb R^r;\mathbb R^d)))$ by a straightforward generalization of the space $C_B([0,T];L^2(\Omega;L(\mathbb R^r;\mathbb R^d)))$ defined in Chapter 5. The elements of $C_{B^+}([0,T];L^2(\Omega;L(\mathbb R^r;\mathbb R^d)))$ are called stochastic processes adapted to the future filtration $(\mathcal F_t^+)$ and continuous in quadratic mean.

Let $F\in C_{B^+}([0,T];L^2(\Omega;L(\mathbb R^r;\mathbb R^d)))$. For any $\eta\in\Sigma$ with $\eta=\{0=t_0<t_1<\dots<t_n=T\}$ we set
$$I_\eta(F)=\sum_{k=1}^n F(t_k)\big(B(t_k)-B(t_{k-1})\big).$$
The proof of the next theorem is completely similar to that of equation (5.10).

Theorem 7.25 For any $F\in C_{B^+}([0,T];L^2(\Omega;L(\mathbb R^r;\mathbb R^d)))$ there exists the limit
$$\lim_{|\eta|\to0}I_\eta(F)=:\int_0^T F(s)\,dB(s) \qquad (7.55)$$
in $L^2(\Omega)$. Moreover we have
$$\mathbb E\int_0^T F(s)\,dB(s)=0 \qquad (7.56)$$
and
$$\mathbb E\left|\int_0^T F(s)\,dB(s)\right|^2=\int_0^T\mathbb E\big[\|F(s)\|^2_{HS}\big]\,ds. \qquad (7.57)$$
$\int_0^T F(s)\,dB(s)$ is called the backward Itô integral of the function $F$ on $[0,T]$.
Exercise 7.26 Let $t>s$. Prove that, for the backward Itô integral,
$$\int_s^t B(r)\,dB(r)=\frac12\big(B(t)^2-B(s)^2+(t-s)\big).$$
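The sign difference with the forward Itô integral, which gives $\frac12(B(t)^2-B(s)^2-(t-s))$, is easy to see numerically: the forward sum evaluates the integrand at the left endpoint $t_{k-1}$, the backward sum at the right endpoint $t_k$. The following sketch (grid size and seed are arbitrary choices) compares the two Riemann–Itô sums for $\int_0^1 B\,dB$ on one simulated path.

```python
import numpy as np

# Forward vs backward Itô sums for \int_0^1 B dB on one Brownian path.
rng = np.random.default_rng(3)
n = 200000
dt = 1.0 / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))   # B(t_k), k = 0..n

forward  = np.sum(B[:-1] * dB)   # sum_k B(t_{k-1}) (B(t_k) - B(t_{k-1}))
backward = np.sum(B[1:]  * dB)   # sum_k B(t_k)     (B(t_k) - B(t_{k-1}))

print("forward  sum:", forward,  " expected:", 0.5 * (B[-1]**2 - 1.0))
print("backward sum:", backward, " expected:", 0.5 * (B[-1]**2 + 1.0))
```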
7.5.3 Backward Itô's formula

Theorem 7.27 Assume that Hypotheses 7.1 and 7.3 hold. Then we have
$$X(t,s,x)-x=\int_s^t X_x(t,r,x)\,b(r,x)\,dr+\frac12\int_s^t\mathrm{Tr}\,\big[X_{xx}(t,r,x)(\sigma(r,x),\sigma(r,x))\big]\,dr+\int_s^t X_x(t,r,x)\big(\sigma(r,x),dB(r)\big), \qquad (7.58)$$
where
$$\mathrm{Tr}\,\big[X_{xx}(t,r,x)(\sigma(r,x),\sigma(r,x))\big]=\sum_{k=1}^r X_{xx}(t,r,x)\big(\sigma(r,x)e_k,\sigma(r,x)e_k\big)$$
and $(e_k)$ is any orthonormal basis in $\mathbb R^r$.

Proof. We take $d=r=1$ for simplicity. For any $\eta\in\Sigma(s,t)$, $\eta=\{s=s_0<s_1<\dots<s_n=t\}$, we set
$$|\eta|=\max_{k=1,\dots,n}(s_k-s_{k-1}).$$
If $\eta\in\Sigma(s,t)$ we have
$$\begin{aligned}
X(t,s,x)-x&=-\sum_{k=1}^n\big[X(t,s_k,x)-X(t,s_{k-1},x)\big]=-\sum_{k=1}^n\big[X(t,s_k,x)-X(t,s_k,X(s_k,s_{k-1},x))\big]\\
&=-\sum_{k=1}^n X_x(t,s_k,x)\big(x-X(s_k,s_{k-1},x)\big)+\frac12\sum_{k=1}^n X_{xx}(t,s_k,x)\big(x-X(s_k,s_{k-1},x)\big)^2+o(|\eta|).
\end{aligned} \qquad (7.59)$$
Arguing as in the proof of Itô's formula one can show, after some tedious but straightforward computations, that
$$\lim_{|\eta|\to0}o(|\eta|)=0,\qquad \mathbb P\text{-a.s.}$$
On the other hand we have
$$\begin{aligned}
X(s_k,s_{k-1},x)-x&=\int_{s_{k-1}}^{s_k}b(r,X(r,s_{k-1},x))\,dr+\int_{s_{k-1}}^{s_k}\sigma(r,X(r,s_{k-1},x))\,dB(r)\\
&=b(s_k,x)(s_k-s_{k-1})+\sigma(s_k,x)\big(B(s_k)-B(s_{k-1})\big)+o(s_k-s_{k-1}).
\end{aligned} \qquad (7.60)$$
(Notice that, since $b$ is deterministic, one can replace in (7.60) $b(s_k,x)$ with $b(\xi_k,x)$, where $\xi_k$ is any point in $[s_{k-1},s_k]$.) Substituting (7.60) into (7.59) we find that
$$X(t,s,x)-x=I_1(\eta)+I_2(\eta)+I_3(\eta)+o_1(|\eta|), \qquad (7.61)$$
where
$$I_1(\eta)=\sum_{k=1}^n X_x(t,s_k,x)\,b(s_k,x)(s_k-s_{k-1}),\qquad I_2(\eta)=\sum_{k=1}^n X_x(t,s_k,x)\,\sigma(s_k,x)\big(B(s_k)-B(s_{k-1})\big),$$
$$I_3(\eta)=\frac12\sum_{k=1}^n X_{xx}(t,s_k,x)\,\sigma^2(s_k,x)\big(B(s_k)-B(s_{k-1})\big)^2.$$
Obviously
$$\lim_{|\eta|\to0}I_1(\eta)=\int_s^t X_x(t,r,x)\,b(r,x)\,dr.$$
Concerning $I_2(\eta)$, we note that it is an integral sum corresponding to the backward Itô integral, since $X_x(t,s_k,x)$ is $\mathcal F^+_{s_k}$-measurable by Proposition 7.23. Therefore we have
$$\lim_{|\eta|\to0}I_2(\eta)=\int_s^t X_x(t,r,x)\,\sigma(r,x)\,dB(r).$$
The other terms $I_3(\eta)$ and $o_1(|\eta|)$ can be handled as in the proof of Itô's formula. $\Box$

In a similar way one can prove the following backward Itô formula.

Theorem 7.28 Let $\varphi\in C^2_b(\mathbb R^d)$. Then for any $0\le s<t\le T$ we have
$$\begin{aligned}
\varphi(X(t,s,x))-\varphi(x)&=\int_s^t\big\langle D_x[\varphi(X(t,r,x))],b(r,x)\big\rangle\,dr+\frac12\int_s^t\mathrm{Tr}\,\big[D^2_x[\varphi(X(t,r,x))]\,\sigma(r,x)\sigma^*(r,x)\big]\,dr\\
&\quad+\int_s^t\big\langle D_x[\varphi(X(t,r,x))],\sigma(r,x)\,dB(r)\big\rangle.
\end{aligned} \qquad (7.62)$$
Chapter 8

Kolmogorov equations

8.1 The deterministic case

We consider here the problem
$$X'(t)=b(t,X(t)),\quad t\in[s,T],\qquad X(s)=x\in\mathbb R^n, \qquad (8.1)$$
where $s\in[0,T)$ and $b:[0,T]\times\mathbb R^n\to\mathbb R^n$ fulfills the following hypothesis.

Hypothesis 8.1
(i) $b$ is continuous on $[0,T]\times\mathbb R^n$.
(ii) There exists $M>0$ such that
$$|b(t,x)-b(t,y)|\le M|x-y|,\qquad x,y\in\mathbb R^n,\ t\in[0,T].$$
(iii) $b$ is differentiable with respect to $x$ and $b_x$ is continuous on $[0,T]\times\mathbb R^n$.

As is well known, under Hypothesis 8.1 problem (8.1) has a unique solution $X(\cdot)=X(\cdot,s,x)\in C^1([s,T];\mathbb R^n)$, and it holds
$$X(t,s,x)=X(t,u,X(u,s,x)),\qquad 0\le s\le u\le t\le T,\ x\in\mathbb R^n. \qquad (8.2)$$
Moreover, differentiating (8.2) with respect to $u$ and setting $u=s$ we find
$$X_s(t,s,x)+X_x(t,s,x)\,b(s,x)=0,\qquad 0\le s\le t\le T,\ x\in\mathbb R^n. \qquad (8.3)$$
Of great interest for the applications is the transition evolution operator $P_{s,t}$, $0\le s\le t\le T$, defined on the space $C_b(\mathbb R^n)$ by
$$P_{s,t}\varphi(x)=\varphi(X(t,s,x)),\qquad x\in\mathbb R^n,\ 0\le s\le t\le T. \qquad (8.4)$$
As easily checked, $P_{s,t}$ is a linear bounded operator on $C_b(\mathbb R^n)$. Moreover, for any $\varphi\in C_b(\mathbb R^n)$ the mapping
$$[0,T]\times[0,T]\times\mathbb R^n\to\mathbb R,\qquad(s,t,x)\mapsto P_{s,t}\varphi(x),$$
is continuous. From (8.2) the cocycle property follows immediately:
$$P_{s,t}=P_{s,u}P_{u,t},\qquad 0\le s\le u\le t\le T. \qquad (8.5)$$

Proposition 8.1 For any $\varphi\in C^1_b(\mathbb R^n)$ we have
$$\frac{d}{dt}P_{s,t}\varphi=P_{s,t}L(t)\varphi,\qquad t\ge s, \qquad (8.6)$$
and
$$\frac{d}{ds}P_{s,t}\varphi=-L(s)P_{s,t}\varphi,\qquad t\ge s, \qquad (8.7)$$
where
$$L(t)\varphi(x)=\langle b(t,x),\varphi_x(x)\rangle,\qquad\varphi\in C^1_b(\mathbb R^n),\ x\in\mathbb R^n. \qquad (8.8)$$

Proof. We have
$$\frac{d}{dt}P_{s,t}\varphi(x)=\frac{d}{dt}\varphi(X(t,s,x))=\langle b(t,X(t,s,x)),\varphi_x(X(t,s,x))\rangle$$
and
$$P_{s,t}L(t)\varphi(x)=\langle b(t,X(t,s,x)),\varphi_x(X(t,s,x))\rangle,$$
so that (8.6) follows.

Let us prove (8.7). We have, taking into account (8.3),
$$\frac{d}{ds}P_{s,t}\varphi(x)=\frac{d}{ds}\varphi(X(t,s,x))=-\langle\varphi_x(X(t,s,x)),X_x(t,s,x)\,b(s,x)\rangle=-L(s)P_{s,t}\varphi(x). \qquad\Box$$
Let us now consider the following partial differential equation, called the transport equation,
$$\begin{cases}
z_s(s,x)+\langle b(s,x),z_x(s,x)\rangle=0,\quad s\in[0,T],\\
z(T,x)=\varphi(x),
\end{cases} \qquad (8.9)$$
where $\varphi\in C^1_b(\mathbb R^n)$ and $T>0$ is fixed.

Theorem 8.2 Assume that $b:[0,T]\times\mathbb R^n\to\mathbb R^n$ fulfills Hypothesis 8.1 and let $\varphi\in C^1_b(\mathbb R^n)$. Then problem (8.9) has a unique solution $z$, given by
$$z(s,x)=P_{s,T}\varphi(x)=\varphi(X(T,s,x)),\qquad s\in[0,T],\ x\in\mathbb R^n. \qquad (8.10)$$

Proof. Existence. It is enough to notice that $z$, given by (8.10), is a solution of (8.9) by (8.7).

Uniqueness. If $z$ is a solution of problem (8.9) we have
$$\frac{d}{ds}\,z(s,X(s,u,x))=z_s(s,X(s,u,x))+\langle z_x(s,X(s,u,x)),X_t(s,u,x)\rangle
=z_s(s,X(s,u,x))+\langle z_x(s,X(s,u,x)),b(s,X(s,u,x))\rangle=0.$$
Therefore $z(s,X(s,u,x))$ is constant in $s$. Setting $s=T$ and $s=u$ we find that $z(T,X(T,u,x))=z(u,X(u,u,x))$, which implies $z(u,x)=\varphi(X(T,u,x))$, as required. $\Box$
8.1.1 The autonomous case

We assume here that $b(t,x)=b(x)$ and consider the problem
$$X'(t)=b(X(t)),\quad t\ge0,\qquad X(0)=x\in\mathbb R^n, \qquad (8.11)$$
whose solution we denote by $X(\cdot,x)$. In this case it is easy to check that for any $t>s\ge0$ we have $P_{s,t}=P_{0,t-s}$. Define
$$P_t\varphi(x)=\varphi(X(t,x)),\qquad\varphi\in C_b(\mathbb R^n),\ t\ge0,\ x\in\mathbb R^n, \qquad (8.12)$$
so that by (8.5) the semigroup law follows:
$$P_{t+s}=P_tP_s,\qquad t,s\ge0. \qquad (8.13)$$
$P_t$ is called the transition semigroup associated with (8.11). By Proposition 8.1 we deduce

Proposition 8.3 For any $\varphi\in C^1_b(\mathbb R^n)$ we have
$$D_tP_t\varphi=P_tL\varphi=LP_t\varphi,\qquad t\ge0, \qquad (8.14)$$
where
$$L\varphi(x)=\langle b(x),\varphi_x(x)\rangle,\qquad\varphi\in C^1_b(\mathbb R^n),\ x\in\mathbb R^n. \qquad (8.15)$$

Finally, by Theorem 8.2 we have

Theorem 8.4 Assume that $b\in C^1_b(\mathbb R^n)$ and let $\varphi\in C^1_b(\mathbb R^n)$. Then the problem
$$\begin{cases}
u_t(t,x)=\langle b(x),u_x(t,x)\rangle,\quad t\ge0,\ x\in\mathbb R^n,\\
u(0,x)=\varphi(x),\quad x\in\mathbb R^n,
\end{cases} \qquad (8.16)$$
has a unique solution, given by
$$u(t,x)=P_t\varphi(x)=\varphi(X(t,x)),\qquad t\ge0,\ x\in\mathbb R^n. \qquad (8.17)$$
8.2 Stochastic case

We consider the stochastic evolution equation
$$\begin{cases}
dX(t)=b(t,X(t))\,dt+\sigma(t,X(t))\,dB(t),\\
X(s)=x\in\mathbb R^n,
\end{cases} \qquad (8.18)$$
and assume that the following hypothesis holds.

Hypothesis 8.2
(i) $b:[0,T]\times\mathbb R^n\to\mathbb R^n$ and $\sigma:[0,T]\times\mathbb R^n\to L(\mathbb R^r,\mathbb R^n)$ are continuous.
(ii) There exists $M>0$ such that
$$|b(t,x)-b(t,y)|+\|\sigma(t,x)-\sigma(t,y)\|_{HS}\le M|x-y|,\qquad x,y\in\mathbb R^n,\ t\in[0,T].$$
(iii) $b$ and $\sigma$ have first and second partial derivatives with respect to $x$, continuous and bounded in $[0,T]\times\mathbb R^n$.

We denote as before by $X(\cdot,s,x)$ the solution of (8.18) corresponding to $\eta=x\in\mathbb R^n$. For all $t,s$ with $0\le s\le t\le T$ and all functions $\varphi\in C_b(\mathbb R^n)$ we set
$$P_{s,t}\varphi(x)=\mathbb E[\varphi(X(t,s,x))],\qquad x\in\mathbb R^n,\ 0\le s\le t\le T. \qquad (8.19)$$
As easily checked, $P_{s,t}$ is a linear bounded operator on $C_b(\mathbb R^n)$. $P_{s,t}$, $0\le s\le t\le T$, is called the transition evolution operator associated with (8.18). By Chapter 6 we know that the mapping
$$(s,t,x)\mapsto P_{s,t}\varphi(x)$$
is continuous for all $\varphi\in C_b(\mathbb R^n)$.
8.3 Basic properties of transition operators

Let us introduce the Kolmogorov operator
$$(L(s)\varphi)(x)=\frac12\,\mathrm{Tr}\,\big[\varphi_{xx}(x)\,\sigma(s,x)\sigma^*(s,x)\big]+\langle b(s,x),\varphi_x(x)\rangle,\qquad\varphi\in C^2_b(\mathbb R^n). \qquad (8.20)$$
The first basic identity is the following.

Proposition 8.5 Assume that Hypothesis 8.2 holds and let $\varphi\in C^2_b(\mathbb R^n)$. Then $P_{s,t}\varphi$ is differentiable in $t$ and we have
$$\frac{d}{dt}P_{s,t}\varphi=P_{s,t}L(t)\varphi,\qquad t\ge s. \qquad (8.21)$$

Proof. By the Itô formula we have that
$$d_t\,\varphi(X(t,s,x))=(L(t)\varphi)(X(t,s,x))\,dt+\langle\varphi_x(X(t,s,x)),\sigma(t,X(t,s,x))\,dB(t)\rangle.$$
Integrating with respect to $t$ and taking expectation yields
$$\mathbb E[\varphi(X(t,s,x))]=\varphi(x)+\int_s^t\mathbb E[(L(r)\varphi)(X(r,s,x))]\,dr,$$
that is
$$P_{s,t}\varphi(x)=\varphi(x)+\int_s^t P_{s,r}(L(r)\varphi)(x)\,dr,$$
which yields (8.21). $\Box$

The second basic identity is the following.

Proposition 8.6 Assume that Hypothesis 8.2 holds and let $\varphi\in C^2_b(\mathbb R^n)$. Then $P_{s,t}\varphi$ is differentiable in $s$ and we have
$$\frac{d}{ds}P_{s,t}\varphi=-L(s)P_{s,t}\varphi,\qquad 0\le s\le t. \qquad (8.22)$$

Proof. Taking expectation in the backward Itô formula (7.62), and recalling that the backward stochastic integral has zero mean by (7.56), we find
$$P_{s,t}\varphi(x)-\varphi(x)=\int_s^t L(r)P_{r,t}\varphi(x)\,dr,$$
which yields (8.22). $\Box$
8.4 Parabolic equations

We consider here the parabolic equation
$$\begin{cases}
z_s(s,x)+(L(s)z(s,\cdot))(x)=0,\quad 0\le s<T,\\
z(T,x)=\varphi(x),\quad x\in\mathbb R^n.
\end{cases} \qquad (8.23)$$
We say that a function $z:[0,T]\times\mathbb R^n\to\mathbb R$ is a solution of (8.23) if $z$ is continuous and bounded together with its partial derivatives $z_t$, $z_x$, $z_{xx}$, and fulfills (8.23).

Theorem 8.7 Assume that Hypothesis 8.2 holds and let $\varphi\in C^2_b(\mathbb R^n)$. Then there exists a unique solution $z$ of problem (8.23), given by
$$z(s,x)=\mathbb E[\varphi(X(T,s,x))],\qquad 0\le s\le T,\ x\in\mathbb R^n. \qquad (8.24)$$

Proof. Existence. By (8.22) it follows that
$$z(s,x)=P_{s,T}\varphi(x),\qquad s\in[0,T],\ x\in\mathbb R^n,$$
fulfills (8.23).

Uniqueness. Let $z$ be a solution of (8.23) and let $0\le u\le s\le T$. Let us compute the Itô differential of $z(s,X(s,u,x))$. We have
$$d_s\,z(s,X(s,u,x))=\big[z_s(s,X(s,u,x))+(L(s)z(s,\cdot))(X(s,u,x))\big]\,ds+\langle z_x(s,X(s,u,x)),\sigma(s,X(s,u,x))\,dB(s)\rangle
=\langle z_x(s,X(s,u,x)),\sigma(s,X(s,u,x))\,dB(s)\rangle,$$
since $z$ fulfills (8.23). Integrating in $s$ between $u$ and $T$ yields
$$z(T,X(T,u,x))-z(u,X(u,u,x))=\varphi(X(T,u,x))-z(u,x)=\int_u^T z_x(s,X(s,u,x))\,\sigma(s,X(s,u,x))\,dB(s).$$
Now, taking expectation we find
$$z(u,x)=\mathbb E[\varphi(X(T,u,x))]. \qquad\Box$$
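Formula (8.24) is the probabilistic (Feynman–Kac type) representation of the solution, and it suggests a direct Monte Carlo scheme: simulate $X(T,s,x)$ many times with the Euler–Maruyama method and average $\varphi$. The sketch below is only illustrative; the coefficients, the function $\varphi$ and all numerical parameters are assumptions made for the example.

```python
import numpy as np

# Monte Carlo evaluation of z(s, x) = E[phi(X(T, s, x))]  (formula (8.24))
# for dX = b(t, X) dt + sigma(t, X) dB in dimension one.
def b(t, x):     return -x
def sigma(t, x): return 1.0 + 0.2 * np.cos(x)
def phi(x):      return np.exp(-x**2)

def z_monte_carlo(s, x, T=1.0, n_steps=200, n_paths=200000, seed=0):
    rng = np.random.default_rng(seed)
    dt = (T - s) / n_steps
    X = np.full(n_paths, x, dtype=float)
    t = s
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + b(t, X) * dt + sigma(t, X) * dB   # Euler-Maruyama step
        t += dt
    return phi(X).mean()

print("z(0, 0.5) approx.", z_monte_carlo(0.0, 0.5))
```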
Exercise 8.8 Prove the cocycle law
$$P_{s,r}P_{r,t}=P_{s,t} \qquad (8.25)$$
for $0\le s\le r\le t\le T$.
8.4.1 Autonomous case

Assume that $b$ and $\sigma$ are independent of $t$:
$$b(t,x)=b(x),\qquad\sigma(t,x)=\sigma(x),\qquad x\in\mathbb R^n.$$
Then we have $L(s)=L$, where
$$L\varphi(x)=\frac12\,\mathrm{Tr}\,\big[\varphi_{xx}(x)\,\sigma(x)\sigma^*(x)\big]+\langle b(x),\varphi_x(x)\rangle,\qquad\varphi\in C^2_b(\mathbb R^n).$$

Proposition 8.9 Let $X(t,s,x)$ be the solution of the stochastic evolution equation
$$dX(t)=b(X(t))\,dt+\sigma(X(t))\,dB(t),\qquad X(s)=x\in\mathbb R^n. \qquad (8.26)$$
Then for any $a>0$ the laws of $X(t,s,x)$ and $X(t+a,s+a,x)$ coincide.

Proof. Set $Y(t)=X(t+a,s+a,x)$. Then we have
$$X(t+a,s+a,x)=x+\int_{s+a}^{t+a}b(X(r,s+a,x))\,dr+\int_{s+a}^{t+a}\sigma(X(r,s+a,x))\,dB(r).$$
Setting $r-a=\rho$ yields
$$Y(t)=x+\int_s^t b(Y(\rho))\,d\rho+\int_s^t\sigma(Y(\rho))\,d[B(\rho+a)-B(a)].$$
Setting $B_1(t)=B(t+a)-B(a)$, we see that $Y(t)$ fulfills equation (8.26) with the Brownian motion $B(t)$ replaced by $B_1(t)$. Now the conclusion follows. $\Box$

By the proposition and the cocycle law (8.25) it follows that, setting
$$P_t=P_{0,t},\qquad t\ge0,$$
we have
$$P_{t+s}=P_tP_s,\qquad t,s\ge0,\qquad P_0=1.$$
Thus $P_t$, $t\ge0$, is a semigroup of linear operators on $C_b(\mathbb R^n)$.

Setting
$$v(s,x)=z(t-s,x),\qquad t\ge0,\ s\in[0,t],\ x\in\mathbb R^n,$$
where $z$ is the solution of (8.23) with $T$ replaced by $t$, problem (8.23) becomes
$$\begin{cases}
v_s(s,x)=Lv(s,x),\quad s\in[0,t],\ x\in\mathbb R^n,\\
v(0,x)=\varphi(x),\quad x\in\mathbb R^n.
\end{cases} \qquad (8.27)$$
Then by Theorem 8.7 we find the following result.

Theorem 8.10 Assume that $b,\sigma:\mathbb R\to\mathbb R$ are Lipschitz continuous and of class $C^2$. Then, for any $\varphi\in C^2_b(\mathbb R)$, problem (8.27) has a unique solution, given by
$$v(s,x)=P_{t-s,t}\varphi(x)=P_s\varphi(x),\qquad t\ge0,\ s\in[0,t],\ x\in\mathbb R. \qquad (8.28)$$
8.5 Examples

Example 8.11 Consider the parabolic equation in $\mathbb R^n$
$$\begin{cases}
u_t(t,x)=\frac12\,\mathrm{Tr}\,[Q\,u_{xx}(t,x)]+\langle Ax,u_x(t,x)\rangle,\\
u(0,x)=\varphi(x),
\end{cases} \qquad (8.29)$$
where $A,Q\in L(\mathbb R^n)$, $Q$ is symmetric and $\langle Qx,x\rangle\ge0$ for all $x\in\mathbb R^n$. The corresponding stochastic differential equation is
$$dX(t)=AX(t)\,dt+\sqrt Q\,dB(t),\qquad X(0)=x, \qquad (8.30)$$
where $B$ is a standard Brownian motion in a probability space $(\Omega,\mathcal G,\mathbb P)$ taking values in $\mathbb R^n$. The solution of (8.30) is given by the variation of constants formula
$$X(t,x)=e^{tA}x+\int_0^t e^{(t-s)A}\sqrt Q\,dB(s). \qquad (8.31)$$
Therefore the law of $X(t,x)$ is given by
$$X(t,x)_\#\mathbb P=N_{e^{tA}x,\,Q_t}, \qquad (8.32)$$
where
$$Q_t=\int_0^t e^{sA}Q\,e^{sA^*}\,ds,\qquad t\ge0, \qquad (8.33)$$
and $A^*$ is the adjoint of $A$. Consequently, the transition semigroup $P_t$ looks like
$$P_t\varphi(x)=\int_{\mathbb R^n}\varphi(y)\,N_{e^{tA}x,\,Q_t}(dy). \qquad (8.34)$$
So, the solution of (8.29) is given by
$$u(t,x)=P_t\varphi(x).$$
If, in particular, $\det Q_t>0$ we have
$$u(t,x)=(2\pi)^{-n/2}\,[\det Q_t]^{-1/2}\int_{\mathbb R^n}e^{-\frac12\langle Q_t^{-1}(y-e^{tA}x),\,y-e^{tA}x\rangle}\,\varphi(y)\,dy. \qquad (8.35)$$
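Formula (8.34) can be checked numerically by comparing a Monte Carlo average of $\varphi(X(t,x))$, with $X(t,x)$ simulated from (8.30), against a sample drawn directly from the Gaussian law $N_{e^{tA}x,Q_t}$. The sketch below is illustrative only: the matrices, the test function and the parameters are assumptions, SciPy is assumed available for the matrix exponential, and a Cholesky factor of $Q$ is used in place of $\sqrt Q$ (it produces the same law).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative check of (8.32)-(8.34): the law of X(t,x) in (8.31) is N(e^{tA}x, Q_t).
rng = np.random.default_rng(4)
A = np.array([[-1.0, 2.0], [0.0, -0.5]])
Q = np.array([[1.0, 0.3], [0.3, 0.5]])
sqQ = np.linalg.cholesky(Q)            # same covariance as sqrt(Q)
x = np.array([1.0, -1.0])
t, n, n_paths = 1.0, 400, 100000
dt = t / n
phi = lambda y: np.cos(y[:, 0]) * np.exp(-y[:, 1] ** 2)

# Euler scheme for dX = A X dt + sqrt(Q) dB
X = np.tile(x, (n_paths, 1))
for _ in range(n):
    dB = rng.normal(0.0, np.sqrt(dt), (n_paths, 2))
    X = X + X @ A.T * dt + dB @ sqQ.T
mc = phi(X).mean()

# Direct sampling from N(e^{tA}x, Q_t), with Q_t from a Riemann sum of (8.33)
s = np.linspace(0.0, t, 2000)
Qt = sum(expm(si * A) @ Q @ expm(si * A).T for si in s) * (s[1] - s[0])
Y = rng.multivariate_normal(expm(t * A) @ x, Qt, n_paths)

print("Euler Monte Carlo  :", mc)
print("Gaussian law (8.34):", phi(Y).mean())
```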
Example 8.12 Consider the parabolic equation in $\mathbb R$
$$\begin{cases}
u_t(t,x)=\frac12\,qx^2u_{xx}(t,x)+axu_x(t,x),\\
u(0,x)=\varphi(x),
\end{cases} \qquad (8.36)$$
where $q>0$ and $a\in\mathbb R$. The corresponding stochastic differential equation is
$$dX(t)=aX(t)\,dt+\sqrt q\,X(t)\,dB(t),\qquad X(0)=x, \qquad (8.37)$$
where $B$ is a real Brownian motion in some probability space $(\Omega,\mathcal F,\mathbb P)$. The solution of (8.37) is given by
$$X(t,x)=e^{(a-q/2)t+\sqrt q\,B(t)}\,x. \qquad (8.38)$$
Therefore
$$P_t\varphi(x)=\frac1{\sqrt{2\pi t}}\int_{-\infty}^{+\infty}e^{-\frac{y^2}{2t}}\,\varphi\big(e^{(a-q/2)t+\sqrt q\,y}\,x\big)\,dy. \qquad (8.39)$$
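As in the previous example, (8.39) reduces the solution of (8.36) to a one-dimensional Gaussian integral. The sketch below (the values of $a$, $q$, $x$ and the test function are illustrative choices) evaluates $P_t\varphi(x)$ both by a simple quadrature of (8.39) and by a Monte Carlo average over the exact solution (8.38).

```python
import numpy as np

# Illustrative evaluation of P_t phi(x) for example (8.36)-(8.39).
a, q, x, t = 0.1, 0.4, 1.0, 1.0
phi = lambda z: np.maximum(z - 1.0, 0.0)     # sample test function

# Riemann-sum quadrature of (8.39)
y = np.linspace(-10.0, 10.0, 200001)
dy = y[1] - y[0]
integrand = np.exp(-y**2 / (2 * t)) * phi(np.exp((a - q / 2) * t + np.sqrt(q) * y) * x)
quad = integrand.sum() * dy / np.sqrt(2 * np.pi * t)

# Monte Carlo with the explicit solution (8.38)
rng = np.random.default_rng(5)
B_t = rng.normal(0.0, np.sqrt(t), 1000000)
mc = phi(np.exp((a - q / 2) * t + np.sqrt(q) * B_t) * x).mean()

print("quadrature of (8.39) :", quad)
print("Monte Carlo of (8.38):", mc)
```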
Appendix A

λ-systems and π-systems

Let $\Omega$ be a non empty set. A non empty family $\mathcal R$ of parts of $\Omega$ is called a π-system if
$$A,B\in\mathcal R\ \Longrightarrow\ A\cap B\in\mathcal R,$$
and a non empty family $\mathcal D$ of parts of $\Omega$ is called a λ-system if
$$\begin{cases}
\text{(i) }\ \Omega,\varnothing\in\mathcal D;\\
\text{(ii) }\ A\in\mathcal D\ \Longrightarrow\ A^c\in\mathcal D;\\
\text{(iii) }\ (A_i)\subset\mathcal D\ \text{mutually disjoint}\ \Longrightarrow\ \bigcup_{i=1}^\infty A_i\in\mathcal D.
\end{cases} \qquad (\text{A.1})$$
Obviously any algebra is a π-system. Moreover, if $\mathcal D$ is a λ-system such that $A,B\in\mathcal D\Rightarrow A\cap B\in\mathcal D$, then it is a σ-algebra. In fact, if $(A_i)$ is a sequence in $\mathcal D$ of not necessarily disjoint sets we have
$$\bigcup_{i=1}^\infty A_i=A_1\cup(A_2\setminus A_1)\cup(A_3\setminus A_2\setminus A_1)\cup\cdots\in\mathcal D,$$
and so $\bigcup_{i=1}^\infty A_i\in\mathcal D$ by (ii) and (iii).

Let us prove the following Dynkin theorem.

Theorem A.1 Let $\mathcal R$ be a π-system and let $\mathcal D$ be a λ-system including $\mathcal R$. Then we have $\sigma(\mathcal R)\subset\mathcal D$, where $\sigma(\mathcal R)$ is the σ-algebra generated by $\mathcal R$. If, in particular, $\mathcal D\subset\sigma(\mathcal R)$, we have $\sigma(\mathcal R)=\mathcal D$.

Proof. Let $\mathcal D_0$ be the minimal λ-system including $\mathcal R$. We are going to show that $\mathcal D_0$ is a σ-algebra, which will imply the theorem. For this it is enough to show, as remarked before, that the following implication holds:
$$A,B\in\mathcal D_0\ \Longrightarrow\ A\cap B\in\mathcal D_0. \qquad (\text{A.2})$$
For any $B\in\mathcal D_0$ we set
$$\mathcal H(B)=\{F\in\mathcal D_0:\ B\cap F\in\mathcal D_0\}.$$
We claim that $\mathcal H(B)$ is a λ-system. In fact, properties (i) and (iii) are clear. It remains to show that if $F\cap B\in\mathcal D_0$ then $F^c\cap B\in\mathcal D_0$ or, equivalently, that $F\cup B^c\in\mathcal D_0$. In fact, since $F\cup B^c=(F\setminus B^c)\cup B^c=(F\cap B)\cup B^c$ and $F\cap B$ and $B^c$ are disjoint, we have that $F\cup B^c\in\mathcal D_0$, as required.

If we show that
$$\mathcal H(B)\supset\mathcal R,\qquad\forall\,B\in\mathcal D_0, \qquad (\text{A.3})$$
then we conclude that $\mathcal H(B)=\mathcal D_0$ by the minimality of $\mathcal D_0$, and (A.2) is proved.

On the other hand, it is clear that if $R\in\mathcal R$ we have $\mathcal R\subset\mathcal H(R)$, since $\mathcal R$ is a π-system. Therefore $\mathcal H(R)=\mathcal D_0$ by the minimality of $\mathcal D_0$. Consequently, the following implication holds:
$$R\in\mathcal R,\ B\in\mathcal D_0\ \Longrightarrow\ R\cap B\in\mathcal D_0,$$
which yields $\mathcal R\subset\mathcal H(B)$, and (A.3) is fulfilled. $\Box$

Example A.2 Let $\mathcal A$ be an algebra of subsets of $\Omega$ and let $\mathcal F$ be the σ-algebra generated by $\mathcal A$. Let $\mathbb P_1$ and $\mathbb P_2$ be probability measures on $(\Omega,\mathcal F)$ such that
$$\mathbb P_1(I)=\mathbb P_2(I),\qquad\forall\,I\in\mathcal A.$$
Using the Dynkin theorem we can show that $\mathbb P_1=\mathbb P_2$. It is clear in fact that $\mathcal A$ is a π-system. Define
$$\mathcal D=\{B\in\mathcal F:\ \mathbb P_1(B)=\mathbb P_2(B)\}.$$
It is easy to see that $\mathcal D$ is a λ-system which contains $\mathcal A$. So, by Theorem A.1 it follows that $\mathbb P_1=\mathbb P_2$.
Appendix B

Conditional expectation

B.1 Definition

We are given a probability space $(\Omega,\mathcal F,\mathbb P)$ and a σ-algebra $\mathcal G$ included in $\mathcal F$. Let $X:\Omega\to\mathbb R$ be a real random variable on $(\Omega,\mathcal F,\mathbb P)$ (1). We say that $X$ is $\mathcal G$-measurable if
$$I\in\mathcal B(\mathbb R)\ \Rightarrow\ X^{-1}(I)\in\mathcal G.$$
It is clear that $X$ is not $\mathcal G$-measurable in general.

Let us consider the signed measure
$$\mu(G)=\int_G X\,d\mathbb P,\qquad G\in\mathcal G.$$
It is clear that $\mu$ is absolutely continuous with respect to the restriction of $\mathbb P$ to $\mathcal G$. Therefore, by the Radon–Nikodym theorem there exists a unique $Y\in L^1(\Omega,\mathcal G,\mathbb P)$ such that
$$\mu(G)=\int_G X\,d\mathbb P=\int_G Y\,d\mathbb P,\qquad\forall\,G\in\mathcal G. \qquad (\text{B.1})$$
The $\mathcal G$-measurable random variable $Y$ is called the conditional expectation of $X$ given $\mathcal G$; it is denoted by $\mathbb E(X|\mathcal G)$. In view of (B.1), $\mathbb E(X|\mathcal G)$ is characterized by
$$\int_G X\,d\mathbb P=\int_G\mathbb E(X|\mathcal G)\,d\mathbb P,\qquad\forall\,G\in\mathcal G. \qquad (\text{B.2})$$

Exercise B.1 Assume that $X\in L^2(\Omega,\mathcal F,\mathbb P)$. Show that $\mathbb E(X|\mathcal G)$ coincides with the orthogonal projection of $X$ onto the closed subspace $L^2(\Omega,\mathcal G,\mathbb P)$ of $L^2(\Omega,\mathcal F,\mathbb P)$.
(1) In all this appendix, by random variable we mean an equivalence class of random variables with respect to the usual equivalence relation.
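For a σ-algebra generated by a finite partition, the characterization in Exercise B.1 is easy to verify by hand: $\mathbb E(X|\mathcal G)$ is obtained by averaging $X$ over each cell of the partition, and this is exactly the least-squares ($L^2$) projection onto the $\mathcal G$-measurable functions. A small numerical illustration follows; the partition and the random variable are arbitrary choices made for the example.

```python
import numpy as np

# Omega = {0,...,N-1} with uniform P; G is generated by a finite partition.
# E(X|G) averages X over each cell, and is the L^2 projection of X onto the
# space of functions constant on the cells (Exercise B.1).
rng = np.random.default_rng(6)
N = 12
X = rng.normal(size=N)
cells = [np.array([0, 1, 2]), np.array([3, 4, 5, 6]), np.array([7, 8, 9, 10, 11])]

cond = np.empty(N)
for c in cells:
    cond[c] = X[c].mean()          # E(X|G) on each cell

# Defining property (B.2): same integral over every cell of the partition,
# i.e. the residual X - E(X|G) integrates to zero on each cell.
for c in cells:
    print(X[c].sum() / N, cond[c].sum() / N, (X - cond)[c].sum() / N)
```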
B.2 Basic properties

Let $X,Y\in L^1(\Omega,\mathcal F,\mathbb P)$ and let $\mathcal G$ be a σ-algebra included in $\mathcal F$. It is obvious that if $X$ is $\mathcal G$-measurable, we have $\mathbb E(X|\mathcal G)=X$. Setting $G=\Omega$ in (B.2) yields
$$\mathbb E[\mathbb E(X|\mathcal G)]=\mathbb E(X). \qquad (\text{B.3})$$
Moreover, one can check easily the linearity of conditional expectation,
$$\mathbb E(\alpha X+\beta Y|\mathcal G)=\alpha\,\mathbb E(X|\mathcal G)+\beta\,\mathbb E(Y|\mathcal G), \qquad (\text{B.4})$$
for all $\alpha,\beta\in\mathbb R$ and all $X,Y\in L^1(\Omega,\mathcal F,\mathbb P)$. Also, if $X\ge0$, $\mathbb P$-a.s., one has $\mathbb E(X|\mathcal G)\ge0$, $\mathbb P$-a.s. From this one deduces the inequality
$$|\mathbb E(X|\mathcal G)|\le\mathbb E(|X|\,|\,\mathcal G). \qquad (\text{B.5})$$

Proposition B.2 Assume that $X$ is independent of $\mathcal G$. Then we have
$$\mathbb E(X|\mathcal G)=\mathbb E(X). \qquad (\text{B.6})$$

Proof. Let $A\in\mathcal G$. Then $\mathbb 1_A$ and $X$ are independent, so that
$$\int_A X\,d\mathbb P=\int_\Omega\mathbb 1_A X\,d\mathbb P=\mathbb P(A)\,\mathbb E(X)=\int_A\mathbb E(X)\,d\mathbb P,$$
so the constant $\mathbb E(X)$ fulfills (B.2) and (B.6) follows. $\Box$

Proposition B.3 Let $\mathcal H$ be a σ-algebra included in $\mathcal G$. Then we have
$$\mathbb E(X|\mathcal H)=\mathbb E\big[\mathbb E(X|\mathcal G)\,\big|\,\mathcal H\big]. \qquad (\text{B.7})$$

Proof. Let $A\in\mathcal H$. Then we have
$$\int_A X\,d\mathbb P=\int_A\mathbb E(X|\mathcal H)\,d\mathbb P \qquad (\text{B.8})$$
and
$$\int_A X\,d\mathbb P=\int_A\mathbb E(X|\mathcal G)\,d\mathbb P=\int_A\mathbb E\big[\mathbb E(X|\mathcal G)\,\big|\,\mathcal H\big]\,d\mathbb P. \qquad (\text{B.9})$$
So, comparing (B.8) and (B.9), we see that
$$\int_A\mathbb E(X|\mathcal H)\,d\mathbb P=\int_A X\,d\mathbb P=\int_A\mathbb E\big[\mathbb E(X|\mathcal G)\,\big|\,\mathcal H\big]\,d\mathbb P. \qquad\Box$$

Proposition B.4 Let $X,Y,XY\in L^1(\Omega,\mathcal F,\mathbb P)$ and assume that $X$ is $\mathcal G$-measurable. Then we have
$$\mathbb E(XY|\mathcal G)=X\,\mathbb E(Y|\mathcal G). \qquad (\text{B.10})$$

Proof. It is enough to show (B.10) for $X=\mathbb 1_A$ with $A\in\mathcal G$. Let now $G\in\mathcal G$; then, since $G\cap A\in\mathcal G$, we have
$$\int_G\mathbb E(\mathbb 1_AY|\mathcal G)\,d\mathbb P=\int_G\mathbb 1_AY\,d\mathbb P=\int_{G\cap A}Y\,d\mathbb P=\int_{G\cap A}\mathbb E(Y|\mathcal G)\,d\mathbb P=\int_G\mathbb 1_A\,\mathbb E(Y|\mathcal G)\,d\mathbb P,$$
for any $G\in\mathcal G$. $\Box$

Recalling Proposition B.2 we find:

Corollary B.5 Let $X,Y,XY\in L^1(\Omega,\mathcal F,\mathbb P)$. Assume that $X$ is $\mathcal G$-measurable and that $Y$ is independent of $\mathcal G$. Then we have
$$\mathbb E(XY|\mathcal G)=X\,\mathbb E(Y). \qquad (\text{B.11})$$

Let us now prove a useful generalization of this corollary.

Proposition B.6 Let $X,Y\in L^1(\Omega,\mathcal F,\mathbb P)$ and let $\phi:\mathbb R^2\to\mathbb R$ be bounded and Borel. Assume that $X$ is $\mathcal G$-measurable and $Y$ is independent of $\mathcal G$. Then we have
$$\mathbb E(\phi(X,Y)|\mathcal G)=h(X), \qquad (\text{B.12})$$
where
$$h(x)=\mathbb E[\phi(x,Y)],\qquad x\in\mathbb R. \qquad (\text{B.13})$$

Proof. We have to show that
$$\int_G\phi(X,Y)\,d\mathbb P=\int_G h(X)\,d\mathbb P,\qquad\forall\,G\in\mathcal G.$$
This is clearly equivalent to
$$\mathbb E(Z\phi(X,Y))=\mathbb E(Zh(X)),\qquad\forall\,Z\in L^1(\Omega,\mathcal G,\mathbb P). \qquad (\text{B.14})$$
Denote by $\mu$ the law of the random variable $(X,Y,Z)$ with values in $\mathbb R^3$,
$$\mu=(X,Y,Z)_\#\mathbb P.$$
So,
$$\mathbb E(Z\phi(X,Y))=\int_{\mathbb R^3}z\,\phi(x,y)\,\mu(dx,dy,dz). \qquad (\text{B.15})$$
Since $X$ and $Z$ are $\mathcal G$-measurable and $Y$ is independent of $\mathcal G$, the random variables $(X,Z)$ and $Y$ are independent, so that
$$\mu(dx,dy,dz)=\nu(dx,dz)\,\lambda(dy),$$
where
$$\nu(dx,dz)=(X,Z)_\#\mathbb P(dx,dz),\qquad\lambda(dy)=Y_\#\mathbb P(dy).$$
Therefore we can write (B.15) as
$$\mathbb E(Z\phi(X,Y))=\int_{\mathbb R^3}z\,\phi(x,y)\,\nu(dx,dz)\,\lambda(dy).$$
Using the Fubini theorem we finally get
$$\mathbb E(Z\phi(X,Y))=\int_{\mathbb R^2}z\Big(\int_{\mathbb R}\phi(x,y)\,\lambda(dy)\Big)\nu(dx,dz)=\int_{\mathbb R^2}z\,h(x)\,\nu(dx,dz)=\mathbb E(Zh(X)),$$
as required. $\Box$

Exercise B.7 Let $F,H,FH\in L^1(\Omega,\mathcal G,\mathbb P)$ and $Z=\mathbb E(H|\mathcal G)$. Prove that
$$\mathbb E(FH)=\mathbb E(FZ). \qquad (\text{B.16})$$

Exercise B.8 Let $g:\mathbb R\to\mathbb R$ be convex and let $F,g(F)\in L^1(\Omega,\mathcal F,\mathbb P)$. Prove the Jensen inequality
$$\mathbb E(g(F)|\mathcal G)\ge g(\mathbb E(F|\mathcal G)). \qquad (\text{B.17})$$
Appendix C

Martingales

C.1 Definitions

Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space, $(\mathcal F_t)_{t\ge0}$ an increasing family of σ-algebras included in $\mathcal F$, and $(M(t))_{t\in[0,T]}$, with $M(t)\in L^1(\Omega,\mathcal F_t,\mathbb P)$ for $t\in[0,T]$, a stochastic process.

$(M(t))_{t\in[0,T]}$ is said to be a martingale (with respect to the filtration $(\mathcal F_t)_{t\ge0}$) if
$$\mathbb E[M(t)|\mathcal F_s]=M(s),\qquad\forall\,0\le s<t\le T,$$
a submartingale if
$$\mathbb E[M(t)|\mathcal F_s]\ge M(s),\qquad\forall\,0\le s<t\le T,$$
and a supermartingale if
$$\mathbb E[M(t)|\mathcal F_s]\le M(s),\qquad\forall\,0\le s<t\le T.$$
Thus $(M(t))_{t\in[0,T]}$ is a martingale if and only if
$$\int_A M(s)\,d\mathbb P=\int_A M(t)\,d\mathbb P,\qquad\forall\,0\le s<t\le T,\ A\in\mathcal F_s,$$
a submartingale if and only if
$$\int_A M(s)\,d\mathbb P\le\int_A M(t)\,d\mathbb P,\qquad\forall\,0\le s<t\le T,\ A\in\mathcal F_s,$$
and a supermartingale if and only if
$$\int_A M(s)\,d\mathbb P\ge\int_A M(t)\,d\mathbb P,\qquad\forall\,0\le s<t\le T,\ A\in\mathcal F_s.$$

Proposition C.1 If $M$ is a martingale then $|M|$ is a submartingale.

Proof. Let $0\le s<t\le T$ and $A\in\mathcal F_s$. Set
$$A^+=\{\omega\in\Omega:\ M(s)(\omega)>0\},\qquad A^-=\{\omega\in\Omega:\ M(s)(\omega)\le0\}.$$
Clearly $A^+$ and $A^-$ belong to $\mathcal F_s$. Consequently we have
$$\int_A|M(s)|\,d\mathbb P=\int_{A\cap A^+}M(s)\,d\mathbb P-\int_{A\cap A^-}M(s)\,d\mathbb P
=\int_{A\cap A^+}M(t)\,d\mathbb P-\int_{A\cap A^-}M(t)\,d\mathbb P\le\int_A|M(t)|\,d\mathbb P.$$
This shows that $|M|$ is a submartingale. $\Box$

Example C.2 The Brownian motion $B$ is a martingale. In fact, let $t>s$ and $A\in\mathcal F_s$. Since $B(t)-B(s)$ and $\mathbb 1_A$ are independent, we have
$$\int_A(B(t)-B(s))\,d\mathbb P=\mathbb E\big(\mathbb 1_A(B(t)-B(s))\big)=0,$$
so that
$$\int_A B(t)\,d\mathbb P=\int_A B(s)\,d\mathbb P.$$

Exercise C.3 Using Jensen's inequality, prove that any convex function of a martingale is a submartingale (see Exercise B.8).

C.2 The basic inequality for martingales

Let $M(t)$ be a martingale, let $0<t_1<t_2<\dots<t_n\le T$ and set
$$S=\sup_{1\le i\le n}|M(t_i)|.$$
We are going to prove an important estimate (due to Kolmogorov) of $S$ in terms of $M(t_n)$.

Proposition C.4 For all $\lambda>0$ we have
$$\mathbb P(S\ge\lambda)\le\frac1\lambda\int_{\{S\ge\lambda\}}|M(t_n)|\,d\mathbb P. \qquad (\text{C.1})$$
Proof. Set
$$A_1=\{|M(t_1)|\ge\lambda\},\quad A_2=\{|M(t_1)|<\lambda,\ |M(t_2)|\ge\lambda\},\quad\dots,\quad
A_n=\{|M(t_1)|<\lambda,\dots,|M(t_{n-1})|<\lambda,\ |M(t_n)|\ge\lambda\}.$$
Clearly, the sets $A_1,\dots,A_n$ are mutually disjoint. Moreover $A_i\in\mathcal F_{t_i}$, $i=1,\dots,n$, and we have
$$\{S\ge\lambda\}=\bigcup_{i=1}^n A_i.$$
Let us estimate $\int_{\{S\ge\lambda\}}|M(t_n)|\,d\mathbb P$. We have obviously
$$\int_{A_n}|M(t_n)|\,d\mathbb P\ge\lambda\,\mathbb P(A_n).$$
Now we estimate $\int_{A_{n-1}}|M(t_n)|\,d\mathbb P$. We have, recalling that $|M(t)|$ is a submartingale,
$$\lambda\,\mathbb P(A_{n-1})\le\int_{A_{n-1}}|M(t_{n-1})|\,d\mathbb P\le\int_{A_{n-1}}|M(t_n)|\,d\mathbb P.$$
Therefore
$$\int_{A_{n-1}}|M(t_n)|\,d\mathbb P\ge\lambda\,\mathbb P(A_{n-1}).$$
Proceeding in a similar way we obtain
$$\int_{A_k}|M(t_n)|\,d\mathbb P\ge\lambda\,\mathbb P(A_k),\qquad k=1,\dots,n. \qquad (\text{C.2})$$
Summing up over $k$ from 1 to $n$, the conclusion follows. $\Box$
C.3 Square integrable martingales

In this section we are given a martingale $M(t)$ such that $M(t)\in L^2(\Omega,\mathcal F,\mathbb P)$ for all $t\in[0,T]$. Let $0<t_1<t_2<\dots<t_n\le T$ and set as before
$$S=\sup_{1\le i\le n}|M(t_i)|.$$
We are going to estimate $\mathbb E[S^2]$ in terms of $\mathbb E[M^2(t_n)]$.

Proposition C.5 We have
$$\mathbb E\Big[\sup_{1\le i\le n}|M(t_i)|^2\Big]\le4\,\mathbb E\big(|M(t_n)|^2\big). \qquad (\text{C.3})$$

Proof. Set
$$F(t)=\mathbb P(S>t),\qquad t\ge0.$$
By (C.1) we have
$$F(t)\le\frac1t\int_{\{S\ge t\}}|M(t_n)|\,d\mathbb P. \qquad (\text{C.4})$$
Consequently
$$\mathbb E(S^2)=\int_0^\infty\mathbb P(S^2>t)\,dt=\int_0^\infty\mathbb P(S>\sqrt t\,)\,dt.$$
So, by (C.1) and the Fubini theorem we have
$$\begin{aligned}
\mathbb E(S^2)&\le\int_0^\infty\Big(\frac1{\sqrt t}\int_{\{S\ge\sqrt t\}}|M(t_n)|\,d\mathbb P\Big)dt
=\int_{[0,+\infty)\times\Omega}\frac1{\sqrt t}\,|M(t_n)|\,\mathbb 1_{\{S\ge\sqrt t\}}\,\mathbb P(d\omega)\,dt\\
&=\int_\Omega|M(t_n)|\,\mathbb P(d\omega)\int_0^\infty\frac1{\sqrt t}\,\mathbb 1_{\{S\ge\sqrt t\}}\,dt
=\int_\Omega|M(t_n)|\,\mathbb P(d\omega)\int_0^{S^2}\frac1{\sqrt t}\,dt\\
&=2\int_\Omega|M(t_n)|\,S\,\mathbb P(d\omega)\le2\Big(\int_\Omega|M(t_n)|^2\,d\mathbb P\Big)^{1/2}\Big(\int_\Omega S^2\,d\mathbb P\Big)^{1/2}.
\end{aligned}$$
Now the conclusion follows easily. $\Box$
Corollary C.6 Let $M$ be a square integrable continuous martingale. Then for any $T>0$ we have
$$\mathbb E\Big[\sup_{t\in[0,T]}|M(t)|^2\Big]\le4\,\mathbb E[M^2(T)]. \qquad (\text{C.5})$$

Proof. Let $0<s_1<s_2<\dots<s_m=T$. By Proposition C.5 it follows that
$$\mathbb E\Big[\sup_{1\le i\le m}|M(s_i)|^2\Big]\le4\,\mathbb E\big[|M(T)|^2\big].$$
Since $M$ is continuous it follows, by the arbitrariness of the sequence $s_1,s_2,\dots,s_m$, that
$$\mathbb E\Big[\sup_{s\in[0,T]}|M(s)|^2\Big]\le4\,\mathbb E\big[|M(T)|^2\big],$$
as required. $\Box$
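Inequality (C.5), Doob's $L^2$ maximal inequality with constant 4, can be illustrated on the Brownian motion, which is a continuous square integrable martingale (Example C.2). The sketch below (grid and sample sizes are arbitrary choices) estimates both sides of (C.5) by simulation.

```python
import numpy as np

# Empirical check of E[ sup_{t<=T} |B(t)|^2 ] <= 4 E[ B(T)^2 ] = 4T   (eq. (C.5))
rng = np.random.default_rng(7)
T, n_steps, n_paths = 1.0, 1000, 20000
dt = T / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.cumsum(dB, axis=1)                       # B(t_k) along each path

lhs = np.mean(np.max(np.abs(B), axis=1) ** 2)   # E[ sup |B|^2 ] (on the grid)
rhs = 4 * np.mean(B[:, -1] ** 2)                # 4 E[ B(T)^2 ]
print(f"E[sup |B|^2] ~ {lhs:.3f}  <=  4 E[B(T)^2] ~ {rhs:.3f}")
```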
Appendix D

Fixed points depending on parameters

D.1 Introduction

Let $\Lambda$, $E$ be Banach spaces (norms $|\cdot|$). We are given a continuous mapping
$$F:\Lambda\times E\to E,\qquad(\lambda,x)\mapsto F(\lambda,x),$$
and assume that

Hypothesis D.1 There exists $\kappa\in[0,1)$ such that
$$|F(\lambda,x)-F(\lambda,y)|\le\kappa|x-y|,\qquad\forall\,\lambda\in\Lambda,\ x,y\in E.$$

The following result (contraction principle) is classical.

Theorem D.1 (i) There exists a unique continuous mapping
$$x:\Lambda\to E,\qquad\lambda\mapsto x(\lambda),$$
such that
$$x(\lambda)=F(\lambda,x(\lambda)),\qquad\forall\,\lambda\in\Lambda. \qquad (\text{D.1})$$
(ii) If in addition $F$ is of class $C^1$, then $x$ is of class $C^1$ and
$$x'(\lambda)=F_\lambda(\lambda,x(\lambda))+F_x(\lambda,x(\lambda))\,x'(\lambda). \qquad (\text{D.2})$$

We want to generalize the second part of this result to mappings $F(\lambda,x)$ which are only continuously Gâteaux differentiable.

D.2 Gâteaux differentiable mappings

Let $A$ and $B$ be Banach spaces and let $\Phi:A\to B$ be a continuous mapping from $A$ into $B$.

Definition D.2 We say that $\Phi$ is Gâteaux differentiable if there exists a mapping
$$D\Phi:A\to L(A,B),\qquad a\mapsto D\Phi(a),$$
such that
$$\lim_{\xi\to0}\frac1\xi\big(\Phi(a+\xi c)-\Phi(a)\big)=D\Phi(a)c,\qquad\forall\,a,c\in A.$$
If in addition, for all $c\in A$, the mapping $A\to B$, $a\mapsto D\Phi(a)c$, is continuous, we say that $\Phi$ is continuously Gâteaux differentiable.

Remark D.3 It is well known that if the mapping $A\to L(A,B)$, $a\mapsto D\Phi(a)$, is continuous, then $\Phi$ is differentiable. (1)

Example D.4 Let $A=B=L^2(0,1)$ and $\Phi(x)=\sin x$. Then one can check easily that $\Phi$ is continuously Gâteaux differentiable and
$$D\Phi(x)y=y\cos x,\qquad\forall\,x,y\in L^2(0,1).$$
However, as one can see, $\Phi$ is not differentiable at any point.

We shall need the following result.

Proposition D.5 Let $\Phi:A\to B$ be continuously Gâteaux differentiable. Then the following identity holds:
$$\Phi(c)-\Phi(a)=\int_0^1 D\Phi((1-\xi)a+\xi c)(c-a)\,d\xi. \qquad (\text{D.3})$$

Proof. Set
$$F(\xi)=\Phi((1-\xi)a+\xi c),\qquad\xi\in[0,1].$$
Then we have
$$F'(\xi)=D\Phi((1-\xi)a+\xi c)(c-a),$$
and the conclusion follows by integrating this identity between 0 and 1. $\Box$

(1) One also says that $\Phi$ is Fréchet differentiable.

D.3 The main result

We come back to the notations of the introduction and consider two Banach spaces $\Lambda$ and $E$ and a continuous mapping
$$F:\Lambda\times E\to E,\qquad(\lambda,x)\mapsto F(\lambda,x).$$
We assume that Hypothesis D.1 is fulfilled and denote by $x$ the mapping
$$x:\Lambda\to E,\qquad\lambda\mapsto x(\lambda),$$
such that
$$x(\lambda)=F(\lambda,x(\lambda)),\qquad\forall\,\lambda\in\Lambda. \qquad (\text{D.4})$$

Theorem D.6 Assume that Hypothesis D.1 is fulfilled and that $F$ is continuously Gâteaux differentiable. Then $x(\cdot)$ is continuously Gâteaux differentiable as well, and we have
$$x'(\lambda)\mu=(1-F_x(\lambda,x(\lambda)))^{-1}F_\lambda(\lambda,x(\lambda))\mu, \qquad (\text{D.5})$$
or, equivalently,
$$x'(\lambda)\mu=F_\lambda(\lambda,x(\lambda))\mu+F_x(\lambda,x(\lambda))(x'(\lambda)\mu). \qquad (\text{D.6})$$

Proof. Let $\lambda,\mu\in\Lambda$ and $h\in\mathbb R$. From (D.4) and (D.3) it follows that
$$\begin{aligned}
x(\lambda+h\mu)-x(\lambda)&=F(\lambda+h\mu,x(\lambda+h\mu))-F(\lambda,x(\lambda))\\
&=h\int_0^1 F_\lambda\big(\lambda+\xi h\mu,\,x(\lambda)+\xi(x(\lambda+h\mu)-x(\lambda))\big)\mu\,d\xi\\
&\quad+\int_0^1 F_x\big(\lambda+\xi h\mu,\,x(\lambda)+\xi(x(\lambda+h\mu)-x(\lambda))\big)\big(x(\lambda+h\mu)-x(\lambda)\big)\,d\xi.
\end{aligned} \qquad (\text{D.7})$$
Set now
$$G(\lambda,x,\mu,h)z=Gz:=\int_0^1 F_x\big(\lambda+\xi h\mu,\,x(\lambda)+\xi(x(\lambda+h\mu)-x(\lambda))\big)z\,d\xi,\qquad z\in E.$$
Then $G\in L(E)$ and, by Hypothesis D.1,
$$|Gz|\le\kappa|z|,\qquad\forall\,z\in E.$$
Then from equation (D.7) we have
$$(1-G(\lambda,x,\mu,h))\big(x(\lambda+h\mu)-x(\lambda)\big)=h\int_0^1 F_\lambda\big(\lambda+\xi h\mu,\,x(\lambda)+\xi(x(\lambda+h\mu)-x(\lambda))\big)\mu\,d\xi,$$
which implies
$$\frac1h\big(x(\lambda+h\mu)-x(\lambda)\big)=(1-G(\lambda,x,\mu,h))^{-1}\int_0^1 F_\lambda\big(\lambda+\xi h\mu,\,x(\lambda)+\xi(x(\lambda+h\mu)-x(\lambda))\big)\mu\,d\xi.$$
Letting $h\to0$ we find
$$x'(\lambda)\mu=(1-F_x(\lambda,x(\lambda)))^{-1}F_\lambda(\lambda,x(\lambda))\mu.$$
Therefore
$$x'(\lambda)\mu-F_x(\lambda,x(\lambda))(x'(\lambda)\mu)=F_\lambda(\lambda,x(\lambda))\mu. \qquad\Box$$
Appendix E

Fractional Sobolev spaces and regularity of processes

E.1 Fractional Sobolev spaces on [0, T]

Let $\epsilon\in(0,1)$, $m\in\mathbb N$. Define
$$\|f\|^{2m}_{\epsilon,2m}:=\int_{[0,T]^2}\frac{|f(t)-f(s)|^{2m}}{|t-s|^{1+2\epsilon m}}\,dt\,ds.$$
$W^{\epsilon,2m}(0,T)$ is by definition the space of all $f:[0,T]\to\mathbb R$ such that $\|f\|_{\epsilon,2m}<+\infty$.

Theorem E.1 (Sobolev embedding) Assume that $\epsilon>1/(2m)$. Then the following inclusion holds, with continuous embedding:
$$W^{\epsilon,2m}(0,T)\subset C^{\epsilon-1/(2m)}([0,T]). \qquad (\text{E.1})$$
Example E.2 (The Brownian motion) Let $\epsilon>0$ and let $p\ge1$. We ask the question whether $B(\cdot)$ belongs to $W^{\epsilon,p}(0,T)$ or not. Let us compute
$$\mathbb E\big(\|B\|^p_{W^{\epsilon,p}}\big)=\mathbb E\int_{[0,T]^2}\frac{|B(t)-B(s)|^p}{|t-s|^{1+\epsilon p}}\,dt\,ds.$$
Take for simplicity $p=2m$; then
$$\mathbb E\big(\|B\|^{2m}_{W^{\epsilon,2m}}\big)=\mathbb E\int_{[0,T]^2}\frac{|B(t)-B(s)|^{2m}}{|t-s|^{1+2\epsilon m}}\,dt\,ds
=c_m\int_{[0,T]^2}\frac{|t-s|^m}{|t-s|^{1+2\epsilon m}}\,dt\,ds=c_m\int_{[0,T]^2}|t-s|^{m-1-2\epsilon m}\,dt\,ds.$$
The integral is finite if and only if $\epsilon<\frac12$.

For instance, taking $m=1$ we conclude that $B(\cdot)\in W^{\epsilon,2}(0,T)$ for $\epsilon<\frac12$. This does not imply that $B(\cdot)$ is continuous. But if we take $m=2$ we have $B(\cdot)\in W^{\epsilon,4}(0,T)$, again for $\epsilon<\frac12$. Therefore, if $\frac14<\epsilon<\frac12$, we conclude by the Sobolev embedding that $B(\cdot)\in C^{\epsilon-\frac14}(0,T)$.

Arguing similarly, taking larger $m$, we conclude that $B(\cdot)\in C^\alpha(0,T)$ for any α ∈ (0, 1/2).
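The computation above can be illustrated numerically: on a discretized Brownian path the double integral defining $\|B\|^{2m}_{\epsilon,2m}$ stays moderate for $\epsilon$ well below $1/2$ and grows sharply as $\epsilon$ approaches $1/2$. The sketch below (the grid size, $m$ and the values of $\epsilon$ are arbitrary choices) approximates the seminorm on $[0,1]$ by a Riemann sum.

```python
import numpy as np

# Approximate the fractional Sobolev seminorm ||B||_{eps,2m}^{2m} of one
# Brownian path on [0,1] by a Riemann sum over the grid (illustrative only).
rng = np.random.default_rng(8)
n = 1000
t = np.linspace(0.0, 1.0, n + 1)
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))))

m = 2
T_grid, S_grid = np.meshgrid(t, t)
B_t, B_s = np.meshgrid(B, B)
diff = np.abs(T_grid - S_grid)
mask = diff > 0                                  # avoid the diagonal

for eps in (0.10, 0.25, 0.40, 0.49):
    integrand = np.abs(B_t - B_s)[mask] ** (2 * m) / diff[mask] ** (1 + 2 * eps * m)
    value = integrand.sum() / n**2               # dt ds ~ 1/n^2
    print(f"eps = {eps:4.2f}:  seminorm ~ {value:.3e}")
```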
E.2 Processes belonging to W^{ε,2m}(0, T)

Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space and let $X(t)$, $t\in[0,T]$, be a real stochastic process on $(\Omega,\mathcal F,\mathbb P)$. One situation often encountered is when the following estimate holds for some $m>1$, $\epsilon\in(0,1/2)$ and $c_m>0$:
$$\mathbb E[|X(t)-X(s)|^{2m}]\le c_m|t-s|^m,\qquad\forall\,t,s\in[0,T]. \qquad (\text{E.2})$$
This estimate (provided $m>1$) allows us to conclude that the trajectories of $X$ are Hölder continuous almost surely, as the next proposition shows.

Proposition E.3 Assume that there are $m>1$, $\epsilon\in(0,1/2)$ and $c_m>0$ such that (E.2) is fulfilled. Then we have
$$\mathbb E\big[\|X\|^{2m}_{\epsilon,2m}\big]<+\infty. \qquad (\text{E.3})$$
Moreover, $X(\cdot,\omega)$ belongs to $C^{\epsilon-1/(2m)}([0,T])$ for almost all $\omega\in\Omega$.

Proof. We have in fact
$$\mathbb E\big(\|X\|^{2m}_{\epsilon,2m}\big)\le c_m\int_{[0,T]^2}|t-s|^{m-1-2\epsilon m}\,dt\,ds<\infty,$$
since $\epsilon\in(0,1/2)$ and $m-1-2\epsilon m>-1$. The last statement follows from the Sobolev embedding theorem. $\Box$

Remark E.4 (Kolmogorov test) This is a generalization of Proposition E.3. Assume that there are $a>0$, $b>0$ and $c>0$ such that
$$\mathbb E[|X(t)-X(s)|^{1+a}]\le c\,|t-s|^{1+b},\qquad\forall\,t,s\in[0,T]. \qquad (\text{E.4})$$
Then $X$ has α-Hölder continuous trajectories with $\alpha<\frac{b}{1+a}$.
E.3 Multi-dimensional Sobolev spaces and regularity of random fields

Let $\epsilon\in(0,1)$, $m\in\mathbb N$, $d\in\mathbb N$. Define
$$\|f\|^{2m}_{\epsilon,2m}:=\int_{[0,T]^{2d}}\frac{|f(x)-f(y)|^{2m}}{|x-y|^{d+2\epsilon m}}\,dx\,dy.$$
$W^{\epsilon,2m}([0,T]^d)$ is by definition the space of all $f:[0,T]^d\to\mathbb R$ such that $\|f\|_{\epsilon,2m}<+\infty$.

Theorem E.5 (Sobolev embedding) Assume that $\epsilon>d/(2m)$. Then the following inclusion holds, with continuous embedding:
$$W^{\epsilon,2m}([0,T]^d)\subset C^{\epsilon-d/(2m)}([0,T]^d). \qquad (\text{E.5})$$

Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space and let $X(x)$, $x\in[0,T]^d$, be a random field on $(\Omega,\mathcal F,\mathbb P)$. Assume that there are $m>1$, $\epsilon\in(0,1)$ and $c_m>0$ such that
$$\mathbb E[|X(x)-X(y)|^{2m}]\le c_m|x-y|^{2m},\qquad\forall\,x,y\in[0,T]^d. \qquad (\text{E.6})$$
This estimate implies that almost all trajectories of $X$ are Hölder continuous.

Proposition E.6 Assume that there are $m>1$, $\epsilon\in(0,1)$ and $c_m>0$ such that (E.6) is fulfilled. Then we have
$$\mathbb E\big[\|X\|^{2m}_{\epsilon,2m}\big]<+\infty. \qquad (\text{E.7})$$
Moreover, $X(\cdot,\omega)$ belongs to $C^{\epsilon-d/(2m)}([0,T]^d)$ for almost all $\omega\in\Omega$.

Proof. We have in fact
$$\mathbb E\big(\|X\|^{2m}_{\epsilon,2m}\big)\le c_m\int_{[0,T]^{2d}}|x-y|^{2m-d-2\epsilon m}\,dx\,dy<\infty,$$
since $\epsilon\in(0,1)$ and $2m-d-2\epsilon m>-d$. The last statement follows from the Sobolev embedding theorem. $\Box$

h ∈ R.λk . B(Rn )) by setting Nm. It is easy to see that m is the mean and q the covariance of Nm. When m = 0 we shall write NQ instead of Nm. n.Q for short.6 Let m ∈ Rn . ..q (dx) = √ (x−m)2 1 e− 2q dx. B(R) and Nm. z ∈ Rn . . Moreover. Nm.10 Chapter 1 If q > 0. if the determinant of Q is positive.x−a dx. z .Q .Q (dx) = 1 (2π)d det Q e− 2 1 Q−1 (x−a).. x − a µ(dx) = Qy.q .q (h) := R eihx Nm.Q (h) := Rn ei h. Let Q ∈ L+ (Rn ) and let (e1 .Q = ×N k=1 n mk .. k = 1. Proposition 1. x − a z.. Then we have xµ(dx) = m. Q ∈ L+ (Rn ) and µ = Nm. h ∈ Rn ..h . its Fourier transform is given by Nm.x µ(dx) = ei a. Then we define a probability measure Na.q (dx) = eimh− 2 qh ..2 Gaussian probability measures in Rn We are going to define a Gaussian measure Nm.5) 1.Q .Q is absolutely continuous with respect to the Lebesgue measure in Rn and we have Na. Moreover the Fourier tranform of Na. Na..q .Q is given by Na. .3. 1 2 (1. Therefore m is the mean and Q the covariance operator of Na. mn ) ∈ Rn and any Q ∈ L+ (Rn ). Rn y. The proof of the following proposition is easy. Finally. .. en ) be an orthonormal basis on Rn such that Qek = λk ek .Q on (Rn . for some λk ≥ 0. it is left to the reader.. 2πq When m = 0 we shall write for short Nq instead N0.h − 2 1 Qh.q is absolutely continuous with respect to the Lebesgue measure 1 (dx) = dx in (R.Q for any m = (m1 . Rn y.
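The finite-dimensional statement above is easy to test numerically. The following sketch (an illustration, not part of the text; the covariance matrix Q, the mean m and the test vector h are arbitrary choices) samples N_{m,Q} on R^3 and compares the empirical characteristic function with the closed form e^{i<m,h> - (1/2)<Qh,h>}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary mean and positive semi-definite covariance on R^3 (illustrative choice).
m = np.array([1.0, -0.5, 2.0])
A = rng.standard_normal((3, 3))
Q = A @ A.T                      # symmetric, positive semi-definite

h = np.array([0.3, -0.2, 0.1])   # test vector for the Fourier transform
X = rng.multivariate_normal(m, Q, size=200_000)

empirical = np.mean(np.exp(1j * X @ h))
exact = np.exp(1j * m @ h - 0.5 * h @ Q @ h)
print(empirical, exact)          # should agree up to Monte Carlo error
```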

Gaussian measures 11 1. h ∈ H. More precisely. Pn x = k=1 x. x. ek ek → ( x.5.Q (h) = ei m. (4) (1.3.h − 2 1 Qh.4 Computation of some Gaussian integrals To compute some integrals with respect to a Gaussian measure µ = Nm. . see e.6) One can show that such a measure does exist Proposition 1. en . 2006.g. n Pn (H) → R . ek ek . Show that the Fourier transform of µn is given by µn (h) = ei (4) Pn k=1 1 mk hk − 2 e Pn k=1 λk h2 k . given µ = Nm. We denote by Nm. . covariance Q and Fourier transform given by Nm.λk .3 Gaussian probability measures in H Let m ∈ H and Q ∈ L+ (H). SpringerVerlag. . Berlin.7 Prove that µn = (Pn )# µ = ×N i=1 n mk .. Exercise 1.. e1 . B(H)) of mean m.Q ∈ P(H). en ).3. it is unique thank’s to 1. Since Q is compact there exists an orthonormal complete system (ek ) in H and a sequence of nonnegative numbers (λk ) such that Qek = λk ek . x = k=1 n x. An introduction to infinite-dimensional analysis. Da Prato.h ..Q in an infinite dimensional Hilbert space H it is useful to reduce the computation to integrals on a sequence (Hn ) of finite dimensional vector spaces convergent to H and then to let n → ∞. Hint.Q the probability measure on 1 (H. we shall proceed as follows. ∀x∈H and identify Pn (H) with Rn through the isomorphism. n ∀ k ∈ N. For any n ∈ N we set mn := m. G.

m . e 2 xk Nmk . Proposition 1. ∞ 1 −1 x.λk (dxk ) = √ R ε 2 1 −ε e 2 1 − ελk m2 k 1−ελk .7 n e H ε |P x|2 2 n µ(dx) = Pn (H) e ε |P ξ|2 2 n µn (dξ) = k=1 R e 2 ξk Nmk . For any n ∈ N we have. if ε < 1 . (1 − εQ) x = 1 − ελk k=1 In this case we can define the determinant of (1 − εQ) by setting n ∞ det(1 − εQ) : = lim Exercise 1. otherwise. k=1 (1 − ελk ) > 0. ek ek .8 Prove that ∞ n→∞ (1 − ελk ) := k=1 (1 − ελk ). the linear operator 1 − εQ is invertible and (1 − εQ)−1 is bounded. as easily checked.12 Chapter 1 We shall assume (which is always true after a rearrangement) that λ1 ≥ λ2 ≥ · · · λn ≥ · · · .9 Let ε ∈ R. the conclusion follows from the monotone convergence theorem.λk (dξk ). λ1 (1. taking into account Exercise 1. k=1 Hint.7) Proof. . ε 2 e 2 |x| µ(dx) =  H +∞. Then we have  ε −1  [det(1 − εQ)]−1/2 e 2 (1−εQ) m. We have in fact. 1 To formulate the next result notice that for any ε < λ1 . Write log ∞ ∞ (1 − ελk ) k=1 = k=1 log(1 − ελk ) ∞ k=1 and show that the series is convergent since λk < +∞. x ∈ H. by an elementary computation. ε 2 Since |Pn x|2 ↑ |x|2 as n → ∞ and.
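Proposition 1.9 can also be checked after truncating to finitely many modes. The sketch below is illustrative only: the eigenvalues λ_k = k^{-2}, the truncation level and the value of ε are assumptions, and the mean is taken to be 0, so the right-hand side of (1.7) reduces to [det(1 − εQ)]^{-1/2} = ∏_k (1 − ελ_k)^{-1/2}.

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 1.0 / np.arange(1, 51) ** 2        # assumed trace-class eigenvalues, lambda_1 = 1
eps = 0.25                               # any eps < 1/lambda_1 keeps the integral finite

# Sample x ~ N_Q with Q = diag(lam): coordinates are independent N(0, lambda_k).
x = rng.standard_normal((100_000, lam.size)) * np.sqrt(lam)

mc = np.mean(np.exp(0.5 * eps * np.sum(x ** 2, axis=1)))
exact = np.prod(1.0 - eps * lam) ** (-0.5)   # [det(1 - eps Q)]^{-1/2} for mean 0
print(mc, exact)
```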

Exercise 1.10 Prove that for all m ∈ N,

\[
J_m := \int_H |x|^{2m}\,\mu(dx) < \infty,
\]

and compute J_m. Hint: notice that J_m = 2^m F^{(m)}(0), where

\[
F(\varepsilon) = \int_H e^{\frac{\varepsilon}{2}|x|^2}\,\mu(dx), \qquad \varepsilon > 0.
\]

Proposition 1.11 We have

\[
\int_H e^{\langle h,x\rangle}\,\mu(dx) = e^{\langle m,h\rangle}\, e^{\frac12\langle Qh,h\rangle}, \qquad h \in H. \tag{1.8}
\]

Proof. For any ε > 0 we have e^{⟨h,x⟩} ≤ e^{|x| |h|} ≤ e^{\frac{\varepsilon}{2}|x|^2} e^{\frac{1}{2\varepsilon}|h|^2}. Choosing ε < 1/λ_1 we have, by the dominated convergence theorem, that

\[
\int_H e^{\langle h,x\rangle}\,\mu(dx)
= \lim_{n\to\infty}\int_H e^{\langle h,P_n x\rangle}\,\mu(dx)
= \lim_{n\to\infty}\int_{P_n(H)} e^{\langle h,P_n \xi\rangle}\,\mu_n(d\xi)
= \lim_{n\to\infty} e^{\langle P_n m,h\rangle}\, e^{\frac12\langle P_n Qh,h\rangle}
= e^{\langle m,h\rangle}\, e^{\frac12\langle Qh,h\rangle}.
\]

1.3.5 The Cameron–Martin space

We are given a Gaussian measure µ = N_Q, where Q ∈ L_1^+(H). We say that µ is non degenerate if Ker Q := {x ∈ H : Qx = 0} = {0}. Thus, if H is finite-dimensional, µ is non degenerate if and only if det Q > 0.

Assume now that H is infinite-dimensional and that µ is non degenerate. We denote by (e_k) a complete orthonormal system in H such that Qe_k = λ_k e_k, k ∈ N, where (λ_k) are the eigenvalues of Q, and we set x_k = ⟨x, e_k⟩, k ∈ N. We notice that the inverse Q^{-1} of Q (which is well defined since Ker Q = {0}) is not continuous because

\[
Q^{-1}e_k = \frac{1}{\lambda_k}\, e_k, \qquad k \in \mathbb{N},
\]

and λ_k → 0 as k → ∞. Consequently, recalling the closed graph theorem, we see that the range Q(H) does not coincide with H. However, it is dense in H, as the following lemma shows.

Lemma 1.12 Q(H) is a dense subspace of H.

Proof. In fact, if x_0 is an element of H orthogonal to Q(H), we have ⟨Qx, x_0⟩ = ⟨x, Qx_0⟩ = 0 for all x ∈ H, which yields Qx_0 = 0, and so x_0 = 0 because Ker(Q) = {0}.

It is useful to introduce the operator Q^{1/2}, defined as

\[
Q^{1/2}x = \sum_{k=1}^{\infty} \sqrt{\lambda_k}\,\langle x, e_k\rangle\, e_k, \qquad x \in H.
\]

Its range Q^{1/2}(H) is called the Cameron–Martin space of the measure µ. Arguing as before we see that Q^{1/2}(H) is a subspace of H different from H and dense in H. Moreover, it is clear that x ∈ Q^{1/2}(H) if and only if

\[
\sum_{k=1}^{\infty} \lambda_k^{-1} x_k^2 < +\infty.
\]

It is important to notice that the measure of the Cameron–Martin space is zero.

Proposition 1.13 We have µ(Q^{1/2}(H)) = 0.

Proof. For any n, k ∈ N set

\[
U_n = \Big\{ y \in H : \sum_{h=1}^{\infty} \lambda_h^{-1} y_h^2 < n^2 \Big\} = \{ y \in Q^{1/2}(H) : |Q^{-1/2}y| < n \},
\qquad
U_{n,k} = \Big\{ y \in H : \sum_{h=1}^{2k} \lambda_h^{-1} y_h^2 < n^2 \Big\}.
\]

Clearly U_n ↑ Q^{1/2}(H) as n → ∞ and, for any n ∈ N, U_{n,k} ↓ U_n as k → ∞. So, it is enough to show that

\[
\mu(U_n) = \lim_{k\to\infty} \mu(U_{n,k}) = 0. \tag{1.9}
\]

We have in fact

\[
\mu(U_{n,k}) = \int_{\{y\,:\,\sum_{h=1}^{2k} \lambda_h^{-1} y_h^2 < n^2\}} \ \prod_{h=1}^{2k} N_{\lambda_h}(dy_h),
\]

which, setting z_h = λ_h^{-1/2} y_h, is equivalent to

\[
\mu(U_{n,k}) = \int_{\{z \in \mathbb{R}^{2k} : |z| < n\}} N_{I_{2k}}(dz),
\]

where I_{2k} is the identity in R^{2k}. Let us compute µ(U_{n,k}). Passing to polar coordinates and then setting ρ = r²/2 we find

\[
\mu(U_{n,k}) = \frac{\int_0^n e^{-r^2/2}\, r^{2k-1}\, dr}{\int_0^{+\infty} e^{-r^2/2}\, r^{2k-1}\, dr}
= \frac{\int_0^{n^2/2} e^{-\rho}\, \rho^{k-1}\, d\rho}{\int_0^{+\infty} e^{-\rho}\, \rho^{k-1}\, d\rho}.
\]

Therefore

\[
\mu(U_{n,k}) = \frac{1}{(k-1)!} \int_0^{n^2/2} e^{-\rho}\, \rho^{k-1}\, d\rho
\le \frac{1}{(k-1)!} \int_0^{n^2/2} \rho^{k-1}\, d\rho = \frac{1}{k!}\left(\frac{n^2}{2}\right)^{k},
\]

which tends to 0 as k → ∞, and (1.9) follows.
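Proposition 1.13 can be watched numerically. The sketch below is an illustration under the assumption Q = diag(k^{-2}) in a fixed orthonormal basis (this particular Q is not a choice made in the text): for a sample x of N_Q the truncated sums Σ_{k≤K} λ_k^{-1} x_k² grow without bound in K, so the sample falls outside Q^{1/2}(H).

```python
import numpy as np

rng = np.random.default_rng(2)

K = 10_000
lam = 1.0 / np.arange(1, K + 1) ** 2           # assumed eigenvalues of Q
x = rng.standard_normal(K) * np.sqrt(lam)       # one sample of N_Q, coordinates x_k

partial = np.cumsum(x ** 2 / lam)               # sum_{k<=K} lambda_k^{-1} x_k^2
for K_trunc in (10, 100, 1000, 10_000):
    print(K_trunc, partial[K_trunc - 1])        # grows roughly like K: the series diverges
```

The terms x_k²/λ_k are independent chi-square variables with one degree of freedom, so the partial sums grow linearly, in agreement with µ(Q^{1/2}(H)) = 0.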

and Q(X)h.1 Notations Let (Ω. X : Ω → H a random variable such that |X(ω)|2 P(dω) < ∞. ∀h∈H and that m(X). ∀ h ∈ H. F . h X(ω) − m(X).h e− 2 1 Q(X)h. Definition 2. H a separable Hilbert space. k = Ω X(ω) − m(X). by m(X) the mean of X# P and by Q(X) the covariance of X# P. k P(dω). ∀ h ∈ H. h P(dω).h . By the change of variables formula it follows that the Fourier transform of X# P is given by X# P(h) = Ω ei X(ω). In this case we call m(X) the mean and Q(X) the covariance of X.1 We say that X# P is a Gaussian random variable if X# P is a Gaussian measure. k ∈ H. that is if X# P(h) = ei m(X). ∀ h. Ω We denote by X# P the law of X. h = Ω X(ω).Chapter 2 Gaussian random variables 2. 17 . P) be a probability space.h P(dω).

Then X = (X1 ..m(X) e− 2 a2 Q(X)ek ..4 Let n ∈ N and let X1 .. Xn ) is a Rn -valued random variable.... . . . ω ∈ Ω.. n.2. ek = Ω Xk (ω)P(dω) = m(Xk ) and for any j. ... ... X1 .....2 Let n ∈ N. . F .. if conversely X1 . More precisely. let (e1 . m(X)n ) and Q(X) is a n × n matrix denoted Q(X)i..1 Independent real variables Definition 2. 2. So.... then X = (X1 . . In particular. Xn (ω)).. Notice that. .18 Chapter 2 Example 2... . F .. .. P). Xn be real random variables on (Ω...k = Q(X)ej . i.. Example 2.3 Assume that X = (X1 .... .. 2. Then X1 . Consider the Rn -valued random variable X(ω) = (X1 (ω). .. m(X) is a vector of Rn denoted by (m(X)1 .. Xn are real Gaussian random variables. ek = Ω (Xj (ω) − mj (Xj ))(Xk (ω) − mk (Xk ))P(dω). . ...X(ω) P(dω) = eiam(Xk ) e− 2 a 1 2 Q(X ) k = ei aek .. Xn be real random variables in (Ω. n we have Q(X)j. Xn are real Gaussian random variables. k = 1. In fact if k = 1...2 Independence In this section we introduce the basic concept of independence. . k = 1. en ) be the canonical basis in Rn .ek .. . n we have m(X)k = m(X).j . if j = k we find Q(X)k. Then for any k = 1.. Xn ) is a n-dimensional Gaussian random variable... n and a ∈ R we have eiaXk (ω) P(dω) = Ω Ω 1 ei aek .. Xn ) is not necessarily Gaussian.k = Q(Xk ). n.. P). j = 1.

.. In ∈ B(R).. Xn are independent if X# P = 19 ×(X ) j=1 n j # P... Xn ) and let ψ : Rn → R be defined as ψ(ξ1 . Proposition 2.1) = Ω ϕ1 (X1 (ω))P(dω) · · · Ω ϕn (Xn (ω))P(dω).. .. . Then we have ϕ1 (X1 (ω)) · · · ϕn (Xn (ω))P(dω) Ω (2.. Xn it is enough to show that (X# P)(I1 × · · · × In ) = ((X1 )# P)(I1 ) · · · ((Xn )# P)(In )..... Let moreover ϕ1 . . .. taking into account the independence of X1 . Xn . . ξn ) ∈ Rn .. Xn are independent. n ∈ N. .1) ϕi = 1 Ii .1) holds for any choice of functions ϕ1 . be real independent random variables in (Ω.. A necessary and sufficient condition for the independence is provided by the following proposition..1) holds for any choice of positive Borel functions ϕ1 . .. . . . To prove independence of X1 .. ϕn positive Borel. But this follows immediately setting in (2... . They are called independent if Xi1 . . l i = 1.5 Let X1 .. P). ϕn be Borel positive functions. . Then by the change of variable formula we have. Proof.. Xn . F . (ξ1 ... if (2. Assume conversely that (2. n.. .. ∀ I1 .... . . ϕ1 (X1 (ω)) · · · ϕn (Xn (ω))P(dω) = Ω Ω ψ(X(ω))P(dω) = Rn ψ(ξ)(X# P)(dξ) = R ϕ1 (ξ1 )((X1 )# P)(dξ1 ) · · · R ϕk (ξn )((Xn )# P)(dξn ) = Ω ϕ1 (X1 (ω))P(dω) · · · Ω ϕn (Xn (ω))P(dω)... ϕn .. Xin are independent for any choice of n and of positive integers i1 < i2 < · · · < in .random variables We say that X1 ... . Conversely. then X1 .... Let (Xi ) be a sequence of real random variables. ξn ) = ϕ1 (ξ1 ) · · · ϕk (ξn ). Set X = (X1 ... .

.. ∀ h = (h1 .... jk less or equal to n.10 does not hold in general.. Xn ).10 Let X1 . l l Exercise 2. F ..... Ω = Ω (Xi (ω) − mi (X))P(dω) The converse of Proposition 2. j = 1. ..6 Let X1 . We say that the sets A1 . Proof. ..6) for i. . Xn are independent if and only if n X# P(h) = k=1 (Xk )# P(hk ).. Xn be real independent random variables in (Ω.... Xn be real independent random variables in (Ω. An are independent if the random variables 1 A1 . Show that X1 · · · Xn dP = Ω Ω X1 dP × · · · × Ω Xn dP and V (X1 + · · · + Xn ) = V (X1 ) + · · · + V (Xn ).. An are independent if and only if P(Aj1 ∩ · · · ∩ Ajk ) = P(Aj1 ) × · · · × P(Ajk ). for all k = 1.. P) and let X = (X1 . An ∈ F . 1 An are so.. F .. ... P) be a probability space and A1 . .. n and k different positive integer j1 ... ..8 Let (Ω. Proposition 2... F ..9 Show that sets A1 ... P). Xn be real random variables in (Ω. ..7 Let X1 ..j = Ω (Xi (ω) − mi (X))(Xj (ω) − mj (X))P(dω) (Xj (ω) − mj (X))P(dω) = 0. F .. . .20 Chapter 2 Exercise 2. Then the covariance matrix Q(X) is diagonal. hn ) ∈ Rn ... We have in fact (by Exercise 2. . P) and let X = (X1 .. Definition 2... Proposition 2.. Xn ). . . The following useful result is left to the reader as an exercise... n Q(X)i. . Then X1 . . .

Xn ) is Gaussian. P) and let X = (X1 . .h e− 2 n 1 Pn k=1 Q(X)k. hn ) ∈ H...... ... Xn ).. Xn are independent the conclusion follows from Proposition 2.. By Proposition 2. hn ) ∈ Rn . F .. Xn ) is Gaussian. B(H)..3 Gaussian random variables defined in a Hilbert space We now consider the case when (Ω. F .. Proof. .. We have in fact X# P(h) = ei m(X).12 Assume that X1 .. n X# P(h) = Ω e i(X1 (ω)h1 +···+X1 (ω)hn ) P(dω) = k=1 Ω 2 eiXk (ω)hk P(dω) = ei(m(X1 )h1 +···+m(Xn )hn ) e− 2 (Q(X1 )h1 +···+Q(Xn )hn ) . taking into account the independence of (X1 . . 1 2 Proposition 2..random variables 21 2.Q with m ∈ H and Q ∈ L+ (H).2 Independent Gaussian random variables Let X1 ... Xn are independent Gaussian random variables. Proof.h = ei m(X). .h e− 2 =e i m(X). 1 . Xn ). P) coincides with (H. Proposition 2. Xn are real random variables and that X = (X1 .h 1 Q(X)h. .. . If X1 .k h2 k e 1 −2 Pn k=1 Q(Xk )h2 k = i=1 (Xk )# P(h). Xn are independent if and only if Q(X) is diagonal. ......7 it is enough to show that n X# P(h) = i=1 (Xk )# P(h). where H is a separable Hilbert space and µ = Nm. Then X = (X1 ...11.11 Assume that X1 . µ)... Then X1 .. 2. for each h = (h1 . Then. Xn be real random variables in (Ω... let h = (h1 .. . In fact.. .2. Assume now that Q(X) is diagonal.. .

2.3.1 Affine changes of variables

Let b ∈ K and A ∈ L(H, K), where K is another separable Hilbert space. Let us consider the affine transformation

\[
T(x) = Ax + b, \qquad x \in H.
\]

Proposition 2.13 T is a Gaussian random variable and its law T_\#µ is given by N_{Am+b,\,AQA^*}, where A^* is the adjoint of A.

Proof. We have in fact

\[
\int_K e^{i\langle k,y\rangle}\, T_\#\mu(dy) = \int_H e^{i\langle k,T(x)\rangle}\,\mu(dx)
= \int_H e^{i\langle k,Ax+b\rangle}\,\mu(dx)
= e^{i\langle k,b\rangle}\int_H e^{i\langle A^*k,x\rangle}\,\mu(dx)
= e^{i\langle k,Am+b\rangle}\, e^{-\frac12\langle AQA^*k,k\rangle}, \qquad k \in K.
\]

Example 2.14 Let µ = N_{m,Q}, n ∈ N and f_1, ..., f_n ∈ H. Let F : H → R^n be defined as

\[
F(x) := (\langle x, f_1\rangle, ..., \langle x, f_n\rangle), \qquad x \in H.
\]

Then by Proposition 2.13, F is a Gaussian random variable with mean m(F) and covariance Q(F) given by

\[
m(F) = F(m) = (\langle m, f_1\rangle, ..., \langle m, f_n\rangle), \qquad Q(F) = FQF^*.
\]

On the other hand, the linear operator F^* : R^n → H is given by

\[
F^*(\xi) = \sum_{k=1}^n f_k\, \xi_k, \qquad \forall\, \xi = (\xi_1, ..., \xi_n) \in \mathbb{R}^n.
\]

Therefore

\[
QF^*(\xi) = \sum_{k=1}^n Qf_k\, \xi_k
\qquad\text{and}\qquad
FQF^*(\xi) = \Big( \Big\langle \sum_{k=1}^n Qf_k\,\xi_k,\, f_1 \Big\rangle, ..., \Big\langle \sum_{k=1}^n Qf_k\,\xi_k,\, f_n \Big\rangle \Big),
\]

so that Q(F)_{h,k} = ⟨Qf_h, f_k⟩, h, k = 1, ..., n. Therefore F_1, ..., F_n are independent if and only if

\[
\langle Qf_h, f_k\rangle = 0 \quad\text{if } h \ne k. \tag{2.2}
\]
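Condition (2.2) admits a quick finite-dimensional check. The following sketch is illustrative only: H = R^4, the covariance Q and the vector f_1 are generated at random, and f_2 is built to be Q-orthogonal to f_1; under N_{0,Q} the empirical covariance of the pair (⟨x,f_1⟩, ⟨x,f_2⟩) matches ⟨Qf_1, f_2⟩ and vanishes.

```python
import numpy as np

rng = np.random.default_rng(3)

d = 4
A = rng.standard_normal((d, d))
Q = A @ A.T                                     # covariance operator on H = R^4 (assumed)

f1 = rng.standard_normal(d)
g = rng.standard_normal(d)
f2 = g - (f1 @ Q @ g) / (f1 @ Q @ f1) * f1      # makes <Q f1, f2> = 0

x = rng.multivariate_normal(np.zeros(d), Q, size=200_000)
F1, F2 = x @ f1, x @ f2                         # components of F(x)

print(np.cov(F1, F2)[0, 1], f1 @ Q @ f2)        # both ~ 0: F1 and F2 are independent
print(np.var(F1), f1 @ Q @ f1)                  # Var F1 equals <Q f1, f1>
```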

2.4

The white noise function

In order to define the white noise function (which will play an important role in what follows), we shall deal with equivalence class of random variables (rather than random variables), which we briefly discuss in the next subsection.

2.4.1

Equivalence classes of random variables

Let (Ω, F , P) be a probability space and let H be a separable Hilbert space. We denote by R(H) the set of all H-valued random variables. Definition 2.15 We say that X, Y ∈ R(H) are equivalent (and write X ∼ Y ) if P({ω ∈ Ω : X(ω) = Y (ω)}) = 1. One can easily check that X ∼ Y, X, Y ∈ R(H) is an equivalence relation, so that the set R(H) is disjoint union of equivalences classes. We notice that if X ∼ Y then the laws of X and Y coincide. In fact set K = {ω ∈ Ω : X(ω) = Y (ω)}, so that P(K) = 0. Since for any I ∈ B(H) we have X −1 (I) ⊂ Y −1 (I) ∪ K, it follows that P(X −1 (I)) ≤ P(Y −1 (I)) and, exchanging X and Y we see that P(X −1 (I)) = P(Y −1 (I)). Consequently, all random variables belonging to a fixed equivalence class ˜ ˜ X have the same law, which is called the law of X. In the following we shall not distinguish between a random variable X ˜ and the equivalence class X including X, except when needed.

By L^p(Ω, F, P; H), p ≥ 1, we mean the space of all equivalence classes of random variables X : Ω → H such that

\[
\int_\Omega |X(\omega)|^p\, \mathbb{P}(d\omega) < +\infty.
\]

L^p(Ω, F, P; H), endowed with the norm

\[
\|X\|_{L^p(\Omega,F,\mathbb{P};H)} = \left( \int_\Omega |X(\omega)|^p\, \mathbb{P}(d\omega) \right)^{1/p},
\]

is a Banach space. We shall write L^p(Ω, F, P; H) = L^p(Ω, P; H) for brevity.

We prove now that the limit of a convergent sequence in L^2(Ω, P; H) of Gaussian random variables is Gaussian.

Proposition 2.16 Let (X_n) ⊂ L^2(Ω, P; H) be a sequence of Gaussian random variables convergent to X in L^2(Ω, P; H). Then X is a Gaussian random variable and

\[
\langle m(X), h\rangle = \lim_{n\to\infty} \langle m(X_n), h\rangle, \quad h \in H,
\qquad
\langle Q(X)h, k\rangle = \lim_{n\to\infty} \langle Q(X_n)h, k\rangle, \quad h, k \in H.
\]

Proof. Since X_n → X in L^2(Ω, P; H) we have

\[
\lim_{n\to\infty} \langle m(X_n), h\rangle
= \lim_{n\to\infty} \int_\Omega \langle X_n(\omega), h\rangle\, \mathbb{P}(d\omega)
= \int_\Omega \langle X(\omega), h\rangle\, \mathbb{P}(d\omega) = \langle m(X), h\rangle
\]

and

\[
\lim_{n\to\infty} \langle Q(X_n)h, k\rangle
= \lim_{n\to\infty} \int_\Omega \langle X_n(\omega) - m(X_n), h\rangle\,\langle X_n(\omega) - m(X_n), k\rangle\, \mathbb{P}(d\omega)
= \int_\Omega \langle X(\omega) - m(X), h\rangle\,\langle X(\omega) - m(X), k\rangle\, \mathbb{P}(d\omega)
= \langle Q(X)h, k\rangle.
\]

Let us show now that X is a Gaussian random variable. We have in fact

\[
\int_H e^{i\langle x,h\rangle}\, (X_\#\mathbb{P})(dx)
= \int_\Omega e^{i\langle X(\omega),h\rangle}\, \mathbb{P}(d\omega)
= \lim_{n\to\infty} \int_\Omega e^{i\langle X_n(\omega),h\rangle}\, \mathbb{P}(d\omega)
= \lim_{n\to\infty} e^{i\langle m(X_n),h\rangle}\, e^{-\frac12\langle Q(X_n)h,h\rangle}
= e^{i\langle m(X),h\rangle}\, e^{-\frac12\langle Q(X)h,h\rangle}.
\]

e.random variables 25 2. Q−1/2 z2 µ(dx) = QQ−1/2 z1 . y µ a. ∀ x ∈ H.4. Let us define a mapping W : Q1/2 (H) → C(H). Q−1/2 z1 x. Since Q is compact there exists a complete orthonormal basis (ek ) on H and a sequence of positive numbers (λk ) such that Qek = λk ek . However this definition is meaningless because µ(Q1/2 (H)) = 0. Since Q1/2 (H) is dense in H. the mapping W can be uniquely extended as a mapping from H into L2 (H. β ∈ R we have Wf (αx + βy) = αWf (x) + βWf (y). Q−1/2 z . z2 .2 Definition of the white noise function In this section we assume that the Hilbert space H is infinite dimensional and consider a non degenerate Gaussian measure µ = NQ in H (Ker (Q) = {0}). Remark 2. z2 ∈ Q1/2 (H) we have Wz1 (x)Wz2 (x)µ(dx) = z1 . We have in fact Wz1 (x)Wz2 (x)µ(dx) = H H x. QQ−1/2 z2 = z1 .. µ) which we denote still by W and call the white noise function. z . where Wz (x) = x. by Proposition 1. Wf is linear in the sense that for all α. H k ∈ N.18 Given z ∈ H (not belonging to Q1/2 (H)) it would be tempting to define the random variable Wz by setting.17 For all z1 .3) Proof. Lemma 2. x. x ∈ Q1/2 (H). Here Q1/2 (H) is the Cameron–Martin space and C(H) the space of all real continuous functions on H. z → Wz (2.13 . z2 . Wz (x) = Q−1/2 x.

∀ η ∈ R.4) The random variables Wz1 .... zk ... (zj ) be n sequences in Q1/2 (H) convergent respectively to z1 . . n 1 Proof. Proposition 2.k=1 zj .k = zh .19 Let z ∈ H. by the dominated convergence theorem.. Wzn ) is an ndimensional Gaussian random variable with mean 0 and covariance operator Qz given by (Qz )h...19 is important. Then (Wz1 .Q H 1 j −1/2 (ξ j j 1 z1 +···+ξn zn ) µ(dx) 2 1 = lim e− 2 |ξ1 z1 +···+ξn zn | = e− 2 |ξ1 z1 +···+ξn zn | = e− 2 j→∞ j 2 1 Pn j.. .. Wzn are independent if and only if z1 . Then.. that ei(ξ1 Wz1 (x)+···+ξn Wzn (x)) µ(dx) = lim H j→∞ ei(ξ1 H j j Q−1/2 z1 .. z1 . Proof.x µ(dx) = lim e− 2 η n→∞ 1 2 |z n| 2 = e− 2 η 1 2 |z|2 . So. k = 1. Let (zn ) ⊂ Q1/2 (H) be a sequence such that zn → z in H. The following generalization of Proposition 2..x +···+ξn Q−1/2 zn . Let (zj ). ..20 Let n ∈ N. we have eiηWz (x) µ(dx) = lim H n→∞ eiη Q H −1/2 z n ...x ) µ(dx) = lim j→∞ ei x. . zn ∈ H...26 Chapter 2 Proposition 2. . h.zk |ξj ξk .. . n. . zn are mutually orthogonal. the conclusion follows. (2.. Then we have by the dominated convergence theorem. Then Wz is a real Gaussian random variable with mean 0 and covariance |z|2 .. zn in H. . We have to show that eiηWz (x) µ(dx) = e− 2 η H 1 2 |z|2 .
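The isometry behind the extension of W, namely ∫ W_{z_1} W_{z_2} dµ = ⟨z_1, z_2⟩ and Var W_z = |z|², can be illustrated with a finite truncation. The sketch below is an assumption-laden illustration: the eigenvalues λ_k = k^{-2}, the truncation level and the particular vectors z_1, z_2 are arbitrary choices, and in coordinates W_z(x) = Σ_k λ_k^{-1/2} z_k x_k for x distributed as N_Q.

```python
import numpy as np

rng = np.random.default_rng(4)

K = 200
lam = 1.0 / np.arange(1, K + 1) ** 2                    # assumed eigenvalues of Q
x = rng.standard_normal((100_000, K)) * np.sqrt(lam)    # samples of N_Q (coordinates)

z1 = 1.0 / np.arange(1, K + 1)                          # two arbitrary elements of H
z2 = np.sin(np.arange(1, K + 1))

def W(z, x):
    # W_z(x) = <x, Q^{-1/2} z> = sum_k lambda_k^{-1/2} z_k x_k
    return x @ (z / np.sqrt(lam))

print(np.mean(W(z1, x) * W(z2, x)), z1 @ z2)            # E[W_{z1} W_{z2}] ~ <z1, z2>
print(np.var(W(z1, x)), z1 @ z1)                        # Var W_{z1} ~ |z1|^2
```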

Chapter 3 Brownian Motion
3.1 Stochastic Processes

We are given a probability space (Ω, F, P). We denote by P^* the outer measure of P. We recall that a null set of Ω is a set of outer measure zero. For any integrable real random variable F we write

\[
\mathbb{E}(F) = \int_\Omega F(\omega)\, \mathbb{P}(d\omega).
\]

So, in particular, we have F_\#P(I) = E(1_I(F)) for all I ∈ B(R).

We say that a property π concerning elements of Ω holds P-a.s. if the set where π does not hold is a null set.

Definition 3.1 A family X = (X(t))_{t≥0} of real random variables in (Ω, F, P) is called a real stochastic process in [0, +∞). For any ω ∈ Ω, X(·, ω) is called a trajectory of X.

• X is Gaussian if for any n ∈ N and any 0 ≤ t_1 < · · · < t_n the n-dimensional random variable (X(t_1), ..., X(t_n)) is Gaussian.

• X is continuous if X(·, ω) is continuous P-a.s.

• X is p-mean continuous, p ≥ 1, if

(i) X(t) is p-integrable for any t ≥ 0;

(ii) we have

\[
\lim_{t\to t_0} \mathbb{E}\big[|X(t) - X(t_0)|^p\big] = 0, \qquad \forall\, t_0 \ge 0. \tag{3.1}
\]

28

Chapter 3

We notice that a p-mean continuous process is not continuous in general. We say that two stochastic processes X and Y are equivalent if for all t ≥ 0 we have X(t, ω) = Y (t, ω), P-a.s..

When X and Y are equivalent we also say that Y is a version of X (or that X is a version of Y ).

3.2

Brownian motion

Definition 3.2 A real Brownian motion B = (B(t))t≥0 on (Ω, F , P) is a real stochastic process such that (i) B(0) = 0 and if 0 ≤ s < t, B(t) − B(s) is a real Gaussian random variable with law Nt−s . (ii) If 0 < t1 < ... < tn , the random variables, B(t1 ), B(t2 ) − B(t1 ), · · · , B(tn ) − B(tn−1 ) are independent. We express condition (ii) by saying that B is a process with independent increments. Lemma 3.3 Let t, s > 0. Then E[B(t)(B(s)] = min{t, s}. Proof. Let for instance t > s. Then we have E[B(t)B(s)] = E[(B(t) − B(s))B(s)] + E[B 2 (s)]. On the other hand, B(t) − B(s) is independent of B(s) so that E[(B(t) − B(s))B(s)] = E[B(t) − B(s)]E[B(s)] = 0. Since the law of B(s) is Ns we conclude that E[B(t)B(s)] = s as required. (3.2)
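Lemma 3.3 is easy to see on simulated paths. The following is a numerical sketch (the time grid, the sample size and the pair (s, t) are arbitrary choices): build B from independent Gaussian increments and compare the empirical value of E[B(t)B(s)] with min{t, s}.

```python
import numpy as np

rng = np.random.default_rng(5)

T, n_steps, n_paths = 1.0, 500, 20_000
dt = T / n_steps

# Brownian paths: cumulative sums of independent N(0, dt) increments, B(0) = 0.
dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

s, t = 0.3, 0.7
i_s, i_t = int(s / dt), int(t / dt)
print(np.mean(B[:, i_t] * B[:, i_s]), min(s, t))   # E[B(t)B(s)] ~ min{t, s} = 0.3
```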

Brownian motion

29

3.2.1

Construction of a Brownian motion

Consider the probability space (H, B(H), µ), where H = L2 (0, +∞) and µ = NQ , Q being an arbitrary (but fixed) non degenerate Gaussian measure in H. Define B(t) = W1l[0,t] , t ≥ 0, (3.3) where 1 [0,t] (s) = l   1 if s ∈ [0, t],  0 otherwise,

and W is the white noise function defined in Chapter 2. More precisely, for any t ≥ 0 we choose an arbitrary element in the equivalence class of B(t) which we still denote by B(t). Clearly, for any t ≥ 0, B(t) is a Gaussian random variable Nt and for any t > s ≥ 0, B(t) − B(s) = W1l(s,t] is a Gaussian random variable Nt−s . So, B fulfills Definition 3.2(i). Let us prove (ii). Since the system of elements of H, (1 [0,t1 ] , 1 (t1 ,t2 ] , ..., 1 (tn−1 ,tn ] ), l l l is orthogonal, we have by Proposition 2.20 that the random variables B(t1 ), B(t2 ) − B(t1 ), · · · , B(tn ) − B(tn−1 ) are independent. Thus (ii) is proved as well.
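One concrete way to realise B(t) = W_{1_{[0,t]}} numerically is a sketch under assumptions that are not made in the text: restrict to [0, T], fix the cosine orthonormal basis of L²(0, T) and truncate the expansion. Expanding 1_{[0,t]} over an orthonormal system (e_k) and using that the variables W_{e_k} are independent N_{0,1} (Propositions 2.19–2.20) gives B(t) ≈ Σ_k ⟨1_{[0,t]}, e_k⟩ ξ_k with ξ_k i.i.d. standard Gaussians.

```python
import numpy as np

rng = np.random.default_rng(6)

T, K, n_paths = 1.0, 1000, 2000          # horizon, truncation level, sample size (assumed)
t = np.linspace(0.0, T, 501)
k = np.arange(1, K + 1)

# <1_[0,t], e_k> for the cosine basis e_0 = 1/sqrt(T), e_k(s) = sqrt(2/T) cos(k pi s / T).
phi = np.sin(np.outer(t, k) * np.pi / T) * (np.sqrt(2.0 * T) / (k * np.pi))

xi = rng.standard_normal((n_paths, K + 1))            # xi_k = W_{e_k}, i.i.d. N(0, 1)
B = np.outer(xi[:, 0], t / np.sqrt(T)) + xi[:, 1:] @ phi.T

print(np.var(B[:, -1]), T)                            # Var B(T) ~ T
print(np.mean(B[:, 250] * B[:, -1]), 0.5)             # E[B(1/2) B(1)] ~ min{1/2, 1}
```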

3.2.2

Some properties of a Brownian motion

Proposition 3.4 Let B(t), t ≥ 0, be a Brownian motion on (Ω, F , P). Then B is a Gaussian process. Moreover, if 0 < t1 < ... < tn the law of (B(t1 ), ..., B(tn )) is given by P((B(t1 ), ..., B(tn )) ∈ I) = (2π)−n/2 (t1 (t2 − t1 ) × · · · × (tn − tn−1 ))−1/2
I

e

− 2t1 −
1

η2

(ηn −ηn−1 )2 (η2 −η1 )2 −·− 2(t −t 2(t2 −t1 ) n n−1 )

dη,

(3.4) for all I ∈ B(R ).
n

Proof. Let 0 < t1 < ... < tn and set X := (B(t1 ), B(t2 ) − B(t1 ), ..., B(tn ) − B(tn−1 )) Z := (B(t1 ), ..., B(tn )).

η2 − η1 .. m ∈ N.. consider the linear mapping T ∈ L(Rn ) defined by.4). Since B(t) − B(t0 ) is a Gaussian random variable Nt−t0 . η = Q T −1 −1 −1 2 (ηn − ηn−1 )2 η1 (η2 − η1 )2 − ··· − − η = t1 (t2 − t1 ) (tn − tn−1 ) η. F .. Exercise 3. . as easily checked.30 Chapter 3 Since random variables B(t1 ). B(tn ) − B(tn−1 ) are independent. x1 + x2 . Moreover.13 Z is Gaussian with mean 0 and covariance Q(Z) = T Q(X)T ∗ where T ∗ is the transpose of T . It is clear that Z = T (X).. t2 − t1 ...6 Let B(t) be a Brownian motion in a probability space (Ω. F . the conclusion follows.11 it follows that X is a n-dimensional Gaussian random variable with mean 0 and covariance operator Q(X) = diag (t1 . ∀ (x1 . . .. T −1 and so. Now. Let t > t0 ≥ 0. . Proof.. . ηn − ηn−1 ).... x1 + · · · + xn ). Therefore by Proposition 2. m!2m Therefore lim E(|B(t) − B(t0 )|2m ) = 0 t→0 and the conclusion follows. T (x1 . P). P). since T −1 η = (η1 .. Prove that the following are Brownian motions. It is enough to show the result for p = 2m. tn − tn−1 ).. . Then B is p-mean square continuous for all p ≥ 1.5 Let B(t). be a Brownian motion on (Ω. we have (Q(Z)) η. t ≥ 0. .. xn ) ∈ Rn . If I ∈ B(Rn ) we have P(Z ∈ I) = (2π)−n/2 (det Q(Z))−1/2 I e− 2 1 (Q(Z))−1 η. B(t2 ) − B(t1 ). xn ) = (x1 . It remain to show (3. we have det Q(Z) = det Q(X) = t1 (t2 − t1 ) × · · · × (tn − tn−1 ).. we have E(|B(t) − B(t0 )|2m ) = R |ξ|2m Nt−t0 (dξ) = (2m)! (t − t0 )m . Proposition 3. Since det T = det T ∗ = 1.η dη.... by Proposition 2.

.Brownian motion (i) B1 (t) = B(t + h) − B(h). (iii) B3 (t) = tB(1/t)..5) and T 2 n t E 0 f (s)dB(s) = j=1 |f (tj−1 )|2 (tj − tj−1 ) = 0 f 2 (s)ds. f0 . t ≥ 0. We want to define the stochastic integral: T f (s)dB(s). Let us prove (3.7) +2E j<k f (tj−1 )f (tk−1 )[B(tj ) − B(tj−1 )][B(tk ) − B(tk−1 )] . l Then define T n f (s)dB(s) := 0 j=1 ftj−1 (B(tj ) − B(tj−1 )). T ) with T > 0. t > 0.5) is obvious. 0 We start with step functions.3 Wiener integral Let B(t). (ii) B2 (t) = αB(α−2 t). We have n E(|Iσ (f )| ) = E j=1 n 2 |f (tj−1 )|2 [B(tj ) − B(tj−1 )]2 (3. Identity (3. (iv) B4 (t) = −B(t). t ≥ 0. where α > 0 is given. Let 0 = t0 < t1 < · · · < tn = T . fn−1 ∈ R and set n f= j=1 tj−1 1 (tj −tj−1 ] . Let us prove two basic identities. 31 3. .7 We have T E 0 f (s)dB(s) =0 (3. (3. f1 . where h > 0 is given.. t ≥ 0. Lemma 3. t ≥ 0. F .6) Proof. B3 (0) = 0. be a Brownian motion in (Ω.6).. P) and let f ∈ L2 (0.
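The two identities of Lemma 3.7 can be checked by simulation. The sketch below is illustrative (the partition of [0, T] and the values taken by the step function f are arbitrary): the integral of a step function is the finite sum Σ_j f(t_{j−1})(B(t_j) − B(t_{j−1})), and the Monte Carlo mean and second moment reproduce (3.5) and (3.6).

```python
import numpy as np

rng = np.random.default_rng(7)

t = np.array([0.0, 0.2, 0.5, 0.9, 1.0])      # partition of [0, T] (assumed)
f = np.array([1.0, -2.0, 0.5, 3.0])          # value of f on each subinterval (assumed)
n_samples = 200_000

dt = np.diff(t)
dB = rng.standard_normal((n_samples, dt.size)) * np.sqrt(dt)   # independent increments
I = dB @ f                                   # sum_j f(t_{j-1}) (B(t_j) - B(t_{j-1}))

print(np.mean(I), 0.0)                       # (3.5): E int f dB = 0
print(np.mean(I ** 2), np.sum(f ** 2 * dt))  # (3.6): E (int f dB)^2 = int f^2 ds
```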

T 0 f (s)dB(s) is a real Proof.9 Let f ∈ L2 (0. We still denote by I(f ) = 0 f (s)dB(s) this estension. b.8) and T t E 0 f (s)dB(s) = 0 f 2 (s)ds. (3. Exercise 3. T ) the linear space of all step functions. is continuous. b ≥ 0. T ]. c ≥ 0 we have b c c f (s)dB(s) + a b f (s)dB(s) = a f (s)dB(s). Show that T T T E 0 f (s)dB(s) 0 g(s)dB(s) = 0 f (s)g(s)ds. F . the equivalence class of random T variables) 0 f (s)dB(s). f → I(f ) = 0 f (s)dB(s). T ). It is enough to prove the result for f of the form n f= i=1 fti−1 (ti − ti−1 ). P). is called the Wiener integral of f in [0. b We define in an obvious way the Wiener integral a f (s)dB(s) for any a. Then I(f ) = T Gaussian random variable Nq with q = 0 |f (s)|2 ds. Proposition 3. T ) ⊂ L2 (0. Since S(0. T ) it can be uniquely extended T to the whole L2 (0. It is easy to see that if a.6) it follows that the linear mapping I T S(0. T ) is dense in L2 (0. T ) we have T E 0 f (s)dB(s) 2 = 0. which belongs to L2 (Ω. T ).9) The random variable (more precisely.32 Chapter 3 Now the conclusion follows taking into account that B(tj ) − B(tj−1 ) is a real Gaussian random variable Ntj−1 −tj and that B(tj ) − B(tj−1 ) is independent of B(tk ) − B(tk−1 ) for k = j. It is clear that for any f ∈ L2 (0.8 Let f. P). F . Denote by S(0. By (3. T ). T ) → L2 (Ω. g ∈ L2 (0. . (3.

Example 3. are independent.. Let f ∈ L2 (0.1. · · · .10) when n f= k=1 l ftk−1 1 (tk−1 . where 0 ≤ t0 < · · · < tn . t ≥ 0 is p-mean continuous for any p ≥ 1. T 0. In this case we have in fact ∞ n f (s)dB(s) = 0 k=1 ftk−1 W1l(tk−1 .Brownian motion where n ∈ N.. we have that I(f ) is a real Gaussian random variable Nq with n q= i=1 f 2 (ti−1 )(ti − ti−1 ). Since random variables B(t1 ).tk ] . (3.11 The process F (t).tk ] = Wf . B(t2 ) − B(t1 ).tk ] = WPn ftk−1 1l(tk−1 . . 0 = t0 < t1 < . Proposition 3. so that n 33 I(f ) = i=1 fti−1 (B(ti ) − B(ti−1 )). k=1 Let f : [0. ∞). We now show a relation between the white noise function and the Wiener integral.2. ∞) → R such that it is integrable in all interval [0. Then we have ∞ Wf = 0 f (s)dB(s).10) It is enough to show (3. Let us introduce a stochastic process setting t F (t) = 0 f (s)ds. B(tn ) − B(tn−1 ).10 We use here notations of Section 3. ∀ t ≥ 0. T ]. < tn−1 = T .

.. Let σ = {t0 .11) Proof.s. m!2m t0 so that lim E|F (t) − F (t0 )|2m = 0. · · · . . tk ]. Then by Proposition 3. ω ∈ Ω. t1 . P-a. Then we have n Iσ (f ) = k=1 n f (tk−1 )(B(tk ) − B(tk−1 )) = k=1 n (f (tk )B(tk ) − f (tk−1 )B(tk−1 )) − k=1 (f (tk ) − f (tk−1 ))B(tk ) n = f (T )B(T ) − k=1 n (f (tk ) − f (tk−1 ))B(tk ) = f (T )B(T ) − k=1 f (αk )B(tk )(tk − tk−1 )... Proposition 3.12 If f ∈ C 1 ([0. n. tn } ∈ Σ. T ]) we have T T f (s)dB(s) = f (T )B(T ) − 0 0 f (s)B(s)ds. . t→t0 f 2 (s)ds. Let p = 2m.e. that if f ∈ C 1 ([0.9 we have that t F (t) − F (t0 ) = t0 f (s)dB(s) t t0 is a real Gaussian random variable with mean 0 and covariance Therefore q t (2m)! 2m 2 E|F (t) − F (t0 )| = f (s)ds . We note finally. m ∈ N and t > t0 ≥ 0. P-a. It follows that T |σ|→0 lim Iσ (f ) = f (T )B(T ) − 0 f (s)dB(s)ds. k = 1. T ]) then it is possible to express the T Wiener integral 0 f (s)dB(s) in terms of a Riemann integral as the following integration by parts formula shows. where αk are suitable numbers in the interval [tk−1 .34 Chapter 3 Proof. (3.

14) becomes 1 (1 − r)α−1 r−α dr = β(α. We can now prove the result. (3. sin πα 0 ≤ s ≤ σ ≤ t.Brownian motion 35 3. t ≥ 0. 1/2) we have B(t) = where Yα (σ) = 0 sin πα π σ t (t − σ)α−1 Yα (σ)dσ. F . be a Brownian motion on a probability space (Ω. Then B possesses a continuous version. Proof. 1/2). . 0 (3. be a Brownian motion on a probability space (Ω. We are going to show that B possesses a continuous version. To this purpose we shall use a representation formula for B proved in the next proposition. P). (3. F . sin πα Now since.13 For any α ∈ (0. Proposition 3.4 Continuity of Brownian motion Let B(t). 1). Exchanging integrals . obviously. t (t − σ)α−1 (σ − s)−α dσ = s π . (1) This requires a proof which is left to the reader. t ≥ 0.14 Let B(t).12) (σ − s)−α dB(s).14) where α ∈ (0. P). We start from the following elementary identity which is valid for any α ∈ (0. B(t) = B(t) = sin πα π (1) t 0 s t s 0 dB(s) we can write (t − σ)α−1 (σ − s)−α dσ dB(s). 1). yields t σ sin πα B(t) = π dξ(t − σ) 0 α−1 0 (σ − s)−α dB(s) .13) Notice that the Wiener integral Yα is meaningful since α ∈ (0.14) it is enough to set σ = r(t − s) + s so that (3. To check (3. Theorem 3. 1 − α) = 0 π .

T ]. Let us prove that F is continuous on [ t2 . ω)dσ. F . By H¨lder’s inequality we have o t 2m−1 2m |F (t)| ≤ 0 (t − σ) 2m (α−1) 2m−1 dσ |f |L2m (0.) Therefore F ∈ L∞ (0. T ]. t ∈ [0. Set t F (t) = 0 (t − σ)α−1 f (σ)dσ. t ∈ [0. 1]. where C0 = {η ∈ C([0. t0 Let us set for ε < 2 . 3. T ]. T ] for any t0 ∈ (0. ω → B(·.5 The standard Brownian motion Let us consider a Brownian motion B(t). Exercise 3.15 Let α ∈ (0. H) and F is con0 tinuous at 0. 1 0 Thus limε→0 Fε (t) = F (t). ω) = (t − σ)α−1 Yα (σ. and F is continuous as required. +∞)) : η(0) = 0}. ω) is continuous for all ω ∈ Ω. T .16 Prove that B possesses an H¨lder continuous version with o any exponent β < 1/2.15) 2m (Notice that (α − 1) 2m−1 > −1. we find |F (t) − Fε (t)| ≤ M 2m − 1 2mα − 1 2m−1 2m εα− 2m |f |L2m (0.H) . m ∈ N with 2m > 1/α and f ∈ L2m (0. π 0 Then B(·. Then F ∈ C([0. H). ∀ t ≥ 0. Proof.T . This is possible in view of Proposition 3. P) such that B(·.36 Chapter 3 Proof. using again H¨lder’s inequalo ity.11. We denote by B the mapping B : Ω → C0 . . Now set t sin πα B(t.H) . ω) of the stochastic process Yα which is 2mintegrable with 2m > 1/α. in a probability space (Ω. Moreover. (3.T . Lemma 3. uniformly on [ t2 . T ]. t ≥ 0. 0 Fε is obviously continuous on [ t2 . Choose a version Yα (·. t−ε Fε (t) = 0 (t − σ)α−1 f (σ)dσ. ω). 1/2). T ). ω) is a continuous version of B thanks to the following analytic lemma. T ].

η2 ) := k=1 2k (1 η1 − η2 k . It is important to notice that B(C0 ) is generated by the cylindrical subsets of C0 that we shall introduce now. ω))P(dω) = C0 F (η)Q(dη). for any nonnegative Borel mapping F : C0 → R.tn+k .. . we have E[F (B(·))] = Ω F (B(·. .Brownian motion 37 3.. endowed with the metric.....A := {η ∈ C0 : (η(t1 ). ∀ η ∈ C0 . B(C0 )). (3. η → F (η).. So. ω → B(·..5.tn . We have set for any k ∈ N.16) Some examples of mappings F are the following.. Let us now consider the σ-algebra B(C0 ). For n ∈ N. 0 < t1 < · · · < tn and A ∈ B(Rn ) we define Ct1 .2 The Wiener measure and the standard Brownian motion B : Ω → C0 . η(tn )) ∈ A} . C0 ..tn . Moreover.tn+1 .. d(η1 .A×Rk . B(C0 )). + η1 − η2 k ) is a complete metric space. as easily checked.tn ... Using this identity one can easily see that C is an algebra. ω) We come back to the mapping B and we denote by Q its law (which is a probability measure on (C0 .A = Ct1 . Note that Ct1 .5.. k]}.t2 . the σ-algebra generated by C coincides with B(C0 ) since any ball (with respect to the metric of C0 ) is a countable intersection of cylindrical sets.1 Some properties of C0 ∞ First we notice that. η k = sup{|η(t)| : t ∈ [0. k. 3. n ∈ N. Q is called the Wiener measure on (C0 .....t2 ..t2 .

for all η ∈ C0 . Q). We simply note that. Then we have Q(Ct1 . η ∈ C0 .16) we have ei(η(t)−η(s))h Q(dη) = C0 Ω 1 2 ei(B(t. so that the conclusion follows from Proposition 3. t ≥ 0.. (iii) F (η) = supt∈[0. W (t) − W (s) is a Gaussian random variable Nt−s . tn > 0 are given. B(tn )) ∈ A). Proposition 3.A be a cylindrical set. ...38 Chapter 3 (i) F (η) = g(η(t0 )). we have Q(Ct1 .A ) = P((B(t1 )...ω))h P(dω) = E[ei(B(t)−B(s)) ] = e− 2 (t−s)h . Proof. Q) setting W (t)(η) = η(t). B(C0 ). .t2 . in (C0 . Let us show for instance that for t > s ≥ 0. The proof is straightforward. (ii) F (η) = G(η(t1 ). η(tn )). called the standard Brownian motion. where G : Rn → R is nonnegative Borel and t1 . Let us compute the Wiener measure of a cylindrical set..... thanks to (3. In fact by (3... t ≥ 0. h ∈ R.tn ..... For this it is enough to show that the Fourier transform of W (t) − W (s) ψ(h) := C0 ei(η(t)−η(s))h Q(dη).A ) = 1 (2π)n t1 (t2 − t1 ) · · · (tn − tn−1 ) A e 1 2 − 2t − 2(t 1 ξ2 (ξn −ξn−1 )2 (ξ −ξ1 )2 −···− 2(t −t n 2 −t1 ) n−1 ) dξ.. Proposition 3. t ≥ 0.. for all η ∈ C0 . Now we define a stochastic process W (t). where g : R → R is nonnegative Borel and t0 > 0 is given.17 W is a Brownian motion in (C0 ..16).18 Let Ct1 . . B(C0 ). for all η ∈ C0 . In an analogous way one can prove that W (t).tn .4.. . has independent increments.. Proof.ω)−B(s.1] |η(t)|.tn . 1 2 h ∈ R.t2 . is given by e− 2 (t−s)h .t2 .. h ∈ R.

T ] σ = {0 = t0 < t1 < · · · < tn = T }. setting σ1 ≤ σ2 if and only if |σ1 | ≤ |σ2 |. F . and so. P). Then for any σ = {0 = t0 < t1 < · · · < tn = T } ∈ Σ(0. .n − 1}. For any T > 0 we denote by Σ(0. We say that T is the quadratic variation of B in [0. Let us now introduce the quadratic variation of Brownian motion B in [0.Brownian motion 39 3. T ) we set |σ| := min{tk − tk−1 : k = 1.19 We have |σ|→0 lim Jσ = T in L2 (Ω. T ]. T ) the set of all decompositions of [0.. For any σ = {0 = t0 < t1 < · · · < tn = T } ∈ Σ(0. P). T ]. Since Btk −Btk−1 is a real Gaussian random variable with law Ntk −tk−1 . we have E(Jσ ) = T. 2 2 E(|Jσ − T |2 ) = E(Jσ ) − 2T E(Jσ ) + T 2 = E(Jσ ) − T 2 . T ) we define n Jσ := k=1 |B(tk ) − B(tk−1 )|2 . (3. F . on a probability space (Ω. Proof. We introduce a partial ordering on Σ(0. t ≥ 0.17) Moreover n 2 E|Jσ |2 = E k=1 n |B(tk ) − B(tk−1 )|2 n =E k=1 |B(tk ) − B(tk−1 )| + 2 h<k=1 4 E|B(th ) − B(th−1 )|2 |B(tk ) − B(tk−1 )|2 . . Then we prove Theorem 3.. T ).6 Quadratic variation of the Brownian motion In this section we are given a real continuous Brownian motion B(t).

substituting (3. T ] → R. (2) . Proposition 3..19 is that almost all trajectories of the Brownian motion B have not bounded variation (2) . (3.40 But we have n n Chapter 3 E k=1 |B(tk ) − B(tk−1 )|4 = 3 k=1 (tk − tk−1 )2 . Then for any σ = {0 = t0 < t1 < · · · < . In fact the following result holds. BV (0. T ] → R of finite variation. since B(th ) − B(th−1 ) and B(tk ) − B(tk−1 ) are independent.20 We have P∗ (VT ) = 0. Now. we have n n E|B(th ) − B(th−1 )| |B(tk ) − B(tk−1 )| = h<k=1 2 2 (th − th−1 )(tk − tk−1 ). V (f ) is called the variation of f .20) = 2 k=1 (tk − tk−1 )2 + T 2 . 2 =2 k=1 (tk − tk−1 )2 → 0. tn = T } ∈ Σ(0. T ) we n set Vσ (f ) = k=1 |f (tk ) − f (tk−1 )| and define V (f ) := supσ∈Σ Vσ (f ).18) and. Let f : [0. T ) is the set of all functions f : [0. h<k=1 (3. T )} has outer probability zero. In other terms the set VT := {ω ∈ Ω : B(·.17). we obtain n E |Jσ − T | as |σ| → 0.. An important consequence of Theorem 3. ω) ∈ BV (0.20) on (3. (3..19) Therefore n n E|Jσ |2 = 3 k=1 n (tk − tk−1 )2 + 2 n (th − th−1 )(tk − tk−1 ) h<k=1 2 = 2 k=1 n (tk − tk−1 ) + k=1 2 (tk − tk−1 ) .
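Theorem 3.19 and Proposition 3.20 can be watched on a single simulated path. The sketch below is illustrative (the fine grid and the sequence of partitions are arbitrary): the sums J_σ = Σ|B(t_k) − B(t_{k−1})|² concentrate around T as the mesh shrinks, while the total variation Σ|B(t_k) − B(t_{k−1})| blows up.

```python
import numpy as np

rng = np.random.default_rng(8)

T = 1.0
n_max = 2 ** 18
dB = rng.standard_normal(n_max) * np.sqrt(T / n_max)
B = np.concatenate([[0.0], np.cumsum(dB)])             # one fine Brownian path on [0, T]

for n in (2 ** 6, 2 ** 10, 2 ** 14, 2 ** 18):
    idx = np.arange(0, n_max + 1, n_max // n)          # partition with n intervals
    incr = np.diff(B[idx])
    # quadratic variation -> T, total variation -> infinity as the mesh shrinks
    print(n, np.sum(incr ** 2), np.sum(np.abs(incr)))
```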

n→∞ We claim that VT ∩ Λ ⊂ Λc . |t − s| < δε =⇒ |B(t. s ∈ [0.22 Let us construct an n-dimensional Brownian motion. (ii) lim Jσn (ω) = T for all ω ∈ Λ1 . F . . 1 Let us prove the claim. . Bn are independent Brownian motions. Bn (t)).Brownian motion Proof. ω)| < ε.... Xn are said to be independent if for any t1 . ... is called an n-dimensional stochastic process. Rn ). where Q is any operator in L+ (H) such that Ker 1 Q = {0}.21 Let n ∈ N and let X1 .. ω)). 41 so that P(Λ) = 1 because B is continuous. tn ∈ [0.21) By the claim the conclusion will follow since P(Λc ) = 0. for any ε > 0 there exists δε > 0 such that t. . Since lim|σ|→0 Jσ = T in L2 (Ω. such that B1 . X1 . . Consequently. t ≥ 0.. . T ) such that |σn | → 0 and a set Λ1 ⊂ F such that (i) P(Λ1 ) = 1. Let ω ∈ VT ∩ Λ. en ) be the canonical basis in Rn and choose Ω = H = L2 (0. Since ε is arbitrary ω cannot belong to Λ1 .. if n is so large that |σn | < δε we have Jσn (ω) ≤ εV (B(·.t] . The claim is proved.. ∀ t ≥ 0. Set Λ := {ω ∈ Ω : B(·. T ].. Xn (t)). t ≥ 0. F = B(H) and P = NQ . F .... 1 (3. Then X(t) := (X1 (t). i = 1. Let (e1 . Since B(·. . Example 3. 3.....7 Multidimensional Brownian motions Definition 3.. Bn (t)) is an n-dimensional Brownian motion. P). ω) is continuous }. P) there exists a sequence (σn ) ⊂ Σ(0. . ω) − B(s. Then set Bi (t) = Wei 1l[0. T ]. +∞. Then one can check easily that B(t) = (B1 (t). Xn be stochastic processes on a probability space (Ω. A n-dimensional Brownian motion is a n-dimensional stochastic process B(t) := (B1 (t)... ω) is uniformly continuous in [0..... .. +∞) the random variables Xi (ti ) are independent. n.. ..

24) esA CC ∗ esA ds. Then the following properties are easily checked.25) where A∗ and C ∗ are the adjoint of A and C respectively.23) Z(t) = etA x + 0 e(t−s)A CdB(s). We have n (3. t ≥ 0. where In represents the identity in Rn . Exercise 3.24 Let A.Qt . where Qt = 0 t ∗ (3. Prove that the law of Z(t) in Rd is given by NetA x. Let us check (iii). (ii) E[Bi (t)Bj (t)] = 0 if i = j. (iii) We have E |B(t) − B(s)|2 = n(t − s). Exercise 3. . (i) If t > s.22) E |B(t) − B(s)| 2 = k=1 E |Bk (t) − Bk (s)|2 = n(t − s). t ≥ 0.42 Chapter 3 Let B be a Brownian motion in Rn . B(t) − B(s) is a Gaussian random variable with law N(t−s)In . C ∈ L(Rd ) and set t (3.23 Prove that for 0 ≤ s < t we have E |B(t) − B(s)|4 = (2n + n2 )(t − s)2 . (3.
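For Exercise 3.24 one can compare a discretisation of the stochastic convolution Z(t) = e^{tA}x + ∫_0^t e^{(t−s)A}C dB(s) with the stated law N_{e^{tA}x, Q_t}. The sketch below is an illustration of the scalar case only; the values A = a, C = c, the starting point x and the grid are arbitrary assumptions, and Q_T reduces to c²(e^{2aT} − 1)/(2a).

```python
import numpy as np

rng = np.random.default_rng(9)

a, c, x0, T = -1.5, 0.7, 2.0, 1.0              # scalar A = a, C = c, start x (assumed)
n_steps, n_paths = 500, 40_000
s = np.linspace(0.0, T, n_steps + 1)
ds = np.diff(s)

# Discretise Z(T) = e^{TA} x + int_0^T e^{(T-s)A} C dB(s) as a sum over the grid.
weights = np.exp(a * (T - s[:-1])) * c
dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(ds)
Z_T = np.exp(a * T) * x0 + dB @ weights

var_exact = c ** 2 * (np.exp(2 * a * T) - 1.0) / (2 * a)   # Q_T = int_0^T (e^{sA} c)^2 ds
print(np.mean(Z_T), np.exp(a * T) * x0)        # mean ~ e^{TA} x
print(np.var(Z_T), var_exact)                  # variance ~ Q_T
```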

This chapter is devoted to some sharp properties of the Brownian motion. ∀ t ≥ 0. t ≥ 0. B(C0 ). Obviously F0 = {∅.. .A = {ω ∈ C0 : (ω(t1 ). Moreover.··· .Chapter 4 Markov property of the Brownian motion Let us consider the probability space (C0 . < tn ..1 Filtration Ct1 . tn ≤ t and A ∈ B(Rn ). the standard Brownian motion in (C0 . ω ∈ C0 . it is called the natural filtration of W . +∞) → R introduced in Chapter 3 and Q is the Wiener measure.. ω(tn )) ∈ A} = {ω ∈ C0 : (W (t1 ).. Moreover. Q) defined by W (t)(ω) = ω(t)..tn . t)} . The family of σ–algebras (Ft )t≥0 is increasing. Ω}.. stopping time and transition semigroup. 4. For any t > 0 we define Ft− = σ{Ft− : 43 ∈ (0. Q) where C0 is the complete metric space of all continuous functions ω : [0. W (tn )) ∈ A} For any t > 0 we denote by Ct the algebra of all cylindrical sets where 0 ≤ t1 < . we denote by Ft the σ-algebra generated by Ct ... To this purpose we shall introduce some basic concepts as filtration. B(C0 ). let W (t). . in particular the Markov and strong Markov property and the reflexion principle.

Then An ∈ F1/n and A = n∈N n ∈ N. We say that a real random variable X is Ft -measurable if In this case we say also that X depends from the story of the Brownian motion only up to t. The following lemma will be frequently used.··· . . 4.2 The filtration (Ft )t≥0 is not right continuous.1 we say that the natural filtration (Ft )t≥0 is left continuous. Due to Proposition 4. Remark 4.1 Ft -measurable random variables I ∈ B(R) ⇒ X −1 (I) ∈ Ft . that is Ft+ = Ft for all t ≥ 0. k→∞ k so that I ∈ Ft− as well. Proof. To prove the converse inclusion it is enough to show that Ct ⊂ Ft− .··· .t) Ft− . Let in fact I = Ct1 . so that Ft ⊃ Ft− . Let for instance t = 0 and consider the sets An = {ω ∈ Ω : |ω(1/n)| ≤ 1/n}.1 For all t > 0 we have Ft = Ft− .tn . Proposition 4. t ≥ 0. If tn < t then I belongs to Ft− whereas if tn = t we have I = lim Ct1 . t) and Ft+ : = >0 Ft+ . An ∈ F0+ . Notice that A = {ω ∈ Ω : |ω (0)| = 0}. so that F0+ = F0 .1. It is clear that Ft ⊃ ∈(0.44 where σ ∈(0.A ∈ Ct so that tn ≤ t.tt− 1 .t) Chapter 4 Ft− is the σ-algebra generated by Ft− for ∈ (0.A ∈ Ft− . Let t > 0.

I .. To prove the claim it is enough to show that any cylindrical set Ct1 . W (s2 ) − W (s1 ) and 1 A are l independent. (4. if (An ) is a sequence in D consisting of disjoint sets. Proposition 4.. ∀ G ∈ G.5 For any t ≥ 0 denote by Ft the σ-algebra generated by Ft and all null sets of Ω (called the completion of Ft ). j→∞ Since G = B(C0 ) we can set in (4.I j j j = lim {ω ∈ Ω : (ω(t1 ) − ω(1/j).1) On the other hand.tn . Then W (s2 ) − W (s1 ) and ϕ are independent...... Denote by G the σ-algebra generated by all sets of the form Dt1 . Moreover.. It is enough to show that for any A ∈ Ft . Proof. Then either P(A) = 1 or P(A) = 0. in other words that Ft coincides with the set D defined below. Moreover.. Proof. where n ∈ N. l Since W is a process with independent increments. Then we have P(A ∩ G) = P(A)P(G).. D contains the algebra of all cylindrical set belonging to Ct (which is a π-system).. but this follows from the identity j→∞ lim Dt1 − 1 . Remark 4.4 (one-zero law) Assume that A ∈ F0+ . I ∈ B(Rn ).1 in Appendix A).... Next result shows that F0+ contains only trivial sets.3 Let s2 > s1 ≥ t > 0. D = {A ∈ Ft : 1 A is independent of W (s2 ) − W (s1 )}..Markov property 45 Lemma 4. and W has independent increments. t > 0. By using Proposition 4. since it belongs to all Ft . Now the claim follows from Dynkin’s theorem (Theorem A. 0 < t1 < · · · < tn . D is a λ-system. Let A ∈ F0+ .tn . 1 .I = {ω ∈ Ω : (ω(t1 + h) − ω(h).1) G = A.tn .4 one can easily show that (Ft )t≥0 is both right and left continuous. In fact if A ∈ D it is obvious that Ac ∈ D. h > 0.I belongs to G .h.. . .tn − 1 . It is clear that A is independent of G ... ω(tn ) − ω(1/j)) ∈ I} = Ct1 . one can show easily that ∞ n=1 An ∈ D.h. so that P2 (A) = P(A) which yields P(A) equal to zero or one. we claim that G = B(C0 ).. and let ϕ be a real random variable Ft –measurable... .. ω(tn + h) − ω(h)) ∈ I}.

.... For 0 < t1 < ..I ∩{tn < τ }. τ is Fτ -measurable. ω(tn )) ∈ I} = Ct1 . where σ(τ ) is the σ-algebra generated by τ . Moreover.I ∩ {tn < τ ≤ t} So. If τ is stopping time. the σ-algebra generated by all Ct1 .. but it is a stopping time with respect to the filtration (Ft+ )t≥0 . +∞]) random variable τ in (C0 .... In fact ∞ {τ ≤ t} = k=1 τ ≤t+ 1 k ∈ Ft+ .. Q) is called a stopping time with respect to the filtration (Ft )t≥0 if {τ ≤ t} ∈ Ft for all t ≥ 0. In other words we have Fτ ⊃ σ(τ )..I = {ω ∈ Ω : tn (ω) < τ.tn ..2 Stopping times A nonnegative extended (that is with values in [0...tn ...46 Chapter 4 4. .tn ..tn . if A = {τ ≤ s} we have A ∩ {τ ≤ t} = {τ ≤ t ∧ s} ∈ Ft∧s ⊂ Ft .6 Let τ be an extended random variable such that {τ < t} ∈ Ft ..I ∩ {τ ≤ t} = Ct1 . In fact (τ ) Ct1 . Let us describe the σ-algebra Fτ .tn . B(C0 )..I is Fτ -measurable.....I in included in Fτ and one can show that it coincides with Fτ .. .tn .. We claim that Ct1 .. (ω(t1 ).. (τ ) (τ ) (τ ) Then τ is not in general a stopping time with respect to (Ft )t≥0 . In fact.. for all t ≥ 0. then {τ > t} and {τ = t} belong obviously to Ft for all t ≥ 0. To any stopping time τ we associate the σ-algebra Fτ : = {A ∈ F : A ∩ {τ ≤ t} ∈ Ft for all t ≥ 0}.. < tn and I∈B(R) we define Ct1 . Remark 4.

Assume first τ discrete.7 Assume that the nonnegative random variable τ is discrete.2) It is clear that the sequence (τn ) is decreasing. n 2 k k−1 ≤τ < n n 2 2 ∈ Ft . Show that in this case Fτ is the σ–algebra Fτ : = {A ∈ F : A ∩ {τ = µk } ∈ Fµk for all k ∈ N}. Proof. Show that τ is a stopping time if and only if {τ = µk } ∈ Fµk for all k ∈ N. We start by showing that Wτ is Fτ -measurable. if t = 2k with k ∈ N we have n {τn = t} = Finally. Proposition 4. k 2n =A∩ k−1 k ≤τ < n n 2 2 ∈Fk.Markov property 47 Exercise 4. Proposition 4.9 Let τ be a stopping time and set Wτ (ω) = W (τ (ω). Proof. In fact. Then there exists a decreasing sequence (τn ) of discrete stopping times convergent pointwise to τ such that Fτn ⊃ Fτ for all n ∈ N. let A ∈ Fτ .3) ∀ t ≥ 0. 0 < t1 < · · · < tk < · · · ω ∈ Ω. τ (Ω) = {tk }. that is that τ (Ω) = (µk )k∈N where µk is an increasing sequence of positive numbers. ω). Then we have A ∩ τn = so that A ∈ Fτn . We want to extend several properties concerning time t to general stopping times τ . Then Wτ is Fτ -measurable. (4. that is A ∩ {τ ≤ t} ∈ Ft . ∀ k ∈ N. (4. Define for any n ∈ N and ω ∈ Ω τn (ω) = k 2n if k−1 k ≤ τ (ω) < n . τn is a stopping time. n 2 2 k ∈ N.8 Let τ be a stopping time. . Moreover.

t]∩Q Consequently. let τn be defined by (4. τ is a stopping time with respect to filtration {Ft+ }t≥0 . lim Wτn (ω) = Wτ (ω). Then {τa > t} = s∈[0. ω). Then {Wτ ∈ I} ∩ {τ ≤ t} = = = ∞ k=1 [{Wtk ∞ k=1 [{Wτ Chapter 4 ∀ω ∈ Ak .10 Let a ∈ R and set (1) for all I ∈ B(R). (1) We use the convention that the infimum of the empty set is +∞. ∈ I} ∩ {τ ≤ t} ∩ Ak ] ∈ I} ∩ {τ ≤ t} ∩ Ak ] ∈ I} ∩ {τ ≤ t} ∩ Ak ] ∈ Ft .t]∩Q So. k ∈ N. Then we have Wτ (ω) = W (tk )(ω). ω ∈ Ω. Let I ∈ B(R). the conclusion holds in this case.6. Fix t ≥ 0.2) and set Wτn (ω) = W (τn (ω). Since W is continuous we have n→∞ ω ∈ Ω. Example 4.t] {W (s) ≤ a} = {W (s) ≤ a} ∈ Ft .48 and set Ak = {τ = tk }.4) τa = inf{t ≥ 0 : W (t) = a}. τa is a stopping time with respect to the filtration (Ft )t≥0 . Then we have {τ ≥ t} = s∈[0. By the previous argument we have {Wτn ∈ I} ∩ {τn ≤ t} ∈ Ft Now the conclusion follows letting n → ∞. by Remark 4. . k ∈ N.t] {W (s) < a} = {W (s) < a} ∈ Ft . Let now τ be arbitrary. Let now τ = inf{t ≥ 0 : W (t) > a}. (4. s∈[0. s∈[0. ∞ {k∈N: tk ≤t} [{Wtk So.

it follows that l ∞ E e iα(W (t+τ )−W (τ )) = i=1 P(Ai )E eiα(W (t+ti )−W (ti )) = e− 2 α 1 2t and so (4. We want now to show that the same holds when h is replaced by a stopping time. is a Brownian motion for any h > 0. t ≥ 0. Assume first that τ is discrete.2).5) it follows that C(t) is a Gaussian random variable Nt . Let us first prove that the law of C(t) is Nt . Continuity of C(t) is obvious. is a Brownian motion. . Then C(t) := W (t + τ ) − W (τ ).3 The Brownian motion W (t + τ ) − W (τ ) We recall that W (t + h) − W (t). 1 2 α ∈ R. Now (4. α ∈ R. τ (Ω) = (tk ) and set Ai = {τ = ti } ∈ Fti .Markov property 49 4. Let now τ be general and let (τn ) be the sequence of finite stoppping times defined by (4.11 Let τ be a stopping time. (4. For this it is enough to show that for any α ∈ R we have E eiαC(t) = E eiα(W (t+τ )−W (τ )) = e− 2 α t .5) ∀ i ∈ N. We have just proved that E eiα(W (t+τn )−W (τn )) = e− 2 α t .5) follows letting n tend to infinity. Proceeding similarly one can prove that the law of C(t) − C(s) with t > s > 0 is Nt−s and that C(t) has independent increments. By (4.5) is proved. l Since 1 Ai and W (t + ti ) − W (ti ) are independent. Proposition 4. Then we have ∞ ∞ 1 2 t ≥ 0. Proof. E eiα(W (t+τ )−W (τ )) = i=1 Ai eiα(W (t+ti )−W (ti )) dP = i=1 E 1 Ai eiα(W (t+ti )−W (ti )) .

by an explicit computation. ϕ ∈ Bb (H). (4. x) = 1 uxx (t. x ∈ R. is a semigroup of linear operators in Bb (R). bounded and Borel functions and by Cb (R) the subspace of Bb (R) of those functions which are uniformly continuous and bounded on R. +∞) × R → R. ϕ ∈ Bb (R). There is a simple deterministic proof based on maximum principle and a stochastic proof. ξ2 1 e− 2t . Pt−s ϕ(x) = E[ϕ(W (t) − W (s) + x)].12 One can show that u(t. ξ ∈ R. t ≥ 0. ∀ x ∈ R. that Pt . s ≥ 0.6) Since the law of W (t) + x is Nx. ∀ t. ∀ t > 0. t ≥ 0. x).9) . is the unique solution of the Dirichlet problem above.  2   u(0. based on Itˆ’s formula.13 Prove that for t > s ≥ 0. To this purpose. Notice that Pt coincides with the heat semigroup in R. (4. we define the transition semigroup Pt ϕ(x) = E[ϕ(W (t) + x)]. t ≥ 0. t > 0. infinitely differentiable and fulfills   ut (t. x ∈ R. o Exercise 4. x) = ϕ(x).t we have Pt ϕ(x) = E[ϕ(W (t) + x)] = √ 1 2πt +∞ +∞ −∞ e− 2t (x−y) ϕ(y)dy 1 2 (4. that is P0 = I and where gt (ξ) = √ Pt+s = Pt Ps .4 Transition semigroup We shall denote by Bb (R) the set of all real. x) = Pt ϕ(x).8) 2πt We deduce. Given ϕ ∈ Bb (R) we want to study the evolution in time of ϕ(W (t) + x). x ∈ R. Remark 4. (4.7) = −∞ gt (x − y)ϕ(y)dy. x) = Pt ϕ(x) is continuous. which we will present later. In fact one checks easily that if ϕ ∈ Cb (R) then the function u : [0.50 Chapter 4 4. x ∈ R. u(t.

6 it follows that E[ϕ(X(t))|Fs ] = E[ϕ(U + V )|Fs ] = h(U ).15 Let s > 0.13) h(u) = E[ϕ(u + V )] = E[ϕ(u + W (t) − W (s))] = Pt−s ϕ(u). Notice that U is Fs -measurable and V is independent of Fs . Set X(t) = W (t) + x = (W (s) + x) + (W (t) − W (s)) =: U + V. where (recall Exercise 4.Markov property 51 4. x) = W (t) + x. Equivalently ϕ(X(t))dP = A A t ≥ 0. they are recalled in Appendix A. Proof. Show that E[ϕ(W (t) + η|Fs ] = (Pt−s ϕ(η)).14 For any t > s > 0 and any ϕ ∈ Bb (H) we have E[ϕ(X(t))|Fs ] = (Pt−s ϕ)(X(s)). .10) is proved. where x ∈ R. η a Fs -measurable random variable and ϕ ∈ Bb (R). By Proposition B. So. Proposition 4.11) Moreover X(·) is a Markov process.10) (Pt−s ϕ)(X(s))dP.5 Markov property In this section we shall use several properties of conditional expectation. (4. ∀ A ∈ Fs . We are here concerned with the stochastic process X(t) = X(t. (4. Exercise 4. (4.3 we have E[ϕ(X(t))|X(s)] = E [E[ϕ(X(t))|Fs ]|X(s)] = E[Pt−s ϕ(X(s))|X(s)] = Pt−s ϕ(X(s)) = E[ϕ(X(t))|Fs ]. To prove the last statement notice that by Proposition B.

Therefore. . so that X(t) = W (t). (4. n. (4..5. Therefore. . by (4.13) Proof. we can write.13) is proved. Assume first that τ is of the form τ (Ω) = (tk )k∈N .. Then we have ∞ (Pt−τ ϕ)(W (τ ))dP = A i=1 A∩{τ =ti } (Pt−τ ϕ)(W (τ ))dP ∞ = i=1 A∩{τ =ti } (Pt−ti ϕ)(W (ti ))dP. We set x = 0 for simplicity. Let A ∈ Fτ .10) and taking into account that by the definition of Fτ we have A ∩ {τ = ti } ∈ Fti . Proposition 4.Then we have E[ϕ(X(t))|Fτ ] = (Pt−τ ϕ)(X(τ )).1 Strong Markov property We now consider conditional expectation with respect to Fτ where τ is a stopping time.12) Equivalently ϕ(X(t))dP = A A (Pt−τ ϕ)(X(τ ))dP.. ∀ A ∈ Fτ . (4.52 Chapter 4 4. ∞ (Pt−τ ϕ)(W (τ ))dP = A ∞ i=1 A∩{τ =ti } (Pt−ti ϕ)(W (ti ))dP = i=1 ∞ A∩{τ =ti } E[ϕ(W (t))|Fti ]dP = i=1 A∩{τ =ti } ϕ(W (t))dP = A ϕ(W (t))dP.16 Let τ be a stopping time and let t ≥ τ and ϕ ∈ Bb (H). i = 1.

14) . M (t) ≥ a) = P(B(t) ≥ a). Proof.16) t ≥ 0.2). t ≥ 0. (4. a ≥ 0 (4.t] Notice that {Ta ≤ t} = {M (t) ≥ a}.17 Let a ≥ 0 and t ≥ 0. Then by (4.Markov property 53 Let now τ be an arbitrary stopping time and let (τn ) be defined by (4. Recall that (Proposition 4.13) it follows that ϕ(W (t))dP = A A (Pt−τn ϕ)(W (τn ))dP for all A ∈ Fτ . Now the conclusion follows letting n → ∞. s∈[0.15) To find the laws of Ta with a ≥ 0 and M (t) the following lemma is useful. • M (t) = max B(s). t ≥ 0. and {Ta ≤ t} = {m(t) ≤ a}.8) F τ ⊂ F τn for all n ∈ N.6 Some consequences of the strong Markov property In this section we want to determine the laws of the following important random variables. Let A ∈ Fτ . Lemma 4.12) is called the strong Markov property of W . • Tb = inf{t ≥ 0 : B(t) = b}. t ≥ 0. s∈[0. Then we have P(B(t) ≤ a.t] b ∈ R. taking into account that {Ta ≤ t} = {M (t) ≥ a} (4. We have. • m(t) = min B(s). 4. a ≤ 0. Property (4.

+∞) (a).a] (a) = Ps 1 [a. M (t) ≥ a) = {Ta ≤t} E[1 (−∞. W (t) ≥ a). By the strong Markov property it follows that P(W (t) ≤ a. Ta ≤ t) = {Ta ≤t} Chapter 4 1 (−∞. l On the other hand.54 P(W (t) ≤ a. Write P(M (t) ≥ a) = P(M (t) ≥ a.+∞) (W (t))|FTa ]dP l = P(W (t) ≥ a.17) . Proof. M (t) ≥ a) = P(W (t) ≤ a. Proposition 4.a] (W (t))dP l = {Ta ≤t} E[1 (−∞.+∞) (a)]dP l = {Ta ≤t} E[1 [a.a] (W (t))|FTa ]dP. as easily checked.18 (Reflection principle) For all a ≥ 0 we have P(M (t) ≥ a) = 2P(W (t) ≥ a). a > 0. l E[Pt−Ta 1 (−∞. (4. W (t) ≤ a) + P(M (t) ≥ a. Ps 1 (−∞.a] (a)]dP. M (t) ≥ a) = {Ta ≤t} ∀ s > 0.a] (W (t))|FTa ]dP l = {Ta ≤t} E[Pt−Ta 1 (−∞.a] (a)]dP = {Ta ≤t} E[Pt−Ta 1 [a.a] (W (Ta ))]dP l = {Ta ≤t} E[Pt−Ta 1 (−∞. l since {Ta ≤ t} ∈ FTa . l l Therefore P(W (t) ≤ a. we have. M (t) ≥ a) = P(W (t) ≥ a).

by Lemma 4.18 for any a ≥ 0 P(M (t) ≥ a) = 2P(W (t)| ≥ a) = √ 2 2πt +∞ a e− 2t dξ ξ2 = P(|W (t)| ≥ a). though random variables M (t) and |W (t)| are different. Obviously the laws of M (·) and |W (·)| on C0 ([0.Markov property 55 Now.19 (Law of M (t)) For all t ≥ 0 we have ξ2 2 (M (t)# P)(dξ) = √ l e− 2t 1 [0. 2πt (4.18 we have 2 P(Ta ≤ t) = P(M (t) ≥ a) = √ 2πt 2 =√ 2π Therefore +∞ at−1/2 +∞ a e− 2t dξ ξ2 e− 2 dξ. the conclusion follows. W (t) ≤ a) = P(W (t) ≥ a).21 (Law of Ta ) Let a ≥ 0 and t ≥ 0. it is clear that P(M (t) ≥ a.17 we have P(M (t) ≥ a. Moreover. Corollary 4.19 it follows that at fixed time t the law of M (t) coincides with that of |W (t)|. Remark 4.18 we can easily deduce the expressions of the laws of M (t) and Ta for all a ∈ R. .19) Proof. Corollary 4. By Proposition 4. in particular M (t) is increasing whereas |W (t)| is not.20 From Corollary 4. The following results can be proved similarly. dt 2πt3 which implies the conclusion.14) and Proposition 4. By (4. +∞)) are different. W (t) ≥ a) = P(W (t) ≥ a) so. 2πt3 (4.+∞) (ξ)dξ.18) Proof. η2 a2 d a P(Ta ≤ t) = √ e− 2t dt. We have in fact by Proposition 4. Then we have a2 a ((Ta )# P)(dt) = √ e− 2t dt.
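The reflection principle lends itself to a direct simulation check. The sketch below is illustrative (the level a, the time t and the grid are arbitrary; the discrete-time running maximum slightly underestimates M(t)): the frequency of {max_{s≤t} W(s) ≥ a} is close to 2 P(W(t) ≥ a).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(11)

t, a = 1.0, 1.2
n_steps, n_paths, batch = 2000, 100_000, 10_000
dt = t / n_steps

hits = 0
for _ in range(n_paths // batch):
    dW = rng.standard_normal((batch, n_steps)) * np.sqrt(dt)
    M = np.cumsum(dW, axis=1).max(axis=1)        # running maximum M(t) on the grid
    hits += np.count_nonzero(M >= a)

p_mc = hits / n_paths
p_exact = 1 - erf(a / sqrt(2 * t))               # 2 P(W(t) >= a)
print(p_mc, p_exact)
```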

(i) Y (t) = W (t) + x.21) Corollary 4.22 Let a ≤ 0 and t ≥ 0. τx ]. V (t) is called the Brownian motion . Then we have a2 |a| e− 2t dt. U (t) is called the Brownian motion reflected in 0 (iii) V (t) = W (t ∧ τx ) + x. x ≥ 0. Y (t) is called the Brownian motion killed in 0. t ≥ 0.23) 4. +∞). 2πt (4.22) Corollary 4. For any x ≥ 0 we set in this section Moreover we consider the following processes which take values in [0. absorbed in 0 t ≥ 0. (ii) U (t) = |W (t) + x|. ∀ t ∈ [0.24 (Law of m(t)) For all t ≥ 0 we have (m(t)# P)(dξ) = − √ ξ2 2 e− 2t 1(−∞. Then we have P(W (t) ≥ a.20) Proposition 4. Chapter 4 (4. ((Ta )# P)(dt) = √ 2πt3 (4.23 (Reflection principle) For all a ≤ 0 we have P(m(t) ≤ a) = 2P(W (t) ≤ a).56 Lemma 4. (4.a] (ξ)dξ.25 (Law of Ta ) Let a ∈ R and t ≥ 0. m(t) ≤ a) = P(W (t) ≤ a).7 Application to partial differential equations τx = inf{t ≥ 0 : W (t) + x = 0} = T−x .

Define for any ϕ ∈ Bb ([0. x) := E[ϕ(W (t) + x)1 t≤τx ]. x) = 1 uxx (t.8). x ≥ 0. t > 0    2   (4. t ≥ 0. x) = E[ϕ(W (t) + x)1 t≤τx ] l = Pt ϕ(x) − E[ϕ(W (t) + x)1 t>τx ]. x) is the solution of the Dirichlet problem in [0. ξ2 λ > 0. (4.25)  u(t.26) where g is defined by (4. 0) = 0. x) = ϕ(x). l t ≥ 0. Proposition 4. +∞). We have u(t. l where ϕ is extended to R by setting ϕ(−x) = ϕ(x).1 The Dirichlet problem in the half-line ∀ t ∈ [0.7.Markov property 57 4.24) We are here concerned with the process Y (t) = W (t) + x. l l where ψ(λ) = 1 t>λ l 1 2π(t − λ) R x ≥ 0. Proof. using the strong Markov property we find that. . We are going to show that u(t. Write E[ϕ(W (t) + x)1 t>τx ] = E[E[1 t>τx ϕ(W (t) + x)|Fτx ]] l l = E[1 t>τx E[ϕ(W (t) + x)|Fτx ]] l Now.      u(0. +∞)) Ut ϕ(x) := u(t. x > 0. τx ]. x). x ∈ H. e− 2(t−λ) ϕ(ξ)dξ. (4. E[ϕ(W (t) + x)1 t>τx ] = E[1 t>τx (Pt−τx ϕ)(0)] =: E[ψ(τx )].26 We have +∞ u(t.   ut (t. t > 0. x) = 0 [gt (x − y) − gt (x + y)]ϕ(y)dy. x ≥ 0.

y ϕ(y)dy. +∞)). x) is the solution of the Dirichlet problem (4. x) = R gt (x − y)ϕ(y)dy − R gt (x + |y|)ϕ(y)dy.y = 0 gt−s (y)gs (x)ds = 1 Erfc 2 |x| + |y| √ 2t . R where (2) t Gx. a We recall that Erfc (a) = 2 √ π . +∞ −r 2 e dr. s ≥ 0. we see that +∞ Qt ϕ(x) = 0 (2) [gt (x − y) + gt (x + y)]ϕ(y). x ≥ 0. recalling the law of τx (see (4. by a direct computation. for x > 0. that if ϕ ∈ Cb ([0. (x+|y|)2 1 ∂ Gx. +∞)) we set Qt ϕ(x) = E[ϕ(|W (t) + x|)] = (2πt)−1/2 R e− |x−y|2 2t ϕ(|y|)dy.7. Since.58 Next.2 The Neumann problem U (t) = |W (t) + x|.y = − √ e− 2t = −gt (x + |y|) ∂x 2πt we get u(t.23)) it follows that t Chapter 4 E[ϕ(W (t) + x)1 t>τx ] = l 0 R t gt−s (y)ϕ(y)dy √ ∂ ∂x x 2πs3 e− 2s ds x2 = gt−s (y)ϕ(y)dy gs (x)ds 0 R = R gt (x − y)ϕ(y)dy + ∂ ∂x Gx. 4. It is easy to check. and the conclusion follows. Ut ϕ(x) = u(t.25). t ≥ 0. Replacing in the last integral y with −y. Moreover U0 = I and Ut+s = U (t)U (s) for all t. We consider the process For any ϕ ∈ Bb ([0.

∞) and solves the following Neumann problem   ut (t.3 The Ventzell problem V (t) = W (t ∧ τx ) + x. So x y2 ϕ(0) e− 2t dy.Markov property 59 where gt is defined by (4. ∞). +∞)).      u(0. s ≥ 0. x) = Zt ϕ(x) we see that u is the solution to the Ventzell problem. Set Zt ϕ(x) = E[ϕ(W (t ∧ τx ) + x)]. x) = 1 uxx (t. x ≥ 0. x) = Qt ϕ(x) is continuous in [0. Now it is easy to check that if ϕ ∈ Cb ([0. where x ≥ 0.7. t ≥ 0.   ut (t. {t≥τx } = {t<τx } since W (τx ) + x = 0. 0) = 0. 4. t ≥ 0    2   +∞ Zt ϕ(x) =  uxx (t.24). ∞) × [0. infinitely differentiable in (0. t > 0. x ≥ 0. +∞)). x ≥ 0. x) = ϕ(x). [gt (x − y) − gt (x + y)]ϕ(y)dy + √ 2πt −∞ 0 If ϕ ∈ Cb ([0. ϕ(B(t ∧ τx ) + x)dP ϕ(W (t) + x)dP + ϕ(0)dP. where Ut is defined by (4. t > 0. Let us consider the stochastic process. Zt ϕ(x) = Ω ϕ ∈ Bb ([0. x) = ϕ(x). x ≥ 0. So. 0) = 0. Moreover Q0 = I and Qt+s = Q(t)Q(s) for all t.    2    ux (t. +∞)) then u(t. x ≥ 0. . Therefore Zt ϕ(x) = Ut ϕ(x) + ϕ(0) P(T−x ≤ t). x). t ≥ 0.8). x) = 1 uxx (t.      u(0. setting u(t. x). ∞) × [0.

Moreover Z_0 = I and Z_{t+s} = Z_t Z_s for all t, s ≥ 0.
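The kernel representations obtained in this section are easy to evaluate numerically. The sketch below is only an illustration, not part of the original text: the test function ϕ, the truncation of the half-line and the quadrature rule are choices made here. It evaluates the Dirichlet formula of Proposition 4.26 and the analogous Neumann formula, and checks that the first vanishes at x = 0 while the second does not.

```python
import numpy as np

def g(t, z):
    """Heat kernel g_t(z) = (2*pi*t)^(-1/2) * exp(-z^2 / (2t))."""
    return np.exp(-z**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def dirichlet_u(t, x, phi, y):
    # u(t, x) = int_0^inf [g_t(x - y) - g_t(x + y)] phi(y) dy   (killed / absorbed at 0)
    return np.trapz((g(t, x - y) - g(t, x + y)) * phi(y), y)

def neumann_u(t, x, phi, y):
    # same formula with a plus sign: reflection at 0 instead of absorption
    return np.trapz((g(t, x - y) + g(t, x + y)) * phi(y), y)

phi = lambda y: np.exp(-y)          # a bounded test function on [0, +infinity)
y = np.linspace(0.0, 20.0, 4001)    # truncated half-line used for the quadrature

t = 0.5
for x in (0.0, 0.5, 2.0):
    print(x, dirichlet_u(t, x, phi, y), neumann_u(t, x, phi, y))
```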

we denote by Ft the σ-algebra generated by Ct and all P-null sets of Ω. 5. in (Ω.. The family of σ–algebras (Ft )t≥0 is increasing. for any t > 0 we denote by Ct the algebra of all cylindrical sets Ct1 .. T ]. B(tn )) ∈ A} where 0 ≤ t1 < . t ∈ [0. F . We call Ft .1. t ∈ [0. .. l 61 (5. P). An elementary process F (t). We say that a stochastic process F (t).Chapter 5 The Itˆ integral o In all this chapter B represents a Brownian motion in a probability space (Ω.A = {ω ∈ C0 : (B(t1 ). P) is a stochastic process of the form n F = i=1 Fi−1 1 [ti−1 . F . tn ≤ t and A ∈ B(Rn ). T ]. < tn . t ≥ 0 the natural filtration of B augmented with the null sets of P.1) . We denote by (Ft )t≥0 the completion of the natural filtration of B with all P-null sets of Ω. Moreover.1 Definition of Itˆ’s integral o Itˆ’s integral for elementary processes o Definition 5.. T ].. Similarly as in Chapter 4.1 Let T > 0.tn .··· .1 5. it is called the natural filtration of B.ti ) . is adapted to the Brownian motion B if F (t) is Ft -measurable for any t ∈ [0.

2 Assume that F ∈ EB (0.. n − 1. .4).62 The Itˆ integral o where n ∈ N. 1. Let us prove (5.3) is proved. Then I(F ) ∈ L2 (Ω. For any elementary process F (t). (5. P) and we have T E 0 T F (s)dB(s) 2 T =0 E(|F (s)|2 )ds.3) (5. T ]. it is independent of B(tj )−B(tj−1 ). (5. by Lemma 4. This property is needed to prove some basic identities (similar to those obtained for the Wiener integral) which allow to extend the integral to more general processes. . 0 = t0 < t1 < · · · < tn = T and Fi is Fti -measurable for any i = 0. Since Fj−1 is Fj−1 measurable.3). we define the Itˆ integral o setting T n I(F ) : = 0 F (s)dB(s) = i=1 Fi−1 (B(ti ) − B(ti−1 )).2) Obviously any elementary process is adapted. Therefore we have n E[I(F )] = j=1 E[Fj−1 ]E[B(tj ) − B(tj−1 )] = 0 and (5.. t ∈ [0. T )..3. Let us prove (5. Notice now that for j < k the random variable Fj−1 Fk−1 [B(tj ) − B(tj−1 )]. We have n E[|I(F )|2 ] = E j=1 |Fj−1 |2 [B(tj ) − B(tj−1 )]2 +2E j<k Fj−1 Fk−1 [B(tj ) − B(tj−1 )] [B(tk ) − B(tk−1 )] .4) E 0 F (s)dB(s) = 0 Proof. 2 Proposition 5. We have n E[I(F )] = j=1 E[Fj−1 (B(tj ) − B(tj−1 ))]. F .

1.3 Let F. The scalar product on Z is defined by T F. ·)|2 dt < ∞. . we have E [Fj−1 Fk−1 [B(tj ) − B(tj−1 )][B(tk ) − B(tk−1 )]] = E [Fj−1 Fj−1 [B(tj ) − B(tj−1 )]] E[B(tk ) − B(tk−1 )] = 0. (t. 2 2 2 a. ·)dt. B(0. T ) × F and such that T F ZT := E 0 |F (t. ·)F1 (t. 2 Exercise 5.Chapter 5 63 is Fk−1 –measurable and consequently is independent of B(tk ) − B(tk−1 ). T ] × Ω. F1 = E 0 F (t. b ∈ R. B(0. 5. Prove that T T T E 0 F (s)dB(s) 0 G(s)dB(s) = 0 E[F (s)G(s)]ds. Obviously any elementary process F belongs to Z. Therefore. It follows that E[|I(F )| ] = j=1 2 n E[|Fj−1 |2 ](tj − tj−1 ). T ) × F . dt × P) the Hilbert space of all (equivalence classes of) functions F : [0. T ] × Ω. T ). taking the expectation. which are measurable with respect to the product σ-algebra. G ∈ EB (0. ω) → F (t. as required.2 General definition of Itˆ’s integral o Let us denote by ZT := L2 ([0. Hint: Use the identity ab = 1 1 1 (a + b)2 − a2 − b2 . ω).
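The identities of Proposition 5.2 (zero mean and the isometry) and Exercise 5.3 can be illustrated by simulation. The sketch below is an added illustration, not part of the original text: the elementary process is chosen here as F_{i−1} = B(t_{i−1}), i.e. the Brownian path sampled at the left endpoint of each interval, and the sample mean and second moment of I(F) are compared with 0 and with Σ_i t_{i−1}(t_i − t_{i−1}), which for this choice equals the integral of E|F(s)|² over the grid.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_grid, n_samples = 1.0, 64, 50_000
t = np.linspace(0.0, T, n_grid + 1)
dt = np.diff(t)

# Brownian paths on the grid
dB = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_grid))
B = np.concatenate([np.zeros((n_samples, 1)), np.cumsum(dB, axis=1)], axis=1)

# elementary process F_{i-1} = B(t_{i-1}), so I(F) = sum_i F_{i-1} (B(t_i) - B(t_{i-1}))
F_left = B[:, :-1]
I = np.sum(F_left * dB, axis=1)

print("E[I(F)]   (Monte Carlo):", I.mean())           # close to 0
print("E[I(F)^2] (Monte Carlo):", (I**2).mean())
print("sum_i t_{i-1} (t_i - t_{i-1})  :", np.sum(t[:-1] * dt))   # E|B(s)|^2 = s
```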

6) E 0 F (s)dB(s) = 0 Moreover. We have b F (s)dB(s) in any E a F (s)dB(s) 2 = 0. T ) are called predictable.5) (5. c ∈ [0. the mapping T 2 EB (0. . b. T ] we have c b c F (s)dB(s) = a a F (s)dB(s) + b F (s)dB(s). T )). l with F Fa -measurable. (5. T ) in ZT . b a (5. P)F → 0 The Itˆ integral o F (s)dB(s). So. from Exercise 5. for any a. T ) ⊂ ZT → L2 (Ω. T E 0 T F (s)dB(s) 2 T =0 E(|F (s)|2 )ds.7) We can define in an obvious way the Itˆ integral o interval [a. Moreover. FT . Therefore it can be uniquely extended to the closure EB (0.3 it follows that if F and G are predictable square integrable processes we have T T E 0 F (s)G(s)dB(s) = 0 E[F (s)G(s)]ds. Note first that an elementary process is a linear combination of processes of the form F 1 [a. Let us now present a characterization of predictable processes (that is of 2 space EB (0. and b b E a F (s)dB(s) = a (E|F (s)|2 )ds. 2 Processes belonging to EB (0.b) . b] ⊂ [0.4). 2 is an isometry. the Itˆ integral can be uniquely defined by extension for any preo dictable square integrable process F (t).64 In view of (5. t ≥ 0 and the following properties are fulfilled. T ]. T ) 2 of EB (0.

t] ⊂ [0. A ∈ D and (A.Chapter 5 65 In turn each F can be approximated by linear combinations of characteristic functions of Fa -measurable sets. P). l We claim that D is a λ-system. Properties (B. Definition 5.1. [s. T ] × Ω. Denote by ΛT the closure of EB ([0. dt × P) can be approximated by a monotonic sequence of simple functions. Fs . We first note that R is a π-system. T ] × Ω. Now k=1 the conclusion follows by Theorem A. by the monotone convergence theorem. φn → φ = 1lA in L2 ([0.1)-(iii). dt×P). 2 Proposition 5. T ] and let ϕ ∈ L∞ (Ω. T ] × Ω. i.7 Let F ∈ L2 ([0. P.1)(i)-(ii) are clear. b) a predictable rectangle. it is natural to approximate a general predictable process by linear combinations of functions of the form 1 A×[a. P. (5. see Appendix A. T ] is a real random variable in the probability space ([0. P. l with A Fa measurable.8) Exercise 5.5 The closure EB ([0. P is called the σ-algebra of all predictable events.1). Let (An ) ⊂ D be mutually disjoint sets and set n φn = k=1 1 Ak . dt × P) such that T F (s)dB(s) = 0. that it fulfills (A. So. We call A × [a.4 A real predictable process in [0. P. P. dt × P). . P. 0 Show that F = 0. Prove that t t ϕ s F (r)dB(r) = s ϕ F (r)dB(r). P. T ]) in L2 ([0. T ] × Ω. T ]) is precisely L2 ([0.e. it is enough to show that 1lA ∈ ΛT for any A ∈ P. Since any element of L2 ([0. dt × P). We denote by R the family of all predictable rectangles and by P the σ-algebra generated by R. For this we shall use the Dynkin Theorem. dt × P). l Then. 2 Proof. T ]×Ω. T ] × Ω. So.b) . T ] × Ω. dt × P) where A = ∞ Ak .6 Let F ∈ L2 ([0. Then we set D = {A ∈ P : 1 A ∈ ΛT }. let us show (A.1)-(iii) is fulfilled. Exercise 5.

· · · . We recall that if F ∈ CB ([0. P). L2 (Ω)) then F (t) is Ft -measurable for all t ∈ [0. T ]. (5.2 Itˆ integral for mean square continuous o processes We shall denote by CB ([0. in L2 ([0.9) |σ|→0 Consequently we have T |σ|→0 lim Iσ (F ) = 0 F (s)dB(s) in L2 (Ω.66 The Itˆ integral o 5.. T ).10) Example 5.8 Let us prove that T 0 1 B(t)dB(t) = (B 2 (T ) − T ). 2 Clearly Fσ ∈ EB (0. tn } ∈ Σ(0. F . . P). F . T ) consider the elementary process n Fσ := j=1 F (tj−1 )1 [tj−1 .. T ) and. using the continuity of F one can check easily that lim Fσ = F. L2 (Ω)) the space of all stochastic processes which are mean square continuous and adapted. Write B(tk−1 )(B(tk ) − B(tk−1 )) = B(tk−1 )B(tk ) − B 2 (tk−1 )) 1 1 1 1 = − B 2 (tk ) + B(tk−1 )B(tk ) − B 2 (tk−1 ) + B 2 (tk ) − B 2 (tk−1 ) 2 2 2 2 = 1 2 1 1 B (tk ) − B 2 (tk−1 ) − (B(tk ) − B(tk−1 ))2 . P. T ] and the mapping [0. 2 2 2 .tj ) l and set T n Iσ (F ) := 0 Fσ (s)dB(s) = j=1 F (tj−1 )(B(tj ) − B(tj−1 )). T ] → L2 (Ω. t → F (t).11) Let σ = {t0 . For any decomposition σ = {t0 . t1 . T ]. t1 . is continuous. dt × P). (5. tn } ∈ Σ(0. 2 (5.. T ] × Ω.

Then we have

Iσ(B) = ½ B²(T) − ½ Σ_{k=1}^n (B(t_k) − B(t_{k−1}))².

Recalling that the quadratic variation of B is T (Theorem 3.19), we deduce that

∫_0^T B(t) dB(t) = lim_{|σ|→0} Iσ(B) = ½ (B²(T) − T),   in L²(Ω, F, P).

Exercise 5.9 Prove that

lim_{|σ|→0} Σ_{k=1}^n B(t_k)(B(t_k) − B(t_{k−1})) = ½ (B²(T) + T),   in L²(Ω, F, P),

and

lim_{|σ|→0} Σ_{k=1}^n B((t_k + t_{k−1})/2)(B(t_k) − B(t_{k−1})) = ½ B²(T),   in L²(Ω, F, P).

Therefore the definition of the Itô integral depends on the particular form of the integral sums.

5.3 The Itô integral as a stochastic process

Let F ∈ L²([0, T] × Ω, P, dt × P) and set

X(t) = ∫_0^t F(s) dB(s),   t ∈ [0, T].

We first notice that X(t), t ≥ 0, is not a process with independent increments in general (unless F is deterministic); take for instance

X(t) = ∫_0^t B(s) dB(s) = ½ (B²(t) − t),   t ≥ 0.

However, X(t), t ≥ 0, has orthogonal increments (in the sense of L²(Ω, F, P)), as the following result shows.

Proposition 5.10 Let 0 ≤ t₁ ≤ t₂ ≤ t₃ ≤ t₄ ≤ T. Then we have

E[(X(t₂) − X(t₁))(X(t₄) − X(t₃))] = 0.
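Example 5.8 and Exercise 5.9 can be seen numerically before proceeding: the limit of the integral sums depends on where the integrand is sampled in each interval. The sketch below is an added illustration (the path, the grid size and the seed are arbitrary choices); it evaluates the left-point, right-point and midpoint sums on one simulated path and compares them with (B²(T) − T)/2, (B²(T) + T)/2 and B²(T)/2 respectively.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 100_000                      # n partition intervals
t = np.linspace(0.0, T, 2 * n + 1)       # refine so that the midpoints lie on the grid
dB = rng.normal(0.0, np.sqrt(np.diff(t)))
B = np.concatenate([[0.0], np.cumsum(dB)])

Bk, Bk1, Bmid = B[0:-1:2], B[2::2], B[1::2]   # B(t_{k-1}), B(t_k), B((t_{k-1}+t_k)/2)
incr = Bk1 - Bk

left  = np.sum(Bk   * incr)   # Ito sum         ->  (B(T)^2 - T) / 2
right = np.sum(Bk1  * incr)   # right endpoints ->  (B(T)^2 + T) / 2
mid   = np.sum(Bmid * incr)   # midpoints       ->   B(T)^2 / 2   (Stratonovich value)

BT = B[-1]
print(left,  (BT**2 - T) / 2)
print(right, (BT**2 + T) / 2)
print(mid,    BT**2 / 2)
```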

Moreover. Then X ∈ CB ([0. t ∈ [0. is a continuous process. T ]. T ]. For this we first prove that it is a martingale with respect to the filtration (Ft ) (see Appendix C). Proposition 5. The conclusion follows. Since t X(t) − X(s) = s F (r)dB(r).11 Let F ∈ L2 ([0. Proof.t2 ] 1 [t3 .12 X(t). We show now that X(t). X(t) ∈ L2 (Ω. P). so that t→t0 lim E(|X(t) − X(t0 )|2 ) = 0. T ] we have t E(|X(t) − X(t0 )|2 ) = t0 E(|F (r)|2 )dr . T ]. Ft .t4 ] E(F 2 (s))ds = 0. We know that for any t ∈ [0. is mean square continuous. dt×P).7) E[(X(t2 ) − X(t1 ))(X(t4 ) − X(t3 ))] t2 t4 The Itˆ integral o =E t1 T F (s)dB(s) t3 F (s)dB(s) T =E 0 T 1 [t1 . we have E[X(t)|Fs ] = X(s) + E s t F (r)dB(r)|Fs . t0 ∈ [0. for any t. We have in fact. t ≥ 0. L2 (Ω)).t2 ] F (s)dB(s) l 0 1 [t3 . l l We are going to show that X(t). taking into account (5. Let t > s. is a Ft –martingale Proof. then that it is a continuous process. Proposition 5.68 Proof. T ]×Ω.t4 ] F (s)dB(s) l = 0 1 [t1 . t ≥ 0. P. .

t (5. F . Theorem 5. T ] × Ω. It is enough to prove (5. T ]. P.ti ) . n ∈ N. Then X has a continuous version and T E sup |X(t)|2 ≤ 4 t∈[0.12) Notice that this is not obvious since s F (r)dB(r) is not independent of Fs in general (1) . dt × P) t Fn (s)dB(s).13 Let F ∈ L2 ([0. T ]. because F (r) contains in general the “story” of the Brownian motion from 0 to r. So. P). dt × P) and let t X(t) = 0 F (s)dB(s). n F = i=1 Fi−1 1 [ti−1 .Chapter 5 So. T ) such that Fn → F and set Xn (t) = 0 (1) in L2 ([0.T ] 0 E|F (s)|2 ds.12) is proved and the conclusion follows. . since Fi−1 is Fi−1 –measurable and B(ti ) − B(ti−1 ) is independent of Fi−1 . t ∈ [0. it remains to prove that t 69 E s F (r)dB(r)|Fs = 0. (5. l where s = t1 . T ] × Ω. We are now ready to prove the continuity of X. t ∈ [0.13) 2 Proof. we write t n E s n F (r)dB(r)|Fs = i=1 E[Fi−1 (B(ti ) − B(ti−1 ))|Fs ] = i=1 E{E[Fi−1 (B(ti ) − B(ti−1 ))|Fi−1 ]|Fs } = 0.12) when F is an elementary process. · · · . tn = t and Fi−1 ∈ L2 (Ω. In this case. P. Let (Fn ) ⊂ EB (0. (5. taking into account that Fs ⊂ Fi−1 .

So. A nonnegative extended random variable τ in (Ω. .14 Let τ be a stopping time.2. ω ∈ Ω.8 and 4. F.70 The Itˆ integral o Since B(t) is continuous it is clear that Xn (t) is continuous for all n ∈ N. is a continuous Ft –martingale.T ] T ≤ 4E(|Xn (T ) − Xm (T )|2 ) = 4E 0 |Fn (s) − Fm (s)|2 ds . 5. To any stopping time τ we associate the σ-algebra Fτ : = {A ∈ F : A ∩ {τ ≤ t} ∈ Ft for all t ≥ 0}. Consequently (Xn )(ω) is Cauchy in C([0. The proofs of the two following propositions are completely similar to that of Proposition 4. t ≥ 0 is a Brownian motion in (Ω.15 Let τ be a stopping time and set W (τ )(ω) = W (τ (ω))(ω). Then by Corollary C.4.1 Itˆ integral with stopping times o Stopping times We proceed here as in Section 4. T ]) for almost all ω and its limit. Then W (τ ) is Fτ -measurable and W (t + τ ) − W (τ ). m ∈ N E sup |Xn (t) − Xm (t)|2 t∈[0.4 5. Taking into account Proposition 5.6 it follows that for any n. T ].8. they will be omitted. P) is called a stopping time with respect to the filtration (Ft )t≥0 if {τ ≤ t} ∈ Ft for all t ≥ 0.12 we see that X(t). F. Proposition 5. Then there exists a decreasing sequence (τn ) of discrete stopping times convergent pointwise to τ such that Fτn ⊃ Fτ for all n ∈ N. P). which coincides with X(ω) is continuous. t ∈ [0. Proposition 5.

. 0 where X(τ.. t ∈ [0.. τ (Ω) = (t1 . tn ). Arguing as in Proposition 5. ω ∈ Ω.Chapter 5 71 5.15 and using the fact that X(t). Proposition 5. n.. with 0 < t1 < t2 < · · · < tn ≤ T . t ∈ [0. . i = 1. . It is enough to prove the result when τ is of the form. Set Ai := {τ = ti }. ¯ Consider now the stochastic process h(s) = 1 {s≤τ } . s ∈ [0.16 Let F ∈ L2 ([0. Then we have τ T F (s)dB(s) = 0 0 1 {s<τ } F (s)dB(s). t1 ).. . s ∈ [0. Define τ F (s)dB(s) : = X(τ ).. l We have h(s) = 1. If s ∈ [t1 . dt × P) and let τ ≤ T be a stopping time.14) Proof. T ]. t2 .4. λ × P) and set X(t) = 0 F (s)dB(s). t2 ) we have h(s)(ω) = 1 if ω ∈ A2 ∪ · · · ∪ An . T ]. Let moreover τ ≤ T be a stopping time. ω) = X(τ (ω). has a continuous version.. T ] × Ω. T ]. .2 Itˆ’s integral with stopping times o t Let F ∈ L2 ([0. one can see that X(τ ) is Fτ –measurable... l (5. Then Ai ∈ Fti . P. ω). P. i = 1. The following result reduces a Itˆ’s integral with a stopping time to a o usual one between 0 to T . T ] × Ω. n.

t≥0 Let m ∈ N be fixed and consider a standard m-dimensional Brownian motion in the probability space (Ω. tk ) with k ≤ n we have h(s) = 1 (Ak ∪..5 Multidimensional Itˆ integrals o B(t) = (B1 (t). g ∈ L2 ([0. i = 1. d. dt × P). if s ∈ [tk−1 . Rd ) (that is such that any matrix element belongs to L2 ([0.j (t)dBj (t).72 so that h(s) = 1 A2 ∪···∪An = 1 Ac . l 5. P)... .. L(Rm . T ]×Ω. dt × P. . l Then h is predictable and T t1 t2 The Itˆ integral o 1 {t<τ } F (s)dB(s) = l 0 0 F (s)dB(s) + 1 (A1 )c l t1 tn F (s)dB(s) + · · · + 1 (A1 ∪A2 ∪···∪An−1 l )c tn−1 F (s)dB(s) = X(t1 ) + 1 (A1 )c (X(t2 ) − X(t1 )) l + · · · + 1 (A1 ∪A2 ∪···∪An−1 )c (X(tn ) − X(tn−1 ) = X(τ ).T ] be the natural filtration of B (augmented with all P-null sets of Ω) . We shall define the Itˆ integral for predictable processes with values o in L(Rm . Bm (t)).∪An )c . (5. T ] × Ω. P. dt×P)).. F . Rd )).. . We define the Itˆ o integral of F as the d-dimensional process T m T F (t)dB(t) 0 i = j=1 0 Fi. First we need a lemma whose simple proof is left to the reader. P. m. T ] × Ω.. dt×P. Let (Ft )t∈[0. j = 1.j 0 E[f (s)g(s)]ds. We shall denote this space by L2 ([0. Rd ))). P..17 Let f. i. L(Rm . Lemma 5. P. l l 1 Similarly.15) Let now F ∈ L2 ([0.. T ] × Ω.. . Then we have T T T E 0 f (s)dBi (s) 0 g(s)dBj (s) = δi..

16). dt × P. taking into account (5.16) where Tr denotes the trace..Chapter 5 73 Proposition 5. L(Rm . Remark 5. Rd )).17) .. (5.19 Assume that d = 1 so that L(Rd .15). T ] × Ω. Then we have T 2 T E 0 F (t)dB(t) = 0 E[Tr (F (t)F ∗ (t))]dt.j (t)2 ]dt. (5. d. i = 1.16) reduces to T 2 T E 0 F (t).18 Let F ∈ L2 ([0. dB(s) 0 and formula (5. · · · .j (t)dBj (t). which yields (5. Fm ). d m 0 T E|I(F )| = i=1 j=1 2 E[Fi. In this case we shall write the Itˆ integral of F as o T F (s). Rm ) is isomorphic to Rm and F becomes a vector F = (F1 . P. dB(t) = 0 E|F (t)|2 dt.j (t)dBj (t) and. Set I(F ) = T 0 F (t)dB(t). Then we have m T (I(F ))i = j=1 0 Fi. It follows that d m T 2 E|I(F )|2 = i=1 E j=1 0 Fi. Proof.. .


For any k ∈ N we denote by Cb (R) the linear space of all real mappings which are uniformly continuous and bounded tok gether with their derivatives of order less or equal to k. x∈R and k ϕ k = ϕ 0 + j=1 sup |Dj ϕ(x)|. T ] × Ω. (6. X is adapted. o Given a regular real function ϕ. x∈R 75 . σ ∈ L2 ([0. If ϕ ∈ Cb (R) we set ϕ 0 = sup |ϕ(x)|.Chapter 6 The Itˆ formula o 6. We are given two stochastic processes b. F . We set dX(t) = b(t)dt + σ(t)dB(t) and call dX(t) the Itˆ differential of X.1 Introduction Let (Ω. continuous and continuous in mean square.1) where x ∈ R. B a real Brownian motion. o k We need some notations. dt × P) and consider the stochastic process t t X(t) = x + 0 b(s)ds + 0 σ(s)dB(s). P. we are going to give a meaning to the Itˆ’s differential ϕ (X(t)). (Ft )t≥0 the natural filtration of B augmented with the null sets of P and P the σ-algebra of all predictable events (also augmented with the null sets of P). t ≥ 0. P) be a probability space.

3) t ≥ 0. Remark 6.2 Let F ∈ CB ([0. Put (dB)2 = dt and neglet the terms of order greater than dt. Tthe following result on quadratic sums of a process is a generalization of Theorem 3. P)) and let η = {0 = t0 < t1 < · · · < tn = T } ∈ Σ(0.19. F .4) 1 2 σ (t)ϕ (X(t)) + b(t)ϕ (X(t)) dt. Then we have n |η|→0 T lim F (tk−1 )(B(tk ) − B(tk−1 )) = k=1 0 2 F (s)ds in L2 (Ω. L2 (Ω. We shall write (6. Writing (dB)2 = dt is justified by Lemma 6. F . 2 t ≥ 0. T ]. o t Chapter 6 ϕ(X(t)) = ϕ(x) + 0 t ϕ (X(s))σ(s)dB(s) (6. Set Jη := n F (tk−1 )(B(tk ) − B(tk−1 ))2 . T ).2 below. also as ϕ (X(t)) = ϕ (X(t))dX(t) + 1 2 σ (t)ϕ (X(t))dt. (6.2) in the differential form. setting ϕ (X(t)) = ϕ (X(t))σ(t)dB(t). 2 + 0 t ≥ 0.76 We shall prove the following Itˆ’s formula. that is terms with (dt)2 and dt dB(t). P) (6. k=1 .1 One can deduce formally Itˆ’s formula by proceeding as folo lows.5) Proof. Lemma 6.2) 1 2 σ (s)ϕ (X(s)) + b(s)ϕ (X(s)) ds. + or. 2 (6. Write dX = b(t)dt + σ(t)dB and dϕ(X) = ϕ(X + dX) − ϕ(X) = ϕ (X)dX + = ϕ (X)dX + 1 2 1 2 ϕ (X)(dX)2 ϕ (X)b2 (t)(dt)2 + 2b(t)σ(t)dt dB + σ 2 (t)(dB)2 .

.The Itˆ formula o It is enough to prove that  |η|→0 77 n 2   = 0. (6.7) = k=1 E|F (tk−1 )|2 E |B(tk ) − B(tk−1 )|2 − (tk − tk−1 ) 2 . the last sum vanishes. To prove (6. P). obviously n |η|→0 T lim F (tk−1 )(tk − tk−1 ) = k=1 0 F (s)ds in L2 (Ω.6) lim E  Jη − k=1 F (tk−1 )(tk − tk−1 ) since.6) write  E  Jη − n 2   2 F (tk−1 )(tk − tk−1 ) k=1  = E n   F (tk−1 ) |B(tk ) − B(tk−1 )|2 − (tk − tk−1 ) k=1 n = k=1 E |F (tk−1 )|2 |B(tk ) − B(tk−1 )|2 − (tk − tk−1 ) n 2 +2 j<k=1 E F (tj−1 )[|B(tj ) − B(tj−1 )|2 − (tj − tj−1 )] F (tk−1 )[|B(tk ) − B(tk−1 )|2 − (tk − tk−1 )] Since the Brownian motion has independent increments. F . so that   n 2 E  Jη − k=1 n F (tk−1 )(tk − tk−1 )  = k=1 n E |F (tk−1 )|2 |B(tk ) − B(tk−1 )|2 − (tk − tk−1 ) 2 (6.

σi are Fti -measurable for any i = 0. b and σ given by (6. p p b= i=1 bi−1 1 [λi−1 . Now we are in position to prove Itˆ’s formula. we have  E  Jη − k=1 n n 2 Chapter 6   F (tk−1 )(tk − tk−1 ) =2 k=1 E[|F (tk−1 )|2 ](tk − tk−1 )2 n ≤ 2|η| k=1 E[|F (tk−1 )|2 (tk − tk−1 )] → 0.1).8) and X by (6. Now.. . E[|B(tk ) − B(tk−1 )|4 ] = 3(tk − tk−1 )2 . 2 Lemma 6.78 since F (tk−1 ) and B(tk ) − B(tk−1 ) are independent.2) when 3 ϕ ∈ Cb (R). 1.λi ) . t ∈ [0. Let η = {t0 = 0 < t1 < · · · < tN = t}.. In this case we have b(t) = b0 . as |η| → 0. p − 1.λi ) . λ1 ] and X(t) = b0 t + σ0 B(t). Then identity (6. t] with t ≤ λ1 .2) in [0.3 Let ϕ ∈ Cb (R). First we assume that b o and σ are elementary processes.8) where p ∈ N.. l σ= i=1 σi−1 1 [λi−1 . We start by proving (6. The conclusion follows. x ∈ R. . 3 2 Proof. l (6. σ(t) = σ0 .2) holds. Since Cb (R) is dense in Cb (R) it is enough to show (6. 0 = λ0 < λ1 < · · · < λp and bi . taking into account that E[|B(tk ) − B(tk−1 )|2 ] = (tk − tk−1 ). t ∈ [0. λ1 ]. Then we obviously have N ϕ(X(t)) − ϕ(x) = k=1 [ϕ(X(tk )) − ϕ(X(tk−1 ))].

3 .1 | ≤ 1 ϕ 2 |b0 |2 (tk − tk−1 )2 → 0 as |η| → 0 2 k=1 N .2 = 0 in L1 (Ω.9) Concerning I1 we have N I1 = k=1 ϕ (X(tk−1 ))(b0 (tk − tk−1 ) + σ0 (B(tk ) − B(tk−1 )).12) In fact |I2. (6. P) |η|→0 |η|→0 (6.1 = lim I2.11) It is easy to check that lim I2. P). F .2 + I2. t |η|→0 t lim I1 = 0 ϕ (X(s))b(s)ds + 0 ϕ (X(s))σ(s)dB(s) in L2 (Ω. (6. using Taylor’s formula we can write N 79 ϕ(X(t)) − ϕ(x) = 1 2 k=1 N ϕ (X(tk−1 ))(X(tk ) − X(tk−1 )) ϕ (X(tk−1 ))(X(tk ) − X(tk−1 ))2 + Rη k=1 + =: I1 + I2 + I3 .The Itˆ formula o On the other hand.1 + I2.10) Concerning I2 we write N 2I2 = k=1 ϕ (X(tk−1 ))b2 (tk − tk−1 )2 0 N +2 k=1 N ϕ (X(tk−1 ))b0 σ0 (tk −k−1 )(B(tk ) − B(tk−1 )) + k=1 2 ϕ (X(tk−1 ))σ0 (B(tk ) − B(tk−1 ))2 =: I2. F . (6. So.

where ξk = (1 − ξ)X(tk−1 ) + ξX(tk ). N |Rη | ≤ 3 ϕ 3 |b0 |3 k=1 (1) |tk − tk−1 |3 + 3 ϕ 3 |σ0 |3 k=1 |B(tk ) − B(tk−1 )|3 since E|B(t)| ≤ [E|B 2 (t)|]1/2 = t1/2 . |ϕ (ξk ) − ϕ (X(tk−1 ))| ≤ ϕ 0 (1 − ξ)|X(tk ) − X(tk−1 )|. by Lemma 6.14) Let us prove (6.2 it follows that t |η|→0 lim 2I2.80 and (1) N Chapter 6 E|I2. 3 Since ϕ ∈ Cb (R) we have by the mean value theorem.3 = 0 ϕ (X(s))σ 2 (s)ds in L2 (Ω.14).13) So. so that.2 | ≤ ϕ 2 |b0 | |σ0 | k=1 N (tk − tk−1 )E|B(tk ) − B(tk−1 )| ≤ ϕ 2 |b0 | |σ0 | k=1 (tk − tk−1 )3/2 → 0 as |η| → 0. P). We have N 1 Rη = k=1 0 (1 − ξ)[ϕ (ξk ) − ϕ (X(tk−1 ))](X(tk ) − X(tk−1 ))2 dξ. F . . the conclusion will follow provided |η|→0 lim E|Rη | = 0. (6. we deduce setting 1 − ξ ≤ 1. N |Rη | ≤ ϕ Consequently N 3 k=1 |X(tk ) − X(tk−1 )|3 . (6. Moreover.

dt × P) and ϕ ∈ Cb (R). (2) Since E|B(t)|3 ) ≤ [E(B(t)6 )]1/2 = √ 15. T ].16) 1 σj (s)ϕ (Xj (s)) + bj (s)ϕ (Xj (s)) ds.The Itˆ formula o and so (2) 81 . Then identity (6.2) we have t ϕ(Xj (t)) = ϕ(x) + 0 t ϕ (Xj (s))σj (s)dB(s). Let (bj ) and (σj ) be sequences of elementary processes such that j→∞ lim bj = b. . (6. N 3 k=1 E(|Rη |) ≤ 3 ϕ 3 |b0 | |tk − tk−1 | + 3 ϕ 3 |σ0 | 3 3 √ N 15 k=1 |tk − tk−1 |3/2 → 0. σ ∈ L2 ([0. The proof is complete when t ≤ λ1 . The general case can be treated in the same way taking into account that bk−1 and σk−1 are independent of B(tk ) − B(tk−1 ). P. P.15) Then we have (see (5.2) holds for all t ∈ [0. T ]. T ] × Ω. (6. s ∈ [0. Set. We finally prove 2 Theorem 6. dt × P). Proof.10)) j→∞ lim Xj = X in CB ([0. t t Xj (t) = x + 0 bj (s)ds + 0 σj (s)dB(s). L2 (Ω)). 2 + 0 Now the conclusion follows by the dominated convergence theorem letting j → ∞. T ] × Ω. for any j ∈ N. Taking expectation in the Itˆ formula we find a useful identity which o allows to estimate the expectation of ϕ(X(t)).4 Let x ∈ R. b. Moreover by (6. as |η| → 0. j→∞ lim σj = σ in L2 ([0. T ].
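Before taking expectations, formula (6.2) itself can be checked pathwise on a simulated trajectory. The sketch below is only an illustration, not part of the original argument: constant b and σ, the test function ϕ = sin and the Euler grid are choices made here. It compares ϕ(X(t)) − ϕ(x) with the discretised right-hand side of (6.2), where the integrands are sampled at the left endpoints, as in the Itô sums.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 1.0, 200_000
dt = T / n
x, b, sig = 0.3, 0.5, 0.8                 # constant coefficients, chosen only for the test
phi, dphi, d2phi = np.sin, np.cos, lambda z: -np.sin(z)

dB = rng.normal(0.0, np.sqrt(dt), n)
X = x + np.cumsum(b * dt + sig * dB)
X = np.concatenate([[x], X])              # X(t_k) on the grid, X(0) = x

Xl = X[:-1]                               # left endpoints
lhs = phi(X[-1]) - phi(x)
rhs = (np.sum(dphi(Xl) * sig * dB)                                    # int phi'(X) sigma dB
       + np.sum((0.5 * sig**2 * d2phi(Xl) + b * dphi(Xl)) * dt))      # drift part of (6.2)

print(lhs, rhs)   # the two numbers agree up to the discretisation error
```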

T ]. T ] × Ω.17) holds. σ ∈ L2 ([0.6 Assume that x ∈ R.17) 6.19) becomes t E 0 |σ 2 (s) + 2X(s)b(s)|ds < +∞ which is clearly fulfilled. dt × P) and 2 ϕ ∈ Cb (R). 0 (6. 2 Cb (R) Proof of Proposition 6. b.6.1. b. t ∈ [0. T ]. Then E[ϕ(X(t))] = ϕ(x) + 1 E 2 t [ϕ (X(s))σ 2 (s) + 2ϕ (X(s))b(s)]ds. t ∈ [0. Then t E(|X(t)|2 ) = |x|2 + E 0 (σ 2 (s) + 2X(s)b(s))ds. dt × P) and ϕ ∈ C 2 (R).18) and assume in addition that t E 0 |ϕ (X(s))σ 2 (s) + 2ϕ (X(s))b(s)|ds < +∞. (6.17) also holds without the assumption that ϕ is bounded.1 The Itˆ formula for unbounded functions o We want now to show that formula (6.5 Assume that x ∈ R. Set t t X(t) = x + 0 b(s)ds + 0 σ(s)dB(s). (6. ϕR (x) = 0 if |x| ≥ R + 1. Example 6. σ ∈ L2 ([0.19) Then E[ϕ(X(t))] < +∞ and (6. provided the integrand in the right hand side is summable. P. P. Then condition (6. T ] × Ω.7 Take ϕ(x) = x2 .82 Chapter 6 Proposition 6. . Let t t X(t) = x + 0 b(s)ds + 0 σ(s)dB(s). For any R > 0 consider a function ϕR ∈ such that ϕ(x) if |x| ≤ R. Proposition 6.

  t∈[0. yields for any R > 0 o ϕR (X(t)) − ϕ(x) = 1 2 + 0 t 83 [ϕR (X(s))σ 2 (s) + 2ϕR (X(s)b(s)]ds 0 (6. Then we have τR (ω) = T for all R > M (ω). l 2 0 Now. by the assumption (6. applying Itˆ’s formula (6.23) 1 s<(t∧τR ) [ϕ (X(s))σ (s) + 2ϕ (X(s)b(s)]ds. say M (ω).s.21) Now.20) t ϕR (X(s)))σ(s)dB(s).21) and the dominated convergence theorem. T 0 2m F (s)dB(s) .6 let us estimate E where F is predictable and m ∈ N.16 we can write ϕ(X(t ∧ τR )) − ϕ(x) = 1 2 + 0 t 1 s<(t∧τR ) [ϕ (X(s))σ 2 (s) + 2ϕ (X(s)b(s)]ds l 0 t 1 s<(t∧τR ) ϕ (X(s)))σ(s)dB(s). X(·. Let now τR be the stopping time   inf{t ∈ [0.T ]   T  if sup |X(t)| < R.19). t∈[0. For such a ω. We know that X(·. So. (6.2) to ϕR (X(t)). R→∞ lim τR = T P–a. in view of Proposition 5.22) Taking expectation we obtain E[ϕ(X(t ∧ τR ))] − ϕ(x) 1 = E 2 t (6.The Itˆ formula o Then. T ] : |X(t)| ≥ R} if sup |X(t)| ≥ R.. ω) attains the maximum. As an application of Proposition 6.T ] τR = It is clear that τR is increasing and bounded by T . m > 1. we can let R → ∞ obtaining the conclusion. l (6. ω) is continuous for almost all ω ∈ Ω. (6.

P. T ].24) Proof. T ] × Ω. L(Rm .6 we have t E[|X(t)|4 ] = 6E 0 |X(s)|2 |F (s)|2 ds . t ∈ [0. Substituting this in (6. P. T ] × Ω. dt × P. (6. P. We can now easily iterate the previous argument taking successively m = 3. setting ϕ(x) = x4 . Set t t X(t) = x + 0 b(s)ds + 0 σ(s)dW (s). (6. 6.8 Assume that F ∈ L2m ([0. P. T ] × Ω. We start from the case m = 2. It is enough to prove (6.26) From which 0 E|X(t)|4 dt ≤ 36T 2 0 E|F (t)|4 dt. So. Rd )). t ∈ [0. and set t X(t) = 0 F (s)dB(s).19) holds so that.24) is proved for m = 2. 4 and so on. P.25) Integrating between 0 and T . by Proposition 6. T ] × Ω. P. dt × P) is dense in L2m ([0. b ∈ L2 ([0. yields T T 1/2 T 1/2 E|X(t)| dt ≤ 6T E 0 0 T 4 |X(t)| dt T 4 E 0 |F (t)| dt 4 . (6. T ] × Ω. Rd ) and σ ∈ L2 ([0. m ∈ N. Assume that x ∈ Rd . T ] .25) yields T E[|X(t)|4 ] ≤ 36T E 0 |F (t)|4 dt. dt × P) and we have T E[|X(T )| 2m ] ≤ [m(2m − 1)] T m m−1 0 E |F (t)|2m dt. Then X ∈ L2m ([0. m ∈ N.24) when F is bounded (because L∞ ([0. (6. By H¨lder’s inequality it follows that o t 1/2 t 1/2 E[|X(t)|4 ] ≤ 6 E 0 |X(s)|4 ds E 0 |F (s)|4 ds . dt × P. Then (6. T ] × Ω.2 Itˆ’ formula for a vector valued process o Let d. dt × P). dt × P)).84 Chapter 6 Proposition 6.

T ]. = δi.2.27) in the differential form ϕ (X(t)) = Dϕ(X(t)).29) f (s)ds. o t 85 ϕ(X(t)) = ϕ(x) + 0 t Dϕ(X(s)). Let us start with a preliminary lemma. We shall write (6. σ(t)dB(t) + 1 Tr[(σσ ∗ )(t)D2 ϕ(X(t))] + b(t).29) follows from Lemma 6.9 Let f ∈ CB ([0.27) for all t ∈ [0. (6. . P). (6. L2 (Ω)) and let i. t ≥ 0. So. Let η = {0 = t0 < t1 < · · · < tn = T } be a decomposition of [0. T ]. Then we have n σ E[(Ii. σ(s)dB(s) . F . (6. Then we have n |σ|→0 lim f (tk−1 )(Bi (tk ) − Bi (tk−1 ))(Bj (tk ) − Bj (tk−1 )) k=1 T (6. 1 Tr[(σσ ∗ )(s)D2 ϕ(X(s))] + b(s). Let i = j and set n η Ii. Dϕ(X(s)) 2 + 0 ds. If i = j.j )2 ] = E h.The Itˆ formula o We are going to prove the following Itˆ’s formula. m}.. Dϕ(X(t)) 2 dt. T ].j := k=1 f (tk−1 )(Bi (tk ) − Bi (tk−1 ))(Bj (tk ) − Bj (tk−1 )). 2. j ∈ {1..j 0 Proof. we shall only sketch some points of the proof.. in L2 (Ω.28) The proof is similar to that of the one-dimensional case seen before.k=1 f (th−1 )f (tk−1 )(Bi (th ) − Bi (th−1 ))(Bj (th ) − Bj (th−1 )) × (Bi (tk ) − Bi (tk−1 ))(Bj (tk ) − Bj (tk−1 )) n =E h=1 f 2 (th−1 )(Bi (th ) − Bi (th−1 ))2 (Bj (th ) − Bj (th−1 ))2 n = h=1 E(f 2 (th−1 ))(th − th−1 )2 → 0. Lemma 6.

p − 1.. h. Now we prove Itˆ’s formula when b and σ are elementary processes as. Fti .86 Chapter 6 as |σ| → 0.. We use the notations Dϕ(x)h = Dϕ(x). 2 Lemma 6. taking ϕ ∈ Cb (Rd ) and proving (6.30).30) where p ∈ N. 1. k for all x.10 Let ϕ ∈ Cb (Rd ). t] with t ≤ λ1 . by Taylor’s formula we can write N ϕ(X(t)) − ϕ(x) = k=1 Dϕ(X(tk−1 ))..λi ) . (3) On the other hand. l σ= i=1 σi−1 1 [λi−1 . 0 = λ0 < λ1 < · · · < λp bi ∈ L2 (Ω. Then identity (6. λ1 ] t ∈ [0. h and D2 ϕ(x)(h. Rd ) and σi ∈ L2 (Ω. Let η = {t0 = 0 < t1 < · · · < tN = t}.31) Concerning I1 we have N I1 = k=1 (3) Dϕ(X(tk−1 )).λi ) .27) holds. Rd )) i = 0. Fti . and σ(t) = σ0 . o p p b= i=1 bi−1 1 [λi−1 . 3 Proof. L(Rm . (6. . k ∈ Rd . P. b0 (tk − tk−1 ) + σ0 (B(tk ) − B(tk−1 ) . t ∈ [0. x ∈ Rd and let b and σ given by (6. .6) in [0. X(tk ) − X(tk−1 ) + 1 2 N D2 ϕ(X(tk−1 ))(X(tk ) − X(tk−1 )). We have b(t) = b0 . X(tk ) − X(tk−1 ) + Rη k=1 =: I1 + I2 + I3 . k) = D2 ϕ(x)h. X(t) = b0 t + σ0 B(t). Then we obviously have N ϕ(X(t)) − ϕ(x) = k=1 [ϕ(X(tk )) − ϕ(X(tk−1 ))]. P. We proceed as in the proof of Lemma 6. λ1 ].3. l (6.

k=1 i. taking into account Lemma 6. we see that |η|→0 lim E|Rη | = 0.3 = = D2 ϕ(X(tk−1 ))(σ(B(tk ) − B(tk−1 ))). σ(s)dB(s) in L2 (Ω.β (s)ds 0 i. P) |η|→0 (6.j=1 α.1 = lim I2.α (Bα (tk ) − Bα (tk−1 )) σi. σ0 (B(tk )−B(tk−1 )) =: I2.2 = 0 in L1 (Ω. proceeding as before.α (s)σi.1 +I2.3 = = 0 Tr [D2 ϕ(X(s))(σσ ∗ (s))]ds.j ϕ σi.32) Concerning I2 we write N 2I2 = k=1 D2 ϕ(X(tk−1 ))b0 . b(s) ds+ 0 Dϕ(X(s)). we have N 2I2.34) Moreover.The Itˆ formula o So. (6. σ0 (B(tk ) − B(tk−1 )) (tk − tk−1 ) + k=1 D2 ϕ(X(tk−1 ))σ0 (B(tk )−B(tk−1 )).2 +I2. b0 (tk − tk−1 )2 N +2 k=1 N D2 ϕ(X(tk−1 ))b0 .j=1 α=1 t lim 2I2. σ(B(tk ) − B(tk−1 )) k=1 N d m 2 Di.j ϕ(X(s)) σi.33) It is easy to check that |η|→0 lim I2.β (Bβ (tk ) − Bβ (tk−1 )).3 . t |η|→0 t 87 lim I1 = 0 Dϕ(X(s)).9 we have t |η|→0 d m 2 Di.β=1 Therefore. (6.35) . Now. (6. F . F . P).

σ(t) dt. Exercise 6. m..4 we obtain the result Theorem 6. b. dt × P). m = 1 bi .. d. σm (t)). Finally. P. 2 = . Exercise 6. P. proceeding as we did for the proof of Theorem 6. T ] × 2 Ω.13 Let d ∈ N.. σd ). .12 Let d = 1. 2. Let moreover ϕ ∈ Cb (Rd ). Set X(t) = b(t)dt + σdB(t). dX(t) + 1 D2 ϕ(X(t))σ(t). m ∈ N.. Then identity (6. dt × P : Rd ).36) dϕ(X(t)) = ϕ (X(t))dX(t) + where σ(t) = (σ1 (t).. dt × P). .. σ ∈ L2 ([0.. The general case can be treated in the same way taking into account that bk−1 and σk−1 are independent of B(tk ) − B(tk−1 ). P.11 Let b ∈ L2 ([0.. Rd )). σk ∈ L2 ([0. i = 1.. σi ∈ L2 ([0. Prove that dϕ(X(t)) = Dϕ(X(t)). k = 1. 2 where σ = (σ1 . x ∈ Rd and ϕ ∈ Cb (Rd ).. Let ϕ ∈ 2 Cb (R). . dt × P : L(Rm . T ] × Ω.88 Chapter 6 The proof is complete when t ≤ λ1 . 2 (6.27) holds for any t ∈ [0.37) . 2 (6.. i = 1. T ]. T ] × Ω. Prove that 1 ϕ (X(t))|σ(t)|2 dt. T ] × Ω. Set m t t X(t) = 0 b(s)ds + k=1 0 σk (s)dBk (s).. P.

T ). (7. Rd )) that fulfills equation (7. This suggests to endow L(Rr . (7. t ∈ [s. L (Ω. d and an r-dimensional standard Brownian motion B(t). T ] × Rd → Rd and σ : [0. R ))) and 0 ≤ a < b ≤ T. t ≥ 0.3) 89 . P. T ] × Rd → L(Rr . Let us consider the following integral equation t t X(t) = η + s b(u. Rd ). X(t))dt + σ(t.1) where s ∈ [0. F . b : [0. T ]. We shall write (7.2)  X(s) = η. P). b is called the drift and σ the diffusion coefficient of the equation. based on the identity b 2 b E a G(t)dB(t) 2 r = a d E [Tr (G(t)G∗ (t))] dt.1) on the interval [s. 2 b S ∈ L(Rr .1). T ] we mean a function X ∈ CB ([s. We denote by (Ft )t≥0 the natural filtration of B(t) (augmented with all P-null sets of Ω). Rd ) with the Hilbert–Schmidt norm. L2 (Ω. By a solution of equation (7. setting S and to write b HS : = [Tr(SS ∗ )]1/2 .1) in differential form as   dX(t) = b(t. X(u))dB(u). T ].Chapter 7 Stochastic evolution equations We are given two positive integers r. T ]. X(t))dB(t). X(u))du + s σ(u. Fs .1) we shall use a fixed point argument. (7. in a probability space (Ω. Rd ). η ∈ L2 (Ω. L(R . for all G ∈ CB ([0. Rd ) G(t) 2 HS E a G(t)dB(t) = a E dt. In order to solve (7.

Fs . (7.1) are the following.90 Chapter 7 7. (7. Step 1.1 holds and let s ∈ [0. L2 (Ω. x.5) is a consequence of (7. T ]. We are going to solve (7. T ]. Proof.1 Existence and uniqueness The standard assumptions for the well-posedness of problem (7. η ∈ L2 (Ω.5) Notice that. X = η + γ1 (X) + γ2 (X) = γ(X). y) and |b(t.4) ≤ M 2 (1 + |x|2 ). (ii) There exists M > 0 such that for all t ∈ [0. X ∈ CB . after possibly changing the constant M . T ]. X(u))du.1 (i) b and σ are continuous on [0. X(u))dB(u). Theorem 7. Rd )). Then problem (7. L2 (Ω. σ(u. T ] × Rd . we have |b(t. γ1 and γ2 map CB into itself. T ] γ2 (X)(t) := and set γ(X) := η + γ1 (X) + γ2 (X). X ∈ CB . x) 2 HS 2 HS ≤ M 2 |x − y|2 (7. x)|2 + σ(t. P. t ∈ [s.6) . t ∈ [s. s X ∈ CB .1 Assume that Hypothesis 7.4). Then equation (7.1) by a fixed point argument in the space CB := CB ([s. T ). Rd )). Hypothesis 7.1) is equivalent to the following. Define γ1 (X)(t) := s t t b(u.1) has a unique solution X ∈ CB ([s. T ]. x) − σ(t. Rd ). y ∈ Rd . x) − b(t. (7. y)|2 + σ(t.

Let X. X(u)) 2 HS )du ≤ M2 s (1 + |X(u)|2 )du ≤ M 2 (t − s)(1 + X 2 CB ) So. using again the H¨lder inequality and taking o into account (7. T ]. We have. γ1 (X) − γ1 (Y ) Furthermore CB ≤ M (T − s) X − Y CB . X(u))|2 du ≤ M 2 (t − s) s 2 CB ).3) and (7. (1 + |X(u)|2 )du ≤ M 2 (t − s)2 (1 + X Since γ1 (X)(t) is Ft –measurable for all t ∈ [s. Y (u))|2 du ≤ (t − s)M Consequently |X(u) − Y (u)|2 du ≤ (t − s)2 M 2 X − Y 2 CB du.Stochastic evolution equations 91 Concerning γ1 we have. t t |γ1 (X)(t)|2 ≤ (t − s) s |b(u. we see that γ2 maps CB into itself. Y (u)) 2 HS )du 2 CB . X(u)) − σ(u. using the H¨lder inequality and taking into aco count (7.4). γ1 maps CB into itself and γ1 (X) CB ≤ M (t − s)(1 + X CB ). t |γ1 (X)(t) − γ1 (Y )(t)|2 ≤ (t − s) s t 2 s |b(u. X. Step 2.7) t E|γ2 (X)(t) − γ2 (Y )(t)|2 = s E( σ(u. Y ∈ CB . Y ∈ CB (7.5). γ is Lipschitz continuous. t E|γ2 (X)(t)| = s t 2 E( σ(u.5). ≤ M 2 (t − s) X − Y . Concerning γ2 we have taking into account (7. X(u)) − b(u.

CB . s. s. η)).1).1 holds and let η ∈ L2 (Ω. Then by the previous argument there is a unique solution to (7. In the following we shall denote by X(·. Y ∈ CB . Then Z solves the problem   dZ(t) = b(t.8) it follows that γ maps CB into itself and √ γ(X) − γ(Y ) CB ≤ M (T − s + T − s ) X − Y | for all X.9) γ is a 1/2–contraction on CB . T ]. η) which belongs to L2 (Ω.2 By Theorem 5.10) Proof. η). T1 ]. Y ∈ CB . η). Z(t))dt + σ(t. Remark 7.7) and (7. (7. s. P.3 Assume that Hypothesis 7. and so. η)).92 and so. Now if T − s is such that √ M T − s + T − s ≤ 1/2. η) the solution of problem (7.8) By (7. Rd ). X. s. Define Z(t) = X(t.13 it follows that there exists a version of the solution X(·. Fs . γ2 (X) − γ2 (Y ) CB Chapter 7 ≤M √ T −s X −Y CB . Z(t))dB(t). C([s. it possesses a unique fixed point. Now we repeat the proof with T1 replacing s and in a finite number of steps we arrive to the conclusion.9) does not hold we choose T1 ∈ (s. (7. s. Proposition 7. t ∈ [s. η) = X(t. By the uniqueness part of Theorem 7. s. η) = X(t. Then X(t. . If (7. 0 ≤ s ≤ r ≤ t ≤ T.1 it follows that Z(t) = X(t. r. Whe shall use greek letters for stochastic initial data and latin letters for deterministic ones.  Z(r) = X(r. (7. X(r. s. r. s. Let us prove the co-cycle law. T ])) and so it is a continuous process. as required.1) on [s. X(r. T ] such that M T1 − s + T1 − s ≤ 1/2.

s. so that XN (t. l k=1 (7. . η) = X(·. xk )) in Ak . η)) = b(u. t t XN +1 (t. . η) of problem (7... s. Rd ) and X(t. k = 1..12) Next result.Stochastic evolution equations 93 Remark 7...11). which as we shall see plays an important rˆle in proving that o X(·.. η))du + s σ(u. . xk )1 Ak . XN (u. XN (u. η)) = σ(u. XN (u. Then we have b(u. s. x ∈ Rd . xk )) in Ak . s. k = 1. k = 1. xk )1 Ak . P. the conclusion follows letting N tend to infinity.1) can be obtained as a limit of successive approximations. and A1 . Let XN be defined by (7. s.1 holds and that n η= k=1 xk 1 Ak . s.. η) = k=1 XN (t.. s. s. We claim that n XN (t.11) Then we have N →∞ lim XN (·.. η))dB(u). . XN (u.4 By the contraction principle it follows that the solution X(t.13) where x1 . s. η) in CB ([s.. Let us proceed by recurrence. . l (7. s. s.15) Once (7. n. s. Proposition 7. define X0 (t.15) is proved.14) Proof. xn ∈ Rd . n.. Fs . x) is a Markov process. Equality (7. η) = XN (t. s. gives some information about the relationship between X(t. s. Assume that it holds for a given N ∈ N. s.. s. s. η) = n X(t... XN (u. s. s. (7. More precisely. Rd )). η) = η + s b(u. (7. xk ) in Ak .15) is clear for N = 0. x). l ∀ N ∈ N. s. σ(u. Then we have X(t. (7. n. . An are mutually disjoints sets in Fs such that n Ω= k=1 Ak . η). T ]. L2 (Ω. η) = η and for any N ∈ N.. XN (u.5 Assume that Hypothesis 7. η ∈ L2 (Ω.
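The successive approximations (7.11) can be carried out numerically once the integrals are replaced by their Itô sums on a grid. The sketch below is only an illustration, not part of the original text: the Lipschitz coefficients b(x) = −x and σ(x) = cos x, the grid and the seed are choices made here. All iterates are computed on the same realisation of B, and the distance between consecutive iterates decays quickly, as the contraction argument of Theorem 7.1 suggests.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n, x0 = 1.0, 2_000, 1.0
t = np.linspace(0.0, T, n + 1)
dt = np.diff(t)
dB = rng.normal(0.0, np.sqrt(dt))           # one fixed Brownian path

b   = lambda x: -x                          # Lipschitz drift (test choice)
sig = lambda x: np.cos(x)                   # Lipschitz diffusion (test choice)

def picard_step(X):
    """One successive approximation: X_{N+1} = x0 + int b(X_N) du + int sigma(X_N) dB."""
    drift = np.concatenate([[0.0], np.cumsum(b(X[:-1]) * dt)])
    noise = np.concatenate([[0.0], np.cumsum(sig(X[:-1]) * dB)])
    return x0 + drift + noise

X = np.full(n + 1, x0)                      # X_0(t) = eta = x0
for k in range(8):
    X_new = picard_step(X)
    print(k, np.max(np.abs(X_new - X)))     # sup-norm distance between iterates
    X = X_new
```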

T ]. T ]. xk ) + l t b(u. 7. xk ) l and (7. the conclusion follows. XN (u. using inequality (6. l Consequently n t XN +1 (t. η)) = k=1 n Chapter 7 n 1 Ak b(u.1) has a unique solution X(·. xk ))dB(u) = k=1 1 Ak XN +1 (t. T ). T ]. (7. xk )). η)) = k=1 1 Ak σ(u. s. Then problem (7. s. s. s.6 Assume that Hypothesis 7. Proof. η ∈ L2m (Ω. T ]. L2m (Ω.1 by a fixed point argument in the space m CB := CB ([s. Theorem 7. where A ∈ L(Rd ). We proceed as in the proof of Theorem 7. ∀ x ∈ Rd . xk )). Rd )). η) ∈ CB ([s. s. s. XN (u. x) ∈ CB ([s. 7. Rd )).8.24) proved in Proposition 6. s. XN (u.94 so that b(u. Rd ). XN (u.2 Examples Example 7. s.16) . s. In particular X(·.15) holds for N + 1. l σ(u.7 Consider the stochastic differential equation dX = AXdt + CdB(t). XN (u. P. s. L2m (Ω. xk )du s n + s σ(u. X(0) = x. XN (u.1. s. Rd )). C ∈ L(Rr . Fs . L2m (Ω. L2m (Ω. Rd )).1. So. Rd ) and x ∈ Rd . s ∈ [0.1 Solution of the stochastic differential equation in the space CB ([s. η) = k=1 1 Ak X0 (t.1 holds and let m ∈ N.

T ].18) Example 7.17) Setting t Y (t) = 0 X(s)ds. (7.20) solves (7.1 applies. T ].Stochastic evolution equations 95 Clearly Theorem 7.19) where a. T ]. c. Write X(t) = eF (t) where F (t) = t a − 1 c2 + cB(t). x ∈ R. thanks to Proposition 3.1 applies so that (7.16) has a unique solution X(t) which fulfills the integral equation t X(t) = x + A 0 X(s)ds + CB(t). X(0) = x.17) yields t X(t) = A 0 e(t−s)A (x + CB(s))ds + x + CB(t). Y (0) = 0. t ≥ 0.8 Let r = d = 1 and consider the stochastic differential equation dX = aXdt + cXdB(t). t ∈ [0. Taking into account that. (7.19).12. t ∈ [0. Y fulfills the equation Y (t) = AY (t) + x + CB(t). t t e(t−s)A CdB(s) = CB(t) + A 0 0 t e(t−s)A CB(s)ds.20) For this we check that X(t) given by (7. By substituting Y (t) in (7. t ∈ [0. We want to show that the solution of (7. which can be easily solved by the method of variation of constants. Again Theorem 7. Then we have 2 dF (t) = a− 1 2 c dt + cdB(t) 2 . (7. (7. we find X(t) = e x + 0 tA e(t−s)A CdB(s).19) is given by 1 2 X(t) = et(a− 2 c ) ecB(t) x. We obtain t Y (t) = 0 e(t−s)A (x + CB(s))ds.

ω)dB(u). T ]. The following result can be proved as Theorem 7. L(Rr . ω)).23) Here η ∈ L2 (Ω.21) where A. (7. ω) − b(t. by Itˆ’s formula. X(0) = x. ω). x. ω) = b(t.1. for all t ∈ [0. ω). Show that the solution of (7. U (t. T ]. Exercise 7.24) ≤ M 2 (1 + |x|2 ). V (t. ω). L2 (Ω. ω).3 Differential stochastic equations with random coefficients In some situations (see Subsections 7.25) (ii) For any Y ∈ CB ([0.4) one deals with stochastic differential equations having random coefficients. Rd )) we have U ∈ CB ([0. T ]. (7. x ∈ Rd and AC = CA. X(u. L2 (Ω. Rd )) and V ∈ CB ([0. T ]. L2 (Ω. Y (t. T ] × L(Rr . ω) = η(ω) + s b(u. y ∈ Rd . ω) = σ(t. ω)|2 + σ(t. Fs . y. Rd ). ω)du + s σ(u. ω) − σ(t.2 (i) There exists M > 0 such that for all t ∈ [0. x. . ω) and |b(t. x. y.3 and 7. ω)). x. ω) 2 HS 2 HS ≤ M 2 |x − y|2 (7. T ] × Rd × Ω → Rd and σ : [0. Rd ) × Ω → Rd are such that: Hypothesis 7. (7. ω ∈ Ω. ω ∈ Ω |b(t. ω)|2 + σ(t. X(u. C ∈ L(Rd ).22) 7. b : [0. t t X(t. Y (t.96 and.1. T ].21) is given by 2 X(t) = et(A−C /2) eCB(t) x. x.9 Let r = 1 and consider the differential stochastic equation dX = AXdt + CXdB(t). Rd ))) where. o dX(t) = eF (t) dF (t) + = eF (t) Chapter 7 1 2 F (t) c e dt 2 1 1 a − c2 dt + cdB(t) + c2 eF (t) dt 2 2 = aX(t)dt + cX(t)dB(t). (7.
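The closed-form solution (7.20) of Example 7.8 can be compared with an Euler–Maruyama discretisation of dX = aX dt + cX dB computed on the same Brownian path. This is an added sketch, not part of the original text; the parameter values and the step size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
a, c, x0, T, n = 0.1, 0.4, 1.0, 1.0, 10_000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.cumsum(dB)

# exact solution of dX = aX dt + cX dB:  X(t) = exp((a - c^2/2) t + c B(t)) x0
X_exact = x0 * np.exp((a - 0.5 * c**2) * T + c * B[-1])

# Euler-Maruyama scheme on the same path
X = x0
for k in range(n):
    X = X + a * X * dt + c * X * dB[k]

print(X_exact, X)   # close for small dt (strong order 1/2 of the Euler scheme)
```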

dB(t) = X(t) F (t). Fs . T ) and η ∈ L2 (Ω. 7. dB(t) .10 applies and so there exists a solution X of (7. We are going to prove that the solution X(t.27) solves (7. where F ∈ CB (0. Write X(t) = eH(t) where H(t) = − Then we have 1 dH(t) = − |F (t)|2 dt + F (t). T ].2. T . s. η) to (7. L2 (Ω.dB(s) x. Rd )).26).1 holds.26)  X(0) = x. dB(s) . dB(t) . Rd ).27) is proved. s and Lipschitz o continuous on η in mean square. s.27) For this we check that X(t) given by (7. 2 Now by Itˆ’s formula we find o dX(t) = eH(t) dH(t) + 1 2 1 2 t t |F (s)|2 ds + 0 0 F (s). η)|2 is bounded. Then problem (7. Let s ∈ [0. L∞ (Ω.26). First we show that E|X(t.2 holds. dB(t) .Stochastic evolution equations 97 Theorem 7. (7. Example 7. T ]. t ≥ 0. (7. (7. eH(t) |F (t)|2 dt t ≥ 0.1 Continuous dependence on data Continuous dependence on mean square We assume here that Hypothesis 7. t ≥ 0. Let us show that X(t) = e− 2 1 Rt 0 Rt |F (s)|2 ds+ 0 F (s). So. Now it is easy to check that Theorem 7. Rd )).11 Let d = 1 and consider the stochastic differential equation   dX(t) = X(t) F (t).23) has a unique solution X ∈ CB ([s. .2 7.1) is H¨lder continuous on t.10 Assume that Hypothesis 7. t ∈ [0. = eH(t) F (t).

12 Assume that Hypothesis 7. We now study the regularity of X(t. s. The conclusion follows from the Gronwall lemma. Let 0 ≤ s ≤ t1 < t ≤ T and η ∈ L2 (Ω. η)|2 ≤ 3[E(|η|2 ) + M 2 ((T − s)2 + (T − s)]e3M Proof. Writing for short X(t. η)|2 ≤ C(T. Then for all s ∈ [0. (7. 0 ≤ s < t ≤ T. η. Fs .13 Assume that Hypothesis 7. E(|η|2 )) such that we have E |X(t. η) with respect to t. X(u)) 2 HS )du. η) with respect to t. E(|η|2 ))(t − t1 ).1 holds. s.1 holds. E(|η|2 )). Rd ). s. s.98 Chapter 7 Lemma 7. s. (7. we have t 2 2 (T −s+1) . s. η)|2 ≤ C1 (T.12. Then there exists a constant C1 (T.30) . η) = X(t). T ] and η ∈ L2 (Ω. Fs . Rd ) we have E |X(t. P. Proposition 7.28) E (|X(t)| ) ≤ 3E(|η| ) + 3E s t 2 2 b(u. by Lemma 7.1(ii) and the H¨lder inequality we deduce that o t E (|X(t)|2 ) ≤ 3E(|η|2 ) + 3M 2 (t − s) s t (1 + E |X(u)|2 )du +3M 2 s (1 + E |X(u)|2 )du. We note that. Consequently E (|X(t)|2 ) ≤ 3E(|η|2 ) + 3M 2 ((T − s)2 + (T − s)) t +3M ((T − s) + 1) s 2 E |X(u)|2 du. s.29) We start with the regularity of X(t. E(|η|2 )) such that E |X(t. By Hypothesis 7. s. (7. X(u))du +3 s E( σ(u. η) − X(t1 . there exists a constant C(T.

32) . E(|η|2 ))) and the conclusion follows. X(u.1 holds. Then there exists a constant CT.31) +3 s (b(u. η) − X(t. Fs . s.4) we obtain E (|X(t. s. X(u. We have t 99 E |X(t. Let us study the regularity of X(t. s. s. s. η)|2 ≤ 2M 2 (t − t1 ) t1 (1 + E |X(u. s. ζ))dB(u) . η) with respect to s. η) with respect to η.15 Assume that Hypothesis 7. Taking expectation and using (7. s. let 0 ≤ s < t ≤ T and η. s. η) − σ(u. η)|2 ≤ CT. We finally study the regularity of X(t. s. let 0 < s < s1 < t ≤ T. X(u. s. s. (7. η)|2 du t 2 t1 + 2M Consequently. (1 + E |X(u. s. X(u. s. s. η) − X(u. ζ)|2 ≤ 3e3M Proof. Rd ). and η ∈ L2 (Ω. s. η)|2 ≤ 2M 2 ((t − t1 )2 + t − t1 )(1 + C 2 (T.14 Assume that Hypothesis 7. η) − X(t. Proposition 7. s. η) − b(u. (7. ζ))du t 2 +3 s (σ(u. s. η) − X(t1 . s. P. Proposition 7. We have |X(t. Fs . Rd ). ζ ∈ L2 (Ω. s.1 holds.Stochastic evolution equations Proof. s. s1 . η) − X(t. η) − X(t1 . s. Then E |X(t.η > 0 such that E |X(t. E |X(t. η) − X(t. ζ)|2 du and the conclusion follows from the Gronwall lemma.η |s − s1 |. ζ)|2 ) ≤ 3E(|η − ζ|2 ) + 3M 2 (T − s + 1) t × s E |X(u. ζ)|2 ≤ 3|η − ζ|2 t 2 2 (T −s+1)(t−s) E(|η − ζ|2 ). η)|2 )du.

17 Assume that Hypothesis 7.30).1 holds. x) is H¨lder o continuous almost surely. m ∈ N and ∈ (0. Finally. First we need a lemma. s. y)|2m ≤ C(T )|x − y|2m .1 holds. Then there is a constant C(T ) > 0 such that E |X(t.1 holds. s. s.35) . s. x) belongs to C −1/(2m) (7. First. η) − X(t.2m Moreover. . η)) − X(t. s. s. x)|2m < +∞. 1/2). Lemma 7. Taking into account the co-cycle law (7. s. s. let 0 ≤ s < t ≤ T and x. s. s1 .3 and the Sobolev embedding theorem E. η) = X(t.31) there exists CT > 0 such that 2 E (|X(t. 0 ≤ s ≤ t ≤ T . η) − X(s. By (7.1 it follows that Proposition 7. ·). x)|2m ≤ C1 (T.3 Almost sure continuity and h¨lderianity o of trajectories In this section we show that X(·. |x|) such that E |X(t. X(s1 . s. η) − X(t. η)|2 ) .34) ([s. Then we have E |X(·. η)|2 ) ≤ CT E (|X(s1 . s. y ∈ Rd . we consider almost sure regularity of X(t. (7. (7. x ∈ Rd and m ∈ N. s. x) − X(t. 7. Then there exists a constant C1 (T. whose definition is recalled in Appendix E below. which can be proved as Proposition 7. η) − η|2 ) 2 = CT E (|X(s1 . Then the Sobolev embedding theorem (also stated in Appendix E) will imply that X(·. s1 . Let x ∈ Rd .10). |x|2 ))(t − t1 )m . we can write Chapter 7 X(t. arguing as in the proof of Proposition 7.14 we have Lemma 7. Let 0 ≤ s ≤ t1 < t ≤ T. X(·.18 Assume that Hypothesis 7.33) Now from Proposition E.24).16 Assume that Hypothesis 7.100 Proof. T ]) almost surely. x) belongs to a suitable Sobolev space. η). s. x) − X(t1 . s.13 using (6. s. s1 . s1 . The conclusion follows now from (7.

Dx b. x) is the solution to the stochastic differential equation with random coefficients.3 2 2 (i) Dx b. T ] × Rd . ·)]2 + [σ(t. L2 (Ω.38) where η h (t. besides Hypothesis 7.Stochastic evolution equations Now from Proposition E. s. h ∈ Rd . x → X(·. T ] the mapping Rd → CB . y ∈ [0. x) · h = η h (t.39) +σx (t.20 Assume that Hypotheses 7.37) We set CB = CB ([s. ·) belongs to C −d/(2m) (7. s. s. Then for any m > 1 and ∈ (0.  h  dη (t. . ·)]2 ) < ∞.2m Moreover. s. dB(t))      h η (s. Dx σ and Dx σ are continuous on [0. x))(η h (t. x). 1]d .3 hold. s.1 Existence of Xx (t. t∈[0. X(t. x) Theorem 7. 1) we have E |X(t. that Hypothesis 7. let 0 ≤ s < t ≤ T and x. (7.1 holds. x) with respect to x In this section we assume.1 and 7. X(t. s. s.1. s. T ]) =: CB ([s. s. Rd )). X(t. s. 7. x) = bx (t. s.4.36) ([0. s. . (1) Recall the notations given at the beginning of Chapter 6. ·)|2m < +∞.19 Assume that Hypothesis 7. x)) · η h (t. x). s. Then for any s ∈ [0. 7. x) = h. x. T ]. (ii) We have (1) sup ([b(t.4 Differentiability of X(t. 1]d ) almost surely.3 it follows that 101 Proposition 7. x)dt     (7. x).T ] (7. is continuously Gateaux differentiable and its Gateaux derivative is given by Xx (t. s.

X) = I.21 Assume that Hypotheses 7. X(r))Y (r)dr+ s σx (r.41) Then F fulfills Hypothesis D.10.3. which depends continuously on x. t ∈ [s. T1 ]. x) We now prove the existence of the second derivative of X(t.102 Chapter 7 Proof. So. (7.6. setting Xxx (t. Note that the coefficients of equation (7. T1 ]) and define a mapping F : Rd × CB → CB .1 so that it possesses a unique fixed point X(x) ∈ CB . s. X(r))dB(r). Theorem 7. t t [FX (x.k (t. x → X(·. X2 ) CB ≤ (7. s. that is F (x.42) . setting t t [F (x. x)(h. We set CB = CB ([s.1 and 7. X.6 from Appendix D (with Λ = Rd and E = CB ). x) of (7. Y ∈ CB we have Fx (x.39) fulfill Hypothesis 7. It is not difficult to check that F is Gateaux continuously differentiable. X1 ) − F (x. X2 ∈ CB . (7. To prove the theorem we use Theorem D. so it possesses a unique solution by Theorem 7. x ∈ Rd . is twice differentiable with respect to x in any couple of directions (h. (the straightforward proof is left to the reader) and that for each x ∈ Rd .2 Existence of Xxx (t. x) with respect to x. X(x)) = X(x). s. X)](t) : = x + s b(r.40) where T1 > s is chosen such that F (x. x. k) = ζ h. T1 ].4. x ∈ Rd . k) in Rd . x). s. 7. 1 X1 − X2 2 CB for all X1 . h ∈ Rd . s. t ∈ [s. Then the mapping Rd → CB . x). X(r))dr + s σ(r. Moreover.2). the conclusion follows from Theorem D. X(x) coincides with the solution X(·.3 hold. s. X(r))Y (r)dB(r). X)·Y ](t) = s bx (r.

s. x)) · ζ h. x). x)) · (η h (t. s. s. X(r. X(r. Lemma 7. x)| 4 ≤ 27 + 27 s t bx (r. T ]. x) is the solution to the stochastic differential equation (with random coefficients)  h. T ]. s. x))η(r. s. x) ∈ CB ([s. x) ∈ CB ([s. s. x) = bx (t. x)dr (7. . s. s. s. s. x)|4 dr 4 +C1 s σx (r. x)|4 ≤ 27 + C1 s t |η(r. L2 (Ω)) be the solution of the equation t η(t.45) |η(t. s. x))η(r. x))η(r. X(t. s. X(r.43) We shall prove the theorem when n = r = 1 for simplicity. s. L4 (Ω)) and there exists C > 0 such that E|η(·. s. x). x)dB(r) . x)dB(r) . (7. x).k (t. T ). X(t. x))dt      +σx (t. s. x) = 0. s. s. x))η(r. x). We first prove a lemma. x))(η h (t.44) + s σx (r. x))(ζ h. x)dB(r). dB(t))       h. Then η(·.k (t. s. s. s. X(t. x)dt        +bxx (t. x))η(r. x)|4 ≤ C. s.k  d ζ (t. s. By using (7.Stochastic evolution equations 103 ζ h. X(r. s. s. (7.37) and the H¨lder inequality we see that there exists a constant o C1 such that t |η(t. Proof. x ∈ Rd . x)dr 4 +27 s σx (r. η k (t. s. We have. s. dB(s))        +σxx (t. x) = 1 + s t bx (r. s.k (t. s. X(r.k ζ (s. t 4 ∀ s ∈ [0.22 Let η(·. η k (t. s. X(t. s.

s. s. x))Z(r)dB(r). x) = (1 − T (x))−1 (1). x) belongs to CB and fulfills equation (7. x) = 1 + T (x)η(·. x) By (7.48) is given by η(·. x).46) Notice that. x)|4 ≤ C2 (1 + s E|η(r. By Theorem 7. s. L4 (Ω)) and it results t (T (x)Z)(t) = − s t bxx (r.44). x))Z(r)η(·. x))Z(r)dr − s 4 σx (r.41) and CB = CB ([s. X(r. by a straightforward computation ηx (·.48) ≤ 1/2. we find that t E|η(t.8. s. T1 ]. s. T ]. taking expectation on both sides of this inequality and using Corollary 6. t t (T (x)Z)(t) = − s bx (r. L (Ω)). Now we write equation (7. since η(·. s. X(r. s.104 Chapter 7 Now.20 we know that X(t.49) From this identity it is easy to show the existence of ηx (·. X(r. s. (7. Thus the solution of (7. ∀ x ∈ R.21. s. where C2 is another constant. T1 ]) as before. T ]. x)dr (7. 0 ≤ s ≤ t ≤ T.50) . x) = Xx (·.41) it follows that T (x) L(CB ) (7. We choose T1 as in (7. x) ∈ CB ([s. x ∈ R. x)). s. x))Z(r)η(·. For any x ∈ R we define a linear bounded operator T (x) from CB into CB setting for all t ∈ [s. s. x) = (1 − T (x))−1 (T (x)η(·.47) − s σxx (r. x) is differentiable with respect to x and that its derivative η(·. Proof of Theorem 7. (7. s. s. T (x)Z is differentiable with respect to x for any Z ∈ CB ([s. s. (7.44) as η(·. X(r. s. s. The conclusion follows from the Gronwall lemma. We have in fact. x)|4 dr). r. x) := ζ(·. s. x)dB(r). r.

r. Setting r = s we find Xs (t. It is useful to recall first some results in the deterministic case. x)) + Xx (t. s. s. Let us compute Xs (t. s. x)dB(r). s.  X(s) = x. X(r. x)dB(r). s. 7. s. x)) · Xt (r. x). s. s. s. Now by (7. T ]. X(r. x))η 2 (·. x)dr t + s σxx (r. X(r. x) − T (x)ηx (·. x). X(r.52) Let us consider the problem   X (t) = b(t.5 Itˆ Differentiability of X(t. x)(t) = s bxx (r. s. 7. x) = −Xx (t. s. t ≥ r ≥ s. . under Hypotheses 7.53) with respect to r yields 0 = Xs (t. s.3 with σ = 0. x)(t) = s bxx (r. r. x)). s. x)dr (7. x) with reo spect to s. s. X(t)). r. x)b(s.52). x) the solution of (7. s. s. s. x))η 2 (·.50) it follows that t ηx (t. x) = X(t. x) (it is well known that X(t. X(r. Denote by X(t.1 and 7. (7. Write X(t. s. x))η 2 (·. x))η 2 (·. (7. s. s. s. x) is C 1 in all variables).5. X(r. X(r.Stochastic evolution equations where t 105 T (x)η(·.1 The deterministic case t ∈ [s. s.51) t + s σxx (r.53) Differentiating (7. and the conclusion follows.

2 The stochastic case Here we want to study the differentiability of X(t. s. Let x ∈ Rd . B(sn (ω)) − B(s(ω))) ∈ A} . x) be defined by (7. P). Then X(t. Fs+ . r. x) is Fs+ -measurable. For this we need the following result which can be proved as Lemma 4..5. x)dr. It happens. s. x)dB(u) = lim |η|→0 σ(tk−1 . Then B(t2 )−B(t1 ) and ϕ are independent.106 which is equivalent to t Chapter 7 X(t. then X1 (t. where n ∈ N.2).3. Then X1 (t. Proof. (7. s ∈ [0.54) In the next subsection we are going to generalize this formula for the solution X(t. The family (Fs+ )s∈[0. 7.11). x)(B(tk ) − B(tk−1 )). Lemma 7. s. x)dB(u). s. < sn ≤ T and A ∈ B(Rn ). X(t.. N ∈ N. s. Let XN (t. x)b(r. k=1 where η = {s = t0 < t1 < · · · < tn = t}. that for any s ∈ [0. however. A difficulty arises since the process s → X(t. We have in fact t t X1 (t. x) is not Fs -measurable. x) with respect to s in a sense to be precised.T ] is called the future filtration of B. 0 ≤ s ≤ t ≤ T. x) = x + s b(u. 0 ≤ s ≤ s1 < . s. x) = x + s Xx (t. x) is measurable with respect to the σ–algebra Fs+ generated by all sets of the form {ω ∈ Ω : (B(s1 (ω)) − B(s(ω)). T ]. and let ϕ ∈ L2 (Ω. . s. T ]. Since s t n σ(u.23 Assume that Hypotheses 7. Proposition 7. s. Now we introduce the backward Itˆ integral for a process wich is adapted o to the future filtration. We end the proof by recurrence. because X(t. s. .24 Let t1 < t2 ≤ s. x)du + s σ(u. x) is Fs+ –measurable.. x) is Fs+ -measurable..1 holds. s. s.. x) of (7. x) is not adapted.

L(Rr . L2 (Ω.58) + s Xx (t. Rd ))) defined in Chapter 5. L(Rr . x)dr t 1 + 2 t TR [Xxx (t. x) · b(r. Let F ∈ CB + ([0. dB(r)) . L(Rr . 2 T (7. T ].27 Assume that Hypotheses 7. Rd )) are called stochastic processes adapted to the future filtration (Ft+ ) and continuous in quadratic mean. L2 (Ω. Rd ))) there exists the limit T |σ|→0 lim Iσ (F ) =: 0 F (s)dB(s). L2 (Ω.26 Let t > s.57) F (s)dB(s) is called the backward Itˆ integral of the function F in [0. . The elements of CB + ([0. 2 7. x). T ]. x)(σ(r. Theorem 7. Rd ))) by a straightforward generalization of the space CB ([0. Rd ))). L2 (Ω. T ]. T ].10). x))]dr s (7.5.3 hold. Moreover we have T E 0 F (s)dB(s) = 0. s. For any η ∈ Σ with η = {0 = s0 < s1 < · · · < sn = T } we set n Iσ (F ) = k=1 F (tk )(B(tk ) − B(tk−1 )) The proof of next theorem is completely similar to that of equation (5.25 For any F ∈ CB + ([0.56) and T E 0 T 0 F (s)dB(s) = 0 E F (s) 2 HS ds. T ]. L(Rr . x). σ(r.1 and 7. r. Then we have X(t. (7. o Exercise 7.Stochastic evolution equations 107 We define CB + ([0.55) in L2 (Ω). T ].3 Backward Itˆ’s formula o t Theorem 7. L2 (Ω. (7. L(Rr . x)(σ(r. Prove that t B(r)dB(r) = s 1 (B(t)2 − B(s)2 + (t − s)). r. x) − x = s Xx (t. r.
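Exercise 7.26 offers a simple numerical check of the backward integral: the backward sums evaluate the integrand at the right endpoint of each interval, while the forward Itô sums of Chapter 5 use the left endpoint. The sketch below is only an illustration on one simulated path; the grid and the seed are arbitrary choices, and the path is started at B(s) = 0 for simplicity.

```python
import numpy as np

rng = np.random.default_rng(6)
s, t, n = 0.0, 1.0, 200_000
grid = np.linspace(s, t, n + 1)
dB = rng.normal(0.0, np.sqrt(np.diff(grid)))
B = np.concatenate([[0.0], np.cumsum(dB)])       # B(s) = 0 here for simplicity

backward = np.sum(B[1:]  * dB)    # right-endpoint sums -> backward Ito integral
forward  = np.sum(B[:-1] * dB)    # left-endpoint sums  -> (forward) Ito integral

Bt, Bs = B[-1], B[0]
print(backward, 0.5 * (Bt**2 - Bs**2 + (t - s)))   # Exercise 7.26
print(forward,  0.5 * (Bt**2 - Bs**2 - (t - s)))   # compare Example 5.8
```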

n If η ∈ Σ(s. sk−1 . x)(sk − sk−1 ) + σ(sk . σ(r. . x)) − 1 2 Xxx (t.108 where d Chapter 7 TR [Xxx (t. sk . x)(x − X(sk . s.. k=1. x) − X(t. X(r.s. sk . On the other hand we have sk X(sk . sk . Proof. x)(σ(r. sk−1 .59) =− k=1 n Xx (t. that |η|→0 lim o(|η|) = 0. t) we have n X(t. x) − x = sk−1 sk b(r.. after some tedious but o straighforward computations. x) − x = − k=1 n [X(t. x)(B(sk ) − B(sk−1 )) + o(sk − sk−1 ). x)ek ) and (ek ) is any orthonormal basis in Rd . We take d = r = 1 for simplicity. x))] (7. x)(x − X(sk . P-a. r. sk−1 . sk−1 . x))dB(r) = b(sk . X(sk . sk−1 . For any η ∈ Σ(s. x))] = k=1 Xxx (t. sk ... sk . sk−1 . k=1 Arguing as in the proof of Itˆ’s formula one can show. x)] =− k=1 n [X(t. x)ek . x) − X(t. t) we set |η| = max (tk − tk−1 ). sk−1 . r.60) + sk−1 σ(r. X(r. σ(r. x))2 + o(|η|). x))dr (7. x). x)(σ(r..

60) b(sk .60) in (7. b(r.62) + s Dx [ϕ(X(t. x)dr. x)σ ∗ (r. sk . x) where ξk is any point in [sk−1 . sk . x)σ(sk . x)dB(r). Therefore we have t |η|→0 lim I2 (η) = s Xx (r.Stochastic evolution equations 109 (Notice that. x) dr 1 + 2 t t 2 Tr [Dx [ϕ(X(t. we note that it is an integral sum corresponding to the backward Itˆ integral since Xx (t. x)b(sk . x))]σ(r. x)(B(sk ) − B(sk−1 ))2 k=1 +I1 (η) + I2 (η) + I3 (η) + o1 (|η|). x) with b(ξk . x)(sk − sk−1 ) + k=1 Xx (t. sk . Obviously t |η|→0 lim I1 (η) = s Xx (r. x))]. x) − x = k=1 n Xx (t. x)(B(sk ) − B(sk−1 )) (7. one can replace in (7. x)]dr s (7. r. r.59) we find that n X(t. since b is deterministic. we have t ϕ(X(t. In a similar way one can prove the following backward Itˆ formula. x) is Fs+ measurable by Proposition o k 7. σ(r. o 2 Theorem 7. x)) − ϕ(x) = s Dx [ϕ(X(t. Concerning I2 (η).) Substituting (7. s.28 Let ϕ ∈ Cb (Rd ). sk ].23. x)σ(r. x)σ 2 (sk . sk . The other terms I3 (η) and o1 (|η|) can be handled as in the proof of Itˆ’s o formula. Then for any 0 ≤ s < t ≤ T. . x)b(r. x))]. s.61) n 1 + 2 Xxx (t. x)dB(r). r.


Hypothesis 8.1 (i) b is continuous on [0.1) We consider here the problem   X (t) = b(t. differentiating (8. s.2) with respect to u and setting u = s we find Xs (t. x. u. 0 ≤ s ≤ u ≤ t ≤ T. x ∈ Rn . 0 ≤ s ≤ t ≤ T.t . x ∈ Rn .1 problem (8. x) ∈ C 1 ([s. x)). y)| ≤ M |x − y|. defined on the space Cb (Rn ) by Ps. x) = X(t. x) − b(t. t ∈ [0. s. X(t)). T ) and b : [0. As well known. y ∈ Rn . t ∈ [0.4) . (8. Rn ).3) Of great interest for the applications is the transition evolution operator Ps.Chapter 8 Kolmogorov equations 8. (iii) b is differentiable with respect to x and bx is continuous on [0.1 The deterministic case t ∈ [s. and it holds X(t.t ϕ(x) = ϕ(X(t. 111 x ∈ Rn . (8. s. T ]. (8. T ]. x)). T ]. T ] × Rn .1) has a unique solution X(·) = X(·. s. T ]. where s ∈ [0. under Hypothesis 8. x) + Xx (t. (ii) There exists M > 0 such that |b(t. X(u. T ].2) Morever. s. s.  X(s) = x ∈ Rn . T ] × Rn . t ∈ [0. x) · b(s. T ] × Rn → Rn fulfills the following hypothesis. s. x) = 0. s. (8.

T ].8) where 1 ϕ ∈ Cb (Rn ).t ϕ(x).6) follows.t L(t)ϕ. Let us prove (8.t . (s. is continuous. s. x)) dt dt and Ps. so that (8.u Pu. 1 where ϕ ∈ Cb (Rn ) and T > 0 is fixed. u ∈ [0.3). s. x). x)). Moreover for any ϕ ∈ Cb (Rn ) the mapping [0. dt and d Ps. X(t. x)). .112 Kolmogorov equations As easily checked. x) = ϕ(x). ϕx (x) . x) + b(s.5) 1 Proposition 8. t.2) it follows immediately the cocycle property Ps. (8. We have t≥s (8. x ∈ Rn . s.t ϕ. s. X(t. We have. Let us now consider the following partial differential equation called transport equation   zs (s. x). d d Ps.t ϕ(x) = ϕ(X(t. x) = 0.7) (8.t is a linear bounded operator on Cb (Rn ).t = Ps. ϕx (X(t.1 For any ϕ ∈ Cb (Rn ) we have d Ps. T ] × Rn → Rn .t ϕ = −L(s)Ps.t L(t)ϕ(x) = b(t.t ϕ = Ps. x) → Ps. Proof.t ϕ(x). taking into acccount (8. s. x) · b(s. s. x)). T ] (8. x)) . (8.6) t ≥ s. x) ds ds = −L(s)Ps.7). s. Xx (t.t ϕ(x) = ϕ(X(t. From (8. zx (s. T ] × [0. s. Ps. d d Ps. x)) = − ϕx (X(t. ϕx (X(t. t. s. ds L(t)ϕ(x) = b(t. s ∈ [0. x)) = b(t.9)  z(T.

Theorem 8.2 Assume that b : [0, T] × R^n → R^n fulfills Hypothesis 8.1 and let ϕ ∈ C¹_b(R^n). Then problem (8.9) has a unique solution z, given by
   z(s, x) = P_{s,T}ϕ(x) = ϕ(X(T, s, x)), s ∈ [0, T], x ∈ R^n.        (8.10)

Proof. Existence. It is enough to notice that z, given by (8.10), is a solution of (8.9) by (8.7).
Uniqueness. If z is a solution of problem (8.9), we have, for 0 ≤ u ≤ s ≤ T,
   d/ds z(s, X(s, u, x)) = z_s(s, X(s, u, x)) + ⟨z_x(s, X(s, u, x)), b(s, X(s, u, x))⟩ = 0,
so that z(s, X(s, u, x)) is constant in s. Setting s = T and s = u we find
   z(T, X(T, u, x)) = z(u, X(u, u, x)) = z(u, x),
which implies z(u, x) = ϕ(X(T, u, x)), as required. □

8.1.1 The autonomous case

We assume here that b(t, x) = b(x) and consider the problem
   X′(t) = b(X(t)), t ≥ 0,
   X(0) = x ∈ R^n,        (8.11)
whose solution we denote by X(·, x). In this case it is easy to check that for any t > s ≥ 0 we have
   P_{s,t} = P_{0,t−s}.        (8.12)
Define
   P_t ϕ(x) = ϕ(X(t, x)), ϕ ∈ C_b(R^n), t ≥ 0, x ∈ R^n.        (8.13)
P_t is called the transition semigroup associated with (8.11). By (8.12) and (8.5) the semigroup law follows:
   P_{t+s} = P_t P_s, t, s ≥ 0.        (8.14)

By Proposition 8.1 we deduce:

Proposition 8.3 For any ϕ ∈ C¹_b(R^n) we have
   D_t P_t ϕ = P_t Lϕ = L P_t ϕ, t ≥ 0,        (8.15)
where
   Lϕ(x) = ⟨b(x), ϕ_x(x)⟩, ϕ ∈ C¹_b(R^n).

Finally, by Theorem 8.2 we have:

Theorem 8.4 Assume that b ∈ C¹_b(R^n; R^n) and let ϕ ∈ C¹_b(R^n). Then the problem
   u_t(t, x) = ⟨b(x), u_x(t, x)⟩, t ≥ 0, x ∈ R^n,
   u(0, x) = ϕ(x), x ∈ R^n,        (8.16)
has a unique solution, given by
   u(t, x) = P_t ϕ(x) = ϕ(X(t, x)), t ≥ 0, x ∈ R^n.        (8.17)

8.2 Stochastic case

We consider the stochastic evolution equation
   dX(t) = b(t, X(t))dt + σ(t, X(t))dB(t),
   X(s) = x ∈ R^n,        (8.18)
and assume that the following hypothesis holds.

Hypothesis 8.2
(i) b : [0, T] × R^n → R^n and σ : [0, T] × R^n → L(R^r; R^n) are continuous.
(ii) There exists M > 0 such that, for all t ∈ [0, T] and x, y ∈ R^n,
   |b(t, x) − b(t, y)| + ‖σ(t, x) − σ(t, y)‖_HS ≤ M|x − y|.
(iii) b and σ have first and second partial derivatives with respect to x, continuous and bounded in [0, T] × R^n.

We denote as before by X(·, s, x) the solution of (8.18) corresponding to the initial datum η = x ∈ R^n. For all t, s with 0 ≤ s ≤ t ≤ T and all functions ϕ ∈ C_b(R^n) we set
   P_{s,t}ϕ(x) = E[ϕ(X(t, s, x))], x ∈ R^n.        (8.19)
As easily checked, P_{s,t} is a linear bounded operator on C_b(R^n). By Chapter 6 we know that the mapping
   (s, t, x) → P_{s,t}ϕ(x), 0 ≤ s ≤ t ≤ T, x ∈ R^n,
is continuous for all ϕ ∈ C_b(R^n). P_{s,t}, 0 ≤ s ≤ t ≤ T, is called the transition evolution operator associated with (8.18).
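As a numerical illustration (added here, not part of the original text), the transition evolution operator (8.19) can be approximated by combining the Euler-Maruyama discretization of (8.18) with a Monte Carlo average. The coefficients b and σ and the test function below are hypothetical choices; the sketch is not the construction used in the notes.

# Added illustration: Monte Carlo approximation of P_{s,t} phi(x) = E[phi(X(t, s, x))]
# via the Euler-Maruyama scheme, for hypothetical coefficients b and sigma.
import numpy as np

rng = np.random.default_rng(0)

def b(t, x):       # hypothetical drift
    return -x

def sigma(t, x):   # hypothetical diffusion coefficient
    return 0.5 * np.ones_like(x)

def P(s, t, phi, x, n_paths=20000, n_steps=400):
    h = (t - s) / n_steps
    X = np.full(n_paths, float(x))
    r = s
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(h), size=n_paths)
        X = X + b(r, X) * h + sigma(r, X) * dB
        r += h
    return phi(X).mean()

print(P(0.0, 1.0, np.cos, 1.0))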

8.3 Basic properties of transition operators

Let us introduce the Kolmogorov operator
   (L(s)ϕ)(x) = ½ Tr[ϕ_xx(x)σ(s, x)σ*(s, x)] + ⟨b(s, x), ϕ_x(x)⟩, ϕ ∈ C²_b(R^n).        (8.20)

The first basic identity is the following.

Proposition 8.5 Assume that Hypothesis 8.2 holds and let ϕ ∈ C²_b(R^n). Then P_{s,t}ϕ is differentiable in t and we have
   d/dt P_{s,t}ϕ = P_{s,t} L(t)ϕ.        (8.21)

Proof. By the Itô formula we have
   dϕ(X(t, s, x)) = (L(t)ϕ)(X(t, s, x)) dt + ⟨ϕ_x(X(t, s, x)), σ(t, X(t, s, x)) dB(t)⟩.
Integrating with respect to t and taking expectation yields
   E[ϕ(X(t, s, x))] = ϕ(x) + ∫_s^t E[(L(r)ϕ)(X(r, s, x))] dr,
that is,
   P_{s,t}ϕ(x) = ϕ(x) + ∫_s^t P_{s,r}(L(r)ϕ)(x) dr,
which yields (8.21). □

The second basic identity is the following.

Proposition 8.6 Assume that Hypothesis 8.2 holds and let ϕ ∈ C²_b(R^n). Then P_{s,t}ϕ is differentiable in s and we have
   d/ds P_{s,t}ϕ = −L(s) P_{s,t}ϕ.        (8.22)

Proof. Taking expectation in the backward Itô formula (7.62) we find
   P_{s,t}ϕ(x) − ϕ(x) = ∫_s^t (L(r) P_{r,t}ϕ)(x) dr,
which yields (8.22). □

8.4 Parabolic equations

We consider here the parabolic equation
   z_s(s, x) + (L(s)z(s, ·))(x) = 0, 0 ≤ s < T, x ∈ R^n,
   z(T, x) = ϕ(x), x ∈ R^n,        (8.23)
where ϕ ∈ C²_b(R^n). We say that a function z : [0, T] × R^n → R is a solution to (8.23) if z is continuous and bounded together with its partial derivatives z_t, z_x, z_xx, and fulfills (8.23).

Theorem 8.7 Assume that Hypothesis 8.2 holds and let ϕ ∈ C²_b(R^n). Then there exists a unique solution z of problem (8.23), given by
   z(s, x) = P_{s,T}ϕ(x) = E[ϕ(X(T, s, x))], s ∈ [0, T], x ∈ R^n.        (8.24)

Proof. Existence. By (8.22) it follows that z, given by (8.24), fulfills (8.23).
Uniqueness. Let z be a solution to (8.23) and let 0 ≤ u ≤ s ≤ T. Let us compute the Itô differential of z(s, X(s, u, x)). We have
   d_s z(s, X(s, u, x)) = [z_s(s, X(s, u, x)) + (L(s)z(s, ·))(X(s, u, x))] ds + ⟨z_x(s, X(s, u, x)), σ(s, X(s, u, x)) dB(s)⟩
                        = ⟨z_x(s, X(s, u, x)), σ(s, X(s, u, x)) dB(s)⟩,
since z fulfills (8.23). Integrating in s between u and T yields
   z(T, X(T, u, x)) − z(u, x) = ∫_u^T ⟨z_x(s, X(s, u, x)), σ(s, X(s, u, x)) dB(s)⟩.
Now, taking expectation, we find z(u, x) = E[ϕ(X(T, u, x))], as required. □

Exercise 8.8 Prove the cocycle law
   P_{s,t} = P_{s,r} P_{r,t}, 0 ≤ s ≤ r ≤ t ≤ T.        (8.25)
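As a sanity check (an added illustration, not in the original argument), Theorem 8.7 can be verified numerically in the special case b = 0, σ = 1, where X(T, s, x) = x + B(T) − B(s) and z(s, x) = E[ϕ(x + B(T) − B(s))] is an explicit Gaussian integral; finite differences then show that z_s + ½ z_xx vanishes. The terminal datum ϕ = cos is an arbitrary choice.

# Added illustration (heat-kernel case b = 0, sigma = 1): check that
# z(s, x) = E[phi(x + B(T) - B(s))] satisfies z_s + (1/2) z_xx = 0.
import numpy as np

T = 1.0
phi = np.cos  # hypothetical terminal datum

def z(s, x):
    # E[phi(x + sqrt(T - s) * N(0,1))] via Gauss-Hermite quadrature (probabilists' weight)
    y, w = np.polynomial.hermite_e.hermegauss(80)
    return np.dot(w, phi(x + np.sqrt(T - s) * y)) / np.sqrt(2 * np.pi)

s, x, h = 0.3, 0.7, 1e-3
z_s  = (z(s + h, x) - z(s - h, x)) / (2 * h)
z_xx = (z(s, x + h) - 2 * z(s, x) + z(s, x - h)) / h**2
print(z_s + 0.5 * z_xx)   # should be close to 0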

8.4.1 Autonomous case

Assume that b and σ are independent of t:
   b(t, x) = b(x), σ(t, x) = σ(x), t ≥ 0, x ∈ R^n.
Then we have L(s) = L, where
   Lϕ(x) = ½ Tr[ϕ_xx(x)σ(x)σ*(x)] + ⟨b(x), ϕ_x(x)⟩, ϕ ∈ C²_b(R^n).

Proposition 8.9 Let X(t, s, x) be the solution of the stochastic evolution equation
   dX(t) = b(X(t))dt + σ(X(t))dB(t),
   X(s) = x ∈ R^n.        (8.26)
Then for any a > 0 the laws of X(t, s, x) and X(t + a, s + a, x) coincide.

Proof. Set Y(t) = X(t + a, s + a, x). Then we have
   X(t + a, s + a, x) = x + ∫_{s+a}^{t+a} b(X(r, s + a, x)) dr + ∫_{s+a}^{t+a} σ(X(r, s + a, x)) dB(r).
Setting r − a = ρ yields
   Y(t) = x + ∫_s^t b(Y(ρ)) dρ + ∫_s^t σ(Y(ρ)) d[B(ρ + a) − B(a)].
Setting B₁(t) = B(t + a) − B(a), we see that Y(t) fulfills equation (8.26) with the Brownian motion B(t) replaced by B₁(t). Now the conclusion follows. □

By the proposition and the cocycle law (8.25) it follows that, setting P_t := P_{0,t}, we have P_{s,t} = P_{0,t−s} = P_{t−s} and
   P_{t+s} = P_t P_s, t, s ≥ 0, P_0 = 1.
Thus P_t, t ≥ 0, is a semigroup of linear operators in C_b(R^n).

Finally, fix t > 0 and ϕ ∈ C²_b(R^n) and consider the problem
   v_s(s, x) = Lv(s, x), s ∈ [0, t], x ∈ R^n,
   v(0, x) = ϕ(x), x ∈ R^n.        (8.27)
Setting v(s, x) = z(t − s, x), where z is the solution of (8.23) with T replaced by t, problem (8.23) becomes (8.27); then by Theorem 8.7 we find the following result.

Theorem 8.10 Assume that b and σ are Lipschitz continuous and of class C². Then, for any ϕ ∈ C²_b(R^n), problem (8.27) has a unique solution, given by
   v(s, x) = P_{t−s,t}ϕ(x) = P_s ϕ(x), s ∈ [0, t], x ∈ R^n.        (8.28)

8.5 Examples

Example 8.11 Consider the parabolic equation in R^n
   u_t(t, x) = ½ Tr[Q u_xx(t, x)] + ⟨Ax, u_x(t, x)⟩,
   u(0, x) = ϕ(x),        (8.29)
where A, Q ∈ L(R^n), Q is symmetric and ⟨Qx, x⟩ ≥ 0 for all x ∈ R^n. The corresponding stochastic differential equation is
   dX(t) = AX(t)dt + √Q dB(t),
   X(0) = x,        (8.30)
where B is a standard Brownian motion in a probability space (Ω, F, P), taking values in R^n. The solution of (8.30) is given by the variation of constants formula
   X(t, x) = e^{tA}x + ∫_0^t e^{(t−s)A} √Q dB(s).        (8.31)
Therefore the law of X(t, x) is given by
   X(t, x)_# P = N_{e^{tA}x, Q_t},        (8.32)
where
   Q_t = ∫_0^t e^{sA} Q e^{sA*} ds        (8.33)
and A* is the adjoint of A. Consequently, the transition semigroup P_t looks like
   P_t ϕ(x) = ∫_{R^n} ϕ(y) N_{e^{tA}x, Q_t}(dy).        (8.34)
So, the solution of (8.29) is given by u(t, x) = P_t ϕ(x).
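The following Python sketch is an added numerical illustration of Example 8.11, with hypothetical matrices A and Q: it computes Q_t from (8.33) by quadrature and compares it with the empirical covariance of Euler-scheme samples of (8.30); by (8.31)-(8.32) the two should agree.

# Added illustration: Q_t from (8.33) versus the empirical covariance of X(t, 0).
import numpy as np
from scipy.linalg import expm, sqrtm

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # hypothetical choices
Q = np.array([[0.2, 0.0], [0.0, 0.1]])
t = 1.0

# Q_t by the trapezoidal rule applied to s -> e^{sA} Q e^{sA^T}
ss = np.linspace(0.0, t, 2000)
integrand = np.array([expm(s * A) @ Q @ expm(s * A).T for s in ss])
Qt = np.trapz(integrand, ss, axis=0)

# Euler scheme started at x = 0, so that X(t) has (approximately) law N(0, Q_t)
rng = np.random.default_rng(1)
n_paths, n_steps = 20000, 400
h = t / n_steps
sqQ = np.real(sqrtm(Q))
X = np.zeros((n_paths, 2))
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(h), size=(n_paths, 2))
    X = X + X @ A.T * h + dB @ sqQ.T
print(Qt)
print(np.cov(X.T))   # should be close to Qt, up to Monte Carlo and discretization error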

If, in particular, det Q_t > 0, we have
   u(t, x) = (2π)^{−n/2} [det Q_t]^{−1/2} ∫_{R^n} e^{−½ ⟨Q_t^{−1}(y − e^{tA}x), y − e^{tA}x⟩} ϕ(y) dy.        (8.35)

Example 8.12 Consider the parabolic equation in R
   u_t(t, x) = ½ q x² u_xx(t, x) + a x u_x(t, x),
   u(0, x) = ϕ(x),        (8.36)
where q > 0 and a ∈ R. The corresponding stochastic differential equation is
   dX(t) = aX(t)dt + √q X(t)dB(t),
   X(0) = x,        (8.37)
where B is a real Brownian motion in some probability space (Ω, F, P). The solution of (8.37) is given by
   X(t, x) = e^{(a−q/2)t + √q B(t)} x.        (8.38)
Therefore
   P_t ϕ(x) = (2πt)^{−1/2} ∫_{−∞}^{+∞} e^{−y²/(2t)} ϕ(e^{(a−q/2)t + √q y} x) dy.        (8.39)
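As an added numerical illustration, formula (8.39) can be checked against a direct Monte Carlo simulation based on the explicit solution (8.38); the parameters and the test function below are hypothetical choices.

# Added illustration: formula (8.39) by quadrature versus Monte Carlo based on (8.38).
import numpy as np

a, q, t, x = 0.1, 0.3, 1.0, 2.0
phi = lambda z: 1.0 / (1.0 + z**2)   # hypothetical bounded test function

# quadrature of (8.39)
y = np.linspace(-10 * np.sqrt(t), 10 * np.sqrt(t), 20001)
integrand = np.exp(-y**2 / (2 * t)) * phi(np.exp((a - q / 2) * t + np.sqrt(q) * y) * x)
quad = np.trapz(integrand, y) / np.sqrt(2 * np.pi * t)

# Monte Carlo using the explicit solution (8.38)
rng = np.random.default_rng(2)
B_t = rng.normal(0.0, np.sqrt(t), size=200000)
mc = phi(np.exp((a - q / 2) * t + np.sqrt(q) * B_t) * x).mean()

print(quad, mc)   # the two values should agree up to Monte Carlo error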


Appendix A

λ-systems and π-systems

Let Ω be a non empty set. A non empty family R of parts of Ω is called a π-system if
   A, B ∈ R ⟹ A ∩ B ∈ R;
it is called a λ-system if
   (i) Ω, ∅ ∈ D;
   (ii) A ∈ D ⟹ A^c ∈ D;        (A.1)
   (iii) (A_i) ⊂ D mutually disjoint ⟹ ∪_{i=1}^∞ A_i ∈ D.
Obviously any algebra is a π-system. Moreover, if D is a λ-system such that
   A, B ∈ D ⟹ A ∩ B ∈ D,
then it is a σ-algebra. In fact, if (A_i) is a sequence in D of not necessarily disjoint sets, we have
   ∪_{i=1}^∞ A_i = A_1 ∪ (A_2 \ A_1) ∪ (A_3 \ A_2 \ A_1) ∪ ···,
and so ∪_{i=1}^∞ A_i ∈ D by (ii) and (iii).

Let us prove the following Dynkin theorem.

Theorem A.1 Let R be a π-system and let D be a λ-system including R. Then we have σ(R) ⊂ D, where σ(R) is the σ-algebra generated by R. If in particular D ⊂ σ(R), we have σ(R) = D.

Proof. Let D₀ be the minimal λ-system including R. We are going to show that D₀ is a σ-algebra, which will imply the theorem. For this it is enough to show, as remarked before, that the following inclusion holds:
   A, B ∈ D₀ ⟹ A ∩ B ∈ D₀.        (A.2)

For any B ∈ D₀ we set
   H(B) = {F ∈ D₀ : B ∩ F ∈ D₀}.
If we show that
   H(B) ⊃ R, ∀ B ∈ D₀,        (A.3)
then we conclude that H(B) = D₀ by the minimality of D₀, and (A.2) is proved.

We claim that H(B) is a λ-system. In fact, properties (i) and (iii) are clear. It remains to show that if F ∩ B ∈ D₀ then F^c ∩ B ∈ D₀ or, equivalently, that F ∪ B^c ∈ D₀. In fact, since
   F ∪ B^c = (F \ B^c) ∪ B^c = (F ∩ B) ∪ B^c,
and F ∩ B and B^c are disjoint, we have that F ∪ B^c ∈ D₀, as required.

On the other hand, it is clear that if R ∈ R then R ⊂ H(R), since R is a π-system. Therefore H(R) = D₀ by the minimality of D₀. Consequently, the following implication holds:
   R ∈ R, B ∈ D₀ ⟹ R ∩ B ∈ D₀,
which yields R ⊂ H(B) for every B ∈ D₀, so that (A.3) is fulfilled. □

Example A.2 Let A be an algebra of subsets of Ω and let F be the σ-algebra generated by A. Let P₁ and P₂ be probability measures on (Ω, F) such that
   P₁(I) = P₂(I), ∀ I ∈ A.
Using the Dynkin theorem we can show that P₁ = P₂. It is clear in fact that A is a π-system. Define
   D = {B ∈ F : P₁(B) = P₂(B)}.
It is easy to see that D is a λ-system which contains A. Therefore, by Theorem A.1, it follows that P₁ = P₂.

Appendix B

Conditional expectation

B.1 Definition

We are given a probability space (Ω, F, P) and a σ-algebra G included in F. Let X : Ω → R be a real random variable on (Ω, F, P)(1). We say that X is G-measurable if
   I ∈ B(R) ⟹ X^{−1}(I) ∈ G.
It is clear that X is not G-measurable in general.

Let us consider the signed measure
   µ(G) = ∫_G X dP, G ∈ G.
It is clear that µ is absolutely continuous with respect to the restriction of P to G. Therefore, by the Radon-Nikodym Theorem there exists a unique Y ∈ L¹(Ω, G, P) such that
   µ(G) = ∫_G X dP = ∫_G Y dP, ∀ G ∈ G.        (B.1)
The G-measurable random variable Y is called the conditional expectation of X given G; it is denoted by E(X|G). In view of (B.1), E(X|G) is characterized by
   ∫_G X dP = ∫_G E(X|G) dP, ∀ G ∈ G.        (B.2)

(1) In all this appendix, by a random variable we mean an equivalence class of random variables with respect to the usual equivalence relation.

Exercise B.1 Assume that X ∈ L²(Ω, F, P). Show that E(X|G) coincides with the orthogonal projection of X onto the closed subspace L²(Ω, G, P) of L²(Ω, F, P).
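A concrete finite illustration of (B.2), added here and not part of the original text: when G is generated by a finite partition, E(X|G) is the random variable that is constant on each atom and equal there to the P-weighted average of X, and the defining relation (B.2) can be verified directly. The numbers below are randomly generated for the example.

# Added illustration: conditional expectation with respect to a finite partition,
# and a direct check of the defining relation (B.2) on the atoms.
import numpy as np

rng = np.random.default_rng(3)
n = 12
p = rng.dirichlet(np.ones(n))          # probabilities of the elementary outcomes
X = rng.normal(size=n)                 # a random variable on {0, ..., n-1}
atoms = [np.arange(0, 4), np.arange(4, 9), np.arange(9, 12)]   # partition generating G

Y = np.empty(n)                        # Y = E(X|G): per-atom P-weighted average of X
for A in atoms:
    Y[A] = np.dot(p[A], X[A]) / p[A].sum()

# (B.2): the integrals of X and of E(X|G) agree on each atom (hence, by additivity,
# on every G-measurable set, i.e. on every union of atoms)
for A in atoms:
    print(np.dot(p[A], X[A]), np.dot(p[A], Y[A]))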

B.2 Basic properties

Let X, Y ∈ L¹(Ω, F, P) and let G be a σ-algebra included in F. It is obvious that if X is G-measurable, we have E(X|G) = X, P-a.s. Moreover, one can check easily the linearity of conditional expectation,
   E(αX + βY|G) = αE(X|G) + βE(Y|G), P-a.s.,        (B.3)
for all α, β ∈ R and all X, Y ∈ L¹(Ω, F, P). Also, if X ≥ 0, one has E(X|G) ≥ 0, P-a.s. From this one deduces the inequality
   |E(X|G)| ≤ E(|X| | G), P-a.s.        (B.4)
Setting G = Ω in (B.2) yields
   E[E(X|G)] = E(X).        (B.5)

Proposition B.2 Assume that X is independent of G. Then we have
   E(X|G) = E(X).        (B.6)
Proof. Let A ∈ G. Then 1_A and X are independent, so that
   ∫_A X dP = ∫_Ω 1_A X dP = P(A)E(X) = ∫_A E(X) dP,
which implies the claim. □

Proposition B.3 Let H be a σ-algebra included in G. Then we have
   E(X|H) = E[E(X|G) | H].        (B.7)
Proof. Let A ∈ H. Then we have
   ∫_A X dP = ∫_A E(X|H) dP        (B.8)
and
   ∫_A X dP = ∫_A E(X|G) dP = ∫_A E[E(X|G) | H] dP.        (B.9)
So, comparing (B.8) and (B.9), we see that
   ∫_A E(X|H) dP = ∫_A E[E(X|G) | H] dP, ∀ A ∈ H,
which implies the claim. □

Proposition B.4 Let X, Y, XY ∈ L¹(Ω, F, P) and assume that X is G-measurable. Then we have
   E(XY|G) = X E(Y|G), P-a.s.        (B.10)

Proof. It is enough to show (B.10) for X = 1_A, where A ∈ G. In this case, for any G ∈ G, since G ∩ A ∈ G, we have
   ∫_G E(1_A Y|G) dP = ∫_G 1_A Y dP = ∫_{G∩A} Y dP = ∫_{G∩A} E(Y|G) dP = ∫_G 1_A E(Y|G) dP,
which implies the claim. □

Recalling Proposition B.2 we find:

Corollary B.5 Let X, Y, XY ∈ L¹(Ω, F, P). Assume that X is G-measurable and Y is independent of G. Then we have
   E(XY|G) = X E(Y).

Let us prove now a useful generalization of this Corollary.

Proposition B.6 Let X, Y ∈ L¹(Ω, F, P) and let φ : R² → R be bounded and Borel. Assume that X is G-measurable and that Y is independent of G. Then we have
   E(φ(X, Y)|G) = h(X),        (B.11)
where
   h(x) = E[φ(x, Y)], x ∈ R.        (B.12)

Proof. We have to show that
   ∫_G φ(X, Y) dP = ∫_G h(X) dP, ∀ G ∈ G.        (B.13)
This is clearly equivalent to
   E(Zφ(X, Y)) = E(Zh(X)), ∀ Z ∈ L¹(Ω, G, P).        (B.14)
Denote by µ the law of the random variable (X, Y, Z) with values in R³,
   µ = (X, Y, Z)_# P.        (B.15)
Then
   E(Zφ(X, Y)) = ∫_{R³} z φ(x, y) µ(dx, dy, dz).

Since X and Z are G-measurable and Y is independent of G, the random variables (X, Z) and Y are independent, so that
   µ(dx, dy, dz) = ν(dx, dz) λ(dy),
where ν = (X, Z)_# P and λ = Y_# P. Therefore we can write
   E(Zφ(X, Y)) = ∫_{R³} z φ(x, y) ν(dx, dz) λ(dy).
Using the Fubini Theorem we get finally
   E(Zφ(X, Y)) = ∫_{R²} z (∫_R φ(x, y) λ(dy)) ν(dx, dz) = ∫_{R²} z h(x) ν(dx, dz) = E(Zh(X)),
as required. □

Exercise B.7 Let F ∈ L¹(Ω, G, P), H, FH ∈ L¹(Ω, F, P), and set Z = E(H|G). Prove that
   E(FH) = E(FZ).        (B.16)

Exercise B.8 Let g : R → R be convex and let F, g(F) ∈ L¹(Ω, F, P). Prove the Jensen inequality
   E(g(F)|G) ≥ g(E(F|G)).        (B.17)

Appendix C

Martingales

C.1 Definitions

Let (Ω, F, P) be a probability space, (F_t)_{t≥0} an increasing family of σ-algebras included in F, and (M(t))_{t∈[0,T]} a stochastic process with M(t) ∈ L¹(Ω, F_t, P) for all t ∈ [0, T]. (M(t))_{t∈[0,T]} is said to be a martingale (with respect to the filtration (F_t)_{t≥0}) if
   E[M(t)|F_s] = M(s), ∀ 0 ≤ s < t ≤ T;
a submartingale if
   E[M(t)|F_s] ≥ M(s), ∀ 0 ≤ s < t ≤ T;
and a supermartingale if
   E[M(t)|F_s] ≤ M(s), ∀ 0 ≤ s < t ≤ T.

Thus (M(t))_{t∈[0,T]} is a martingale if and only if
   ∫_A M(s) dP = ∫_A M(t) dP, ∀ A ∈ F_s, ∀ 0 ≤ s < t ≤ T;
a submartingale if and only if
   ∫_A M(s) dP ≤ ∫_A M(t) dP, ∀ A ∈ F_s, ∀ 0 ≤ s < t ≤ T;
and a supermartingale if and only if
   ∫_A M(s) dP ≥ ∫_A M(t) dP, ∀ A ∈ F_s, ∀ 0 ≤ s < t ≤ T.

Proposition C.1 If M is a martingale, then |M| is a submartingale.

Proof. Let t > s and A ∈ F_s. Set
   A⁺ = {ω ∈ Ω : M(s)(ω) > 0}, A⁻ = {ω ∈ Ω : M(s)(ω) ≤ 0}.
Clearly A⁺ and A⁻ belong to F_s. Consequently we have
   ∫_A |M(s)| dP = ∫_{A∩A⁺} M(s) dP − ∫_{A∩A⁻} M(s) dP = ∫_{A∩A⁺} M(t) dP − ∫_{A∩A⁻} M(t) dP ≤ ∫_A |M(t)| dP.
This shows that |M| is a submartingale. □

Example C.2 The Brownian motion B is a martingale. In fact, let 0 ≤ s < t ≤ T and A ∈ F_s. Since B(t) − B(s) and 1_A are independent, we have
   ∫_A (B(t) − B(s)) dP = E(1_A (B(t) − B(s))) = 0,
so that
   ∫_A B(t) dP = ∫_A B(s) dP.

Exercise C.3 Using Jensen's inequality (see Exercise B.8), prove that any convex function of a martingale is a submartingale.

C.2 The basic inequality for martingales

Let M(t) be a martingale, let 0 < t₁ < t₂ < ··· < t_n ≤ T, and set
   S = sup_{1≤i≤n} |M(t_i)|.
We are going to prove an important estimate (due to Kolmogorov) of S in terms of M(t_n).

Proposition C.4 For all λ > 0 we have
   P(S ≥ λ) ≤ (1/λ) ∫_{{S≥λ}} |M(t_n)| dP.        (C.1)

Proof. Set
   A₁ = {|M(t₁)| ≥ λ},
   A₂ = {|M(t₁)| < λ, |M(t₂)| ≥ λ},
   ·····································
   A_n = {|M(t₁)| < λ, ..., |M(t_{n−1})| < λ, |M(t_n)| ≥ λ}.
Clearly, the sets A₁, ..., A_n are mutually disjoint. Moreover A_i ∈ F_{t_i}, i = 1, ..., n, and we have
   {S ≥ λ} = ∪_{i=1}^n A_i.
Let us estimate ∫_{{S≥λ}} |M(t_n)| dP. We have obviously
   ∫_{A_n} |M(t_n)| dP ≥ λ P(A_n).
Now we estimate ∫_{A_{n−1}} |M(t_n)| dP. We have, recalling that |M(t)| is a submartingale,
   λ P(A_{n−1}) ≤ ∫_{A_{n−1}} |M(t_{n−1})| dP ≤ ∫_{A_{n−1}} |M(t_n)| dP.        (C.2)
Proceeding in a similar way we obtain
   ∫_{A_k} |M(t_n)| dP ≥ λ P(A_k), k = 1, ..., n.
Summing up on k from 1 to n, the conclusion follows. □

C.3 Square integrable martingales

In this section we are given a martingale M(t) such that M(t) ∈ L²(Ω, F, P) for all t ∈ [0, T]. Let 0 < t₁ < t₂ < ··· < t_n ≤ T and set as before
   S = sup_{1≤i≤n} |M(t_i)|.
We are going to estimate E[S²] in terms of E[M²(t_n)].

Proposition C.5 We have
   E(sup_{1≤i≤n} |M(t_i)|²) ≤ 4 E(|M(t_n)|²).        (C.3)

Proof. Set F(t) = P(S > t), t ≥ 0. Then by (C.1) we have
   F(t) ≤ (1/t) ∫_{{S≥t}} |M(t_n)| dP.        (C.4)
Consequently,
   E(S²) = ∫_0^∞ P(S² > t) dt = ∫_0^∞ P(S > √t) dt.
So, by (C.4) and the Fubini Theorem, we have
   E(S²) ≤ ∫_0^∞ (1/√t) ∫_{{S≥√t}} |M(t_n)| dP dt = ∫_{[0,+∞)×Ω} (1/√t) |M(t_n)| 1_{{S≥√t}} P(dω) dt
        = ∫_Ω |M(t_n)| (∫_0^{S²} (1/√t) dt) P(dω) = 2 ∫_Ω |M(t_n)| S P(dω)
        ≤ 2 (∫_Ω |M(t_n)|² dP)^{1/2} (∫_Ω S² dP)^{1/2}.
Now the conclusion follows easily. □

Corollary C.6 Let M be a square integrable continuous martingale. Then for any T > 0 we have
   E(sup_{t∈[0,T]} |M(t)|²) ≤ 4 E[M²(T)].        (C.5)

Proof. Let 0 < s₁ < s₂ < ··· < s_m = T. By Proposition C.5 it follows that
   E(sup_{1≤i≤m} |M(s_i)|²) ≤ 4 E(|M(T)|²).
Since M is continuous, it follows, by the arbitrariness of the sequence s₁, s₂, ..., s_m, that
   E(sup_{s∈[0,T]} |M(s)|²) ≤ 4 E(|M(T)|²),
as required. □
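The inequalities (C.1) and (C.5) can be illustrated by simulation; the following sketch is an addition to the text, taking M = B as in Example C.2 on a finite grid of times, and should show both bounds holding up to Monte Carlo error.

# Added illustration: Monte Carlo check of (C.1) and (C.5) for the martingale M = B.
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_times, T, lam = 100000, 50, 1.0, 1.2
dt = T / n_times
M = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_times)), axis=1)

S = np.abs(M).max(axis=1)                               # S = sup_i |M(t_i)|
print((S >= lam).mean(),                                # P(S >= lam) ...
      (np.abs(M[:, -1]) * (S >= lam)).mean() / lam)     # ... <= (1/lam) E[|M(t_n)| 1_{S>=lam}]  (C.1)
print((S**2).mean(), 4 * (M[:, -1]**2).mean())          # E[sup |M|^2] <= 4 E[|M(t_n)|^2]        (C.5)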


Appendix D

Fixed points depending on parameters

D.1 Introduction

Let Λ, E be Banach spaces (norms |·|). We are given a continuous mapping
   F : Λ × E → E, (λ, x) → F(λ, x),
and assume the following hypothesis.

Hypothesis D.1 There exists κ ∈ [0, 1) such that
   |F(λ, x) − F(λ, y)| ≤ κ|x − y|, ∀ λ ∈ Λ, x, y ∈ E.        (D.1)

The following result (contraction principle) is classical.

Theorem D.1 (i) There exists a unique continuous mapping
   x : Λ → E, λ → x(λ),
such that
   x(λ) = F(λ, x(λ)), ∀ λ ∈ Λ.        (D.2)
(ii) If in addition F is of class C¹, then x is of class C¹ and
   x′(λ) = F_λ(λ, x(λ)) + F_x(λ, x(λ)) x′(λ).

We want to generalize the second part of this result to mappings F(λ, x) which are only continuously Gâteaux differentiable.

D.2 Gâteaux differentiable mappings

Let A and B be Banach spaces and let Φ : A → B be a continuous mapping from A into B.

Definition D.2 We say that Φ is Gâteaux differentiable if there exists a mapping DΦ : A → L(A, B) such that
   lim_{ξ→0} (1/ξ)(Φ(a + ξc) − Φ(a)) = DΦ(a)c, ∀ a, c ∈ A.
If in addition, for all c ∈ A, the mapping A → B, a → DΦ(a)c is continuous, we say that Φ is continuously Gâteaux differentiable.

Remark D.3 It is well known that if the mapping A → L(A, B), a → DΦ(a), is continuous, then Φ is differentiable(1).

(1) One also says that Φ is Fréchet differentiable.

Example D.4 Let A = B = L²(0, 1) and Φ(x) = sin x, x ∈ L²(0, 1). Then one can check easily that Φ is continuously Gâteaux differentiable and
   DΦ(x)y = y cos x, ∀ x, y ∈ L²(0, 1).
However, (as one can see) Φ is not differentiable at any point.

We shall need the following result.

Proposition D.5 Let Φ : A → B be continuously Gâteaux differentiable. Then the following identity holds:
   Φ(c) − Φ(a) = ∫_0^1 DΦ((1 − ξ)a + ξc)(c − a) dξ.        (D.3)

Proof. Set F(ξ) = Φ((1 − ξ)a + ξc), ξ ∈ [0, 1]. Then we have
   F′(ξ) = DΦ((1 − ξ)a + ξc)(c − a),
and the conclusion follows just integrating this identity between 0 and 1. □
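A discrete check of Example D.4, added here as an illustration: approximating L²(0, 1) by functions sampled on a grid, the difference quotients of Φ(x) = sin x converge in the discrete L² norm to y cos x, in agreement with the formula for DΦ(x)y. The particular x and y below are arbitrary.

# Added illustration: the Gateaux derivative of sin on a grid approximation of L^2(0,1).
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
x = np.sin(7 * t)            # hypothetical elements of L^2(0,1)
y = np.cos(3 * t)

def l2_norm(f):
    return np.sqrt(np.trapz(f**2, t))

for xi in [1e-1, 1e-2, 1e-3, 1e-4]:
    quotient = (np.sin(x + xi * y) - np.sin(x)) / xi
    print(xi, l2_norm(quotient - y * np.cos(x)))   # should decrease to 0 as xi -> 0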

D.3 The main result

We come back to the notations of the introduction and consider two Banach spaces Λ and E and a continuous mapping
   F : Λ × E → E, (λ, x) → F(λ, x).
We assume that Hypothesis D.1 is fulfilled and denote by x the mapping
   x : Λ → E, λ → x(λ),
such that x(λ) = F(λ, x(λ)) for all λ ∈ Λ.

Theorem D.6 Assume that Hypothesis D.1 is fulfilled and that F is continuously Gâteaux differentiable. Then x(·) is continuously Gâteaux differentiable as well, and we have
   x′(λ)·µ = (1 − F_x(λ, x(λ)))^{−1} F_λ(λ, x(λ))·µ,        (D.4)
or, equivalently,
   x′(λ)·µ = F_λ(λ, x(λ))·µ + F_x(λ, x(λ))(x′(λ)·µ).        (D.5)

Proof. Let λ, µ ∈ Λ and h ∈ R. From (D.3) it follows that
   x(λ + hµ) − x(λ) = F(λ + hµ, x(λ + hµ)) − F(λ, x(λ))
      = h ∫_0^1 F_λ(λ + ξhµ, x(λ) + ξ(x(λ + hµ) − x(λ)))·µ dξ
      + ∫_0^1 F_x(λ + ξhµ, x(λ) + ξ(x(λ + hµ) − x(λ)))·(x(λ + hµ) − x(λ)) dξ.        (D.6)
Set now
   G(λ, µ, x, h)z := ∫_0^1 F_x(λ + ξhµ, x(λ) + ξ(x(λ + hµ) − x(λ)))·z dξ, z ∈ E.        (D.7)
Then G(λ, µ, x, h) ∈ L(E) and, by Hypothesis D.1,
   |G(λ, µ, x, h)z| ≤ κ|z|, ∀ z ∈ E.

Then from equation (D.6) we have
   (1 − G(λ, µ, x, h))(x(λ + hµ) − x(λ)) = h ∫_0^1 F_λ(λ + ξhµ, x(λ) + ξ(x(λ + hµ) − x(λ)))·µ dξ,
which implies
   x(λ + hµ) − x(λ) = h (1 − G(λ, µ, x, h))^{−1} ∫_0^1 F_λ(λ + ξhµ, x(λ) + ξ(x(λ + hµ) − x(λ)))·µ dξ.
Letting h → 0 we find
   x′(λ)·µ = (1 − F_x(λ, x(λ)))^{−1} F_λ(λ, x(λ))·µ.
Therefore
   x′(λ)·µ − F_x(λ, x(λ))(x′(λ)·µ) = F_λ(λ, x(λ))·µ,
which is (D.5). □
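Formula (D.4) can be checked numerically in a scalar example; the following sketch is an added illustration with a hypothetical contraction F, where the fixed point x(λ) is computed by successive approximations and the finite-difference derivative is compared with (1 − F_x(λ, x(λ)))^{-1} F_λ(λ, x(λ)).

# Added illustration: scalar check of (D.4) for F(lambda, x) = 0.5*sin(x) + lambda.
import numpy as np

def F(lam, x):
    return 0.5 * np.sin(x) + lam        # contraction in x with kappa = 0.5

def fixed_point(lam, n_iter=200):
    x = 0.0
    for _ in range(n_iter):
        x = F(lam, x)
    return x

lam, h = 0.7, 1e-6
x_lam = fixed_point(lam)
numerical = (fixed_point(lam + h) - fixed_point(lam - h)) / (2 * h)
formula = 1.0 / (1.0 - 0.5 * np.cos(x_lam))   # (1 - F_x)^{-1} F_lambda, with F_lambda = 1
print(numerical, formula)                     # the two values should agree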

Appendix E

Fractional Sobolev spaces and regularity of processes

E.1 Fractional Sobolev spaces on [0, 1]

Let ε ∈ (0, 1) and m ∈ N. The space W^{ε,2m}(0, T) is by definition the space of all f : [0, T] → R such that
   ‖f‖^{2m}_{W^{ε,2m}} := ∫_{[0,T]²} |f(t) − f(s)|^{2m} / |t − s|^{1+2mε} dt ds < +∞.

Theorem E.1 (Sobolev embedding) Assume that ε > 1/(2m). Then the following inclusion holds, with continuous embedding:
   W^{ε,2m}(0, T) ⊂ C^{ε−1/(2m)}([0, T]).        (E.1)

Example E.2 (The Brownian motion) Let ε > 0 and let p ≥ 1. We ask whether B(·) belongs to W^{ε,p}(0, T) or not. Let us compute
   E(‖B‖^p_{W^{ε,p}}) = E ∫_{[0,T]²} |B(t) − B(s)|^p / |t − s|^{1+pε} dt ds.
Take for simplicity p = 2m, m ∈ N. Then, since E[|B(t) − B(s)|^{2m}] = c_m |t − s|^m,
   E(‖B‖^{2m}_{W^{ε,2m}}) = E ∫_{[0,T]²} |B(t) − B(s)|^{2m} / |t − s|^{1+2mε} dt ds = c_m ∫_{[0,T]²} |t − s|^{m−1−2mε} dt ds.

The integral is finite if and only if m − 1 − 2mε > −1, that is, if and only if ε < 1/2.

For instance, taking m = 1 we conclude that B(·) ∈ W^{ε,2}(0, T) for ε < 1/2. This does not imply that B(·) is continuous. But if we take m = 2 we have B(·) ∈ W^{ε,4}(0, T), again for ε < 1/2. Therefore, if 1/4 < ε < 1/2, we conclude by the Sobolev embedding that B(·) ∈ C^{ε−1/4}(0, T). Arguing similarly, taking larger and larger m, we conclude that B(·) ∈ C^α(0, T) for any α ∈ (0, 1/2).

E.2 Processes belonging to W^{ε,2m}(0, T)

Let (Ω, F, P) be a probability space and let X(t), t ∈ [0, T], be a real stochastic process on (Ω, F, P). One situation often encountered is when the following estimate holds for some m > 1 and c_m > 0:
   E[|X(t) − X(s)|^{2m}] ≤ c_m |t − s|^m, ∀ t, s ∈ [0, T].        (E.2)
This estimate (provided m > 1) allows us to conclude that the trajectories of X are Hölder continuous almost surely, as the next proposition shows.

Proposition E.3 Assume that there are m > 1 and c_m > 0 such that (E.2) is fulfilled, and let ε ∈ (1/(2m), 1/2). Then we have
   E(‖X‖^{2m}_{W^{ε,2m}}) < +∞,        (E.3)
and X(·, ω) belongs to C^{ε−1/(2m)}([0, T]) for almost all ω ∈ Ω.

Proof. We have in fact
   E(‖X‖^{2m}_{W^{ε,2m}}) ≤ c_m ∫_{[0,T]²} |t − s|^{m−1−2mε} dt ds < ∞,
since ε < 1/2 implies m − 1 − 2mε > −1. The last statement follows from the Sobolev embedding theorem. □

Remark E.4 (Kolmogorov test) It is a generalization of Proposition E.3. Assume that there are a > 0, b > 0 and c > 0 such that
   E[|X(t) − X(s)|^{1+a}] ≤ c |t − s|^{1+b}, ∀ t, s ∈ [0, T].        (E.4)
Then X has α-Hölder continuous trajectories for any α < b/(1 + a).
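The moment identity E|B(t) − B(s)|^{2m} = c_m|t − s|^m used in Example E.2 above can be checked by simulation for m = 2, where c_2 = 3 (the fourth moment of a standard Gaussian); the following is an added illustration, not part of the original text.

# Added illustration: Monte Carlo check of E|B(t) - B(s)|^4 = 3 |t - s|^2.
import numpy as np

rng = np.random.default_rng(5)
t, s = 0.9, 0.4
increments = rng.normal(0.0, np.sqrt(t - s), size=1000000)   # B(t) - B(s) ~ N(0, t - s)
print((increments**4).mean(), 3 * (t - s)**2)                # the two values should agree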

E.3 Multi dimensional Sobolev spaces and regularity of random fields

Let ε ∈ (0, 1), m ∈ N and d ∈ N. The space W^{ε,2m}([0, T]^d) is by definition the space of all f : [0, T]^d → R such that
   ‖f‖^{2m}_{W^{ε,2m}} := ∫_{[0,T]^{2d}} |f(x) − f(y)|^{2m} / |x − y|^{d+2mε} dx dy < +∞.

Theorem E.5 (Sobolev embedding) Assume that ε > d/(2m). Then the following inclusion holds, with continuous embedding:
   W^{ε,2m}([0, T]^d) ⊂ C^{ε−d/(2m)}([0, T]^d).        (E.5)

Let (Ω, F, P) be a probability space and let X(x), x ∈ [0, T]^d, be a random field on (Ω, F, P).

Proposition E.6 Assume that there are m > 1 and c_m > 0 such that
   E[|X(x) − X(y)|^{2m}] ≤ c_m |x − y|^{2m}, ∀ x, y ∈ [0, T]^d,        (E.6)
and let ε ∈ (d/(2m), 1) (which is possible provided 2m > d). Then we have
   E(‖X‖^{2m}_{W^{ε,2m}}) < +∞,        (E.7)
and X(·, ω) belongs to C^{ε−d/(2m)}([0, T]^d) for almost all ω ∈ Ω; in particular, almost all trajectories of X are Hölder continuous.

Proof. We have in fact
   E(‖X‖^{2m}_{W^{ε,2m}}) ≤ c_m ∫_{[0,T]^{2d}} |x − y|^{2m−d−2mε} dx dy < ∞,
since ε < 1 implies 2m − d − 2mε > −d. The last statement follows from the Sobolev embedding theorem. □
