STOCHASTIC CALCULUS
Marta Sanz-Solé
Facultat de Matemàtiques
Universitat de Barcelona
January 30, 2009
Contents

1 A review of the basics on stochastic processes
1.1 The law of a stochastic process
1.2 Sample paths

3 Itô’s calculus
3.1 Itô’s integral
3.2 The Itô integral as a stochastic process
3.3 An extension of the Itô integral
3.4 A change of variables formula: Itô’s formula
3.4.1 One dimensional Itô’s formula
3.4.2 Multidimensional version of Itô’s formula
1 A review of the basics on stochastic processes
This chapter is devoted to introducing the notion of a stochastic process and some general definitions related to it. For a more complete account of the topic, we refer the reader to [?]. Let us start with a definition.
To make progress in the analysis of such an object, one needs to put some structure on the index set $I$ and on the state space $S$. In this course, we shall mainly deal with the particular cases $I = \mathbb{N},\ \mathbb{Z}_+,\ \mathbb{R}_+$, and $S$ either a countable set or a subset of $\mathbb{R}$.
The basic problem statisticians are interested in is the analysis of the probability law (mostly described by some parameters) of characters exhibited by populations. For a fixed character described by a random variable $X$, they use a finite number of independent copies of $X$, a sample of $X$. For many purposes, it is useful to have samples of any size, and therefore to consider sequences $X_n$, $n \ge 1$. It is important here to insist on the word copies, meaning that the circumstances around the different outcomes of $X$ do not change: it is a static world. Hence, they deal with stochastic processes $\{X_n,\ n \ge 1\}$ consisting of independent and identically distributed random variables.
However, this is not the setting we are interested in here. Instead, we would like to give stochastic models for phenomena of the real world which evolve as time passes. Stochasticity is a modeling choice made in the face of incomplete knowledge and extreme complexity. Evolution, in contrast with statics, is what we observe in most phenomena in Physics, Chemistry, Biology, Economics, Life Sciences, etc.
Stochastic processes are well suited for modeling stochastic evolution phenomena. The interesting cases correspond to families of random variables $X_i$ which are not independent. In fact, the famous classes of stochastic processes are described by means of types of dependence between the variables of the process.
Definition 1.2 The finite-dimensional joint distributions of the process $\{X_i,\ i \in I\}$ consist of the multi-dimensional probability laws of any finite family of random vectors $X_{i_1}, \dots, X_{i_m}$, where $i_1, \dots, i_m \in I$ and $m \ge 1$ is arbitrary.
$$X : \Omega \to \mathbb{R}^I.$$
Putting the appropriate $\sigma$-field of events on $\mathbb{R}^I$, say $\mathcal{B}(\mathbb{R}^I)$, one can define, as for random variables, the law of the process as the mapping
$$P_X(B) = P\big(X^{-1}(B)\big), \quad B \in \mathcal{B}(\mathbb{R}^I),$$
where:
1. $P_{t_1,\dots,t_n}$ is a probability on $\mathbb{R}^n$,
2. if $\{t_{i_1} < \dots < t_{i_m}\} \subset \{t_1 < \dots < t_n\}$, the probability law $P_{t_{i_1},\dots,t_{i_m}}$ is the marginal distribution of $P_{t_1,\dots,t_n}$.
One can apply this theorem to Example 1.1 to show the existence of Gaussian
processes, as follows.
Let K : I × I → R be a symmetric, positive definite function. That means:
Then there exists a Gaussian process $\{X_t,\ t \in I\}$ such that $E(X_t) = 0$ for any $t \in I$ and $\mathrm{Cov}(X_{t_i}, X_{t_j}) = K(t_i, t_j)$, for any $t_i, t_j \in I$.
To prove this result, fix $t_1, \dots, t_n \in I$ and set $\mu = (0, \dots, 0) \in \mathbb{R}^n$, $\Lambda = (K(t_i, t_j))_{1\le i,j\le n}$ and
$$P_{t_1,\dots,t_n} = N(0, \Lambda).$$
We denote by $(X_{t_1}, \dots, X_{t_n})$ a random vector with law $P_{t_1,\dots,t_n}$. For any subset $\{t_{i_1}, \dots, t_{i_m}\}$ of $\{t_1, \dots, t_n\}$, it holds that $(X_{t_{i_1}}, \dots, X_{t_{i_m}})^T = A\,(X_{t_1}, \dots, X_{t_n})^T$, with
$$A = \big(\delta_{t_{i_j}, t_k}\big)_{1\le j\le m,\ 1\le k\le n},$$
where $\delta$ denotes the Kronecker delta. Hence, the assumptions of Theorem 1.1 hold true and the result follows.
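The finite-dimensional construction above can be illustrated numerically. The sketch below (not part of the notes; the function names are ours) samples the vector $(X_{t_1},\dots,X_{t_n})$ from $N(0,\Lambda)$ with $\Lambda_{ij}=K(t_i,t_j)$, using an eigendecomposition of $\Lambda$, and checks that the Brownian covariance $K(s,t)=s\wedge t$ is indeed nonnegative definite.

```python
import numpy as np

def sample_gaussian_process(times, K, rng):
    """Draw (X_{t_1}, ..., X_{t_n}) ~ N(0, Lambda), Lambda_ij = K(t_i, t_j)."""
    Lam = np.array([[K(s, t) for t in times] for s in times])
    # Factor Lambda = A A^T; eigendecomposition tolerates semidefiniteness.
    w, V = np.linalg.eigh(Lam)
    w = np.clip(w, 0.0, None)            # clip tiny negative round-off
    A = V * np.sqrt(w)                    # columns scaled so A @ A.T == Lam
    return A @ rng.standard_normal(len(times))

K = lambda s, t: min(s, t)                # Brownian-motion covariance
times = np.linspace(0.1, 1.0, 10)
Lam = np.array([[K(s, t) for t in times] for s in times])
# K is nonnegative definite: all eigenvalues of Lambda are >= 0.
assert np.linalg.eigvalsh(Lam).min() >= -1e-12
x = sample_gaussian_process(times, K, np.random.default_rng(0))
```

Replacing `K` by any other symmetric nonnegative-definite function gives a sample of the corresponding Gaussian process at the chosen times.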
1.2 Sample paths
In the previous discussion, stochastic processes are considered as random
vectors. In the context of modeling, what matters are observed values of the
process. Observations correspond to fixed values of ω ∈ Ω. This new point
of view leads to the next definition.
Definition 1.3 The sample paths of a stochastic process {Xt , t ∈ I} are the
family of functions indexed by ω ∈ Ω, X(ω) : I → S, defined by X(ω)(t) =
Xt (ω).
is the time of the n-th arrival. The process we would like to introduce is Nt ,
giving the number of customers who have visited the store during the time
interval [0, t], t ≥ 0.
Clearly, $N_0 = 0$ and for $t > 0$, $N_t = k$ if and only if
$$S_k \le t < S_{k+1}.$$
The stochastic process $\{N_t,\ t \ge 0\}$ takes values in $\mathbb{Z}_+$. Its sample paths are increasing, right-continuous functions, with jumps of size one at the random times $S_n$, $n \ge 1$. It is a particular case of a counting process. Sample paths of counting processes are always increasing right-continuous functions whose jumps are natural numbers.
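A small sketch of the relation $N_t = k \Leftrightarrow S_k \le t < S_{k+1}$, with hypothetical arrival times (not from the notes): counting the arrivals in $[0,t]$ amounts to locating $t$ among the sorted $S_n$.

```python
import numpy as np

def N(t, S):
    """Number of arrivals in [0, t]; S is the increasing array of arrival times."""
    # side="right" counts arrivals with S_k <= t, so paths are right-continuous.
    return int(np.searchsorted(S, t, side="right"))

S = np.array([0.5, 1.2, 2.0, 2.1])   # illustrative arrival times S_1 < S_2 < ...
assert N(0.4, S) == 0                 # before the first arrival
assert N(0.5, S) == 1                 # jump of size one exactly at S_1
assert N(2.05, S) == 3
```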
$$E(B_t) = 0, \qquad E(B_s B_t) = s \wedge t.$$
This defines the finite-dimensional distributions and therefore the existence of the process via Kolmogorov’s theorem (see Theorem 1.1).
Before giving a heuristic motivation for the preceding definition of Brownian
motion we introduce two further notions.
2 The Brownian motion
2.1 Equivalent definitions of Brownian motion
This chapter is devoted to the study of Brownian motion, the process introduced in Example 1.3 that we recall now.
Definition 2.1 The stochastic process {Bt , t ≥ 0} is a one-dimensional
Brownian motion if it is Gaussian, zero mean and with covariance function
given by E (Bt Bs ) = s ∧ t.
The existence of such a process is ensured by Kolmogorov’s theorem. Indeed, it suffices to check that
$$(s,t) \mapsto \Gamma(s,t) = s \wedge t$$
is nonnegative definite. That means, for any $t_i, t_j \ge 0$ and any real numbers $a_i, a_j$, $i, j = 1, \dots, m$,
$$\sum_{i,j=1}^m a_i a_j\, \Gamma(t_i, t_j) \ge 0.$$
But
$$s \wedge t = \int_0^\infty \mathbf{1}_{[0,s]}(r)\, \mathbf{1}_{[0,t]}(r)\, dr.$$
Hence,
$$\sum_{i,j=1}^m a_i a_j\, t_i \wedge t_j = \sum_{i,j=1}^m a_i a_j \int_0^\infty \mathbf{1}_{[0,t_i]}(r)\, \mathbf{1}_{[0,t_j]}(r)\, dr = \int_0^\infty \Big(\sum_{i=1}^m a_i\, \mathbf{1}_{[0,t_i]}(r)\Big)^2 dr \ge 0.$$
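The identity behind nonnegative definiteness can be checked numerically. This is an illustrative sketch (the specific values of $t_i$ and $a_i$ are ours): the quadratic form $\sum_{i,j} a_i a_j (t_i \wedge t_j)$ coincides with the integral of the squared step function $\sum_i a_i \mathbf{1}_{[0,t_i]}$, hence is nonnegative.

```python
import numpy as np

t = np.array([0.3, 0.7, 1.5])
a = np.array([2.0, -1.0, 0.5])

quad_form = sum(a[i] * a[j] * min(t[i], t[j])
                for i in range(len(t)) for j in range(len(t)))

# Evaluate the integral on a fine grid; the integrand is piecewise constant.
r = np.linspace(0.0, t.max(), 300001)
step = (a[:, None] * (r[None, :] <= t[:, None])).sum(axis=0)
integral = float(((step[:-1] ** 2) * np.diff(r)).sum())

assert quad_form >= 0.0
assert abs(quad_form - integral) < 1e-2   # both equal 0.975 here
```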
Notice also that, since E(B02 ) = 0, the random variable B0 is zero almost
surely.
Each random variable $B_t$, $t > 0$, of the Brownian motion has a density, namely
$$p_t(x) = \frac{1}{\sqrt{2\pi t}} \exp\Big(-\frac{x^2}{2t}\Big),$$
while for $t = 0$, its "density" is a Dirac mass at zero, $\delta_{\{0\}}$.
Differentiating $p_t(x)$ once with respect to $t$, and then twice with respect to $x$, easily yields
$$\frac{\partial}{\partial t} p_t(x) = \frac{1}{2} \frac{\partial^2}{\partial x^2} p_t(x), \qquad p_0(x) = \delta_{\{0\}}.$$
This is the heat equation on R with initial condition p0 (x) = δ{0} . That
means, as time evolves, the density of the random variables of the Brownian
motion behaves like a diffusive physical phenomenon.
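The claim that the Gaussian density solves the heat equation can be verified symbolically; the following sketch checks that the residual $\partial_t p - \frac12 \partial_{xx} p$ simplifies to zero.

```python
import sympy as sp

t = sp.symbols("t", positive=True)
x = sp.symbols("x", real=True)

# Brownian-motion density p_t(x) for t > 0.
p = sp.exp(-x**2 / (2 * t)) / sp.sqrt(2 * sp.pi * t)

# Heat-equation residual: d/dt p - (1/2) d^2/dx^2 p should vanish identically.
residual = sp.diff(p, t) - sp.Rational(1, 2) * sp.diff(p, x, 2)
assert sp.simplify(residual) == 0
```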
There are equivalent definitions of Brownian motion, such as the one given in the next result.
(i) X0 = 0, a.s.,
E (Xr (Xs+u − Xs )) = 0,
E(Xt − Xs )2 = t + s − 2s = t − s.
Remark 2.1 We shall see later that Brownian motion has continuous sample paths. The description of the process given in the preceding proposition tells us that such a process is a model for a random evolution which starts from $x = 0$ at time $t = 0$, such that the qualitative change over a time increment only depends on its length (stationary law), and that the future evolution of the process is independent of its past (Markov property).
Remark 2.2 It is easy to prove that if $B = \{B_t,\ t \ge 0\}$ is a Brownian motion, so is $-B = \{-B_t,\ t \ge 0\}$. Moreover, for any $\lambda > 0$, the process
$$B^\lambda = \Big\{\frac{1}{\lambda} B_{\lambda^2 t},\ t \ge 0\Big\}$$
is also a Brownian motion. This means that, zooming in or out, we will observe the same sort of behaviour.
We have seen in previous chapters that the sequence $\{S_n,\ n \ge 0\}$ is a Markov chain, and also a martingale.
Let us consider the continuous-time stochastic process defined by linear interpolation of $\{S_n,\ n \ge 0\}$, as follows. For any $t \ge 0$, let $[t]$ denote its integer part. Then set
$$Y_t = S_{[t]} + (t - [t])\,\xi_{[t]+1}, \tag{2.1}$$
for any $t \ge 0$.
The next step is to scale the sample paths of $\{Y_t,\ t \ge 0\}$. By analogy with the scaling in the statement of the central limit theorem, we set
$$B^{(n)}_t = \frac{1}{\sigma\sqrt{n}}\, Y_{nt}, \quad t \ge 0. \tag{2.2}$$
A famous result in probability theory, Donsker’s theorem, tells us that the sequence of processes $\{B^{(n)}_t,\ t \ge 0\}$, $n \ge 1$, converges in law to the Brownian motion. The reference sample space is the set of continuous functions vanishing at zero. Hence, proving the statement, we obtain continuity of the sample paths of the limit.
Donsker’s theorem is the infinite-dimensional version of the above-mentioned central limit theorem. Considering $s = \frac{k}{n}$, $t = \frac{k+1}{n}$, the increment $B^{(n)}_t - B^{(n)}_s = \frac{1}{\sigma\sqrt{n}}\,\xi_{k+1}$ is a random variable with mean zero and variance $t - s$. Hence $B^{(n)}$ is not that far from the Brownian motion, and this is what Donsker’s theorem proves.
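The interpolation (2.1) and the rescaling (2.2) can be sketched as follows, with hypothetical $\pm 1$ steps $\xi_k$ (so $\sigma = 1$); the step values and function names are ours.

```python
import numpy as np

def Y(t, xi):
    """Linear interpolation (2.1) of the partial sums S_n at integer times."""
    S = np.concatenate([[0.0], np.cumsum(xi)])   # S_0, S_1, ..., S_n
    k = int(np.floor(t))
    frac = t - k
    if frac == 0.0:
        return S[k]
    return S[k] + frac * xi[k]                   # xi[k] is xi_{[t]+1} (0-based)

def B_n(t, xi, n, sigma=1.0):
    """Donsker rescaling (2.2): B^(n)_t = Y_{nt} / (sigma sqrt(n))."""
    return Y(n * t, xi) / (sigma * np.sqrt(n))

xi = np.array([1.0, -1.0, 1.0])
assert Y(1.5, xi) == 0.5      # S_1 + 0.5 * xi_2 = 1 - 0.5
assert Y(3.0, xi) == 1.0      # S_3
```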
$$h_0(t) = 1,$$
$$h_n^k(t) = 2^{\frac n2}\, \mathbf{1}_{\big[\frac{2k}{2^{n+1}},\, \frac{2k+1}{2^{n+1}}\big)}(t) - 2^{\frac n2}\, \mathbf{1}_{\big[\frac{2k+1}{2^{n+1}},\, \frac{2k+2}{2^{n+1}}\big)}(t),$$
where the notation $\langle \cdot, \cdot \rangle$ means the inner product in $L^2([0,1], \mathcal{B}([0,1]), \lambda)$.
Using (2.3), we define an isometry between $L^2([0,1], \mathcal{B}([0,1]), \lambda)$ and $L^2(\Omega, \mathcal{F}, P)$ as follows. Consider a family of independent random variables with law $N(0,1)$, $(N_0, N_n^k)$. Then, for $f \in L^2([0,1], \mathcal{B}([0,1]), \lambda)$, set
$$I(f) = \langle f, h_0\rangle N_0 + \sum_{n=0}^\infty \sum_{k=0}^{2^n-1} \langle f, h_n^k\rangle N_n^k.$$
Clearly,
$$E\big(I(f)^2\big) = \|f\|_2^2.$$
Hence $I$ defines an isometry between the space of random variables $L^2(\Omega, \mathcal{F}, P)$ and $L^2([0,1], \mathcal{B}([0,1]), \lambda)$. Moreover, since
$$\sum_{n=0}^{m} \sum_{k=0}^{2^n-1} \cdots$$
Proof: By construction $B_0 = 0$. Notice that for $0 \le s \le t \le 1$, $B_t - B_s = I(\mathbf{1}_{]s,t]})$. Hence, by virtue of (2.4), the process $B$ has independent increments and $B_t - B_s$ has a $N(0, t-s)$ law. From this, by a change of variables, one obtains that the finite-dimensional distributions of $\{B_t,\ t \in [0,1]\}$ are Gaussian. By Proposition 2.1, we obtain the first statement.
Our next aim is to prove that the series appearing in
$$B_t = I\big(\mathbf{1}_{[0,t]}\big) = \langle \mathbf{1}_{[0,t]}, h_0\rangle N_0 + \sum_{n=0}^\infty \sum_{k=0}^{2^n-1} \langle \mathbf{1}_{[0,t]}, h_n^k\rangle N_n^k = g_0(t)\, N_0 + \sum_{n=0}^\infty \sum_{k=0}^{2^n-1} g_n^k(t)\, N_n^k \tag{2.5}$$
converges uniformly. In the last term we have introduced the Schauder functions, defined as follows:
$$g_0(t) = \langle \mathbf{1}_{[0,t]}, h_0\rangle = t, \qquad g_n^k(t) = \langle \mathbf{1}_{[0,t]}, h_n^k\rangle = \int_0^t h_n^k(s)\, ds.$$
The next step consists in proving that $|N_n^k|$ is bounded by some constant depending on $n$ such that, when multiplied by $2^{-\frac n2}$, the series with these terms converges.
For this, we will use a result on large deviations for Gaussian measures along with the first Borel–Cantelli lemma.
Lemma 2.1 For any random variable $X$ with law $N(0,1)$ and for any $a \ge 1$,
$$P(|X| \ge a) \le e^{-\frac{a^2}{2}}.$$
where we have used that $1 \le \frac{x}{a}$ and $\frac{2}{a\sqrt{2\pi}} \le 1$.
We now move to the Borel–Cantelli based argument. By the preceding lemma,
$$P\Big(\sup_{0\le k\le 2^n-1} |N_n^k| > 2^{\frac n4}\Big) \le \sum_{k=0}^{2^n-1} P\big(|N_n^k| > 2^{\frac n4}\big) \le 2^n \exp\big(-2^{\frac n2 - 1}\big).$$
It follows that
$$\sum_{n=0}^\infty P\Big(\sup_{0\le k\le 2^n-1} |N_n^k| > 2^{\frac n4}\Big) < +\infty.$$
That is, a.s. there exists $n_0$, which may depend on $\omega$, such that
$$\sup_{0\le k\le 2^n-1} |N_n^k| \le 2^{\frac n4}$$
for all $n \ge n_0$, which proves the a.s. uniform convergence of the series (2.5).
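The Schauder functions and a partial sum of the series (2.5) can be sketched in code. This is an illustrative implementation (function names are ours): `schauder(n, k, t)` computes $g_n^k(t) = \int_0^t h_n^k(s)\,ds$, a tent of height $2^{-\frac n2 - 1}$, and a truncation of (2.5) with independent $N(0,1)$ coefficients approximates a Brownian path on $[0,1]$.

```python
import numpy as np

def schauder(n, k, t):
    """g_n^k(t): integral of the Haar function h_n^k up to time t."""
    a, b, c = 2*k / 2**(n+1), (2*k + 1) / 2**(n+1), (2*k + 2) / 2**(n+1)
    h = 2 ** (n / 2)
    up = h * (np.clip(t, a, b) - a)        # rising part on [a, b]
    down = -h * (np.clip(t, b, c) - b)     # falling part on [b, c]
    return up + down

def brownian_partial_sum(t, levels, rng):
    """Truncation of (2.5): g_0(t) N_0 + sum_{n < levels} sum_k g_n^k(t) N_n^k."""
    B = t * rng.standard_normal()          # g_0(t) = t
    for n in range(levels):
        for k in range(2 ** n):
            B += schauder(n, k, t) * rng.standard_normal()
    return B

# g_0^0 peaks at t = 1/2 with height 2^{-1} = 0.5, and returns to zero at t = 1.
assert abs(schauder(0, 0, 0.5) - 0.5) < 1e-12
assert abs(schauder(0, 0, 1.0)) < 1e-12
```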
Next we discuss how from Theorem 1.1 we can get a Brownian motion indexed by $\mathbb{R}_+$. To this end, let us consider a sequence $B^k$, $k \ge 1$, consisting of independent Brownian motions indexed by $[0,1]$. That means, for each $k \ge 1$, $B^k = \{B_t^k,\ t \in [0,1]\}$ is a Brownian motion and, for different values of $k$, they are independent. Then we define a Brownian motion recursively as follows. Let $k \ge 1$; for $t \in [k, k+1]$ set
$$B_t = B_1^1 + B_1^2 + \cdots + B_1^k + B^{k+1}_{t-k}.$$
We end this section by giving the notion of $d$-dimensional Brownian motion, for a natural number $d \ge 1$. For $d = 1$ it is the process we have seen so far. For $d > 1$, it is the process defined by
$$B_t = (B_t^1, B_t^2, \dots, B_t^d), \quad t \ge 0,$$
where the components are independent one-dimensional Brownian motions.
Proof: Let $\gamma \in (\frac12, 1]$ and assume that a sample path $t \mapsto B_t(\omega)$ is $\gamma$-Hölder continuous at $s \in [0,1)$. Then
$$\big|B_{\frac jn}(\omega) - B_{\frac{j+1}{n}}(\omega)\big| \le C\Big(\Big|s - \frac jn\Big|^\gamma + \Big|s - \frac{j+1}{n}\Big|^\gamma\Big).$$
Hence, by restricting $j = i, i+1, \dots, i+N-1$, we obtain
$$\big|B_{\frac jn}(\omega) - B_{\frac{j+1}{n}}(\omega)\big| \le C\Big(\frac Nn\Big)^\gamma = \frac{M}{n^\gamma}.$$
Define
$$A^i_{M,n} = \Big\{\big|B_{\frac jn}(\omega) - B_{\frac{j+1}{n}}(\omega)\big| \le \frac{M}{n^\gamma},\ j = i, i+1, \dots, i+N-1\Big\}.$$
We have seen that the set of trajectories where $t \mapsto B_t(\omega)$ is $\gamma$-Hölder continuous at $s$ is included in
$$\cup_{M=1}^\infty \cup_{k=1}^\infty \cap_{n=k}^\infty \cup_{i=1}^n A^i_{M,n}.$$
Hence, by taking $N$ such that $N\big(\gamma - \frac12\big) > 1$,
$$P\big(\cap_{n=k}^\infty \cup_{i=1}^n A^i_{M,n}\big) \le \liminf_{n\to\infty}\ n\,\big[C\, n^{\frac12 - \gamma}\big]^N = 0.$$

$$\lim_{n\to\infty} |\Pi_n| = 0,$$
Proof: For the sake of simplicity, we shall omit the dependence on $n$. Set $\Delta_k t = t_k - t_{k-1}$. Notice that the random variables $(\Delta_k B)^2 - \Delta_k t$, $k = 1, \dots, n$, are independent and centered. Thus,
$$E\Big(\sum_{k=1}^{r_n} (\Delta_k B)^2 - T\Big)^2 = E\Big(\sum_{k=1}^{r_n} \big[(\Delta_k B)^2 - \Delta_k t\big]\Big)^2 = \sum_{k=1}^{r_n} E\big[(\Delta_k B)^2 - \Delta_k t\big]^2$$
$$= \sum_{k=1}^{r_n} \big[3(\Delta_k t)^2 - 2(\Delta_k t)^2 + (\Delta_k t)^2\big] = 2\sum_{k=1}^{r_n} (\Delta_k t)^2 \le 2T|\Pi_n|,$$

$$\sum_{k=1}^{r_n} (\Delta_k B)^2 \le \Big(\sup_k |\Delta_k B|\Big) \sum_{k=1}^{r_n} |\Delta_k B| \le V \sup_k |\Delta_k B|.$$
We obtain $\lim_{n\to\infty} \sum_{k=1}^{r_n} (\Delta_k B)^2 = 0$ a.s., which contradicts the result proved in Proposition 2.3.
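The concentration of the quadratic variation around $T$ is easy to observe numerically. A Monte Carlo sketch (seed and partition size are ours): along a uniform partition of $[0,T]$, the sum of squared Brownian increments has mean $T$ and variance $2\sum_k(\Delta_k t)^2 = 2T^2/n$, so for large $n$ it is very close to $T$ pathwise.

```python
import numpy as np

rng = np.random.default_rng(42)
T, n = 1.0, 100_000
dB = rng.standard_normal(n) * np.sqrt(T / n)   # increments Delta_k B
qv = np.sum(dB ** 2)                            # sum of (Delta_k B)^2

# E[qv] = T and Var(qv) = 2 T^2 / n ~ 2e-5, so qv is within a few 1e-3 of T.
assert abs(qv - T) < 0.05
```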
2. For any 0 ≤ s ≤ t, Fs ⊂ Ft .
If in addition
$$\cap_{s>t} \mathcal{F}_s = \mathcal{F}_t,$$
for any $t \ge 0$, the filtration is said to be right-continuous.
Ft = σ(Bs , 0 ≤ s ≤ t), t ≥ 0.
To ensure that the above property (1) for a filtration holds, one needs to complete the $\sigma$-field. In general, there is no reason to expect right-continuity. However, for the Brownian motion, the natural filtration possesses this property.
A stochastic process with independent increments possesses the martingale
property with respect to the natural filtration. Indeed, for 0 ≤ s ≤ t,
1. $\{B_t^2 - t,\ t \ge 0\}$,
2. $\Big\{\exp\Big(aB_t - \frac{a^2 t}{2}\Big),\ t \ge 0\Big\}$.
Since $B_t - B_s$ is independent of $\mathcal{F}_s$, owing to the properties of the conditional expectation, we have
$$E\big((B_t - B_s)^2 / \mathcal{F}_s\big) = E(B_t - B_s)^2 = t - s,$$
$$E\big((B_t - B_s)B_s / \mathcal{F}_s\big) = B_s\, E\big(B_t - B_s / \mathcal{F}_s\big) = 0,$$
$$E\big(B_s^2 / \mathcal{F}_s\big) = B_s^2.$$
Consequently,
$$E\big(B_t^2 - B_s^2 / \mathcal{F}_s\big) = t - s.$$
For the second example, we also use the property of independent increments, as follows:
$$E\Big(\exp\Big(aB_t - \frac{a^2 t}{2}\Big) \Big/ \mathcal{F}_s\Big) = \exp(aB_s)\, E\Big(\exp\Big(a(B_t - B_s) - \frac{a^2 t}{2}\Big) \Big/ \mathcal{F}_s\Big) = \exp(aB_s)\, E\Big(\exp\Big(a(B_t - B_s) - \frac{a^2 t}{2}\Big)\Big).$$
Using the density of the random variable $B_t - B_s$, one can easily check that
$$E\Big(\exp\Big(a(B_t - B_s) - \frac{a^2 t}{2}\Big)\Big) = \exp\Big(\frac{a^2(t-s)}{2} - \frac{a^2 t}{2}\Big).$$
Therefore, we obtain
$$E\Big(\exp\Big(aB_t - \frac{a^2 t}{2}\Big) \Big/ \mathcal{F}_s\Big) = \exp\Big(aB_s - \frac{a^2 s}{2}\Big).$$
$$p(s, t, x, A) = \frac{1}{(2\pi(t-s))^{\frac12}} \int_A \exp\Big(-\frac{|x-y|^2}{2(t-s)}\Big)\, dy, \tag{2.6}$$
which means that, conditionally on the past of the Brownian motion until time $s$, the law of $B_t$ at a future time $t$ only depends on $B_s$.
Let $f : \mathbb{R} \to \mathbb{R}$ be a bounded measurable function. Then, since $B_s$ is $\mathcal{F}_s$-measurable and $B_t - B_s$ is independent of $\mathcal{F}_s$, we obtain
and consequently,
$$E\big(f(B_t) / \mathcal{F}_s\big) = \int_{\mathbb{R}} f(y)\, p(s, t, B_s, dy).$$
We recall that the sum of two independent Normal random variables is again Normal, with mean the sum of the respective means, and variance the sum of the respective variances. This is expressed in mathematical terms by the fact that
$$\big(f_{N(x,\sigma_1)} * f_{N(y,\sigma_2)}\big)(z) = \int_{\mathbb{R}} f_{N(x,\sigma_1)}(u)\, f_{N(y,\sigma_2)}(z - u)\, du = f_{N(x+y,\,\sigma_1+\sigma_2)}(z),$$
proving (2.8).
This equation is the time-continuous analogue of the property enjoyed by the transition probability matrices of a Markov chain. That is,
meaning that evolutions in $m+n$ steps are done by concatenating $m$-step and $n$-step evolutions. In (2.8), $m+n$ is replaced by the real time $t-s$, $m$ by $t-u$, and $n$ by $u-s$, respectively.
We are now prepared to give the definition of a Markov process.
Consider a mapping
$$p : \mathbb{R}_+ \times \mathbb{R}_+ \times \mathbb{R} \times \mathcal{B}(\mathbb{R}) \to \mathbb{R}_+,$$
such that
$$x \mapsto p(s, t, x, A)$$
is $\mathcal{B}(\mathbb{R})$-measurable, and
$$A \mapsto p(s, t, x, A)$$
is a probability,
Theorem 2.3 Let $T$ be a stopping time. Then, conditionally on $\{T < \infty\}$, the process defined by
$$B_t^T = B_{T+t} - B_T, \quad t \ge 0,$$
is a Brownian motion independent of $\mathcal{F}_T$.
Proof: Assume that $T < \infty$ a.s. We shall prove that for any $A \in \mathcal{F}_T$, any choice of parameters $0 \le t_1 < \dots < t_p$ and any continuous and bounded function $f$ on $\mathbb{R}^p$, we have
$$E\big[\mathbf{1}_A\, f\big(B^T_{t_1}, \dots, B^T_{t_p}\big)\big] = P(A)\, E\big[f\big(B_{t_1}, \dots, B_{t_p}\big)\big]. \tag{2.9}$$
This suffices to prove all the assertions of the theorem. Indeed, by taking $A = \Omega$, we see that the finite-dimensional distributions of $B$ and $B^T$ coincide. On the other hand, (2.9) states the independence of $\mathbf{1}_A$ and the random vector $(B^T_{t_1}, \dots, B^T_{t_p})$. By a monotone class argument, we get the independence of $\mathbf{1}_A$ and $B^T$.
The continuity of the sample paths of $B$ implies, a.s.,
$$f\big(B^T_{t_1}, \dots, B^T_{t_p}\big) = f\big(B_{T+t_1} - B_T, \dots, B_{T+t_p} - B_T\big)$$
$$= \lim_{n\to\infty} \sum_{k=1}^\infty \mathbf{1}_{\{(k-1)2^{-n} < T \le k2^{-n}\}}\, f\big(B_{k2^{-n}+t_1} - B_{k2^{-n}}, \dots, B_{k2^{-n}+t_p} - B_{k2^{-n}}\big).$$
Proposition 2.4 (Reflection principle) For any $t > 0$, set $S_t = \sup_{s \le t} B_s$. Then, for any $a \ge 0$ and $b \le a$,
$$T_a = \inf\{t \ge 0,\ B_t = a\},$$
3 Itô’s calculus
Itô’s calculus was developed in the 1950s by Kiyosi Itô in an attempt to give rigorous meaning to some differential equations driven by the Brownian motion appearing in the study of some problems related with continuous-time Markov processes. Roughly speaking, one could say that Itô’s calculus is an analogue of the classical Newton and Leibniz calculus for stochastic processes. In fact, in classical mathematical analysis, there are several extensions of the Riemann integral $\int f(x)\, dx$. For example, if $g$ is an increasing bounded function (or the difference of two such functions), the Lebesgue–Stieltjes integral gives a precise meaning to the integral $\int f(x)\, g(dx)$, for some set of functions $f$. However, before Itô’s development, no theory allowing nowhere differentiable integrators $g$ was known. Brownian motion, introduced in the preceding chapter, is an example of a stochastic process whose sample paths, although continuous, are nowhere differentiable. Therefore, the Lebesgue–Stieltjes integral does not apply to the sample paths of Brownian motion.
There are many motivations coming from a variety of disciplines to consider stochastic differential equations driven by a Brownian motion. Such an object is defined as
or in integral form,
$$X_t = x_0 + \int_0^t \sigma(s, X_s)\, dB_s + \int_0^t b(s, X_s)\, ds. \tag{3.1}$$
Notice that these two properties are satisfied if $(\mathcal{F}_t,\ t \ge 0)$ is the natural filtration associated to $B$.
We fix a finite time horizon $T$ and define $L^2_{a,T}$ as the set of stochastic processes $u = \{u_t,\ t \in [0,T]\}$ satisfying the following conditions:
(i) $u$ is adapted and jointly measurable in $(t, \omega)$, with respect to the product $\sigma$-field $\mathcal{B}([0,T]) \times \mathcal{F}$.
(ii) $\int_0^T E(u_t^2)\, dt < \infty$.
The notation $L^2_{a,T}$ evokes the two properties, adaptedness and square integrability, described before.
Consider first the subset of $L^2_{a,T}$ consisting of step processes, that is, stochastic processes which can be written as
$$u_t = \sum_{j=1}^n u_j\, \mathbf{1}_{[t_{j-1}, t_j)}(t), \tag{3.2}$$
that we may compare with the Lebesgue integral of simple functions. Notice that $\int_0^T u_t\, dB_t$ is a random variable. Of course, we would like to be able to consider more general integrands than step processes. Therefore, we must try to extend the definition (3.3). For this, we have to use tools from Functional Analysis based upon a very natural idea: if we are able to prove that (3.3) gives a continuous functional between two metric spaces, then the stochastic integral defined for the very particular class of step stochastic processes could be extended to a more general class, given by the closure of this set with respect to a suitable norm.
The idea of continuity is made precise by the
Isometry property:
$$E\Big(\Big(\int_0^T u_t\, dB_t\Big)^2\Big) = E\Big(\int_0^T u_t^2\, dt\Big). \tag{3.4}$$
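For a deterministic step process the isometry (3.4) can be observed by simulation; the partition, step heights and seed below are ours. The integral is $\sum_j u_j\,\Delta_j B$, and its second moment should match $\sum_j u_j^2\,\Delta_j t = E\int_0^T u_t^2\,dt$.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.array([0.0, 0.2, 0.5, 1.0])      # partition of [0, T]
u = np.array([1.0, -2.0, 0.5])           # step heights u_j on [t_{j-1}, t_j)
dt = np.diff(t)

n_paths = 200_000
dB = rng.standard_normal((n_paths, len(dt))) * np.sqrt(dt)   # Delta_j B per path
integral = dB @ u                        # int_0^T u_t dB_t, one value per path

lhs = np.mean(integral ** 2)             # Monte Carlo second moment
rhs = np.sum(u ** 2 * dt)                # exact value 1.525
assert abs(lhs - rhs) < 0.05 * rhs
```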
Let us prove (3.4) for step processes. Clearly,
$$E\Big(\int_0^T u_t\, dB_t\Big)^2 = \sum_{j=1}^n E\big(u_j^2\, (\Delta_j B)^2\big) + 2\sum_{j<k} E\big(u_j u_k\, (\Delta_j B)(\Delta_k B)\big).$$
For the second term, we notice that for fixed $j$ and $k$, $j < k$, the random variables $u_j u_k \Delta_j B$ are independent of $\Delta_k B$. Therefore,
The next step consists of identifying a bigger set of random processes than $\mathcal{E}$ in which $\mathcal{E}$ is dense in the $L^2(\Omega \times [0,T])$ norm. This is actually the set denoted before by $L^2_{a,T}$. Indeed, we have the following result, which is a crucial fact in Itô’s theory.
Sketch of the proof: Assume that $u$ has continuous sample paths, a.s. An approximation sequence can be defined as follows:
$$u^n(t) = \sum_{k=0}^{[nT]} u\Big(\frac kn\Big)\, \mathbf{1}_{[\frac kn, \frac{k+1}{n})}(t).$$
Then
$$E\Big(\int_0^T |u^n(t) - u(t)|^2\, dt\Big) = \sum_{k=0}^{[nT]} E\Big(\int_{\frac kn}^{\frac{k+1}{n} \wedge T} \Big|u\Big(\frac kn\Big) - u(t)\Big|^2\, dt\Big) \le T \sup_k \sup_{t\in\Delta_k} E|u^n(t) - u(t)|^2,$$
In order for this definition to make sense, one needs to make sure that if the process $u$ is approximated by two different sequences, say $u^{n,1}$ and $u^{n,2}$, the definitions of the stochastic integral using either $u^{n,1}$ or $u^{n,2}$ coincide. This is proved using the isometry property. Indeed,
$$E\Big(\int_0^T u^{n,1}_t\, dB_t - \int_0^T u^{n,2}_t\, dB_t\Big)^2 = E\Big(\int_0^T \big(u^{n,1}_t - u^{n,2}_t\big)^2\, dt\Big)$$
$$\le 2\int_0^T E\big(u^{n,1}_t - u_t\big)^2\, dt + 2\int_0^T E\big(u^{n,2}_t - u_t\big)^2\, dt \to 0,$$
By its very definition, the stochastic integral defined in Definition 3.1 satisfies
the isometry property as well. Moreover,
Remember that these facts are true for processes in E, as has been mentioned
before. The extension to processes in L2a,T is done by applying Proposition
3.1. For the sake of illustration we prove (a).
Consider an approximating sequence $u^n$ in the sense of Proposition 3.1. By the construction of the stochastic integral $\int_0^T u_t\, dB_t$, it holds that
$$\lim_{n\to\infty} E\Big(\int_0^T u^n_t\, dB_t\Big) = E\Big(\int_0^T u_t\, dB_t\Big).$$
Since $E\big(\int_0^T u^n_t\, dB_t\big) = 0$ for every $n \ge 1$, this concludes the proof.
We end this section with an interesting example.
Example 3.1 For the Brownian motion $B$, the following formula holds:
$$\int_0^T B_t\, dB_t = \frac12\big(B_T^2 - T\big).$$
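The formula of Example 3.1 can be observed pathwise by simulation (seed and partition are ours): the left-point Riemann sums $\sum_j B_{t_j}(B_{t_{j+1}} - B_{t_j})$, which is how the Itô integral of a step approximation is computed, approach $\frac12(B_T^2 - T)$ as the mesh shrinks.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n = 1.0, 100_000
dB = rng.standard_normal(n) * np.sqrt(T / n)
B = np.concatenate([[0.0], np.cumsum(dB)])      # B_{t_0}, ..., B_{t_n}

riemann = np.sum(B[:-1] * dB)                    # left endpoints, Ito-style
ito_value = 0.5 * (B[-1] ** 2 - T)
# The difference is -(1/2)(sum (dB)^2 - T), which concentrates near zero.
assert abs(riemann - ito_value) < 0.05
```

Taking right endpoints instead would shift the limit by the quadratic variation $T$, which is why the choice of evaluation point matters for stochastic integrals.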
Let us remark that we would rather expect $\int_0^T B_t\, dB_t = \frac12 B_T^2$, by analogy
t ∈ [0, T ].
For this definition to make sense, we need that for any t ∈ [0, T ], the process
{us 1[0,t] (s), s ∈ [0, T ]} belongs to L2a,T . This is clearly true.
Obviously, properties of the integral mentioned in the previous section, like
zero mean, isometry, linearity, also hold for the indefinite integral.
The rest of the section is devoted to the study of important properties of the
stochastic process given by an indefinite Itô integral.
Proposition 3.2 The process $\{I_t = \int_0^t u_s\, dB_s,\ t \in [0,T]\}$ is a martingale.
Proof: We first establish the martingale property for any approximating sequence
$$I^n_t = \int_0^t u^n_s\, dB_s, \quad t \in [0,T],$$
where $u^n$ converges to $u$ in $L^2(\Omega \times [0,T])$. This suffices to prove the Proposition, since $L^2(\Omega)$-limits of martingales are again martingales.
Let $u^n_t$, $t \in [0,T]$, be defined by the right-hand side of (3.2). Fix $0 \le s \le t \le T$ and assume that $t_{k-1} < s \le t_k < t_l < t \le t_{l+1}$. Then
$$I^n_t - I^n_s = u_k(B_{t_k} - B_s) + \sum_{j=k+1}^{l} u_j\big(B_{t_j} - B_{t_{j-1}}\big) + u_l\big(B_t - B_{t_l}\big).$$
Using properties (g) and (f), respectively, of the conditional expectation (see Appendix 2) yields
$$E\big(I^n_t - I^n_s / \mathcal{F}_s\big) = E\big(u_k(B_{t_k} - B_s)/\mathcal{F}_s\big) + \sum_{j=k+1}^{l} E\big(E\big(u_j \Delta_j B / \mathcal{F}_{t_{j-1}}\big)/\mathcal{F}_s\big) + E\big(u_l\, E\big(B_t - B_{t_l} / \mathcal{F}_{t_{l-1}}\big)/\mathcal{F}_s\big) = 0.$$
$$L^1(\Omega)-\lim_{n\to\infty}\ \sum_{j=1}^n \Big(\int_{t_{j-1}}^{t_j} u_s\, dB_s\Big)^2 = \int_0^t u_s^2\, ds.$$
A combination of Burkholder’s inequality and Kolmogorov’s continuity criterion allows one to deduce the continuity of the sample paths of the indefinite stochastic integral. Indeed, assume that $\int_0^T E(|u_r|^p)\, dr < \infty$, for any $p \in [2, \infty[$. Using first (3.8) and then Hölder’s inequality (be smart!) implies
$$E\Big|\int_s^t u_r\, dB_r\Big|^p \le C(p)\, E\Big(\int_s^t u_r^2\, dr\Big)^{\frac p2} \le C(p)\, |t-s|^{\frac p2 - 1} \int_s^t E(|u_r|^p)\, dr \le C(p)\, |t-s|^{\frac p2 - 1}.$$
Since $p \ge 2$ is arbitrary, with Theorem 1.1 we have that the sample paths of $\int_0^t u_s\, dB_s$, $t \in [0,T]$, are $\gamma$-Hölder continuous with $\gamma \in\, ]0, \frac12[$.
Clearly $L^2_{a,T} \subset \Lambda^2_{a,T}$. Our aim is to define the stochastic integral for processes in $\Lambda^2_{a,T}$. For this we shall follow the same approach as in Section 3.1. Firstly, we start with step processes $(u^n,\ n \ge 1)$ of the form (3.2) belonging to $\Lambda^2_{a,T}$ and define the integral as in (3.3). The extension to processes in $\Lambda^2_{a,T}$ needs two ingredients. The first one is an approximation result that we now state without giving a proof. The reader may consult, for instance, [1].
a.s.
Proposition 3.5 Let $u$ be a step process in $\Lambda^2_{a,T}$. Then for any $\epsilon > 0$, $N > 0$,
$$P\Big\{\Big|\int_0^T u_t\, dB_t\Big| > \epsilon\Big\} \le P\Big\{\int_0^T u_t^2\, dt > N\Big\} + \frac{N}{\epsilon^2}. \tag{3.10}$$
we obtain
$$P\Big\{\Big|\int_0^T u_t\, dB_t\Big| > \epsilon\Big\} \le P\Big\{\Big|\int_0^T v^N_t\, dB_t\Big| > \epsilon\Big\} + P\Big\{\int_0^T u_t^2\, dt > N\Big\}.$$
in the convergence in probability.
By Proposition 3.5, for any $\epsilon > 0$, $N > 0$ we have
$$P\Big\{\Big|\int_0^T (u^n_t - u^m_t)\, dB_t\Big| > \epsilon\Big\} \le P\Big\{\int_0^T (u^n_t - u^m_t)^2\, dt > N\Big\} + \frac{N}{\epsilon^2}.$$
Using (3.11), for any $N > 0$ and $n, m$ big enough,
$$P\Big\{\int_0^T (u^n_t - u^m_t)^2\, dt > N\Big\} \le \frac{\epsilon}{2}.$$
where we have used that B0 = 0.
Consider a sequence of partitions of $[0,t]$ whose mesh tends to zero. We already know that
$$\sum_{j=0}^{n-1} \big(B_{t_{j+1}} - B_{t_j}\big)^2 \to t,$$
in the convergence of $L^2(\Omega)$. This gives the extra contribution in the development of $B_t^2$ in comparison with the classical calculus approach.
Notice that, if $B$ were of bounded variation, then we could argue as follows:
$$\sum_{j=0}^{n-1} \big(B_{t_{j+1}} - B_{t_j}\big)^2 \le \Big(\sup_{0\le j\le n-1} |B_{t_{j+1}} - B_{t_j}|\Big) \sum_{j=0}^{n-1} |B_{t_{j+1}} - B_{t_j}|.$$
By the continuity of the sample paths of the Brownian motion, the first factor on the right-hand side of the preceding inequality tends to zero as the mesh of the partition tends to zero, while the second factor remains finite, by the property of bounded variation.
Summarising: differential calculus with respect to the Brownian motion should take into account second-order differential terms. Roughly speaking,
$$(dB_t)^2 = dt.$$
An alternative writing of (3.16) in differential form is
with $\bar t \in\, ]t_j, t_{j+1}[$ and $\bar X_j$ an intermediate (random) point on the segment determined by $X_{t_j}$ and $X_{t_{j+1}}$.
In fact, this follows from a Taylor expansion of the function $f$ up to the first order in the variable $s$, and up to the second order in the variable $x$. The asymmetry in the orders is due to the existence of the quadratic variation of the processes involved. The expression (3.18) is the analogue of (3.15).
The former is much simpler for two reasons. Firstly, there is no $s$-variable; secondly, $f$ is a polynomial of second degree, and therefore it has an exact Taylor expansion. But both formulas have the same structure.
When passing to the limit as $n \to \infty$, we obtain
$$\sum_{j=0}^{n-1} \partial_s f(t_j, X_{t_j})(t_{j+1} - t_j) \to \int_0^t \partial_s f(s, X_s)\, ds,$$
$$\sum_{j=0}^{n-1} \partial_x f(t_j, X_{t_j})\big(X_{t_{j+1}} - X_{t_j}\big) \to \int_0^t \partial_x f(s, X_s)\, u_s\, dB_s + \int_0^t \partial_x f(s, X_s)\, v_s\, ds,$$
$$\sum_{j=0}^{n-1} \partial^2_{xx} f(t_j, \bar X_j)\big(X_{t_{j+1}} - X_{t_j}\big)^2 \to \int_0^t \partial^2_{xx} f(s, X_s)\, u_s^2\, ds,$$
with $\mu, \sigma \in \mathbb{R}$.
Applying formula (3.17) to $X_t := B_t$, a Brownian motion, yields
$$f(t, B_t) = 1 + \mu \int_0^t f(s, B_s)\, ds + \sigma \int_0^t f(s, B_s)\, dB_s.$$
Hence, the process $\{Y_t = f(t, B_t),\ t \ge 0\}$ satisfies the equation
$$Y_t = 1 + \mu \int_0^t Y_s\, ds + \sigma \int_0^t Y_s\, dB_s.$$
The equivalent differential form of this identity is the linear stochastic differ-
ential equation
Black and Scholes proposed, as a model of a market with a single risky asset with initial value $S_0 = 1$, the process $S_t = Y_t$. We have seen that such a process is in fact the solution to a linear stochastic differential equation (see (3.21)).
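Under the usual reading of this construction, the function is $f(t,x) = \exp(\sigma x + (\mu - \sigma^2/2)t)$; the following symbolic sketch verifies the two identities that make the Itô formula produce the linear equation, namely $\partial_t f + \frac12 \partial_{xx} f = \mu f$ and $\partial_x f = \sigma f$.

```python
import sympy as sp

t, x, mu, sigma = sp.symbols("t x mu sigma", real=True)

# Candidate solution of dY = mu Y dt + sigma Y dB (geometric Brownian motion).
f = sp.exp(sigma * x + (mu - sigma**2 / 2) * t)

# Ito drift: d_t f + (1/2) d_xx f must equal mu f; diffusion: d_x f = sigma f.
drift = sp.diff(f, t) + sp.Rational(1, 2) * sp.diff(f, x, 2)
assert sp.simplify(drift - mu * f) == 0
assert sp.simplify(sp.diff(f, x) - sigma * f) == 0
```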
where, in order to compute $dX_s^k\, dX_s^l$, we have to apply the following rules
We remark that the identity (3.24) is a consequence of the independence of the components of the Brownian motion.
and then Taylor’s formula for each term in the last sum. We obtain
a.s. Indeed,
$$\Big|\sum_{i=1}^{p_n-1} \partial_s f\big(\bar t^n_i, X_{t^n_i}\big)\big(t^n_{i+1} - t^n_i\big) - \int_0^t \partial_s f(s, X_s)\, ds\Big|$$
$$= \Big|\sum_{i=1}^{p_n-1} \int_{t^n_i}^{t^n_{i+1}} \big[\partial_s f\big(\bar t^n_i, X_{t^n_i}\big) - \partial_s f(s, X_s)\big]\, ds\Big|$$
$$\le \sum_{i=1}^{p_n-1} \int_{t^n_i}^{t^n_{i+1}} \big|\partial_s f\big(\bar t^n_i, X_{t^n_i}\big) - \partial_s f(s, X_s)\big|\, ds$$
$$\le t \sup_{1\le i\le p_n-1} \Big|\big[\partial_s f\big(\bar t^n_i, X_{t^n_i}\big) - \partial_s f(s, X_s)\big]\, \mathbf{1}_{]t^n_i, t^n_{i+1}]}(s)\Big|.$$
$$\lim_{n\to\infty}\ \sup_{1\le i\le p_n-1} \Big|\big[\partial_s f\big(\bar t^n_i, X_{t^n_i}\big) - \partial_s f(s, X_s)\big]\, \mathbf{1}_{]t^n_i, t^n_{i+1}]}(s)\Big| = 0.$$
Second term
Fix $k = 1, \dots, p$. We next prove
$$\sum_{i=1}^{p_n-1} \partial_{x_k} f\big(t^n_i, X^1_{t^n_i}, \dots, X^p_{t^n_i}\big)\big(X^k_{t^n_{i+1}} - X^k_{t^n_i}\big) \to \int_0^t \partial_{x_k} f(s, X_s)\, dX_s^k, \tag{3.27}$$
$$\sum_{i=1}^{p_n-1} \partial_{x_k} f\big(t^n_i, X^1_{t^n_i}, \dots, X^p_{t^n_i}\big) \int_{t^n_i}^{t^n_{i+1}} u^{k,l}_t\, dB^l_t \to \int_0^t \partial_{x_k} f(s, X_s)\, u^{k,l}_s\, dB^l_s, \tag{3.28}$$
$$\sum_{i=1}^{p_n-1} \partial_{x_k} f\big(t^n_i, X^1_{t^n_i}, \dots, X^p_{t^n_i}\big) \int_{t^n_i}^{t^n_{i+1}} v^k_t\, dt \to \int_0^t \partial_{x_k} f(s, X_s)\, v^k_s\, ds, \tag{3.29}$$
for any $l = 1, \dots, m$.
We start with (3.28). Assume that $\partial_{x_k} f$ is bounded and that the processes $u^{k,l}$ are in $L^2_{a,T}$. Then
$$E\Big|\sum_{i=1}^{p_n-1} \partial_{x_k} f\big(t^n_i, X^1_{t^n_i}, \dots, X^p_{t^n_i}\big) \int_{t^n_i}^{t^n_{i+1}} u^{k,l}_t\, dB^l_t - \int_0^t \partial_{x_k} f(s, X_s)\, u^{k,l}_s\, dB^l_s\Big|^2$$
$$= E\Big|\sum_{i=1}^{p_n-1} \int_{t^n_i}^{t^n_{i+1}} \big[\partial_{x_k} f\big(t^n_i, X^1_{t^n_i}, \dots, X^p_{t^n_i}\big) - \partial_{x_k} f(s, X_s)\big]\, u^{k,l}_s\, dB^l_s\Big|^2$$
$$= \sum_{i=1}^{p_n-1} E \int_{t^n_i}^{t^n_{i+1}} \big[\partial_{x_k} f\big(t^n_i, X^1_{t^n_i}, \dots, X^p_{t^n_i}\big) - \partial_{x_k} f(s, X_s)\big]^2 \big(u^{k,l}_s\big)^2\, ds,$$
Third term
Given the notation (3.24), it suffices to prove that
$$\sum_{i=0}^{p_n-1} f^{k,l}_{n,i}\big(X^k_{t^n_{i+1}} - X^k_{t^n_i}\big)\big(X^l_{t^n_{i+1}} - X^l_{t^n_i}\big) \to \sum_{j=1}^m \int_0^t \partial^2_{x_k, x_l} f(s, X_s)\, u^{k,j}_s u^{l,j}_s\, ds. \tag{3.31}$$
$$\sum_{i=0}^{p_n-1} f^{k,l}_{n,i} \Big(\int_{t^n_i}^{t^n_{i+1}} v^k_s\, ds\Big)\Big(\int_{t^n_i}^{t^n_{i+1}} v^l_s\, ds\Big) \to 0. \tag{3.34}$$
Let us start by arguing on (3.32). We assume that $\partial^2_{x_k, x_l} f$ is bounded and that $u^{l,j} \in L^2_{a,T}$ for any $l = 1, \dots, p$, $j = 1, \dots, m$. Suppose that $j \ne j'$. Then, the convergence to zero follows from the fact that the stochastic integrals
$$\int_{t^n_i}^{t^n_{i+1}} u^{k,j}_s\, dB^j_s, \qquad \int_{t^n_i}^{t^n_{i+1}} u^{l,j'}_s\, dB^{j'}_s,$$
are independent.
Assume $j = j'$. Then,
$$E\Big|\sum_{i=0}^{p_n-1} f^{k,l}_{n,i} \Big(\int_{t^n_i}^{t^n_{i+1}} u^{k,j}_s\, dB^j_s\Big)\Big(\int_{t^n_i}^{t^n_{i+1}} u^{l,j}_s\, dB^j_s\Big) - \int_0^t \partial^2_{x_k, x_l} f(s, X_s)\, u^{k,j}_s u^{l,j}_s\, ds\Big| \le T_1 + T_2,$$
with
$$T_1 = E\Big|\sum_{i=0}^{p_n-1} f^{k,l}_{n,i} \Big[\Big(\int_{t^n_i}^{t^n_{i+1}} u^{k,j}_s\, dB^j_s\Big)\Big(\int_{t^n_i}^{t^n_{i+1}} u^{l,j}_s\, dB^j_s\Big) - \int_{t^n_i}^{t^n_{i+1}} u^{k,j}_s u^{l,j}_s\, ds\Big]\Big|,$$
$$T_2 = E\Big|\sum_{i=0}^{p_n-1} f^{k,l}_{n,i} \int_{t^n_i}^{t^n_{i+1}} u^{k,j}_s u^{l,j}_s\, ds - \int_0^t \partial^2_{x_k, x_l} f(s, X_s)\, u^{k,j}_s u^{l,j}_s\, ds\Big|.$$
Since $\partial^2_{x_k, x_l} f$ is bounded,
$$T_1 \le C\, E \sum_{i=0}^{p_n-1} \Big|\Big(\int_{t^n_i}^{t^n_{i+1}} u^{k,j}_s\, dB^j_s\Big)\Big(\int_{t^n_i}^{t^n_{i+1}} u^{l,j}_s\, dB^j_s\Big) - \int_{t^n_i}^{t^n_{i+1}} u^{k,j}_s u^{l,j}_s\, ds\Big|.$$
By continuity,
$$\lim_{n\to\infty}\ \sup_i \Big|\big(f^{k,l}_{n,i} - \partial^2_{x_k, x_l} f(s, X_s)\big)\, \mathbf{1}_{]t^n_i, t^n_{i+1}]}(s)\Big| = 0.$$
The first factor tends to zero as n → ∞, while the second one is bounded
a.s.
The proof of the theorem is now complete.
4 Applications of the Itô formula
This chapter is devoted to giving some important results that use the Itô formula in some parts of their proofs.
Then, for any $p > 0$, there exist two positive constants $c_p$, $C_p$ such that
$$c_p\, E\Big(\int_0^T u_s^2\, ds\Big)^{\frac p2} \le E\big((M^*_T)^p\big) \le C_p\, E\Big(\int_0^T u_s^2\, ds\Big)^{\frac p2}. \tag{4.1}$$
Proof: We will only prove here the right-hand side of (4.1), for $p \ge 2$. Consider the function
$$f(x) = |x|^p,$$
for which we have that
We next apply Hölder’s inequality with exponents $\frac{p}{p-2}$ and $q = \frac p2$ and get
$$E \int_0^t |M_s|^{p-2} u_s^2\, ds \le E\Big((M^*_t)^{p-2} \int_0^t u_s^2\, ds\Big) \le \big[E\big((M^*_t)^p\big)\big]^{\frac{p-2}{p}} \Big[E\Big(\int_0^t u_s^2\, ds\Big)^{\frac p2}\Big]^{\frac 2p}.$$
Doob’s inequality implies
$$E\big((M^*_t)^p\big) \le \Big(\frac{p}{p-1}\Big)^p E\big(|M_t|^p\big).$$
Hence,
$$E\big((M^*_t)^p\big) \le \Big(\frac{p}{p-1}\Big)^p \Big(\frac{p(p-1)}{2}\Big)^{\frac p2} E\Big(\int_0^t u_s^2\, ds\Big)^{\frac p2}.$$
Proof: We start with the proof of (4.2). Let $\mathcal{H}$ be the vector space consisting of random variables $Z \in L^2(\Omega, \mathcal{F}_\infty)$ such that (4.2) holds. Firstly, we argue the uniqueness of $h$. This is an easy consequence of the isometry of the stochastic integral. Indeed, if there were two processes $h$ and $h'$ satisfying (4.2), then
$$E\Big(\int_0^\infty (h_s - h'_s)^2\, ds\Big) = E\Big(\int_0^\infty (h_s - h'_s)\, dB_s\Big)^2 = 0.$$
This yields $h = h'$ in $L^2([0,\infty) \times \Omega)$.
We now turn to the existence of $h$. Any $Z \in \mathcal{H}$ satisfies
$$E(Z^2) = (E(Z))^2 + E\Big(\int_0^\infty h_s^2\, ds\Big).$$
and
$$\mathcal{E}^f_t = \exp\Big(i \int_0^t f_s\, dB_s + \frac12 \int_0^t f_s^2\, ds\Big).$$
By the Itô formula,
$$\mathcal{E}^f_t = 1 + i \int_0^t \mathcal{E}^f_s\, f(s)\, dB_s.$$
Hence $\mathrm{Re}\, \exp\big(i\lambda_j(B_{t_j} - B_{t_{j-1}})\big) \in \mathcal{H}$. Since by Lemma 4.1 these random variables are dense in $L^2(\Omega, \mathcal{F}_\infty)$, we conclude $\mathcal{H} = L^2(\Omega, \mathcal{F}_\infty)$.
The representation (4.3) is a consequence of the following fact: the martingale $M$ converges in $L^2(\Omega)$ as $t \to \infty$ to a random variable $M_\infty \in L^2(\Omega)$, and $M_t = E(M_\infty | \mathcal{F}_t)$.
Then, by taking conditional expectations in both terms of (4.2), we obtain
$$M_t = E(M_\infty | \mathcal{F}_t) = E(M_\infty) + \int_0^t h_s\, dB_s.$$
Example 4.1 Consider $Z = B_T^3$. In order to find the corresponding process $h$ in the integral representation, we apply first Itô’s formula, yielding
$$B_T^3 = \int_0^T 3B_t^2\, dB_t + 3\int_0^T B_t\, dt.$$
Thus,
$$B_T^3 = 3\int_0^T \big[B_t^2 + T - t\big]\, dB_t.$$
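A consistency sketch for Example 4.1 (the moment computation is ours, not in the notes): by the isometry, the second moment of $3\int_0^T (B_t^2 + T - t)\,dB_t$ is $\int_0^T 9\,E[(B_t^2 + T - t)^2]\,dt$, which should equal $E(B_T^6) = 15T^3$. Using $E B_t^2 = t$ and $E B_t^4 = 3t^2$:

```python
import sympy as sp

t, T = sp.symbols("t T", positive=True)

# E[(B_t^2 + T - t)^2] = E B_t^4 + 2 (T - t) E B_t^2 + (T - t)^2
second_moment = 3 * t**2 + 2 * (T - t) * t + (T - t) ** 2

# Isometry gives int_0^T 9 * second_moment dt; compare with E(B_T^6) = 15 T^3.
lhs = sp.integrate(9 * second_moment, (t, 0, T))
assert sp.simplify(lhs - 15 * T**3) == 0
```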
$Q$ is indeed a probability.
Let $A \in \mathcal{F}$ be such that $Q(A) = 0$. Since $L > 0$ a.s., we should have $P(A) = 0$. Conversely, for any $A \in \mathcal{F}$ with $P(A) = 0$, we have $Q(A) = 0$ as well.
The second assertion of the lemma is the Radon–Nikodym theorem.
If we denote by $E_Q$ the expectation operator with respect to the probability $Q$ defined before, one has
$$E_Q(X) = E(XL).$$
Indeed, this formula is easily checked for simple random variables and then extended to any random variable $X \in L^1(\Omega)$ by the usual approximation argument.
Consider now a Brownian motion $\{B_t,\ t \in [0,T]\}$. Fix $\lambda \in \mathbb{R}$ and let
$$L_t = \exp\Big(-\lambda B_t - \frac{\lambda^2}{2}\, t\Big). \tag{4.5}$$
Lemma 4.3 Let $X$ be a random variable and let $\mathcal{G}$ be a sub-$\sigma$-field of $\mathcal{F}$ such that
$$E\big(e^{iuX} | \mathcal{G}\big) = e^{-\frac{u^2\sigma^2}{2}}.$$
Then the random variable $X$ is independent of the $\sigma$-field $\mathcal{G}$ and its probability law is Gaussian, zero mean and variance $\sigma^2$.
In particular, for A := Ω, we see that the characteristic function of X is that
of a N(0, σ 2 ).
Moreover, for any $A \in \mathcal{G}$,
$$E_A\big(e^{iuX}\big) = e^{-\frac{u^2\sigma^2}{2}},$$
saying that the law of $X$ conditionally on $A$ is also $N(0,\sigma^2)$. Thus,
$$P\big((X \le x)\cap A\big) = P(A)P_A(X \le x) = P(A)P(X \le x),$$
yielding the independence of $X$ and $\mathcal{G}$.
Indeed, writing
$$L_t = \exp\Big(-\lambda(B_t-B_s) - \frac{\lambda^2}{2}(t-s)\Big)\exp\Big(-\lambda B_s - \frac{\lambda^2}{2}s\Big),$$
we have
$$\begin{aligned}
E_Q\big(e^{iu(W_t-W_s)}\mathbf{1}_A\big) &= E\big(\mathbf{1}_A e^{iu(W_t-W_s)}L_t\big)\\
&= E\Big(\mathbf{1}_A e^{iu(B_t-B_s)+iu\lambda(t-s)-\lambda(B_t-B_s)-\frac{\lambda^2}{2}(t-s)}L_s\Big)\\
&= Q(A)\,e^{\frac{(iu-\lambda)^2}{2}(t-s)+iu\lambda(t-s)-\frac{\lambda^2}{2}(t-s)}\\
&= Q(A)\,e^{-\frac{u^2}{2}(t-s)}.
\end{aligned}$$
The proof is now complete.
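The change of measure just proved can be illustrated numerically (this sketch is not part of the original text): with $L_t$ as in (4.5), the shifted process $W_t = B_t + \lambda t$ should have the law $N(0,t)$ at time $t$ under $Q$. The check below reweights $P$-samples by $L_t$; names and parameters are illustrative.

```python
import numpy as np

def girsanov_mean(lam=0.7, t=1.0, n_paths=200000, seed=2):
    """Under Q(dw) = L_t(w) P(dw), with L_t as in (4.5), the shifted
    process W_t = B_t + lam * t should be N(0, t) at time t.  We check
    its first two moments under Q by reweighting P-samples with L_t."""
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, np.sqrt(t), n_paths)          # B_t under P
    L = np.exp(-lam * B - 0.5 * lam ** 2 * t)         # density L_t
    W = B + lam * t
    mean_Q = float(np.mean(W * L))                    # E_Q(W_t) = E(W_t L_t)
    var_Q = float(np.mean(W ** 2 * L) - mean_Q ** 2)
    return mean_Q, var_Q
```

Here the identity $E_Q(X) = E(XL)$ from the preceding paragraph is exactly what turns weighted $P$-averages into $Q$-expectations.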
5 Local time of Brownian motion and
Tanaka’s formula
This chapter deals with a very particular extension of Itô’s formula. More
precisely, we would like to have a decomposition of the positive submartingale
|Bt − x|, for some fixed x ∈ R as in the Itô formula. Notice that the function
f (y) = |y − x| does not belong to C 2 (R). A natural way to proceed is to
regularize the function f , for instance by convolution with an approximation
of the identity, and then, pass to the limit. Assuming that this is feasible,
the question of identifying the limit involving the second order derivative
remains open. This leads us to introduce a process, termed the local time of $B$ at $x$, first introduced by Paul Lévy.
We see that L(t, x) measures the time spent by the process B at x during a
period of time of length t. Actually, it is the density of this occupation time.
We shall see later that the above limit exists in $L^2$ (it also exists a.s.), a fact that is not at all obvious.
Local time enters naturally in the extension of the Itô formula alluded to before. In fact, we have the following result.
Theorem 5.1 For any $t \ge 0$ and $x \in \mathbb{R}$, a.s.,
$$(B_t-x)^+ = (B_0-x)^+ + \int_0^t \mathbf{1}_{[x,\infty)}(B_s)\,dB_s + \frac{1}{2}L(t,x), \qquad (5.2)$$
where $L(t,x)$ is given by (5.1) in the $L^2$ convergence.
Proof: The heuristics of formula (5.2) is the following. In the sense of distributions, $f(y) = (y-x)^+$ has first and second order derivatives $f'(y) = \mathbf{1}_{[x,\infty)}(y)$ and $f''(y) = \delta_x(y)$, respectively, where $\delta_x$ denotes the Dirac delta measure. Hence we expect a formula like
$$(B_t-x)^+ = (B_0-x)^+ + \int_0^t \mathbf{1}_{[x,\infty)}(B_s)\,dB_s + \frac{1}{2}\int_0^t \delta_x(B_s)\,ds.$$
However, we have to give a meaning to the last integral.
Approximation procedure
We are going to approximate the function $f(y) = (y-x)^+$. For this, we fix $\varepsilon > 0$ and define
$$f_{x,\varepsilon}(y) = \begin{cases} 0, & \text{if } y \le x-\varepsilon,\\[2pt] \dfrac{(y-x+\varepsilon)^2}{4\varepsilon}, & \text{if } x-\varepsilon \le y \le x+\varepsilon,\\[2pt] y-x, & \text{if } y \ge x+\varepsilon, \end{cases}$$
and
$$f''_{x,\varepsilon}(y) = \begin{cases} 0, & \text{if } y < x-\varepsilon,\\[2pt] \dfrac{1}{2\varepsilon}, & \text{if } x-\varepsilon < y < x+\varepsilon,\\[2pt] 0, & \text{if } y > x+\varepsilon. \end{cases}$$
Let $\varphi_n$, $n \ge 1$, be a sequence of $C^\infty$ functions with compact supports decreasing to $\{0\}$. For instance, we may consider the function
$$\varphi(y) = c\,\exp\big(-(1-y^2)^{-1}\big)\mathbf{1}_{\{|y|<1\}},$$
with a constant $c$ such that $\int_{\mathbb{R}}\varphi(z)\,dz = 1$, and then take
$$\varphi_n(y) = n\varphi(ny).$$
Set
$$g_n(y) = [\varphi_n * f_{x,\varepsilon}](y) = \int_{\mathbb{R}} f_{x,\varepsilon}(y-z)\varphi_n(z)\,dz.$$
Convergence of the terms in (5.3) as $n \to \infty$
The function $f'_{x,\varepsilon}$ is bounded. The function $g'_n$ is also bounded. Indeed,
$$|g'_n(y)| = \Big|\int_{\mathbb{R}} f'_{x,\varepsilon}(y-z)\varphi_n(z)\,dz\Big| = \Big|\int_{-\frac{1}{n}}^{\frac{1}{n}} f'_{x,\varepsilon}(y-z)\varphi_n(z)\,dz\Big| \le \|f'_{x,\varepsilon}\|_\infty.$$
Moreover,
$$g'_n(B_s)\mathbf{1}_{[0,t]} - f'_{x,\varepsilon}(B_s)\mathbf{1}_{[0,t]} \to 0,$$
uniformly in $t$ and in $\omega$. Hence, by bounded convergence,
$$E\int_0^t |g'_n(B_s) - f'_{x,\varepsilon}(B_s)|^2\,ds \to 0$$
as $n \to \infty$.
We next deal with the second order term. Since the law of each $B_s$ has a density, for each $s > 0$,
$$P\{B_s = x+\varepsilon\} = P\{B_s = x-\varepsilon\} = 0.$$
Thus,
$$\lim_{n\to\infty} g''_n(B_s) = f''_{x,\varepsilon}(B_s),$$
a.s. for any fixed $s$. Using Fubini's theorem, we see that this convergence also holds for almost every $s$, a.s. In fact,
$$\int_\Omega dP \int_0^t ds\, \mathbf{1}_{\{f''_{x,\varepsilon}(B_s) \ne \lim_{n\to\infty} g''_n(B_s)\}} = 0.$$
We have
$$\sup_{y\in\mathbb{R}} |g''_n(y)| \le \frac{1}{2\varepsilon}.$$
Indeed,
$$|g''_n(y)| = \frac{1}{2\varepsilon}\Big|\int_{\mathbb{R}} \varphi_n(z)\mathbf{1}_{(x-\varepsilon,x+\varepsilon)}(y-z)\,dz\Big| \le \frac{1}{2\varepsilon}\int_{y-x-\varepsilon}^{y-x+\varepsilon} |\varphi_n(z)|\,dz \le \frac{1}{2\varepsilon}.$$
Then, by bounded convergence,
$$\int_0^t g''_n(B_s)\,ds \to \int_0^t f''_{x,\varepsilon}(B_s)\,ds,$$
a.s. and in $L^2$.
Thus, passing to the limit in the expression (5.3) yields
$$f_{x,\varepsilon}(B_t) = f_{x,\varepsilon}(B_0) + \int_0^t f'_{x,\varepsilon}(B_s)\,dB_s + \frac{1}{2}\int_0^t \frac{1}{2\varepsilon}\mathbf{1}_{(x-\varepsilon,x+\varepsilon)}(B_s)\,ds. \qquad (5.4)$$
Convergence of (5.4) as $\varepsilon \to 0$
Since $f_{x,\varepsilon}(y) \to (y-x)^+$ as $\varepsilon \to 0$ and
$$|f_{x,\varepsilon}(B_t) - f_{x,\varepsilon}(B_0)| \le |B_t - B_0|,$$
we have
$$f_{x,\varepsilon}(B_t) - f_{x,\varepsilon}(B_0) \to (B_t-x)^+ - (B_0-x)^+$$
in $L^2$.
Moreover,
$$E\int_0^t \big(f'_{x,\varepsilon}(B_s) - \mathbf{1}_{[x,\infty)}(B_s)\big)^2\,ds \le E\int_0^t \mathbf{1}_{(x-\varepsilon,x+\varepsilon)}(B_s)\,ds \le \int_0^t \frac{2\varepsilon}{\sqrt{2\pi s}}\,ds,$$
which clearly tends to zero as $\varepsilon \to 0$. Hence, by the isometry property of the stochastic integral,
$$\int_0^t f'_{x,\varepsilon}(B_s)\,dB_s \to \int_0^t \mathbf{1}_{[x,\infty)}(B_s)\,dB_s$$
in $L^2$.
Consequently, we have proved that
$$\frac{1}{2\varepsilon}\int_0^t \mathbf{1}_{(x-\varepsilon,x+\varepsilon)}(B_s)\,ds$$
converges in $L^2$ as $\varepsilon \to 0$ and that formula (5.2) holds.
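Tanaka's formula can also be checked numerically (a sketch, not part of the original text): we approximate the stochastic integral by left-point sums and the local time by the $\frac{1}{2\varepsilon}$-occupation functional above, and measure the mean squared residual of (5.2); the parameter values are illustrative and the residual is only small, not zero, because of the $\varepsilon$- and time-discretisation errors.

```python
import numpy as np

def tanaka_residual(x=0.2, t=1.0, eps=0.02, n_steps=10000, n_paths=500, seed=3):
    """L^2 residual of (B_t - x)^+ = (B_0 - x)^+
       + int_0^t 1_{[x,inf)}(B_s) dB_s + (1/2) L(t,x),
    with L(t,x) replaced by its (1/(2*eps))-occupation approximation."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = np.cumsum(dB, axis=1)
    B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # left endpoints
    ito = np.sum((B_left >= x) * dB, axis=1)                 # int 1_{[x,oo)} dB
    occupation = np.sum(np.abs(B_left - x) < eps, axis=1) * dt
    local_time = occupation / (2.0 * eps)                    # ~ L(t, x)
    lhs = np.maximum(B[:, -1] - x, 0.0)
    rhs = max(0.0 - x, 0.0) + ito + 0.5 * local_time
    return float(np.mean((lhs - rhs) ** 2))
```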
We give without proof two further properties of local time.
1. The property of local time as a density of occupation measure is made clear by the following identity, valid for any $t \ge 0$ and every $a \le b$:
$$\int_a^b L(t,x)\,dx = \int_0^t \mathbf{1}_{(a,b)}(B_s)\,ds.$$
2. The stochastic integral $\int_0^t \mathbf{1}_{[x,\infty)}(B_s)\,dB_s$ has a jointly continuous version.
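The occupation-density identity in property 1 holds pathwise, so it can be illustrated on a single simulated path (this sketch is not part of the original text; all names and grid choices are illustrative): we tabulate the $\frac{1}{2\varepsilon}$-approximation of $L(t,x)$ on a grid of $x$ values, integrate it over $[a,b]$ by the trapezoid rule, and compare with the occupation time of $(a,b)$ computed directly.

```python
import numpy as np

def occupation_identity_gap(a=-0.5, b=0.5, t=1.0, eps=0.01,
                            n_steps=200000, seed=4):
    """For a single Brownian path, compare int_a^b L(t,x) dx (L given by
    the (1/(2*eps))-occupation approximation on a grid of x values) with
    the occupation time int_0^t 1_{(a,b)}(B_s) ds."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    B = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))
    xs = np.linspace(a, b, 101)
    dx = xs[1] - xs[0]
    L = np.array([np.sum(np.abs(B - xv) < eps) for xv in xs]) * dt / (2 * eps)
    lhs = (np.sum(L) - 0.5 * (L[0] + L[-1])) * dx   # trapezoid rule in x
    rhs = np.sum((B > a) & (B < b)) * dt
    return float(abs(lhs - rhs)), float(rhs)
```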
6 Stochastic differential equations
In this section we shall introduce stochastic differential equations driven by a
multi-dimensional Brownian motion. Under suitable properties on the coeffi-
cients, we shall prove a result on existence and uniqueness of solution. Then
we shall establish properties of the solution, like existence of moments of any
order and the Hölder property of the sample paths.
The setting
We consider a d-dimensional Brownian motion B = {Bt = (Bt1 , . . . , Btd ), t ≥
0}, B0 = 0, defined on a probability space (Ω, F, P ), along with a filtration
(Ft , t ≥ 0) satisfying the following properties:
1. B is adapted to (Ft , t ≥ 0),
or coordinate-wise,
$$X_t^i = x^i + \sum_{j=1}^d \int_0^t \sigma_j^i(s,X_s)\,dB_s^j + \int_0^t b^i(s,X_s)\,ds,$$
$i = 1, \dots, m$.
Strong existence and path-wise uniqueness
We now give the notions of existence and uniqueness of solution that we will
consider throughout this chapter.
3. Equation (6.2) holds true for the fixed Brownian motion defined before.
Definition 6.2 The equation (6.2) has a path-wise unique solution if any two strong solutions $X^1$ and $X^2$ in the sense of the previous definition are indistinguishable, that is,
$$P\{X_t^1 = X_t^2,\ \text{for all } t\ge 0\} = 1.$$
1. Lipschitz condition:
$$\sup_t \big[|b(t,x)-b(t,y)| + |\sigma(t,x)-\sigma(t,y)|\big] \le L|x-y|. \qquad (6.4)$$
6.1 Examples of stochastic differential equations
When the functions σ and b have a linear structure, the solution to (6.2)
admits an explicit form. This is not surprising as it is indeed the case for
ordinary differential equations. We deal with this question in this section.
More precisely, suppose first that $\Sigma = 0$ and $c = 0$; then the equation reduces to an ordinary linear differential equation with solution
$$X_t = e^{Dt}X_0, \quad t \ge 0.$$
For the general linear equation we apply the method of variation of constants and look for a solution of the form
$$X_t = X_0(t)e^{Dt}.$$
A priori $X_0(t)$ may be random. However, since $e^{Dt}$ is differentiable, the Itô differential of $X_t$ can be computed. Equating the right-hand side of the resulting identity with the differentials prescribed by the equation yields
$$dX_0(t) = e^{-Dt}\big[\Sigma(t)\,dB_t + c(t)\,dt\big],$$
that is, in integral form,
$$X_0(t) = x + \int_0^t e^{-Ds}\big[\Sigma(s)\,dB_s + c(s)\,ds\big].$$
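The explicit formula can be tested against a direct discretisation (this Python sketch is not part of the original text; the scalar coefficients $a$, $c$, $\sigma$ and all parameter values are illustrative): in the scalar case $dX = (aX + c)\,dt + \sigma\,dB$, the variation-of-constants solution and an Euler scheme driven by the same Brownian increments should agree up to discretisation error.

```python
import numpy as np

def explicit_vs_euler(a=-1.0, c=0.5, sigma=0.3, x0=1.0, t=1.0,
                      n_steps=20000, seed=5):
    """Scalar linear SDE dX = (a X + c) dt + sigma dB.  The explicit
    solution X_t = e^{a t} (x0 + int_0^t e^{-a s} (c ds + sigma dB_s))
    is compared pathwise with an Euler scheme driven by the same noise."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), n_steps)
    s = np.arange(n_steps) * dt                       # left endpoints
    integral = np.sum(np.exp(-a * s) * (c * dt + sigma * dB))
    x_explicit = np.exp(a * t) * (x0 + integral)
    x = x0
    for k in range(n_steps):                          # Euler scheme
        x = x + (a * x + c) * dt + sigma * dB[k]
    return float(abs(x_explicit - x))
```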
Theorem 6.1 Assume that the functions $\sigma$ and $b$ satisfy the assumptions (H). Then there exists a path-wise unique strong solution to (6.2).
$$X_t^0 = x, \qquad X_t^n = x + \int_0^t \sigma(s,X_s^{n-1})\,dB_s + \int_0^t b(s,X_s^{n-1})\,ds, \quad n \ge 1, \qquad (6.10)$$
t ≥ 0. Let us restrict the time interval to [0, T ], with T > 0. We shall
prove that the sequence of stochastic processes defined recursively by (6.10)
converges uniformly to a process X which is a strong solution of (6.2). Even-
tually, we shall prove path-wise uniqueness.
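The Picard scheme (6.10) can be run numerically (a sketch, not part of the original text): for a test equation with Lipschitz coefficients, here $dX = \tfrac{1}{2}\sin(X)\,dB + \tfrac{1}{2}\cos(X)\,dt$ chosen purely for illustration, the successive iterates computed on a fixed time grid with a common Brownian path collapse onto each other very quickly, mirroring the factorial decay proved below.

```python
import numpy as np

def picard_gap(n_iter=8, t=1.0, n_steps=400, n_paths=5000, seed=6):
    """Picard iterates (6.10) for dX = 0.5*sin(X) dB + 0.5*cos(X) dt,
    X_0 = 0 (Lipschitz coefficients), computed on a fixed time grid with
    one Brownian path per sample; returns E sup_t |X^{n+1}-X^n|^2 for
    each iteration, which should decay rapidly."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X_prev = np.zeros((n_paths, n_steps + 1))
    gaps = []
    for _ in range(n_iter):
        incr = (0.5 * np.sin(X_prev[:, :-1]) * dB
                + 0.5 * np.cos(X_prev[:, :-1]) * dt)
        X_new = np.hstack([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)])
        gaps.append(float(np.mean(np.max((X_new - X_prev) ** 2, axis=1))))
        X_prev = X_new
    return gaps
```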
Step 1: We prove by induction on $n$ that for any $t \in [0,T]$,
$$E\Big(\sup_{0\le s\le t} |X_s^n|^2\Big) < \infty. \qquad (6.11)$$
Indeed, this property is clearly true for $n = 0$, since in this case $X_t^0$ is constant and equal to $x$. Suppose that (6.11) holds for $n = 0, \dots, m-1$. By applying Burkholder's and Hölder's inequalities, we obtain
$$\begin{aligned}
E\Big(\sup_{0\le s\le t}|X_s^m|^2\Big) &\le C\Big(|x|^2 + E\sup_{0\le s\le t}\Big|\int_0^s \sigma(u,X_u^{m-1})\,dB_u\Big|^2 + E\sup_{0\le s\le t}\Big|\int_0^s b(u,X_u^{m-1})\,du\Big|^2\Big)\\
&\le C\Big(|x|^2 + E\int_0^t |\sigma(u,X_u^{m-1})|^2\,du + E\Big(\int_0^t |b(u,X_u^{m-1})|\,du\Big)^2\Big)\\
&\le C\Big(|x|^2 + E\int_0^t \big(1+|X_u^{m-1}|^2\big)\,du\Big)\\
&\le C\Big[|x|^2 + T + T\,E\Big(\sup_{0\le s\le T}|X_s^{m-1}|^2\Big)\Big].
\end{aligned}$$
$$E\Big(\sup_{0\le s\le t}|X_s^{n+1}-X_s^n|^2\Big) \le \frac{(Ct)^{n+1}}{(n+1)!}. \qquad (6.12)$$
Indeed, for $n = 0$,
$$E\Big(\sup_{0\le s\le t}\Big|\int_0^s \sigma(u,x)\,dB_u\Big|^2\Big) \le C\int_0^t |\sigma(u,x)|^2\,du \le Ct\,(1+|x|^2).$$
Similarly,
$$E\Big(\sup_{0\le s\le t}\Big|\int_0^s b(u,x)\,du\Big|^2\Big) \le Ct\int_0^t |b(u,x)|^2\,du \le Ct^2(1+|x|^2).$$
with
$$A(t) = E\Big\{\sup_{0\le s\le t}\Big|\int_0^s \big(\sigma(u,X_u^n)-\sigma(u,X_u^{n-1})\big)\,dB_u\Big|^2\Big\},$$
$$B(t) = E\Big\{\sup_{0\le s\le t}\Big|\int_0^s \big(b(u,X_u^n)-b(u,X_u^{n-1})\big)\,du\Big|^2\Big\}.$$
Using first Burkholder's inequality and then Hölder's inequality, along with the Lipschitz property of the coefficient $\sigma$, we obtain
$$A(t) \le C(L)\int_0^t E\big(|X_s^n - X_s^{n-1}|^2\big)\,ds.$$
Analogously,
$$B(t) \le C(T,L)\,\frac{(Ct)^{n+1}}{(n+1)!}.$$
$$P\Big\{\sup_{0\le t\le T}\big|X_t^{n+1}-X_t^n\big| > \frac{1}{2^n}\Big\} \le 2^{2n}\,\frac{(CT)^{n+1}}{(n+1)!},$$
which clearly implies
$$\sum_{n=0}^\infty P\Big\{\sup_{0\le t\le T}\big|X_t^{n+1}-X_t^n\big| > \frac{1}{2^n}\Big\} < \infty.$$
In other words (by the Borel-Cantelli lemma), for almost every $\omega$ there exists a natural number $m_0(\omega)$ such that
$$\sup_{0\le t\le T}\big|X_t^{n+1}-X_t^n\big| \le \frac{1}{2^n},$$
for any $n \ge m_0(\omega)$. The Weierstrass criterion for the convergence of series of functions then implies that
$$X_t^m = x + \sum_{k=0}^{m-1}\big[X_t^{k+1}-X_t^k\big]$$
converges uniformly on $[0,T]$, a.s.
Indeed, applying the extension of Lemma 3.5 to processes of $\Lambda_{a,T}$, we have for each $\varepsilon, N > 0$,
$$P\Big\{\Big|\int_0^t \big(\sigma(s,X_s^n)-\sigma(s,X_s)\big)\,dB_s\Big| > \varepsilon\Big\} \le P\Big\{\int_0^t |\sigma(s,X_s^n)-\sigma(s,X_s)|^2\,ds > N\Big\} + \frac{N}{\varepsilon^2}.$$
The first term on the right-hand side of this inequality converges to zero as $n \to \infty$. Since $\varepsilon, N > 0$ are arbitrary, this yields the convergence stated in (6.13).
Summarising, by considering if necessary a subsequence {Xtnk , t ∈ [0, T ]},
we have proved the a.s. convergence, uniformly in t ∈ [0, T ], to a stochastic
process {Xt , t ∈ [0, T ]} which satisfies (6.2), and moreover
Theorem 6.2 Assume the same assumptions as in Theorem 6.1 and suppose
in addition that the initial condition is a random variable X0 , independent of
the Brownian motion. Fix $p \in [2,\infty)$ and $t \in [0,T]$. There exists a positive constant $C = C(p,t,L)$ such that
$$E\Big(\sup_{0\le s\le t}|X_s|^p\Big) \le C\big(1+E|X_0|^p\big). \qquad (6.14)$$
Define
$$\varphi(t) = E\Big(\sup_{0\le s\le t}|X_s|^p\Big).$$
Theorem 6.3 The assumptions are the same as in Theorem 6.1. Then
$$E\Big(\sup_{0\le s\le t}|X_s(X_0)-X_s(Y_0)|^p\Big) \le C(p,L,t)\,E|X_0-Y_0|^p, \qquad (6.15)$$
for any $p \in [2,\infty)$, where $C(p,L,t)$ is a positive constant depending on $p$, $L$ and $t$.
Theorem 6.4 The assumptions are the same as in Theorem 6.1. Let $p \in [2,\infty)$ and $0 \le s \le t \le T$. There exists a positive constant $C = C(p,L,T)$ such that
$$E\big(|X_t-X_s|^p\big) \le C(p,L,T)\big(1+E|X_0|^p\big)|t-s|^{\frac{p}{2}}. \qquad (6.16)$$
We have
$$E\big(|X_t-X_s|^p\big) \le C(p)\Big(E\Big|\int_s^t \sigma(u,X_u)\,dB_u\Big|^p + E\Big|\int_s^t b(u,X_u)\,du\Big|^p\Big).$$
The estimate of the path-wise integral follows from Hölder's inequality and (6.17). Indeed, we have
$$\begin{aligned}
E\Big|\int_s^t b(u,X_u)\,du\Big|^p &\le |t-s|^{p-1}\int_s^t E\big(|b(u,X_u)|^p\big)\,du\\
&\le C(p,L)|t-s|^{p-1}\int_s^t \big(1+E(|X_u|^p)\big)\,du\\
&\le C(p,T,L)|t-s|^p\big(1+E|X_0|^p\big).
\end{aligned}$$
Corollary 6.1 With the same assumptions as in Theorem 6.1, the sample paths of the solution to (6.2) are Hölder continuous of any order $\alpha \in (0,\frac{1}{2})$.
Remark 6.1 Assume that in Theorem 6.3 the initial conditions are deter-
ministic and are denoted by x and y, respectively. An extension of Kol-
mogorov’s continuity criterion to stochastic processes indexed by a multi-
dimensional parameter yields that the sample paths of the stochastic process
{Xt (x), t ∈ [0, T ], x ∈ Rm } are jointly Hölder continuous in (t, x) of degree
α < 21 in t and β < 1 in x, respectively.
and
$$\Psi: E\times\Omega \to \mathbb{R}^m,$$
measurable with respect to $\mathcal{E}\otimes\mathcal{G}$ and such that $\omega \mapsto \Psi(X(\omega),\omega)$ belongs to $L^1(\Omega)$. Then,
$$E\big(\Psi(X,\cdot)\,\big|\,\mathcal{H}\big) = \Phi(X)(\cdot), \qquad (6.18)$$
with $\Phi(x)(\cdot) = E(\Psi(x,\cdot))$.
with coefficients $\sigma$ and $b$ satisfying the assumptions (H). Then for any $t \ge u$,
$$Y_t^\eta(\omega) = X_t^{x,u}(\omega)\big|_{x=\eta(\omega)},$$
By virtue of the local property of the stochastic integral, on the set $A_i$,
$$X_t^{c_i,u}(\omega) = X_t^{x,u}(\omega)\big|_{x=\eta(\omega)} = Y_t^\eta(\omega).$$
Let now (ηn , n ≥ 1) be a sequence of simple Fu -measurable random variables
converging in L2 (Ω) to η. By Theorem 6.3 we have
L2 (Ω) − lim Ytηn = Ytη .
n→∞
where in the last equality, we have applied the joint continuity in (t, x) of
Xtx,u .
As a consequence of the preceding lemma, we have $X_t^{x,s} = X_t^{X_u^{x,s},u}$ for any $0 \le s \le u \le t$, a.s.
For any Γ ∈ B(Rm ), set
p(s, t, x, Γ) = P {Xtx,s ∈ Γ}, (6.20)
so, for fixed 0 ≤ s ≤ t and x ∈ Rm , p(s, t, x, ·) is the law of the random
variable Xtx,s .
Theorem 6.5 The stochastic process {Xtx,s , t ≥ s} is a Markov process with
initial distribution µ = δ{x} and transition probability function given by ([?]).
Proof: According to Definition 2.3 we have to check that (6.20) defines a
Markovian transition function and that
P {Xtx,s ∈ Γ|Fu } = p(u, t, Xux,s , Γ). (6.21)
We start by proving this identity. For this, we shall apply Lemma 6.3 in the
following setting:
(E, E) = (Rm , B(Rm )),
G = σ (Br+u − Bu , r ≥ 0) , H = Fu ,
Ψ(x, ω) = 1Γ (Xtx,u (ω)) , u ≤ t,
X := Xux,s , s ≥ u.
Clearly the σ-fields G and H defined above are independent and Xux,s is Fu -
measurable. Moreover,
$$\Phi(x) = E\big(\Psi(x,\cdot)\big) = E\big(\mathbf{1}_\Gamma(X_t^{x,u})\big) = P\{X_t^{x,u}\in\Gamma\} = p(u,t,x,\Gamma).$$
Thus, Lemma 6.3 and then Lemma 6.2 yield
$$\begin{aligned}
P\{X_t^{x,s}\in\Gamma\,|\,\mathcal{F}_u\} &= P\big\{X_t^{X_u^{x,s},u}\in\Gamma\,\big|\,\mathcal{F}_u\big\}\\
&= E\big(\mathbf{1}_\Gamma\big(X_t^{X_u^{x,s},u}\big)\,\big|\,\mathcal{F}_u\big) = E\big(\Psi(X_u^{x,s},\omega)\,\big|\,\mathcal{F}_u\big)\\
&= \Phi(X_u^{x,s}) = p(u,t,X_u^{x,s},\Gamma).
\end{aligned}$$
Notice that the values $X^\pi_{\tau_n}$, $n = 0, \dots, N-1$, are determined by the values of $B_{\tau_n}$, $n = 1, \dots, N$.
We can extend the definition of $X^\pi$ to any value of $t \in [0,T]$ by setting
$$X_t^\pi = X_{\tau_j}^\pi + \sigma(\tau_j,X_{\tau_j}^\pi)(B_t-B_{\tau_j}) + b(\tau_j,X_{\tau_j}^\pi)(t-\tau_j), \qquad (7.2)$$
$$|\sigma(t,x)-\sigma(s,x)| + |b(t,x)-b(s,x)| \le C(1+|x|)|t-s|^\alpha, \qquad (7.4)$$
where $|\pi|$ denotes the norm of the partition $\pi$ and $\beta = \frac{1}{2}\wedge\alpha$.
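The rate $\beta = \frac{1}{2}$ for time-homogeneous Lipschitz coefficients can be observed numerically (a sketch, not part of the original text; the test equation and parameters are illustrative): for $dX = -X\,dt + X\,dB$, whose exact solution is known in closed form, halving the mesh four-fold should roughly halve the strong error.

```python
import numpy as np

def euler_strong_error(n_steps, t=1.0, n_paths=20000, seed=7):
    """Strong error max_k |X^pi_{t_k} - X_{t_k}| (averaged over paths) of
    the Euler scheme for dX = -X dt + X dB, X_0 = 1, whose exact solution
    is X_t = exp(-(3/2) t + B_t)."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = np.cumsum(dB, axis=1)
    tk = np.arange(1, n_steps + 1) * dt
    exact = np.exp(-1.5 * tk + B)            # closed form on the grid
    x = np.ones(n_paths)
    err = np.zeros(n_paths)
    for k in range(n_steps):
        x = x + (-x) * dt + x * dB[:, k]     # Euler step
        err = np.maximum(err, np.abs(x - exact[:, k]))
    return float(np.mean(err))
```

Comparing the error for a coarse and a four-times-finer mesh gives a ratio close to $2 = 4^{1/2}$, consistent with order $\beta = \frac{1}{2}$.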
Proof: We shall apply the following result, which can be proved by arguing in a similar way as in Theorem 6.2:
$$\sup_\pi E\Big(\sup_{0\le s\le t}|X_s^\pi|^p\Big) \le C(p,T). \qquad (7.6)$$
Set
$$Z_t = \sup_{0\le s\le t}|X_s^\pi - X_s|.$$
The assumptions on the coefficients yield
$$\begin{aligned}
\big|\sigma(\pi(s),X^\pi_{\pi(s)}) - \sigma(s,X_s)\big| &\le \big|\sigma(\pi(s),X^\pi_{\pi(s)}) - \sigma(\pi(s),X_{\pi(s)})\big|\\
&\quad + \big|\sigma(\pi(s),X_{\pi(s)}) - \sigma(s,X_{\pi(s)})\big| + \big|\sigma(s,X_{\pi(s)}) - \sigma(s,X_s)\big|\\
&\le C_T\Big[\big|X^\pi_{\pi(s)} - X_{\pi(s)}\big| + \big(1+|X_{\pi(s)}|\big)|s-\pi(s)|^\alpha + \big|X_{\pi(s)} - X_s\big|\Big]\\
&\le C_T\Big[Z_s + \big(1+|X_{\pi(s)}|\big)(s-\pi(s))^\alpha + \big|X_{\pi(s)} - X_s\big|\Big],
\end{aligned}$$
as n → ∞, uniformly in t ∈ [0, T ].
8 Continuous time martingales
In this chapter we shall study some properties of martingales (respectively,
supermartingales and submartingales) whose sample paths are continuous.
We consider a filtration {Ft , t ≥ 0} as has been introduced in section 2.4 and
refer to definition 2.2 for the notion of martingale (respectively, supermartin-
gale, submartingale). We notice that in fact this definition can be extended
to families of random variables {Xt , t ∈ T} where T is an ordered set. In
particular, we can consider discrete time parameter processes.
We start by listing some elementary but useful properties.
The first assertion follows easily from property (c) of the conditional expec-
tation. The second assertion can be proved using Jensen’s inequality.
T = inf{n : Xn ≥ λ} ∧ N.
Then
$$E(X_N) \ge E(X_T) = E\big(X_T\mathbf{1}_{(\sup_n X_n\ge\lambda)}\big) + E\big(X_T\mathbf{1}_{(\sup_n X_n<\lambda)}\big) \ge \lambda P\Big(\sup_n X_n\ge\lambda\Big) + E\big(X_N\mathbf{1}_{(\sup_n X_n<\lambda)}\big).$$
By subtracting $E\big(X_N\mathbf{1}_{(\sup_n X_n<\lambda)}\big)$ from the first and last terms above, we obtain the first inequality of (8.1). The second one is obvious.
As a consequence of this proposition we have the following.
Proof: Without loss of generality, we may assume that $E(|X_N|^p) < \infty$, since otherwise (8.2) holds trivially.
According to property 2 above, the process $\{|X_n|^p, 0 \le n \le N\}$ is a submartingale and then, by Proposition 8.1 applied to the process $X_n := |X_n|^p$,
$$\mu P\Big(\sup_n |X_n|^p \ge \mu\Big) = \mu P\Big(\sup_n |X_n| \ge \mu^{\frac{1}{p}}\Big) = \lambda^p P\Big(\sup_n |X_n| \ge \lambda\Big) \le E\big(|X_N|^p\big),$$
where for any $\mu > 0$ we have written $\lambda = \mu^{\frac{1}{p}}$.
We now prove the second inequality of (8.3); the first is obvious.
Set $X^* = \sup_n |X_n|$, for which we have
$$\lambda P(X^* \ge \lambda) \le E\big(|X_N|\mathbf{1}_{(X^*\ge\lambda)}\big).$$
Fubini's theorem yields
$$\begin{aligned}
E\big((X^*\wedge k)^p\big) &= E\Big(\int_0^{X^*\wedge k} p\lambda^{p-1}\,d\lambda\Big)\\
&= \int_\Omega dP\int_0^\infty \mathbf{1}_{\{\lambda\le X^*\wedge k\}}\,p\lambda^{p-1}\,d\lambda\\
&= \int_0^k p\lambda^{p-1}\,d\lambda\int_{\{\lambda\le X^*\}} dP\\
&= \int_0^k p\lambda^{p-1}P(X^*\ge\lambda)\,d\lambda\\
&\le \int_0^k p\lambda^{p-2}E\big(|X_N|\mathbf{1}_{(X^*\ge\lambda)}\big)\,d\lambda\\
&= p\,E\Big(|X_N|\int_0^{k\wedge X^*}\lambda^{p-2}\,d\lambda\Big)\\
&= \frac{p}{p-1}E\big(|X_N|(X^*\wedge k)^{p-1}\big).
\end{aligned}$$
Applying Hölder's inequality with exponents $\frac{p}{p-1}$ and $p$ yields
$$E\big((X^*\wedge k)^p\big) \le \frac{p}{p-1}\big[E\big((X^*\wedge k)^p\big)\big]^{\frac{p-1}{p}}\big[E\big(|X_N|^p\big)\big]^{\frac{1}{p}}.$$
Consequently,
$$\big[E\big((X^*\wedge k)^p\big)\big]^{\frac{1}{p}} \le \frac{p}{p-1}\big[E\big(|X_N|^p\big)\big]^{\frac{1}{p}}.$$
Letting k → ∞ and using monotone convergence, we end the proof.
It is not difficult to extend the above results to martingales (submartingales) with continuous sample paths. In fact, for a given $T > 0$ we define
$$D = \mathbb{Q}\cap[0,T], \qquad D_n = D\cap\Big\{\frac{k}{2^n},\ k\in\mathbb{Z}_+\Big\},$$
where $\mathbb{Q}$ denotes the set of rational numbers.
We can now apply (8.2), (8.3) to the corresponding processes indexed by $D_n$. By letting $n$ tend to $\infty$ we obtain
$$\lambda^p P\Big(\sup_{t\in D}|X_t| \ge \lambda\Big) \le \sup_{t\in D}E\big(|X_t|^p\big), \quad p\in[1,\infty),$$
and
$$E\Big(\sup_{t\in D}|X_t|^p\Big) \le \Big(\frac{p}{p-1}\Big)^p\sup_{t\in D}E\big(|X_t|^p\big), \quad p\in\,]1,\infty).$$
By the continuity of the sample paths we can finally state the following result.
Theorem 8.1 Let $\{X_t, t \in [0,T]\}$ be either a continuous martingale or a continuous positive submartingale. Then
$$\lambda^p P\Big(\sup_{t\in[0,T]}|X_t| \ge \lambda\Big) \le \sup_{t\in[0,T]}E\big(|X_t|^p\big), \quad p\in[1,\infty), \qquad (8.4)$$
$$E\Big(\sup_{t\in[0,T]}|X_t|^p\Big) \le \Big(\frac{p}{p-1}\Big)^p\sup_{t\in[0,T]}E\big(|X_t|^p\big), \quad p\in\,]1,\infty). \qquad (8.5)$$
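A short numerical illustration of (8.5), not part of the original text: for the simple random-walk martingale (a discrete-time example, as allowed by the remark at the start of this chapter), the $L^p$ norm of the running maximum indeed stays below $\frac{p}{p-1}$ times the $L^p$ norm of the terminal value.

```python
import numpy as np

def doob_lp_check(p=3.0, n=200, n_paths=50000, seed=8):
    """Doob's L^p inequality (8.5), discrete version, for the simple
    random-walk martingale X_k = xi_1 + ... + xi_k."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
    X = np.cumsum(steps, axis=1)
    lhs = float(np.mean(np.max(np.abs(X), axis=1) ** p))
    rhs = (p / (p - 1)) ** p * float(np.mean(np.abs(X[:, -1]) ** p))
    return lhs, rhs
```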
Tn = inf{t ≥ 0, |Mt | ≥ n}
reduces M .
Proof: We start by proving (1). Consider a sequence $(T_n, n \ge 1)$ of stopping times that reduce $M$. Then for $0 \le s \le t$ and each $n$,
$$M_{s\wedge T_n} = E\big(M_{t\wedge T_n}\,\big|\,\mathcal{F}_s\big).$$
Since the random variables are positive, by letting $n$ tend to infinity and applying Fatou's lemma, we obtain
$$M_s \ge E\big(M_t\,\big|\,\mathcal{F}_s\big).$$
Let us finally prove (3). The process $M^{T_n}$ is a local martingale (see property (b) above) and it is bounded by $n$. Thus, by (2), it is also a martingale. Consequently, the sequence $T_n = \inf\{t \ge 0:\ |M_t| \ge n\}$, $n \ge 1$, reduces $M$.
The next statement gives a feeling of the roughness of the sample paths of a
continuous local martingale.
$n \ge 1$.
Fix $n$ and set $N^n = M^{\tau_n}$. This is a local martingale satisfying $\int_0^\infty |dN^n_s| \le n$.
Fix $t > 0$ and consider a partition $0 = t_0 < t_1 < \cdots < t_p = t$ of $[0,t]$. Then
$$\begin{aligned}
E(N_t^2) &= E\sum_{i=1}^p\big[N_{t_i}^2 - N_{t_{i-1}}^2\big] = E\sum_{i=1}^p\big(N_{t_i} - N_{t_{i-1}}\big)^2\\
&\le E\Big[\sup_i|N_{t_i}-N_{t_{i-1}}|\sum_{i=1}^p|N_{t_i}-N_{t_{i-1}}|\Big]\\
&\le n\,E\Big(\sup_i|N_{t_i}-N_{t_{i-1}}|\Big),
\end{aligned}$$
Theorem 8.2 The sequence $(\langle M\rangle^n_t, n \ge 1)$, $t \in [0,T]$, converges uniformly in $t$, in probability, to a continuous, increasing process $\langle M\rangle = (\langle M\rangle_t, t \in [0,T])$ such that $\langle M\rangle_0 = 0$. That is, for any $\varepsilon > 0$,
$$P\Big\{\sup_{t\in[0,T]}\big|\langle M\rangle^n_t - \langle M\rangle_t\big| > \varepsilon\Big\} \to 0. \qquad (8.7)$$
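For $M = B$ the limit is $\langle B\rangle_t = t$, and the convergence of the dyadic sums can be watched directly (a single-path sketch, not part of the original text; grid sizes are illustrative):

```python
import numpy as np

def quadratic_variation_error(t=1.0, seed=9):
    """For one Brownian path sampled on a fine dyadic grid, the sums
    <B>^n_t = sum_k (B_{t_k} - B_{t_{k-1}})^2 over coarser dyadic grids
    should approach <B>_t = t as the mesh shrinks."""
    rng = np.random.default_rng(seed)
    n_fine = 2 ** 16
    B = np.cumsum(rng.normal(0.0, np.sqrt(t / n_fine), n_fine))
    errors = []
    for n in (2 ** 6, 2 ** 10, 2 ** 14):
        step = n_fine // n
        coarse = B[step - 1::step]                    # B at the coarse grid
        incr = np.diff(np.concatenate(([0.0], coarse)))
        errors.append(abs(float(np.sum(incr ** 2)) - t))
    return errors
```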
Proof: Uniqueness follows from Proposition 8.4. Indeed, assume there were two increasing processes $\langle M\rangle^i$, $i = 1, 2$, such that $M^2 - \langle M\rangle^i$ is a continuous local martingale. By subtracting these two processes we get that $\langle M\rangle^1 - \langle M\rangle^2$ is of bounded variation and, at the same time, a continuous local martingale. Hence $\langle M\rangle^1$ and $\langle M\rangle^2$ are indistinguishable.
The existence of hM i is proved through different steps, as follows.
Step 1. Assume that M is bounded. We shall prove that (hM int , n ≥ 1),
t ∈ [0, T ] is uniformly in t a Cauchy sequence in probability.
Let $m > n$. We have the following:
$$\begin{aligned}
E\big|\langle M\rangle^n_t - \langle M\rangle^m_t\big|^2 &= E\Big(\sum_{k=1}^{p_n}\Big[(\Delta^n_k M)^2_t - \sum_{j:\,t^m_j\in[t^n_{k-1},t^n_k[}(\Delta^m_j M)^2_t\Big]\Big)^2\\
&= 4E\Big(\sum_{k=1}^{p_n}\sum_{j:\,t^m_j\in[t^n_{k-1},t^n_k[}(\Delta^m_j M)_t\big(M_{t^m_j}-M_{t^n_{k-1}}\big)\Big)^2\\
&= 4E\Big(\sum_{k=1}^{p_n}\sum_{j:\,t^m_j\in[t^n_{k-1},t^n_k[}(\Delta^m_j M)^2_t\big(M_{t^m_j}-M_{t^n_{k-1}}\big)^2\Big)\\
&\le 4E\Big(\sup_{j,k}\big|M_{t^m_j}-M_{t^n_{k-1}}\big|^2\sum_{j=1}^{p_m}(\Delta^m_j M)^2_t\Big)\\
&\le 4\Big[E\sup_{j,k}\big|M_{t^m_j}-M_{t^n_{k-1}}\big|^4\Big]^{\frac{1}{2}}\Big[E\Big(\sum_{j=1}^{p_m}(\Delta^m_j M)^2_t\Big)^2\Big]^{\frac{1}{2}}.
\end{aligned}$$
Let us now consider the last expression. The first factor tends to zero as $n$ and $m$ tend to infinity, because $M$ is continuous and bounded. The second factor is easily seen to be bounded uniformly in $m$. Thus we have proved
$$\lim_{n,m\to\infty}E\big|\langle M\rangle^n_t - \langle M\rangle^m_t\big|^2 = 0, \qquad (8.8)$$
Step 2. For a general continuous local martingale M , let us consider an in-
creasing sequence of stopping times (Tn , n ≥ 1) converging to infinity a.s.
such that M Tn is a bounded martingale. By Step 1, there exists an increas-
ing process hM n i associated with M n := M Tn . Moreover, for any m ≤ n,
hM n it∧Tm = hM m it . Fix t ∈ [0, T ]. There exists n ≥ 1 such that t ≤ Tn ;
then we set
hM it = hM it∧Tn = hM n it .
We have, for each $k$ and $n$,
$$P\Big\{\sup_{t\in[0,T]}\big|\langle M\rangle^n_t-\langle M\rangle_t\big|>\varepsilon\Big\} \le P\Big\{\sup_{t\in[0,T]}\big|\langle M^k\rangle^n_t-\langle M^k\rangle_t\big|>\varepsilon\Big\} + P\{T_k<T\}.$$
For any $\delta > 0$, we can choose $k \ge 1$ such that $P\{T_k < T\} < \frac{\delta}{2}$. Thus, we obtain
$$\lim_{n\to\infty}P\Big\{\sup_{t\in[0,T]}\big|\langle M\rangle^n_t-\langle M\rangle_t\big|>\varepsilon\Big\} = 0.$$
The assertion on $L^2(\Omega)$ convergence can be proved using martingale inequalities.
Given two continuous local martingales $M$ and $N$ we define the cross variation by
$$\langle M,N\rangle_t = \frac{1}{2}\big[\langle M+N\rangle_t - \langle M\rangle_t - \langle N\rangle_t\big], \qquad (8.9)$$
$t \in [0,T]$.
From the properties of the quadratic variation (see Theorem 8.2) we see that
the process {hM, N it , t ∈ [0, T ]} is a process of bounded variation and it is
the unique process (up to indistinguishability) such that {Mt Nt −hM, N it , t ∈
[0, T ]} is a continuous local martingale. It is also clear that
$$\lim_{n\to\infty}\sum_{k=1}^{p_n}\big(M_{t^n_k}-M_{t^n_{k-1}}\big)\big(N_{t^n_k}-N_{t^n_{k-1}}\big) = \langle M,N\rangle_t, \qquad (8.10)$$
This inequality (a sort of Cauchy-Schwarz’s inequality) is a particular case
of the result stated in the next proposition.
9 Stochastic integrals with respect to contin-
uous martingales
This chapter aims to give an outline of the main ideas of the extension of the
Itô stochastic integral to integrators which are continuous local martingales.
We start by describing precisely the spaces involved in the construction of
such a notion. Throughout the chapter, we consider a fixed probability space
(Ω, F, P ) endowed with a filtration (Ft , t ≥ 0).
We denote by H2 the space of continuous martingales M , bounded in L2 (Ω),
with M0 = 0 a.s. This is a Hilbert space endowed with the inner product
where $0 = t_0 < t_1 < \dots < t_{p+1}$, and for each $i$, $H_i$ is an $\mathcal{F}_{t_i}$-measurable, bounded random variable.
Stochastic processes belonging to $\mathcal{E}$ are termed elementary. They are related to $L^2(M)$ as follows.
Proposition 9.1 Fix M ∈ H2 . The set E is dense in L2 (M ).
Proof: We will prove that if K ∈ L2 (M ) and is orthogonal to E then K = 0.
For this, we fix 0 ≤ s < t and consider the process
H = F 1]s,t] ,
with $F$ an $\mathcal{F}_s$-measurable and bounded random variable.
Saying that $K$ is orthogonal to $H$ can be written as
$$E\Big(F\int_s^t K_u\,d\langle M\rangle_u\Big) = 0.$$
We have thus proved that E (F (Xt − Xs )) = 0, for any 0 ≤ s < t and any
Fs -measurable and bounded random variable F . This shows that the process
(Xt , t ≥ 0) is a martingale. At the same time, (Xt , t ≥ 0) is also a process of
bounded variation. Hence
$$\int_0^t K_u\,d\langle M\rangle_u = 0, \quad \forall\, t\ge 0,$$
Then
(i) H.M ∈ H2 .
Proof of (i): The martingale property follows from the measurability proper-
ties of H and the martingale property of M . Moreover, since H is bounded,
(H.M ) is bounded in L2 (Ω).
H ∈ E 7→ H.M
is an isometry from E to H2 .
Clearly $H \mapsto H.M$ is linear. Moreover, $H.M$ is a finite sum of terms of the form
$$M^i_t = H_i\big(M_{t_{i+1}\wedge t} - M_{t_i\wedge t}\big),$$
each one being a martingale, orthogonal to each other. It is easy to check that
$$\langle M^i\rangle_t = H_i^2\big(\langle M\rangle_{t_{i+1}\wedge t} - \langle M\rangle_{t_i\wedge t}\big).$$
Hence,
$$\langle H.M\rangle_t = \sum_{i=0}^p H_i^2\big(\langle M\rangle_{t_{i+1}\wedge t} - \langle M\rangle_{t_i\wedge t}\big).$$
Consequently,
$$E\langle H.M\rangle_\infty = \|H.M\|^2_{\mathcal{H}^2} = E\Big[\sum_{i=0}^p H_i^2\big(\langle M\rangle_{t_{i+1}} - \langle M\rangle_{t_i}\big)\Big] = E\int_0^\infty H_s^2\,d\langle M\rangle_s = \|H\|^2_{L^2(M)}.$$
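The isometry can be checked numerically in the simplest setting $M = B$, so that $\langle M\rangle_t = t$ (a sketch, not part of the original text; the particular elementary process, with $H_1 = \tanh(B_{t_1})$, is an arbitrary bounded $\mathcal{F}_{t_1}$-measurable choice):

```python
import numpy as np

def isometry_gap(n_paths=100000, seed=10):
    """Elementary process H = H_0 1_{(t_0,t_1]} + H_1 1_{(t_1,t_2]} with
    H_1 bounded and F_{t_1}-measurable, integrated against M = B (so that
    <M>_t = t): E((H.B)^2) should equal E(int H_s^2 d<M>_s)."""
    rng = np.random.default_rng(seed)
    t0, t1, t2 = 0.0, 0.5, 1.0
    B_t1 = rng.normal(0.0, np.sqrt(t1 - t0), n_paths)   # B at t_1
    dB2 = rng.normal(0.0, np.sqrt(t2 - t1), n_paths)    # increment on (t_1,t_2]
    H0 = 1.0                       # deterministic, hence F_{t_0}-measurable
    H1 = np.tanh(B_t1)             # bounded, F_{t_1}-measurable
    stoch_int = H0 * B_t1 + H1 * dB2
    lhs = float(np.mean(stoch_int ** 2))
    rhs = float(np.mean(H0 ** 2 * (t1 - t0) + H1 ** 2 * (t2 - t1)))
    return lhs, rhs
```

The cross term vanishes in expectation because $H_1 B_{t_1}$ is independent of the later increment, which is exactly the orthogonality used in the proof.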
10 Appendix 1: Conditional expectation
Roughly speaking, a conditional expectation of a random variable is the
mean value with respect to a modified probability after having incorporated
some a priori information. The simplest case corresponds to conditioning
with respect to an event B ∈ F. In this case, the conditional expectation is
the mathematical expectation computed on the modified probability space
(Ω, F, P (·/B)).
However, in general, additional information cannot be described so easily. Assuming that we know about some events $B_1, \dots, B_n$, we also know about those that can be derived from them, like unions, intersections and complements. This explains the choice of a $\sigma$-field to keep track of the known information and to work with it.
In the sequel, we denote by G an arbitrary σ-field included in F and by X
a random variable with finite expectation (X ∈ L1 (Ω)). Our final aim is to
give a definition of the conditional expectation of X given G. However, in
order to motivate this notion, we shall start with more simple situations.
Conditional expectation given an event
Let B ∈ F be such that P (B) 6= 0. The conditional expectation of X given
B is the real number defined by the formula
$$E(X/B) = \frac{1}{P(B)}E(\mathbf{1}_B X). \qquad (10.1)$$
• E(X/Ω) = E(X),
With the definition (10.1), the conditional expectation coincides with the expectation with respect to the conditional probability $P(\cdot/B)$. We check this fact with a discrete random variable $X = \sum_{i=1}^\infty a_i\mathbf{1}_{A_i}$. Indeed,
$$E(X/B) = \frac{1}{P(B)}E\Big(\sum_{i=1}^\infty a_i\mathbf{1}_{A_i\cap B}\Big) = \sum_{i=1}^\infty a_i\frac{P(A_i\cap B)}{P(B)} = \sum_{i=1}^\infty a_i P(A_i/B).$$
Conditional expectation given a discrete random variable
Let $Y = \sum_{i=1}^\infty y_i\mathbf{1}_{A_i}$, with $A_i = \{Y = y_i\}$. The conditional expectation of $X$ given $Y$ is defined by
$$E(X/Y) = \sum_{i=1}^\infty E(X/Y=y_i)\,\mathbf{1}_{A_i}. \qquad (10.2)$$
Notice that knowing $Y$ means knowing all the events that can be described in terms of $Y$. Since $Y$ is discrete, they can be described in terms of the basic events $\{Y = y_i\}$. This may explain the formula (10.2).
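A tiny numerical illustration of (10.2), not part of the original text (the random variables and sample sizes are illustrative): with $X = Z_1 + Z_2$ for independent fair coins and $Y = Z_1$, the conditional expectation is the $\sigma(Y)$-measurable random variable $Z_1 + \frac{1}{2}$, and the empirical group means recover it.

```python
import numpy as np

def discrete_conditional(n=200000, seed=11):
    """X = Z1 + Z2 with Z1, Z2 independent fair {0,1} coins and Y = Z1:
    formula (10.2) gives E(X/Y) = Z1 + E(Z2) = Z1 + 0.5, which we compare
    with the empirical conditional means."""
    rng = np.random.default_rng(seed)
    Z1 = rng.choice([0.0, 1.0], n)
    Z2 = rng.choice([0.0, 1.0], n)
    X = Z1 + Z2
    cond = np.empty(n)
    for y in (0.0, 1.0):
        mask = Z1 == y
        cond[mask] = X[mask].mean()       # empirical E(X / Y = y)
    return float(np.max(np.abs(cond - (Z1 + 0.5))))
```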
The following properties hold:
For the proof of (a) we notice that, since $E(X/Y)$ is a discrete random variable,
$$E\big(E(X/Y)\big) = \sum_{i=1}^\infty E(X/Y=y_i)P(Y=y_i) = E\Big(X\sum_{i=1}^\infty\mathbf{1}_{\{Y=y_i\}}\Big) = E(X),$$
proving the first property.
To prove the second one, it suffices to take $A = \{Y = y_k\}$. In this case
$$E\big(\mathbf{1}_{\{Y=y_k\}}E(X/Y)\big) = E\big(\mathbf{1}_{\{Y=y_k\}}E(X/Y=y_k)\big) = E\Big(\mathbf{1}_{\{Y=y_k\}}\frac{E(X\mathbf{1}_{\{Y=y_k\}})}{P(Y=y_k)}\Big) = E\big(X\mathbf{1}_{\{Y=y_k\}}\big).$$
The existence of E(X/G) is not a trivial issue. You should trust mathe-
maticians and believe that there is a theorem in measure theory -the Radon-
Nikodym Theorem- which ensures its existence.
Before stating properties of the conditional expectation, we are going to
explain how to compute it in two particular situations.
Example 10.1 Let $\mathcal{G}$ be the $\sigma$-field (actually, the field) generated by a finite partition $G_1, \dots, G_m$. Then
$$E(X/\mathcal{G}) = \sum_{j=1}^m\frac{E(X\mathbf{1}_{G_j})}{P(G_j)}\mathbf{1}_{G_j}. \qquad (10.3)$$
Example 10.2 Let $\mathcal{G}$ be the $\sigma$-field generated by random variables $Y_1, \dots, Y_m$, that is, the $\sigma$-field generated by events of the form $Y_1^{-1}(B_1), \dots, Y_m^{-1}(B_m)$, with $B_1, \dots, B_m$ arbitrary Borel sets. Assume in addition that the joint distribution of the random vector $(X, Y_1, \dots, Y_m)$ has a density $f$. Then
$$E(X/Y_1,\dots,Y_m) = \int_{-\infty}^\infty x f(x/Y_1,\dots,Y_m)\,dx, \qquad (10.4)$$
with
$$f(x/y_1,\dots,y_m) = \frac{f(x,y_1,\dots,y_m)}{\int_{-\infty}^\infty f(x,y_1,\dots,y_m)\,dx}. \qquad (10.5)$$
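Formulas (10.4)-(10.5) can be verified in a case where the answer is known in closed form (a sketch, not part of the original text; parameter values are illustrative): for a standard bivariate Gaussian $(X, Y)$ with correlation $\rho$, the conditional expectation is $E(X/Y=y) = \rho y$.

```python
import numpy as np

def gaussian_conditional(y=0.8, rho=0.6):
    """Standard bivariate Gaussian (X, Y) with correlation rho: evaluate
    E(X / Y = y) numerically via (10.4)-(10.5) and compare with the
    closed form rho * y."""
    xs = np.linspace(-8.0, 8.0, 4001)
    dx = xs[1] - xs[0]
    norm = 1.0 / (2 * np.pi * np.sqrt(1 - rho ** 2))
    f_joint = norm * np.exp(-(xs ** 2 - 2 * rho * xs * y + y ** 2)
                            / (2 * (1 - rho ** 2)))
    weight = (np.sum(f_joint) - 0.5 * (f_joint[0] + f_joint[-1])) * dx
    f_cond = f_joint / weight                      # formula (10.5)
    integrand = xs * f_cond
    return float((np.sum(integrand) - 0.5 * (integrand[0] + integrand[-1])) * dx)
```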
(c) The mean value of a random variable is the same as that of its conditional
expectation: E(E(X/G)) = E(X).
(e) Let X be independent of G, meaning that any set of the form X −1 (B),
B ∈ B is independent of G. Then E(X/G) = E(X).
(h) Assume that X is a random variable independent of G and Z another
G-measurable random variable. For any measurable function h(x, z)
such that the random variable h(X, Z) is in L1 (Ω),
For those readers who are interested in the proofs of these properties, we give
some indications.
Property (a) follows from the definition of the conditional expectation and
the linearity of the operator E.
Property (b) is a consequence of the monotony property of the operator E and
a result in measure theory telling that, for G-measurable random variables
Z1 and Z2 , satisfying
E(Z1 1G ) ≤ E(Z2 1G ),
for any G ∈ G, we have Z1 ≤ Z2 .
Here, we apply this result to Z1 = E(X/G), Z2 = E(Y /G).
Taking $G = \Omega$ in condition (2) above, we prove (c). Property (d) is obvious. Constant random variables are measurable with respect to any $\sigma$-field. Therefore $E(X)$ is $\mathcal{G}$-measurable. Assuming that $X$ is independent of $\mathcal{G}$ yields
{T ≤ t} ∈ Ft .
It is easy to see that if S and T are stopping times with respect to the same
filtration then T ∧ S, T ∨ S and T + S are also stopping times.
Definition 11.2 For a given stopping time T , the σ-field of events prior to
T is the following
$$A^c\cap\{T\le t\} = \{T\le t\}\cap\big(A\cap\{T\le t\}\big)^c \in \mathcal{F}_t,$$
$$\Big(\bigcup_{n=1}^\infty A_n\Big)\cap\{T\le t\} = \bigcup_{n=1}^\infty\big(A_n\cap\{T\le t\}\big) \in \mathcal{F}_t,$$
$$\{T\le s\}\cap\{T\le t\} = \{T\le s\wedge t\} \in \mathcal{F}_{s\wedge t}\subset\mathcal{F}_t.$$
Let us now check that for any s ≥ 0, the random variable Xs 1{s<T } is
FT -measurable. This fact along with the property {T ≤ (i + 1)2−n } ∈
FT shows the result.
Let A ∈ B(R) and t ≥ 0. The set
{Xs ∈ A} ∩ {s < T } ∩ {T ≤ t}
References
[1] P. Baldi: Equazioni differenziali stocastiche e applicazioni. Quaderni
dell’Unione Matematica Italiana 28. Pitagora Editrici. Bologna 2000.