
Zhongmin QIAN

Mathematical Institute, University of Oxford

22 November 2005


Contents

1 Brownian motion 1

1.1 Probability space . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Brownian motion . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.2.1 The scaling property . . . . . . . . . . . . . . . . . . . 9

1.2.2 Markov property and finite-dimensional distributions . 11

1.2.3 The reflection principle . . . . . . . . . . . . . . . . . 16

1.2.4 Martingale property . . . . . . . . . . . . . . . . . . . 18

1.3 Quadratic variational processes . . . . . . . . . . . . . . . . . 20

2 Itô's calculus 29

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2.2 Stochastic integrals for simple processes . . . . . . . . . . . . 34

2.3 Stochastic integrals for adapted processes . . . . . . . . . . . 38

2.3.1 The space of square-integrable martingales . . . . . . 38

2.3.2 Stochastic integrals as martingales . . . . . . . . . . . 40

2.3.3 Summary of main properties . . . . . . . . . . . . . . 44

2.4 Stochastic integrals along martingales . . . . . . . . . . . . . 45

2.5 Stopping times, local martingales . . . . . . . . . . . . . . . . 46

2.5.1 Technique of localization . . . . . . . . . . . . . . . . . 49

2.5.2 Integration theory for semimartingales . . . . . . . . . 50

2.6 Itô's formula . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

2.6.1 Itô's formula for BM . . . . . . . . . . . . . . . . . . . 54

2.6.2 Proof of Itô's formula . . . . . . . . . . . . . . . . . . 54

2.7 Selected applications of Itô's formula . . . . . . . . . . . . . . 55

2.7.1 Lévy's characterization of Brownian motion . . . . . . 55

2.7.2 Time-changes of Brownian motion . . . . . . . . . . . 57

2.7.3 Stochastic exponentials . . . . . . . . . . . . . . . . . 58

2.7.4 Exponential inequality . . . . . . . . . . . . . . . . . . 68

2.7.5 Girsanov's theorem . . . . . . . . . . . . . . . . . . . . 70


3 Stochastic differential equations 77

3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

3.1.1 Linear-Gaussian diffusions . . . . . . . . . . . . . . . . 79

3.1.2 Geometric Brownian motion . . . . . . . . . . . . . . . 80

3.2 Existence and uniqueness . . . . . . . . . . . . . . . . . . . . 81

3.2.1 Cameron-Martin formula . . . . . . . . . . . . . . . . 81

3.2.2 Existence and uniqueness theorem: strong solution . . 83

3.2.3 Continuity in initial conditions . . . . . . . . . . . . . 88

3.3 Martingales and weak solutions . . . . . . . . . . . . . . . . . 89

4 Martingales 93

4.1 Several notions . . . . . . . . . . . . . . . . . . . . . . . . . . 93

4.2 Doob's inequalities . . . . . . . . . . . . . . . . . . . . . . . . 96

4.3 The convergence theorem . . . . . . . . . . . . . . . . . . . . 102


Introduction

A stochastic differential equation is simply a differential equation perturbed by random noise, whose intensity σ(t, X_t) depends on the time t and the position X_t, and therefore has the following form:

    dX_t/dt = A(t, X_t) + σ(t, X_t)·Ẇ_t .

The central limit theorem in probability theory suggests that Ẇ_t should have a normal distribution and, for the sake of simplicity, that the noise at different times should be independent. Such random noise can be ideally modelled by Brownian motion (W_t)_{t≥0}, a mathematical model describing the random movement of pollen particles in a liquid, observed by R. Brown in 1827. The mathematical model

for Brownian motion and the description of its distribution were derived by Albert Einstein in a short paper, "On the motion of small particles suspended in liquids at rest required by the molecular-kinetic theory of heat", Annalen der Physik 17 (1905), 549-560. A few years earlier, in 1900, L. Bachelier had submitted his Ph.D. thesis, in which he used Brownian motion to model stock markets. His results were published in a paper titled "Théorie de la spéculation", Ann. Sci. École Norm. Sup. 17 (1900), 21-86, which is the first paper devoted to applications of Brownian motion to finance.

On the other hand, the first mathematical construction of Brownian motion was achieved only in 1923, when N. Wiener published his article "Differential space", J. Math. Phys. 2, 132-174. Fruitful results and many unusual features of Brownian motion were revealed mainly by Paul Lévy in the 1930s and 1940s. Among them, P. Lévy showed that almost surely t ↦ W_t is nowhere differentiable, and therefore the time-derivative Ẇ_t of Brownian motion does not exist. It is thus necessary to rewrite the previous differential equation in integral form

    X_t − X_0 = ∫_0^t A(s, X_s) ds + ∫_0^t σ(s, X_s) dW_s ,

in which the stochastic integral ∫ σ(s, X_s) dW_s does not exist in the ordinary (Riemann-Stieltjes) sense. It was Itô in the 1940s who first established an integration theory for Brownian motion, and therefore the

theory of stochastic differential equations. Among the manifold applications and connections with PDE, one of the most remarkable applications of Itô's theory is in finance. Although Itô's theory never caught the attention of the Fields Medal committee, it was brought worldwide recognition when H. Markowitz, W. Sharpe and M. Miller were awarded the 1990 Nobel Prize in Economics, and R. Merton and M. Scholes the 1997 Prize.
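As a numerical aside (a Python sketch, not part of the printed notes): the differential-equation-with-noise picture above translates directly into the Euler-Maruyama scheme, which replaces dt by a small step and dW_t by an N(0, dt) increment. The choice A(t, x) = −x, σ ≡ 1 below (an Ornstein-Uhlenbeck process) is just an illustrative example.

```python
import numpy as np

def euler_maruyama(A, sigma, x0, T, n_steps, rng):
    """Simulate one path of dX_t = A(t, X_t) dt + sigma(t, X_t) dW_t
    by the Euler-Maruyama scheme."""
    dt = T / n_steps
    t = 0.0
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + A(t, x[k]) * dt + sigma(t, x[k]) * dW
        t += dt
    return x

rng = np.random.default_rng(0)
# Ornstein-Uhlenbeck example: dX_t = -X_t dt + dW_t, X_0 = 1
path = euler_maruyama(lambda t, x: -x, lambda t, x: 1.0, 1.0, 1.0, 1000, rng)
```

The scheme converges (strongly, with order 1/2) to the solution of the SDE as the step size tends to zero; making this precise requires the integration theory developed in Chapter 2.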

This course covers the core part of Itô's calculus: it provides the necessary background in stochastic analysis for those who are interested in stochastic modelling and its applications, including the theory of finance, stochastic control and filtering, etc. Students who are majoring in (pure and applied) analysis, differential geometry, functional analysis, harmonic analysis, mathematical physics and PDEs will find this course relevant to their interests.

References:

Below is a list of textbooks and monographs on stochastic differential equations and related topics. Items 3 and 4 are recommended reading for this course.

1. N. Ikeda and S. Watanabe: Stochastic Differential Equations and Diffusion Processes. North-Holland (1981).

2. I. Karatzas and S. E. Shreve: Brownian Motion and Stochastic Calculus. Graduate Texts in Mathematics, Springer-Verlag (1988).

3. F. C. Klebaner: Introduction to Stochastic Calculus with Applications. Imperial College Press (1998).

4. B. Øksendal: Stochastic Differential Equations. 6th Edition, Universitext, Springer (2003).

5. D. Revuz and M. Yor: Continuous Martingales and Brownian Motion. Springer-Verlag (1991).

6. L. C. G. Rogers and D. Williams: Diffusions, Markov Processes and Martingales, Volume 2: Itô Calculus. Cambridge Mathematical Library, Cambridge University Press (2000).

7. S. E. Shreve: Stochastic Calculus for Finance II: Continuous-Time Models. Springer Finance Textbook, Springer (2004).

Web page:

http://www.maths.ox.ac.uk/qianz/private/SDE05.htm

Chapter 1

Brownian motion

Let us first set up our working framework and introduce several notions.

1.1 Probability space

Let (Ω, F, P) be a probability space. That is, Ω is a sample space of basic events (also called sample points), F is a σ-algebra of events, and P : F → [0, 1] is a probability measure, namely a function satisfying the following conditions:

1. P(Ω) = 1 (Ω is the certain event) and P(A) ≥ 0 for any event A ∈ F.

2. P is countably additive: if {A_i} is a sequence of mutually disjoint events, i.e. A_i ∈ F and A_i ∩ A_j = ∅ for i ≠ j, then

    P( ⋃_{i=1}^∞ A_i ) = Σ_{i=1}^∞ P(A_i) .

A random variable X is a measurable (R^d-valued) function on (Ω, F), that is, X : Ω → R^d such that for any Borel subset B of R^d, the inverse image of B under the map X,

    X^{−1}(B) = {ω : X(ω) ∈ B} ,

belongs to F. It is the measurability of the function X on Ω that allows us to talk about, for example, the probability of the event that X lies in a ball B: {ω : X(ω) ∈ B}.


If

    E|X| = ∫_Ω |X(ω)| P(dω) < ∞ ,

then we say X is integrable, and the expectation of X, denoted by EX, is the integral of X against the probability P:

    EX = ∫_Ω X(ω) P(dω) .

In general, we say a random variable X is in the L^p-space, written X ∈ L^p(Ω, F, P) for p > 0, if

    ∫_Ω |X(ω)|^p P(dω) < ∞ .

For p ≥ 1 the space of all p-th integrable random variables X is a Banach space under the usual algebraic operations for functions and the L^p-norm

    ||X||_{L^p} = ( ∫_Ω |X(ω)|^p P(dω) )^{1/p} .

We abbreviate ||X||_{L^p} to ||X||_p. The map p ↦ ||X||_p is increasing in p ∈ (0, ∞]. Indeed, for 0 < q < p, a simple use of Hölder's inequality (with exponent p/q, on a probability space) gives

    ||X||_q^q = ∫_Ω |X(ω)|^q P(dω)
             ≤ ( ∫_Ω ( |X(ω)|^q )^{p/q} P(dω) )^{q/p}
             = ( ∫_Ω |X(ω)|^p P(dω) )^{q/p} ,

so that ||X||_q ≤ ||X||_p.

Stochastic processes are mathematical models of random phenomena evolving in time. We thus need a set T of time parameters. In these lectures, T is either the set of non-negative integers Z₊ or [0, +∞); T is thus an ordered set endowed with the natural topology. A stochastic process (X_t)_{t∈T} is a family of random variables valued in a topological space S. In this course, unless otherwise specified, S will be the real line R, or the Euclidean space R^d of dimension d.


A stochastic process X may equally be regarded as a function (t, ω) ↦ X_t(ω) from T × Ω to R^d, which is the reason why a stochastic process is also called a random function.

For each sample point ω ∈ Ω, the function t ↦ X_t(ω) from T to S is called a sample path (trajectory) or a sample function. Naturally, a stochastic process X = (X_t)_{t∈T} is continuous (resp. right-continuous, right-continuous with left limits) if the sample paths t ↦ X_t(ω) are continuous (resp. right-continuous, right-continuous with left limits) for almost all ω.

Recall that a function f is right-continuous at t₀ if its right-limit at t₀ exists and equals f(t₀). Similarly, f is right-continuous with left limit at t₀ if f is right-continuous at t₀ and its left limit at t₀ exists. For example, monotone functions on intervals of the real line have right- and left-limits.

Let (τ_j)_{j≥1} be a sequence of independent, identically distributed random variables with exponential distribution of intensity λ > 0. Let

    T₀ = 0 ;  T_n = Σ_{j=1}^n τ_j ,

and define

    X_t = n  if  T_n ≤ t < T_{n+1} .

Then (X_t)_{t≥0} is a Poisson process of intensity λ: each sample path is constant on each interval [T_n, T_{n+1}), with jump 1 at T_n, and is right-continuous with left limit n − 1 at T_n.
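The jump-time construction above can be sketched numerically as follows (a Python illustration, not part of the original notes): exponential gaps are accumulated into jump times T_n, and X_t counts the jumps up to time t, with `side="right"` in the search implementing the right-continuity of the path.

```python
import numpy as np

def poisson_jump_times(lam, t_max, rng):
    """Sample the jump times T_1 < T_2 < ... of a Poisson process of
    intensity lam on [0, t_max], from i.i.d. exponential gaps tau_j."""
    times = []
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam)   # tau_j ~ Exp(lam), mean 1/lam
        if t > t_max:
            return np.array(times)
        times.append(t)

def X(t, jump_times):
    """X_t = n if T_n <= t < T_{n+1}: number of jumps up to time t,
    right-continuous in t."""
    return int(np.searchsorted(jump_times, t, side="right"))

rng = np.random.default_rng(1)
jumps = poisson_jump_times(2.0, 10.0, rng)   # E X_10 = lam * 10 = 20
```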

For any finite collection of times 0 ≤ t₁ < t₂ < ⋯ < t_n, the joint distribution of the random variables (X_{t₁}, …, X_{t_n}), given as

    μ_{t₁,…,t_n}(dx₁, …, dx_n) = P{X_{t₁} ∈ dx₁, …, X_{t_n} ∈ dx_n} ,

is called a finite-dimensional distribution of X = (X_t)_{t≥0}. If d = 1 and (X_t)_{t≥0} is a real stochastic process, the distribution μ_{t₁,…,t_n} is determined through its distribution function.


Measurability issues arise for stochastic processes in continuous time. For example, a subset of Ω like {ω : X_t(ω) ∈ B for all t ∈ [0, 1]} involves uncountably many conditions and need not be an event, so its probability does not make sense unless additional conditions on (X_t)_{t≥0} are imposed. In particular, a function like sup_{t∈K} X_t may fail to be a random variable. On the other hand, let B be a Borel measurable subset; if F is a finite or countable subset of [0, +∞), then

    {ω : X_t(ω) ∈ B for every t ∈ F}  and  sup_{t∈F} |X_t|

are measurable.

A convenient remedy (which still covers big enough interesting classes of stochastic processes) is the assumption that our process X is right-continuous almost surely, and that our working probability space (Ω, F, P) is complete in the sense that all subsets of probability-null sets are events.

The main task of stochastic analysis is to study the probability (or distribution) properties of random functions, which are determined by their families of finite-dimensional distributions.

Definition 1.1.6 Two stochastic processes X = (X_t)_{t≥0} and Y = (Y_t)_{t≥0} are equivalent (indistinguishable) if for every t ≥ 0 we have

    P{ω : X_t(ω) = Y_t(ω)} = 1 .

In this sense, a stochastic process X = (X_t)_{t≥0} with a given family of finite-dimensional distributions is unique up to equivalence of processes.

1.2 Brownian motion

Brownian motion is a mathematical model of the random movements observed by the botanist Robert Brown.

A continuous stochastic process B = (B_t)_{t≥0} on a probability space (Ω, F, P) taking values in R^d is called a standard Brownian motion (BM) in R^d if

1. P{B₀ = 0} = 1.

2. (B_t)_{t≥0} possesses independent increments: for any 0 ≤ t₀ < t₁ < ⋯ < t_n the random variables

    B_{t₁} − B_{t₀}, …, B_{t_n} − B_{t_{n−1}}

are independent.

3. For any t > s ≥ 0, the increment B_t − B_s has normal distribution N(0, t − s), that is, B_t − B_s has pdf (probability density function)

    p(t − s, x) = (2π(t − s))^{−d/2} exp( −|x|²/(2(t − s)) ) ;  x ∈ R^d .

Write p(t, x, y) = p(t, y − x) for the Gaussian transition kernel, and define, for f ∈ C_b(R^d),

    P_t f(x) = ∫_{R^d} f(y) p(t, x, y) dy .

Since

    p(t + s, x, y) = ∫_{R^d} p(t, x, z) p(s, z, y) dz ,

(P_t)_{t≥0} is a semigroup on C_b(R^d). (P_t)_{t≥0} is called the heat semigroup in R^d: if f ∈ C_b²(R^d), then u(t, x) = (P_t f)(x) solves the heat equation

    ( ∂/∂t − (1/2)Δ ) u(t, x) = 0 ;  u(0, ·) = f ,

where Δ = Σ_i ∂²/∂x_i² is the Laplace operator.

The connection between Brownian motion and the Laplace operator (hence with harmonic analysis) is demonstrated through the following identity:

    E f(x + B_t) = P_t f(x) = (2πt)^{−d/2} ∫_{R^d} f(y) e^{−|y−x|²/(2t)} dy .

Let us compute the moments of the increments. For p > 0,

    E|B_t − B_s|^p = (2π|t − s|)^{−d/2} ∫_{R^d} |x|^p exp( −|x|²/(2|t − s|) ) dx .

Making the substitution

    x/√|t − s| = y ;  dx = |t − s|^{d/2} dy ,

we thus have

    E|B_t − B_s|^p = (√|t − s|)^p (2π)^{−d/2} ∫_{R^d} |y|^p exp( −|y|²/2 ) dy
                   = c_p |t − s|^{p/2} ,

where

    c_p = (2π)^{−d/2} ∫_{R^d} |x|^p exp( −|x|²/2 ) dx .

In particular, for d = 1 and p = 2n one can compute that for every n ∈ Z₊,

    E(B_t − B_s)^{2n} = (2n)!/(2^n n!) · |t − s|^n .
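The even-moment formula can be checked by simulation (a Python sketch, not part of the original notes): draw a large sample of increments B_t − B_s ~ N(0, t − s) and compare empirical moments with (2n)!/(2^n n!) |t − s|^n.

```python
import math
import numpy as np

def even_moment(n, var):
    """E Z^(2n) for Z ~ N(0, var): (2n)!/(2^n n!) * var^n."""
    return math.factorial(2 * n) / (2**n * math.factorial(n)) * var**n

rng = np.random.default_rng(42)
t, s = 2.0, 0.5
# B_t - B_s ~ N(0, t - s); one million Monte Carlo samples
increments = rng.normal(0.0, math.sqrt(t - s), size=1_000_000)

# empirical vs. exact values of E (B_t - B_s)^(2n)
results = {n: (float(np.mean(increments ** (2 * n))), even_moment(n, t - s))
           for n in (1, 2, 3)}
```

With 10⁶ samples the empirical and exact moments agree to within a few percent even for n = 3.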


A one-dimensional BM is a centered Gaussian process with covariance function C(s, t) = s ∧ t. Indeed, it is almost obvious that any finite-dimensional distribution of B is Gaussian, so B is a centered Gaussian process, and its covariance function (if s < t) is

    E(B_t B_s) = E((B_t − B_s)B_s) + E B_s²
               = E(B_t − B_s) E B_s + E B_s²
               = s .

The case s ≥ t

is similar. Observe that a BM (B_t) must be a Gaussian process (i.e. a process whose finite-dimensional distributions are Gaussian) with mean zero and covariance function E(B_t B_s) = s ∧ t. Therefore we may first construct a Gaussian process (X_t) such that EX_t = 0 and E(X_t X_s) = s ∧ t on some completed probability space (Ω, F, P). It can be verified that (X_t)_{t≥0} satisfies all conditions in the definition of BM except the continuity of its sample paths. The Gaussian process (X_t) may fail to be continuous, so we need to modify the construction of X_t to make it continuous. Let D = {j/2^n : j ∈ Z₊, n ∈ N} be the dyadic rationals. The important fact we shall use is that D is dense in R₊. Define

    H = ⋃_{N=1}^∞ ⋂_{l=1}^∞ ⋃_{n=l}^∞ ⋃_{j=1}^{N·2^n} { |X_{j/2^n} − X_{(j−1)/2^n}| ≥ 2^{−n/8} }

and, for fixed N,

    A_l = ⋃_{n=l}^∞ ⋃_{j=1}^{N·2^n} { |X_{j/2^n} − X_{(j−1)/2^n}| ≥ 2^{−n/8} } .


We claim that, for each fixed N, ⋂_{l=1}^∞ A_l has probability zero, and therefore, as H is a countable union of events of probability zero, P(H) = 0. Since X_{j/2^n} − X_{(j−1)/2^n} has distribution N(0, 2^{−n}),

    P( ⋃_{j=1}^{N·2^n} { |X_{j/2^n} − X_{(j−1)/2^n}| ≥ 2^{−n/8} } )
      ≤ Σ_{j=1}^{N·2^n} P( |X_{j/2^n} − X_{(j−1)/2^n}| ≥ 2^{−n/8} )
      = N·2^n P( |X_{1/2^n}| ≥ 2^{−n/8} )
      ≤ N·2^n · 2^{n/2} E|X_{1/2^n}|⁴       (Chebyshev with the 4th moment)
      = N·2^n · 2^{n/2} · 3·2^{−2n}
      = 3N·2^{−n/2} ,

so that

    P(A_l) ≤ Σ_{n=l}^∞ P( ⋃_{j=1}^{N·2^n} { |X_{j/2^n} − X_{(j−1)/2^n}| ≥ 2^{−n/8} } )
           ≤ Σ_{n=l}^∞ 3N·2^{−n/2}
           = ( 3N√2/(√2 − 1) ) · 2^{−l/2} .

Therefore

    P( ⋂_{l=1}^∞ A_l ) = lim_{l→∞} P(A_l) ≤ lim_{l→∞} ( 3N√2/(√2 − 1) ) · 2^{−l/2} = 0 .

It follows that P(H) = 0, and thus P(H^c) = 1. On the other hand, by De Morgan's laws,

    H^c = ⋂_{N=1}^∞ ⋃_{l=1}^∞ ⋂_{n=l}^∞ ⋂_{j=1}^{N·2^n} { ω : |X_{j/2^n}(ω) − X_{(j−1)/2^n}(ω)| < 2^{−n/8} } ,


and thus, if ω ∈ H^c, then for any N there is an l such that for all n > l and all j = 1, …, N·2^n we have

    |X_{j/2^n}(ω) − X_{(j−1)/2^n}(ω)| < 2^{−n/8} .

One may show from this that for any ω ∈ H^c and t ≥ 0 the limit of X_s(ω) exists as s ↓ t along the dyadic numbers, i.e. as s → t with s ∈ D. Moreover, D is dense in [0, ∞), so for any t ∈ [0, ∞) we may define

    B_t(ω) = lim_{s∈D, s↓t} X_s(ω)   if ω ∈ H^c

(and B_t(ω) = 0 otherwise). This yields a continuous process which coincides with X_t on H^c when t ∈ D. It remains as an exercise to verify that (B_t)_{t≥0} is a Brownian motion in R.
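The dyadic flavour of this construction is close in spirit to Lévy's midpoint construction of BM on [0, 1], which the following Python sketch illustrates (an illustration, not the construction used in the text): the value at the midpoint of [a, b] is the average of the endpoint values plus an independent N(0, (b − a)/4) correction, and refining over dyadic levels produces a Brownian path.

```python
import numpy as np

def levy_construction(levels, rng):
    """Brownian motion on [0,1] at the dyadic points j/2**levels,
    built by midpoint refinement: B at the midpoint of [a, b] is the
    average of the endpoint values plus an independent N(0, (b-a)/4)
    correction."""
    B = np.array([0.0, rng.normal(0.0, 1.0)])   # B_0 = 0, B_1 ~ N(0, 1)
    dt = 1.0                                     # current grid spacing
    for _ in range(levels):
        dt /= 2.0                                # new spacing after refining
        # midpoint corrections have variance (old spacing)/4 = dt/2
        mid = (B[:-1] + B[1:]) / 2.0 \
              + rng.normal(0.0, np.sqrt(dt / 2.0), size=len(B) - 1)
        out = np.empty(2 * len(B) - 1)
        out[0::2] = B                            # keep coarse-grid values
        out[1::2] = mid                          # interleave midpoints
        B = out
    return B   # values at t = j / 2**levels, j = 0, ..., 2**levels
```

By construction the increments over the finest grid are i.i.d. N(0, 2^{−levels}), as a quick check of the sample variance confirms.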

1.2.1 The scaling property

Let B = (B_t)_{t≥0} be a standard BM in R^d. By definition, the distribution of the increments of B is stationary, so that for any fixed time S, the process B̃_t = B_{t+S} − B_S is again a standard Brownian motion. This statement is true indeed for any finite stopping time S; see section 1.2.3.

Scaling: for any λ > 0, the process

    M_t = λ B_{t/λ²}

is a standard BM in R^d.

Symmetry: (−B_t)_{t≥0} is also a standard BM, so that (B_t)_{t≥0} and (−B_t)_{t≥0} have the same distribution. More generally, for any orthogonal transformation O of R^d, (O B_t)_{t≥0} is a standard BM in R^d. That is, BM is invariant under the action of the orthogonal group of R^d: its finite-dimensional distributions are invariant under the orthogonal group action.

Time inversion: the process defined by

    M₀ = 0 ,  M_t = t B_{1/t} for t > 0

is a standard BM in R.


Indeed, for s, t > 0,

    E(M_t M_s) = ts E(B_{1/t} B_{1/s}) = ts (1/t ∧ 1/s) = s ∧ t ,

so that (M_t) is a centered Gaussian process with covariance function s ∧ t. Moreover, t ↦ M_t is continuous for t > 0. To see the continuity of M_t at t = 0, we use the fact that

    lim_{t→∞} B_t/t = 0  a.s.,

which is the law of large numbers for BM. We will not prove this here, but see the remark below.

Remark 1.2.9 To convince yourself that the law of large numbers for BM is true, first look at the special case where t → ∞ through the natural numbers, namely

    lim_{n→∞} B_n/n = lim_{n→∞} (X₁ + ⋯ + X_n)/n ,

where X_i = B_i − B_{i−1}. Notice that (X_i) is a sequence of independent random variables with identical distribution N(0, 1), so that by the strong law of large numbers

    (X₁ + ⋯ + X_n)/n → E X₁ = 0  almost surely.

In order to handle the general case t → ∞, we may write t = [t] + r_t, where [t] is the integer part of t and r_t ∈ [0, 1). Then

    B_t/t = (B_t − B_{[t]})/t + ([t]/t)(B_{[t]}/[t]) ;

the second term tends to 0 since, as t → ∞, [t]/t → 1 and B_{[t]}/[t] → 0. To see why

    (B_t − B_{[t]})/t → 0

as t → ∞, we need the following Gaussian tail estimate for BM (see section 1.2.3 below):

    P{ ω : sup_{t∈[0,T]} |B_t(ω)| ≥ R } ≤ 2 √(2/π) ∫_{R/√T}^∞ e^{−x²/2} dx
                                        ≤ 2 exp( −R²/(2T) )  for all R > 0 .


Combining this estimate with the Borel-Cantelli lemma: for every ε > 0,

    Σ_{n=0}^∞ P{ ω : sup_{t∈[n,n+1]} |B_t(ω) − B_n(ω)| ≥ εn } < ∞ ,

so that

    lim_{n→∞} sup_{t∈[n,n+1]} |B_t − B_n|/n = 0  almost surely.

For more detail, see D. Stroock, Probability Theory: An Analytic View, pages 180-181.
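The law of large numbers B_n/n → 0 is easy to see numerically (a Python illustration, not part of the original notes): build B_n as a cumulative sum of i.i.d. N(0, 1) increments and watch the ratio B_n/n, whose typical size is of order 1/√n.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
increments = rng.normal(0.0, 1.0, size=n)   # X_i = B_i - B_{i-1} ~ N(0, 1)
B = np.cumsum(increments)                   # B_1, B_2, ..., B_n
ratio = B / np.arange(1, n + 1)             # B_n / n

# B_n / n = (X_1 + ... + X_n)/n -> E X_1 = 0 by the strong LLN;
# at n = 100000 the ratio is already tiny (typical size 1/sqrt(n) ~ 0.003)
```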

1.2.2 Markov property and finite-dimensional distributions

A standard d-dimensional Brownian motion B = (B_t)_{t≥0} is a Markov process with transition semigroup (P_t)_{t≥0}, the heat semigroup, where

    P_t f(x) = ∫_{R^d} p(t, x, y) f(y) dy

and

    p(t, x, y) = (2πt)^{−d/2} exp( −|x − y|²/(2t) )

is the Gaussian kernel. To make this statement precise, let us recall two notions from probability theory.

Conditional Expectation

The concept of conditional expectation is perhaps the most important one in the theory of probability. The properties studied in probability theory, such as independence, the Markov property and the martingale property, can be rephrased in terms of conditional expectations. It is a good idea to review what you have learned about conditioning in the Mods and Part A probability courses — conditional probability P(A|B), conditional expectation E(X|A), and conditional probability density functions — and relate all of these notions to the conditional expectation E(X|G) (see the definition below).

The formal definition stated below was formulated by J. L. Doob. Let X be an integrable or non-negative random variable valued in R on a probability space (Ω, F, P), and let G be a sub-σ-algebra of F. Then the conditional expectation E(X|G) of X given G is a G-measurable random variable (unique up to almost sure equality) such that for every A ∈ G,

    E{ E(X|G) 1_A } = E{ X 1_A } .

E(X|G) is the best prediction of X given the available information G. The defining equation holds not only for indicator functions but for any bounded random variable measurable with respect to G: if Y is measurable with respect to G, then

    E{ E(X|G) Y } = E{ X Y } .

As in any textbook, E(X|Y) means E(X|σ(Y)), where σ(Y) is the smallest σ-algebra for which Y is measurable. It can be shown that E(X|Y) is a measurable function of Y, that is, there is a function F such that

    E(X|Y) = F(Y) .

By definition, if X is independent of G, then

    E(X|G) = E X ,

while if X is G-measurable, then E(X|G) = X; more generally, E(XY|G) = Y E(X|G) for any bounded G-measurable random variable Y.

If X = (X_t)_{t≥0} is a stochastic process on (Ω, F, P), then for each t ≥ 0 we set

    F_t⁰ = σ{X_s : s ≤ t} .

Then {F_t⁰}_{t≥0} is an increasing family of sub-σ-algebras of F, and for each t ≥ 0, X_t is measurable with respect to F_t⁰ (we then say the process (X_t)_{t≥0} is adapted to {F_t⁰}_{t≥0}). Obviously, {F_t⁰}_{t≥0} is the smallest increasing family of sub-σ-algebras with this property; it is called the filtration generated by the process X = (X_t)_{t≥0}. In general, we introduce an increasing family of sub-σ-algebras of F,


called a filtration. F_t represents the information available up to time t. A stochastic process X = (X_t)_{t≥0} is adapted to the filtration (F_t)_{t≥0} if X_t is measurable with respect to F_t for all t ≥ 0:

    {ω : X_t(ω) ∈ B} ∈ F_t  for every Borel set B .

A probability space equipped with a filtration is called a filtered probability space (Ω, F, F_t, P).

Let (F_t)_{t≥0} be the natural filtration of Brownian motion (B_t)_{t≥0}, that is, F_t is the completion of the σ-algebra σ{B_s : s ≤ t}, and let F_∞ = σ( ⋃_{t≥0} F_t ). It is a fact that the natural filtration (F_t)_{t≥0} is continuous. We call (F_t)_{t≥0} together with F_∞ the Brownian filtration. It follows from the independence of increments that, for t > s, B_t − B_s is independent of F_s.

Let

    p(t, x) = (2πt)^{−d/2} e^{−|x|²/(2t)}

and

    P_t f(x) = ∫_{R^d} f(y) p(t, x − y) dy  for every t > 0 .

For 0 < s < t, the joint distribution of (B_s, B_t) is given by

    P{B_s ∈ dx, B_t ∈ dy} = p(s, x) p(t − s, y − x) dx dy .

Indeed, (B_s, B_t − B_s) has pdf

    p(s, x₁) p(t − s, x₂) ,

and thus, for any bounded Borel measurable function f,

    E f(B_s, B_t) = E f(B_s, (B_t − B_s) + B_s)
                  = ∬ f(x₁, x₂ + x₁) p(s, x₁) p(t − s, x₂) dx₁ dx₂ .


Changing variables x = x₁, y = x₂ + x₁ in the double integral, the Jacobian is 1, so that dx₁ dx₂ = dx dy (as measures), and therefore

    E f(B_s, B_t) = ∬ f(x, y) p(s, x) p(t − s, y − x) dx dy .

Theorem 1.2.13 For any t > s and any bounded Borel measurable function f we have

    E{ f(B_t) | F_s } = P_{t−s} f(B_s)  a.s.  (1.2)

where (P_t)_{t>0} is the heat semigroup. In particular, E{f(B_t)|F_s} = E{f(B_t)|B_s}, which is called the Markov property, and E{f(B_t)|F_s} equals F(B_s), where

    F(x) = P_{t−s} f(x) = (2π(t − s))^{−d/2} ∫_{R^d} f(y) e^{−|x−y|²/(2(t−s))} dy .

Proof. We first prove E{f(B_t)|F_s} = E{f(B_t)|B_s}, which is the Markov property of (B_t)_{t≥0}. Clearly we only need to prove this for bounded continuous (and smooth) functions f. For such a function, we can show that

    f(x + y) = lim_{n→∞} Σ_{k=1}^{N_n} f_{nk}(x) g_{nk}(y)

for some functions f_{nk}, g_{nk} (for example, by taking the Taylor expansion of f(x + y)). Hence, writing B_t = (B_t − B_s) + B_s,

    E{ f(B_t) | F_s } = lim_{n→∞} Σ_{k=1}^{N_n} E{ f_{nk}(B_t − B_s) g_{nk}(B_s) | F_s }
                      = lim_{n→∞} Σ_{k=1}^{N_n} E{ f_{nk}(B_t − B_s) | F_s } g_{nk}(B_s)
                      = lim_{n→∞} Σ_{k=1}^{N_n} E{ f_{nk}(B_t − B_s) } g_{nk}(B_s)
                      = E{ f(B_t) | B_s } ,

where we used that g_{nk}(B_s) is F_s-measurable and that B_t − B_s is independent of F_s.


To compute the conditional expectation E{f(B_t)|B_s}, we use the fact that the pdf of (B_s, B_t) is

    p(s, x) p(t − s, y − x) ,

so that, for any Borel set A,

    E{ 1_A(B_s) f(B_t) } = ∬ 1_A(x) f(y) p(s, x) p(t − s, y − x) dx dy
                         = ∫ 1_A(x) P_{t−s} f(x) p(s, x) dx
                         = E{ 1_A(B_s) P_{t−s} f(B_s) } ,

as

    P_{t−s} f(x) = ∫ f(y) p(t − s, y − x) dy .

Therefore

    E( f(B_t) | B_s ) = P_{t−s} f(B_s) .

The finite-dimensional distributions of BM can be written down explicitly in terms of the Gaussian density function p(t, x).

Proposition 1.2.14 For any 0 < t₁ < t₂ < ⋯ < t_n, the (R^{nd}-valued) random variable (B_{t₁}, …, B_{t_n}) has pdf

    p(t₁, x₁) p(t₂ − t₁, x₂ − x₁) ⋯ p(t_n − t_{n−1}, x_n − x_{n−1}) ,

where

    p(t, x) = (2πt)^{−d/2} e^{−|x|²/(2t)}

is the standard Gaussian pdf in R^d. That is, the joint distribution of (B_{t₁}, …, B_{t_n}) is given by

    P{B_{t₁} ∈ dx₁, …, B_{t_n} ∈ dx_n}
      = p(t₁, x₁) p(t₂ − t₁, x₂ − x₁) ⋯ p(t_n − t_{n−1}, x_n − x_{n−1}) dx₁ ⋯ dx_n .  (1.3)

Proof. One way is to compute E(f(B_{t₁}, …, B_{t_n})) directly, using the fact that B_{t₁}, B_{t₂} − B_{t₁}, …, B_{t_n} − B_{t_{n−1}} are independent and have joint pdf

    p(t₁, z₁) p(t₂ − t₁, z₂) ⋯ p(t_n − t_{n−1}, z_n) .
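The independent-increments description gives a direct way to sample finite-dimensional distributions of BM, sketched below in Python (an illustration, not part of the original notes): draw the increments as independent Gaussians and take cumulative sums. The covariance Cov(B_s, B_t) = s ∧ t can then be checked empirically.

```python
import numpy as np

def sample_fdd(times, n_samples, rng):
    """Sample (B_{t_1}, ..., B_{t_n}) for a one-dimensional BM, using
    the fact that B_{t_1}, B_{t_2}-B_{t_1}, ..., B_{t_n}-B_{t_{n-1}}
    are independent N(0, t_i - t_{i-1}) variables."""
    times = np.asarray(times, dtype=float)
    gaps = np.diff(np.concatenate(([0.0], times)))   # t_1, t_2 - t_1, ...
    incs = rng.normal(0.0, np.sqrt(gaps), size=(n_samples, len(times)))
    return np.cumsum(incs, axis=1)                   # columns: B_{t_1}, ..., B_{t_n}

rng = np.random.default_rng(5)
samples = sample_fdd([0.5, 1.0, 2.0], 200_000, rng)
# Var B_t = t and Cov(B_s, B_t) = min(s, t)
```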


Alternatively, there is an argument which uses only the Markov property. Indeed, by the Markov property,

    E( f₁(B_{t₁}) ⋯ f_n(B_{t_n}) )
      = E[ E( f₁(B_{t₁}) ⋯ f_n(B_{t_n}) | F_{t_{n−1}} ) ]
      = E[ f₁(B_{t₁}) ⋯ f_{n−1}(B_{t_{n−1}}) E( f_n(B_{t_n}) | F_{t_{n−1}} ) ]
      = E[ f₁(B_{t₁}) ⋯ f_{n−1}(B_{t_{n−1}}) P_{t_n−t_{n−1}} f_n(B_{t_{n−1}}) ]
      = E[ f₁(B_{t₁}) ⋯ f_{n−2}(B_{t_{n−2}}) ( f_{n−1} · P_{t_n−t_{n−1}} f_n )(B_{t_{n−1}}) ] ,

which reduces the number of times t_i to n − 1, so the conclusion follows immediately by induction.

Write B = (B¹, …, B^d) for a standard d-dimensional Brownian motion. Then for each j, (B_t^j)_{t≥0} is a standard BM in R, and the components (B_t^j)_{t≥0} (j = 1, …, d) are mutually independent.

1.2.3 The reflection principle

Brownian motion starts afresh at a stopping time; i.e., the Markov property for Brownian motion remains true at stopping times. Therefore Brownian motion possesses the strong Markov property, a very important property which had been used by Paul Lévy, in the form of the reflection principle, long before the concept of the strong Markov property had been properly defined. We will exhibit this principle by computing the distribution of the running maximum of a Brownian motion.

In many applications, especially in statistics, we would like to estimate the distributions of running maxima of a stochastic process. For Brownian motion B = (B_t)_{t≥0}, the distribution of sup_{s∈[0,t]} B_s can be derived by use of the reflection principle.

Let B = (B_t)_{t≥0} be a standard Brownian motion on (Ω, F, F_t, P) in R. Let b > 0, a ≤ b, and let

    T_b = inf{t > 0 : B_t = b} .

Then T_b is a stopping time, and the Brownian motion starts afresh as a standard Brownian motion after hitting b, and therefore

    P{ sup_{s∈[0,t]} B_s ≥ b, B_t ≤ a } = P{ sup_{s∈[0,t]} B_s ≥ b, B_t ≥ 2b − a }
                                        = P{ B_t ≥ 2b − a } ,

where the first equality follows from the fact that the Brownian motion starting at T_b (at position b: B_{T_b} = b) runs afresh like a Brownian motion, so that it moves with equal probability above and below the line y = b. The second equality follows from 2b − a = b + (b − a) ≥ b.

The above equation may be written as

    P{T_b ≤ t, B_t ≤ a} = P{T_b ≤ t, B_t ≥ 2b − a} = P{B_t ≥ 2b − a} ,

which can be justified by the strong Markov property of Brownian motion, a topic that we will not pursue here. Therefore

    P{ sup_{s∈[0,t]} B_s ≥ b, B_t ≤ a } = (2πt)^{−1/2} ∫_{2b−a}^{+∞} e^{−x²/(2t)} dx ,

which gives us the joint distribution of a Brownian motion and its maximum at a fixed time t. By differentiating in a and in b we conclude the following.

Theorem 1.2.16 Let B = (B_t)_{t≥0} be a standard BM in R, and let t > 0. Then the pdf of the joint distribution of the random variables (sup_{s∈[0,t]} B_s, B_t) is given by

    P{ sup_{s∈[0,t]} B_s ∈ db, B_t ∈ da } = ( 2(2b − a)/√(2πt³) ) exp( −(2b − a)²/(2t) ) da db .

In particular, for any b > 0,

    P{ sup_{s∈[0,t]} B_s ≥ b } = P{T_b ≤ t}
      = ( 2/√(2πt³) ) ∬_{ {a ≤ c, c ≥ b} } (2c − a) exp( −(2c − a)²/(2t) ) da dc
      = ( 2/√(2πt³) ) ∫_b^{+∞} ∫_{−∞}^{c} (2c − a) exp( −(2c − a)²/(2t) ) da dc
      = ( 2/√(2πt³) ) ∫_b^{+∞} ∫_{c}^{+∞} x exp( −x²/(2t) ) dx dc       (x = 2c − a)
      = ( 2/√(2πt) ) ∫_b^{+∞} exp( −x²/(2t) ) dx ,

which is the exact distribution function of sup_{s∈[0,t]} B_s (equivalently, of the stopping time T_b) and leads to an exact formula for the tail probability of Brownian motion.
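The reflection-principle identity P{sup_{s≤t} B_s ≥ b} = 2 P{B_t ≥ b} = erfc(b/√(2t)) can be checked by Monte Carlo (a Python sketch, not part of the original notes). Note that the discrete-time maximum slightly underestimates the true supremum, so the empirical value sits a little below the exact one.

```python
import math
import numpy as np

rng = np.random.default_rng(11)
n_paths, n_steps, t, b = 20_000, 500, 1.0, 1.0

# simulate discretized BM paths on [0, t]
dW = rng.normal(0.0, math.sqrt(t / n_steps), size=(n_paths, n_steps))
paths = np.cumsum(dW, axis=1)
running_max = paths.max(axis=1)

empirical = float(np.mean(running_max >= b))
# reflection principle: P(sup_{s<=t} B_s >= b) = 2 P(B_t >= b) = erfc(b / sqrt(2t))
exact = math.erfc(b / math.sqrt(2 * t))
```

With this seed and discretization the two values agree to a couple of percent; refining the time grid reduces the discretization bias at rate √(t/n_steps).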


1.2.4 Martingale property

You may find a summary of martingales in discrete time in the Appendix. Recall the definition of the martingale property.

Let X = (X_t)_{t≥0} be a stochastic process adapted to a filtration (F_t)_{t≥0} such that each random variable X_t is integrable. Then X = (X_t)_{t≥0} is a martingale (resp. super-martingale, sub-martingale) if for any t > s we have

    E(X_t | F_s) = X_s  (martingale),

resp.

    E(X_t | F_s) ≤ X_s  (super-martingale),

resp.

    E(X_t | F_s) ≥ X_s  (sub-martingale).

If X is a martingale (resp. super-martingale, sub-martingale), then t ↦ E(X_t) is a constant function E(X₀) (resp. a decreasing function, an increasing function).

Let (B_t)_{t≥0} be a standard BM in R^d with its natural filtration (F_t)_{t≥0}. We have the following.

Proposition 1.2.19 1) Each B_t is p-th integrable for any p > 0, and for t > s

    E( |B_t − B_s|^p ) = c_{p,d} |t − s|^{p/2} .  (1.4)

2) Each component (B_t^j)_{t≥0} is a continuous martingale.

3) For each pair i, j, M_t = B_t^i B_t^j − δ_{ij} t is a continuous martingale.

Proof. Since B_t − B_s is independent of F_s when t > s, we have E(B_t − B_s | F_s) = E(B_t − B_s) = 0, so that

    E(B_t | F_s) = E(B_s | F_s) = B_s .


For 3) (in the one-dimensional case),

    E( (B_t − B_s)² | F_s ) + E( 2B_s(B_t − B_s) | F_s )
      = E( (B_t − B_s)² ) + 2B_s E( (B_t − B_s) | F_s )
      = E( (B_t − B_s)² )
      = t − s ,

so that

    E( B_t² − t | F_s ) = B_s² − s .

Theorem. Let (B_t)_{t≥0} be a continuous adapted process such that B₀ = 0. Then (B_t)_{t≥0} is a standard BM in R if and only if for any θ ∈ R and t > s

    E{ exp( i⟨θ, B_t − B_s⟩ ) | F_s } = exp( −(t − s)|θ|²/2 ) .  (1.5)

Indeed, (1.5) implies that B_t − B_s is independent of F_s and has normal distribution with mean zero and variance t − s. Conversely, if (B_t)_{t≥0} is a standard BM in R, then B_t − B_s is independent of F_s and has a normal distribution of mean zero and variance (t − s), so that

    E{ exp( i⟨θ, B_t − B_s⟩ ) | F_s } = E{ exp( i⟨θ, B_t − B_s⟩ ) }
      = (2π(t − s))^{−1/2} ∫_R e^{i⟨θ,x⟩} e^{−|x|²/(2(t−s))} dx
      = exp( −(t − s)|θ|²/2 ) .

Equivalently, (1.5) says that for each θ,

    M_t = exp( i⟨θ, B_t⟩ + |θ|²t/2 )

is a martingale.


Remark 1.2.22 Note that both sides of (1.5) are analytic in θ, so that the identity continues to hold for any complex θ. In particular, replacing θ by −iθ we obtain

    E{ exp( ⟨θ, B_t − B_s⟩ ) | F_s } = exp( (t − s)|θ|²/2 ) ,

so that for any θ,

    exp( ⟨θ, B_t⟩ − |θ|²t/2 )

is a continuous martingale. This statement will be extended to vector fields; the resulting identity is called the Cameron-Martin formula.
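The martingale property implies in particular that E exp(θB_t − θ²t/2) = 1 for every t, which is easy to confirm by Monte Carlo (a Python sketch, not part of the original notes):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
theta, t = 0.8, 2.0
B_t = rng.normal(0.0, math.sqrt(t), size=2_000_000)   # B_t ~ N(0, t)

# the stochastic exponential exp(theta*B_t - theta^2 t / 2) has
# constant expectation 1 (it is a martingale started at 1)
m = float(np.mean(np.exp(theta * B_t - theta**2 * t / 2)))
```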

Brownian motion belongs to the family of Lévy processes — processes in R^d which possess stationary independent increments — and (1.5) is the Lévy-Khinchin formula for BM. In general, if (X_t) is a Lévy process in R^d, then

    E{ exp( i⟨θ, X_t − X_s⟩ ) | F_s } = exp( ψ(θ)(t − s) )

for t > s and θ ∈ R^d, where

    ψ(θ) = −(1/2)⟨AAᵀθ, θ⟩ + i⟨b, θ⟩
           + ∫_{R^d∖{0}} ( e^{i⟨θ,x⟩} − 1 − i 1_{{|x|<1}} ⟨θ, x⟩ ) ν(dx)

for some d × r matrix A, a vector b, and the Lévy measure ν(dx) of (X_t), which is a σ-finite measure on R^d ∖ {0} satisfying the integrability condition

    ∫_{R^d∖{0}} |x|²/(1 + |x|²) ν(dx) < +∞ .  (1.6)

1.3 Quadratic variational processes

As we have seen, both (one-dimensional) Brownian motion B_t and M_t = B_t² − t are martingales, and thus

    B_t² = M_t + A_t ,

where of course A_t = t. Therefore the continuous sub-martingale B_t² is the sum of a martingale and an adapted increasing process. We will see that this decomposition of B_t² is the key to establishing Itô's integration theory.


Let

    D = {0 = t₀ < t₁ < ⋯ < t_n = t}

be a finite partition of the interval [0, t], and let

    V_D = Σ_{l=1}^n |B_{t_l} − B_{t_{l−1}}|²

be the quadratic variation of B over the partition D, which is a non-negative random variable. Then

    E V_D = t

and the variance of V_D is

    E{ (V_D − E V_D)² } = 2 Σ_{l=1}^n (t_l − t_{l−1})² .

Proof. Indeed,

    E V_D = Σ_{l=1}^n E|B_{t_l} − B_{t_{l−1}}|² = Σ_{l=1}^n (t_l − t_{l−1}) = t .

To prove the second formula we proceed as follows:

    E{ (V_D − E V_D)² }
      = E( Σ_{l=1}^n |B_{t_l} − B_{t_{l−1}}|² − t )²
      = E( Σ_{l=1}^n [ |B_{t_l} − B_{t_{l−1}}|² − (t_l − t_{l−1}) ] )²
      = Σ_{k,l=1}^n E{ [ |B_{t_k} − B_{t_{k−1}}|² − (t_k − t_{k−1}) ][ |B_{t_l} − B_{t_{l−1}}|² − (t_l − t_{l−1}) ] }
      = Σ_{l=1}^n E{ ( |B_{t_l} − B_{t_{l−1}}|² − (t_l − t_{l−1}) )² }
        + Σ_{k≠l} E{ [ |B_{t_k} − B_{t_{k−1}}|² − (t_k − t_{k−1}) ][ |B_{t_l} − B_{t_{l−1}}|² − (t_l − t_{l−1}) ] } .


Since increments over disjoint intervals are independent, the expectation of each product in the last sum on the right-hand side equals the product of the expectations, which gives contribution zero; therefore

    E{ (V_D − E V_D)² }
      = Σ_{l=1}^n E{ ( |B_{t_l} − B_{t_{l−1}}|² − (t_l − t_{l−1}) )² }
      = Σ_{l=1}^n [ E|B_{t_l} − B_{t_{l−1}}|⁴ − 2(t_l − t_{l−1}) E|B_{t_l} − B_{t_{l−1}}|² + (t_l − t_{l−1})² ]
      = Σ_{l=1}^n 2 (t_l − t_{l−1})² ,

using E|B_{t_l} − B_{t_{l−1}}|⁴ = 3(t_l − t_{l−1})².

It follows that

    lim_{m(D)→0} Σ_l |B_{t_l} − B_{t_{l−1}}|² = t  in L²(Ω, P)

for any t, where D runs over all finite partitions of the interval [0, t] and m(D) = max_l (t_l − t_{l−1}) denotes the mesh of D. Therefore

    lim_{m(D)→0} Σ_l |B_{t_l} − B_{t_{l−1}}|² = t  in probability.


Indeed,

    E| Σ_l |B_{t_l} − B_{t_{l−1}}|² − t |² = E| V_D − E(V_D) |²
      = 2 Σ_{l=1}^n (t_l − t_{l−1})²
      ≤ 2 m(D) Σ_{l=1}^n (t_l − t_{l−1})
      = 2 t m(D) ,

and therefore

    lim_{m(D)→0} E| Σ_l |B_{t_l} − B_{t_{l−1}}|² − t |² = 0 .

We next show that along dyadic partitions the convergence takes place almost surely. To this end, we recall the Borel-Cantelli lemma: if Σ_n P(A_n) < ∞, then P(limsup_n A_n) = 0, where

    limsup_n A_n = ⋂_{m=1}^∞ ⋃_{n=m}^∞ A_n = {ω : ω belongs to infinitely many A_n} .

If in addition {A_n} are independent, then Σ_n P(A_n) = ∞ if and only if P(limsup_n A_n) = 1.

Proposition 1.3.3 Let (B_t)_{t≥0} be a standard BM in R. Then for any t > 0 we have

    Σ_{j=1}^{2^n} ( B_{jt/2^n} − B_{(j−1)t/2^n} )² → t  a.s.  (1.7)

as n → ∞.

Proof. Denote by V_n the sum on the left-hand side of (1.7), i.e. the quadratic variation of B over the dyadic partition

    D_n = { 0 = (0/2^n)t < (1/2^n)t < ⋯ < (2^n/2^n)t = t } .

Then E V_n = t, and by the variance formula above,


    E| V_n − E V_n |² = 2 Σ_{l=1}^{2^n} ( (l/2^n)t − ((l−1)/2^n)t )²
                      = 2 · 2^n ( t/2^n )²
                      = t²/2^{n−1} .

Therefore, by Markov's inequality (applied to |V_n − E V_n|²),

    P( |V_n − E V_n| ≥ 1/n ) ≤ n² E|V_n − E V_n|² = ( n²/2^{n−1} ) t² ,

so that

    Σ_{n=1}^∞ P( |V_n − E V_n| ≥ 1/n ) = t² Σ_{n=1}^∞ n²/2^{n−1} < +∞ .

By the Borel-Cantelli lemma, it follows that V_n → t almost surely.
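Proposition 1.3.3 is easy to observe numerically (a Python sketch, not part of the original notes): sample one path at the finest dyadic level and compute the quadratic variation sums V_n over coarser dyadic partitions by subsampling.

```python
import numpy as np

rng = np.random.default_rng(9)
t, K = 2.0, 16
n_fine = 2**K
# one BM path sampled at times j*t/2^K, j = 0, ..., 2^K
B = np.concatenate(
    ([0.0], np.cumsum(rng.normal(0.0, np.sqrt(t / n_fine), size=n_fine))))

def V(n):
    """V_n = sum of squared increments over the dyadic partition
    with 2^n pieces (obtained by subsampling the fine grid)."""
    pts = B[:: 2 ** (K - n)]     # B at times j * t / 2^n
    return float(np.sum(np.diff(pts) ** 2))

# V_n -> t = 2.0 almost surely as n grows (Proposition 1.3.3);
# Var V_n = t^2 / 2^(n-1), so the fluctuations shrink geometrically
values = [V(n) for n in (2, 8, 16)]
```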

Remark 1.3.4 Indeed, the conclusion is true for nested partitions. More precisely, for each n let D_n = {t_{i,n} : i = 0, …, k_n} be a finite partition of [0, t] with D_n ⊆ D_{n+1} and m(D_n) → 0. Then

    Σ_{i=1}^{k_n} ( B_{t_{i,n}} − B_{t_{i−1,n}} )² → t  a.s.  (1.8)

as n → ∞. Indeed, in this case, if we denote by M_n the left-hand side of (1.8), then (M_n)_{n≥1} is a non-negative martingale with respect to the filtration G_n = σ( B_{t_{i,n}} : i = 0, …, k_n ), so (1.8) follows from the martingale convergence theorem.

One may further show that

    sup_D Σ_l |B_{t_l} − B_{t_{l−1}}|^p < ∞  a.s.

if p > 2, where the sup is taken over all finite partitions D of [0, 1], while

    sup_D Σ_l |B_{t_l} − B_{t_{l−1}}|² = ∞  a.s.

That is to say, Brownian motion has finite p-variation for any p > 2, but not for p = 2 in this supremum sense. Indeed, almost all Brownian sample paths are α-Hölder continuous for any α < 1/2 but not for α = 1/2. It follows that almost all Brownian motion paths are nowhere differentiable. We will not go into a deeper study of the sample paths of BM, which is not needed in order to develop Itô's calculus for Brownian motion.

In view of these results, let us introduce the following definition. A function f with values in R^d has finite p-variation (where p > 0 is a constant) on [0, T] if

    sup_D Σ_i |f(t_i) − f(t_{i−1})|^p < +∞ ,

where D runs over all finite partitions of [0, T]; f has finite (total) variation if it has finite 1-variation. Real-valued functions of finite variation are exactly the differences of two increasing functions (and therefore have at most countably many discontinuity points).

A stochastic process $V=(V_t)_{t\ge 0}$ is called a variational process if for almost all $\omega$, the sample path $t\mapsto V_t(\omega)$ possesses finite variation on any finite interval. A Brownian motion is not a variational process.

If $M=(M_t)_{t\ge 0}$ is a continuous, square-integrable martingale, then $(M_t^2)_{t\ge 0}$ is no longer a martingale but a sub-martingale, except in trivial cases.

As in the case of Brownian motion, the limit
$$\langle M\rangle_t=\lim_{m(D)\to 0}\sum_l\big(M_{t_l}-M_{t_{l-1}}\big)^2$$
exists both in probability and in the $L^2$-norm, where the limit is taken over all finite partitions $D$ of the interval $[0,t]$. By definition, $t\mapsto\langle M\rangle_t$ is an adapted, continuous, increasing stochastic process (and therefore has finite variation). The following theorem demonstrates the importance of the adapted increasing process $\langle M\rangle_t$.

Theorem Let $M=(M_t)_{t\ge 0}$ be a continuous, square-integrable martingale. Then $\langle M\rangle_t$ is the unique continuous, adapted and increasing process with initial value zero such that $M_t^2-\langle M\rangle_t$ is a martingale.

The process $\langle M\rangle_t$ is called the quadratic variation process associated with the martingale $M$. The theorem is a special case of the Doob-Meyer decomposition for sub-martingales: any sub-martingale can be decomposed into the sum of a martingale and a predictable, increasing process with initial value zero. The decomposition was conjectured by L. Doob and proved by P. A. Meyer in the 1960s, which opened a new era of stochastic calculus.

If $M$ is a continuous martingale and $(A_t)_{t\ge 0}$ is an adapted, continuous and integrable increasing process, then $X=M+A$ is a continuous sub-martingale. The converse is also true; that is the content of Doob-Meyer's decomposition theorem. Consider a sub-martingale in discrete time, $X=(X_n)_{n\in\mathbb{Z}^+}$, with respect to the filtration $(\mathcal{F}_n)_{n\in\mathbb{Z}^+}$. An increasing sequence $(A_n)_{n\in\mathbb{Z}^+}$ may be defined by
$$A_0=0;\qquad A_n=A_{n-1}+\mathbb{E}(X_n-X_{n-1}\,|\,\mathcal{F}_{n-1}),\quad n=1,2,\dots$$
Then each $A_n$ is $\mathcal{F}_{n-1}$-measurable, and since $X$ is a sub-martingale the sequence $(A_n)$ is increasing. By definition
$$\mathbb{E}\big(X_n-A_n-(X_{n-1}-A_{n-1})\,\big|\,\mathcal{F}_{n-1}\big)=\mathbb{E}(X_n-X_{n-1}\,|\,\mathcal{F}_{n-1})-(A_n-A_{n-1})=0,$$
therefore $M_n=X_n-A_n$ is a martingale.

Theorem 1.3.8 Let $(M_t)_{t\ge 0}$ and $(N_t)_{t\ge 0}$ be two continuous, square-integrable martingales, and let
$$\langle M,N\rangle_t=\frac{1}{4}\big(\langle M+N\rangle_t-\langle M-N\rangle_t\big),$$
called the bracket process of $M$ and $N$. Then $\langle M,N\rangle_t$ is the unique adapted, continuous, variational process with initial value zero such that $M_tN_t-\langle M,N\rangle_t$ is a martingale. Moreover
$$\lim_{m(D)\to 0}\sum_{l=1}^n\big(M_{t_l}-M_{t_{l-1}}\big)\big(N_{t_l}-N_{t_{l-1}}\big)=\langle M,N\rangle_t,\quad\text{in prob.}\tag{1.9}$$

For example, for $f\in C^2(\mathbb{R}^d)$, the process
$$M_t^f\equiv f(B_t)-f(B_0)-\frac12\int_0^t\Delta f(B_s)\,ds$$
is a (local) martingale (with respect to the filtration generated by the Brownian motion $(B_t)_{t\ge 0}$), and
$$\langle M^f,M^g\rangle_t=\int_0^t\langle\nabla f,\nabla g\rangle(B_s)\,ds.$$
These claims will be proven below after we have established Itô's lemma for Brownian motion.


Chapter 2

Itô's calculus

In this chapter we develop Itô's integration theory in a traditional way: we first define the stochastic integral $\int_0^t F_s\,dB_s$ for simple adapted processes $(F_t)_{t\ge 0}$, and then extend the definition to a large class of adapted integrands by exploiting the martingale property of Itô's integrals.

2.1 Introduction

Let $B=(B_t)_{t\ge 0}$ be a standard Brownian motion in $\mathbb{R}$ on a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$ and let $(\mathcal{F}_t)_{t\ge 0}$ be its natural filtration, that is, for each $t\ge 0$,
$$\mathcal{F}_t=\sigma\{B_s\text{ for }s\le t;\ \text{null events}\},$$
the history of the Brownian motion $B=(B_t)_{t\ge 0}$ up to time $t$; in other words, the information accumulated by the Brownian motion $B$ up to time $t$.

We are going to define Itô's integrals of the form
$$\int_0^t F_s\,dB_s\quad\text{for }t\ge 0,$$
for a class of integrands $(F_t)_{t\ge 0}$ that will be described later. For example, we are very much interested in defining integrals like
$$\int_0^t f(B_s)\,dB_s.$$


Since almost every sample path $t\mapsto B_t(\omega)$ is nowhere differentiable, the obvious definition via Riemann sums
$$\sum_i F_{t_i^\ast}(B_{t_i}-B_{t_{i-1}})$$
does not work: the limit of such Riemann sums need not exist path by path. The limit does exist, however, in a probabilistic sense, if for any finite partition we properly choose $t_i^\ast\in[t_{i-1},t_i]$ and if the integrand process $(F_t)_{t\ge 0}$ is adapted to the Brownian filtration $(\mathcal{F}_t)_{t\ge 0}$, that is, for each $t\ge 0$, $F_t$ is measurable with respect to $\mathcal{F}_t$. This approach works because both $(B_t)_{t\ge 0}$ and $(B_t^2-t)_{t\ge 0}$ are continuous martingales.

In summary, Itô's integral $\int_0^t F_s\,dB_s$ of an adapted process $F=(F_t)_{t\ge 0}$ (such that $F$ is measurable with respect to the product $\sigma$-algebra $\mathcal{B}([0,\infty))\otimes\mathcal{F}$, a condition you are advised to forget at first reading) with respect to the Brownian motion $B=(B_t)_{t\ge 0}$ is simply defined to be the limit of a special sort of Riemann sums:
$$\int_0^t F_s\,dB_s=\lim_{m(D)\to 0}\sum_i F_{t_{i-1}}(B_{t_i}-B_{t_{i-1}}),$$
where the limit takes place in the $L^2$-sense (with respect to the product measure $\mathbb{P}(d\omega)\,dt$; you are forgiven for ignoring its meaning), over finite partitions
$$D=\{0=t_0<t_1<\cdots<t_n=t\}$$
of the interval $[0,t]$ such that $m(D)=\max_i(t_i-t_{i-1})\to 0$, and through the special Riemann sums
$$\sum_i F_{t_{i-1}}(B_{t_i}-B_{t_{i-1}}),$$
i.e. sums of increments of the Brownian motion $B$ over $[t_{i-1},t_i]$ multiplied by the value of $F$ at the left endpoint $t_{i-1}$. The reason to choose $F_{t_{i-1}}(B_{t_i}-B_{t_{i-1}})$ rather than other forms is the following: only with this choice do we have
$$\mathbb{E}\big[F_{t_{i-1}}(B_{t_i}-B_{t_{i-1}})\big]=0\tag{2.1}$$
and
$$\mathbb{E}\big[F_{t_{i-1}}^2(B_{t_i}-B_{t_{i-1}})^2-F_{t_{i-1}}^2(t_i-t_{i-1})\big]=0.\tag{2.2}$$
It will become clear that it is precisely these features that make this sort of Riemann sums converge to a martingale! These equations also imply that not only is the resulting Itô integral
$$\int_0^t F_s\,dB_s$$
a martingale, but so is
$$\Big(\int_0^t F_s\,dB_s\Big)^2-\int_0^t F_s^2\,ds.$$

Exercise 2.1.1 Prove equations (2.1) and (2.2). Indeed, since $F_{t_{i-1}}$ is $\mathcal{F}_{t_{i-1}}$-measurable,
$$\begin{aligned}
\mathbb{E}\big[F_{t_{i-1}}(B_{t_i}-B_{t_{i-1}})\big]
&=\mathbb{E}\Big\{\mathbb{E}\big[F_{t_{i-1}}(B_{t_i}-B_{t_{i-1}})\,\big|\,\mathcal{F}_{t_{i-1}}\big]\Big\}\\
&=\mathbb{E}\Big\{F_{t_{i-1}}\,\mathbb{E}\big[B_{t_i}-B_{t_{i-1}}\,\big|\,\mathcal{F}_{t_{i-1}}\big]\Big\}\\
&=0.
\end{aligned}$$
Similarly, since $(B_t^2-t)_{t\ge 0}$ is a martingale,
$$\begin{aligned}
\mathbb{E}\big[(B_{t_i}-B_{t_{i-1}})^2-(t_i-t_{i-1})\,\big|\,\mathcal{F}_{t_{i-1}}\big]
&=\mathbb{E}\big[B_{t_i}^2\,\big|\,\mathcal{F}_{t_{i-1}}\big]-2B_{t_{i-1}}\mathbb{E}\big[B_{t_i}\,\big|\,\mathcal{F}_{t_{i-1}}\big]+B_{t_{i-1}}^2-(t_i-t_{i-1})\\
&=\big(B_{t_{i-1}}^2+t_i-t_{i-1}\big)-2B_{t_{i-1}}^2+B_{t_{i-1}}^2-(t_i-t_{i-1})\\
&=0,
\end{aligned}$$
and therefore
$$\begin{aligned}
\mathbb{E}\big[F_{t_{i-1}}^2(B_{t_i}-B_{t_{i-1}})^2-F_{t_{i-1}}^2(t_i-t_{i-1})\big]
&=\mathbb{E}\Big\{\mathbb{E}\big[F_{t_{i-1}}^2(B_{t_i}-B_{t_{i-1}})^2-F_{t_{i-1}}^2(t_i-t_{i-1})\,\big|\,\mathcal{F}_{t_{i-1}}\big]\Big\}\\
&=\mathbb{E}\Big\{F_{t_{i-1}}^2\,\mathbb{E}\big[(B_{t_i}-B_{t_{i-1}})^2-(t_i-t_{i-1})\,\big|\,\mathcal{F}_{t_{i-1}}\big]\Big\}\\
&=0.
\end{aligned}$$

Itô's integrals can be defined along any continuous, square-integrable martingale by using exactly the same approach. That is, if $M=(M_t)_{t\ge 0}$ is a continuous, square-integrable martingale on a filtered probability space $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ and $F=(F_t)_{t\ge 0}$ is an adapted stochastic process, then
$$\int_0^t F_s\,dM_s=\lim_{m(D)\to 0}\sum_{l=1}^n F_{t_{l-1}}(M_{t_l}-M_{t_{l-1}}),$$

and the resulting integrals $\int_0^t F_s\,dM_s$ are again martingales.

The Stratonovich integral $\int_0^t F_s\circ dM_s$, which was discovered later than Itô's, is defined by
$$\int_0^t F_s\circ dM_s=\lim_{m(D)\to 0}\sum_{l=1}^n\frac{F_{t_{l-1}}+F_{t_l}}{2}\,(M_{t_l}-M_{t_{l-1}}),$$
which in general is different from the Itô integral $\int_0^t F_s\,dM_s$.

Let us compute $\int_0^t M_s\,dM_s$ for a continuous, square-integrable martingale $M$. According to the definition,
$$\begin{aligned}
\int_0^t M_s\,dM_s&=\lim_{m(D)\to 0}\sum_{l=1}^n M_{t_{l-1}}(M_{t_l}-M_{t_{l-1}})\\
&=\lim_{m(D)\to 0}\sum_{l=1}^n\Big(-\frac12(M_{t_l}-M_{t_{l-1}})^2+\frac12\big(M_{t_l}^2-M_{t_{l-1}}^2\big)\Big)\\
&=-\frac12\lim_{m(D)\to 0}\sum_{l=1}^n(M_{t_l}-M_{t_{l-1}})^2+\frac12\lim_{m(D)\to 0}\sum_{l=1}^n\big(M_{t_l}^2-M_{t_{l-1}}^2\big)\\
&=-\frac12\langle M\rangle_t+\frac12\big(M_t^2-M_0^2\big).
\end{aligned}$$
In other words
$$M_t^2-M_0^2=2\int_0^t M_s\,dM_s+\langle M\rangle_t.$$
In particular, for BM we have
$$B_t^2-B_0^2=2\int_0^t B_s\,dB_s+t.$$
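A numerical sanity check of the identity $B_t^2-B_0^2=2\int_0^t B_s\,dB_s+t$ (a plain-Python sketch, not part of the notes; sample sizes are arbitrary): compute the left-point Riemann sum along one sampled path and compare it with $(B_1^2-1)/2$.

```python
import math
import random

random.seed(1)
n, t = 20000, 1.0
dt = t / n

# One Brownian path sampled at the partition points of [0, 1].
B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, math.sqrt(dt)))

# Left-point (Ito) Riemann sum for the integral of B against dB.
ito_sum = sum(B[i] * (B[i + 1] - B[i]) for i in range(n))

# Ito identity rearranged: the integral equals (B_1^2 - B_0^2 - t) / 2.
identity = 0.5 * (B[-1] ** 2 - t)
print(ito_sum, identity)
```

The two numbers differ only by $\frac12\big(t-\sum_l(\Delta B)^2\big)$, which is small by the quadratic-variation result of Chapter 1.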

This identity is a special case of Itô's formula. For the Stratonovich integral, on the other hand,
$$\begin{aligned}
\int_0^t M_s\circ dM_s&=\lim_{m(D)\to 0}\sum_{l=1}^n\frac{M_{t_l}+M_{t_{l-1}}}{2}\,(M_{t_l}-M_{t_{l-1}})\\
&=\frac12\lim_{m(D)\to 0}\sum_{l=1}^n(M_{t_l}-M_{t_{l-1}})^2+\lim_{m(D)\to 0}\sum_{l=1}^n M_{t_{l-1}}(M_{t_l}-M_{t_{l-1}})\\
&=\frac12\langle M\rangle_t+\int_0^t M_s\,dM_s\\
&=\frac12\big(M_t^2-M_0^2\big),
\end{aligned}$$
so that
$$M_t^2-M_0^2=2\int_0^t M_s\circ dM_s,$$
which coincides with the fundamental theorem of calculus.
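The midpoint (Stratonovich) sum, by contrast, telescopes exactly to $\frac12(B_t^2-B_0^2)$ even before any limit is taken; the following plain-Python sketch (not part of the notes) confirms this on a sampled path:

```python
import math
import random

random.seed(2)
n = 5000
dt = 1.0 / n

B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, math.sqrt(dt)))

# Each midpoint term (B_i + B_{i+1})/2 * (B_{i+1} - B_i) equals
# (B_{i+1}^2 - B_i^2)/2, so the sum telescopes to B_1^2 / 2 exactly.
strat_sum = sum(0.5 * (B[i] + B[i + 1]) * (B[i + 1] - B[i]) for i in range(n))
print(strat_sum, 0.5 * B[-1] ** 2)
```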

More generally, we can compare the two integrals. Let $N$ be a continuous, square-integrable martingale and let $F_t=N_t+A_t$, where $A_t$ is an adapted process with finite variation. Define Stratonovich's integral
$$\int_0^t F_s\circ dM_s=\lim_{m(D)\to 0}\sum_l\frac{F_{t_l}+F_{t_{l-1}}}{2}\,(M_{t_l}-M_{t_{l-1}}).$$
Then
$$\int_0^t F_s\circ dM_s=\int_0^t F_s\,dM_s+\frac12\langle N,M\rangle_t.$$

Indeed,
$$\begin{aligned}
\int_0^t F_s\circ dM_s
&=\lim_{m(D)\to 0}\sum_{l=1}^n\frac{N_{t_l}+N_{t_{l-1}}}{2}(M_{t_l}-M_{t_{l-1}})+\lim_{m(D)\to 0}\sum_{l=1}^n\frac{A_{t_l}+A_{t_{l-1}}}{2}(M_{t_l}-M_{t_{l-1}})\\
&=\frac12\lim_{m(D)\to 0}\sum_{l=1}^n\big(N_{t_l}-N_{t_{l-1}}\big)(M_{t_l}-M_{t_{l-1}})+\lim_{m(D)\to 0}\sum_{l=1}^n F_{t_{l-1}}(M_{t_l}-M_{t_{l-1}})\\
&\qquad+\frac12\lim_{m(D)\to 0}\sum_{l=1}^n\big(A_{t_l}-A_{t_{l-1}}\big)(M_{t_l}-M_{t_{l-1}})\\
&=\frac12\langle M,N\rangle_t+\int_0^t F_s\,dM_s,
\end{aligned}$$
where the last of the three limits vanishes because $A$ has finite variation and $M$ is continuous. In particular, if $F$ itself has finite variation (so that $N=0$), then
$$\int_0^t F_s\circ dM_s=\int_0^t F_s\,dM_s.$$


2.2 Stochastic integrals for simple processes

An adapted stochastic process $F=(F_t)_{t\ge 0}$ is called a simple process if it possesses a representation
$$F_t(\omega)=f_0(\omega)1_{\{0\}}(t)+\sum_{i=0}^{\infty}f_i(\omega)1_{(t_i,t_{i+1}]}(t)\tag{2.3}$$
where $0=t_0<t_1<\cdots<t_i<\cdots$, such that for any finite time $T\ge 0$ there are only finitely many $t_i\in[0,T]$; each $f_i\in\mathcal{F}_{t_i}$, i.e. $f_i$ is measurable with respect to $\mathcal{F}_{t_i}$; and $F$ is a bounded process. The space of all simple (adapted) stochastic processes will be denoted by $\mathcal{L}_0$. If $F=(F_t)_{t\ge 0}\in\mathcal{L}_0$, then Itô's integral of $F$ against the Brownian motion $B=(B_t)_{t\ge 0}$ is defined as
$$I(F)_t\equiv\sum_{i=0}^{\infty}f_i\big(B_{t\wedge t_{i+1}}-B_{t\wedge t_i}\big),$$
where the sum makes sense because only finitely many terms are non-zero. It is obvious that $I(F)=(I(F)_t)_{t\ge 0}$ is continuous, square-integrable, and adapted to $(\mathcal{F}_t)_{t\ge 0}$.
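As an illustration (not from the notes; a minimal plain-Python sketch with made-up partition points and weights), one can sample $I(F)_1$ for a deterministic simple process and check the Itô isometry $\mathbb{E}\,I(F)_1^2=\sum_i f_i^2(t_{i+1}-t_i)$ by Monte Carlo:

```python
import math
import random

random.seed(7)

t_grid = [0.0, 0.25, 0.5, 0.75, 1.0]   # partition 0 = t_0 < ... < t_4 = 1
f_vals = [1.0, -2.0, 0.5, 3.0]          # deterministic f_i on (t_i, t_{i+1}]

def sample_integral():
    """One sample of I(F)_1 = sum_i f_i (B_{t_{i+1}} - B_{t_i})."""
    total = 0.0
    for i, f in enumerate(f_vals):
        dt = t_grid[i + 1] - t_grid[i]
        total += f * random.gauss(0.0, math.sqrt(dt))
    return total

# Monte Carlo check of E I(F)_1^2 = sum_i f_i^2 (t_{i+1} - t_i).
m = 20000
estimate = sum(sample_integral() ** 2 for _ in range(m)) / m
target = sum(f * f * 0.25 for f in f_vals)
print(estimate, target)
```

Deterministic $f_i$ are a valid (if trivial) special case of $\mathcal{F}_{t_i}$-measurable weights.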

Lemma 2.2.1 Let $M=(M_t)_{t\ge 0}$ be a continuous, square-integrable martingale, and let $s<t\le u<v$, $f\in\mathcal{F}_s$, $g\in\mathcal{F}_t$ (both bounded). Then
$$\mathbb{E}\big(g(M_v-M_u)(M_t-M_s)\,\big|\,\mathcal{F}_s\big)=0$$
and
$$\mathbb{E}\big(f(M_t-M_s)^2\,\big|\,\mathcal{F}_s\big)=\mathbb{E}\big(f(\langle M\rangle_t-\langle M\rangle_s)\,\big|\,\mathcal{F}_s\big).$$
Indeed, for the first identity,
$$\begin{aligned}
\mathbb{E}\big(g(M_v-M_u)(M_t-M_s)\,\big|\,\mathcal{F}_s\big)
&=\mathbb{E}\big\{\mathbb{E}\big(g(M_v-M_u)(M_t-M_s)\,\big|\,\mathcal{F}_u\big)\,\big|\,\mathcal{F}_s\big\}\\
&=\mathbb{E}\big\{g(M_t-M_s)\,\mathbb{E}(M_v-M_u\,|\,\mathcal{F}_u)\,\big|\,\mathcal{F}_s\big\}\\
&=0.
\end{aligned}$$
The second displayed equality holds because $g(M_t-M_s)$ is $\mathcal{F}_u$-measurable and can be moved out of the conditional expectation; the second identity of the lemma follows in the same way, using that $M_t^2-\langle M\rangle_t$ is a martingale.

Lemma 2.2.2 $I(F)=(I(F)_t)_{t\ge 0}$ is a martingale.

Proof. Let $s<t$, and suppose $t_j<t\le t_{j+1}$ and $t_k<s\le t_{k+1}$. Then $k\le j$ and
$$I(F)_t=\sum_{i=0}^{j-1}f_i(B_{t_{i+1}}-B_{t_i})+f_j(B_t-B_{t_j});$$
$$I(F)_s=\sum_{i=0}^{k-1}f_i(B_{t_{i+1}}-B_{t_i})+f_k(B_s-B_{t_k}).$$
If $k\le j-1$, then
$$I(F)_t-I(F)_s=\sum_{i=k+1}^{j-1}f_i(B_{t_{i+1}}-B_{t_i})+f_j(B_t-B_{t_j})+f_k(B_{t_{k+1}}-B_s).\tag{2.4}$$
For $k+1\le i\le j-1$,
$$\begin{aligned}
\mathbb{E}\big(f_i(B_{t_{i+1}}-B_{t_i})\,\big|\,\mathcal{F}_s\big)
&=\mathbb{E}\big\{\mathbb{E}\big(f_i(B_{t_{i+1}}-B_{t_i})\,\big|\,\mathcal{F}_{t_i}\big)\,\big|\,\mathcal{F}_s\big\}\\
&=\mathbb{E}\big\{f_i\,\mathbb{E}\big(B_{t_{i+1}}-B_{t_i}\,\big|\,\mathcal{F}_{t_i}\big)\,\big|\,\mathcal{F}_s\big\}\\
&=0.
\end{aligned}$$
In the first equality we have used the fact that $f_i\in\mathcal{F}_{t_i}$, and in the second equality the martingale property of $(B_t)$. Similarly
$$\mathbb{E}\big(f_j(B_t-B_{t_j})\,\big|\,\mathcal{F}_s\big)=0,\qquad t>t_j\ge s,\ f_j\in\mathcal{F}_{t_j},$$
$$\mathbb{E}\big(f_k(B_{t_{k+1}}-B_s)\,\big|\,\mathcal{F}_s\big)=0,\qquad t_{k+1}\ge s>t_k,\ f_k\in\mathcal{F}_{t_k}\subset\mathcal{F}_s.$$
Therefore
$$\mathbb{E}\big(I(F)_t-I(F)_s\,\big|\,\mathcal{F}_s\big)=0.$$

Lemma 2.2.3 $I(F)_t^2-\int_0^t F_s^2\,ds$ is a martingale. Therefore $I(F)\in\mathcal{M}_c^2$ and
$$\langle I(F)\rangle_t=\int_0^t F_s^2\,ds.$$

Proof. We want to prove that for any $t\ge s$
$$\mathbb{E}\Big(I(F)_t^2-\int_0^t F_u^2\,du\,\Big|\,\mathcal{F}_s\Big)=I(F)_s^2-\int_0^s F_u^2\,du.$$
In other words, we have to prove that
$$\mathbb{E}\Big(I(F)_t^2-I(F)_s^2-\int_s^t F_u^2\,du\,\Big|\,\mathcal{F}_s\Big)=0.$$
Obviously
$$\begin{aligned}
I(F)_t^2-I(F)_s^2&=(I(F)_t-I(F)_s)^2+2I(F)_tI(F)_s-2I(F)_s^2\\
&=(I(F)_t-I(F)_s)^2+2(I(F)_t-I(F)_s)I(F)_s,
\end{aligned}$$
and $(I(F)_t)_{t\ge 0}$ is a martingale, so that
$$\mathbb{E}\big(I(F)_t-I(F)_s\,\big|\,\mathcal{F}_s\big)=0.$$
Also, $I(F)_s\in\mathcal{F}_s$, so that
$$\mathbb{E}\big\{I(F)_s(I(F)_t-I(F)_s)\,\big|\,\mathcal{F}_s\big\}=I(F)_s\,\mathbb{E}\big\{I(F)_t-I(F)_s\,\big|\,\mathcal{F}_s\big\}=0.$$
We therefore only need to show
$$\mathbb{E}\Big((I(F)_t-I(F)_s)^2-\int_s^t F_u^2\,du\,\Big|\,\mathcal{F}_s\Big)=0.$$
Now we use the same notation as in the proof of Lemma 2.2.2. It is clear from (2.4) that if $k\le j-1$, then
$$\begin{aligned}
(I(F)_t-I(F)_s)^2&=\sum_{i,l=k+1}^{j-1}f_if_l(B_{t_{i+1}}-B_{t_i})(B_{t_{l+1}}-B_{t_l})\\
&\quad+2\sum_{i=k+1}^{j-1}f_if_j(B_{t_{i+1}}-B_{t_i})(B_t-B_{t_j})+2\sum_{i=k+1}^{j-1}f_if_k(B_{t_{i+1}}-B_{t_i})(B_{t_{k+1}}-B_s)\\
&\quad+f_j^2(B_t-B_{t_j})^2+f_k^2(B_{t_{k+1}}-B_s)^2+2f_kf_j(B_t-B_{t_j})(B_{t_{k+1}}-B_s).
\end{aligned}$$
Using Lemma 2.2.1 and the fact that both $(B_t)_{t\ge 0}$ and $(B_t^2-t)_{t\ge 0}$ are martingales, we get
$$\mathbb{E}\big((I(F)_t-I(F)_s)^2\,\big|\,\mathcal{F}_s\big)=\mathbb{E}\Big(\sum_{i=k+1}^{j-1}f_i^2(t_{i+1}-t_i)+f_j^2(t-t_j)+f_k^2(t_{k+1}-s)\,\Big|\,\mathcal{F}_s\Big),$$
so that
$$\mathbb{E}\big((I(F)_t-I(F)_s)^2\,\big|\,\mathcal{F}_s\big)=\mathbb{E}\Big(\int_s^t F_u^2\,du\,\Big|\,\mathcal{F}_s\Big).$$
In particular, for any $T\ge 0$,
$$\mathbb{E}\,I(F)_T^2=\mathbb{E}\int_0^T F_s^2\,ds.$$

Proposition 2.2.5 If $N\in\mathcal{M}_c^2$, then
$$\Big(I(F)_tN_t-\int_0^t F_s\,d\langle B,N\rangle_s\Big)_{t\ge 0}$$
is a martingale.

Exercise 2.2.6 Show Prop. 2.2.5 by proving the following.

1. First note that Prop. 2.2.5 is equivalent to the equality
$$\mathbb{E}\big(I(F)_tN_t-I(F)_sN_s\,\big|\,\mathcal{F}_s\big)=\mathbb{E}\Big(\int_s^t F_u\,d\langle B,N\rangle_u\,\Big|\,\mathcal{F}_s\Big),\quad t>s.$$

2. Prove
$$\mathbb{E}\big(I(F)_tN_t-I(F)_sN_s\,\big|\,\mathcal{F}_s\big)=\mathbb{E}\big((I(F)_t-I(F)_s)(N_t-N_s)\,\big|\,\mathcal{F}_s\big)$$
for any $t>s$.

3. If $t_j<t\le t_{j+1}$, $t_k<s\le t_{k+1}$, then $k\le j$. If indeed $k\le j-1$, then
$$N_t-N_s=\sum_{i=k+1}^{j-1}(N_{t_{i+1}}-N_{t_i})+(N_t-N_{t_j})+(N_{t_{k+1}}-N_s).$$
Use these facts together with (2.4) to complete the proof.

Proposition 2.2.7 If $N\in\mathcal{M}_c^2$, then
$$\langle I(F),N\rangle_t=\int_0^t F_s\,d\langle B,N\rangle_s\quad\text{for all }t\ge 0.\tag{2.5}$$

2.3 Stochastic integrals for adapted processes

We next aim to extend stochastic integrals $\int_0^t F_s\,dB_s$ against Brownian motion to a class of adapted processes $F_s$. The main tool we need is a maximal inequality for martingales.

2.3.1 The space of square-integrable martingales

Let $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ be a filtered probability space, and let $\mathcal{M}_c^2$ denote the vector space of all continuous, square-integrable martingales $M=(M_t)_{t\ge 0}$ on $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ with initial value zero: $M_0=0$. If $M=(M_t)_{t\ge 0}\in\mathcal{M}_c^2$, then $M_t^2-\langle M\rangle_t$ is a martingale. In particular, for every $T>0$ we have
$$\mathbb{E}\big(M_T^2-\langle M\rangle_T\big)=\mathbb{E}\big(M_0^2-\langle M\rangle_0\big)=0,$$
that is,
$$\mathbb{E}\,M_T^2=\mathbb{E}\langle M\rangle_T.\tag{2.6}$$

If $M,N\in\mathcal{M}_c^2$, then
$$M_tN_t-\langle M,N\rangle_t$$
is a martingale, where
$$\langle M,N\rangle_t=\frac14\big(\langle M+N\rangle_t-\langle M-N\rangle_t\big)$$
is called the bracket process of $M$ and $N$. According to the definition,
$$\langle M,N\rangle_t=\lim_{m(D)\to 0}\sum_{l=1}^k\big(M_{t_l}-M_{t_{l-1}}\big)\big(N_{t_l}-N_{t_{l-1}}\big)\quad\text{in probability,}$$
where the limit is taken over all finite partitions $\{0=t_0<t_1<\cdots<t_k=t\}$ of $[0,t]$.

The space of square-integrable martingales $\mathcal{M}_c^2$ is endowed with the following distance:
$$d(M,N)=\sum_{n=1}^\infty\frac{1}{2^n}\Big(\sqrt{\mathbb{E}|M_n-N_n|^2}\wedge 1\Big)\quad\text{for }M,N\in\mathcal{M}_c^2.$$
A sequence $M(k)$ converges to $M$ in $\mathcal{M}_c^2$ if and only if for every $T>0$
$$M(k)_T\to M_T\ \text{ in }L^2(\Omega,\mathcal{F},\mathbb{P})\quad\text{and}\quad\langle M(k)\rangle_T\to\langle M\rangle_T\ \text{ in }L^1(\Omega,\mathcal{F},\mathbb{P}).$$

The following maximal inequality is a continuous-time analogue of the Markov inequality in elementary probability.

Theorem (Kolmogorov's inequality) Let $M\in\mathcal{M}_c^2$. Then for any $T>0$ and $\lambda>0$,
$$\mathbb{P}\Big\{\sup_{0\le t\le T}|M_t|\ge\lambda\Big\}\le\frac{1}{\lambda^2}\mathbb{E}\,M_T^2.$$

Proof. Since $M$ is continuous,
$$\sup_{0\le t\le T}|M_t|=\sup_{t\in D\cap[0,T]}|M_t|$$
for any countable dense subset $D$ of $[0,T]$, so that $\sup_{0\le t\le T}|M_t|$ is a random variable. For each $n\in\mathbb{N}$, we may apply the Kolmogorov inequality for discrete-time martingales (see the Appendix) to $\{M_{Tk/2^n};\mathcal{F}_{Tk/2^n}\}_{k\ge 0}$ to obtain
$$\mathbb{P}\Big\{\sup_{0\le k\le 2^n}|M_{Tk/2^n}|\ge\lambda\Big\}\le\frac{1}{\lambda^2}\mathbb{E}\,M_T^2.$$
Since $\sup_{0\le k\le 2^n}|M_{Tk/2^n}|\uparrow\sup_{0\le t\le T}|M_t|$ as $n\to\infty$, we therefore have
$$\mathbb{P}\Big\{\sup_{0\le t\le T}|M_t|>\lambda\Big\}=\lim_{n\to\infty}\mathbb{P}\Big\{\sup_{0\le k\le 2^n}|M_{Tk/2^n}|>\lambda\Big\}\le\frac{1}{\lambda^2}\mathbb{E}\,M_T^2,$$
and the claim follows.
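The maximal inequality is easy to probe by simulation. The following plain-Python sketch (not from the notes; the path count, step count and level $\lambda=2$ are arbitrary choices) estimates $\mathbb{P}\{\sup_{t\le T}|B_t|\ge\lambda\}$ for Brownian motion, for which $\mathbb{E}B_T^2=T$, and checks it against the bound $T/\lambda^2$:

```python
import math
import random

random.seed(3)
n_paths, n_steps, T, lam = 2000, 200, 1.0, 2.0
dt = T / n_steps

exceed = 0
for _ in range(n_paths):
    b, running_max = 0.0, 0.0
    for _ in range(n_steps):
        b += random.gauss(0.0, math.sqrt(dt))
        running_max = max(running_max, abs(b))
    if running_max >= lam:
        exceed += 1

p_hat = exceed / n_paths   # estimate of P(sup |B_t| >= lam)
bound = T / lam ** 2       # E B_T^2 / lam^2 for Brownian motion
print(p_hat, bound)
```

The estimate comes out well below the bound (around $0.09$ versus $0.25$), as the inequality is far from tight for Brownian motion at this level.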

The metric space $(\mathcal{M}_c^2,d)$ is complete. Indeed, let $\{M(k)\}$ be a Cauchy sequence in $\mathcal{M}_c^2$. Then for any $T>0$, by Kolmogorov's inequality,
$$\mathbb{P}\Big\{\sup_{0\le t\le T}|M(k)_t-M(l)_t|\ge\varepsilon\Big\}\le\frac{1}{\varepsilon^2}\mathbb{E}|M(k)_T-M(l)_T|^2,$$
so that, for any fixed $T>0$, $M(k)$ converges uniformly on $[0,T]$ in probability, and therefore there exists a stochastic process $M=(M_t)_{t\ge 0}$ such that
$$\sup_{0\le t\le T}|M(k)_t-M_t|\to 0\quad\text{in prob.}$$
$M$ is again a continuous martingale, being the locally uniform limit of a sequence of continuous martingales.

Let $(B_t)_{t\ge 0}$ be a standard Brownian motion in $\mathbb{R}$, and let $(\mathcal{F}_t)_{t\ge 0}$ be the natural filtration generated by $(B_t)_{t\ge 0}$, called the Brownian filtration. Then for any simple (adapted) process $(F_t)_{t\ge 0}$, the Itô integral $I(F)_t$ belongs to $\mathcal{M}_c^2$ and $\langle I(F)\rangle_t=\int_0^t F_s^2\,ds$. Since $F\mapsto I(F)$ is linear, if $F$ and $G$ are both simple processes, then
$$d(I(F),I(G))=\sum_{j=1}^\infty\frac{1}{2^j}\Big(\sqrt{\mathbb{E}|I(F)_j-I(G)_j|^2}\wedge 1\Big)
=\sum_{j=1}^\infty\frac{1}{2^j}\Big(\sqrt{\mathbb{E}\int_0^j(F_s-G_s)^2\,ds}\wedge 1\Big).$$

Suppose now that $F=(F_t)_{t\ge 0}$ is an adapted process which is the limit of simple (adapted) processes in the sense that there is a sequence of simple processes $F(n)\in\mathcal{L}_0$ such that for every $T>0$
$$\lim_{n\to\infty}\mathbb{E}\int_0^T|F_s-F(n)_s|^2\,ds=0.\tag{2.7}$$
The space of all such adapted stochastic processes $(F_t)_{t\ge 0}$ will be denoted by $L^2$. If $F\in L^2$ and $F(n)\in\mathcal{L}_0$ are such that (2.7) holds, then $\lim_{n\to\infty}I(F(n))$ exists in $\mathcal{M}_c^2$; it is denoted again by $I(F)$ and called the Itô integral of $F=(F_t)_{t\ge 0}$ along the Brownian motion $(B_t)_{t\ge 0}$.

Here, regarded as a function of two variables $F(t,\omega)=F_t(\omega)$, $F$ is measurable on $[0,+\infty)\times\Omega$ with respect to the product $\sigma$-algebra $\mathcal{B}([0,+\infty))\otimes\mathcal{F}$, and for every $T>0$
$$\mathbb{E}\int_0^T F_s^2\,ds<+\infty.$$

The linearity of the Itô integral $F\mapsto I(F)$ from $L^2$ into $\mathcal{M}_c^2$, and the martingale property of both processes $(I(F)_t)_{t\ge 0}$ and
$$I(F)_t^2-\int_0^t F_s^2\,ds,$$
are preserved for any $F\in L^2$. Thus $\langle I(F)\rangle_t=\int_0^t F_s^2\,ds$.

The stochastic integral $I(F)_t$ for $F\in L^2$ is also denoted by $\int_0^t F_s\,dB_s$. We may also use $F.B$ to denote the Itô integral $(I(F)_t)_{t\ge 0}$. Thus
$$I(F)_t=(F.B)_t=\int_0^t F_s\,dB_s\quad\text{and}\quad\langle I(F)\rangle_t=\langle F.B\rangle_t=\int_0^t F_s^2\,ds.$$

We next describe a class of stochastic processes $F=(F_t)_{t\ge 0}$ in $L^2$. Let $\mathcal{L}$ denote the vector space of all adapted, left-continuous stochastic processes $F=(F_t)_{t\ge 0}$ such that for all $T>0$
$$\mathbb{E}\int_0^T F_s^2\,ds<+\infty.\tag{2.8}$$
The requirement that $F$ be adapted to the filtration $(\mathcal{F}_t)_{t\ge 0}$, i.e. that each $F_t$ is measurable with respect to $\mathcal{F}_t$, is essential in the definition of Itô's integrals. On the other hand, left-continuity of $t\mapsto F_t$ is a technical condition that can be replaced by some sort of Borel measurability (e.g. right-continuous, continuous, measurable in $(t,\omega)$, etc.). Left-continuity becomes the correct condition if we attempt to define stochastic integrals of $F=(F_t)_{t\ge 0}$ against martingales which may have jumps.

The point is that if $t\mapsto F_t$ is left-continuous, then for any time $t$ the value $F_t$ can be predicted by the values taken strictly before time $t$:
$$F_t=\lim_{s\uparrow t}F_s.$$

Remark 2.3.5 We should point out that some kind of measurability of the random function $(t,\omega)\mapsto F_t(\omega)$ is necessary in order to ensure that (2.8) makes sense. Note that (2.8) may be written as
$$\int_\Omega\int_0^T F_s(\omega)^2\,ds\,\mathbb{P}(d\omega)<+\infty,$$
which requires that
$$(t,\omega)\mapsto F_t(\omega)$$
be measurable with respect to $\mathcal{B}([0,T])\otimes\mathcal{F}_T$, where $\mathcal{B}([0,T])$ is the Borel $\sigma$-algebra generated by the open subsets of $[0,T]$, and $\mathcal{B}([0,T])\otimes\mathcal{F}_T$ is the product $\sigma$-algebra on $[0,T]\times\Omega$.

Let $F\in\mathcal{L}$. For each $n$, take a finite partition $\{t_l^n\}$ of $[0,\infty)$ whose mesh tends to zero as $n\to\infty$ (for example $t_l^n=l/2^n$), and let
$$F(n)_t=F_01_{\{0\}}(t)+\sum_{l=1}^{k_n}F_{t_{l-1}^n}1_{(t_{l-1}^n,t_l^n]}(t)\quad\text{for }t\ge 0.\tag{2.9}$$
Then
$$\mathbb{E}\int_0^T|F(n)_s-F_s|^2\,ds\to 0\quad\text{as }n\to\infty.$$
Therefore $\mathcal{L}\subset L^2$.

By left-continuity, $F(n)_t\to F_t$ for every $t$. In particular, if $F=(F_t)_{t\ge 0}\in\mathcal{L}$, then, for every $t\ge 0$,
$$I(F)_t=\lim_{m(D)\to 0}\sum_{l=1}^n F_{t_{l-1}}(B_{t_l}-B_{t_{l-1}})\quad\text{in }L^2(\Omega,\mathcal{F},\mathbb{P}),$$
where the limit takes place over all finite partitions $D$ of the interval $[0,t]$.

One thing you may keep in mind is that the class $L^2$ of integrands $F=(F_t)_{t\ge 0}$ which we may integrate against Brownian motion is large enough to include many interesting stochastic processes. On the other hand, there is a serious restriction on these processes: we require our integrands to be adapted to the Brownian filtration, i.e. for each $t\ge 0$, $F_t$ is measurable with respect to $\mathcal{F}_t$.

Let us give some examples.

If $X=(X_t)_{t\ge 0}$ is a continuous stochastic process adapted to $(\mathcal{F}_t)_{t\ge 0}$, if $f$ is a Borel function, and if for every $T>0$
$$\mathbb{E}\int_0^T f(X_t)^2\,dt<\infty,$$
then the stochastic process $(f(X_t))_{t\ge 0}$ belongs to $L^2$. In particular, $(f(B_t))_{t\ge 0}\in L^2$ for any Borel measurable function $f$ such that
$$\mathbb{E}\int_0^T f(B_t)^2\,dt<\infty.\tag{2.10}$$
By Fubini's theorem,
$$\mathbb{E}\int_0^T f(B_t)^2\,dt=\int_0^T\mathbb{E}f(B_t)^2\,dt=\int_0^T P_t(f^2)(0)\,dt,$$
where
$$P_t(f^2)(0)=\frac{1}{(2\pi t)^{d/2}}\int_{\mathbb{R}^d}f(x)^2e^{-|x|^2/2t}\,dx
=\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}f(\sqrt{t}\,x)^2e^{-|x|^2/2}\,dx.$$

For example, for any $\alpha\in\mathbb{R}$ the process $(e^{\alpha B_t})_{t\ge 0}$ belongs to $L^2$ as well. How about the stochastic process $F_t=e^{\alpha B_t^2}$? In this case
$$\mathbb{E}\int_0^T F_t^2\,dt=\frac{1}{(2\pi)^{d/2}}\int_0^T\int_{\mathbb{R}^d}e^{2\alpha t|x|^2}e^{-|x|^2/2}\,dx\,dt,$$
and therefore
$$\mathbb{E}\int_0^T F_t^2\,dt<\infty\quad\text{if }\alpha\le 0.$$
In the case $\alpha>0$,
$$\mathbb{E}\int_0^T F_t^2\,dt<\infty\quad\text{iff}\quad T<\frac{1}{4\alpha}.$$
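To see where the threshold $1/(4\alpha)$ comes from, one can evaluate the Gaussian integral explicitly (a short verification written here for $d=1$; this computation is not spelled out in the notes):

```latex
\mathbb{E}\,e^{2\alpha B_t^2}
  = \frac{1}{\sqrt{2\pi t}}\int_{\mathbb{R}} e^{2\alpha x^2}\,e^{-x^2/2t}\,dx
  = \frac{1}{\sqrt{2\pi t}}\int_{\mathbb{R}} e^{-\frac{1-4\alpha t}{2t}\,x^2}\,dx
  = \frac{1}{\sqrt{1-4\alpha t}}\qquad\text{if } 4\alpha t < 1,
```

while $\mathbb{E}\,e^{2\alpha B_t^2}=+\infty$ once $4\alpha t\ge 1$. Hence $\int_0^T\mathbb{E}F_t^2\,dt$ is finite whenever $T<1/(4\alpha)$ and infinite whenever $T>1/(4\alpha)$.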

If $F\in L^2$, then $I(F)$ is called the Itô integral of $F=(F_t)_{t\ge 0}$ against the Brownian motion $B=(B_t)_{t\ge 0}$, and we will denote it by
$$\int_0^t F_s\,dB_s;\quad t\ge 0.$$
If $F=(F_t)_{t\ge 0}\in L^2$, then both
$$\int_0^t F_s\,dB_s\quad\text{and}\quad\Big(\int_0^t F_s\,dB_s\Big)^2-\int_0^t F_s^2\,ds$$
are continuous martingales with initial value zero, and therefore $\langle F.B\rangle_t=\int_0^t F_s^2\,ds$. In general
$$\langle F.B,G.B\rangle_t=\int_0^t F_sG_s\,ds$$
if $F$ and $G$ belong to $L^2$. For any $T\ge 0$,
$$\mathbb{E}\Big(\int_0^T F_s\,dB_s\Big)^2=\mathbb{E}\int_0^T F_s^2\,ds,$$
and more generally
$$\mathbb{E}\Big\{\Big(\int_s^t F_u\,dB_u\Big)^2\,\Big|\,\mathcal{F}_s\Big\}=\mathbb{E}\Big(\int_s^t F_u^2\,du\,\Big|\,\mathcal{F}_s\Big).$$

2.4 Stochastic integrals along martingales

We may apply the same procedure used to define Itô's integrals along Brownian motion to any continuous, square-integrable martingale. Indeed, if $M\in\mathcal{M}_c^2$ and if $F=(F_t)_{t\ge 0}$ is a bounded, adapted, simple process
$$F_t=f_01_{\{0\}}(t)+\sum_i f_i1_{(t_i,t_{i+1}]}(t),$$
then define
$$I^M(F)_t=\sum_{i=0}^\infty f_i\big(M_{t\wedge t_{i+1}}-M_{t\wedge t_i}\big).$$

Then, exactly as before:

1. $I^M(F)\in\mathcal{M}_c^2$.

2. If $N\in\mathcal{M}_c^2$, then
$$\Big(I^M(F)_tN_t-\int_0^t F_s\,d\langle M,N\rangle_s\Big)_{t\ge 0}$$
is a martingale.

3. For any $T\ge 0$,
$$\mathbb{E}\Big(\int_0^T F_t\,dM_t\Big)^2=\mathbb{E}\int_0^T F_t^2\,d\langle M\rangle_t.$$

Let $L^2(M)$ denote the class of adapted processes $F=(F_t)_{t\ge 0}$ for which there is a sequence $\{F(n)\}$ of adapted, simple and bounded stochastic processes such that for any $T>0$
$$\mathbb{E}\int_0^T F(n)_t^2\,d\langle M\rangle_t<\infty$$
and
$$\mathbb{E}\int_0^T|F(n)_t-F_t|^2\,d\langle M\rangle_t\to 0\quad\text{as }n\to\infty.$$
If $F\in L^2(M)$, then we define $I^M(F)=\lim_{n\to\infty}I^M(F(n))$ in $\mathcal{M}_c^2$.


We use either $F.M$ or $\int_0^t F_s\,dM_s$ to denote $I^M(F)$. If $M=(M_t)_{t\ge 0}$ is a continuous, square-integrable martingale and $F=(F_t)_{t\ge 0}$ belongs to $L^2(M)$, then both the stochastic integral
$$\int_0^t F_s\,dM_s$$
and
$$\Big(\int_0^t F_s\,dM_s\Big)^2-\int_0^t F_s^2\,d\langle M\rangle_s$$
are martingales. If $M$ and $N$ are two continuous, square-integrable martingales, and if $F\in L^2(M)$ and $G\in L^2(N)$, then
$$\langle F.M,G.N\rangle_t=\int_0^t F_sG_s\,d\langle M,N\rangle_s.$$
Iterated integrals also make sense; that is,
$$\int_0^t F_s\,d\Big(\int_0^s G_u\,dM_u\Big)=\int_0^t F_sG_s\,dM_s.$$

2.5 Stopping times, local martingales

Itô's theory of stochastic integration can be further extended to an even larger class of stochastic processes by a so-called localization technique. The technique involves a very important concept in the modern theory of probability: stopping times.

Let $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ be a filtered probability space. A random variable $T:\Omega\to[0,+\infty]$ ($+\infty$ is allowed) is called a stopping time (a random time) if for each $t\ge 0$ the event
$$\{\omega:T(\omega)\le t\}\in\mathcal{F}_t.$$
The $\sigma$-algebra of events determined prior to $T$ is
$$\mathcal{F}_T=\{A\in\mathcal{F}:A\cap\{T\le t\}\in\mathcal{F}_t\text{ for all }t\ge 0\}.$$
For technical reasons, we will require the following conditions to be satisfied, unless otherwise specified.

1. The $\sigma$-algebra $\mathcal{F}_0$ (and therefore each $\mathcal{F}_t$) contains all $\mathbb{P}$-null events.

2. The filtration $(\mathcal{F}_t)_{t\ge 0}$ is right-continuous, that is, for each $t\ge 0$
$$\mathcal{F}_t=\mathcal{F}_{t+}\equiv\bigcap_{s>t}\mathcal{F}_s.$$

In this case, we say the filtered probability space $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ satisfies the usual conditions.

For example, if $B=(B_t)_{t\ge 0}$ is a BM in $\mathbb{R}^d$, then its natural filtration $(\mathcal{F}_t)_{t\ge 0}$ satisfies these conditions, and as a matter of fact, $(\mathcal{F}_t)_{t\ge 0}$ is continuous: not only do we have $\mathcal{F}_t=\mathcal{F}_{t+}$ but also
$$\mathcal{F}_t=\mathcal{F}_{t-}\equiv\sigma\{\mathcal{F}_s:s<t\}\quad\text{for all }t>0.$$

Remark 2.5.2 If $X=(X_t)_{t\ge 0}$ is a right-continuous stochastic process on a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$, then its natural filtration $(\mathcal{F}_t)_{t\ge 0}$ satisfies the usual conditions.

The following is a result we will not prove in this course.

Theorem 2.5.3 If $X=(X_t)_{t\ge 0}$ is a right-continuous stochastic process adapted to $(\mathcal{F}_t)_{t\ge 0}$ (recall that our filtration $(\mathcal{F}_t)_{t\ge 0}$ satisfies the usual conditions), and if $T:\Omega\to[0,+\infty]$ is a stopping time, then the random variable $X_T1_{\{T<\infty\}}$ is measurable with respect to the $\sigma$-algebra $\mathcal{F}_T$, where
$$\big(X_T1_{\{T<\infty\}}\big)(\omega)=X_{T(\omega)}(\omega)\,1_{\{\omega:\,T(\omega)<\infty\}}(\omega)=\begin{cases}X_{T(\omega)}(\omega)&\text{if }T(\omega)<+\infty,\\ 0&\text{if }T(\omega)=+\infty.\end{cases}$$

Remark 2.5.4 If $X=(X_n)_{n\in\mathbb{Z}^+}$ and $T:\Omega\to\mathbb{Z}^+\cup\{+\infty\}$, then
$$X_T1_{\{T<\infty\}}=\sum_{n\in\mathbb{Z}^+}X_n1_{\{T=n\}}=\sum_{n=0}^\infty X_n1_{\{T=n\}}.$$
For any $n\in\mathbb{Z}^+$,
$$X_T1_{\{T<\infty\}}1_{\{T\le n\}}=\sum_{k=0}^n X_k1_{\{T=k\}},$$
which is measurable with respect to $\mathcal{F}_n$; thus by definition $X_T1_{\{T<\infty\}}$ is $\mathcal{F}_T$-measurable.

The most important examples of stopping times are hitting times.

Example Let $X=(X_t)_{t\ge 0}$ be an adapted process that is right-continuous and has left limits. Then for any Borel subset $D\subset\mathbb{R}^d$ and $t_0\ge 0$,
$$T=\inf\{t\ge t_0:X_t\in D\}$$
is a stopping time, where $\inf\emptyset=+\infty$. $T$ is called the hitting time of $D$ by the process $X$.

Example Let $(X_n)_{n\in\mathbb{Z}^+}$ be adapted to $\{\mathcal{F}_n\}_{n\in\mathbb{Z}^+}$, taking values in $\mathbb{R}^d$. Then for a Borel subset $D\subset\mathbb{R}^d$ and $k\in\mathbb{Z}^+$,
$$T=\inf\{n\ge k:X_n\in D\}$$
is a stopping time. Indeed, if $n\le k-1$ then $\{T=n\}=\emptyset$, and for $n\ge k$ we have
$$\{T=n\}=\Big(\bigcap_{j=k}^{n-1}\{X_j\in D^c\}\Big)\cap\{X_n\in D\},$$
which belongs to $\mathcal{F}_n$.

If $X$ is continuous and $D\subset\mathbb{R}^d$ is a bounded closed subset of $\mathbb{R}^d$, then
$$T=\inf\{t\ge 0:X_t\in D\}$$
is a stopping time, and on $\{T<+\infty\}$ we have $X_T1_{\{T<+\infty\}}\in D$. For example, if $X$ is a continuous, adapted real-valued process with $X_0<b$, then for the hitting time
$$T_b=\inf\{t\ge 0:X_t=b\}$$
we have
$$\Big\{\sup_{t\in[0,N]}X_t<b\Big\}=\{T_b>N\}\quad\text{and}\quad\Big\{\sup_{t\in[0,N]}X_t\ge b\Big\}=\{T_b\le N\}.$$
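The identity $\{\sup_{t\le N}X_t<b\}=\{T_b>N\}$ can be checked mechanically on a sampled path. The plain-Python sketch below (not from the notes; discrete sampling of a Brownian path stands in for a continuous path, and $b=1$ is an arbitrary level) computes the first grid time at which the path reaches $b$ and compares the two events:

```python
import math
import random

random.seed(4)
n, T, b = 1000, 1.0, 1.0
dt = T / n

B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, math.sqrt(dt)))

# First grid time at which the path reaches level b; +infinity if it never does.
hit_time = next((i * dt for i, x in enumerate(B) if x >= b), math.inf)

# The events {sup X < b} and {T_b > N} coincide (here N = T):
agree = (max(B) < b) == (hit_time > T)
print(agree)
```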

The concept of stopping times provides us with a means of localizing quantities. Suppose $(X_t)_{t\ge 0}$ is a stochastic process and $T$ is a stopping time; then $X^T=(X_{t\wedge T})_{t\ge 0}$ is the stochastic process stopped at the (random) time $T$, where
$$X_t^T(\omega)=\begin{cases}X_t(\omega)&\text{if }t\le T(\omega),\\ X_{T(\omega)}(\omega)&\text{if }t\ge T(\omega).\end{cases}$$
Another useful process is $X1_{[0,T]}$, which is by definition
$$\big(X1_{[0,T]}\big)_t(\omega)=X_t1_{\{t\le T\}}(\omega)=\begin{cases}X_t(\omega)&\text{if }t\le T(\omega),\\ 0&\text{if }t>T(\omega).\end{cases}$$
It is obvious that
$$X_t^T=X_t1_{\{t\le T\}}+X_T1_{\{t>T\}}.$$
If $(X_t)_{t\ge 0}$ is adapted to the filtration $(\mathcal{F}_t)_{t\ge 0}$, so are the stopped process $(X_t^T)_{t\ge 0}$ and $X_t1_{\{t\le T\}}$.

A continuous, adapted stochastic process $M=(M_t)_{t\ge 0}$ on a filtered probability space $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ is called a local martingale if there is an increasing family $\{T_n\}$ of finite stopping times with
$$T_n\uparrow+\infty\quad\text{as }n\to+\infty$$
such that each stopped process $M^{T_n}$ is a martingale.

The Itô integration can be extended to local martingales. Let me briefly describe the idea. Suppose $M=(M_t)_{t\ge 0}$ is a continuous local martingale with initial value zero; then we may choose a sequence $\{T_n\}$ of stopping times such that $T_n\uparrow\infty$ a.s. and, for each $n$, $M^{T_n}=(M_{t\wedge T_n})_{t\ge 0}$ is a continuous, square-integrable martingale with initial value zero. In this case we may define
$$\langle M\rangle_t=\langle M^{T_n}\rangle_t\quad\text{if }t\le T_n;$$
this is consistent in $n$, and one checks that
$$M_t^2-\langle M\rangle_t$$
is a local martingale.

Let $F=(F_t)_{t\ge 0}$ be a left-continuous, adapted process such that for each $T>0$
$$\int_0^T F_s^2\,d\langle M\rangle_s<\infty\quad\text{a.s.}\tag{2.11}$$
and define
$$S_n=\inf\Big\{t\ge 0:\int_0^t F_s^2\,d\langle M\rangle_s\ge n\Big\}\wedge n.$$
Then each $S_n$ is a stopping time. Let $T_n'=T_n\wedge S_n$. Then $T_n'\uparrow\infty$ almost surely and, for each $n$, $M^{T_n'}\in\mathcal{M}_c^2$. Let
$$F(n)_t=F_t1_{\{t\le T_n'\}}.$$
Then
$$\int_0^\infty F(n)_s^2\,d\langle M^{T_n'}\rangle_s=\int_0^{T_n'}F_s^2\,d\langle M\rangle_s\le n,$$
so that $F(n)\in L^2(M^{T_n'})$. We may define
$$(F.M)_t=\int_0^t F(n)_s\,dM_s^{T_n'}\quad\text{if }t\le T_n',$$
which is consistent in $n$ and defines the Itô integral of $F$ along the local martingale $M$. One can show that $F.M$ does not depend on the choice of the stopping times $T_n'$. By definition, both $F.M$ and
$$(F.M)_t^2-\int_0^t F_s^2\,d\langle M\rangle_s$$
are continuous local martingales.

Finally, let us extend the theory of stochastic integrals to the most useful class of (continuous) semimartingales. An adapted, continuous stochastic process $X=(X_t)_{t\ge 0}$ is a semimartingale if $X$ possesses a decomposition
$$X_t=M_t+V_t$$
where $(M_t)_{t\ge 0}$ is a continuous local martingale and $(V_t)_{t\ge 0}$ is a stochastic process with finite variation on any finite interval.

Recall that if $f$ is a function with finite (total) variation, i.e.
$$\sup_D\sum_l|f(t_l)-f(t_{l-1})|<+\infty$$
where $D$ runs over all finite partitions of $[0,t]$ (for any fixed $t$), then the Lebesgue-Stieltjes integral
$$\int_0^t g(s)\,df(s)$$
is well defined for bounded Borel functions $g$. If in addition $g$ is left-continuous, then
$$\int_0^t g(s)\,df(s)=\lim_{m(D)\to 0}\sum_l g(t_{l-1})\big(f(t_l)-f(t_{l-1})\big).$$
Likewise, if $(V_t)_{t\ge 0}$ is an adapted, continuous process with finite variation, then
$$\int_0^t F_s\,dV_s$$
may be defined path by path:
$$\Big(\int_0^t F_s\,dV_s\Big)(\omega)\equiv\int_0^t F_s(\omega)\,dV_s(\omega)=\lim_{m(D)\to 0}\sum_l F_{t_{l-1}}(\omega)\big(V_{t_l}(\omega)-V_{t_{l-1}}(\omega)\big).$$
We then define the stochastic integral along a (continuous) semimartingale in the obvious way, namely
$$\int_0^t F_s\,dX_s=\int_0^t F_s\,dM_s+\int_0^t F_s\,dV_s,$$
where the first term on the right-hand side is the Itô integral with respect to the local martingale $M$, defined in the probability sense, which is again a local martingale, and the second term is the usual Lebesgue-Stieltjes integral, which is defined path-wise. Moreover
$$\int_0^t F_s\,dX_s=\lim_{m(D)\to 0}\sum_l F_{t_{l-1}}\big(X_{t_l}-X_{t_{l-1}}\big)\quad\text{in probability.}$$

2.6 Itô's formula

Itô's formula (also called Itô's lemma) is the fundamental theorem of stochastic calculus. Let us begin with a special case of Itô's formula: integration by parts for stochastic integrals. Recall Abel's summation formula
$$a_nb_n=a_0b_0+\sum_{i=0}^{n-1}(a_{i+1}b_{i+1}-a_ib_i)=a_0b_0+\sum_{i=0}^{n-1}b_{i+1}(a_{i+1}-a_i)+\sum_{i=0}^{n-1}a_i(b_{i+1}-b_i),$$
which has a counterpart in Lebesgue-Stieltjes integration. Namely, if $a$ and $b$ are two continuous and increasing functions, then
$$a(t)b(t)-a(0)b(0)=\int_0^t a(s)\,db(s)+\int_0^t b(s)\,da(s).\tag{2.12}$$
This extends to two continuous, adapted processes $(A_t)_{t\ge 0}$ and $(B_t)_{t\ge 0}$ which have finite variation, and yields an integration by parts formula for variational processes:
$$A_tB_t=A_0B_0+\int_0^t A_s\,dB_s+\int_0^t B_s\,dA_s.$$

The story for local martingales is different, basically due to the fact that if $(M_t)_{t\ge 0}$ is a continuous local martingale, then
$$\lim_{m(D)\to 0}\sum_l\big(M_{t_l}-M_{t_{l-1}}\big)^2=\langle M\rangle_t\quad\text{in prob.}$$
In terms of Itô's integration, this fact can be restated as the following integration by parts formula:
$$M_tN_t=M_0N_0+\int_0^t N_s\,dM_s+\int_0^t M_s\,dN_s+\langle M,N\rangle_t.$$

Indeed, over a partition of $[0,t]$,
$$\sum_i(M_{t_{i+1}}-M_{t_i})^2=M_t^2-M_0^2-2\sum_i M_{t_i}(M_{t_{i+1}}-M_{t_i}),$$
so that, letting the mesh tend to zero,
$$\langle M\rangle_t=M_t^2-M_0^2-2\int_0^t M_s\,dM_s.$$
The case of two local martingales follows by polarization, which gives the integration by parts formula.

Theorem Let $X_t=M_t+A_t$ and $Y_t=N_t+B_t$ each be a continuous semimartingale: $M$ and $N$ are continuous local martingales, and $A$, $B$ are continuous, adapted processes with finite variation. Then
$$X_tY_t-X_0Y_0=\int_0^t X_s\,dY_s+\int_0^t Y_s\,dX_s+\langle M,N\rangle_t.$$
In particular,
$$X_t^2-X_0^2=2\int_0^t X_s\,dX_s+\langle M\rangle_t.$$

Theorem (Itô's formula) Let $X=(X^1,\dots,X^d)$ be a continuous semimartingale in $\mathbb{R}^d$ with decompositions $X_t^i=M_t^i+A_t^i$: $M_t^1,\dots,M_t^d$ are continuous local martingales, and $A_t^1,\dots,A_t^d$ are continuous, locally integrable, adapted processes with finite variation. Let $f\in C^2(\mathbb{R}^d,\mathbb{R})$. Then
$$f(X_t)-f(X_0)=\sum_{i=1}^d\int_0^t\frac{\partial f}{\partial x_i}(X_s)\,dX_s^i+\frac12\sum_{i,j=1}^d\int_0^t\frac{\partial^2f}{\partial x_i\partial x_j}(X_s)\,d\langle M^i,M^j\rangle_s.\tag{2.13}$$
In vector notation,
$$f(X_t)-f(X_0)=\int_0^t\nabla f(X_s)\cdot dX_s+\frac12\sum_{i,j=1}^d\int_0^t\frac{\partial^2f}{\partial x_i\partial x_j}(X_s)\,d\langle M^i,M^j\rangle_s,$$
where $\nabla f=\big(\frac{\partial f}{\partial x_1},\dots,\frac{\partial f}{\partial x_d}\big)$ is the gradient vector field of $f$. As a consequence, $f(X_t)-f(X_0)$ is again a continuous semimartingale, with martingale part
$$\int_0^t\nabla f(X_s)\cdot dM_s=\sum_{i=1}^d\int_0^t\frac{\partial f}{\partial x_i}(X_s)\,dM_s^i$$
and
$$\langle f(X_\cdot)\rangle_t=\int_0^t\sum_{i,j=1}^d\frac{\partial f}{\partial x_i}(X_s)\frac{\partial f}{\partial x_j}(X_s)\,d\langle M^i,M^j\rangle_s.$$
More generally,
$$\langle f(X_\cdot),g(X_\cdot)\rangle_t=\int_0^t\sum_{i,j=1}^d\frac{\partial f}{\partial x_i}(X_s)\frac{\partial g}{\partial x_j}(X_s)\,d\langle M^i,M^j\rangle_s.$$

2.6.1 Itô's formula for BM

If $B=(B_t^1,\dots,B_t^d)_{t\ge 0}$ is a Brownian motion in $\mathbb{R}^d$, then, for $f\in C^2(\mathbb{R}^d,\mathbb{R})$,
$$f(B_t)-f(B_0)=\int_0^t\nabla f(B_s)\cdot dB_s+\frac12\int_0^t\Delta f(B_s)\,ds.$$
Let
$$M_t^f=f(B_t)-f(B_0)-\frac12\int_0^t\Delta f(B_s)\,ds.$$
Then $M^f$ is a local martingale for every $f\in C^2(\mathbb{R}^d,\mathbb{R})$ and
$$\langle M^f,M^g\rangle_t=\int_0^t\langle\nabla f,\nabla g\rangle(B_s)\,ds.$$
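Itô's formula for BM can be checked numerically along a single path. The following plain-Python sketch (not part of the notes; $f(x)=x^3$ is an arbitrary test function) accumulates the discretised right-hand side $\sum f'(B)\,\Delta B+\frac12\sum f''(B)\,\Delta t$ and compares it with $f(B_1)-f(B_0)$:

```python
import math
import random

random.seed(6)
n, T = 50000, 1.0
dt = T / n

f = lambda x: x ** 3        # test function in C^2
df = lambda x: 3 * x ** 2   # f'
d2f = lambda x: 6 * x       # f''

b, rhs = 0.0, 0.0
for _ in range(n):
    db = random.gauss(0.0, math.sqrt(dt))
    # Discretised Ito formula: df(B) ~ f'(B) dB + (1/2) f''(B) dt
    rhs += df(b) * db + 0.5 * d2f(b) * dt
    b += db

lhs = f(b) - f(0.0)
print(lhs, rhs)
```

The residual per step is $\frac12 f''(B)\big((\Delta B)^2-\Delta t\big)$ plus higher-order terms, so the two sides agree up to a small error of order $\sqrt{\Delta t}$.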

2.6.2 Proof of Itô's formula

Let us prove the Itô formula in the one-dimensional case. For simplicity we do it only for a continuous, square-integrable martingale $M=(M_t)_{t\ge 0}$. In this case we need to show
$$f(M_t)-f(M_0)=\int_0^t f'(M_s)\,dM_s+\frac12\int_0^t f''(M_s)\,d\langle M\rangle_s.\tag{2.14}$$
We have already seen that
$$M_t^2-M_0^2=2\int_0^t M_s\,dM_s+\langle M\rangle_t.$$

Suppose, by induction, that
$$M_t^n-M_0^n=n\int_0^t M_s^{n-1}\,dM_s+\frac{n(n-1)}{2}\int_0^t M_s^{n-2}\,d\langle M\rangle_s.$$
Then, by the integration by parts formula,
$$\begin{aligned}
M_t^{n+1}-M_0^{n+1}&=\int_0^t M_s^n\,dM_s+\int_0^t M_s\,dM_s^n+\langle M,M^n\rangle_t\\
&=\int_0^t M_s^n\,dM_s+\int_0^t M_s\Big(nM_s^{n-1}\,dM_s+\frac{n(n-1)}{2}M_s^{n-2}\,d\langle M\rangle_s\Big)+n\int_0^t M_s^{n-1}\,d\langle M\rangle_s\\
&=(n+1)\int_0^t M_s^n\,dM_s+\frac{(n+1)n}{2}\int_0^t M_s^{n-1}\,d\langle M\rangle_s,
\end{aligned}$$
which is (2.14) for the power function $x^{n+1}$. Therefore Itô's formula is true for any polynomial, and hence for any $C^2$ function $f$, by approximation using Taylor expansions.

2.7 Selected applications of Itô's formula

In this section, we present several applications of Itô's lemma.

2.7.1 Lévy's characterization of Brownian motion

Our first application is Lévy's martingale characterization of Brownian motion. Let $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ be a filtered probability space satisfying the usual conditions.

Theorem (P. Lévy) Let $M=(M_t)_{t\ge 0}$ be a continuous, adapted stochastic process on $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ taking values in $\mathbb{R}^d$ with initial value zero. Then $(M_t)_{t\ge 0}$ is a Brownian motion if and only if

1. each component $M^i=(M_t^i)_{t\ge 0}$ is a continuous (local) martingale, and

2. for any $i$ and $j$, the process $M_t^iM_t^j-\delta_{ij}t$ is a martingale, that is, $\langle M^i,M^j\rangle_t=\delta_{ij}t$.

Proof. The "only if" part is clear, so we prove the "if" part. Fix $\theta=(\theta_1,\dots,\theta_d)\in\mathbb{R}^d$; we first show that
$$Z_t=\exp\Big(\sqrt{-1}\,\langle\theta,M_t\rangle+\frac{|\theta|^2}{2}t\Big)=\exp\Big(\sqrt{-1}\sum_{i=1}^d\theta_iM_t^i+\frac{|\theta|^2}{2}t\Big)$$
is a martingale. Apply Itô's formula to the function $f(x)=e^x$ (in which case $f'=f''=f$) and the semimartingale
$$X_t=\sqrt{-1}\sum_{i=1}^d\theta_iM_t^i+\frac{|\theta|^2}{2}t,$$
and obtain
$$\begin{aligned}
Z_t&=Z_0+\int_0^t Z_s\,d\Big(\sqrt{-1}\sum_{i=1}^d\theta_iM_s^i+\frac{|\theta|^2}{2}s\Big)+\frac12\int_0^t Z_s\,d\Big\langle\sqrt{-1}\sum_{i=1}^d\theta_iM^i\Big\rangle_s\\
&=1+\sqrt{-1}\sum_{i=1}^d\theta_i\int_0^t Z_s\,dM_s^i+\frac{|\theta|^2}{2}\int_0^t Z_s\,ds-\frac12\sum_{i,j=1}^d\theta_i\theta_j\int_0^t Z_s\,d\langle M^i,M^j\rangle_s\\
&=1+\sqrt{-1}\sum_{i=1}^d\theta_i\int_0^t Z_s\,dM_s^i,
\end{aligned}$$
since, by the assumption $\langle M^i,M^j\rangle_t=\delta_{ij}t$,
$$\frac12\sum_{i,j=1}^d\theta_i\theta_j\int_0^t Z_s\,d\langle M^i,M^j\rangle_s=\frac{|\theta|^2}{2}\int_0^t Z_s\,ds.$$
Moreover, since $|Z_s|=e^{|\theta|^2s/2}$, for any $T>0$
$$\mathbb{E}\int_0^T|Z_s|^2\,ds=\int_0^T e^{|\theta|^2s}\,ds<+\infty,$$
so that
$$\int_0^t Z_s\,dM_s^i\in\mathcal{M}_c^2.$$
That is, $Z$ is a continuous, square-integrable martingale with initial value $1$, so that
$$\mathbb{E}\big(e^{\sqrt{-1}\langle\theta,M_t-M_s\rangle}\,\big|\,\mathcal{F}_s\big)=\exp\Big(-\frac{|\theta|^2}{2}(t-s)\Big).$$
Since the right-hand side is deterministic and $\theta$ is arbitrary, we may conclude that $M_t-M_s$ and $\mathcal{F}_s$ are independent. Taking expectations of both sides we obtain
$$\mathbb{E}\,e^{\sqrt{-1}\langle\theta,M_t-M_s\rangle}=\exp\Big(-\frac{|\theta|^2}{2}(t-s)\Big),$$
that is, $\langle\theta,M_t-M_s\rangle$ has a normal distribution with mean zero and variance $|\theta|^2(t-s)$. Therefore $(M_t)_{t\ge 0}$ is a Brownian motion in $\mathbb{R}^d$.

Theorem 2.7.2 (Dambis, Dubins and Schwarz) Let $M=(M_t)_{t\ge 0}$ be a continuous local martingale on $(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$ with initial value zero satisfying $\langle M\rangle_\infty=\infty$, and let
$$T_t=\inf\{s:\langle M\rangle_s>t\}.$$
Then $T_t$ is a stopping time for each $t\ge 0$, $B_t=M_{T_t}$ is an $(\mathcal{F}_{T_t})$-Brownian motion, and $M_t=B_{\langle M\rangle_t}$.

Proof. The family $T=(T_t)_{t\ge 0}$ is called a time-change, because each $T_t$ is a stopping time (exercise), and obviously $t\mapsto T_t$ is increasing (another exercise). Each $T_t$ is finite $\mathbb{P}$-a.e. because $\langle M\rangle_\infty=\infty$ $\mathbb{P}$-a.e. (exercise). By continuity of $\langle M\rangle_t$,
$$\langle M\rangle_{T_t}=t,\quad\mathbb{P}\text{-a.s.}$$
Applying Doob's stopping theorem to the square-integrable martingale $(M_{s\wedge T_t})_{s\ge 0}$ and the stopping times $T_t\ge T_s$ ($t\ge s$), we obtain
$$\mathbb{E}(M_{T_t}\,|\,\mathcal{F}_{T_s})=M_{T_s},$$
i.e. $B_t$ is an $(\mathcal{F}_{T_t})$-local martingale. By the same argument applied to the martingale $(M_{s\wedge T_t}^2-\langle M\rangle_{s\wedge T_t})_{s\ge 0}$ we have $\langle B\rangle_t=t$. One can also check that $B$ is continuous, so that, by Lévy's characterization, $B=(B_t)_{t\ge 0}$ is an $(\mathcal{F}_{T_t})$-Brownian motion.


Consider the differential equation that defines the exponential of a constant square matrix $A$:
$$\frac{df(t)}{dt}=Af(t);\quad f(0)=I.$$
Its solution is given by
$$f(t)=\exp(tA)=\sum_{n=0}^\infty\frac{A^n}{n!}t^n.$$
Equivalently, $f$ satisfies the integral equation
$$f(t)=I+\int_0^t f(s)\,d(As),$$
whose solution is really the exponential of $At$. It is thus very natural to define the stochastic exponential of a semimartingale $X_t=M_t+A_t$ (where $M$ is a continuous local martingale with $X_0=0$ and $A$ is an adapted continuous process with finite total variation) to be the solution of the following integral equation:
$$Y_t=1+\int_0^t Y_s\,dX_s,\tag{2.15}$$
where we use the Itô integral. To find the solution to (2.15) we may try
$$Y_t=\exp(X_t+V_t),$$
where $(V_t)_{t\ge 0}$, to be determined later, is a correction term (which has finite variation) due to the quadratic variation of $X$. Applying Itô's formula we obtain
$$Y_t=1+\int_0^t Y_s\,d(X_s+V_s)+\frac12\int_0^t Y_s\,d\langle M\rangle_s,$$
and therefore, in order to match equation (2.15), we must choose $V_t=-\frac12\langle M\rangle_t$.

Theorem Let $X_t=M_t+A_t$ be a continuous semimartingale ($M$ a continuous local martingale, $A$ an adapted continuous process with finite total variation) with $X_0=0$. Then (2.15) has a unique solution
$$Y_t=\exp\Big(X_t-\frac12\langle M\rangle_t\Big),$$
which is called the stochastic exponential of $X=(X_t)_{t\ge 0}$, denoted by $\mathcal{E}(X)$.
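For $X=B$ a Brownian motion, the theorem says the solution of $Y_t=1+\int_0^t Y_s\,dB_s$ is $Y_t=\exp(B_t-t/2)$. A plain-Python Euler-scheme sketch (not from the notes; the step count is an arbitrary choice) checks this on one path:

```python
import math
import random

random.seed(5)
n, T = 20000, 1.0
dt = T / n

b, y = 0.0, 1.0
for _ in range(n):
    db = random.gauss(0.0, math.sqrt(dt))
    y *= 1.0 + db   # Euler step for dY = Y dB, i.e. Y_t = 1 + int Y dB
    b += db

# Closed form from the theorem: E(B)_t = exp(B_t - t/2).
y_exact = math.exp(b - 0.5 * T)
print(y, y_exact)
```

The product $\prod_l(1+\Delta B_l)\approx\exp\big(B_T-\frac12\sum_l(\Delta B_l)^2\big)$, and the $-t/2$ correction appears automatically through the quadratic variation.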

In particular, we may rewrite the defining equation of the stochastic exponential $\mathcal{E}(M)$ as
$$\mathcal{E}(M)_t=\mathcal{E}(M)_0+\int_0^t\mathcal{E}(M)_s\,dM_s.$$

Let $M=(M_t)_{t\ge 0}$ be a continuous local martingale with $M_0=0$, and let
$$\mathcal{E}(M)_t=\exp\Big(M_t-\frac12\langle M\rangle_t\Big)\quad\text{for all }t\ge 0.\tag{2.16}$$
Then $X_t=\mathcal{E}(M)_t$ solves
$$X_t=1+\int_0^t X_s\,dM_s.\tag{2.17}$$

By the theory of Itô's integration, if $T>0$ is such that
$$\mathbb{E}\int_0^T e^{2M_t-\langle M\rangle_t}\,d\langle M\rangle_t<+\infty,\tag{2.18}$$
then the stochastic exponential
$$\mathcal{E}(M)_t=\exp\Big(M_t-\frac12\langle M\rangle_t\Big)$$
is a square-integrable martingale on $[0,T]$. In general $\mathcal{E}(M)$ need not be a martingale, but it is nevertheless a super-martingale.

Proposition. Let X = (X_t)_{t≥0} be a non-negative, continuous local martingale such that X_0 ∈ L^1(Ω, F, P). Then X = (X_t)_{t≥0} is a super-martingale: E(X_t|F_s) ≤ X_s for any s < t. In particular, t ↦ EX_t is decreasing, and therefore EX_t ≤ EX_0 for any t > 0.

Recall the conditional Fatou lemma: if {f_n} is a sequence of non-negative integrable functions on a probability space (Ω, F, P), then

E( liminf_n f_n | G ) ≤ liminf_n E( f_n | G )

for any sub-σ-algebra G (see page 88, D. Williams: Probability with Martingales).

Proof. By definition, there is a sequence of finite stopping times T_n ↑ +∞ P-a.e. such that X^{T_n} = (X_{t∧T_n})_{t≥0} is a martingale for each n. Hence

E( X_{t∧T_n} | F_s ) = X_{s∧T_n},  t ≥ s, n = 1, 2, ….

In particular E(X_{t∧T_n}) = EX_0, so that, by Fatou's lemma, X_t = lim_n X_{t∧T_n} is integrable. Applying the conditional Fatou lemma to X_{t∧T_n} and G = F_s for t > s we have

E( X_t | F_s ) = E( lim_n X_{t∧T_n} | F_s )
  ≤ liminf_n E( X_{t∧T_n} | F_s )
  = liminf_n X_{s∧T_n}
  = X_s.

According to the definition, X = (X_t)_{t≥0} is a super-martingale.

Corollary 2.7.7 Let M = (M_t)_{t≥0} be a continuous local martingale with M_0 = 0; then its stochastic exponential E(M) defined by (2.16) is a super-martingale. In particular,

E exp( M_t - (1/2)⟨M⟩_t ) ≤ 1  for all t ≥ 0.

It is obvious that a continuous super-martingale X = (X_t)_{t≥0} is a martingale if and only if its expectation t ↦ E(X_t) is constant. Therefore:

Corollary 2.7.8 Let M = (M_t)_{t≥0} be a continuous local martingale with M_0 = 0. Then

E(M)_t = exp( M_t - (1/2)⟨M⟩_t )

is a martingale up to time T, that is,

E( E(M)_t | F_s ) = E(M)_s  for 0 ≤ s < t ≤ T,

if and only if

E exp( M_T - (1/2)⟨M⟩_T ) = 1.  (2.19)


probability transformations. It is vital in many applications to know whether the stochastic exponential of a given martingale M = (M_t)_{t≥0} is indeed a martingale. A simple sufficient condition ensuring (2.19) is the so-called Novikov condition, stated in Theorem 2.7.14 below (A. A. Novikov: On moment inequalities and identities for stochastic integrals, Proc. Second Japan-USSR Symp. Prob. Theor., Lecture Notes in Math. 330, 333-339, Springer-Verlag, Berlin 1973).

Uniform integrability

To prove Novikov's theorem, we need the concept of uniform integrability of a family of integrable random variables and a theorem about local martingales. The concept of uniform integrability was introduced to handle the convergence of random variables in L^1(Ω, F, P); in spirit it is very close to notions you have learned in analysis: uniform convergence, uniform continuity.

If X is integrable, E|X| < +∞, then

lim_{N→∞} ∫_{{|X|≥N}} |X| dP = 0.

Definition. A family A of integrable random variables is uniformly integrable if

lim_{N→∞} sup_{X∈A} ∫_{{|X|≥N}} |X| dP = 0,

that is, E( 1_{{|X|≥N}} |X| ) tends to zero uniformly over A as N → ∞. Equivalently: for every ε > 0 there is an N > 0, depending only on ε, such that

∫_{{|X|≥N}} |X| dP < ε

for all X ∈ A.

According to the definition, we have the following examples.

1. A finite family of integrable random variables is uniformly integrable.

2. If there is an integrable random variable Y such that |X| ≤ Y for every X ∈ A, then A is uniformly integrable. In fact

sup_{X∈A} ∫_{{|X|≥N}} |X| dP ≤ ∫_{{Y≥N}} Y dP → 0  as N → ∞.

3. If A is bounded in L^p for some p > 1, then A is uniformly integrable. Indeed

sup_{X∈A} ∫_{{|X|≥N}} |X| dP ≤ sup_{X∈A} (1/N^{p-1}) ∫_{{|X|≥N}} |X|^p dP
  ≤ (1/N^{p-1}) sup_{X∈A} E|X|^p → 0

as N → ∞.

4. If X is integrable and Λ is a family of sub-σ-algebras of F, then the family

A = { X_G ≡ E(X|G) : G ∈ Λ }

is uniformly integrable. Indeed, since {X_G ≥ N} and {X_G ≤ -N} are G-measurable,

∫_{{|X_G|≥N}} |X_G| dP = ∫_{{X_G≥N}} X_G dP - ∫_{{X_G≤-N}} X_G dP
  = ∫_{{X_G≥N}} X dP - ∫_{{X_G≤-N}} X dP
  ≤ ∫_{{|X_G|≥N}} |X| dP,

and since P(|X_G| ≥ N) ≤ E|X|/N → 0 uniformly over Λ, the right-hand side tends to zero uniformly as N → ∞ by the absolute continuity of the integral.

Theorem 2.7.10 A family A of integrable random variables is uniformly integrable if and only if:

1. A is bounded in L^1: sup_{X∈A} E|X| < +∞;

2. For any ε > 0 there is a δ > 0 such that

∫_A |X| dP ≤ ε  for all X ∈ A

whenever A ∈ F with P(A) ≤ δ.

Proof. Necessity. For any A ∈ F and N > 0,

∫_A |X| dP = ∫_{A∩{|X|<N}} |X| dP + ∫_{A∩{|X|≥N}} |X| dP
  ≤ N P(A) + ∫_{{|X|≥N}} |X| dP.

Given ε > 0, choose N so large that

sup_{X∈A} ∫_{{|X|≥N}} |X| dP ≤ ε/2.

Then

sup_{X∈A} ∫_A |X| dP ≤ N P(A) + ε/2

for any A ∈ F. In particular (taking A = Ω)

sup_{X∈A} E|X| ≤ N + ε/2,

and by setting δ = ε/(2N) we also have

sup_{X∈A} ∫_A |X| dP ≤ ε

as long as P(A) ≤ δ.

Sufficiency. Let η = sup_{X∈A} E|X|. By the Markov inequality

P( |X| ≥ N ) ≤ η/N

for any N > 0. For any ε > 0, there is a δ > 0 such that the inequality in 2 holds. Choose N = η/δ. Then P(|X| ≥ N) ≤ δ, so that

∫_{{|X|≥N}} |X| dP ≤ ε

for any X ∈ A.


Corollary 2.7.11 Suppose there is an integrable random variable ξ such that for any D ∈ F,

E( 1_D |X| ) ≤ E( 1_D |ξ| )  for all X ∈ A.

Then A is uniformly integrable.

Theorem 2.7.12 Let {X_n} be a sequence of integrable random variables on (Ω, F, P). Then X_n → X in L^1(Ω, F, P) for some random variable X, i.e.

∫ |X_n - X| dP → 0  as n → ∞,

if and only if {X_n} is uniformly integrable and X_n → X in probability as n → ∞.

Proof. Necessity. For any ε > 0 there is a natural number m such that

∫ |X_n - X| dP ≤ ε/2  for all n > m.

For any A ∈ F and n > m,

∫_A |X_n| dP ≤ ∫_A |X| dP + ∫ |X_n - X| dP,

so that

sup_n ∫_A |X_n| dP ≤ ∫_A |X| dP + sup_{k≤m} ∫_A |X_k| dP + ε/2.

In particular

sup_n E|X_n| ≤ E|X| + sup_{k≤m} E|X_k| + ε/2,

i.e. {X_n : n ≥ 1} is bounded in L^1(Ω, F, P). Moreover, since X, X_1, …, X_m belong to L^1, there is δ > 0 such that, if P(A) ≤ δ, then

∫_A |X| dP + Σ_{k=1}^m ∫_A |X_k| dP ≤ ε/2,

and therefore

sup_n ∫_A |X_n| dP ≤ ε

as long as P(A) ≤ δ.

Sufficiency. By Fatou's lemma

∫ |X| dP ≤ sup_n ∫ |X_n| dP < +∞,

so X is integrable. By Theorem 2.7.10, for any ε > 0 there is δ > 0 such that

∫_A |X_n - X| dP < ε

whenever P(A) ≤ δ. Since X_n → X in probability, there is an N > 0 such that

P( |X_n - X| ≥ ε ) ≤ δ  for all n ≥ N.

Hence, for n ≥ N,

∫ |X_n - X| dP ≤ ∫_{{|X_n-X|≥ε}} |X_n - X| dP + ε P( |X_n - X| < ε ) ≤ 2ε.

Hence

lim_n E|X_n - X| = 0.

Theorem 2.7.13 Let X = (X_t)_{t≥0} be a continuous local martingale with X_0 ∈ L^1(Ω, F, P), and let T > 0. If the family {X_{T_n∧t} : n ∈ Z^+, t ≤ T} is uniformly integrable (T_n the reducing stopping times), then X is a martingale on [0, T].

Proof. For each n, X^{T_n∧t} is a martingale, so that for any t > s we have

E( X_{T_n∧t} | F_s ) = X_{T_n∧s},

and lim_{n→∞} X_{T_n∧t} = X_t. Since {X_{T_n∧t}}_{n∈Z^+} (for any fixed t ≤ T) is uniformly integrable, Theorem 2.7.12 gives

lim_{n→∞} X_{T_n∧t} = X_t  in L^1(Ω, F, P).

Therefore

E( X_t | F_s ) = E( lim_n X_{T_n∧t} | F_s ) = lim_n E( X_{T_n∧t} | F_s ) = lim_n X_{T_n∧s} = X_s.

Novikov's theorem

Now we are in a position to prove Novikov's theorem.

Theorem 2.7.14 (Novikov) Let M = (M_t)_{t≥0} be a continuous local martingale with M_0 = 0. If

E exp( (1/2)⟨M⟩_T ) < +∞,  (2.20)

then

E(M)_t = exp( M_t - (1/2)⟨M⟩_t )

is a martingale up to time T.

The proof presented here follows "… uniforme des martingales exponentielles", Acta Math. Sinica 23, 311-318 (1980). The idea is the following: first show that, under the Novikov condition (2.20), for any 0 < α < 1

E(αM)_t = exp( αM_t - (1/2)α^2 ⟨M⟩_t )

is a martingale. For any α, E(αM)_t is the stochastic exponential of the local martingale αM_t, so that E(αM) is a non-negative, continuous local martingale, and E{E(αM)_t} ≤ 1. We also have the following scaling property:

E(αM)_t = exp( α( M_t - (1/2)⟨M⟩_t ) + (1/2)α(1-α)⟨M⟩_t )
  = ( E(M)_t )^α exp( (1/2)α(1-α)⟨M⟩_t ).


For any stopping time S ≤ T and any A ∈ F, the scaling property gives

E{ 1_A E(αM)_S } = E{ 1_A (E(M)_S)^α exp( (1/2)α(1-α)⟨M⟩_S ) }.  (2.21)

Applying Hölder's inequality with exponents 1/α and 1/(1-α) in (2.21),

E{ 1_A E(αM)_S } = E{ (E(M)_S)^α · ( 1_A exp( (α/2)⟨M⟩_S ) )^{1-α} }
  ≤ { E( E(M)_S ) }^α { E( 1_A exp( (α/2)⟨M⟩_S ) ) }^{1-α}
  ≤ { E( 1_A exp( (α/2)⟨M⟩_T ) ) }^{1-α}
  ≤ { E( 1_A exp( (1/2)⟨M⟩_T ) ) }^{1-α},  (2.22)

using E(E(M)_S) ≤ 1 and ⟨M⟩_S ≤ ⟨M⟩_T. According to Corollary 2.7.11, (2.22) implies that the family of the E(αM)_S is uniformly integrable, so that E(αM) is a martingale on [0, T]. Therefore

1 = E{ E(αM)_t }
  = E{ (E(M)_t)^α exp( (1/2)α(1-α)⟨M⟩_t ) }
  ≤ ( E( E(M)_t ) )^α ( E exp( (1/2)⟨M⟩_T ) )^{1-α}

for every α ∈ (0, 1). Letting α ↑ 1 we thus obtain

E( E(M)_t ) ≥ 1,

and since E(M) is a super-martingale with E(M)_0 = 1, also E(E(M)_t) ≤ 1; hence E(E(M)_t) = 1 for all t ≤ T, so that E(M) is a martingale up to T.

Consider a standard Brownian motion B = (B_t), and F = (F_t)_{t≥0} ∈ L^2. If

E exp( (1/2) ∫_0^T F_t^2 dt ) < ∞,

then

X_t = exp( ∫_0^t F_s dB_s - (1/2) ∫_0^t F_s^2 ds )  (2.23)

is a martingale up to time T. In particular, if F = (F_t)_{t≥0} ∈ L^2 is bounded, |F_t(ω)| ≤ C for all t ≤ T and ω, where C is a constant, then

E exp( (1/2) ∫_0^T F_t^2 dt ) ≤ exp( (1/2)C^2 T ) < ∞,

so X is a martingale up to time T.

Novikov's condition is very convenient; it is however not easy to verify in many interesting cases. For example, for the stochastic exponential of the martingale ∫_0^t B_s dB_s, the Novikov condition requires an estimate of the integral

E exp( (1/2) ∫_0^T B_t^2 dt ).

We are going to present three significant applications of stochastic exponentials: a sharp improvement of Doob's maximal inequality for martingales, Girsanov's theorem, and the martingale representation theorem (in the next section). Additional applications will be discussed in the next chapter.

Proposition (Maximal inequality) Let X = (X_t) be a continuous super-martingale on [0, T]. Then for any λ > 0,

P{ sup_{t∈[0,T]} |X_t| ≥ λ } ≤ ( E(X_0) + 2E(X_T^-) ) / λ.

If in addition X is non-negative on [0, T], then

P{ sup_{t∈[0,T]} X_t ≥ λ } ≤ E(X_0) / λ.  (2.24)

The discrete-time version was (or should be) proved in B10, and the above generalization to super-martingales in continuous time can be proved by the continuity technique we have used to show Kolmogorov's inequality.

Proposition (Exponential inequality) Let M = (M_t) be a continuous local martingale with M_0 = 0. Suppose there is a (deterministic) continuous, increasing function a = a(t) such that a(0) = 0 and ⟨M⟩_t ≤ a(t) for all t ∈ [0, T]. Then

P{ sup_{t∈[0,T]} M_t ≥ α a(T) } ≤ e^{-α^2 a(T)/2}.  (2.25)

Proof. For λ > 0,

λM_t - (λ^2/2)⟨M⟩_t ≥ λM_t - (λ^2/2)⟨M⟩_T ≥ λM_t - (λ^2/2) a(T),

so that

E(λM)_t ≥ e^{λM_t - (λ^2/2) a(T)}  for λ > 0.

Applying (2.24) to the non-negative super-martingale E(λM) we obtain

P{ sup_{t∈[0,T]} M_t ≥ α a(T) } ≤ P{ sup_{t∈[0,T]} E(λM)_t ≥ e^{λα a(T) - (λ^2/2) a(T)} }
  ≤ e^{-λα a(T) + (λ^2/2) a(T)} E{ E(λM)_0 }
  = e^{-λα a(T) + (λ^2/2) a(T)}.

Minimizing the exponent over λ > 0 (take λ = α) yields (2.25).

In particular, applying the exponential inequality to a standard Brownian motion B = (B_t)_{t≥0} (with a(t) = t),

P{ sup_{t∈[0,T]} B_t ≥ αT } ≤ e^{-α^2 T/2}.  (2.26)


Girsanov's theorem

Let T > 0. Let Z = (Z_t)_{t≥0} be a continuous, positive martingale up to time T, with Z_0 = 1, on a filtered probability space (Ω, F, F_t, P) that satisfies the usual conditions. Then Z_t is the stochastic exponential of some continuous local martingale. Indeed, since Z_t > 0 we may apply Itô's formula to log Z_t:

log Z_t - log Z_0 = ∫_0^t (1/Z_s) dZ_s - (1/2) ∫_0^t (1/Z_s^2) d⟨Z⟩_s,

where

N_t = ∫_0^t (1/Z_s) dZ_s

is a continuous local martingale, so that Z = E(N).

We define a probability measure Q on the measurable space (Ω, F_T) by dQ = Z_T dP. If T = +∞, Q may not be well defined on F. It is clear, however, that Q may be extended to a probability measure on F_∞ ≡ σ{F_t : t ≥ 0} if (Z_t) is a martingale for all t ≥ 0. Moreover, since (Z_t)_{t≥0} is a martingale, the density of Q with respect to P (as measures restricted to the σ-algebra F_t, where t ≤ T if T is finite, and t < +∞ otherwise) is

dQ/dP |_{F_t} = Z_t.  (2.29)

Theorem (Girsanov) Let M = (M_t)_{t≥0} be a continuous local martingale on (Ω, F, F_t, P) up to time T. Then

X_t = M_t - ∫_0^t (1/Z_s) d⟨M, Z⟩_s

is a continuous local martingale under Q up to time T.


Proof. First assume that M, Z, 1/Z and the total variation of ⟨M, Z⟩ are bounded. In this case M, Z are bounded martingales. We want to prove that X is a martingale under the probability Q:

E_Q( X_t | F_s ) = X_s,

that is,

E( Z_t X_t | F_s ) = Z_s X_s,

i.e. Z X is a martingale under the probability measure P. By use of integration by parts, we have

Z_t X_t = Z_0 X_0 + ∫_0^t Z_s dX_s + ∫_0^t X_s dZ_s + ⟨Z, X⟩_t
  = Z_0 X_0 + ∫_0^t Z_s ( dM_s - (1/Z_s) d⟨M, Z⟩_s ) + ∫_0^t X_s dZ_s + ⟨Z, X⟩_t
  = Z_0 X_0 + ∫_0^t Z_s dM_s + ∫_0^t X_s dZ_s,

since ⟨Z, X⟩_t = ⟨Z, M⟩_t. Hence Z X is a martingale under P; the general case follows by localization.

If Z_t = E(N)_t is the stochastic exponential of a continuous local martingale N with N_0 = 0, and if Z_t is a martingale up to time T, then (Z_t) satisfies Itô's integral equation

Z_t = 1 + ∫_0^t Z_s dN_s,

so that

⟨M, Z⟩_t = ⟨ ∫_0^t dM_s, ∫_0^t Z_s dN_s ⟩ = ∫_0^t Z_s d⟨N, M⟩_s.

Therefore

∫_0^t (1/Z_s) d⟨M, Z⟩_s = ⟨N, M⟩_t.


Corollary. Let N = (N_t)_{t≥0} be a continuous local martingale with N_0 = 0. Assume that

E exp( (1/2)⟨N⟩_T ) < +∞,

so that E(N)_t is a continuous martingale up to T, and E(N)_0 = 1. Define a probability measure Q on the measurable space (Ω, F_T) by

dQ/dP |_{F_t} = E(N)_t  for all t ≤ T.

If M = (M_t)_{t≥0} is a continuous local martingale under the probability P, then

X_t = M_t - ⟨N, M⟩_t

is a continuous local martingale under Q up to time T.

2.8 The martingale representation theorem

The martingale representation theorem is a deep result about Brownian motion. There is a natural version for multi-dimensional Brownian motion; for simplicity of notation we concentrate on one-dimensional Brownian motion.

Let B = (B_t)_{t≥0} be a standard BM in R on a complete probability space (Ω, F, P), and let (F_t)_{t≥0} (together with F = F_∞) be the natural filtration of (B_t)_{t≥0}, augmented by the events of probability zero. Then, as a matter of fact, (F_t)_{t≥0} is continuous.

Theorem 2.8.1 Let M = (M_t)_{t≥0} be a square-integrable martingale on (Ω, F, F_t, P). Then there is a stochastic process F = (F_t)_{t≥0} in L^2 such that

M_t = E(M_0) + ∫_0^t F_s dB_s  a.s.

for any t ≥ 0. In particular, any martingale with respect to the Brownian filtration (F_t)_{t≥0} has a continuous version.

The proof of this theorem relies on the following lemmata. Let T > 0 be any fixed time.

Lemma 2.8.2 The family

{ φ(B_{t_1}, …, B_{t_n}) : n ∈ Z^+, t_j ∈ [0, T] and φ ∈ C_0^∞(R^n) }

is dense in L^2(Ω, F_T, P).

Lemma 2.8.3 For any h ∈ L^2([0, T]) (functions on the interval [0, T] which are square-integrable: ∫_0^T h(s)^2 ds < +∞) we associate an exponential martingale up to time T:

M(h)_t = exp( ∫_0^t h(s) dB_s - (1/2) ∫_0^t h(s)^2 ds ),  t ∈ [0, T].  (2.30)

Then the vector space spanned by all these M(h)_T is dense in L^2(Ω, F_T, P).

Proof. Suppose H ∈ L^2(Ω, F_T, P) satisfies

∫ H L dP = 0  for all L in this vector space.

For any 0 ≤ t_1 < ⋯ < t_{n+1} ≤ T and any c_i ∈ R, choose the step function h(t) = c_i for t ∈ (t_i, t_{i+1}]. Then

M(h)_T = exp( Σ_i c_i ( B_{t_{i+1}} - B_{t_i} ) - (1/2) Σ_i c_i^2 ( t_{i+1} - t_i ) ),

and therefore

∫ H exp( Σ_i c_i ( B_{t_{i+1}} - B_{t_i} ) - (1/2) Σ_i c_i^2 ( t_{i+1} - t_i ) ) dP = 0,

so that

∫ H exp( Σ_i c_i ( B_{t_{i+1}} - B_{t_i} ) ) dP = 0.

Since the c_i are arbitrary numbers, it thus follows that

∫ H exp( Σ_i c_i B_{t_i} ) dP = 0

for any c_i and t_i ∈ [0, T]. Since the left-hand side is analytic in the c_i, the last equality is true for any complex numbers c_i. We may conclude that, for any φ ∈ C_0^∞(R^n),

∫ H φ(B_{t_1}, …, B_{t_n}) dP = 0.  (2.31)

Indeed (2.31) can be shown via the Fourier integral theorem: if φ ∈ C_0^∞(R^n) then

φ(x) = (2π)^{-n/2} ∫_{R^n} φ̂(z) e^{i⟨z,x⟩} dz,

where

φ̂(z) = (2π)^{-n/2} ∫_{R^n} φ(x) e^{-i⟨z,x⟩} dx

is the Fourier transform of φ. We thus may rewrite the left-hand side of (2.31) via the Fourier integral formula:

∫ H φ(B_{t_1}, …, B_{t_n}) dP = (2π)^{-n/2} ∫ ∫_{R^n} H φ̂(z) exp( i Σ_j z_j B_{t_j} ) dz dP
  = (2π)^{-n/2} ∫_{R^n} φ̂(z) { ∫ H exp( i Σ_j z_j B_{t_j} ) dP } dz
  = 0.

By Lemma 2.8.2, the collection of all functions of the form φ(B_{t_1}, …, B_{t_n}) is dense in L^2(Ω, F_T, P), so that

∫ H G dP = 0  for any G ∈ L^2(Ω, F_T, P).

In particular ∫ H^2 dP = 0, so that H = 0 almost surely.

Lemma (Itô's representation) For any H ∈ L^2(Ω, F_T, P) there is a stochastic process F = (F_t)_{t≥0} in L^2 such that

H = E(H) + ∫_0^T F_t dB_t.

Proof. By Lemma 2.8.3 we only need to show this for H = M(h)_T (where h ∈ L^2([0, T])) defined by (2.30). But M(h)_t is an exponential martingale, so it satisfies the integral equation

M(h)_T = 1 + ∫_0^T M(h)_t d( ∫_0^t h(s) dB_s )
  = E( M(h)_T ) + ∫_0^T M(h)_t h(t) dB_t.


The martingale representation theorem now follows easily from the martingale property and Itô's representation theorem.


Chapter 3

Stochastic differential equations

Key words: linear (Gaussian) equations, geometric Brownian motion, Ornstein-Uhlenbeck process, strong solution, weak solution, uniqueness in law, path-wise uniqueness.

Key theorems: existence and uniqueness theorem for SDEs with Lipschitz coefficients, Cameron-Martin formula.

3.1 Introduction

Stochastic differential equations (SDEs) are ordinary differential equations perturbed by noise. In this course, we will only consider noise which can be modelled by Brownian motion.

The stochastic differential equations we consider thus have the form

dX_t^j = Σ_{i=1}^n f_i^j(t, X_t) dB_t^i + f_0^j(t, X_t) dt,  j = 1, …, N,  (3.1)

where B = (B_t^i) is a Brownian motion in R^n on a filtered probability space (Ω, F, F_t, P) and the coefficients

f_i^j : [0, +∞) × R^N → R

are Borel measurable. An adapted, continuous, R^N-valued stochastic process X_t ≡ (X_t^1, …, X_t^N) solves (3.1) if

X_t^j = X_0^j + Σ_{k=1}^n ∫_0^t f_k^j(s, X_s) dB_s^k + ∫_0^t f_0^j(s, X_s) ds.  (3.2)

Given one Brownian motion B, we may hope to produce a solution (X_t)_{t≥0} of SDE (3.1), and we can believe that for any Brownian motion B = (B_t)_{t≥0}, SDE (3.1) should offer the same solution in the distributional sense. This leads to different concepts of solutions and of uniqueness: strong solutions and weak solutions, path-wise uniqueness and uniqueness in law.

Definition 3.1.1 1) An adapted, continuous stochastic process X = (X_t)_{t≥0} on (Ω, F, F_t, P) is a (weak) solution of (3.1) if there is a standard Brownian motion W = (W_t)_{t≥0} in R^n, adapted to the filtration (F_t), such that

X_t^j = X_0^j + Σ_{l=1}^n ∫_0^t f_l^j(s, X_s) dW_s^l + ∫_0^t f_0^j(s, X_s) ds,  j = 1, …, N.

2) Given a standard Brownian motion B = (B_t)_{t≥0} in R^n on (Ω, F, P) with its natural filtration (F_t)_{t≥0}, an adapted, continuous stochastic process X = (X_t)_{t≥0} on (Ω, F, F_t, P) is a strong solution of (3.1) if

X_t^j = X_0^j + Σ_{i=1}^n ∫_0^t f_i^j(s, X_s) dB_s^i + ∫_0^t f_0^j(s, X_s) ds.

3) Path-wise uniqueness holds for (3.1) if, whenever (X, B) and (X̃, B) are two solutions defined on the same filtered space with the same Brownian motion B, and X_0 = X̃_0, then X = X̃.

4) Uniqueness in law holds for (3.1) if, whenever (X, B) and (X̃, B̃) are two solutions with possibly different Brownian motions B and B̃, and X_0 and X̃_0 possess the same distribution, then X and X̃ have the same distribution.

Path-wise uniqueness implies uniqueness in law.


The following is a simple example of an SDE which has no strong solution but possesses weak solutions, and for which uniqueness in law holds.

Example (Tanaka's SDE) Consider the equation

X_t = ∫_0^t sgn(X_s) dB_s,  0 ≤ t < ∞,

where sgn(x) = 1 if x ≥ 0, and sgn(x) = -1 for negative values of x. A weak solution exists by Lévy's theorem: let W_t be a one-dimensional Brownian motion, and let B_t = ∫_0^t sgn(W_s) dW_s. Then B is a one-dimensional Brownian motion, and

W_t = ∫_0^t sgn(W_s) dB_s,

so (W, B) is a weak solution.

Linear stochastic differential equations can be solved explicitly. Consider

dX_t^j = Σ_{i=1}^n σ_i^j dB_t^i + Σ_{k=1}^N λ_k^j X_t^k dt,  (3.3)

where σ = (σ_i^j) is a constant N × n matrix, and λ = (λ_k^j) a constant N × N matrix. (3.3) may be written as

dX_t = σ dB_t + λ X_t dt.

Let

e^{λt} = Σ_{k=0}^∞ λ^k t^k / k!.


Then, by integration by parts,

e^{-λt} X_t - X_0 = ∫_0^t e^{-λs} dX_s - ∫_0^t λ e^{-λs} X_s ds
  = ∫_0^t e^{-λs} ( dX_s - λ X_s ds )
  = ∫_0^t e^{-λs} σ dB_s,

so that

X_t = e^{λt} X_0 + ∫_0^t e^{λ(t-s)} σ dB_s.

In particular, if n = N = 1 and X_0 = x, then X_t has a normal distribution with mean e^{λt} x and variance

E( X_t - e^{λt} x )^2 = E( ∫_0^t e^{λ(t-s)} σ dB_s )^2
  = σ^2 e^{2λt} E( ∫_0^t e^{-λs} dB_s )^2
  = σ^2 ∫_0^t e^{2λ(t-s)} ds
  = (σ^2 / 2λ)( e^{2λt} - 1 ).
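The explicit mean and variance above are easy to cross-check numerically (an illustration, not part of the notes) with an Euler-Maruyama discretization of dX = σ dB + λ X dt; the parameter values below are arbitrary:

```python
import math
import random

random.seed(2)

def euler_linear(x=1.0, lam=-1.0, sigma=1.0, T=1.0, n_steps=200):
    """Euler-Maruyama for dX = sigma dB + lam * X dt, X_0 = x."""
    dt = T / n_steps
    X = x
    for _ in range(n_steps):
        X += sigma * random.gauss(0.0, math.sqrt(dt)) + lam * X * dt
    return X

lam, sigma, T = -1.0, 1.0, 1.0
samples = [euler_linear() for _ in range(4000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

exact_mean = math.exp(lam * T) * 1.0                            # e^{lam t} x
exact_var = sigma**2 * (math.exp(2 * lam * T) - 1) / (2 * lam)  # (sigma^2/2 lam)(e^{2 lam t}-1)
```

The sample mean and variance should match e^{-1} ≈ 0.368 and (1 - e^{-2})/2 ≈ 0.432 up to discretization and Monte-Carlo error.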

If B = (B_t^1, …, B_t^n)_{t≥0} is a Brownian motion in R^n, then the solution X_t of the SDE

dX_t = dB_t - A X_t dt

is called the Ornstein-Uhlenbeck process, where A ≥ 0 is an n × n matrix called the drift matrix. Hence we have

X_t = e^{-At} X_0 + ∫_0^t e^{-(t-s)A} dB_s,

the Ornstein-Uhlenbeck process with drift matrix A.

The stock price in the Black-Scholes model satisfies the stochastic differential equation

dS_t = S_t ( μ dt + σ dB_t ),  (3.4)

i.e. S_t is the stochastic exponential of

∫_0^t μ ds + ∫_0^t σ dB_s.

Hence

S_t = S_0 exp( ∫_0^t σ dB_s + ∫_0^t ( μ - (1/2)σ^2 ) ds ).

In the case where μ and σ are constants,

S_t = S_0 exp( σ B_t + ( μ - (1/2)σ^2 ) t ),

which is called geometric Brownian motion.
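A quick numerical illustration (not from the notes): since σB_t - σ^2 t/2 is the exponent of a stochastic exponential with expectation 1, geometric Brownian motion satisfies E[S_t] = S_0 e^{μt}. The parameters below are arbitrary:

```python
import math
import random

random.seed(3)

def gbm_terminal(S0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Sample S_T = S_0 exp(sigma B_T + (mu - sigma^2/2) T) exactly."""
    BT = random.gauss(0.0, math.sqrt(T))
    return S0 * math.exp(sigma * BT + (mu - 0.5 * sigma**2) * T)

mu, T = 0.05, 1.0
# E[S_T] = S_0 e^{mu T}: the stochastic-exponential part averages to 1.
mean_ST = sum(gbm_terminal() for _ in range(20000)) / 20000
```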

3.2 Existence and uniqueness

In this section we present two fundamental theorems about solving stochastic differential equations.

Consider first a simple stochastic differential equation

dX_t = dB_t + b(t, X_t) dt,  (3.5)

with b bounded and Borel measurable. We may solve (3.5) by means of a change of probabilities.

Let (W_t)_{t≥0} be a standard Brownian motion on (Ω, F, F_t, P), and define a probability measure Q on (Ω, F_∞) by

dQ/dP |_{F_t} = E(N)_t  for all t ≥ 0,

where N_t = ∫_0^t b(s, W_s) dW_s is a martingale (under the probability P), with

⟨N⟩_t = ∫_0^t b(s, W_s)^2 ds,

which is bounded on any finite interval; thus

E(N)_t = exp( ∫_0^t b(s, W_s) dW_s - (1/2) ∫_0^t b(s, W_s)^2 ds )

is a martingale. According to Girsanov's theorem,

B_t ≡ W_t - W_0 - ⟨W, N⟩_t

is a continuous local martingale under Q with ⟨B⟩_t = t, so that, by Lévy's martingale characterization of Brownian motion, (B_t)_{t≥0} is a Brownian motion. Moreover

⟨W, N⟩_t = ⟨ ∫_0^t dW_s, ∫_0^t b(s, W_s) dW_s ⟩ = ∫_0^t b(s, W_s) ds,

and therefore

W_t - W_0 - ∫_0^t b(s, W_s) ds = B_t

is a standard Brownian motion on (Ω, F, Q). Thus

W_t = W_0 + B_t + ∫_0^t b(s, W_s) ds,  (3.6)

i.e. (W_t)_{t≥0} under Q, together with the Brownian motion B we have just constructed, is a weak solution of SDE (3.5).

Theorem 3.2.1 (Cameron-Martin formula) Let b(t, x) = (b^1(t, x), …, b^n(t, x)) be bounded, Borel measurable functions on [0, +∞) × R^n. Let W_t = (W_t^1, …, W_t^n) be a standard Brownian motion on a filtered probability space (Ω, F, F_t, P), and let F_∞ = σ{F_t : t ≥ 0}. Define a probability measure Q on (Ω, F_∞) by

dQ/dP |_{F_t} = exp( Σ_{k=1}^n ∫_0^t b^k(s, W_s) dW_s^k - (1/2) Σ_{k=1}^n ∫_0^t |b^k(s, W_s)|^2 ds )  for t ≥ 0.

Then (W_t)_{t≥0} under the probability measure Q is a solution to the SDE

dX_t = dB_t + b(t, X_t) dt.  (3.7)

Conversely, if (X_t)_{t≥0} is a solution of (3.7) on a filtered probability space (Ω, F, F_t, P), define P̃ by

dP̃/dP |_{F_t} = exp( -Σ_{k=1}^n ∫_0^t b^k(s, X_s) dB_s^k - (1/2) Σ_{k=1}^n ∫_0^t |b^k(s, X_s)|^2 ds )  for t ≥ 0;

then we may show that (X_t)_{t≥0} under the probability measure P̃ is a Brownian motion. Therefore the solution to SDE (3.7) is unique in law: all solutions have the same distribution.

By definition, any strong solution is a weak solution. We next prove a basic existence and uniqueness theorem for a stochastic differential equation under a global Lipschitz condition. Our proof will rely on two inequalities: the Gronwall inequality and Doob's L^p-inequality (Theorem 4.2.5).

Lemma (Gronwall's inequality) Suppose a non-negative integrable function g satisfies the integral inequality

g(t) ≤ h(t) + λ ∫_0^t g(s) ds,  0 ≤ t ≤ T,

for a constant λ ≥ 0. Then

g(t) ≤ h(t) + λ ∫_0^t e^{λ(t-s)} h(s) ds,  0 ≤ t ≤ T.

Proof. Let F(t) = ∫_0^t g(s) ds. Then F(0) = 0 and

F′(t) ≤ h(t) + λ F(t),

so that

( e^{-λt} F(t) )′ ≤ e^{-λt} h(t).

Integrating the differential inequality we obtain

∫_0^t ( e^{-λs} F(s) )′ ds ≤ ∫_0^t e^{-λs} h(s) ds,

and therefore

F(t) ≤ ∫_0^t e^{λ(t-s)} h(s) ds,

which, together with g(t) ≤ h(t) + λF(t), yields Gronwall's inequality.

Consider the following SDE:

dX_t^j = Σ_{l=1}^n f_l^j(t, X_t) dB_t^l + f_0^j(t, X_t) dt,  j = 1, …, N,  (3.8)

where the f_l^j are Borel measurable functions on R^+ × R^N which are bounded on any compact subset of R^N.

We will use a special case of Doob's L^p-inequality: if (M_t)_{t≥0} is a square-integrable, continuous martingale with M_0 = 0, then for any t > 0

E sup_{s≤t} |M_s|^2 ≤ 4 sup_{s≤t} E|M_s|^2 = 4 E⟨M⟩_t.  (3.9)

Lemma 3.2.3 Let (Z_t)_{t≥0} and (Z̄_t)_{t≥0} be two continuous, adapted processes, and let f(t, x) be a Lipschitz function in x: |f(t, x) - f(t, y)| ≤ C|x - y|.

1. Let

M_t = ∫_0^t f(s, Z_s) dB_s - ∫_0^t f(s, Z̄_s) dB_s,  t ≥ 0.

Then

E sup_{s≤t} |M_s|^2 ≤ 4C^2 ∫_0^t E| Z_s - Z̄_s |^2 ds

for all t ≥ 0.

2. If

N_t = ∫_0^t f(s, Z_s) ds - ∫_0^t f(s, Z̄_s) ds,  t ≥ 0,

then

E sup_{s≤t} |N_s|^2 ≤ C^2 t ∫_0^t E| Z_s - Z̄_s |^2 ds,  t ≥ 0.

Proof. For the first part,

sup_{s≤t} |M_s|^2 = sup_{s≤t} | ∫_0^s ( f(u, Z_u) - f(u, Z̄_u) ) dB_u |^2,

so by Doob's inequality (3.9) and the Itô isometry,

E sup_{s≤t} |M_s|^2 ≤ 4 E| ∫_0^t ( f(s, Z_s) - f(s, Z̄_s) ) dB_s |^2
  = 4 E ∫_0^t | f(s, Z_s) - f(s, Z̄_s) |^2 ds
  ≤ 4C^2 E ∫_0^t | Z_s - Z̄_s |^2 ds.

For the second part, by the Cauchy-Schwarz inequality,

sup_{s≤t} |N_s|^2 = sup_{s≤t} | ∫_0^s ( f(u, Z_u) - f(u, Z̄_u) ) du |^2
  ≤ ( ∫_0^t | f(s, Z_s) - f(s, Z̄_s) | ds )^2
  ≤ t ∫_0^t | f(s, Z_s) - f(s, Z̄_s) |^2 ds
  ≤ C^2 t ∫_0^t | Z_s - Z̄_s |^2 ds,

and taking expectations completes the proof.

Theorem 3.2.4 Consider SDE (3.8). Suppose that the f_i^j satisfy the Lipschitz condition

| f_i^j(t, x) - f_i^j(t, y) | ≤ C|x - y|  (3.10)

and the growth condition

| f_i^j(t, x) | ≤ C( 1 + |x| ).  (3.11)

Then, for any square-integrable F_0-measurable initial value ξ and any Brownian motion B_t = (B_t^i) in R^n, there is a unique strong solution (X_t) of (3.8) with X_0 = ξ.

Proof. Let us only prove the one-dimensional case; the proof of the multi-dimensional case is similar. We are going to construct a strong solution to the SDE

dX_t = f_1(t, X_t) dB_t + f_0(t, X_t) dt.

We will apply Picard's iteration to the corresponding integral equation

X_t = ξ + ∫_0^t f_1(s, X_s) dB_s + ∫_0^t f_0(s, X_s) ds.

Define

Y_0(t) = ξ;

Y_{n+1}(t) = ξ + ∫_0^t f_1(s, Y_n(s)) dB_s + ∫_0^t f_0(s, Y_n(s)) ds
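The Picard scheme above can be run numerically on a single fixed Brownian path (an illustration, not part of the notes). The concrete choice f_1 = 1, f_0(x) = -x (so dX = dB - X dt) and the grid sizes are assumptions for the demo; successive iterates should contract roughly like T^n/n!:

```python
import math
import random

random.seed(4)

# One fixed discretized Brownian path on [0, T].
T, n = 1.0, 400
dt = T / n
dB = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]

def picard_step(Y, xi=1.0):
    """One Picard iteration for dX = dB - X dt:
    Y_new(t) = xi + B_t - int_0^t Y(s) ds (left-point Riemann sums)."""
    out = [xi]
    b = integ = 0.0
    for k in range(n):
        b += dB[k]
        integ += Y[k] * dt
        out.append(xi + b - integ)
    return out

Y = [1.0] * (n + 1)          # Y_0(t) = xi
gaps = []                     # sup_t |Y_{m+1}(t) - Y_m(t)|
for _ in range(6):
    Y_new = picard_step(Y)
    gaps.append(max(abs(a - c) for a, c in zip(Y, Y_new)))
    Y = Y_new
```

The last gap should be smaller than the first by roughly a factor T^5/5! = 1/120.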


for n = 0, 1, 2, …. We will show that, for every T > 0, the sequence {Y_n(t)} converges to a solution Y(t) uniformly on [0, T] almost surely. Introduce the notations

D(n)_t = Y_n(t) - Y_{n-1}(t),  n = 1, 2, …,

and

d(n)_t = E sup_{s≤t} |D(n)_s|^2.

Then

D(1)_t = ∫_0^t f_1(s, ξ) dB_s + ∫_0^t f_0(s, ξ) ds,

D(n+1)_t = ∫_0^t ( f_1(s, Y_n(s)) - f_1(s, Y_{n-1}(s)) ) dB_s + ∫_0^t ( f_0(s, Y_n(s)) - f_0(s, Y_{n-1}(s)) ) ds.

By Lemma 3.2.3 and the elementary inequality (x + y)^2 ≤ 2x^2 + 2y^2,

d(n+1)_t ≤ 2C^2 (4 + s) ∫_0^t d(n)_s ds ≤ 2C^2 (4 + T) ∫_0^t d(n)_s ds.

We therefore have, by iteration,

d(n+1)_t ≤ ( 2^n C^{2n} (4 + T)^n / n! ) d(1)_T t^n.


Moreover,

d(1)_t = E sup_{0≤s≤t} |Y_1(s) - Y_0(s)|^2
  ≤ 2E sup_{0≤s≤t} | ∫_0^s f_1(τ, ξ) dB_τ |^2 + 2E sup_{0≤s≤t} | ∫_0^s f_0(τ, ξ) dτ |^2
  ≤ 8E ∫_0^t f_1(s, ξ)^2 ds + 2t E ∫_0^t f_0(s, ξ)^2 ds
  ≤ ( 16t + 4t^2 ) C^2 ( 1 + E ξ^2 ).

Therefore

E sup_{0≤t≤T} |Y_{n+1}(t) - Y_n(t)|^2 ≤ C_1 (C_2 T)^n / n!

for constants C_1, C_2. By the Markov inequality,

P{ sup_{0≤t≤T} |Y_{n+1}(t) - Y_n(t)| ≥ 1/2^n } ≤ C_1 (4C_2 T)^n / n!,

which is summable in n; hence, by the Borel-Cantelli lemma, almost surely sup_{0≤t≤T} |Y_{n+1}(t) - Y_n(t)| < 2^{-n} for all large n, so {Y_n} converges uniformly on [0, T] almost surely. Passing to the limit in the iteration shows that the limit Y satisfies the integral equation.

Next we prove the uniqueness. Let Y and Z be two solutions with the same Brownian motion B. Then

Y_t = ξ + ∫_0^t f_1(s, Y_s) dB_s + ∫_0^t f_0(s, Y_s) ds

and

Z_t = ξ + ∫_0^t f_1(s, Z_s) dB_s + ∫_0^t f_0(s, Z_s) ds.

Again, by Lemma 3.2.3,

E| Y_t - Z_t |^2 ≤ 2C^2 (4 + T) ∫_0^t E| Y_s - Z_s |^2 ds,

so that, by Gronwall's inequality,

E| Y_t - Z_t |^2 = 0.

Finally, the solution constructed is strong: each Y_n(t) is a function of ξ and the Brownian motion B, and Y_n(t) only depends on ξ and B_s, 0 ≤ s ≤ t; hence so does the limit Y.

Theorem 3.2.5 Under the conditions of Theorem 3.2.4, given a BM B = (B_t)_{t≥0} in R^n on (Ω, F, F_t, P), let (X^x(t))_{t≥0} be the unique strong solution of (3.8) with X^x(0) = x. Then x ↦ X^x is uniformly continuous almost surely on any finite interval [0, T]:

lim_{δ↓0} sup_{|x-y|≤δ} E{ sup_{0≤t≤T} | X^x(t) - X^y(t) |^2 } = 0.  (3.12)

Proof. By definition,

X^x(t) = x + ∫_0^t f_1(s, X^x(s)) dB_s + ∫_0^t f_0(s, X^x(s)) ds

and

X^y(t) = y + ∫_0^t f_1(s, X^y(s)) dB_s + ∫_0^t f_0(s, X^y(s)) ds.

Using (a + b + c)^2 ≤ 3a^2 + 3b^2 + 3c^2, Doob's inequality and the Itô isometry,

E sup_{0≤t≤T} | X^x(t) - X^y(t) |^2
  ≤ 3|x - y|^2 + 3E sup_{0≤t≤T} | ∫_0^t ( f_1(s, X^x(s)) - f_1(s, X^y(s)) ) dB_s |^2
    + 3E sup_{0≤t≤T} | ∫_0^t ( f_0(s, X^x(s)) - f_0(s, X^y(s)) ) ds |^2
  ≤ 3|x - y|^2 + 12E | ∫_0^T ( f_1(s, X^x(s)) - f_1(s, X^y(s)) ) dB_s |^2
    + 3T E ∫_0^T | f_0(s, X^x(s)) - f_0(s, X^y(s)) |^2 ds
  = 3|x - y|^2 + 12E ∫_0^T | f_1(s, X^x(s)) - f_1(s, X^y(s)) |^2 ds
    + 3T E ∫_0^T | f_0(s, X^x(s)) - f_0(s, X^y(s)) |^2 ds
  ≤ 3|x - y|^2 + 3C^2 (4 + T) ∫_0^T E| X^x(t) - X^y(t) |^2 dt.

Setting

φ(t) = E sup_{0≤s≤t} | X^x(s) - X^y(s) |^2,

we then have

φ(T) ≤ 3|x - y|^2 + 3C^2 (4 + T) ∫_0^T φ(t) dt,

and Gronwall's inequality yields (3.12).

3.3 Martingales and weak solutions

For simplicity, let us consider the following one-dimensional, homogeneous SDE:

dX_t = σ(X_t) dB_t + b(X_t) dt,  (3.13)

where σ and b ∈ C(R) have at most linear growth. Let X = (X_t)_{t≥0} be the strong solution with initial value X_0 on a filtered probability space (Ω, F, F_t, P). If f ∈ C_b^2(R), then by Itô's formula

f(X_t) - f(X_0) = ∫_0^t f′(X_s) dX_s + (1/2) ∫_0^t f″(X_s) d⟨X⟩_s
  = ∫_0^t f′(X_s) ( σ(X_s) dB_s + b(X_s) ds ) + (1/2) ∫_0^t f″(X_s) σ^2(X_s) ds
  = ∫_0^t σ(X_s) f′(X_s) dB_s + ∫_0^t ( (1/2)σ^2 f″ + b f′ )(X_s) ds.

Let us introduce

L = (1/2) σ(x)^2 d^2/dx^2 + b(x) d/dx,  (3.14)

which is an elliptic differential operator of second order. Then the previous formula may be written as

f(X_t) - f(X_0) = ∫_0^t σ(X_s) f′(X_s) dB_s + ∫_0^t (Lf)(X_s) ds.

If we set

M_t^f = f(X_t) - f(X_0) - ∫_0^t (Lf)(X_s) ds,

then

M_t^f = ∫_0^t σ(X_s) f′(X_s) dB_s

is a (local) martingale on (Ω, F, F_t, P), and

⟨M^f, M^g⟩_t = ∫_0^t ( σ^2 f′ g′ )(X_s) ds.

Hence, if (X_t)_{t≥0} is a solution to (3.13) (with a given Brownian motion), then for any f ∈ C_b^2(R),

M_t^f = f(X_t) - f(X_0) - ∫_0^t (Lf)(X_s) ds

is a (local) martingale.


For example, if σ = 1 and b = 0 (in this case L = (1/2) d^2/dx^2 = (1/2)Δ), then (B_t)_{t≥0} itself is a strong solution to

dX_t = dB_t,

so that

M_t^f = f(B_t) - f(B_0) - (1/2) ∫_0^t (Δf)(B_s) ds

is a martingale under P. On the other hand, Lévy's martingale characterization shows that this property, that

f(B_t) - f(B_0) - (1/2) ∫_0^t (Δf)(B_s) ds

is a martingale, completely characterizes Brownian motion. Therefore we may believe that the martingale property of all the M^f should completely determine the distribution of a solution (X_t)_{t≥0} to SDE (3.13), and hence that of weak solutions of (3.13). Thus we give the following definition.

Definition 3.3.2 Let L be a linear operator on C^∞(R). Let (X_t)_{t≥0} be a stochastic process on a filtered space (Ω, F, F_t, P). Then we say that (X_t)_{t≥0}, together with the probability P, is a solution to the L-martingale problem if for every f ∈ C_b^∞(R)

M_t^f ≡ f(X_t) - f(X_0) - ∫_0^t Lf(X_s) ds

is a martingale.

If (X_t)_{t≥0} (with P) is a solution to the L-martingale problem, where L is given by (3.14), then from

M_t^f = f(X_t) - f(X_0) - ∫_0^t Lf(X_s) ds

we have

⟨M^f, M^g⟩_t = ∫_0^t { L(fg) - f(Lg) - g(Lf) }(X_s) ds,

and this is the key to showing that a solution of the martingale problem is a weak solution to the SDE.
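The defining property of the L-martingale problem can be illustrated numerically (a sketch, not part of the notes) in the Brownian case σ = 1, b = 0: with f(x) = x^2 one has (Lf)(x) = 1, so M_t^f = B_t^2 - t, which should have mean zero at every time:

```python
import math
import random

random.seed(5)

def Mf_terminal(T=1.0, n_steps=100):
    """Discretized M_T^f = f(B_T) - f(B_0) - int_0^T (Lf)(B_s) ds
    for f(x) = x^2 and L = (1/2) d^2/dx^2, i.e. B_T^2 - T."""
    dt = T / n_steps
    b = integral = 0.0
    for _ in range(n_steps):
        integral += 1.0 * dt           # (Lf)(B_s) = 1 for f(x) = x^2
        b += random.gauss(0.0, math.sqrt(dt))
    return b * b - integral

mean_Mf = sum(Mf_terminal() for _ in range(20000)) / 20000
```

The empirical mean should be near zero, consistent with M^f being a martingale started at 0.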


Theorem. Let σ and b be Borel measurable and bounded on any compact subset, and let

L = (1/2) σ(x)^2 d^2/dx^2 + b(x) d/dx.

If (X_t)_{t≥0} on (Ω, F, P) is a continuous process solving the L-martingale problem, i.e. for any f ∈ C_b^2(R)

M_t^f = f(X_t) - f(X_0) - ∫_0^t Lf(X_s) ds

is a martingale, then (X_t)_{t≥0} is a weak solution to the SDE

dX_t = σ(X_t) dB_t + b(X_t) dt.  (3.15)

Let us outline the proof only; a detailed proof, which handles the multi-dimensional case, will be given in the next section. To show that (X_t)_{t≥0} on (Ω, F, P) is a weak solution, we need to construct a Brownian motion B = (B_t)_{t≥0} such that

X_t = X_0 + ∫_0^t σ(X_s) dB_s + ∫_0^t b(X_s) ds.  (3.16)

The key of the proof is to compute ⟨X⟩_t, and the result is

⟨M^f, M^g⟩_t = ∫_0^t ( L(fg) - f Lg - g Lf )(X_s) ds
  = ∫_0^t σ^2 (∂f/∂x)(∂g/∂x)(X_s) ds.

Taking f(x) = x (writing in this case M^f as M), then

⟨M⟩_t = ∫_0^t ( σ(X_s) )^2 ds,

so that (assuming σ never vanishes)

B_t = ∫_0^t (1/σ(X_s)) dM_s

is a Brownian motion (Lévy's martingale characterization of Brownian motion). It is then obvious that (X_t, B_t) satisfies the stochastic integral equation (3.16), so that (X_t)_{t≥0} is a weak solution to (3.15).

Chapter 4

Appendix: martingales in discrete-time

This appendix collects basic facts about martingales in discrete time, including Doob's martingale inequalities and the convergence theorem for martingales. A good reference is the book "Probability with Martingales" by D. Williams, which is the text book for the third-year course B10a: Martingales through measure theory.

4.1 Several notions

In probability theory we study properties of random variables: those determined by their distributions. Let (Ω, F, P) be a probability space, and let Z^+ denote the set of all non-negative integers. An increasing family {F_n}_{n∈Z^+} of sub-σ-algebras of F is called a filtration, and a probability space (Ω, F, P) together with a filtration {F_n}_{n∈Z^+} is called a filtered probability space, denoted by (Ω, F_n, F, P). Given a sequence X ≡ {X_n}_{n∈Z^+} of random variables on the probability space (Ω, F, P), the natural filtration of the sequence {X_n} is defined to be F_n^X = σ{X_m : m ≤ n}, the smallest σ-algebra such that X_0, …, X_n are measurable. If X = {X_n} represents the process of a random phenomenon evolving in discrete time, then F_n^X is the information up to time n.

A sequence {X_n} of random variables is adapted to {F_n} if for every n ∈ Z^+, X_n is F_n-measurable. In this case we say {X_n} is an adapted sequence, or an adapted process with respect to {F_n}.


If, moreover, X_n is F_{n-1}-measurable for every n ≥ 1, we say {X_n} is predictable.

Definition. Let {F_n} be a filtration on (Ω, F, P). Then a measurable function T : Ω → Z^+ ∪ {+∞} (allowed to take the value +∞) is called a stopping time (or a random time) with respect to the filtration {F_n} if {ω : T(ω) = n} ∈ F_n for every n.

Remarks. 1. If T is a stopping time then {T = +∞} ∈ F.

2. A random variable T : Ω → Z^+ ∪ {+∞} is a stopping time if and only if {ω : T(ω) ≤ n} ∈ F_n for every n. Indeed

{ω : T(ω) ≤ n} = ∪_{k=0}^n {ω : T(ω) = k}.

3. Any constant time is a stopping time.

Example (Hitting time) Let {X_n} be an adapted process on a filtered probability space (Ω, F_n, F, P), and let B ∈ B(R). Then the first time T that the process {X_n} hits the set B,

T(ω) = inf{ n ≥ 0 : X_n(ω) ∈ B }

(with the convention that inf ∅ = +∞), is a stopping time with respect to the filtration {F_n}. T is called a hitting time.

To see why such a hitting time is really a stopping time, we observe that

{T = n} = ∩_{k=0}^{n-1} {X_k ∈ B^c} ∩ {X_n ∈ B}.

Hence {T = n} ∈ F_n and therefore T is a stopping time.

Given a stopping time T on the filtered probability space (Ω, F_n, F, P), the σ-algebra that represents the information available up to the random time T is

F_T = { A ∈ F : A ∩ {T ≤ n} ∈ F_n for every n }.

Proposition. Let {X_n} be an adapted process, let ξ be a random variable, and set X_∞ = ξ. Let T be a stopping time with respect to {F_n}, and define X_T(ω) = X_{T(ω)}(ω) for any ω. Then X_T is F_T-measurable. In particular, X_T 1_{{T<∞}} is F_T-measurable.

Proof. For any r ∈ R and any n,

{X_T ≤ r} ∩ {T ≤ n} = ∪_{k=0}^n {X_k ≤ r} ∩ {T = k},

which belongs to F_n since {X_k ≤ r} ∩ {T = k} ∈ F_k, k = 0, 1, …, n, so that X_T is F_T-measurable.

The concept of a martingale originated from fair games, once (perhaps still) popular, in which, regardless of the whims of chance in deciding the outcomes of the past and present, the expectation of a gambler's future fortune is exactly the gambler's current capital.

Definition 4.1.6 Let {X_n} be an adapted process on a filtered probability space (Ω, F_n, F, P). Suppose each X_n ∈ L^1(Ω, F, P) (i.e. X_n is integrable).

1. {X_n} is a martingale if E(X_{n+1}|F_n) = X_n for all n.

2. {X_n} is a supermartingale (resp. a submartingale) if E(X_{n+1}|F_n) ≤ X_n (resp. E(X_{n+1}|F_n) ≥ X_n) for every n.

Example 4.1.7 (Martingale transform) Let {H_n} be a predictable process and let {X_n} be a martingale, and let

(H.X)_n = Σ_{k=1}^n H_k ( X_k - X_{k-1} ),  (H.X)_0 = 0.

If each (H.X)_n is integrable (for instance, if the H_n are bounded), then {(H.X)_n} is again a martingale.
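The martingale transform can be illustrated with a small simulation (not part of the notes): take X a simple symmetric random walk and a predictable "betting strategy" H_k depending only on the path up to time k-1; the expectation of (H.X)_n should remain 0 whatever the strategy. The particular strategy below is an arbitrary example:

```python
import random

random.seed(6)

def transform_path(n=20):
    """Simulate (H.X)_n for a +/-1 random walk X and a predictable H."""
    X, HX = [0], 0.0
    for _ in range(n):
        H_k = 1.0 if X[-1] >= 0 else 2.0   # predictable: uses only X_{k-1}
        step = random.choice([-1, 1])      # martingale increment X_k - X_{k-1}
        HX += H_k * step
        X.append(X[-1] + step)
    return HX

mean_HX = sum(transform_path() for _ in range(20000)) / 20000
```

The empirical mean should be close to 0: no predictable strategy changes the expected fortune in a fair game.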

Recall Jensen's inequality for conditional expectation: if φ : R → R is a convex function, ξ and φ(ξ) ∈ L^1(Ω, F, P), and G is a sub-σ-field of F, then

φ( E(ξ|G) ) ≤ E( φ(ξ) | G ).

Note that φ(t) = (t ln t) 1_{(1,∞)}(t), t 1_{(0,∞)}(t) and |t|^p (for p ≥ 1) are all convex functions.

Theorem 4.1.8 1. Let {X_n} be a martingale, and let φ : R → R be a convex function. If φ(X_n) is integrable for every n, then {φ(X_n)} is a submartingale.

2. If {X_n} is a submartingale, and φ : R → R is an increasing convex function such that φ(X_n) is integrable for every n, then {φ(X_n)} is a submartingale.

Proof. For example, let us prove the first claim. Indeed

φ(X_n) = φ( E(X_{n+1}|F_n) )  (martingale property)
  ≤ E( φ(X_{n+1}) | F_n )  (Jensen's inequality).


In particular, if {X_n} is a non-negative submartingale such that, in addition, each X_n log^+ X_n is integrable, then (X_n log^+ X_n) is a submartingale, where log^+ x = 1_{{x≥1}} log x.

The most important result about martingales is the following Doob optional sampling theorem, which says that the (super-, sub-) martingale property holds at random times.

Theorem (Optional sampling) Let {X_n} be a martingale (resp. supermartingale), and let S, T be two bounded stopping times with S ≤ T. Then E(X_T|F_S) = X_S (resp. E(X_T|F_S) ≤ X_S).

Proof. We prove this theorem for the case that $\{X_n\}$ is a supermartingale. Let $T \le n$ (as it is bounded by our assumption). Then

$$E|X_T| = \sum_{j=0}^{n} E(|X_T| : T = j) = \sum_{j=0}^{n} E(|X_j| : T = j) \le \sum_{j=0}^{n} E|X_j| < \infty,$$

so $X_T$ (and likewise $X_S$) is integrable. Let $A \in \mathcal{F}_S$ and $j \ge 0$. Then $A \cap \{S = j\} \in \mathcal{F}_j$ and $\{T > j\} \in \mathcal{F}_j$ as $S$ and $T$ are stopping times. We consider several cases.

1. If $0 \le T - S \le 1$, then

$$\int_A (X_S - X_T)\,dP = \sum_{j=0}^{n} \int_{A \cap \{S=j\} \cap \{T>j\}} (X_j - X_{j+1})\,dP \ge 0,$$

since $A \cap \{S=j\} \cap \{T>j\} \in \mathcal{F}_j$ and $E(X_{j+1}|\mathcal{F}_j) \le X_j$. Hence $\int_A X_S\,dP \ge \int_A X_T\,dP$.

2. General case. Set $R_j = T \wedge (S + j)$, $j = 1, \dots, n$. Then each $R_j$ is a stopping time,

$$S \le R_1 \le \cdots \le R_n = T$$

4.2. DOOB'S INEQUALITIES

and

$$R_1 - S \le 1, \qquad R_{j+1} - R_j \le 1 \quad \text{for } 1 \le j \le n-1.$$

Let $A \in \mathcal{F}_S$. Then $A \in \mathcal{F}_{R_j}$ (as $S \le R_j$). Therefore, applying the first case to the $R_j$, we have

$$\int_A X_S\,dP \ge \int_A X_{R_1}\,dP \ge \cdots \ge \int_A X_T\,dP$$

so that

$$E(1_A X_S) \ge E(1_A X_T) \quad \text{for any } A \in \mathcal{F}_S.$$

Since $X_S$ is $\mathcal{F}_S$-measurable we may thus conclude that

$$X_S \ge E(X_T|\mathcal{F}_S).$$
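As a sanity check (an illustrative experiment, not part of the text), the conclusion $EX_T = EX_0$ for a martingale and a bounded stopping time can be verified exhaustively for a $\pm 1$ random walk, taking $T$ = (first hitting time of level 1) $\wedge\, n$:

```python
from itertools import product

# Exhaustive check of optional sampling for a +-1 random walk X (a martingale)
# and the bounded stopping time T = min(first hit of level 1, n).
n = 5
total = 0
for steps in product([-1, 1], repeat=n):  # all 2**n equally likely paths
    path = [0]
    for s in steps:
        path.append(path[-1] + s)
    T = next((k for k, x in enumerate(path) if x == 1), n)
    total += path[T]

# E(X_T) = E(X_0) = 0, so the integer total over all paths must vanish.
print(total)  # 0
```

Each path is equally likely, so the sum of the integer values $X_T$ over all $2^n$ paths must be exactly zero, which is what the theorem predicts.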

As a consequence, if $\{X_n\}$ is a supermartingale and $T$ is a stopping time, then

$$E|X_{T \wedge k}| \le E(X_0) + 2E\left(X_k^-\right) \quad \text{for any } k = 0, 1, 2, \dots$$

and therefore

$$E\left(|X_T| 1_{\{T < \infty\}}\right) \le 3 \sup_n E|X_n|.$$

Indeed, applying the optional sampling theorem to the bounded stopping time $T \wedge k$ we have

$$E|X_{T \wedge k}| \le EX_0 + 2E\left(X_k^-\right) \le 3 \sup_n E(|X_n|),$$

and the second inequality thus follows from the Fatou lemma.

Theorem (maximal inequalities) Let $\{X_n\}$ be a supermartingale and let $n \ge 0$. Then for any $\lambda > 0$, we have

$$\lambda P\left\{ \sup_{k \le n} X_k \ge \lambda \right\} \le EX_0 - \int_{\{\sup_{k \le n} X_k < \lambda\}} X_n\,dP = EX_0 - E\left(X_n ;\, \sup_{k \le n} X_k < \lambda\right);$$

$$\lambda P\left\{ \inf_{k \le n} X_k \le -\lambda \right\} \le -\int_{\{\inf_{k \le n} X_k \le -\lambda\}} X_n\,dP = -E\left(X_n ;\, \inf_{k \le n} X_k \le -\lambda\right);$$

and

$$\lambda P\left\{ \sup_{k \le n} |X_k| \ge \lambda \right\} \le EX_0 + 2E\left(X_n^-\right).$$

Proof. Define $R = \inf\{k \ge 0 : X_k \ge \lambda\}$ and $T = R \wedge n$. Then $T$ is a bounded stopping time. By definition,

$$X_T = X_R \ge \lambda \quad \text{on } \{R \le n\},$$

so that

$$\left\{ \sup_{k \le n} X_k \ge \lambda \right\} \subseteq \{X_T \ge \lambda\}, \qquad \left\{ \sup_{k \le n} X_k < \lambda \right\} \subseteq \{T = n\}.$$

By the optional sampling theorem,

$$EX_0 \ge EX_T = \int_{\{\sup_{k \le n} X_k \ge \lambda\}} X_T\,dP + \int_{\{\sup_{k \le n} X_k < \lambda\}} X_T\,dP$$
$$\ge \lambda P\left\{ \sup_{k \le n} X_k \ge \lambda \right\} + \int_{\{\sup_{k \le n} X_k < \lambda\}} X_n\,dP$$
$$= \lambda P\left\{ \sup_{k \le n} X_k \ge \lambda \right\} + E\left( X_n ;\, \sup_{k \le n} X_k < \lambda \right),$$

which proves the first inequality.

For the remaining inequalities, note that $Y_k = -X_k$ defines a submartingale. Define

$$R = \inf\{k \ge 0 : Y_k \ge \lambda\}, \qquad T = R \wedge n.$$

As before,

$$\left\{ \sup_{k \le n} Y_k \ge \lambda \right\} \subseteq \{Y_T \ge \lambda\}; \qquad \left\{ \sup_{k \le n} Y_k < \lambda \right\} \subseteq \{T = n\}.$$

By the optional sampling theorem for submartingales,

$$EY_n \ge EY_T = \int_{\{\sup_{k \le n} Y_k \ge \lambda\}} Y_T\,dP + \int_{\{\sup_{k \le n} Y_k < \lambda\}} Y_T\,dP$$
$$\ge \lambda P\left\{ \sup_{k \le n} Y_k \ge \lambda \right\} + \int_{\{\sup_{k \le n} Y_k < \lambda\}} Y_n\,dP.$$

Therefore

$$\lambda P\left\{ \inf_{k \le n} X_k \le -\lambda \right\} = \lambda P\left\{ \sup_{k \le n} Y_k \ge \lambda \right\}$$
$$\le EY_n - \int_{\{\sup_{k \le n} Y_k < \lambda\}} Y_n\,dP$$
$$= \int_{\{\sup_{k \le n} Y_k \ge \lambda\}} Y_n\,dP$$
$$= -\int_{\{\inf_{k \le n} X_k \le -\lambda\}} X_n\,dP.$$

As a consequence we have the following (Kolmogorov's inequality): let $\{X_n\}$ be a martingale with each $X_n \in L^2(\Omega, \mathcal{F}, P)$. Then for any $\lambda > 0$,

$$P\left\{ \sup_{k \le n} |X_k| \ge \lambda \right\} \le \frac{1}{\lambda^2}\, E\left(X_n^2\right).$$

Indeed, $\{X_k^2\}$ is a submartingale (Theorem 4.1.8), so $\{-X_k^2\}$ is a supermartingale. Applying the second inequality in the above theorem to $\{-X_k^2\}$ at level $\lambda^2$, we have

$$\lambda^2 P\left\{ \inf_{k \le n} \left(-X_k^2\right) \le -\lambda^2 \right\} \le -\int_{\{\inf_{k \le n} (-X_k^2) \le -\lambda^2\}} \left(-X_n^2\right) dP,$$

and therefore, since $\{\sup_{k \le n} |X_k| \ge \lambda\} = \{\sup_{k \le n} X_k^2 \ge \lambda^2\} = \{\inf_{k \le n} (-X_k^2) \le -\lambda^2\}$,

$$\lambda^2 P\left\{ \sup_{k \le n} X_k^2 \ge \lambda^2 \right\} \le \int_{\{\sup_{k \le n} X_k^2 \ge \lambda^2\}} X_n^2\,dP \le \int_\Omega X_n^2\,dP = E\left(X_n^2\right).$$
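Kolmogorov's inequality can also be verified exhaustively for a small $\pm 1$ random walk; the parameters below are illustrative choices, not from the text:

```python
from itertools import product
from fractions import Fraction

# Exhaustive check of P(sup_{k<=n} |X_k| >= lam) <= E(X_n^2) / lam^2
# for a +-1 random walk, which is an L^2 martingale.
n, lam = 4, 2
paths = list(product([-1, 1], repeat=n))
hits, second_moment = 0, Fraction(0)
for steps in paths:
    path = [0]
    for s in steps:
        path.append(path[-1] + s)
    if max(abs(x) for x in path) >= lam:
        hits += 1
    second_moment += Fraction(path[-1] ** 2, len(paths))

lhs = Fraction(hits, len(paths))
rhs = second_moment / lam**2
assert lhs <= rhs
print(lhs, "<=", rhs)  # 3/4 <= 1
```

Exact rational arithmetic (`Fraction`) keeps the comparison free of floating-point error; here $E(X_4^2) = 4$, so the bound is $4/\lambda^2 = 1$.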

Note that if $\varphi : \mathbb{R}_+ \to [0, \infty)$ is a continuous and increasing function such that $\varphi(0) = 0$, then

$$E\varphi(X_n) = E\int_0^{X_n} d\varphi(\lambda) = \int_\Omega \int_{[0, X_n]} d\varphi(\lambda)\,dP.$$

Theorem (Doob's $L^p$ inequality) 1. If $(X_n)$ is a submartingale, then for any $p > 1$,

$$E\left( \max_{k \le n} \left(X_k^+\right)^p \right) \le \left( \frac{p}{p-1} \right)^p E\left|X_n^+\right|^p.$$

2. If $(X_n)$ is a martingale, then for any $p > 1$,

$$E\left( \max_{k \le n} |X_k|^p \right) \le \left( \frac{p}{p-1} \right)^p E|X_n|^p.$$

Proof. Since $|X_n|$ is a submartingale when $(X_n)$ is a martingale, the second conclusion follows from the first. Let us prove the first conclusion. By replacing $(X_n)$ by $(X_n^+)$, we may, without loss of generality, assume that $(X_n)$ is a non-negative submartingale. Write $X_n^* = \max_{k \le n} X_k$. By Fubini's theorem,

$$E\{\varphi(X_n^*)\} = E\left( \int_0^{X_n^*} d\varphi(\lambda) \right) = \int_0^\infty P(X_n^* \ge \lambda)\,d\varphi(\lambda) \le \int_0^\infty \frac{1}{\lambda}\, E\left(X_n ;\, X_n^* \ge \lambda\right) d\varphi(\lambda),$$

where the last step uses the maximal inequality for non-negative submartingales,

$$\lambda P(X_n^* \ge \lambda) \le E\left\{X_n ;\, X_n^* \ge \lambda\right\}.$$


We thus obtain

$$E\{\varphi(X_n^*)\} \le \int_0^\infty \frac{1}{\lambda}\, E\left\{X_n ;\, X_n^* \ge \lambda\right\} d\varphi(\lambda) = \int_0^\infty \frac{1}{\lambda} \int_{\{X_n^* \ge \lambda\}} X_n\,dP\,d\varphi(\lambda) = E\left\{ X_n \int_0^{X_n^*} \frac{1}{\lambda}\,d\varphi(\lambda) \right\}. \tag{4.1}$$

Choosing $\varphi(\lambda) = \lambda^p$, so that $d\varphi(\lambda) = p\lambda^{p-1}\,d\lambda$, we get

$$E|X_n^*|^p \le E\left\{ X_n \int_0^{X_n^*} p\lambda^{p-2}\,d\lambda \right\} = \frac{p}{p-1}\, E\left\{ X_n \left(X_n^*\right)^{p-1} \right\} \le \frac{p}{p-1} \left( E X_n^p \right)^{1/p} \left( E \left(X_n^*\right)^p \right)^{1/q},$$

where $\frac{1}{p} + \frac{1}{q} = 1$ and the last inequality follows from Hölder's inequality. Dividing both sides by $(E(X_n^*)^p)^{1/q}$ (which may be assumed finite by first replacing $X_n^*$ with $X_n^* \wedge m$ and letting $m \to \infty$) yields the claim.
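The $L^p$ bound can be verified exhaustively for $p = 2$ on a $\pm 1$ random walk (an illustrative check with assumed parameters, not from the text), where $(p/(p-1))^p = 4$:

```python
from itertools import product
from fractions import Fraction

# Exhaustive check of Doob's L^p inequality with p = 2 on a +-1 random walk:
# E(max_{k<=n} X_k^2) <= (p/(p-1))^p * E(X_n^2) = 4 * E(X_n^2).
n = 4
paths = list(product([-1, 1], repeat=n))
lhs = rhs = Fraction(0)
for steps in paths:
    path = [0]
    for s in steps:
        path.append(path[-1] + s)
    lhs += Fraction(max(x * x for x in path), len(paths))   # running-max term
    rhs += Fraction(4 * path[-1] ** 2, len(paths))          # 4 * terminal term

assert lhs <= rhs
print(lhs, "<=", rhs)
```

Here $E(X_4^2) = 4$, so the right-hand side is exactly $16$; the left-hand side is considerably smaller, which reflects how much slack the constant $(p/(p-1))^p$ leaves on short walks.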

Exercise 4.2.6 1. Prove that $\log x \le x/e$ for all $x > 0$, and hence prove that

$$a \log^+ b \le a \log^+ a + \frac{b}{e} \qquad \text{for all } a, b \ge 0.$$

Theorem 4.2.7 (Doob's inequality) Let $(X_n)$ be a non-negative submartingale. Then

$$E\left( \max_{k \le n} X_k \right) \le \frac{e}{e-1} \left( 1 + \max_{k \le n} E\left( X_k \log^+ X_k \right) \right).$$

Proof. We may use the same argument as in the proof of the previous theorem, but with the choice $\varphi(\lambda) = (\lambda - 1)^+$. We thus obtain (by (4.1))

$$E\{\varphi(X_n^*)\} \le E\left\{ X_n \int_0^{X_n^*} \frac{1}{\lambda}\,d\varphi(\lambda) \right\} = E\left\{ X_n \int_1^{X_n^*} \frac{1}{\lambda}\, 1_{\{X_n^* \ge 1\}}\,d\lambda \right\} = E\left( X_n \log^+ X_n^* \right),$$

so that

$$E(X_n^* - 1) \le E(X_n^* - 1)^+ \le E\left( X_n \log^+ X_n^* \right).$$

Since (by Exercise 4.2.6)

$$X_n \log^+ X_n^* \le X_n \log^+ X_n + \frac{1}{e}\, X_n^*,$$

it thus follows that

$$E(X_n^* - 1) \le E\left( X_n \log^+ X_n^* \right) \le E\left( X_n \log^+ X_n \right) + \frac{1}{e}\, E X_n^*,$$

so that

$$E X_n^* \le \frac{1}{1 - 1/e} \left( 1 + E\left( X_n \log^+ X_n \right) \right).$$

Let $\{X_n : n \in \mathbb{Z}_+\}$ be an adapted sequence of random variables, and let $[a, b]$ be a closed interval. Define

$$T_0 = \inf\{n \ge 0 : X_n \le a\};$$
$$T_1 = \inf\{n > T_0 : X_n \ge b\};$$
$$T_{2j} = \inf\{n > T_{2j-1} : X_n \le a\};$$
$$T_{2j+1} = \inf\{n > T_{2j} : X_n \ge b\}.$$

Then $\{T_k\}$ is an increasing sequence of stopping times. If $T_{2j-1}(\omega) < \infty$, then the sequence

$$X_0(\omega), \dots, X_{T_{2j-1}}(\omega)$$

upcrosses the interval $[a, b]$ exactly $j$ times. Denote by $U_{ab}(X; n)$ the number of upcrossings of $[a, b]$ by $\{X_k\}$ up to time $n$. Then we have

$$\{U_{ab}(X; n) = j\} = \{T_{2j-1} \le n < T_{2j+1}\} \in \mathcal{F}_n.$$

Note that by definition,

$$X_{T_{2j}} \le a \quad \text{on } \{T_{2j} < \infty\}; \qquad X_{T_{2j+1}} \ge b \quad \text{on } \{T_{2j+1} < \infty\}.$$
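The upcrossing count $U_{ab}(X; n)$ is easy to compute along a single path. A minimal sketch (the path below is made up for illustration):

```python
def upcrossings(path, a, b):
    """Count completed upcrossings of [a, b]: each passage from a value
    <= a to a later value >= b counts as one upcrossing."""
    count, below = 0, False
    for x in path:
        if not below and x <= a:
            below = True                     # the path has reached level a
        elif below and x >= b:
            below, count = False, count + 1  # completed one upcrossing
    return count


# Made-up path: it drops to 0 and -1 and climbs to 3 and 4, so it
# upcrosses [0, 2] exactly twice (completed at times 1 and 3).
print(upcrossings([0, 3, -1, 4, 1, 5], a=0, b=2))  # 2
```

Tracing the stopping times above on this path gives $T_0 = 0$, $T_1 = 1$, $T_2 = 2$, $T_3 = 3$ and $T_4 = \infty$, in agreement with the count of 2.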

4.3. THE CONVERGENCE THEOREM

Theorem (upcrossing inequality) 1. If $X = \{X_n\}$ is a supermartingale, then for any $n \ge 1$, $k \ge 0$, we have

$$P\left\{ U_{ab}(X; n) \ge k+1 \right\} \le \frac{1}{b-a}\, E\left\{ (X_n - a)^- : U_{ab}(X; n) = k \right\}$$

and

$$E\,U_{ab}(X; n) \le \frac{1}{b-a}\, E(X_n - a)^-.$$

2. Similarly, if $X = \{X_n\}$ is a submartingale, then

$$P\left\{ U_{ab}(X; n) \ge k+1 \right\} \le \frac{1}{b-a}\, E\left\{ (X_n - a)^+ : U_{ab}(X; n) = k \right\}$$

and

$$E\,U_{ab}(X; n) \le \frac{1}{b-a}\, E(X_n - a)^+.$$

Proof. 1. Since $X$ is a supermartingale, by Doob's optional sampling theorem,

$$0 \ge E\left( X_{T_{2k+1} \wedge n} - X_{T_{2k} \wedge n} \right)$$
$$= E\left( \left( X_{T_{2k+1} \wedge n} - X_{T_{2k} \wedge n} \right) 1_{\{T_{2k} \le n < T_{2k+1}\}} \right) + E\left( \left( X_{T_{2k+1} \wedge n} - X_{T_{2k} \wedge n} \right) 1_{\{T_{2k+1} \le n\}} \right)$$
$$\ge E\left( (X_n - a)\, 1_{\{T_{2k} \le n < T_{2k+1}\}} \right) \quad (\text{as } X_{T_{2k} \wedge n} = X_{T_{2k}} \le a \text{ on this event})$$
$$\quad + E\left( (b-a)\, 1_{\{T_{2k+1} \le n\}} \right) \quad (\text{as } X_{T_{2k+1} \wedge n} = X_{T_{2k+1}} \ge b \text{ on } \{T_{2k+1} \le n\}).$$

However

$$\{U_{ab}(X; n) \ge k+1\} = \{T_{2k+1} \le n\}, \qquad \{T_{2k} \le n < T_{2k+1}\} \subseteq \{U_{ab}(X; n) = k\},$$

so that

$$0 \ge -E\left( (X_n - a)^-\, 1_{\{U_{ab}(X; n) = k\}} \right) + (b-a)\, P\{U_{ab}(X; n) \ge k+1\}.$$

Thus we get the first inequality. By adding up over all $k \ge 0$ we get the second inequality.

2. The proof of the second claim is very similar. Again by Doob's optional sampling theorem, now applied to the submartingale $X$,

$$0 \ge E\left( X_{T_{2k-1} \wedge n} - X_{T_{2k} \wedge n} \right)$$
$$= E\left( \left( X_{T_{2k-1} \wedge n} - X_{T_{2k} \wedge n} \right) 1_{\{T_{2k-1} \le n < T_{2k}\}} \right) + E\left( \left( X_{T_{2k-1} \wedge n} - X_{T_{2k} \wedge n} \right) 1_{\{T_{2k} \le n\}} \right)$$
$$\ge E\left( (b - X_n)\, 1_{\{T_{2k-1} \le n < T_{2k}\}} \right) + E\left( (b-a)\, 1_{\{T_{2k} \le n\}} \right),$$

since $X_{T_{2k-1}} \ge b$ on $\{T_{2k-1} \le n\}$ and $X_{T_{2k}} \le a$ on $\{T_{2k} \le n\}$. As $\{T_{2k-1} \le n < T_{2k}\} \subseteq \{U_{ab}(X; n) = k\}$, $X_n - b \le (X_n - a)^+$, and $\{T_{2k} \le n\} \supseteq \{T_{2k+1} \le n\} = \{U_{ab}(X; n) \ge k+1\}$, the first inequality follows as before, and summing over $k$ gives the second.
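These bounds can also be verified exhaustively on a small example. The sketch below checks the submartingale bound $E\,U_{ab}(X; n) \le E(X_n - a)^+/(b-a)$ for a $\pm 1$ random walk (a martingale, hence also a submartingale); the parameters are illustrative assumptions:

```python
from itertools import product
from fractions import Fraction


def upcrossings(path, a, b):
    """Count completed upcrossings of [a, b] along one path."""
    count, below = 0, False
    for x in path:
        if not below and x <= a:
            below = True
        elif below and x >= b:
            below, count = False, count + 1
    return count


# Exhaustive check of E U_ab(X; n) <= E (X_n - a)^+ / (b - a)
# for a +-1 random walk over all 2**n equally likely paths.
n, a, b = 6, 0, 2
paths = list(product([-1, 1], repeat=n))
eu = epos = Fraction(0)
for steps in paths:
    path = [0]
    for s in steps:
        path.append(path[-1] + s)
    eu += Fraction(upcrossings(path, a, b), len(paths))       # E U_ab(X; n)
    epos += Fraction(max(path[-1] - a, 0), len(paths))        # E (X_n - a)^+

assert eu <= epos / (b - a)
print(eu, "<=", epos / (b - a))
```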

Theorem (supermartingale convergence theorem) Let $\{X_n\}$ be a supermartingale. If $\sup_n E|X_n| < +\infty$, then $X_n$ converges almost surely to an integrable random variable $X_\infty$.

Proof. Let $U_{ab}(X) = \lim_{n \to \infty} U_{ab}(X; n)$. By the upcrossing inequality and monotone convergence,

$$E\,U_{ab}(X) \le \frac{1}{b-a} \sup_n E(X_n - a)^- \le \frac{|a|}{b-a} + \frac{1}{b-a} \sup_n E|X_n| < \infty.$$

Therefore

$$U_{ab}(X) < \infty, \quad \text{almost surely.}$$

Let

$$W_{(a,b)} = \left\{ \liminf_n X_n < a,\ \limsup_n X_n > b \right\}$$

and

$$W = \bigcup_{(a,b)} W_{(a,b)},$$

the union over all rational pairs $(a, b)$ with $a < b$, which is a countable union. Clearly

$$W_{(a,b)} \subseteq \{U_{ab}(X) = \infty\}$$


so that

$$P(W_{(a,b)}) = 0.$$

Hence $P(W) = 0$. However, if $\omega \notin W$, then $\lim_n X_n(\omega)$ exists, and we denote it by $X_\infty(\omega)$; on $W$ we let $X_\infty(\omega) = 0$. Then we have $X_n \to X_\infty$ almost surely. Moreover, by the Fatou lemma,

$$E|X_\infty| \le \liminf_n E|X_n| \le \sup_n E|X_n| < \infty,$$

i.e. $X_\infty \in L^1(\Omega, \mathcal{F}, P)$. If in addition $\{X_n\}$ is non-negative, then

$$E(X_\infty|\mathcal{F}_n) \le X_n.$$
