
MA4F7 Brownian Motion

March 16, 2013

Contents

1 Brownian Sample Paths
  1.1 Brownian Motion as a Gaussian Process
  1.2 Growth rate of paths
  1.3 Regularity

2 Brownian motion as a Markov Process
  2.1 Markov transition functions
  2.2 Strong Markov Process
  2.3 Arcsine Laws for Brownian motion

3 Brownian Martingales

4 Donsker's theorem

5 Up Periscope

These notes are based on the 2013 MA4F7 Brownian Motion course, taught by Roger
Tribe, typeset by Matthew Egginton.
No guarantee is given that they are accurate or applicable, but hopefully they will assist
your study.
Please report any errors, factual or typographical, to m.egginton@warwick.ac.uk

The key aim is to show that scaled random walks converge to a limit called Brownian motion. In one dimension, P{t ↦ B_t is nowhere differentiable} = 1. Since E(B_t) = 0 and E(B_t²) = t, the path t ↦ B_t is not differentiable at 0, and shifting gives the same at any fixed t.
We also have P(∫₀¹ χ(B_s > 0) ds ∈ dx) = dx/(π√(x(1 − x))).
For a domain D and A ⊆ ∂D, P(x + B exits D in A) = U(x), where ΔU(x) = 0 in D, U = 1 on A and U = 0 on ∂D \ A. For a disc with inner radius a and outer radius b, U(x) = (log b − log |x|)/(log b − log a), and this converges to 1 as b → ∞. Thus the probability that (planar) Brownian motion hits any ball is 1.
For random walks, P(x + walk exits D in A) = U(x), where U(x) = ¼(U(x + e₁) + U(x − e₁) + U(x + e₂) + U(x − e₂)), which can be thought of as a discrete Laplace equation. Thus we have a nice equation for Brownian motion, but a not so nice one for random walks.

1 Brownian Sample Paths


Our standard space is a probability space (Ω, F, P).

Definition 1.1 A stochastic process (Bt , t ≥ 0) is called a Brownian Motion on R if

1. t 7→ Bt is continuous for a.s. ω

2. For 0 ≤ t1 < t2 < ... < tn we have Bt2 − Bt1 , ..., Btn − Btn−1 are independent

3. For 0 ≤ s < t we have Bt − Bs is Gaussian with distribution N (0, t − s).

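Though not part of the course, the definition suggests a direct recipe for simulating approximate sample paths: sum independent N(0, Δt) increments. A minimal sketch, assuming Python with numpy (step count and seed are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
n, T = 1000, 1.0                      # steps and time horizon (arbitrary)
dt = T / n
# property 3: increments over disjoint intervals are N(0, dt);
# property 2 (independence) is built into the sampling
paths = rng.normal(0.0, np.sqrt(dt), size=(5000, n)).cumsum(axis=1)
endpoints = paths[:, -1]              # samples of B_1 (with B_0 = 0)
print(endpoints.mean(), endpoints.var())   # approximately 0 and T = 1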
But does this even exist, and if it does, do the above properties characterise B? The answer to both questions is yes, and we will show this later.
We now define the terms used in the above definition, to avoid any confusion.

Definition 1.2 A random variable Z is a measurable function Z : Ω → R. In full, R has the Borel σ-algebra B(R), and measurable means that if A ∈ B(R) then Z⁻¹(A) ∈ F.

Definition 1.3 A stochastic process is a family of random variables (Xt , t ≥ 0) all defined
on Ω.

We do not worry about what Ω is; we are only interested in the law (distribution) of Z, i.e. P(Z ∈ A) or E(f(Z)), where P(Z ∈ A) = P{ω : Z(ω) ∈ A}.
If we fix ω, the function t ↦ B_t(ω) is called the sample path for ω.
The first property above means that t ↦ B_t(ω) is continuous for almost all ω. Some books say that P{ω : t ↦ B_t(ω) is continuous} = 1, but it is not obvious that this set is measurable.

Definition 1.4 A real random variable Z is Gaussian N(µ, σ²), for σ² > 0, if it has density

P(Z ∈ dz) = (1/√(2πσ²)) e^{−(z−µ)²/2σ²} dz,

meaning that one integrates both sides over a set A to get the probability of A. If σ = 0 then P(Z = µ) = 1.

1.0.1 Related Animals


The Brownian Bridge is X_t = B_t − tB_1 for t ∈ [0, 1].
An Ornstein-Uhlenbeck process is one, for C > 0, of the form X_t = e^{−Ct} B_{e^{2Ct}}, and is defined for t ∈ R. We will check that X here is stationary; also (X_{t+T} : t ≥ 0) is still an O-U process. This arises as the solution of the simplest SDE, dX_t = −C X_t dt + √(2C) dB̂_t, or in integral form, X_t = X_0 − C ∫₀ᵗ X_s ds + ∫₀ᵗ √(2C) dB_s.
A Brownian motion on R^d is a process (B_t : t ≥ 0) with B_t = (B_t^1, ..., B_t^d), where each t ↦ B_t^k is a Brownian motion on R and they are independent.

1.1 Brownian Motion as a Gaussian Process


Proposition 1.5 (Facts about Gaussians)

1. If Z ~ N(µ, σ²) then for c ≥ 0 we have cZ ~ N(cµ, c²σ²).

2. If Z₁ ~ N(µ₁, σ₁²) and Z₂ ~ N(µ₂, σ₂²) are independent, then Z₁ + Z₂ ~ N(µ₁ + µ₂, σ₁² + σ₂²).

3. If Z_k ~ N(µ_k, σ_k²) and Z_k → Z, then lim_{k→∞} µ_k = µ, lim_{k→∞} σ_k² = σ² and Z ~ N(µ, σ²).

The convergence above can be any one of the following.


1. Almost sure convergence: Z_k →^{a.s.} Z means P({ω : Z_k(ω) → Z(ω)}) = 1.

2. In probability: Z_k →^{prob} Z means P(|Z_k − Z| > ε) → 0 as k → ∞, for all ε > 0.

3. In distribution: Z_k →^D Z means E(f(Z_k)) → E(f(Z)) for any continuous and bounded f.
Example 1.1 I = ∫₀¹ B_t dt is a Gaussian variable. Indeed

I = lim_{N→∞} (1/N)(B_{1/N} + B_{2/N} + ... + B_{N/N})
  = lim_{N→∞} (1/N)((B_{N/N} − B_{(N−1)/N}) + 2(B_{(N−1)/N} − B_{(N−2)/N}) + ... + N(B_{1/N} − B_0)),

and the increments are independent Gaussians, so each sum is Gaussian, and hence so is the limit I (by fact 3).

1.1.1 Transforms
Definition 1.6 We define the Fourier transform, or the characteristic function to
be
φZ (θ) = E(eiθZ )

For example, if Z ~ N(µ, σ²) then φ_Z(θ) = e^{iθµ} e^{−σ²θ²/2}.

Proposition 1.7 (More facts about Gaussians)

4. φ_Z(θ) determines the law of Z, i.e. if φ_Z(θ) = φ_Y(θ) for all θ then P(Z ∈ A) = P(Y ∈ A).

5. Z₁, Z₂ are independent if and only if E(e^{iθ₁Z₁} e^{iθ₂Z₂}) = E(e^{iθ₁Z₁}) E(e^{iθ₂Z₂}) for all θ₁, θ₂.

6. φ_{Z_k}(θ) → φ_Z(θ) for all θ if and only if Z_k →^D Z.

These all hold true for Z = (Z₁, ..., Z_d) with φ_Z(θ₁, ..., θ_d) = E(e^{iθ₁Z₁ + ... + iθ_dZ_d}).

Definition 1.8 Z = (Z₁, ..., Z_d) ∈ R^d is Gaussian if Σ_{k=1}^d λ_k Z_k is Gaussian in R for all λ₁, ..., λ_d. (X_t, t ≥ 0) is a Gaussian process if (X_{t₁}, ..., X_{t_N}) is a Gaussian vector on R^N for any t₁, ..., t_N and N ≥ 1.

Check that Brownian motion is a Gaussian process, i.e. is (B_{t₁}, ..., B_{t_N}) a Gaussian vector, or is Σ λ_k B_{t_k} Gaussian on R? We can massage this sum into the form µ₁(B_{t₁} − B_0) + ... + µ_N(B_{t_N} − B_{t_{N−1}}), a sum of independent Gaussians, and so it is Gaussian. As an exercise, check this for Brownian bridges and O-U processes.

Proposition 1.9 (Even more facts about Gaussians)

7. The law of the Gaussian Z = (Z₁, ..., Z_d) is determined by E(Z_k) and E(Z_j Z_k) for j, k = 1, ..., d.

8. Suppose Z = (Z₁, ..., Z_d) is Gaussian. Then Z₁, ..., Z_d are independent if and only if E(Z_j Z_k) = E(Z_j)E(Z_k) for all j ≠ k.

For 7, it is enough to calculate φ_Z(θ) and see that it is determined by these quantities. For 8, one need only check that the transforms factor.

Example 1.2 Let (B_t) be a Brownian motion on R. Then E(B_t) = 0 and, for 0 ≤ s < t, E(B_s B_t) = E((B_t − B_s)(B_s − B_0) + (B_s − B_0)²) = E(B_t − B_s)E(B_s − B_0) + E((B_s − B_0)²) = s, and similarly it equals t if 0 ≤ t ≤ s. In short, E(B_s B_t) = s ∧ t.

Do the same for Brownian bridges and O-U processes.

Theorem 1.10 (Gaussian characterisation of Brownian motion) If (Xt , t ≥ 0) is


a Gaussian process with continuous paths and E(Xt ) = 0 and E(Xs Xt ) = s ∧ t then (Xt )
is a Brownian motion on R.

Proof We simply check properties 1, 2, 3 in the definition of Brownian motion. 1 is immediate. For 2, since the increments are jointly Gaussian, we need only check that E((X_{t_{j+1}} − X_{t_j})(X_{t_{k+1}} − X_{t_k})) = 0. Suppose t_j ≤ t_{j+1} ≤ t_k ≤ t_{k+1}; then E((X_{t_{j+1}} − X_{t_j})(X_{t_{k+1}} − X_{t_k})) = t_{j+1} − t_{j+1} − t_j + t_j = 0, as required. For 3, X_t − X_s is a linear combination of Gaussians and so is Gaussian. It has mean zero and

E((X_t − X_s)²) = E(X_s² − 2X_sX_t + X_t²) = s − 2 s∧t + t = t − s.

Q.E.D.
Let I = ∫₀¹ B_s ds. Then E(I) = ∫₀¹ E(B_s) ds = 0, and also

E(I²) = E(∫₀¹ B_s ds ∫₀¹ B_r dr) = ∫₀¹∫₀¹ E(B_s B_r) ds dr = ∫₀¹∫₀¹ s ∧ r ds dr = 1/3,

but we need to check that we can use Fubini, i.e. that K = E(∫₀¹∫₀¹ |B_r||B_s| dr ds) < ∞. By Cauchy-Schwarz,

K = ∫₀¹∫₀¹ E(|B_r||B_s|) dr ds ≤ ∫₀¹∫₀¹ √(E(B_r²)E(B_s²)) dr ds = ∫₀¹∫₀¹ √(rs) dr ds < 1,

as we wanted.
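As a numerical sanity check (not from the notes), simulating Riemann sums of Brownian paths should reproduce E(I) = 0 and E(I²) = 1/3. A sketch, assuming numpy:

import numpy as np

rng = np.random.default_rng(1)
n, paths = 1000, 20000
dt = 1.0 / n
B = rng.normal(0.0, np.sqrt(dt), size=(paths, n)).cumsum(axis=1)
I = B.mean(axis=1)            # Riemann approximation of int_0^1 B_s ds
print(I.mean(), I.var())      # close to 0 and 1/3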

Lemma 1.11 (Scaling Lemma) Suppose that B is a Brownian motion on R and c > 0. Define X_t = (1/c) B_{c²t} for t ≥ 0. Then X is a Brownian motion on R.

Proof Clearly it has continuous paths and E(X_t) = 0. Now

E(X_s X_t) = E((1/c) B_{c²s} (1/c) B_{c²t}) = (1/c²)(c²s ∧ c²t) = s ∧ t,

and also

Σ_{k=1}^N λ_k X_{t_k} = Σ_{k=1}^N (λ_k/c) B_{c²t_k},

and this is Gaussian since B is a Gaussian process. Q.E.D.

Lemma 1.12 (Inversion lemma) Suppose that B is a Brownian motion on R. Define X_t = t B_{1/t} for t > 0 and X_0 = 0. Then X is a Brownian motion.
Proof

Σ_{k=1}^N λ_k X_{t_k} = Σ_{k=1}^N λ_k t_k B_{1/t_k},

which is Gaussian for t_k > 0; if any of the t_k = 0 then that term contributes zero to the sum, so we are fine. Clearly E(X_t) = 0, and for 0 < s < t,

E(X_s X_t) = E(st B_{1/s} B_{1/t}) = st (1/s ∧ 1/t) = st/t = s.

We also have no problem with the continuity of paths for t > 0. However we need to check continuity at t = 0, i.e. that tB_{1/t} → 0 as t → 0, or equivalently that (1/s)B_s → 0 as s → ∞. We expect that B_t ≈ ±√t, and so B_t/t → 0 should be plausible.
Indeed, we know that (X_{t₁}, ..., X_{t_N}) =^D (B̂_{t₁}, ..., B̂_{t_N}) for a Brownian motion B̂, provided t_i > 0, and since B̂_t → 0 as t → 0 surely X_t → 0 as well. We pin this down precisely:

[X_t → 0 as t → 0] = {X_q → 0 as q → 0, q ∈ Q}
 = {∀ε > 0 ∃δ > 0 : q ∈ Q ∩ (0, δ] ⟹ |X_q| < ε}
 = ∩_{N=1}^∞ ∪_{M=1}^∞ ∩_{q∈Q∩(0,1/M]} {|X_q| < 1/N},

and so

P[X_t → 0 as t → 0] = lim_{N→∞} lim_{M→∞} lim_{k→∞} P{|X_{q₁}| < 1/N, ..., |X_{q_k}| < 1/N} = P[B̂_t → 0 as t → 0] = 1,

where q₁, q₂, ... lists Q ∩ (0, 1/M]. Q.E.D.
We used in the above that

if A₁ ⊇ A₂ ⊇ ... then P(∩_N A_N) = lim_{N→∞} P(A_N),
if A₁ ⊆ A₂ ⊆ ... then P(∪_N A_N) = lim_{N→∞} P(A_N).

Corollary 1.13 B_t/t → 0 as t → ∞.

In fact B_t/t^α → 0 for α > 1/2, but lim sup_{t→∞} B_t/√t = ∞ and lim inf_{t→∞} B_t/√t = −∞, and so B_t visits every x ∈ R infinitely many times.
This brings us nicely into the next subsection.

1.2 Growth rate of paths


Theorem 1.14 (Law of the Iterated Logarithm) Suppose that B_t is a Brownian motion on R. Then

lim sup_{t→∞} B_t/ψ(t) = +1 and lim inf_{t→∞} B_t/ψ(t) = −1,

where ψ(t) = √(2t ln(ln t)).

Here lim sup_{t→∞} X_t = lim_{t→∞} sup_{s≥t} X_s, and lim sup X_t ≤ 1 means that for all ε > 0 we have sup_{s≥t} X_s ≤ 1 + ε for large t, which is the same as: for all ε > 0, X_t is eventually less than 1 + ε. Also lim sup X_t ≥ 1 if and only if for all ε > 0 we have sup_{s≥t} X_s ≥ 1 − ε for all t, which is the same as: for all ε > 0 there exists a sequence s_N → ∞ with X_{s_N} ≥ 1 − ε.
It is on an example sheet that for X_t = e^{−t} B_{e^{2t}} the Law of the Iterated Logarithm can be converted to give lim sup_{t→∞} X_t/√(2 ln t) = 1.
We can also compare this to Z_N iid N(0, 1), for which lim sup_{N→∞} Z_N/√(2 ln N) = 1.
Proof We first show that

P(lim sup_{t→∞} B_t/ψ(t) ≤ 1) = 1,

and this is the case if and only if, for every ε > 0,

P(B_t ≤ (1 + ε)ψ(t) for all large t) = 1.

We first perform a calculation:

P(B_t > (1 + ε)ψ(t)) = P(N(0, t) > (1 + ε)√(2t ln(ln t)))
 = P(N(0, 1) > (1 + ε)√(2 ln(ln t)))
 = ∫_{(1+ε)√(2 ln ln t)}^∞ (1/√(2π)) e^{−z²/2} dz.

Lemma 1.15 (Gaussian Tails)

(1/a)(1 − 1/a²) e^{−a²/2} ≤ ∫_a^∞ e^{−z²/2} dz ≤ (1/a) e^{−a²/2}.

Then we get that

∫_{(1+ε)√(2 ln ln t)}^∞ (1/√(2π)) e^{−z²/2} dz ≤ (1/((1+ε)√(2 ln ln t) √(2π))) e^{−(1+ε)² ln(ln t)}.

The strategy now is to control B along a grid of times t_N = θ^N for θ > 1. Then

P(B_{θ^N} > (1 + ε)ψ(θ^N)) ≤ (1/√(2π)) (1/((1+ε)√(2 ln(N ln θ)))) e^{−(1+ε)² ln(N ln θ)} ≤ C(θ, ε) N^{−(1+ε)²}.

Lemma 1.16 (Borel-Cantelli part 1) If Σ_{N=1}^∞ P(A_N) < ∞ then

P(only finitely many A_N happen) = 1.

Proof Let χ_{A_N} = 1 on A_N and 0 on A_N^c, so that the number of A_N s that occur is Σ_{N=1}^∞ χ_{A_N}, and

E[Σ_{N=1}^∞ χ_{A_N}] = Σ_{N=1}^∞ E[χ_{A_N}] = Σ_{N=1}^∞ P(A_N) < ∞,

and so Σ_{N=1}^∞ χ_{A_N} is finite a.s. Q.E.D.

Then by BC1 we have that B_{θ^N} ≤ (1 + ε)ψ(θ^N) for all large N. We now need to control B over (θ^N, θ^{N+1}).

Lemma 1.17 (Reflection trick) For a ≥ 0,

P(sup_{s≤t} B_s ≥ a) = 2P(B_t ≥ a).

Proof Define Ω₀ = {sup_{s≤t} B_s ≥ a}. Then

P(Ω₀) = P(Ω₀ ∩ {B_t > a}) + P(Ω₀ ∩ {B_t = a}) + P(Ω₀ ∩ {B_t < a})
 = 2P(Ω₀ ∩ {B_t > a})
 = 2P{B_t > a},

since P(B_t = a) = 0 and, reflecting the path after it first hits a, the events Ω₀ ∩ {B_t > a} and Ω₀ ∩ {B_t < a} have the same probability. We will carefully justify this later by examining the hitting time T_a = inf{t : B_t = a}: we consider (B_{T_a+t} − a, t ≥ 0) and check that this is still a Brownian motion. It follows that

P(T_a ≤ t) = P(sup_{s≤t} B_s ≥ a) = 2P(B_t/√t ≥ a/√t) = 2 ∫_{a/√t}^∞ (1/√(2π)) e^{−z²/2} dz,

and also

P(T_a ∈ dt)/dt = (d/dt) P(T_a ≤ t) = 2 (1/√(2π)) e^{−a²/2t} (a/2) t^{−3/2} = (a/√(2πt³)) e^{−a²/2t} =: φ(t),

and so E(T_a) = ∞. Q.E.D.
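A quick Monte Carlo check of the reflection identity (a sketch, assuming numpy; the discrete-grid maximum slightly undershoots the true supremum):

import numpy as np

rng = np.random.default_rng(2)
n, paths, t, a = 2000, 20000, 1.0, 1.0
dt = t / n
B = rng.normal(0.0, np.sqrt(dt), size=(paths, n)).cumsum(axis=1)
lhs = (B.max(axis=1) >= a).mean()     # P(sup_{s<=t} B_s >= a), discretised
rhs = 2 * (B[:, -1] >= a).mean()      # 2 P(B_t >= a)
print(lhs, rhs)                       # both near 2(1 - Phi(1)) ~ 0.3173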


Thus from this we get that

P(sup_{s≤θ^N} B_s ≥ (1 + ε)ψ(θ^N)) = 2P(B_{θ^N} > (1 + ε)ψ(θ^N)) ≤ 2C(ε, θ) N^{−(1+ε)²}.

Borel-Cantelli part 1 still applies, and so for large N we have

sup_{s≤θ^N} B_s ≤ (1 + ε)ψ(θ^N).

Thus if t ∈ [θ^N, θ^{N+1}] we have

B_t/ψ(t) ≤ (1 + ε)ψ(θ^{N+1})/ψ(θ^N) = (1 + ε)√θ √(ln((N+1) ln θ)/ln(N ln θ)) → (1 + ε)√θ,

and thus we have that

lim sup_{t→∞} B_t/ψ(t) ≤ (1 + ε)√θ

for all ε > 0 and θ > 1, and so

lim sup_{t→∞} B_t/ψ(t) ≤ 1.

We now show

lim sup_{t→∞} B_t/ψ(t) ≥ 1 − ε.

If we choose t_N = θ^N for θ > 1 then

P(B_{θ^N} > (1 − ε)ψ(θ^N)) ≥ C(θ, ε) N^{−(1−ε)²}.

Lemma 1.18 (Borel-Cantelli part 2) If Σ_{N=1}^∞ P(A_N) = ∞ and the A_N are independent, then

P(infinitely many A_N occur) = 1.

Proof Z = Σ_{N=1}^∞ χ_{A_N} is the total number of A_N s that occur. From BC1 we saw that E(Z) < ∞ implies P(Z < ∞) = 1; here instead we show E(e^{−Z}) = 0, which is equivalent to P(Z = ∞) = 1. With α = 1 − e^{−1}, so that E(e^{−χ_{A_N}}) = 1 − αP(A_N), and using independence,

E(e^{−Z}) = E(Π_{N=1}^∞ e^{−χ_{A_N}})
 = Π_{N=1}^∞ E(e^{−χ_{A_N}})
 = Π_{N=1}^∞ (1 − αP(A_N))
 ≤ Π_{N=1}^∞ e^{−αP(A_N)}
 = e^{−α Σ P(A_N)}
 = 0.

Q.E.D.
We want to use this on A_N = {B_{θ^N} > (1 − ε)ψ(θ^N)}, but these are not independent (though nearly so for large N). We correct for this: define Â_N = {B_{θ^N} − B_{θ^{N−1}} > (1 − ε)√(1 − θ^{−1}) ψ(θ^N)}; these are independent, and by scaling

P(Â_N) = P(A_N) ≥ C(θ, ε) N^{−(1−ε)²}.

BC2 tells us that infinitely many Â_N occur a.s., i.e. infinitely often

B_{θ^N} ≥ (1 − ε)√(1 − θ^{−1}) ψ(θ^N) + B_{θ^{N−1}} ≥ (1 − ε)√(1 − θ^{−1}) ψ(θ^N) − (1 + ε)ψ(θ^{N−1}),

using the first half of the proof (applied to −B) to bound B_{θ^{N−1}} from below,

and so

B_{θ^N}/ψ(θ^N) ≥ (1 − ε)√(1 − θ^{−1}) − (1 + ε) ψ(θ^{N−1})/ψ(θ^N),

and

ψ(θ^{N−1})/ψ(θ^N) = √(ln((N−1) ln θ)) / (√θ √(ln(N ln θ))) → 1/√θ,

and so

lim sup_{N→∞} B_{θ^N}/ψ(θ^N) ≥ (1 − ε)√(1 − θ^{−1}) − (1 + ε)√(θ^{−1}),

and taking θ large and ε small gives the result. Q.E.D.
We make some observations:

1. Can we do better? P(B_t ≤ h_t for all large t) is 0 or 1 for h_t deterministic. This is called the 0-1 law, and we see it in week 4. For h_t = √(Ct ln ln t) we get 0 if C < 2 and 1 if C > 2. There is an integral test for general h_t.

2. Random walk analogue. Suppose that X₁, X₂, ... are iid with E(X_k) = 0, E(X_k²) = 1, and S_N = X₁ + ... + X_N. Then

lim sup_{N→∞} S_N/√(2N ln ln N) = 1.

This was proved in 1913 but the proof was long. It was proved in a shorter manner using the Brownian motion result in 1941.
3. X_t = tB_{1/t} is still a Brownian motion, and so lim sup_{t→∞} tB_{1/t}/ψ(t) = 1, or alternatively

lim sup_{s→0} B_s/√(2s ln ln(1/s)) = 1,

and so we have a result about small-t behaviour.

4. P(B is differentiable at 0) = 0, and if we fix t₀ > 0 then X_t = B_{t₀+t} − B_{t₀} is still a Brownian motion. Thus

Corollary 1.19 P(B is differentiable at t₀) = 0 for every fixed t₀.

5. Suppose U ~ U[0, 1] is a uniform r.v., and define X_t to be constant up until time U and then to increase linearly up until time 1. Then P(X is differentiable at t₀) = 1 for each fixed t₀, yet X is not differentiable at every t (it fails at the random time U). So we cannot conclude from Corollary 1.19 alone that Brownian motion is nowhere differentiable.

6. Corollary 1.20 Leb{t : B is differentiable at t} = 0 a.s.

Proof
E(∫₀^∞ χ(B is differentiable at t) dt) = ∫₀^∞ E χ(B is differentiable at t) dt = 0.
Q.E.D.

The points where it is differentiable are examples of random exceptional points.

1.3 Regularity
Definition 1.21 A function f : [0, ∞) → R is α-Hölder continuous, for α ∈ (0, 1], at t if there exist M, δ > 0 such that

|f_{t+s} − f_t| ≤ M|s|^α for all |s| ≤ δ.

The case α = 1 is called Lipschitz.

The aim of the next part is to show that P(B is α-Hölder continuous at all t ≥ 0) = 1 provided α < 1/2, and that P(B is α-Hölder continuous at any t ≥ 0) = 0 provided α > 1/2.

Corollary 1.22 P(B is differentiable at any t) = 0.

The reason is as follows. A differentiable function must lie in some cone: if

(f(t + s) − f(t))/s → a as s → 0,

then (f(t+s) − f(t))/s ∈ (a − ε, a + ε) for small s, thus |f(t+s) − f(t)| ≤ (|a| + ε)|s| for small s, and so the Lipschitz condition holds at t with M = |a| + ε.

Proposition 1.23 Define Ω_{M,δ} = {for some t ∈ [0, 1], |B_{t+s} − B_t| ≤ M|s| for all |s| ≤ δ}. Then P(Ω_{M,δ}) = 0, and thus P(∪_{M=1}^∞ ∪_{N=1}^∞ Ω_{M,1/N}) = 0.

Proof This argument hasn't been bettered since 1931. Suppose that there exists t ∈ [K/N, (K+1)/N] at which the Lipschitz bound holds, i.e. |B_{t+s} − B_t| ≤ M|s| for |s| ≤ δ. Then if (K+1)/N, (K+2)/N ∈ [t, t + δ],

|B_{(K+1)/N} − B_t| ≤ M/N and |B_{(K+2)/N} − B_t| ≤ 2M/N,

and so by the triangle inequality we get

|B_{(K+1)/N} − B_{(K+2)/N}| ≤ 3M/N,

and then we have

P(Ω_{M,δ}) ≤ P(for some K = 1, ..., N − 1 : |B_{(K+1)/N} − B_{(K+2)/N}| ≤ 3M/N).   (1.1)

We first calculate the probability of the event on the right hand side:

P[|N(0, 1/N)| ≤ 3M/N] = P[|N(0, 1)| ≤ 3M/√N] = ∫_{−3M/√N}^{3M/√N} (1/√(2π)) e^{−z²/2} dz ≤ 6M/√(2πN),

and then the right hand side of equation (1.1) is at most

P(∪_{K=1}^{N−1} {|B_{(K+1)/N} − B_{(K+2)/N}| ≤ 3M/N}) ≤ Σ_{K=1}^{N−1} 6M/√(2πN) ≤ (6M/√(2π)) √N,

but this is not useful, because it does not tend to zero. We modify this by taking more points. We already know that

|B_{(K+2)/N} − B_{(K+1)/N}| ≤ 3M/N,

but we also have, for the same t,

|B_{(K+3)/N} − B_{(K+2)/N}| ≤ 5M/N and |B_{(K+4)/N} − B_{(K+3)/N}| ≤ 7M/N,

and then we say that

P(Ω_{M,δ}) ≤ P(for some K = 1, ..., N − 1: |B_{(K+2)/N} − B_{(K+1)/N}| ≤ 3M/N and |B_{(K+3)/N} − B_{(K+2)/N}| ≤ 5M/N and |B_{(K+4)/N} − B_{(K+3)/N}| ≤ 7M/N),

with 4/N ≤ δ. Calling the three increment events A, B and C (they are independent), this is

P(∪_{K=1}^{N−1} {A and B and C}) ≤ Σ_{K=1}^{N−1} P(A)P(B)P(C) ≤ Σ_{K=1}^{N−1} D/N^{3/2} = D/√N → 0,

and so P(Ω_{M,δ}) = 0. Q.E.D.


We now turn to the other statement, that B is α-Hölder continuous if α ∈ (0, 1/2).

Theorem 1.24 (Kolmogorov's continuity) Suppose (X_t, t ∈ [0, 1]) has continuous paths and satisfies

E[|X_t − X_s|^p] ≤ C|t − s|^{1+γ}

for some γ, p > 0. Then X has α-Hölder paths for α ∈ (0, γ/p).

We use this for X a Brownian motion. In this case

E[|B_t − B_s|^p] = E[|N(0, t − s)|^p] = (t − s)^{p/2} E[|N(0, 1)|^p] = c_p |t − s|^{p/2}.

We then get that Brownian motion has α-Hölder continuous paths of all orders

α < (p/2 − 1)/p = 1/2 − 1/p,

and so (letting p → ∞) Hölder with any α < 1/2, but not α = 1/2.
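The moment scaling E|B_t − B_s|^p = c_p|t − s|^{p/2} can be checked directly by sampling increments (a sketch, assuming numpy; for p = 4 the constant is c₄ = E|N(0,1)|⁴ = 3):

import numpy as np

rng = np.random.default_rng(3)
p = 4.0
for h in (0.1, 0.01, 0.001):
    Z = rng.normal(0.0, np.sqrt(h), size=100000)      # B_{s+h} - B_s ~ N(0, h)
    print(h, np.mean(np.abs(Z) ** p) / h ** (p / 2))  # ratio ~ c_4 = 3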

Lemma 1.25 (Markov Inequality) Suppose that Z is a non-negative random variable. Then

P(Z ≥ a) ≤ (1/a) E(Z).
Proof
E(Z) = E(Zχ{Z<a} ) + E(Zχ{Z≥a} ) ≥ E(Zχ{Z≥a} ) ≥ aP(Z ≥ a)
Q.E.D.
We use this as follows:

P(|X_t − X_s| ≥ a) = P(|X_t − X_s|^p ≥ a^p) ≤ (1/a^p) E(|X_t − X_s|^p) ≤ C|s − t|^{1+γ}/a^p.

Suppose we have a grid of size 2^{−N}. Define A_N = ∪_{K=1}^{2^N} {|X_{K/2^N} − X_{(K−1)/2^N}| > 2^{−Nα}}. We then estimate this:

P(A_N) ≤ Σ_{K=1}^{2^N} P{|X_{K/2^N} − X_{(K−1)/2^N}| > 2^{−Nα}} ≤ 2^N · C 2^{−N(1+γ)} 2^{Nαp} = C 2^{−N(γ−αp)},

and thus we need γ > αp, i.e. α < γ/p. This is the key idea of the proof. We now have Σ_{N=1}^∞ P(A_N) < ∞, and so by Borel-Cantelli I only finitely many A_N occur, i.e. there exists N₀(ω) such that

|X_{K/2^N} − X_{(K−1)/2^N}| < 2^{−Nα} for all K = 1, ..., 2^N and all N > N₀.

We also know that P(N₀ < ∞) = 1. Now fix ω such that N₀(ω) < ∞. We can control X on D = {dyadic rationals, i.e. those of the form K/2^M}. Fix t, s ∈ D with

1/2^M ≤ |t − s| ≤ 1/2^{M−1},

where M ≥ N₀. We have two cases: either (s, t) straddles two grid points K/2^M and (K+1)/2^M, or only one point K/2^M. We consider the first case, and leave the second to the reader. We can write

t = (K+1)/2^M + a₁/2^{M+1} + ... + a_L/2^{M+L} with each a_j ∈ {0, 1},

and then, summing increments along this dyadic expansion,

|X_t − X_{(K+1)/2^M}| ≤ 1/2^{(M+1)α} + ... + 1/2^{(M+L)α} ≤ (1/2^{(M+1)α}) 1/(1 − 2^{−α}),

and similarly

|X_s − X_{K/2^M}| ≤ (1/2^{(M+1)α}) 1/(1 − 2^{−α}),

and thus

|X_s − X_t| ≤ |X_s − X_{K/2^M}| + |X_{K/2^M} − X_{(K+1)/2^M}| + |X_t − X_{(K+1)/2^M}|
 ≤ (1/2^{(M+1)α}) 2/(1 − 2^{−α}) + 1/2^{Mα}
 ≤ C/2^{Mα},

and since 2^{−M} ≤ |t − s| this is at most C|t − s|^α. The bound extends from dyadic times to all times by continuity of paths, which gives the result.

2 Brownian motion as a Markov Process


Roughly, the future (X_s : s ≥ t) is independent of the past (X_s : s ≤ t) conditional on the present X_t.

Definition 2.1 Two σ-fields F, G on Ω are called independent if P(A ∩ B) = P(A)P(B) for all A ∈ F and B ∈ G.
Two variables X and Y are called independent if

P(X ∈ C1 , Y ∈ C2 ) = P(X ∈ C1 )P(Y ∈ C2 )

or equivalently
E(h(X)g(Y )) = E(h(X))E(g(Y ))
for all measurable bounded functions h, g.

They are equivalent as follows. We can take h = χC1 and g = χC2 and then the two
statements are the same. We then take simple functions and limits of simple functions to
get any functions, using the standard machine.
Definition 2.2 σ(X), called the σ-field generated by X is defined as

{X −1 (C) : C measurable }

Note that now X is independent of Y if and only if σ(X) is independent of σ(Y ).


Definition 2.3 σ(X_t, t ∈ I), the σ-field generated by (X_t, t ∈ I), is defined to be

σ{X_t^{−1}(C) : t ∈ I, C measurable},

i.e. the smallest σ-field containing the above sets.

Theorem 2.4 (Markov property of Brownian motion 1) Suppose that B is a Brow-


nian motion, and fix t0 > 0. Define Xt = Bt+t0 −Bt0 and then σ(Xt : t ≥ 0) is independent
of σ(Bt : t ≤ t0 ).

Note that this is very close to independent increments.
For example, ∫₀^{t₀} B_s ds is independent of ∫₀^{t₁} X_s ds. We need to check that ∫₀^{t₀} B_s ds is measurable with respect to σ(B_t : t ≤ t₀):

∫₀^{t₀} B_s ds = lim_{N→∞} Σ_{K=1}^N (t₀/N) B_{Kt₀/N},

which is measurable since limits and sums of measurable functions are measurable.
A second example: sup_{t≤1} B_t is independent of sup_{t≤t₁} X_t. We can write sup_{t≤1} B_t = sup_{q∈[0,1]∩Q} B_q = lim_{N→∞} max{B_{q₁}, ..., B_{q_N}}, where (q_i) lists [0, 1] ∩ Q. Now

ω ↦ (B_{q₁}(ω), ..., B_{q_N}(ω)) ↦ max_i B_{q_i}(ω),

where the second map is continuous and the first is measurable, since for C = C₁ × ... × C_N the set {ω : (B_{q₁}(ω), ..., B_{q_N}(ω)) ∈ C} is measurable.

Definition 2.5 A collection of subsets A0 is called a π-system if it is closed under finite


intersections.

Lemma 2.6 If F0 and G0 are two different π-systems and P(A ∩ B) = P(A)P(B) for
all A ∈ F0 and B ∈ G0 then P(A ∩ B) = P(A)P(B) holds for σ(F0 ) and σ(G0 ), i.e.
independence of π-systems gives independence of generated σ-fields.

For us, σ(X_t, t ≥ 0) = σ{X_t^{−1}(C) : t ≥ 0, C ∈ B(R)}, and this is generated by the π-system

{X_{t₁}^{−1}(C₁) ∩ ... ∩ X_{t_N}^{−1}(C_N) : t₁, ..., t_N ≥ 0, C_K ∈ B(R)}.
Proof (of theorem 2.4) By the above lemma, we need only check that (X_{t₁}, ..., X_{t_N}) is independent of (B_{s₁}, ..., B_{s_N}) for s_k ≤ t₀. Writing both exponents as sums over increments of disjoint intervals,

E[e^{iΣλ_k B_{s_k}} e^{iΣµ_k X_{t_k}}] = E[e^{iΣλ̂_k(B_{s_k} − B_{s_{k−1}})} e^{iΣµ̂_k(B_{t₀+t_k} − B_{t₀+t_{k−1}})}]
 = E[e^{iΣλ_k B_{s_k}}] E[e^{iΣµ_k X_{t_k}}]

by independent increments. Q.E.D.

Proposition 2.7 The following are true.

1. For a ≤ b ≤ c ≤ d, P(max_{[a,b]} B_t ≠ max_{[c,d]} B_t) = 1.

2. P(there is a unique t* ∈ [a, b] with B_{t*} = max_{[a,b]} B_t) = 1.

3. Local maxima are dense in [0, ∞).

Proof We can get 2 from 1 by using that P(max_{[a,b]} B_t ≠ max_{[c,d]} B_t for all rational a ≤ b ≤ c ≤ d) = 1. We can get 3 from 2 by doing this for all rational (a, b) simultaneously.
We now prove 1. Take X_t = B_{c+t} − B_c, and let Y = max_{[c,d]}(B_t − B_c) = max_{[0,d−c]} X_t, which is independent of σ(B_s, s ≤ c). Take Z = max_{[a,b]} B_t − B_c; we aim to show that P(Z ≠ Y) = 1. Y and Z are independent by the Markov property, and Y has a density by the reflection principle; this implies that P(Y = Z) = 0. Q.E.D.
At the last step we used that if E(g(Y)h(Z)) = E(g(Y))E(h(Z)) for all bounded g, h, and if ν(dy) = P(Y ∈ dy) and µ(dz) = P(Z ∈ dz), then

E(φ(Y, Z)) = ∫∫ φ(y, z) ν(dy) µ(dz).

This can be shown by the standard machine, first taking φ(y, z) = χ(y = z).

Definition 2.8 We define the Brownian filtration by F_t^B := σ(B_s : s ≤ t); we also define F_{t+}^B = ∩_{s>t} F_s^B, and the germ field is F_{0+}^B.

Note that if dB/dt(t) existed then it would be F_{t+}^B-measurable. Similarly lim sup_{t→0} B_t/h(t) is in F_{0+}^B, since this limsup is F_δ^B-measurable for any δ > 0.

Theorem 2.9 (Markov property version 2) Fix T ≥ 0 and define X_t = B_{T+t} − B_T. Then σ(X_t, t ≥ 0) is independent of F_{T+}^B.

Corollary 2.10 Taking T = 0, σ(B_t, t ≥ 0) is independent of F_{0+}^B; since F_{0+}^B ⊆ σ(B_t, t ≥ 0), it is independent of itself.

Corollary 2.11 F_{0+}^B is a 0/1 field.

Thus if Z is F_{0+}^B-measurable then {ω : Z(ω) ≤ c} has probability 0 or 1 for every c, and so Z is constant almost surely.

Example 2.1 (Lebesgue's Thorn) Suppose that B is a d-dimensional Brownian motion and F is an open set in R^d. Take τ = inf{t : B_t ∈ F}. We claim that {τ = 0} ∈ F_{0+}^B. Indeed {τ = 0} = ∩_{N=N₀}^∞ ∪_{q∈Q∩[0,1/N]} {B_q ∈ F} ∈ F_{1/N₀}^B for all N₀. Lebesgue's thorn is an example of such an F, e.g. F = {(x, y, z) : √(y² + z²) ≤ f(x)}, a volume of rotation. Here P(τ > 0) = 1 for thin thorns, and P(τ > 0) = 0 for thick thorns.

Proof (of theorem 2.9) Take A ∈ F_{T+}^B and an event in σ(X_s, s ≥ 0). The π-system lemma says it is enough to consider events determined by (X_{s₁}, ..., X_{s_m}), i.e. enough to check that

E[e^{iΣλ_k X_{s_k}} e^{iθχ_A}]

splits as the product of the two expectations.
Let X_s^ε = B_{T+ε+s} − B_{T+ε}, and note that this is independent of F_{T+ε}^B ⊇ F_{T+}^B. Then

E[e^{iΣλ_k X_{s_k}^ε} e^{iθχ_A}] = E[e^{iΣλ_k X_{s_k}^ε}] E[e^{iθχ_A}]

by the Markov property version 1. Letting ε → 0 (paths are continuous, so X_{s_k}^ε → X_{s_k}) and using the DCT, we get the required result. Q.E.D.

Example 2.2 (Shakespeare problem) Suppose we have 2-dimensional Brownian motion, and assume that

P((B_t, t ∈ [0, 1]) traverses a tube A) = p₀ > 0.

By scaling,

P((B_t, t ∈ [0, 1/2]) traverses the tube (1/√2) A) = p₀,

so if we let

A_N = {(B_t, t ∈ [0, 2^{−N}]) traverses the tube 2^{−N/2} A},

then P(A_N) = p₀ by scaling. Note that A_N ∈ F_{2^{−N}}^B, and let Ω₀ = ∩_{M=M₀}^∞ ∪_{N=M}^∞ A_N = [A_N i.o.] ∈ F_{2^{−M₀}}^B for every M₀, so Ω₀ ∈ F_{0+}^B. Now

P(Ω₀) = lim_{M→∞} P(∪_{N=M}^∞ A_N) ≥ p₀,

and so P(Ω₀) = 1 by the 0-1 law.

2.1 Markov transition functions


You may know that, for a discrete Markov chain started at x₀,

P(X₀ = x₀, ..., X_N = x_N) = p(x₀, x₁) ... p(x_{N−1}, x_N),

and this is equivalent to

P(X_N = x_N | X₀ = x₀, ..., X_{N−1} = x_{N−1}) = p(x_{N−1}, x_N).

Our aim is to come up with something similar in the continuous case. We intuitively think of the following as the probability that, starting at x, we end up in dy in time t.

Definition 2.12 A Markov transition kernel is a function p : (0, ∞)×Rd ×B(Rd ) → [0, 1]
such that

1. A 7→ pt (x, A) is a probability measure on B(Rd )

2. x 7→ pt (x, A) is measurable.

Definition 2.13 A Markov process (X_t, t ≥ 0) on R^d with transition kernel p, started at x, means that

P(X_{t₁} ∈ A₁, ..., X_{t_N} ∈ A_N) = ∫_{A₁×...×A_N} p_{t₁}(x, dx₁) p_{t₂−t₁}(x₁, dx₂) ... p_{t_N−t_{N−1}}(x_{N−1}, dx_N).

Observe that we then have

E(f(X_{t₁}, ..., X_{t_N})) = ∫_{R^{dN}} f(x₁, ..., x_N) p_{t₁}(x, dx₁) ... p_{t_N−t_{N−1}}(x_{N−1}, dx_N).

Example 2.3 1. Brownian motion, with p_t(x, dy) = (1/√(2πt)) e^{−(x−y)²/2t} dy.

2. Xt = Bt + x + ct

3. Reflected Brownian motion Xt = |x + Bt |


4. Absorbed Brownian motion: X_t = x + B_t for t < τ and X_t = 0 for t ≥ τ, where τ = inf{t : x + B_t = 0}.

5. Ornstein-Uhlenbeck processes

6. Radial Brownian motion: suppose that B_t is Brownian motion on R^d and define X_t = √(Σ_{i=1}^d (B_t^i)²).

7. dX = µ(X)dt + σ(X)dBt an SDE.

8. A Brownian Bridge is NOT an example. However it is a time inhomogeneous


Markov process, where p is a function of two times.

Definition 2.14 We define B_t^x = x + B_t, i.e. Brownian motion started at x.

We check that (B_t^x, t ≥ 0) is a Markov process started at x with kernel p_t(x, dy) = q_t(y − x) dy, where q_t(z) = (1/√(2πt)) e^{−z²/2t}. The short proof is as follows:

P(B_{t₁}^x ∈ dx₁, ..., B_{t_N}^x ∈ dx_N) = P(B_{t₁}^x ∈ dx₁, B_{t₂}^x − B_{t₁}^x ∈ d(x₂ − x₁), ..., B_{t_N}^x − B_{t_{N−1}}^x ∈ d(x_N − x_{N−1}))
 = q_{t₁}(x₁ − x) dx₁ q_{t₂−t₁}(x₂ − x₁) dx₂ ... q_{t_N−t_{N−1}}(x_N − x_{N−1}) dx_N,

and then we should integrate both sides over A₁, ..., A_N.


The longer version is as follows:

E(f(B_{t₁}^x, ..., B_{t_N}^x)) = E(g(B_{t₁}^x − x, B_{t₂}^x − B_{t₁}^x, ..., B_{t_N}^x − B_{t_{N−1}}^x)),

where

g(y₁, ..., y_N) = f(x + y₁, x + y₁ + y₂, ...),

and this equals

∫_{R^N} g(y₁, ..., y_N) q_{t₁}(y₁) ... q_{t_N−t_{N−1}}(y_N) dy₁ ... dy_N
 = ∫_{R^N} f(x₁, ..., x_N) q_{t₁}(x₁ − x) ... q_{t_N−t_{N−1}}(x_N − x_{N−1}) J dx₁ ... dx_N,

where J is the Jacobian of the change of variables y₁ = x₁ − x, ..., y_N = x_N − x_{N−1}, and equals 1.

Definition 2.15 Xt , t ≥ 0 is a Markov process with transition kernel p means

P(Xt ∈ A|FsX ) = pt−s (Xs , A)

for all s < t.

This is shorter than the previous definition, and implies it by an induction argument,
which we will do.

2.1.1 Conditional Expectation


Suppose that we have a probability space (Ω, F, P) and a random variable X : Ω → R, and suppose G ⊂ F is a sub-σ-field.
Our aim is to define E(X|G) as the closest G-measurable function to X.
The natural place for this is L²(Ω, F, P) = {X : Ω → R | E(X²) < ∞}, which is an inner product space with inner product (X, Y) = E(XY) and corresponding norm ||X||₂ = √(E(X²)).
For X ∈ L²(F) there exists a unique Y ∈ L²(G) such that

(X − Y, Z) = 0 for all Z ∈ L²(G).

This means

E((X − Y)Z) = 0 ⟺ E(XZ) = E(YZ) for all Z ∈ L²(G).

It is enough, though, to check this for Z = χ_A with A ∈ G, by the standard machine of measure theory. This condition is then

∫_A X dP = ∫_A Y dP,   (2.1)

and we write Y = E(X|G).

Proposition 2.16 For X ∈ L²(F) there is a unique Y ∈ L²(G) satisfying (2.1). This can be improved to X ∈ L¹(F) or to X ≥ 0.

There are several special cases:

1. G = {∅, Ω}, and then E(X|G) = E(X).

2. G = {∅, A, A^c, Ω}, and then E(X|G) = y₁ on A and y₂ on A^c. If X = χ_B then y₁ = P(B|A) and y₂ = P(B|A^c), and this is like an extension of the Bayes formula.

3. G = F and then E(X|G) = X.

Lemma 2.17 Suppose X is G measurable and Z is independent of G, and φ : R2 → R is


bounded and measurable. Then

E(φ(X, Z)|G) = E(φ(x, Z))|x=X

This can be proved using the measure theory machine.

Example 2.4 1. E(B_t|F_s^B) = E(B_s + (B_t − B_s)|F_s^B) = B_s.

2. E(e^{B_t}|F_s^B) = E(e^{B_t−B_s} e^{B_s}|F_s^B) = e^{B_s} e^{(t−s)/2}.

3. E(B_t²|F_s^B) = E(B_s² + 2B_s(B_t − B_s) + (B_t − B_s)²|F_s^B) = B_s² + 2B_s E(B_t − B_s) + E((B_t − B_s)²) = B_s² + t − s.

4. Let (X, Y) be a Gaussian vector on R² with mean 0 and Y not identically 0. We postulate that

E(X|σ(Y)) = αY,

and we need to find α. We do this as follows: set X = αY + (X − αY), and we need E(Y(X − αY)) = 0, so we choose α = E(XY)/E(Y²). Then X − αY and Y are uncorrelated jointly Gaussian variables, hence independent, and so E(X|σ(Y)) = E(αY + (X − αY)|σ(Y)) = αY + 0.
5. We can do a similar thing for E(∫₀¹ B_s ds | σ(B₁)), setting it equal to αB₁. We then get α = 1/2.
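The projection characterisation can be checked by simulation: α = E(XY)/E(Y²) is exactly the least-squares coefficient of X on Y. A sketch for example 5, assuming numpy (the Riemann sum stands in for the integral):

import numpy as np

rng = np.random.default_rng(5)
n, paths = 1000, 50000
dt = 1.0 / n
B = rng.normal(0.0, np.sqrt(dt), size=(paths, n)).cumsum(axis=1)
I = B.mean(axis=1)                        # int_0^1 B_s ds, approximately
Y = B[:, -1]                              # B_1
alpha = (I * Y).mean() / (Y * Y).mean()   # E(XY)/E(Y^2)
print(alpha)                              # should be close to 1/2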

Lemma 2.18 X ≥ 0 implies that E[X|G] ≥ 0 a.s.

Proof Y = E[X|G] means that E(YZ) = E(XZ) for bounded G-measurable Z. Let A = {ω : Y(ω) < 0}. Then

0 ≤ ∫_A X dP = ∫_A Y dP ≤ 0,

and A is G-measurable, so P(A) = 0. Q.E.D.
We check that Brownian motion is Markov with this new definition. Let X_t = x + B_t. Note that F_s^X = F_s^B, and so

P(X_t ∈ dy|F_s^X) = P(x + B_s + (B_t − B_s) ∈ dy|F_s^B)
 = P(z + N(0, t − s) ∈ dy)|_{z=x+B_s} = q_{t−s}(y − z) dy|_{z=x+B_s}
 = q_{t−s}(y − X_s) dy.

Suppose now that X_t = |B_t^x|; then F_s^X ⊂ F_s^B, and we guess that the transition kernel is given by

p̂_{t−s}(x, dy) = (q_{t−s}(y − x) + q_{t−s}(y + x)) dy for x, y ≥ 0,

since the path x + B can reach either dy or −dy at time t. We consider P(X_t ∈ dy|F_s^X) = P(|B_t^x| ∈ dy|F_s^X). Using the tower property,

P(|x + B_t| ∈ dy|F_s^B) = p̂_{t−s}(x + B_s, dy) = p̂_{t−s}(|x + B_s|, dy) = p̂_{t−s}(X_s, dy),

and since this is already X_s-measurable, conditioning further on F_s^X does nothing.
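As a check (not in the notes), the guessed reflected kernel can be compared against an empirical histogram of |x + B_t|. A sketch, assuming numpy:

import numpy as np

rng = np.random.default_rng(4)
x, t, eps = 0.5, 1.0, 0.05
X = np.abs(x + rng.normal(0.0, np.sqrt(t), size=200000))  # samples of |x + B_t|
q = lambda z: np.exp(-z**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
for y in (0.25, 1.0, 2.0):
    emp = ((X > y - eps) & (X < y + eps)).mean() / (2 * eps)
    print(y, emp, q(y - x) + q(y + x))  # empirical density vs q_t(y-x) + q_t(y+x)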

However there is a warning: is X_t = f(B_t^x) still Markov? The answer is usually no; in the case above we were just lucky. For example, consider radial Brownian motion, X_t = √(B₁(t)² + ... + B_d(t)²), for which we want to show that

E(f(X_{t₁}, ..., X_{t_N})) = ∫ f(x₁, ..., x_N) p_{t₁}(x, dx₁) ... p_{t_N−t_{N−1}}(x_{N−1}, dx_N)

for a suitable kernel p.
The measure theory machine says that it is enough to check this for f(x₁, ..., x_n) = Π_{k=1}^n φ_k(x_k). We show this by induction:

E(Π_{k=1}^n φ_k(X_{t_k})) = E(E(Π_{k=1}^n φ_k(X_{t_k})|F_{t_{n−1}}^X))
 = E(Π_{k=1}^{n−1} φ_k(X_{t_k}) E(φ_n(X_{t_n})|F_{t_{n−1}}^X))
 = E(Π_{k=1}^{n−1} φ_k(X_{t_k}) ∫ φ_n(x_n) p_{t_n−t_{n−1}}(X_{t_{n−1}}, dx_n)),

and then we use the induction hypothesis.

2.2 Strong Markov Process.


Define X_s(ω) = B_{T(ω)+s}(ω) − B_{T(ω)}(ω); is this still a Brownian motion? It is if T = T_a = inf{t : B_t = a}, but not if T = sup{t ≤ 1 : B_t = 0}. In other words, T must not look into the future.

Definition 2.19 (Ft , t ≥ 0) is called a filtration on (Ω, F, P) if Fs ⊆ Ft ⊆ F for s ≤ t.

The key example is (Xt , t ≥ 0) and FtX = σ(Xs : s ≤ t).

Definition 2.20 T : Ω → [0, ∞] is called a stopping time for a filtration (Ft , t ≥ 0) if


{T ≤ t} ∈ Ft for all t ≥ 0.

The key example is (F_t^B, t ≥ 0) and T_a = inf{t : B_t = a}. We give two justifications that this is a stopping time:

{T_a ≤ t} = {sup_{s≤t} B_s ≥ a} = {sup_{s∈Q∩[0,t]} B_s ≥ a} ∈ F_t^B,

since the supremum over countably many times is a measurable object; alternatively

{T_a ≤ t} = ∩_{N=1}^∞ ∪_{q∈Q∩[0,t]} {|B_q − a| ≤ 1/N}.

On the example sheet: T_K = inf{t : B_t ∈ K} for K ⊂ R^d closed is a stopping time, but if K is open then it need not be, as you cannot write {T_K ≤ t} in F_t^B. However, you can write it in F_{t+}^B.

Theorem 2.21 Let B be a Brownian motion with filtration (F_t^B), and let T be an F_t^B stopping time with T < ∞ a.s. Define X_s = B_{T+s} − B_T. Then X is a Brownian motion, and is independent of F_T^B.

Definition 2.22 (Information up to a stopping time) Suppose that T is a stopping


time. Then we define

FT = {A : A ∩ {T ≤ t} ∈ Ft for all t ≥ 0}

We need to check that FT is indeed a σ-field, that if S ≤ T then FS ⊆ FT for two stopping
times and that T is FT measurable.
To show the first of these note that ∅, Ω are clearly in FT . Then if A ∈ FT

Ac ∩ {T ≤ t} = {T ≤ t} \ (A ∩ {T ≤ t}) ∈ Ft

Now if A₁, A₂, ... ∈ F_T then

∩_{N=1}^∞ A_N ∩ {T ≤ t} = ∩_{N=1}^∞ (A_N ∩ {T ≤ t}) ∈ F_t.

To show the last point, note that it is enough to check that {T ≤ s} ∈ F_T for all s ∈ R. Then

{T ≤ s} ∩ {T ≤ t} = {T ≤ min(s, t)} ∈ F_{min(s,t)} ⊆ F_t.
We now justify the part of the reflection principle that before was somewhat hand-wavy. This version is rigorous:

P(T_a ≤ t) = P(T_a ≤ t, B_t ≥ a) + P(T_a ≤ t, B_t ≤ a).

The former is P(B_t ≥ a), and for the latter, with X_s = B_{T_a+s} − a,

P(T_a ≤ t, B_t ≤ a) = P(T_a ≤ t, X_{t−T_a} ≤ 0)
 = E(P(T_a ≤ t, X_{t−T_a} ≤ 0|F_{T_a}^B))
 = E(χ_{T_a≤t} P(X_{t−s} ≤ 0)|_{s=T_a})
 = (1/2) P(T_a ≤ t),

so that P(T_a ≤ t) = 2P(B_t ≥ a).
Proof (of theorem 2.21) We first assume that T is discrete, i.e. Ω = ∪_k {T = t_k}, i.e. T only takes the values t₁ < t₂ < .... Pick A ∈ F_T^B and look at

E(F(B_{T+t} − B_T, t ≥ 0)χ_A) = Σ_k E(χ_A χ_{T=t_k} F(B_{t_k+t} − B_{t_k}, t ≥ 0)),

but A ∩ {T = t_k} = (A ∩ {T ≤ t_k}) \ (A ∩ {T ≤ t_{k−1}}) ∈ F_{t_k}^B, and so by the first Markov property the above is equal to

Σ_k E(χ_A χ_{T=t_k}) E(F(B_{t_k+t} − B_{t_k}, t ≥ 0)) = E(F(B_t, t ≥ 0)) P(A).
We now consider general T < ∞, approximating it by discrete stopping times. Define

T_N = j/N if T ∈ [(j−1)/N, j/N).

Then T_N → T as N → ∞ and B_{T_N} → B_T, by the continuity of B. We must verify that T_N is a stopping time: for t ∈ [(j−1)/N, j/N),

{T_N ≤ t} = {T_N ≤ (j−1)/N} = {T < (j−1)/N} ∈ F_{(j−1)/N}^B ⊆ F_t^B.

Then the above implies that B_{T_N+t} − B_{T_N} is a Brownian motion independent of F_{T_N}^B ⊇ F_T^B. If we let N → ∞, then considering characteristic functions gives

E(e^{iΣθ_k(B_{T_N+s_k} − B_{T_N})} χ_A) = E(e^{iΣθ_k(B_{T_N+s_k} − B_{T_N})}) P(A) → E(e^{iΣθ_k(B_{T+s_k} − B_T)}) P(A),

where we have used the DCT in the final convergence. Q.E.D.


We consider the hitting time process (Ta : a ≥ 0). Let St = maxs≤t Bs and then
a 7→ Ta is trying to be the inverse function to St but Ta has jumps in any interval (b, c)
so this isnt a true inverse.

Proposition 2.23 For a < b, T_b − T_a is independent of (T_c, c ≤ a), and T_b − T_a =^D T_{b−a}.

This is an example of an independent increments process.

Proof Define X_t = B_{T_a+t} − a; this is a new Brownian motion, independent of F_{T_a}^B. Then T_b − T_a = inf{t : X_t = b − a} =^D T_{b−a}, and each T_c is measurable with respect to F_{T_c}^B ⊆ F_{T_a}^B. Q.E.D.

Proposition 2.24 Let Z = {t : B_t = 0}. Then

1. Leb(Z) = 0.

2. There are no isolated points in Z.

3. Z is uncountable.

4. The Hausdorff dimension of Z is 1/2.

Recall that t ∈ Z is isolated if there exists ε > 0 such that (t − ε, t + ε) ∩ Z = {t}.
Proof For 1,

Leb(Z) = ∫₀^∞ χ(t ∈ Z) dt = ∫₀^∞ χ(B_t = 0) dt,

and E(Leb(Z)) = ∫₀^∞ P(B_t = 0) dt = 0.
For 2, let τ_s = inf{t ≥ s : B_t = 0}; then (B_{τ_s+t} − B_{τ_s}) is a new Brownian motion, and the Law of the Iterated Logarithm (at small times) gives that τ_s is not isolated. Let Ẑ = {τ_s : s ∈ Q}. Is Ẑ = Z? No, because the last zero before a point need not be of the form τ_s for some s. But pick τ ∈ Z \ Ẑ; then τ is not isolated from the left: pick a sequence of rationals s_k → τ with s_k < τ; then s_k ≤ τ_{s_k} < τ for all k, and so τ_{s_k} → τ.
Z is closed, as it is the preimage under a continuous map of a closed set. Thus by the deterministic lemma below we have 3. Q.E.D.

Lemma 2.25 Suppose that A ⊂ R is closed, has at least two points and has no isolated points. Then A is uncountable.

Proof Pick t₀ < t₁ in A and choose disjoint balls B₀ = B(t₀, ε₀) and B₁ = B(t₁, ε₁). Then choose points t₀₀, t₀₁ inside B₀ ∩ A and t₁₀, t₁₁ inside B₁ ∩ A, and again choose disjoint balls around these points, inside the previous ones with radii shrinking to zero. Continue this process.
For each infinite binary string a = a₁a₂... we have a chain B_{a₁} ⊇ B_{a₁a₂} ⊇ ... of balls, and so there exists a unique point t_a in the infinite intersection. A is closed, so t_a ∈ A. If a ≠ b then t_a ≠ t_b, so the set is uncountable. Q.E.D.

2.3 Arcsine Laws for Brownian motion


Let M be the (a.s. unique) time in [0, 1] at which B attains its maximum over [0, 1]. Then

P(M ∈ dt) = dt/(π√(t(1 − t))),

but why is M symmetric?

For the last zero T = sup{t ≤ 1 : B_t = 0},

P(T ∈ dt) = dt/(π√(t(1 − t))).

Let L = ∫₀¹ χ(B_s > 0) ds, i.e. the time spent above zero. Then

P(L ∈ dt) = dt/(π√(t(1 − t))).

These are called arcsine laws due to the following: substituting s = sin²θ,

∫₀^t 1/(π√(s(1 − s))) ds = (1/π) ∫₀^{sin⁻¹(√t)} (2 sin θ cos θ)/(sin θ cos θ) dθ = (2/π) sin⁻¹(√t).

For the second law, let X_s = B_{t+s} − B_t, a new Brownian motion, and note

P(T ≤ t) = P(X does not hit −B_t before time 1 − t).

Using, for a ≥ 0,

P(a + X does not hit the origin by time 1 − t) = P(B does not hit a by time 1 − t)
 = 1 − P(B does hit a by time 1 − t)
 = 1 − 2P(B_{1−t} > a)
 = 1 − 2P(N(0, 1 − t) > a)
 = 1 − 2P(N(0, 1) > a/√(1 − t))
 = 1 − 2 ∫_{a/√(1−t)}^∞ (1/√(2π)) e^{−z²/2} dz
 =: 1 − 2Φ(a/√(1 − t)),

where Φ here denotes the Gaussian upper tail,

and so, conditioning on B_t (whose absolute value has the law of √t |N(0, 1)|),

P(T ≤ t) = E((1 − 2Φ(a/√(1 − t)))|_{a=|B_t|})
 = E(1 − 2Φ(√t |N(0, 1)|/√(1 − t)))
 = ∫_{−∞}^∞ (1 − 2Φ(√t |x|/√(1 − t))) (1/√(2π)) e^{−x²/2} dx,

and differentiating in t (using Φ′ = −φ, with φ the standard Gaussian density) you get

P(T ∈ dt)/dt = (4/√(2π)) ∫₀^∞ e^{−tx²/2(1−t)} e^{−x²/2} (1/√(2π)) x (1/(2√t√(1 − t)) + √t/(2(1 − t)^{3/2})) dx,

and this is of the form ∫₀^∞ a x e^{−bx²} dx, which eventually gives the answer dt/(π√(t(1 − t))).

Theorem 2.26 (Lévy) Suppose S_t = sup_{s≤t} B_s. Then X_t = S_t − B_t is a reflected Brownian motion.

This would give that the last zero of X is equal to the time of maximal value for B, and so one would expect the first two arcsine laws to be the same.

Corollary 2.27 T =^D M.

Remark There is almost a random walk analogue. If we do the same thing there, we get a reflected random walk, but it is "sticky" at the origin. We expect the set of sticky times to be small though, because

∫₀^t χ(B_s = 0) ds = 0.

Proof In order to prove Lévy's theorem, it is left to the reader to check that X is a Markov process with the correct transition kernel, as given before; one can then find P(X_{t₁} ∈ A₁, ..., X_{t_N} ∈ A_N). Write

P(X_t ∈ dy|F_s^X) = P(X_t ∈ dy|F_s^B) =: (I + II)|_{a=S_s−B_s},

where I and II are as below. Call Y_r = B_{s+r} − B_s; this is a new Brownian motion independent of F_s^B, with running maximum S^Y. There are two possibilities: either the overall maximum is attained after time s (term I), or before (term II):

I = ∫_a^∞ P(S_{t−s}^Y ∈ dz, B_{t−s}^Y ∈ dz − y), II = P(S_{t−s}^Y ≤ a, B_{t−s}^Y ∈ a − dy).

Q.E.D.

3 Brownian Martingales
Definition 3.1 Fix a filtration (Ft , t ≥ 0). A process (Mt , t ≥ 0) is called a (Ft )-
martingale if
E(Mt |Fs ) = Ms ∀s ≤ t

Observe that this requires E|Mt | < ∞ for all t ≥ 0

Theorem 3.2 (Optional Stopping Theorem) If (Mt , t ≥ 0) is an Ft martingale and


T is a bounded stopping time then

E(MT ) = E(M0 )

There is a financial interpretation of this: M_t could be considered as a stock price. The martingale property is then the natural assumption, in discounted money; in other words, the best prediction of the future is the present value. The OST then tells you that the expected selling value E(M_T) equals the initial value E(M₀): you cannot make money from a martingale.
Some books use the OST as the definition of a martingale.
T bounded means that there exists K ∈ R such that P(T ≤ K) = 1. Boundedness matters: consider T_a = inf{t : B_t = a}, which is not bounded. Then B_{T_a} = a, so E(B_{T_a}) = a, but E(B₀) = 0, and these are not equal.

Example 3.1 (B_t) is a martingale because E(B_t|F_s^B) = E(B_s + B_t − B_s|F_s^B) = B_s. Take T = T_a ∧ T_b with a < 0 and b > 0. This is the minimum of two stopping times and so is a stopping time; it is a.s. finite but not bounded, so we apply the Optional Stopping Theorem to T ∧ N: E(B_{T∧N}) = 0, and by the DCT E(B_T) = 0, since |B_{T∧N}| ≤ max(|a|, |b|). (Note: always check that this limit step can be done.) We then have

0 = E(B_T) = bP(T_b < T_a) + aP(T_a < T_b),

and we also have that

1 = P(T_b < T_a) + P(T_a < T_b).

Solving these gives

P(T_b < T_a) = −a/(b − a), P(T_a < T_b) = b/(b − a).

Example 3.2 (B_t² − t, t ≥ 0) is a martingale:

E(B_t²|F_s^B) = E(B_s² + 2B_s(B_t − B_s) + (B_t − B_s)²|F_s^B) = B_s² + t − s,

and so E(B_t² − t|F_s^B) = B_s² − s, as we want for a martingale. The OST says that

E(B²_{T∧N} − T ∧ N) = 0,

and using the DCT and MCT we get that

E(B_T² − T) = 0,

and this gives that

E(T) = E(B_T²) = b²P(T_b < T_a) + a²P(T_a < T_b) = −ab²/(b − a) + a²b/(b − a) = −ab.
Example 3.3 (e^{θB_t − tθ²/2}, t ≥ 0) is a martingale for all θ ∈ R. But how is this a martingale when the process seems to become small? The reason is that the process goes to zero a.s. but the expectation doesn't. We now check that this is a martingale:

E(e^{θB_t}|F_s^B) = E(e^{θB_s} e^{θ(B_t−B_s)}|F_s^B) = e^{θB_s} e^{θ²(t−s)/2},

and rearranging gives the martingale property. We then use the OST with T_a ∧ N (taking a, θ > 0) and the DCT to get

E(e^{θB_{T_a∧N} − (T_a∧N)θ²/2}) = 1,

and since T_a ∧ N → T_a and B_{T_a∧N} → B_{T_a} = a, letting N → ∞ we get E(e^{−θ²T_a/2}) = e^{−θa}. Then if we set λ = θ²/2, i.e. θ = √(2λ), we get

E(e^{−λT_a}) = e^{−√(2λ) a}.

Before, we showed that P(T_a ≤ t) = 2P(B_t ≥ a), and from the density we get here

E(e^{−λT_a}) = ∫₀^∞ e^{−λt} (a/√(2πt³)) e^{−a²/2t} dt = ...;

try this for yourself.

The above examples give a hint that many interesting facts about Brownian motion can be found from the Optional Stopping Theorem, applied to a suitable stopping time.

Example 3.4 T_a ∧ T_b can be treated similarly, and Laplace inversion can be used to give P(T_a ∧ T_b ∈ dt).

Recall that E(e^{−λT}) = φ(λ) = 1 + a₁λ + a₂λ² + ... = 1 − λE(T) + (λ²/2)E(T²) − ..., so moments can be read off by comparing coefficients.
If we define X_t = B_t − ct with c > 0, then X_t → −∞ a.s. We find P(T_a < ∞), where T_a = inf{t : X_t = a} with a > 0. We consider the martingale e^{θB_t − θ²t/2} = e^{θX_t} e^{(θc − θ²/2)t}. The OST gives

E(e^{θX_{T_a∧N}} e^{(θc−θ²/2)(T_a∧N)}) = 1.

Now, for θ > 0, e^{θX_{T_a∧N}} → e^{θa} on {T_a < ∞} and → 0 on {T_a = ∞}, and it is dominated by e^{θa}. Also, if θc − θ²/2 < 0, then e^{(θc−θ²/2)(T_a∧N)} → e^{(θc−θ²/2)T_a} on {T_a < ∞} and → 0 on {T_a = ∞}. If we set θc − θ²/2 = −λ then we get

E(e^{−λT_a} χ_{T_a<∞}) = e^{−θa}

for θ > 0 with θc − θ²/2 < 0. Now let θ → 2c (so λ → 0), giving P(T_a < ∞) = e^{−2ca}. This makes sense: if a is big then the chance is small, and if c is large, i.e. the drift is strongly negative, then the chance is also small.

Theorem 3.3 (OST version 1) Suppose (Mt : t ≥ 0) is an Ft martingale with contin-


uous paths. Suppose also that T is a bounded stopping time and M is bounded. Then

E(MT ) = E(M0 )

Proof Suppose first that T is discrete, and T ≤ K. Let T ∈ {t₁, ..., t_N} where t₁ < t₂ < ... < t_N ≤ K. Then

E(M_T) = Σ_{k=1}^N E(M_{t_k} χ_{T=t_k}),

and E(M_t|F_s) = M_s gives E(M_t χ_A) = E(M_s χ_A) for A ∈ F_s. Also {T = t_k} = {T ≤ t_k} \ {T ≤ t_{k−1}} ∈ F_{t_k}, and so

Σ_{k=1}^N E(M_{t_k} χ_{T=t_k}) = Σ_{k=1}^N E(M_{t_N} χ_{T=t_k}) = E(M_{t_N}) = E(M₀).
Suppose now that we have a general stopping time T. Define T_N = j/N if T ∈ [(j−1)/N, j/N). The T_N are discrete stopping times and T_N → T; also M_{T_N} → M_T by the continuity of paths, and so

E(M₀) = E(M_{T_N}) → E(M_T),

using the DCT since |M_t(ω)| ≤ K′ for some K′ and all t (M is bounded). Q.E.D.
Exercise: B_t = (B_t^{(1)}, ..., B_t^{(d)}) is a Brownian motion on R^d. Suppose T = inf{t : B_t^{(2)} = 1}. Find P(B_T^{(1)} ∈ dx). Solve using the facts that T and B^{(1)} are independent and that you know P(T ∈ dt).

Proposition 3.4 (Lots of Brownian martingales) Suppose that B is a d-dimensional Brownian motion. Then

f(B_t, t) − ∫₀^t (∂f/∂s + ½Δf)(B_s, s) ds

is a martingale if f ∈ C^{2,1} and f, ∂f/∂s, ∂f/∂x_j, ∂²f/∂x_i∂x_j are of at most exponential growth.

For example, B_t² − t comes from f(x, t) = x², for which ½Δf = 1. Also B_t⁴ isn't a martingale, but it is if we subtract ∫₀^t 6B_s² ds. For e^{θB_t − θ²t/2} we choose f(x, t) = e^{θx − θ²t/2}, and we subtract nothing.
If B_t^x = x + B_t is d-dimensional Brownian motion started at x, then the above holds with B_t replaced by B_t^x. We will prove it soon; the proof uses the fact that the Gaussian density solves ∂φ_t/∂t = ½Δφ_t.
Of special importance are f(x, t) satisfying ∂f/∂s + ½Δf = 0, or Δf = 0.
Consider the Dirichlet problem. Let D ⊂ R^d be open. Find u such that

Δu(x) = 0 in D, u(x) = f(x) on ∂D,   (3.1)

with f a given function.

Theorem 3.5 Suppose D is a bounded open set of R^d. Suppose u ∈ C²(R^d) solves

Δu(x) = 0 in D, u(x) = f(x) on ∂D.

Then

u(x) = E(f(B_T^x)),

where T = inf{t : B_t^x ∈ ∂D}.

Proof We can modify u outside of D so that it has at most exponential growth. Then, by the above proposition, u(B_t^x) − ∫₀^t ½Δu(B_s^x) ds is a martingale. The Optional Stopping Theorem, for T ∧ N, gives

u(x) = E(u(B^x_{T∧N}) − ∫₀^{T∧N} ½Δu(B_s^x) ds) = E(u(B^x_{T∧N})),

since Δu(B_s^x) = 0 for s < T, and by the DCT we get u(x) = E(u(B_T^x)) = E(f(B_T^x)), since u = f on ∂D. Q.E.D.
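This representation gives a simple Monte Carlo method for the Dirichlet problem. A sketch with a crude Euler walk, assuming the unit disc with f = 1 on the upper boundary arc (where u(0) = 1/2 by symmetry); the step size dt is a hypothetical tuning choice:

import numpy as np

rng = np.random.default_rng(7)

def u_mc(x, paths=2000, dt=1e-3):
    # estimate u(x) = E f(B_T^x) on the unit disc,
    # f = 1 on the upper boundary arc, 0 on the lower arc
    total = 0.0
    for _ in range(paths):
        pos = np.array(x, dtype=float)
        while pos @ pos < 1.0:                 # walk until exit
            pos += rng.normal(0.0, np.sqrt(dt), size=2)
        total += 1.0 if pos[1] > 0 else 0.0
    return total / paths

print(u_mc((0.0, 0.0)))   # harmonic measure of the upper arc from 0 is 1/2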

Example 3.5 Let u(x) = 1/|x|^{d−2} for d ≥ 3 and u(x) = log |x| for d = 2; this satisfies Δu = 0 except at x = 0. I leave this as a check for the reader, as it has been seen before many times. Let D = {x ∈ R³ : a < |x| < b}, and let T_a = inf{t : |B_t^x| = a} and T_b = inf{t : |B_t^x| = b}. Then let T = T_a ∧ T_b, and so

1/|x| = E(1/|B_T^x|) = (1/a) P(T_a < T_b) + (1/b) P(T_b < T_a),

with 1 = P(T_a < T_b) + P(T_b < T_a) by the law of the iterated logarithm, and so

P(T_a < T_b) = (1/|x| − 1/b)/(1/a − 1/b).

If we let b → ∞ we get P(T_a < T_b) → P(T_a < ∞) = a/|x|.

Corollary 3.6 |B_t| → ∞ as t → ∞ a.s., in d ≥ 3.

Proof If B_t does not tend to infinity then there exist K and t_N → ∞ with |B_{t_N}| ≤ K. Let T_N = inf{t : |B_t| ≥ N}; the law of the iterated logarithm says T_N < ∞. Then X_t^{(N)} = B_{T_N+t} − B_{T_N} is a Brownian motion, and P(X^{(N)} brings the path back to B(0, K)) = (K/N)^{d−2}, and thus

P(∩_{N≥K} {X^{(N)} brings the path back to B(0, K)}) = 0,

and so

P(∪_{K=1}^∞ ∩_{N≥K} {X^{(N)} brings the path back to B(0, K)}) = 0.

Q.E.D.

For d = 2, let b → ∞ in Example 3.5: then P(T_a < ∞) = 1. If instead a → 0 then P(T_{{0}} < T_b) = 0, and if we now let b → ∞ then P(T_{{0}} < ∞) = 0.

Corollary 3.7 In d = 2, P(B ever hits {x}) = 0 for x ≠ 0.

Corollary 3.8 Leb(B_t : t ≥ 0) = 0 a.s., and so the path is not a space filling curve.

Proof Leb(B_t : t ≥ 0) = ∫_{R²} χ(x ∈ {B_t, t ≥ 0}) dx, and we consider expectations:

E(Leb(B_t : t ≥ 0)) = E(∫_{R²} χ(x ∈ {B_t, t ≥ 0}) dx) = ∫_{R²} P(x ∈ {B_t, t ≥ 0}) dx = 0.

Q.E.D.

Corollary 3.9 The range of (B_t, t ≥ 0) is dense in R².

The Poisson problem is the following, for D ⊂ R^d:

½Δu = −g in D, u = 0 on ∂D,   (3.2)

and here g represents pumping heat in or out, and u is the steady state temperature.
Theorem 3.10 (Poisson version 1) Suppose u ∈ C²(R^d) solves equation (3.2) with D bounded. Then

u(x) = E(∫₀^T g(B_s^x) ds),

with T = inf{t : B_t^x ∈ ∂D}.

Example 3.6 u(x) = (R² − |x|²)/d solves the Poisson problem on a ball of radius R with g = 1. Then u(x) = E(T).

Example 3.7 Suppose D = (0, ∞) ⊂ R and u(x) = −x²/2, which solves ½u″ = −½ with u(0) = 0. However E(T) = ∞, so E(∫₀^T g ds) ≠ u(x): boundedness of D matters.

Proof We can modify u away from D to have exponential growth. Then

u(B_t^x) − ∫₀^t ½Δu(B_s^x) ds

is a martingale. The OST then gives

u(x) = E(u(B^x_{T∧N}) − ∫₀^{T∧N} ½Δu(B_s^x) ds) = E(u(B^x_{T∧N}) + ∫₀^{T∧N} g(B_s^x) ds).

As N → ∞ the former term tends to E(u(B_T^x)) = 0 by the DCT, and the latter tends to E(∫₀^T g(B_s^x) ds) by domination with ||g||_∞ T; for this we need E(T) < ∞, but T ≤ inf{t : |B_t^{(1)} − x₁| = K} for a suitable K (D is bounded), and the right hand side has finite expectation. Q.E.D.
PDE people generally look for solutions to these problems in C²(D) ∩ C(D̄). Sadly, one cannot in general extend these to C²(R^d), or even C²(D̄).

Example 3.8 ½Δu = −1 on (0, 1)², with u = 0 on the boundary. Then u is not C²([0, 1]²) because of the effects at the corners: at (0, 0) the boundary values force Δu = 0.

We can improve, though, to allow solutions in C²(D) ∩ C(D̄) such that u(x) still solves the Dirichlet or Poisson problem. We do this as follows: shrink the domain by defining D_ε = {x : d(x, D^c) > ε}, so that D_ε → D, and we can then mollify. Let T_ε = inf{t : B_t ∈ ∂D_ε}; then B_{T_ε} → B_T. One can then believe that you can modify u outside of D_ε to be C²(R^d) with exponential growth. Applying version 1 to D_ε gives u(x) = E(u(B_{T_ε})) → E(f(B_T)) as ε → 0.
By exponential growth, we mean |g(t, x)| ≤ C₀e^{C₁|x|} for all x, t.
Proof (of Proposition 3.4) Write Lf = ∂f/∂s + ½Δf. We first show that E(|M_t|) < ∞:

E(|f(B_t^x, t)|) ≤ C₀ E(e^{C₁|B_t^x|}) ≤ C₀ E(e^{C₁B_t^x} + e^{−C₁B_t^x}) ≤ 2C₀ e^{C₁|x|} e^{C₁²t/2} < ∞,

and similarly E(|∫₀^t Lf(B_s^x, s) ds|) < ∞. WLOG we can take x = 0, since we can shift and we still have exponential growth. Then, with X_r = B_{s+r} − B_s,

E(M_t|F_s^B) = E(f(B_t, t) − ∫₀^t Lf(B_r, r) dr|F_s^B)
 = E(f(B_s, s) + (f(B_t, t) − f(B_s, s)) − ∫₀^s Lf(B_r, r) dr − ∫_s^t Lf(B_r, r) dr|F_s^B)
 = M_s + E(f(B_s + X_{t−s}, t) − f(B_s, s) − ∫₀^{t−s} Lf(X_r + B_s, s + r) dr|F_s^B)
 = M_s + E(f(z + X_{t−s}, t) − f(z, s) − ∫₀^{t−s} Lf(X_r + z, s + r) dr)|_{z=B_s},
where X_r = B_{s+r} − B_s is independent of F_s^B. The result now follows from E(g(X_t, t) − g(0, 0) − ∫₀^t Lg(X_r, r) dr) = 0, where g(y, t) = f(z + y, t + s). Equivalently,

E(g(X_t, t)) − g(0, 0) = ∫₀^t E(Lg(X_r, r)) dr,

which is the same as

(d/dt) E(g(X_t, t)) = E(Lg(X_t, t)).

Then, with φ(x, t) = (2πt)^{−d/2} e^{−|x|²/2t}, which satisfies ∂φ/∂t = ½Δφ,

(d/dt) E(g(X_t, t)) = (d/dt) ∫_{R^d} g(x, t) φ(x, t) dx
 = ∫_{R^d} (∂g/∂t (x, t) φ(x, t) + g ∂φ/∂t) dx
 = ∫_{R^d} (∂g/∂t (x, t) φ(x, t) + ½(Δφ) g) dx
 = ∫_{R^d} (∂g/∂t (x, t) φ(x, t) + ½(Δg) φ) dx   (integrating by parts twice)
 = E(Lg(X_t, t)),

as required. Q.E.D.

Theorem 3.11 (Heat Equation) If u ∈ C^{1,2}([0, ∞) × R^d) is of exponential growth and solves

∂u/∂t = ½Δu for t > 0, x ∈ R^d, with u(0, x) = f(x) for x ∈ R^d,

then

u(t, x) = E(f(B_t^x)) = ∫_{R^d} f(y) (1/(2πt)^{d/2}) e^{−|x−y|²/2t} dy.

Theorem 3.12 (Heat Equation on a region) Suppose D ⊂ R^d is bounded, and u ∈ C^{1,2}([0, ∞) × D) ∩ C([0, ∞) × D̄) solves

∂u/∂t = ½Δu for t > 0, x ∈ D,
u(0, x) = f(x) for x ∈ D,
u(t, x) = g(x) for x ∈ ∂D, t > 0.

Let T = inf{t : B_t^x ∈ ∂D}; then

u(t, x) = E(g(B_T^x)χ_{T≤t} + f(B_t^x)χ_{T>t}).

We prove both theorems simultaneously. Proof Fix t > 0 and consider the map

s ↦ u(t − s, B_s^x) − ∫₀^s (−∂u/∂r + ½Δu)(t − r, B_r^x) dr;

in other words, we run backwards in time. Since u solves the heat equation the integrand vanishes, so on R^d the process u(t − s, B_s^x) is a martingale, and using the OST we get

u(t, x) = E(u(0, B_t^x)) = E(f(B_t^x)),

as required.
On D we stop at T ∧ t, which is a bounded stopping time. Then u(t − (s ∧ T), B^x_{s∧T}) is a martingale, so

u(t, x) = E(u(t − T ∧ t, B^x_{T∧t})) = E(χ_{T≤t} g(B_T^x) + χ_{T>t} f(B_t^x)),

as required. Q.E.D.

Example 3.9 Brownian motion staying in a tube. Write u(t, x) = P(|B_s^x| < 1 for all s ≤ t). Then u(t, x) solves the equation

∂u/∂t = ½ ∂²u/∂x² for x ∈ (−1, 1), t > 0,
u(0, x) = 1,
u(t, x) = 0 for x = ±1,

but note that u ∉ C([0, t] × [−1, 1]), and the solution is

u(t, x) = Σ_{k=1}^∞ a_k cos((2k − 1)πx/2) e^{−(2k−1)²π²t/8}.

Inspired by this, we consider the first term, cos(πx/2) e^{−π²t/8}, which solves the above equation with initial data u(0, x) = cos(πx/2), and then, with T the exit time of (−1, 1) starting from 0,

u(t, 0) = e^{−π²t/8} = E(cos(πB_t/2) χ_{T>t}) ≤ P(T > t),

and so P(T > t) ≥ e^{−π²t/8}, and this is correct asymptotically.

Example 3.10 Find P(B^{x₁}, ..., B^{x_d} do not collide by time t), where B_t^x = (B_t^{x₁}, ..., B_t^{x_d}) is a d-dimensional Brownian motion. Let V_d = {x ∈ R^d : x₁ < x₂ < ... < x_d}, which is called a cell, and then ∂V_d = {x ∈ R^d : x_i = x_{i+1} for some i}. The non-collision probability solves

∂u/∂t = ½Δu for x ∈ V_d, t > 0,
u(0, x) = f(x) for x ∈ V_d,
u(t, x) = 0 for x ∈ ∂V_d,

and this has a famous solution

u(t, x) = ∫_{V_d} f(y₁, ..., y_d) det(φ_t(x_i − y_j))_{i,j=1}^d dy₁ ... dy_d,

where φ_t(z) = (1/√(2πt)) e^{−z²/2t}.

Definition 3.13 φ : R → R is convex means φ(x) = sup{L(x)|L is linear , L ≤ φ}

Note that if φ ∈ C 2 (R) then φ is convex if and only if φ00 ≥ 0.

Lemma 3.14 (Jensen’s Inequality) If E(|X|) < ∞ and φ : R → R is convex then

E(φ(X)) ≥ φ(E(X))

E(φ(X)|F) ≥ φ(E(X|F)) a.s.

Proof

E(φ(X)|F) ≥ E(L(X)|F)
= E(aX + b|F)
= aE(X|F) + b
= L(E(X|F)) a.s.

and then taking a supremum over L ≤ φ gives the result. Note we need to ensure somehow
that we only need countably many Ls. Q.E.D.

Corollary 3.15 Suppose (Mt ) a martingale and φ is convex, then

E(φ(Mt )|Fs ) ≥ φ(E(Mt |Fs )) ≥ φ(Ms )

Lemma 3.16 (OST for φ(Mt )) Suppose (Mt , t ≥ 0) is a martingale with continuous
paths and φ ≥ 0 is a convex function. Suppose T ≤ K is a bounded stopping time. Then

E(φ(MT )) ≤ E(φ(MK ))

Proof First assume that T is discrete, so T ∈ {t1 , ..., tM } with t1 < t2 < ... < tM . Then
M
X M
X
E(φ(MT )) = E(φ(Mtk )χ{T =tk } ) ≤ E(φ(MK )χ{T =tk } ) = E(φ(MK ))
k=1 k=1

Now for general T , find discrete stopping times TN ≤ K so that TN → T and then we
have E(φ(MTN )) ≤ E(φ(MK )) and using Fatou’s lemma we get

E(φ(MT )) ≤ E(φ(MK ))

Q.E.D.
We now prove a more general OST, one without boundedness of the martingale
Theorem 3.17 Suppose (Mt , t ≥ 0) is a continuous martingale, and T ≤ K is a bounded
stopping time. Then
E(MT ) = E(M0 )
Proof For discrete T this works as above and is fine.
Next consider M_t ≥ 0, and take discrete stopping times T_N → T with E(M_{T_N}) = E(M₀). Writing x = x ∧ L + (x − L)₊, we have

E(M₀) = E(M_{T_N}) = E(M_{T_N} ∧ L) + E((M_{T_N} − L)₊),

and now E(M_{T_N} ∧ L) → E(M_T ∧ L) by the DCT, and E(M_T ∧ L) = E(M_T) − E((M_T − L)₊). Fix ε > 0 and choose L large so that

E((M_{T_N} − L)₊) ≤ E((M_K − L)₊) < ε

(using Lemma 3.16 with the convex function x ↦ (x − L)₊), and then take N large so that

|E(M_{T_N} ∧ L) − E(M_T ∧ L)| ≤ ε.

Finally, truncate a general martingale at +L and −L; this is a bit messier though. Q.E.D.

We now have some final remarks on the Dirichlet problem.
Can we simply define u(x) = E(f(B_T^x)) and check that it solves the Dirichlet problem? Suppose that D = {x ∈ R² : 0 < |x| < 1}, the punctured disc, with u = 1 on the set {x : |x| = 1} and u = 0 at 0. Then the solution given by the Brownian motion formula is u ≡ 1 (planar Brownian motion never hits the single point 0), and this does not satisfy the boundary condition at 0.
However, if u is of that form, then u ∈ C^∞(D) and Δu = 0 on D, always.
We call a point y ∈ ∂D regular if u(x) → f(y) as x → y in D. Thus if all points of ∂D are regular then u is a solution. We thus need a sufficient condition for y ∈ ∂D to be regular.
To show the first point, we observe that u being harmonic is the same as u satisfying the ball averaging property, or the sphere averaging property: if B(x, ε) ⊂ D then

u(x) = (1/|B(x, ε)|) ∫_{B(x,ε)} u(y) dy,

or

u(x) = (1/SA_ε) ∫_{∂B(x,ε)} u(y) dS,

where SA_ε is the surface area of the sphere of radius ε.

Sphere averaging for our formula is almost obvious. If we let S_ε = inf{t : B_t^x ∈ ∂B(x, ε)}, X_t = B_{S_ε+t} − B_{S_ε} and T′ = inf{t : X_t + B_{S_ε}^x ∈ ∂D}, then

u(x) = E(f(B_T^x))
 = E(f(B_{S_ε}^x + (B_T^x − B_{S_ε}^x)))
 = E(f(B_{S_ε}^x + X_{T′}))
 = E(E(f(B_{S_ε}^x + X_{T′})|F_{S_ε}^B))
 = E(E(f(z + X_{T′}))|_{z=B_{S_ε}^x})
 = E(u(z)|_{z=B_{S_ε}^x})
 = E(u(B_{S_ε}^x)),

which is sphere averaging, since the exit point B_{S_ε}^x is uniform on the sphere ∂B(x, ε).


To do the part on regular points, the following is an equivalent definition of regular.
P(Tx > ε) → 0 as x → y where Tx = inf{t : Btx ∈ ∂D} for all ε > 0.
This is equivalent to P(σ y = 0) = 1 where σ y = inf{t > 0 : Bty ∈ DC }. The 0-1 law
says that this is either 0 or 1. Remember Lebesgue’s thorn. If y is on a thin enough spike
then P(σ y > 0) = 1 and then you cannot solve the Dirichlet problem.
A sufficient condition for y to be a regular point is the cone condition, namely if there
exists a cone in DC with vertex at y ∈ ∂D then y is regular. This is because

P(σ y ≤ ε) ≥ P(Bε ∈ cone ) = p(α) > 0

where α is the solid angle of the cone. Then letting ε → 0 gives

P(σ y = 0) ≥ p(α) > 0

and so the 0-1 law gives P(σ y = 0) = 1


See the book by Richard Bass for more on this sort of material.

4 Donsker’s theorem
The idea is to show random walks converge to Brownian motion. Throughout this chapter we have the following: Z_1, Z_2, ... are IID random variables with E(Z_i) = 0 and E(Z_i^2) = 1, and we define S_N = ∑_{k=1}^N Z_k, the position at time N. We can interpolate the S_N to get S_t, given by

S_t = (N + 1 − t)S_N + (t − N)S_{N+1},   t ∈ [N, N + 1]

We also define X_t^{(N)} = S_{Nt}/√N. We cannot hope that X_t^{(N)} → B_t pathwise; however, we do have that X^{(N)} →^D B, and this is the aim of this section.
We know that

S_N/√N →^D N(0, 1)

by the central limit theorem, and so

X_t^{(N)} = S_{Nt}/√N = S_{⌊Nt⌋}/√N + error = (S_{⌊Nt⌋}/√⌊Nt⌋)(√⌊Nt⌋/√N) + error →^D N(0,1)√t

we hope. Thus X_t^{(N)} has the same limiting distribution as B_t.
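As a sanity check of this one-dimensional claim, here is a small simulation sketch (Python with numpy; the Rademacher ±1 steps are one choice of Z_i with mean 0 and variance 1):

    import numpy as np

    rng = np.random.default_rng(1)
    N, n_samples = 1000, 50000
    Z = rng.choice([-1.0, 1.0], size=(n_samples, N))   # IID steps, mean 0, variance 1
    X1 = Z.sum(axis=1) / np.sqrt(N)                    # X_1^{(N)} = S_N / sqrt(N)
    print("mean (target 0):", X1.mean())
    print("var  (target 1):", X1.var())
    print("P(X1 <= 1) (target Phi(1) ~ 0.8413):", (X1 <= 1.0).mean())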


Similarly (X_{t_1}^{(N)}, ..., X_{t_k}^{(N)}) →^D (B_{t_1}, ..., B_{t_k}), but does

max_{t∈[0,1]} X_t^{(N)} →^D max_{t∈[0,1]} B_t   (4.1)

∫_0^1 X_t^{(N)} dt →^D ∫_0^1 B_t dt ∼ N(0, 1/3)   (4.2)

∫_0^1 χ{X_s^{(N)} > 0} ds →^D ∫_0^1 χ{B_s > 0} ds   (4.3)

(4.2) can be rewritten as (1/N^{3/2}) ∑_{K=1}^N S_K →^D N(0, 1/3), and (4.3) can be rewritten, almost, as the convergence of (number of times K ≤ N with S_K > 0)/N.
The plan is to think of (X_t^{(N)}, t ∈ [0,1]) =: X^{(N)} and (B_t, t ∈ [0,1]) = B as random variables in C[0,1]. We thus need to show that

X^{(N)} →^D B

on C[0,1]. This is a big improvement of the central limit theorem. We then want to deduce that

F(X^{(N)}) →^D F(B)

and we observe that

X^{(N)} →^D B ⟹ F(X^{(N)}) →^D F(B)

if F is continuous. In the above questions, the maximum and the integral are continuous as functions C[0,1] → R, but the last functional is not.

Definition 4.1 Let (E, d) be a metric space. A measurable X : (Ω, F, P) → (E, d) (i.e. X^{-1}(B) ∈ F for B Borel in E) is called an E-valued random variable.


We use this with E = C[0,1] and

d(f, g) = sup_{t∈[0,1]} |f(t) − g(t)|

An open ball is B(f, ε) = {g : sup_t |g(t) − f(t)| < ε}, and the Borel sets are generated by the open balls.

Lemma 4.2 B(C[0,1]) = σ(F_0) where

F_0 = {f : f(t_1) ∈ O_1, ..., f(t_N) ∈ O_N}

for t_1, ..., t_N ∈ [0,1], N ≥ 0, and O_i open sets.

Proof It is easy to check that F_0 is a π-system.
We check that σ(F_0) ⊂ B(C[0,1]), for which it is enough that each set in F_0 is open. If f ∈ {g : g(t_1) ∈ O_1, ..., g(t_N) ∈ O_N} then, since O_1 is open, there is an ε_1 such that ||f − g||_∞ < ε_1 implies g(t_1) ∈ O_1, and so on; taking ε = min{ε_1, ..., ε_N} we get B(f, ε) ⊂ {g : g(t_1) ∈ O_1, ..., g(t_N) ∈ O_N}, so this set is open.
We check that B(C[0,1]) ⊂ σ(F_0). It is enough to check that B(f, ε) ∈ σ(F_0). For the closed ball B̄(f, ε),

B̄(f, ε) = {g : |g(t) − f(t)| ≤ ε for all t ∈ [0,1]}
        = ∩_{t∈[0,1]∩Q} {g : |g(t) − f(t)| ≤ ε}
        = ∩_{t∈[0,1]∩Q} ∩_{N≥1} {g : |g(t) − f(t)| < ε + 1/N} ∈ σ(F_0)

and then B(f, ε) = ∪_{N≥1} B̄(f, ε − 1/N). Q.E.D.
As an example, B = (B_t, t ∈ [0,1]) is a C[0,1]-valued random variable: one observes that {ω : B_{t_1}(ω) ∈ O_1, ..., B_{t_N}(ω) ∈ O_N} ∈ F. Also the X^{(N)} are random variables in C[0,1]: we can see this as a composition of measurable functions Ω → R^N → C[0,1], ω ↦ (Z_1(ω), ..., Z_N(ω)) ↦ X^{(N)}(ω), where the first map is measurable and the second is continuous.
Definition 4.3 Let X^{(N)}, X be (E, d)-valued random variables. Then X^{(N)} →^D X means

E(F(X^{(N)})) → E(F(X))

for all F : E → R bounded and continuous.

Theorem 4.4 (Donsker)

X^{(N)} →^D B

on C[0,1].

Theorem 4.5 (Continuous Mapping) If X^{(N)} →^D X on (E, d) and G : (E, d) → (Ẽ, d̃) is continuous, then

G(X^{(N)}) →^D G(X)

on (Ẽ, d̃).


Proof Take F : Ẽ → R bounded and continuous. Then

E(F (G(X (N ) ))) → E(F (G(X)))

since F ◦ G is bounded and continuous. Q.E.D.

Corollary 4.6

max_{t∈[0,1]} X_t^{(N)} →^D max_{t∈[0,1]} B_t

Proof F(f) = max_{t∈[0,1]} f(t) is a continuous map C[0,1] → R. Q.E.D.
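By the reflection principle, max_{t∈[0,1]} B_t has the law of |N(0,1)|, so P(max_t B_t ≤ 1) = 2Φ(1) − 1 ≈ 0.6827. A simulation sketch of the corollary (Python with numpy; since X^{(N)} is piecewise linear, its maximum is attained at a grid point or at t = 0):

    import numpy as np

    rng = np.random.default_rng(2)
    N, n_samples = 1000, 50000
    S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_samples, N)), axis=1)
    M = np.maximum(S.max(axis=1), 0.0) / np.sqrt(N)   # max of X^{(N)} over [0,1]
    print("P(max X^{(N)} <= 1):", (M <= 1.0).mean(), "  target 2*Phi(1)-1 ~ 0.6827")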


Consider ∫_0^1 χ(X_t^{(N)} > 0) dt →^D ∫_0^1 χ(B_t > 0) dt: here the functional is not continuous. Also recall that X_N →^D X does not imply that P(X_N ∈ (a, b)) → P(X ∈ (a, b)); for example, consider X_N = 1/N and (a, b) = (0, 1).
Consider the simple random walk S_N. Then S_N^2/N →^D N(0,1)^2, but P(S_N^2/N ∈ Q) = 1 ≠ 0 = P(N(0,1)^2 ∈ Q). However, P(S_N^2/N ∈ (a, b)) → P(N(0,1)^2 ∈ (a, b)).

Theorem 4.7 (Extended Continuous Mapping) If X^{(N)} →^D X on (E, d) and F : (E, d) → R is measurable with Disc(F) = {f : F is discontinuous at f} satisfying P(X ∈ Disc(F)) = 0, then

F(X^{(N)}) →^D F(X)

This is a big, non-trivial improvement.


For the indicator problem, we guess the discontinuity set: F is discontinuous at f if ∫_0^1 χ(f(t) = 0) dt > 0. But for a Brownian path ∫_0^1 χ(B_t = 0) dt = 0 almost surely, and so we have the result we want.
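The limit in (4.3) is the arcsine law, P(∫_0^1 χ(B_s > 0) ds ≤ x) = (2/π) arcsin(√x). A simulation sketch (Python with numpy; treating the time spent at S_k = 0 as not positive is a harmless convention for large N):

    import numpy as np

    rng = np.random.default_rng(3)
    N, n_samples = 2000, 20000
    S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_samples, N)), axis=1)
    frac = (S > 0).mean(axis=1)       # fraction of time the walk is positive
    x = 0.2
    print("empirical P(frac <= 0.2):", (frac <= x).mean())
    print("arcsine law value       :", 2 / np.pi * np.arcsin(np.sqrt(x)))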
We now try to prove the approximation. We remind ourselves that X_t^{(N)} = S_{Nt}/√N, where for now S_N is the simple symmetric random walk; in general we take (Z_1, Z_2, ...) IID with mean 0 and variance 1. The aim is to show that X^{(N)} →^D B on C[0,1] with the supremum norm.
We embed simple random walks as follows. We define T_1 = inf{t : |B_t| = 1}, then inductively define T_{N+1} = inf{t ≥ T_N : |B_t − B_{T_N}| = 1}, and then linearly interpolate. We have

(B_{T_1}, B_{T_2}, ...) =^D (S_1, S_2, ...)

We are close to a proof. We know that E(T_1) = 1, and the strong Markov property at T_N implies that E(T_{N+1} − T_N) = 1 and that T_{N+1} − T_N is independent of T_1, ..., T_N. Hence T_N ≈ N ± O(√N), and if we take B_t^{(N)} = B_{Nt}/√N then we expect X^{(N)} to be close to this.

Lemma 4.8 (Skorokhod) Take Z with E(Z) = 0. Then there exists a stopping time T < ∞ so that B_T =^D Z and E(T) = E(Z^2).

Financial mathematicians love this lemma. There are at least 14 different ways to prove it.
Since B_t^2 − t is a martingale, E(B_{T∧N}^2 − T ∧ N) = 0, i.e. E(T ∧ N) = E(B_{T∧N}^2), and this, by Fatou and monotone convergence, gives E(T) ≥ E(B_T^2). So E(T) = E(Z^2) is the best one can hope for.
If we take Z independent of B and choose T = inf{t : B_t = Z}, then B_T = Z but E(T) = E(E(T | σ(Z))) = E(E(T_a)|_{a=Z}) = ∞. This is a bit of a silly example.


Proof We first suppose Z ∈ {a, b} with a < 0 < b. Take T = T_{a,b} = inf{t : B_t ∈ {a, b}}; then

P(B_T = a) = b/(b − a),   P(B_T = b) = −a/(b − a)

which matches the law of Z (since E(Z) = 0 forces these weights), and then E(T) = −ab.
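A sketch checking the two-point case numerically, with an Euler discretisation of B (the step size dt is an assumption of the sketch; the discretised path slightly overshoots the levels a and b):

    import numpy as np

    rng = np.random.default_rng(4)
    a, b, dt, n_paths = -1.0, 2.0, 1e-3, 2000
    B, t = np.zeros(n_paths), np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        B[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        t[alive] += dt
        alive = (a < B) & (B < b)     # exited paths stay frozen
    print("P(B_T = a):", (B <= a).mean(), "  target b/(b-a) =", b / (b - a))
    print("E(T)      :", t.mean(),       "  target -ab =", -a * b)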


We now take a general Z. Choose random α < 0, β > 0 independent of B, and use T = T_{α,β}. We need to find the distribution of (α, β): that is, we need to choose

ν(da, db) = P(α ∈ da, β ∈ db)

and we have the target distributions

μ_+(dz) = P(Z ∈ dz) for z ≥ 0,   μ_-(dz) = P(Z ∈ dz) for z < 0

For z ≥ 0 we need

μ_+(dz) = P(B_{T_{α,β}} ∈ dz)
        = E(P(B_{T_{α,β}} ∈ dz | σ(α, β)))
        = E( (−α)/(β − α) χ(β ∈ dz) )
        = ∬ (−a)/(b − a) χ(b ∈ dz) ν(da, db)
        = ∫ (−a)/(z − a) ν(da, dz)

and so we choose ν(da, dz) = (z − a) μ_+(dz) π(da), where ∫ (−a) π(da) = 1; with this choice we have matched μ_+.
For z < 0 we have

μ_-(dz) = P(B_{T_{α,β}} ∈ dz)
        = E( β/(β − α) χ(α ∈ dz) )
        = ∬ b/(b − a) χ(a ∈ dz) ν(da, db)
        = ∫ b μ_+(db) π(dz)

and so we choose

π(dz) = μ_-(dz) / ∫ b μ_+(db)

so that (α, β) is distributed as

ν(da, db) = (b − a) μ_+(db) μ_-(da) / ∫ x μ_+(dx)

We thus have four things to check:

1. P(B_{T_{α,β}} ∈ dz) = P(Z ∈ dz)


2. ∫ (−a) π(da) = 1

3. E(T_{α,β}) = E(Z^2)

4. ∬ ν(da, db) = 1
These are all true, but we only check 2 and 3; observe that 1 holds by construction. For 2, we want to show that

∫ (−a) μ_-(da) = ∫ a μ_+(da)

but we have

0 = E(Z) = ∫ a μ_+(da) + ∫ a μ_-(da)

which is what we want.


For 3, observe that

E(T_{α,β}) = E(−αβ)
           = ∬ (−ab) ν(da, db)
           = ∬ (−ab)(b − a) μ_+(db) μ_-(da) / ∫ x μ_+(dx)
           = ( −∫ x dμ_- ∫ x^2 dμ_+ + ∫ x^2 dμ_- ∫ x dμ_+ ) / ∫ x dμ_+
           = ∫ x^2 dμ_+ + ∫ x^2 dμ_-
           = E(Z^2)

using ∫ x dμ_- = −∫ x dμ_+ in the second-to-last step.

Q.E.D.
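A sketch of the construction for a concrete discrete Z with P(Z = −1) = 3/5 and P(Z = 1) = P(Z = 2) = 1/5, so E(Z) = 0 and E(Z^2) = 8/5. For this Z the formula for ν puts mass 2/5 on (α, β) = (−1, 1) and 3/5 on (−1, 2); given (α, β) we sample B_{T_{α,β}} from the exact two-point exit law and use E(T_{α,β} | α, β) = −αβ rather than simulating paths:

    import numpy as np

    rng = np.random.default_rng(5)
    pairs = np.array([[-1.0, 1.0], [-1.0, 2.0]])   # support of nu for this Z
    nu = np.array([2 / 5, 3 / 5])                  # nu = (b-a) mu_+(db) mu_-(da) / int x mu_+(dx)

    n = 200000
    idx = rng.choice(2, size=n, p=nu)
    alpha, beta = pairs[idx, 0], pairs[idx, 1]
    hit_low = rng.random(n) < beta / (beta - alpha)   # P(B_T = alpha | alpha, beta)
    BT = np.where(hit_low, alpha, beta)

    for z in (-1.0, 1.0, 2.0):
        print("P(B_T =", z, ") ~", (BT == z).mean())   # targets 0.6, 0.2, 0.2
    print("E(T) = E(-alpha*beta) ~", (-alpha * beta).mean(), "  target E(Z^2) = 1.6")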
We use the Skorokhod trick: B_{T_{α,β}} =^D Z with E(T_{α,β}) = E(Z^2) = 1. Take IID copies (α_1, β_1), (α_2, β_2), ... independent of B. Then set T_1 = inf{t : B_t ∈ {α_1, β_1}}, ..., T_{N+1} = inf{t ≥ T_N : B_t − B_{T_N} ∈ {α_{N+1}, β_{N+1}}}, and define (S_1, S_2, ...) = (B_{T_1}, B_{T_2}, ...); this has the random walk distribution that we want. We compare

X_t^{(N)} = S_{Nt}/√N   and   B_t^{(N)} = B_{Nt}/√N
The key estimate is

P(||X^{(N)} − B^{(N)}||_∞ > ε) → 0

as N → ∞. Assume this, write Ω_{N,ε} = {||X^{(N)} − B^{(N)}||_∞ > ε}, and fix F : C[0,1] → R bounded and uniformly continuous. We get

|E(F(X^{(N)})) − E(F(B^{(N)}))| ≤ E|F(X^{(N)}) − F(B^{(N)})|
                                ≤ 2||F||_∞ P(Ω_{N,ε}) + E(|F(X^{(N)}) − F(B^{(N)})| χ_{Ω_{N,ε}^C})

Fix η > 0 and use the uniform continuity to choose ε so that the second term is at most η/2.


Then choose N large to make the first term at most η/2.
We check shortly that it is enough to use only uniformly continuous functions.
We now show the key estimate. At the grid points,

X_{K/N}^{(N)} := S_K/√N = B_{T_K}/√N = B^{(N)}_{T_K/N} ≈ B^{(N)}_{K/N}

where the approximation is the first gap we need to plug: the difference of the two paths at the grid points. The second gap is the difference at the other parts of the paths.
We plug the first gap. The variables T_1, T_2 − T_1, T_3 − T_2, ... are IID with mean 1, so

T_N/N → 1 almost surely

by the strong law of large numbers. Then

max_{k=1,...,N} |T_k/N − k/N| → 0

almost surely, from the above (an Analysis I exercise). Let Ω_{N,δ} = {max_{k=1,...,N} |T_k/N − k/N| ≥ δ}; then

P(Ω_{N,δ}) → 0

as N → ∞.
We now plug the second gap. Suppose ||X^{(N)} − B^{(N)}||_∞ > ε. Then there exists a t ∈ [0,1] such that

|X_t^{(N)} − B_t^{(N)}| ≥ ε

and suppose that t ∈ [K/N, (K+1)/N]. Since X^{(N)} is linear on this interval, either

|X_{K/N}^{(N)} − B_t^{(N)}| ≥ ε   or   |X_{(K+1)/N}^{(N)} − B_t^{(N)}| ≥ ε

and the grid values X_{K/N}^{(N)} are Brownian values at the nearby times T_K/N. Hence

P(||X^{(N)} − B^{(N)}||_∞ > ε) ≤ P(Ω_{N,δ}) + P(|B_s^{(N)} − B_t^{(N)}| ≥ ε for some |s − t| ≤ δ + 1/N)
                               ≤ P(Ω_{N,δ}) + P(|B_s^{(N)} − B_t^{(N)}| ≥ ε for some |s − t| ≤ 2δ)

and then choose δ small (using the uniform continuity of Brownian paths; B^{(N)} is again a Brownian motion) and N large.
This concludes the proof of Donsker's theorem, modulo some other minor tidy-ups.
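A sketch of the coupling behind the proof: embed the simple walk in one discretised Brownian path via T_{k+1} = inf{t ≥ T_k : |B_t − B_{T_k}| = 1}, set S_k = B_{T_k}, and record max_{k≤N} |S_k − B_k|/√N as a grid proxy for ||X^{(N)} − B^{(N)}||_∞. The grid width dt and the search window (chosen so a crossing occurs within it with overwhelming probability) are assumptions of the sketch:

    import numpy as np

    rng = np.random.default_rng(6)

    def sup_gap(N, dt=0.01, window=2000):
        n_steps = int(3 * N / dt)            # path long enough that T_N < 3N whp
        B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))))
        gap, idx = 0.0, 0
        for k in range(1, N + 1):
            cross = np.abs(B[idx:idx + window] - B[idx]) >= 1.0
            idx += int(np.argmax(cross))     # grid index of T_k (assumes a crossing in the window)
            gap = max(gap, abs(B[idx] - B[int(round(k / dt))]))   # |S_k - B_k|
        return gap / np.sqrt(N)

    for N in (50, 200, 800):
        print(N, sup_gap(N))                 # should shrink as N grows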
Corollary 4.9

∫_0^1 X_t^{(N)} dt →^D ∫_0^1 B_t dt

We apply F(f) = ∫_0^1 f, which is continuous. The left hand side isn't very useful as it stands, so we rewrite it (the integral of the piecewise linear path is a trapezium sum, and S_0 = 0):

∫_0^1 X_t^{(N)} dt = (1/2N) ∑_{K=0}^{N−1} (S_K/√N + S_{K+1}/√N) = (1/N^{3/2}) ∑_{K=1}^{N} S_K − S_N/(2N^{3/2})

and so we suspect that

(1/N^{3/2}) ∑_{K=1}^{N} S_K →^D N(0, 1/3)
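The variance 1/3 can be checked directly from the covariance E(B_s B_t) = s ∧ t:

Var(∫_0^1 B_t dt) = ∫_0^1 ∫_0^1 E(B_s B_t) ds dt = ∫_0^1 ∫_0^1 (s ∧ t) ds dt = 2 ∫_0^1 ∫_0^t s ds dt = ∫_0^1 t^2 dt = 1/3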
We need a tidy up lemma:


Lemma 4.10 Suppose that X_N →^D X and Y_N →^D 0. Then X_N + Y_N →^D X.

This is not true if Y_N converges to a non-zero limit: for example take Z ∼ N(0,1), X_N = Z for all N, and Y_N = Z for N even, Y_N = −Z for N odd; then Y_N →^D Z but X_N + Y_N alternates between 2Z and 0.
Proof We consider characteristic functions:

|E(e^{iθ(X_N+Y_N)}) − E(e^{iθX})| ≤ |E(e^{iθX_N} − e^{iθX})| + |E(e^{iθ(X_N+Y_N)} − e^{iθX_N})|

The former tends to zero as X_N →^D X, and the latter is controlled as follows:

|E(e^{iθ(X_N+Y_N)} − e^{iθX_N})| ≤ E(|e^{iθY_N} − 1|)
                                ≤ √(E(|e^{iθY_N} − 1|^2))
                                = √(E(2 − e^{iθY_N} − e^{−iθY_N}))
                                → 0

since Y_N →^D 0 gives E(e^{±iθY_N}) → 1. Q.E.D.
Now back to the problem in hand. We have F(f) = ∫_0^1 f and X^{(N)} →^D B, and

F(X^{(N)}) = (1/N^{3/2}) ∑_{K=1}^{N} S_K − S_N/(2N^{3/2})

Then

E|S_N/(2N^{3/2})| ≤ √(E(S_N^2))/(2N^{3/2}) = √N/(2N^{3/2}) → 0
This uses the following.

Lemma 4.11 If E|X_N| → 0 then X_N →^D 0.

The proof uses

|E(e^{iθX_N} − 1)| ≤ |θ| E|X_N| → 0
We have another example, with a non-continuous F. Take T_a = inf{N : S_N ≥ a} and guess that

T_{a√N}/N →^D τ_a := inf{t : B_t = a}

and we check this. We take

F(f) = inf{t : f(t) ≥ a} ∧ 1

where the minimum with 1 is for convenience. Then

F(X^{(N)}) = T_{a√N}/N + error

where the error is at most 1/N. We need only show that

F(X^{(N)}) →^D F(B)

but the problem is that F is not continuous. We thus guess the discontinuity set of F:

Disc(F) ⊂ {f : τ_a(f) < 1, ∃ ε > 0 such that f(t) ≤ a for t ∈ [τ_a, τ_a + ε]}

and our aim is to show that

P(B ∈ Disc(F)) = 0

If we define X_t = B_{τ_a+t} − a then this is a new Brownian motion, and P(X_t ≤ 0 for all t ∈ [0, ε]) = 0 by the time-inverted law of the iterated logarithm.
We now check the inclusion for the discontinuity set, by proving the complementary statement. Choose δ > 0 and suppose that sup_{t ≤ F(f)−δ} f(t) = a − ε for some ε > 0. Then ||g − f||_∞ ≤ ε/2 implies F(g) ≥ F(f) − δ. In other words,

lim inf_{g→f} F(g) ≥ F(f)

which is lower semicontinuity, and this holds at every f.
Now take an f for which there exist s_N ↓ F(f) with f(s_N) > a; we show that F is continuous at such an f. If f(s_N) = a + ε then take g with ||g − f||_∞ < ε/2; this g has g(s_N) > a, so F(g) ≤ s_N. Hence lim sup_{g→f} F(g) ≤ F(f), and combined with the lower semicontinuity, F is continuous at f.
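A sketch checking the guess through one tail probability: by the reflection principle, P(τ_a > 1) = P(max_{t≤1} B_t < a) = 2Φ(a) − 1, which for a = 1 equals erf(1/√2) ≈ 0.6827 (Python with numpy; N and the sample size are just illustrative):

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(7)
    a, N, n_samples = 1.0, 1000, 20000
    S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_samples, N)), axis=1)
    hit = (S >= a * np.sqrt(N)).any(axis=1)    # is T_{a sqrt(N)} <= N ?
    print("empirical P(T_{a sqrt(N)}/N > 1):", 1 - hit.mean())
    print("target P(tau_a > 1) = 2*Phi(a)-1 =", erf(a / sqrt(2)))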

Theorem 4.12 The following are equivalent for XN , X over (E, d).

1. E(F (XN )) → E(F (X)) for all bounded continuous functions F : E → R.

2. E(F (XN )) → E(F (X)) for all bounded uniformly continuous functions F : E → R.

3. lim supN →∞ P(XN ∈ A) ≤ P(X ∈ A) for all closed A.

4. lim inf N →∞ P(XN ∈ A) ≥ P(X ∈ A) for all open A.

5. P(X_N ∈ A) → P(X ∈ A) for all A such that P(X ∈ ∂A) = 0, where ∂A = Ā \ A^o.

6. E(F(X_N)) → E(F(X)) for all bounded measurable F such that P(X ∈ Disc(F)) = 0.

Proof 1 ⟹ 2 is immediate.
2 ⟹ 3: Define F_ε(x) = (1 − d(x, A)/ε)^+; this is uniformly continuous and, for closed A, decreases to χ_A as ε ↓ 0. Then

P(X ∈ A) ← E(F_ε(X)) = lim_{N→∞} E(F_ε(X_N)) ≥ lim sup_{N→∞} P(X_N ∈ A)

3 ⟹ 4:

P(X ∈ A) = 1 − P(X ∈ A^C)

and if A is open then A^C is closed, so 3 applies.
4 ⟹ 5:

P(X ∈ A^o) ≤ lim inf P(X_N ∈ A^o) ≤ lim inf P(X_N ∈ A) ≤ lim sup P(X_N ∈ A) ≤ lim sup P(X_N ∈ Ā) ≤ P(X ∈ Ā)

and since

P(X ∈ Ā) − P(X ∈ A^o) = P(X ∈ ∂A) = 0

all the terms above are equal, so P(X_N ∈ A) → P(X ∈ A).


5 ⟹ 6: We observe that 5 is the special case F = χ_A, for which Disc(χ_A) = ∂A.
Now choose α_1 < α_2 < ... < α_N with |α_{i+1} − α_i| < ε, covering the range of the bounded F. We approximate F by

F_ε(x) = ∑_i α_i χ_{F^{-1}(α_i, α_{i+1}]}(x)

so that |F_ε(x) − F(x)| ≤ ε. We will check E(F_ε(X_N)) → E(F_ε(X)), and this is enough.
We then apply part 5 with A = F^{-1}(α_i, α_{i+1}]. We claim that

Disc(χ_A) ⊂ Disc(F) ∪ {x : F(x) = α_i} ∪ {x : F(x) = α_{i+1}}

and we prove the complementary statement: pick x ∉ Disc(F) with F(x) ≠ α_i and F(x) ≠ α_{i+1}, and take x_N → x; then as F is continuous at x we have F(x_N) → F(x), and χ(F(x_N) ∈ (α_i, α_{i+1}]) → χ(F(x) ∈ (α_i, α_{i+1}]), because χ((α_i, α_{i+1}]) is discontinuous only at α_i and α_{i+1}.
To apply 5 we also need to know that P(X ∈ {x : F(x) = α_i}) = 0. To this end let p_α = P(F(X) = α). Now {α : p_α ≥ 1/N} has at most N elements, and so p_α > 0 for only countably many α. Let Q = {α : p_α > 0}; we need to choose α_1, ..., α_N not lying in Q, and as Q is countable this is easy.
6 =⇒ 1 is immediate.
Q.E.D.

5 Up Periscope
Suppose that a box has N balls, half of which are black and the other half white. We draw at random until the box is empty. Let S_K denote the number of blacks left after draw K minus the number of whites left after draw K. Let X_t^{(N)} = S_{Nt}/√N; then we expect:

Theorem 5.1 X^{(N)} →^D Brownian bridge.
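A simulation sketch supporting the guess (Python with numpy; S_K is computed here as blacks drawn minus whites drawn, which is the negative of the "left in the box" count and has the same law). The Brownian bridge has variance t(1 − t) at time t, and indeed Var(S_{Nt})/N = t(1 − t) · N/(N − 1) by the hypergeometric variance formula:

    import numpy as np

    rng = np.random.default_rng(8)
    N, n_samples, t = 1000, 20000, 0.5
    balls = np.concatenate((np.ones(N // 2), -np.ones(N // 2)))   # +1 black, -1 white
    K = int(N * t)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        samples[i] = rng.permutation(balls)[:K].sum() / np.sqrt(N)   # X_t^{(N)}
    print("Var(X_t^{(N)}) at t = 0.5:", samples.var(), "  bridge variance t(1-t) = 0.25")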
We next consider a population model. Let S_N be the population size at time N. Assume that at each time step each individual independently has 2 or 0 offspring, each with probability one half.

Lemma 5.2

P(S_N > 0 | S_0 = 1) ∼ 2/N

Corollary 5.3

P(S_N > 0 | S_0 = N) ∼ 1 − (1 − 2/N)^N ∼ 1 − e^{−2}
N
If we instead choose S_0 = N and linearly interpolate to get X_t^{(N)} = S_{Nt}/N, we get

Theorem 5.4 X^{(N)} →^D X, where X solves Feller's equation:

dX/dt = √X dB/dt
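A simulation sketch of Corollary 5.3: with offspring 0 or 2, each with probability 1/2, the next generation is S_{n+1} = 2·Binomial(S_n, 1/2); started from S_0 = N, survival to time N should have probability close to 1 − e^{−2} ≈ 0.8647 (the values of N and the number of runs are illustrative, and there is some finite-N bias):

    import numpy as np

    rng = np.random.default_rng(9)
    N, n_runs = 200, 2000
    survived = 0
    for _ in range(n_runs):
        S = N
        for _ in range(N):
            if S == 0:
                break
            S = 2 * rng.binomial(S, 0.5)   # each individual leaves 0 or 2 offspring
        survived += S > 0
    print("empirical survival:", survived / n_runs, "  target 1 - e^{-2} ~", 1 - np.exp(-2))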


Such theorems are called diffusion approximations. One can split the proof into two parts:

1. Soft nonsense: show there exists a convergent subsequence X^{(N_l)}.

2. Characterise the limit points of X^{(N)}.

The first point is to do with compactness.

Theorem 5.5 (Kolmogorov) Suppose X^{(N)} is a continuous process and

E|X_t^{(N)} − X_s^{(N)}|^p ≤ C|t − s|^{1+ε}

for some p, ε > 0 and for all s, t ≤ 1 and N ≥ 1. Then X^{(N)} has a subsequence converging in distribution.

We can deduce this from a description of the compact sets K ⊂ C[0,1]: for example, by Arzelà–Ascoli,

{f : |f(t) − f(s)| ≤ C_1|t − s|^α for all s, t, and |f(0)| ≤ C_2}

is compact.
The second point is specific to each convergence; we look only at the population one. How do we characterise Feller's diffusion? We would want

X_{t+∆t} − X_t ≈ √(X_t) N(0, ∆t)

The key estimate is the following:

X_{(K+1)/N}^{(N)} − X_{K/N}^{(N)} = (S_{K+1} − S_K)/N

and the numerator S_{K+1} − S_K has conditional mean zero and variance S_K, which agrees with √(S_K) N(0,1), i.e. with the increment √(X_t) N(0, ∆t) for ∆t = 1/N.

