Contents
1 Brownian Sample Paths
1.1 Brownian Motion as a Gaussian Process
1.2 Growth rate of paths
1.3 Regularity
3 Brownian Martingales
4 Donsker's theorem
5 Up Periscope
These notes are based on the 2013 MA4F7 Brownian Motion course, taught by Roger
Tribe, typeset by Matthew Egginton.
No guarantee is given that they are accurate or applicable, but hopefully they will assist
your study.
Please report any errors, factual or typographical, to m.egginton@warwick.ac.uk
MA4F7 Brownian motion Lecture Notes Autumn 2012
The key aim is to show that scaled random walks converge to a limit called Brownian motion. In one dimension, P{t ↦ B_t is nowhere differentiable} = 1. We have E(B_t) = 0 and E(B_t²) = t, so B_t is of order √t and t ↦ B_t is not differentiable at 0; by shifting, the same holds at any fixed t.
We also have the arcsine law
P(∫_0^1 χ(B_s > 0) ds ∈ dx) = dx/(π√(x(1−x))).
P(x + B exits D in A) = U(x), where ΔU(x) = 0 in D and U = 1 on A, U = 0 on ∂D \ A. For an annulus with inner radius a and outer radius b in the plane,
U(x) = (log b − log|x|)/(log b − log a),
and this converges to 1 as b → ∞. Thus the probability that planar Brownian motion hits any ball is 1.
For random walks, P(x + walk exits in A) = U(x), where U(x) = ¼(U(x+e_1) + U(x−e_1) + U(x+e_2) + U(x−e_2)), which can be thought of as a discrete Laplace equation. Thus we have a nice equation for Brownian motion, but a not so nice one for random walks.
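The arcsine law above is easy to check numerically. The sketch below is an added illustration (not from the notes; sample sizes and grid are my choices): it simulates Gaussian random-walk approximations to Brownian motion and compares the empirical occupation fraction ∫_0^1 χ(B_s > 0) ds with the arcsine CDF F(x) = (2/π) arcsin √x.

```python
import numpy as np

def occupation_fractions(n_paths=4000, n_steps=1000, seed=0):
    """Fraction of time each approximate Brownian path spends above 0."""
    rng = np.random.default_rng(seed)
    steps = rng.standard_normal((n_paths, n_steps))  # unscaled increments; scaling cancels in the sign
    paths = np.cumsum(steps, axis=1)
    return (paths > 0).mean(axis=1)

def arcsine_cdf(x):
    """F(x) = (2/pi) arcsin(sqrt(x)), the arcsine distribution function."""
    return (2 / np.pi) * np.arcsin(np.sqrt(x))

frac = occupation_fractions()
```

The empirical distribution of `frac` is U-shaped: most paths spend nearly all their time on one side of 0, even though the mean fraction is 1/2.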
2. For 0 ≤ t_1 < t_2 < ... < t_n the increments B_{t_2} − B_{t_1}, ..., B_{t_n} − B_{t_{n−1}} are independent.
But does such a process even exist, and if it does, do the above properties characterise B? The answer to both is yes, and we will show these later.
We now define the terms used in the above definition, to avoid any confusion.
Definition 1.3 A stochastic process is a family of random variables (Xt , t ≥ 0) all defined
on Ω.
We do not worry about what Ω is; we are only interested in the law/distribution of Z, i.e. P(Z ∈ A) or E(f(Z)), where P(Z ∈ A) = P{ω : Z(ω) ∈ A}.
If we fix ω, the function t 7→ Bt (ω) is called the sample path for ω.
The first property above means that the evaluation t ↦ B_t(ω) is continuous, for almost all ω. Sadly some books say that P{ω : t ↦ B_t(ω) is continuous} = 1, but how do we know this set is measurable?
P(Z ∈ dz) = (1/√(2πσ²)) e^{−(z−µ)²/2σ²} dz
for σ² > 0, meaning: integrate both sides over a set A to get the probability of A. If σ = 0 then P(Z = µ) = 1.
3. In distribution: Z_k →_D Z means E(f(Z_k)) → E(f(Z)) for any continuous and bounded f.
Example 1.1 I = ∫_0^1 B_t dt is a Gaussian variable.
I = lim_{N→∞} (1/N)(B_{1/N} + B_{2/N} + ... + B_{N/N})
= lim_{N→∞} (1/N)((B_{N/N} − B_{(N−1)/N}) + 2(B_{(N−1)/N} − B_{(N−2)/N}) + ... + N(B_{1/N} − B_0)),
and each approximant is a linear combination of independent Gaussian increments, hence Gaussian.
1.1.1 Transforms
Definition 1.6 We define the Fourier transform, or characteristic function, to be
φ_Z(θ) = E(e^{iθZ}).
For example, if Z ∼ N(µ, σ²) then φ_Z(θ) = e^{iθµ} e^{−σ²θ²/2}.
Proposition 1.7 (More facts about Gaussians) 4. φZ (θ) determines the law of
Z, i.e. if φZ (θ) = φY (θ) then P (Z ∈ A) = P (Y ∈ A).
6. φ_{Z_k}(θ) → φ_Z(θ) for all θ if and only if Z_k →_D Z.
These all hold true for Z = (Z1 , ..., Zd ) with φZ (θ1 , ..., θd ) = E(eiθ1 Z1 +...+iθd Zd )
Definition 1.8 Z = (Z_1, ..., Z_d) ∈ R^d is Gaussian if Σ_{k=1}^d λ_k Z_k is Gaussian in R for all λ_1, ..., λ_d. (X_t, t ≥ 0) is a Gaussian process if (X_{t_1}, ..., X_{t_N}) is a Gaussian vector on R^N for any t_1, ..., t_N and N ≥ 1.
Check that Brownian motion is a Gaussian process, i.e. is (B_{t_1}, ..., B_{t_N}) a Gaussian vector, or equivalently is Σ λ_k B_{t_k} Gaussian on R? We can massage this sum into µ_1(B_{t_1} − B_0) + ... + µ_N(B_{t_N} − B_{t_{N−1}}), a combination of independent Gaussian increments, and so it is Gaussian. As an exercise, check this for Brownian bridges and Ornstein–Uhlenbeck processes.
Proposition 1.9 (Even more facts about Gaussians) 7. The law of the Gaussian Z = (Z_1, ..., Z_d) is determined by E(Z_k) and E(Z_jZ_k) for j, k = 1, ..., d.
8. Suppose Z = (Z_1, ..., Z_d) is Gaussian. Then Z_1, ..., Z_d are independent if and only if
E(Z_jZ_k) = E(Z_j)E(Z_k)
for all j ≠ k.
For 7, it is enough to calculate φ_Z(θ) and see that it is determined by these moments. For 8, one need only check that the transforms factor.
Example 1.2 Let (B_t) be a Brownian motion on R. Then E(B_t) = 0 and, for 0 ≤ s < t,
E(B_sB_t) = E((B_t − B_s)(B_s − B_0) + (B_s − B_0)²) = E(B_t − B_s)E(B_s − B_0) + E(B_s − B_0)² = s,
and similarly it equals t if 0 ≤ t < s; i.e. E(B_sB_t) = s ∧ t.
3 of 40
MA4F7 Brownian motion Lecture Notes Autumn 2012
Lemma 1.11 (Scaling Lemma) Suppose that B is a Brownian motion on R and c > 0. Define X_t = (1/c) B_{c²t} for t ≥ 0. Then X is a Brownian motion on R.
Proof Clearly X has continuous paths and E(X_t) = 0. Now
E(X_sX_t) = E((1/c²) B_{c²s} B_{c²t}) = (1/c²)(c²s ∧ c²t) = s ∧ t,
and also
Σ_{k=1}^N λ_k X_{t_k} = Σ_{k=1}^N (λ_k/c) B_{c²t_k},
and this is Gaussian since B is a Gaussian process. Q.E.D.
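The scaling lemma is easy to sanity-check by simulation. The sketch below is my own illustration (the value c = 3 and all sample sizes are arbitrary): it verifies empirically that X_t = B_{c²t}/c has E(X_t²) ≈ t and E(X_sX_t) ≈ s ∧ t.

```python
import numpy as np

def scaled_paths(c=3.0, n_paths=20000, n_steps=400, seed=1):
    """Simulate B on [0, c^2] and return X_t = B_{c^2 t}/c on a grid of t in [0,1]."""
    rng = np.random.default_rng(seed)
    dt = c**2 / n_steps                              # time step for B
    incs = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    B = np.cumsum(incs, axis=1)                      # B at times c^2 * k/n_steps
    return B / c                                     # X at times t = k/n_steps

X = scaled_paths()
var_half = X[:, 199].var()                      # column 199 is t = 0.5; Var should be ~0.5
cov_quarter_half = (X[:, 99] * X[:, 199]).mean()  # s=0.25, t=0.5; should be ~0.25 = s ∧ t
```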
For the time inversion X_t = tB_{1/t} (with X_0 = 0) the only issue is continuity at 0, and so
P[X_t → 0 as t → 0] = lim_{N→∞} lim_{M→∞} lim_{k→∞} P{|X_{q_i}| < 1/N for i = 1, ..., k} = P[B̂_t → 0 as t → 0] = 1,
where q_1, q_2, ... lists Q ∩ (0, 1/M] and B̂ is a Brownian motion with the same finite-dimensional distributions as X. Q.E.D.
We used in the above that if A_1 ⊇ A_2 ⊇ ... then P(∩A_N) = lim_{N→∞} P(A_N).
Corollary 1.13 B_t/t → 0 as t → ∞.
In fact B_t/t^α → 0 for α > 1/2, but lim sup_{t→∞} B_t/√t = ∞ and lim inf_{t→∞} B_t/√t = −∞, and so B_t visits every x ∈ R infinitely many times.
This brings us nicely into the next subsection.
and
lim inf_{t→∞} B_t/ψ(t) = −1,
where ψ(t) = √(2t ln(ln t)).
Here lim sup_{t→∞} X_t = lim_{t→∞} sup_{s≥t} X_s, and lim sup X_t ≤ 1 means that for all ε > 0, sup_{s≥t} X_s ≤ 1 + ε for large t, which is the same as: for all ε > 0, X_t is eventually less than 1 + ε. Similarly, lim sup X_t ≥ 1 if and only if for all ε > 0 we have sup_{s≥t} X_s ≥ 1 − ε for large t, which is the same as: for all ε > 0 there exists a sequence s_N → ∞ with X_{s_N} ≥ 1 − ε.
It is on an example sheet that if X_t = e^{−t} B_{e^{2t}} then the Law of the Iterated Logarithm can be converted to get that
lim sup_{t→∞} X_t/√(2 ln t) = 1.
We can also compare this to Z_N iid N(0,1), for which lim sup_{N→∞} Z_N/√(2 ln N) = 1.
Proof We first show that
P(lim sup B_t/ψ(t) ≤ 1) = 1,
and this is the case if, for each ε > 0, eventually B_t ≤ (1+ε)ψ(t).
The strategy now is to control B along a grid of times t_N = θ^N for θ > 1. Using the Gaussian tail bound P(N(0,1) > a) ≤ (1/(a√(2π))) e^{−a²/2} with a = (1+ε)ψ(θ^N)/θ^{N/2} = (1+ε)√(2 ln(N ln θ)),
P(B_{θ^N} > (1+ε)ψ(θ^N)) ≤ (1/√(2π)) (1/((1+ε)√(2 ln(N ln θ)))) e^{−(1+ε)² ln(N ln θ)} ≤ C(θ,ε) N^{−(1+ε)²}.
Lemma 1.16 (Borel–Cantelli part 1) If Σ_{N=1}^∞ P(A_N) < ∞ then P(A_N occurs for infinitely many N) = 0.
We will also use the reflection principle:
P(sup_{s≤t} B_s ≥ a) = 2P(B_t ≥ a) for a ≥ 0.
Writing Ω_0 = {sup_{s≤t} B_s ≥ a}, the idea is
P(Ω_0) = P(Ω_0 ∩ {B_t > a}) + P(Ω_0 ∩ {B_t = a}) + P(Ω_0 ∩ {B_t < a}) = 2P(Ω_0 ∩ {B_t > a}) = 2P{B_t > a},
reflecting the path after it first hits a. We will carefully justify this later by examining the hitting time T_a = inf{t : B_t = a}: we consider (B_{T_a+t} − a, t ≥ 0) and check that this is still a Brownian motion. Granting this,
P(T_a ≤ t) = P(sup_{s≤t} B_s ≥ a) = 2P(B_t/√t ≥ a/√t) = 2∫_{a/√t}^∞ (1/√(2π)) e^{−z²/2} dz,
and also
P(T_a ∈ dt)/dt = (d/dt)P(T_a ≤ t) = 2 (1/√(2π)) e^{−a²/2t} (a/2) t^{−3/2} = (a/√(2πt³)) e^{−a²/2t} =: φ(t).
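The reflection identity P(sup_{s≤t} B_s ≥ a) = 2P(B_t ≥ a) can be checked by simulation. The sketch below is an added illustration (parameters are mine); note the discrete-time running maximum is slightly biased low, since crossings between grid points are missed.

```python
import numpy as np

def reflection_check(a=1.0, n_paths=20000, n_steps=2000, seed=2):
    """Compare P(sup_{s<=1} B_s >= a) with P(B_1 >= a) on Euler paths."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)          # current position
    m = np.zeros(n_paths)          # running maximum
    for _ in range(n_steps):
        x += rng.standard_normal(n_paths) * np.sqrt(1.0 / n_steps)
        np.maximum(m, x, out=m)
    return (m >= a).mean(), (x >= a).mean()

p_sup, p_end = reflection_check()
# theory: p_sup = 2 * p_end, and 2 P(B_1 >= 1) = 2(1 - Phi(1)) ≈ 0.317
```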
By Borel–Cantelli, for all large N
sup_{s≤θ^N} B_s ≤ (1+ε)ψ(θ^N)
(using the reflection principle to pass from B_{θ^N} to the running supremum, at the cost of a factor 2 in the summable bound). Then for t ∈ [θ^N, θ^{N+1}],
B_t/ψ(t) ≤ (1+ε)ψ(θ^{N+1})/ψ(θ^N) = (1+ε)√θ √(ln((N+1) ln θ)/ln(N ln θ)) → (1+ε)√θ,
and since θ > 1 and ε > 0 were arbitrary, lim sup B_t/ψ(t) ≤ 1.
Q.E.D.
For the lower bound we would like to use A_N = {B_{θ^N} > (1−ε)ψ(θ^N)}, but these are not independent — though nearly so for large N. We finalise by correcting this: define Â_N = {B_{θ^N} − B_{θ^{N−1}} > (1−ε)√(1−θ^{−1}) ψ(θ^N)}, and these are independent. Then
P(Â_N) = P(A_N) ≥ C(θ,ε) N^{−(1−ε)²},
which is not summable, so by Borel–Cantelli part 2 infinitely many Â_N occur. On Â_N, using B_{θ^{N−1}} ≥ −(1+ε)ψ(θ^{N−1}) for large N (the upper bound applied to −B),
B_{θ^N}/ψ(θ^N) ≥ (1−ε)√(1−θ^{−1}) − (1+ε)ψ(θ^{N−1})/ψ(θ^N),
and
ψ(θ^{N−1})/ψ(θ^N) = √(θ^{−1}) √(ln((N−1) ln θ)/ln(N ln θ)) → √(θ^{−1}),
and so
lim sup_{N→∞} B_{θ^N}/ψ(θ^N) ≥ (1−ε)√(1−θ^{−1}) − (1+ε)√(θ^{−1}),
and taking θ large and ε small gives the result. Q.E.D.
We make some observations:
1. Can we do better? P(B_t ≤ h_t for all large t) ∈ {0,1} for deterministic h_t. This is called a 0-1 law, and we see it in week 4. For h_t = C√(t ln ln t) the probability is 0 if C < √2 and 1 if C > √2. There is an integral test for general h_t.
2. Random walk analogue. Suppose that X_1, X_2, ... are iid with E(X_k) = 0, E(X_k²) = 1, and S_N = X_1 + ... + X_N. Then
lim sup_{N→∞} S_N/√(2N ln ln N) = 1.
This was proved in 1913, but the proof was long; it was proved in a shorter manner using the Brownian motion result in 1941.
3. X_t = tB_{1/t} is still a Brownian motion, and so lim sup_{t→∞} tB_{1/t}/ψ(t) = 1, or alternatively
lim sup_{s→0} B_s/√(2s ln ln(1/s)) = 1.
4. P(B is differentiable at 0) = 0, and if we fix T_0 > 0 and define X_t = B_{T_0+t} then this is still a Brownian motion. Thus P(B is differentiable at T_0) = 0 for each fixed T_0.
5. Suppose U ∼ U[0,1] is a uniform r.v. Define X_t to be constant up until U and then monotone increasing up until 1. Then P(X is differentiable at t_0) = 1 for each fixed t_0, yet X is not differentiable at all t. So fixed-time statements alone do not let us conclude that Brownian motion is nowhere differentiable.
6. Corollary 1.20 Leb{t : B is differentiable at t} = 0 almost surely.
Proof
E(∫_0^∞ χ(B is differentiable at t) dt) = ∫_0^∞ E(χ(B is differentiable at t)) dt = 0
by Fubini and observation 4. Q.E.D.
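Numerically, non-differentiability shows up as difference quotients that blow up: |B_{t+h} − B_t|/h is distributed like |N(0,1)|/√h, with mean √(2/(πh)) → ∞ as h → 0. A quick check (my own, not from the notes):

```python
import numpy as np

def mean_diff_quotient(h, n_samples=100000, seed=3):
    """Estimate E|B_{t+h} - B_t| / h; the exact value is sqrt(2/(pi*h))."""
    rng = np.random.default_rng(seed)
    inc = rng.standard_normal(n_samples) * np.sqrt(h)  # one Brownian increment of length h
    return np.abs(inc).mean() / h

q2, q4, q6 = (mean_diff_quotient(h) for h in (1e-2, 1e-4, 1e-6))
# each factor-100 shrink of h multiplies the mean quotient by 10
```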
1.3 Regularity
Definition 1.21 A function f : [0,∞) → R is α-Hölder continuous, for α ∈ (0,1], at t if there exist M, δ > 0 such that
|f_{t+s} − f_t| ≤ M|s|^α for |s| ≤ δ.
The reason for this definition is as follows. A differentiable function must lie in some cone: since
(f(t+s) − f(t))/s → a,
we have (f(t+s) − f(t))/s ∈ (a−ε, a+ε) for small s, and thus |f(t+s) − f(t)| ≤ (|a|+ε)|s| for small s, so the α = 1 (Lipschitz) bound holds with M = |a| + ε.
Proposition 1.23 Define Ω_{M,δ} = {for some t ∈ [0,1], |B_{t+s} − B_t| ≤ M|s| for all |s| ≤ δ}. Then P(Ω_{M,δ}) = 0, and thus P(∪_{M=1}^∞ ∪_{N=1}^∞ Ω_{M,1/N}) = 0.
Proof This argument hasn't been bettered since 1931. Suppose there exists t ∈ [K/N, (K+1)/N] at which B is Lipschitz, i.e. |B_{t+s} − B_t| ≤ M|s| for |s| ≤ δ. If (K+1)/N, (K+2)/N ∈ [t, t+δ] then
|B_{(K+1)/N} − B_t| ≤ M/N and |B_{(K+2)/N} − B_t| ≤ 2M/N,
and so by the triangle inequality we get
|B_{(K+1)/N} − B_{(K+2)/N}| ≤ 3M/N,
and then we have
P(Ω_{M,δ}) ≤ P(for some K = 1, ..., N−1 : |B_{(K+1)/N} − B_{(K+2)/N}| ≤ 3M/N).   (1.1)
We first calculate the probability of the event on the right hand side, for a single K:
P(|B_{(K+1)/N} − B_{(K+2)/N}| ≤ 3M/N) = P(|N(0,1/N)| ≤ 3M/N) = P(|N(0,1)| ≤ 3M/√N)
= ∫_{−3M/√N}^{3M/√N} (1/√(2π)) e^{−z²/2} dz ≤ 6M/√(2πN),
but this is not useful: summing over the N values of K gives a bound of order √N, which does not tend to zero. We modify this by taking more points. We already know that
|B_{(K+2)/N} − B_{(K+1)/N}| ≤ 3M/N,
but we also have that
|B_{(K+3)/N} − B_{(K+2)/N}| ≤ 5M/N and |B_{(K+4)/N} − B_{(K+3)/N}| ≤ 7M/N,
and then we say that
P(Ω_{M,δ}) ≤ P(for some K = 1, ..., N−1 : |B_{(K+2)/N} − B_{(K+1)/N}| ≤ 3M/N and |B_{(K+3)/N} − B_{(K+2)/N}| ≤ 5M/N and |B_{(K+4)/N} − B_{(K+3)/N}| ≤ 7M/N).
The three increments are independent, so each K contributes at most (CM/√N)³, and the sum over K is at most C'N^{−1/2} → 0.
Theorem 1.24 (Kolmogorov's continuity) Suppose (X_t, t ∈ [0,1]) has continuous paths and satisfies
E[|X_t − X_s|^p] ≤ C|t−s|^{1+γ}
for some γ, p > 0. Then X has α-Hölder paths for α ∈ (0, γ/p).
Since E|B_t − B_s|^p = C_p|t−s|^{p/2} for even p, Brownian motion has α-Hölder continuous paths of orders
α < (p/2 − 1)/p = 1/2 − 1/p,
and so, letting p → ∞, Hölder with any α < 1/2, but not α = 1/2.
Lemma 1.25 (Markov Inequality) Suppose that Z ≥ 0 is a non-negative random variable. Then
P(Z ≥ a) ≤ (1/a) E(Z).
Proof
E(Z) = E(Zχ_{Z<a}) + E(Zχ_{Z≥a}) ≥ E(Zχ_{Z≥a}) ≥ aP(Z ≥ a).
Q.E.D.
We use this as follows:
P(|X_t − X_s| ≥ a) = P(|X_t − X_s|^p ≥ a^p) ≤ (1/a^p) E(|X_t − X_s|^p) ≤ C|t−s|^{1+γ}/a^p.
Suppose we have a grid of mesh 2^{−N}. Then we define
A_N = ∪_{K=1}^{2^N} {|X_{K/2^N} − X_{(K−1)/2^N}| > 2^{−Nα}}.
We then estimate this:
P(A_N) ≤ Σ_{K=1}^{2^N} P{|X_{K/2^N} − X_{(K−1)/2^N}| > 2^{−Nα}} ≤ C 2^N 2^{−N(1+γ)} 2^{Nαp} = C 2^{−N(γ−αp)},
and thus we need γ > αp, i.e. α < γ/p. This is the key idea of the proof. We now have Σ_{N=1}^∞ P(A_N) < ∞, and so by Borel–Cantelli I only finitely many A_N occur, i.e. there exists N_0(ω) such that
|X_{K/2^N} − X_{(K−1)/2^N}| ≤ 2^{−Nα} for all K = 1, ..., 2^N and all N > N_0.
We also know that P(N_0 < ∞) = 1. We now fix ω such that N_0(ω) < ∞. We can control X on D = {dyadic rationals, i.e. of the form K/2^M}. Fix t, s ∈ D with
2^{−M} ≤ |t − s| ≤ 2^{−(M−1)},
where M ≥ N_0. We have two cases: either [s,t] straddles two grid points K/2^M and (K+1)/2^M, or it straddles only one point K/2^M. We consider the first case, and leave the second to the reader. We can write
t = (K+1)/2^M + ε_1/2^{M+1} + ... + ε_L/2^{M+L} with each ε_j ∈ {0,1},
and then, summing the dyadic increments,
|X_t − X_{(K+1)/2^M}| ≤ 2^{−(M+1)α} + ... + 2^{−(M+L)α} ≤ 2^{−(M+1)α}/(1 − 2^{−α}),
and similarly
|X_s − X_{K/2^M}| ≤ 2^{−(M+1)α}/(1 − 2^{−α}),
and thus, adding the grid increment |X_{(K+1)/2^M} − X_{K/2^M}| ≤ 2^{−Mα},
|X_t − X_s| ≤ C_α 2^{−Mα} ≤ C_α |t−s|^α.
Definition 2.1 Two σ-fields F, G on Ω are called independent if P(A ∩ B) = P(A)P(B) for all A ∈ F and B ∈ G.
Two variables X and Y are called independent if P(X ∈ C_1, Y ∈ C_2) = P(X ∈ C_1)P(Y ∈ C_2) for all measurable C_1, C_2, or equivalently
E(h(X)g(Y)) = E(h(X))E(g(Y))
for all measurable bounded functions h, g.
These are equivalent as follows: taking h = χ_{C_1} and g = χ_{C_2}, the two statements are the same; we then take simple functions and limits of simple functions to get all such h, g, using the standard machine.
Definition 2.2 σ(X), the σ-field generated by X, is defined as
{X^{−1}(C) : C measurable}.
For example, sup_{t≤1} B_t is a random variable: we can write sup_{t≤1} B_t = sup_{q∈[0,1]∩Q} B_q = lim_{N→∞} max{B_{q_1}, ..., B_{q_N}}, where q_1, q_2, ... lists [0,1] ∩ Q. The limit of measurable functions is measurable; the map (x_1, ..., x_N) ↦ max{x_1, ..., x_N} is continuous; and ω ↦ (B_{q_1}(ω), ..., B_{q_N}(ω)) is measurable, since if C = C_1 × ... × C_N then {ω : (B_{q_1}(ω), ..., B_{q_N}(ω)) ∈ C} is measurable.
Lemma 2.6 If F_0 and G_0 are two π-systems with P(A ∩ B) = P(A)P(B) for all A ∈ F_0 and B ∈ G_0, then P(A ∩ B) = P(A)P(B) holds for σ(F_0) and σ(G_0); i.e. independence of π-systems gives independence of the generated σ-fields.
For us, σ(X_t, t ≥ 0) = σ{X_t^{−1}(C) : t ≥ 0, C ∈ B(R)}, and this is generated by the π-system
{X_{t_1}^{−1}(C_1) ∩ ... ∩ X_{t_N}^{−1}(C_N) : t_1, ..., t_N ≥ 0, C_K ∈ B(R)}.
Proof (of theorem 2.4) By the above lemma, we need only check that (B_{t_1}, ..., B_{t_N}) is independent of the increments after time T. Writing both sides in terms of the independent increments of B (with suitable coefficients λ̂_k, µ̂_k),
E[e^{iΣλ_kB_{t_k}} e^{iΣµ_k(B_{T+t_k}−B_T)}] = E[e^{iΣλ̂_k(B_{t_k}−B_{t_{k−1}})}] E[e^{iΣµ̂_k(B_{T+t_k}−B_{T+t_{k−1}})}] = E[e^{iΣλ_kB_{t_k}}] E[e^{iΣµ_k(B_{T+t_k}−B_T)}].
Definition 2.8 We define the Brownian filtration by F_t^B := σ(B_s : s ≤ t), and we also define F_{t+}^B = ∩_{s>t} F_s^B; the germ field is F_{0+}^B.
Note that if dB/dt(t) existed then it would be F_{t+}^B measurable. Likewise lim sup_{t→0} B_t/h(t) is in F_{0+}^B, since this limsup is F_δ^B measurable for any δ > 0.
Proof (of theorem 2.9) Take A ∈ F_{t+}^B and B ∈ σ(X_s, s ≥ 0). The π-system lemma says we can take B determined by finitely many coordinates X_{s_1}, ..., X_{s_m}, i.e. it is enough to check that
E[e^{−Σλ_kX_{s_k}} e^{iθχ_A}]
factorises. Replacing each X_{s_k} by the increment X_{s_k}^ε computed after time t + ε,
E[e^{−Σλ_kX_{s_k}^ε} e^{iθχ_A}] = E[e^{−Σλ_kX_{s_k}^ε}] E[e^{iθχ_A}]
by the Markov property version 1 (A ∈ F_{t+}^B ⊆ F_{t+ε}^B). Letting ε → 0 and then using the DCT we get the required result. Q.E.D.
and then
P((1/√2) B_t, t ∈ [0,1/2], traverses the tube (1/√2)A) = p_0;
then let
A_N = {2^{−N/2} B_t, t ∈ [0, 2^{−N}], traverses 2^{−N/2}A},
and then P(A_N) = p_0 by scaling. Note that A_N ∈ F_{2^{−N}}^B, and let
Ω_0 = ∩_{M=M_0}^∞ ∪_{N=M}^∞ A_N = [A_N i.o.] ∈ F_{2^{−M_0}}^B for every M_0,
and so Ω_0 ∈ F_{0+}^B. Now P(Ω_0) ≥ lim sup_N P(A_N) = p_0 > 0, so by the 0-1 law P(Ω_0) = 1.
Our aim is to come up with something similar in the continuous case. We intuitively
think of the following as the probability that we start at x and end up in dy in time t.
Definition 2.12 A Markov transition kernel is a function p : (0,∞) × R^d × B(R^d) → [0,1] such that
1. A ↦ p_t(x, A) is a probability measure;
2. x ↦ p_t(x, A) is measurable.
Example 2.3 1. Brownian motion, with p_t(x, dy) = (1/√(2πt)) e^{−(x−y)²/2t} dy.
2. X_t = B_t + x + ct.
5. Ornstein–Uhlenbeck processes.
We check that (B_t^x, t ≥ 0) is a Markov process started at x with kernel p_t(x, dy) = q_t(y−x) dy, where q_t(z) = (1/√(2πt)) e^{−z²/2t}. The short proof is as follows:
P(B_{t_1}^x ∈ dx_1, ..., B_{t_N}^x ∈ dx_N) = P(B_{t_1}^x ∈ dx_1, B_{t_2}^x − B_{t_1}^x ∈ d(x_2 − x_1), ..., B_{t_N}^x − B_{t_{N−1}}^x ∈ d(x_N − x_{N−1}))
= q_{t_1}(x_1 − x) dx_1 q_{t_2−t_1}(x_2 − x_1) dx_2 ... q_{t_N−t_{N−1}}(x_N − x_{N−1}) dx_N.
Equivalently, E(f(B_{t_1}^x, ..., B_{t_N}^x)) = E(g(B_{t_1}^x − x, B_{t_2}^x − B_{t_1}^x, ..., B_{t_N}^x − B_{t_{N−1}}^x))
This is shorter than the previous definition, and implies it by an induction argument, which we will do.
Recall the conditional expectation Y = E(X|G): it is characterised by the orthogonality
(X − Y, Z) = 0
for all Z ∈ L²(G). It is enough, though, to check this for Z = χ_A with A ∈ G, by the standard machine of measure theory. The orthogonality then reads
∫_A X dP = ∫_A Y dP.   (2.1)
Proposition 2.16 For X ∈ L²(F) there is a unique Y ∈ L²(G) satisfying (2.1). This can be improved to X ∈ L¹(F) or to X ≥ 0.
3. E(B_t²|F_s^B) = E(B_s² + 2B_s(B_t − B_s) + (B_t − B_s)²|F_s^B) = B_s² + 2B_sE(B_t − B_s) + E(B_t − B_s)² = B_s² + t − s.
(For jointly Gaussian variables one has E(X|σ(Y)) = αY for a suitable constant α.)
Suppose now that X_t = |B_t^x|. Then F_s^X ⊂ F_s^B, and we guess that the transition kernel is given by
p_t(x, dy) = (q_t(y − x) + q_t(y + x)) dy, x, y ≥ 0,
since B can be at dy or −dy at time t. We consider P(X_t ∈ dy|F_s^X) = P(|B_t^x| ∈ dy|F_s^X) and compute it using the tower property.
However there is a warning: is X_t = f(B_t^x) still Markov? The answer is usually not; in the above case we were just lucky. For example, consider radial Brownian motion, X_t = √(B_1(t)² + ... + B_d(t)²), for which we want to show that
E(f(X_{t_1}, ..., X_{t_N})) = ∫ f(x_1, ..., x_N) p_{t_1}(x, dx_1) p_{t_2−t_1}(x_1, dx_2) ... p_{t_N−t_{N−1}}(x_{N−1}, dx_N).
The measure theory machine says that it is enough to check this for f(x_1, ..., x_n) = Π_{k=1}^n φ_k(x_k).
We show this by induction:
E(Π_{k=1}^n φ_k(X_{t_k})) = E(E(Π_{k=1}^n φ_k(X_{t_k}) | F_{t_{n−1}}^X))
= E(Π_{k=1}^{n−1} φ_k(X_{t_k}) E(φ_n(X_{t_n}) | F_{t_{n−1}}^X))
= E(Π_{k=1}^{n−1} φ_k(X_{t_k}) ∫ φ_n(x_n) p_{t_n−t_{n−1}}(X_{t_{n−1}}, dx_n)).
The key example is the filtration (F_t^B, t ≥ 0) and T_a = inf{t : B_t = a}. We give two justifications below as to why this is a stopping time.
Theorem 2.21 Let B be a Brownian motion with filtration (F_t^B), and let T be an F_t^B stopping time with T < ∞ a.s. Define X_s = B_{T+s} − B_T. Then X is a Brownian motion, independent of F_T^B, where
F_T = {A : A ∩ {T ≤ t} ∈ F_t for all t ≥ 0}.
We need to check that FT is indeed a σ-field, that if S ≤ T then FS ⊆ FT for two stopping
times and that T is FT measurable.
To show the first of these note that ∅, Ω are clearly in FT . Then if A ∈ FT
Ac ∩ {T ≤ t} = {T ≤ t} \ (A ∩ {T ≤ t}) ∈ Ft
To show the last note that it is enough to check that {T ≤ s} ∈ FT for all s ∈ R.
Then
{T ≤ s} ∩ {T ≤ t} = {T ≤ min(s, t)} ∈ Fmin(s,t) ⊆ Ft
We now justify rigorously the part of the reflection principle that was somewhat hand-wavy before.
Set T_N = j/N if T ∈ [(j−1)/N, j/N). Then the above implies that B_{T_N+t} − B_{T_N} is a Brownian motion and is independent of F_{T_N}^B ⊇ F_T^B. If we let N → ∞ then considering characteristic functions gives, for A ∈ F_T^B,
E(e^{iΣθ_k(B_{T_N+s_k}−B_{T_N})} χ_A) = E(e^{iΣθ_k(B_{T_N+s_k}−B_{T_N})}) P(A) → E(e^{iΣθ_k(B_{T+s_k}−B_T)}) P(A),
while the left hand side converges to E(e^{iΣθ_k(B_{T+s_k}−B_T)} χ_A) by path continuity, giving the independence.
3. Z is uncountable.
Lemma 2.25 Suppose that A ⊂ R is closed, nonempty and has no isolated points. Then A is uncountable.
Proof Pick t_0 < t_1 in A and choose disjoint balls B_0 = B(t_0, ε_0) and B_1 = B(t_1, ε_1). Then choose points t_{00}, t_{01} inside B_0 ∩ A and t_{10}, t_{11} inside B_1 ∩ A, and again choose disjoint balls around these points, with radii shrinking to zero. Continue this process.
For each infinite binary string a = a_1a_2... we have a chain B_{a_1} ⊇ B_{a_1a_2} ⊇ ... of balls, and so there exists a unique point t_a in the infinite intersection. A is closed, so t_a ∈ A. If a ≠ b then t_a ≠ t_b, so the set is uncountable. Q.E.D.
P(L ∈ dt) = dt/(π√(t(1−t)))
P(a + X does not hit the origin by time 1−t) = P(B does not hit a by time 1−t)
= 1 − P(B does hit a by time 1−t)
= 1 − 2P(B_{1−t} > a)
= 1 − 2P(N(0, 1−t) > a)
= 1 − 2P(N(0,1) > a/√(1−t))
= 1 − 2∫_{a/√(1−t)}^∞ (1/√(2π)) e^{−z²/2} dz
=: 1 − 2Φ(a/√(1−t))
and so
P(T ≤ t) = E((1 − 2Φ(a/√(1−t)))|_{a=B_t})
= E(1 − 2Φ(√t |N(0,1)|/√(1−t)))
= ∫_{−∞}^∞ (1 − 2Φ(√t |x|/√(1−t))) (1/√(2π)) e^{−x²/2} dx
Then this would give that the last zero for X is equal to the time of the maximal value for B, and so one would expect the first two arcsine laws to be the same.
Corollary 2.27 T =_D M.
Remark There is almost a random walk analogue. If we do the same thing, we get a reflected random walk, but it is "sticky" at the origin. We expect the set of sticky times to be small though, because
∫_0^t χ(B_s = 0) ds = 0 almost surely.
Proof In order to prove Lévy's theorem, it is left to the reader to check that X is a Markov process with the correct transition kernel as given before. Then one can find P(X_{t_1} ∈ A_1, ..., X_{t_N} ∈ A_N). Call Y_r = X_{s+r} − X_s; this is a new Brownian motion independent of F_s^B. We then have two possibilities: either (I) the maximum is attained after s, or (II) before. We then have
I = ∫_a^∞ P(S_{t−s}^Y ∈ dz, B_{t−s}^Y ∈ dz − y),  II = P(S_{t−s}^Y ≤ a, B_{t−s}^Y ∈ a − dy).
Q.E.D.
3 Brownian Martingales
Definition 3.1 Fix a filtration (F_t, t ≥ 0). A process (M_t, t ≥ 0) is called an (F_t)-martingale if it is adapted, E|M_t| < ∞ for all t, and
E(M_t|F_s) = M_s for all s ≤ t.
The optional stopping theorem below gives conditions on a stopping time T under which E(M_T) = E(M_0).
Example 3.1 (B_t) is a martingale, because E(B_t|F_s^B) = E(B_s + B_t − B_s|F_s^B) = B_s. Take T = T_a ∧ T_b with a < 0 and b > 0; this is the minimum of two stopping times and so is a stopping time. We now use the Optional Stopping theorem: E(B_{T∧N}) = 0 and so, by the DCT, E(B_T) = 0, since |B_{T∧N}| ≤ max(|a|, b) (note: always check that this can be done). We then have 0 = bP(T_b < T_a) + aP(T_a < T_b), so P(T_b < T_a) = −a/(b−a) and P(T_a < T_b) = b/(b−a).
Example 3.2 B_t² − t is a martingale: from the computation above, E(B_t² − t|F_s^B) = B_s² − s, as we want. The OST says
E(B_{T∧N}² − T∧N) = 0,
and letting N → ∞,
E(B_T² − T) = 0,
so
E(T) = E(B_T²) = b²P(T_b < T_a) + a²P(T_a < T_b) = −ab²/(b−a) + a²b/(b−a) = −ab.
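Both exit formulas can be tested by simulation. The sketch below is an added illustration (interval, step size and caps are my choices); the Euler discretization misses some crossings between grid points, so the estimates run slightly high, but they should land close to E(T) = −ab = 2 and P(T_b < T_a) = 1/3 for (a,b) = (−1,2).

```python
import numpy as np

def exit_from_interval(a=-1.0, b=2.0, dt=0.01, n_paths=4000, t_cap=30.0, seed=4):
    """Simulate exit of (a, b) from 0; return (mean exit time, P(hit b first))."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    t_exit = np.zeros(n_paths)
    hit_b = np.zeros(n_paths, dtype=bool)
    active = np.ones(n_paths, dtype=bool)
    for k in range(1, int(t_cap / dt) + 1):
        x[active] += rng.standard_normal(active.sum()) * np.sqrt(dt)
        newly = active & ((x <= a) | (x >= b))   # paths exiting on this step
        t_exit[newly] = k * dt
        hit_b[newly] = x[newly] >= b
        active &= ~newly
        if not active.any():
            break
    return t_exit.mean(), hit_b.mean()

mean_T, p_b = exit_from_interval()
# theory: E(T) = -a*b = 2 and P(T_b < T_a) = -a/(b-a) = 1/3
```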
Example 3.3 (e^{θB_t − θ²t/2}, t ≥ 0) is a martingale for all θ ∈ R. But how is this a martingale when it seems to become small always? The reason is that the process goes to zero but the expectation doesn't. We now check that this is a martingale:
E(e^{θB_t}|F_s^B) = e^{θB_s} E(e^{θ(B_t−B_s)}) = e^{θB_s} e^{θ²(t−s)/2},
and then rearranging gives it as a martingale. We then use the OST at T_a ∧ N, with the DCT applied to e^{θB_t − tθ²/2}, to get
E(e^{θB_{T_a∧N} − (T_a∧N)θ²/2}) = 1.
For a, θ > 0 we have T_a ∧ N → T_a and B_{T_a∧N} → B_{T_a} = a on {T_a < ∞} (while the integrand tends to 0 on {T_a = ∞}), and then as N → ∞ we get E(e^{−θ²T_a/2}) = e^{−θa}. Then if we let λ = θ²/2 we get
E(e^{−λT_a}) = e^{−√(2λ) a}.
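The Laplace transform E(e^{−λT_a}) = e^{−√(2λ)a} can also be checked by Monte Carlo. The sketch below is my own (a = λ = 1, cap and step size arbitrary); paths that have not hit a by the time cap contribute essentially nothing because of the exponential weight, so the truncation is harmless.

```python
import numpy as np

def laplace_hitting(a=1.0, lam=1.0, dt=1e-3, t_cap=10.0, n_paths=4000, seed=5):
    """Monte Carlo estimate of E(exp(-lam * T_a)) for T_a = inf{t : B_t = a}."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    T = np.full(n_paths, np.inf)        # exp(-lam*inf) = 0 for paths that never hit
    alive = np.arange(n_paths)
    for k in range(1, int(t_cap / dt) + 1):
        x += rng.standard_normal(alive.size) * np.sqrt(dt)
        hit = x >= a
        T[alive[hit]] = k * dt
        alive, x = alive[~hit], x[~hit]
        if alive.size == 0:
            break
    return np.exp(-lam * T).mean()

est = laplace_hitting()
# theory: exp(-sqrt(2)) ≈ 0.243
```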
The above examples give a hint that many interesting facts about Brownian motion
can be found from the Optional stopping theorem, as applied to some stopping time.
Example 3.4 Ta ∧ Tb can be done similarly and inversion can be used to give P(Ta ∧ Tb ∈
dt).
The optional stopping theorem used above: for a bounded continuous martingale M and a bounded stopping time T, E(M_T) = E(M_0).
Proof Suppose first that T is discrete and T ≤ K, with T ∈ {t_1, ..., t_N}, where t_1 < t_2 < ... < t_N ≤ K. Then
E(M_T) = Σ_{k=1}^N E(M_{t_k} χ_{T=t_k}).
For example, B_t² − t corresponds to f(x,t) = x², for which ½Δf = 1. Also B_t⁴ isn't a martingale, but if we subtract ∫_0^t 6B_s² ds it is. For e^{θB_t − θ²t/2} we choose f(x,t) = e^{θx − θ²t/2} and we subtract nothing.
If B_t^x = x + B_t is d-dimensional Brownian motion started at x then the above holds with B_t replaced by B_t^x. We will prove it soon, and the proof uses the fact that the Gaussian density solves ∂φ_t/∂t = ½Δφ_t.
Of special importance are f(x,t) satisfying ∂f/∂t + ½Δf = 0, or Δf = 0.
Consider the Dirichlet problem. Let D ⊂ R^d be open; find u such that
Δu(x) = 0 in D, u(x) = f(x) on ∂D.   (3.1)
Theorem 3.5 Suppose D is a bounded open set of R^d, and suppose u ∈ C²(R^d) solves (3.1). Then
u(x) = E(f(B_T^x)), where T = inf{t : B_t^x ∈ ∂D}.
Proof We can modify u outside of D so that it has at most exponential growth. Then, by the above proposition, u(B_t^x) − ∫_0^t ½Δu(B_s^x) ds is a martingale, and the integral vanishes for t ≤ T since Δu = 0 in D. The OST at the bounded stopping time T ∧ N gives u(x) = E(u(B_{T∧N}^x)), and by the DCT we get u(x) = E(u(B_T^x)) = E(f(B_T^x)), since u = f on ∂D. Q.E.D.
Example 3.5 Let u(x) = |x|^{2−d} for d ≥ 3 and u(x) = log|x| for d = 2; this satisfies Δu = 0 except at x = 0. I leave this as a check for the reader, as it has been seen before many times. Let D = {x ∈ R³ : a < |x| < b}, let T_a = inf{t : |B_t^x| = a} and T_b = inf{t : |B_t^x| = b}, and let T = T_a ∧ T_b. Then
1/|x| = E(1/|B_T^x|) = (1/a) P(T_a < T_b) + (1/b) P(T_b < T_a),
with 1 = P(T_a < T_b) + P(T_b < T_a), since T < ∞ a.s. by the law of the iterated logarithm, and so
P(T_a < T_b) = (1/|x| − 1/b)/(1/a − 1/b).
If we let b → ∞ we get P(T_a < T_b) → P(T_a < ∞) = a/|x|.
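The annulus formula is easy to test directly. The sketch below is an added illustration (radii and start point are my choices): 3-dimensional Brownian motion started at radius 2 inside the annulus 1 < |x| < 4, where (1/2 − 1/4)/(1 − 1/4) = 1/3 exactly. Missed crossings on the Euler grid push the estimate slightly below 1/3.

```python
import numpy as np

def annulus_hitting_prob(a=1.0, b=4.0, r0=2.0, dt=0.005, t_cap=60.0,
                         n_paths=3000, seed=6):
    """P(inner sphere hit before outer) for 3d BM started at radius r0."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n_paths, 3))
    x[:, 0] = r0
    hit_inner = np.zeros(n_paths, dtype=bool)
    done = np.zeros(n_paths, dtype=bool)
    for _ in range(int(t_cap / dt)):
        act = ~done
        x[act] += rng.standard_normal((act.sum(), 3)) * np.sqrt(dt)
        r = np.linalg.norm(x, axis=1)
        hit_inner |= act & (r <= a)
        done |= act & ((r <= a) | (r >= b))
        if done.all():
            break
    return hit_inner.mean()

p = annulus_hitting_prob()
# theory: (1/r0 - 1/b) / (1/a - 1/b) = 1/3
```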
Proof If B_t does not tend to infinity then there exist K and t_N → ∞ with |B_{t_N}| ≤ K. Let T_N = inf{t : |B_t| ≥ N}; then T_N < ∞ (e.g. by the law of the iterated logarithm). Then X_t^{(N)} = B_{T_N+t} − B_{T_N} is a Brownian motion and P(X^{(N)} hits B(0,K)) = (K/N)^{d−2}, and thus
P(∩_{N=K}^∞ {X^{(N)} hits B(0,K)}) = 0,
and so
P(∪_{K=1}^∞ ∩_{N=K}^∞ {X^{(N)} hits B(0,K)}) = 0.
Q.E.D.
For d = 2, let b → ∞; then P(T_a < ∞) = 1 for every a > 0. Letting a → 0 gives P(T_{{0}} < T_b) = 0, and if we now let b → ∞ then P(T_{{0}} < ∞) = 0.
Q.E.D.
and here g represents pumping heat in or out, and u is the steady state temperature.
Theorem 3.10 (Poisson version 1) Suppose u ∈ C²(R^d) solves equation (3.2) with D bounded. Then
u(x) = E(∫_0^T g(B_s^x) ds), with T = inf{t : B_t^x ∈ ∂D}.
Example 3.6 u(x) = (R² − |x|²)/d solves the Poisson problem on a ball of radius R with g = 1. Then u(x) = E(T).
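Example 3.6 gives something concrete to simulate: started at the centre of the unit ball in R³, the mean exit time should be (R² − 0)/d = 1/3. A quick sketch (my own; step size and cap arbitrary, with the usual slight Euler overestimate):

```python
import numpy as np

def mean_exit_time_ball(R=1.0, d=3, dt=0.002, t_cap=10.0, n_paths=4000, seed=7):
    """Mean exit time of B(0, R) for d-dimensional BM started at the centre."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n_paths, d))
    t_exit = np.zeros(n_paths)
    active = np.ones(n_paths, dtype=bool)
    for k in range(1, int(t_cap / dt) + 1):
        x[active] += rng.standard_normal((active.sum(), d)) * np.sqrt(dt)
        out = active & (np.linalg.norm(x, axis=1) >= R)
        t_exit[out] = k * dt
        active &= ~out
        if not active.any():
            break
    return t_exit.mean()

mt = mean_exit_time_ball()
# theory: (R^2 - |x|^2)/d = 1/3 at the centre
```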
Example 3.7 Suppose D = (0,∞) ⊂ R and u(x) = −x²/2 solves the corresponding Poisson problem. However, E(T) ≠ u(x): here u < 0 while T ≥ 0 (in fact E(T) = ∞), so boundedness of D matters.
Example 3.8 ½Δu = −1 on (0,1)², with u = 0 on the boundary. Then u is not C²([0,1]²) because of the effects at the corners: at (0,0) the boundary conditions force Δu = 0.
We can improve this, though, to allow solutions in C²(D) ∩ C(D̄) which still solve the Dirichlet or Poisson problem. We do this as follows. Shrink the domain by ε, defining D_ε = {x : d(x, D^c) > ε}; then D_ε → D and we can mollify. Let T_ε = inf{t : B_t ∈ ∂D_ε}; then B_{T_ε} → B_T. One can then believe that u can be modified outside of D_ε to be C²(R^d) with exponential growth. Then apply version 1 to D_ε to get u(x) = E(u(B_{T_ε})) → E(f(B_T)) as ε → 0.
By exponential growth, we mean |g(t,x)| ≤ C_0 e^{C_1|x|} for all x, t.
Proof (of theorem 3.4) We first show that E(|M_t|) < ∞:
E(|f(B_t^x, t)|) ≤ C_0 E(e^{C_1|B_t^x|}) ≤ C_0 E(e^{C_1B_t^x} + e^{−C_1B_t^x}) ≤ 2C_0 e^{C_1|x|} e^{C_1²t/2} < ∞,
and similarly E(|∫_0^t Lf(B_s^x, s) ds|) < ∞. WLOG we can take x = 0, since we can shift and we still have exponential growth.
E(M_t|F_s^B) = E(f(B_t, t) − ∫_0^t Lf(B_r, r) dr | F_s^B)
= E(f(B_s, s) + (f(B_t, t) − f(B_s, s)) − ∫_0^s Lf(B_r, r) dr − ∫_s^t Lf(B_r, r) dr | F_s^B)
= M_s + E(f(B_s + X_{t−s}, t) − f(B_s, s) − ∫_0^{t−s} Lf(X_r + B_s, s+r) dr | F_s^B)
= M_s + E(f(Z + X_{t−s}, t) − f(Z, s) − ∫_0^{t−s} Lf(X_r + Z, s+r) dr)|_{Z=B_s}
where X_r = B_{s+r} − B_s. The result now follows from E(g(X_t, t) − g(0,0) − ∫_0^t Lg(X_r, r) dr) = 0, where we have used g(y, r) = f(Z + y, s + r). Alternatively this is
E(g(X_t, t)) − g(0,0) = E(∫_0^t Lg(X_r, r) dr),
as required. Q.E.D.
We prove both theorems simultaneously. Proof Fix t > 0 and consider the map
s ↦ u(B_s^x, t−s) − ∫_0^s (−∂u/∂t + ½Δu)(B_r^x, t−r) dr,
which is a martingale by the proposition above, as required.
On D we stop at T ∧ t, which is a bounded stopping time. Then s ↦ u(B_{s∧T}^x, t − s∧T) is a martingale, so u(x, t) = E(u(B_{T∧t}^x, t − T∧t)), as required. Q.E.D.
Example 3.9 (Brownian motion staying in a tube) Write u(x,t) = P(|B_s^x| < 1 for all s ≤ t). Then u solves
∂u/∂t = ½ ∂²u/∂x², x ∈ (−1,1), t > 0,
u(0,x) = 1,
u(t,x) = 0 for x = ±1.
With T = inf{t : |B_t| = 1}, the martingale e^{π²t/8} cos(πB_t/2) stopped at T gives
e^{−π²t/8} = E(cos(πB_t/2) χ_{T>t}) ≤ P(T > t),
and so P(T > t) ≥ e^{−π²t/8}, and this exponential rate is correct asymptotically.
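This tube bound can be seen numerically. The sketch below (an added illustration; t = 2 and all sizes are my choices) estimates P(|B_s| < 1 for all s ≤ 2): the true value is ≈ (4/π)e^{−π²/4} ≈ 0.109, comfortably above the bound e^{−π²/4} ≈ 0.085, and the discrete-grid estimate comes out slightly higher still because missed crossings inflate survival.

```python
import numpy as np

def tube_survival(t=2.0, dt=1e-3, n_paths=10000, seed=8):
    """P(|B_s| < 1 for all s <= t), estimated on an Euler grid."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(int(t / dt)):
        x[alive] += rng.standard_normal(alive.sum()) * np.sqrt(dt)
        alive &= np.abs(x) < 1          # kill paths that have left the tube
    return alive.mean()

p = tube_survival()
```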
Example 3.10 Find P(B^{x_1}, ..., B^{x_d} do not collide by time t). Let B_t^x = (B_t^{x_1}, ..., B_t^{x_d}) be a d-dimensional Brownian motion. Let V_d = {x ∈ R^d : x_1 < x_2 < ... < x_d}, which is called a cell, and then ∂V_d = {x ∈ R^d : x_i = x_{i+1} for some i}. The non-collision probability solves
∂u/∂t = ½Δu, x ∈ V_d, t > 0,
u(0,x) = f(x), x ∈ V_d,
u(t,x) = 0, x ∈ ∂V_d,
with solution the determinant formula
u(x,t) = ∫_{V_d} f(y_1, ..., y_d) det(φ_t(x_i − y_j))_{i,j=1,...,d} dy_1 ... dy_d,
where φ_t(z) = (1/√(2πt)) e^{−z²/2t}.
Jensen's inequality: for convex φ, E(φ(X)|F) ≥ φ(E(X|F)), and in particular E(φ(X)) ≥ φ(E(X)).
Proof Let L(x) = ax + b be any affine function with L ≤ φ. Then
E(φ(X)|F) ≥ E(L(X)|F) = aE(X|F) + b = L(E(X|F)) a.s.,
and then taking a supremum over L ≤ φ gives the result. Note we need to ensure somehow that we only need countably many L's. Q.E.D.
Lemma 3.16 (OST for φ(Mt )) Suppose (Mt , t ≥ 0) is a martingale with continuous
paths and φ ≥ 0 is a convex function. Suppose T ≤ K is a bounded stopping time. Then
E(φ(MT )) ≤ E(φ(MK ))
Proof First assume that T is discrete, so T ∈ {t_1, ..., t_M} with t_1 < t_2 < ... < t_M. Since φ(M_t) is a submartingale (by conditional Jensen) and χ_{T=t_k} is F_{t_k}-measurable,
E(φ(M_T)) = Σ_{k=1}^M E(φ(M_{t_k}) χ_{T=t_k}) ≤ Σ_{k=1}^M E(φ(M_K) χ_{T=t_k}) = E(φ(M_K)).
Now for general T , find discrete stopping times TN ≤ K so that TN → T and then we
have E(φ(MTN )) ≤ E(φ(MK )) and using Fatou’s lemma we get
E(φ(MT )) ≤ E(φ(MK ))
Q.E.D.
We now prove a more general OST, one without boundedness of the martingale
Theorem 3.17 Suppose (Mt , t ≥ 0) is a continuous martingale, and T ≤ K is a bounded
stopping time. Then
E(MT ) = E(M0 )
Proof For discrete T this works as above and is fine.
Now consider M_t ≥ 0, and assume we have discrete stopping times T_N → T with E(M_{T_N}) = E(M_0). We then write x = x∧L + (x−L)^+, so that
E(M_{T_N}) = E(M_{T_N} ∧ L) + E((M_{T_N} − L)^+),
and now E(M_{T_N} ∧ L) → E(M_T ∧ L) by the DCT, and E(M_T ∧ L) = E(M_T) − E((M_T − L)^+). Fix ε > 0 and choose L large so that the (· − L)^+ terms are at most ε, uniformly in N, using Lemma 3.16 with φ(x) = (x−L)^+.
Finally, truncate a general martingale at +L and −L. This is a bit messier though. Q.E.D.
Sphere averaging for our formula is almost obvious. If we let S_ε = inf{t : B_t^x ∈ ∂B(x,ε)}, X_t = B_{S_ε+t} − B_{S_ε} and T' = inf{t : X_t + B_{S_ε}^x ∈ ∂D}, then by the strong Markov property
u(x) = E(f(B_T^x)) = E(E(f(B_T^x)|F_{S_ε})) = E(u(z))|_{z=B_{S_ε}^x} = E(u(B_{S_ε}^x)).
4 Donsker’s theorem
The idea is to show random walks converge to Brownian motion. Throughout this chapter we have the following: Z_1, Z_2, ... are IID random variables with E(Z_i) = 0 and E(Z_i²) = 1, and we define S_N = Σ_{k=1}^N Z_k, the position at time N. We can interpolate the S_N to get S_t, and this gives
X_t^{(N)} = S_{Nt}/√N = S_{⌊Nt⌋}/√N + error = (S_{⌊Nt⌋}/√⌊Nt⌋)(√⌊Nt⌋/√N) + error →_D N(0,1) √t.
(N ) D
max Xt → max Bt (4.1)
t∈[0,1] t∈[0,1]
Z 1 Z 1
(N ) D
Xt dt → Bt dt ∼ N (0, 1/3) (4.2)
0 0
Z 1 Z 1
D
χ{X (N ) >0} ds → χ{Bs >0} ds (4.3)
s
0 0
PN
1 SK D number of times when Sk >0
(4.2) can be rewritten as N 3/2
→ and (4.3) can be rewritten almost as N
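The rewriting of (4.2) can be checked directly. The sketch below (an added illustration; N and the sample size are arbitrary) draws many ±1 random walks and verifies that N^{−3/2} Σ_{K≤N} S_K has mean ≈ 0 and variance ≈ 1/3.

```python
import numpy as np

def integral_functional(N=500, n_samples=10000, seed=9):
    """Samples of N^{-3/2} * sum_{K<=N} S_K for a +-1 random walk."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([-1.0, 1.0], size=(n_samples, N))
    S = np.cumsum(steps, axis=1)          # walk positions S_1, ..., S_N
    return S.sum(axis=1) / N**1.5

vals = integral_functional()
# limit law is N(0, 1/3); the exact finite-N variance is (1+1/N)(2+1/N)/6
```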
(N )
The plan is to think X (N ) and (Bt , t ∈ [0, 1]) = B as random
of (Xt , t ∈ [0, 1]) =:
variables in C[0, 1]. We thus need to show that
D
X (N ) → B
on C[0, 1]. This is a big improvement of the Central limit theorem. We also show that
D
F (X N ) → F (B)
and an open ball is B(f,ε) = {g : sup_t |g(t) − f(t)| < ε}, and the Borel sets are generated by open balls.
We also have B(f,ε) = ∪_{N≥1} B̄(f, ε − 1/N), a countable union of closed balls. Q.E.D.
An example: B = (B_t, t ∈ [0,1]) is a C[0,1]-valued variable, and one observes that {ω : B_{t_1} ∈ O_1, ..., B_{t_N} ∈ O_N} ∈ F. Also the X_t^{(N)} are random variables in C[0,1]. We can show the latter using a composition of measurable functions, Ω → R^N → C[0,1], given by ω ↦ (Z_1(ω), ..., Z_N(ω)) ↦ X^{(N)}(ω); the former map is measurable and the latter is continuous.
(N ) D
Definition 4.3 X (N ) , X are (E, d) valued variables then Xt → X means
D
Theorem 4.5 (Continuous Mapping) If X (N ) → X on (E, d) and G : (E, d) →
˜ which is continuous then
(Ẽ, d)
D
G(X (N ) ) → G(X)
˜
on (Ẽ, d)
Corollary 4.6
max_{t∈[0,1]} X_t^{(N)} →_D max_{t∈[0,1]} B_t
Theorem 4.7 (Extended Continuous Mapping) If X^{(N)} →_D X on (E,d) and F : (E,d) → R is measurable with Disc(F) = {f : F is discontinuous at f} satisfying P(X ∈ Disc(F)) = 0, then
F(X^{(N)}) →_D F(X).
Lemma 4.8 (Skorokhod) Take Z with E(Z) = 0 and E(Z²) < ∞. Then there exists a stopping time T < ∞ so that B_T =_D Z and E(T) = E(Z²).
Financial mathematicians love this lemma. There are at least 14 different ways to prove it.
Since B_t² − t is a martingale, E(B_{T∧N}² − T∧N) = 0, i.e. E(T∧N) = E(B_{T∧N}²), and this, by Fatou, gives E(T) ≥ E(B_T²); so E(T) = E(Z²) is the best one can hope for.
If we take Z to be independent of B and choose T = inf{t : B_t = Z}, then B_T = Z but E(T) = E(E(T|σ(Z))) = E(E(T_a)|_{a=Z}) = ∞. This is a bit of a silly example.
If Z ≥ 0 almost surely then E(Z) = 0 forces Z = 0 and we may take T = 0, so assume the law μ of Z charges both (0, ∞) and (−∞, 0), and write μ₊ = μ|_(0,∞), μ₋ = μ|_(−∞,0). We choose

π(dz) = μ₋(dz) / ∫ b μ₊(db)

and so (α, β), with α < 0 < β, are distributed as

ν(da, db) = (b − a) π(da) μ₊(db) = (b − a) μ₋(da) μ₊(db) / ∫ b μ₊(db).
2. ∫ −a π(da) = 1

3. E(T_{α,β}) = E(Z²)

4. ∫∫ ν(da, db) = 1
These are all true, but we only check 2 and 3. Observe that 1 is by construction. For 2, we want to show that

∫ −a μ₋(da) = ∫ a μ₊(da)

but we have

0 = E(Z) = ∫ a μ₊(da) + ∫ a μ₋(da).
For 3,

E(T_{α,β}) = E(−αβ)
           = ∫∫ −ab ν(da, db)
           = ∫∫ −ab (b − a) μ₊(db) μ₋(da) / ∫ x μ₊(dx)
           = ( −∫ x μ₋(dx) · ∫ x² μ₊(dx) + ∫ x² μ₋(dx) · ∫ x μ₊(dx) ) / ∫ x μ₊(dx)
           = ∫ x² μ₊(dx) + ∫ x² μ₋(dx)   (using 2)
           = E(Z²)
Q.E.D.
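As a discrete sanity check on E(T_{α,β}) = −αβ, one can replace Brownian motion by a simple random walk (a substitution made here for illustration, not in the notes): exiting {−1, 2}, the walk hits 2 with probability 1/3 and takes on average −(−1)(2) = 2 steps, matching the two-point target P(Z = −1) = 2/3, P(Z = 2) = 1/3, for which E(Z) = 0 and E(Z²) = 2.

```python
import random

def exit_walk(a, b, rng):
    """Run a simple +/-1 walk from 0 until it hits -a or b; return (hit_b, steps)."""
    x, steps = 0, 0
    while -a < x < b:
        x += 1 if rng.random() < 0.5 else -1
        steps += 1
    return x == b, steps

rng = random.Random(2)
n = 20000
hits = total = 0
for _ in range(n):
    hit_b, steps = exit_walk(1, 2, rng)
    hits += hit_b
    total += steps
p_hat = hits / n    # gambler's ruin predicts 1/3
mean_T = total / n  # should be near -alpha * beta = 2
```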
We use a Skorokhod trick:

B_{T_{α,β}} =^D Z,   E(T_{α,β}) = 1,

and take IID copies (α₁, β₁), (α₂, β₂), ... independent of B. Then T₁ = inf{t : B_t ∈ {α₁, β₁}}, ..., T_{N+1} = inf{t ≥ T_N : B_t − B_{T_N} ∈ {α_{N+1}, β_{N+1}}}, and defining (S₁, S₂, ...) = (B_{T₁}, B_{T₂}, ...) gives exactly the random walk distribution that we want, i.e.

X_t^(N) = S_{Nt}/√N,   B_t^(N) = B_{Nt}/√N.
The key estimate is

P(||X^(N) − B^(N)||_∞ > ε) → 0

as N → ∞. Assume this, and then fix F : C[0, 1] → R that is bounded and uniformly continuous. We get, using B^(N) =^D B,

|E(F(X^(N))) − E(F(B))| ≤ 2 sup|F| · P(||X^(N) − B^(N)||_∞ > ε) + sup{|F(f) − F(g)| : ||f − g||_∞ ≤ ε}.

Fix η > 0 and use uniform continuity to choose ε so that the second term is less than or equal to η/2.
Then choose N large to make the first term less than or equal to η/2.
We soon check that it is enough to only use uniformly continuous functions.
We now show the key estimate.
X^(N)_{K/N} := S_K/√N = B_{T_K}/√N = B^(N)_{T_K/N} ≈ B^(N)_{K/N}

where the approximation is the first gap we need to plug: the difference of the two processes at the grid points. The second gap is the difference at other parts of the paths.
We plug the first gap. Take T₁, T₂ − T₁, T₃ − T₂ and so on IID with mean 1. Then

T_N/N → 1   a.s.

by the strong law of large numbers. Then

max_{k=1,...,N} |T_k/N − k/N| → 0

almost surely, from the above (Analysis 1). Let Ω_{N,δ} = {max_{k=1,...,N} |T_k/N − k/N| ≥ δ}, and then

P(Ω_{N,δ}) → 0

as N → ∞.
We now plug the second gap. Suppose ||X^(N) − B^(N)||_∞ > ε. Then there exists a t ∈ [0, 1] such that

|X_t^(N) − B_t^(N)| ≥ ε

and suppose that t ∈ [K/N, (K+1)/N]. Since X^(N)_t lies between the endpoint values X^(N)_{K/N} = B^(N)_{T_K/N} and X^(N)_{(K+1)/N} = B^(N)_{T_{K+1}/N}, either

|B^(N)_t − B^(N)_{T_K/N}| ≥ ε

or

|B^(N)_t − B^(N)_{T_{K+1}/N}| ≥ ε,

and on Ω^c_{N,δ} these time points are within δ + 1/N of t. And so

P(||X^(N) − B^(N)||_∞ > ε) ≤ P(Ω_{N,δ}) + P(|B^(N)_s − B^(N)_t| ≥ ε for some |s − t| ≤ δ + 1/N)
                            ≤ P(Ω_{N,δ}) + P(|B^(N)_s − B^(N)_t| ≥ ε for some |s − t| ≤ 2δ)   (for N ≥ 1/δ)

and then choose δ small (using uniform continuity of Brownian paths) and N large.
This concludes the proof of Donsker's theorem, modulo some minor tidy-ups.
Corollary 4.9

∫_0^1 X_t^(N) dt →^D ∫_0^1 B_t dt

We apply F(f) = ∫_0^1 f. The former form isn't really very useful, so we rewrite it as follows:

∫_0^1 X_t^(N) dt = (1/2N) Σ_{K=0}^{N−1} (S_K/√N + S_{K+1}/√N) = (1/N^{3/2}) Σ_{K=0}^{N−1} S_K + S_N/(2N^{3/2})

(each integral over [K/N, (K+1)/N] is a trapezium area, and S_0 = 0)
and so we suspect that

(1/N^{3/2}) Σ_{K=0}^{N−1} S_K →^D N(0, 1/3).
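The variance 1/3 can be checked exactly, using Cov(S_j, S_k) = min(j, k) for the walk (this mirrors Var(∫_0^1 B_t dt) = ∫_0^1∫_0^1 min(s, t) ds dt = 1/3; the check itself is ours, not from the notes):

```python
def var_scaled_sum(N):
    """Exact variance of N^{-3/2} * (S_1 + ... + S_N) when Cov(S_j, S_k) = min(j, k)."""
    total = sum(min(j, k) for j in range(1, N + 1) for k in range(1, N + 1))
    return total / N ** 3

# var_scaled_sum(N) tends to 1/3 as N grows, matching Var(int_0^1 B_t dt) = 1/3
```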
We need a tidy up lemma:
Lemma 4.10 Suppose that X_N →^D X and Y_N →^D 0. Then X_N + Y_N →^D X.

This is not true if Y_N converges in distribution to a non-zero limit. For example, take Z ~ N(0, 1), X_N = Z, and
(
 Z    N even
 −Z   N odd.
Then Y_N →^D Z, but X_N + Y_N alternates between 2Z and 0 and so does not converge in distribution.
Proof We consider characteristic functions:

|E(e^{iθ(X_N + Y_N)}) − E(e^{iθ X_N})| ≤ E|e^{iθ Y_N} − 1| → 0

since Y_N →^D 0 forces Y_N → 0 in probability. Q.E.D.
Now back to the problem in hand. We have F(f) = ∫_0^1 f and we have X^(N) →^D B, and then

F(X^(N)) = (1/N^{3/2}) Σ_{K=0}^{N−1} S_K + S_N/(2N^{3/2}).
Then

E|S_N/N^{3/2}| ≤ √(E(S_N²)/N³) = √(N/N³) = 1/N → 0.
This uses the following:

Lemma 4.11 If E|X_N| → 0 then X_N →^D 0.

Another example of a useful functional is the passage time

F(f) = inf{t : f(t) ≥ a} ∧ 1

and we would like F(X^(N)) →^D F(B),
but the problem is that F is not continuous. We thus guess the discontinuity set of F, which is a matter of semicontinuity.

We take an f for which there exist times s_n → F(f) with f(s_n) > a; we need to show that F is continuous at such an f. If f(s_n) = a + ε, then take any g with ||g − f||_∞ < ε/2; this will have g(s_n) > a. Thus F(g) ≤ s_n, and so lim sup_{g→f} F(g) ≤ F(f), and so F is continuous at such f.
Theorem 4.12 The following are equivalent for (E, d)-valued random variables X_N, X.

1. E(F(X_N)) → E(F(X)) for all bounded continuous F : E → R.
2. E(F(X_N)) → E(F(X)) for all bounded uniformly continuous F : E → R.
3. lim sup_N P(X_N ∈ A) ≤ P(X ∈ A) for all closed A.
4. lim inf_N P(X_N ∈ A) ≥ P(X ∈ A) for all open A.
5. P(X_N ∈ A) → P(X ∈ A) for all Borel A with P(X ∈ ∂A) = 0.
6. E(F(X_N)) → E(F(X)) for all bounded measurable F such that P(X ∈ Disc(F)) = 0.
Proof 1 =⇒ 2 is immediate.

2 =⇒ 3: Define F_ε(x) = (1 − d(x, A)/ε)⁺; this is uniformly continuous and, for closed A, decreases to χ_A as ε ↓ 0. Then

lim sup_N P(X_N ∈ A) ≤ lim_N E(F_ε(X_N)) = E(F_ε(X)) ↓ P(X ∈ A).

3 =⇒ 4:

P(X ∈ A) = 1 − P(X ∈ A^c)

and so if A is open then A^c is closed.

4 =⇒ 5:

P(X ∈ A°) ≤ lim inf P(X_N ∈ A°) ≤ lim inf P(X_N ∈ A) ≤ lim sup P(X_N ∈ A) ≤ lim sup P(X_N ∈ Ā) ≤ P(X ∈ Ā)

and so, since

P(X ∈ Ā) − P(X ∈ A°) = P(X ∈ ∂A) = 0,

every term equals P(X ∈ A).
5 =⇒ 6: here F_ε is a simple approximation with |F_ε(x) − F(x)| ≤ ε. We will check E(F_ε(X_N)) → E(F_ε(X)), and this is enough. We then apply part 5 with A = F^{−1}((α_i, α_{i+1}]) ∪ {x : F(x) = α_i} ∪ {x : F(x) = α_{i+1}}. We claim that P(X ∈ ∂A) = 0, since ∂A ⊆ Disc(F) ∪ {x : F(x) ∈ {α_i, α_{i+1}}} and the levels α_i can be chosen with P(F(X) = α_i) = 0.
5 Up Periscope

Suppose that a box has N balls, half of which are black and the other half white. We draw at random until the box is empty. Let S_K denote the number of blacks left after draw K minus the number of whites left after draw K. Let X_t^(N) = S_{Nt}/√N (linearly interpolated) and we expect:

Theorem 5.1 X^(N) →^D Brownian bridge.
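A quick simulation supports the bridge limit (an illustrative check with our own parameter choices, not from the notes): the walk is equivalent to a uniformly random arrangement of N/2 steps +1 and N/2 steps −1, so S_N = 0 always, and for a Brownian bridge Var(X_t) = t(1 − t), e.g. 1/4 at t = 1/2:

```python
import random

def draw_walk(N, rng):
    """Partial sums of a uniformly shuffled arrangement of N/2 ones and N/2 minus-ones."""
    steps = [1] * (N // 2) + [-1] * (N // 2)
    rng.shuffle(steps)
    path, s = [0], 0
    for z in steps:
        s += z
        path.append(s)
    return path

rng = random.Random(3)
N, reps = 100, 5000
mids = [draw_walk(N, rng)[N // 2] / N ** 0.5 for _ in range(reps)]
v = sum(x * x for x in mids) / reps  # sample E(X_{1/2}^2); the bridge value is 1/4
```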
We consider a population model. Let S_N be the population size at time N. Assume that each individual independently has 2 offspring or 0 offspring, each with probability one half, at each time step.

Lemma 5.2

P(S_N > 0 | S_0 = 1) ∼ 2/N
Corollary 5.3

P(S_N > 0 | S_0 = N) ∼ 1 − (1 − 2/N)^N ∼ 1 − e^{−2}
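Lemma 5.2 can be checked via the extinction recursion: with offspring generating function f(s) = (1 + s²)/2, the survival probability q_n = P(S_n > 0 | S_0 = 1) satisfies 1 − q_{n+1} = f(1 − q_n), i.e. q_{n+1} = q_n − q_n²/2. A small sketch (standard branching-process bookkeeping, restated here rather than lifted from the notes):

```python
def survival_prob(n):
    """q_n = P(S_n > 0 | S_0 = 1) for offspring 0 or 2, each with probability 1/2."""
    q = 1.0  # q_0: the single ancestor is alive
    for _ in range(n):
        q -= q * q / 2  # q_{k+1} = q_k - q_k^2 / 2
    return q

# n * survival_prob(n) tends to 2, i.e. P(S_n > 0 | S_0 = 1) ~ 2/n
```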
If we instead choose S_0 = N and linearly interpolate to get X_t^(N) = S_{Nt}/N, we get

Theorem 5.4

X^(N) →^D X

where X solves Feller's equation:

dX/dt = √X dB/dt
Such theorems are called diffusion approximations. One can split the proof into two parts: tightness of the laws of the X^(N), and identification of the limit. We can deduce tightness from compact sets K ⊂ C[0, 1]; for example, by Arzelà–Ascoli, a uniformly bounded, equicontinuous set is compact.
The second point is specific to each convergence. We look only at the population one.
How do we characterise Feller's diffusion? We would want

X_{t+Δt} − X_t ≈ √(X_t) B(0, Δt).

For the population model,

X^(N)_{(K+1)/N} − X^(N)_{K/N} = (S_{K+1} − S_K)/N

and the term on the right hand side has mean zero and variance S_K/N², which agrees with √(X_t) B(0, Δt) for X_t = S_K/N and Δt = 1/N.
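The increment matching above suggests the obvious Euler scheme for Feller's equation. The sketch below is a naive discretisation (step size, clamping at the absorbing state 0, and names are our choices; the notes do not claim this scheme converges):

```python
import random

def feller_path(x0, dt, n_steps, rng):
    """Naive Euler-Maruyama for dX = sqrt(X) dB, clamped at the absorbing state 0."""
    x, path = x0, [x0]
    for _ in range(n_steps):
        # increment has mean 0 and variance x * dt, matching the computation above
        x = max(x + x ** 0.5 * rng.gauss(0.0, dt ** 0.5), 0.0)
        path.append(x)
    return path

rng = random.Random(5)
finals = [feller_path(1.0, 0.01, 100, rng)[-1] for _ in range(2000)]
mean_final = sum(finals) / len(finals)  # X is a martingale, so this stays near x0 = 1
```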