
MATH 545, Stochastic Calculus

Problem set 2

January 24, 2019

These problems are due on TUE Feb 5th. You can give them to me in class or drop them in my box. In all
of the problems, E denotes the expected value with respect to the specified probability measure P.

Problem 0. Read [Klebaner], Chapter 4 and Brownian Motion Notes (by FEB 7th).
Problem 1 (Klebaner, Exercise 3.4). Let {Bt}t≥0 be a standard Brownian Motion. Show that {Xt}t∈[0,T],
defined as below, is a Brownian Motion.
a) Xt = −Bt ,
We check that the defining properties of Brownian motion hold. It is clear that X0 = −B0 = 0 a.s., and
that the increments of the process are independent, since they are sign flips of the increments of B. For
t > s, the increments can be written as

(−Bt) − (−Bs) = −(Bt − Bs) .

Because Bt − Bs is a Gaussian RV with mean 0 and variance t − s, −(Bt − Bs) has the same
distribution.

b) Xt = B_{T−t} − B_T for T < ∞ ,
It is clear that X0 = B_T − B_T = 0 a.s.. For t > s, the increments of the process are given by

Xt − Xs = (B_{T−t} − B_T) − (B_{T−s} − B_T) = B_{T−t} − B_{T−s} .

These increments are independent of Xs = B_{T−s} − B_T by the independent increments property of
Brownian motion, since they involve disjoint time intervals. The increments are also clearly Gaussian
random variables with mean 0 and variance

Var(Xt − Xs) = Var(B_{T−t} − B_{T−s}) = |T − t − (T − s)| = t − s .

c) Xt = cB_{t/c²} for all c > 0, T < ∞ ,


The independence of increments and the X0 = 0 property are trivial, as they are not affected by the scaling.
The increments are clearly Gaussian random variables, as they are scalar multiples of differences of
Gaussian random variables, and the scaling preserves the mean 0 property. The variance of the increments
is given by

Var(Xt − Xs) = Var(cB_{t/c²} − cB_{s/c²}) = c² Var(B_{t/c²} − B_{s/c²}) = c² (t/c² − s/c²) = t − s .

d) Xt = tB_{1/t} if t > 0, and X0 = 0,¹

By Theorem 3.3 in [Klebaner], it suffices to show that Xt is a mean zero Gaussian process with covariance
structure Cov(Xs, Xt) = min(s, t). Because rescaling time and Brownian motion paths affects neither the
mean of the process nor its Gaussian structure, the first two points are trivial. Then, for s < t we
compute the covariance structure

Cov(Xt, Xs) = Cov(tB_{1/t}, sB_{1/s}) = ts Cov(B_{1/t}, B_{1/s}) = ts min(1/t, 1/s) = ts/t = s = min(t, s) .
To show continuity of the given process at 0, one could use the strong law of large numbers for a sum
of independent Gaussian random variables:

(1/n) Bn = (1/n) Σ_{i=0}^{n−1} (B_{i+1} − B_i) → 0

in the sense of almost sure convergence of random variables as n → ∞.
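The distributional claims in parts (a), (c) and (d) can also be sanity-checked by Monte Carlo (a hedged sketch, not part of the proof; the sample sizes, time points and seed below are arbitrary choices of ours). We sample the relevant Gaussian vectors directly via independent increments.

```python
# Monte Carlo sanity check of the Brownian-motion transformations in Problem 1.
import numpy as np

rng = np.random.default_rng(0)
n = 400_000                       # number of independent samples
s, t, c = 0.5, 2.0, 3.0

# Sample (B_s, B_t) for a standard Brownian motion via independent increments.
Bs = rng.normal(0.0, np.sqrt(s), n)
Bt = Bs + rng.normal(0.0, np.sqrt(t - s), n)

# a) X = -B: increments keep mean 0 and variance t - s.
inc = -(Bt - Bs)
assert abs(inc.mean()) < 0.01 and abs(inc.var() - (t - s)) < 0.05

# c) X_t = c * B_{t/c^2}: sample B at the rescaled times u < v.
u, v = s / c**2, t / c**2
Bu = rng.normal(0.0, np.sqrt(u), n)
Bv = Bu + rng.normal(0.0, np.sqrt(v - u), n)
assert abs((c * Bv - c * Bu).var() - (t - s)) < 0.05

# d) X_t = t * B_{1/t}: check Cov(X_s, X_t) = min(s, t).  Note 1/t < 1/s.
B1t = rng.normal(0.0, np.sqrt(1 / t), n)
B1s = B1t + rng.normal(0.0, np.sqrt(1 / s - 1 / t), n)
cov = np.cov(t * B1t, s * B1s)[0, 1]
assert abs(cov - min(s, t)) < 0.05
```

The tolerances are loose relative to the Monte Carlo standard errors, so the assertions pass with overwhelming probability under any seed.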

Problem 2 (Klebaner, Exercise 3.5). Let Bt and Wt be two independent Brownian Motions. Show that
Xt := (Bt + Wt)/√2 is also a Brownian Motion. Find the correlation between Xt and Bt.
Clearly X0 = 0 and Xt has independent increments. The increments Xt − Xs are mean 0 Gaussian random
variables. The variance of the increments is given by

Var(Xt − Xs) = (1/2) Var((Bt + Wt) − (Bs + Ws))
            = (1/2) (Var(Bt − Bs) + Var(Wt − Ws) + 2 Cov(Bt − Bs, Wt − Ws))
            = (1/2) (t − s + t − s + 0) = t − s ,

where in the second-to-last equality we used that W and B are independent. For the correlation, by the
same independence Cov(Xt, Bt) = Cov((Bt + Wt)/√2, Bt) = t/√2, so

Corr(Xt, Bt) = (t/√2)/(√t · √t) = 1/√2 .
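A quick numerical check of both the variance and the 1/√2 correlation (a sketch with arbitrary choices of t, sample size, and seed):

```python
# Monte Carlo check that X = (B + W)/sqrt(2) has Var(X_t) = t
# and Corr(X_t, B_t) = 1/sqrt(2).
import numpy as np

rng = np.random.default_rng(1)
n, t = 500_000, 2.0
B = rng.normal(0.0, np.sqrt(t), n)   # samples of B_t
W = rng.normal(0.0, np.sqrt(t), n)   # samples of the independent W_t
X = (B + W) / np.sqrt(2)

assert abs(X.var() - t) < 0.05                    # Var(X_t) = t
corr = np.corrcoef(X, B)[0, 1]
assert abs(corr - 1 / np.sqrt(2)) < 0.01          # Corr(X_t, B_t) = 1/sqrt(2)
```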
Problem 3 (Klebaner, Exercise 3.13). Let Mt := max_{0≤s≤t} Bs. Show that the random variables |Bt|, Mt
and Mt − Bt have the same distribution for all t > 0.
We have seen in class that for m > 0 we have P[Mt > m] = 2P[Bt > m], so

ρM(m) = −∂m P[Mt > m] = −2∂m P[Bt > m] = 2ρ_{0,0,t}(m) ,

where ρ_{0,0,t} denotes the density of a Gaussian with mean 0 and variance t, while for m < 0 we have
ρM(m) = 0. Similarly, for m > 0,

ρ_{|B|}(m) = ∂m P[|Bt| ≤ m] = ∂m P[Bt ≤ m] − ∂m P[Bt ≤ −m] = 2ρ_{0,0,t}(m) .

The third part of the exercise can be solved by applying Thm. 3.21 in [Klebaner] and integrating: for
m > 0,

ρ_{M−B}(m) = ∫_{−m}^{∞} ρ_{B,M}(x, m + x) dx
          = ∫_{−m}^{∞} √(2/π) · ((2(m + x) − x)/t^{3/2}) e^{−(2(m+x)−x)²/(2t)} dx
          = ∫_{0}^{∞} √(2/π) · ((m + x)/t^{3/2}) e^{−(m+x)²/(2t)} dx
          = −2 ∂m [ (1/√(2πt)) ∫_{0}^{∞} e^{−(m+x)²/(2t)} dx ]
          = −2 ∂m [ (1/√(2πt)) ∫_{m}^{∞} e^{−x²/(2t)} dx ]
          = −2 ∂m P[Bt > m] = 2ρ_{0,0,t}(m) ,

where in the third and fifth equalities we have performed a change of variables.

¹ Hint: Use Theorem 3.3 in [Klebaner]. Some extra work is needed at t = 0 here to prove continuity. To this end, you can use that
lim_{t→0} tB_{1/t} = lim_{n→∞} n^{−1} Bn.
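The equality in distribution of |Bt|, Mt and Mt − Bt can also be observed numerically. The sketch below (our parameter choices) simulates discretized paths and compares the empirical means of all three with E|Bt| = √(2t/π); the discrete maximum slightly undershoots the true one, hence the loose tolerance.

```python
# Simulate Brownian paths and compare the means of |B_t|, M_t, M_t - B_t.
import numpy as np

rng = np.random.default_rng(2)
paths, steps, t = 10_000, 1_000, 1.0
dB = rng.normal(0.0, np.sqrt(t / steps), (paths, steps))
B = np.cumsum(dB, axis=1)                 # discretized path of B on [0, t]

Bt = B[:, -1]
Mt = np.maximum(B.max(axis=1), 0.0)       # running max, including B_0 = 0

target = np.sqrt(2 * t / np.pi)           # E|B_t| for B_t ~ N(0, t)
for sample in (np.abs(Bt), Mt, Mt - Bt):
    assert abs(sample.mean() - target) < 0.05
```

A full check would compare entire histograms (e.g. with a two-sample statistic); matching first moments is only a plausibility check.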

Problem 4. Let {Bt }t≥0 be a standard Brownian motion.

a) For any 0 ≤ s < t, show that the joint distribution of (Bs, Bt) is a bivariate normal distribution and
determine the mean vector µ and covariance matrix Σ of this bivariate normal distribution.²
We use the result at the end of pp. 59 of [Klebaner] for the random variables X = Bs and Y = Bt − Bs,
which are independent Gaussian random variables with mean 0 and variances s and t − s respectively.
Consequently, the random vector (Bs, Bt) = (X, X + Y) is distributed according to a 2-dimensional
Gaussian distribution with mean vector µ = (0, 0) and covariance matrix

Σ = [ s  s ]
    [ s  t ] .

 
b) Find a matrix A = [ a_ss  a_st ; a_ts  a_tt ] so that [Z1 Z2], defined as follows, has a standard
bivariate normal distribution.

[ Z1 ]      [ Bs ]   [ a_ss  a_st ] [ Bs ]   [ a_ss Bs + a_st Bt ]
[ Z2 ] := A [ Bt ] = [ a_ts  a_tt ] [ Bt ] = [ a_ts Bs + a_tt Bt ]

A standard bivariate normal distribution is a bivariate normal distribution where the means of both
coordinate variables are zero and the covariance matrix is the identity matrix. You can use the fact that
any linear combination of random variables following a multi-variate normal distribution has a normal
distribution.
Let [Z1 Z2] have a standard bivariate normal distribution. We have

[ Bs ]  D  [ √s      0     ] [ Z1 ]
[ Bt ]  =  [ √s   √(t − s) ] [ Z2 ]

To see this, one can check that the right side has a centered bivariate normal distribution with covariance
matrix [ s  s ; s  t ]. Thus, A must be the inverse of the matrix above, i.e.,

A = [ √s      0     ]⁻¹   [ 1/√s           0        ]
    [ √s   √(t − s) ]   = [ −1/√(t − s)  1/√(t − s) ] .
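The matrix algebra (Σ, its square-root factor, and the whitening matrix A) can be verified symbolically; below is a sketch using sympy, our tooling choice rather than anything required by the assignment.

```python
# Symbolic check that L L^T = Sigma and that A = L^{-1} whitens Sigma.
import sympy as sp

s, t = sp.symbols("s t", positive=True)
Sigma = sp.Matrix([[s, s], [s, t]])                 # Cov of (B_s, B_t)
L = sp.Matrix([[sp.sqrt(s), 0],
               [sp.sqrt(s), sp.sqrt(t - s)]])      # (B_s, B_t)^T =D L (Z_1, Z_2)^T
A = L.inv()

assert sp.simplify(L * L.T - Sigma) == sp.zeros(2, 2)              # L reproduces Sigma
assert sp.simplify(A * Sigma * A.T - sp.eye(2)) == sp.zeros(2, 2)  # A Sigma A^T = I
```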

² Hint: The discussion at the end of pp. 59 of [Klebaner] should be useful. Or, using the independence of Bs and Bt − Bs, you
can first find the joint density of (Bs, Bt − Bs), then do a transformation to get the joint density of (Bs, Bt) and recognize that
it is the density of some bivariate normal distribution.

Problem 5 (More martingales in Brownian Motion). Let {Bt }t≥0 be a standard Brownian Motion with a
natural filtration F = {Ft }t≥0 , where Ft = σ(Bs , 0 ≤ s ≤ t).

(a) Compute E[Bt⁴ | Fs] for t > s ≥ 0.
We have that

E[Bt⁴ | Fs] = E[(Bs + (Bt − Bs))⁴ | Fs]
            = E[Bs⁴ + 4Bs³(Bt − Bs) + 6Bs²(Bt − Bs)² + 4Bs(Bt − Bs)³ + (Bt − Bs)⁴ | Fs]
            = Bs⁴ + 4Bs³ E[Bt − Bs | Fs] + 6Bs² E[(Bt − Bs)² | Fs]
              + 4Bs E[(Bt − Bs)³ | Fs] + E[(Bt − Bs)⁴ | Fs]
            = Bs⁴ + 6Bs² E[(Bt − Bs)² | Fs] + E[(Bt − Bs)⁴ | Fs]
            = Bs⁴ + 6Bs²(t − s) + 3(t − s)² ,

where we pulled the Fs-measurable factors out of the conditional expectations, used that odd moments of
centered normal distributions are 0, and in the last equality used the second and fourth moments of
Brownian motion increments, E[(Bt − Bs)²] = t − s and E[(Bt − Bs)⁴] = 3(t − s)².
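The closed form Bs⁴ + 6Bs²(t − s) + 3(t − s)² can be spot-checked by Monte Carlo: conditioning on Bs = b, the increment Bt − Bs is an independent N(0, t − s) variable. A sketch with arbitrary values of b, s, t:

```python
# Monte Carlo check of E[B_t^4 | B_s = b] = b^4 + 6 b^2 (t-s) + 3 (t-s)^2.
import numpy as np

rng = np.random.default_rng(3)
b, s, t, n = 0.7, 1.0, 2.0, 1_000_000
Bt = b + rng.normal(0.0, np.sqrt(t - s), n)   # B_t given B_s = b

mc = np.mean(Bt**4)
exact = b**4 + 6 * b**2 * (t - s) + 3 * (t - s)**2
assert abs(mc - exact) < 0.1
```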

(b) Consider the function f4(t, x) = x⁴ − 6tx² + 3t². Show that {Mt}t≥0 given by

Mt = f4(t, Bt) = Bt⁴ − 6tBt² + 3t²

is a martingale adapted to the filtration F.


We check only that E[Mt | Fs] = Ms; adaptedness and integrability are clear. In light of the above, and
since E[Bt² | Fs] = Bs² + (t − s), we have that

E[Mt | Fs] = E[Bt⁴ | Fs] − 6t E[Bt² | Fs] + 3t²
           = Bs⁴ + 6Bs²(t − s) + 3(t − s)² − 6t(Bs² + (t − s)) + 3t²
           = Bs⁴ − 6sBs² + 3s² = Ms .

(c) We have shown in class that Mt = f(t, Bt) is a martingale for the cases

f(t, x) = f1(t, x) := x;
f(t, x) = f2(t, x) := x² − t; and
f(t, x) = g(ϑ, t, x) := e^{ϑx − ϑ²t/2},

and the corresponding martingales are {Bt}t≥0, {Bt² − t}t≥0 and {e^{ϑBt − ϑ²t/2}}t≥0. Check that f1, f2
and f4 are solutions of the following (time reversed) heat equation

( ∂/∂t + (1/2) ∂²/∂x² ) f(t, x) = 0        (PDE)

with initial condition fn(0, x) = xⁿ, for n = 1, 2, 4.


The proof is immediate: insert the suggested solutions into the left-hand side of (PDE) and check
that the result is 0.
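The substitution into (PDE) can be automated; the sketch below checks f1, f2, f4 and, for good measure, g(ϑ, t, x) symbolically with sympy.

```python
# Symbolic check that f1, f2, f4 and g solve (d/dt + (1/2) d^2/dx^2) f = 0.
import sympy as sp

t, x, th = sp.symbols("t x theta")
heat = lambda f: sp.diff(f, t) + sp.Rational(1, 2) * sp.diff(f, x, 2)

for f in (x,                              # f1
          x**2 - t,                       # f2
          x**4 - 6*t*x**2 + 3*t**2,       # f4
          sp.exp(th*x - th**2*t/2)):      # g(theta, t, x)
    assert sp.simplify(heat(f)) == 0
```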

(d) (Optional) Find f3(t, x) so that (i) it is a solution to (PDE) with initial condition f3(0, x) = x³ and (ii)
{f3(t, Bt)}t≥0 is a martingale. Hint: You can do this part by solving the PDE from the given initial
condition (if you know how to), or by computing E[Bt³ | Fs] and guessing what f3(t, x) should look like.

(e) (Optional). Check that, in fact, we have

f1(t, x) = ∂g(ϑ, t, x)/∂ϑ |_{ϑ=0} ,
f2(t, x) = ∂²g(ϑ, t, x)/∂ϑ² |_{ϑ=0} , etc.

You can use this part to find your solution to part (d).
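Following this hint, successive ϑ-derivatives of g at ϑ = 0 generate these polynomials, and the third derivative suggests a candidate for part (d). A symbolic sketch (the candidate f3 is our guess, verified against (PDE) in the last line):

```python
# Theta-derivatives of g(theta, t, x) = exp(theta*x - theta^2 t/2) at theta = 0.
import sympy as sp

t, x, th = sp.symbols("t x theta")
g = sp.exp(th*x - th**2*t/2)
deriv = lambda n: sp.diff(g, th, n).subs(th, 0).expand()

assert deriv(1) == x                     # f1
assert deriv(2) == x**2 - t              # f2
f3 = deriv(3)                            # candidate for part (d)
assert f3 == x**3 - 3*t*x
# The candidate solves (PDE) and has initial condition f3(0, x) = x^3:
assert sp.simplify(sp.diff(f3, t) + sp.diff(f3, x, 2) / 2) == 0
```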

Problem 6 (Klebaner, Exercise 3.14). The first zero of the Standard Brownian Motion is B0 = 0. What is
the second zero?
By the arcsine law seen in class, the probability that Brownian motion has at least one zero in the interval
(a, b) is given by

P[Bt = 0 for some t ∈ (a, b)] = 1 − (2/π) arcsin √(a/b) .

Setting a = 0, this probability equals 1 for every b > 0, and remains 1 as b → 0, so the second zero of
Brownian motion is also at 0: the zeros accumulate at the origin.

An alternative way to prove the same result is by arguing as in Example 3.8 in [Klebaner]: one can show
that the probability that the sign of Brownian motion is constant on an interval (0, ε) is 0 independently
of ε, proving the result.
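The arcsine law can also be probed numerically. Detecting zeros on a discrete grid misses quick excursions, so the sketch below (our parameter choices) adds the standard Brownian-bridge crossing probability exp(−2·B_i·B_{i+1}/Δ) for each step; with a = 1, b = 2 the zero probability should be 1 − (2/π) arcsin √(1/2) = 1/2.

```python
# Estimate P(B has a zero in (a, b)) with a Brownian-bridge crossing correction.
import numpy as np

rng = np.random.default_rng(4)
paths, steps, a, b = 20_000, 500, 1.0, 2.0
dt = (b - a) / steps

left = rng.normal(0.0, np.sqrt(a), paths)   # B_a ~ N(0, a)
hit = np.zeros(paths, dtype=bool)
for _ in range(steps):
    right = left + rng.normal(0.0, np.sqrt(dt), paths)
    # P(zero in this step | endpoints) = 1 if the signs differ,
    # else exp(-2*left*right/dt) by the Brownian-bridge minimum law.
    p_zero = np.exp(np.clip(-2.0 * left * right / dt, None, 0.0))
    hit |= rng.random(paths) < p_zero
    left = right

assert abs(hit.mean() - 0.5) < 0.02   # 1 - (2/pi) arcsin(sqrt(a/b)) = 1/2
```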

Problem 7 (Optional: Diffusion and Brownian Motion). Let Bt be a standard Brownian Motion starting
from zero and define
p(t, x) = (1/√(2πt)) e^{−x²/(2t)} .
Given any x ∈ R, define Xt = x + Bt. Of course Xt is just a Brownian Motion starting from x at time 0.
Fixing a smooth bounded function f : R → R, we define the function u(x, t) by

u(x, t) = Ex [f (Xt )]

where we have decorated the expectation with the subscript x to remind us that we are starting from the point
x.

• Explain why
u(x, t) = ∫_{−∞}^{∞} f(y) p(t, x − y) dy .

• Show by direct calculation using the formula from the previous question that for t > 0, u(x, t) satisfies
the diffusion equation
∂u/∂t = c ∂²u/∂x²
for some constant c. (Find the correct c !)

• Show that
lim_{t→0} u(x, t) = f(x)

and hence the initial condition for the diffusion equation is f .
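Although the exercise asks for a direct calculation, the key computation can be confirmed symbolically: the kernel p itself satisfies the diffusion equation, and u inherits it by differentiating under the integral. The sketch below (which also exposes the value of the constant c) uses sympy.

```python
# Symbolic check that the heat kernel p(t, x) satisfies p_t = (1/2) p_xx.
import sympy as sp

t = sp.symbols("t", positive=True)
x = sp.symbols("x", real=True)
p = sp.exp(-x**2 / (2 * t)) / sp.sqrt(2 * sp.pi * t)

# u(x, t) = integral of f(y) p(t, x - y) dy inherits the same equation.
assert sp.simplify(sp.diff(p, t) - sp.diff(p, x, 2) / 2) == 0
```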

Problem 8 (Optional). Let {ηk : k = 0, 1, · · · } be a collection of mutually independent standard Gaussian
random variables with mean zero and variance one. Define

X(t) = (t/√π) η0 + √(2/π) Σ_{k=1}^{∞} (sin(kt)/k) ηk .

Show that on the interval [0, π], X(t) has the same mean, variance and covariance as Brownian Motion.
In fact, it is Brownian Motion. For extra credit, prove this. (There are a number of ways to do this. One
is to see X as the limit of the finite sums which are each continuous functions. Then prove that X is the
uniform limit of these continuous functions and hence is itself continuous.)
Then observe that "formally" the time derivative of X(t) is a sum over all frequencies with random
amplitudes which are independent and identically distributed N(0, 1) Gaussian random variables. This is
the origin of the term "white noise", since all frequencies are equally represented, as in white light.
In the above calculations you may need the fact that

min(t, s) = ts/π + (2/π) Σ_{k=1}^{∞} sin(kt) sin(ks)/k² .

(If you are interested, this can be shown by periodically extending min(t, s) to the interval [−π, π] and then
showing that it has the same Fourier transform as the right-hand side of the above expression. Then use the
fact that two continuous functions with the same Fourier transform are equal on [−π, π].)
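The Fourier identity for min(t, s) is easy to check numerically with a large partial sum (a sketch; since the terms decay like k⁻², the tail after K terms is bounded by (2/π)·Σ_{k>K} k⁻² ≈ 2/(πK)):

```python
# Numerical check of min(t, s) = ts/pi + (2/pi) sum_k sin(kt) sin(ks)/k^2 on [0, pi].
import numpy as np

def min_series(t, s, K=100_000):
    # Partial sum of the Fourier expansion with K terms.
    k = np.arange(1, K + 1)
    return t * s / np.pi + (2 / np.pi) * np.sum(np.sin(k * t) * np.sin(k * s) / k**2)

for t, s in [(1.0, 2.0), (0.3, 2.5), (np.pi / 2, np.pi / 2)]:
    assert abs(min_series(t, s) - min(t, s)) < 1e-3
```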

Problem 9 (optional). If S and T are both stopping times relative to the filtration F = {Ft }t≥0 , then
max(S, T ) and S + T are also stopping times relative to F.
First, {max(S, T) ≤ t} = {S ≤ t} ∩ {T ≤ t} ∈ Ft, so max(S, T) is a stopping time. For S + T, we need
to check that for any t ≥ 0, {S + T ≤ t} ∈ Ft, which is equivalent to {S + T > t} ∈ Ft. To do this, we
write

{S + T > t} = {S = 0, T > t} ∪ {S > t} ∪ {S = t, T > 0} ∪ {0 < S < t, S + T > t}

Clearly,

{S = 0, T > t} = {S = 0} ∩ {T > t} ∈ Ft
{S > t} = {S ≤ t}ᶜ ∈ Ft
{S = t, T > 0} = {S = t} ∩ {T > 0} ∈ Ft

So it suffices to show that the remaining set {0 < S < t, S + T > t} ∈ Ft. Notice


{0 < S < t, S + T > t} = ∪_{r∈Q∩(0,t)} {r < S < t, T > t − r}        (∗∗)

The right side of (∗∗) is in Ft , because it is a countable union of sets in Ft : for each r ∈ Q ∩ (0, t)

{r < S < t, T > t − r} = {r < S < t} ∩ {T > t − r} ∈ Ft

To show (∗∗), denote the left side and the right side by AL and AR , respectively. For each r ∈ Q ∩ (0, t),

{r < S(ω) < t, T (ω) > t − r} ⊂ {r < S(ω) < t, T (ω) + S(ω) > t}
⊂ {0 < S(ω) < t, T (ω) + S(ω) > t} = AL

which means AR ⊂ AL . Moreover, for any ω ∈ AL , choose

0 < ε < min{S(ω) + T (ω) − t, S(ω)}, and then choose q ∈ (S(ω) − ε, S(ω) − ε/2) ∩ Q.

Then we have q < S(ω) < t and

S(ω) + T (ω) > t + ε ⇒ T (ω) > t − (S(ω) − ε) > t − q

thus ω ∈ {q < S < t, T > t − q} for such choice of q ∈ Q. This means for any ω ∈ AL , we can find q such
that ω ∈ {q < S < t, T > t − q} ⊂ AR and hence AL ⊂ AR .
