Solutions To Stochastic Calculus For Finance II (Steven Shreve)
Email: zhaog22@mcmaster.ca

Contents

1 Chapter 1
  1.1 Exercise 1.4
  1.2 Exercise 1.5
  1.3 Exercise 1.6
  1.4 Exercise 1.7
  1.5 Exercise 1.8
  1.6 Exercise 1.10
  1.7 Exercise 1.11
  1.8 Exercise 1.14
2 Chapter 2
  2.1 Exercise 2.2
  2.2 Exercise 2.3
  2.3 Exercise 2.4
  2.4 Exercise 2.5
  2.5 Exercise 2.7
  2.6 Exercise 2.10
3 Chapter 3
  3.1 Exercise 3.2
  3.2 Exercise 3.3
  3.3 Exercise 3.4
  3.4 Exercise 3.5
  3.5 Exercise 3.6
  3.6 Exercise 3.7
  3.7 Exercise 3.8
4 Chapter 4
  4.1 Exercise 4.1
  4.2 Exercise 4.2
  4.3 Exercise 4.5
  4.4 Exercise 4.6
  4.5 Exercise 4.7
  4.6 Exercise 4.8
  4.7 Exercise 4.9
  4.8 Exercise 4.11
  4.9 Exercise 4.13
  4.10 Exercise 4.15
  4.11 Exercise 4.18
  4.12 Exercise 4.19
5 Chapter 5
  5.1 Exercise 5.1
  5.2 Exercise 5.4
  5.3 Exercise 5.5
  5.4 Exercise 5.12
  5.5 Exercise 5.13
  5.6 Exercise 5.14
6 Chapter 6
  6.1 Exercise 6.1
  6.2 Exercise 6.3
  6.3 Exercise 6.4
  6.4 Exercise 6.5
  6.5 Exercise 6.7
7 Chapter 7
8 Chapter 8
9 Chapter 9
10 Chapter 10
11 Chapter 11
  11.1 Exercise 11.1
Introduction
This solution manual will be updated from time to time and is NOT intended for any commercial use. The author offers it as a reference companion to the above-mentioned book by Steven Shreve. Anyone taking a mathematical finance course should not copy the solutions in this book directly; the manual is intended for self-checking purposes only.
If you find any mistakes or misprints in this book, please inform me. Thanks.
Chapter 1
1.1 Exercise 1.4

We set
$$X = \sum_{n=1}^{\infty} \frac{Y_n}{2^n}$$
and let
$$X_n(\omega) = \sum_{k=1}^{n} \frac{Y_k(\omega)}{2^k}$$
Since $N^{-1}$ is continuous, it is clear that
$$\lim_{n\to\infty} Z_n(\omega) = \lim_{n\to\infty} N^{-1}(X_n(\omega)) = N^{-1}\Big(\lim_{n\to\infty} X_n(\omega)\Big) = N^{-1}(X(\omega)) = Z(\omega)$$
for every $\omega$.
Since $Y_k(\omega)$ depends only on the $k$th coin toss, $X_n$ depends only on the first $n$ coin tosses, which means $Z_n$ will also depend only on the first $n$ coin tosses.
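As a quick illustrative check (not part of the original solution), the construction above can be simulated: fair coin tosses build the truncated binary expansion $X_n$, which is approximately uniform on $(0,1)$, and $Z = N^{-1}(X_n)$ should then look standard normal. The toss count of 30 and the sample size are arbitrary choices for this sketch.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
nd = NormalDist()  # standard normal; nd.inv_cdf is N^{-1}

def sample_Z(n_tosses=30):
    # X_n = sum_{k=1}^n Y_k / 2^k, a truncated binary expansion of a uniform
    x = sum(random.randint(0, 1) / 2**k for k in range(1, n_tosses + 1))
    return nd.inv_cdf(x)

zs = [sample_Z() for _ in range(100_000)]
print(round(mean(zs), 3), round(stdev(zs), 3))  # near 0 and 1
```

The sample mean and standard deviation come out close to 0 and 1, consistent with $Z$ being standard normal.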
1.2 Exercise 1.5

First, since
$$I_{(0,X(\omega))}(x) = \begin{cases} 0, & \text{if } x \notin (0, X(\omega)) \\ 1, & \text{if } x \in (0, X(\omega)) \end{cases} \tag{1.1}$$
we can write
$$E[X] = \int_\Omega X(\omega)\,dP(\omega) = \int_\Omega \int_0^\infty I_{(0,X(\omega))}(x)\,dx\,dP(\omega) \tag{1.2}$$
On the other hand, we may also compute the same double integral by changing the order of the integration as:
$$\int_\Omega \int_0^\infty I_{(0,X(\omega))}(x)\,dx\,dP(\omega) = \int_0^\infty \int_\Omega I_{(0,X(\omega))}(x)\,dP(\omega)\,dx$$
Now let us consider the inner integral with respect to $P(\omega)$, in which we shall consider the function $I_{(0,X(\omega))}(x)$ as a function of $\omega$ instead of $x$. Note that the condition $x \in (0, X(\omega))$ in (1.1) is equivalent to $X(\omega) > x$. So
$$\int_\Omega I_{(0,X(\omega))}(x)\,dP(\omega) = \int_\Omega I_{\{\omega: X(\omega) > x\}}(\omega)\,dP(\omega) = P(\omega : X(\omega) > x) = P(X > x) = 1 - P(X \le x) = 1 - F(x)$$
Then
$$E[X] = \int_\Omega \int_0^\infty I_{(0,X(\omega))}(x)\,dx\,dP(\omega) = \int_0^\infty [1 - F(x)]\,dx \tag{1.3}$$
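A minimal numerical sketch (mine, not from the text) of identity (1.3): for an exponential random variable with rate $\lambda$, $E[X] = 1/\lambda$ and $1 - F(x) = e^{-\lambda x}$, so the tail integral should come out to $1/\lambda$. The rate, step size, and cutoff below are arbitrary.

```python
import math

lam = 2.0      # exponential rate; E[X] = 1/lam = 0.5
dx = 1e-4      # Riemann-sum step
# integrate 1 - F(x) = exp(-lam*x) from 0 up to a cutoff of 20 (tail negligible)
tail_integral = sum(math.exp(-lam * (i * dx)) * dx for i in range(200_000))
print(tail_integral)  # close to 0.5
```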
1.3 Exercise 1.6

1.
$$\begin{aligned}
E[e^{uX}] &= \int_{-\infty}^{\infty} e^{ux} f(x)\,dx \\
&= \int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, \exp\Big\{ux - \frac{(x-\mu)^2}{2\sigma^2}\Big\}\,dx \\
&= \int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, \exp\Big\{ux - \frac{x^2 - 2\mu x + \mu^2}{2\sigma^2}\Big\}\,dx \\
&= \int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, \exp\Big\{-\frac{1}{2\sigma^2}\big[x^2 - (2\mu + 2u\sigma^2)x + \mu^2\big]\Big\}\,dx \\
&= \int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, \exp\Big\{-\frac{1}{2\sigma^2}\big[x - (\mu + u\sigma^2)\big]^2 + \frac{1}{2\sigma^2}\big[(\mu + u\sigma^2)^2 - \mu^2\big]\Big\}\,dx \\
&= \int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, \exp\Big\{-\frac{y^2}{2\sigma^2}\Big\}\,dy \cdot \exp\Big\{\frac{2\mu u\sigma^2 + u^2\sigma^4}{2\sigma^2}\Big\} \qquad (y = x - \mu - u\sigma^2) \\
&= \exp\Big\{u\mu + \frac{1}{2}u^2\sigma^2\Big\}
\end{aligned}$$
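An illustrative Monte Carlo check (not part of the original solution) of the moment-generating function just derived, $E[e^{uX}] = \exp\{u\mu + \frac12 u^2\sigma^2\}$; the parameter values below are arbitrary.

```python
import random
import math
from statistics import mean

random.seed(1)
mu, sigma, u = 0.3, 0.8, 0.5
samples = [mu + sigma * random.gauss(0, 1) for _ in range(500_000)]
mc = mean(math.exp(u * x) for x in samples)            # Monte Carlo estimate
closed_form = math.exp(u * mu + 0.5 * u**2 * sigma**2)  # derived formula
print(round(mc, 3), round(closed_form, 3))  # the two should agree closely
```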
2. Since $\varphi(x) = e^{ux}$ is convex, Jensen's inequality gives
$$e^{u\mu} = \varphi(\mu) = \varphi(EX) \le E[\varphi(X)] = E[e^{uX}]$$

1.4 Exercise 1.7
Figure 1: Graphs of Exercise 1.7 with n=1,2,3,4 from the top respectively
1.5 Exercise 1.8

(i) Assume first that $X \ge 0$, and let $s_n \to t$. Note that
$$\frac{d}{dt}e^{tX(\omega)} = X(\omega)e^{tX(\omega)}$$
Also, by the Mean Value Theorem, there exists $\theta_n \in [s_n, t]$ (or $[t, s_n]$) such that
$$Y_n(\omega) = \frac{e^{s_nX(\omega)} - e^{tX(\omega)}}{s_n - t} = X(\omega)e^{\theta_n(\omega)X(\omega)}$$
Since $\theta_n \le 2|t|$ for $n$ large enough, $0 \le Y_n \le Xe^{2|t|X}$, which is integrable by assumption. Hence the Dominated Convergence Theorem applies, i.e.,
$$\varphi'(t) = \lim_{n\to\infty}EY_n = E\big[\lim_{n\to\infty}Y_n\big] = E[Xe^{tX}]$$
(ii) If $X$ can take both positive and negative values, then write $X = X^+ - X^-$, where $X^+ = \max\{X, 0\}$ and $X^- = \max\{-X, 0\}$. Then by the Mean Value Theorem, there exists $\theta_n \in [s_n, t]$ (or $[t, s_n]$) such that
$$Y_n(\omega) = X(\omega)e^{\theta_n(\omega)X(\omega)}$$
Then
$$|Y_n(\omega)| = \big|X(\omega)e^{\theta_n(\omega)X(\omega)}\big| \le |X|\,e^{\theta_n(\omega)X(\omega)} \le |X|\,e^{2|t||X|} \tag{1.4}$$
Now we need to show that $E[|X|e^{t|X|}] < \infty$ for any $t$ in order to apply the Dominated Convergence Theorem. For any $t$,
$$\begin{aligned}
E[|X|e^{t|X|}] &= \int_\Omega |X|e^{t|X|}\,dP(\omega) \\
&= \int_\Omega 1_{X(\omega)\ge 0}\,|X|e^{t|X|}\,dP(\omega) + \int_\Omega 1_{X(\omega)<0}\,|X|e^{t|X|}\,dP(\omega) \\
&= \int_\Omega 1_{X(\omega)\ge 0}\,X^+e^{tX^+}\,dP(\omega) + \int_\Omega 1_{X(\omega)<0}\,X^-e^{tX^-}\,dP(\omega) \\
&= E\big[1_{X\ge 0}\,X^+e^{tX^+}\big] + E\big[1_{X<0}\,X^-e^{tX^-}\big]
\end{aligned}$$
and both terms are finite, since $X^+$ and $X^-$ are nonnegative and part (i) applies to each of them. Hence the Dominated Convergence Theorem applies with dominating function $|X|e^{2|t||X|}$, and
$$\varphi'(t) = \lim_{n\to\infty}EY_n = E\big[\lim_{n\to\infty}Y_n\big] = E[Xe^{tX}]$$
1.6 Exercise 1.10

$$\tilde{P}(A) = \int_A Z(\omega)\,dP(\omega) = \int_{A \cap \{\omega:\,0 \le \omega < 1/2\}} Z(\omega)\,dP(\omega) + \int_{A \cap \{\omega:\,1/2 \le \omega \le 1\}} Z(\omega)\,dP(\omega)$$
In particular,
$$\tilde{P}\big([0,\tfrac12)\big) = \int_0^{1/2} Z(\omega)\,d\omega = \int_0^{1/2} 0\,d\omega = 0$$

1.7 Exercise 1.11

$$\tilde{E}[e^{uY}] = E\big[Ze^{u(X+\theta)}\big] = E\big[e^{-\theta X - \theta^2/2}\,e^{uX + u\theta}\big] = e^{u\theta - \theta^2/2}\,E\big[e^{(u-\theta)X}\big] = e^{u\theta - \theta^2/2 + (u-\theta)^2/2} = e^{\frac12 u^2}$$
so $Y$ is a standard normal random variable under $\tilde{P}$.
1.8 Exercise 1.14

Since for $a \ge 0$,
$$P(X \le a) = 1 - e^{-\lambda a} = \int_0^a f(x)\,dx$$
where
$$f(x) = \begin{cases} \lambda e^{-\lambda x}, & x \ge 0 \\ 0, & x < 0 \end{cases}$$

1.
$$\int_\Omega Z(\omega)\,dP(\omega) = \int_0^\infty \frac{\tilde{\lambda}}{\lambda}e^{-(\tilde{\lambda}-\lambda)x}\,\big(\lambda e^{-\lambda x}\big)\,dx = \int_0^\infty \tilde{\lambda}e^{-\tilde{\lambda}x}\,dx = 1$$
2.
$$\tilde{P}(X \le a) = \int_{\{\omega:\,X(\omega)\le a\}} Z(\omega)\,dP(\omega) = \int_0^a \tilde{\lambda}e^{-\tilde{\lambda}x}\,dx = 1 - e^{-\tilde{\lambda}a}$$
so under $\tilde{P}$, $X$ is exponentially distributed with rate $\tilde{\lambda}$.
Chapter 2
2.1 Exercise 2.2

1. By the definition, $\sigma(X) = \{\{X \in B\};\ B$ ranges over the subsets of $\mathbb{R}\}$.
When $B = \{1\}$,
$$\{X \in B\} = \{X = 1\} = \{\omega \in \Omega_2 : S_2(\omega) = 4\} = \{HT, TH\}$$
When $B = \{0\}$,
$$\{X \in B\} = \{X = 0\} = \{\omega \in \Omega_2 : S_2(\omega) \ne 4\} = \{HH, TT\}$$
When $B = \{2\}$,
$$\{S_1 \in B\} = \{S_1 = 2\} = \{\omega \in \Omega_2 : S_1(\omega) = 2\} = \{TH, TT\}$$

3. We only investigate the non-trivial cases:

A ∈ σ(X)    B ∈ σ(S1)    A ∩ B   P(A)P(B)                        P(A ∩ B)
{HT, TH}    {HT, HH}     {HT}    (1/4 + 1/4)·(1/4 + 1/4) = 1/4   1/4
{HT, TH}    {TH, TT}     {TH}    (1/4 + 1/4)·(1/4 + 1/4) = 1/4   1/4
{HH, TT}    {HT, HH}     {HH}    (1/4 + 1/4)·(1/4 + 1/4) = 1/4   1/4
{HH, TT}    {TH, TT}     {TT}    (1/4 + 1/4)·(1/4 + 1/4) = 1/4   1/4

therefore for any $A \in \sigma(X)$ and $B \in \sigma(S_1)$, we have $P(A \cap B) = P(A)P(B)$. So $\sigma(X)$ and $\sigma(S_1)$ are independent under $P$.
4. Again we only investigate the non-trivial cases:

A ∈ σ(X)    B ∈ σ(S1)    A ∩ B   P̃(A)P̃(B)                        P̃(A ∩ B)
{HT, TH}    {HT, HH}     {HT}    (2/9 + 2/9)·(2/9 + 4/9) = 8/27   2/9
{HT, TH}    {TH, TT}     {TH}    (2/9 + 2/9)·(2/9 + 1/9) = 4/27   2/9
{HH, TT}    {HT, HH}     {HH}    (4/9 + 1/9)·(2/9 + 4/9) = 10/27  4/9
{HH, TT}    {TH, TT}     {TT}    (4/9 + 1/9)·(2/9 + 1/9) = 5/27   1/9

therefore for some $A \in \sigma(X)$ and $B \in \sigma(S_1)$ (indeed for every non-trivial pair, as shown), we have $\tilde{P}(A \cap B) \ne \tilde{P}(A)\tilde{P}(B)$. So $\sigma(X)$ and $\sigma(S_1)$ are not independent under $\tilde{P}$.

5. If we are told that $X = 1$, then we have $S_2 = 4$, i.e., the outcome must be $HT$ or $TH$. But since $\tilde{P}(HT) = \tilde{P}(TH) = 2/9$, we cannot use the unconditioned probabilities for $S_1$ any longer. In other words, based on the current information, we estimate $\tilde{P}(S_1 = 8\,|\,X = 1) = 1/2 = \tilde{P}(S_1 = 2\,|\,X = 1)$.
2.2 Exercise 2.3

$$\begin{aligned}
E\big[e^{aV+bW}\big] &= E\big[e^{X(a\cos\theta - b\sin\theta)}\big]\cdot E\big[e^{Y(a\sin\theta + b\cos\theta)}\big] \\
&= \exp\Big\{\frac{(a\cos\theta - b\sin\theta)^2}{2} + \frac{(a\sin\theta + b\cos\theta)^2}{2}\Big\} \\
&= \exp\Big\{\frac{a^2\cos^2\theta + b^2\sin^2\theta - 2ab\cos\theta\sin\theta}{2} + \frac{a^2\sin^2\theta + b^2\cos^2\theta + 2ab\cos\theta\sin\theta}{2}\Big\} \\
&= \exp\Big\{\frac{a^2}{2} + \frac{b^2}{2}\Big\} \\
&= E[e^{aV}]\,E[e^{bW}]
\end{aligned}$$
So $V$ and $W$ are independent standard normal random variables.
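A brief simulation sketch of this result (mine, not from the text). It assumes the common rotation convention $V = X\cos\theta + Y\sin\theta$, $W = -X\sin\theta + Y\cos\theta$; any angle works, so $\theta = 0.7$ below is arbitrary. The sample covariance of $V$ and $W$ should be near 0 and both sample standard deviations near 1.

```python
import random
import math
from statistics import mean, stdev

random.seed(23)
theta = 0.7
xs = [random.gauss(0, 1) for _ in range(300_000)]
ys = [random.gauss(0, 1) for _ in range(300_000)]
vs = [x * math.cos(theta) + y * math.sin(theta) for x, y in zip(xs, ys)]
ws = [-x * math.sin(theta) + y * math.cos(theta) for x, y in zip(xs, ys)]
cov_vw = mean(v * w for v, w in zip(vs, ws))  # sample E[VW] (means are 0)
print(round(stdev(vs), 3), round(stdev(ws), 3), round(cov_vw, 3))
```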
2.3 Exercise 2.4

1. By the definition,
$$E\big[e^{uX+vY}\big] = \frac12 E\big[e^{(u+v)X}\big] + \frac12 E\big[e^{(u-v)X}\big] = \frac12 e^{(u+v)^2/2} + \frac12 e^{(u-v)^2/2} = e^{(u^2+v^2)/2}\,\frac{e^{uv} + e^{-uv}}{2}$$
for any $u$ and $v$.
2. Let $u = 0$ in the last result; then
$$E[e^{vY}] = e^{(0+v^2)/2}\,\frac{e^{0\cdot v} + e^{-0\cdot v}}{2} = e^{v^2/2}$$
so $Y$ is standard normal. However, for $uv \ne 0$,
$$E\big[e^{uX+vY}\big] = e^{(u^2+v^2)/2}\,\frac{e^{uv} + e^{-uv}}{2} \ne e^{u^2/2}\,e^{v^2/2} = E[e^{uX}]\,E[e^{vY}]$$
so $X$ and $Y$ are not independent.
2.4 Exercise 2.5

$$\begin{aligned}
f_X(x) &= \int_{-|x|}^{\infty} \frac{2|x|+y}{\sqrt{2\pi}}\,\exp\Big\{-\frac{(2|x|+y)^2}{2}\Big\}\,dy \\
&= \int_{|x|}^{\infty} \frac{z}{\sqrt{2\pi}}\,\exp\Big\{-\frac{z^2}{2}\Big\}\,dz \qquad (z = y + 2|x|) \\
&= \frac{1}{\sqrt{2\pi}}\Big[-\exp\Big\{-\frac{z^2}{2}\Big\}\Big]_{z=|x|}^{z=\infty} \\
&= \frac{1}{\sqrt{2\pi}}\exp\Big\{-\frac{x^2}{2}\Big\}
\end{aligned}$$
For the marginal density $f_Y(y) = \int f_{X,Y}(x,y)\,dx$ we consider two cases.
If $y \ge 0$, the joint density is positive for all $x$, so
$$\begin{aligned}
f_Y(y) &= \int_{-\infty}^{\infty} \frac{2|x|+y}{\sqrt{2\pi}}\,\exp\Big\{-\frac{(2|x|+y)^2}{2}\Big\}\,dx \\
&= 2\int_0^{\infty} \frac{2x+y}{\sqrt{2\pi}}\,\exp\Big\{-\frac{(2x+y)^2}{2}\Big\}\,dx \\
&= 2\int_y^{\infty} \frac{z}{\sqrt{2\pi}}\,\exp\Big\{-\frac{z^2}{2}\Big\}\,\frac{dz}{2} \qquad (z = 2x+y) \\
&= \frac{1}{\sqrt{2\pi}}\exp\Big\{-\frac{y^2}{2}\Big\}
\end{aligned}$$
If $y < 0$, the joint density is positive only when $|x| \ge -y$, so
$$\begin{aligned}
f_Y(y) &= \int_{-\infty}^{y} \frac{2|x|+y}{\sqrt{2\pi}}\,\exp\Big\{-\frac{(2|x|+y)^2}{2}\Big\}\,dx + \int_{-y}^{\infty} \frac{2|x|+y}{\sqrt{2\pi}}\,\exp\Big\{-\frac{(2|x|+y)^2}{2}\Big\}\,dx \\
&= 2\int_{-y}^{\infty} \frac{2x+y}{\sqrt{2\pi}}\,\exp\Big\{-\frac{(2x+y)^2}{2}\Big\}\,dx \\
&= 2\int_{-y}^{\infty} \frac{z}{\sqrt{2\pi}}\,\exp\Big\{-\frac{z^2}{2}\Big\}\,\frac{dz}{2} \qquad (z = 2x+y) \\
&= \frac{1}{\sqrt{2\pi}}\exp\Big\{-\frac{y^2}{2}\Big\}
\end{aligned}$$
Therefore both $X$ and $Y$ are standard normal random variables.
It is clear that $f_{X,Y}(x,y) \ne f_X(x)\,f_Y(y)$, so they are not independent.
It is also clear that $E[X] = E[Y] = 0$, and
$$E[XY] = \int\!\!\int xy\,f_{X,Y}(x,y)\,dy\,dx = \int_0^\infty\!\!\int_{-x}^\infty xy\,\frac{2x+y}{\sqrt{2\pi}}\,e^{-\frac{(2x+y)^2}{2}}\,dy\,dx + \int_{-\infty}^0\!\!\int_{x}^\infty xy\,\frac{-2x+y}{\sqrt{2\pi}}\,e^{-\frac{(-2x+y)^2}{2}}\,dy\,dx$$
Substituting $t = -x$ in the second double integral turns it into the negative of the first, so
$$E[XY] = 0$$
Hence $E[XY] = E[X]E[Y]$, which implies $X$ and $Y$ are uncorrelated.
2.5 Exercise 2.7

Let $\mathrm{Err} = Y - E[Y|\mathcal{G}]$, and let $X$ be any $\mathcal{G}$-measurable estimate of $Y$. Writing $Y - X = \mathrm{Err} + (E[Y|\mathcal{G}] - X)$ and noting $E[\mathrm{Err}] = 0$,
$$\mathrm{Var}(Y - X) = \mathrm{Var}(\mathrm{Err}) + 2E\big[\mathrm{Err}\cdot\big(E[Y|\mathcal{G}] - X - E[E[Y|\mathcal{G}] - X]\big)\big] + \mathrm{Var}\big(E[Y|\mathcal{G}] - X\big) \tag{2.1}$$
For the cross term, since $E[Y|\mathcal{G}] - X$ minus its (constant) mean is $\mathcal{G}$-measurable, conditioning on $\mathcal{G}$ gives
$$E\big[\mathrm{Err}\cdot\big(E[Y|\mathcal{G}] - X - E[E[Y|\mathcal{G}] - X]\big)\big] = E\Big[\big(E[Y|\mathcal{G}] - X - E[E[Y|\mathcal{G}] - X]\big)\,E[\mathrm{Err}\,|\,\mathcal{G}]\Big] = 0$$
because $E[\mathrm{Err}\,|\,\mathcal{G}] = E[Y|\mathcal{G}] - E[Y|\mathcal{G}] = 0$. By substituting this result into (2.1), we have
$$\mathrm{Var}(Y - X) \ge \mathrm{Var}(\mathrm{Err})$$
2.6 Exercise 2.10

For $A = \{X \in B\} \in \sigma(X)$,
$$\begin{aligned}
E[g(X)\,1_A] &= \int \Big[\int \frac{y\,f_{X,Y}(x,y)}{f_X(x)}\,dy\Big] f_X(x)\,1_{\{x\in B\}}\,dx \\
&= \int\!\!\int y\,1_{\{x\in B\}}\,f_{X,Y}(x,y)\,dx\,dy \\
&= E[Y\,1_A] \\
&= \int_A Y\,dP
\end{aligned}$$
which verifies the partial averaging property, hence proves $E[Y|X] = g(X)$.
Chapter 3
3.1 Exercise 3.2

For $0 \le s \le t$,
$$\begin{aligned}
&E[W^2(t) - t\,|\,\mathcal{F}(s)] \\
&= E[(W(t) - W(s))^2 + 2W(t)W(s) - W^2(s) - t\,|\,\mathcal{F}(s)] \\
&= E[(W(t) - W(s))^2 - t\,|\,\mathcal{F}(s)] + E[2W(t)W(s)\,|\,\mathcal{F}(s)] - E[W^2(s)\,|\,\mathcal{F}(s)] \\
&= [(t-s) - t] + 2W(s)E[W(t)\,|\,\mathcal{F}(s)] - W^2(s) \\
&= -s + 2W^2(s) - W^2(s) \\
&= W^2(s) - s
\end{aligned}$$
so $W^2(t) - t$ is a martingale.
3.2 Exercise 3.3

$$\begin{aligned}
\varphi(u) &= E\big[e^{u(X-\mu)}\big] = e^{\frac12 u^2\sigma^2} \\
\varphi'(u) &= E\big[(X-\mu)e^{u(X-\mu)}\big] = u\sigma^2\,e^{\frac12 u^2\sigma^2} \\
\varphi''(u) &= E\big[(X-\mu)^2e^{u(X-\mu)}\big] = \big(\sigma^2 + u^2\sigma^4\big)e^{\frac12 u^2\sigma^2} \\
\varphi^{(3)}(u) &= E\big[(X-\mu)^3e^{u(X-\mu)}\big] = \big[2u\sigma^4 + (\sigma^2 + u^2\sigma^4)u\sigma^2\big]e^{\frac12 u^2\sigma^2} = \big(3u\sigma^4 + u^3\sigma^6\big)e^{\frac12 u^2\sigma^2} \\
\varphi^{(4)}(u) &= E\big[(X-\mu)^4e^{u(X-\mu)}\big] = \big[(3\sigma^4 + 3u^2\sigma^6) + (3u\sigma^4 + u^3\sigma^6)u\sigma^2\big]e^{\frac12 u^2\sigma^2} = \big(3\sigma^4 + 6u^2\sigma^6 + u^4\sigma^8\big)e^{\frac12 u^2\sigma^2}
\end{aligned}$$
In particular, $\varphi^{(3)}(0) = 0$ and $\varphi^{(4)}(0) = 3\sigma^4$. The latter implies the kurtosis of a normal random variable is $\varphi^{(4)}(0)/(\sigma^2)^2 = 3$.
3.3 Exercise 3.4

1. Since
$$\Big|\sum_{j=0}^{n-1}(W(t_{j+1}) - W(t_j))^2\Big| \le \max_{0\le k\le n-1}|W(t_{k+1}) - W(t_k)|\cdot\sum_{j=0}^{n-1}|W(t_{j+1}) - W(t_j)|$$
we have
$$\sum_{j=0}^{n-1}|W(t_{j+1}) - W(t_j)| \ge \frac{\sum_{j=0}^{n-1}(W(t_{j+1}) - W(t_j))^2}{\max_{0\le k\le n-1}|W(t_{k+1}) - W(t_k)|} \longrightarrow \frac{T}{0} = \infty$$
as $\|\Pi\| \to 0$, since the numerator converges to the quadratic variation $T$ while the denominator converges to 0 by the continuity of the paths. Hence the first-order variation of $W$ on $[0,T]$ is infinite.
2. Similarly,
$$0 \le \sum_{j=0}^{n-1}|W(t_{j+1}) - W(t_j)|^3 \le \max_{0\le k\le n-1}|W(t_{k+1}) - W(t_k)|\cdot\sum_{j=0}^{n-1}|W(t_{j+1}) - W(t_j)|^2 \longrightarrow 0\cdot T = 0$$
for almost every path of the Brownian motion $W$ as $n \to \infty$ and $\|\Pi\| \to 0$.
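An illustrative simulation (mine, not from the text) of the two facts above: along refining partitions of $[0,T]$ the first-order variation of a Brownian path blows up (roughly like $\sqrt{n}$), while the quadratic variation settles near $T$. A fresh path is sampled for each $n$, which is enough to show the scaling.

```python
import random
import math

random.seed(3)
T = 1.0
results = {}
for n in (1_000, 10_000, 100_000):
    dt = T / n
    increments = [random.gauss(0, math.sqrt(dt)) for _ in range(n)]
    first_var = sum(abs(d) for d in increments)  # grows without bound
    quad_var = sum(d * d for d in increments)    # converges to T = 1
    results[n] = (first_var, quad_var)
    print(n, round(first_var, 1), round(quad_var, 3))
```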
3.4 Exercise 3.5

Note that $W(T) \sim N(0,T)$, with probability density function $f(x) = \frac{1}{\sqrt{2\pi T}}\exp\{-x^2/2T\}$. Then by the definition of the expectation,
$$E\big[e^{-rT}(S(T)-K)^+\big] = \int_{-\infty}^{\infty}\Big(e^{-rT}S(0)e^{(r-\frac12\sigma^2)T+\sigma x} - e^{-rT}K\Big)^+f(x)\,dx = \int_A^{\infty}\Big(e^{-rT}S(0)e^{(r-\frac12\sigma^2)T+\sigma x} - e^{-rT}K\Big)f(x)\,dx$$
where $A$ is the constant such that $S(T) \ge K$ for $x \ge A$ and $S(T) \le K$ for $x \le A$. More precisely,
$$A = \min\Big\{x : S(0)e^{(r-\frac12\sigma^2)T+\sigma x} \ge K\Big\} = \frac{\log\frac{K}{S(0)} - (r-\frac12\sigma^2)T}{\sigma} = -\sqrt{T}\,d_-(T,S(0))$$
Therefore,
$$E\big[e^{-rT}(S(T)-K)^+\big] = \underbrace{e^{-rT}S(0)e^{(r-\frac12\sigma^2)T}\int_{-\sqrt{T}d_-}^{\infty}e^{\sigma x}f(x)\,dx}_{I} - \underbrace{e^{-rT}K\int_{-\sqrt{T}d_-}^{\infty}f(x)\,dx}_{II}$$
For the integral in $I$,
$$\begin{aligned}
\int_{-\sqrt{T}d_-}^{\infty}e^{\sigma x}f(x)\,dx &= \int_{-d_-}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\Big\{\sigma\sqrt{T}t - \frac{t^2}{2}\Big\}\,dt \qquad (t = x/\sqrt{T}) \\
&= e^{\sigma^2T/2}\int_{-d_-}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\Big\{-\frac{(t-\sigma\sqrt{T})^2}{2}\Big\}\,dt \qquad \text{(completing the square)} \\
&= e^{\sigma^2T/2}\int_{-d_--\sigma\sqrt{T}}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-y^2/2}\,dy \qquad (y = t - \sigma\sqrt{T}) \\
&= e^{\sigma^2T/2}\,N(d_+) \qquad (d_+ = d_- + \sigma\sqrt{T})
\end{aligned}$$
and for $II$,
$$II = e^{-rT}K\int_{-d_-}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-t^2/2}\,dt = e^{-rT}K\,N(d_-) \qquad (t = x/\sqrt{T})$$
By substituting $I$ and $II$, we have the discounted expected payoff of the option
$$E\big[e^{-rT}(S(T)-K)^+\big] = e^{-rT}S(0)e^{(r-\frac12\sigma^2)T}e^{\sigma^2T/2}N(d_+) - e^{-rT}KN(d_-) = S(0)N(d_+) - e^{-rT}KN(d_-)$$
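A Monte Carlo sanity check of the closed form just derived (my sketch, not part of the original solution; the market parameters below are arbitrary).

```python
import random
import math
from statistics import NormalDist, mean

random.seed(5)
S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0

# closed form: S(0)N(d+) - e^{-rT} K N(d-)
d_minus = (math.log(S0 / K) + (r - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d_plus = d_minus + sigma * math.sqrt(T)
N = NormalDist().cdf
closed_form = S0 * N(d_plus) - math.exp(-r * T) * K * N(d_minus)

def discounted_payoff():
    # simulate S(T) = S(0) exp{(r - sigma^2/2)T + sigma W(T)}
    ST = S0 * math.exp((r - 0.5 * sigma**2) * T
                       + sigma * math.sqrt(T) * random.gauss(0, 1))
    return math.exp(-r * T) * max(ST - K, 0.0)

mc = mean(discounted_payoff() for _ in range(400_000))
print(round(mc, 2), round(closed_form, 2))  # the two estimates should be close
```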
3.5 Exercise 3.6

1. Write
$$E[f(X(t))\,|\,\mathcal{F}(s)] = E[f(X(t) - X(s) + X(s))\,|\,\mathcal{F}(s)]$$
We define $g(x) = E[f(X(t) - X(s) + x)]$; then, by the independence of $X(t) - X(s)$ from $\mathcal{F}(s)$, $E[f(X(t) - X(s) + X(s))\,|\,\mathcal{F}(s)] = g(X(s))$. Now since
$$X(t) - X(s) = \mu(t-s) + W(t) - W(s)$$
is normally distributed with mean $\mu(t-s)$ and variance $t-s$,
$$g(x) = \frac{1}{\sqrt{2\pi(t-s)}}\int_{-\infty}^{\infty}f(\tau+x)\exp\Big\{-\frac{(\tau-\mu(t-s))^2}{2(t-s)}\Big\}\,d\tau = \frac{1}{\sqrt{2\pi(t-s)}}\int_{-\infty}^{\infty}f(y)\exp\Big\{-\frac{(y-x-\mu(t-s))^2}{2(t-s)}\Big\}\,dy$$
by changing the variable $y = \tau + x$. Therefore $E[f(X(t))\,|\,\mathcal{F}(s)] = g(X(s))$, i.e. $X$ has the Markov property.
2. Write
$$E[f(S(t))\,|\,\mathcal{F}(s)] = E\Big[f\Big(\frac{S(t)}{S(s)}S(s)\Big)\,\Big|\,\mathcal{F}(s)\Big]$$
We define $g(x) = E\big[f\big(\frac{S(t)}{S(s)}x\big)\big]$; then, since
$$\log\frac{S(t)}{S(s)} = \nu(t-s) + \sigma(W(t) - W(s))$$
(where $\nu$ is the constant drift of $\log S$) is independent of $\mathcal{F}(s)$ and normally distributed with mean $\nu(t-s)$ and variance $\sigma^2(t-s)$,
$$g(x) = \frac{1}{\sigma\sqrt{2\pi(t-s)}}\int_{-\infty}^{\infty}f(e^\tau x)\exp\Big\{-\frac{(\tau-\nu(t-s))^2}{2\sigma^2(t-s)}\Big\}\,d\tau = \frac{1}{\sigma\sqrt{2\pi(t-s)}}\int_0^{\infty}\frac{1}{y}\,f(y)\exp\Big\{-\frac{(\log\frac{y}{x}-\nu(t-s))^2}{2\sigma^2(t-s)}\Big\}\,dy$$
by changing the variable $y = e^\tau x$ (so $d\tau = dy/y$, and $\tau \in \mathbb{R}$ implies $0 < y < \infty$). Therefore
$$g(x) = \int_0^{\infty}f(y)\,p(\tau,x,y)\,dy \qquad \text{with } \tau = t-s, \quad p(\tau,x,y) = \frac{1}{\sigma y\sqrt{2\pi\tau}}\exp\Big\{-\frac{(\log\frac{y}{x}-\nu\tau)^2}{2\sigma^2\tau}\Big\}$$
satisfies $E[f(S(t))\,|\,\mathcal{F}(s)] = g(S(s))$ for any $f$. Hence $S$ has the Markov property.
3.6 Exercise 3.7

1. With $X(t) = \mu t + W(t)$ and $Z(t) = \exp\{\sigma X(t) - (\sigma\mu + \frac12\sigma^2)t\}$, for $0 \le s \le t$,
$$\begin{aligned}
E\big[Z(t)\,\big|\,\mathcal{F}(s)\big] &= Z(s)\,E\Big[\frac{Z(t)}{Z(s)}\,\Big|\,\mathcal{F}(s)\Big] \\
&= Z(s)\,E\Big[\exp\Big\{\sigma\mu(t-s) + \sigma(W(t)-W(s)) - \Big(\sigma\mu+\frac12\sigma^2\Big)(t-s)\Big\}\,\Big|\,\mathcal{F}(s)\Big] \\
&= Z(s)\,e^{\sigma\mu(t-s)}\,E\big[e^{\sigma(W(t)-W(s))}\big]\,e^{-(\sigma\mu+\frac12\sigma^2)(t-s)} \qquad \text{(by independence)} \\
&= Z(s)\,e^{\sigma\mu(t-s)}\,e^{\frac12\sigma^2(t-s)}\,e^{-(\sigma\mu+\frac12\sigma^2)(t-s)} \\
&= Z(s)
\end{aligned}$$
2. Since $\tau_m$ is a stopping time, and a martingale stopped at a stopping time is still a martingale,
$$E[Z(t\wedge\tau_m)] = E[Z(0\wedge\tau_m)] = E[Z(0)] = Z(0) = 1$$
which is exactly
$$E\Big[\exp\Big\{\sigma X(t\wedge\tau_m) - \Big(\sigma\mu+\frac12\sigma^2\Big)(t\wedge\tau_m)\Big\}\Big] = 1 \tag{3.1}$$
3. For $\tau_m < \infty$,
$$\lim_{t\to\infty}Z(t\wedge\tau_m) = Z(\tau_m) = \exp\Big\{\sigma m - \Big(\sigma\mu+\frac12\sigma^2\Big)\tau_m\Big\} \tag{3.2}$$
For $\tau_m = \infty$,
$$\lim_{t\to\infty}Z(t\wedge\tau_m) = \lim_{t\to\infty}\exp\Big\{\sigma X(t) - \Big(\sigma\mu+\frac12\sigma^2\Big)t\Big\} \tag{3.3}$$
Since $X(t) \le m$ on $\{\tau_m = \infty\}$ and
$$\lim_{t\to\infty}\exp\Big\{-\Big(\sigma\mu+\frac12\sigma^2\Big)t\Big\} = 0 \tag{3.4}$$
we have, on $\{\tau_m = \infty\}$,
$$0 \le e^{\sigma X(t)} \le e^{\sigma m}, \qquad \lim_{t\to\infty}Z(t\wedge\tau_m) = 0 \tag{3.5}$$
Therefore
$$\lim_{t\to\infty}\exp\Big\{\sigma X(t\wedge\tau_m) - \Big(\sigma\mu+\frac12\sigma^2\Big)(t\wedge\tau_m)\Big\} = 1_{\{\tau_m<\infty\}}\exp\Big\{\sigma m - \Big(\sigma\mu+\frac12\sigma^2\Big)\tau_m\Big\} \tag{3.6}$$
Taking limits in (3.1), justified by the bound $0 \le Z(t\wedge\tau_m) \le e^{\sigma m}$ and the Dominated Convergence Theorem, we have
$$E\Big[1_{\{\tau_m<\infty\}}\exp\Big\{\sigma m - \Big(\sigma\mu+\frac12\sigma^2\Big)\tau_m\Big\}\Big] = \lim_{t\to\infty}E\Big[\exp\Big\{\sigma X(t\wedge\tau_m) - \Big(\sigma\mu+\frac12\sigma^2\Big)(t\wedge\tau_m)\Big\}\Big] = 1 \tag{3.7}$$
Furthermore, letting $\sigma \to 0^+$,
$$1 = \lim_{\sigma\to 0^+}E\Big[1_{\{\tau_m<\infty\}}\exp\Big\{\sigma m - \Big(\sigma\mu+\frac12\sigma^2\Big)\tau_m\Big\}\Big] = E[1_{\{\tau_m<\infty\}}] = P(\tau_m < \infty)$$
so $\tau_m$ is finite almost surely. Setting $\alpha = \sigma\mu + \frac12\sigma^2$, i.e. $\sigma = -\mu + \sqrt{\mu^2+2\alpha}$, (3.7) gives the Laplace transform
$$E\big[e^{-\alpha\tau_m}\big] = e^{-\sigma m} = e^{\mu m - m\sqrt{\mu^2+2\alpha}}$$
4. By differentiating with respect to $\alpha$,
$$\frac{d}{d\alpha}E\big[e^{-\alpha\tau_m}\big] = \frac{d}{d\alpha}e^{\mu m - m\sqrt{\mu^2+2\alpha}} \quad\Longrightarrow\quad E\big[\tau_m e^{-\alpha\tau_m}\big] = \frac{m}{\sqrt{\mu^2+2\alpha}}\,e^{\mu m - m\sqrt{\mu^2+2\alpha}}$$
Letting $\alpha \to 0^+$ (valid since $\mu > 0$),
$$E[\tau_m] = \lim_{\alpha\to 0^+}E\big[\tau_m e^{-\alpha\tau_m}\big] = \frac{m}{\sqrt{\mu^2}}\,e^{\mu m - m\mu} = \frac{m}{\mu}$$
5. For $\mu < 0$ and $\sigma > -2\mu > 0$, we still have (3.2) and (3.3) respectively. The slight difference to the argument in (3.4) is
$$0 \le \lim_{t\to\infty}\exp\Big\{-\Big(\sigma\mu+\frac12\sigma^2\Big)t\Big\} = \lim_{t\to\infty}\exp\Big\{-\frac{\sigma(\sigma+2\mu)}{2}t\Big\} = 0$$
as both $\sigma > 0$ and $\sigma + 2\mu > 0$. Then (3.5)-(3.7) follow directly as in part 3, which is
$$E\Big[1_{\{\tau_m<\infty\}}\exp\Big\{\sigma m - \Big(\sigma\mu+\frac12\sigma^2\Big)\tau_m\Big\}\Big] = 1 \tag{3.8}$$
Instead of taking the limit $\sigma \to 0^+$ in (3.8), we can only take the limit $\sigma \to -2\mu$:
$$1 = \lim_{\sigma\to -2\mu}E\Big[1_{\{\tau_m<\infty\}}\exp\Big\{\sigma m - \Big(\sigma\mu+\frac12\sigma^2\Big)\tau_m\Big\}\Big] = E\big[e^{-2\mu m}1_{\{\tau_m<\infty\}}\big]$$
which is equivalent to $P(\tau_m < \infty) = E[1_{\{\tau_m<\infty\}}] = e^{2\mu m} = e^{-2|\mu|m}$.
The Laplace transform argument is exactly the same as in part 4.
3.7 Exercise 3.8

(i) By independence and the i.i.d. property of the $X_{k,n}$,
$$\varphi_n(u) = E\Big[\exp\Big\{\frac{u}{\sqrt{n}}M_{nt,n}\Big\}\Big] = E\Big[\exp\Big\{\frac{u}{\sqrt{n}}\sum_{k=1}^{nt}X_{k,n}\Big\}\Big] = \prod_{k=1}^{nt}E\Big[\exp\Big\{\frac{u}{\sqrt{n}}X_{k,n}\Big\}\Big] = \Big(\tilde{p}_n e^{u/\sqrt{n}} + \tilde{q}_n e^{-u/\sqrt{n}}\Big)^{nt}$$
(ii) With the risk-neutral probabilities
$$\tilde{p}_n = \frac{e^{r/n} - e^{-\sigma/\sqrt{n}}}{e^{\sigma/\sqrt{n}} - e^{-\sigma/\sqrt{n}}}, \qquad \tilde{q}_n = \frac{e^{\sigma/\sqrt{n}} - e^{r/n}}{e^{\sigma/\sqrt{n}} - e^{-\sigma/\sqrt{n}}}$$
and the substitution $\sqrt{n} = 1/x$, we have
$$\begin{aligned}
\log\varphi_{1/x^2}(u) &= \frac{t}{x^2}\log\Big[(rx^2+1)\frac{e^{ux}-e^{-ux}}{e^{\sigma x}-e^{-\sigma x}} + \frac{e^{(\sigma-u)x}-e^{-(\sigma-u)x}}{e^{\sigma x}-e^{-\sigma x}}\Big] \\
&= \frac{t}{x^2}\log\Big[\frac{(rx^2+1)\sinh ux + \sinh(\sigma-u)x}{\sinh\sigma x}\Big]
\end{aligned}$$
(iii) Using $\sinh z = z + O(z^3)$, $\cosh z = 1 + \frac{z^2}{2} + O(z^4)$, and $\sinh(\sigma-u)x = \sinh\sigma x\cosh ux - \cosh\sigma x\sinh ux$, the bracket expands as
$$\cosh ux + \frac{\sinh ux}{\sinh\sigma x}\big(rx^2 + 1 - \cosh\sigma x\big) = 1 + \frac{u^2x^2}{2} + \frac{rux^2}{\sigma} - \frac12\sigma ux^2 + O(x^4)$$
(iv) Since $\log(1+z) = z + O(z^2)$,
$$\begin{aligned}
\log\varphi_{1/x^2}(u) &= \frac{t}{x^2}\log\Big[1 + \Big(\frac{u^2}{2} + \frac{ru}{\sigma} - \frac{u\sigma}{2}\Big)x^2 + O(x^4)\Big] \\
&= \frac{t}{x^2}\Big[\Big(\frac{u^2}{2} + \frac{ru}{\sigma} - \frac{u\sigma}{2}\Big)x^2 + O(x^4)\Big] \\
&= \Big(\frac{u^2}{2} + \frac{ru}{\sigma} - \frac{u\sigma}{2}\Big)t + O(x^2) \\
&\longrightarrow u\Big(\frac{rt}{\sigma} - \frac{\sigma t}{2}\Big) + \frac{u^2}{2}t \qquad \text{as } n\to\infty
\end{aligned}$$
which is the log-moment-generating function of $N\big(\frac{rt}{\sigma} - \frac{\sigma t}{2},\,t\big)$. Hence
$$\frac{M_{nt,n}}{\sqrt{n}} \Longrightarrow N\Big(\frac{rt}{\sigma} - \frac{\sigma t}{2},\,t\Big)$$
and, using the fact that $cZ \sim N(ca, c^2b)$ for $Z \sim N(a,b)$,
$$\sigma\frac{M_{nt,n}}{\sqrt{n}} \Longrightarrow N\Big(rt - \frac{\sigma^2 t}{2},\,\sigma^2 t\Big) = N\Big(\Big(r - \frac{\sigma^2}{2}\Big)t,\,\sigma^2 t\Big)$$
Chapter 4
Theorem 1. If $dx = a(x,t)\,dt + b(x,t)\,dW$, where $W$ is a Wiener process/Brownian motion, then for $G(x,t)$ we have
$$dG = \Big(a\frac{\partial G}{\partial x} + \frac{\partial G}{\partial t} + \frac12 b^2\frac{\partial^2 G}{\partial x^2}\Big)dt + b\frac{\partial G}{\partial x}\,dW \tag{4.1}$$
4.1 Exercise 4.1

Without loss of generality, assume $s = t_l < t_k = t$. Then
$$\begin{aligned}
E[I(t)\,|\,\mathcal{F}(t_l)] &= E\Big[\sum_{j=0}^{l-1}\Delta(t_j)[M(t_{j+1})-M(t_j)] + \sum_{j=l}^{k-1}\Delta(t_j)[M(t_{j+1})-M(t_j)]\,\Big|\,\mathcal{F}(t_l)\Big] \\
&= \sum_{j=0}^{l-1}E\big[\Delta(t_j)[M(t_{j+1})-M(t_j)]\,\big|\,\mathcal{F}(t_l)\big] + \sum_{j=l}^{k-1}E\big[\Delta(t_j)[M(t_{j+1})-M(t_j)]\,\big|\,\mathcal{F}(t_l)\big] \\
&= I(s) + \sum_{j=l}^{k-1}E\Big[E\big[\Delta(t_j)[M(t_{j+1})-M(t_j)]\,\big|\,\mathcal{F}(t_j)\big]\,\Big|\,\mathcal{F}(t_l)\Big] \\
&= I(s) + \sum_{j=l}^{k-1}E\Big[\Delta(t_j)\big(E[M(t_{j+1})\,|\,\mathcal{F}(t_j)] - M(t_j)\big)\,\Big|\,\mathcal{F}(t_l)\Big] \\
&= I(s) + \sum_{j=l}^{k-1}E\big[0\,\big|\,\mathcal{F}(t_l)\big] \\
&= I(s) + 0 = I(s)
\end{aligned}$$
where the third equality uses the tower property ($\mathcal{F}(t_l) \subset \mathcal{F}(t_j)$ for $j \ge l$) and the fifth uses the fact that $M$ is a martingale. So $I(t)$ is a martingale for $0 \le t \le T$.
4.2 Exercise 4.2

(i) Without loss of generality, we assume $t = t_k$ and $s = t_l$ where $t_l < t_k$. In other words, to show that $I(t) - I(s)$ is independent of $\mathcal{F}(s)$ if $0 \le s < t \le T$, it is sufficient to show that $I(t_k) - I(t_l)$ is independent of $\mathcal{F}(t_l)$ for $0 \le t_l < t_k \le T$. Hence we consider
$$I(t) - I(s) = \sum_{j=0}^{k}\Delta(t_j)[W(t_{j+1})-W(t_j)] - \sum_{j=0}^{l}\Delta(t_j)[W(t_{j+1})-W(t_j)] = \sum_{j=l+1}^{k}\Delta(t_j)[W(t_{j+1})-W(t_j)]$$
Now since $\Delta(t)$ is a nonrandom simple process, each $\Delta(t_j)$ is a constant; also, since $W(t)$ is a Brownian motion, each $W(t_{j+1}) - W(t_j)$ is independent of $\mathcal{F}(t_j)$. Furthermore, since $\mathcal{F}(t_l) \subset \mathcal{F}(t_j)$ for all $j > l$, each $W(t_{j+1}) - W(t_j)$ is independent of $\mathcal{F}(t_l)$, which implies each $\Delta(t_j)[W(t_{j+1})-W(t_j)]$ is independent of $\mathcal{F}(t_l)$. By taking the summation, we have that $I(t) - I(s) = \sum_{j=l+1}^{k}\Delta(t_j)[W(t_{j+1})-W(t_j)]$ is independent of $\mathcal{F}(t_l)$.
(ii) Again, we consider
$$I(t) - I(s) = \sum_{j=l+1}^{k}\Delta(t_j)[W(t_{j+1})-W(t_j)]$$
Now since each $W(t_{j+1}) - W(t_j)$ is a normal random variable with mean 0 and variance $t_{j+1}-t_j$, $\Delta(t_j)[W(t_{j+1})-W(t_j)]$ has a normal distribution with mean 0 and variance $\Delta^2(t_j)(t_{j+1}-t_j)$. Since the sum of independent normal random variables is still a normal random variable,
$$I(t) - I(s) \sim N\Big(0,\ \sum_{j=l+1}^{k}\Delta^2(t_j)(t_{j+1}-t_j)\Big)$$
As $\Delta(t)$ is a nonrandom simple process, $\sum_{j=l+1}^{k}\Delta^2(t_j)(t_{j+1}-t_j)$ is exactly the Riemann sum defining $\int_{t_l}^{t_k}\Delta^2(u)\,du$. Hence
$$I(t) - I(s) \sim N\Big(0,\ \int_s^t\Delta^2(u)\,du\Big)$$
(iii) Since
$$E[I(t)\,|\,\mathcal{F}(s)] = E[I(t)-I(s)+I(s)\,|\,\mathcal{F}(s)] = E[I(t)-I(s)\,|\,\mathcal{F}(s)] + E[I(s)\,|\,\mathcal{F}(s)] = E[I(t)-I(s)] + I(s) = 0 + I(s) = I(s)$$
$I(t)$ is a martingale.
(iv) First we write
$$I^2(t) - \int_0^t\Delta^2(u)\,du = (I(t)-I(s))^2 + 2(I(t)-I(s))I(s) + I^2(s) - \int_0^t\Delta^2(u)\,du$$
then for $0 \le s \le t \le T$,
$$\begin{aligned}
&E\Big[I^2(t) - \int_0^t\Delta^2(u)\,du\,\Big|\,\mathcal{F}(s)\Big] \\
&= E\big[(I(t)-I(s))^2 + 2(I(t)-I(s))I(s) + I^2(s)\,\big|\,\mathcal{F}(s)\big] - \int_0^t\Delta^2(u)\,du \\
&= E\big[(I(t)-I(s))^2\big] + 2E\big[I(t)-I(s)\big]\,I(s) + I^2(s) - \int_0^t\Delta^2(u)\,du \\
&= \int_s^t\Delta^2(u)\,du + 2\cdot 0\cdot I(s) + I^2(s) - \int_0^t\Delta^2(u)\,du \\
&= I^2(s) - \int_0^s\Delta^2(u)\,du
\end{aligned}$$
(using part (i) for the second equality and part (ii) for the third). Hence $I^2(t) - \int_0^t\Delta^2(u)\,du$ is a martingale.

4.3 Exercise 4.5

Applying the Itô formula to $\log S(t)$ with $dS(t) = \alpha(t)S(t)\,dt + \sigma(t)S(t)\,dW(t)$,
$$d\log S(t) = \frac{dS(t)}{S(t)} - \frac12\frac{dS(t)\,dS(t)}{S^2(t)} = \Big(\alpha(t) - \frac12\sigma^2(t)\Big)dt + \sigma(t)\,dW(t)$$
so
$$\log S(t) - \log S(0) = \int_0^t\Big(\alpha(s) - \frac12\sigma^2(s)\Big)ds + \int_0^t\sigma(s)\,dW(s)$$
$$S(t) = S(0)\exp\Big\{\int_0^t\Big(\alpha(s) - \frac12\sigma^2(s)\Big)ds + \int_0^t\sigma(s)\,dW(s)\Big\}$$
4.4 Exercise 4.6

Let
$$X(t) = \Big(\alpha - \frac12\sigma^2\Big)t + \sigma W(t)$$
so that $S(t) = S(0)e^{X(t)}$. Then
$$dX(t) = \Big(\alpha - \frac12\sigma^2\Big)dt + \sigma\,dW(t), \qquad dX(t)\,dX(t) = \sigma^2\,dt$$
Now we use two different notations to compute $d(S^p(t))$:
1. Under the notations in Theorem 1, $G(x,t) = S^p(0)e^{px}$, $a = \alpha - \frac12\sigma^2$ and $b = \sigma$. Then
$$\begin{aligned}
d(S^p(t)) &= \Big[\Big(\alpha - \frac12\sigma^2\Big)pS^p(0)e^{pX(t)} + 0 + \frac12\sigma^2p^2S^p(0)e^{pX(t)}\Big]dt + \sigma pS^p(0)e^{pX(t)}\,dW(t) \\
&= \Big(p\alpha - \frac12 p\sigma^2 + \frac12 p^2\sigma^2\Big)S^p(t)\,dt + p\sigma S^p(t)\,dW(t)
\end{aligned}$$
2. Under the notations in Theorem 4.4.6 of the text, $S^p(t) = f(X(t))$ with $f(x) = S^p(0)e^{px}$, $f'(x) = pS^p(0)e^{px}$ and $f''(x) = p^2S^p(0)e^{px}$. Now
$$\begin{aligned}
d(S^p(t)) &= df(X(t)) \\
&= pS^p(0)e^{pX(t)}\,dX(t) + \frac12 p^2S^p(0)e^{pX(t)}\,dX(t)\,dX(t) \\
&= pS^p(t)\,dX(t) + \frac12 p^2S^p(t)\,dX(t)\,dX(t) \\
&= pS^p(t)\Big[\Big(\alpha - \frac12\sigma^2\Big)dt + \sigma\,dW(t)\Big] + \frac12 p^2S^p(t)\sigma^2\,dt \\
&= \Big(p\alpha - \frac12 p\sigma^2 + \frac12 p^2\sigma^2\Big)S^p(t)\,dt + p\sigma S^p(t)\,dW(t)
\end{aligned}$$
4.5 Exercise 4.7

Recall the Itô formula for a function of Brownian motion:
$$df(W(t)) = f'(W(t))\,dW(t) + \frac12 f''(W(t))\,dt \tag{4.2}$$
1. Set $f(x) = x^4$ in (4.2); then $f'(x) = 4x^3$ and $f''(x) = 12x^2$, and (4.2) becomes
$$dW^4(t) = 4W^3(t)\,dW(t) + \frac12\cdot 12W^2(t)\,dt = 4W^3(t)\,dW(t) + 6W^2(t)\,dt$$
i.e.
$$W^4(T) = 4\int_0^TW^3(t)\,dW(t) + 6\int_0^TW^2(t)\,dt \tag{4.3}$$
2. First we note that $W(t) \sim N(0,t)$; then by the symmetry of the integrand,
$$E[W^3(t)] = \int_{-\infty}^{\infty}x^3\frac{1}{\sqrt{2\pi t}}e^{-\frac{x^2}{2t}}\,dx = 0$$
so the Itô integral in (4.3) has expectation 0. Taking expectations in (4.3),
$$E[W^4(T)] = 4\cdot 0 + 6\int_0^TE[W^2(t)]\,dt = 6\int_0^Tt\,dt = 3T^2$$
3. Set $f(x) = x^6$ in (4.2); then $f'(x) = 6x^5$ and $f''(x) = 30x^4$, and (4.2) becomes
$$dW^6(t) = 6W^5(t)\,dW(t) + \frac12\cdot 30W^4(t)\,dt = 6W^5(t)\,dW(t) + 15W^4(t)\,dt$$
i.e.
$$W^6(T) = 6\int_0^TW^5(t)\,dW(t) + 15\int_0^TW^4(t)\,dt$$
It is easy to see that $E[W^5(t)] = 0$. Then by taking the expectation in the above equation and using part 2,
$$E[W^6(T)] = 6\cdot 0 + 15\int_0^TE[W^4(t)]\,dt = 15\int_0^T3t^2\,dt = 15T^3$$
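An illustrative Monte Carlo check (mine, not from the text) of the moments just derived: since $W(T) \sim N(0,T)$, sampling $W(T)$ directly suffices; $T = 2$ below is arbitrary.

```python
import random
import math
from statistics import mean

random.seed(7)
T = 2.0
w_T = [random.gauss(0, math.sqrt(T)) for _ in range(1_000_000)]
m4 = mean(w**4 for w in w_T)  # should approximate 3T^2
m6 = mean(w**6 for w in w_T)  # should approximate 15T^3
print(round(m4, 1), 3 * T**2)
print(round(m6, 1), 15 * T**3)
```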
4.6 Exercise 4.8

With $dR(t) = (\alpha - \beta R(t))\,dt + \sigma\,dW(t)$, apply the Itô product rule to $e^{\beta t}R(t)$:
$$d\big(e^{\beta t}R(t)\big) = \beta e^{\beta t}R(t)\,dt + e^{\beta t}\,dR(t) = \alpha e^{\beta t}\,dt + \sigma e^{\beta t}\,dW(t)$$
Integrating from 0 to $t$ and multiplying by $e^{-\beta t}$, this implies
$$R(t) = e^{-\beta t}R(0) + \frac{\alpha}{\beta}\big(1 - e^{-\beta t}\big) + \sigma e^{-\beta t}\int_0^te^{\beta s}\,dW(s)$$
4.7 Exercise 4.9

1. It is clear that
$$N'(y) = \frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}$$
and
$$d_\pm = d_\pm(T-t,x) = \frac{1}{\sigma\sqrt{T-t}}\Big[\log\frac{x}{K} + \Big(r \pm \frac{\sigma^2}{2}\Big)(T-t)\Big]$$
Then
$$\begin{aligned}
\frac{N'(d_+)}{N'(d_-)} &= \exp\Big\{-\frac{d_+^2}{2} + \frac{d_-^2}{2}\Big\} \\
&= \exp\Big\{-\frac{1}{2\sigma^2(T-t)}\Big[\Big(\log\frac{x}{K} + \Big(r+\frac{\sigma^2}{2}\Big)(T-t)\Big)^2 - \Big(\log\frac{x}{K} + \Big(r-\frac{\sigma^2}{2}\Big)(T-t)\Big)^2\Big]\Big\} \\
&= \exp\Big\{-\frac{1}{2\sigma^2(T-t)}\cdot 2\sigma^2(T-t)\Big[\log\frac{x}{K} + r(T-t)\Big]\Big\} \\
&= \exp\Big\{-\log\frac{x}{K} - r(T-t)\Big\} \\
&= \frac{K}{x}e^{-r(T-t)}
\end{aligned}$$
that is, $xN'(d_+) = Ke^{-r(T-t)}N'(d_-)$.
2. First,
$$\frac{\partial d_\pm}{\partial x} = \frac{1}{\sigma\sqrt{T-t}\,x}$$
By the definition $c(t,x) = xN(d_+(T-t,x)) - Ke^{-r(T-t)}N(d_-(T-t,x))$,
$$\begin{aligned}
c_x &= N(d_+) + xN'(d_+)\frac{\partial d_+}{\partial x} - Ke^{-r(T-t)}N'(d_-)\frac{\partial d_-}{\partial x} \\
&= N(d_+) + \big[xN'(d_+) - Ke^{-r(T-t)}N'(d_-)\big]\frac{1}{\sigma\sqrt{T-t}\,x} \\
&= N(d_+)
\end{aligned}$$
by part 1.
3.
$$c_t = xN'(d_+)\frac{\partial d_+}{\partial t} - Ke^{-r(T-t)}rN(d_-) - Ke^{-r(T-t)}N'(d_-)\frac{\partial d_-}{\partial t}$$
Using part 1 (so $Ke^{-r(T-t)}N'(d_-) = xN'(d_+)$) and
$$\frac{\partial(d_+-d_-)}{\partial t} = \frac{\partial}{\partial t}\sigma\sqrt{T-t} = -\frac{\sigma}{2\sqrt{T-t}}$$
we obtain
$$c_t = -Ke^{-r(T-t)}rN(d_-) + xN'(d_+)\frac{\partial(d_+-d_-)}{\partial t} = -rKe^{-r(T-t)}N(d_-) - \frac{\sigma x}{2\sqrt{T-t}}N'(d_+)$$
4. Using $c_{xx} = \frac{d}{dx}N(d_+) = N'(d_+)\frac{\partial d_+}{\partial x} = N'(d_+)\frac{1}{\sigma\sqrt{T-t}\,x}$,
$$\begin{aligned}
c_t + rxc_x + \frac12\sigma^2x^2c_{xx} &= -rKe^{-r(T-t)}N(d_-) - \frac{\sigma x}{2\sqrt{T-t}}N'(d_+) + rxN(d_+) + \frac12\sigma^2x^2N'(d_+)\frac{1}{\sigma\sqrt{T-t}\,x} \\
&= rxN(d_+) - rKe^{-r(T-t)}N(d_-) \\
&= rc
\end{aligned}$$
5. As $t \to T-$, $\sigma\sqrt{T-t} \to 0$; and $\log\frac{x}{K} > 0$ if $x > K$, while $\log\frac{x}{K} < 0$ if $0 < x < K$. Therefore
$$\lim_{t\to T-}d_\pm = \lim_{t\to T-}\frac{1}{\sigma\sqrt{T-t}}\Big[\log\frac{x}{K} + \Big(r\pm\frac{\sigma^2}{2}\Big)(T-t)\Big] = \begin{cases}+\infty, & \text{if } x > K \\ -\infty, & \text{if } 0 < x < K\end{cases}$$
Then
$$\lim_{t\to T-}N(d_\pm) = \begin{cases}N(+\infty) = 1, & \text{if } x > K \\ N(-\infty) = 0, & \text{if } 0 < x < K\end{cases}$$
So
$$\lim_{t\to T-}c(t,x) = \begin{cases}x\cdot 1 - Ke^{-r(T-T)}\cdot 1 = x - K, & \text{if } x > K \\ x\cdot 0 - Ke^{-r(T-T)}\cdot 0 = 0, & \text{if } 0 < x < K\end{cases} = (x-K)^+$$
6. For $0 \le t < T$, $\log\frac{x}{K} \to -\infty$ as $x \to 0+$, so
$$\lim_{x\to 0+}d_\pm = \lim_{x\to 0+}\frac{1}{\sigma\sqrt{T-t}}\Big[\log\frac{x}{K} + \Big(r\pm\frac{\sigma^2}{2}\Big)(T-t)\Big] = -\infty$$
which implies $N(d_\pm) \to N(-\infty) = 0$ as $x \to 0+$. Therefore $c(t,x) \to x\cdot 0 - Ke^{-r(T-t)}\cdot 0 = 0$ as $x \to 0+$.
7. For $0 \le t < T$, $\log\frac{x}{K} \to \infty$ as $x \to \infty$, so
$$\lim_{x\to\infty}d_\pm = \lim_{x\to\infty}\frac{1}{\sigma\sqrt{T-t}}\Big[\log\frac{x}{K} + \Big(r\pm\frac{\sigma^2}{2}\Big)(T-t)\Big] = +\infty$$
which implies $N(d_\pm) \to 1$ as $x \to \infty$. Then, since $Ke^{-r(T-t)}[1 - N(d_-)] \to 0$,
$$\lim_{x\to\infty}\Big[c(t,x) - \big(x - e^{-r(T-t)}K\big)\Big] = \lim_{x\to\infty}x\big[N(d_+) - 1\big] = \lim_{x\to\infty}\frac{N(d_+) - 1}{x^{-1}}$$
By L'Hôpital's rule,
$$\lim_{x\to\infty}\frac{N(d_+) - 1}{x^{-1}} = \lim_{x\to\infty}\frac{N'(d_+)\frac{1}{\sigma\sqrt{T-t}\,x}}{-x^{-2}} = -\frac{1}{\sigma\sqrt{T-t}\sqrt{2\pi}}\lim_{x\to\infty}x\,e^{-d_+^2/2} = 0$$
since $d_+^2$ grows like $(\log x)^2/\sigma^2(T-t)$, so $e^{-d_+^2/2}$ decays faster than any power of $x$. Hence $c(t,x) - \big(x - e^{-r(T-t)}K\big) \to 0$ as $x \to \infty$.
4.8 Exercise 4.11

Without confusion, we shorten the notations when proper throughout this problem:
$$S = S(t), \quad c = c(t,S(t)), \quad c_x = c_x(t,S(t)), \quad c_t = c_t(t,S(t)), \quad c_{xx} = c_{xx}(t,S(t))$$
By
$$dS(t) = \alpha S\,dt + \sigma_2 S\,dW(t)$$
we have
$$dS(t)\,dS(t) = \sigma_2^2S^2\,dt$$
and
$$dc(t,S(t)) = c_t\,dt + c_x\,dS + \frac12 c_{xx}\,dS\,dS$$
Now we compute
$$\begin{aligned}
d\big(e^{-rt}X(t)\big) &= -re^{-rt}X(t)\,dt + e^{-rt}\,dX(t) \\
&= -re^{-rt}X(t)\,dt + e^{-rt}\Big[dc - c_x\,dS + r(X - c + Sc_x)\,dt - \frac12(\sigma_2^2-\sigma_1^2)S^2c_{xx}\,dt\Big] \\
&= e^{-rt}\Big[c_t\,dt + c_x\,dS + \frac12\sigma_2^2S^2c_{xx}\,dt - c_x\,dS + r(-c + Sc_x)\,dt - \frac12(\sigma_2^2-\sigma_1^2)S^2c_{xx}\,dt\Big] \\
&= e^{-rt}\Big(c_t - rc + rSc_x + \frac12\sigma_1^2S^2c_{xx}\Big)dt \\
&= 0
\end{aligned}$$
since $c$ satisfies the Black-Scholes equation with volatility $\sigma_1$. This implies $e^{-rt}X(t)$ is a non-random constant, and $e^{-rt}X(t) = e^{-r\cdot 0}X(0) = X(0) = 0$ for any $t$. Since $r > 0$, $X(t) = 0$ for any $t$.
4.9 Exercise 4.13

Since $B_1(t)$ and $B_2(t)$ are Brownian motions, $dB_1(t)\,dB_1(t) = dt$ and $dB_2(t)\,dB_2(t) = dt$. Substituting the definitions
$$dB_1(t) = dW_1(t) \tag{4.5}$$
$$dB_2(t) = \rho(t)\,dW_1(t) + \sqrt{1-\rho^2(t)}\,dW_2(t) \tag{4.6}$$
one checks that $dW_1(t)\,dW_1(t) = dt$, $dW_2(t)\,dW_2(t) = dt$ and $dW_1(t)\,dW_2(t) = 0$. Based on the fact that $W_1(t)$ and $W_2(t)$ are continuous martingales with $W_1(0) = 0 = W_2(0)$, and (4.5), (4.4), (4.6), by applying Theorem 4.6.5 we conclude that $W_1(t)$ and $W_2(t)$ are independent Brownian motions.
4.10 Exercise 4.15

1. Note that
$$dW_j(t)\,dW_k(t) = \begin{cases}dt & \text{if } j = k \\ 0 & \text{if } j \ne k\end{cases} \tag{4.7}$$
and
$$dB_i(t) = \sum_{j=1}^{d}\frac{\sigma_{ij}(t)}{\sigma_i(t)}\,dW_j(t)$$
Then
$$\begin{aligned}
dB_i(t)\,dB_i(t) &= \Big[\sum_{j=1}^{d}\frac{\sigma_{ij}(t)}{\sigma_i(t)}\,dW_j(t)\Big]\Big[\sum_{k=1}^{d}\frac{\sigma_{ik}(t)}{\sigma_i(t)}\,dW_k(t)\Big] \\
&= \frac{1}{\sigma_i^2(t)}\Big[\sum_{j=1}^{d}\sigma_{ij}(t)\,dW_j(t)\Big]\Big[\sum_{k=1}^{d}\sigma_{ik}(t)\,dW_k(t)\Big] \\
&= \frac{1}{\sigma_i^2(t)}\sum_{j=1}^{d}\sigma_{ij}^2(t)\,dt \qquad \text{by (4.7)} \\
&= dt
\end{aligned}$$
since $\sigma_i^2(t) = \sum_{j=1}^{d}\sigma_{ij}^2(t)$ by definition. As $B_i$ is a continuous martingale with $B_i(0) = 0$ and $dB_i\,dB_i = dt$, Lévy's theorem implies each $B_i$ is a Brownian motion.
2.
$$\begin{aligned}
dB_i(t)\,dB_k(t) &= \Big[\sum_{j=1}^{d}\frac{\sigma_{ij}(t)}{\sigma_i(t)}\,dW_j(t)\Big]\Big[\sum_{l=1}^{d}\frac{\sigma_{kl}(t)}{\sigma_k(t)}\,dW_l(t)\Big] \\
&= \frac{1}{\sigma_i(t)\sigma_k(t)}\Big[\sum_{j=1}^{d}\sigma_{ij}(t)\,dW_j(t)\Big]\Big[\sum_{l=1}^{d}\sigma_{kl}(t)\,dW_l(t)\Big] \\
&= \frac{1}{\sigma_i(t)\sigma_k(t)}\sum_{j=1}^{d}\sigma_{ij}(t)\sigma_{kj}(t)\,dt \qquad \text{by (4.7)} \\
&= \rho_{ik}(t)\,dt
\end{aligned}$$
4.11 Exercise 4.18

1. In (4.4.13), let
$$f(t,x) = e^{-\theta x - (r+\frac12\theta^2)t}$$
so that $\zeta(t) = f(t,W(t))$. Then
$$f_t(t,x) = -\Big(r+\frac12\theta^2\Big)f(t,x), \qquad f_x(t,x) = -\theta f(t,x), \qquad f_{xx}(t,x) = \theta^2 f(t,x)$$
and the Itô formula gives
$$d\zeta(t) = f_t\,dt + f_x\,dW(t) + \frac12 f_{xx}\,dt = -\zeta(t)\big[\theta\,dW(t) + r\,dt\big]$$
where $\theta = (\alpha - r)/\sigma$.
4.12 Exercise 4.19

1. By the definition $dB(t) = \mathrm{sign}(W(t))\,dW(t)$, $B(t)$ is a martingale with continuous paths and $B(0) = 0$. Furthermore, since $dB(t)\,dB(t) = \mathrm{sign}^2(W(t))\,dW(t)\,dW(t) = dt$, Lévy's theorem implies $B(t)$ is a Brownian motion.
2. By the Itô product formula,
$$\begin{aligned}
d[B(t)W(t)] &= W(t)\,dB(t) + B(t)\,dW(t) + dB(t)\,dW(t) \\
&= \mathrm{sign}(W(t))W(t)\,dW(t) + B(t)\,dW(t) + \mathrm{sign}(W(t))\,dW(t)\,dW(t) \\
&= \big[\mathrm{sign}(W(t))W(t) + B(t)\big]dW(t) + \mathrm{sign}(W(t))\,dt
\end{aligned}$$
Integrating both sides, we have
$$B(t)W(t) = B(0)W(0) + \int_0^t\mathrm{sign}(W(s))\,ds + \int_0^t\big[\mathrm{sign}(W(s))W(s) + B(s)\big]dW(s) = \int_0^t\mathrm{sign}(W(s))\,ds + \int_0^t\big[\mathrm{sign}(W(s))W(s) + B(s)\big]dW(s)$$
Note that the expectation of an Itô integral is 0 and $W(s) \sim N(0,s)$, which implies
$$E[\mathrm{sign}(W(s))] = \int_{-\infty}^{\infty}\mathrm{sign}(x)\frac{1}{\sqrt{2\pi s}}e^{-x^2/2s}\,dx = \int_0^{\infty}\frac{1}{\sqrt{2\pi s}}e^{-x^2/2s}\,dx - \int_{-\infty}^{0}\frac{1}{\sqrt{2\pi s}}e^{-x^2/2s}\,dx = 0$$
so
$$E[B(t)W(t)] = \int_0^tE[\mathrm{sign}(W(s))]\,ds = 0$$
Note that $E[W(t)] = 0 = E[B(t)]$; the above result then also implies that $B(t)$ and $W(t)$ are uncorrelated.
3. By the Itô product formula applied to $B(t)W^2(t)$, with $d(W^2(t)) = 2W(t)\,dW(t) + dt$,
$$d[B(t)W^2(t)] = \big[2B(t)W(t) + \mathrm{sign}(W(t))W^2(t)\big]dW(t) + \big[B(t) + 2\,\mathrm{sign}(W(t))W(t)\big]dt$$
Note that again the expectation of an Itô integral is 0 and $E[B(s)] = 0$. Then by taking the expectation, we have
$$E[B(t)W^2(t)] = \int_0^tE\big[B(s) + 2\,\mathrm{sign}(W(s))W(s)\big]ds = 2\int_0^tE\big[\mathrm{sign}(W(s))W(s)\big]ds$$
Now
$$E[\mathrm{sign}(W(s))W(s)] = \int_{-\infty}^{\infty}\mathrm{sign}(x)\,x\,\frac{1}{\sqrt{2\pi s}}e^{-x^2/2s}\,dx = 2\int_0^{\infty}\frac{x}{\sqrt{2\pi s}}e^{-x^2/2s}\,dx > 0$$
so $E[B(t)W^2(t)] > 0$ for $t > 0$. If $B(t)$ and $W(t)$ were independent, we would have $E[B(t)W^2(t)] = E[B(t)]E[W^2(t)] = 0$; hence $B(t)$ and $W(t)$ are uncorrelated but not independent.
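A simulation sketch of this exercise (mine, not from the text): discretize $B(t) = \int_0^t \mathrm{sign}(W(s))\,dW(s)$ on a grid, then estimate $E[B(T)W(T)]$ (should be near 0) and $E[B(T)W^2(T)]$ (should be clearly positive). The grid size, path count, and the sign convention at 0 are arbitrary choices.

```python
import random
import math
from statistics import mean

random.seed(19)
n_steps, n_paths, T = 200, 20_000, 1.0
dt = T / n_steps
bw, bww = [], []
for _ in range(n_paths):
    w = b = 0.0
    for _ in range(n_steps):
        dw = random.gauss(0, math.sqrt(dt))
        sgn = 1.0 if w >= 0 else -1.0  # non-anticipating: uses w before the step
        b += sgn * dw                  # Euler step for dB = sign(W) dW
        w += dw
    bw.append(b * w)
    bww.append(b * w * w)
print(round(mean(bw), 3), round(mean(bww), 3))  # first near 0, second clearly > 0
```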
Chapter 5
5.1 Exercise 5.1

1. Let
$$X(t) = \int_0^t\sigma(s)\,dW(s) + \int_0^t\Big(\alpha(s) - R(s) - \frac12\sigma^2(s)\Big)ds$$
so that
$$dX(t) = \sigma(t)\,dW(t) + \Big(\alpha(t) - R(t) - \frac12\sigma^2(t)\Big)dt, \qquad dX(t)\,dX(t) = \sigma^2(t)\,dt$$
For $f(x) = S(0)e^x$, we have $f'(x) = f''(x) = f(x)$ and $f(X(t)) = S(0)e^{X(t)} = D(t)S(t)$; therefore by Itô's lemma,
$$\begin{aligned}
d(D(t)S(t)) &= df(X(t)) \\
&= f'(X(t))\,dX(t) + \frac12 f''(X(t))\,dX(t)\,dX(t) \\
&= D(t)S(t)\Big[\sigma(t)\,dW(t) + \Big(\alpha(t) - R(t) - \frac12\sigma^2(t)\Big)dt + \frac12\sigma^2(t)\,dt\Big] \\
&= D(t)S(t)\big[\sigma(t)\,dW(t) + (\alpha(t) - R(t))\,dt\big] \\
&= D(t)S(t)\sigma(t)\big[dW(t) + \Theta(t)\,dt\big]
\end{aligned}$$
with $\Theta(t) = (\alpha(t) - R(t))/\sigma(t)$.
2. The other way is by the Itô product rule:
$$\begin{aligned}
d(D(t)S(t)) &= S(t)\,dD(t) + D(t)\,dS(t) + dD(t)\,dS(t) \\
&= S(t)\big[-R(t)D(t)\,dt\big] + D(t)\big[\alpha(t)S(t)\,dt + \sigma(t)S(t)\,dW(t)\big] + \big[-R(t)D(t)\,dt\big]\big[\alpha(t)S(t)\,dt + \sigma(t)S(t)\,dW(t)\big] \\
&= \big[\alpha(t)S(t)D(t) - S(t)R(t)D(t)\big]dt + D(t)\sigma(t)S(t)\,dW(t) \\
&= D(t)S(t)\big[\sigma(t)\,dW(t) + (\alpha(t) - R(t))\,dt\big] \\
&= D(t)S(t)\sigma(t)\big[dW(t) + \Theta(t)\,dt\big]
\end{aligned}$$
5.2 Exercise 5.4

Under the risk-neutral measure,
$$d\log S(t) = \Big(r(t) - \frac{\sigma^2(t)}{2}\Big)dt + \sigma(t)\,d\widetilde{W}(t)$$
so $S(T) = S(0)e^X$ with
$$X = \int_0^T\Big(r(t) - \frac{\sigma^2(t)}{2}\Big)dt + \int_0^T\sigma(t)\,d\widetilde{W}(t)$$
By Theorem 4.4.9 of the text, $\int_0^T\sigma(t)\,d\widetilde{W}(t)$ is a normal random variable with mean 0 and variance $\int_0^T\sigma^2(t)\,dt$, which implies $X$ is normally distributed with mean $\int_0^T\big(r(t) - \frac{\sigma^2(t)}{2}\big)dt$ and variance $\int_0^T\sigma^2(t)\,dt$. Define the averages
$$R = \frac1T\int_0^Tr(t)\,dt, \qquad \Sigma^2 = \frac1T\int_0^T\sigma^2(t)\,dt$$
so that $X \sim N\big((R - \frac12\Sigma^2)T,\ \Sigma^2T\big)$. The discounted risk-neutral expected payoff of the call is then
$$e^{-RT}\widetilde{E}\big[(S(T)-K)^+\big] = S(0)\int_{x:S(T)\ge K}\frac{e^{-RT}e^x}{\sqrt{2\pi\Sigma^2T}}\exp\Big\{-\frac{\big(x - (RT - \frac12\Sigma^2T)\big)^2}{2\Sigma^2T}\Big\}dx - Ke^{-RT}\int_{x:S(T)\ge K}\frac{1}{\sqrt{2\pi\Sigma^2T}}\exp\Big\{-\frac{\big(x - (RT - \frac12\Sigma^2T)\big)^2}{2\Sigma^2T}\Big\}dx \tag{5.1}$$
Now
$$\begin{aligned}
\int_{x:S(T)\ge K}\frac{1}{\sqrt{2\pi\Sigma^2T}}\exp\Big\{-\frac{\big(x - (RT - \frac12\Sigma^2T)\big)^2}{2\Sigma^2T}\Big\}dx &= \widetilde{P}(S(T) \ge K) = \widetilde{P}\Big(X \ge \ln\frac{K}{S(0)}\Big) \\
&= \widetilde{P}\Bigg(\frac{X - (RT - \frac12\Sigma^2T)}{\Sigma\sqrt{T}} \ge \frac{\ln\frac{K}{S(0)} - (RT - \frac12\Sigma^2T)}{\Sigma\sqrt{T}}\Bigg) \\
&= N\Bigg(\frac{1}{\Sigma\sqrt{T}}\Big[\ln\frac{S(0)}{K} + \Big(R - \frac12\Sigma^2\Big)T\Big]\Bigg) = N(d_-) \tag{5.2}
\end{aligned}$$
and, completing the square in the first integral of (5.1),
$$\begin{aligned}
\int_{x:S(T)\ge K}\frac{e^{-RT}e^x}{\sqrt{2\pi\Sigma^2T}}\exp\Big\{-\frac{\big(x - (RT - \frac12\Sigma^2T)\big)^2}{2\Sigma^2T}\Big\}dx &= \int_{x:S(T)\ge K}\frac{1}{\sqrt{2\pi\Sigma^2T}}\exp\Big\{-\frac{\big(x - (RT + \frac12\Sigma^2T)\big)^2}{2\Sigma^2T}\Big\}dx \\
&= N\Bigg(\frac{1}{\Sigma\sqrt{T}}\Big[\ln\frac{S(0)}{K} + \Big(R + \frac12\Sigma^2\Big)T\Big]\Bigg) = N(d_+) \tag{5.3}
\end{aligned}$$
Substituting (5.2) and (5.3) into (5.1), the price of the call is
$$S(0)N(d_+) - Ke^{-RT}N(d_-)$$
5.3 Exercise 5.5

1. By the definition,
$$\frac{1}{Z(t)} = \exp\Big\{\int_0^t\Theta(u)\,dW(u) + \frac12\int_0^t\Theta^2(u)\,du\Big\}$$
Let
$$X(t) = \int_0^t\Theta(u)\,dW(u) + \frac12\int_0^t\Theta^2(u)\,du$$
then
$$\begin{aligned}
d\Big[\frac{1}{Z(t)}\Big] &= d\big[e^{X(t)}\big] \\
&= e^{X(t)}\,dX(t) + \frac12 e^{X(t)}\,dX(t)\,dX(t) \\
&= \frac{1}{Z(t)}\Big[\Theta(t)\,dW(t) + \frac12\Theta^2(t)\,dt\Big] + \frac{1}{2Z(t)}\Theta^2(t)\,dt \\
&= \frac{1}{Z(t)}\big[\Theta(t)\,dW(t) + \Theta^2(t)\,dt\big]
\end{aligned}$$
2. By Lemma 5.2.2 of the text, since $\widetilde{M}(t)$ is a martingale under $\widetilde{P}$, $\widetilde{M}(t)Z(t)$ is a martingale under $P$, so by the Martingale Representation Theorem there exists $\Gamma(u)$ with
$$\widetilde{M}(t)Z(t) = \widetilde{M}(0)Z(0) + \int_0^t\Gamma(u)\,dW(u)$$
Applying the Itô product rule to $\widetilde{M}(t) = [\widetilde{M}(t)Z(t)]\cdot\frac{1}{Z(t)}$ and using part 1, the terms combine into
$$d\widetilde{M}(t) = \widetilde{\Gamma}(t)\,dW(t) + \widetilde{\Gamma}(t)\Theta(t)\,dt = \widetilde{\Gamma}(t)\,d\widetilde{W}(t)$$
for an appropriate process $\widetilde{\Gamma}(t)$. Integrating both sides, we have
$$\widetilde{M}(t) = \widetilde{M}(0) + \int_0^t\widetilde{\Gamma}(u)\,d\widetilde{W}(u)$$
5.4 Exercise 5.12

1. First, each $\widetilde{B}_i(t)$ is a continuous martingale under $\widetilde{P}$ with $\widetilde{B}_i(0) = 0$. Secondly, since $dB_i(t) = \sum_j[\sigma_{ij}(t)/\sigma_i(t)]\,dW_j(t)$, we have $dB_i(t)\,dt = 0$, and furthermore
$$d\widetilde{B}_i(t)\,d\widetilde{B}_i(t) = \big[dB_i(t) + \Theta_i(t)\,dt\big]\big[dB_i(t) + \Theta_i(t)\,dt\big] = dB_i(t)\,dB_i(t) = dt$$
By Lévy's theorem, $\widetilde{B}_i(t)$ must be a Brownian motion for every $i$.
2. By
$$d\widetilde{B}_i(t) = dB_i(t) + \Theta_i(t)\,dt$$
we have
$$\begin{aligned}
dS_i(t) &= \alpha_i(t)S_i(t)\,dt + \sigma_i(t)S_i(t)\,dB_i(t) \\
&= \alpha_i(t)S_i(t)\,dt + \sigma_i(t)S_i(t)\big[d\widetilde{B}_i(t) - \Theta_i(t)\,dt\big] \\
&= \big[\alpha_i(t)S_i(t) - \sigma_i(t)\Theta_i(t)S_i(t)\big]dt + \sigma_i(t)S_i(t)\,d\widetilde{B}_i(t) \\
&= R(t)S_i(t)\,dt + \sigma_i(t)S_i(t)\,d\widetilde{B}_i(t)
\end{aligned}$$
Now note the fact that the expectation of an Itô integral is 0; if we take the expectation of the product-rule formula $d[B_i(t)B_k(t)] = B_i\,dB_k + B_k\,dB_i + \rho_{ik}(t)\,dt$, then
$$E[B_i(t)B_k(t)] = E\Big[\int_0^t\rho_{ik}(u)\,du\Big] = \int_0^t\rho_{ik}(u)\,du$$
Similarly we have $\widetilde{E}[\widetilde{B}_i(t)\widetilde{B}_k(t)] = \int_0^t\rho_{ik}(u)\,du$.
5. For $E[B_1(t)B_2(t)]$: since $dB_1(t) = dW_2(t)$ and $dB_2(t) = \mathrm{sign}(W_1(t))\,dW_2(t)$, by the Itô product rule,
$$\begin{aligned}
d[B_1(t)B_2(t)] &= B_1(t)\,dB_2(t) + B_2(t)\,dB_1(t) + dB_1(t)\,dB_2(t) \\
&= W_2(t)\,dB_2(t) + B_2(t)\,dW_2(t) + \mathrm{sign}(W_1(t))\,dW_2(t)\,dW_2(t)
\end{aligned}$$
By $dW_2(t)\,dW_2(t) = dt$, we may write this in integral form as
$$B_1(t)B_2(t) = B_1(0)B_2(0) + \int_0^tW_2(u)\,dB_2(u) + \int_0^tB_2(u)\,dW_2(u) + \int_0^t\mathrm{sign}(W_1(u))\,du \tag{5.4}$$
Note that the expectation of an Itô integral is 0 and $W_1(u) \sim N(0,u)$ under $P$, which implies
$$E[\mathrm{sign}(W_1(u))] = \int_{-\infty}^{\infty}\mathrm{sign}(x)\frac{1}{\sqrt{2\pi u}}e^{-x^2/2u}\,dx = \int_0^{\infty}\frac{1}{\sqrt{2\pi u}}e^{-x^2/2u}\,dx - \int_{-\infty}^{0}\frac{1}{\sqrt{2\pi u}}e^{-x^2/2u}\,dx = 0 \tag{5.5}$$
Then we have
$$E[B_1(t)B_2(t)] = 0 \tag{5.6}$$
For $\widetilde{E}[\widetilde{B}_1(t)\widetilde{B}_2(t)]$, the same product-rule computation gives
$$\widetilde{E}[\widetilde{B}_1(t)\widetilde{B}_2(t)] = \int_0^t\widetilde{E}[\mathrm{sign}(W_1(u))]\,du$$
where now, under $\widetilde{P}$, $W_1(u)$ is normal with mean $u$ and variance $u$, so
$$\begin{aligned}
\widetilde{E}[\mathrm{sign}(W_1(u))] &= \int_0^{\infty}\frac{1}{\sqrt{2\pi u}}e^{-(x-u)^2/2u}\,dx - \int_{-\infty}^{0}\frac{1}{\sqrt{2\pi u}}e^{-(x-u)^2/2u}\,dx \\
&= \int_0^{\infty}\frac{1}{\sqrt{2\pi u}}e^{-(x-u)^2/2u}\,dx - \int_0^{\infty}\frac{1}{\sqrt{2\pi u}}e^{-(x+u)^2/2u}\,dx \\
&= \frac{1}{\sqrt{2\pi u}}\int_0^{\infty}\Big[e^{-(x-u)^2/2u} - e^{-(x+u)^2/2u}\Big]dx > 0 \qquad \text{since } u > 0
\end{aligned}$$
Then we have
$$\widetilde{E}[\widetilde{B}_1(t)\widetilde{B}_2(t)] > 0 \tag{5.7}$$
Thus (5.6) and (5.7) give an example in which $\widetilde{E}[\widetilde{B}_1(t)\widetilde{B}_2(t)] > 0 \ne 0 = E[B_1(t)B_2(t)]$.
5.5 Exercise 5.13

1.
$$\widetilde{E}[\widetilde{W}_1(t)] = 0, \qquad \widetilde{E}[\widetilde{W}_2(t)] = 0$$
since $\widetilde{W}_1$ and $\widetilde{W}_2$ are Brownian motions under $\widetilde{P}$; in particular
$$\widetilde{E}[W_2(t)] = \widetilde{E}\Big[\widetilde{W}_2(t) - \int_0^tW_1(u)\,du\Big] = -\int_0^t\widetilde{E}[W_1(u)]\,du = 0$$
2. Because $\widetilde{E}[W_1(T)] = 0 = \widetilde{E}[W_2(T)]$,
$$\widetilde{\mathrm{Cov}}[W_1(T), W_2(T)] = \widetilde{E}[W_1(T)W_2(T)] - \widetilde{E}[W_1(T)]\,\widetilde{E}[W_2(T)] = \widetilde{E}[W_1(T)W_2(T)]$$
Now by the Itô product rule, we have
$$W_1(T)W_2(T) = W_1(0)W_2(0) + \int_0^TW_1(t)\,dW_2(t) + \int_0^TW_2(t)\,dW_1(t)$$
Here, since
$$d\widetilde{W}_1(t) = dW_1(t), \qquad d\widetilde{W}_2(t) = dW_2(t) + W_1(t)\,dt$$
we have
$$dW_1(t) = d\widetilde{W}_1(t), \qquad dW_2(t) = d\widetilde{W}_2(t) - W_1(t)\,dt$$
and therefore
$$W_1(T)W_2(T) = \int_0^TW_1(t)\,d\widetilde{W}_2(t) - \int_0^TW_1^2(t)\,dt + \int_0^TW_2(t)\,d\widetilde{W}_1(t)$$
Now since the expectation under $\widetilde{P}$ of any Itô integral with respect to $\widetilde{W}_i$ is 0, and $W_1(0) = 0 = W_2(0)$,
$$\widetilde{E}[W_1(T)W_2(T)] = -\int_0^T\widetilde{E}[W_1^2(t)]\,dt = -\int_0^Tt\,dt = -\frac{T^2}{2}$$
where we used the fact that $\widetilde{E}[W_1^2(t)] = \widetilde{E}[\widetilde{W}_1^2(t)] = t$.
Exercise 5.14
1. Since
d ert X(t)
= rert X(t)dt + ert dX(t)
= rert X(t)dt + ert (t)dS(t) ert a(t)dt + rert (X(t) (t)S(t)) dt
h
i
(t) + adt a(t)dt r(t)S(t)dt
=ert (t) rS(t)dt + S(t)dW
(t)
=S(t)ert (t)dW
contains no dt term, ert X(t) is a martingale.
2. (a) Let $Z(t) = \sigma\widetilde{W}(t) + \big(r - \frac{1}{2}\sigma^2\big)t$, so that $Y(t) = e^{Z(t)}$; then
$$\begin{aligned}
dY(t) &= d\big(e^{Z(t)}\big) = e^{Z(t)}\,dZ(t) + \frac{1}{2}e^{Z(t)}\,dZ(t)\,dZ(t) \\
&= Y(t)\Big[\sigma\,d\widetilde{W}(t) + \Big(r - \frac{1}{2}\sigma^2\Big)dt\Big] + \frac{1}{2}Y(t)\sigma^2\,dt \\
&= rY(t)\,dt + \sigma Y(t)\,d\widetilde{W}(t).
\end{aligned}$$
(b) Since
$$d\big(e^{-rt}Y(t)\big) = -re^{-rt}Y(t)\,dt + e^{-rt}\,dY(t) = -re^{-rt}Y(t)\,dt + re^{-rt}Y(t)\,dt + \sigma e^{-rt}Y(t)\,d\widetilde{W}(t) = \sigma Y(t)e^{-rt}\,d\widetilde{W}(t)$$
contains no $dt$ term, $e^{-rt}Y(t)$ is a martingale.
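Part (b) says in particular that $\widetilde{E}[e^{-rT}Y(T)] = Y(0) = 1$. This can be verified by quadrature (our own sketch, with arbitrary parameter values), writing $Y(T) = \exp\big(\sigma\sqrt{T}Z + (r - \tfrac12\sigma^2)T\big)$ with $Z \sim N(0,1)$:

```python
import math

def disc_mean_Y(r=0.05, sigma=0.4, T=2.0, lo=-10.0, hi=10.0, n=100001):
    """e^{-rT} E[Y(T)] by trapezoidal integration against the N(0,1) density."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        z = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        y = math.exp(sigma * math.sqrt(T) * z + (r - 0.5 * sigma ** 2) * T)
        total += w * y * math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi) * h
    return math.exp(-r * T) * total

print(disc_mean_Y())  # approximately 1.0 = Y(0)
```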
(c) By Itô's product rule,
$$\begin{aligned}
dS(t) &= S(0)\,dY(t) + \int_0^t \frac{a}{Y(s)}\,ds\,dY(t) + Y(t)\,\frac{a}{Y(t)}\,dt + dY(t)\,\frac{a}{Y(t)}\,dt \\
&= \Big(S(0) + \int_0^t \frac{a}{Y(s)}\,ds\Big)\,dY(t) + a\,dt \qquad \text{since } dY(t)\,dt = 0 \\
&= \frac{S(t)}{Y(t)}\,dY(t) + a\,dt \\
&= \frac{S(t)}{Y(t)}\big[rY(t)\,dt + \sigma Y(t)\,d\widetilde{W}(t)\big] + a\,dt \\
&= rS(t)\,dt + \sigma S(t)\,d\widetilde{W}(t) + a\,dt. \tag{5.8}
\end{aligned}$$
3. Since $S(T) = Y(T)\big(S(0) + \int_0^T \frac{a}{Y(s)}\,ds\big)$,
$$\begin{aligned}
\widetilde{E}[S(T)\mid\mathcal{F}(t)] &= \widetilde{E}\Big[S(0)Y(T) + Y(T)\int_0^T \frac{a}{Y(s)}\,ds \,\Big|\, \mathcal{F}(t)\Big] \\
&= S(0)\widetilde{E}[Y(T)\mid\mathcal{F}(t)] + \widetilde{E}\Big[Y(T)\int_0^t \frac{a}{Y(s)}\,ds \,\Big|\, \mathcal{F}(t)\Big] + \widetilde{E}\Big[Y(T)\int_t^T \frac{a}{Y(s)}\,ds \,\Big|\, \mathcal{F}(t)\Big] && \text{by linearity} \\
&= S(0)\widetilde{E}[Y(T)\mid\mathcal{F}(t)] + \widetilde{E}[Y(T)\mid\mathcal{F}(t)]\int_0^t \frac{a}{Y(s)}\,ds + a\int_t^T \widetilde{E}\Big[\frac{Y(T)}{Y(s)} \,\Big|\, \mathcal{F}(t)\Big]\,ds && \text{by taking out what is known} \tag{5.9}
\end{aligned}$$
Since $e^{-rt}Y(t)$ is a martingale by part 2(b),
$$\widetilde{E}[Y(T)\mid\mathcal{F}(t)] = e^{r(T-t)}Y(t). \tag{5.10}$$
Since $\widetilde{W}(T) - \widetilde{W}(s)$ is normally distributed with mean 0 and variance $T-s$ and is independent of $\mathcal{F}(t)$ for $s \ge t$, by taking the expectation we have
$$\widetilde{E}\Big[\frac{Y(T)}{Y(s)} \,\Big|\, \mathcal{F}(t)\Big] = \widetilde{E}\big[e^{\sigma(\widetilde{W}(T)-\widetilde{W}(s))}\big]\,e^{(r-\frac{1}{2}\sigma^2)(T-s)} = e^{\frac{1}{2}\sigma^2(T-s)}\,e^{(r-\frac{1}{2}\sigma^2)(T-s)} = e^{r(T-s)}. \tag{5.11}$$
Now by substituting (5.10) and (5.11) into (5.9), we have
$$\begin{aligned}
\widetilde{E}[S(T)\mid\mathcal{F}(t)] &= S(0)e^{r(T-t)}Y(t) + e^{r(T-t)}Y(t)\int_0^t \frac{a}{Y(s)}\,ds + a\int_t^T e^{r(T-s)}\,ds \\
&= e^{r(T-t)}S(t) + \frac{a}{r}\big[e^{r(T-t)} - 1\big]. \tag{5.12}
\end{aligned}$$
4. Now we differentiate (5.12):
$$d\big(\widetilde{E}[S(T)\mid\mathcal{F}(t)]\big) = d\big(e^{r(T-t)}S(t)\big) + \frac{a}{r}\,d\big(e^{r(T-t)} - 1\big) = e^{rT}\,d\big(e^{-rt}S(t)\big) - ae^{r(T-t)}\,dt. \tag{5.13}$$
Since (5.8) gives
$$d\big(e^{-rt}S(t)\big) = \sigma e^{-rt}S(t)\,d\widetilde{W}(t) + ae^{-rt}\,dt, \tag{5.14}$$
substituting (5.14) into (5.13) yields
$$d\big(\widetilde{E}[S(T)\mid\mathcal{F}(t)]\big) = \sigma e^{r(T-t)}S(t)\,d\widetilde{W}(t),$$
which contains no $dt$ term, so the futures price $Fut_S(t,T) = \widetilde{E}[S(T)\mid\mathcal{F}(t)]$ is a martingale under $\widetilde{P}$.
5. $\widetilde{E}\big[e^{-r(T-t)}(S(T) - K)\mid\mathcal{F}(t)\big] = 0$ implies that $K = \widetilde{E}[S(T)\mid\mathcal{F}(t)]$, i.e.,
$$For_S(t,T) = K = \widetilde{E}[S(T)\mid\mathcal{F}(t)] = Fut_S(t,T).$$
6. According to the hedging strategy, the value of the portfolio at time $t$ is governed by (5.9.7) of Text with $\Delta(t) = 1$, i.e.,
$$dX(t) = dS(t) - a\,dt + r\big(X(t) - S(t)\big)\,dt.$$
Then (5.8) implies that
$$d\big(e^{-rt}X(t)\big) = \sigma S(t)e^{-rt}\,d\widetilde{W}(t).$$
By integrating both sides from 0 to $T$, we have
$$\begin{aligned}
e^{-rT}X(T) - X(0) &= \int_0^T \sigma S(t)e^{-rt}\,d\widetilde{W}(t) \\
&= \int_0^T \big[d\big(e^{-rt}S(t)\big) - ae^{-rt}\,dt\big] && \text{by (5.14)} \\
&= e^{-rT}S(T) - S(0) + \frac{a}{r}\big(e^{-rT} - 1\big).
\end{aligned}$$
Since $X(0) = 0$,
$$X(T) = S(T) - S(0)e^{rT} - \frac{a}{r}\big(e^{rT} - 1\big).$$
By (5.12),
$$S(0)e^{rT} + \frac{a}{r}\big(e^{rT} - 1\big) = Fut_S(0,T) = For_S(0,T).$$
So
$$X(T) = S(T) - For_S(0,T).$$
Therefore eventually, at time $T$, she delivers the asset at the price $For_S(0,T)$, which is exactly the amount that she needs to cover the debt in the money market: $S(T) - X(T) = S(0)e^{rT} + \frac{a}{r}\big(e^{rT} - 1\big)$.
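Equation (5.12) at $t = 0$ gives $\widetilde{E}[S(T)] = S(0)e^{rT} + \frac{a}{r}(e^{rT} - 1)$, which a Monte Carlo simulation of the asset can reproduce (our own sketch; the parameter values are arbitrary). Here $Y$ is simulated exactly through its logarithm and $S(T) = Y(T)\big(S(0) + \int_0^T a/Y(s)\,ds\big)$:

```python
import math
import random

def mc_mean_S(S0=100.0, r=0.05, sigma=0.2, a=4.0, T=1.0,
              steps=100, paths=10000, seed=1):
    """Monte Carlo estimate of E[S(T)] for S(t) = Y(t)(S0 + int_0^t a/Y ds)."""
    rng = random.Random(seed)
    dt = T / steps
    sd = math.sqrt(dt)
    drift = (r - 0.5 * sigma * sigma) * dt
    acc = 0.0
    for _ in range(paths):
        logy = 0.0
        integral = 0.0
        for _ in range(steps):
            integral += a * math.exp(-logy) * dt   # left-point rule for int a/Y ds
            logy += drift + sigma * rng.gauss(0.0, sd)
        acc += math.exp(logy) * (S0 + integral)
    return acc / paths

est = mc_mean_S()
exact = 100.0 * math.exp(0.05) + (4.0 / 0.05) * (math.exp(0.05) - 1.0)
print(est, exact)  # the Monte Carlo average is close to the closed form
```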
Chapter 6

6.1 Exercise 6.1

1. It is easy to see $Z(t) = e^0 = 1$ since $\int_t^t \sigma(v)\,dW(v) = 0 = \int_t^t \big(b(v) - \sigma^2(v)/2\big)\,dv$. Let
$$S(u) = \int_t^u \sigma(v)\,dW(v) + \int_t^u \big(b(v) - \sigma^2(v)/2\big)\,dv.$$
Then $Z(u) = e^{S(u)}$ and
$$dS(u) = \sigma(u)\,dW(u) + \big(b(u) - \sigma^2(u)/2\big)\,du.$$
Now set $f(t,x) = e^x$ in Itô's formula (4.4.23) of Text; then $f_x = f_{xx} = e^x$ and $f_t = 0$. Then we have
$$\begin{aligned}
dZ(u) &= df(u, S(u)) = e^{S(u)}\,dS(u) + \frac{1}{2}e^{S(u)}\,dS(u)\,dS(u) \\
&= Z(u)\big[\sigma(u)\,dW(u) + \big(b(u) - \sigma^2(u)/2\big)\,du\big] + \frac{1}{2}Z(u)\sigma^2(u)\,du && \text{(since } dW(u)\,dW(u) = du\text{)} \\
&= b(u)Z(u)\,du + \sigma(u)Z(u)\,dW(u).
\end{aligned}$$
6.2 Exercise 6.3

1. By $C'(s,T) = b(s)C(s,T) - 1$,
$$\frac{d}{ds}\Big[e^{-\int_0^s b(v)\,dv}\,C(s,T)\Big] = -b(s)e^{-\int_0^s b(v)\,dv}\,C(s,T) + e^{-\int_0^s b(v)\,dv}\,C'(s,T) = -e^{-\int_0^s b(v)\,dv}.$$
2. Integrating from $t$ to $T$,
$$\int_t^T \frac{d}{ds}\Big[e^{-\int_0^s b(v)\,dv}\,C(s,T)\Big]\,ds = -\int_t^T e^{-\int_0^s b(v)\,dv}\,ds,$$
$$e^{-\int_0^T b(v)\,dv}\,C(T,T) - e^{-\int_0^t b(v)\,dv}\,C(t,T) = -\int_t^T e^{-\int_0^s b(v)\,dv}\,ds.$$
By $C(T,T) = 0$,
$$C(t,T) = \frac{\int_t^T e^{-\int_0^s b(v)\,dv}\,ds}{e^{-\int_0^t b(v)\,dv}} = \int_t^T e^{-\int_t^s b(v)\,dv}\,ds.$$
3.
$$A(T,T) - A(t,T) = \int_t^T \Big[-a(s)C(s,T) + \frac{\sigma^2(s)}{2}C^2(s,T)\Big]\,ds,$$
and since $A(T,T) = 0$,
$$A(t,T) = \int_t^T \Big[a(s)C(s,T) - \frac{\sigma^2(s)}{2}C^2(s,T)\Big]\,ds.$$
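For constant $b$ the formula specializes to $C(t,T) = (1 - e^{-b(T-t)})/b$, and both the quadrature representation and the ODE $C'(t,T) = bC(t,T) - 1$ can be checked numerically (our own sketch; the parameter values are arbitrary):

```python
import math

b, T = 0.3, 5.0

def C_closed(t):
    """For constant b: C(t,T) = int_t^T e^{-b(s-t)} ds = (1 - e^{-b(T-t)})/b."""
    return (1.0 - math.exp(-b * (T - t))) / b

def C_quad(t, n=20000):
    """Trapezoidal quadrature of C(t,T) = int_t^T exp(-int_t^s b dv) ds."""
    h = (T - t) / n
    total = 0.0
    for i in range(n + 1):
        s = t + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-b * (s - t)) * h
    return total

t0, eps = 1.0, 1e-5
dCdt = (C_closed(t0 + eps) - C_closed(t0 - eps)) / (2.0 * eps)
print(abs(C_quad(t0) - C_closed(t0)))        # quadrature matches the closed form
print(abs(dCdt - (b * C_closed(t0) - 1.0)))  # C satisfies C' = bC - 1
```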
6.3 Exercise 6.4

1. Since $\varphi(t) = e^{\frac{\sigma^2}{2}\int_t^T C(u,T)\,du}$,
$$\varphi'(t) = \varphi(t)\,\frac{d}{dt}\Big[\frac{\sigma^2}{2}\int_t^T C(u,T)\,du\Big] = -\frac{\sigma^2}{2}\varphi(t)C(t,T)$$
and
$$\varphi''(t) = -\frac{\sigma^2}{2}\varphi'(t)C(t,T) - \frac{\sigma^2}{2}\varphi(t)C'(t,T);$$
therefore
$$C'(t,T) = \frac{-\frac{\sigma^2}{2}\varphi'(t)C(t,T) - \varphi''(t)}{\frac{\sigma^2}{2}\varphi(t)} = \frac{\sigma^2}{2}C^2(t,T) - \frac{2\varphi''(t)}{\sigma^2\varphi(t)}.$$
Substituting this and $C(t,T) = -\frac{2\varphi'(t)}{\sigma^2\varphi(t)}$ into the equation $C'(t,T) = bC(t,T) + \frac{\sigma^2}{2}C^2(t,T) - 1$ gives
$$-\frac{2\varphi''(t)}{\sigma^2\varphi(t)} = -\frac{2b\varphi'(t)}{\sigma^2\varphi(t)} - 1, \qquad\text{i.e.,}\qquad \varphi''(t) = b\varphi'(t) + \frac{\sigma^2}{2}\varphi(t).$$
Since $\lambda = \frac{b}{2} \pm \gamma$, with $\gamma = \frac{1}{2}\sqrt{b^2 + 2\sigma^2}$, are the roots of $\lambda^2 - b\lambda - \frac{\sigma^2}{2} = 0$, the general solution is
$$\varphi(t) = a_1 e^{(\frac{b}{2}+\gamma)t} + a_2 e^{(\frac{b}{2}-\gamma)t}.$$
The conditions $\varphi(T) = 1$ and $\varphi'(T) = -\frac{\sigma^2}{2}\varphi(T)C(T,T) = 0$ determine
$$a_1 = \frac{\sigma^2\,e^{-(\frac{b}{2}+\gamma)T}}{4\gamma\big(\gamma+\frac{b}{2}\big)} = \frac{2c_1}{\sigma^2}\Big(\gamma-\frac{b}{2}\Big)e^{-(\frac{b}{2}+\gamma)T}, \qquad a_2 = \frac{\sigma^2\,e^{-(\frac{b}{2}-\gamma)T}}{4\gamma\big(\gamma-\frac{b}{2}\big)} = \frac{2c_2}{\sigma^2}\Big(\gamma+\frac{b}{2}\Big)e^{-(\frac{b}{2}-\gamma)T},$$
so $c_1 = c_2 = \sigma^2/(4\gamma)$.
5. By $\gamma = \frac{1}{2}\sqrt{b^2 + 2\sigma^2}$, we have
$$\gamma^2 - \frac{b^2}{4} = \Big(\gamma+\frac{b}{2}\Big)\Big(\gamma-\frac{b}{2}\Big) = \frac{\sigma^2}{2};$$
then
$$\begin{aligned}
\varphi(t) &= c_1 e^{-\frac{b}{2}(T-t)}\,\frac{2}{\sigma^2}\Big[\Big(\gamma-\frac{b}{2}\Big)e^{-\gamma(T-t)} + \Big(\gamma+\frac{b}{2}\Big)e^{\gamma(T-t)}\Big] \\
&= c_1 e^{-\frac{b}{2}(T-t)}\,\frac{2}{\sigma^2}\Big[b\,\frac{e^{\gamma(T-t)} - e^{-\gamma(T-t)}}{2} + 2\gamma\,\frac{e^{\gamma(T-t)} + e^{-\gamma(T-t)}}{2}\Big] \\
&= c_1 e^{-\frac{b}{2}(T-t)}\,\frac{2}{\sigma^2}\big[b\sinh(\gamma(T-t)) + 2\gamma\cosh(\gamma(T-t))\big]
\end{aligned}$$
and
$$\varphi'(t) = -2c_1 e^{-\frac{b}{2}(T-t)}\sinh(\gamma(T-t)), \qquad C(t,T) = -\frac{2\varphi'(t)}{\sigma^2\varphi(t)} = \frac{2\sinh(\gamma(T-t))}{b\sinh(\gamma(T-t)) + 2\gamma\cosh(\gamma(T-t))}.$$
The condition $\varphi(T) = 1$ reads
$$c_1\,\frac{2}{\sigma^2}\big[b\sinh(0) + 2\gamma\cosh(0)\big] = \frac{4\gamma c_1}{\sigma^2} = 1 \quad\Longrightarrow\quad c_1 = \frac{\sigma^2}{4\gamma}.$$
Eventually, using $A'(s,T) = -aC(s,T)$ and $\varphi(T) = 1$,
$$\begin{aligned}
A(t,T) &= A(T,T) - \int_t^T A'(s,T)\,ds = 0 + a\int_t^T C(s,T)\,ds = -\frac{2a}{\sigma^2}\int_t^T \frac{\varphi'(s)}{\varphi(s)}\,ds = \frac{2a}{\sigma^2}\log\frac{\varphi(t)}{\varphi(T)} \\
&= \frac{2a}{\sigma^2}\log\Big[\frac{2c_1}{\sigma^2}\,\frac{b\sinh(\gamma(T-t)) + 2\gamma\cosh(\gamma(T-t))}{e^{\frac{b}{2}(T-t)}}\Big] \\
&= -\frac{2a}{\sigma^2}\log\Big[\frac{2\gamma\,e^{\frac{b}{2}(T-t)}}{b\sinh(\gamma(T-t)) + 2\gamma\cosh(\gamma(T-t))}\Big].
\end{aligned}$$
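The resulting $C(t,T)$ can be checked against the Riccati equation $C'(t,T) = bC(t,T) + \frac{\sigma^2}{2}C^2(t,T) - 1$ and the terminal condition $C(T,T) = 0$ by finite differences (our own sketch; the parameter values are arbitrary):

```python
import math

b, sigma = 0.5, 0.3
gamma = 0.5 * math.sqrt(b * b + 2.0 * sigma * sigma)

def C(t, T):
    """C(t,T) = 2 sinh(g(T-t)) / (b sinh(g(T-t)) + 2g cosh(g(T-t))), g = gamma."""
    tau = T - t
    s, c = math.sinh(gamma * tau), math.cosh(gamma * tau)
    return 2.0 * s / (b * s + 2.0 * gamma * c)

T, t0, eps = 10.0, 3.0, 1e-5
lhs = (C(t0 + eps, T) - C(t0 - eps, T)) / (2.0 * eps)       # C'(t,T) in t
rhs = b * C(t0, T) + 0.5 * sigma ** 2 * C(t0, T) ** 2 - 1.0
print(lhs, rhs)   # nearly equal
print(C(T, T))    # terminal condition: exactly 0
```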
6.4 Exercise 6.5

1. Since
$$g(t,x_1,x_2) = E^{t,x_1,x_2}\,h(X_1(T), X_2(T)),$$
we have, for any $0 \le s \le t \le T$, similarly to Theorem 6.3.1 of Text,
$$E\big[h(X_1(T), X_2(T)) \mid \mathcal{F}(t)\big] = g(t, X_1(t), X_2(t)),$$
$$E\big[h(X_1(T), X_2(T)) \mid \mathcal{F}(s)\big] = g(s, X_1(s), X_2(s)).$$
Then
$$E\big[g(t, X_1(t), X_2(t)) \mid \mathcal{F}(s)\big] = E\Big[E\big[h(X_1(T), X_2(T)) \mid \mathcal{F}(t)\big] \,\Big|\, \mathcal{F}(s)\Big] = E\big[h(X_1(T), X_2(T)) \mid \mathcal{F}(s)\big] = g(s, X_1(s), X_2(s)),$$
so $g(t, X_1(t), X_2(t))$ is a martingale. Similarly, since
$$e^{-rt}f(t,x_1,x_2) = E^{t,x_1,x_2}\big[e^{-rT}h(X_1(T), X_2(T))\big] = e^{-rT}E^{t,x_1,x_2}\big[h(X_1(T), X_2(T))\big],$$
$e^{-rt}f(t, X_1(t), X_2(t))$ is also a martingale.
2. If $W_1$ and $W_2$ are independent, then $dW_1(t)\,dW_1(t) = dt = dW_2(t)\,dW_2(t)$ and $dW_1(t)\,dW_2(t) = 0$. By Itô's formula,
$$\begin{aligned}
dg(t, X_1(t), X_2(t)) &= g_t\,dt + g_{x_1}\,dX_1(t) + g_{x_2}\,dX_2(t) \\
&\quad + \frac{1}{2}g_{x_1x_1}\,dX_1(t)\,dX_1(t) + g_{x_1x_2}\,dX_1(t)\,dX_2(t) + \frac{1}{2}g_{x_2x_2}\,dX_2(t)\,dX_2(t) \\
&= g_t\,dt + g_{x_1}\big[\beta_1\,dt + \gamma_{11}\,dW_1(t) + \gamma_{12}\,dW_2(t)\big] + g_{x_2}\big[\beta_2\,dt + \gamma_{21}\,dW_1(t) + \gamma_{22}\,dW_2(t)\big] \\
&\quad + \frac{1}{2}g_{x_1x_1}\big[\beta_1\,dt + \gamma_{11}\,dW_1(t) + \gamma_{12}\,dW_2(t)\big]^2 \\
&\quad + g_{x_1x_2}\big[\beta_1\,dt + \gamma_{11}\,dW_1(t) + \gamma_{12}\,dW_2(t)\big]\big[\beta_2\,dt + \gamma_{21}\,dW_1(t) + \gamma_{22}\,dW_2(t)\big] \\
&\quad + \frac{1}{2}g_{x_2x_2}\big[\beta_2\,dt + \gamma_{21}\,dW_1(t) + \gamma_{22}\,dW_2(t)\big]^2 \\
&= \Big[g_t + \beta_1 g_{x_1} + \beta_2 g_{x_2} + \frac{1}{2}g_{x_1x_1}\big(\gamma_{11}^2 + \gamma_{12}^2\big) + g_{x_1x_2}\big(\gamma_{11}\gamma_{21} + \gamma_{12}\gamma_{22}\big) + \frac{1}{2}g_{x_2x_2}\big(\gamma_{21}^2 + \gamma_{22}^2\big)\Big]\,dt \\
&\quad + \big[\gamma_{11}g_{x_1} + \gamma_{21}g_{x_2}\big]\,dW_1(t) + \big[\gamma_{12}g_{x_1} + \gamma_{22}g_{x_2}\big]\,dW_2(t).
\end{aligned}$$
Since $g(t, X_1(t), X_2(t))$ is a martingale, there is no $dt$ term in $dg(t, X_1(t), X_2(t))$. Therefore, by setting the $dt$ term to zero in the above differential form, we have (6.6.3) of Text immediately.
Similarly, by Itô's formula,
$$\begin{aligned}
d\big[e^{-rt}f(t, X_1(t), X_2(t))\big] &= -re^{-rt}f\,dt + e^{-rt}f_t\,dt + e^{-rt}f_{x_1}\,dX_1(t) + e^{-rt}f_{x_2}\,dX_2(t) \\
&\quad + \frac{1}{2}e^{-rt}f_{x_1x_1}\,dX_1(t)\,dX_1(t) + e^{-rt}f_{x_1x_2}\,dX_1(t)\,dX_2(t) + \frac{1}{2}e^{-rt}f_{x_2x_2}\,dX_2(t)\,dX_2(t) \\
&= e^{-rt}\Big[-rf + f_t + \beta_1 f_{x_1} + \beta_2 f_{x_2} + \frac{1}{2}f_{x_1x_1}\big(\gamma_{11}^2 + \gamma_{12}^2\big) + f_{x_1x_2}\big(\gamma_{11}\gamma_{21} + \gamma_{12}\gamma_{22}\big) + \frac{1}{2}f_{x_2x_2}\big(\gamma_{21}^2 + \gamma_{22}^2\big)\Big]\,dt \\
&\quad + e^{-rt}\big[\gamma_{11}f_{x_1} + \gamma_{21}f_{x_2}\big]\,dW_1(t) + e^{-rt}\big[\gamma_{12}f_{x_1} + \gamma_{22}f_{x_2}\big]\,dW_2(t).
\end{aligned}$$
Since $e^{-rt}f(t, X_1(t), X_2(t))$ is a martingale, there is no $dt$ term in $d\big[e^{-rt}f(t, X_1(t), X_2(t))\big]$; (6.6.4) of Text follows immediately by setting the $dt$ term to zero.
3. If $dW_1(t)\,dW_2(t) = \rho\,dt$ and $dW_1(t)\,dW_1(t) = dt = dW_2(t)\,dW_2(t)$, then the same Itô-formula expansion as in part 2 applies, except that the products of the $dW_i$ terms now also produce $\rho\,dt$ contributions:
$$\begin{aligned}
dg(t, X_1(t), X_2(t)) &= g_t\,dt + g_{x_1}\,dX_1(t) + g_{x_2}\,dX_2(t) \\
&\quad + \frac{1}{2}g_{x_1x_1}\,dX_1(t)\,dX_1(t) + g_{x_1x_2}\,dX_1(t)\,dX_2(t) + \frac{1}{2}g_{x_2x_2}\,dX_2(t)\,dX_2(t) \\
&= \Big[g_t + \beta_1 g_{x_1} + \beta_2 g_{x_2} + \frac{1}{2}g_{x_1x_1}\big(\gamma_{11}^2 + \gamma_{12}^2\big) + g_{x_1x_2}\big(\gamma_{11}\gamma_{21} + \gamma_{12}\gamma_{22}\big) + \frac{1}{2}g_{x_2x_2}\big(\gamma_{21}^2 + \gamma_{22}^2\big)\Big]\,dt \\
&\quad + \rho\big[g_{x_1x_1}\gamma_{11}\gamma_{12} + g_{x_1x_2}\big(\gamma_{11}\gamma_{22} + \gamma_{12}\gamma_{21}\big) + g_{x_2x_2}\gamma_{21}\gamma_{22}\big]\,dt \\
&\quad + \big[\gamma_{11}g_{x_1} + \gamma_{21}g_{x_2}\big]\,dW_1(t) + \big[\gamma_{12}g_{x_1} + \gamma_{22}g_{x_2}\big]\,dW_2(t).
\end{aligned}$$
Since $g(t, X_1(t), X_2(t))$ is a martingale, there is no $dt$ term in $dg(t, X_1(t), X_2(t))$. Therefore, by setting the $dt$ term to zero in the above differential form, we have (6.9.13) of Text immediately.
Similarly, by Itô's formula,
$$\begin{aligned}
d\big[e^{-rt}f(t, X_1(t), X_2(t))\big] &= e^{-rt}\Big[-rf + f_t + \beta_1 f_{x_1} + \beta_2 f_{x_2} + \frac{1}{2}f_{x_1x_1}\big(\gamma_{11}^2 + \gamma_{12}^2\big) + f_{x_1x_2}\big(\gamma_{11}\gamma_{21} + \gamma_{12}\gamma_{22}\big) + \frac{1}{2}f_{x_2x_2}\big(\gamma_{21}^2 + \gamma_{22}^2\big)\Big]\,dt \\
&\quad + \rho e^{-rt}\big[f_{x_1x_1}\gamma_{11}\gamma_{12} + f_{x_1x_2}\big(\gamma_{11}\gamma_{22} + \gamma_{12}\gamma_{21}\big) + f_{x_2x_2}\gamma_{21}\gamma_{22}\big]\,dt \\
&\quad + e^{-rt}\big[\gamma_{11}f_{x_1} + \gamma_{21}f_{x_2}\big]\,dW_1(t) + e^{-rt}\big[\gamma_{12}f_{x_1} + \gamma_{22}f_{x_2}\big]\,dW_2(t).
\end{aligned}$$
Since $e^{-rt}f(t, X_1(t), X_2(t))$ is a martingale, there is no $dt$ term in $d\big[e^{-rt}f(t, X_1(t), X_2(t))\big]$; (6.9.14) of Text follows immediately by setting the $dt$ term to zero.
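The extra $\rho\,dt$ terms come entirely from the rule $dW_1(t)\,dW_2(t) = \rho\,dt$, which can be illustrated by simulation (our own sketch; parameter choices are arbitrary): build correlated increments as $\Delta W_2 = \rho\,\Delta W_1 + \sqrt{1-\rho^2}\,\Delta B$ with $B$ an independent Brownian motion, and average the pathwise sum $\sum \Delta W_1\,\Delta W_2$.

```python
import math
import random

def cross_variation(rho=0.6, T=1.0, steps=200, paths=2000, seed=3):
    """Average over paths of sum dW1*dW2; should approach rho*T."""
    rng = random.Random(seed)
    dt = T / steps
    sd = math.sqrt(dt)
    comp = math.sqrt(1.0 - rho * rho)
    acc = 0.0
    for _ in range(paths):
        total = 0.0
        for _ in range(steps):
            dw1 = rng.gauss(0.0, sd)
            dw2 = rho * dw1 + comp * rng.gauss(0.0, sd)
            total += dw1 * dw2
        acc += total
    return acc / paths

print(cross_variation())  # close to rho * T = 0.6
```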
6.5 Exercise 6.7

1.
$$-rc + c_t + rsc_s + (a - bv)c_v + \frac{1}{2}s^2vc_{ss} + \rho\sigma svc_{sv} + \frac{1}{2}\sigma^2vc_{vv} = 0.$$
This is the same as (6.9.26) of Text.
2. To simplify the notation, we write $f = f(t, \log s, v)$ and $g = g(t, \log s, v)$, and similarly for their partial derivatives. Substituting $c(t,s,v) = sf - e^{-r(T-t)}Kg$ into the left side of (6.9.26) and collecting the terms multiplying $s$ and those multiplying $e^{-r(T-t)}K$, the equations satisfied by $f$ and $g$ make both collected brackets vanish, and what remains is
$$rsf - re^{-r(T-t)}Kg = rc,$$
which verifies (6.9.26) of Text.
3. One method is to observe first that $f(t, X(t), V(t))$ is a martingale, take its differential, and then set the $dt$ term to zero to get the equation.
Here we apply the two-dimensional Feynman-Kac theorem (Exercise 6.5 of Text) directly. Now by the definition of $dX(t)$ and $dV(t)$,
$$\beta_1 = r - \frac{1}{2}v, \qquad \gamma_{11} = \sqrt{v}, \qquad \gamma_{12} = 0,$$
$$\beta_2 = a - bv, \qquad \gamma_{21} = 0, \qquad \gamma_{22} = \sigma\sqrt{v}.$$
Since $dW_1(t)\,dW_2(t) = \rho\,dt$, we cannot apply (6.6.3) or (6.6.4) of Text, since they are for independent Brownian motions; instead, (6.9.13) and (6.9.14) of Text apply with these coefficients.
Chapter 7

Chapter 8

Chapter 9

Chapter 10

Chapter 11
11.1 Exercise 11.1

Since
$$\begin{aligned}
M^2(t) - M^2(s) &= N^2(t) - N^2(s) + \lambda^2(t^2 - s^2) - 2\lambda tN(t) + 2\lambda sN(s) \\
&= \big(N(t) - N(s)\big)^2 + 2N(t)N(s) - 2N^2(s) + \lambda^2(t^2 - s^2) - 2\lambda t\big[N(t) - N(s)\big] + 2\lambda N(s)(s - t) \\
&= \big[N(t) - N(s)\big]^2 + 2N(s)\big[N(t) - N(s)\big] + \lambda^2(t^2 - s^2) - 2\lambda t\big[N(t) - N(s)\big] + 2\lambda N(s)(s - t),
\end{aligned}$$
we have
$$\begin{aligned}
E\big[M^2(t) - M^2(s) \mid \mathcal{F}(s)\big] &= E\Big[\big[N(t) - N(s)\big]^2 + 2N(s)\big[N(t) - N(s)\big] + \lambda^2(t^2 - s^2) \\
&\qquad - 2\lambda t\big[N(t) - N(s)\big] + 2\lambda N(s)(s - t) \,\Big|\, \mathcal{F}(s)\Big] \\
&= \lambda^2(t - s)^2 + \lambda(t - s) + 2N(s)\lambda(t - s) + \lambda^2(t^2 - s^2) - 2\lambda t\cdot\lambda(t - s) + 2\lambda N(s)(s - t) \\
&= \lambda^2\big(t^2 + s^2 - 2ts + t^2 - s^2 - 2t^2 + 2ts\big) + \lambda(t - s) \\
&= \lambda(t - s).
\end{aligned}$$
Therefore, for any $0 \le s \le t$,
1.
$$E\big[M^2(t) \mid \mathcal{F}(s)\big] = E\big[M^2(t) - M^2(s) + M^2(s) \mid \mathcal{F}(s)\big] = E\big[M^2(t) - M^2(s) \mid \mathcal{F}(s)\big] + M^2(s) = \lambda(t - s) + M^2(s) \ge M^2(s),$$
i.e., $M^2(t)$ is a submartingale.
2.
$$\begin{aligned}
E\big[M^2(t) - \lambda t \mid \mathcal{F}(s)\big] &= E\big[M^2(t) - M^2(s) - \lambda(t - s) + M^2(s) - \lambda s \mid \mathcal{F}(s)\big] \\
&= E\big[M^2(t) - M^2(s) \mid \mathcal{F}(s)\big] - \lambda(t - s) + M^2(s) - \lambda s \\
&= \lambda(t - s) - \lambda(t - s) + M^2(s) - \lambda s \\
&= M^2(s) - \lambda s,
\end{aligned}$$
i.e., $M^2(t) - \lambda t$ is a martingale.
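The identity $E[M^2(t)] = \lambda t$ (take $s = 0$ above) can be illustrated by simulating the Poisson process (our own sketch; the rate, horizon, and path count are arbitrary):

```python
import random

def mean_M2(lam=2.0, t=3.0, paths=100000, seed=4):
    """Monte Carlo estimate of E[M^2(t)] for M(t) = N(t) - lam*t."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(paths):
        # draw N(t) by counting exponential interarrival times up to t
        n, clock = 0, 0.0
        while True:
            clock += rng.expovariate(lam)
            if clock > t:
                break
            n += 1
        m = n - lam * t
        acc += m * m
    return acc / paths

print(mean_M2())  # close to lam * t = 6.0
```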