Yan Zeng-Stochastic Calculus For Finance, Vol. I and II, Solution
by Yan Zeng
Last updated: August 20, 2007
This is a solution manual for the two-volume textbook Stochastic calculus for finance, by Steven Shreve.
If you have any comments or find any typos/errors, please email me at yz44@cornell.edu.
The current version omits the following problems. Volume I: 1.5, 3.3, 3.4, 5.7; Volume II: 3.9, 7.1, 7.2,
7.5-7.9, 10.8, 10.9, 10.10.
Acknowledgment I thank Hua Li (a graduate student at Brown University) for reading through this
solution manual and communicating to me several mistakes/typos.
replicated: $(1+r)(X_0 - \Delta_0 S_0) + \Delta_0 S_1 = V_1$. Using the notation of the problem, suppose an agent begins
with $0$ wealth and at time zero buys $\Delta_0$ shares of stock and $\Gamma_0$ options. He then puts his cash position
$-\Delta_0 S_0 - \Gamma_0 \widehat{X}_0$ in a money market account. At time one, the value of the agent's portfolio of stock, option
and money market assets is
$$X_1 = \Delta_0 S_1 + \Gamma_0 V_1 - (1+r)(\Delta_0 S_0 + \Gamma_0 \widehat{X}_0).$$
Plug in the expression of $V_1$ and sort out terms; we have
$$X_1 = S_0(\Delta_0 + \widehat{\Delta}_0 \Gamma_0)\Big(\frac{S_1}{S_0} - (1+r)\Big).$$
Since $d < 1+r < u$, $X_1(u)$ and $X_1(d)$ have opposite signs. So if the price of the option at time zero is $\widehat{X}_0$,
then there will be no arbitrage.
1.3.
Proof. $V_0 = \frac{1}{1+r}\Big[\frac{1+r-d}{u-d}S_1(H) + \frac{u-1-r}{u-d}S_1(T)\Big] = \frac{S_0}{1+r}\Big[\frac{1+r-d}{u-d}u + \frac{u-1-r}{u-d}d\Big] = S_0$. This is not surprising, since
this is exactly the cost of replicating $S_1$.
Remark: This illustrates an important point. The "fair price" of a stock cannot be determined by
risk-neutral pricing, as seen below. Suppose $S_1(H)$ and $S_1(T)$ are given; we could have two current prices, $S_0$
and $S_0'$. Correspondingly, we can get $u$, $d$ and $u'$, $d'$. Because they are determined by $S_0$ and $S_0'$, respectively,
it is not surprising that the risk-neutral pricing formula holds in both cases. That is,
$$S_0 = \frac{\frac{1+r-d}{u-d}S_1(H) + \frac{u-1-r}{u-d}S_1(T)}{1+r}, \qquad S_0' = \frac{\frac{1+r-d'}{u'-d'}S_1(H) + \frac{u'-1-r}{u'-d'}S_1(T)}{1+r}.$$
Essentially, this is because risk-neutral pricing relies on fair price=replication cost. Stock as a replicating
component cannot determine its own fair price via the risk-neutral pricing formula.
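The computation in Exercise 1.3 is easy to check numerically. A minimal sketch, with illustrative parameters of my own choosing (any $0 < d < 1+r < u$ works):

```python
# One-period binomial model: risk-neutral pricing of the stock itself
# returns the spot price, since the stock trivially replicates itself.
u, d, r = 2.0, 0.5, 0.25   # illustrative parameters, not from the text
S0 = 4.0

p = (1 + r - d) / (u - d)   # risk-neutral up probability
q = (u - 1 - r) / (u - d)   # risk-neutral down probability

S1_H, S1_T = u * S0, d * S0
V0 = (p * S1_H + q * S1_T) / (1 + r)

assert abs(V0 - S0) < 1e-12
```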
1.4.
Proof.
$$X_{n+1}(T) = \Delta_n dS_n + (1+r)(X_n - \Delta_n S_n) = \Delta_n S_n(d - 1 - r) + (1+r)V_n$$
$$= \frac{V_{n+1}(H) - V_{n+1}(T)}{u-d}(d - 1 - r) + (1+r)\frac{\tilde pV_{n+1}(H) + \tilde qV_{n+1}(T)}{1+r}$$
$$= \tilde p(V_{n+1}(T) - V_{n+1}(H)) + \tilde pV_{n+1}(H) + \tilde qV_{n+1}(T) = \tilde pV_{n+1}(T) + \tilde qV_{n+1}(T) = V_{n+1}(T).$$
1.6.
Proof. The bank's trader should set up a replicating portfolio whose payoff is the opposite of the option's
payoff. More precisely, we solve the equation
$$(1+r)(X_0 - \Delta_0 S_0) + \Delta_0 S_1 = -(S_1 - K)^+.$$
Then $X_0 = -1.20$ and $\Delta_0 = -\frac12$. This means the trader should sell short 0.5 share of stock, put the income
2 into a money market account, and then transfer 1.20 into a separate money market account. At time one,
the portfolio consisting of a short position in stock and $0.8(1+r)$ in the money market account will cancel out
with the option's payoff. Therefore we end up with $1.20(1+r)$ in the separate money market account.
Remark: This problem illustrates why we are interested in hedging a long position. In case the stock
price goes down at time one, the option will expire without any payoff. The initial money 1.20 we paid at
time zero will be wasted. By hedging, we convert the option back into liquid assets (cash and stock) which
guarantees a sure payoff at time one. Also, cf. page 7, paragraph 2. As to why we hedge a short position
(as a writer), see Wilmott [8], page 11-13.
1.7.
Proof. The idea is the same as Problem 1.6. The bank's trader only needs to set up the reverse of the
replicating trading strategy described in Example 1.2.4. More precisely, he should short sell 0.1733 share of
stock, invest the income 0.6933 into the money market account, and transfer 1.376 into a separate money market
account. The portfolio consisting of a short position in stock and 0.6933-1.376 in the money market account will
replicate the opposite of the option's payoff. After they cancel out, we end up with $1.376(1+r)^3$ in the
separate money market account.
1.8. (i)
Proof. $v_n(s, y) = \frac{2}{5}\big(v_{n+1}(2s, y+2s) + v_{n+1}(\frac{s}{2}, y+\frac{s}{2})\big)$.
(ii)
Proof. 1.696.
(iii)
Proof. $\delta_n(s, y) = \dfrac{v_{n+1}(2s, y+2s) - v_{n+1}(\frac{s}{2}, y+\frac{s}{2})}{2s - \frac{s}{2}}$.
1.9. (i)
Proof. Similar to Theorem 1.2.2, but replace $r$, $u$ and $d$ everywhere with $r_n$, $u_n$ and $d_n$. More precisely, set
$\tilde p_n = \frac{1+r_n-d_n}{u_n-d_n}$ and $\tilde q_n = 1 - \tilde p_n$. Then
$$V_n = \frac{\tilde p_n V_{n+1}(H) + \tilde q_n V_{n+1}(T)}{1+r_n}.$$
(ii)
Proof. $\Delta_n = \dfrac{V_{n+1}(H) - V_{n+1}(T)}{S_{n+1}(H) - S_{n+1}(T)} = \dfrac{V_{n+1}(H) - V_{n+1}(T)}{(u_n - d_n)S_n}$.
(iii)
Proof. $u_n = \frac{S_{n+1}(H)}{S_n} = \frac{S_n + 10}{S_n} = 1 + \frac{10}{S_n}$ and $d_n = \frac{S_{n+1}(T)}{S_n} = \frac{S_n - 10}{S_n} = 1 - \frac{10}{S_n}$. So the risk-neutral probabilities
at time $n$ are $\tilde p_n = \frac{1-d_n}{u_n-d_n} = \frac12$ and $\tilde q_n = \frac12$. Risk-neutral pricing implies the price of this call at time zero is
9.375.
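Because $\tilde p_n = \tilde q_n = \frac12$ at every node, backward induction in this additive model reduces to a plain average over coin sequences. A sketch with illustrative stand-in data ($S_0$, $K$, $N$ below are assumptions, not the textbook's figures):

```python
from itertools import product

# Model of Exercise 1.9(iii): S_{n+1} = S_n +/- 10, interest rate 0,
# so the risk-neutral probabilities are 1/2 at every node.
S0, K, N = 80.0, 80.0, 3   # illustrative stand-ins

def call(s, n):
    """Backward induction V_n = (V_{n+1}(H) + V_{n+1}(T)) / 2."""
    if n == N:
        return max(s - K, 0.0)
    return 0.5 * (call(s + 10, n + 1) + call(s - 10, n + 1))

v0 = call(S0, 0)
# Cross-check: with ptilde = 1/2 the price is the average payoff.
avg = sum(max(S0 + 10 * sum(t) - K, 0.0)
          for t in product((1, -1), repeat=N)) / 2 ** N
assert abs(v0 - avg) < 1e-12
```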
2.1. (i)
Proof. $\sum_{\omega\in A^c}P(\omega) + \sum_{\omega\in A}P(\omega) = \sum_{\omega\in\Omega}P(\omega) = 1$, so $P(A^c) = 1 - P(A)$.
(ii)
Proof. By induction, it suffices to work on the case $N = 2$. When $A_1$ and $A_2$ are disjoint, $P(A_1\cup A_2) = \sum_{\omega\in A_1\cup A_2}P(\omega) = \sum_{\omega\in A_1}P(\omega) + \sum_{\omega\in A_2}P(\omega) = P(A_1) + P(A_2)$. When $A_1$ and $A_2$ are arbitrary, using
the result when they are disjoint, we have $P(A_1\cup A_2) = P((A_1 - A_2)\cup A_2) = P(A_1 - A_2) + P(A_2) \le P(A_1) + P(A_2)$.
2.2. (i)
Proof. $\widetilde P(S_3 = 32) = \tilde p^3 = \frac18$, $\widetilde P(S_3 = 8) = 3\tilde p^2\tilde q = \frac38$, $\widetilde P(S_3 = 2) = 3\tilde p\tilde q^2 = \frac38$, and $\widetilde P(S_3 = 0.5) = \tilde q^3 = \frac18$.
(ii)
Proof. $\widetilde E[S_1] = 8\widetilde P(S_1 = 8) + 2\widetilde P(S_1 = 2) = 8\tilde p + 2\tilde q = 5$, $\widetilde E[S_2] = 16\tilde p^2 + 4\cdot 2\tilde p\tilde q + 1\cdot\tilde q^2 = 6.25$, and
$\widetilde E[S_3] = 32\cdot\frac18 + 8\cdot\frac38 + 2\cdot\frac38 + 0.5\cdot\frac18 = 7.8125$. So the average rates of growth of the stock price under $\widetilde P$
are, respectively: $\tilde r_0 = \frac54 - 1 = 0.25$, $\tilde r_1 = \frac{6.25}{5} - 1 = 0.25$ and $\tilde r_2 = \frac{7.8125}{6.25} - 1 = 0.25$.
(iii)
Proof. $P(S_3 = 32) = (\frac23)^3 = \frac{8}{27}$, $P(S_3 = 8) = 3\cdot(\frac23)^2\cdot\frac13 = \frac49$, $P(S_3 = 2) = 3\cdot\frac23\cdot\frac19 = \frac29$, and $P(S_3 = 0.5) = \frac{1}{27}$.
Accordingly, $E[S_1] = 6$, $E[S_2] = 9$ and $E[S_3] = 13.5$. So the average rates of growth of the stock price
under $P$ are, respectively: $r_0 = \frac64 - 1 = 0.5$, $r_1 = \frac96 - 1 = 0.5$, and $r_2 = \frac{13.5}{9} - 1 = 0.5$.
2.3.
Proof. Apply conditional Jensen's inequality.
2.4. (i)
Proof. En [Mn+1 ] = Mn + En [Xn+1 ] = Mn + E[Xn+1 ] = Mn .
(ii)
Proof. $E_n\big[\frac{S_{n+1}}{S_n}\big] = E_n\big[e^{\sigma X_{n+1}}\frac{2}{e^\sigma + e^{-\sigma}}\big] = \frac{2}{e^\sigma + e^{-\sigma}}E[e^{\sigma X_{n+1}}] = 1$.
2.5. (i)
Proof. $2I_n = 2\sum_{j=0}^{n-1}M_j(M_{j+1} - M_j) = 2\sum_{j=0}^{n-1}M_jM_{j+1} - \sum_{j=1}^{n-1}M_j^2 - \sum_{j=1}^{n-1}M_j^2 = 2\sum_{j=0}^{n-1}M_jM_{j+1} + M_n^2 - \sum_{j=0}^{n-1}M_{j+1}^2 - \sum_{j=0}^{n-1}M_j^2 = M_n^2 - \sum_{j=0}^{n-1}(M_{j+1} - M_j)^2 = M_n^2 - \sum_{j=0}^{n-1}X_{j+1}^2 = M_n^2 - n$.
(ii)
Proof. $E_n[f(I_{n+1})] = E_n[f(I_n + M_n(M_{n+1} - M_n))] = E_n[f(I_n + M_nX_{n+1})] = \frac12[f(I_n + M_n) + f(I_n - M_n)] = g(I_n, M_n)$, where $g(x, m) = \frac12[f(x+m) + f(x-m)]$.
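The algebraic identity $I_n = \frac{M_n^2 - n}{2}$ from Exercise 2.5(i) holds pathwise, so it can be verified by brute-force enumeration:

```python
from itertools import product

# Exercise 2.5(i): for the symmetric random walk M with steps X_j = +/-1,
# I_n = sum_j M_j (M_{j+1} - M_j) satisfies 2 I_n = M_n^2 - n on every path.
n = 8
for steps in product((1, -1), repeat=n):
    M = [0]
    for x in steps:
        M.append(M[-1] + x)
    I_n = sum(M[j] * (M[j + 1] - M[j]) for j in range(n))
    assert 2 * I_n == M[n] ** 2 - n
```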
(ii)
Proof. In the proof of Theorem 1.2.2, we proved by induction that $X_n = V_n$, where $X_n$ is defined by (1.2.14)
of Chapter 1. In other words, the sequence $(V_n)_{0\le n\le N}$ can be realized as the value process of a portfolio,
which consists of stock and money market accounts. Since $\big(\frac{X_n}{(1+r)^n}\big)_{0\le n\le N}$ is a martingale under $\widetilde P$ (Theorem
2.4.5), $\big(\frac{V_n}{(1+r)^n}\big)_{0\le n\le N}$ is a martingale under $\widetilde P$.
(iii)
Proof. $\frac{V_n'}{(1+r)^n} = \widetilde E_n\big[\frac{V_N'}{(1+r)^N}\big]$, so $V_0', \frac{V_1'}{1+r}, \dots, \frac{V_{N-1}'}{(1+r)^{N-1}}, \frac{V_N'}{(1+r)^N}$ is a martingale under $\widetilde P$.
(iv)
Proof. Combine (ii) and (iii), then use (i).
2.9. (i)
Proof. $u_0 = \frac{S_1(H)}{S_0} = 2$, $d_0 = \frac{S_1(T)}{S_0} = \frac12$, $u_1(H) = \frac{S_2(HH)}{S_1(H)} = 1.5$, $d_1(H) = \frac{S_2(HT)}{S_1(H)} = 1$, $u_1(T) = \frac{S_2(TH)}{S_1(T)} = 4$
and $d_1(T) = \frac{S_2(TT)}{S_1(T)} = 1$.
So $\tilde p_0 = \frac{1+r_0-d_0}{u_0-d_0} = \frac12$, $\tilde q_0 = \frac12$, $\tilde p_1(H) = \frac12$, $\tilde q_1(H) = \frac12$, $\tilde p_1(T) = \frac{1+r_1(T)-d_1(T)}{u_1(T)-d_1(T)} = \frac16$, and
$\tilde q_1(T) = \frac56$.
Therefore $\widetilde P(HH) = \tilde p_0\tilde p_1(H) = \frac14$, $\widetilde P(HT) = \tilde p_0\tilde q_1(H) = \frac14$, $\widetilde P(TH) = \tilde q_0\tilde p_1(T) = \frac{1}{12}$ and $\widetilde P(TT) = \tilde q_0\tilde q_1(T) = \frac{5}{12}$.
The proofs of Theorem 2.4.4, Theorem 2.4.5 and Theorem 2.4.7 still work for the random interest
rate model, with proper modifications (i.e. $\widetilde P$ would be constructed according to conditional probabilities $\widetilde P(\omega_{n+1} = H|\omega_1, \dots, \omega_n) := \tilde p_n$ and $\widetilde P(\omega_{n+1} = T|\omega_1, \dots, \omega_n) := \tilde q_n$. Cf. notes on page 39.). So
the time-zero value of an option that pays off $V_2$ at time two is given by the risk-neutral pricing formula
$$V_0 = \widetilde E\Big[\frac{V_2}{(1+r_0)(1+r_1)}\Big].$$
(ii)
Proof. $V_2(HH) = 5$, $V_2(HT) = 1$, $V_2(TH) = 1$ and $V_2(TT) = 0$. So $V_1(H) = \frac{\tilde p_1(H)V_2(HH) + \tilde q_1(H)V_2(HT)}{1+r_1(H)} = 2.4$, $V_1(T) = \frac{\tilde p_1(T)V_2(TH) + \tilde q_1(T)V_2(TT)}{1+r_1(T)} = \frac19$, and $V_0 = \frac{\tilde p_0V_1(H) + \tilde q_0V_1(T)}{1+r_0} \approx 1$.
(iii)
Proof. $\Delta_0 = \dfrac{V_1(H) - V_1(T)}{S_1(H) - S_1(T)} = \dfrac{2.4 - \frac19}{8 - 2} = 0.4 - \frac{1}{54} \approx 0.3815$.
(iv)
Proof. $\Delta_1(H) = \dfrac{V_2(HH) - V_2(HT)}{S_2(HH) - S_2(HT)} = \dfrac{5 - 1}{12 - 8} = 1$.
2.10. (i)
Proof.
$$\widetilde E_n\Big[\frac{X_{n+1}}{(1+r)^{n+1}}\Big] = \widetilde E_n\Big[\frac{\Delta_nY_{n+1}S_n}{(1+r)^{n+1}} + \frac{(1+r)(X_n - \Delta_nS_n)}{(1+r)^{n+1}}\Big] = \frac{\Delta_nS_n}{(1+r)^{n+1}}\widetilde E_n[Y_{n+1}] + \frac{X_n - \Delta_nS_n}{(1+r)^n}$$
$$= \frac{\Delta_nS_n}{(1+r)^{n+1}}(u\tilde p + d\tilde q) + \frac{X_n - \Delta_nS_n}{(1+r)^n} = \frac{\Delta_nS_n + X_n - \Delta_nS_n}{(1+r)^n} = \frac{X_n}{(1+r)^n}.$$
(ii)
Proof. From (2.8.2), we have
$$\begin{cases}\Delta_nuS_n + (1+r)(X_n - \Delta_nS_n) = X_{n+1}(H)\\ \Delta_ndS_n + (1+r)(X_n - \Delta_nS_n) = X_{n+1}(T).\end{cases}$$
So $\Delta_n = \dfrac{X_{n+1}(H) - X_{n+1}(T)}{uS_n - dS_n}$.
(iii)
Proof.
$$\widetilde E_n\Big[\frac{S_{n+1}}{(1+r)^{n+1}}\Big] = \frac{1}{(1+r)^{n+1}}\widetilde E_n[(1-A_{n+1})Y_{n+1}S_n] = \frac{S_n}{(1+r)^{n+1}}[\tilde p(1-A_{n+1}(H))u + \tilde q(1-A_{n+1}(T))d]$$
$$< \frac{S_n}{(1+r)^{n+1}}[\tilde pu + \tilde qd] = \frac{S_n}{(1+r)^n}.$$
If $A_{n+1}$ is a constant $a$, then $\widetilde E_n\big[\frac{S_{n+1}}{(1+r)^{n+1}}\big] = \frac{S_n}{(1+r)^{n+1}}(1-a)(\tilde pu + \tilde qd) = \frac{S_n}{(1+r)^n}(1-a)$. So
$\widetilde E_n\big[\frac{S_{n+1}}{(1+r)^{n+1}(1-a)^{n+1}}\big] = \frac{S_n}{(1+r)^n(1-a)^n}$.
2.11. (i)
Proof. $F_N + P_N = S_N - K + (K - S_N)^+ = (S_N - K)^+ = C_N$.
(ii)
Proof. $C_n = \widetilde E_n\big[\frac{C_N}{(1+r)^{N-n}}\big] = \widetilde E_n\big[\frac{F_N}{(1+r)^{N-n}}\big] + \widetilde E_n\big[\frac{P_N}{(1+r)^{N-n}}\big] = F_n + P_n$.
(iii)
Proof. $F_0 = \widetilde E\big[\frac{F_N}{(1+r)^N}\big] = \frac{1}{(1+r)^N}\widetilde E[S_N - K] = S_0 - \frac{K}{(1+r)^N}$.
(iv)
Proof. At time zero, the trader has $F_0 - S_0$ in the money market account and one share of stock. At time $N$,
the trader has a wealth of $(F_0 - S_0)(1+r)^N + S_N = -K + S_N = F_N$.
(v)
Proof. By (ii), $C_0 = F_0 + P_0$. Since $F_0 = S_0 - \frac{(1+r)^NS_0}{(1+r)^N} = 0$, $C_0 = P_0$.
(vi)
Proof. By (ii), $C_n = P_n$ if and only if $F_n = 0$. Note $F_n = \widetilde E_n\big[\frac{S_N - K}{(1+r)^{N-n}}\big] = S_n - \frac{(1+r)^NS_0}{(1+r)^{N-n}} = S_n - S_0(1+r)^n$.
So $F_n$ is not necessarily zero and $C_n = P_n$ is not necessarily true for $n \ge 1$.
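The node-by-node parity $C_n = F_n + P_n$ of Exercise 2.11 can be checked mechanically. A sketch with illustrative parameters (the model data below are my own, not the textbook's):

```python
# Exercise 2.11: C_n = F_n + P_n at every node of a binomial tree.
S0, u, d, r, K, N = 4.0, 2.0, 0.5, 0.25, 5.0, 3   # illustrative
p = (1 + r - d) / (u - d)
q = 1 - p

def price(payoff):
    """European price by backward induction; dict level -> node values."""
    vals = {N: [payoff(S0 * u**k * d**(N - k)) for k in range(N + 1)]}
    for n in range(N - 1, -1, -1):
        vals[n] = [(q * vals[n + 1][k] + p * vals[n + 1][k + 1]) / (1 + r)
                   for k in range(n + 1)]
    return vals

C = price(lambda s: max(s - K, 0.0))
P = price(lambda s: max(K - s, 0.0))
F = price(lambda s: s - K)

for n in range(N + 1):
    for k in range(n + 1):
        Sn = S0 * u**k * d**(n - k)
        assert abs(C[n][k] - F[n][k] - P[n][k]) < 1e-12
        assert abs(F[n][k] - (Sn - K / (1 + r) ** (N - n))) < 1e-12
```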
2.12.
Proof. First, the no-arbitrage price of the chooser option at time $m$ must be $\max(C, P)$, where
$$C = \widetilde E_m\Big[\frac{(S_N - K)^+}{(1+r)^{N-m}}\Big] \quad\text{and}\quad P = \widetilde E_m\Big[\frac{(K - S_N)^+}{(1+r)^{N-m}}\Big].$$
That is, C is the no-arbitrage price of a call option at time m and P is the no-arbitrage price of a put option
at time m. Both of them have maturity date N and strike price K. Suppose the market is liquid, then the
chooser option is equivalent to receiving a payoff of max(C, P ) at time m. Therefore, its current no-arbitrage
price should be $\widetilde E\big[\frac{\max(C,P)}{(1+r)^m}\big]$.
By the put-call parity, $C = S_m - \frac{K}{(1+r)^{N-m}} + P$. So $\max(C, P) = P + \big(S_m - \frac{K}{(1+r)^{N-m}}\big)^+$. Therefore, the
time-zero price of a chooser option is
$$\widetilde E\Big[\frac{P}{(1+r)^m}\Big] + \widetilde E\Big[\frac{\big(S_m - \frac{K}{(1+r)^{N-m}}\big)^+}{(1+r)^m}\Big] = \widetilde E\Big[\frac{(K - S_N)^+}{(1+r)^N}\Big] + \widetilde E\Big[\frac{\big(S_m - \frac{K}{(1+r)^{N-m}}\big)^+}{(1+r)^m}\Big].$$
The first term stands for the time-zero price of a put, expiring at time $N$ and having strike price $K$, and the
second term stands for the time-zero price of a call, expiring at time $m$ and having strike price $\frac{K}{(1+r)^{N-m}}$.
If we feel unconvinced by the above argument that the chooser option's no-arbitrage price is $\widetilde E\big[\frac{\max(C,P)}{(1+r)^m}\big]$,
due to the economical argument involved (like "the chooser option is equivalent to receiving a payoff of
$\max(C,P)$ at time $m$"), then we have the following mathematically rigorous argument. First, we can
construct a portfolio $\Delta_0, \dots, \Delta_{m-1}$ whose payoff at time $m$ is $\max(C, P)$. Fix $\omega$; if $C(\omega) > P(\omega)$, we
can construct a portfolio $\Delta_m', \dots, \Delta_{N-1}'$ whose payoff at time $N$ is $(S_N - K)^+$; if $C(\omega) < P(\omega)$, we can
construct a portfolio $\Delta_m'', \dots, \Delta_{N-1}''$ whose payoff at time $N$ is $(K - S_N)^+$. By defining ($m \le k \le N-1$)
$$\Delta_k(\omega) = \begin{cases}\Delta_k'(\omega) & \text{if } C(\omega) > P(\omega)\\ \Delta_k''(\omega) & \text{if } C(\omega) < P(\omega),\end{cases}$$
we get a portfolio $(\Delta_n)_{0\le n\le N-1}$ whose payoff is the same as that of the chooser option. So the no-arbitrage
price process of the chooser option must be equal to the value process of the replicating portfolio. In
particular, $V_0 = X_0 = \widetilde E\big[\frac{X_m}{(1+r)^m}\big] = \widetilde E\big[\frac{\max(C,P)}{(1+r)^m}\big]$.
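The decomposition "chooser = put expiring at $N$ + call expiring at $m$ with strike $K/(1+r)^{N-m}$" can be confirmed numerically. A sketch with illustrative parameters of my own:

```python
# Exercise 2.12: chooser option decomposition check.
S0, u, d, r, K, N, m = 4.0, 2.0, 0.5, 0.25, 5.0, 2, 1   # illustrative
p = (1 + r - d) / (u - d); q = 1 - p
disc = 1 / (1 + r)

def euro(payoff, start_s, steps):
    """European value at a node `start_s`, `steps` periods before expiry."""
    v = [payoff(start_s * u**k * d**(steps - k)) for k in range(steps + 1)]
    while len(v) > 1:
        v = [disc * (q * v[k] + p * v[k + 1]) for k in range(len(v) - 1)]
    return v[0]

# value max(C, P) at each time-m node, then discount back (here m = 1)
chooser_m = []
for k in range(m + 1):
    Sm = S0 * u**k * d**(m - k)
    Cm = euro(lambda s: max(s - K, 0.0), Sm, N - m)
    Pm = euro(lambda s: max(K - s, 0.0), Sm, N - m)
    chooser_m.append(max(Cm, Pm))
V0 = disc * (q * chooser_m[0] + p * chooser_m[1])

put_N = euro(lambda s: max(K - s, 0.0), S0, N)
call_m = euro(lambda s: max(s - K / (1 + r) ** (N - m), 0.0), S0, m)
assert abs(V0 - (put_N + call_m)) < 1e-12
```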
2.13. (i)
Proof. Note that under both the actual probability $P$ and the risk-neutral probability $\widetilde P$, the coin tosses $\omega_n$ are i.i.d.. So
without loss of generality, we work on $P$. For any function $g$, $E_n[g(S_{n+1}, Y_{n+1})] = E_n[g(\frac{S_{n+1}}{S_n}S_n, Y_n + \frac{S_{n+1}}{S_n}S_n)] = pg(uS_n, Y_n + uS_n) + qg(dS_n, Y_n + dS_n)$, which is a function of $(S_n, Y_n)$. So $(S_n, Y_n)_{0\le n\le N}$ is Markov under $P$.
(ii)
Proof. Set $v_N(s, y) = f(\frac{y}{N+1})$. Then $v_N(S_N, Y_N) = f(\frac{\sum_{n=0}^N S_n}{N+1}) = V_N$. Suppose $v_{n+1}$ is given; then
$$V_n = \widetilde E_n\Big[\frac{V_{n+1}}{1+r}\Big] = \widetilde E_n\Big[\frac{v_{n+1}(S_{n+1}, Y_{n+1})}{1+r}\Big] = \frac{1}{1+r}[\tilde pv_{n+1}(uS_n, Y_n + uS_n) + \tilde qv_{n+1}(dS_n, Y_n + dS_n)] = v_n(S_n, Y_n),$$
where
$$v_n(s, y) = \frac{\tilde pv_{n+1}(us, y + us) + \tilde qv_{n+1}(ds, y + ds)}{1+r}.$$
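The two-variable recursion of Exercise 2.13 is directly implementable. A sketch pricing an average-strike-style claim $f(Y_N/(N+1))$ with illustrative parameters (the payoff and data below are assumptions for demonstration):

```python
from itertools import product

# Exercise 2.13: path-dependent claim f(Y_N/(N+1)), Y_n = S_0 + ... + S_n,
# priced by recursing on the pair (s, y).
S0, u, d, r, N, Kav = 4.0, 2.0, 0.5, 0.25, 3, 4.0   # illustrative
p = (1 + r - d) / (u - d); q = 1 - p
f = lambda avg: max(avg - Kav, 0.0)

def v(n, s, y):
    if n == N:
        return f(y / (N + 1))
    return (p * v(n + 1, u * s, y + u * s)
            + q * v(n + 1, d * s, y + d * s)) / (1 + r)

V0 = v(0, S0, S0)

# Direct check: risk-neutral expectation of the discounted payoff
# (here ptilde = qtilde = 1/2, so each path has weight 2^{-N}).
total = 0.0
for path in product((u, d), repeat=N):
    s, y = S0, S0
    for factor in path:
        s *= factor
        y += s
    total += f(y / (N + 1))
V0_direct = total / 2 ** N / (1 + r) ** N
assert abs(V0 - V0_direct) < 1e-12
```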
2.14. (i)
Proof. For $n \le M$, $(S_n, Y_n) = (S_n, 0)$. Since the coin tosses $\omega_n$ are i.i.d. under $\widetilde P$, $(S_n, Y_n)_{0\le n\le M}$ is Markov
under $\widetilde P$. More precisely, for any function $h$, $\widetilde E_n[h(S_{n+1})] = \tilde ph(uS_n) + \tilde qh(dS_n)$, for $n = 0, 1, \dots, M-1$.
For any function $g$ of two variables, we have $\widetilde E_M[g(S_{M+1}, Y_{M+1})] = \widetilde E_M[g(S_{M+1}, S_{M+1})] = \tilde pg(uS_M, uS_M) + \tilde qg(dS_M, dS_M)$. And for $n \ge M+1$, $\widetilde E_n[g(S_{n+1}, Y_{n+1})] = \widetilde E_n[g(\frac{S_{n+1}}{S_n}S_n, Y_n + \frac{S_{n+1}}{S_n}S_n)] = \tilde pg(uS_n, Y_n + uS_n) + \tilde qg(dS_n, Y_n + dS_n)$, so $(S_n, Y_n)_{0\le n\le N}$ is Markov under $\widetilde P$.
(ii)
Proof. Set $v_N(s, y) = f(\frac{y}{N-M})$. Then $v_N(S_N, Y_N) = f(\frac{\sum_{k=M+1}^N S_k}{N-M}) = V_N$. Suppose $v_{n+1}$ is already given.
a) If $n > M$, then $\widetilde E_n[v_{n+1}(S_{n+1}, Y_{n+1})] = \tilde pv_{n+1}(uS_n, Y_n + uS_n) + \tilde qv_{n+1}(dS_n, Y_n + dS_n)$. So $v_n(s, y) = \tilde pv_{n+1}(us, y + us) + \tilde qv_{n+1}(ds, y + ds)$.
b) If $n = M$, then $\widetilde E_M[v_{M+1}(S_{M+1}, Y_{M+1})] = \tilde pv_{M+1}(uS_M, uS_M) + \tilde qv_{M+1}(dS_M, dS_M)$. So $v_M(s) = \tilde pv_{M+1}(us, us) + \tilde qv_{M+1}(ds, ds)$.
c) If $n < M$, then $\widetilde E_n[v_{n+1}(S_{n+1})] = \tilde pv_{n+1}(uS_n) + \tilde qv_{n+1}(dS_n)$. So $v_n(s) = \tilde pv_{n+1}(us) + \tilde qv_{n+1}(ds)$.
3. State Prices
3.1.
Proof. Note $\widetilde Z(\omega) := \frac{P(\omega)}{\widetilde P(\omega)} = \frac{1}{Z(\omega)}$. Apply Theorem 3.1.1 with $P$, $\widetilde P$, $Z$ replaced by $\widetilde P$, $P$, $\widetilde Z$; we get the
desired conclusions.
Proof. Pick $\omega_0$ such that $P(\omega_0) > 0$; define
$$Z(\omega) = \begin{cases}0, & \text{if } \omega \ne \omega_0\\ \frac{1}{P(\omega_0)}, & \text{if } \omega = \omega_0.\end{cases}$$
Then $P(Z \ge 0) = 1$ and $E[Z] = \frac{1}{P(\omega_0)}\cdot P(\omega_0) = 1$.
Clearly $\widetilde P(\Omega\setminus\{\omega_0\}) = E[Z1_{\Omega\setminus\{\omega_0\}}] = \sum_{\omega\ne\omega_0}Z(\omega)P(\omega) = 0$. But $P(\Omega\setminus\{\omega_0\}) = 1 - P(\omega_0) > 0$ if
$P(\omega_0) < 1$. Hence in the case $0 < P(\omega_0) < 1$, $P$ and $\widetilde P$ are not equivalent. If $P(\omega_0) = 1$, then $E[Z] = 1$ if
and only if $Z(\omega_0) = 1$. In this case $\widetilde P(\omega_0) = Z(\omega_0)P(\omega_0) = 1$. And $\widetilde P$ and $P$ have to be equivalent.
In summary, if we can find $\omega_0$ such that $0 < P(\omega_0) < 1$, then $Z$ as constructed above would induce a
probability $\widetilde P$ that is not equivalent to $P$.
3.5. (i)
Proof. $Z(HH) = \frac{9}{16}$, $Z(HT) = \frac98$, $Z(TH) = \frac38$ and $Z(TT) = \frac{15}{4}$.
(ii)
Proof. $Z_1(H) = E_1[Z_2](H) = Z_2(HH)P(\omega_2 = H|\omega_1 = H) + Z_2(HT)P(\omega_2 = T|\omega_1 = H) = \frac34$. $Z_1(T) = E_1[Z_2](T) = Z_2(TH)P(\omega_2 = H|\omega_1 = T) + Z_2(TT)P(\omega_2 = T|\omega_1 = T) = \frac32$.
(iii)
Proof.
$$V_1(H) = \frac{1}{Z_1(H)}E_1\Big[\frac{Z_2V_2}{1+r_1}\Big](H), \qquad V_1(T) = \frac{1}{Z_1(T)}E_1\Big[\frac{Z_2V_2}{1+r_1}\Big](T),$$
and
$$V_0 = \frac{Z_2(HH)V_2(HH)}{(1+\frac14)(1+\frac14)}P(HH) + \frac{Z_2(HT)V_2(HT)}{(1+\frac14)(1+\frac14)}P(HT) + \frac{Z_2(TH)V_2(TH)}{(1+\frac14)(1+\frac12)}P(TH) + 0 \approx 1.$$
3.6.
Proof. $U'(x) = \frac1x$, so $I(x) = \frac1x$. (3.3.26) gives $E\big[\frac{Z}{(1+r)^N}\cdot\frac{(1+r)^N}{\lambda Z}\big] = X_0$. So $\lambda = \frac{1}{X_0}$. By (3.3.25), we
have $X_N = \frac{(1+r)^N}{\lambda Z} = \frac{X_0(1+r)^N}{Z}$. Hence
$$X_n = \widetilde E_n\Big[\frac{X_N}{(1+r)^{N-n}}\Big] = \widetilde E_n\Big[\frac{X_0(1+r)^n}{Z}\Big] = X_0(1+r)^n\widetilde E_n\Big[\frac1Z\Big] = X_0(1+r)^n\frac{1}{Z_n}E_n\Big[Z\cdot\frac1Z\Big] = \frac{X_0(1+r)^n}{Z_n} = \frac{X_0}{\zeta_n},$$
where $\zeta_n = \frac{Z_n}{(1+r)^n}$.
3.7.
Proof. $U'(x) = x^{p-1}$ and so $I(x) = x^{\frac{1}{p-1}}$. By (3.3.26), we have $E\big[\frac{Z}{(1+r)^N}\big(\frac{\lambda Z}{(1+r)^N}\big)^{\frac{1}{p-1}}\big] = X_0$. Solve it for $\lambda$;
we get
$$\lambda = \frac{X_0^{p-1}(1+r)^{Np}}{\big(E[Z^{\frac{p}{p-1}}]\big)^{p-1}}.$$
So by (3.3.25),
$$X_N = \Big(\frac{\lambda Z}{(1+r)^N}\Big)^{\frac{1}{p-1}} = \frac{\lambda^{\frac{1}{p-1}}Z^{\frac{1}{p-1}}}{(1+r)^{\frac{N}{p-1}}} = \frac{X_0(1+r)^{\frac{Np}{p-1}}Z^{\frac{1}{p-1}}}{E[Z^{\frac{p}{p-1}}](1+r)^{\frac{N}{p-1}}} = \frac{(1+r)^NX_0Z^{\frac{1}{p-1}}}{E[Z^{\frac{p}{p-1}}]}.$$
3.8. (i)
Proof. $\frac{d}{dx}(U(x) - yx) = U'(x) - y$, so $x = I(y)$ is an extreme point of $U(x) - yx$. Because $\frac{d^2}{dx^2}(U(x) - yx) = U''(x) \le 0$ ($U$ is concave), $x = I(y)$ is a maximum point. Therefore $U(x) - yx \le U(I(y)) - yI(y)$ for every
$x$.
(ii)
Proof. Following the hint of the problem, we have
$$E[U(X_N)] - E\Big[X_N\frac{\lambda Z}{(1+r)^N}\Big] \le E\Big[U\Big(I\Big(\frac{\lambda Z}{(1+r)^N}\Big)\Big)\Big] - E\Big[\frac{\lambda Z}{(1+r)^N}I\Big(\frac{\lambda Z}{(1+r)^N}\Big)\Big],$$
i.e. $E[U(X_N)] - \lambda X_0 \le E[U(X_N^*)] - \lambda\widetilde E\big[\frac{X_N^*}{(1+r)^N}\big] = E[U(X_N^*)] - \lambda X_0$. So $E[U(X_N)] \le E[U(X_N^*)]$.
3.9. (i)
Proof. $X_n = \widetilde E_n\big[\frac{X_N}{(1+r)^{N-n}}\big]$. So if $X_N \ge 0$, then $X_n \ge 0$ for all $n$.
(ii)
Proof. a) If $0 \le x < \alpha$ and $0 < y \le \frac1\alpha$, then $U(x) - yx = -yx \le 0$ and $U(I(y)) - yI(y) = U(\alpha) - y\alpha = 1 - y\alpha \ge 0$. So $U(x) - yx \le U(I(y)) - yI(y)$.
b) If $0 \le x < \alpha$ and $y > \frac1\alpha$, then $U(x) - yx = -yx \le 0$ and $U(I(y)) - yI(y) = U(0) - y\cdot 0 = 0$. So
$U(x) - yx \le U(I(y)) - yI(y)$.
c) If $x \ge \alpha$ and $0 < y \le \frac1\alpha$, then $U(x) - yx = 1 - yx$ and $U(I(y)) - yI(y) = U(\alpha) - y\alpha = 1 - y\alpha \ge 1 - yx$.
So $U(x) - yx \le U(I(y)) - yI(y)$.
d) If $x \ge \alpha$ and $y > \frac1\alpha$, then $U(x) - yx = 1 - yx < 0$ and $U(I(y)) - yI(y) = U(0) - y\cdot 0 = 0$. So
$U(x) - yx \le U(I(y)) - yI(y)$.
(iii)
Proof. Using (ii) and setting $x = X_N$, $y = \frac{\lambda Z}{(1+r)^N}$, where $X_N$ is a random variable satisfying $\widetilde E\big[\frac{X_N}{(1+r)^N}\big] = X_0$,
we have
$$E[U(X_N)] - E\Big[\frac{\lambda Z}{(1+r)^N}X_N\Big] \le E[U(X_N^*)] - E\Big[\frac{\lambda Z}{(1+r)^N}X_N^*\Big].$$
That is, $E[U(X_N)] - \lambda X_0 \le E[U(X_N^*)] - \lambda X_0$. So $E[U(X_N)] \le E[U(X_N^*)]$.
(iv)
Proof. Plug $p_m$ and $\xi_m$ into (3.6.4); we have
$$X_0 = \sum_{m=1}^{2^N}p_m\xi_mI(\lambda\xi_m) = \sum_{m=1}^{2^N}p_m\xi_m\alpha 1_{\{\lambda\xi_m\le\frac1\alpha\}}.$$
So $\frac{X_0}{\alpha} = \sum_{m=1}^{2^N}p_m\xi_m1_{\{\lambda\xi_m\le\frac1\alpha\}}$. Suppose there is a solution $\lambda$ to (3.6.4); note $\alpha > 0$, we then can conclude
$\{m : \lambda\xi_m \le \frac1\alpha\} \ne \emptyset$. Let $K = \max\{m : \lambda\xi_m \le \frac1\alpha\}$; then $\lambda\xi_K \le \frac1\alpha < \lambda\xi_{K+1}$. So $\xi_K < \xi_{K+1}$ and
$\frac{X_0}{\alpha} = \sum_{m=1}^Kp_m\xi_m$ (note, however, that $K$ could be $2^N$; in this case, $\xi_{K+1}$ is interpreted as $\infty$; also, note
we are looking for a positive solution $\lambda > 0$). Conversely, suppose there exists some $K$ so that $\xi_K < \xi_{K+1}$ and
$\sum_{m=1}^K\xi_mp_m = \frac{X_0}{\alpha}$. Then we can find $\lambda > 0$ such that $\lambda\xi_K \le \frac1\alpha < \lambda\xi_{K+1}$. For such $\lambda$, we have
$$E\Big[\frac{Z}{(1+r)^N}I\Big(\frac{\lambda Z}{(1+r)^N}\Big)\Big] = \sum_{m=1}^{2^N}p_m\xi_m1_{\{\lambda\xi_m\le\frac1\alpha\}}\alpha = \sum_{m=1}^Kp_m\xi_m\alpha = X_0.$$
(v)
Proof. $X_N^*(\omega_m) = I(\lambda\xi_m) = \alpha 1_{\{\lambda\xi_m\le\frac1\alpha\}} = \begin{cases}\alpha, & \text{if } m \le K\\ 0, & \text{if } m \ge K+1.\end{cases}$
$$\sum_{k=n}^N\widetilde E_n\Big[1_{\{\tau=k\}}\frac{1}{(1+r)^{k-n}}G_k\Big] = \widetilde E_n\Big[1_{\{\tau\le N\}}\frac{1}{(1+r)^{\tau-n}}G_\tau\Big].$$
The buyer wants to consider all the possible $\tau$'s, so that he can find the least upper bound of security value,
which will be the maximum price of the derivative security acceptable to him. This is the price given by
Definition 4.4.1: $V_n = \max_{\tau\in S_n}\widetilde E_n\big[1_{\{\tau\le N\}}\frac{1}{(1+r)^{\tau-n}}G_\tau\big]$.
From the seller's perspective: A price process $(V_n)_{0\le n\le N}$ is acceptable to him if and only if at time $n$,
he can construct a portfolio at cost $V_n$ so that (i) $V_n \ge G_n$ and (ii) he needs no further investing into the
portfolio as time goes by. Formally, the seller can find $(\Delta_n)_{0\le n\le N}$ and $(C_n)_{0\le n\le N}$ so that $C_n \ge 0$ and
$V_{n+1} = \Delta_nS_{n+1} + (1+r)(V_n - C_n - \Delta_nS_n)$. Since $\big(\frac{S_n}{(1+r)^n}\big)_{0\le n\le N}$ is a martingale under the risk-neutral
measure $\widetilde P$, we conclude
$$\widetilde E_n\Big[\frac{V_{n+1}}{(1+r)^{n+1}}\Big] - \frac{V_n}{(1+r)^n} = -\frac{C_n}{(1+r)^n} \le 0,$$
i.e. $\big(\frac{V_n}{(1+r)^n}\big)_{0\le n\le N}$ is a supermartingale. This inspires us to check if the converse is also true. This is exactly
the content of Theorem 4.4.4. So $(V_n)_{0\le n\le N}$ is the value process of a portfolio that needs no further investing
if and only if $\big(\frac{V_n}{(1+r)^n}\big)_{0\le n\le N}$ is a supermartingale under $\widetilde P$ (note this is independent of the requirement
$V_n \ge G_n$). In summary, a price process $(V_n)_{0\le n\le N}$ is acceptable to the seller if and only if (i) $V_n \ge G_n$; (ii)
$\big(\frac{V_n}{(1+r)^n}\big)_{0\le n\le N}$ is a supermartingale under $\widetilde P$.
Theorem 4.4.2 shows the buyer's upper bound is the seller's lower bound. So it gives the price acceptable
to both. Theorem 4.4.3 gives a specific algorithm for calculating the price, Theorem 4.4.4 establishes the
one-to-one correspondence between super-replication and supermartingale property, and finally, Theorem
4.4.5 shows how to decide on the optimal exercise policy.
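The algorithm of Theorem 4.4.3 is a one-line recursion, $V_n = \max\{G_n, \frac{\tilde pV_{n+1}(H) + \tilde qV_{n+1}(T)}{1+r}\}$. A sketch using the binomial data that recurs in the following exercises ($S_0 = 4$, $u = 2$, $d = \frac12$, $r = \frac14$) with an assumed put strike $K = 5$, which reproduces the super-replication cost 1.36 quoted in Exercises 4.4 and 4.5:

```python
# American-put backward induction (Theorem 4.4.3 style).
# S0 = 4, u = 2, d = 1/2, r = 1/4; K = 5 is an assumption consistent
# with the value 1.36 cited in Exercises 4.4/4.5.
S0, u, d, r, K, N = 4.0, 2.0, 0.5, 0.25, 5.0, 2
p = (1 + r - d) / (u - d); q = 1 - p

def american_put(n, s):
    intrinsic = max(K - s, 0.0)
    if n == N:
        return intrinsic
    cont = (p * american_put(n + 1, u * s)
            + q * american_put(n + 1, d * s)) / (1 + r)
    return max(intrinsic, cont)

V0 = american_put(0, S0)
assert abs(V0 - 1.36) < 1e-12

# Delta hedge at time zero, as in Exercise 4.2.
delta0 = (american_put(1, u * S0) - american_put(1, d * S0)) / (u * S0 - d * S0)
assert abs(delta0 - (0.4 - 3.0) / 6.0) < 1e-12   # about -0.433
```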
4.1. (i)
Proof. $V_2^P(HH) = 0$, $V_2^P(HT) = V_2^P(TH) = 0.8$, $V_2^P(TT) = 3$, $V_1^P(H) = 0.32$, $V_1^P(T) = 2$, $V_0^P = 0.928$.
(ii)
Proof. V0C = 5.
(iii)
Proof. $g_S(s) = |4 - s|$. We apply Theorem 4.4.3 and have $V_2^S(HH) = 12.8$, $V_2^S(HT) = V_2^S(TH) = 2.4$,
$V_2^S(TT) = 3$, $V_1^S(H) = 6.08$, $V_1^S(T) = 2.16$ and $V_0^S = 3.296$.
(iv)
Proof. $V_n^S \le V_n^P + V_n^C$, and the inequality is strict at a node where only one of the early-exercise conditions
binds, i.e. where $g_P(S_n) > \frac{\tilde pV_{n+1}^P(H) + \tilde qV_{n+1}^P(T)}{1+r}$ or $g_C(S_n) > \frac{\tilde pV_{n+1}^C(H) + \tilde qV_{n+1}^C(T)}{1+r}$, since the straddle can only be
exercised as one unit: $V_n^S = \max\big\{g_S(S_n), \frac{\tilde pV_{n+1}^S(H) + \tilde qV_{n+1}^S(T)}{1+r}\big\}$.
4.2.
Proof. For this problem, we need Figure 4.2.1, Figure 4.4.1 and Figure 4.4.2. Then
$$\Delta_1(H) = \frac{V_2(HH) - V_2(HT)}{S_2(HH) - S_2(HT)} = -\frac{1}{12}, \qquad \Delta_1(T) = \frac{V_2(TH) - V_2(TT)}{S_2(TH) - S_2(TT)} = -1,$$
and
$$\Delta_0 = \frac{V_1(H) - V_1(T)}{S_1(H) - S_1(T)} \approx -0.433.$$
4.4.
Proof. 1.36 is the cost of super-replicating the American derivative security. It enables us to construct a
portfolio sufficient to pay off the derivative security, no matter when the derivative security is exercised. So
to hedge our short position after selling the put, there is no need to charge the insider more than 1.36.
4.5.
Proof. The stopping times in $S_0$ are
(1) $\tau \equiv 0$;
(2) $\tau \equiv 1$;
(3) $\tau(HT) = \tau(HH) = 1$, $\tau(TH), \tau(TT) \in \{2, \infty\}$ (4 different ones);
(4) $\tau(HT), \tau(HH) \in \{2, \infty\}$, $\tau(TH) = \tau(TT) = 1$ (4 different ones);
(5) $\tau(HT), \tau(HH), \tau(TH), \tau(TT) \in \{2, \infty\}$ (16 different ones).
When the option is out of the money, the following stopping times do not exercise:
(i) $\tau \equiv 0$;
(ii) $\tau(HT) \in \{2, \infty\}$, $\tau(HH) = \infty$, $\tau(TH), \tau(TT) \in \{2, \infty\}$ (8 different ones);
(iii) $\tau(HT) \in \{2, \infty\}$, $\tau(HH) = \infty$, $\tau(TH) = \tau(TT) = 1$ (2 different ones).
For (i), $\widetilde E[1_{\{\tau\le2\}}(\frac45)^\tau G_\tau] = G_0 = 1$. For (ii), $\widetilde E[1_{\{\tau\le2\}}(\frac45)^\tau G_\tau] \le \widetilde E[1_{\{\tau^*\le2\}}(\frac45)^{\tau^*}G_{\tau^*}]$, where $\tau^*(HT) = 2$, $\tau^*(HH) = \infty$, $\tau^*(TH) = \tau^*(TT) = 2$. So $\widetilde E[1_{\{\tau^*\le2\}}(\frac45)^{\tau^*}G_{\tau^*}] = \frac14[(\frac45)^2\cdot1 + (\frac45)^2(1+4)] = 0.96$. For
(iii), $\widetilde E[1_{\{\tau\le2\}}(\frac45)^\tau G_\tau]$ has the biggest value when $\tau$ satisfies $\tau(HT) = 2$, $\tau(HH) = \infty$, $\tau(TH) = \tau(TT) = 1$.
This value is 1.36.
4.6. (i)
Proof. The value of the put at time $N$, if it is not exercised at previous times, is $K - S_N$. Hence $V_{N-1} = \max\{K - S_{N-1}, \widetilde E_{N-1}[\frac{V_N}{1+r}]\} = \max\{K - S_{N-1}, \frac{K}{1+r} - S_{N-1}\} = K - S_{N-1}$. The second equality comes from
the fact that the discounted stock price process is a martingale under the risk-neutral probability. By induction, we
can show $V_n = K - S_n$ ($0 \le n \le N$). So by Theorem 4.4.5, the optimal exercise policy is to sell the stock
at time zero and the value of this derivative security is $K - S_0$.
Remark: We cheated a little bit by using the American algorithm and Theorem 4.4.5, since they are developed
for the case where $\tau$ is allowed to be $\infty$. But intuitively, results in this chapter should still hold for the case
$\tau \le N$, provided we replace $\max\{G_n, 0\}$ with $G_n$.
(ii)
Proof. This is because at time $N$, if we have to exercise the put and $K - S_N < 0$, we can exercise the
European call to set off the negative payoff. In effect, throughout the portfolio's lifetime, the portfolio has
intrinsic values greater than that of an American put stuck at $K$ with expiration time $N$. So, we must have
$V_0^{AP} \le V_0 + V_0^{EC} = K - S_0 + V_0^{EC}$.
(iii)
Proof. Let $V_0^{EP}$ denote the time-zero value of a European put with strike $K$ and expiration time $N$. Then
$$V_0^{AP} \ge V_0^{EP} = V_0^{EC} - \widetilde E\Big[\frac{S_N - K}{(1+r)^N}\Big] = V_0^{EC} - S_0 + \frac{K}{(1+r)^N}.$$
4.7.
Proof. $V_N = S_N - K$, $V_{N-1} = \max\{S_{N-1} - K, \widetilde E_{N-1}[\frac{V_N}{1+r}]\} = \max\{S_{N-1} - K, S_{N-1} - \frac{K}{1+r}\} = S_{N-1} - \frac{K}{1+r}$.
By induction, we can prove $V_n = S_n - \frac{K}{(1+r)^{N-n}}$ ($0 \le n \le N$) and $V_n > G_n$ for $0 \le n \le N-1$. So the
time-zero value is $S_0 - \frac{K}{(1+r)^N}$ and the optimal exercise time is $N$.
5. Random Walk
5.1. (i)
Proof. $E[\alpha^{\tau_2}] = E[\alpha^{(\tau_2-\tau_1)+\tau_1}] = E[\alpha^{\tau_2-\tau_1}]E[\alpha^{\tau_1}] = E[\alpha^{\tau_1}]^2$.
(ii)
Proof. If we define $M_n^{(m)} = M_{\tau_m+n} - M_{\tau_m}$ ($m = 1, 2, \dots$), then $(M_\cdot^{(m)})_m$ as random functions are i.i.d. with the same distribution as $M$, and $\tau_{m+1} - \tau_m = \inf\{n : M_n^{(m)} = 1\}$. So $\tau_{m+1} - \tau_m$ are i.i.d..
(iii)
Proof. Yes, since the argument of (ii) still works for asymmetric random walk.
5.2. (i)
Proof. $f'(\sigma) = pe^\sigma - qe^{-\sigma}$, so $f'(\sigma) > 0$ if and only if $\sigma > \frac12(\ln q - \ln p)$. Since $\frac12(\ln q - \ln p) < 0$,
$f(\sigma) > f(0) = 1$ for all $\sigma > 0$.
(ii)
Proof. $E_n\big[\frac{S_{n+1}}{S_n}\big] = E_n\big[e^{\sigma X_{n+1}}\frac{1}{f(\sigma)}\big] = pe^\sigma\frac{1}{f(\sigma)} + qe^{-\sigma}\frac{1}{f(\sigma)} = 1$.
(iii)
Proof. By the optional stopping theorem, $E[S_{n\wedge\tau_1}] = E[S_0] = 1$. Note $S_{n\wedge\tau_1} = e^{\sigma M_{n\wedge\tau_1}}\big(\frac{1}{f(\sigma)}\big)^{n\wedge\tau_1}$;
by the bounded convergence theorem, $E[1_{\{\tau_1<\infty\}}S_{\tau_1}] = E[\lim_{n\to\infty}S_{n\wedge\tau_1}] = \lim_{n\to\infty}E[S_{n\wedge\tau_1}] = 1$, that is,
$E[1_{\{\tau_1<\infty\}}e^\sigma(\frac{1}{f(\sigma)})^{\tau_1}] = 1$. So $e^{-\sigma} = E[1_{\{\tau_1<\infty\}}(\frac{1}{f(\sigma)})^{\tau_1}]$. Let $\sigma \downarrow 0$; again by the bounded convergence theorem,
$1 = E[1_{\{\tau_1<\infty\}}(\frac{1}{f(0)})^{\tau_1}] = P(\tau_1 < \infty)$.
(iv)
Proof. Set $\alpha = \frac{1}{f(\sigma)} = \frac{1}{pe^\sigma + qe^{-\sigma}}$; then as $\sigma$ varies from $0$ to $\infty$, $\alpha$ can take all the values in $(0, 1)$.
Write $e^{-\sigma}$ in terms of $\alpha$: $e^{-\sigma} = \frac{1-\sqrt{1-4pq\alpha^2}}{2q\alpha}$ (note $4pq\alpha^2 < 4(\frac{p+q}{2})^2\cdot1 = 1$). We want to choose $\sigma > 0$, so we
should take $\sigma = \ln\big(\frac{1+\sqrt{1-4pq\alpha^2}}{2p\alpha}\big)$. Therefore $E[\alpha^{\tau_1}] = e^{-\sigma} = \frac{2p\alpha}{1+\sqrt{1-4pq\alpha^2}} = \frac{1-\sqrt{1-4pq\alpha^2}}{2q\alpha}$.
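The algebra behind this formula (choosing the root of the quadratic $q\alpha x^2 - x + p\alpha = 0$ in $x = e^{-\sigma}$ that makes $\sigma > 0$) can be checked numerically; a sketch, with $p = 0.6$ as an illustrative value:

```python
from math import sqrt, log

# Exercise 5.2(iv): with alpha = 1/f(sigma), taking the root giving
# sigma > 0, check that e^{-sigma} = (1 - sqrt(1-4pq a^2)) / (2 q a)
# and that f(sigma) = p e^s + q e^{-s} really equals 1/alpha.
p = 0.6          # any p > 1/2, as in Exercise 5.2
q = 1 - p
for alpha in (0.3, 0.5, 0.9, 0.99):
    root = sqrt(1 - 4 * p * q * alpha ** 2)
    e_minus_sigma = (1 - root) / (2 * q * alpha)
    e_sigma = (1 + root) / (2 * p * alpha)
    assert abs(e_sigma * e_minus_sigma - 1.0) < 1e-12   # reciprocal roots
    assert log(e_sigma) > 0                             # sigma > 0
    assert abs(p * e_sigma + q * e_minus_sigma - 1 / alpha) < 1e-9
```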
(v)
Proof.
$$\frac{\partial}{\partial\alpha}E[\alpha^{\tau_1}] = E[\tau_1\alpha^{\tau_1-1}1_{\{\tau_1<\infty\}}],$$
and
$$\frac{\partial}{\partial\alpha}\Big(\frac{1-\sqrt{1-4pq\alpha^2}}{2q\alpha}\Big) = \frac{1}{2q}\Big[\frac12(1-4pq\alpha^2)^{-\frac12}(8pq\alpha)\alpha^{-1} + (1-\sqrt{1-4pq\alpha^2})(-1)\alpha^{-2}\Big].$$
So $E[\tau_1] = \lim_{\alpha\uparrow1}\frac{\partial}{\partial\alpha}E[\alpha^{\tau_1}] = \frac{1}{2q}\big[\frac12(1-4pq)^{-\frac12}(8pq) - (1-\sqrt{1-4pq})\big] = \frac{1}{2p-1}$.
5.3. (i)
Proof. Solve the equation $pe^\sigma + qe^{-\sigma} = 1$; a positive solution is $\sigma = \ln\frac{1+\sqrt{1-4pq}}{2p} = \ln\frac{1-p}{p} = \ln q - \ln p$. Set
$\sigma_0 = \ln q - \ln p$; then $f(\sigma_0) = 1$ and $f'(\sigma) > 0$ for $\sigma > \sigma_0$. So $f(\sigma) > 1$ for all $\sigma > \sigma_0$.
(ii)
Proof. As in Exercise 5.2, $S_n = e^{\sigma M_n}\big(\frac{1}{f(\sigma)}\big)^n$ is a martingale, and $1 = E[S_0] = E[S_{n\wedge\tau_1}] = E[e^{\sigma M_{n\wedge\tau_1}}\big(\frac{1}{f(\sigma)}\big)^{n\wedge\tau_1}]$.
Suppose $\sigma > \sigma_0$; then by the bounded convergence theorem,
$$1 = E\Big[\lim_{n\to\infty}e^{\sigma M_{n\wedge\tau_1}}\Big(\frac{1}{f(\sigma)}\Big)^{n\wedge\tau_1}\Big] = E\Big[1_{\{\tau_1<\infty\}}e^\sigma\Big(\frac{1}{f(\sigma)}\Big)^{\tau_1}\Big].$$
Let $\sigma \downarrow \sigma_0$; again by the bounded convergence theorem, $P(\tau_1 < \infty) = e^{-\sigma_0} = \frac{p}{q} < 1$.
(iii)
Proof. From (ii), we can see $E[1_{\{\tau_1<\infty\}}(\frac{1}{f(\sigma)})^{\tau_1}] = e^{-\sigma}$, for $\sigma > \sigma_0$. Set $\alpha = \frac{1}{f(\sigma)}$; then $e^{-\sigma} = \frac{1-\sqrt{1-4pq\alpha^2}}{2q\alpha}$. We
need to choose the root so that $e^\sigma > e^{\sigma_0} = \frac{q}{p}$, so $\sigma = \ln\big(\frac{1+\sqrt{1-4pq\alpha^2}}{2p\alpha}\big)$; then $E[\alpha^{\tau_1}1_{\{\tau_1<\infty\}}] = \frac{1-\sqrt{1-4pq\alpha^2}}{2q\alpha}$.
(iv)
Proof. $E[\tau_11_{\{\tau_1<\infty\}}] = \frac{\partial}{\partial\alpha}E[\alpha^{\tau_1}1_{\{\tau_1<\infty\}}]\big|_{\alpha=1} = \frac{1}{2q}\big[\frac{4pq}{\sqrt{1-4pq}} - (1-\sqrt{1-4pq})\big] = \frac{1}{2q}\big[\frac{4pq}{2q-1} - 1 + 2q - 1\big] = \frac{p}{q}\cdot\frac{1}{2q-1}$.
5.4. (i)
Proof. $E[\alpha^{\tau_2}] = \sum_{k=1}^\infty P(\tau_2 = 2k)\alpha^{2k}$, and $E[\alpha^{\tau_2}] = (E[\alpha^{\tau_1}])^2 = \big(\frac{1-\sqrt{1-\alpha^2}}{\alpha}\big)^2 = \sum_{k=1}^\infty\big(\frac{\alpha}{2}\big)^{2k}\frac{(2k)!}{(k+1)!\,k!}$. So
$P(\tau_2 = 2k) = \frac{(2k)!}{4^k(k+1)!\,k!}$.
(ii)
Proof. $P(\tau_2 = 2) = \frac14$. For $k \ge 2$, $P(\tau_2 = 2k) = P(\tau_2 \le 2k) - P(\tau_2 \le 2k-2)$, where by the reflection
principle
$$P(\tau_2 \le 2k) = P(M_{2k} = 2) + 2P(M_{2k} \ge 4).$$
A direct computation with binomial coefficients then yields $P(\tau_2 = 2k) = \frac{(2k)!}{4^k(k+1)!\,k!}$, in agreement with (i).
5.5. (i)
Proof. For every path that crosses level $m$ by time $n$ and resides at $b$ at time $n$, there corresponds a reflected
path that resides at $2m - b$ at time $n$. So
$$P(M_n^* \ge m, M_n = b) = P(M_n = 2m - b) = \Big(\frac12\Big)^n\frac{n!}{(m + \frac{n-b}{2})!(\frac{n+b}{2} - m)!}.$$
(ii)
Proof.
$$P(M_n^* \ge m, M_n = b) = \frac{n!}{(m + \frac{n-b}{2})!(\frac{n+b}{2} - m)!}\,p^{m+\frac{n-b}{2}}q^{\frac{n+b}{2}-m}.$$
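The counting identity behind the reflection principle can be verified by enumerating all $2^n$ paths for a small $n$:

```python
from itertools import product
from math import factorial

# Exercise 5.5(i): the number of n-step paths with max >= m ending at b
# equals n! / ((m + (n-b)/2)! ((n+b)/2 - m)!), the count of paths
# ending at the reflected endpoint 2m - b.
n, m = 8, 2
for b in range(2 * m - n, m):      # b < m with 2m - b reachable
    if (n + b) % 2:
        continue                   # b must have the parity of n
    count = 0
    for steps in product((1, -1), repeat=n):
        pos, peak = 0, 0
        for x in steps:
            pos += x
            peak = max(peak, pos)
        if peak >= m and pos == b:
            count += 1
    up = m + (n - b) // 2          # up-steps of the reflected path
    down = (n + b) // 2 - m
    assert count == factorial(n) // (factorial(up) * factorial(down))
```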
5.6.
Proof. On the infinite coin-toss space, we define $M_n = \{$stopping times that take values $0, 1, \dots, n, \infty\}$
and $M_\infty = \{$stopping times that take values $0, 1, 2, \dots\}$. Then the time-zero value $V^*$ of the perpetual
American put as in Section 5.4 can be defined as $\sup_{\tau\in M_\infty}\widetilde E\big[1_{\{\tau<\infty\}}\frac{(K-S_\tau)^+}{(1+r)^\tau}\big]$. For an American put with
the same strike price $K$ that expires at time $n$, its time-zero value $V^{(n)}$ is $\max_{\tau\in M_n}\widetilde E\big[1_{\{\tau<\infty\}}\frac{(K-S_\tau)^+}{(1+r)^\tau}\big]$.
Clearly $(V^{(n)})_{n\ge0}$ is nondecreasing and $V^{(n)} \le V^*$ for every $n$. So $\lim_{n\to\infty}V^{(n)}$ exists and $\lim_{n\to\infty}V^{(n)} \le V^*$.
For any given $\tau \in M_\infty$, we define
$$\tau^{(n)} = \begin{cases}\infty, & \text{if } \tau = \infty\\ \tau\wedge n, & \text{if } \tau < \infty;\end{cases}$$
then $\tau^{(n)}$ is also a stopping time, $\tau^{(n)} \in M_n$,
and $\lim_{n\to\infty}\tau^{(n)} = \tau$. By the bounded convergence theorem,
$$\widetilde E\Big[1_{\{\tau<\infty\}}\frac{(K-S_\tau)^+}{(1+r)^\tau}\Big] = \lim_{n\to\infty}\widetilde E\Big[1_{\{\tau^{(n)}<\infty\}}\frac{(K-S_{\tau^{(n)}})^+}{(1+r)^{\tau^{(n)}}}\Big] \le \lim_{n\to\infty}V^{(n)}.$$
Taking the sup over $\tau$ on the left hand side of the inequality, we get $V^* \le \lim_{n\to\infty}V^{(n)}$. Therefore $V^* = \lim_{n\to\infty}V^{(n)}$.
Remark: In the above proof, rigorously speaking, we should use $(K-S_\tau)$ in the places of $(K-S_\tau)^+$. So
this needs some justification.
5.8. (i)
Proof. $v(S_n) = S_n$, so $\frac{1}{(1+r)^n}v(S_n) = \frac{S_n}{(1+r)^n}$ is a martingale under $\widetilde P$.
(ii)
Proof. If the purchaser chooses to exercise the call at time $n$, then the discounted risk-neutral expectation
of her payoff is $\widetilde E\big[\frac{S_n - K}{(1+r)^n}\big] = S_0 - \frac{K}{(1+r)^n}$. Since $\lim_{n\to\infty}\big(S_0 - \frac{K}{(1+r)^n}\big) = S_0$, the value of the call at time
zero is at least $\sup_n\big(S_0 - \frac{K}{(1+r)^n}\big) = S_0$.
(iii)
Proof. $\max\big\{g(s), \frac{\tilde pv(us) + \tilde qv(ds)}{1+r}\big\} = \max\big\{s - K, \frac{\tilde pu + \tilde qd}{1+r}s\big\} = \max\{s - K, s\} = s = v(s)$, so equation (5.4.16) is
satisfied. Clearly $v(s) = s$ also satisfies the boundary condition (5.4.18).
(iv)
Proof. Suppose $\tau$ is an optimal exercise time; then $\widetilde E\big[\frac{S_\tau - K}{(1+r)^\tau}1_{\{\tau<\infty\}}\big] \ge S_0$. Then $P(\tau < \infty) \ne 0$ and
$\widetilde E\big[\frac{K}{(1+r)^\tau}1_{\{\tau<\infty\}}\big] > 0$. So
$$\widetilde E\Big[\frac{S_\tau - K}{(1+r)^\tau}1_{\{\tau<\infty\}}\Big] < \widetilde E\Big[\frac{S_\tau}{(1+r)^\tau}1_{\{\tau<\infty\}}\Big].$$
Since $\big(\frac{S_n}{(1+r)^n}\big)_{n\ge0}$ is a martingale under the risk-neutral measure, by Fatou's lemma,
$$\widetilde E\Big[\frac{S_\tau}{(1+r)^\tau}1_{\{\tau<\infty\}}\Big] \le \liminf_{n\to\infty}\widetilde E\Big[\frac{S_{\tau\wedge n}}{(1+r)^{\tau\wedge n}}\Big] = \liminf_{n\to\infty}\widetilde E[S_0] = S_0.$$
Combined, we have $S_0 \le \widetilde E\big[\frac{S_\tau - K}{(1+r)^\tau}1_{\{\tau<\infty\}}\big] < S_0$. Contradiction. So there is no optimal time to exercise
the perpetual American call. Simultaneously, we have shown $\widetilde E\big[\frac{S_\tau - K}{(1+r)^\tau}1_{\{\tau<\infty\}}\big] < S_0$ for any stopping time $\tau$.
Combined with (ii), we conclude $S_0$ is the least upper bound for all the prices acceptable to the buyer.
5.9. (i)
Proof. Suppose $v(s) = s^p$; then we have $s^p = \frac25 2^ps^p + \frac25\frac{s^p}{2^p}$. So $1 = \frac{2^{p+1}}{5} + \frac{2^{1-p}}{5}$, which gives $2^p = 2$ or
$2^p = \frac12$, i.e. $p = -1$ or $p = 1$.
(ii)
Proof. Since $\lim_{s\to\infty}v(s) = \lim_{s\to\infty}(As + \frac{B}{s}) = 0$, we must have $A = 0$.
(iii)
Proof. $f_B(s) = 0$ if and only if $B + s^2 - 4s = 0$. The discriminant $\Delta = (-4)^2 - 4B = 4(4 - B)$. So for $B \le 4$,
the equation has roots and for $B > 4$, this equation does not have roots.
(iv)
Proof. Suppose $B \le 4$; then the equation $s^2 - 4s + B = 0$ has solutions $2 \pm \sqrt{4-B}$. By drawing graphs of
$4 - s$ and $\frac{B}{s}$, we should choose $B = 4$ and $s_B = 2 + \sqrt{4-B} = 2$.
(v)
Proof. To have a continuous derivative, we must have $-1 = -\frac{B}{s_B^2}$. Plug $B = s_B^2$ back into $s_B^2 - 4s_B + B = 0$;
we get $s_B = 2$. This gives $B = 4$.
6. Interest-Rate-Dependent Assets
6.2.
Proof. Let $K = \frac{S_n}{B_{n,m}}$ and $X_k = S_k - KB_{k,m}$ for $n \le k \le m$, so that $D_kX_k = E_k[D_m(S_m - K)]$ is the
discounted value of the forward contract. Then
$$E_{k-1}[D_kX_k] = E_{k-1}[D_kS_k] - KE_{k-1}[E_k[D_m]] = D_{k-1}S_{k-1} - KD_{k-1}B_{k-1,m} = D_{k-1}X_{k-1},$$
so $(D_kX_k)_{n\le k\le m}$ is a martingale, and $X_n = S_n - \frac{S_n}{B_{n,m}}B_{n,m} = 0$.
6.3.
Proof.
$$\frac{1}{D_n}\widetilde E_n[D_{m+1}R_m] = \frac{1}{D_n}\widetilde E_n[D_m(1+R_m)^{-1}R_m] = \frac{1}{D_n}\widetilde E_n[D_m - D_{m+1}] = B_{n,m} - B_{n,m+1}.$$
6.4.(i)
Proof. $D_1V_1 = \widetilde E_1[D_3V_3] = \widetilde E_1[D_2V_2] = D_2\widetilde E_1[V_2]$. So $V_1 = \frac{D_2}{D_1}\widetilde E_1[V_2] = \frac{1}{1+R_1}\widetilde E_1[V_2]$. In particular,
$V_1(H) = \frac{1}{1+R_1(H)}V_2(HH)\widetilde P(\omega_2 = H|\omega_1 = H) = \frac{4}{21}$, $V_1(T) = 0$.
(ii)
Proof. Let $X_0 = \frac{2}{21}$. Suppose we buy $\Delta_0$ shares of the maturity two bond; then at time one, the value of
our portfolio is $X_1 = (1+R_0)(X_0 - \Delta_0B_{0,2}) + \Delta_0B_{1,2}$. To replicate the value $V_1$, we must have
$$V_1(H) = (1+R_0)(X_0 - \Delta_0B_{0,2}) + \Delta_0B_{1,2}(H)$$
$$V_1(T) = (1+R_0)(X_0 - \Delta_0B_{0,2}) + \Delta_0B_{1,2}(T).$$
So $\Delta_0 = \frac{V_1(H) - V_1(T)}{B_{1,2}(H) - B_{1,2}(T)} = \frac43$. The hedging strategy is therefore to borrow $\frac43B_{0,2} - \frac{2}{21} = \frac{20}{21}$ and buy $\frac43$
shares of the maturity two bond. The reason why we do not invest in the maturity three bond is that
$B_{1,3}(H) = B_{1,3}(T) (= \frac47)$ and the portfolio would therefore have the same value at time one regardless of the
outcome of the first coin toss. This makes the replication of $V_1$ impossible, since $V_1(H) \ne V_1(T)$.
(iii)
Proof. Suppose we buy $\Delta_1$ shares of the maturity three bond at time one; then to replicate $V_2$ at time
two, we must have $V_2 = (1+R_1)(X_1 - \Delta_1B_{1,3}) + \Delta_1B_{2,3}$. So $\Delta_1(H) = \frac{V_2(HH) - V_2(HT)}{B_{2,3}(HH) - B_{2,3}(HT)} = -\frac23$, and
$\Delta_1(T) = \frac{V_2(TH) - V_2(TT)}{B_{2,3}(TH) - B_{2,3}(TT)} = 0$. So the hedging strategy is as follows. If the outcome of the first coin toss is
$T$, then we do nothing. If the outcome of the first coin toss is $H$, then we short $\frac23$ shares of the maturity three
bond and invest the income into the money market account. We do not invest in the maturity two bond,
because at time two, the value of the bond is its face value and our portfolio would therefore have the same
value regardless of the outcomes of the coin tosses. This makes the replication of $V_2$ impossible.
6.5. (i)
6.6. (i)
Proof. The agent enters the forward contract at no cost. He is obliged to buy the asset at time $m$ at
the strike price $K = For_{n,m} = \frac{S_n}{B_{n,m}}$. At time $n+1$, the contract has the value $\frac{1}{D_{n+1}}\widetilde E_{n+1}[D_m(S_m - K)] = S_{n+1} - KB_{n+1,m} = S_{n+1} - \frac{S_nB_{n+1,m}}{B_{n,m}}$. If the agent sells the contract at time $n+1$, he receives a cash
flow of $S_{n+1} - \frac{S_nB_{n+1,m}}{B_{n,m}}$.
(ii)
Proof. By (i), the cash flow generated at time $n+1$ is
$$(1+r)^{m-n-1}\Big(S_{n+1} - \frac{S_nB_{n+1,m}}{B_{n,m}}\Big) = (1+r)^{m-n-1}S_{n+1} - (1+r)^{m-n}S_n$$
$$= (1+r)^m\widetilde E_{n+1}\Big[\frac{S_m}{(1+r)^m}\Big] - (1+r)^m\widetilde E_n\Big[\frac{S_m}{(1+r)^m}\Big] = Fut_{n+1,m} - Fut_{n,m}.$$
6.7.
Proof. $\psi_{n+1}(0) = \widetilde E\big[\frac{D_n}{1+r_n(0)}1_{\{\#H(\omega_1\cdots\omega_{n+1})=0\}}\big] = \frac{\psi_n(0)}{2(1+r_n(0))}$. For $k = 1, 2, \dots, n$,
$$\psi_{n+1}(k) = \widetilde E\Big[\frac{D_n}{1+r_n(\#H(\omega_1\cdots\omega_n))}1_{\{\#H(\omega_1\cdots\omega_{n+1})=k\}}\Big]$$
$$= \widetilde E\Big[\frac{D_n}{1+r_n(k)}1_{\{\#H(\omega_1\cdots\omega_n)=k\}}1_{\{\omega_{n+1}=T\}}\Big] + \widetilde E\Big[\frac{D_n}{1+r_n(k-1)}1_{\{\#H(\omega_1\cdots\omega_n)=k-1\}}1_{\{\omega_{n+1}=H\}}\Big]$$
$$= \frac12\frac{\widetilde E[D_nV_n(k)]}{1+r_n(k)} + \frac12\frac{\widetilde E[D_nV_n(k-1)]}{1+r_n(k-1)} = \frac{\psi_n(k)}{2(1+r_n(k))} + \frac{\psi_n(k-1)}{2(1+r_n(k-1))}.$$
Finally,
$$\psi_{n+1}(n+1) = \widetilde E[D_{n+1}V_{n+1}(n+1)] = \widetilde E\Big[\frac{D_n}{1+r_n(n)}1_{\{\#H(\omega_1\cdots\omega_n)=n\}}1_{\{\omega_{n+1}=H\}}\Big] = \frac{\psi_n(n)}{2(1+r_n(n))}.$$
Remark: In the above proof, we have used the independence of $\omega_{n+1}$ and $(\omega_1, \dots, \omega_n)$. This is guaranteed
by the assumption that $\tilde p = \tilde q = \frac12$ (note $\xi \perp \eta$ if and only if $E[\xi|\eta] = $ constant). In case the binomial model
has stochastic up- and down-factors $u_n$ and $d_n$, we can use the fact that $\widetilde P(\omega_{n+1} = H|\omega_1, \dots, \omega_n) = p_n$ and
$\widetilde P(\omega_{n+1} = T|\omega_1, \dots, \omega_n) = q_n$, where $p_n = \frac{1+r_n-d_n}{u_n-d_n}$ and $q_n = \frac{u_n-1-r_n}{u_n-d_n}$ (cf. solution of Exercise 2.9 and
notes on page 39). Then for any $X \in \mathcal F_n = \sigma(\omega_1, \dots, \omega_n)$, we have $\widetilde E[Xf(\omega_{n+1})] = \widetilde E[X\widetilde E[f(\omega_{n+1})|\mathcal F_n]] = \widetilde E[X(p_nf(H) + q_nf(T))]$.
Since $p^2 + (1-p)^2 < 1$ for $0 < p < 1$, we have $\lim_{n\to\infty}(p^2 + (1-p)^2)^n = 0$. This implies $P(A) = 0$.
1.3.
Proof. Clearly $P(\emptyset) = 0$. For any $A$ and $B$, if both of them are finite, then $A \cup B$ is also finite. So
$P(A\cup B) = 0 = P(A) + P(B)$. If at least one of them is infinite, then $A\cup B$ is also infinite. So $P(A\cup B) = \infty = P(A) + P(B)$. Similarly, we can prove $P(\cup_{n=1}^NA_n) = \sum_{n=1}^NP(A_n)$, even if the $A_n$'s are not disjoint.
To see the countable additivity property doesn't hold for $P$, let $A_n = \{\frac1n\}$. Then $A = \cup_{n=1}^\infty A_n$ is an infinite
set and therefore $P(A) = \infty$. However, $P(A_n) = 0$ for each $n$. So $P(A) \ne \sum_{n=1}^\infty P(A_n)$.
1.4. (i)
Proof. By Example 1.2.5, we can construct a random variable $X$ on the coin-toss space, which is uniformly
distributed on $[0, 1]$. For the strictly increasing and continuous function $N(x) = \int_{-\infty}^x\frac{1}{\sqrt{2\pi}}e^{-\frac{\xi^2}{2}}d\xi$, we let
$Z = N^{-1}(X)$. Then $P(Z \le a) = P(X \le N(a)) = N(a)$ for any real number $a$, i.e. $Z$ is a standard normal
random variable on the coin-toss space $(\Omega_\infty, \mathcal F_\infty, P)$.
(ii)
Proof. Define $X_n = \sum_{i=1}^n\frac{Y_i}{2^i}$, where
$$Y_i(\omega) = \begin{cases}1, & \text{if } \omega_i = H\\ 0, & \text{if } \omega_i = T.\end{cases}$$
Then $X_n(\omega) \to X(\omega)$ for every $\omega$, where $X$ is defined as in (i). So $Z_n = N^{-1}(X_n) \to Z = N^{-1}(X)$ for
every $\omega$. Clearly $Z_n$ depends only on the first $n$ coin tosses and $\{Z_n\}_{n\ge1}$ is the desired sequence.
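The construction above is concrete enough to run: the binary-digit sums $X_n$ increase to $X$ with error at most $2^{-n}$, so $N^{-1}(X_n) \to N^{-1}(X)$ by continuity. A sketch using a fixed toss sequence of my own choosing:

```python
from statistics import NormalDist

# Vol. II, Exercise 1.4: X = sum_i Y_i / 2^i is uniform on [0, 1] for
# fair-coin digits Y_i, and Z_n = N^{-1}(X_n) -> Z = N^{-1}(X).
tosses = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] * 4      # 40 illustrative tosses

def partial(n):
    """X_n built from the first n tosses."""
    return sum(y / 2 ** (i + 1) for i, y in enumerate(tosses[:n]))

X = partial(40)
inv = NormalDist().inv_cdf

# X_n increases to X and differs from it by at most 2^{-n} ...
for n in range(1, 41):
    assert 0.0 <= X - partial(n) <= 2.0 ** -n
# ... so Z_n = N^{-1}(X_n) converges to Z = N^{-1}(X)
assert abs(inv(partial(30)) - inv(X)) < 1e-6
```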
1.5.
Proof. $E\{X\} = \int_\Omega X(\omega)dP(\omega) = \int_\Omega\int_0^{X(\omega)}dx\,dP(\omega) = \int_\Omega\int_0^\infty 1_{\{x<X(\omega)\}}dx\,dP(\omega) = \int_0^\infty P(x < X)dx = \int_0^\infty(1 - F(x))dx$, where the order of integration is interchanged by Fubini's theorem.
1.6. (i)
Proof.
$$E\{e^{uX}\} = \int_{-\infty}^\infty e^{ux}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}dx = \int_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2 - 2\sigma^2ux}{2\sigma^2}}dx$$
$$= \int_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{[x-(\mu+\sigma^2u)]^2 - (2\sigma^2u\mu + \sigma^4u^2)}{2\sigma^2}}dx = e^{u\mu+\frac{\sigma^2u^2}{2}}\int_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{[x-(\mu+\sigma^2u)]^2}{2\sigma^2}}dx = e^{u\mu+\frac{\sigma^2u^2}{2}}.$$
(ii)
Proof. E{(X)} = E{euX } = eu+
u2 2
2
eu = (E{X}).
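The closed form $E\{e^{uX}\}=e^{u\mu+\sigma^2u^2/2}$ can be checked against a direct numerical integration of the Gaussian density; this is our own sanity check (a plain midpoint Riemann sum over a wide standardized range), not part of the original solution:

```python
import math

def mgf_normal_quadrature(u, mu, sigma, lo=-12.0, hi=12.0, n=200_000):
    """Approximate E[e^{uX}] for X ~ N(mu, sigma^2) by a midpoint Riemann sum."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        t = lo + (i + 0.5) * h          # standardized variable
        x = mu + sigma * t
        total += math.exp(u * x) * math.exp(-t * t / 2) / math.sqrt(2 * math.pi) * h
    return total

u, mu, sigma = 0.7, 0.3, 1.5
closed_form = math.exp(u * mu + sigma ** 2 * u ** 2 / 2)
print(mgf_normal_quadrature(u, mu, sigma), closed_form)
```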
1.7. (i)

Proof. Since $|f_n(x)|\le\frac{1}{\sqrt{2\pi n}}$ for every $x$, $f_n(x)\to0$ as $n\to\infty$, uniformly in $x$.

(ii)

Proof. By the change of variable formula, $\int_{-\infty}^\infty f_n(x)\,dx=\int_{-\infty}^\infty\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx=1$. So we must have $\lim_n\int_{-\infty}^\infty f_n(x)\,dx=1$.

(iii)

Proof. This is not contradictory with the Monotone Convergence Theorem, since $\{f_n\}_{n\ge1}$ doesn't increase to 0.
1.8. (i)

Proof. By (1.9.1), $|Y_n|=\big|\frac{e^{tX}-e^{s_nX}}{t-s_n}\big|=|Xe^{\theta X}|=Xe^{\theta X}\le Xe^{2tX}$. The last inequality is by $X\ge0$ and the fact that $\theta$ is between $t$ and $s_n$, and hence smaller than $2t$ for $n$ sufficiently large. So by the Dominated Convergence Theorem, $\varphi'(t)=\lim_nE\{Y_n\}=E\{\lim_nY_n\}=E\{Xe^{tX}\}$.

(ii)

Proof. Since $E[e^{tX}1_{\{X\ge0\}}]+E[e^{tX}1_{\{X<0\}}]=E[e^{tX}]<\infty$ for every $t\in\mathbb R$, $E[e^{t|X|}]=E[e^{tX}1_{\{X\ge0\}}]+E[e^{(-t)X}1_{\{X<0\}}]<\infty$ for every $t\in\mathbb R$. Similarly, we have $E[|X|e^{t|X|}]<\infty$ for every $t\in\mathbb R$. So, similar to (i), we have $|Y_n|=|Xe^{\theta X}|\le|X|e^{2t|X|}$ for $n$ sufficiently large. So by the Dominated Convergence Theorem, $\varphi'(t)=\lim_nE\{Y_n\}=E\{\lim_nY_n\}=E\{Xe^{tX}\}$.
1.9.

Proof. If $g(x)$ is of the form $1_B(x)$, where $B$ is a Borel subset of $\mathbb R$, then the desired equality is just (1.9.3). By the linearity of the Lebesgue integral, the desired equality also holds for simple functions, i.e. $g$ of the form $g(x)=\sum_{i=1}^nc_i1_{B_i}(x)$, where each $B_i$ is a Borel subset of $\mathbb R$. Since any nonnegative, Borel-measurable function $g$ is the limit of an increasing sequence of simple functions, the desired equality can be proved by the Monotone Convergence Theorem.
1.10. (i)

Proof. If $\{A_i\}_{i=1}^\infty$ is a sequence of disjoint Borel subsets of $[0,1]$, then by the Monotone Convergence Theorem, $\tilde P(\cup_{i=1}^\infty A_i)$ equals
$$\int_{\cup_{i=1}^\infty A_i}Z\,dP=\lim_n\sum_{i=1}^n\int_{A_i}Z\,dP=\sum_{i=1}^\infty\tilde P(A_i).$$

(ii)

Proof. For any Borel set $A\subset[0,1]$, $\tilde P(A)=\int_AZ\,dP=2\int_{A\cap[\frac12,1]}dP=2P(A\cap[\tfrac12,1])$.

(iii)

Proof. Let $A=[0,\frac12)$. Then $P(A)=\frac12>0$, but by (ii), $\tilde P(A)=2P(A\cap[\frac12,1])=0$.
1.11.

Proof.
$$\tilde E[e^{uY}]=E[Ze^{uY}]=E[e^{-\theta X-\frac{\theta^2}{2}}e^{u(X+\theta)}]=e^{u\theta-\frac{\theta^2}{2}}E[e^{(u-\theta)X}]=e^{u\theta-\frac{\theta^2}{2}}e^{\frac{(u-\theta)^2}{2}}=e^{\frac{u^2}{2}}.$$
So $Y$ is a standard normal random variable under $\tilde P$.
1.12.

Proof. First, $\hat Z=e^{\theta Y-\frac{\theta^2}{2}}=e^{\theta(X+\theta)-\frac{\theta^2}{2}}=e^{\frac{\theta^2}{2}+\theta X}=Z^{-1}$. Second, for any $A\in\mathcal F$, $\hat P(A)=\int_A\hat Z\,d\tilde P=\int(1_A\hat Z)Z\,dP=\int1_A\,dP=P(A)$. So $\hat P=P$. In particular, $X$ is standard normal under $\hat P$, since it's standard normal under $P$.
1.13. (i)

Proof. $\frac{1}{2\epsilon}P(X\in B(x,\epsilon))=\frac{1}{2\epsilon}\int_{x-\epsilon}^{x+\epsilon}\frac{1}{\sqrt{2\pi}}e^{-\frac{u^2}{2}}du$ is approximately $\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}=\frac{1}{\sqrt{2\pi}}e^{-\frac{X(\omega)^2}{2}}$.

(ii)

Proof. Similar to (i).

(iii)

Proof. $\{X\in B(x,\epsilon)\}=\{X\in B(y-\theta,\epsilon)\}=\{X+\theta\in B(y,\epsilon)\}=\{Y\in B(y,\epsilon)\}$.

(iv)

Proof. By (i)-(iii), $\frac{\tilde P(A)}{P(A)}$ is approximately
$$\frac{e^{-\frac{Y(\omega)^2}{2}}}{e^{-\frac{X(\omega)^2}{2}}}=e^{\frac{X(\omega)^2-Y(\omega)^2}{2}}=e^{\frac{X(\omega)^2-(X(\omega)+\theta)^2}{2}}=e^{-\theta X(\omega)-\frac{\theta^2}{2}}.$$
1.14. (i)

Proof.
$$\tilde P(\Omega)=\int_\Omega\frac{\tilde\lambda}{\lambda}e^{-(\tilde\lambda-\lambda)X}dP=\int_0^\infty\frac{\tilde\lambda}{\lambda}e^{-(\tilde\lambda-\lambda)x}\lambda e^{-\lambda x}dx=\int_0^\infty\tilde\lambda e^{-\tilde\lambda x}dx=1.$$

(ii)

Proof.
$$\tilde P(X\le a)=\int_{\{X\le a\}}\frac{\tilde\lambda}{\lambda}e^{-(\tilde\lambda-\lambda)X}dP=\int_0^a\frac{\tilde\lambda}{\lambda}e^{-(\tilde\lambda-\lambda)x}\lambda e^{-\lambda x}dx=\int_0^a\tilde\lambda e^{-\tilde\lambda x}dx=1-e^{-\tilde\lambda a}.$$
1.15. (i)

Proof. Clearly $Z\ge0$. Furthermore, we have
$$E\{Z\}=E\Big\{\frac{h(g(X))g'(X)}{f(X)}\Big\}=\int_{-\infty}^\infty\frac{h(g(x))g'(x)}{f(x)}f(x)dx=\int_{-\infty}^\infty h(g(x))dg(x)=\int_{-\infty}^\infty h(u)du=1.$$

(ii)

Proof.
$$\tilde P(Y\le a)=\int_{\{g(X)\le a\}}\frac{h(g(X))g'(X)}{f(X)}dP=\int_{-\infty}^{g^{-1}(a)}\frac{h(g(x))g'(x)}{f(x)}f(x)dx=\int_{-\infty}^{g^{-1}(a)}h(g(x))dg(x)=\int_{-\infty}^ah(u)du.$$
So $Y$ has density $h$ under $\tilde P$.
$$\lim_nP\Big(x_0-\frac1n<X\le x_0\Big)=\lim_n\Big(P(X\le x_0)-P\Big(X\le x_0-\frac1n\Big)\Big)=1.$$
2.2. (i)

Proof. $\sigma(X)=\{\emptyset,\Omega,\{HT,TH\},\{TT,HH\}\}$.

(ii)

Proof. $\sigma(S_1)=\{\emptyset,\Omega,\{HH,HT\},\{TH,TT\}\}$.

(iii)

Proof. $\tilde P(\{HT,TH\}\cap\{HH,HT\})=\tilde P(\{HT\})=\frac14$, $\tilde P(\{HT,TH\})=\tilde P(\{HT\})+\tilde P(\{TH\})=\frac14+\frac14=\frac12$, and $\tilde P(\{HH,HT\})=\tilde P(\{HH\})+\tilde P(\{HT\})=\frac14+\frac14=\frac12$. So we have
$$\tilde P(\{HT,TH\}\cap\{HH,HT\})=\tilde P(\{HT,TH\})\tilde P(\{HH,HT\}).$$
2.4. (i)

Proof.
$$E\{e^{uX+vY}\}=E\{e^{uX+vXZ}\}=E\{e^{uX+vXZ}|Z=1\}P(Z=1)+E\{e^{uX+vXZ}|Z=-1\}P(Z=-1)$$
$$=\frac12E\{e^{(u+v)X}\}+\frac12E\{e^{(u-v)X}\}=\frac12\Big[e^{\frac{(u+v)^2}{2}}+e^{\frac{(u-v)^2}{2}}\Big]=e^{\frac{u^2+v^2}{2}}\cdot\frac{e^{uv}+e^{-uv}}{2}.$$

(ii)

Proof. Let $u=0$; then $E\{e^{vY}\}=e^{\frac{v^2}{2}}$, so $Y$ is standard normal.

(iii)

Proof. $E\{e^{uX}\}E\{e^{vY}\}=e^{\frac{u^2}{2}}e^{\frac{v^2}{2}}\neq e^{\frac{u^2+v^2}{2}}\cdot\frac{e^{uv}+e^{-uv}}{2}=E\{e^{uX+vY}\}$ whenever $uv\neq0$. So $X$ and $Y$ cannot be independent.
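The joint MGF computed above and its disagreement with the product of marginal MGFs are purely algebraic facts, so they can be verified exactly; the function names below are ours:

```python
import math

def joint_mgf(u, v):
    """E[e^{uX+vY}] for X standard normal, Y = XZ with Z = +/-1 (prob 1/2 each)."""
    return 0.5 * (math.exp((u + v) ** 2 / 2) + math.exp((u - v) ** 2 / 2))

def product_of_mgfs(u, v):
    """E[e^{uX}] E[e^{vY}] -- both marginals are standard normal."""
    return math.exp(u * u / 2) * math.exp(v * v / 2)

u, v = 0.8, 1.3
# The factored form e^{(u^2+v^2)/2} cosh(uv) agrees with the conditioning argument:
print(joint_mgf(u, v), math.exp((u * u + v * v) / 2) * math.cosh(u * v))
print(product_of_mgfs(u, v))  # strictly smaller, so X and Y are not independent
```

Since $\cosh(uv)>1$ for $uv\neq0$, the joint MGF strictly exceeds the product, confirming the dependence.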
2.5.

Proof. The density $f_X(x)$ of $X$ can be obtained by
$$f_X(x)=\int_{-\infty}^\infty f_{X,Y}(x,y)dy=\int_{\{y\ge-|x|\}}\frac{2|x|+y}{\sqrt{2\pi}}e^{-\frac{(2|x|+y)^2}{2}}dy=\int_{\{\xi\ge|x|\}}\frac{\xi}{\sqrt{2\pi}}e^{-\frac{\xi^2}{2}}d\xi=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}},$$
where we substituted $\xi=2|x|+y$. The density $f_Y(y)$ of $Y$ can be obtained by
$$f_Y(y)=\int_{-\infty}^\infty f_{X,Y}(x,y)dx=\int_{-\infty}^\infty1_{\{y\ge-|x|\}}\frac{2|x|+y}{\sqrt{2\pi}}e^{-\frac{(2|x|+y)^2}{2}}dx.$$
Splitting the integral at $x=0$ and substituting $\xi=\pm2x+y$ in the two halves gives
$$f_Y(y)=2\int_{|y|}^\infty\frac{\xi}{\sqrt{2\pi}}e^{-\frac{\xi^2}{2}}d\Big(\frac{\xi}{2}\Big)=\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}.$$
So both $X$ and $Y$ are standard normal random variables. Since $f_{X,Y}(x,y)\neq f_X(x)f_Y(y)$, $X$ and $Y$ are not independent.
However, $X$ and $Y$ are uncorrelated. Substituting $\xi=2|x|+y$ in the inner integral,
$$E\{XY\}=\int_{-\infty}^\infty x\int_{|x|}^\infty(\xi-2|x|)\,\xi\,\frac{e^{-\frac{\xi^2}{2}}}{\sqrt{2\pi}}d\xi\,dx=\int_{-\infty}^\infty x\,F(|x|)\,dx,$$
where, integrating by parts, $F(a)=\int_a^\infty\frac{e^{-\xi^2/2}}{\sqrt{2\pi}}d\xi-\frac{a}{\sqrt{2\pi}}e^{-\frac{a^2}{2}}$. Writing the last integral as $\int_0^\infty xF(x)dx+\int_{-\infty}^0xF(-x)dx$ and substituting $u=-x$ in the second term, the two terms cancel. So $E\{XY\}=0$.
2.6. (i)

Proof. $\sigma(X)=\{\emptyset,\Omega,\{a,b\},\{c,d\}\}$.

(ii)

Proof.
$$E\{Y|X\}=\sum_{\zeta}\frac{E\{Y1_{\{X=\zeta\}}\}}{P(X=\zeta)}1_{\{X=\zeta\}},$$
where $\zeta$ ranges over the distinct values of $X$.

(iii)

Proof. Similarly,
$$E\{Z|X\}=\sum_{\zeta}\frac{E\{Z1_{\{X=\zeta\}}\}}{P(X=\zeta)}1_{\{X=\zeta\}}.$$

(iv)

Proof. $E\{Z|X\}-E\{Y|X\}=E\{Z-Y|X\}=E\{X|X\}=X$.

2.7.

Proof. $E\{(Y-X)^2\}=\mathrm{Var}(\mathrm{Err})+E\{\mu^2\}\ge\mathrm{Var}(\mathrm{Err})$.
2.8.

Proof. It suffices to prove the more general case. For any $\sigma(X)$-measurable random variable $\xi$, $E\{\xi Y_2\}=E\{\xi(Y-E\{Y|X\})\}=E\{\xi Y\}-E\{\xi E\{Y|X\}\}=E\{\xi Y\}-E\{\xi Y\}=0$.

2.9. (i)

Proof. Consider the dice-toss space similar to the coin-toss space. A typical element $\omega$ in this space is an infinite sequence $\omega_1\omega_2\omega_3\cdots$, with $\omega_i\in\{1,2,\dots,6\}$ ($i\in\mathbb N$). We define $X(\omega)=\omega_1$ and $f(x)=1_{\{\text{odd integers}\}}(x)$. Then it's easy to see
$$\sigma(X)=\sigma\big(\{\omega:\omega_1=1\},\dots,\{\omega:\omega_1=6\}\big)$$
and $\sigma(f(X))$ equals
$$\{\emptyset,\Omega,\{\omega:\omega_1=1\}\cup\{\omega:\omega_1=3\}\cup\{\omega:\omega_1=5\},\{\omega:\omega_1=2\}\cup\{\omega:\omega_1=4\}\cup\{\omega:\omega_1=6\}\}.$$
So $\{\emptyset,\Omega\}\subsetneq\sigma(f(X))\subsetneq\sigma(X)$, and each of these containments is strict.

(ii)

Proof. No. $\sigma(f(X))\subset\sigma(X)$ is always true.
2.10.

Proof.
$$\int_Ag(X)dP=E\{g(X)1_B(X)\}=\int_{-\infty}^\infty g(x)1_B(x)f_X(x)dx=\int_{-\infty}^\infty\int_{-\infty}^\infty\frac{yf_{X,Y}(x,y)}{f_X(x)}dy\,1_B(x)f_X(x)dx$$
$$=\int_{-\infty}^\infty\int_{-\infty}^\infty y1_B(x)f_{X,Y}(x,y)dx\,dy=E\{Y1_B(X)\}=E\{Y1_A\}=\int_AY\,dP.$$
2.11. (i)

Proof. We can find a sequence $\{W_n\}_{n\ge1}$ of $\sigma(X)$-measurable simple functions such that $W_n\to W$. Each $W_n$ can be written in the form $\sum_{i=1}^{K_n}a_i^n1_{A_i^n}$, where the $A_i^n$'s belong to $\sigma(X)$ and are disjoint. So each $A_i^n$ can be written as $\{X\in B_i^n\}$ for some Borel subset $B_i^n$ of $\mathbb R$, i.e. $W_n=\sum_{i=1}^{K_n}a_i^n1_{\{X\in B_i^n\}}=\sum_{i=1}^{K_n}a_i^n1_{B_i^n}(X)=g_n(X)$, where $g_n(x)=\sum_{i=1}^{K_n}a_i^n1_{B_i^n}(x)$. Define $g=\limsup_ng_n$; then $g$ is a Borel function. Taking upper limits on both sides of $W_n=g_n(X)$, we get $W=g(X)$.

(ii)

Proof. Note $E\{Y|X\}$ is $\sigma(X)$-measurable. By (i), we can find a Borel function $g$ such that $E\{Y|X\}=g(X)$.
3. Brownian Motion
3.1.
Proof. We have $\mathcal F_t\subset\mathcal F_{u_1}$ and $W_{u_2}-W_{u_1}$ is independent of $\mathcal F_{u_1}$. So in particular, $W_{u_2}-W_{u_1}$ is independent of $\mathcal F_t$.

3.2.

Proof. $E[W_t^2-W_s^2|\mathcal F_s]=E[(W_t-W_s)^2+2W_tW_s-2W_s^2|\mathcal F_s]=t-s+2W_sE[W_t-W_s|\mathcal F_s]=t-s$.
3.3.

Proof. For the centered moment generating function $\varphi(u)=E[e^{u(X-\mu)}]=e^{\frac{\sigma^2u^2}{2}}$, we have $\varphi'''(u)=(3\sigma^4u+\sigma^6u^3)e^{\frac{\sigma^2u^2}{2}}$, so $\varphi^{(4)}(0)=3\sigma^4$, i.e. $E[(X-\mu)^4]=3\sigma^4$.
3.4. (i)

Proof. Assume there exists $A\in\mathcal F$ such that $P(A)>0$ and for every $\omega\in A$, $\lim_n\sum_{j=0}^{n-1}|W_{t_{j+1}}-W_{t_j}|(\omega)<\infty$. Then for every $\omega\in A$,
$$\sum_{j=0}^{n-1}(W_{t_{j+1}}-W_{t_j})^2(\omega)\le\max_{0\le k\le n-1}|W_{t_{k+1}}-W_{t_k}|(\omega)\sum_{j=0}^{n-1}|W_{t_{j+1}}-W_{t_j}|(\omega)\to0,$$
since $\lim_n\max_{0\le k\le n-1}|W_{t_{k+1}}-W_{t_k}|(\omega)=0$ by continuity of Brownian paths. This is a contradiction with $\lim_n\sum_{j=0}^{n-1}(W_{t_{j+1}}-W_{t_j})^2=T$ a.s.

(ii)

Proof. Note $\sum_{j=0}^{n-1}|W_{t_{j+1}}-W_{t_j}|^3\le\max_{0\le k\le n-1}|W_{t_{k+1}}-W_{t_k}|\sum_{j=0}^{n-1}(W_{t_{j+1}}-W_{t_j})^2\to0$ as $n\to\infty$, by an argument similar to (i).
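These three facts (infinite first variation, quadratic variation $T$, vanishing cubic variation) are easy to see on a simulated path; this is our own illustration with a seeded generator, not part of the original solution:

```python
import math, random

def variations(T=1.0, n=100_000, seed=7):
    """First, quadratic, and cubic variation sums of a simulated BM path on [0, T]."""
    rng = random.Random(seed)
    dt = T / n
    first = quad = cubic = 0.0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        first += abs(dw)
        quad += dw * dw
        cubic += dw ** 3
    return first, quad, cubic

v1, v2, v3 = variations()
print(v1, v2, v3)  # first variation grows with n; quadratic ~ T; cubic ~ 0
```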
3.5.

Proof.
$$e^{-rT}E[(S_T-K)^+]=e^{-rT}\int_{-\infty}^\infty\big(S_0e^{(r-\frac12\sigma^2)T+\sigma x}-K\big)^+\frac{e^{-\frac{x^2}{2T}}}{\sqrt{2\pi T}}dx=e^{-rT}\int_{-d_-}^\infty\big(S_0e^{(r-\frac12\sigma^2)T+\sigma\sqrt Ty}-K\big)\frac{e^{-\frac{y^2}{2}}}{\sqrt{2\pi}}dy$$
$$=S_0\int_{-d_-}^\infty\frac{e^{-\frac{(y-\sigma\sqrt T)^2}{2}}}{\sqrt{2\pi}}dy-Ke^{-rT}N(d_-)=S_0N(d_+)-Ke^{-rT}N(d_-),$$
where we substituted $x=\sqrt Ty$ and noted that the integrand vanishes unless $y>\frac{1}{\sigma\sqrt T}\big(\ln\frac{K}{S_0}-(r-\frac12\sigma^2)T\big)=-d_-$, with $d_\pm=\frac{1}{\sigma\sqrt T}\big(\ln\frac{S_0}{K}+(r\pm\frac12\sigma^2)T\big)$.
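The closed form just derived is easy to evaluate with the standard library; the function names are ours, and the put price is obtained through put-call parity rather than a separate derivation:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def bsm_call(S0, K, r, sigma, T):
    """Black-Scholes-Merton call price S0 N(d+) - K e^{-rT} N(d-)."""
    d_plus = (log(S0 / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d_minus = d_plus - sigma * sqrt(T)
    return S0 * N(d_plus) - K * exp(-r * T) * N(d_minus)

def bsm_put(S0, K, r, sigma, T):
    """Put price via put-call parity: p = c - S0 + K e^{-rT}."""
    return bsm_call(S0, K, r, sigma, T) - S0 + K * exp(-r * T)

print(bsm_call(100, 100, 0.05, 0.2, 1.0))  # at-the-money one-year call
```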
3.6. (i)

Proof. Since $X_t-X_s\sim N(\mu(t-s),t-s)$ is independent of $\mathcal F_s$,
$$E[f(X_t)|\mathcal F_s]=\int_{-\infty}^\infty f(y)\frac{e^{-\frac{(y-X_s-\mu(t-s))^2}{2(t-s)}}}{\sqrt{2\pi(t-s)}}dy=g(X_s).$$

(ii)

Proof. $E[f(S_t)|\mathcal F_s]=E[f(S_0e^{\sigma X_t})|\mathcal F_s]$ with $\mu=\frac{\nu}{\sigma}$ in (i). So by (i),
$$E[f(S_t)|\mathcal F_s]=\int_{-\infty}^\infty f(S_0e^{\sigma y})\frac{e^{-\frac{(y-X_s-\mu(t-s))^2}{2(t-s)}}}{\sqrt{2\pi(t-s)}}dy.$$
Substituting $z=S_0e^{\sigma y}$ ($dz=\sigma z\,dy$), this equals
$$\int_0^\infty f(z)\frac{e^{-\frac{(\ln\frac{z}{S_s}-\nu(t-s))^2}{2\sigma^2(t-s)}}}{\sigma z\sqrt{2\pi(t-s)}}dz=\int_0^\infty f(z)p(t-s,S_s,z)dz=g(S_s).$$

3.7. (i)
Proof. $E\big[\frac{Z_t}{Z_s}\big|\mathcal F_s\big]=E[\exp\{\sigma(W_t-W_s)+\sigma\mu(t-s)-(\sigma\mu+\frac{\sigma^2}{2})(t-s)\}]=1$.

(ii)

Proof. By the optional stopping theorem, $E[Z_{t\wedge\tau_m}]=E[Z_0]=1$, that is, $E[\exp\{\sigma X_{t\wedge\tau_m}-(\sigma\mu+\frac{\sigma^2}{2})(t\wedge\tau_m)\}]=1$.

(iii)

Proof. If $\mu\ge0$ and $\sigma>0$, $Z_{t\wedge\tau_m}\le e^{\sigma m}$, and on $\{\tau_m=\infty\}$, $Z_{t\wedge\tau_m}\le e^{\sigma m}e^{-(\sigma\mu+\frac{\sigma^2}{2})t}\to0$ as $t\to\infty$. By the bounded convergence theorem,
$$E[1_{\{\tau_m<\infty\}}Z_{\tau_m}]=E[\lim_{t\to\infty}Z_{t\wedge\tau_m}]=\lim_{t\to\infty}E[Z_{t\wedge\tau_m}]=1,$$
i.e. $E[e^{\sigma m-(\sigma\mu+\frac{\sigma^2}{2})\tau_m}1_{\{\tau_m<\infty\}}]=1$. Let $\alpha=\sigma\mu+\frac{\sigma^2}{2}$, i.e. $\sigma=-\mu+\sqrt{2\alpha+\mu^2}$; then
$$E[e^{-\alpha\tau_m}]=e^{-\sigma m}=e^{m\mu-m\sqrt{2\alpha+\mu^2}}.$$

(iv)

Proof. We note for $\alpha>0$, $E[\tau_me^{-\alpha\tau_m}]<\infty$ since $xe^{-\alpha x}$ is bounded on $[0,\infty)$. So by an argument similar to Exercise 1.8, $E[e^{-\alpha\tau_m}]$ is differentiable in $\alpha$ and
$$E[\tau_me^{-\alpha\tau_m}]=-\frac{\partial}{\partial\alpha}E[e^{-\alpha\tau_m}]=e^{m\mu-m\sqrt{2\alpha+\mu^2}}\cdot\frac{m}{\sqrt{2\alpha+\mu^2}}.$$
Letting $\alpha\downarrow0$, by the monotone convergence theorem, $E[\tau_m]=\frac{m}{\mu}$ for $\mu>0$.

(v)

Proof. If $\mu<0$, take $\sigma>-2\mu$, so that $\sigma\mu+\frac{\sigma^2}{2}>0$ and the argument of (iii) still applies: $E[e^{\sigma m-(\sigma\mu+\frac{\sigma^2}{2})\tau_m}1_{\{\tau_m<\infty\}}]=1$. With $\alpha=\sigma\mu+\frac{\sigma^2}{2}>0$ and $\sigma=-\mu+\sqrt{2\alpha+\mu^2}$,
$$E[e^{-\alpha\tau_m}]=E[e^{-\alpha\tau_m}1_{\{\tau_m<\infty\}}]=e^{-\sigma m}=e^{m\mu-m\sqrt{2\alpha+\mu^2}}.$$
3.8. (i)

Proof.
$$\varphi_n(u)=E[e^{\frac{u\sigma}{\sqrt n}M_{nt,n}}]=\big(E[e^{\frac{u\sigma}{\sqrt n}X_{1,n}}]\big)^{nt}=\Big(e^{\frac{u\sigma}{\sqrt n}}\tilde p_n+e^{-\frac{u\sigma}{\sqrt n}}\tilde q_n\Big)^{nt}=\Bigg(e^{\frac{u\sigma}{\sqrt n}}\,\frac{\frac rn+1-e^{-\frac{\sigma}{\sqrt n}}}{e^{\frac{\sigma}{\sqrt n}}-e^{-\frac{\sigma}{\sqrt n}}}+e^{-\frac{u\sigma}{\sqrt n}}\,\frac{e^{\frac{\sigma}{\sqrt n}}-\frac rn-1}{e^{\frac{\sigma}{\sqrt n}}-e^{-\frac{\sigma}{\sqrt n}}}\Bigg)^{nt}.$$

(ii)

Proof. Setting $x=\frac{1}{\sqrt n}$, we have
$$\ln\varphi_{\frac1{x^2}}(u)=\frac{t}{x^2}\ln\frac{(rx^2+1)(e^{u\sigma x}-e^{-u\sigma x})+e^{(\sigma-u\sigma)x}-e^{-(\sigma-u\sigma)x}}{e^{\sigma x}-e^{-\sigma x}}=\frac{t}{x^2}\ln\frac{(rx^2+1)\sinh u\sigma x+\sinh(\sigma-u\sigma)x}{\sinh\sigma x}$$
$$=\frac{t}{x^2}\ln\frac{(rx^2+1)\sinh u\sigma x+\sinh\sigma x\cosh u\sigma x-\cosh\sigma x\sinh u\sigma x}{\sinh\sigma x}=\frac{t}{x^2}\ln\Big(\cosh u\sigma x+\frac{(rx^2+1-\cosh\sigma x)\sinh u\sigma x}{\sinh\sigma x}\Big).$$

(iii)

Proof.
$$\cosh u\sigma x+\frac{(rx^2+1-\cosh\sigma x)\sinh u\sigma x}{\sinh\sigma x}=1+\frac{u^2\sigma^2x^2}{2}+O(x^4)+\frac{\big(rx^2-\frac{\sigma^2x^2}{2}+O(x^4)\big)(u\sigma x+O(x^3))}{\sigma x+O(x^3)}$$
$$=1+\frac{u^2\sigma^2x^2}{2}+\frac{(r-\frac{\sigma^2}{2})u\sigma x^3(1+O(x^2))}{\sigma x(1+O(x^2))}+O(x^4)=1+\frac{u^2\sigma^2x^2}{2}+rux^2-\frac12u\sigma^2x^2+O(x^4).$$

(iv)

Proof.
$$\ln\varphi_{\frac1{x^2}}(u)=\frac{t}{x^2}\ln\Big(1+\frac{u^2\sigma^2x^2}{2}+rux^2-\frac{u\sigma^2x^2}{2}+O(x^4)\Big)=\frac{t}{x^2}\Big(\frac{u^2\sigma^2x^2}{2}+rux^2-\frac{u\sigma^2x^2}{2}+O(x^4)\Big).$$
So $\lim_{x\downarrow0}\ln\varphi_{\frac1{x^2}}(u)=t\big(\frac{u^2\sigma^2}{2}+ru-\frac{u\sigma^2}{2}\big)$. By the one-to-one correspondence between distribution and moment generating function, $(\frac{1}{\sqrt n}M_{nt,n})_n$ converges to a Gaussian random variable with mean $\frac t\sigma(r-\frac{\sigma^2}{2})$ and variance $t$. Hence $(\frac{\sigma}{\sqrt n}M_{nt,n})_n$ converges to a Gaussian random variable with mean $t(r-\frac{\sigma^2}{2})$ and variance $\sigma^2t$.
4. Stochastic Calculus
4.1.
Proof. Fix $t$ and for any $s<t$, we assume $s\in[t_m,t_{m+1})$ for some $m$.

Case 1. $m=k$. Then $I(t)-I(s)=\Delta_{t_k}(M_t-M_{t_k})-\Delta_{t_k}(M_s-M_{t_k})=\Delta_{t_k}(M_t-M_s)$. So $E[I(t)-I(s)|\mathcal F_s]=\Delta_{t_k}E[M_t-M_s|\mathcal F_s]=0$.

Case 2. $m<k$. Then $t_m\le s<t_{m+1}\le t_k\le t<t_{k+1}$. So
$$I(t)-I(s)=\sum_{j=m+1}^{k-1}\Delta_{t_j}(M_{t_{j+1}}-M_{t_j})+\Delta_{t_k}(M_t-M_{t_k})+\Delta_{t_m}(M_{t_{m+1}}-M_s).$$
Hence
$$E[I(t)-I(s)|\mathcal F_s]=\sum_{j=m+1}^{k-1}E\big[\Delta_{t_j}E[M_{t_{j+1}}-M_{t_j}|\mathcal F_{t_j}]\big|\mathcal F_s\big]+E\big[\Delta_{t_k}E[M_t-M_{t_k}|\mathcal F_{t_k}]\big|\mathcal F_s\big]+\Delta_{t_m}E[M_{t_{m+1}}-M_s|\mathcal F_s]=0.$$
So $I(t)$ is a martingale.

4.2.

Proof. Similarly,
$$E\Big[I^2(t)-\int_0^t\Delta_u^2du-\Big(I^2(s)-\int_0^s\Delta_u^2du\Big)\Big|\mathcal F_s\Big]=E\Big[(I(t)-I(s))^2-\int_s^t\Delta_u^2du\Big|\mathcal F_s\Big]=0,$$
so $I^2(t)-\int_0^t\Delta_u^2du$ is a martingale.
4.3.

Proof. $I(t)-I(s)=\Delta_0(W_{t_1}-W_0)+\Delta_{t_1}(W_{t_2}-W_{t_1})-\Delta_0(W_{t_1}-W_0)=\Delta_{t_1}(W_{t_2}-W_{t_1})=W_s(W_t-W_s)$.

(i) $I(t)-I(s)$ is not independent of $\mathcal F_s$, since $W_s\in\mathcal F_s$.

(ii) $E[(I(t)-I(s))^4]=E[W_s^4]E[(W_t-W_s)^4]=3s^2\cdot3(t-s)^2=9s^2(t-s)^2$, while $3E[(I(t)-I(s))^2]^2=3(E[W_s^2]E[(W_t-W_s)^2])^2=3s^2(t-s)^2$. Since $E[(I(t)-I(s))^4]\neq3E[(I(t)-I(s))^2]^2$, $I(t)-I(s)$ is not normally distributed.

(iii) $E[I(t)-I(s)|\mathcal F_s]=W_sE[W_t-W_s|\mathcal F_s]=0$.

(iv)
$$E\Big[I^2(t)-\int_0^t\Delta_u^2du-\Big(I^2(s)-\int_0^s\Delta_u^2du\Big)\Big|\mathcal F_s\Big]=E\Big[(I(t)-I(s))^2+2I(s)(I(t)-I(s))-\int_s^tW_s^2du\Big|\mathcal F_s\Big]$$
$$=W_s^2E[(W_t-W_s)^2]+2I(s)W_sE[W_t-W_s|\mathcal F_s]-W_s^2(t-s)=W_s^2(t-s)-W_s^2(t-s)=0.$$
4.4.

Proof. (Cf. Øksendal [3], Exercise 3.9.) We first note that
$$\sum_jB_{\frac{t_j+t_{j+1}}{2}}(B_{t_{j+1}}-B_{t_j})=\sum_j\Big[B_{\frac{t_j+t_{j+1}}{2}}\big(B_{t_{j+1}}-B_{\frac{t_j+t_{j+1}}{2}}\big)+B_{t_j}\big(B_{\frac{t_j+t_{j+1}}{2}}-B_{t_j}\big)\Big]+\sum_j\big(B_{\frac{t_j+t_{j+1}}{2}}-B_{t_j}\big)^2.$$
The first term converges in $L^2(P)$ to $\int_0^TB_tdB_t$. For the second term, we note, using independence of increments and $E[(X^2-v)^2]=2v^2$ for $X\sim N(0,v)$,
$$E\Big[\Big(\sum_j\Big[\big(B_{\frac{t_j+t_{j+1}}{2}}-B_{t_j}\big)^2-\frac{t_{j+1}-t_j}{2}\Big]\Big)^2\Big]=\sum_j2\Big(\frac{t_{j+1}-t_j}{2}\Big)^2\le\frac T2\max_{1\le j\le n}|t_{j+1}-t_j|\to0.$$
So
$$\sum_jB_{\frac{t_j+t_{j+1}}{2}}(B_{t_{j+1}}-B_{t_j})\to\int_0^TB_tdB_t+\frac T2=\frac12B_T^2\quad\text{in }L^2(P).$$
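The midpoint-evaluation sums and their limit $\frac12B_T^2$ (rather than the Itô limit $\frac12B_T^2-\frac T2$) can be seen directly on a simulated path; the setup below is our own illustrative sketch with a seeded generator:

```python
import math, random

def midpoint_sum(n_half=200_000, T=1.0, seed=11):
    """Midpoint (Stratonovich-style) sum sum_j B_mid (B_{j+1} - B_j) on [0, T]."""
    rng = random.Random(seed)
    dt = T / (2 * n_half)        # simulate B on a grid of 2 * n_half steps
    b = total = 0.0
    for _ in range(n_half):
        b_left = b
        b_mid = b_left + rng.gauss(0.0, math.sqrt(dt))
        b_right = b_mid + rng.gauss(0.0, math.sqrt(dt))
        total += b_mid * (b_right - b_left)
        b = b_right
    return total, 0.5 * b * b    # compare with B_T^2 / 2

s, half_bt2 = midpoint_sum()
print(s, half_bt2)
```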
4.5. (i)

Proof.
$$d\ln S_t=\frac{dS_t}{S_t}-\frac{1}{2S_t^2}d\langle S\rangle_t=\frac{\alpha_tS_tdt+\sigma_tS_tdW_t}{S_t}-\frac{\sigma_t^2S_t^2dt}{2S_t^2}=\sigma_tdW_t+\Big(\alpha_t-\frac12\sigma_t^2\Big)dt.$$

(ii)

Proof.
$$\ln S_t=\ln S_0+\int_0^t\sigma_sdW_s+\int_0^t\Big(\alpha_s-\frac12\sigma_s^2\Big)ds.$$
So $S_t=S_0\exp\{\int_0^t\sigma_sdW_s+\int_0^t(\alpha_s-\frac12\sigma_s^2)ds\}$.

4.6.

Proof. Without loss of generality, we assume $p\neq1$. Since $(x^p)'=px^{p-1}$ and $(x^p)''=p(p-1)x^{p-2}$, we have
$$d(S_t^p)=pS_t^{p-1}dS_t+\frac12p(p-1)S_t^{p-2}d\langle S\rangle_t=pS_t^{p-1}(\alpha S_tdt+\sigma S_tdW_t)+\frac12p(p-1)S_t^{p-2}\sigma^2S_t^2dt=S_t^p\,p\Big[\sigma dW_t+\Big(\alpha+\frac{p-1}{2}\sigma^2\Big)dt\Big].$$
4.7. (i)

Proof. $dW_t^4=4W_t^3dW_t+6W_t^2dt$. So $W_T^4=4\int_0^TW_t^3dW_t+6\int_0^TW_t^2dt$.

(ii)

Proof. $E[W_T^4]=6\int_0^Ttdt=3T^2$.

(iii)

Proof. $dW_t^6=6W_t^5dW_t+15W_t^4dt$. So $W_T^6=6\int_0^TW_t^5dW_t+15\int_0^TW_t^4dt$, and $E[W_T^6]=15\int_0^T3t^2dt=15T^3$.
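The moment formulas $E[W_T^4]=3T^2$ and $E[W_T^6]=15T^3$ can be cross-checked by Monte Carlo (our own sanity check, seeded for reproducibility):

```python
import math, random

def moments_wt(T=1.0, n_paths=200_000, seed=3):
    """Monte Carlo estimates of E[W_T^4] and E[W_T^6] for Brownian motion."""
    rng = random.Random(seed)
    m4 = m6 = 0.0
    for _ in range(n_paths):
        w = rng.gauss(0.0, math.sqrt(T))
        m4 += w ** 4
        m6 += w ** 6
    return m4 / n_paths, m6 / n_paths

m4, m6 = moments_wt()
print(m4, m6)  # should be close to 3*T^2 = 3 and 15*T^3 = 15
```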
4.8.
Proof. $d(e^{\beta t}R_t)=\beta e^{\beta t}R_tdt+e^{\beta t}dR_t=e^{\beta t}(\alpha\,dt+\sigma\,dW_t)$. Integrating,
$$e^{\beta t}R_t=R_0+\frac{\alpha}{\beta}(e^{\beta t}-1)+\sigma\int_0^te^{\beta s}dW_s,$$
and $R_t=R_0e^{-\beta t}+\frac{\alpha}{\beta}(1-e^{-\beta t})+\sigma\int_0^te^{-\beta(t-s)}dW_s$.
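The closed-form solution gives both an exact simulation scheme (each step is Gaussian with a variance computable from the stochastic integral) and the mean function $E[R_t]=R_0e^{-\beta t}+\frac{\alpha}{\beta}(1-e^{-\beta t})$. The sketch below is our own, with illustrative parameter values:

```python
import math, random

def simulate_rate(r0, alpha, beta, sigma, T=5.0, n=500, seed=1):
    """Simulate dR = (alpha - beta R) dt + sigma dW with the exact one-step update."""
    rng = random.Random(seed)
    dt = T / n
    decay = math.exp(-beta * dt)
    # variance of sigma * int e^{-beta(dt-s)} dW_s over one step
    step_sd = sigma * math.sqrt((1 - decay ** 2) / (2 * beta))
    r, path = r0, [r0]
    for _ in range(n):
        r = r * decay + (alpha / beta) * (1 - decay) + rng.gauss(0.0, step_sd)
        path.append(r)
    return path

def mean_rate(r0, alpha, beta, t):
    """E[R_t] = R_0 e^{-beta t} + (alpha/beta)(1 - e^{-beta t})."""
    return r0 * math.exp(-beta * t) + (alpha / beta) * (1 - math.exp(-beta * t))

print(simulate_rate(0.01, 0.05, 1.0, 0.02)[-1], mean_rate(0.01, 0.05, 1.0, 5.0))
```

As $t\to\infty$ the mean approaches the long-run level $\alpha/\beta$, which is the mean-reversion behavior the closed form makes explicit.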
4.9. (i)

Proof. Since $\sigma\sqrt{T-t}\,d_+=\ln\frac xK+(r+\frac{\sigma^2}{2})(T-t)$ and $d_-=d_+-\sigma\sqrt{T-t}$,
$$Ke^{-r(T-t)}N'(d_-)=Ke^{-r(T-t)}\frac{e^{-\frac{(d_+-\sigma\sqrt{T-t})^2}{2}}}{\sqrt{2\pi}}=Ke^{-r(T-t)}e^{\sigma\sqrt{T-t}\,d_+-\frac{\sigma^2(T-t)}{2}}N'(d_+)=Ke^{-r(T-t)}\cdot\frac xKe^{r(T-t)}N'(d_+)=xN'(d_+).$$

(ii)

Proof. Since $\frac{\partial d_+}{\partial x}=\frac{\partial d_-}{\partial x}$, (i) gives
$$c_x=N(d_+)+xN'(d_+)\frac{\partial d_+}{\partial x}-Ke^{-r(T-t)}N'(d_-)\frac{\partial d_-}{\partial x}=N(d_+).$$

(iii)

Proof. Since $d_+-d_-=\sigma\sqrt{T-t}$, we have $\frac{\partial d_+}{\partial t}-\frac{\partial d_-}{\partial t}=-\frac{\sigma}{2\sqrt{T-t}}$, and by (i),
$$c_t=xN'(d_+)\frac{\partial d_+}{\partial t}-rKe^{-r(T-t)}N(d_-)-Ke^{-r(T-t)}N'(d_-)\frac{\partial d_-}{\partial t}=-rKe^{-r(T-t)}N(d_-)-\frac{\sigma x}{2\sqrt{T-t}}N'(d_+).$$

(iv)

Proof. Using (ii), (iii) and $c_{xx}=N'(d_+)\frac{\partial d_+}{\partial x}=\frac{N'(d_+)}{\sigma x\sqrt{T-t}}$,
$$c_t+rxc_x+\frac12\sigma^2x^2c_{xx}=-rKe^{-r(T-t)}N(d_-)-\frac{\sigma x}{2\sqrt{T-t}}N'(d_+)+rxN(d_+)+\frac{\sigma x}{2\sqrt{T-t}}N'(d_+)=r\big(xN(d_+)-Ke^{-r(T-t)}N(d_-)\big)=rc.$$

(v)

Proof.
$$\lim_{t\uparrow T}d_\pm(T-t,x)=\lim_{\tau\downarrow0}\frac{1}{\sigma\sqrt\tau}\Big(\ln\frac xK+\big(r\pm\tfrac12\sigma^2\big)\tau\Big)=\begin{cases}\infty,&\text{if }x>K\\-\infty,&\text{if }x<K.\end{cases}$$
Therefore
$$\lim_{t\uparrow T}c(t,x)=\begin{cases}x-K,&\text{if }x>K\\0,&\text{if }x\le K\end{cases}=(x-K)^+.$$

(vi)

Proof. It is easy to see $\lim_{x\downarrow0}d_\pm=-\infty$. So for $t\in[0,T)$, $\lim_{x\downarrow0}c(t,x)=\lim_{x\downarrow0}xN(d_+(T-t,x))-Ke^{-r(T-t)}N(\lim_{x\downarrow0}d_-(T-t,x))=0$.

(vii)

Proof. For $t\in[0,T)$, it is clear $\lim_{x\to\infty}d_\pm=\infty$. Note
$$\lim_{x\to\infty}x(N(d_+)-1)=\lim_{x\to\infty}\frac{N(d_+)-1}{x^{-1}}=\lim_{x\to\infty}\frac{N'(d_+)\frac{1}{\sigma x\sqrt{T-t}}}{-x^{-2}}=-\lim_{x\to\infty}\frac{xN'(d_+)}{\sigma\sqrt{T-t}}=0,$$
since $N'(d_+)=\frac{1}{\sqrt{2\pi}}e^{-\frac{d_+^2}{2}}$ with $d_+\sim\frac{\ln x}{\sigma\sqrt{T-t}}$ decays faster than any power of $x^{-1}$. Therefore
$$\lim_{x\to\infty}\big[c(t,x)-(x-e^{-r(T-t)}K)\big]=\lim_{x\to\infty}\big[x(N(d_+)-1)+Ke^{-r(T-t)}(1-N(d_-))\big]=0.$$

4.10. (i)
Proof. We show (4.10.16) + (4.10.9) $\Leftrightarrow$ (4.10.16) + (4.10.15), i.e. assuming $X$ has the representation $X_t=\Delta_tS_t+\Gamma_tM_t$, the continuous-time self-financing condition has two equivalent formulations, (4.10.9) or (4.10.15). Indeed, $dX_t=\Delta_tdS_t+\Gamma_tdM_t+(S_td\Delta_t+dS_td\Delta_t+M_td\Gamma_t+dM_td\Gamma_t)$. So $dX_t=\Delta_tdS_t+\Gamma_tdM_t\Leftrightarrow S_td\Delta_t+dS_td\Delta_t+M_td\Gamma_t+dM_td\Gamma_t=0$, i.e. (4.10.9) $\Leftrightarrow$ (4.10.15).

(ii)

Proof. First, we clarify the problem by stating explicitly the given conditions and the result to be proved. We assume we have a portfolio $X_t=\Delta_tS_t+\Gamma_tM_t$. We let $c(t,S_t)$ denote the price of the call option at time $t$ and set $\Delta_t=c_x(t,S_t)$. Finally, we assume the portfolio is self-financing. The problem is to show
$$rN_tdt=\Big(c_t(t,S_t)+\frac12\sigma^2S_t^2c_{xx}(t,S_t)\Big)dt,$$
where $N_t=c(t,S_t)-\Delta_tS_t$.

Indeed, by the self-financing property and $\Delta_t=c_x(t,S_t)$, we have $c(t,S_t)=X_t$ (by the calculations in Subsections 4.5.1-4.5.3). This uniquely determines $\Gamma_t$ as
$$\Gamma_t=\frac{X_t-\Delta_tS_t}{M_t}=\frac{c(t,S_t)-c_x(t,S_t)S_t}{M_t}=\frac{N_t}{M_t}.$$
Moreover,
$$dN_t=c_t(t,S_t)dt+c_x(t,S_t)dS_t+\frac12c_{xx}(t,S_t)d\langle S\rangle_t-d(\Delta_tS_t)=\Big(c_t(t,S_t)+\frac12c_{xx}(t,S_t)\sigma^2S_t^2\Big)dt+\big[c_x(t,S_t)dS_t-d(X_t-\Gamma_tM_t)\big]$$
$$=\Big(c_t(t,S_t)+\frac12c_{xx}(t,S_t)\sigma^2S_t^2\Big)dt+M_td\Gamma_t+dM_td\Gamma_t+\big[c_x(t,S_t)dS_t+\Gamma_tdM_t-dX_t\big].$$
The self-financing condition (4.10.9) makes the bracket vanish; comparing with $dN_t=d(\Gamma_tM_t)=rN_tdt+M_td\Gamma_t+dM_td\Gamma_t$ then gives $rN_tdt=\big(c_t(t,S_t)+\frac12\sigma^2S_t^2c_{xx}(t,S_t)\big)dt$.
4.11.

Proof. First, we note $c(t,x)$ solves the Black-Scholes-Merton PDE with volatility $\sigma_1$:
$$\Big(\frac{\partial}{\partial t}+rx\frac{\partial}{\partial x}+\frac12x^2\sigma_1^2\frac{\partial^2}{\partial x^2}-r\Big)c(t,x)=0.$$
So
$$c_t(t,S_t)+rS_tc_x(t,S_t)+\frac12\sigma_1^2S_t^2c_{xx}(t,S_t)-rc(t,S_t)=0,$$
and
$$dc(t,S_t)=c_t(t,S_t)dt+c_x(t,S_t)(\alpha S_tdt+\sigma_2S_tdW_t)+\frac12c_{xx}(t,S_t)\sigma_2^2S_t^2dt$$
$$=\Big[c_t(t,S_t)+\alpha c_x(t,S_t)S_t+\frac12\sigma_2^2S_t^2c_{xx}(t,S_t)\Big]dt+\sigma_2S_tc_x(t,S_t)dW_t$$
$$=\Big[rc(t,S_t)+(\alpha-r)c_x(t,S_t)S_t+\frac12S_t^2(\sigma_2^2-\sigma_1^2)c_{xx}(t,S_t)\Big]dt+\sigma_2S_tc_x(t,S_t)dW_t.$$
Therefore
$$dX_t=\Big[rc(t,S_t)+(\alpha-r)c_x(t,S_t)S_t+\frac12S_t^2(\sigma_2^2-\sigma_1^2)c_{xx}(t,S_t)+rX_t-rc(t,S_t)+rS_tc_x(t,S_t)-\frac12(\sigma_2^2-\sigma_1^2)S_t^2c_{xx}(t,S_t)-\alpha c_x(t,S_t)S_t\Big]dt$$
$$\quad+\big[\sigma_2S_tc_x(t,S_t)-c_x(t,S_t)\sigma_2S_t\big]dW_t=rX_tdt.$$
4.12. (i)

Proof.
$$p_t(t,x)=c_t(t,x)+re^{-r(T-t)}K=-rKe^{-r(T-t)}N(d_-(T-t,x))-\frac{\sigma x}{2\sqrt{T-t}}N'(d_+(T-t,x))+rKe^{-r(T-t)}$$
$$=rKe^{-r(T-t)}N(-d_-(T-t,x))-\frac{\sigma x}{2\sqrt{T-t}}N'(d_+(T-t,x)).$$

(ii)

Proof. For an agent hedging a short position in the put, since $\Delta_t=p_x(t,S_t)<0$, he should short the underlying stock and put $p(t,S_t)-p_x(t,S_t)S_t\,(>0)$ cash in the money market account.

(iii)

Proof. By the put-call parity, it suffices to show $f(t,x)=x-Ke^{-r(T-t)}$ satisfies the Black-Scholes-Merton partial differential equation. Indeed,
$$\Big(\frac{\partial}{\partial t}+\frac12\sigma^2x^2\frac{\partial^2}{\partial x^2}+rx\frac{\partial}{\partial x}-r\Big)f(t,x)=-rKe^{-r(T-t)}+\frac12\sigma^2x^2\cdot0+rx\cdot1-r(x-Ke^{-r(T-t)})=0.$$
Remark: The Black-Scholes-Merton PDE has many solutions. Proper boundary conditions are the key to uniqueness. For more details, see Wilmott [8].
4.13.

Proof. We suppose $(W_1,W_2)$ is a pair of local martingales defined by the SDE
$$\begin{cases}dW_1(t)=dB_1(t)\\dW_2(t)=\alpha(t)dB_1(t)+\beta(t)dB_2(t).\end{cases}$$
We require $(W_1,W_2)$ to be a pair of independent Brownian motions, i.e. $dW_i(t)dW_j(t)=\delta_{ij}dt$. Since $dB_1(t)dB_2(t)=\rho(t)dt$, this forces $\alpha(t)+\rho(t)\beta(t)=0$ and $\alpha^2(t)+2\rho(t)\alpha(t)\beta(t)+\beta^2(t)=1$, whence $\alpha(t)=-\frac{\rho(t)}{\sqrt{1-\rho^2(t)}}$ and $\beta(t)=\frac{1}{\sqrt{1-\rho^2(t)}}$. So
$$\begin{cases}W_1(t)=B_1(t)\\W_2(t)=-\int_0^t\frac{\rho(s)}{\sqrt{1-\rho^2(s)}}dB_1(s)+\int_0^t\frac{1}{\sqrt{1-\rho^2(s)}}dB_2(s).\end{cases}$$
4.14. (i)

Proof. Clearly $Z_j\in\mathcal F_{t_{j+1}}$. Moreover
$$E[Z_j|\mathcal F_{t_j}]=f''(W_{t_j})E\big[(W_{t_{j+1}}-W_{t_j})^2-(t_{j+1}-t_j)\big|\mathcal F_{t_j}\big]=f''(W_{t_j})\big(E[(W_{t_{j+1}}-W_{t_j})^2]-(t_{j+1}-t_j)\big)=0,$$
since $W_{t_{j+1}}-W_{t_j}$ is independent of $\mathcal F_{t_j}$ and $W_t\sim N(0,t)$. Finally, we have
$$E[Z_j^2|\mathcal F_{t_j}]=f''(W_{t_j})^2E\big[\big((W_{t_{j+1}}-W_{t_j})^2-(t_{j+1}-t_j)\big)^2\big|\mathcal F_{t_j}\big]=2f''(W_{t_j})^2(t_{j+1}-t_j)^2,$$
where we used the independence of Brownian motion increments and the fact that $E[X^4]=3E[X^2]^2$ if $X$ is Gaussian with mean 0.

(ii)

Proof. $E[\sum_{j=0}^{n-1}Z_j]=E[\sum_{j=0}^{n-1}E[Z_j|\mathcal F_{t_j}]]=0$ by part (i).

(iii)

Proof. Since $E[Z_iZ_j]=E[Z_iE[Z_j|\mathcal F_{t_j}]]=0$ for $i<j$,
$$\mathrm{Var}\Big[\sum_{j=0}^{n-1}Z_j\Big]=E\Big[\Big(\sum_{j=0}^{n-1}Z_j\Big)^2\Big]=E\Big[\sum_{j=0}^{n-1}Z_j^2+2\sum_{0\le i<j\le n-1}Z_iZ_j\Big]=\sum_{j=0}^{n-1}E\big[E[Z_j^2|\mathcal F_{t_j}]\big]$$
$$=2\sum_{j=0}^{n-1}E[f''(W_{t_j})^2](t_{j+1}-t_j)^2\le2\max_{0\le j\le n-1}|t_{j+1}-t_j|\sum_{j=0}^{n-1}E[f''(W_{t_j})^2](t_{j+1}-t_j)\to0,$$
since $\sum_{j=0}^{n-1}f''(W_{t_j})^2(t_{j+1}-t_j)\to\int_0^Tf''(W_t)^2dt$.
4.15. (i)
Proof. $B_i$ is a local martingale with
$$(dB_i(t))^2=\Big(\sum_{j=1}^d\frac{\sigma_{ij}(t)}{\sigma_i(t)}dW_j(t)\Big)^2=\sum_{j=1}^d\frac{\sigma_{ij}^2(t)}{\sigma_i^2(t)}dt=dt.$$
So $B_i$ is a Brownian motion.

(ii)

Proof.
$$dB_i(t)dB_k(t)=\Big(\sum_{j=1}^d\frac{\sigma_{ij}(t)}{\sigma_i(t)}dW_j(t)\Big)\Big(\sum_{l=1}^d\frac{\sigma_{kl}(t)}{\sigma_k(t)}dW_l(t)\Big)=\sum_{1\le j,l\le d}\frac{\sigma_{ij}(t)\sigma_{kl}(t)}{\sigma_i(t)\sigma_k(t)}dW_j(t)dW_l(t)=\sum_{j=1}^d\frac{\sigma_{ij}(t)\sigma_{kj}(t)}{\sigma_i(t)\sigma_k(t)}dt=\rho_{ik}(t)dt.$$
4.16.

Proof. To find the $m$ independent Brownian motions $W_1(t),\dots,W_m(t)$, we need to find $A(t)=(a_{ij}(t))$ so that
$$(dB_1(t),\dots,dB_m(t))^{tr}=A(t)(dW_1(t),\dots,dW_m(t))^{tr},$$
or equivalently
$$(dW_1(t),\dots,dW_m(t))^{tr}=A(t)^{-1}(dB_1(t),\dots,dB_m(t))^{tr},$$
and
$$(dW_1(t),\dots,dW_m(t))^{tr}(dW_1(t),\dots,dW_m(t))=A(t)^{-1}(dB_1(t),\dots,dB_m(t))^{tr}(dB_1(t),\dots,dB_m(t))(A(t)^{-1})^{tr}=I_{m\times m}dt,$$
where $I_{m\times m}$ is the $m\times m$ identity matrix. By the condition $dB_i(t)dB_k(t)=\rho_{ik}(t)dt$, we get
$$(dB_1(t),\dots,dB_m(t))^{tr}(dB_1(t),\dots,dB_m(t))=C(t)dt.$$
So $A(t)^{-1}C(t)(A(t)^{-1})^{tr}=I_{m\times m}$, which gives $C(t)=A(t)A(t)^{tr}$. This motivates us to define $A$ as the square root of $C$. Reversing the above analysis, we obtain a formal proof.
4.17.

Proof. We will try to solve all the sub-problems in a single, long solution. We start with the general $X_i$:
$$X_i(t)=X_i(0)+\int_0^t\theta_i(u)du+\int_0^t\sigma_i(u)dB_i(u),\quad i=1,2,$$
with $M$ a uniform bound on $|\theta_i|$ and $|\sigma_i|$. For the conditional mean increment $M_i(\epsilon)=E[X_i(t_0+\epsilon)-X_i(t_0)|\mathcal F_{t_0}]$ we have
$$M_i(\epsilon)=E\Big[\int_{t_0}^{t_0+\epsilon}\theta_i(u)du\Big|\mathcal F_{t_0}\Big]=\theta_i(t_0)\epsilon+E\Big[\int_{t_0}^{t_0+\epsilon}(\theta_i(u)-\theta_i(t_0))du\Big|\mathcal F_{t_0}\Big].$$
Since $\frac1\epsilon\int_{t_0}^{t_0+\epsilon}|\theta_i(u)-\theta_i(t_0)|du\le2M$ and $\lim_{\epsilon\downarrow0}\frac1\epsilon\int_{t_0}^{t_0+\epsilon}|\theta_i(u)-\theta_i(t_0)|du=0$ by the continuity of $\theta_i$, the Dominated Convergence Theorem under conditional expectation implies
$$\lim_{\epsilon\downarrow0}\frac1\epsilon E\Big[\int_{t_0}^{t_0+\epsilon}|\theta_i(u)-\theta_i(t_0)|du\Big|\mathcal F_{t_0}\Big]=E\Big[\lim_{\epsilon\downarrow0}\frac1\epsilon\int_{t_0}^{t_0+\epsilon}|\theta_i(u)-\theta_i(t_0)|du\Big|\mathcal F_{t_0}\Big]=0.$$
So $M_i(\epsilon)=\theta_i(t_0)\epsilon+o(\epsilon)$. This proves (iii).

To calculate the variance and covariance, we note $Y_i(t)=\int_0^t\sigma_i(u)dB_i(u)$ is a martingale and by Itô's formula $Y_i(t)Y_j(t)-\int_0^t\sigma_i(u)\sigma_j(u)\rho_{ij}(u)du$ is a martingale ($i,j=1,2$, with $\rho_{ii}\equiv1$). So
$$E\big[(X_i(t_0+\epsilon)-X_i(t_0))(X_j(t_0+\epsilon)-X_j(t_0))\big|\mathcal F_{t_0}\big]=E\Big[\Big(Y_i(t_0+\epsilon)-Y_i(t_0)+\int_{t_0}^{t_0+\epsilon}\theta_i(u)du\Big)\Big(Y_j(t_0+\epsilon)-Y_j(t_0)+\int_{t_0}^{t_0+\epsilon}\theta_j(u)du\Big)\Big|\mathcal F_{t_0}\Big]$$
$$=I+II+III+IV,$$
where
$$I=E\big[(Y_i(t_0+\epsilon)-Y_i(t_0))(Y_j(t_0+\epsilon)-Y_j(t_0))\big|\mathcal F_{t_0}\big]=E\Big[\int_{t_0}^{t_0+\epsilon}\sigma_i(u)\sigma_j(u)\rho_{ij}(u)du\Big|\mathcal F_{t_0}\Big],$$
$II$ is the product of the two drift integrals, and $III$, $IV$ are the cross terms. By an argument similar to that involved in the proof of (iii), $I=\sigma_i(t_0)\sigma_j(t_0)\rho_{ij}(t_0)\epsilon+o(\epsilon)$, while $|II|\le M^2\epsilon^2=o(\epsilon)$. By Cauchy's inequality under conditional expectation (note $E[XY|\mathcal F]$ defines an inner product on $L^2(\Omega)$),
$$III\le E\Big[|Y_i(t_0+\epsilon)-Y_i(t_0)|\int_{t_0}^{t_0+\epsilon}|\theta_j(u)|du\Big|\mathcal F_{t_0}\Big]\le M\epsilon\sqrt{E\big[(Y_i(t_0+\epsilon)-Y_i(t_0))^2\big|\mathcal F_{t_0}\big]}=M\epsilon\sqrt{E\Big[\int_{t_0}^{t_0+\epsilon}\sigma_i(u)^2du\Big|\mathcal F_{t_0}\Big]}\le M^2\epsilon^{\frac32}=o(\epsilon).$$
Similarly, $IV=o(\epsilon)$. In summary, we have
$$E\big[(X_i(t_0+\epsilon)-X_i(t_0))(X_j(t_0+\epsilon)-X_j(t_0))\big|\mathcal F_{t_0}\big]=M_i(\epsilon)M_j(\epsilon)+\sigma_i(t_0)\sigma_j(t_0)\rho_{ij}(t_0)\epsilon+o(\epsilon).$$
This proves parts (iv) and (v). Finally,
$$\lim_{\epsilon\downarrow0}\frac{C(\epsilon)}{\sqrt{V_1(\epsilon)V_2(\epsilon)}}=\lim_{\epsilon\downarrow0}\frac{\rho(t_0)\sigma_1(t_0)\sigma_2(t_0)\epsilon+o(\epsilon)}{\sqrt{(\sigma_1^2(t_0)\epsilon+o(\epsilon))(\sigma_2^2(t_0)\epsilon+o(\epsilon))}}=\rho(t_0).$$
This proves part (vi). Parts (i) and (ii) are consequences of the general case.
4.18. (i)

Proof. With $\zeta_t=e^{-\theta W_t-(r+\frac{\theta^2}{2})t}$, Itô's formula gives $d\zeta_t=\zeta_t\big(-\theta dW_t-(r+\frac{\theta^2}{2})dt\big)+\frac12\zeta_t\theta^2dt=-\theta\zeta_tdW_t-r\zeta_tdt$.

(ii)

Proof.
$$d(\zeta_tX_t)=\zeta_tdX_t+X_td\zeta_t+dX_td\zeta_t=\zeta_t\Delta_t\sigma S_tdW_t-\theta X_t\zeta_tdW_t=\zeta_t(\sigma\Delta_tS_t-\theta X_t)dW_t.$$
So $\zeta_tX_t$ is a martingale.

(iii)

Proof. By part (ii), $X_0=\zeta_0X_0=E[\zeta_TX_T]=E[\zeta_TV_T]$. (This can be seen as a version of risk-neutral pricing, only that the pricing is carried out under the actual probability measure.)

4.19. (i)

Proof. $B_t$ is a local martingale with $[B]_t=\int_0^t\mathrm{sign}(W_s)^2ds=t$. So by Lévy's theorem, $B_t$ is a Brownian motion.
(ii)

Proof. $d(B_tW_t)=B_tdW_t+W_tdB_t+dB_tdW_t=B_tdW_t+\mathrm{sign}(W_t)W_tdW_t+\mathrm{sign}(W_t)dt$. Integrating both sides of the resulting equation and taking expectation, we get
$$E[B_tW_t]=\int_0^tE[\mathrm{sign}(W_s)]ds=\int_0^tE[1_{\{W_s\ge0\}}-1_{\{W_s<0\}}]ds=\frac t2-\frac t2=0.$$

(iii)

Proof. By Itô's formula, $dW_t^2=2W_tdW_t+dt$.

(iv)

Proof. By Itô's formula,
$$d(B_tW_t^2)=B_t(2W_tdW_t+dt)+W_t^2\,\mathrm{sign}(W_t)dW_t+\mathrm{sign}(W_t)dW_t(2W_tdW_t+dt)=B_t(2W_tdW_t+dt)+W_t^2\,\mathrm{sign}(W_t)dW_t+2\,\mathrm{sign}(W_t)W_tdt.$$
So
$$E[B_tW_t^2]=E\Big[\int_0^tB_sds\Big]+2E\Big[\int_0^t\mathrm{sign}(W_s)W_sds\Big]=\int_0^tE[B_s]ds+2\int_0^tE[|W_s|]ds=2\int_0^t\sqrt{\frac{2s}{\pi}}ds\neq0=E[B_t]\cdot E[W_t^2].$$
Hence $B_t$ and $W_t$ are uncorrelated but not independent.
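The two expectations just computed can be approximated by simulating $B_t=\int_0^t\mathrm{sign}(W_s)dW_s$ on a discrete grid (left-endpoint evaluation of the sign). This is our own seeded sketch; discretization introduces a small bias, so only the qualitative conclusion is asserted:

```python
import math, random

def sign(x):
    return 1.0 if x >= 0 else -1.0

def simulate_pair(T=1.0, n_steps=100, n_paths=20_000, seed=5):
    """Estimate E[B_T W_T] and E[B_T W_T^2] for B_t = int_0^t sign(W_s) dW_s."""
    rng = random.Random(seed)
    dt = T / n_steps
    ewb = ew2b = 0.0
    for _ in range(n_paths):
        w = b = 0.0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            b += sign(w) * dw     # left-endpoint (Ito) evaluation
            w += dw
        ewb += b * w
        ew2b += b * w * w
    return ewb / n_paths, ew2b / n_paths

c1, c2 = simulate_pair()
print(c1, c2)  # c1 near 0; c2 near 2*(2/3)*sqrt(2/pi), so B, W are not independent
```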
4.20. (i)

Proof. $f(x)=(x-K)^+$, so
$$f'(x)=\begin{cases}1,&\text{if }x>K\\\text{undefined},&\text{if }x=K\\0,&\text{if }x<K\end{cases}\qquad\text{and}\qquad f''(x)=\begin{cases}0,&\text{if }x\neq K\\\text{undefined},&\text{if }x=K.\end{cases}$$

(ii)

Proof.
$$E[f(W_T)]=\int_K^\infty(x-K)\frac{e^{-\frac{x^2}{2T}}}{\sqrt{2\pi T}}dx=\sqrt{\frac{T}{2\pi}}e^{-\frac{K^2}{2T}}-KN\Big(-\frac{K}{\sqrt T}\Big)>0.$$

(iii)

Proof.
$$\lim_nf_n'(x)=\begin{cases}0,&\text{if }x<K\\\frac12,&\text{if }x=K\\1,&\text{if }x>K.\end{cases}$$

(v)

Proof. Fix $\omega$ so that $W_t(\omega)<K$ for any $t\in[0,T]$. Since $W_t(\omega)$ attains its maximum on $[0,T]$, there exists $n_0$, so that for any $n\ge n_0$, $\max_{0\le t\le T}W_t(\omega)<K-\frac{1}{2n}$. So
$$L_K(T)(\omega)=\lim_nn\int_0^T1_{(K-\frac{1}{2n},K+\frac{1}{2n})}(W_t(\omega))dt=0.$$

(vi)

Proof. Taking expectation on both sides of formula (4.10.45), we have
$$E[L_K(T)]=E[(W_T-K)^+]>0.$$
So we cannot have $L_K(T)=0$ a.s.

4.21. (i)

Proof. There are two problems. First, the transaction cost could be big due to active trading; second, the purchases and sales cannot be made at exactly the same price $K$. For more details, see Hull [2].

(ii)

Proof. No. The RHS of (4.10.26) is a martingale, so its expectation is 0. But $E[(S_T-K)^+]>0$. So $X_T\neq(S_T-K)^+$.
5. Risk-Neutral Pricing

5.1. (i)

Proof.
$$df(X_t)=f'(X_t)dX_t+\frac12f''(X_t)d\langle X\rangle_t=f(X_t)\Big(dX_t+\frac12d\langle X\rangle_t\Big)=f(X_t)\Big(\sigma_tdW_t+\big(\alpha_t-R_t-\tfrac12\sigma_t^2\big)dt+\tfrac12\sigma_t^2dt\Big)$$
$$=f(X_t)(\alpha_t-R_t)dt+f(X_t)\sigma_tdW_t.$$

(ii)

Proof. $d(D_tS_t)=S_tdD_t+D_tdS_t+dD_tdS_t=-S_tR_tD_tdt+D_t\alpha_tS_tdt+D_t\sigma_tS_tdW_t=D_tS_t(\alpha_t-R_t)dt+D_tS_t\sigma_tdW_t$. This is formula (5.2.20).
5.2.

Proof. By Lemma 5.2.2, $\tilde E[D_TV_T|\mathcal F_t]=E\big[\frac{D_TV_TZ_T}{Z_t}\big|\mathcal F_t\big]$. Therefore (5.2.30) is equivalent to $D_tV_tZ_t=E[D_TV_TZ_T|\mathcal F_t]$.
5.3. (i)

Proof.
$$c_x(0,x)=\frac{d}{dx}\tilde E\big[e^{-rT}\big(xe^{\sigma\tilde W_T+(r-\frac12\sigma^2)T}-K\big)^+\big]=\tilde E\Big[e^{-rT}e^{\sigma\tilde W_T+(r-\frac12\sigma^2)T}1_{\{xe^{\sigma\tilde W_T+(r-\frac12\sigma^2)T}>K\}}\Big]$$
$$=\tilde E\Big[e^{\sigma\tilde W_T-\frac12\sigma^2T}1_{\{\tilde W_T>\frac1\sigma(\ln\frac Kx-(r-\frac12\sigma^2)T)\}}\Big]=\int_{-\infty}^\infty\frac{e^{-\frac{z^2}{2}}}{\sqrt{2\pi}}e^{\sigma\sqrt Tz-\frac12\sigma^2T}1_{\{z>-d_-(T,x)\}}dz=\int_{-\infty}^\infty\frac{e^{-\frac{(z-\sigma\sqrt T)^2}{2}}}{\sqrt{2\pi}}1_{\{z>-d_-(T,x)\}}dz.$$

(ii)

Proof. If we set $\hat Z_T=e^{\sigma\tilde W_T-\frac12\sigma^2T}$ and $\hat Z_t=\tilde E[\hat Z_T|\mathcal F_t]$, then $\hat Z$ is a $\tilde P$-martingale, $\hat Z_t>0$ and $\tilde E[\hat Z_T]=\tilde E[e^{\sigma\tilde W_T-\frac12\sigma^2T}]=1$. So if we define $\hat P$ by $d\hat P=\hat Z_Td\tilde P$ on $\mathcal F_T$, then $\hat P$ is a probability measure equivalent to $\tilde P$, and
$$c_x(0,x)=\tilde E[\hat Z_T1_{\{S_T>K\}}]=\hat P(S_T>K).$$
Moreover, by Girsanov's Theorem, $\hat W_t=\tilde W_t+\int_0^t(-\sigma)du=\tilde W_t-\sigma t$ is a $\hat P$-Brownian motion (set $\Theta=-\sigma$ in Theorem 5.4.1).

(iii)

Proof. $S_T=xe^{\sigma\tilde W_T+(r-\frac12\sigma^2)T}=xe^{\sigma\hat W_T+(r+\frac12\sigma^2)T}$. So
$$\hat P(S_T>K)=\hat P\big(xe^{\sigma\hat W_T+(r+\frac12\sigma^2)T}>K\big)=\hat P\Big(\frac{\hat W_T}{\sqrt T}>-d_+(T,x)\Big)=N(d_+(T,x)).$$
5.4. First, a few typos. In the SDE for $S$, "$\sigma(t)d\tilde W(t)$" should be "$\sigma(t)S(t)d\tilde W(t)$". In the first equation for $c(0,S(0))$, $E$ should be $\tilde E$. In the second equation for $c(0,S(0))$, the variables for BSM should be
$$BSM\Big(T,S(0);K,\frac1T\int_0^Tr(t)dt,\sqrt{\frac1T\int_0^T\sigma^2(t)dt}\Big).$$

(i)

Proof. $d\ln S_t=\frac{dS_t}{S_t}-\frac{1}{2S_t^2}d\langle S\rangle_t=r_tdt+\sigma_td\tilde W_t-\frac12\sigma_t^2dt$. So $S_T=S_0\exp\{\int_0^T(r_t-\frac12\sigma_t^2)dt+\int_0^T\sigma_td\tilde W_t\}$. Let $X=\int_0^T(r_t-\frac12\sigma_t^2)dt+\int_0^T\sigma_td\tilde W_t$. The first term in the expression of $X$ is a number and the second term is a Gaussian random variable $N(0,\int_0^T\sigma_t^2dt)$, since both $r$ and $\sigma$ are deterministic. Therefore $S_T=S_0e^X$, with $X\sim N\big(\int_0^T(r_t-\frac12\sigma_t^2)dt,\int_0^T\sigma_t^2dt\big)$.

(ii)

Proof. For the standard BSM model with constant volatility $\Sigma$ and interest rate $R$, under the risk-neutral measure we have $S_T=S_0e^Y$, where $Y=(R-\frac12\Sigma^2)T+\Sigma\tilde W_T\sim N((R-\frac12\Sigma^2)T,\Sigma^2T)$, and $\tilde E[(S_0e^Y-K)^+]=e^{RT}BSM(T,S_0;K,R,\Sigma)$. Note $R=\frac1T(E[Y]+\frac12\mathrm{Var}(Y))$ and $\Sigma=\sqrt{\frac1T\mathrm{Var}(Y)}$, so
$$\tilde E[(S_0e^Y-K)^+]=e^{E[Y]+\frac12\mathrm{Var}(Y)}BSM\Big(T,S_0;K,\frac1T\Big(E[Y]+\frac12\mathrm{Var}(Y)\Big),\sqrt{\frac1T\mathrm{Var}(Y)}\Big).$$
Since $X$ in (i) is Gaussian with $E[X]+\frac12\mathrm{Var}(X)=\int_0^Tr_tdt$, we get
$$c(0,S_0)=e^{-\int_0^Tr_tdt}\tilde E[(S_0e^X-K)^+]=BSM\Big(T,S_0;K,\frac1T\int_0^Tr_tdt,\sqrt{\frac1T\int_0^T\sigma_t^2dt}\Big).$$
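The conclusion of (ii) says pricing with deterministic time-dependent $r(t)$, $\sigma(t)$ reduces to the constant-parameter BSM formula with the time-averaged rate and root-mean-square volatility. A small sketch of this recipe (our own helper names, piecewise-constant parameters over equal subintervals):

```python
from math import exp, log, sqrt
from statistics import NormalDist

nd = NormalDist()

def bsm(T, S0, K, r, sigma):
    d1 = (log(S0 / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    return S0 * nd.cdf(d1) - K * exp(-r * T) * nd.cdf(d1 - sigma * sqrt(T))

def price_time_dependent(T, S0, K, r_pieces, sig_pieces):
    """Each piece covers an equal fraction of [0, T]; average r and sigma^2."""
    r_bar = sum(r_pieces) / len(r_pieces)
    sig_bar = sqrt(sum(s * s for s in sig_pieces) / len(sig_pieces))
    return bsm(T, S0, K, r_bar, sig_bar)

print(price_time_dependent(1.0, 100, 100, [0.03, 0.05, 0.07], [0.1, 0.2, 0.3]))
```

With constant pieces the reduction is exact, which provides a built-in consistency check.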
5.5. (i)

Proof. Let $f(x)=\frac1x$; then $f'(x)=-\frac{1}{x^2}$ and $f''(x)=\frac{2}{x^3}$. Since $dZ_t=-\Theta_tZ_tdW_t$,
$$d\Big(\frac{1}{Z_t}\Big)=f'(Z_t)dZ_t+\frac12f''(Z_t)dZ_tdZ_t=-\frac{1}{Z_t^2}(-\Theta_tZ_t)dW_t+\frac{1}{Z_t^3}\Theta_t^2Z_t^2dt=\frac{\Theta_t}{Z_t}dW_t+\frac{\Theta_t^2}{Z_t}dt.$$
(ii)

Proof. By Lemma 5.2.2, for $s,t\ge0$ with $s<t$, $\tilde M_s=\tilde E[\tilde M_t|\mathcal F_s]=E\big[\frac{Z_t\tilde M_t}{Z_s}\big|\mathcal F_s\big]$. That is, $E[Z_t\tilde M_t|\mathcal F_s]=Z_s\tilde M_s$. So $M=Z\tilde M$ is a $P$-martingale.

(iii)

Proof.
$$d\tilde M_t=d\Big(M_t\cdot\frac{1}{Z_t}\Big)=\frac{1}{Z_t}dM_t+M_td\frac{1}{Z_t}+dM_td\frac{1}{Z_t}=\frac{\Gamma_t}{Z_t}dW_t+\frac{M_t\Theta_t}{Z_t}dW_t+\frac{M_t\Theta_t^2}{Z_t}dt+\frac{\Gamma_t\Theta_t}{Z_t}dt.$$

(iv)

Proof. In part (iii), we have
$$d\tilde M_t=\frac{\Gamma_t}{Z_t}dW_t+\frac{M_t\Theta_t}{Z_t}dW_t+\frac{M_t\Theta_t^2}{Z_t}dt+\frac{\Gamma_t\Theta_t}{Z_t}dt=\frac{\Gamma_t}{Z_t}(dW_t+\Theta_tdt)+\frac{M_t\Theta_t}{Z_t}(dW_t+\Theta_tdt).$$
Let $\tilde\Gamma_t=\frac{\Gamma_t+M_t\Theta_t}{Z_t}$; then $d\tilde M_t=\tilde\Gamma_td\tilde W_t$. This proves Corollary 5.3.2.

5.6.

Proof. By Theorem 4.6.5, it suffices to show $\tilde W_i(t)$ is an $\mathcal F_t$-martingale under $\tilde P$ and $[\tilde W_i,\tilde W_j](t)=t\delta_{ij}$ ($i,j=1,2$). Indeed, for $i=1,2$, $\tilde W_i(t)$ is an $\mathcal F_t$-martingale under $\tilde P$ if and only if $\tilde W_i(t)Z_t$ is an $\mathcal F_t$-martingale under $P$, since
$$\tilde E[\tilde W_i(t)|\mathcal F_s]=E\Big[\frac{\tilde W_i(t)Z_t}{Z_s}\Big|\mathcal F_s\Big].$$
By Itô's product formula, we have
$$d(\tilde W_i(t)Z_t)=\tilde W_i(t)dZ_t+Z_td\tilde W_i(t)+dZ_td\tilde W_i(t)=-\tilde W_i(t)Z_t\sum_{j=1}^d\Theta_j(t)dW_j(t)+Z_t(dW_i(t)+\Theta_i(t)dt)-Z_t\Theta_i(t)dt$$
$$=-\tilde W_i(t)Z_t\sum_{j=1}^d\Theta_j(t)dW_j(t)+Z_tdW_i(t).$$
So $\tilde W_i(t)Z_t$ is a $P$-martingale. Moreover, $[\tilde W_i,\tilde W_j](t)=[W_i,W_j](t)=t\delta_{ij}$. Hence $(\tilde W_1,\tilde W_2)$ is a two-dimensional Brownian motion under $\tilde P$.
5.8. The basic idea is that for any positive $\tilde P$-martingale $M$, $dM_t=M_t\cdot\frac{1}{M_t}dM_t$. By the Martingale Representation Theorem, $dM_t=\tilde\Gamma_td\tilde W_t$ for some adapted process $\tilde\Gamma_t$. So $dM_t=M_t\big(\frac{\tilde\Gamma_t}{M_t}\big)d\tilde W_t$, i.e. any positive martingale must be the exponential of an integral w.r.t. Brownian motion. Taking into account the discounting factor and applying Itô's product rule, we can show every strictly positive asset is a generalized geometric Brownian motion.
(i)

Proof. $V_tD_t=\tilde E[e^{-\int_0^TR_udu}V_T|\mathcal F_t]=\tilde E[D_TV_T|\mathcal F_t]$. So $(D_tV_t)_{t\ge0}$ is a $\tilde P$-martingale. By the Martingale Representation Theorem, there exists an adapted process $\tilde\Gamma_t$, $0\le t\le T$, such that $D_tV_t=V_0+\int_0^t\tilde\Gamma_sd\tilde W_s$, or equivalently, $V_t=D_t^{-1}\big(V_0+\int_0^t\tilde\Gamma_sd\tilde W_s\big)$. Differentiating both sides, we get $dV_t=R_tD_t^{-1}\big(V_0+\int_0^t\tilde\Gamma_sd\tilde W_s\big)dt+D_t^{-1}\tilde\Gamma_td\tilde W_t$, i.e. $dV_t=R_tV_tdt+\frac{\tilde\Gamma_t}{D_t}d\tilde W_t$.
(ii)

Proof. We prove the following more general lemma.

Lemma 1. Let $X$ be an almost surely positive random variable (i.e. $X>0$ a.s.) defined on the probability space $(\Omega,\mathcal G,P)$. Let $\mathcal F$ be a sub-$\sigma$-algebra of $\mathcal G$; then $Y=E[X|\mathcal F]>0$ a.s.

Proof. By the property of conditional expectation, $Y\ge0$ a.s. Let $A=\{Y=0\}$; we shall show $P(A)=0$. Indeed, note $A\in\mathcal F$, so
$$0=E[Y1_A]=E[E[X|\mathcal F]1_A]=E[X1_A]=E[X1_{A\cap\{X\ge1\}}]+\sum_{n=1}^\infty E[X1_{A\cap\{\frac1n>X\ge\frac{1}{n+1}\}}]\ge P(A\cap\{X\ge1\})+\sum_{n=1}^\infty\frac{1}{n+1}P\Big(A\cap\Big\{\frac1n>X\ge\frac{1}{n+1}\Big\}\Big).$$
So $P(A\cap\{X\ge1\})=0$ and $P(A\cap\{\frac1n>X\ge\frac{1}{n+1}\})=0$ for all $n\ge1$. This in turn implies $P(A)=P(A\cap\{X>0\})=P(A\cap\{X\ge1\})+\sum_{n=1}^\infty P(A\cap\{\frac1n>X\ge\frac{1}{n+1}\})=0$.

By the above lemma, it is clear that for each $t\in[0,T]$, $V_t=\tilde E[e^{-\int_t^TR_udu}V_T|\mathcal F_t]>0$ a.s. Moreover, by a classical result of martingale theory (Revuz and Yor [4], Chapter II, Proposition (3.4)), we have the following stronger result: for a.s. $\omega$, $V_t(\omega)>0$ for any $t\in[0,T]$.
(iii)

Proof. By (ii), $V>0$ a.s., so
$$dV_t=V_t\cdot\frac{1}{V_t}dV_t=V_t\frac{1}{V_t}\Big(R_tV_tdt+\frac{\tilde\Gamma_t}{D_t}d\tilde W_t\Big)=R_tV_tdt+V_t\frac{\tilde\Gamma_t}{V_tD_t}d\tilde W_t=R_tV_tdt+\sigma_tV_td\tilde W_t,$$
where $\sigma_t=\frac{\tilde\Gamma_t}{V_tD_t}$.
5.9.

Proof. $c(0,T,x,K)=xN(d_+)-Ke^{-rT}N(d_-)$ with $d_\pm=\frac{1}{\sigma\sqrt T}\big(\ln\frac xK+(r\pm\frac12\sigma^2)T\big)$. Let $f(y)=\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}$; then $f'(y)=-yf(y)$ and $\frac{\partial d_\pm}{\partial K}=-\frac{1}{\sigma\sqrt TK}$. So
$$c_K(0,T,x,K)=xf(d_+)\frac{\partial d_+}{\partial K}-e^{-rT}N(d_-)-Ke^{-rT}f(d_-)\frac{\partial d_-}{\partial K}=-\frac{xf(d_+)}{\sigma\sqrt TK}-e^{-rT}N(d_-)+\frac{e^{-rT}f(d_-)}{\sigma\sqrt T}=-e^{-rT}N(d_-),$$
where the last equality uses the identity $xf(d_+)=Ke^{-rT}f(d_-)$ (cf. Exercise 4.9 (i) with $t=0$). Hence
$$c_{KK}(0,T,x,K)=-e^{-rT}f(d_-)\frac{\partial d_-}{\partial K}=\frac{e^{-rT}f(d_-)}{K\sigma\sqrt T}.$$
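Both strike derivatives can be confirmed by finite differences of the pricing formula itself; this check is ours (central differences, stdlib `statistics.NormalDist` for $N$ and $N'$):

```python
from math import exp, log, sqrt
from statistics import NormalDist

nd = NormalDist()

def call0(x, K, r, sigma, T):
    """c(0, T, x, K) = x N(d+) - K e^{-rT} N(d-)."""
    dm = (log(x / K) + (r - sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    return x * nd.cdf(dm + sigma * sqrt(T)) - K * exp(-r * T) * nd.cdf(dm)

x, K, r, sigma, T = 100.0, 95.0, 0.05, 0.25, 0.5
d_minus = (log(x / K) + (r - sigma ** 2 / 2) * T) / (sigma * sqrt(T))

h = 1e-4
fd_cK = (call0(x, K + h, r, sigma, T) - call0(x, K - h, r, sigma, T)) / (2 * h)

h2 = 1e-2
fd_cKK = (call0(x, K + h2, r, sigma, T) - 2 * call0(x, K, r, sigma, T)
          + call0(x, K - h2, r, sigma, T)) / h2 ** 2

print(fd_cK, -exp(-r * T) * nd.cdf(d_minus))
print(fd_cKK, exp(-r * T) * nd.pdf(d_minus) / (K * sigma * sqrt(T)))
```

The positivity of $c_{KK}$ reflects that (up to the factor $e^{-rT}$) it is the risk-neutral density of $S_T$ evaluated at $K$.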
5.10. (i)

Proof. At time $t_0$, the value of the chooser option is $V(t_0)=\max\{C(t_0),P(t_0)\}=\max\{C(t_0),C(t_0)-F(t_0)\}=C(t_0)+\max\{0,-F(t_0)\}=C(t_0)+(e^{-r(T-t_0)}K-S(t_0))^+$.

(ii)

Proof. By the risk-neutral pricing formula, $V(0)=\tilde E[e^{-rt_0}V(t_0)]=\tilde E\big[e^{-rt_0}C(t_0)+e^{-rt_0}(e^{-r(T-t_0)}K-S(t_0))^+\big]=C(0)+\tilde E\big[e^{-rt_0}(e^{-r(T-t_0)}K-S(t_0))^+\big]$. The first term is the value of a call expiring at time $T$ with strike price $K$, and the second term is the value of a put expiring at time $t_0$ with strike price $e^{-r(T-t_0)}K$.
5.11.

Proof. We first make an analysis which leads to the hint, then we give a formal proof.

(Analysis) If we want to construct a portfolio $X$ that exactly replicates the cash flow, we must find a solution to the backward SDE
$$\begin{cases}dX_t=\Delta_tdS_t+R_t(X_t-\Delta_tS_t)dt-C_tdt\\X_T=0.\end{cases}$$
Multiply $D_t$ on both sides of the first equation and apply Itô's product rule; we get $d(D_tX_t)=\Delta_td(D_tS_t)-C_tD_tdt$. Integrating from 0 to $T$, we have $D_TX_T-D_0X_0=\int_0^T\Delta_td(D_tS_t)-\int_0^TC_tD_tdt$. By the terminal condition, we get $X_0=D_0^{-1}\big(\int_0^TC_tD_tdt-\int_0^T\Delta_td(D_tS_t)\big)$. $X_0$ is the theoretical, no-arbitrage price of the cash flow, provided we can find a trading strategy $\Delta$ that solves the BSDE. Note the SDE for $S$ gives $d(D_tS_t)=(D_tS_t)\sigma_t(\theta_tdt+dW_t)$, where $\theta_t=\frac{\alpha_t-R_t}{\sigma_t}$. Take the proper change of measure so that $\tilde W_t=\int_0^t\theta_sds+W_t$ is a Brownian motion under the new measure $\tilde P$; we get
$$\int_0^TC_tD_tdt=D_0X_0+\int_0^T\Delta_td(D_tS_t)=D_0X_0+\int_0^T\Delta_t(D_tS_t)\sigma_td\tilde W_t.$$
This says the random variable $\int_0^TC_tD_tdt$ has a stochastic integral representation $D_0X_0+\int_0^T\Delta_tD_tS_t\sigma_td\tilde W_t$. This inspires us to consider the martingale generated by $\int_0^TC_tD_tdt$, so that we can apply the Martingale Representation Theorem and get a formula for $\Delta$ by comparison of the integrands.

(Formal proof) Let $M_t=\tilde E[\int_0^TC_uD_udu|\mathcal F_t]$. By the Martingale Representation Theorem, there exists an adapted process $\tilde\Gamma_t$ such that $M_t=M_0+\int_0^t\tilde\Gamma_sd\tilde W_s$. Set $\Delta_t=\frac{\tilde\Gamma_t}{D_tS_t\sigma_t}$ and let $X$ be the portfolio corresponding to $\Delta$ with initial value $X_0=M_0$. Indeed, it is easy to see that $X$ satisfies the first equation. To check the terminal condition, we note
$$X_TD_T=D_0X_0+\int_0^T\Delta_tD_tS_t\sigma_td\tilde W_t-\int_0^TC_tD_tdt=M_0+\int_0^T\tilde\Gamma_td\tilde W_t-M_T=0.$$
So $X_T=0$. Thus, we have found a trading strategy $\Delta$, so that the corresponding portfolio $X$ replicates the cash flow and has zero terminal value. So $X_0=\tilde E[\int_0^TC_tD_tdt]$ is the no-arbitrage price of the cash flow at time zero.

Remark: As shown in the analysis, $d(D_tX_t)=\Delta_td(D_tS_t)-C_tD_tdt$. Integrating from $t$ to $T$, we get $0-D_tX_t=\int_t^T\Delta_ud(D_uS_u)-\int_t^TC_uD_udu$. Taking conditional expectation w.r.t. $\mathcal F_t$ on both sides, we get $D_tX_t=\tilde E[\int_t^TC_uD_udu|\mathcal F_t]$. So $X_t=D_t^{-1}\tilde E[\int_t^TC_uD_udu|\mathcal F_t]$. This is the no-arbitrage price of the cash flow at time $t$, and we have justified formula (5.6.10) in the textbook.
5.12. (i)

Proof. $d\tilde B_i(t)=dB_i(t)+\gamma_i(t)dt=\sum_{j=1}^d\frac{\sigma_{ij}(t)}{\sigma_i(t)}dW_j(t)+\sum_{j=1}^d\frac{\sigma_{ij}(t)}{\sigma_i(t)}\Theta_j(t)dt=\sum_{j=1}^d\frac{\sigma_{ij}(t)}{\sigma_i(t)}d\tilde W_j(t)$. So $\tilde B_i$ is a martingale under $\tilde P$. Since $d\tilde B_i(t)d\tilde B_i(t)=\sum_{j=1}^d\frac{\sigma_{ij}^2(t)}{\sigma_i^2(t)}dt=dt$, by Lévy's Theorem, $\tilde B_i$ is a Brownian motion under $\tilde P$.

(ii)

Proof.
$$dS_i(t)=R(t)S_i(t)dt+\sigma_i(t)S_i(t)d\tilde B_i(t)+\Big(\alpha_i(t)-R(t)-\sum_{j=1}^d\sigma_{ij}(t)\Theta_j(t)\Big)S_i(t)dt=R(t)S_i(t)dt+\sigma_i(t)S_i(t)d\tilde B_i(t),$$
by the definition of $\Theta$.

(iii)

Proof. $d\tilde B_i(t)d\tilde B_k(t)=(dB_i(t)+\gamma_i(t)dt)(dB_k(t)+\gamma_k(t)dt)=dB_i(t)dB_k(t)=\rho_{ik}(t)dt$.

(iv)

Proof. By Itô's product rule and the martingale property,
$$E[B_i(t)B_k(t)]=E\Big[\int_0^tB_i(s)dB_k(s)\Big]+E\Big[\int_0^tB_k(s)dB_i(s)\Big]+E\Big[\int_0^tdB_i(s)dB_k(s)\Big]=E\Big[\int_0^t\rho_{ik}(s)ds\Big]=\int_0^t\rho_{ik}(s)ds.$$
Similarly, by part (iii), we can show $\tilde E[\tilde B_i(t)\tilde B_k(t)]=\int_0^t\rho_{ik}(s)ds$.

(v)
Proof. By Itô's product formula,
$$E[B_1(t)B_2(t)]=E\Big[\int_0^t\mathrm{sign}(W_1(u))du\Big]=\int_0^t\big[P(W_1(u)\ge0)-P(W_1(u)<0)\big]du=0.$$
Meanwhile,
$$\tilde E[\tilde B_1(t)\tilde B_2(t)]=\tilde E\Big[\int_0^t\mathrm{sign}(W_1(u))du\Big]=\int_0^t\big[\tilde P(W_1(u)\ge0)-\tilde P(W_1(u)<0)\big]du=\int_0^t\big[\tilde P(\tilde W_1(u)\ge u)-\tilde P(\tilde W_1(u)<u)\big]du$$
$$=\int_0^t2\Big[\frac12-\tilde P(\tilde W_1(u)<u)\Big]du<0,$$
for any $t>0$, since $\tilde P(\tilde W_1(u)<u)>\frac12$ for $u>0$. So $E[B_1(t)B_2(t)]\neq\tilde E[\tilde B_1(t)\tilde B_2(t)]$ for all $t>0$.
5.13. (i)

Proof. $\tilde E[W_1(t)]=\tilde E[\tilde W_1(t)]=0$ and $\tilde E[W_2(t)]=\tilde E\big[\tilde W_2(t)-\int_0^t\tilde W_1(u)du\big]=0$, for all $t\in[0,T]$.

(ii)

Proof.
$$\widetilde{\mathrm{Cov}}[W_1(T),W_2(T)]=\tilde E[W_1(T)W_2(T)]=\tilde E\Big[\int_0^TW_1(t)dW_2(t)+\int_0^TW_2(t)dW_1(t)\Big]$$
$$=\tilde E\Big[\int_0^T\tilde W_1(t)\big(d\tilde W_2(t)-\tilde W_1(t)dt\big)\Big]+\tilde E\Big[\int_0^TW_2(t)d\tilde W_1(t)\Big]=-\tilde E\Big[\int_0^T\tilde W_1(t)^2dt\Big]=-\int_0^Ttdt=-\frac{T^2}{2}.$$
5.14. Equation (5.9.6) can be transformed into d(ert Xt ) = t [d(ert St ) aert dt] = t ert [dSt rSt dt
adt]. So, to make the discounted portfolio value ert Xt a martingale, we are motivated to change the measure
Rt
in such a way that St r 0 Su duat is a martingale under the new measure. To do this,hwe note the SDE for iS
t a
is dSt = t St dt+St dWt . Hence dSt rSt dtadt = [(t r)St a]dt+St dWt = St (t r)S
dt + dWt .
St
R
t
t a
ft = s ds + Wt , we can find an equivalent probability measure Pe, under which
Set t = (t r)S
and W
St
0
ft + adt and W
ft is a BM. This is the rational for formula (5.9.7).
S satisfies the SDE dSt = rSt dt + St dW
This is a good place to pause and think about the meaning of a martingale measure. What is to be a martingale? The new measure $\tilde P$ should be such that the discounted value process of the replicating portfolio is a martingale, not the discounted price process of the underlying. First, we want $D_tX_t$ to be a martingale under $\tilde P$ because we suppose that $X$ is able to replicate the derivative payoff at terminal time, $X_T=V_T$. In order to avoid arbitrage, we must have $X_t=V_t$ for any $t\in[0,T]$. The difficulty is how to calculate $X_t$, and the magic is brought by the martingale measure in the following line of reasoning: $V_t=X_t=D_t^{-1}\tilde E[D_TX_T|\mathcal F_t]=D_t^{-1}\tilde E[D_TV_T|\mathcal F_t]$. You can think of the martingale measure as a calculational convenience; that is all a martingale measure is. "Risk-neutral" is just a perception, referring to the actual effect of constructing a hedging portfolio. Second, we note that when the portfolio is self-financing, the discounted price process of the underlying is a martingale under $\tilde P$, as in the classical Black-Scholes-Merton model without dividends or cost of carry. This is not a coincidence. Indeed, in that case we have the relation $d(D_tX_t)=\Delta_t\,d(D_tS_t)$, so $D_tX_t$ being a martingale under $\tilde P$ is more or less equivalent to $D_tS_t$ being a martingale under $\tilde P$. However, when the underlying pays dividends, or there is cost of carry, $d(D_tX_t)=\Delta_t\,d(D_tS_t)$ no longer holds, as shown in formula (5.9.6). The portfolio is no longer self-financing, but self-financing with consumption. What we still want to retain is the martingale property of $D_tX_t$, not that of $D_tS_t$. This is how we choose the martingale measure in the above paragraph.
Let $V_T$ be a payoff at time $T$; then for the martingale $M_t=\tilde E[e^{-rT}V_T|\mathcal F_t]$, by the Martingale Representation Theorem, we can find an adapted process $\tilde\Gamma_t$ so that $M_t=M_0+\int_0^t\tilde\Gamma_s\,d\tilde W_s$. If we let $\Delta_t=\frac{\tilde\Gamma_te^{rt}}{\sigma S_t}$, then the value of the corresponding portfolio $X$ satisfies $d(e^{-rt}X_t)=\tilde\Gamma_t\,d\tilde W_t$. So by setting $X_0=M_0=\tilde E[e^{-rT}V_T]$, we must have $e^{-rt}X_t=M_t$ for all $t\in[0,T]$. In particular, $X_T=V_T$. Thus the portfolio perfectly hedges $V_T$. This justifies the risk-neutral pricing of European-type contingent claims in the model where cost of carry exists. Also note the risk-neutral measure is different from the one in the case of no cost of carry.
Another perspective on perfect replication is the following. We need to solve the backward SDE
$$\begin{cases} dX_t=\Delta_t\,dS_t-a\Delta_t\,dt+r(X_t-\Delta_tS_t)\,dt\\ X_T=V_T\end{cases}$$
for two unknowns, $X$ and $\Delta$. To do so, we find a probability measure $\tilde P$ under which $e^{-rt}X_t$ is a martingale; then $e^{-rt}X_t=\tilde E[e^{-rT}V_T|\mathcal F_t]:=M_t$. The Martingale Representation Theorem gives $M_t=M_0+\int_0^t\tilde\Gamma_u\,d\tilde W_u$ for some adapted process $\tilde\Gamma$. This gives a theoretical representation of $\Delta$ by comparison of the integrands, hence a perfect replication of $V_T$.
(i)

Proof. As indicated in the above analysis, if we have (5.9.7) under $\tilde P$, then $d(e^{-rt}X_t)=\Delta_t[d(e^{-rt}S_t)-ae^{-rt}\,dt]=\Delta_te^{-rt}\sigma S_t\,d\tilde W_t$. So $(e^{-rt}X_t)_{t\ge 0}$, where $X$ is given by (5.9.6), is a $\tilde P$-martingale.
(ii)

Proof. By Itô's formula, $dY_t=Y_t[\sigma\,d\tilde W_t+(r-\frac12\sigma^2)\,dt]+\frac12Y_t\sigma^2\,dt=Y_t(\sigma\,d\tilde W_t+r\,dt)$. So $d(e^{-rt}Y_t)=e^{-rt}Y_t\sigma\,d\tilde W_t$ and $e^{-rt}Y_t$ is a $\tilde P$-martingale. Moreover, if $S_t=S_0Y_t+Y_t\int_0^t\frac{a}{Y_s}\,ds$, then
$$dS_t=S_0\,dY_t+\Big(\int_0^t\frac{a}{Y_s}\,ds\Big)dY_t+a\,dt=\Big(S_0+\int_0^t\frac{a}{Y_s}\,ds\Big)Y_t(\sigma\,d\tilde W_t+r\,dt)+a\,dt=S_t(\sigma\,d\tilde W_t+r\,dt)+a\,dt.$$
Remark: To see how to find the above solution, note that $a$ appears only in the $dt$ term, so we divide by the integrating factor $Y_t=e^{\sigma\tilde W_t+(r-\frac12\sigma^2)t}$: this gives $d(S_t/Y_t)=a\,dt/Y_t$, and hence $S_t=Y_t\big(S_0+\int_0^t\frac{a\,ds}{Y_s}\big)$.
(iii)

Proof.
$$\tilde E[S_T|\mathcal F_t]=S_0\tilde E[Y_T|\mathcal F_t]+\tilde E\Big[Y_T\int_0^t\frac{a}{Y_s}\,ds\Big|\mathcal F_t\Big]+\tilde E\Big[Y_T\int_t^T\frac{a}{Y_s}\,ds\Big|\mathcal F_t\Big]$$
$$=S_0\tilde E[Y_T|\mathcal F_t]+\int_0^t\frac{a}{Y_s}\,ds\,\tilde E[Y_T|\mathcal F_t]+a\int_t^T\tilde E\Big[\frac{Y_T}{Y_s}\Big|\mathcal F_t\Big]\,ds$$
$$=\Big(S_0+\int_0^t\frac{a}{Y_s}\,ds\Big)Y_t\tilde E[Y_{T-t}]+a\int_t^T\tilde E[Y_{T-s}]\,ds$$
$$=\Big(S_0+\int_0^t\frac{a}{Y_s}\,ds\Big)Y_te^{r(T-t)}+a\int_t^Te^{r(T-s)}\,ds$$
$$=\Big(S_0+\int_0^t\frac{a\,ds}{Y_s}\Big)Y_te^{r(T-t)}-\frac{a}{r}\big(1-e^{r(T-t)}\big).$$
In particular, $\tilde E[S_T]=S_0e^{rT}-\frac{a}{r}(1-e^{rT})$.
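As a quick sanity check (not part of the original solution), the hedged sketch below simulates the cost-of-carry SDE $dS_t=rS_t\,dt+a\,dt+\sigma S_t\,d\tilde W_t$ with an Euler scheme and compares the Monte Carlo mean of $S_T$ with the closed form $S_0e^{rT}-\frac{a}{r}(1-e^{rT})$; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
S0, r, sigma, a, T = 100.0, 0.05, 0.3, 2.0, 1.0
n_steps, n_paths = 400, 100_000
dt = T / n_steps

S = np.full(n_paths, S0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    S = S + (r * S + a) * dt + sigma * S * dW   # dS = rS dt + a dt + sigma S dW~

mc_mean = S.mean()
closed_form = S0 * np.exp(r * T) + (a / r) * (np.exp(r * T) - 1.0)
print(mc_mean, closed_form)   # the two numbers should agree to MC error
```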
(iv)

Proof.
$$d\tilde E[S_T|\mathcal F_t]=ae^{r(T-t)}\,dt+\Big(S_0+\int_0^t\frac{a\,ds}{Y_s}\Big)\big(e^{r(T-t)}\,dY_t-rY_te^{r(T-t)}\,dt\big)+\frac{a}{r}e^{r(T-t)}(-r)\,dt=\Big(S_0+\int_0^t\frac{a\,ds}{Y_s}\Big)e^{r(T-t)}\sigma Y_t\,d\tilde W_t.$$
So $\tilde E[S_T|\mathcal F_t]$ is a $\tilde P$-martingale. As we have argued at the beginning of the solution, risk-neutral pricing is valid even in the presence of cost of carry. So by an argument similar to that of Section 5.6.2, the process $\tilde E[S_T|\mathcal F_t]$ is the futures price process for the commodity.
(v)

Proof. We solve the equation $\tilde E[e^{-r(T-t)}(S_T-K)|\mathcal F_t]=0$ for $K$, and get $K=\tilde E[S_T|\mathcal F_t]$. So $\mathrm{For}_S(t,T)=\mathrm{Fut}_S(t,T)$.
(vi)

Proof. We follow the hint. First, we solve the SDE
$$\begin{cases} dX_t=dS_t-a\,dt+r(X_t-S_t)\,dt\\ X_0=0.\end{cases}$$
By our analysis in part (i), $d(e^{-rt}X_t)=d(e^{-rt}S_t)-ae^{-rt}\,dt$. Integrating from 0 to $t$ on both sides, we get
$$X_t=S_t-S_0e^{rt}+\frac{a}{r}(1-e^{rt})=S_t-S_0e^{rt}-\frac{a}{r}(e^{rt}-1).$$
In particular, $X_T=S_T-S_0e^{rT}-\frac{a}{r}(e^{rT}-1)$. Meanwhile, $\mathrm{For}_S(t,T)=\mathrm{Fut}_S(t,T)=\tilde E[S_T|\mathcal F_t]=\big(S_0+\int_0^t\frac{a\,ds}{Y_s}\big)Y_te^{r(T-t)}-\frac{a}{r}(1-e^{r(T-t)})$. So $\mathrm{For}_S(0,T)=S_0e^{rT}-\frac{a}{r}(1-e^{rT})$ and hence $X_T=S_T-\mathrm{For}_S(0,T)$. After the agent delivers the commodity, whose value is $S_T$, and receives the forward price $\mathrm{For}_S(0,T)$, the portfolio has exactly zero value.
$$dX_u=Y_u(b_uZ_u\,du+\sigma_uZ_u\,dW_u)+Z_u\Big(\frac{a_u-\sigma_u\gamma_u}{Z_u}\,du+\frac{\gamma_u}{Z_u}\,dW_u\Big)+\sigma_uZ_u\frac{\gamma_u}{Z_u}\,du$$
$$=[Y_ub_uZ_u+(a_u-\sigma_u\gamma_u)+\sigma_u\gamma_u]\,du+(\sigma_uZ_uY_u+\gamma_u)\,dW_u=(a_u+b_uX_u)\,du+(\gamma_u+\sigma_uX_u)\,dW_u.$$
Remark: To see how to find the above solution, we manipulate the equation (6.2.4) as follows. First, to remove the term $b_uX_u\,du$, we multiply both sides of (6.2.4) by the integrating factor $e^{-\int_t^u b_v\,dv}$. Then
$$d\big(X_ue^{-\int_t^u b_v\,dv}\big)=e^{-\int_t^u b_v\,dv}\big(a_u\,du+(\gamma_u+\sigma_uX_u)\,dW_u\big).$$
Let $\bar X_u=e^{-\int_t^u b_v\,dv}X_u$, $\bar a_u=e^{-\int_t^u b_v\,dv}a_u$ and $\bar\gamma_u=e^{-\int_t^u b_v\,dv}\gamma_u$; then
$$d\bar X_u=\bar a_u\,du+(\bar\gamma_u+\sigma_u\bar X_u)\,dW_u=(\bar a_u\,du+\bar\gamma_u\,dW_u)+\sigma_u\bar X_u\,dW_u.$$
To deal with the term $\sigma_u\bar X_u\,dW_u$, we consider $\hat X_u=\bar X_ue^{-\int_t^u\sigma_v\,dW_v}$. Then
$$d\hat X_u=e^{-\int_t^u\sigma_v\,dW_v}\big[(\bar a_u\,du+\bar\gamma_u\,dW_u)+\sigma_u\bar X_u\,dW_u\big]+\bar X_ue^{-\int_t^u\sigma_v\,dW_v}\Big[(-\sigma_u)\,dW_u+\frac12\sigma_u^2\,du\Big]+(\bar\gamma_u+\sigma_u\bar X_u)(-\sigma_u)e^{-\int_t^u\sigma_v\,dW_v}\,du$$
$$=\hat a_u\,du+\hat\gamma_u\,dW_u+\sigma_u\hat X_u\,dW_u-\sigma_u\hat X_u\,dW_u+\frac12\hat X_u\sigma_u^2\,du-\sigma_u(\hat\gamma_u+\sigma_u\hat X_u)\,du=\Big(\hat a_u-\sigma_u\hat\gamma_u-\frac12\hat X_u\sigma_u^2\Big)\,du+\hat\gamma_u\,dW_u,$$
where $\hat a_u=\bar a_ue^{-\int_t^u\sigma_v\,dW_v}$ and $\hat\gamma_u=\bar\gamma_ue^{-\int_t^u\sigma_v\,dW_v}$. Finally, multiplying by the integrating factor $e^{\frac12\int_t^u\sigma_v^2\,dv}$, we have
$$d\Big(\hat X_ue^{\frac12\int_t^u\sigma_v^2\,dv}\Big)=e^{\frac12\int_t^u\sigma_v^2\,dv}\Big(d\hat X_u+\frac12\hat X_u\sigma_u^2\,du\Big)=e^{\frac12\int_t^u\sigma_v^2\,dv}\big[(\hat a_u-\sigma_u\hat\gamma_u)\,du+\hat\gamma_u\,dW_u\big].$$
Writing everything back in terms of the original $X$, $a$ and $\gamma$, we get
$$d\Big(X_ue^{-\int_t^u b_v\,dv-\int_t^u\sigma_v\,dW_v+\frac12\int_t^u\sigma_v^2\,dv}\Big)=e^{\frac12\int_t^u\sigma_v^2\,dv-\int_t^u\sigma_v\,dW_v-\int_t^u b_v\,dv}\big[(a_u-\sigma_u\gamma_u)\,du+\gamma_u\,dW_u\big],$$
i.e.
$$d\Big(\frac{X_u}{Z_u}\Big)=\frac{1}{Z_u}\big[(a_u-\sigma_u\gamma_u)\,du+\gamma_u\,dW_u\big]=dY_u.$$
$$=\Delta_1(t)D_t[\beta(t,R_t)-\beta(t,R_t,T_1)]f_r(t,R_t,T_1)\,dt+\Delta_2(t)D_t[\beta(t,R_t)-\beta(t,R_t,T_2)]f_r(t,R_t,T_2)\,dt+D_t\gamma(t,R_t)[\Delta_1(t)f_r(t,R_t,T_1)+\Delta_2(t)f_r(t,R_t,T_2)]\,dW_t.$$

(ii)

Proof. Let $\Delta_1(t)=S_tf_r(t,R_t,T_2)$ and $\Delta_2(t)=-S_tf_r(t,R_t,T_1)$; then the $dW_t$ term vanishes and
$$d(D_tX_t)=S_tD_tf_r(t,R_t,T_1)f_r(t,R_t,T_2)[\beta(t,R_t,T_2)-\beta(t,R_t,T_1)]\,dt.$$
If $\beta(t,R_t,T_1)\ne\beta(t,R_t,T_2)$ for some $t\in[0,T]$, then under the assumption that $f_r(t,r,T)\ne0$ for all values of $r$ and $0\le t\le T$, we can choose the sign of $S_t$ so that $D_TX_T-D_0X_0>0$. To avoid arbitrage (see, for example, Exercise 5.7), we must have a.s. $\beta(t,R_t,T_1)=\beta(t,R_t,T_2)$ for $t\in[0,T]$. This implies $\beta(t,r,T)$ does not depend on $T$.
(iii)

Proof. In (6.9.4), let $\Delta_1(t)=\Delta(t)$, $T_1=T$ and $\Delta_2(t)=0$; we get
$$d(D_tX_t)=\Delta(t)D_t\Big[-R_tf(t,R_t,T)+f_t(t,R_t,T)+\beta(t,R_t)f_r(t,R_t,T)+\frac12\gamma^2(t,R_t)f_{rr}(t,R_t,T)\Big]\,dt+D_t\gamma(t,R_t)\Delta(t)f_r(t,R_t,T)\,dW_t.$$
This is formula (6.9.5).
If $f_r(t,r,T)=0$, then $d(D_tX_t)=\Delta(t)D_t\big[-R_tf(t,R_t,T)+f_t(t,R_t,T)+\frac12\gamma^2(t,R_t)f_{rr}(t,R_t,T)\big]\,dt$. We choose $\Delta(t)=\mathrm{sign}\big(-R_tf(t,R_t,T)+f_t(t,R_t,T)+\frac12\gamma^2(t,R_t)f_{rr}(t,R_t,T)\big)$. To avoid arbitrage in this case, we must have $f_t(t,R_t,T)+\frac12\gamma^2(t,R_t)f_{rr}(t,R_t,T)=R_tf(t,R_t,T)$, or equivalently, for any $r$ in the range of $R_t$, $f_t(t,r,T)+\frac12\gamma^2(t,r)f_{rr}(t,r,T)=rf(t,r,T)$.
6.3.

Proof. Using the ODE $C_s(s,T)=b_sC(s,T)-1$ satisfied by $C$, we note
$$\frac{d}{ds}\Big[e^{-\int_0^s b_v\,dv}C(s,T)\Big]=e^{-\int_0^s b_v\,dv}\big[-b_sC(s,T)+b_sC(s,T)-1\big]=-e^{-\int_0^s b_v\,dv}.$$
Integrating both sides of the equation from $t$ to $T$, we obtain
$$e^{-\int_0^T b_v\,dv}C(T,T)-e^{-\int_0^t b_v\,dv}C(t,T)=-\int_t^Te^{-\int_0^s b_v\,dv}\,ds.$$
Since $C(T,T)=0$, this gives $C(t,T)=\int_t^Te^{-\int_t^s b_v\,dv}\,ds$. Similarly, integrating $A_s(s,T)=-a(s)C(s,T)+\frac12\sigma^2(s)C^2(s,T)$ from $t$ to $T$ and using $A(T,T)=0$,
$$A(T,T)-A(t,T)=-\int_t^Ta(s)C(s,T)\,ds+\frac12\int_t^T\sigma^2(s)C^2(s,T)\,ds,$$
so $A(t,T)=\int_t^T\big(a(s)C(s,T)-\frac12\sigma^2(s)C^2(s,T)\big)\,ds$.
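For constant coefficients, $C(t,T)=\int_t^Te^{-b(s-t)}\,ds=(1-e^{-b(T-t)})/b$, and the ODE and the quadrature for $A$ can be checked numerically. The sketch below is a hedged check with illustrative parameter values (none of them come from the text).

```python
import numpy as np

b, sig2, a, T = 0.8, 0.04, 0.1, 2.0

def C(t):  # C(t,T) for constant b: int_t^T e^{-b(s-t)} ds
    return (1.0 - np.exp(-b * (T - t))) / b

# Finite-difference check of C_t(t,T) = b*C(t,T) - 1, and of C(T,T) = 0.
t, h = 0.7, 1e-6
lhs = (C(t + h) - C(t - h)) / (2 * h)
rhs = b * C(t) - 1.0
print(lhs, rhs, C(T))

# A(t,T) = int_t^T (a*C(s,T) - 0.5*sigma^2*C(s,T)^2) ds via the trapezoid rule.
s = np.linspace(t, T, 200_001)
f = a * C(s) - 0.5 * sig2 * C(s) ** 2
A = np.sum((f[:-1] + f[1:]) / 2) * (s[1] - s[0])
print(A)
```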
6.4. (i)

Proof. By the definition of $\varphi$, we have
$$\varphi'(t)=e^{\frac12\sigma^2\int_t^TC(u,T)\,du}\cdot\frac12\sigma^2(-1)C(t,T)=-\frac12\sigma^2\varphi(t)C(t,T).$$
So $C(t,T)=-\frac{2\varphi'(t)}{\sigma^2\varphi(t)}$. Differentiating both sides of the equation $\varphi'(t)=-\frac12\sigma^2\varphi(t)C(t,T)$, we get
$$\varphi''(t)=-\frac12\sigma^2[\varphi'(t)C(t,T)+\varphi(t)C'(t,T)]=-\frac12\sigma^2\Big[-\frac12\sigma^2\varphi(t)C^2(t,T)+\varphi(t)C'(t,T)\Big]=\frac14\sigma^4\varphi(t)C^2(t,T)-\frac12\sigma^2\varphi(t)C'(t,T).$$
So $C'(t,T)=\big[\frac14\sigma^4\varphi(t)C^2(t,T)-\varphi''(t)\big]\big/\big(\frac12\sigma^2\varphi(t)\big)=\frac12\sigma^2C^2(t,T)-\frac{2\varphi''(t)}{\sigma^2\varphi(t)}$.
(ii)

Proof. Plugging formulas (6.9.8) and (6.9.9) into (6.5.14), we get
$$-\frac{2\varphi''(t)}{\sigma^2\varphi(t)}+\frac12\sigma^2C^2(t,T)=b\Big(-\frac{2\varphi'(t)}{\sigma^2\varphi(t)}\Big)+\frac12\sigma^2C^2(t,T)-1,$$
i.e. $\varphi''(t)-b\varphi'(t)-\frac12\sigma^2\varphi(t)=0$.

(iii)

Proof. For solutions of the form $e^{\lambda(T-t)}$, the characteristic equation of $\varphi''(t)-b\varphi'(t)-\frac12\sigma^2\varphi(t)=0$ is $\lambda^2+b\lambda-\frac12\sigma^2=0$, with roots $-\frac12b\pm\gamma$, where $\gamma=\frac12\sqrt{b^2+2\sigma^2}$. So a general solution is
$$\varphi(t)=\frac{c_1}{\gamma-\frac12b}\,e^{(\gamma-\frac12b)(T-t)}+\frac{c_2}{\gamma+\frac12b}\,e^{-(\gamma+\frac12b)(T-t)}.$$
(iv)

Proof. From part (iii), it is easy to see $\varphi'(t)=-c_1e^{(\gamma-\frac12b)(T-t)}+c_2e^{-(\gamma+\frac12b)(T-t)}$. In particular,
$$0=C(T,T)=-\frac{2\varphi'(T)}{\sigma^2\varphi(T)}=\frac{2(c_1-c_2)}{\sigma^2\varphi(T)}.$$
So $c_1=c_2$.
(v)

Proof. We first recall the definitions and properties of $\sinh$ and $\cosh$:
$$\sinh z=\frac{e^z-e^{-z}}{2},\quad \cosh z=\frac{e^z+e^{-z}}{2},\quad (\sinh z)'=\cosh z,\quad (\cosh z)'=\sinh z.$$
Therefore, with $c_2=c_1$ and using $\gamma^2-\frac14b^2=\frac12\sigma^2$,
$$\varphi(t)=c_1e^{-\frac12b(T-t)}\Big[\frac{e^{\gamma(T-t)}}{\gamma-\frac12b}+\frac{e^{-\gamma(T-t)}}{\gamma+\frac12b}\Big]=\frac{c_1e^{-\frac12b(T-t)}}{\gamma^2-\frac14b^2}\Big[\Big(\gamma+\frac12b\Big)e^{\gamma(T-t)}+\Big(\gamma-\frac12b\Big)e^{-\gamma(T-t)}\Big]$$
$$=\frac{2c_1}{\sigma^2}e^{-\frac12b(T-t)}\big[b\sinh(\gamma(T-t))+2\gamma\cosh(\gamma(T-t))\big],$$
and
$$\varphi'(t)=\frac12b\cdot\frac{2c_1}{\sigma^2}e^{-\frac12b(T-t)}\big[b\sinh(\gamma(T-t))+2\gamma\cosh(\gamma(T-t))\big]+\frac{2c_1}{\sigma^2}e^{-\frac12b(T-t)}\big[-b\gamma\cosh(\gamma(T-t))-2\gamma^2\sinh(\gamma(T-t))\big]$$
$$=2c_1e^{-\frac12b(T-t)}\Big[\frac{b^2}{2\sigma^2}\sinh(\gamma(T-t))+\frac{b\gamma}{\sigma^2}\cosh(\gamma(T-t))-\frac{b\gamma}{\sigma^2}\cosh(\gamma(T-t))-\frac{2\gamma^2}{\sigma^2}\sinh(\gamma(T-t))\Big]$$
$$=2c_1e^{-\frac12b(T-t)}\,\frac{b^2-4\gamma^2}{2\sigma^2}\sinh(\gamma(T-t))=-2c_1e^{-\frac12b(T-t)}\sinh(\gamma(T-t)).$$
This implies
$$C(t,T)=-\frac{2\varphi'(t)}{\sigma^2\varphi(t)}=\frac{\sinh(\gamma(T-t))}{\gamma\cosh(\gamma(T-t))+\frac12b\sinh(\gamma(T-t))}.$$
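This closed form can be checked against the CIR Riccati equation in $\tau=T-t$, namely $\frac{dC}{d\tau}=1-bC-\frac12\sigma^2C^2$ with $C(0)=0$, which is equivalent to (6.5.14). The hedged sketch below (illustrative parameters) does this by finite differences.

```python
import numpy as np

b, sigma = 0.5, 0.3
gamma = 0.5 * np.sqrt(b * b + 2 * sigma * sigma)

def C(tau):  # candidate solution from part (v), with tau = T - t
    return np.sinh(gamma * tau) / (gamma * np.cosh(gamma * tau) + 0.5 * b * np.sinh(gamma * tau))

# Riccati ODE in tau: dC/dtau = 1 - b*C - 0.5*sigma^2*C^2, C(0) = 0.
tau, h = 1.3, 1e-6
lhs = (C(tau + h) - C(tau - h)) / (2 * h)
rhs = 1.0 - b * C(tau) - 0.5 * sigma * sigma * C(tau) ** 2
print(lhs, rhs, C(0.0))
```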
(vi)

Proof. By (6.5.15) and (6.9.8), $A'(t,T)=\frac{2a\varphi'(t)}{\sigma^2\varphi(t)}$. Hence
$$A(T,T)-A(t,T)=\int_t^T\frac{2a\varphi'(s)}{\sigma^2\varphi(s)}\,ds=\frac{2a}{\sigma^2}\ln\frac{\varphi(T)}{\varphi(t)},$$
and, since $A(T,T)=0$,
$$A(t,T)=-\frac{2a}{\sigma^2}\ln\frac{\varphi(T)}{\varphi(t)}=-\frac{2a}{\sigma^2}\ln\Big[\frac{\gamma e^{\frac12b(T-t)}}{\gamma\cosh(\gamma(T-t))+\frac12b\sinh(\gamma(T-t))}\Big].$$
6.5. (i)

Proof. Since $g(t,X_1(t),X_2(t))=E[h(X_1(T),X_2(T))|\mathcal F_t]$ and $e^{-rt}f(t,X_1(t),X_2(t))=E[e^{-rT}h(X_1(T),X_2(T))|\mathcal F_t]$, an iterated-conditioning argument shows $g(t,X_1(t),X_2(t))$ and $e^{-rt}f(t,X_1(t),X_2(t))$ are both martingales.

(ii) and (iii)

Proof. We note
$$dg(t,X_1(t),X_2(t))=g_t\,dt+g_{x_1}\,dX_1(t)+g_{x_2}\,dX_2(t)+\frac12g_{x_1x_1}\,dX_1(t)\,dX_1(t)+\frac12g_{x_2x_2}\,dX_2(t)\,dX_2(t)+g_{x_1x_2}\,dX_1(t)\,dX_2(t)$$
$$=\Big[g_t+g_{x_1}\beta_1+g_{x_2}\beta_2+\frac12g_{x_1x_1}(\gamma_{11}^2+\gamma_{12}^2+2\rho\gamma_{11}\gamma_{12})+g_{x_1x_2}(\gamma_{11}\gamma_{21}+\rho\gamma_{11}\gamma_{22}+\rho\gamma_{12}\gamma_{21}+\gamma_{12}\gamma_{22})+\frac12g_{x_2x_2}(\gamma_{21}^2+\gamma_{22}^2+2\rho\gamma_{21}\gamma_{22})\Big]\,dt+\text{martingale part}.$$
So we must have
$$g_t+g_{x_1}\beta_1+g_{x_2}\beta_2+\frac12g_{x_1x_1}(\gamma_{11}^2+\gamma_{12}^2+2\rho\gamma_{11}\gamma_{12})+g_{x_1x_2}(\gamma_{11}\gamma_{21}+\rho\gamma_{11}\gamma_{22}+\rho\gamma_{12}\gamma_{21}+\gamma_{12}\gamma_{22})+\frac12g_{x_2x_2}(\gamma_{21}^2+\gamma_{22}^2+2\rho\gamma_{21}\gamma_{22})=0.$$
Taking $\rho=0$ gives part (ii) as a special case. The PDE for $f$ can be obtained similarly.
6.6. (i)

Proof. With $R(t)=\sum_{j=1}^dX_j^2(t)$,
$$dR(t)=\sum_{j=1}^d\Big[2X_j(t)\,dX_j(t)+\frac14\sigma^2\,dt\Big]=\sum_{j=1}^d\Big[-bX_j^2(t)\,dt+\sigma X_j(t)\,dW_j(t)+\frac14\sigma^2\,dt\Big]=\Big(\frac{d}{4}\sigma^2-bR(t)\Big)\,dt+\sigma\sqrt{R(t)}\sum_{j=1}^d\frac{X_j(t)}{\sqrt{R(t)}}\,dW_j(t).$$
Let $B(t)=\sum_{j=1}^d\int_0^t\frac{X_j(s)}{\sqrt{R(s)}}\,dW_j(s)$; then $B$ is a local martingale with $dB(t)\,dB(t)=\sum_{j=1}^d\frac{X_j^2(t)}{R(t)}\,dt=dt$. So by Lévy's Theorem, $B$ is a Brownian motion. Therefore $dR(t)=(a-bR(t))\,dt+\sigma\sqrt{R(t)}\,dB(t)$ (with $a:=\frac{d}{4}\sigma^2$) and $R$ is a CIR interest rate process.
(iii)

Proof. By (6.9.16), $X_j(t)$ depends only on $W_j$ and is normally distributed with mean $\mu_j(t)=e^{-\frac12bt}X_j(0)$ and variance $v(t)=\frac{\sigma^2}{4b}[1-e^{-bt}]$. So $X_1(t),\dots,X_d(t)$ are independent normal random variables with common variance $v(t)$.
(iv)

Proof. Writing $\mu=\mu_j(t)$ and $v=v(t)$, for $u<\frac{1}{2v}$ we complete the square:
$$E\big[e^{uX_j^2(t)}\big]=\int_{-\infty}^{\infty}e^{ux^2}\frac{e^{-\frac{(x-\mu)^2}{2v}}}{\sqrt{2\pi v}}\,dx=\frac{e^{\frac{u\mu^2}{1-2uv}}}{\sqrt{1-2uv}}\int_{-\infty}^{\infty}\frac{\sqrt{1-2uv}}{\sqrt{2\pi v}}\,e^{-\frac{(1-2uv)\left(x-\frac{\mu}{1-2uv}\right)^2}{2v}}\,dx=\frac{e^{\frac{u\mu_j^2(t)}{1-2uv(t)}}}{\sqrt{1-2uv(t)}}.$$

(v)

Proof. By $R(t)=\sum_{j=1}^dX_j^2(t)$ and the independence of the $X_j(t)$'s,
$$E\big[e^{uR(t)}\big]=\prod_{j=1}^dE\big[e^{uX_j^2(t)}\big]=(1-2uv(t))^{-d/2}\,e^{\frac{u\sum_{j=1}^d\mu_j^2(t)}{1-2uv(t)}}=(1-2uv(t))^{-\frac{2a}{\sigma^2}}\,e^{\frac{ue^{-bt}R(0)}{1-2uv(t)}},$$
since $d=\frac{4a}{\sigma^2}$ and $\sum_{j=1}^d\mu_j^2(t)=e^{-bt}R(0)$.
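A quick Monte Carlo check of this moment-generating function (a hedged sketch with illustrative parameters, sampling each $X_j(t)$ exactly from its normal distribution):

```python
import numpy as np

rng = np.random.default_rng(2)
b, sigma, t, d = 1.0, 0.4, 0.7, 4            # d = 4a/sigma^2
X0 = np.array([0.3, -0.2, 0.1, 0.25])        # X_j(0); R(0) = sum of squares
R0 = np.sum(X0 ** 2)
v = sigma ** 2 / (4 * b) * (1 - np.exp(-b * t))
mu = np.exp(-0.5 * b * t) * X0               # exact means of X_j(t)

u = 0.5                                       # must satisfy u < 1/(2v)
n = 500_000
X = rng.normal(mu, np.sqrt(v), (n, d))        # exact samples of (X_1(t),...,X_d(t))
mc = np.mean(np.exp(u * np.sum(X ** 2, axis=1)))

closed = (1 - 2 * u * v) ** (-d / 2) * np.exp(u * np.exp(-b * t) * R0 / (1 - 2 * u * v))
print(mc, closed)
```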
6.7. (i)

Proof. $e^{-rt}c(t,S_t,V_t)=\tilde E[e^{-rT}(S_T-K)^+|\mathcal F_t]$ is a martingale by an iterated-conditioning argument. Since
$$d\big(e^{-rt}c(t,S_t,V_t)\big)=e^{-rt}\Big[-rc+c_t+c_srS_t+c_v(a-bV_t)+\frac12c_{ss}V_tS_t^2+\frac12c_{vv}\sigma^2V_t+c_{sv}\rho\sigma V_tS_t\Big]\,dt+\text{martingale part},$$
we conclude $rc=c_t+rsc_s+c_v(a-bv)+\frac12c_{ss}vs^2+\frac12c_{vv}\sigma^2v+\rho\sigma svc_{sv}$. This is equation (6.9.26).

(ii)
Proof. Suppose $c(t,s,v)=sf(t,\log s,v)-e^{-r(T-t)}Kg(t,\log s,v)$; then
$$c_t=sf_t-re^{-r(T-t)}Kg-e^{-r(T-t)}Kg_t,\qquad c_s=f+sf_s\frac1s-e^{-r(T-t)}Kg_s\frac1s=f+f_s-\frac{e^{-r(T-t)}K}{s}g_s,$$
$$c_v=sf_v-e^{-r(T-t)}Kg_v,\qquad c_{ss}=f_s\frac1s+f_{ss}\frac1s-e^{-r(T-t)}Kg_{ss}\frac1{s^2}+e^{-r(T-t)}Kg_s\frac1{s^2},$$
$$c_{sv}=f_v+f_{sv}-e^{-r(T-t)}\frac Ksg_{sv},\qquad c_{vv}=sf_{vv}-e^{-r(T-t)}Kg_{vv},$$
where all derivatives of $f$ and $g$ are evaluated at $(t,\log s,v)$. So
$$c_t+rsc_s+(a-bv)c_v+\frac12s^2vc_{ss}+\rho\sigma svc_{sv}+\frac12\sigma^2vc_{vv}$$
$$=s\Big[f_t+\Big(r+\frac12v\Big)f_s+(a-bv+\rho\sigma v)f_v+\frac12vf_{ss}+\rho\sigma vf_{sv}+\frac12\sigma^2vf_{vv}\Big]$$
$$\quad-Ke^{-r(T-t)}\Big[g_t+\Big(r-\frac12v\Big)g_s+(a-bv)g_v+\frac12vg_{ss}+\rho\sigma vg_{sv}+\frac12\sigma^2vg_{vv}\Big]+rsf-re^{-r(T-t)}Kg=rc,$$
where the two brackets vanish by the PDEs (6.9.32) and (6.9.33) for $f$ and $g$. That is, $c$ satisfies the PDE (6.9.26).
(iii)

Proof. First, by the Markov property, $f(t,X_t,V_t)=\tilde E[1_{\{X_T\ge\log K\}}|\mathcal F_t]$. So $f(T,X_T,V_T)=1_{\{X_T\ge\log K\}}$, which implies $f(T,x,v)=1_{\{x\ge\log K\}}$ for all $x\in\mathbb R$, $v\ge0$. Second, $f(t,X_t,V_t)$ is a martingale, so by differentiating $f$ and setting the $dt$ term to zero, we obtain the PDE (6.9.32) for $f$. Indeed,
$$df(t,X_t,V_t)=\Big[f_t+f_x\Big(r+\frac12V_t\Big)+f_v(a-bV_t+\rho\sigma V_t)+\frac12f_{xx}V_t+\frac12f_{vv}\sigma^2V_t+f_{xv}\rho\sigma V_t\Big]\,dt+\text{martingale part}.$$
So we must have $f_t+(r+\frac12v)f_x+(a-bv+\rho\sigma v)f_v+\frac12f_{xx}v+\frac12f_{vv}\sigma^2v+\rho\sigma vf_{xv}=0$. This is (6.9.32).
(iv)

Proof. Similar to (iii).

(v)

Proof. $c(T,s,v)=sf(T,\log s,v)-Kg(T,\log s,v)=s1_{\{\log s\ge\log K\}}-K1_{\{\log s\ge\log K\}}=1_{\{s\ge K\}}(s-K)=(s-K)^+$.

6.8.
Proof. We follow the hint. Suppose $h$ is smooth and compactly supported; then it is legitimate to exchange integration and differentiation:
$$g_t(t,x)=\frac{\partial}{\partial t}\int_0^\infty h(y)p(t,T,x,y)\,dy=\int_0^\infty h(y)p_t(t,T,x,y)\,dy,$$
$$g_x(t,x)=\int_0^\infty h(y)p_x(t,T,x,y)\,dy,\qquad g_{xx}(t,x)=\int_0^\infty h(y)p_{xx}(t,T,x,y)\,dy.$$
So (6.9.45) implies $\int_0^\infty h(y)\big[p_t(t,T,x,y)+\beta(t,x)p_x(t,T,x,y)+\frac12\gamma^2(t,x)p_{xx}(t,T,x,y)\big]\,dy=0$. By the arbitrariness of $h$, and assuming $\beta$, $p_t$, $p_x$, $\gamma$, $p_{xx}$ are all continuous, we have
$$p_t(t,T,x,y)+\beta(t,x)p_x(t,T,x,y)+\frac12\gamma^2(t,x)p_{xx}(t,T,x,y)=0.$$
This is (6.9.43).
6.9.

Proof. We first note $dh_b(X_u)=h_b'(X_u)\,dX_u+\frac12h_b''(X_u)\,dX_u\,dX_u=\big[h_b'(X_u)\beta(u,X_u)+\frac12\gamma^2(u,X_u)h_b''(X_u)\big]\,du+h_b'(X_u)\gamma(u,X_u)\,dW_u$. Integrating both sides and taking the expectation $E^{t,x}$, we have
$$E^{t,x}[h_b(X_T)]-h_b(x)=\int_t^TE^{t,x}\Big[h_b'(X_u)\beta(u,X_u)+\frac12\gamma^2(u,X_u)h_b''(X_u)\Big]\,du=\int_t^T\int_{-\infty}^{\infty}\Big[h_b'(y)\beta(u,y)+\frac12\gamma^2(u,y)h_b''(y)\Big]p(t,u,x,y)\,dy\,du.$$
Since $h_b$ vanishes outside $(0,b)$, the integration range can be changed from $(-\infty,\infty)$ to $(0,b)$, which gives (6.9.48).
By the integration-by-parts formula,
$$\int_0^b\beta(u,y)p(t,u,x,y)h_b'(y)\,dy=h_b(y)\beta(u,y)p(t,u,x,y)\Big|_0^b-\int_0^bh_b(y)\frac{\partial}{\partial y}\big(\beta(u,y)p(t,u,x,y)\big)\,dy=-\int_0^bh_b(y)\frac{\partial}{\partial y}\big(\beta(u,y)p(t,u,x,y)\big)\,dy,$$
and
$$\int_0^b\gamma^2(u,y)p(t,u,x,y)h_b''(y)\,dy=-\int_0^b\frac{\partial}{\partial y}\big(\gamma^2(u,y)p(t,u,x,y)\big)h_b'(y)\,dy=\int_0^b\frac{\partial^2}{\partial y^2}\big(\gamma^2(u,y)p(t,u,x,y)\big)h_b(y)\,dy.$$
Differentiating (6.9.48) with respect to $T$, we get
$$\int_0^bh_b(y)\frac{\partial}{\partial T}p(t,T,x,y)\,dy=-\int_0^b\frac{\partial}{\partial y}\big[\beta(T,y)p(t,T,x,y)\big]h_b(y)\,dy+\frac12\int_0^b\frac{\partial^2}{\partial y^2}\big[\gamma^2(T,y)p(t,T,x,y)\big]h_b(y)\,dy,$$
that is,
$$\int_0^bh_b(y)\Big[\frac{\partial}{\partial T}p(t,T,x,y)+\frac{\partial}{\partial y}\big(\beta(T,y)p(t,T,x,y)\big)-\frac12\frac{\partial^2}{\partial y^2}\big(\gamma^2(T,y)p(t,T,x,y)\big)\Big]\,dy=0.$$
This is (6.9.50).
By (6.9.50) and the arbitrariness of $h_b$, we conclude that for any $y\in(0,\infty)$,
$$\frac{\partial}{\partial T}p(t,T,x,y)+\frac{\partial}{\partial y}\big(\beta(T,y)p(t,T,x,y)\big)-\frac12\frac{\partial^2}{\partial y^2}\big(\gamma^2(T,y)p(t,T,x,y)\big)=0.$$
6.10.

Proof. Under the assumption that $\lim_{y\to\infty}(y-K)ry\tilde p(0,T,x,y)=0$, we have
$$-\int_K^\infty(y-K)\frac{\partial}{\partial y}\big(ry\tilde p(0,T,x,y)\big)\,dy=-(y-K)ry\tilde p(0,T,x,y)\Big|_K^\infty+\int_K^\infty ry\tilde p(0,T,x,y)\,dy=\int_K^\infty ry\tilde p(0,T,x,y)\,dy.$$
If we further assume (6.9.57) and (6.9.58), then using the integration-by-parts formula twice, we have
$$\frac12\int_K^\infty(y-K)\frac{\partial^2}{\partial y^2}\big(\sigma^2(T,y)y^2\tilde p(0,T,x,y)\big)\,dy=\frac12\Big[(y-K)\frac{\partial}{\partial y}\big(\sigma^2(T,y)y^2\tilde p(0,T,x,y)\big)\Big|_K^\infty-\int_K^\infty\frac{\partial}{\partial y}\big(\sigma^2(T,y)y^2\tilde p(0,T,x,y)\big)\,dy\Big]$$
$$=-\frac12\sigma^2(T,y)y^2\tilde p(0,T,x,y)\Big|_K^\infty=\frac12\sigma^2(T,K)K^2\tilde p(0,T,x,K).$$
Therefore,
$$c_T(0,T,x,K)=-rc(0,T,x,K)+e^{-rT}\int_K^\infty(y-K)\tilde p_T(0,T,x,y)\,dy$$
$$=-re^{-rT}\int_K^\infty(y-K)\tilde p(0,T,x,y)\,dy+e^{-rT}\int_K^\infty(y-K)\Big[-\frac{\partial}{\partial y}\big(ry\tilde p(0,T,x,y)\big)+\frac12\frac{\partial^2}{\partial y^2}\big(\sigma^2(T,y)y^2\tilde p(0,T,x,y)\big)\Big]\,dy$$
$$=-re^{-rT}\int_K^\infty(y-K)\tilde p(0,T,x,y)\,dy+e^{-rT}\int_K^\infty ry\tilde p(0,T,x,y)\,dy+\frac12e^{-rT}\sigma^2(T,K)K^2\tilde p(0,T,x,K)$$
$$=re^{-rT}K\int_K^\infty\tilde p(0,T,x,y)\,dy+\frac12e^{-rT}\sigma^2(T,K)K^2\tilde p(0,T,x,K)$$
$$=-rKc_K(0,T,x,K)+\frac12\sigma^2(T,K)K^2c_{KK}(0,T,x,K).$$
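In the constant-volatility case this is exactly the relation $c_T=-rKc_K+\frac12\sigma^2K^2c_{KK}$ satisfied by the Black-Scholes call price, which can be verified by finite differences. The hedged sketch below (illustrative parameters, not from the text) does this with the standard Black-Scholes formula.

```python
from math import erf, exp, log, sqrt

def N(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(x, K, T, r, sigma):
    d1 = (log(x / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return x * N(d1) - K * exp(-r * T) * N(d2)

x, K, T, r, sigma = 100.0, 95.0, 1.0, 0.04, 0.25
h = 1e-4
cT  = (bs_call(x, K, T + h, r, sigma) - bs_call(x, K, T - h, r, sigma)) / (2 * h)
cK  = (bs_call(x, K + h, T, r, sigma) - bs_call(x, K - h, T, r, sigma)) / (2 * h)
cKK = (bs_call(x, K + h, T, r, sigma) - 2 * bs_call(x, K, T, r, sigma)
       + bs_call(x, K - h, T, r, sigma)) / h**2
rhs = -r * K * cK + 0.5 * sigma**2 * K**2 * cKK
print(cT, rhs)   # the two sides should agree to discretization error
```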
7. Exotic Options
7.1. (i)
Proof. Since $\delta_\pm(\tau,s)=\frac{1}{\sigma\sqrt\tau}\big[\log s+\big(r\pm\frac12\sigma^2\big)\tau\big]=\frac{\log s}{\sigma\sqrt\tau}+\frac{(r\pm\frac12\sigma^2)\sqrt\tau}{\sigma}$,
$$\frac{\partial}{\partial\tau}\delta_\pm(\tau,s)=-\frac{\log s}{2\sigma}\tau^{-\frac32}+\frac{r\pm\frac12\sigma^2}{2\sigma}\tau^{-\frac12}=\frac{1}{2\tau}\Big[-\frac{\log s}{\sigma\sqrt\tau}+\frac{(r\pm\frac12\sigma^2)\sqrt\tau}{\sigma}\Big]=\frac{1}{2\tau}\,\delta_\pm\Big(\tau,\frac1s\Big).$$
(ii)

Proof.
$$\frac{\partial}{\partial x}\delta_\pm\Big(\tau,\frac xc\Big)=\frac{\partial}{\partial x}\frac{1}{\sigma\sqrt\tau}\Big[\log\frac xc+\Big(r\pm\frac12\sigma^2\Big)\tau\Big]=\frac{1}{\sigma\sqrt\tau\,x},\qquad
\frac{\partial}{\partial x}\delta_\pm\Big(\tau,\frac cx\Big)=\frac{\partial}{\partial x}\frac{1}{\sigma\sqrt\tau}\Big[\log\frac cx+\Big(r\pm\frac12\sigma^2\Big)\tau\Big]=-\frac{1}{\sigma\sqrt\tau\,x}.$$
(iii)

Proof.
$$N'(\delta_\pm(\tau,s))=\frac{1}{\sqrt{2\pi}}e^{-\frac{\delta_\pm^2(\tau,s)}{2}}=\frac{1}{\sqrt{2\pi}}\exp\Big\{-\frac{(\log s+r\tau)^2\pm\sigma^2\tau(\log s+r\tau)+\frac14\sigma^4\tau^2}{2\sigma^2\tau}\Big\}.$$
Therefore
$$\frac{N'(\delta_+(\tau,s))}{N'(\delta_-(\tau,s))}=e^{-\frac{2\sigma^2\tau(\log s+r\tau)}{2\sigma^2\tau}}=\frac{e^{-r\tau}}{s}.$$
Moreover,
$$\frac{N'(\delta_-(\tau,\frac1s))}{N'(\delta_-(\tau,s))}=e^{-\frac12\left[\delta_-^2(\tau,\frac1s)-\delta_-^2(\tau,s)\right]}=e^{\frac{4r\tau\log s-2\sigma^2\tau\log s}{2\sigma^2\tau}}=e^{\left(\frac{2r}{\sigma^2}-1\right)\log s}=s^{\frac{2r}{\sigma^2}-1}.$$
So $N'(\delta_-(\tau,\frac1s))=s^{\frac{2r}{\sigma^2}-1}N'(\delta_-(\tau,s))$.
(v)

Proof. $\delta_+(\tau,s)-\delta_-(\tau,s)=\frac{1}{\sigma\sqrt\tau}\big[\log s+\big(r+\frac12\sigma^2\big)\tau\big]-\frac{1}{\sigma\sqrt\tau}\big[\log s+\big(r-\frac12\sigma^2\big)\tau\big]=\sigma\sqrt\tau$.

(vi)

Proof. $\delta_-(\tau,s)-\delta_-(\tau,\frac1s)=\frac{1}{\sigma\sqrt\tau}\big[\log s+\big(r-\frac12\sigma^2\big)\tau\big]-\frac{1}{\sigma\sqrt\tau}\big[-\log s+\big(r-\frac12\sigma^2\big)\tau\big]=\frac{2}{\sigma\sqrt\tau}\log s$.

(vii)

Proof. $N'(y)=\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}$, so $N''(y)=\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}\big(-\frac{y^2}{2}\big)'=-yN'(y)$.
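These elementary identities are easy to confirm numerically; the hedged sketch below (arbitrary parameter values) checks parts (iii), (v), and (vi).

```python
import numpy as np

r, sigma, tau, s = 0.06, 0.35, 0.8, 1.7

def delta(sign, tau, s):
    # delta_{+/-}(tau, s) = [log s + (r +/- sigma^2/2) tau] / (sigma sqrt(tau))
    return (np.log(s) + (r + sign * 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))

def Nprime(y):
    return np.exp(-0.5 * y * y) / np.sqrt(2 * np.pi)

dp, dm = delta(+1, tau, s), delta(-1, tau, s)
print(dp - dm, sigma * np.sqrt(tau))                                       # part (v)
print(dm - delta(-1, tau, 1 / s), 2 * np.log(s) / (sigma * np.sqrt(tau)))  # part (vi)
print(Nprime(dp) / Nprime(dm), np.exp(-r * tau) / s)                       # part (iii), 1st identity
print(Nprime(delta(-1, tau, 1 / s)), s ** (2 * r / sigma**2 - 1) * Nprime(dm))  # part (iii), 2nd
```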
To be continued ...
7.3.

Proof. We note $S_T=S_0e^{\sigma\hat W_T}=S_te^{\sigma(\hat W_T-\hat W_t)}$, that $\hat W_T-\hat W_t=(\tilde W_T-\tilde W_t)+\alpha(T-t)$ is independent of $\mathcal F_t$, that $\sup_{t\le u\le T}(\hat W_u-\hat W_t)$ is independent of $\mathcal F_t$, and that
$$Y_T=S_0e^{\sigma\hat M_T}=S_0e^{\sigma\max\left(\hat M_t,\,\sup_{t\le u\le T}\hat W_u\right)}=S_0e^{\sigma\hat M_t}1_{\left\{\hat M_t\ge\sup_{t\le u\le T}\hat W_u\right\}}+S_0e^{\sigma\sup_{t\le u\le T}\hat W_u}1_{\left\{\hat M_t<\sup_{t\le u\le T}\hat W_u\right\}}$$
$$=Y_t1_{\left\{\frac{Y_t}{S_t}\ge e^{\sigma\sup_{t\le u\le T}(\hat W_u-\hat W_t)}\right\}}+S_te^{\sigma\sup_{t\le u\le T}(\hat W_u-\hat W_t)}1_{\left\{\frac{Y_t}{S_t}<e^{\sigma\sup_{t\le u\le T}(\hat W_u-\hat W_t)}\right\}}.$$
$$\Big|\sum_{j=1}^m(Y_{t_j}-Y_{t_{j-1}})(S_{t_j}-S_{t_{j-1}})\Big|\le\sqrt{\sum_{j=1}^m(Y_{t_j}-Y_{t_{j-1}})^2}\,\sqrt{\sum_{j=1}^m(S_{t_j}-S_{t_{j-1}})^2}\le\sqrt{\max_{1\le j\le m}|Y_{t_j}-Y_{t_{j-1}}|\,(Y_T-Y_0)}\,\sqrt{\sum_{j=1}^m(S_{t_j}-S_{t_{j-1}})^2}.$$
So $v_L'(L+)=v_L'(L-)$ if and only if $-\frac{2r}{\sigma^2L}(K-L)=-1$, i.e. $L=\frac{2rK}{2r+\sigma^2}$.
8.2.

Proof. By the calculation in Section 8.3.3, we can see $v_2(x)\ge(K_2-x)^+\ge(K_1-x)^+$, that $rv_2(x)-rxv_2'(x)-\frac12\sigma^2x^2v_2''(x)\ge0$ for all $x\ge0$, and that for $0\le x<L_{1*}<L_{2*}$,
$$rv_2(x)-rxv_2'(x)-\frac12\sigma^2x^2v_2''(x)=rK_2>rK_1>0.$$
The linear complementarity conditions for $v_2$ imply $v_2(x)=(K_2-x)^+=K_2-x>K_1-x=(K_1-x)^+$ on $[0,L_{1*}]$. Hence $v_2(x)$ does not satisfy the third linear complementarity condition for $v_1$: for each $x\ge0$, equality holds in either (8.8.1) or (8.8.2) or both.
8.3. (i)
Proof. Suppose $x$ takes its values in a domain bounded away from 0. By the general theory of linear differential equations, if we can find two linearly independent solutions $v_1(x)$, $v_2(x)$ of (8.8.4), then any solution of (8.8.4) can be represented in the form $C_1v_1+C_2v_2$, where $C_1$ and $C_2$ are constants. So it suffices to find two linearly independent special solutions of (8.8.4). Assuming $v(x)=x^p$ for some constant $p$ to be determined, (8.8.4) yields $x^p\big(r-rp-\frac12\sigma^2p(p-1)\big)=0$. Solving the quadratic equation $0=r-rp-\frac12\sigma^2p(p-1)=-(p-1)\big(\frac12\sigma^2p+r\big)$, we get $p=1$ or $-\frac{2r}{\sigma^2}$. So a general solution of (8.8.4) has the form $C_1x+C_2x^{-\frac{2r}{\sigma^2}}$.
(ii)

Proof. Assume there is an interval $[x_1,x_2]$, where $0<x_1<x_2<\infty$, such that $v(x)\not\equiv0$ satisfies (8.3.19) with equality on $[x_1,x_2]$ and satisfies (8.3.18) with equality for $x$ at and immediately to the left of $x_1$ and for $x$ at and immediately to the right of $x_2$; then we can find some $C_1$ and $C_2$ so that $v(x)=C_1x+C_2x^{-\frac{2r}{\sigma^2}}$ on $[x_1,x_2]$. If for some $x_0\in[x_1,x_2]$, $v(x_0)=v'(x_0)=0$, then by the uniqueness of the solution of (8.8.4) we would conclude $v\equiv0$. This is a contradiction. So such an $x_0$ cannot exist. This implies $0<x_1<x_2<K$ (if $K\le x_2$, then $v(x_2)=(K-x_2)^+=0$ and $v'(x_2)=$ the right derivative of $(K-x)^+$ at $x_2$, which is 0).¹ Thus we have four equations for $C_1$ and $C_2$:
$$C_1x_1+C_2x_1^{-\frac{2r}{\sigma^2}}=K-x_1,\qquad C_1x_2+C_2x_2^{-\frac{2r}{\sigma^2}}=K-x_2,$$
$$C_1-\frac{2r}{\sigma^2}C_2x_1^{-\frac{2r}{\sigma^2}-1}=-1,\qquad C_1-\frac{2r}{\sigma^2}C_2x_2^{-\frac{2r}{\sigma^2}-1}=-1.$$
Since $x_1\ne x_2$, the last two equations imply $C_2=0$. Plugging $C_2=0$ into the first two equations, we have $C_1=\frac{K-x_1}{x_1}=\frac{K-x_2}{x_2}$; plugging $C_2=0$ into the last two equations, we have $C_1=-1$. Combined, we would have $x_1=x_2$. Contradiction. Therefore our initial assumption is incorrect, and the only solution $v$ that satisfies the specified conditions in the problem is the zero solution.
(iii)

Proof. If in a right neighborhood of 0, $v$ satisfies (8.3.19) with equality, then part (i) implies $v(x)=C_1x+C_2x^{-\frac{2r}{\sigma^2}}$ for some constants $C_1$ and $C_2$. Then $v(0+)=\lim_{x\downarrow0}v(x)=0<(K-0)^+$, i.e. (8.3.18) would be violated. So we must have $rv-rxv'-\frac12\sigma^2x^2v''>0$ in a right neighborhood of 0. According to (8.3.20), $v(x)=(K-x)^+$ near 0, so $v(0)=K$. We have thus concluded simultaneously that $v$ cannot satisfy (8.3.19) with equality near 0 and that $v(0)=K$, starting from the first principles (8.3.18)-(8.3.20).

(iv)

Proof. This is already shown in our solution of part (iii): near 0, $v$ cannot satisfy (8.3.19) with equality.

(v)

Proof. If $v$ satisfies (8.3.18) with equality for all $x\ge0$, i.e. $v(x)=(K-x)^+$, then $v$ cannot have a continuous derivative as stated in the problem, since $(K-x)^+$ is not differentiable at $K$. This is a contradiction.
(vi)
¹ Note we have interpreted the condition "$v(x)$ satisfies (8.3.18) with equality for $x$ at and immediately to the right of $x_2$" as $v(x_2)=(K-x_2)^+$ and $v'(x_2)=$ the right derivative of $(K-x)^+$ at $x_2$. This is weaker than "$v(x)=(K-x)$ in a right neighborhood of $x_2$."
Proof. By the result of part (i), we can start with $v(x)=(K-x)^+$ on $[0,x_1]$ and $v(x)=C_1x+C_2x^{-\frac{2r}{\sigma^2}}$ on $[x_1,\infty)$. By the assumption of the problem, both $v$ and $v'$ are continuous. Since $(K-x)^+$ is not differentiable at $K$, we must have $x_1\le K$. This gives us the equations
$$K-x_1=(K-x_1)^+=C_1x_1+C_2x_1^{-\frac{2r}{\sigma^2}},\qquad -1=C_1-\frac{2r}{\sigma^2}C_2x_1^{-\frac{2r}{\sigma^2}-1}.$$
Because $v$ is assumed to be bounded, we must have $C_1=0$, and the above equations then have only two unknowns: $C_2$ and $x_1$. Solving them for $C_2$ and $x_1$, we are done.
8.4. (i)

Proof. This is already shown in part (i) of Exercise 8.3.

(ii)

Proof. We solve for $A$, $B$ the equations
$$\begin{cases}AL^{-\frac{2r}{\sigma^2}}+BL=K-L\\[1mm] -\frac{2r}{\sigma^2}AL^{-\frac{2r}{\sigma^2}-1}+B=-1,\end{cases}$$
and we obtain $A=\dfrac{\sigma^2KL^{\frac{2r}{\sigma^2}}}{\sigma^2+2r}$, $B=\dfrac{2rK}{L(\sigma^2+2r)}-1$.
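These formulas for $A$ and $B$ can be verified directly: with $f(x)=Ax^{-2r/\sigma^2}+Bx$, value matching $f(L)=K-L$ and smooth pasting $f'(L)=-1$ should both hold. A minimal numerical check (arbitrary parameters):

```python
r, sigma, K, L = 0.05, 0.3, 10.0, 4.0
p = 2 * r / sigma**2
A = sigma**2 * K * L**p / (sigma**2 + 2 * r)
B = 2 * r * K / (L * (sigma**2 + 2 * r)) - 1.0

f  = lambda x: A * x**(-p) + B * x
fp = lambda x: -p * A * x**(-p - 1) + B     # f'

print(f(L), K - L)       # value matching at L
print(fp(L), -1.0)       # smooth pasting at L
```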
(iii)

Proof. By (8.8.5), $B>0$. So for $x\ge K$, $f(x)\ge BK>0=(K-x)^+$. If $L\le x<K$,
$$f(x)-(K-x)^+=\frac{\sigma^2KL^{\frac{2r}{\sigma^2}}}{\sigma^2+2r}x^{-\frac{2r}{\sigma^2}}+\frac{2rKx}{L(\sigma^2+2r)}-K=\frac{K}{\sigma^2+2r}\Big[\sigma^2\Big(\frac xL\Big)^{-\frac{2r}{\sigma^2}}+2r\Big(\frac xL\Big)-(\sigma^2+2r)\Big].$$
The function $g(\theta)=\sigma^2\theta^{-\frac{2r}{\sigma^2}}+2r\theta-(\sigma^2+2r)$ satisfies $g(1)=0$ and $g'(\theta)=2r\big(1-\theta^{-\frac{2r}{\sigma^2}-1}\big)\ge0$ for $\theta\ge1$, so the bracket is non-negative for $x\ge L$ and hence $f(x)\ge(K-x)^+$.
(iv)

Proof. Since $\lim_{x\to\infty}v(x)=\lim_{x\to\infty}f(x)=\infty$ and $\lim_{x\to\infty}v_{L_*}(x)=\lim_{x\to\infty}(K-L_*)\big(\frac{x}{L_*}\big)^{-\frac{2r}{\sigma^2}}=0$, $v(x)$ and $v_{L_*}(x)$ are different. By part (iii), $v(x)\ge(K-x)^+$, so $v$ satisfies (8.3.18). For $x\ge L$, $rv-rxv'-\frac12\sigma^2x^2v''=rf-rxf'-\frac12\sigma^2x^2f''=0$. For $0\le x\le L$, $rv-rxv'-\frac12\sigma^2x^2v''=r(K-x)+rx=rK$. Combined, $rv-rxv'-\frac12\sigma^2x^2v''\ge0$ for $x\ge0$, so $v$ satisfies (8.3.19). Along the way, we have also shown that $v$ satisfies (8.3.20). In summary, $v$ satisfies the linear complementarity conditions (8.3.18)-(8.3.20), but $v$ is not the function $v_{L_*}$ given by (8.3.13).
(v)

Proof. By part (ii), $B=0$ if and only if $\frac{2rK}{L(\sigma^2+2r)}-1=0$, i.e. $L=\frac{2rK}{2r+\sigma^2}$. In this case, $K-L=\frac{\sigma^2K}{\sigma^2+2r}$ and
$$v(x)=Ax^{-\frac{2r}{\sigma^2}}=\frac{\sigma^2K}{\sigma^2+2r}\Big(\frac xL\Big)^{-\frac{2r}{\sigma^2}}=(K-L)\Big(\frac xL\Big)^{-\frac{2r}{\sigma^2}}=v_L(x)\quad\text{on the interval }[L,\infty).$$
8.5. The difficulty of the dividend-paying case is that from Lemma 8.3.4 we can only obtain $\tilde E[e^{-(r-a)\tau_L}]$, not $\tilde E[e^{-r\tau_L}]$. So we have to start from Theorem 8.3.2.

(i)

Proof. By (8.8.9), $S_t=S_0e^{\sigma\tilde W_t+(r-a-\frac12\sigma^2)t}$. Assume $S_0=x$; then $S_t=L$ if and only if $-\tilde W_t-\frac1\sigma\big(r-a-\frac12\sigma^2\big)t=\frac1\sigma\log\frac xL$.

(ii)

Proof. $\frac{\partial v_L(x)}{\partial L}=\big(\frac xL\big)^{-\gamma}\big(-1+\frac{\gamma(K-L)}{L}\big)$. Setting $\frac{\partial v_L(x)}{\partial L}=0$ gives $L_*=\frac{\gamma K}{\gamma+1}$.
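That $L_*=\gamma K/(\gamma+1)$ indeed maximizes $L\mapsto v_L(x)=(K-L)(x/L)^{-\gamma}$ for fixed $x>L$ is easy to confirm on a grid (a hedged sketch with arbitrary parameters):

```python
import numpy as np

K, gamma, x = 10.0, 1.5, 12.0
L_grid = np.linspace(0.01, K - 0.01, 100_000)       # candidate barriers, all < x
vals = (K - L_grid) * (x / L_grid) ** (-gamma)      # v_L(x) on the grid
L_star_numeric = L_grid[np.argmax(vals)]
L_star_closed = gamma * K / (gamma + 1)
print(L_star_numeric, L_star_closed)   # both near 6.0
```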
(iii)
Proof. By Itô's formula, we have
$$d\big(e^{-rt}v_{L_*}(S_t)\big)=e^{-rt}\Big[-rv_{L_*}(S_t)+v_{L_*}'(S_t)(r-a)S_t+\frac12v_{L_*}''(S_t)\sigma^2S_t^2\Big]\,dt+e^{-rt}v_{L_*}'(S_t)\sigma S_t\,d\tilde W_t.$$
If $x>L_*$,
$$-rv_{L_*}(x)+v_{L_*}'(x)(r-a)x+\frac12v_{L_*}''(x)\sigma^2x^2$$
$$=-r(K-L_*)\Big(\frac{x}{L_*}\Big)^{-\gamma}+(r-a)x(-\gamma)(K-L_*)\frac{x^{-\gamma-1}}{L_*^{-\gamma}}+\frac12\sigma^2x^2(-\gamma)(-\gamma-1)(K-L_*)\frac{x^{-\gamma-2}}{L_*^{-\gamma}}$$
$$=(K-L_*)\Big(\frac{x}{L_*}\Big)^{-\gamma}\Big[-r-(r-a)\gamma+\frac12\sigma^2\gamma(\gamma+1)\Big].$$
By the definition of $\gamma$, if we set $u=r-a-\frac12\sigma^2$, then $\gamma=\frac{u+\sqrt{u^2+2\sigma^2r}}{\sigma^2}$, and
$$-r-(r-a)\gamma+\frac12\sigma^2\gamma(\gamma+1)=\frac12\sigma^2\gamma^2-u\gamma-r=\frac{\big(u+\sqrt{u^2+2\sigma^2r}\big)^2-2u\big(u+\sqrt{u^2+2\sigma^2r}\big)-2\sigma^2r}{2\sigma^2}=\frac{(u^2+2\sigma^2r)-u^2-2\sigma^2r}{2\sigma^2}=0.$$
If $x<L_*$, $-rv_{L_*}(x)+v_{L_*}'(x)(r-a)x+\frac12v_{L_*}''(x)\sigma^2x^2=-r(K-x)+(-1)(r-a)x=-rK+ax$. Combined, we get
$$d\big(e^{-rt}v_{L_*}(S_t)\big)=-e^{-rt}1_{\{S_t<L_*\}}(rK-aS_t)\,dt+e^{-rt}v_{L_*}'(S_t)\sigma S_t\,d\tilde W_t.$$
Following the reasoning in the proof of Theorem 8.3.5, we only need to show $1_{\{x<L_*\}}(rK-ax)\ge0$ to finish the solution. This is further equivalent to proving $rK-aL_*\ge0$. Plugging $L_*=\frac{\gamma K}{\gamma+1}$ into the expression and noting $\gamma\ge0$, the inequality reduces to $r(\gamma+1)-a\gamma\ge0$. We prove this inequality as follows.

Assume for some $K$, $r$, $a$ and $\sigma$ ($K$ and $\sigma$ are assumed to be strictly positive, $r$ and $a$ non-negative) that $rK-aL_*<0$; then necessarily $r<a$, since $L_*=\frac{\gamma K}{\gamma+1}\le K$. As shown before, this means $r(\gamma+1)-a\gamma<0$. Define $\theta=\frac{r-a}{\sigma}$; then $\theta<0$ and
$$\gamma=\frac1\sigma\Big[\Big(\theta-\frac12\sigma\Big)+\sqrt{\Big(\theta-\frac12\sigma\Big)^2+2r}\,\Big].$$
We have
$$r(\gamma+1)-a\gamma<0\iff(r-a)\gamma+r<0\iff\theta\Big[\Big(\theta-\frac12\sigma\Big)+\sqrt{\Big(\theta-\frac12\sigma\Big)^2+2r}\,\Big]+r<0$$
$$\iff\theta\sqrt{\Big(\theta-\frac12\sigma\Big)^2+2r}<-r-\theta\Big(\theta-\frac12\sigma\Big)\;(<0)$$
$$\implies\theta^2\Big[\Big(\theta-\frac12\sigma\Big)^2+2r\Big]>r^2+\theta^2\Big(\theta-\frac12\sigma\Big)^2+2r\theta\Big(\theta-\frac12\sigma\Big)\iff0>r^2-r\sigma\theta.$$
Since $r\ge0$ and $\sigma\theta<0$, we have $r^2-r\sigma\theta\ge0$, a contradiction. So our initial assumption is incorrect, and $rK-aL_*\ge0$ must be true.
(iv)

Proof. The proof is similar to that of Corollary 8.3.6. Note the only properties used in the proof of Corollary 8.3.6 are that $e^{-rt}v_{L_*}(S_t)$ is a supermartingale, that $e^{-r(t\wedge\tau_{L_*})}v_{L_*}(S_{t\wedge\tau_{L_*}})$ is a martingale, and that $v_{L_*}(x)\ge(K-x)^+$. Part (iii) already proved the supermartingale-martingale property, so it suffices to show $v_{L_*}(x)\ge(K-x)^+$ in our problem. Indeed, since $\gamma\ge0$, $L_*=\frac{\gamma K}{\gamma+1}<K$. For $x\ge K>L_*$, $v_{L_*}(x)>0=(K-x)^+$; for $0\le x<L_*$, $v_{L_*}(x)=K-x=(K-x)^+$; finally, for $L_*\le x\le K$,
$$\frac{d}{dx}\big(v_{L_*}(x)-(K-x)\big)=-\gamma(K-L_*)\frac{x^{-\gamma-1}}{L_*^{-\gamma}}+1\ge-\gamma(K-L_*)\frac{L_*^{-\gamma-1}}{L_*^{-\gamma}}+1=-\gamma\Big(K-\frac{\gamma K}{\gamma+1}\Big)\frac{\gamma+1}{\gamma K}+1=0,$$
and $\big(v_{L_*}(x)-(K-x)\big)\big|_{x=L_*}=0$. So for $L_*\le x\le K$, $v_{L_*}(x)-(K-x)^+\ge0$. Combined, we have $v_{L_*}(x)\ge(K-x)^+\ge0$ for all $x\ge0$.
8.6.

Proof. By Lemma 8.5.1, $X_t=e^{-rt}(S_t-K)^+$ is a submartingale. For any $\tau\in\Gamma_{0,T}$, Theorem 8.8.1 implies
$$\tilde E[e^{-rT}(S_T-K)^+]\ge\tilde E[e^{-r(T\wedge\tau)}(S_{T\wedge\tau}-K)^+]\ge\tilde E[e^{-r\tau}(S_\tau-K)^+1_{\{\tau<\infty\}}]=\tilde E[e^{-r\tau}(S_\tau-K)^+],$$
where we take the convention that $e^{-r\tau}(S_\tau-K)^+=0$ when $\tau=\infty$. Since $\tau$ is arbitrarily chosen, $\tilde E[e^{-rT}(S_T-K)^+]\ge\max_{\tau\in\Gamma_{0,T}}\tilde E[e^{-r\tau}(S_\tau-K)^+]$. The reverse inequality is trivial, since $T\in\Gamma_{0,T}$.
8.7.
$\tilde P$ (resp. $\bar P$), such that $\frac SB$ (resp. $\frac{S}{\bar B}$) is a martingale under $\tilde P$ (resp. $\bar P$). Third, if the market is arbitrage-free and complete, the two risk-neutral measures are related by
$$d\bar P=\frac{\bar B_T/\bar B_0}{B_T/B_0}\,d\tilde P.$$
Finally, if $f_T$ is a European contingent claim with maturity $T$ and the market is both arbitrage-free and complete, then
$$B_t\tilde E\Big[\frac{f_T}{B_T}\Big|\mathcal F_t\Big]=\bar B_t\bar E\Big[\frac{f_T}{\bar B_T}\Big|\mathcal F_t\Big].$$
That is, the price of $f_T$ is independent of the choice of numeraire.
The above theoretical results can be applied to markets involving a foreign money market account. We consider the following market: a domestic money market account $M$ ($M_0=1$), a foreign money market account $M^f$ ($M_0^f=1$), and a (vector) asset price process $S$ called the stock. Suppose the domestic vs. foreign currency exchange rate is $Q$. Note $Q$ is not a traded asset. Denominated in domestic currency, the traded assets are $(M,M^fQ,S)$, where $M^fQ$ can be seen as the price process of one unit of foreign currency. The domestic risk-neutral measure $\tilde P$ is such that $\big(\frac{M^fQ}{M},\frac SM\big)$ is a $\tilde P$-martingale. Denominated in foreign currency, the traded assets are $\big(M^f,\frac MQ,\frac SQ\big)$. The foreign risk-neutral measure $\tilde P^f$ is such that $\big(\frac{M}{QM^f},\frac{S}{QM^f}\big)$ is a $\tilde P^f$-martingale. This is a change of numeraire in the market denominated in domestic currency, from $M$ to $M^fQ$. If we assume the market is arbitrage-free and complete, the foreign risk-neutral measure is
$$d\tilde P^f=\frac{Q_TM_T^f/(Q_0M_0^f)}{M_T/M_0}\,d\tilde P=\frac{Q_TD_TM_T^f}{Q_0}\,d\tilde P.$$
$\frac{M_1(t)}{M_2(t)}$ is a martingale under $P^{M_2}$.

(ii)

Proof. Let $M_1(t)=D_tS_t$ and $M_2(t)=D_tN_t/N_0$. Then $\tilde P^{(N)}$ as defined in (9.2.6) is $P^{M_2}$ as defined in Remark 9.2.5. Hence $\frac{M_1(t)}{M_2(t)}=\frac{D_tS_t}{D_tN_t/N_0}=\frac{S_t}{N_t}N_0$ is a martingale under $\tilde P^{(N)}$, which implies $S_t^{(N)}=\frac{S_t}{N_t}$ is a martingale under $\tilde P^{(N)}$.
9.2. (i)

Proof. Since $N_t^{-1}=N_0^{-1}e^{-\nu\tilde W_t-(r-\frac12\nu^2)t}$, we have
$$d(N_t^{-1})=N_0^{-1}e^{-\nu\tilde W_t-(r-\frac12\nu^2)t}\Big[-\nu\,d\tilde W_t-\Big(r-\frac12\nu^2\Big)\,dt+\frac12\nu^2\,dt\Big]=-N_t^{-1}(\nu\,d\hat W_t+r\,dt),$$
where $\hat W_t=\tilde W_t-\nu t$.

(ii)

Proof.
$$d\hat M_t=M_t\,d\Big(\frac1{N_t}\Big)+\frac1{N_t}\,dM_t+d\Big(\frac1{N_t}\Big)dM_t=-\hat M_t(\nu\,d\hat W_t+r\,dt)+r\hat M_t\,dt=-\nu\hat M_t\,d\hat W_t.$$
(iii)

Proof.
$$d\hat X_t=X_t\,d\Big(\frac1{N_t}\Big)+\frac1{N_t}\,dX_t+d\Big(\frac1{N_t}\Big)dX_t$$
$$=(\Delta_tS_t+\Gamma_tM_t)\,d\Big(\frac1{N_t}\Big)+\frac1{N_t}(\Delta_t\,dS_t+\Gamma_t\,dM_t)+d\Big(\frac1{N_t}\Big)(\Delta_t\,dS_t+\Gamma_t\,dM_t)$$
$$=\Delta_t\Big[S_t\,d\Big(\frac1{N_t}\Big)+\frac1{N_t}\,dS_t+d\Big(\frac1{N_t}\Big)dS_t\Big]+\Gamma_t\Big[M_t\,d\Big(\frac1{N_t}\Big)+\frac1{N_t}\,dM_t+d\Big(\frac1{N_t}\Big)dM_t\Big]=\Delta_t\,d\hat S_t+\Gamma_t\,d\hat M_t.$$
9.3. (i)

Proof. $N_t=N_0e^{\nu\tilde W_3(t)+(r-\frac12\nu^2)t}$. So
$$dN_t^{-1}=d\Big(N_0^{-1}e^{-\nu\tilde W_3(t)-(r-\frac12\nu^2)t}\Big)=N_0^{-1}e^{-\nu\tilde W_3(t)-(r-\frac12\nu^2)t}\Big[-\nu\,d\tilde W_3(t)-\Big(r-\frac12\nu^2\Big)\,dt+\frac12\nu^2\,dt\Big]=-N_t^{-1}\big[\nu\,d\tilde W_3(t)+(r-\nu^2)\,dt\big],$$
and
$$dS_t^{(N)}=d\Big(\frac{S_t}{N_t}\Big)=S_t\,dN_t^{-1}+N_t^{-1}\,dS_t+dS_t\,dN_t^{-1}=S_t^{(N)}\big[(\nu^2-\rho\sigma\nu)\,dt+\sigma\,d\tilde W_1(t)-\nu\,d\tilde W_3(t)\big].$$
Define $\gamma=\sqrt{\sigma^2-2\rho\sigma\nu+\nu^2}$ and $\tilde W_4(t)=\frac\sigma\gamma\tilde W_1(t)-\frac\nu\gamma\tilde W_3(t)$; then $\tilde W_4$ is a martingale with quadratic variation
$$[\tilde W_4]_t=\frac{\sigma^2}{\gamma^2}t-2\frac{\rho\sigma\nu}{\gamma^2}t+\frac{\nu^2}{\gamma^2}t=t,$$
so by Lévy's Theorem $\tilde W_4$ is a Brownian motion, $dS_t^{(N)}=S_t^{(N)}\big[(\nu^2-\rho\sigma\nu)\,dt+\gamma\,d\tilde W_4(t)\big]$, and the volatility of $S^{(N)}$ is $\sqrt{\sigma^2-2\rho\sigma\nu+\nu^2}$.
(ii)

Proof. This problem is the same as Exercise 4.13: we define $\tilde W_2(t)=\frac{-\rho\tilde W_1(t)+\tilde W_3(t)}{\sqrt{1-\rho^2}}$; then $\tilde W_2$ is a martingale, with
$$(d\tilde W_2(t))^2=\Big(\frac{-\rho\,d\tilde W_1(t)+d\tilde W_3(t)}{\sqrt{1-\rho^2}}\Big)^2=\frac{\rho^2-2\rho^2+1}{1-\rho^2}\,dt=dt,$$
and $d\tilde W_2(t)\,d\tilde W_1(t)=\frac{-\rho\,dt+\rho\,dt}{\sqrt{1-\rho^2}}=0$. So $\tilde W_2$ is a BM independent of $\tilde W_1$, and
$$dN_t=rN_t\,dt+\nu N_t\,d\tilde W_3(t)=rN_t\,dt+\nu N_t\big[\rho\,d\tilde W_1(t)+\sqrt{1-\rho^2}\,d\tilde W_2(t)\big].$$
(iii)

Proof. Under $\tilde P$, $(\tilde W_1,\tilde W_2)$ is a two-dimensional BM, and
$$dS_t=rS_t\,dt+\sigma S_t\,d\tilde W_1(t)=rS_t\,dt+S_t(\sigma,0)\begin{pmatrix}d\tilde W_1(t)\\ d\tilde W_2(t)\end{pmatrix},\qquad dN_t=rN_t\,dt+N_t\big(\nu\rho,\nu\sqrt{1-\rho^2}\big)\begin{pmatrix}d\tilde W_1(t)\\ d\tilde W_2(t)\end{pmatrix}.$$
So under $\tilde P$, the volatility vector for $S$ is $(\sigma,0)$, and the volatility vector for $N$ is $(\nu\rho,\nu\sqrt{1-\rho^2})$. By Theorem 9.2.2, under the measure $\tilde P^{(N)}$ the volatility vector for $S^{(N)}$ is $(v_1,v_2)=(\sigma-\nu\rho,-\nu\sqrt{1-\rho^2})$. In particular, the volatility of $S^{(N)}$ is
$$\sqrt{v_1^2+v_2^2}=\sqrt{(\sigma-\nu\rho)^2+\nu^2(1-\rho^2)}=\sqrt{\sigma^2-2\rho\sigma\nu+\nu^2},$$
consistent with the result of part (i).
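Since $\log(S_T/N_T)=\sigma\tilde W_1(T)-\nu\tilde W_3(T)+(\text{deterministic drift})$, the volatility $\sqrt{\sigma^2-2\rho\sigma\nu+\nu^2}$ can also be recovered by simulation. A hedged Monte Carlo sketch (illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, nu, rho, T, n = 0.3, 0.2, 0.6, 1.0, 1_000_000
Z1 = rng.standard_normal(n)
Z2 = rng.standard_normal(n)
W1 = np.sqrt(T) * Z1
W3 = np.sqrt(T) * (rho * Z1 + np.sqrt(1 - rho**2) * Z2)   # corr(W1, W3) = rho

log_ratio = sigma * W1 - nu * W3      # log(S_T/N_T) up to a deterministic drift
mc_vol = log_ratio.std() / np.sqrt(T)
closed = np.sqrt(sigma**2 - 2 * rho * sigma * nu + nu**2)
print(mc_vol, closed)
```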
9.4.

Proof. From (9.3.15), we have $M_t^fQ_t=M_0^fQ_0e^{\int_0^t\sigma_2(s)\,d\tilde W_3(s)+\int_0^t(R_s-\frac12\sigma_2^2(s))\,ds}$. So
$$\frac{D_t^f}{Q_t}=D_0^fQ_0^{-1}e^{-\int_0^t\sigma_2(s)\,d\tilde W_3(s)-\int_0^t(R_s-\frac12\sigma_2^2(s))\,ds}$$
and
$$d\Big(\frac{D_t^f}{Q_t}\Big)=\frac{D_t^f}{Q_t}\Big[-\sigma_2(t)\,d\tilde W_3(t)-\Big(R_t-\frac12\sigma_2^2(t)\Big)\,dt+\frac12\sigma_2^2(t)\,dt\Big]=\frac{D_t^f}{Q_t}\big[-\sigma_2(t)\,d\tilde W_3(t)-(R_t-\sigma_2^2(t))\,dt\big].$$
Hence
$$d\Big(\frac{M_tD_t^f}{Q_t}\Big)=M_t\,d\Big(\frac{D_t^f}{Q_t}\Big)+\frac{D_t^f}{Q_t}\,dM_t=\frac{M_tD_t^f}{Q_t}\big[-\sigma_2(t)\,d\tilde W_3(t)-(R_t-\sigma_2^2(t))\,dt\big]+R_t\frac{M_tD_t^f}{Q_t}\,dt$$
$$=-\frac{M_tD_t^f}{Q_t}\big(\sigma_2(t)\,d\tilde W_3(t)-\sigma_2^2(t)\,dt\big)=-\frac{M_tD_t^f}{Q_t}\sigma_2(t)\,d\tilde W_3^f(t),$$
where $\tilde W_3^f(t)=\tilde W_3(t)-\int_0^t\sigma_2(s)\,ds$.
Similarly,
$$d\Big(\frac{D_t^fS_t}{Q_t}\Big)=\frac{D_t^f}{Q_t}\,dS_t+S_t\,d\Big(\frac{D_t^f}{Q_t}\Big)+dS_t\,d\Big(\frac{D_t^f}{Q_t}\Big)$$
$$=\frac{D_t^fS_t}{Q_t}\big(R_t\,dt+\sigma_1(t)\,d\tilde W_1(t)\big)+\frac{S_tD_t^f}{Q_t}\big[-\sigma_2(t)\,d\tilde W_3(t)-(R_t-\sigma_2^2(t))\,dt\big]+\frac{S_tD_t^f}{Q_t}\sigma_1(t)\,d\tilde W_1(t)\big(-\sigma_2(t)\big)\,d\tilde W_3(t)$$
$$=\frac{D_t^fS_t}{Q_t}\big[\sigma_1(t)\,d\tilde W_1(t)-\sigma_2(t)\,d\tilde W_3(t)+\sigma_2^2(t)\,dt-\rho_t\sigma_1(t)\sigma_2(t)\,dt\big]=\frac{D_t^fS_t}{Q_t}\big[\sigma_1(t)\,d\tilde W_1^f(t)-\sigma_2(t)\,d\tilde W_3^f(t)\big],$$
where $\tilde W_1^f(t)=\tilde W_1(t)-\int_0^t\rho_s\sigma_2(s)\,ds$.
9.5.

Proof. We combine the solutions of all the sub-problems into a single solution as follows. The payoff of a quanto call is $\big(\frac{S_T}{Q_T}-K\big)^+$ units of domestic currency at time $T$. By the risk-neutral pricing formula, its price at time $t$ is $\tilde E\big[e^{-r(T-t)}\big(\frac{S_T}{Q_T}-K\big)^+\big|\mathcal F_t\big]$. So we need to find the SDE for $\frac{S_t}{Q_t}$ under the risk-neutral measure $\tilde P$. We have
$$\frac{S_t}{Q_t}=\frac{S_0}{Q_0}e^{(\sigma_1-\rho\sigma_2)\tilde W_1(t)-\sigma_2\sqrt{1-\rho^2}\,\tilde W_2(t)+(r^f+\frac12\sigma_2^2-\frac12\sigma_1^2)t}.$$
Define $\sigma_4=\sqrt{(\sigma_1-\rho\sigma_2)^2+\sigma_2^2(1-\rho^2)}=\sqrt{\sigma_1^2-2\rho\sigma_1\sigma_2+\sigma_2^2}$ and $\tilde W_4(t)=\frac{\sigma_1-\rho\sigma_2}{\sigma_4}\tilde W_1(t)-\frac{\sigma_2\sqrt{1-\rho^2}}{\sigma_4}\tilde W_2(t)$. Then $\tilde W_4$ is a martingale with
$$[\tilde W_4]_t=\frac{(\sigma_1-\rho\sigma_2)^2}{\sigma_4^2}t+\frac{\sigma_2^2(1-\rho^2)}{\sigma_4^2}t=t,$$
so $\tilde W_4$ is a Brownian motion under $\tilde P$. Therefore, if we set $a=r-r^f+\rho\sigma_1\sigma_2-\sigma_2^2$, we have
$$\frac{S_t}{Q_t}=\frac{S_0}{Q_0}e^{\sigma_4\tilde W_4(t)+(r-a-\frac12\sigma_4^2)t}\quad\text{and}\quad d\Big(\frac{S_t}{Q_t}\Big)=\frac{S_t}{Q_t}\big[\sigma_4\,d\tilde W_4(t)+(r-a)\,dt\big].$$
Therefore, under $\tilde P$, $\frac{S_t}{Q_t}$ behaves like a dividend-paying stock with dividend yield $a$, and the price of the quanto call option is like the price of a call option on a dividend-paying stock. Thus formula (5.5.12) gives us the desired price formula for the quanto call option.
9.6. (i)

Proof. $d_+(t)-d_-(t)=\frac{1}{\sigma\sqrt{T-t}}\sigma^2(T-t)=\sigma\sqrt{T-t}$. So $d_-(t)=d_+(t)-\sigma\sqrt{T-t}$.

(ii)

Proof. $d_+(t)+d_-(t)=\frac{2}{\sigma\sqrt{T-t}}\log\frac{\mathrm{For}_S(t,T)}{K}$. So $d_+^2(t)-d_-^2(t)=\big(d_+(t)+d_-(t)\big)\big(d_+(t)-d_-(t)\big)=2\log\frac{\mathrm{For}_S(t,T)}{K}$.
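A quick numerical confirmation of parts (i)-(iii) (a hedged sketch with arbitrary values of the forward price, strike, and volatility):

```python
import numpy as np

sigma, T, t, F, K = 0.3, 2.0, 0.5, 105.0, 100.0
tau = T - t

dp = (np.log(F / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
dm = (np.log(F / K) - 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))

print(dp - dm, sigma * np.sqrt(tau))                     # part (i)
print(dp**2 - dm**2, 2 * np.log(F / K))                  # part (ii)
print(F * np.exp(-dp**2 / 2) - K * np.exp(-dm**2 / 2))   # part (iii): ~0
```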
(iii)

Proof. By (ii), $e^{-\frac12(d_+^2(t)-d_-^2(t))}=\frac{K}{\mathrm{For}_S(t,T)}$, so $\mathrm{For}_S(t,T)e^{-d_+^2(t)/2}-Ke^{-d_-^2(t)/2}=0$.
(iv)
Proof.
$$dd_+(t)=d\Big[\frac{\log\frac{\mathrm{For}_S(t,T)}{K}}{\sigma\sqrt{T-t}}+\frac12\sigma\sqrt{T-t}\Big]$$
$$=\frac{1}{\sigma\sqrt{T-t}}\Big(\frac{d\mathrm{For}_S(t,T)}{\mathrm{For}_S(t,T)}-\frac{(d\mathrm{For}_S(t,T))^2}{2\,\mathrm{For}_S^2(t,T)}\Big)+\frac{\log\frac{\mathrm{For}_S(t,T)}{K}}{2\sigma(T-t)^{3/2}}\,dt-\frac{\sigma}{4\sqrt{T-t}}\,dt$$
$$=\frac{1}{\sigma\sqrt{T-t}}\Big(\sigma\,d\tilde W^T(t)-\frac12\sigma^2\,dt\Big)+\frac{\log\frac{\mathrm{For}_S(t,T)}{K}}{2\sigma(T-t)^{3/2}}\,dt-\frac{\sigma}{4\sqrt{T-t}}\,dt$$
$$=\frac{d\tilde W^T(t)}{\sqrt{T-t}}+\frac{\log\frac{\mathrm{For}_S(t,T)}{K}}{2\sigma(T-t)^{3/2}}\,dt-\frac{3\sigma}{4\sqrt{T-t}}\,dt.$$

(v)

Proof. $dd_-(t)=dd_+(t)-d\big(\sigma\sqrt{T-t}\big)=dd_+(t)+\dfrac{\sigma}{2\sqrt{T-t}}\,dt$.
(vi)
Proof. By (iv) and (v), $(dd_-(t))^2=(dd_+(t))^2=\dfrac{dt}{T-t}$.
(vii)
Proof. $dN(d_+(t))=N'(d_+(t))\,dd_+(t)+\frac12N''(d_+(t))(dd_+(t))^2=\dfrac{e^{-\frac{d_+^2(t)}{2}}}{\sqrt{2\pi}}\,dd_+(t)-\dfrac{d_+(t)e^{-\frac{d_+^2(t)}{2}}}{2\sqrt{2\pi}(T-t)}\,dt$.

(viii)
Proof.
$$dN(d_-(t))=N'(d_-(t))\,dd_-(t)+\frac12N''(d_-(t))(dd_-(t))^2$$
$$=\frac{e^{-\frac{d_-^2(t)}{2}}}{\sqrt{2\pi}}\Big(dd_+(t)+\frac{\sigma}{2\sqrt{T-t}}\,dt\Big)-\frac{e^{-\frac{d_-^2(t)}{2}}d_-(t)}{2\sqrt{2\pi}}\cdot\frac{dt}{T-t}$$
$$=\frac{e^{-\frac{d_-^2(t)}{2}}}{\sqrt{2\pi}}\,dd_+(t)+\frac{\sigma e^{-\frac{d_-^2(t)}{2}}}{2\sqrt{2\pi(T-t)}}\,dt-\frac{e^{-\frac{d_-^2(t)}{2}}\big(d_+(t)-\sigma\sqrt{T-t}\big)}{2\sqrt{2\pi}(T-t)}\,dt$$
$$=\frac{e^{-\frac{d_-^2(t)}{2}}}{\sqrt{2\pi}}\,dd_+(t)+\frac{\sigma e^{-\frac{d_-^2(t)}{2}}}{\sqrt{2\pi(T-t)}}\,dt-\frac{d_+(t)e^{-\frac{d_-^2(t)}{2}}}{2\sqrt{2\pi}(T-t)}\,dt.$$
Proof.
$$dFor_S(t,T)\,dN(d_+(t))=\sigma For_S(t,T)d\widetilde{W}^T(t)\cdot\frac{e^{-\frac{d_+^2(t)}{2}}}{\sqrt{2\pi}}\cdot\frac{d\widetilde{W}^T(t)}{\sqrt{T-t}}=\frac{\sigma For_S(t,T)e^{-\frac{d_+^2(t)}{2}}}{\sqrt{2\pi(T-t)}}dt.$$
(x)
Proof.
\begin{align*}
&For_S(t,T)dN(d_+(t))+dFor_S(t,T)\,dN(d_+(t))-KdN(d_-(t))\\
&= For_S(t,T)\Big[\frac{1}{\sqrt{2\pi}}e^{-\frac{d_+^2(t)}{2}}dd_+(t)-\frac{d_+(t)e^{-\frac{d_+^2(t)}{2}}}{2\sqrt{2\pi}(T-t)}dt\Big]+\frac{\sigma For_S(t,T)e^{-\frac{d_+^2(t)}{2}}}{\sqrt{2\pi(T-t)}}dt\\
&\quad-K\Big[\frac{1}{\sqrt{2\pi}}e^{-\frac{d_-^2(t)}{2}}dd_+(t)+\frac{\sigma e^{-\frac{d_-^2(t)}{2}}}{\sqrt{2\pi(T-t)}}dt-\frac{d_+(t)e^{-\frac{d_-^2(t)}{2}}}{2\sqrt{2\pi}(T-t)}dt\Big]\\
&= \frac{1}{\sqrt{2\pi}}\Big[For_S(t,T)e^{-\frac{d_+^2(t)}{2}}-Ke^{-\frac{d_-^2(t)}{2}}\Big]dd_+(t)+\frac{d_+(t)}{2\sqrt{2\pi}(T-t)}\Big[Ke^{-\frac{d_-^2(t)}{2}}-For_S(t,T)e^{-\frac{d_+^2(t)}{2}}\Big]dt\\
&\quad+\frac{\sigma}{\sqrt{2\pi(T-t)}}\Big[For_S(t,T)e^{-\frac{d_+^2(t)}{2}}-Ke^{-\frac{d_-^2(t)}{2}}\Big]dt\\
&= 0.
\end{align*}
The last equality comes from (iii), which implies $e^{-\frac{d_-^2(t)}{2}}=\frac{For_S(t,T)}{K}e^{-\frac{d_+^2(t)}{2}}$.
Since all the $I_k(t)$'s ($k=1,\cdots,4$) are normally distributed with zero mean, we can conclude $\widetilde{E}[Y_1(t)]=e^{-\lambda_1t}Y_1(0)$ and
$$\widetilde{E}[Y_2(t)]=\begin{cases}\frac{\lambda_{21}}{\lambda_1-\lambda_2}(e^{-\lambda_1t}-e^{-\lambda_2t})Y_1(0)+e^{-\lambda_2t}Y_2(0), & \text{if }\lambda_1\neq\lambda_2;\\[2pt] -\lambda_{21}te^{-\lambda_1t}Y_1(0)+e^{-\lambda_1t}Y_2(0), & \text{if }\lambda_1=\lambda_2.\end{cases}$$
(ii)
Proof. The calculation relies on the following fact: if $X_t$ and $Y_t$ are both martingales, then $X_tY_t-[X,Y]_t$ is also a martingale. In particular, $\widetilde{E}[X_tY_t]=\widetilde{E}\{[X,Y]_t\}$. Thus
$$\widetilde{E}[I_1^2(t)]=\int_0^te^{2\lambda_1u}du=\frac{e^{2\lambda_1t}-1}{2\lambda_1},\qquad \widetilde{E}[I_1(t)I_2(t)]=\int_0^te^{(\lambda_1+\lambda_2)u}du=\frac{e^{(\lambda_1+\lambda_2)t}-1}{\lambda_1+\lambda_2},$$
$$\widetilde{E}[I_1(t)I_3(t)]=0,\qquad \widetilde{E}[I_1(t)I_4(t)]=\int_0^tue^{2\lambda_1u}du=\frac{te^{2\lambda_1t}}{2\lambda_1}-\frac{e^{2\lambda_1t}-1}{4\lambda_1^2},$$
and
$$\widetilde{E}[I_4^2(t)]=\int_0^tu^2e^{2\lambda_1u}du=\frac{t^2e^{2\lambda_1t}}{2\lambda_1}-\frac{te^{2\lambda_1t}}{2\lambda_1^2}+\frac{e^{2\lambda_1t}-1}{4\lambda_1^3}.$$
(iii)
Proof. Following the hint, we have
$$\widetilde{E}[I_1(s)I_2(t)]=\widetilde{E}[J_1(t)I_2(t)]=\int_0^te^{(\lambda_1+\lambda_2)u}1_{\{u\leq s\}}du=\frac{e^{(\lambda_1+\lambda_2)s}-1}{\lambda_1+\lambda_2}.$$
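The closed forms of the moment integrals above can be cross-checked against numerical quadrature; a minimal sketch with illustrative values of $\lambda_1$ and $t$ (assumptions):

```python
import math

lam, t = 0.8, 2.0   # illustrative values (assumptions)

def simpson(f, a, b, n=4000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# int_0^t u e^{2 lam u} du, appearing in E[I1(t)I4(t)]
closed14 = t * math.exp(2*lam*t) / (2*lam) - (math.exp(2*lam*t) - 1) / (4*lam**2)
assert abs(simpson(lambda u: u * math.exp(2*lam*u), 0, t) - closed14) < 1e-6

# int_0^t u^2 e^{2 lam u} du, appearing in E[I4^2(t)]
closed44 = (t**2 * math.exp(2*lam*t) / (2*lam)
            - t * math.exp(2*lam*t) / (2*lam**2)
            + (math.exp(2*lam*t) - 1) / (4*lam**3))
assert abs(simpson(lambda u: u**2 * math.exp(2*lam*u), 0, t) - closed44) < 1e-6
```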
10.2. (i)
Proof. Assume $B(t,T)=\widetilde{E}[e^{-\int_t^TR_sds}|\mathcal{F}_t]=f(t,Y_1(t),Y_2(t))$. Then $d(D_tB(t,T))=D_t[-R_tf(t,Y_1(t),Y_2(t))dt+df(t,Y_1(t),Y_2(t))]$. By Itô's formula,
\begin{align*}
df(t,Y_1(t),Y_2(t)) &= \Big[f_t+f_{y_1}(\mu-\lambda_1Y_1(t))-f_{y_2}\lambda_2Y_2(t)+f_{y_1y_2}\sigma_{21}Y_1(t)\\
&\quad+\frac{1}{2}f_{y_1y_1}Y_1(t)+\frac{1}{2}f_{y_2y_2}(\sigma_{21}^2Y_1(t)+\alpha+\beta Y_1(t))\Big]dt+\text{martingale part}.
\end{align*}
Since $D_tB(t,T)$ is a martingale under the risk-neutral measure, the $dt$-term must vanish, which gives the PDE
$$\Big[-(\delta_0+\delta_1y_1+\delta_2y_2)+\frac{\partial}{\partial t}+(\mu-\lambda_1y_1)\frac{\partial}{\partial y_1}-\lambda_2y_2\frac{\partial}{\partial y_2}+\sigma_{21}y_1\frac{\partial^2}{\partial y_1\partial y_2}+\frac{1}{2}y_1\frac{\partial^2}{\partial y_1^2}+\frac{1}{2}(\sigma_{21}^2y_1+\alpha+\beta y_1)\frac{\partial^2}{\partial y_2^2}\Big]f=0.$$
(ii)
Proof. Suppose $f(t,y_1,y_2)=e^{-y_1C_1(T-t)-y_2C_2(T-t)-A(T-t)}$. Then
$$f_t=[y_1C_1'(T-t)+y_2C_2'(T-t)+A'(T-t)]f,\qquad f_{y_1}=-C_1(T-t)f,\qquad f_{y_2}=-C_2(T-t)f,$$
$$f_{y_1y_1}=C_1^2(T-t)f,\qquad f_{y_1y_2}=C_1(T-t)C_2(T-t)f,\qquad f_{y_2y_2}=C_2^2(T-t)f.$$
Plugging these into the PDE and sorting out the LHS according to the independent variables $y_1$ and $y_2$, we get
$$\begin{cases}-\delta_1+C_1'+\lambda_1C_1+\sigma_{21}C_1C_2+\frac{1}{2}C_1^2+\frac{1}{2}(\sigma_{21}^2+\beta)C_2^2=0\\ -\delta_2+C_2'+\lambda_2C_2=0\\ -\delta_0+A'-\mu C_1+\frac{1}{2}\alpha C_2^2=0.\end{cases}$$
In other words, we can obtain the ODEs for $C_1$, $C_2$ and $A$ as follows:
$$C_1'=\delta_1-\lambda_1C_1-\sigma_{21}C_1C_2-\frac{1}{2}C_1^2-\frac{1}{2}(\sigma_{21}^2+\beta)C_2^2,\qquad C_2'=\delta_2-\lambda_2C_2,\qquad A'=\mu C_1-\frac{1}{2}\alpha C_2^2+\delta_0.$$
10.3. (i)
Proof. $d(D_tB(t,T))=D_t[-R_tf(t,T,Y_1(t),Y_2(t))dt+df(t,T,Y_1(t),Y_2(t))]$ and
$$df(t,T,Y_1(t),Y_2(t))=\Big[f_t-f_{y_1}\lambda_1Y_1(t)-f_{y_2}(\lambda_{21}Y_1(t)+\lambda_2Y_2(t))+\frac{1}{2}f_{y_1y_1}+\frac{1}{2}f_{y_2y_2}\Big]dt+\text{martingale part}.$$
Since $D_tB(t,T)$ is a martingale under the risk-neutral measure, we have the following PDE:
$$\Big[-(\delta_0(t)+\delta_1y_1+\delta_2y_2)+\frac{\partial}{\partial t}-\lambda_1y_1\frac{\partial}{\partial y_1}-(\lambda_{21}y_1+\lambda_2y_2)\frac{\partial}{\partial y_2}+\frac{1}{2}\frac{\partial^2}{\partial y_1^2}+\frac{1}{2}\frac{\partial^2}{\partial y_2^2}\Big]f(t,T,y_1,y_2)=0.$$
Suppose $f(t,T,y_1,y_2)=e^{-y_1C_1(t,T)-y_2C_2(t,T)-A(t,T)}$; then
$$f_t=\Big[-y_1\frac{d}{dt}C_1(t,T)-y_2\frac{d}{dt}C_2(t,T)-\frac{d}{dt}A(t,T)\Big]f,\qquad f_{y_1}=-C_1(t,T)f,\qquad f_{y_2}=-C_2(t,T)f,$$
$$f_{y_1y_1}=C_1^2(t,T)f,\qquad f_{y_2y_2}=C_2^2(t,T)f.$$
Sorting the PDE according to $y_1$, $y_2$ and the constant term gives
$$-\delta_1-\frac{d}{dt}C_1(t,T)+\lambda_1C_1(t,T)+\lambda_{21}C_2(t,T)=0,\qquad -\delta_2-\frac{d}{dt}C_2(t,T)+\lambda_2C_2(t,T)=0.$$
That is,
$$\frac{d}{dt}C_1(t,T)=\lambda_1C_1(t,T)+\lambda_{21}C_2(t,T)-\delta_1,\qquad \frac{d}{dt}C_2(t,T)=\lambda_2C_2(t,T)-\delta_2,\qquad \frac{d}{dt}A(t,T)=\frac{1}{2}C_1^2(t,T)+\frac{1}{2}C_2^2(t,T)-\delta_0(t).$$
(ii)
Proof. For $C_2$, we note $\frac{d}{dt}[e^{-\lambda_2t}C_2(t,T)]=-\delta_2e^{-\lambda_2t}$ from the ODE in (i). Integrating from $t$ to $T$ and using the terminal condition $C_2(T,T)=0$, we have
$$0-e^{-\lambda_2t}C_2(t,T)=-\delta_2\int_t^Te^{-\lambda_2s}ds=\frac{\delta_2}{\lambda_2}(e^{-\lambda_2T}-e^{-\lambda_2t}).$$
So $C_2(t,T)=\frac{\delta_2}{\lambda_2}(1-e^{-\lambda_2(T-t)})$. For $C_1$, we note
$$\frac{d}{dt}\big(e^{-\lambda_1t}C_1(t,T)\big)=(\lambda_{21}C_2(t,T)-\delta_1)e^{-\lambda_1t}=\frac{\lambda_{21}\delta_2}{\lambda_2}\big(e^{-\lambda_1t}-e^{-\lambda_2T+(\lambda_2-\lambda_1)t}\big)-\delta_1e^{-\lambda_1t}.$$
Integrating from $t$ to $T$ and using $C_1(T,T)=0$, we get
$$-e^{-\lambda_1t}C_1(t,T)=\begin{cases}\frac{\lambda_{21}\delta_2}{\lambda_2}\Big[\frac{e^{-\lambda_1t}-e^{-\lambda_1T}}{\lambda_1}-\frac{e^{-\lambda_2T}(e^{(\lambda_2-\lambda_1)T}-e^{(\lambda_2-\lambda_1)t})}{\lambda_2-\lambda_1}\Big]-\frac{\delta_1}{\lambda_1}(e^{-\lambda_1t}-e^{-\lambda_1T}), & \text{if }\lambda_1\neq\lambda_2\\[4pt] \frac{\lambda_{21}\delta_2}{\lambda_2}\Big[\frac{e^{-\lambda_1t}-e^{-\lambda_1T}}{\lambda_1}-e^{-\lambda_2T}(T-t)\Big]-\frac{\delta_1}{\lambda_1}(e^{-\lambda_1t}-e^{-\lambda_1T}), & \text{if }\lambda_1=\lambda_2.\end{cases}$$
So
$$C_1(t,T)=\begin{cases}\frac{\lambda_{21}\delta_2}{\lambda_1\lambda_2}(e^{-\lambda_1(T-t)}-1)+\frac{\lambda_{21}\delta_2}{\lambda_2(\lambda_2-\lambda_1)}(e^{-\lambda_1(T-t)}-e^{-\lambda_2(T-t)})-\frac{\delta_1}{\lambda_1}(e^{-\lambda_1(T-t)}-1), & \text{if }\lambda_1\neq\lambda_2\\[4pt] \frac{\lambda_{21}\delta_2}{\lambda_1\lambda_2}(e^{-\lambda_1(T-t)}-1)+\frac{\lambda_{21}\delta_2}{\lambda_2}e^{-\lambda_1(T-t)}(T-t)-\frac{\delta_1}{\lambda_1}(e^{-\lambda_1(T-t)}-1), & \text{if }\lambda_1=\lambda_2.\end{cases}$$
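The closed forms for $C_1$ and $C_2$ can be validated by integrating the ODEs numerically backward from the terminal condition; a minimal sketch using a simple Euler scheme with illustrative parameter values (assumptions), for the case $\lambda_1\neq\lambda_2$:

```python
import math

# Illustrative parameter values (assumptions), with lam1 != lam2.
lam1, lam2, lam21, d1, d2 = 0.5, 0.9, 0.3, 0.6, 0.4
T, t = 3.0, 1.0

def C2_closed(tau):
    return d2 / lam2 * (1.0 - math.exp(-lam2 * tau))

def C1_closed(tau):
    return (lam21 * d2 / (lam1 * lam2) * (math.exp(-lam1 * tau) - 1.0)
            + lam21 * d2 / (lam2 * (lam2 - lam1))
              * (math.exp(-lam1 * tau) - math.exp(-lam2 * tau))
            - d1 / lam1 * (math.exp(-lam1 * tau) - 1.0))

# Integrate backward from C1(T,T) = C2(T,T) = 0:
#   dC1/dt = lam1*C1 + lam21*C2 - d1,   dC2/dt = lam2*C2 - d2.
n = 200000
h = (T - t) / n
c1, c2 = 0.0, 0.0
for _ in range(n):
    # Euler step backward in t: s -> s - h.
    c1, c2 = c1 - h * (lam1 * c1 + lam21 * c2 - d1), c2 - h * (lam2 * c2 - d2)

assert abs(c2 - C2_closed(T - t)) < 1e-4
assert abs(c1 - C1_closed(T - t)) < 1e-4
```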
(iii)
Proof. From the ODE $\frac{d}{dt}A(t,T)=\frac{1}{2}C_1^2(t,T)+\frac{1}{2}C_2^2(t,T)-\delta_0(t)$ and the terminal condition $A(T,T)=0$, we get
$$A(t,T)=\int_t^T\Big[\delta_0(s)-\frac{1}{2}C_1^2(s,T)-\frac{1}{2}C_2^2(s,T)\Big]ds.$$
(iv)
Proof. We want to find $\delta_0$ so that $f(0,T,Y_1(0),Y_2(0))=e^{-Y_1(0)C_1(0,T)-Y_2(0)C_2(0,T)-A(0,T)}=B(0,T)$ for all $T>0$. Taking logarithms on both sides and plugging in the expression of $A(0,T)$, we get
$$\log B(0,T)=-Y_1(0)C_1(0,T)-Y_2(0)C_2(0,T)+\int_0^T\Big[\frac{1}{2}\big(C_1^2(s,T)+C_2^2(s,T)\big)-\delta_0(s)\Big]ds.$$
Since $C_1$ and $C_2$ depend on $(s,T)$ only through $T-s$, the substitution $u=T-s$ gives $\int_0^T\frac{1}{2}(C_1^2(s,T)+C_2^2(s,T))ds=\int_0^T\frac{1}{2}(C_1^2(0,u)+C_2^2(0,u))du$. Taking the derivative with respect to $T$, we have
$$\frac{\partial}{\partial T}\log B(0,T)=-Y_1(0)\frac{\partial}{\partial T}C_1(0,T)-Y_2(0)\frac{\partial}{\partial T}C_2(0,T)+\frac{1}{2}C_1^2(0,T)+\frac{1}{2}C_2^2(0,T)-\delta_0(T).$$
So
$$\delta_0(T)=-Y_1(0)\frac{\partial}{\partial T}C_1(0,T)-Y_2(0)\delta_2e^{-\lambda_2T}+\frac{1}{2}C_1^2(0,T)+\frac{1}{2}C_2^2(0,T)-\frac{\partial}{\partial T}\log B(0,T),$$
where, from (ii), $\frac{\partial}{\partial T}C_2(0,T)=\delta_2e^{-\lambda_2T}$ and
$$\frac{\partial}{\partial T}C_1(0,T)=\begin{cases}\big(\delta_1-\frac{\lambda_{21}\delta_2}{\lambda_2}\big)e^{-\lambda_1T}+\frac{\lambda_{21}\delta_2}{\lambda_2(\lambda_2-\lambda_1)}\big(\lambda_2e^{-\lambda_2T}-\lambda_1e^{-\lambda_1T}\big), & \text{if }\lambda_1\neq\lambda_2\\[4pt] (\delta_1-\lambda_{21}\delta_2T)e^{-\lambda_1T}, & \text{if }\lambda_1=\lambda_2.\end{cases}$$
10.4. (i)
Proof. With $\widehat{X}_t=X_t-e^{-Kt}\int_0^te^{Ku}\Theta(u)du$,
$$d\widehat{X}_t=dX_t+Ke^{-Kt}\int_0^te^{Ku}\Theta(u)du\,dt-\Theta(t)dt=-K\widehat{X}_tdt+d\widehat{B}_t.$$
(ii)
Proof. Write $\widehat{B}_t=\begin{pmatrix}\sigma_1&0\\0&\sigma_2\end{pmatrix}\widetilde{B}_t$, where $\widetilde{B}_1$, $\widetilde{B}_2$ are Brownian motions with $d\widetilde{B}_1(t)d\widetilde{B}_2(t)=\rho\,dt$, and define
$$\widetilde{W}_t=C\widehat{B}_t=\begin{pmatrix}1&0\\-\frac{\rho}{\sqrt{1-\rho^2}}&\frac{1}{\sqrt{1-\rho^2}}\end{pmatrix}\widetilde{B}_t,$$
i.e. $\widetilde{W}_1=\widetilde{B}_1$ and $\widetilde{W}_2=\frac{1}{\sqrt{1-\rho^2}}(-\rho\widetilde{B}_1+\widetilde{B}_2)$. So $\widetilde{W}$ is a martingale with
$$\langle\widetilde{W}_1\rangle_t=\langle\widetilde{B}_1\rangle_t=t,\qquad \langle\widetilde{W}_2\rangle_t=\frac{\rho^2-2\rho^2+1}{1-\rho^2}t=t,\qquad \langle\widetilde{W}_1,\widetilde{W}_2\rangle_t=\frac{-\rho t+\rho t}{\sqrt{1-\rho^2}}=0.$$
Therefore $\widetilde{W}$ is a two-dimensional Brownian motion. Moreover, $dY_t=Cd\widehat{X}_t=-CKC^{-1}Y_tdt+Cd\widehat{B}_t=-\Lambda Y_tdt+d\widetilde{W}_t$, where
$$\Lambda=CKC^{-1}=\begin{pmatrix}\frac{1}{\sigma_1}&0\\-\frac{\rho}{\sigma_1\sqrt{1-\rho^2}}&\frac{1}{\sigma_2\sqrt{1-\rho^2}}\end{pmatrix}\begin{pmatrix}\lambda_1&0\\0&\lambda_2\end{pmatrix}\begin{pmatrix}\sigma_1&0\\\sigma_2\rho&\sigma_2\sqrt{1-\rho^2}\end{pmatrix}=\begin{pmatrix}\lambda_1&0\\\frac{\rho(\lambda_2-\lambda_1)}{\sqrt{1-\rho^2}}&\lambda_2\end{pmatrix}.$$
(iii)
Proof.
$$X_t=\widehat{X}_t+e^{-Kt}\int_0^te^{Ku}\Theta(u)du=C^{-1}Y_t+e^{-Kt}\int_0^te^{Ku}\Theta(u)du=\begin{pmatrix}\sigma_1Y_1(t)\\\sigma_2\rho Y_1(t)+\sigma_2\sqrt{1-\rho^2}\,Y_2(t)\end{pmatrix}+e^{-Kt}\int_0^te^{Ku}\Theta(u)du.$$
So $R_t=X_2(t)=\sigma_2\rho Y_1(t)+\sigma_2\sqrt{1-\rho^2}\,Y_2(t)+\delta_0(t)$, where $\delta_0(t)$ is the second coordinate of $e^{-Kt}\int_0^te^{Ku}\Theta(u)du$ and can be derived explicitly by Lemma 10.2.3. Then $\delta_1=\sigma_2\rho$ and $\delta_2=\sigma_2\sqrt{1-\rho^2}$.
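The matrix identities used above ($CC^{-1}=I$ and the lower-triangular form of $\Lambda=CKC^{-1}$) are easy to verify numerically; a minimal sketch with illustrative values of $\sigma_1,\sigma_2,\rho,\lambda_1,\lambda_2$ (assumptions):

```python
import math

# Illustrative values (assumptions).
s1, s2, rho, lam1, lam2 = 0.2, 0.3, 0.5, 0.4, 0.9
q = math.sqrt(1 - rho ** 2)

def matmul(A, B):
    # 2x2 matrix product, enough for this check.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

C    = [[1 / s1, 0.0], [-rho / (s1 * q), 1 / (s2 * q)]]
Cinv = [[s1, 0.0], [s2 * rho, s2 * q]]
K    = [[lam1, 0.0], [0.0, lam2]]

# C * Cinv should be the identity.
I = matmul(C, Cinv)
assert all(abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))

# Lambda = C K C^{-1} should be lower triangular with lam1, lam2 on the diagonal.
L = matmul(matmul(C, K), Cinv)
assert abs(L[0][0] - lam1) < 1e-12 and abs(L[0][1]) < 1e-12
assert abs(L[1][1] - lam2) < 1e-12
assert abs(L[1][0] - rho * (lam2 - lam1) / q) < 1e-12
```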
10.5.
Proof. We note $C(t,T)$ and $A(t,T)$ depend only on $T-t$. So $C(t,t+\bar{\tau})=C(0,\bar{\tau})$ and $A(t,t+\bar{\tau})=A(0,\bar{\tau})$ are constants when $\bar{\tau}$ is fixed. So
$$L_t=-\frac{1}{\bar{\tau}}\log B(t,t+\bar{\tau})=\frac{1}{\bar{\tau}}\big[C(t,t+\bar{\tau})R(t)+A(t,t+\bar{\tau})\big]=\frac{1}{\bar{\tau}}\big[C(0,\bar{\tau})R(t)+A(0,\bar{\tau})\big].$$
Hence $L(t_2)-L(t_1)=\frac{1}{\bar{\tau}}C(0,\bar{\tau})[R(t_2)-R(t_1)]$. Since $L(t_2)-L(t_1)$ is a nonrandom positive multiple of $R(t_2)-R(t_1)$, it is easy to verify that their correlation is 1.
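That a positive affine transform has correlation 1 with the original variable can be confirmed in a few lines; the sample below is a generic stand-in (seed, coefficients and sample size are all illustrative assumptions):

```python
import math, random

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(1000)]   # stand-in for R(t2) - R(t1) samples
a, b = 0.7, 0.2                                     # stand-in for C(0,tau)/tau and a constant
y = [a * xi + b for xi in x]                        # stand-in for L(t2) - L(t1)

mx, my = sum(x) / len(x), sum(y) / len(y)
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / len(x)
sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / len(x))
sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / len(y))
assert abs(cov / (sx * sy) - 1.0) < 1e-12
```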
10.6. (i)
Proof. If $\delta_2=0$, then $R_t=\delta_0+\delta_1Y_1(t)$ and
$$dR_t=\delta_1dY_1(t)=\delta_1\big(-\lambda_1Y_1(t)dt+d\widetilde{W}_1(t)\big)=\delta_1\Big[-\lambda_1\frac{R_t-\delta_0}{\delta_1}dt+d\widetilde{W}_1(t)\Big]=(\delta_0\lambda_1-\lambda_1R_t)dt+\delta_1d\widetilde{W}_1(t).$$
So $a=\delta_0\lambda_1$ and $b=\lambda_1$.
(ii)
Proof. Under the conditions of the problem, the drift of $\delta_1dY_1(t)+\delta_2dY_2(t)$ collapses to $-\lambda_2(\delta_1Y_1(t)+\delta_2Y_2(t))dt=-\lambda_2(R_t-\delta_0)dt$, so that
$$dR_t=\delta_1dY_1(t)+\delta_2dY_2(t)=(\lambda_2\delta_0-\lambda_2R_t)dt+\sqrt{\delta_1^2+\delta_2^2}\,d\widetilde{B}_t.$$
So $a=\lambda_2\delta_0$, $b=\lambda_2$, $\sigma=\sqrt{\delta_1^2+\delta_2^2}$ and $\widetilde{B}_t=\frac{\delta_1}{\sqrt{\delta_1^2+\delta_2^2}}\widetilde{W}_1(t)+\frac{\delta_2}{\sqrt{\delta_1^2+\delta_2^2}}\widetilde{W}_2(t)$.
10.7. (i)
Proof. We use the canonical form of the model as in formulas (10.2.4)-(10.2.6). By (10.2.20),
$$\frac{dB(t,T)}{B(t,T)}=R_tdt-C_1(T-t)d\widetilde{W}_1(t)-C_2(T-t)d\widetilde{W}_2(t).$$
So the volatility vector of $B(t,T)$ under $\widetilde{P}$ is $(-C_1(T-t),-C_2(T-t))$. By (9.2.5), $\widetilde{W}_j^T(t)=\int_0^tC_j(T-u)du+\widetilde{W}_j(t)$ ($j=1,2$) form a two-dimensional $\widetilde{P}^T$-Brownian motion.
(ii)
Proof. Under the $T$-forward measure, the numeraire is $B(t,T)$. By risk-neutral pricing, at time zero the risk-neutral price $V_0$ of the option satisfies
$$\frac{V_0}{B(0,T)}=\widetilde{E}^T\Big[\frac{1}{B(T,T)}\big(e^{-C_1(\bar{T}-T)Y_1(T)-C_2(\bar{T}-T)Y_2(T)-A(\bar{T}-T)}-K\big)^+\Big].$$
Note $B(T,T)=1$; we get (10.7.19).
(iii)
Proof. We can rewrite (10.2.4) and (10.2.5) as
$$\begin{cases}dY_1(t)=-\lambda_1Y_1(t)dt+d\widetilde{W}_1^T(t)-C_1(T-t)dt\\ dY_2(t)=-\lambda_{21}Y_1(t)dt-\lambda_2Y_2(t)dt+d\widetilde{W}_2^T(t)-C_2(T-t)dt.\end{cases}$$
Then
$$\begin{cases}Y_1(t)=Y_1(0)e^{-\lambda_1t}+\int_0^te^{-\lambda_1(t-s)}d\widetilde{W}_1^T(s)-\int_0^tC_1(T-s)e^{-\lambda_1(t-s)}ds\\ Y_2(t)=Y_2(0)e^{-\lambda_2t}-\lambda_{21}\int_0^tY_1(s)e^{-\lambda_2(t-s)}ds+\int_0^te^{-\lambda_2(t-s)}d\widetilde{W}_2^T(s)-\int_0^tC_2(T-s)e^{-\lambda_2(t-s)}ds.\end{cases}$$
So $(Y_1,Y_2)$ is jointly Gaussian and $X$ is therefore Gaussian.
(iv)
Proof. First, we recall the Black-Scholes formula for call options: if $dS_t=\mu S_tdt+\sigma S_td\widetilde{W}_t$, then
$$\widetilde{E}\big[e^{-\mu T}(S_0e^{\sigma\widetilde{W}_T+(\mu-\frac{1}{2}\sigma^2)T}-K)^+\big]=S_0N(d_+)-Ke^{-\mu T}N(d_-),$$
where $d_\pm=\frac{1}{\sigma\sqrt{T}}\big(\log\frac{S_0}{K}+(\mu\pm\frac{1}{2}\sigma^2)T\big)$ (different from the problem; check!). Since under $\widetilde{P}^T$, $X\sim N(-\frac{1}{2}\sigma^2,\sigma^2)$, applying this formula with the appropriate substitutions gives (10.7.19).
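The Black-Scholes formula recalled above can be checked against direct numerical integration of the payoff over the Gaussian law of $W_T$; a minimal sketch with illustrative values of $S_0,K,\mu,\sigma,T$ (assumptions):

```python
import math

# Illustrative values (assumptions).
S0, K, mu, sigma, T = 100.0, 95.0, 0.05, 0.25, 1.0

def N(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d_plus  = (math.log(S0 / K) + (mu + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d_minus = d_plus - sigma * math.sqrt(T)
bs_price = S0 * N(d_plus) - K * math.exp(-mu * T) * N(d_minus)

# Direct evaluation of E[e^{-mu T}(S0 e^{sigma W_T + (mu - sigma^2/2)T} - K)^+],
# integrating the payoff against the N(0, T) density of W_T.
def integrand(w):
    payoff = max(S0 * math.exp(sigma * w + (mu - 0.5 * sigma**2) * T) - K, 0.0)
    return math.exp(-mu * T) * payoff * math.exp(-w * w / (2 * T)) / math.sqrt(2 * math.pi * T)

a, b, n = -10.0, 10.0, 100000   # +-10 standard deviations, composite Simpson rule
h = (b - a) / n
integral = integrand(a) + integrand(b)
integral += sum((4 if i % 2 else 2) * integrand(a + i * h) for i in range(1, n))
integral *= h / 3

assert abs(integral - bs_price) < 1e-4
```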
10.11.
Proof. On each payment date $T_j$, the payoff of this swap contract is $\delta(K-L(T_{j-1},T_{j-1}))$. Its no-arbitrage price at time 0 is $\delta(KB(0,T_j)-B(0,T_j)L(0,T_{j-1}))$ by Theorem 10.4. So the value of the swap is
$$\sum_{j=1}^{n+1}\delta\big(KB(0,T_j)-B(0,T_j)L(0,T_{j-1})\big)=\delta K\sum_{j=1}^{n+1}B(0,T_j)-\delta\sum_{j=1}^{n+1}B(0,T_j)L(0,T_{j-1}).$$
10.12.
Proof. Since $L(T,T)=\frac{1-B(T,T+\delta)}{\delta B(T,T+\delta)}$ is $\mathcal{F}_T$-measurable, we have
\begin{align*}
\widetilde{E}[D(T+\delta)L(T,T)] &= \widetilde{E}\big[\widetilde{E}[D(T+\delta)L(T,T)|\mathcal{F}_T]\big]\\
&= \widetilde{E}\Big[\frac{1-B(T,T+\delta)}{\delta B(T,T+\delta)}\widetilde{E}[D(T+\delta)|\mathcal{F}_T]\Big]\\
&= \widetilde{E}\Big[\frac{1-B(T,T+\delta)}{\delta B(T,T+\delta)}D(T)B(T,T+\delta)\Big]\\
&= \widetilde{E}\Big[\frac{D(T)-D(T)B(T,T+\delta)}{\delta}\Big]\\
&= \frac{B(0,T)-B(0,T+\delta)}{\delta}\\
&= B(0,T+\delta)L(0,T).
\end{align*}
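The final step above is just the definition of the forward LIBOR $L(0,T)$; it can be checked with arbitrary illustrative bond prices (assumptions):

```python
# Illustrative discount-bond prices and accrual period (assumptions).
B_T, B_Td, delta = 0.95, 0.93, 0.25

# Definition of the forward LIBOR L(0,T) set at T, paid at T + delta.
L0T = (B_T - B_Td) / (delta * B_Td)

# (B(0,T) - B(0,T+delta)) / delta must equal B(0,T+delta) * L(0,T).
assert abs((B_T - B_Td) / delta - B_Td * L0T) < 1e-12
```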
$$e^{-\lambda^2(t-u)}e^{\lambda(t-u)(e^{\log(\lambda+1)}-1)}=e^{-\lambda^2(t-u)}e^{\lambda^2(t-u)}=1\quad\text{(by (11.3.4))}.$$
11.4.

Proof. The problem is ambiguous in that the relation between $N_1$ and $N_2$ is not clearly stated. According to page 524, paragraph 2, we guess the intended condition is that $N_1$ and $N_2$ are independent.

Suppose $N_1$ and $N_2$ are independent. Define $M_1(t)=N_1(t)-\lambda_1t$ and $M_2(t)=N_2(t)-\lambda_2t$. Then by independence $E[M_1(t)M_2(t)]=E[M_1(t)]E[M_2(t)]=0$. Meanwhile, by Itô's product formula,
$$M_1(t)M_2(t)=\int_0^tM_1(s-)dM_2(s)+\int_0^tM_2(s-)dM_1(s)+[M_1,M_2]_t.$$
Both $\int_0^tM_1(s-)dM_2(s)$ and $\int_0^tM_2(s-)dM_1(s)$ are martingales. So taking expectations on both sides, we get
$$0=0+E\{[M_1,M_2]_t\}=E\Big[\sum_{0<s\leq t}\Delta N_1(s)\Delta N_2(s)\Big].$$
Since $\sum_{0<s\leq t}\Delta N_1(s)\Delta N_2(s)\geq0$ a.s., we conclude $\sum_{0<s\leq t}\Delta N_1(s)\Delta N_2(s)=0$ a.s. By letting $t=1,2,\cdots$, we can find a set $\Omega_0$ of probability 1, so that for every $\omega\in\Omega_0$, $\sum_{0<s\leq t}\Delta N_1(s)\Delta N_2(s)=0$ for all $t>0$. Therefore $N_1$ and $N_2$ can have no simultaneous jump.
11.5.
Proof. We shall prove that the whole path of $N_1$ is independent of the whole path of $N_2$, following the scheme suggested by page 489, paragraph 1.

Fix $s\geq0$ and consider $X_t=u_1(N_1(t)-N_1(s))+u_2(N_2(t)-N_2(s))-\lambda_1(e^{u_1}-1)(t-s)-\lambda_2(e^{u_2}-1)(t-s)$, $t>s$. Then by Itô's formula for jump processes, we have
$$e^{X_t}-e^{X_s}=\int_s^te^{X_u}dX_u^c+\frac{1}{2}\int_s^te^{X_u}dX_u^cdX_u^c+\sum_{s<u\leq t}(e^{X_u}-e^{X_{u-}})=-\int_s^te^{X_u}[\lambda_1(e^{u_1}-1)+\lambda_2(e^{u_2}-1)]du+\sum_{s<u\leq t}(e^{X_u}-e^{X_{u-}}).$$
Since $\Delta X_u=u_1\Delta N_1(u)+u_2\Delta N_2(u)$ and $N_1$, $N_2$ have no simultaneous jump,
$$e^{X_u}-e^{X_{u-}}=e^{X_{u-}}(e^{\Delta X_u}-1)=e^{X_{u-}}\big[(e^{u_1}-1)\Delta N_1(u)+(e^{u_2}-1)\Delta N_2(u)\big].$$
So
\begin{align*}
e^{X_t}-1 &= -\int_s^te^{X_{u-}}[\lambda_1(e^{u_1}-1)+\lambda_2(e^{u_2}-1)]du+\sum_{s<u\leq t}e^{X_{u-}}\big[(e^{u_1}-1)\Delta N_1(u)+(e^{u_2}-1)\Delta N_2(u)\big]\\
&= \int_s^te^{X_{u-}}(e^{u_1}-1)d(N_1(u)-\lambda_1u)+\int_s^te^{X_{u-}}(e^{u_2}-1)d(N_2(u)-\lambda_2u).
\end{align*}
This shows $(e^{X_t})_{t\geq s}$ is a martingale w.r.t. $(\mathcal{F}_t)_{t\geq s}$. So $E[e^{X_t}]\equiv1$, i.e.
$$E[e^{u_1(N_1(t)-N_1(s))+u_2(N_2(t)-N_2(s))}]=e^{\lambda_1(e^{u_1}-1)(t-s)}e^{\lambda_2(e^{u_2}-1)(t-s)}=E[e^{u_1(N_1(t)-N_1(s))}]E[e^{u_2(N_2(t)-N_2(s))}].$$
For any $0=t_0<t_1<\cdots<t_n$, we have
\begin{align*}
&E\big[e^{\sum_{i=1}^nu_i(N_1(t_i)-N_1(t_{i-1}))+\sum_{j=1}^nv_j(N_2(t_j)-N_2(t_{j-1}))}\big]\\
&= E\big[e^{\sum_{i=1}^{n-1}u_i(N_1(t_i)-N_1(t_{i-1}))+\sum_{j=1}^{n-1}v_j(N_2(t_j)-N_2(t_{j-1}))}E[e^{u_n(N_1(t_n)-N_1(t_{n-1}))+v_n(N_2(t_n)-N_2(t_{n-1}))}|\mathcal{F}_{t_{n-1}}]\big]\\
&= E\big[e^{\sum_{i=1}^{n-1}u_i(N_1(t_i)-N_1(t_{i-1}))+\sum_{j=1}^{n-1}v_j(N_2(t_j)-N_2(t_{j-1}))}\big]E\big[e^{u_n(N_1(t_n)-N_1(t_{n-1}))+v_n(N_2(t_n)-N_2(t_{n-1}))}\big]\\
&= E\big[e^{\sum_{i=1}^{n-1}u_i(N_1(t_i)-N_1(t_{i-1}))+\sum_{j=1}^{n-1}v_j(N_2(t_j)-N_2(t_{j-1}))}\big]E\big[e^{u_n(N_1(t_n)-N_1(t_{n-1}))}\big]E\big[e^{v_n(N_2(t_n)-N_2(t_{n-1}))}\big],
\end{align*}
where the second equality comes from the independence of $N_i(t_n)-N_i(t_{n-1})$ ($i=1,2$) relative to $\mathcal{F}_{t_{n-1}}$ and the third equality comes from the result obtained in the above paragraph. Working by induction, we have
$$E\big[e^{\sum_{i=1}^nu_i(N_1(t_i)-N_1(t_{i-1}))+\sum_{j=1}^nv_j(N_2(t_j)-N_2(t_{j-1}))}\big]=\prod_{i=1}^nE\big[e^{u_i(N_1(t_i)-N_1(t_{i-1}))}\big]\prod_{j=1}^nE\big[e^{v_j(N_2(t_j)-N_2(t_{j-1}))}\big]=E\big[e^{\sum_{i=1}^nu_i(N_1(t_i)-N_1(t_{i-1}))}\big]E\big[e^{\sum_{j=1}^nv_j(N_2(t_j)-N_2(t_{j-1}))}\big].$$
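The Poisson moment generating function $E[e^{uN_t}]=e^{\lambda t(e^u-1)}$, which drives the factorization above, can be confirmed deterministically by summing the Poisson pmf far into the tail; the values of $\lambda$, $t$, $u$ below are illustrative assumptions:

```python
import math

lam, t, u = 1.3, 2.0, 0.4   # illustrative values (assumptions)

# E[e^{uN_t}] computed from the Poisson pmf, truncated far into the tail
# (k up to 120 keeps all intermediates within float range).
mgf_series = sum(math.exp(u * k) * math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
                 for k in range(120))
mgf_closed = math.exp(lam * t * (math.exp(u) - 1.0))
assert abs(mgf_series - mgf_closed) < 1e-9
```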
11.6.

Proof. Let $X_t=u_1W_t+u_2Q_t-\frac{1}{2}u_1^2t-\lambda(\varphi(u_2)-1)t$, where $\varphi(u_2)=E[e^{u_2Y_1}]$. Note $\Delta X_t=u_2\Delta Q_t=u_2Y_{N_t}\Delta N_t$, where $N_t$ is the Poisson process associated with $Q_t$. So $e^{X_t}-e^{X_{t-}}=e^{X_{t-}}(e^{\Delta X_t}-1)=e^{X_{t-}}(e^{u_2Y_{N_t}}-1)\Delta N_t$. Consider the compound Poisson process $H_t=\sum_{i=1}^{N_t}(e^{u_2Y_i}-1)$; then $H_t-\lambda E[e^{u_2Y_1}-1]t=H_t-\lambda(\varphi(u_2)-1)t$ is a martingale, $e^{X_t}-e^{X_{t-}}=e^{X_{t-}}\Delta H_t$, and
\begin{align*}
e^{X_t}-1 &= \int_0^te^{X_{s-}}\Big(u_1dW_s-\frac{1}{2}u_1^2ds-\lambda(\varphi(u_2)-1)ds\Big)+\frac{1}{2}\int_0^te^{X_{s-}}u_1^2ds+\sum_{0<s\leq t}e^{X_{s-}}\Delta H_s\\
&= \int_0^te^{X_{s-}}u_1dW_s+\int_0^te^{X_{s-}}d\big(H_s-\lambda(\varphi(u_2)-1)s\big).
\end{align*}
This shows $e^{X_t}$ is a martingale and $E[e^{X_t}]\equiv1$. So $E[e^{u_1W_t+u_2Q_t}]=e^{\frac{1}{2}u_1^2t}e^{\lambda(\varphi(u_2)-1)t}=E[e^{u_1W_t}]E[e^{u_2Q_t}]$. This shows $W_t$ and $Q_t$ are independent.
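The compound Poisson transform $E[e^{u_2Q_t}]=e^{\lambda t(\varphi(u_2)-1)}$ used in the last step can be checked deterministically by conditioning on the number of jumps; a minimal sketch with an illustrative two-point jump distribution (all numeric values are assumptions):

```python
import math

# Compound Poisson Q_t with two-point jump distribution (illustrative assumptions):
# Y = y1 with probability p, y2 with probability 1-p.
lam, t = 0.9, 1.5
y1, y2, p, u = 0.5, -1.0, 0.3, 0.7

phi = p * math.exp(u * y1) + (1 - p) * math.exp(u * y2)     # phi(u) = E[e^{uY}]

# E[e^{uQ_t}] = sum_k P(N_t = k) * phi(u)^k, truncated far into the tail.
lhs = sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k) * phi ** k
          for k in range(80))
rhs = math.exp(lam * t * (phi - 1.0))
assert abs(lhs - rhs) < 1e-12
```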
11.7.
Proof. $E[h(Q_T)|\mathcal{F}_t]=E[h(Q_T-Q_t+Q_t)|\mathcal{F}_t]=E[h(Q_{T-t}+x)]|_{x=Q_t}=g(t,Q_t)$, where $g(t,x)=E[h(Q_{T-t}+x)]$. The second equality uses the independence and stationarity of the increments of $Q$.
References
[1] F. Delbaen and W. Schachermayer. The mathematics of arbitrage. Springer, 2006.
[2] J. Hull. Options, futures, and other derivatives. Fourth Edition. Prentice-Hall International Inc., New
Jersey, 2000.
[3] B. Øksendal. Stochastic differential equations: An introduction with applications. Sixth edition. Springer-Verlag, Berlin, 2003.
[4] D. Revuz and M. Yor. Continuous martingales and Brownian motion. Third edition. Springer-Verlag, Berlin, 1998.
[5] A. N. Shiryaev. Essentials of stochastic finance: facts, models, theory. World Scientific, Singapore, 1999.
[6] S. Shreve. Stochastic calculus for finance I. The binomial asset pricing model. Springer-Verlag, New
York, 2004.
[7] S. Shreve. Stochastic calculus for finance II. Continuous-time models. Springer-Verlag, New York, 2004.
[8] P. Wilmott. The mathematics of financial derivatives: A student introduction. Cambridge University
Press, 1995.