
Answers for Stochastic Calculus for Finance I,

Steven Shreve (version of July 15, 2009)


Marco Cabral
mapcabral@ufrj.br
Department of Mathematics
Federal University of Rio de Janeiro
July 25, 2009

Chapter 1
1.1: Since S1(H) = uS0 and S1(T) = dS0, we have X1(H) = Δ0 S0 (u − (1 + r)) and
X1(T) = Δ0 S0 (d − (1 + r)). Since d < 1 + r < u, X1(H) positive implies X1(T) negative
and vice-versa.
1.2: Check that X1(H) = 3Δ0 + (3/2)Γ0 = −X1(T). Therefore if X1(H) is
positive, X1(T) is negative and vice-versa.
1.3: By (1.1.6), V0 = S0 .
1.4: mutatis mutandis: repeat the proof of Theorem 1.2.2, replacing u by d and H
by T.
1.5: many computations.
1.6: We have 1.5 − V1 = Δ0 S1 + (0 − Δ0 S0)(1 + r). We determine Δ0 = −1/2.
So we should sell short 1/2 share of stock.
1.7: see the previous exercise.
1.8:
(i) vn(s, y) = 2/5 [vn+1(2s, y + 2s) + vn+1(s/2, y + s/2)]
(ii) v0(4, 4) = 1.216. One can check that v2(16, 28) = 6.4, v2(4, 16) = 1,
v2(4, 10) = 0.2 and v2(1, 7) = 0.
(iii) δn(s, y) = [vn+1(2s, y + 2s) − vn+1(s/2, y + s/2)]/[2s − s/2].
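A quick numerical check of the recursion in (i) (a sketch, assuming the data of Exercise 1.8: S0 = 4, u = 2, d = 1/2, r = 1/4 and the Asian call payoff (Y3/4 − 4)+ with strike 4, which is not restated in this answer sheet):

from fractions import Fraction as F

u, d, disc = F(2), F(1, 2), F(2, 5)   # 2/5 = (1/(1+r)) * p~, with r = 1/4 and p~ = q~ = 1/2

def v(n, s, y):
    """Backward recursion v_n(s, y) = 2/5 [v_{n+1}(2s, y+2s) + v_{n+1}(s/2, y+s/2)]."""
    if n == 3:                         # terminal condition: Asian call payoff (Y3/4 - 4)^+
        return max(y / 4 - 4, F(0))
    return disc * (v(n + 1, u * s, y + u * s) + v(n + 1, d * s, y + d * s))

print(v(0, F(4), F(4)))                # 152/125 = 1.216
print(v(2, F(16), F(28)), v(2, F(4), F(16)),
      v(2, F(4), F(10)), v(2, F(1), F(7)))   # 32/5 = 6.4, 1, 1/5 = 0.2, 0

Running it reproduces the time-zero value 1.216 and the four time-two values listed in (ii).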
1.9:
(i) Vn(ω) = 1/(1 + rn(ω)) [p̃n(ω)Vn+1(ωH) + q̃n(ω)Vn+1(ωT)], where p̃n(ω) =
(1 + rn(ω) − dn(ω))/(un(ω) − dn(ω)) and q̃n(ω) = 1 − p̃n(ω).
(ii) Δn(ω) = [Vn+1(ωH) − Vn+1(ωT)]/[Sn+1(ωH) − Sn+1(ωT)]
(iii) p̃ = q̃ = 1/2. V0 = 9.375. One can check that V2(HH) = 21.25,
V2(HT) = V2(TH) = 7.5 and V2(TT) = 1.25.

Chapter 2
2.1:
(i) A and Aᶜ are disjoint and their union is the whole space Ω. Therefore
P(A ∪ Aᶜ) = P(Ω) = 1 = P(A) + P(Aᶜ), since they are disjoint.
(ii) It is enough to show it for two events A and B; by induction it follows for
any finite set of events. Since A ∪ B = A ∪ (B \ A), and this is a disjoint union,
P(A ∪ B) = P(A) + P(B \ A). Since B \ A is a subset of B, P(B \ A) ≤ P(B) by
definition (2.1.5).
2.2:
(i) P̃(S3 = 32) = P̃(S3 = 0.5) = (1/2)^3 = 1/8; P̃(S3 = 2) = P̃(S3 = 8) =
3(1/2)^3 = 3/8.
(ii) ẼS1 = (1 + r)S0 = 4(1 + r), ẼS2 = (1 + r)^2 S0 = 4(1 + r)^2, ẼS3 =
(1 + r)^3 S0 = 4(1 + r)^3. The average rate of growth is 1 + r (see p. 40, second paragraph).
(iii) P(S3 = 32) = (2/3)^3 = 8/27, P(S3 = 8) = 4/9, P(S3 = 2) =
2/9, P(S3 = 0.5) = (1/3)^3 = 1/27.
We can compute ES1 directly or use p.38, second paragraph: En [Sn+1 ] =
(3/2)Sn . Therefore, E0 [S1 ] = E[S1 ] = (3/2)S0 , E1 [S2 ] = (3/2)S1 . Applying E
to both sides, E[E1 [S2 ]] = E[S2 ] = (3/2)E[S1 ] = (3/2)2 S0 . Following the same
reasoning, E[S3 ] = (3/2)3 S0 . Therefore, the average rate of growth is 3/2.
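The growth rates can also be checked by enumerating the eight paths (a sketch, assuming S0 = 4, u = 2, d = 1/2 behind the values above, actual probabilities p = 2/3, q = 1/3, and taking r = 1/4 so that p̃ = q̃ = 1/2):

from fractions import Fraction as F
from itertools import product

S0, u, d = F(4), F(2), F(1, 2)
p, q = F(2, 3), F(1, 3)            # actual probabilities
pt, qt = F(1, 2), F(1, 2)          # risk-neutral probabilities (r = 1/4)

E_S3, Et_S3 = F(0), F(0)
for path in product("HT", repeat=3):
    S = S0
    for toss in path:
        S *= u if toss == "H" else d
    heads = path.count("H")
    E_S3 += S * p**heads * q**(3 - heads)
    Et_S3 += S * pt**heads * qt**(3 - heads)

print(E_S3, E_S3 / S0)             # 27/2 = 13.5; growth factor (3/2)^3 = 27/8
print(Et_S3, Et_S3 / S0)           # 125/16 = 7.8125; growth factor (5/4)^3 = 125/64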
2.3: Use Jensen's inequality and the martingale property: φ(Mn) = φ(En[Mn+1]) ≤
En[φ(Mn+1)].
2.4:
(i) En(Mn+1) = Σ_{j=1}^{n+1} En[Xj] = (taking out what is known) = En[Xn+1] +
Σ_{j=1}^{n} Xj = En[Xn+1] + Mn. Since Xj assumes 1 or −1 with equal probability,
and Xn+1 depends only on the (n + 1)st coin toss (independence), En[Xn+1] = E[Xn+1] =
0. Therefore, En(Mn+1) = Mn.
(ii) Since Xn+1 depends only on the (n + 1)st coin toss (independence), En(e^{σXn+1}) =
E(e^{σXn+1}) = (e^σ + e^{−σ})/2. En[e^{σMn+1}] = (taking out what is known) =
e^{σMn} En[e^{σXn+1}] = e^{σMn}(e^σ + e^{−σ})/2.
2.5:
(i) Hint: Mn+1 = Mn + Xn+1 (why?) and (Xj)^2 = 1; therefore (Mn+1)^2 =
(Mn)^2 + 2Mn Xn+1 + 1. Also In+1 = Mn(Mn+1 − Mn) + In = Mn Xn+1 + In. One
can prove the formula by induction on n: by the induction hypothesis, In = 1/2((Mn)^2 − n),
and therefore In+1 = 1/2((Mn)^2 + 2Mn Xn+1 − n) = 1/2((Mn+1)^2 − 1 − n) = 1/2((Mn+1)^2 − (n + 1)).
(ii) From (i), In+1 = 1/2(Mn + Xn+1)^2 − (n + 1)/2. Since Xn+1 = 1 or −1 with
the same probability, En(f(In+1)) = j(Mn), where j(m) = E[f(1/2(m + Xn+1)^2 −
(n + 1)/2)] = 1/2 f(1/2(m + 1)^2 − (n + 1)/2) + 1/2 f(1/2(m − 1)^2 − (n + 1)/2) =
1/2 f(1/2(m^2 − n) + m) + 1/2 f(1/2(m^2 − n) − m).
Since In = 1/2((Mn)^2 − n), En(f(In+1)) = j(Mn) = 1/2 f(In + Mn) + 1/2 f(In − Mn).
Now we need to make the right-hand side depend on In only. Since In = 1/2((Mn)^2 − n),
(Mn)^2 = 2In + n, so Mn = √(2In + n).

So En(f(In+1)) = g(In), where g(i) = 1/2 f(i + √(2i + n)) + 1/2 f(i − √(2i + n)).
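The identity In = 1/2((Mn)^2 − n) from (i) can be checked pathwise by simulation, using the recursion In+1 = In + Mn Xn+1 stated above (an illustrative sketch only):

import random

def check(n_steps=1000, trials=5):
    for _ in range(trials):
        M, I = 0, 0
        for _ in range(n_steps):
            X = random.choice((1, -1))      # symmetric step
            I += M * X                      # I_{n+1} = I_n + M_n X_{n+1}
            M += X                          # M_{n+1} = M_n + X_{n+1}
        assert I == (M * M - n_steps) // 2  # I_n = (M_n^2 - n)/2 holds on every path

check()
print("identity I_n = (M_n^2 - n)/2 verified on random paths")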


2.6: It is easy to show that In is an adapted process. En[In+1] = (taking
out what is known) = Δn(En(Mn+1) − Mn) + In. Since Mn is a martingale,
En(Mn+1) = Mn and the first term is zero.

2.7
2.8
(i) M′N−1 = EN−1(M′N) = EN−1(MN) = MN−1. Therefore M′N−1 = MN−1.
Proceed by (backward) induction.
(ii) Ẽn[Vn+1](ω) = p̃Vn+1(ωH) + q̃Vn+1(ωT) = (1 + r)Vn(ω) from algorithm
(1.2.16). Now it is easy to prove.
(iii) This is a consequence of the fact that, for any random variable Z, the process
n ↦ En[Z] is a martingale.
2.9: see p. 46, (2.4.1): models with random interest rates.
(i) P̃(HH) = 1/4, P̃(HT) = 1/4, P̃(TH) = 1/12, P̃(TT) = 5/12.
(ii) V2(HH) = 5, V2(HT) = 1, V2(TH) = 1, V2(TT) = 0, V1(H) = 12/5,
V1(T) = 1/9, V0 = 226/225.
(iii) Δ0 = 103/270.
(iv) Δ1(H) = 1.
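A sketch that reproduces (ii)–(iv) by backward induction, assuming the data of Exercise 2.9 (S0 = 4, S1(H) = 8, S1(T) = 2, S2(HH) = 12, S2(HT) = S2(TH) = 8, S2(TT) = 2, interest rates r0 = r1(H) = 1/4, r1(T) = 1/2, payoff V2 = (S2 − 7)+); these model values are recalled from the textbook, not from this answer sheet:

from fractions import Fraction as F

S = {"": F(4), "H": F(8), "T": F(2),
     "HH": F(12), "HT": F(8), "TH": F(8), "TT": F(2)}
r = {"": F(1, 4), "H": F(1, 4), "T": F(1, 2)}                    # r_0, r_1(H), r_1(T)
V = {w: max(S[w] - 7, F(0)) for w in ("HH", "HT", "TH", "TT")}   # V2 = (S2 - 7)^+

def p_tilde(w):
    """Risk-neutral up-probability at node w: (1 + r_n - d_n)/(u_n - d_n) written with prices."""
    return ((1 + r[w]) * S[w] - S[w + "T"]) / (S[w + "H"] - S[w + "T"])

for w in ("T", "H", ""):                             # backward induction over the tree
    pt = p_tilde(w)
    V[w] = (pt * V[w + "H"] + (1 - pt) * V[w + "T"]) / (1 + r[w])

print(V["H"], V["T"], V[""])                         # 12/5, 1/9, 226/225
print((V["H"] - V["T"]) / (S["H"] - S["T"]))         # Delta_0 = 103/270
print((V["HH"] - V["HT"]) / (S["HH"] - S["HT"]))     # Delta_1(H) = 1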
2.10
(i) Follow the proof of 2.4.5 (p. 40)
(ii) easy
(iii) easy
2.11
(i) easy, since CN = (SN − K)+, FN = SN − K and PN = (K − SN)+.
(ii) trivial
(iii) F0 = 1/(1 + r)^N Ẽ(FN) = 1/(1 + r)^N Ẽ(SN − K). Since Ẽ SN = S0(1 +
r)^N, F0 = S0 − K/(1 + r)^N.
(iv) Yes, Cn = Pn.
2.12
2.13
(i) We write (Sn+1, Yn+1) = ((Sn+1/Sn) Sn, Yn + (Sn+1/Sn) Sn). Now we apply the
independence lemma to the variables Sn+1/Sn, Sn and Yn.
We obtain:
En(f(Sn+1, Yn+1)) = p f(uSn, Yn + uSn) + q f(dSn, Yn + dSn). Therefore,
g(s, y) = p f(us, y + us) + q f(ds, y + ds).
(ii) vN(s, y) = f(y/(N + 1)). Vn = vn(Sn, Yn) = 1/(1 + r) Ẽn(Vn+1) = 1/(1 + r) Ẽn(vn+1(Sn+1, Yn+1)) =
1/(1 + r) (p̃ vn+1(uSn, Yn + uSn) + q̃ vn+1(dSn, Yn + dSn)).
So, vn(s, y) = 1/(1 + r) [p̃ vn+1(us, y + us) + q̃ vn+1(ds, y + ds)].
2.14
(i) see 2.13 (i)
(ii) vN(s, y) = f(y/(N − M)). For 0 ≤ n < M, vn(s) = 1/(1 + r) [p̃ vn+1(us) + q̃ vn+1(ds)].
For n > M, vn(s, y) = 1/(1 + r) [p̃ vn+1(us, y + us) + q̃ vn+1(ds, y + ds)].
For n = M, vM(s) = 1/(1 + r) [p̃ vM+1(us, us) + q̃ vM+1(ds, ds)].

Chapter 3
3.1
(i) Since P̃ > 0, Z > 0. Therefore 1/Z(ω) > 0 for every ω.
(ii) Ẽ(1/Z) = Σ_ω (1/Z(ω)) P̃(ω) = Σ_ω P(ω) = 1, replacing Z by its definition.
(iii) EY = Σ_ω Y(ω) P(ω) = Σ_ω Y(ω) P̃(ω)/Z(ω) = Ẽ((1/Z) Y).
3.2
(i) P̃(Ω) = Σ_ω Z(ω)P(ω) = EZ = 1.
(ii) ẼY = Σ_ω Y(ω)P̃(ω) = Σ_ω Y(ω)Z(ω)P(ω) = E(YZ).
(iii) Since P(A) = 0, P(ω) = 0 for every ω ∈ A. Now P̃(A) = Σ_{ω∈A} P̃(ω) =
Σ_{ω∈A} Z(ω)P(ω) = 0, since P(ω) = 0 for every ω ∈ A.
(iv) Suppose P̃(A) = Σ_{ω∈A} P̃(ω) = Σ_{ω∈A} Z(ω)P(ω) = 0. Since P(Z > 0) = 1,
Z(ω) > 0 for every ω. Therefore the sum Σ_{ω∈A} Z(ω)P(ω) can be zero iff ("iff" means
if, and only if) P(ω) = 0 for every ω ∈ A. Therefore, P(A) = 0.
(v) P(A) = 1 iff P(Aᶜ) = 1 − P(A) = 0 iff P̃(Aᶜ) = 0 = 1 − P̃(A) iff
P̃(A) = 1.
(vi) Let Ω = {a, b}, Z(a) = 2, Z(b) = 0. Let P(a) = P(b) = 1/2. Now
P̃(a) = 1, P̃(b) = 0.
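A tiny finite-Ω check of (i), (ii) and the example in (vi); the random variable Y below is arbitrary and only for illustration:

from fractions import Fraction as F

P = {"a": F(1, 2), "b": F(1, 2)}
Z = {"a": F(2), "b": F(0)}                   # E[Z] = 1, but P(Z > 0) < 1
Pt = {w: Z[w] * P[w] for w in P}             # P~(w) = Z(w) P(w)

print(sum(Pt.values()))                       # (i): P~(Omega) = 1
Y = {"a": F(3), "b": F(7)}                    # an arbitrary random variable
print(sum(Y[w] * Pt[w] for w in P))           # (ii): E~[Y] = 3
print(sum(Y[w] * Z[w] * P[w] for w in P))     # E[YZ] = 3, the same value
print(Pt["b"], P["b"])                        # (vi): P~({b}) = 0 although P({b}) = 1/2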
3.3: M0 = 13.5, M1(H) = 18, M1(T) = 4.5, M2(HH) = 24, M2(HT) =
6, M2(TH) = 6, M2(TT) = 1.5.
Now, En[Mn+1] = En[En+1[S3]]. From the properties of conditional expectation (iterated
conditioning), this is equal to En[S3] = Mn.
3.5
(i) Z(HH) = 9/16, Z(HT ) = 9/8, Z(T H) = 9/24, Z(T T ) = 45/12.
(ii) Z1 (H) = 3/4, Z1 (T ) = 3/2, Z0 = 1.
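These values can be reproduced assuming the actual probabilities are p = 2/3, q = 1/3 per toss and P̃ is the risk-neutral measure found in Exercise 2.9 (i); this identification of the exercise data is an assumption, but it matches every number above:

from fractions import Fraction as F

p, q = F(2, 3), F(1, 3)                                  # assumed actual measure
P  = {"HH": p * p, "HT": p * q, "TH": q * p, "TT": q * q}
Pt = {"HH": F(1, 4), "HT": F(1, 4), "TH": F(1, 12), "TT": F(5, 12)}   # from Exercise 2.9 (i)

Z2 = {w: Pt[w] / P[w] for w in P}                        # Z = dP~/dP
Z1 = {"H": p * Z2["HH"] + q * Z2["HT"],                  # Z1 = E1[Z2]
      "T": p * Z2["TH"] + q * Z2["TT"]}
Z0 = p * Z1["H"] + q * Z1["T"]

print(Z2["HH"], Z2["HT"], Z2["TH"], Z2["TT"])   # 9/16 9/8 3/8 15/4  (3/8 = 9/24, 15/4 = 45/12)
print(Z1["H"], Z1["T"], Z0)                     # 3/4 3/2 1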

Chapter 4
4.1
(ii) V0^C = 320/125 = 2.56, V2(HH) = 64/5 = 12.8, V2(HT) = V2(TH) =
8/5 = 1.6, V2(TT) = 0, V1(H) = 144/25 = 5.76, V1(T) = 16/25 = 0.64.
4.2: Δ0 = (0.4 − 3)/6, Δ1(H) = −1/12, Δ1(T) = −1, C0 = 0, C1(H) =
0, C1(T) = 1.
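A sketch of the backward induction behind Exercises 4.1–4.5, assuming the standard two-period American put data (S0 = 4, u = 2, d = 1/2, r = 1/4, K = 5, p̃ = q̃ = 1/2); it reproduces V0 = 1.36, V1(H) = 0.4, V1(T) = 3 and Δ0 = (0.4 − 3)/6:

from fractions import Fraction as F

u, d, r, K = F(2), F(1, 2), F(1, 4), F(5)
disc = 1 / (1 + r)                              # 4/5, with p~ = q~ = 1/2

def v(n, s, N=2):
    """American put: v_n(s) = max(K - s, disc * (v_{n+1}(us) + v_{n+1}(ds)) / 2)."""
    intrinsic = max(K - s, F(0))
    if n == N:
        return intrinsic
    cont = disc * (v(n + 1, u * s) + v(n + 1, d * s)) / 2
    return max(intrinsic, cont)

S0 = F(4)
print(v(0, S0))                                             # 34/25 = 1.36
print(v(1, u * S0), v(1, d * S0))                           # 2/5 = 0.4 and 3
print((v(1, u * S0) - v(1, d * S0)) / (u * S0 - d * S0))    # Delta_0 = -13/30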
4.3: The time-zero price is V0 = 0.4 and the optimal stopping time is
τ(H·) = ∞ and τ(T·) = 1.
Moreover, V1(H) = V2(HH) = V2(HT) = 0, V1(T) = 1 = max(1, 14/15),
V2(TH) = 2/3 = max(2/3, 4/10), V2(TT) = 5/3 = max(5/3, 31/20), V3(TTH) =
7/4 = 1.75, V3(TTT) = 8.5/4 = 2.125.
4.4: No. With 1.36 we can make a hedge such that we can pay Y for any
outcome and we will have some spare cash when the owner does not exercise
at the optimal times. If we have HH or HT we will keep 0.36 at time one with
probability 1/2. If we have TT we get nothing, and if we have TH we keep 1 at
time 1 with probability 1/4.
Therefore at time zero we would have 0.36(1/2) + (4/5)(1/4)(1) = 0.38 = 1.74 − 1.36.
4.5: List of the 11 stopping times that never exercise when the option is out of
the money, together with the value Ẽ[I_{τ≤2} (4/5)^τ Gτ] of each:

τ(HH)   τ(HT)   τ(TH)   τ(TT)   value
0       0       0       0       1
∞       2       1       1       1.36
∞       2       2       ∞       0.32
∞       2       ∞       2       0.8
∞       2       2       2       0.96
∞       2       ∞       ∞       0.16
∞       ∞       1       1       1.2
∞       ∞       2       ∞       0.16
∞       ∞       ∞       2       0.64
∞       ∞       2       2       0.8
∞       ∞       ∞       ∞       0
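The values in the table can be reproduced by brute force (a sketch under the same assumed put data as above, K = 5, r = 1/4 and p̃ = q̃ = 1/2, so each two-toss path has risk-neutral probability 1/4):

from fractions import Fraction as F

S2 = {"HH": F(16), "HT": F(4), "TH": F(4), "TT": F(1)}   # time-2 prices
S1 = {"H": F(8), "T": F(2)}                               # time-1 prices
disc, K = F(4, 5), F(5)
INF = None                                                # "never exercise"

# the 11 stopping rules from the table, as a function of the full path
rules = [
    {"HH": 0,   "HT": 0,   "TH": 0,   "TT": 0},
    {"HH": INF, "HT": 2,   "TH": 1,   "TT": 1},
    {"HH": INF, "HT": 2,   "TH": 2,   "TT": INF},
    {"HH": INF, "HT": 2,   "TH": INF, "TT": 2},
    {"HH": INF, "HT": 2,   "TH": 2,   "TT": 2},
    {"HH": INF, "HT": 2,   "TH": INF, "TT": INF},
    {"HH": INF, "HT": INF, "TH": 1,   "TT": 1},
    {"HH": INF, "HT": INF, "TH": 2,   "TT": INF},
    {"HH": INF, "HT": INF, "TH": INF, "TT": 2},
    {"HH": INF, "HT": INF, "TH": 2,   "TT": 2},
    {"HH": INF, "HT": INF, "TH": INF, "TT": INF},
]

def value(tau):
    total = F(0)
    for path in ("HH", "HT", "TH", "TT"):
        t = tau[path]
        if t is None:
            continue
        s = F(4) if t == 0 else S1[path[0]] if t == 1 else S2[path]
        total += F(1, 4) * disc**t * (K - s)      # in the money whenever exercised
    return total

for tau in rules:
    print(tau, value(tau))                        # maximum is 34/25 = 1.36 (second rule)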

4.6
(i) Since Sn/(1 + r)^n is a martingale, the stopped process S_{n∧τ}/(1 + r)^{n∧τ} is a martingale by
Theorem 4.3.2. Therefore, Ẽ[S0/(1 + r)^0] = Ẽ[S_{N∧τ}/(1 + r)^{N∧τ}]. The first term
is equal to Ẽ[S0/(1 + r)^0] = S0. The last term is equal to Ẽ[Sτ/(1 + r)^τ] since
τ ≤ N.
Therefore, for any τ, Ẽ[Gτ/(1 + r)^τ] = Ẽ[(K − Sτ)/(1 + r)^τ] = Ẽ[K/(1 + r)^τ] −
Ẽ[Sτ/(1 + r)^τ]. Since the discounted stock price is a martingale, this is equal
to Ẽ[K/(1 + r)^τ] − S0. Since (1 + r)^{−τ} ≤ 1, the maximum for this term is attained when
τ(ω) = 0 for every ω. In this case, the first term will be K. Therefore the value
is K − S0.
4.7: Using an argument similar to 4.6 (i), taking τ(ω) = N for every ω, the
value will be S0 − K/(1 + r)^N.

Chapter 5
5.1
(i) We need the following property: if X and Y are independent r.v.'s then
E(XY) = E(X)E(Y).
Since τ2 − τ1 and τ1 are independent,
E(α^{τ2}) = E(α^{(τ2−τ1)+τ1}) = E(α^{τ2−τ1} α^{τ1}) = E(α^{τ2−τ1}) E(α^{τ1}) = (E(α^{τ1}))^2.
(ii) Write τm = (τm − τm−1) + (τm−1 − τm−2) + ... + (τ2 − τ1) + τ1. Using
the same argument as in (i) we obtain the result.
(iii) Yes, since the probability of rising from level 0 to level 1 is the same as that of
rising from level 1 to level 2. This is different, though, from the probability of going
down to level −1.
5.2
(i) We can write f(σ) = p(e^σ − e^{−σ}) + e^{−σ}. Since p > 1/2 and e^σ − e^{−σ} > 0,
f(σ) > 1/2(e^σ − e^{−σ}) + e^{−σ} = cosh(σ) > 1.
Another proof: determine σ0 such that f′(σ0) = 0. This has only one
solution, σ0 = log(q/p)/2. Since q/p < 1, σ0 < 0. Also f′(0) = p − q > 0.
Therefore f is strictly increasing for σ ≥ 0. Since f(0) = 1, f(σ) > 1 for every
σ > 0.
(ii) En[Sn+1] = Sn En[e^{σXn+1}]/f(σ) = Sn E[e^{σXn+1}]/f(σ) = Sn f(σ)/f(σ) =
Sn.
(iii) Follow the same argument as in pages 121 and 122.
(iv) (See p. 123.) Given α ∈ (0, 1) we need to solve for the σ > 0 which satisfies
α = 1/f(σ). This is the same as solving αp e^σ + α(1 − p) e^{−σ} = 1. This is a
quadratic equation for e^{−σ} and we obtain that e^{−σ} = (1 − √(1 − 4α^2 p(1 − p)))/(2α(1 − p)).
Since 1 − √(1 − 4α^2 p(1 − p)) < 2α(1 − p) iff 1 − 2α(1 − p) < √(1 − 4α^2 p(1 − p)) iff α < 1,
we have that e^{−σ} < 1. We conclude that σ > 0.
Therefore, E[α^{τ1}] = (1 − √(1 − 4α^2 p(1 − p)))/(2α(1 − p)).
(v) Follow the corollary on p. 124. Let β = √(1 − 4α^2 p(1 − p)). Then E[τ1 α^{τ1−1}] =
∂/∂α [(1 − β)/(2α(1 − p))] = (1 − β)/(2α^2 (1 − p)β) (I used a CAS = computer algebra
system: Maxima). Now if we let α go to 1, then β goes to 2p − 1 (since √(1 − 4p(1 − p)) =
√((2p − 1)^2) = |2p − 1|, and since p > 1/2, this is equal to 2p − 1). Therefore we obtain that
E[τ1] = 1/(2p − 1).
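A Monte Carlo sanity check of (iv) and (v); the choices p = 0.6 and α = 0.9 are arbitrary and only for illustration:

import random
from math import sqrt

def tau1(p, max_steps=100000):
    """First passage time of the random walk to level 1 (None if not reached)."""
    M = 0
    for n in range(1, max_steps + 1):
        M += 1 if random.random() < p else -1
        if M == 1:
            return n
    return None

p, alpha, trials = 0.6, 0.9, 100_000
q = 1 - p
samples = [tau1(p) for _ in range(trials)]
reached = [t for t in samples if t is not None]        # with p > 1/2 this is essentially all of them
est_alpha = sum(alpha**t for t in reached) / trials    # alpha^tau contributes 0 on {tau = infinity}
est_mean = sum(reached) / len(reached)

print(est_alpha, (1 - sqrt(1 - 4 * alpha**2 * p * q)) / (2 * alpha * q))   # (iv)
print(est_mean, 1 / (2 * p - 1))                                           # (v): 1/(2p-1) = 5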
5.3
(i) σ0 = log(q/p), which is greater than 0 since q > p.
(ii) Follow the steps on p. 121 and p. 122, but replacing 2/(e^σ + e^{−σ}) by 1/f(σ).
Now equation (5.2.11) from p. 122 (with the replacement above) is true for
σ > σ0. Here we cannot let σ go to zero. Instead we let σ go to σ0. Since
from (i) e^{σ0} = q/p, we obtain the answer: p/q.
(iii) We can write E[α^{τ1}] = E(I_{τ1=∞} α^{τ1}) + E(I_{τ1<∞} α^{τ1}). Since 0 < α < 1,
α^{τ1} = 0 for the first term. Therefore, E[α^{τ1}] = E(I_{τ1<∞} α^{τ1}).
Following Exercise 5.2 (iv): E[α^{τ1}] = (1 − √(1 − 4α^2 p(1 − p)))/(2α(1 − p)) (note that
E[α^{τ1}] = E(I_{τ1<∞} α^{τ1}) + E(I_{τ1=∞} α^{τ1})).
(iv) Follow Exercise 5.2 (v): let β = √(1 − 4α^2 p(1 − p)). Then E[I_{τ1<∞} τ1 α^{τ1−1}] =
(1 − β)/(2α^2 (1 − p)β) (I used a CAS = computer algebra system: Maxima). Now
if we let α go to 1, then β goes to 1 − 2p (since √(1 − 4p(1 − p)) = √((2p − 1)^2) =
|2p − 1|, and since p < 1/2, this is equal to 1 − 2p). Therefore we obtain that
E[I_{τ1<∞} τ1] = p/((1 − p)(1 − 2p)).
5.4
(ii) P(τ2 = 2k) = P(τ2 ≤ 2k) − P(τ2 ≤ 2k − 2).
By the reflection principle, P(τ2 ≤ 2k) = P(M2k = 2) + 2P(M2k ≥ 4). Since
the random walk is symmetric, 2P(M2k ≥ 4) = P(M2k ≥ 4) + P(M2k ≤ −4).
Therefore, P(τ2 ≤ 2k) = P(M2k = 2) + P(M2k ≥ 4) + P(M2k ≤ −4). This
is equal to P(τ2 ≤ 2k) = 1 − P(M2k = 0) − P(M2k = 2) (using P(M2k = −2) = P(M2k = 2)).
Replacing k by k − 1, P(τ2 ≤ 2k − 2) = 1 − P(M2k−2 = 0) − P(M2k−2 = 2).
Therefore,
P(τ2 = 2k) = P(M2k−2 = 0) + P(M2k−2 = 2) − P(M2k = 0) − P(M2k = 2).

The complete formula can be written since P(Mm = 0) = m!/((m/2)!)^2 (1/2)^m
and P(Mm = 2) = m!/((m/2 − 1)!(m/2 + 1)!) (1/2)^m.
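A brute-force check of this formula for small k (an illustrative sketch; it enumerates all 2^{2k} paths):

from fractions import Fraction as F
from itertools import product
from math import comb

def P_M(m, level):
    """P(M_m = level) for the symmetric walk (0 if parity or range makes it impossible)."""
    if (m + level) % 2 or abs(level) > m:
        return F(0)
    return F(comb(m, (m + level) // 2), 2**m)

def P_tau2_eq(two_k):
    """P(tau_2 = 2k) by enumerating every path of length 2k."""
    hits = 0
    for steps in product((1, -1), repeat=two_k):
        M, t = 0, None
        for n, x in enumerate(steps, 1):
            M += x
            if M == 2:
                t = n
                break
        hits += (t == two_k)
    return F(hits, 2**two_k)

for k in (1, 2, 3, 4):
    lhs = P_tau2_eq(2 * k)
    rhs = P_M(2*k - 2, 0) + P_M(2*k - 2, 2) - P_M(2*k, 0) - P_M(2*k, 2)
    print(2 * k, lhs, rhs, lhs == rhs)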
5.5
(ii) Replace (1/2)^n in (i) by p^((n−b)/2 + m) q^((n+b)/2 − m).
