
by Yan Zeng

Last updated: August 20, 2007

This is a solution manual for the two-volume textbook Stochastic Calculus for Finance, by Steven Shreve. If you have any comments or find any typos/errors, please email me at yz44@cornell.edu.

The current version omits the following problems. Volume I: 1.5, 3.3, 3.4, 5.7; Volume II: 3.9, 7.1, 7.2, 7.5-7.9, 10.8, 10.9, 10.10.

Acknowledgment. I thank Hua Li (a graduate student at Brown University) for reading through this solution manual and communicating to me several mistakes/typos.

1 Stochastic Calculus for Finance I: The Binomial Asset Pricing Model

1. The Binomial No-Arbitrage Pricing Model

1.1.

Proof. If we get the up state, then $X_1 = X_1(H) = \Delta_0 uS_0 + (1+r)(X_0 - \Delta_0 S_0)$; if we get the down state, then $X_1 = X_1(T) = \Delta_0 dS_0 + (1+r)(X_0 - \Delta_0 S_0)$. If $X_1$ has a positive probability of being strictly positive, then we must either have $X_1(H) > 0$ or $X_1(T) > 0$.

(i) If $X_1(H) > 0$, then $\Delta_0 uS_0 + (1+r)(X_0 - \Delta_0 S_0) > 0$. Plugging in $X_0 = 0$, we get $u\Delta_0 > (1+r)\Delta_0$. By the condition $d < 1+r < u$, we conclude $\Delta_0 > 0$. In this case, $X_1(T) = \Delta_0 dS_0 + (1+r)(X_0 - \Delta_0 S_0) = \Delta_0 S_0[d - (1+r)] < 0$.

(ii) If $X_1(T) > 0$, then we can similarly deduce $\Delta_0 < 0$ and hence $X_1(H) < 0$.

So we cannot have $X_1$ strictly positive with positive probability unless $X_1$ is strictly negative with positive probability as well, regardless of the choice of the number $\Delta_0$.
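A quick numerical check of the two cases above; the tree parameters below ($S_0 = 4$, $u = 2$, $d = 1/2$, $r = 1/4$) are illustrative choices satisfying $d < 1+r < u$, not data taken from the exercise:

```python
# One-period wealth X_1 = Delta_0 * S_1 + (1 + r) * (X_0 - Delta_0 * S_0),
# with X_0 = 0.  For any nonzero Delta_0, the up-state and down-state
# wealths have opposite signs, so no arbitrage is possible.
S0, u, d, r = 4.0, 2.0, 0.5, 0.25   # illustrative values with d < 1 + r < u

def wealth(delta0, up):
    S1 = u * S0 if up else d * S0
    return delta0 * S1 + (1 + r) * (0.0 - delta0 * S0)

for delta0 in [-2.0, -0.5, 0.5, 2.0]:
    XH, XT = wealth(delta0, True), wealth(delta0, False)
    assert XH * XT < 0  # one state wins, the other loses
```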

Remark: Here the condition $X_0 = 0$ is not essential, as long as a proper definition of arbitrage for arbitrary $X_0$ can be given. Indeed, for the one-period binomial model, we can define arbitrage as a trading strategy such that $P(X_1 \geq X_0(1+r)) = 1$ and $P(X_1 > X_0(1+r)) > 0$. First, this is a generalization of the case $X_0 = 0$; second, it is proper because it compares the result of an arbitrary investment involving money and stock markets with that of a safe investment involving only the money market. This can also be seen by regarding $X_0$ as borrowed from the money market account. Then at time 1, we have to pay back $X_0(1+r)$ to the money market account. In summary, arbitrage is a trading strategy that beats the safe investment.

Accordingly, we revise the proof of Exercise 1.1 as follows. If $X_1$ has a positive probability of being strictly larger than $X_0(1+r)$, then either $X_1(H) > X_0(1+r)$ or $X_1(T) > X_0(1+r)$. The first case yields $\Delta_0 S_0(u-1-r) > 0$, i.e. $\Delta_0 > 0$. So $X_1(T) = (1+r)X_0 + \Delta_0 S_0(d-1-r) < (1+r)X_0$. The second case can be similarly analyzed. Hence we cannot have $X_1$ strictly greater than $X_0(1+r)$ with positive probability unless $X_1$ is strictly smaller than $X_0(1+r)$ with positive probability as well.

Finally, we comment that the above formulation of arbitrage is equivalent to the one in the textbook. For details, see Shreve [7], Exercise 5.7.

1.2.


Proof. $X_1(u) = \Delta_0 \cdot 8 + \Gamma_0 \cdot 3 - \frac{5}{4}(4\Delta_0 + 1.20\Gamma_0) = 3\Delta_0 + 1.5\Gamma_0$, and $X_1(d) = \Delta_0 \cdot 2 - \frac{5}{4}(4\Delta_0 + 1.20\Gamma_0) = -3\Delta_0 - 1.5\Gamma_0$. That is, $X_1(u) = -X_1(d)$. So if there is a positive probability that $X_1$ is positive, then there is a positive probability that $X_1$ is negative.
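The cancellation $X_1(u) = -X_1(d)$ can be verified numerically with the problem's data (stock prices 8/2, option payoffs 3/0, option price 1.20, $1+r = 5/4$):

```python
# X_1 = Delta0*S1 + Gamma0*V1 - (5/4)*(4*Delta0 + 1.20*Gamma0), using the
# problem's data: S0 = 4, r = 1/4, option price 1.20, payoffs 3 (up) / 0 (down).
def X1(delta0, gamma0, up):
    S1, V1 = (8.0, 3.0) if up else (2.0, 0.0)
    return delta0 * S1 + gamma0 * V1 - 1.25 * (4.0 * delta0 + 1.20 * gamma0)

# the up-state and down-state wealths always cancel, whatever the positions
for delta0 in [-1.0, 0.3, 2.0]:
    for gamma0 in [-2.0, 0.5, 1.0]:
        assert abs(X1(delta0, gamma0, True) + X1(delta0, gamma0, False)) < 1e-12
```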

Remark: Note the above relation $X_1(u) = -X_1(d)$ is not a coincidence. In general, let $V_1$ denote the payoff of the derivative security at time 1. Suppose $\bar{X}_0$ and $\bar{\Delta}_0$ are chosen in such a way that $V_1$ can be replicated: $(1+r)(\bar{X}_0 - \bar{\Delta}_0 S_0) + \bar{\Delta}_0 S_1 = V_1$. Using the notation of the problem, suppose an agent begins with 0 wealth and at time zero buys $\Delta_0$ shares of stock and $\Gamma_0$ options. He then puts his cash position $-\Delta_0 S_0 - \Gamma_0 \bar{X}_0$ in a money market account. At time one, the value of the agent's portfolio of stock, option and money market assets is
$$X_1 = \Delta_0 S_1 + \Gamma_0 V_1 - (1+r)(\Delta_0 S_0 + \Gamma_0 \bar{X}_0).$$
Plug in the expression of $V_1$ and sort out terms, and we have
$$X_1 = S_0(\Delta_0 + \bar{\Delta}_0\Gamma_0)\left(\frac{S_1}{S_0} - (1+r)\right).$$
Since $d < 1+r < u$, $X_1(u)$ and $X_1(d)$ have opposite signs. So if the price of the option at time zero is $\bar{X}_0$, then there will be no arbitrage.

1.3.

Proof. $V_0 = \frac{1}{1+r}\left(\frac{1+r-d}{u-d}S_1(H) + \frac{u-1-r}{u-d}S_1(T)\right) = \frac{S_0}{1+r}\left(\frac{1+r-d}{u-d}u + \frac{u-1-r}{u-d}d\right) = S_0$. This is not surprising, since this is exactly the cost of replicating $S_1$.
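A small sketch confirming that risk-neutral valuation of the stock itself returns $S_0$ for any admissible parameters:

```python
# The stock is its own replicating portfolio, so its risk-neutral price is S0.
# Parameters are arbitrary subject to 0 < d < 1 + r < u.
def rn_price_of_stock(S0, u, d, r):
    p = (1 + r - d) / (u - d)   # risk-neutral up-probability
    q = (u - 1 - r) / (u - d)
    return (p * u * S0 + q * d * S0) / (1 + r)

for S0, u, d, r in [(4, 2, 0.5, 0.25), (100, 1.2, 0.9, 0.05)]:
    assert abs(rn_price_of_stock(S0, u, d, r) - S0) < 1e-12
```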

Remark: This illustrates an important point. The "fair price" of a stock cannot be determined by risk-neutral pricing, as seen below. Suppose $S_1(H)$ and $S_1(T)$ are given; we could have two current prices, $S_0$ and $S_0'$. Correspondingly, we can get $u$, $d$ and $u'$, $d'$. Because they are determined by $S_0$ and $S_0'$, respectively, it's not surprising that the risk-neutral pricing formula always holds, in both cases. That is,
$$S_0 = \frac{\frac{1+r-d}{u-d}S_1(H) + \frac{u-1-r}{u-d}S_1(T)}{1+r}, \qquad S_0' = \frac{\frac{1+r-d'}{u'-d'}S_1(H) + \frac{u'-1-r}{u'-d'}S_1(T)}{1+r}.$$
Essentially, this is because risk-neutral pricing relies on "fair price = replication cost". Stock as a replicating component cannot determine its own "fair" price via the risk-neutral pricing formula.

1.4.

Proof.
$$\begin{aligned}
X_{n+1}(T) &= \Delta_n dS_n + (1+r)(X_n - \Delta_n S_n)\\
&= \Delta_n S_n(d - 1 - r) + (1+r)V_n\\
&= \frac{V_{n+1}(H) - V_{n+1}(T)}{u-d}(d - 1 - r) + (1+r)\frac{\tilde{p}V_{n+1}(H) + \tilde{q}V_{n+1}(T)}{1+r}\\
&= \tilde{p}(V_{n+1}(T) - V_{n+1}(H)) + \tilde{p}V_{n+1}(H) + \tilde{q}V_{n+1}(T)\\
&= \tilde{p}V_{n+1}(T) + \tilde{q}V_{n+1}(T)\\
&= V_{n+1}(T).
\end{aligned}$$

1.6.

Proof. The bank's trader should set up a replicating portfolio whose payoff is the opposite of the option's payoff. More precisely, we solve the equation
$$(1+r)(X_0 - \Delta_0 S_0) + \Delta_0 S_1 = -(S_1 - K)^+.$$
Then $X_0 = -1.20$ and $\Delta_0 = -\frac{1}{2}$. This means the trader should sell short 0.5 share of stock, put the income 2 into a money market account, and then transfer 1.20 into a separate money market account. At time one, the portfolio consisting of a short position in stock and $0.8(1+r)$ in the money market account will cancel out with the option's payoff. Therefore we end up with $1.20(1+r)$ in the separate money market account.

Remark: This problem illustrates why we are interested in hedging a long position. In case the stock price goes down at time one, the option will expire without any payoff. The initial money 1.20 we paid at time zero will be wasted. By hedging, we convert the option back into liquid assets (cash and stock) which guarantees a sure payoff at time one. Also, cf. page 7, paragraph 2. As to why we hedge a short position (as a writer), see Wilmott [8], pages 11-13.
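The numbers $X_0 = 1.20$ and $\Delta_0 = \frac{1}{2}$ come from solving the two replication equations; a sketch (the tree data $S_0 = 4$, $u = 2$, $d = 1/2$, $r = 1/4$, $K = 5$ are assumed here to match these numbers):

```python
# Solving the pair of one-period replication equations for the call (S1 - K)^+.
# The tree data below are an assumption chosen to reproduce 1.20 and 1/2.
S0, u, d, r, K = 4.0, 2.0, 0.5, 0.25, 5.0
VH, VT = max(u * S0 - K, 0.0), max(d * S0 - K, 0.0)

# subtracting the two equations eliminates the money-market term
delta0 = (VH - VT) / (u * S0 - d * S0)
# back-substitute into the down-state equation to get the initial cost
X0 = delta0 * S0 + (VT - delta0 * d * S0) / (1 + r)

assert abs(delta0 - 0.5) < 1e-12
assert abs(X0 - 1.20) < 1e-12
# the trader who bought the option holds the reverse: -delta0 shares, -X0 cash
```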

1.7.

Proof. The idea is the same as Problem 1.6. The bank's trader only needs to set up the reverse of the replicating trading strategy described in Example 1.2.4. More precisely, he should short sell 0.1733 share of stock, invest the income 0.6933 into the money market account, and transfer 1.376 into a separate money market account. The portfolio consisting of a short position in stock and $0.6933 - 1.376$ in the money market account will replicate the opposite of the option's payoff. After they cancel out, we end up with $1.376(1+r)^3$ in the separate money market account.

1.8. (i)

Proof. $v_n(s, y) = \frac{2}{5}\left(v_{n+1}(2s, y+2s) + v_{n+1}\left(\frac{s}{2}, y+\frac{s}{2}\right)\right)$.

(ii)

Proof. 1.696.

(iii)

Proof. $\Delta_n(s, y) = \frac{v_{n+1}(us, y+us) - v_{n+1}(ds, y+ds)}{(u-d)s}$.
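The recursion in (i) can be checked against brute-force path enumeration. The terminal payoff $f$ below is a hypothetical stand-in (an average-price call); the agreement holds for any payoff depending only on $y$:

```python
# Property check: the recursion v_n(s, y) = (2/5)(v_{n+1}(2s, y + 2s)
# + v_{n+1}(s/2, y + s/2)) agrees with a direct average over all coin paths
# (p = q = 1/2, discount 4/5 per period, S0 = 4, N = 3).
from itertools import product

N, S0 = 3, 4.0
f = lambda y: max(y / (N + 1) - 4.0, 0.0)   # stand-in Asian-call payoff

def v(n, s, y):
    if n == N:
        return f(y)
    return 0.4 * (v(n + 1, 2 * s, y + 2 * s) + v(n + 1, s / 2, y + s / 2))

def brute_force():
    total = 0.0
    for path in product([2.0, 0.5], repeat=N):
        s, y = S0, S0
        for m in path:
            s *= m
            y += s          # running sum Y_n = S_0 + ... + S_n
        total += f(y)
    return (0.8 ** N) * total / 2 ** N

assert abs(v(0, S0, S0) - brute_force()) < 1e-12
```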

1.9. (i)

Proof. Similar to Theorem 1.2.2, but replace $r$, $u$ and $d$ everywhere with $r_n$, $u_n$ and $d_n$. More precisely, set $\tilde{p}_n = \frac{1+r_n-d_n}{u_n-d_n}$ and $\tilde{q}_n = 1 - \tilde{p}_n$. Then
$$V_n = \frac{\tilde{p}_n V_{n+1}(H) + \tilde{q}_n V_{n+1}(T)}{1+r_n}.$$

(ii)

Proof. $\Delta_n = \frac{V_{n+1}(H) - V_{n+1}(T)}{S_{n+1}(H) - S_{n+1}(T)} = \frac{V_{n+1}(H) - V_{n+1}(T)}{(u_n - d_n)S_n}$.

(iii)

Proof. $u_n = \frac{S_{n+1}(H)}{S_n} = \frac{S_n + 10}{S_n} = 1 + \frac{10}{S_n}$ and $d_n = \frac{S_{n+1}(T)}{S_n} = \frac{S_n - 10}{S_n} = 1 - \frac{10}{S_n}$. So the risk-neutral probabilities at time $n$ are $\tilde{p}_n = \frac{1-d_n}{u_n-d_n} = \frac{1}{2}$ and $\tilde{q}_n = \frac{1}{2}$. Risk-neutral pricing implies the price of this call at time zero is 9.375.

2. Probability Theory on Coin Toss Space

2.1. (i)

Proof. $P(A^c) + P(A) = \sum_{\omega \in A^c} P(\omega) + \sum_{\omega \in A} P(\omega) = \sum_{\omega \in \Omega} P(\omega) = 1$.

(ii)

Proof. By induction, it suffices to work on the case $N = 2$. When $A_1$ and $A_2$ are disjoint, $P(A_1 \cup A_2) = \sum_{\omega \in A_1 \cup A_2} P(\omega) = \sum_{\omega \in A_1} P(\omega) + \sum_{\omega \in A_2} P(\omega) = P(A_1) + P(A_2)$. When $A_1$ and $A_2$ are arbitrary, using the result when they are disjoint, we have $P(A_1 \cup A_2) = P((A_1 - A_2) \cup A_2) = P(A_1 - A_2) + P(A_2) \leq P(A_1) + P(A_2)$.

2.2. (i)

Proof. $\tilde{P}(S_3 = 32) = \tilde{p}^3 = \frac{1}{8}$, $\tilde{P}(S_3 = 8) = 3\tilde{p}^2\tilde{q} = \frac{3}{8}$, $\tilde{P}(S_3 = 2) = 3\tilde{p}\tilde{q}^2 = \frac{3}{8}$, and $\tilde{P}(S_3 = 0.5) = \tilde{q}^3 = \frac{1}{8}$.

(ii)

Proof. $\tilde{E}[S_1] = 8\tilde{P}(S_1 = 8) + 2\tilde{P}(S_1 = 2) = 8\tilde{p} + 2\tilde{q} = 5$, $\tilde{E}[S_2] = 16\tilde{p}^2 + 4 \cdot 2\tilde{p}\tilde{q} + 1 \cdot \tilde{q}^2 = 6.25$, and $\tilde{E}[S_3] = 32 \cdot \frac{1}{8} + 8 \cdot \frac{3}{8} + 2 \cdot \frac{3}{8} + 0.5 \cdot \frac{1}{8} = 7.8125$. So the average rates of growth of the stock price under $\tilde{P}$ are, respectively: $\tilde{r}_0 = \frac{5}{4} - 1 = 0.25$, $\tilde{r}_1 = \frac{6.25}{5} - 1 = 0.25$ and $\tilde{r}_2 = \frac{7.8125}{6.25} - 1 = 0.25$.

(iii)

Proof. $P(S_3 = 32) = \left(\frac{2}{3}\right)^3 = \frac{8}{27}$, $P(S_3 = 8) = 3 \cdot \left(\frac{2}{3}\right)^2 \cdot \frac{1}{3} = \frac{4}{9}$, $P(S_3 = 2) = 3 \cdot \frac{2}{3} \cdot \left(\frac{1}{3}\right)^2 = \frac{2}{9}$, and $P(S_3 = 0.5) = \frac{1}{27}$. Accordingly, $E[S_1] = 6$, $E[S_2] = 9$ and $E[S_3] = 13.5$. So the average rates of growth of the stock price under $P$ are, respectively: $r_0 = \frac{6}{4} - 1 = 0.5$, $r_1 = \frac{9}{6} - 1 = 0.5$, and $r_2 = \frac{13.5}{9} - 1 = 0.5$.
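The growth rates in (ii) and (iii) follow from $E[S_n] = S_0(pu + qd)^n$ for i.i.d. multiplicative moves; a quick check:

```python
# Under the risk-neutral measure (p = 1/2) the stock grows at rate r = 0.25
# on average; under the actual measure (p = 2/3) it grows at rate 0.5.
# Model data: S0 = 4, u = 2, d = 1/2.
def mean_stock(p, n, S0=4.0, u=2.0, d=0.5):
    # E[S_n] = S0 * (p*u + (1-p)*d)^n for i.i.d. multiplicative moves
    return S0 * (p * u + (1 - p) * d) ** n

for p, rate in [(0.5, 0.25), (2.0 / 3.0, 0.5)]:
    for n in range(3):
        growth = mean_stock(p, n + 1) / mean_stock(p, n) - 1
        assert abs(growth - rate) < 1e-12
```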

2.3.

Proof. Apply conditional Jensen's inequality.

2.4. (i)

Proof. $E_n[M_{n+1}] = M_n + E_n[X_{n+1}] = M_n + E[X_{n+1}] = M_n$.

(ii)

Proof. $E_n\left[\frac{S_{n+1}}{S_n}\right] = E_n\left[e^{\sigma X_{n+1}}\frac{2}{e^\sigma + e^{-\sigma}}\right] = \frac{2}{e^\sigma + e^{-\sigma}}E[e^{\sigma X_{n+1}}] = 1$.

2.5. (i)

Proof.
$$\begin{aligned}
2I_n &= 2\sum_{j=0}^{n-1} M_j(M_{j+1} - M_j) = 2\sum_{j=0}^{n-1} M_jM_{j+1} - \sum_{j=1}^{n-1} M_j^2 - \sum_{j=1}^{n-1} M_j^2\\
&= 2\sum_{j=0}^{n-1} M_jM_{j+1} + M_n^2 - \sum_{j=0}^{n-1} M_{j+1}^2 - \sum_{j=0}^{n-1} M_j^2\\
&= M_n^2 - \sum_{j=0}^{n-1}(M_{j+1} - M_j)^2 = M_n^2 - \sum_{j=0}^{n-1} X_{j+1}^2 = M_n^2 - n.
\end{aligned}$$

(ii)

Proof. $E_n[f(I_{n+1})] = E_n[f(I_n + M_n(M_{n+1} - M_n))] = E_n[f(I_n + M_nX_{n+1})] = \frac{1}{2}[f(I_n + M_n) + f(I_n - M_n)] = g(I_n)$, where $g(x) = \frac{1}{2}\left[f\left(x + \sqrt{2x+n}\right) + f\left(x - \sqrt{2x+n}\right)\right]$, since $2I_n + n = M_n^2$ gives $|M_n| = \sqrt{2I_n + n}$.
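The identity $I_n = \frac{1}{2}(M_n^2 - n)$ from (i) can be verified exhaustively over all sign sequences:

```python
# Exhaustive check of 2*I_n = M_n^2 - n over all 2^n sequences of X_j = +/-1,
# where M_j = X_1 + ... + X_j and I_n = sum_j M_j (M_{j+1} - M_j).
from itertools import product

def check(n):
    for xs in product([1, -1], repeat=n):
        M = [0]
        for x in xs:
            M.append(M[-1] + x)
        I = sum(M[j] * (M[j + 1] - M[j]) for j in range(n))
        assert 2 * I == M[n] ** 2 - n

for n in range(1, 7):
    check(n)
```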

2.6.

Proof. $E_n[I_{n+1} - I_n] = E_n[\Delta_n(M_{n+1} - M_n)] = \Delta_n E_n[M_{n+1} - M_n] = 0$.

2.7.

Proof. We denote by $X_n$ the result of the $n$-th coin toss, where Head is represented by $X = 1$ and Tail is represented by $X = -1$. We also suppose $P(X = 1) = P(X = -1) = \frac{1}{2}$. Define $S_1 = X_1$ and $S_{n+1} = S_n + b_n(X_1, \cdots, X_n)X_{n+1}$, where $b_n(\cdot)$ is a bounded function on $\{-1, 1\}^n$, to be determined later on. Clearly $(S_n)_{n \geq 1}$ is an adapted stochastic process, and we can show it is a martingale. Indeed, $E_n[S_{n+1} - S_n] = b_n(X_1, \cdots, X_n)E_n[X_{n+1}] = 0$.

For any arbitrary function $f$, $E_n[f(S_{n+1})] = \frac{1}{2}[f(S_n + b_n(X_1, \cdots, X_n)) + f(S_n - b_n(X_1, \cdots, X_n))]$. Then intuitively, $E_n[f(S_{n+1})]$ cannot be solely dependent upon $S_n$ when the $b_n$'s are properly chosen. Therefore in general, $(S_n)_{n \geq 1}$ cannot be a Markov process.

Remark: If $X_n$ is regarded as the gain/loss of the $n$-th bet in a gambling game, then $S_n$ would be the wealth at time $n$. $b_n$ is therefore the wager for the $(n+1)$-th bet and is devised according to past gambling results.

2.8. (i)

Proof. Note $M_n = E_n[M_N]$ and $M_n' = E_n[M_N']$.

(ii)

Proof. In the proof of Theorem 1.2.2, we proved by induction that $X_n = V_n$, where $X_n$ is defined by (1.2.14) of Chapter 1. In other words, the sequence $(V_n)_{0 \leq n \leq N}$ can be realized as the value process of a portfolio which consists of stock and money market accounts. Since $\left(\frac{X_n}{(1+r)^n}\right)_{0 \leq n \leq N}$ is a martingale under $\tilde{P}$ (Theorem 2.4.5), $\left(\frac{V_n}{(1+r)^n}\right)_{0 \leq n \leq N}$ is a martingale under $\tilde{P}$.

(iii)

Proof. $\frac{V_n}{(1+r)^n} = \tilde{E}_n\left[\frac{V_N}{(1+r)^N}\right]$, so $V_0, \frac{V_1}{1+r}, \cdots, \frac{V_{N-1}}{(1+r)^{N-1}}, \frac{V_N}{(1+r)^N}$ is a martingale under $\tilde{P}$.

(iv)

Proof. Combine (ii) and (iii), then use (i).

2.9. (i)

Proof. $u_0 = \frac{S_1(H)}{S_0} = 2$, $d_0 = \frac{S_1(T)}{S_0} = \frac{1}{2}$, $u_1(H) = \frac{S_2(HH)}{S_1(H)} = 1.5$, $d_1(H) = \frac{S_2(HT)}{S_1(H)} = 1$, $u_1(T) = \frac{S_2(TH)}{S_1(T)} = 4$ and $d_1(T) = \frac{S_2(TT)}{S_1(T)} = 1$.

So $\tilde{p}_0 = \frac{1+r_0-d_0}{u_0-d_0} = \frac{1}{2}$, $\tilde{q}_0 = \frac{1}{2}$, $\tilde{p}_1(H) = \frac{1+r_1(H)-d_1(H)}{u_1(H)-d_1(H)} = \frac{1}{2}$, $\tilde{q}_1(H) = \frac{1}{2}$, $\tilde{p}_1(T) = \frac{1+r_1(T)-d_1(T)}{u_1(T)-d_1(T)} = \frac{1}{6}$, and $\tilde{q}_1(T) = \frac{5}{6}$.

Therefore $\tilde{P}(HH) = \tilde{p}_0\tilde{p}_1(H) = \frac{1}{4}$, $\tilde{P}(HT) = \tilde{p}_0\tilde{q}_1(H) = \frac{1}{4}$, $\tilde{P}(TH) = \tilde{q}_0\tilde{p}_1(T) = \frac{1}{12}$ and $\tilde{P}(TT) = \tilde{q}_0\tilde{q}_1(T) = \frac{5}{12}$.

The proofs of Theorem 2.4.4, Theorem 2.4.5 and Theorem 2.4.7 still work for the random interest rate model, with proper modifications (i.e. $\tilde{P}$ would be constructed according to the conditional probabilities $\tilde{P}(\omega_{n+1} = H \mid \omega_1, \cdots, \omega_n) := \tilde{p}_n$ and $\tilde{P}(\omega_{n+1} = T \mid \omega_1, \cdots, \omega_n) := \tilde{q}_n$. Cf. notes on page 39.). So the time-zero value of an option that pays off $V_2$ at time two is given by the risk-neutral pricing formula $V_0 = \tilde{E}\left[\frac{V_2}{(1+r_0)(1+r_1)}\right]$.

(ii)

Proof. $V_2(HH) = 5$, $V_2(HT) = 1$, $V_2(TH) = 1$ and $V_2(TT) = 0$. So $V_1(H) = \frac{\tilde{p}_1(H)V_2(HH) + \tilde{q}_1(H)V_2(HT)}{1+r_1(H)} = 2.4$, $V_1(T) = \frac{\tilde{p}_1(T)V_2(TH) + \tilde{q}_1(T)V_2(TT)}{1+r_1(T)} = \frac{1}{9}$, and $V_0 = \frac{\tilde{p}_0V_1(H) + \tilde{q}_0V_1(T)}{1+r_0} \approx 1$.
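Parts (i) and (ii) can be reproduced by backward induction with node-dependent $\tilde{p}_n$; the strike $K = 7$ below is inferred from the stated payoffs $V_2$ and is an assumption of the sketch:

```python
# Backward induction with the state-dependent risk-neutral probabilities of
# part (i).  The payoff V2 = (S2 - 7)^+ reproduces the values in part (ii);
# the stock tree and rates are those used in the solution above.
S = {"": 4.0, "H": 8.0, "T": 2.0, "HH": 12.0, "HT": 8.0, "TH": 8.0, "TT": 2.0}
r = {"": 0.25, "H": 0.25, "T": 0.5}   # r0, r1(H), r1(T)

def p_tilde(node):
    u = S[node + "H"] / S[node]
    d = S[node + "T"] / S[node]
    return (1 + r[node] - d) / (u - d)

V2 = {w: max(S[w] - 7.0, 0.0) for w in ["HH", "HT", "TH", "TT"]}
V1 = {w: (p_tilde(w) * V2[w + "H"] + (1 - p_tilde(w)) * V2[w + "T"]) / (1 + r[w])
      for w in ["H", "T"]}
V0 = (p_tilde("") * V1["H"] + (1 - p_tilde("")) * V1["T"]) / (1 + r[""])

assert abs(p_tilde("") - 0.5) < 1e-12 and abs(p_tilde("T") - 1 / 6) < 1e-12
assert abs(V1["H"] - 2.4) < 1e-12 and abs(V1["T"] - 1 / 9) < 1e-12
```

Running the recursion gives $V_0 \approx 1.0044$, consistent with the "$\approx 1$" above.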

(iii)

Proof. $\Delta_0 = \frac{V_1(H) - V_1(T)}{S_1(H) - S_1(T)} = \frac{2.4 - \frac{1}{9}}{8 - 2} = 0.4 - \frac{1}{54} \approx 0.3815$.

(iv)

Proof. $\Delta_1(H) = \frac{V_2(HH) - V_2(HT)}{S_2(HH) - S_2(HT)} = \frac{5 - 1}{12 - 8} = 1$.

2.10. (i)

Proof. $\tilde{E}_n\left[\frac{X_{n+1}}{(1+r)^{n+1}}\right] = \tilde{E}_n\left[\frac{\Delta_n Y_{n+1}S_n}{(1+r)^{n+1}} + \frac{(1+r)(X_n - \Delta_n S_n)}{(1+r)^{n+1}}\right] = \frac{\Delta_n S_n}{(1+r)^{n+1}}\tilde{E}_n[Y_{n+1}] + \frac{X_n - \Delta_n S_n}{(1+r)^n} = \frac{\Delta_n S_n}{(1+r)^{n+1}}(u\tilde{p} + d\tilde{q}) + \frac{X_n - \Delta_n S_n}{(1+r)^n} = \frac{\Delta_n S_n + X_n - \Delta_n S_n}{(1+r)^n} = \frac{X_n}{(1+r)^n}$.

(ii)

Proof. From (2.8.2), we have
$$\begin{cases}
\Delta_n uS_n + (1+r)(X_n - \Delta_n S_n) = X_{n+1}(H)\\
\Delta_n dS_n + (1+r)(X_n - \Delta_n S_n) = X_{n+1}(T).
\end{cases}$$
So $\Delta_n = \frac{X_{n+1}(H) - X_{n+1}(T)}{uS_n - dS_n}$ and $X_n = \tilde{E}_n\left[\frac{X_{n+1}}{1+r}\right]$. To make the portfolio replicate the payoff at time $N$, we must have $X_N = V_N$. So $X_n = \tilde{E}_n\left[\frac{X_N}{(1+r)^{N-n}}\right] = \tilde{E}_n\left[\frac{V_N}{(1+r)^{N-n}}\right]$. Since $(X_n)_{0 \leq n \leq N}$ is the value process of the unique replicating portfolio (uniqueness is guaranteed by the uniqueness of the solution to the above linear equations), the no-arbitrage price of $V_N$ at time $n$ is $V_n = X_n = \tilde{E}_n\left[\frac{V_N}{(1+r)^{N-n}}\right]$.

(iii)

Proof.
$$\begin{aligned}
\tilde{E}_n\left[\frac{S_{n+1}}{(1+r)^{n+1}}\right] &= \frac{1}{(1+r)^{n+1}}\tilde{E}_n[(1 - A_{n+1})Y_{n+1}S_n]\\
&= \frac{S_n}{(1+r)^{n+1}}[\tilde{p}(1 - A_{n+1}(H))u + \tilde{q}(1 - A_{n+1}(T))d]\\
&< \frac{S_n}{(1+r)^{n+1}}[\tilde{p}u + \tilde{q}d]\\
&= \frac{S_n}{(1+r)^n}.
\end{aligned}$$
If $A_{n+1}$ is a constant $a$, then $\tilde{E}_n\left[\frac{S_{n+1}}{(1+r)^{n+1}}\right] = \frac{S_n}{(1+r)^{n+1}}(1-a)(\tilde{p}u + \tilde{q}d) = \frac{S_n}{(1+r)^n}(1-a)$. So $\tilde{E}_n\left[\frac{S_{n+1}}{(1+r)^{n+1}(1-a)^{n+1}}\right] = \frac{S_n}{(1+r)^n(1-a)^n}$.

2.11. (i)

Proof. $F_N + P_N = S_N - K + (K - S_N)^+ = (S_N - K)^+ = C_N$.

(ii)

Proof. $C_n = \tilde{E}_n\left[\frac{C_N}{(1+r)^{N-n}}\right] = \tilde{E}_n\left[\frac{F_N}{(1+r)^{N-n}}\right] + \tilde{E}_n\left[\frac{P_N}{(1+r)^{N-n}}\right] = F_n + P_n$.

(iii)

Proof. $F_0 = \tilde{E}\left[\frac{F_N}{(1+r)^N}\right] = \frac{1}{(1+r)^N}\tilde{E}[S_N - K] = S_0 - \frac{K}{(1+r)^N}$.

(iv)

Proof. At time zero, the trader has $F_0 - S_0$ in the money market account and one share of stock. At time $N$, the trader has a wealth of $(F_0 - S_0)(1+r)^N + S_N = -K + S_N = F_N$.

(v)

Proof. By (ii), $C_0 = F_0 + P_0$. Since $F_0 = S_0 - \frac{(1+r)^N S_0}{(1+r)^N} = 0$, $C_0 = P_0$.

(vi)

Proof. By (ii), $C_n = P_n$ if and only if $F_n = 0$. Note $F_n = \tilde{E}_n\left[\frac{S_N - K}{(1+r)^{N-n}}\right] = S_n - \frac{(1+r)^N S_0}{(1+r)^{N-n}} = S_n - S_0(1+r)^n$. So $F_n$ is not necessarily zero and $C_n = P_n$ is not necessarily true for $n \geq 1$.
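The forward price $F_0 = S_0 - \frac{K}{(1+r)^N}$ and the parity $C_n = F_n + P_n$ at time zero are easy to confirm numerically (the tree parameters are illustrative):

```python
# Time-zero prices of forward, call and put in a two-period binomial tree,
# by risk-neutral path enumeration.  Parameters are illustrative.
from itertools import product

S0, u, d, r, K, N = 4.0, 2.0, 0.5, 0.25, 5.0, 2
p = (1 + r - d) / (u - d)

def price(payoff):
    # time-zero risk-neutral price of payoff(S_N)
    total = 0.0
    for path in product([u, d], repeat=N):
        s, prob = S0, 1.0
        for m in path:
            s *= m
            prob *= p if m == u else (1 - p)
        total += prob * payoff(s)
    return total / (1 + r) ** N

F0 = price(lambda s: s - K)
C0 = price(lambda s: max(s - K, 0.0))
P0 = price(lambda s: max(K - s, 0.0))
assert abs(F0 - (S0 - K / (1 + r) ** N)) < 1e-12   # forward price formula
assert abs(C0 - (F0 + P0)) < 1e-12                 # put-call parity
```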

2.12.

Proof. First, the no-arbitrage price of the chooser option at time $m$ must be $\max(C, P)$, where
$$C = \tilde{E}\left[\frac{(S_N - K)^+}{(1+r)^{N-m}}\right], \quad\text{and}\quad P = \tilde{E}\left[\frac{(K - S_N)^+}{(1+r)^{N-m}}\right].$$
That is, $C$ is the no-arbitrage price of a call option at time $m$ and $P$ is the no-arbitrage price of a put option at time $m$. Both of them have maturity date $N$ and strike price $K$. Suppose the market is liquid; then the chooser option is equivalent to receiving a payoff of $\max(C, P)$ at time $m$. Therefore, its current no-arbitrage price should be $\tilde{E}\left[\frac{\max(C, P)}{(1+r)^m}\right]$.

By the put-call parity, $C = S_m - \frac{K}{(1+r)^{N-m}} + P$. So $\max(C, P) = P + \left(S_m - \frac{K}{(1+r)^{N-m}}\right)^+$. Therefore, the time-zero price of a chooser option is
$$\tilde{E}\left[\frac{P}{(1+r)^m}\right] + \tilde{E}\left[\frac{\left(S_m - \frac{K}{(1+r)^{N-m}}\right)^+}{(1+r)^m}\right] = \tilde{E}\left[\frac{(K - S_N)^+}{(1+r)^N}\right] + \tilde{E}\left[\frac{\left(S_m - \frac{K}{(1+r)^{N-m}}\right)^+}{(1+r)^m}\right].$$
The first term stands for the time-zero price of a put, expiring at time $N$ and having strike price $K$, and the second term stands for the time-zero price of a call, expiring at time $m$ and having strike price $\frac{K}{(1+r)^{N-m}}$.

If we feel unconvinced by the above argument that the chooser option's no-arbitrage price is $\tilde{E}\left[\frac{\max(C, P)}{(1+r)^m}\right]$, due to the economic argument involved (like "the chooser option is equivalent to receiving a payoff of $\max(C, P)$ at time $m$"), then we have the following mathematically rigorous argument. First, we can construct a portfolio $\Delta_0, \cdots, \Delta_{m-1}$ whose payoff at time $m$ is $\max(C, P)$. Fix $\omega$; if $C(\omega) > P(\omega)$, we can construct a portfolio $\Delta_m', \cdots, \Delta_{N-1}'$ whose payoff at time $N$ is $(S_N - K)^+$; if $C(\omega) < P(\omega)$, we can construct a portfolio $\Delta_m'', \cdots, \Delta_{N-1}''$ whose payoff at time $N$ is $(K - S_N)^+$. By defining (for $m \leq k \leq N-1$)
$$\Delta_k(\omega) = \begin{cases}\Delta_k'(\omega) & \text{if } C(\omega) > P(\omega)\\ \Delta_k''(\omega) & \text{if } C(\omega) < P(\omega),\end{cases}$$
we get a portfolio $(\Delta_n)_{0 \leq n \leq N-1}$ whose payoff is the same as that of the chooser option. So the no-arbitrage price process of the chooser option must be equal to the value process of the replicating portfolio. In particular, $V_0 = X_0 = \tilde{E}\left[\frac{X_m}{(1+r)^m}\right] = \tilde{E}\left[\frac{\max(C, P)}{(1+r)^m}\right]$.
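The decomposition can be checked in a small tree; the data below ($S_0 = 4$, $u = 2$, $d = 1/2$, $r = 1/4$, $K = 5$, $m = 1$, $N = 2$) are illustrative, and the plain averages rely on $\tilde{p} = 1/2$ for these parameters:

```python
# Chooser price via E[max(C, P)]/(1+r)^m equals put(K, expiry N)
# plus call(K/(1+r)^(N-m), expiry m).  Illustrative parameters only.
from itertools import product

S0, u, d, r, K, m, N = 4.0, 2.0, 0.5, 0.25, 5.0, 1, 2
# risk-neutral up-probability (1+r-d)/(u-d) = 1/2 here, so plain averages work

def S_path(path):
    s = S0
    for c in path:
        s *= u if c == "H" else d
    return s

def cond_price(path, payoff, horizon):
    # time-len(path) price of payoff(S_horizon), by path enumeration
    k = horizon - len(path)
    vals = [payoff(S_path(path + "".join(w))) for w in product("HT", repeat=k)]
    return sum(vals) / 2 ** k / (1 + r) ** k

call = lambda strike: (lambda s: max(s - strike, 0.0))
put = lambda strike: (lambda s: max(strike - s, 0.0))

# chooser: receive max(C, P) at time m
lhs = sum(max(cond_price(w, call(K), N), cond_price(w, put(K), N))
          for w in ["H", "T"]) / 2 / (1 + r) ** m
# decomposition from the displayed identity
rhs = (cond_price("", put(K), N)
       + cond_price("", call(K / (1 + r) ** (N - m)), m))
assert abs(lhs - rhs) < 1e-12
```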

2.13. (i)

Proof. Note that under both the actual probability $P$ and the risk-neutral probability $\tilde{P}$, the coin tosses $\omega_n$'s are i.i.d.. So without loss of generality, we work on $P$. For any function $g$, $E_n[g(S_{n+1}, Y_{n+1})] = E_n\left[g\left(\frac{S_{n+1}}{S_n}S_n, Y_n + \frac{S_{n+1}}{S_n}S_n\right)\right] = pg(uS_n, Y_n + uS_n) + qg(dS_n, Y_n + dS_n)$, which is a function of $(S_n, Y_n)$. So $(S_n, Y_n)_{0 \leq n \leq N}$ is Markov under $P$.

(ii)

Proof. Set $v_N(s, y) = f\left(\frac{y}{N+1}\right)$. Then $v_N(S_N, Y_N) = f\left(\frac{\sum_{n=0}^N S_n}{N+1}\right) = V_N$. Suppose $v_{n+1}$ is given. Then $V_n = \tilde{E}_n\left[\frac{V_{n+1}}{1+r}\right] = \tilde{E}_n\left[\frac{v_{n+1}(S_{n+1}, Y_{n+1})}{1+r}\right] = \frac{1}{1+r}[\tilde{p}v_{n+1}(uS_n, Y_n + uS_n) + \tilde{q}v_{n+1}(dS_n, Y_n + dS_n)] = v_n(S_n, Y_n)$, where
$$v_n(s, y) = \frac{\tilde{p}v_{n+1}(us, y + us) + \tilde{q}v_{n+1}(ds, y + ds)}{1+r}.$$

2.14. (i)

Proof. For $n \leq M$, $(S_n, Y_n) = (S_n, 0)$. Since the coin tosses $\omega_n$'s are i.i.d. under $\tilde{P}$, $(S_n, Y_n)_{0 \leq n \leq M}$ is Markov under $\tilde{P}$. More precisely, for any function $h$, $\tilde{E}_n[h(S_{n+1})] = \tilde{p}h(uS_n) + \tilde{q}h(dS_n)$, for $n = 0, 1, \cdots, M-1$.

For any function $g$ of two variables, we have $\tilde{E}_M[g(S_{M+1}, Y_{M+1})] = \tilde{E}_M[g(S_{M+1}, S_{M+1})] = \tilde{p}g(uS_M, uS_M) + \tilde{q}g(dS_M, dS_M)$. And for $n \geq M+1$, $\tilde{E}_n[g(S_{n+1}, Y_{n+1})] = \tilde{E}_n\left[g\left(\frac{S_{n+1}}{S_n}S_n, Y_n + \frac{S_{n+1}}{S_n}S_n\right)\right] = \tilde{p}g(uS_n, Y_n + uS_n) + \tilde{q}g(dS_n, Y_n + dS_n)$, so $(S_n, Y_n)_{0 \leq n \leq N}$ is Markov under $\tilde{P}$.

(ii)

Proof. Set $v_N(s, y) = f\left(\frac{y}{N-M}\right)$. Then $v_N(S_N, Y_N) = f\left(\frac{\sum_{k=M+1}^N S_k}{N-M}\right) = V_N$. Suppose $v_{n+1}$ is already given.

a) If $n > M$, then $\tilde{E}_n[v_{n+1}(S_{n+1}, Y_{n+1})] = \tilde{p}v_{n+1}(uS_n, Y_n + uS_n) + \tilde{q}v_{n+1}(dS_n, Y_n + dS_n)$. So $v_n(s, y) = \tilde{p}v_{n+1}(us, y + us) + \tilde{q}v_{n+1}(ds, y + ds)$.

b) If $n = M$, then $\tilde{E}_M[v_{M+1}(S_{M+1}, Y_{M+1})] = \tilde{p}v_{M+1}(uS_M, uS_M) + \tilde{q}v_{M+1}(dS_M, dS_M)$. So $v_M(s) = \tilde{p}v_{M+1}(us, us) + \tilde{q}v_{M+1}(ds, ds)$.

c) If $n < M$, then $\tilde{E}_n[v_{n+1}(S_{n+1})] = \tilde{p}v_{n+1}(uS_n) + \tilde{q}v_{n+1}(dS_n)$. So $v_n(s) = \tilde{p}v_{n+1}(us) + \tilde{q}v_{n+1}(ds)$.

3. State Prices

3.1.

Proof. Note $\tilde{Z}(\omega) := \frac{P(\omega)}{\tilde{P}(\omega)} = \frac{1}{Z(\omega)}$. Apply Theorem 3.1.1 with $P$, $\tilde{P}$, $Z$ replaced by $\tilde{P}$, $P$, $\tilde{Z}$, and we get the analogues of properties (i)-(iii) of Theorem 3.1.1.

3.2. (i)

Proof. $\tilde{P}(\Omega) = \sum_{\omega \in \Omega} \tilde{P}(\omega) = \sum_{\omega \in \Omega} Z(\omega)P(\omega) = E[Z] = 1$.

(ii)

Proof. $\tilde{E}[Y] = \sum_{\omega \in \Omega} Y(\omega)\tilde{P}(\omega) = \sum_{\omega \in \Omega} Y(\omega)Z(\omega)P(\omega) = E[YZ]$.

(iii)

Proof. $\tilde{P}(A) = \sum_{\omega \in A} Z(\omega)P(\omega)$. Since $P(A) = 0$, $P(\omega) = 0$ for any $\omega \in A$. So $\tilde{P}(A) = 0$.

(iv)

Proof. If $\tilde{P}(A) = \sum_{\omega \in A} Z(\omega)P(\omega) = 0$, by $P(Z > 0) = 1$ we conclude $P(\omega) = 0$ for any $\omega \in A$. So $P(A) = \sum_{\omega \in A} P(\omega) = 0$.

(v)

Proof. $P(A) = 1 \Leftrightarrow P(A^c) = 0 \Leftrightarrow \tilde{P}(A^c) = 0 \Leftrightarrow \tilde{P}(A) = 1$.

(vi)

Proof. Pick $\omega_0$ such that $P(\omega_0) > 0$, and define
$$Z(\omega) = \begin{cases}0, & \text{if } \omega \neq \omega_0\\ \frac{1}{P(\omega_0)}, & \text{if } \omega = \omega_0.\end{cases}$$
Then $P(Z \geq 0) = 1$ and $E[Z] = \frac{1}{P(\omega_0)} \cdot P(\omega_0) = 1$.

Clearly $\tilde{P}(\Omega \setminus \{\omega_0\}) = E[Z1_{\Omega \setminus \{\omega_0\}}] = \sum_{\omega \neq \omega_0} Z(\omega)P(\omega) = 0$. But $P(\Omega \setminus \{\omega_0\}) = 1 - P(\omega_0) > 0$ if $P(\omega_0) < 1$. Hence in the case $0 < P(\omega_0) < 1$, $P$ and $\tilde{P}$ are not equivalent. If $P(\omega_0) = 1$, then $E[Z] = 1$ if and only if $Z(\omega_0) = 1$. In this case $\tilde{P}(\omega_0) = Z(\omega_0)P(\omega_0) = 1$, and $\tilde{P}$ and $P$ have to be equivalent.

In summary, if we can find $\omega_0$ such that $0 < P(\omega_0) < 1$, then $Z$ as constructed above would induce a probability $\tilde{P}$ that is not equivalent to $P$.

3.5. (i)

Proof. $Z(HH) = \frac{9}{16}$, $Z(HT) = \frac{9}{8}$, $Z(TH) = \frac{3}{8}$ and $Z(TT) = \frac{15}{4}$.

(ii)

Proof. $Z_1(H) = E_1[Z_2](H) = Z_2(HH)P(\omega_2 = H \mid \omega_1 = H) + Z_2(HT)P(\omega_2 = T \mid \omega_1 = H) = \frac{3}{4}$. $Z_1(T) = E_1[Z_2](T) = Z_2(TH)P(\omega_2 = H \mid \omega_1 = T) + Z_2(TT)P(\omega_2 = T \mid \omega_1 = T) = \frac{3}{2}$.

(iii)

Proof.
$$V_1(H) = \frac{Z_2(HH)V_2(HH)P(\omega_2 = H \mid \omega_1 = H) + Z_2(HT)V_2(HT)P(\omega_2 = T \mid \omega_1 = H)}{Z_1(H)(1 + r_1(H))} = 2.4,$$
$$V_1(T) = \frac{Z_2(TH)V_2(TH)P(\omega_2 = H \mid \omega_1 = T) + Z_2(TT)V_2(TT)P(\omega_2 = T \mid \omega_1 = T)}{Z_1(T)(1 + r_1(T))} = \frac{1}{9},$$
and
$$V_0 = \frac{Z_2(HH)V_2(HH)}{(1 + \frac{1}{4})(1 + \frac{1}{4})}P(HH) + \frac{Z_2(HT)V_2(HT)}{(1 + \frac{1}{4})(1 + \frac{1}{4})}P(HT) + \frac{Z_2(TH)V_2(TH)}{(1 + \frac{1}{4})(1 + \frac{1}{2})}P(TH) + 0 \approx 1.$$
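State-price pricing agrees with risk-neutral pricing since $Z \cdot P = \tilde{P}$ state by state; the actual measure below (i.i.d. $p = 2/3$) is inferred from the $Z$ values in (i) and is an assumption of the sketch:

```python
# E[Z * D * V2] under the actual measure equals the risk-neutral price
# E-tilde[D * V2].  Probabilities, discounts and payoffs follow the data
# of Exercises 2.9 / 3.5; the actual measure (p = 2/3 i.i.d.) is inferred.
F = ["HH", "HT", "TH", "TT"]
P  = {"HH": 4/9, "HT": 2/9, "TH": 2/9, "TT": 1/9}       # actual measure
Pt = {"HH": 1/4, "HT": 1/4, "TH": 1/12, "TT": 5/12}     # risk-neutral measure
Z  = {w: Pt[w] / P[w] for w in F}                       # Radon-Nikodym values
D  = {"HH": 1/(1.25*1.25), "HT": 1/(1.25*1.25),
      "TH": 1/(1.25*1.5),  "TT": 1/(1.25*1.5)}          # path discount factors
V2 = {"HH": 5.0, "HT": 1.0, "TH": 1.0, "TT": 0.0}

assert abs(Z["HH"] - 9/16) < 1e-12 and abs(Z["TT"] - 15/4) < 1e-12
lhs = sum(P[w] * Z[w] * D[w] * V2[w] for w in F)        # state-price pricing
rhs = sum(Pt[w] * D[w] * V2[w] for w in F)              # risk-neutral pricing
assert abs(lhs - rhs) < 1e-12
```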

3.6.

Proof. $U'(x) = \frac{1}{x}$, so $I(x) = \frac{1}{x}$. (3.3.26) gives $E\left[\frac{Z}{(1+r)^N}\frac{(1+r)^N}{\lambda Z}\right] = X_0$. So $\lambda = \frac{1}{X_0}$. By (3.3.25), we have $X_N = \frac{(1+r)^N}{\lambda Z} = \frac{X_0(1+r)^N}{Z}$. Hence $X_n = \tilde{E}_n\left[\frac{X_N}{(1+r)^{N-n}}\right] = \tilde{E}_n\left[\frac{X_0(1+r)^n}{Z}\right] = X_0(1+r)^n\tilde{E}_n\left[\frac{1}{Z}\right] = X_0(1+r)^n\frac{1}{Z_n}E_n\left[Z \cdot \frac{1}{Z}\right] = \frac{X_0}{\zeta_n}$, where $\zeta_n = \frac{Z_n}{(1+r)^n}$ and the second-to-last "=" comes from Lemma 3.2.6.

3.7.

Proof. $U'(x) = x^{p-1}$ and so $I(x) = x^{\frac{1}{p-1}}$. By (3.3.26), we have $E\left[\frac{Z}{(1+r)^N}\left(\frac{\lambda Z}{(1+r)^N}\right)^{\frac{1}{p-1}}\right] = X_0$. Solving it for $\lambda$, we get
$$\lambda = \left(\frac{X_0}{E\left[Z^{\frac{p}{p-1}}(1+r)^{-\frac{Np}{p-1}}\right]}\right)^{p-1} = \frac{X_0^{p-1}(1+r)^{Np}}{\left(E\left[Z^{\frac{p}{p-1}}\right]\right)^{p-1}}.$$
So by (3.3.25), $X_N = \left(\frac{\lambda Z}{(1+r)^N}\right)^{\frac{1}{p-1}} = \lambda^{\frac{1}{p-1}}\frac{Z^{\frac{1}{p-1}}}{(1+r)^{\frac{N}{p-1}}} = \frac{X_0(1+r)^{\frac{Np}{p-1}}}{E\left[Z^{\frac{p}{p-1}}\right]}\cdot\frac{Z^{\frac{1}{p-1}}}{(1+r)^{\frac{N}{p-1}}} = \frac{(1+r)^N X_0 Z^{\frac{1}{p-1}}}{E\left[Z^{\frac{p}{p-1}}\right]}$.

3.8. (i)

Proof. $\frac{d}{dx}(U(x) - yx) = U'(x) - y$, which vanishes at $x = I(y)$ since $U'(I(y)) = y$. Because $\frac{d^2}{dx^2}(U(x) - yx) = U''(x) \leq 0$ ($U$ is concave), $x = I(y)$ is a maximum point. Therefore $U(x) - yx \leq U(I(y)) - yI(y)$ for every $x$.

(ii)

Proof. Following the hint of the problem, we have
$$E[U(X_N)] - E\left[X_N\frac{\lambda Z}{(1+r)^N}\right] \leq E\left[U\left(I\left(\frac{\lambda Z}{(1+r)^N}\right)\right)\right] - E\left[\frac{\lambda Z}{(1+r)^N}I\left(\frac{\lambda Z}{(1+r)^N}\right)\right],$$
i.e. $E[U(X_N)] - \lambda X_0 \leq E[U(X_N^*)] - E\left[\frac{\lambda Z}{(1+r)^N}X_N^*\right] = E[U(X_N^*)] - \lambda X_0$. So $E[U(X_N)] \leq E[U(X_N^*)]$.

3.9. (i)

Proof. $X_n = \tilde{E}_n\left[\frac{X_N}{(1+r)^{N-n}}\right]$. So if $X_N \geq 0$, then $X_n \geq 0$ for all $n$.

(ii)

Proof. a) If $0 \leq x < \gamma$ and $0 < y \leq \frac{1}{\gamma}$, then $U(x) - yx = -yx \leq 0$ and $U(I(y)) - yI(y) = U(\gamma) - y\gamma = 1 - y\gamma \geq 0$. So $U(x) - yx \leq U(I(y)) - yI(y)$.
b) If $0 \leq x < \gamma$ and $y > \frac{1}{\gamma}$, then $U(x) - yx = -yx \leq 0$ and $U(I(y)) - yI(y) = U(0) - y \cdot 0 = 0$. So $U(x) - yx \leq U(I(y)) - yI(y)$.
c) If $x \geq \gamma$ and $0 < y \leq \frac{1}{\gamma}$, then $U(x) - yx = 1 - yx \leq 1 - y\gamma = U(I(y)) - yI(y)$. So $U(x) - yx \leq U(I(y)) - yI(y)$.
d) If $x \geq \gamma$ and $y > \frac{1}{\gamma}$, then $U(x) - yx = 1 - yx < 0 = U(I(y)) - yI(y)$. So $U(x) - yx \leq U(I(y)) - yI(y)$.

(iii)

Proof. Using (ii) and setting $x = X_N$, $y = \frac{\lambda Z}{(1+r)^N}$, where $X_N$ is a random variable satisfying $\tilde{E}\left[\frac{X_N}{(1+r)^N}\right] = X_0$, we have
$$E[U(X_N)] - E\left[\frac{\lambda Z}{(1+r)^N}X_N\right] \leq E[U(X_N^*)] - E\left[\frac{\lambda Z}{(1+r)^N}X_N^*\right].$$
That is, $E[U(X_N)] - \lambda X_0 \leq E[U(X_N^*)] - \lambda X_0$. So $E[U(X_N)] \leq E[U(X_N^*)]$.

(iv)

Proof. Plugging $p_m$ and $\zeta_m$ into (3.6.4), we have
$$X_0 = \sum_{m=1}^{2^N} p_m\zeta_m I(\lambda\zeta_m) = \sum_{m=1}^{2^N} p_m\zeta_m\gamma 1_{\{\lambda\zeta_m \leq \frac{1}{\gamma}\}}.$$
So $\frac{X_0}{\gamma} = \sum_{m=1}^{2^N} p_m\zeta_m 1_{\{\lambda\zeta_m \leq \frac{1}{\gamma}\}}$. Suppose there is a solution $\lambda$ to (3.6.4). Note $\frac{X_0}{\gamma} > 0$, so $\{m : \lambda\zeta_m \leq \frac{1}{\gamma}\} \neq \emptyset$. Let $K = \max\{m : \lambda\zeta_m \leq \frac{1}{\gamma}\}$; then $\lambda\zeta_K \leq \frac{1}{\gamma} < \lambda\zeta_{K+1}$. So $\zeta_K < \zeta_{K+1}$ and $\frac{X_0}{\gamma} = \sum_{m=1}^K p_m\zeta_m$. (Note, however, that $K$ could be $2^N$; in this case, $\zeta_{K+1}$ is interpreted as $\infty$. Also, note we are looking for a positive solution $\lambda > 0$.) Conversely, suppose there exists some $K$ so that $\zeta_K < \zeta_{K+1}$ and $\sum_{m=1}^K \zeta_m p_m = \frac{X_0}{\gamma}$. Then we can find $\lambda > 0$ such that $\zeta_K \leq \frac{1}{\lambda\gamma} < \zeta_{K+1}$. For such $\lambda$, we have
$$E\left[\frac{Z}{(1+r)^N}I\left(\frac{\lambda Z}{(1+r)^N}\right)\right] = \sum_{m=1}^{2^N} p_m\zeta_m\gamma 1_{\{\lambda\zeta_m \leq \frac{1}{\gamma}\}} = \sum_{m=1}^K p_m\zeta_m\gamma = X_0.$$
Hence (3.6.4) has a solution.

(v)

Proof. $X_N(\omega^m) = I(\lambda\zeta_m) = \gamma 1_{\{\lambda\zeta_m \leq \frac{1}{\gamma}\}} = \begin{cases}\gamma, & \text{if } m \leq K\\ 0, & \text{if } m \geq K+1.\end{cases}$

4. American Derivative Securities

Before proceeding to the exercise problems, we first give a brief summary of pricing American derivative securities as presented in the textbook. We shall use the notation of the book.

From the buyer's perspective: At time $n$, if the derivative security has not been exercised, then the buyer can choose a policy $\tau$ with $\tau \in \mathcal{S}_n$. The valuation formula for cash flow (Theorem 2.4.8) gives a fair price for the derivative security exercised according to $\tau$:
$$V_n(\tau) = \sum_{k=n}^N \tilde{E}_n\left[1_{\{\tau = k\}}\frac{1}{(1+r)^{k-n}}G_k\right] = \tilde{E}_n\left[1_{\{\tau \leq N\}}\frac{1}{(1+r)^{\tau - n}}G_\tau\right].$$
The buyer wants to consider all the possible $\tau$'s, so that he can find the least upper bound of the security value, which will be the maximum price of the derivative security acceptable to him. This is the price given by Definition 4.4.1: $V_n = \max_{\tau \in \mathcal{S}_n} \tilde{E}_n\left[1_{\{\tau \leq N\}}\frac{1}{(1+r)^{\tau - n}}G_\tau\right]$.

From the seller's perspective: A price process $(V_n)_{0 \leq n \leq N}$ is acceptable to him if and only if at time $n$, he can construct a portfolio at cost $V_n$ so that (i) $V_n \geq G_n$ and (ii) he needs no further investing into the portfolio as time goes by. Formally, the seller can find $(\Delta_n)_{0 \leq n \leq N}$ and $(C_n)_{0 \leq n \leq N}$ so that $C_n \geq 0$ and $V_{n+1} = \Delta_n S_{n+1} + (1+r)(V_n - C_n - \Delta_n S_n)$. Since $\left(\frac{S_n}{(1+r)^n}\right)_{0 \leq n \leq N}$ is a martingale under the risk-neutral measure $\tilde{P}$, we conclude
$$\tilde{E}_n\left[\frac{V_{n+1}}{(1+r)^{n+1}}\right] - \frac{V_n}{(1+r)^n} = -\frac{C_n}{(1+r)^n} \leq 0,$$
i.e. $\left(\frac{V_n}{(1+r)^n}\right)_{0 \leq n \leq N}$ is a supermartingale. This inspires us to check if the converse is also true. This is exactly the content of Theorem 4.4.4. So $(V_n)_{0 \leq n \leq N}$ is the value process of a portfolio that needs no further investing if and only if $\left(\frac{V_n}{(1+r)^n}\right)_{0 \leq n \leq N}$ is a supermartingale under $\tilde{P}$ (note this is independent of the requirement $V_n \geq G_n$). In summary, a price process $(V_n)_{0 \leq n \leq N}$ is acceptable to the seller if and only if (i) $V_n \geq G_n$; (ii) $\left(\frac{V_n}{(1+r)^n}\right)_{0 \leq n \leq N}$ is a supermartingale under $\tilde{P}$.

Theorem 4.4.2 shows the buyer's upper bound is the seller's lower bound. So it gives the price acceptable to both. Theorem 4.4.3 gives a specific algorithm for calculating the price, Theorem 4.4.4 establishes the one-to-one correspondence between super-replication and the supermartingale property, and finally, Theorem 4.4.5 shows how to decide on the optimal exercise policy.
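The algorithm of Theorem 4.4.3 can be sketched by recursion over the tree; the put data below ($S_0 = 4$, $u = 2$, $d = 1/2$, $r = 1/4$, $K = 5$, $N = 2$) are assumed to match Figure 4.2.1 and the price 1.36 used in the exercises that follow:

```python
# American price by backward recursion: V_n = max(G_n, discounted
# risk-neutral expectation of V_{n+1}).  Tree data are an assumption
# (taken to match Figure 4.2.1 and the price 1.36 in Exercises 4.2/4.4).
S0, u, d, r, K, N = 4.0, 2.0, 0.5, 0.25, 5.0, 2
p = q = 0.5   # risk-neutral probabilities (1+r-d)/(u-d) = 1/2 here

def american(payoff, path=()):
    s = S0
    for m in path:
        s *= m
    if len(path) == N:
        return payoff(s)
    cont = (p * american(payoff, path + (u,))
            + q * american(payoff, path + (d,))) / (1 + r)
    return max(payoff(s), cont)   # exercise now vs. continue

assert abs(american(lambda s: max(K - s, 0.0)) - 1.36) < 1e-12
```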

4.1. (i)

Proof. $V_2^P(HH) = 0$, $V_2^P(HT) = V_2^P(TH) = 0.8$, $V_2^P(TT) = 3$, $V_1^P(H) = 0.32$, $V_1^P(T) = 2$, $V_0^P = 0.928$.

(ii)

Proof. $V_0^C = 5$.

(iii)

Proof. $g_S(s) = |4 - s|$. We apply Theorem 4.4.3 and have $V_2^S(HH) = 12.8$, $V_2^S(HT) = V_2^S(TH) = 2.4$, $V_2^S(TT) = 3$, $V_1^S(H) = 6.08$, $V_1^S(T) = 2.16$ and $V_0^S = 3.296$.

(iv)

Proof. First, we note the simple inequality
$$\max(a_1, b_1) + \max(a_2, b_2) \geq \max(a_1 + a_2, b_1 + b_2),$$
where ">" holds if and only if $b_1 > a_1, b_2 < a_2$ or $b_1 < a_1, b_2 > a_2$. By induction, we can show
$$\begin{aligned}
V_n^S &= \max\left\{g_S(S_n), \frac{\tilde{p}V_{n+1}^S(H) + \tilde{q}V_{n+1}^S(T)}{1+r}\right\}\\
&\leq \max\left\{g_P(S_n) + g_C(S_n), \frac{\tilde{p}V_{n+1}^P(H) + \tilde{q}V_{n+1}^P(T)}{1+r} + \frac{\tilde{p}V_{n+1}^C(H) + \tilde{q}V_{n+1}^C(T)}{1+r}\right\}\\
&\leq \max\left\{g_P(S_n), \frac{\tilde{p}V_{n+1}^P(H) + \tilde{q}V_{n+1}^P(T)}{1+r}\right\} + \max\left\{g_C(S_n), \frac{\tilde{p}V_{n+1}^C(H) + \tilde{q}V_{n+1}^C(T)}{1+r}\right\}\\
&= V_n^P + V_n^C.
\end{aligned}$$
As to when "<" holds, suppose $m = \max\{n : V_n^S < V_n^P + V_n^C\}$. Then clearly $m \leq N-1$, and it is possible that $\{n : V_n^S < V_n^P + V_n^C\} = \emptyset$. When this set is not empty, $m$ is characterized as
$$m = \max\left\{n : g_P(S_n) < \frac{\tilde{p}V_{n+1}^P(H) + \tilde{q}V_{n+1}^P(T)}{1+r} \text{ and } g_C(S_n) > \frac{\tilde{p}V_{n+1}^C(H) + \tilde{q}V_{n+1}^C(T)}{1+r}, \text{ or } g_P(S_n) > \frac{\tilde{p}V_{n+1}^P(H) + \tilde{q}V_{n+1}^P(T)}{1+r} \text{ and } g_C(S_n) < \frac{\tilde{p}V_{n+1}^C(H) + \tilde{q}V_{n+1}^C(T)}{1+r}\right\}.$$

4.2.

Proof. For this problem, we need Figure 4.2.1, Figure 4.4.1 and Figure 4.4.2. Then
$$\Delta_1(H) = \frac{V_2(HH) - V_2(HT)}{S_2(HH) - S_2(HT)} = -\frac{1}{12}, \qquad \Delta_1(T) = \frac{V_2(TH) - V_2(TT)}{S_2(TH) - S_2(TT)} = -1,$$
and
$$\Delta_0 = \frac{V_1(H) - V_1(T)}{S_1(H) - S_1(T)} \approx -0.433.$$
The optimal exercise time is $\tau = \inf\{n : V_n = G_n\}$. So $\tau(HH) = \infty$, $\tau(HT) = 2$, $\tau(TH) = \tau(TT) = 1$.

Therefore, the agent borrows 1.36 at time zero and buys the put. At the same time, to hedge the long position, he needs to borrow again and buy 0.433 shares of stock at time zero.

At time one, if the result of the coin toss is tail and the stock price goes down to 2, the value of the portfolio is $X_1(T) = (1+r)(-1.36 - 0.433S_0) + 0.433S_1(T) = (1+\frac{1}{4})(-1.36 - 0.433 \times 4) + 0.433 \times 2 = -3$. The agent should exercise the put at time one and get 3 to pay off his debt.

At time one, if the result of the coin toss is head and the stock price goes up to 8, the value of the portfolio is $X_1(H) = (1+r)(-1.36 - 0.433S_0) + 0.433S_1(H) = -0.4$. The agent should borrow to buy $\frac{1}{12}$ shares of stock. At time two, if the result of the coin toss is head and the stock price goes up to 16, the value of the portfolio is $X_2(HH) = (1+r)(X_1(H) - \frac{1}{12}S_1(H)) + \frac{1}{12}S_2(HH) = 0$, and the agent should let the put expire. If at time two the result of the coin toss is tail and the stock price goes down to 4, the value of the portfolio is $X_2(HT) = (1+r)(X_1(H) - \frac{1}{12}S_1(H)) + \frac{1}{12}S_2(HT) = -1$. The agent should exercise the put to get 1. This will pay off his debt.

4.3.

Proof. We need Figure 1.2.2 for this problem, and we calculate the intrinsic value process and price process of the put as follows.

For the intrinsic value process, $G_0 = 0$, $G_1(T) = 1$, $G_2(TH) = \frac{2}{3}$, $G_2(TT) = \frac{5}{3}$, $G_3(THT) = 1$, $G_3(TTH) = 1.75$, $G_3(TTT) = 2.125$. All the other outcomes of $G$ are negative.

For the price process, $V_0 = 0.4$, $V_1(T) = 1$, $V_2(TH) = \frac{2}{3}$, $V_2(TT) = \frac{5}{3}$, $V_3(THT) = 1$, $V_3(TTH) = 1.75$, $V_3(TTT) = 2.125$. All the other outcomes of $V$ are zero.

Therefore the time-zero price of the derivative security is 0.4 and the optimal exercise time satisfies
$$\tau(\omega) = \begin{cases}\infty & \text{if } \omega_1 = H,\\ 1 & \text{if } \omega_1 = T.\end{cases}$$

4.4.

Proof. 1.36 is the cost of super-replicating the American derivative security. It enables us to construct a portfolio sufficient to pay off the derivative security, no matter when the derivative security is exercised. So to hedge our short position after selling the put, there is no need to charge the insider more than 1.36.

4.5.

Proof. The stopping times in $\mathcal{S}_0$ are

(1) $\tau \equiv 0$;
(2) $\tau \equiv 1$;
(3) $\tau(HT) = \tau(HH) = 1$; $\tau(TH), \tau(TT) \in \{2, \infty\}$ (4 different ones);
(4) $\tau(HT), \tau(HH) \in \{2, \infty\}$; $\tau(TH) = \tau(TT) = 1$ (4 different ones);
(5) $\tau(HT), \tau(HH), \tau(TH), \tau(TT) \in \{2, \infty\}$ (16 different ones).

When the option is out of the money, the following stopping times do not exercise:

(i) $\tau \equiv 0$;
(ii) $\tau(HT) \in \{2, \infty\}$, $\tau(HH) = \infty$, $\tau(TH), \tau(TT) \in \{2, \infty\}$ (8 different ones);
(iii) $\tau(HT) \in \{2, \infty\}$, $\tau(HH) = \infty$, $\tau(TH) = \tau(TT) = 1$ (2 different ones).

For (i), $\tilde{E}\left[1_{\{\tau \leq 2\}}\left(\frac{4}{5}\right)^\tau G_\tau\right] = G_0 = 1$. For (ii), $\tilde{E}\left[1_{\{\tau \leq 2\}}\left(\frac{4}{5}\right)^\tau G_\tau\right] \leq \tilde{E}\left[1_{\{\tau^* \leq 2\}}\left(\frac{4}{5}\right)^{\tau^*} G_{\tau^*}\right]$, where $\tau^*(HT) = 2$, $\tau^*(HH) = \infty$, $\tau^*(TH) = \tau^*(TT) = 2$. So $\tilde{E}\left[1_{\{\tau^* \leq 2\}}\left(\frac{4}{5}\right)^{\tau^*} G_{\tau^*}\right] = \frac{1}{4}\left[\left(\frac{4}{5}\right)^2 \cdot 1 + \left(\frac{4}{5}\right)^2(1 + 4)\right] = 0.96$. For (iii), $\tilde{E}\left[1_{\{\tau \leq 2\}}\left(\frac{4}{5}\right)^\tau G_\tau\right]$ has the biggest value when $\tau$ satisfies $\tau(HT) = 2$, $\tau(HH) = \infty$, $\tau(TH) = \tau(TT) = 1$. This value is 1.36.

4.6. (i)

Proof. The value of the put at time $N$, if it is not exercised at previous times, is $K - S_N$. Hence
$$V_{N-1} = \max\Big\{K - S_{N-1},\ \tilde E_{N-1}\Big[\frac{V_N}{1+r}\Big]\Big\} = \max\Big\{K - S_{N-1},\ \frac{K}{1+r} - S_{N-1}\Big\} = K - S_{N-1}.$$
The second equality comes from the fact that the discounted stock price process is a martingale under the risk-neutral probability. By induction, we can show $V_n = K - S_n$ ($0 \le n \le N$). So by Theorem 4.4.5, the optimal exercise policy is to sell the stock at time zero, and the value of this derivative security is $K - S_0$.

Remark: We cheated a little bit by using the American algorithm and Theorem 4.4.5, since they are developed for the case where $\tau$ is allowed to be $\infty$. But intuitively, results in this chapter should still hold for the case $\tau \le N$, provided we replace $\max\{G_n, 0\}$ with $G_n$.

(ii)

Proof. This is because at time $N$, if we have to exercise the put and $K - S_N < 0$, we can exercise the European call to set off the negative payoff. In effect, throughout the portfolio's lifetime, the portfolio has intrinsic values greater than those of an American put struck at $K$ with expiration time $N$. So we must have
$$V_0^{AP} \le V_0 + V_0^{EC} = K - S_0 + V_0^{EC}.$$

(iii)

Proof. Let $V_0^{EP}$ denote the time-zero value of a European put with strike $K$ and expiration time $N$. Then
$$V_0^{AP} \ge V_0^{EP} = V_0^{EC} - \tilde E\Big[\frac{S_N - K}{(1+r)^N}\Big] = V_0^{EC} - S_0 + \frac{K}{(1+r)^N}.$$

4.7.

Proof. $V_N = S_N - K$, and
$$V_{N-1} = \max\Big\{S_{N-1} - K,\ \tilde E_{N-1}\Big[\frac{V_N}{1+r}\Big]\Big\} = \max\Big\{S_{N-1} - K,\ S_{N-1} - \frac{K}{1+r}\Big\} = S_{N-1} - \frac{K}{1+r}.$$
By induction, we can prove $V_n = S_n - \frac{K}{(1+r)^{N-n}}$ ($0 \le n \le N$) and $V_n > G_n$ for $0 \le n \le N-1$. So the time-zero value is $S_0 - \frac{K}{(1+r)^N}$ and the optimal exercise time is $N$.
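The induction can be cross-checked numerically: running the American algorithm on the intrinsic value $g(s) = s - K$ (no flooring at zero, since this derivative must be exercised) reproduces $S_0 - K/(1+r)^N$, with the continuation value winning at every date before $N$. The parameter values below are illustrative assumptions, not part of the exercise.

```python
# American algorithm for the derivative with intrinsic value S_n - K (Exercise 4.7).
# Illustrative parameters (assumed): S0=4, u=2, d=1/2, r=1/4, K=5, N=3.

def american_price(S0=4.0, u=2.0, d=0.5, r=0.25, K=5.0, N=3):
    p = (1 + r - d) / (u - d)
    q = 1 - p
    V = [S0 * u**k * d**(N - k) - K for k in range(N + 1)]
    for n in range(N - 1, -1, -1):
        V = [max(S0 * u**k * d**(n - k) - K,           # exercise now
                 (p * V[k + 1] + q * V[k]) / (1 + r))  # continue
             for k in range(n + 1)]
    return V[0]

V0 = american_price()
closed_form = 4.0 - 5.0 / 1.25**3   # S0 - K/(1+r)^N
```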

5. Random Walk

5.1. (i)

Proof. $E[\alpha^{\tau_2}] = E[\alpha^{(\tau_2-\tau_1)+\tau_1}] = E[\alpha^{\tau_2-\tau_1}]E[\alpha^{\tau_1}] = E[\alpha^{\tau_1}]^2$.

(ii)

Proof. If we define $M_n^{(m)} = M_{n+m} - M_m$ ($m = 1, 2, \dots$), then the $(M_{\cdot}^{(m)})_m$, as random functions, are i.i.d. with distributions the same as that of $M$. So $\tau_{m+1} - \tau_m = \inf\{n : M_n^{(\tau_m)} = 1\}$ are i.i.d. with distributions the same as that of $\tau_1$. Therefore
$$E[\alpha^{\tau_m}] = E[\alpha^{(\tau_m-\tau_{m-1})+(\tau_{m-1}-\tau_{m-2})+\cdots+\tau_1}] = E[\alpha^{\tau_1}]^m.$$

(iii)

Proof. Yes, since the argument of (ii) still works for asymmetric random walk.

5.2. (i)

Proof. $f'(\sigma) = pe^{\sigma} - qe^{-\sigma}$, so $f'(\sigma) > 0$ if and only if $\sigma > \frac{1}{2}(\ln q - \ln p)$. Since $\frac{1}{2}(\ln q - \ln p) < 0$, $f$ is strictly increasing on $[0,\infty)$ and $f(\sigma) > f(0) = 1$ for all $\sigma > 0$.

(ii)

Proof. $E_n\Big[\dfrac{S_{n+1}}{S_n}\Big] = E_n\Big[e^{\sigma X_{n+1}}\dfrac{1}{f(\sigma)}\Big] = pe^{\sigma}\dfrac{1}{f(\sigma)} + qe^{-\sigma}\dfrac{1}{f(\sigma)} = 1$.

(iii)

Proof. By the optional stopping theorem, $E[S_{n\wedge\tau_1}] = E[S_0] = 1$. Note $S_{n\wedge\tau_1} = e^{\sigma M_{n\wedge\tau_1}}\big(\frac{1}{f(\sigma)}\big)^{n\wedge\tau_1} \le e^{\sigma}$, so by the bounded convergence theorem,
$$E[1_{\{\tau_1<\infty\}}S_{\tau_1}] = E[\lim_{n\to\infty}S_{n\wedge\tau_1}] = \lim_{n\to\infty}E[S_{n\wedge\tau_1}] = 1,$$
that is, $E\big[1_{\{\tau_1<\infty\}}e^{\sigma}\big(\frac{1}{f(\sigma)}\big)^{\tau_1}\big] = 1$. So $e^{-\sigma} = E\big[1_{\{\tau_1<\infty\}}\big(\frac{1}{f(\sigma)}\big)^{\tau_1}\big]$. Let $\sigma\downarrow 0$; again by the bounded convergence theorem, $1 = E\big[1_{\{\tau_1<\infty\}}\big(\frac{1}{f(0)}\big)^{\tau_1}\big] = P(\tau_1 < \infty)$.

(iv)

Proof. Set $\alpha = \frac{1}{f(\sigma)} = \frac{1}{pe^{\sigma}+qe^{-\sigma}}$; then as $\sigma$ varies from 0 to $\infty$, $\alpha$ takes all the values in $(0,1)$. Writing $\sigma$ in terms of $\alpha$, we have $e^{\sigma} = \frac{1\pm\sqrt{1-4pq\alpha^2}}{2p\alpha}$ (note $4pq\alpha^2 < 4\big(\frac{p+q}{2}\big)^2 = 1$). We want to choose $\sigma > 0$, so we should take $\sigma = \ln\Big(\frac{1+\sqrt{1-4pq\alpha^2}}{2p\alpha}\Big)$. Therefore
$$E[\alpha^{\tau_1}] = e^{-\sigma} = \frac{2p\alpha}{1+\sqrt{1-4pq\alpha^2}} = \frac{1-\sqrt{1-4pq\alpha^2}}{2q\alpha}.$$


(v)

Proof. $\frac{\partial}{\partial\alpha}E[\alpha^{\tau_1}] = E[\tau_1\alpha^{\tau_1-1}]$, and
$$\frac{\partial}{\partial\alpha}\frac{1-\sqrt{1-4pq\alpha^2}}{2q\alpha} = \frac{1}{2q}\Big[-\frac{1}{\alpha^2}\big(1-\sqrt{1-4pq\alpha^2}\big) + \frac{1}{\alpha}\cdot\frac{4pq\alpha}{\sqrt{1-4pq\alpha^2}}\Big].$$
So
$$E[\tau_1] = \lim_{\alpha\uparrow 1}\frac{\partial}{\partial\alpha}E[\alpha^{\tau_1}] = \frac{1}{2q}\Big[\frac{4pq}{\sqrt{1-4pq}} - \big(1-\sqrt{1-4pq}\big)\Big] = \frac{1}{2p-1},$$
where we used $\sqrt{1-4pq} = p - q = 2p-1$.

5.3. (i)

Proof. Solve the equation $pe^{\sigma}+qe^{-\sigma} = 1$; we get $e^{\sigma} = \frac{1\pm\sqrt{1-4pq}}{2p}$, i.e. $e^{\sigma} = 1$ or $e^{\sigma} = \frac{q}{p} = \frac{1-p}{p}$. Set $\sigma_0 = \ln q - \ln p > 0$; then $f(\sigma_0) = 1$ and $f'(\sigma) > 0$ for $\sigma \ge \sigma_0$. So $f(\sigma) > 1$ for all $\sigma > \sigma_0$.

(ii)

Proof. As in Exercise 5.2, $S_n = e^{\sigma M_n}\big(\frac{1}{f(\sigma)}\big)^n$ is a martingale, and $1 = E[S_0] = E[S_{n\wedge\tau_1}] = E\big[e^{\sigma M_{n\wedge\tau_1}}\big(\frac{1}{f(\sigma)}\big)^{n\wedge\tau_1}\big]$. Suppose $\sigma > \sigma_0$; then by the bounded convergence theorem,
$$1 = E\Big[\lim_{n\to\infty}e^{\sigma M_{n\wedge\tau_1}}\Big(\frac{1}{f(\sigma)}\Big)^{n\wedge\tau_1}\Big] = E\Big[1_{\{\tau_1<\infty\}}e^{\sigma}\Big(\frac{1}{f(\sigma)}\Big)^{\tau_1}\Big].$$
Let $\sigma\downarrow\sigma_0$; we get $P(\tau_1<\infty) = e^{-\sigma_0} = \frac{p}{q} < 1$.

(iii)

Proof. From (ii), we can see $E\big[1_{\{\tau_1<\infty\}}\big(\frac{1}{f(\sigma)}\big)^{\tau_1}\big] = e^{-\sigma}$ for $\sigma > \sigma_0$. Set $\alpha = \frac{1}{f(\sigma)}$; then $e^{\sigma} = \frac{1\pm\sqrt{1-4pq\alpha^2}}{2p\alpha}$. We need to choose the root so that $e^{\sigma} > e^{\sigma_0} = \frac{q}{p}$, so $\sigma = \ln\Big(\frac{1+\sqrt{1-4pq\alpha^2}}{2p\alpha}\Big)$; then
$$E[\alpha^{\tau_1}1_{\{\tau_1<\infty\}}] = \frac{1-\sqrt{1-4pq\alpha^2}}{2q\alpha}.$$

(iv)

Proof. Using $\sqrt{1-4pq} = q - p = 2q-1$ (here $p < q$),
$$E[\tau_1 1_{\{\tau_1<\infty\}}] = \frac{\partial}{\partial\alpha}E[\alpha^{\tau_1}1_{\{\tau_1<\infty\}}]\Big|_{\alpha=1} = \frac{1}{2q}\Big[\frac{4pq}{\sqrt{1-4pq}} - \big(1-\sqrt{1-4pq}\big)\Big] = \frac{1}{2q}\Big[\frac{4pq}{2q-1} - 2p\Big] = \frac{p}{q}\cdot\frac{1}{2q-1}.$$

5.4. (i)

Proof. By Exercise 5.1, $E[\alpha^{\tau_2}] = E[\alpha^{\tau_1}]^2 = \Big(\frac{1-\sqrt{1-\alpha^2}}{\alpha}\Big)^2$. Expanding this in powers of $\alpha$ and matching it with $E[\alpha^{\tau_2}] = \sum_{k=1}^{\infty}P(\tau_2 = 2k)\alpha^{2k}$, we get
$$P(\tau_2 = 2k) = \frac{(2k)!}{4^k(k+1)!\,k!}.$$
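The distribution just derived can be checked by brute force: enumerate all $2^{2k}$ paths of length $2k$ and count those whose running sum first reaches level 2 at exactly time $2k$. A small sketch:

```python
from itertools import product
from math import factorial

def p_tau2_formula(k):
    return factorial(2 * k) / (4**k * factorial(k + 1) * factorial(k))

def p_tau2_bruteforce(k):
    """P(first passage of the symmetric walk to level 2 happens at time 2k)."""
    n = 2 * k
    hits = 0
    for steps in product((1, -1), repeat=n):
        m, first = 0, None
        for i, s in enumerate(steps, 1):
            m += s
            if m == 2:
                first = i
                break
        if first == n:
            hits += 1
    return hits / 2**n

checks = [abs(p_tau2_formula(k) - p_tau2_bruteforce(k)) for k in (1, 2, 3)]
```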

(ii)

Proof. $P(\tau_2 = 2) = \frac{1}{4}$. For $k \ge 2$, $P(\tau_2 = 2k) = P(\tau_2 \le 2k) - P(\tau_2 \le 2k-2)$.
$$P(\tau_2 \le 2k) = P(M_{2k} = 2) + P(M_{2k} \ge 4) + P(\tau_2 \le 2k, M_{2k} \le 0)$$
$$= P(M_{2k} = 2) + 2P(M_{2k} \ge 4) = P(M_{2k} = 2) + P(M_{2k} \ge 4) + P(M_{2k} \le -4) = 1 - P(M_{2k} = -2) - P(M_{2k} = 0),$$
using the reflection principle and symmetry. Similarly, $P(\tau_2 \le 2k-2) = 1 - P(M_{2k-2} = -2) - P(M_{2k-2} = 0)$. So
$$P(\tau_2 = 2k) = P(M_{2k-2} = 2) + P(M_{2k-2} = 0) - P(M_{2k} = 2) - P(M_{2k} = 0)$$
$$= \Big(\frac{1}{2}\Big)^{2k-2}\Big[\frac{(2k-2)!}{k!(k-2)!} + \frac{(2k-2)!}{(k-1)!(k-1)!}\Big] - \Big(\frac{1}{2}\Big)^{2k}\Big[\frac{(2k)!}{(k+1)!(k-1)!} + \frac{(2k)!}{k!k!}\Big]$$
$$= \frac{(2k)!}{4^k(k+1)!k!}\Big[\frac{4(k+1)k(k-1)}{2k(2k-1)} + \frac{4(k+1)k^2}{2k(2k-1)} - k - (k+1)\Big]$$
$$= \frac{(2k)!}{4^k(k+1)!k!}\Big[\frac{2(k^2-1)}{2k-1} + \frac{2(k^2+k)}{2k-1} - \frac{4k^2-1}{2k-1}\Big] = \frac{(2k)!}{4^k(k+1)!k!}.$$

5.5. (i)

Proof. For every path that reaches level $m$ by time $n$ and resides at $b$ at time $n$, there corresponds a reflected path that resides at $2m-b$ at time $n$. So
$$P(M_n^* \ge m, M_n = b) = P(M_n = 2m-b) = \Big(\frac{1}{2}\Big)^n\frac{n!}{\big(m+\frac{n-b}{2}\big)!\big(\frac{n+b}{2}-m\big)!}.$$

(ii)

Proof. By the same reflection argument, the number of paths that reach level $m$ and end at $b$ equals the number of paths ending at $2m-b$, namely $\frac{n!}{(m+\frac{n-b}{2})!(\frac{n+b}{2}-m)!}$. Each path counted ends at $b$ and therefore has $\frac{n+b}{2}$ up steps and $\frac{n-b}{2}$ down steps, so
$$P(M_n^* \ge m, M_n = b) = \frac{n!}{\big(m+\frac{n-b}{2}\big)!\big(\frac{n+b}{2}-m\big)!}\,p^{\frac{n+b}{2}}q^{\frac{n-b}{2}}.$$
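The reflection count can be verified by enumerating all paths of a short asymmetric walk; in the sketch below each path ending at $b$ carries probability $p^{(n+b)/2}q^{(n-b)/2}$ (the exponents come from counting up and down steps), and the formula is tested for $b \le m$, where the reflection argument applies. Parameter values are arbitrary.

```python
from itertools import product
from math import comb

def reflection_formula(n, m, b, p):
    """P(max of walk >= m, M_n = b), for b <= m, via the reflection count."""
    q = 1 - p
    if (n - b) % 2:
        return 0.0
    up_reflected = m + (n - b) // 2     # up steps of a path ending at 2m - b
    if up_reflected < 0 or up_reflected > n:
        return 0.0
    return comb(n, up_reflected) * p**((n + b) // 2) * q**((n - b) // 2)

def reflection_bruteforce(n, m, b, p):
    q = 1 - p
    total = 0.0
    for steps in product((1, -1), repeat=n):
        walk, peak = 0, 0
        for s in steps:
            walk += s
            peak = max(peak, walk)
        if peak >= m and walk == b:
            ups = (n + b) // 2
            total += p**ups * q**(n - ups)
    return total

cases = [(1, 0), (2, 0), (2, 2), (3, 2), (2, -2)]   # (m, b) pairs with b <= m
err = max(abs(reflection_formula(6, m, b, 0.3) - reflection_bruteforce(6, m, b, 0.3))
          for m, b in cases)
```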

5.6.

Proof. On the infinite coin-toss space, we define $\mathcal{M}_n = \{$stopping times that take values $0, 1, \dots, n, \infty\}$ and $\mathcal{M}_\infty = \{$stopping times that take values $0, 1, 2, \dots, \infty\}$. The time-zero value $V^*$ of the perpetual American put as in Section 5.4 can be defined as $\sup_{\tau\in\mathcal{M}_\infty}\tilde E\big[1_{\{\tau<\infty\}}\frac{(K-S_\tau)^+}{(1+r)^\tau}\big]$. For an American put with the same strike price $K$ that expires at time $n$, its time-zero value $V^{(n)}$ is $\max_{\tau\in\mathcal{M}_n}\tilde E\big[1_{\{\tau<\infty\}}\frac{(K-S_\tau)^+}{(1+r)^\tau}\big]$. Clearly $(V^{(n)})_{n\ge0}$ is nondecreasing and $V^{(n)} \le V^*$ for every $n$. So $\lim_n V^{(n)}$ exists and $\lim_n V^{(n)} \le V^*$.

For any given $\tau\in\mathcal{M}_\infty$, we define
$$\tau^{(n)} = \begin{cases}\infty, & \text{if } \tau = \infty\\ \tau\wedge n, & \text{if } \tau < \infty;\end{cases}$$
then $\tau^{(n)}$ is also a stopping time, $\tau^{(n)}\in\mathcal{M}_n$, and $\lim_{n\to\infty}\tau^{(n)} = \tau$. By the bounded convergence theorem,
$$\tilde E\Big[1_{\{\tau<\infty\}}\frac{(K-S_\tau)^+}{(1+r)^\tau}\Big] = \lim_{n\to\infty}\tilde E\Big[1_{\{\tau^{(n)}<\infty\}}\frac{(K-S_{\tau^{(n)}})^+}{(1+r)^{\tau^{(n)}}}\Big] \le \lim_{n\to\infty}V^{(n)}.$$
Taking the supremum over $\tau$ on the left-hand side, we get $V^* \le \lim_n V^{(n)}$. Therefore $V^* = \lim_n V^{(n)}$.

Remark: In the above proof, rigorously speaking, we should justify the passage to the limit under the expectation for $1_{\{\tau^{(n)}<\infty\}}(K-S_{\tau^{(n)}})^+/(1+r)^{\tau^{(n)}}$; since this quantity is bounded by $K$, the bounded convergence theorem supplies the needed justification.

5.8. (i)

Proof. $v(S_n) = S_n \ge S_n - K = g(S_n)$. Under the risk-neutral probabilities, $\frac{1}{(1+r)^n}v(S_n) = \frac{S_n}{(1+r)^n}$ is a martingale by Theorem 2.4.4.

(ii)

Proof. If the purchaser chooses to exercise the call at time $n$, then the discounted risk-neutral expectation of her payoff is $\tilde E\big[\frac{S_n-K}{(1+r)^n}\big] = S_0 - \frac{K}{(1+r)^n}$. Since $\lim_{n\to\infty}\big(S_0 - \frac{K}{(1+r)^n}\big) = S_0$, the value of the call at time zero is at least $\sup_n\big(S_0 - \frac{K}{(1+r)^n}\big) = S_0$.

(iii)

Proof. $\max\big\{g(s), \frac{\tilde p v(us)+\tilde q v(ds)}{1+r}\big\} = \max\big\{s-K, \frac{\tilde p us+\tilde q ds}{1+r}\big\} = \max\{s-K, s\} = s = v(s)$, so equation (5.4.16) is satisfied. Clearly $v(s) = s$ also satisfies the boundary condition (5.4.18).

(iv)

Proof. Suppose $\tau$ is an optimal exercise time, i.e. $\tilde E\big[\frac{S_\tau-K}{(1+r)^\tau}1_{\{\tau<\infty\}}\big] \ge S_0$. Then $P(\tau<\infty) \ne 0$ and $\tilde E\big[\frac{K}{(1+r)^\tau}1_{\{\tau<\infty\}}\big] > 0$. So
$$\tilde E\Big[\frac{S_\tau-K}{(1+r)^\tau}1_{\{\tau<\infty\}}\Big] < \tilde E\Big[\frac{S_\tau}{(1+r)^\tau}1_{\{\tau<\infty\}}\Big].$$
Since $\big(\frac{S_n}{(1+r)^n}\big)_{n\ge0}$ is a martingale under the risk-neutral measure, by Fatou's lemma,
$$\tilde E\Big[\frac{S_\tau}{(1+r)^\tau}1_{\{\tau<\infty\}}\Big] \le \liminf_{n\to\infty}\tilde E\Big[\frac{S_{\tau\wedge n}}{(1+r)^{\tau\wedge n}}\Big] = \liminf_{n\to\infty}\tilde E[S_0] = S_0.$$
Combined, we have $S_0 \le \tilde E\big[\frac{S_\tau-K}{(1+r)^\tau}1_{\{\tau<\infty\}}\big] < S_0$, a contradiction. So there is no optimal time to exercise the perpetual American call. Simultaneously, we have shown $\tilde E\big[\frac{S_\tau-K}{(1+r)^\tau}1_{\{\tau<\infty\}}\big] < S_0$ for any stopping time $\tau$. Combined with (ii), we conclude that $S_0$ is the least upper bound for all the prices acceptable to the buyer.

5.9. (i)

Proof. Suppose $v(s) = s^p$; then we have $s^p = \frac{2}{5}2^ps^p + \frac{2}{5}\frac{s^p}{2^p}$. So $1 = \frac{2^{p+1}}{5} + \frac{2^{1-p}}{5}$. Solving for $p$, we get $p = 1$ or $p = -1$.

(ii)

Proof. Since $\lim_{s\to\infty}v(s) = \lim_{s\to\infty}\big(As + \frac{B}{s}\big) = 0$, we must have $A = 0$.

(iii)

Proof. $f_B(s) = 0$ if and only if $B + s^2 - 4s = 0$. The discriminant is $\Delta = (-4)^2 - 4B = 4(4-B)$. So for $B \le 4$ the equation has roots, and for $B > 4$ the equation does not have roots.

(iv)

Proof. Suppose $B \le 4$; then the equation $s^2 - 4s + B = 0$ has solutions $2\pm\sqrt{4-B}$. By drawing the graphs of $4-s$ and $\frac{B}{s}$, we should choose $B = 4$ and $s_B = 2 + \sqrt{4-B} = 2$.

(v)

Proof. To have a continuous derivative, we must have $-1 = -\frac{B}{s_B^2}$. Plugging $B = s_B^2$ back into $s_B^2 - 4s_B + B = 0$, we get $s_B = 2$. This gives $B = 4$.

6. Interest-Rate-Dependent Assets

6.2.

Proof. $X_k = S_k - \tilde E_k[D_m(S_m-K)]D_k^{-1} - \frac{S_n}{B_{n,m}}B_{k,m}$ for $n \le k \le m$. Then, using $B_{k,m}D_k = \tilde E_k[D_m]$,
$$\tilde E_{k-1}[D_kX_k] = \tilde E_{k-1}\Big[D_kS_k - \tilde E_k[D_m(S_m-K)] - \frac{S_n}{B_{n,m}}B_{k,m}D_k\Big]$$
$$= D_{k-1}S_{k-1} - \tilde E_{k-1}[D_m(S_m-K)] - \frac{S_n}{B_{n,m}}\tilde E_{k-1}\big[\tilde E_k[D_m]\big]$$
$$= D_{k-1}\Big[S_{k-1} - \tilde E_{k-1}[D_m(S_m-K)]D_{k-1}^{-1} - \frac{S_n}{B_{n,m}}B_{k-1,m}\Big] = D_{k-1}X_{k-1}.$$

6.3.

Proof.
$$\frac{1}{D_n}\tilde E_n[D_{m+1}R_m] = \frac{1}{D_n}\tilde E_n\big[D_m(1+R_m)^{-1}R_m\big] = \tilde E_n\Big[\frac{D_m - D_{m+1}}{D_n}\Big] = B_{n,m} - B_{n,m+1}.$$

6.4. (i)

Proof. $D_1V_1 = \tilde E_1[D_3V_3] = \tilde E_1[D_2V_2] = D_2\tilde E_1[V_2]$. So $V_1 = \frac{D_2}{D_1}\tilde E_1[V_2] = \frac{1}{1+R_1}\tilde E_1[V_2]$. In particular,
$$V_1(H) = \frac{1}{1+R_1(H)}V_2(HH)\tilde P(\omega_2 = H\,|\,\omega_1 = H) = \frac{4}{21}, \qquad V_1(T) = 0.$$

(ii)

Proof. Let $X_0 = \frac{2}{21}$. Suppose we buy $\Delta_0$ shares of the maturity two bond; then at time one, the value of our portfolio is $X_1 = (1+R_0)(X_0 - \Delta_0B_{0,2}) + \Delta_0B_{1,2}$. To replicate the value $V_1$, we must have
$$\begin{cases}V_1(H) = (1+R_0)(X_0 - \Delta_0B_{0,2}) + \Delta_0B_{1,2}(H)\\ V_1(T) = (1+R_0)(X_0 - \Delta_0B_{0,2}) + \Delta_0B_{1,2}(T).\end{cases}$$
So $\Delta_0 = \frac{V_1(H)-V_1(T)}{B_{1,2}(H)-B_{1,2}(T)} = \frac{4}{3}$. The hedging strategy is therefore to borrow $\frac{4}{3}B_{0,2} - \frac{2}{21} = \frac{20}{21}$ and buy $\frac{4}{3}$ shares of the maturity two bond. The reason why we do not invest in the maturity three bond is that $B_{1,3}(H) = B_{1,3}(T)\ (= \frac{4}{7})$, so the portfolio would have the same value at time one regardless of the outcome of the first coin toss. This makes the replication of $V_1$ impossible, since $V_1(H) \ne V_1(T)$.

(iii)

Proof. Suppose we hold $\Delta_1$ shares of the maturity three bond at time one; then to replicate $V_2$ at time two, we must have $V_2 = (1+R_1)(X_1 - \Delta_1B_{1,3}) + \Delta_1B_{2,3}$. So
$$\Delta_1(H) = \frac{V_2(HH)-V_2(HT)}{B_{2,3}(HH)-B_{2,3}(HT)} = -\frac{2}{3}, \qquad \Delta_1(T) = \frac{V_2(TH)-V_2(TT)}{B_{2,3}(TH)-B_{2,3}(TT)} = 0.$$
So the hedging strategy is as follows: if the outcome of the first coin toss is $T$, we do nothing; if the outcome of the first coin toss is $H$, we short $\frac{2}{3}$ shares of the maturity three bond and invest the income in the money market account. We do not invest in the maturity two bond, because at time two its value is its face value, so the portfolio would have the same value regardless of the outcomes of the coin tosses. This makes the replication of $V_2$ impossible.

6.5. (i)

Proof. Suppose $1 \le n \le m$; then, using $Z_{n,m+1} = \frac{B_{n,m+1}D_n}{B_{0,m+1}}$ and $D_nB_{n,m} = \tilde E_n[D_m]$,
$$\tilde E_{n-1}^{m+1}[F_{n,m}] = \tilde E_{n-1}\big[B_{n,m+1}^{-1}(B_{n,m}-B_{n,m+1})Z_{n,m+1}Z_{n-1,m+1}^{-1}\big]$$
$$= \tilde E_{n-1}\Big[\frac{B_{n,m}-B_{n,m+1}}{B_{n,m+1}}\cdot\frac{B_{n,m+1}D_n}{B_{n-1,m+1}D_{n-1}}\Big] = \frac{\tilde E_{n-1}\big[\tilde E_n[D_m]-\tilde E_n[D_{m+1}]\big]}{B_{n-1,m+1}D_{n-1}}$$
$$= \frac{\tilde E_{n-1}[D_m - D_{m+1}]}{B_{n-1,m+1}D_{n-1}} = \frac{B_{n-1,m}-B_{n-1,m+1}}{B_{n-1,m+1}} = F_{n-1,m}.$$

6.6. (i)

Proof. The agent enters the forward contract at no cost. He is obliged to buy the asset at time $m$ at the strike price $K = \mathrm{For}_{n,m} = \frac{S_n}{B_{n,m}}$. At time $n+1$, the contract has the value
$$\tilde E_{n+1}[D_m(S_m-K)]D_{n+1}^{-1} = S_{n+1} - KB_{n+1,m} = S_{n+1} - \frac{S_nB_{n+1,m}}{B_{n,m}}.$$
So if the agent sells this contract at time $n+1$, he will receive a cash flow of $S_{n+1} - \frac{S_nB_{n+1,m}}{B_{n,m}}$.

(ii)

Proof. By (i), the cash flow generated at time $n+1$ is
$$(1+r)^{m-n-1}\Big(S_{n+1} - \frac{S_nB_{n+1,m}}{B_{n,m}}\Big) = (1+r)^{m-n-1}\Big(S_{n+1} - S_n\frac{(1+r)^{-(m-n-1)}}{(1+r)^{-(m-n)}}\Big)$$
$$= (1+r)^{m-n-1}S_{n+1} - (1+r)^{m-n}S_n = (1+r)^m\tilde E_{n+1}\Big[\frac{S_m}{(1+r)^m}\Big] - (1+r)^m\tilde E_n\Big[\frac{S_m}{(1+r)^m}\Big]$$
$$= \mathrm{Fut}_{n+1,m} - \mathrm{Fut}_{n,m}.$$

6.7.

Proof.
$$\psi_{n+1}(0) = \tilde E[D_{n+1}V_{n+1}(0)] = \tilde E\Big[\frac{D_n}{1+r_n(0)}1_{\{\#H(\omega_1\cdots\omega_{n+1})=0\}}\Big] = \tilde E\Big[\frac{D_n}{1+r_n(0)}1_{\{\#H(\omega_1\cdots\omega_n)=0\}}1_{\{\omega_{n+1}=T\}}\Big]$$
$$= \frac{1}{2}\tilde E\Big[\frac{D_n}{1+r_n(0)}1_{\{\#H(\omega_1\cdots\omega_n)=0\}}\Big] = \frac{\psi_n(0)}{2(1+r_n(0))}.$$
For $k = 1, 2, \dots, n$,
$$\psi_{n+1}(k) = \tilde E\Big[\frac{D_n}{1+r_n(\#H(\omega_1\cdots\omega_n))}1_{\{\#H(\omega_1\cdots\omega_{n+1})=k\}}\Big]$$
$$= \tilde E\Big[\frac{D_n}{1+r_n(k)}1_{\{\#H(\omega_1\cdots\omega_n)=k\}}1_{\{\omega_{n+1}=T\}}\Big] + \tilde E\Big[\frac{D_n}{1+r_n(k-1)}1_{\{\#H(\omega_1\cdots\omega_n)=k-1\}}1_{\{\omega_{n+1}=H\}}\Big]$$
$$= \frac{1}{2}\frac{\tilde E[D_nV_n(k)]}{1+r_n(k)} + \frac{1}{2}\frac{\tilde E[D_nV_n(k-1)]}{1+r_n(k-1)} = \frac{\psi_n(k)}{2(1+r_n(k))} + \frac{\psi_n(k-1)}{2(1+r_n(k-1))}.$$
Finally,
$$\psi_{n+1}(n+1) = \tilde E[D_{n+1}V_{n+1}(n+1)] = \tilde E\Big[\frac{D_n}{1+r_n(n)}1_{\{\#H(\omega_1\cdots\omega_n)=n\}}1_{\{\omega_{n+1}=H\}}\Big] = \frac{\psi_n(n)}{2(1+r_n(n))}.$$

Remark: In the above proof, we have used the independence of $\omega_{n+1}$ and $(\omega_1, \dots, \omega_n)$. This is guaranteed by the assumption that $\tilde p = \tilde q = \frac{1}{2}$ (note $\xi$ is independent of $\mathcal{G}$ if and only if $E[\xi\,|\,\mathcal{G}]$ is constant). In case the binomial model has stochastic up- and down-factors $u_n$ and $d_n$, we can use the fact that $\tilde P(\omega_{n+1}=H\,|\,\omega_1,\dots,\omega_n) = \tilde p_n$ and $\tilde P(\omega_{n+1}=T\,|\,\omega_1,\dots,\omega_n) = \tilde q_n$, where $\tilde p_n = \frac{1+r_n-d_n}{u_n-d_n}$ and $\tilde q_n = \frac{u_n-1-r_n}{u_n-d_n}$ (cf. the solution of Exercise 2.9 and notes on page 39). Then for any $X\in\mathcal{F}_n = \sigma(\omega_1,\dots,\omega_n)$, we have
$$\tilde E[Xf(\omega_{n+1})] = \tilde E\big[X\tilde E[f(\omega_{n+1})\,|\,\mathcal{F}_n]\big] = \tilde E\big[X(\tilde p_nf(H)+\tilde q_nf(T))\big].$$


2 Stochastic Calculus for Finance II: Continuous-Time Models

1. General Probability Theory

1.1. (i)

Proof. $P(B) = P((B-A)\cup A) = P(B-A) + P(A) \ge P(A)$.

(ii)

Proof. $P(A) \le P(A_n)$ implies $P(A) \le \lim_{n\to\infty}P(A_n) = 0$. So $0 \le P(A) \le 0$, which means $P(A) = 0$.

1.2. (i)

Proof. We define a mapping $\phi$ from $A$ to $\Omega_\infty$ as follows: $\phi(\omega_1\omega_2\omega_3\omega_4\cdots) = \omega_1\omega_3\omega_5\cdots$. Then $\phi$ is one-to-one and onto. So the cardinality of $A$ is the same as that of $\Omega_\infty$, which means in particular that $A$ is uncountably infinite.

(ii)

Proof. Let $A_n = \{\omega = \omega_1\omega_2\cdots : \omega_1 = \omega_2, \dots, \omega_{2n-1} = \omega_{2n}\}$. Then $A_n \downarrow A$ as $n\to\infty$. So
$$P(A) = \lim_{n\to\infty}P(A_n) = \lim_{n\to\infty}\big[P(\omega_1=\omega_2)\cdots P(\omega_{2n-1}=\omega_{2n})\big] = \lim_{n\to\infty}\big(p^2+(1-p)^2\big)^n.$$
Since $p^2+(1-p)^2 < 1$ for $0 < p < 1$, we have $\lim_{n\to\infty}(p^2+(1-p)^2)^n = 0$. This implies $P(A) = 0$.

1.3.

Proof. Clearly $P(\emptyset) = 0$. For any $A$ and $B$, if both of them are finite, then $A\cup B$ is also finite, so $P(A\cup B) = 0 = P(A)+P(B)$. If at least one of them is infinite, then $A\cup B$ is also infinite, so $P(A\cup B) = \infty = P(A)+P(B)$. Similarly, we can prove $P(\cup_{n=1}^NA_n) = \sum_{n=1}^NP(A_n)$, even if the $A_n$'s are not disjoint.

To see that the countable additivity property doesn't hold for $P$, let $A_n = \{\frac{1}{n}\}$. Then $A = \cup_{n=1}^\infty A_n$ is an infinite set and therefore $P(A) = \infty$. However, $P(A_n) = 0$ for each $n$. So $P(A) \ne \sum_{n=1}^\infty P(A_n)$.

1.4. (i)

Proof. By Example 1.2.5, we can construct a random variable $X$ on the coin-toss space which is uniformly distributed on $[0,1]$. For the strictly increasing and continuous function $N(x) = \int_{-\infty}^x\frac{1}{\sqrt{2\pi}}e^{-\frac{\xi^2}{2}}d\xi$, we let $Z = N^{-1}(X)$. Then $P(Z\le a) = P(X\le N(a)) = N(a)$ for any real number $a$, i.e. $Z$ is a standard normal random variable on the coin-toss space $(\Omega_\infty, \mathcal{F}_\infty, P)$.

(ii)

Proof. Define $X_n = \sum_{i=1}^n\frac{Y_i}{2^i}$, where
$$Y_i(\omega) = \begin{cases}1, & \text{if } \omega_i = H\\ 0, & \text{if } \omega_i = T.\end{cases}$$
Then $X_n(\omega)\to X(\omega)$ for every $\omega$, and hence $Z_n = N^{-1}(X_n)\to Z = N^{-1}(X)$ for every $\omega$. Clearly $Z_n$ depends only on the first $n$ coin tosses, and $(Z_n)_{n\ge1}$ is the desired sequence.

1.5.

Proof. First, by the information given by the problem, we have
$$\int_\Omega\int_0^\infty 1_{[0,X(\omega))}(x)\,dx\,dP(\omega) = \int_0^\infty\int_\Omega 1_{[0,X(\omega))}(x)\,dP(\omega)\,dx.$$
The left side of this equation equals $\int_\Omega\int_0^{X(\omega)}dx\,dP(\omega) = \int_\Omega X(\omega)\,dP(\omega) = E\{X\}$. The right side of the equation equals
$$\int_0^\infty\int_\Omega 1_{\{x<X(\omega)\}}\,dP(\omega)\,dx = \int_0^\infty P(x<X)\,dx = \int_0^\infty(1-F(x))\,dx.$$
So $E\{X\} = \int_0^\infty(1-F(x))\,dx$.

1.6. (i)

Proof.
$$E\{e^{uX}\} = \int_{-\infty}^\infty e^{ux}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}dx = \int_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2-2\sigma^2ux}{2\sigma^2}}dx$$
$$= \int_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{[x-(\mu+\sigma^2u)]^2-(2\sigma^2u\mu+\sigma^4u^2)}{2\sigma^2}}dx = e^{u\mu+\frac{\sigma^2u^2}{2}}\int_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{[x-(\mu+\sigma^2u)]^2}{2\sigma^2}}dx = e^{u\mu+\frac{\sigma^2u^2}{2}}.$$

(ii)

Proof. $E\{\phi(X)\} = E\{e^{uX}\} = e^{u\mu+\frac{u^2\sigma^2}{2}} \ge e^{u\mu} = \phi(E\{X\})$, which verifies Jensen's inequality for $\phi(x) = e^{ux}$.

1.7. (i)

Proof. Since $|f_n(x)| \le \frac{1}{\sqrt{2\pi}\,n}$, $f(x) = \lim_{n\to\infty}f_n(x) = 0$.

(ii)

Proof. By the change of variable formula, $\int_{-\infty}^\infty f_n(x)\,dx = \int_{-\infty}^\infty\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx = 1$ for every $n$. So we must have $\lim_{n\to\infty}\int_{-\infty}^\infty f_n(x)\,dx = 1$.

(iii)

Proof. This is not contradictory with the Monotone Convergence Theorem, since $\{f_n\}_{n\ge1}$ doesn't increase to 0.

1.8. (i)

Proof. By (1.9.1), $|Y_n| = \big|\frac{e^{tX}-e^{s_nX}}{t-s_n}\big| = |Xe^{\theta X}| = Xe^{\theta X} \le Xe^{2tX}$. The last inequality is by $X \ge 0$ and the fact that $\theta$ is between $t$ and $s_n$, and hence smaller than $2t$ for $n$ sufficiently large. So by the Dominated Convergence Theorem,
$$\varphi'(t) = \lim_{n\to\infty}E\{Y_n\} = E\{\lim_{n\to\infty}Y_n\} = E\{Xe^{tX}\}.$$

(ii)

Proof. Since $E[e^{tX}1_{\{X\ge0\}}] + E[e^{tX}1_{\{X<0\}}] = E[e^{tX}] < \infty$ for every $t\in\mathbb{R}$, we have $E[e^{t|X|}] = E[e^{tX}1_{\{X\ge0\}}] + E[e^{(-t)X}1_{\{X<0\}}] < \infty$ for every $t\in\mathbb{R}$. Similarly, $E[|X|e^{t|X|}] < \infty$ for every $t\in\mathbb{R}$. So, similar to (i), we have $|Y_n| = |Xe^{\theta X}| \le |X|e^{2t|X|}$ for $n$ sufficiently large. So by the Dominated Convergence Theorem,
$$\varphi'(t) = \lim_{n\to\infty}E\{Y_n\} = E\{\lim_{n\to\infty}Y_n\} = E\{Xe^{tX}\}.$$

1.9.

Proof. If $g(x)$ is of the form $1_B(x)$, where $B$ is a Borel subset of $\mathbb{R}$, then the desired equality is just (1.9.3). By the linearity of the Lebesgue integral, the desired equality also holds for simple functions, i.e. $g$ of the form $g(x) = \sum_{i=1}^nc_i1_{B_i}(x)$, where each $B_i$ is a Borel subset of $\mathbb{R}$. Since any nonnegative, Borel-measurable function $g$ is the limit of an increasing sequence of simple functions, the desired equality can be proved by the Monotone Convergence Theorem.

1.10. (i)

Proof. If $\{A_i\}_{i=1}^\infty$ is a sequence of disjoint Borel subsets of $[0,1]$, then by the Monotone Convergence Theorem, $\tilde P(\cup_{i=1}^\infty A_i)$ equals
$$\int_{\cup_{i=1}^\infty A_i}Z\,dP = \lim_{n\to\infty}\int_{\cup_{i=1}^nA_i}Z\,dP = \lim_{n\to\infty}\sum_{i=1}^n\int_{A_i}Z\,dP = \sum_{i=1}^\infty\tilde P(A_i).$$
Meanwhile, $\tilde P(\Omega) = 2P([\frac{1}{2},1]) = 1$. So $\tilde P$ is a probability measure.

(ii)

Proof. If $P(A) = 0$, then $\tilde P(A) = \int_AZ\,dP = 2\int_{A\cap[\frac{1}{2},1]}dP = 2P(A\cap[\tfrac{1}{2},1]) = 0$.

(iii)

Proof. Let $A = [0,\frac{1}{2})$; then $P(A) = \frac{1}{2} > 0$ while $\tilde P(A) = 0$.

1.11.

Proof.
$$\tilde E\{e^{uY}\} = E\{e^{uY}Z\} = E\big\{e^{uX+u\theta}e^{-\theta X-\frac{\theta^2}{2}}\big\} = e^{u\theta-\frac{\theta^2}{2}}E\{e^{(u-\theta)X}\} = e^{u\theta-\frac{\theta^2}{2}}e^{\frac{(u-\theta)^2}{2}} = e^{\frac{u^2}{2}}.$$

1.12.

Proof. First, $\hat Z = e^{\theta Y-\frac{\theta^2}{2}} = e^{\theta(X+\theta)-\frac{\theta^2}{2}} = e^{\frac{\theta^2}{2}+\theta X} = Z^{-1}$. Second, for any $A\in\mathcal{F}$,
$$\hat P(A) = \int_A\hat Z\,d\tilde P = \int_\Omega(1_A\hat Z)Z\,dP = \int_\Omega 1_A\,dP = P(A).$$
So $\hat P = P$. In particular, $X$ is standard normal under $\hat P$, since it's standard normal under $P$.

1.13. (i)

Proof. $\frac{1}{2\epsilon}P(X\in B(x,\epsilon)) = \frac{1}{2\epsilon}\int_{x-\epsilon}^{x+\epsilon}\frac{1}{\sqrt{2\pi}}e^{-\frac{u^2}{2}}du$ is approximately $\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} = \frac{1}{\sqrt{2\pi}}e^{-\frac{X(\omega)^2}{2}}$.

(ii)

Proof. Similar to (i).

(iii)

Proof. $\{X\in B(x,\epsilon)\} = \{X\in B(y-\theta,\epsilon)\} = \{X+\theta\in B(y,\epsilon)\} = \{Y\in B(y,\epsilon)\}$.

(iv)

Proof. By (i)-(iii), $\frac{\tilde P(A)}{P(A)}$ is approximately
$$\frac{\frac{1}{\sqrt{2\pi}}e^{-\frac{Y(\omega)^2}{2}}}{\frac{1}{\sqrt{2\pi}}e^{-\frac{X(\omega)^2}{2}}} = e^{-\frac{Y(\omega)^2-X(\omega)^2}{2}} = e^{-\frac{(X(\omega)+\theta)^2-X(\omega)^2}{2}} = e^{-\theta X(\omega)-\frac{\theta^2}{2}}.$$

1.14. (i)

Proof.
$$\tilde P(\Omega) = \int_\Omega\frac{\tilde\lambda}{\lambda}e^{-(\tilde\lambda-\lambda)X}dP = \int_0^\infty\frac{\tilde\lambda}{\lambda}e^{-(\tilde\lambda-\lambda)x}\lambda e^{-\lambda x}dx = \int_0^\infty\tilde\lambda e^{-\tilde\lambda x}dx = 1.$$

(ii)

Proof.
$$\tilde P(X\le a) = \int_{\{X\le a\}}\frac{\tilde\lambda}{\lambda}e^{-(\tilde\lambda-\lambda)X}dP = \int_0^a\frac{\tilde\lambda}{\lambda}e^{-(\tilde\lambda-\lambda)x}\lambda e^{-\lambda x}dx = \int_0^a\tilde\lambda e^{-\tilde\lambda x}dx = 1 - e^{-\tilde\lambda a}.$$

1.15. (i)

Proof. Clearly $Z \ge 0$. Furthermore, we have
$$E\{Z\} = E\Big\{\frac{h(g(X))g'(X)}{f(X)}\Big\} = \int_{-\infty}^\infty\frac{h(g(x))g'(x)}{f(x)}f(x)\,dx = \int_{-\infty}^\infty h(g(x))\,dg(x) = \int_{-\infty}^\infty h(u)\,du = 1.$$

(ii)

Proof.
$$\tilde P(Y\le a) = \int_{\{g(X)\le a\}}\frac{h(g(X))g'(X)}{f(X)}dP = \int_{-\infty}^{g^{-1}(a)}\frac{h(g(x))g'(x)}{f(x)}f(x)\,dx = \int_{-\infty}^{g^{-1}(a)}h(g(x))\,dg(x).$$
By the change of variable formula, the last expression equals $\int_{-\infty}^ah(u)\,du$. So $Y$ has density $h$ under $\tilde P$.


2. Information and Conditioning

2.1.

Proof. For any real number $a$, we have $\{X\le a\}\in\mathcal{F}_0 = \{\emptyset,\Omega\}$. So $P(X\le a)$ is either 0 or 1. Since $\lim_{a\to\infty}P(X\le a) = 1$ and $\lim_{a\to-\infty}P(X\le a) = 0$, we can find a number $x_0$ such that $P(X\le x_0) = 1$ and $P(X\le x) = 0$ for any $x < x_0$. So
$$P(X=x_0) = \lim_{n\to\infty}P\Big(x_0-\frac{1}{n}<X\le x_0\Big) = \lim_{n\to\infty}\Big(P(X\le x_0)-P\Big(X\le x_0-\frac{1}{n}\Big)\Big) = 1.$$

2.2. (i)

Proof. $\sigma(X) = \{\emptyset, \Omega, \{HT,TH\}, \{TT,HH\}\}$.

(ii)

Proof. $\sigma(S_1) = \{\emptyset, \Omega, \{HH,HT\}, \{TH,TT\}\}$.

(iii)

Proof. $\tilde P(\{HT,TH\}\cap\{HH,HT\}) = \tilde P(\{HT\}) = \frac{1}{4}$, $\tilde P(\{HT,TH\}) = \tilde P(\{HT\})+\tilde P(\{TH\}) = \frac{1}{4}+\frac{1}{4} = \frac{1}{2}$, and $\tilde P(\{HH,HT\}) = \tilde P(\{HH\})+\tilde P(\{HT\}) = \frac{1}{4}+\frac{1}{4} = \frac{1}{2}$. So we have
$$\tilde P(\{HT,TH\}\cap\{HH,HT\}) = \tilde P(\{HT,TH\})\,\tilde P(\{HH,HT\}).$$
Similarly, we can work on the other elements of $\sigma(X)$ and $\sigma(S_1)$ and show that $\tilde P(A\cap B) = \tilde P(A)\tilde P(B)$ for any $A\in\sigma(X)$ and $B\in\sigma(S_1)$. So $\sigma(X)$ and $\sigma(S_1)$ are independent under $\tilde P$.

(iv)

Proof. $P(\{HT,TH\}\cap\{HH,HT\}) = P(\{HT\}) = \frac{2}{9}$, $P(\{HT,TH\}) = \frac{2}{9}+\frac{2}{9} = \frac{4}{9}$ and $P(\{HH,HT\}) = \frac{4}{9}+\frac{2}{9} = \frac{6}{9}$. So
$$P(\{HT,TH\}\cap\{HH,HT\}) \ne P(\{HT,TH\})\,P(\{HH,HT\}).$$
Hence $\sigma(X)$ and $\sigma(S_1)$ are not independent under $P$.

(v)

Proof. Because $S_1$ and $X$ are not independent under the probability measure $P$, knowing the value of $X$ will affect our opinion on the distribution of $S_1$.

2.3.

Proof. We note $(V,W)$ are jointly Gaussian, so to prove their independence it suffices to show they are uncorrelated. Indeed,
$$E\{VW\} = E\{-X^2\sin\theta\cos\theta + XY\cos^2\theta - XY\sin^2\theta + Y^2\sin\theta\cos\theta\} = -\sin\theta\cos\theta + 0 + 0 + \sin\theta\cos\theta = 0.$$

2.4. (i)

Proof.
$$E\{e^{uX+vY}\} = E\{e^{uX+vXZ}\} = E\{e^{uX+vXZ}\,|\,Z=1\}P(Z=1) + E\{e^{uX+vXZ}\,|\,Z=-1\}P(Z=-1)$$
$$= \frac{1}{2}E\{e^{(u+v)X}\} + \frac{1}{2}E\{e^{(u-v)X}\} = \frac{1}{2}\Big[e^{\frac{(u+v)^2}{2}}+e^{\frac{(u-v)^2}{2}}\Big] = e^{\frac{u^2+v^2}{2}}\cdot\frac{e^{uv}+e^{-uv}}{2}.$$

(ii)

Proof. Let $u = 0$; then $E\{e^{vY}\} = e^{\frac{v^2}{2}}$, so $Y$ is standard normal.

(iii)

Proof. $E\{e^{uX}\} = e^{\frac{u^2}{2}}$ and $E\{e^{vY}\} = e^{\frac{v^2}{2}}$. So $E\{e^{uX+vY}\} \ne E\{e^{uX}\}E\{e^{vY}\}$. Therefore $X$ and $Y$ cannot be independent.
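The joint moment generating function just computed, $E\{e^{uX+vY}\} = e^{(u^2+v^2)/2}\cosh(uv)$, can be checked by integrating over the density of $X$ and averaging over $Z=\pm1$. A sketch; the grid and test point are arbitrary choices:

```python
import math

def joint_mgf(u, v, h=1e-3, cutoff=10.0):
    """E[exp(uX + vXZ)] with X ~ N(0,1) and Z = +-1 each w.p. 1/2, by Riemann sum."""
    total, x = 0.0, -cutoff
    while x < cutoff:
        phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        total += 0.5 * (math.exp(u * x + v * x) + math.exp(u * x - v * x)) * phi * h
        x += h
    return total

u, v = 0.4, 0.7
numeric = joint_mgf(u, v)
closed = math.exp((u * u + v * v) / 2) * math.cosh(u * v)
```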

2.5.

Proof. The density $f_X(x)$ of $X$ can be obtained by
$$f_X(x) = \int_{-\infty}^\infty f_{X,Y}(x,y)\,dy = \int_{\{y\ge-|x|\}}\frac{2|x|+y}{\sqrt{2\pi}}e^{-\frac{(2|x|+y)^2}{2}}dy = \int_{\{\xi\ge|x|\}}\frac{\xi}{\sqrt{2\pi}}e^{-\frac{\xi^2}{2}}d\xi = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}.$$
The density $f_Y(y)$ of $Y$ can be obtained by
$$f_Y(y) = \int_{-\infty}^\infty f_{X,Y}(x,y)\,dx = \int_{-\infty}^\infty 1_{\{|x|\ge-y\}}\frac{2|x|+y}{\sqrt{2\pi}}e^{-\frac{(2|x|+y)^2}{2}}dx = 2\int_{0\vee(-y)}^\infty\frac{2x+y}{\sqrt{2\pi}}e^{-\frac{(2x+y)^2}{2}}dx$$
$$= 2\int_{|y|}^\infty\frac{\xi}{\sqrt{2\pi}}e^{-\frac{\xi^2}{2}}d\Big(\frac{\xi}{2}\Big) = \frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}.$$
So both $X$ and $Y$ are standard normal random variables. Since $f_{X,Y}(x,y) \ne f_X(x)f_Y(y)$, $X$ and $Y$ are not independent. However, if we set $F(t) = \int_t^\infty\frac{u^2}{\sqrt{2\pi}}e^{-\frac{u^2}{2}}du$, we have
$$E\{XY\} = \int_{-\infty}^\infty\int_{-\infty}^\infty xyf_{X,Y}(x,y)\,dx\,dy = \int_{-\infty}^\infty x\,dx\int_{-|x|}^\infty y\,\frac{2|x|+y}{\sqrt{2\pi}}e^{-\frac{(2|x|+y)^2}{2}}dy$$
$$= \int_{-\infty}^\infty x\,dx\int_{|x|}^\infty(\xi-2|x|)\frac{\xi}{\sqrt{2\pi}}e^{-\frac{\xi^2}{2}}d\xi = \int_{-\infty}^\infty x\Big(F(|x|) - 2|x|\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\Big)dx = -\int_0^\infty xF(x)\,dx + \int_0^\infty xF(x)\,dx = 0,$$
where the last step uses that both parts of the integrand are odd functions of $x$.

2.6. (i)

Proof. $\sigma(X)$ is the $\sigma$-algebra generated by the sets $\{X=a\}$, $\{X=b\}$, $\{X=c\}$, $\{X=d\}$.

(ii)

Proof. $E\{Y\,|\,X\} = \sum_{\alpha\in\{a,b,c,d\}}\dfrac{E\{Y1_{\{X=\alpha\}}\}}{P(X=\alpha)}1_{\{X=\alpha\}}$.

(iii)

Proof. $E\{Z\,|\,X\} = X + E\{Y\,|\,X\} = X + \sum_{\alpha\in\{a,b,c,d\}}\dfrac{E\{Y1_{\{X=\alpha\}}\}}{P(X=\alpha)}1_{\{X=\alpha\}}$.

(iv)

Proof. $E\{Z\,|\,X\} - E\{Y\,|\,X\} = E\{Z-Y\,|\,X\} = E\{X\,|\,X\} = X$.

2.7.

Proof. Let $\mu = E\{Y-X\}$ and $\xi = E\{Y\,|\,\mathcal{G}\} - X$. Note $\xi$ is $\mathcal{G}$-measurable, and $Y - X - \mu = \mathrm{Err} + (\xi-\mu)$. So
$$Var(Y-X) = E\{(\mathrm{Err}+\xi-\mu)^2\} = Var(\mathrm{Err}) + 2E\{\mathrm{Err}(\xi-\mu)\} + E\{(\xi-\mu)^2\} = Var(\mathrm{Err}) + E\{(\xi-\mu)^2\} \ge Var(\mathrm{Err}),$$
where $E\{\mathrm{Err}(\xi-\mu)\} = 0$ because $\xi-\mu$ is $\mathcal{G}$-measurable and $E\{\mathrm{Err}\,|\,\mathcal{G}\} = 0$.

2.8.

Proof. It suffices to prove the more general claim: for any $\sigma(X)$-measurable random variable $\xi$,
$$E\{(Y-E\{Y|X\})\xi\} = E\{Y\xi\} - E\{E\{Y|X\}\xi\} = E\{Y\xi\} - E\{Y\xi\} = 0.$$

2.9. (i)

Proof. Consider the dice-toss space, similar to the coin-toss space. A typical element $\omega$ in this space is an infinite sequence $\omega_1\omega_2\omega_3\cdots$, with $\omega_i\in\{1,2,\dots,6\}$ ($i\in\mathbb{N}$). We define $X(\omega) = \omega_1$ and $f(x) = 1_{\{\text{odd integers}\}}(x)$. Then it's easy to see that
$$\sigma(X) = \sigma\big(\{\omega:\omega_1=1\}, \dots, \{\omega:\omega_1=6\}\big)$$
and that $\sigma(f(X))$ equals
$$\sigma\big(\{\omega:\omega_1=1\}\cup\{\omega:\omega_1=3\}\cup\{\omega:\omega_1=5\},\ \{\omega:\omega_1=2\}\cup\{\omega:\omega_1=4\}\cup\{\omega:\omega_1=6\}\big).$$
So $\{\emptyset,\Omega\}\subseteq\sigma(f(X))\subseteq\sigma(X)$, and each of these containments is strict.

(ii)

Proof. No. $\sigma(f(X))\subseteq\sigma(X)$ is always true.

2.10.

Proof. With $A = \{X\in B\}$,
$$\int_Ag(X)\,dP = E\{g(X)1_B(X)\} = \int_{-\infty}^\infty g(x)1_B(x)f_X(x)\,dx = \int_{-\infty}^\infty\Big(\int_{-\infty}^\infty\frac{yf_{X,Y}(x,y)}{f_X(x)}dy\Big)1_B(x)f_X(x)\,dx$$
$$= \int_{-\infty}^\infty\int_{-\infty}^\infty y1_B(x)f_{X,Y}(x,y)\,dx\,dy = E\{Y1_B(X)\} = E\{Y1_A\} = \int_AY\,dP.$$

2.11. (i)

Proof. We can find a sequence $\{W_n\}_{n\ge1}$ of $\sigma(X)$-measurable simple functions such that $W_n\to W$. Each $W_n$ can be written in the form $\sum_{i=1}^{K_n}a_i^n1_{A_i^n}$, where the $A_i^n$'s belong to $\sigma(X)$ and are disjoint. So each $A_i^n$ can be written as $\{X\in B_i^n\}$ for some Borel subset $B_i^n$ of $\mathbb{R}$, i.e.
$$W_n = \sum_{i=1}^{K_n}a_i^n1_{\{X\in B_i^n\}} = \sum_{i=1}^{K_n}a_i^n1_{B_i^n}(X) = g_n(X),$$
where $g_n(x) = \sum_{i=1}^{K_n}a_i^n1_{B_i^n}(x)$. Define $g = \limsup g_n$; then $g$ is a Borel function. By taking upper limits on both sides of $W_n = g_n(X)$, we get $W = g(X)$.

(ii)

Proof. Note $E\{Y|X\}$ is $\sigma(X)$-measurable. By (i), we can find a Borel function $g$ such that $E\{Y|X\} = g(X)$.

3. Brownian Motion

3.1.

Proof. We have $\mathcal{F}_t\subseteq\mathcal{F}_{u_1}$, and $W_{u_2}-W_{u_1}$ is independent of $\mathcal{F}_{u_1}$. So in particular, $W_{u_2}-W_{u_1}$ is independent of $\mathcal{F}_t$.

3.2.

Proof. $E[W_t^2-W_s^2\,|\,\mathcal{F}_s] = E[(W_t-W_s)^2+2W_tW_s-2W_s^2\,|\,\mathcal{F}_s] = t-s+2W_sE[W_t-W_s\,|\,\mathcal{F}_s] = t-s$.

3.3.

Proof. $\varphi^{(3)}(u) = 2\sigma^4ue^{\frac{1}{2}\sigma^2u^2} + (\sigma^2+\sigma^4u^2)\sigma^2ue^{\frac{1}{2}\sigma^2u^2} = e^{\frac{1}{2}\sigma^2u^2}(3\sigma^4u+\sigma^6u^3)$, and
$$\varphi^{(4)}(u) = \sigma^2ue^{\frac{1}{2}\sigma^2u^2}(3\sigma^4u+\sigma^6u^3) + e^{\frac{1}{2}\sigma^2u^2}(3\sigma^4+3\sigma^6u^2).$$
So $E[(X-\mu)^4] = \varphi^{(4)}(0) = 3\sigma^4$.

3.4. (i)

Proof. Assume there exists $A\in\mathcal{F}$ such that $P(A) > 0$ and, for every $\omega\in A$, $\lim_{n\to\infty}\sum_{j=0}^{n-1}|W_{t_{j+1}}-W_{t_j}|(\omega) < \infty$. Then for every $\omega\in A$,
$$\sum_{j=0}^{n-1}(W_{t_{j+1}}-W_{t_j})^2(\omega) \le \max_{0\le k\le n-1}|W_{t_{k+1}}-W_{t_k}|(\omega)\sum_{j=0}^{n-1}|W_{t_{j+1}}-W_{t_j}|(\omega) \to 0,$$
since $\lim_{n\to\infty}\max_{0\le k\le n-1}|W_{t_{k+1}}-W_{t_k}|(\omega) = 0$. This is a contradiction with $\lim_{n\to\infty}\sum_{j=0}^{n-1}(W_{t_{j+1}}-W_{t_j})^2 = T$ a.s.

(ii)

Proof. Note
$$\sum_{j=0}^{n-1}|W_{t_{j+1}}-W_{t_j}|^3 \le \max_{0\le k\le n-1}|W_{t_{k+1}}-W_{t_k}|\sum_{j=0}^{n-1}(W_{t_{j+1}}-W_{t_j})^2 \to 0 \text{ as } n\to\infty,$$
by an argument similar to (i).

3.5.

Proof.
$$\tilde E[e^{-rT}(S_T-K)^+] = e^{-rT}\int_{\frac{1}{\sigma\sqrt{T}}(\ln\frac{K}{S_0}-(r-\frac{1}{2}\sigma^2)T)}^\infty\Big(S_0e^{(r-\frac{1}{2}\sigma^2)T+\sigma\sqrt{T}y}-K\Big)\frac{e^{-\frac{y^2}{2}}}{\sqrt{2\pi}}dy$$
$$= S_0e^{-\frac{1}{2}\sigma^2T}\int_{-d_-(T,S_0)}^\infty\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}+\sigma\sqrt{T}y}dy - Ke^{-rT}\int_{-d_-(T,S_0)}^\infty\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}dy$$
$$= S_0\int_{-d_-(T,S_0)}^\infty\frac{1}{\sqrt{2\pi}}e^{-\frac{(y-\sigma\sqrt{T})^2}{2}}dy - Ke^{-rT}N(d_-(T,S_0))$$
$$= S_0\int_{-d_-(T,S_0)-\sigma\sqrt{T}}^\infty\frac{1}{\sqrt{2\pi}}e^{-\frac{\xi^2}{2}}d\xi - Ke^{-rT}N(d_-(T,S_0)) = S_0N(d_+(T,S_0)) - Ke^{-rT}N(d_-(T,S_0)),$$
where $d_\pm(T,s) = \frac{1}{\sigma\sqrt{T}}\big[\ln\frac{s}{K} + (r\pm\frac{\sigma^2}{2})T\big]$.
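The closed form above can be sanity-checked by computing the discounted risk-neutral expectation with a direct quadrature over the standard normal density (stdlib only; the parameter values below are arbitrary assumptions):

```python
import math

def bsm_call(S0, K, r, sigma, T):
    """Black-Scholes-Merton call price S0 N(d+) - K e^{-rT} N(d-)."""
    d1 = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def bsm_call_quadrature(S0, K, r, sigma, T, h=1e-4, cutoff=10.0):
    """e^{-rT} E[(S0 exp((r - sigma^2/2)T + sigma sqrt(T) y) - K)^+], y ~ N(0,1)."""
    total, y = 0.0, -cutoff
    while y < cutoff:
        ST = S0 * math.exp((r - sigma**2 / 2) * T + sigma * math.sqrt(T) * y)
        total += max(ST - K, 0.0) * math.exp(-y * y / 2) / math.sqrt(2 * math.pi) * h
        y += h
    return math.exp(-r * T) * total

closed = bsm_call(100.0, 95.0, 0.05, 0.2, 1.0)
numeric = bsm_call_quadrature(100.0, 95.0, 0.05, 0.2, 1.0)
```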

3.6. (i)

Proof.
$$E[f(X_t)\,|\,\mathcal{F}_s] = E[f(W_t-W_s+W_s+\mu t)\,|\,\mathcal{F}_s] = E[f(W_{t-s}+a)]\big|_{a=W_s+\mu t}$$
$$= \int_{-\infty}^\infty f(x+W_s+\mu t)\frac{e^{-\frac{x^2}{2(t-s)}}}{\sqrt{2\pi(t-s)}}dx = \int_{-\infty}^\infty f(y)\frac{e^{-\frac{(y-W_s-\mu s-\mu(t-s))^2}{2(t-s)}}}{\sqrt{2\pi(t-s)}}dy = g(X_s).$$
So $E[f(X_t)\,|\,\mathcal{F}_s] = \int_{-\infty}^\infty f(y)p(t-s,X_s,y)\,dy$ with $p(\tau,x,y) = \frac{1}{\sqrt{2\pi\tau}}e^{-\frac{(y-x-\mu\tau)^2}{2\tau}}$.

(ii)

Proof. $E[f(S_t)\,|\,\mathcal{F}_s] = E[f(S_0e^{\sigma X_t})\,|\,\mathcal{F}_s]$ with $\mu = \frac{\nu}{\sigma}$. So by (i),
$$E[f(S_t)\,|\,\mathcal{F}_s] = \int_{-\infty}^\infty f(S_0e^{\sigma y})\frac{1}{\sqrt{2\pi(t-s)}}e^{-\frac{(y-X_s-\mu(t-s))^2}{2(t-s)}}dy$$
$$\overset{S_0e^{\sigma y}=z}{=} \int_0^\infty f(z)\frac{1}{\sqrt{2\pi(t-s)}}e^{-\frac{(\frac{1}{\sigma}\ln\frac{z}{S_0}-\frac{1}{\sigma}\ln\frac{S_s}{S_0}-\mu(t-s))^2}{2(t-s)}}\frac{dz}{\sigma z}$$
$$= \int_0^\infty f(z)\frac{e^{-\frac{(\ln\frac{z}{S_s}-\nu(t-s))^2}{2\sigma^2(t-s)}}}{\sigma z\sqrt{2\pi(t-s)}}dz = \int_0^\infty f(z)p(t-s,S_s,z)\,dz = g(S_s).$$

3.7. (i)

Proof. $E\big[\frac{Z_t}{Z_s}\,\big|\,\mathcal{F}_s\big] = E\big[\exp\{\sigma(W_t-W_s) - \frac{\sigma^2}{2}(t-s)\}\,\big|\,\mathcal{F}_s\big] = 1$.

(ii)

Proof. By the optional stopping theorem, $E[Z_{t\wedge\tau_m}] = E[Z_0] = 1$, that is,
$$E\big[\exp\{\sigma X_{t\wedge\tau_m} - (\sigma\mu+\tfrac{\sigma^2}{2})(t\wedge\tau_m)\}\big] = 1.$$

(iii)

Proof. If $\mu\ge0$ and $\sigma>0$, then $Z_{t\wedge\tau_m}\le e^{\sigma m}$. By the bounded convergence theorem,
$$E[1_{\{\tau_m<\infty\}}Z_{\tau_m}] = E[\lim_{t\to\infty}Z_{t\wedge\tau_m}] = \lim_{t\to\infty}E[Z_{t\wedge\tau_m}] = 1,$$
since on the event $\{\tau_m=\infty\}$, $Z_{t\wedge\tau_m} \le e^{\sigma m-(\sigma\mu+\frac{1}{2}\sigma^2)t}\to0$ as $t\to\infty$. Therefore $E\big[e^{\sigma m-(\sigma\mu+\frac{\sigma^2}{2})\tau_m}1_{\{\tau_m<\infty\}}\big] = 1$. Let $\sigma\downarrow0$; by the bounded convergence theorem, we have $P(\tau_m<\infty) = 1$. Let $\sigma\mu+\frac{\sigma^2}{2} = \alpha$, i.e. $\sigma = -\mu+\sqrt{\mu^2+2\alpha}$; we get
$$E[e^{-\alpha\tau_m}] = e^{-\sigma m} = e^{m\mu-m\sqrt{2\alpha+\mu^2}}.$$

(iv)

Proof. We note that for $\alpha>0$, $E[\tau_me^{-\alpha\tau_m}] < \infty$ since $xe^{-\alpha x}$ is bounded on $[0,\infty)$. So by an argument similar to Exercise 1.8, $E[e^{-\alpha\tau_m}]$ is differentiable and
$$\frac{\partial}{\partial\alpha}E[e^{-\alpha\tau_m}] = -E[\tau_me^{-\alpha\tau_m}] = e^{m\mu-m\sqrt{2\alpha+\mu^2}}\cdot\frac{-m}{\sqrt{2\alpha+\mu^2}}.$$
Let $\alpha\downarrow0$; by the monotone convergence theorem, $E[\tau_m] = \frac{m}{\mu}$.

(v)

Proof. By $\mu<0<\sigma$ with $\sigma>-2\mu$, we get $\sigma\mu+\frac{\sigma^2}{2}>0$. Then $Z_{t\wedge\tau_m}\le e^{\sigma m}$ and, on the event $\{\tau_m=\infty\}$, $Z_{t\wedge\tau_m} \le e^{\sigma m-(\frac{\sigma^2}{2}+\sigma\mu)t}\to0$ as $t\to\infty$. Therefore
$$E\big[e^{\sigma m-(\sigma\mu+\frac{\sigma^2}{2})\tau_m}1_{\{\tau_m<\infty\}}\big] = E[\lim_{t\to\infty}Z_{t\wedge\tau_m}] = \lim_{t\to\infty}E[Z_{t\wedge\tau_m}] = 1.$$
Let $\sigma\downarrow-2\mu$; then we get $P(\tau_m<\infty) = e^{2\mu m} = e^{-2|\mu|m} < 1$. Set $\alpha = \sigma\mu+\frac{\sigma^2}{2}$; so $\sigma = -\mu+\sqrt{\mu^2+2\alpha}$ and
$$E[e^{-\alpha\tau_m}] = E[e^{-\alpha\tau_m}1_{\{\tau_m<\infty\}}] = e^{-\sigma m} = e^{m\mu-m\sqrt{2\alpha+\mu^2}}.$$

3.8. (i)

Proof.
$$\varphi_n(u) = E\big[e^{\frac{u}{\sqrt{n}}M_{nt,n}}\big] = \big(E[e^{\frac{u}{\sqrt{n}}X_{1,n}}]\big)^{nt} = \big(e^{\frac{u}{\sqrt{n}}}\tilde p_n + e^{-\frac{u}{\sqrt{n}}}\tilde q_n\big)^{nt} = \Bigg(\frac{e^{\frac{u}{\sqrt{n}}}\big(e^{\frac{r}{n}}-e^{-\frac{\sigma}{\sqrt{n}}}\big)+e^{-\frac{u}{\sqrt{n}}}\big(e^{\frac{\sigma}{\sqrt{n}}}-e^{\frac{r}{n}}\big)}{e^{\frac{\sigma}{\sqrt{n}}}-e^{-\frac{\sigma}{\sqrt{n}}}}\Bigg)^{nt}.$$

(ii)

Proof. With $x = \frac{1}{\sqrt{n}}$, and replacing $e^{rx^2}$ by $1+rx^2$ (which does not affect the limit below),
$$\ln\varphi_{\frac{1}{x^2}}(u) = \frac{t}{x^2}\ln\frac{(rx^2+1)(e^{ux}-e^{-ux}) + e^{(\sigma-u)x} - e^{-(\sigma-u)x}}{e^{\sigma x}-e^{-\sigma x}} = \frac{t}{x^2}\ln\frac{(rx^2+1)\sinh ux + \sinh(\sigma-u)x}{\sinh\sigma x}$$
$$= \frac{t}{x^2}\ln\frac{(rx^2+1)\sinh ux + \sinh\sigma x\cosh ux - \cosh\sigma x\sinh ux}{\sinh\sigma x} = \frac{t}{x^2}\ln\Big(\cosh ux + \frac{(rx^2+1-\cosh\sigma x)\sinh ux}{\sinh\sigma x}\Big).$$

(iii)

Proof.
$$\cosh ux + \frac{(rx^2+1-\cosh\sigma x)\sinh ux}{\sinh\sigma x} = 1+\frac{u^2x^2}{2}+O(x^4) + \frac{\big(rx^2+1-1-\frac{\sigma^2x^2}{2}+O(x^4)\big)\big(ux+O(x^3)\big)}{\sigma x+O(x^3)}$$
$$= 1+\frac{u^2x^2}{2}+\frac{(r-\frac{\sigma^2}{2})ux^3+O(x^5)}{\sigma x+O(x^3)}+O(x^4) = 1+\frac{u^2x^2}{2}+\frac{(r-\frac{\sigma^2}{2})ux^2(1+O(x^2))}{\sigma(1+O(x^2))}+O(x^4)$$
$$= 1+\frac{u^2x^2}{2}+\frac{rux^2}{\sigma}-\frac{1}{2}\sigma ux^2+O(x^4).$$

(iv)

Proof.
$$\ln\varphi_{\frac{1}{x^2}}(u) = \frac{t}{x^2}\ln\Big(1+\frac{u^2x^2}{2}+\frac{ru}{\sigma}x^2-\frac{\sigma ux^2}{2}+O(x^4)\Big) = \frac{t}{x^2}\Big(\frac{u^2x^2}{2}+\frac{ru}{\sigma}x^2-\frac{\sigma ux^2}{2}+O(x^4)\Big).$$
So $\lim_{x\downarrow0}\ln\varphi_{\frac{1}{x^2}}(u) = t\big(\frac{u^2}{2}+\frac{ru}{\sigma}-\frac{\sigma u}{2}\big)$, and $E\big[e^{\frac{u}{\sqrt{n}}M_{nt,n}}\big] = \varphi_n(u) \to e^{\frac{1}{2}tu^2+t(\frac{r}{\sigma}-\frac{\sigma}{2})u}$. By the one-to-one correspondence between distribution and moment generating function, $(\frac{1}{\sqrt{n}}M_{nt,n})_n$ converges to a Gaussian random variable with mean $t(\frac{r}{\sigma}-\frac{\sigma}{2})$ and variance $t$. Hence $(\frac{\sigma}{\sqrt{n}}M_{nt,n})_n$ converges to a Gaussian random variable with mean $t(r-\frac{\sigma^2}{2})$ and variance $\sigma^2t$.
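The m.g.f. limit above can be checked numerically: for large $n$, $\ln\varphi_n(u)$ should be close to $t\big(\frac{u^2}{2}+(\frac{r}{\sigma}-\frac{\sigma}{2})u\big)$. A sketch (the parameter values are arbitrary assumptions):

```python
import math

def log_phi_n(u, n, t, r, sigma):
    """log m.g.f. of M_{nt,n}/sqrt(n) in the n-period binomial model of Exercise 3.8."""
    up, down = math.exp(sigma / math.sqrt(n)), math.exp(-sigma / math.sqrt(n))
    growth = math.exp(r / n)
    p = (growth - down) / (up - down)      # risk-neutral probability of an up step
    q = 1 - p
    step = math.exp(u / math.sqrt(n)) * p + math.exp(-u / math.sqrt(n)) * q
    return n * t * math.log(step)

u, t, r, sigma = 0.8, 1.0, 0.04, 0.3
limit = t * (u * u / 2 + (r / sigma - sigma / 2) * u)
approx = log_phi_n(u, 10**6, t, r, sigma)
```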

4. Stochastic Calculus

4.1.

Proof. Fix $t$, and for any $s<t$ assume $s\in[t_m, t_{m+1})$ for some $m$.

Case 1. $m = k$. Then $I(t)-I(s) = \Delta_{t_k}(M_t-M_{t_k}) - \Delta_{t_k}(M_s-M_{t_k}) = \Delta_{t_k}(M_t-M_s)$. So $E[I(t)-I(s)\,|\,\mathcal{F}_s] = \Delta_{t_k}E[M_t-M_s\,|\,\mathcal{F}_s] = 0$.

Case 2. $m < k$. Then $t_m \le s < t_{m+1} \le t_k \le t < t_{k+1}$. So
$$I(t)-I(s) = \sum_{j=m}^{k-1}\Delta_{t_j}(M_{t_{j+1}}-M_{t_j}) + \Delta_{t_k}(M_t-M_{t_k}) - \Delta_{t_m}(M_s-M_{t_m})$$
$$= \sum_{j=m+1}^{k-1}\Delta_{t_j}(M_{t_{j+1}}-M_{t_j}) + \Delta_{t_k}(M_t-M_{t_k}) + \Delta_{t_m}(M_{t_{m+1}}-M_s).$$
Hence
$$E[I(t)-I(s)\,|\,\mathcal{F}_s] = \sum_{j=m+1}^{k-1}E\big[\Delta_{t_j}E[M_{t_{j+1}}-M_{t_j}\,|\,\mathcal{F}_{t_j}]\,\big|\,\mathcal{F}_s\big] + E\big[\Delta_{t_k}E[M_t-M_{t_k}\,|\,\mathcal{F}_{t_k}]\,\big|\,\mathcal{F}_s\big] + \Delta_{t_m}E[M_{t_{m+1}}-M_s\,|\,\mathcal{F}_s] = 0.$$
Combined, we conclude $I(t)$ is a martingale.

4.2. (i)

Proof. We follow the simplification in the hint and consider $I(t_k)-I(t_l)$ with $t_l < t_k$. Then $I(t_k)-I(t_l) = \sum_{j=l}^{k-1}\Delta_{t_j}(W_{t_{j+1}}-W_{t_j})$. Since $\Delta_t$ is a non-random process and $W_{t_{j+1}}-W_{t_j}$ is independent of $\mathcal{F}_{t_j}\supseteq\mathcal{F}_{t_l}$ for $j\ge l$, we must have $I(t_k)-I(t_l)$ independent of $\mathcal{F}_{t_l}$.

(ii)

Proof. We use the notation in (i), and it is clear that $I(t_k)-I(t_l)$ is normal since it is a linear combination of independent normal random variables. Furthermore, $E[I(t_k)-I(t_l)] = \sum_{j=l}^{k-1}\Delta_{t_j}E[W_{t_{j+1}}-W_{t_j}] = 0$ and
$$Var(I(t_k)-I(t_l)) = \sum_{j=l}^{k-1}\Delta^2_{t_j}Var(W_{t_{j+1}}-W_{t_j}) = \sum_{j=l}^{k-1}\Delta^2_{t_j}(t_{j+1}-t_j) = \int_{t_l}^{t_k}\Delta^2_u\,du.$$

(iii)

Proof. $E[I(t)-I(s)\,|\,\mathcal{F}_s] = E[I(t)-I(s)] = 0$, for $s<t$.

(iv)

Proof. For $s<t$,
$$E\Big[I^2(t)-\int_0^t\Delta^2_u\,du - \Big(I^2(s)-\int_0^s\Delta^2_u\,du\Big)\,\Big|\,\mathcal{F}_s\Big] = E\Big[I^2(t)-I^2(s)-\int_s^t\Delta^2_u\,du\,\Big|\,\mathcal{F}_s\Big]$$
$$= E\big[(I(t)-I(s))^2+2I(t)I(s)-2I^2(s)\,\big|\,\mathcal{F}_s\big] - \int_s^t\Delta^2_u\,du$$
$$= E[(I(t)-I(s))^2] + 2I(s)E[I(t)-I(s)\,|\,\mathcal{F}_s] - \int_s^t\Delta^2_u\,du = \int_s^t\Delta^2_u\,du + 0 - \int_s^t\Delta^2_u\,du = 0.$$


4.3.

Proof. $I(t)-I(s) = \Delta_0(W_{t_1}-W_0) + \Delta_{t_1}(W_{t_2}-W_{t_1}) - \Delta_0(W_{t_1}-W_0) = \Delta_{t_1}(W_{t_2}-W_{t_1}) = W_s(W_t-W_s)$.

(i) $I(t)-I(s)$ is not independent of $\mathcal{F}_s$, since $W_s\in\mathcal{F}_s$.

(ii) $E[(I(t)-I(s))^4] = E[W_s^4]E[(W_t-W_s)^4] = 3s^2\cdot 3(t-s)^2 = 9s^2(t-s)^2$, while $3\big(E[(I(t)-I(s))^2]\big)^2 = 3\big(E[W_s^2]E[(W_t-W_s)^2]\big)^2 = 3s^2(t-s)^2$. Since $E[(I(t)-I(s))^4] \ne 3\big(E[(I(t)-I(s))^2]\big)^2$, $I(t)-I(s)$ is not normally distributed.

(iii) $E[I(t)-I(s)\,|\,\mathcal{F}_s] = W_sE[W_t-W_s\,|\,\mathcal{F}_s] = 0$.

(iv)
$$E\Big[I^2(t)-\int_0^t\Delta^2_u\,du-\Big(I^2(s)-\int_0^s\Delta^2_u\,du\Big)\,\Big|\,\mathcal{F}_s\Big] = E\Big[(I(t)-I(s))^2+2I(t)I(s)-2I^2(s)-W_s^2(t-s)\,\Big|\,\mathcal{F}_s\Big]$$
$$= W_s^2E[(W_t-W_s)^2] + 2I(s)W_sE[W_t-W_s\,|\,\mathcal{F}_s] - W_s^2(t-s) = W_s^2(t-s) - W_s^2(t-s) = 0.$$

4.4.

Proof. (Cf. Øksendal [3], Exercise 3.9.) We first note that

Σ_j B_{(t_j+t_{j+1})/2}(B_{t_{j+1}} − B_{t_j})
= Σ_j [B_{(t_j+t_{j+1})/2}(B_{t_{j+1}} − B_{(t_j+t_{j+1})/2}) + B_{t_j}(B_{(t_j+t_{j+1})/2} − B_{t_j})] + Σ_j (B_{(t_j+t_{j+1})/2} − B_{t_j})².

The first term converges in L²(P) to ∫_0^T B_t dB_t. For the second term, we note

E[(Σ_j ((B_{(t_j+t_{j+1})/2} − B_{t_j})² − (t_{j+1} − t_j)/2))²]
= Σ_{j,k} E[((B_{(t_j+t_{j+1})/2} − B_{t_j})² − (t_{j+1} − t_j)/2)((B_{(t_k+t_{k+1})/2} − B_{t_k})² − (t_{k+1} − t_k)/2)]
= Σ_j E[(B²_{(t_{j+1}−t_j)/2} − (t_{j+1} − t_j)/2)²]
= Σ_j 2 · ((t_{j+1} − t_j)/2)²
≤ (T/2) max_{1≤j≤n} |t_{j+1} − t_j| → 0,

since E[(B_t² − t)²] = E[B_t⁴ − 2tB_t² + t²] = 3E[B_t²]² − 2t² + t² = 2t². So Σ_j (B_{(t_j+t_{j+1})/2} − B_{t_j})² → Σ_j (t_{j+1} − t_j)/2 = T/2 in L²(P), and hence

Σ_j B_{(t_j+t_{j+1})/2}(B_{t_{j+1}} − B_{t_j}) → ∫_0^T B_t dB_t + T/2 = ½B_T² in L²(P).
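The midpoint-rule limit ½B_T² (rather than the Itô value ½B_T² − T/2) can be illustrated numerically. The sketch below is not from the manual; it simulates one path on a fine grid of my own choosing, evaluates the midpoint sum over n coarse intervals, and compares with ½B_T²:

```python
import math
import random

def midpoint_sum_vs_half_square(n, T, rng):
    """Simulate one Brownian path on a (2n+1)-point grid, so that each of the
    n coarse intervals [t_j, t_{j+1}] has its midpoint on the grid. Return
    (sum_j B_mid * (B_{t_{j+1}} - B_{t_j}),  0.5 * B_T**2)."""
    dt = T / (2 * n)  # fine half-step, so midpoints are grid points
    B = [0.0]
    for _ in range(2 * n):
        B.append(B[-1] + rng.gauss(0.0, math.sqrt(dt)))
    s = sum(B[2 * j + 1] * (B[2 * j + 2] - B[2 * j]) for j in range(n))
    return s, 0.5 * B[-1] ** 2
```

Averaged over many paths, the difference of the two returned values should be near 0, and the mean of the midpoint sum should be near T/2 = E[½B_T²] (for the Itô sum it would be 0).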

4.5. (i)

Proof.

d ln S_t = dS_t/S_t − (1/(2S_t²)) d⟨S⟩_t = (2S_t dS_t − d⟨S⟩_t)/(2S_t²)
= (2S_t(α_t S_t dt + σ_t S_t dW_t) − σ_t² S_t² dt)/(2S_t²)
= σ_t dW_t + (α_t − ½σ_t²)dt.

(ii)

Proof.

ln S_t = ln S_0 + ∫_0^t σ_s dW_s + ∫_0^t (α_s − ½σ_s²)ds.

So S_t = S_0 exp{∫_0^t σ_s dW_s + ∫_0^t (α_s − ½σ_s²)ds}.

4.6.

Proof. Without loss of generality, we assume p ≠ 1. Since (x^p)′ = px^{p−1} and (x^p)′′ = p(p−1)x^{p−2}, we have

d(S_t^p) = pS_t^{p−1} dS_t + ½p(p−1)S_t^{p−2} d⟨S⟩_t
= pS_t^{p−1}(αS_t dt + σS_t dW_t) + ½p(p−1)S_t^{p−2} σ²S_t² dt
= S_t^p [pα dt + pσ dW_t + ½p(p−1)σ² dt]
= S_t^p p[σ dW_t + (α + ((p−1)/2)σ²)dt].

4.7. (i)

Proof. dW_t⁴ = 4W_t³ dW_t + ½ · 4 · 3 W_t² d⟨W⟩_t = 4W_t³ dW_t + 6W_t² dt. So W_T⁴ = 4∫_0^T W_t³ dW_t + 6∫_0^T W_t² dt.

(ii)

Proof. E[W_T⁴] = 6∫_0^T t dt = 3T².

(iii)

Proof. dW_t⁶ = 6W_t⁵ dW_t + ½ · 6 · 5 W_t⁴ dt. So W_T⁶ = 6∫_0^T W_t⁵ dW_t + 15∫_0^T W_t⁴ dt. Hence E[W_T⁶] = 15∫_0^T 3t² dt = 15T³.
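The moment identities E[W_T⁴] = 3T² and E[W_T⁶] = 15T³ can be checked directly by Monte Carlo, since W_T ~ N(0, T). The following standalone Python sketch (an illustrative check, not part of the manual) estimates both moments:

```python
import math
import random

def wiener_moments(T, n_samples, rng):
    """Monte Carlo estimates of E[W_T^4] and E[W_T^6] for W_T ~ N(0, T)."""
    m4 = m6 = 0.0
    for _ in range(n_samples):
        w = rng.gauss(0.0, math.sqrt(T))
        m4 += w ** 4
        m6 += w ** 6
    return m4 / n_samples, m6 / n_samples
```

For T = 2 the targets are 3T² = 12 and 15T³ = 120; the sixth-moment estimate is noisier, so it needs a wider tolerance for the same sample size.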

4.8.

Proof. d(e^{βt}R_t) = βe^{βt}R_t dt + e^{βt}dR_t = e^{βt}(α dt + σ dW_t). Hence

e^{βt}R_t = R_0 + ∫_0^t e^{βs}(α ds + σ dW_s) = R_0 + (α/β)(e^{βt} − 1) + σ∫_0^t e^{βs} dW_s,

and R_t = R_0 e^{−βt} + (α/β)(1 − e^{−βt}) + σ∫_0^t e^{−β(t−s)} dW_s.
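A quick consistency check on the Vasicek solution: with σ = 0 the SDE dR_t = (α − βR_t)dt becomes an ODE whose numerical solution must match the deterministic part of the formula above (which is also E[R_t] in general). This standalone Python sketch — a check I am adding, with arbitrary illustrative parameter values, not part of the manual — compares a fine Euler discretization against the closed form:

```python
import math

def vasicek_mean(t, r0, alpha, beta):
    """Closed form R_0 e^{-beta t} + (alpha/beta)(1 - e^{-beta t}):
    the sigma = 0 solution, and also E[R_t] of the Vasicek model."""
    return r0 * math.exp(-beta * t) + (alpha / beta) * (1.0 - math.exp(-beta * t))

def euler_ode(t, r0, alpha, beta, n):
    """Euler discretization of dR = (alpha - beta * R) dt with n steps."""
    dt = t / n
    r = r0
    for _ in range(n):
        r += (alpha - beta * r) * dt
    return r
```

As t → ∞ the closed form tends to the mean-reversion level α/β, which is another easy sanity check.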

4.9. (i)

Proof.

Ke^{−r(T−t)} N′(d_−)
= Ke^{−r(T−t)} (1/√(2π)) e^{−d_−²/2}
= Ke^{−r(T−t)} (1/√(2π)) e^{−(d_+ − σ√(T−t))²/2}
= Ke^{−r(T−t)} e^{σ√(T−t) d_+} e^{−σ²(T−t)/2} N′(d_+)
= Ke^{−r(T−t)} (x/K) e^{(r + σ²/2)(T−t)} e^{−σ²(T−t)/2} N′(d_+)
= xN′(d_+),

where we used e^{σ√(T−t) d_+} = (x/K) e^{(r + σ²/2)(T−t)}, which follows from the definition of d_+.

(ii)

Proof.

c_x = N(d_+) + xN′(d_+) (∂d_+/∂x)(T−t, x) − Ke^{−r(T−t)} N′(d_−) (∂d_−/∂x)(T−t, x)
= N(d_+) + xN′(d_+) (∂d_+/∂x)(T−t, x) − xN′(d_+) (∂d_+/∂x)(T−t, x)
= N(d_+),

using part (i) and ∂d_−/∂x = ∂d_+/∂x.

(iii)

Proof.

c_t = xN′(d_+) (∂d_+/∂t)(T−t, x) − rKe^{−r(T−t)} N(d_−) − Ke^{−r(T−t)} N′(d_−) (∂d_−/∂t)(T−t, x)
= xN′(d_+) (∂d_+/∂t)(T−t, x) − rKe^{−r(T−t)} N(d_−) − xN′(d_+) [(∂d_+/∂t)(T−t, x) + σ/(2√(T−t))]
= −rKe^{−r(T−t)} N(d_−) − (xσ/(2√(T−t))) N′(d_+).

(iv)

Proof.

c_t + rxc_x + ½σ²x²c_xx
= −rKe^{−r(T−t)} N(d_−) − (xσ/(2√(T−t))) N′(d_+) + rxN(d_+) + ½σ²x² N′(d_+) (∂d_+/∂x)(T−t, x)
= rc − (xσ/(2√(T−t))) N′(d_+) + ½σ²x² N′(d_+) · 1/(σ√(T−t) x)
= rc,

since ∂d_+/∂x = 1/(xσ√(T−t)).

(v)

Proof. For x > K, d_+(T−t, x) > 0 and lim_{t↑T} d_+(T−t, x) = lim_{τ↓0} d_+(τ, x) = ∞; likewise lim_{t↑T} d_−(T−t, x) = lim_{τ↓0} [ln(x/K)/(σ√τ) + (√τ/σ)(r − ½σ²)] = ∞. Similarly, lim_{t↑T} d_± = −∞ for x ∈ (0, K). Also it is clear that lim_{t↑T} d_± = 0 for x = K. So

lim_{t↑T} c(t, x) = xN(lim_{t↑T} d_+) − KN(lim_{t↑T} d_−) = { x − K, if x > K; 0, if x ≤ K } = (x − K)^+.

(vi)

Proof. It is easy to see lim_{x↓0} c(t, x) = lim_{x↓0} xN(d_+(T−t, x)) − Ke^{−r(T−t)} N(lim_{x↓0} d_−(T−t, x)) = 0, since lim_{x↓0} d_±(T−t, x) = −∞.

(vii)

Proof. For t ∈ [0, T), it is clear lim_{x→∞} d_± = ∞. Note

lim_{x→∞} x(N(d_+) − 1) = lim_{x→∞} (N(d_+) − 1)/x^{−1} = lim_{x→∞} [N′(d_+) (∂d_+/∂x)]/(−x^{−2}) = −lim_{x→∞} N′(d_+) x/(σ√(T−t)),

since ∂d_+/∂x = 1/(xσ√(T−t)). By the expression of d_+, we get x = K exp{σ√(T−t) d_+ − (T−t)(r + ½σ²)}. So

lim_{x→∞} x(N(d_+) − 1) = −lim_{d_+→∞} (1/√(2π)) e^{−d_+²/2} · K e^{σ√(T−t) d_+ − (T−t)(r + ½σ²)}/(σ√(T−t)) = 0.

Therefore

lim_{x→∞} [c(t, x) − (x − e^{−r(T−t)}K)]
= lim_{x→∞} [xN(d_+) − Ke^{−r(T−t)} N(d_−) − x + Ke^{−r(T−t)}]
= lim_{x→∞} [x(N(d_+) − 1) + Ke^{−r(T−t)}(1 − N(d_−))]
= 0.
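Parts (ii), (v) and (vii) of 4.9 are all easy to probe numerically once the call price is coded. The standalone Python sketch below (my own illustrative check, not from the manual; parameter values are arbitrary) implements c via the standard-normal CDF built from math.erf:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_call(tau, x, K, r, sigma):
    """Black-Scholes-Merton call price c(t, x) with tau = T - t."""
    if tau == 0.0:
        return max(x - K, 0.0)
    d_plus = (math.log(x / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d_minus = d_plus - sigma * math.sqrt(tau)
    return x * norm_cdf(d_plus) - K * math.exp(-r * tau) * norm_cdf(d_minus)
```

A central finite difference in x should reproduce the delta N(d_+) from part (ii); letting tau → 0 with x > K should give x − K as in (v); and for very large x the price should approach x − Ke^{−rτ} as in (vii).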

4.10. (i)

Proof. We show (4.10.16) + (4.10.9) ⟺ (4.10.16) + (4.10.15), i.e. assuming X has the representation X_t = Δ_t S_t + Γ_t M_t, the continuous-time self-financing condition has two equivalent formulations, (4.10.9) or (4.10.15). Indeed, dX_t = Δ_t dS_t + Γ_t dM_t + (S_t dΔ_t + dS_t dΔ_t + M_t dΓ_t + dM_t dΓ_t). So dX_t = Δ_t dS_t + Γ_t dM_t ⟺ S_t dΔ_t + dS_t dΔ_t + M_t dΓ_t + dM_t dΓ_t = 0, i.e. (4.10.9) ⟺ (4.10.15).

(ii)

Proof. First, we clarify the problem by stating explicitly the given conditions and the result to be proved. We assume we have a portfolio X_t = Δ_t S_t + Γ_t M_t. We let c(t, S_t) denote the price of the call option at time t and set Δ_t = c_x(t, S_t). Finally, we assume the portfolio is self-financing. The problem is to show

rN_t dt = (c_t(t, S_t) + ½σ²S_t² c_xx(t, S_t)) dt,

where N_t = c(t, S_t) − Δ_t S_t.

Indeed, by the self-financing property and Δ_t = c_x(t, S_t), we have c(t, S_t) = X_t (by the calculations in Subsections 4.5.1–4.5.3). This uniquely determines Γ_t as

Γ_t = (X_t − Δ_t S_t)/M_t = (c(t, S_t) − c_x(t, S_t) S_t)/M_t = N_t/M_t.

Moreover,

dN_t = c_t(t, S_t)dt + c_x(t, S_t)dS_t + ½c_xx(t, S_t)d⟨S⟩_t − d(Δ_t S_t)
= (c_t(t, S_t) + ½c_xx(t, S_t)σ²S_t²)dt + [c_x(t, S_t)dS_t − d(X_t − Γ_t M_t)]
= (c_t(t, S_t) + ½c_xx(t, S_t)σ²S_t²)dt + M_t dΓ_t + dM_t dΓ_t + [c_x(t, S_t)dS_t + Γ_t dM_t − dX_t].

By the self-financing property, c_x(t, S_t)dS_t + Γ_t dM_t = Δ_t dS_t + Γ_t dM_t = dX_t, so

(c_t(t, S_t) + ½c_xx(t, S_t)σ²S_t²)dt = dN_t − M_t dΓ_t − dM_t dΓ_t = Γ_t dM_t = Γ_t rM_t dt = rN_t dt.

4.11.

Proof. First, we note c(t, x) solves the Black-Scholes-Merton PDE with volatility σ_1:

(∂/∂t + rx ∂/∂x + ½x²σ_1² ∂²/∂x² − r) c(t, x) = 0.

So

c_t(t, S_t) + rS_t c_x(t, S_t) + ½σ_1² S_t² c_xx(t, S_t) − rc(t, S_t) = 0,

and

dc(t, S_t) = c_t(t, S_t)dt + c_x(t, S_t)(αS_t dt + σ_2 S_t dW_t) + ½c_xx(t, S_t)σ_2² S_t² dt
= [c_t(t, S_t) + αc_x(t, S_t)S_t + ½σ_2² S_t² c_xx(t, S_t)]dt + σ_2 S_t c_x(t, S_t)dW_t
= [rc(t, S_t) + (α − r)c_x(t, S_t)S_t + ½S_t²(σ_2² − σ_1²)c_xx(t, S_t)]dt + σ_2 S_t c_x(t, S_t)dW_t.

Therefore

dX_t = [rc(t, S_t) + (α − r)c_x(t, S_t)S_t + ½S_t²(σ_2² − σ_1²)c_xx(t, S_t) + rX_t − rc(t, S_t) + rS_t c_x(t, S_t) − ½(σ_2² − σ_1²)S_t² c_xx(t, S_t) − αc_x(t, S_t)S_t]dt + [σ_2 S_t c_x(t, S_t) − c_x(t, S_t)σ_2 S_t]dW_t
= rX_t dt.

This implies X_t = X_0 e^{rt}. By X_0 = 0, we conclude X_t = 0 for all t ∈ [0, T].

4.12. (i)

Proof. By (4.5.29), c(t, x) − p(t, x) = x − e^{−r(T−t)}K. So p_x(t, x) = c_x(t, x) − 1 = N(d_+(T−t, x)) − 1,

p_xx(t, x) = c_xx(t, x) = (1/(xσ√(T−t))) N′(d_+(T−t, x)),

and

p_t(t, x) = c_t(t, x) + re^{−r(T−t)}K
= −rKe^{−r(T−t)} N(d_−(T−t, x)) − (xσ/(2√(T−t))) N′(d_+(T−t, x)) + rKe^{−r(T−t)}
= rKe^{−r(T−t)} N(−d_−(T−t, x)) − (xσ/(2√(T−t))) N′(d_+(T−t, x)).

(ii)

Proof. For an agent hedging a short position in the put, since Δ_t = p_x(t, S_t) < 0, he should short the underlying stock and put p(t, S_t) − p_x(t, S_t)S_t (> 0) cash in the money market account.

(iii)

Proof. By the put-call parity, it suffices to show f(t, x) = x − Ke^{−r(T−t)} satisfies the Black-Scholes-Merton partial differential equation. Indeed,

(∂/∂t + ½σ²x² ∂²/∂x² + rx ∂/∂x − r) f(t, x) = −rKe^{−r(T−t)} + ½σ²x² · 0 + rx · 1 − r(x − Ke^{−r(T−t)}) = 0.

Remark: The Black-Scholes-Merton PDE has many solutions. Proper boundary conditions are the key to uniqueness. For more details, see Wilmott [8].

4.13.

Proof. We suppose (W_1, W_2) is a pair of local martingales defined by the SDE

dW_1(t) = dB_1(t)
dW_2(t) = α(t)dB_1(t) + β(t)dB_2(t).    (1)

We want to find α(t) and β(t) such that

(dW_2(t))² = [α²(t) + β²(t) + 2ρ(t)α(t)β(t)]dt = dt
dW_1(t)dW_2(t) = [α(t) + β(t)ρ(t)]dt = 0.    (2)

Solving the equations for α(t) and β(t), we have β(t) = 1/√(1 − ρ²(t)) and α(t) = −ρ(t)/√(1 − ρ²(t)). So

W_1(t) = B_1(t)
W_2(t) = ∫_0^t (−ρ(s)/√(1 − ρ²(s))) dB_1(s) + ∫_0^t (1/√(1 − ρ²(s))) dB_2(s)    (3)

is a pair of independent BMs. Equivalently, we have

B_1(t) = W_1(t)
B_2(t) = ∫_0^t ρ(s)dW_1(s) + ∫_0^t √(1 − ρ²(s)) dW_2(s).    (4)
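The transformation (3) can be verified on simulated increments. The following standalone Python sketch (an illustrative check of my own, with constant ρ for simplicity) builds correlated Brownian increments (dB_1, dB_2) with dB_1 dB_2 = ρ dt and applies (3); the resulting (dW_1, dW_2) should be uncorrelated with variance dt each:

```python
import math
import random

def decorrelate_increments(rho, n, dt, rng):
    """Generate n correlated BM increment pairs (dB1, dB2) with correlation
    rho, then apply (3): dW1 = dB1,
    dW2 = -rho/sqrt(1-rho^2) * dB1 + 1/sqrt(1-rho^2) * dB2.
    Returns the list of (dW1, dW2) = (dB1, transformed) pairs."""
    a = -rho / math.sqrt(1.0 - rho ** 2)
    b = 1.0 / math.sqrt(1.0 - rho ** 2)
    s = math.sqrt(dt)
    out = []
    for _ in range(n):
        g1, g2 = rng.gauss(0.0, s), rng.gauss(0.0, s)
        db1 = g1
        db2 = rho * g1 + math.sqrt(1.0 - rho ** 2) * g2  # corr(db1, db2) = rho
        out.append((db1, a * db1 + b * db2))
    return out
```

Algebraically the transformed increment collapses back to the independent driver g2, which is exactly why the sample covariance below should vanish.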

4.14. (i)

Proof. Clearly Z_j ∈ F_{t_{j+1}}. Moreover

E[Z_j | F_{t_j}] = f′′(W_{t_j}) E[(W_{t_{j+1}} − W_{t_j})² − (t_{j+1} − t_j) | F_{t_j}] = f′′(W_{t_j})(E[W²_{t_{j+1}−t_j}] − (t_{j+1} − t_j)) = 0,

since W_{t_{j+1}} − W_{t_j} is independent of F_{t_j} and W_t ~ N(0, t). Finally, we have

E[Z_j² | F_{t_j}]
= [f′′(W_{t_j})]² E[(W_{t_{j+1}} − W_{t_j})⁴ − 2(t_{j+1} − t_j)(W_{t_{j+1}} − W_{t_j})² + (t_{j+1} − t_j)² | F_{t_j}]
= [f′′(W_{t_j})]² (E[W⁴_{t_{j+1}−t_j}] − 2(t_{j+1} − t_j)E[W²_{t_{j+1}−t_j}] + (t_{j+1} − t_j)²)
= [f′′(W_{t_j})]² [3(t_{j+1} − t_j)² − 2(t_{j+1} − t_j)² + (t_{j+1} − t_j)²]
= 2[f′′(W_{t_j})]² (t_{j+1} − t_j)²,

where we used the independence of Brownian motion increments and the fact that E[X⁴] = 3E[X²]² if X is Gaussian with mean 0.

(ii)

Proof. E[Σ_{j=0}^{n−1} Z_j] = Σ_{j=0}^{n−1} E[E[Z_j | F_{t_j}]] = 0 by part (i).

(iii)

Proof.

Var[Σ_{j=0}^{n−1} Z_j] = E[(Σ_{j=0}^{n−1} Z_j)²]
= E[Σ_{j=0}^{n−1} Z_j² + 2Σ_{0≤i<j≤n−1} Z_i Z_j]
= Σ_{j=0}^{n−1} E[E[Z_j² | F_{t_j}]] + 2Σ_{0≤i<j≤n−1} E[Z_i E[Z_j | F_{t_j}]]
= Σ_{j=0}^{n−1} E[2[f′′(W_{t_j})]² (t_{j+1} − t_j)²]
= Σ_{j=0}^{n−1} 2E[(f′′(W_{t_j}))²](t_{j+1} − t_j)²
≤ 2 max_{0≤j≤n−1} |t_{j+1} − t_j| · Σ_{j=0}^{n−1} E[(f′′(W_{t_j}))²](t_{j+1} − t_j)
→ 0,

since Σ_{j=0}^{n−1} E[(f′′(W_{t_j}))²](t_{j+1} − t_j) → ∫_0^T E[(f′′(W_t))²]dt < ∞.

4.15. (i)

Proof. B_i is a local martingale with

(dB_i(t))² = (Σ_{j=1}^d (σ_{ij}(t)/σ_i(t)) dW_j(t))² = Σ_{j=1}^d (σ_{ij}²(t)/σ_i²(t)) dt = dt.

So B_i is a Brownian motion.

(ii)

Proof.

dB_i(t)dB_k(t) = (Σ_{j=1}^d (σ_{ij}(t)/σ_i(t)) dW_j(t))(Σ_{l=1}^d (σ_{kl}(t)/σ_k(t)) dW_l(t))
= Σ_{1≤j,l≤d} (σ_{ij}(t)σ_{kl}(t)/(σ_i(t)σ_k(t))) dW_j(t)dW_l(t)
= Σ_{j=1}^d (σ_{ij}(t)σ_{kj}(t)/(σ_i(t)σ_k(t))) dt
= ρ_{ik}(t)dt.

4.16.

Proof. To find the m independent Brownian motions W_1(t), …, W_m(t), we need to find A(t) = (a_{ij}(t)) so that

(dB_1(t), …, dB_m(t))^{tr} = A(t)(dW_1(t), …, dW_m(t))^{tr},

or equivalently

(dW_1(t), …, dW_m(t))^{tr} = A(t)^{−1}(dB_1(t), …, dB_m(t))^{tr},

and

(dW_1(t), …, dW_m(t))^{tr}(dW_1(t), …, dW_m(t))
= A(t)^{−1}(dB_1(t), …, dB_m(t))^{tr}(dB_1(t), …, dB_m(t))(A(t)^{−1})^{tr}
= I_{m×m} dt,

where I_{m×m} is the m×m unit matrix. By the condition dB_i(t)dB_k(t) = ρ_{ik}(t)dt, we get

(dB_1(t), …, dB_m(t))^{tr}(dB_1(t), …, dB_m(t)) = C(t)dt.

So A(t)^{−1}C(t)(A(t)^{−1})^{tr} = I_{m×m}, which gives C(t) = A(t)A(t)^{tr}. This motivates us to define A as the square root of C. Reversing the above analysis, we obtain a formal proof.
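One concrete way to realize the square root C = AA^{tr} for a symmetric positive-definite correlation matrix is the Cholesky factorization. The standalone Python sketch below (my own illustration; the 3×3 matrix in the test is an arbitrary valid correlation matrix) computes the lower-triangular factor from scratch:

```python
import math

def cholesky(C):
    """Lower-triangular A with A A^T = C, for symmetric positive-definite C.
    Standard Cholesky-Banachiewicz recursion, row by row."""
    n = len(C)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(A[i][k] * A[j][k] for k in range(j))
            if i == j:
                A[i][j] = math.sqrt(C[i][i] - s)   # diagonal entry
            else:
                A[i][j] = (C[i][j] - s) / A[j][j]  # below-diagonal entry
    return A
```

Multiplying A by a vector of independent standard normal increments then produces increments with instantaneous covariance matrix C, exactly the construction reversed in the proof above.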

4.17.

Proof. We will try to solve all the sub-problems in a single, long solution. We start with the general X_i:

X_i(t) = X_i(0) + ∫_0^t θ_i(u)du + ∫_0^t σ_i(u)dB_i(u), i = 1, 2.

The goal is to show

lim_{ε↓0} C(ε)/√(V_1(ε)V_2(ε)) = ρ(t_0).

First, for i = 1, 2, we have

M_i(ε) = E[X_i(t_0+ε) − X_i(t_0) | F_{t_0}]
= E[∫_{t_0}^{t_0+ε} θ_i(u)du + ∫_{t_0}^{t_0+ε} σ_i(u)dB_i(u) | F_{t_0}]
= θ_i(t_0)ε + E[∫_{t_0}^{t_0+ε} (θ_i(u) − θ_i(t_0))du | F_{t_0}].

By the conditional Jensen inequality,

|E[∫_{t_0}^{t_0+ε} (θ_i(u) − θ_i(t_0))du | F_{t_0}]| ≤ E[∫_{t_0}^{t_0+ε} |θ_i(u) − θ_i(t_0)|du | F_{t_0}].

Since (1/ε)∫_{t_0}^{t_0+ε} |θ_i(u) − θ_i(t_0)|du ≤ 2M and lim_{ε↓0} (1/ε)∫_{t_0}^{t_0+ε} |θ_i(u) − θ_i(t_0)|du = 0 by the continuity of θ_i, the dominated convergence theorem under conditional expectation implies

lim_{ε↓0} E[(1/ε)∫_{t_0}^{t_0+ε} |θ_i(u) − θ_i(t_0)|du | F_{t_0}] = E[lim_{ε↓0} (1/ε)∫_{t_0}^{t_0+ε} |θ_i(u) − θ_i(t_0)|du | F_{t_0}] = 0.

So M_i(ε) = θ_i(t_0)ε + o(ε). This proves (iii).

To calculate the variance and covariance, we note Y_i(t) = ∫_0^t σ_i(u)dB_i(u) is a martingale and, by Itô's formula, Y_i(t)Y_j(t) − ∫_0^t σ_i(u)σ_j(u)ρ_{ij}(u)du is a martingale (i, j = 1, 2). So

E[(X_i(t_0+ε) − X_i(t_0))(X_j(t_0+ε) − X_j(t_0)) | F_{t_0}]
= E[(Y_i(t_0+ε) − Y_i(t_0) + ∫_{t_0}^{t_0+ε} θ_i(u)du)(Y_j(t_0+ε) − Y_j(t_0) + ∫_{t_0}^{t_0+ε} θ_j(u)du) | F_{t_0}]
= I + II + III + IV,

where

I = E[(Y_i(t_0+ε) − Y_i(t_0))(Y_j(t_0+ε) − Y_j(t_0)) | F_{t_0}],
II = E[∫_{t_0}^{t_0+ε} θ_i(u)du ∫_{t_0}^{t_0+ε} θ_j(u)du | F_{t_0}],
III = E[(Y_i(t_0+ε) − Y_i(t_0)) ∫_{t_0}^{t_0+ε} θ_j(u)du | F_{t_0}],
IV = E[(Y_j(t_0+ε) − Y_j(t_0)) ∫_{t_0}^{t_0+ε} θ_i(u)du | F_{t_0}].

By the martingale property above,

I = E[Y_i(t_0+ε)Y_j(t_0+ε) − Y_i(t_0)Y_j(t_0) | F_{t_0}] = E[∫_{t_0}^{t_0+ε} σ_i(u)σ_j(u)ρ_{ij}(u)du | F_{t_0}].

By an argument similar to that involved in the proof of part (iii), we conclude I = σ_i(t_0)σ_j(t_0)ρ_{ij}(t_0)ε + o(ε), and

II = E[∫_{t_0}^{t_0+ε} (θ_i(u) − θ_i(t_0))du ∫_{t_0}^{t_0+ε} θ_j(u)du | F_{t_0}] + θ_i(t_0)ε E[∫_{t_0}^{t_0+ε} θ_j(u)du | F_{t_0}]
= o(ε) + (M_i(ε) − o(ε))M_j(ε)
= M_i(ε)M_j(ε) + o(ε).

By the Cauchy-Schwarz inequality under conditional expectation (note E[XY | F] defines an inner product on L²(Ω)),

III ≤ E[|Y_i(t_0+ε) − Y_i(t_0)| ∫_{t_0}^{t_0+ε} |θ_j(u)|du | F_{t_0}]
≤ Mε √(E[(Y_i(t_0+ε) − Y_i(t_0))² | F_{t_0}])
= Mε √(E[Y_i(t_0+ε)² − Y_i(t_0)² | F_{t_0}])
= Mε √(E[∫_{t_0}^{t_0+ε} σ_i(u)² du | F_{t_0}])
≤ Mε · M√ε = o(ε).

Similarly, IV = o(ε). In summary, we have

E[(X_i(t_0+ε) − X_i(t_0))(X_j(t_0+ε) − X_j(t_0)) | F_{t_0}] = M_i(ε)M_j(ε) + σ_i(t_0)σ_j(t_0)ρ_{ij}(t_0)ε + o(ε).

This proves parts (iv) and (v). Finally,

lim_{ε↓0} C(ε)/√(V_1(ε)V_2(ε)) = lim_{ε↓0} [ρ(t_0)σ_1(t_0)σ_2(t_0)ε + o(ε)] / √((σ_1²(t_0)ε + o(ε))(σ_2²(t_0)ε + o(ε))) = ρ(t_0).

This proves part (vi). Parts (i) and (ii) are consequences of the general case.

4.18. (i)

Proof.

d(e^{rt}ζ_t) = d(e^{−θW_t − ½θ²t}) = −θ e^{−θW_t − ½θ²t} dW_t = −θ(e^{rt}ζ_t)dW_t,

where for the second "=", we used the fact that e^{−θW_t − ½θ²t} solves dX_t = −θX_t dW_t. Since d(e^{rt}ζ_t) = re^{rt}ζ_t dt + e^{rt}dζ_t, we get dζ_t = −θζ_t dW_t − rζ_t dt.

(ii)

Proof.

d(ζ_t X_t) = ζ_t dX_t + X_t dζ_t + dX_t dζ_t
= ζ_t(rX_t dt + Δ_t(α − r)S_t dt + Δ_t σS_t dW_t) + X_t(−θζ_t dW_t − rζ_t dt)
+ (rX_t dt + Δ_t(α − r)S_t dt + Δ_t σS_t dW_t)(−θζ_t dW_t − rζ_t dt)
= ζ_t(Δ_t(α − r)S_t dt + Δ_t σS_t dW_t) − X_t θζ_t dW_t − θζ_t Δ_t σS_t dt
= ζ_t Δ_t σS_t dW_t − X_t θζ_t dW_t,

where the dt terms cancel because θ = (α − r)/σ. So ζ_t X_t is a martingale.

(iii)

Proof. By part (ii), X_0 = ζ_0 X_0 = E[ζ_T X_T] = E[ζ_T V_T]. (This can be seen as a version of risk-neutral pricing, only that the pricing is carried out under the actual probability measure.)

4.19. (i)

Proof. B_t is a local martingale with [B]_t = ∫_0^t sign(W_s)² ds = t. So by Lévy's theorem, B_t is a Brownian motion.

(ii)

Proof. d(B_t W_t) = B_t dW_t + sign(W_t)W_t dW_t + sign(W_t)dt. Integrating both sides of the resulting equation and taking expectation, we get

E[B_t W_t] = ∫_0^t E[sign(W_s)]ds = ∫_0^t E[1_{{W_s ≥ 0}} − 1_{{W_s < 0}}]ds = ½t − ½t = 0.

(iii)

Proof. By Itô's formula, dW_t² = 2W_t dW_t + dt.

(iv)

Proof. By Itô's formula,

d(B_t W_t²) = B_t dW_t² + W_t² dB_t + dB_t dW_t²
= B_t(2W_t dW_t + dt) + W_t² sign(W_t)dW_t + sign(W_t)dW_t(2W_t dW_t + dt)
= 2B_t W_t dW_t + B_t dt + sign(W_t)W_t² dW_t + 2 sign(W_t)W_t dt.

So

E[B_t W_t²] = E[∫_0^t B_s ds] + 2E[∫_0^t sign(W_s)W_s ds]
= ∫_0^t E[B_s]ds + 2∫_0^t E[sign(W_s)W_s]ds
= 2∫_0^t (E[W_s 1_{{W_s ≥ 0}}] − E[W_s 1_{{W_s < 0}}])ds
= 4∫_0^t ∫_0^∞ x e^{−x²/(2s)}/√(2πs) dx ds
= 4∫_0^t √(s/(2π)) ds
≠ 0 = E[B_t] · E[W_t²].

Since E[B_t W_t²] ≠ E[B_t]E[W_t²], B_t and W_t are not independent.

4.20. (i)

Proof. f(x) = { x − K, if x ≥ K; 0, if x < K }. So

f′(x) = { 1, if x > K; undefined, if x = K; 0, if x < K }

and f′′(x) = { 0, if x ≠ K; undefined, if x = K }.

(ii)

Proof. E[f(W_T)] = ∫_K^∞ (x − K) e^{−x²/(2T)}/√(2πT) dx = √(T/(2π)) e^{−K²/(2T)} − KΦ(−K/√T) > 0, where Φ is the distribution function of a standard normal random variable. If we suppose ∫_0^T f′′(W_t)dt = 0, the expectation of the RHS of (4.10.42) is equal to 0. So (4.10.42) cannot hold.

(iii)

Proof. This is trivial to check.

(iv)

Proof. If x = K, lim_n f_n(x) = lim_n 1/(8n) = 0; if x > K, for n large enough, x ≥ K + 1/(2n), so lim_n f_n(x) = lim_n (x − K) = x − K; if x < K, for n large enough, x ≤ K − 1/(2n), so lim_n f_n(x) = lim_n 0 = 0. In summary, lim_n f_n(x) = (x − K)^+. Similarly, we can show

lim_n f′_n(x) = { 0, if x < K; ½, if x = K; 1, if x > K. }    (5)

(v)

Proof. Fix ω, so that W_t(ω) < K for any t ∈ [0, T]. Since W_t(ω) attains its maximum on [0, T], there exists n_0, so that for any n ≥ n_0, max_{0≤t≤T} W_t(ω) < K − 1/(2n). So

L_K(T)(ω) = lim_n n∫_0^T 1_{(K − 1/(2n), K + 1/(2n))}(W_t(ω))dt = 0.

(vi)

Proof. Taking expectation on both sides of formula (4.10.45), we have

E[L_K(T)] = E[(W_T − K)^+] > 0.

So we cannot have L_K(T) = 0 a.s.

4.21. (i)

Proof. There are two problems. First, the transaction cost could be big due to active trading; second, the purchases and sales cannot be made at exactly the same price K. For more details, see Hull [2].

(ii)

Proof. No. The RHS of (4.10.26) is a martingale, so its expectation is 0. But E[(S_T − K)^+] > 0. So X_T ≠ (S_T − K)^+.

5. Risk-Neutral Pricing

5.1. (i)

Proof.

df(X_t) = f′(X_t)dX_t + ½f′′(X_t)d⟨X⟩_t = f(X_t)(dX_t + ½d⟨X⟩_t)
= f(X_t)(σ_t dW_t + (α_t − R_t − ½σ_t²)dt + ½σ_t² dt)
= f(X_t)(α_t − R_t)dt + f(X_t)σ_t dW_t.

This is formula (5.2.20).

(ii)

Proof. d(D_t S_t) = S_t dD_t + D_t dS_t + dD_t dS_t = −S_t R_t D_t dt + D_t α_t S_t dt + D_t σ_t S_t dW_t = D_t S_t(α_t − R_t)dt + D_t S_t σ_t dW_t. This is formula (5.2.20).

5.2.

Proof. By Lemma 5.2.2.,

Ẽ[D_T V_T | F_t] = E[D_T V_T (Z_T/Z_t) | F_t].

So D_t V_t = Ẽ[D_T V_T | F_t] is equivalent to D_t V_t Z_t = E[D_T V_T Z_T | F_t].

5.3. (i)

Proof. Write h(y) = (y − K)^+. Interchanging differentiation and expectation (justified by dominated convergence),

c_x(0, x) = (d/dx) Ẽ[e^{−rT}(xe^{σW̃_T + (r − ½σ²)T} − K)^+]
= Ẽ[e^{−rT} (d/dx) h(xe^{σW̃_T + (r − ½σ²)T})]
= Ẽ[e^{−rT} e^{σW̃_T + (r − ½σ²)T} 1_{{xe^{σW̃_T + (r − ½σ²)T} > K}}]
= e^{−½σ²T} Ẽ[e^{σW̃_T} 1_{{W̃_T > (1/σ)(ln(K/x) − (r − ½σ²)T)}}].

Substituting z = W̃_T/√T (a standard normal under P̃) and noting (1/(σ√T))(ln(K/x) − (r − ½σ²)T) = −d_−(T, x), this equals

e^{−½σ²T} ∫ (1/√(2π)) e^{−z²/2} e^{σ√T z} 1_{{z > −d_−(T, x)}} dz
= ∫ (1/√(2π)) e^{−(z − σ√T)²/2} 1_{{z > −d_−(T, x)}} dz
= P(Z > −d_−(T, x) − σ√T)
= P(Z > −d_+(T, x))
= N(d_+(T, x)),

where Z is standard normal.

(ii)

Proof. If we set Z̃_T = e^{σW̃_T − ½σ²T} and Z̃_t = Ẽ[Z̃_T | F_t], then Z̃ is a P̃-martingale, Z̃_t > 0 and Ẽ[Z̃_T] = Ẽ[e^{σW̃_T − ½σ²T}] = 1. So if we define P̂ by dP̂ = Z̃_T dP̃ on F_T, then P̂ is a probability measure equivalent to P̃, and

c_x(0, x) = Ẽ[Z̃_T 1_{{S_T > K}}] = P̂(S_T > K).

Moreover, by Girsanov's Theorem, Ŵ_t = W̃_t + ∫_0^t (−σ)du = W̃_t − σt is a P̂-Brownian motion (set Θ = −σ in Theorem 5.4.1).

(iii)

Proof. S_T = xe^{σW̃_T + (r − ½σ²)T} = xe^{σŴ_T + (r + ½σ²)T}. So

P̂(S_T > K) = P̂(xe^{σŴ_T + (r + ½σ²)T} > K) = P̂(Ŵ_T/√T > −d_+(T, x)) = N(d_+(T, x)).

5.4. First, a few typos. In the SDE for S, "σ(t)dW̃(t)" should be "σ(t)S(t)dW̃(t)". In the first equation for c(0, S(0)), "E" should be "Ẽ". In the second equation for c(0, S(0)), the variable for BSM should be

BSM(T, S(0); K, (1/T)∫_0^T r(t)dt, √((1/T)∫_0^T σ²(t)dt)).

(i)

Proof. d ln S_t = dS_t/S_t − (1/(2S_t²))d⟨S⟩_t = r_t dt + σ_t dW̃_t − ½σ_t² dt. So S_T = S_0 exp{∫_0^T (r_t − ½σ_t²)dt + ∫_0^T σ_t dW̃_t}. Let X = ∫_0^T (r_t − ½σ_t²)dt + ∫_0^T σ_t dW̃_t. The first term in the expression of X is a number and the second term is a Gaussian random variable N(0, ∫_0^T σ_t² dt), since both r and σ are deterministic. Therefore, S_T = S_0 e^X, with X ~ N(∫_0^T (r_t − ½σ_t²)dt, ∫_0^T σ_t² dt).

(ii)

Proof. For the standard BSM model with constant volatility Σ and interest rate R, under the risk-neutral measure, we have S_T = S_0 e^Y, where Y = (R − ½Σ²)T + ΣW̃_T ~ N((R − ½Σ²)T, Σ²T), and Ẽ[(S_0 e^Y − K)^+] = e^{RT} BSM(T, S_0; K, R, Σ). Note R = (1/T)(E[Y] + ½Var(Y)) and Σ = √((1/T)Var(Y)), so we can get

Ẽ[(S_0 e^Y − K)^+] = e^{E[Y] + ½Var(Y)} BSM(T, S_0; K, (1/T)(E[Y] + ½Var(Y)), √((1/T)Var(Y))).

So for the model in this problem,

c(0, S_0) = e^{−∫_0^T r_t dt} Ẽ[(S_0 e^X − K)^+]
= e^{−∫_0^T r_t dt} e^{E[X] + ½Var(X)} BSM(T, S_0; K, (1/T)(E[X] + ½Var(X)), √((1/T)Var(X)))
= BSM(T, S_0; K, (1/T)∫_0^T r_t dt, √((1/T)∫_0^T σ_t² dt)).
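The conclusion of 5.4 — with deterministic r(t) and σ(t), the call price is the BSM formula evaluated at the time-averaged rate and root-mean-square volatility — can be tested by Monte Carlo, since X is exactly Gaussian and so S_T can be sampled without path discretization. The sketch below is my own illustrative check, not from the manual; the specific choices r(t) = 0.02 + 0.02t and σ(t) = 0.2 + 0.1t on [0, 1] are arbitrary, with the integrals below computed in closed form for those choices:

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_call(T, x, K, r, sigma):
    """Standard constant-coefficient Black-Scholes-Merton call price."""
    d_plus = (math.log(x / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d_minus = d_plus - sigma * math.sqrt(T)
    return x * norm_cdf(d_plus) - K * math.exp(-r * T) * norm_cdf(d_minus)

# Deterministic r(t) = 0.02 + 0.02 t and sigma(t) = 0.2 + 0.1 t on [0, 1]:
rbar = 0.03                        # (1/T) \int_0^1 r(t) dt
var_x = 0.04 + 0.02 + 0.01 / 3.0   # \int_0^1 sigma(t)^2 dt
sigbar = math.sqrt(var_x)
mean_x = rbar - 0.5 * var_x        # \int_0^1 (r_t - sigma_t^2 / 2) dt

def mc_price(S0, K, n, rng):
    """Monte Carlo price: S_T = S0 e^X with X ~ N(mean_x, var_x) sampled exactly."""
    disc = math.exp(-rbar)  # e^{-\int_0^1 r dt}
    acc = 0.0
    for _ in range(n):
        x = mean_x + sigbar * rng.gauss(0.0, 1.0)
        acc += max(S0 * math.exp(x) - K, 0.0)
    return disc * acc / n
```

The Monte Carlo estimate should agree with bsm_call(1, S_0; K, r̄, σ̄) up to sampling error.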

5.5. (i)

Proof. Let f(x) = 1/x, then f′(x) = −1/x² and f′′(x) = 2/x³. Note dZ_t = −Z_t Θ_t dW_t, so

d(1/Z_t) = f′(Z_t)dZ_t + ½f′′(Z_t)dZ_t dZ_t = −(1/Z_t²)(−Z_t)Θ_t dW_t + ½(2/Z_t³)Z_t²Θ_t² dt = (Θ_t/Z_t)dW_t + (Θ_t²/Z_t)dt.

(ii)

Proof. By Lemma 5.2.2., for s, t ≥ 0 with s < t,

M̃_s = Ẽ[M̃_t | F_s] = E[(Z_t M̃_t)/Z_s | F_s].

That is, E[Z_t M̃_t | F_s] = Z_s M̃_s. So M = ZM̃ is a P-martingale.

(iii)

Proof.

dM̃_t = d(M_t · (1/Z_t)) = (1/Z_t)dM_t + M_t d(1/Z_t) + dM_t d(1/Z_t)
= (Γ_t/Z_t)dW_t + (M_t Θ_t/Z_t)dW_t + (M_t Θ_t²/Z_t)dt + (Γ_t Θ_t/Z_t)dt.

(iv)

Proof. In part (iii), we have

dM̃_t = (Γ_t/Z_t)(dW_t + Θ_t dt) + (M_t Θ_t/Z_t)(dW_t + Θ_t dt) = ((Γ_t + M_t Θ_t)/Z_t)dW̃_t.

Let Γ̃_t = (Γ_t + M_t Θ_t)/Z_t, then dM̃_t = Γ̃_t dW̃_t. This proves Corollary 5.3.2.

5.6.

Proof. By Theorem 4.6.5, it suffices to show W̃_i(t) is an F_t-martingale under P̃ and [W̃_i, W̃_j](t) = tδ_{ij} (i, j = 1, 2). Indeed, for i = 1, 2, W̃_i(t) is an F_t-martingale under P̃ if and only if W̃_i(t)Z_t is an F_t-martingale under P, since

Ẽ[W̃_i(t) | F_s] = E[(W̃_i(t)Z_t)/Z_s | F_s].

By Itô's product formula, we have

d(W̃_i(t)Z_t) = W̃_i(t)dZ_t + Z_t dW̃_i(t) + dZ_t dW̃_i(t)
= W̃_i(t)(−Z_t)Θ(t) · dW_t + Z_t(dW_i(t) + Θ_i(t)dt) + (−Z_t Θ_t · dW_t)(dW_i(t) + Θ_i(t)dt)
= W̃_i(t)(−Z_t) Σ_{j=1}^d Θ_j(t)dW_j(t) + Z_t(dW_i(t) + Θ_i(t)dt) − Z_t Θ_i(t)dt
= W̃_i(t)(−Z_t) Σ_{j=1}^d Θ_j(t)dW_j(t) + Z_t dW_i(t).

This shows W̃_i(t)Z_t is an F_t-martingale under P. So W̃_i(t) is an F_t-martingale under P̃. Moreover,

[W̃_i, W̃_j](t) = [W_i + ∫ Θ_i(s)ds, W_j + ∫ Θ_j(s)ds](t) = [W_i, W_j](t) = tδ_{ij}.

Combined, this proves the two-dimensional Girsanov Theorem.

5.7. (i)

Proof. Let a be any strictly positive number. We define X_2(t) = (a + X_1(t))D(t)^{−1}. Then

P(X_2(T) ≥ X_2(0)/D(T)) = P(a + X_1(T) ≥ a) = P(X_1(T) ≥ 0) = 1,

and P(X_2(T) > X_2(0)/D(T)) = P(X_1(T) > 0) > 0. Since a is arbitrary, we have proved the claim of this problem.

Remark: The intuition is that we invest the positive starting fund a into the money market account, and construct portfolio X_1 from zero cost. Their sum should be able to beat the return of the money market account.

(ii)

Proof. We define X_1(t) = X_2(t)D(t) − X_2(0). Then X_1(0) = 0,

P(X_1(T) ≥ 0) = P(X_2(T) ≥ X_2(0)/D(T)) = 1, P(X_1(T) > 0) = P(X_2(T) > X_2(0)/D(T)) > 0.

5.8. The basic idea is that for any positive P̃-martingale M, dM_t = M_t · (1/M_t)dM_t. By the Martingale Representation Theorem, dM_t = Γ_t dW̃_t for some adapted process Γ_t. So dM_t = M_t(Γ_t/M_t)dW̃_t, i.e. any positive martingale must be the exponential of an integral w.r.t. Brownian motion. Taking into account the discounting factor and applying Itô's product rule, we can show every strictly positive asset is a generalized geometric Brownian motion.

(i)

Proof. V_t D_t = Ẽ[e^{−∫_0^T R_u du} V_T | F_t] = Ẽ[D_T V_T | F_t]. So (D_t V_t)_{t≥0} is a P̃-martingale. By the Martingale Representation Theorem, there exists an adapted process Γ̃_t, 0 ≤ t ≤ T, such that D_t V_t = V_0 + ∫_0^t Γ̃_s dW̃_s, or equivalently, V_t = D_t^{−1}(V_0 + ∫_0^t Γ̃_s dW̃_s). Differentiating both sides of the equation, we get dV_t = R_t D_t^{−1}(V_0 + ∫_0^t Γ̃_s dW̃_s)dt + D_t^{−1}Γ̃_t dW̃_t, i.e. dV_t = R_t V_t dt + (Γ̃_t/D_t)dW̃_t.

(ii)

Proof. We prove the following more general lemma.

Lemma 1. Let X be an almost surely positive random variable (i.e. X > 0 a.s.) defined on the probability space (Ω, G, P). Let F be a sub-σ-algebra of G, then Y = E[X | F] > 0 a.s.

Proof. By the property of conditional expectation Y ≥ 0 a.s. Let A = {Y = 0}; we shall show P(A) = 0. Indeed, note A ∈ F, and

0 = E[Y 1_A] = E[E[X | F]1_A] = E[X1_A] = E[X1_{A∩{X≥1}}] + Σ_{n=1}^∞ E[X1_{A∩{1/n > X ≥ 1/(n+1)}}]
≥ P(A∩{X ≥ 1}) + Σ_{n=1}^∞ (1/(n+1)) P(A∩{1/n > X ≥ 1/(n+1)}).

So P(A∩{X ≥ 1}) = 0 and P(A∩{1/n > X ≥ 1/(n+1)}) = 0 for all n ≥ 1. This in turn implies P(A) = P(A∩{X > 0}) = P(A∩{X ≥ 1}) + Σ_{n=1}^∞ P(A∩{1/n > X ≥ 1/(n+1)}) = 0. ∎

By the above lemma, it is clear that for each t ∈ [0, T], V_t = Ẽ[e^{−∫_t^T R_u du} V_T | F_t] > 0 a.s. Moreover, by a classical result of martingale theory (Revuz and Yor [4], Chapter II, Proposition (3.4)), we have the following stronger result: for a.s. ω, V_t(ω) > 0 for any t ∈ [0, T].

(iii)

Proof. By (ii), V > 0 a.s., so

dV_t = V_t (1/V_t) dV_t = V_t (1/V_t)(R_t V_t dt + (Γ̃_t/D_t)dW̃_t) = R_t V_t dt + V_t (Γ̃_t/(V_t D_t))dW̃_t = R_t V_t dt + σ_t V_t dW̃_t,

where σ_t = Γ̃_t/(V_t D_t). This shows V follows a generalized geometric Brownian motion.

5.9.

Proof. c(0, T, x, K) = xN(d_+) − Ke^{−rT}N(d_−) with d_± = (1/(σ√T))(ln(x/K) + (r ± ½σ²)T). Let f(y) = (1/√(2π))e^{−y²/2}, then f′(y) = −yf(y), ∂d_±/∂K = −1/(σ√T K), and

c_K(0, T, x, K) = xf(d_+)(∂d_+/∂K) − e^{−rT}N(d_−) − Ke^{−rT}f(d_−)(∂d_−/∂K)
= −xf(d_+)(1/(σ√T K)) − e^{−rT}N(d_−) + e^{−rT}f(d_−)(1/(σ√T)),

and

c_KK(0, T, x, K)
= xf(d_+)(1/(σ√T K²)) − (x/(σ√T K))f(d_+)(−d_+)(∂d_+/∂K) − e^{−rT}f(d_−)(∂d_−/∂K) + (e^{−rT}/(σ√T))(−d_−)f(d_−)(∂d_−/∂K)
= (x/(K²σ√T))f(d_+)[1 − d_+/(σ√T)] + (e^{−rT}/(Kσ√T))f(d_−)[1 + d_−/(σ√T)]
= (e^{−rT}/(Kσ²T))f(d_−)d_+ − (x/(K²σ²T))f(d_+)d_−.
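Using the identity xf(d_+) = Ke^{−rT}f(d_−) from 4.9(i), the expression for c_KK above simplifies to e^{−rT}f(d_−)/(Kσ√T), the risk-neutral density of S_T at K (the Breeden-Litzenberger relation). This is easy to confirm with a finite difference of the call price in K; the standalone Python sketch below is my own illustrative check with arbitrary parameter values, not from the manual:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_call(T, x, K, r, sigma):
    """Black-Scholes-Merton call price c(0, T, x, K)."""
    d_plus = (math.log(x / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d_minus = d_plus - sigma * math.sqrt(T)
    return x * norm_cdf(d_plus) - K * math.exp(-r * T) * norm_cdf(d_minus)

def c_KK_formula(T, x, K, r, sigma):
    """Simplified second strike derivative: e^{-rT} f(d_-) / (K sigma sqrt(T)),
    i.e. the discounted risk-neutral density of S_T at K."""
    d_minus = (math.log(x / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    f = math.exp(-0.5 * d_minus ** 2) / math.sqrt(2.0 * math.pi)
    return math.exp(-r * T) * f / (K * sigma * math.sqrt(T))
```

The central second difference (c(K+h) − 2c(K) + c(K−h))/h² should match the closed form to high accuracy for a moderate h.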

5.10. (i)

Proof. At time t_0, the value of the chooser option is

V(t_0) = max{C(t_0), P(t_0)} = max{C(t_0), C(t_0) − F(t_0)} = C(t_0) + max{0, −F(t_0)} = C(t_0) + (e^{−r(T−t_0)}K − S(t_0))^+.

(ii)

Proof. By the risk-neutral pricing formula,

V(0) = Ẽ[e^{−rt_0}V(t_0)] = Ẽ[e^{−rt_0}C(t_0) + (e^{−rT}K − e^{−rt_0}S(t_0))^+] = C(0) + Ẽ[e^{−rt_0}(e^{−r(T−t_0)}K − S(t_0))^+].

The first term is the value of a call expiring at time T with strike price K and the second term is the value of a put expiring at time t_0 with strike price e^{−r(T−t_0)}K.

5.11.

Proof. We first make an analysis which leads to the hint, then we give a formal proof.

(Analysis) If we want to construct a portfolio X that exactly replicates the cash flow, we must find a solution to the backward SDE

dX_t = Δ_t dS_t + R_t(X_t − Δ_t S_t)dt − C_t dt
X_T = 0.

Multiplying D_t on both sides of the first equation and applying Itô's product rule, we get d(D_t X_t) = Δ_t d(D_t S_t) − C_t D_t dt. Integrating from 0 to T, we have D_T X_T − D_0 X_0 = ∫_0^T Δ_t d(D_t S_t) − ∫_0^T C_t D_t dt. By the terminal condition, we get

X_0 = D_0^{−1}(∫_0^T C_t D_t dt − ∫_0^T Δ_t d(D_t S_t)).

X_0 is the theoretical, no-arbitrage price of the cash flow, provided we can find a trading strategy Δ that solves the BSDE. Note the SDE for S gives d(D_t S_t) = (D_t S_t)σ_t(Θ_t dt + dW_t), where Θ_t = (α_t − R_t)/σ_t. Taking the proper change of measure so that W̃_t = ∫_0^t Θ_s ds + W_t is a Brownian motion under the new measure P̃, we get

∫_0^T C_t D_t dt = D_0 X_0 + ∫_0^T Δ_t d(D_t S_t) = D_0 X_0 + ∫_0^T Δ_t(D_t S_t)σ_t dW̃_t.

This says the random variable ∫_0^T C_t D_t dt has a stochastic integral representation D_0 X_0 + ∫_0^T Δ_t D_t S_t σ_t dW̃_t. This inspires us to consider the martingale generated by ∫_0^T C_t D_t dt, so that we can apply the Martingale Representation Theorem and get a formula for Δ by comparison of the integrands.

(Formal proof) Let M_T = ∫_0^T C_t D_t dt, and M_t = Ẽ[M_T | F_t]. Then by the Martingale Representation Theorem, we can find an adapted process Γ̃_t, so that M_t = M_0 + ∫_0^t Γ̃_s dW̃_s. If we set Δ_t = Γ̃_t/(D_t S_t σ_t), we can check that

X_t = D_t^{−1}(D_0 X_0 + ∫_0^t Δ_u d(D_u S_u) − ∫_0^t C_u D_u du), with X_0 = M_0 = Ẽ[∫_0^T C_t D_t dt],

solves the SDE

dX_t = Δ_t dS_t + R_t(X_t − Δ_t S_t)dt − C_t dt
X_T = 0.

Indeed, it is easy to see that X satisfies the first equation. To check the terminal condition, we note

X_T D_T = D_0 X_0 + ∫_0^T Δ_t D_t S_t σ_t dW̃_t − ∫_0^T C_t D_t dt = M_0 + ∫_0^T Γ̃_t dW̃_t − M_T = 0.

So X_T = 0. Thus, we have found a trading strategy Δ, so that the corresponding portfolio X replicates the cash flow and has zero terminal value. So X_0 = Ẽ[∫_0^T C_t D_t dt] is the no-arbitrage price of the cash flow at time zero.

Remark: As shown in the analysis, d(D_t X_t) = Δ_t d(D_t S_t) − C_t D_t dt. Integrating from t to T, we get 0 − D_t X_t = ∫_t^T Δ_u d(D_u S_u) − ∫_t^T C_u D_u du. Taking conditional expectation w.r.t. F_t on both sides, we get −D_t X_t = −Ẽ[∫_t^T C_u D_u du | F_t]. So X_t = D_t^{−1} Ẽ[∫_t^T C_u D_u du | F_t]. This is the no-arbitrage price of the cash flow at time t, and we have justified formula (5.6.10) in the textbook.

5.12. (i)

Proof. dB̃_i(t) = dB_i(t) + γ_i(t)dt = Σ_{j=1}^d (σ_{ij}(t)/σ_i(t))dW_j(t) + Σ_{j=1}^d (σ_{ij}(t)/σ_i(t))Θ_j(t)dt = Σ_{j=1}^d (σ_{ij}(t)/σ_i(t))dW̃_j(t). So B̃_i is a martingale. Since dB̃_i(t)dB̃_i(t) = Σ_{j=1}^d (σ_{ij}²(t)/σ_i²(t))dt = dt, by Lévy's Theorem, B̃_i is a Brownian motion under P̃.

(ii)

Proof.

dS_i(t) = R(t)S_i(t)dt + σ_i(t)S_i(t)dB̃_i(t) + (α_i(t) − R(t))S_i(t)dt − σ_i(t)S_i(t)γ_i(t)dt
= R(t)S_i(t)dt + σ_i(t)S_i(t)dB̃_i(t) + Σ_{j=1}^d σ_{ij}(t)Θ_j(t)S_i(t)dt − S_i(t)Σ_{j=1}^d σ_{ij}(t)Θ_j(t)dt
= R(t)S_i(t)dt + σ_i(t)S_i(t)dB̃_i(t).

(iii)

Proof. dB̃_i(t)dB̃_k(t) = (dB_i(t) + γ_i(t)dt)(dB_k(t) + γ_k(t)dt) = dB_i(t)dB_k(t) = ρ_{ik}(t)dt.

(iv)

Proof. By Itô's product rule and the martingale property,

E[B_i(t)B_k(t)] = E[∫_0^t B_i(s)dB_k(s)] + E[∫_0^t B_k(s)dB_i(s)] + E[∫_0^t dB_i(s)dB_k(s)] = E[∫_0^t ρ_{ik}(s)ds] = ∫_0^t ρ_{ik}(s)ds.

Similarly, by part (iii), we can show Ẽ[B̃_i(t)B̃_k(t)] = ∫_0^t ρ_{ik}(s)ds.

(v)

Proof. By Itô's product formula,

E[B_1(t)B_2(t)] = E[∫_0^t sign(W_1(u))du] = ∫_0^t [P(W_1(u) ≥ 0) − P(W_1(u) < 0)]du = 0.

Meanwhile,

Ẽ[B̃_1(t)B̃_2(t)] = Ẽ[∫_0^t sign(W_1(u))du]
= ∫_0^t [P̃(W_1(u) ≥ 0) − P̃(W_1(u) < 0)]du
= ∫_0^t [P̃(W̃_1(u) ≥ u) − P̃(W̃_1(u) < u)]du
= ∫_0^t 2[½ − P̃(W̃_1(u) < u)]du
< 0,

for any t > 0. So E[B_1(t)B_2(t)] ≠ Ẽ[B̃_1(t)B̃_2(t)] for all t > 0.

5.13. (i)

Proof. $\tilde{E}[W_1(t)] = \tilde{E}[\tilde{W}_1(t)] = 0$ and $\tilde{E}[W_2(t)] = \tilde{E}[\tilde{W}_2(t) - \int_0^t W_1(u)\,du] = 0$, for all $t\in[0,T]$.

(ii)

Proof.

$\operatorname{Cov}[W_1(T), W_2(T)] = \tilde{E}[W_1(T)W_2(T)]$
$= \tilde{E}\Big[\int_0^T W_1(t)\,dW_2(t) + \int_0^T W_2(t)\,dW_1(t)\Big]$
$= \tilde{E}\Big[\int_0^T W_1(t)\big(d\tilde{W}_2(t) - W_1(t)\,dt\big)\Big] + \tilde{E}\Big[\int_0^T W_2(t)\,d\tilde{W}_1(t)\Big]$
$= -\tilde{E}\Big[\int_0^T W_1(t)^2\,dt\Big] = -\int_0^T t\,dt = -\frac12 T^2.$
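The sign and size of this covariance can be sanity-checked by simulating the construction of Exercise 5.13 directly under $\tilde{P}$: take independent Brownian motions $\tilde{W}_1, \tilde{W}_2$, set $W_1 = \tilde{W}_1$ and $dW_2 = d\tilde{W}_2 - W_1\,dt$, and estimate $\operatorname{Cov}[W_1(T), W_2(T)]$. The following Monte Carlo sketch is my own illustration; the step count, path count, and seed are arbitrary choices, not from the text.

```python
import numpy as np

def simulate_cov(T=1.0, n_steps=200, n_paths=100_000, seed=0):
    """Estimate Cov[W1(T), W2(T)] where W1 = Wtilde1 and
    dW2 = dWtilde2 - W1 dt, with Wtilde1, Wtilde2 independent BMs."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    w1 = np.zeros(n_paths)
    w2 = np.zeros(n_paths)
    for _ in range(n_steps):
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        w2 += dw2 - w1 * dt   # dW2 = dWtilde2 - W1 dt (W1 taken at the left endpoint)
        w1 += dw1             # W1 is itself a Brownian motion
    return np.cov(w1, w2)[0, 1]

cov = simulate_cov()
# theory: Cov[W1(T), W2(T)] = -T^2/2 = -0.5 for T = 1
```

With these settings the estimate lands close to $-0.5$, while the sample means of $W_1(T)$ and $W_2(T)$ are near zero, in line with part (i).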

5.14. Equation (5.9.6) can be transformed into $d(e^{-rt}X_t) = \Delta_t[d(e^{-rt}S_t) - ae^{-rt}\,dt] = \Delta_te^{-rt}[dS_t - rS_t\,dt - a\,dt]$. So, to make the discounted portfolio value $e^{-rt}X_t$ a martingale, we are motivated to change the measure in such a way that $S_t - r\int_0^t S_u\,du - at$ is a martingale under the new measure. To do this, we note the SDE for $S$ is $dS_t = \alpha_tS_t\,dt + \sigma S_t\,dW_t$. Hence

$dS_t - rS_t\,dt - a\,dt = [(\alpha_t - r)S_t - a]\,dt + \sigma S_t\,dW_t = \sigma S_t\Big[\frac{(\alpha_t - r)S_t - a}{\sigma S_t}\,dt + dW_t\Big].$

Setting $\theta_t = \frac{(\alpha_t - r)S_t - a}{\sigma S_t}$ and $\tilde{W}_t = \int_0^t \theta_s\,ds + W_t$, we can find an equivalent probability measure $\tilde{P}$, under which $S$ satisfies the SDE $dS_t = rS_t\,dt + \sigma S_t\,d\tilde{W}_t + a\,dt$ and $\tilde{W}_t$ is a BM. This is the rationale for formula (5.9.7).

This is a good place to pause and think about the meaning of the martingale measure. What is to be a martingale? The new measure $\tilde{P}$ should be such that the discounted value process of the replicating portfolio is a martingale, not the discounted price process of the underlying. First, we want $D_tX_t$ to be a martingale under $\tilde{P}$ because we suppose that $X$ is able to replicate the derivative payoff at terminal time, $X_T = V_T$. In order to avoid arbitrage, we must have $X_t = V_t$ for any $t\in[0,T]$. The difficulty is how to calculate $X_t$, and the magic is brought by the martingale measure in the following line of reasoning:

$V_t = X_t = D_t^{-1}\tilde{E}[D_TX_T\,|\,\mathcal{F}_t] = D_t^{-1}\tilde{E}[D_TV_T\,|\,\mathcal{F}_t].$

You can think of the martingale measure as a calculational convenience. That is all about the martingale measure. "Risk-neutral" is just a perception, referring to the actual effect of constructing a hedging portfolio. Second, we note that when the portfolio is self-financing, the discounted price process of the underlying is a martingale under $\tilde{P}$, as in the classical Black-Scholes-Merton model without dividends or cost of carry. This is not a coincidence. Indeed, we have in this case the relation $d(D_tX_t) = \Delta_t\,d(D_tS_t)$. So $D_tX_t$ being a martingale under $\tilde{P}$ is more or less equivalent to $D_tS_t$ being a martingale under $\tilde{P}$. However, when the underlying pays dividends, or there is cost of carry, $d(D_tX_t) = \Delta_t\,d(D_tS_t)$ no longer holds, as shown in formula (5.9.6). The portfolio is no longer self-financing, but self-financing with consumption. What we still want to retain is the martingale property of $D_tX_t$, not that of $D_tS_t$. This is how we choose the martingale measure in the above paragraph.

Let $V_T$ be a payoff at time $T$; then for the martingale $M_t = \tilde{E}[e^{-rT}V_T\,|\,\mathcal{F}_t]$, by the Martingale Representation Theorem, we can find an adapted process $\tilde{\Gamma}_t$, so that $M_t = M_0 + \int_0^t \tilde{\Gamma}_s\,d\tilde{W}_s$. If we let $\Delta_t = \frac{\tilde{\Gamma}_te^{rt}}{\sigma S_t}$, then the value of the corresponding portfolio $X$ satisfies $d(e^{-rt}X_t) = \tilde{\Gamma}_t\,d\tilde{W}_t$. So by setting $X_0 = M_0 = \tilde{E}[e^{-rT}V_T]$, we must have $e^{-rt}X_t = M_t$, for all $t\in[0,T]$. In particular, $X_T = V_T$. Thus the portfolio perfectly hedges $V_T$. This justifies the risk-neutral pricing of European-type contingent claims in the model where cost of carry exists. Also note the risk-neutral measure is different from the one in the case of no cost of carry.

Another perspective on perfect replication is the following. We need to solve the backward SDE

$dX_t = \Delta_t\,dS_t - a\Delta_t\,dt + r(X_t - \Delta_tS_t)\,dt,\qquad X_T = V_T,$

for two unknowns, $X$ and $\Delta$. To do so, we find a probability measure $\tilde{P}$, under which $e^{-rt}X_t$ is a martingale; then $e^{-rt}X_t = \tilde{E}[e^{-rT}V_T\,|\,\mathcal{F}_t] := M_t$. The Martingale Representation Theorem gives $M_t = M_0 + \int_0^t \tilde{\Gamma}_u\,d\tilde{W}_u$ for some adapted process $\tilde{\Gamma}$. This would give us a theoretical representation of $\Delta$ by comparison of integrands, hence a perfect replication of $V_T$.

(i)

Proof. As indicated in the above analysis, if we have (5.9.7) under $\tilde{P}$, then $d(e^{-rt}X_t) = \Delta_t[d(e^{-rt}S_t) - ae^{-rt}\,dt] = \Delta_te^{-rt}\sigma S_t\,d\tilde{W}_t$. So $(e^{-rt}X_t)_{t\ge 0}$, where $X$ is given by (5.9.6), is a $\tilde{P}$-martingale.

(ii)

Proof. By Itô's formula, $dY_t = Y_t[\sigma\,d\tilde{W}_t + (r - \frac12\sigma^2)\,dt] + \frac12 Y_t\sigma^2\,dt = Y_t(\sigma\,d\tilde{W}_t + r\,dt)$. So $d(e^{-rt}Y_t) = \sigma e^{-rt}Y_t\,d\tilde{W}_t$ and $e^{-rt}Y_t$ is a $\tilde{P}$-martingale. Moreover, if $S_t = S_0Y_t + Y_t\int_0^t \frac{a}{Y_s}\,ds$, then

$dS_t = S_0\,dY_t + \Big(\int_0^t \frac{a}{Y_s}\,ds\Big)dY_t + a\,dt = \Big(S_0 + \int_0^t \frac{a}{Y_s}\,ds\Big)Y_t(\sigma\,d\tilde{W}_t + r\,dt) + a\,dt = S_t(\sigma\,d\tilde{W}_t + r\,dt) + a\,dt.$

This shows $S$ satisfies (5.9.7).

Remark: To obtain this formula for $S$, we first set $U_t = e^{-rt}S_t$ to remove the $rS_t\,dt$ term. The SDE for $U$ is $dU_t = \sigma U_t\,d\tilde{W}_t + ae^{-rt}\,dt$. Just like solving a linear ODE, to remove $U$ in the $d\tilde{W}_t$ term, we consider $V_t = U_te^{-\sigma\tilde{W}_t}$. Itô's product formula yields

$dV_t = e^{-\sigma\tilde{W}_t}\,dU_t + U_te^{-\sigma\tilde{W}_t}\Big[(-\sigma)\,d\tilde{W}_t + \frac12\sigma^2\,dt\Big] + dU_t\cdot e^{-\sigma\tilde{W}_t}\Big[(-\sigma)\,d\tilde{W}_t + \frac12\sigma^2\,dt\Big] = e^{-\sigma\tilde{W}_t}ae^{-rt}\,dt - \frac12\sigma^2V_t\,dt.$

Note $V$ appears only in the $dt$ term, so multiplying the integrating factor $e^{\frac12\sigma^2t}$ on both sides of the equation, we get

$d\big(e^{\frac12\sigma^2t}V_t\big) = ae^{-rt-\sigma\tilde{W}_t+\frac12\sigma^2t}\,dt.$

Setting $Y_t = e^{\sigma\tilde{W}_t+(r-\frac12\sigma^2)t}$, we have $d(S_t/Y_t) = a\,dt/Y_t$. So $S_t = Y_t\big(S_0 + \int_0^t \frac{a\,ds}{Y_s}\big)$.

(iii)

Proof.

$\tilde{E}[S_T\,|\,\mathcal{F}_t] = S_0\tilde{E}[Y_T\,|\,\mathcal{F}_t] + \tilde{E}\Big[Y_T\int_0^t \frac{a}{Y_s}\,ds + Y_T\int_t^T \frac{a}{Y_s}\,ds \,\Big|\, \mathcal{F}_t\Big]$
$= S_0\tilde{E}[Y_T\,|\,\mathcal{F}_t] + \int_0^t \frac{a}{Y_s}\,ds\cdot\tilde{E}[Y_T\,|\,\mathcal{F}_t] + a\int_t^T \tilde{E}\Big[\frac{Y_T}{Y_s}\,\Big|\,\mathcal{F}_t\Big]\,ds$
$= S_0Y_t\tilde{E}[Y_{T-t}] + \int_0^t \frac{a}{Y_s}\,ds\cdot Y_t\tilde{E}[Y_{T-t}] + a\int_t^T \tilde{E}[Y_{T-s}]\,ds$
$= \Big(S_0 + \int_0^t \frac{a\,ds}{Y_s}\Big)Y_te^{r(T-t)} + a\int_t^T e^{r(T-s)}\,ds$
$= \Big(S_0 + \int_0^t \frac{a\,ds}{Y_s}\Big)Y_te^{r(T-t)} - \frac{a}{r}\big(1 - e^{r(T-t)}\big).$

In particular, $\tilde{E}[S_T] = S_0e^{rT} - \frac{a}{r}(1 - e^{rT})$.
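This closed-form mean can be checked against a direct simulation of (5.9.7), $dS_t = rS_t\,dt + a\,dt + \sigma S_t\,d\tilde{W}_t$. The sketch below is my own illustration; all numerical parameter values (including a negative cost-of-carry rate $a$) are arbitrary sample choices.

```python
import numpy as np

def mean_S_T(S0=100.0, r=0.05, sigma=0.2, a=-3.0, T=1.0,
             n_steps=500, n_paths=200_000, seed=1):
    """Euler scheme for dS = r*S dt + a dt + sigma*S dW under the
    risk-neutral measure of Exercise 5.14 (a = continuous cost of carry)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        S = S + r * S * dt + a * dt + sigma * S * dW
    return S.mean()

est = mean_S_T()
# closed form from part (iii): E[S_T] = S0*exp(rT) - (a/r)*(1 - exp(rT))
exact = 100.0 * np.exp(0.05) - (-3.0 / 0.05) * (1 - np.exp(0.05))
```

Because the drift is linear in $S$, the Euler scheme is essentially unbiased for the mean, so the Monte Carlo estimate should sit within sampling error of the closed form.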

(iv)

Proof.

$d\tilde{E}[S_T\,|\,\mathcal{F}_t] = -ae^{r(T-t)}\,dt + \frac{a}{Y_t}\,dt\cdot Y_te^{r(T-t)} + \Big(S_0 + \int_0^t \frac{a\,ds}{Y_s}\Big)\big(e^{r(T-t)}\,dY_t - rY_te^{r(T-t)}\,dt\big) = \Big(S_0 + \int_0^t \frac{a\,ds}{Y_s}\Big)e^{r(T-t)}\sigma Y_t\,d\tilde{W}_t.$

So $\tilde{E}[S_T\,|\,\mathcal{F}_t]$ is a $\tilde{P}$-martingale. As we have argued at the beginning of the solution, risk-neutral pricing is valid even in the presence of cost of carry. So by an argument similar to that of §5.6.2, the process $\tilde{E}[S_T\,|\,\mathcal{F}_t]$ is the futures price process for the commodity.

(v)

Proof. We solve the equation $\tilde{E}[e^{-r(T-t)}(S_T - K)\,|\,\mathcal{F}_t] = 0$ for $K$, and get $K = \tilde{E}[S_T\,|\,\mathcal{F}_t]$. So $\mathrm{For}_S(t,T) = \mathrm{Fut}_S(t,T)$.

(vi)

Proof. We follow the hint. First, we solve the SDE

$dX_t = dS_t - a\,dt + r(X_t - S_t)\,dt,\qquad X_0 = 0.$

By our analysis in part (i), $d(e^{-rt}X_t) = d(e^{-rt}S_t) - ae^{-rt}\,dt$. Integrating from 0 to $t$ on both sides, we get

$X_t = S_t - S_0e^{rt} + \frac{a}{r}(1 - e^{rt}) = S_t - S_0e^{rt} - \frac{a}{r}(e^{rt} - 1).$

In particular, $X_T = S_T - S_0e^{rT} - \frac{a}{r}(e^{rT} - 1)$. Meanwhile,

$\mathrm{For}_S(t,T) = \mathrm{Fut}_S(t,T) = \tilde{E}[S_T\,|\,\mathcal{F}_t] = \Big(S_0 + \int_0^t \frac{a\,ds}{Y_s}\Big)Y_te^{r(T-t)} - \frac{a}{r}\big(1 - e^{r(T-t)}\big).$

So $\mathrm{For}_S(0,T) = S_0e^{rT} - \frac{a}{r}(1 - e^{rT})$ and hence $X_T = S_T - \mathrm{For}_S(0,T)$. After the agent delivers the commodity, whose value is $S_T$, and receives the forward price $\mathrm{For}_S(0,T)$, the portfolio has exactly zero value.

6. Connections with Partial Differential Equations

6.1. (i)

Proof. $Z_t = 1$ is obvious. Note the form of $Z$ is similar to that of a geometric Brownian motion. So by Itô's formula, it is easy to obtain $dZ_u = b_uZ_u\,du + \sigma_uZ_u\,dW_u$, $u\ge t$.

(ii)

Proof. If $X_u = Y_uZ_u$ ($u\ge t$), then $X_t = Y_tZ_t = x\cdot 1 = x$ and

$dX_u = Y_u\,dZ_u + Z_u\,dY_u + dY_u\,dZ_u$
$= Y_u(b_uZ_u\,du + \sigma_uZ_u\,dW_u) + Z_u\Big(\frac{a_u - \sigma_u\gamma_u}{Z_u}\,du + \frac{\gamma_u}{Z_u}\,dW_u\Big) + \frac{\gamma_u}{Z_u}\sigma_uZ_u\,du$
$= [b_uX_u + (a_u - \sigma_u\gamma_u) + \sigma_u\gamma_u]\,du + (\sigma_uX_u + \gamma_u)\,dW_u$
$= (b_uX_u + a_u)\,du + (\sigma_uX_u + \gamma_u)\,dW_u.$

Remark: To see how to find the above solution, we manipulate the equation (6.2.4) as follows. First, to remove the term $b_uX_u\,du$, we multiply on both sides of (6.2.4) the integrating factor $e^{-\int_t^u b_v\,dv}$. Then

$d\big(X_ue^{-\int_t^u b_v\,dv}\big) = e^{-\int_t^u b_v\,dv}\big(a_u\,du + (\gamma_u + \sigma_uX_u)\,dW_u\big).$

Let $\bar{X}_u = e^{-\int_t^u b_v\,dv}X_u$, $\bar{a}_u = e^{-\int_t^u b_v\,dv}a_u$ and $\bar{\gamma}_u = e^{-\int_t^u b_v\,dv}\gamma_u$; then $\bar{X}$ satisfies the SDE

$d\bar{X}_u = \bar{a}_u\,du + (\bar{\gamma}_u + \sigma_u\bar{X}_u)\,dW_u = (\bar{a}_u\,du + \bar{\gamma}_u\,dW_u) + \sigma_u\bar{X}_u\,dW_u.$

To deal with the term $\sigma_u\bar{X}_u\,dW_u$, we consider $\hat{X}_u = \bar{X}_ue^{-\int_t^u \sigma_v\,dW_v}$. Then

$d\hat{X}_u = e^{-\int_t^u \sigma_v\,dW_v}\big[(\bar{a}_u\,du + \bar{\gamma}_u\,dW_u) + \sigma_u\bar{X}_u\,dW_u\big] + \bar{X}_ue^{-\int_t^u \sigma_v\,dW_v}\Big[(-\sigma_u)\,dW_u + \frac12\sigma_u^2\,du\Big] + (\bar{\gamma}_u + \sigma_u\bar{X}_u)(-\sigma_u)e^{-\int_t^u \sigma_v\,dW_v}\,du$
$= \hat{a}_u\,du + \hat{\gamma}_u\,dW_u + \sigma_u\hat{X}_u\,dW_u - \sigma_u\hat{X}_u\,dW_u + \frac12\hat{X}_u\sigma_u^2\,du - \sigma_u(\hat{\gamma}_u + \sigma_u\hat{X}_u)\,du$
$= \Big(\hat{a}_u - \sigma_u\hat{\gamma}_u - \frac12\hat{X}_u\sigma_u^2\Big)\,du + \hat{\gamma}_u\,dW_u,$

where $\hat{a}_u = \bar{a}_ue^{-\int_t^u \sigma_v\,dW_v}$ and $\hat{\gamma}_u = \bar{\gamma}_ue^{-\int_t^u \sigma_v\,dW_v}$. Finally, using the integrating factor $e^{\frac12\int_t^u \sigma_v^2\,dv}$, we have

$d\Big(\hat{X}_ue^{\frac12\int_t^u \sigma_v^2\,dv}\Big) = e^{\frac12\int_t^u \sigma_v^2\,dv}\Big(d\hat{X}_u + \hat{X}_u\frac12\sigma_u^2\,du\Big) = e^{\frac12\int_t^u \sigma_v^2\,dv}\big[(\hat{a}_u - \sigma_u\hat{\gamma}_u)\,du + \hat{\gamma}_u\,dW_u\big].$

Writing everything back into the original $X$, $a$ and $\gamma$, we get

$d\Big(X_ue^{-\int_t^u b_v\,dv - \int_t^u \sigma_v\,dW_v + \frac12\int_t^u \sigma_v^2\,dv}\Big) = e^{\frac12\int_t^u \sigma_v^2\,dv - \int_t^u \sigma_v\,dW_v - \int_t^u b_v\,dv}\big[(a_u - \sigma_u\gamma_u)\,du + \gamma_u\,dW_u\big],$

i.e. $d\big(\frac{X_u}{Z_u}\big) = \frac{1}{Z_u}[(a_u - \sigma_u\gamma_u)\,du + \gamma_u\,dW_u] = dY_u$. This inspired us to try $X_u = Y_uZ_u$.

6.2. (i)

Proof. The portfolio is self-financing, so for any $t\le T_1$, we have

$dX_t = \Delta_1(t)\,df(t,R_t,T_1) + \Delta_2(t)\,df(t,R_t,T_2) + R_t\big(X_t - \Delta_1(t)f(t,R_t,T_1) - \Delta_2(t)f(t,R_t,T_2)\big)\,dt,$

and

$d(D_tX_t) = -R_tD_tX_t\,dt + D_t\,dX_t$
$= D_t\big[\Delta_1(t)\,df(t,R_t,T_1) + \Delta_2(t)\,df(t,R_t,T_2) - R_t\big(\Delta_1(t)f(t,R_t,T_1) + \Delta_2(t)f(t,R_t,T_2)\big)\,dt\big]$
$= D_t\Big[\Delta_1(t)\Big(f_t(t,R_t,T_1)\,dt + f_r(t,R_t,T_1)\,dR_t + \frac12 f_{rr}(t,R_t,T_1)\gamma^2(t,R_t)\,dt\Big)$
$\quad + \Delta_2(t)\Big(f_t(t,R_t,T_2)\,dt + f_r(t,R_t,T_2)\,dR_t + \frac12 f_{rr}(t,R_t,T_2)\gamma^2(t,R_t)\,dt\Big)$
$\quad - R_t\big(\Delta_1(t)f(t,R_t,T_1) + \Delta_2(t)f(t,R_t,T_2)\big)\,dt\Big]$
$= \Delta_1(t)D_t\Big[-R_tf(t,R_t,T_1) + f_t(t,R_t,T_1) + \alpha(t,R_t)f_r(t,R_t,T_1) + \frac12\gamma^2(t,R_t)f_{rr}(t,R_t,T_1)\Big]\,dt$
$\quad + \Delta_2(t)D_t\Big[-R_tf(t,R_t,T_2) + f_t(t,R_t,T_2) + \alpha(t,R_t)f_r(t,R_t,T_2) + \frac12\gamma^2(t,R_t)f_{rr}(t,R_t,T_2)\Big]\,dt$
$\quad + D_t\gamma(t,R_t)\big[\Delta_1(t)f_r(t,R_t,T_1) + \Delta_2(t)f_r(t,R_t,T_2)\big]\,dW_t$
$= \Delta_1(t)D_t\big[\alpha(t,R_t) - \beta(t,R_t,T_1)\big]f_r(t,R_t,T_1)\,dt + \Delta_2(t)D_t\big[\alpha(t,R_t) - \beta(t,R_t,T_2)\big]f_r(t,R_t,T_2)\,dt$
$\quad + D_t\gamma(t,R_t)\big[\Delta_1(t)f_r(t,R_t,T_1) + \Delta_2(t)f_r(t,R_t,T_2)\big]\,dW_t.$

(ii)

Proof. Let $\Delta_1(t) = S_tf_r(t,R_t,T_2)$ and $\Delta_2(t) = -S_tf_r(t,R_t,T_1)$, where $S_t$ denotes the sign of $[\beta(t,R_t,T_2) - \beta(t,R_t,T_1)]f_r(t,R_t,T_1)f_r(t,R_t,T_2)$; then the $dW_t$ term cancels and

$d(D_tX_t) = D_tS_t\big[\beta(t,R_t,T_2) - \beta(t,R_t,T_1)\big]f_r(t,R_t,T_1)f_r(t,R_t,T_2)\,dt$
$= D_t\big|\big[\beta(t,R_t,T_1) - \beta(t,R_t,T_2)\big]f_r(t,R_t,T_1)f_r(t,R_t,T_2)\big|\,dt.$

Integrating from 0 to $T$ on both sides of the above equation, we get

$D_TX_T - D_0X_0 = \int_0^T D_t\big|\big[\beta(t,R_t,T_1) - \beta(t,R_t,T_2)\big]f_r(t,R_t,T_1)f_r(t,R_t,T_2)\big|\,dt.$

If $\beta(t,R_t,T_1) \ne \beta(t,R_t,T_2)$ for some $t\in[0,T]$, under the assumption that $f_r(t,r,T)\ne 0$ for all values of $r$ and $0\le t\le T$, $D_TX_T - D_0X_0 > 0$. To avoid arbitrage (see, for example, Exercise 5.7), we must have, for a.s. $\omega$, $\beta(t,R_t,T_1) = \beta(t,R_t,T_2)$, $\forall t\in[0,T]$. This implies $\beta(t,r,T)$ does not depend on $T$.

(iii)

Proof. In (6.9.4), let $\Delta_1(t) = \Delta(t)$, $T_1 = T$ and $\Delta_2(t) = 0$; we get

$d(D_tX_t) = \Delta(t)D_t\Big[-R_tf(t,R_t,T) + f_t(t,R_t,T) + \alpha(t,R_t)f_r(t,R_t,T) + \frac12\gamma^2(t,R_t)f_{rr}(t,R_t,T)\Big]\,dt + D_t\gamma(t,R_t)\Delta(t)f_r(t,R_t,T)\,dW_t.$

This is formula (6.9.5).

If $f_r(t,r,T) = 0$, then $d(D_tX_t) = \Delta(t)D_t\big[-R_tf(t,R_t,T) + f_t(t,R_t,T) + \frac12\gamma^2(t,R_t)f_{rr}(t,R_t,T)\big]\,dt$. We choose $\Delta(t) = \operatorname{sign}\big\{-R_tf(t,R_t,T) + f_t(t,R_t,T) + \frac12\gamma^2(t,R_t)f_{rr}(t,R_t,T)\big\}$, so that $d(D_tX_t)\ge 0$. To avoid arbitrage in this case, we must have $f_t(t,R_t,T) + \frac12\gamma^2(t,R_t)f_{rr}(t,R_t,T) = R_tf(t,R_t,T)$, or equivalently, for any $r$ in the range of $R_t$,

$f_t(t,r,T) + \frac12\gamma^2(t,r)f_{rr}(t,r,T) = rf(t,r,T).$

6.3.

Proof. We note

$\frac{d}{ds}\Big(e^{-\int_0^s b_v\,dv}C(s,T)\Big) = e^{-\int_0^s b_v\,dv}\big[C'(s,T) - b_sC(s,T)\big] = -e^{-\int_0^s b_v\,dv}.$

So, integrating on both sides of the equation from $t$ to $T$, we obtain

$e^{-\int_0^T b_v\,dv}C(T,T) - e^{-\int_0^t b_v\,dv}C(t,T) = -\int_t^T e^{-\int_0^s b_v\,dv}\,ds.$

Since $C(T,T) = 0$, we have $C(t,T) = e^{\int_0^t b_v\,dv}\int_t^T e^{-\int_0^s b_v\,dv}\,ds = \int_t^T e^{-\int_t^s b_v\,dv}\,ds$. Finally, by $A'(s,T) = -a(s)C(s,T) + \frac12\sigma^2(s)C^2(s,T)$, we get

$A(T,T) - A(t,T) = -\int_t^T a(s)C(s,T)\,ds + \frac12\int_t^T \sigma^2(s)C^2(s,T)\,ds.$

Since $A(T,T) = 0$, we have $A(t,T) = \int_t^T \big(a(s)C(s,T) - \frac12\sigma^2(s)C^2(s,T)\big)\,ds.$
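The formula $C(t,T) = \int_t^T e^{-\int_t^s b_v\,dv}\,ds$ can be verified numerically against the ODE $C'(t,T) = b(t)C(t,T) - 1$, $C(T,T) = 0$, that it is meant to solve. The sketch below is my own check; the particular function $b(t)$ and the quadrature resolution are arbitrary choices.

```python
import math

# time-dependent mean-reversion speed (an arbitrary sample choice)
b = lambda t: 0.1 + 0.05 * math.sin(t)

def integral(f, lo, hi, n=200):
    """Composite trapezoid rule on [lo, hi]."""
    if hi == lo:
        return 0.0
    h = (hi - lo) / n
    return h * (0.5 * f(lo) + sum(f(lo + i * h) for i in range(1, n)) + 0.5 * f(hi))

def C(t, T):
    """C(t,T) = int_t^T exp(-int_t^s b(v) dv) ds."""
    inner = lambda s: math.exp(-integral(b, t, s))
    return integral(inner, t, T)

t, T, eps = 0.5, 2.0, 1e-4
dC = (C(t + eps, T) - C(t - eps, T)) / (2 * eps)   # numerical dC/dt
residual = dC - (b(t) * C(t, T) - 1.0)             # should be close to 0
```

The central-difference derivative reproduces $b(t)C(t,T) - 1$ up to quadrature error, and $C(T,T) = 0$ holds exactly.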

6.4. (i)

Proof. By the definition of $\varphi$, we have

$\varphi'(t) = e^{\frac12\sigma^2\int_t^T C(u,T)\,du}\cdot\frac12\sigma^2(-1)C(t,T) = -\frac12\sigma^2\varphi(t)C(t,T).$

So $C(t,T) = -\frac{2\varphi'(t)}{\sigma^2\varphi(t)}$. Differentiating both sides of the equation $\varphi'(t) = -\frac12\sigma^2\varphi(t)C(t,T)$, we get

$\varphi''(t) = -\frac12\sigma^2\big[\varphi'(t)C(t,T) + \varphi(t)C'(t,T)\big] = -\frac12\sigma^2\Big[-\frac12\sigma^2\varphi(t)C^2(t,T) + \varphi(t)C'(t,T)\Big] = \frac14\sigma^4\varphi(t)C^2(t,T) - \frac12\sigma^2\varphi(t)C'(t,T).$

So $C'(t,T) = \Big[\frac14\sigma^4\varphi(t)C^2(t,T) - \varphi''(t)\Big]\Big/\Big(\frac12\sigma^2\varphi(t)\Big) = \frac12\sigma^2C^2(t,T) - \frac{2\varphi''(t)}{\sigma^2\varphi(t)}.$

(ii)

Proof. Plugging formulas (6.9.8) and (6.9.9) into (6.5.14), we get

$-\frac{2\varphi''(t)}{\sigma^2\varphi(t)} + \frac12\sigma^2C^2(t,T) = b(-1)\frac{2\varphi'(t)}{\sigma^2\varphi(t)} + \frac12\sigma^2C^2(t,T) - 1,$

i.e. $\varphi''(t) - b\varphi'(t) - \frac12\sigma^2\varphi(t) = 0.$

(iii)

Proof. The characteristic equation of $\varphi''(t) - b\varphi'(t) - \frac12\sigma^2\varphi(t) = 0$ is $\lambda^2 - b\lambda - \frac12\sigma^2 = 0$, which gives two roots $\frac12\big(b \pm \sqrt{b^2 + 2\sigma^2}\big) = \frac12 b \pm \gamma$ with $\gamma = \frac12\sqrt{b^2 + 2\sigma^2}$. Therefore by the standard theory of ordinary differential equations, a general solution of $\varphi$ is $\varphi(t) = e^{\frac12 bt}\big(a_1e^{\gamma t} + a_2e^{-\gamma t}\big)$ for some constants $a_1$ and $a_2$. It is then easy to see that we can choose appropriate constants $c_1$ and $c_2$ so that

$\varphi(t) = \frac{c_1}{\frac12 b + \gamma}e^{-(\frac12 b + \gamma)(T-t)} - \frac{c_2}{\frac12 b - \gamma}e^{-(\frac12 b - \gamma)(T-t)}.$

(iv)

Proof. From part (iii), it is easy to see $\varphi'(t) = c_1e^{-(\frac12 b+\gamma)(T-t)} - c_2e^{-(\frac12 b-\gamma)(T-t)}$. In particular,

$0 = C(T,T) = -\frac{2\varphi'(T)}{\sigma^2\varphi(T)} = -\frac{2(c_1 - c_2)}{\sigma^2\varphi(T)}.$

So $c_1 = c_2$.

(v)

Proof. We first recall the definitions and properties of $\sinh$ and $\cosh$:

$\sinh z = \frac{e^z - e^{-z}}{2},\qquad \cosh z = \frac{e^z + e^{-z}}{2},\qquad (\sinh z)' = \cosh z,\qquad (\cosh z)' = \sinh z.$

Therefore, writing $\tau = T - t$ and using $\frac14 b^2 - \gamma^2 = -\frac12\sigma^2$,

$\varphi(t) = c_1e^{-\frac12 b\tau}\Big[\frac{e^{-\gamma\tau}}{\frac12 b + \gamma} - \frac{e^{\gamma\tau}}{\frac12 b - \gamma}\Big] = \frac{c_1e^{-\frac12 b\tau}}{\frac14 b^2 - \gamma^2}\Big[\Big(\frac12 b - \gamma\Big)e^{-\gamma\tau} - \Big(\frac12 b + \gamma\Big)e^{\gamma\tau}\Big] = \frac{2c_1}{\sigma^2}e^{-\frac12 b\tau}\big[b\sinh(\gamma\tau) + 2\gamma\cosh(\gamma\tau)\big],$

and

$\varphi'(t) = \frac{2c_1}{\sigma^2}e^{-\frac12 b\tau}\Big[\frac12 b\big(b\sinh(\gamma\tau) + 2\gamma\cosh(\gamma\tau)\big) - \gamma\big(b\cosh(\gamma\tau) + 2\gamma\sinh(\gamma\tau)\big)\Big] = \frac{2c_1}{\sigma^2}e^{-\frac12 b\tau}\Big(\frac12 b^2 - 2\gamma^2\Big)\sinh(\gamma\tau) = -2c_1e^{-\frac12 b\tau}\sinh(\gamma\tau),$

since $\frac12 b^2 - 2\gamma^2 = -\sigma^2$. This implies

$C(t,T) = -\frac{2\varphi'(t)}{\sigma^2\varphi(t)} = \frac{\sinh(\gamma(T-t))}{\gamma\cosh(\gamma(T-t)) + \frac12 b\sinh(\gamma(T-t))}.$

(vi)

Proof. By (6.5.15) and (6.9.8), $A'(t,T) = -aC(t,T) = \frac{2a\varphi'(t)}{\sigma^2\varphi(t)}$. Hence

$A(T,T) - A(t,T) = \int_t^T \frac{2a\varphi'(s)}{\sigma^2\varphi(s)}\,ds = \frac{2a}{\sigma^2}\ln\frac{\varphi(T)}{\varphi(t)},$

and

$A(t,T) = -\frac{2a}{\sigma^2}\ln\frac{\varphi(T)}{\varphi(t)} = -\frac{2a}{\sigma^2}\ln\Big[\frac{\gamma e^{\frac12 b(T-t)}}{\gamma\cosh(\gamma(T-t)) + \frac12 b\sinh(\gamma(T-t))}\Big].$
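The closed form for $C(t,T)$ derived in part (v) can be checked against the CIR Riccati equation (6.5.14), $C'(t,T) = bC(t,T) + \frac12\sigma^2C^2(t,T) - 1$ with $C(T,T) = 0$, by finite differences. This is my own numerical check with arbitrary sample parameters.

```python
import math

b, sigma = 0.3, 0.25                      # sample CIR parameters
gamma = 0.5 * math.sqrt(b * b + 2 * sigma * sigma)

def C(t, T):
    """Closed form from Exercise 6.4(v):
    C = sinh(g*tau) / (g*cosh(g*tau) + 0.5*b*sinh(g*tau)), tau = T - t."""
    tau = T - t
    return math.sinh(gamma * tau) / (gamma * math.cosh(gamma * tau)
                                     + 0.5 * b * math.sinh(gamma * tau))

t, T, eps = 0.2, 3.0, 1e-5
dC = (C(t + eps, T) - C(t - eps, T)) / (2 * eps)           # numerical dC/dt
residual = dC - (b * C(t, T) + 0.5 * sigma**2 * C(t, T)**2 - 1.0)
```

The residual is at the level of the finite-difference error, and the terminal condition $C(T,T) = 0$ holds exactly since $\sinh(0) = 0$.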

6.5. (i)

Proof. Since $g(t,X_1(t),X_2(t)) = E[h(X_1(T),X_2(T))\,|\,\mathcal{F}_t]$ and $e^{-rt}f(t,X_1(t),X_2(t)) = E[e^{-rT}h(X_1(T),X_2(T))\,|\,\mathcal{F}_t]$, the iterated conditioning argument shows $g(t,X_1(t),X_2(t))$ and $e^{-rt}f(t,X_1(t),X_2(t))$ are both martingales.

(ii) and (iii)

Proof. We note

$dg(t,X_1(t),X_2(t)) = g_t\,dt + g_{x_1}\,dX_1(t) + g_{x_2}\,dX_2(t) + \frac12 g_{x_1x_1}\,dX_1(t)\,dX_1(t) + \frac12 g_{x_2x_2}\,dX_2(t)\,dX_2(t) + g_{x_1x_2}\,dX_1(t)\,dX_2(t)$
$= \Big[g_t + g_{x_1}\beta_1 + g_{x_2}\beta_2 + \frac12 g_{x_1x_1}\big(\gamma_{11}^2 + \gamma_{12}^2 + 2\rho\gamma_{11}\gamma_{12}\big) + g_{x_1x_2}\big(\gamma_{11}\gamma_{21} + \rho\gamma_{11}\gamma_{22} + \rho\gamma_{12}\gamma_{21} + \gamma_{12}\gamma_{22}\big)$
$\quad + \frac12 g_{x_2x_2}\big(\gamma_{21}^2 + \gamma_{22}^2 + 2\rho\gamma_{21}\gamma_{22}\big)\Big]\,dt + \text{martingale part}.$

So we must have

$g_t + g_{x_1}\beta_1 + g_{x_2}\beta_2 + \frac12 g_{x_1x_1}\big(\gamma_{11}^2 + \gamma_{12}^2 + 2\rho\gamma_{11}\gamma_{12}\big) + g_{x_1x_2}\big(\gamma_{11}\gamma_{21} + \rho\gamma_{11}\gamma_{22} + \rho\gamma_{12}\gamma_{21} + \gamma_{12}\gamma_{22}\big) + \frac12 g_{x_2x_2}\big(\gamma_{21}^2 + \gamma_{22}^2 + 2\rho\gamma_{21}\gamma_{22}\big) = 0.$

Taking $\rho = 0$ will give part (ii) as a special case. The PDE for $f$ can be similarly obtained.

6.6. (i)

Proof. Multiplying $e^{\frac12 bt}$ on both sides of (6.9.15), we get

$d\big(e^{\frac12 bt}X_j(t)\big) = e^{\frac12 bt}\Big[X_j(t)\frac12 b\,dt + \Big(-\frac{b}{2}X_j(t)\,dt + \frac12\sigma\,dW_j(t)\Big)\Big] = e^{\frac12 bt}\frac12\sigma\,dW_j(t).$

So $e^{\frac12 bt}X_j(t) - X_j(0) = \frac{\sigma}{2}\int_0^t e^{\frac12 bu}\,dW_j(u)$ and $X_j(t) = e^{-\frac12 bt}\Big[X_j(0) + \frac{\sigma}{2}\int_0^t e^{\frac12 bu}\,dW_j(u)\Big]$. By Theorem 4.4.9, $X_j(t)$ is normally distributed with mean $X_j(0)e^{-\frac12 bt}$ and variance

$\frac{e^{-bt}\sigma^2}{4}\int_0^t e^{bu}\,du = \frac{\sigma^2}{4b}\big(1 - e^{-bt}\big).$

(ii)

Proof. Suppose $R(t) = \sum_{j=1}^d X_j^2(t)$; then

$dR(t) = \sum_{j=1}^d \big(2X_j(t)\,dX_j(t) + dX_j(t)\,dX_j(t)\big) = \sum_{j=1}^d \Big(2X_j(t)\,dX_j(t) + \frac14\sigma^2\,dt\Big) = \sum_{j=1}^d \Big(-bX_j^2(t)\,dt + \sigma X_j(t)\,dW_j(t) + \frac14\sigma^2\,dt\Big)$
$= \Big(\frac{d\sigma^2}{4} - bR(t)\Big)\,dt + \sigma\sqrt{R(t)}\sum_{j=1}^d \frac{X_j(t)}{\sqrt{R(t)}}\,dW_j(t).$

Let $B(t) = \sum_{j=1}^d \int_0^t \frac{X_j(s)}{\sqrt{R(s)}}\,dW_j(s)$; then $B$ is a local martingale with $dB(t)\,dB(t) = \sum_{j=1}^d \frac{X_j^2(t)}{R(t)}\,dt = dt$. So by Lévy's Theorem, $B$ is a Brownian motion. Therefore $dR(t) = (a - bR(t))\,dt + \sigma\sqrt{R(t)}\,dB(t)$ (with $a := \frac{d\sigma^2}{4}$) and $R$ is a CIR interest rate process.
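This sum-of-squared-OU construction is easy to exercise numerically: simulate $d$ copies of $dX_j = -\frac{b}{2}X_j\,dt + \frac{\sigma}{2}\,dW_j$ and compare the sample mean of $R(T) = \sum_j X_j^2(T)$ with the CIR mean $\frac{a}{b} + (R(0) - \frac{a}{b})e^{-bT}$, $a = \frac{d\sigma^2}{4}$. The sketch below is my own illustration; the parameters, the equal starting values $X_j(0)$, and the seed are arbitrary choices.

```python
import numpy as np

def cir_from_ou(d=4, b=0.5, sigma=0.3, x0=0.4, T=2.0,
                n_steps=400, n_paths=50_000, seed=2):
    """Sum of squares of d OU processes dX_j = -(b/2) X_j dt + (sigma/2) dW_j,
    all started at x0, is a CIR process with a = d*sigma^2/4."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full((n_paths, d), x0)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), (n_paths, d))
        X += -0.5 * b * X * dt + 0.5 * sigma * dW
    return (X ** 2).sum(axis=1).mean()

est = cir_from_ou()
# CIR mean: E[R(T)] = a/b + (R(0) - a/b)*exp(-b*T), with a = d*sigma^2/4
a, b_, T = 4 * 0.3**2 / 4, 0.5, 2.0
R0 = 4 * 0.4**2
exact = a / b_ + (R0 - a / b_) * np.exp(-b_ * T)
```

The Monte Carlo mean should agree with the CIR mean up to sampling and discretization error.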

(iii)

Proof. By (6.9.16), $X_j(t)$ is dependent on $W_j$ only and is normally distributed with mean $e^{-\frac12 bt}X_j(0)$ and variance $\frac{\sigma^2}{4b}\big[1 - e^{-bt}\big]$. So $X_1(t),\dots,X_d(t)$ are i.i.d. normal with the same mean $\mu(t)$ and variance $v(t)$.

(iv)

Proof.

$E\big[e^{uX_j^2(t)}\big] = \int_{-\infty}^{\infty} e^{ux^2}\,\frac{e^{-\frac{(x-\mu(t))^2}{2v(t)}}}{\sqrt{2\pi v(t)}}\,dx = \int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi v(t)}}\,e^{-\frac{(1-2uv(t))x^2 - 2\mu(t)x + \mu^2(t)}{2v(t)}}\,dx$
$= \int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi v(t)}}\exp\Big\{-\frac{\big(x - \frac{\mu(t)}{1-2uv(t)}\big)^2 + \frac{\mu^2(t)}{1-2uv(t)} - \frac{\mu^2(t)}{(1-2uv(t))^2}}{2v(t)/(1-2uv(t))}\Big\}\,dx$
$= \frac{1}{\sqrt{1-2uv(t)}}\int_{-\infty}^{\infty}\frac{\sqrt{1-2uv(t)}}{\sqrt{2\pi v(t)}}\,e^{-\frac{(x - \frac{\mu(t)}{1-2uv(t)})^2}{2v(t)/(1-2uv(t))}}\,dx\cdot e^{\frac{\mu^2(t)(1-2uv(t)) - \mu^2(t)(1-2uv(t))^2}{2v(t)(1-2uv(t))^2}}$
$= \frac{e^{\frac{u\mu^2(t)}{1-2uv(t)}}}{\sqrt{1-2uv(t)}}.$
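The moment-generating-function identity $E[e^{uX^2}] = e^{u\mu^2/(1-2uv)}/\sqrt{1-2uv}$ for $X\sim N(\mu,v)$, $u < \frac{1}{2v}$, can be checked by brute-force numerical integration. This is my own verification; the sample values of $u$, $\mu$, $v$ are arbitrary.

```python
import math

def mgf_squared_normal(u, mu, v, n=50_000, lo=-10.0, hi=10.0):
    """Numerically integrate E[exp(u X^2)] for X ~ N(mu, v) (needs u < 1/(2v));
    the integrand is negligible outside [lo, hi] for these parameters."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid weights
        total += w * math.exp(u * x * x) * math.exp(-(x - mu)**2 / (2 * v))
    return total * h / math.sqrt(2 * math.pi * v)

u, mu, v = -0.7, 0.3, 0.5                 # sample values with 1 - 2uv > 0
closed = math.exp(u * mu**2 / (1 - 2 * u * v)) / math.sqrt(1 - 2 * u * v)
numeric = mgf_squared_normal(u, mu, v)
```

Trapezoid integration at this resolution matches the closed form to several significant digits.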

(v)

Proof. By $R(t) = \sum_{j=1}^d X_j^2(t)$ and the fact that $X_1(t),\dots,X_d(t)$ are i.i.d.,

$E[e^{uR(t)}] = \big(E[e^{uX_1^2(t)}]\big)^d = (1-2uv(t))^{-\frac{d}{2}}\,e^{\frac{du\mu^2(t)}{1-2uv(t)}} = (1-2uv(t))^{-\frac{2a}{\sigma^2}}\,e^{\frac{ue^{-bt}R(0)}{1-2uv(t)}}.$

6.7. (i)

Proof. $e^{-rt}c(t,S_t,V_t) = \tilde{E}[e^{-rT}(S_T - K)^+\,|\,\mathcal{F}_t]$ is a martingale by the iterated conditioning argument. Since

$d\big(e^{-rt}c(t,S_t,V_t)\big) = e^{-rt}\Big[-rc(t,S_t,V_t) + c_t(t,S_t,V_t) + c_s(t,S_t,V_t)rS_t + c_v(t,S_t,V_t)(a - bV_t) + \frac12 c_{ss}(t,S_t,V_t)V_tS_t^2 + \frac12 c_{vv}(t,S_t,V_t)\sigma^2V_t + c_{sv}(t,S_t,V_t)\rho\sigma V_tS_t\Big]\,dt + \text{martingale part},$

we conclude $rc = c_t + rsc_s + c_v(a - bv) + \frac12 c_{ss}vs^2 + \frac12 c_{vv}\sigma^2v + \rho\sigma c_{sv}sv$. This is equation (6.9.26).

(ii)

Proof. Suppose $c(t,s,v) = sf(t,\log s,v) - e^{-r(T-t)}Kg(t,\log s,v)$, where the subscript $x$ denotes differentiation in the second (log-price) argument. Then

$c_t = sf_t(t,\log s,v) - re^{-r(T-t)}Kg(t,\log s,v) - e^{-r(T-t)}Kg_t(t,\log s,v),$
$c_s = f(t,\log s,v) + f_x(t,\log s,v) - e^{-r(T-t)}\frac{K}{s}g_x(t,\log s,v),$
$c_v = sf_v(t,\log s,v) - e^{-r(T-t)}Kg_v(t,\log s,v),$
$c_{ss} = \frac1s f_x(t,\log s,v) + \frac1s f_{xx}(t,\log s,v) - e^{-r(T-t)}\frac{K}{s^2}g_{xx}(t,\log s,v) + e^{-r(T-t)}\frac{K}{s^2}g_x(t,\log s,v),$
$c_{sv} = f_v(t,\log s,v) + f_{xv}(t,\log s,v) - e^{-r(T-t)}\frac{K}{s}g_{xv}(t,\log s,v),$
$c_{vv} = sf_{vv}(t,\log s,v) - e^{-r(T-t)}Kg_{vv}(t,\log s,v).$

So

$c_t + rsc_s + (a - bv)c_v + \frac12 s^2vc_{ss} + \rho\sigma svc_{sv} + \frac12\sigma^2vc_{vv}$
$= s\Big[f_t + \Big(r + \frac12 v\Big)f_x + (a - bv + \rho\sigma v)f_v + \frac12 vf_{xx} + \rho\sigma vf_{xv} + \frac12\sigma^2vf_{vv}\Big]$
$\quad - Ke^{-r(T-t)}\Big[g_t + \Big(r - \frac12 v\Big)g_x + (a - bv)g_v + \frac12 vg_{xx} + \rho\sigma vg_{xv} + \frac12\sigma^2vg_{vv}\Big] + rsf - re^{-r(T-t)}Kg$
$= rc,$

since the two brackets vanish by the PDEs (6.9.32) and (6.9.33) for $f$ and $g$. That is, $c$ satisfies the PDE (6.9.26).

(iii)

Proof. First, by the Markov property, $f(t,X_t,V_t) = \tilde{E}[\mathbf{1}_{\{X_T\ge\log K\}}\,|\,\mathcal{F}_t]$. So $f(T,X_T,V_T) = \mathbf{1}_{\{X_T\ge\log K\}}$, which implies $f(T,x,v) = \mathbf{1}_{\{x\ge\log K\}}$ for all $x\in\mathbb{R}$, $v\ge 0$. Second, $f(t,X_t,V_t)$ is a martingale, so by differentiating $f$ and setting the $dt$ term to zero, we have the PDE (6.9.32) for $f$. Indeed,

$df(t,X_t,V_t) = \Big[f_t(t,X_t,V_t) + f_x(t,X_t,V_t)\Big(r + \frac12 V_t\Big) + f_v(t,X_t,V_t)(a - bV_t + \rho\sigma V_t) + \frac12 f_{xx}(t,X_t,V_t)V_t + \frac12 f_{vv}(t,X_t,V_t)\sigma^2V_t + f_{xv}(t,X_t,V_t)\rho\sigma V_t\Big]\,dt + \text{martingale part}.$

So we must have $f_t + \big(r + \frac12 v\big)f_x + (a - bv + \rho\sigma v)f_v + \frac12 f_{xx}v + \frac12 f_{vv}\sigma^2v + \rho\sigma vf_{xv} = 0$. This is (6.9.32).

(iv)

Proof. Similar to (iii).

(v)

Proof. $c(T,s,v) = sf(T,\log s,v) - Kg(T,\log s,v) = s\mathbf{1}_{\{\log s\ge\log K\}} - K\mathbf{1}_{\{\log s\ge\log K\}} = (s - K)\mathbf{1}_{\{s\ge K\}} = (s - K)^+.$

6.8.

Proof. We follow the hint. Suppose $h$ is smooth and compactly supported; then it is legitimate to exchange integration and differentiation:

$g_t(t,x) = \frac{\partial}{\partial t}\int_0^\infty h(y)p(t,T,x,y)\,dy = \int_0^\infty h(y)p_t(t,T,x,y)\,dy,$
$g_x(t,x) = \int_0^\infty h(y)p_x(t,T,x,y)\,dy,\qquad g_{xx}(t,x) = \int_0^\infty h(y)p_{xx}(t,T,x,y)\,dy.$

So (6.9.45) implies $\int_0^\infty h(y)\big[p_t(t,T,x,y) + \beta(t,x)p_x(t,T,x,y) + \frac12\gamma^2(t,x)p_{xx}(t,T,x,y)\big]\,dy = 0$. By the arbitrariness of $h$ and assuming $\beta$, $p_t$, $p_x$, $\gamma$, $p_{xx}$ are all continuous, we have

$p_t(t,T,x,y) + \beta(t,x)p_x(t,T,x,y) + \frac12\gamma^2(t,x)p_{xx}(t,T,x,y) = 0.$

This is (6.9.43).

6.9.

Proof. We first note $dh_b(X_u) = h_b'(X_u)\,dX_u + \frac12 h_b''(X_u)\,dX_u\,dX_u = \big[h_b'(X_u)\beta(u,X_u) + \frac12\gamma^2(u,X_u)h_b''(X_u)\big]\,du + h_b'(X_u)\gamma(u,X_u)\,dW_u$. Integrating on both sides of the equation, we have

$h_b(X_T) - h_b(X_t) = \int_t^T \Big[h_b'(X_u)\beta(u,X_u) + \frac12\gamma^2(u,X_u)h_b''(X_u)\Big]\,du + \text{martingale part}.$

Taking expectation on both sides, we get

$E^{t,x}[h_b(X_T) - h_b(X_t)] = \int_{-\infty}^\infty h_b(y)p(t,T,x,y)\,dy - h_b(x)$
$= \int_t^T E^{t,x}\Big[h_b'(X_u)\beta(u,X_u) + \frac12\gamma^2(u,X_u)h_b''(X_u)\Big]\,du$
$= \int_t^T\int_{-\infty}^\infty \Big[h_b'(y)\beta(u,y) + \frac12\gamma^2(u,y)h_b''(y)\Big]p(t,u,x,y)\,dy\,du.$

Since $h_b$ vanishes outside $(0,b)$, the integration range can be changed from $(-\infty,\infty)$ to $(0,b)$, which gives (6.9.48).

By the integration-by-parts formula, we have

$\int_0^b \beta(u,y)p(t,u,x,y)h_b'(y)\,dy = h_b(y)\beta(u,y)p(t,u,x,y)\Big|_0^b - \int_0^b h_b(y)\frac{\partial}{\partial y}\big(\beta(u,y)p(t,u,x,y)\big)\,dy = -\int_0^b h_b(y)\frac{\partial}{\partial y}\big(\beta(u,y)p(t,u,x,y)\big)\,dy,$

and

$\int_0^b \gamma^2(u,y)p(t,u,x,y)h_b''(y)\,dy = -\int_0^b \frac{\partial}{\partial y}\big(\gamma^2(u,y)p(t,u,x,y)\big)h_b'(y)\,dy = \int_0^b \frac{\partial^2}{\partial y^2}\big(\gamma^2(u,y)p(t,u,x,y)\big)h_b(y)\,dy.$

Plugging these formulas into (6.9.48), we get (6.9.49).

Differentiating w.r.t. $T$ on both sides of (6.9.49), we have

$\int_0^b h_b(y)\frac{\partial}{\partial T}p(t,T,x,y)\,dy = -\int_0^b \frac{\partial}{\partial y}\big[\beta(T,y)p(t,T,x,y)\big]h_b(y)\,dy + \frac12\int_0^b \frac{\partial^2}{\partial y^2}\big[\gamma^2(T,y)p(t,T,x,y)\big]h_b(y)\,dy,$

that is,

$\int_0^b h_b(y)\Big[\frac{\partial}{\partial T}p(t,T,x,y) + \frac{\partial}{\partial y}\big(\beta(T,y)p(t,T,x,y)\big) - \frac12\frac{\partial^2}{\partial y^2}\big(\gamma^2(T,y)p(t,T,x,y)\big)\Big]\,dy = 0.$

This is (6.9.50).

By (6.9.50) and the arbitrariness of $h_b$, we conclude for any $y\in(0,\infty)$,

$\frac{\partial}{\partial T}p(t,T,x,y) + \frac{\partial}{\partial y}\big(\beta(T,y)p(t,T,x,y)\big) - \frac12\frac{\partial^2}{\partial y^2}\big(\gamma^2(T,y)p(t,T,x,y)\big) = 0.$

6.10.

Proof. Under the assumption that $\lim_{y\to\infty}(y-K)ry\,\tilde{p}(0,T,x,y) = 0$, we have

$-\int_K^\infty (y-K)\frac{\partial}{\partial y}\big(ry\,\tilde{p}(0,T,x,y)\big)\,dy = -(y-K)ry\,\tilde{p}(0,T,x,y)\Big|_K^\infty + \int_K^\infty ry\,\tilde{p}(0,T,x,y)\,dy = \int_K^\infty ry\,\tilde{p}(0,T,x,y)\,dy.$

If we further assume (6.9.57) and (6.9.58), then using the integration-by-parts formula twice, we have

$\frac12\int_K^\infty (y-K)\frac{\partial^2}{\partial y^2}\big(\sigma^2(T,y)y^2\tilde{p}(0,T,x,y)\big)\,dy$
$= \frac12\Big[(y-K)\frac{\partial}{\partial y}\big(\sigma^2(T,y)y^2\tilde{p}(0,T,x,y)\big)\Big|_K^\infty - \int_K^\infty \frac{\partial}{\partial y}\big(\sigma^2(T,y)y^2\tilde{p}(0,T,x,y)\big)\,dy\Big]$
$= -\frac12\big(\sigma^2(T,y)y^2\tilde{p}(0,T,x,y)\big)\Big|_K^\infty$
$= \frac12\sigma^2(T,K)K^2\tilde{p}(0,T,x,K).$

Therefore,

$c_T(0,T,x,K) = -rc(0,T,x,K) + e^{-rT}\int_K^\infty (y-K)\tilde{p}_T(0,T,x,y)\,dy$
$= -re^{-rT}\int_K^\infty (y-K)\tilde{p}(0,T,x,y)\,dy + e^{-rT}\int_K^\infty (y-K)\Big[-\frac{\partial}{\partial y}\big(ry\,\tilde{p}(0,T,x,y)\big) + \frac12\frac{\partial^2}{\partial y^2}\big(\sigma^2(T,y)y^2\tilde{p}(0,T,x,y)\big)\Big]\,dy$
$= -re^{-rT}\int_K^\infty (y-K)\tilde{p}(0,T,x,y)\,dy + e^{-rT}\int_K^\infty ry\,\tilde{p}(0,T,x,y)\,dy + e^{-rT}\frac12\sigma^2(T,K)K^2\tilde{p}(0,T,x,K)$
$= re^{-rT}K\int_K^\infty \tilde{p}(0,T,x,y)\,dy + \frac12 e^{-rT}\sigma^2(T,K)K^2\tilde{p}(0,T,x,K)$
$= -rKc_K(0,T,x,K) + \frac12\sigma^2(T,K)K^2c_{KK}(0,T,x,K).$

7. Exotic Options

7.1. (i)

Proof. Since $\delta_\pm(\tau,s) = \frac{1}{\sigma\sqrt\tau}\big[\log s + (r\pm\frac12\sigma^2)\tau\big] = \frac{\log s}{\sigma}\tau^{-\frac12} + \frac{r\pm\frac12\sigma^2}{\sigma}\tau^{\frac12}$,

$\frac{\partial}{\partial\tau}\delta_\pm(\tau,s) = \frac{\log s}{\sigma}\Big(-\frac12\Big)\tau^{-\frac32} + \frac{r\pm\frac12\sigma^2}{\sigma}\cdot\frac12\tau^{-\frac12} = \frac{1}{2\tau}\Big[-\frac{\log s}{\sigma\sqrt\tau} + \frac{r\pm\frac12\sigma^2}{\sigma}\sqrt\tau\Big] = \frac{1}{2\tau}\cdot\frac{1}{\sigma\sqrt\tau}\Big[\log\frac1s + \Big(r\pm\frac12\sigma^2\Big)\tau\Big] = \frac{1}{2\tau}\delta_\pm\Big(\tau,\frac1s\Big).$

(ii)

Proof.

$\frac{\partial}{\partial x}\delta_\pm\Big(\tau,\frac{x}{c}\Big) = \frac{\partial}{\partial x}\frac{1}{\sigma\sqrt\tau}\Big[\log\frac{x}{c} + \Big(r\pm\frac12\sigma^2\Big)\tau\Big] = \frac{1}{\sigma\sqrt\tau\,x},\qquad \frac{\partial}{\partial x}\delta_\pm\Big(\tau,\frac{c}{x}\Big) = \frac{\partial}{\partial x}\frac{1}{\sigma\sqrt\tau}\Big[\log\frac{c}{x} + \Big(r\pm\frac12\sigma^2\Big)\tau\Big] = -\frac{1}{\sigma\sqrt\tau\,x}.$

(iii)

Proof.

$N'(\delta_\pm(\tau,s)) = \frac{1}{\sqrt{2\pi}}e^{-\frac{\delta_\pm(\tau,s)^2}{2}} = \frac{1}{\sqrt{2\pi}}\exp\Big\{-\frac{(\log s + r\tau)^2 \pm \sigma^2\tau(\log s + r\tau) + \frac14\sigma^4\tau^2}{2\sigma^2\tau}\Big\}.$

Therefore

$\frac{N'(\delta_+(\tau,s))}{N'(\delta_-(\tau,s))} = e^{-\frac{2\sigma^2\tau(\log s + r\tau)}{2\sigma^2\tau}} = \frac{e^{-r\tau}}{s},$

and $e^{-r\tau}N'(\delta_-(\tau,s)) = sN'(\delta_+(\tau,s)).$
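This Gaussian-density identity (and the companion relation $\delta_+ - \delta_- = \sigma\sqrt\tau$ from part (v)) is exact, so a direct numerical evaluation should agree to machine precision. Below is my own check with arbitrary sample inputs.

```python
import math

def nprime(y):
    """Standard normal density N'(y)."""
    return math.exp(-0.5 * y * y) / math.sqrt(2 * math.pi)

def delta(sign, tau, s, r, sigma):
    """delta_{+/-}(tau, s) = [log s + (r +/- sigma^2/2) tau] / (sigma sqrt(tau))."""
    return (math.log(s) + (r + sign * 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))

r, sigma, tau, s = 0.04, 0.3, 0.7, 1.8    # arbitrary sample inputs
lhs = math.exp(-r * tau) * nprime(delta(-1, tau, s, r, sigma))
rhs = s * nprime(delta(+1, tau, s, r, sigma))
gap = delta(+1, tau, s, r, sigma) - delta(-1, tau, s, r, sigma)  # should be sigma*sqrt(tau)
```

Both sides coincide up to floating-point rounding, for any admissible choice of $r$, $\sigma$, $\tau > 0$, $s > 0$.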

(iv)

Proof.

$\frac{N'(\delta_-(\tau,s))}{N'(\delta_-(\tau,s^{-1}))} = \exp\Big\{-\frac{(\log s + r\tau)^2 - (\log\frac1s + r\tau)^2}{2\sigma^2\tau} + \frac{\sigma^2\tau\big(\log s - \log\frac1s\big)}{2\sigma^2\tau}\Big\} = e^{-\frac{4r\tau\log s - 2\sigma^2\tau\log s}{2\sigma^2\tau}} = e^{-(\frac{2r}{\sigma^2}-1)\log s} = s^{-(\frac{2r}{\sigma^2}-1)}.$

So $N'(\delta_-(\tau,s^{-1})) = s^{\frac{2r}{\sigma^2}-1}N'(\delta_-(\tau,s)).$

(v)

Proof.

$\delta_+(\tau,s) - \delta_-(\tau,s) = \frac{1}{\sigma\sqrt\tau}\Big[\log s + \Big(r + \frac12\sigma^2\Big)\tau - \log s - \Big(r - \frac12\sigma^2\Big)\tau\Big] = \frac{\sigma^2\tau}{\sigma\sqrt\tau} = \sigma\sqrt\tau.$

(vi)

Proof.

$\delta_\pm(\tau,s) - \delta_\pm(\tau,s^{-1}) = \frac{1}{\sigma\sqrt\tau}\Big[\log s - \log\frac1s\Big] = \frac{2\log s}{\sigma\sqrt\tau}.$

(vii)

Proof. $N'(y) = \frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}$, so $N''(y) = \frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}\cdot(-y) = -yN'(y).$

To be continued ...

7.3.

Proof. We note $S_T = S_0e^{\sigma\hat{W}_T} = S_te^{\sigma(\hat{W}_T - \hat{W}_t)}$, $\hat{W}_T - \hat{W}_t = (\tilde{W}_T - \tilde{W}_t) + \alpha(T-t)$ is independent of $\mathcal{F}_t$, $\sup_{t\le u\le T}(\hat{W}_u - \hat{W}_t)$ is independent of $\mathcal{F}_t$, and

$Y_T = S_0e^{\sigma\hat{M}_T} = S_0e^{\sigma\sup_{t\le u\le T}\hat{W}_u}\,\mathbf{1}_{\{\hat{M}_t\le\sup_{t\le u\le T}\hat{W}_u\}} + S_0e^{\sigma\hat{M}_t}\,\mathbf{1}_{\{\hat{M}_t>\sup_{t\le u\le T}\hat{W}_u\}}$
$= S_te^{\sigma\sup_{t\le u\le T}(\hat{W}_u - \hat{W}_t)}\,\mathbf{1}_{\{Y_t\le S_te^{\sigma\sup_{t\le u\le T}(\hat{W}_u - \hat{W}_t)}\}} + Y_t\,\mathbf{1}_{\{Y_t> S_te^{\sigma\sup_{t\le u\le T}(\hat{W}_u - \hat{W}_t)}\}}.$

So

$\tilde{E}[f(S_T,Y_T)\,|\,\mathcal{F}_t] = \tilde{E}\Big[f\Big(x\frac{S_{T-t}}{S_0},\; x\frac{Y_{T-t}}{S_0}\mathbf{1}_{\{y\le x\frac{Y_{T-t}}{S_0}\}} + y\mathbf{1}_{\{y>x\frac{Y_{T-t}}{S_0}\}}\Big)\Big],$

where $x = S_t$, $y = Y_t$. Therefore $\tilde{E}[f(S_T,Y_T)\,|\,\mathcal{F}_t]$ is a Borel function of $(S_t,Y_t)$.

7.4.

Proof. By Cauchy's inequality and the monotonicity of $Y$, we have

$\Big|\sum_{j=1}^m (Y_{t_j} - Y_{t_{j-1}})(S_{t_j} - S_{t_{j-1}})\Big| \le \sum_{j=1}^m |Y_{t_j} - Y_{t_{j-1}}|\,|S_{t_j} - S_{t_{j-1}}| \le \sqrt{\sum_{j=1}^m (Y_{t_j} - Y_{t_{j-1}})^2}\sqrt{\sum_{j=1}^m (S_{t_j} - S_{t_{j-1}})^2}$
$\le \sqrt{\max_{1\le j\le m}|Y_{t_j} - Y_{t_{j-1}}|\,(Y_T - Y_0)}\sqrt{\sum_{j=1}^m (S_{t_j} - S_{t_{j-1}})^2}.$

If we increase the number of partition points to infinity and let the length of the longest subinterval $\max_{1\le j\le m}|t_j - t_{j-1}|$ approach zero, then $\sum_{j=1}^m (S_{t_j} - S_{t_{j-1}})^2 \to [S]_T - [S]_0 < \infty$ and $\max_{1\le j\le m}|Y_{t_j} - Y_{t_{j-1}}| \to 0$ a.s. by the continuity of $Y$. This implies $\sum_{j=1}^m (Y_{t_j} - Y_{t_{j-1}})(S_{t_j} - S_{t_{j-1}}) \to 0$.

8. American Derivative Securities

8.1.

Proof. $v_L'(L+) = (K-L)\big(-\frac{2r}{\sigma^2}\big)\big(\frac{x}{L}\big)^{-\frac{2r}{\sigma^2}-1}\frac1L\big|_{x=L} = -\frac{2r}{\sigma^2L}(K-L)$. So $v_L'(L+) = v_L'(L-) = -1$ if and only if $\frac{2r}{\sigma^2L}(K-L) = 1$. Solving for $L$, we get $L = \frac{2rK}{2r+\sigma^2}$.
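The smooth-pasting condition that singles out $L_* = \frac{2rK}{2r+\sigma^2}$ can be exhibited numerically: at $x = L_*$ the slope of $v_{L_*}$ from the right equals $-1$, the slope of the payoff $K - x$ from the left. This is my own check with arbitrary sample parameters.

```python
import math

r, sigma, K = 0.05, 0.35, 100.0
L_star = 2 * r * K / (2 * r + sigma**2)     # optimal exercise boundary

def v_L(x, L):
    """Value of the exercise-at-L strategy: (K - L)*(x/L)^(-2r/sigma^2), x >= L."""
    return (K - L) * (x / L) ** (-2 * r / sigma**2)

eps = 1e-6
right_deriv = (v_L(L_star + eps, L_star) - v_L(L_star, L_star)) / eps
# smooth pasting: right derivative at L_star should equal -1,
# matching the slope of the intrinsic value K - x on [0, L_star]
```

For any $L \ne L_*$ the same difference quotient departs from $-1$, which is exactly the kink that smooth pasting rules out.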

8.2.

Proof. By the calculation in Section 8.3.3, we can see $v_2(x) \ge (K_2 - x)^+ \ge (K_1 - x)^+$, $rv_2(x) - rxv_2'(x) - \frac12\sigma^2x^2v_2''(x) \ge 0$ for all $x\ge 0$, and for $0\le x < L_{1*} < L_{2*}$,

$rv_2(x) - rxv_2'(x) - \frac12\sigma^2x^2v_2''(x) = rK_2 > rK_1 > 0.$

So the linear complementarity conditions for $v_2$ imply $v_2(x) = (K_2 - x)^+ = K_2 - x > K_1 - x = (K_1 - x)^+$ on $[0, L_{1*}]$. Hence $v_2(x)$ does not satisfy the third linear complementarity condition for $v_1$: for each $x\ge 0$, equality holds in either (8.8.1) or (8.8.2) or both.

8.3. (i)

Proof. Suppose $x$ takes its values in a domain bounded away from 0. By the general theory of linear differential equations, if we can find two linearly independent solutions $v_1(x)$, $v_2(x)$ of (8.8.4), then any solution of (8.8.4) can be represented in the form $C_1v_1 + C_2v_2$ where $C_1$ and $C_2$ are constants. So it suffices to find two linearly independent special solutions of (8.8.4). Assume $v(x) = x^p$ for some constant $p$ to be determined; (8.8.4) yields $x^p\big(r - pr - \frac12\sigma^2p(p-1)\big) = 0$. Solving the quadratic equation $0 = r - pr - \frac12\sigma^2p(p-1) = -\big(\frac12\sigma^2p + r\big)(p-1)$, we get $p = 1$ or $-\frac{2r}{\sigma^2}$. So a general solution of (8.8.4) has the form $C_1x + C_2x^{-\frac{2r}{\sigma^2}}$.

(ii)

Proof. Assume there is an interval $[x_1,x_2]$ with $0 < x_1 < x_2 < \infty$, such that $v(x)\ge 0$ satisfies (8.3.19) with equality on $[x_1,x_2]$ and satisfies (8.3.18) with equality for $x$ at and immediately to the left of $x_1$ and for $x$ at and immediately to the right of $x_2$. Then we can find some $C_1$ and $C_2$, so that $v(x) = C_1x + C_2x^{-\frac{2r}{\sigma^2}}$ on $[x_1,x_2]$. If for some $x_0\in[x_1,x_2]$, $v(x_0) = v'(x_0) = 0$, by the uniqueness of the solution of (8.8.4), we would conclude $v\equiv 0$. This is a contradiction. So such an $x_0$ cannot exist. This implies $0 < x_1 < x_2 < K$ (if $K\le x_2$, then $v(x_2) = (K - x_2)^+ = 0$ and $v'(x_2) =$ the right derivative of $(K-x)^+$ at $x_2$, which is 0).¹ Thus we have four equations for $C_1$ and $C_2$:

$C_1x_1 + C_2x_1^{-\frac{2r}{\sigma^2}} = K - x_1$
$C_1x_2 + C_2x_2^{-\frac{2r}{\sigma^2}} = K - x_2$
$C_1 - \frac{2r}{\sigma^2}C_2x_1^{-\frac{2r}{\sigma^2}-1} = -1$
$C_1 - \frac{2r}{\sigma^2}C_2x_2^{-\frac{2r}{\sigma^2}-1} = -1.$

Since $x_1\ne x_2$, the last two equations imply $C_2 = 0$. Plugging $C_2 = 0$ into the first two equations, we have $C_1 = \frac{K-x_1}{x_1} = \frac{K-x_2}{x_2}$; plugging $C_2 = 0$ into the last two equations, we have $C_1 = -1$. Combined, we would have $x_1 = x_2$. Contradiction. Therefore our initial assumption is incorrect, and the only solution $v$ that satisfies the specified conditions in the problem is the zero solution.

(iii)

Proof. If in a right neighborhood of 0, $v$ satisfies (8.3.19) with equality, then part (i) implies $v(x) = C_1x + C_2x^{-\frac{2r}{\sigma^2}}$ for some constants $C_1$ and $C_2$. Then $v(0) = \lim_{x\downarrow 0}v(x) = 0 < (K-0)^+$, i.e. (8.3.18) will be violated. So we must have $rv - rxv' - \frac12\sigma^2x^2v'' > 0$ in a right neighborhood of 0; by (8.3.20), this forces $v(x) = (K-x)^+$ near 0. So $v(0) = K$. We have thus concluded simultaneously that $v$ cannot satisfy (8.3.19) with equality near 0 and that $v(0) = K$, starting from first principles (8.3.18)-(8.3.20).

(iv)

Proof. This is already shown in our solution of part (iii): near 0, $v$ cannot satisfy (8.3.19) with equality.

(v)

Proof. If $v$ satisfies (8.3.18) with equality for all $x\ge 0$, i.e. $v(x) = (K-x)^+$, then $v$ cannot have a continuous derivative as stated in the problem, since $(K-x)^+$ has a corner at $x = K$. This is a contradiction.

(vi)

Proof. By the result of part (i), we can start with $v(x) = (K-x)^+$ on $[0,x_1]$ and $v(x) = C_1x + C_2x^{-\frac{2r}{\sigma^2}}$ on $[x_1,\infty)$. By the assumption of the problem, both $v$ and $v'$ are continuous. Since $(K-x)^+$ is not differentiable at $K$, we must have $x_1\le K$. This gives us the equations

$K - x_1 = (K-x_1)^+ = C_1x_1 + C_2x_1^{-\frac{2r}{\sigma^2}}$
$-1 = C_1 - \frac{2r}{\sigma^2}C_2x_1^{-\frac{2r}{\sigma^2}-1}.$

Because $v$ is assumed to be bounded, we must have $C_1 = 0$ and the above equations only have two unknowns: $C_2$ and $x_1$. Solving them for $C_2$ and $x_1$, we are done.

¹ Note we have interpreted the condition "$v(x)$ satisfies (8.3.18) with equality for $x$ at and immediately to the right of $x_2$" as $v(x_2) = (K-x_2)^+$ and $v'(x_2) =$ the right derivative of $(K-x)^+$ at $x_2$. This is weaker than "$v(x) = (K-x)^+$ in a right neighborhood of $x_2$".

8.4. (i)

Proof. This is already shown in part (i) of Exercise 8.3.

(ii)

Proof. We solve for $A$, $B$ the equations
$$A L^{-2r/\sigma^2} + BL = K - L, \qquad -\frac{2r}{\sigma^2} A L^{-2r/\sigma^2-1} + B = -1,$$
and we obtain $A = \dfrac{\sigma^2 K L^{2r/\sigma^2}}{\sigma^2+2r}$, $B = \dfrac{2rK}{L(\sigma^2+2r)} - 1$.
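A quick numerical sanity check, not part of the original solution manual: for sample parameter values (all assumed for illustration), the closed-form $A$ and $B$ above do solve the value-matching and smooth-pasting equations at $x = L$.

```python
# Verify that A and B from part (ii) satisfy the two equations at x = L.
# All parameter values below are illustrative assumptions.
r, sigma, K, L = 0.05, 0.3, 10.0, 4.0
p = 2 * r / sigma**2  # the exponent 2r/sigma^2

A = sigma**2 * K * L**p / (sigma**2 + 2 * r)
B = 2 * r * K / (L * (sigma**2 + 2 * r)) - 1

eq1 = A * L**(-p) + B * L - (K - L)        # value matching: should be ~0
eq2 = -p * A * L**(-p - 1) + B - (-1)      # smooth pasting: should be ~0
print(eq1, eq2)
```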

(iii)

Proof. By (8.8.5), $B > 0$. So for $x \geq K$, $f(x) \geq Bx \geq BK > 0 = (K-x)^+$. If $L \leq x < K$,
$$f(x) - (K-x)^+ = \frac{\sigma^2 K L^{2r/\sigma^2}}{\sigma^2+2r} x^{-2r/\sigma^2} + \frac{2rKx}{L(\sigma^2+2r)} - K = \frac{K L^{2r/\sigma^2} x^{-2r/\sigma^2}}{\sigma^2+2r}\left[\sigma^2 + 2r\left(\frac{x}{L}\right)^{2r/\sigma^2+1} - (\sigma^2+2r)\left(\frac{x}{L}\right)^{2r/\sigma^2}\right].$$
Let $g(\theta) = \sigma^2 + 2r\theta^{2r/\sigma^2+1} - (\sigma^2+2r)\theta^{2r/\sigma^2}$ with $\theta \geq 1$. Then $g(1) = 0$ and
$$g'(\theta) = 2r\left(\frac{2r}{\sigma^2}+1\right)\theta^{2r/\sigma^2} - (\sigma^2+2r)\frac{2r}{\sigma^2}\theta^{2r/\sigma^2-1} = \frac{2r}{\sigma^2}(\sigma^2+2r)\theta^{2r/\sigma^2-1}(\theta - 1) \geq 0.$$
So $g(\theta) \geq 0$ for any $\theta \geq 1$. This shows $f(x) \geq (K-x)^+$ for $L \leq x < K$. Combined, we get $f(x) \geq (K-x)^+$ for all $x \geq L$.
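A small numerical spot check of the conclusion just proved, with assumed sample parameters: $f(x) = A x^{-2r/\sigma^2} + Bx$ should dominate the intrinsic value $(K-x)^+$ on $[L, \infty)$.

```python
# Smallest gap f(x) - (K - x)^+ over a grid of x >= L; it should be >= 0
# (it is exactly 0 at x = L by value matching). Parameters are assumptions.
r, sigma, K, L = 0.05, 0.3, 10.0, 4.0
p = 2 * r / sigma**2
A = sigma**2 * K * L**p / (sigma**2 + 2 * r)
B = 2 * r * K / (L * (sigma**2 + 2 * r)) - 1

worst = min(A * x**(-p) + B * x - max(K - x, 0.0)
            for x in [L + 20.0 * i / 1000 for i in range(1001)])
print(worst)
```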

(iv)

Proof. Since $\lim_{x\to\infty} v(x) = \lim_{x\to\infty} f(x) = \infty$ and $\lim_{x\to\infty} v_{L_*}(x) = \lim_{x\to\infty} (K-L_*)\left(\frac{x}{L_*}\right)^{-2r/\sigma^2} = 0$, $v(x)$ and $v_{L_*}(x)$ are different. By part (iii), $v(x) \geq (K-x)^+$. So $v$ satisfies (8.3.18). For $x \geq L$, $rv - rxv' - \frac12\sigma^2x^2v'' = rf - rxf' - \frac12\sigma^2x^2f'' = 0$. For $0 \leq x \leq L$, $rv - rxv' - \frac12\sigma^2x^2v'' = r(K-x) + rx = rK \geq 0$. So $v$ satisfies (8.3.19). Along the way, we also showed $v$ satisfies (8.3.20). In summary, $v$ satisfies the linear complementarity conditions (8.3.18)-(8.3.20), but $v$ is not the function $v_{L_*}$ given by (8.3.13).

(v)

Proof. By part (ii), $B = 0$ if and only if $\frac{2rK}{L(\sigma^2+2r)} - 1 = 0$, i.e. $L = \frac{2rK}{2r+\sigma^2}$. In this case,
$$v(x) = Ax^{-2r/\sigma^2} = \frac{\sigma^2 K}{\sigma^2+2r}\left(\frac{x}{L}\right)^{-2r/\sigma^2} = (K-L)\left(\frac{x}{L}\right)^{-2r/\sigma^2} = v_{L_*}(x)$$
on the interval $[L, \infty)$.

8.5. The difficulty of the dividend-paying case is that from Lemma 8.3.4, we can only obtain $\widetilde{E}[e^{-(r-a)\tau_{L_*}}]$, not $\widetilde{E}[e^{-r\tau_{L_*}}]$. So we have to start from Theorem 8.3.2.

(i)

Proof. By (8.8.9), $S_t = S_0 e^{\sigma \widetilde{W}_t + (r-a-\frac12\sigma^2)t}$. Assume $S_0 = x$; then $S_t = L$ if and only if $-\widetilde{W}_t - \frac{1}{\sigma}(r-a-\frac12\sigma^2)t = \frac{1}{\sigma}\log\frac{x}{L}$. By Theorem 8.3.2,
$$\widetilde{E}[e^{-r\tau_L}] = e^{-\frac{1}{\sigma}\log\frac{x}{L}\left[\frac{1}{\sigma}(r-a-\frac12\sigma^2) + \sqrt{\frac{1}{\sigma^2}(r-a-\frac12\sigma^2)^2 + 2r}\right]}.$$
If we set $\gamma = \frac{1}{\sigma^2}(r-a-\frac12\sigma^2) + \frac{1}{\sigma}\sqrt{\frac{1}{\sigma^2}(r-a-\frac12\sigma^2)^2 + 2r}$, we can write $\widetilde{E}[e^{-r\tau_L}]$ as $e^{-\gamma\log\frac{x}{L}} = \left(\frac{x}{L}\right)^{-\gamma}$. So the risk-neutral expected discounted payoff of this strategy is
$$v_L(x) = \begin{cases} K-x, & 0 \leq x \leq L, \\ (K-L)\left(\frac{x}{L}\right)^{-\gamma}, & x > L.\end{cases}$$

(ii)

Proof. $\frac{\partial}{\partial L} v_L(x) = \left(\frac{x}{L}\right)^{-\gamma}\left(\frac{\gamma(K-L)}{L} - 1\right)$. Set $\frac{\partial}{\partial L} v_L(x) = 0$ and solve for $L_*$: we have $L_* = \frac{\gamma K}{\gamma+1}$.
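A numerical spot check, not part of the original solution and using assumed parameter values: the candidate $L_* = \gamma K/(\gamma+1)$ should maximize $L \mapsto v_L(x) = (K-L)(x/L)^{-\gamma}$ over $0 < L < x$.

```python
# Compare the analytic maximizer L* with a brute-force grid maximizer.
# All parameter values are illustrative assumptions.
import math

r, a, sigma, K = 0.05, 0.02, 0.4, 10.0
m = (r - a - 0.5 * sigma**2) / sigma**2
gamma = m + math.sqrt(m * m + 2 * r / sigma**2)
L_star = gamma * K / (gamma + 1)

x = 9.5  # any spot above the candidate boundary
grid = [0.5 + (x - 0.5) * i / 200000 for i in range(200000)]
L_num = max(grid, key=lambda L: (K - L) * (x / L)**(-gamma))
print(L_star, L_num)
```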

(iii)

Proof. By Itô's formula, we have
$$d\left(e^{-rt} v_{L_*}(S_t)\right) = e^{-rt}\left[-r v_{L_*}(S_t) + v_{L_*}'(S_t)(r-a)S_t + \tfrac12 v_{L_*}''(S_t)\sigma^2 S_t^2\right]dt + e^{-rt} v_{L_*}'(S_t)\sigma S_t\, d\widetilde{W}_t.$$
If $x > L_*$,
$$-r v_{L_*}(x) + v_{L_*}'(x)(r-a)x + \tfrac12 v_{L_*}''(x)\sigma^2 x^2 = (K-L_*)\left(\frac{x}{L_*}\right)^{-\gamma}\left[-r - (r-a)\gamma + \tfrac12\sigma^2\gamma(\gamma+1)\right].$$
By the definition of $\gamma$, if we define $u = r - a - \frac12\sigma^2$, we have
$$-r - (r-a)\gamma + \tfrac12\sigma^2\gamma(\gamma+1) = -r - \gamma\left(r-a-\tfrac12\sigma^2\right) + \tfrac12\sigma^2\gamma^2 = -r - \gamma u + \tfrac12\sigma^2\gamma^2.$$
Since $\gamma = \frac{u}{\sigma^2} + \frac{1}{\sigma}\sqrt{\frac{u^2}{\sigma^2}+2r}$, a direct computation gives
$$\tfrac12\sigma^2\gamma^2 = \frac{u^2}{\sigma^2} + \frac{u}{\sigma}\sqrt{\frac{u^2}{\sigma^2}+2r} + r = \gamma u + r,$$
so $-r - \gamma u + \frac12\sigma^2\gamma^2 = 0$. If $x < L_*$, $-r v_{L_*}(x) + v_{L_*}'(x)(r-a)x + \frac12 v_{L_*}''(x)\sigma^2 x^2 = -r(K-x) + (-1)(r-a)x = -rK + ax$. Combined, we get
$$d\left(e^{-rt}v_{L_*}(S_t)\right) = e^{-rt}1_{\{S_t < L_*\}}(-rK + aS_t)dt + e^{-rt}v_{L_*}'(S_t)\sigma S_t\, d\widetilde{W}_t.$$
Following the reasoning in the proof of Theorem 8.3.5, we only need to show $1_{\{x<L_*\}}(-rK+ax) \leq 0$ to finish the solution. This is further equivalent to proving $rK - aL_* \geq 0$. Plug $L_* = \frac{\gamma K}{\gamma+1}$ into the expression and note $\gamma \geq 0$; the inequality is further reduced to $r(\gamma+1) - a\gamma \geq 0$. We prove this inequality as follows.

Assume for some $K$, $r$, $a$ and $\sigma$ ($K$ and $\sigma$ are assumed to be strictly positive, $r$ and $a$ are assumed to be non-negative), $rK - aL_* < 0$. As shown before, this means $r(\gamma+1) - a\gamma < 0$. Define $\theta = \frac{r-a}{\sigma}$, so that $\gamma = \frac{1}{\sigma}\left(\theta - \frac{\sigma}{2}\right) + \frac{1}{\sigma}\sqrt{\left(\theta-\frac{\sigma}{2}\right)^2 + 2r}$. We have
$$r(\gamma+1) - a\gamma < 0 \iff (r-a)\gamma + r < 0 \iff \theta\left(\theta-\frac{\sigma}{2}\right) + \theta\sqrt{\left(\theta-\frac{\sigma}{2}\right)^2+2r} + r < 0$$
$$\iff \theta\sqrt{\left(\theta-\frac{\sigma}{2}\right)^2+2r} < -r - \theta\left(\theta-\frac{\sigma}{2}\right)\ (<0).$$
In particular $\theta < 0$. Squaring both (negative) sides reverses the inequality on squares of the same sign pattern and yields
$$\theta^2\left(\theta-\frac{\sigma}{2}\right)^2 + 2r\theta^2 > r^2 + \theta^2\left(\theta-\frac{\sigma}{2}\right)^2 + 2r\theta\left(\theta-\frac{\sigma}{2}\right) \iff 2r\theta^2 > r^2 + 2r\theta^2 - r\sigma\theta \iff 0 > r^2 - r\sigma\theta.$$
Since $\sigma\theta = r - a$, this says $0 > r(r - (r-a)) = ra$, contradicting $r \geq 0$ and $a \geq 0$. So our initial assumption is incorrect, and $rK - aL_* \geq 0$ must be true.
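A brute-force spot check, not a proof and not part of the original manual, of the inequality $r(\gamma+1) - a\gamma \geq 0$ just established, over randomly sampled admissible parameters $r, a \geq 0$ and $\sigma > 0$:

```python
# Sample many (r, a, sigma) triples and confirm r*(gamma+1) - a*gamma >= 0
# up to floating-point tolerance. Sampling ranges are arbitrary assumptions.
import math
import random

random.seed(0)
for _ in range(100000):
    r = random.uniform(0.0, 0.2)
    a = random.uniform(0.0, 0.2)
    sigma = random.uniform(0.05, 0.8)
    u = r - a - 0.5 * sigma**2
    gamma = u / sigma**2 + math.sqrt((u / sigma**2)**2 + 2 * r / sigma**2)
    assert r * (gamma + 1) - a * gamma >= -1e-10
print("inequality holds on all sampled parameters")
```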

(iv)

Proof. The proof is similar to that of Corollary 8.3.6. Note the only properties used in the proof of Corollary 8.3.6 are that $e^{-rt}v_{L_*}(S_t)$ is a supermartingale, $e^{-r(t\wedge\tau_{L_*})}v_{L_*}(S_{t\wedge\tau_{L_*}})$ is a martingale, and $v_{L_*}(x) \geq (K-x)^+$. Part (iii) already proved the supermartingale-martingale property, so it suffices to show $v_{L_*}(x) \geq (K-x)^+$ in our problem. Indeed, by $\gamma \geq 0$, $L_* = \frac{\gamma K}{\gamma+1} < K$. For $x \geq K > L_*$, $v_{L_*}(x) > 0 = (K-x)^+$; for $0 \leq x < L_*$, $v_{L_*}(x) = K - x = (K-x)^+$; finally, for $L_* \leq x \leq K$,
$$\frac{d}{dx}\left(v_{L_*}(x) - (K-x)\right) = -\gamma(K-L_*)\frac{x^{-\gamma-1}}{L_*^{-\gamma}} + 1 \geq -\frac{\gamma(K-L_*)}{L_*} + 1 = -\gamma\left(K - \frac{\gamma K}{\gamma+1}\right)\frac{\gamma+1}{\gamma K} + 1 = 0,$$
and $\left(v_{L_*}(x) - (K-x)\right)\big|_{x=L_*} = 0$. So for $L_* \leq x \leq K$, $v_{L_*}(x) - (K-x) \geq 0$. Combined, we have $v_{L_*}(x) \geq (K-x)^+ \geq 0$ for all $x \geq 0$.

8.6.

Proof. By Lemma 8.5.1, $X_t = e^{-rt}(S_t - K)^+$ is a submartingale. For any $\tau \in \Gamma_{0,T}$, Theorem 8.8.1 implies
$$\widetilde{E}[e^{-rT}(S_T-K)^+] \geq \widetilde{E}[e^{-r(T\wedge\tau)}(S_{T\wedge\tau}-K)^+] \geq \widetilde{E}[e^{-r\tau}(S_\tau-K)^+ 1_{\{\tau < \infty\}}] = \widetilde{E}[e^{-r\tau}(S_\tau-K)^+],$$
where we take the convention that $e^{-r\tau}(S_\tau-K)^+ = 0$ when $\tau = \infty$. Since $\tau$ is arbitrarily chosen, $\widetilde{E}[e^{-rT}(S_T-K)^+] \geq \max_{\tau\in\Gamma_{0,T}} \widetilde{E}[e^{-r\tau}(S_\tau-K)^+]$. The other direction is trivial since $T \in \Gamma_{0,T}$.

8.7.

Proof. Suppose $\lambda \in [0,1]$ and $0 \leq x_1 \leq x_2$. Then we have $f((1-\lambda)x_1 + \lambda x_2) \leq (1-\lambda)f(x_1) + \lambda f(x_2) \leq (1-\lambda)h(x_1) + \lambda h(x_2)$. Similarly, $g((1-\lambda)x_1 + \lambda x_2) \leq (1-\lambda)h(x_1) + \lambda h(x_2)$. So
$$h((1-\lambda)x_1+\lambda x_2) = \max\{f((1-\lambda)x_1+\lambda x_2),\ g((1-\lambda)x_1+\lambda x_2)\} \leq (1-\lambda)h(x_1) + \lambda h(x_2).$$
That is, $h$ is also convex.

9. Change of Numeraire

To provide an intuition for change of numeraire, we give a summary of results for change of numeraire in the discrete case. This summary is based on Shiryaev [5].

Consider a model of financial market $(\bar{B}, B, S)$ as in [1] Definition 2.1.1 or [5] page 383. Here $\bar{B}$ and $B$ are both strictly positive, so both of them can be chosen as numeraire.

Several results hold under this model. First, the no-arbitrage and completeness properties of the market are independent of the choice of numeraire (see, for example, Shiryaev [5] page 413 Remark and page 481). Second, if the market is arbitrage-free, then corresponding to $B$ (resp. $\bar{B}$), there is an equivalent probability $\widetilde{P}$ (resp. $\bar{P}$), such that $\left(\frac{\bar{B}}{B}, \frac{S}{B}\right)$ (resp. $\left(\frac{B}{\bar{B}}, \frac{S}{\bar{B}}\right)$) is a martingale under $\widetilde{P}$ (resp. $\bar{P}$). Third, if the market is both arbitrage-free and complete, we have the relation
$$d\bar{P} = \frac{\bar{B}_T}{B_T}\cdot\frac{1}{\bar{B}_0/B_0}\, d\widetilde{P}.$$
Finally, if $f_T$ is a European contingent claim with maturity $T$ and the market is both arbitrage-free and complete, then
$$B_t\, \widetilde{E}\left[\frac{f_T}{B_T}\,\Big|\,\mathcal{F}_t\right] = \bar{B}_t\, \bar{E}\left[\frac{f_T}{\bar{B}_T}\,\Big|\,\mathcal{F}_t\right].$$
That is, the price of $f_T$ is independent of the choice of numeraire.

The above theoretical results can be applied to a market involving a foreign money market account. We consider the following market: a domestic money market account $M$ ($M_0 = 1$), a foreign money market account $M^f$ ($M_0^f = 1$), and a (vector) asset price process $S$ called the stock. Suppose the domestic vs. foreign currency exchange rate is $Q$. Note $Q$ is not a traded asset. Denominated in domestic currency, the traded assets are $(M, M^f Q, S)$, where $M^f Q$ can be seen as the price process of one unit of foreign currency. The domestic risk-neutral measure $\widetilde{P}$ is such that $\left(\frac{M^f Q}{M}, \frac{S}{M}\right)$ is a $\widetilde{P}$-martingale. Denominated in foreign currency, the traded assets are $\left(M^f, \frac{M}{Q}, \frac{S}{Q}\right)$, and the foreign risk-neutral measure $\widetilde{P}^f$ is such that $\left(\frac{M}{QM^f}, \frac{S}{QM^f}\right)$ is a $\widetilde{P}^f$-martingale. This is a change of numeraire in the market denominated by domestic currency, from $M$ to $M^f Q$. If we assume the market is arbitrage-free and complete, the foreign risk-neutral measure is
$$d\widetilde{P}^f = \frac{Q_T M_T^f}{M_T}\cdot\frac{1}{Q_0 M_0^f/M_0}\, d\widetilde{P} = \frac{Q_T D_T M_T^f}{Q_0}\, d\widetilde{P}$$
on $\mathcal{F}_T$. Under the above set-up, for a European contingent claim $f_T$ denominated in domestic currency, its payoff in foreign currency is $f_T/Q_T$. Therefore its foreign price is $\widetilde{E}^f\left[\frac{D_T^f f_T}{D_t^f Q_T}\,\Big|\,\mathcal{F}_t\right]$. Converted into domestic currency, its price is $Q_t\, \widetilde{E}^f\left[\frac{D_T^f f_T}{D_t^f Q_T}\,\Big|\,\mathcal{F}_t\right]$. Using the relation between $\widetilde{P}^f$ and $\widetilde{P}$ on $\mathcal{F}_T$ and the Bayes formula, we get
$$Q_t\, \widetilde{E}^f\left[\frac{D_T^f f_T}{D_t^f Q_T}\,\Big|\,\mathcal{F}_t\right] = \widetilde{E}\left[\frac{D_T f_T}{D_t}\,\Big|\,\mathcal{F}_t\right].$$
The RHS is exactly the price of $f_T$ in the domestic market if we apply risk-neutral pricing.

9.1. (i)

Proof. For any $0 \leq t \leq T$, by Lemma 5.5.2,
$$E^{(M_2)}\left[\frac{M_1(T)}{M_2(T)}\,\Big|\,\mathcal{F}_t\right] = E\left[\frac{M_2(T)}{M_2(t)}\cdot\frac{M_1(T)}{M_2(T)}\,\Big|\,\mathcal{F}_t\right] = \frac{E[M_1(T)\,|\,\mathcal{F}_t]}{M_2(t)} = \frac{M_1(t)}{M_2(t)}.$$
So $\frac{M_1(t)}{M_2(t)}$ is a martingale under $P^{(M_2)}$.

(ii)

Proof. Let $M_1(t) = D_t S_t$ and $M_2(t) = D_t N_t / N_0$. Then $\widetilde{P}^{(N)}$ as defined in (9.2.6) is $P^{(M_2)}$ as defined in Remark 9.2.5. Hence $\frac{M_1(t)}{M_2(t)} = \frac{S_t}{N_t}N_0$ is a martingale under $\widetilde{P}^{(N)}$, which implies $S_t^{(N)} = \frac{S_t}{N_t}$ is a martingale under $\widetilde{P}^{(N)}$.

9.2. (i)

Proof. Since $N_t^{-1} = N_0^{-1} e^{-\nu \widetilde{W}_t - (r-\frac12\nu^2)t}$, we have
$$d(N_t^{-1}) = N_0^{-1} e^{-\nu\widetilde{W}_t - (r-\frac12\nu^2)t}\left[-\nu\, d\widetilde{W}_t - \left(r-\tfrac12\nu^2\right)dt + \tfrac12\nu^2\, dt\right] = N_t^{-1}\left(-\nu\, d\widetilde{W}_t - (r-\nu^2)\,dt\right).$$

(ii)

Proof.
$$d\widehat{M}_t = M_t\, d\frac{1}{N_t} + \frac{1}{N_t}dM_t + d\frac{1}{N_t}\, dM_t = \widehat{M}_t\left(-\nu\, d\widetilde{W}_t - (r-\nu^2)dt\right) + r\widehat{M}_t\, dt = \widehat{M}_t\left(-\nu\, d\widetilde{W}_t + \nu^2 dt\right) = -\nu\widehat{M}_t\, d\widetilde{W}_t^{(N)},$$
where $\widetilde{W}_t^{(N)} = \widetilde{W}_t - \nu t$.

Remark: This can also be obtained directly from Theorem 9.2.2.

(iii)

Proof.
$$d\widehat{X}_t = d\frac{X_t}{N_t} = X_t\, d\frac{1}{N_t} + \frac{1}{N_t}dX_t + d\frac{1}{N_t}\, dX_t = (\Delta_t S_t + \Gamma_t M_t)\, d\frac{1}{N_t} + \frac{1}{N_t}(\Delta_t\, dS_t + \Gamma_t\, dM_t) + d\frac{1}{N_t}(\Delta_t\, dS_t + \Gamma_t\, dM_t)$$
$$= \Delta_t\left[S_t\, d\frac{1}{N_t} + \frac{1}{N_t}dS_t + d\frac{1}{N_t}\, dS_t\right] + \Gamma_t\left[M_t\, d\frac{1}{N_t} + \frac{1}{N_t}dM_t + d\frac{1}{N_t}\, dM_t\right] = \Delta_t\, d\widehat{S}_t + \Gamma_t\, d\widehat{M}_t.$$

9.3. To avoid singular cases, we need to assume $-1 < \rho < 1$.

(i)

Proof. $N_t = N_0 e^{\nu \widetilde{W}_3(t) + (r-\frac12\nu^2)t}$. So
$$dN_t^{-1} = d\left(N_0^{-1}e^{-\nu \widetilde{W}_3(t) - (r-\frac12\nu^2)t}\right) = N_t^{-1}\left[-\nu\, d\widetilde{W}_3(t) - \left(r-\tfrac12\nu^2\right)dt + \tfrac12\nu^2 dt\right] = N_t^{-1}\left[-\nu\, d\widetilde{W}_3(t) - (r-\nu^2)dt\right],$$
and
$$dS_t^{(N)} = N_t^{-1}dS_t + S_t\, dN_t^{-1} + dS_t\, dN_t^{-1} = N_t^{-1}(rS_t\, dt + \sigma S_t\, d\widetilde{W}_1(t)) + S_t N_t^{-1}\left[-\nu\, d\widetilde{W}_3(t) - (r-\nu^2)dt\right] - \sigma\nu\rho\, S_t N_t^{-1}dt$$
$$= S_t^{(N)}(\nu^2 - \sigma\nu\rho)dt + S_t^{(N)}\left(\sigma\, d\widetilde{W}_1(t) - \nu\, d\widetilde{W}_3(t)\right).$$
Define $\gamma = \sqrt{\sigma^2 - 2\rho\sigma\nu + \nu^2}$ and $\widetilde{W}_4(t) = \frac{1}{\gamma}\left(\sigma \widetilde{W}_1(t) - \nu \widetilde{W}_3(t)\right)$; then $\widetilde{W}_4$ is a martingale with quadratic variation
$$[\widetilde{W}_4]_t = \frac{\sigma^2}{\gamma^2}t - \frac{2\sigma\nu\rho}{\gamma^2}t + \frac{\nu^2}{\gamma^2}t = t.$$
By Lévy's Theorem, $\widetilde{W}_4$ is a BM and therefore $S_t^{(N)}$ has volatility $\gamma = \sqrt{\sigma^2 - 2\rho\sigma\nu + \nu^2}$.
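A tiny numerical check, not part of the original solution, of the algebraic identity behind this volatility (it also reconciles the volatility-vector computation in part (iii) below): $(\sigma-\nu\rho)^2 + \nu^2(1-\rho^2) = \sigma^2 - 2\rho\sigma\nu + \nu^2$.

```python
# Confirm the identity over randomly sampled parameters; ranges are assumptions.
import random

random.seed(1)
for _ in range(1000):
    sigma = random.uniform(0.01, 1.0)
    nu = random.uniform(0.01, 1.0)
    rho = random.uniform(-0.99, 0.99)
    lhs = (sigma - nu * rho)**2 + nu**2 * (1 - rho**2)
    rhs = sigma**2 - 2 * rho * sigma * nu + nu**2
    assert abs(lhs - rhs) < 1e-12
print("identity verified")
```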

(ii)

Proof. This problem is the same as Exercise 4.13: we define $\widetilde{W}_2(t) = \frac{-\rho}{\sqrt{1-\rho^2}}\widetilde{W}_1(t) + \frac{1}{\sqrt{1-\rho^2}}\widetilde{W}_3(t)$; then $\widetilde{W}_2$ is a martingale, with
$$(d\widetilde{W}_2(t))^2 = \left[\frac{-\rho}{\sqrt{1-\rho^2}}d\widetilde{W}_1(t) + \frac{1}{\sqrt{1-\rho^2}}d\widetilde{W}_3(t)\right]^2 = \left[\frac{\rho^2}{1-\rho^2} + \frac{1}{1-\rho^2} - \frac{2\rho^2}{1-\rho^2}\right]dt = dt,$$
and $d\widetilde{W}_2(t)\, d\widetilde{W}_1(t) = \frac{-\rho}{\sqrt{1-\rho^2}}dt + \frac{\rho}{\sqrt{1-\rho^2}}dt = 0$. So $\widetilde{W}_2$ is a BM independent of $\widetilde{W}_1$, and
$$dN_t = rN_t\, dt + \nu N_t\, d\widetilde{W}_3(t) = rN_t\, dt + \nu N_t\left[\rho\, d\widetilde{W}_1(t) + \sqrt{1-\rho^2}\, d\widetilde{W}_2(t)\right].$$

(iii)

Proof. Under $\widetilde{P}$, $(\widetilde{W}_1, \widetilde{W}_2)$ is a two-dimensional BM, and
$$dS_t = rS_t\, dt + \sigma S_t\, d\widetilde{W}_1(t) = rS_t\, dt + S_t\,(\sigma, 0)\begin{pmatrix} d\widetilde{W}_1(t) \\ d\widetilde{W}_2(t)\end{pmatrix}, \qquad dN_t = rN_t\, dt + N_t\,(\nu\rho,\ \nu\sqrt{1-\rho^2})\begin{pmatrix} d\widetilde{W}_1(t) \\ d\widetilde{W}_2(t)\end{pmatrix}.$$
So under $\widetilde{P}$, the volatility vector for $S$ is $(\sigma, 0)$, and the volatility vector for $N$ is $(\nu\rho, \nu\sqrt{1-\rho^2})$. By Theorem 9.2.2, under the measure $\widetilde{P}^{(N)}$, the volatility vector for $S^{(N)}$ is $(v_1, v_2) = (\sigma - \nu\rho,\ -\nu\sqrt{1-\rho^2})$. In particular, the volatility of $S^{(N)}$ is
$$\sqrt{v_1^2 + v_2^2} = \sqrt{(\sigma-\nu\rho)^2 + \nu^2(1-\rho^2)} = \sqrt{\sigma^2 - 2\rho\nu\sigma + \nu^2},$$
consistent with the result of part (i).

9.4.

Proof. From (9.3.15), we have $M_t^f Q_t = M_0^f Q_0\, e^{\int_0^t \sigma_2(s)d\widetilde{W}_3(s) + \int_0^t (R_s - \frac12\sigma_2^2(s))ds}$. So
$$\frac{D_t^f}{Q_t} = \frac{D_0^f}{Q_0}\, e^{-\int_0^t \sigma_2(s)d\widetilde{W}_3(s) - \int_0^t (R_s - \frac12\sigma_2^2(s))ds}$$
and
$$d\left(\frac{D_t^f}{Q_t}\right) = \frac{D_t^f}{Q_t}\left[-\sigma_2(t)d\widetilde{W}_3(t) - \left(R_t - \tfrac12\sigma_2^2(t)\right)dt + \tfrac12\sigma_2^2(t)dt\right] = \frac{D_t^f}{Q_t}\left[-\sigma_2(t)d\widetilde{W}_3(t) - (R_t - \sigma_2^2(t))dt\right].$$
To get (9.3.22), we note
$$d\left(\frac{M_t D_t^f}{Q_t}\right) = M_t\, d\frac{D_t^f}{Q_t} + \frac{D_t^f}{Q_t}dM_t + dM_t\, d\frac{D_t^f}{Q_t} = \frac{M_t D_t^f}{Q_t}\left[-\sigma_2(t)d\widetilde{W}_3(t) - (R_t-\sigma_2^2(t))dt\right] + \frac{R_t M_t D_t^f}{Q_t}dt$$
$$= \frac{M_t D_t^f}{Q_t}\left(-\sigma_2(t)d\widetilde{W}_3(t) + \sigma_2^2(t)dt\right) = -\frac{M_t D_t^f}{Q_t}\,\sigma_2(t)\, d\widetilde{W}_3^f(t).$$
To get (9.3.23), we note
$$d\left(\frac{D_t^f S_t}{Q_t}\right) = \frac{D_t^f}{Q_t}dS_t + S_t\, d\frac{D_t^f}{Q_t} + dS_t\, d\frac{D_t^f}{Q_t}$$
$$= \frac{D_t^f S_t}{Q_t}\left(R_t\, dt + \sigma_1(t)d\widetilde{W}_1(t)\right) + \frac{S_t D_t^f}{Q_t}\left[-\sigma_2(t)d\widetilde{W}_3(t) - (R_t - \sigma_2^2(t))dt\right] + \sigma_1(t)S_t\, d\widetilde{W}_1(t)\cdot\frac{D_t^f}{Q_t}(-\sigma_2(t))d\widetilde{W}_3(t)$$
$$= \frac{D_t^f S_t}{Q_t}\left[\sigma_1(t)d\widetilde{W}_1(t) - \sigma_2(t)d\widetilde{W}_3(t) + \sigma_2^2(t)dt - \sigma_1(t)\sigma_2(t)\rho_t\, dt\right] = \frac{D_t^f S_t}{Q_t}\left[\sigma_1(t)d\widetilde{W}_1^f(t) - \sigma_2(t)d\widetilde{W}_3^f(t)\right].$$

9.5.

Proof. We combine the solutions of all the sub-problems into a single solution as follows. The payoff of a quanto call is $\left(\frac{S_T}{Q_T} - K\right)^+$ units of domestic currency at time $T$. By the risk-neutral pricing formula, its price at time $t$ is $\widetilde{E}\left[e^{-r(T-t)}\left(\frac{S_T}{Q_T}-K\right)^+\Big|\mathcal{F}_t\right]$. So we need to find the SDE for $\frac{S_t}{Q_t}$ under the risk-neutral measure $\widetilde{P}$. By formulas (9.3.14) and (9.3.16), we have $S_t = S_0 e^{\sigma_1\widetilde{W}_1(t) + (r-\frac12\sigma_1^2)t}$ and
$$Q_t = Q_0\, e^{\sigma_2\widetilde{W}_3(t) + (r - r^f - \frac12\sigma_2^2)t} = Q_0\, e^{\sigma_2\rho\widetilde{W}_1(t) + \sigma_2\sqrt{1-\rho^2}\,\widetilde{W}_2(t) + (r-r^f-\frac12\sigma_2^2)t}.$$
So $\frac{S_t}{Q_t} = \frac{S_0}{Q_0}\, e^{(\sigma_1-\sigma_2\rho)\widetilde{W}_1(t) - \sigma_2\sqrt{1-\rho^2}\,\widetilde{W}_2(t) + (r^f + \frac12\sigma_2^2 - \frac12\sigma_1^2)t}$. Define
$$\sigma_4 = \sqrt{(\sigma_1-\sigma_2\rho)^2 + \sigma_2^2(1-\rho^2)} = \sqrt{\sigma_1^2 - 2\rho\sigma_1\sigma_2 + \sigma_2^2}$$
and $\widetilde{W}_4(t) = \frac{\sigma_1-\sigma_2\rho}{\sigma_4}\widetilde{W}_1(t) - \frac{\sigma_2\sqrt{1-\rho^2}}{\sigma_4}\widetilde{W}_2(t)$. Then $\widetilde{W}_4$ is a martingale with
$$[\widetilde{W}_4]_t = \frac{(\sigma_1-\sigma_2\rho)^2}{\sigma_4^2}t + \frac{\sigma_2^2(1-\rho^2)}{\sigma_4^2}t = t.$$
So $\widetilde{W}_4$ is a Brownian motion under $\widetilde{P}$. So if we set $a = r - r^f + \rho\sigma_1\sigma_2 - \sigma_2^2$, we have
$$\frac{S_t}{Q_t} = \frac{S_0}{Q_0}\, e^{\sigma_4\widetilde{W}_4(t) + (r - a - \frac12\sigma_4^2)t} \quad\text{and}\quad d\left(\frac{S_t}{Q_t}\right) = \frac{S_t}{Q_t}\left[\sigma_4\, d\widetilde{W}_4(t) + (r-a)dt\right].$$
Therefore, under $\widetilde{P}$, $\frac{S_t}{Q_t}$ behaves like a dividend-paying stock, and the price of the quanto call option is like the price of a call option on a dividend-paying stock. Thus formula (5.5.12) gives us the desired price formula for the quanto call option.
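A numerical check, not part of the original solution, that the two exponent representations of $S_t/Q_t$ above agree pathwise: $(\sigma_1-\sigma_2\rho)\widetilde{W}_1 - \sigma_2\sqrt{1-\rho^2}\,\widetilde{W}_2 + (r^f + \frac12\sigma_2^2 - \frac12\sigma_1^2)t$ should equal $\sigma_4\widetilde{W}_4 + (r-a-\frac12\sigma_4^2)t$ with $a = r - r^f + \rho\sigma_1\sigma_2 - \sigma_2^2$. Parameter values are assumptions.

```python
# Pathwise agreement of the two exponents of S_t/Q_t, for arbitrary (W1, W2, t).
import math
import random

random.seed(2)
r, rf, s1, s2, rho = 0.03, 0.01, 0.25, 0.15, 0.4
s4 = math.sqrt(s1**2 - 2 * rho * s1 * s2 + s2**2)
a = r - rf + rho * s1 * s2 - s2**2

for _ in range(1000):
    W1, W2, t = random.gauss(0, 1), random.gauss(0, 1), random.uniform(0, 5)
    W4 = ((s1 - s2 * rho) * W1 - s2 * math.sqrt(1 - rho**2) * W2) / s4
    lhs = ((s1 - s2 * rho) * W1 - s2 * math.sqrt(1 - rho**2) * W2
           + (rf + 0.5 * s2**2 - 0.5 * s1**2) * t)
    rhs = s4 * W4 + (r - a - 0.5 * s4**2) * t
    assert abs(lhs - rhs) < 1e-12
print("exponent representations agree")
```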

9.6. (i)

Proof. $d_+(t) - d_-(t) = \frac{1}{\sigma\sqrt{T-t}}\,\sigma^2(T-t) = \sigma\sqrt{T-t}$. So $d_-(t) = d_+(t) - \sigma\sqrt{T-t}$.

(ii)

Proof. $d_+(t) + d_-(t) = \frac{2}{\sigma\sqrt{T-t}}\log\frac{\mathrm{For}_S(t,T)}{K}$. So
$$d_+^2(t) - d_-^2(t) = (d_+(t)+d_-(t))(d_+(t)-d_-(t)) = 2\log\frac{\mathrm{For}_S(t,T)}{K}.$$

(iii)

Proof.
$$\mathrm{For}_S(t,T)e^{-d_+^2(t)/2} - Ke^{-d_-^2(t)/2} = e^{-d_+^2(t)/2}\left[\mathrm{For}_S(t,T) - Ke^{d_+^2(t)/2 - d_-^2(t)/2}\right] = e^{-d_+^2(t)/2}\left[\mathrm{For}_S(t,T) - Ke^{\log\frac{\mathrm{For}_S(t,T)}{K}}\right] = 0.$$
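A one-line numerical spot check, not part of the original solution, of the identity in part (iii), with assumed sample values of the forward price, strike, volatility and time to maturity:

```python
# F*exp(-d_plus^2/2) should equal K*exp(-d_minus^2/2).
import math

F, K, sigma, tau = 105.0, 100.0, 0.2, 0.75  # tau = T - t (assumed values)
d_plus = (math.log(F / K) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
d_minus = d_plus - sigma * math.sqrt(tau)
lhs = F * math.exp(-d_plus**2 / 2)
rhs = K * math.exp(-d_minus**2 / 2)
print(lhs, rhs)
```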

(iv)

Proof.
$$dd_+(t) = \frac{1}{2\sigma(T-t)^{3/2}}\left[\log\frac{\mathrm{For}_S(t,T)}{K} + \frac12\sigma^2(T-t)\right]dt + \frac{1}{\sigma\sqrt{T-t}}\left[\frac{d\mathrm{For}_S(t,T)}{\mathrm{For}_S(t,T)} - \frac{(d\mathrm{For}_S(t,T))^2}{2\mathrm{For}_S(t,T)^2} - \frac12\sigma^2\, dt\right]$$
$$= \frac{1}{2\sigma(T-t)^{3/2}}\log\frac{\mathrm{For}_S(t,T)}{K}\, dt + \frac{\sigma}{4\sqrt{T-t}}dt + \frac{1}{\sigma\sqrt{T-t}}\left(\sigma\, d\widetilde{W}^T(t) - \frac12\sigma^2 dt - \frac12\sigma^2 dt\right)$$
$$= \frac{1}{2\sigma(T-t)^{3/2}}\log\frac{\mathrm{For}_S(t,T)}{K}\, dt - \frac{3\sigma}{4\sqrt{T-t}}dt + \frac{d\widetilde{W}^T(t)}{\sqrt{T-t}}.$$

(v)

Proof. $dd_-(t) = dd_+(t) - d(\sigma\sqrt{T-t}) = dd_+(t) + \dfrac{\sigma\, dt}{2\sqrt{T-t}}$.

(vi)

Proof. By (iv) and (v), $(dd_-(t))^2 = (dd_+(t))^2 = \dfrac{dt}{T-t}$.

(vii)

Proof.
$$dN(d_+(t)) = N'(d_+(t))\, dd_+(t) + \frac12 N''(d_+(t))(dd_+(t))^2 = \frac{1}{\sqrt{2\pi}}e^{-d_+^2(t)/2}\, dd_+(t) + \frac12\cdot\frac{1}{\sqrt{2\pi}}e^{-d_+^2(t)/2}(-d_+(t))\,\frac{dt}{T-t}.$$

(viii)

Proof.
$$dN(d_-(t)) = N'(d_-(t))\, dd_-(t) + \frac12 N''(d_-(t))(dd_-(t))^2$$
$$= \frac{1}{\sqrt{2\pi}}e^{-d_-^2(t)/2}\left[dd_+(t) + \frac{\sigma\, dt}{2\sqrt{T-t}}\right] + \frac{1}{2\sqrt{2\pi}}e^{-d_-^2(t)/2}(-d_-(t))\,\frac{dt}{T-t}$$
$$= \frac{1}{\sqrt{2\pi}}e^{-d_-^2(t)/2}\, dd_+(t) + \frac{\sigma e^{-d_-^2(t)/2}}{2\sqrt{2\pi(T-t)}}dt + \frac{e^{-d_-^2(t)/2}\left(\sigma\sqrt{T-t}-d_+(t)\right)}{2\sqrt{2\pi}\,(T-t)}dt$$
$$= \frac{1}{\sqrt{2\pi}}e^{-d_-^2(t)/2}\, dd_+(t) + \frac{\sigma e^{-d_-^2(t)/2}}{\sqrt{2\pi(T-t)}}dt - \frac{d_+(t)e^{-d_-^2(t)/2}}{2\sqrt{2\pi}\,(T-t)}dt.$$

(ix)

Proof.
$$d\mathrm{For}_S(t,T)\, dN(d_+(t)) = \sigma\mathrm{For}_S(t,T)\, d\widetilde{W}^T(t)\cdot\frac{e^{-d_+^2(t)/2}}{\sqrt{2\pi}}\cdot\frac{1}{\sqrt{T-t}}\, d\widetilde{W}^T(t) = \frac{\sigma\,\mathrm{For}_S(t,T)\,e^{-d_+^2(t)/2}}{\sqrt{2\pi(T-t)}}dt.$$

(x)

Proof.
$$\mathrm{For}_S(t,T)\, dN(d_+(t)) + d\mathrm{For}_S(t,T)\, dN(d_+(t)) - K\, dN(d_-(t))$$
$$= \mathrm{For}_S(t,T)\left[\frac{e^{-d_+^2(t)/2}}{\sqrt{2\pi}}\, dd_+(t) - \frac{d_+(t)e^{-d_+^2(t)/2}}{2\sqrt{2\pi}\,(T-t)}dt\right] + \frac{\sigma\,\mathrm{For}_S(t,T)\,e^{-d_+^2(t)/2}}{\sqrt{2\pi(T-t)}}dt$$
$$\quad - K\left[\frac{e^{-d_-^2(t)/2}}{\sqrt{2\pi}}\, dd_+(t) + \frac{\sigma e^{-d_-^2(t)/2}}{\sqrt{2\pi(T-t)}}dt - \frac{d_+(t)e^{-d_-^2(t)/2}}{2\sqrt{2\pi}\,(T-t)}dt\right]$$
$$= \frac{1}{\sqrt{2\pi}}\left[\mathrm{For}_S(t,T)e^{-d_+^2(t)/2} - Ke^{-d_-^2(t)/2}\right]dd_+(t) - \frac{d_+(t)}{2\sqrt{2\pi}\,(T-t)}\left[\mathrm{For}_S(t,T)e^{-d_+^2(t)/2} - Ke^{-d_-^2(t)/2}\right]dt$$
$$\quad + \frac{\sigma}{\sqrt{2\pi(T-t)}}\left[\mathrm{For}_S(t,T)e^{-d_+^2(t)/2} - Ke^{-d_-^2(t)/2}\right]dt = 0.$$
The last equality comes from (iii), which implies $e^{-d_-^2(t)/2} = \frac{\mathrm{For}_S(t,T)}{K}e^{-d_+^2(t)/2}$.

10. Term-Structure Models

10.1. (i)

Proof. Using the notation $I_1(t)$, $I_2(t)$, $I_3(t)$ and $I_4(t)$ introduced in the problem, we can write $Y_1(t)$ and $Y_2(t)$ as $Y_1(t) = e^{-\lambda_1 t}Y_1(0) + e^{-\lambda_1 t}I_1(t)$ and
$$Y_2(t) = \begin{cases} \dfrac{\lambda_{21}}{\lambda_1-\lambda_2}\left(e^{-\lambda_1 t} - e^{-\lambda_2 t}\right)Y_1(0) + e^{-\lambda_2 t}Y_2(0) + \dfrac{\lambda_{21}}{\lambda_1-\lambda_2}\left[e^{-\lambda_1 t}I_1(t) - e^{-\lambda_2 t}I_2(t)\right] + e^{-\lambda_2 t}I_3(t), & \text{if } \lambda_1 \neq \lambda_2; \\[2mm] -\lambda_{21}te^{-\lambda_1 t}Y_1(0) + e^{-\lambda_1 t}Y_2(0) - \lambda_{21}\left[te^{-\lambda_1 t}I_1(t) - e^{-\lambda_1 t}I_4(t)\right] + e^{-\lambda_1 t}I_3(t), & \text{if } \lambda_1 = \lambda_2.\end{cases}$$
Since all the $I_k(t)$'s ($k = 1, \dots, 4$) are normally distributed with zero mean, we can conclude $E[Y_1(t)] = e^{-\lambda_1 t}Y_1(0)$ and
$$E[Y_2(t)] = \begin{cases} \dfrac{\lambda_{21}}{\lambda_1-\lambda_2}\left(e^{-\lambda_1 t} - e^{-\lambda_2 t}\right)Y_1(0) + e^{-\lambda_2 t}Y_2(0), & \text{if } \lambda_1 \neq \lambda_2; \\[2mm] -\lambda_{21}te^{-\lambda_1 t}Y_1(0) + e^{-\lambda_1 t}Y_2(0), & \text{if } \lambda_1 = \lambda_2.\end{cases}$$

(ii)

Proof. The calculation relies on the following fact: if $X_t$ and $Y_t$ are both martingales, then $X_tY_t - [X,Y]_t$ is also a martingale. In particular, $E[X_tY_t] = E[X,Y]_t$. Thus
$$E[I_1^2(t)] = \int_0^t e^{2\lambda_1 u}du = \frac{e^{2\lambda_1 t}-1}{2\lambda_1}, \qquad E[I_1(t)I_2(t)] = \int_0^t e^{(\lambda_1+\lambda_2)u}du = \frac{e^{(\lambda_1+\lambda_2)t}-1}{\lambda_1+\lambda_2}, \qquad E[I_1(t)I_3(t)] = 0,$$
$$E[I_1(t)I_4(t)] = \int_0^t ue^{2\lambda_1 u}du = \frac{te^{2\lambda_1 t}}{2\lambda_1} - \frac{e^{2\lambda_1 t}-1}{4\lambda_1^2}, \qquad E[I_4^2(t)] = \int_0^t u^2e^{2\lambda_1 u}du = \frac{t^2e^{2\lambda_1 t}}{2\lambda_1} - \frac{te^{2\lambda_1 t}}{2\lambda_1^2} + \frac{e^{2\lambda_1 t}-1}{4\lambda_1^3}.$$
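A numerical cross-check, not part of the original solution and with assumed parameters, of the last closed-form integral above, via a simple midpoint rule:

```python
# int_0^t u^2 exp(2*l1*u) du: midpoint rule vs. the closed form.
import math

l1, t = 0.7, 2.0  # assumed sample values
n = 200000
h = t / n
num = sum(((i + 0.5) * h)**2 * math.exp(2 * l1 * (i + 0.5) * h)
          for i in range(n)) * h
closed = (t**2 * math.exp(2 * l1 * t) / (2 * l1)
          - t * math.exp(2 * l1 * t) / (2 * l1**2)
          + (math.exp(2 * l1 * t) - 1) / (4 * l1**3))
print(num, closed)
```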

(iii)

Proof. Following the hint, we have
$$E[I_1(s)I_2(t)] = E[J_1(t)I_2(t)] = \int_0^t e^{(\lambda_1+\lambda_2)u}1_{\{u\leq s\}}du = \frac{e^{(\lambda_1+\lambda_2)s}-1}{\lambda_1+\lambda_2}.$$

10.2. (i)

Proof. Assume $B(t,T) = \widetilde{E}[e^{-\int_t^T R_s ds}\,|\,\mathcal{F}_t] = f(t, Y_1(t), Y_2(t))$. Then $d(D_tB(t,T)) = D_t[-R_t f(t,Y_1(t),Y_2(t))dt + df(t,Y_1(t),Y_2(t))]$. By Itô's formula,
$$df(t,Y_1(t),Y_2(t)) = \Big[f_t + f_{y_1}(\mu - \lambda_1 Y_1(t)) + f_{y_2}(-\lambda_2 Y_2(t)) + f_{y_1y_2}\sigma_{21}Y_1(t) + \tfrac12 f_{y_1y_1}Y_1(t) + \tfrac12 f_{y_2y_2}\left(\sigma_{21}^2Y_1(t) + \alpha + \beta Y_1(t)\right)\Big]dt + \text{martingale part}.$$
Since $D_tB(t,T)$ is a martingale, we must have
$$\left[-(\delta_0 + \delta_1 y_1 + \delta_2 y_2) + \frac{\partial}{\partial t} + (\mu - \lambda_1 y_1)\frac{\partial}{\partial y_1} - \lambda_2 y_2\frac{\partial}{\partial y_2} + \sigma_{21}y_1\frac{\partial^2}{\partial y_1\partial y_2} + \frac12 y_1\frac{\partial^2}{\partial y_1^2} + \frac12\left(\sigma_{21}^2 y_1 + \alpha + \beta y_1\right)\frac{\partial^2}{\partial y_2^2}\right]f = 0.$$

(ii)

Proof. If we suppose $f(t,y_1,y_2) = e^{-y_1C_1(T-t) - y_2C_2(T-t) - A(T-t)}$, then
$$f_t = \left[y_1C_1'(T-t) + y_2C_2'(T-t) + A'(T-t)\right]f, \quad f_{y_1} = -C_1(T-t)f, \quad f_{y_2} = -C_2(T-t)f,$$
$$f_{y_1y_2} = C_1(T-t)C_2(T-t)f, \quad f_{y_1y_1} = C_1^2(T-t)f, \quad f_{y_2y_2} = C_2^2(T-t)f.$$
So the PDE in part (i) becomes
$$-(\delta_0 + \delta_1 y_1 + \delta_2 y_2) + y_1C_1' + y_2C_2' + A' - (\mu - \lambda_1 y_1)C_1 + \lambda_2 y_2C_2 + \sigma_{21}y_1C_1C_2 + \tfrac12 y_1C_1^2 + \tfrac12\left(\sigma_{21}^2 y_1 + \alpha + \beta y_1\right)C_2^2 = 0.$$
Sorting out the LHS according to the independent variables $y_1$ and $y_2$, we get
$$\begin{cases} -\delta_1 + C_1' + \lambda_1 C_1 + \sigma_{21}C_1C_2 + \frac12 C_1^2 + \frac12(\sigma_{21}^2+\beta)C_2^2 = 0, \\ -\delta_2 + C_2' + \lambda_2 C_2 = 0, \\ -\delta_0 + A' - \mu C_1 + \frac12\alpha C_2^2 = 0.\end{cases}$$
In other words, we can obtain the ODEs for $C_1$, $C_2$ and $A$ as follows:
$$\begin{cases} C_1' = -\lambda_1 C_1 - \sigma_{21}C_1C_2 - \frac12 C_1^2 - \frac12(\sigma_{21}^2+\beta)C_2^2 + \delta_1 & \text{(different from (10.7.4), check!)} \\ C_2' = -\lambda_2 C_2 + \delta_2 \\ A' = \mu C_1 - \frac12\alpha C_2^2 + \delta_0.\end{cases}$$

10.3. (i)

Proof. $d(D_tB(t,T)) = D_t[-R_t f(t,T,Y_1(t),Y_2(t))dt + df(t,T,Y_1(t),Y_2(t))]$ and
$$df(t,T,Y_1(t),Y_2(t)) = \left[f_t + f_{y_1}\cdot(-\lambda_1 Y_1(t)) + f_{y_2}\cdot(-\lambda_{21}Y_1(t) - \lambda_2 Y_2(t)) + \tfrac12 f_{y_1y_1} + \tfrac12 f_{y_2y_2}\right]dt + \text{martingale part}.$$
Since $D_tB(t,T)$ is a martingale under the risk-neutral measure, we have the following PDE:
$$\left[-(\delta_0(t) + \delta_1 y_1 + \delta_2 y_2) + \frac{\partial}{\partial t} - \lambda_1 y_1\frac{\partial}{\partial y_1} - (\lambda_{21}y_1 + \lambda_2 y_2)\frac{\partial}{\partial y_2} + \frac12\frac{\partial^2}{\partial y_1^2} + \frac12\frac{\partial^2}{\partial y_2^2}\right]f(t,T,y_1,y_2) = 0.$$
Suppose $f(t,T,y_1,y_2) = e^{-y_1C_1(t,T) - y_2C_2(t,T) - A(t,T)}$; then
$$f_t = \left[-y_1\frac{d}{dt}C_1(t,T) - y_2\frac{d}{dt}C_2(t,T) - \frac{d}{dt}A(t,T)\right]f, \quad f_{y_1} = -C_1(t,T)f, \quad f_{y_2} = -C_2(t,T)f,$$
$$f_{y_1y_2} = C_1(t,T)C_2(t,T)f, \quad f_{y_1y_1} = C_1^2(t,T)f, \quad f_{y_2y_2} = C_2^2(t,T)f.$$
So the PDE becomes
$$-(\delta_0(t) + \delta_1 y_1 + \delta_2 y_2) - y_1\frac{d}{dt}C_1(t,T) - y_2\frac{d}{dt}C_2(t,T) - \frac{d}{dt}A(t,T) + \lambda_1 y_1 C_1(t,T) + (\lambda_{21}y_1 + \lambda_2 y_2)C_2(t,T) + \frac12 C_1^2(t,T) + \frac12 C_2^2(t,T) = 0.$$
Sorting out the terms according to the independent variables $y_1$ and $y_2$, we get
$$\begin{cases} -\delta_0(t) - \frac{d}{dt}A(t,T) + \frac12 C_1^2(t,T) + \frac12 C_2^2(t,T) = 0, \\ -\delta_1 - \frac{d}{dt}C_1(t,T) + \lambda_1 C_1(t,T) + \lambda_{21}C_2(t,T) = 0, \\ -\delta_2 - \frac{d}{dt}C_2(t,T) + \lambda_2 C_2(t,T) = 0.\end{cases}$$
That is,
$$\begin{cases} \frac{d}{dt}C_1(t,T) = \lambda_1 C_1(t,T) + \lambda_{21}C_2(t,T) - \delta_1, \\ \frac{d}{dt}C_2(t,T) = \lambda_2 C_2(t,T) - \delta_2, \\ \frac{d}{dt}A(t,T) = \frac12 C_1^2(t,T) + \frac12 C_2^2(t,T) - \delta_0(t).\end{cases}$$

(ii)

Proof. For $C_2$, we note $\frac{d}{dt}\left[e^{-\lambda_2 t}C_2(t,T)\right] = -\delta_2 e^{-\lambda_2 t}$ from the ODE in (i). Integrating from $t$ to $T$, we have
$$0 - e^{-\lambda_2 t}C_2(t,T) = -\delta_2\int_t^T e^{-\lambda_2 s}ds = \frac{\delta_2}{\lambda_2}\left(e^{-\lambda_2 T} - e^{-\lambda_2 t}\right).$$
So $C_2(t,T) = \frac{\delta_2}{\lambda_2}\left(1 - e^{-\lambda_2(T-t)}\right)$. For $C_1$, we note
$$\frac{d}{dt}\left(e^{-\lambda_1 t}C_1(t,T)\right) = (\lambda_{21}C_2(t,T) - \delta_1)e^{-\lambda_1 t} = \left(\frac{\lambda_{21}\delta_2}{\lambda_2} - \delta_1\right)e^{-\lambda_1 t} - \frac{\lambda_{21}\delta_2}{\lambda_2}e^{-\lambda_2 T + (\lambda_2-\lambda_1)t}.$$
Integrating from $t$ to $T$ and multiplying through by $-e^{\lambda_1 t}$, we get
$$C_1(t,T) = \begin{cases} \left(\dfrac{\delta_1}{\lambda_1} - \dfrac{\lambda_{21}\delta_2}{\lambda_1\lambda_2}\right)\left(1 - e^{-\lambda_1(T-t)}\right) + \dfrac{\lambda_{21}\delta_2}{\lambda_2(\lambda_2-\lambda_1)}\left(e^{-\lambda_1(T-t)} - e^{-\lambda_2(T-t)}\right), & \text{if } \lambda_1 \neq \lambda_2; \\[2mm] \left(\dfrac{\delta_1}{\lambda_1} - \dfrac{\lambda_{21}\delta_2}{\lambda_1\lambda_2}\right)\left(1 - e^{-\lambda_1(T-t)}\right) + \dfrac{\lambda_{21}\delta_2}{\lambda_2}(T-t)\,e^{-\lambda_1(T-t)}, & \text{if } \lambda_1 = \lambda_2.\end{cases}$$

(iii)

Proof. From the ODE $\frac{d}{dt}A(t,T) = \frac12\left(C_1^2(t,T) + C_2^2(t,T)\right) - \delta_0(t)$ and $A(T,T) = 0$, we get
$$A(t,T) = \int_t^T\left[\delta_0(s) - \frac12\left(C_1^2(s,T) + C_2^2(s,T)\right)\right]ds.$$

(iv)

Proof. We want to find $\delta_0$ so that $f(0,T,Y_1(0),Y_2(0)) = e^{-Y_1(0)C_1(0,T) - Y_2(0)C_2(0,T) - A(0,T)} = B(0,T)$ for all $T > 0$. Taking logarithms on both sides and plugging in the expression of $A(t,T)$, we get
$$\log B(0,T) = -Y_1(0)C_1(0,T) - Y_2(0)C_2(0,T) + \int_0^T\left[\frac12\left(C_1^2(s,T) + C_2^2(s,T)\right) - \delta_0(s)\right]ds.$$
Taking the derivative w.r.t. $T$ and using $C_1(T,T) = C_2(T,T) = 0$, we have
$$\frac{\partial}{\partial T}\log B(0,T) = -Y_1(0)\frac{\partial}{\partial T}C_1(0,T) - Y_2(0)\frac{\partial}{\partial T}C_2(0,T) + \frac12 C_1^2(T,T) + \frac12 C_2^2(T,T) - \delta_0(T).$$
So
$$\delta_0(T) = -Y_1(0)\frac{\partial}{\partial T}C_1(0,T) - Y_2(0)\frac{\partial}{\partial T}C_2(0,T) - \frac{\partial}{\partial T}\log B(0,T),$$
where $\frac{\partial}{\partial T}C_2(0,T) = \delta_2 e^{-\lambda_2 T}$ and $\frac{\partial}{\partial T}C_1(0,T)$ is obtained by differentiating the formulas in part (ii), in the two cases $\lambda_1 \neq \lambda_2$ and $\lambda_1 = \lambda_2$.
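A numerical cross-check, not part of the original solution and with assumed parameters, of the closed forms in part (ii): integrate the terminal-value ODEs for $C_1$, $C_2$ backward from $C_1(T,T) = C_2(T,T) = 0$ with explicit Euler steps and compare with the formulas at $t = 0$.

```python
# Backward Euler sweep of dC1/dt = l1*C1 + l21*C2 - d1, dC2/dt = l2*C2 - d2.
import math

l1, l2, l21, d1, d2, T = 0.9, 0.5, 0.3, 1.1, 0.8, 2.0  # assumed values

def C2_closed(t):
    return d2 / l2 * (1 - math.exp(-l2 * (T - t)))

def C1_closed(t):
    return ((d1 / l1 - l21 * d2 / (l1 * l2)) * (1 - math.exp(-l1 * (T - t)))
            + l21 * d2 / (l2 * (l2 - l1))
              * (math.exp(-l1 * (T - t)) - math.exp(-l2 * (T - t))))

n = 200000
h = T / n
C1 = C2 = 0.0
for _ in range(n):  # step from t = T down to t = 0: C(t-h) ~ C(t) - h*C'(t)
    C1 -= h * (l1 * C1 + l21 * C2 - d1)
    C2 -= h * (l2 * C2 - d2)
print(C1, C1_closed(0.0), C2, C2_closed(0.0))
```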

10.4. (i)

Proof.
$$d\widetilde{X}_t = dX_t + Ke^{-Kt}\int_0^t e^{Ku}\Theta(u)du\, dt - \Theta(t)dt = -KX_t\, dt + \Theta(t)dt + d\widetilde{B}_t + Ke^{-Kt}\int_0^t e^{Ku}\Theta(u)du\, dt - \Theta(t)dt = -K\widetilde{X}_t\, dt + d\widetilde{B}_t.$$

(ii)

Proof.
$$\widetilde{W}_t = C\widetilde{B}_t = \begin{pmatrix} 1 & 0 \\ \frac{-\rho}{\sqrt{1-\rho^2}} & \frac{1}{\sqrt{1-\rho^2}}\end{pmatrix}\widetilde{B}_t.$$
So $\langle\widetilde{W}_1\rangle_t = \langle\widetilde{B}_1\rangle_t = t$,
$$\langle\widetilde{W}_2\rangle_t = \Big\langle \frac{-\rho}{\sqrt{1-\rho^2}}\widetilde{B}_1 + \frac{1}{\sqrt{1-\rho^2}}\widetilde{B}_2\Big\rangle_t = \frac{\rho^2}{1-\rho^2}t + \frac{1}{1-\rho^2}t - \frac{2\rho^2}{1-\rho^2}t = t,$$
and $\langle\widetilde{W}_1, \widetilde{W}_2\rangle_t = \Big\langle\widetilde{B}_1, \frac{-\rho}{\sqrt{1-\rho^2}}\widetilde{B}_1 + \frac{1}{\sqrt{1-\rho^2}}\widetilde{B}_2\Big\rangle_t = \frac{-\rho}{\sqrt{1-\rho^2}}t + \frac{\rho}{\sqrt{1-\rho^2}}t = 0$. Therefore $\widetilde{W}$ is a two-dimensional BM. Moreover,
$$dY_t = Cd\widetilde{X}_t = -CK\widetilde{X}_t\, dt + Cd\widetilde{B}_t = -CKC^{-1}Y_t\, dt + d\widetilde{W}_t = -\Lambda Y_t\, dt + d\widetilde{W}_t,$$
where
$$\Lambda = CKC^{-1} = \begin{pmatrix} 1 & 0 \\ \frac{-\rho}{\sqrt{1-\rho^2}} & \frac{1}{\sqrt{1-\rho^2}}\end{pmatrix}\begin{pmatrix} \lambda_1 & 0 \\ \lambda_{21} & \lambda_2\end{pmatrix}\begin{pmatrix} 1 & 0 \\ \rho & \sqrt{1-\rho^2}\end{pmatrix} = \begin{pmatrix} \lambda_1 & 0 \\ \frac{\lambda_{21} + \rho(\lambda_2-\lambda_1)}{\sqrt{1-\rho^2}} & \lambda_2\end{pmatrix}.$$

(iii)

Proof.
$$X_t = \widetilde{X}_t + e^{-Kt}\int_0^t e^{Ku}\Theta(u)du = C^{-1}Y_t + e^{-Kt}\int_0^t e^{Ku}\Theta(u)du = \begin{pmatrix} 1 & 0 \\ \rho & \sqrt{1-\rho^2}\end{pmatrix}\begin{pmatrix} Y_1(t) \\ Y_2(t)\end{pmatrix} + e^{-Kt}\int_0^t e^{Ku}\Theta(u)du$$
$$= \begin{pmatrix} Y_1(t) \\ \rho Y_1(t) + \sqrt{1-\rho^2}\, Y_2(t)\end{pmatrix} + e^{-Kt}\int_0^t e^{Ku}\Theta(u)du.$$
So $R_t = X_2(t) = \rho Y_1(t) + \sqrt{1-\rho^2}\, Y_2(t) + \delta_0(t)$, where $\delta_0(t)$ is the second coordinate of $e^{-Kt}\int_0^t e^{Ku}\Theta(u)du$ and can be derived explicitly by Lemma 10.2.3. Then $\delta_1 = \rho$ and $\delta_2 = \sqrt{1-\rho^2}$.

10.5.

Proof. We note $C(t,T)$ and $A(t,T)$ depend only on $T-t$. So $C(t, t+\bar{\tau})$ and $A(t, t+\bar{\tau})$ are constants when $\bar{\tau}$ is fixed. Since
$$L_t = -\frac{\log B(t, t+\bar{\tau})}{\bar{\tau}} = \frac{1}{\bar{\tau}}\left[C(t,t+\bar{\tau})R(t) + A(t,t+\bar{\tau})\right] = \frac{1}{\bar{\tau}}\left[C(0,\bar{\tau})R(t) + A(0,\bar{\tau})\right],$$
we have $L(t_2) - L(t_1) = \frac{1}{\bar{\tau}}C(0,\bar{\tau})\left[R(t_2) - R(t_1)\right]$. Since $L(t_2) - L(t_1)$ is a deterministic linear transformation of $R(t_2) - R(t_1)$, it is easy to verify that their correlation is $1$.

10.6. (i)

Proof. If $\delta_2 = 0$, then
$$dR_t = \delta_1\, dY_1(t) = \delta_1\left(-\lambda_1 Y_1(t)dt + d\widetilde{W}_1(t)\right) = \delta_1\left(-\lambda_1\frac{R_t - \delta_0}{\delta_1}dt + d\widetilde{W}_1(t)\right) = (\delta_0\lambda_1 - \lambda_1 R_t)dt + \delta_1\, d\widetilde{W}_1(t).$$
So $a = \delta_0\lambda_1$ and $b = \lambda_1$.

(ii)

Proof.
$$dR_t = \delta_1\, dY_1(t) + \delta_2\, dY_2(t) = -\delta_1\lambda_1 Y_1(t)dt + \delta_1\, d\widetilde{W}_1(t) - \delta_2\lambda_{21}Y_1(t)dt - \delta_2\lambda_2 Y_2(t)dt + \delta_2\, d\widetilde{W}_2(t)$$
$$= -Y_1(t)(\delta_1\lambda_1 + \delta_2\lambda_{21})dt - \delta_2\lambda_2 Y_2(t)dt + \delta_1\, d\widetilde{W}_1(t) + \delta_2\, d\widetilde{W}_2(t)$$
$$= -\lambda_2\delta_1 Y_1(t)dt - \lambda_2\delta_2 Y_2(t)dt + \delta_1\, d\widetilde{W}_1(t) + \delta_2\, d\widetilde{W}_2(t) \qquad (\text{using the assumption } \delta_1\lambda_1 + \delta_2\lambda_{21} = \delta_1\lambda_2)$$
$$= -\lambda_2(\delta_1 Y_1(t) + \delta_2 Y_2(t))dt + \delta_1\, d\widetilde{W}_1(t) + \delta_2\, d\widetilde{W}_2(t)$$
$$= -\lambda_2(R_t - \delta_0)dt + \sqrt{\delta_1^2+\delta_2^2}\left[\frac{\delta_1}{\sqrt{\delta_1^2+\delta_2^2}}d\widetilde{W}_1(t) + \frac{\delta_2}{\sqrt{\delta_1^2+\delta_2^2}}d\widetilde{W}_2(t)\right].$$
So $a = \lambda_2\delta_0$, $b = \lambda_2$, $\sigma = \sqrt{\delta_1^2+\delta_2^2}$ and $\widetilde{B}_t = \frac{\delta_1}{\sqrt{\delta_1^2+\delta_2^2}}\widetilde{W}_1(t) + \frac{\delta_2}{\sqrt{\delta_1^2+\delta_2^2}}\widetilde{W}_2(t)$.

10.7. (i)

Proof. We use the canonical form of the model as in formulas (10.2.4)-(10.2.6). By (10.2.20),
$$dB(t,T) = df(t, Y_1(t), Y_2(t)) = d\, e^{-Y_1(t)C_1(T-t) - Y_2(t)C_2(T-t) - A(T-t)} = (dt\ \text{term}) + B(t,T)\left[-C_1(T-t)d\widetilde{W}_1(t) - C_2(T-t)d\widetilde{W}_2(t)\right].$$
So the volatility vector of $B(t,T)$ under $\widetilde{P}$ is $(-C_1(T-t), -C_2(T-t))$. By (9.2.5),
$$\widetilde{W}_j^T(t) = \int_0^t C_j(T-u)du + \widetilde{W}_j(t) \qquad (j = 1, 2)$$
form a two-dimensional $\widetilde{P}^T$-BM.

(ii)

Proof. Under the $T$-forward measure, the numeraire is $B(t,T)$. By risk-neutral pricing, at time zero the risk-neutral price $V_0$ of the option satisfies
$$\frac{V_0}{B(0,T)} = \widetilde{E}^T\left[\frac{1}{B(T,T)}\left(e^{-C_1(\bar{T}-T)Y_1(T) - C_2(\bar{T}-T)Y_2(T) - A(\bar{T}-T)} - K\right)^+\right].$$
Note $B(T,T) = 1$; we get (10.7.19).

(iii)

Proof. We can rewrite (10.2.4) and (10.2.5) as
$$dY_1(t) = -\lambda_1 Y_1(t)dt + d\widetilde{W}_1^T(t) - C_1(T-t)dt,$$
$$dY_2(t) = -\lambda_{21}Y_1(t)dt - \lambda_2 Y_2(t)dt + d\widetilde{W}_2^T(t) - C_2(T-t)dt.$$
Then
$$Y_1(t) = Y_1(0)e^{-\lambda_1 t} + \int_0^t e^{\lambda_1(s-t)}d\widetilde{W}_1^T(s) - \int_0^t C_1(T-s)e^{\lambda_1(s-t)}ds,$$
$$Y_2(t) = Y_2(0)e^{-\lambda_2 t} - \lambda_{21}\int_0^t Y_1(s)e^{\lambda_2(s-t)}ds + \int_0^t e^{\lambda_2(s-t)}d\widetilde{W}_2^T(s) - \int_0^t C_2(T-s)e^{\lambda_2(s-t)}ds.$$
So $(Y_1, Y_2)$ is jointly Gaussian and $X$ is therefore Gaussian.

(iv)

Proof. First, we recall the Black-Scholes formula for call options: if $dS_t = \mu S_t\, dt + \sigma S_t\, dW_t$, then
$$E\left[e^{-\mu T}\left(S_0 e^{\sigma W_T + (\mu-\frac12\sigma^2)T} - K\right)^+\right] = S_0 N(d_+) - Ke^{-\mu T}N(d_-)$$
with $d_\pm = \frac{1}{\sigma\sqrt{T}}\left(\log\frac{S_0}{K} + (\mu \pm \frac12\sigma^2)T\right)$. Let $T = 1$, $S_0 = 1$ and $\xi = \sigma W_1 + (\mu - \frac12\sigma^2)$; then $\xi \stackrel{d}{=} N(\mu - \frac12\sigma^2, \sigma^2)$ and
$$E[(e^\xi - K)^+] = e^\mu N(d_+) - KN(d_-),$$
where $d_\pm = \frac{1}{\sigma}\left(-\log K + \mu \pm \frac12\sigma^2\right)$ (different from the problem. Check!). Since under $\widetilde{P}^T$, $X \stackrel{d}{=} N(\mu - \frac12\sigma^2, \sigma^2)$, we have
$$B(0,T)\,\widetilde{E}^T\left[(e^X - K)^+\right] = B(0,T)\left(e^\mu N(d_+) - KN(d_-)\right).$$

10.11.

Proof. On each payment date $T_j$, the payoff of this swap contract is $K - L(T_{j-1}, T_{j-1})$. Its no-arbitrage price at time $0$ is $KB(0,T_j) - B(0,T_j)L(0,T_{j-1})$ by Theorem 10.4. So the value of the swap is
$$\sum_{j=1}^{n+1}\left[KB(0,T_j) - B(0,T_j)L(0,T_{j-1})\right] = K\sum_{j=1}^{n+1}B(0,T_j) - \sum_{j=1}^{n+1}B(0,T_j)L(0,T_{j-1}).$$

10.12.

Proof. Since $L(T,T) = \frac{1-B(T,T+\delta)}{\delta B(T,T+\delta)}$ is $\mathcal{F}_T$-measurable, we have
$$\widetilde{E}[D(T+\delta)L(T,T)] = \widetilde{E}\left[\frac{1-B(T,T+\delta)}{\delta B(T,T+\delta)}\,\widetilde{E}[D(T+\delta)\,|\,\mathcal{F}_T]\right] = \widetilde{E}\left[\frac{1-B(T,T+\delta)}{\delta B(T,T+\delta)}\,D(T)B(T,T+\delta)\right]$$
$$= \frac{1}{\delta}\,\widetilde{E}\left[D(T) - D(T)B(T,T+\delta)\right] = \frac{B(0,T) - B(0,T+\delta)}{\delta}.$$

11. Introduction to Jump Processes

11.1. (i)

Proof. First, $M_t^2 = N_t^2 - 2\lambda t N_t + \lambda^2 t^2$. So $E[M_t^2] < \infty$. $f(x) = x^2$ is a convex function. So by the conditional Jensen's inequality,
$$E[f(M_t)\,|\,\mathcal{F}_s] \geq f(E[M_t\,|\,\mathcal{F}_s]) = f(M_s), \qquad \forall s \leq t.$$
So $M_t^2$ is a submartingale.

(ii)

Proof. We note $M_t$ has independent and stationary increments. So for $s \leq t$,
$$E[M_t^2 - M_s^2\,|\,\mathcal{F}_s] = E[(M_t-M_s)^2\,|\,\mathcal{F}_s] + E[(M_t-M_s)\cdot 2M_s\,|\,\mathcal{F}_s] = E[M_{t-s}^2] + 2M_s E[M_{t-s}] = \mathrm{Var}(N_{t-s}) + 0 = \lambda(t-s).$$
That is, $E[M_t^2 - \lambda t\,|\,\mathcal{F}_s] = M_s^2 - \lambda s$.

11.2.

Proof. $P(N_{s+t} = k\,|\,N_s = k) = P(N_{s+t} - N_s = 0\,|\,N_s = k) = P(N_t = 0) = e^{-\lambda t} = 1 - \lambda t + O(t^2)$. Similarly, we have
$$P(N_{s+t} = k+1\,|\,N_s = k) = P(N_t = 1) = \frac{(\lambda t)^1}{1!}e^{-\lambda t} = \lambda t(1 - \lambda t + O(t^2)) = \lambda t + O(t^2),$$
and $P(N_{s+t} \geq k+2\,|\,N_s = k) = P(N_t \geq 2) = \sum_{j=2}^\infty \frac{(\lambda t)^j}{j!}e^{-\lambda t} = O(t^2)$.
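A numerical illustration, not part of the original solution and with an assumed intensity $\lambda$, of the small-$t$ expansions above: the $O(t^2)$ remainders shrink roughly by a factor of four each time $t$ is halved.

```python
# Remainders of the Poisson small-t expansions for a few shrinking t values.
import math

lam = 1.3  # assumed intensity
for t in [0.1, 0.05, 0.025]:
    p0 = math.exp(-lam * t)                # P(N_t = 0)
    p1 = lam * t * math.exp(-lam * t)      # P(N_t = 1)
    p2plus = 1.0 - p0 - p1                 # P(N_t >= 2)
    print(t, abs(p0 - (1 - lam * t)), abs(p1 - lam * t), p2plus)
```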

11.3.

Proof. For any $t \leq u$, we have
$$E\left[\frac{S_u}{S_t}\,\Big|\,\mathcal{F}_t\right] = E\left[(\sigma+1)^{N_u - N_t}e^{-\lambda\sigma(u-t)}\,\Big|\,\mathcal{F}_t\right] = e^{-\lambda\sigma(u-t)}E\left[(\sigma+1)^{N_{u-t}}\right] = e^{-\lambda\sigma(u-t)}E\left[e^{N_{u-t}\log(\sigma+1)}\right]$$
$$= e^{-\lambda\sigma(u-t)}\, e^{\lambda(u-t)(e^{\log(\sigma+1)}-1)} \quad\text{(by (11.3.4))} \quad = e^{-\lambda\sigma(u-t)}\, e^{\lambda\sigma(u-t)} = 1.$$
So $S_t = E[S_u\,|\,\mathcal{F}_t]$ and $S$ is a martingale.
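A Monte Carlo spot check, not part of the original solution and with assumed parameters, of the martingale property just proved: $E[(\sigma+1)^{N_t}e^{-\lambda\sigma t}]$ should be close to $1$.

```python
# Estimate E[(sigma+1)^N_t * exp(-lambda*sigma*t)] by simulation.
import math
import random

random.seed(3)
lam, sigma, t, n = 2.0, 0.5, 1.5, 200000  # assumed parameters

def poisson(mean):
    # Knuth's multiplication method; adequate for the small mean used here.
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

est = (sum((sigma + 1)**poisson(lam * t) for _ in range(n)) / n
       * math.exp(-lam * sigma * t))
print(est)
```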

11.4.

Proof. The problem is ambiguous in that the relation between $N_1$ and $N_2$ is not clearly stated. According to page 524, paragraph 2, we would guess the condition should be that $N_1$ and $N_2$ are independent.

Suppose $N_1$ and $N_2$ are independent. Define $M_1(t) = N_1(t) - \lambda_1 t$ and $M_2(t) = N_2(t) - \lambda_2 t$. Then by independence $E[M_1(t)M_2(t)] = E[M_1(t)]E[M_2(t)] = 0$. Meanwhile, by Itô's product formula,
$$M_1(t)M_2(t) = \int_0^t M_1(s-)dM_2(s) + \int_0^t M_2(s-)dM_1(s) + [M_1, M_2]_t.$$
Both $\int_0^t M_1(s-)dM_2(s)$ and $\int_0^t M_2(s-)dM_1(s)$ are martingales. So taking expectations on both sides, we get
$$0 = 0 + E[M_1, M_2]_t = E\Big[\sum_{0<s\leq t}\Delta N_1(s)\Delta N_2(s)\Big].$$
Since $\sum_{0<s\leq t}\Delta N_1(s)\Delta N_2(s) \geq 0$ a.s., we conclude $\sum_{0<s\leq t}\Delta N_1(s)\Delta N_2(s) = 0$ a.s. By letting $t = 1, 2, \dots$, we can find a set $\Omega_0$ of probability $1$, so that $\forall\omega \in \Omega_0$, $\sum_{0<s\leq t}\Delta N_1(s)\Delta N_2(s) = 0$ for all $t > 0$. Therefore $N_1$ and $N_2$ can have no simultaneous jump.

11.5.

Proof. We shall prove the whole path of N_1 is independent of the whole path of N_2, following the scheme suggested by page 489, paragraph 1.

Fix s \ge 0 and consider, for t > s,

X_t = u_1(N_1(t) - N_1(s)) + u_2(N_2(t) - N_2(s)) - \lambda_1(e^{u_1} - 1)(t-s) - \lambda_2(e^{u_2} - 1)(t-s).

Then by Itô's formula for jump processes, we have

e^{X_t} - e^{X_s} = \int_s^t e^{X_u}\,dX_u^c + \frac{1}{2}\int_s^t e^{X_u}\,dX_u^c\,dX_u^c + \sum_{s < u \le t}(e^{X_u} - e^{X_{u-}})
= \int_s^t e^{X_u}\,[-\lambda_1(e^{u_1}-1) - \lambda_2(e^{u_2}-1)]\,du + \sum_{s < u \le t}(e^{X_u} - e^{X_{u-}}).

Since \Delta X_t = u_1 \Delta N_1(t) + u_2 \Delta N_2(t) and N_1, N_2 have no simultaneous jump,

e^{X_u} - e^{X_{u-}} = e^{X_{u-}}(e^{\Delta X_u} - 1) = e^{X_{u-}}[(e^{u_1}-1)\Delta N_1(u) + (e^{u_2}-1)\Delta N_2(u)].

So

e^{X_t} - 1 = \int_s^t e^{X_{u-}}\,[-\lambda_1(e^{u_1}-1) - \lambda_2(e^{u_2}-1)]\,du + \sum_{s<u\le t} e^{X_{u-}}[(e^{u_1}-1)\Delta N_1(u) + (e^{u_2}-1)\Delta N_2(u)]
= \int_s^t e^{X_{u-}}\,[(e^{u_1}-1)\,d(N_1(u) - \lambda_1 u) + (e^{u_2}-1)\,d(N_2(u) - \lambda_2 u)].

This shows (e^{X_t})_{t \ge s} is a martingale w.r.t. (\mathcal{F}_t)_{t \ge s}. So E[e^{X_t}] \equiv 1, i.e.

E[e^{u_1(N_1(t)-N_1(s)) + u_2(N_2(t)-N_2(s))}] = e^{\lambda_1(e^{u_1}-1)(t-s)}\, e^{\lambda_2(e^{u_2}-1)(t-s)} = E[e^{u_1(N_1(t)-N_1(s))}]\,E[e^{u_2(N_2(t)-N_2(s))}].

This shows N_1(t) - N_1(s) is independent of N_2(t) - N_2(s).

Now suppose 0 \le t_1 < t_2 < \cdots < t_n. The vector (N_1(t_1), \dots, N_1(t_n)) is independent of (N_2(t_1), \dots, N_2(t_n)) if and only if (N_1(t_1), N_1(t_2)-N_1(t_1), \dots, N_1(t_n)-N_1(t_{n-1})) is independent of (N_2(t_1), N_2(t_2)-N_2(t_1), \dots, N_2(t_n)-N_2(t_{n-1})). Let t_0 = 0. Then

E[e^{\sum_{i=1}^n u_i(N_1(t_i)-N_1(t_{i-1})) + \sum_{j=1}^n v_j(N_2(t_j)-N_2(t_{j-1}))}]
= E\big[e^{\sum_{i=1}^{n-1} u_i(N_1(t_i)-N_1(t_{i-1})) + \sum_{j=1}^{n-1} v_j(N_2(t_j)-N_2(t_{j-1}))}\, E[e^{u_n(N_1(t_n)-N_1(t_{n-1})) + v_n(N_2(t_n)-N_2(t_{n-1}))} \mid \mathcal{F}_{t_{n-1}}]\big]
= E[e^{\sum_{i=1}^{n-1} u_i(N_1(t_i)-N_1(t_{i-1})) + \sum_{j=1}^{n-1} v_j(N_2(t_j)-N_2(t_{j-1}))}]\, E[e^{u_n(N_1(t_n)-N_1(t_{n-1})) + v_n(N_2(t_n)-N_2(t_{n-1}))}]
= E[e^{\sum_{i=1}^{n-1} u_i(N_1(t_i)-N_1(t_{i-1})) + \sum_{j=1}^{n-1} v_j(N_2(t_j)-N_2(t_{j-1}))}]\, E[e^{u_n(N_1(t_n)-N_1(t_{n-1}))}]\, E[e^{v_n(N_2(t_n)-N_2(t_{n-1}))}],

where the second equality comes from the independence of N_i(t_n) - N_i(t_{n-1}) (i = 1, 2) relative to \mathcal{F}_{t_{n-1}}, and the third equality comes from the result obtained in the previous paragraph. Working by induction, we have

E[e^{\sum_{i=1}^n u_i(N_1(t_i)-N_1(t_{i-1})) + \sum_{j=1}^n v_j(N_2(t_j)-N_2(t_{j-1}))}]
= \prod_{i=1}^n E[e^{u_i(N_1(t_i)-N_1(t_{i-1}))}] \prod_{j=1}^n E[e^{v_j(N_2(t_j)-N_2(t_{j-1}))}]
= E[e^{\sum_{i=1}^n u_i(N_1(t_i)-N_1(t_{i-1}))}]\, E[e^{\sum_{j=1}^n v_j(N_2(t_j)-N_2(t_{j-1}))}].

This shows the whole path of N_1 is independent of the whole path of N_2.
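The single-process moment generating formula E[e^{uN_t}] = e^{\lambda t(e^u - 1)} underlying the displays above can be checked by direct summation of the Poisson series. The sketch below is my own addition, with illustrative parameter values.

```python
import math

# Numerical check of E[e^{u*N_t}] = e^{lam*t*(e^u - 1)} for a Poisson random
# variable N_t with mean lam*t; parameter values are illustrative.
lam, t, u = 1.2, 1.5, 0.4
term = math.exp(-lam * t)  # P(N_t = 0)
mgf = term                 # k = 0 contribution (e^{u*0} = 1)
for k in range(1, 200):
    term *= lam * t / k    # term is now P(N_t = k)
    mgf += math.exp(u * k) * term
assert abs(mgf - math.exp(lam * t * (math.exp(u) - 1.0))) < 1e-9
```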

11.6.

Proof. Let X_t = u_1 W_t - \frac{1}{2}u_1^2 t + u_2 Q_t - \lambda t(\varphi(u_2) - 1), where \varphi is the moment generating function of the jump size Y. Itô's formula for jump processes yields

e^{X_t} - 1 = \int_0^t e^{X_s}\Big(u_1\,dW_s - \frac{1}{2}u_1^2\,ds - \lambda(\varphi(u_2)-1)\,ds\Big) + \frac{1}{2}\int_0^t e^{X_s} u_1^2\,ds + \sum_{0<s\le t}(e^{X_s} - e^{X_{s-}}).

Note \Delta X_t = u_2 \Delta Q_t = u_2 Y_{N_t} \Delta N_t, where N_t is the Poisson process associated with Q_t. So e^{X_t} - e^{X_{t-}} = e^{X_{t-}}(e^{\Delta X_t} - 1) = e^{X_{t-}}(e^{u_2 Y_{N_t}} - 1)\Delta N_t. Consider the compound Poisson process H_t = \sum_{i=1}^{N_t}(e^{u_2 Y_i} - 1); then H_t - \lambda E[e^{u_2 Y} - 1]t = H_t - \lambda(\varphi(u_2)-1)t is a martingale, e^{X_t} - e^{X_{t-}} = e^{X_{t-}}\,\Delta H_t, and

e^{X_t} - 1 = \int_0^t e^{X_s}\Big(u_1\,dW_s - \frac{1}{2}u_1^2\,ds - \lambda(\varphi(u_2)-1)\,ds\Big) + \frac{1}{2}\int_0^t e^{X_s} u_1^2\,ds + \int_0^t e^{X_{s-}}\,dH_s
= \int_0^t e^{X_s} u_1\,dW_s + \int_0^t e^{X_{s-}}\,d\big(H_s - \lambda(\varphi(u_2)-1)s\big).

This shows e^{X_t} is a martingale and E[e^{X_t}] \equiv 1. So

E[e^{u_1 W_t + u_2 Q_t}] = e^{\frac{1}{2}u_1^2 t}\, e^{\lambda t(\varphi(u_2)-1)} = E[e^{u_1 W_t}]\, E[e^{u_2 Q_t}].

This shows W_t and Q_t are independent.
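The compound-Poisson identity E[e^{u_2 Q_t}] = e^{\lambda t(\varphi(u_2)-1)} used in the last step can be verified by conditioning on N_t, since E[e^{u Q_t} \mid N_t = k] = \varphi(u)^k. The sketch below is my own addition and assumes, purely for illustration, a two-point jump distribution for Y.

```python
import math

# Sketch check: for Q_t = sum_{i=1}^{N_t} Y_i with jump-size MGF phi,
# conditioning on N_t = k gives
#   E[e^{u*Q_t}] = sum_k P(N_t = k) * phi(u)^k = e^{lam*t*(phi(u) - 1)}.
# Illustrative assumption: Y = 1 w.p. 0.6 and Y = -1 w.p. 0.4.
lam, t, u = 0.8, 2.0, 0.25
phi = 0.6 * math.exp(u) + 0.4 * math.exp(-u)  # MGF of the jump size Y
term = math.exp(-lam * t)  # P(N_t = 0)
mgf = term
for k in range(1, 200):
    term *= lam * t / k    # term is now P(N_t = k)
    mgf += term * phi ** k
assert abs(mgf - math.exp(lam * t * (phi - 1.0))) < 1e-9
```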

11.7.

Proof. E[h(Q_T) \mid \mathcal{F}_t] = E[h(Q_T - Q_t + Q_t) \mid \mathcal{F}_t] = E[h(Q_{T-t} + x)]\big|_{x = Q_t} = g(t, Q_t), where g(t, x) = E[h(Q_{T-t} + x)]; the second equality uses the independent, stationary increments of Q.
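When Q is a standard Poisson process (unit jumps), the resulting tower-property identity E[h(Q_T)] = E[g(t, Q_t)] can be checked numerically. The sketch below is my own addition; the choice h(x) = x^2 and the rate are illustrative.

```python
import math

# Check E[h(Q_T)] = E[g(t, Q_t)] with g(t, x) = E[h(Q_{T-t} + x)] for a
# Poisson process Q (unit jumps) and h(x) = x^2; parameters are illustrative.
lam, t, T = 1.0, 1.0, 3.0

def h(x):
    return x * x

K = 60  # series truncation; Poisson tails beyond 60 are negligible here

def pmf(mean, k):
    # Poisson pmf, computed in log space for stability
    return math.exp(-mean + k * math.log(mean) - math.lgamma(k + 1))

def g(t_, x):
    return sum(h(j + x) * pmf(lam * (T - t_), j) for j in range(K))

lhs = sum(h(m) * pmf(lam * T, m) for m in range(2 * K))   # E[h(Q_T)]
rhs = sum(g(t, k) * pmf(lam * t, k) for k in range(K))    # E[g(t, Q_t)]
assert abs(lhs - rhs) < 1e-8
# Cross-check against the known moment E[N_T^2] = lam*T + (lam*T)^2.
assert abs(lhs - (lam * T + (lam * T) ** 2)) < 1e-8
```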

References

[1] F. Delbaen and W. Schachermayer. The mathematics of arbitrage. Springer, 2006.

[2] J. Hull. Options, futures, and other derivatives. Fourth edition. Prentice-Hall International Inc., New Jersey, 2000.

[3] B. Øksendal. Stochastic differential equations: An introduction with applications. Sixth edition. Springer-Verlag, Berlin, 2003.

[4] D. Revuz and M. Yor. Continuous martingales and Brownian motion. Third edition. Springer-Verlag, Berlin, 1998.

[5] A. N. Shiryaev. Essentials of stochastic finance: facts, models, theory. World Scientific, Singapore, 1999.

[6] S. Shreve. Stochastic calculus for finance I: The binomial asset pricing model. Springer-Verlag, New York, 2004.

[7] S. Shreve. Stochastic calculus for finance II: Continuous-time models. Springer-Verlag, New York, 2004.

[8] P. Wilmott. The mathematics of financial derivatives: A student introduction. Cambridge University Press, 1995.

