## Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

by Yan Zeng

Last updated: August 20, 2007

This is a solution manual for the two-volume textbook Stochastic calculus for ﬁnance, by Steven Shreve.

If you have any comments or ﬁnd any typos/errors, please email me at yz44@cornell.edu.

The current version omits the following problems. Volume I: 1.5, 3.3, 3.4, 5.7; Volume II: 3.9, 7.1, 7.2,

7.5–7.9, 10.8, 10.9, 10.10.

Acknowledgment I thank Hua Li (a graduate student at Brown University) for reading through this

solution manual and communicating to me several mistakes/typos.

1 Stochastic Calculus for Finance I: The Binomial Asset Pricing Model

1. The Binomial No-Arbitrage Pricing Model

1.1.

Proof. If we get the up sate, then X

1

= X

1

(H) = ∆

0

uS

0

+ (1 + r)(X

0

− ∆

0

S

0

); if we get the down state,

then X

1

= X

1

(T) = ∆

0

dS

0

+(1 +r)(X

0

−∆

0

S

0

). If X

1

has a positive probability of being strictly positive,

then we must either have X

1

(H) > 0 or X

1

(T) > 0.

(i) If X

1

(H) > 0, then ∆

0

uS

0

+ (1 + r)(X

0

− ∆

0

S

0

) > 0. Plug in X

0

= 0, we get u∆

0

> (1 + r)∆

0

.

By condition d < 1 + r < u, we conclude ∆

0

> 0. In this case, X

1

(T) = ∆

0

dS

0

+ (1 + r)(X

0

− ∆

0

S

0

) =

∆

0

S

0

[d −(1 +r)] < 0.

(ii) If X

1

(T) > 0, then we can similarly deduce ∆

0

< 0 and hence X

1

(H) < 0.

So we cannot have X

1

strictly positive with positive probability unless X

1

is strictly negative with positive

probability as well, regardless the choice of the number ∆

0

.

Remark: Here the condition X

0

= 0 is not essential, as far as a property deﬁnition of arbitrage for

arbitrary X

0

can be given. Indeed, for the one-period binomial model, we can deﬁne arbitrage as a trading

strategy such that P(X

1

≥ X

0

(1 +r)) = 1 and P(X

1

> X

0

(1 +r)) > 0. First, this is a generalization of the

case X

0

= 0; second, it is “proper” because it is comparing the result of an arbitrary investment involving

money and stock markets with that of a safe investment involving only money market. This can also be seen

by regarding X

0

as borrowed from money market account. Then at time 1, we have to pay back X

0

(1 + r)

to the money market account. In summary, arbitrage is a trading strategy that beats “safe” investment.

Accordingly, we revise the proof of Exercise 1.1. as follows. If X

1

has a positive probability of being

strictly larger than X

0

(1 + r), the either X

1

(H) > X

0

(1 + r) or X

1

(T) > X

0

(1 + r). The ﬁrst case yields

∆

0

S

0

(u−1−r) > 0, i.e. ∆

0

> 0. So X

1

(T) = (1+r)X

0

+∆

0

S

0

(d−1−r) < (1+r)X

0

. The second case can

be similarly analyzed. Hence we cannot have X

1

strictly greater than X

0

(1 + r) with positive probability

unless X

1

is strictly smaller than X

0

(1 +r) with positive probability as well.

Finally, we comment that the above formulation of arbitrage is equivalent to the one in the textbook.

For details, see Shreve [7], Exercise 5.7.

1.2.

1

Proof. X

1

(u) = ∆

0

8 +Γ

0

3 −

5

4

(4∆

0

+1.20Γ

0

) = 3∆

0

+1.5Γ

0

, and X

1

(d) = ∆

0

2 −

5

4

(4∆

0

+1.20Γ

0

) =

−3∆

0

−1.5Γ

0

. That is, X

1

(u) = −X

1

(d). So if there is a positive probability that X

1

is positive, then there

is a positive probability that X

1

is negative.

Remark: Note the above relation X

1

(u) = −X

1

(d) is not a coincidence. In general, let V

1

denote the

payoﬀ of the derivative security at time 1. Suppose

¯

X

0

and

¯

∆

0

are chosen in such a way that V

1

can be

replicated: (1 + r)(

¯

X

0

−

¯

∆

0

S

0

) +

¯

∆

0

S

1

= V

1

. Using the notation of the problem, suppose an agent begins

with 0 wealth and at time zero buys ∆

0

shares of stock and Γ

0

options. He then puts his cash position

−∆

0

S

0

−Γ

0

¯

X

0

in a money market account. At time one, the value of the agent’s portfolio of stock, option

and money market assets is

X

1

= ∆

0

S

1

+ Γ

0

V

1

−(1 +r)(∆

0

S

0

+ Γ

0

¯

X

0

).

Plug in the expression of V

1

and sort out terms, we have

X

1

= S

0

(∆

0

+

¯

∆

0

Γ

0

)(

S

1

S

0

−(1 +r)).

Since d < (1 +r) < u, X

1

(u) and X

1

(d) have opposite signs. So if the price of the option at time zero is

¯

X

0

,

then there will no arbitrage.

1.3.

Proof. V

0

=

1

1+r

1+r−d

u−d

S

1

(H) +

u−1−r

u−d

S

1

(T)

=

S0

1+r

1+r−d

u−d

u +

u−1−r

u−d

d

= S

0

. This is not surprising, since

this is exactly the cost of replicating S

1

.

Remark: This illustrates an important point. The “fair price” of a stock cannot be determined by the

risk-neutral pricing, as seen below. Suppose S

1

(H) and S

1

(T) are given, we could have two current prices, S

0

and S

0

. Correspondingly, we can get u, d and u

, d

**. Because they are determined by S
**

0

and S

0

, respectively,

it’s not surprising that risk-neutral pricing formula always holds, in both cases. That is,

S

0

=

1+r−d

u−d

S

1

(H) +

u−1−r

u−d

S

1

(T)

1 +r

, S

0

=

1+r−d

u

−d

S

1

(H) +

u

−1−r

u

−d

S

1

(T)

1 +r

.

Essentially, this is because risk-neutral pricing relies on fair price=replication cost. Stock as a replicating

component cannot determine its own “fair” price via the risk-neutral pricing formula.

1.4.

Proof.

X

n+1

(T) = ∆

n

dS

n

+ (1 +r)(X

n

−∆

n

S

n

)

= ∆

n

S

n

(d −1 −r) + (1 +r)V

n

=

V

n+1

(H) −V

n+1

(T)

u −d

(d −1 −r) + (1 +r)

˜ pV

n+1

(H) + ˜ qV

n+1

(T)

1 +r

= ˜ p(V

n+1

(T) −V

n+1

(H)) + ˜ pV

n+1

(H) + ˜ qV

n+1

(T)

= ˜ pV

n+1

(T) + ˜ qV

n+1

(T)

= V

n+1

(T).

1.6.

2

Proof. The bank’s trader should set up a replicating portfolio whose payoﬀ is the opposite of the option’s

payoﬀ. More precisely, we solve the equation

(1 +r)(X

0

−∆

0

S

0

) + ∆

0

S

1

= −(S

1

−K)

+

.

Then X

0

= −1.20 and ∆

0

= −

1

2

. This means the trader should sell short 0.5 share of stock, put the income

2 into a money market account, and then transfer 1.20 into a separate money market account. At time one,

the portfolio consisting of a short position in stock and 0.8(1 + r) in money market account will cancel out

with the option’s payoﬀ. Therefore we end up with 1.20(1 +r) in the separate money market account.

Remark: This problem illustrates why we are interested in hedging a long position. In case the stock

price goes down at time one, the option will expire without any payoﬀ. The initial money 1.20 we paid at

time zero will be wasted. By hedging, we convert the option back into liquid assets (cash and stock) which

guarantees a sure payoﬀ at time one. Also, cf. page 7, paragraph 2. As to why we hedge a short position

(as a writer), see Wilmott [8], page 11-13.

1.7.

Proof. The idea is the same as Problem 1.6. The bank’s trader only needs to set up the reverse of the

replicating trading strategy described in Example 1.2.4. More precisely, he should short sell 0.1733 share of

stock, invest the income 0.6933 into money market account, and transfer 1.376 into a separate money market

account. The portfolio consisting a short position in stock and 0.6933-1.376 in money market account will

replicate the opposite of the option’s payoﬀ. After they cancel out, we end up with 1.376(1 + r)

3

in the

separate money market account.

1.8. (i)

Proof. v

n

(s, y) =

2

5

(v

n+1

(2s, y + 2s) +v

n+1

(

s

2

, y +

s

2

)).

(ii)

Proof. 1.696.

(iii)

Proof.

δ

n

(s, y) =

v

n+1

(us, y +us) −v

n+1

(ds, y +ds)

(u −d)s

.

1.9. (i)

Proof. Similar to Theorem 1.2.2, but replace r, u and d everywhere with r

n

, u

n

and d

n

. More precisely, set

¯ p

n

=

1+rn−dn

un−dn

and ¯ q

n

= 1 − ¯ p

n

. Then

V

n

=

¯ p

n

V

n+1

(H) + ¯ q

n

V

n+1

(T)

1 +r

n

.

(ii)

Proof. ∆

n

=

Vn+1(H)−Vn+1(T)

Sn+1(H)−Sn+1(T)

=

Vn+1(H)−Vn+1(T)

(un−dn)Sn

.

(iii)

3

Proof. u

n

=

Sn+1(H)

Sn

=

Sn+10

Sn

= 1+

10

Sn

and d

n

=

Sn+1(T)

Sn

=

Sn−10

Sn

= 1−

10

Sn

. So the risk-neutral probabilities

at time n are ˜ p

n

=

1−dn

un−dn

=

1

2

and ˜ q

n

=

1

2

. Risk-neutral pricing implies the price of this call at time zero is

9.375.

2. Probability Theory on Coin Toss Space

2.1. (i)

Proof. P(A

c

) +P(A) =

¸

ω∈A

c P(ω) +

¸

ω∈A

P(ω) =

¸

ω∈Ω

P(ω) = 1.

(ii)

Proof. By induction, it suﬃces to work on the case N = 2. When A

1

and A

2

are disjoint, P(A

1

∪ A

2

) =

¸

ω∈A1∪A2

P(ω) =

¸

ω∈A1

P(ω) +

¸

ω∈A2

P(ω) = P(A

1

) + P(A

2

). When A

1

and A

2

are arbitrary, using

the result when they are disjoint, we have P(A

1

∪ A

2

) = P((A

1

− A

2

) ∪ A

2

) = P(A

1

− A

2

) + P(A

2

) ≤

P(A

1

) +P(A

2

).

2.2. (i)

Proof.

¯

P(S

3

= 32) = ¯ p

3

=

1

8

,

¯

P(S

3

= 8) = 3¯ p

2

¯ q =

3

8

,

¯

P(S

3

= 2) = 3¯ p¯ q

2

=

3

8

, and

¯

P(S

3

= 0.5) = ¯ q

3

=

1

8

.

(ii)

Proof.

¯

E[S

1

] = 8

¯

P(S

1

= 8) + 2

¯

P(S

1

= 2) = 8¯ p + 2¯ q = 5,

¯

E[S

2

] = 16¯ p

2

+ 4 2¯ p¯ q + 1 ¯ q

2

= 6.25, and

¯

E[S

3

] = 32

1

8

+ 8

3

8

+ 2

3

8

+ 0.5

1

8

= 7.8125. So the average rates of growth of the stock price under

¯

P

are, respectively: ¯ r

0

=

5

4

−1 = 0.25, ¯ r

1

=

6.25

5

−1 = 0.25 and ¯ r

2

=

7.8125

6.25

−1 = 0.25.

(iii)

Proof. P(S

3

= 32) = (

2

3

)

3

=

8

27

, P(S

3

= 8) = 3 (

2

3

)

2

1

3

=

4

9

, P(S

3

= 2) = 2

1

9

=

2

9

, and P(S

3

= 0.5) =

1

27

.

Accordingly, E[S

1

] = 6, E[S

2

] = 9 and E[S

3

] = 13.5. So the average rates of growth of the stock price

under P are, respectively: r

0

=

6

4

−1 = 0.5, r

1

=

9

6

−1 = 0.5, and r

2

=

13.5

9

−1 = 0.5.

2.3.

Proof. Apply conditional Jensen’s inequality.

2.4. (i)

Proof. E

n

[M

n+1

] = M

n

+E

n

[X

n+1

] = M

n

+E[X

n+1

] = M

n

.

(ii)

Proof. E

n

[

Sn+1

Sn

] = E

n

[e

σXn+1

2

e

σ

+e

−σ

] =

2

e

σ

+e

−σ

E[e

σXn+1

] = 1.

2.5. (i)

Proof. 2I

n

= 2

¸

n−1

j=0

M

j

(M

j+1

− M

j

) = 2

¸

n−1

j=0

M

j

M

j+1

−

¸

n−1

j=1

M

2

j

−

¸

n−1

j=1

M

2

j

= 2

¸

n−1

j=0

M

j

M

j+1

+

M

2

n

−

¸

n−1

j=0

M

2

j+1

−

¸

n−1

j=0

M

2

j

= M

2

n

−

¸

n−1

j=0

(M

j+1

−M

j

)

2

= M

2

n

−

¸

n−1

j=0

X

2

j+1

= M

2

n

−n.

(ii)

Proof. E

n

[f(I

n+1

)] = E

n

[f(I

n

+M

n

(M

n+1

−M

n

))] = E

n

[f(I

n

+M

n

X

n+1

)] =

1

2

[f(I

n

+M

n

)+f(I

n

−M

n

)] =

g(I

n

), where g(x) =

1

2

[f(x +

√

2x +n) +f(x −

√

2x +n)], since

√

2I

n

+n = [M

n

[.

2.6.

4

Proof. E

n

[I

n+1

−I

n

] = E

n

[∆

n

(M

n+1

−M

n

)] = ∆

n

E

n

[M

n+1

−M

n

] = 0.

2.7.

Proof. We denote by X

n

the result of n-th coin toss, where Head is represented by X = 1 and Tail is

represented by X = −1. We also suppose P(X = 1) = P(X = −1) =

1

2

. Deﬁne S

1

= X

1

and S

n+1

=

S

n

+b

n

(X

1

, , X

n

)X

n+1

, where b

n

() is a bounded function on ¦−1, 1¦

n

, to be determined later on. Clearly

(S

n

)

n≥1

is an adapted stochastic process, and we can show it is a martingale. Indeed, E

n

[S

n+1

− S

n

] =

b

n

(X

1

, , X

n

)E

n

[X

n+1

] = 0.

For any arbitrary function f, E

n

[f(S

n+1

)] =

1

2

[f(S

n

+b

n

(X

1

, , X

n

)) +f(S

n

−b

n

(X

1

, , X

n

))]. Then

intuitively, E

n

[f(S

n+1

] cannot be solely dependent upon S

n

when b

n

’s are properly chosen. Therefore in

general, (S

n

)

n≥1

cannot be a Markov process.

Remark: If X

n

is regarded as the gain/loss of n-th bet in a gambling game, then S

n

would be the wealth

at time n. b

n

is therefore the wager for the (n+1)-th bet and is devised according to past gambling results.

2.8. (i)

Proof. Note M

n

= E

n

[M

N

] and M

n

= E

n

[M

N

].

(ii)

Proof. In the proof of Theorem 1.2.2, we proved by induction that X

n

= V

n

where X

n

is deﬁned by (1.2.14)

of Chapter 1. In other words, the sequence (V

n

)

0≤n≤N

can be realized as the value process of a portfolio,

which consists of stock and money market accounts. Since (

Xn

(1+r)

n

)

0≤n≤N

is a martingale under

¯

P (Theorem

2.4.5), (

Vn

(1+r)

n

)

0≤n≤N

is a martingale under

¯

P.

(iii)

Proof.

V

n

(1+r)

n

= E

n

V

N

(1+r)

N

, so V

0

,

V

1

1+r

, ,

V

N−1

(1+r)

N−1

,

V

N

(1+r)

N

is a martingale under

¯

P.

(iv)

Proof. Combine (ii) and (iii), then use (i).

2.9. (i)

Proof. u

0

=

S1(H)

S0

= 2, d

0

=

S1(H)

S0

=

1

2

, u

1

(H) =

S2(HH)

S1(H)

= 1.5, d

1

(H) =

S2(HT)

S1(H)

= 1, u

1

(T) =

S2(TH)

S1(T)

= 4

and d

1

(T) =

S2(TT)

S1(T)

= 1.

So ¯ p

0

=

1+r0−d0

u0−d0

=

1

2

, ¯ q

0

=

1

2

, ¯ p

1

(H) =

1+r1(H)−d1(H)

u1(H)−d1(H)

=

1

2

, ¯ q

1

(H) =

1

2

, ¯ p

1

(T) =

1+r1(T)−d1(T)

u1(T)−d1(T)

=

1

6

, and

¯ q

1

(T) =

5

6

.

Therefore

¯

P(HH) = ¯ p

0

¯ p

1

(H) =

1

4

,

¯

P(HT) = ¯ p

0

¯ q

1

(H) =

1

4

,

¯

P(TH) = ¯ q

0

¯ p

1

(T) =

1

12

and

¯

P(TT) =

¯ q

0

¯ q

1

(T) =

5

12

.

The proofs of Theorem 2.4.4, Theorem 2.4.5 and Theorem 2.4.7 still work for the random interest

rate model, with proper modiﬁcations (i.e.

¯

P would be constructed according to conditional probabili-

ties

¯

P(ω

n+1

= H[ω

1

, , ω

n

) := ¯ p

n

and

¯

P(ω

n+1

= T[ω

1

, , ω

n

) := ¯ q

n

. Cf. notes on page 39.). So

the time-zero value of an option that pays oﬀ V

2

at time two is given by the risk-neutral pricing formula

V

0

=

¯

E

V2

(1+r0)(1+r1)

.

(ii)

Proof. V

2

(HH) = 5, V

2

(HT) = 1, V

2

(TH) = 1 and V

2

(TT) = 0. So V

1

(H) =

p1(H)V2(HH)+ q1(H)V2(HT)

1+r1(H)

=

2.4, V

1

(T) =

p1(T)V2(TH)+ q1(T)V2(TT)

1+r1(T)

=

1

9

, and V

0

=

p0V1(H)+ q0V1(T)

1+r0

≈ 1.

5

(iii)

Proof. ∆

0

=

V1(H)−V1(T)

S1(H)−S1(T)

=

2.4−

1

9

8−2

= 0.4 −

1

54

≈ 0.3815.

(iv)

Proof. ∆

1

(H) =

V2(HH)−V2(HT)

S2(HH)−S2(HT)

=

5−1

12−8

= 1.

2.10. (i)

Proof.

¯

E

n

[

Xn+1

(1+r)

n+1

] =

¯

E

n

[

∆nYn+1Sn

(1+r)

n+1

+

(1+r)(Xn−∆nSn)

(1+r)

n+1

] =

∆nSn

(1+r)

n+1

¯

E

n

[Y

n+1

] +

Xn−∆nSn

(1+r)

n

=

∆nSn

(1+r)

n+1

(u¯ p +

d¯ q) +

Xn−∆nSn

(1+r)

n

=

∆nSn+Xn−∆nSn

(1+r)

n

=

Xn

(1+r)

n

.

(ii)

Proof. From (2.8.2), we have

∆

n

uS

n

+ (1 +r)(X

n

−∆

n

S

n

) = X

n+1

(H)

∆

n

dS

n

+ (1 +r)(X

n

−∆

n

S

n

) = X

n+1

(T).

So ∆

n

=

Xn+1(H)−Xn+1(T)

uSn−dSn

and X

n

=

¯

E

n

[

Xn+1

1+r

]. To make the portfolio replicate the payoﬀ at time N, we

must have X

N

= V

N

. So X

n

=

¯

E

n

[

X

N

(1+r)

N−n

] =

¯

E

n

[

V

N

(1+r)

N−n

]. Since (X

n

)

0≤n≤N

is the value process of the

unique replicating portfolio (uniqueness is guaranteed by the uniqueness of the solution to the above linear

equations), the no-arbitrage price of V

N

at time n is V

n

= X

n

=

¯

E

n

[

V

N

(1+r)

N−n

].

(iii)

Proof.

¯

E

n

[

S

n+1

(1 +r)

n+1

] =

1

(1 +r)

n+1

¯

E

n

[(1 −A

n+1

)Y

n+1

S

n

]

=

S

n

(1 +r)

n+1

[¯ p(1 −A

n+1

(H))u + ¯ q(1 −A

n+1

(T))d]

<

S

n

(1 +r)

n+1

[¯ pu + ¯ qd]

=

S

n

(1 +r)

n

.

If A

n+1

is a constant a, then

¯

E

n

[

Sn+1

(1+r)

n+1

] =

Sn

(1+r)

n+1

(1−a)(¯ pu+¯ qd) =

Sn

(1+r)

n

(1−a). So

¯

E

n

[

Sn+1

(1+r)

n+1

(1−a)

n+1

] =

Sn

(1+r)

n

(1−a)

n

.

2.11. (i)

Proof. F

N

+P

N

= S

N

−K + (K −S

N

)

+

= (S

N

−K)

+

= C

N

.

(ii)

Proof. C

n

=

¯

E

n

[

C

N

(1+r)

N−n

] =

¯

E

n

[

F

N

(1+r)

N−n

] +

¯

E

n

[

P

N

(1+r)

N−n

] = F

n

+P

n

.

(iii)

Proof. F

0

=

¯

E[

F

N

(1+r)

N

] =

1

(1+r)

N

¯

E[S

N

−K] = S

0

−

K

(1+r)

N

.

(iv)

6

Proof. At time zero, the trader has F

0

= S

0

in money market account and one share of stock. At time N,

the trader has a wealth of (F

0

−S

0

)(1 +r)

N

+S

N

= −K +S

N

= F

N

.

(v)

Proof. By (ii), C

0

= F

0

+P

0

. Since F

0

= S

0

−

(1+r)

N

S0

(1+r)

N

= 0, C

0

= P

0

.

(vi)

Proof. By (ii), C

n

= P

n

if and only if F

n

= 0. Note F

n

=

¯

E

n

[

S

N

−K

(1+r)

N−n

] = S

n

−

(1+r)

N

S0

(1+r)

N−n

= S

n

−S

0

(1 +r)

n

.

So F

n

is not necessarily zero and C

n

= P

n

is not necessarily true for n ≥ 1.

2.12.

Proof. First, the no-arbitrage price of the chooser option at time m must be max(C, P), where

C =

¯

E

¸

(S

N

−K)

+

(1 +r)

N−m

, and P =

¯

E

¸

(K −S

N

)

+

(1 +r)

N−m

.

That is, C is the no-arbitrage price of a call option at time m and P is the no-arbitrage price of a put option

at time m. Both of them have maturity date N and strike price K. Suppose the market is liquid, then the

chooser option is equivalent to receiving a payoﬀ of max(C, P) at time m. Therefore, its current no-arbitrage

price should be

¯

E[

max(C,P)

(1+r)

m

].

By the put-call parity, C = S

m

−

K

(1+r)

N−m

+P. So max(C, P) = P +(S

m

−

K

(1+r)

N−m

)

+

. Therefore, the

time-zero price of a chooser option is

¯

E

¸

P

(1 +r)

m

+

¯

E

¸

(S

m

−

K

(1+r)

N−m

)

+

(1 +r)

m

¸

=

¯

E

¸

(K −S

N

)

+

(1 +r)

N

+

¯

E

¸

(S

m

−

K

(1+r)

N−m

)

+

(1 +r)

m

¸

.

The ﬁrst term stands for the time-zero price of a put, expiring at time N and having strike price K, and the

second term stands for the time-zero price of a call, expiring at time m and having strike price

K

(1+r)

N−m

.

If we feel unconvinced by the above argument that the chooser option’s no-arbitrage price is

¯

E[

max(C,P)

(1+r)

m

],

due to the economical argument involved (like “the chooser option is equivalent to receiving a payoﬀ of

max(C, P) at time m”), then we have the following mathematically rigorous argument. First, we can

construct a portfolio ∆

0

, , ∆

m−1

, whose payoﬀ at time m is max(C, P). Fix ω, if C(ω) > P(ω), we

can construct a portfolio ∆

m

, , ∆

N−1

whose payoﬀ at time N is (S

N

− K)

+

; if C(ω) < P(ω), we can

construct a portfolio ∆

m

, , ∆

N−1

whose payoﬀ at time N is (K −S

N

)

+

. By deﬁning (m ≤ k ≤ N −1)

∆

k

(ω) =

∆

k

(ω) if C(ω) > P(ω)

∆

k

(ω) if C(ω) < P(ω),

we get a portfolio (∆

n

)

0≤n≤N−1

whose payoﬀ is the same as that of the chooser option. So the no-arbitrage

price process of the chooser option must be equal to the value process of the replicating portfolio. In

particular, V

0

= X

0

=

¯

E[

Xm

(1+r)

m

] =

¯

E[

max(C,P)

(1+r)

m

].

2.13. (i)

Proof. Note under both actual probability P and risk-neutral probability

¯

P, coin tosses ω

n

’s are i.i.d.. So

without loss of generality, we work on P. For any function g, E

n

[g(S

n+1

, Y

n+1

)] = E

n

[g(

Sn+1

Sn

S

n

, Y

n

+

Sn+1

Sn

S

n

)] = pg(uS

n

, Y

n

+ uS

n

) + qg(dS

n

, Y

n

+ dS

n

), which is a function of (S

n

, Y

n

). So (S

n

, Y

n

)

0≤n≤N

is

Markov under P.

(ii)

7

Proof. Set v

N

(s, y) = f(

y

N+1

). Then v

N

(S

N

, Y

N

) = f(

N

n=0

Sn

N+1

) = V

N

. Suppose v

n+1

is given, then

V

n

=

¯

E

n

[

Vn+1

1+r

] =

¯

E

n

[

vn+1(Sn+1,Yn+1)

1+r

] =

1

1+r

[¯ pv

n+1

(uS

n

, Y

n

+ uS

n

) + ¯ qv

n+1

(dS

n

, Y

n

+ dS

n

)] = v

n

(S

n

, Y

n

),

where

v

n

(s, y) =

¯ v

n+1

(us, y +us) + ¯ v

n+1

(ds, y +ds)

1 +r

.

2.14. (i)

Proof. For n ≤ M, (S

n

, Y

n

) = (S

n

, 0). Since coin tosses ω

n

’s are i.i.d. under

¯

P, (S

n

, Y

n

)

0≤n≤M

is Markov

under

¯

P. More precisely, for any function h,

¯

E

n

[h(S

n+1

)] = ¯ ph(uS

n

) +

¯

h(dS

n

), for n = 0, 1, , M −1.

For any function g of two variables, we have

¯

E

M

[g(S

M+1

, Y

M+1

)] =

¯

E

M

[g(S

M+1

, S

M+1

)] = ¯ pg(uS

M

, uS

M

)+

¯ qg(dS

M

, dS

M

). And for n ≥ M+1,

¯

E

n

[g(S

n+1

, Y

n+1

)] =

¯

E

n

[g(

Sn+1

Sn

S

n

, Y

n

+

Sn+1

Sn

S

n

)] = ¯ pg(uS

n

, Y

n

+uS

n

)+

¯ qg(dS

n

, Y

n

+dS

n

), so (S

n

, Y

n

)

0≤n≤N

is Markov under

¯

P.

(ii)

Proof. Set v

N

(s, y) = f(

y

N−M

). Then v

N

(S

N

, Y

N

) = f(

N

K=M+1

S

k

N−M

) = V

N

. Suppose v

n+1

is already given.

a) If n > M, then

¯

E

n

[v

n+1

(S

n+1

, Y

n+1

)] = ¯ pv

n+1

(uS

n

, Y

n

+uS

n

) + ¯ qv

n+1

(dS

n

, Y

n

+dS

n

). So v

n

(s, y) =

¯ pv

n+1

(us, y +us) + ¯ qv

n+1

(ds, y +ds).

b) If n = M, then

¯

E

M

[v

M+1

(S

M+1

, Y

M+1

)] = ¯ pv

M+1

(uS

M

, uS

M

) + ¯ v

n+1

(dS

M

, dS

M

). So v

M

(s) =

¯ pv

M+1

(us, us) + ¯ qv

M+1

(ds, ds).

c) If n < M, then

¯

E

n

[v

n+1

(S

n+1

)] = ¯ pv

n+1

(uS

n

) + ¯ qv

n+1

(dS

n

). So v

n

(s) = ¯ pv

n+1

(us) + ¯ qv

n+1

(ds).

3. State Prices

3.1.

Proof. Note

¯

Z(ω) :=

P(ω)

P(ω)

=

1

Z(ω)

. Apply Theorem 3.1.1 with P,

¯

P, Z replaced by

¯

P, P,

¯

Z, we get the

analogous of properties (i)-(iii) of Theorem 3.1.1.

3.2. (i)

Proof.

¯

P(Ω) =

¸

ω∈Ω

¯

P(ω) =

¸

ω∈Ω

Z(ω)P(ω) = E[Z] = 1.

(ii)

Proof.

¯

E[Y ] =

¸

ω∈Ω

Y (ω)

¯

P(ω) =

¸

ω∈Ω

Y (ω)Z(ω)P(ω) = E[Y Z].

(iii)

Proof.

˜

P(A) =

¸

ω∈A

Z(ω)P(ω). Since P(A) = 0, P(ω) = 0 for any ω ∈ A. So

¯

P(A) = 0.

(iv)

Proof. If

¯

P(A) =

¸

ω∈A

Z(ω)P(ω) = 0, by P(Z > 0) = 1, we conclude P(ω) = 0 for any ω ∈ A. So

P(A) =

¸

ω∈A

P(ω) = 0.

(v)

Proof. P(A) = 1 ⇐⇒ P(A

c

) = 0 ⇐⇒

¯

P(A

c

) = 0 ⇐⇒

¯

P(A) = 1.

(vi)

8

Proof. Pick ω

0

such that P(ω

0

) > 0, deﬁne Z(ω) =

0, if ω = ω

0

1

P(ω0)

, if ω = ω

0

.

Then P(Z ≥ 0) = 1 and E[Z] =

1

P(ω0)

P(ω

0

) = 1.

Clearly

¯

P(Ω ` ¦ω

0

¦) = E[Z1

Ω\{ω0}

] =

¸

ω=ω0

Z(ω)P(ω) = 0. But P(Ω ` ¦ω

0

¦) = 1 − P(ω

0

) > 0 if

P(ω

0

) < 1. Hence in the case 0 < P(ω

0

) < 1, P and

¯

P are not equivalent. If P(ω

0

) = 1, then E[Z] = 1 if

and only if Z(ω

0

) = 1. In this case

¯

P(ω

0

) = Z(ω

0

)P(ω

0

) = 1. And

¯

P and P have to be equivalent.

In summary, if we can ﬁnd ω

0

such that 0 < P(ω

0

) < 1, then Z as constructed above would induce a

probability

¯

P that is not equivalent to P.

3.5. (i)

Proof. Z(HH) =

9

16

, Z(HT) =

9

8

, Z(TH) =

3

8

and Z(TT) =

15

4

.

(ii)

Proof. Z

1

(H) = E

1

[Z

2

](H) = Z

2

(HH)P(ω

2

= H[ω

1

= H) + Z

2

(HT)P(ω

2

= T[ω

1

= H) =

3

4

. Z

1

(T) =

E

1

[Z

2

](T) = Z

2

(TH)P(ω

2

= H[ω

1

= T) +Z

2

(TT)P(ω

2

= T[ω

1

= T) =

3

2

.

(iii)

Proof.

V

1

(H) =

[Z

2

(HH)V

2

(HH)P(ω

2

= H[ω

1

= H) +Z

2

(HT)V

2

(HT)P(ω

2

= T[ω

1

= T)]

Z

1

(H)(1 +r

1

(H))

= 2.4,

V

1

(T) =

[Z

2

(TH)V

2

(TH)P(ω

2

= H[ω

1

= T) +Z

2

(TT)V

2

(TT)P(ω

2

= T[ω

1

= T)]

Z

1

(T)(1 +r

1

(T))

=

1

9

,

and

V

0

=

Z

2

(HH)V

2

(HH)

(1 +

1

4

)(1 +

1

4

)

P(HH) +

Z

2

(HT)V

2

(HT)

(1 +

1

4

)(1 +

1

4

)

P(HT) +

Z

2

(TH)V

2

(TH)

(1 +

1

4

)(1 +

1

2

)

P(TH) + 0 ≈ 1.

3.6.

Proof. U

(x) =

1

x

, so I(x) =

1

x

. (3.3.26) gives E[

Z

(1+r)

N

(1+r)

N

λZ

] = X

0

. So λ =

1

X0

. By (3.3.25), we

have X

N

=

(1+r)

N

λZ

=

X0

Z

(1 + r)

N

. Hence X

n

=

¯

E

n

[

X

N

(1+r)

N−n

] =

¯

E

n

[

X0(1+r)

n

Z

] = X

0

(1 + r)

n

¯

E

n

[

1

Z

] =

X

0

(1 +r)

n 1

Zn

E

n

[Z

1

Z

] =

X0

ξn

, where the second to last “=” comes from Lemma 3.2.6.

3.7.

Proof. U

(x) = x

p−1

and so I(x) = x

1

p−1

. By (3.3.26), we have E[

Z

(1+r)

N

(

λZ

(1+r)

N

)

1

p−1

] = X

0

. Solve it for λ,

we get

λ =

¸

¸

¸

X

0

E

¸

Z

p

p−1

(1+r)

Np

p−1

¸

p−1

=

X

p−1

0

(1 +r)

Np

(E[Z

p

p−1

])

p−1

.

So by (3.3.25), X

N

= (

λZ

(1+r)

N

)

1

p−1

=

λ

1

p−1

Z

1

p−1

(1+r)

N

p−1

=

X0(1+r)

Np

p−1

E[Z

p

p−1

]

Z

1

p−1

(1+r)

N

p−1

=

(1+r)

N

X0Z

1

p−1

E[Z

p

p−1

]

.

3.8. (i)

9

Proof.

d

dx

(U(x) −yx) = U

**(x) −y. So x = I(y) is an extreme point of U(x) −yx. Because
**

d

2

dx

2

(U(x) −yx) =

U

**(x) ≤ 0 (U is concave), x = I(y) is a maximum point. Therefore U(x) −y(x) ≤ U(I(y)) −yI(y) for every
**

x.

(ii)

Proof. Following the hint of the problem, we have

E[U(X

N

)] −E[X

N

λZ

(1 +r)

N

] ≤ E[U(I(

λZ

(1 +r)

N

))] −E[

λZ

(1 +r)

N

I(

λZ

(1 +r)

N

)],

i.e. E[U(X

N

)] −λX

0

≤ E[U(X

∗

N

)] −

¯

E[

λ

(1+r)

N

X

∗

N

] = E[U(X

∗

N

)] −λX

0

. So E[U(X

N

)] ≤ E[U(X

∗

N

)].

3.9. (i)

Proof. X

n

=

¯

E

n

[

X

N

(1+r)

N−n

]. So if X

N

≥ 0, then X

n

≥ 0 for all n.

(ii)

Proof. a) If 0 ≤ x < γ and 0 < y ≤

1

γ

, then U(x) − yx = −yx ≤ 0 and U(I(y)) − yI(y) = U(γ) − yγ =

1 −yγ ≥ 0. So U(x) −yx ≤ U(I(y)) −yI(y).

b) If 0 ≤ x < γ and y >

1

γ

, then U(x) − yx = −yx ≤ 0 and U(I(y)) − yI(y) = U(0) − y 0 = 0. So

U(x) −yx ≤ U(I(y)) −yI(y).

c) If x ≥ γ and 0 < y ≤

1

γ

, then U(x) −yx = 1 −yx and U(I(y)) −yI(y) = U(γ) −yγ = 1 −yγ ≥ 1 −yx.

So U(x) −yx ≤ U(I(y)) −yI(y).

d) If x ≥ γ and y >

1

γ

, then U(x) − yx = 1 − yx < 0 and U(I(y)) − yI(y) = U(0) − y 0 = 0. So

U(x) −yx ≤ U(I(y)) −yI(y).

(iii)

Proof. Using (ii) and set x = X

N

, y =

λZ

(1+r)

N

, where X

N

is a random variable satisfying

¯

E[

X

N

(1+r)

N

] = X

0

,

we have

E[U(X

N

)] −E[

λZ

(1 +r)

N

X

N

] ≤ E[U(X

∗

N

)] −E[

λZ

(1 +r)

N

X

∗

N

].

That is, E[U(X

N

)] −λX

0

≤ E[U(X

∗

N

)] −λX

0

. So E[U(X

N

)] ≤ E[U(X

∗

N

)].

(iv)

Proof. Plug p

m

and ξ

m

into (3.6.4), we have

X

0

=

2

N

¸

m=1

p

m

ξ

m

I(λξ

m

) =

2

N

¸

m=1

p

m

ξ

m

γ1

{λξm≤

1

γ

}

.

So

X0

γ

=

¸

2

N

m=1

p

m

ξ

m

1

{λξm≤

1

γ

}

. Suppose there is a solution λ to (3.6.4), note

X0

γ

> 0, we then can conclude

¦m : λξ

m

≤

1

γ

¦ = ∅. Let K = max¦m : λξ

m

≤

1

γ

¦, then λξ

K

≤

1

γ

< λξ

K+1

. So ξ

K

< ξ

K+1

and

X0

γ

=

¸

K

m=1

p

m

ξ

m

(Note, however, that K could be 2

N

. In this case, ξ

K+1

is interpreted as ∞. Also, note

we are looking for positive solution λ > 0). Conversely, suppose there exists some K so that ξ

K

< ξ

K+1

and

¸

K

m=1

ξ

m

p

m

=

X0

γ

. Then we can ﬁnd λ > 0, such that ξ

K

<

1

λγ

< ξ

K+1

. For such λ, we have

E[

Z

(1 +r)

N

I(

λZ

(1 +r)

N

)] =

2

N

¸

m=1

p

m

ξ

m

1

{λξm≤

1

γ

}

γ =

K

¸

m=1

p

m

ξ

m

γ = X

0

.

Hence (3.6.4) has a solution.

10

(v)

Proof. X

∗

N

(ω

m

) = I(λξ

m

) = γ1

{λξm≤

1

γ

}

=

γ, if m ≤ K

0, if m ≥ K + 1

.

4. American Derivative Securities

Before proceeding to the exercise problems, we ﬁrst give a brief summary of pricing American derivative

securities as presented in the textbook. We shall use the notation of the book.

From the buyer’s perspective: At time n, if the derivative security has not been exercised, then the buyer

can choose a policy τ with τ ∈ o

n

. The valuation formula for cash ﬂow (Theorem 2.4.8) gives a fair price

for the derivative security exercised according to τ:

V

n

(τ) =

N

¸

k=n

¯

E

n

¸

1

{τ=k}

1

(1 +r)

k−n

G

k

=

¯

E

n

¸

1

{τ≤N}

1

(1 +r)

τ−n

G

τ

.

The buyer wants to consider all the possible τ’s, so that he can ﬁnd the least upper bound of security value,

which will be the maximum price of the derivative security acceptable to him. This is the price given by

Deﬁnition 4.4.1: V

n

= max

τ∈Sn

¯

E

n

[1

{τ≤N}

1

(1+r)

τ−n

G

τ

].

From the seller’s perspective: A price process (V

n

)

0≤n≤N

is acceptable to him if and only if at time n,

he can construct a portfolio at cost V

n

so that (i) V

n

≥ G

n

and (ii) he needs no further investing into the

portfolio as time goes by. Formally, the seller can ﬁnd (∆

n

)

0≤n≤N

and (C

n

)

0≤n≤N

so that C

n

≥ 0 and

V

n+1

= ∆

n

S

n+1

+ (1 + r)(V

n

− C

n

− ∆

n

S

n

). Since (

Sn

(1+r)

n

)

0≤n≤N

is a martingale under the risk-neutral

measure

¯

P, we conclude

¯

E

n

¸

V

n+1

(1 +r)

n+1

−

V

n

(1 +r)

n

= −

C

n

(1 +r)

n

≤ 0,

i.e. (

Vn

(1+r)

n

)

0≤n≤N

is a supermartingale. This inspired us to check if the converse is also true. This is exactly

the content of Theorem 4.4.4. So (V

n

)

0≤n≤N

is the value process of a portfolio that needs no further investing

if and only if

Vn

(1+r)

n

0≤n≤N

is a supermartingale under

¯

P (note this is independent of the requirement

V

n

≥ G

n

). In summary, a price process (V

n

)

0≤n≤N

is acceptable to the seller if and only if (i) V

n

≥ G

n

; (ii)

Vn

(1+r)

n

0≤n≤N

is a supermartingale under

¯

P.

Theorem 4.4.2 shows the buyer’s upper bound is the seller’s lower bound. So it gives the price acceptable

to both. Theorem 4.4.3 gives a speciﬁc algorithm for calculating the price, Theorem 4.4.4 establishes the

one-to-one correspondence between super-replication and supermartingale property, and ﬁnally, Theorem

4.4.5 shows how to decide on the optimal exercise policy.

4.1. (i)

Proof. V

P

2

(HH) = 0, V

P

2

(HT) = V

P

2

(TH) = 0.8, V

P

2

(TT) = 3, V

P

1

(H) = 0.32, V

P

1

(T) = 2, V

P

0

= 9.28.

(ii)

Proof. V

C

0

= 5.

(iii)

Proof. g

S

(s) = [4 − s[. We apply Theorem 4.4.3 and have V

S

2

(HH) = 12.8, V

S

2

(HT) = V

S

2

(TH) = 2.4,

V

S

2

(TT) = 3, V

S

1

(H) = 6.08, V

S

1

(T) = 2.16 and V

S

0

= 3.296.

(iv)

11

Proof. First, we note the simple inequality

max(a

1

, b

1

) + max(a

2

, b

2

) ≥ max(a

1

+a

2

, b

1

+b

2

).

“>” holds if and only if b

1

> a

1

, b

2

< a

2

or b

1

< a

1

, b

2

> a

2

. By induction, we can show

V

S

n

= max

g

S

(S

n

),

¯ pV

S

n+1

+

¯

V

S

n+1

1 +r

¸

≤ max

g

P

(S

n

) +g

C

(S

n

),

¯ pV

P

n+1

+

¯

V

P

n+1

1 +r

+

¯ pV

C

n+1

+

¯

V

C

n+1

1 +r

¸

≤ max

g

P

(S

n

),

¯ pV

P

n+1

+

¯

V

P

n+1

1 +r

¸

+ max

g

C

(S

n

),

¯ pV

C

n+1

+

¯

V

C

n+1

1 +r

¸

= V

P

n

+V

C

n

.

As to when “<” holds, suppose m = max¦n : V

S

n

< V

P

n

+ V

C

n

¦. Then clearly m ≤ N −1 and it is possible

that ¦n : V

S

n

< V

P

n

+ V

C

n

¦ = ∅. When this set is not empty, m is characterized as m = max¦n : g

P

(S

n

) <

pV

P

n+1

+ qV

P

n+1

1+r

and g

C

(S

n

) >

pV

C

n+1

+ qV

C

n+1

1+r

or g

P

(S

n

) >

pV

P

n+1

+ qV

P

n+1

1+r

and g

C

(S

n

) <

pV

C

n+1

+ qV

C

n+1

1+r

¦.

4.2.

Proof. For this problem, we need Figure 4.2.1, Figure 4.4.1 and Figure 4.4.2. Then

∆

1

(H) =

V

2

(HH) −V

2

(HT)

S

2

(HH) −S

2

(HT)

= −

1

12

, ∆

1

(T) =

V

2

(TH) −V

2

(TT)

S

2

(TH) −S

2

(TT)

= −1,

and

∆

0

=

V

1

(H) −V

1

(T)

S

1

(H) −S

1

(T)

≈ −0.433.

The optimal exercise time is τ = inf¦n : V

n

= G

n

¦. So

τ(HH) = ∞, τ(HT) = 2, τ(TH) = τ(TT) = 1.

Therefore, the agent borrows 1.36 at time zero and buys the put. At the same time, to hedge the long

position, he needs to borrow again and buy 0.433 shares of stock at time zero.

At time one, if the result of coin toss is tail and the stock price goes down to 2, the value of the portfolio

is X

1

(T) = (1 +r)(−1.36 −0.433S

0

) +0.433S

1

(T) = (1 +

1

4

)(−1.36 −0.433 4) +0.433 2 = −3. The agent

should exercise the put at time one and get 3 to pay oﬀ his debt.

At time one, if the result of coin toss is head and the stock price goes up to 8, the value of the portfolio

is X

1

(H) = (1 + r)(−1.36 − 0.433S

0

) + 0.433S

1

(H) = −0.4. The agent should borrow to buy

1

12

shares of

stock. At time two, if the result of coin toss is head and the stock price goes up to 16, the value of the

portfolio is X

2

(HH) = (1+r)(X

1

(H) −

1

12

S

1

(H)) +

1

12

S

2

(HH) = 0, and the agent should let the put expire.

If at time two, the result of coin toss is tail and the stock price goes down to 4, the value of the portfolio is

X

2

(HT) = (1 +r)(X

1

(H) −

1

12

S

1

(H)) +

1

12

S

2

(HT) = −1. The agent should exercise the put to get 1. This

will pay oﬀ his debt.

4.3.

Proof. We need Figure 1.2.2 for this problem, and calculate the intrinsic value process and price process of

the put as follows.

For the intrinsic value process, G

0

= 0, G

1

(T) = 1, G

2

(TH) =

2

3

, G

2

(TT) =

5

3

, G

3

(THT) = 1,

G

3

(TTH) = 1.75, G

3

(TTT) = 2.125. All the other outcomes of G is negative.

12

For the price process, V

0

= 0.4, V

1

(T) = 1, V

1

(TH) =

2

3

, V

1

(TT) =

5

3

, V

3

(THT) = 1, V

3

(TTH) = 1.75,

V

3

(TTT) = 2.125. All the other outcomes of V is zero.

Therefore the time-zero price of the derivative security is 0.4 and the optimal exercise time satisﬁes

τ(ω) =

∞ if ω

1

= H,

1 if ω

1

= T.

4.4.

Proof. 1.36 is the cost of super-replicating the American derivative security. It enables us to construct a

portfolio suﬃcient to pay oﬀ the derivative security, no matter when the derivative security is exercised. So

to hedge our short position after selling the put, there is no need to charge the insider more than 1.36.

4.5.

Proof. The stopping times in o

0

are

(1) τ ≡ 0;

(2) τ ≡ 1;

(3) τ(HT) = τ(HH) = 1, τ(TH), τ(TT) ∈ ¦2, ∞¦ (4 diﬀerent ones);

(4) τ(HT), τ(HH) ∈ ¦2, ∞¦, τ(TH) = τ(TT) = 1 (4 diﬀerent ones);

(5) τ(HT), τ(HH), τ(TH), τ(TT) ∈ ¦2, ∞¦ (16 diﬀerent ones).

When the option is out of money, the following stopping times do not exercise

(i) τ ≡ 0;

(ii) τ(HT) ∈ ¦2, ∞¦, τ(HH) = ∞, τ(TH), τ(TT) ∈ ¦2, ∞¦ (8 diﬀerent ones);

(iii) τ(HT) ∈ ¦2, ∞¦, τ(HH) = ∞, τ(TH) = τ(TT) = 1 (2 diﬀerent ones).

For (i),

¯

E[1

{τ≤2}

(

4

5

)

τ

G

τ

] = G

0

= 1. For (ii),

¯

E[1

{τ≤2}

(

4

5

)

τ

G

τ

] ≤

¯

E[1

{τ

∗

≤2}

(

4

5

)

τ

∗

G

τ

∗], where τ

∗

(HT) =

2, τ

∗

(HH) = ∞, τ

∗

(TH) = τ

∗

(TT) = 2. So

¯

E[1

{τ

∗

≤2}

(

4

5

)

τ

∗

G

τ

∗] =

1

4

[(

4

5

)

2

1 + (

4

5

)

2

(1 + 4)] = 0.96. For

(iii),

¯

E[1

{τ≤2}

(

4

5

)

τ

G

τ

] has the biggest value when τ satisﬁes τ(HT) = 2, τ(HH) = ∞, τ(TH) = τ(TT) = 1.

This value is 1.36.

4.6. (i)

Proof. The value of the put at time N, if it is not exercised at previous times, is K − S

N

. Hence V

N−1

=

max¦K−S

N−1

,

¯

E

N−1

[

V

N

1+r

]¦ = max¦K−S

N−1

,

K

1+r

−S

N−1

¦ = K−S

N−1

. The second equality comes from

the fact that discounted stock price process is a martingale under risk-neutral probability. By induction, we

can show V

n

= K − S

n

(0 ≤ n ≤ N). So by Theorem 4.4.5, the optimal exercise policy is to sell the stock

at time zero and the value of this derivative security is K −S

0

.

Remark: We cheated a little bit by using American algorithm and Theorem 4.4.5, since they are developed

for the case where τ is allowed to be ∞. But intuitively, results in this chapter should still hold for the case

τ ≤ N, provided we replace “max¦G

n

, 0¦” with “G

n

”.

(ii)

Proof. This is because at time N, if we have to exercise the put and K − S

N

< 0, we can exercise the

European call to set oﬀ the negative payoﬀ. In eﬀect, throughout the portfolio’s lifetime, the portfolio has

intrinsic values greater than that of an American put stuck at K with expiration time N. So, we must have

V

AP

0

≤ V

0

+V

EC

0

≤ K −S

0

+V

EC

0

.

(iii)

13

Proof. Let V

EP

0

denote the time-zero value of a European put with strike K and expiration time N. Then

V

AP

0

≥ V

EP

0

= V

EC

0

−

¯

E[

S

N

−K

(1 +r)

N

] = V

EC

0

−S

0

+

K

(1 +r)

N

.

4.7.

Proof. V

N

= S

N

−K, V

N−1

= max¦S

N−1

−K,

¯

E

N−1

[

V

N

1+r

]¦ = max¦S

N−1

−K, S

N−1

−

K

1+r

¦ = S

N−1

−

K

1+r

.

By induction, we can prove V

n

= S

n

−

K

(1+r)

N−n

(0 ≤ n ≤ N) and V

n

> G

n

for 0 ≤ n ≤ N − 1. So the

time-zero value is S

0

−

K

(1+r)

N

and the optimal exercise time is N.

5. Random Walk

5.1. (i)

Proof. E[α

τ2

] = E[α

(τ2−τ1)+τ1

] = E[α

(τ2−τ1)

]E[α

τ1

] = E[α

τ1

]

2

.

(ii)

Proof. If we deﬁne M

(m)

n

= M

n+τm

−M

τm

(m = 1, 2, ), then (M

(m)

·

)

m

as random functions are i.i.d. with

distributions the same as that of M. So τ

m+1

− τ

m

= inf¦n : M

(m)

n

= 1¦ are i.i.d. with distributions the

same as that of τ

1

. Therefore

E[α

τm

] = E[α

(τm−τm−1)+(τm−1−τm−2)+···+τ1

] = E[α

τ1

]

m

.

(iii)

Proof. Yes, since the argument of (ii) still works for asymmetric random walk.

5.2. (i)

Proof. f

(σ) = pe

σ

− qe

−σ

, so f

(σ) > 0 if and only if σ >

1

2

(ln q − ln p). Since

1

2

(ln q − ln p) < 0,

f(σ) > f(0) = 1 for all σ > 0.

(ii)

Proof. E

n

[

Sn+1

Sn

] = E

n

[e

σXn+1

1

f(σ)

] = pe

σ 1

f(σ)

+qe

−σ 1

f(σ)

= 1.

(iii)

Proof. By optional stopping theorem, E[S

n∧τ1

] = E[S

0

] = 1. Note S

n∧τ1

= e

σMn∧τ

1

(

1

f(σ)

)

n∧τ1

≤ e

σ·1

,

by bounded convergence theorem, E[1

{τ1<∞}

S

τ1

] = E[lim

n→∞

S

n∧τ1

] = lim

n→∞

E[S

n∧τ1

] = 1, that is,

E[1

{τ1<∞}

e

σ

(

1

f(σ)

)

τ1

] = 1. So e

−σ

= E[1

{τ1<∞}

(

1

f(σ)

)

τ1

]. Let σ ↓ 0, again by bounded convergence theorem,

1 = E[1

{τ1<∞}

(

1

f(0)

)

τ1

] = P(τ

1

< ∞).

(iv)

Proof. Set α =

1

f(σ)

=

1

pe

σ

+qe

−σ

, then as σ varies from 0 to ∞, α can take all the values in (0, 1). Write σ

in terms of α, we have e

σ

=

1±

√

1−4pqα

2

2pα

(note 4pqα

2

< 4(

p+q

2

)

2

1

2

= 1). We want to choose σ > 0, so we

should take σ = ln(

1+

√

1−4pqα

2

2pα

). Therefore E[α

τ1

] =

2pα

1+

√

1−4pqα

2

=

1−

√

1−4pqα

2

2qα

.

14

(v)

Proof.

∂

∂α

E[α

τ1

] = E[

∂

∂α

α

τ1

] = E[τ

1

α

τ1−1

], and

1 −

1 −4pqα

2

2qα

=

1

2q

(1 −

1 −4pqα

2

)α

−1

=

1

2q

[−

1

2

(1 −4pqα

2

)

−

1

2

(−4pq2α)α

−1

+ (1 −

1 −4pqα

2

)(−1)α

2

].

So E[τ

1

] = lim

α↑1

∂

∂α

E[α

τ1

] =

1

2q

[−

1

2

(1 −4pq)

−

1

2

(−8pq) −(1 −

√

1 −4pq)] =

1

2p−1

.

5.3. (i)

Proof. Solve the equation pe

σ

+ qe

−σ

= 1 and a positive solution is ln

1+

√

1−4pq

2p

= ln

1−p

p

= ln q −ln p. Set

σ

0

= ln q −ln p, then f(σ

0

) = 1 and f

(σ) > 0 for σ > σ

0

. So f(σ) > 1 for all σ > σ

0

.

(ii)

Proof. As in Exercise 5.2, S

n

= e

σMn

(

1

f(σ)

)

n

is a martingale, and 1 = E[S

0

] = E[S

n∧τ1

] = E[e

σMn∧τ

1

(

1

f(σ)

)

τ1∧n

].

Suppose σ > σ

0

, then by bounded convergence theorem,

1 = E[ lim

n→∞

e

σMn∧τ

1

(

1

f(σ)

)

n∧τ1

] = E[1

{τ1<∞}

e

σ

(

1

f(σ)

)

τ1

].

Let σ ↓ σ

0

, we get P(τ

1

< ∞) = e

−σ0

=

p

q

< 1.

(iii)

Proof. From (ii), we can see E[1

{τ1<∞}

(

1

f(σ)

)

τ1

] = e

−σ

, for σ > σ

0

. Set α =

1

f(α)

, then e

σ

=

1±

√

1−4pqα

2

2pα

. We

need to choose the root so that e

σ

> e

σ0

=

q

p

, so σ = ln(

1+

√

1−4pqα

2

2pα

), then E[α

τ1

1

{τ1<∞}

] =

1−

√

1−4pqα

2

2qα

.

(iv)

Proof. E[τ

1

1

{τ1<∞}

] =

∂

∂α

E[α

τ1

1

{τ1<∞}

][

α=1

=

1

2q

[

4pq

√

1−4pq

− (1 −

√

1 −4pq)] =

1

2q

[

4pq

2q−1

− 1 + 2q − 1] =

p

q

1

2q−1

.

5.4. (i)

Proof. E[α

τ2

] =

¸

∞

k=1

P(τ

2

= 2k)α

2k

=

¸

∞

k=1

(

α

2

)

2k

P(τ

2

= 2k)4

k

. So P(τ

2

= 2k) =

(2k)!

4

k

(k+1)!k!

.

(ii)

Proof. P(τ

2

= 2) =

1

4

. For k ≥ 2, P(τ

2

= 2k) = P(τ

2

≤ 2k) −P(τ

2

≤ 2k −2).

P(τ

2

≤ 2k) = P(M

2k

= 2) +P(M

2k

≥ 4) +P(τ

2

≤ 2k, M

2k

≤ 0)

= P(M

2k

= 2) + 2P(M

2k

≥ 4)

= P(M

2k

= 2) +P(M

2k

≥ 4) +P(M

2k

≤ −4)

= 1 −P(M

2k

= −2) −P(M

2k

= 0).

15

Similarly, P(τ

2

≤ 2k −2) = 1 −P(M

2k−2

= −2) −P(M

2k−2

= 0). So

P(τ

2

= 2k) = P(M

2k−2

= −2) +P(M

2k−2

= 0) −P(M

2k

= −2) −P(M

2k

= 0)

= (

1

2

)

2k−2

¸

(2k −2)!

k!(k −2)!

+

(2k −2)!

(k −1)!(k −1)!

−(

1

2

)

2k

¸

(2k)!

(k + 1)!(k −1)!

+

(2k)!

k!k!

=

(2k)!

4

k

(k + 1)!k!

¸

4

2k(2k −1)

(k + 1)k(k −1) +

4

2k(2k −1)

(k + 1)k

2

−k −(k + 1)

=

(2k)!

4

k

(k + 1)!k!

¸

2(k

2

−1)

2k −1

+

2(k

2

+k)

2k −1

−

4k

2

−1

2k −1

=

(2k)!

4

k

(k + 1)!k!

.

5.5. (i)

Proof. For every path that crosses level m by time n and resides at b at time n, there corresponds a reﬂected

path that resides at time 2m−b. So

P(M

∗

n

≥ m, M

n

= b) = P(M

n

= 2m−b) = (

1

2

)

n

n!

(m+

n−b

2

)!(

n+b

2

−m)!

.

(ii)

Proof.

P(M

∗

n

≥ m, M

n

= b) =

n!

(m+

n−b

2

)!(

n+b

2

−m)!

p

m+

n−b

2

q

n+b

2

−m

.

5.6.

Proof. On the inﬁnite coin-toss space, we deﬁne M

n

= ¦stopping times that takes values 0, 1, , n, ∞¦

and M

∞

= ¦stopping times that takes values 0, 1, 2, ¦. Then the time-zero value V

∗

of the perpetual

American put as in Section 5.4 can be deﬁned as sup

τ∈M∞

¯

E[1

{τ<∞}

(K−Sτ )

+

(1+r)

τ

]. For an American put with

the same strike price K that expires at time n, its time-zero value V

(n)

is max

τ∈Mn

¯

E[1

{τ<∞}

(K−Sτ )

+

(1+r)

τ

].

Clearly (V

(n)

)

n≥0

is nondecreasing and V

(n)

≤ V

∗

for every n. So lim

n

V

(n)

exists and lim

n

V

(n)

≤ V

∗

.

For any given τ ∈ M

∞

, we deﬁne τ

(n)

=

∞, if τ = ∞

τ ∧ n, if τ < ∞

, then τ

(n)

is also a stopping time, τ

(n)

∈ M

n

and lim

n→∞

τ

(n)

= τ. By bounded convergence theorem,

¯

E

¸

1

{τ<∞}

(K −S

τ

)

+

(1 +r)

τ

= lim

n→∞

¯

E

¸

1

{τ

(n)

<∞}

(K −S

τ

(n) )

+

(1 +r)

τ

(n)

≤ lim

n→∞

V

(n)

.

Take sup at the left hand side of the inequality, we get V

∗

≤ lim

n→∞

V

(n)

. Therefore V

∗

= lim

n

V

(n)

.

Remark: In the above proof, rigorously speaking, we should use (K−S

τ

) in the places of (K−S

τ

)

+

. So

this needs some justiﬁcation.

5.8. (i)

16

Proof. v(S

n

) = S

n

≥ S

n

−K = g(S

n

). Under risk-neutral probabilities,

1

(1+r)

n

v(S

n

) =

Sn

(1+r)

n

is a martingale

by Theorem 2.4.4.

(ii)

Proof. If the purchaser chooses to exercises the call at time n, then the discounted risk-neutral expectation

of her payoﬀ is

¯

E

Sn−K

(1+r)

n

= S

0

−

K

(1+r)

n

. Since lim

n→∞

S

0

−

K

(1+r)

n

= S

0

, the value of the call at time

zero is at least sup

n

S

0

−

K

(1+r)

n

= S

0

.

(iii)

Proof. max

g(s),

pv(us)+ qv(ds)

1+r

¸

= max¦s −K,

pu+ qv

1+r

s¦ = max¦s −K, s¦ = s = v(s), so equation (5.4.16) is

satisﬁed. Clearly v(s) = s also satisﬁes the boundary condition (5.4.18).

(iv)

Proof. Suppose τ is an optimal exercise time, then

¯

E

Sτ −K

(1+r)

τ

1

{τ<∞}

≥ S

0

. Then P(τ < ∞) = 0 and

¯

E

K

(1+r)

τ

1

{τ<∞}

> 0. So

¯

E

Sτ −K

(1+r)

τ

1

{τ<∞}

<

¯

E

Sτ

(1+r)

τ

1

{τ<∞}

. Since

Sn

(1+r)

n

n≥0

is a martingale

under risk-neutral measure, by Fatou’s lemma,

¯

E

Sτ

(1+r)

τ

1

{τ<∞}

≤ liminf

n→∞

¯

E

Sτ∧n

(1+r)

τ∧n

1

{τ<∞}

=

liminf

n→∞

¯

E

Sτ∧n

(1+r)

τ∧n

= liminf

n→∞

¯

E[S

0

] = S

0

. Combined, we have S

0

≤

¯

E

Sτ −K

(1+r)

τ

1

{τ<∞}

< S

0

.

Contradiction. So there is no optimal time to exercise the perpetual American call. Simultaneously, we have

shown

¯

E

Sτ −K

(1+r)

τ

1

{τ<∞}

< S

0

for any stopping time τ. Combined with (ii), we conclude S

0

is the least

upper bound for all the prices acceptable to the buyer.

5.9. (i)

Proof. Suppose v(s) = s

p

, then we have s

p

=

2

5

2

p

s

p

+

2

5

s

p

2

p

. So 1 =

2

p+1

5

+

2

1−p

5

. Solve it for p, we get p = 1

or p = −1.

(ii)

Proof. Since lim

s→∞

v(s) = lim

s→∞

(As +

B

s

) = 0, we must have A = 0.

(iii)

Proof. f

B

(s) = 0 if and only if B+s

2

−4s = 0. The discriminant ∆ = (−4)

2

−4B = 4(4−B). So for B ≤ 4,

the equation has roots and for B > 4, this equation does not have roots.

(iv)

Proof. Suppose B ≤ 4, then the equation s

2

− 4s + B = 0 has solution 2 ±

√

4 −B. By drawing graphs of

4 −s and

B

s

, we should choose B = 4 and s

B

= 2 +

√

4 −B = 2.

(v)

Proof. To have continuous derivative, we must have −1 = −

B

s

2

B

. Plug B = s

2

B

back into s

2

B

−4s

B

+B = 0,

we get s

B

= 2. This gives B = 4.

6. Interest-Rate-Dependent Assets

6.2.

17

Proof. X

k

= S

k

−E

k

[D

m

(S

m

−K)]D

−1

k

−

Sn

Bn,m

B

k,m

for n ≤ k ≤ m. Then

E

k−1

[D

k

X

k

] = E

k−1

[D

k

S

k

−E

k

[D

m

(S

m

−K)] −

S

n

B

n,m

B

k,m

D

k

]

= D

k−1

S

k−1

−E

k−1

[D

m

(S

m

−K)] −

S

n

B

n,m

E

k−1

[E

k

[D

m

]]

= D

k−1

[S

k−1

−E

k−1

[D

m

(S

m

−K)]D

−1

k−1

−

S

n

B

n,m

B

k−1,m

]

= D

k−1

X

k−1

.

6.3.

Proof.

1

D

n

¯

E

n

[D

m+1

R

m

] =

1

D

n

¯

E

n

[D

m

(1 +R

m

)

−1

R

m

] =

¯

E

n

[

D

m

−D

m+1

D

n

] = B

n,m

−B

n,m+1

.

6.4.(i)

Proof. D

1

V

1

= E

1

[D

3

V

3

] = E

1

[D

2

V

2

] = D

2

E

1

[V

2

]. So V

1

=

D2

D1

E

1

[V

2

] =

1

1+R1

E

1

[V

2

]. In particular,

V

1

(H) =

1

1+R1(H)

V

2

(HH)P(w

2

= H[w

1

= H) =

4

21

, V

1

(T) = 0.

(ii)

Proof. Let X

0

=

2

21

. Suppose we buy ∆

0

shares of the maturity two bond, then at time one, the value of

our portfolio is X

1

= (1 +R

0

)(X

0

−∆B

0,2

) + ∆

0

B

1,2

. To replicate the value V

1

, we must have

V

1

(H) = (1 +R

0

)(X

0

−∆

0

B

0,2

) + ∆

0

B

1,2

(H)

V

1

(T) = (1 +R

0

)(X

0

−∆

0

B

0,2

) + ∆

0

B

1,2

(T).

So ∆

0

=

V1(H)−V1(T)

B1,2(H)−B1,2(T)

=

4

3

. The hedging strategy is therefore to borrow

4

3

B

0,2

−

2

21

=

20

21

and buy

4

3

share of the maturity two bond. The reason why we do not invest in the maturity three bond is that

B

1,3

(H) = B

1,3

(T)(=

4

7

) and the portfolio will therefore have the same value at time one regardless the

outcome of ﬁrst coin toss. This makes impossible the replication of V

1

, since V

1

(H) = V

1

(T).

(iii)

Proof. Suppose we buy ∆

1

share of the maturity three bond at time one, then to replicate V

2

at time

two, we must have V

2

= (1 + R

1

)(X

1

− ∆

1

B

1,3

) + ∆

1

B

2,3

. So ∆

1

(H) =

V2(HH)−V2(HT)

B2,3(HH)−B2,3(HT)

= −

2

3

, and

∆

1

(T) =

V2(TH)−V2(TT)

B2,3(TH)−B2,3(TT)

= 0. So the hedging strategy is as follows. If the outcome of ﬁrst coin toss is

T, then we do nothing. If the outcome of ﬁrst coin toss is H, then short

2

3

shares of the maturity three

bond and invest the income into the money market account. We do not invest in the maturity two bond,

because at time two, the value of the bond is its face value and our portfolio will therefore have the same

value regardless outcomes of coin tosses. This makes impossible the replication of V

2

.

6.5. (i)

18

Proof. Suppose 1 ≤ n ≤ m, then

¯

E

m+1

n−1

[F

n,m

] =

¯

E

n−1

[B

−1

n,m+1

(B

n,m

−B

n,m+1

)Z

n,m+1

Z

−1

n−1,m+1

]

=

¯

E

n−1

¸

B

n,m

B

n,m+1

−1

B

n,m+1

D

n

B

n−1,m+1

D

n−1

=

D

n

B

n−1,m+1

D

n−1

¯

E

n−1

[D

−1

n

¯

E

n

[D

m

] −D

−1

n

¯

E

n

[D

m+1

]]

=

¯

E

n−1

[D

m

−D

m+1

]

B

n−1,m1

D

n−1

=

B

n−1,m

−B

n−1,m+1

B

n−1,m+1

= F

n−1,m

.

6.6. (i)

Proof. The agent enters the forward contract at no cost. He is obliged to buy certain asset at time m at

the strike price K = For

n,m

=

Sn

Bn,m

. At time n + 1, the contract has the value

¯

E

n+1

[D

m

(S

m

− K)] =

S

n+1

−KB

n+1,m

= S

n+1

−

SnBn+1,m

Bn,m

. So if the agent sells this contract at time n +1, he will receive a cash

ﬂow of S

n+1

−

SnBn+1,m

Bn,m

(ii)

Proof. By (i), the cash ﬂow generated at time n + 1 is

(1 +r)

m−n−1

S

n+1

−

S

n

B

n+1,m

B

n,m

= (1 +r)

m−n−1

S

n+1

−

Sn

(1+r)

m−n−1

1

(1+r)

m−n

= (1 +r)

m−n−1

S

n+1

−(1 +r)

m−n

S

n

= (1 +r)

m

¯

E

n1

[

S

m

(1 +r)

m

] + (1 +r)

m

¯

E

n

[

S

m

(1 +r)

m

]

= Fut

n+1,m

−Fut

n,m

.

6.7.

Proof.

ψ

n+1

(0) =

¯

E[D

n+1

V

n+1

(0)]

=

¯

E[

D

n

1 +r

n

(0)

1

{#H(ω1···ωn+1)=0}

]

=

¯

E[

D

n

1 +r

n

(0)

1

{#H(ω1···ωn)=0}

1

{ωn+1=T}

]

=

1

2

¯

E[

D

n

1 +r

n

(0)

]

=

ψ

n

(0)

2(1 +r

n

(0))

.

19

For k = 1, 2, , n,

ψ

n+1

(k) =

¯

E

¸

D

n

1 +r

n

(#H(ω

1

ω

n

))

1

{#H(ω1···ωn+1)=k}

=

¯

E

¸

D

n

1 +r

n

(k)

1

{#H(ω1···ωn)=k}

1

{ωn+1=T}

+

¯

E

¸

D

n

1 +r

n

(k −1)

1

{#H(ω1···ωn)=k}

1

{ωn+1=H}

=

1

2

¯

E[D

n

V

n

(k)]

1 +r

n

(k)

+

1

2

¯

E[D

n

V

n

(k −1)]

1 +r

n

(k −1)

=

ψ

n

(k)

2(1 +r

n

(k))

+

ψ

n

(k −1)

2(1 +r

n

(k −1))

.

Finally,

ψ

n+1

(n + 1) =

¯

E[D

n+1

V

n+1

(n + 1)] =

¯

E

¸

D

n

1 +r

n

(n)

1

{#H(ω1···ωn)=n}

1

{ωn+1=H}

=

ψ

n

(n)

2(1 +r

n

(n))

.

Remark: In the above proof, we have used the independence of ω

n+1

and (ω

1

, , ω

n

). This is guaranteed

by the assumption that ¯ p = ¯ q =

1

2

(note ξ ⊥ η if and only if E[ξ[η] = constant). In case the binomial model

has stochastic up- and down-factor u

n

and d

n

, we can use the fact that

¯

P(ω

n+1

= H[ω

1

, , ω

n

) = p

n

and

¯

P(ω

n+1

= T[ω

1

, , ω

n

) = q

n

, where p

n

=

1+rn−dn

un−dn

and q

n

=

u−1−rn

un−dn

(cf. solution of Exercise 2.9 and

notes on page 39). Then for any X ∈ T

n

= σ(ω

1

, , ω

n

), we have

¯

E[Xf(ω

n+1

)] =

¯

E[X

¯

E[f(ω

n+1

)[T

n

]] =

¯

E[X(p

n

f(H) +q

n

f(T))].

20

2 Stochastic Calculus for Finance II: Continuous-Time Models

1. General Probability Theory

1.1. (i)

Proof. P(B) = P((B −A) ∪ A) = P(B −A) +P(A) ≥ P(A).

(ii)

Proof. P(A) ≤ P(A

n

) implies P(A) ≤ lim

n→∞

P(A

n

) = 0. So 0 ≤ P(A) ≤ 0, which means P(A) = 0.

1.2. (i)

Proof. We deﬁne a mapping φ from A to Ω as follows: φ(ω

1

ω

2

) = ω

1

ω

3

ω

5

. Then φ is one-to-one and

onto. So the cardinality of A is the same as that of Ω, which means in particular that A is uncountably

inﬁnite.

(ii)

Proof. Let A

n

= ¦ω = ω

1

ω

2

: ω

1

= ω

2

, , ω

2n−1

= ω

2n

¦. Then A

n

↓ A as n → ∞. So

P(A) = lim

n→∞

P(A

n

) = lim

n→∞

[P(ω

1

= ω

2

) P(ω

2n−1

= ω

2n

)] = lim

n→∞

(p

2

+ (1 −p)

2

)

n

.

Since p

2

+ (1 −p)

2

< 1 for 0 < p < 1, we have lim

n→∞

(p

2

+ (1 −p)

2

)

n

= 0. This implies P(A) = 0.

1.3.

Proof. Clearly P(∅) = 0. For any A and B, if both of them are ﬁnite, then A ∪ B is also ﬁnite. So

P(A∪B) = 0 = P(A) +P(B). If at least one of them is inﬁnite, then A∪B is also inﬁnite. So P(A∪B) =

∞ = P(A) +P(B). Similarly, we can prove P(∪

N

n=1

A

n

) =

¸

N

n=1

P(A

n

), even if A

n

’s are not disjoint.

To see countable additivity property doesn’t hold for P, let A

n

= ¦

1

n

¦. Then A = ∪

∞

n=1

A

n

is an inﬁnite

set and therefore P(A) = ∞. However, P(A

n

) = 0 for each n. So P(A) =

¸

∞

n=1

P(A

n

).

1.4. (i)

Proof. By Example 1.2.5, we can construct a random variable X on the coin-toss space, which is uniformly

distributed on [0, 1]. For the strictly increasing and continuous function N(x) =

x

−∞

1

√

2π

e

−

ξ

2

2

dξ, we let

Z = N

−1

(X). Then P(Z ≤ a) = P(X ≤ N(a)) = N(a) for any real number a, i.e. Z is a standard normal

random variable on the coin-toss space (Ω

∞

, T

∞

, P).

(ii)

Proof. Deﬁne X

n

=

¸

n

i=1

Yi

2

i

, where

Y

i

(ω) =

1, if ω

i

= H

0, if ω

i

= T.

Then X

n

(ω) → X(ω) for every ω ∈ Ω

∞

where X is deﬁned as in (i). So Z

n

= N

−1

(X

n

) → Z = N

−1

(X) for

every ω. Clearly Z

n

depends only on the ﬁrst n coin tosses and ¦Z

n

¦

n≥1

is the desired sequence.

1.5.

21

Proof. First, by the information given by the problem, we have

Ω

∞

0

1

[0,X(ω))

(x)dxdP(ω) =

∞

0

Ω

1

[0,X(ω))

(x)dP(ω)dx.

The left side of this equation equals to

Ω

X(ω)

0

dxdP(ω) =

Ω

X(ω)dP(ω) = E¦X¦.

The right side of the equation equals to

∞

0

Ω

1

{x<X(ω)}

dP(ω)dx =

∞

0

P(x < X)dx =

∞

0

(1 −F(x))dx.

So E¦X¦ =

∞

0

(1 −F(x))dx.

1.6. (i)

Proof.

E¦e

uX

¦ =

∞

−∞

e

ux

1

σ

√

2π

e

−

(x−µ)

2

2σ

2

dx

=

∞

−∞

1

σ

√

2π

e

−

(x−µ)

2

−2σ

2

ux

2σ

2

dx

=

∞

−∞

1

σ

√

2π

e

−

[x−(µ+σ

2

u)]

2

−(2σ

2

uµ+σ

4

u

2

)

2σ

2

dx

= e

uµ+

σ

2

u

2

2

∞

−∞

1

σ

√

2π

e

−

[x−(µ+σ

2

u)]

2

2σ

2

dx

= e

uµ+

σ

2

u

2

2

(ii)

Proof. E¦φ(X)¦ = E¦e

uX

¦ = e

uµ+

u

2

σ

2

2

≥ e

uµ

= φ(E¦X¦).

1.7. (i)

Proof. Since [f

n

(x)[ ≤

1

√

2nπ

, f(x) = lim

n→∞

f

n

(x) = 0.

(ii)

Proof. By the change of variable formula,

∞

−∞

f

n

(x)dx =

∞

−∞

1

√

2π

e

−

x

2

2

dx = 1. So we must have

lim

n→∞

∞

−∞

f

n

(x)dx = 1.

(iii)

Proof. This is not contradictory with the Monotone Convergence Theorem, since ¦f

n

¦

n≥1

doesn’t increase

to 0.

22

1.8. (i)

Proof. By (1.9.1), [Y

n

[ =

e

tX

−e

snX

t−sn

= [Xe

θX

[ = Xe

θX

≤ Xe

2tX

. The last inequality is by X ≥ 0 and the

fact that θ is between t and s

n

, and hence smaller than 2t for n suﬃciently large. So by the Dominated

Convergence Theorem, ϕ

(t) = lim

n→∞

E¦Y

n

¦ = E¦lim

n→∞

Y

n

¦ = E¦Xe

tX

¦.

(ii)

Proof. Since E[e

tX

+

1

{X≥0}

] +E[e

−tX

−

1

{X<0}

] = E[e

tX

] < ∞ for every t ∈ R, E[e

t|X|

] = E[e

tX

+

1

{X≥0}

] +

E[e

−(−t)X

−

1

{X<0}

] < ∞ for every t ∈ R. Similarly, we have E[[X[e

t|X|

] < ∞ for every t ∈ R. So, similar to

(i), we have [Y

n

[ = [Xe

θX

[ ≤ [X[e

2t|X|

for n suﬃciently large, So by the Dominated Convergence Theorem,

ϕ

(t) = lim

n→∞

E¦Y

n

¦ = E¦lim

n→∞

Y

n

¦ = E¦Xe

tX

¦.

1.9.

Proof. If g(x) is of the form 1

B

(x), where B is a Borel subset of R, then the desired equality is just (1.9.3).

By the linearity of Lebesgue integral, the desired equality also holds for simple functions, i.e. g of the

form g(x) =

¸

n

i=1

1

Bi

(x), where each B

i

is a Borel subset of R. Since any nonnegative, Borel-measurable

function g is the limit of an increasing sequence of simple functions, the desired equality can be proved by

the Monotone Convergence Theorem.

1.10. (i)

Proof. If ¦A

i

¦

∞

i=1

is a sequence of disjoint Borel subsets of [0, 1], then by the Monotone Convergence Theorem,

¯

P(∪

∞

i=1

A

i

) equals to

1

∪

∞

i=1

Ai

ZdP =

lim

n→∞

1

∪

n

i=1

Ai

ZdP = lim

n→∞

1

∪

n

i=1

Ai

ZdP = lim

n→∞

n

¸

i=1

Ai

ZdP =

∞

¸

i=1

¯

P(A

i

).

Meanwhile,

¯

P(Ω) = 2P([

1

2

, 1]) = 1. So

¯

P is a probability measure.

(ii)

Proof. If P(A) = 0, then

¯

P(A) =

A

ZdP = 2

A∩[

1

2

,1]

dP = 2P(A∩ [

1

2

, 1]) = 0.

(iii)

Proof. Let A = [0,

1

2

).

1.11.

Proof.

¯

E¦e

uY

¦ = E¦e

uY

Z¦ = E¦e

uX+uθ

e

−θX−

θ

2

2

¦ = e

uθ−

θ

2

2

E¦e

(u−θ)X

¦ = e

uθ−

θ

2

2

e

(u−θ)

2

2

= e

u

2

2

.

1.12.

Proof. First,

ˆ

Z = e

θY −

θ

2

2

= e

θ(X+θ)−

θ

2

2

= e

θ

2

2

+θX

= Z

−1

. Second, for any A ∈ T,

ˆ

P(A) =

A

ˆ

Zd

¯

P =

(1

A

ˆ

Z)ZdP =

1

A

dP = P(A). So P =

ˆ

P. In particular, X is standard normal under

ˆ

P, since it’s standard

normal under P.

1.13. (i)

23

Proof.

1

P(X ∈ B(x, )) =

1

x+

2

x−

2

1

√

2π

e

−

u

2

2

du is approximately

1

1

√

2π

e

−

x

2

2

=

1

√

2π

e

−

X

2

(¯ ω)

2

.

(ii)

Proof. Similar to (i).

(iii)

Proof. ¦X ∈ B(x, )¦ = ¦X ∈ B(y −θ, )¦ = ¦X +θ ∈ B(y, )¦ = ¦Y ∈ B(y, )¦.

(iv)

Proof. By (i)-(iii),

P(A)

P(A)

is approximately

√

2π

e

−

Y

2

(¯ ω)

2

√

2π

e

−

X

2

(¯ ω)

2

= e

−

Y

2

(¯ ω)−X

2

(¯ ω)

2

= e

−

(X(¯ ω)+θ)

2

−X

2

(¯ ω)

2

= e

−θX(¯ ω)−

θ

2

2

.

1.14. (i)

Proof.

¯

P(Ω) =

¯

λ

λ

e

−(

λ−λ)X

dP =

¯

λ

λ

∞

0

e

−(

λ−λ)x

λe

−λx

dx =

∞

0

¯

λe

−

λx

dx = 1.

(ii)

Proof.

¯

P(X ≤ a) =

{X≤a}

¯

λ

λ

e

−(

λ−λ)X

dP =

a

0

¯

λ

λ

e

−(

λ−λ)x

λe

−λx

dx =

a

0

¯

λe

−

λx

dx = 1 −e

−

λa

.

1.15. (i)

Proof. Clearly Z ≥ 0. Furthermore, we have

E¦Z¦ = E

h(g(X))g

(X)

f(X)

=

∞

−∞

h(g(x))g

(x)

f(x)

f(x)dx =

∞

−∞

h(g(x))dg(x) =

∞

−∞

h(u)du = 1.

(ii)

Proof.

¯

P(Y ≤ a) =

{g(X)≤a}

h(g(X))g

(X)

f(X)

dP =

g

−1

(a)

−∞

h(g(x))g

(x)

f(x)

f(x)dx =

g

−1

(a)

−∞

h(g(x))dg(x).

By the change of variable formula, the last equation above equals to

a

−∞

h(u)du. So Y has density h under

¯

P.

24

2. Information and Conditioning

2.1.

Proof. For any real number a, we have ¦X ≤ a¦ ∈ T

0

= ¦∅, Ω¦. So P(X ≤ a) is either 0 or 1. Since

lim

a→∞

P(X ≤ a) = 1 and lim

a→∞

P(X ≤ a) = 0, we can ﬁnd a number x

0

such that P(X ≤ x

0

) = 1 and

P(X ≤ x) = 0 for any x < x

0

. So

P(X = x

0

) = lim

n→∞

P(x

0

−

1

n

< X ≤ x

0

) = lim

n→∞

(P(X ≤ x

0

) −P(X ≤ x

0

−

1

n

)) = 1.

2.2. (i)

Proof. σ(X) = ¦∅, Ω, ¦HT, TH¦, ¦TT, HH¦¦.

(ii)

Proof. σ(S

1

) = ¦∅, Ω, ¦HH, HT¦, ¦TH, TT¦¦.

(iii)

Proof.

¯

P(¦HT, TH¦ ∩ ¦HH, HT¦) =

¯

P(¦HT¦) =

1

4

,

¯

P(¦HT, TH¦) =

¯

P(¦HT¦) +

¯

P(¦TH¦) =

1

4

+

1

4

=

1

2

,

and

¯

P(¦HH, HT¦) =

¯

P(¦HH¦) +

¯

P(¦HT¦) =

1

4

+

1

4

=

1

2

. So we have

¯

P(¦HT, TH¦ ∩ ¦HH, HT¦) =

¯

P(¦HT, TH¦)

¯

P(¦HH, HT¦).

Similarly, we can work on other elements of σ(X) and σ(S

1

) and show that

¯

P(A∩ B) =

¯

P(A)

¯

P(B) for any

A ∈ σ(X) and B ∈ σ(S

1

). So σ(X) and σ(S

1

) are independent under

¯

P.

(iv)

Proof. P(¦HT, TH¦ ∩ ¦HH, HT¦) = P(¦HT¦) =

2

9

, P(¦HT, TH¦) =

2

9

+

2

9

=

4

9

and P(¦HH, HT¦) =

4

9

+

2

9

=

6

9

. So

P(¦HT, TH¦ ∩ ¦HH, HT¦) = P(¦HT, TH¦)P(¦HH, HT¦).

Hence σ(X) and σ(S

1

) are not independent under P.

(v)

Proof. Because S

1

and X are not independent under the probability measure P, knowing the value of X

will aﬀect our opinion on the distribution of S

1

.

2.3.

Proof. We note (V, W) are jointly Gaussian, so to prove their independence it suﬃces to show they are

uncorrelated. Indeed, E¦V W¦ = E¦−X

2

sin θ cos θ+XY cos

2

θ−XY sin

2

θ+Y

2

sin θ cos θ¦ = −sin θ cos θ+

0 + 0 + sin θ cos θ = 0.

2.4. (i)

25

Proof.

E¦e

uX+vY

¦ = E¦e

uX+vXZ

¦

= E¦e

uX+vXZ

[Z = 1¦P(Z = 1) +E¦e

uX+vXZ

[Z = −1¦P(Z = −1)

=

1

2

E¦e

uX+vX

¦ +

1

2

E¦e

uX−vX

¦

=

1

2

[e

(u+v)

2

2

+e

(u−v)

2

2

]

= e

u

2

+v

2

2

e

uv

+e

−uv

2

.

(ii)

Proof. Let u = 0.

(iii)

Proof. E¦e

uX

¦ = e

u

2

2

and E¦e

vY

¦ = e

v

2

2

. So E¦e

uX+vY

¦ = E¦e

uX

¦E¦e

vY

¦. Therefore X and Y cannot

be independent.

2.5.

Proof. The density f

X

(x) of X can be obtained by

f

X

(x) =

f

X,Y

(x, y)dy =

{y≥−|x|}

2[x[ +y

√

2π

e

−

(2|x|+y)

2

2

dy =

{ξ≥|x|}

ξ

√

2π

e

−

ξ

2

2

dξ =

1

√

2π

e

−

x

2

2

.

The density f

Y

(y) of Y can be obtained by

f

Y

(y) =

f

XY

(x, y)dx

=

1

{|x|≥−y}

2[x[ +y

√

2π

e

−

(2|x|+y)

2

2

dx

=

∞

0∨(−y)

2x +y

√

2π

e

−

(2x+y)

2

2

dx +

0∧y

−∞

−2x +y

√

2π

e

−

(−2x+y)

2

2

dx

=

∞

0∨(−y)

2x +y

√

2π

e

−

(2x+y)

2

2

dx +

0∨(−y)

∞

2x +y

√

2π

e

−

(2x+y)

2

2

d(−x)

= 2

∞

|y|

ξ

√

2π

e

−

ξ

2

2

d(

ξ

2

)

=

1

√

2π

e

−

y

2

2

.

So both X and Y are standard normal random variables. Since f

X,Y

(x, y) = f

X

(x)f

Y

(y), X and Y are not

26

independent. However, if we set F(t) =

∞

t

u

2

√

2π

e

−

u

2

2

du, we have

E¦XY ¦ =

∞

−∞

∞

−∞

xyf

X,Y

(x, y)dxdy

=

∞

−∞

∞

−∞

xy1

{y≥−|x|}

2[x[ +y

√

2π

e

−

(2|x|+y)

2

2

dxdy

=

∞

−∞

xdx

∞

−|x|

y

2[x[ +y

√

2π

e

−

(2|x|+y)

2

2

dy

=

∞

−∞

xdx

∞

|x|

(ξ −2[x[)

ξ

√

2π

e

−

ξ

2

2

dξ

=

∞

−∞

xdx(

∞

|x|

ξ

2

√

2π

e

−

ξ

2

2

dξ −2[x[

e

−

x

2

2

√

2π

)

=

∞

0

x

∞

x

ξ

2

√

2π

e

−

ξ

2

2

dξdx +

0

−∞

x

∞

−x

ξ

2

√

2π

e

−

ξ

2

2

dξdx

=

∞

0

xF(x)dx +

0

−∞

xF(−x)dx.

So E¦XY ¦ =

∞

0

xF(x)dx −

∞

0

uF(u)du = 0.

2.6. (i)

Proof. σ(X) = ¦∅, Ω, ¦a, b¦, ¦c, d¦¦.

(ii)

Proof.

E¦Y [X¦ =

¸

α∈{a,b,c,d}

E¦Y 1

{X=α}

¦

P(X = α)

1

{X=α}

.

(iii)

Proof.

E¦Z[X¦ = X +E¦Y [X¦ = X +

¸

α∈{a,b,c,d}

E¦Y 1

{X=α}

¦

P(X = α)

1

{X=α}

.

(iv)

Proof. E¦Z[X¦ −E¦Y [X¦ = E¦Z −Y [X¦ = E¦X[X¦ = X.

2.7.

27

Proof. Let µ = E¦Y −X¦ and ξ = E¦Y −X −µ[(¦. Note ξ is (-measurable, we have

V ar(Y −X) = E¦(Y −X −µ)

2

¦

= E¦[(Y −E¦Y [(¦) + (E¦Y [(¦ −X −µ)]

2

¦

= V ar(Err) + 2E¦(Y −E¦Y [(¦)ξ¦ +E¦ξ

2

¦

= V ar(Err) + 2E¦Y ξ −E¦Y [(¦ξ¦ +E¦ξ

2

¦

= V ar(Err) +E¦ξ

2

¦

≥ V ar(Err).

2.8.

Proof. It suﬃces to prove the more general case. For any σ(X)-measurable random variable ξ, E¦Y

2

ξ¦ =

E¦(Y −E¦Y [X¦)ξ¦ = E¦Y ξ −E¦Y [X¦ξ¦ = E¦Y ξ¦ −E¦Y ξ¦ = 0.

2.9. (i)

Proof. Consider the dice-toss space similar to the coin-toss space. Then a typical element ω in this space

is an inﬁnite sequence ω

1

ω

2

ω

3

, with ω

i

∈ ¦1, 2, , 6¦ (i ∈ N). We deﬁne X(ω) = ω

1

and f(x) =

1

{odd integers}

(x). Then it’s easy to see

σ(X) = ¦∅, Ω, ¦ω : ω

1

= 1¦, , ¦ω : ω

1

= 6¦¦

and σ(f(X)) equals to

¦∅, Ω, ¦ω : ω

1

= 1¦ ∪ ¦ω : ω

1

= 3¦ ∪ ¦ω : ω

1

= 5¦, ¦ω : ω

1

= 2¦ ∪ ¦ω : ω

1

= 4¦ ∪ ¦ω : ω

1

= 6¦¦.

So ¦∅, Ω¦ ⊂ σ(f(X)) ⊂ σ(X), and each of these containment is strict.

(ii)

Proof. No. σ(f(X)) ⊂ σ(X) is always true.

2.10.

Proof.

A

g(X)dP = E¦g(X)1

B

(X)¦

=

∞

−∞

g(x)1

B

(x)f

X

(x)dx

=

yf

X,Y

(x, y)

f

X

(x)

dy1

B

(x)f

X

(x)dx

=

y1

B

(x)f

X,Y

(x, y)dxdy

= E¦Y 1

B

(X)¦

= E¦Y I

A

¦

=

A

Y dP.

28

2.11. (i)

Proof. We can ﬁnd a sequence ¦W

n

¦

n≥1

of σ(X)-measurable simple functions such that W

n

↑ W. Each W

n

can be written in the form

¸

Kn

i=1

a

n

i

1

A

n

i

, where A

n

i

’s belong to σ(X) and are disjoint. So each A

n

i

can be

written as ¦X ∈ B

n

i

¦ for some Borel subset B

n

i

of R, i.e. W

n

=

¸

Kn

i=1

a

n

i

1

{X∈B

n

i

}

=

¸

Kn

i=1

a

n

i

1

B

n

i

(X) = g

n

(X),

where g

n

(x) =

¸

Kn

i=1

a

n

i

1

B

n

i

(x). Deﬁne g = limsup g

n

, then g is a Borel function. By taking upper limits on

both sides of W

n

= g

n

(X), we get W = g(X).

(ii)

Proof. Note E¦Y [X¦ is σ(X)-measurable. By (i), we can ﬁnd a Borel function g such that E¦Y [X¦ =

g(X).

3. Brownian Motion

3.1.

Proof. We have T

t

⊂ T

u1

and W

u2

−W

u1

is independent of T

u1

. So in particular, W

u2

−W

u1

is independent

of T

t

.

3.2.

Proof. E[W

2

t

−W

2

s

[T

s

] = E[(W

t

−W

s

)

2

+ 2W

t

W

s

−2W

2

s

[T

s

] = t −s + 2W

s

E[W

t

−W

s

[T

s

] = t −s.

3.3.

Proof. ϕ

(3)

(u) = 2σ

4

ue

1

2

σ

2

u

2

+(σ

2

+σ

4

u

2

)σ

2

ue

1

2

σ

2

u

2

= e

1

2

σ

2

u

2

(3σ

4

u+σ

4

u

2

), and ϕ

(4)

(u) = σ

2

ue

1

2

σ

2

u

2

(3σ

4

u+

σ

4

u

2

) +e

1

2

σ

2

u

2

(3σ

4

+ 2σ

4

u). So E[(X −µ)

4

] = ϕ

(4)

(0) = 3σ

4

.

3.4. (i)

Proof. Assume there exists A ∈ T, such that P(A) > 0 and for every ω ∈ A, lim

n

¸

n−1

j=0

[W

tj+1

−W

tj

[(ω) <

∞. Then for every ω ∈ A,

¸

n−1

j=0

(W

tj+1

−W

tj

)

2

(ω) ≤ max

0≤k≤n−1

[W

t

k+1

−W

t

k

[(ω)

¸

n−1

j=0

[W

tj+1

−W

tj

[(ω) →

0, since lim

n→∞

max

0≤k≤n−1

[W

t

k+1

− W

t

k

[(ω) = 0. This is a contradiction with lim

n→∞

¸

n−1

j=0

(W

tj+1

−

W

tj

)

2

= T a.s..

(ii)

Proof. Note

¸

n−1

j=0

(W

tj+1

−W

tj

)

3

≤ max

0≤k≤n−1

[W

t

k+1

−W

t

k

[

¸

n−1

j=0

(W

tj+1

−W

tj

)

2

→ 0 as n → ∞, by an

argument similar to (i).

3.5.

29

Proof.

E[e

−rT

(S

T

−K)

+

]

= e

−rT

∞

1

σ

(ln

K

S

0

−(r−

1

2

σ

2

)T)

(S

0

e

(r−

1

2

σ

2

)T+σx

−K)

e

−

x

2

2T

√

2πT

dx

= e

−rT

∞

1

σ

√

T

(ln

K

S

0

−(r−

1

2

σ

2

)T)

(S

0

e

(r−

1

2

σ

2

)T+σ

√

Ty

−K)

e

−

y

2

2

√

2π

dy

= S

0

e

−

1

2

σ

2

T

∞

1

σ

√

T

(ln

K

S

0

−(r−

1

2

σ

2

)T)

1

√

2π

e

−

y

2

2

+σ

√

Ty

dy −Ke

−rT

∞

1

σ

√

T

(ln

K

S

0

−(r−

1

2

σ

2

)T)

1

√

2π

e

−

y

2

2

dy

= S

0

∞

1

σ

√

T

(ln

K

S

0

−(r−

1

2

σ

2

)T)−σ

√

T

1

√

2π

e

−

ξ

2

2

dξ −Ke

−rT

N

1

σ

√

T

(ln

S

0

K

+ (r −

1

2

σ

2

)T)

= Ke

−rT

N(d

+

(T, S

0

)) −Ke

−rT

N(d

−

(T, S

0

)).

3.6. (i)

Proof.

E[f(X

t

)[T

t

] = E[f(W

t

−W

s

+a)[T

s

][

a=Ws+µt

= E[f(W

t−s

+a)][

a=Ws+µt

=

∞

−∞

f(x +W

s

+µt)

e

−

x

2

2(t−s)

2π(t −s)

dx

=

∞

−∞

f(y)

e

−

(y−Ws−µs−µ(t−s))

2

2(t−s)

2π(t −s)

dy

= g(X

s

).

So E[f(X

t

)[T

s

] =

∞

−∞

f(y)p(t −s, X

s

, y)dy with p(τ, x, y) =

1

√

2πτ

e

−

(y−x−µτ)

2

2τ

.

(ii)

Proof. E[f(S

t

)[T

s

] = E[f(S

0

e

σXt

)[T

s

] with µ =

v

σ

. So by (i),

E[f(S

t

)[T

s

] =

∞

−∞

f(S

0

e

σy

)

1

2π(t −s)

e

−

(y−Xs−µ(t−s))

2

2(t−s)

dy

S0e

σy

=z

=

∞

0

f(z)

1

2π(t −s)

e

−

(

1

σ

ln

z

S

0

−

1

σ

ln

Ss

S

0

−µ(t−s))

2

2

dz

σz

=

∞

0

f(z)

e

−

(ln

z

Ss

−v(t−s))

2

2σ

2

(t−s)

σz

2π(t −s)

dz

=

∞

0

f(z)p(t −s, S

s

, z)dz

= g(S

s

).

3.7. (i)

30

Proof. E

Zt

Zs

[T

s

= E[exp¦σ(W

t

−W

s

) +σµ(t −s) −(σµ +

σ

2

2

)(t −s)¦] = 1.

(ii)

Proof. By optional stopping theorem, E[Z

t∧τm

] = E[Z

0

] = 1, that is, E[exp¦σX

t∧τm

−(σµ +

σ

2

2

)t ∧ τ

m

¦] =

1.

(iii)

Proof. If µ ≥ 0 and σ > 0, Z

t∧τm

≤ e

σm

. By bounded convergence theorem,

E[1

{τm<∞}

Z

τm

] = E[ lim

t→∞

Z

t∧τm

] = lim

t→∞

E[Z

t∧τm

] = 1,

since on the event ¦τ

m

= ∞¦, Z

t∧τm

≤ e

σm−

1

2

σ

2

t

→ 0 as t → ∞. Therefore, E[e

σm−(σµ+

σ

2

2

)τm

] = 1. Let

σ ↓ 0, by bounded convergence theorem, we have P(τ

m

< ∞) = 1. Let σµ +

σ

2

2

= α, we get

E[e

−ατm

] = e

−σm

= e

mµ−m

√

2α+µ

2

.

(iv)

Proof. We note for α > 0, E[τ

m

e

−ατm

] < ∞ since xe

−αx

is bounded on [0, ∞). So by an argument similar

to Exercise 1.8, E[e

−ατm

] is diﬀerentiable and

∂

∂α

E[e

−ατm

] = −E[τ

m

e

−ατm

] = e

mµ−m

√

2α+µ

2 −m

2α +µ

2

.

Let α ↓ 0, by monotone increasing theorem, E[τ

m

] =

m

µ

< ∞ for µ > 0.

(v)

Proof. By σ > −2µ > 0, we get σµ +

σ

2

2

> 0. Then Z

t∧τm

≤ e

σm

and on the event ¦τ

m

= ∞¦, Z

t∧τm

≤

e

σm−(

σ

2

2

+σµ)t

→ 0 as t → ∞. Therefore,

E[e

σm−(σµ+

σ

2

2

)τm

1

{τm<∞}

] = E[ lim

t→∞

Z

t∧τm

] = lim

t→∞

E[Z

t∧τm

] = 1.

Let σ ↓ −2µ, then we get P(τ

m

< ∞) = e

2µm

= e

−2|µ|m

< 1. Set α = σµ +

σ

2

2

. So we get

E[e

−ατm

] = E[e

−ατm

1

{τm<∞}

] = e

−σm

= e

mµ−m

√

2α+µ

2

.

3.8. (i)

Proof.

ϕ

n

(u) =

¯

E[e

u

1

√

n

Mnt,n

] = (

¯

E[e

u

√

n

X1,n

])

nt

= (e

u

√

n

¯ p

n

+e

−

u

√

n

¯ q

n

)

nt

=

¸

e

u

√

n

r

n

+ 1 −e

−

σ

√

n

e

σ

√

n

−e

−

σ

√

n

+e

−

u

√

n

−

r

n

−1 +e

σ

√

n

e

σ

√

n

−e

−

σ

√

n

¸

nt

.

(ii)

31

Proof.

ϕ 1

x

2

(u) =

¸

e

ux

rx

2

+ 1 −e

−σx

e

σx

−e

−σx

−e

−ux

rx

2

+ 1 −e

σx

e

σx

−e

−σx

t

x

2

.

So,

ln ϕ 1

x

2

(u) =

t

x

2

ln

¸

(rx

2

+ 1)(e

ux

−e

−ux

) +e

(σ−u)x

−e

−(σ−u)x

e

σx

−e

−σx

=

t

x

2

ln

¸

(rx

2

+ 1) sinh ux + sinh(σ −u)x

sinh σx

=

t

x

2

ln

¸

(rx

2

+ 1) sinh ux + sinh σxcosh ux −cosh σxsinh ux

sinh σx

=

t

x

2

ln

¸

cosh ux +

(rx

2

+ 1 −cosh σx) sinh ux

sinh σx

.

(iii)

Proof.

cosh ux +

(rx

2

+ 1 −cosh σx) sinh ux

sinh σx

= 1 +

u

2

x

2

2

+O(x

4

) +

(rx

2

+ 1 −1 −

σ

2

x

2

2

+O(x

4

))(ux +O(x

3

))

σx +O(x

3

)

= 1 +

u

2

x

2

2

+

(r −

σ

2

2

)ux

3

+O(x

5

)

σx +O(x

3

)

+O(x

4

)

= 1 +

u

2

x

2

2

+

(r −

σ

2

2

)ux

3

(1 +O(x

2

))

σx(1 +O(x

2

))

+O(x

4

)

= 1 +

u

2

x

2

2

+

rux

2

σ

−

1

2

σux

2

+O(x

4

).

(iv)

Proof.

ln ϕ 1

x

2

=

t

x

2

ln(1 +

u

2

x

2

2

+

ru

σ

x

2

−

σux

2

2

+O(x

4

)) =

t

x

2

(

u

2

x

2

2

+

ru

σ

x

2

−

σux

2

2

+O(x

4

)).

So lim

x↓0

ln ϕ 1

x

2

(u) = t(

u

2

2

+

ru

σ

−

σu

2

), and

¯

E[e

u

1

√

n

Mnt,n

] = ϕ

n

(u) →

1

2

tu

2

+ t(

r

σ

−

σ

2

)u. By the one-to-one

correspondence between distribution and moment generating function, (

1

√

n

M

nt,n

)

n

converges to a Gaussian

random variable with mean t(

r

σ

−

σ

2

) and variance t. Hence (

σ

√

n

M

nt,n

)

n

converges to a Gaussian random

variable with mean t(r −

σ

2

2

) and variance σ

2

t.

4. Stochastic Calculus

4.1.

32

Proof. Fix t and for any s < t, we assume s ∈ [t

m

, t

m+1

) for some m.

Case 1. m = k. Then I(t)−I(s) = ∆

t

k

(M

t

−M

t

k

)−∆

t

k

(M

s

−M

t

k

) = ∆

t

k

(M

t

−M

s

). So E[I(t)−I(s)[T

t

] =

∆

t

k

E[M

t

−M

s

[T

s

] = 0.

Case 2. m < k. Then t

m

≤ s < t

m+1

≤ t

k

≤ t < t

k+1

. So

I(t) −I(s) =

k−1

¸

j=m

∆

tj

(M

tj+1

−M

tj

) + ∆

t

k

(M

s

−M

t

k

) −∆

tm

(M

s

−M

tm

)

=

k−1

¸

j=m+1

∆

tj

(M

tj+1

−M

tj

) + ∆

t

k

(M

t

−M

t

k

) + ∆

tm

(M

tm+1

−M

s

).

Hence

E[I(t) −I(s)[T

s

]

=

k−1

¸

j=m+1

E[∆

tj

E[M

tj+1

−M

tj

[T

tj

][T

s

] +E[∆

t

k

E[M

t

−M

t

k

[T

t

k

][T

s

] + ∆

tm

E[M

tm+1

−M

s

[T

s

]

= 0.

Combined, we conclude I(t) is a martingale.

4.2. (i)

Proof. We follow the simpliﬁcation in the hint and consider I(t

k

) −I(t

l

) with t

l

< t

k

. Then I(t

k

) −I(t

l

) =

¸

k−1

j=l

∆

tj

(W

tj+1

−W

tj

). Since ∆

t

is a non-random process and W

tj+1

−W

tj

⊥ T

tj

⊃ T

t

l

for j ≥ l, we must

have I(t

k

) −I(t

l

) ⊥ T

t

l

.

(ii)

Proof. We use the notation in (i) and it is clear I(t

k

) − I(t

l

) is normal since it is a linear combination of

independent normal random variables. Furthermore, E[I(t

k

) − I(t

l

)] =

¸

k−1

j=l

∆

tj

E[W

tj+1

− W

tj

] = 0 and

V ar(I(t

k

) −I(t

l

)) =

¸

k−1

j=l

∆

2

tj

V ar(W

tj+1

−W

tj

) =

¸

k−1

j=l

∆

2

tj

(t

j+1

−t

j

) =

t

k

t

l

∆

2

u

du.

(iii)

Proof. E[I(t) −I(s)[T

s

] = E[I(t) −I(s)] = 0, for s < t.

(iv)

Proof. For s < t,

E[I

2

(t) −

t

0

∆

2

u

du −(I

2

(s) −

s

0

∆

2

u

du)[T

s

]

= E[I

2

(t) −I

2

(s) −

t

s

∆

2

u

du[T

s

]

= E[(I(t) −I(s))

2

+ 2I(t)I(s) −2I

2

(s)[T

s

] −

t

s

∆

2

u

du

= E[(I(t) −I(s))

2

] + 2I(s)E[I(t) −I(s)[T

s

] −

t

s

∆

2

u

du

=

t

s

∆

2

u

du + 0 −

t

s

∆

2

u

du

= 0.

33

4.3.

Proof. I(t) −I(s) = ∆

0

(W

t1

−W

0

) + ∆

t1

(W

t2

−W

t1

) −∆

0

(W

t1

−W

0

) = ∆

t1

(W

t2

−W

t1

) = W

s

(W

t

−W

s

).

(i) I(t) −I(s) is not independent of T

s

, since W

s

∈ T

s

.

(ii) E[(I(t) − I(s))

4

] = E[W

4

s

]E[(W

t

− W

s

)

4

] = 3s 3(t − s) = 9s(t − s) and 3E[(I(t) − I(s))

2

] =

3E[W

2

s

]E[(W

t

−W

s

)

2

] = 3s(t −s). Since E[(I(t) −I(s))

4

] = 3E[(I(t) −I(s))

2

], I(t) −I(s) is not normally

distributed.

(iii) E[I(t) −I(s)[T

s

] = W

s

E[W

t

−W

s

[T

s

] = 0.

(iv)

E[I

2

(t) −

t

0

∆

2

u

du −(I

2

(s) −

s

0

∆

2

u

du)[T

s

]

= E[(I(t) −I(s))

2

+ 2I(t)I(s) −2I

2

(s) −

t

s

W

2

u

du[T

s

]

= E[W

2

s

(W

t

−W

s

)

2

+ 2W

s

(W

t

−W

s

) −W

2

s

(t −s)[T

s

]

= W

2

s

E[(W

t

−W

s

)

2

] + 2W

s

E[W

t

−W

s

[T

s

] −W

2

s

(t −s)

= W

2

s

(t −s) −W

2

s

(t −s)

= 0.

4.4.

Proof. (Cf. Øksendal [3], Exercise 3.9.) We ﬁrst note that

¸

j

Bt

j

+t

j+1

2

(B

tj+1

−B

tj

)

=

¸

j

Bt

j

+t

j+1

2

(B

tj+1

−Bt

j

+t

j+1

2

) +B

tj

(Bt

j

+t

j+1

2

−B

tj

)

+

¸

j

(Bt

j

+t

j+1

2

−B

tj

)

2

.

The ﬁrst term converges in L

2

(P) to

T

0

B

t

dB

t

. For the second term, we note

E

¸

¸

j

Bt

j

+t

j+1

2

−B

tj

2

−

t

2

¸

2

¸

¸

¸

= E

¸

¸

j

Bt

j

+t

j+1

2

−B

tj

2

−

¸

j

t

j+1

−t

j

2

¸

2

¸

¸

¸

=

¸

j, k

E

¸

Bt

j

+t

j+1

2

−B

tj

2

−

t

j+1

−t

j

2

Bt

k

+t

k+1

2

−B

t

k

2

−

t

k+1

−t

k

2

=

¸

j

E

¸

B

2

t

j+1

−t

j

2

−

t

j+1

−t

j

2

2

¸

=

¸

j

2

t

j+1

−t

j

2

2

≤

T

2

max

1≤j≤n

[t

j+1

−t

j

[ → 0,

34

since E[(B

2

t

−t)

2

] = E[B

4

t

−2tB

2

t

+t

2

] = 3E[B

2

t

]

2

−2t

2

+t

2

= 2t

2

. So

¸

j

Bt

j

+t

j+1

2

(B

tj+1

−B

tj

) →

T

0

B

t

dB

t

+

T

2

=

1

2

B

2

T

in L

2

(P).

4.5. (i)

Proof.

d ln S

t

=

dS

t

S

t

−

1

2

d'S`

t

S

2

t

=

2S

t

dS

t

−d'S`

t

2S

2

t

=

2S

t

(α

t

S

t

dt +σ

t

S

t

dW

t

) −σ

2

t

S

2

t

dt

2S

2

t

= σ

t

dW

t

+ (α

t

−

1

2

σ

2

t

)dt.

(ii)

Proof.

ln S

t

= ln S

0

+

t

0

σ

s

dW

s

+

t

0

(α

s

−

1

2

σ

2

s

)ds.

So S

t

= S

0

exp¦

t

0

σ

s

dW

s

+

t

0

(α

s

−

1

2

σ

2

s

)ds¦.

4.6.

Proof. Without loss of generality, we assume p = 1. Since (x

p

)

= px

p−1

, (x

p

)

= p(p −1)x

p−2

, we have

d(S

p

t

) = pS

p−1

t

dS

t

+

1

2

p(p −1)S

p−2

t

d'S`

t

= pS

p−1

t

(αS

t

dt +σS

t

dW

t

) +

1

2

p(p −1)S

p−2

t

σ

2

S

2

t

dt

= S

p

t

[pαdt +pσdW

t

+

1

2

p(p −1)σ

2

dt]

= S

p

t

p[σdW

t

+ (α +

p −1

2

σ

2

)dt].

4.7. (i)

Proof. dW

4

t

= 4W

3

t

dW

t

+

1

2

4 3W

2

t

d'W`

t

= 4W

3

t

dW

t

+ 6W

2

t

dt. So W

4

T

= 4

T

0

W

3

t

dW

t

+ 6

T

0

W

2

t

dt.

(ii)

Proof. E[W

4

T

] = 6

T

0

tdt = 3T

2

.

(iii)

Proof. dW

6

t

= 6W

5

t

dW

t

+

1

2

6 5W

4

t

dt. So W

6

T

= 6

T

0

W

5

t

dW

t

+15

T

0

W

4

t

dt. Hence E[W

6

T

] = 15

T

0

3t

2

dt =

15T

3

.

4.8.

35

Proof. d(e

βt

R

t

) = βe

βt

R

t

dt +e

βt

dR

t

= e

βt(αdt+σdWt)

. Hence

e

βt

R

t

= R

0

+

t

0

e

βs

(αds +σdW

s

) = R

0

+

α

β

(e

βt

−1) +σ

t

0

e

βs

dW

s

,

and R

t

= R

0

e

−βt

+

α

β

(1 −e

−βt

) +σ

t

0

e

−β(t−s)

dW

s

.

4.9. (i)

Proof.

Ke

−r(T−t)

N

(d

−

) = Ke

−r(T−t)

e

−

d

2

−

2

√

2π

= Ke

−r(T−t)

e

−

(d

+

−σ

√

T−t)

2

2

√

2π

= Ke

−r(T−t)

e

σ

√

T−td+

e

−

σ

2

(T−t)

2

N

(d

+

)

= Ke

−r(T−t)

x

K

e

(r+

σ

2

2

)(T−t)

e

−

σ

2

(T−t)

2

N

(d

+

)

= xN

(d

+

).

(ii)

Proof.

c

x

= N(d

+

) +xN

(d

+

)

∂

∂x

d

+

(T −t, x) −Ke

−r(T−t)

N

(d

−

)

∂

∂x

d

−

(T −t, x)

= N(d

+

) +xN

(d

+

)

∂

∂x

d

+

(T −t, x) −xN

(d

+

)

∂

∂x

d

+

(T −t, x)

= N(d

+

).

(iii)

Proof.

c

t

= xN

(d

+

)

∂

∂x

d

+

(T −t, x) −rKe

−r(T−t)

N(d

−

) −Ke

−r(T−t)

N

(d

−

)

∂

∂t

d

−

(T −t, x)

= xN

(d

+

)

∂

∂t

d

+

(T −t, x) −rKe

−r(T−t)

N(d

−

) −xN

(d

+

)

¸

∂

∂t

d

+

(T −t, x) +

σ

2

√

T −t

= −rKe

−r(T−t)

N(d

−

) −

σx

2

√

T −t

N

(d

+

).

(iv)

36

Proof.

c

t

+rxc

x

+

1

2

σ

2

x

2

c

xx

= −rKe

−r(T−t)

N(d

−

) −

σx

2

√

T −t

N

(d

+

) +rxN(d

+

) +

1

2

σ

2

x

2

N

(d

+

)

∂

∂x

d

+

(T −t, x)

= rc −

σx

2

√

T −t

N

(d

+

) +

1

2

σ

2

x

2

N

(d

+

)

1

σ

√

T −tx

= rc.

(v)

Proof. For x > K, d

+

(T − t, x) > 0 and lim

t↑T

d

+

(T − t, x) = lim

τ↓0

d

+

(τ, x) = ∞. lim

t↑T

d

−

(T − t, x) =

lim

τ↓0

d

−

(τ, x) = lim

τ↓0

1

σ

√

τ

ln

x

K

+

1

σ

(r +

1

2

σ

2

)

√

τ −σ

√

τ

= ∞. Similarly, lim

t↑T

d

±

= −∞ for x ∈

(0, K). Also it’s clear that lim

t↑T

d

±

= 0 for x = K. So

lim

t↑T

c(t, x) = xN(lim

t↑T

d

+

) −KN(lim

t↑T

d

−

) =

x −K, if x > K

0, if x ≤ K

= (x −K)

+

.

(vi)

Proof. It is easy to see lim

x↓0

d

±

= −∞. So for t ∈ [0, T], lim

x↓0

c(t, x) = lim

x↓0

xN(lim

x↓

d

+

(T − t, x)) −

Ke

−r(T−t)

N(lim

x↓0

d

−

(T −t, x)) = 0.

(vii)

Proof. For t ∈ [0, T], it is clear lim

x→∞

d

±

= ∞. Note

lim

x→∞

x(N(d

+

) −1) = lim

x→∞

N

(d

+

)

∂

∂x

d

+

−x

−2

= lim

x→∞

N

(d

+

)

1

σ

√

T−t

−x

−1

.

By the expression of d

+

, we get x = K exp¦σ

√

T −td

+

−(T −t)(r +

1

2

σ

2

)¦. So we have

lim

x→∞

x(N(d

+

) −1) = lim

x→∞

N

(d

+

)

−x

σ

√

T −t

= lim

d+→∞

e

−

d

2

+

2

√

2π

−Ke

σ

√

T−td+−(T−t)(r+

1

2

σ

2

)

σ

√

T −t

= 0.

Therefore

lim

x→∞

[c(t, x) −(x −e

−r(T−t)

K)]

= lim

x→∞

[xN(d

+

) −Ke

−r(T−t)N(d−)

−x +Ke

−r(T−t)

]

= lim

x→∞

[x(N(d

+

) −1) +Ke

−r(T−t)

(1 −N(d

−

))]

= lim

x→∞

x(N(d

+

) −1) +Ke

−r(T−t)

(1 −N( lim

x→∞

d

−

))

= 0.

4.10. (i)

37

Proof. We show (4.10.16) + (4.10.9) ⇐⇒ (4.10.16) + (4.10.15), i.e. assuming X has the representation

X

t

= ∆

t

S

t

+ Γ

t

M

t

, “continuous-time self-ﬁnancing condition” has two equivalent formulations, (4.10.9) or

(4.10.15). Indeed, dX

t

= ∆

t

dS

t

+Γ

t

dM

t

+(S

t

d∆

t

+dS

t

d∆

t

+M

t

dΓ

t

+dM

t

dΓ

t

). So dX

t

= ∆

t

dS

t

+Γ

t

dM

t

⇐⇒

S

t

d∆

t

+dS

t

d∆

t

+M

t

dΓ

t

+dM

t

dΓ

t

= 0, i.e. (4.10.9) ⇐⇒ (4.10.15).

(ii)

Proof. First, we clarify the problems by stating explicitly the given conditions and the result to be proved.

We assume we have a portfolio X

t

= ∆

t

S

t

+ Γ

t

M

t

. We let c(t, S

t

) denote the price of call option at time t

and set ∆

t

= c

x

(t, S

t

). Finally, we assume the portfolio is self-ﬁnancing. The problem is to show

rN

t

dt =

¸

c

t

(t, S

t

) +

1

2

σ

2

S

2

t

c

xx

(t, S

t

)

dt,

where N

t

= c(t, S

t

) −∆

t

S

t

.

Indeed, by the self-ﬁnancing property and ∆

t

= c

x

(t, S

t

), we have c(t, S

t

) = X

t

(by the calculations in

Subsection 4.5.1-4.5.3). This uniquely determines Γ

t

as

Γ

t

=

X

t

−∆

t

S

t

M

t

=

c(t, S

t

) −c

x

(t, S

t

)S

t

M

t

=

N

t

M

t

.

Moreover,

dN

t

=

¸

c

t

(t, S

t

)dt +c

x

(t, S

t

)dS

t

+

1

2

c

xx

(t, S

t

)d'S

t

`

t

−d(∆

t

S

t

)

=

¸

c

t

(t, S

t

) +

1

2

c

xx

(t, S

t

)σ

2

S

2

t

dt + [c

x

(t, S

t

)dS

t

−d(X

t

−Γ

t

M

t

)]

=

¸

c

t

(t, S

t

) +

1

2

c

xx

(t, S

t

)σ

2

S

2

t

dt +M

t

dΓ

t

+dM

t

dΓ

t

+ [c

x

(t, S

t

)dS

t

+ Γ

t

dM

t

−dX

t

].

By self-ﬁnancing property, c

x

(t, S

t

)dt + Γ

t

dM

t

= ∆

t

dS

t

+ Γ

t

dM

t

= dX

t

, so

¸

c

t

(t, S

t

) +

1

2

c

xx

(t, S

t

)σ

2

S

2

t

dt = dN

t

−M

t

dΓ

t

−dM

t

dΓ

t

= Γ

t

dM

t

= Γ

t

rM

t

dt = rN

t

dt.

4.11.

Proof. First, we note c(t, x) solves the Black-Scholes-Merton PDE with volatility σ

1

:

∂

∂t

+rx

∂

∂x

+

1

2

x

2

σ

2

1

∂

2

∂x

2

−r

c(t, x) = 0.

So

c

t

(t, S

t

) +rS

t

c

x

(t, S

t

) +

1

2

σ

2

1

S

2

t

c

xx

(t, S

t

) −rc(t, S

t

) = 0,

and

dc(t, S

t

) = c

t

(t, S

t

)dt +c

x

(t, S

t

)(αS

t

dt +σ

2

S

t

dW

t

) +

1

2

c

xx

(t, S

t

)σ

2

2

S

2

t

dt

=

¸

c

t

(t, S

t

) +αc

x

(t, S

t

)S

t

+

1

2

σ

2

2

S

2

t

c

xx

(t, S

t

)

dt +σ

2

S

t

c

x

(t, S

t

)dW

t

=

¸

rc(t, S

t

) + (α −r)c

x

(t, S

t

)S

t

+

1

2

S

2

t

(σ

2

2

−σ

2

1

)c

xx

(t, S

t

)

dt +σ

2

S

t

c

x

(t, S

t

)dW

t

.

38

Therefore

dX

t

=

¸

rc(t, S

t

) + (α −r)c

x

(t, S

t

)S

t

+

1

2

S

2

t

(σ

2

2

−σ

2

1

)σ

xx

(t, S

t

) +rX

t

−rc(t, S

t

) +rS

t

c

x

(t, S

t

)

−

1

2

(σ

2

2

−σ

2

1

)S

2

t

c

xx

(t, S

t

) −c

x

(t, S

t

)αS

t

dt + [σ

2

S

t

c

x

(t, S

t

) −c

x

(t, S

t

)σ

2

S

t

]dW

t

= rX

t

dt.

This implies X

t

= X

0

e

rt

. By X

0

, we conclude X

t

= 0 for all t ∈ [0, T].

4.12. (i)

Proof. By (4.5.29), c(t, x) − p(t, x) = x − e

−r(T−t)

K. So p

x

(t, x) = c

x

(t, x) − 1 = N(d

+

(T − t, x)) − 1,

p

xx

(t, x) = c

xx

(t, x) =

1

σx

√

T−t

N

(d

+

(T −t, x)) and

p

t

(t, x) = c

t

(t, x) +re

−r(T−t)

K

= −rKe

−r(T−t)

N(d

−

(T −t, x)) −

σx

2

√

T −t

N

(d

+

(T −t, x)) +rKe

−r(T−t)

= rKe

−r(T−t)

N(−d

−

(T −t, x)) −

σx

2

√

T −t

N

(d

+

(T −t, x)).

(ii)

Proof. For an agent hedging a short position in the put, since ∆

t

= p

x

(t, x) < 0, he should short the

underlying stock and put p(t, S

t

) −p

x

(t, S

t

)S

t

(> 0) cash in the money market account.

(iii)

Proof. By the put-call parity, it suﬃces to show f(t, x) = x −Ke

−r(T−t)

satisﬁes the Black-Scholes-Merton

partial diﬀerential equation. Indeed,

∂

∂t

+

1

2

σ

2

x

2

∂

2

∂x

2

+rx

∂

∂x

−r

f(t, x) = −rKe

−r(T−t)

+

1

2

σ

2

x

2

0 +rx 1 −r(x −Ke

−r(T−t)

) = 0.

Remark: The Black-Scholes-Merton PDE has many solutions. Proper boundary conditions are the key

to uniqueness. For more details, see Wilmott [8].

4.13.

Proof. We suppose (W

1

, W

2

) is a pair of local martingale deﬁned by SDE

dW

1

(t) = dB

1

(t)

dW

2

(t) = α(t)dB

1

(t) +β(t)dB

2

(t).

(1)

We want to ﬁnd α(t) and β(t) such that

(dW

2

(t))

2

= [α

2

(t) +β

2

(t) + 2ρ(t)α(t)β(t)]dt = dt

dW

1

(t)dW

2

(t) = [α(t) +β(t)ρ(t)]dt = 0.

(2)

Solve the equation for α(t) and β(t), we have β(t) =

1

√

1−ρ

2

(t)

and α(t) = −

ρ(t)

√

1−ρ

2

(t)

. So

W

1

(t) = B

1

(t)

W

2

(t) =

t

0

−ρ(s)

√

1−ρ

2

(s)

dB

1

(s) +

t

0

1

√

1−ρ

2

(s)

dB

2

(s)

(3)

39

is a pair of independent BM’s. Equivalently, we have

B

1

(t) = W

1

(t)

B

2

(t) =

t

0

ρ(s)dW

1

(s) +

t

0

1 −ρ

2

(s)dW

2

(s).

(4)

4.14. (i)

Proof. Clearly Z

j

∈ T

tj+1

. Moreover

E[Z

j

[T

tj

] = f

(W

tj

)E[(W

tj+1

−W

tj

)

2

−(t

j+1

−t

j

)[T

tj

] = f

(W

tj

)(E[W

2

tj+1−tj

] −(t

j+1

−t

j

)) = 0,

since W

tj+1

−W

tj

is independent of T

tj

and W

t

∼ N(0, t). Finally, we have

E[Z

2

j

[T

tj

] = [f

(W

tj

)]

2

E[(W

tj+1

−W

tj

)

4

−2(t

j+1

−t

j

)(W

tj+1

−W

tj

)

2

+ (t

j+1

−t

j

)

2

[T

tj

]

= [f

(W

tj

)]

2

(E[W

4

tj+1−tj

] −2(t

j+1

−t

j

)E[W

2

tj+1−tj

] + (t

j+1

−t

j

)

2

)

= [f

(W

tj

)]

2

[3(t

j+1

−t

j

)

2

−2(t

j+1

−t

j

)

2

+ (t

j+1

−t

j

)

2

]

= 2[f

(W

tj

)]

2

(t

j+1

−t

j

)

2

,

where we used the independence of Browian motion increment and the fact that E[X

4

] = 3E[X

2

]

2

if X is

Gaussian with mean 0.

(ii)

Proof. E[

¸

n−1

j=0

Z

j

] = E[

¸

n−1

j=0

E[Z

j

[T

tj

]] = 0 by part (i).

(iii)

Proof.

V ar[

n−1

¸

j=0

Z

j

] = E[(

n−1

¸

j=0

Z

j

)

2

]

= E[

n−1

¸

j=0

Z

2

j

+ 2

¸

0≤i<j≤n−1

Z

i

Z

j

]

=

n−1

¸

j=0

E[E[Z

2

j

[T

tj

]] + 2

¸

0≤i<j≤n−1

E[Z

i

E[Z

j

[T

tj

]]

=

n−1

¸

j=0

E[2[f

(W

tj

)]

2

(t

j+1

−t

j

)

2

]

=

n−1

¸

j=0

2E[(f

(W

tj

))

2

](t

j+1

−t

j

)

2

≤ 2 max

0≤j≤n−1

[t

j+1

−t

j

[

n−1

¸

j=0

E[(f

(W

tj

))

2

](t

j+1

−t

j

)

→ 0,

since

¸

n−1

j=0

E[(f

(W

tj

))

2

](t

j+1

−t

j

) →

T

0

E[(f

(W

t

))

2

]dt < ∞.

4.15. (i)

40

Proof. B

i

is a local martingale with

(dB

i

(t))

2

=

¸

d

¸

j=1

σ

ij

(t)

σ

i

(t)

dW

j

(t)

¸

2

=

d

¸

j=1

σ

2

ij

(t)

σ

2

i

(t)

dt = dt.

So B

i

is a Brownian motion.

(ii)

Proof.

dB

i

(t)dB

k

(t) =

d

¸

j=1

σ

ij

(t)

σ

i

(t)

dW

j

(t)

¸

¸

¸

d

¸

l=1

σ

kl

(t)

σ

k

(t)

dW

l

(t)

¸

=

¸

1≤j, l≤d

σ

ij

(t)σ

kl

(t)

σ

i

(t)σ

k

(t)

dW

j

(t)dW

l

(t)

=

d

¸

j=1

σ

ij

(t)σ

kj

(t)

σ

i

(t)σ

k

(t)

dt

= ρ

ik

(t)dt.

4.16.

Proof. To ﬁnd the m independent Brownion motion W

1

(t), , W

m

(t), we need to ﬁnd A(t) = (a

ij

(t)) so

that

(dB

1

(t), , dB

m

(t))

tr

= A(t)(dW

1

(t), , dW

m

(t))

tr

,

or equivalently

(dW

1

(t), , dW

m

(t))

tr

= A(t)

−1

(dB

1

(t), , dB

m

(t))

tr

,

and

(dW

1

(t), , dW

m

(t))

tr

(dW

1

(t), , dW

m

(t))

= A(t)

−1

(dB

1

(t), , dB

m

(t))

tr

(dB

1

(t), , dB

m

(t))(A(t)

−1

)

tr

dt

= I

m×m

dt,

where I

m×m

is the mm unit matrix. By the condition dB

i

(t)dB

k

(t) = ρ

ik

(t)dt, we get

(dB

1

(t), , dB

m

(t))

tr

(dB

1

(t), , dB

m

(t)) = C(t).

So A(t)

−1

C(t)(A(t)

−1

)

tr

= I

m×m

, which gives C(t) = A(t)A(t)

tr

. This motivates us to deﬁne A as the

square root of C. Reverse the above analysis, we obtain a formal proof.

4.17.

Proof. We will try to solve all the sub-problems in a single, long solution. We start with the general X

i

:

X

i

(t) = X

i

(0) +

t

0

θ

i

(u)du +

t

0

σ

i

(u)dB

i

(u), i = 1, 2.

41

The goal is to show

lim

↓0

C()

V

1

()V

2

()

= ρ(t

0

).

First, for i = 1, 2, we have

M

i

() = E[X

i

(t

0

+) −X

i

(t

0

)[T

t0

]

= E

¸

t0+

t0

Θ

i

(u)du +

t0+

t0

σ

i

(u)dB

i

(u)[T

t0

= Θ

i

(t

0

) +E

¸

t0+

t0

(Θ

i

(u) −Θ

i

(t

0

))du[T

t0

.

By Conditional Jensen’s Inequality,

E

¸

t0+

t0

(Θ

i

(u) −Θ

i

(t

0

))du[T

t0

≤ E

¸

t0+

t0

[Θ

i

(u) −Θ

i

(t

0

)[du[T

t0

Since

1

t0+

t0

[Θ

i

(u) − Θ

i

(t

0

)[du ≤ 2M and lim

→0

1

t0+

t0

[Θ

i

(u) − Θ

i

(t

0

)[du = 0 by the continuity of Θ

i

,

the Dominated Convergence Theorem under Conditional Expectation implies

lim

→0

1

E

¸

t0+

t0

[Θ

i

(u) −Θ

i

(t

0

)[du[T

t0

= E

¸

lim

→0

1

t0+

t0

[Θ

i

(u) −Θ

i

(t

0

)[du[T

t0

= 0.

So M

i

() = Θ

i

(t

0

) +o(). This proves (iii).

To calculate the variance and covariance, we note Y

i

(t) =

t

0

σ

i

(u)dB

i

(u) is a martingale and by Itˆo’s

formula Y

i

(t)Y

j

(t) −

t

0

σ

i

(u)σ

j

(u)du is a martingale (i = 1, 2). So

E[(X

i

(t

0

+) −X

i

(t

0

))(X

j

(t

0

+) −X

j

(t

0

))[T

t0

]

= E

¸

Y

i

(t

0

+) −Y

i

(t

0

) +

t0+

t0

Θ

i

(u)du

Y

j

(t

0

+) −Y

j

(t

0

) +

t0+

t0

Θ

j

(u)du

[T

t0

= E [(Y

i

(t

0

+) −Y

i

(t

0

)) (Y

j

(t

0

+) −Y

j

(t

0

)) [T

t0

] +E

¸

t0+

t0

Θ

i

(u)du

t0+

t0

Θ

j

(u)du[T

t0

+E

¸

(Y

i

(t

0

+) −Y

i

(t

0

))

t0+

t0

Θ

j

(u)du[T

t0

+E

¸

(Y

j

(t

0

+) −Y

j

(t

0

))

t0+

t0

Θ

i

(u)du[T

t0

= I +II +III +IV.

I = E[Y

i

(t

0

+)Y

j

(t

0

+) −Y

i

(t

0

)Y

j

(t

0

)[T

t0

] = E

¸

t0+

t0

σ

i

(u)σ

j

(u)ρ

ij

(t)dt[T

t0

.

By an argument similar to that involved in the proof of part (iii), we conclude I = σ

i

(t

0

)σ

j

(t

0

)ρ

ij

(t

0

) +o()

and

II = E

¸

t0+

t0

(Θ

i

(u) −Θ

i

(t

0

))du

t0+

t0

Θ

j

(u)du[T

t0

+ Θ

i

(t

0

)E

¸

t0+

t0

Θ

j

(u)du[T

t0

= o() + (M

i

() −o())M

j

()

= M

i

()M

j

() +o().

42

By Cauchy’s inequality under conditional expectation (note E[XY [T] deﬁnes an inner product on L

2

(Ω)),

III ≤ E

¸

[Y

i

(t

0

+) −Y

i

(t

0

)[

t0+

t0

[Θ

j

(u)[du[T

t0

≤ M

E[(Y

i

(t

0

+) −Y

i

(t

0

))

2

[T

t0

]

≤ M

E[Y

i

(t

0

+)

2

−Y

i

(t

0

)

2

[T

t0

]

≤ M

E[

t0+

t0

Θ

i

(u)

2

du[T

t0

]

≤ M M

√

= o()

Similarly, IV = o(). In summary, we have

E[(X

i

(t

0

+) −X

i

(t

0

))(X

j

(t

0

+) −X

j

(t

0

))[T

t0

] = M

i

()M

j

() +σ

i

(t

0

)σ

j

(t

0

)ρ

ij

(t

0

) +o() +o().

This proves part (iv) and (v). Finally,

lim

↓0

C()

V

1

()V

2

()

= lim

↓0

ρ(t

0

)σ

1

(t

0

)σ

2

(t

0

) +o()

(σ

2

1

(t

0

) +o())(σ

2

2

(t

0

) +o())

= ρ(t

0

).

This proves part (vi). Part (i) and (ii) are consequences of general cases.

4.18. (i)

Proof.

d(e

rt

ζ

t

) = (de

−θWt−

1

2

θ

2

t

) = −e

−θWt−

1

2

θ

2

t

θdW

t

= −θ(e

rt

ζ

t

)dW

t

,

where for the second “=”, we used the fact that e

−θWt−

1

2

θ

2

t

solves dX

t

= −θX

t

dW

t

. Since d(e

rt

ζ

t

) =

re

rt

ζ

t

dt +e

rt

dζ

t

, we get dζ

t

= −θζ

t

dW

t

−rζ

t

dt.

(ii)

Proof.

d(ζ

t

X

t

) = ζ

t

dX

t

+X

t

dζ

t

+dX

t

dζ

t

= ζ

t

(rX

t

dt + ∆

t

(α −r)S

t

dt + ∆

t

σS

t

dW

t

) +X

t

(−θζ

t

dW

t

−rζ

t

dt)

+(rX

t

dt + ∆

t

(α −r)S

t

dt + ∆

t

σS

t

dW

t

)(−θζ

t

dW

t

−rζ

t

dt)

= ζ

t

(∆

t

(α −r)S

t

dt + ∆

t

σS

t

dW

t

) −θX

t

ζ

t

dW

t

−θ∆

t

σS

t

ζ

t

dt

= ζ

t

∆

t

σS

t

dW

t

−θX

t

ζ

t

dW

t

.

So ζ

t

X

t

is a martingale.

(iii)

Proof. By part (ii), X

0

= ζ

0

X

0

= E[ζ

T

X

t

] = E[ζ

T

V

T

]. (This can be seen as a version of risk-neutral pricing,

only that the pricing is carried out under the actual probability measure.)

4.19. (i)

Proof. B

t

is a local martingale with [B]

t

=

t

0

sign(W

s

)

2

ds = t. So by L´evy’s theorem, B

t

is a Brownian

motion.

43

(ii)

Proof. d(B

t

W

t

) = B

t

dW

t

+ sign(W

t

)W

t

dW

t

+ sign(W

t

)dt. Integrate both sides of the resulting equation

and the expectation, we get

E[B

t

W

t

] =

t

0

E[sign(W

s

)]ds =

t

0

E[1

{Ws≥0}

−1

{Ws<0}

]ds =

1

2

t −

1

2

t = 0.

(iii)

Proof. By Itˆ o’s formula, dW

2

t

= 2W

t

dW

t

+dt.

(iv)

Proof. By Itˆ o’s formula,

d(B

t

W

2

t

) = B

t

dW

2

t

+W

2

t

dB

t

+dB

t

dW

2

t

= B

t

(2W

t

dW

t

+dt) +W

2

t

sign(W

t

)dW

t

+sign(W

t

)dW

t

(2W

t

dW

t

+dt)

= 2B

t

W

t

dW

t

+B

t

dt +sign(W

t

)W

2

t

dW

t

+ 2sign(W

t

)W

t

dt.

So

E[B

t

W

2

t

] = E[

t

0

B

s

ds] + 2E[

t

0

sign(W

s

)W

s

ds]

=

t

0

E[B

s

]ds + 2

t

0

E[sign(W

s

)W

s

]ds

= 2

t

0

(E[W

s

1

{Ws≥0}

] −E[W

s

1

{Ws<0}

])ds

= 4

t

0

∞

0

x

e

−

x

2

2s

√

2πs

dxds

= 4

t

0

s

2π

ds

= 0 = E[B

t

] E[W

2

t

].

Since E[B

t

W

2

t

] = E[B

t

] E[W

2

t

], B

t

and W

t

are not independent.

4.20. (i)

Proof. f(x) =

x −K, if x ≥ K

0, if x < K.

So f

(x) =

1, if x > K

undeﬁned, if x = K

0, if x < K

and f

(x) =

0, if x = K

undeﬁned, if x = K.

(ii)

Proof. E[f(W

T

)] =

∞

K

(x − K)

e

−

x

2

2T

√

2πT

dx =

T

2π

e

−

K

2

2T

− KΦ(−

K

√

T

) where Φ is the distribution function of

standard normal random variable. If we suppose

T

0

f

(W

t

)dt = 0, the expectation of RHS of (4.10.42) is

equal to 0. So (4.10.42) cannot hold.

(iii)

44

Proof. This is trivial to check.

(iv)

Proof. If x = K, lim

n→∞

f

n

(x) =

1

8n

= 0; if x > K, for n large enough, x ≥ K +

1

2n

, so lim

n→∞

f

n

(x) =

lim

n→∞

(x −K) = x −K; if x < K, for n large enough, x ≤ K −

1

2n

, so lim

n→∞

f

n

(x) = lim

n→∞

0 = 0. In

summary, lim

n→∞

f

n

(x) = (x −K)

+

. Similarly, we can show

lim

n→∞

f

n

(x) =

0, if x < K

1

2

, if x = K

1, if x > K.

(5)

(v)

Proof. Fix ω, so that W

t

(ω) < K for any t ∈ [0, T]. Since W

t

(ω) can obtain its maximum on [0, T], there

exists n

0

, so that for any n ≥ n

0

, max

0≤t≤T

W

t

(ω) < K −

1

2n

. So

L

K

(T)(ω) = lim

n→∞

n

T

0

1

(K−

1

2n

,K+

1

2n

)

(W

t

(ω))dt = 0.

(vi)

Proof. Take expectation on both sides of the formula (4.10.45), we have

E[L

K

(T)] = E[(W

T

−K)

+

] > 0.

So we cannot have L

K

(T) = 0 a.s..

4.21. (i)

Proof. There are two problems. First, the transaction cost could be big due to active trading; second, the

purchases and sales cannot be made at exactly the same price K. For more details, see Hull [2].

(ii)

Proof. No. The RHS of (4.10.26) is a martingale, so its expectation is 0. But E[(S

T

− K)

+

] > 0. So

X

T

= (S

T

−K)

+

.

5. Risk-Neutral Pricing

5.1. (i)

Proof.

df(X

t

) = f

(X

t

)dt +

1

2

f

(X

t

)d'X`

t

= f(X

t

)(dX

t

+

1

2

d'X`

t

)

= f(X

t

)

¸

σ

t

dW

t

+ (α

t

−R

t

−

1

2

σ

2

t

)dt +

1

2

σ

2

t

dt

= f(X

t

)(α

t

−R

t

)dt +f(X

t

)σ

t

dW

t

.

This is formula (5.2.20).

45

(ii)

Proof. d(D

t

S

t

) = S

t

dD

t

+ D

t

dS

t

+ dD

t

dS

t

= −S

t

R

t

D

t

dt + D

t

α

t

S

t

dt + D

t

σ

t

S

t

dW

t

= D

t

S

t

(α

t

− R

t

)dt +

D

t

S

t

σ

t

dW

t

. This is formula (5.2.20).

5.2.

Proof. By Lemma 5.2.2.,

¯

E[D

T

V

T

[T

t

] = E

D

T

V

T

Z

T

Zt

[T

t

. Therefore (5.2.30) is equivalent to D

t

V

t

Z

t

=

E[D

T

V

T

Z

T

[T

t

].

5.3. (i)

Proof.

c

x

(0, x) =

d

dx

¯

E[e

−rT

(xe

σ

W

T

+(r−

1

2

σ

2

)T

−K)

+

]

=

¯

E

¸

e

−rT

d

dx

h(xe

σ

W

T

+(r−

1

2

σ

2

)T

)

=

¯

E

e

−rT

e

σ

W

T

+(r−

1

2

σ

2

)T

1

{xe

σ

W

T

+(r−

1

2

σ

2

)T

>K}

= e

−

1

2

σ

2

T

¯

E

e

σ

W

T

1

{

W

T

>

1

σ

(ln

K

x

−(r−

1

2

σ

2

)T)}

= e

−

1

2

σ

2

T

¯

E

¸

e

σ

√

T

W

T

√

T

1

{

W

T

√

T

−σ

√

T>

1

σ

√

T

(ln

K

x

−(r−

1

2

σ

2

)T)−σ

√

T}

= e

−

1

2

σ

2

T

∞

−∞

1

√

2π

e

−

z

2

2

e

σ

√

Tz

1

{z−σ

√

T>−d+(T,x)}

dz

=

∞

−∞

1

√

2π

e

−

(z−σ

√

T)

2

2

1

{z−σ

√

T>−d+(T,x)}

dz

= N(d

+

(T, x)).

(ii)

Proof. If we set

´

Z

T

= e

σ

W

T

−

1

2

σ

2

T

and

´

Z

t

=

¯

E[

´

Z

T

[T

t

], then

´

Z is a

¯

P-martingale,

´

Z

t

> 0 and E[

´

Z

T

] =

¯

E[e

σ

W

T

−

1

2

σ

2

T

] = 1. So if we deﬁne

´

P by d

´

P = Z

T

d

¯

P on T

T

, then

´

P is a probability measure equivalent to

¯

P, and

c

x

(0, x) =

¯

E[

´

Z

T

1

{S

T

>K}

] =

´

P(S

T

> K).

Moreover, by Girsanov’s Theorem,

´

W

t

=

W

t

+

t

0

(−σ)du =

W

t

− σt is a

´

P-Brownian motion (set Θ = −σ

in Theorem 5.4.1.)

(iii)

Proof. S

T

= xe

σ

W

T

+(r−

1

2

σ

2

)T

= xe

σ

W

T

+(r+

1

2

σ

2

)T

. So

´

P(S

T

> K) =

´

P(xe

σ

W

T

+(r+

1

2

σ

2

)T

> K) =

´

P

´

W

T

√

T

> −d

+

(T, x)

= N(d

+

(T, x)).

46

5.4. First, a few typos. In the SDE for S, “σ(t)d

W(t)” → “σ(t)S(t)d

**W(t)”. In the ﬁrst equation for
**

c(0, S(0)), E →

¯

E. In the second equation for c(0, S(0)), the variable for BSM should be

BSM

¸

T, S(0); K,

1

T

T

0

r(t)dt,

1

T

T

0

σ

2

(t)dt

¸

.

(i)

Proof. d ln S

t

=

dSt

St

−

1

2S

2

t

d'S`

t

= r

t

dt +σ

t

d

W

t

−

1

2

σ

2

t

dt. So S

T

= S

0

exp¦

T

0

(r

t

−

1

2

σ

2

t

)dt +

T

0

σ

t

d

W

t

¦. Let

X =

T

0

(r

t

−

1

2

σ

2

t

)dt +

T

0

σ

t

d

W

t

. The ﬁrst term in the expression of X is a number and the second term

is a Gaussian random variable N(0,

T

0

σ

2

t

dt), since both r and σ ar deterministic. Therefore, S

T

= S

0

e

X

,

with X ∼ N(

T

0

(r

t

−

1

2

σ

2

t

)dt,

T

0

σ

2

t

dt),.

(ii)

Proof. For the standard BSM model with constant volatility Σ and interest rate R, under the risk-neutral

measure, we have S

T

= S

0

e

Y

, where Y = (R−

1

2

Σ

2

)T +Σ

W

T

∼ N((R−

1

2

Σ

2

)T, Σ

2

T), and

¯

E[(S

0

e

Y

−K)

+

] =

e

RT

BSM(T, S

0

; K, R, Σ). Note R =

1

T

(E[Y ] +

1

2

V ar(Y )) and Σ =

1

T

V ar(Y ), we can get

¯

E[(S

0

e

Y

−K)

+

] = e

E[Y ]+

1

2

V ar(Y )

BSM

T, S

0

; K,

1

T

E[Y ] +

1

2

V ar(Y )

,

1

T

V ar(Y )

.

So for the model in this problem,

c(0, S

0

) = e

−

T

0

rtdt

¯

E[(S

0

e

X

−K)

+

]

= e

−

T

0

rtdt

e

E[X]+

1

2

V ar(X)

BSM

T, S

0

; K,

1

T

E[X] +

1

2

V ar(X)

,

1

T

V ar(X)

= BSM

¸

T, S

0

; K,

1

T

T

0

r

t

dt,

1

T

T

0

σ

2

t

dt

¸

.

5.5. (i)

Proof. Let f(x) =

1

x

, then f

(x) = −

1

x

2

and f

(x) =

2

x

3

. Note dZ

t

= −Z

t

Θ

t

dW

t

, so

d

1

Z

t

= f

(Z

t

)dZ

t

+

1

2

f

(Z

t

)dZ

t

dZ

t

= −

1

Z

2

t

(−Z

t

)Θ

t

dW

t

+

1

2

2

Z

3

t

Z

2

t

Θ

2

t

dt =

Θ

t

Z

t

dW

t

+

Θ

2

t

Z

t

dt.

(ii)

Proof. By Lemma 5.2.2., for s, t ≥ 0 with s < t,

M

s

=

¯

E[

M

t

[T

s

] = E

Zt

Mt

Zs

[T

s

. That is, E[Z

t

M

t

[T

s

] =

Z

s

M

s

. So M = Z

M is a P-martingale.

(iii)

47

Proof.

d

M

t

= d

M

t

1

Z

t

=

1

Z

t

dM

t

+M

t

d

1

Z

t

+dM

t

d

1

Z

t

=

Γ

t

Z

t

dW

t

+

M

t

Θ

t

Z

t

dW

t

+

M

t

Θ

2

t

Z

t

dt +

Γ

t

Θ

t

Z

t

dt.

(iv)

Proof. In part (iii), we have

d

M

t

=

Γ

t

Z

t

dW

t

+

M

t

Θ

t

Z

t

dW

t

+

M

t

Θ

2

t

Z

t

dt +

Γ

t

Θ

t

Z

t

dt =

Γ

t

Z

t

(dW

t

+ Θ

t

dt) +

M

t

Θ

t

Z

t

(dW

t

+ Θ

t

dt).

Let

¯

Γ

t

=

Γt+MtΘt

Zt

, then d

M

t

=

¯

Γ

t

d

W

t

. This proves Corollary 5.3.2.

5.6.

Proof. By Theorem 4.6.5, it suﬃces to show

W

i

(t) is an T

t

-martingale under

¯

P and [

W

i

,

W

j

](t) = tδ

ij

(i, j = 1, 2). Indeed, for i = 1, 2,

W

i

(t) is an T

t

-martingale under

¯

P if and only if

W

i

(t)Z

t

is an T

t

-martingale

under P, since

¯

E[

W

i

(t)[T

s

] = E

¸

W

i

(t)Z

t

Z

s

[T

s

¸

.

By Itˆo’s product formula, we have

d(

W

i

(t)Z

t

) =

W

i

(t)dZ

t

+Z

t

d

W

i

(t) +dZ

t

d

W

i

(t)

=

W

i

(t)(−Z

t

)Θ(t) dW

t

+Z

t

(dW

i

(t) + Θ

i

(t)dt) + (−Z

t

Θ

t

dW

t

)(dW

i

(t) + Θ

i

(t)dt)

=

W

i

(t)(−Z

t

)

d

¸

j=1

Θ

j

(t)dW

j

(t) +Z

t

(dW

i

(t) + Θ

i

(t)dt) −Z

t

Θ

i

(t)dt

=

W

i

(t)(−Z

t

)

d

¸

j=1

Θ

j

(t)dW

j

(t) +Z

t

dW

i

(t)

This shows

W

i

(t)Z

t

is an T

t

-martingale under P. So

W

i

(t) is an T

t

-martingale under

¯

P. Moreover,

[

W

i

,

W

j

](t) =

¸

W

i

+

·

0

Θ

i

(s)ds, W

j

+

·

0

Θ

j

(s)ds

(t) = [W

i

, W

j

](t) = tδ

ij

.

Combined, this proves the two-dimensional Girsanov’s Theorem.

5.7. (i)

Proof. Let a be any strictly positive number. We deﬁne X

2

(t) = (a +X

1

(t))D(t)

−1

. Then

P

X

2

(T) ≥

X

2

(0)

D(T)

= P(a +X

1

(T) ≥ a) = P(X

1

(T) ≥ 0) = 1,

and P

X

2

(T) >

X2(0)

D(T)

= P(X

1

(T) > 0) > 0, since a is arbitrary, we have proved the claim of this problem.

Remark: The intuition is that we invest the positive starting fund a into the money market account,

and construct portfolio X

1

from zero cost. Their sum should be able to beat the return of money market

account.

(ii)

48

Proof. We deﬁne X

1

(t) = X

2

(t)D(t) −X

2

(0). Then X

1

(0) = 0,

P(X

1

(T) ≥ 0) = P

X

2

(T) ≥

X

2

(0)

D(T)

= 1, P(X

1

(T) > 0) = P

X

2

(T) >

X

2

(0)

D(T)

> 0.

5.8. The basic idea is that for any positive

¯

P-martingale M, dM

t

= M

t

1

Mt

dM

t

. By Martingale Repre-

sentation Theorem, dM

t

=

¯

Γ

t

d

W

t

for some adapted process

¯

Γ

t

. So dM

t

= M

t

(

Γt

Mt

)d

W

t

, i.e. any positive

martingale must be the exponential of an integral w.r.t. Brownian motion. Taking into account discounting

factor and apply Itˆo’s product rule, we can show every strictly positive asset is a generalized geometric

Brownian motion.

(i)

Proof. V

t

D

t

=

¯

E[e

−

T

0

Rudu

V

T

[T

t

] =

¯

E[D

T

V

T

[T

t

]. So (D

t

V

t

)

t≥0

is a

¯

P-martingale. By Martingale Represen-

tation Theorem, there exists an adapted process

¯

Γ

t

, 0 ≤ t ≤ T, such that D

t

V

t

=

t

0

¯

Γ

s

d

W

s

, or equivalently,

V

t

= D

−1

t

t

0

¯

Γ

s

d

W

s

. Diﬀerentiate both sides of the equation, we get dV

t

= R

t

D

−1

t

t

0

¯

Γ

s

d

W

s

dt +D

−1

t

¯

Γ

t

d

W

t

,

i.e. dV

t

= R

t

V

t

dt +

Γt

Dt

dW

t

.

(ii)

Proof. We prove the following more general lemma.

Lemma 1. Let X be an almost surely positive random variable (i.e. X > 0 a.s.) deﬁned on the probability

space (Ω, (, P). Let T be a sub σ-algebra of (, then Y = E[X[T] > 0 a.s.

Proof. By the property of conditional expectation Y

t

≥ 0 a.s. Let A = ¦Y = 0¦, we shall show P(A) = 0. In-

deed, note A ∈ T, 0 = E[Y I

A

] = E[E[X[T]I

A

] = E[XI

A

] = E[X1

A∩{X≥1}

] +

¸

∞

n=1

E[X1

A∩{

1

n

>X≥

1

n+1

}

] ≥

P(A∩¦X ≥ 1¦)+

¸

∞

n=1

1

n+1

P(A∩¦

1

n

> X ≥

1

n+1

¦). So P(A∩¦X ≥ 1¦) = 0 and P(A∩¦

1

n

> X ≥

1

n+1

¦) = 0,

∀n ≥ 1. This in turn implies P(A) = P(A∩¦X > 0¦) = P(A∩¦X ≥ 1¦) +

¸

∞

n=1

P(A∩¦

1

n

> X ≥

1

n+1

¦) =

0.

By the above lemma, it is clear that for each t ∈ [0, T], V

t

=

¯

E[e

−

T

t

Rudu

V

T

[T

t

] > 0 a.s.. Moreover,

by a classical result of martingale theory (Revuz and Yor [4], Chapter II, Proposition (3.4)), we have the

following stronger result: for a.s. ω, V

t

(ω) > 0 for any t ∈ [0, T].

(iii)

Proof. By (ii), V > 0 a.s., so dV

t

= V

t

1

Vt

dV

t

= V

t

1

Vt

R

t

V

t

dt +

Γt

Dt

d

W

t

= V

t

R

t

dt + V

t

Γt

VtDt

d

W

t

= R

t

V

t

dt +

σ

t

V

t

d

W

t

, where σ

t

=

Γt

VtDt

. This shows V follows a generalized geometric Brownian motion.

5.9.

Proof. c(0, T, x, K) = xN(d

+

) − Ke

−rT

N(d

−

) with d

±

=

1

σ

√

T

(ln

x

K

+ (r ±

1

2

σ

2

)T). Let f(y) =

1

√

2π

e

−

y

2

2

,

then f

(y) = −yf(y),

c

K

(0, T, x, K) = xf(d

+

)

∂d

+

∂y

−e

−rT

N(d

−

) −Ke

−rT

f(d

−

)

∂d

−

∂y

= xf(d

+

)

−1

σ

√

TK

−e

−rT

N(d

−

) +e

−rT

f(d

−

)

1

σ

√

T

,

49

and

c

KK

(0, T, x, K)

= xf(d

+

)

1

σ

√

TK

2

−

x

σ

√

TK

f(d

+

)(−d

+

)

∂d

+

∂y

−e

−rT

f(d

−

)

∂d

−

∂y

+

e

−rT

σ

√

T

(−d

−

)f(d

−

)

d

−

∂y

=

x

σ

√

TK

2

f(d

+

) +

xd

+

σ

√

TK

f(d

+

)

−1

Kσ

√

T

−e

−rT

f(d

−

)

−1

Kσ

√

T

−

e

−rT

d

−

σ

√

T

f(d

−

)

−1

Kσ

√

T

= f(d

+

)

x

K

2

σ

√

T

[1 −

d

+

σ

√

T

] +

e

−rT

f(d

−

)

Kσ

√

T

[1 +

d

−

σ

√

T

]

=

e

−rT

Kσ

2

T

f(d

−

)d

+

−

x

K

2

σ

2

T

f(d

+

)d

−

.

5.10. (i)

Proof. At time t

0

, the value of the chooser option is V (t

0

) = max¦C(t

0

), P(t

0

)¦ = max¦C(t

0

), C(t

0

) −

F(t

0

)¦ = C(t

0

) + max¦0, −F(t

0

)¦ = C(t

0

) + (e

−r(T−t0)

K −S(t

0

))

+

.

(ii)

Proof. By the risk-neutral pricing formula, V (0) =

¯

E[e

−rt0

V (t

0

)] =

¯

E[e

−rt0

C(t

0

)+(e

−rT

K−e

−rt0

S(t

0

)

+

] =

C(0) +

¯

E[e

−rt0

(e

−r(T−t0)

K − S(t

0

))

+

]. The ﬁrst term is the value of a call expiring at time T with strike

price K and the second term is the value of a put expiring at time t

0

with strike price e

−r(T−t0)

K.

5.11.

Proof. We ﬁrst make an analysis which leads to the hint, then we give a formal proof.

(Analysis) If we want to construct a portfolio X that exactly replicates the cash ﬂow, we must ﬁnd a

solution to the backward SDE

dX

t

= ∆

t

dS

t

+R

t

(X

t

−∆

t

S

t

)dt −C

t

dt

X

T

= 0.

Multiply D

t

on both sides of the ﬁrst equation and apply Itˆo’s product rule, we get d(D

t

X

t

) = ∆

t

d(D

t

S

t

) −

C

t

D

t

dt. Integrate from 0 to T, we have D

T

X

T

− D

0

X

0

=

T

0

∆

t

d(D

t

S

t

) −

T

0

C

t

D

t

dt. By the terminal

condition, we get X

0

= D

−1

0

(

T

0

C

t

D

t

dt −

T

0

∆

t

d(D

t

S

t

)). X

0

is the theoretical, no-arbitrage price of

the cash ﬂow, provided we can ﬁnd a trading strategy ∆ that solves the BSDE. Note the SDE for S

gives d(D

t

S

t

) = (D

t

S

t

)σ

t

(θ

t

dt + dW

t

), where θ

t

=

αt−Rt

σt

. Take the proper change of measure so that

W

t

=

t

0

θ

s

ds +W

t

is a Brownian motion under the new measure

¯

P, we get

T

0

C

t

D

t

dt = D

0

X

0

+

T

0

∆

t

d(D

t

S

t

) = D

0

X

0

+

T

0

∆

t

(D

t

S

t

)σ

t

d

W

t

.

This says the random variable

T

0

C

t

D

t

dt has a stochastic integral representation D

0

X

0

+

T

0

∆

t

D

t

S

t

σ

t

d

W

t

.

This inspires us to consider the martingale generated by

T

0

C

t

D

t

dt, so that we can apply Martingale Rep-

resentation Theorem and get a formula for ∆ by comparison of the integrands.

50

(Formal proof) Let M

T

=

T

0

C

t

D

t

dt, and M

t

=

¯

E[M

T

[T

t

]. Then by Martingale Representation Theo-

rem, we can ﬁnd an adapted process

¯

Γ

t

, so that M

t

= M

0

+

t

0

¯

Γ

t

d

W

t

. If we set ∆

t

=

Γt

DtStσt

, we can check

X

t

= D

−1

t

(D

0

X

0

+

t

0

∆

u

d(D

u

S

u

) −

t

0

C

u

D

u

du), with X

0

= M

0

=

¯

E[

T

0

C

t

D

t

dt] solves the SDE

dX

t

= ∆

t

dS

t

+R

t

(X

t

−∆

t

S

t

)dt −C

t

dt

X

T

= 0.

Indeed, it is easy to see that X satisﬁes the ﬁrst equation. To check the terminal condition, we note

X

T

D

T

= D

0

X

0

+

T

0

∆

t

D

t

S

t

σ

t

d

W

t

−

T

0

C

t

D

t

dt = M

0

+

T

0

¯

Γ

t

d

W

t

−M

T

= 0. So X

T

= 0. Thus, we have

found a trading strategy ∆, so that the corresponding portfolio X replicates the cash ﬂow and has zero

terminal value. So X

0

=

¯

E[

T

0

C

t

D

t

dt] is the no-arbitrage price of the cash ﬂow at time zero.

Remark: As shown in the analysis, d(D

t

X

t

) = ∆

t

d(D

t

S

t

) − C

t

D

t

dt. Integrate from t to T, we get

0 − D

t

X

t

=

T

t

∆

u

d(D

u

S

u

) −

T

t

C

u

D

u

du. Take conditional expectation w.r.t. T

t

on both sides, we get

−D

t

X

t

= −

¯

E[

T

t

C

u

D

u

du[T

t

]. So X

t

= D

−1

t

¯

E[

T

t

C

u

D

u

du[T

t

]. This is the no-arbitrage price of the cash

ﬂow at time t, and we have justiﬁed formula (5.6.10) in the textbook.

5.12. (i)

Proof. d

¯

B

i

(t) = dB

i

(t) +γ

i

(t)dt =

¸

d

j=1

σij(t)

σi(t)

dW

j

(t) +

¸

d

j=1

σij(t)

σi(t)

Θ

j

(t)dt =

¸

d

j=1

σij(t)

σi(t)

d

W

j

(t). So B

i

is a

martingale. Since d

¯

B

i

(t)d

¯

B

i

(t) =

¸

d

j=1

σij(t)

2

σi(t)

2

dt = dt, by L´evy’s Theorem,

¯

B

i

is a Brownian motion under

¯

P.

(ii)

Proof.

dS

i

(t) = R(t)S

i

(t)dt +σ

i

(t)S

i

(t)d

¯

B

i

(t) + (α

i

(t) −R(t))S

i

(t)dt −σ

i

(t)S

i

(t)γ

i

(t)dt

= R(t)S

i

(t)dt +σ

i

(t)S

i

(t)d

¯

B

i

(t) +

d

¸

j=1

σ

ij

(t)Θ

j

(t)S

i

(t)dt −S

i

(t)

d

¸

j=1

σ

ij

(t)Θ

j

(t)dt

= R(t)S

i

(t)dt +σ

i

(t)S

i

(t)d

¯

B

i

(t).

(iii)

Proof. d

¯

B

i

(t)d

¯

B

k

(t) = (dB

i

(t) +γ

i

(t)dt)(dB

j

(t) +γ

j

(t)dt) = dB

i

(t)dB

j

(t) = ρ

ik

(t)dt.

(iv)

Proof. By Itˆ o’s product rule and martingale property,

E[B

i

(t)B

k

(t)] = E[

t

0

B

i

(s)dB

k

(s)] +E[

t

0

B

k

(s)dB

i

(s)] +E[

t

0

dB

i

(s)dB

k

(s)]

= E[

t

0

ρ

ik

(s)ds] =

t

0

ρ

ik

(s)ds.

Similarly, by part (iii), we can show

¯

E[

¯

B

i

(t)

¯

B

k

(t)] =

t

0

ρ

ik

(s)ds.

(v)

51

Proof. By Itˆ o’s product formula,

E[B

1

(t)B

2

(t)] = E[

t

0

sign(W

1

(u))du] =

t

0

[P(W

1

(u) ≥ 0) −P(W

1

(u) < 0)]du = 0.

Meanwhile,

¯

E[

¯

B

1

(t)

¯

B

2

(t)] =

¯

E[

t

0

sign(W

1

(u))du

=

t

0

[

¯

P(W

1

(u) ≥ 0) −

¯

P(W

1

(u) < 0)]du

=

t

0

[

¯

P(

W

1

(u) ≥ u) −

¯

P(

W

1

(u) < u)]du

=

t

0

2

1

2

−

¯

P(

W

1

(u) < u)

du

< 0,

for any t > 0. So E[B

1

(t)B

2

(t)] =

¯

E[

¯

B

1

(t)

¯

B

2

(t)] for all t > 0.

5.13. (i)

Proof.

¯

E[W

1

(t)] =

¯

E[

W

1

(t)] = 0 and

¯

E[W

2

(t)] =

¯

E[

W

2

(t) −

t

0

W

1

(u)du] = 0, for all t ∈ [0, T].

(ii)

Proof.

¯

Cov[W

1

(T), W

2

(T)] =

¯

E[W

1

(T)W

2

(T)]

=

¯

E

¸

T

0

W

1

(t)dW

2

(t) +

T

0

W

2

(t)dW

1

(t)

¸

=

¯

E

¸

T

0

W

1

(t)(d

W

2

(t) −

W

1

(t)dt)

¸

+

¯

E

¸

T

0

W

2

(t)d

W

1

(t)

¸

= −

¯

E

¸

T

0

W

1

(t)

2

dt

¸

= −

T

0

tdt

= −

1

2

T

2

.

5.14. Equation (5.9.6) can be transformed into d(e

−rt

X

t

) = ∆

t

[d(e

−rt

S

t

) −ae

−rt

dt] = ∆

t

e

−rt

[dS

t

−rS

t

dt −

adt]. So, to make the discounted portfolio value e

−rt

X

t

a martingale, we are motivated to change the measure

in such a way that S

t

−r

t

0

S

u

du−at is a martingale under the new measure. To do this, we note the SDE for S

is dS

t

= α

t

S

t

dt+σS

t

dW

t

. Hence dS

t

−rS

t

dt−adt = [(α

t

−r)S

t

−a]dt+σS

t

dW

t

= σS

t

(αt−r)St−a

σSt

dt +dW

t

.

Set θ

t

=

(αt−r)St−a

σSt

and

W

t

=

t

0

θ

s

ds +W

t

, we can ﬁnd an equivalent probability measure

¯

P, under which

S satisﬁes the SDE dS

t

= rS

t

dt +σS

t

d

W

t

+adt and

W

t

is a BM. This is the rational for formula (5.9.7).

This is a good place to pause and think about the meaning of “martingale measure.” What is to be

a martingale? The new measure

¯

P should be such that the discounted value process of the replicating

52

portfolio is a martingale, not the discounted price process of the underlying. First, we want D

t

X

t

to be a

martingale under

¯

P because we suppose that X is able to replicate the derivative payoﬀ at terminal time,

X

T

= V

T

. In order to avoid arbitrage, we must have X

t

= V

t

for any t ∈ [0, T]. The diﬃculty is how

to calculate X

t

and the magic is brought by the martingale measure in the following line of reasoning:

V

t

= X

t

= D

−1

t

¯

E[D

T

X

T

[T

t

] = D

−1

t

¯

E[D

T

V

T

[T

t

]. You can think of martingale measure as a calculational

convenience. That is all about martingale measure! Risk neutral is a just perception, referring to the

actual eﬀect of constructing a hedging portfolio! Second, we note when the portfolio is self-ﬁnancing, the

discounted price process of the underlying is a martingale under

¯

P, as in the classical Black-Scholes-Merton

model without dividends or cost of carry. This is not a coincidence. Indeed, we have in this case the

relation d(D

t

X

t

) = ∆

t

d(D

t

S

t

). So D

t

X

t

being a martingale under

¯

P is more or less equivalent to D

t

S

t

being a martingale under

¯

P. However, when the underlying pays dividends, or there is cost of carry,

d(D

t

X

t

) = ∆

t

d(D

t

S

t

) no longer holds, as shown in formula (5.9.6). The portfolio is no longer self-ﬁnancing,

but self-ﬁnancing with consumption. What we still want to retain is the martingale property of D

t

X

t

, not

that of D

t

S

t

. This is how we choose martingale measure in the above paragraph.

Let V

T

be a payoﬀ at time T, then for the martingale M

t

=

¯

E[e

−rT

V

T

[T

t

], by Martingale Representation

Theorem, we can ﬁnd an adapted process

¯

Γ

t

, so that M

t

= M

0

+

t

0

¯

Γ

s

d

W

s

. If we let ∆

t

=

Γte

rt

σSt

, then the

value of the corresponding portfolio X satisﬁes d(e

−rt

X

t

) =

¯

Γ

t

d

W

t

. So by setting X

0

= M

0

=

¯

E[e

−rT

V

T

],

we must have e

−rt

X

t

= M

t

, for all t ∈ [0, T]. In particular, X

T

= V

T

. Thus the portfolio perfectly hedges

V

T

. This justiﬁes the risk-neutral pricing of European-type contingent claims in the model where cost of

carry exists. Also note the risk-neutral measure is diﬀerent from the one in case of no cost of carry.

Another perspective for perfect replication is the following. We need to solve the backward SDE

dX

t

= ∆

t

dS

t

−a∆

t

dt +r(X

t

−∆

t

S

t

)dt

X

T

= V

T

for two unknowns, X and ∆. To do so, we ﬁnd a probability measure

¯

P, under which e

−rt

X

t

is a martingale,

then e

−rt

X

t

=

¯

E[e

−rT

V

T

[T

t

] := M

t

. Martingale Representation Theorem gives M

t

= M

0

+

t

0

¯

Γ

u

d

W

u

for

some adapted process

¯

Γ. This would give us a theoretical representation of ∆ by comparison of integrands,

hence a perfect replication of V

T

.

(i)

Proof. As indicated in the above analysis, if we have (5.9.7) under

¯

P, then d(e

−rt

X

t

) = ∆

t

[d(e

−rt

S

t

) −

ae

−rt

dt] = ∆

t

e

−rt

σS

t

d

W

t

. So (e

−rt

X

t

)

t≥0

, where X is given by (5.9.6), is a

¯

P-martingale.

(ii)

Proof. By Itˆo’s formula, dY

t

= Y

t

[σd

W

t

+ (r −

1

2

σ

2

)dt] +

1

2

Y

t

σ

2

dt = Y

t

(σd

W

t

+ rdt). So d(e

−rt

Y

t

) =

σe

−rt

Y

t

d

W

t

and e

−rt

Y

t

is a

¯

P-martingale. Moreover, if S

t

= S

0

Y

t

+Y

t

t

0

a

Ys

ds, then

dS

t

= S

0

dY

t

+

t

0

a

Y

s

dsdY

t

+adt =

S

0

+

t

0

a

Y

s

ds

Y

t

(σd

W

t

+rdt) +adt = S

t

(σd

W

t

+rdt) +adt.

This shows S satisﬁes (5.9.7).

Remark: To obtain this formula for S, we ﬁrst set U

t

= e

−rt

S

t

to remove the rS

t

dt term. The SDE for

U is dU

t

= σU

t

d

W

t

+ ae

−rt

dt. Just like solving linear ODE, to remove U in the d

W

t

term, we consider

V

t

= U

t

e

−σ

Wt

. Itˆo’s product formula yields

dV

t

= e

−σ

Wt

dU

t

+U

t

e

−σ

Wt

(−σ)d

W

t

+

1

2

σ

2

dt

+dU

t

e

−σ

Wt

(−σ)d

W

t

+

1

2

σ

2

dt

= e

−σ

Wt

ae

−rt

dt −

1

2

σ

2

V

t

dt.

53

Note V appears only in the dt term, so multiply the integration factor e

1

2

σ

2

t

on both sides of the equation,

we get

d(e

1

2

σ

2

t

V

t

) = ae

−rt−σ

Wt+

1

2

σ

2

t

dt.

Set Y

t

= e

σ

Wt+(r−

1

2

σ

2

)t

, we have d(S

t

/Y

t

) = adt/Y

t

. So S

t

= Y

t

(S

0

+

t

0

ads

Ys

).

(iii)

Proof.

¯

E[S

T

[T

t

] = S

0

¯

E[Y

T

[T

t

] +

¯

E

¸

Y

T

t

0

a

Y

s

ds +Y

T

T

t

a

Y

s

ds[T

t

¸

= S

0

¯

E[Y

T

[T

t

] +

t

0

a

Y

s

ds

¯

E[Y

T

[T

t

] +a

T

t

¯

E

¸

Y

T

Y

s

[T

t

ds

= S

0

Y

t

¯

E[Y

T−t

] +

t

0

a

Y

s

dsY

t

¯

E[Y

T−t

] +a

T

t

¯

E[Y

T−s

]ds

=

S

0

+

t

0

a

Y

s

ds

Y

t

e

r(T−t)

+a

T

t

e

r(T−s)

ds

=

S

0

+

t

0

ads

Y

s

Y

t

e

r(T−t)

−

a

r

(1 −e

r(T−t)

).

In particular,

¯

E[S

T

] = S

0

e

rT

−

a

r

(1 −e

rT

).

(iv)

Proof.

d

¯

E[S

T

[T

t

] = ae

r(T−t)

dt +

S

0

+

t

0

ads

Y

s

(e

r(T−t)

dY

t

−rY

t

e

r(T−t)

dt) +

a

r

e

r(T−t)

(−r)dt

=

S

0

+

t

0

ads

Y

s

e

r(T−t)

σY

t

d

W

t

.

So

¯

E[S

T

[T

t

] is a

¯

P-martingale. As we have argued at the beginning of the solution, risk-neutral pricing is

valid even in the presence of cost of carry. So by an argument similar to that of '5.6.2, the process

¯

E[S

T

[T

t

]

is the futures price process for the commodity.

(v)

Proof. We solve the equation

¯

E[e

−r(T−t)

(S

T

− K)[T

t

] = 0 for K, and get K =

¯

E[S

T

[T

t

]. So For

S

(t, T) =

Fut

S

(t, T).

(vi)

Proof. We follow the hint. First, we solve the SDE

dX

t

= dS

t

−adt +r(X

t

−S

t

)dt

X

0

= 0.

By our analysis in part (i), d(e

−rt

X

t

) = d(e

−rt

S

t

) − ae

−rt

dt. Integrate from 0 to t on both sides, we get

X

t

= S

t

− S

0

e

rt

+

a

r

(1 − e

rt

) = S

t

− S

0

e

rt

−

a

r

(e

rt

− 1). In particular, X

T

= S

T

− S

0

e

rT

−

a

r

(e

rT

− 1).

Meanwhile, For

S

(t, T) = Fut

s

(t, T) =

¯

E[S

T

[T

t

] =

S

0

+

t

0

ads

Ys

Y

t

e

r(T−t)

−

a

r

(1−e

r(T−t)

). So For

S

(0, T) =

S

0

e

rT

−

a

r

(1 −e

rT

) and hence X

T

= S

T

−For

S

(0, T). After the agent delivers the commodity, whose value

is S

T

, and receives the forward price For

S

(0, T), the portfolio has exactly zero value.

54

6. Connections with Partial Diﬀerential Equations

6.1. (i)

Proof. Z

t

= 1 is obvious. Note the form of Z is similar to that of a geometric Brownian motion. So by Itˆo’s

formula, it is easy to obtain dZ

u

= b

u

Z

u

du +σ

u

Z

u

dW

u

, u ≥ t.

(ii)

Proof. If X

u

= Y

u

Z

u

(u ≥ t), then X

t

= Y

t

Z

t

= x 1 = x and

dX

u

= Y

u

dZ

u

+Z

u

dY

u

+dY

u

Z

u

= Y

u

(b

u

Z

u

du +σ

u

Z

u

dW

u

) +Z

u

a

u

−σ

u

γ

u

Z

u

du +

γ

u

Z

u

dW

u

+σ

u

Z

u

γ

u

Z

u

du

= [Y

u

b

u

Z

u

+ (a

u

−σ

u

γ

u

) +σ

u

γ

u

]du + (σ

u

Z

u

Y

u

+γ

u

)dW

u

= (b

u

X

u

+a

u

)du + (σ

u

X

u

+γ

u

)dW

u

.

Remark: To see how to ﬁnd the above solution, we manipulate the equation (6.2.4) as follows. First, to

remove the term b

u

X

u

du, we multiply on both sides of (6.2.4) the integrating factor e

−

u

t

bvdv

. Then

d(X

u

e

−

u

t

bvdv

) = e

−

u

t

bvdv

(a

u

du + (γ

u

+σ

u

X

u

)dW

u

).

Let

¯

X

u

= e

−

u

t

bvdv

X

u

, ¯ a

u

= e

−

u

t

bvdv

a

u

and ¯ γ

u

= e

−

u

t

bvdv

γ

u

, then

¯

X satisﬁes the SDE

d

¯

X

u

= ¯ a

u

du + (¯ γ

u

+σ

u

¯

X

u

)dW

u

= (¯ a

u

du + ¯ γ

u

dW

u

) +σ

u

¯

X

u

dW

u

.

To deal with the term σ

u

¯

X

u

dW

u

, we consider

ˆ

X

u

=

¯

X

u

e

−

u

t

σvdWv

. Then

d

ˆ

X

u

= e

−

u

t

σvdWv

[(¯ a

u

du + ¯ γ

u

dW

u

) +σ

u

¯

X

u

dW

u

] +

¯

X

u

e

−

u

t

σvdWv

(−σ

u

)dW

u

+

1

2

e

−

u

t

σvdWv

σ

2

u

du

+(¯ γ

u

+σ

u

¯

X

u

)(−σ

u

)e

−

u

t

σvdWv

du

= ˆ a

u

du + ˆ γ

u

dW

u

+σ

u

ˆ

X

u

dW

u

−σ

u

ˆ

X

u

dW

u

+

1

2

ˆ

X

u

σ

2

u

du −σ

u

(ˆ γ

u

+σ

u

ˆ

X

u

)du

= (ˆ a

u

−σ

u

ˆ γ

u

−

1

2

ˆ

X

u

σ

2

u

)du + ˆ γ

u

dW

u

,

where ˆ a

u

= ¯ a

u

e

−

u

t

σvdWv

and ˆ γ

u

= ¯ γ

u

e

−

u

t

σvdWv

. Finally, use the integrating factor e

u

t

1

2

σ

2

v

dv

, we have

d

ˆ

X

u

e

1

2

u

t

σ

2

v

dv

= e

1

2

u

t

σ

2

v

dv

(d

ˆ

X

u

+

ˆ

X

u

1

2

σ

2

u

du) = e

1

2

u

t

σ

2

v

dv

[(ˆ a

u

−σ

u

ˆ γ

u

)du + ˆ γ

u

dW

u

].

Write everything back into the original X, a and γ, we get

d

X

u

e

−

u

t

bvdv−

u

t

σvdWv+

1

2

u

t

σ

2

v

dv

= e

1

2

u

t

σ

2

v

dv−

u

t

σvdWv−

u

t

bvdv

[(a

u

−σ

u

γ

u

)du +γ

u

dW

u

],

i.e.

d

X

u

Z

u

=

1

Z

u

[(a

u

−σ

u

γ

u

)du +γ

u

dW

u

] = dY

u

.

This inspired us to try X

u

= Y

u

Z

u

.

6.2. (i)

55

Proof. The portfolio is self-ﬁnancing, so for any t ≤ T

1

, we have

dX

t

= ∆

1

(t)df(t, R

t

, T

1

) + ∆

2

(t)df(t, R

t

, T

2

) +R

t

(X

t

−∆

1

(t)f(t, R

t

, T

1

) −∆

2

(t)f(t, R

t

, T

2

))dt,

and

d(D

t

X

t

)

= −R

t

D

t

X

t

dt +D

t

dX

t

= D

t

[∆

1

(t)df(t, R

t

, T

1

) + ∆

2

(t)df(t, R

t

, T

2

) −R

t

(∆

1

(t)f(t, R

t

, T

1

) + ∆

2

(t)f(t, R

t

, T

2

))dt]

= D

t

[∆

1

(t)

f

t

(t, R

t

, T

1

)dt +f

r

(t, R

t

, T

1

)dR

t

+

1

2

f

rr

(t, R

t

, T

1

)γ

2

(t, R

t

)dt

+∆

2

(t)

f

t

(t, R

t

, T

2

)dt +f

r

(t, R

t

, T

2

)dR

t

+

1

2

f

rr

(t, R

t

, T

2

)γ

2

(t, R

t

)dt

−R

t

(∆

1

(t)f(t, R

t

, T

1

) + ∆

2

(t)f(t, R

t

, T

2

))dt]

= ∆

1

(t)D

t

[−R

t

f(t, R

t

, T

1

) +f

t

(t, R

t

, T

1

) +α(t, R

t

)f

r

(t, R

t

, T

1

) +

1

2

γ

2

(t, R

t

)f

rr

(t, R

t

, T

1

)]dt

+∆

2

(t)D

t

[−R

t

f(t, R

t

, T

2

) +f

t

(t, R

t

, T

2

) +α(t, R

t

)f

r

(t, R

t

, T

2

) +

1

2

γ

2

(t, R

t

)f

rr

(t, R

t

, T

2

)]dt

+D

t

γ(t, R

t

)[D

t

γ(t, R

t

)[∆

1

(t)f

r

(t, R

t

, T

1

) + ∆

2

(t)f

r

(t, R

t

, T

2

)]]dW

t

= ∆

1

(t)D

t

[α(t, R

t

) −β(t, R

t

, T

1

)]f

r

(t, R

t

, T

1

)dt + ∆

2

(t)D

t

[α(t, R

t

) −β(t, R

t

, T

2

)]f

r

(t, R

t

, T

2

)dt

+D

t

γ(t, R

t

)[∆

1

(t)f

r

(t, R

t

, T

1

) + ∆

2

(t)f

r

(t, R

t

, T

2

)]dW

t

.

(ii)

Proof. Let ∆

1

(t) = S

t

f

r

(t, R

t

, T

2

) and ∆

2

(t) = −S

t

f

r

(t, R

t

, T

1

), then

d(D

t

X

t

) = D

t

S

t

[β(t, R

t

, T

2

) −β(t, R

t

, T

1

)]f

r

(t, R

t

, T

1

)f

r

(t, R

t

, T

2

)dt

= D

t

[[β(t, R

t

, T

1

) −β(t, R

t

, T

2

)]f

r

(t, R

t

, T

1

)f

r

(t, R

t

, T

2

)[dt.

Integrate from 0 to T on both sides of the above equation, we get

D

T

X

T

−D

0

X

0

=

T

0

D

t

[[β(t, R

t

, T

1

) −β(t, R

t

, T

2

)]f

r

(t, R

t

, T

1

)f

r

(t, R

t

, T

2

)[dt.

If β(t, R

t

, T

1

) = β(t, R

t

, T

2

) for some t ∈ [0, T], under the assumption that f

r

(t, r, T) = 0 for all values of r

and 0 ≤ t ≤ T, D

T

X

T

− D

0

X

0

> 0. To avoid arbitrage (see, for example, Exercise 5.7), we must have for

a.s. ω, β(t, R

t

, T

1

) = β(t, R

t

, T

2

), ∀t ∈ [0, T]. This implies β(t, r, T) does not depend on T.

(iii)

Proof. In (6.9.4), let ∆

1

(t) = ∆(t), T

1

= T and ∆

2

(t) = 0, we get

d(D

t

X

t

) = ∆(t)D

t

¸

−R

t

f(t, R

t

, T) +f

t

(t, R

t

, T) +α(t, R

t

)f

r

(t, R

t

, T) +

1

2

γ

2

(t, R

t

)f

rr

(t, R

t

, T)

dt

+D

t

γ(t, R

t

)∆(t)f

r

(t, R

t

, T)dW

t

.

This is formula (6.9.5).

If f

r

(t, r, T) = 0, then d(D

t

X

t

) = ∆(t)D

t

−R

t

f(t, R

t

, T) +f

t

(t, R

t

, T) +

1

2

γ

2

(t, R

t

)f

rr

(t, R

t

, T)

dt. We

choose ∆(t) = sign

¸

−R

t

f(t, R

t

, T) +f

t

(t, R

t

, T) +

1

2

γ

2

(t, R

t

)f

rr

(t, R

t

, T)

¸

. To avoid arbitrage in this

case, we must have f

t

(t, R

t

, T) +

1

2

γ

2

(t, R

t

)f

rr

(t, R

t

, T) = R

t

f(t, R

t

, T), or equivalently, for any r in the

range of R

t

, f

t

(t, r, T) +

1

2

γ

2

(t, r)f

rr

(t, r, T) = rf(t, r, T).

56

6.3.

Proof. We note

d

ds

e

−

s

0

bvdv

C(s, T)

= e

−

s

0

bvdv

[C(s, T)(−b

s

) +b

s

C(s, T) −1] = −e

−

s

0

bvdv

.

So integrate on both sides of the equation from t to T, we obtain

e

−

T

0

bvdv

C(T, T) −e

−

t

0

bvdv

C(t, T) = −

T

t

e

−

s

0

bvdv

ds.

Since C(T, T) = 0, we have C(t, T) = e

t

0

bvdv

T

t

e

−

s

0

bvdv

ds =

T

t

e

t

s

bvdv

ds. Finally, by A

(s, T) =

−a(s)C(s, T) +

1

2

σ

2

(s)C

2

(s, T), we get

A(T, T) −A(t, T) = −

T

t

a(s)C(s, T)ds +

1

2

T

t

σ

2

(s)C

2

(s, T)ds.

Since A(T, T) = 0, we have A(t, T) =

T

t

(a(s)C(s, T) −

1

2

σ

2

(s)C

2

(s, T))ds.

6.4. (i)

Proof. By the deﬁnition of ϕ, we have

ϕ

(t) = e

1

2

σ

2

T

t

C(u,T)du

1

2

σ

2

(−1)C(t, T) = −

1

2

ϕ(t)σ

2

C(t, T).

So C(t, T) = −

2ϕ

(t)

φ(t)σ

2

. Diﬀerentiate both sides of the equation ϕ

(t) = −

1

2

ϕ(t)σ

2

C(t, T), we get

ϕ

(t) = −

1

2

σ

2

[ϕ

(t)C(t, T) +ϕ(t)C

(t, T)]

= −

1

2

σ

2

[−

1

2

ϕ(t)σ

2

C

2

(t, T) +ϕ(t)C

(t, T)]

=

1

4

σ

4

ϕ(t)C

2

(t, T) −

1

2

σ

2

ϕ(t)C

(t, T).

So C

(t, T) =

1

4

σ

4

ϕ(t)C

2

(t, T) −ϕ

(t)

/

1

2

ϕ(t)σ

2

=

1

2

σ

2

C

2

(t, T) −

2ϕ

(t)

σ

2

ϕ(t)

.

(ii)

Proof. Plug formulas (6.9.8) and (6.9.9) into (6.5.14), we get

−

2ϕ

(t)

σ

2

ϕ(t)

+

1

2

σ

2

C

2

(t, T) = b(−1)

2ϕ

(t)

σ

2

ϕ(t)

+

1

2

σ

2

C

2

(t, T) −1,

i.e. ϕ

(t) −bϕ

(t) −

1

2

σ

2

ϕ(t) = 0.

(iii)

Proof. The characteristic equation of ϕ

(t) − bϕ

(t) −

1

2

σ

2

ϕ(t) = 0 is λ

2

− bλ −

1

2

σ

2

= 0, which gives two

roots

1

2

(b ±

√

b

2

+ 2σ

2

) =

1

2

b ±γ with γ =

1

2

√

b

2

+ 2σ

2

. Therefore by standard theory of ordinary diﬀerential

equations, a general solution of ϕ is ϕ(t) = e

1

2

bt

(a

1

e

γt

+ a

2

e

−γt

) for some constants a

1

and a

2

. It is then

easy to see that we can choose appropriate constants c

1

and c

2

so that

ϕ(t) =

c

1

1

2

b +γ

e

−(

1

2

b+γ)(T−t)

−

c

2

1

2

b −γ

e

−(

1

2

b−γ)(T−t)

.

57

(iv)

Proof. From part (iii), it is easy to see ϕ

(t) = c

1

e

−(

1

2

b+γ)(T−t)

−c

2

e

−(

1

2

b−γ)(T−t)

. In particular,

0 = C(T, T) = −

2ϕ

(T)

σ

2

ϕ(T)

= −

2(c

1

−c

2

)

σ

2

ϕ(T)

.

So c

1

= c

2

.

(v)

Proof. We ﬁrst recall the deﬁnitions and properties of sinh and cosh:

sinh z =

e

z

−e

−z

2

, cosh z =

e

z

+e

−z

2

, (sinh z)

= cosh z, and (cosh z)

= sinh z.

Therefore

ϕ(t) = c

1

e

−

1

2

b(T−t)

¸

e

−γ(T−t)

1

2

b +γ

−

e

γ(T−t)

1

2

b −γ

= c

1

e

−

1

2

b(T−t)

¸

1

2

b −γ

1

4

b

2

−γ

2

e

−γ(T−t)

−

1

2

b +γ

1

4

b

2

−γ

2

e

γ(T−t)

=

2c

1

σ

2

e

−

1

2

b(T−t)

¸

−(

1

2

b −γ)e

−γ(T−t)

+ (

1

2

b +γ)e

γ(T−t)

=

2c

1

σ

2

e

−

1

2

b(T−t)

[b sinh(γ(T −t)) + 2γ cosh(γ(T −t))].

and

ϕ

(t) =

1

2

b

2c

1

σ

2

e

−

1

2

b(T−t)

[b sinh(γ(T −t)) + 2γ cosh(γ(T −t))]

+

2c

1

σ

2

e

−

1

2

b(T−t)

[−γb cosh(γ(T −t)) −2γ

2

sinh(γ(T −t))]

= 2c

1

e

−

1

2

b(T−t)

¸

b

2

2σ

2

sinh(γ(T −t)) +

bγ

σ

2

cosh(γ(T −t)) −

bγ

σ

2

cosh(γ(T −t)) −

2γ

2

σ

2

sinh(γ(T −t))

= 2c

1

e

−

1

2

b(T−t)

b

2

−4γ

2

2σ

2

sinh(γ(T −t))

= −2c

1

e

−

1

2

b(T−t)

sinh(γ(T −t)).

This implies

C(t, T) = −

2ϕ

(t)

σ

2

ϕ(t)

=

sinh(γ(T −t))

γ cosh(γ(T −t)) +

1

2

b sinh(γ(T −t))

.

(vi)

Proof. By (6.5.15) and (6.9.8), A

(t, T) =

2aϕ

(t)

σ

2

ϕ(t)

. Hence

A(T, T) −A(t, T) =

T

t

2aϕ

(s)

σ

2

ϕ(s)

ds =

2a

σ

2

ln

ϕ(T)

ϕ(t)

,

and

A(t, T) = −

2a

σ

2

ln

ϕ(T)

ϕ(t)

= −

2a

σ

2

ln

¸

γe

1

2

b(T−t)

γ cosh(γ(T −t)) +

1

2

b sinh(γ(T −t))

¸

.

58

6.5. (i)

Proof. Since g(t, X

1

(t), X

2

(t)) = E[h(X

1

(T), X

2

(T))[T

t

] and e

−rt

f(t, X

1

(t), X

2

(t)) = E[e

−rT

h(X

1

(T), X

2

(T))[T

t

],

iterated conditioning argument shows g(t, X

1

(t), X

2

(t)) and e

−rt

f(t, X

1

(t), X

2

(t)) ar both martingales.

(ii) and (iii)

Proof. We note

dg(t, X

1

(t), X

2

(t))

= g

t

dt +g

x1

dX

1

(t) +g

x2

dX

2

(t) +

1

2

g

x1x2

dX

1

(t)dX

1

(t) +

1

2

g

x2x2

dX

2

(t)dX

2

(t) +g

x1x2

dX

1

(t)dX

2

(t)

=

¸

g

t

+g

x1

β

1

+g

x2

β

2

+

1

2

g

x1x1

(γ

2

11

+γ

2

12

+ 2ργ

11

γ

12

) +g

x1x2

(γ

11

γ

21

+ργ

11

γ

22

+ργ

12

γ

21

+γ

12

γ

22

)

+

1

2

g

x2x2

(γ

2

21

+γ

2

22

+ 2ργ

21

γ

22

)

**dt + martingale part.
**

So we must have

g

t

+g

x1

β

1

+g

x2

β

2

+

1

2

g

x1x1

(γ

2

11

+γ

2

12

+ 2ργ

11

γ

12

) +g

x1x2

(γ

11

γ

21

+ργ

11

γ

22

+ργ

12

γ

21

+γ

12

γ

22

)

+

1

2

g

x2x2

(γ

2

21

+γ

2

22

+ 2ργ

21

γ

22

) = 0.

Taking ρ = 0 will give part (ii) as a special case. The PDE for f can be similarly obtained.

6.6. (i)

Proof. Multiply e

1

2

bt

on both sides of (6.9.15), we get

d(e

1

2

bt

X

j

(t)) = e

1

2

bt

X

j

(t)

1

2

bdt + (−

b

2

X

j

(t)dt +

1

2

σdW

j

(t)

= e

1

2

bt

1

2

σdW

j

(t).

So e

1

2

bt

X

j

(t) − X

j

(0) =

1

2

σ

t

0

e

1

2

bu

dW

j

(u) and X

j

(t) = e

−

1

2

bt

X

j

(0) +

1

2

σ

t

0

e

1

2

bu

dW

j

(u)

. By Theorem

4.4.9, X

j

(t) is normally distributed with mean X

j

(0)e

−

1

2

bt

and variance

e

−bt

4

σ

2

t

0

e

bu

du =

σ

2

4b

(1 −e

−bt

).

(ii)

Proof. Suppose R(t) =

¸

d

j=1

X

2

j

(t), then

dR(t) =

d

¸

j=1

(2X

j

(t)dX

j

(t) +dX

j

(t)dX

j

(t))

=

d

¸

j=1

2X

j

(t)dX

j

(t) +

1

4

σ

2

dt

=

d

¸

j=1

−bX

2

j

(t)dt +σX

j

(t)dW

j

(t) +

1

4

σ

2

dt

=

d

4

σ

2

−bR(t)

dt +σ

R(t)

d

¸

j=1

X

j

(t)

R(t)

dW

j

(t).

Let B(t) =

¸

d

j=1

t

0

Xj(s)

√

R(s)

dW

j

(s), then B is a local martingale with dB(t)dB(t) =

¸

d

j=1

X

2

j

(t)

R(t)

dt = dt. So

by L´evy’s Theorem, B is a Brownian motion. Therefore dR(t) = (a − bR(t))dt + σ

R(t)dB(t) (a :=

d

4

σ

2

)

and R is a CIR interest rate process.

59

(iii)

Proof. By (6.9.16), X

j

(t) is dependent on W

j

only and is normally distributed with mean e

−

1

2

bt

X

j

(0) and

variance

σ

2

4b

[1 −e

−bt

]. So X

1

(t), , X

d

(t) are i.i.d. normal with the same mean µ(t) and variance v(t).

(iv)

Proof.

E

e

uX

2

j

(t)

=

∞

−∞

e

ux

2 e

−

(x−µ(t))

2

2v(t)

dx

2πv(t)

=

∞

−∞

e

−

(1−2uv(t))x

2

−2µ(t)x+µ

2

(t)

2v(t)

2πv(t)

dx

=

∞

−∞

1

2πv(t)

e

−

(

x−

µ(t)

1−2uv(t)

)

2

+

µ

2

(t)

1−2uv(t)

−

µ

2

(t)

(1−2uv(t))

2

2v(t)/(1−2uv(t))

dx

=

∞

−∞

1 −2uv(t)

2πv(t)

e

−

(

x−

µ(t)

1−2uv(t)

)

2

2v(t)/(1−2uv(t))

dx

e

−

µ

2

(t)(1−2uv(t))−µ

2

(t)

2v(t)(1−2uv(t))

1 −2uv(t)

=

e

−

uµ

2

(t)

1−2uv(t)

1 −2uv(t)

.

(v)

Proof. By R(t) =

¸

d

j=1

X

2

j

(t) and the fact X

1

(t), , X

d

(t) are i.i.d.,

E[e

uR(t)

] = (E[e

uX

2

1

(t)

])

d

= (1 −2uv(t))

−

d

2

e

udµ

2

(t)

1−2uv(t)

= (1 −2uv(t))

−

2a

σ

2

e

−

e

−bt

uR(0)

1−2uv(t)

.

6.7. (i)

Proof. e

−rt

c(t, S

t

, V

t

) =

¯

E[e

−rT

(S

T

−K)

+

[T

t

] is a martingale by iterated conditioning argument. Since

d(e

−rt

c(t, S

t

, V

t

))

= e

−rt

¸

c(t, S

t

, V

t

)(−r) +c

t

(t, S

t

, V

t

) +c

s

(t, S

t

, V

t

)rS

t

+c

v

(t, S

t

, V

t

)(a −bV

t

) +

1

2

c

ss

(t, S

t

, V

t

)V

t

S

2

t

+

1

2

c

vv

(t, S

t

, V

t

)σ

2

V

t

+c

sv

(t, S

t

, V

t

)σV

t

S

t

ρ

**dt + martingale part,
**

we conclude rc = c

t

+rsc

s

+c

v

(a −bv) +

1

2

c

ss

vs

2

+

1

2

c

vv

σ

2

v +c

sv

σsvρ. This is equation (6.9.26).

(ii)

60

Proof. Suppose c(t, s, v) = sf(t, log s, v) −e

−r(T−t)

Kg(t, log s, v), then

c

t

= sf

t

(t, log s, v) −re

−r(T−t)

Kg(t, log s, v) −e

−r(T−t)

Kg

t

(t, log s, v),

c

s

= f(t, log s, v) +sf

s

(t, log s, v)

1

s

−e

−r(T−t)

Kg

s

(t, log s, v)

1

s

,

c

v

= sf

v

(t, log s, v) −e

−r(T−t)

Kg

v

(t, log s, v),

c

ss

= f

s

(t, log s, v)

1

s

+f

ss

(t, log s, v)

1

s

−e

−r(T−t)

Kg

ss

(t, log s, v)

1

s

2

+e

−r(T−t)

Kg

s

(t, log s, v)

1

s

2

,

c

sv

= f

v

(t, log s, v) +f

sv

(t, log s, v) −e

−r(T−t)

K

s

g

sv

(t, log s, v),

c

vv

= sf

vv

(t, log s, v) −e

−r(T−t)

Kg

vv

(t, log s, v).

So

c

t

+rsc

s

+ (a −bv)c

v

+

1

2

s

2

vc

ss

+ρσsvc

sv

+

1

2

σ

2

vc

vv

= sf

t

−re

−r(T−t)

Kg −e

−r(T−t)

Kg

t

+rsf +rsf

s

−rKe

−r(T−t)

g

s

+ (a −bv)(sf

v

−e

−r(T−t)

Kg

v

)

+

1

2

s

2

v

¸

−

1

s

f

s

+

1

s

f

ss

−e

−r(T−t)

K

s

2

g

ss

+e

−r(T−t)

K

g

s

s

2

+ρσsv

f

v

+f

sv

−e

−r(T−t)

K

s

g

sv

+

1

2

σ

2

v(sf

vv

−e

−r(T−t)

Kg

vv

)

= s

¸

f

t

+ (r +

1

2

v)f

s

+ (a −bv +ρσv)f

v

+

1

2

vf

ss

+ρσvf

sv

+

1

2

σ

2

vf

vv

−Ke

−r(T−t)

¸

g

t

+ (r −

1

2

v)g

s

+(a −bv)g

v

+

1

2

vg

ss

+ρσvg

sv

+

1

2

σ

2

vg

vv

+rsf −re

−r(T−t)

Kg

= rc.

That is, c satisﬁes the PDE (6.9.26).

(iii)

Proof. First, by Markov property, f(t, X

t

, V

t

) = E[1

{X

T

≥log K}

[T

t

]. So f(T, X

t

, V

t

) = 1

{X

T

≥log K}

, which

implies f(T, x, v) = 1

{x≥log K}

for all x ∈ R, v ≥ 0. Second, f(t, X

t

, V

t

) is a martingale, so by diﬀerentiating

f and setting the dt term as zero, we have the PDE (6.9.32) for f. Indeed,

df(t, X

t

, V

t

) =

¸

f

t

(t, X

t

, V

t

) +f

x

(t, X

t

, V

t

)(r +

1

2

V

t

) +f

v

(t, X

t

, V

t

)(a −bv

t

+ρσV

t

) +

1

2

f

xx

(t, X

t

, V

t

)V

t

+

1

2

f

vv

(t, X

t

, V

t

)σ

2

V

t

+f

xv

(t, X

t

, V

t

)σV

t

ρ

**dt + martingale part.
**

So we must have f

t

+ (r +

1

2

v)f

x

+ (a −bv +ρσv)f

v

+

1

2

f

xx

v +

1

2

f

vv

σ

2

v +σvρf

xv

= 0. This is (6.9.32).

(iv)

Proof. Similar to (iii).

(v)

Proof. c(T, s, v) = sf(T, log s, v) − e

−r(T−t)

Kg(T, log s, v) = s1

{log s≥log K}

− K1

{log s≥log K}

= 1

{s≥K}

(s −

K) = (s −K)

+

.

6.8.

61

Proof. We follow the hint. Suppose h is smooth and compactly supported, then it is legitimate to exchange

integration and diﬀerentiation:

g

t

(t, x) =

∂

∂t

∞

0

h(y)p(t, T, x, y)dy =

∞

0

h(y)p

t

(t, T, x, y)dy,

g

x

(t, x) =

∞

0

h(y)p

x

(t, T, x, y)dy,

g

xx

(t, x) =

∞

0

h(y)p

xx

(t, T, x, y)dy.

So (6.9.45) implies

∞

0

h(y)

p

t

(t, T, x, y) +β(t, x)p

x

(t, T, x, y) +

1

2

γ

2

(t, x)p

xx

(t, T, x, y)

dy = 0. By the ar-

bitrariness of h and assuming β, p

t

, p

x

, v, p

xx

are all continuous, we have

p

t

(t, T, x, y) +β(t, x)p

x

(t, T, x, y) +

1

2

γ

2

(t, x)p

xx

(t, T, x, y) = 0.

This is (6.9.43).

6.9.

Proof. We ﬁrst note dh

b

(X

u

) = h

b

(X

u

)dX

u

+

1

2

h

b

(X

u

)dX

u

dX

u

=

h

b

(X

u

)β(u, X

u

) +

1

2

γ

2

(u, X

u

)h

b

(X

u

)

du+

h

b

(X

u

)γ(u, X

u

)dW

u

. Integrate on both sides of the equation, we have

h

b

(X

T

) −h

b

(X

t

) =

T

t

¸

h

b

(X

u

)β(u, X

u

) +

1

2

γ

2

(u, X

u

)h

b

(X

u

)

**du + martingale part.
**

Take expectation on both sides, we get

E

t,x

[h

b

(X

T

) −h

b

(X

t

)] =

∞

−∞

h

b

(y)p(t, T, x, y)dy −h(x)

=

T

t

E

t,x

[h

b

(X

u

)β(u, X

u

) +

1

2

γ

2

(u, X

u

)h

b

(X

u

)]du

=

T

t

∞

−∞

¸

h

b

(y)β(u, y) +

1

2

γ

2

(u, y)h

b

(y)

p(t, u, x, y)dydu.

Since h

b

vanishes outside (0, b), the integration range can be changed from (−∞, ∞) to (0, b), which gives

(6.9.48).

By integration-by-parts formula, we have

b

0

β(u, y)p(t, u, x, y)h

b

(y)dy = h

b

(y)β(u, y)p(t, u, x, y)[

b

0

−

b

0

h

b

(y)

∂

∂y

(β(u, y)p(t, u, x, y))dy

= −

b

0

h

b

(y)

∂

∂y

(β(u, y)p(t, u, x, y))dy,

and

b

0

γ

2

(u, y)p(t, u, x, y)h

b

(y)dy = −

b

0

∂

∂y

(γ

2

(u, y)p(t, u, x, y))h

b

(y)dy =

b

0

∂

2

∂y

(γ

2

(u, y)p(t, u, x, y))h

b

(y)dy.

Plug these formulas into (6.9.48), we get (6.9.49).

Diﬀerentiate w.r.t. T on both sides of (6.9.49), we have

b

0

h

b

(y)

∂

∂T

p(t, T, x, y)dy = −

b

0

∂

∂y

[β(T, y)p(t, T, x, y)]h

b

(y)dy +

1

2

b

0

∂

2

∂y

2

[γ

2

(T, y)p(t, T, x, y)]h

b

(y)dy,

62

that is,

b

0

h

b

(y)

¸

∂

∂T

p(t, T, x, y) +

∂

∂y

(β(T, y)p(t, T, x, y)) −

1

2

∂

2

∂y

2

(γ

2

(T, y)p(t, T, x, y))

dy = 0.

This is (6.9.50).

By (6.9.50) and the arbitrariness of h

b

, we conclude for any y ∈ (0, ∞),

∂

∂T

p(t, T, x, y) +

∂

∂y

(β(T, y)p(t, T, x, y)) −

1

2

∂

2

∂y

2

(γ

2

(T, y)p(t, T, x, y)) = 0.

6.10.

Proof. Under the assumption that lim

y→∞

(y −K)ry¯ p(0, T, x, y) = 0, we have

−

∞

K

(y−K)

∂

∂y

(ry¯ p(0, T, x, y))dy = −(y−K)ry¯ p(0, T, x, y)[

∞

K

+

∞

K

ry¯ p(0, T, x, y)dy =

∞

K

ry¯ p(0, T, x, y)dy.

If we further assume (6.9.57) and (6.9.58), then use integration-by-parts formula twice, we have

1

2

∞

K

(y −K)

∂

2

∂y

2

(σ

2

(T, y)y

2

¯ p(0, T, x, y))dy

=

1

2

¸

(y −K)

∂

∂y

(σ

2

(T, y)y

2

¯ p(0, T, x, y))[

∞

K

−

∞

K

∂

∂y

(σ

2

(T, y)y

2

¯ p(0, T, x, y))dy

= −

1

2

(σ

2

(T, y)y

2

¯ p(0, T, x, y)[

∞

K

)

=

1

2

σ

2

(T, K)K

2

¯ p(0, T, x, K).

Therefore,

c

T

(0, T, x, K) = −rc(0, T, x, K) +e

−rT

∞

K

(y −K)¯ p

T

(0, T, x, y)dy

= −re

−rT

∞

K

(y −K)¯ p(0, T, x, y)dy +e

−rT

∞

K

(y −K)¯ p

T

(0, T, x, y)dy

= −re

−rT

∞

K

(y −K)¯ p(0, T, x, y)dy −e

−rT

∞

K

(y −K)

∂

∂y

(ry¯ p(t, T, x, y))dy

+e

−rT

∞

K

(y −K)

1

2

∂

2

∂y

2

(σ

2

(T, y)y

2

¯ p(t, T, x, y))dy

= −re

−rT

∞

K

(y −K)¯ p(0, T, x, y)dy +e

−rT

∞

K

ry¯ p(0, T, x, y)dy

+e

−rT

1

2

σ

2

(T, K)K

2

¯ p(0, T, x, K)

= re

−rT

K

∞

K

¯ p(0, T, x, y)dy +

1

2

e

−rT

σ

2

(T, K)K

2

¯ p(0, T, x, K)

= −rKc

K

(0, T, x, K) +

1

2

σ

2

(T, K)K

2

c

KK

(0, T, x, K).

7. Exotic Options

7.1. (i)

63

Proof. Since δ

±

(τ, s) =

1

σ

√

τ

[log s + (r ±

1

2

σ

2

)τ] =

log s

σ

τ

−

1

2

+

r±

1

2

σ

2

σ

√

τ,

∂

∂t

δ

±

(τ, s) =

log s

σ

(−

1

2

)τ

−

3

2

∂τ

∂t

+

r ±

1

2

σ

2

σ

1

2

τ

−

1

2

∂τ

∂t

= −

1

2τ

¸

log s

σ

1

√

τ

(−1) −

r ±

1

2

σ

2

σ

√

τ(−1)

= −

1

2τ

1

σ

√

τ

¸

−log ss + (r ±

1

2

σ

2

)τ)

= −

1

2τ

δ

±

(τ,

1

s

).

(ii)

Proof.

∂

∂x

δ

±

(τ,

x

c

) =

∂

∂x

1

σ

√

τ

¸

log

x

c

+ (r ±

1

2

σ

2

)τ

=

1

xσ

√

τ

,

∂

∂x

δ

±

(τ,

c

x

) =

∂

∂x

1

σ

√

τ

¸

log

c

x

+ (r ±

1

2

σ

2

)τ

= −

1

xσ

√

τ

.

(iii)

Proof.

N

(δ

±

(τ, s)) =

1

√

2π

e

−

δ

±

(τ,s)

2

=

1

√

2π

e

−

(log s+rτ)

2

±σ

2

τ(log s+rτ)+

1

4

σ

4

τ

2

2σ

2

τ

.

Therefore

N

(δ

+

(τ, s))

N

(δ

−

(τ, s))

= e

−

2σ

2

τ(log s+rτ)

2σ

2

τ

=

e

−rτ

s

and e

−rτ

N

(δ

−

(τ, s)) = sN

(δ

+

(τ, s)).

(iv)

Proof.

N

(δ

±

(τ, s))

N

(δ

±

(τ, s

−1

))

= e

−

[

(log s+rτ)

2

−(log

1

s

+rτ)

2

]

±σ

2

τ(log s−log

1

s

)

2σ

2

τ

= e

−

4rτ log s±2σ

2

τ log s

2σ

2

τ

= e

−(

2r

σ

2

±1) log s

= s

−(

2r

σ

2

±1)

.

So N

(δ

±

(τ, s

−1

)) = s

(

2r

σ

2

±1)

N

(δ

±

(τ, s)).

(v)

Proof. δ

+

(τ, s) −δ

−

(τ, s) =

1

σ

√

τ

log s + (r +

1

2

σ

2

)τ

−

1

σ

√

τ

log s + (r −

1

2

σ

2

)τ

=

1

σ

√

τ

σ

2

τ = σ

√

τ.

(vi)

Proof. δ

±

(τ, s) −δ

±

(τ, s

−1

) =

1

σ

√

τ

log s + (r ±

1

2

σ

2

)τ

−

1

σ

√

τ

log s

−1

+ (r ±

1

2

σ

2

)τ

=

2 log s

σ

√

τ

.

(vii)

Proof. N

(y) =

1

√

2π

e

−

y

2

2

, so N

(y) =

1

√

2π

e

−

y

2

2

(−

y

2

2

)

= −yN

(y).

64

To be continued ...

7.3.

Proof. We note S

T

= S

0

e

σ

W

T

= S

t

e

σ(

W

T

−

Wt)

,

´

W

T

−

´

W

t

= (

W

T

−

W

t

) + α(T − t) is independent of T

t

,

sup

t≤u≤T

(

´

W

u

−

´

W

t

) is independent of T

t

, and

Y

T

= S

0

e

σ

M

T

= S

0

e

σ sup

t≤u≤T

Wu

1

{

Mt≤sup

t≤u≤T

Wt}

+S

0

e

σ

Mt

1

{

Mt>sup

t≤u≤T

Wu}

= S

t

e

σ sup

t≤u≤T

(

Wu−

Wt)

1

{

Y

t

S

t

≤e

σ sup

t≤u≤T

(

Wu−

W

t

)

}

+Y

t

1

{

Y

t

S

t

≤e

σ sup

t≤u≤T

(

Wu−

W

t

)

}

.

So E[f(S

T

, Y

T

)[T

t

] = E[f(x

S

T−t

S0

, x

Y

T−t

S0

1

{

y

x

≤

Y

T−t

S

0

}

+ y1

{

y

x

≤

Y

T−t

S

0

}

)], where x = S

t

, y = Y

t

. Therefore

E[f(S

T

, Y

T

)[T

t

] is a Borel function of (S

t

, Y

t

).

7.4.

Proof. By Cauchy’s inequality and the monotonicity of Y , we have

[

m

¸

j=1

(Y

tj

−Y

tj−1

)(S

tj

−S

tj−1

)[ ≤

m

¸

j=1

[Y

tj

−Y

tj−1

[[S

tj

−S

tj−1

[

≤

m

¸

j=1

(Y

tj

−Y

tj−1

)

2

m

¸

j=1

(S

tj

−S

tj−1

)

2

≤

max

1≤j≤m

[Y

tj

−Y

tj−1

[(Y

T

−Y

0

)

m

¸

j=1

(S

tj

−S

tj−1

)

2

.

If we increase the number of partition points to inﬁnity and let the length of the longest subinterval

max

1≤j≤m

[t

j

−t

j−1

[ approach zero, then

¸

m

j=1

(S

tj

−S

tj−1

)

2

→

[S]

T

−[S]

0

< ∞ and max

1≤j≤m

[Y

tj

−

Y

tj−1

[ → 0 a.s. by the continuity of Y . This implies

¸

m

j=1

(Y

tj

−Y

tj−1

)(S

tj

−S

tj−1

) → 0.

8. American Derivative Securities

8.1.

Proof. v

L

(L+) = (K −L)(−

2r

σ

2

)(

x

L

)

−

2r

σ

2

−1 1

L

x=L

= −

2r

σ

2

L

(K − L). So v

L

(L+) = v

L

(L−) if and only if

−

2r

σ

2

L

(K −L) = −1. Solve for L, we get L =

2rK

2r+σ

2

.

8.2.

Proof. By the calculation in Section 8.3.3, we can see v

2

(x) ≥ (K

2

−x)

+

≥ (K

1

−x)

+

, rv

2

(x) −rxv

2

(x) −

1

2

σ

2

x

2

v

2

(x) ≥ 0 for all x ≥ 0, and for 0 ≤ x < L

1∗

< L

2∗

,

rv

2

(x) −rxv

2

(x) −

1

2

σ

2

x

2

v

2

(x) = rK

2

> rK

1

> 0.

So the linear complementarity conditions for v

2

imply v

2

(x) = (K

2

−x)

+

= K

2

−x > K

1

−x = (K

1

−x)

+

on [0, L

1∗

]. Hence v

2

(x) does not satisfy the third linear complementarity condition for v

1

: for each x ≥ 0,

equality holds in either (8.8.1) or (8.8.2) or both.

8.3. (i)

65

Proof. Suppose x takes its values in a domain bounded away from 0. By the general theory of linear

diﬀerential equations, if we can ﬁnd two linearly independent solutions v

1

(x), v

2

(x) of (8.8.4), then any

solution of (8.8.4) can be represented in the form of C

1

v

1

+C

2

v

2

where C

1

and C

2

are constants. So it suﬃces

to ﬁnd two linearly independent special solutions of (8.8.4). Assume v(x) = x

p

for some constant p to be

determined, (8.8.4) yields x

p

(r−pr−

1

2

σ

2

p(p−1)) = 0. Solve the quadratic equation 0 = r−pr−

1

2

σ

2

p(p−1) =

(−

1

2

σ

2

p −r)(p −1), we get p = 1 or −

2r

σ

2

. So a general solution of (8.8.4) has the form C

1

x +C

2

x

−

2r

σ

2

.

(ii)

Proof. Assume there is an interval [x

1

, x

2

] where 0 < x

1

< x

2

< ∞, such that v(x) ≡ 0 satisﬁes (8.3.19)

with equality on [x

1

, x

2

] and satisﬁes (8.3.18) with equality for x at and immediately to the left of x

1

and

for x at and immediately to the right of x

2

, then we can ﬁnd some C

1

and C

2

, so that v(x) = C

1

x+C

2

x

−

2r

σ

2

on [x

1

, x

2

]. If for some x

0

∈ [x

1

, x

2

], v(x

0

) = v

(x

0

) = 0, by the uniqueness of the solution of (8.8.4), we

would conclude v ≡ 0. This is a contradiction. So such an x

0

cannot exist. This implies 0 < x

1

< x

2

< K

(if K ≤ x

2

, v(x

2

) = (K −x

2

)

+

= 0 and v

(x

2

)=the right derivative of (K −x)

+

at x

2

, which is 0).

1

Thus

we have four equations for C

1

and C

2

:

C

1

x

1

+C

2

x

−

2r

σ

2

1

= K −x

1

C

1

x

2

+C

2

x

−

2r

σ

2

2

= K −x

2

C

1

−

2r

σ

2

C

2

x

−

2r

σ

2

−1

1

= −1

C

1

−

2r

σ

2

C

2

x

−

2r

σ

2

−1

2

= −1.

Since x

1

= x

2

, the last two equations imply C

2

= 0. Plug C

2

= 0 into the ﬁrst two equations, we have

C

1

=

K−x1

x1

=

K−x2

x2

; plug C

2

= 0 into the last two equations, we have C

1

= −1. Combined, we would have

x

1

= x

2

. Contradiction. Therefore our initial assumption is incorrect, and the only solution v that satisﬁes

the speciﬁed conditions in the problem is the zero solution.

(iii)

Proof. If in a right neighborhood of 0, v satisﬁes (8.3.19) with equality, then part (i) implies v(x) = C

1

x +

C

2

x

−

2r

σ

2

for some constants C

1

and C

2

. Then v(0) = lim

x↓0

v(x) = 0 < (K − 0)

+

, i.e. (8.3.18) will be

violated. So we must have rv − rxv

−

1

2

σ

2

x

2

v

**> 0 in a right neighborhood of 0. According to (8.3.20),
**

v(x) = (K−x)

+

near o. So v(0) = K. We have thus concluded simultaneously that v cannot satisfy (8.3.19)

with equality near 0 and v(0) = K, starting from ﬁrst principles (8.3.18)-(8.3.20).

(iv)

Proof. This is already shown in our solution of part (iii): near 0, v cannot satisfy (8.3.19) with equality.

(v)

Proof. If v satisfy (K−x)

+

with equality for all x ≥ 0, then v cannot have a continuous derivative as stated

in the problem. This is a contradiction.

(vi)

1

Note we have interpreted the condition “v(x) satisﬁes (8.3.18) with equality for x at and immediately to the right of x

2

”

as “v(x

2

) = (K − x

2

)

+

and v

(x

2

) =the right derivative of (K − x)

+

at x

2

.” This is weaker than “v(x) = (K − x) in a right

neighborhood of x

2

.”

66

Proof. By the result of part (i), we can start with v(x) = (K −x)

+

on [0, x

1

] and v(x) = C

1

x +C

2

x

−

2r

σ

2

on

[x

1

, ∞). By the assumption of the problem, both v and v

**are continuous. Since (K−x)
**

+

is not diﬀerentiable

at K, we must have x

1

≤ K.This gives us the equations

K −x

1

= (K −x

1

)

+

= C

1

x

1

+C

2

x

−

2r

σ

2

1

−1 = C

1

−

2r

σ

2

C

2

x

−

2r

σ

2

−1

1

.

Because v is assumed to be bounded, we must have C

1

= 0 and the above equations only have two unknowns:

C

2

and x

1

. Solve them for C

2

and x

1

, we are done.

8.4. (i)

Proof. This is already shown in part (i) of Exercise 8.3.

(ii)

Proof. We solve for A, B the equations

AL

−

2r

σ

2

+BL = K −L

−

2r

σ

2

AL

−

2r

σ

2

−1

+B = −1,

and we obtain A =

σ

2

KL

2r

σ

2

σ

2

+2r

, B =

2rK

L(σ

2

+2r)

−1.

(iii)

Proof. By (8.8.5), B > 0. So for x ≥ K, f(x) ≥ BK > 0 = (K −x)

+

. If L ≤ x < K,

f(x) −(K −x)

+

=

σ

2

KL

2r

σ

2

σ

2

+ 2r

x

−

2r

σ

2

+

2rKx

L(σ

2

+ 2r)

−K = x

−

2r

σ

2

KL

2r

σ

2

σ

2

+ 2r(

x

L

)

2r

σ

2

+1

−(σ

2

+ 2r)(

x

L

)

2r

σ

2

(σ

2

+ 2r)L

.

Let g(θ) = σ

2

+ 2rθ

2r

σ

2

+1

− (σ

2

+ 2r)θ

2r

σ

2

with θ ≥ 1. Then g(1) = 0 and g

(θ) = 2r(

2r

σ

2

+ 1)θ

2r

σ

2

− (σ

2

+

2r)

2r

σ

2

θ

2r

σ

2

−1

=

2r

σ

2

(σ

2

+ 2r)θ

2r

σ

2

−1

(θ − 1) ≥ 0. So g(θ) ≥ 0 for any θ ≥ 1. This shows f(x) ≥ (K − x)

+

for

L ≤ x < K. Combined, we get f(x) ≥ (K −x)

+

for all x ≥ L.

(iv)

Proof. Since lim

x→∞

v(x) = lim

x→∞

f(x) = ∞ and lim

x→∞

v

L∗

(x) = lim

x→∞

(K − L

∗

)(

x

L∗

)

−

2r

σ

2

= 0, v(x)

and v

L∗

(x) are diﬀerent. By part (iii), v(x) ≥ (K − x)

+

. So v satisﬁes (8.3.18). For x ≥ L, rv − rxv

−

1

2

σ

2

x

2

v

= rf −rxf −

1

2

σ

2

x

2

f

= 0. For 0 ≤ x ≤ L, rv −rxv

−

1

2

σ

2

x

2

v

= r(K−x) +rx = rK. Combined,

rv −rxv

−

1

2

σ

2

x

2

v

**≥ 0 for x ≥ 0. So v satisﬁes (8.3.19). Along the way, we also showed v satisﬁes (8.3.20).
**

In summary, v satisﬁes the linear complementarity condition (8.3.18)-(8.3.20), but v is not the function v

L∗

given by (8.3.13).

(v)

Proof. By part (ii), B = 0 if and only if

2rK

L(σ

2

+2r)

− 1 = 0, i.e. L =

2rK

2r+σ

2

. In this case, v(x) = Ax

−

2r

σ

2

=

σ

2

K

σ

2

+2r

(

x

L

)

−

2r

σ

2

= (K −L)(

x

L

)

−

2r

σ

2

= v

L∗

(x), on the interval [L, ∞).

8.5. The diﬃculty of the dividend-paying case is that from Lemma 8.3.4, we can only obtain

¯

E[e

−(r−a)τ

L

],

not

¯

E[e

−rτ

L

]. So we have to start from Theorem 8.3.2.

(i)

67

Proof. By (8.8.9), S

t

= S

0

e

σ

Wt+(r−a−

1

2

σ

2

)t

. Assume S

0

= x, then S

t

= L if and only if −

W

t

−

1

σ

(r − a −

1

2

σ

2

)t =

1

σ

log

x

L

. By Theorem 8.3.2,

¯

E[e

−rτ

L

] = e

−

1

σ

log

x

L

1

σ

(r−a−

1

2

σ

2

)+

1

σ

2

(r−a−

1

2

σ

2

)

2

+2r

.

If we set γ =

1

σ

2

(r − a −

1

2

σ

2

) +

1

σ

1

σ

2

(r −a −

1

σ

2

)

2

+ 2r, we can write

¯

E[e

−rτ

L

] as e

−γ log

x

L

= (

x

L

)

−γ

. So

the risk-neutral expected discounted pay oﬀ of this strategy is

v

L

(x) =

K −x, 0 ≤ x ≤ L

(K −L)(

x

L

)

−γ

, x > L.

(ii)

Proof.

∂

∂L

v

L

(x) = −(

x

L

)

−γ

(1 −

γ(K−L)

L

). Set

∂

∂L

v

L

(x) = 0 and solve for L

∗

, we have L

∗

=

γK

γ+1

.

(iii)

Proof. By Itˆ o’s formula, we have

d

e

−rt

v

L∗

(S

t

)

= e

−rt

¸

−rv

L∗

(S

t

) +v

L∗

(S

t

)(r −a)S

t

+

1

2

v

L∗

(S

t

)σ

2

S

2

t

dt +e

−rt

v

L∗

(S

t

)σS

t

d

W

t

.

If x > L

∗

,

−rv

L∗

(x) +v

L∗

(x)(r −a)x +

1

2

v

L∗

(x)σ

2

x

2

= −r(K −L

∗

)

x

L

∗

−γ

+ (r −a)x(K −L

∗

)(−γ)

x

−γ−1

L

−γ

∗

+

1

2

σ

2

x

2

(−γ)(−γ −1)(K −L

∗

)

x

−γ−2

L

−γ

∗

= (K −L

∗

)

x

L

∗

−γ

¸

−r −(r −a)γ +

1

2

σ

2

γ(γ + 1)

.

By the deﬁnition of γ, if we deﬁne u = r −a −

1

2

σ

2

, we have

r + (r −a)γ −

1

2

σ

2

γ(γ + 1)

= r −

1

2

σ

2

γ

2

+γ(r −a −

1

2

σ

2

)

= r −

1

2

σ

2

u

σ

2

+

1

σ

u

2

σ

2

+ 2r

2

+

u

σ

2

+

1

σ

u

2

σ

2

+ 2r

u

= r −

1

2

σ

2

u

2

σ

4

+

2u

σ

3

u

2

σ

2

+ 2r +

1

σ

2

u

2

σ

2

+ 2r

+

u

2

σ

2

+

u

σ

u

2

σ

2

+ 2r

= r −

u

2

2σ

2

−

u

σ

u

2

σ

2

+ 2r −

1

2

u

2

σ

2

+ 2r

+

u

2

σ

2

+

u

σ

u

2

σ

2

+ 2r

= 0.

If x < L

∗

, −rv

L∗

(x) +v

L∗

(x)(r −a)x +

1

2

v

L∗

(x)σ

2

x

2

= −r(K −x) +(−1)(r −a)x = −rK +ax. Combined,

we get

d

e

−rt

v

L∗

(S

t

)

= −e

−rt

1

{St<L∗}

(rK −aS

t

)dt +e

−rt

v

L∗

(S

t

)σS

t

d

W

t

.

68

Following the reasoning in the proof of Theorem 8.3.5, we only need to show 1

{x<L∗}

(rK−ax) ≥ 0 to ﬁnish

the solution. This is further equivalent to proving rK − aL

∗

≥ 0. Plug L

∗

=

γK

γ+1

into the expression and

note γ ≥

1

σ

1

σ

2

(r −a −

1

2

σ

2

)

2

+

1

σ

2

(r −a −

1

2

σ

2

) ≥ 0, the inequality is further reduced to r(γ +1) −aγ ≥ 0.

We prove this inequality as follows.

Assume for some K, r, a and σ (K and σ are assumed to be strictly positive, r and a are assumed to be

non-negative), rK − aL

∗

< 0, then necessarily r < a, since L

∗

=

γK

γ+1

≤ K. As shown before, this means

r(γ + 1) − aγ < 0. Deﬁne θ =

r−a

σ

, then θ < 0 and γ =

1

σ

2

(r − a −

1

2

σ

2

) +

1

σ

1

σ

2

(r −a −

1

2

σ

2

)

2

+ 2r =

1

σ

(θ −

1

2

σ) +

1

σ

(θ −

1

2

σ)

2

+ 2r. We have

r(γ + 1) −aγ < 0 ⇐⇒ (r −a)γ +r < 0

⇐⇒ (r −a)

¸

1

σ

(θ −

1

2

σ) +

1

σ

(θ −

1

2

σ)

2

+ 2r

¸

+r < 0

⇐⇒ θ(θ −

1

2

σ) +θ

(θ −

1

2

σ)

2

+ 2r +r < 0

⇐⇒ θ

(θ −

1

2

σ)

2

+ 2r < −r −θ(θ −

1

2

σ)(< 0)

⇐⇒ θ

2

[(θ −

1

2

σ)

2

+ 2r] > r

2

+θ

2

(θ −

1

2

σ)

2

+ 2θr(θ −

1

2

σ

2

)

⇐⇒ 0 > r

2

−θrσ

2

⇐⇒ 0 > r −θσ

2

.

Since θσ

2

< 0, we have obtained a contradiction. So our initial assumption is incorrect, and rK −aL

∗

≥ 0

must be true.

(iv)

Proof. The proof is similar to that of Corollary 8.3.6. Note the only properties used in the proof of Corollary

8.3.6 are that e

−rt

v

L∗

(S

t

) is a supermartingale, e

−rt∧τ

L∗

v

L∗

(S

t

∧τ

L∗

) is a martingale, and v

L∗

(x) ≥ (K−x)

+

.

Part (iii) already proved the supermartingale-martingale property, so it suﬃces to show v

L∗

(x) ≥ (K −x)

+

in our problem. Indeed, by γ ≥ 0, L

∗

=

γK

γ+1

< K. For x ≥ K > L

∗

, v

L∗

(x) > 0 = (K−x)

+

; for 0 ≤ x < L

∗

,

v

L∗

(x) = K −x = (K −x)

+

; ﬁnally, for L

∗

≤ x ≤ K,

d

dx

(v

L∗

(x) −(K −x)) = −γ(K −L

∗

)

x

−γ−1

L

−γ

∗

+ 1 ≥ −γ(K −L

∗

)

L

−γ−1

∗

L

−γ

∗

+ 1 = −γ(K −

γK

γ + 1

)

1

γK

γ+1

+ 1 = 0.

and (v

L∗

(x) − (K − x))[

x=L∗

= 0. So for L

∗

≤ x ≤ K, v

L∗

(x) − (K − x)

+

≥ 0. Combined, we have

v

L∗

(x) ≥ (K −x)

+

≥ 0 for all x ≥ 0.

8.6.

Proof. By Lemma 8.5.1, X

t

= e

−rt

(S

t

−K)

+

is a submartingale. For any τ ∈ Γ

0,T

, Theorem 8.8.1 implies

¯

E[e

−rT

(S

T

−K)

+

] ≥

¯

E[e

−rτ∧T

(S

τ∧T

−K)

+

] ≥ E[e

−rτ

(S

τ

−K)

+

1

{τ<∞}

] = E[e

−rτ

(S

τ

−K)

+

],

where we take the convention that e

−rτ

(S

τ

−K)

+

= 0 when τ = ∞. Since τ is arbitrarily chosen,

¯

E[e

−rT

(S

T

−

K)

+

] ≥ max

τ∈Γ

0,T

¯

E[e

−rτ

(S

τ

−K)

+

]. The other direction “≤” is trivial since T ∈ Γ

0,T

.

8.7.

69

Proof. Suppose λ ∈ [0, 1] and 0 ≤ x

1

≤ x

2

, we have f((1 − λ)x

1

+ λx

2

) ≤ (1 − λ)f(x

1

) + λf(x

2

) ≤

(1 −λ)h(x

1

) +λh(x

2

). Similarly, g((1 −λ)x

1

+λx

2

) ≤ (1 −λ)h(x

1

) +λh(x

2

). So

h((1 −λ)x

1

+λx

2

) = max¦f((1 −λ)x

1

+λx

2

), g((1 −λ)x

1

+λx

2

)¦ ≤ (1 −λ)h(x

1

) +λh(x

2

).

That is, h is also convex.

9. Change of Num´eraire

To provide an intuition for change of num´eraire, we give a summary of results for change of

num´eraire in discrete case. This summary is based on Shiryaev [5].

Consider a model of ﬁnancial market (

¯

B,

¯

B, S) as in [1] Deﬁnition 2.1.1 or [5] page 383. Here

¯

B and

¯

B are both one-dimensional while S could be a vector price process. Suppose

¯

B and

¯

B are both strictly

positive, then both of them can be chosen as num´eaire.

Several results hold under this model. First, no-arbitrage and completeness properties of market are

independent of the choice of num´eraire (see, for example, Shiryaev [5] page 413 Remark and page 481).

Second, if the market is arbitrage-free, then corresponding to

¯

B (resp.

¯

B), there is an equivalent probability

¯

P (resp.

¯

P), such that

¯

B

B

,

S

B

(resp.

B

¯

B

,

S

¯

B

) is a martingale under

¯

P (resp.

¯

P). Third, if the market is

both arbitrage-free and complete, we have the relation

d

¯

P =

¯

B

T

¯

B

T

1

E

¯

B0

B0

d

¯

P.

Finally, if f

T

is a European contingent claim with maturity N and the market is both arbitrage-free and

complete, then

¯

B

t

¯

E

¸

f

T

¯

B

T

[T

t

=

¯

B

t

¯

E

¸

f

T

¯

B

T

[T

t

.

That is, the price of f

T

is independent of the choice of num´eraire.

The above theoretical results can be applied to market involving foreign money market account. We con-

sider the following market: a domestic money market account M (M

0

= 1), a foreign money market account

M

f

(M

f

0

= 1), a (vector) asset price process S called stock. Suppose the domestic vs. foreign currency

exchange rate is Q. Note Q is not a traded asset. Denominated by domestic currency, the traded assets

are (M, M

f

Q, S), where M

f

Q can be seen as the price process of one unit foreign currency. Domestic risk-

neutral measure

¯

P is such that

M

f

Q

M

,

S

M

is a

¯

P-martingale. Denominated by foreign currency, the traded

assets are

M

f

,

M

Q

,

S

Q

**. Foreign risk-neutral measure
**

¯

P

f

is such that

M

QM

f

,

S

QM

f

is a

¯

P

f

-martingale.

This is a change of num´eraire in the market denominated by domestic currency, from M to M

f

Q. If we

assume the market is arbitrage-free and complete, the foreign risk-neutral measure is

d

¯

P

f

=

Q

T

M

f

T

M

T

E

Q0M

f

0

M0

d

¯

P =

Q

T

D

T

M

f

T

Q

0

d

¯

P

on T

T

. Under the above set-up, for a European contingent claim f

T

, denominated in domestic currency, its

payoﬀ in foreign currency is f

T

/Q

T

. Therefore its foreign price is

¯

E

f

D

f

T

f

T

D

f

t

Q

T

[T

t

**. Convert this price into
**

domestic currency, we have Q

t

¯

E

f

D

f

T

f

T

D

f

t

Q

T

[T

t

**. Use the relation between
**

¯

P

f

and

¯

P on T

T

and the Bayes

formula, we get

Q

t

¯

E

f

¸

D

f

T

f

T

D

f

t

Q

T

[T

t

¸

=

¯

E

¸

D

T

f

T

D

t

[T

t

.

The RHS is exactly the price of f

T

in domestic market if we apply risk-neutral pricing.

9.1. (i)

70

Proof. For any 0 ≤ t ≤ T, by Lemma 5.5.2,

E

(M2)

¸

M

1

(T)

M

2

(T)

T

t

= E

¸

M

2

(T)

M

2

(t)

M

1

(T)

M

2

(T)

T

t

=

E[M

1

(T)[T

t

]

M

2

(t)

=

M

1

(t)

M

2

(t)

.

So

M1(t)

M2(t)

is a martingale under P

M2

.

(ii)

Proof. Let M

1

(t) = D

t

S

t

and M

2

(t) = D

t

N

t

/N

0

. Then

¯

P

(N)

as deﬁned in (9.2.6) is P

(M2)

as deﬁned in

Remark 9.2.5. Hence

M1(t)

M2(t)

=

St

Nt

N

0

is a martingale under

¯

P

(N)

, which implies S

(N)

t

=

St

Nt

is a martingale

under

¯

P

(N)

.

9.2. (i)

Proof. Since N

−1

t

= N

−1

0

e

−ν

Wt−(r−

1

2

ν

2

)t

, we have

d(N

−1

t

) = N

−1

0

e

−ν

Wt−(r−

1

2

ν

2

)t

[−νd

W

t

−(r −

1

2

ν

2

)dt +

1

2

ν

2

dt] = N

−1

t

(−νd

´

W

t

−rdt).

(ii)

Proof.

d

´

M

t

= M

t

d

1

N

t

+

1

N

t

dM

t

+d

1

N

t

dM

t

=

´

M

t

(−νd

´

W

t

−rdt) +r

´

M

t

dt = −ν

´

M

t

d

´

W

t

.

Remark: This can also be obtained directly from Theorem 9.2.2.

(iii)

Proof.

d

´

X

t

= d

X

t

N

t

= X

t

d

1

N

t

+

1

N

t

dX

t

+d

1

N

t

dX

t

= (∆

t

S

t

+ Γ

t

M

t

)d

1

N

t

+

1

N

t

(∆

t

dS

t

+ Γ

t

dM

t

) +d

1

N

t

(∆

t

dS

t

+ Γ

t

dM

t

)

= ∆

t

¸

S

t

d

1

N

t

+

1

N

t

dS

t

+d

1

N

t

dS

t

+ Γ

t

¸

M

t

d

1

N

t

+

1

N

t

dM

t

+d

1

N

t

dM

t

= ∆

t

d

´

S

t

+ Γ

t

d

´

M

t

.

9.3. To avoid singular cases, we need to assume −1 < ρ < 1.

(i)

Proof. N

t

= N

0

e

ν

W3(t)+(r−

1

2

ν

2

)t

. So

dN

−1

t

= d(N

−1

0

e

−ν

W3(t)−(r−

1

2

ν

2

)t

)

= N

−1

0

e

−ν

W3(t)−(r−

1

2

ν

2

)t

¸

−νd

W

3

(t) −(r −

1

2

ν

2

)dt +

1

2

ν

2

dt

= N

−1

t

[−νd

W

3

(t) −(r −ν

2

)dt],

71

and

dS

(N)

t

= N

−1

t

dS

t

+S

t

dN

−1

t

+dS

t

dN

−1

t

= N

−1

t

(rS

t

dt +σS

t

d

W

1

(t)) +S

t

N

−1

t

[−νd

W

3

(t) −(r −ν

2

)dt]

= S

(N)

t

(rdt +σd

W

1

(t)) +S

(N)

t

[−νd

W

3

(t) −(r −ν

2

)dt] −σS

(N)

t

ρdt

= S

(N)

t

(ν

2

−σρ)dt +S

(N)

t

(σd

W

1

(t) −νd

W

3

(t)).

Deﬁne γ =

σ

2

−2ρσν +ν

2

and

W

4

(t) =

σ

γ

W

1

(t) −

ν

γ

W

3

(t), then

W

4

is a martingale with quadratic

variation

[

W

4

]

t

=

σ

2

γ

2

t −2

σν

γ

2

ρt +

ν

2

r

2

t = t.

By L´evy’s Theorem,

W

4

is a BM and therefore, S

(N)

t

has volatility γ =

σ

2

−2ρσν +ν

2

.

(ii)

Proof. This problem is the same as Exercise 4.13, we deﬁne

W

2

(t) =

−ρ

√

1−ρ

2

W

1

(t) +

1

√

1−ρ

2

W

3

(t), then

W

2

is a martingale, with

(d

W

2

(t))

2

=

−

ρ

1 −ρ

2

d

W

1

(t) +

1

1 −ρ

2

d

W

3

(t)

2

=

ρ

2

1 −ρ

2

+

1

1 −ρ

2

−

2ρ

2

1 −ρ

2

dt = dt,

and d

W

2

(t)d

W

1

(t) = −

ρ

√

1−ρ

2

dt +

ρ

√

1−ρ

2

dt = 0. So

W

2

is a BM independent of

W

1

, and dN

t

= rN

t

dt +

νN

t

d

W

3

(t) = rN

t

dt +νN

t

[ρd

W

1

(t) +

1 −ρ

2

d

W

2

(t)].

(iii)

Proof. Under

¯

P, (

W

1

,

W

2

) is a two-dimensional BM, and

dS

t

= rS

t

dt +σS

t

d

W

1

(t) = rS

t

dt +S

t

(σ, 0)

d

W

1

(t)

d

W

2

(t)

dN

t

= rN

t

dt +νN

t

d

W

3

(t) = rN

t

dt +N

t

(νρ, ν

1 −ρ

2

)

d

W

1

(t)

d

W

2

(t)

.

So under

¯

P, the volatility vector for S is (σ, 0), and the volatility vector for N is (νρ, ν

1 −ρ

2

). By Theorem

9.2.2, under the measure

¯

P

(N)

, the volatility vector for S

(N)

is (v

1

, v

2

) = (σ −νρ, −ν

1 −ρ

2

. In particular,

the volatility of S

(N)

is

v

2

1

+v

2

2

=

(σ −νρ)

2

+ (−ν

1 −ρ

2

)

2

=

σ

2

−2νρσ +ν

2

,

consistent with the result of part (i).

9.4.

Proof. From (9.3.15), we have M

f

t

Q

t

= M

f

0

Q

0

e

t

0

σ2(s)d

W3(s)+

t

0

(Rs−

1

2

σ

2

2

(s))ds

. So

D

f

t

Q

t

= D

f

0

Q

−1

0

e

−

t

0

σ2(s)d

W3(s)−

t

0

(Rs−

1

2

σ

2

2

(s))ds

72

and

d

D

f

t

Q

t

=

D

f

t

Q

t

[−σ

2

(t)d

W

3

(t) −(R

t

−

1

2

σ

2

2

(t))dt +

1

2

σ

2

2

(t)dt] =

D

f

t

Q

t

[−σ

2

(t)d

W

3

(t) −(R

t

−σ

2

2

(t))dt].

To get (9.3.22), we note

d

M

t

D

f

t

Q

t

= M

t

d

D

f

t

Q

t

+

D

f

t

Q

t

dM

t

+dM

t

d

D

f

t

Q

t

=

M

t

D

f

t

Q

t

[−σ

2

(t)d

W

3

(t) −(R

t

−σ

2

2

(t))dt] +

R

t

M

t

D

f

t

Q

t

dt

= −

M

t

D

f

t

Q

t

(σ

2

(t)d

W

3

(t) −σ

2

2

(t)dt)

= −

M

t

D

f

t

Q

t

σ

2

(t)d

W

f

3

(t).

To get (9.3.23), we note

d

D

f

t

S

t

Q

t

=

D

f

t

Q

t

dS

t

+S

t

d

D

f

t

Q

t

+dS

t

d

D

f

t

Q

t

=

D

f

t

Q

t

S

t

(R

t

dt +σ

1

(t)d

W

1

(t)) +

S

t

D

f

t

Q

t

[−σ

2

(t)d

W

3

(t) −(R

t

−σ

2

2

(t))dt]

+S

t

σ

1

(t)d

W

1

(t)

D

f

t

Q

t

(−σ

2

(t))d

W

3

(t)

=

D

f

t

S

t

Q

t

[σ

1

(t)d

W

1

(t) −σ

2

(t)d

W

3

(t) +σ

2

2

(t)dt −σ

1

(t)σ

2

(t)ρ

t

dt]

=

D

f

t

S

t

Q

t

[σ

1

(t)d

W

f

1

(t) −σ

2

d

W

f

3

(t)].

9.5.

Proof. We combine the solutions of all the sub-problems into a single solution as follows. The payoﬀ of a

quanto call is (

S

T

Q

T

−K)

+

units of domestic currency at time T. By risk-neutral pricing formula, its price at

time t is

¯

E[e

−r(T−t)

(

S

T

Q

T

− K)

+

[T

t

]. So we need to ﬁnd the SDE for

St

Qt

under risk-neutral measure

¯

P. By

formula (9.3.14) and (9.3.16), we have S

t

= S

0

e

σ1

W1(t)+(r−

1

2

σ

2

1

)t

and

Q

t

= Q

0

e

σ2

W3(t)+(r−r

f

−

1

2

σ

2

2

)t

= Q

0

e

σ2ρ

W1(t)+σ2

√

1−ρ

2

W2(t)+(r−r

f

−

1

2

σ

2

2

)t

.

So

St

Qt

=

S0

Q0

e

(σ1−σ2ρ)

W1(t)−σ2

√

1−ρ

2

W2(t)+(r

f

+

1

2

σ

2

2

−

1

2

σ

2

1

)t

. Deﬁne

σ

4

=

(σ

1

−σ

2

ρ)

2

+σ

2

2

(1 −ρ

2

) =

σ

2

1

−2ρσ

1

σ

2

+σ

2

2

and

W

4

(t) =

σ

1

−σ

2

ρ

σ

4

W

1

(t) −

σ

2

1 −ρ

2

σ

4

W

2

(t).

Then

W

4

is a martingale with [

W

4

]

t

=

(σ1−σ2ρ)

2

σ

2

4

t +

σ2(1−ρ

2

)

σ

2

4

t +t. So

W

4

is a Brownian motion under

¯

P. So

if we set a = r −r

f

+ρσ

1

σ

2

−σ

2

2

, we have

S

t

Q

t

=

S

0

Q

0

e

σ4

W4(t)+(r−a−

1

2

σ

2

4

)t

and d

S

t

Q

t

=

S

t

Q

t

[σ

4

d

W

4

(t) + (r −a)dt].

73

Therefore, under

¯

P,

St

Qt

behaves like dividend-paying stock and the price of the quanto call option is like the

price of a call option on a dividend-paying stock. Thus formula (5.5.12) gives us the desired price formula

for quanto call option.

9.6. (i)

Proof. d

+

(t) −d

−

(t) =

1

σ

√

T−t

σ

2

(T −t) = σ

√

T −t. So d

−

(t) = d

+

(t) −σ

√

T −t.

(ii)

Proof. d

+

(t)+d

−

(t) =

2

σ

√

T−t

log

For

S

(t,T)

K

. So d

2

+

(t)−d

2

−

(t) = (d

+

(t)+d

−

(t))(d

+

(t)−d

−

(t)) = 2 log

For

S

(t,T)

K

.

(iii)

Proof.

For

S

(t, T)e

−d

2

+

(t)/2

−Ke

−d

2

−

(t)

= e

−d

2

+

(t)/2

[For

S

(t, T) −Ke

d

2

+

(t)/2−d

2

−

(t)/2

]

= e

−d

2

+

(t)/2

[For

S

(t, T) −Ke

log

For

S

(t,T)

K

]

= 0.

(iv)

Proof.

dd

+

(t)

=

1

2

√

1σ

(T −t)

3

[log

For

S

(t, T)

K

+

1

2

σ

2

(T −t)]dt +

1

σ

√

T −t

¸

dFor

S

(t, T)

For

S

(t, T)

−

(dFor

S

(t, T))

2

2For

S

(t, T)

2

−

1

2

σdt

=

1

2σ

(T −t)

3

log

For

S

(t, T)

K

dt +

σ

4

√

T −t

dt +

1

σ

√

T −t

(σd

W

T

(t) −

1

2

σ

2

dt −

1

2

σ

2

dt)

=

1

2σ(T −t)

3/2

log

For

S

(t, T)

K

dt −

3σ

4

√

T −t

dt +

d

W

T

(t)

√

T −t

.

(v)

Proof. dd

−

(t) = dd

+

(t) −d(σ

√

T −t) = dd

+

(t) +

σdt

2

√

T−t

.

(vi)

Proof. By (iv) and (v), (dd

−

(t))

2

= (dd

+

(t))

2

=

dt

T−t

.

(vii)

Proof. dN(d

+

(t)) = N

(d

+

(t))dd

+

(t)+

1

2

N

(d

+

(t))(dd

+

(t))

2

=

1

√

2π

e

−

d

2

+

(t)

2

dd

+

(t)+

1

2

1

√

2π

e

−

d

2

+

(t)

2

(−d

+

(t))

dt

T−t

.

(viii)

74

Proof.

dN(d

−

(t)) = N

(d

−

(t))dd

−

(t) +

1

2

N

(d

−

(t))(dd

−

(t))

2

=

1

√

2π

e

−

d

2

−

(t)

2

dd

+

(t) +

σdt

2

√

T −t

+

1

2

e

−

d

2

−

(t)

2

√

2π

(−d

−

(t))

dt

T −t

=

1

√

2π

e

−d

2

−

(t)/2

dd

+

(t) +

σe

−d

2

−

(t)/2

2

2π(T −t)

dt +

e

−

d

2

−

(t)(σ

√

T−t−d

+

(t))

2

2(T −t)

√

2π

dt

=

1

√

2π

e

−d

2

−

(t)/2

dd

+

(t) +

σe

−d

2

−

(t)/2

2π(T −t)

dt −

d

+

(t)e

−

d

2

−

(t)

2

2(T −t)

√

2π

dt.

(ix)

Proof.

dFor

S

(t, T)dN(d

+

(t)) = σFor

S

(t, T)d

W

T

(t)

e

−d

2

+

(t)/2

√

2π

1

√

T −t

d

´

W

T

(t) =

σFor

S

(t, T)e

−d

2

+

(t)/2

2π(T −t)

dt.

(x)

Proof.

For

S

(t, T)dN(d

+

(t)) +dFor

S

(t, T)dN(d

+

(t)) −KdN(d

−

(t))

= For

S

(t, T)

¸

1

√

2π

e

−d

2

+

(t)/2

dd

+

(t) −

d

+

(t)

2(T −t)

√

2π

e

−d

2

+

(t)/2

dt

+

σFor

S

(t, T)e

−d

2

+

(t)/2

2π(T −t)

dt

−K

¸

e

−d

2

−

(t)/2

√

2π

dd

+

(t) +

σ

2π(T −t)

e

−d

2

−

(t)/2

dt −

d

+

(t)

2(T −t)

√

2π

e

−d

2

−

(t)/2

dt

¸

=

¸

For

S

(t, T)d

+

(t)

2(T −t)

√

2π

e

−d

2

+

(t)/2

+

σFor

S

(t, T)e

−d

2

+

(t)/2

2π(T −t)

−

Kσe

−d

2

−

(t)/2

2π(T −t)

−

Kd

+

(t)

2(T −t)

√

2π

e

−d

2

−

(t)/2

¸

dt

+

1

√

2π

For

S

(t, T)e

−d

2

+

(t)/2

−Ke

−d

2

−

(t)/2

dd

+

(t)

= 0.

The last “=” comes from (iii), which implies e

−d

2

−

(t)/2

=

For

S

(t,T)

K

e

−d

2

+

(t)/2

.

10. Term-Structure Models

10.1. (i)

Proof. Using the notation I

1

(t), I

2

(t), I

3

(t) and I

4

(t) introduced in the problem, we can write Y

1

(t) and

Y

2

(t) as Y

1

(t) = e

−λ1t

Y

1

(0) +e

−λ1t

I

1

(t) and

Y

2

(t) =

λ21

λ1−λ2

(e

−λ1t

−e

−λ2t

)Y

1

(0) +e

−λ2t

Y

2

(0) +

λ21

λ1−λ2

e

−λ1t

I

1

(t) −e

−λ2t

I

2

(t)

−e

−λ2t

I

3

(t), if λ

1

= λ

2

;

−λ

21

te

−λ1t

Y

1

(0) +e

−λ1t

Y

2

(0) −λ

21

te

−λ1t

I

1

(t) −e

−λ1t

I

4

(t)

+e

−λ1t

I

3

(t), if λ

1

= λ

2

.

75

Since all the I

k

(t)’s (k = 1, , 4) are normally distributed with zero mean, we can conclude

¯

E[Y

1

(t)] =

e

−λ1t

Y

1

(0) and

¯

E[Y

2

(t)] =

λ21

λ1−λ2

(e

−λ1t

−e

−λ2t

)Y

1

(0) +e

−λ2t

Y

2

(0), if λ

1

= λ

2

;

−λ

21

te

−λ1t

Y

1

(0) +e

−λ1t

Y

2

(0), if λ

1

= λ

2

.

(ii)

Proof. The calculation relies on the following fact: if X

t

and Y

t

are both martingales, then X

t

Y

t

−[X, Y ]

t

is

also a martingale. In particular,

¯

E[X

t

Y

t

] =

¯

E¦[X, Y ]

t

¦. Thus

¯

E[I

2

1

(t)] =

t

0

e

2λ1u

du =

e

2λ1t

−1

2λ

1

,

¯

E[I

1

(t)I

2

(t)] =

t

0

e

(λ1+λ2)u

du =

e

(λ1+λ2)t

−1

λ

1

+λ

2

,

¯

E[I

1

(t)I

3

(t)] = 0,

¯

E[I

1

(t)I

4

(t)] =

t

0

ue

2λ1u

du =

1

2λ

1

¸

te

2λ1t

−

e

2λ1t

−1

2λ

1

and

¯

E[I

2

4

(t)] =

t

0

u

2

e

2λ1u

du =

t

2

e

2λ1t

2λ

1

−

te

2λ1t

2λ

2

1

+

e

2λ1t

−1

4λ

3

1

.

(iii)

Proof. Following the hint, we have

¯

E[I

1

(s)I

2

(t)] =

¯

E[J

1

(t)I

2

(t)] =

t

0

e

(λ1+λ2)u

1

{u≤s}

du =

e

(λ1+λ2)s

−1

λ

1

+λ

2

.

10.2. (i)

Proof. Assume B(t, T) = E[e

−

T

t

Rsds

[T

t

] = f(t, Y

1

(t), Y

2

(t)). Then d(D

t

B(t, T)) = D

t

[−R

t

f(t, Y

1

(t), Y

2

(t))dt+

df(t, Y

1

(t), Y

2

(t))]. By Itˆo’s formula,

df(t, Y

1

(t), Y

2

(t)) = [f

t

(t, Y

1

(t), Y

2

(t)) +f

y1

(t, Y

1

(t), Y

2

(t))(µ −λ

1

Y

1

(t)) +f

y2

(t, Y

1

(t), Y

2

(t))(−λ

2

)Y

2

(t)]

+f

y1y2

(t, Y

1

(t), Y

2

(t))σ

21

Y

1

(t) +

1

2

f

y1y1

(t, Y

1

(t), Y

2

(t))Y

1

(t)

+

1

2

f

y2y2

(t, Y

1

(t), Y

2

(t))(σ

2

21

Y

1

(t) +α +βY

1

(t))]dt + martingale part.

Since D

t

B(t, T) is a martingale, we must have

¸

−(δ

0

+δ

1

y

1

+δ

2

y

2

) +

∂

∂t

+ (µ −λ

1

y

1

)

∂

∂y

1

−λ

2

y

2

∂

∂y

2

+

1

2

2σ

21

y

1

∂

2

∂y

1

∂y

2

+y

1

∂

2

∂y

2

1

+ (σ

2

21

y

1

+α +βy

1

)

∂

2

∂y

2

2

f = 0.

(ii)

76

Proof. If we suppose f(t, y

1

, y

2

) = e

−y1C1(T−t)y2C2(T−t)−A(T−t)

, then

∂

∂t

f = [y

1

C

1

(T − t) + y

2

C

2

(T − t) +

A

(T − t)]f,

∂

∂y1

f = −C

1

(T − t)f,

∂f

∂y2

= −C

2

(T − t)f,

∂

2

f

∂y1∂y2

= C

1

(T − t)C

2

(T − t)f,

∂

2

f

∂y

2

1

= C

2

1

(T − t)f,

and

∂

2

f

∂y

2

2

= C

2

2

(T −t)f. So the PDE in part (i) becomes

−(δ

0

+δ

1

y

1

+δ

2

y

2

)+y

1

C

1

+y

2

C

2

+A

−(µ−λ

1

y

1

)C

1

+λ

2

y

2

C

2

+

1

2

2σ

21

y

1

C

1

C

2

+y

1

C

2

1

+ (σ

2

21

y

1

+α +βy

1

)C

2

2

= 0.

Sorting out the LHS according to the independent variables y

1

and y

2

, we get

−δ

1

+C

1

+λ

1

C

1

+σ

21

C

1

C

2

+

1

2

C

2

1

+

1

2

(σ

2

21

+β)C

2

2

= 0

−δ

2

+C

2

+λ

2

C

2

= 0

−δ

0

+A

−µC

1

+

1

2

αC

2

2

= 0.

In other words, we can obtain the ODEs for C

1

, C

2

and A as follows

C

1

= −λ

1

C

1

−σ

21

C

1

C

2

−

1

2

C

2

1

−

1

2

(σ

2

21

+β)C

2

2

+δ

1

diﬀerent from (10.7.4), check!

C

2

= −λ

2

C

2

+δ

2

A

= µC

1

−

1

2

αC

2

2

+δ

0

.

10.3. (i)

Proof. d(D

t

B(t, T)) = D

t

[−R

t

f(t, T, Y

1

(t), Y

2

(t))dt +df(t, T, Y

1

(t), Y

2

(t))] and

df(t, T, Y

1

(t), Y

2

(t))

= [f

t

(t, T, Y

1

(t), Y

2

(t)) +f

y1

(t, T, Y

1

(t), Y

2

(t))(−λ

1

Y

1

(t)) +f

y2

(t, T, Y

1

(t), Y

2

(t))(−λ

21

Y

1

(t) −λ

2

Y

2

(t))

+

1

2

f

y1y1

(t, T, Y

1

(t), Y

2

(t)) +

1

2

f

y2y2

(t, T, Y

1

(t), Y

2

(t))]dt + martingale part.

Since D

t

B(t, T) is a martingale under risk-neutral measure, we have the following PDE:

¸

−(δ

0

(t) +δ

1

y

1

+δ

2

y

2

) +

∂

∂t

−λ

1

y

1

∂

∂y

1

−(λ

21

y

1

+λ

2

y

2

)

∂

∂y

2

+

1

2

∂

2

∂y

2

1

+

1

2

∂

∂y

2

2

f(t, T, y

1

, y

2

) = 0.

Suppose f(t, T, y

1

, y

2

) = e

−y1C1(t,T)−y2C2(t,T)−A(t,T)

, then

f

t

(t, T, y

1

, y

2

) =

−y

1

d

dt

C

1

(t, T) −y

2

d

dt

C

2

(t, T) −

d

dt

A(t, T)

f(t, T, y

1

, y

2

),

f

y1

(t, T, y

1

, y

2

) = −C

1

(t, T)f(t, T, y

1

, y

2

),

f

y2

(t, T, y

1

, y

2

) = −C

2

(t, T)f(t, T, y

1

, y

2

),

f

y1y2

(t, T, y

1

, y

2

) = C

1

(t, T)C

2

(t, T)f(t, T, y

1

, y

2

),

f

y1y1

(t, T, y

1

, y

2

) = C

2

1

(t, T)f(t, T, y

1

, y

2

),

f

y2y2

(t, T, y

1

, y

2

) = C

2

2

(t, T)f(t, T, y

1

, y

2

).

So the PDE becomes

−(δ

0

(t) +δ

1

y

1

+δ

2

y

2

) +

−y

1

d

dt

C

1

(t, T) −y

2

d

dt

C

2

(t, T) −

d

dt

A(t, T)

+λ

1

y

1

C

1

(t, T)

+(λ

21

y

1

+λ

2

y

2

)C

2

(t, T) +

1

2

C

2

1

(t, T) +

1

2

C

2

2

(t, T) = 0.

77

Sorting out the terms according to independent variables y

1

and y

2

, we get

−δ

0

(t) −

d

dt

A(t, T) +

1

2

C

2

1

(t, T) +

1

2

C

2

2

(t, T) = 0

−δ

1

−

d

dt

C

1

(t, T) +λ

1

C

1

(t, T) +λ

21

C

2

(t, T) = 0

−δ

2

−

d

dt

C

2

(t, T) +λ

2

C

2

(t, T) = 0.

That is

d

dt

C

1

(t, T) = λ

1

C

1

(t, T) +λ

21

C

2

(t, T) −δ

1

d

dt

C

2

(t, T) = λ

2

C

2

(t, T) −δ

2

d

dt

A(t, T) =

1

2

C

2

1

(t, T) +

1

2

C

2

2

(t, T) −δ

0

(t).

(ii)

Proof. For C

2

, we note

d

dt

[e

−λ2t

C

2

(t, T)] = −e

−λ2t

δ

2

from the ODE in (i). Integrate from t to T, we have

0 −e

−λ2t

C

2

(t, T) = −δ

2

T

t

e

−λ2s

ds =

δ2

λ2

(e

−λ2T

−e

−λ2t

). So C

2

(t, T) =

δ2

λ2

(1 −e

−λ2(T−t)

). For C

1

, we note

d

dt

(e

−λ1t

C

1

(t, T)) = (λ

21

C

2

(t, T) −δ

1

)e

−λ1t

=

λ

21

δ

2

λ

2

(e

−λ1t

−e

−λ2T+(λ2−λ1)t

) −δ

1

e

−λ1t

.

Integrate from t to T, we get

−e

−λ1t

C

1

(t, T)

=

−

λ21δ2

λ2λ1

(e

−λ1T

−e

−λ1t

) −

λ21δ2

λ2

e

−λ2T e

(λ

2

−λ

1

)T

−e

(λ

2

−λ

1

)t

λ2−λ1

+

δ1

λ1

(e

−λ1T

−e

−λ1T

) if λ

1

= λ

2

−

λ21δ2

λ2λ1

(e

−λ1T

−e

−λ1t

) −

λ21δ2

λ2

e

−λ2T

(T −t) +

δ1

λ1

(e

−λ1T

−e

−λ1T

) if λ

1

= λ

2

.

So

C

1

(t, T) =

λ21δ2

λ2λ1

(e

−λ1(T−t)

−1) +

λ21δ2

λ2

e

−λ

1

(T−t)

−e

−λ

2

(T−t)

λ2−λ1

−

δ1

λ1

(e

−λ1(T−t)

−1) if λ

1

= λ

2

λ21δ2

λ2λ1

(e

−λ1(T−t)

−1) +

λ21δ2

λ2

e

−λ2T+λ1t

(T −t) −

δ1

λ1

(e

−λ1(T−t)

−1) if λ

1

= λ

2

.

.

(iii)

Proof. From the ODE

d

dt

A(t, T) =

1

2

(C

2

1

(t, T) +C

2

2

(t, T)) −δ

0

(t), we get

A(t, T) =

T

t

¸

δ

0

(s) −

1

2

(C

2

1

(s, T) +C

2

2

(s, T))

ds.

(iv)

Proof. We want to ﬁnd δ

0

so that f(0, T, Y

1

(0), Y

2

(0)) = e

−Y1(0)C1(0,T)−Y2(0)C2(0,T)−A(0,T)

= B(0, T) for all

T > 0. Take logarithm on both sides and plug in the expression of A(t, T), we get

log B(0, T) = −Y

1

(0)C

1

(0, T) −Y

2

(0)C

2

(0, T) +

T

0

¸

1

2

(C

2

1

(s, T) +C

2

2

(s, T)) −δ

0

(s)

ds.

Taking derivative w.r.t. T, we have

∂

∂T

log B(0, T) = −Y

1

(0)

∂

∂T

C

1

(0, T) −Y

2

(0)

∂

∂T

C

2

(0, T) +

1

2

C

2

1

(T, T) +

1

2

C

2

2

(T, T) −δ

0

(T).

78

So

δ

0

(T) = −Y

1

(0)

∂

∂T

C

1

(0, T) −Y

2

(0)

∂

∂T

C

2

(0, T) −

∂

∂T

log B(0, T)

=

−Y

1

(0)

δ

1

e

−λ1T

−

λ21δ2

λ2

e

−λ2T

−Y

2

(0)δ

2

e

−λ2T

−

∂

∂T

log B(0, T) if λ

1

= λ

2

−Y

1

(0)

δ

1

e

−λ1T

−λ

21

δ

2

e

−λ2T

T

−Y

2

(0)δ

2

e

−λ2T

−

∂

∂T

log B(0, T) if λ

1

= λ

2

.

10.4. (i)

Proof.

d

´

X

t

= dX

t

+Ke

−Kt

t

0

e

Ku

Θ(u)dudt −Θ(t)dt

= −KX

t

dt + Σd

´

B

t

+Ke

−Kt

t

0

e

Ku

Θ(u)dudt

= −K

´

X

t

dt + Σd

´

B

t

.

(ii)

Proof.

W

t

= CΣ

¯

B

t

=

1

σ1

0

−

ρ

σ1

√

1−ρ

2

1

σ2

√

1−ρ

2

σ

1

0

0 σ

2

¯

B

t

=

1 0

−

ρ

√

1−ρ

2

1

√

1−ρ

2

¯

B

t

.

So

W is a martingale with '

W

1

`

t

= '

¯

B

1

`

t

= t, '

W

2

`

t

= '−

ρ

√

1−ρ

2

¯

B

1

+

1

√

1−ρ

2

¯

B

2

`

t

=

ρ

2

t

1−ρ

2

+

t

1−ρ

2

−2

ρ

1−ρ

2

ρt =

ρ

2

+1−2ρ

2

1−ρ

2

t = t, and '

W

1

,

W

2

`

t

= '

¯

B

1

, −

ρ

√

1−ρ

2

¯

B

1

+

1

√

1−ρ

2

¯

B

2

`

t

= −

ρt

√

1−ρ

2

+

ρt

√

1−ρ

2

= 0. Therefore

W is a

two-dimensional BM. Moreover, dY

t

= Cd

´

X

t

= −CK

¯

X

t

dt+CΣd

¯

B

t

= −CKC

−1

Y

t

dt+d

W

t

= −ΛY

t

dt+d

W

t

,

where

Λ = CKC

−1

=

1

σ1

0

−

ρ

σ1

√

1−ρ

2

1

σ2

√

1−ρ

2

λ

1

0

−1 λ

2

1

[C[

¸

1

σ2

√

1−ρ

2

0

ρ

σ1

√

1−ρ

2

1

σ1

¸

=

λ1

σ1

0

−

ρλ1

σ1

√

1−ρ

2

−

1

σ2

√

1−ρ

2

λ2

σ2

√

1−ρ

2

σ

1

0

ρσ

2

σ

2

1 −ρ

2

=

λ

1

0

ρσ2(λ2−λ1)−σ1

σ2

√

1−ρ

2

λ

2

.

(iii)

Proof.

X

t

=

´

X

t

+e

−Kt

t

0

e

Ku

Θ(u)du = C

−1

Y

t

+e

−Kt

t

0

e

Ku

Θ(u)du

=

σ

1

0

ρσ

2

σ

2

1 −ρ

2

Y

1

(t)

Y

2

(t)

+e

−Kt

t

0

e

Ku

Θ(u)du

=

σ

1

Y

1

(t)

ρσ

2

Y

1

(t) +σ

2

1 −ρ

2

Y

2

(t)

+e

−Kt

t

0

e

Ku

Θ(u)du.

79

So R

t

= X

2

(t) = ρσ

2

Y

1

(t)+σ

2

1 −ρ

2

Y

2

(t)+δ

0

(t), where δ

0

(t) is the second coordinate of e

−Kt

t

0

e

Ku

Θ(u)du

and can be derived explicitly by Lemma 10.2.3. Then δ

1

= ρσ

2

and δ

2

= σ

2

1 −ρ

2

.

10.5.

Proof. We note C(t, T) and A(t, T) are dependent only on T −t. So C(t, t +¯ τ) and A(t, t +¯ τ) aare constants

when ¯ τ is ﬁxed. So

d

dt

L

t

= −

B(t, t + ¯ τ)[−C(t, t + ¯ τ)R

(t) −A(t, t + ¯ τ)]

¯ τB(t, t + ¯ τ)

=

1

¯ τ

[C(t, t + ¯ τ)R

(t) +A(t, t + ¯ τ)]

=

1

¯ τ

[C(0, ¯ τ)R

(t) +A(0, ¯ τ)].

Hence L(t

2

)−L(t

1

) =

1

¯ τ

C(0, ¯ τ)[R(t

2

)−R(t

1

)]+

1

¯ τ

A(0, ¯ τ)(t

2

−t

1

). Since L(t

2

)−L(t

1

) is a linear transformation,

it is easy to verify that their correlation is 1.

10.6. (i)

Proof. If δ

2

= 0, then dR

t

= δ

1

dY

1

(t) = δ

1

(−λ

1

Y

1

(t)dt +d

W

1

(t)) = δ

1

(

δ0

δ1

−

Rt

δ1

)λ

1

dt +d

W

1

(t)

= (δ

0

λ

1

−

λ

1

R

t

)dt +δ

1

d

W

1

(t). So a = δ

0

λ

1

and b = λ

1

.

(ii)

Proof.

dR

t

= δ

1

dY

1

(t) +δ

2

dY

2

(t)

= −δ

1

λ

1

Y

1

(t)dt +λ

1

d

W

1

(t) −δ

2

λ

21

Y

1

(t)dt −δ

2

λ

2

Y

2

(t)dt +δ

2

d

W

2

(t)

= −Y

1

(t)(δ

1

λ

1

+δ

2

λ

21

)dt −δ

2

λ

2

Y

2

(t)dt +δ

1

d

W

1

(t) +δ

2

d

W

2

(t)

= −Y

1

(t)λ

2

δ

1

dt −δ

2

λ

2

Y

2

(t)dt +δ

1

d

W

1

(t) +δ

2

d

W

2

(t)

= −λ

2

(Y

1

(t)δ

1

+Y

2

(t)δ

2

)dt +δ

1

d

W

1

(t) +δ

2

d

W

2

(t)

= −λ

2

(R

t

−δ

0

)dt +

δ

2

1

+δ

2

2

¸

δ

1

δ

2

1

+δ

2

2

d

W

1

(t) +

δ

2

δ

2

1

+δ

2

2

d

W

2

(t)

¸

.

So a = λ

2

δ

0

, b = λ

2

, σ =

δ

2

1

+δ

2

2

and

¯

B

t

=

δ1

√

δ

2

1

+δ

2

2

W

1

(t) +

δ2

√

δ

2

1

+δ

2

2

W

2

(t).

10.7. (i)

Proof. We use the canonical form of the model as in formulas (10.2.4)-(10.2.6). By (10.2.20),

dB(t, T) = df(t, Y

1

(t), Y

2

(t))

= de

−Y1(t)C1(T−t)−Y2(t)C2(T−t)−A(T−t)

= dt term +B(t, T)[−C

1

(T −t)d

W

1

(t) −C

2

(T −t)d

W

2

(t)]

= dt term +B(t, T)(−C

1

(T −t), −C

2

(T −t))

d

W

1

(t)

d

W

2

(t)

.

So the volatility vector of B(t, T) under

¯

P is (−C

1

(T − t), −C

2

(T − t)). By (9.2.5),

W

T

j

(t) =

t

0

C

j

(T −

u)du +

W

j

(t) (j = 1, 2) form a two-dimensional

¯

P

T

−BM.

(ii)

80

Proof. Under the T-forward measure, the numeraire is B(t, T). By risk-neutral pricing, at time zero the

risk-neutral price V

0

of the option satisﬁes

V

0

B(0, T)

=

¯

E

T

¸

1

B(T, T)

(e

−C1(

¯

T−T)Y1(T)−C2(

¯

T−T)Y2(T)−A(

¯

T−T)

−K)

+

.

Note B(T, T) = 1, we get (10.7.19).

(iii)

Proof. We can rewrite (10.2.4) and (10.2.5) as

dY

1

(t) = −λ

1

Y

1

(t)dt +d

W

T

1

(t) −C

1

(T −t)dt

dY

2

(t) = −λ

21

Y

1

(t)dt −λ

2

Y

2

(t)dt +d

W

T

2

(t) −C

2

(T −t)dt.

Then

Y

1

(t) = Y

1

(0)e

−λ1t

+

t

0

e

λ1(s−t)

d

W

T

1

(s) −

t

0

C

1

(T −s)e

λ1(s−t)

ds

Y

2

(t) = Y

0

e

−λ2t

−λ

21

t

0

Y

1

(s)e

λ2(s−t)

ds +

t

0

e

λ2(s−t)

d

W

2

(s) −

t

0

C

2

(T −s)e

λ2(s−t)

ds.

So (Y

1

, Y

2

) is jointly Gaussian and X is therefore Gaussian.

(iv)

Proof. First, we recall the Black-Scholes formula for call options: if dS

t

= µS

t

dt +σS

t

d

W

t

, then

¯

E[e

−µT

(S

0

e

σW

T

+(µ−

1

2

σ

2

)T

−K)

+

] = S

0

N(d

+

) −Ke

−µT

N(d

−

)

with d

±

=

1

σ

√

T

(log

S0

K

+(µ±

1

2

σ

2

)T). Let T = 1, S

0

= 1 and ξ = σW

1

+(µ−

1

2

σ

2

), then ξ

d

= N(µ−

1

2

σ

2

, σ

2

)

and

¯

E[(e

ξ

−K)

+

] = e

µ

N(d

+

) −KN(d

−

),

where d

±

=

1

σ

(−log K + (µ ±

1

2

σ

2

)) (diﬀerent from the problem. Check!). Since under

¯

P

T

, X

d

=

N(µ −

1

2

σ

2

, σ

2

), we have

B(0, T)

¯

E

T

[(e

X

−K)

+

] = B(0, T)(e

µ

N(d

+

) −KN(d

−

)).

10.11.

Proof. On each payment date T

j

, the payoﬀ of this swap contract is δ(K −L(T

j−1

, T

j−1

)). Its no-arbitrage

price at time 0 is δ(KB(0, T

j

) −B(0, T

j

)L(0, T

j−1

)) by Theorem 10.4. So the value of the swap is

n+1

¸

j=1

δ[KB(0, T

j

) −B(0, T

j

)L(0, T

j−1

)] = δK

n+1

¸

j=1

B(0, T

j

) −δ

n+1

¸

j=1

B(0, T

j

)L(0, T

j−1

).

10.12.

81

Proof. Since L(T, T) =

1−B(T,T+δ)

δB(T,T+δ)

∈ T

T

, we have

¯

E[D(T +δ)L(T, T)] =

¯

E[

¯

E[D(T +δ)L(T, T)[T

T

]]

=

¯

E

¸

1 −B(T, T +δ)

δB(T, T +δ)

¯

E[D(T +δ)[T

T

]

=

¯

E

¸

1 −B(T, T +δ)

δB(T, T +δ)

D(T)B(T, T +δ)

=

¯

E

¸

D(T) −D(T)B(T, T +δ)

δ

=

B(0, T) −B(0, T +δ)

δ

= B(0, T +δ)L(0, T).

11. Introduction to Jump Processes

11.1. (i)

Proof. First, M

2

t

= N

2

t

−2λtN

t

+ λ

2

t

2

. So E[M

2

t

] < ∞. f(x) = x

2

is a convex function. So by conditional

Jensen’s inequality,

E[f(M

t

)[T

s

] ≥ f(E[M

t

[T

s

]) = f(M

s

), ∀s ≤ t.

So M

2

t

is a submartingale.

(ii)

Proof. We note M

t

has independent and stationary increment. So ∀s ≤ t, E[M

2

t

− M

2

s

[T

s

] = E[(M

t

−

M

s

)

2

[T

s

] + E[(M

t

− M

s

) 2M

s

[T

s

] = E[M

2

t−s

] + 2M

s

E[M

t−s

] = V ar(N

t−s

) + 0 = λ(t − s). That is,

E[M

2

t

−λt[T

s

] = M

2

s

−λs.

11.2.

Proof. P(N

s+t

= k[N

s

= k) = P(N

s+t

− N

s

= 0[N

s

= k) = P(N

t

= 0) = e

−λt

= 1 − λt + O(t

2

). Similarly,

we have P(N

s+t

= k + 1[N

s

= k) = P(N

t

= 1) =

(λt)

1

1!

e

−λt

= λt(1 − λt + O(t

2

)) = λt + O(t

2

), and

P(N

s+t

≥ k + 2[N

2

= k) = P(N

t

≥ 2) =

¸

∞

k=2

(λt)

k

k!

e

−λt

= O(t

2

).

11.3.

Proof. For any t ≤ u, we have

E

¸

S

u

S

t

T

t

= E[(σ + 1)

Nt−Nu

e

−λσ(t−u)

[T

t

]

= e

−λσ(t−u)

E[(σ + 1)

Nt−u

]

= e

−λσ(t−u)

E[e

Nt−u log(σ+1)

]

= e

−λσ(t−u)

e

λ(t−u)(e

log(σ+1)

−1)

(by (11.3.4))

= e

−λσ(t−u)

e

λσ(t−u)

= 1.

So S

t

= E[S

u

[T

t

] and S is a martingale.

11.4.

82

Proof. The problem is ambiguous in that the relation between N

1

and N

2

is not clearly stated. According

to page 524, paragraph 2, we would guess the condition should be that N

1

and N

2

are independent.

Suppose N

1

and N

2

are independent. Deﬁne M

1

(t) = N

1

(t) − λ

1

t and M

2

(t) = N

2

(t) − λ

2

t. Then by

independence E[M

1

(t)M

2

(t)] = E[M

1

(t)]E[M

2

(t)] = 0. Meanwhile, by Itˆo’s product formula, M

1

(t)M

2

(t) =

t

0

M

1

(s−)dM

2

(s) +

t

0

M

2

(s−)dM

1

(s) +[M

1

, M

2

]

t

. Both

t

0

M

1

(s−)dM

2

(s) and

t

0

M

2

(s−)dM

1

(s) are mar-

tingales. So taking expectation on both sides, we get 0 = 0 + E¦[M

1

, M

2

]

t

¦ = E[

¸

0<s≤t

∆N

1

(s)∆N

2

(s)].

Since

¸

0<s≤t

∆N

1

(s)∆N

2

(s) ≥ 0 a.s., we conclude

¸

0<s≤t

∆N

1

(s)∆N

2

(s) = 0 a.s. By letting t = 1, 2, ,

we can ﬁnd a set Ω

0

of probability 1, so that ∀ω ∈ Ω

0

,

¸

0<s≤t

∆N

1

(s)∆N

2

(s) = 0 for all t > 0. Therefore

N

1

and N

2

can have no simultaneous jump.

11.5.

Proof. We shall prove the whole path of N

1

is independent of the whole path of N

2

, following the scheme

suggested by page 489, paragraph 1.

Fix s ≥ 0, we consider X

t

= u

1

(N

1

(t)−N

1

(s))+u

2

(N

2

(t)−N

2

(s))−λ

1

(e

u1

−1)(t−s)−λ

2

(e

u2

−1)(t−s),

t > s. Then by Itˆo’s formula for jump process, we have

e

Xt

−e

Xs

=

t

s

e

Xu

dX

c

u

+

1

2

t

s

e

Xu

dX

c

u

dX

c

u

+

¸

s<u≤t

(e

Xu

−e

Xu−

)

=

t

s

e

Xu

[−λ

1

(e

u1

−1) −λ

2

(e

u2

−1)]du +

¸

0<u≤t

(e

Xu

−e

Xu−

).

Since ∆X

t

= u

1

∆N

1

(t)+u

2

∆N

2

(t) and N

1

, N

2

have no simultaneous jump, e

Xu

−e

Xu−

= e

Xu−

(e

∆Xu

−1) =

e

Xu−

[(e

u1

−1)∆N

1

(u) + (e

u2

−1)∆N

2

(u)]. So

e

Xt

−1

=

t

s

e

Xu−

[−λ

1

(e

u1

−1) −λ

2

(e

u2

−1)]du +

¸

s<u≤t

e

Xu−

[(e

u1

−1)∆N

1

(u) + (e

u2

−1)∆N

2

(u)]

=

t

s

e

Xu−

[(e

u1

−1)d(N

1

(u) −λ

1

u) −(e

u2

−1)d(N

2

(u) −λ

2

u)].

This shows (e

Xt

)

t≥s

is a martingale w.r.t. (T

t

)

t≥s

. So E[e

Xt

] ≡ 1, i.e.

E[e

u1(N1(t)−N1(s))+u2(N2(t)−N2(s))

] = e

λ1(e

u

1

−1)(t−s)

e

λ2(e

u

2

−1)(t−s)

= E[e

u1(N1(t)−N1(s))

]E[e

u2(N2(t)−N2(s))

].

This shows N

1

(t) −N

1

(s) is independent of N

2

(t) −N

2

(s).

Now, suppose we have 0 ≤ t

1

< t

2

< t

3

< < t

n

, then the vector (N

1

(t

1

), , N

1

(t

n

)) is independent

of (N

2

(t

1

), , N

2

(t

n

)) if and only if (N

1

(t

1

), N

1

(t

2

) − N

1

(t

1

), , N

1

(t

n

) − N

1

(t

n−1

)) is independent of

(N

2

(t

1

), N

2

(t

2

) −N

2

(t

1

), , N

2

(t

n

) −N

2

(t

n−1

)). Let t

0

= 0, then

E[e

n

i=1

ui(N1(ti)−N1(ti−1))+

n

j=1

vj(N2(tj)−N2(tj−1))

]

= E[e

n−1

i=1

ui(N1(ti)−N1(ti−1))+

n−1

j=1

vj(N2(tj)−N2(tj−1))

E[e

un(N1(tn)−N1(tn−1))+vn(N2(tn)−N2(tn−1))

[T

tn−1

]]

= E[e

n−1

i=1

ui(N1(ti)−N1(ti−1))+

n−1

j=1

vj(N2(tj)−N2(tj−1))

]E[e

un(N1(tn)−N1(tn−1))+vn(N2(tn)−N2(tn−1))

]

= E[e

n−1

i=1

ui(N1(ti)−N1(ti−1))+

n−1

j=1

vj(N2(tj)−N2(tj−1))

]E[e

un(N1(tn)−N1(tn−1))

]E[e

vn(N2(tn)−N2(tn−1))

],

where the second equality comes from the independence of N

i

(t

n

) −N

i

(t

n−1

) (i = 1, 2) relative to T

tn−1

and

the third equality comes from the result obtained in the above paragraph. Working by induction, we have

E[e

n

i=1

ui(N1(ti)−N1(ti−1))+

n

j=1

vj(N2(tj)−N2(tj−1))

]

=

n

¸

i=1

E[e

ui(N1(ti)−N1(ti−1))

]

n

¸

j=1

E[e

vj(N2(tj)−N2(tj−1))

]

= E[e

n

i=1

ui(N1(ti)−N1(ti−1))

]E[e

n

j=1

vj(N2(tj)−N2(tj−1))

].

83

This shows the whole path of N

1

is independent of the whole path of N

2

.

11.6.

Proof. Let X

t

= u

1

W

t

−

1

2

u

2

1

t +u

2

Q

t

−λt(ϕ(u

2

) −1) where ϕ is the moment generating function of the jump

size Y . Itˆo’s formula for jump process yields

e

Xt

−1 =

t

0

e

Xs

(u

1

dW

s

−

1

2

u

2

1

ds −λ(ϕ(u

2

) −1)ds) +

1

2

t

0

e

Xs

u

2

1

ds +

¸

0<s≤t

(e

Xs

−e

Xs−

).

Note ∆X

t

= u

2

∆Q

t

= u

2

Y

Nt

∆N

t

, where N

t

is the Poisson process associated with Q

t

. So e

Xt

− e

Xt−

=

e

Xt−

(e

∆Xt

− 1) = e

Xt−

(e

u2Y

N

t

− 1)∆N

t

. Consider the compound Poisson process H

t

=

¸

Nt

i=1

(e

u2Yi

− 1),

then H

t

−λE[e

u2Y

N

t

−1]t = H

t

−λ(ϕ(u

2

) −1)t is a martingale, e

Xt

−e

Xt−

= e

Xt−

∆H

t

and

e

Xt

−1 =

t

0

e

Xs

(u

1

dW

s

−

1

2

u

2

1

ds −λ(ϕ(u

2

) −1)ds) +

1

2

t

0

e

Xs

u

2

1

ds +

t

0

e

Xs−

dH

s

=

t

0

e

Xs

u

1

dW

s

+

t

0

e

Xs−

d(H

s

−λ(ϕ(u

2

) −1)s).

This shows e

Xt

is a martingale and E[e

Xt

] ≡ 1. So E[e

u1Wt+u2Qt

] = e

1

2

u1t

e

λt(ϕ(u2)−1)t

= E[e

u1Wt

]E[e

u2Qt

].

This shows W

t

and Q

t

are independent.

11.7.

Proof. E[h(Q

T

)[T

t

] = E[h(Q

T

−Q

t

+Q

t

)[T

t

] = E[h(Q

T−t

+x)][

x=Qt

= g(t, Q

t

), where g(t, x) = E[h(Q

T−t

+

x)].

References

[1] F. Delbaen and W. Schachermayer. The mathematics of arbitrage. Springer, 2006.

[2] J. Hull. Options, futures, and other derivatives. Fourth Edition. Prentice-Hall International Inc., New

Jersey, 2000.

[3] B. Øksendal. Stochastic diﬀerential equations: An introduction with applications. Sixth edition. Springer-

Verlag, Berlin, 2003.

[4] D. Revuz and M. Yor. Continous martingales and Brownian motion. Third edition. Springer-Verlag,

Berline, 1998.

[5] A. N. Shiryaev. Essentials of stochastic ﬁnance: facts, models, theory. World Scientiﬁc, Singapore, 1999.

[6] S. Shreve. Stochastic calculus for ﬁnance I. The binomial asset pricing model. Springer-Verlag, New

York, 2004.

[7] S. Shreve. Stochastic calculus for ﬁnance II. Continuous-time models. Springer-Verlag, New York, 2004.

[8] P. Wilmott. The mathematics of ﬁnancial derivatives: A student introduction. Cambridge University

Press, 1995.

84

5 Proof. X1 (u) = ∆0 × 8 + Γ0 × 3 − 5 (4∆0 + 1.20Γ0 ) = 3∆0 + 1.5Γ0 , and X1 (d) = ∆0 × 2 − 4 (4∆0 + 1.20Γ0 ) = 4 −3∆0 − 1.5Γ0 . That is, X1 (u) = −X1 (d). So if there is a positive probability that X1 is positive, then there is a positive probability that X1 is negative. Remark: Note the above relation X1 (u) = −X1 (d) is not a coincidence. In general, let V1 denote the ¯ ¯ payoﬀ of the derivative security at time 1. Suppose X0 and ∆0 are chosen in such a way that V1 can be ¯ 0 − ∆0 S0 ) + ∆0 S1 = V1 . Using the notation of the problem, suppose an agent begins ¯ ¯ replicated: (1 + r)(X with 0 wealth and at time zero buys ∆0 shares of stock and Γ0 options. He then puts his cash position ¯ −∆0 S0 − Γ0 X0 in a money market account. At time one, the value of the agent’s portfolio of stock, option and money market assets is

¯ X1 = ∆0 S1 + Γ0 V1 − (1 + r)(∆0 S0 + Γ0 X0 ). Plug in the expression of V1 and sort out terms, we have ¯ X1 = S0 (∆0 + ∆0 Γ0 )( S1 − (1 + r)). S0

¯ Since d < (1 + r) < u, X1 (u) and X1 (d) have opposite signs. So if the price of the option at time zero is X0 , then there will no arbitrage. 1.3.

S0 1 Proof. V0 = 1+r 1+r−d S1 (H) + u−1−r S1 (T ) = 1+r 1+r−d u + u−1−r d = S0 . This is not surprising, since u−d u−d u−d u−d this is exactly the cost of replicating S1 . Remark: This illustrates an important point. The “fair price” of a stock cannot be determined by the risk-neutral pricing, as seen below. Suppose S1 (H) and S1 (T ) are given, we could have two current prices, S0 and S0 . Correspondingly, we can get u, d and u , d . Because they are determined by S0 and S0 , respectively, it’s not surprising that risk-neutral pricing formula always holds, in both cases. That is, 1+r−d u−d S1 (H)

S0 =

+

u−1−r u−d S1 (T )

1+r

, S0 =

1+r−d u −d

S1 (H) +

u −1−r u −d S1 (T )

1+r

.

Essentially, this is because risk-neutral pricing relies on fair price=replication cost. Stock as a replicating component cannot determine its own “fair” price via the risk-neutral pricing formula. 1.4. Proof. Xn+1 (T ) = = ∆n dSn + (1 + r)(Xn − ∆n Sn )

∆n Sn (d − 1 − r) + (1 + r)Vn pVn+1 (H) + q Vn+1 (T ) ˜ ˜ Vn+1 (H) − Vn+1 (T ) (d − 1 − r) + (1 + r) = u−d 1+r = p(Vn+1 (T ) − Vn+1 (H)) + pVn+1 (H) + q Vn+1 (T ) ˜ ˜ ˜ = pVn+1 (T ) + q Vn+1 (T ) ˜ ˜ = Vn+1 (T ).

1.6.

2

Proof. The bank’s trader should set up a replicating portfolio whose payoﬀ is the opposite of the option’s payoﬀ. More precisely, we solve the equation (1 + r)(X0 − ∆0 S0 ) + ∆0 S1 = −(S1 − K)+ .

1 Then X0 = −1.20 and ∆0 = − 2 . This means the trader should sell short 0.5 share of stock, put the income 2 into a money market account, and then transfer 1.20 into a separate money market account. At time one, the portfolio consisting of a short position in stock and 0.8(1 + r) in money market account will cancel out with the option’s payoﬀ. Therefore we end up with 1.20(1 + r) in the separate money market account. Remark: This problem illustrates why we are interested in hedging a long position. In case the stock price goes down at time one, the option will expire without any payoﬀ. The initial money 1.20 we paid at time zero will be wasted. By hedging, we convert the option back into liquid assets (cash and stock) which guarantees a sure payoﬀ at time one. Also, cf. page 7, paragraph 2. As to why we hedge a short position (as a writer), see Wilmott [8], page 11-13.

1.7. Proof. The idea is the same as Problem 1.6. The bank’s trader only needs to set up the reverse of the replicating trading strategy described in Example 1.2.4. More precisely, he should short sell 0.1733 share of stock, invest the income 0.6933 into money market account, and transfer 1.376 into a separate money market account. The portfolio consisting a short position in stock and 0.6933-1.376 in money market account will replicate the opposite of the option’s payoﬀ. After they cancel out, we end up with 1.376(1 + r)3 in the separate money market account. 1.8. (i)

2 s s Proof. vn (s, y) = 5 (vn+1 (2s, y + 2s) + vn+1 ( 2 , y + 2 )).

(ii) Proof. 1.696. (iii) Proof. δn (s, y) = vn+1 (us, y + us) − vn+1 (ds, y + ds) . (u − d)s

1.9. (i) Proof. Similar to Theorem 1.2.2, but replace r, u and d everywhere with rn , un and dn . More precisely, set pn = 1+rn −dn and qn = 1 − pn . Then un −dn Vn = pn Vn+1 (H) + qn Vn+1 (T ) . 1 + rn

(ii) Proof. ∆n = (iii) 3

Vn+1 (H)−Vn+1 (T ) Sn+1 (H)−Sn+1 (T )

=

Vn+1 (H)−Vn+1 (T ) . (un −dn )Sn

10 10 Proof. un = Sn+1 (H) = Sn +10 = 1+ Sn and dn = Sn+1 (T ) = Sn −10 = 1− Sn . So the risk-neutral probabilities Sn Sn Sn Sn at time n are pn = u1−dnn = 1 and qn = 1 . Risk-neutral pricing implies the price of this call at time zero is ˜ ˜ 2 2 n −d 9.375.

2. Probability Theory on Coin Toss Space 2.1. (i) Proof. P (Ac ) + P (A) = (ii) Proof. By induction, it suﬃces to work on the case N = 2. When A1 and A2 are disjoint, P (A1 ∪ A2 ) = ω∈A1 ∪A2 P (ω) = ω∈A1 P (ω) + ω∈A2 P (ω) = P (A1 ) + P (A2 ). When A1 and A2 are arbitrary, using the result when they are disjoint, we have P (A1 ∪ A2 ) = P ((A1 − A2 ) ∪ A2 ) = P (A1 − A2 ) + P (A2 ) ≤ P (A1 ) + P (A2 ). 2.2. (i)

1 3 1 Proof. P (S3 = 32) = p3 = 8 , P (S3 = 8) = 3p2 q = 3 , P (S3 = 2) = 3pq 2 = 8 , and P (S3 = 0.5) = q 3 = 8 . 8 ω∈Ac

P (ω) +

ω∈A

P (ω) =

ω∈Ω

P (ω) = 1.

(ii) Proof. E[S1 ] = 8P (S1 = 8) + 2P (S1 = 2) = 8p + 2q = 5, E[S2 ] = 16p2 + 4 · 2pq + 1 · q 2 = 6.25, and 3 1 E[S3 ] = 32 · 1 + 8 · 8 + 2 · 3 + 0.5 · 8 = 7.8125. So the average rates of growth of the stock price under P 8 8 5 are, respectively: r0 = 4 − 1 = 0.25, r1 = 6.25 − 1 = 0.25 and r2 = 7.8125 − 1 = 0.25. 5 6.25 (iii)

8 1 Proof. P (S3 = 32) = ( 2 )3 = 27 , P (S3 = 8) = 3 · ( 2 )2 · 1 = 4 , P (S3 = 2) = 2 · 1 = 2 , and P (S3 = 0.5) = 27 . 3 3 3 9 9 9 Accordingly, E[S1 ] = 6, E[S2 ] = 9 and E[S3 ] = 13.5. So the average rates of growth of the stock price 9 6 under P are, respectively: r0 = 4 − 1 = 0.5, r1 = 6 − 1 = 0.5, and r2 = 13.5 − 1 = 0.5. 9

**2.3. Proof. Apply conditional Jensen’s inequality. 2.4. (i) Proof. En [Mn+1 ] = Mn + En [Xn+1 ] = Mn + E[Xn+1 ] = Mn . (ii)
**

2 n+1 Proof. En [ SSn ] = En [eσXn+1 eσ +e−σ ] = 2 σXn+1 ] eσ +e−σ E[e

= 1.

2.5. (i)

2 2 Proof. 2In = 2 j=0 Mj (Mj+1 − Mj ) = 2 j=0 Mj Mj+1 − j=1 Mj − j=1 Mj = 2 j=0 Mj Mj+1 + n−1 n−1 n−1 n−1 2 2 2 2 2 2 2 2 Mn − j=0 Mj+1 − j=0 Mj = Mn − j=0 (Mj+1 − Mj ) = Mn − j=0 Xj+1 = Mn − n. n−1 n−1 n−1 n−1 n−1

(ii) Proof. En [f (In+1 )] = En [f (In + Mn (Mn+1 − Mn ))] = En [f (In + Mn Xn+1 )] = 1 [f (In + Mn ) + f (In − Mn )] = 2 √ √ √ g(In ), where g(x) = 1 [f (x + 2x + n) + f (x − 2x + n)], since 2In + n = |Mn |. 2 2.6.

4

(Sn )n≥1 cannot be a Markov process. 5 .5. 1} .2.5 and Theorem 2. 2. In the proof of Theorem 1. Proof. Indeed. Since ( (1+r)n )0≤n≤N is a martingale under P (Theorem Vn 2. then use (i). ωn ) := qn .7. S 1 1 0 −d So p0 = 1+r−d0 0 = 2 . P would be constructed according to conditional probabilities P (ωn+1 = H|ω1 . Cf. q1 (H) = 2 . V2 (HT ) = 1. Xn ))]. · · · . S0 (T and d1 (T ) = S21 (TT)) = 1. p1 (T ) = 2 1 4. Therefore P (HH) = p0 p1 (H) = 1 . to be determined later on.5).4. p1 (H) u0 5 q1 (T ) = 6 . For any arbitrary function f . q0 = 2 . Deﬁne S1 = X1 and Sn+1 = n Sn +bn (X1 . (i) (H) S1 (H) 1 = 2.4. d0 = S1S0 = 2 . notes on page 39.2. 2. u1 (T ) = S2 (T H) S1 (T ) =4 1+r1 (H)−d1 (H) u1 (H)−d1 (H) 1 = 1 . with proper modiﬁcations (i.4.2. · · · . and we can show it is a martingale. V2 (HH) = 5.14) of Chapter 1.4. · · · . V2 (T H) = 1 and V2 (T T ) = 0. Theorem 2. En [In+1 − In ] = En [∆n (Mn+1 − Mn )] = ∆n En [Mn+1 − Mn ] = 0. In other words. ωn ) := pn and P (ωn+1 = T |ω1 . (iv) Proof. (i) Proof. · · · .4. VN (1+r)N −1 (1+r)N is a martingale under P .4. · · · . Clearly (Sn )n≥1 is an adapted stochastic process. · · · . and V0 = p0 V1 (H)+q0 V1 (T ) 1+r0 ≈ 1. then Sn would be the wealth at time n. Xn )Xn+1 . d1 (H) = S2 (HT ) S1 (H) = 1. VN −1 . En [Sn+1 − Sn ] = bn (X1 .Proof.8. )(1+r (ii) Proof.e.). Then 2 intuitively. Note Mn = En [MN ] and Mn = En [MN ]. Therefore in general. V1 1+r . ···. Xn which consists of stock and money market accounts. 4 5 q0 q1 (T ) = 12 .9. Combine (ii) and (iii).7 still work for the random interest rate model. We denote by Xn the result of n-th coin toss. Remark: If Xn is regarded as the gain/loss of n-th bet in a gambling game. Vn (1+r)n = En VN (1+r)N . (ii) Proof. V1 (T ) = p1 (T )V2 (T H)+q1 (T )V2 (T T ) 1+r1 (T ) p1 (H)V2 (HH)+q1 (H)V2 (HT ) 1+r1 (H) = = 1 9. Xn )En [Xn+1 ] = 0. (iii) Proof. Xn )) + f (Sn − bn (X1 . where Head is represented by X = 1 and Tail is 1 represented by X = −1. 2. where bn (·) is a bounded function on {−1. and P (HT ) = p0 q1 (H) = P (T H) = q0 p1 (T ) = and P (T T ) = The proofs of Theorem 2. the sequence (Vn )0≤n≤N can be realized as the value process of a portfolio. 1+r1 (T )−d1 (T ) u1 (T )−d1 (T ) 1 12 1 = 6 . En [f (Sn+1 ] cannot be solely dependent upon Sn when bn ’s are properly chosen. bn is therefore the wager for the (n+1)-th bet and is devised according to past gambling results. We also suppose P (X = 1) = P (X = −1) = 2 . ( (1+r)n )0≤n≤N is a martingale under P . So the time-zero value of an option that pays oﬀ V2 at time two is given by the risk-neutral pricing formula V0 = E (1+r0V2 1 ) . we proved by induction that Xn = Vn where Xn is deﬁned by (1. En [f (Sn+1 )] = 1 [f (Sn + bn (X1 . u0 = u1 (H) = = S2 (HH) S1 (H) = 1. So V1 (H) = 2. Proof. so V0 .

∆0 = (iv) Proof. we 1+r VN X must have XN = VN . Sn+1 So En [ (1+r)n+1 (1−a)n+1 ] = 2.11. = Sn (1+r)n (1−a).3815.4 − 1 54 ≈ 0. So Xn = En [ (1+r)N −n ] = En [ (1+r)N −n ]. V2 (HH)−V2 (HT ) S2 (HH)−S2 (HT ) V1 (H)−V1 (T ) S1 (H)−S1 (T ) = 1 2. (iv) 6 . (iii) Proof. ∆1 (H) = 2. we have ∆n uSn + (1 + r)(Xn − ∆n Sn ) = Xn+1 (H) ∆n dSn + (1 + r)(Xn − ∆n Sn ) = Xn+1 (T ). (ii) CN FN PN Proof.8. = ∆n Sn (1+r)n+1 En [Yn+1 ] + Xn −∆n Sn (1+r)n = ∆n Sn (1+r)n+1 (up + dq) + Xn −∆n Sn (1+r)n = ∆n Sn +Xn −∆n Sn (1+r)n = (ii) Proof. Cn = En [ (1+r)N −n ] = En [ (1+r)N −n ] + En [ (1+r)N −n ] = Fn + Pn .10.(iii) Proof. (i) Proof. FN + PN = SN − K + (K − SN )+ = (SN − K)+ = CN . So ∆n = Xn+1 (H)−Xn+1 (T ) uSn −dSn and Xn = En [ Xn+1 ]. From (2. the no-arbitrage price of VN at time n is Vn = Xn = En [ (1+r)N −n ]. (1 + r)n Sn (1+r)n+1 (1−a)(pu+qd) Sn+1 If An+1 is a constant a. F0 = E[ (1+r)N ] = 1 (1+r)N E[SN − K] = S0 − K (1+r)N . En [ (1+r)n+1 ] = En [ ∆n Yn+1 Sn + (1+r)n+1 (1+r)(Xn −∆n Sn ) ] (1+r)n+1 Xn (1+r)n . En [ Sn+1 ] (1 + r)n+1 = = < = 1 En [(1 − An+1 )Yn+1 Sn ] (1 + r)n+1 Sn [p(1 − An+1 (H))u + q(1 − An+1 (T ))d] (1 + r)n+1 Sn [pu + qd] (1 + r)n+1 Sn . = 5−1 12−8 = 1. (i) Xn+1 Proof. (iii) FN Proof. Since (Xn )0≤n≤N is the value process of the N unique replicating portfolio (uniqueness is guaranteed by the uniqueness of the solution to the above linear VN equations).2). then En [ (1+r)n+1 ] = Sn (1+r)n (1−a)n . To make the portfolio replicate the payoﬀ at time N .4− 9 8−2 = 0.

∆m−1 .i. Therefore. the trader has F0 = S0 in money market account and one share of stock. we can construct a portfolio ∆m . Cn = Pn if and only if Fn = 0. then the chooser option is equivalent to receiving a payoﬀ of max(C. expiring at time m and having strike price (1+r)N −m . For any function g. By (ii). So the no-arbitrage price process of the chooser option must be equal to the value process of the replicating portfolio. if C(ω) < P (ω). C0 = F0 + P0 . we get a portfolio (∆n )0≤n≤N −1 whose payoﬀ is the same as that of the chooser option. (1+r)N S0 (1+r)N −n (1+r)N S0 (1+r)N = 0.P ) ]. (ii) 7 Sn+1 Sn Sn )] . Proof.. If we feel unconvinced by the above argument that the chooser option’s no-arbitrage price is E[ max(C. Suppose the market is liquid. we work on P . (i) Proof. (1+r)m due to the economical argument involved (like “the chooser option is equivalent to receiving a payoﬀ of max(C.13. coin tosses ωn ’s are i. C0 = P0 . P ) at time m. P ). 2. By deﬁning (m ≤ k ≤ N − 1) ∆k (ω) = ∆k (ω) ∆k (ω) if C(ω) > P (ω) if C(ω) < P (ω). First. ∆N −1 whose payoﬀ at time N is (K − SN )+ . Both of them have maturity date N and strike price K.P ) ]. P ) = P + (Sm − (1+r)N −m )+ .d. if C(ω) > P (ω). By (ii). C is the no-arbitrage price of a call option at time m and P is the no-arbitrage price of a put option at time m. P ). (v) Proof. the no-arbitrage price of the chooser option at time m must be max(C. Since F0 = S0 − (vi) SN −K Proof. En [g(Sn+1 . we can construct a portfolio ∆0 . Yn+1 )] = En [g( SSn Sn . First. C = Sm − (1+r)N −m + P . Yn ). Yn + = pg(uSn . Yn + dSn ). the time-zero price of a chooser option is E K (Sm − (1+r)N −m )+ P +E (1 + r)m (1 + r)m =E K (Sm − (1+r)N −m )+ (K − SN )+ . Fix ω. · · · . Yn )0≤n≤N is Markov under P . expiring at time N and having strike price K. So max(C. Note Fn = En [ (1+r)N −n ] = Sn − So Fn is not necessarily zero and Cn = Pn is not necessarily true for n ≥ 1. which is a function of (Sn . Note under both actual probability P and risk-neutral probability P . whose payoﬀ at time m is max(C. then we have the following mathematically rigorous argument. where C=E (SN − K)+ (K − SN )+ . ∆N −1 whose payoﬀ at time N is (SN − K)+ . and P = E . In Xm particular. (1+r)m 2. P ) at time m”). V0 = X0 = E[ (1+r)m ] = E[ max(C. and the K second term stands for the time-zero price of a call. we can construct a portfolio ∆m . So n+1 without loss of generality. +E (1 + r)N (1 + r)m The ﬁrst term stands for the time-zero price of a put. = Sn − S0 (1 + r)n . · · · . the trader has a wealth of (F0 − S0 )(1 + r)N + SN = −K + SN = FN . Yn + uSn ) + qg(dSn . At time zero. So (Sn . (1 + r)N −m (1 + r)N −m That is.P ) ]. Therefore. its current no-arbitrage price should be E[ max(C.12.Proof. At time N . (1+r)m K K By the put-call parity. · · · .

YN ) = f ( N K=M +1 Sk N −M ) = VN . Yn )0≤n≤N is Markov under P . More precisely. Since P (A) = 0. then En [vn+1 (Sn+1 .1. we conclude P (ω) = 0 for any ω ∈ A. Then vN (SN . uSM )+ n+1 n+1 qg(dSM . so (Sn . So vn (s. So vM (s) = pvM +1 (us. then = 1 1+r [pvn+1 (uSn . 1+r 2. E[Y ] = (iii) ˜ Proof. · · · . Since coin tosses ωn ’s are i. 3. YN ) = f ( +1 Vn = where En [ Vn+1 ] 1+r = n+1 En [ vn+1 (S1+r . 0).2. M − 1. Then vN (SN . 3. (i) Proof. Yn + dSn ). Y (ω)P (ω) = ω∈Ω Y (ω)Z(ω)P (ω) = E[Y Z].Yn+1 ) ] N n=0 Sn N +1 ) = VN . dSM ). Yn + uSn ) + qvn+1 (dSn . Yn+1 )] = pvn+1 (uSn .1. y + us) + qvn+1 (ds. b) If n = M .1. (Sn . (vi) ω∈A ω∈Ω ω∈Ω P (ω) = ω∈Ω Z(ω)P (ω) = E[Z] = 1. under P . If P (A) = ω∈A Z(ω)P (ω) = 0. YM +1 )] = pvM +1 (uSM . for any function h. 1.d. So vn (s) = pvn+1 (us) + qvn+1 (ds). for n = 0. we get the analogous of properties (i)-(iii) of Theorem 3. Note Z(ω) := P (ω) P (ω) = 1 Z(ω) . P (Ω) = (ii) Proof. (v) Proof. Yn +uSn )+ qg(dSn . 8 . Z(ω)P (ω).1. En [g(Sn+1 . y + ds) . For any function g of two variables. y) = vn+1 (us. (i) Proof. Yn + dSn )] = vn (Sn .i. So P (A) = 0. Z replaced by P . a) If n > M . En [h(Sn+1 )] = ph(uSn ) + h(dSn ). then En [vn+1 (Sn+1 )] = pvn+1 (uSn ) + qvn+1 (dSn ). P (ω) = 0 for any ω ∈ A. And for n ≥ M +1. c) If n < M . Yn )0≤n≤M is Markov under P . P . then EM [vM +1 (SM +1 . y + us) + vn+1 (ds. y) = pvn+1 (us. Proof. Suppose vn+1 is already given. uSM ) + vn+1 (dSM . State Prices 3. P . us) + qvM +1 (ds. P (A) = (iv) Proof. Set vN (s. Yn+1 )] = En [g( SSn Sn . Z. we have EM [g(SM +1 . ds). SM +1 )] = pg(uSM . Yn + SSn Sn )] = pg(uSn . P (A) = 1 ⇐⇒ P (Ac ) = 0 ⇐⇒ P (Ac ) = 0 ⇐⇒ P (A) = 1. by P (Z > 0) = 1. dSM ). (ii) y Proof. vn (s. (Sn . So P (A) = ω∈A P (ω) = 0.14. Set vN (s. y + ds). Yn ) = (Sn .1 with P . Yn + uSn ) + qvn+1 (dSn . For n ≤ M . YM +1 )] = EM [g(SM +1 . Yn + dSn ). y) = f ( N −M ). Apply Theorem 3.Proof. Suppose vn+1 is given. y) = f ( Ny ). Yn ).

we get p−1 1 1 λ= X0 p E 1 Z p−1 Np = p−1 X0 (1 + r)N p (E[Z p−1 ])p−1 1 p . Z (3. (i) Proof. (i) 9 . Solve it for λ.25).6.8. Z(HH) = (ii) Proof. deﬁne Z(ω) = 1 P (ω0 ) 0. In summary. In this case P (ω0 ) = Z(ω0 )P (ω0 ) = 1.4.Proof. we have E[ (1+r)N ( (1+r)N ) p−1 ] = X0 . then Z as constructed above would induce a probability P that is not equivalent to P .5. U (x) = xp−1 and so I(x) = x p−1 . Z1 (T ) = V1 (T ) = and V0 = Z2 (HH)V2 (HH) Z2 (HT )V2 (HT ) Z2 (T H)V2 (T H) P (HH) + P (T H) + 0 ≈ 1. if we can ﬁnd ω0 such that 0 < P (ω0 ) < 1. And P and P have to be equivalent.3. 3. (1+r)N λZ so I(x) = = 1 Z] 1 x. ω=ω0 Clearly P (Ω \ {ω0 }) = E[Z1Ω\{ω0 } ] = Z(ω)P (ω) = 0.7. Z(HT ) = 9 . Z1 (H) = E1 [Z2 ](H) = Z2 (HH)P (ω2 = H|ω1 = H) + Z2 (HT )P (ω2 = T |ω1 = H) = 3 E1 [Z2 ](T ) = Z2 (T H)P (ω2 = H|ω1 = T ) + Z2 (T T )P (ω2 = T |ω1 = T ) = 2 . So λ = = En [ X0 (1+r) Z n 1 X0 . (iii) Proof. 1 P (ω0 ) . Z1 (H)(1 + r1 (H)) [Z2 (T H)V2 (T H)P (ω2 = H|ω1 = T ) + Z2 (T T )V2 (T T )P (ω2 = T |ω1 = T )] 1 = . · P (ω0 ) = 1. ] ] 3. 0 = Xn .3. U (x) = have XN = 1 x. 1 1 1 1 P (HT ) + 1 (1 + 4 )(1 + 4 ) (1 + 4 )(1 + 4 ) (1 + 4 )(1 + 1 ) 2 3. Z(T H) = 8 3 8 and Z(T T ) = 15 4 . By (3. By (3.3. then E[Z] = 1 if and only if Z(ω0 ) = 1. But P (Ω \ {ω0 }) = 1 − P (ω0 ) > 0 if P (ω0 ) < 1. V1 (H) = [Z2 (HH)V2 (HH)P (ω2 = H|ω1 = H) + Z2 (HT )V2 (HT )P (ω2 = T |ω1 = T )] = 2.3. XN = ( (1+r)N ) p−1 = 1 1 Np λ p−1 Z p−1 N (1+r) p−1 = X0 (1+r) p−1 E[Z p p−1 Z p−1 N (1+r) p−1 = (1+r)N X0 Z p−1 E[Z p p−1 1 . if ω = ω0 Then P (Z ≥ 0) = 1 and E[Z] = if ω = ω0 .26).26) gives E[ (1+r)N 1 X0 (1 + r)n Zn En [Z · X0 N Z (1 + r) .2. we 1 ] = X0 (1 + r)n En [ Z ] = the second to last “=” comes from Lemma 3. where ξ Hence Xn = (1+r)N λZ X En [ (1+r)N −n ] N ] = X0 . Proof. Z1 (T )(1 + r1 (T )) 9 3 4. 3. Z λZ Proof. (1+r) p−1 λZ So by (3. Hence in the case 0 < P (ω0 ) < 1. If P (ω0 ) = 1. P and P are not equivalent.6.25). Pick ω0 such that P (ω0 ) > 0. 9 16 .

6. a) If 0 ≤ x < γ and 0 < y ≤ γ . So U (x) − yx ≤ U (I(y)) − yI(y). So U (x) − yx ≤ U (I(y)) − yI(y). 1 d) If x ≥ γ and y > γ . 3. suppose there exists some K so that ξK < ξK+1 and K X0 1 m=1 ξm pm = γ . Following the hint of the problem. we have 2N 2N X0 = m=1 pm ξm I(λξm ) = m=1 1 pm ξm γ1{λξm ≤ γ } . Xn = En [ (1+r)N −n ]. For such λ. Plug pm and ξm into (3. Suppose there is a solution λ to (3.4) has a solution.9. note γ > 0. E[U (XN )] − λX0 ≤ E[U (XN )] − λX0 . that K could be 2 . Let K = max{m : λξm ≤ γ }. then Xn ≥ 0 for all n. So E[U (XN )] ≤ E[U (XN )]. then U (x) − yx = −yx ≤ 0 and U (I(y)) − yI(y) = U (γ) − yγ = 1 − yγ ≥ 0. note = . N (ii) 1 Proof.d d Proof. N N N (1 + r) (1 + r) (1 + r) (1 + r)N λ ∗ ∗ ∗ ∗ i. Using (ii) and set x = XN . (iv) Proof.e. Therefore U (x) − y(x) ≤ U (I(y)) − yI(y) for every x. however. So U (x) − yx ≤ U (I(y)) − yI(y). In this case. x = I(y) is a maximum point. Then we can ﬁnd λ > 0. we have E[U (XN )] − E[XN λZ λZ λZ λZ ] ≤ E[U (I( ))] − E[ I( )]. then U (x) − yx = 1 − yx < 0 and U (I(y)) − yI(y) = U (0) − y · 0 = 0. (iii) XN λZ Proof. we have Z λZ 1 E[ I( )] = pm ξm 1{λξm ≤ γ } γ = pm ξm γ = X0 . So U (x) − yx ≤ U (I(y)) − yI(y). N (1 + r) (1 + r)N m=1 m=1 Hence (3. (1 + r)N (1 + r)N N ∗ ∗ That is. we then can conclude 1 1 1 λξm ≤ γ } = ∅. ξK+1 is interpreted as ∞. 1 c) If x ≥ γ and 0 < y ≤ γ . So if XN ≥ 0. then λξK ≤ γ < λξK+1 . Because dx2 (U (x) − yx) = U (x) ≤ 0 (U is concave). 2 (ii) Proof. Also. we have λZ λZ ∗ E[U (XN )] − E[ XN ] ≤ E[U (XN )] − E[ X ∗ ]. then U (x) − yx = −yx ≤ 0 and U (I(y)) − yI(y) = U (0) − y · 0 = 0. (i) X Proof. Conversely.4). So ξK < ξK+1 and K N m=1 pm ξm (Note. such that ξK < λγ < ξK+1 . where XN is a random variable satisfying E[ (1+r)N ] = X0 . 1 b) If 0 ≤ x < γ and y > γ . y = (1+r)N .6. 10 2N K 2N X0 1 m=1 pm ξm 1{λξm ≤ γ } . So x = I(y) is an extreme point of U (x) − yx. dx (U (x) − yx) = U (x) − y. So E[U (XN )] ≤ E[U (XN )].4). E[U (XN )] − λX0 ≤ E[U (XN )] − E[ (1+r)N XN ] = E[U (XN )] − λX0 .6. then U (x) − yx = 1 − yx and U (I(y)) − yI(y) = U (γ) − yγ = 1 − yγ ≥ 1 − yx. So X0 γ X0 γ {m : = we are looking for positive solution λ > 0).

4. This is exactly the content of Theorem 4. (ii) is a supermartingale under P .3 gives a speciﬁc algorithm for calculating the price. if m ≤ K . V2S (T T ) = 3. V1S (T ) = 2. V2P (T T ) = 3. (1 + r)k−n (1 + r)τ −n The buyer wants to consider all the possible τ ’s. From the seller’s perspective: A price process (Vn )0≤n≤N is acceptable to him if and only if at time n. which will be the maximum price of the derivative security acceptable to him. V1P (H) = 0. − n+1 n (1 + r) (1 + r) (1 + r)n Vn i. 0≤n≤N Theorem 4. Theorem 4. (i) Proof. Theorem 4. V2P (HT ) = V2P (T H) = 0. a price process (Vn )0≤n≤N is acceptable to the seller if and only if (i) Vn ≥ Gn . From the buyer’s perspective: At time n. so that he can ﬁnd the least upper bound of security value.4.4. (iv) 11 . V2P (HH) = 0.2 shows the buyer’s upper bound is the seller’s lower bound. we conclude En Cn Vn+1 Vn =− ≤ 0. The valuation formula for cash ﬂow (Theorem 2. V0C = 5.4.8.4.28. V0P = 9.16 and V0S = 3. gS (s) = |4 − s|. We apply Theorem 4. Theorem 4.1. We shall use the notation of the book. then the buyer can choose a policy τ with τ ∈ Sn . (iii) Proof.4 establishes the one-to-one correspondence between super-replication and supermartingale property. Formally. 4. This is the price given by 1 Deﬁnition 4.3 and have V2S (HH) = 12.4. if the derivative security has not been exercised. and ﬁnally.4. we ﬁrst give a brief summary of pricing American derivative securities as presented in the textbook. Since ( (1+r)n )0≤n≤N is a martingale under the risk-neutral measure P . V1P (T ) = 2. if m ≥ K + 1 4.8) gives a fair price for the derivative security exercised according to τ : N Vn (τ ) = k=n En 1{τ =k} 1 1 Gk = En 1{τ ≤N } Gτ .5 shows how to decide on the optimal exercise policy. V1S (H) = 6.296.32. the seller can ﬁnd (∆n )0≤n≤N and (Cn )0≤n≤N so that Cn ≥ 0 and Sn Vn+1 = ∆n Sn+1 + (1 + r)(Vn − Cn − ∆n Sn ).(v) ∗ 1 Proof. 0. V2S (HT ) = V2S (T H) = 2. American Derivative Securities Before proceeding to the exercise problems.8.4.08. So (Vn )0≤n≤N is the value process of a portfolio that needs no further investing if and only if Vn (1+r)n Vn (1+r)n is a supermartingale under P (note this is independent of the requirement 0≤n≤N Vn ≥ Gn ). (ii) Proof. XN (ω m ) = I(λξm ) = γ1{λξm ≤ γ } = γ. This inspired us to check if the converse is also true. In summary.4.e. he can construct a portfolio at cost Vn so that (i) Vn ≥ Gn and (ii) he needs no further investing into the portfolio as time goes by.1: Vn = maxτ ∈Sn En [1{τ ≤N } (1+r)τ −n Gτ ]. So it gives the price acceptable to both.4. ( (1+r)n )0≤n≤N is a supermartingale.

he needs to borrow again and buy 0.125. G2 (T T ) = 5 . Then clearly m ≤ N − 1 and it is possible S P C that {n : Vn < Vn + Vn } = ∅. At time one.433 × 4) + 0. For this problem.75. G3 (T T T ) = 2. 1+r 4. First. and calculate the intrinsic value process and price process of the put as follows. The agent should borrow to buy 12 shares of stock.4. At the same time. The agent should exercise the put at time one and get 3 to pay oﬀ his debt. Then ∆1 (H) = and ∆0 = V2 (HH) − V2 (HT ) 1 V2 (T H) − V2 (T T ) = − . ≤ max gP (Sn ). if the result of coin toss is tail and the stock price goes down to 2. the value of the portfolio 1 is X1 (T ) = (1 + r)(−1.2. we can show S Vn = max gS (Sn ).2.Proof. to hedge the long position. This will pay oﬀ his debt. S S pVn+1 + Vn+1 1+r C P P pV C + Vn+1 pVn+1 + Vn+1 + n+1 1+r 1+r C C pVn+1 + Vn+1 1+r ≤ max gP (Sn ) + gC (Sn ).433S1 (T ) = (1 + 4 )(−1. At time one.1 and Figure 4. the value of the portfolio 1 is X1 (H) = (1 + r)(−1.433S0 ) + 0. b2 ) ≥ max(a1 + a2 . 3 G3 (T T H) = 1. By induction. G2 (T H) = 3 . the result of coin toss is tail and the stock price goes down to 4. Proof. If at time two. S P C As to when “<” holds.36 at time zero and buys the put. the value of the 1 1 portfolio is X2 (HH) = (1 + r)(X1 (H) − 12 S1 (H)) + 12 S2 (HH) = 0.3. Figure 4. 4. and the agent should let the put expire. τ (HT ) = 2.36 − 0. if the result of coin toss is head and the stock price goes up to 16. G3 (T HT ) = 1. 12 .2 for this problem. Therefore.36 − 0. S2 (HH) − S2 (HT ) 12 S2 (T H) − S2 (T T ) V1 (H) − V1 (T ) ≈ −0.1.2. Proof. G0 = 0. The agent should exercise the put to get 1. G1 (T ) = 1.36 − 0.433S1 (H) = −0. P C = Vn + Vn . So τ (HH) = ∞. P P pVn+1 + Vn+1 1+r + max gC (Sn ).2. we need Figure 4. we note the simple inequality max(a1 . ∆1 (T ) = = −1. suppose m = max{n : Vn < Vn + Vn }. if the result of coin toss is head and the stock price goes up to 8.4. the value of the portfolio is 1 1 X2 (HT ) = (1 + r)(X1 (H) − 12 S1 (H)) + 12 S2 (HT ) = −1.433. S1 (H) − S1 (T ) The optimal exercise time is τ = inf{n : Vn = Gn }. “>” holds if and only if b1 > a1 . At time two. b2 > a2 .433S0 ) + 0. the agent borrows 1. τ (T H) = τ (T T ) = 1.433 shares of stock at time zero. m is characterized as m = max{n : gP (Sn ) < P P pVn+1 +qVn+1 1+r and gC (Sn ) > C C pVn+1 +qVn+1 1+r or gP (Sn ) > P P pVn+1 +qVn+1 1+r and gC (Sn ) < C C pVn+1 +qVn+1 }. 2 For the intrinsic value process. We need Figure 1. All the other outcomes of G is negative.4. b2 < a2 or b1 < a1 . When this set is not empty. b1 + b2 ). b1 ) + max(a2 .433 × 2 = −3.

So by Theorem 4. throughout the portfolio’s lifetime. This is because at time N . V1 (T ) = 1. 4. Proof. τ ∗ (T H) = τ ∗ (T T ) = 2.4 and the optimal exercise time satisﬁes τ (ω) = ∞ if ω1 = H. E[1{τ ≤2} ( 4 )τ Gτ ] has the biggest value when τ satisﬁes τ (HT ) = 2. For (ii).36. Remark: We cheated a little bit by using American algorithm and Theorem 4. τ (T T ) ∈ {2. we can show Vn = K − Sn (0 ≤ n ≤ N ). τ (T T ) ∈ {2. τ (T T ) ∈ {2. the optimal exercise policy is to sell the stock at time zero and the value of this derivative security is K − S0 . In eﬀect. 0}” with “Gn ”. the following stopping times do not exercise (i) τ ≡ 0. ∞}.125.5. τ (HH). V3 (T HT ) = 1. τ (T H).5.75. τ (HH) = ∞.4. (4) τ (HT ). (ii) τ (HT ) ∈ {2. τ (T H). τ (T H) = τ (T T ) = 1 (2 diﬀerent ones). V1 (T T ) = 3 .36 is the cost of super-replicating the American derivative security. ∞}. ∞} (4 diﬀerent ones). ∞} (8 diﬀerent ones). τ (HH) ∈ {2. we can exercise the European call to set oﬀ the negative payoﬀ. τ (HH) = ∞.36. The stopping times in S0 are (1) τ ≡ 0. if it is not exercised at previous times. τ (T H).5. 1. 1 if ω1 = T . 5 This value is 1. For 5 (iii). we must have V0AP ≤ V0 + V0EC ≤ K − S0 + V0EC . 4. By induction. provided we replace “max{Gn . V3 (T T H) = 1. E[1{τ ≤2} ( 4 )τ Gτ ] = G0 = 1. Hence VN −1 = VN K max{K − SN −1 . The value of the put at time N . ∞}. (i) Proof. EN −1 [ 1+r ]} = max{K − SN −1 .4. τ (HH) = ∞. no matter when the derivative security is exercised. It enables us to construct a portfolio suﬃcient to pay oﬀ the derivative security. But intuitively.6. the portfolio has intrinsic values greater than that of an American put stuck at K with expiration time N . (ii) Proof. results in this chapter should still hold for the case τ ≤ N . there is no need to charge the insider more than 1. if we have to exercise the put and K − SN < 0. So E[1{τ ∗ ≤2} ( 5 )τ Gτ ∗ ] = 4 [( 4 )2 · 1 + ( 5 )2 (1 + 4)] = 0. (iii) 13 . (3) τ (HT ) = τ (HH) = 1.4. where τ ∗ (HT ) = 5 5 1 4 4 ∗ 2. Proof. 1+r − SN −1 } = K − SN −1 . V1 (T H) = 3 . 4. τ ∗ (HH) = ∞. So. (5) τ (HT ). E[1{τ ≤2} ( 5 )τ Gτ ] ≤ E[1{τ ∗ ≤2} ( 4 )τ Gτ ∗ ].4.96. V0 = 0. The second equality comes from the fact that discounted stock price process is a martingale under risk-neutral probability. When the option is out of money. (2) τ ≡ 1. (iii) τ (HT ) ∈ {2. Therefore the time-zero price of the derivative security is 0. All the other outcomes of V is zero. since they are developed for the case where τ is allowed to be ∞. So to hedge our short position after selling the put. V3 (T T T ) = 2. ∗ 4 For (i). ∞} (16 diﬀerent ones).2 5 For the price process. τ (T H) = τ (T T ) = 1. τ (T H) = τ (T T ) = 1 (4 diﬀerent ones). is K − SN .

E[ατ2 ] = E[α(τ2 −τ1 )+τ1 ] = E[α(τ2 −τ1 ) ]E[ατ1 ] = E[ατ1 ]2 . so we 14 . (iv) 1 peσ +qe−σ . 2pα 1+ 1−4pqα Proof. then as σ varies from 0 to ∞. so f (σ) > 0 if and only if σ > f (σ) > f (0) = 1 for all σ > 0. with (m) distributions the same as that of M .7. 5. 2.Proof. Since 1 2 (ln q − ln p) < 0. then (M· )m as random functions are i.1. α can take all the values in (0. Let σ ↓ 0.i. SN −1 − 1+r } = SN −1 − 1+r . VN −1 = max{SN −1 − K. Therefore E[ατm ] = E[α(τm −τm−1 )+(τm−1 −τm−2 )+···+τ1 ] = E[ατ1 ]m . So the K time-zero value is S0 − (1+r)N and the optimal exercise time is N . since the argument of (ii) still works for asymmetric random walk. 5. If we deﬁne Mn = Mn+τm − Mτm (m = 1. So τm+1 − τm = inf{n : Mn = 1} are i. √ 1± 1−4pqα2 (note 4pqα2 < 4( p+q )2 · 12 = 1). f (σ) = peσ − qe−σ . VN K K Proof. We want to choose σ > in terms of α. we can prove Vn = Sn − (1+r)N −n (0 ≤ n ≤ N ) and Vn > Gn for 0 ≤ n ≤ N − 1. (i) Proof. So e−σ = E[1{τ1 <∞} ( f (σ) )τ1 ]. by bounded convergence theorem. (ii) 1 1 1 n+1 Proof. (ii) Proof. Set α = 1 f (σ) = Write σ 0. Note Sn∧τ1 = eσMn∧τ1 ( f (σ) )n∧τ1 ≤ eσ·1 . · · · ). Then V0AP ≥ V0EP = V0EC − E[ K SN − K ] = V0EC − S0 + . (iii) 1 Proof. E[1{τ1 <∞} Sτ1 ] = E[limn→∞ Sn∧τ1 ] = limn→∞ E[Sn∧τ1 ] = 1. 1).d. (1 + r)N (1 + r)N 4. (i) Proof. Therefore E[α ] = . En [ SSn ] = En [eσXn+1 f (σ) ] = peσ f (σ) + qe−σ f (σ) = 1.2. VN = SN − K. that is. Yes.i.d. again by bounded convergence theorem. EN −1 [ 1+r ]} = max{SN −1 − K. Random Walk 5. we have eσ = 2pα 2 √ √ 1+ 1−4pqα2 1−4pqα2 τ1 √2pα 2 = 1− 2qα should take σ = ln( ). 1 1 E[1{τ1 <∞} eσ ( f (σ) )τ1 ] = 1. E[Sn∧τ1 ] = E[S0 ] = 1. 1 1 = E[1{τ1 <∞} ( f (0) )τ1 ] = P (τ1 < ∞). By optional stopping theorem. Let V0EP denote the time-zero value of a European put with strike K and expiration time N . with distributions the same as that of τ1 . 1 2 (ln q − ln p). K By induction. (m) (m) (iii) Proof.

(i) Proof. and 1− 1 − 4pqα2 2qα = = So E[τ1 ] = limα↑1 5. then E[ατ1 1{τ1 <∞} ] = . √ 1± 1−4pqα2 1 1 Proof. Suppose σ > σ0 . then f (σ0 ) = 1 and f (σ) > 0 for σ > σ0 . ∞ k=1 ∂ τ1 ∂α E[α 1{τ1 <∞} ]|α=1 = 1 √ 4pq 2q [ 1−4pq − (1 − √ 1 − 4pq)] = 1 4pq 2q [ 2q−1 − 1 + 2q − 1] = P (τ2 = 2k)α2k = ∞ α 2k k=1 ( 2 ) P (τ2 = 2k)4k .3. f (σ) f (σ) Let σ ↓ σ0 . √ (ii) 1 1 Proof. then eσ = . Solve the equation peσ + qe−σ = 1 and a positive solution is ln 1+ 2p = ln 1−p = ln q − ln p. need to choose the root so that eσ > eσ0 = p . ∂ τ1 ∂α E[α ] ∂ = E[ ∂α ατ1 ] = E[τ1 ατ1 −1 ]. From (ii). So f (σ) > 1 for all σ > σ0 . So P (τ2 = 2k) = (2k)! . for σ > σ0 . 4k (k+1)!k! 15 .(v) Proof. we get P (τ1 < ∞) = e−σ0 = (iii) p q < 1. so σ = ln( 2pα 2qα (iv) Proof. P (τ2 = 2k) = P (τ2 ≤ 2k) − P (τ2 ≤ 2k − 2). Sn = eσMn ( f (σ) )n is a martingale. we can see E[1{τ1 <∞} ( f (σ) )τ1 ] = e−σ . ∂ τ1 ∂α E[α ] − 4pq)− 2 (−8pq) − (1 − 1 √ 1 − 4pq)] = 1−4pq Proof. 1 = E[ lim eσMn∧τ1 ( n→∞ 1 n∧τ1 1 τ1 ) ] = E[1{τ1 <∞} eσ ( ) ]. As in Exercise 5. 4 P (τ2 ≤ 2k) = = = = P (M2k = 2) + P (M2k ≥ 4) + P (τ2 ≤ 2k. M2k ≤ 0) P (M2k = 2) + 2P (M2k ≥ 4) P (M2k = 2) + P (M2k ≥ 4) + P (M2k ≤ −4) 1 − P (M2k = −2) − P (M2k = 0). and 1 = E[S0 ] = E[Sn∧τ1 ] = E[eσMn∧τ1 ( f (σ) )τ1 ∧n ].4. (i) 1 (1 − 1 − 4pqα2 )α−1 2q 1 1 1 [− (1 − 4pqα2 )− 2 (−4pq2α)α−1 + (1 − 2q 2 = 1 1 2q [− 2 (1 1 − 4pqα2 )(−1)α2 ]. 1 2p−1 . E[τ1 1{τ1 <∞} ] = p 1 q 2q−1 . Set p σ0 = ln q − ln p. P (τ2 = 2) = 1 . Set α = f (α) . E[ατ2 ] = (ii) Proof. We 2pα √ √ 1− 1−4pqα2 1+ 1−4pqα2 q ). then by bounded convergence theorem. For k ≥ 2.2. 5.

= lim E 1{τ (n) <∞} τ n→∞ n→∞ (1 + r) (1 + r)τ (n) Take sup at the left hand side of the inequality. Proof.6. (i) 16 . (i) Proof.5. Mn = b) = n! (m + n−b n+b 2 )!( 2 − m)! pm+ n−b 2 q n+b 2 −m . τ (n) ∈ Mn τ ∧ n. ∞} and M∞ = {stopping times that takes values 0. So this needs some justiﬁcation. rigorously speaking. we should use (K − Sτ ) in the places of (K − Sτ )+ . Therefore V ∗ = limn V (n) . 1. we get V ∗ ≤ limn→∞ V (n) . if τ = ∞ For any given τ ∈ M∞ . So P (τ2 = 2k) = = = = = P (M2k−2 = −2) + P (M2k−2 = 0) − P (M2k = −2) − P (M2k = 0) 1 (2k − 2)! (2k − 2)! 1 (2k)! (2k)! ( )2k−2 + − ( )2k + 2 k!(k − 2)! (k − 1)!(k − 1)! 2 (k + 1)!(k − 1)! k!k! (2k)! 4 4 (k + 1)k(k − 1) + (k + 1)k 2 − k − (k + 1) 4k (k + 1)!k! 2k(2k − 1) 2k(2k − 1) (2k)! 2(k 2 − 1) 2(k 2 + k) 4k 2 − 1 + − 4k (k + 1)!k! 2k − 1 2k − 1 2k − 1 (2k)! . (1+r) + Clearly (V (n) )n≥0 is nondecreasing and V (n) ≤ V ∗ for every n. On the inﬁnite coin-toss space. E 1{τ <∞} (K − Sτ (n) )+ (K − Sτ )+ ≤ lim V (n) .8. 2. its time-zero value V (n) is maxτ ∈Mn E[1{τ <∞} (K−Sτ τ ].Similarly. By bounded convergence theorem. if τ < ∞ and limn→∞ τ (n) = τ . · · · }. For an American put with (1+r) ) the same strike price K that expires at time n. then τ (n) is also a stopping time. there corresponds a reﬂected path that resides at time 2m − b. So limn V (n) exists and limn V (n) ≤ V ∗ . (ii) Proof. 1. Mn = b) = P (Mn = 2m − b) = ( )n 2 (m + n! n−b n+b 2 )!( 2 − m)! . For every path that crosses level m by time n and resides at b at time n. 5. n. we deﬁne τ (n) = . · · · . ∗ P (Mn ≥ m. we deﬁne Mn = {stopping times that takes values 0. P (τ2 ≤ 2k − 2) = 1 − P (M2k−2 = −2) − P (M2k−2 = 0). 4k (k + 1)!k! 5. 5. Then the time-zero value V ∗ of the perpetual )+ American put as in Section 5. So 1 ∗ P (Mn ≥ m. ∞.4 can be deﬁned as supτ ∈M∞ E[1{τ <∞} (K−Sτ τ ]. Remark: In the above proof.

5. 17 . Combined. v(Sn ) = Sn ≥ Sn −K = g(Sn ). pv(us)+qv(ds) 1+r (iv) Proof.4. This gives B = 4. Solve it for p. By drawing graphs of 6. s} = s = v(s). Interest-Rate-Dependent Assets 6. If the purchaser chooses to exercises the call at time n. E 1{τ <∞} ≤ lim inf n→∞ E = Sτ −K Sτ ∧n lim inf n→∞ E (1+r)τ ∧n = lim inf n→∞ E[S0 ] = S0 . we have S0 ≤ E (1+r)τ 1{τ <∞} < S0 . (ii) Proof. by Theorem 2. fB (s) = 0 if and only if B + s2 − 4s = 0. = max{s − K.2. Suppose v(s) = sp . (iv) Proof.4. B s) 2 sp 5 2p . So E Sτ −K (1+r)τ 1{τ <∞} < E Sτ (1+r)τ Sτ (1+r)τ 1{τ <∞} . then the equation s2 − 4s + √ = 0 has solution 2 ± B 4 − s and B . then the discounted risk-neutral expectation Sn −K K K of her payoﬀ is E (1+r)n = S0 − (1+r)n . by Fatou’s lemma. Since limn→∞ S0 − (1+r)n = S0 . Combined with (ii). then E E K (1+r)τ Sτ −K (1+r)τ K (1+r)n = S0 .16) is 1+r satisﬁed. (ii) 1 (1+r)n v(Sn ) = Sn (1+r)n is a martingale Proof. we conclude S0 is the least upper bound for all the prices acceptable to the buyer. this equation does not have roots. we should choose B = 4 and sB = 2 + 4 − B = 2. s (v) B Proof. we get p = 1 = 0. then we have sp = 2 2p sp + 5 or p = −1.4. The discriminant ∆ = (−4)2 − 4B = 4(4 − B). the equation has roots and for B > 4.18). √ 4 − B. To have continuous derivative. the value of the call at time zero is at least supn S0 − (iii) Proof. Suppose τ is an optimal exercise time. Since is a martingale n≥0 Sτ ∧n (1+r)τ ∧n 1{τ <∞} under risk-neutral measure. pu+qv s} = max{s − K. we must have A = 0.Proof. Then P (τ < ∞) = 0 and Sn (1+r)n 1{τ <∞} > 0. we must have −1 = − s2 .9. Plug B = s2 back into s2 − 4sB + B = 0. So 1 = 2p+1 5 + 21−p 5 . Suppose B ≤ 4. (i) Proof. Under risk-neutral probabilities.4. So for B ≤ 4. Since lims→∞ v(s) = lims→∞ (As + (iii) Proof. Clearly v(s) = s also satisﬁes the boundary condition (5. So there is no optimal time to exercise the perpetual American call. Simultaneously. Contradiction. 1{τ <∞} ≥ S0 . so equation (5. max g(s). we have Sτ −K shown E (1+r)τ 1{τ <∞} < S0 for any stopping time τ . B B B we get sB = 2.

3.−1 Proof.2 (H) V1 (T ) = (1 + R0 )(X0 − ∆0 B0. So ∆1 (H) = B2. If the outcome of ﬁrst coin toss is H. So the hedging strategy is as follows. D1 V1 = E1 [D3 V3 ] = E1 [D2 V2 ] = D2 E1 [V2 ]. If the outcome of ﬁrst coin toss is (T H)−B2. and 3 2.5. then we do nothing. then to replicate V2 at time V2 (HH)−V two. Xk = Sk − Ek [Dm (Sm − K)]Dk − Sn Bn.2 − 21 = 20 and buy 4 3 3 21 3 1. Then Ek−1 [Dk Xk ] Sn Bk.3 2 T . This makes impossible the replication of V1 . since V1 (H) = V1 (T ). then at time one. 1 (ii) 2 Proof.2 ) + ∆0 B1.m − Bn.3 (T H)−V2 (T(T)T ) = 0.3 . we must have D2 D1 E1 [V2 ] = 1 1+R1 E1 [V2 ]. (i) 18 .3 ) + ∆1 B2. In particular.2 (T ).m = Dk−1 Xk−1 .(i) Proof.2 . We do not invest in the maturity two bond.3 (HT V2 T ∆1 (T ) = B2. The reason why we do not invest in the maturity three bond is that B1. 1 1 Dm − Dm+1 En [Dm+1 Rm ] = En [Dm (1 + Rm )−1 Rm ] = En [ ] = Bn. = Ek−1 [Dk Sk − Ek [Dm (Sm − K)] − 6. So V1 = 4 V1 (H) = 1+R1 (H) V2 (HH)P (w2 = H|w1 = H) = 21 . V1 (H)−V ) 2 So ∆0 = B1. because at time two.m Sn −1 = Dk−1 [Sk−1 − Ek−1 [Dm (Sm − K)]Dk−1 − Bk−1.m+1 .4. then short 3 shares of the maturity three bond and invest the income into the money market account.m Bk. (iii) Proof. Let X0 = 21 .3 (T )(= 4 ) and the portfolio will therefore have the same value at time one regardless the 7 outcome of ﬁrst coin toss. the value of the bond is its face value and our portfolio will therefore have the same value regardless outcomes of coin tosses. V1 (H) = (1 + R0 )(X0 − ∆0 B0. V1 (T ) = 0. 6. we must have V2 = (1 + R1 )(X1 − ∆1 B1. Suppose we buy ∆1 share of the maturity three bond at time one.2 ) + ∆0 B1. Proof.2 (H)−B1 (T(T ) = 4 . This makes impossible the replication of V2 .m Dk ] Bn.m for n ≤ k ≤ m. To replicate the value V1 .3 (H) = B1. Dn Dn Dn 6.2 ) + ∆0 B1.2 share of the maturity two bond. The hedging strategy is therefore to borrow 4 B0.3 (HH)−B2 (HT ) ) = − 2 .m Sn = Dk−1 Sk−1 − Ek−1 [Dm (Sm − K)] − Ek−1 [Ek [Dm ]] Bn. the value of our portfolio is X1 = (1 + R0 )(X0 − ∆B0.m ] Bn. Suppose we buy ∆0 shares of the maturity two bond.

7.m+1 ] = En−1 = = Bn.m+1 Zn−1.m1 Dn−1 Bn−1. 2(1 + rn (0)) 19 . Bn.m ] −1 −1 = En−1 [Bn. The agent enters the forward contract at no cost.m .m So if the agent sells this contract at time n + 1. ψn+1 (0) = E[Dn+1 Vn+1 (0)] Dn = E[ 1{#H(ω1 ···ωn+1 )=0} ] 1 + rn (0) Dn = E[ 1{#H(ω1 ···ωn )=0} 1{ωn+1 =T } ] 1 + rn (0) 1 Dn E[ ] = 2 1 + rn (0) ψn (0) = . the contract has the value En+1 [Dm (Sm − K)] = n.m+1 Dn Bn−1. 6.m Sn+1 − KBn+1.m+1 Dn−1 En−1 [Dm − Dm+1 ] Bn−1. By (i).m − Bn.m+1 Dn−1 Dn −1 −1 En−1 [Dn En [Dm ] − Dn En [Dm+1 ]] Bn−1.m Sn Bn+1.m+1 = Bn−1.m+1 (Bn.m+1 = Fn−1. (i) Proof. then m+1 En−1 [Fn.6. he will receive a cash Sn Bn+1.m Sn (1+r)m−n−1 1 (1+r)m−n + r)m−n Sn Sn+1 − (1 + r)m−n−1 Sn+1 − (1 Sm Sm ] + (1 + r)m En [ ] = (1 + r)m En1 [ m (1 + r) (1 + r)m = F utn+1.m .m Bn. the cash ﬂow generated at time n + 1 is (1 + r)m−n−1 Sn+1 − = = (1 + r) m−n−1 Sn Bn+1. Suppose 1 ≤ n ≤ m.m+1 )Zn.m − Bn−1.m . At time n + 1. 6.m = Sn+1 − ﬂow of Sn+1 − (ii) Proof. Proof.Proof.m − F utn. He is obliged to buy certain asset at time m at the strike price K = F orn.m Bn.m = BSn .m+1 Bn.m −1 Bn.

n. ψn+1 (k) = E Dn 1{#H(ω1 ···ωn+1 )=k} 1 + rn (#H(ω1 · · · ωn )) Dn Dn = E 1{#H(ω1 ···ωn )=k} 1{ωn+1 =T } + E 1{#H(ω1 ···ωn )=k} 1{ωn+1 =H} 1 + rn (k) 1 + rn (k − 1) = = Finally. we have used the independence of ωn+1 and (ω1 . we can use the fact that P (ωn+1 = H|ω1 . ψn+1 (n + 1) = E[Dn+1 Vn+1 (n + 1)] = E Dn ψn (n) 1{#H(ω1 ···ωn )=n} 1{ωn+1 =H} = . 2(1 + rn (k)) 2(1 + rn (k − 1)) Remark: In the above proof. ωn ).and down-factor un and dn .9 and un −dn un −dn notes on page 39). 20 . we have E[Xf (ωn+1 )] = E[X E[f (ωn+1 )|Fn ]] = E[X(pn f (H) + qn f (T ))]. In case the binomial model has stochastic up. solution of Exercise 2. 2. · · · . · · · .For k = 1. ωn ) = pn and P (ωn+1 = T |ω1 . ωn ). · · · . · · · . · · · . where pn = 1+rn −dn and qn = u−1−rn (cf. 1 + rn (n) 2(1 + rn (n)) 1 E[Dn Vn (k)] 1 E[Dn Vn (k − 1)] + 2 1 + rn (k) 2 1 + rn (k − 1) ψn (k) ψn (k − 1) + . ωn ) = qn . This is guaranteed 1 by the assumption that p = q = 2 (note ξ ⊥ η if and only if E[ξ|η] = constant). Then for any X ∈ Fn = σ(ω1 .

(i) Proof. We deﬁne a mapping φ from A to Ω as follows: φ(ω1 ω2 · · · ) = ω1 ω3 ω5 · · · . which is uniformly distributed on [0. Z is a standard normal random variable on the coin-toss space (Ω∞ . (i) Proof.2. Then An ↓ A as n → ∞. For any A and B. (ii) Proof. So P (A) = lim P (An ) = lim [P (ω1 = ω2 ) · · · P (ω2n−1 = ω2n )] = lim (p2 + (1 − p)2 )n . By Example 1.1. Deﬁne Xn = n Yi i=1 2i . If at least one of them is inﬁnite. 1. we can construct a random variable X on the coin-toss space. So Zn = N −1 (Xn ) → Z = N −1 (X) for every ω. However. 1. Then A = ∪∞ An is an inﬁnite n=1 ∞ set and therefore P (A) = ∞. P (B) = P ((B − A) ∪ A) = P (B − A) + P (A) ≥ P (A). if ωi = T . we have limn→∞ (p2 + (1 − p)2 )n = 0. then A ∪ B is also inﬁnite. which means P (A) = 0.3. which means in particular that A is uncountably inﬁnite. (i) Proof. then A ∪ B is also ﬁnite. Then P (Z ≤ a) = P (X ≤ N (a)) = N (a) for any real number a. (ii) Proof. So P (A ∪ B) = 0 = P (A) + P (B).4. Proof. (ii) Proof. even if An ’s are not disjoint. 1. P (A) ≤ P (An ) implies P (A) ≤ limn→∞ P (An ) = 0. For the strictly increasing and continuous function N (x) = −∞ √1 e− 2 dξ. Similarly. ω2n−1 = ω2n }. x ξ2 where Yi (ω) = 1. Then Xn (ω) → X(ω) for every ω ∈ Ω∞ where X is deﬁned as in (i). F∞ . P ). 1. So P (A) = n=1 P (An ).5.5. Clearly Zn depends only on the ﬁrst n coin tosses and {Zn }n≥1 is the desired sequence. if ωi = H 0. we let 2π Z = N −1 (X). n=1 1 To see countable additivity property doesn’t hold for P . n→∞ n→∞ n→∞ Since p2 + (1 − p)2 < 1 for 0 < p < 1. we can prove P (∪N An ) = n=1 P (An ). · · · . 1].2 Stochastic Calculus for Finance II: Continuous-Time Models 1. So 0 ≤ P (A) ≤ 0. So P (A ∪ B) = N ∞ = P (A) + P (B). Then φ is one-to-one and onto. This implies P (A) = 0. So the cardinality of A is the same as that of Ω.e. General Probability Theory 1.2. P (An ) = 0 for each n. Let An = {ω = ω1 ω2 · · · : ω1 = ω2 . if both of them are ﬁnite. 21 . i. Clearly P (∅) = 0. let An = { n }.

By the change of variable formula. Since |fn (x)| ≤ (ii) Proof.X(ω)) (x)dxdP (ω) = Ω 0 0 Ω 1[0. f (x) = limn→∞ fn (x) = 0. 2nπ u2 σ 2 2 ≥ euµ = φ(E{X}). The left side of this equation equals to X(ω) dxdP (ω) = Ω 0 Ω X(ω)dP (ω) = E{X}. by the information given by the problem. 22 . ∞ (1 0 − F (x))dx.X(ω)) (x)dP (ω)dx. (i) Proof.7. E{φ(X)} = E{euX } = euµ+ 1. The right side of the equation equals to ∞ ∞ ∞ 1{x<X(ω)} dP (ω)dx = 0 Ω 0 P (x < X)dx = 0 (1 − F (x))dx.Proof. since {fn }n≥1 doesn’t increase to 0. E{euX } = = = = = (x−µ)2 1 eux √ e− 2σ2 dx σ 2π −∞ ∞ (x−µ)2 −2σ 2 ux 1 2σ 2 √ e− dx −∞ σ 2π ∞ [x−(µ+σ 2 u)]2 −(2σ 2 uµ+σ 4 u2 ) 1 2σ 2 √ e− dx −∞ σ 2π ∞ [x−(µ+σ 2 u)]2 σ 2 u2 1 2σ 2 √ e− euµ+ 2 dx −∞ σ 2π ∞ euµ+ σ 2 u2 2 (ii) Proof. −∞ (iii) Proof. So we must have n→∞ lim fn (x)dx = 1. fn (x)dx = ∞ x2 ∞ √1 e− 2 −∞ 2π dx = 1. ∞ −∞ √1 . So E{X} = 1. (i) Proof. we have ∞ ∞ 1[0. This is not contradictory with the Monotone Convergence Theorem.6. First.

In particular. P (Ω) = 2P ([ 2 . since it’s standard ˆ (1A Z)ZdP = 1A dP = P (A). If {Ai }∞ is a sequence of disjoint Borel subsets of [0.3). 1]) = 1. 1 ). (ii) Proof. 2 2 2 1.13. the desired equality also holds for simple functions. 1 Meanwhile. then the desired equality is just (1. i. (i) 23 . then by the Monotone Convergence Theorem. If P (A) = 0. Z = eθY − 2 = eθ(X+θ)− 2 = e 2 +θX = Z −1 . By the linearity of Lebesgue integral.9. 1]) = 0. ϕ (t) = limn→∞ E{Yn } = E{limn→∞ Yn } = E{XetX }. So P = P normal under P . Proof.e. Second. 1. By (1. where each Bi is a Borel subset of R. Borel-measurable function g is the limit of an increasing sequence of simple functions. Let A = [0. we have |Yn | = |XeθX | ≤ |X|e2t|X| for n suﬃciently large.1. First.8. Since any nonnegative.10. ϕ (t) = limn→∞ E{Yn } = E{limn→∞ Yn } = E{XetX }. E[et|X| ] = E[etX 1{X≥0} ] + − E[e−(−t)X 1{X<0} ] < ∞ for every t ∈ R. 1].9. θ θ θ ˆ ˆ ˆ Proof. i=1 P (∪∞ Ai ) equals to i=1 n ∞ + − + 1∪∞ Ai ZdP = i=1 n→∞ lim 1∪n Ai ZdP = lim i=1 n→∞ 1∪n Ai ZdP = lim i=1 n→∞ ZdP = i=1 Ai i=1 P (Ai ). where B is a Borel subset of R.11. then P (A) = (iii) Proof. Proof. tX sn X (ii) Proof. The last inequality is by X ≥ 0 and the Proof. and hence smaller than 2t for n suﬃciently large.12. for any A ∈ F. So by the Dominated Convergence Theorem. Since E[etX 1{X≥0} ] + E[e−tX 1{X<0} ] = E[etX ] < ∞ for every t ∈ R. 1. 1. 2 1. So by the Dominated Convergence Theorem.1] dP = 2P (A ∩ [ 1 . X is standard normal under P . we have E[|X|et|X| ] < ∞ for every t ∈ R. (i) Proof. similar to (i). E{euY } = E{euY Z} = E{euX+uθ e−θX− θ2 2 A ZdP = 2 1 A∩[ 2 . 2 } = euθ− θ2 2 E{e(u−θ)X } = euθ− θ2 2 e (u−θ)2 2 =e u2 2 . So. (i) −e = |XeθX | = XeθX ≤ Xe2tX . Similarly. the desired equality can be proved by the Monotone Convergence Theorem. g of the n form g(x) = i=1 1Bi (x). P (A) = A ZdP = ˆ ˆ . So P is a probability measure. |Yn | = e t−sn fact that θ is between t and sn .1).9. If g(x) is of the form 1B (x).

Proof. Proof. )} = {X + θ ∈ B(y. )} = {X ∈ B(y − θ. Similar to (i). h(u)du. )}. 0 1. 1. P (Y ≤ a) = {g(X)≤a} h(g(X))g (X) dP = f (X) g −1 (a) −∞ h(g(x))g (x) f (x)dx = f (x) a −∞ g −1 (a) h(g(x))dg(x). −∞ By the change of variable formula.15. P (X ≤ a) = {X≤a} λ −(λ−λ)X e dP = λ a 0 λ −(λ−λ)x −λx e λe dx = λ a λe−λx dx = 1 − e−λa . (ii) Proof.14. we have E{Z} = E h(g(X))g (X) f (X) ∞ = −∞ h(g(x))g (x) f (x)dx = f (x) ∞ ∞ h(g(x))dg(x) = −∞ −∞ h(u)du = 1. P (A) P (A) √ √ is approximately e− Y 2 (ω) ¯ 2 Y 2 (ω)−X 2 (ω) ¯ ¯ 2 (X(ω)+θ)2 −X 2 (ω) ¯ ¯ 2 θ2 2 2π e 2π X 2 (ω) ¯ − 2 = e− = e− ω = e−θX(¯ )− . )} = {Y ∈ B(y. Clearly Z ≥ 0. (ii) 1 P (X ∈ B(x. (iii) Proof. So Y has density h under 24 . Furthermore. By (i)-(iii). (i) Proof. P (Ω) = λ −(λ−λ)X λ e dP = λ λ ∞ ∞ e−(λ−λ)x λe−λx dx = 0 0 λe−λx dx = 1. (ii) Proof. (i) Proof. )) = 1 u2 x+ 2 1 √ e− 2 x− 2 2π du is approximately x 1 √1 e− 2 2π 2 · = X √1 e− 2π 2 (ω) ¯ 2 . (iv) Proof. {X ∈ B(x. the last equation above equals to P.

Proof. So P (X ≤ a) is either 0 or 1. T H})P ({HH. T H} ∩ {HH. HT }) = P ({HT }) = 4 . HT }) = 9 9 9 4 2 6 9 + 9 = 9 . (iii) 1 Proof. HT }. HH}}. So P (X = x0 ) = lim P (x0 − n→∞ 1 1 < X ≤ x0 ) = lim (P (X ≤ x0 ) − P (X ≤ x0 − )) = 1. σ(X) = {∅. Indeed. (i) Proof. E{V W } = E{−X 2 sin θ cos θ +XY cos2 θ −XY sin2 θ +Y 2 sin θ cos θ} = − sin θ cos θ + 0 + 0 + sin θ cos θ = 0. T H} ∩ {HH. T H}.2. (ii) Proof. So we have 4 4 1 4 + 1 4 1 = 2. 2.2. HT }).3. So σ(X) and σ(S1 ) are independent under P . Hence σ(X) and σ(S1 ) are not independent under P . Ω.1. For any real number a. HT }). P ({HT. {T H. {HT. P ({HT. W ) are jointly Gaussian. Since lima→∞ P (X ≤ a) = 1 and lima→∞ P (X ≤ a) = 0. Proof. we have {X ≤ a} ∈ F0 = {∅. 2. T H}) = 2 + 2 = 4 and P ({HH. we can ﬁnd a number x0 such that P (X ≤ x0 ) = 1 and P (X ≤ x) = 0 for any x < x0 . so to prove their independence it suﬃces to show they are uncorrelated. Similarly. n→∞ n n 2. knowing the value of X will aﬀect our opinion on the distribution of S1 . T H} ∩ {HH. T H} ∩ {HH. (iv) 2 Proof. HT }) = P ({HH}) + P ({HT }) = 1 + 1 = 2 . HT }) = P ({HT }) = 9 . {T T. T H}) = P ({HT }) + P ({T H}) = 1 and P ({HH. Ω}. we can work on other elements of σ(X) and σ(S1 ) and show that P (A ∩ B) = P (A)P (B) for any A ∈ σ(X) and B ∈ σ(S1 ). T H})P ({HH. So P ({HT.4. Ω. We note (V. (i) 25 . {HH. (v) Proof. T T }}. Because S1 and X are not independent under the probability measure P . P ({HT. σ(S1 ) = {∅. HT }) = P ({HT. HT }) = P ({HT. Information and Conditioning 2. P ({HT. P ({HT.

The density fX (x) of X can be obtained by fX (x) = fX. Since fX. Therefore X and Y cannot 2|x| + y − (2|x|+y)2 2 √ dy = e 2π {ξ≥|x|} ξ2 x2 ξ 1 √ e− 2 dξ = √ e− 2 . y) = fX (x)fY (y). E{euX+vY } = E{euX+vXZ } = E{euX+vXZ |Z = 1}P (Z = 1) + E{euX+vXZ |Z = −1}P (Z = −1) 1 1 = E{euX+vX } + E{euX−vX } 2 2 (u−v)2 1 (u+v)2 = [e 2 + e 2 ] 2 uv u2 +v 2 e + e−uv = e 2 .Y (x. 2π 2π The density fY (y) of Y can be obtained by fY (y) = = = = = = fXY (x. 2 (ii) Proof. Let u = 0. y)dx 2|x| + y − (2|x|+y)2 2 e 1{|x|≥−y} √ dx 2π ∞ 0∧y 2x + y − (2x+y)2 −2x + y − (−2x+y)2 2 2 √ √ e dx + e dx 2π 2π 0∨(−y) −∞ 2x + y − (2x+y)2 2 √ e dx + 2π 0∨(−y) ∞ ξ2 ξ ξ √ e− 2 d( ) 2 2 2π |y| 1 − y2 √ e 2. (iii) Proof.Y (x. Proof.Proof. 2. 2π ∞ 0∨(−y) ∞ 2x + y − (2x+y)2 2 √ e d(−x) 2π So both X and Y are standard normal random variables. E{euX } = e be independent.5. X and Y are not 26 . So E{euX+vY } = E{euX }E{evY }. y)dy = {y≥−|x|} u2 2 and E{evY } = e v2 2 .

E{Y |X} = α∈{a. E{Z|X} − E{Y |X} = E{Z − Y |X} = E{X|X} = X. σ(X) = {∅.d} E{Y 1{X=α} } 1{X=α} . {a.6. (ii) Proof.independent. 2.c. P (X = α) (iv) Proof. y)dxdy 2|x| + y − (2|x|+y)2 2 xy1{y≥−|x|} √ dxdy e 2π −∞ −∞ ∞ ∞ 2|x| + y − (2|x|+y)2 2 dy xdx y √ e 2π −∞ −|x| ∞ ∞ ξ2 ξ (ξ − 2|x|) √ e− 2 dξ xdx 2π |x| −∞ ∞ ∞ = = = = xdx( −∞ ∞ |x| ∞ ξ2 ξ2 e− 2 √ e− 2 dξ − 2|x| √ ) 2π 2π x2 = 0 ∞ x x ξ2 ξ2 √ e− 2 dξdx + 2π 0 ∞ x −∞ −x ξ2 ξ2 √ e− 2 dξdx 2π 0 = 0 xF (x)dx + −∞ xF (−x)dx. Ω.b. E{Z|X} = X + E{Y |X} = X + α∈{a. Proof. if we set F (t) = ∞ ∞ u2 − u2 √ e 2 t 2π ∞ du.Y (x. P (X = α) (iii) Proof. (i) ∞ 0 xF (x)dx − ∞ 0 uF (u)du = 0.c.7.d} E{Y 1{X=α} } 1{X=α} . b}. 27 . So E{XY } = 2. {c. d}}.b. we have E{XY } = −∞ ∞ −∞ ∞ xyfX. However.

Then it’s easy to see σ(X) = {∅. It suﬃces to prove the more general case. We deﬁne X(ω) = ω1 and f (x) = 1{odd integers} (x). No. 28 .10. Ω} ⊂ σ(f (X)) ⊂ σ(X). Proof.Y (x. (ii) Proof. y) dy1B (x)fX (x)dx fX (x) y1B (x)fX. {ω : ω1 = 1} ∪ {ω : ω1 = 3} ∪ {ω : ω1 = 5}. · · · . 2. 2. Let µ = E{Y − X} and ξ = E{Y − X − µ|G}.Y (x.9. Consider the dice-toss space similar to the coin-toss space. E{Y2 ξ} = E{(Y − E{Y |X})ξ} = E{Y ξ − E{Y |X}ξ} = E{Y ξ} − E{Y ξ} = 0. So {∅.8. Note ξ is G-measurable. 2. σ(f (X)) ⊂ σ(X) is always true. Proof.Proof. 2. with ωi ∈ {1. Ω. g(X)dP A ∞ = E{g(X)1B (X)} = −∞ g(x)1B (x)fX (x)dx yfX. {ω : ω1 = 2} ∪ {ω : ω1 = 4} ∪ {ω : ω1 = 6}}. {ω : ω1 = 1}. (i) Proof. Then a typical element ω in this space is an inﬁnite sequence ω1 ω2 ω3 · · · . we have V ar(Y − X) = = = = E{(Y − X − µ)2 } E{[(Y − E{Y |G}) + (E{Y |G} − X − µ)]2 } V ar(Err) + 2E{(Y − E{Y |G})ξ} + E{ξ 2 } V ar(Err) + 2E{Y ξ − E{Y |G}ξ} + E{ξ 2 } = V ar(Err) + E{ξ 2 } ≥ V ar(Err). y)dxdy = = = E{Y 1B (X)} = E{Y IA } = A Y dP. 6} (i ∈ N). For any σ(X)-measurable random variable ξ. {ω : ω1 = 6}} and σ(f (X)) equals to {∅. and each of these containment is strict. Ω. · · · .

then g is a Borel function.. 3. By (i). i. This is a contradiction with limn→∞ j=0 (Wtj+1 − Wtj )2 = T a. 3. (ii) Proof. such that P (A) > 0 and for every ω ∈ A.5. by an 29 . we can ﬁnd a Borel function g such that E{Y |X} = g(X).4. Then for every ω ∈ A.e. 3. E[Wt2 − Ws |Fs ] = E[(Wt − Ws )2 + 2Wt Ws − 2Ws |Fs ] = t − s + 2Ws E[Wt − Ws |Fs ] = t − s. (i) Proof. 2 2 Proof. n−1 n−1 j=0 (Wtj+1 n−1 1 2 2 1 2 2 1 2 2 1 2 u2 (3σ 4 u+ − Wtj )2 → 0 as n → ∞. Wn = i=1 an 1{X∈Bi } = i=1 an 1Bi (X) = gn (X).1.11. ϕ(3) (u) = 2σ 4 ue 2 σ u +(σ 2 +σ 4 u2 )σ 2 ue 2 σ u = e 2 σ u (3σ 4 u+σ 4 u2 ).2. Wu2 − Wu1 is independent of Ft . (i) Proof. and ϕ(4) (u) = σ 2 ue 2 σ 1 2 2 σ 4 u2 ) + e 2 σ u (3σ 4 + 2σ 4 u). 3. since limn→∞ max0≤k≤n−1 |Wtk+1 − Wtk |(ω) = 0. So in particular. j=0 (Wtj+1 −Wtj )2 (ω) ≤ max0≤k≤n−1 |Wtk+1 −Wtk |(ω) j=0 |Wtj+1 −Wtj |(ω) → n−1 0. Proof. By taking upper limits on i both sides of Wn = gn (X). (ii) Proof. Deﬁne g = lim sup gn . Brownian Motion 3. where An ’s belong to σ(X) and are disjoint. Proof.3.2. Assume there exists A ∈ F. we get W = g(X). Note j=0 (Wtj+1 − Wtj )3 ≤ max0≤k≤n−1 |Wtk+1 − Wtk | argument similar to (i). Each Wn Kn can be written in the form i=1 an 1An . limn j=0 |Wtj+1 − Wtj |(ω) < n−1 n−1 ∞. So E[(X − µ)4 ] = ϕ(4) (0) = 3σ 4 . We have Ft ⊂ Fu1 and Wu2 − Wu1 is independent of Fu1 . 3. Note E{Y |X} is σ(X)-measurable. So each An can be i i i i Kn Kn n n n n written as {X ∈ Bi } for some Borel subset Bi of R. i i Kn n where gn (x) = i=1 an 1Bi (x). We can ﬁnd a sequence {Wn }n≥1 of σ(X)-measurable simple functions such that Wn ↑ W .s.

E[f (St )|Fs ] = E[f (S0 eσXt )|Fs ] with µ = ∞ v σ. Xs . ∞ −∞ f (y)p(t − s. y)dy with p(τ. x. So by (i).Proof. (i) Proof. E[f (Xt )|Ft ] = E[f (Wt − Ws + a)|Fs ]|a=Ws +µt = E[f (Wt−s + a)]|a=Ws +µt ∞ = −∞ ∞ f (x + Ws + µt) e− e− 2(t−s) 2π(t − s) dy x2 dx (y−Ws −µs−µ(t−s))2 2(t−s) = −∞ f (y) 2π(t − s) = g(Xs ).7. y) = √ 1 e− 2πτ (y−x−µτ )2 2τ . S0 )). 1 2π(t − s) 1 2π(t − s) e − E[f (St )|Fs ] = −∞ S0 eσy =z ∞ f (S0 eσy ) f (z) 0 ∞ e− (y−Xs −µ(t−s))2 2(t−s) dy dz σz = S ( 1 ln z − 1 ln s −µ(t−s))2 σ S0 σ S0 2 = 0 ∞ f (z) e − (ln z −v(t−s))2 Ss 2σ 2 (t−s) σz 2π(t − s) dz = 0 f (z)p(t − s.6. z)dz g(Ss ). = 3. (i) 30 . 3. E[e−rT (ST − K)+ ] ∞ = e −rT 1 K σ (ln S0 (S0 e −(r− 1 σ 2 )T ) 2 ∞ (r− 1 σ 2 )T +σx 2 e− 2T − K) √ dx 2πT e− 2 − K) √ dy 2π ∞ 1 √ K S0 y2 x2 = = = = e −rT σ 1 √ T (S0 e (ln K S0 √ (r− 1 σ 2 )T +σ T y 2 −(r− 1 σ 2 )T ) 2 S0 e− 2 σ ∞ 1 2 ∞ T σ 1 √ T (ln K S0 −(r− 1 σ 2 )T ) 2 √ √ y2 1 √ e− 2 +σ T y dy − Ke−rT 2π σ T (ln −(r− 1 σ 2 )T ) 2 y2 1 √ e− 2 dy 2π S0 σ 1 √ T (ln K S0 −(r− 1 σ 2 )T )−σ 2 T ξ2 1 √ e− 2 dξ − Ke−rT N 2π S 1 1 √ (ln 0 + (r − σ 2 )T ) K 2 σ T Ke −rT N (d+ (T. Ss . S0 )) − Ke −rT N (d− (T. So E[f (Xt )|Fs ] = (ii) Proof.

σ2 2 > 0. t→∞ 2 σ2 2 )t ∧ τm }] = t→∞ since on the event {τm = ∞}. 1 σ2 )τm ] = 1. E[τm ] = (v) Proof. ∞). So we get 2 √ 2 E[e−ατm ] = E[e−ατm 1{τm <∞} ] = e−σm = emµ−m 2α+µ .8. By optional stopping theorem. by monotone increasing theorem. E[τm e−ατm ] < ∞ since xe−αx is bounded on [0. E[eσm−(σµ+ σ2 2 )τm 1{τm <∞} ] = E[ lim Zt∧τm ] = lim E[Zt∧τm ] = 1. we get σµ + e 2 σm−( σ 2 −m 2α + µ2 . Zt∧τm ≤ +σµ)t → 0 as t → ∞. we get 2 √ 2 E[e−ατm ] = e−σm = emµ−m 2α+µ . by bounded convergence theorem. we have P (τm < ∞) = 1. Then Zt∧τm ≤ eσm and on the event {τm = ∞}. Let (iv) Proof. ϕn (u) = = E[e e 1 u √n Mnt. that is. By σ > −2µ > 0.Proof. Set α = σµ + σ . 3. So by an argument similar to Exercise 1.8. E[1{τm <∞} Zτm ] = E[ lim Zt∧τm ] = lim E[Zt∧τm ] = 1. If µ ≥ 0 and σ > 0. Zt∧τm ≤ eσm . (iii) Proof. E[eσm−(σµ+ 2 2 σ ↓ 0. m µ < ∞ for µ > 0. E (ii) Zt Zs |Fs = E[exp{σ(Wt − Ws ) + σµ(t − s) − (σµ + σ2 2 )(t − s)}] = 1. Let σµ + σ = α. E[exp{σXt∧τm − (σµ + 1. We note for α > 0. (i) Proof. Therefore.n n ])nt = (e u − √n u √ n pn + e −e u − √n σ √ n qn )nt nt u √ n r n +1−e σ √ n σ − √n e −e +e r −n − 1 + e e σ √ n σ − √n . E[Zt∧τm ] = E[Z0 ] = 1. then we get P (τm < ∞) = e2µm = e−2|µ|m < 1. Zt∧τm ≤ eσm− 2 σ t → 0 as t → ∞. Proof. (ii) 31 .n ] = (E[e σ − √n u √ X1. By bounded convergence theorem. E[e−ατm ] is diﬀerentiable and √ ∂ 2 E[e−ατm ] = −E[τm e−ατm ] = emµ−m 2α+µ ∂α Let α ↓ 0. Therefore. t→∞ t→∞ 2 Let σ ↓ −2µ.

cosh ux + = = (rx2 + 1 − cosh σx) sinh ux sinh σx 2 2 (rx2 + 1 − 1 − σ 2x + O(x4 ))(ux + O(x3 )) u2 x2 1+ + O(x4 ) + 2 σx + O(x3 ) 2 (r − σ )ux3 + O(x5 ) u2 x2 2 1+ + + O(x4 ) 2 σx + O(x3 ) 2 (r − σ )ux3 (1 + O(x2 )) u2 x2 2 = 1+ + + O(x4 ) 2 σx(1 + O(x2 )) u2 x2 rux2 1 = 1+ + − σux2 + O(x4 ). ϕ So. σ2 2 ) and variance σ 2 t.n r ] = ϕn (u) → 1 tu2 + t( σ − σ )u. Stochastic Calculus 4. ln ϕ 1 x2 1 x2 (u) = e ux rx2 + 1 − e−σx eσx − e−σx −e −ux rx2 + 1 − eσx eσx − e−σx t x2 . ln ϕ 1 x2 = t u2 x2 ru 2 σux2 t u2 x2 ru 2 σux2 ln(1 + + x − + O(x4 )) = 2 ( + x − + O(x4 )).1.n )n converges to a Gaussian r σ σ random variable with mean t( σ − 2 ) and variance t. (u) = = = = t x2 t x2 t x2 t x2 (rx2 + 1)(eux − e−ux ) + e(σ−u)x − e−(σ−u)x eσx − e−σx (rx2 + 1) sinh ux + sinh(σ − u)x ln sinh σx 2 (rx + 1) sinh ux + sinh σx cosh ux − cosh σx sinh ux ln sinh σx 2 (rx + 1 − cosh σx) sinh ux ln cosh ux + .Proof. By the one-to-one 2 2 variable with mean t(r − 4. Hence ( √n Mnt. x2 2 σ 2 x 2 σ 2 2 So limx↓0 ln ϕ 1 correspondence between distribution and moment generating function. sinh σx ln (iii) Proof. 32 . ( √n Mnt. and E[e 1 u √n Mnt.n )n converges to a Gaussian random 1 x2 (u) = t( u + 2 ru σ − σu 2 ). 2 σ 2 (iv) Proof.

So E[I(t)−I(s)|Ft ] = ∆tk E[Mt − Ms |Fs ] = 0. we must have I(tk ) − I(tl ) ⊥ Ftl . Then tm ≤ s < tm+1 ≤ tk ≤ t < tk+1 . Furthermore. Combined. m < k. Hence E[I(t) − I(s)|Fs ] k−1 = j=m+1 E[∆tj E[Mtj+1 − Mtj |Ftj ]|Fs ] + E[∆tk E[Mt − Mtk |Ftk ]|Fs ] + ∆tm E[Mtm+1 − Ms |Fs ] = 0. E[I(tk ) − I(tl )] = j=l ∆tj E[Wtj+1 − Wtj ] = 0 and V ar(I(tk ) − I(tl )) = (iii) Proof. tm+1 ) for some m. (ii) Proof. We use the notation in (i) and it is clear I(tk ) − I(tl ) is normal since it is a linear combination of k−1 independent normal random variables. we assume s ∈ [tm . m = k. For s < t. we conclude I(t) is a martingale. So k−1 I(t) − I(s) = j=m ∆tj (Mtj+1 − Mtj ) + ∆tk (Ms − Mtk ) − ∆tm (Ms − Mtm ) k−1 = j=m+1 ∆tj (Mtj+1 − Mtj ) + ∆tk (Mt − Mtk ) + ∆tm (Mtm+1 − Ms ). Fix t and for any s < t. Case 2.2. 4.Proof. E[I(t) − I(s)|Fs ] = E[I(t) − I(s)] = 0. t s k−1 j=l ∆2j V ar(Wtj+1 − Wtj ) = t k−1 j=l ∆2j (tj+1 − tj ) = t tk tl ∆2 du. 33 . We follow the simpliﬁcation in the hint and consider I(tk ) − I(tl ) with tl < tk . (iv) Proof. u E[I 2 (t) − 0 ∆2 du − (I 2 (s) − u 0 t ∆2 du)|Fs ] u = E[I 2 (t) − I 2 (s) − s ∆2 du|Fs ] u t = E[(I(t) − I(s))2 + 2I(t)I(s) − 2I 2 (s)|Fs ] − s ∆2 du u t = E[(I(t) − I(s))2 ] + 2I(s)E[I(t) − I(s)|Fs ] − s t t ∆2 du u = s ∆2 du + 0 − u s ∆2 du u = 0. (i) Proof. for s < t. Case 1. Since ∆t is a non-random process and Wtj+1 − Wtj ⊥ Ftj ⊃ Ftl for j ≥ l. Then I(tk ) − I(tl ) = k−1 j=l ∆tj (Wtj+1 − Wtj ). Then I(t)−I(s) = ∆tk (Mt −Mtk )−∆tk (Ms −Mtk ) = ∆tk (Mt −Ms ).

since Ws ∈ Fs . we note 2 2 B tj +tj+1 − Btj t − 2 2 tj+1 − tj 2 2 = E j 2 B tj +tj+1 − Btj 2 − j 2 = j. 4 (ii) E[(I(t) − I(s))4 ] = E[Ws ]E[(Wt − Ws )4 ] = 3s · 3(t − s) = 9s(t − s) and 3E[(I(t) − I(s))2 ] = 2 2 3E[Ws ]E[(Wt − Ws ) ] = 3s(t − s). j 2 The ﬁrst term converges in L2 (P ) to E j 2 T 0 Bt dBt .4. 4.3. (iii) E[I(t) − I(s)|Fs ] = Ws E[Wt − Ws |Fs ] = 0. Exercise 3. Øksendal [3]. (Cf. (i) I(t) − I(s) is not independent of Fs . Proof.) We ﬁrst note that B tj +tj+1 (Btj+1 − Btj ) j 2 = j B tj +tj+1 (Btj+1 − B tj +tj+1 ) + Btj (B tj +tj+1 − Btj ) + 2 2 2 (B tj +tj+1 − Btj )2 .9. 2 1≤j≤n 34 . k E B tj +tj+1 − Btj 2 − tj+1 − tj 2 2 B tk +tk+1 − Btk 2 − tk+1 − tk 2 = j E B 2j+1 −tj − t 2 tj+1 − tj 2 = j 2· tj+1 − tj 2 2 ≤ T max |tj+1 − tj | → 0. I(t) − I(s) = ∆0 (Wt1 − W0 ) + ∆t1 (Wt2 − Wt1 ) − ∆0 (Wt1 − W0 ) = ∆t1 (Wt2 − Wt1 ) = Ws (Wt − Ws ). For the second term. Since E[(I(t) − I(s))4 ] = 3E[(I(t) − I(s))2 ]. I(t) − I(s) is not normally distributed. (iv) t s E[I 2 (t) − 0 ∆2 du − (I 2 (s) − u 0 ∆2 du)|Fs ] u t 2 Wu du|Fs ] = E[(I(t) − I(s))2 + 2I(t)I(s) − 2I 2 (s) − 2 = E[Ws (Wt − Ws )2 + 2Ws (Wt − Ws ) − 2 2 = Ws (t − s) − Ws (t − s) s 2 Ws (t − s)|Fs ] 2 2 = Ws E[(Wt − Ws )2 ] + 2Ws E[Wt − Ws |Fs ] − Ws (t − s) = 0.4. Proof.

t 0 σs dWs + t (αs 0 − 1 2 2 σs )ds}. E[WT ] = 6 T 0 1 2 · 4 · 3Wt2 d W t 4 = 4Wt3 dWt + 6Wt2 dt. dWt6 = 6Wt5 dWt + 1 · 6 · 5Wt4 dt. Since (xp ) = pxp−1 .8. we have p d(St ) 1 p−1 p−2 = pSt dSt + p(p − 1)St d S t 2 1 p−1 p−2 2 = pSt (αSt dt + σSt dWt ) + p(p − 1)St σ 2 St dt 2 1 p = St [pαdt + pσdWt + p(p − 1)σ 2 dt] 2 p−1 2 p = St p[σdWt + (α + σ )dt]. T 0 Wt5 dWt + 15 T 0 6 Wt4 dt. (i) Proof. So WT = 4 T 0 Wt3 dWt + 6 T 0 Wt2 dt.7. Proof. 2 4. d ln St = dSt 1d S t 2St dSt − d S − 2 = 2 St 2 St 2St t = 2 2 2St (αt St dt + σt St dWt ) − σt St dt 1 2 = σt dWt + (αt − σt )dt. So T B tj +tj+1 (Btj+1 − Btj ) → j 2 Bt dBt + 0 T 1 2 = BT in L2 (P ). 2 2St 2 (ii) Proof. So WT = 6 2 3 15T . tdt = 3T 2 . dWt4 = 4Wt3 dWt + (ii) 4 Proof. (iii) 6 Proof. Hence E[WT ] = 15 T 0 3t2 dt = 4.6. 2 2 4.2 4 2 2 since E[(Bt − t)2 ] = E[Bt − 2tBt + t2 ] = 3E[Bt ]2 − 2t2 + t2 = 2t2 . Without loss of generality. 35 . (i) Proof. t t ln St = ln S0 + 0 σs dWs + 0 1 2 (αs − σs )ds. we assume p = 1. 2 So St = S0 exp{ 4.5. (xp ) = p(p − 1)xp−2 .

x) + √ ∂t ∂t 2 T −t σx −r(T −t) = −rKe N (d− ) − √ N (d+ ). 0 α βt (e − 1) + σ β t eβs dWs . −r(T −t) Ke N (d− ) = = = = = Ke Ke Ke −r(T −t) e − d2 − 2 √ 2π (d+ −σ √ 2 T −t)2 − −r(T −t) e √ 2π e e N (d+ ) x (r+ σ2 )(T −t) − σ2 (T −t) 2 2 Ke−r(T −t) e e N (d+ ) K xN (d+ ). = N (d+ ) + xN (d+ ) (iii) Proof. 0 and Rt = R0 e−βt + α (1 − e−βt ) + σ β 4. cx ∂ ∂ d+ (T − t. √ σ 2 (T −t) −r(T −t) σ T −td+ − 2 (ii) Proof. (i) Proof. x) − rKe−r(T −t) N (d− ) − xN (d+ ) d+ (T − t.Proof. x) − rKe−r(T −t) N (d− ) − Ke−r(T −t) N (d− ) d− (T − t. x) − Ke−r(T −t) N (d− ) d− (T − t. x) ∂x ∂x = N (d+ ). x) ∂x ∂t ∂ ∂ σ = xN (d+ ) d+ (T − t. d(eβt Rt ) = βeβt Rt dt + eβt dRt = eβt(αdt+σdWt ) .9. ct = xN (d+ ) ∂ ∂ d+ (T − t. Hence t eβt Rt = R0 + 0 eβs (αds + σdWs ) = R0 + t −β(t−s) e dWs . x) ∂x ∂x ∂ ∂ = N (d+ ) + xN (d+ ) d+ (T − t. 2 T −t (iv) 36 . x) − xN (d+ ) d+ (T − t.

So we have −x e− 2 −Keσ lim x(N (d+ ) − 1) = lim N (d+ ) √ = lim √ x→∞ x→∞ σ T − t d+ →∞ 2π Therefore x→∞ d2 + √ T −td+ −(T −t)(r+ 1 σ 2 ) 2 √ σ T −t = 0. we get x = K exp{σ T − td+ − (T − t)(r + 2 σ 2 )}. Also it’s clear that limt↑T d± = 0 for x = K. Note ∂ N (d+ ) σ√1 −t N (d+ ) ∂x d+ T lim x(N (d+ ) − 1) = lim = lim . So lim c(t. x) = −rKe−r(T −t) N (d− ) − √ 2 ∂x 2 T −t σx 1 1 = rc − √ N (d+ ) + σ 2 x2 N (d+ ) √ 2 2 T −t σ T − tx = rc. (vii) Proof. K). limt↑T d± = −∞ for x ∈ 2 (0. x) = xN (lim d+ ) − KN (lim d− ) = t↑T t↑T t↑T x − K. x) = limτ ↓0 σ√τ ln K + σ (r + 1 σ 2 ) τ − σ τ = ∞. x) = √ √ 1 x 1 limτ ↓0 d− (τ. limt↑T d− (T − t. Similarly. T ]. 4. 1 ct + rxcx + σ 2 x2 cxx 2 1 σx ∂ N (d+ ) + rxN (d+ ) + σ 2 x2 N (d+ ) d+ (T − t. x)) − Ke−r(T −t) N (limx↓0 d− (T − t. x) = limτ ↓0 d+ (τ. lim [c(t. It is easy to see limx↓0 d± = −∞. limx↓0 c(t. For x > K. x) > 0 and limt↑T d+ (T − t. So for t ∈ [0. x→∞ x→∞ x→∞ −x−2 −x−1 √ 1 By the expression of d+ . 0. (i) 37 . if x > K = (x − K)+ . it is clear limx→∞ d± = ∞. x) = ∞. For t ∈ [0.Proof. (v) Proof. if x ≤ K (vi) Proof. d+ (T − t.10. T ]. x) − (x − e−r(T −t) K)] lim [xN (d+ ) − Ke−r(T −t)N (d− ) − x + Ke−r(T −t) ] = = = = x→∞ x→∞ x→∞ lim [x(N (d+ ) − 1) + Ke−r(T −t) (1 − N (d− ))] lim x(N (d+ ) − 1) + Ke−r(T −t) (1 − N ( lim d− )) x→∞ 0. x)) = 0. x) = limx↓0 xN (limx↓ d+ (T − t.

dXt = ∆t dSt +Γt dMt +(St d∆t +dSt d∆t +Mt dΓt +dMt dΓt ). St )dt + Γt dMt = ∆t dSt + Γt dMt = dXt . The problem is to show 1 2 rNt dt = ct (t.e.15). St ) + σ 2 St cxx (t. St ). i. St ) dt + σ2 St cx (t. St ) + (α − r)cx (t.e. St )dWt 2 1 2 2 2 rc(t. 2 4. St ) − ∆t St .15). We let c(t. St ) = = = .10. (ii) Proof. 2 38 and dc(t. Mt Mt Mt By self-ﬁnancing property.1-4. St ) = Xt (by the calculations in Subsection 4. St )St + σ2 St cxx (t.10. by the self-ﬁnancing property and ∆t = cx (t.9) ⇐⇒ (4. St )dWt . 2 where Nt = c(t. St )σ2 St dt 2 1 2 2 ct (t. (4. St ) = 0. x) solves the Black-Scholes-Merton PDE with volatility σ1 : 2 ∂ ∂ 1 2 ∂ + rx + x2 σ1 2 − r c(t.11. St ). Indeed. St )dt + cx (t. St )dSt − d(Xt − Γt Mt )] 2 1 2 ct (t. St )σ 2 St dt = dNt − Mt dΓt − dMt dΓt = Γt dMt = Γt rMt dt = rNt dt. St )σ 2 St dt + Mt dΓt + dMt dΓt + [cx (t. Finally. we note c(t. Proof. (4.10. St )dt + cx (t. St ) + cxx (t. St ) + σ1 St cxx (t.16) + (4. so 1 2 ct (t. we have c(t. “continuous-time self-ﬁnancing condition” has two equivalent formulations. 2 1 2 2 ct (t. St )dSt + cxx (t. St )(αSt dt + σ2 St dWt ) + cxx (t. St )d St t − d(∆t St ) 2 1 2 ct (t. St ) dt + σ2 St cx (t. St ) + αcx (t.15).Proof.5. St ) + cxx (t. 2 c(t.10. cx (t. St )St + St (σ2 − σ1 )cxx (t. St ) − rc(t. St )dSt + Γt dMt − dXt ].10. St ) denote the price of call option at time t and set ∆t = cx (t. dNt = = = 1 ct (t. St ) − cx (t.3). i. we assume the portfolio is self-ﬁnancing.9) ⇐⇒ (4. We show (4.9) or (4. Indeed.10.16) + (4. St ) + rSt cx (t. x) = 0. St ) dt. St ) + cxx (t. First. St )σ 2 St dt + [cx (t. assuming X has the representation Xt = ∆t St + Γt Mt .5.10. We assume we have a portfolio Xt = ∆t St + Γt Mt . we clarify the problems by stating explicitly the given conditions and the result to be proved. So dXt = ∆t dSt +Γt dMt ⇐⇒ St d∆t + dSt d∆t + Mt dΓt + dMt dΓt = 0. St )St Nt Xt − ∆t St = = . ∂t ∂x 2 ∂x So 1 2 2 ct (t. First. This uniquely determines Γt as Γt = Moreover.10.

Solve the equation for α(t) and β(t). Proper boundary conditions are the key to uniqueness.12. By (4. 1 pxx (t.29). x) = ct (t. St ) − cx (t. So px (t. x) + re−r(T −t) K σx = −rKe−r(T −t) N (d− (T − t. x) = x − e−r(T −t) K. x) = −rKe−r(T −t) + σ 2 x2 · 0 + rx · 1 − r(x − Ke−r(T −t) ) = 0. St )St (> 0) cash in the money market account. Proof. We want to ﬁnd α(t) and β(t) such that (dW2 (t))2 = [α2 (t) + β 2 (t) + 2ρ(t)α(t)β(t)]dt = dt dW1 (t)dW2 (t) = [α(t) + β(t)ρ(t)]dt = 0. 4. x) = σx√T −t N (d+ (T − t. W2 ) is a pair of local martingale deﬁned by SDE dW1 (t) = dB1 (t) dW2 (t) = α(t)dB1 (t) + β(t)dB2 (t). x)) − √ N (d+ (T − t. x)) − 1. x)) + rKe−r(T −t) 2 T −t σx N (d+ (T − t. see Wilmott [8].13. = rKe−r(T −t) N (−d− (T − t.Therefore dXt = 1 2 2 2 rc(t. we conclude Xt = 0 for all t ∈ [0. By X0 . St ) 2 1 2 2 2 − (σ2 − σ1 )St cxx (t. we have β(t) = √ 1 1−ρ2 (t) (1) (2) . St ) − cx (t. x)).5. he should short the underlying stock and put p(t. 4. x) < 0. Indeed. = This implies Xt = X0 ert . x) = x − Ke−r(T −t) satisﬁes the Black-Scholes-Merton partial diﬀerential equation. So and α(t) = − √ ρ(t)2 1−ρ (t) W1 (t) = B1 (t) t W2 (t) = 0 √−ρ(s) dB1 (s) + 2 1−ρ (s) t 0 √ 1 dB2 (s) 1−ρ2 (s) (3) 39 . since ∆t = px (t. (iii) Proof. ∂t 2 ∂x ∂x 2 Remark: The Black-Scholes-Merton PDE has many solutions. We suppose (W1 . x) = cx (t. it suﬃces to show f (t. St )σ2 St ]dWt 2 rXt dt. St )St + St (σ2 − σ1 )σxx (t. (i) Proof. St ) + rSt cx (t. T ]. x) − p(t. St ) + (α − r)cx (t. St )αSt dt + [σ2 St cx (t. St ) − px (t. For an agent hedging a short position in the put. x) = cxx (t. St ) + rXt − rc(t. x)) − √ 2 T −t (ii) Proof. For more details. 1 1 ∂ ∂2 ∂ + σ 2 x2 2 + rx − r f (t. x) − 1 = N (d+ (T − t. c(t. x)) and pt (t. By the put-call parity.

4. we have B1 (t) = W1 (t) t B2 (t) = 0 ρ(s)dW1 (s) + (4) t 0 1 − ρ2 (s)dW2 (s). n−1 n−1 n−1 j=0 Zj ] = E[ n−1 j=0 E[Zj |Ftj ]] = 0 by part (i). t).14. where we used the independence of Browian motion increment and the fact that E[X 4 ] = 3E[X 2 ]2 if X is Gaussian with mean 0. Moreover E[Zj |Ftj ] = f (Wtj )E[(Wtj+1 − Wtj )2 − (tj+1 − tj )|Ftj ] = f (Wtj )(E[Wt2 −tj ] − (tj+1 − tj )) = 0.15. j+1 since Wtj+1 − Wtj is independent of Ftj and Wt ∼ N (0. V ar[ j=0 Zj ] = E[( j=0 n−1 Zj )2 ] 2 Zj + 2 = E[ j=0 n−1 Zi Zj ] 0≤i<j≤n−1 = j=0 n−1 2 E[E[Zj |Ftj ]] + 2 0≤i<j≤n−1 E[Zi E[Zj |Ftj ]] = j=0 n−1 E[2[f (Wtj )]2 (tj+1 − tj )2 ] 2E[(f (Wtj ))2 ](tj+1 − tj )2 j=0 n−1 = ≤ 2 0≤j≤n−1 max |tj+1 − tj | · j=0 E[(f (Wtj ))2 ](tj+1 − tj ) → 0. since n−1 j=0 E[(f (Wtj ))2 ](tj+1 − tj ) → T 0 E[(f (Wt ))2 ]dt < ∞.is a pair of independent BM’s. (i) Proof. (ii) Proof. (i) 40 . Finally. we have 2 E[Zj |Ftj ] = = = = [f (Wtj )]2 E[(Wtj+1 − Wtj )4 − 2(tj+1 − tj )(Wtj+1 − Wtj )2 + (tj+1 − tj )2 |Ftj ] [f (Wtj )]2 (E[Wt4 −tj ] − 2(tj+1 − tj )E[Wt2 −tj ] + (tj+1 − tj )2 ) j+1 j+1 [f (Wtj )]2 [3(tj+1 − tj )2 − 2(tj+1 − tj )2 + (tj+1 − tj )2 ] 2[f (Wtj )]2 (tj+1 − tj )2 . E[ (iii) Proof. 4. Equivalently. Clearly Zj ∈ Ftj+1 .

dBm (t))tr = A(t)(dW1 (t). · · · . This motivates us to deﬁne A as the square root of C. dWm (t)) = A(t)−1 (dB1 (t). To ﬁnd the m independent Brownion motion W1 (t). dBi (t)dBk (t) = j=1 d σij (t) dWj (t) σi (t) d l=1 σkl (t) dWl (t) σk (t) = 1≤j. we get (dB1 (t). Proof. · · · . Reverse the above analysis. · · · . · · · . 4. We start with the general Xi : t t Xi (t) = Xi (0) + 0 θi (u)du + 0 σi (u)dBi (u).Proof. l≤d d σij (t)σkl (t) dWj (t)dWl (t) σi (t)σk (t) = j=1 σij (t)σkj (t) dt σi (t)σk (t) = ρik (t)dt.17. long solution. 2. dWm (t))tr .16. · · · . 4. Wm (t). dBm (t))tr (dB1 (t). dBm (t))(A(t)−1 )tr dt = Im×m dt. dWm (t))tr (dW1 (t). which gives C(t) = A(t)A(t)tr . · · · . So A(t)−1 C(t)(A(t)−1 )tr = Im×m . dBm (t)) = C(t). 2 σi (t) So Bi is a Brownian motion. or equivalently (dW1 (t). · · · . we need to ﬁnd A(t) = (aij (t)) so that (dB1 (t). where Im×m is the m × m unit matrix. By the condition dBi (t)dBk (t) = ρik (t)dt. · · · . we obtain a formal proof. · · · . and (dW1 (t). 41 . Proof. i = 1. · · · . dBm (t))tr (dB1 (t). Bi is a local martingale with (dBi (t))2 = j=1 d 2 σij (t) dWj (t) = σi (t) d j=1 2 σij (t) dt = dt. (ii) Proof. · · · . We will try to solve all the sub-problems in a single. dWm (t))tr = A(t)−1 (dB1 (t). dBm (t))tr .

So E[(Xi (t0 + ) − Xi (t0 ))(Xj (t0 + ) − Xj (t0 ))|Ft0 ] t0 + t0 + = E Yi (t0 + ) − Yi (t0 ) + t0 Θi (u)du Yj (t0 + ) − Yj (t0 ) + t0 t0 + Θj (u)du |Ft0 t0 + = E [(Yi (t0 + ) − Yi (t0 )) (Yj (t0 + ) − Yj (t0 )) |Ft0 ] + E t0 t0 + Θi (u)du t0 Θj (u)du|Ft0 t0 + +E (Yi (t0 + ) − Yi (t0 )) t0 Θj (u)du|Ft0 + E (Yj (t0 + ) − Yj (t0 )) t0 Θi (u)du|Ft0 = I + II + III + IV. t0 + t0 + E[Xi (t0 + ) − Xi (t0 )|Ft0 ] t0 + t0 + E t0 Θi (u)du + t0 t0 + σi (u)dBi (u)|Ft0 Θi (t0 ) + E t0 (Θi (u) − Θi (t0 ))du|Ft0 . the Dominated Convergence Theorem under Conditional Expectation implies 1 lim E →0 t0 + |Θi (u) − Θi (t0 )|du|Ft0 = E lim t0 1 t0 + →0 |Θi (u) − Θi (t0 )|du|Ft0 = 0. E t0 t + (Θi (u) − Θi (t0 ))du|Ft0 ≤E t0 t + |Θi (u) − Θi (t0 )|du|Ft0 Since 1 t00 |Θi (u) − Θi (t0 )|du ≤ 2M and lim →0 1 t00 |Θi (u) − Θi (t0 )|du = 0 by the continuity of Θi . 42 . we conclude I = σi (t0 )σj (t0 )ρij (t0 ) + o( ) and t0 + t0 + t0 + II = E t0 (Θi (u) − Θi (t0 ))du t0 Θj (u)du|Ft0 + Θi (t0 ) E t0 Θj (u)du|Ft0 = o( ) + (Mi ( ) − o( ))Mj ( ) = Mi ( )Mj ( ) + o( ). we note Yi (t) = 0 σi (u)dBi (u) is a martingale and by Itˆ’s o t formula Yi (t)Yj (t) − 0 σi (u)σj (u)du is a martingale (i = 1. This proves (iii). t0 So Mi ( ) = Θi (t0 ) + o( ). t To calculate the variance and covariance. 2). t0 + I = E[Yi (t0 + )Yj (t0 + ) − Yi (t0 )Yj (t0 )|Ft0 ] = E t0 σi (u)σj (u)ρij (t)dt|Ft0 . First. By an argument similar to that involved in the proof of part (iii).The goal is to show lim ↓0 C( ) V1 ( )V2 ( ) = ρ(t0 ). for i = 1. we have Mi ( ) = = = By Conditional Jensen’s Inequality. 2.

Bt is a Brownian e 43 . Since d(ert ζt ) = ζt dXt + Xt dζt + dXt dζt ζt (rXt dt + ∆t (α − r)St dt + ∆t σSt dWt ) + Xt (−θζt dWt − rζt dt) +(rXt dt + ∆t (α − r)St dt + ∆t σSt dWt )(−θζt dWt − rζt dt) ζt (∆t (α − r)St dt + ∆t σSt dWt ) − θXt ζt dWt − θ∆t σSt ζt dt ζt ∆t σSt dWt − θXt ζt dWt . By part (ii). we get dζt = −θζt dWt − rζt dt. we used the fact that e−θWt − 2 θ rert ζt dt + ert dζt . (i) Proof. t0 + III ≤ E |Yi (t0 + ) − Yi (t0 )| t0 |Θj (u)|du|Ft0 ≤ M ≤ M ≤ M E[(Yi (t0 + ) − Yi (t0 ))2 |Ft0 ] E[Yi (t0 + )2 − Yi (t0 )2 |Ft0 ] t0 + E[ √ t0 Θi (u)2 du|Ft0 ] ≤ M ·M = o( ) Similarly. X0 = ζ0 X0 = E[ζT Xt ] = E[ζT VT ]. IV = o( ).) 4. 4.By Cauchy’s inequality under conditional expectation (note E[XY |F] deﬁnes an inner product on L2 (Ω)). (This can be seen as a version of risk-neutral pricing. d(ζt Xt ) = = = = So ζt Xt is a martingale. (i) Proof.18. So by L´vy’s theorem. where for the second “=”. only that the pricing is carried out under the actual probability measure. (iii) Proof. This proves part (iv) and (v). (ii) Proof. Part (i) and (ii) are consequences of general cases. lim ↓0 C( ) V1 ( )V2 ( ) = lim ↓0 ρ(t0 )σ1 (t0 )σ2 (t0 ) + o( ) 2 2 (σ1 (t0 ) + o( ))(σ2 (t0 ) + o( )) = ρ(t0 ). we have E[(Xi (t0 + ) − Xi (t0 ))(Xj (t0 + ) − Xj (t0 ))|Ft0 ] = Mi ( )Mj ( ) + σi (t0 )σj (t0 )ρij (t0 ) + o( ) + o( ).19. In summary. t 0 1 2 1 2 1 2 t solves dXt = −θXt dWt . d(ert ζt ) = (de−θWt − 2 θ t ) = −e−θWt − 2 θ t θdWt = −θ(ert ζt )dWt . Bt is a local martingale with [B]t = motion. This proves part (vi). Finally. sign(Ws )2 ds = t.

By Itˆ’s formula.10. E[Bt Wt2 ] = E[ 0 t Bs ds] + 2E[ 0 t sign(Ws )Ws ds] E[sign(Ws )Ws ]ds = 0 E[Bs ]ds + 2 0 t = = = = 2 0 t (E[Ws 1{Ws ≥0} ] − E[Ws 1{Ws <0} ])ds ∞ 0 t 4 0 e− 2s dxds x√ 2πs x2 s ds 2π 0 0 = E[Bt ] · E[Wt2 ]. 4 Since E[Bt Wt2 ] = E[Bt ] · E[Wt2 ]. if x = K. if x > K x − K. (iii) T −K 2T 2π e T suppose 0 2 K − KΦ(− √T ) where Φ is the distribution function of f (Wt )dt = 0. E[f (WT )] = ∞ (x K e 2T − K) √2πT dx = 2 −x standard normal random variable. So (4. Integrate both sides of the resulting equation and the expectation. (i) 1. the expectation of RHS of (4. if x < K 0. o d(Bt Wt2 ) = = = So t t Bt dWt2 + Wt2 dBt + dBt dWt2 Bt (2Wt dWt + dt) + Wt2 sign(Wt )dWt + sign(Wt )dWt (2Wt dWt + dt) 2Bt Wt dWt + Bt dt + sign(Wt )Wt2 dWt + 2sign(Wt )Wt dt. if x = K and f (x) = 0. Bt and Wt are not independent. if x < K.42) is 44 .(ii) Proof.10. dWt2 = 2Wt dWt + dt.20. 2 2 (iii) Proof. we get t t E[Bt Wt ] = 0 E[sign(Ws )]ds = 0 E[1{Ws ≥0} − 1{Ws <0} ]ds = 1 1 t − t = 0. if x ≥ K So f (x) = undeﬁned. 0.42) cannot hold. if x = K undeﬁned. Proof. By Itˆ’s formula. If we equal to 0. 4. f (x) = (ii) Proof. o (iv) Proof. d(Bt Wt ) = Bt dWt + sign(Wt )Wt dWt + sign(Wt )dt.

So we cannot have LK (T ) = 0 a. T ]. so that for any n ≥ n0 . the transaction cost could be big due to active trading. so limn→∞ fn (x) = limn→∞ 0 = 0.s.10. if x > K. second. if x = K n→∞ 2 1. 4. the purchases and sales cannot be made at exactly the same price K. 5. for n large enough. (i) Proof. there 1 exists n0 . so that Wt (ω) < K for any t ∈ [0. So T LK (T )(ω) = lim n n→∞ 0 1 1 1(K− 2n .21. This is trivial to check. x ≤ K − 2n . if x < K (5) lim fn (x) = 1 .20). 45 1 f (Xt )dt + f (Xt )d X 2 1 f (Xt )(dXt + d X t ) 2 t 1 2 1 2 f (Xt ) σt dWt + (αt − Rt − σt )dt + σt dt 2 2 f (Xt )(αt − Rt )dt + f (Xt )σt dWt . If x = K. max0≤t≤T Wt (ω) < K − 2n . Fix ω. But E[(ST − K)+ ] > 0. df (Xt ) = = = = This is formula (5. for n large enough. .K+ 2n ) (Wt (ω))dt = 0.10. if x < K.Proof. (i) Proof. T ]. (v) Proof. so its expectation is 0. see Hull [2]. Similarly.1. (iv) 1 1 Proof. if x > K.45). so limn→∞ fn (x) = 1 limn→∞ (x − K) = x − K. Risk-Neutral Pricing 5.. (ii) Proof. limn→∞ fn (x) = 8n = 0. For more details. (vi) Proof. we can show 0. limn→∞ fn (x) = (x − K)+ . So XT = (ST − K)+ . First. Since Wt (ω) can obtain its maximum on [0. There are two problems. Take expectation on both sides of the formula (4. we have E[LK (T )] = E[(WT − K)+ ] > 0.26) is a martingale. No. In summary. The RHS of (4. x ≥ K + 2n .2.

d(Dt St ) = St dDt + Dt dSt + dDt dSt = −St Rt Dt dt + Dt αt St dt + Dt σt St dWt = Dt St (αt − Rt )dt + Dt St σt dWt .. Proof.(ii) Proof. So WT √ > −d+ (T. then Z is a P -martingale. 5. then P is a probability measure equivalent to P . P (ST > K) = P (xeσWT +(r+ 2 σ )T > K) = P 46 .x)} dz 2π √ T )2 (z−σ 1 2 √ e− 2π −∞ N (d+ (T. x)).2. ST = xeσWT +(r− 2 σ 1 2 1 2 t (−σ)du 0 = Wt − σt is a P -Brownian motion (set Θ = −σ )T = xeσWT +(r+ 2 σ 1 2 1 2 )T .2. 5. and cx (0.3. x) = E[ZT 1{ST >K} ] = P (ST > K).4.) (iii) Proof. By Lemma 5. Wt = Wt + in Theorem 5.1. x)). Therefore (5. This is formula (5.x)} dz (ii) Proof.2. by Girsanov’s Theorem. E[DT VT |Ft ] = E E[DT VT ZT |Ft ].30) is equivalent to Dt Vt Zt = E e−rT eσWT +(r− 2 σ e e 1 − 2 σ2 T 1 2 )T 1 {xeσWT +(r− 2 σ 1 2 )T >K} E e σ WT √ σ T 1{WT > 1 (ln K −(r− 1 σ2 )T )} σ x 2 WT √ T 1 − 2 σ2 T E e 1 √ W { √T −σ T > T σ 1 √ T (ln K x √ −(r− 1 σ 2 )T )−σ T } 2 e− 2 σ ∞ 1 2 ∞ T −∞ √ z2 1 √ e− 2 eσ T z 1{z−σ√T >−d+ (T. If we set ZT = eσWT − 2 σ T and Zt = E[ZT |Ft ]. 1{z−σ√T >−d+ (T.20).2. x) = = = = = = = = 1 2 d E[e−rT (xeσWT +(r− 2 σ )T − K)+ ] dx 1 2 d E e−rT h(xeσWT +(r− 2 σ )T ) dx DT VT ZT Zt |Ft . So if we deﬁne P by dP = ZT dP on FT .2. Moreover. cx (0. (i) Proof. Zt > 0 and E[ZT ] = 1 2 E[eσWT − 2 σ T ] = 1. x) T = N (d+ (T.

5.4. First, a few typos. In the SDE for S, “σ(t)dW (t)” → “σ(t)S(t)dW (t)”. In the ﬁrst equation for c(0, S(0)), E → E. In the second equation for c(0, S(0)), the variable for BSM should be 1 T 2 1 T r(t)dt, σ (t)dt . BSM T, S(0); K, T 0 T 0 (i) Proof. d ln St = X = − is a Gaussian with X ∼ N ( (ii) Proof. For the standard BSM model with constant volatility Σ and interest rate R, under the risk-neutral measure, we have ST = S0 eY , where Y = (R− 1 Σ2 )T +ΣWT ∼ N ((R− 1 Σ2 )T, Σ2 T ), and E[(S0 eY −K)+ ] = 2 2 eRT BSM (T, S0 ; K, R, Σ). Note R =

1

T (rt 0

T T dSt 1 2 1 1 2 2 St − 2St d S t = rt dt + σt dWt − 2 σt dt. So ST = S0 exp{ 0 (rt − 2 σt )dt + 0 T 1 2 2 σt )dt + 0 σt dWt . The ﬁrst term in the expression of X is a number and the T 2 random variable N (0, 0 σt dt), since both r and σ ar deterministic. Therefore, T T 2 2 (rt − 1 σt )dt, 0 σt dt),. 2 0

σt dWt }. Let second term ST = S0 eX ,

1 T

(E[Y ] + 1 V ar(Y )) and Σ = 2 T, S0 ; K, 1 T

1 T

V ar(Y ), we can get 1 V ar(Y ) . T

**E[(S0 eY − K)+ ] = eE[Y ]+ 2 V ar(Y ) BSM So for the model in this problem, c(0, S0 ) = = e− e−
**

T 0

1 E[Y ] + V ar(Y ) , 2

rt dt

E[(S0 eX − K)+ ] e BSM T, S0 ; K, 1 T

T 0

T 0

1 rt dt E[X]+ 2 V ar(X)

1 T

1 E[X] + V ar(X) , 2

1 V ar(X) T

= 1 BSM T, S0 ; K, T

0

T

rt dt,

2 σt dt .

5.5. (i)

1 1 Proof. Let f (x) = x , then f (x) = − x2 and f (x) = 2 x3 .

Note dZt = −Zt Θt dWt , so

d

1 Zt

1 1 1 2 2 2 Θt Θ2 t = f (Zt )dZt + f (Zt )dZt dZt = − 2 (−Zt )Θt dWt + 3 Zt Θt dt = Z dWt + Z dt. 2 Zt 2 Zt t t

**(ii) Proof. By Lemma 5.2.2., for s, t ≥ 0 with s < t, Ms = E[Mt |Fs ] = E Zs Ms . So M = Z M is a P -martingale. (iii)
**

Zt Mt Zs |Fs

. That is, E[Zt Mt |Fs ] =

47

Proof. dMt = d Mt · 1 Zt = 1 1 1 Γt M t Θt M t Θ2 Γ t Θt t dMt + Mt d + dMt d = dWt + dWt + dt + dt. Zt Zt Zt Zt Zt Zt Zt

(iv) Proof. In part (iii), we have dMt = Let Γt = 5.6. Proof. By Theorem 4.6.5, it suﬃces to show Wi (t) is an Ft -martingale under P and [Wi , Wj ](t) = tδij (i, j = 1, 2). Indeed, for i = 1, 2, Wi (t) is an Ft -martingale under P if and only if Wi (t)Zt is an Ft -martingale under P , since Wi (t)Zt E[Wi (t)|Fs ] = E |Fs . Zs By Itˆ’s product formula, we have o d(Wi (t)Zt ) = Wi (t)dZt + Zt dWi (t) + dZt dWi (t) = Wi (t)(−Zt )Θ(t) · dWt + Zt (dWi (t) + Θi (t)dt) + (−Zt Θt · dWt )(dWi (t) + Θi (t)dt)

d

Γt M t Θt M t Θ2 Γt Θt Γt M t Θt t dWt + dWt + dt + dt = (dWt + Θt dt) + (dWt + Θt dt). Zt Zt Zt Zt Zt Zt then dMt = Γt dWt . This proves Corollary 5.3.2.

Γt +Mt Θt , Zt

= Wi (t)(−Zt )

j=1 d

Θj (t)dWj (t) + Zt (dWi (t) + Θi (t)dt) − Zt Θi (t)dt

=

Wi (t)(−Zt )

j=1

Θj (t)dWj (t) + Zt dWi (t)

**This shows Wi (t)Zt is an Ft -martingale under P . So Wi (t) is an Ft -martingale under P . Moreover,
**

· ·

[Wi , Wj ](t) = Wi +

0

Θi (s)ds, Wj +

0

Θj (s)ds (t) = [Wi , Wj ](t) = tδij .

Combined, this proves the two-dimensional Girsanov’s Theorem. 5.7. (i) Proof. Let a be any strictly positive number. We deﬁne X2 (t) = (a + X1 (t))D(t)−1 . Then P X2 (T ) ≥ X2 (0) D(T ) = P (a + X1 (T ) ≥ a) = P (X1 (T ) ≥ 0) = 1,

and P X2 (T ) > X2 (0) = P (X1 (T ) > 0) > 0, since a is arbitrary, we have proved the claim of this problem. D(T ) Remark: The intuition is that we invest the positive starting fund a into the money market account, and construct portfolio X1 from zero cost. Their sum should be able to beat the return of money market account. (ii) 48

Proof. We deﬁne X1 (t) = X2 (t)D(t) − X2 (0). Then X1 (0) = 0, P (X1 (T ) ≥ 0) = P X2 (T ) ≥ X2 (0) D(T ) = 1, P (X1 (T ) > 0) = P X2 (T ) > X2 (0) D(T ) > 0.

5.8. The basic idea is that for any positive P -martingale M , dMt = Mt ·

sentation Theorem, dMt = Γt dWt for some adapted process Γt . So martingale must be the exponential of an integral w.r.t. Brownian motion. Taking into account discounting factor and apply Itˆ’s product rule, we can show every strictly positive asset is a generalized geometric o Brownian motion. (i) Proof. Vt Dt = E[e− 0 Ru du VT |Ft ] = E[DT VT |Ft ]. So (Dt Vt )t≥0 is a P -martingale. By Martingale Represent tation Theorem, there exists an adapted process Γt , 0 ≤ t ≤ T , such that Dt Vt = 0 Γs dWs , or equivalently, −1 t −1 t −1 Vt = Dt 0 Γs dWs . Diﬀerentiate both sides of the equation, we get dVt = Rt Dt 0 Γs dWs dt + Dt Γt dWt , i.e. dVt = Rt Vt dt + (ii) Proof. We prove the following more general lemma. Lemma 1. Let X be an almost surely positive random variable (i.e. X > 0 a.s.) deﬁned on the probability space (Ω, G, P ). Let F be a sub σ-algebra of G, then Y = E[X|F] > 0 a.s. Proof. By the property of conditional expectation Yt ≥ 0 a.s. Let A = {Y = 0}, we shall show P (A) = 0. In∞ 1 1 deed, note A ∈ F, 0 = E[Y IA ] = E[E[X|F]IA ] = E[XIA ] = E[X1A∩{X≥1} ] + n=1 E[X1A∩{ n >X≥ n+1 } ] ≥

1 1 1 1 1 P (A∩{X ≥ 1})+ n=1 n+1 P (A∩{ n > X ≥ n+1 }). So P (A∩{X ≥ 1}) = 0 and P (A∩{ n > X ≥ n+1 }) = 0, ∞ 1 1 ∀n ≥ 1. This in turn implies P (A) = P (A ∩ {X > 0}) = P (A ∩ {X ≥ 1}) + n=1 P (A ∩ { n > X ≥ n+1 }) = 0. ∞ Γt Dt dWt .

T

1 Mt dMt . By Martingale RepreΓ dMt = Mt ( Mtt )dWt , i.e. any positive

By the above lemma, it is clear that for each t ∈ [0, T ], Vt = E[e− t Ru du VT |Ft ] > 0 a.s.. Moreover, by a classical result of martingale theory (Revuz and Yor [4], Chapter II, Proposition (3.4)), we have the following stronger result: for a.s. ω, Vt (ω) > 0 for any t ∈ [0, T ]. (iii)

1 1 Proof. By (ii), V > 0 a.s., so dVt = Vt Vt dVt = Vt Vt Rt Vt dt + Γt Dt dWt Γt = Vt Rt dt + Vt Vt Dt dWt = Rt Vt dt +

T

σt Vt dWt , where σt = 5.9.

Γt Vt Dt .

This shows V follows a generalized geometric Brownian motion.

Proof. c(0, T, x, K) = xN (d+ ) − Ke−rT N (d− ) with d± = then f (y) = −yf (y), cK (0, T, x, K) = xf (d+ )

1 √ σ T

x (ln K + (r ± 1 σ 2 )T ). Let f (y) = 2

y √1 e− 2 2π

2

,

∂d+ ∂d− − e−rT N (d− ) − Ke−rT f (d− ) ∂y ∂y −1 1 = xf (d+ ) √ − e−rT N (d− ) + e−rT f (d− ) √ , σ TK σ T

49

and cKK (0, T, x, K) x ∂d− e−rT 1 ∂d+ d− − √ − e−rT f (d− ) + √ (−d− )f (d− ) xf (d+ ) √ f (d+ )(−d+ ) 2 ∂y ∂y ∂y σ TK σ TK σ T x xd+ −1 −1 e−rT d− −1 √ √ − e−rT f (d− ) √ − √ f (d− ) √ f (d+ ) + √ f (d+ ) σ T K2 σ TK Kσ T Kσ T σ T Kσ T x d+ e−rT f (d− ) d− √ [1 − √ ] + √ f (d+ ) [1 + √ ] 2σ T K σ T Kσ T σ T e−rT x f (d− )d+ − 2 2 f (d+ )d− . Kσ 2 T K σ T

= = = =

5.10. (i) Proof. At time t0 , the value of the chooser option is V (t0 ) = max{C(t0 ), P (t0 )} = max{C(t0 ), C(t0 ) − F (t0 )} = C(t0 ) + max{0, −F (t0 )} = C(t0 ) + (e−r(T −t0 ) K − S(t0 ))+ . (ii) Proof. By the risk-neutral pricing formula, V (0) = E[e−rt0 V (t0 )] = E[e−rt0 C(t0 )+(e−rT K −e−rt0 S(t0 )+ ] = C(0) + E[e−rt0 (e−r(T −t0 ) K − S(t0 ))+ ]. The ﬁrst term is the value of a call expiring at time T with strike price K and the second term is the value of a put expiring at time t0 with strike price e−r(T −t0 ) K. 5.11. Proof. We ﬁrst make an analysis which leads to the hint, then we give a formal proof. (Analysis) If we want to construct a portfolio X that exactly replicates the cash ﬂow, we must ﬁnd a solution to the backward SDE dXt = ∆t dSt + Rt (Xt − ∆t St )dt − Ct dt XT = 0. Multiply Dt on both sides of the ﬁrst equation and apply Itˆ’s product rule, we get d(Dt Xt ) = ∆t d(Dt St ) − o T T Ct Dt dt. Integrate from 0 to T , we have DT XT − D0 X0 = 0 ∆t d(Dt St ) − 0 Ct Dt dt. By the terminal T T −1 condition, we get X0 = D0 ( 0 Ct Dt dt − 0 ∆t d(Dt St )). X0 is the theoretical, no-arbitrage price of the cash ﬂow, provided we can ﬁnd a trading strategy ∆ that solves the BSDE. Note the SDE for S −R gives d(Dt St ) = (Dt St )σt (θt dt + dWt ), where θt = αtσt t . Take the proper change of measure so that Wt =

t θ ds 0 s

**+ Wt is a Brownian motion under the new measure P , we get
**

T T T

Ct Dt dt = D0 X0 +

0 T 0

∆t d(Dt St ) = D0 X0 +

0

∆t (Dt St )σt dWt .

T

This says the random variable 0 Ct Dt dt has a stochastic integral representation D0 X0 + 0 ∆t Dt St σt dWt . T This inspires us to consider the martingale generated by 0 Ct Dt dt, so that we can apply Martingale Representation Theorem and get a formula for ∆ by comparison of the integrands.

50

by L´vy’s Theorem. Take conditional expectation w. so that Mt = M0 + + t 0 Γt dWt . by part (iii). we can ﬁnd an adapted process Γt . So Xt = Dt E[ t Cu Du du|Ft ]. So Bi is a is a Brownian motion under R(t)Si (t)dt + σi (t)Si (t)dBi (t) + j=1 σij (t)Θj (t)Si (t)dt − Si (t) j=1 σij (t)Θj (t)dt R(t)Si (t)dt + σi (t)Si (t)dBi (t). it is easy to see that X satisﬁes the ﬁrst equation. (iv) Proof. d(Dt Xt ) = ∆t d(Dt St ) − Ct Dt dt. we get T T 0 − Dt Xt = t ∆u d(Du Su ) − t Cu Du du. So X0 = E[ 0 Ct Dt dt] is the no-arbitrage price of the cash ﬂow at time zero.6. Ft on both sides. 5. Remark: As shown in the analysis. Thus. so that the corresponding portfolio X replicates the cash ﬂow and has zero T terminal value. t 0 Similarly. 51 . So XT = 0. we note T T T XT DT = D0 X0 + 0 ∆t Dt St σt dWt − 0 Ct Dt dt = M0 + 0 Γt dWt − MT = 0. Since dBi (t)dBi (t) = P. we have found a trading strategy ∆. Then by Martingale Representation Theot 0 rem. we can check Cu Du du). and Mt = E[MT |Ft ].10) in the textbook. (i) Proof. we get T T −1 −Dt Xt = −E[ t Cu Du du|Ft ]. with X0 = M0 = E[ Ct Dt dt] solves the SDE dXt = ∆t dSt + Rt (Xt − ∆t St )dt − Ct dt XT = 0. (ii) Proof. and we have justiﬁed formula (5.r. Integrate from t to T . To check the terminal condition. o t t t E[Bi (t)Bk (t)] = E[ 0 t Bi (s)dBk (s)] + E[ 0 t Bk (s)dBi (s)] + E[ 0 dBi (s)dBk (s)] = E[ 0 ρik (s)ds] = 0 ρik (s)ds. By Itˆ’s product rule and martingale property. If we set ∆t = T 0 ∆u d(Du Su ) − t 0 Γt Dt St σt . dBi (t)dBk (t) = (dBi (t) + γi (t)dt)(dBj (t) + γj (t)dt) = dBi (t)dBj (t) = ρik (t)dt. dBi (t) = dBi (t) + γi (t)dt = martingale. dSi (t) = = = R(t)Si (t)dt + σi (t)Si (t)dBi (t) + (αi (t) − R(t))Si (t)dt − σi (t)Si (t)γi (t)dt d d σij (t) σij (t) d d j=1 σi (t) Θj (t)dt = j=1 σi (t) dWj (t) + σij (t)2 d e j=1 σi (t)2 dt = dt.t. This is the no-arbitrage price of the cash ﬂow at time t. (iii) Proof. Bi σij (t) d j=1 σi (t) dWj (t). we can show E[Bi (t)Bk (t)] = (v) ρik (s)ds. Indeed.12.(Formal proof) Let MT = Xt = −1 Dt (D0 X0 T 0 Ct Dt dt.

= E 0 T W1 (t)dW2 (t) + 0 W2 (t)dW1 (t) T = E 0 W1 (t)(dW2 (t) − W1 (t)dt) + E 0 T W2 (t)dW1 (t) = −E 0 T W1 (t)2 dt tdt = − 0 1 = − T 2.6) can be transformed into d(e−rt Xt ) = ∆t [d(e−rt St ) − ae−rt dt] = ∆t e−rt [dSt − rSt dt − adt]. 2 5.Proof. t E[B1 (t)B2 (t)] = E[ 0 t sign(W1 (u))du [P (W1 (u) ≥ 0) − P (W1 (u) < 0)]du = 0 t = 0 t [P (W1 (u) ≥ u) − P (W1 (u) < u)]du 2 0 = < 0.13. To do this. to make the discounted portfolio value e−rt Xt a martingale. E[W1 (t)] = E[W1 (t)] = 0 and E[W2 (t)] = E[W2 (t) − (ii) Proof. Meanwhile. Equation (5. under which S satisﬁes the SDE dSt = rSt dt + σSt dWt + adt and Wt is a BM. for all t ∈ [0.” What is to be a martingale? The new measure P should be such that the discounted value process of the replicating 52 . Cov[W1 (T ). W2 (T )] = E[W1 (T )W2 (T )] T T t 0 W1 (u)du] = 0. Hence dSt −rSt dt−adt = [(αt −r)St −a]dt+σSt dWt = σSt Set θt = (αt −r)St −a σSt (αt −r)St −a dt σSt + dWt .7). This is the rational for formula (5. 5.9. o t t E[B1 (t)B2 (t)] = E[ 0 sign(W1 (u))du] = 0 [P (W1 (u) ≥ 0) − P (W1 (u) < 0)]du = 0. we note the SDE for S is dSt = αt St dt+σSt dWt . T ]. By Itˆ’s product formula.14. we can ﬁnd an equivalent probability measure P . and Wt = t θ ds 0 s + Wt .9. So. 1 − P (W1 (u) < u) du 2 for any t > 0. So E[B1 (t)B2 (t)] = E[B1 (t)B2 (t)] for all t > 0. This is a good place to pause and think about the meaning of “martingale measure. we are motivated to change the measure t in such a way that St −r 0 Su du−at is a martingale under the new measure. (i) Proof.

dYt = Yt [σdWt + (r − 2 σ 2 )dt] + 2 Yt σ 2 dt = Yt (σdWt + rdt). So by setting X0 = M0 = E[e−rT VT ].7). That is all about martingale measure! Risk neutral is a just perception. So d(e−rt Yt ) = o t a σe−rt Yt dWt and e−rt Yt is a P -martingale. when the underlying pays dividends. but self-ﬁnancing with consumption.7) under P . However. as shown in formula (5. for all t ∈ [0. The portfolio is no longer self-ﬁnancing. The SDE for U is dUt = σUt dWt + ae−rt dt. This justiﬁes the risk-neutral pricing of European-type contingent claims in the model where cost of carry exists. We need to solve the backward SDE dXt = ∆t dSt − a∆t dt + r(Xt − ∆t St )dt XT = VT for two unknowns. we must have Xt = Vt for any t ∈ [0. the discounted price process of the underlying is a martingale under P . Moreover. This is not a coincidence. is a P -martingale. You can think of martingale measure as a calculational convenience. (ii) 1 1 Proof. To do so. If we let ∆t = Γt e t . under which e−rt Xt is a martingale.portfolio is a martingale.6). we ﬁnd a probability measure P . First.9. Itˆ’s product formula yields o dVt = = e−σWt dUt + Ut e−σWt 1 (−σ)dWt + σ 2 dt + dUt · e−σWt 2 1 (−σ)dWt + σ 2 dt 2 1 e−σWt ae−rt dt − σ 2 Vt dt. Another perspective for perfect replication is the following. so that Mt = M0 + 0 Γs dWs . d(Dt Xt ) = ∆t d(Dt St ) no longer holds. This would give us a theoretical representation of ∆ by comparison of integrands. by Martingale Representation rt t Theorem. or there is cost of carry. where X is given by (5. By Itˆ’s formula. Martingale Representation Theorem gives Mt = M0 + 0 Γu dWu for some adapted process Γ. then for the martingale Mt = E[e−rT VT |Ft ]. In order to avoid arbitrage. So (e−rt Xt )t≥0 . if we have (5. T ]. then t dSt = S0 dYt + 0 a dsdYt + adt = Ys t S0 + 0 a ds Yt (σdWt + rdt) + adt = St (σdWt + rdt) + adt. hence a perfect replication of VT . Ys This shows S satisﬁes (5.9. referring to the actual eﬀect of constructing a hedging portfolio! Second. we ﬁrst set Ut = e−rt St to remove the rSt dt term. then d(e−rt Xt ) = ∆t [d(e−rt St ) − ae−rt dt] = ∆t e−rt σSt dWt . Indeed. then the σS value of the corresponding portfolio X satisﬁes d(e−rt Xt ) = Γt dWt . Also note the risk-neutral measure is diﬀerent from the one in case of no cost of carry. to remove U in the dWt term.6). we consider Vt = Ut e−σWt . So Dt Xt being a martingale under P is more or less equivalent to Dt St being a martingale under P . not the discounted price process of the underlying. In particular. What we still want to retain is the martingale property of Dt Xt . not that of Dt St . we have in this case the relation d(Dt Xt ) = ∆t d(Dt St ). X and ∆. 2 53 .9. Remark: To obtain this formula for S. we can ﬁnd an adapted process Γt . T ]. we note when the portfolio is self-ﬁnancing. we want Dt Xt to be a martingale under P because we suppose that X is able to replicate the derivative payoﬀ at terminal time. This is how we choose martingale measure in the above paragraph.9. XT = VT . we must have e−rt Xt = Mt . The diﬃculty is how to calculate Xt and the magic is brought by the martingale measure in the following line of reasoning: −1 −1 Vt = Xt = Dt E[DT XT |Ft ] = Dt E[DT VT |Ft ]. Let VT be a payoﬀ at time T . t then e−rt Xt = E[e−rT VT |Ft ] := Mt . (i) Proof. Thus the portfolio perfectly hedges VT . As indicated in the above analysis. XT = VT . if St = S0 Yt + Yt 0 Ys ds. Just like solving linear ODE. as in the classical Black-Scholes-Merton model without dividends or cost of carry.

r In particular. After the agent delivers the commodity. T ) = F utS (t.2. (v) Proof. r r r Meanwhile. So by an argument similar to that of §5. )t . E[ST ] = S0 erT − a (1 − erT ). 54 . By our analysis in part (i). So F orS (0. 0 Ys E[ST |Ft ] = S0 E[YT |Ft ] + E YT 0 t a ds + YT Ys T t T a ds|Ft Ys E YT |Ft ds Ys E[YT −s ]ds t = S0 E[YT |Ft ] + 0 a dsE[YT |Ft ] + a Ys t t T = S0 Yt E[YT −t ] + 0 t a dsYt E[YT −t ] + a Ys T t = = S0 + 0 t a ds Yt er(T −t) + a Ys ads Ys er(T −s) ds S0 + 0 a Yt er(T −t) − (1 − er(T −t) ). Integrate from 0 to t on both sides. T ). So St = Yt (S0 + t ads ). r (iv) Proof. we get Xt = St − S0 ert + a (1 − ert ) = St − S0 ert − a (ert − 1). F orS (t. T ). risk-neutral pricing is valid even in the presence of cost of carry. We solve the equation E[e−r(T −t) (ST − K)|Ft ] = 0 for K. whose value r is ST . T ). d(e−rt Xt ) = d(e−rt St ) − ae−rt dt. t dE[ST |Ft ] = aer(T −t) dt + S0 + 0 t ads Ys a (er(T −t) dYt − rYt er(T −t) dt) + er(T −t) (−r)dt r = S0 + 0 ads Ys er(T −t) σYt dWt . so multiply the integration factor e 2 σ we get 1 2 1 2 d(e 2 σ t Vt ) = ae−rt−σWt + 2 σ t dt. and receives the forward price F orS (0. we have d(St /Yt ) = adt/Yt . As we have argued at the beginning of the solution.6. t 1 2 1 2 t on both sides of the equation. We follow the hint. So E[ST |Ft ] is a P -martingale. T ) = F uts (t. So F orS (t. the process E[ST |Ft ] is the futures price process for the commodity. (vi) Proof. T ) = E[ST |Ft ] = S0 + t ads 0 Ys Yt er(T −t) − a (1−er(T −t) ). Set Yt = eσWt +(r− 2 σ (iii) Proof. and get K = E[ST |Ft ]. In particular. we solve the SDE dXt = dSt − adt + r(Xt − St )dt X0 = 0. XT = ST − S0 erT − a (erT − 1). the portfolio has exactly zero value.Note V appears only in the dt term. T ) = r S0 erT − a (1 − erT ) and hence XT = ST − F orS (0. First.

Finally. Note the form of Z is similar to that of a geometric Brownian motion. we have u t 1 ˆ ˆ 1 2 (dXu + Xu · σu du) = e 2 2 [(ˆu − σu γu )du + γu dWu ].4) the integrating factor e− t bv dv . then Xt = Yt Zt = x · 1 = x and dXu = = = = Yu dZu + Zu dYu + dYu Zu au − σu γu γu du + dWu Zu Zu [Yu bu Zu + (au − σu γu ) + σu γu ]du + (σu Zu Yu + γu )dWu Yu (bu Zu du + σu Zu dWu ) + Zu (bu Xu + au )du + (σu Xu + γu )dWu . a ˆ ˆ Write everything back into the original X.4) as follows. If Xu = Yu Zu (u ≥ t). u ≥ t.2. ¯ γ a ¯ ¯ ˆ ¯ To deal with the term σu Xu dWu . a ˆ ˆ 2 where au = au e− ˆ ¯ ˆ 1 d Xu e 2 u t σv dWv 2 σv dv and γu = γu e− ˆ ¯ = e2 1 u t 2 σv dv u t σv dWv .1.6. to u remove the term bu Xu du. a and γ. Zt = 1 is obvious.2. we manipulate the equation (6. we get d Xu e− i. au = e− ¯ u t bv dv au and γu = e− ¯ bv dv ¯ γu . 6.e. we multiply on both sides of (6. First. Connections with Partial Diﬀerential Equations 6. (i) 55 . (i) Proof. it is easy to obtain dZu = bu Zu du + σu Zu dWu . + σu Z u γu du Zu Remark: To see how to ﬁnd the above solution. we consider Xu = Xu e− ˆ dXu = e− u t u t σv dWv . So by Itˆ’s o formula. Then d(Xu e− ¯ Let Xu = e− u t u t bv dv ) = e− u t bv dv (au du + (γu + σu Xu )dWu ). use the integrating factor e u t 2 σv dv u 1 2 σ dv t 2 v .2. Xu Zu = 1 [(au − σu γu )du + γu dWu ] = dYu . Then σv dWv σv dWv ¯ ¯ [(¯u du + γu dWu ) + σu Xu dWu ] + Xu e− a ¯ u t u t 1 (−σu )dWu + e− 2 u t σv dWv 2 σu du ¯ +(¯u + σu Xu )(−σu )e− γ σv dWv du 1 ˆ 2 ˆ ˆ ˆ = au du + γu dWu + σu Xu dWu − σu Xu dWu + Xu σu du − σu (ˆu + σu Xu )du ˆ ˆ γ 2 1 ˆ 2 = (ˆu − σu γu − Xu σu )du + γu dWu . d u t bv dv− u t 1 σv dWv + 2 u t 2 σv dv = e2 1 u t 2 σv dv− u t σv dWv − u t bv dv [(au − σu γu )du + γu dWu ]. Zu This inspired us to try Xu = Yu Zu . u t bv dv Xu . (ii) Proof. then X satisﬁes the SDE ¯ ¯ ¯ dXu = au du + (¯u + σu Xu )dWu = (¯u du + γu dWu ) + σu Xu dWu .

Rt .4). T2 )|dt. then d(Dt Xt ) = Dt St [β(t. T ) + 1 γ 2 (t. Rt . T1 )]fr (t. Rt . T1 ) − β(t. T ) + 2 γ 2 (t. we get d(Dt Xt ) = 1 ∆(t)Dt −Rt f (t. Rt . T1 ) + ∆2 (t)df (t. T2 )γ 2 (t. Rt . so for any t ≤ T1 . T ). Rt . Rt )dt 2 −Rt (∆1 (t)f (t. Rt . Rt . Rt )fr (t. Rt . DT XT − D0 X0 > 0. T ) dt. Rt )dt 2 1 +∆2 (t) ft (t. T2 ) + α(t. for any r in the 2 range of Rt . T1 )γ 2 (t. T2 )dRt + frr (t. Rt . T ) . Rt . 2 56 . Rt .7). Rt . we have dXt = ∆1 (t)df (t. Rt . we must have ft (t. Rt . T ) does not depend on T . To avoid arbitrage (see. Integrate from 0 to T on both sides of the above equation. T1 )fr (t.9. T1 ) + ∆2 (t)f (t. Rt . Rt . Rt . Rt . The portfolio is self-ﬁnancing. r. Rt )fr (t. Rt )fr (t. r)frr (t. T2 ). (ii) Proof. Rt )frr (t. r. Rt . Rt . r. T1 ) + α(t. Rt . Rt . T1 ) = β(t. Rt . Rt . T2 ) for some t ∈ [0. T ) + ft (t. T )dWt . Rt . T2 ))dt] 1 = Dt [∆1 (t) ft (t. T ) = rf (t. Rt )frr (t. under the assumption that fr (t. Rt . T ). r. T2 ))dt. Rt . T ]. T1 ) + ∆2 (t)df (t. T ) = 0 for all values of r and 0 ≤ t ≤ T . Rt . Rt . T1 ) − β(t. Rt . T1 )dt + ∆2 (t)Dt [α(t. Rt . Rt . Rt . T2 )]]dWt = ∆1 (t)Dt [α(t. Rt . then d(Dt Xt ) = ∆(t)Dt −Rt f (t.5). T2 )]dWt . Rt . ∀t ∈ [0. T2 ) + Rt (Xt − ∆1 (t)f (t. T ) + α(t. 1 If fr (t. T2 )]fr (t. Rt . Rt . Rt )frr (t. for example. Rt . T1 ) + ∆2 (t)f (t. Rt )[Dt γ(t. T2 )]fr (t. T2 ) + γ 2 (t. Let ∆1 (t) = St fr (t. Rt )frr (t. T1 ) + ∆2 (t)fr (t. (iii) Proof. or equivalently. T ) dt 2 +Dt γ(t. ω. T2 )]fr (t. T ]. T1 )dRt + frr (t. Rt . This implies β(t. Rt )frr (t. T1 = T and ∆2 (t) = 0. Rt . Rt ) − β(t. In (6. Rt . Rt . If β(t. T2 ) and ∆2 (t) = −St fr (t. Rt )∆(t)fr (t. T2 )|dt. Rt )[∆1 (t)fr (t. T1 )]fr (t.9. Rt . T ) + γ 2 (t. This is formula (6. Rt . T2 )dt = Dt |[β(t. Rt . Rt . Rt .s. Rt )[∆1 (t)fr (t. Rt . T ) + 1 γ 2 (t. T1 )]dt 2 1 +∆2 (t)Dt [−Rt f (t. T1 ) + ft (t. T2 )dt +Dt γ(t. β(t. Rt . Rt . r. Rt . T ) = 0. Rt . Rt . T1 ) = β(t. T2 ) + ft (t. Rt . T2 )]dt 2 +Dt γ(t. We 1 2 choose ∆(t) = sign −Rt f (t. T1 )fr (t. Exercise 5. T1 ) + ∆2 (t)fr (t. T2 ))dt] 1 = ∆1 (t)Dt [−Rt f (t. Rt . Rt . Rt . we get T DT XT − D0 X0 = 0 Dt |[β(t. let ∆1 (t) = ∆(t). T ) = Rt f (t. T ) + 2 γ (t. To avoid arbitrage in this case. T2 )dt + fr (t. r. Rt . ft (t.Proof. Rt . Rt . T1 )fr (t. Rt )frr (t. T1 )dt + fr (t. T ) + ft (t. Rt ) − β(t. T2 ) − Rt (∆1 (t)f (t. Rt . T1 ) − ∆2 (t)f (t. and d(Dt Xt ) = −Rt Dt Xt dt + Dt dXt = Dt [∆1 (t)df (t. we must have for a. Rt . T1 ) + γ 2 (t. T ) + ft (t. Rt . T2 ) − β(t. T1 ).

T ) = 0. The characteristic equation of ϕ (t) − bϕ (t) − 2 σ 2 ϕ(t) = 0 is λ2 − bλ − 2 σ 2 = 0. T ). C(u.6. we have A(t.8) and (6. a general solution of ϕ is ϕ(t) = e 2 bt (a1 eγt + a2 e−γt ) for some constants a1 and a2 . T ) = (ii) 1 4 2 4 σ ϕ(t)C (t. t Since A(T. (iii) 1 1 Proof. T )ds + t 1 2 σ 2 (s)C 2 (s. T ) = − bv dv T t e− bv dv ds = T e t s bv dv ds.4. So C (t. T ) = e− s 0 bv dv [C(s. we have ϕ (t) = e 2 σ 1 2 T t T (a(s)C(s. T ) − A(t. By the deﬁnition of ϕ.5. T ) 1 − ϕ (t) / 1 ϕ(t)σ 2 = 2 σ 2 C 2 (t. T ) + ϕ(t)C (t. which gives two √ √ 1 1 1 roots 2 (b ± b2 + 2σ 2 ) = 2 b ± γ with γ = 2 b2 + 2σ 2 . T ) = 6.14). Diﬀerentiate both sides of the equation ϕ (t) = − 2 ϕ(t)σ 2 C(t. T ) − 2 Proof.e. T ) = b(−1) 2 + σ 2 C 2 (t. we have C(t. T ) − 1. T )(−bs ) + bs C(s. T ). T )] 2 2 1 4 1 σ ϕ(t)C 2 (t. 4 2 2ϕ (t) σ 2 ϕ(t) . Therefore by standard theory of ordinary diﬀerential 1 equations. we get − 2ϕ (t) 1 2 2 2ϕ (t) 1 + σ C (t. we get A(T. T ) = − φ(t)σ2 . T ) t − 1 2 2 2 σ (s)C (s. we get ϕ (t) = = = 1 − σ 2 [ϕ (t)C(t. T ) + ϕ(t)C (t. ϕ (t) − bϕ (t) − 2 σ 2 ϕ(t) = 0.9. Proof. Finally. T ))ds. T ). T )ds. So integrate on both sides of the equation from t to T. T ). It is then easy to see that we can choose appropriate constants c1 and c2 so that ϕ(t) = 1 2b 1 1 c1 c2 e−( 2 b+γ)(T −t) − 1 e−( 2 b−γ)(T −t) . 2 2ϕ (t) 1 So C(t. T ) = − ϕ(t)σ 2 C(t. Since C(T. T ) − σ 2 ϕ(t)C (t. we obtain e− T 0 bv dv C(T. Plug formulas (6. by A (s.9) into (6. T ) = e 1 −a(s)C(s. σ 2 ϕ(t) 2 σ ϕ(t) 2 1 i. (i) Proof. T ) + 2 σ 2 (s)C 2 (s. T ) = 0. T ) = T a(s)C(s.9. +γ b−γ 2 57 .3. T ) − e− t 0 t 0 T bv dv C(t. T )] 2 1 1 − σ 2 [− ϕ(t)σ 2 C 2 (t. T ) − 1] = −e− s 0 bv dv . T ) = − t s 0 e− T t s 0 bv dv ds.T )du 1 2 1 σ 2 (−1)C(t. We note d − e ds s 0 bv dv C(s.

(v) Proof. 0 = C(T. T ) − A(t. T ) = − (vi) Proof.15) and (6. T ) = − So c1 = c2 .(iv) Proof. (sinh z) = cosh z. σ2 This implies C(t. cosh z = . By (6. it is easy to see ϕ (t) = c1 e−( 2 b+γ)(T −t) − c2 e−( 2 b−γ)(T −t) . σ2 ϕ(t) σ γ cosh(γ(T − t)) + 1 b sinh(γ(T − t)) 2 58 . = σ 2 ϕ(t) γ cosh(γ(T − t)) + 1 b sinh(γ(T − t)) 2 1 1 1 1 2ϕ (T ) 2(c1 − c2 ) =− 2 . We ﬁrst recall the deﬁnitions and properties of sinh and cosh: sinh z = Therefore ϕ(t) = c1 e− 2 b(T −t) = c1 e− 2 b(T −t) = = and ϕ (t) = 1 2c1 − 1 b(T −t) [b sinh(γ(T − t)) + 2γ cosh(γ(T − t))] b· 2 e 2 2 σ 2c1 1 + 2 e− 2 b(T −t) [−γb cosh(γ(T − t)) − 2γ 2 sinh(γ(T − t))] σ 1 bγ bγ 2γ 2 b2 = 2c1 e− 2 b(T −t) sinh(γ(T − t)) + 2 cosh(γ(T − t)) − 2 cosh(γ(T − t)) − 2 sinh(γ(T − t)) 2 2σ σ σ σ 2 2 1 b − 4γ = 2c1 e− 2 b(T −t) sinh(γ(T − t)) 2σ 2 1 = −2c1 e− 2 b(T −t) sinh(γ(T − t)). A (t. From part (iii). 2 2 eγ(T −t) e−γ(T −t) − 1 1 2b + γ 2b − γ 1 2b 1 2 4b 1 − γ −γ(T −t) b + γ γ(T −t) e − 12 2 e 2 2 −γ 4b − γ 1 2c1 − 1 b(T −t) 1 e 2 −( b − γ)e−γ(T −t) + ( b + γ)eγ(T −t) 2 σ 2 2 2c1 − 1 b(T −t) e 2 [b sinh(γ(T − t)) + 2γ cosh(γ(T − t))]. 2ϕ (t) sinh(γ(T − t)) .8). T ) = 2aϕ (t) σ 2 ϕ(t) . In particular. σ 2 ϕ(T ) σ ϕ(T ) ez − e−z ez + e−z . Hence T A(T. T ) = t 2aϕ (s) 2a ϕ(T ) ds = 2 ln . and (cosh z) = sinh z. 2 ϕ(s) σ σ ϕ(t) 1 and A(t.9.5. T ) = − 2a ϕ(T ) 2a γe 2 b(T −t) ln = − 2 ln .

X1 (t). (ii) and (iii) Proof. X2 (t)) and e−rt f (t. Therefore dR(t) = (a − bR(t))dt + σ e and R is a CIR interest rate process. iterated conditioning argument shows g(t. 59 R(t)dB(t) (a := d σ 2 ) 4 . X1 (t). j=1 0 R(s) d 2 σ − bR(t) dt + σ 4 R(t) j=1 Xj (t) R(t) dWj (t). X2 (T ))|Ft ].6. then d e−bt 2 t bu e du 4 σ 0 dR(t) = j=1 d (2Xj (t)dXj (t) + dXj (t)dXj (t)) 1 2Xj (t)dXj (t) + σ 2 dt 4 1 2 −bXj (t)dt + σXj (t)dWj (t) + σ 2 dt 4 d = j=1 d = j=1 = t Xj (s) d √ dWj (s). (i) Proof. 6. Suppose R(t) = d j=1 2 Xj (t). X2 (t)) 1 1 = gt dt + gx1 dX1 (t) + gx2 dX2 (t) + gx1 x2 dX1 (t)dX1 (t) + gx2 x2 dX2 (t)dX2 (t) + gx1 x2 dX1 (t)dX2 (t) 2 2 1 2 2 = gt + gx1 β1 + gx2 β2 + gx1 x1 (γ11 + γ12 + 2ργ11 γ12 ) + gx1 x2 (γ11 γ21 + ργ11 γ22 + ργ12 γ21 + γ12 γ22 ) 2 1 2 2 + gx2 x2 (γ21 + γ22 + 2ργ21 γ22 ) dt + martingale part.5. X2 (t)) = E[h(X1 (T ). 2 So we must have 1 2 2 gt + gx1 β1 + gx2 β2 + gx1 x1 (γ11 + γ12 + 2ργ11 γ12 ) + gx1 x2 (γ11 γ21 + ργ11 γ22 + ργ12 γ21 + γ12 γ22 ) 2 1 2 2 + gx2 x2 (γ21 + γ22 + 2ργ21 γ22 ) = 0. X2 (T ))|Ft ] and e−rt f (t. We note dg(t.4. X2 (t)) = E[e−rT h(X1 (T ). X1 (t). we get 1 1 b 1 1 d(e 2 bt Xj (t)) = e 2 bt Xj (t) bdt + (− Xj (t)dt + σdWj (t) 2 2 2 1 1 = e 2 bt σdWj (t). The PDE for f can be similarly obtained. Xj (t) is normally distributed with mean Xj (0)e− 2 bt and variance (ii) Proof. 2 1 So e 2 bt Xj (t) − Xj (0) = 1 t 1 2σ 0 e 2 bu dWj (u) and Xj (t) = e− 2 bt Xj (0) + 1 σ 2 1 1 1 t 0 e 2 bu dWj (u) . 2 Taking ρ = 0 will give part (ii) as a special case.6. By Theorem = σ2 −bt ). Multiply e 2 bt on both sides of (6. 2 Xj (t) d j=1 R(t) dt Let B(t) = then B is a local martingale with dB(t)dB(t) = = dt. X1 (t). So by L´vy’s Theorem.9. Since g(t. X2 (t)) ar both martingales.15). (i) Proof. B is a Brownian motion. X1 (t). 4b (1 − e 1 4.9.

St .. 4b (iv) Proof. St .7. Xd (t) are i.9. Xd (t) are i. · · · . Vt )(−r) + ct (t. St . Vt )σ 2 Vt + csv (t. e − 2v(t)/(1−2uv(t)) µ(t) (x− 1−2uv(t) ) 2 dx · e− µ2 (t)(1−2uv(t))−µ2 (t) 2v(t)(1−2uv(t)) 1 − 2uv(t) = e uµ2 (t) − 1−2uv(t) 1 − 2uv(t) (v) Proof. By R(t) = d j=1 2 Xj (t) and the fact X1 (t). (i) Proof. St .26). St . Vt )rSt + cv (t. Vt )Vt St + 2 1 cvv (t. Vt ) + cs (t.d. Vt )σVt St ρ dt + martingale part. Vt ) = E[e−rT (ST − K)+ |Ft ] is a martingale by iterated conditioning argument.(iii) Proof.i. e−rt c(t. This is equation (6. Since d(e−rt c(t. St . St . 2 (ii) 60 . St . Vt )) 1 2 = e−rt c(t. By (6. St . E e 2 uXj (t) 1 ∞ = −∞ ∞ e ux2 e− (x−µ(t))2 2v(t) dx 2πv(t) (1−2uv(t))x2 −2µ(t)x+µ2 (t) − 2v(t) = −∞ ∞ e 2πv(t) 1 2πv(t) e− µ(t) (x− 1−2uv(t) ) 2 dx µ2 (t) µ2 (t) − 1−2uv(t) (1−2uv(t))2 2v(t)/(1−2uv(t)) + = −∞ ∞ dx = −∞ 1 − 2uv(t) 2πv(t) . So X1 (t). · · · . 2 1 we conclude rc = ct + rscs + cv (a − bv) + 1 css vs2 + 2 cvv σ 2 v + csv σsvρ.9. Vt )(a − bVt ) + css (t.d.i. Xj (t) is dependent on Wj only and is normally distributed with mean e− 2 bt Xj (0) and 2 variance σ [1 − e−bt ].16). 6. 2 d udµ2 (t) 2a e−bt uR(0) 1−2uv(t) E[euR(t) ] = (E[euX1 (t) ])d = (1 − 2uv(t))− 2 e 1−2uv(t) = (1 − 2uv(t))− σ2 e− . normal with the same mean µ(t) and variance v(t).

v ≥ 0. Vt )(a − bvt + ρσVt ) + fxx (t. Vt ) = E[1{XT ≥log K} |Ft ]. x. v) = 1{x≥log K} for all x ∈ R. Xt . log s.9. log s. log s. Vt )Vt 2 2 1 + fvv (t. v). Vt ) = 1{XT ≥log K} . v) − e−r(T −t) gsv (t. Vt )(r + Vt ) + fv (t. Vt ) is a martingale. s s s s K csv = fv (t. v) 2 . log s. Vt ) = 1 1 ft (t. v). (iii) Proof. v) . so by diﬀerentiating f and setting the dt term as zero. then ct = sft (t. This is (6. s.32). log s. log s. v) + fss (t. c satisﬁes the PDE (6.8. 61 . Xt . log s. log s. log s. which implies f (T.9. log s. v) = s1{log s≥log K} − K1{log s≥log K} = 1{s≥K} (s − K) = (s − K)+ . Xt . Vt )σ 2 Vt + fxv (t. log s. log s.Proof. Similar to (iii). v). log s. 1 1 1 1 css = fs (t. v) − e−r(T −t) Kg(T. Xt . log s. So f (T. v) + sfs (t. df (t. by Markov property. log s. v). log s. s cvv = sfvv (t. f (t. c(T. Suppose c(t. That is. f (t. (v) Proof. Xt . 1 1 cs = f (t. v) = sf (T. v) − e−r(T −t) Kgt (t. 6. v) = sf (t. v) 2 + e−r(T −t) Kgs (t. Xt . log s.32) for f . log s. So 1 1 ct + rscs + (a − bv)cv + s2 vcss + ρσsvcsv + σ 2 vcvv 2 2 = sft − re−r(T −t) Kg − e−r(T −t) Kgt + rsf + rsfs − rKe−r(T −t) gs + (a − bv)(sfv − e−r(T −t) Kgv ) 1 1 gs K K 1 + s2 v − fs + fss − e−r(T −t) 2 gss + e−r(T −t) K 2 + ρσsv fv + fsv − e−r(T −t) gsv 2 s s s s s 1 2 + σ v(sfvv − e−r(T −t) Kgvv ) 2 1 1 1 1 = s ft + (r + v)fs + (a − bv + ρσv)fv + vfss + ρσvfsv + σ 2 vfvv − Ke−r(T −t) gt + (r − v)gs 2 2 2 2 1 1 2 +(a − bv)gv + vgss + ρσvgsv + σ vgvv + rsf − re−r(T −t) Kg 2 2 = rc. Vt ) + fx (t. v) − e−r(T −t) Kgvv (t. s s cv = sfv (t. 2 2 (iv) Proof. Vt )σVt ρ dt + martingale part. v) − e−r(T −t) Kgss (t. Second. v) − e−r(T −t) Kgs (t. Xt . First. v) − e−r(T −t) Kgv (t. Xt . Xt . v). v) − e−r(T −t) Kg(t.26). 2 1 So we must have ft + (r + 2 v)fx + (a − bv + ρσv)fv + 1 fxx v + 1 fvv σ 2 v + σvρfxv = 0. s. Indeed. log s. v) + fsv (t. we have the PDE (6. log s. Xt .9. v) − re−r(T −t) Kg(t. log s.

48). T. Xu ) + γ 2 (u. Diﬀerentiate w. T. x. x. y)dy. We follow the hint.t. y)]hb (y)dy. By integration-by-parts formula. x. y) dy = 0. x. y)dy = − ∂T b 0 ∂ 1 [β(T. y)hb (y)dy 0 = hb (y)β(u. T on both sides of (6. T.9. x)pxx (t. x. pxx are all continuous. y)p(t.45) implies 0 h(y) pt (t. x. y)p(t. u. y)dy − h(x) 1 E t. T. Suppose h is smooth and compactly supported. x. y)hb (y)dy = − 0 0 ∂ 2 (γ (u. u. x) = ∞ 1 So (6. 6. x)pxx (t. y) = 0. ∂y and b b γ 2 (u.49). we have b hb (y) 0 ∂ p(t.9. y)p(t. y)p(t.48). We ﬁrst note dhb (Xu ) = hb (Xu )dXu + 2 hb (Xu )dXu dXu = hb (Xu )β(u. y) + β(t. y))hb (y)dy = ∂y b 0 ∂2 2 (γ (u. T. y)p(t.9. u. px . y)dy = 0 ∞ 0 h(y)pt (t. x.9. x. u. T. T. 2 Take expectation on both sides. T. x. y) + 2 γ 2 (t. T. y)p(t. T. x) = gx (t.r. v. y))dy. y)dydu. Xu )hb (Xu ) du+ hb (Xu )γ(u. x. x. y) + γ 2 (u. x. x)px (t. u.43). x)px (t.Proof.49). Xu ) + 2 γ 2 (u. we get ∞ E t. ∂y Plug these formulas into (6. the integration range can be changed from (−∞. y)hb (y) p(t. h(y)px (t. b). 1 1 Proof.9. T. y))hb (y)dy. x. Xu )hb (Xu )]du 2 ∞ −∞ = t T = t 1 hb (y)β(u. T. y)dy.x [hb (XT ) − hb (Xt )] = −∞ T hb (y)p(t. which gives (6. x. x) = 0 ∞ ∂ ∂t ∞ ∞ h(y)p(t.9. x. ∞) to (0. x. y)|b − 0 0 b hb (y) ∂ (β(u. x. ∂y 2 . Xu )dWu . y))dy ∂y = − 0 ∂ hb (y) (β(u. y)dy. y) + β(t. u. Integrate on both sides of the equation. we have b b β(u. T. y)]hb (y)dy + ∂y 2 62 b 0 ∂2 2 [γ (T. 0 gxx (t. 2 Since hb vanishes outside (0. then it is legitimate to exchange integration and diﬀerentiation: gt (t. we have 1 pt (t. Xu ) + γ 2 (u. u. u.x [hb (Xu )β(u. b). y) + γ 2 (t. x. x. x.9. Xu )hb (Xu ) du + martingale part. h(y)pxx (t. x. y)p(t. 2 This is (6. By the arbitrariness of h and assuming β. y)p(t. we get (6. we have T hb (XT ) − hb (Xt ) = t 1 hb (Xu )β(u. T. y)p(t. pt .

T. Exotic Options 7. K) + σ (T. y)dy (y − K) K = −re−rT K ∞ (y − K)p(0. T.9. x. x. T. x. y)) dy = 0. y))dy = K 2 ∂y ∂y K 1 = − (σ 2 (T. y)) − ∂T ∂y 2 ∂y 2 This is (6. x. y))|∞ − (σ (T. T. x. we have ∞ − K (y−K) ∂ (ryp(0. y)p(t.1. y) + (β(T. T. x. T. x. x. K). y)dy = K K ryp(0. x. x. y))dy = −(y−K)ryp(0. T. 2 7. y)y 2 p(0. T. ∂ ∂ 1 ∂2 2 (γ (T.50). T. p(t. T. (i) 63 . x. T. x. x. K) = = −rc(0. we conclude for any y ∈ (0. we have 1 ∞ ∂2 (y − K) 2 (σ 2 (T.58). x. By (6. x. Under the assumption that limy→∞ (y − K)ryp(0. K) + e−rT K ∞ (y − K)pT (0. y)y 2 p(0. y)dy + e−rT K ∞ (y − K)pT (0. x. x. y)dy − e−rT ∂ (ryp(t. y))dy ∂y +e−rT K 1 ∂2 2 (y − K) (σ (T. T. y) = 0. T. y)y 2 p(t. K)K 2 p(0. x. T. y)dy 1 +e−rT σ 2 (T. T. y))dy 2 ∂y 2 ∞ ∞ = −re−rT K (y − K)p(0.9. K)K 2 p(0. x. x. 2 Therefore. T. x. y)p(t. y)y 2 p(0.9. ∞ cT (0.9. x. K)K 2 p(0. K). x. T. T.that is. K) 2 ∞ 1 −rT = re K p(0. x. T. ∞). If we further assume (6. y) + (β(T. Proof. T. T.57) and (6. x. y)) − ∂T ∂y 2 ∂y 2 6. T. y)dy + e−rT σ 2 (T. y)p(t. y))dy 2 K ∂y ∞ ∂ 2 1 ∂ 2 (y − K) (σ (T.50) and the arbitrariness of hb . b hb (y) 0 ∂ ∂ 1 ∂2 2 (γ (T. y)|∞ + K ∂y ∞ ∞ ryp(0. x. y)dy. y)dy ∞ −re−rT K ∞ (y − K)p(0. y)p(t. T. T. y)|∞ ) K 2 1 2 = σ (T. T. p(t. T. x. y)y 2 p(0.10. then use integration-by-parts formula twice. T. T. T. x. x. K)K 2 cKK (0. K) 2 K 1 2 = −rKcK (0. x. T. y)) = 0. x. T. y)dy + e−rT K ryp(0.

1 √ σ τ 1 log s + (r ± 2 σ 2 )τ − 1 √ σ τ log s−1 + (r ± 1 σ 2 )τ = 2 2 log s √ . s) − δ± (τ. N (y) = y √1 e− 2 2π 2 1 √ σ τ 1 log s + (r + 2 σ 2 )τ − 1 √ σ τ log s + (r − 1 σ 2 )τ = 2 1 √ σ2 τ σ τ √ = σ τ. s)) = e− −1 )) N (δ± (τ. N (δ± (τ. ). xσ τ 1 √ . s 2r 2 −(log 1 +rτ )2 ±σ 2 τ (log s−log 1 ) s s 2σ 2 τ ] = e− 4rτ log s±2σ 2 τ log s 2σ 2 τ = e−( σ2 ±1) log s = s−( σ2 ±1) . s)). s−1 ) = (vii) Proof. (v) Proof. s) = 1 √ [log s σ τ + (r ± 1 σ 2 )τ ] = 2 = log s − 1 2 σ τ + 1 r± 2 σ 2 √ τ. 2r 2r So N (δ± (τ. s)). s)) s and e−rτ N (δ− (τ. s−1 )) = s( σ2 ±1) N (δ± (τ. ∂ x ∂ δ± (τ. ) = ∂x x ∂x σ τ σ τ 1 √ 1 √ log log x 1 + (r ± σ 2 )τ c 2 = 1 √ . 2τ s (ii) Proof. [(log s+rτ ) N (δ± (τ. σ τ . (iv) Proof. (log s+rτ )2 ±σ 2 τ (log s+rτ )+ 1 σ 4 τ 2 δ± (τ. s)) = sN (δ+ (τ. δ± (τ. so N (y) = y √1 e− 2 2π 2 (− y2 ) = −yN (y). s)) e−rτ 2σ 2 τ = = e− N (δ− (τ. ) = ∂x c ∂x c ∂ ∂ δ± (τ. s)) = √ e− 2 = √ e− 2π 2π Therefore 2σ 2 τ (log s+rτ ) N (δ+ (τ. 2 64 . s) = (vi) Proof. s) − δ− (τ. s) ∂t r ± 1 σ 2 1 − 1 ∂τ log s 1 − 3 ∂τ 2 (− )τ 2 + τ 2 σ 2 ∂t σ 2 ∂t 1 r ± 2 σ2 √ 1 log s 1 √ (−1) − = − τ (−1) 2τ σ σ τ 1 1 1 = − · √ − log ss + (r ± σ 2 )τ ) 2τ σ τ 2 1 1 = − δ± (τ. δ+ (τ.s) 4 1 1 2σ 2 τ . xσ τ 1 c + (r ± σ 2 )τ x 2 =− (iii) Proof. Since δ± (τ.Proof. σ ∂ δ± (τ.

x=L 2rK 2r+σ 2 . supt≤u≤T (Wu − Wt ) is independent of Ft . 2r x 1 Proof. y = Yt . by the continuity of Y .2) or both.1) or (8.8. YT )|Ft ] = E[f (x ST 0 . So vL (L+) = vL (L−) if and only if 2L Proof. YT )|Ft ] is a Borel function of (St . This implies 8..3. 2 So the linear complementarity conditions for v2 imply v2 (x) = (K2 − x)+ = K2 − x > K1 − x = (K1 − x)+ on [0. x YT −t 1{ y ≤ YT −t } + y1{ y ≤ YT −t } )]. vL (L+) = (K − L)(− σ2 )( L )− σ2 −1 L 2r m j=1 (Ytj − Ytj−1 )(Stj − Stj−1 ) → 0. Proof. 7. − σ2r (K − L) = −1. where x = St . −t So E[f (ST .To be continued .3. Therefore S S0 x S0 x S0 E[f (ST .1. (i) 65 .3. and for 0 ≤ x < L1∗ < L2∗ . equality holds in either (8. = − σ2r (K − L). We note ST = S0 eσWT = St eσ(WT −Wt ) . Proof. Hence v2 (x) does not satisfy the third linear complementarity condition for v1 : for each x ≥ 0. WT − Wt = (WT − Wt ) + α(T − t) is independent of Ft . we can see v2 (x) ≥ (K2 − x)+ ≥ (K1 − x)+ . we get L = 2L 8.3. By Cauchy’s inequality and the monotonicity of Y . rv2 (x) − rxv2 (x) − 1 2 2 2 σ x v2 (x) ≥ 0 for all x ≥ 0. we have m m | j=1 (Ytj − Ytj−1 )(Stj − Stj−1 )| ≤ j=1 |Ytj − Ytj−1 ||Stj − Stj−1 | m m ≤ j=1 (Ytj − Ytj−1 )2 j=1 (Stj − Stj−1 )2 m ≤ 1≤j≤m max |Ytj − Ytj−1 |(YT − Y0 ) j=1 (Stj − Stj−1 )2 . By the calculation in Section 8. 7.8. 1 rv2 (x) − rxv2 (x) − σ 2 x2 v2 (x) = rK2 > rK1 > 0.2.s. If we increase the number of partition points to inﬁnity and let the length of the longest subinterval m 2 max1≤j≤m |tj − tj−1 | approach zero..4. 8. American Derivative Securities 8. and YT = = = S0 eσMT S0 eσ supt≤u≤T Wu 1{Mt ≤sup St e σ supt≤u≤T (Wu −Wt ) t≤u≤T Wt } + S0 eσMt 1{Mt >sup } t≤u≤T Wu } } 1 { St ≤e t Y σ supt≤u≤T (Wu −Wt ) + Yt 1 { St ≤e t Y σ supt≤u≤T (Wu −Wt ) . Solve for L. L1∗ ]. then [S]T − [S]0 < ∞ and max1≤j≤m |Ytj − j=1 (Stj − Stj−1 ) → Ytj−1 | → 0 a. Yt ).

19) with equality near 0 and v(0) = K. (v) Proof. If in a right neighborhood of 0. v(x2 ) = (K − x2 )+ = 0 and v (x2 )=the right derivative of (K − x)+ at x2 . we would conclude v ≡ 0. Assume there is an interval [x1 .19) with equality.3.19) with equality.3.Proof.4).8. This implies 0 < x1 < x2 < K (if K ≤ x2 . Since x1 = x2 .3.20). the last two equations imply C2 = 0.18) with equality for x at and immediately to the right of x ” 2 as “v(x2 ) = (K − x2 )+ and v (x2 ) =the right derivative of (K − x)+ at x2 . v cannot satisfy (8.18) will be 1 violated. So we must have rv − rxv − 2 σ 2 x2 v > 0 in a right neighborhood of 0. so that v(x) = C1 x + C2 x− σ2 on [x1 .18)-(8. Suppose x takes its values in a domain bounded away from 0. x2 ] and satisﬁes (8. we have C1 = K−x1 = K−x2 . We have thus concluded simultaneously that v cannot satisfy (8. If for some x0 ∈ [x1 . Assume v(x) = xp for some constant p to be 1 1 determined. i. plug C2 = 0 into the last two equations. Then v(0) = limx↓0 v(x) = 0 < (K − 0)+ . such that v(x) ≡ 0 satisﬁes (8. and the only solution v that satisﬁes the speciﬁed conditions in the problem is the zero solution.4).8. v2 (x) of (8.20). which is 0). If v satisfy (K − x)+ with equality for all x ≥ 0. This is a contradiction.19) with equality on [x1 .18) with equality for x at and immediately to the left of x1 and 2r for x at and immediately to the right of x2 .” 66 . So such an x0 cannot exist. x2 ] where 0 < x1 < x2 < ∞. Solve the quadratic equation 0 = r−pr− 2 σ 2 p(p−1) = 2r 1 2r (− 2 σ 2 p − r)(p − 1). (8. x2 ]. (ii) Proof. (iii) Proof. (iv) Proof.4) has the form C1 x + C2 x− σ2 . By the general theory of linear diﬀerential equations.e. So it suﬃces to ﬁnd two linearly independent special solutions of (8. then v cannot have a continuous derivative as stated in the problem. This is already shown in our solution of part (iii): near 0. Combined. Contradiction.3. then we can ﬁnd some C1 and C2 . starting from ﬁrst principles (8. we get p = 1 or − σ2 . then any solution of (8.3.8. This is a contradiction.4) yields xp (r−pr− 2 σ 2 p(p−1)) = 0. (8. v satisﬁes (8. (vi) 1 Note we have interpreted the condition “v(x) satisﬁes (8. then part (i) implies v(x) = C1 x + 2r C2 x− σ2 for some constants C1 and C2 . v(x0 ) = v (x0 ) = 0.4). x2 ]. Therefore our initial assumption is incorrect. we would have x1 x2 x1 = x2 . + v(x) = (K − x) near o.8. by the uniqueness of the solution of (8.3. So v(0) = K. 1 Thus we have four equations for C1 and C2 : 2r C1 x1 + C2 x− σ2 = K − x1 1 2r C x + C x − σ 2 = K − x 1 2 C − 1 C1 − 2 2 2r − σ2 −1 2r σ 2 C2 x1 2r − σ2 −1 2r σ 2 C2 x2 2 = −1 = −1.8. So a general solution of (8.3. we have C1 = −1.” This is weaker than “v(x) = (K − x) in a right neighborhood of x2 .3. Plug C2 = 0 into the ﬁrst two equations. According to (8.4) can be represented in the form of C1 v1 +C2 v2 where C1 and C2 are constants.8. if we can ﬁnd two linearly independent solutions v1 (x).3.3.

Combined. i. v(x) + and vL∗ (x) are diﬀerent.18).e. we must have x1 ≤ K. rv − rxv − 2 σ x v = r(K − x) + rx = rK. rv − rxv − 1 2 2 1 2 2 1 2 2 2 σ x v = rf − rxf − 2 σ x f = 0.20). x x 2r KL σ2 σ 2 + 2r( L ) σ2 +1 − (σ 2 + 2r)( L ) σ2 2r 2rKx σ 2 KL σ2 − 2r − σ2 + σ2 + x −K =x . (ii) Proof. L = 2rK 2r+σ 2 . So for x ≥ K. v satisﬁes the linear complementarity condition (8. (i) 67 . By the result of part (i). By part (iii).5).19). We solve for A. (iv) x Proof.B= 2rK L(σ 2 +2r) − 1. 1 σ2 2 1 2r Because v is assumed to be bounded.5. both v and v are continuous. but v is not the function vL∗ given by (8. So g(θ) ≥ 0 for any θ ≥ 1. B the equations AL− σ2 + BL = K − L 2r 2r − σ2 AL− σ2 −1 + B = −1. f (x) − (K − x) = 2 σ + 2r L(σ 2 + 2r) (σ 2 + 2r)L 2r Let g(θ) = σ 2 + 2rθ σ2 +1 − (σ 2 + 2r)θ σ2 with θ ≥ 1.13). For x ≥ L.3. 2r 2r 2r 2r 2r 2r 2r σ 2 KL σ2 σ 2 +2r 2r .3. we can start with v(x) = (K − x)+ on [0. v(x) ≥ (K − x) .3. not E[e−rτL ]. For 0 ≤ x ≤ L.8. we get f (x) ≥ (K − x)+ for all x ≥ L.4. By (8.3. on the interval [L.Proof. 2r (v) Proof. Since (K −x)+ is not diﬀerentiable at K. f (x) ≥ BK > 0 = (K − x)+ .3. we can only obtain E[e−(r−a)τL ].18)-(8. B = 0 if and only if 2r σ2 K x − σ2 σ 2 +2r ( L ) 2r 2rK L(σ 2 +2r) − 1 = 0.4. This shows f (x) ≥ (K − x)+ for L ≤ x < K. ∞).This gives us the equations 2r K − x = (K − x )+ = C x + C x− σ2 1 1 1 1 2 1 2r −1 = C − 2r C x− σ2 −1 . 8. So v satisﬁes (8.2.3. Combined. By part (ii). So v satisﬁes (8.3. x1 ] and v(x) = C1 x + C2 x− σ2 on [x1 .3. we must have C1 = 0 and the above equations only have two unknowns: C2 and x1 . Solve them for C2 and x1 . Since limx→∞ v(x) = limx→∞ f (x) = ∞ and limx→∞ vL∗ (x) = limx→∞ (K − L∗ )( L∗ )− σ2 = 0. we are done. This is already shown in part (i) of Exercise 8. In summary. By the assumption of the problem. 1 2 2 rv − rxv − 2 σ x v ≥ 0 for x ≥ 0. ∞). and we obtain A = (iii) Proof. we also showed v satisﬁes (8.20). (i) Proof. Then g(1) = 0 and g (θ) = 2r( σ2 + 1)θ σ2 − (σ 2 + 2r 2r 2r 2r 2r) σ2 θ σ2 −1 = σ2 (σ 2 + 2r)θ σ2 −1 (θ − 1) ≥ 0. Along the way. 8. v(x) = Ax− σ2 = 2r x = (K − L)( L )− σ2 = vL∗ (x). The diﬃculty of the dividend-paying case is that from Lemma 8. So we have to start from Theorem 8. If L ≤ x < K.3. In this case. B > 0.

2. By Itˆ’s formula.9).8. x 1 1 1 x 1 If we set γ = σ2 (r − a − 1 σ 2 ) + σ σ2 (r − a − σ2 )2 + 2r. x > L. −rvL∗ (x) + vL∗ (x)(r − a)x + 2 vL∗ (x)σ 2 x2 = −r(K − x) + (−1)(r − a)x = −rK + ax. u 1 + 2 σ σ 2u u2 + 3 σ4 σ u2 + 2r σ2 2 + u 1 + 2 σ σ u2 + 2r σ2 u2 + 2r u σ2 + u2 u + σ2 σ u2 + 2r σ2 u2 + 2r σ2 1 u2 + 2r + 2 σ2 σ u2 u − 2 2σ σ u2 1 + 2r − 2 σ 2 u2 u2 u + 2r + 2 + 2 σ σ σ 1 If x < L∗ . we get d e−rt vL∗ (St ) = −e−rt 1{St <L∗ } (rK − aSt )dt + e−rt vL∗ (St )σSt dWt . By (8. we can write E[e−rτL ] as e−γ log L = ( L )−γ .Proof. then St = L if and only if −Wt − σ (r − a − 1 − σ log x L 1 1 2 σ (r−a− 2 σ )+ 1 σ2 (r−a− 1 σ 2 )2 +2r 2 . (iii) Proof. By Theorem 8. 2 If x > L∗ . we have 1 r + (r − a)γ − σ 2 γ(γ + 1) 2 1 2 2 1 = r − σ γ + γ(r − a − σ 2 ) 2 2 1 = r − σ2 2 1 = r − σ2 2 = r− = 0. (ii) Proof. 68 . we have L∗ = γK γ+1 . we have o 1 2 d e−rt vL∗ (St ) = e−rt −rvL∗ (St ) + vL∗ (St )(r − a)St + vL∗ (St )σ 2 St dt + e−rt vL∗ (St )σSt dWt . 1 −r − (r − a)γ + σ 2 γ(γ + 1) . L Set ∂ ∂L vL (x) = 0 and solve for L∗ . 2 1 By the deﬁnition of γ. if we deﬁne u = r − a − 2 σ 2 . 0≤x≤L x −γ (K − L)( L ) . Assume S0 = x. 1 −rvL∗ (x) + vL∗ (x)(r − a)x + vL∗ (x)σ 2 x2 2 −γ x 1 x−γ−2 x−γ−1 = −r(K − L∗ ) + (r − a)x(K − L∗ )(−γ) −γ + σ 2 x2 (−γ)(−γ − 1)(K − L∗ ) −γ L∗ 2 L∗ L∗ = (K − L∗ ) x L∗ −γ ∂ ∂L vL (x) x = −( L )−γ (1 − γ(K−L) ). So 2 the risk-neutral expected discounted pay oﬀ of this strategy is vL (x) = K − x.3. Combined. E[e−rτL ] = e 1 2 )t 1 . St = S0 eσWt +(r−a− 2 σ 1 x 1 2 2 σ )t = σ log L .

vL∗ (x) = K − x = (K − x)+ . 2 2 We prove this inequality as follows. a and σ (K and σ are assumed to be strictly positive. For any τ ∈ Γ0. vL∗ (x) > 0 = (K − x)+ . 69 . r and a are assumed to be γK non-negative).T . so it suﬃces to show vL∗ (x) ≥ (K − x)+ γK in our problem. 8. by γ ≥ 0. then necessarily r < a. rK − aL∗ < 0. since L∗ = γ+1 ≤ K. ﬁnally. So for L∗ ≤ x ≤ K. we have obtained a contradiction. This is further equivalent to proving rK − aL∗ ≥ 0. L∗ = γ+1 < K. The proof is similar to that of Corollary 8. Xt = e−rt (St − K)+ is a submartingale. Part (iii) already proved the supermartingale-martingale property.8.T .6.7. Proof. for 0 ≤ x < L∗ . where we take the convention that e−rτ (Sτ −K)+ = 0 when τ = ∞. (iv) Proof. then θ < 0 and γ = 1 σ 2 (r − a − 1 σ2 ) + 2 1 σ 1 σ 2 (r 1 − a − 2 σ 2 )2 + 2r = − 1 σ) + 2 1 σ (θ − 1 σ)2 + 2r.3. Indeed. We have 2 ⇐⇒ ⇐⇒ ⇐⇒ ⇐⇒ ⇐⇒ ⇐⇒ ⇐⇒ (r − a)γ + r < 0 (r − a) 1 1 1 (θ − σ) + σ 2 σ 1 (θ − σ)2 + 2r + r < 0 2 r(γ + 1) − aγ < 0 1 1 θ(θ − σ) + θ (θ − σ)2 + 2r + r < 0 2 2 1 2 1 θ (θ − σ) + 2r < −r − θ(θ − σ)(< 0) 2 2 1 2 1 1 2 2 2 θ [(θ − σ) + 2r] > r + θ (θ − σ)2 + 2θr(θ − σ 2 ) 2 2 2 0 > r2 − θrσ 2 0 > r − θσ 2 . Theorem 8. the inequality is further reduced to r(γ + 1) − aγ ≥ 0. Since θσ 2 < 0. By Lemma 8. vL∗ (x) − (K − x)+ ≥ 0.5. e−rt∧τL∗ vL∗ (St ∧τL∗ ) is a martingale.Following the reasoning in the proof of Theorem 8. Deﬁne θ = 1 σ (θ r−a σ .1. For x ≥ K > L∗ . we only need to show 1{x<L∗ } (rK − ax) ≥ 0 to ﬁnish γK the solution.1 implies E[e−rT (ST − K)+ ] ≥ E[e−rτ ∧T (Sτ ∧T − K)+ ] ≥ E[e−rτ (Sτ − K)+ 1{τ <∞} ] = E[e−rτ (Sτ − K)+ ]. and vL∗ (x) ≥ (K −x)+ .5. dx γ + 1 γ+1 L∗ L∗ and (vL∗ (x) − (K − x))|x=L∗ = 0. So our initial assumption is incorrect.T E[e−rτ (Sτ − K)+ ]. and rK − aL∗ ≥ 0 must be true.6.6 are that e−rt vL∗ (St ) is a supermartingale. Plug L∗ = γ+1 into the expression and 1 1 1 note γ ≥ σ σ2 (r − a − 1 σ 2 )2 + σ2 (r − a − 1 σ 2 ) ≥ 0. Assume for some K.3. −γ−1 x−γ−1 1 d L∗ γK (vL∗ (x) − (K − x)) = −γ(K − L∗ ) −γ + 1 ≥ −γ(K − L∗ ) −γ + 1 = −γ(K − ) γK + 1 = 0. As shown before. we have vL∗ (x) ≥ (K − x)+ ≥ 0 for all x ≥ 0. Combined.3. r. Note the only properties used in the proof of Corollary 8. Since τ is arbitrarily chosen. this means r(γ + 1) − aγ < 0. The other direction “≤” is trivial since T ∈ Γ0. for L∗ ≤ x ≤ K. 8. E[e−rT (ST − K)+ ] ≥ maxτ ∈Γ0.

Therefore its foreign price is E f domestic currency. where M f Q can be seen as the price process of one unit foreign currency. QM f . P ). Suppose λ ∈ [0. Change of Num´raire e To provide an intuition for change of num´raire. We consider the following market: a domestic money market account M (M0 = 1). ¯ . g((1 − λ)x1 + λx2 )} ≤ (1 − λ)h(x1 ) + λh(x2 ). its payoﬀ in foreign currency is fT /QT . we have f ((1 − λ)x1 + λx2 ) ≤ (1 − λ)f (x1 ) + λf (x2 ) ≤ (1 − λ)h(x1 ) + λh(x2 ).1 or [5] page 383. (resp. This is a change of num´raire in the market denominated by domestic currency. e Several results hold under this model. Here B and ¯ ¯ B are both one-dimensional while S could be a vector price process. then both of them can be chosen as num´aire. Dt The RHS is exactly the price of fT in domestic market if we apply risk-neutral pricing. Q Q Mf Q S M . then corresponding to B (resp. 9. Similarly. Suppose the domestic vs. 9. denominated in domestic currency. Finally. Third. such that B B B B both arbitrage-free and complete. Shiryaev [5] page 413 Remark and page 481). S).1. 1] and 0 ≤ x1 ≤ x2 . if the market is arbitrage-free. then fT ¯ ¯ fT Bt E ¯ |Ft = Bt E |Ft .1. Suppose B and B are both strictly positive. Under the above set-up. we give a summary of results for change of e num´raire in discrete case. Denominated by domestic currency. if the market is P (resp. (i) 70 . So h((1 − λ)x1 + λx2 ) = max{f ((1 − λ)x1 + λx2 ). for a European contingent claim fT . no-arbitrage and completeness properties of market are independent of the choice of num´raire (see. P ). foreign currency exchange rate is Q.Proof. for example. the traded M QM f S . the foreign risk-neutral measure is dP f = f QT MT f Q0 M0 M0 dP = MT E f QT DT MT dP Q0 on FT . BT BT That is. g((1 − λ)x1 + λx2 ) ≤ (1 − λ)h(x1 ) + λh(x2 ). there is an equivalent probability ¯ B S B S ¯ ¯ . This summary is based on Shiryaev [5]. the traded assets are (M. S) as in [1] Deﬁnition 2. e ¯ Second. Convert this price into |Ft . B). Foreign risk-neutral measure P f is such that is a P f -martingale. M . the price of fT is independent of the choice of num´raire. First. B. M f Q. we have Qt E f formula. we have the relation ¯ dP = ¯ BT BT E 1 ¯ B0 B0 dP . Use the relation between P f and P on FT and the Bayes f DT fT f Dt QT Qt E f |Ft = E DT fT |Ft . Denominated by foreign currency. h is also convex. M is a P -martingale. if fT is a European contingent claim with maturity N and the market is both arbitrage-free and complete. we get f DT fT f Dt QT f DT fT f Dt QT |Ft . Domestic riskneutral measure P is such that assets are S Mf. e ¯ Consider a model of ﬁnancial market (B. If we e assume the market is arbitrage-free and complete. a foreign money market account f M f (M0 = 1). Note Q is not a traded asset. That is. e The above theoretical results can be applied to market involving foreign money market account. a (vector) asset price process S called stock. ¯ ) is a martingale under P (resp. from M to M f Q.

71 .Proof. So 1 2 −1 = d(N0 e−ν W3 (t)−(r− 2 ν −1 = N0 e )t ) 1 −ν W3 (t)−(r− 2 ν 2 )t 1 1 −νdW3 (t) − (r − ν 2 )dt + ν 2 dt 2 2 = Nt−1 [−νdW3 (t) − (r − ν 2 )dt]. (i) Proof.2. For any 0 ≤ t ≤ T . M2 (T ) M2 (t) M2 (T ) M2 (t) M2 (t) is a martingale under P M2 . d Mt = Mt d 1 Nt + 1 dMt + d Nt 1 Nt dMt = Mt (−νdWt − rdt) + rMt dt = −ν Mt dWt . Let M1 (t) = Dt St and M2 (t) = Dt Nt /N0 .3.2.2. Remark: This can also be obtained directly from Theorem 9. we have 1 1 [−νdWt − (r − ν 2 )dt + ν 2 dt] = Nt−1 (−νdWt − rdt).2.2.6) is P (M2 ) as deﬁned in (N ) S S Remark 9. (iii) Proof. E (M2 ) So M1 (t) M2 (t) M2 (T ) M1 (T ) E[M1 (T )|Ft ] M1 (T ) M1 (t) Ft = E Ft = = . by Lemma 5. 2 2 −1 d(Nt−1 ) = N0 e−ν Wt −(r− 2 ν 2 )t (ii) Proof. To avoid singular cases. Hence M1 (t) = Nt N0 is a martingale under P (N ) .5. dXt 1 1 1 + dXt + d dXt Nt Nt Nt 1 1 1 = (∆t St + Γt Mt )d + (∆t dSt + Γt dMt ) + d (∆t dSt + Γt dMt ) Nt Nt Nt 1 1 1 1 1 1 + dSt + d dSt + Γt Mt d + dMt + d = ∆t St d Nt Nt Nt Nt Nt Nt = d = Xt d = ∆t dSt + Γt dMt . Nt = N0 eν W3 (t)+(r− 2 ν dNt−1 1 2 )t . Xt Nt dMt 9. we need to assume −1 < ρ < 1. which implies St = Nt is a martingale M2 (t) t t under P (N ) . 9. Since Nt−1 = N0 e−ν Wt −(r− 2 ν 1 1 2 )t . (ii) Proof.5. Then P (N ) as deﬁned in (9.2. (i) −1 Proof.

This problem is the same as Exercise 4.2.2. From (9. and dW2 (t)dW1 (t) = − √ ρ dt + √ ρ 1−ρ2 dt = 0. (N ) ρdt (N ) St (ν 2 (N ) St (σdW1 (t) σ γ W1 (t) σ 2 − 2ρσν + ν 2 and W4 (t) = [W4 ]t = − ν γ W3 (t). and dNt = rNt dt + νNt dW3 (t) = rNt dt + νNt [ρdW1 (t) + (iii) 1 − ρ2 dW2 (t)]. ν 9. Under P . So W2 is a BM independent of W1 . St e (ii) has volatility γ = σ 2 − 2ρσν + ν 2 . Proof. v2 ) = (σ − νρ. the volatility vector for S (N ) is (v1 . we deﬁne W2 (t) = √−ρ 2 W1 (t) + √ 1 1−ρ 1−ρ2 W3 (t). f Proof. In particular. and the volatility vector for N is (νρ.3. 9. 0). the volatility vector for S is (σ. we have Mtf Qt = M0 Q0 e f Dt f = D0 Q−1 e− 0 Qt t 0 σ2 (s)dW3 (s)+ t 2 (Rs − 1 σ2 (s))ds 2 0 . γ2 γ r (N ) By L´vy’s Theorem. So under P . Proof. So t 0 σ2 (s)dW3 (s)− t 2 (Rs − 1 σ2 (s))ds 2 0 72 . By Theorem 1 − ρ2 . ν dW1 (t) dW2 (t) 1 − ρ2 ) · dW1 (t) .15). dW2 (t) 1 − ρ2 ). 0) · dNt = rNt dt + νNt dW3 (t) = rNt dt + Nt (νρ. under the measure P (N ) . with (dW2 (t)) = 2 − ρ 1 − ρ2 1−ρ2 dW1 (t) + 1 1 − ρ2 2 dW3 (t) = 1 2ρ2 ρ2 + − 1 − ρ2 1 − ρ2 1 − ρ2 dt = dt.13. consistent with the result of part (i). then W4 is a martingale with quadratic σν ν2 σ2 t − 2 2 ρt + 2 t = t. W4 is a BM and therefore.4. and dSt = rSt dt + σSt dW1 (t) = rSt dt + St (σ. then W2 is a martingale. W2 ) is a two-dimensional BM.and dSt (N ) = Nt−1 dSt + St dNt−1 + dSt dNt−1 = Nt−1 (rSt dt + σSt dW1 (t)) + St Nt−1 [−νdW3 (t) − (r − ν 2 )dt] (N ) = St = Deﬁne γ = variation (rdt + σdW1 (t)) + St − σρ)dt + (N ) [−νdW3 (t) − (r − ν 2 )dt] − σSt − νdW3 (t)). (W1 . −ν the volatility of S (N ) is 2 2 v1 + v2 = (σ − νρ)2 + (−ν 1 − ρ2 )2 = σ 2 − 2νρσ + ν 2 .

Qt 73 . Proof. we note d f Dt St Qt = = f Dt dSt + St d Qt f Dt Qt + dSt d f Dt Qt f f St Dt Dt 2 St (Rt dt + σ1 (t)dW1 (t)) + [−σ2 (t)dW3 (t) − (Rt − σ2 (t))dt] Qt Qt +St σ1 (t)dW1 (t) = = f Dt (−σ2 (t))dW3 (t) Qt f Dt St 2 [σ1 (t)dW1 (t) − σ2 (t)dW3 (t) + σ2 (t)dt − σ1 (t)σ2 (t)ρt dt] Qt f Dt St f f [σ1 (t)dW1 (t) − σ2 dW3 (t)].3. we have 2 St S0 σ4 W4 (t)+(r−a− 1 σ4 )t 2 = e and d Qt Q0 St Qt = St [σ4 dW4 (t) + (r − a)dt]. We combine the solutions of all the sub-problems into a single solution as follows.5. The payoﬀ of a S quanto call is ( QT − K)+ units of domestic currency at time T . 2 2 σ4 σ4 σ1 − σ2 ρ σ2 1 − ρ2 W1 (t) − W2 (t).3.23). By formula (9. its price at T S time t is E[e−r(T −t) ( QT − K)+ |Ft ]. Deﬁne 2 1−ρ2 W2 (t)+(r−r f − 1 σ2 )t 2 . we note d f Mt D t Qt = = = = Mt d f Dt Qt + f Dt dMt + dMt d Qt f Dt Qt f f Mt Dt Rt Mt Dt 2 [−σ2 (t)dW3 (t) − (Rt − σ2 (t))dt] + dt Qt Qt − − f Mt Dt 2 (σ2 (t)dW3 (t) − σ2 (t)dt) Qt f Mt Dt f σ2 (t)dW3 (t).3. σ4 σ4 Then W4 is a martingale with [W4 ]t = So W4 is a Brownian motion under P .and d f Dt Qt = f Dt 1 2 1 2 Df 2 [−σ2 (t)dW3 (t) − (Rt − σ2 (t))dt + σ2 (t)dt] = t [−σ2 (t)dW3 (t) − (Rt − σ2 (t))dt]. we have St = S0 e Qt = Q0 eσ2 W3 (t)+(r−r √ 2 f 2 − 1 σ2 )t 2 2 σ1 W1 (t)+(r− 1 σ1 )t 2 and √ = Q0 eσ2 ρW1 (t)+σ2 .3.22).14) and (9. By risk-neutral pricing formula. Qt To get (9. So 2 if we set a = r − rf + ρσ1 σ2 − σ2 . Qt 9. Qt 2 2 Qt To get (9.16). So we need to ﬁnd the SDE for T St Qt under risk-neutral measure P . So St Qt = S0 (σ1 −σ2 ρ)W1 (t)−σ2 Q0 e 2 2 1−ρ W2 (t)+(r f + 1 σ2 − 1 σ1 )t 2 2 σ4 = 2 (σ1 − σ2 ρ)2 + σ2 (1 − ρ2 ) = 2 2 σ1 − 2ρσ1 σ2 + σ2 and W4 (t) = 2 (σ1 −σ2 ρ)2 t + σ2 (1−ρ ) t + t.

T ) 3σ dt + √ . Qt behaves like dividend-paying stock and the price of the quanto call option is like the t price of a call option on a dividend-paying stock. d+ (t)+d− (t) = (iii) Proof. T )e−d+ (t)/2 − Ke−d− (t) 2 2 √1 σ 2 (T σ T −t √ √ − t) = σ T − t. dd+ (t) 1 1 1√ ForS (t. (i) Proof.T ) . (dd− (t))2 = (dd+ (t))2 = (vii) 1 Proof. log dt − √ K 2σ(T − t)3/2 4 T −t T −t = = = (v) √ Proof. 9. By (iv) and (v). So d− (t) = d+ (t) − σ T − t. 2 T −t dt T −t . −t (viii) 74 .T ) 2 K ] e−d+ (t)/2 [ForS (t.12) gives us the desired price formula for quanto call option. Thus formula (5.T ) . 2 2 2 (iv) Proof. √2 σ T −t log ForS (t. ForS (t. d+ (t) − d− (t) = (ii) Proof. T ) − Ked+ (t)/2−d− (t)/2 ] ForS (t. dd− (t) = dd+ (t) − d(σ T − t) = dd+ (t) + (vi) Proof.5.6. + − K K = = = e−d+ (t)/2 [ForS (t.S Therefore. T ) − Kelog 0. T ) 1 2 dForS (t. T ) σ 1 1 2 1 2 log dt + √ dt + √ (σdW T (t) − σ dt − σ dt) K 2 2 4 T −t σ T −t 2σ (T − t)3 dW T (t) 1 ForS (t. T ) 1 ForS (t. T ) 2 σ T − t ForS (t. T ))2 + σ (T − t)]dt + √ − − σdt 1σ (T − t)3 [log 2 2 K 2 2ForS (t. dd+ (t)+ 1 √1 e− 2 2π d2 (t) + 2 (−d+ (t)) Tdt . T ) (dForS (t. under P . dN (d+ (t)) = N (d+ (t))dd+ (t)+ 2 N (d+ (t))(dd+ (t))2 = √1 e− 2π d2 (t) + 2 σdt √ . So d2 (t)−d2 (t) = (d+ (t)+d− (t))(d+ (t)−d− (t)) = 2 log ForS (t.

T )dN (d+ (t)) − KdN (d− (t)) 2 2 1 d+ (t) σForS (t. Using the notation I1 (t).T ) e−d+ (t)/2 . T )dN (d+ (t)) = σForS (t.1. T −t 2π 2π(T − t) T 2 2 (x) Proof. if λ1 = λ2 . √ e−d− (t)/2 dd+ (t) + dt − 2π 2(T − t) 2π 2π(T − t) 2 d2 (t) − dt = (ix) Proof. (i) Proof. I3 (t) and I4 (t) introduced in the problem. T )d+ (t) −d2 (t)/2 σForS (t. 75 . we can write Y1 (t) and Y2 (t) as Y1 (t) = e−λ1 t Y1 (0) + e−λ1 t I1 (t) and Y2 (t) = λ21 λ21 −λ1 t − e−λ2 t )Y1 (0) + e−λ2 t Y2 (0) + λ1 −λ2 e−λ1 t I1 (t) − e−λ2 t I2 (t) − e−λ2 t I3 (t). dN (d− (t)) 1 = N (d− (t))dd− (t) + N (d− (t))(dd− (t))2 2 = d2 (t) − 1 √ e− 2 2π σdt dd+ (t) + √ 2 T −t 2 dt 1 e− 2 √ (−d− (t)) + 2 T −t 2π d2 (t)(σ − √ T −t−d+ (t)) d2 (t) − = 2 2 1 e− σe−d− (t)/2 √ e−d− (t)/2 dd+ (t) + √ dt + 2π 2(T − t) 2π 2 2π(T − t) 2 1 σe−d− (t)/2 d+ (t)e− 2 √ dt. 2 2 The last “=” comes from (iii). 1 e−d+ (t)/2 σForS (t.Proof. T ) √ e−d+ (t)/2 dd+ (t) − dt 2π 2(T − t) 2π 2π(T − t) 2 −K e−d− (t)/2 √ dd+ (t) + 2π 2 σ 2π(T − t) e−d− (t)/2 dt − 2 2 2 d+ (t) √ e−d− (t)/2 dt 2(T − t) 2π 2 2 Kσe−d− (t)/2 Kd+ (t) ForS (t. if λ1 = λ2 . λ1 −λ2 (e −λ1 t −λ1 t −λ1 t −λ21 te Y1 (0) + e Y2 (0) − λ21 te I1 (t) − e−λ1 t I4 (t) + e−λ1 t I3 (t). T )e−d+ (t)/2 √ √ e−d− (t)/2 dt = e + − − + 2(T − t) 2π 2(T − t) 2π 2π(T − t) 2π(T − t) 2 2 1 +√ ForS (t. ForS (t. T )e−d+ (t)/2 − Ke−d− (t)/2 dd+ (t) 2π = 0. T )e−d+ (t)/2 √ dForS (t. I2 (t). T )dN (d+ (t)) + dForS (t. Term-Structure Models 10. T )e−d+ (t)/2 √ e−d+ (t)/2 dt + = ForS (t. which implies e−d− (t)/2 = ForS (t. T )dW (t) √ dW T (t) = dt. K 10.

Y1 (t). λ1 + λ2 E[I1 (t)I3 (t)] = 0. The calculation relies on the following fact: if Xt and Yt are both martingales. (ii) 76 . Y1 (t). Y2 (t))dt+ df (t. T ) = E[e− t Rs ds |Ft ] = f (t. (ii) Proof. In particular.2. λ1 −λ2 (e −λ21 te−λ1 t Y1 (0) + e−λ1 t Y2 (0). E[I1 (t)I2 (t)] = 2λ1 t t e(λ1 +λ2 )u du = 0 e(λ1 +λ2 )t − 1 . Y2 (t))(µ − λ1 Y1 (t)) + fy2 (t. Y2 (t))(σ21 Y1 (t) + α + βY1 (t))]dt + martingale part. Y1 (t). · · · . Y1 (t). 2λ1 2λ2 4λ3 1 1 (iii) Proof. Then d(Dt B(t. T )) = Dt [−Rt f (t. (i) Proof. Y2 (t))σ21 Y1 (t) + fy1 y1 (t.Since all the Ik (t)’s (k = 1. Assume B(t. E[I1 (t)I4 (t)] = 0 ue2λ1 u du = 1 e2λ1 t − 1 te2λ1 t − 2λ1 2λ1 and t 2 E[I4 (t)] = 0 u2 e2λ1 u du = t2 e2λ1 t te2λ1 t e2λ1 t − 1 − + . Y2 (t)) + fy1 (t. Y1 (t). Y2 (t))Y1 (t) 2 1 2 + fy2 y2 (t. 2 T Since Dt B(t. Y1 (t). Y ]t }. 4) are normally distributed with zero mean. if λ1 = λ2 . By Itˆ’s formula. Thus t 2 E[I1 (t)] = 0 e2λ1 u du = e2λ1 t − 1 . we can conclude E[Y1 (t)] = e−λ1 t Y1 (0) and E[Y2 (t)] = λ21 −λ1 t − e−λ2 t )Y1 (0) + e−λ2 t Y2 (0). we must have −(δ0 + δ1 y1 + δ2 y2 ) + ∂ ∂ ∂ 1 + (µ − λ1 y1 ) − λ 2 y2 + ∂t ∂y1 ∂y2 2 2σ21 y1 ∂2 ∂2 ∂2 2 + y1 2 + (σ21 y1 + α + βy1 ) 2 ∂y1 ∂y2 ∂y1 ∂y2 f = 0. λ1 + λ2 10. Y1 (t). E[Xt Yt ] = E{[X. Y2 (t))(−λ2 )Y2 (t)] 1 +fy1 y2 (t. Following the hint. Y1 (t). Y1 (t). if λ1 = λ2 . Y1 (t). then Xt Yt − [X. Y2 (t)) = [ft (t. o df (t. we have t E[I1 (s)I2 (t)] = E[J1 (t)I2 (t)] = 0 e(λ1 +λ2 )u 1{u≤s} du = e(λ1 +λ2 )s − 1 . Y ]t is also a martingale. T ) is a martingale. Y2 (t)). Y2 (t))].

T )f (t. (i) Proof. y2 ) = 0. T. If we suppose f (t. T ) − fy1 (t.3. T. T. y1 .T )−A(t. y1 . y2 ) = −C1 (t. = C1 (T − t)C2 (T − t)f . d(Dt B(t. T ) − y2 C2 (t. Y2 (t))] and df (t. T. T ) − A(t. Y1 (t). T. y1 . 2 2 77 . 10. So the PDE in part (i) becomes −(δ0 +δ1 y1 +δ2 y2 )+y1 C1 +y2 C2 +A −(µ−λ1 y1 )C1 +λ2 y2 C2 + 1 2 2 2 2σ21 y1 C1 C2 + y1 C1 + (σ21 y1 + α + βy1 )C2 = 0. Y2 (t)) + fy1 (t. Y2 (t))dt + df (t. ∂f ∂y2 = −C2 (T − t)f . T. we can obtain the ODEs for C1 .Proof. Y1 (t). y ) = −C (t. y . Y1 (t). T. T. T ) is a martingale under risk-neutral measure. y1 . T. T )f (t. T. y2 ) = e−y1 C1 (t. T. 2 In other words. y1 . Y1 (t).7. T ) + C1 (t. Y2 (t))(−λ21 Y1 (t) − λ2 Y2 (t)) 1 1 + fy1 y1 (t.T )−y2 C2 (t. y2 ) = −y1 dt C1 (t. y1 . Y1 (t). f (t. then d d ft (t. ∂ f ∂y1 ∂y2 2 ∂ ∂t f = [y1 C1 (T − t) + y2 C2 (T − t) + ∂2f 2 ∂y1 2 = C1 (T − t)f . T. T ) dt dt dt 1 2 1 2 +(λ21 y1 + λ2 y2 )C2 (t. T ) − y2 dt C2 (t. T ) + λ1 y1 C1 (t. T ) f (t. y . we have the following PDE: −(δ0 (t) + δ1 y1 + δ2 y2 ) + 1 ∂ ∂ ∂ ∂ 1 ∂2 − λ1 y1 − (λ21 y1 + λ2 y2 ) + 2 + 2 ∂y 2 f (t. check! 2 C = −λ2 C2 + δ2 2 1 2 A = µC1 − 2 αC2 + δ0 . y2 1 2 2 1 2 fy1 y2 (t. y1 . y2 ). y1 . d d d C1 (t. Y2 (t)) + fy2 y2 (t. y2 ). T. T ) = 0. So the PDE becomes −(δ0 (t) + δ1 y1 + δ2 y2 ) + −y1 d dt A(t. T. T. T. y1 . Y1 (t).4). 2 2 Since Dt B(t. T ) + C2 (t. y2 ) = C2 (t. y2 ). T. T )C2 (t. y2 ) = e−y1 C1 (T −t)y2 C2 (T −t)−A(T −t) . T )f (t. T. and ∂ f 2 ∂y2 2 ∂ ∂y1 f = −C1 (T − t)f . y2 ). T. Y2 (t))]dt + martingale part. Y1 (t). y1 .T ) . y1 . y1 . Y1 (t). T. 2 Sorting out the LHS according to the independent variables y1 and y2 . fy y (t. ∂t ∂y1 ∂y2 2 ∂y1 2 Suppose f (t. y2 ). y ). T. we get 1 1 2 2 2 −δ1 + C1 + λ1 C1 + σ21 C1 C2 + 2 C1 + 2 (σ21 + β)C2 = 0 −δ + C2 + λ2 C2 = 0 2 2 −δ0 + A − µC1 + 1 αC2 = 0. T )) = Dt [−Rt f (t. then A (T − t)]f . y2 ) = C1 (t. T )f (t. T. y2 ) = C 2 (t. Y2 (t))(−λ1 Y1 (t)) + fy2 (t. 1 1 1 2 fy2 y2 (t. T )f (t. y1 . 2 = C2 (T − t)f . C2 and A as follows 1 2 2 2 C1 = −λ1 C1 − σ21 C1 C2 − 2 C1 − 1 (σ21 + β)C2 + δ1 diﬀerent from (10. Y2 (t)) = [ft (t.

if λ1 = λ2 . T ) − Y2 (0) C2 (0. T ) = λ21 δ2 −λ1 (T −t) λ2 λ1 (e λ21 δ2 −λ1 (T −t) λ2 λ1 (e δ2 − λ21λ1 (e−λ1 T − e−λ1 t ) − λ2 δ2 − λ21λ1 (e−λ1 T − e−λ1 t ) − λ2 λ21 δ2 −λ2 T e(λ2 −λ1 )T −e(λ2 −λ1 )t δ + λ1 (e−λ1 T λ2 e λ2 −λ1 1 λ21 δ2 −λ2 T δ (T − t) + λ1 (e−λ1 T − e−λ1 T ) λ2 e 1 − e−λ1 T ) if λ1 = λ2 if λ1 = λ2 . 2 (iv) Proof. Y2 (0)) = e−Y1 (0)C1 (0. T. T ) for all T > 0. T )] = −e−λ2 t δ2 from the ODE in (i). T ) + 1 C1 (t. T ) = 2 C1 (t. T ) − δ2 dt 2 d 1 2 1 2 dt A(t. T ) − δ1 )e−λ1 t = (e − e−λ2 T +(λ2 −λ1 )t ) − δ1 e−λ1 t . T ) = −δ2 t e−λ2 s ds = λ2 (e−λ2 T − e−λ2 t ). T. T ) = −Y1 (0)C1 (0. − 1) + − 1) + λ21 δ2 e−λ1 (T −t) −e−λ2 (T −t) δ − λ1 (e−λ1 (T −t) − 1) λ2 λ2 −λ1 1 λ21 δ2 −λ2 T +λ1 t δ (T − t) − λ1 (e−λ1 (T −t) − 1) λ2 e 1 if λ1 = λ2 . we get T log B(0.r. T ) − Y2 (0)C2 (0. For C2 . T ) − δ0 (t). T ) + λ21 C2 (t. T )) ds. T ) 2 2 = 1 (C1 (t. T ) = λ2 C2 (t. T ) + C2 (T. T ) + 2 C2 (t. we have ∂ ∂ ∂ 1 2 1 2 log B(0. dt λ2 Integrate from t to T . T ) − δ1 d C (t. T )) − δ0 (s) ds. From the ODE d dt A(t. T ) = 0 2 −δ − d C (t. T ) + C2 (t.T ) = B(0. we have T δ δ 0 − e−λ2 t C2 (t. Take logarithm on both sides and plug in the expression of A(t. T )) − δ0 (t). we get 1 2 d 2 −δ0 (t) − dt A(t. (iii) Proof. T ) + λ21 C2 (t. T ) = 0. T ) + 0 1 2 2 (C (s. So C2 (t. (ii) d Proof. For C1 .T )−A(0. T ) = λ1 C1 (t. we get 2 T A(t. Y1 (0). That is d dt C1 (t. T ). T ) + λ2 C2 (t. T ) + C1 (T. T ) + C2 (s.t. We want to ﬁnd δ0 so that f (0. T ) − δ0 (T ). T ) = So C1 (t. T ) = 0 1 dt 1 d −δ2 − dt C2 (t. T ) + C2 (s. ∂T ∂T ∂T 2 2 78 . T )) = (λ21 C2 (t. T ) + λ1 C1 (t. T ) = t 1 2 2 δ0 (s) − (C1 (s. we note 2 2 λ21 δ2 −λ1 t d −λ1 t (e C1 (t.T )−Y2 (0)C2 (0. we note dt [e−λ2 t C2 (t. T ) + 2 C2 (t. 2 1 Taking derivative w. Integrate from t to T .Sorting out the terms according to independent variables y1 and y2 . we get −e−λ1 t C1 (t. T ) = λ2 (1 − e−λ2 (T −t) ). T ) = −Y1 (0) C1 (0.

− √ ρ B1 + √ 1 B2 1−ρ2 ρt −√ 2 + 1−ρ B2 = ρ2 t 1−ρ2 √ ρt −1 1−ρ2 = 0. (iii) Proof. Therefore W is a two-dimensional BM.4.So δ0 (T ) = −Y1 (0) = 10. (ii) Proof. dXt = dXt + Ke−Kt 0 eKu Θ(u)dudt − Θ(t)dt t = −KXt dt + ΣdBt + Ke−Kt 0 eKu Θ(u)dudt = −K Xt dt + ΣdBt . and W 1 . T ) ∂T ∂T ∂T −λ1 T −Y1 (0) δ1 e−λ1 T − −Y1 (0) δ1 e − λ21 δ2 −λ2 T ∂ − Y2 (0)δ2 e−λ2 T − ∂T log B(0. (i) Proof. W 2 t = B1. 0 . W 2 1−ρ2 = −√ ρ 1−ρ2 B1 + √ 1 = = t. Moreover. ρ t + 1−ρ2 −2 1−ρ2 ρt = 1−ρ2 So W is a martingale with W 1 ρ2 +1−2ρ2 t 1−ρ2 = B1 t = t. dYt = CdXt = −CK Xt dt+CΣdBt = −CKC Yt dt+dWt = −ΛYt dt+dWt . t t Xt = = = Xt + e−Kt 0 eKu Θ(u)du = C −1 Yt + e−Kt 0 eKu Θ(u)du Θ(u)du σ1 ρσ2 σ2 0 1 − ρ2 Y1 (t) + e−Kt Y2 (t) t e 0 Ku σ1 Y1 (t) + e−Kt ρσ2 Y1 (t) + σ2 1 − ρ2 Y2 (t) 79 t eKu Θ(u)du. T ) λ2 e ∂ λ21 δ2 e−λ2 T T − Y2 (0)δ2 e−λ2 T − ∂T log B(0. T ) − log B(0. T ) if λ1 = λ2 if λ1 = λ2 . Wt = CΣBt = 1 σ1 − √ρ 2 σ1 1−ρ t 0 √1 σ2 1−ρ2 t σ1 0 0 Bt = σ2 1−ρ2 t 1 −√ ρ 1−ρ2 0 √1 t Bt . T ) − Y2 (0) C2 (0. where 1 √1 0 1 σ2 1−ρ2 0 λ1 0 σ1 · Λ = CKC −1 = − √ρ ρ 1 √1 −1 λ2 |C| σ √1−ρ2 σ1 σ1 1−ρ2 σ2 1−ρ2 1 = ρλ − √1 σ1 λ1 σ1 1−ρ2 − √1 σ2 1−ρ2 0 λ √2 σ2 1−ρ2 σ1 ρσ2 σ2 0 1 − ρ2 = λ1 ρσ2 (λ2 −λ1 )−σ1 √ σ2 1−ρ2 0 λ2 . t ∂ ∂ ∂ C1 (0.

(i) δ0 Proof. WjT (t) = u)du + Wj (t) (j = 1. By (10. If δ2 = 0.20). t + τ ) aare constants ¯ ¯ when τ is ﬁxed. Y2 (t)) = de−Y1 (t)C1 (T −t)−Y2 (t)C2 (T −t)−A(T −t) = dt term + B(t. T ) and A(t.3. δ2 2 δ1 2 + δ2 dW2 (t) .7. So C(t. T ) under P is (−C1 (T − t). then dRt = δ1 dY1 (t) = δ1 (−λ1 Y1 (t)dt + dW1 (t)) = δ1 ( δ1 − Rt δ1 )λ1 dt + dW1 (t) = (δ0 λ1 − λ1 Rt )dt + δ1 dW1 (t). t + τ ) ¯ ¯ 1 [C(t.6. Y1 (t). t + τ ) and A(t. 2 2 δ1 + δ2 and Bt = √ δ1 2 2 δ1 +δ2 W1 (t) + √ δ2 2 2 δ1 +δ2 So the volatility vector of B(t. dB(t. By (9. ¯ ¯ τ ¯ 1 1 Hence L(t2 )−L(t1 ) = τ C(0. t + τ )] ¯ ¯ ¯ τ B(t. (i) Proof.2. t + τ )[−C(t.2. τ )].So Rt = X2 (t) = ρσ2 Y1 (t)+σ2 1 − ρ2 Y2 (t)+δ0 (t). dRt = δ1 dY1 (t) + δ2 dY2 (t) = −δ1 λ1 Y1 (t)dt + λ1 dW1 (t) − δ2 λ21 Y1 (t)dt − δ2 λ2 Y2 (t)dt + δ2 dW2 (t) = −Y1 (t)(δ1 λ1 + δ2 λ21 )dt − δ2 λ2 Y2 (t)dt + δ1 dW1 (t) + δ2 dW2 (t) = −Y1 (t)λ2 δ1 dt − δ2 λ2 Y2 (t)dt + δ1 dW1 (t) + δ2 dW2 (t) = −λ2 (Y1 (t)δ1 + Y2 (t)δ2 )dt + δ1 dW1 (t) + δ2 dW2 (t) = −λ2 (Rt − δ0 )dt + So a = λ2 δ0 .2. t Ku e Θ(u)du 0 Proof. 10. τ )(t2 −t1 ). t + τ )R (t) − A(t. T ) = df (t. −C2 (T − t)). t + τ )] ¯ ¯ τ ¯ 1 [C(0. where δ0 (t) is the second coordinate of e−Kt and can be derived explicitly by Lemma 10. −C2 (T − t)) dW1 (t) . t + τ )R (t) + A(t. τ )[R(t2 )−R(t1 )]+ τ A(0. We use the canonical form of the model as in formulas (10. Since L(t2 )−L(t1 ) is a linear transformation. So ¯ d Lt dt = − = = B(t. T )(−C1 (T − t).2. (ii) Proof. Then δ1 = ρσ2 and δ2 = σ2 1 − ρ2 .4)-(10. σ = 10. (ii) 80 Cj (T − . b = λ2 . So a = δ0 λ1 and b = λ1 .2. τ )R (t) + A(0. dW2 (t) t 0 2 2 δ1 + δ2 δ1 2 δ1 + 2 δ2 dW1 (t) + W2 (t). 10.5. We note C(t.5). T ) are dependent only on T − t.6). T )[−C1 (T − t)dW1 (t) − C2 (T − t)dW2 (t)] = dt term + B(t. ¯ ¯ ¯ ¯ it is easy to verify that their correlation is 1. 2) form a two-dimensional P T −BM.

Tj−1 )] = δK j=1 j=1 B(0. T ).Proof. we have d B(0.2. at time zero the risk-neutral price V0 of the option satisﬁes V0 1 ¯ ¯ ¯ = ET (e−C1 (T −T )Y1 (T )−C2 (T −T )Y2 (T )−A(T −T ) − K)+ . S0 = 1 and ξ = σW1 + (µ − 1 σ 2 ). Tj )L(0. On each payment date Tj .4. Tj ) − δ j=1 B(0.4) and (10. T )(eµ N (d+ ) − KN (d− )).19). then E[e−µT (S0 eσWT +(µ− 2 σ with d± = and 1 √ σ T 1 2 )T − K)+ ] = S0 N (d+ ) − Ke−µT N (d− ) d (log S0 K 1 + (µ ± 1 σ 2 )T ). We can rewrite (10. By risk-neutral pricing. Tj−1 )). B(0. T ) = 1. Tj )L(0. Tj ) − B(0. Tj )L(0. σ ).12. Tj ) − B(0. 81 . T ) B(T. (iv) Proof. Under the T-forward measure. the payoﬀ of this swap contract is δ(K − L(Tj−1 .11.2. Check!). Its no-arbitrage price at time 0 is δ(KB(0. T )E T [(eX − K)+ ] = B(0. 10. First. Proof. (iii) Proof. Tj−1 ).7. X = 1 2 2 N (µ − 2 σ . So the value of the swap is n+1 n+1 n+1 δ[KB(0. 10. Since under P T . Y2 ) is jointly Gaussian and X is therefore Gaussian. Tj−1 )) by Theorem 10. we get (10. 1 1 where d± = σ (− log K + (µ ± 2 σ 2 )) (diﬀerent from the problem. T ) Note B(T. Then Y1 (t) = Y1 (0)e−λ1 t + Y2 (t) = Y0 e−λ2 t − λ21 t λ1 (s−t) t T e dW1 (s) − 0 C1 (T − s)eλ1 (s−t) ds 0 t t t Y (s)eλ2 (s−t) ds + 0 eλ2 (s−t) dW2 (s) − 0 0 1 C2 (T − s)eλ2 (s−t) ds. So (Y1 . σ 2 ) 2 2 E[(eξ − K)+ ] = eµ N (d+ ) − KN (d− ). then ξ = N (µ − 2 σ 2 . we recall the Black-Scholes formula for call options: if dSt = µSt dt + σSt dWt . the numeraire is B(t.5) as T dY1 (t) = −λ1 Y1 (t)dt + dW1 (t) − C1 (T − t)dt T dY2 (t) = −λ21 Y1 (t)dt − λ2 Y2 (t)dt + dW2 (t) − C2 (T − t)dt. Let T = 1.

E[(σ + 1)Nt −Nu e−λσ(t−u) |Ft ] e−λσ(t−u) E[(σ + 1)Nt−u ] e−λσ(t−u) E[eNt−u log(σ+1) ] e−λσ(t−u) eλ(t−u)(e e −λσ(t−u) λσ(t−u) log(σ+1) ∞ (λt)k −λt k=2 k! e = O(t2 ).T +δ) δB(T. First. Proof. T )] 11. T )|FT ]] 1 − B(T. T + δ)L(0. 1 we have P (Ns+t = k + 1|Ns = k) = P (Nt = 1) = (λt) e−λt = λt(1 − λt + O(t2 )) = λt + O(t2 ). 82 . E[Mt2 − Ms |Fs ] = E[(Mt − 2 2 Ms ) |Fs ] + E[(Mt − Ms ) · 2Ms |Fs ] = E[Mt−s ] + 2Ms E[Mt−s ] = V ar(Nt−s ) + 0 = λ(t − s). For any t ≤ u. So by conditional Jensen’s inequality.1.3. T + δ) δB(T. (ii) 2 Proof. 11. −1) (by (11. we have = E[E[D(T + δ)L(T. P (Ns+t = k|Ns = k) = P (Ns+t − Ns = 0|Ns = k) = P (Nt = 0) = e−λt = 1 − λt + O(t2 ). Similarly. T ) − B(0. T + δ) = E D(T )B(T.4)) e 1. f (x) = x2 is a convex function. ∀s ≤ t. That is. 2 E[Mt2 − λt|Fs ] = Ms − λs. Proof. 11. we have E Su Ft St = = = = = = So St = E[Su |Ft ] and S is a martingale. T + δ) D(T ) − D(T )B(T. and 1! P (Ns+t ≥ k + 2|N2 = k) = P (Nt ≥ 2) = 11. (i) Proof. We note Mt has independent and stationary increment.4.Proof. T + δ) 1 − B(T. T ) = 1−B(T.3.2. Introduction to Jump Processes 11. T + δ) = E E[D(T + δ)|FT ] δB(T. E[f (Mt )|Fs ] ≥ f (E[Mt |Fs ]) = f (Ms ). So ∀s ≤ t.T +δ) ∈ FT . T ). So E[Mt2 ] < ∞. T + δ) = E δ B(0. Since L(T. T + δ) = δ = B(0. Mt2 = Nt2 − 2λtNt + λ2 t2 . E[D(T + δ)L(T. So Mt2 is a submartingale.

So E[eXt ] ≡ 1.e.r. By letting t = 1. · · · . · · · . N1 (tn ) − N1 (tn−1 )) is independent of (N2 (t1 ). 83 . N2 (t2 ) − N2 (t1 ).. Suppose N1 and N2 are independent. Let t0 = 0. (Ft )t≥s . we have E[e n n i=1 ui (N1 (ti )−N1 (ti−1 ))+ n n j=1 vj (N2 (tj )−N2 (tj−1 )) ] = i=1 E[eui (N1 (ti )−N1 (ti−1 )) ] j=1 n i=1 E[evj (N2 (tj )−N2 (tj−1 )) ] ]E[e n j=1 = E[e ui (N1 (ti )−N1 (ti−1 )) vj (N2 (tj )−N2 (tj−1 )) ]. Working by induction.s. following the scheme suggested by page 489. 2. Meanwhile. then E[e = = = E[e E[e E[e n i=1 ui (N1 (ti )−N1 (ti−1 ))+ ui (N1 (ti )−N1 (ti−1 ))+ ui (N1 (ti )−N1 (ti−1 ))+ ui (N1 (ti )−N1 (ti−1 ))+ n j=1 vj (N2 (tj )−N2 (tj−1 )) ] E[eun (N1 (tn )−N1 (tn−1 ))+vn (N2 (tn )−N2 (tn−1 )) |Ftn−1 ]] ]E[eun (N1 (tn )−N1 (tn−1 ))+vn (N2 (tn )−N2 (tn−1 )) ] ]E[eun (N1 (tn )−N1 (tn−1 )) ]E[evn (N2 (tn )−N2 (tn−1 )) ]. 11. M2 ]t } = E[ 0<s≤t ∆N1 (s)∆N2 (s)].Proof. So eXt − 1 t = s t eXu− [−λ1 (eu1 − 1) − λ2 (eu2 − 1)]du + s<u≤t eXu− [(eu1 − 1)∆N1 (u) + (eu2 − 1)∆N2 (u)] = s eXu− [(eu1 − 1)d(N1 (u) − λ1 u) − (eu2 − 1)d(N2 (u) − λ2 u)]. N2 (tn )) if and only if (N1 (t1 ). suppose we have 0 ≤ t1 < t2 < t3 < · · · < tn .5.s. we conclude 0<s≤t ∆N1 (s)∆N2 (s) = 0 a. Now. i. · · · . 2) relative to Ftn−1 and the third equality comes from the result obtained in the above paragraph. M2 ]t . N2 have no simultaneous jump. paragraph 1. M1 (t)M2 (t) = o t t t t M1 (s−)dM2 (s) + 0 M2 (s−)dM1 (s) + [M1 . According to page 524. Deﬁne M1 (t) = N1 (t) − λ1 t and M2 (t) = N2 (t) − λ2 t. This shows (eXt )t≥s is a martingale w. Fix s ≥ 0. This shows N1 (t) − N1 (s) is independent of N2 (t) − N2 (s). So taking expectation on both sides. · · · . Both 0 M1 (s−)dM2 (s) and 0 M2 (s−)dM1 (s) are mar0 tingales. we have o t e Xt − e Xs = s t c eXu dXu + 1 2 t c c eXu dXu dXu + s (eXu − eXu− ) s<u≤t = s eXu [−λ1 (eu1 − 1) − λ2 (eu2 − 1)]du + (eXu − eXu− ). Then by Itˆ’s formula for jump process. so that ∀ω ∈ Ω0 . paragraph 2. n−1 i=1 n−1 i=1 n−1 i=1 n−1 j=1 n−1 j=1 n−1 j=1 vj (N2 (tj )−N2 (tj−1 )) vj (N2 (tj )−N2 (tj−1 )) vj (N2 (tj )−N2 (tj−1 )) where the second equality comes from the independence of Ni (tn ) − Ni (tn−1 ) (i = 1. N1 (tn )) is independent of (N2 (t1 ). t > s. we consider Xt = u1 (N1 (t) − N1 (s)) + u2 (N2 (t) − N2 (s)) − λ1 (eu1 − 1)(t − s) − λ2 (eu2 − 1)(t − s). by Itˆ’s product formula. We shall prove the whole path of N1 is independent of the whole path of N2 . N2 (tn ) − N2 (tn−1 )). 0<s≤t ∆N1 (s)∆N2 (s) = 0 for all t > 0. Proof. we would guess the condition should be that N1 and N2 are independent. N1 (t2 ) − N1 (t1 ). · · · . Then by independence E[M1 (t)M2 (t)] = E[M1 (t)]E[M2 (t)] = 0. The problem is ambiguous in that the relation between N1 and N2 is not clearly stated. we get 0 = 0 + E{[M1 . eXu −eXu− = eXu− (e∆Xu −1) = eXu− [(eu1 − 1)∆N1 (u) + (eu2 − 1)∆N2 (u)]. then the vector (N1 (t1 ). Therefore N1 and N2 can have no simultaneous jump. E[eu1 (N1 (t)−N1 (s))+u2 (N2 (t)−N2 (s)) ] = eλ1 (e u1 −1)(t−s) eλ2 (e u2 −1)(t−s) = E[eu1 (N1 (t)−N1 (s)) ]E[eu2 (N2 (t)−N2 (s)) ]. we can ﬁnd a set Ω0 of probability 1.t. 0<u≤t Since ∆Xt = u1 ∆N1 (t)+u2 ∆N2 (t) and N1 . Since 0<s≤t ∆N1 (s)∆N2 (s) ≥ 0 a.

[3] B. World Scientiﬁc. Third edition. Yor. Options. Continous martingales and Brownian motion. [5] A. where g(t. Springer-Verlag. 2003. and other derivatives. [4] D. where Nt is the Poisson process associated with Qt . Øksendal. 1 This shows eXt is a martingale and E[eXt ] ≡ 1. Qt ). Proof. New York. Shreve. 1999. References [1] F. The mathematics of arbitrage. Springer-Verlag. 2006. theory. 0<s≤t Note ∆Xt = u2 ∆Qt = u2 YNt ∆Nt . 1998. So eXt − eXt− = Nt eXt− (e∆Xt − 1) = eXt− (eu2 YNt − 1)∆Nt . N. Berline.6. Stochastic calculus for ﬁnance II. x) = E[h(QT −t + x)]. [2] J. Consider the compound Poisson process Ht = i=1 (eu2 Yi − 1). then Ht − λE[eu2 YNt − 1]t = Ht − λ(ϕ(u2 ) − 1)t is a martingale. Schachermayer. Cambridge University Press. Springer-Verlag. futures.This shows the whole path of N1 is independent of the whole path of N2 . This shows Wt and Qt are independent.. 11. E[h(QT )|Ft ] = E[h(QT −Qt +Qt )|Ft ] = E[h(QT −t +x)]|x=Qt = g(t. Singapore. 84 . Springer. 11. Shiryaev. 1995. Revuz and M. 2000. [6] S. SpringerVerlag. Sixth edition. The mathematics of ﬁnancial derivatives: A student introduction. Hull. Prentice-Hall International Inc. Essentials of stochastic ﬁnance: facts. Berlin. models. So E[eu1 Wt +u2 Qt ] = e 2 u1 t eλt(ϕ(u2 )−1)t = E[eu1 Wt ]E[eu2 Qt ]. The binomial asset pricing model.7. Shreve. [7] S. 2004. Continuous-time models. [8] P. Delbaen and W. Stochastic diﬀerential equations: An introduction with applications. eXt − eXt− = eXt− ∆Ht and t eXt − 1 = 0 t 1 1 eXs (u1 dWs − u2 ds − λ(ϕ(u2 ) − 1)ds) + 1 2 2 t t t eXs u2 ds + 1 0 0 eXs− dHs = 0 eXs u1 dWs + 0 eXs− d(Hs − λ(ϕ(u2 ) − 1)s). Proof. Let Xt = u1 Wt − 1 u2 t + u2 Qt − λt(ϕ(u2 ) − 1) where ϕ is the moment generating function of the jump 2 1 size Y . Stochastic calculus for ﬁnance I. New York. Itˆ’s formula for jump process yields o t eXt − 1 = 0 1 1 eXs (u1 dWs − u2 ds − λ(ϕ(u2 ) − 1)ds) + 1 2 2 t eXs u2 ds + 1 0 (eXs − eXs− ). Wilmott. Fourth Edition. 2004. New Jersey.

- Introduction to Time Series and Forecasting【solution manual 】
- Answer Exercises Shreve
- Stochastic Calculus for FinanceII
- Lecture Notes Stochastic Calculus
- Prob Probability With Martingales-Williams
- Elementary Stochastic Calculus With Finance in View_thomas Mikosch (World 1998 212p)
- Solution Manual for Shreve 2
- 4 Stochastic Calculus for Finance II Continuous-Time Models
- Oksendal, Bernt - Stochastic Differential Equations. an Introduction With Applications. Sixth Edition.
- Steven Shreve. Lectures on Stochastic Calculus and Finance
- Shreve s.e. Stochastic Calculus for Finance II
- Solution Manual for Stochastic Calculus for Finance
- Shreve S.E. Stochastic Calculus for Finance I.. the Binomial Asset Pricing Model
- Stochastic Calculus for Finance Shreve Solutions
- Manual to Shreve
- Solutions to Statistics and Data Analysis for Financial Engineering
- Lamberton D., Lapeyre P. - Introduction to Stochastic Calculus Applied to Finance
- Shreve S.E. Stochastic Calculus for Finance I.. the Binomial Asset Pricing Model (1)
- Stochastic Processes - Ross
- Test Bank for Options Futures and Other Derivatives 9th Edition Hull
- Options,Futures and Other Derivatives 8th Solution Manual-1
- Introduction to Stochastic Calculus With Applications
- Options Futures and Other Derivatives 7e by Hull Solutions Manual
- Solution Shreve Stochastic Calculus Vol1
- Durrett Probability Theory and Examples Solutions PDF
- The Mathematics Of Financial Derivatives
- Adventures in Stochastic Processes (Sidney Resnick)(2)
- Brownian Motion and Stochastic Calculus - Ioannis Karatzas
- Elton;Gruber;Brown & Goetzmann - Modern Portfolio Theory & Investment Analysis.6ed
- Bjork_algunas_soluciones

Skip carousel

- The Weekly Charts are Positive with Dow and S&P 500 above Pivots
- I am still tracking what I believe is a topping out pattern.
- ValuEngine Valuations are Positive but the Forecasting Model is Cautious.
- The New House of Money - Chapter 2 - Jim Chanos
- SEC Inspector General's Report to Congress
- Anthony M. Reinach v. Commissioner of Internal Revenue, 373 F.2d 900, 2d Cir. (1967)
- HAFA - Home Affordable Foreclosure Alternatives
- Dow Transports & Russell 2000 are lagging versus their 2011 highs.
- The daily charts for the major averages remain positive.
- Are we facing a replay of the March 2000 Tech Bubble?
- Zlotnick, Albert M., Individually and on Behalf of All Others Similarly Situated v. Tie Communications & L.W. Kifer, 836 F.2d 818, 3rd Cir. (1988)
- Short Selling_Cleaning Up After Elephants
- Higher yields / oil will end stock market rally.
- 19460827_Minutes.pdf
- Goldman Sachs Market Structure
- The Dow, SPX, Transports and Russell 2000 lag their 50-day SMAs
- 19480505_Minutes.pdf
- Stocks React Negatively to Cash Neutral Operation Twist
- The Topping Out Pattern Continues – Forget Dow 15,000
- Kase Quarterly Letter Q3 2013
- Walter E. Hendricks and Dema P. Hendricks v. Commissioner of Internal Revenue, 423 F.2d 485, 4th Cir. (1970)
- Samuel Peltz v. Shb Commodities, Inc., Isaac Mayer, Solomon Mayer, and Bezalel Mayer, 115 F.3d 1082, 2d Cir. (1997)
- Ed J. Hagen Martha Jo Hagen, and Hagen Investments, Inc. v. Commissioner of Internal Revenue, 951 F.2d 1259, 10th Cir. (1991)
- Stocks remain Cheap with improving Technicals
- The Case for a Bear Market for Stocks
- nycirc_1977_08110.pdf
- Dow / SPX breakout must be followed by the lagging indices.
- Bank Failure Friday Hits Illinois
- Choppy trading continues as ranges hold
- United States v. Samuel Cohen, 448 F.2d 1224, 2d Cir. (1971)

Skip carousel

- SEC v. Cooperman
- Equipment Lease
- Harold S. Divine and Rita K. Divine v. Commissioner of Internal Revenue, 500 F.2d 1041, 2d Cir. (1974)
- Markopolos Testimony on Madoff
- Pershing Square IV Letter to Investors
- Hedge Fund Gates Study
- Madoff Trustee's Lawsuit Against Fairfield Greenwich
- Rent to Own Agreement
- Deloitte v Flanagan
- ACCA Exam Tips June 2010 OpenTuition.com
- Omega Advisors Inc. Letter to Investors 9.21.16
- Buy-Sell Agreement
- Lease with Option to Purchase
- Aaron v. EMC Corporation, 4th Cir. (1997)
- Salt Lake Tribune v. Management Planning, 390 F.3d 684, 10th Cir. (2004)
- frbrich_wp14-16.pdf
- UT Dallas Syllabus for fin6355.501.11f taught by David Springate (spring8)
- Joseph E. Seagram & Sons, Inc. v. Dan W. Shaffer, 310 F.2d 668, 10th Cir. (1962)
- Anthony M. Reinach v. Commissioner of Internal Revenue, 373 F.2d 900, 2d Cir. (1967)
- Sacramento-Municipal-Util-Dist-GS-TOU2-Medium-General-Service-Time-of-Use
- Richard J. Sennott Et Ux. v. Rodman & Renshaw, 414 U.S. 926 (1973)
- COTT CORP /CN/ 8-K (Events or Changes Between Quarterly Reports) 2009-02-24
- Commitment of Traders Report
- Fed. Sec. L. Rep. P 90,297, 12 Fla. L. Weekly Fed. C 156 Forrest Kelly Clay, Individually and on Behalf of All Those Similarly Situated v. Riverwood International Corporation, Thomas H. Johnson, 157 F.3d 1259, 11th Cir. (1998)
- UT Dallas Syllabus for fin6360.0u1.11u taught by Christopher Angelo (cga051000)
- John Arnold Cftc August 3v3 With Appendix
- Howard P. Blount and Dolly H. Blount v. Commissioner of Internal Revenue, 425 F.2d 921, 2d Cir. (1969)
- Commodity Futures Trading Commission v. William C. Dunn and Delta Consultants, Inc., Delta Options, Ltd. And Nopkine Co., Ltd., 58 F.3d 50, 2d Cir. (1995)
- PIONEER NATURAL RESOURCES CO 8-K (Events or Changes Between Quarterly Reports) 2009-02-24
- In the Matter of Fashion Optical, Ltd., Bankrupt. Joe Steele, Trustee v. Dr. Charles J. Gebetsberger, 653 F.2d 1385, 10th Cir. (1981)

Sign up to vote on this title

UsefulNot usefulRead Free for 30 Days

Cancel anytime.

Close Dialog## Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

Close Dialog## This title now requires a credit

Use one of your book credits to continue reading from where you left off, or restart the preview.

Loading