“FINANCIAL MATHEMATICS I:
Stochastic Calculus, Option Pricing,
Portfolio Optimization”
Holger Kraft
University of Kaiserslautern
Department of Mathematics
Mathematical Finance Group
Summer Term 2005
This Version: August 4, 2005
Contents
1 Introduction 3
1.1 What is an Option? . . . 3
1.2 Fundamental Relations . . . 3
2 Discrete-time State Pricing with Finite State Space 6
2.1 Single-period Model . . . 6
2.1.1 Description of the Model . . . 6
2.1.2 Arrow-Debreu Securities . . . 7
2.1.3 Deflator . . . 9
2.1.4 Risk-neutral Measure . . . 10
2.1.5 Summary . . . 12
2.2 Multi-period Model . . . 14
2.2.1 Modeling Information Flow . . . 14
2.2.2 Description of the Multi-period Model . . . 16
2.2.3 Some Basic Definitions Revisited . . . 19
2.2.4 Assumptions . . . 20
2.2.5 Main Results . . . 20
2.2.6 Summary . . . 22
3 Introduction to Stochastic Calculus 23
3.1 Motivation . . . 23
3.1.1 Why Stochastic Calculus? . . . 23
3.1.2 What is a Reasonable Model for Stock Dynamics? . . . 23
3.2 On Stochastic Processes . . . 24
3.2.1 Cadlag and Caglad . . . 24
3.2.2 Modification, Indistinguishability, Measurability . . . 25
3.2.3 Stopping Times . . . 27
3.3 Prominent Examples of Stochastic Processes . . . 29
3.4 Martingales . . . 30
3.5 Completion, Augmentation, Usual Conditions . . . 34
3.6 Ito Integral . . . 35
3.7 Ito's Formula . . . 41
4 Continuous-time Market Model and Option Pricing 47
4.1 Description of the Model . . . 47
4.2 Black-Scholes Approach: Pricing by Replication . . . 49
4.3 Pricing with Deflators . . . 53
4.4 Pricing with Risk-neutral Measures . . . 57
5 Continuous-time Portfolio Optimization 68
5.1 Motivation . . . 68
5.2 Martingale Approach . . . 69
5.3 Stochastic Optimal Control Approach . . . 73
5.3.1 Heuristic Derivation of the HJB . . . 73
5.3.2 Solving the HJB . . . 74
5.3.3 Verification . . . 75
5.4 Intermediate Consumption . . . 76
5.5 Infinite Horizon . . . 81
References 86
1 Introduction
1.1 What is an Option?
• There exist two main variants: call and put.
• A European call gives the owner the right to buy some asset (e.g. stock, bond,
electricity, corn . . . ) at a future date (maturity) for a ﬁxed price (strike price).
• Payoff: max{S(T) − K; 0}, where S(T) is the asset price at maturity T and
K is the strike price.
• An American call gives the owner the right to buy some asset during the whole
lifetime of the call.
• Put: replace “buy” with “sell”
• Since the owner has the right but not the obligation, this right has a positive
value.
1.2 Fundamental Relations
Consider an option written on an underlying which pays no dividends etc. during
the lifetime of the option. In the following we assume that the underlying is a stock.
The time-t call and put prices are denoted by Call(t) and Put(t). The time-t price
of a zero-coupon bond with maturity T is denoted by p(t, T).
Proposition 1.1 (Bounds)
(i) For a European call we get: S(t) ≥ Call(t) ≥ max{S(t) − Kp(t, T); 0}
(ii) For a European put we have: Kp(t, T) ≥ Put(t) ≥ max{Kp(t, T) − S(t); 0}
Proof. (i) The relation S(t) ≥ Call(t) is obvious because the stock can be
interpreted as a call with strike 0. We now assume that Call(t) < max{S(t) −
Kp(t, T); 0}. W.l.o.g. S(t) − Kp(t, T) ≥ 0. Consider the strategy
• buy one call,
• sell one stock,
• buy K zeros with maturity T,
leading to an initial inflow of S(t) − Call(t) − Kp(t, T) > 0. At time T, we distinguish
two scenarios:
1. S(T) > K: Then the value of the strategy is (S(T) − K) − S(T) + K = 0.
2. S(T) ≤ K: Then the value of the strategy is 0 − S(T) + K ≥ 0.
Hence, the strategy would be an arbitrage opportunity.
(ii) The relation Kp(t, T) ≥ Put(t) is valid because otherwise the strategy
• sell one put,
• buy K zeros with maturity T
would be an arbitrage strategy. The second relation can be proved similarly to the
relation for the call in (i). 2
Proposition 1.2 (Put-Call Parity)
The following relation holds: Put(t) = Call(t) − S(t) + Kp(t, T)
Proof. Consider the strategy
• buy one put,
• sell one call,
• buy one stock,
• sell K zeros with maturity T.
It is easy to check that the time-T value of this strategy is zero. Since no additional
payments need to be made, no arbitrage dictates that the time-t value is zero as
well, implying the desired result. 2
Remark. This is a very important relationship because it ties together European
call and put prices.
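To see the parity mechanically, one can check that the portfolio used in the proof pays zero at T for any terminal stock price. A minimal sketch (the function name and the sample prices are illustrative, not from the text):

```python
# Verify that "long put, short call, long stock, short K zeros" pays
# zero at maturity T for every terminal stock price S(T) -- which is
# exactly the content of put-call parity.

def parity_portfolio_payoff(S_T, K):
    put = max(K - S_T, 0.0)     # long one put
    call = -max(S_T - K, 0.0)   # short one call
    stock = S_T                 # long one stock
    zeros = -K                  # short K zero-coupon bonds, each paying 1 at T
    return put + call + stock + zeros

for S_T in [0.0, 50.0, 100.0, 173.5]:
    assert parity_portfolio_payoff(S_T, K=100.0) == 0.0
```

Since the terminal value is zero in every state, no arbitrage forces the time-t value of the portfolio to be zero as well, which rearranges to the parity formula.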
Proposition 1.3 (American Call)
Assume that interest rates are positive. Early exercise of an American call is not
optimal, i.e. Call^am(t) = Call(t).
Proof. The following relations hold:
Call^am(t) ≥ Call(t) ≥ max{S(t) − Kp(t, T); 0} ≥ S(t) − K,
where the second inequality holds due to Proposition 1.1 (i). Hence, the American
call is worth more alive than dead. 2
Remark.
• Unfortunately, this result does not hold for American puts.
• If dividends are paid, then the result for the American call breaks down as well.
Proposition 1.4 (American Put)
Assume that interest rates are positive. Then the following relations hold:
(i) Call^am(t) − S(t) + Kp(t, T) ≤ Put^am(t) ≤ Call^am(t) − S(t) + K
(ii) Put(t) ≤ Put^am(t) ≤ Put(t) + K(1 − p(t, T)).
Proof. (i) The first inequality follows from the put-call parity and Proposition 1.3.
For the second one assume that Put^am(t) > Call^am(t) − S(t) + K and consider the
following strategy:
• sell one put,
• buy one call,
• sell one stock,
• invest K Euros in the money market account.¹
¹ The time-t value of the money market account is denoted by M(t). By convention, M(0) = 1.
By assumption, the initial value of this strategy is strictly positive. For t ∈ [0, T]
we distinguish two scenarios:
1. S(t) > K: Then the value of the strategy is
0 + (S(t) − K) − S(t) + KM(t) = K(M(t) − 1) ≥ 0,
since interest rates are positive and thus M(t) ≥ 1.
2. S(t) ≤ K: Then the value of the strategy is
−(K − S(t)) + 0 − S(t) + KM(t) = K(M(t) − 1) ≥ 0.
Hence, the strategy would be an arbitrage opportunity.
(ii) We have
Put(t) ≤ Put^am(t) ≤ Call^am(t) − S(t) + K = Call(t) − S(t) + K
= Put(t) + S(t) − Kp(t, T) − S(t) + K = Put(t) + K(1 − p(t, T)). 2
Remark. If interest rates are zero, then Put(t) = Put^am(t).
2 Discrete-time State Pricing with Finite State Space
2.1 Single-period Model
2.1.1 Description of the Model
• We consider a single-period model starting today (t = 0) and ending at time
t = T. Without loss of generality T = 1.
• Uncertainty at time t = 1 is represented by a finite set Ω = {ω_1, . . . , ω_I} of
states, one of which will be revealed as true. Ω is said to be the state space.
• Each state ω_i ∈ Ω can occur with probability P(ω_i) > 0. This defines a
probability measure P : Ω → (0, 1).
• At time t = 0 one can trade in J + 1 assets. The time-0 prices of these
assets are described by the vector S = (S_0, S_1, . . . , S_J)'. The state-dependent
payoffs of the assets at time t = 1 are described by the payoff matrix

X = ( X_0(ω_1) · · · X_J(ω_1) )
    ( X_0(ω_2) · · · X_J(ω_2) )
    (    ...          ...     )
    ( X_0(ω_I) · · · X_J(ω_I) ).

• To shorten notations, X_ij := X_j(ω_i).
• The first asset is assumed to be riskless, i.e. X_i0 = X_k0 = const. for all i,
k ∈ {1, . . . , I}.
• Therefore, at time t = 0 there exists a riskless interest rate given by
r = X_0/S_0 − 1.
• An investor buys and sells assets at time t = 0. This is modeled via a trading
strategy ϕ = (ϕ_0, . . . , ϕ_J)', where ϕ_j denotes the number of asset j which the
investor holds in his portfolio (combination of assets).
Deﬁnition 2.1 (Arbitrage)
We distinguish two kinds of arbitrage opportunities:
(i) A free lunch is a trading strategy ϕ with
S'ϕ < 0 and Xϕ ≥ 0.
(ii) A free lottery is a trading strategy ϕ with
S'ϕ = 0 and Xϕ ≥ 0 and Xϕ ≠ 0.
Assumptions:
(a) All investors agree upon which states cannot occur at time t = 1.
(b) All investors estimate the same payoﬀ matrix (homogeneous beliefs).
(c) At time t = 0 there is only one market price for each security and each investor
can buy and sell as many securities as desired without changing the market
price (price taker).
(d) Frictionless markets, i.e.
• assets are divisible,
• no restrictions on short sales,²
• no transaction costs,
• no taxes.
(e) The asset market contains no arbitrage opportunities, i.e. there are neither free
lunches nor free lotteries.
2.1.2 Arrow-Debreu Securities
To characterize arbitrage-free markets, the notion of an Arrow-Debreu security is
crucial:
Definition 2.2 (Arrow-Debreu Security)
An asset paying one Euro in exactly one state and zero else is said to be an Arrow-
Debreu security (ADS). Its price is an Arrow-Debreu price (ADP).
Remarks.
• In our model, we have as many ADS as states at time t = 1, i.e. we have I
Arrow-Debreu securities.
• The i-th ADS has the payoff e_i = (0, . . . , 0, 1, 0, . . . , 0)', where the i-th entry is
1. The corresponding Arrow-Debreu price is denoted by π_i.
• Unfortunately, in practice ADSs are not traded.
We can now prove the following theorem characterizing markets which do not
contain arbitrage opportunities.
Theorem 2.1 (Characterization of No Arbitrage)
Under assumptions (a)-(d) the following statements are equivalent:
(i) The market does not contain arbitrage opportunities (Assumption (e)).
² A short sale of an asset is equivalent to selling an asset one does not own. An investor can do
this by borrowing the asset from a third party and selling the borrowed security. The net position
is a cash inflow equal to the price of the security and a liability (borrowing) of the security to
the third party. In abstract mathematical terms, a short sale corresponds to buying a negative
amount of a security.
(ii) There exist strictly positive Arrow-Debreu prices, i.e. π_i > 0, i = 1, . . . , I, such
that
S_j = Σ_{i=1}^{I} π_i X_ij.
Proof. Suppose (ii) to hold. We consider a strategy ϕ with Xϕ ≥ 0 and Xϕ ≠ 0.
Then
S'ϕ = π'Xϕ > 0,
where the equality follows from (ii) and the inequality holds because π > 0, Xϕ ≥ 0,
and Xϕ ≠ 0. For this reason, there are no free lunches. The argument to exclude
free lotteries is similar. The implication "(i)⇒(ii)" follows from the so-called
Farkas-Stiemke lemma (see Mangasarian (1969, p. 32) or Dax (1997)). 2
Remarks.
• The ADPs are the solution of the following system of linear equations:
S = X'π.
• The hard part of the proof is actually the proof of the implication "(i)⇒(ii)". For
brevity, this proof is omitted here.
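Numerically, the ADPs can be recovered by solving the linear system S = X'π. A minimal sketch with invented numbers (two states, a riskless bond paying 1 in both states and a stock paying 120 or 80; the 10% rate and the price of 90 are illustrative, not from the text):

```python
import numpy as np

# Payoff matrix: rows = states, columns = assets (bond, stock).
X = np.array([[1.0, 120.0],    # payoffs in state omega_1
              [1.0,  80.0]])   # payoffs in state omega_2
r = 0.10
S = np.array([1.0 / (1.0 + r), 90.0])  # today's prices of bond and stock

# Arrow-Debreu prices solve S = X' pi.
pi = np.linalg.solve(X.T, S)

# Strictly positive ADPs <=> no arbitrage (Theorem 2.1).
assert np.all(pi > 0)
assert np.allclose(X.T @ pi, S)
```

Under these numbers both ADPs come out strictly positive, so by Theorem 2.1 the little market is arbitrage-free.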
Given our market with prices S and payoﬀ matrix X, it is now of interest how to
price other assets.
Deﬁnition 2.3 (Contingent Claim)
In the single-period model, a contingent claim (derivative, option) is some additional
asset with payoff C ∈ ℝ^I.
Deﬁnition 2.4 (Replication)
Consider a market with payoﬀ matrix X. A contingent claim with payoﬀ C can be
replicated (hedged) in this market if there exists a trading strategy ϕ such that
C = Xϕ.
Remark. If an asset with payoff C is replicable via a trading strategy ϕ, by the
no-arbitrage assumption its price S^C has to be ("law of one price")
S^C = Σ_{j=0}^{J} ϕ_j S_j.
Deﬁnition 2.5 (Completeness)
An arbitrage-free market is said to be complete if every contingent claim is
replicable.
Theorem 2.2 (Characterization of a Complete Market)
Assume that the financial market contains no arbitrage opportunities. Then the
market is complete if and only if the Arrow-Debreu prices are uniquely determined,
i.e.
Rank(X) = I.
Proof.
π is unique
⇐⇒ X'π = S has a unique solution
⇐⇒ Rank(X) = I
⇐⇒ Range(X) = ℝ^I
⇐⇒ every contingent claim is replicable. 2
Remarks.
• The market is complete if the number of assets with linearly independent payoﬀs
is equal to the number of states.
• If the number of assets is smaller than the number of states (J < I), the market
cannot be complete.
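The rank criterion is easy to check in a numerical sketch (the payoff matrices below are invented for illustration):

```python
import numpy as np

# Rows = states, columns = assets. Completeness <=> Rank(X) = I
# (number of states), by Theorem 2.2.
X_complete = np.array([[1.0, 120.0],
                       [1.0,  80.0]])       # 2 assets, 2 states
X_incomplete = np.array([[1.0, 120.0],
                         [1.0, 100.0],
                         [1.0,  80.0]])     # 2 assets, 3 states: J < I

# Rank equals the number of states => every claim is replicable.
assert np.linalg.matrix_rank(X_complete) == X_complete.shape[0]
# Fewer assets than states => the market cannot be complete.
assert np.linalg.matrix_rank(X_incomplete) < X_incomplete.shape[0]
```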
2.1.3 Deflator
Main result from the previous subsection: In an arbitrage-free market, there
exist strictly positive ADPs such that
S_j = Σ_{i=1}^{I} π_i X_ij.
In this subsection: Using the physical probabilities P(ω_i), i = 1, . . . , I, arbitrage-
free asset prices can be represented as discounted expected values of the corresponding
payoffs.
Definition 2.6 (Deflator)
In an arbitrage-free market, a deflator H is defined by
H = π / P.
Remarks.
• Since the market is arbitrage-free, there exists at least one vector of ADPs π, but
π and thus the deflator H are only unique if the market is complete.
• The ADP vector π and the probability measure P can be interpreted as random
variables with realizations π_i = π(ω_i) and P(ω_i).
Proposition 2.3 (Pricing with a Deflator)
In an arbitrage-free market, the asset prices are given by
S_j = E^P[H X_j].
Proof.
S_j = Σ_{i=1}^{I} π_i X_ij = Σ_{i=1}^{I} P(ω_i) · (π(ω_i)/P(ω_i)) · X_j(ω_i) = E^P[(π/P) X_j] = E^P[H X_j]. 2
Remarks.
• The deflator H is the appropriate discount factor for payoffs at time t = 1.
• The deflator H is a random variable with realizations π(ω_i)/P(ω_i).
• On average, one needs to discount with the riskless rate r because
E^P[H] = Σ_{i=1}^{I} P(ω_i) · π(ω_i)/P(ω_i) = Σ_{i=1}^{I} π(ω_i) = 1/(1 + r).
Interpretation of a Deflator:
• In an equilibrium model one can show that the deflator equals the marginal
rate of substitution between consumption at time t = 0 and t = 1 of a representative
investor, i.e. (roughly speaking) we have
H = (∂u/∂c_1) / (∂u/∂c_0),
where u = u(c_0, c_1) is the utility function of the representative investor.
• Setting u_0 := ∂u/∂c_0 and u_1 := ∂u/∂c_1 leads to
r = 1/E^P[H] − 1 = u_0/E^P(u_1) − 1,
which gives the following result:
(i) u_0 > E(u_1) ("time preference"):³ r > 0,
(ii) u_0 < E(u_1): r < 0.
2.1.4 Risk-neutral Measure
Two important observations:
(i) If an investor buys all ADSs, he gets one Euro at time t = 1 for sure, i.e. we get
Σ_{i=1}^{I} π_i = 1/(1 + r)  ⇒  Σ_{i=1}^{I} π_i (1 + r) = 1.
(ii) In an arbitrage-free market, it holds that π_i > 0, i = 1, . . . , I.
³ Intuitively, time preference describes the preference for present consumption over future
consumption.
Proposition 2.4 (Risk-neutral Measure)
In an arbitrage-free market, there exists a probability measure Q defined via
Q(ω_i) = q_i := π_i (1 + r).
This measure is said to be the risk-neutral measure or equivalent martingale
measure.
Proof. This follows from (i) and (ii). 2
Using the riskneutral measure, there is a third representation of an asset price:
Proposition 2.5 (Risk-neutral Pricing)
In an arbitrage-free market, asset prices possess the following representation:
S_j = E^Q[X_j / R],
where R := 1 + r.
Proof.
S_j = Σ_{i=1}^{I} π_i X_ij = Σ_{i=1}^{I} π_i R · (X_ij / R) = Σ_{i=1}^{I} q_i (X_ij / R) = E^Q[X_j / R]. 2
Remarks.
• Warning: The risk-neutral measure Q is different from the physical measure P.
• Roughly speaking, risk-neutral probabilities are compounded Arrow-Debreu
prices.
• The notion "risk-neutral measure" comes from the fact that under this measure
prices are discounted expected values where one needs to discount using the
riskless interest rate.
To motivate the notion “equivalent martingale measure”, let us recall some basic
deﬁnitions:
Deﬁnition 2.7 (Equivalent Probability Measures)
A Probability Measure Q is said to be continuous w.r.t. another probability measure
P if for any event A ⊂ Ω
P(A) = 0 =⇒Q(A) = 0.
If Q is continuous w.r.t. P and vice versa, then P and Q are said to be equivalent,
i.e. both measures possess the same null sets.
Theorem 2.6 (Radon-Nikodym)
If Q is absolutely continuous w.r.t. P, then Q possesses a density w.r.t. P, i.e. there
exists a positive random variable D = dQ/dP such that
E^Q[Y] = E^P[D Y]
for any random variable Y. This density is uniquely determined (P-a.s.).
Definition 2.8 (Stochastic Process in a Single-period Model)
Consider a single-period model with time points t = 0, 1. The finite sequence
{Y(t)}_{t=0,1} forms a stochastic process.
Definition 2.9 (Martingale in a Single-period Model)
Consider a single-period model with finite state space. Uncertainty at time t = 1
is modeled by some probability measure Q. A real-valued stochastic process is said
to be a martingale w.r.t. Q (short: Q-martingale) if
E^Q[Y(1)] = Y(0),
where it is assumed that E^Q[|Y(1)|] < ∞.
We can now characterize risk-neutral measures as follows:
Proposition 2.7 (Properties of a Risk-neutral Measure)
(i) The discounted price processes {S_j, X_j/R} are martingales under a risk-neutral
measure Q.
(ii) The physical measure P and a risk-neutral measure Q are equivalent.
Proof. (i) follows from Proposition 2.5.
(ii) Since ADPs are strictly positive, we have Q(ω_i) > 0 and P(ω_i) > 0, which gives
the desired result. 2
Remarks.
• Proposition 2.7 is the reason why Q is said to be an equivalent martingale
measure.
• From Propositions 2.3 and 2.5, we get
E^Q[X_j / R] = S_j = E^P[H X_j].
Therefore, we obtain
E^Q[X_j] = E^P[H R X_j],  where H R = dQ/dP,
i.e. H = (dQ/dP)/R. Analogously, one can show dP/dQ = 1/(H R).
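The relations q_i = π_i(1 + r) and H·R = dQ/dP can be checked directly in the earlier invented two-state market (ADPs, 10% rate, stock paying 120/80 at price 90, assumed P = (0.5, 0.5)):

```python
import numpy as np

pi = np.array([0.431818, 0.477273])   # hypothetical Arrow-Debreu prices
P = np.array([0.5, 0.5])              # assumed physical probabilities
X_stock = np.array([120.0, 80.0])
R = 1.10                              # R = 1 + r

q = pi * R                            # risk-neutral probabilities
assert abs(q.sum() - 1.0) < 1e-4      # q is indeed a probability measure

# Risk-neutral pricing: S_j = E^Q[X_j / R] recovers the stock price 90.
assert abs(np.sum(q * X_stock) / R - 90.0) < 1e-2

# Density check: dQ/dP = H * R, i.e. q/P = (pi/P) * R.
H = pi / P
assert np.allclose(q / P, H * R)
```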
2.1.5 Summary
Theorem 2.8 (No Arbitrage)
Consider a financial market with price vector S and payoff matrix X. If the
assumptions (a)-(d) are satisfied, then the following statements are equivalent:
(i) The financial market contains no arbitrage opportunities (assumption (e)).
(ii) There exist strictly positive Arrow-Debreu prices π_i, i = 1, . . . , I, such that
S_j = Σ_{i=1}^{I} π_i X_ij.
(iii) There exists a state-dependent discount factor H, the so-called deflator, such
that for all j = 0, . . . , J
S_j = E^P[H X_j].
(iv) There exists an equivalent martingale measure Q such that the discounted
price processes {S_j, X_j/R} are martingales under Q, i.e. for all j = 0, . . . , J
S_j = E^Q[X_j / R].
Theorem 2.9 (Completeness)
Consider a financial market with price vector S and payoff matrix X. If the
assumptions (a)-(e) are satisfied, then the following statements are equivalent:
(i) The financial market is complete, i.e. each additional asset can be replicated.
(ii) The Arrow-Debreu prices π_i, i = 1, . . . , I, are uniquely determined.
(iii) There exists only one deflator H.
(iv) There exists only one equivalent martingale measure Q.
Remarks.
• In a complete market, the price of any additional asset is given by the price of
its replicating portfolio.
• The value of this portfolio can be calculated by applying the following theorem.
Theorem 2.10 (Pricing of Contingent Claims)
Consider an arbitrage-free and complete market with price vector S and payoff
matrix X. Let C ∈ ℝ^I be the payoff of a contingent claim. Its price today, S^C,
can be calculated by using one of the following formulas:
(i) S^C = C'π = Σ_{i=1}^{I} π_i C(ω_i) (pricing with ADPs).
(ii) S^C = E^P[H C] (pricing with the deflator).
(iii) S^C = E^Q[C / R] (pricing with the equivalent martingale measure).
Proof. In an arbitrage-free and complete market, there exist ADPs which are
uniquely determined. The no-arbitrage requirement together with the definition of
an ADP gives formula (i). Formulas (ii) and (iii) follow from (i) using H = π/P
and Q(ω_i) = π_i R. 2
2.2 Multi-period Model
In the previous section we have modeled only two points in time:
t = 0: The investor chooses the trading strategy.
t = T: The investor liquidates his portfolio and receives the payoffs.
This has the following consequences:
• We have disregarded the information flow between 0 and T.
• Every trading strategy was a static strategy ("buy and hold"), i.e. intermediate
portfolio adjustments were not allowed.
In this section
• we will model the information flow between 0 and T → filtration,
• and the investor can adjust his portfolio according to the arrival of new
information about prices → dynamic (self-financing) trading strategy.
2.2.1 Modeling Information Flow
Motivation
Consider the dynamics of a stock in a two-period binomial model:

t = 0      t = 0.5      t = 1

                        121
           110
100                     99
           90
                        81

(the stock moves from 100 up to 110 or down to 90, and from each of these nodes
up or down again: 110 → 121 or 99, and 90 → 99 or 81).
• Due to the additional trading date at time t = 0.5, the investor gets more
information about the stock price at time t = 1.
• At t = 0 it is possible that the stock price at time t = 1 equals 81, 99 or 121.
• If, however, the investor already knows that at time t = 0.5 the stock price equals
110 (90), then at time t = 1 the stock price can only take the values 121 or 99
(99 or 81).
• It is one of the goals of this section to model this additional piece of information!
Reference Example
To demonstrate how one can model the information ﬂow, we consider the following
example:
t = 0                    t = 1            t = 2

                                          ω_1
                     {ω_1, ω_2}
                                          ω_2
{ω_1, ω_2, ω_3, ω_4}
                                          ω_3
                     {ω_3, ω_4}
                                          ω_4

The state space thus equals: Ω = {ω_1, ω_2, ω_3, ω_4}.
Example for the interpretation of the states:
ω_1: DAX stands at 5000 at t = 1 and at 6000 at time t = 2.
ω_2: DAX stands at 5000 at t = 1 and at 4000 at time t = 2.
ω_3: DAX stands at 3000 at t = 1 and at 4000 at time t = 2.
ω_4: DAX stands at 3000 at t = 1 and at 2000 at time t = 2.
First Approach to Model the Flow of Information
Definition 2.10 (Partition)
A partition of the state space is a set {A_1, . . . , A_K} of subsets A_k ⊂ Ω of a state
space Ω such that
• the subsets A_k are disjoint, i.e. A_k ∩ A_l = ∅ for k ≠ l,
• Ω = A_1 ∪ . . . ∪ A_K.
Example. {{ω_1, ω_2}, {ω_3, ω_4}}.
Definition 2.11 (Information Structure)
An information structure is a sequence {F_t}_t of partitions F_0, . . . , F_T such that
• F_0 = {{ω_1, . . . , ω_I}} ("At the beginning each state can occur."),
• F_T = {{ω_1}, . . . , {ω_I}} ("At the end, one knows for sure which state has
occurred."),
• the partitions become finer over time.
Example.
F_0 = {{ω_1, ω_2, ω_3, ω_4}},
F_1 = {{ω_1, ω_2}, {ω_3, ω_4}},
F_2 = {{ω_1}, {ω_2}, {ω_3}, {ω_4}}.
Remarks.
• Problem: This intuitive definition cannot be generalized to the continuous-time
case.
• However, it provides a good intuition of what information flow actually means.
Second Approach to Model the Flow of Information: Filtration
Definition 2.12 (σ-Field)
A system F of subsets of the state space Ω is said to be a σ-field if
(i) Ω ∈ F,
(ii) A ∈ F ⇒ A^c ∈ F,
(iii) for every sequence {A_n}_{n∈ℕ} with A_n ∈ F we have ∪_{n=1}^{∞} A_n ∈ F.
Remark. For a finite σ-field, it is sufficient if finite (instead of infinite) unions of
events belong to the σ-field (see requirement (iii) of Definition 2.12).
Example. {∅, {ω_1, ω_2}, {ω_3, ω_4}, Ω}.
Definition 2.13 (Filtration)
Consider a σ-field F. A family {F_t}_t of sub-σ-fields F_t ⊂ F, t ∈ I, is said to be a
filtration if F_s ⊂ F_t for s ≤ t and s, t ∈ I. Here I denotes some ordered index set.
Remark. In our discrete-time model, we can always use filtrations with the
following properties (P(Ω) denotes the power set of Ω):
• F_0 = {∅, Ω} ("At the beginning, one has no information."),
• F_T = P(Ω) ("At the end, one is fully informed."),
• F_0 ⊂ F_1 ⊂ . . . ⊂ F_T ("The state of information increases over time.").
Example. F_0 = {∅, Ω}, F_1 = {∅, {ω_1, ω_2}, {ω_3, ω_4}, Ω}, F_2 = P(Ω).
Some facts about filtrations:
• Filtrations are used to model the flow of information over time.
• At time t, one can decide whether the event A ∈ F_t has occurred or not.
• Taking the conditional expectation E[Y | F_t] of a random variable Y means that
one takes the expectation on the basis of all information available at time t.
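With a finite state space, conditioning on the information at time t simply averages over the cell of the partition containing each state. A small sketch using the DAX example above (the uniform probabilities are an assumption; `cond_exp` is an illustrative helper, not from the text):

```python
# E[Y | F_t]: average Y over the partition cell containing each state,
# weighted by the physical probabilities.

def cond_exp(Y, P, partition):
    """Conditional expectation given a partition: state -> cell average."""
    result = {}
    for cell in partition:
        prob = sum(P[w] for w in cell)
        avg = sum(P[w] * Y[w] for w in cell) / prob
        for w in cell:
            result[w] = avg
    return result

P = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}   # assumed probabilities
Y = {"w1": 6000, "w2": 4000, "w3": 4000, "w4": 2000}   # DAX at t = 2
F1 = [{"w1", "w2"}, {"w3", "w4"}]                      # information at t = 1

E_Y_F1 = cond_exp(Y, P, F1)
assert E_Y_F1["w1"] == 5000 and E_Y_F1["w3"] == 3000
```

Note that the conditional expectation of the t = 2 DAX level given F_1 reproduces exactly the t = 1 levels of the example (5000 and 3000) — a first glimpse of the martingale property.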
2.2.2 Description of the Multi-period Model
Properties of the Model:
• Trading takes place at T + 1 dates, t = 0, . . . , T.
• Investors can trade in J + 1 assets with price processes (S_0(t), . . . , S_J(t)),
t = 0, . . . , T.
• S_j(t) denotes the price of the j-th asset at time t.
• The range of S_j(t) is finite.
• Trading of an investor is modeled via his trading strategy ϕ(t) = (ϕ_0(t), . . . , ϕ_J(t))',
where ϕ_j(t) denotes the number of asset j at time t in the investor's portfolio.
• A trading strategy needs to be predictable, i.e. on the basis of all information
about prices S(t − 1) at time t − 1 the investor selects the portfolio which he holds
until time t.
• The value of a trading strategy at time t is given by:
V_ϕ(t) = S(t)'ϕ(t) = Σ_{j=0}^{J} S_j(t)ϕ_j(t)  for t = 1, . . . , T,
V_ϕ(0) = S(0)'ϕ(1) = Σ_{j=0}^{J} S_j(0)ϕ_j(1)  for t = 0.
Definition 2.14 (Contingent Claim)
A contingent claim leads to a state-dependent payoff C(t) at time t, which is
measurable w.r.t. F_t.
Remarks.
• A typical contingent claim is a European call with payoff C(T) = max{S(T) −
K; 0}, where S(T) denotes the value of the underlying at time T and K the strike
price.
• American options are not contingent claims in the sense of Definition 2.14, but
one can generalize this definition accordingly.
• Recall the following paradigm: Arbitrage-free pricing of a contingent claim
means finding a hedging (replication) strategy for the corresponding payoff.
• In the single-period model, the price of the replication strategy is the price
of the contingent claim.
• In the multi-period model we need an additional requirement: The replication
strategy must not require intermediate inflows or outflows of cash.
• Only in this case are the initial costs equal to the fair price of the contingent
claim.
• This leads to the notion of a self-financing trading strategy.
Definition 2.15 (Self-financing Trading Strategy)
A trading strategy ϕ(t) = (ϕ_0(t), . . . , ϕ_J(t))' is said to be self-financing if for t =
1, . . . , T − 1
S(t)'ϕ(t) = S(t)'ϕ(t + 1)  ⇐⇒  Σ_{j=0}^{J} S_j(t) ϕ_j(t) = Σ_{j=0}^{J} S_j(t) ϕ_j(t + 1).
Remarks.
• To buy assets, the investor has to sell other ones.
• For this reason, the portfolio value changes only if prices change and not if the
numbers of assets in the portfolio change.
Notation:
• Change of the value of strategy ϕ at time t:
∆V_ϕ(t) := V_ϕ(t) − V_ϕ(t − 1).
• Price changes at time t:
∆S(t) = (∆S_0(t), . . . , ∆S_J(t))' := (S_0(t) − S_0(t − 1), . . . , S_J(t) − S_J(t − 1))'.
• Gains process of the trading strategy ϕ at time t:
ϕ(t)'∆S(t) = Σ_{j=0}^{J} ϕ_j(t)∆S_j(t).
• Cumulative gains process of the trading strategy ϕ at time t:
G_ϕ(t) := Σ_{τ=1}^{t} ϕ(τ)'∆S(τ) = Σ_{τ=1}^{t} Σ_{j=0}^{J} ϕ_j(τ)∆S_j(τ).
Proposition 2.11 (Characterization of a Self-financing Trading Strategy)
A self-financing trading strategy ϕ(t) has the following properties:
(i) The value of ϕ changes only due to price changes, i.e.
∆V_ϕ(t) = ϕ(t)'∆S(t).
(ii) The value of ϕ is equal to its initial value plus the cumulative gains, i.e.
V_ϕ(t) = V_ϕ(0) + G_ϕ(t) = V_ϕ(0) + Σ_{τ=1}^{t} ϕ(τ)'∆S(τ).
Proof. (i) For a self-financing strategy ϕ, we obtain
∆V_ϕ(t) = V_ϕ(t) − V_ϕ(t − 1)
= Σ_{j=0}^{J} S_j(t)ϕ_j(t) − Σ_{j=0}^{J} S_j(t − 1)ϕ_j(t − 1)
= Σ_{j=0}^{J} S_j(t)ϕ_j(t) − Σ_{j=0}^{J} S_j(t − 1)ϕ_j(t)   (∗)
= Σ_{j=0}^{J} ϕ_j(t)(S_j(t) − S_j(t − 1))
= Σ_{j=0}^{J} ϕ_j(t)∆S_j(t)
= ϕ(t)'∆S(t).
The equality (∗) is valid because ϕ is self-financing.
(ii) Any (not necessarily self-financing) trading strategy satisfies
V_ϕ(t) = V_ϕ(0) + Σ_{τ=1}^{t} ∆V_ϕ(τ).
Since ϕ is assumed to be self-financing, by (i), we obtain
V_ϕ(t) = V_ϕ(0) + Σ_{τ=1}^{t} ϕ(τ)'∆S(τ) = V_ϕ(0) + G_ϕ(t). 2
Remarks.
• In continuous-time models, the relation (i) is still valid if ∆ is replaced by d.
• Unfortunately, the intuitive relation
S(t)'ϕ(t) = S(t)'ϕ(t + 1)
has no continuous-time equivalent.
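Proposition 2.11 (ii) can be checked on a small numerical example. A sketch with invented prices (a bond growing at 5% per period and a stock following the path 100 → 110 → 121; the rebalancing rule is chosen only so that the strategy is self-financing):

```python
import numpy as np

# Prices (bond, stock) along one path.
S = np.array([[1.0,    100.0],   # t = 0
              [1.05,   110.0],   # t = 1
              [1.1025, 121.0]])  # t = 2

phi1 = np.array([10.0, 1.0])     # holdings over the period (0, 1]
V0 = S[0] @ phi1
V1 = S[1] @ phi1

# Rebalance at t = 1: halve the stock position and choose the bond
# holding so that S(1)'phi(1) = S(1)'phi(2), i.e. self-financing.
phi2_stock = 0.5
phi2_bond = (V1 - phi2_stock * S[1, 1]) / S[1, 0]
phi2 = np.array([phi2_bond, phi2_stock])
V2 = S[2] @ phi2

# Proposition 2.11 (ii): V(t) = V(0) + cumulative gains G(t).
gains = phi1 @ (S[1] - S[0]) + phi2 @ (S[2] - S[1])
assert np.isclose(V2, V0 + gains)
```

The portfolio value changes only through price moves; the rebalancing itself neither injects nor withdraws cash, which is exactly what the self-financing condition encodes.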
2.2.3 Some Basic Definitions Revisited
The notion of a self-financing trading strategy is also important to generalize the
following definitions to the multi-period setting:
Definition 2.16 (Arbitrage)
We distinguish two kinds of arbitrage opportunities:
(i) A free lunch is a self-financing trading strategy ϕ with
V_ϕ(0) < 0 and V_ϕ(T) ≥ 0.
(ii) A free lottery is a self-financing trading strategy ϕ with
V_ϕ(0) = 0 and V_ϕ(T) ≥ 0 and V_ϕ(T) ≠ 0.
Definition 2.17 (Replication of a Contingent Claim)
A contingent claim with payoff C(T) is said to be replicable if there exists a self-
financing trading strategy ϕ such that
C(T) = V_ϕ(T).
The trading strategy ϕ is said to be a replication (hedging) strategy for the claim
C(T).
Remarks.
• By deﬁnition, a replication strategy of a claim leads to the same payoﬀ as the
claim.
• In general, replication strategies are dynamic strategies, i.e. at every trading
date the hedge portfolio needs to be adjusted according to the arrival of new
information about prices.
The deﬁnition of a complete market remains the same:
Deﬁnition 2.18 (Completeness)
An arbitragefree market is said to be complete if each contingent claim can be
replicated.
2.2.4 Assumptions
(a) All investors agree upon which states cannot occur at times t > 0.
(b) All investors agree upon the range of the price processes S(t).
(c) At time t = 0 there is only one market price for each security and each investor
can buy and sell as many securities as desired without changing the market
price (price taker).
(d) Frictionless markets, i.e.
• assets are divisible,
• no restrictions on short sales,⁴
• no transaction costs,
• no taxes.
(e) The asset market contains no arbitrage opportunities, i.e. there are neither free
lunches nor free lotteries.
2.2.5 Main Results
Conventions:
• The 0-th asset is a money market account, i.e. it is locally riskfree with
S_0(0) = 1 and S_0(t) = Π_{u=1}^{t} (1 + r_u), t = 1, . . . , T,
where r_u denotes the interest rate for the period [u − 1, u].
• The discounted (normalized) price processes are defined by
Z_j(t) := S_j(t) / S_0(t).
• The money market account is thus used as a numeraire.
Definition 2.19 (Martingale)
A real-valued adapted process {Y(t)}_t, t = 0, . . . , T, with E[|Y(t)|] < ∞ is a
martingale w.r.t. a probability measure Q if for s ≤ t
E^Q[Y(t) | F_s] = Y(s).
Remark. A martingale is thus a process which is constant on average.
Important Facts:
⁴ A short sale of an asset is equivalent to selling an asset one does not own. An investor can do
this by borrowing the asset from a third party and selling the borrowed security. The net position
is a cash inflow equal to the price of the security and a liability (borrowing) of the security to
the third party. In abstract mathematical terms, a short sale corresponds to buying a negative
amount of a security.
• A multi-period model consists of a sequence of single-period models.
• Therefore, a multi-period model is arbitrage-free if all single-period models are
arbitrage-free (see, e.g., Bingham/Kiesel (2004), Chapter 4).
For these reasons, the following three theorems hold:
Theorem 2.12 (Characterization of No Arbitrage)
If assumptions (a)-(d) are satisfied, the following statements are equivalent:
(i) The financial market contains no arbitrage opportunities (assumption (e)).
(ii) There exists a state-dependent discount factor H, the so-called deflator, such that for all time points s, t ∈ {0, . . . , T} with s ≤ t and all assets j = 0, . . . , J we have
S_j(s) = E_P[H(t) S_j(t) | F_s] / H(s),
i.e. all prices can be represented as discounted expectations under the physical measure.
(iii) There exists a risk-neutral probability measure Q such that all discounted price processes Z_j(t) are martingales under Q, i.e. for all time points s, t ∈ {0, . . . , T} with s ≤ t and all assets j = 1, . . . , J we have
Z_j(s) = E_Q[Z_j(t) | F_s]  ⟺  S_j(s)/S_0(s) = E_Q[S_j(t)/S_0(t) | F_s].
Remarks.
• The equivalence "No Arbitrage ⇔ Existence of Q" is known as the Fundamental Theorem of Asset Pricing.
• Unfortunately, in continuous-time models this equivalence breaks down.
• However, the implication "Existence of Q ⇒ No Arbitrage" is still valid.
Theorem 2.13 (Pricing of Contingent Claims)
Consider a financial market satisfying assumptions (a)-(e) and a contingent claim with payoff C(T). Then we obtain for s, t ∈ {0, . . . , T} with s ≤ t:
(i) The replication strategy φ for the claim is unique, i.e. the arbitrage-free price of the claim is uniquely determined by
C(s) = V_φ(s).
(ii) Using the deflator, we can calculate the claim's time-s price
C(s) = E_P[H(t) C(t) | F_s] / H(s).
(iii) The following risk-neutral pricing formula holds:
C(s) = S_0(s) E_Q[C(t)/S_0(t) | F_s].
Remarks.
• Result (i) says that today's price of a claim is uniquely determined by its replication costs, i.e. C(0) = V_φ(0).
• From (iii) we get
C(s)/S_0(s) = E_Q[C(t)/S_0(t) | F_s],
i.e. the discounted price processes of contingent claims are Q-martingales as well. Note that one needs to discount with the money market account.
• If interest rates are constant, then from (iii)
C(0) = E_Q[C(T)/(1+r)^T] = E_Q[C(T)] / (1+r)^T.
• Pricing of contingent claims boils down to computing expectations under the risk-neutral measure.
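As an illustration of the constant-rate formula, the sketch below prices a call in a hypothetical one-period binomial model: q is chosen so that the discounted stock is a Q-martingale, and C(0) is the discounted Q-expectation of the payoff. All parameters (u, d, r, strike) are invented for the example.

```python
# Sketch: risk-neutral pricing C(0) = E_Q[C(T)] / (1+r)^T in a
# one-period binomial model (all numbers hypothetical).
s0, u, d, r = 100.0, 1.2, 0.9, 0.05   # up/down factors, period rate
strike = 100.0

# The risk-neutral probability makes the discounted stock a Q-martingale:
# s0 = [q*u*s0 + (1-q)*d*s0] / (1+r)  =>  q = (1+r-d)/(u-d)
q = (1 + r - d) / (u - d)

payoff_up = max(u * s0 - strike, 0.0)
payoff_down = max(d * s0 - strike, 0.0)

c0 = (q * payoff_up + (1 - q) * payoff_down) / (1 + r)
print(round(c0, 4))  # 9.5238
```

Here q = 0.5 and C(0) = 10/1.05; note that q is determined by no-arbitrage alone, not by the physical up-probability.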
Theorem 2.14 (Completeness)
Consider a financial market satisfying assumptions (a)-(e). Then the following statements are equivalent:
(i) The market is complete.
(ii) There exists a unique deflator H.
(iii) There exists a unique martingale measure Q.
2.2.6 Summary
The following results are important:
• It is always valid that a financial market contains no arbitrage opportunities if an equivalent martingale measure exists.
• In our discrete model, the converse is also valid: If the market contains no arbitrage opportunities, there exists a martingale measure.
• If a martingale measure Q exists, the discounted price processes are Q-martingales.
• If a unique martingale measure exists, the market is arbitrage-free and complete.
• If a unique martingale measure exists, today's price of a contingent claim is given by the following Q-expectation:
C(0) = E_Q[C(T)/S_0(T)].
3 Introduction to Stochastic Calculus
3.1 Motivation
3.1.1 Why Stochastic Calculus?
• In the previous section, the investor was able to trade at finitely many dates t = 0, . . . , T only.
• In a continuous-time model, trading takes place continuously, i.e. at any date in the time period [0, T].
• Consequently, price dynamics are modeled continuously.
• In continuous-time models asset dynamics are modeled via stochastic differential equations.
Example "Black-Scholes Model". Two investment opportunities:
• money market account: riskless asset with constant interest rate r modeled via an ordinary differential equation (ODE):
dM(t) = M(t) r dt, M(0) = 1.
• stock: risky asset modeled via a stochastic differential equation (SDE):
dS(t) = S(t)[µ dt + σ dW(t)], S(0) = s_0,
where W is a Brownian motion.
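A minimal simulation sketch of the stock SDE, using the well-known closed-form solution S(t) = s_0 exp((µ − σ²/2)t + σW(t)) (which will be derived later via the Ito formula); all parameters are hypothetical.

```python
# Sketch: simulating the Black-Scholes stock dynamics via the exact
# solution S(t) = s0 * exp((mu - sigma^2/2) t + sigma W(t)).
# Parameters (mu, sigma, s0, T, n) are hypothetical.
import math
import random

random.seed(0)
mu, sigma, s0, T, n = 0.08, 0.2, 100.0, 1.0, 252
dt = T / n

s = s0
path = [s]
for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(dt))     # Brownian increment over dt
    s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * dw)
    path.append(s)

print(len(path))  # 253 values: S(0), S(dt), ..., S(T)
```

Because the exact solution is used step by step, the only error in the simulated marginal distribution is Monte Carlo noise, not discretization bias.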
In this section we answer the following questions:
Q1 What is a Brownian motion?
Q2 What is the meaning of dW(t)? → Stochastic Integral
Q3 What are stochastic differential equations and do explicit solutions exist? → Variation of Constants
Q4 What do the dynamics of contingent claims look like? → Ito Formula
3.1.2 What is a Reasonable Model for Stock Dynamics?
Model for the Money Market Account
The solution of the ODE B'(t) = r B(t) with B(0) = b_0 is
B(t) = b_0 exp(rt)
and thus
ln(B(t)) = ln(b_0) + r t.
Model for the Stock
The result for the money market account motivates the following model for the stock dynamics:
ln(S(t)) = ln(s_0) + α t + "randomness".
Notation: Y(t) = "randomness", i.e. Y(t) = ln(S(t)) − ln(s_0) − α t.
Reasonable properties of Y:
1) no tendency, i.e. E[Y(t)] = 0,
2) uncertainty increasing with time t, i.e. Var(Y(t)) increases with t,
3) no memory, i.e. for Y(t) = Y(s) + [Y(t) − Y(s)] the difference [Y(t) − Y(s)]
• depends only on t − s and
• is independent of Y(s).
Properties which would make life easier:
4) normally distributed, i.e. Y(t) ∼ N(0, σ² t), σ > 0,
5) continuous.
3.2 On Stochastic Processes
Definition 3.1 (Stochastic Process)
A stochastic process X defined on (Ω, F) is a collection of random variables {X_t}_{t∈I}, i.e. σ{X_t} ⊂ F.
Remark. Unless otherwise stated, our stochastic processes take values in the measurable space (IR^n, B(IR^n)), i.e. the n-dimensional Euclidean space equipped with the σ-field of Borel sets. Therefore, every X_t is (F, B(IR^n))-measurable.
3.2.1 Cadlag and Caglad
Definition 3.2 (Cadlag and Caglad Functions)
Let f : [0, T] → IR^n be a function with existing left and right limits, i.e. for each t ∈ [0, T] the limits
f(t−) = lim_{s↑t} f(s), f(t+) = lim_{s↓t} f(s)
exist. The function f is cadlag if f(t) = f(t+) and caglad if f(t) = f(t−).
A cadlag function cannot jump around too wildly, as the next proposition shows.
Proposition 3.1
(i) A cadlag function f can have at most a countable number of discontinuities, i.e. {t ∈ [0, T] : f(t) ≠ f(t−)} is countable.
(ii) For any ε > 0, the number of discontinuities on [0, T] larger than ε is finite.
Proof. See Fristedt/Gray (1997).
3.2.2 Modification, Indistinguishability, Measurability
Recall the following definition:
Definition 3.3 (Filtration)
Consider a σ-field F. A family {F_t} of sub-σ-fields F_t, t ∈ I, is said to be a filtration if F_s ⊂ F_t for s ≤ t and s, t ∈ I. Here I denotes some ordered index set.
Definition 3.4 (Adapted Process)
A process X is said to be adapted to the filtration {F_t}_{t∈I} if every X_t is measurable w.r.t. F_t, i.e. σ{X_t} ⊂ F_t.
Remarks.
• The notation {X_t, F_t}_{t∈I} indicates that the process X is adapted to {F_t}_{t∈I}.
• For fixed ω ∈ Ω, the realization {X_t(ω)}_t is said to be a path of X. For this reason, a stochastic process can be interpreted as a function-valued random variable.
Definition 3.5 (Modification, Indistinguishability)
Let X and Y be stochastic processes.
(i) Y is a modification of X if
P(ω : X_t(ω) = Y_t(ω)) = 1 for all t ≥ 0.
(ii) X and Y are indistinguishable if
P(ω : X_t(ω) = Y_t(ω) for all t ≥ 0) = 1,
i.e. X and Y have the same paths P-a.s.
Example. Consider a positive random variable τ with a continuous distribution. Set X_t ≡ 0 and Y_t = 1_{{t=τ}}. Then Y is a modification of X, but P(X_t = Y_t for all t ≥ 0) = 0.
Proposition 3.2 (Modification, Indistinguishability)
Let X and Y be stochastic processes.
(i) If X and Y are indistinguishable, then Y is a modification of X and vice versa.
(ii) If X and Y have right-continuous paths and Y is a modification of X, then X and Y are indistinguishable.
(iii) Let X be adapted to {F_t}. If Y is a modification of X and F_0 contains all P-null sets of F, then Y is adapted to {F_t}_t.
Proof. (i) is obvious.
(ii) Since Y is a modification of X, we have
P(ω : X_t(ω) = Y_t(ω) for all t ∈ [0, ∞) ∩ Q) = 1.
By the right-continuity assumption, the claim follows.
(iii) Exercise. □
If X is an (adapted) stochastic process, then every X_t is (F, B(IR^n))-measurable ((F_t, B(IR^n))-measurable). However, X is actually a function-valued random variable, and so, for technical reasons, it is often convenient to have some joint measurability properties.
Definition 3.6 (Measurable Stochastic Process)
A stochastic process X is said to be measurable if for each A ∈ B(IR^n) the set {(t, ω) : X_t(ω) ∈ A} belongs to the product σ-field B([0, ∞)) ⊗ F, i.e. if the mapping
(t, ω) ↦ X_t(ω) : ([0, ∞) × Ω, B([0, ∞)) ⊗ F) → (IR^n, B(IR^n))
is measurable.
Definition 3.7 (Progressively Measurable Stochastic Process)
A stochastic process X is said to be progressively measurable w.r.t. the filtration {F_t} if for each t ≥ 0 and A ∈ B(IR^n) the set {(s, ω) ∈ [0, t] × Ω : X_s(ω) ∈ A} belongs to the product σ-field B([0, t]) ⊗ F_t, i.e. if the mapping
(s, ω) ↦ X_s(ω) : ([0, t] × Ω, B([0, t]) ⊗ F_t) → (IR^n, B(IR^n))
is measurable for each t ≥ 0.
Remark. Any progressively measurable process is measurable and adapted (Exercise). The following proposition provides the extent to which the converse is true.
Proposition 3.3
If a stochastic process is measurable and adapted to the filtration {F_t}, then it has a modification which is progressively measurable w.r.t. {F_t}.
Proof. See Karatzas/Shreve (1991, p. 5).
Fortunately, if a stochastic process is left- or right-continuous, a stronger result can be proved:
Proposition 3.4
If the stochastic process X is adapted to the filtration {F_t} and right- or left-continuous, then X is also progressively measurable w.r.t. {F_t}.
Proof. See Karatzas/Shreve (1991, p. 5).
Remark. If X is right- or left-continuous, but not necessarily adapted to {F_t}, then X is at least measurable.
3.2.3 Stopping Times
Consider a measurable space (Ω, F). An F-measurable random variable τ : Ω → I ∪ {∞}, where I = [0, ∞) or I = IN, is said to be a random time. Two special classes of random times are of particular importance.
Definition 3.8 (Stopping Time, Optional Time)
(i) A random time τ is a stopping time w.r.t. the filtration {F_t}_{t∈I} if the event {τ ≤ t} ∈ F_t for every t ∈ I.
(ii) A random time τ is an optional time w.r.t. the filtration {F_t}_{t∈I} if the event {τ < t} ∈ F_t for every t ∈ I.
Propositions 3.5 and 3.6 show how both concepts are connected.
Proposition 3.5
If τ is optional and c > 0 is a positive constant, then τ +c is a stopping time.
Proof. Exercise.
Definition 3.9 (Continuity of Filtrations)
(i) A filtration {G_t} is said to be right-continuous if
G_t = G_{t+} := ∩_{ε>0} G_{t+ε}.
(ii) A filtration {G_t} is said to be left-continuous if
G_t = G_{t−} := σ( ∪_{s<t} G_s ).
Remark. Let X be a stochastic process.
• Loosely speaking, left-continuity of {F^X_t} means that X_t can be guessed by observing X_s, 0 ≤ s < t.
• Right-continuity means intuitively that if X_s has been observed for 0 ≤ s ≤ t, then one cannot learn more by peeking infinitesimally far into the future.
Proposition 3.6
Every stopping time is optional, and both concepts coincide if the filtration is right-continuous.
Proof. Consider a stopping time τ. Then {τ < t} = ∪_{t>ε>0} {τ ≤ t − ε} and {τ ≤ t − ε} ∈ F_{t−ε} ⊂ F_t, i.e. τ is optional. On the other hand, consider an optional time τ w.r.t. a right-continuous filtration {F_t}. Since {τ ≤ t} = ∩_{t+ε>u>t} {τ < u}, we have {τ ≤ t} ∈ F_{t+ε} for every ε > 0, implying {τ ≤ t} ∈ ∩_{ε>0} F_{t+ε} = F_{t+} = F_t. □
Remark. In our applications, we will always work with right-continuous filtrations (see Definition 3.18).
Proposition 3.7
(i) Let τ_1 and τ_2 be stopping times. Then τ_1 ∧ τ_2 = min{τ_1, τ_2}, τ_1 ∨ τ_2 = max{τ_1, τ_2}, τ_1 + τ_2, and ατ_1 with α ≥ 1 are stopping times.
(ii) Let {τ_n}_{n∈IN} be a sequence of optional times. Then the random times
sup_{n≥1} τ_n, inf_{n≥1} τ_n, limsup_{n→∞} τ_n, liminf_{n→∞} τ_n
are all optional. Furthermore, if the τ_n's are stopping times, then so is sup_{n≥1} τ_n.
Proof. Exercise.
Proposition and Definition 3.8 (Stopping Time σ-Field)
Let τ be a stopping time of the filtration {F_t}. Then the set
{A ∈ F : A ∩ {τ ≤ t} ∈ F_t for all t ≥ 0}
is a σ-field, the so-called stopping time σ-field F_τ.
Proof. Exercise.
Proposition 3.9 (Properties of a Stopped σ-Field)
Let τ_1 and τ_2 be stopping times.
(i) If τ_1 = t ∈ [0, ∞), then F_{τ_1} = F_t.
(ii) If τ_1 ≤ τ_2, then F_{τ_1} ⊂ F_{τ_2}.
(iii) F_{τ_1∧τ_2} = F_{τ_1} ∩ F_{τ_2} and the events
{τ_1 < τ_2}, {τ_2 < τ_1}, {τ_1 ≤ τ_2}, {τ_2 ≤ τ_1}, {τ_1 = τ_2}
belong to F_{τ_1} ∩ F_{τ_2}.
Proof. Exercise.
Definition 3.10 (Stopped Process)
(i) If X is a stochastic process and τ is a random time, we define the function X_τ on the event {τ < ∞} by
X_τ(ω) := X_{τ(ω)}(ω).
If X_∞(ω) is defined for all ω ∈ Ω, then X_τ can also be defined on Ω by setting X_τ(ω) := X_∞(ω) on {τ = ∞}.
(ii) The process stopped at time τ is the process {X_{t∧τ}}_{t∈[0,∞)}.
Proposition 3.10 (Measurability of a Stopped Process)
(i) If the process X is measurable and the random time τ is finite, then the function X_τ is a random variable.
(ii) If the process X is progressively measurable w.r.t. {F_t}_{t∈[0,∞)} and τ is a stopping time of this filtration, then the random variable X_τ, defined on the set {τ < ∞} ∈ F_τ, is F_τ-measurable, and the stopped process {X_{τ∧t}, F_t}_{t∈[0,∞)} is progressively measurable.
Proof. Karatzas/Shreve (1991, p. 5 and p. 9).
Remark. The following result is obvious: If X is adapted and cadlag and τ is a stopping time, then
X_{t∧τ} = X_t 1_{{t<τ}} + X_τ 1_{{t≥τ}}
is also adapted and cadlag.
Definition 3.11 (Hitting Time)
Let X be a stochastic process and let A ∈ B(IR^n) be a Borel set in IR^n. Define
τ_A(ω) = inf{t > 0 : X_t(ω) ∈ A}.
Then τ_A is called the hitting time of A for X.
Proposition 3.11
Let X be a right-continuous process adapted to the filtration {F_t}.
(i) If A_o is an open set, then the hitting time of A_o is an optional time.
(ii) If A_c is a closed set, then the random variable
τ̃_{A_c}(ω) = inf{t > 0 : X_t(ω) ∈ A_c or X_{t−}(ω) ∈ A_c}
is a stopping time.
Proof. (i) For simplicity, let n = 1. We have {τ_{A_o} < t} = ∪_{s∈Q∩[0,t)} {X_s ∈ A_o}. The inclusion ⊃ is obvious. Suppose that the inclusion ⊂ does not hold. Then we have X_i ∈ A_o for some i ∈ IR \ Q with i < t, but X_q ∉ A_o for all q ∈ Q with q < t. Since A_o is open, there exists a δ > 0 such that (X_i − δ, X_i + δ) ⊂ A_o. Therefore, X_q ∉ (X_i − δ, X_i + δ) for all q ∈ Q with q < t. However, there exists a sequence (q_n)_{n∈IN} with q_n ↓ i and q_n ∈ Q ∩ [0, t), so right-continuity of X would give X_{q_n} → X_i, which is a contradiction.
(ii) See Karatzas/Shreve (1991, p. 39) and Protter (2005, p. 5). □
Remarks.
• If the filtration is right-continuous, then the hitting time τ_{A_o} is a stopping time.
• If X is continuous, then the stopping time τ̃_{A_c} is a hitting time of A_c.
• Even if X is continuous, in general the hitting time of A_o is not a stopping time. For example, suppose that X is real-valued, that for some ω, X_t(ω) ≤ 1 for t ≤ 1, and that X_1(ω) = 1. Let A_o = (1, ∞). Without looking slightly ahead of time 1, we cannot tell whether or not τ_{A_o} = 1.
3.3 Prominent Examples of Stochastic Processes
Theorem and Definition 3.12 (Brownian Motion)
(i) There exists an adapted, real-valued process {W_t, F_t}_{t∈[0,∞)} such that W_0 = 0 a.s. and
• W_t − W_s ∼ N(0, t − s) for 0 ≤ s < t ("stationary increments"),
• W_t − W_s is independent of F_s for 0 ≤ s < t ("independent increments").
Such a process is said to be a one-dimensional Brownian motion.
(ii) An n-dimensional Brownian motion W := (W_1, . . . , W_n) is an n-dimensional vector of independent one-dimensional Brownian motions.
Proof. Applying Kolmogorov's extension theorem (see footnote 6), it is straightforward to prove the existence of a Brownian motion. See, e.g., Karatzas/Shreve (1991, pp. 47ff). □
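The two defining increment properties can be checked by Monte Carlo: since W_t − W_s ∼ N(0, t − s) is independent of W_s ∼ N(0, s), the variances add and Var(W_t) = t. A sketch with hypothetical parameters:

```python
# Monte Carlo sketch of the Brownian increment properties:
# W_t = W_s + (W_t - W_s) with independent normal summands, so
# Var(W_t) = s + (t - s) = t. Parameters are hypothetical.
import math
import random

random.seed(1)
s, t, n_paths = 0.5, 2.0, 100_000

var_wt = 0.0
for _ in range(n_paths):
    ws = random.gauss(0.0, math.sqrt(s))        # W_s ~ N(0, s)
    inc = random.gauss(0.0, math.sqrt(t - s))   # W_t - W_s ~ N(0, t - s)
    wt = ws + inc                                # W_t
    var_wt += wt * wt
var_wt /= n_paths

print(abs(var_wt - t) < 0.1)  # True
```

The tolerance is generous relative to the Monte Carlo standard error, so the check is stable across seeds.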
Theorem 3.13 (Kolmogorov's Continuity Theorem)
Suppose that the process X = {X_t}_{t∈[0,∞)} satisfies the following condition: For all T > 0 there exist positive constants α, β, C such that
E[|X_t − X_s|^α] ≤ C |t − s|^{1+β}, 0 ≤ s, t ≤ T.
Then there exists a continuous modification of X.
Proof. See Karatzas/Shreve (1991, pp. 53ff).
Corollary 3.14 (Continuous Version of the Brownian Motion)
An n-dimensional Brownian motion possesses a continuous modification.
Proof. One can show that
E[|W_t − W_s|^4] = n(n + 2) |t − s|^2.
Hence, we can apply Kolmogorov's continuity theorem with α = 4, C = n(n + 2), and β = 1. □
Remark. We will always work with this continuous version.
Definition 3.12 (Poisson Process)
A Poisson process with intensity λ > 0 is an adapted, integer-valued cadlag process N = {N_t, F_t}_{t∈[0,∞)} such that N_0 = 0 and, for 0 ≤ s < t < ∞, the difference N_t − N_s is independent of F_s and is Poisson distributed with mean λ(t − s) (see footnote 7).
3.4 Martingales
Definition 3.13 (Martingale, Supermartingale, Submartingale)
A process X = {X_t}_{t∈[0,∞)} adapted to the filtration {F_t}_{t∈[0,∞)} is called a martingale (resp. supermartingale, submartingale) with respect to {F_t}_{t∈[0,∞)} if
(i) X_t ∈ L^1(dP), i.e. E[|X_t|] < ∞,
(ii) if s ≤ t, then E[X_t | F_s] = X_s a.s. (resp. E[X_t | F_s] ≤ X_s, resp. E[X_t | F_s] ≥ X_s).
Footnote 6: Kolmogorov's extension theorem basically says that for given (finite-dimensional) distributions satisfying a certain consistency condition there exists a probability space and a stochastic process such that its (finite-dimensional) distributions coincide with the given distributions. See, e.g., Karatzas/Shreve (1991, p. 50).
Footnote 7: A random variable Y taking values in IN ∪ {0} is Poisson distributed with parameter λ > 0 if P(Y = n) = e^{−λ} λ^n / n!, n ∈ IN ∪ {0}.
Examples.
• Let W be a one-dimensional Brownian motion. Then W is a martingale and W² is a submartingale.
• The arithmetic Brownian motion X_t := µt + σW_t, µ, σ ∈ IR, is a martingale for µ = 0, a submartingale for µ > 0, and a supermartingale for µ < 0.
• Let N be a Poisson process with intensity λ. Then {N_t − λt}_t is a martingale.
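A quick Monte Carlo sanity check of the last example: the compensated Poisson process N_t − λt should have mean zero, since a martingale started at 0 has constant expectation 0. The sampling scheme (summing exponential waiting times) is standard; the parameters are chosen arbitrarily for the sketch.

```python
# Sketch: E[N_t - lambda*t] should be (approximately) zero for the
# compensated Poisson process. Parameters are hypothetical.
import random

random.seed(3)
lam, t, n_paths = 2.0, 3.0, 100_000

total = 0.0
for _ in range(n_paths):
    # Sample N_t by counting exponential(lam) waiting times up to t
    arrivals, clock = 0, 0.0
    while True:
        clock += random.expovariate(lam)
        if clock > t:
            break
        arrivals += 1
    total += arrivals - lam * t     # one sample of N_t - lambda*t

print(abs(total / n_paths) < 0.1)  # True
```

The sample mean is within a few Monte Carlo standard errors of zero, consistent with the martingale property (the same experiment with N_t alone would give mean ≈ λt instead).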
Definition 3.14 (Square-Integrable)
Let X = {X_t, F_t}_{t∈[0,∞)} be a right-continuous martingale. We say that X is square-integrable if E[X_t²] < ∞ for every t ≥ 0. If, in addition, X_0 = 0 a.s., we write X ∈ M_2 (or X ∈ M_2^c if X is also continuous).
Proposition 3.15 (Functions of Submartingales)
(i) Let X = (X_1, . . . , X_n) with real-valued components X_k be a martingale w.r.t. {F_t}_{t∈[0,∞)} and let φ : IR^n → IR be a convex function such that E[|φ(X_t)|] < ∞ for all t ∈ [0, ∞). Then φ(X) is a submartingale w.r.t. {F_t}_{t∈[0,∞)}.
(ii) Let X = (X_1, . . . , X_n) with real-valued components X_k be a submartingale w.r.t. {F_t}_{t∈[0,∞)} and let φ : IR^n → IR be a convex, non-decreasing function such that E[|φ(X_t)|] < ∞ for all t ∈ [0, ∞). Then φ(X) is a submartingale w.r.t. {F_t}_{t∈[0,∞)}.
Proof. Follows from Jensen's inequality (Exercise). □
For any X ∈ M_2 and 0 ≤ t < ∞, we define
||X||_t := (E[X_t²])^{1/2} and ||X|| := Σ_{n=1}^∞ (||X||_n ∧ 1) / 2^n.
Note that ||X − Y|| defines a pseudometric on M_2. If we identify indistinguishable processes, the pseudometric ||X − Y|| becomes a metric on M_2 (see footnote 8).
Proposition 3.16
If we identify indistinguishable processes, the pair (M_2, ||·||) is a complete metric space, and M_2^c is a closed subset of M_2.
Proof. See, e.g., Karatzas/Shreve (1991, pp. 37f).
An important question now arises: What happens to the martingale property of a
stopped martingale? For bounded stopping times, nothing changes.
Theorem 3.17 (Optional Sampling Theorem, 1st Version)
Let X = {X_t, F_t}_{t∈[0,∞)} be a right-continuous submartingale (supermartingale) and let τ_1 and τ_2 be bounded stopping times of the filtration {F_t} with τ_1 ≤ τ_2 ≤ c ∈ IR a.s. Then
E[X_{τ_2} | F_{τ_1}] ≥ (≤) X_{τ_1}.
If the stopping times are unbounded, this result in general only holds if X can be extended to ∞ such that X̃ := {X_t, F_t}_{t∈[0,∞]} is still a sub- or supermartingale.
Footnote 8: This means: (i) ||X − Y|| ≥ 0 and ||X − Y|| = 0 iff X = Y. (ii) ||X − Y|| = ||Y − X||. (iii) ||X − Y|| ≤ ||X − Z|| + ||Z − Y||.
Theorem 3.18 (Sub- and Supermartingale Convergence)
Let X = {X_t, F_t}_{t∈[0,∞)} be a right-continuous submartingale (supermartingale) and assume sup_{t≥0} E[X_t^+] < ∞ (sup_{t≥0} E[X_t^−] < ∞). Then the limit X_∞(ω) = lim_{t→∞} X_t(ω) exists for a.e. ω ∈ Ω, and E[|X_∞|] < ∞.
Proof. See Karatzas/Shreve (1991, pp. 17f).
This theorem ensures that a limit exists, but it is not at all clear whether X̃ is a sub- or supermartingale. To solve this problem, we need to require the following condition:
Definition 3.15 (Uniform Integrability)
A family of random variables (U_α)_{α∈A} is uniformly integrable (UI) if
lim_{n→∞} sup_{α∈A} ∫_{{|U_α|≥n}} |U_α| dP = 0.
Using this definition directly, it is sometimes not easy to check whether a family of random variables is uniformly integrable.
Theorem 3.19 (Characterization of UI)
Let (U_α)_{α∈A} be a subset of L^1. The following are equivalent:
(i) (U_α)_{α∈A} is uniformly integrable.
(ii) There exists a positive, increasing, convex function g defined on [0, ∞) such that
lim_{x→∞} g(x)/x = +∞ and sup_{α∈A} E[g(|U_α|)] < ∞.
Remark. A good choice is often g(x) = x^{1+ε} with ε > 0.
Proposition 3.20
The following conditions are equivalent for a non-negative, right-continuous submartingale X = {X_t, F_t}_{t∈[0,∞)}:
(i) It is a uniformly integrable family of random variables.
(ii) It converges in L^1 as t → ∞.
(iii) It converges P-a.s. (t → ∞) to an integrable random variable X_∞ such that {X_t, F_t}_{t∈[0,∞]} is a submartingale.
For the implications "(i) ⇒ (ii) ⇒ (iii)", the assumption of non-negativity can be dropped.
Theorem 3.21 (UI Martingales Can Be Closed)
The following conditions are equivalent for a right-continuous martingale X = {X_t, F_t}_{t∈[0,∞)}:
(i) It is a uniformly integrable family of random variables.
(ii) It converges in L^1 as t → ∞.
(iii) It converges P-a.s. (t → ∞) to an integrable random variable X_∞ such that {X_t, F_t}_{t∈[0,∞]} is a martingale, i.e. the martingale is closed by X_∞.
(iv) There exists an integrable random variable Y such that X_t = E[Y | F_t] a.s. for all t ≥ 0.
If (iv) holds and X_∞ is the random variable in (iii), then E[Y | F_∞] = X_∞ a.s.
We now formulate a more general version of Doob's optional sampling theorem:
Theorem 3.22 (Optional Sampling Theorem, 2nd Version)
Let X = {X_t, F_t}_{t∈[0,∞]} be a right-continuous submartingale (supermartingale) with a last element X_∞, and let τ_1 and τ_2 be stopping times of the filtration {F_t} with τ_1 ≤ τ_2 a.s. Then
E[X_{τ_2} | F_{τ_1}] ≥ (≤) X_{τ_1}.
Proof. See, e.g., Karatzas/Shreve (1991, pp. 19f).
Proposition 3.23 (Stopped (Sub)Martingales)
Let X = {X_t, F_t}_{t∈[0,∞)} be an adapted right-continuous (sub)martingale and let τ be a bounded stopping time. Then the stopped process {X_{t∧τ}, F_t}_{t∈[0,∞)} is also an adapted right-continuous (sub)martingale.
Proof. Exercise.
Remarks.
• The difficulty in this proposition is to show that the stopped process is a (sub)martingale for the filtration {F_t}. It is obvious that the stopped process is a (sub)martingale for {F_{t∧τ}}.
• If {X_t, F_t}_{t∈[0,∞]} is a UI right-continuous martingale, then the stopping time τ may be unbounded (see Protter (2005, p. 10)).
Corollary 3.24 (Conditional Expectation and Stopped σ-Fields)
Let τ_1 and τ_2 be stopping times and let Z be an integrable random variable. Then
E[E[Z | F_{τ_1}] | F_{τ_2}] = E[E[Z | F_{τ_2}] | F_{τ_1}] = E[Z | F_{τ_1∧τ_2}], P-a.s.
Proof. Exercise.
For technical reasons, we sometimes need to weaken the martingale concept.
Definition 3.16 (Local Martingale)
Let X = {X_t, F_t}_{t∈[0,∞)} be a (continuous) process. If there exists a non-decreasing sequence {τ_n}_{n∈IN} of stopping times of {F_t} such that {X_t^{(n)} := X_{t∧τ_n}, F_t}_{t∈[0,∞)} is a martingale for each n ≥ 1 and P(lim_{n→∞} τ_n = ∞) = 1, then we say that X is a (continuous) local martingale.
Remarks.
• Every martingale is a local martingale (choose τ_n = n).
• However, there exist local martingales which are not martingales. In particular, E[X_t] need not exist for a local martingale.
Proposition 3.25
A non-negative local martingale is a supermartingale.
Proof. Follows from Fatou's lemma (Exercise). □
Deﬁnition 3.17 (Local Property)
Let X be a stochastic process. A property π is said to hold locally if there exists a
sequence of stopping times ¦τ
n
¦
n∈IN
with P(lim
n→∞
τ
n
= ∞) = 1 such that X
(n)
has property π for each n ≥ 1.
3.5 Completion, Augmentation, Usual Conditions
The following definition is very important:
Definition 3.18 (Usual Conditions)
A filtered probability space (Ω, G, P, {G_t}) satisfies the usual conditions if
(i) it is complete (see footnote 9),
(ii) G_0 contains all the P-null sets of G, and
(iii) {G_t} is right-continuous.
In the sequel, we impose the following
Standing assumption: We always consider a filtered probability space (Ω, F, P, {F_t}) satisfying the usual conditions.
One of the main reasons for this assumption is that the following proposition holds:
Proposition 3.26
Let (Ω, G, P, {G_t}) satisfy the usual conditions. Then every martingale for {G_t} possesses a unique cadlag modification.
Proof. See, e.g., Karatzas/Shreve (1991, pp. 16f).
Unfortunately, we are confronted with the following result.
Proposition 3.27
The natural filtration {F^W_t} of a Brownian motion W is left-continuous, but fails to be right-continuous.
Footnote 9: A probability space is complete if for any N ⊂ Ω such that there exists an Ñ ∈ G with N ⊂ Ñ and P(Ñ) = 0, we have N ∈ G.
Fortunately, there is a way out. Let F^W_∞ := σ( ∪_{t≥0} F^W_t ) and consider the space (Ω, F^W_∞, P). Define the collection of P-null sets by
N := {A ⊂ Ω : ∃ B ∈ F^W_∞ with A ⊂ B, P(B) = 0}.
The P-augmentation of {F^W_t} is defined by F_t := σ( F^W_t ∪ N ) and is called the Brownian filtration.
Theorem 3.28 (Brownian Filtration)
The Brownian filtration is left- and right-continuous. Besides, W is still a Brownian motion w.r.t. the Brownian filtration.
This is the reason why we work with the Brownian filtration instead of {F^W_t}. For a Poisson process, a similar result holds.
Proposition 3.29 (Augmented Filtration of a Poisson Process)
The P-augmentation of the natural filtration induced by a Poisson process N is right-continuous. Besides, N is still a Poisson process w.r.t. this P-augmentation.
The proofs of the previous results can be found in Karatzas/Shreve (1991), pp. 89ff.
Remark. More generally, one can prove the following results:
(i) If a stochastic process X is left-continuous, then the filtration {F^X_t} is left-continuous.
(ii) Even if X is continuous, {F^X_t} need not be right-continuous.
(iii) For an n-dimensional strong Markov process X, the augmented filtration is right-continuous.
3.6 Ito Integral
Our goal is to define an integral of the form
∫_0^t X(s) dW(s),
where X is an appropriate stochastic process. From the Lebesgue-Stieltjes theory we know that for a measurable function f and a process A of finite variation on [0, t] one can define
∫_0^t f(s) dA(s) = lim_{n→∞} Σ_{k=0}^{n−1} f(kt/n) [A((k+1)t/n) − A(kt/n)].
However, the variation of a Brownian motion is unbounded on any interval. Therefore, naive pathwise stochastic integration is impossible (see Protter (2005, pp. 43ff)).
Good news: We can define a stochastic integral in terms of L²-convergence!
Agenda (disregarding measurability for the moment):
1st Step. Define a stochastic integral for simple processes κ (processes that are constant except for finitely many jumps):
I_t(κ) := ∫_0^t κ_s dW_s := ?
2nd Step. Every L²[0, T]-process X can be approximated by a sequence {κ^(n)}_n of simple processes such that
||X − κ^(n)||²_T := E[ ∫_0^T (X(s) − κ^(n)(s))² ds ] → 0.
3rd Step. For each n the stochastic integral
I_t(κ^(n)) = ∫_0^t κ^(n)(s) dW(s)
is a well-defined random variable and the sequence {I_t(κ^(n))}_n converges (in some sense) to a random variable I_t.
4th Step. Define the stochastic integral for an L²[0, T]-process X as follows:
∫_0^t X(s) dW(s) := lim_{n→∞} I_t(κ^(n)) = lim_{n→∞} ∫_0^t κ^(n)(s) dW(s).
5th Step. Extend the definition by localization.
Standing Assumption. {F_t} is the Brownian filtration of a Brownian motion W.
1st Step.
Definition 3.19 (Simple Process)
A stochastic process {κ_t}_{t∈[0,T]} is a simple process if there exist real numbers 0 = t_0 < t_1 < . . . < t_p = T, p ∈ IN, and bounded random variables Φ_i : Ω → IR, i = 0, 1, . . . , p, such that Φ_0 is F_0-measurable, Φ_i is F_{t_{i−1}}-measurable for i ≥ 1, and for all ω ∈ Ω we have
κ_t(ω) = Φ_0(ω) 1_{{0}}(t) + Σ_{i=1}^p Φ_i(ω) 1_{(t_{i−1}, t_i]}(t).
Remarks.
• κ_t is F_{t_{i−1}}-measurable for t ∈ (t_{i−1}, t_i] (measurable w.r.t. the left-hand endpoint of the interval).
• κ is a left-continuous step function.
Definition 3.20 (Stochastic Integral for Simple Processes)
For a simple process κ and t ∈ [0, T] the stochastic integral I(κ) is defined by
I_t(κ) := ∫_0^t κ_s dW_s := Σ_{1≤i≤p} Φ_i (W_{t_i∧t} − W_{t_{i−1}∧t}).
Proposition 3.30 (Stochastic Integral for Simple Processes)
For a simple process κ we have:
(i) The integral I(κ) = {I_t(κ)}_{t∈[0,T]} is a continuous martingale for {F_t}_{t∈[0,T]}.
(ii) E[(I_t(κ))²] = E[ ∫_0^t κ_s² ds ] for all t ∈ [0, T].
(iii) E[ sup_{0≤t≤T} |I_t(κ)|² ] ≤ 4 E[ ∫_0^T κ_s² ds ].
Proof. (i) Exercise.
(ii) For simplicity, let t = t_k for some k. Then
E[I_t(κ)²] = Σ_{i=1}^k Σ_{j=1}^k E[ Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}}) ].
1st case: i > j.
E[ Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}}) ]
= E[ E[ Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}}) | F_{t_{i−1}} ] ]
= E[ Φ_i Φ_j (W_{t_j} − W_{t_{j−1}}) ( E[W_{t_i} | F_{t_{i−1}}] − W_{t_{i−1}} ) ]
= 0,
since E[W_{t_i} | F_{t_{i−1}}] = W_{t_{i−1}}.
2nd case: i = j.
E[ Φ_i² (W_{t_i} − W_{t_{i−1}})² ] = E[ Φ_i² E[(W_{t_i} − W_{t_{i−1}})² | F_{t_{i−1}}] ] = E[ Φ_i² (t_i − t_{i−1}) ].
Hence,
E[I_t(κ)²] = E[ Σ_{i=1}^k Φ_i² (t_i − t_{i−1}) ] = E[ ∫_0^t κ_s² ds ].
(iii) Apply (i), (ii), and Doob's inequality (see Karatzas/Shreve (1991, p. 14)): For a right-continuous martingale {M_t} with E[M_t²] < ∞ for all t ≥ 0 we have
E[ sup_{0≤t≤T} |M_t|² ] ≤ 4 E[M_T²].
It remains to check that
E[I_T(κ)²] = E[ Σ_{i=1}^p Φ_i² (t_i − t_{i−1}) ] ≤ T Σ_{i=1}^p E[Φ_i²] < ∞,
since Σ_i (t_i − t_{i−1}) ≤ T and, by Definition 3.19, the Φ_i are bounded. □
Remarks.
• Stochastic integrals over [t, T] are defined by
∫_t^T κ_s dW_s := ∫_0^T κ_s dW_s − ∫_0^t κ_s dW_s.
• Let κ_1 and κ_2 be simple processes and a, b ∈ IR. The stochastic integral for simple processes is linear, i.e.
I_t(aκ_1 + bκ_2) = a I_t(κ_1) + b I_t(κ_2).
This follows directly from Definition 3.20. Note that linear combinations of simple processes are simple.
• For κ ≡ 1, ∫_0^t 1 dW_s = W_t. Hence,
E[ (∫_0^t dW_s)² ] = E[W_t²] = t = ∫_0^t ds.
This is the reason why people sometimes write dW_t = √dt, which is wrong from a mathematical point of view.
2nd Step. Every L²[0, T]-process X can be approximated by a sequence {κ^(n)}_n of simple processes.
Define the vector space
L²[0, T] := L²([0, T], Ω, F, P, {F_t}_{t∈[0,T]}) := { {X_t, F_t}_{t∈[0,T]} : progressively measurable, real-valued, with E[ ∫_0^T X_s² ds ] < ∞ }
with (semi-)norm
||X||²_T := E[ ∫_0^T X_s² ds ],
where we identify processes X and Y with X = Y Lebesgue ⊗ P-a.e.
For simple processes the mapping κ ↦ I(κ) induces a norm on the vector space of the corresponding stochastic integrals via
||I(κ)||²_{L,T} := E[ (∫_0^T κ_s dW_s)² ] = E[ ∫_0^T κ_s² ds ] = ||κ||²_T,
where the middle equality is Prop. 3.30 (ii). Since the mapping κ ↦ I(κ) is linear and norm-preserving, it is an isometry, called the Ito isometry.
Proposition 3.31 (Simple Processes Approximate L
2
[0, T]processes)
The linear subspace of simple processes is dense in X ∈ L
2
([0, T]), i.e. for all
X ∈ L
2
[0, T] there exists a sequence ¦κ
(n)
¦
n
of simple processes with
[[X −κ
(n)
[[
T
= E
¸
T
0
(X
s
−κ
(n)
s
)
2
ds
¸
→0 ( as n →∞)
Proof. Assume that X ∈ L²[0, T] is continuous and bounded. Choose
κ_t^(n)(ω) := X_0(ω) 1_{0}(t) + Σ_{k=0}^{2^n−1} X_{kT/2^n}(ω) 1_{(kT/2^n, (k+1)T/2^n]}(t).
Since X is bounded, {κ^(n)}_n is uniformly bounded and we can apply the dominated convergence theorem to obtain the desired result.
For the general case, see Karatzas/Shreve (1991, p. 133). □
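The dyadic approximation from the proof can be observed numerically: sample a Brownian path on a fine grid and estimate ||X − κ^(n)||²_T by a Riemann sum. A sketch under our own parameter choices (not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
T, m = 1.0, 2**12                      # fine grid with m = 4096 steps
dt = T / m
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), m))])

def dyadic_approx(X, n):
    """kappa^(n): hold the value of X at the left endpoint kT/2^n of each
    dyadic interval (kT/2^n, (k+1)T/2^n]."""
    k = np.arange(m + 1) // 2**(12 - n)          # dyadic interval index
    k = np.clip(k, 0, 2**n - 1)
    return X[k * 2**(12 - n)]

# ||X - kappa^(n)||_T^2 approximated pathwise by a Riemann sum
errors = [float(np.mean((W - dyadic_approx(W, n))**2) * T) for n in (2, 4, 6, 8)]
print(errors)  # decreasing towards 0
```

For Brownian motion the expected squared error is of order T/2^(n+1), which the printed values reflect.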
3rd and 4th Step.
Define
M_c²([0, T]) := { M : M continuous martingale on [0, T] w.r.t. {F_t} with E[M_t²] < ∞ for every t ∈ [0, T] }.
Theorem and Definition 3.32 (Ito Integral for X ∈ L²[0, T])
There exists a unique linear mapping J : L²[0, T] → M_c²([0, T]) with the following properties:
(i) If κ is simple, then P( J_t(κ) = I_t(κ) for all t ∈ [0, T] ) = 1.
(ii) E[J_t(X)²] = E[ ∫_0^t X_s² ds ]   ("Ito isometry")
J is unique in the following sense: If there exists a second mapping J′ satisfying (i) and (ii) for all X ∈ L²[0, T], then J(X) and J′(X) are indistinguishable.
For X ∈ L²[0, T], we define
∫_0^t X_s dW_s := J_t(X)
for every t ∈ [0, T] and call it the Ito integral or stochastic integral of X (w.r.t. W).
Proof. Let X ∈ L²[0, T].
(a) Let {κ^(n)}_n be an approximating sequence, i.e. ||X − κ^(n)||_T → 0 as n → ∞. Since {κ^(n)}_n is then also a Cauchy sequence w.r.t. ||·||_T, we obtain for every t ∈ [0, T] (as n, m → ∞):
E[ ( I_t(κ^(n)) − I_t(κ^(m)) )² ] = E[ I_t(κ^(n) − κ^(m))² ] = E[ ∫_0^t (κ^(n) − κ^(m))² ds ] → 0,
where the first equality uses the linearity of I and the second the Ito isometry. Hence, {I_t(κ^(n))}_n is a Cauchy sequence in the ordinary L²(Ω, F_t, P) of random variables (t fixed!), which is a complete space, and thus
I_t(κ^(n)) → Z_t in L²,
where Z_t is F_t-measurable and E[Z_t²] < ∞. Since I_t(κ^(n)) → Z_t in probability as well, there exists a subsequence {n_k}_k such that
I_t(κ^(n_k)) → Z_t a.s. (1)
as k → ∞, i.e. Z_t is only P-a.s. unique (and may depend on the sequence!).
(b) Z is a martingale: Note that I_t(κ^(n)) is a martingale for all n ∈ ℕ. By the definition of the conditional expectation, we thus get
∫_A I_t(κ^(n)) dP = ∫_A I_s(κ^(n)) dP
for s < t and for all A ∈ F_s and n ∈ ℕ. The L²-convergence leads to (as n → ∞)¹⁰
∫_A Z_t dP ←− ∫_A I_t(κ^(n)) dP = ∫_A I_s(κ^(n)) dP −→ ∫_A Z_s dP.
Since Z is adapted, the martingale property of Z is proved.
(c) By Proposition 3.26, Z possesses a right-continuous modification {J_t(X), F_t}_{t∈[0,T]} which is still a square-integrable martingale. Applying Doob's inequality and the Borel-Cantelli lemma, one can prove that J(X) is even continuous (exercise).
(d) J_t(X) is independent of the approximating sequence: Let {κ^(n)}_n and {κ̃^(n)}_n be two approximating sequences for X with I_t(κ^(n)) → Z_t and I_t(κ̃^(n)) → Z̃_t in L².
¹⁰ See Bauer (1992, MI, p. 92 and p. 99).
Then {κ̂^(n)}_n with κ̂^(2n) := κ^(2n) (even) and κ̂^(2n+1) := κ̃^(2n+1) (odd) is an approximating sequence for X such that {I_t(κ̂^(n))}_n converges in L² to both Z_t and Z̃_t. Thus both limits coincide with J_t(X) P-a.s.¹¹
(e) Ito isometry:
E[J_t(X)²] =(∗) lim_{n→∞} E[I_t(κ^(n))²] = lim_{n→∞} E[ ∫_0^t (κ_s^(n))² ds ] =(∗∗) E[ ∫_0^t X_s² ds ].
(∗) holds because I_t(κ^(n)) → J_t(X) in L² (as n → ∞); this is convergence in the ordinary L²(Ω, F_t, P).
(∗∗) holds because κ^(n) → X in L² (as n → ∞); this is convergence in L²([0, T] × Ω, B[0, T] ⊗ F, Lebesgue⊗P).¹²
(f) Linearity of J follows from the linearity of I and from the fact that if {κ_X^(n)}_n approximates X and {κ_Y^(n)}_n approximates Y, then {aκ_X^(n) + bκ_Y^(n)}_n approximates aX + bY (a, b ∈ ℝ). Therefore,
a J_t(X) + b J_t(Y) = a lim_{n→∞} I_t(κ_X^(n)) + b lim_{n→∞} I_t(κ_Y^(n)) = lim_{n→∞} I_t( aκ_X^(n) + bκ_Y^(n) ) = J_t(aX + bY).
(g) Uniqueness of J: Let J′ be a linear mapping with properties (i) and (ii). Then
J′_t(X) = J′_t( lim_{n→∞} κ^(n) ) =(∗) lim_{n→∞} J′_t(κ^(n)) =(i) lim_{n→∞} I_t(κ^(n)) = J_t(X) P-a.s.
for all t ∈ [0, T]. Since J(X) and J′(X) are continuous processes, by Proposition 3.2 (ii), they are indistinguishable, i.e. P(J_t(X) = J′_t(X) for all t ∈ [0, T]) = 1. The equality (∗) holds because J′, being linear and norm-preserving by (ii), is continuous. □
5th Step. The class of admissible integrands can be extended. Define
H²[0, T] := H²([0, T], Ω, F, P, {F_t}_{t∈[0,T]})
:= { {X_t, F_t}_{t∈[0,T]} : progressively measurable, real-valued with P( ∫_0^T X_s² ds < ∞ ) = 1 }.
In general, for X ∈ H²[0, T] it can happen that
||X||²_T = E[ ∫_0^T X_s² ds ] = ∞.
Therefore, the Ito isometry need not hold and the approximation argument breaks down. Fortunately, one can use a localization argument:¹³
τ_n(ω) := T ∧ inf{ t ∈ [0, T] : ∫_0^t X_s²(ω) ds ≥ n },
X_t^(n)(ω) := X_t(ω) 1_{τ_n(ω) ≥ t}.
¹¹ See Bauer (1992, MI, p. 130).
¹² Both (∗) and (∗∗) are a consequence of Bauer (1992, MI, p. 92).
¹³ Every τ_n is a stopping time because it is the first hitting time of [n, ∞) for the continuous process { ∫_0^t X_s² ds }_t. See Proposition 3.11.
Hence, X^(n) ∈ L²[0, T] and J(X^(n)) is defined and a martingale for fixed n ∈ ℕ. Now define
J_t(X) := J_t(X^(n)) for X ∈ H²[0, T] and 0 ≤ t ≤ τ_n.
Note that
• J_t(X^(n)) = J_t(X^(m)) for all 0 ≤ t ≤ τ_n ∧ τ_m, i.e. J is well-defined,
• lim_{n→∞} τ_n = T a.s., since P( ∫_0^T X_s² ds < ∞ ) = 1, i.e. there exists an event A with P(A) = 1 such that for all ω ∈ A there is a c(ω) > 0 with ∫_0^T X_s²(ω) ds ≤ c(ω).
Important Results.
• By construction, for X ∈ H²[0, T] the process {J_t(X)}_{t∈[0,T]} is a continuous local martingale with localizing sequence {τ_n}_n.
• J is still linear, i.e. J_t(aX + bY) = a J_t(X) + b J_t(Y) for a, b ∈ ℝ and X, Y ∈ H²[0, T]. (Choose τ_n := τ_n^X ∧ τ_n^Y as localizing sequence for aX + bY, where {τ_n^X}_n is the localizing sequence of X.)
• E[J_t(X)] need not exist.
Definition 3.21 (Multidimensional Ito Integral)
Let {W(t), F_t} be an m-dimensional Brownian motion and {X(t), F_t} an ℝ^{n,m}-valued progressively measurable process with all components X_ij ∈ H²[0, T]. Then we define the Ito integral of X w.r.t. W as
∫_0^t X(s) dW(s) := ( Σ_{j=1}^m ∫_0^t X_1j(s) dW_j(s), …, Σ_{j=1}^m ∫_0^t X_nj(s) dW_j(s) )′, t ∈ [0, T],
where each ∫_0^t X_ij(s) dW_j(s) is a one-dimensional Ito integral.
3.7 Ito's Formula
Recall our
Standing Assumption: (Ω, F, P, {F_t}) is a filtered probability space satisfying the usual conditions.
Additional Assumption: {W_t, F_t} is a (multidimensional) Brownian motion of appropriate dimension.
Definition 3.22 (Ito Process)
Let W be an m-dimensional Brownian motion.
(i) {X(t), F_t} is said to be a real-valued Ito process if for all t ≥ 0 it possesses the representation
X(t) = X(0) + ∫_0^t α(s) ds + ∫_0^t β(s)′ dW(s)
     = X(0) + ∫_0^t α(s) ds + Σ_{j=1}^m ∫_0^t β_j(s) dW_j(s)
for an F_0-measurable random variable X(0) and progressively measurable processes α, β with
∫_0^t |α(s)| ds < ∞, Σ_{j=1}^m ∫_0^t β_j²(s) ds < ∞, P-a.s. for all t ≥ 0. (2)
(ii) An n-dimensional Ito process X = (X^(1), …, X^(n))′ is a vector of one-dimensional Ito processes X^(i).
Remarks.
• To shorten notation, one sometimes uses the symbolic differential notation
dX(t) = α(t)dt + β(t)′dW(t).
• It can be shown that the processes α and β are unique up to indistinguishability.
• By (2), we have β_j ∈ H²[0, T].
• For a real-valued Ito process X and a real-valued progressively measurable process Y, we define
∫_0^t Y_s dX_s := ∫_0^t Y_s α_s ds + ∫_0^t Y_s β_s′ dW_s.
Definition 3.23 (Quadratic Variation and Covariation)
Let X^(1) and X^(2) be two real-valued Ito processes with
X^(i)(t) = X^(i)(0) + ∫_0^t α^(i)(s) ds + ∫_0^t β^(i)(s)′ dW(s),
where i ∈ {1, 2}. Then
<X^(1), X^(2)>_t := Σ_{j=1}^m ∫_0^t β_j^(1)(s) β_j^(2)(s) ds
is said to be the quadratic covariation of X^(1) and X^(2). In particular, <X>_t := <X, X>_t is said to be the quadratic variation of an Ito process X.
Remarks. The definition may appear a bit unfounded. However, one can show the following (see Karatzas/Shreve (1991, pp. 32ff)): For fixed t ≥ 0, let Π = {t_0, …, t_n} with 0 = t_0 < t_1 < … < t_n = t be a partition of [0, t]. Then the p-th variation of X over the partition Π is defined by (p > 0)
V_t^(p)(Π) = Σ_{k=1}^n |X_{t_k} − X_{t_{k−1}}|^p.
The mesh of Π is defined by ||Π|| = max_{1≤k≤n} |t_k − t_{k−1}|. For X ∈ M_c², it holds that
lim_{||Π||→0} V_t^(2)(Π) = <X>_t in probability.
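This convergence is easy to observe numerically: for a simulated Brownian path, the second-variation sums over ever finer uniform partitions approach t (a sketch; parameters are our own choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
t = 1.0

def second_variation(n_steps):
    """V_t^(2)(Pi) = sum of |W_{t_k} - W_{t_{k-1}}|^2 over a uniform partition."""
    dW = rng.normal(0.0, np.sqrt(t / n_steps), n_steps)
    return float(np.sum(dW**2))

for n in (10, 1_000, 100_000):
    print(n, second_variation(n))   # approaches <W>_t = t = 1
```

The fluctuation of the sum around t has standard deviation of order √(2/n), which is why the finest partition gives the most accurate value.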
The main tool for working with Ito processes is Ito’s formula:
Theorem 3.33 (Ito's Formula)
Let X be an n-dimensional Ito process with
X_i(t) = X_i(0) + ∫_0^t α_i(s) ds + Σ_{j=1}^m ∫_0^t β_ij(s) dW_j(s).
Let f : [0, ∞) × ℝⁿ → ℝ be a C^{1,2}-function. Then
f(t, X_1(t), …, X_n(t)) = f(0, X_1(0), …, X_n(0)) + ∫_0^t f_t(s, X_1(s), …, X_n(s)) ds
+ Σ_{i=1}^n ∫_0^t f_{x_i}(s, X_1(s), …, X_n(s)) dX_i(s)
+ 0.5 Σ_{i,j=1}^n ∫_0^t f_{x_i x_j}(s, X_1(s), …, X_n(s)) d<X_i, X_j>_s,
where all integrals are defined.
Proof. For simplicity, assume X(0) = const., n = m = 1, and f : ℝ → ℝ.
1st step. By applying a localization argument, we can assume that
M_t := ∫_0^t β_s dW_s,  B_t := ∫_0^t α_s ds,  B̂_t := ∫_0^t |α_s| ds,  ∫_0^t β_s² ds,
and thus X are uniformly bounded by a constant C. Hence, the relevant support of f, f′, and f′′ is compact, implying that these functions are bounded as well. Besides, M is a martingale (since β ∈ L²[0, T]).
2nd step. Let Π = {t_0, …, t_p} be a partition of [0, t]. A Taylor expansion (note: f(X_{t_k}) = f(X_{t_{k−1}}) + f′(X_{t_{k−1}})(X_{t_k} − X_{t_{k−1}}) + …) leads to
f(X_t) − f(X_0) = Σ_{k=1}^p ( f(X_{t_k}) − f(X_{t_{k−1}}) ) = (∗) + 0.5 · (∗∗)
with
(∗) := Σ_{k=1}^p f′(X_{t_{k−1}})(X_{t_k} − X_{t_{k−1}}),
(∗∗) := Σ_{k=1}^p f′′(η_k)(X_{t_k} − X_{t_{k−1}})²,
where η_k = X_{t_{k−1}} + θ_k(X_{t_k} − X_{t_{k−1}}) and θ_k ∈ (0, 1).
3rd step. Convergence of (∗). We get
(∗) = Σ_{k=1}^p f′(X_{t_{k−1}})(B_{t_k} − B_{t_{k−1}}) + Σ_{k=1}^p f′(X_{t_{k−1}})(M_{t_k} − M_{t_{k−1}}) =: A_1(Π) + A_2(Π).
(i) For ||Π|| → 0, we obtain (Lebesgue-Stieltjes theory)
A_1(Π) → ∫_0^t f′(X_s) dB_s = ∫_0^t f′(X_s) α_s ds a.s. and in L¹.
(ii) Approximate f′(X_s) by the simple process
Y_s^Π(ω) := f′(X_0)1_{0}(s) + Σ_{k=1}^p f′(X_{t_{k−1}}(ω)) 1_{(t_{k−1}, t_k]}(s).
Note that A_2(Π) = ∫_0^t Y_s^Π dM_s.¹⁴ By the Ito isometry,
E[ ( ∫_0^t f′(X_s) dM_s − A_2(Π) )² ] = E[ ( ∫_0^t (f′(X_s) − Y_s^Π) β_s dW_s )² ] = E[ ∫_0^t (f′(X_s) − Y_s^Π)² β_s² ds ] → 0,
where the limit follows from the dominated convergence theorem (as ||Π|| → 0). Therefore,¹⁵ (as ||Π|| → 0)
(∗) = A_1(Π) + A_2(Π) → ∫_0^t f′(X_s) α_s ds + ∫_0^t f′(X_s) dM_s in L¹.
4th step. Convergence of (∗∗). It can be shown (see Korn/Korn (1999, pp. 53ff)) that
(∗∗) → ∫_0^t f′′(X_s) β_s² ds in L¹.
5th step. By steps 2-4, there exists a sequence of partitions {Π_k}_k such that the sums converge a.s. towards the corresponding limits. Therefore, for fixed t both sides of Ito's formula are equal a.s. Since both sides are continuous processes in t, by Proposition 3.2 (ii), they are indistinguishable. □
Remark. Often a shorthand symbolic differential notation is used (f : [0, ∞) × ℝ → ℝ):
df(t, X_t) = f_t(t, X_t)dt + f_x(t, X_t)dX_t + 0.5 f_xx(t, X_t)d<X>_t.
Examples.
• X_t = t and f ∈ C² leads to the fundamental theorem of calculus.
• X_t = h(t) and f ∈ C² leads to the chain rule of ordinary calculus.
• X_t = W_t and f(x) = x²:
W_t² = 2 ∫_0^t W_s dW_s + t.
• X_t = x_0 + αt + σW_t with x_0, α, σ ∈ ℝ, and f(x) = e^x:
df(X_t) = f′(X_t)dX_t + 0.5 f′′(X_t)d<X>_t = f(X_t)[ αdt + σdW_t + 0.5σ²dt ] = f(X_t)[ (α + 0.5σ²)dt + σdW_t ].
Therefore, the solution of the stock price dynamics in the Black-Scholes model, dS_t = S_t[µdt + σdW_t], reads
S_t = s_0 e^{(µ−0.5σ²)t + σW_t}. (3)
¹⁴ This is actually a consequence of the more general definition of Ito integrals in Karatzas/Shreve (1991, pp. 129ff).
¹⁵ L²-convergence implies L¹-convergence.
• Ito's product rule: Let X and Y be Ito processes and f(x, y) = xy. Then (omitting dependencies)
d(XY) = f_x dX + f_y dY + 0.5 f_xx d<X> + 0.5 f_yy d<Y> + f_xy d<X, Y>
= X dY + Y dX + d<X, Y>,
since f_xx = f_yy = 0 and f_xy = 1.
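The example W_t² = 2 ∫_0^t W_s dW_s + t can be verified on simulated paths with left-point Riemann sums for the Ito integral (a sketch; sample sizes are our own choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n_steps, n_paths = 1.0, 1_000, 5_000
dt = t / n_steps

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)
# Ito integrals evaluate the integrand at the LEFT endpoint of each subinterval
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])
ito_integral = np.sum(W_left * dW, axis=1)        # approx. integral of W dW

error = float(np.mean(np.abs(W[:, -1]**2 - (2.0 * ito_integral + t))))
print(error)  # close to 0 (discretization error only)
```

Using the right endpoints instead would produce the Stratonovich-type value and the identity would fail by exactly the quadratic variation t, which illustrates why the choice of evaluation point matters.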
Definition 3.24 (Stochastic Differential Equation)
If for a given filtered probability space (Ω, F, P, {F_t}) there exists an adapted process X with X(0) = x ∈ ℝⁿ fixed and
X_i(t) = x_i + ∫_0^t α_i(s, X(s)) ds + Σ_{j=1}^m ∫_0^t β_ij(s, X(s)) dW_j(s)
P-a.s. for every t ≥ 0 and i ∈ {1, …, n}, such that
∫_0^t [ |α_i(s, X(s))| + Σ_{j=1}^m β_ij²(s, X(s)) ] ds < ∞
P-a.s. for every t ≥ 0 and i ∈ {1, …, n}, then X is a strong solution of the stochastic differential equation (SDE)
dX(t) = α(t, X(t))dt + β(t, X(t))dW(t), (4)
X(0) = x,
where α : [0, ∞) × ℝⁿ → ℝⁿ and β : [0, ∞) × ℝⁿ → ℝ^{n,m} are given functions.
Theorem 3.34 (Existence and Uniqueness)
If the coefficients α and β are continuous functions satisfying the following global Lipschitz and linear growth conditions for fixed K > 0
||α(t, z) − α(t, y)|| + ||β(t, z) − β(t, y)|| ≤ K||z − y||,
||α(t, y)||² + ||β(t, y)||² ≤ K²(1 + ||y||²),
z, y ∈ ℝⁿ, then there exists an adapted continuous process X which is a strong solution of (4) and satisfies
E[||X_t||²] ≤ C(1 + ||x||²)e^{CT} for all t ∈ [0, T],
where C = C(K, T) > 0. X is unique up to indistinguishability.
Proof. See Korn/Korn (1999, pp. 128ff).
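Under these Lipschitz and growth conditions, the strong solution of (4) can also be approximated numerically. A minimal Euler-Maruyama sketch with an Ornstein-Uhlenbeck-type example (our own illustration, not part of the notes):

```python
import numpy as np

def euler_maruyama(alpha, beta, x0, T, n_steps, n_paths, rng):
    """Simulate terminal values X(T) of dX = alpha(t,X)dt + beta(t,X)dW."""
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + alpha(k * dt, x) * dt + beta(k * dt, x) * dW
    return x

# Example: dX = -X dt + 0.5 dW, X(0) = 1, so E[X_1] = e^{-1} ~ 0.368
rng = np.random.default_rng(4)
X_T = euler_maruyama(lambda t, x: -x, lambda t, x: 0.5 * np.ones_like(x),
                     1.0, 1.0, 500, 20_000, rng)
print(float(np.mean(X_T)))
```

The drift here is globally Lipschitz and of linear growth, so Theorem 3.34 applies and the scheme converges to the strong solution as the step size shrinks.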
In general, there do not exist explicit solutions to SDEs. However, the important
family of linear SDEs admits explicit solutions:
Proposition 3.35 (Variation of Constants)
Fix x ∈ ℝ and let A, a be real-valued and B, b be ℝᵐ-valued progressively measurable processes with (P-a.s. for every t ≥ 0)
∫_0^t ( |A(s)| + |a(s)| ) ds < ∞,  Σ_{j=1}^m ∫_0^t ( B_j(s)² + b_j²(s) ) ds < ∞.
Then the linear SDE
dX(t) = ( A(t)X(t) + a(t) )dt + Σ_{j=1}^m ( B_j(t)X(t) + b_j(t) )dW_j(t),
X(0) = x,
possesses an adapted, P⊗Lebesgue-unique solution
X(t) = Z(t)[ x + ∫_0^t (1/Z(u)) ( a(u) − Σ_{j=1}^m B_j(u)b_j(u) ) du + Σ_{j=1}^m ∫_0^t ( b_j(u)/Z(u) ) dW_j(u) ],
where
Z(t) = exp( ∫_0^t ( A(u) − 0.5||B(u)||² ) du + ∫_0^t B(u)′ dW(u) ) (5)
is the solution of the homogeneous SDE
dZ(t) = Z(t)[ A(t)dt + B(t)′dW(t) ],
Z(0) = 1.
Proof. For simplicity, n = m = 1. As in (3), one can show that Z solves the homogeneous SDE. To prove uniqueness, note that Ito's formula leads to
d(1/Z) = (1/Z)[ (−A + B²)dt − BdW ].
Let Z̃ be another solution. Then, by Ito's product rule,
d( Z̃ · (1/Z) ) = Z̃ d(1/Z) + (1/Z) dZ̃ + d<1/Z, Z̃>
= (Z̃/Z)[ (−A + B²)dt − BdW ] + (Z̃/Z)[ Adt + BdW ] − (Z̃/Z)B² dt = 0,
implying Z̃ · (1/Z) = const. Due to the initial condition, we obtain Z̃ = Z.
To show that X satisfies the linear SDE, define Y := X/Z. Ito's product rule applied to YZ gives the desired result. To prove uniqueness, let X and X̃ be two solutions of the linear SDE. Then X̂ := X̃ − X solves the homogeneous SDE
dX̂ = X̂[ Adt + BdW ]
with X̂_0 = 0. Applying Ito's product rule to X̂ · (1/Z) shows X̂_t = 0 a.s. for all t ≥ 0. □
4 CONTINUOUSTIME MARKET MODEL AND OPTION PRICING 47
Example. In Brownian models the money market account and the stocks are modeled via (i = 1, …, I)
dS_0(t) = S_0(t)r(t)dt, S_0(0) = 1, (6)
dS_i(t) = S_i(t)[ µ_i(t)dt + Σ_{j=1}^m σ_ij(t)dW_j(t) ], S_i(0) = s_i0 > 0.
Applying "variation of constants" leads to
S_0(t) = e^{∫_0^t r(s) ds},
S_i(t) = s_i0 exp( ∫_0^t ( µ_i(s) − 0.5 Σ_{j=1}^m σ_ij²(s) ) ds + Σ_{j=1}^m ∫_0^t σ_ij(s)dW_j(s) ).
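A quick numerical sanity check of this closed form: an Euler scheme for dS = S(µ dt + σ dW) driven by the same Brownian increments stays close to the exponential solution (a sketch with constant coefficients; the parameter values are our own choices):

```python
import numpy as np

rng = np.random.default_rng(5)
s0, mu, sigma, T, n = 100.0, 0.08, 0.2, 1.0, 100_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), n)
S_euler = s0 * np.prod(1.0 + mu * dt + sigma * dW)       # Euler scheme
S_exact = s0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sum(dW))
print(float(S_euler), float(S_exact))  # nearly identical terminal values
```

The -0.5σ² correction in the exponent is exactly what reconciles the multiplicative Euler steps with the exponential: it is the accumulated quadratic variation term from Ito's formula.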
4 Continuous-time Market Model and Option Pricing
4.1 Description of the Model
We consider a continuous-time security market as given by (6).
Mathematical Assumptions.
• Recall the standing assumption: (Ω, F, P, {F_t}) is a filtered probability space satisfying the usual conditions.
• {W_t, F_t} is a (multidimensional) Brownian motion of appropriate dimension.
• {F_t} is the Brownian filtration.
• The coefficients r, µ, and σ are progressively measurable and uniformly bounded processes.
Notation.
• ϕ_i(t): number of shares of stock i held at time t
• ϕ_0(t)S_0(t): money invested in the money market account at time t
• X^ϕ(t) := Σ_{i=0}^I ϕ_i(t)S_i(t): wealth of the investor
• Assumption: x_0 := X(0) = Σ_{i=0}^I ϕ_i(0)S_i(0) ≥ 0 (non-negative initial wealth)
• π_i(t) := ϕ_i(t)S_i(t)/X(t): proportion of wealth invested in stock i
• 1 − Σ_i π_i: proportion invested in the money market account
Definition 4.1 (Trading Strategy, Self-financing, Admissible)
(i) Let ϕ = (ϕ_0, …, ϕ_I)′ be an ℝ^{I+1}-valued process which is progressively measurable w.r.t. {F_t} and satisfies (P-a.s.)
∫_0^T |ϕ_0(t)| dt < ∞,  Σ_{i=1}^I ∫_0^T (ϕ_i(t)S_i(t))² dt < ∞. (7)
Then ϕ is said to be a trading strategy.
(ii) If
dX(t) = ϕ(t)′dS(t)
holds, then ϕ is said to be self-financing.
(iii) A self-financing ϕ is said to be admissible if X(t) ≥ 0 for all t ≥ 0.
Remarks.
• The corresponding portfolio process π is said to be self-financing or admissible if ϕ is self-financing or admissible (recall π_i = ϕ_i S_i/X). Note that if we work with π, then we need to require X(t) > 0 for all t ≥ 0. Otherwise, π is not well-defined.
• (7) ensures that all integrals are defined.
• Compare the definition in (ii) with the result of Proposition 2.11.
• Every trading strategy satisfies
dX(t) = d(ϕ(t)′S(t)).
Hence, in abstract mathematical terms, self-financing means that ϕ can be put in front of the d-operator.
• In the following, we always consider admissible trading strategies.
• Since admissible strategies are progressive, investors cannot see into the future (no clairvoyants).
Definition 4.2 (Arbitrage, Option, Replication, Fair Value)
(i) An admissible trading strategy ϕ is an arbitrage opportunity if (P-a.s.)
X^ϕ(0) = 0, X^ϕ(T) ≥ 0, P(X^ϕ(T) > 0) > 0.
(ii) An F_T-measurable random variable C(T) ≥ 0 is said to be a contingent claim, T ≥ 0.
(iii) An admissible strategy ϕ is a replication strategy for the claim C(T) if
X^ϕ(T) = C(T) P-a.s.
(iv) The fair time-0 value of a claim is defined as
C(0) = inf{ p : D(p) ≠ ∅ },
where
D(x) := { ϕ : ϕ admissible, ϕ replicates C(T), and X^ϕ(0) = x }.
Remark. The strategy in (i) is also known as a "free lottery".
Economic Assumptions.
• All investors model the dynamics of the securities as above.
• All investors agree upon σ, but may disagree upon µ.
• Investors are price takers.
• Investors use admissible strategies.
• Frictionless markets.
• No arbitrage (see Definition 4.2 (i)).
Proposition 4.1 (Wealth Equation)
The wealth dynamics of a self-financing strategy ϕ are described by the SDE
dX(t) = X(t)[ ( r(t) + π(t)′(µ(t) − r(t)1) )dt + π(t)′σ(t)dW(t) ], X(0) = x_0,
the so-called wealth equation.
Proof. By definition,
dX = ϕ′dS = ϕ_0 S_0 r dt + Σ_{i=1}^I ϕ_i S_i ( µ_i dt + Σ_{j=1}^m σ_ij dW_j )
= ( 1 − Σ_{i=1}^I π_i ) X r dt + Σ_{i=1}^I π_i X ( µ_i dt + Σ_{j=1}^m σ_ij dW_j )
= X[ ( r + Σ_{i=1}^I π_i(µ_i − r) ) dt + Σ_{j=1}^m Σ_{i=1}^I π_i σ_ij dW_j ]. □
Remarks.
• Due to the boundedness of the coefficients, by "variation of constants", the condition (P-a.s.)
∫_0^t π_i(s)² ds < ∞
is sufficient for the wealth equation to possess a unique solution.
• One can directly start with π (instead of ϕ) and require for the wealth equation that (P-a.s.)
∫_0^t X(s)² π_i(s)² ds < ∞.
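For constant coefficients and a constant proportion π, the wealth equation is again a linear SDE of the type in Proposition 3.35, with explicit solution X_t = x_0 exp((r + π(µ−r) − 0.5π²σ²)t + πσW_t). A numerical sketch comparing this closed form with an Euler scheme (all parameter values are our own choices):

```python
import numpy as np

rng = np.random.default_rng(8)
x0, r, mu, sigma, pi_, T, n = 1.0, 0.03, 0.08, 0.2, 0.6, 1.0, 50_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), n)
drift = r + pi_ * (mu - r)                      # r + pi (mu - r)
X_euler = x0 * np.prod(1.0 + drift * dt + pi_ * sigma * dW)
X_exact = x0 * np.exp((drift - 0.5 * (pi_ * sigma)**2) * T
                      + pi_ * sigma * np.sum(dW))
print(float(X_euler), float(X_exact))  # the two terminal wealths agree closely
```

The effective volatility of the wealth is πσ, so a constant-proportion investor holds a geometric-Brownian-motion-type wealth process.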
4.2 Black-Scholes Approach: Pricing by Replication
We wish to price a European call written on a stock S with strike K, maturity T, and payoff
C(T) = max{S(T) − K; 0},
where
dS(t) = S(t)[ µdt + σdW(t) ]
with constants µ and σ. The interest rate r is constant as well.
Assumptions.
• No payouts are made to the stock during the lifetime of the call.
• There exists a C^{1,2}-function f = f(t, s) such that the time-t price of the call is given by C(t) = f(t, S(t)).
Then
dC(t) = f_t(t, S(t))dt + f_s(t, S(t))dS(t) + 0.5 f_ss(t, S(t))d<S>_t
= f_t dt + f_s S( µdt + σdW ) + 0.5 f_ss S²σ² dt
= ( f_t + f_s Sµ + 0.5 f_ss S²σ² )dt + f_s Sσ dW.
Consider a self-financing trading strategy ϕ = (ϕ_M, ϕ_S, ϕ_C)′ and choose ϕ_C(t) ≡ −1. Then
dX^ϕ(t) = ϕ_M(t)dM(t) + ϕ_S(t)dS(t) − dC(t)
= ϕ_M rM dt + ϕ_S S( µdt + σdW ) − ( f_t + f_s Sµ + 0.5 f_ss S²σ² )dt − f_s Sσ dW
= [ ϕ_M rM + ϕ_S Sµ − f_t − f_s Sµ − 0.5 f_ss S²σ² ] dt   (∗)
+ [ ϕ_S Sσ − f_s Sσ ] dW.   (∗∗)
Choosing ϕ_S(t) = f_s(t, S(t)) makes (∗∗) = 0. Hence, the strategy is locally risk-free, implying (∗) = X^ϕ(t)r (otherwise there would be an arbitrage opportunity against the money market account). Rewriting (∗), using ϕ_S S = f_s S:
(∗) = ϕ_M rM − f_t − 0.5 f_ss S²σ²
= r( ϕ_M M + ϕ_S S − C ) + ( −r f_s S + rC − f_t − 0.5 f_ss S²σ² ),
where the first bracket equals X^ϕ and we call the second bracket (∗∗∗). Therefore, (∗∗∗) = 0, i.e.
r f_s(t, S(t))S(t) − r f(t, S(t)) + f_t(t, S(t)) + 0.5 f_ss(t, S(t))S²(t)σ² = 0.
It follows:
Theorem 4.2 (PDE of Black-Scholes (1973))
In an arbitrage-free market the price function f(t, s) of a European call satisfies the Black-Scholes PDE
f_t(t, s) + r s f_s(t, s) + 0.5 s²σ² f_ss(t, s) − r f(t, s) = 0,
(t, s) ∈ [0, T] × ℝ_+, with terminal condition f(T, s) = max{s − K, 0}.
Remark. We have not used the terminal condition to derive the PDE. Therefore, every European (path-independent) option satisfies this PDE. Of course, the terminal condition will be different.
Question: How should we solve this PDE?
1st approach: Merton (1973). Solve the PDE by transforming it into the heat equation u_t = u_xx, which has a well-known solution.¹⁶
2nd approach. Apply the Feynman-Kac theorem (see below), which states that the time-0 solution of the Black-Scholes PDE is given by
C(0) = E[ max{Y(T) − K, 0} | Y(0) = S(0) ] e^{−rT},
where Y satisfies the SDE
dY(t) = Y(t)[ rdt + σdW(t) ], Y(0) = S(0).
Theorem 4.3 (Black-Scholes Formula)
(i) The price of the European call is given by
C(t) = S(t) N(d_1(t)) − K N(d_2(t)) e^{−r(T−t)}
with
d_1(t) = [ ln(S(t)/K) + (r + 0.5σ²)(T − t) ] / ( σ√(T − t) ),
d_2(t) = d_1(t) − σ√(T − t).
(ii) The trading strategy ϕ := (ϕ_M, ϕ_S)′ with
ϕ_M(t) = ( C(t) − f_s(t, S(t)) S(t) ) / M(t),
ϕ_S(t) = f_s(t, S(t)),
is self-financing and a replication strategy for the call.
Proof. (i) Compute (exercise)
C(t) = E[ max{Y(T) − K, 0} | Y(t) = S(t) ] e^{−r(T−t)}.
(ii) We have
X^ϕ(t) = ϕ_M(t)M(t) + ϕ_S(t)S(t) = C(t). (8)
Hence, (ϕ_M, ϕ_S) replicates the call, and it is self-financing because
dX^ϕ =(8)= dC =(Ito)= ( f_t + f_s Sµ + 0.5 f_ss S²σ² )dt + f_s Sσ dW
=(+)= ( r[C − f_s S] + f_s Sµ )dt + f_s Sσ dW
=(Def. of (ϕ_M, ϕ_S))= ( rMϕ_M + ϕ_S Sµ )dt + ϕ_S Sσ dW
= ϕ_M rM dt + ϕ_S S( µdt + σdW )
= ϕ_M dM + ϕ_S dS.
¹⁶ See, e.g., Korn/Korn (1999, p. 124).
Note: (+) holds due to the Black-Scholes PDE:
f_t + 0.5 f_ss S²σ² = r[ C − f_s S ]. □
Remark. Since (ϕ_M, ϕ_S) is self-financing, the strategy (ϕ_M, ϕ_S, ϕ_C) with ϕ_C = −1 is self-financing as well because
d(ϕ_C(t)C(t)) = d(−C(t)) = −dC(t) = ϕ_C(t)dC(t).
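The formula of Theorem 4.3 (i) is straightforward to implement; the stock position ϕ_S(t) = f_s = N(d_1) is the familiar "delta". A sketch (parameter values are our own choices, not from the notes):

```python
from math import log, sqrt, exp, erf

def N(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call, tau = T - t > 0."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * N(d1) - K * N(d2) * exp(-r * tau)

price = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
print(round(price, 4))  # 10.4506, a standard benchmark value
```

Note that µ does not appear anywhere in the function: only r, σ, K, and the time to maturity enter the price.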
Theorem 4.4 (Feynman-Kac Representation)
Let β and γ be functions of (t, x) ∈ [0, ∞) × ℝ. Under technical conditions (see Korn/Korn (1999, p. 136)), there exists a unique C^{1,2}-solution F = F(t, x) of the PDE
F_t(t, x) + β(t, x)F_x(t, x) + 0.5γ²(t, x)F_xx(t, x) − rF(t, x) = 0 (9)
with terminal condition F(T, x) = h(x), possessing the representation
F(t_0, x) = e^{−r(T−t_0)} E[ h(X(T)) | X(t_0) = x ],
where X satisfies the SDE
dX(s) = β(s, X(s))ds + γ(s, X(s))dW(s)
with initial condition X(t_0) = x.
Sketch of Proof. (i) Firstly assume r = 0. If F is a C^{1,2}-function, we can apply Ito's formula:
dF = F_t dt + F_x dX + 0.5 F_xx d<X>.
Since d<X> = γ² dt, we get
dF = F_t dt + F_x( βdt + γdW ) + 0.5γ² F_xx dt = ( F_t + βF_x + 0.5γ²F_xx )dt + γF_x dW.
Suppose that there exists a function F satisfying the PDE, i.e.
F_t + βF_x + 0.5γ²F_xx = 0
with F(T, x) = h(x). Then
dF = γF_x dW,
or in integral notation
F(s, X(s)) = F(t_0, X(t_0)) + ∫_{t_0}^s γ(u, X(u)) F_x(u, X(u)) dW(u).
If E[ ∫_{t_0}^s γ(u, X_u)² F_x(u, X_u)² du ] < ∞, the stochastic integral has zero expectation and thus
F(t_0, x) = E[ F(s, X(s)) | X(t_0) = x ].
Choose s = T and note that F(T, X(T)) = h(X(T)).
(ii) For r ≠ 0, multiply the PDE (9) by e^{−rt}:
F_t(t, x)e^{−rt} + β(t, x)F_x(t, x)e^{−rt} + 0.5γ²(t, x)F_xx(t, x)e^{−rt} − rF(t, x)e^{−rt} = 0.
Set G(t, x) := F(t, x)e^{−rt}. Since
G_t(t, x) = F_t(t, x)e^{−rt} − rF(t, x)e^{−rt},
we get
G_t(t, x) + β(t, x)G_x(t, x) + 0.5γ²(t, x)G_xx(t, x) = 0
with terminal condition G(T, x) = e^{−rT}h(x). Now we can apply (i):
G(t_0, x) = E[ e^{−rT}h(X(T)) | X(t_0) = x ].
Substituting G(t_0, x) = e^{−rt_0}F(t_0, x) gives the desired result. □
Remarks.
• Two crucial points are not proved: Firstly, we have not checked under which conditions a C^{1,2}-function F exists. Secondly, we have not checked under which conditions γ(·, X)F_x(·, X) ∈ L²[0, T].
• To solve the Black-Scholes PDE, choose β(t, x) = rx and γ(t, x) = xσ.
• The Feynman-Kac representation can be generalized to a multidimensional setting, i.e. X a multidimensional Ito process, with stochastic short rate r = r(X).
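The Feynman-Kac representation suggests a Monte Carlo method: simulate the SDE for Y, average the payoff, and discount. For the Black-Scholes choice β(t, x) = rx and γ(t, x) = xσ this reproduces the call price (a sketch; sample size and parameters are our own choices):

```python
import numpy as np

rng = np.random.default_rng(6)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths = 400_000

# Y(T) is log-normal by the explicit solution (3), with drift r instead of mu
W_T = rng.normal(0.0, np.sqrt(T), n_paths)
Y_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)

C0 = float(np.exp(-r * T) * np.mean(np.maximum(Y_T - K, 0.0)))
print(C0)  # close to 10.45, the Black-Scholes call price
```

Because Y(T) has an explicit distribution here, no time-stepping is needed; for general β and γ one would combine this with an Euler-Maruyama discretization.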
4.3 Pricing with Deflators
In a single-period model the deflator is given by (see the remark to Proposition 2.7)
H = 1/(1 + r) · dQ/dP.
This motivates the definition of the deflator in continuous time:
H(t) := (1/M(t)) Z(t),
where
M(t) := exp( ∫_0^t r(s) ds ),
Z(t) := exp( −0.5 ∫_0^t ||θ(s)||² ds − ∫_0^t θ(s)′ dW(s) ).
The process θ is implicitly defined as a solution of the following system of linear equations:
σ(t)θ(t) = µ(t) − r(t)1. (10)
Applying Ito's product rule yields the following SDE for H:
dH(t) = d( Z(t)M(t)^{−1} ) = −H(t)[ r(t)dt + θ(t)′dW(t) ].
Remarks.
• In this section we are going to work under the physical measure. Nevertheless, we will see later on that Z can be interpreted as a density inducing a change of measure (Girsanov's theorem).
• The system of equations (10) need not have a (unique) solution.
• The process θ is called the market price of risk and can be interpreted as a risk premium (excess return per unit of volatility).
The following theorem shows that H indeed has the desired properties:
Theorem 4.5 (Arbitrage-free Pricing with Deflator and Completeness)
(i) Assume that (10) has a bounded solution θ. Let ϕ be an admissible portfolio strategy. Then {H(t)X^ϕ(t)}_{t∈[0,T]} is both a local martingale and a supermartingale, implying
E[H(t)X^ϕ(t)] ≤ X^ϕ(0) for all t ∈ [0, T]. (11)
Besides, the market is arbitrage-free.
(ii) Assume that I = m ("# stocks = # sources of risk") and that σ is uniformly positive definite.¹⁷ Then (10) has a unique solution which is uniformly bounded. Furthermore, let C(T) be an F_T-measurable contingent claim with
x := E[H(T)C(T)] < ∞. (12)
Then there exists a P⊗Lebesgue-unique portfolio process ϕ such that ϕ is a replication strategy for C(T), i.e. (P-a.s.)
X^ϕ(T) = C(T),
and X^ϕ(0) = x. Hence, the market is complete. The time-t price of the claim equals
C(t) = E[ H(T)C(T) | F_t ] / H(t),
i.e. the process {H(t)C(t)}_{t∈[0,T]} is a martingale.
Remarks.
• The boundedness assumption in (i) can be weakened.
• The price in (ii) is thus equal to the expected discounted payoff under the physical measure, where we discount with the state-dependent discount factor H, the deflator.
• Compare (ii) with Theorem 2.13.
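That the drift µ drops out of deflator prices can be seen in a small simulation: value the call of Section 4.2 under the physical measure with drift µ, discounting with H(T) (a sketch with constant coefficients; all names and parameter values are our own):

```python
import numpy as np

rng = np.random.default_rng(7)
S0, K, r, mu, sigma, T = 100.0, 100.0, 0.05, 0.12, 0.2, 1.0
theta = (mu - r) / sigma                        # market price of risk, eq. (10)
n_paths = 400_000

W_T = rng.normal(0.0, np.sqrt(T), n_paths)      # Brownian motion under P
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
H_T = np.exp(-r * T) * np.exp(-0.5 * theta**2 * T - theta * W_T)  # deflator

C0 = float(np.mean(H_T * np.maximum(S_T - K, 0.0)))
print(C0)  # close to 10.45 although the simulation uses the drift mu
```

The deflator weight H(T) exactly offsets the excess drift µ − r, so the estimate agrees with the Black-Scholes price computed with r alone.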
Proof. For simplicity, I = 1. (i) Applying Ito's product rule, one gets for an admissible ϕ = (ϕ_0, ϕ_1)′:¹⁸
d(HX^ϕ) = H dX^ϕ + X^ϕ dH + d<H, X^ϕ>
¹⁷ The process σ is uniformly positive definite if there exists a constant K > 0 such that x′σ(t)σ(t)′x ≥ Kx′x for all x ∈ ℝᵐ and all t ∈ [0, T], P-a.s. This requirement implies that σ(t) is regular.
¹⁸ Note that θσ = µ − r.
= HX^ϕ r dt + Hϕ_1 S(µ − r)dt + Hϕ_1 Sσ dW − X^ϕ Hr dt − X^ϕ Hθ dW − Hϕ_1 Sθσ dt
= H( ϕ_1 Sσ − X^ϕ θ )dW. (13)
Since ϕ is admissible, we have X^ϕ(t) ≥ 0. By definition, H(t) ≥ 0 and thus H(t)X(t) ≥ 0. By Proposition 3.25, HX^ϕ is a supermartingale, which proves (11). Note that an arbitrage strategy would violate this supermartingale property.
(ii) Clearly, θ = (µ − r)/σ is unique and uniformly bounded by some K_θ > 0.¹⁹ Define
X(t) := E[ H(T)C(T) | F_t ] / H(t).
By definition, X(t) is adapted, X(0) = x, X(t) ≥ 0, and X(T) = C(T) (P-a.s.).²⁰ Set M(t) := H(t)X(t) = E[H(T)C(T)|F_t]. Due to (12), M is a martingale with M(0) = x. Applying the martingale representation theorem, there exists an {F_t}-progressively measurable real-valued process ψ with ∫_0^T ψ(s)² ds < ∞ such that (P-a.s.)
H(t)X(t) = M(t) = x + ∫_0^t ψ(s) dW(s). (14)
Since H and ∫_0^· ψ(s) dW(s) are continuous, we can choose X to be continuous. Comparing (13) and (14) yields²¹
H( ϕ_1 Sσ − Xθ ) = ψ.
Since there exists a K_σ > 0 such that σ(t) ≥ K_σ for all t ∈ [0, T],²² we get
ϕ_1 = ( ψ/H + Xθ ) / (Sσ).
Since X^ϕ = ϕ_0 M + ϕ_1 S for any strategy ϕ, we define
ϕ_0 := ( X − ϕ_1 S ) / M,
which implies that ϕ is self-financing and X = X^ϕ. Since M > 0, H > 0, and X ≥ 0 are continuous, M as well as H are pathwise bounded away from zero and X is pathwise bounded from above, i.e. M(ω) ≥ K_M(ω) > 0, H(ω) ≥ K_H(ω) > 0, and X(ω) ≤ K_X(ω) < ∞ for a.a. ω ∈ Ω. Therefore, ϕ is a trading strategy because²³
∫_0^T (ϕ_1(s)S(s))² ds = ∫_0^T ( (ψ(s)/H(s) + X(s)θ(s)) / σ(s) )² ds ≤ (2/K_σ²)( (1/K_H²) ∫_0^T ψ(s)² ds + K_X² K_θ² T ) < ∞,
¹⁹ Note that all coefficients are bounded and σ is uniformly positive definite.
²⁰ Since F_0 is the trivial σ-algebra augmented by the null sets, every F_0-measurable random variable is a.s. constant and the conditional expectation equals the unconditional expectation.
²¹ Note that the representation of an Ito process is unique.
²² By assumption, σ is uniformly positive definite.
²³ (a + b)² ≤ 2a² + 2b².
∫_0^T |ϕ_0(s)| ds = ∫_0^T ( |X(s) − ϕ_1(s)S(s)| / M(s) ) ds ≤ (1/K_M)( K_X T + ∫_0^T (1 + (ϕ_1(s)S(s))²) ds ) < ∞
P-a.s. □
Theorem 4.6 (Martingale Representation Theorem)
Let {M_t}_{t∈[0,T]} be a real-valued process adapted to the Brownian filtration.
(i) If M is a square-integrable martingale, i.e. E[M_t²] < ∞ for all t ∈ [0, T], then there exists a progressively measurable ℝᵐ-valued process {ψ_t}_{t∈[0,T]} with
E[ ∫_0^T ||ψ_s||² ds ] < ∞ (15)
and
M_t = M_0 + ∫_0^t ψ_s′ dW_s, P-a.s.
(ii) If M is a local martingale, then the result from (i) holds with
∫_0^T ||ψ_s||² ds < ∞ P-a.s.
instead of (15).
In both cases, ψ is P⊗Lebesgue-unique.
Proof. See Korn/Korn (1999, pp. 81ff).
Remarks.
• All Brownian (local) martingales are stochastic integrals and vice versa!
• The quadratic (co)variation is defined for all Brownian (local) martingales.
Proposition and Definition 4.7 (Quadratic (Co)variation)
(i) Let X, Y be Brownian local martingales. The quadratic covariation <X, Y> is the unique adapted, continuous process of bounded variation with <X, Y>_0 = 0 such that XY − <X, Y> is a Brownian local martingale. If X = Y, this process is non-decreasing.
(ii) Let (Ω, G, P, {G_t}) be a filtered probability space satisfying the usual conditions and let X, Y be continuous local martingales for the filtration {G_t}. Then the result of (i) holds with XY − <X, Y> being a local martingale for {G_t}.
Proof. (i) By the martingale representation theorem, dX = αdW and dY = βdW with progressively measurable processes α and β. Ito's product rule yields
d(XY) = Y dX + X dY + d<X, Y> = Yα dW + Xβ dW + αβ dt.
Since Ito integrals are Brownian local martingales, XY − <X, Y> is a Brownian local martingale. The rest is obvious.
(ii) See Karatzas/Shreve (1991, p. 36). □
Remark. One can show that if X and Y are square-integrable, then XY − <X, Y> is a martingale.
4.4 Pricing with Risk-neutral Measures
The deflator is defined as
H(t) = (1/M(t)) Z(t),
where
dZ(t) = −Z(t)θ(t)′dW(t)
is a local martingale. If it is even a martingale, we have E[Z(T)] = 1 and we can define a probability measure on F_T by
Q(A) := E[1_A Z(T)], A ∈ F_T.
Hence, Z(T) is the density of Q and we can rewrite the result from Theorem 4.5 as
C(0) = E[H(T)C(T)] = E[ Z(T) C(T)/M(T) ] = E^Q[ C(T)/M(T) ],
i.e. the price of a claim equals its expected discounted payoff, where we discount with the money market account (instead of the deflator) and the expectation is calculated w.r.t. the measure Q (instead of the physical measure).
Two important questions arise:
Q1. Under which conditions is Z a martingale? (Novikov condition)
Q2. What happens to the dynamics of the assets if we change to the measure Q? (Girsanov's theorem)
Proposition 4.8
Let θ be progressively measurable and K > 0. If ∫_0^T ||θ(s)||² ds ≤ K, then the process {Z(t)}_{t∈[0,T]} defined by
dZ(t) = −Z(t)θ(t)′dW(t), Z(0) = 1, (16)
is a martingale.
Proof. For simplicity, m = 1. The process Z is a local martingale. By variation of constants, we get
Z(t) = exp( −0.5 ∫_0^t θ(s)² ds − ∫_0^t θ(s) dW(s) ) ≥ 0. (17)
By Proposition 3.25, it is thus a supermartingale. Hence, it is sufficient to show that E[Z(T)] = 1. Define
τ_n := inf{ t ≥ 0 : | ∫_0^t θ(s) dW(s) | ≥ n }.
Due to (17), for fixed n ∈ ℕ, we get Z(T ∧ τ_n)² ≤ e^{2n} and thus ∫_0^{T∧τ_n} Z(s)²θ(s)² ds ≤ K e^{2n}. From (16), we obtain E[Z(T ∧ τ_n)] = 1. Therefore, the claim is proved if {Z(T ∧ τ_n)}_{n∈ℕ} is UI because in this case
lim_{n→∞} E[Z(T ∧ τ_n)] = E[Z(T)].
We obtain the estimate
|Z(T ∧ τ_n)|^{1+ε} ≤ exp( 0.5(ε + ε²) ∫_0^T θ(s)² ds ) · Y(T ∧ τ_n),
where the first factor is independent of n and bounded by assumption, and where
dY(t) = −Y(t)(1 + ε)θ(t) dW(t), Y(0) = 1.
With the same arguments leading to E[Z(T ∧ τ_n)] = 1 we conclude E[Y(T ∧ τ_n)] = 1. Therefore,
sup_{n∈ℕ} E[|Z(T ∧ τ_n)|^{1+ε}] < ∞,
which, by Proposition 3.19, proves UI of {Z(T ∧ τ_n)}_{n∈ℕ}. □
Remark. The requirement in the proposition is a strong one. It can be replaced by the Novikov condition (see Karatzas/Shreve (1991, pp. 198ff)):
E[ exp( 0.5 ∫_0^T ‖θ(s)‖² ds ) ] < ∞.
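For constant θ the Novikov condition holds trivially, and the martingale property E[Z(T)] = 1 can be checked by simulation. A minimal Monte Carlo sketch with a hypothetical constant θ (the number is illustrative, not from the text):

```python
import math
import random

# For constant theta the density Z(T) = exp(-0.5 theta^2 T - theta W(T))
# should satisfy E[Z(T)] = 1 (Z is a true martingale, not just local).
def mean_Z(theta=0.8, T=1.0, n=200_000, seed=2):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        W = math.sqrt(T) * rng.gauss(0.0, 1.0)   # W(T) ~ N(0, T)
        total += math.exp(-0.5 * theta**2 * T - theta * W)
    return total / n
```

The sample mean comes out very close to 1, as the proposition predicts for bounded θ.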
We turn to the second question and define (Q := Q_T)
Q_t(A) := E[1_A Z(t)], A ∈ F_t, t ∈ [0, T]. (18)
The following lemma shows that Q_t is well-defined.
Lemma 4.9
For a bounded stopping time τ ∈ [0, T] and A ∈ F_τ, we get Q_T(A) = Q_τ(A).
Proof. The optional sampling theorem (OS) yields (see Theorem 3.17)
Q_T(A) = E[1_A Z(T)] = E[E[1_A Z(T)|F_τ]] = E[1_A E[Z(T)|F_τ]] =(OS) E[1_A Z(τ)] = Q_τ(A). □
To prove Girsanov's theorem, we need some additional tools.
Theorem 4.10 (Levy's Characterization of the Brownian Motion)
An m-dimensional stochastic process X with X(0) = 0 is an m-dimensional Brownian motion if and only if it is a continuous local martingale with
⟨X_j, X_k⟩_t = δ_{jk} t,
where δ_{jk} = 1_{{j=k}} is the Kronecker delta.
Proof. For simplicity m = 1. Clearly, the Brownian motion is a continuous local martingale with ⟨W⟩_t = t. To prove the converse, fix u ∈ ℝ and set f(t, x) = exp(iux + 0.5u²t) and Z_t := f(t, X_t), where i = √−1. Since f ∈ C^{1,2} is analytic, a complex-valued version of Ito's formula yields
dZ = f_t dt + f_x dX + 0.5 f_xx d⟨X⟩
   = 0.5u² f dt + iu f dX − 0.5u² f d⟨X⟩   [using ⟨X⟩_t = t]
   = iu f dX.
Since an integral w.r.t. a local martingale is a local martingale as well (see footnote 25), Z is a (complex-valued) local martingale which is bounded on [0, T] for every T ∈ ℝ. Hence, Z is a martingale, implying E[exp(iuX_t + 0.5u²t)|F_s] = exp(iuX_s + 0.5u²s), i.e.
E[exp(iu(X_t − X_s))|F_s] = exp(−0.5u²(t − s)).
Since this holds for any u ∈ ℝ, the difference X_t − X_s ∼ N(0, t − s) and X_t − X_s is independent of F_s. □
Proposition 4.11 (Bayes Rule)
Let µ and ν be two probability measures on a measurable space (Ω, F) such that dν/dµ = f for some f ∈ L¹(µ). Let X be a random variable on (Ω, F) such that
E_ν[|X|] = ∫ |X| f dµ < ∞.
Let H be a σ-field with H ⊂ F. Then
E_ν[X|H] E_µ[f|H] = E_µ[fX|H] a.s.
Proof. Let H ∈ H. We start with the right-hand side:
∫_H E_µ[fX|H] dµ = ∫_H Xf dµ = ∫_H X dν = ∫_H E_ν[X|H] dν = ∫_H E_ν[X|H] f dµ
= E_µ[E_ν[X|H] f 1_H] = E_µ[ E_µ[E_ν[X|H] f 1_H | H] ]
= E_µ[1_H E_ν[X|H] E_µ[f|H]] = ∫_H E_ν[X|H] E_µ[f|H] dµ.
Since H ∈ H was arbitrary, the claim follows. □
Lemma 4.12
Let {Y_t, F_t}_{t∈[0,T]} be a right-continuous adapted process. The following statements are equivalent:
(i) {Z_t Y_t, F_t}_{t∈[0,T]} is a local P-martingale.
(ii) {Y_t, F_t}_{t∈[0,T]} is a local Q_T-martingale.
Footnote 25: For instance, a Brownian local martingale can be represented as an Ito integral.
Proof. Assume (i) to hold and let {τ_n}_n be a localizing sequence of ZY. Then (see footnote 26)
E^{Q_T}[Y_{t∧τ_n}|F_s] =(Lem 4.9) E^{Q_{t∧τ_n}}[Y_{t∧τ_n}|F_s] =(Bayes) E^P[Z_{t∧τ_n} Y_{t∧τ_n}|F_s] / E^P[Z_{t∧τ_n}|F_s] = Z_{s∧τ_n} Y_{s∧τ_n} / Z_{s∧τ_n} = Y_{s∧τ_n}.
The converse implication can be proved similarly (exercise). □
Theorem 4.13 (Girsanov's Theorem)
We assume that Z is a martingale and define the process {W^Q(t), F_t} by
W^Q_j(t) := W_j(t) + ∫_0^t θ_j(s) ds, t ≥ 0,
j ∈ {1, …, m}. For fixed T ∈ [0, ∞), the process {W^Q(t), F_t}_{t∈[0,T]} is an m-dimensional Brownian motion on (Ω, F_T, Q_T), where Q_T is defined by (18).
Proof. For simplicity, m = 1. By Levy's characterization, we need to show that
(i) {W^Q(t), F_t}_{t∈[0,T]} is a continuous local martingale on (Ω, F_T, Q_T).
(ii) ⟨W^Q⟩_t = t on (Ω, F_T, Q_T).
ad (i). Ito's product rule yields
d(ZW^Q) = Z dW^Q + W^Q dZ + d⟨W^Q, Z⟩
        = Z dW + Zθ dt − W^Q Zθ dW − Zθ dt
        = (Z − W^Q Zθ) dW,
i.e. ZW^Q is a local P-martingale. By Lemma 4.12, the process W^Q is a local Q_T-martingale as well.
ad (ii). Ito's product rule leads to
d( Z((W^Q)² − t) ) = d( W^Q (ZW^Q) ) − d(Z t)
= W^Q d(ZW^Q) + ZW^Q dW^Q + d⟨ZW^Q, W^Q⟩ − Z dt − t dZ
= W^Q d(ZW^Q) + ZW^Q(θ dt + dW) + (Z − W^Q Zθ) dt − Z dt + tθZ dW
= W^Q d(ZW^Q) + Z(W^Q + tθ) dW,
i.e. {Z_t((W^Q_t)² − t)} is a local P-martingale. Applying Lemma 4.12 again yields that {(W^Q_t)² − t} is a local Q_T-martingale. By Proposition 4.7 (ii), the claim (ii) follows. □
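Girsanov's theorem can be illustrated by Monte Carlo: expectations under Q are computed by reweighting P-samples with the density Z(T). For constant θ, W^Q(T) = W(T) + θT should then have the moments of an N(0, T) variable under Q. A sketch with hypothetical parameters (not from the text):

```python
import math
import random

# Reweighting check of Girsanov: for constant theta,
# E[Z(T) * W_Q(T)] should be ~0 and E[Z(T) * W_Q(T)^2] should be ~T,
# i.e. W_Q is a standard Brownian motion under Q.
def girsanov_moments(theta=0.5, T=1.0, n=200_000, seed=3):
    rng = random.Random(seed)
    m1 = m2 = 0.0
    for _ in range(n):
        W = math.sqrt(T) * rng.gauss(0.0, 1.0)
        Z = math.exp(-0.5 * theta**2 * T - theta * W)   # density dQ/dP
        WQ = W + theta * T                              # shifted Brownian motion
        m1 += Z * WQ
        m2 += Z * WQ * WQ
    return m1 / n, m2 / n
```

Both reweighted moments come out close to their N(0, T) values, even though W^Q has a drift under P.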
Corollary 4.14 (Girsanov II)
If {M_t}_{t∈[0,T]} is a Brownian local P-martingale and
D_t := ∫_0^t (1/Z_s) d⟨Z, M⟩_s,
Footnote 26: Note that Y_{t∧τ_n} is F_{t∧τ_n}-measurable, i.e. the conditional expected value can be computed w.r.t. Q_{t∧τ_n} instead of Q_T. Lemma 4.9 ensures that both measures coincide on F_{t∧τ_n}. Besides, F_{t∧τ_n} ⊂ F_T, i.e. Bayes' rule can be applied. Furthermore, by Proposition 3.23, E^P[Z_{t∧τ_n}|F_s] = Z_{s∧τ_n}.
then {M_t − D_t}_{t∈[0,T]} is a continuous local Q_T-martingale and ⟨M − D⟩^Q = ⟨M⟩, i.e. the quadratic variations of M − D under Q_T and of M under P coincide.
Proof. By the martingale representation theorem, we have dM = α dW for some progressive α. Hence,
dD = (1/Z) d⟨Z, M⟩ = −(1/Z)θZα dt = −θα dt
and thus
d(M − D) = α dW + θα dt = α dW^Q.
Therefore, M − D is a local Q-martingale. Besides,
d⟨M − D⟩^Q_t = α² dt = d⟨M⟩_t,
which completes the proof. □
Theorem 4.15 (Converse Girsanov)
Let Q be a probability measure equivalent to P on F_T. Then the density process {Z(t), F_t}_{t∈[0,T]} defined by
Z(t) := dQ/dP|_{F_t}
is a positive Brownian P-martingale possessing the representation
dZ(t) = −Z(t)θ(t)′dW(t), Z(0) = 1,
for a progressively measurable m-dimensional process θ with
∫_0^T ‖θ(s)‖² ds < ∞ P-a.s. (19)
Proof. Since Q and P are equivalent on F_T, both measures are equivalent on F_t, t ∈ [0, T], i.e. Z(t) > 0 for all t ∈ [0, T]. Besides, for A ∈ F_t,
∫_A Z(T) dP = ∫_A dQ/dP|_{F_T} dP =(Lem 4.9) ∫_A dQ/dP|_{F_t} dP = ∫_A Z(t) dP,
i.e. Z(t) = E[Z(T)|F_t], implying that Z is a Brownian martingale. By the martingale representation theorem,
Z(t) = 1 + ∫_0^t ψ(s)′dW(s)
with a progressive process ψ satisfying ∫_0^T ‖ψ(s)‖² ds < ∞ P-a.s. Since Z > 0, we have Z(ω) ≥ K_Z(ω) > 0 for a.a. ω ∈ Ω. Defining
θ(t) := −ψ(t)/Z(t)
thus implies (19), which completes the proof. □
Definition 4.3 (Doleans-Dade Exponential)
For an Ito process X with X_0 = 0 the Doleans-Dade exponential (stochastic exponential) of X, written ℰ(X), is the unique Ito process Y that is the solution of
Y_t = 1 + ∫_0^t Y_s dX_s.
Remarks.
• Therefore, the density Z is sometimes written as ℰ(−∫_0^· θ(s)′dW(s)).
• One can show that ℰ(X)ℰ(Y) = ℰ(X + Y + ⟨X, Y⟩) and that (ℰ(X))^{−1} = ℰ(−X + ⟨X, X⟩).
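For X_t = θW_t with constant θ, the stochastic exponential has the explicit form ℰ(X)_t = exp(X_t − 0.5⟨X⟩_t), and the inversion rule above can be checked pathwise. A small sketch (the constant θ is a hypothetical example):

```python
import math
import random

# Pathwise check of (E(X))^{-1} = E(-X + <X, X>) for X_t = theta * W_t,
# using the explicit form E(X)_t = exp(X_t - 0.5 <X>_t).
def check_inverse(theta=0.7, T=1.0, seed=4):
    rng = random.Random(seed)
    W = math.sqrt(T) * rng.gauss(0.0, 1.0)   # one sample of W_T
    X, qv = theta * W, theta**2 * T          # X_T and <X>_T
    EX = math.exp(X - 0.5 * qv)              # E(X)_T
    EinvX = math.exp((-X + qv) - 0.5 * qv)   # E(-X + <X>)_T, since <-X + <X>> = <X>
    return EX * EinvX                        # should equal 1 on every path
```

The product is 1 on every path, not just in expectation, since the identity holds at the level of exponents.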
We now come back to our pricing problem. We set Q := Q_T. Recall our definitions
H(t) := Z(t)/M(t),
where
M(t) := exp( ∫_0^t r(s) ds ),
Z(t) := exp( −0.5 ∫_0^t ‖θ(s)‖² ds − ∫_0^t θ(s)′dW(s) ),
and θ is implicitly defined by
σ(t)θ(t) = µ(t) − r(t)1. (20)
For bounded θ, by Girsanov's theorem, Z(t) = dQ/dP|_{F_t} is a density process for the measure Q_t(A) = E[Z(t)1_A] and {W^Q(t), F_t} defined by
W^Q_j(t) := W_j(t) + ∫_0^t θ_j(s) ds, t ≥ 0,
is a Q-Brownian motion.
Definition 4.4 (Equivalent Martingale Measure)
A probability measure Q is a (weak) equivalent martingale measure if
(i) Q ∼ P and
(ii) all discounted price processes S_i/M, i = 1, …, I, are (local) Q-martingales.
Remark. An equivalent martingale measure (EMM) is also said to be a risk-neutral measure.
Proposition 4.16
If Q is a weak EMM and
E^Q[ exp( 0.5 ∫_0^T ‖σ_i(s)‖² ds ) ] < ∞ (21)
for every i ∈ {1, …, I}, then Q is already an EMM.
Proof. Define Y_i := S_i/M. Since Q is a weak martingale measure, Y_i is a local Q-martingale. By the martingale representation theorem and the uniqueness of the representation of an Ito process,
dY_i = Y_i σ_i dW^Q.
By the Novikov condition (21), this is a Q-martingale. □
Remark. Since we have assumed that σ is uniformly bounded, we need not distinguish between EMM and weak EMM.
Theorem 4.17 (Characterization of EMM)
In the Brownian market (6), the following are equivalent:
(i) The probability measure Q is an EMM.
(ii) There exists a solution θ to the system of equations (20) such that the process Z = ℰ(−∫_0^· θ(s)′dW(s)) is a Girsanov density inducing the probability measure Q.
Proof. Define Y_i := S_i/M. Assume (i) to hold. Since Q is a martingale measure,
dY_i = Y_i σ_i dW^Q. (22)
Besides, since Q is equivalent to P, by the converse Girsanov theorem, there exists a process θ such that dQ/dP = ℰ(−∫_0^· θ(s)′dW(s)) and dW^Q = dW + θ dt is a Brownian increment under Q. Therefore,
dS_i = S_i[µ_i dt + σ_i dW] = S_i[(µ_i − σ_iθ) dt + σ_i dW^Q],
implying
dY_i = Y_i[(µ_i − σ_iθ − r) dt + σ_i dW^Q]. (23)
Comparing (22) and (23) gives (ii).
On the other hand, assume (ii) to hold. By definition, Q is equivalent to P and dW^Q = dW + θ dt is a Q-Brownian increment. Ito's formula leads to
dY_i = Y_i[(µ_i − r) dt + σ_i dW] = Y_i[(µ_i − r − σ_iθ) dt + σ_i dW^Q] = Y_i σ_i dW^Q,
which is a Q-martingale increment because σ is assumed to be uniformly bounded. □
Corollary 4.18 (Q-Dynamics of the Assets)
Under an EMM Q, the dynamics of asset i are given by
dS_i(t) = S_i(t)[r(t) dt + σ_i(t)′dW^Q(t)],
i.e. the drift µ does not matter under Q.
Theorem 4.19 (Arbitrage-free Pricing under Q and Completeness)
(i) Let Q be an EMM and let ϕ be an admissible portfolio strategy. Then {X^ϕ(t)/M(t)}_{t∈[0,T]} is both a local Q-martingale and a Q-supermartingale, implying that
E^Q[X^ϕ(t)/M(t)] ≤ X^ϕ(0) for all t ∈ [0, T].
Besides, the market is arbitrage-free. If additionally
E^Q[ ∫_0^T (ϕ_i(s) S_i(s)/M(s))² ds ] < ∞ (24)
for every i = 1, …, I, then {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-martingale.
(ii) Assume that I = m ("# stocks = # sources of risk") and that σ is uniformly positive definite. Then (10) has a unique solution which is uniformly bounded, implying that there exists a unique EMM Q. Besides, the market is complete. Furthermore, let C(T) be an F_T-measurable contingent claim with
x := E^Q[C(T)/M(T)] < ∞.
Then the time-t price of the claim equals
C(t) = M(t) E^Q[ C(T)/M(T) | F_t ],
i.e. the discounted price process {C(t)/M(t)}_{t∈[0,T]} is a Q-martingale.
Proof. (i) Consider
dX^ϕ = ϕ′dS = X r dt + Σ_{i=1}^I ϕ_i S_i σ_i dW^Q.
Since d(1/M) = −(1/M) r dt,
d(X/M) = X d(1/M) + (1/M) dX = (1/M) Σ_{i=1}^I ϕ_i S_i σ_i dW^Q,
which is a positive local Q-martingale and thus a Q-supermartingale. If (24) is satisfied, then it is a Q-martingale.
(ii) Since θ is unique, by Theorem 4.17, the EMM Q is unique. The claim price follows from Theorem 4.5 because H = Z/M. □
Remarks.
• An EMM exists if, for instance, θ is bounded. This follows from Theorem 4.17.
• If ϕ is uniformly bounded, then (24) holds because µ, σ, and r are assumed to be uniformly bounded.
• If the market is incomplete, then there exists more than one market price of risk θ. Let Q and Q̃ be two corresponding EMMs. If (24) holds, then for an admissible strategy ϕ we get
M(t) E^Q[ X^ϕ(T)/M(T) | F_t ] = X^ϕ(t) = M(t) E^{Q̃}[ X^ϕ(T)/M(T) | F_t ].
This implies that any EMM assigns the same value to a replicable claim, namely its replication costs. However, the prices of non-replicable claims may differ under different EMMs. This is the reason why pricing in incomplete markets is so complicated and in some sense arbitrary.
Corollary 4.20 (Black-Scholes Again)
The price of the European call is given by
C(t) = S(t) N(d_1(t)) − K N(d_2(t)) e^{−r(T−t)}
with
d_1(t) = [ ln(S(t)/K) + (r + 0.5σ²)(T − t) ] / ( σ√(T − t) ),
d_2(t) = d_1(t) − σ√(T − t).
Proof. By Theorem 4.19,
C(t) = E^Q[ max{S(T) − K; 0} | F_t ] e^{−r(T−t)},
where due to Corollary 4.18
dS(t) = S(t)[r dt + σ dW^Q(t)].
Thus S(T) has the same distribution as Y(T) in the proof of Proposition 4.3 and the result follows. □
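The closed-form price of Corollary 4.20 is straightforward to implement; a minimal sketch (the numeric inputs in the check are hypothetical examples, not from the text):

```python
import math

# Black-Scholes call price from Corollary 4.20, with N the standard normal cdf
# computed via the error function.
def bs_call(S, K, tau, r, sigma):
    """Call price C(t) for time to maturity tau = T - t."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * N(d1) - K * math.exp(-r * tau) * N(d2)
```

For instance, an at-the-money call with S = K = 100, τ = 1, r = 5%, σ = 20% prices at about 10.45.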
Corollary 4.21 (Q-Dynamics of Claims)
Under the assumptions of Theorem 4.19, the dynamics of a replicable claim C(T) are given by
dC(t) = C(t)[r(t) dt + ψ(t)′dW^Q(t)] (25)
with a suitable progressively measurable process ψ.
Proof. By Theorem 4.19, C/M is a (local) Q-martingale and thus (25) follows from the martingale representation theorem. □
Proposition 4.22 (Black-Scholes PDE Again and Hedging)
Under the assumptions of Theorem 4.19, let C(T) be a replicable claim with C(t) = f(t, S(t)) and C(T) = h(S(T)), where f is a C^{1,2}-function.
(i) The pricing function f satisfies the following multidimensional Black-Scholes PDE:
f_t + Σ_{i=1}^I f_i s_i r + 0.5 Σ_{i,k=1}^I Σ_{j=1}^m s_i s_k σ_{ij} σ_{kj} f_{ik} − rf = 0, f(T, s) = h(s), s ∈ ℝ^I_+,
where f_i denotes the partial derivative w.r.t. the i-th asset price.
(ii) A replication strategy for C(T) is given by
ϕ_i(t) = f_i(t, S(t)), i = 1, …, I,
where the remaining funds C(t) − Σ_{i=1}^I ϕ_i(t)S_i(t) are invested in the money market account, i.e. ϕ_M(t) = (C(t) − Σ_{i=1}^I ϕ_i(t)S_i(t))/M(t).
Proof. (i) Ito's formula yields
dC = [ f_t + Σ_{i=1}^I f_i S_i r + 0.5 Σ_{i,k=1}^I Σ_{j=1}^m f_{ik} S_i S_k σ_{ij} σ_{kj} ] dt + Σ_{i=1}^I Σ_{j=1}^m f_i S_i σ_{ij} dW^Q_j. (26)
Since the representation of an Ito process is unique, comparing the drift with the drift of (25) yields the Black-Scholes PDE.
(ii) Since C(T) is attainable, there exists a replication strategy ϕ with X^ϕ(T) = C(T) and Q-dynamics
dX^ϕ = X^ϕ r dt + Σ_{i=1}^I Σ_{j=1}^m ϕ_i S_i σ_{ij} dW^Q_j,
where due to Theorem 4.19 (ii) the drift equals r (see footnote 27). Comparing the diffusion part with the diffusion part of (26) shows that one can choose ϕ_i = f_i. □
Remarks.
• The strategy in (ii) is said to be a delta hedging strategy.
• Under the conditions of Theorem 4.19 (ii), the delta hedging strategy is the unique replication strategy.
• In the Black-Scholes model, we get for the European call (see Exercise 29): ϕ_S = N(d_1). Due to the put-call parity, the delta hedge for the put reads ϕ_S = −N(−d_1).
• If I > m ("# stocks > # sources of risk"), then in general there exist further replication strategies, i.e. the choice ϕ_i = f_i in the proof of (ii) is not the only possible choice.
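The delta hedge ϕ_S = N(d_1) of the remarks above can be verified numerically by differentiating the call pricing function. A sketch with hypothetical example inputs:

```python
import math

# Delta hedging sketch for the Black-Scholes call: the stock position is
# phi_S = N(d1), checked against a central finite difference of the price.
N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def d1(S, K, tau, r, sigma):
    return (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))

def call_price(S, K, tau, r, sigma):
    d_1 = d1(S, K, tau, r, sigma)
    return S * N(d_1) - K * math.exp(-r * tau) * N(d_1 - sigma * math.sqrt(tau))

def call_delta(S, K, tau, r, sigma):
    """Delta hedge phi_S = N(d1); the put delta is -N(-d1) by put-call parity."""
    return N(d1(S, K, tau, r, sigma))

# Finite-difference check that N(d1) is indeed dC/dS:
S, K, tau, r, sigma, h = 100.0, 100.0, 1.0, 0.05, 0.2, 1e-4
fd = (call_price(S + h, K, tau, r, sigma) - call_price(S - h, K, tau, r, sigma)) / (2 * h)
```

The finite-difference slope agrees with N(d_1) to high accuracy, illustrating why the delta stays between 0 and 1.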
In Theorem 4.19, we have proved that the following implications hold:
1) Existence of an EMM =⇒ No arbitrage
2) Uniqueness of the EMM =⇒ Completeness
The following theorems show to what extent the converse implications are true.
Theorem 4.23 (Existence of a Market Price of Risk)
Assume that the market (6) is arbitrage-free. Then there exists a market price of risk, i.e. there exists a progressively measurable process θ solving the system (10).
Proof. Define φ_i := ϕ_i S_i (the amount of money invested in asset i), leading to the following version of the wealth equation:
dX(t) = X(t)r(t) dt + φ(t)′(µ(t) − r(t)1) dt + φ(t)′σ(t) dW(t).
Footnote 27: The representation of the diffusion part follows from rewriting Xπ′σ dW^Q in terms of ϕ = πX/S.
Assume that there exists a self-financing strategy with
φ(t)′σ(t) = 0 and φ(t)′(µ(t) − r(t)1) ≠ 0.
This strategy would be an arbitrage since it does not involve risk, but its wealth equation has a drift different from r. Thus in an arbitrage-free market every vector in the kernel Ker(σ(t)′) has to be orthogonal to the vector µ(t) − r(t)1. But the orthogonal complement of Ker(σ(t)′) is Range(σ(t)). Hence, µ(t) − r(t)1 has to be in the range of σ(t), i.e. there exists a θ(t) such that
µ(t) − r(t)1 = σ(t)θ(t).
The progressive measurability of θ is proved in Karatzas/Shreve (1999, pp. 12ff). □
Remark. In this theorem, nothing is said about whether the corresponding process
Z is a martingale. Although a market price of risk exists, it is not clear if an EMM
exists.
Theorem 4.24 (Uniqueness of the EMM)
Assume that the market is complete and let Q be an EMM such that {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-martingale for all admissible strategies ϕ with E[X^ϕ(T)/M(T)] < ∞. If Q̃ is another EMM, then Q = Q̃, i.e. Q is the unique EMM.
Proof. Fix j ∈ {1, …, m}. Since the market is complete, there exists an admissible trading strategy ϕ^j such that X^{ϕ^j}(T) = Y_j(T), where
Y_j(T) = exp( ∫_0^T r(s) ds − 0.5T + W^Q_j(T) )
and, accordingly, Y_j(t) := M(t) exp(−0.5t + W^Q_j(t)).
From Theorem 4.19, we know that the process X^{ϕ^j}/M is a local martingale under Q and Q̃. By assumption, X^{ϕ^j}/M is even a Q-martingale, implying
X^{ϕ^j}(t) = M(t)E^Q[ X^{ϕ^j}(T)/M(T) | F_t ] = M(t)E^Q[ Y_j(T)/M(T) | F_t ] = Y_j(t) P-a.s.,
i.e. Y_j is a modification of X^{ϕ^j}. By Proposition 3.2, X^{ϕ^j} and Y_j are indistinguishable. Hence, Y_j/M is a local martingale under Q and Q̃. By Theorem 4.17, both measures are characterized by market prices of risk θ and θ̃ such that
dW^Q_j(t) = dW_j(t) + θ_j(t) dt,
dW^{Q̃}_j(t) = dW_j(t) + θ̃_j(t) dt.
Therefore,
Y_j(t)/M(t) = 1 + ∫_0^t (Y_j(s)/M(s)) dW^Q_j(s) = 1 + ∫_0^t (Y_j(s)/M(s)) dW^{Q̃}_j(s) + ∫_0^t (Y_j(s)/M(s))(θ_j(s) − θ̃_j(s)) ds.
Since Y_j/M is a Brownian local Q̃-martingale, the process Y_j(θ_j − θ̃_j)/M needs to be indistinguishable from zero (see footnote 28). Since Y_j/M > 0, the market prices of risk θ_j and θ̃_j are indistinguishable. Since this holds for every j, we get Q = Q̃. □
Remarks.
Footnote 28: Recall that the representation of Ito processes is unique up to indistinguishability.
• Since {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-supermartingale, this process is a Q-martingale if, for instance,
E^Q[ X^ϕ(T)/M(T) ] = X^ϕ(0).
• Obviously it is sufficient to check whether {X^{ϕ^j}(t)/M(t)}_{t∈[0,T]} is a Q-martingale for every j = 1, …, m.
5 Continuoustime Portfolio Optimization
5.1 Motivation
In the theory of option pricing the following problem is tackled:
Given: payoff C(T)
Wanted: today's price C(0) and hedging strategy ϕ
In the theory of portfolio optimization the converse problem is considered:
Given: initial wealth X(0)
Wanted: optimal final wealth X*(T) and optimal strategy ϕ*
Question: What is "optimal"?
1st idea. Maximize expected final wealth, i.e. max_ϕ E[X^ϕ(T)].
Problem: St. Petersburg paradox. Consider a game where a coin is flipped repeatedly. If head occurs for the first time in the n-th round, the player gets 2^{n−1} Euros, i.e. the expected payoff is
(1/2)·1 + (1/4)·2 + (1/8)·4 + … + (1/2^n)·2^{n−1} + … = Σ_{n=1}^∞ 1/2 = ∞.
Hence, investors who maximize expected final wealth should be willing to pay an arbitrarily large amount to play this game! Therefore, maximization of expected final wealth is usually not a reasonable criterion.
Question: What is the reason for this result?
Answer: People are risk averse, i.e. if they can choose between
• 50 Euros for sure or
• 100 Euros with probability 0.5 and 0 Euros with probability 0.5,
they choose the sure 50 Euros.
Bernoulli's idea. Measure payoffs in terms of a utility function U(x) = ln(x), i.e.
(1/2)ln(1) + (1/4)ln(2) + (1/8)ln(4) + … + (1/2^n)ln(2^{n−1}) + …
= Σ_{n=1}^∞ (1/2^n) ln(2^{n−1}) = ln(2) Σ_{n=1}^∞ (n − 1)/2^n = ln(2).
Hence, an investor following this criterion would be willing to pay at most 2 Euros
to play this game.
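The two series above are easy to evaluate numerically via partial sums, which makes the contrast concrete: the expected payoff grows without bound, while the expected log-utility converges to ln 2.

```python
import math

# Partial sums for the St. Petersburg game: payoff 2^(n-1) with probability 2^(-n).
def expected_payoff(n_terms):
    # Each term is 2^(-n) * 2^(n-1) = 1/2, so the partial sum is n_terms/2.
    return sum(2.0 ** -n * 2.0 ** (n - 1) for n in range(1, n_terms + 1))

def expected_log_utility(n_terms):
    # Terms 2^(-n) * ln(2^(n-1)) = (n-1) ln(2) / 2^n; the series sums to ln 2.
    return sum(2.0 ** -n * (n - 1) * math.log(2.0) for n in range(1, n_terms + 1))
```

For example, 100 terms already give an expected payoff of 50, while the log-utility sum is numerically indistinguishable from ln 2 ≈ 0.693.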
This idea can be generalized: for a given initial wealth x_0 > 0, the investor maximizes the following utility functional:
sup_{π∈A} E[U(X^π(T))], (27)
where A := {π : π admissible and E[U(X^π(T))^−] < ∞} and
dX^π(t) = X^π(t)[ (r(t) + π(t)′(µ(t) − r(t)1)) dt + π(t)′σ dW(t) ], X^π(0) = x_0,
and the utility function U has the following properties:
Definition 5.1 (Utility Function)
A strictly concave, strictly increasing, and continuously differentiable function U : (0, ∞) → ℝ is said to be a utility function if
U′(0) := lim_{x↓0} U′(x) = ∞, U′(∞) := lim_{x↑∞} U′(x) = 0.
Remarks.
• The investor's greed is reflected by U′ > 0, i.e. loosely speaking: "More is better than less."
• The investor's risk aversion is reflected by the concavity of U, which implies that for any non-constant random variable Z, by Jensen's inequality,
E[U(Z)] < U(E[Z]),
i.e. loosely speaking: "A bird in the hand is worth two in the bush."
There exist two approaches to solve the dynamic optimization problem (27):
1. Martingale approach: Theorem 4.5 ensures that in a complete market any claim can be replicated.
=⇒ Determine the optimal wealth via a static optimization problem and interpret this wealth as a contingent claim.
2. Stochastic optimal control approach: We know that expectations can be represented as solutions to PDEs (Feynman-Kac theorem).
=⇒ Represent the functional (27) as a PDE and solve the PDE.
Standing Assumption. We consider a Brownian market (6) with bounded coefficients.
5.2 Martingale Approach
Assumption in the Martingale Approach: The market is complete.
The martingale approach consists of a three-step procedure:
1st step. Compute the optimal wealth X*(T) by solving a static optimization problem.
2nd step. Prove that the static optimization problem is equivalent to the dynamic optimization problem (27).
3rd step. Determine the replication strategy for X*(T) which, in the context of portfolio optimization, is said to be the optimal portfolio strategy π*.
1st step. Consider the constrained static optimization problem
sup_{A∈C} E[U(A)]
subject to the so-called budget constraint (BC)
E[H(T)A] ≤ x_0,
where C := {A ≥ 0 : A is F_T-measurable and E[U(A)^−] < ∞}.
The BC states that the present value of terminal wealth cannot exceed the initial wealth x_0 (see Theorem 4.5 (i)).
Heuristic solution. Consider the pointwise Lagrangian
L(A, λ) := U(A) + λ(x_0 − H(T)A).
The first-order condition (FOC)
U′(A) − λH(T) = 0
yields A* = V(λH(T)), where V := (U′)^{−1} : (0, ∞) → (0, ∞) is strictly decreasing. Since the investor is greedy, the BC is satisfied as an equality and we can choose λ* such that
Γ(λ) := E[H(T)V(λH(T))] = x_0, (28)
leading to λ* = Λ(x_0) := Γ^{−1}(x_0).
2nd step. Verify that the heuristic solution is indeed the optimal solution.
Lemma 5.1 (Properties of Γ)
Assume Γ(λ) < ∞ for all λ > 0. Then Γ is
(i) strictly decreasing,
(ii) continuous on (0, ∞),
(iii) Γ(0) := lim_{λ↓0} Γ(λ) = ∞ and Γ(∞) := lim_{λ↑∞} Γ(λ) = 0.
Proof. (i) V is strictly decreasing on (0, ∞). Since H(t) > 0 for every t ∈ [0, T], the function Γ is also strictly decreasing.
(ii) follows from the continuity of H and V as well as the dominated convergence theorem (see footnote 29).
Footnote 29: To check continuity at λ_0, choose some λ_1 < λ_0. Then due to (i) we can use Γ(λ_1) < ∞ as a majorant.
(iii) Since
lim_{y↓0} V(y) = ∞, lim_{y↑∞} V(y) = 0,
and V is strictly decreasing, the first result follows from the monotone convergence theorem and the second from the dominated convergence theorem. □
Remark. Hence, Λ(x) := Γ^{−1}(x) exists on (0, ∞) with Λ ≥ 0 and
Λ(0) := lim_{x↓0} Λ(x) = ∞, Λ(∞) := lim_{x↑∞} Λ(x) = 0.
Therefore, one can always find a λ such that (28) is satisfied as an equality.
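Because Γ is strictly decreasing with Γ(0+) = ∞ and Γ(∞) = 0, the inverse Λ = Γ^{−1} can always be evaluated numerically by bisection when no closed form is available. A small generic sketch (the Γ used in the demo line is a made-up stand-in, not a Γ from the text):

```python
# Bisection sketch for Lambda = Gamma^{-1}: Gamma is strictly decreasing,
# so Gamma(lam) = x0 has a unique root in (0, infinity).
def invert_decreasing(Gamma, x0, lo=1e-12, hi=1e12, iters=200):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if Gamma(mid) > x0:
            lo = mid          # Gamma(mid) too large -> the root lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = invert_decreasing(lambda l: 1.0 / l, 4.0)   # demo Gamma(l) = 1/l, root 0.25
```

The monotonicity guaranteed by Lemma 5.1 is exactly what makes this simple bracketing reliable.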
Lemma 5.2
For a utility function U with (U′)^{−1} = V we get
U(V(y)) ≥ U(x) + y(V(y) − x) for 0 < x, y < ∞.
Proof. By concavity of U, a Taylor expansion leads to U(x_2) ≤ U(x_1) + U′(x_1)(x_2 − x_1) and thus U(x_1) ≥ U(x_2) + U′(x_1)(x_1 − x_2), implying
U(V(y)) ≥ U(x) + U′(V(y))(V(y) − x) = U(x) + y(V(y) − x). □
Theorem 5.3 (Optimal Final Wealth)
Assume x_0 > 0 and Γ(λ) < ∞ for all λ > 0. Then the optimal final wealth reads
A* = V(Λ(x_0) H(T))
and there exists an admissible optimal strategy π* ∈ A such that X^{π*}(0) = x_0 and X^{π*}(T) = A* P-a.s.
Proof. By definition of Λ(x_0), we get E[H(T)A*] = x_0. Besides, A* > 0 because V > 0. Hence, A* satisfies all constraints and, by Theorem 4.5 (ii), there exists a strategy π* replicating A*.
We now verify E[U(A*)^−] < ∞, i.e. π* ∈ A. Due to Lemma 5.2,
U(A*) ≥ U(1) + Λ(x_0)H(T)(A* − 1).
Therefore (see footnote 30),
E[U(A*)^−] ≤ |U(1)| + Λ(x_0) E[H(T)(A* + 1)] ≤ |U(1)| + Λ(x_0)(x_0 + E[H(T)]) < ∞.
Finally, we show that π* and A* are indeed optimal. Consider some π ∈ A. Due to Lemma 5.2,
U(A*) ≥ U(X^π(T)) + Λ(x_0)H(T)(A* − X^π(T))
Footnote 30: a ≥ b implies a^− ≤ b^− ≤ |b|.
and thus
E[U(A*)] ≥ E[U(X^π(T))] + Λ(x_0)E[H(T)(A* − X^π(T))]
= E[U(X^π(T))] + Λ(x_0)(x_0 − E[H(T)X^π(T)]) ≥ E[U(X^π(T))],
where the last inequality follows from the budget constraint, which gives the desired result. □
3rd step. To determine the optimal strategy π*, one can apply a similar method as in Proposition 4.22 (ii). This is demonstrated in the following example.
Example "Power Utility with Constant Coefficients".
We make the following assumptions:
• Power utility function U(x) = (1/γ)x^γ (γ < 1, γ ≠ 0)
• I = m = 1 ("one stock, one Brownian motion")
• The coefficients r, µ, σ are constant.
• σ > 0, i.e. the market is complete.
Since V(y) = (U′)^{−1}(y) = y^{1/(γ−1)}, the optimal final wealth reads
A* = V(Λ(x_0)H(T)) = (Λ(x_0)H(T))^{1/(γ−1)},
where
H(T)^{1/(γ−1)} = exp( (1/(1−γ))rT + 0.5(1/(1−γ))θ²T + (1/(1−γ))θW(T) ) (29)
and θ = (µ − r)/σ. On the other hand, the final wealth for a strategy π is given by
X^π(T) = x_0 exp( rT + ∫_0^T ((µ − r)π_s − 0.5σ²π_s²) ds + ∫_0^T σπ_s dW_s ). (30)
By comparing the diffusion parts of (29) and (30) we guess the optimal strategy:
π*(t) = (1/(1 − γ)) · (µ − r)/σ²,
implying
X^{π*}(T) = x_0 exp( rT + (1/(1−γ))θ²T − 0.5(1/(1−γ)²)θ²T + (1/(1−γ))θW(T) ).
Since Γ(λ) = E[H(T)V(λH(T))] = λ^{1/(γ−1)} E[H(T)^{γ/(γ−1)}] = x_0, we get
Λ(x_0)^{1/(γ−1)} = x_0 exp( −(γ/(1−γ))rT − 0.5(γ/(1−γ))θ²T − 0.5(γ/(1−γ))²θ²T ).
Therefore,
A* = (Λ(x_0)H(T))^{1/(γ−1)} = x_0 exp( rT + 0.5θ²T − 0.5(γ/(1−γ))²θ²T + (1/(1−γ))θW(T) ).
It is easy to check that
(1/(1−γ)) − 0.5(1/(1−γ)²) = 0.5 − 0.5(γ/(1−γ))².
Hence, A* = X^{π*}(T) and due to completeness π* is the unique optimal portfolio strategy.
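The constant Merton ratio π* and the exponent identity that closes the example are both easy to check in code; the numeric inputs in the test are hypothetical examples, not from the text.

```python
# Merton ratio from the power-utility example and the exponent identity
# that shows A* = X^{pi*}(T).
def merton_ratio(mu, r, sigma, gamma):
    """pi*(t) = (mu - r) / ((1 - gamma) sigma^2), constant in time and wealth."""
    return (mu - r) / ((1.0 - gamma) * sigma**2)

def exponent_identity(gamma):
    """1/(1-g) - 0.5/(1-g)^2 minus 0.5 - 0.5 (g/(1-g))^2; should be zero."""
    lhs = 1.0 / (1.0 - gamma) - 0.5 / (1.0 - gamma) ** 2
    rhs = 0.5 - 0.5 * (gamma / (1.0 - gamma)) ** 2
    return lhs - rhs
```

Note that the identity holds for every γ < 1 (both sides equal (1 − 2γ)/(2(1 − γ)²)), which is exactly what makes the guessed strategy optimal.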
5.3 Stochastic Optimal Control Approach
Assumption. The coefficients r, µ, σ are constant and I = m = 1.
Define the value function
G(t, x) := sup_{π∈A} E_{t,x}[U(X^π(T))],
where E_{t,x}[U(X^π(T))] := E[U(X^π(T)) | X(t) = x]. Note that G(T, x) = U(x).
5.3.1 Heuristic Derivation of the HJB
We start with a heuristic derivation of the so-called Hamilton-Jacobi-Bellman equation (HJB), which is a PDE for G.
Assume that there exists an optimal portfolio strategy π*. We now carry out the following procedure:
(i) For given (t, x) ∈ [0, T] × (0, ∞), we consider the following strategies over [t, T]:
• Strategy I: Use the optimal strategy π*.
• Strategy II: For an arbitrary π use the strategy
π̂ = π on [t, t + ∆] and π̂ = π* on (t + ∆, T].
(ii) Compute E_{t,x}[U(X^{π*}(T))] and E_{t,x}[U(X^{π̂}(T))].
(iii) Using the fact that π* is at least as good as π̂ and taking the limit ∆ → 0 yields the HJB.
We set X* := X^{π*} and X̂ := X^{π̂}.
ad (ii).
Strategy I. By assumption, E_{t,x}[U(X*(T))] = G(t, x).
Strategy II.
E_{t,x}[U(X̂(T))] = E_{t,x}[ E_{t+∆, X̂(t+∆)}[U(X̂(T))] ] = E_{t,x}[ E_{t+∆, X̂(t+∆)}[U(X*(T))] ]
= E_{t,x}[G(t + ∆, X̂(t + ∆))] = E_{t,x}[G(t + ∆, X^π(t + ∆))]
for any admissible π.
ad (iii). By definition of the strategies,
E_{t,x}[G(t + ∆, X^π(t + ∆))] ≤ G(t, x), (31)
where we have equality if π = π*. Given that G ∈ C^{1,2}, Ito's formula yields
G(t + ∆, X^π(t + ∆)) = G(t, x) + ∫_t^{t+∆} G_t(s, X^π_s) ds + ∫_t^{t+∆} G_x(s, X^π_s) dX^π_s + 0.5 ∫_t^{t+∆} G_xx(s, X^π_s) d⟨X^π⟩_s
= G(t, x) + ∫_t^{t+∆} [ G_t(s, X^π_s) + G_x(s, X^π_s)X^π_s(r + π_s(µ − r)) + 0.5 G_xx(s, X^π_s)(X^π_s)²π_s²σ² ] ds + ∫_t^{t+∆} G_x(s, X^π_s)X^π_s π_s σ dW_s.
Given E_{t,x}[∫_t^{t+∆} G_x(s, X^π_s)X^π_s π_s σ dW_s] = 0, we can rewrite (31) as
E_{t,x}[ ∫_t^{t+∆} ( G_t(s, X^π_s) + G_x(s, X^π_s)X^π_s(r + π_s(µ − r)) + 0.5 G_xx(s, X^π_s)(X^π_s)²π_s²σ² ) ds ] ≤ 0.
Dividing by ∆ and taking the limit ∆ → 0 we get (note X^π(t) = x)
G_t(t, x) + G_x(t, x)x(r + π_t(µ − r)) + 0.5 G_xx(t, x)x²π_t²σ² ≤ 0
for any admissible π. As above, we have equality for π = π*. Therefore, we get the HJB
sup_π { G_t(t, x) + G_x(t, x)x(r + π(µ − r)) + 0.5 G_xx(t, x)x²π²σ² } = 0
with final condition G(T, x) = U(x).
Remarks.
• Since (t, x) was arbitrarily chosen, the HJB holds for all (t, x) ∈ [0, T] × (0, ∞).
• Note that we take the supremum over a number π and not over a process.
5.3.2 Solving the HJB
We consider the HJB for an investor with power utility U(x) = (1/γ)x^γ:
sup_π { G_t(t, x) + G_x(t, x)x(r + π(µ − r)) + 0.5 G_xx(t, x)x²π²σ² } = 0
with final condition G(T, x) = (1/γ)x^γ. Pointwise maximization over π yields the following FOC:
G_x(t, x)x(µ − r) + G_xx(t, x)x²π(t)σ² = 0, (32)
implying
π*(t) = −[(µ − r)/σ²] · [G_x(t, x)/(x G_xx(t, x))].
Since the derivative of (32) w.r.t. π is G_xx(t, x)x²σ², the candidate π* is the global maximum if G_xx < 0.
Substituting π* into the HJB yields a PDE for G:
G_t(t, x) + rxG_x(t, x) − 0.5θ² G_x(t, x)²/G_xx(t, x) = 0
with G(T, x) = (1/γ)x^γ. We try the separation G(t, x) = (1/γ)x^γ f(t)^{1−γ} for some function f with f(T) = 1 and obtain
f′ = K f with K := −(γ/(1−γ))( r + 0.5(1/(1−γ))θ² ).
This is an ODE for f with the solution f(t) = exp(−K(T − t)). Hence, we get the following candidates for the value function and the optimal strategy:
G(t, x) = (1/γ)x^γ exp( −(1 − γ)K(T − t) ), (33)
π*(t) = (1/(1 − γ)) · (µ − r)/σ².
Remark. Our candidate G is indeed a C^{1,2}-function. Since γ < 1, we also get
G_xx(t, x) = (γ − 1)x^{γ−2} f(t)^{1−γ} < 0.
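The separation argument can be cross-checked numerically: the candidate (33) should make the reduced PDE vanish identically. A finite-difference sketch under hypothetical constant coefficients (the numbers below are illustrative, not from the text):

```python
import math

# Finite-difference check that the candidate value function
#   G(t, x) = (1/gamma) x^gamma exp(-(1-gamma) K (T-t)),
#   K = -(gamma/(1-gamma)) (r + 0.5 theta^2/(1-gamma)),
# solves the reduced HJB equation G_t + r x G_x - 0.5 theta^2 G_x^2/G_xx = 0.
r, mu, sigma, gamma, T = 0.03, 0.07, 0.25, 0.5, 2.0
theta = (mu - r) / sigma
K = -(gamma / (1.0 - gamma)) * (r + 0.5 * theta**2 / (1.0 - gamma))

def G(t, x):
    return (1.0 / gamma) * x**gamma * math.exp(-(1.0 - gamma) * K * (T - t))

def hjb_residual(t, x, h=1e-4):
    Gt = (G(t + h, x) - G(t - h, x)) / (2 * h)
    Gx = (G(t, x + h) - G(t, x - h)) / (2 * h)
    Gxx = (G(t, x + h) - 2 * G(t, x) + G(t, x - h)) / h**2
    return Gt + r * x * Gx - 0.5 * theta**2 * Gx**2 / Gxx
```

The residual is zero up to discretization error at every interior (t, x), consistent with K solving the ODE for f.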
5.3.3 Verification
In this section, we show that our heuristic derivation of the HJB has led to the correct result.
Proposition 5.4 (Verification Result)
Assume that the coefficients are constant and that the investor has a power utility function. Then the candidates (33) are the value function and the optimal strategy of the portfolio problem (27), where for γ < 0 only bounded admissible strategies are considered.
Proof. Let π ∈ A. Since G ∈ C^{1,2}, Ito's formula yields
G(T, X^π(T)) = G(t, x) + ∫_t^T G_t(s, X^π_s) ds + ∫_t^T G_x(s, X^π_s) dX^π_s + 0.5 ∫_t^T G_xx(s, X^π_s) d⟨X^π⟩_s
= G(t, x) + ∫_t^T [ G_t(s, X^π_s) + G_x(s, X^π_s)X^π_s(r + π_s(µ − r)) + 0.5 G_xx(s, X^π_s)(X^π_s)²π_s²σ² ] ds + ∫_t^T G_x(s, X^π_s)X^π_s π_s σ dW_s
≤(∗) G(t, x) + ∫_t^T G_x(s, X^π_s)X^π_s π_s σ dW_s,
where (∗) holds because G satisfies the HJB.
The right-hand side is a local martingale in T. For γ > 0 we have G > 0 and thus the right-hand side is even a supermartingale, implying
E_{t,x}[(1/γ)(X^π(T))^γ] ≤ G(t, x), (34)
where we use G(T, x) = (1/γ)x^γ. Besides, since E_{t,x}[∫_t^T G_x(s, X*_s)X*_s π*_s σ dW_s] = 0 (exercise) and since for π*
G_t(s, X*_s) + G_x(s, X*_s)X*_s(r + π*_s(µ − r)) + 0.5 G_xx(s, X*_s)(X*_s)²(π*_s)²σ² = 0,
we obtain
E_{t,x}[(1/γ)(X*(T))^γ] = G(t, x),
i.e. G is the value function and π* is the optimal solution.
Since for a bounded strategy π we get E[∫_t^T G_x(s, X^π_s)X^π_s π_s σ dW_s] = 0, the relation (34) also holds for γ < 0 and bounded admissible strategies. □
Remarks.
• For γ < 0 the boundedness assumption on π can be weakened.
• Note that we have not explicitly used the fact that the market is complete.
5.4 Intermediate Consumption
So far the investor has maximized expected utility from terminal wealth only. We now consider a situation where the investor maximizes
• expected utility from terminal wealth as well as
• expected utility from intermediate consumption.
Assumption. At time t, the investor consumes at a rate c(t) ≥ 0, i.e. over the interval [0, T] he consumes ∫_0^T c(t) dt.
Remark. Hence, in our model consumption can be interpreted as a continuous stream of dividends (see Exercise 40).
To formalize the problem we need to generalize the concept of admissibility:
Definition 5.2 (Consumption Process, Self-financing, Admissible)
(i) A progressively measurable, real-valued process {c_t}_{t∈[0,T]} with c ≥ 0 and ∫_0^T c(t) dt < ∞ P-a.s. is said to be a consumption process.
(ii) Let ϕ be a trading strategy (see Definition 4.1) and c be a consumption process. If
dX(t) = ϕ(t)′dS(t) − c(t) dt
holds, then the pair (ϕ, c) is said to be self-financing.
(iii) A self-financing pair (ϕ, c) is said to be admissible if X(t) ≥ 0 for all t ≥ 0.
Hence, the investor's goal function reads

    sup_{(π,c)∈A} E[ ∫_0^T U_1(t, c(t)) dt + U_2(X^{π,c}(T)) ],               (35)

where U_1 is a utility function for fixed t ∈ [0, T],

    A := { (π, c) : (π, c) admissible and E[ ∫_0^T U_1(t, c(t))^− dt + U_2(X^{π,c}(T))^− ] < ∞ }
and

    dX^{π,c}(t) = X^{π,c}(t)[ (r(t) + π(t)'(µ(t) − r(t)1)) dt + π(t)'σ dW(t) ] − c(t) dt,
    X^{π,c}(0) = x_0.
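The wealth dynamics above can be simulated with a simple Euler-Maruyama scheme. The sketch below is illustrative only: it assumes a single stock, constant coefficients, and a hypothetical proportional consumption rule c(t) = 0.04 · X(t); none of the parameter values come from the text.

```python
import numpy as np

# Hypothetical market and strategy parameters (illustrative, not from the text).
r, mu, sigma = 0.02, 0.07, 0.2
T, n_steps, n_paths = 1.0, 250, 10_000
x0 = 100.0
pi = 0.5                    # constant fraction of wealth invested in the stock
consumption_rate = 0.04     # proportional consumption rule: c(t) = rate * X(t)

rng = np.random.default_rng(0)
dt = T / n_steps
X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    c = consumption_rate * X  # the consumption stream c(t) dt is withdrawn from wealth
    X += X * ((r + pi * (mu - r)) * dt + pi * sigma * dW) - c * dt

print(X.mean())  # close to x0 * exp((r + pi*(mu - r) - rate) * T)
```

With a proportional rule the consumption term simply shifts the drift, which is why the sample mean lands near x0 · e^{(r + π(µ−r) − rate)T}.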
We set V_1 := (U_1')^{−1} and V_2 := (U_2')^{−1} (the inverses of the marginal utilities) and define

    Γ(λ) := E[ ∫_0^T H(t) V_1(t, λH(t)) dt + H(T) V_2(λH(T)) ].
It is straightforward to generalize the result of Section 5.2.
Theorem 5.5 (Optimal Consumption and Optimal Final Wealth)
Assume x_0 > 0 and Γ(λ) < ∞ for all λ > 0. Then the optimal consumption and
the optimal final wealth read

    c*(t) = V_1(t, Λ(x_0) H(t)),
    A* = V_2(Λ(x_0) H(T)),

and there exists a portfolio strategy π* such that (π*, c*) ∈ A, X^{π*,c*}(0) = x_0,
and X^{π*,c*}(T) = A*, P-a.s.
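Theorem 5.5 can be illustrated numerically. The sketch below makes several assumptions beyond the text: a Black-Scholes deflator H(t) = exp(−(r + θ²/2)t − θW(t)) as in the earlier sections, power utilities U_1(t, c) = e^{−αt}c^γ/γ and U_2(x) = e^{−αT}x^γ/γ (so that V_1, V_2 have the closed forms coded below), that Λ(x_0) is the solution λ of Γ(λ) = x_0, and illustrative parameter values throughout. It estimates Γ by Monte Carlo and finds Λ(x_0) by bisection.

```python
import numpy as np

# Illustrative parameters (not from the text): Black-Scholes market, power utility.
r, mu, sigma, T = 0.02, 0.07, 0.2, 1.0
gamma, alpha, x0 = -1.0, 0.1, 100.0
theta = (mu - r) / sigma                              # market price of risk

rng = np.random.default_rng(1)
n_paths, n_steps = 50_000, 50
dt = T / n_steps
t = np.linspace(dt, T, n_steps)
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
H = np.exp(-(r + 0.5 * theta**2) * t - theta * W)     # deflator paths H(t)

def V1(t, y):  # inverse marginal utility of U1(t, c) = exp(-alpha t) c^gamma / gamma
    return (np.exp(alpha * t) * y) ** (1.0 / (gamma - 1.0))

def V2(y):     # inverse marginal utility of U2(x) = exp(-alpha T) x^gamma / gamma
    return (np.exp(alpha * T) * y) ** (1.0 / (gamma - 1.0))

def Gamma(lam):
    integral = (H * V1(t, lam * H)).sum(axis=1) * dt  # ~ int_0^T H V1(t, lam H) dt
    return (integral + H[:, -1] * V2(lam * H[:, -1])).mean()

# Gamma is strictly decreasing in lambda, so Lambda(x0) is found by bisection
# (geometric, since lambda may range over many orders of magnitude).
lo, hi = 1e-8, 1e8
for _ in range(100):
    mid = np.sqrt(lo * hi)
    if Gamma(mid) > x0:
        lo = mid
    else:
        hi = mid
Lambda = np.sqrt(lo * hi)
print(Gamma(Lambda))  # reproduces the initial wealth x0
```

The bisection works because V_1 and V_2 are strictly decreasing in their second argument, so Γ inherits strict monotonicity in λ.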
We will present the solution by using the stochastic control approach. The
value function now reads

    G(t, x) := sup_{(π,c)∈A} E_{t,x}[ ∫_t^T U_1(s, c(s)) ds + U_2(X^{π,c}(T)) ].
As in Section 5.3 we make the following
Assumption. The coeﬃcients r, µ, σ are constant and I = m = 1.
Assume that there exists an optimal strategy (π*, c*). We now carry out the
procedure of Section 5.3:
(i) For given (t, x) ∈ [0, T] × (0, ∞), we consider the following strategies over [t, T]:
• Strategy I: Use the optimal strategy (π*, c*).
• Strategy II: For an arbitrary (π, c) use the strategy

    (π̂, ĉ) = (π, c) on [t, t + ∆]   and   (π̂, ĉ) = (π*, c*) on (t + ∆, T].

(ii) Compute E_{t,x}[ ∫_t^T U_1(s, c*(s)) ds + U_2(X^{π*,c*}(T)) ] and
E_{t,x}[ ∫_t^T U_1(s, ĉ(s)) ds + U_2(X^{π̂,ĉ}(T)) ].
(iii) Using the fact that (π*, c*) is at least as good as (π̂, ĉ) and taking the limit
∆ → 0 yields the HJB.
We set X* := X^{π*,c*} and X̂ := X^{π̂,ĉ}.
ad (ii).
Strategy I. By assumption, E_{t,x}[ ∫_t^T U_1(s, c*(s)) ds + U_2(X*(T)) ] = G(t, x).
Strategy II.

    E_{t,x}[ ∫_t^T U_1(s, ĉ(s)) ds + U_2(X̂(T)) ]
      = E_{t,x}[ E_{t+∆, X̂(t+∆)}[ ∫_t^T U_1(s, ĉ(s)) ds + U_2(X̂(T)) ] ]
      = E_{t,x}[ ∫_t^{t+∆} U_1(s, ĉ(s)) ds + E_{t+∆, X̂(t+∆)}[ ∫_{t+∆}^T U_1(s, ĉ(s)) ds + U_2(X̂(T)) ] ]
      = E_{t,x}[ ∫_t^{t+∆} U_1(s, c(s)) ds + E_{t+∆, X̂(t+∆)}[ ∫_{t+∆}^T U_1(s, c*(s)) ds + U_2(X*(T)) ] ]
      = E_{t,x}[ ∫_t^{t+∆} U_1(s, c(s)) ds + G(t + ∆, X^{π,c}(t + ∆)) ]

for any admissible (π, c).
ad (iii). By definition of the strategies,

    E_{t,x}[ ∫_t^{t+∆} U_1(s, c(s)) ds + G(t + ∆, X^{π,c}(t + ∆)) ] ≤ G(t, x),      (36)

where we have equality if (π, c) = (π*, c*). To shorten notation, we set X := X^{π,c}.
Given that G ∈ C^{1,2}, Ito's formula yields

    G(t + ∆, X_{t+∆})
      = G(t, x) + ∫_t^{t+∆} G_t(s, X_s) ds + ∫_t^{t+∆} G_x(s, X_s) dX_s
        + 0.5 ∫_t^{t+∆} G_xx(s, X_s) d<X>_s
      = G(t, x) + ∫_t^{t+∆} [ G_t(s, X_s) + G_x(s, X_s) X_s (r + π_s(µ − r)) − G_x(s, X_s) c_s
        + 0.5 G_xx(s, X_s) X_s^2 π_s^2 σ^2 ] ds
        + ∫_t^{t+∆} G_x(s, X_s) X_s π_s σ dW_s.
Given E_{t,x}[ ∫_t^{t+∆} G_x(s, X_s) X_s π_s σ dW_s ] = 0, we can rewrite (36):

    E_{t,x}[ ∫_t^{t+∆} U_1(s, c_s) ds + ∫_t^{t+∆} ( G_t(s, X_s) + G_x(s, X_s) X_s (r + π_s(µ − r))
        − G_x(s, X_s) c_s + 0.5 G_xx(s, X_s) X_s^2 π_s^2 σ^2 ) ds ] ≤ 0.
Dividing by ∆ and taking the limit ∆ → 0, we get (note that X(t) = x)

    U_1(t, c_t) + G_t(t, x) + G_x(t, x) x (r + π_t(µ − r)) − G_x(t, x) c_t
        + 0.5 G_xx(t, x) x^2 π_t^2 σ^2 ≤ 0

for any admissible (π, c). As above, we have equality for (π, c) = (π*, c*). Therefore,
we get the HJB

    sup_{π,c} { U_1(t, c) + G_t(t, x) + G_x(t, x) x (r + π(µ − r)) − G_x(t, x) c
        + 0.5 G_xx(t, x) x^2 π^2 σ^2 } = 0                                    (37)
with final condition G(T, x) = U_2(x).
Remarks.
• Since (t, x) was arbitrarily chosen, the HJB holds for all (t, x) ∈ [0, T] × (0, ∞).
• Note that in (37) the supremum is taken over real numbers (π, c) and not over processes.
Example “Power Utility”. We consider the following power utility functions

    U_1(t, c) = e^{−αt} (1/γ) c^γ,
    U_2(x) = e^{−αT} (1/γ) x^γ,
where α ≥ 0 describes the investor's time preferences. The HJB reads

    sup_{π,c} { e^{−αt} (1/γ) c^γ + G_t(t, x) + G_x(t, x) x (r + π(µ − r)) − G_x(t, x) c
        + 0.5 G_xx(t, x) x^2 π^2 σ^2 } = 0

with final condition G(T, x) = e^{−αT} (1/γ) x^γ. The FOCs read

    G_x(t, x) x (µ − r) + G_xx(t, x) x^2 π(t) σ^2 = 0,
    e^{−αt} c^{γ−1} − G_x(t, x) = 0,

implying

    π*(t) = − [(µ − r)/σ^2] · G_x(t, x) / (x G_xx(t, x)),
    c*(t) = e^{αt/(γ−1)} G_x(t, x)^{1/(γ−1)}.
Substituting (π*, c*) into the HJB yields

    ((1−γ)/γ) e^{αt/(γ−1)} G_x(t, x)^{γ/(γ−1)} + G_t(t, x) + r x G_x(t, x)
        − 0.5 θ^2 G_x(t, x)^2 / G_xx(t, x) = 0.
We try the separation G(t, x) = (1/γ) x^γ f(t)^β for some function f with f(T) = 1 and
some constant β. Simplifying yields

    ((1−γ)/γ) e^{−αt/(1−γ)} f^{(β+γ−1)/(γ−1)} + (β/γ) f' + ( r + 0.5 θ^2/(1−γ) ) f = 0.
Choosing β = 1 − γ leads to

    ((1−γ)/γ) e^{−αt/(1−γ)} + ((1−γ)/γ) f' + ( r + 0.5 θ^2/(1−γ) ) f = 0,
which can be rewritten as

    f' = K f + g

with K := −(γ/(1−γ)) ( r + 0.5 θ^2/(1−γ) ) and g(t) := −e^{αt/(γ−1)}. This is an
inhomogeneous ODE for f. Variation of constants yields

    f(t) = f_H(t) [ f(T) − ∫_t^T ( g(s)/f_H(s) ) ds ],
where f_H(t) = e^{−K(T−t)} is the solution of the corresponding homogeneous ODE
(see Section 5.3). Hence,

    f(t) = e^{−K(T−t)} [ 1 + ∫_t^T e^{−αs/(1−γ) + K(T−s)} ds ]      (the integral is ≥ 0)
         = e^{−K(T−t)} [ 1 − ( e^{KT}(1−γ)/(α + K(1−γ)) ) ( e^{−(α/(1−γ)+K)T} − e^{−(α/(1−γ)+K)t} ) ].
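As a sanity check, the closed form for f can be compared against a direct numerical integration of the terminal-value problem f' = Kf + g, f(T) = 1. The sketch below uses classical RK4 integrated backward from T; the parameter values (γ = −1, etc.) are illustrative only.

```python
import numpy as np

# Illustrative parameters (gam < 1, gam != 0); theta is the market price of risk.
r, theta, alpha, gam, T = 0.02, 0.25, 0.1, -1.0, 1.0
K = -(gam / (1 - gam)) * (r + 0.5 * theta**2 / (1 - gam))

def g(t):  # inhomogeneity of the ODE f' = K f + g
    return -np.exp(-alpha * t / (1 - gam))

def f_closed(t):
    # f(t) = exp(-K(T-t)) * (1 + int_t^T exp(-alpha s/(1-gam) + K(T-s)) ds),
    # with the integral evaluated by the trapezoidal rule.
    s = np.linspace(t, T, 4001)
    vals = np.exp(-alpha * s / (1 - gam) + K * (T - s))
    integral = ((vals[:-1] + vals[1:]) / 2).sum() * (s[1] - s[0])
    return np.exp(-K * (T - t)) * (1 + integral)

# Integrate f' = K f + g backward from f(T) = 1 with classical RK4.
n = 2000
h = -T / n
t_cur, f = T, 1.0
for _ in range(n):
    k1 = K * f + g(t_cur)
    k2 = K * (f + 0.5 * h * k1) + g(t_cur + 0.5 * h)
    k3 = K * (f + 0.5 * h * k2) + g(t_cur + 0.5 * h)
    k4 = K * (f + h * k3) + g(t_cur + h)
    f += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t_cur += h

print(f, f_closed(0.0))  # the two values agree closely
```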
In particular, f > 0. Hence, we get the following candidates for the value function
and the optimal strategy:

    G(t, x) = (1/γ) x^γ f(t)^{1−γ},                                           (38)
    π*(t) = (1/(1−γ)) (µ − r)/σ^2,
    c*(t) = ( X*(t)/f(t) ) e^{−αt/(1−γ)}.
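Before the formal verification, the candidate (38) can also be checked numerically: with f from its closed form, the PDE obtained after substituting (π*, c*) should have a residual near zero at any interior point. The sketch below uses central finite differences and illustrative parameters (γ = −1).

```python
import numpy as np

# Illustrative parameters; gam = -1 as an example of gamma < 0.
r, theta, alpha, gam, T = 0.02, 0.25, 0.1, -1.0, 1.0
K = -(gam / (1 - gam)) * (r + 0.5 * theta**2 / (1 - gam))

def f(t):
    # Closed form for f via trapezoidal quadrature of its integral term.
    s = np.linspace(t, T, 4001)
    vals = np.exp(-alpha * s / (1 - gam) + K * (T - s))
    integral = ((vals[:-1] + vals[1:]) / 2).sum() * (s[1] - s[0])
    return np.exp(-K * (T - t)) * (1 + integral)

def G(t, x):  # candidate value function (38)
    return x**gam * f(t) ** (1 - gam) / gam

# Central finite differences at an interior point (t0, x0).
t0, x0, h = 0.4, 2.0, 1e-4
Gt = (G(t0 + h, x0) - G(t0 - h, x0)) / (2 * h)
Gx = (G(t0, x0 + h) - G(t0, x0 - h)) / (2 * h)
Gxx = (G(t0, x0 + h) - 2 * G(t0, x0) + G(t0, x0 - h)) / h**2

# Residual of the HJB after substituting the optimal pi* and c*:
residual = ((1 - gam) / gam * np.exp(alpha * t0 / (gam - 1)) * Gx ** (gam / (gam - 1))
            + Gt + r * x0 * Gx - 0.5 * theta**2 * Gx**2 / Gxx)
print(residual)  # close to zero
```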
Remarks.
• Our candidate G is indeed a C^{1,2} function. Since γ < 1, we also get
G_xx(t, x) = (γ − 1) x^{γ−2} f(t)^{1−γ} < 0.
• Clearly, it needs to be shown that these candidates are indeed the value function
and the optimal strategy! This can be done analogously to Proposition 5.4. For
the convenience of the reader, the proof of this verification result is also presented.
Proposition 5.6 (Veriﬁcation Result)
Assume that the coeﬃcients are constant and that the investor has a power utility
function. Then the candidates (38) are the value function and the optimal strategy
of the portfolio problem (35), where for γ < 0 only bounded portfolio strategies π
are considered.
Proof. Let (π, c) ∈ A. Since G ∈ C^{1,2}, Ito's formula yields

    G(T, X^{π,c}(T)) + ∫_t^T e^{−αs} (1/γ) c_s^γ ds
      = G(t, x) + ∫_t^T G_t(s, X^{π,c}_s) ds + ∫_t^T G_x(s, X^{π,c}_s) dX^{π,c}_s
        + 0.5 ∫_t^T G_xx(s, X^{π,c}_s) d<X^{π,c}>_s + ∫_t^T (1/γ) e^{−αs} c_s^γ ds
      = G(t, x) + ∫_t^T [ G_t(s, X^{π,c}_s) + G_x(s, X^{π,c}_s) X^{π,c}_s (r + π_s(µ − r)) − G_x(s, X^{π,c}_s) c_s
        + 0.5 G_xx(s, X^{π,c}_s) (X^{π,c}_s)^2 π_s^2 σ^2 + (1/γ) e^{−αs} c_s^γ ] ds
        + ∫_t^T G_x(s, X^{π,c}_s) X^{π,c}_s π_s σ dW_s.                        (39)
It is straightforward to show that E_{t,x}[ ∫_t^T G_x(s, X*_s) X*_s π*_s σ dW_s ] = 0 (exercise).
Besides, G satisfies the HJB, i.e. for (π, c) = (π*, c*)

    G_t(s, X*_s) + G_x(s, X*_s) X*_s (r + π*_s(µ − r)) − G_x(s, X*_s) c*_s
        + 0.5 G_xx(s, X*_s) (X*_s)^2 (π*_s)^2 σ^2 + (1/γ) e^{−αs} (c*_s)^γ = 0,
we obtain

    E_{t,x}[ G(T, X*(T)) ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) (c*_s)^γ ds ] = G(t, x),   (40)

implying

    E_{t,x}[ e^{−αT} (1/γ) X*(T)^γ ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) (c*_s)^γ ds ] = G(t, x)

because G(T, x) = e^{−αT} (1/γ) x^γ.
Finally, we need to show that for all (π, c) ∈ A

    E_{t,x}[ e^{−αT} (1/γ) X^{π,c}(T)^γ ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x).
Since G satisfies the HJB, for all (π, c) ∈ A we get

    G_t(s, X^{π,c}_s) + G_x(s, X^{π,c}_s) X^{π,c}_s (r + π_s(µ − r)) − G_x(s, X^{π,c}_s) c_s
        + 0.5 G_xx(s, X^{π,c}_s) (X^{π,c}_s)^2 π_s^2 σ^2 + (1/γ) e^{−αs} c_s^γ ≤ 0.
Therefore, (39) yields

    G(T, X^{π,c}(T)) + ∫_t^T e^{−αs} (1/γ) c_s^γ ds
        ≤ G(t, x) + ∫_t^T G_x(s, X^{π,c}_s) X^{π,c}_s π_s σ dW_s               (41)
and now two cases are distinguished.
1st case: γ > 0. The right-hand side of (41) is a local martingale in T. Since
for γ > 0 the left-hand side is greater than zero, the right-hand side is even a
supermartingale, implying

    E_{t,x}[ G(T, X^{π,c}(T)) ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x)   (42)

and thus

    E_{t,x}[ e^{−αT} (1/γ) X^{π,c}(T)^γ ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x).
2nd case: γ < 0. Since for a bounded strategy π we obtain

    E[ ∫_t^T G_x(s, X^π_s) X^π_s π_s σ dW_s ] = 0,

the result of the 1st case also holds for γ < 0 and bounded admissible strategies. □
Remark. If we set c ≡ 0, the previous proof is identical to the proof of Proposition 5.4.
5.5 Inﬁnite Horizon
If the optimal portfolio decision over the lifetime of an investor is to be determined,
then the investment horizon lies very far in the future. For this reason, it is
reasonable to approximate this situation by a portfolio problem with T = ∞.
Hence, the investor's goal function reads

    sup_{(π,c)∈A} E[ ∫_0^∞ U_1(t, c(t)) dt ],                                 (43)

where

    A := { (π, c) : (π, c) admissible and E[ ∫_0^∞ U_1(t, c(t))^− dt ] < ∞ }

and

    dX^{π,c}(t) = X^{π,c}(t)[ (r(t) + π(t)'(µ(t) − r(t)1)) dt + π(t)'σ dW(t) ] − c(t) dt,
    X^{π,c}(0) = x_0.
As in the previous section we make the following
Assumption 1. The coeﬃcients r, µ, σ are constant and I = m = 1.
Additionally, we make the
Assumption 2. U_1(t, c(t)) = e^{−αt} U(c(t)), where α ≥ 0 and U is a utility function.
The value function is given by

    G(t, x) := sup_{(π,c)∈A} E_{t,x}[ ∫_t^∞ e^{−αs} U(c(s)) ds ].
Lemma 5.7
The value function has the representation

    G(t, x) = e^{−αt} H(x),

where H is a function depending on wealth only.
Proof. We obtain

    G(t, x) e^{αt} = sup_{(π,c)∈A} E_{t,x}[ ∫_t^∞ e^{−α(s−t)} U(c(s)) ds ]
                   = sup_{(π,c)∈A} E_{0,x}[ ∫_0^∞ e^{−αs} U(c(s)) ds ] =: H(x).   □
Hence, the HJB (37) can be rewritten as follows:

    sup_{π,c} { e^{−αt} U(c) + G_t(t, x) + G_x(t, x) x (r + π(µ − r)) − G_x(t, x) c
        + 0.5 G_xx(t, x) x^2 π^2 σ^2 } = 0,

which, after substituting G(t, x) = e^{−αt} H(x) and dividing by e^{−αt} > 0, becomes

    sup_{π,c} { U(c) − αH(x) + H_x(x) x (r + π(µ − r)) − H_x(x) c + 0.5 H_xx(x) x^2 π^2 σ^2 } = 0.
Example “Power Utility”. Let U(c) = (1/γ) c^γ. Then the FOCs read

    H_x(x) x (µ − r) + H_xx(x) x^2 π(t) σ^2 = 0,
    c^{γ−1} − H_x(x) = 0,
implying

    π*(t) = − [(µ − r)/σ^2] · H_x(x) / (x H_xx(x)),
    c*(t) = H_x(x)^{1/(γ−1)}.
Substituting (π*, c*) into the HJB leads to

    ((1−γ)/γ) H_x(x)^{γ/(γ−1)} − αH(x) + r x H_x(x) − 0.5 θ^2 H_x(x)^2 / H_xx(x) = 0.
We try the ansatz H(x) = (1/γ) x^γ k^β for some constants k and β. Simplifying yields

    ((1−γ)/γ) k^{β/(γ−1)} − α/γ + r + 0.5 θ^2/(1−γ) = 0.
Choosing β = γ − 1 leads to

    k = (γ/(1−γ)) ( α/γ − r − 0.5 θ^2/(1−γ) ).
Our ansatz is only reasonable if k ≥ 0. For γ < 0 this is valid in any case. For
γ > 0 we need to assume that

    α ≥ γ ( r + 0.5 θ^2/(1−γ) ).                                              (44)
Hence, we get the following candidates for the value function and the optimal
strategy:

    G(t, x) = (1/γ) e^{−αt} x^γ k^{γ−1},                                      (45)
    π*(t) = (1/(1−γ)) (µ − r)/σ^2,
    c*(t) = k X*(t).
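The candidates (45) hinge on the constant k. The sketch below computes k for illustrative parameter values, checks condition (44)/(46), and verifies that H(x) = (1/γ)x^γ k^{γ−1} makes the stationary HJB expression vanish at (π*, c*) — which holds exactly, by the construction of k.

```python
# Illustrative parameters (not from the text); condition (46) is checked explicitly.
r, mu, sigma, alpha, gam = 0.02, 0.07, 0.2, 0.1, 0.5
theta = (mu - r) / sigma
k = gam / (1 - gam) * (alpha / gam - r - 0.5 * theta**2 / (1 - gam))
assert alpha > gam * (r + 0.5 * theta**2 / (1 - gam)), "condition (46) violated"

x = 3.0
pi_star = (mu - r) / ((1 - gam) * sigma**2)
c_star = k * x                                  # consume the constant fraction k of wealth

H = x**gam * k ** (gam - 1) / gam               # candidate H(x) = x^gam k^(gam-1) / gam
Hx = x ** (gam - 1) * k ** (gam - 1)
Hxx = (gam - 1) * x ** (gam - 2) * k ** (gam - 1)

# Stationary HJB expression; it vanishes at (pi*, c*) by construction of k.
hjb = (c_star**gam / gam - alpha * H + Hx * x * (r + pi_star * (mu - r))
       - Hx * c_star + 0.5 * Hxx * x**2 * pi_star**2 * sigma**2)
print(k, hjb)  # hjb is zero up to floating-point error
```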
Remark. In contrast to the previous section, the investor now consumes a
time-independent, constant fraction of wealth.
Proposition 5.8 (Veriﬁcation Result)
Assume that the coeﬃcients are constant and that the investor has a power utility
function where for γ < 0 only bounded portfolio strategies π are considered. If (44)
holds as a strict inequality, i.e.

    α > γ ( r + 0.5 θ^2/(1−γ) ),                                              (46)

then the candidates (45) are the value function and the optimal strategy of the
portfolio problem (43).
Proof. Let (π, c) ∈ A. Analogously to the proof of Proposition 5.6, we get the
relations (42) and (40) for the candidate G of this subsection given by (45):

    E_{t,x}[ G(T, X^{π,c}(T)) ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x),
    E_{t,x}[ G(T, X*(T)) ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) (c*_s)^γ ds ] = G(t, x).
Firstly note that, by the monotone convergence theorem,^31 for any consumption
process c

    lim_{T→∞} E_{t,x}[ ∫_t^T e^{−αs} c_s^γ ds ] = E_{t,x}[ ∫_t^∞ e^{−αs} c_s^γ ds ].   (47)
Secondly, since G(T, X*(T)) = (1/γ) e^{−αT} X*(T)^γ k^{γ−1}, we get

    E_{t,x}[ G(T, X*(T)) ]
      = (1/γ) k^{γ−1} e^{−αT} E_{t,x}[ X*(T)^γ ]
      = (1/γ) k^{γ−1} e^{−αT} x^γ E_{t,x}[ e^{ γ{ r + θ^2/(1−γ) − 0.5 θ^2/(1−γ)^2 − k }T + (γ/(1−γ)) θ W(T−t) } ]
      = (1/γ) k^{γ−1} x^γ e^{ −αT + γ{ r + θ^2/(1−γ) − 0.5 θ^2/(1−γ)^2 − k + 0.5 γ θ^2/(1−γ)^2 }T }
      = (1/γ) k^{γ−1} x^γ e^{−kT} −→ 0 as T → ∞,                              (48)

since (46) ensures that k > 0. Therefore,

    E_{t,x}[ ∫_t^∞ e^{−αs} (1/γ) (c*_s)^γ ds ] = G(t, x).
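The key step in (48) is that the exponent collapses to −k once the formula for k is inserted. This identity can be checked numerically; the parameter values below are illustrative.

```python
# Check: -alpha + gam*( r + th^2/(1-gam) - 0.5 th^2/(1-gam)^2 - k
#                       + 0.5 gam th^2/(1-gam)^2 ) equals -k.
r, mu, sigma, alpha, gam = 0.02, 0.07, 0.2, 0.1, 0.5
th = (mu - r) / sigma
k = gam / (1 - gam) * (alpha / gam - r - 0.5 * th**2 / (1 - gam))

exponent = -alpha + gam * (r + th**2 / (1 - gam) - 0.5 * th**2 / (1 - gam) ** 2
                           - k + 0.5 * gam * th**2 / (1 - gam) ** 2)
print(exponent, -k)  # the two coincide
```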
To prove

    E_{t,x}[ ∫_t^∞ e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x)                         (49)

two cases are distinguished.
1st case: γ > 0. Since in this case G(T, X^{π,c}(T)) ≥ 0, the relation (49) follows
immediately from (42).
2nd case: γ < 0. Since in this case G < 0, the proof is more involved. From (42)
it follows that

    sup_{π,c} { E_{t,x}[ G(T, X^{π,c}(T)) ] } + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x)
and we obtain the following estimate:

    0 ≥ sup_{π,c} { E_{t,x}[ G(T, X^{π,c}(T)) ] }
      = sup_{π,c} { E_{t,x}[ (1/γ) e^{−αT} X^{π,c}(T)^γ ] } k^{γ−1}
      ≥ sup_{π,c} { E_{t,x}[ (1/γ) e^{−αT} X^{π,c}(T)^γ + ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] } k^{γ−1}   (∗)
      ≥ sup_{π} { E_{t,x}[ (1/γ) e^{−αT} X^{π,0}(T)^γ ] } k^{γ−1}
      = sup_{π} { E_{t,x}[ (1/γ) X^{π,0}(T)^γ ] } e^{−αT} k^{γ−1}
      = (1/γ) x^γ e^{γ(r + 0.5 θ^2/(1−γ))(T−t)} e^{−αT} k^{γ−1}               (+)
      −→ 0 as T → ∞ by (46),

where (∗) is valid because ∫_t^T e^{−αs} (1/γ) c_s^γ ds ≤ 0 and (+) follows from
Proposition 5.4. This establishes (49). □
Remarks.
• The relation

    lim_{T→∞} E_{t,x}[ G(T, X^{π,c}(T)) ] = 0

is the so-called transversality condition.

^31 We emphasize that 1/γ is deliberately omitted because without this factor the argument of
the expected value is positive.
• For γ > 0 the property (48) can be proved by using a different argument:

    0 ≤ E_{t,x}[ G(T, X*(T)) ] = E_{t,x}[ (1/γ) X*(T)^γ ] e^{−αT} k^{γ−1}
      ≤ E_{t,x}[ (1/γ) X^{π*,0}(T)^γ ] e^{−αT} k^{γ−1}
      ≤ sup_{π} E_{t,x}[ (1/γ) X^{π,0}(T)^γ ] e^{−αT} k^{γ−1}
      = (1/γ) x^γ e^{γ(r + 0.5 θ^2/(1−γ))(T−t)} e^{−αT} k^{γ−1}               (+)
      −→ 0 as T → ∞ by (46),

where (+) follows from Proposition 5.4. Applying this argument one can avoid
some tedious calculations.
• As in Proposition 5.4, the boundedness assumption for γ < 0 can be weakened.
References
Bingham, N. H.; R. Kiesel (2004): Risk-neutral valuation, 2nd ed., Springer, Heidelberg.
Björk, T. (2004): Arbitrage theory in continuous time, 2nd ed., Oxford University Press, Oxford.
Dax, A. (1997): An elementary proof of Farkas' lemma, SIAM Review 39, 503-507.
Duffie, D. (2001): Dynamic asset pricing theory, 3rd ed., Princeton University Press, Princeton.
Karatzas, I.; S. E. Shreve (1991): Brownian motion and stochastic calculus, 2nd ed., Springer, New York.
Karatzas, I.; S. E. Shreve (1999): Methods of mathematical finance, Springer, New York.
Korn, R. (1997): Optimal portfolios, World Scientific, Singapore.
Korn, R.; E. Korn (1999): Optionsbewertung und Portfolio-Optimierung, Vieweg, Braunschweig/Wiesbaden.
Korn, R.; E. Korn (2001): Option pricing and portfolio optimization - Modern methods of financial mathematics, AMS, Providence, Rhode Island.
Mangasarian, O. L. (1969): Nonlinear programming, New York.
Merton, R. C. (1990): Continuous-time finance, Basil Blackwell, Cambridge MA.
Protter, P. (2005): Stochastic integration and differential equations, 2nd ed., Springer, New York.
I I Sj = i=1 πi · Xij = i=1 P (ωi ) π(ωi ) π Xj = EP [H · Xj ] Xj (ωi ) = EP P (ωi ) P 2 Remarks. • The deﬂator H ist a random variable with realizations π(ωi )/P (ωi ).2 DISCRETETIME STATE PRICING WITH FINITE STATE SPACE 10 Proof. ∂u/∂c0 where u = u(c0 . • On average. (roughly speaking) we have H= ∂u/∂c1 . • Setting u0 := ∂u/∂c0 and u1 := ∂u/∂c1 leads to r= 1 u0 −1= − 1. time preference describes the preference for present consumption over future consumption. (ii) In an arbitragefree market. i. . one needs to discount with the riskless rate r because I EP [H] = i=1 P (ωi ) · π(ωi ) = P (ωi ) I π(ωi ) = i=1 1 . we get I πi = i=1 1 1+r I ⇒ i=1 πi · (1 + r) = 1. 3 Intuitively.e. EP [H] EP (u1 ) which gives the following result: (i) u0 > E(u1 ) (”‘time preference”’):3 r > 0. . c1 ) is the utility function of the representative investor. I. . 1+r Interpretation of a Deﬂator: • In an equilibrium model one can show that the deﬂator equals the marginal rate of substitution between consumption at time t = 0 and t = 1 of a representative investor.4 Riskneutral Measure r < 0. . (ii) u0 < E(u1 ): 2. . Two important observations: (i) If an investor buys all ADSs. i = 1. i.e.1. it holds that πi > 0. • The deﬂator H is the appropriate discount factor for payoﬀs at time t = 1. he gets one Euro at time t = 1 for sure.
• The notion “riskneutral measure” comes from the fact that under this measure prices are discounted expected values where one needs to discount using the riskless interest rate. Proof.5 (Riskneutral Pricing) In an arbitragefree market.t.e. there exists a probability measure Q deﬁned via Q(ωi ) = qi := πi · (1 + r) This measure is said to be the riskneutral measure or equivalent martingale measure. P and vice versa. I I 2 Xj .t.r. Proof. Theorem 2.s. R R 2 Remarks. This density is uniquely determined (P a. Q. P . i.). Using the riskneutral measure.7 (Equivalent Probability Measures) A Probability Measure Q is said to be continuous w. then P and Q are said to be equivalent. another probability measure P if for any event A ⊂ Ω P (A) = 0 =⇒ Q(A) = 0. there is a third representation of an asset price: Proposition 2. both measures possess the same null sets. riskneutral probabilities are compounded ArrowDebreu securities. • Warning: The riskneutral measure Q is diﬀerent from the physical measure P .6 (RadonNikodym) If Q is continuous w.e.r. follows from (i) and (ii). asset prices possess the following representation Sj = E Q where R := 1 + r. To motivate the notion “equivalent martingale measure”. R Sj = i=1 πi · Xij = i=1 πi · R · Xij = R I qi · i=1 Xij Xj = EQ .t. then Q possesses a density w. .4 (Riskneutral Measure) In an arbitragefree market.2 DISCRETETIME STATE PRICING WITH FINITE STATE SPACE 11 Proposition 2. let us recall some basic deﬁnitions: Deﬁnition 2.r. • Roughly speaking. If Q is continuous w. there exists a positive random variable D = dQ such that dP EQ [Y ] = EP [D · Y ] for any random variable Y .r. i.t.
t.7 (Properties of a Riskneutral Measure) (i) The discounted price processes {Sj . (i) follows from Proposition 2.7 is the reason why Q is said to be an equivalent martingale measure.3 and 2. 2 Remarks. i = 1. Xj /R} are martingales under a riskneutral measure Q.e.5. where it is assumed that EQ [Y (1)] < ∞.1 forms a stochastic process. R i.8 (Stochastic Process in a Singleperiod Model) Consider a singleperiod model with time points t = 0. . Q (short: Qmartingale) if EQ [Y (1)] = Y (0). one can show dP/dQ = 1/(H · R).2 DISCRETETIME STATE PRICING WITH FINITE STATE SPACE 12 Deﬁnition 2.8 (No Arbitrage) Consider a ﬁnancial market with price vector S and payoﬀ matrix X.5 Summary Theorem 2. Uncertainty at time t = 1 is modeled by some probability measure Q. 2. Deﬁnition 2. . H = (dQ/dP )/R. we get EQ Therefore. . such that I Sj = i=1 πi · Xij . The ﬁnite sequence {Y (t)}t=0. I. Proof. (ii) There exist strictly positive ArrowDebreu prices πi . then the following statements are equivalent: (i) The ﬁnancial market contains no arbitrage opportunities (assumption (e)). we obtain EQ [Xj ] = EP [ H · R ·Xj ]. • From Propositions 2.r. We can now characterize riskneutral measures as follows: Proposition 2. • Proposition 2. =dQ/dP Xj = Sj = EP [H · Xj ]. (ii) Since ADPs are strictly positive. Analogously. A realvalued stochastic process is said to be a martingale w.5. . we have Q(ωi ) > 0 and P (ωi ) > 0 which gives the desired result. (ii) The physical measure P and a riskneutral measure Q are equivalent. 1.1. If the assumptions (a)(d) are satisﬁed.9 (Martingale in a Singleperiod Model) Consider a singleperiod model with ﬁnite state space. .
(iii) There exists a state-dependent discount factor H, the so-called deflator, such that for all j = 0, ..., J

Sj = EP[H · Xj].

(iv) There exists an equivalent martingale measure Q such that the discounted price processes {Sj, Xj/R} are martingales under Q, i.e. for all j = 0, ..., J

Sj = EQ[Xj/R].

Theorem 2.9 (Completeness) Consider a financial market with price vector S and payoff matrix X. If the assumptions (a)-(e) are satisfied, then the following statements are equivalent:
(i) The financial market is complete.
(ii) There exist ADPs which are uniquely determined.
(iii) There exists only one deflator H.
(iv) There exists only one equivalent martingale measure Q.

Remarks.
• In a complete market, each additional asset can be replicated. In an arbitrage-free and complete market, the price of any additional asset is given by the price of its replicating portfolio.
• The value of this portfolio can be calculated by applying the following theorem.

Theorem 2.10 (Pricing of Contingent Claims) Consider an arbitrage-free and complete market with price vector S and payoff matrix X. Let C ∈ IR^I be the payoff of a contingent claim. Its today's price SC can be calculated by using one of the following formulas:
(i) SC = C'π = Σ_{i=1}^I πi · C(ωi)   (pricing with ADSs),
(ii) SC = EP[H · C]   (pricing with the deflator),
(iii) SC = EQ[C/R]   (pricing with the equivalent martingale measure).

Proof. The no-arbitrage requirement together with the definition of an ADP gives formula (i). Formulas (ii) and (iii) follow from (i) using H = π/P and Q(ωi) = πi · R.  □
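The three pricing formulas of Theorem 2.10 can be checked against each other numerically. In the following sketch the Arrow-Debreu prices, physical probabilities and claim payoff are made-up illustration values:

```python
pi = [0.30, 0.45, 0.20]                 # hypothetical Arrow-Debreu prices
P  = [0.25, 0.50, 0.25]                 # hypothetical physical probabilities
R  = 1 / sum(pi)                        # gross riskless rate implied by pi
q  = [p * R for p in pi]                # risk-neutral probabilities
H  = [pi[i] / P[i] for i in range(3)]   # deflator H(omega_i) = pi_i / P(omega_i)

C = [25.0, 5.0, 0.0]                    # payoff of a contingent claim

s_ads = sum(pi[i] * C[i] for i in range(3))        # (i)   pricing with ADSs
s_def = sum(P[i] * H[i] * C[i] for i in range(3))  # (ii)  pricing with the deflator
s_emm = sum(q[i] * C[i] for i in range(3)) / R     # (iii) pricing with the EMM
```

All three numbers agree, since (ii) and (iii) are just reweighted versions of (i).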
2.2 Multiperiod Model

In the previous section we have modeled only two points in time:

t = 0:  The investor chooses the trading strategy.
t = T:  The investor liquidates his portfolio and gets the payoffs.

This has the following consequences:
• We have disregarded the information flow between 0 and T.
• Every trading strategy was a static strategy ("buy and hold"), i.e. intermediate portfolio adjustments were not allowed.

In this section
• we will model the information flow between 0 and T  → filtration,
• and the investor can adjust his portfolio according to the arrival of new information about prices  → dynamic (self-financing) trading strategy.

2.2.1 Modeling Information Flow

Motivation. Consider the dynamics of a stock in a two-period binomial model:

t = 0        t = 0.5        t = 1
                             121
             110 <
100 <                        99
             90 <
                             81

(100 moves to 110 or 90 at t = 0.5; 110 moves to 121 or 99; 90 moves to 99 or 81.)

• Due to the additional trading date at time t = 0.5, the investor gets more information about the stock price at time t = 1.
• At t = 0 it is possible that the stock price at time t = 1 equals 81, 99 or 121.
• If, however, the investor already knows that at time t = 0.5 the stock price equals 110 (90), then at time t = 1 the stock price can only take the values 121 or 99 (99 or 81).
• It is one of the goals of this section to model this additional piece of information!

Reference Example. To demonstrate how one can model the information flow, we consider the following example:
t = 0                  t = 1            t = 2
                                        ω1
                       {ω1, ω2} <
{ω1, ω2, ω3, ω4} <                      ω2
                                        ω3
                       {ω3, ω4} <
                                        ω4

The state space thus equals: Ω = {ω1, ω2, ω3, ω4}.

Example for the interpretation of the states:
ω1: DAX stands at 5000 at time t = 1 and at 6000 at time t = 2.
ω2: DAX stands at 5000 at time t = 1 and at 4000 at time t = 2.
ω3: DAX stands at 3000 at time t = 1 and at 4000 at time t = 2.
ω4: DAX stands at 3000 at time t = 1 and at 2000 at time t = 2.

First Approach to Model the Flow of Information

Definition 2.10 (Partition) A partition of the state space is a set {A1, ..., AK} of subsets Ak ⊂ Ω of a state space Ω such that
• the subsets Ak are disjoint, i.e. Ak ∩ Al = ∅ for k ≠ l,
• Ω = A1 ∪ ... ∪ AK.

Example. F1 = {{ω1, ω2}, {ω3, ω4}} is a partition of Ω = {ω1, ω2, ω3, ω4}.

Definition 2.11 (Information Structure) An information structure is a sequence {Ft}t of partitions F0, ..., FT such that
• F0 = {{ω1, ..., ωI}}  ("At the beginning each state can occur."),
• the partitions become finer over time,
• FT = {{ω1}, ..., {ωI}}  ("At the end, one knows for sure which state has occurred.").

Example. F0 = {{ω1, ω2, ω3, ω4}}, F1 = {{ω1, ω2}, {ω3, ω4}}, F2 = {{ω1}, {ω2}, {ω3}, {ω4}}.

Remarks.
• Problem: This intuitive definition cannot be generalized to the continuous-time case.
• However, it provides a good intuition of what information flow actually means.
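The refinement requirement in the definition of an information structure can be verified mechanically. A small sketch for the four-state example, with states written as the integers 1..4; the helper names is_partition and is_finer are ad-hoc choices, not from the text:

```python
omega = {1, 2, 3, 4}          # state space of the reference example

# Partitions from the example, written as lists of blocks (sets of states).
F0 = [{1, 2, 3, 4}]
F1 = [{1, 2}, {3, 4}]
F2 = [{1}, {2}, {3}, {4}]

def is_partition(blocks, omega):
    """Blocks are disjoint and their union is the whole state space."""
    union = set().union(*blocks)
    total = sum(len(b) for b in blocks)   # disjointness <=> sizes add up
    return union == omega and total == len(omega)

def is_finer(fine, coarse):
    """Every block of the finer partition lies inside one block of the coarser."""
    return all(any(a <= b for b in coarse) for a in fine)
```

For instance, is_finer(F2, F1) holds, while is_finer(F0, F1) fails: the information state F1 cannot be recovered from the coarser F0.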
. F0 = {∅. ω2 }. • Investors can trade in J + 1 assets with price processes (S0 (t). . F2 = P(Ω).”’). one can decide if the event A ∈ Ft has occurred or not. t = 0. ω4 }. one has no information. we can always use ﬁltrations with the following properties (P(Ω) denotes the powerset of Ω): • F0 = {∅. . . it is suﬃcient if ﬁnite (instead of inﬁnite) unions of events belong to the σﬁeld (see requirement (iii) of Deﬁnition 2. . 2. Ω}. . is said to be a ﬁltration if Fs ⊂ Ft . on the basis of all information about prices S(t − 1) at time t − 1 the investor selects his portfolio which he holds until time t. . .2. Example. Some facts about ﬁltrations: • Filtrations are used to model the ﬂow of information over time. • A trading strategy needs to be predictable. {ω3 . ϕJ (t)) . where ϕj (t) denotes the number of asset j at time t in the investor’s portfolio. . Example. . . t ∈ I.e. • Sj (t) denotes the price of the jth asset at time t. ω2 }. {ω3 . • The range of Sj (t) is ﬁnite.”’). {ω1 . SJ (t)).12 (σField) A system F of subsets of the state space Ω is said to be a σﬁeld if (i) Ω ∈ F. T . . . Here I denotes some ordered index set. t = 0. F1 = {∅. {ω1 .”’). ω4 }. (ii) A ∈ F ⇒ AC ∈ F. For a ﬁnite σﬁeld. . • At time t. s ≤ t and s.13 (Filtration) Consider a state space F. • F0 ⊂ F1 ⊂ . . A family {Ft }t of subσﬁelds Ft .12). ⊂ FT (”‘The state of information increases over time. • Taking conditional expectation E[Y Ft ] of a random variable Y means that one takes expectation on the basis of all information available at time t. ∞ (iii) for every sequence {An }n∈I with An ∈ F we have N n=1 An ∈ F. . Ω} (”‘At the beginning. Deﬁnition 2. T . . Remark. . In our discrete time model. Remark. Ω}. {∅. t ∈ I. i. 
• Trading of an investor is modeled via his trading strategy ϕ(t) = (ϕ0 (t).2 DISCRETETIME STATE PRICING WITH FINITE STATE SPACE 16 Second Approach to Model the Flow of Information: Filtration Deﬁnition 2.2 Description of the Multiperiod Model Properties of the Model: • Trading takes place at T + 1 dates. . Ω}. • FT = P(Ω) (”‘At the end one is fully informed.
. • Recall the following paradigm: Arbitragefree pricing of a contingent claim means ﬁnding a hedging (replication) strategy of the corresponding payoﬀ. . . • American options are no contingent claims in the sense of Deﬁnition 2. Remarks. T − 1 J J S(t) ϕ(t) = S(t) ϕ(t + 1) ⇐⇒ j=0 Sj (t) · ϕj (t) = j=0 Sj (t) · ϕj (t + 1). . . . • A typical contingent claim is a European call with payoﬀ C(T ) = max{S(T ) − K.2 DISCRETETIME STATE PRICING WITH FINITE STATE SPACE 17 • The value of a trading strategy at time t is given by: S(t) ϕ(t) = J Sj (t)ϕj (t) f¨r t = 1. 0}. . . . . T u j=0 Vϕ (t) = S(0) ϕ(1) = J S (0)ϕ (1) f¨r t = 0 u j j=0 j Deﬁnition 2. • This leads to the notion of a selfﬁnancing trading strategy. where S(T ) denotes the value of the underlying at time T and K the strike price. • In the singleperiod model: The price of the replication strategy is then the price of the contingent claim. ϕJ (t)) is said to be selfﬁnancing if for t = 1. • In the multiperiod model we need an additional requirement: The replication strategy should not require intermediate inﬂows or outﬂows of cash. . Remarks. . Ft .14. ∆SJ (t)) := (S0 (t) − S0 (t − 1). . the initial costs are equal to the fair price of the contingent claim.t. SJ (t) − SJ (t − 1)) . . Notation: • Change of the value of strategy ϕ at time t: ∆Vϕ (t) := Vϕ (t) − Vϕ (t − 1) • Price changes at time t ∆S(t) = (∆S0 (t). Deﬁnition 2. . . . but one can generalize this deﬁnition accordingly.14 (Contingent Claim) A contingent claim leads to a statedependent payoﬀ C(t) at time t. the portfolio value changes only if prices change and not if the numbers of assets in the portfolio change. . .r.15 (Selfﬁnancing Trading Strategy) A trading strategy ϕ(t) = (ϕ1 (t). which is measurable w. the investor has to sell other ones. • To buy assets. • Only in this case. . • For this reason.
• Gains process of the trading strategy ϕ at time t:

ϕ(t)'∆S(t) = Σ_{j=0}^J ϕj(t) ∆Sj(t).

• Cumulative gains process of the trading strategy ϕ at time t:

Gϕ(t) := Σ_{τ=1}^t ϕ(τ)'∆S(τ) = Σ_{τ=1}^t Σ_{j=0}^J ϕj(τ) ∆Sj(τ).

Proposition 2.11 (Characterization of a Self-financing Trading Strategy) A self-financing trading strategy ϕ(t) has the following properties:
(i) The value of ϕ changes only due to price changes, i.e. ∆Vϕ(t) = ϕ(t)'∆S(t).
(ii) The value of ϕ is equal to its initial value plus the cumulative gains, i.e.

Vϕ(t) = Vϕ(0) + Gϕ(t) = Vϕ(0) + Σ_{τ=1}^t ϕ(τ)'∆S(τ).

Proof. (i) Since ϕ is assumed to be self-financing, we obtain

∆Vϕ(t) = Vϕ(t) − Vϕ(t − 1)
        = Σ_{j=0}^J Sj(t) ϕj(t) − Σ_{j=0}^J Sj(t − 1) ϕj(t − 1)
   (*)  = Σ_{j=0}^J Sj(t) ϕj(t) − Σ_{j=0}^J Sj(t − 1) ϕj(t)
        = Σ_{j=0}^J ϕj(t) [Sj(t) − Sj(t − 1)]
        = Σ_{j=0}^J ϕj(t) ∆Sj(t) = ϕ(t)'∆S(t).

The equality (*) is valid because ϕ is self-financing.

(ii) Any (not necessarily self-financing) trading strategy satisfies

Vϕ(t) = Vϕ(0) + Σ_{τ=1}^t ∆Vϕ(τ).

Since ϕ is self-financing, we obtain by (i)

Vϕ(t) = Vϕ(0) + Σ_{τ=1}^t ∆Vϕ(τ) = Vϕ(0) + Σ_{τ=1}^t ϕ(τ)'∆S(τ) = Vϕ(0) + Gϕ(t).
□

Remarks.
• Unfortunately, the intuitive relation S(t)'ϕ(t) = S(t)'ϕ(t + 1) has no continuous-time equivalent.
• In continuous-time models, the relation (i) is still valid if ∆ is replaced by d.

2.2.3 Some Basic Definitions Revisited

The notion of a self-financing trading strategy is also important to generalize the following definitions to the multiperiod setting:

Definition 2.16 (Arbitrage) We distinguish two kinds of arbitrage opportunities:
(i) A free lunch is a self-financing trading strategy ϕ with Vϕ(0) < 0 and Vϕ(T) ≥ 0.
(ii) A free lottery is a self-financing trading strategy ϕ with Vϕ(0) = 0 and Vϕ(T) ≥ 0 and Vϕ(T) ≠ 0.

Definition 2.17 (Replication of a Contingent Claim) A contingent claim with payoff C(T) is said to be replicable if there exists a self-financing trading strategy ϕ such that C(T) = Vϕ(T). The trading strategy ϕ is said to be a replication (hedging) strategy for the claim C(T).

Remarks.
• By definition, a replication strategy of a claim leads to the same payoff as the claim.
• In general, replication strategies are dynamic strategies, i.e. at every trading date the hedge portfolio needs to be adjusted according to the arrival of new information about prices.

The definition of a complete market remains the same:

Definition 2.18 (Completeness) An arbitrage-free market is said to be complete if each contingent claim can be replicated.
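The self-financing identity Vϕ(t) = Vϕ(0) + Gϕ(t) of Proposition 2.11 can be verified on a toy example. All prices below are made up, and the rebalancing at t = 1 is chosen precisely so that the self-financing condition S(1)'ϕ(1) = S(1)'ϕ(2) holds:

```python
# Prices (bond, stock) at t = 0, 1, 2 -- hypothetical numbers.
S = [(1.00, 100.0), (1.05, 110.0), (1.1025, 99.0)]

# Portfolio held over (0, 1]: 50 bonds and half a stock.
phi1 = (50.0, 0.5)

# Rebalance at t = 1 into stock only; self-financing: S(1)'phi1 == S(1)'phi2.
wealth1 = S[1][0] * phi1[0] + S[1][1] * phi1[1]
phi2 = (0.0, wealth1 / S[1][1])

V0 = S[0][0] * phi1[0] + S[0][1] * phi1[1]     # V_phi(0)
V2 = S[2][0] * phi2[0] + S[2][1] * phi2[1]     # V_phi(2)

# Cumulative gains G_phi(2) = sum_t phi(t)' Delta S(t).
phis = {1: phi1, 2: phi2}
G = sum(phis[t][k] * (S[t][k] - S[t - 1][k]) for t in (1, 2) for k in (0, 1))
```

The final wealth equals the initial wealth plus the cumulative gains, even though the portfolio composition changed in between; no cash flowed in or out at the rebalancing date.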
. T .2.5 Main Results Conventions: • The 0th asset is a money market account.4 Assumptions (a) All investors agree upon which states cannot occur at times t > 0. .e. 2. t = 0. T.4 • no transaction costs. (d) Frictionless markets. (c) At time t = 0 there is only one market price for each security and each investor can buy and sell as many securities as desired without changing the market price (price taker). • no taxes. i. 4A . A martingale is thus a process being on average stationary.2 DISCRETETIME STATE PRICING WITH FINITE STATE SPACE 20 2. . there are neither free lunches nor free lotteries. . . i. with E[Y (t)] < ∞ is a martingale w. where ru denotes the interest rate for the period [τ − 1.e. • no restrictions on short sales. Important Facts: short sale of an assets is equivalent to selling an asset one does not own. An investor can do this by borrowing the asset from a third party and selling the borrowed security. .e. a probability measure Q if for s ≤ t EQ [Y (t)Fs ] = Y (s). (b) All investors agree upon the range of the price processes S(t). . . τ ]. t = 1. Deﬁnition 2. a short sale corresponds to buying a negative amount of a security.19 (Martingal) A realvalued adapted process {Y (t)}t . Remark. S0 (t) • The money market account is thus used as a numeraire.r.t.2. The net position is a cash inﬂow equal to the price of the security and a liability (borrowing) of the security to the third party. it is locally riskfree with t S0 (0) = 1 und S0 (t) = u=1 (1 + ru ). In abstract mathematical terms. (e) The asset market contains no arbitrage opportunities. • assets are divisible. i. • The discounted (normalized) price processes are deﬁned by Zj (t) := Sj (t) .
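A quick sketch of the money market account as numeraire and of the discounted price processes; the one-period rates and stock prices are made-up illustration values:

```python
rates = [0.03, 0.04, 0.05]                # hypothetical one-period rates r_1, r_2, r_3

S0 = [1.0]                                 # S_0(0) = 1
for r in rates:
    S0.append(S0[-1] * (1 + r))            # S_0(t) = prod_{u<=t} (1 + r_u)

stock = [100.0, 104.0, 107.0, 109.5]       # hypothetical stock prices S_j(t)
Z = [s / b for s, b in zip(stock, S0)]     # discounted prices Z_j(t) = S_j(t)/S_0(t)
```

Discounting normalizes the money market account itself to the constant process 1, which is why it serves as the numeraire.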
• A multiperiod model consists of a sequence of single-period models.
• Therefore, a multiperiod model is arbitrage-free if all single-period models are arbitrage-free (see, e.g., Bingham/Kiesel (2004), Chapter 4).

For these reasons, the following three theorems hold:

Theorem 2.12 (Characterization of No Arbitrage) If assumptions (a)-(d) are satisfied, the following statements are equivalent:
(i) The financial market contains no arbitrage opportunities (assumption (e)).
(ii) There exists a state-dependent discount factor H, the so-called deflator, such that for all time points s, t ∈ {0, ..., T} with s ≤ t and all assets j = 1, ..., J we have

Sj(s) = EP[H(t) Sj(t) | Fs] / H(s),

i.e. all prices can be represented as discounted expectations under the physical measure.
(iii) There exists a risk-neutral probability measure Q such that all discounted price processes Zj(t) are martingales under Q, i.e. for all time points s, t ∈ {0, ..., T} with s ≤ t and all assets j = 0, ..., J we have

Zj(s) = EQ[Zj(t) | Fs]  ⇐⇒  Sj(s)/S0(s) = EQ[Sj(t)/S0(t) | Fs].

Remarks.
• The equivalence "No Arbitrage ⇔ Existence of Q" is known as the Fundamental Theorem of Asset Pricing.
• Unfortunately, in continuous-time models this equivalence breaks down.
• However, the implication "Existence of Q ⇒ No Arbitrage" is still valid.

Theorem 2.13 (Pricing of Contingent Claims) Consider a financial market satisfying assumptions (a)-(e) and a contingent claim with payoff C(T). Then we obtain for s, t ∈ {0, ..., T} with s ≤ t:
(i) The replication strategy ϕ for the claim is unique, i.e. the arbitrage-free price of the claim is uniquely determined by C(s) = Vϕ(s).
(ii) Using the deflator, we can calculate the claim's time-s price:

C(s) = EP[H(t) C(t) | Fs] / H(s).

(iii) The following risk-neutral pricing formula holds:

C(s) = S0(s) · EQ[C(t)/S0(t) | Fs].
Remarks.
• Result (i) says that the today's price of a claim is uniquely determined by its replication costs, i.e. C(0) = Vϕ(0).
• From (iii) we get

C(s)/S0(s) = EQ[C(t)/S0(t) | Fs],

i.e. the discounted price processes of contingent claims are Q-martingales as well. Note that one needs to discount with the money market account.
• If interest rates are constant, then from (iii)

C(0) = EQ[C(T)/(1 + r)^T] = EQ[C(T)] / (1 + r)^T.

• Pricing of contingent claims boils down to computing expectations under the risk-neutral measure.

Theorem 2.14 (Completeness) Consider a financial market satisfying assumptions (a)-(e). Then the following statements are equivalent:
(i) The market is complete.
(ii) There exists a unique deflator H.
(iii) There exists a unique martingale measure Q.

2.2.6 Summary

The following results are important:
• It is always valid that a financial market contains no arbitrage opportunities if an equivalent martingale measure exists.
• In our discrete model, the converse is also valid: If the market contains no arbitrage opportunities, there exists a martingale measure.
• If a martingale measure Q exists, the discounted price processes are Q-martingales.
• If a unique martingale measure exists, the market is arbitrage-free and complete.
• If a unique martingale measure exists, the today's price of a contingent claim is given by the following Q-expectation:

C(0) = EQ[C(T)/S0(T)].
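The summary can be illustrated by risk-neutral pricing via backward induction in a two-period binomial model. The parameters below are made up, and q denotes the implied one-period risk-neutral up-probability (a standard construction for binomial models, not derived in the text above):

```python
s0, u, d, r, n = 100.0, 1.10, 0.90, 0.05, 2
q = (1 + r - d) / (u - d)                 # risk-neutral up-probability, here 0.75

def price(payoff):
    """Discounted Q-expectation computed by backward induction on the lattice."""
    v = [payoff(s0 * u**j * d**(n - j)) for j in range(n + 1)]
    for k in range(n, 0, -1):
        v = [(q * v[j + 1] + (1 - q) * v[j]) / (1 + r) for j in range(k)]
    return v[0]

K = 100.0
call0 = price(lambda s: max(s - K, 0.0))  # European call with strike K
stock0 = price(lambda s: s)               # pricing the stock itself recovers s0,
                                          # since discounted prices are Q-martingales
```

Pricing the stock payoff returns exactly s0, which is the martingale property of the discounted stock price in action.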
3 Introduction to Stochastic Calculus

3.1 Motivation

3.1.1 Why Stochastic Calculus?

• In the previous section, the investor was able to trade at finitely many dates t = 0, ..., T only.
• In a continuous-time model, trading takes place continuously, i.e. at any date in the time period [0, T].
• Consequently, price dynamics are modeled continuously.
• In continuous-time models asset dynamics are modeled via stochastic differential equations.

Example "Black-Scholes Model". Two investment opportunities:
• money market account: riskless asset with constant interest rate r modeled via an ordinary differential equation (ODE):

dM(t) = M(t) r dt,   M(0) = 1.

• stock: risky asset modeled via a stochastic differential equation (SDE):

dS(t) = S(t)[µ dt + σ dW(t)],   S(0) = s0,

where W is a Brownian motion.

In this section we answer the following questions:
Q1 What is a Brownian motion?
Q2 What is the meaning of dW(t)?  → stochastic integral
Q3 What are stochastic differential equations and do explicit solutions exist?  → variation of constants
Q4 What do the dynamics of contingent claims look like?  → Ito formula

3.1.2 What is a Reasonable Model for Stock Dynamics?

Model for the Money Market Account. The solution of the ODE B'(t) = r · B(t) with B(0) = b0 is B(t) = b0 exp(rt)
and thus ln(B(t)) = ln(b0) + r · t.

Model for the Stock. The result for the money market account motivates the following model for the stock dynamics:

ln(S(t)) = ln(s0) + α · t + "randomness".

Notation: Y(t) := "randomness", i.e. Y(t) = ln(S(t)) − ln(s0) − α · t.

Reasonable properties of Y:
1) no tendency, i.e. E[Y(t)] = 0,
2) increasing with time t, i.e. Var(Y(t)) increases with t,
3) no memory, i.e. for Y(t) = Y(s) + [Y(t) − Y(s)] the difference [Y(t) − Y(s)]
   • depends only on t − s and
   • is independent of Y(s).

Properties which would make life easier:
4) normally distributed, i.e. Y(t) ∼ N(0, σ²t), σ > 0,
5) continuous.

3.2 On Stochastic Processes

Definition 3.1 (Stochastic Process) A stochastic process X defined on (Ω, F) is a collection of random variables {Xt}t∈I, i.e. every Xt is (F, B(IR^n))-measurable. Here I denotes some ordered index set.

Remark. Unless otherwise stated, our stochastic processes take values in the measurable space (IR^n, B(IR^n)), i.e. the n-dimensional Euclidean space equipped with the σ-field of Borel sets.

3.2.1 Cadlag and Caglad

Definition 3.2 (Cadlag and Caglad Functions) Let f : [0, T] → IR^n be a function with existing left and right limits, i.e. for each t ∈ [0, T] the limits

f(t−) = lim_{s↑t} f(s),   f(t+) = lim_{s↓t} f(s)

exist. The function f is cadlag if f(t) = f(t+) and caglad if f(t) = f(t−).

A cadlag function cannot jump around too wildly, as the next proposition shows.
for all t ≥ 0. s ≤ t and s.r. Proof. t ∈ I. i. • For ﬁxed ω ∈ Ω. (iii) Let X be adapted to {Ft }.2 (Modiﬁcation. {t ∈ [0. Remarks. then Y is a modiﬁcation of X and vice versa. • The notation {Xt . Deﬁnition 3. the number of discontinuities on [0. then Y is adapted to {Ft }t . Measurability Recall the following deﬁnition: Deﬁnition 3.e. (i) Y is a modiﬁcation of X if P (ω : Xt (ω) = Yt (ω)) = 1 (ii) X and Y are indistinguishable if P (ω : Xt (ω) = Yt (ω) for all t ≥ 0) = 1. (ii) If X and Y have rightcontinuous paths and Y is a modiﬁcation of X. (i) If X and Y are indistinguishable. T ] larger than ε is ﬁnite. the realization {Xt (ω)}t is said to be a path of X.e. i. a stochastic process can be interpreted as functionvalued random variable.e. then X and Y are indistinguishable. If Y is a modiﬁcation of X and F0 contains all P null sets of F. Deﬁnition 3.t. t ∈ I.1 (i) A cadlag function f can have at most a countable number of discontinuities.3 INTRODUCTION TO STOCHASTIC CALCULUS 25 Proposition 3.5 (Modiﬁcation. A family {Ft } of subσﬁelds Ft .2 Modiﬁcation. Indistinguishability) Let X and Y be stochastic processes. Indistinguishability. X and Y have the same paths P a. . See Fristedt/Gray (1997).2. Example. T ] : f (t) = f (t−)} is countable. Then Y is a modiﬁcation of X. Indistinguishability) Let X and Y be stochastic processes. Proposition 3. 3.s.3 (Filtration) Consider a σﬁeld F.4 (Adapted Process) A process X is said to be adapted to the ﬁltration {Ft }t∈I if every Xt is measurable w. Ft . Here I denotes some ordered index set. Ft }t∈I indicates that the process X is adapted to {Ft }t∈I . Consider a positive random variable τ with a continuous distribution. For this reason. σ{Xt } ⊂ Ft . (ii) For any ε > 0. but P (Xt = Yt for all t ≥ 0) = 0. Set Xt ≡ 0 and Yt = 1{t=τ } . is said to be a ﬁltration if Fs ⊂ Ft . i.
the claim follows. Fortunately. ω) → Xt (ω) : ([0. and so. then X is at least measurable. ∞) × Ω. (ii) Since Y is a modiﬁcation of X. B(I n ))measurable). X is actually a functionvalued random variR able.t. ∞) ∩ Q ) = 1. {Ft }. 5). if the mapping R (t. B(I n )) R R is measurable for each t ≥ 0. if a stochastic process is left. we have P (ωXt (ω) = Yt (ω) for all t ≥ [0. if for each t ≥ 0 and A ∈ B(I n ) the set {(s.e. if the mapping (s.t. i. Deﬁnition 3.e. ∞)) ⊗ F) → (I n . if for each A ∈ B(I n ) the set {(t.or leftcontinuous. . B(I n ))measurable R ((Ft . {Ft }. Any progressively measurable process is measurable and adapted (Exercise). a stronger result can be proved: Proposition 3. t]) ⊗ Ft . then X is also progressively measurable w.t. p. B([0.7 (Progressively Measurable Stochastic Process) A stochastic process is said to progressively measurable w. B(I n )) R R is measurable.4 If the stochastic process X is adapted to the ﬁltration {Ft } and right. See Karatzas/Shreve (1991. ω) : R Xt (ω) ∈ A} belongs to the product σﬁeld B(I n ) ⊗ F. 2 If X is an (adapted) stochastic process. p. (i) is obvious.or leftcontinuous.or rightcontinuous. but not necessarily adapted to {Ft }. 5).3 INTRODUCTION TO STOCHASTIC CALCULUS 26 Proof. Proposition 3.r. for technical reasons. t] × Ω : Xs (ω) ∈ A} belongs to R the product σﬁeld B([0. it is often convenient to have some joint measurability properties. Proof. t]) ⊗ Ft ) → (I n . ω) → Xs (ω) : ([0. By the rightcontinuity assumption. Deﬁnition 3. See Karatzas/Shreve (1991. If X is right.r. then it has a modiﬁcation which is progressively measurable w.r. Proof.6 (Measurable Stochastic Process) A stochastic process is said to measurable. However. then every Xt is (F. Remark. the ﬁltration {Ft }. ω) ∈ [0. i. t] × Ω. Remark. (iii) Exercise. B([0.3 If a stochastic process is measurable and adapted to the ﬁltration {Ft }. The following proposition provides the extent to which the converse is true.
3.2.3 Stopping Times

Consider a measurable space (Ω, F). An F-measurable random variable τ : Ω → I ∪ {∞}, where I = [0, ∞) or I = IN, is said to be a random time. Two special classes of random times are of particular importance.

Definition 3.8 (Stopping Time, Optional Time)
(i) A random time is a stopping time w.r.t. the filtration {Ft}t∈I if the event {τ ≤ t} ∈ Ft for every t ∈ I.
(ii) A random time is an optional time w.r.t. the filtration {Ft}t∈I if the event {τ < t} ∈ Ft for every t ∈ I.

Proposition 3.5 If τ is optional and c > 0 is a positive constant, then τ + c is a stopping time.

Proof. Exercise.  □

Proposition 3.6 Every stopping time is optional, and both concepts coincide if the filtration is right-continuous.

Proof. Consider a stopping time τ. Then {τ < t} = ∪_{t>ε>0} {τ ≤ t − ε} and {τ ≤ t − ε} ∈ F_{t−ε} ⊂ Ft, i.e. τ is optional. On the other hand, consider an optional time τ w.r.t. a right-continuous filtration {Ft}. Since {τ ≤ t} = ∩_{t+ε>u>t} {τ < u}, we have {τ ≤ t} ∈ F_{t+ε} for every ε > 0, implying {τ ≤ t} ∈ ∩_{ε>0} F_{t+ε} = F_{t+} = Ft.  □

Remark. Propositions 3.5 and 3.6 show how both concepts are connected.

Definition 3.9 (Continuity of Filtrations)
(i) A filtration {Gt} is said to be right-continuous if Gt = G_{t+} := ∩_{ε>0} G_{t+ε}.
(ii) A filtration {Gt} is said to be left-continuous if Gt = G_{t−} := σ(∪_{s<t} Gs).

Remark. Let X be a stochastic process.
• Loosely speaking, left-continuity of {Ft^X} means that Xt can be guessed by observing Xs, 0 ≤ s < t.
• Right-continuity means intuitively that if Xs has been observed for 0 ≤ s ≤ t, then one cannot learn more by peeking infinitesimally far into the future.

In our applications, we will always work with right-continuous filtrations.5

5 See Definition 3.18.
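A canonical example of a stopping time is the first time an observed path reaches a given level: the event {τ ≤ t} is decidable from the path up to time t alone, with no peeking ahead. A discrete sketch with a made-up sample path (hitting_time is an ad-hoc helper name):

```python
def hitting_time(path, level):
    """First index t with path[t] >= level; None if the level is never reached.
    This is a stopping time: {tau <= t} is decidable from path[0..t] alone."""
    for t, x in enumerate(path):
        if x >= level:
            return t
    return None

path = [0, 1, 0, 1, 2, 3, 2, 3, 4, 5, 4]   # one observed sample path
tau = hitting_time(path, 5)

# The decision uses no information beyond time tau:
same = hitting_time(path[:tau + 1], 5)
```

By contrast, "the last time the path is at its maximum" is not a stopping time: deciding it requires knowledge of the whole future path.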
we deﬁne the function Xτ on the event {τ < ∞} by Xτ (ω) := Xτ (ω) (ω) If X∞ (ω) is deﬁned for all ω ∈ Ω. is Fτ measurable. then the random variable Xτ . by setting Xτ (ω) := X∞ (ω) on {τ = ∞}. if the τn ’s are stopping times.r. Then the random times N sup τn .∞) and τ is a stopping time of this ﬁltration. Proposition and Deﬁnition 3. Exercise. τ1 + τ2 .∞) is progressively measurable. Proposition 3. Exercise. limn→∞ τn are all optional. τ1 ∨τ2 = max{τ1 . {τ1 ≤ τ2 }. (ii) Let {τn }n∈I be a sequence of optional times. then so is supn≥1 τn .3 INTRODUCTION TO STOCHASTIC CALCULUS 28 Proposition 3. Then τ1 ∧τ2 = min{τ1 . (iii) Fτ1 ∧τ2 = Fτ1 ∩ Fτ2 and the events {τ1 < τ2 }. the socalled stopping time σﬁeld Fτ . Proposition 3.7 (i) Let τ1 and τ2 be stopping times. n≥1 n≥1 inf τn . {τ2 < τ1 }. (ii) If the process X is progressively measurable w. {Ft }t∈[0. Exercise. belong to Fτ1 ∩ Fτ2 . τ2 }.10 (Measurability of a Stopped Process) (i) If the process X is measurable and the random time τ is ﬁnite. then Fτ1 ⊂ Fτ2 . and the stopped process {Xτ ∧t . (i) If τ1 = t ∈ [0.8 (Stopping Time σField) Let τ be a stopping time of the ﬁltration {Ft }. Ft }t∈[0. τ2 }. deﬁned on the set {τ < ∞} ∈ Fτ . then Fτ1 = Ft . Then the set {A ∈ F : A ∩ {τ ≤ t} ∈ Ft for all t ≥ 0} is a σﬁeld. {τ1 = τ2 } .t. and ατ1 with α ≥ 1 are stopping times. then Xτ can also be deﬁned on Ω. {τ2 ≤ τ1 }. then the function Xτ is a random variable. ∞).∞) . Proof. limn→∞ τn . (ii) The process stopped at time τ is the process {Xt∧τ }t∈[0.9 (Properties of a Stopped σField) Let τ1 and τ2 be stopping times. Furthermore. (ii) If τ1 ≤ τ2 .10 (Stopped Process) (i) If X is a stochastic process and τ is random time. Proof. Proof. Deﬁnition 3.
Deﬁne R τA (ω) = inf{t > 0 : Xt (ω) ∈ A}. 39) and Protter (2005. 5 and p. (i) If Ao is an open set. Xi + δ) ⊂ Ao . • If the ﬁltration is rightcontinuous. then Xt∧τ = Xt 1{t<τ } + Xτ 1{t≥τ } . 2 Remarks. that for some ω. p. in general the hitting time of Ao is not a stopping time. 3. Let Ao = (1. Proposition 3. The following result is obvious: If X is adapted and cadlag and τ is a stopping time.12 (Brownian Motion) (i) A Brownian motion is an adapted. ˜ • Even if X is continuous. we cannot tell wheter or not τAo = 1.∞) such that W0 = 0 a. then the stopping time τAc is a hitting time of Ac . Xq ∈ (Xi − δ.s. then the random variable τAc (ω) = inf{t > 0 : Xt (ω) ∈ Ac or Xt− (ω) ∈ Ac }. Deﬁnition 3. Then we have Xi ∈ Ao for some i ∈ I \ Q and i < t. However. Suppose that the inclusion ⊂ does not hold. then the hitting time of Ao is an optional time. and .3 INTRODUCTION TO STOCHASTIC CALCULUS 29 Proof. • If X is continuous . is also adapted and cadlag. but Xqn → Xi which is a contradiction to the rightcontinuity of X. then there exists a sequence (qn )n∈I with qn s0 N and qn ∈ Q ∩ [0. Ft }t∈[0. We have {τAo < t} = s∈Q ∩[0. For example. The inclusion ⊃ is obvious.11 (Hitting Time) R Let X be a stochastic process and let A ∈ B(I n ) be a Borel set in I n . p. (i) For simplicity n = 1. ∞). Therefore. but Xq ∈ Ao for all q ∈ Q with q < t. Without looking slightly ahead of time 1. (ii) If Ac is a closed set. there exists a δ > 0 such that (Xi − δ. and that X1 (ω) = 1. 5). then the hitting time τAo is a stopping time. Remark. Xt (ω) ≤ 1 for t ≤ 1. R Since Ao is open.t) {Xs ∈ Ao }. p. suppose that X is realvalued.3 Prominent Examples of Stochastic Processes Theorem and Deﬁnition 3. Karatzas/Shreve (1991. t). Proof. realvalued process {Wt . 9). ˜ is a stopping time. Xi + δ).11 Let X be a rightcontinuous process adapted to the ﬁltration {Ft }. (ii) See Karatzas/Shreve (1991. Then τ is called a hitting time of A for X.
• Wt − Ws ∼ N(0, t − s) for 0 ≤ s < t   ("stationary increments") and
• Wt − Ws is independent of Fs for 0 ≤ s < t   ("independent increments"),
exists and is said to be a one-dimensional Brownian motion.
(ii) An n-dimensional Brownian motion W := (W1, ..., Wn)' is an n-dimensional vector of independent one-dimensional Brownian motions.

Proof. Applying Kolmogorov's extension theorem,6 it is straightforward to prove the existence of a Brownian motion. See, e.g., Karatzas/Shreve (1991, pp. 47ff).  □

6 Kolmogorov's extension theorem basically says that for given (finite-dimensional) distributions satisfying a certain consistency condition there exists a probability space and a stochastic process such that its (finite-dimensional) distributions coincide with the given distributions.

Definition 3.12 (Poisson Process) A Poisson process with intensity λ > 0 is an adapted, integer-valued cadlag process N = {Nt, Ft}t∈[0,∞) such that N0 = 0 a.s. and, for 0 ≤ s < t < ∞, the difference Nt − Ns is independent of Fs and is Poisson distributed with mean λ(t − s).7

7 A random variable Y taking values in IN ∪ {0} is Poisson distributed with parameter λ > 0 if P(Y = n) = e^{−λ} λ^n / n!, n ∈ IN ∪ {0}.

Theorem 3.13 (Kolmogorov's Continuity Theorem) Suppose that the process X = {Xt}t∈[0,∞) satisfies the following condition: For all T > 0 there exist positive constants α, β, C such that

E[ ||Xt − Xs||^α ] ≤ C · |t − s|^{1+β},   0 ≤ s, t ≤ T.

Then there exists a continuous modification of X.

Proof. See, e.g., Karatzas/Shreve (1991, p. 53ff).  □

Corollary 3.14 (Continuous Version of the Brownian Motion) An n-dimensional Brownian motion possesses a continuous modification.

Proof. One can show that E[ ||Wt − Ws||^4 ] = n(n + 2) |t − s|². Hence, we can apply Kolmogorov's continuity theorem with α = 4, C = n(n + 2), and β = 1.  □

Remark. We will always work with this continuous version.

3.4 Martingales

Definition 3.13 (Martingale, Supermartingale, Submartingale) A process X = {Xt}t∈[0,∞) adapted to the filtration {Ft}t∈[0,∞) is called a martingale (resp. supermartingale, submartingale) with respect to {Ft}t∈[0,∞) if
(i) Xt ∈ L1(dP), i.e. E[|Xt|] < ∞, for 0 ≤ t < ∞, and
(ii) if s ≤ t, then E[Xt | Fs] = Xs (resp. E[Xt | Fs] ≤ Xs, E[Xt | Fs] ≥ Xs).
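The increment properties of a Brownian motion (mean zero, variance t − s, built from independent stationary Gaussian steps) can be checked on simulated paths. A Monte Carlo sketch; the grid size, path count and seed are arbitrary choices:

```python
import random

random.seed(42)

t, steps, n_paths = 1.0, 50, 10000
dt = t / steps

terminal = []
for _ in range(n_paths):
    w = 0.0
    for _ in range(steps):
        # stationary, independent increments: W_{u+dt} - W_u ~ N(0, dt)
        w += random.gauss(0.0, dt ** 0.5)
    terminal.append(w)

mean = sum(terminal) / n_paths
var = sum((w - mean) ** 2 for w in terminal) / n_paths
# mean should be close to 0 and var close to t = 1.0
```

The sample mean and variance of W(1) match E[W(1)] = 0 and Var(W(1)) = 1 up to Monte Carlo error of order 1/sqrt(n_paths).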
E[Xt Fs ] ≥ Xs ).17 (Optional Sampling Theorem. Then ϕ(X) is a submartingale w. pp. we write X ∈ M2 (or X ∈ Mc if X is also continuous). An important question now arises: What happens to the martingale property of a stopped martingale? For bounded stopping times. a submartingale for µ > 0. resp. Examples. . Karatzas/Shreve (1991.8 Proposition 3. .t. we deﬁne Xt := 2 E[Xt ] 2 and X := Xn ∧ 1 . Xn ) with realvalued components Xk be a submartingale w. • Let N be a Poisson process with intensity λ.r. the pseudometric X − Y  becomes a metric on M2 . X0 = 0 a.r. then E[Xt Fs ] = Xs . σ ∈ I is a martingale for R.16 If we identify indistinguishable processes. If we identify indistinguishable processes. Then E[Xτ2 Fτ1 ] ≥ (≤)Xτ1 means: (i) X − Y  ≥ 0 and X − Y  = 0 iﬀ X = Y . (ii) X − Y  = Y − X.∞) be a rightcontinuous submartingale (supermartingale) and let τ1 and τ2 be bounded stopping times of the ﬁltration {Ft } with τ1 ≤ τ2 ≤ c ∈ I R a.s. 37f).∞) (ii) Let X = (X1 . Then ϕ(X) is a submartingale w. µ. ) is a complete metric space. .g. • The arithmetic Brownian motion Xt := µt + σWt .t. and Mc is a closed subset of M2 .r. . {Ft }t∈[0. {Ft }t∈[0.s. Xn ) with realvalued components Xk be a martingale w. {Ft }t∈[0. Ft }t∈[0. Deﬁnition 3. 8 This .t. (iii) X − Y  = X − Z + Z − Y .t. in addition. .r.15 (Functions of Submartingales) (i) Let X = (X1 . the pair (M2 . 2 Proof. follows from Jensen’s inequality (Exercise). µ = 0. . 1st Version) Let X = {Xt . Theorem 3. 2 Proposition 3. • Let W be a onedimensional Brownian motion.s. and a supermartingale for µ < 0. a.. {Ft }t∈[0.∞) and let ϕ : I n → I be a convex. . Then {Nt − λt}t is a martingale. Ft }t∈[0. (resp. Then W is a martingale and W 2 is a submartingale.3 INTRODUCTION TO STOCHASTIC CALCULUS 31 (ii) if s ≤ t. We say that X is square2 integrable if E[Xt ] < ∞ for every t ≥ 0. If.∞) and let ϕ : I n → I be a convex function such that E[ϕ(Xt )] < ∞ R R for all t ∈ [0. ∞). See.∞) be a rightcontinuous martingale. nothing changes. 
Note that ‖X − Y‖ by itself only defines a pseudometric on M_2; identifying indistinguishable processes turns it into a metric.
If the stopping times are unbounded, this result does, in general, only hold if X can be extended to t = ∞ such that X~ := {X_t, F_t}_{t in [0,∞]} is still a sub- or supermartingale. We are therefore led to the following convergence result:

Theorem 3.18 (Sub- and Supermartingale Convergence) Let X = {X_t, F_t}_{t in [0,∞)} be a right-continuous submartingale (supermartingale) and assume sup_{t≥0} E[X_t^+] < ∞ (sup_{t≥0} E[X_t^-] < ∞). Then the limit X_∞(ω) = lim_{t→∞} X_t(ω) exists for a.e. ω in Ω and E|X_∞| < ∞.

This theorem ensures that a limit exists, but it is not at all clear whether X~ is still a sub- or supermartingale. To solve this problem, we need to require the following condition:

Definition 3.15 (Uniform Integrability) A family of random variables (U_α)_{α in A} is uniformly integrable (UI) if
lim_{n→∞} sup_{α in A} ∫_{{|U_α| ≥ n}} |U_α| dP = 0.

Remark. Using the definition, it is sometimes not easy to check whether a family of random variables is uniformly integrable. The following characterization often helps:

Theorem 3.19 (Characterization of UI) Let (U_α)_{α in A} be a subset of L^1. The following are equivalent:
(i) (U_α)_{α in A} is uniformly integrable.
(ii) There exists a positive, increasing, convex function g defined on [0, ∞) such that
lim_{x→∞} g(x)/x = +∞ and sup_{α in A} E[g ∘ |U_α|] < ∞.

Remark. A good choice is often g(x) = x^{1+ε} with ε > 0.

Theorem 3.20 The following conditions are equivalent for a nonnegative, right-continuous submartingale X = {X_t, F_t}_{t in [0,∞)}:
(i) It is a uniformly integrable family of random variables.
(ii) It converges in L^1, as t → ∞.
(iii) It converges P-a.s. (t → ∞) to an integrable random variable X_∞, such that {X_t, F_t}_{t in [0,∞]} is a submartingale.
Proof. For the implication (i) ⇒ (ii) ⇒ (iii), see Karatzas/Shreve (1991, pp. 17f).

For martingales, the assumption of nonnegativity can be dropped:

Theorem 3.21 (UI Martingales can be Closed) The following conditions are equivalent for a right-continuous martingale X = {X_t, F_t}_{t in [0,∞)}:
(i) It is a uniformly integrable family of random variables.
(ii) It converges in L^1, as t → ∞.
(iii) It converges P-a.s. (t → ∞) to an integrable random variable X_∞, such that {X_t, F_t}_{t in [0,∞]} is a martingale.
(iv) There exists an integrable random variable Y such that X_t = E[Y | F_t] a.s. for all t ≥ 0, i.e. the martingale is closed by Y.
If (iv) holds and X_∞ is the random variable in (iii), then E[Y | F_∞] = X_∞ a.s.

We now formulate a more general version of Doob's optional sampling theorem:

Theorem 3.22 (Optional Sampling Theorem, 2nd Version) Let X = {X_t, F_t}_{t in [0,∞]} be a right-continuous submartingale (supermartingale) with a last element X_∞ and let τ_1 and τ_2 be stopping times of the filtration {F_t} with τ_1 ≤ τ_2 a.s. Then
E[X_{τ_2} | F_{τ_1}] ≥ (≤) X_{τ_1}.
Proof. See, e.g., Karatzas/Shreve (1991, pp. 19f).

Remark. In particular, if {X_t, F_t}_{t in [0,∞]} is a UI right-continuous martingale, then the stopping times may be unbounded (see Protter (2005, p. 10)).

Proposition 3.23 (Stopped (Sub)Martingales) Let X = {X_t, F_t}_{t in [0,∞)} be an adapted right-continuous (sub)martingale and let τ be a stopping time of {F_t}. The stopped process {X_{t∧τ}, F_t}_{t in [0,∞)} is also an adapted right-continuous (sub)martingale.
Proof. Exercise.

Remark. The difficulty in this proposition is to show that the stopped process is a (sub)martingale for the filtration {F_t}. It is obvious that the stopped process is a (sub)martingale for {F_{t∧τ}}.

Corollary 3.24 (Conditional Expectation and Stopped σ-Fields) Let τ_1 and τ_2 be stopping times and Z be an integrable random variable. Then
E[E[Z | F_{τ_1}] | F_{τ_2}] = E[E[Z | F_{τ_2}] | F_{τ_1}] = E[Z | F_{τ_1 ∧ τ_2}].
Proof. Exercise.

For technical reasons, we sometimes need to weaken the martingale concept:

Definition 3.16 (Local Martingale) Let X = {X_t, F_t}_{t in [0,∞)} be a (continuous) process. If there exists a nondecreasing sequence {τ_n}_{n in N} of stopping times of {F_t} such that {X_t^(n) := X_{t∧τ_n}, F_t}_{t in [0,∞)} is a martingale for each n ≥ 1 and P(lim_{n→∞} τ_n = ∞) = 1, then we say that X is a (continuous) local martingale.
Remarks.
• Every martingale is a local martingale (choose τ_n = n).
• However, there exist local martingales which are not martingales. In particular, E[X_t] need not exist for a local martingale.

Proposition 3.25 A nonnegative local martingale is a supermartingale.
Proof. This follows from Fatou's lemma (Exercise).

This concept can be applied to other situations (e.g. locally bounded, locally square-integrable):

Definition 3.17 (Local Property) Let X be a stochastic process. A property π is said to hold locally if there exists a sequence of stopping times {τ_n}_{n in N} with P(lim_{n→∞} τ_n = ∞) = 1 such that X^(n) has property π for each n ≥ 1.

3.5 Completion, Augmentation, Usual Conditions

The following definition is very important:

Definition 3.18 (Usual Conditions) A filtered probability space (Ω, G, P, {G_t}) satisfies the usual conditions if
(i) it is complete (a probability space is complete if for any N~ ⊂ Ω such that there exists an N in G with N~ ⊂ N and P(N) = 0, we have N~ in G),
(ii) G_0 contains all the P-null sets of G, and
(iii) {G_t} is right-continuous.

In the sequel, we impose the following

Standing assumption: We always consider a filtered probability space (Ω, F, P, {F_t}) satisfying the usual conditions.

One of the main reasons for this assumption is that the following proposition holds:

Proposition 3.26 Let (Ω, G, P, {G_t}) satisfy the usual conditions. Then every martingale for {G_t} possesses a unique cadlag modification.
Proof. See, e.g., Karatzas/Shreve (1991, pp. 16f).

Unfortunately, we are confronted with the following result:

Proposition 3.27 The natural filtration {F_t^W} of a Brownian motion W is left-continuous, but fails to be right-continuous.
Fortunately, there is a way out. Let F_∞^W := σ{∪_{t≥0} F_t^W} and consider the space (Ω, F_∞^W, P). Define the collection of P-null sets by
N := {A ⊂ Ω : there exists B in F_∞^W with A ⊂ B, P(B) = 0}.
The P-augmentation of {F_t^W} is defined by F_t := σ{F_t^W ∪ N} and is called the Brownian filtration.

Theorem 3.28 (Brownian Filtration) The Brownian filtration, i.e. this P-augmentation, is left- and right-continuous. Besides, W is still a Brownian motion w.r.t. the Brownian filtration.

This is the reason why we work with the Brownian filtration instead of {F_t^W}. More generally, one can prove the following results:
(i) If a stochastic process X is left-continuous, then the filtration {F_t^X} is left-continuous.
(ii) Even if X is continuous, {F_t^X} need not be right-continuous.
(iii) For an n-dimensional strong Markov process X the augmented filtration is right-continuous.
The proofs of the previous results can be found in Karatzas/Shreve (1991, pp. 89ff).

For a Poisson process, a similar result holds:

Proposition 3.29 (Augmented Filtration of a Poisson Process) The P-augmentation of the natural filtration induced by a Poisson process N is right-continuous. Besides, N is still a Poisson process w.r.t. this P-augmentation.

3.6 Ito Integral

Our goal is to define an integral of the form
∫_0^t X(s) dW(s),
where X is an appropriate stochastic process. From the Lebesgue-Stieltjes theory we know that for a measurable function f and a process A of finite variation on [0, t] one can define
∫_0^t f(s) dA(s) = lim_{n→∞} Σ_{k=0}^{n−1} f(kt/n) · (A((k+1)t/n) − A(kt/n)).
However, the variation of a Brownian motion is unbounded on any interval. Therefore, naive pathwise stochastic integration is impossible (see Protter (2005, pp. 43ff)).

Good News: We can define a stochastic integral in terms of L^2-convergence!

Agenda (disregarding measurability for the moment).
1st Step. Define a stochastic integral for simple processes κ (processes that are constant except for finitely many jumps):
I_t(κ) := ∫_0^t κ_s dW_s := ?
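Aside (not part of the original notes): the failure of pathwise integration can be seen numerically. Riemann sums of "∫_0^T W dW" with left versus right evaluation points do not converge to a common limit; their gap converges to the quadratic variation T. A minimal NumPy sketch (names are mine):

```python
import numpy as np

rng = np.random.default_rng(7)

def endpoint_gap(n_steps=1_000_000, T=1.0):
    """Riemann sums of 'int_0^T W dW' with left vs right evaluation points.
    Their gap equals sum (dW)^2, which converges to T, so no pathwise
    Lebesgue-Stieltjes integral can exist."""
    dW = rng.normal(0.0, np.sqrt(T / n_steps), n_steps)
    W = np.cumsum(dW)
    W_left = np.concatenate([[0.0], W[:-1]])   # evaluate at left endpoints
    left_sum = np.sum(W_left * dW)
    right_sum = np.sum(W * dW)                 # evaluate at right endpoints
    return float(right_sum - left_sum)         # -> T as the mesh goes to 0
```

The Ito integral resolves this ambiguity by fixing the left endpoint, which is exactly what the simple processes of the 1st Step encode.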
2nd Step. Every L^2[0,T]-process X can be approximated by a sequence {κ^(n)}_n of simple processes such that
‖X − κ^(n)‖_T^2 := E[∫_0^T (X(s) − κ^(n)(s))^2 ds] → 0.
3rd Step. For each n the stochastic integral I_t(κ^(n)) = ∫_0^t κ^(n)(s) dW(s) is a well-defined random variable and the sequence {I_t(κ^(n))}_n converges (in some sense) to a random variable I_t.
4th Step. Define the stochastic integral for an L^2[0,T]-process X as follows:
∫_0^t X(s) dW(s) := lim_{n→∞} I_t(κ^(n)) = lim_{n→∞} ∫_0^t κ^(n)(s) dW(s).
5th Step. Extend the definition by localization.

Standing Assumption. {F_t} is the Brownian filtration of a Brownian motion W.

1st Step.

Definition 3.19 (Simple Process) A stochastic process {κ_t}_{t in [0,T]} is a simple process if there exist real numbers 0 = t_0 < t_1 < ... < t_p = T, p in N, and bounded random variables Φ_i : Ω → R, i = 0, 1, ..., p, such that Φ_0 is F_0-measurable, Φ_i is F_{t_{i−1}}-measurable, and for all ω in Ω we have
κ_t(ω) = Φ_0(ω) 1_{{0}}(t) + Σ_{i=1}^p Φ_i(ω) 1_{(t_{i−1}, t_i]}(t).

Remarks.
• κ is a left-continuous step function.
• κ_t is F_{t_{i−1}}-measurable for t in (t_{i−1}, t_i] (measurable w.r.t. the left-hand endpoint of the interval).

Definition 3.20 (Stochastic Integral for Simple Processes) For a simple process κ and t in [0,T] the stochastic integral I(κ) is defined by
I_t(κ) := ∫_0^t κ_s dW_s := Σ_{1≤i≤p} Φ_i (W_{t_i ∧ t} − W_{t_{i−1} ∧ t}).

Proposition 3.30 (Stochastic Integral for Simple Processes) For a simple process κ we have:
(i) The integral I(κ) = {I_t(κ)}_{t in [0,T]} is a continuous martingale for {F_t}_{t in [0,T]}.
(ii) E[(I_t(κ))^2] = E[∫_0^t κ_s^2 ds] for all t in [0,T].
(iii) E[sup_{0≤t≤T} |I_t(κ)|^2] ≤ 4 E[∫_0^T κ_s^2 ds].
Proof.
(i) Exercise.
(ii) For simplicity, let t = t_k for some k. Then
E[I_t(κ)^2] = Σ_{i=1}^k Σ_{j=1}^k E[Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}})].
1st case: i > j. By the tower property of conditional expectations,
E[Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}})]
= E[E[Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}}) | F_{t_{i−1}}]]
= E[Φ_i Φ_j (W_{t_j} − W_{t_{j−1}})(E[W_{t_i} | F_{t_{i−1}}] − W_{t_{i−1}})] = 0,
since E[W_{t_i} | F_{t_{i−1}}] = W_{t_{i−1}}.
2nd case: i = j. Here
E[Φ_i^2 (W_{t_i} − W_{t_{i−1}})^2] = E[Φ_i^2 E[(W_{t_i} − W_{t_{i−1}})^2 | F_{t_{i−1}}]] = E[Φ_i^2 (t_i − t_{i−1})].
Hence,
E[I_t(κ)^2] = E[Σ_{i=1}^k Φ_i^2 (t_i − t_{i−1})] = E[∫_0^t κ_s^2 ds],
which is finite because E[Σ_{i=1}^k Φ_i^2 (t_i − t_{i−1})] ≤ T Σ_{i=1}^k E[Φ_i^2] < ∞, the Φ_i being bounded.
(iii) Apply (i), (ii), and Doob's inequality (see Karatzas/Shreve (1991, p. 14)): for a right-continuous martingale {M_t} with E[M_t^2] < ∞ for all t ≥ 0 we have
E[(sup_{0≤t≤T} |M_t|)^2] ≤ 4 E[M_T^2].

Remarks.
• Stochastic integrals over [t, T] are defined by
∫_t^T κ_s dW_s := ∫_0^T κ_s dW_s − ∫_0^t κ_s dW_s.
• Let κ_1 and κ_2 be simple processes and a, b in R. The stochastic integral for simple processes is linear, i.e. I_t(aκ_1 + bκ_2) = a I_t(κ_1) + b I_t(κ_2). Note that linear combinations of simple processes are simple.
• For κ ≡ 1 we get ∫_0^t 1 dW_s = W_t and
E[(∫_0^t dW_s)^2] = E[W_t^2] = t = ∫_0^t ds.
This is the reason why people sometimes write dW_t = sqrt(dt), which is wrong from a mathematical point of view.
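Aside (not part of the original notes): the identity of Proposition 3.30 (ii) can be verified by simulation. The sketch below takes the simple process Φ_i = W_{t_{i−1}} (left endpoints, hence F_{t_{i−1}}-measurable); for it, both sides of the isometry are approximately T^2/2. Names and parameters are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def ito_isometry_check(n_paths=50_000, n_steps=100, T=1.0):
    """Compare E[I_T(kappa)^2] with E[int_0^T kappa_s^2 ds] for the simple
    process kappa_s = W_{t_{i-1}} on (t_{i-1}, t_i]; both are ~ T^2/2."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    W = np.cumsum(dW, axis=1)
    W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # Phi_i is F_{t_{i-1}}-measurable
    I_T = (W_left * dW).sum(axis=1)            # I_T(kappa) = sum Phi_i (W_{t_i} - W_{t_{i-1}})
    lhs = float((I_T ** 2).mean())             # E[I_T^2]
    rhs = float(((W_left ** 2) * dt).sum(axis=1).mean())  # E[int kappa^2 ds]
    return lhs, rhs
```

The measurability of Φ_i with respect to the left endpoint is exactly what makes the cross terms in the proof vanish.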
2nd Step. Define the vector space
L^2[0,T] := {X = {X_t, F_t}_{t in [0,T]} : progressively measurable, real-valued with E[∫_0^T X_s^2 ds] < ∞}
with (semi)norm
‖X‖_T := sqrt(E[∫_0^T X_s^2 ds]),
where we identify processes for which X = Y only Lebesgue⊗P-a.e.

Proposition 3.31 (Simple Processes Approximate L^2[0,T]-Processes) The linear subspace of simple processes is dense in L^2[0,T] w.r.t. ‖·‖_T, i.e. every L^2[0,T]-process X can be approximated by a sequence {κ^(n)}_n of simple processes with
‖X − κ^(n)‖_T = sqrt(E[∫_0^T (X_s − κ_s^(n))^2 ds]) → 0 (as n → ∞).
Proof. Assume that X in L^2[0,T] is continuous and bounded. Choose
κ_t^(n)(ω) := X_0(ω) 1_{{0}}(t) + Σ_{k=0}^{2^n − 1} X_{kT/2^n}(ω) 1_{(kT/2^n, (k+1)T/2^n]}(t).
Since X is bounded, {κ^(n)}_n is uniformly bounded and we can apply the dominated convergence theorem to obtain the desired result. For the general case, see Karatzas/Shreve (1991, p. 133).

3rd and 4th Step. For simple processes the mapping κ → I(κ) induces a norm on the vector space of the corresponding stochastic integrals via
‖I(κ)‖^2 := E[(∫_0^T κ_s dW_s)^2] = E[∫_0^T κ_s^2 ds] = ‖κ‖_T^2,
where the middle equality is Proposition 3.30 (ii). Since the mapping κ → I(κ) is linear and norm-preserving, it is an isometry, called the Ito isometry. Define
M_2^c([0,T]) := {M : M continuous martingale on [0,T] w.r.t. {F_t}_{t in [0,T]} with E[M_t^2] < ∞ for every t in [0,T]}.

Theorem and Definition 3.32 (Ito Integral for X in L^2[0,T]) There exists a unique linear mapping J : L^2[0,T] → M_2^c([0,T]) with the following properties:
(i) If κ is simple, then P(J_t(κ) = I_t(κ) for all t in [0,T]) = 1.
(ii) E[J_t(X)^2] = E[∫_0^t X_s^2 ds] ("Ito isometry").
J is unique in the following sense: If there exists a second mapping J~ satisfying (i) and (ii) for all X in L^2[0,T], then J(X) and J~(X) are indistinguishable. For X in L^2[0,T], we define
∫_0^t X_s dW_s := J_t(X)
for every t in [0,T] and call it the Ito integral or stochastic integral of X (w.r.t. W).

Proof. (a) Let {κ^(n)}_n be an approximating sequence, i.e. ‖X − κ^(n)‖_T → 0 as n → ∞. Then {I_t(κ^(n))}_n is a Cauchy sequence in the ordinary L^2(Ω, F_t, P) for random variables (t fixed!), since (as n, m → ∞)
E[(I_t(κ^(n)) − I_t(κ^(m)))^2] = E[I_t(κ^(n) − κ^(m))^2] = E[∫_0^t (κ^(n) − κ^(m))^2 ds] → 0,
where the first equality uses the linearity of I and the second the Ito isometry. Since this space is complete, we get
(1) I_t(κ^(n)) → Z_t in L^2,
where Z_t is F_t-measurable and E[Z_t^2] < ∞. By the definition of the conditional expected value, Z_t is only P-a.s. unique (and may depend on the sequence!).
(b) Z is a martingale: Note that I_t(κ^(n)) is a martingale for all n in N. Hence,
∫_A I_t(κ^(n)) dP = ∫_A I_s(κ^(n)) dP for s < t and for all A in F_s and n in N.
The L^2-convergence (1) leads to (as n → ∞)
∫_A Z_t dP ← ∫_A I_t(κ^(n)) dP = ∫_A I_s(κ^(n)) dP → ∫_A Z_s dP
(see Bauer (1992, p. 92 and p. 99)), i.e. E[Z_t | F_s] = Z_s.
(c) By Proposition 3.26, Z possesses a right-continuous modification {J_t(X), F_t}_{t in [0,T]}, still being a square-integrable martingale. Applying Doob's inequality and the Borel-Cantelli lemma, one can prove that J(X) is even continuous (Exercise); in particular, there exists a subsequence {n_k}_k such that I_t(κ^(n_k)) → Z_t P-a.s. as k → ∞.
(d) J_t(X) is independent of the approximating sequence: Let {κ^(n)}_n and {κ~^(n)}_n be two approximating sequences for X with I_t(κ^(n)) → Z_t and I_t(κ~^(n)) → Z~_t in L^2.
Then {κ^^(n)}_n with κ^^(2n) := κ^(n) (even indices) and κ^^(2n+1) := κ~^(n) (odd indices) is an approximating sequence for X, and {I_t(κ^^(n))}_n converges in L^2 to both Z_t and Z~_t. Hence Z_t = Z~_t P-a.s., and both need to coincide with J_t(X) P-a.s.
(e) Ito isometry:
E[J_t(X)^2] = lim_{n→∞} E[I_t(κ^(n))^2] (∗) = lim_{n→∞} E[∫_0^t (κ_s^(n))^2 ds] (∗∗) = E[∫_0^t X_s^2 ds].
Here (∗) holds because I_t(κ^(n)) → J_t(X) in the ordinary L^2(Ω, F_t, P), and (∗∗) holds because κ^(n) → X in L^2([0,T] × Ω, B[0,T] ⊗ F, Lebesgue⊗P). Both are a consequence of Bauer (1992, p. 92).
(f) Linearity of J follows from the linearity of I and from the fact that if {κ_X^(n)}_n approximates X and {κ_Y^(n)}_n approximates Y, then {aκ_X^(n) + bκ_Y^(n)}_n approximates aX + bY (a, b in R):
a J_t(X) + b J_t(Y) = a lim_{n→∞} I_t(κ_X^(n)) + b lim_{n→∞} I_t(κ_Y^(n)) = lim_{n→∞} I_t(aκ_X^(n) + bκ_Y^(n)) = J_t(aX + bY),
the limits being taken in L^2 (continuity of linear mappings, see Bauer (1992, p. 130)).
(g) Uniqueness of J: Let J~ be a linear mapping with properties (i) and (ii). Then
J~_t(X) = J~_t(lim_{n→∞} κ^(n)) = lim_{n→∞} J~_t(κ^(n)) = lim_{n→∞} I_t(κ^(n)) = J_t(X) P-a.s.
for all t in [0,T]. Since J(X) and J~(X) are continuous processes, they are indistinguishable, i.e. P(J_t(X) = J~_t(X) for all t in [0,T]) = 1.

5th Step. The class of admissible integrands can be extended. Define
H^2[0,T] := {X = {X_t, F_t}_{t in [0,T]} : progressively measurable, real-valued with P(∫_0^T X_s^2 ds < ∞) = 1}.
In general, for X in H^2[0,T] it can happen that
‖X‖_T^2 = E[∫_0^T X_s^2 ds] = ∞.
Then the Ito isometry need not hold and the approximation argument breaks down. Fortunately, one can use a localization argument:
τ_n(ω) := T ∧ inf{t in [0,T] : ∫_0^t X_s(ω)^2 ds ≥ n}, X_t^(n)(ω) := X_t(ω) 1_{{τ_n(ω) ≥ t}}.
Every τ_n is a stopping time because it is the first hitting time of [n, ∞) by the continuous process {∫_0^t X_s^2 ds}_t.
Hence, X^(n) in L^2[0,T], so J(X^(n)) is defined and a martingale for fixed n in N. Note that
• J_t(X^(n)) = J_t(X^(m)) for all 0 ≤ t ≤ τ_n ∧ τ_m,
• lim_{n→∞} τ_n = T a.s. (since P(∫_0^T X_s^2 ds < ∞) = 1).
Now define J_t(X) := J_t(X^(n)) for X in H^2[0,T] and 0 ≤ t ≤ τ_n. By construction, J is well-defined.

Important Results.
• For X in H^2[0,T] the process {J_t(X)}_{t in [0,T]} is a continuous local martingale with localizing sequence {τ_n}_n.
• J is still linear, i.e. J_t(aX + bY) = a J_t(X) + b J_t(Y) for a, b in R and X, Y in H^2[0,T]. (Choose τ_n := τ_n^X ∧ τ_n^Y as localizing sequence of aX + bY.)
• E[J_t(X)] need not exist.

Definition 3.21 (Multidimensional Ito Integral) Let {W(t), F_t} be an m-dimensional Brownian motion and {X(t), F_t} an R^{n,m}-valued progressively measurable process with all components X_ij in H^2[0,T]. Then we define the Ito integral of X w.r.t. W as
∫_0^t X(s) dW(s) := (Σ_{j=1}^m ∫_0^t X_1j(s) dW_j(s), ..., Σ_{j=1}^m ∫_0^t X_nj(s) dW_j(s))',
where each ∫_0^t X_ij(s) dW_j(s) is a one-dimensional Ito integral.

3.7 Ito's Formula

Recall our Standing assumption: (Ω, F, P, {F_t}) is a filtered probability space satisfying the usual conditions.
Additional Assumption: {W_t, F_t} is a (multidimensional) Brownian motion of appropriate dimension.

Definition 3.22 (Ito Process) Let W be an m-dimensional Brownian motion.
(i) {X(t), F_t} is said to be a real-valued Ito process if for all t ≥ 0 it possesses the representation
X(t) = X(0) + ∫_0^t α(s) ds + ∫_0^t β(s)' dW(s)
= X(0) + ∫_0^t α(s) ds + Σ_{j=1}^m ∫_0^t β_j(s) dW_j(s)
for an F_0-measurable random variable X(0) and progressively measurable processes α, β with
∫_0^t |α(s)| ds < ∞, Σ_{j=1}^m ∫_0^t β_j(s)^2 ds < ∞, P-a.s. for all t ≥ 0.
In particular, we have β_j in H^2[0,T].
(ii) An n-dimensional Ito process X = (X^(1), ..., X^(n)) is a vector of one-dimensional Ito processes X^(i).

Remarks.
• To shorten notations, one sometimes uses the symbolic differential notation
dX(t) = α(t) dt + β(t)' dW(t).
• It can be shown that the processes α and β are unique up to indistinguishability.
• For a real-valued Ito process X and a real-valued progressively measurable process Y (such that all integrals are defined), we define
∫_0^t Y_s dX_s := ∫_0^t Y_s α_s ds + ∫_0^t Y_s β_s' dW_s.

Definition 3.23 (Quadratic Variation and Covariation) Let X^(1) and X^(2) be two real-valued Ito processes with
X^(i)(t) = X^(i)(0) + ∫_0^t α^(i)(s) ds + ∫_0^t β^(i)(s)' dW(s), i in {1, 2}.
Then
<X^(1), X^(2)>_t := Σ_{j=1}^m ∫_0^t β_j^(1)(s) β_j^(2)(s) ds
is said to be the quadratic covariation of X^(1) and X^(2). In particular, <X>_t := <X, X>_t is said to be the quadratic variation of an Ito process X.

Remarks.
• The definition may appear a bit unfounded. However, one can show the following (see Karatzas/Shreve (1991, pp. 32ff)): For fixed t ≥ 0, let Π = {t_0, ..., t_n} with 0 = t_0 < t_1 < ... < t_n = t be a partition of [0, t]. The mesh of Π is defined by ‖Π‖ = max_{1≤k≤n} |t_k − t_{k−1}|, and the p-th variation of X over the partition Π is defined by (p > 0)
V_t^(p)(Π) = Σ_{k=1}^n |X_{t_k} − X_{t_{k−1}}|^p.
Then, for all t ≥ 0, it holds that
lim_{‖Π‖→0} V_t^(2)(Π) = <X>_t in probability.

The main tool for working with Ito processes is Ito's formula:
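Aside (not part of the original notes): before turning to Ito's formula, the quadratic-variation limit above is easy to illustrate numerically. For a Brownian path, V_t^(2)(Π) over a uniform partition concentrates around <W>_t = t as the mesh shrinks. A minimal sketch (names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)

def squared_variation(n_steps, t=1.0):
    """V_t^{(2)}(Pi) of one simulated Brownian path over a uniform
    partition of [0, t] with mesh t/n_steps."""
    dW = rng.normal(0.0, np.sqrt(t / n_steps), n_steps)
    return float(np.sum(dW ** 2))   # -> <W>_t = t as the mesh goes to 0
```

For a fine partition (say 10^6 steps) the result is very close to t, while coarse partitions scatter around it with standard deviation of order sqrt(2 t^2 / n).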
Theorem 3.33 (Ito's Formula) Let X be an n-dimensional Ito process with
X_i(t) = X_i(0) + ∫_0^t α_i(s) ds + Σ_{j=1}^m ∫_0^t β_ij(s) dW_j(s),
and let f : [0, ∞) × R^n → R be a C^{1,2}-function. Then
f(t, X_1(t), ..., X_n(t)) = f(0, X_1(0), ..., X_n(0)) + ∫_0^t f_t(s, X_1(s), ..., X_n(s)) ds
+ Σ_{i=1}^n ∫_0^t f_{x_i}(s, X_1(s), ..., X_n(s)) dX_i(s)
+ 0.5 Σ_{i,j=1}^n ∫_0^t f_{x_i x_j}(s, X_1(s), ..., X_n(s)) d<X_i, X_j>_s,
where all integrals are defined.

Proof. For simplicity, let n = m = 1, X(0) = const, and f = f(x) independent of time.
1st step. By applying a localization argument, we can assume that
M_t := ∫_0^t β_s dW_s, B_t := ∫_0^t α_s ds, B^_t := ∫_0^t |α_s| ds, ∫_0^t β_s^2 ds,
and thus X are uniformly bounded by a constant C. Besides, the relevant support of f, f', and f'' is compact, implying that these functions are bounded as well. Hence, M is a martingale (since β in L^2[0,T]).
2nd step. Let Π = {t_0, ..., t_p} be a partition of [0, t]. A Taylor expansion leads to
f(X_t) − f(X_0) = Σ_{k=1}^p (f(X_{t_k}) − f(X_{t_{k−1}}))
= Σ_{k=1}^p f'(X_{t_{k−1}})(X_{t_k} − X_{t_{k−1}}) (∗) + 0.5 Σ_{k=1}^p f''(η_k)(X_{t_k} − X_{t_{k−1}})^2 (∗∗)
with η_k = X_{t_{k−1}} + θ_k(X_{t_k} − X_{t_{k−1}}), θ_k in (0, 1).
3rd step. Convergence of (∗). We get
(∗) = Σ_{k=1}^p f'(X_{t_{k−1}})(B_{t_k} − B_{t_{k−1}}) [=: A_1(Π)] + Σ_{k=1}^p f'(X_{t_{k−1}})(M_{t_k} − M_{t_{k−1}}) [=: A_2(Π)].
(i) For ‖Π‖ → 0, we obtain (Lebesgue-Stieltjes theory)
A_1(Π) → ∫_0^t f'(X_s) dB_s = ∫_0^t f'(X_s) α_s ds, a.s. and in L^1.
(ii) Approximate f'(X_s) by the simple process
Y_s^Π(ω) := f'(X_0) 1_{{0}}(s) + Σ_{k=1}^p f'(X_{t_{k−1}}(ω)) 1_{(t_{k−1}, t_k]}(s).
Note that A_2(Π) = ∫_0^t Y_s^Π dM_s (this is actually a consequence of the more general definition of Ito integrals in Karatzas/Shreve (1991, pp. 129ff)). By the Ito isometry,
E[(∫_0^t f'(X_s) dM_s − A_2(Π))^2] = E[(∫_0^t (f'(X_s) − Y_s^Π) β_s dW_s)^2] = E[∫_0^t (f'(X_s) − Y_s^Π)^2 β_s^2 ds] → 0,
where the limit follows from the dominated convergence theorem (as ‖Π‖ → 0). Hence, A_2(Π) → ∫_0^t f'(X_s) dM_s in L^2.
4th step. Convergence of (∗∗). It can be shown (see Korn/Korn (1999, pp. 53ff)) that
(∗∗) → 0.5 ∫_0^t f''(X_s) β_s^2 ds in L^1.
5th step. By steps 2-4 (as ‖Π‖ → 0), and since L^2-convergence implies L^1-convergence,
f(X_t) − f(X_0) = A_1(Π) + A_2(Π) + (∗∗) → ∫_0^t f'(X_s) α_s ds + ∫_0^t f'(X_s) dM_s + 0.5 ∫_0^t f''(X_s) β_s^2 ds in L^1.
Therefore, there exists a subsequence {Π_k}_k such that the sums converge a.s. towards the corresponding limits, so for fixed t both sides of Ito's formula are equal a.s. Since both sides are continuous processes in t, they are indistinguishable.

Remark. Often a shorthand symbolic differential notation is used (f : [0, ∞) × R → R):
df(t, X_t) = f_t(t, X_t) dt + f_x(t, X_t) dX_t + 0.5 f_xx(t, X_t) d<X>_t.

Examples.
• X_t = t and f in C^2 leads to the fundamental theorem of calculus.
• X_t = h(t) and f in C^2 leads to the chain rule of ordinary calculus.
• X_t = W_t and f(x) = x^2:
W_t^2 = 2 ∫_0^t W_s dW_s + t.
• X_t = x_0 + αt + σW_t, x_0, α, σ in R, and f(x) = e^x:
df(X_t) = f'(X_t) dX_t + 0.5 f''(X_t) d<X>_t = f(X_t)(α dt + σ dW_t) + 0.5 f(X_t) σ^2 dt = f(X_t)((α + 0.5σ^2) dt + σ dW_t).
Therefore, choosing α = µ − 0.5σ^2,
(3) S_t := s_0 e^{(µ − 0.5σ^2)t + σW_t}
solves the stock price dynamics of the Black-Scholes model,
dS_t = S_t[µ dt + σ dW_t].
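Aside (not part of the original notes): the example W_t^2 = 2 ∫_0^t W dW + t can be checked pathwise. With left-endpoint sums, the residual of the identity equals Σ (ΔW)^2 − T, which vanishes as the mesh shrinks. A minimal sketch (names are mine):

```python
import numpy as np

rng = np.random.default_rng(3)

def ito_identity_residual(n_steps=100_000, T=1.0):
    """Pathwise check of W_T^2 = 2 int_0^T W_s dW_s + T via left-endpoint sums."""
    dW = rng.normal(0.0, np.sqrt(T / n_steps), n_steps)
    W = np.cumsum(dW)
    W_left = np.concatenate([[0.0], W[:-1]])
    integral = float(np.sum(W_left * dW))            # approximates int_0^T W dW
    return float(W[-1] ** 2 - (2.0 * integral + T))  # -> 0 as the mesh goes to 0
```

Note the correction term +T, which ordinary calculus (where (dW)^2 would be negligible) does not produce.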
pp. z.s. then X is a strong solution of the stochastic diﬀerential equation (SDE) (4) dX(t) = α(t. z) − α(t. . z) − β(t.5 fyy d < Y > + fxy d < X. T ]. for every t ≥ 0 and i ∈ {1. X(s)) + 0 j=1 2 βij (s. the important family of linear SDEs admits explicit solutions: . where C = C(K. y) + β(t. y)2 + β(t. where α : [0. for every t ≥ 0 and i ∈ {1. Y > =0 =0 =1 = XdY + Y dX + d < X. X(s)) ds < ∞ P a.3 INTRODUCTION TO STOCHASTIC CALCULUS 45 • Ito’s product rule: Let X and Y be Ito processes and f (x. . . α(t. . R R R R Theorem 3. X(t))dt + β(t. Proof. Y > Deﬁnition 3. However. In general.34 (Existence and Uniqueness) If the coeﬃcients α and β are continuous functions satisfying the following global Lipschitz and linear growth conditions for ﬁxed K > 0 α(t. X(s)) ds + j=0 0 βij (s.24 (Stochastic Diﬀerential Equation) If for a given ﬁltered probability space (Ω.m are given functions.5 fxx d < X > +0. P. y) ≤ Kz − y. 128ﬀ). ∞) × I n → I n and β : [0. y ∈ I then there exists an adapted continuous process X being a strong solution R.s. ∞) × I n → I n. {Ft }) there exists an adapted process X with X(0) = x ∈ I n ﬁxed and R t m t Xi (t) = xi + 0 αi (s. . n}. y) = x · y Then (omitting dependencies) d(XY ) = fx dX + fy dY + 0. . there do not exist explicit solutions to SDEs. X(s)) dWj (s) P a. y)2 ≤ K 2 (1 + y2 ). . X(t))dW (t) X(0) = x. n}. . See Korn/Korn (1999. T ) > 0. of (4) and satisfying E[Xt 2 ] ≤ C(1 + x2 )eCT for all t ∈ [0. X is unique up to indistinguishability. F. such that t m αi (s.
Proposition 3.35 (Variation of Constants) Fix x in R and let A, a, B_j, b_j be progressively measurable real-valued processes with (P-a.s., for every t ≥ 0)
∫_0^t (|A(s)| + |a(s)|) ds < ∞, Σ_{j=1}^m ∫_0^t (B_j(s)^2 + b_j(s)^2) ds < ∞.
Then the linear SDE
dX(t) = (A(t)X(t) + a(t)) dt + Σ_{j=1}^m (B_j(t)X(t) + b_j(t)) dW_j(t), X(0) = x,
possesses an adapted, Lebesgue⊗P-unique solution
X(t) = Z(t) [x + ∫_0^t (1/Z(u)) (a(u) − Σ_{j=1}^m B_j(u) b_j(u)) du + Σ_{j=1}^m ∫_0^t (b_j(u)/Z(u)) dW_j(u)],
where
(5) Z(t) = exp(∫_0^t (A(u) − 0.5 ‖B(u)‖^2) du + ∫_0^t B(u)' dW(u))
is the solution of the homogeneous SDE
dZ(t) = Z(t)[A(t) dt + B(t)' dW(t)], Z(0) = 1.

Proof. For simplicity, let n = m = 1. As in (3), one can show that Z solves the homogeneous SDE. To prove uniqueness of Z, let Z~ be another solution. Note that Ito's formula leads to
d(1/Z) = (1/Z)(−A + B^2) dt − (1/Z) B dW.
Then, by Ito's product rule,
d(Z~/Z) = Z~ d(1/Z) + (1/Z) dZ~ + d<Z~, 1/Z>
= (Z~/Z)(−A + B^2) dt − (Z~/Z) B dW + (Z~/Z)[A dt + B dW] − (Z~/Z) B^2 dt = 0,
implying Z~/Z = const. Due to the initial condition Z(0) = Z~(0) = 1, we obtain Z~ = Z.
To show that X satisfies the linear SDE, define Y := X/Z; applying Ito's product rule to Y Z gives the desired result. To prove uniqueness, let X and X~ be two solutions to the linear SDE. Then X^ := X − X~ solves the homogeneous SDE
dX^ = X^[A dt + B dW]
with X^_0 = 0. Applying Ito's product rule to 0 · Z shows X^_t = 0 a.s. for all t ≥ 0.
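Aside (not part of the original notes): the variation-of-constants formula can be tested against a direct Euler-Maruyama discretization of the linear SDE on the same Brownian path. The sketch below, with coefficients and names chosen by me, evaluates the integrals in the explicit solution by left Riemann sums:

```python
import numpy as np

rng = np.random.default_rng(4)

def linear_sde_check(A=0.1, a=0.05, B=0.2, b=0.1, x=1.0, T=1.0, n=100_000):
    """Compare the variation-of-constants solution of
    dX = (A X + a) dt + (B X + b) dW with an Euler-Maruyama path
    driven by the same Brownian increments."""
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), n)
    W = np.concatenate([[0.0], np.cumsum(dW)])
    t = np.linspace(0.0, T, n + 1)

    # homogeneous solution Z(t) = exp((A - B^2/2) t + B W_t), Z(0) = 1
    Z = np.exp((A - 0.5 * B ** 2) * t + B * W)
    # X(T) = Z(T) [x + int (a - B b)/Z du + int b/Z dW], left Riemann sums
    X_explicit = Z[-1] * (x + np.sum((a - B * b) / Z[:-1]) * dt
                            + np.sum(b / Z[:-1] * dW))

    X = x
    for k in range(n):                     # Euler-Maruyama on the same path
        X = X + (A * X + a) * dt + (B * X + b) * dW[k]
    return float(X_explicit), float(X)
```

Both numbers approximate the same strong solution, so they agree up to discretization error of order sqrt(dt).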
Example. In Brownian models the money market account and the stocks are modeled via (i = 1, ..., I)
(6) dS_0(t) = S_0(t) r(t) dt, S_0(0) = 1,
dS_i(t) = S_i(t)[µ_i(t) dt + Σ_{j=1}^m σ_ij(t) dW_j(t)], S_i(0) = s_i0 > 0.
Applying "variation of constants" leads to
S_0(t) = e^{∫_0^t r(s) ds},
S_i(t) = s_i0 exp(∫_0^t (µ_i(s) − 0.5 Σ_{j=1}^m σ_ij(s)^2) ds + Σ_{j=1}^m ∫_0^t σ_ij(s) dW_j(s)).

4 Continuous-time Market Model and Option Pricing

4.1 Description of the Model

We consider a continuous-time security market as given by (6).

Mathematical Assumptions.
• Recall the standing assumption: (Ω, F, P, {F_t}) is a filtered probability space satisfying the usual conditions.
• {W_t, F_t} is a (multidimensional) Brownian motion of appropriate dimension.
• {F_t} is the Brownian filtration.
• The coefficients r, µ, and σ are progressively measurable processes being uniformly bounded.

Notation.
• ϕ_i(t): number of shares of stock i held at time t
• ϕ_0(t)S_0(t): money invested in the money market account at time t
• X^ϕ(t) := Σ_{i=0}^I ϕ_i(t)S_i(t): wealth of the investor
• Assumption: x_0 := X(0) = Σ_{i=0}^I ϕ_i(0)S_i(0) ≥ 0 (nonnegative initial wealth)
• π_i(t) := ϕ_i(t)S_i(t)/X(t): proportion invested in stock i
• 1 − Σ_i π_i: proportion invested in the money market account

Definition 4.1 (Trading Strategy, Self-financing, Admissible)
(i) Let ϕ = (ϕ_0, ..., ϕ_I)' be an R^{I+1}-valued process which is progressively measurable w.r.t. {F_t} and satisfies (P-a.s.)
(7) ∫_0^T |ϕ_0(t)| dt < ∞, Σ_{i=1}^I ∫_0^T (ϕ_i(t)S_i(t))^2 dt < ∞, T ≥ 0.
Then ϕ is said to be a trading strategy.
(ii) If
dX(t) = ϕ(t)' dS(t)
holds, then ϕ is said to be self-financing.
(iii) A self-financing ϕ is said to be admissible if X(t) ≥ 0 for all t ≥ 0.

Remarks.
• (7) ensures that all integrals are defined.
• Every trading strategy satisfies dX(t) = d(ϕ(t)' S(t)). Hence, in abstract mathematical terms, self-financing means that ϕ can be put before the d operator.
• The corresponding portfolio process π is said to be self-financing or admissible if ϕ is self-financing or admissible (recall π_i = ϕ_i S_i / X).
• Note that if we work with π, then we need to require that X(t) > 0 for all t ≥ 0. Otherwise, π is not well-defined.

Economical Assumptions.
• Investors use admissible strategies.
• Investors are price takers.
• All investors model the dynamics of the securities as above.
• All investors agree upon σ, but may disagree upon µ.
• Since admissible strategies are progressive, investors cannot see into the future (no clairvoyants).

Definition 4.2 (Arbitrage, Option, Replication, Fair Value)
(i) An admissible trading strategy ϕ is an arbitrage opportunity if (P-a.s.)
X^ϕ(0) = 0, X^ϕ(T) ≥ 0, P(X^ϕ(T) > 0) > 0.
(ii) An F_T-measurable random variable C(T) ≥ 0 is said to be a contingent claim.
(iii) An admissible strategy ϕ is a replication strategy for the claim C(T) if
X^ϕ(T) = C(T) P-a.s.
(iv) The fair time-0 value of a claim is defined as C(0) = inf{p : D(p) is nonempty}, where
D(x) := {ϕ : ϕ admissible and replicates C(T) and X^ϕ(0) = x}.

Remarks.
• The strategy in (i) is also known as a "free lottery".
• Compare the definition in (ii) with the result of Proposition 2.11.
• In the following, we always consider admissible trading strategies.
Assumptions.
• Frictionless markets.
• No arbitrage (see Definition 4.2 (i)).

Proposition 4.1 (Wealth Equation) The wealth dynamics of a self-financing strategy ϕ are described by the SDE
dX(t) = X(t)[(r(t) + π(t)'(µ(t) − r(t)1)) dt + π(t)'σ(t) dW(t)], X(0) = x_0,
the so-called wealth equation.
Proof. By definition,
dX = ϕ' dS = ϕ_0 S_0 r dt + Σ_{i=1}^I ϕ_i S_i (µ_i dt + Σ_{j=1}^m σ_ij dW_j)
= (1 − Σ_{i=1}^I π_i) X r dt + Σ_{i=1}^I π_i X (µ_i dt + Σ_{j=1}^m σ_ij dW_j)
= X[(r + Σ_{i=1}^I π_i(µ_i − r)) dt + Σ_{j=1}^m Σ_{i=1}^I π_i σ_ij dW_j].

Remarks.
• Due to the boundedness of the coefficients, the condition (P-a.s.)
∫_0^t ‖π(s)‖^2 ds < ∞
is sufficient for the wealth equation to possess a unique solution (by "variation of constants").
• One can directly start with π (instead of ϕ) and require for the wealth equation that
∫_0^t X(s)^2 ‖π(s)‖^2 ds < ∞.

4.2 Black-Scholes Approach: Pricing by Replication

We wish to price a European call written on a stock S with strike K, maturity T, and payoff C(T) = max{S(T) − K, 0}, where
dS(t) = S(t)[µ dt + σ dW(t)]
with constants µ and σ.

Assumptions.
• The interest rate r is constant as well.
• No payouts are made to the stock during the lifetime of the call.
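Aside (not part of the original notes): for a constant proportion π and constant coefficients, the wealth equation of Proposition 4.1 is a linear SDE, so "variation of constants" gives X(t) = x_0 exp((r + π(µ−r) − 0.5 π^2 σ^2) t + π σ W_t). The sketch below (names and parameter values are mine) checks this against an Euler scheme on the same path:

```python
import numpy as np

rng = np.random.default_rng(8)

def wealth_check(x0=1.0, r=0.02, mu=0.08, sigma=0.2, pi=0.6, T=1.0, n=100_000):
    """Euler scheme for dX = X[(r + pi(mu-r)) dt + pi sigma dW] versus the
    explicit variation-of-constants solution, driven by the same increments."""
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), n)
    X = x0
    for dw in dW:                                  # Euler step of the wealth SDE
        X = X + X * ((r + pi * (mu - r)) * dt + pi * sigma * dw)
    W_T = float(dW.sum())
    X_exact = x0 * np.exp((r + pi * (mu - r) - 0.5 * pi ** 2 * sigma ** 2) * T
                          + pi * sigma * W_T)
    return float(X), float(X_exact)
```

The agreement illustrates why the square-integrability condition on π is all that is needed for a unique wealth process.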
• There exists a C^{1,2}-function f = f(t, s) such that the time-t price of the call is given by C(t) = f(t, S(t)).
Then
dC(t) = f_t(t, S(t)) dt + f_s(t, S(t)) dS(t) + 0.5 f_ss(t, S(t)) d<S>_t
= f_t dt + f_s S (µ dt + σ dW) + 0.5 f_ss S^2 σ^2 dt
= (f_t + f_s S µ + 0.5 f_ss S^2 σ^2) dt + f_s S σ dW.
Consider a self-financing trading strategy ϕ = (ϕ_M, ϕ_S, ϕ_C) and choose ϕ_C(t) ≡ −1. Then
dX^ϕ(t) = ϕ_M(t) dM(t) + ϕ_S(t) dS(t) − dC(t)
= ϕ_M r M dt + ϕ_S S (µ dt + σ dW) − (f_t + f_s S µ + 0.5 f_ss S^2 σ^2) dt − f_s S σ dW
= (ϕ_M r M + ϕ_S S µ − f_t − f_s S µ − 0.5 f_ss S^2 σ^2) dt (∗) + (ϕ_S S σ − f_s S σ) dW (∗∗).
Choosing ϕ_S(t) = f_s(t, S(t)) leads to (∗∗) = 0. Hence, the strategy is locally risk-free, implying
(∗) = X^ϕ(t) r
(otherwise there is an arbitrage opportunity with the money market account). Rewriting (∗) with ϕ_S = f_s,
(∗) = ϕ_M r M − f_t − 0.5 f_ss S^2 σ^2 = r (ϕ_M M + ϕ_S S − C) = r X^ϕ,
and therefore
−r f_s S + r C − f_t − 0.5 f_ss S^2 σ^2 = 0.
It follows:

Theorem 4.2 (PDE of Black-Scholes (1973)) In an arbitrage-free market the price function f(t, s) of a European call satisfies the Black-Scholes PDE
f_t(t, s) + r s f_s(t, s) + 0.5 s^2 σ^2 f_ss(t, s) − r f(t, s) = 0, (t, s) in [0,T] × R_+,
with terminal condition f(T, s) = max{s − K, 0}.

Remark. We have not used the terminal condition to derive the PDE. Hence, every European (path-independent) option satisfies this PDE. Of course, the terminal condition will be different.
Question: How should we solve this PDE?

1st approach: Merton (1973). Solve the PDE by transforming it to the heat equation u_t = u_xx, which has a well-known solution (see, e.g., Korn/Korn (1999, p. 124)).

2nd approach. Apply the Feynman-Kac theorem (see below), which states that the time-0 solution of the Black-Scholes PDE is given by
C(0) = e^{−rT} E[max{Y(T) − K, 0}], Y(0) = S(0),
where Y satisfies the SDE
dY(t) = Y(t)[r dt + σ dW(t)].

Theorem 4.3 (Black-Scholes Formula)
(i) The price of the European call is given by
C(t) = S(t) · N(d_1(t)) − K · N(d_2(t)) · e^{−r(T−t)}
with
d_1(t) = (ln(S(t)/K) + (r + 0.5σ^2)(T − t)) / (σ sqrt(T − t)), d_2(t) = d_1(t) − σ sqrt(T − t).
(ii) The trading strategy ϕ := (ϕ_M, ϕ_S) with
ϕ_M(t) = (C(t) − f_s(t, S(t)) · S(t)) / M(t), ϕ_S(t) = f_s(t, S(t)),
is self-financing and a replication strategy for the call.

Proof. (i) Compute (exercise)
C(t) = e^{−r(T−t)} E[max{Y(T) − K, 0} | Y(t) = S(t)].
(ii) We have
(8) X^ϕ(t) = ϕ_M(t) · M(t) + ϕ_S(t) · S(t) = C(t).
Hence, (ϕ_M, ϕ_S) replicates the call and is self-financing because
dX^ϕ =(8)= dC = (f_t + f_s S µ + 0.5 f_ss S^2 σ^2) dt + f_s S σ dW
=(+)= (r[C − f_s S] + f_s S µ) dt + f_s S σ dW
= (ϕ_M r M + ϕ_S S µ) dt + ϕ_S S σ dW
= ϕ_M dM + ϕ_S S (µ dt + σ dW) = ϕ_M dM + ϕ_S dS.
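Aside (not part of the original notes): a direct implementation of Theorem 4.3 (i), together with a Monte Carlo evaluation of the 2nd-approach representation C(0) = e^{−rT} E[max{Y(T) − K, 0}]. The sketch uses only the Python standard library plus NumPy; function names are mine.

```python
import numpy as np
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, T, r, sigma, t=0.0):
    """Black-Scholes price of a European call at time t (Theorem 4.3 (i))."""
    tau = T - t
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * tau) * N(d2)

def mc_call(S, K, T, r, sigma, n_paths=4_000_000, seed=5):
    """Monte Carlo counterpart: C(0) = e^{-rT} E[max(Y(T) - K, 0)] where
    dY = Y (r dt + sigma dW), i.e. Y(T) = S exp((r - sigma^2/2) T + sigma W_T)."""
    rng = np.random.default_rng(seed)
    W_T = rng.normal(0.0, sqrt(T), n_paths)
    Y_T = S * np.exp((r - 0.5 * sigma ** 2) * T + sigma * W_T)
    return exp(-r * T) * float(np.mean(np.maximum(Y_T - K, 0.0)))
```

Note that the drift appearing in Y is r, not µ: the PDE, and hence the price, does not depend on µ.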
Note: (+) holds due to the Black-Scholes PDE, i.e. f_t + 0.5 f_ss S²σ² = r[C − f_s S]. □

Remark. Besides, the strategy (ϕ^M, ϕ^S, ϕ^C) with ϕ^C = −1 is self-financing as well because d(ϕ^C(t)C(t)) = d(−C(t)) = −dC(t) = ϕ^C(t)dC(t) and (ϕ^M, ϕ^S) is self-financing.

Theorem 4.4 (Feynman-Kac Representation) Let β and γ be functions of (t,x) ∈ [0,∞) × ℝ and let X satisfy the SDE

dX(s) = β(s,X(s)) ds + γ(s,X(s)) dW(s)

with initial condition X(t₀) = x. Under technical conditions (see Korn/Korn (1999, p. 136)), there exists a unique C^{1,2} solution F = F(t,x) of the PDE

(9)  F_t(t,x) + β(t,x)F_x(t,x) + 0.5 γ²(t,x)F_xx(t,x) − rF(t,x) = 0

with terminal condition F(T,x) = h(x) possessing the representation

F(t₀,x) = e^{−r(T−t₀)} E[h(X(T)) | X(t₀) = x].

Sketch of Proof. (i) Firstly, assume r = 0. Suppose that there exists a function F satisfying the PDE, i.e. F_t + βF_x + 0.5γ²F_xx = 0 with F(T,x) = h(x). If F is a C^{1,2} function, we can apply Ito's formula:

dF = F_t dt + F_x dX + 0.5 F_xx d<X>.

Since d<X> = γ² dt, we get

dF = [F_t + βF_x + 0.5γ²F_xx] dt + γF_x dW = γF_x dW,

or in integral notation

F(s, X(s)) = F(t₀, X(t₀)) + ∫_{t₀}^{s} γ(u,X(u)) F_x(u,X(u)) dW(u).

If E[∫_{t₀}^{s} γ(u,X_u)² F_x(u,X_u)² du] < ∞, the stochastic integral has zero expectation and thus

F(t₀, x) = E[F(s, X(s)) | X(t₀) = x].
Choose s = T and note that F(T, X(T)) = h(X(T)).

(ii) For r ≠ 0, multiply the PDE (9) by e^{−rt} and set G(t,x) := F(t,x)e^{−rt}. Since G_t(t,x) = F_t(t,x)e^{−rt} − rF(t,x)e^{−rt}, we get

G_t(t,x) + β(t,x)G_x(t,x) + 0.5γ²(t,x)G_xx(t,x) = 0

with terminal condition G(T,x) = e^{−rT}h(x). Now we can apply (i):

G(t₀,x) = E[e^{−rT}h(X(T)) | X(t₀) = x].

Substituting G(t₀,x) = e^{−rt₀}F(t₀,x) gives the desired result. □

Remarks.
• Two crucial points are not proved: Firstly, we have not checked under which conditions a C^{1,2} function F exists. Secondly, we have not checked under which conditions γ(·,X)F_x(·,X) ∈ L²[0,T].
• The Feynman-Kac representation can be generalized to a multidimensional setting, i.e. X a multidimensional Ito process with stochastic short rate r = r(X).
• To solve the Black-Scholes PDE, choose β(t,x) = rx and γ(t,x) = xσ.

4.3 Pricing with Deflators

In a single-period model the deflator is given by (see remark to Proposition 2.7)

H = (1/(1+r)) (dQ/dP).

This motivates the definition of the deflator in continuous time:

H(t) := Z(t)/M(t),

where

M(t) := exp(∫₀ᵗ r(s) ds),   Z(t) := exp(−0.5 ∫₀ᵗ ||θ(s)||² ds − ∫₀ᵗ θ(s)' dW(s)).

The process θ is implicitly defined as the solution to the following system of linear equations:

(10)  σ(t)θ(t) = µ(t) − r(t)1.

Applying Ito's product rule yields the following SDE for H:

dH(t) = d(Z(t)M(t)⁻¹) = −H(t)[r(t)dt + θ(t)' dW(t)].
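In the Black-Scholes setting (one stock, constant coefficients) the deflator is available in closed form, so the pricing rule C(0) = E[H(T)C(T)] under the physical measure can be illustrated by Monte Carlo. The sketch below is not from the notes; the parameter values and function names are illustrative assumptions. The estimate should agree with the Black-Scholes formula.

```python
import random
from math import log, sqrt, exp
from statistics import NormalDist

def deflator_price(S0, K, mu, r, sigma, T, n_paths=200_000, seed=2):
    """Monte Carlo estimate of E[H(T) max(S(T)-K, 0)] under the physical measure."""
    theta = (mu - r) / sigma                     # market price of risk, eq. (10)
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_paths):
        WT = rng.gauss(0.0, sqrt(T))             # W(T) ~ N(0, T)
        ST = S0 * exp((mu - 0.5 * sigma**2) * T + sigma * WT)   # stock under P
        HT = exp(-r * T - 0.5 * theta**2 * T - theta * WT)      # H(T) = Z(T)/M(T)
        acc += HT * max(ST - K, 0.0)
    return acc / n_paths

def bs_call0(S0, K, r, sigma, T):
    """Closed-form time-0 call price for comparison."""
    N = NormalDist().cdf
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S0 * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))
```

Note that the drift µ enters both S(T) and H(T), but the resulting price does not depend on µ, in line with the risk-neutral pricing results below.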
Remarks.
• In this section we are going to work under the physical measure. Nevertheless, we will see later on that Z can be interpreted as a density inducing a change of measure (Girsanov's theorem).
• The system of equations (10) need not have a (unique) solution.
• The process θ is said to be the market price of risk and can be interpreted as a risk premium (excess return per unit of volatility).
• Compare (ii) of the following theorem with Theorem 2.13.

The following theorem shows that H has indeed the desired properties:

Theorem 4.5 (Arbitrage-free Pricing with Deflator and Completeness)
(i) Assume that (10) has a bounded solution for θ. Let ϕ be an admissible portfolio strategy. Then {H(t)X^ϕ(t)}_{t∈[0,T]} is both a local martingale and a supermartingale, implying

(11)  E[H(t)X^ϕ(t)] ≤ X^ϕ(0)  for all t ∈ [0,T].

Hence, the market is arbitrage-free.
(ii) Assume that I = m ("# stocks = # sources of risk") and that σ is uniformly positive definite.¹⁷ Then (10) has a unique solution which is uniformly bounded. Furthermore, let C(T) be an F_T-measurable contingent claim with

(12)  x := E[H(T)C(T)] < ∞.

Then there exists a P⊗Lebesgue-unique portfolio process ϕ such that ϕ is a replication strategy for C(T), i.e.

X^ϕ(T) = C(T)  (P-a.s.)  and  X^ϕ(0) = x.

Hence, the market is complete. The time-t price of the claim equals

C(t) = E[H(T)C(T) | F_t] / H(t),

i.e. the process {H(t)C(t)}_{t∈[0,T]} is a martingale.

Remarks.
• The price in (ii) is thus equal to the expected discounted payoff under the physical measure, where we discount with the state-dependent discount factor H, the deflator.
• The boundedness assumption in (i) can be weakened.

Proof. (i) Applying Ito's product rule, one gets for an admissible ϕ = (ϕ⁰, ϕ¹):¹⁸

d(HX^ϕ) = H dX^ϕ + X^ϕ dH + d<H, X^ϕ>

¹⁷ The process σ is uniformly positive definite if there exists a constant K > 0 such that x'σ(t)σ(t)'x ≥ K x'x for all x ∈ ℝᵐ and all t ∈ [0,T]. This requirement implies that σ(t) is regular.
¹⁸ Note θσ = µ − r.
(13)  d(HX^ϕ) = HX^ϕ r dt + Hϕ¹S(µ−r) dt + Hϕ¹Sσ dW − X^ϕ Hr dt − X^ϕ Hθ dW − Hϕ¹Sθσ dt
             = H(ϕ¹Sσ − X^ϕ θ) dW.

Since ϕ is admissible, we have X^ϕ(t) ≥ 0. By definition, H(t) ≥ 0 and thus H(t)X^ϕ(t) ≥ 0. Therefore, HX^ϕ is a nonnegative local martingale and thus a supermartingale, which proves (11). Note that an arbitrage strategy would violate this supermartingale property.

(ii) Clearly, since σ is uniformly positive definite, θ = (µ−r)/σ is unique and uniformly bounded by some K_θ > 0.¹⁹ Define

X(t) := E[H(T)C(T) | F_t] / H(t).

By definition, X(t) is adapted, X(t) ≥ 0, X(T) = C(T) (P-a.s.), and X(0) = x.²⁰ Set M̃(t) := H(t)X(t) = E[H(T)C(T) | F_t]. Due to (12), M̃ is a martingale with M̃(0) = x. Applying the martingale representation theorem, there exists an {F_t}-progressively measurable real-valued process ψ with ∫₀ᵀ ψ(s)² ds < ∞ such that (P-a.s.)

(14)  H(t)X(t) = M̃(t) = x + ∫₀ᵗ ψ(s) dW(s).

Since H and ∫₀^· ψ(s) dW(s) are continuous, we can choose X to be continuous. Comparing (13) and (14) yields²¹

H(ϕ¹Sσ − Xθ) = ψ,  i.e.  ϕ¹ = [ψ/H + Xθ] / (Sσ).²²

Since X^ϕ = ϕ⁰M + ϕ¹S for any strategy ϕ, we define

ϕ⁰ := [X − ϕ¹S] / M,

which implies that ϕ is self-financing and X = X^ϕ. Since M̃ > 0, H > 0, and X ≥ 0 are continuous, M̃ as well as H are pathwise bounded away from zero and X is pathwise bounded from above, i.e. M̃(ω) ≥ K_M(ω) > 0, H(ω) ≥ K_H(ω) > 0, and X(ω) ≤ K_X(ω) < ∞ for a.a. ω ∈ Ω. Since there exists a K_σ > 0 such that σ(t) ≥ K_σ for all t ∈ [0,T], the process is a trading strategy because²³

∫₀ᵀ (ϕ¹(s)S(s))² ds = ∫₀ᵀ ([ψ(s)/H(s) + X(s)θ(s)] / σ(s))² ds
                    ≤ (2 / (K_H² K_σ²)) ∫₀ᵀ ψ(s)² ds + (2 / K_σ²) K_X² K_θ² T < ∞,

¹⁹ Note that all coefficients are bounded and σ is uniformly positive definite.
²⁰ Since F₀ is the trivial σ-algebra augmented by null sets, every F₀-measurable random variable is a.s. constant and the conditional expected value is equal to the unconditional expected value.
²¹ Note that the representation of an Ito process is unique.
²² By assumption, σ is uniformly positive definite.
²³ (a+b)² ≤ 2a² + 2b².
and

∫₀ᵀ |ϕ⁰(s)| ds = ∫₀ᵀ |X(s) − ϕ¹(s)S(s)| / M(s) ds ≤ (1/K_M) [K_X T + 0.5 ∫₀ᵀ (1 + (ϕ¹(s)S(s))²) ds] < ∞

for P-a.a. ω. □

Theorem 4.6 (Martingale Representation Theorem) Let {M_t}_{t∈[0,T]} be a real-valued process adapted to the Brownian filtration.
(i) If M is a square-integrable martingale, i.e. E[M_t²] < ∞ for all t ∈ [0,T], then there exists a progressively measurable ℝᵐ-valued process {ψ_t}_{t∈[0,T]} with

(15)  E[∫₀ᵀ ||ψ_s||² ds] < ∞

and

M_t = M₀ + ∫₀ᵗ ψ_s' dW_s.

(ii) If M is a local martingale, then the result from (i) holds with ∫₀ᵀ ||ψ_s||² ds < ∞ P-a.s. instead of (15).
In both cases, ψ is P⊗Lebesgue-unique.

Proof. See Korn/Korn (1999, pp. 81ff). □

Remarks.
• All Brownian (local) martingales are stochastic integrals and vice versa!
• The quadratic (co)variation is defined for all (local) Brownian martingales.

Proposition and Definition 4.7 (Quadratic (Co)variation)
(i) Let X, Y be Brownian local martingales, i.e. dX = α'dW and dY = β'dW with progressively measurable processes α and β. The quadratic covariation <X,Y> is the unique adapted, continuous process of bounded variation with <X,Y>₀ = 0 such that XY − <X,Y> is a Brownian local martingale. If X = Y, this process is nondecreasing.
(ii) Let (Ω, G, P, {G_t}) be a filtered probability space satisfying the usual conditions and let X, Y be continuous local martingales for the filtration {G_t}. Then the result of (i) holds with XY − <X,Y> being a local martingale for {G_t}.

Proof. (i) Ito's product rule yields

d(XY) = Y dX + X dY + d<X,Y> = Y α'dW + X β'dW + α'β dt.

Since Ito integrals are local Brownian martingales, XY − <X,Y> is a Brownian local martingale. The rest is obvious.
(ii) See Karatzas/Shreve (1991, p. 36). □

Remark. One can show that if X and Y are square-integrable, then XY − <X,Y> is a martingale.

4.4 Pricing with Risk-neutral Measures

The deflator is defined as H(t) = Z(t)/M(t), where

(16)  dZ(t) = −Z(t)θ(t)' dW(t),  Z(0) = 1.

The process Z is a local martingale. By variation of constants, we get

(17)  Z(t) = exp(−0.5 ∫₀ᵗ ||θ(s)||² ds − ∫₀ᵗ θ(s)' dW(s)) ≥ 0.

By Proposition 3.25, it is thus a supermartingale. If it is even a martingale, we have E[Z(T)] = 1 and we can define a probability measure on F_T by

Q(A) := E[1_A Z(T)],  A ∈ F_T.

Hence, Z(T) is the density of Q and we can rewrite the result from Theorem 4.5 as

C(0) = E[H(T)C(T)] = E[Z(T) C(T)/M(T)] = E_Q[C(T)/M(T)],

i.e. the price of a claim equals its expected discounted payoff, where we discount with the money market account (instead of the deflator) and the expectation is calculated w.r.t. the measure Q (instead of the physical measure). Two important questions arise:

Q1. Under which conditions is Z a martingale? (Novikov condition)
Q2. What happens to the dynamics of the assets if we change to the measure Q? (Girsanov's theorem)

Proposition 4.8 Let θ be progressive and K > 0. If ∫₀ᵀ ||θ(s)||² ds ≤ K, then the process {Z(t)}_{t∈[0,T]} defined by (16) is a martingale.

Proof. For simplicity, m = 1. Since Z is a nonnegative supermartingale, it is sufficient to show that E[Z(T)] = 1. Define

τ_n := inf{ t ≥ 0 : |∫₀ᵗ θ(s) dW(s)| ≥ n }.
Due to (17), we get Z(T∧τ_n)² ≤ e^{2n} and thus ∫₀^{T∧τ_n} Z(s)²θ(s)² ds ≤ K e^{2n}. Therefore, we obtain E[Z(T∧τ_n)] = 1. Hence, the claim is proved if {Z(T∧τ_n)}_{n∈ℕ} is UI because in this case

lim_{n→∞} E[Z(T∧τ_n)] = E[Z(T)].

We obtain the estimate

Z(T∧τ_n)^{1+ε} ≤ exp(0.5(ε+ε²) ∫₀ᵀ θ(s)² ds) · Y(T∧τ_n),

where dY(t) = −Y(t)(1+ε)θ(t) dW(t), Y(0) = 1, and the first factor is independent of n and bounded by assumption. With the same arguments leading to E[Z(T∧τ_n)] = 1 we conclude E[Y(T∧τ_n)] = 1. Therefore,

sup_{n∈ℕ} E[Z(T∧τ_n)^{1+ε}] < ∞,

which, by Proposition 3.19, proves UI of {Z(T∧τ_n)}_{n∈ℕ}. □

Remark. The requirement in the proposition is a strong one. It can be replaced by the Novikov condition (see Karatzas/Shreve (1991, pp. 198ff)):

E[exp(0.5 ∫₀ᵀ ||θ(s)||² ds)] < ∞.

We turn to the second question and define (Q := Q_T)

(18)  Q_t(A) := E[1_A Z(t)],  A ∈ F_t, t ∈ [0,T].

The following lemma shows that Q_t is well-defined.

Lemma 4.9 For a bounded stopping time τ ∈ [0,T] and A ∈ F_τ, we get Q_T(A) = Q_τ(A).

Proof. The optional sampling theorem (OS, see Theorem 3.17) yields

Q_T(A) = E[1_A Z(T)] = E[E[1_A Z(T) | F_τ]] = E[1_A E[Z(T) | F_τ]] =(OS)= E[1_A Z(τ)] = Q_τ(A). □

Theorem 4.10 (Levy's Characterization of the Brownian Motion) An m-dimensional stochastic process X with X(0) = 0 is an m-dimensional Brownian motion if and only if it is a continuous local martingale with²⁴

<X_j, X_k>_t = δ_{jk} t.

²⁴ Note that δ_{jk} = 1_{{j=k}} is the Kronecker delta.
Proof. Clearly, the Brownian motion is a continuous local martingale with <W>_t = t. To prove the converse, fix u ∈ ℝ and set f(t,x) := exp(iux + 0.5u²t), where i = √(−1), and Z_t := f(t, X_t). For simplicity, m = 1. Since f ∈ C^{1,2} is analytic, a complex-valued version of Ito's formula yields

dZ = f_t dt + f_x dX + 0.5 f_xx d<X> = 0.5u²f dt + iuf dX − 0.5u²f dt = iuf dX,

where we used <X>_t = t. Since an integral w.r.t. a local martingale is a local martingale as well,²⁵ Z is a (complex-valued) local martingale which is bounded on [0,T] for every T ∈ ℝ. Hence, Z is a martingale implying

E[exp(iuX_t + 0.5u²t) | F_s] = exp(iuX_s + 0.5u²s),

i.e. E[exp(iu(X_t − X_s)) | F_s] = exp(−0.5u²(t−s)). Since this holds for any u ∈ ℝ, the difference X_t − X_s ∼ N(0, t−s) and X_t − X_s is independent of F_s. □

Lemma 4.11 (Bayes Rule) Let µ and ν be two probability measures on a measurable space (Ω, G) such that dν/dµ = f for some f ∈ L¹(µ). Let X be a random variable on (Ω, G) such that E_ν[|X|] < ∞. Let H be a σ-field with H ⊂ G. Then

E_ν[X | H] · E_µ[f | H] = E_µ[fX | H]  a.s.

Proof. Let H ∈ H. We start with the left-hand side:

∫_H Xf dµ = ∫_H X dν = ∫_H E_ν[X|H] dν = ∫_H E_ν[X|H] f dµ
          = E_µ[E_ν[X|H] · f 1_H]
          = E_µ[E_µ[E_ν[X|H] · f 1_H | H]]
          = E_µ[1_H E_ν[X|H] · E_µ[f|H]]
          = ∫_H E_ν[X|H] · E_µ[f|H] dµ. □

Proposition 4.12 Let {Y_t, F_t}_{t∈[0,T]} be a right-continuous adapted process. The following statements are equivalent:
(i) {Z_t Y_t, F_t}_{t∈[0,T]} is a local P-martingale.
(ii) {Y_t, F_t}_{t∈[0,T]} is a local Q_T-martingale.

²⁵ For instance, a Brownian local martingale can be represented as an Ito integral.
e. Ito’s product rule leads to d Z · ((W Q )2 − t) = d W Q · (ZW Q ) − d(Z · t) = W Q d(ZW Q ) + ZW Q dW Q + d < ZW Q . the claim (ii) follows. Bayes rule can be applied. the process {W Q (t).14 (Girsanov II) If {Mt }t∈[0. Ft∧τn ⊂ FT . QT ). . i.T ] is a continuous local martingale on (Ω. Qt∧τn instead of QT .T ] is a Brownian local P martingale and t Dt := 0 26 Note 1 d < Z. Applying Lemma 4. By Proposition 4. W Q > −Zdt − tdZ = W Q d(ZW Q ) + ZW Q (θdt + dW ) + (Z − W Q Zθ)dt − Zdt + tθZdW = W Q d(ZW Q ) + Z(W Q + tθ)dW. j ∈ {1. . m = 1. i.T ] is an mdimensional Brownian motion on (Ω. FT . Assume (i) to hold and let {τn }n be a localizing sequence of ZY . Lemma 4.12 again yields that {(WtQ )2 − t} is a local QT martingale.r. m}. Z > = ZdW + Zθdt − W Q ZθdW − Zθdt = (Z − W Q Zθ)dW.9 = EQt∧τn [Yt∧τn Fs ] = Bayes EP [Zt∧τn Yt∧τn Fs ] EP [Zt∧τn Fs ] = Zs∧τn Ys∧τn = Ys∧τn .9 ensures that both measures coincide on Ft∧τn . Zs that Yt∧τn is Ft∧τn measurable.7 (ii). Ft }t∈[0. Furthermore. t ≥ 0. i. . Ito’s product rule yields d(ZW Q ) = ZdW Q + W Q dZ + d < W Q .t. . Ft }t∈[0. we need to show that (i) {W Q (t). For ﬁxed T ∈ [0.23. QT ). Then26 EQT [Yt∧τn Fs ] Lem 4. ad (ii). EP [Zt∧τn Fs ] = Zs∧τn . where QT is deﬁned by (18). Zs∧τn 2 The converse implication can be proved similarly (exercise). Proof. For simplicity. FT . . Ft } by t WjQ (t) := Wj (t) + 0 θj (s) ds. i.e. ∞).e. Theorem 4.12. the conditional expected value can be computed w. by Proposition 3. M >s . By Levy’s characterization. ad (i). By Lemma 4.4 CONTINUOUSTIME MARKET MODEL AND OPTION PRICING 60 Proof. (ii) < W Q >t = t on (Ω. Besides. ZW Q is a local P martingale.e. the process W Q is a local QT martingale as well. 2 Corollary 4. FT .13 (Girsanov’s Theorem) We assume that Z is a martingale and deﬁne the process {W Q (t). {Zt · ((WtQ )2 − t)} is a local P martingale. QT ).
then {M_t − D_t}_{t∈[0,T]} is a continuous local Q_T-martingale and <M − D>^Q = <M>, i.e. the quadratic variations of M − D under Q_T and of M under P coincide.

Proof. By the martingale representation theorem, we have dM = α dW for some progressive α. Besides,

dD = (1/Z) d<Z, M> = −(1/Z) θZα dt = −θα dt

and thus

d(M − D) = α dW + θα dt = α dW^Q,

i.e. M − D is a local Q-martingale. Besides,

d<M − D>^Q = α² dt = d<M>_t,

which completes the proof. □

Theorem 4.15 (Converse Girsanov) Let Q be an equivalent probability measure to P on F_T. Then the density process {Z(t), F_t}_{t∈[0,T]} defined by

Z(t) := dQ/dP |_{F_t}

is a positive Brownian P-martingale possessing the representation

dZ(t) = −Z(t)θ(t)' dW(t),  Z(0) = 1,

for a progressively measurable m-dimensional process θ with

(19)  ∫₀ᵀ ||θ(s)||² ds < ∞  P-a.s.

Proof. Z(t) = E[Z(T) | F_t], implying that Z is a Brownian martingale. By the martingale representation theorem,

Z(t) = 1 + ∫₀ᵗ ψ(s)' dW(s)

with a progressive process ψ satisfying ∫₀ᵀ ||ψ(s)||² ds < ∞ P-a.s. Since Q and P are equivalent on F_T, both measures are equivalent on F_t, t ∈ [0,T]. Hence, Z(t) > 0 for all t ∈ [0,T], P-a.s. Since Z > 0 is continuous, we have Z(ω) ≥ K_Z(ω) > 0 for a.a. ω ∈ Ω, i.e. Z is pathwise bounded away from zero. Defining

θ(t) := −ψ(t)/Z(t)

thus implies (19), which completes the proof. □
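Both the martingale property of the density and the measure change of Girsanov's theorem can be sanity-checked by simulation. The following sketch (standard Python, not part of the notes; constant θ is an illustrative assumption) verifies that E_P[Z(T)] = 1 and that the shifted endpoint W^Q(T) = W(T) + θT has Q-mean zero, i.e. E_P[Z(T)·W^Q(T)] = 0, consistent with W^Q being a Q-Brownian motion.

```python
import random
from math import sqrt, exp

def girsanov_checks(theta, T, n_paths=100_000, seed=7):
    """Return (E_P[Z(T)], E_P[Z(T) * W^Q(T)]) for constant theta."""
    rng = random.Random(seed)
    mean_z, mean_zwq = 0.0, 0.0
    for _ in range(n_paths):
        WT = rng.gauss(0.0, sqrt(T))                 # W(T) ~ N(0, T) under P
        ZT = exp(-0.5 * theta**2 * T - theta * WT)   # density Z(T), eq. (17)
        mean_z += ZT
        mean_zwq += ZT * (WT + theta * T)            # weighted W^Q(T)
    return mean_z / n_paths, mean_zwq / n_paths
```

Both quantities are exact expectations (1 and 0, respectively), so the Monte Carlo estimates should deviate only by sampling noise.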
Definition 4.3 (Doleans-Dade Exponential) For an Ito process X with X₀ = 0, the Doleans-Dade exponential (stochastic exponential) of X, written E(X), is the unique Ito process Y that is the solution of

Y_t = 1 + ∫₀ᵗ Y_s dX_s.

Remarks.
• Therefore, the density Z is sometimes written as E(−∫₀^· θ_s' dW_s).
• One can show that E(X)E(Y) = E(X + Y + <X,Y>) and that (E(X))⁻¹ = E(−X + <X,X>).

Definition 4.4 (Equivalent Martingale Measure) A probability measure Q is a (weak) equivalent martingale measure if (i) Q ∼ P and (ii) all discounted price processes S_i/M, i = 1,...,I, are (local) Q-martingales. An equivalent martingale measure (EMM) is also said to be a risk-neutral measure.

We now come back to our pricing problem. Recall our definitions

H(t) := Z(t)/M(t),   M(t) := exp(∫₀ᵗ r(s) ds),   Z(t) := exp(−0.5 ∫₀ᵗ ||θ(s)||² ds − ∫₀ᵗ θ(s)' dW(s)),

and θ is implicitly defined by

(20)  σ(t)θ(t) = µ(t) − r(t)1.

For bounded θ, Z(t) = dQ/dP |_{F_t} is a density process for the measure Q_t(A) = E[Z(t)1_A] and, by Girsanov's theorem, the process {W^Q(t), F_t} defined by

W_j^Q(t) := W_j(t) + ∫₀ᵗ θ_j(s) ds,  t ≥ 0,

is a Q-Brownian motion. We set Q := Q_T.

Proposition 4.16 If Q is a weak EMM and

(21)  E[exp(0.5 ∫₀ᵀ ||σ_i(s)||² ds)] < ∞  for every i ∈ {1,...,I},

then Q is already an EMM.
Proof. Define Y_i := S_i/M. Since Q is a weak martingale measure, Y_i is a local Q-martingale. Besides, since Q is equivalent to P, by the converse Girsanov theorem there exists a process θ such that dQ/dP = E(−∫₀^· θ_s' dW_s) and dW^Q = dW + θ dt is a Brownian increment under Q. Therefore,

dS_i = S_i[µ_i dt + σ_i' dW] = S_i[(µ_i − σ_i'θ) dt + σ_i' dW^Q],

implying

dY_i = Y_i[(µ_i − σ_i'θ − r) dt + σ_i' dW^Q].

Since Y_i is a local Q-martingale, the drift vanishes and dY_i = Y_i σ_i' dW^Q, which is a Q-martingale by the Novikov condition (21). □

Theorem 4.17 (Characterization of EMM) In the Brownian market (6), the following are equivalent:
(i) The probability measure Q is an EMM.
(ii) There exists a solution θ to the system of equations (20) such that the process Z = E(−∫₀^· θ_s' dW_s) is a Girsanov density inducing the probability measure Q.

Proof. Assume (i) to hold. Define Y_i := S_i/M. By definition, this is a Q-martingale. By the martingale representation theorem and the uniqueness of the representation of an Ito process,

(22)  dY_i = Y_i σ_i' dW^Q.

Besides, since Q is equivalent to P, by the converse Girsanov theorem there exists a process θ such that dQ/dP = E(−∫₀^· θ_s' dW_s) and dW^Q = dW + θ dt is a Q-Brownian increment. Ito's formula leads to

(23)  dY_i = Y_i[(µ_i − r) dt + σ_i' dW] = Y_i[(µ_i − r − σ_i'θ) dt + σ_i' dW^Q].

Comparing (22) and (23) gives (ii).

On the other hand, assume (ii) to hold. Then Q is equivalent to P and dW^Q = dW + θ dt is a Q-Brownian increment. Hence,

dS_i = S_i[µ_i dt + σ_i' dW] = S_i[(µ_i − σ_i'θ) dt + σ_i' dW^Q],

implying dY_i = Y_i σ_i' dW^Q, which is a Q-martingale increment because σ is assumed to be uniformly bounded. □

Corollary 4.18 (Q-Dynamics of the Assets) Under an EMM Q, the dynamics of asset i are given by

dS_i(t) = S_i(t)[r(t) dt + σ_i(t)' dW^Q(t)],

i.e. the drift µ does not matter under Q.

Remark. Since we have assumed that σ is uniformly bounded, we need not distinguish between EMM and weak EMM.
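The defining property of an EMM, namely that S/M is a Q-martingale, can be checked numerically in the constant-coefficient case: rewriting the Q-expectation under P with the density Z(T) gives E_P[Z(T)·S(T)/M(T)] = S(0). The sketch below is illustrative (parameter values assumed, not from the notes).

```python
import random
from math import sqrt, exp

def discounted_price_q_mean(S0, mu, r, sigma, T, n_paths=200_000, seed=3):
    """Estimate E_P[Z(T) * S(T)/M(T)]; should equal S(0) under an EMM."""
    theta = (mu - r) / sigma                  # market price of risk, eq. (20)
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_paths):
        WT = rng.gauss(0.0, sqrt(T))
        ST = S0 * exp((mu - 0.5 * sigma**2) * T + sigma * WT)   # P-dynamics
        ZT = exp(-0.5 * theta**2 * T - theta * WT)              # density dQ/dP
        acc += ZT * ST * exp(-r * T)                            # Z(T) S(T)/M(T)
    return acc / n_paths
```

The estimate is independent of the drift µ up to sampling noise, in line with Corollary 4.18.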
Theorem 4.19 (Arbitrage-free Pricing under Q and Completeness)
(i) Let Q be an EMM and let ϕ be an admissible portfolio strategy. Then {X^ϕ(t)/M(t)}_{t∈[0,T]} is both a local Q-martingale and a Q-supermartingale, implying

E_Q[X^ϕ(t)/M(t)] ≤ X^ϕ(0)  for all t ∈ [0,T].

Hence, the market is arbitrage-free. If additionally

(24)  E_Q[∫₀ᵀ (ϕ_i(s) S_i(s)/M(s))² ds] < ∞  for every i = 1,...,I,

then {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-martingale.
(ii) Assume that I = m ("# stocks = # sources of risk") and that σ is uniformly positive definite. Then (10) has a unique solution which is uniformly bounded, implying that there exists a unique EMM Q. Besides, let C(T) be an F_T-measurable contingent claim with

x := E_Q[C(T)/M(T)] < ∞.

Then the time-t price of the claim equals

C(t) = M(t) E_Q[C(T)/M(T) | F_t],

i.e. the discounted price process {C(t)/M(t)}_{t∈[0,T]} is a Q-martingale. Besides, the market is complete.

Proof. (i) Consider

dX^ϕ = ϕ' dS = Xr dt + Σ_{i=1}^{I} ϕ_i S_i σ_i' dW^Q.

Since d(1/M) = −(1/M)r dt,

d(X/M) = X d(1/M) + (1/M) dX = (1/M) Σ_{i=1}^{I} ϕ_i S_i σ_i' dW^Q,

which is a positive local Q-martingale and thus a Q-supermartingale. If (24) holds, then it is a Q-martingale.
(ii) Since θ is unique, the EMM Q is unique. This follows from Theorem 4.17. The claim price follows from Theorem 4.5 because H = Z/M. □

Remarks.
• If ϕ is uniformly bounded, then (24) holds because µ, σ, and r are assumed to be uniformly bounded.
• An EMM exists if, for instance, θ is bounded.
• If the market is incomplete, then there exists more than one market price of risk θ̃. Let Q and Q̃ be two corresponding EMMs. If (24) is satisfied, then for an admissible strategy ϕ we get

M(t) E_Q[X^ϕ(T)/M(T) | F_t] = X^ϕ(t) = M(t) E_{Q̃}[X^ϕ(T)/M(T) | F_t].
This implies that any EMM assigns the same value to a replicable claim, namely its replication costs. However, the prices of non-replicable claims may differ under different EMMs. This is the reason why pricing in incomplete markets is so complicated and in some sense arbitrary.

Proposition 4.20 (Black-Scholes Again) The price of the European call is given by

C(t) = S(t) · N(d₁(t)) − K · N(d₂(t)) · e^{−r(T−t)}

with

d₁(t) = [ln(S(t)/K) + (r + 0.5σ²)(T−t)] / (σ √(T−t)),   d₂(t) = d₁(t) − σ √(T−t).

Proof. By Theorem 4.19,

C(t) = E_Q[max{S(T) − K, 0} | F_t] e^{−r(T−t)},

where due to Corollary 4.18 dS(t) = S(t)[r dt + σ dW^Q(t)]. Thus S(T) has under Q the same distribution as Y(T) in the proof of Theorem 4.3, and the result follows. □

Corollary 4.21 (Q-Dynamics of Claims) Under the assumptions of Theorem 4.19, the dynamics of a replicable claim C(T) are given by

(25)  dC(t) = C(t)[r(t) dt + ψ(t)' dW^Q(t)]

with a suitable progressively measurable process ψ.

Proof. By Theorem 4.19, C/M is a (local) Q-martingale and thus (25) follows from the martingale representation theorem. □

Corollary 4.22 (Black-Scholes PDE Again and Hedging) Under the assumptions of Theorem 4.19, let C(T) be a replicable claim with C(t) = f(t, S(t)) and C(T) = h(S(T)), where f is a C^{1,2} function.
(i) The pricing function f satisfies the following multidimensional Black-Scholes PDE:

f_t + Σ_{i=1}^{I} f_i s_i r + 0.5 Σ_{i,k=1}^{I} Σ_{j=1}^{m} s_i s_k σ_{ij} σ_{kj} f_{ik} − rf = 0,   f(T,s) = h(s),  s ∈ ℝ₊ᴵ,

where f_i denotes the partial derivative w.r.t. the i-th asset price.
(ii) A replication strategy for C(T) is given by

ϕ_i(t) = f_i(t, S(t)),  i = 1,...,I,

where the remaining funds C(t) − Σ_{i=1}^{I} ϕ_i(t)S_i(t) are invested in the money market account, i.e. ϕ^M(t) = (C(t) − Σ_{i=1}^{I} ϕ_i(t)S_i(t))/M(t).

Proof. (i) Ito's formula yields

(26)  dC = [f_t + Σ_i f_i S_i r + 0.5 Σ_{i,k} Σ_j f_ik S_i S_k σ_{ij} σ_{kj}] dt + Σ_i Σ_j f_i S_i σ_{ij} dW_j^Q.

Comparing the drift of (26) with the drift of (25) yields the Black-Scholes PDE.
(ii) Since C(T) is attainable, there exists a replication strategy ϕ with X^ϕ(T) = C(T) and Q-dynamics

dX^ϕ = X^ϕ r dt + Σ_{i=1}^{I} Σ_{j=1}^{m} ϕ_i S_i σ_{ij} dW_j^Q,

where due to Theorem 4.19 (ii) the drift equals r.²⁷ Comparing the diffusion part with the diffusion part of (26) shows that one can choose ϕ_i = f_i. □

Remarks.
• The strategy in (ii) is said to be a delta hedging strategy.
• Under the conditions of Theorem 4.19 (ii), the delta hedging strategy is the unique replication strategy.
• If I > m ("# stocks > # sources of risk"), then in general there exist further replication strategies, i.e. the choice ϕ_i = f_i in the proof of (ii) is not the only possible choice.
• In the Black-Scholes model, we get for the European call (see Exercise 29): ϕ^S = N(d₁). Due to the put-call parity, the delta hedge for the put reads ϕ^S = −N(−d₁).

In Theorem 4.19, we have proved that the following implications hold:

1) Existence of an EMM  =⇒  No arbitrage
2) Uniqueness of the EMM  =⇒  Completeness

The following theorems show to which extent the converse is true.

Theorem 4.23 (Existence of a Market Price of Risk) Assume that the market (6) is arbitrage-free. Then there exists a market price of risk, i.e. there exists a progressively measurable process θ solving the system (10).

Proof. Define φ_i := ϕ_i S_i (amount of money invested in asset i), leading to the following version of the wealth equation:

dX(t) = X(t)r(t) dt + φ(t)'(µ(t) − r(t)1) dt + φ(t)'σ(t) dW(t).

²⁷ The representation of the diffusion part follows from rewriting Xπ'σ dW^Q in terms of ϕ = πX/S.
Assume that there exists a self-financing strategy with φ(t)'σ(t) = 0 and φ(t)'(µ(t) − r(t)1) ≠ 0. This strategy would be an arbitrage since it does not involve risk, but its wealth equation has a drift different from r. Thus, in an arbitrage-free market every vector in the kernel Ker(σ(t)') should be orthogonal to the vector µ(t) − r(t)1. But the orthogonal complement of Ker(σ(t)') is Range(σ(t)). Hence, there exists a θ(t) such that

µ(t) − r(t)1 = σ(t)θ(t).

The progressive measurability of θ is proved in Karatzas/Shreve (1999, pp. 12ff). □

Theorem 4.24 (Uniqueness of the EMM) Assume that the market is complete and let Q be an EMM such that {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-martingale for all admissible strategies ϕ with E[X^ϕ(T)/M(T)] < ∞. If Q̃ is another EMM with this property, then Q = Q̃.

Proof. By Theorem 4.17, both measures are characterized by market prices of risk θ and θ̃ such that

dW_j^Q(t) = dW_j(t) + θ_j(t) dt,   dW_j^{Q̃}(t) = dW_j(t) + θ̃_j(t) dt.

Fix j ∈ {1,...,m} and define

Y_j(T) := exp(∫₀ᵀ r(s) ds − 0.5T + W_j^Q(T)).

Then

Y_j(t)/M(t) = 1 + ∫₀ᵗ (Y_j(s)/M(s)) dW_j^Q(s) = 1 + ∫₀ᵗ (Y_j(s)/M(s)) dW_j^{Q̃}(s) + ∫₀ᵗ (Y_j(s)/M(s)) (θ_j(s) − θ̃_j(s)) ds.

Since the market is complete, there exists an admissible trading strategy ϕ_j such that X^{ϕ_j}(T) = Y_j(T). From Theorem 4.19, we know that the process X^{ϕ_j}/M is a local martingale under Q and Q̃. By assumption, X^{ϕ_j}/M is even a Q-martingale, implying

X^{ϕ_j}(t) = M(t) E_Q[X^{ϕ_j}(T)/M(T) | F_t] = M(t) E_Q[Y_j(T)/M(T) | F_t] = Y_j(t)  P-a.s.,

i.e. Y_j is a modification of X^{ϕ_j}. Since both processes are continuous, X^{ϕ_j} and Y_j are indistinguishable. Hence, Y_j/M is a local martingale under Q and Q̃. Since Y_j/M is a Brownian local Q̃-martingale, the process Y_j(θ_j − θ̃_j)/M needs to be indistinguishable from zero.²⁸ Since Y_j/M > 0, the market prices of risk θ_j and θ̃_j are indistinguishable. Since this holds for every j, we get Q = Q̃. □

²⁸ Recall that the representation of Ito processes is unique up to indistinguishability.
Remarks.
• Although a market price of risk exists in an arbitrage-free market, it is not clear if an EMM exists: in Theorem 4.23 nothing is said about whether the corresponding process Z is a martingale.
• Since {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-supermartingale, this process is a Q-martingale if, for instance,

E_Q[X^ϕ(T)/M(T)] = X^ϕ(0).

• Obviously, it is sufficient to check if {X^{ϕ_j}(t)/M(t)}_{t∈[0,T]} is a Q-martingale for every j = 1,...,m.

5 Continuous-time Portfolio Optimization

5.1 Motivation

In the theory of option pricing the following problem is tackled:

Given: Payoff C(T)
Wanted: Today's price C(0) and hedging strategy ϕ

In the theory of portfolio optimization the converse problem is considered:

Given: Initial wealth X(0)
Wanted: Optimal final wealth X*(T) and optimal strategy ϕ*

Question: What is "optimal"?

1st Idea. Maximize expected final wealth, i.e. max_ϕ E[X^ϕ(T)].

Problem: Petersburg Paradox. Consider a game where a coin is flipped. If head occurs for the first time in the nth round, then the player gets 2^{n−1} Euros. Hence, the expected value is

(1/2)·1 + (1/4)·2 + (1/8)·4 + ... + (1/2ⁿ)·2^{n−1} + ... = Σ_{n=1}^{∞} (1/2ⁿ)·2^{n−1} = Σ_{n=1}^{∞} 1/2 = ∞.

Hence, investors who maximize expected final wealth should be willing to pay ∞ Euros to play this game! However, people are risk averse: for instance, if they can choose between

• 50 Euros or
• 100 Euros with probability 0.5 and 0 Euros with probability 0.5,

they choose the 50 Euros. Therefore, maximization of expected final wealth is usually not a reasonable criterion.

Bernoulli's idea. Measure payoffs in terms of a utility function U(x) = ln(x), i.e.

(1/2)·ln(1) + (1/4)·ln(2) + (1/8)·ln(4) + ... + (1/2ⁿ)·ln(2^{n−1}) + ... = Σ_{n=1}^{∞} (1/2ⁿ)·ln(2^{n−1}) = ln(2) Σ_{n=1}^{∞} (n−1)/2ⁿ = ln(2).
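Bernoulli's resolution of the paradox can be checked numerically. The short sketch below (standard Python, not from the notes) sums the first terms of both series: the truncated expected payoff grows without bound (each term contributes 1/2), while the expected log-utility converges to ln(2), i.e. a certainty equivalent of e^{ln 2} = 2 Euros.

```python
from math import log

def truncated_expectations(n_rounds):
    """Partial sums of E[payoff] and E[ln(payoff)] for the Petersburg game."""
    # round n (first head in round n) has probability 1/2^n and payoff 2^(n-1)
    exp_payoff = sum(2 ** (n - 1) / 2 ** n for n in range(1, n_rounds + 1))
    exp_log = sum(log(2 ** (n - 1)) / 2 ** n for n in range(1, n_rounds + 1))
    return exp_payoff, exp_log
```

With 50 rounds the expected payoff already equals 25 (and keeps growing linearly in the truncation level), whereas the expected log-utility has converged to ln(2) ≈ 0.6931.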
• The investor’s risk aversion is reﬂected by the concavity of U which implies that for any nonconstant random variable Z. =⇒ Determine the optimal wealth via a static optimization problem and interpret this wealth as a contingent claim 2. There exist two approaches to solve the dynamic optimization problem (27): 1. Martingale approach: Theorem 4. by Jensen’s inequality. an investor following this criterion would be willing to pay at most 2 Euros to play this game. x 0 X π (0) = x0 . π∈A where A := {π : π admissible and E[U (X π (T ))− ] < ∞} and dX π (t) = X π (t) (r(t) + π(t) (µ(t) − r(t)1))dt + π(t) σdW (t) . ∞) → I is said to be a utility function if R U (0) := lim U (x) = ∞.2 Martingale Approach Assumption in the Martingale Approach: The market is complete. i. and continuously diﬀerentiable function U : (0. • The investor’s greed is reﬂected by U > 0. 5.5 CONTINUOUSTIME PORTFOLIO OPTIMIZATION 69 Hence.1 (Utility Function) A strictly concave. loosely speaking: “More is better than less”.5 ensures that in a complete market any claim can be replicated.e. x ∞ Remarks. the investor maximizes the following utility functional (27) sup E[U (X π (T ))]. E[U (Z)] < U (E[Z]). strictly increasing. loosely speaking: “A bird in the hand is worth two in the bush”. U (∞) := lim U (x) = 0. and the utility function U has the following properties: Deﬁnition 5.e. We consider a Brownian market (6) with bounded coefﬁcients. The martingale approach consists of a threestep procedure: . This idea can be generalized: For a given initial wealth x0 > 0. i. Optimal control approac: We know that expectations can be represented as solutions to PDEs (FeynmanKac theorem) =⇒ Represent the functional (27) as PDE and solve the PDE Standing Assumption.
1st step. Compute the optimal wealth X*(T) by solving a static optimization problem.
2nd step. Prove that the static optimization problem is equivalent to the dynamic optimization problem (27).
3rd step. Determine the replication strategy for X*(T), which, in the context of portfolio optimization, is said to be the optimal portfolio strategy π*.

1st step. Consider the constrained static optimization problem

sup_{X∈C} E[U(X)]

subject to the so-called budget constraint

(BC)  E[H(T)X] ≤ x₀

and C := {C(T) ≥ 0 : C(T) is F_T-measurable and E[U(C(T))⁻] < ∞}. The BC states that the present value of terminal wealth cannot exceed the initial wealth x₀ (see Theorem 4.5 (i)).

Heuristic solution. Pointwise Lagrangian:

L(X, λ) := U(X) + λ(x₀ − H(T)X).

First-order condition (FOC):

U'(X) − λH(T) = 0,  i.e.  X* = V(λH(T)),

where V := (U')⁻¹ : (0,∞) → (0,∞) is strictly decreasing. Since the investor is greedy, the BC is satisfied as an equality and we can choose λ* such that

(28)  Γ(λ) := E[H(T)V(λH(T))] = x₀,

leading to λ* = Λ(x₀) := Γ⁻¹(x₀).

Lemma 5.1 (Properties of Γ) Assume Γ(λ) < ∞ for all λ > 0. Then Γ is
(i) strictly decreasing,
(ii) continuous on (0,∞),
(iii) Γ(0) := lim_{λ↘0} Γ(λ) = ∞ and Γ(∞) := lim_{λ→∞} Γ(λ) = 0.

Proof. (i) V is strictly decreasing on (0,∞). Since H(t) > 0 for every t ∈ [0,T], the function Γ is also strictly decreasing.
(ii) follows from the continuity of H and V as well as the dominated convergence theorem.²⁹

²⁹ To check continuity at λ₀, choose some λ₁ < λ₀. Then, due to (i), we can use Γ(λ₁) < ∞ as a majorant.
Lemma 5.2 For a utility function U with (U')^{-1} = V we get

U(V(y)) ≥ U(x) + y(V(y) − x)   for 0 < x, y < ∞.

Proof. By concavity of U, a Taylor approximation leads to U(x2) ≤ U(x1) + U'(x1)(x2 − x1) and thus U(x1) ≥ U(x2) + U'(x1)(x1 − x2), implying

U(V(y)) ≥ U(x) + U'(V(y))(V(y) − x) = U(x) + y(V(y) − x). □
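Lemma 5.2 is a pure concavity inequality and is easy to test numerically. The following sketch checks it on a grid for a power utility (γ = 0.5 chosen only for illustration):

```python
# Check of Lemma 5.2 for U(x) = x**gamma / gamma with gamma = 0.5:
# U(V(y)) >= U(x) + y (V(y) - x), where V = (U')^{-1} and U'(x) = x**(gamma - 1).
gamma = 0.5

def U(x):
    return x**gamma / gamma

def V(y):
    return y ** (1.0 / (gamma - 1.0))

grid = [0.1 * k for k in range(1, 60)]   # x and y range over (0, 6)
ok = all(
    U(V(y)) + 1e-9 >= U(x) + y * (V(y) - x)   # small tolerance for rounding
    for x in grid
    for y in grid
)
assert ok
```

Equality holds exactly at x = V(y), which is the first-order condition from the heuristic solution above.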
Theorem 5.3 (Optimal Final Wealth) Assume x0 > 0 and Γ(λ) < ∞ for all λ > 0. Then the optimal final wealth reads

X* = V(Λ(x0) · H(T)),

and there exists an admissible optimal strategy π* ∈ A such that X^{π*}(0) = x0 and X^{π*}(T) = X*.

Proof. Due to Lemma 5.1, Λ(x) = Γ^{-1}(x) exists on (0, ∞) with Λ' ≤ 0, Λ(0) := lim_{x↓0} Λ(x) = ∞, and Λ(∞) := lim_{x→∞} Λ(x) = 0. Hence, one can always find a λ such that (28) is satisfied as an equality. By definition of Λ(x0), we get E[H(T)X*] = x0. Besides, X* > 0 because V > 0. Hence, X* satisfies all constraints and, by Theorem 4.5 (ii), there exists a strategy π* replicating X*. We now verify E[U(X*)^-] < ∞. Due to Lemma 5.2,

U(X*) ≥ U(1) + Λ(x0)H(T)(X* − 1),

and thus (note that a ≥ b implies a^- ≤ b^-)

E[U(X*)^-] ≤ U(1) + Λ(x0) E[H(T)(X* + 1)] ≤ U(1) + Λ(x0)(x0 + E[H(T)]) < ∞.

Hence, π* ∈ A. Finally, we show that π* and X* are indeed optimal. Consider some π ∈ A. Due to Lemma 5.2,

U(X*) ≥ U(X^π(T)) + Λ(x0)H(T)(X* − X^π(T))   P-a.s.,

and thus

E[U(X*)] ≥ E[U(X^π(T))] + Λ(x0) E[H(T)(X* − X^π(T))]
         = E[U(X^π(T))] + Λ(x0)(x0 − E[H(T)X^π(T)]) ≥ E[U(X^π(T))],

where the last inequality follows from the BC. This gives the desired result. □

To determine the optimal strategy π*, one can apply a similar method as in Proposition 4.22 (ii). This is demonstrated in the following example.

Example “Power Utility with Constant Coefficients”. We make the following assumptions:
• Power utility function U(x) = (1/γ)x^γ
• I = m = 1 (“one stock, one Brownian motion”)
• The coefficients r, μ, σ are constant
• σ > 0, i.e. the market is complete.

Since V(y) = (U')^{-1}(y) = y^{1/(γ−1)}, the optimal final wealth reads

X* = V(Λ(x0)H(T)) = (Λ(x0)H(T))^{1/(γ−1)},

where

(29)  H(T)^{1/(γ−1)} = exp( (1/(1−γ))(r + 0.5θ²)T + (1/(1−γ))θW(T) )

and θ = (μ − r)/σ. Since

Γ(λ) = E[H(T)V(λH(T))] = λ^{1/(γ−1)} E[H(T)^{γ/(γ−1)}] = x0,

we get

Λ(x0)^{1/(γ−1)} = x0 exp( −(γ/(1−γ))(r + 0.5θ²)T − 0.5(γ²/(1−γ)²)θ²T ).

Therefore,

X* = (Λ(x0)H(T))^{1/(γ−1)} = x0 exp( rT + 0.5θ²T − 0.5(γ²/(1−γ)²)θ²T + (1/(1−γ))θW(T) ).

3rd step. On the other hand, the final wealth for a strategy π is given by

(30)  X^π(T) = x0 exp( rT + ∫_0^T ((μ − r)π_s − 0.5σ²π_s²) ds + ∫_0^T σπ_s dW_s ).

By comparing the diffusion parts of (29) and (30) we guess the optimal strategy:

π*(t) = (1/(1−γ)) (μ − r)/σ²,

implying

X^{π*}(T) = x0 exp( rT + (1/(1−γ))θ²T − 0.5(1/(1−γ)²)θ²T + (1/(1−γ))θW(T) ).

It is easy to check that 1/(1−γ) − 0.5(1/(1−γ)²) = 0.5 − 0.5(γ²/(1−γ)²), i.e. X* = X^{π*}(T), and due to completeness π* is the unique optimal portfolio strategy.
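Since X^π(T) in (30) is lognormal for any constant strategy π, the expected utility E[(1/γ)X^π(T)^γ] has a closed form, and one can check numerically that the Merton ratio above maximizes it. A sketch with illustrative parameters (not taken from the notes):

```python
import math

# Closed-form expected utility of a constant strategy pi:
# X^pi(T) is lognormal, so E[(1/gamma) X^pi(T)^gamma] can be written down directly.
# Parameters are illustrative, not from the notes.
r, mu, sigma, T, gamma, x0 = 0.03, 0.08, 0.2, 1.0, 0.5, 1.0

def expected_utility(pi):
    drift = r + pi * (mu - r) - 0.5 * pi**2 * sigma**2      # drift of log X^pi
    return (x0**gamma / gamma) * math.exp(
        gamma * drift * T + 0.5 * gamma**2 * pi**2 * sigma**2 * T
    )

pi_star = (mu - r) / ((1.0 - gamma) * sigma**2)   # the Merton ratio from the example
grid = [pi_star + 0.01 * k for k in range(-100, 101)]
best = max(grid, key=expected_utility)
assert abs(best - pi_star) < 1e-12   # pi* maximizes expected utility over the grid
```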
5.3 Stochastic Optimal Control Approach

Assumption. The coefficients r, μ, σ are constant and I = m = 1.

5.3.1 Heuristic Derivation of the HJB

We start with a heuristic derivation of the so-called Hamilton-Jacobi-Bellman equation (HJB), which is a PDE for the value function

G(t, x) := sup_{π ∈ A} E_{t,x}[U(X^π(T))],   (t, x) ∈ [0, T] × (0, ∞),

where E_{t,x}[U(X^π(T))] := E[U(X^π(T)) | X(t) = x]. Note that G(T, x) = U(x). Assume that there exists an optimal portfolio strategy π*. We now carry out the following procedure:

(i) For given (t, x), we consider the following strategies over [t, T]:
• Strategy I: Use the optimal strategy π*.
• Strategy II: For an arbitrary π use the strategy

π̂ = { π on [t, t + Δ];  π* on (t + Δ, T] }.

(ii) Compute E_{t,x}[U(X^{π*}(T))] and E_{t,x}[U(X^{π̂}(T))].
(iii) Using the fact that π* is at least as good as π̂ and taking the limit Δ → 0 yields the HJB.

We set X* := X^{π*} and X̂ := X^{π̂}.

Strategy I. By assumption, E_{t,x}[U(X*(T))] = G(t, x).

Strategy II. By definition of the strategies,

E_{t,x}[U(X̂(T))] = E_{t,x}[ E_{t+Δ, X̂(t+Δ)}[U(X̂(T))] ] = E_{t,x}[ E_{t+Δ, X̂(t+Δ)}[U(X*(T))] ] = E_{t,x}[G(t + Δ, X^π(t + Δ))].

Since π* is at least as good as π̂, for any admissible π,

(31)  E_{t,x}[G(t + Δ, X^π(t + Δ))] ≤ G(t, x),

where we have equality if π = π*.

ad (ii). Given that G ∈ C^{1,2}, Ito's formula yields

G(t + Δ, X^π(t + Δ)) = G(t, x) + ∫_t^{t+Δ} [ G_t(s, X^π_s) + G_x(s, X^π_s)X^π_s(r + π_s(μ − r)) + 0.5 G_xx(s, X^π_s)(X^π_s)²π_s²σ² ] ds + ∫_t^{t+Δ} G_x(s, X^π_s)X^π_s π_s σ dW_s.
ad (iii). Given E_{t,x}[∫_t^{t+Δ} G_x(s, X^π_s)X^π_s π_s σ dW_s] = 0, we can rewrite (31):

E_{t,x}[ ∫_t^{t+Δ} ( G_t(s, X^π_s) + G_x(s, X^π_s)X^π_s(r + π_s(μ − r)) + 0.5 G_xx(s, X^π_s)(X^π_s)²π_s²σ² ) ds ] ≤ 0.

Dividing by Δ and taking the limit Δ → 0 we get (note that X^π(t) = x)

G_t(t, x) + G_x(t, x)x(r + π_t(μ − r)) + 0.5 G_xx(t, x)x²π_t²σ² ≤ 0

for any admissible π, with equality for π = π*. Therefore, we get the HJB

sup_π [ G_t(t, x) + G_x(t, x)x(r + π(μ − r)) + 0.5 G_xx(t, x)x²π²σ² ] = 0

with final condition G(T, x) = U(x).

Remarks.
• Note that we take the supremum over a number π and not over a process.
• Since (t, x) was arbitrarily chosen, the HJB holds for all (t, x) ∈ [0, T] × (0, ∞).

Pointwise maximization over π yields the FOC

(32)  G_x(t, x)x(μ − r) + G_xx(t, x)x²π(t)σ² = 0,

implying

π*(t) = − ((μ − r)/σ²) · G_x(t, x)/(x G_xx(t, x)).

Since the derivative of (32) w.r.t. π is G_xx(t, x)x²σ², the candidate π* is the global maximum if G_xx < 0. Substituting π* into the HJB yields a PDE for G:

G_t(t, x) + rxG_x(t, x) − 0.5θ² G_x²(t, x)/G_xx(t, x) = 0

with final condition G(T, x) = U(x).

5.3.2 Solving the HJB

We consider the HJB for an investor with power utility U(x) = (1/γ)x^γ:

sup_π [ G_t(t, x) + G_x(t, x)x(r + π(μ − r)) + 0.5 G_xx(t, x)x²π²σ² ] = 0
with final condition G(T, x) = (1/γ)x^γ. We try the separation G(t, x) = (1/γ)x^γ f(t)^{1−γ} for some function f with f(T) = 1 and obtain

f' = −(γ/(1−γ))(r + 0.5(1/(1−γ))θ²) f =: Kf.

This is an ODE for f with the solution f(t) = exp(−K(T − t)). Hence, we get the following candidates for the value function and the optimal strategy:

(33)  G(t, x) = (1/γ)x^γ exp(−(1 − γ)K(T − t)),   π*(t) = (1/(1−γ)) (μ − r)/σ².

Since γ < 1, we also get G_xx(t, x) = (γ − 1)x^{γ−2}f(t)^{1−γ} < 0, so the candidate π* is indeed a maximizer.
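The candidate (33) can be checked against the nonlinear PDE above by finite differences. The following sketch (parameters illustrative, not from the notes) evaluates the residual G_t + rxG_x − 0.5θ²G_x²/G_xx at a few interior points:

```python
import math

# Finite-difference check that the candidate
# G(t, x) = (1/gamma) x^gamma exp(-(1 - gamma) K (T - t))
# solves G_t + r x G_x - 0.5 theta^2 G_x^2 / G_xx = 0.
# Parameters are illustrative, not taken from the notes.
r, mu, sigma, T, gamma = 0.03, 0.08, 0.2, 1.0, 0.5
theta = (mu - r) / sigma
K = -(gamma / (1.0 - gamma)) * (r + 0.5 * theta**2 / (1.0 - gamma))

def G(t, x):
    return (x**gamma / gamma) * math.exp(-(1.0 - gamma) * K * (T - t))

def residual(t, x, h=1e-5):
    Gt = (G(t + h, x) - G(t - h, x)) / (2 * h)
    Gx = (G(t, x + h) - G(t, x - h)) / (2 * h)
    Gxx = (G(t, x + h) - 2 * G(t, x) + G(t, x - h)) / h**2
    return Gt + r * x * Gx - 0.5 * theta**2 * Gx**2 / Gxx

max_res = max(abs(residual(t, x)) for t in (0.1, 0.5, 0.9) for x in (0.5, 1.0, 2.0))
assert max_res < 1e-4   # residual vanishes up to discretization error
```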
5.3.3 Verification

In this section, we show that our heuristic derivation of the HJB has led to the correct result.

Proposition 5.4 (Verification Result) Assume that the coefficients are constant and that the investor has a power utility function, where for γ < 0 only bounded admissible strategies are considered. Then the candidates (33) are the value function and the optimal strategy of the portfolio problem (27).

Proof. Our candidate G is indeed a C^{1,2} function. Let π ∈ A. Since G ∈ C^{1,2}, Ito's formula yields

G(T, X^π(T)) = G(t, x) + ∫_t^T [ G_t(s, X^π_s) + G_x(s, X^π_s)X^π_s(r + π_s(μ − r)) + 0.5 G_xx(s, X^π_s)(X^π_s)²π_s²σ² ] ds + ∫_t^T G_x(s, X^π_s)X^π_s π_s σ dW_s
≤ G(t, x) + ∫_t^T G_x(s, X^π_s)X^π_s π_s σ dW_s,   (∗)

where (∗) holds because G satisfies the HJB. The right-hand side is a local martingale in T. For γ > 0 we have G > 0, and thus the right-hand side is even a supermartingale, implying

(34)  E_{t,x}[(1/γ)(X^π(T))^γ] ≤ G(t, x),

where we use G(T, x) = (1/γ)x^γ. Besides, for π* the drift term vanishes, i.e.

G_t(s, X*_s) + G_x(s, X*_s)X*_s(r + π*_s(μ − r)) + 0.5 G_xx(s, X*_s)(X*_s)²(π*_s)²σ² = 0,

and, since E_{t,x}[∫_t^T G_x(s, X*_s)X*_s π*_s σ dW_s] = 0 (exercise), we obtain

E_{t,x}[(1/γ)(X*(T))^γ] = G(t, x).

Hence, G is the value function and π* is the optimal solution. Since for a bounded strategy π we get E[∫_t^T G_x(s, X^π_s)X^π_s π_s σ dW_s] = 0, the relation (34) also holds for γ < 0 and bounded admissible strategies. □

Remarks.
• For γ < 0 the boundedness assumption on π can be weakened.
• Note that we have not explicitly used the fact that the market is complete.

5.4 Intermediate Consumption

So far the investor has maximized expected utility from terminal wealth only. We now consider a situation where the investor maximizes
• expected utility from terminal wealth as well as
• expected utility from intermediate consumption.

At time t, the investor consumes at a rate c(t) ≥ 0, i.e. over the interval [0, T] he consumes ∫_0^T c(t) dt. In our model consumption can be interpreted as a continuous stream of dividends (see Exercise 40). To formalize the problem we need to generalize the concept of admissibility:

Definition 5.2 (Consumption Process, Self-financing, Admissible)
(i) A progressively measurable, real-valued process {c_t}_{t∈[0,T]} with c ≥ 0 and ∫_0^T c(t) dt < ∞ P-a.s. is said to be a consumption process.
(ii) Let φ be a trading strategy (see Definition 4.1) and c be a consumption process. If dX(t) = φ(t)' dS(t) − c(t) dt holds, then the pair (φ, c) is said to be self-financing.
(iii) A self-financing (φ, c) is said to be admissible if X(t) ≥ 0 for all t ≥ 0.

Hence, the investor's goal function reads

(35)  sup_{(π,c)∈A} E[ ∫_0^T U1(t, c(t)) dt + U2(X^{π,c}(T)) ],

where U1 is a utility function for fixed t ∈ [0, T], U2 is a utility function, and

A := { (π, c) : (π, c) admissible and E[ ∫_0^T U1(t, c(t))^- dt + U2(X^{π,c}(T))^- ] < ∞ }
and

dX^{π,c}(t) = X^{π,c}(t)[(r(t) + π(t)'(μ(t) − r(t)1)) dt + π(t)'σ dW(t)] − c(t) dt,   X^{π,c}(0) = x0.

For the martingale approach we set V1 := (U1')^{-1} and V2 := (U2')^{-1} and define

Γ(λ) := E[ ∫_0^T H(t)V1(t, λH(t)) dt + H(T)V2(λH(T)) ].

It is straightforward to generalize the result of Section 5.2:

Theorem 5.5 (Optimal Consumption and Optimal Final Wealth) Assume x0 > 0 and Γ(λ) < ∞ for all λ > 0. Then the optimal consumption and the optimal final wealth read

c*(t) = V1(t, Λ(x0) · H(t)),   X* = V2(Λ(x0) · H(T)),

and there exists a portfolio strategy π* such that (π*, c*) ∈ A, X^{π*,c*}(0) = x0, and X^{π*,c*}(T) = X*.

We will present the solution by using the stochastic control approach. As in Section 5.3 we make the following

Assumption. The coefficients r, μ, σ are constant and I = m = 1.

The value function now reads

G(t, x) := sup_{(π,c)∈A} E_{t,x}[ ∫_t^T U1(s, c(s)) ds + U2(X^{π,c}(T)) ].

Assume that there exists an optimal strategy (π*, c*). We now carry out the procedure of Section 5.3:

(i) For given (t, x) ∈ [0, T] × (0, ∞), we consider the following strategies over [t, T]:
• Strategy I: Use the optimal strategy (π*, c*).
• Strategy II: For an arbitrary (π, c) use the strategy

(π̂, ĉ) = { (π, c) on [t, t + Δ];  (π*, c*) on (t + Δ, T] }.

(ii) Compute E_{t,x}[∫_t^T U1(s, c*(s)) ds + U2(X^{π*,c*}(T))] and E_{t,x}[∫_t^T U1(s, ĉ(s)) ds + U2(X^{π̂,ĉ}(T))].
(iii) Using the fact that (π*, c*) is at least as good as (π̂, ĉ) and taking the limit Δ → 0 yields the HJB.

We set X* := X^{π*,c*} and X̂ := X^{π̂,ĉ}.
ad (ii). Strategy I. By assumption,

E_{t,x}[ ∫_t^T U1(s, c*(s)) ds + U2(X*(T)) ] = G(t, x).

Strategy II. By definition of the strategies,

E_{t,x}[ ∫_t^T U1(s, ĉ(s)) ds + U2(X̂(T)) ]
= E_{t,x}[ ∫_t^{t+Δ} U1(s, c(s)) ds + E_{t+Δ, X̂(t+Δ)}[ ∫_{t+Δ}^T U1(s, ĉ(s)) ds + U2(X̂(T)) ] ]
= E_{t,x}[ ∫_t^{t+Δ} U1(s, c(s)) ds + G(t + Δ, X^{π,c}(t + Δ)) ].

Therefore, for any admissible (π, c),

(36)  E_{t,x}[ ∫_t^{t+Δ} U1(s, c(s)) ds + G(t + Δ, X^{π,c}(t + Δ)) ] ≤ G(t, x),

where we have equality if (π, c) = (π*, c*). To shorten notation, we set X := X^{π,c}. Given that G ∈ C^{1,2}, Ito's formula yields

G(t + Δ, X_{t+Δ}) = G(t, x) + ∫_t^{t+Δ} [ G_t(s, X_s) + G_x(s, X_s)X_s(r + π_s(μ − r)) − G_x(s, X_s)c_s + 0.5 G_xx(s, X_s)X_s²π_s²σ² ] ds + ∫_t^{t+Δ} G_x(s, X_s)X_s π_s σ dW_s.

Given E_{t,x}[∫_t^{t+Δ} G_x(s, X_s)X_s π_s σ dW_s] = 0, we can rewrite (36):

E_{t,x}[ ∫_t^{t+Δ} ( U1(s, c_s) + G_t(s, X_s) + G_x(s, X_s)X_s(r + π_s(μ − r)) − G_x(s, X_s)c_s + 0.5 G_xx(s, X_s)X_s²π_s²σ² ) ds ] ≤ 0.

ad (iii). Dividing by Δ and taking the limit Δ → 0 we get (note that X(t) = x)

U1(t, c_t) + G_t(t, x) + G_x(t, x)x(r + π_t(μ − r)) − G_x(t, x)c_t + 0.5 G_xx(t, x)x²π_t²σ² ≤ 0

for any admissible (π, c), with equality for (π, c) = (π*, c*). Therefore, we get the HJB

(37)  sup_{π,c} [ U1(t, c) + G_t(t, x) + G_x(t, x)x(r + π(μ − r)) − G_x(t, x)c + 0.5 G_xx(t, x)x²π²σ² ] = 0
with final condition G(T, x) = U2(x).

Remarks.
• Note that we take the supremum over a pair of numbers (π, c) and not over a process.
• Since (t, x) was arbitrarily chosen, the HJB holds for all (t, x) ∈ [0, T] × (0, ∞).

Example “Power Utility”. We consider the following power utility functions

U1(t, c) = e^{−αt}(1/γ)c^γ,   U2(x) = e^{−αT}(1/γ)x^γ,

where α ≥ 0 resembles the investor's time preferences. The HJB reads

sup_{π,c} [ e^{−αt}(1/γ)c^γ + G_t(t, x) + G_x(t, x)x(r + π(μ − r)) − G_x(t, x)c + 0.5 G_xx(t, x)x²π²σ² ] = 0

with final condition G(T, x) = e^{−αT}(1/γ)x^γ. The FOCs read

G_x(t, x)x(μ − r) + G_xx(t, x)x²π(t)σ² = 0,
e^{−αt}c^{γ−1} − G_x(t, x) = 0,

implying

π*(t) = − ((μ − r)/σ²) · G_x(t, x)/(x G_xx(t, x)),   c*(t) = ( e^{αt} G_x(t, x) )^{1/(γ−1)}.

Substituting (π*, c*) into the HJB yields

((1−γ)/γ) e^{(α/(γ−1))t} G_x(t, x)^{γ/(γ−1)} + G_t(t, x) + rxG_x(t, x) − 0.5θ² G_x²(t, x)/G_xx(t, x) = 0.

We try the separation G(t, x) = (1/γ)x^γ f^β(t) for some function f with f(T) = 1 and some constant β. Simplifying yields

((1−γ)/γ) e^{−(α/(1−γ))t} f^{(β+γ−1)/(γ−1)} + (β/γ) f' + (r + 0.5(1/(1−γ))θ²) f = 0.

Choosing β = 1 − γ leads to

((1−γ)/γ) e^{−(α/(1−γ))t} + ((1−γ)/γ) f' + (r + 0.5(1/(1−γ))θ²) f = 0,

which can be rewritten as

f' = Kf + g,   with K := −(γ/(1−γ))(r + 0.5(1/(1−γ))θ²) and g(t) := −e^{−(α/(1−γ))t}.

This is an inhomogeneous ODE for f. Variation of constants yields

f(t) = f^H(t) [ f(T) − ∫_t^T g(s)/f^H(s) ds ],
where f^H(t) = e^{−K(T−t)} is the solution of the corresponding homogeneous ODE (see Section 5.3.2). Especially,

f(t) = e^{−K(T−t)} [ 1 + ∫_t^T e^{−(α/(1−γ))s + K(T−s)} ds ]
     = e^{−K(T−t)} [ 1 − (e^{KT}(1−γ)/(α + K(1−γ))) ( e^{−(α/(1−γ)+K)T} − e^{−(α/(1−γ)+K)t} ) ],

so that in particular f > 0. Since γ < 1, we also get G_xx(t, x) = (γ − 1)x^{γ−2}f(t)^{1−γ} < 0. Hence, we get the following candidates for the value function and the optimal strategy:

(38)  G(t, x) = (1/γ)x^γ f(t)^{1−γ},   π*(t) = (1/(1−γ)) (μ − r)/σ²,   c*(t) = e^{−(α/(1−γ))t} X*(t)/f(t).

Remark. Clearly, it needs to be shown that these candidates are indeed the value function and the optimal strategy! This can be done analogously to Proposition 5.4. For the convenience of the reader, the proof of this verification result is also presented.
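The variation-of-constants formula for f can be cross-checked against a direct backward integration of f' = Kf + g. A sketch with illustrative parameters (not taken from the notes):

```python
import math

# Cross-check of the closed-form solution of f' = K f + g, f(T) = 1,
# with g(t) = -exp(-(alpha/(1-gamma)) t), against an explicit Euler integration.
# Parameters are illustrative, not from the notes.
r, mu, sigma, T, gamma, alpha = 0.03, 0.08, 0.2, 1.0, 0.5, 0.1
theta = (mu - r) / sigma
K = -(gamma / (1.0 - gamma)) * (r + 0.5 * theta**2 / (1.0 - gamma))
a = alpha / (1.0 - gamma)

def f_closed(t):
    # f(t) = e^{-K(T-t)} [1 + int_t^T e^{-a s + K(T-s)} ds], integral in closed form
    integral = math.exp(K * T) * (math.exp(-(a + K) * t) - math.exp(-(a + K) * T)) / (a + K)
    return math.exp(-K * (T - t)) * (1.0 + integral)

def f_numeric(t, n_steps=100_000):
    # integrate f' = K f - e^{-a s} backwards from f(T) = 1 with explicit Euler
    dt = (T - t) / n_steps
    s, f = T, 1.0
    for _ in range(n_steps):
        f -= dt * (K * f - math.exp(-a * s))
        s -= dt
    return f

assert abs(f_closed(0.0) - f_numeric(0.0)) < 1e-4
```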
Proposition 5.6 (Verification Result) Assume that the coefficients are constant and that the investor has a power utility function, where for γ < 0 only bounded portfolio strategies π are considered. Then the candidates (38) are the value function and the optimal strategy of the portfolio problem (35).

Proof. Our candidate G is indeed a C^{1,2} function. Let (π, c) ∈ A. Since G ∈ C^{1,2}, Ito's formula yields

(39)  G(T, X^{π,c}(T)) + ∫_t^T e^{−αs}(1/γ)c_s^γ ds
= G(t, x) + ∫_t^T [ G_t(s, X^{π,c}_s) + G_x(s, X^{π,c}_s)X^{π,c}_s(r + π_s(μ − r)) − G_x(s, X^{π,c}_s)c_s + 0.5 G_xx(s, X^{π,c}_s)(X^{π,c}_s)²π_s²σ² + e^{−αs}(1/γ)c_s^γ ] ds + ∫_t^T G_x(s, X^{π,c}_s)X^{π,c}_s π_s σ dW_s.

For (π, c) = (π*, c*) the drift term vanishes, since

G_t(s, X*_s) + G_x(s, X*_s)X*_s(r + π*_s(μ − r)) − G_x(s, X*_s)c*_s + 0.5 G_xx(s, X*_s)(X*_s)²(π*_s)²σ² + e^{−αs}(1/γ)(c*_s)^γ = 0.

It is straightforward to show that E_{t,x}[∫_t^T G_x(s, X*_s)X*_s π*_s σ dW_s] = 0 (exercise). Hence, we obtain

(40)  E_{t,x}[G(T, X*(T))] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)(c*_s)^γ ds ] = G(t, x),

implying

E_{t,x}[e^{−αT}(1/γ)X*(T)^γ] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)(c*_s)^γ ds ] = G(t, x)

because G(T, x) = e^{−αT}(1/γ)x^γ.
Finally, we need to show that for all (π, c) ∈ A

E_{t,x}[e^{−αT}(1/γ)X^{π,c}(T)^γ] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)c_s^γ ds ] ≤ G(t, x).

Since G satisfies the HJB, for all (π, c) ∈ A we get

G_t(s, X^{π,c}_s) + G_x(s, X^{π,c}_s)X^{π,c}_s(r + π_s(μ − r)) − G_x(s, X^{π,c}_s)c_s + 0.5 G_xx(s, X^{π,c}_s)(X^{π,c}_s)²π_s²σ² + e^{−αs}(1/γ)c_s^γ ≤ 0.

Therefore, (39) yields

(41)  G(T, X^{π,c}(T)) + ∫_t^T e^{−αs}(1/γ)c_s^γ ds ≤ G(t, x) + ∫_t^T G_x(s, X^{π,c}_s)X^{π,c}_s π_s σ dW_s,

and now two cases are distinguished.

1st case: γ > 0. The right-hand side of (41) is a local martingale in T. Since for γ > 0 the left-hand side is greater than zero, the right-hand side is even a supermartingale, implying

(42)  E_{t,x}[G(T, X^{π,c}(T))] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)c_s^γ ds ] ≤ G(t, x)

and thus

E_{t,x}[e^{−αT}(1/γ)X^{π,c}(T)^γ] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)c_s^γ ds ] ≤ G(t, x).

2nd case: γ < 0. Since for a bounded strategy π we obtain E[∫_t^T G_x(s, X^π_s)X^π_s π_s σ dW_s] = 0, the result of the 1st case also holds for γ < 0 and bounded admissible strategies. □

Remark. If we set c ≡ 0, the previous proof is identical to the proof of Proposition 5.4.
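The consumption-adjusted wealth dynamics used throughout this section can also be simulated directly. The following Euler-Maruyama sketch (all parameters illustrative, not from the notes) uses a proportional consumption rate c(t) = kX(t) and checks two qualitative features: wealth stays positive, as admissibility requires, and consuming lowers terminal wealth pathwise:

```python
import math
import random

# Euler-Maruyama sketch of the self-financing wealth dynamics with consumption,
# dX = X (r + pi (mu - r)) dt + X pi sigma dW - c dt,
# for a constant proportional consumption rate c(t) = k X(t).
# All parameters are illustrative, not from the notes.
r, mu, sigma, pi, T, x0 = 0.03, 0.08, 0.2, 0.5, 1.0, 100.0

def simulate(k, n_steps=1000, seed=42):
    random.seed(seed)            # same seed => same Brownian path for both runs
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dw = random.gauss(0.0, math.sqrt(dt))
        x += x * (r + pi * (mu - r)) * dt + x * pi * sigma * dw - k * x * dt
    return x

with_consumption = simulate(k=0.05)
no_consumption = simulate(k=0.0)
assert with_consumption > 0
assert with_consumption < no_consumption   # consumption lowers terminal wealth pathwise
```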
5.5 Infinite Horizon

If the optimal portfolio decision over the lifetime of an investor is to be determined, then the investment horizon lies very far in the future. For this reason, it is reasonable to approximate this situation by a portfolio problem with T = ∞.
Hence, the investor's goal function reads

(43)  sup_{(π,c)∈A} E[ ∫_0^∞ U1(t, c(t)) dt ],

where

A := { (π, c) : (π, c) admissible and E[ ∫_0^∞ U1(t, c(t))^- dt ] < ∞ }

and

dX^{π,c}(t) = X^{π,c}(t)[(r(t) + π(t)'(μ(t) − r(t)1)) dt + π(t)'σ dW(t)] − c(t) dt,   X^{π,c}(0) = x0.

As in the previous section we make the following

Assumption 1. The coefficients r, μ, σ are constant and I = m = 1.

Additionally, we make the

Assumption 2. U1(t, c(t)) = e^{−αt}U(c(t)), where α ≥ 0 and U is a utility function.

The value function is given by

G(t, x) := sup_{(π,c)∈A} E_{t,x}[ ∫_t^∞ e^{−αs} U(c(s)) ds ].
Lemma 5.7 The value function has the representation G(t, x) = e^{−αt}H(x), where H is a function depending on wealth only.

Proof. We obtain

G(t, x)e^{αt} = sup_{(π,c)∈A} E_{t,x}[ ∫_t^∞ e^{−α(s−t)} U(c(s)) ds ] = sup_{(π,c)∈A} E_{0,x}[ ∫_0^∞ e^{−αs} U(c(s)) ds ] =: H(x),

where the second equality holds because the coefficients are constant, so the problem started at time t in state x is a time-shifted copy of the problem started at time 0. □

Hence, the HJB (37) can be rewritten as follows:

sup_{π,c} [ e^{−αt}U(c) + G_t(t, x) + G_x(t, x)x(r + π(μ − r)) − G_x(t, x)c + 0.5 G_xx(t, x)x²π²σ² ]
= e^{−αt} sup_{π,c} [ U(c) − αH(x) + H_x(x)x(r + π(μ − r)) − H_x(x)c + 0.5 H_xx(x)x²π²σ² ] = 0.
Example “Power Utility”. Let U(c) = (1/γ)c^γ. Then the FOCs read

H_x(x)x(μ − r) + H_xx(x)x²π(t)σ² = 0,
c^{γ−1} − H_x(x) = 0,
implying

π*(t) = − ((μ − r)/σ²) · H_x(x)/(x H_xx(x)),   c*(t) = H_x(x)^{1/(γ−1)}.

Substituting (π*, c*) into the HJB leads to

((1−γ)/γ) H_x(x)^{γ/(γ−1)} − αH(x) + rxH_x(x) − 0.5θ² H_x²(x)/H_xx(x) = 0.

We try the ansatz H(x) = (1/γ)x^γ k^β for some constants k and β. Simplifying yields

((1−γ)/γ) k^{β/(γ−1)} − α/γ + r + 0.5(1/(1−γ))θ² = 0.

Choosing β = γ − 1 leads to

k = (γ/(1−γ)) ( α/γ − r − 0.5(1/(1−γ))θ² ).

Our ansatz is only reasonable if k ≥ 0. For γ < 0 this is valid in any case. For γ > 0 we need to assume that

(44)  α ≥ γ(r + 0.5(1/(1−γ))θ²).

Hence, we get the following candidates for the value function and the optimal strategy:

(45)  G(t, x) = (1/γ)e^{−αt}x^γ k^{γ−1},   π*(t) = (1/(1−γ)) (μ − r)/σ²,   c*(t) = k X*(t).
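The constant k and condition (44) are easy to evaluate, and the candidate H can be checked against the maximized stationary HJB in closed form. A sketch with illustrative parameters (not taken from the notes):

```python
import math

# Compute k from the infinite-horizon example, check condition (44)/(46), and
# verify that H(x) = (1/gamma) x^gamma k^(gamma-1) makes the maximized
# stationary HJB vanish:
# ((1-gamma)/gamma) H_x^{gamma/(gamma-1)} - alpha H + r x H_x - 0.5 theta^2 H_x^2 / H_xx = 0.
# Parameters are illustrative, not from the notes.
r, mu, sigma, gamma, alpha = 0.03, 0.08, 0.2, 0.5, 0.1
theta = (mu - r) / sigma

k = (gamma / (1.0 - gamma)) * (alpha / gamma - r - 0.5 * theta**2 / (1.0 - gamma))
assert alpha > gamma * (r + 0.5 * theta**2 / (1.0 - gamma))   # strict version (46)
assert k > 0

def hjb_residual(x):
    H = x**gamma / gamma * k**(gamma - 1.0)
    Hx = x**(gamma - 1.0) * k**(gamma - 1.0)
    Hxx = (gamma - 1.0) * x**(gamma - 2.0) * k**(gamma - 1.0)
    return (((1.0 - gamma) / gamma) * Hx ** (gamma / (gamma - 1.0))
            - alpha * H + r * x * Hx - 0.5 * theta**2 * Hx**2 / Hxx)

max_res = max(abs(hjb_residual(x)) for x in (0.5, 1.0, 2.0))
assert max_res < 1e-12
```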
Remark. In contrast to the previous section, the investor now consumes a time-independent constant fraction of wealth.

Proposition 5.8 (Verification Result) Assume that the coefficients are constant and that the investor has a power utility function, where for γ < 0 only bounded portfolio strategies π are considered. If (44) holds as a strict inequality, i.e.

(46)  α > γ(r + 0.5(1/(1−γ))θ²),

then the candidates (45) are the value function and the optimal strategy of the portfolio problem (43).

Proof. Let (π, c) ∈ A. Analogously to the proof of Proposition 5.6, we get the relations (42) and (40) for the candidate G for the value function of this subsection given by (45):

E_{t,x}[G(T, X^{π,c}(T))] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)c_s^γ ds ] ≤ G(t, x),
E_{t,x}[G(T, X*(T))] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)(c*_s)^γ ds ] = G(t, x).
Firstly, note that, by the monotone convergence theorem, for any consumption process c

(47)  lim_{T→∞} E_{t,x}[ ∫_t^T e^{−αs}c_s^γ ds ] = E_{t,x}[ ∫_t^∞ e^{−αs}c_s^γ ds ]

(we emphasize that the factor 1/γ is deliberately omitted here because without this factor the integrand is positive). Secondly, since G(T, X*(T)) = (1/γ)e^{−αT}X*(T)^γ k^{γ−1}, we get

E_{t,x}[G(T, X*(T))]
= (1/γ)k^{γ−1}e^{−αT} E_{t,x}[X*(T)^γ]
= (1/γ)k^{γ−1}e^{−αT} x^γ E[ exp( γ{r + (1/(1−γ))θ² − 0.5(1/(1−γ)²)θ² − k}T + (γ/(1−γ))θW(T − t) ) ]

(48)
= (1/γ)k^{γ−1} x^γ exp( −αT + γ{r + (1/(1−γ))θ² − 0.5(1/(1−γ)²)θ² − k + 0.5(γ/(1−γ)²)θ²}T )
= (1/γ)k^{γ−1} x^γ e^{−kT} → 0 as T → ∞,

since (46) ensures that k > 0. Therefore,

E_{t,x}[ ∫_t^∞ e^{−αs}(1/γ)(c*_s)^γ ds ] = G(t, x).

To prove

(49)  E_{t,x}[ ∫_t^∞ e^{−αs}(1/γ)c_s^γ ds ] ≤ G(t, x),
two cases are distinguished.

1st case: γ > 0. Since in this case G(T, X^{π,c}(T)) ≥ 0, the relation (49) follows immediately from (42).

2nd case: γ < 0. Since in this case G < 0, the proof is more involved. From (42) it follows that

sup_{π,c} { E_{t,x}[G(T, X^{π,c}(T))] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)c_s^γ ds ] } ≤ G(t, x),

and we obtain the following estimate:

0 ≥ sup_{π,c} E_{t,x}[G(T, X^{π,c}(T))]
  = sup_{π,c} E_{t,x}[ (1/γ)e^{−αT}X^{π,c}(T)^γ k^{γ−1} ]
  ≥(∗) sup_{π,c} E_{t,x}[ (1/γ)e^{−αT}X^{π,c}(T)^γ k^{γ−1} + ∫_t^T e^{−αs}(1/γ)c_s^γ ds ]
  ≥ sup_π E_{t,x}[ (1/γ)e^{−αT}X^{π,0}(T)^γ ] k^{γ−1}
  =(+) (1/γ)x^γ e^{γ(r + 0.5(1/(1−γ))θ²)(T−t)} e^{−αT} k^{γ−1} → 0 as T → ∞,

where (∗) is valid because ∫_t^T e^{−αs}(1/γ)c_s^γ ds ≤ 0 for γ < 0, (+) follows from Proposition 5.4, and the convergence is ensured by (46). Hence lim_{T→∞} sup_{π,c} E_{t,x}[G(T, X^{π,c}(T))] = 0, so that letting T → ∞ in (42) and using (47) establishes (49). □

Remarks.
• The relation

lim_{T→∞} E_{t,x}[G(T, X^{π,c}(T))] = 0

is the so-called transversality condition.
• For γ > 0 the property (48) can be proved by using a different argument:

0 ≤ E_{t,x}[G(T, X*(T))]
  = e^{−αT}k^{γ−1} E_{t,x}[(1/γ)X*(T)^γ]
  ≤ e^{−αT}k^{γ−1} E_{t,x}[(1/γ)X^{π*,0}(T)^γ]
  ≤ e^{−αT}k^{γ−1} sup_π E_{t,x}[(1/γ)X^{π,0}(T)^γ]
  =(+) (1/γ)x^γ e^{γ(r + 0.5(1/(1−γ))θ²)(T−t)} e^{−αT} k^{γ−1} → 0 as T → ∞

by (46), where (+) follows from Proposition 5.4 and the second inequality holds because consumption only reduces wealth, i.e. X*(T) ≤ X^{π*,0}(T). Applying this argument one can avoid some tedious calculations.
• As in Proposition 5.4, the boundedness assumption for γ < 0 can be weakened.
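The closed-form decay in (48) behind the transversality condition can be checked directly. A sketch with illustrative parameters (not taken from the notes):

```python
import math

# In the infinite-horizon example, E_{t,x}[G(T, X*(T))] = (1/gamma) k^{gamma-1}
# x^gamma e^{-kT}, which tends to 0 as T grows whenever k > 0, i.e. whenever the
# strict condition (46) holds. Parameters are illustrative, not from the notes.
r, mu, sigma, gamma, alpha, x = 0.03, 0.08, 0.2, 0.5, 0.1, 1.0
theta = (mu - r) / sigma
k = (gamma / (1.0 - gamma)) * (alpha / gamma - r - 0.5 * theta**2 / (1.0 - gamma))

def expected_G(T):
    return (1.0 / gamma) * k**(gamma - 1.0) * x**gamma * math.exp(-k * T)

values = [expected_G(T) for T in (1, 10, 100, 1000)]
assert all(a > b for a, b in zip(values, values[1:]))   # strictly decreasing in T
assert values[-1] < 1e-40                               # transversality: decays to 0
```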