“FINANCIAL MATHEMATICS I:
Stochastic Calculus, Option Pricing,
Portfolio Optimization”
Holger Kraft
University of Kaiserslautern
Department of Mathematics
Mathematical Finance Group
Summer Term 2005
This Version: August 4, 2005
Contents
1 Introduction
  1.1 What is an Option?
  1.2 Fundamental Relations
2 Discrete-time State Pricing with Finite State Space
  2.1 Single-period Model
    2.1.1 Description of the Model
    2.1.2 Arrow-Debreu Securities
    2.1.3 Deflator
    2.1.4 Risk-neutral Measure
    2.1.5 Summary
  2.2 Multiperiod Model
    2.2.1 Modeling Information Flow
    2.2.2 Description of the Multiperiod Model
    2.2.3 Some Basic Definitions Revisited
    2.2.4 Assumptions
    2.2.5 Main Results
    2.2.6 Summary
3 Introduction to Stochastic Calculus
  3.1 Motivation
    3.1.1 Why Stochastic Calculus?
    3.1.2 What is a Reasonable Model for Stock Dynamics?
  3.2 On Stochastic Processes
    3.2.1 Cadlag and Caglad
    3.2.2 Modification, Indistinguishability, Measurability
    3.2.3 Stopping Times
  3.3 Prominent Examples of Stochastic Processes
  3.4 Martingales
  3.5 Completion, Augmentation, Usual Conditions
  3.6 Itô Integral
  3.7 Itô's Formula
4 Continuous-time Market Model and Option Pricing
  4.1 Description of the Model
  4.2 Black-Scholes Approach: Pricing by Replication
  4.3 Pricing with Deflators
  4.4 Pricing with Risk-neutral Measures
5 Continuous-time Portfolio Optimization
  5.1 Motivation
  5.2 Martingale Approach
  5.3 Stochastic Optimal Control Approach
    5.3.1 Heuristic Derivation of the HJB
    5.3.2 Solving the HJB
    5.3.3 Verification
  5.4 Intermediate Consumption
  5.5 Infinite Horizon
References
1 Introduction
1.1 What is an Option?
• There are two main variants: call and put.
• A European call gives the owner the right to buy some asset (e.g. stock, bond, electricity, corn, ...) at a future date (maturity) for a fixed price (strike price).
• Payoff: max{S(T) − K; 0}, where S(T) is the asset price at maturity T and K is the strike price.
• An American call gives the owner the right to buy some asset for the strike price at any time during the whole lifetime of the call.
• Put: replace "buy" with "sell".
• Since the owner has the right but not the obligation, this right has a positive value.
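The payoff formulas above can be written down directly. A minimal sketch (the numeric inputs are illustrative values, not taken from the text):

```python
# European option payoffs at maturity T; the numbers below are
# illustrative assumption values, not from the text.
def call_payoff(S_T, K):
    """Payoff of a European call: max{S(T) - K; 0}."""
    return max(S_T - K, 0.0)

def put_payoff(S_T, K):
    """Payoff of a European put: max{K - S(T); 0}."""
    return max(K - S_T, 0.0)

print(call_payoff(120.0, 100.0))  # in the money: 20.0
print(call_payoff(80.0, 100.0))   # out of the money: 0.0
print(put_payoff(80.0, 100.0))    # 20.0
```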
1.2 Fundamental Relations
Consider an option written on an underlying which pays no dividends or other distributions during the lifetime of the option. In the following we assume that the underlying is a stock. The time-t call and put prices are denoted by Call(t) and Put(t). The time-t price of a zero-coupon bond with maturity T is denoted by p(t, T).
Proposition 1.1 (Bounds)
(i) For a European call we get: S(t) ≥ Call(t) ≥ max{S(t) − K p(t, T); 0}.
(ii) For a European put we have: K p(t, T) ≥ Put(t) ≥ max{K p(t, T) − S(t); 0}.
Proof. (i) The relation S(t) ≥ Call(t) is obvious because the stock can be interpreted as a call with strike 0. We now assume that Call(t) < max{S(t) − K p(t, T); 0}. W.l.o.g. S(t) − K p(t, T) ≥ 0. Consider the strategy
• buy one call,
• sell one stock,
• buy K zeros with maturity T,
leading to an initial inflow of S(t) − Call(t) − K p(t, T) > 0. At time T, we distinguish two scenarios:
1. S(T) > K: Then the value of the strategy is −S(T) + S(T) − K + K = 0.
2. S(T) ≤ K: Then the value of the strategy is −S(T) + 0 + K ≥ 0.
Hence, the strategy would be an arbitrage opportunity.
(ii) The relation K p(t, T) ≥ Put(t) is valid because otherwise the strategy
• sell one put,
• buy K zeros with maturity T
would be an arbitrage strategy. The second relation can be proved similarly to the relation for the call in (i). □
Proposition 1.2 (Put-Call Parity)
The following relation holds: Put(t) = Call(t) − S(t) + K p(t, T).
Proof. Consider the strategy
• buy one put,
• sell one call,
• buy one stock,
• sell K zeros with maturity T.
It is easy to check that the time-T value of this strategy is zero. Since no additional payments need to be made, no arbitrage dictates that the time-t value is zero as well, implying the desired result. □
Remark. This is a very important relationship because it ties together European call and put prices.
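The parity can be checked with numbers. In the sketch below all inputs (stock price, strike, bond price, call price) are hypothetical illustration values:

```python
# Numerical check of put-call parity: Put = Call - S + K*p(t,T).
# All inputs are hypothetical illustration values.
S = 100.0        # current stock price S(t)
K = 95.0         # strike price
p_tT = 0.97      # zero-coupon bond price p(t, T)
call = 9.30      # quoted call price (assumed)

put = call - S + K * p_tT   # parity-implied put price
print(round(put, 4))        # 1.45

# The portfolio "buy put, sell call, buy stock, sell K zeros" from the
# proof then has initial value zero (and terminal value zero in every state).
initial = put - call + S - K * p_tT
print(abs(initial) < 1e-12)  # True
```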
Proposition 1.3 (American Call)
Assume that interest rates are positive. Early exercise of an American call is not optimal, i.e. Call^am(t) = Call(t).
Proof. The following relations hold:

Call^am(t) ≥ Call(t) ≥ max{S(t) − K p(t, T); 0} ≥ S(t) − K,

where the second inequality holds due to Proposition 1.1 (i) and the last one because p(t, T) < 1 for positive interest rates. Hence, the American call is worth more alive than dead. □
Remark.
• Unfortunately, this result does not hold for American puts.
• If dividends are paid, then the result for the American call breaks down as well.
Proposition 1.4 (American Put)
Assume that interest rates are positive. Then the following relations hold:
(i) Call^am(t) − S(t) + K p(t, T) ≤ Put^am(t) ≤ Call^am(t) − S(t) + K,
(ii) Put(t) ≤ Put^am(t) ≤ Put(t) + K(1 − p(t, T)).
Proof. (i) The first inequality follows from the put-call parity and Proposition 1.3. For the second one assume that Put^am(t) > Call^am(t) − S(t) + K and consider the following strategy:
• sell one put,
• buy one call,
• sell one stock,
• invest K Euros in the money market account.¹

¹ The time-t value of the money market account is denoted by M(t). By convention, M(0) = 1.
By assumption, the initial value of this strategy is strictly positive. For t ∈ [0, T] we distinguish two scenarios:
1. S(t) > K: Then the value of the strategy is

0 + (S(t) − K) − S(t) + K M(t) = K(M(t) − 1) ≥ 0

since interest rates are positive and thus M(t) ≥ 1.
2. S(t) ≤ K: Then the value of the strategy is

−(K − S(t)) + 0 − S(t) + K M(t) = K(M(t) − 1) ≥ 0.

Hence, the strategy would be an arbitrage opportunity.
(ii) We have

Put(t) ≤ Put^am(t) ≤ Call^am(t) − S(t) + K = Call(t) − S(t) + K
= Put(t) + S(t) − K p(t, T) − S(t) + K = Put(t) + K(1 − p(t, T)). □

Remark. If interest rates are zero, then Put(t) = Put^am(t).
2 Discrete-time State Pricing with Finite State Space

2.1 Single-period Model

2.1.1 Description of the Model

• We consider a single-period model starting today (t = 0) and ending at time t = T. Without loss of generality T = 1.
• Uncertainty at time t = 1 is represented by a finite set Ω = {ω_1, . . . , ω_I} of states, one of which will be revealed as true. Ω is said to be the state space.
• Each state ω_i ∈ Ω can occur with probability P(ω_i) > 0. This defines a probability measure P : Ω → (0, 1).
• At time t = 0 one can trade in J + 1 assets. The time-0 prices of these assets are described by the vector S = (S_0, S_1, . . . , S_J)'. The state-dependent payoffs of the assets at time t = 1 are described by the payoff matrix

X = ( X_0(ω_1)  · · ·  X_J(ω_1) )
    ( X_0(ω_2)  · · ·  X_J(ω_2) )
    (    ⋮                ⋮     )
    ( X_0(ω_I)  · · ·  X_J(ω_I) ),

where rows correspond to states and columns to assets.
• To shorten notation, we write X_ij := X_j(ω_i).
• The first asset is assumed to be riskless, i.e. X_i0 = X_k0 = const. for all i, k ∈ {1, . . . , I}.
• Therefore, at time t = 0 there exists a riskless interest rate given by

r = X_0 / S_0 − 1.
• An investor buys and sells assets at time t = 0. This is modeled via a trading strategy ϕ = (ϕ_0, . . . , ϕ_J)', where ϕ_j denotes the number of asset j which the investor holds in his portfolio (combination of assets).
Definition 2.1 (Arbitrage)
We distinguish two kinds of arbitrage opportunities:
(i) A free lunch is a trading strategy ϕ with

S'ϕ < 0 and Xϕ ≥ 0.

(ii) A free lottery is a trading strategy ϕ with

S'ϕ = 0, Xϕ ≥ 0, and Xϕ ≠ 0.
Assumptions:
(a) All investors agree upon which states cannot occur at time t = 1.
(b) All investors estimate the same payoff matrix (homogeneous beliefs).
(c) At time t = 0 there is only one market price for each security, and each investor can buy and sell as many securities as desired without changing the market price (price taker).
(d) Frictionless markets, i.e.
• assets are divisible,
• no restrictions on short sales,²
• no transaction costs,
• no taxes.
(e) The asset market contains no arbitrage opportunities, i.e. there are neither free lunches nor free lotteries.
2.1.2 Arrow-Debreu Securities

To characterize arbitrage-free markets, the notion of an Arrow-Debreu security is crucial:

Definition 2.2 (Arrow-Debreu Security)
An asset paying one Euro in exactly one state and zero else is said to be an Arrow-Debreu security (ADS). Its price is an Arrow-Debreu price (ADP).

Remarks.
• In our model, we have as many ADSs as states at time t = 1, i.e. we have I Arrow-Debreu securities.
• The i-th ADS has the payoff e_i = (0, . . . , 0, 1, 0, . . . , 0)', where the i-th entry is 1. The corresponding Arrow-Debreu price is denoted by π_i.
• Unfortunately, in practice ADSs are not traded.
We can now prove the following theorem characterizing markets which do not contain arbitrage opportunities.
Theorem 2.1 (Characterization of No Arbitrage)
Under assumptions (a)-(d) the following statements are equivalent:
(i) The market does not contain arbitrage opportunities (Assumption (e)).
(ii) There exist strictly positive Arrow-Debreu prices, i.e. π_i > 0, i = 1, . . . , I, such that

S_j = Σ_{i=1}^{I} π_i X_ij.

² A short sale of an asset is equivalent to selling an asset one does not own. An investor can do this by borrowing the asset from a third party and selling the borrowed security. The net position is a cash inflow equal to the price of the security and a liability (borrowing) of the security to the third party. In abstract mathematical terms, a short sale corresponds to buying a negative amount of a security.
Proof. Suppose (ii) to hold. We consider a strategy ϕ with Xϕ ≥ 0 and Xϕ ≠ 0. Then

S'ϕ = π'Xϕ > 0,

where the equality holds by (ii) and the strict inequality because π > 0, Xϕ ≥ 0, and Xϕ ≠ 0. For this reason, there are no free lunches. The argument to exclude free lotteries is similar. The implication "(i)⇒(ii)" follows from the so-called Farkas-Stiemke lemma (see Mangasarian (1969, p. 32) or Dax (1997)). □
Remarks.
• The ADPs are the solution of the following system of linear equations: S = X'π.
• The hard part of the proof is actually the implication "(i)⇒(ii)". For brevity, this proof is omitted here.
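The first remark suggests a direct computation: solve S = X'π numerically and check that π is strictly positive. A minimal numpy sketch for a hypothetical two-state, two-asset market (all numbers are illustration values, not from the text):

```python
import numpy as np

# Hypothetical 2-state, 2-asset market: a riskless asset and a stock.
# Rows of X are states, columns are assets, as in the payoff matrix above.
S = np.array([1.0, 100.0])          # time-0 prices (S_0, S_1)
X = np.array([[1.05, 120.0],        # payoffs in state omega_1
              [1.05,  80.0]])       # payoffs in state omega_2

# The Arrow-Debreu prices solve S = X' pi (Theorem 2.1).
pi = np.linalg.solve(X.T, S)
print(pi)                            # both components strictly positive -> no arbitrage

# Sanity check: the ADPs reprice both assets.
print(np.allclose(X.T @ pi, S))      # True
```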
Given our market with prices S and payoff matrix X, it is now of interest how to price other assets.

Definition 2.3 (Contingent Claim)
In the single-period model, a contingent claim (derivative, option) is some additional asset with payoff C ∈ ℝ^I.
Definition 2.4 (Replication)
Consider a market with payoff matrix X. A contingent claim with payoff C can be replicated (hedged) in this market if there exists a trading strategy ϕ such that

C = Xϕ.

Remark. If an asset with payoff C is replicable via a trading strategy ϕ, by the no-arbitrage assumption its price S_C has to be ("law of one price")

S_C = Σ_{j=0}^{J} ϕ_j S_j.

Definition 2.5 (Completeness)
An arbitrage-free market is said to be complete if every contingent claim is replicable.
Theorem 2.2 (Characterization of a Complete Market)
Assume that the financial market contains no arbitrage opportunities. Then the market is complete if and only if the Arrow-Debreu prices are uniquely determined, i.e.

Rank(X) = I.

Proof.
π is unique
⇐⇒ X'π = S has a unique solution
⇐⇒ Rank(X) = I
⇐⇒ Range(X) = ℝ^I
⇐⇒ every contingent claim is replicable. □
Remarks.
• The market is complete if the number of assets with linearly independent payoffs is equal to the number of states.
• If the number of assets is smaller than the number of states (J + 1 < I), the market cannot be complete.
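Theorem 2.2 can likewise be checked numerically. The sketch below (same hypothetical two-state market as before; all numbers are illustration values) verifies Rank(X) = I and then replicates a call on the stock:

```python
import numpy as np

# Completeness check (Theorem 2.2): the market is complete iff Rank(X) = I.
# Hypothetical 2-state, 2-asset market: riskless asset and stock.
X = np.array([[1.05, 120.0],
              [1.05,  80.0]])
I = X.shape[0]                        # number of states
print(np.linalg.matrix_rank(X) == I)  # True -> every claim is replicable

# Replicate a call on the stock with strike K = 100: payoff C = (20, 0)'.
C = np.array([20.0, 0.0])
phi = np.linalg.solve(X, C)           # solves C = X phi
S = np.array([1.0, 100.0])
price = S @ phi                       # price by the law of one price
print(round(price, 6))
```

The strategy is short the riskless asset and long half a stock; its cost is the arbitrage-free call price.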
2.1.3 Deflator

Main result from the previous subsection: In an arbitrage-free market, there exist strictly positive ADPs such that

S_j = Σ_{i=1}^{I} π_i X_ij.

In this subsection: Using the physical probabilities P(ω_i), i = 1, . . . , I, arbitrage-free asset prices can be represented as discounted expected values of the corresponding payoffs.
Definition 2.6 (Deflator)
In an arbitrage-free market, a deflator H is defined by

H = π / P.

Remarks.
• Since the market is arbitrage-free, there exists at least one vector of ADPs π, but π and thus the deflator H are only unique if the market is complete.
• The ADP vector π and the probability measure P can be interpreted as random variables with realizations π_i = π(ω_i) and P(ω_i).
Proposition 2.3 (Pricing with a Deflator)
In an arbitrage-free market, the asset prices are given by

S_j = E_P[H X_j].

Proof.

S_j = Σ_{i=1}^{I} π_i X_ij = Σ_{i=1}^{I} P(ω_i) · (π(ω_i)/P(ω_i)) · X_j(ω_i) = E_P[(π/P) X_j] = E_P[H X_j]. □
Remarks.
• The deflator H is the appropriate discount factor for payoffs at time t = 1.
• The deflator H is a random variable with realizations π(ω_i)/P(ω_i).
• On average, one needs to discount with the riskless rate r because

E_P[H] = Σ_{i=1}^{I} P(ω_i) · (π(ω_i)/P(ω_i)) = Σ_{i=1}^{I} π(ω_i) = 1/(1 + r).
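These remarks can be illustrated with numbers. In the sketch below both the Arrow-Debreu prices and the physical probabilities are assumed illustration values for a two-state market with r = 5%:

```python
import numpy as np

# Deflator H = pi / P for a hypothetical 2-state market.
pi = np.array([25/42, 5/14])          # Arrow-Debreu prices (assumed)
P  = np.array([0.6, 0.4])             # physical probabilities (assumed)
r  = 0.05                             # riskless rate of the same market

H = pi / P                            # state-wise discount factor

# On average one discounts with the riskless rate: E_P[H] = 1/(1+r).
print(np.isclose(P @ H, 1.0 / (1.0 + r)))   # True

# Pricing with the deflator reproduces the stock price: S_1 = E_P[H X_1].
X1 = np.array([120.0, 80.0])
print(round(P @ (H * X1), 6))               # approximately 100
```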
Interpretation of a Deflator:
• In an equilibrium model one can show that the deflator equals the marginal rate of substitution between consumption at time t = 0 and t = 1 of a representative investor, i.e. (roughly speaking) we have

H = (∂u/∂c_1) / (∂u/∂c_0),

where u = u(c_0, c_1) is the utility function of the representative investor.
• Setting u_0 := ∂u/∂c_0 and u_1 := ∂u/∂c_1 leads to

r = 1/E_P[H] − 1 = u_0/E_P(u_1) − 1,

which gives the following result:
(i) u_0 > E(u_1) ("time preference"):³ r > 0,
(ii) u_0 < E(u_1): r < 0.
2.1.4 Risk-neutral Measure

Two important observations:
(i) If an investor buys all ADSs, he receives one Euro at time t = 1 for sure, i.e. we get

Σ_{i=1}^{I} π_i = 1/(1 + r)  ⇒  Σ_{i=1}^{I} π_i (1 + r) = 1.

(ii) In an arbitrage-free market, it holds that π_i > 0, i = 1, . . . , I.

³ Intuitively, time preference describes the preference for present consumption over future consumption.
Proposition 2.4 (Risk-neutral Measure)
In an arbitrage-free market, there exists a probability measure Q defined via

Q(ω_i) = q_i := π_i (1 + r).

This measure is said to be the risk-neutral measure or equivalent martingale measure.

Proof. Follows from (i) and (ii). □

Using the risk-neutral measure, there is a third representation of an asset price:

Proposition 2.5 (Risk-neutral Pricing)
In an arbitrage-free market, asset prices possess the following representation:

S_j = E_Q[X_j / R],

where R := 1 + r.
Proof.

S_j = Σ_{i=1}^{I} π_i X_ij = Σ_{i=1}^{I} π_i R · (X_ij / R) = Σ_{i=1}^{I} q_i · (X_ij / R) = E_Q[X_j / R]. □
Remarks.
• Warning: The risk-neutral measure Q is different from the physical measure P.
• Roughly speaking, risk-neutral probabilities are compounded Arrow-Debreu prices.
• The notion "risk-neutral measure" comes from the fact that under this measure prices are discounted expected values, where one needs to discount using the riskless interest rate.

To motivate the notion "equivalent martingale measure", let us recall some basic definitions:
Definition 2.7 (Equivalent Probability Measures)
A probability measure Q is said to be (absolutely) continuous w.r.t. another probability measure P if for any event A ⊂ Ω

P(A) = 0 ⇒ Q(A) = 0.

If Q is continuous w.r.t. P and vice versa, then P and Q are said to be equivalent, i.e. both measures possess the same null sets.

Theorem 2.6 (Radon-Nikodym)
If Q is continuous w.r.t. P, then Q possesses a density w.r.t. P, i.e. there exists a positive random variable D = dQ/dP such that

E_Q[Y] = E_P[D Y]

for any random variable Y. This density is uniquely determined (P-a.s.).
Definition 2.8 (Stochastic Process in a Single-period Model)
Consider a single-period model with time points t = 0, 1. The finite sequence {Y(t)}_{t=0,1} forms a stochastic process.

Definition 2.9 (Martingale in a Single-period Model)
Consider a single-period model with finite state space. Uncertainty at time t = 1 is modeled by some probability measure Q. A real-valued stochastic process is said to be a martingale w.r.t. Q (short: Q-martingale) if

E_Q[Y(1)] = Y(0),

where it is assumed that E_Q[|Y(1)|] < ∞.
We can now characterize risk-neutral measures as follows:

Proposition 2.7 (Properties of a Risk-neutral Measure)
(i) The discounted price processes {S_j, X_j/R} are martingales under a risk-neutral measure Q.
(ii) The physical measure P and a risk-neutral measure Q are equivalent.

Proof. (i) follows from Proposition 2.5.
(ii) Since ADPs are strictly positive, we have Q(ω_i) > 0 and P(ω_i) > 0, which gives the desired result. □
Remarks.
• Proposition 2.7 is the reason why Q is said to be an equivalent martingale measure.
• From Propositions 2.3 and 2.5, we get

E_Q[X_j / R] = S_j = E_P[H X_j].

Therefore, we obtain

E_Q[X_j] = E_P[(H R) X_j] with H R = dQ/dP,

i.e. H = (dQ/dP)/R. Analogously, one can show dP/dQ = 1/(H R).
2.1.5 Summary

Theorem 2.8 (No Arbitrage)
Consider a financial market with price vector S and payoff matrix X. If the assumptions (a)-(d) are satisfied, then the following statements are equivalent:
(i) The financial market contains no arbitrage opportunities (assumption (e)).
(ii) There exist strictly positive Arrow-Debreu prices π_i, i = 1, . . . , I, such that

S_j = Σ_{i=1}^{I} π_i X_ij.
(iii) There exists a state-dependent discount factor H, the so-called deflator, such that for all j = 0, . . . , J

S_j = E_P[H X_j].

(iv) There exists an equivalent martingale measure Q such that the discounted price processes {S_j, X_j/R} are martingales under Q, i.e. for all j = 0, . . . , J

S_j = E_Q[X_j / R].
Theorem 2.9 (Completeness)
Consider a financial market with price vector S and payoff matrix X. If the assumptions (a)-(e) are satisfied, then the following statements are equivalent:
(i) The financial market is complete, i.e. each additional asset can be replicated.
(ii) The Arrow-Debreu prices π_i, i = 1, . . . , I, are uniquely determined.
(iii) There exists only one deflator H.
(iv) There exists only one equivalent martingale measure Q.

Remarks.
• In a complete market, the price of any additional asset is given by the price of its replicating portfolio.
• The value of this portfolio can be calculated by applying the following theorem.
Theorem 2.10 (Pricing of Contingent Claims)
Consider an arbitrage-free and complete market with price vector S and payoff matrix X. Let C ∈ ℝ^I be the payoff of a contingent claim. Its price today, S_C, can be calculated by using one of the following formulas:

(i) S_C = C'π = Σ_{i=1}^{I} π_i C(ω_i) (pricing with Arrow-Debreu prices).
(ii) S_C = E_P[H C] (pricing with the deflator).
(iii) S_C = E_Q[C / R] (pricing with the equivalent martingale measure).

Proof. In an arbitrage-free and complete market, there exist ADPs which are uniquely determined. The no-arbitrage requirement together with the definition of an ADP gives formula (i). Formulas (ii) and (iii) follow from (i) using H = π/P and Q(ω_i) = π_i R. □
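As a numerical illustration of Theorem 2.10 (all inputs are assumed values for a hypothetical two-state market with r = 5%), the three formulas give the same price:

```python
import numpy as np

# Three equivalent pricing formulas for a claim C = (20, 0)' in a
# hypothetical 2-state market; pi, P, and r are assumed illustration values.
pi = np.array([25/42, 5/14])          # Arrow-Debreu prices (assumed)
P  = np.array([0.6, 0.4])             # physical probabilities (assumed)
r  = 0.05
R  = 1.0 + r
C  = np.array([20.0, 0.0])

price_ads      = pi @ C               # (i)   S_C = sum_i pi_i C(omega_i)
H              = pi / P               # deflator
price_deflator = P @ (H * C)          # (ii)  S_C = E_P[H C]
q              = pi * R               # risk-neutral probabilities
price_rn       = q @ (C / R)          # (iii) S_C = E_Q[C / R]

print(np.allclose([price_deflator, price_rn], price_ads))  # True
```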
2.2 Multiperiod Model

In the previous section we modeled only two points in time:
t = 0: The investor chooses the trading strategy.
t = T: The investor liquidates his portfolio and receives the payoffs.

This has the following consequences:
• We have disregarded the information flow between 0 and T.
• Every trading strategy was a static strategy ("buy and hold"), i.e. intermediate portfolio adjustments were not allowed.

In this section
• we will model the information flow between 0 and T → filtration,
• and the investor can adjust his portfolio according to the arrival of new information about prices → dynamic (self-financing) trading strategy.
2.2.1 Modeling Information Flow

Motivation

Consider the dynamics of a stock in a two-period binomial model with trading dates t = 0, 0.5, 1:

                      121
            110 <
                      99
  100 <
                      99
            90  <
                      81

  t = 0     t = 0.5   t = 1

• Due to the additional trading date at time t = 0.5, the investor gets more information about the stock price at time t = 1.
• At t = 0 it is possible that the stock price at time t = 1 equals 81, 99, or 121.
• If, however, the investor already knows that at time t = 0.5 the stock price equals 110 (90), then at time t = 1 the stock price can only take the values 121 or 99 (99 or 81).
• It is one of the goals of this section to model this additional piece of information!
Reference Example

To demonstrate how one can model the information flow, we consider the following example:
The states are revealed as in the following tree:

                              ω_1
        {ω_1, ω_2} <
                              ω_2
  {ω_1, ω_2, ω_3, ω_4} <
                              ω_3
        {ω_3, ω_4} <
                              ω_4

  t = 0           t = 1       t = 2

The state space thus equals: Ω = {ω_1, ω_2, ω_3, ω_4}.

Example for the interpretation of the states:
ω_1: DAX stands at 5000 at t = 1 and at 6000 at time t = 2.
ω_2: DAX stands at 5000 at t = 1 and at 4000 at time t = 2.
ω_3: DAX stands at 3000 at t = 1 and at 4000 at time t = 2.
ω_4: DAX stands at 3000 at t = 1 and at 2000 at time t = 2.
First Approach to Model the Flow of Information

Definition 2.10 (Partition)
A partition of the state space is a set {A_1, . . . , A_K} of subsets A_k ⊂ Ω of a state space Ω such that
• the subsets A_k are disjoint, i.e. A_k ∩ A_l = ∅ for k ≠ l,
• Ω = A_1 ∪ . . . ∪ A_K.

Example. {{ω_1, ω_2}, {ω_3, ω_4}}.
Definition 2.11 (Information Structure)
An information structure is a sequence {F_t}_t of partitions F_0, . . ., F_T such that
• F_0 = {{ω_1, . . . , ω_I}} ("At the beginning each state can occur."),
• F_T = {{ω_1}, . . . , {ω_I}} ("At the end, one knows for sure which state has occurred."),
• the partitions become finer over time.
Example.
F_0 = {{ω_1, ω_2, ω_3, ω_4}}.
F_1 = {{ω_1, ω_2}, {ω_3, ω_4}}.
F_2 = {{ω_1}, {ω_2}, {ω_3}, {ω_4}}.
Remarks.
• Problem: This intuitive definition cannot be generalized to the continuous-time case.
• However, it provides a good intuition of what information flow actually means.
Second Approach to Model the Flow of Information: Filtration

Definition 2.12 (σ-Field)
A system F of subsets of the state space Ω is said to be a σ-field if
(i) Ω ∈ F,
(ii) A ∈ F ⇒ A^C ∈ F,
(iii) for every sequence {A_n}_{n∈ℕ} with A_n ∈ F we have ∪_{n=1}^{∞} A_n ∈ F.

Remark. For a finite σ-field, it is sufficient if finite (instead of infinite) unions of events belong to the σ-field (see requirement (iii) of Definition 2.12).

Example. {∅, {ω_1, ω_2}, {ω_3, ω_4}, Ω}.
Definition 2.13 (Filtration)
Consider a state space Ω equipped with a σ-field F. A family {F_t}_t of sub-σ-fields F_t ⊂ F, t ∈ I, is said to be a filtration if F_s ⊂ F_t for s ≤ t and s, t ∈ I. Here I denotes some ordered index set.
Remark. In our discrete-time model, we can always use filtrations with the following properties (P(Ω) denotes the power set of Ω):
• F_0 = {∅, Ω} ("At the beginning, one has no information."),
• F_T = P(Ω) ("At the end, one is fully informed."),
• F_0 ⊂ F_1 ⊂ . . . ⊂ F_T ("The state of information increases over time.").

Example. F_0 = {∅, Ω}, F_1 = {∅, {ω_1, ω_2}, {ω_3, ω_4}, Ω}, F_2 = P(Ω).
Some facts about filtrations:
• Filtrations are used to model the flow of information over time.
• At time t, one can decide whether an event A ∈ F_t has occurred or not.
• Taking the conditional expectation E[Y | F_t] of a random variable Y means that one takes the expectation on the basis of all information available at time t.
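On a finite state space, E[Y | F_t] can be computed block by block from the partition generating F_t. The sketch below uses the running four-state example with F_1 generated by {{ω_1, ω_2}, {ω_3, ω_4}}; the uniform probabilities are an assumption, and Y is the DAX level at t = 2 from the interpretation above:

```python
# Conditional expectation given the partition generating
# F_1 = {emptyset, {w1,w2}, {w3,w4}, Omega} from the running example.
P = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}   # assumed probabilities
Y = {"w1": 6000, "w2": 4000, "w3": 4000, "w4": 2000}   # DAX level at t = 2

partition = [{"w1", "w2"}, {"w3", "w4"}]

# E[Y | F_1] is constant on each block of the partition: the
# probability-weighted average of Y over that block.
cond_exp = {}
for block in partition:
    p_block = sum(P[w] for w in block)
    avg = sum(P[w] * Y[w] for w in block) / p_block
    for w in block:
        cond_exp[w] = avg

print(cond_exp["w1"], cond_exp["w3"])   # 5000.0 3000.0
```

Under these assumed probabilities, E[Y | F_1] reproduces exactly the t = 1 DAX levels 5000 and 3000 of the example.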
2.2.2 Description of the Multiperiod Model

Properties of the Model:
• Trading takes place at T + 1 dates, t = 0, . . . , T.
• Investors can trade in J + 1 assets with price processes (S_0(t), . . . , S_J(t)), t = 0, . . . , T.
• S_j(t) denotes the price of the j-th asset at time t.
• The range of S_j(t) is finite.
• Trading of an investor is modeled via his trading strategy ϕ(t) = (ϕ_0(t), . . . , ϕ_J(t))', where ϕ_j(t) denotes the number of asset j at time t in the investor's portfolio.
• A trading strategy needs to be predictable, i.e. on the basis of all information about prices S(t − 1) at time t − 1 the investor selects the portfolio which he holds until time t.
• The value of a trading strategy at time t is given by:

V_ϕ(t) = S(t)'ϕ(t) = Σ_{j=0}^{J} S_j(t) ϕ_j(t) for t = 1, . . . , T,
V_ϕ(0) = S(0)'ϕ(1) = Σ_{j=0}^{J} S_j(0) ϕ_j(1) for t = 0.
Definition 2.14 (Contingent Claim)
A contingent claim leads to a state-dependent payoff C(t) at time t which is measurable w.r.t. F_t.

Remarks.
• A typical contingent claim is a European call with payoff C(T) = max{S(T) − K; 0}, where S(T) denotes the value of the underlying at time T and K the strike price.
• American options are not contingent claims in the sense of Definition 2.14, but one can generalize this definition accordingly.
• Recall the following paradigm: Arbitrage-free pricing of a contingent claim means finding a hedging (replication) strategy for the corresponding payoff.
• In the single-period model, the price of the replication strategy is then the price of the contingent claim.
• In the multiperiod model we need an additional requirement: the replication strategy should not require intermediate inflows or outflows of cash.
• Only in this case are the initial costs equal to the fair price of the contingent claim.
• This leads to the notion of a self-financing trading strategy.
Definition 2.15 (Self-financing Trading Strategy)
A trading strategy ϕ(t) = (ϕ_0(t), . . . , ϕ_J(t))' is said to be self-financing if for t = 1, . . . , T − 1

S(t)'ϕ(t) = S(t)'ϕ(t + 1)  ⇐⇒  Σ_{j=0}^{J} S_j(t) ϕ_j(t) = Σ_{j=0}^{J} S_j(t) ϕ_j(t + 1).
Remarks.
• To buy assets, the investor has to sell other ones.
• For this reason, the portfolio value changes only if prices change and not if the
numbers of assets in the portfolio change.
Notation:
• Change of the value of strategy ϕ at time t:

ΔV_ϕ(t) := V_ϕ(t) − V_ϕ(t − 1).

• Price changes at time t:

ΔS(t) = (ΔS_0(t), . . . , ΔS_J(t))' := (S_0(t) − S_0(t − 1), . . . , S_J(t) − S_J(t − 1))'.

• Gains of the trading strategy ϕ at time t:

ϕ(t)'ΔS(t) = Σ_{j=0}^{J} ϕ_j(t) ΔS_j(t).

• Cumulative gains process of the trading strategy ϕ at time t:

G_ϕ(t) := Σ_{τ=1}^{t} ϕ(τ)'ΔS(τ) = Σ_{τ=1}^{t} Σ_{j=0}^{J} ϕ_j(τ) ΔS_j(τ).
Proposition 2.11 (Characterization of a Self-financing Trading Strategy)
A self-financing trading strategy ϕ(t) has the following properties:
(i) The value of ϕ changes only due to price changes, i.e.

ΔV_ϕ(t) = ϕ(t)'ΔS(t).

(ii) The value of ϕ is equal to its initial value plus the cumulative gains, i.e.

V_ϕ(t) = V_ϕ(0) + G_ϕ(t) = V_ϕ(0) + Σ_{τ=1}^{t} ϕ(τ)'ΔS(τ).
Proof. (i) For a self-financing strategy ϕ, we obtain

ΔV_ϕ(t) = V_ϕ(t) − V_ϕ(t − 1)
        = Σ_{j=0}^{J} S_j(t) ϕ_j(t) − Σ_{j=0}^{J} S_j(t − 1) ϕ_j(t − 1)
   (∗)  = Σ_{j=0}^{J} S_j(t) ϕ_j(t) − Σ_{j=0}^{J} S_j(t − 1) ϕ_j(t)
        = Σ_{j=0}^{J} ϕ_j(t) (S_j(t) − S_j(t − 1))
        = Σ_{j=0}^{J} ϕ_j(t) ΔS_j(t)
        = ϕ(t)'ΔS(t).

The equality (∗) is valid because ϕ is self-financing.
(ii) Any (not necessarily self-financing) trading strategy satisfies

V_ϕ(t) = V_ϕ(0) + Σ_{τ=1}^{t} ΔV_ϕ(τ).

Since ϕ is assumed to be self-financing, by (i), we obtain

V_ϕ(t) = V_ϕ(0) + Σ_{τ=1}^{t} ΔV_ϕ(τ) = V_ϕ(0) + Σ_{τ=1}^{t} ϕ(τ)'ΔS(τ) = V_ϕ(0) + G_ϕ(t). □
Remarks.
• In continuous-time models, the relation (i) is still valid if Δ is replaced by d.
• Unfortunately, the intuitive relation S(t)'ϕ(t) = S(t)'ϕ(t + 1) has no continuous-time equivalent.
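Proposition 2.11 (ii) can be checked on a single price path. The numbers below (a money market account at 5% per period and one stock) are illustrative assumptions; the rebalancing at t = 1 is made self-financing by construction:

```python
# Proposition 2.11 (ii) on one price path: for a self-financing strategy,
# V(t) = V(0) + sum_tau phi(tau)' dS(tau). All numbers are assumed values.
S = [  # prices (money market account, stock) at t = 0, 1, 2
    (1.00, 100.0),
    (1.05, 110.0),
    (1.05**2, 99.0),
]

# Predictable strategy: phi[t-1] is chosen at t-1 and held over (t-1, t].
phi = [(10.0, 1.0), None]            # holdings over (0, 1]; rebalanced at t = 1

# Self-financing rebalancing at t = 1: move half the stock position into
# the money market account without injecting or withdrawing cash.
wealth_1 = phi[0][0] * S[1][0] + phi[0][1] * S[1][1]
new_stock = 0.5
new_mm = (wealth_1 - new_stock * S[1][1]) / S[1][0]
phi[1] = (new_mm, new_stock)         # holdings over (1, 2]

V0 = phi[0][0] * S[0][0] + phi[0][1] * S[0][1]
V2 = phi[1][0] * S[2][0] + phi[1][1] * S[2][1]

# Cumulative gains G(t) = sum_tau phi(tau)' (S(tau) - S(tau-1)).
G = sum(
    phi[t - 1][0] * (S[t][0] - S[t - 1][0]) + phi[t - 1][1] * (S[t][1] - S[t - 1][1])
    for t in (1, 2)
)
print(abs(V2 - (V0 + G)) < 1e-10)    # True: value = initial value + gains
```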
2.2.3 Some Basic Definitions Revisited

The notion of a self-financing trading strategy is also important to generalize the following definitions to the multiperiod setting:

Definition 2.16 (Arbitrage)
We distinguish two kinds of arbitrage opportunities:
(i) A free lunch is a self-financing trading strategy ϕ with

V_ϕ(0) < 0 and V_ϕ(T) ≥ 0.

(ii) A free lottery is a self-financing trading strategy ϕ with

V_ϕ(0) = 0, V_ϕ(T) ≥ 0, and V_ϕ(T) ≠ 0.

Definition 2.17 (Replication of a Contingent Claim)
A contingent claim with payoff C(T) is said to be replicable if there exists a self-financing trading strategy ϕ such that

C(T) = V_ϕ(T).

The trading strategy ϕ is said to be a replication (hedging) strategy for the claim C(T).

Remarks.
• By definition, a replication strategy of a claim leads to the same payoff as the claim.
• In general, replication strategies are dynamic strategies, i.e. at every trading date the hedge portfolio needs to be adjusted according to the arrival of new information about prices.

The definition of a complete market remains the same:

Definition 2.18 (Completeness)
An arbitrage-free market is said to be complete if each contingent claim can be replicated.
2.2.4 Assumptions

(a) All investors agree upon which states cannot occur at times t > 0.
(b) All investors agree upon the range of the price processes S(t).
(c) At time t = 0 there is only one market price for each security, and each investor can buy and sell as many securities as desired without changing the market price (price taker).
(d) Frictionless markets, i.e.
• assets are divisible,
• no restrictions on short sales (see footnote 2),
• no transaction costs,
• no taxes.
(e) The asset market contains no arbitrage opportunities, i.e. there are neither free lunches nor free lotteries.
2.2.5 Main Results

Conventions:
• The 0-th asset is a money market account, i.e. it is locally riskfree with

S_0(0) = 1 and S_0(t) = Π_{u=1}^{t} (1 + r_u), t = 1, . . . , T,

where r_u denotes the interest rate for the period [u − 1, u].
• The discounted (normalized) price processes are defined by

Z_j(t) := S_j(t) / S_0(t).

• The money market account is thus used as a numeraire.
Definition 2.19 (Martingale)
A real-valued adapted process {Y(t)}_t, t = 0, . . . , T, with E[|Y(t)|] < ∞ is a martingale w.r.t. a probability measure Q if for s ≤ t

E_Q[Y(t) | F_s] = Y(s).

Remark. A martingale is thus a process which stays constant on average.
Important Facts:
• A multiperiod model consists of a sequence of single-period models.
• Therefore, a multiperiod model is arbitrage-free if all single-period models are arbitrage-free (see, e.g., Bingham/Kiesel (2004), Chapter 4).

For these reasons, the following three theorems hold:
Theorem 2.12 (Characterization of No Arbitrage)
If assumptions (a)-(d) are satisfied, the following statements are equivalent:
(i) The financial market contains no arbitrage opportunities (assumption (e)).
(ii) There exists a state-dependent discount factor H, the so-called deflator, such that for all time points s, t ∈ {0, ..., T} with s ≤ t and all assets j = 0, ..., J we have

S_j(s) = E^P[H(t) S_j(t) | F_s] / H(s),

i.e. all prices can be represented as discounted expectations under the physical measure.
(iii) There exists a risk-neutral probability measure Q such that all discounted price processes Z_j(t) are martingales under Q, i.e. for all time points s, t ∈ {0, ..., T} with s ≤ t and all assets j = 1, ..., J we have

Z_j(s) = E^Q[Z_j(t) | F_s]  ⟺  S_j(s) / S_0(s) = E^Q[ S_j(t) / S_0(t) | F_s ].
Remarks.
• The equivalence "No Arbitrage ⇔ Existence of Q" is known as the Fundamental Theorem of Asset Pricing.
• Unfortunately, in continuous-time models this equivalence breaks down.
• However, the implication "Existence of Q ⇒ No Arbitrage" is still valid.
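In a one-period binomial model the equivalence in Theorem 2.12 can be made concrete: solving the martingale condition for Q yields the familiar risk-neutral up-probability q = ((1+r) − d)/(u − d). A minimal sketch with assumed parameters s0, u, d, r:

```python
# One-period binomial sketch of Theorem 2.12 (hypothetical numbers):
# the risk-neutral probability q makes the discounted stock price a
# Q-martingale, i.e. S(0) = E_Q[S(1)] / (1 + r).

s0, u, d, r = 100.0, 1.2, 0.9, 0.05   # assumed model parameters, d < 1+r < u

# Martingale condition: s0 = (q*u*s0 + (1-q)*d*s0) / (1+r), solved for q
q = ((1.0 + r) - d) / (u - d)         # risk-neutral up-probability

disc_expectation = (q * u * s0 + (1.0 - q) * d * s0) / (1.0 + r)
print(q)                  # close to 0.5 for these parameters
print(disc_expectation)   # close to s0: discounted Q-expectation recovers today's price
```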
Theorem 2.13 (Pricing of Contingent Claims)
Consider a financial market satisfying assumptions (a)-(e) and a contingent claim with payoff C(T). Then we obtain for s, t ∈ {0, ..., T} with s ≤ t:
(i) The replication strategy ϕ for the claim is unique, i.e. the arbitrage-free price of the claim is uniquely determined by

C(s) = V^ϕ(s).

(ii) Using the deflator, we can calculate the claim's time-s price

C(s) = E^P[H(t) C(t) | F_s] / H(s).

(iii) The following risk-neutral pricing formula holds:

C(s) = S_0(s) E^Q[ C(t) / S_0(t) | F_s ].
Remarks.
• Result (i) says that today's price of a claim is uniquely determined by its replication costs, i.e. C(0) = V^ϕ(0).
• From (iii) we get

C(s) / S_0(s) = E^Q[ C(t) / S_0(t) | F_s ],

i.e. the discounted price processes of contingent claims are Q-martingales as well. Note that one needs to discount with the money market account.
• If interest rates are constant, then from (iii)

C(0) = E^Q[ C(T) / (1 + r)^T ] = E^Q[C(T)] / (1 + r)^T.
• Pricing of contingent claims boils down to computing expectations under the risk-neutral measure.
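A minimal sketch of this recipe in a T-step binomial model with constant interest rate (all parameters below are assumed illustration values): the call price is the discounted Q-expectation of the terminal payoff.

```python
# Risk-neutral pricing of a European call in a T-step binomial model
# (hypothetical parameters), illustrating C(0) = E_Q[C(T)] / (1+r)^T
# for constant interest rates.
from math import comb

s0, u, d, r, K, T = 100.0, 1.1, 0.95, 0.02, 100.0, 3   # assumed values

q = ((1 + r) - d) / (u - d)     # risk-neutral up-probability

def call_price(s0, u, d, r, K, T, q):
    """Discounted Q-expectation of the payoff max(S(T) - K, 0)."""
    expectation = sum(
        comb(T, k) * q**k * (1 - q)**(T - k) * max(s0 * u**k * d**(T - k) - K, 0.0)
        for k in range(T + 1)
    )
    return expectation / (1 + r)**T

c0 = call_price(s0, u, d, r, K, T, q)
print(round(c0, 4))
```

As a consistency check, pricing the payoff C(T) = S(T) itself by the same formula should return today's stock price s0, since the discounted stock is a Q-martingale.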
Theorem 2.14 (Completeness)
Consider a financial market satisfying assumptions (a)-(e). Then the following statements are equivalent:
(i) The market is complete.
(ii) There exists a unique deflator H.
(iii) There exists a unique martingale measure Q.
2.2.6 Summary
The following results are important:
• It is always valid that a financial market contains no arbitrage opportunities if an equivalent martingale measure exists.
• In our discrete model, the converse is also valid: if the market contains no arbitrage opportunities, there exists a martingale measure.
• If a martingale measure Q exists, the discounted price processes are Q-martingales.
• If a unique martingale measure exists, the market is arbitrage-free and complete.
• If a unique martingale measure exists, today's price of a contingent claim is given by the following Q-expectation:

C(0) = E^Q[ C(T) / S_0(T) ].
3 INTRODUCTION TO STOCHASTIC CALCULUS 23
3 Introduction to Stochastic Calculus
3.1 Motivation
3.1.1 Why Stochastic Calculus?
• In the previous section, the investor was able to trade at finitely many dates t = 0, ..., T only.
• In a continuous-time model, trading takes place continuously, i.e. at any date in the time period [0, T].
• Consequently, price dynamics are modeled continuously.
• In continuous-time models asset dynamics are modeled via stochastic differential equations.
Example "Black-Scholes Model". Two investment opportunities:
• money market account: riskless asset with constant interest rate r modeled via an ordinary differential equation (ODE):

dM(t) = M(t) r dt,  M(0) = 1.

• stock: risky asset modeled via a stochastic differential equation (SDE)

dS(t) = S(t) [µ dt + σ dW(t)],  S(0) = s_0,

where W is a Brownian motion.
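A minimal Euler-Maruyama simulation of these two dynamics (parameter values are assumed for illustration; the exact solution of the ODE is M(t) = e^{rt}, which the scheme should approximate):

```python
# Euler-Maruyama simulation of the Black-Scholes dynamics
#   dM = M r dt   and   dS = S (mu dt + sigma dW);
# parameter values are assumed for illustration only.
import math
import random

random.seed(0)
r, mu, sigma, s0, T, n = 0.03, 0.07, 0.2, 100.0, 1.0, 1000
dt = T / n

M, S = 1.0, s0
for _ in range(n):
    dW = random.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
    M += M * r * dt                         # money market: deterministic ODE step
    S += S * (mu * dt + sigma * dW)         # stock: stochastic SDE step

print(M)   # close to exp(r*T)
print(S)
```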
In this section we answer the following questions:
Q1 What is a Brownian motion?
Q2 What is the meaning of dW(t)? → Stochastic Integral
Q3 What are stochastic differential equations and do explicit solutions exist? → Variation of Constants
Q4 What do the dynamics of contingent claims look like? → Ito Formula
3.1.2 What is a Reasonable Model for Stock Dynamics?
Model for the Money Market Account
The solution of the ODE B′(t) = r B(t) with B(0) = b_0 is

B(t) = b_0 exp(rt)

and thus

ln(B(t)) = ln(b_0) + r t.
Model for the Stock
The result for the money market account motivates the following model for the stock dynamics:

ln(S(t)) = ln(s_0) + α t + "randomness"

Notation: Y(t) = "randomness", i.e. Y(t) = ln(S(t)) − ln(s_0) − α t.
Reasonable properties of Y:
1) no tendency, i.e. E[Y(t)] = 0,
2) randomness increasing with time t, i.e. Var(Y(t)) increases with t,
3) no memory, i.e. for Y(t) = Y(s) + [Y(t) − Y(s)] the difference [Y(t) − Y(s)]
• depends only on t − s and
• is independent of Y(s).
Properties which would make life easier:
4) normally distributed, i.e. Y(t) ~ N(0, σ² t), σ > 0,
5) continuous.
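Properties 1), 2) and 4) can be checked by Monte Carlo for the natural candidate Y(t) = σW(t), sampling Y(t) ~ N(0, σ²t) directly (σ, t and the sample size are assumed illustration values):

```python
# Monte Carlo check of properties 1), 2), 4) for the candidate
# "randomness" Y(t) = sigma * W(t), sampled as N(0, sigma^2 * t);
# sigma, t and the sample size are assumed.
import math
import random

random.seed(1)
sigma, t, n_paths = 0.3, 2.0, 200_000

samples = [sigma * random.gauss(0.0, math.sqrt(t)) for _ in range(n_paths)]

mean = sum(samples) / n_paths
var = sum(y * y for y in samples) / n_paths

print(mean)  # should be close to 0           (no tendency)
print(var)   # should be close to sigma^2 * t (variance grows linearly in t)
```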
3.2 On Stochastic Processes
Definition 3.1 (Stochastic Process)
A stochastic process X defined on (Ω, F) is a collection of random variables {X_t}_{t∈I}, i.e. σ{X_t} ⊂ F.
Remark. Unless otherwise stated, our stochastic processes take values in the measurable space (R^n, B(R^n)), i.e. the n-dimensional Euclidean space equipped with the σ-field of Borel sets. Therefore, every X_t is (F, B(R^n))-measurable.
3.2.1 Cadlag and Caglad
Definition 3.2 (Cadlag and Caglad Functions)
Let f : [0, T] → R^n be a function with existing left and right limits, i.e. for each t ∈ [0, T] the limits

f(t−) = lim_{s↑t} f(s),  f(t+) = lim_{s↓t} f(s)

exist. The function f is cadlag if f(t) = f(t+) and caglad if f(t) = f(t−).
A cadlag function cannot jump around too wildly, as the next proposition shows.
Proposition 3.1
(i) A cadlag function f can have at most a countable number of discontinuities, i.e. {t ∈ [0, T] : f(t) ≠ f(t−)} is countable.
(ii) For any ε > 0, the number of discontinuities on [0, T] with jump size larger than ε is finite.
Proof. See Fristedt/Gray (1997).
3.2.2 Modification, Indistinguishability, Measurability
Recall the following definition:
Definition 3.3 (Filtration)
Consider a σ-field F. A family {F_t} of sub-σ-fields F_t, t ∈ I, is said to be a filtration if F_s ⊂ F_t for s ≤ t and s, t ∈ I. Here I denotes some ordered index set.
Definition 3.4 (Adapted Process)
A process X is said to be adapted to the filtration {F_t}_{t∈I} if every X_t is measurable w.r.t. F_t, i.e. σ{X_t} ⊂ F_t.
Remarks.
• The notation {X_t, F_t}_{t∈I} indicates that the process X is adapted to {F_t}_{t∈I}.
• For fixed ω ∈ Ω, the realization {X_t(ω)}_t is said to be a path of X. For this reason, a stochastic process can be interpreted as a function-valued random variable.
Definition 3.5 (Modification, Indistinguishability)
Let X and Y be stochastic processes.
(i) Y is a modification of X if

P(ω : X_t(ω) = Y_t(ω)) = 1 for all t ≥ 0.

(ii) X and Y are indistinguishable if

P(ω : X_t(ω) = Y_t(ω) for all t ≥ 0) = 1,

i.e. X and Y have the same paths P-a.s.
Example. Consider a positive random variable τ with a continuous distribution. Set X_t ≡ 0 and Y_t = 1_{t=τ}. Then Y is a modification of X, but P(X_t = Y_t for all t ≥ 0) = 0.
Proposition 3.2 (Modification, Indistinguishability)
Let X and Y be stochastic processes.
(i) If X and Y are indistinguishable, then Y is a modification of X and vice versa.
(ii) If X and Y have right-continuous paths and Y is a modification of X, then X and Y are indistinguishable.
(iii) Let X be adapted to {F_t}. If Y is a modification of X and F_0 contains all P-null sets of F, then Y is adapted to {F_t}_t.
Proof. (i) is obvious.
(ii) Since Y is a modification of X, we have

P(ω : X_t(ω) = Y_t(ω) for all t ∈ [0, ∞) ∩ ℚ) = 1.

By the right-continuity assumption, the claim follows.
(iii) Exercise. □
If X is an (adapted) stochastic process, then every X_t is (F, B(R^n))-measurable ((F_t, B(R^n))-measurable). However, X is actually a function-valued random variable, and so, for technical reasons, it is often convenient to have some joint measurability properties.
Definition 3.6 (Measurable Stochastic Process)
A stochastic process is said to be measurable if for each A ∈ B(R^n) the set {(t, ω) : X_t(ω) ∈ A} belongs to the product σ-field B([0, ∞)) ⊗ F, i.e. if the mapping

(t, ω) ↦ X_t(ω) : ([0, ∞) × Ω, B([0, ∞)) ⊗ F) → (R^n, B(R^n))

is measurable.
Definition 3.7 (Progressively Measurable Stochastic Process)
A stochastic process is said to be progressively measurable w.r.t. the filtration {F_t} if for each t ≥ 0 and A ∈ B(R^n) the set {(s, ω) ∈ [0, t] × Ω : X_s(ω) ∈ A} belongs to the product σ-field B([0, t]) ⊗ F_t, i.e. if the mapping

(s, ω) ↦ X_s(ω) : ([0, t] × Ω, B([0, t]) ⊗ F_t) → (R^n, B(R^n))

is measurable for each t ≥ 0.
Remark. Any progressively measurable process is measurable and adapted (Exercise). The following proposition shows the extent to which the converse is true.
Proposition 3.3
If a stochastic process is measurable and adapted to the filtration {F_t}, then it has a modification which is progressively measurable w.r.t. {F_t}.
Proof. See Karatzas/Shreve (1991, p. 5).
Fortunately, if a stochastic process is left- or right-continuous, a stronger result can be proved:
Proposition 3.4
If the stochastic process X is adapted to the filtration {F_t} and right- or left-continuous, then X is also progressively measurable w.r.t. {F_t}.
Proof. See Karatzas/Shreve (1991, p. 5).
Remark. If X is right- or left-continuous, but not necessarily adapted to {F_t}, then X is at least measurable.
3.2.3 Stopping Times
Consider a measurable space (Ω, F). An F-measurable random variable τ : Ω → I ∪ {∞}, where I = [0, ∞) or I = N, is said to be a random time. Two special classes of random times are of particular importance.
Definition 3.8 (Stopping Time, Optional Time)
(i) A random time is a stopping time w.r.t. the filtration {F_t}_{t∈I} if the event {τ ≤ t} ∈ F_t for every t ∈ I.
(ii) A random time is an optional time w.r.t. the filtration {F_t}_{t∈I} if the event {τ < t} ∈ F_t for every t ∈ I.
Propositions 3.5 and 3.6 show how both concepts are connected.
Proposition 3.5
If τ is optional and c > 0 is a positive constant, then τ +c is a stopping time.
Proof. Exercise.
Definition 3.9 (Continuity of Filtrations)
(i) A filtration {G_t} is said to be right-continuous if

G_t = G_{t+} := ⋂_{ε>0} G_{t+ε}.

(ii) A filtration {G_t} is said to be left-continuous if

G_t = G_{t−} := σ( ⋃_{s<t} G_s ).
Remark. Let X be a stochastic process.
• Loosely speaking, left-continuity of {F^X_t} means that X_t can be guessed by observing X_s, 0 ≤ s < t.
• Right-continuity means intuitively that if X_s has been observed for 0 ≤ s ≤ t, then one cannot learn more by peeking infinitesimally far into the future.
Proposition 3.6
Every stopping time is optional, and both concepts coincide if the filtration is right-continuous.
Proof. Consider a stopping time τ. Then {τ < t} = ⋃_{t>ε>0} {τ ≤ t − ε} and {τ ≤ t − ε} ∈ F_{t−ε} ⊂ F_t, i.e. τ is optional. On the other hand, consider an optional time τ w.r.t. a right-continuous filtration {F_t}. Since {τ ≤ t} = ⋂_{t+ε>u>t} {τ < u}, we have {τ ≤ t} ∈ F_{t+ε} for every ε > 0, implying {τ ≤ t} ∈ ⋂_{ε>0} F_{t+ε} = F_{t+} = F_t. □
Remark. In our applications, we will always work with right-continuous filtrations.⁵

⁵ See Definition 3.18.
Proposition 3.7
(i) Let τ_1 and τ_2 be stopping times. Then τ_1 ∧ τ_2 = min{τ_1, τ_2}, τ_1 ∨ τ_2 = max{τ_1, τ_2}, τ_1 + τ_2, and ατ_1 with α ≥ 1 are stopping times.
(ii) Let {τ_n}_{n∈N} be a sequence of optional times. Then the random times

sup_{n≥1} τ_n,  inf_{n≥1} τ_n,  limsup_{n→∞} τ_n,  liminf_{n→∞} τ_n

are all optional. Furthermore, if the τ_n's are stopping times, then so is sup_{n≥1} τ_n.
Proof. Exercise.
Proposition and Definition 3.8 (Stopping Time σ-Field)
Let τ be a stopping time of the filtration {F_t}. Then the set

{A ∈ F : A ∩ {τ ≤ t} ∈ F_t for all t ≥ 0}

is a σ-field, the so-called stopping time σ-field F_τ.
Proof. Exercise.
Proposition 3.9 (Properties of a Stopped σ-Field)
Let τ_1 and τ_2 be stopping times.
(i) If τ_1 = t ∈ [0, ∞), then F_{τ_1} = F_t.
(ii) If τ_1 ≤ τ_2, then F_{τ_1} ⊂ F_{τ_2}.
(iii) F_{τ_1 ∧ τ_2} = F_{τ_1} ∩ F_{τ_2} and the events

{τ_1 < τ_2}, {τ_2 < τ_1}, {τ_1 ≤ τ_2}, {τ_2 ≤ τ_1}, {τ_1 = τ_2}

belong to F_{τ_1} ∩ F_{τ_2}.
Proof. Exercise.
Definition 3.10 (Stopped Process)
(i) If X is a stochastic process and τ is a random time, we define the function X_τ on the event {τ < ∞} by

X_τ(ω) := X_{τ(ω)}(ω).

If X_∞(ω) is defined for all ω ∈ Ω, then X_τ can also be defined on Ω by setting X_τ(ω) := X_∞(ω) on {τ = ∞}.
(ii) The process stopped at time τ is the process {X_{t∧τ}}_{t∈[0,∞)}.
Proposition 3.10 (Measurability of a Stopped Process)
(i) If the process X is measurable and the random time τ is finite, then the function X_τ is a random variable.
(ii) If the process X is progressively measurable w.r.t. {F_t}_{t∈[0,∞)} and τ is a stopping time of this filtration, then the random variable X_τ, defined on the set {τ < ∞} ∈ F_τ, is F_τ-measurable, and the stopped process {X_{τ∧t}, F_t}_{t∈[0,∞)} is progressively measurable.
Proof. Karatzas/Shreve (1991, p. 5 and p. 9).
Remark. The following result is obvious: If X is adapted and cadlag and τ is a stopping time, then

X_{t∧τ} = X_t 1_{t<τ} + X_τ 1_{t≥τ}

is also adapted and cadlag.
Definition 3.11 (Hitting Time)
Let X be a stochastic process and let A ∈ B(R^n) be a Borel set in R^n. Define

τ_A(ω) = inf{t > 0 : X_t(ω) ∈ A}.

Then τ_A is called the hitting time of A for X.
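On a discrete time grid, a hitting time and the corresponding stopped path can be computed directly for a simulated Brownian path; the grid, the level and the set A = (level, ∞) are assumed illustration choices, and the hitting time is only resolved up to the grid spacing:

```python
# Hitting time of the open set A = (level, inf) for a simulated Brownian
# path, and the corresponding stopped path X_{t ^ tau}; step count, seed
# and level are assumed illustration values.
import math
import random

random.seed(2)
n, T, level = 10_000, 5.0, 1.0
dt = T / n

path = [0.0]
for _ in range(n):
    path.append(path[-1] + random.gauss(0.0, math.sqrt(dt)))

# tau_A = inf{t > 0 : X_t in A}, evaluated on the discrete grid
tau_index = next((k for k, x in enumerate(path) if x > level), None)

# stopped path X_{t ^ tau}: frozen at the hitting value from tau onwards
if tau_index is not None:
    stopped = path[:tau_index + 1] + [path[tau_index]] * (n - tau_index)
else:
    stopped = path[:]       # tau not reached within [0, T]: nothing to freeze

print(tau_index)
```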
Proposition 3.11
Let X be a right-continuous process adapted to the filtration {F_t}.
(i) If A_o is an open set, then the hitting time of A_o is an optional time.
(ii) If A_c is a closed set, then the random variable

τ̃_{A_c}(ω) = inf{t > 0 : X_t(ω) ∈ A_c or X_{t−}(ω) ∈ A_c}

is a stopping time.
Proof. (i) For simplicity, let n = 1. We have {τ_{A_o} < t} = ⋃_{s∈ℚ∩[0,t)} {X_s ∈ A_o}. The inclusion ⊃ is obvious. Suppose that the inclusion ⊂ does not hold. Then we have X_i ∈ A_o for some i ∈ R∖ℚ with i < t, but X_q ∉ A_o for all q ∈ ℚ with q < t. Since A_o is open, there exists a δ > 0 such that (X_i − δ, X_i + δ) ⊂ A_o. Therefore, X_q ∉ (X_i − δ, X_i + δ) for all such q. However, there exists a sequence (q_n)_{n∈N} with q_n ↓ i and q_n ∈ ℚ ∩ [0, t), so that X_{q_n} → X_i by the right-continuity of X, a contradiction.
(ii) See Karatzas/Shreve (1991, p. 39) and Protter (2005, p. 5). □
Remarks.
• If the filtration is right-continuous, then the hitting time τ_{A_o} is a stopping time.
• If X is continuous, then the stopping time τ̃_{A_c} is a hitting time of A_c.
• Even if X is continuous, in general the hitting time of A_o is not a stopping time. For example, suppose that X is real-valued, that for some ω, X_t(ω) ≤ 1 for t ≤ 1, and that X_1(ω) = 1. Let A_o = (1, ∞). Without looking slightly ahead of time 1, we cannot tell whether or not τ_{A_o} = 1.
3.3 Prominent Examples of Stochastic Processes
Theorem and Definition 3.12 (Brownian Motion)
(i) An adapted, real-valued process {W_t, F_t}_{t∈[0,∞)} such that W_0 = 0 a.s. and
• W_t − W_s ~ N(0, t − s) for 0 ≤ s < t ("stationary increments"),
• W_t − W_s is independent of F_s for 0 ≤ s < t ("independent increments"),
exists and is said to be a one-dimensional Brownian motion.
(ii) An n-dimensional Brownian motion W := (W^1, ..., W^n) is an n-dimensional vector of independent one-dimensional Brownian motions.
Proof. Applying Kolmogorov's extension theorem,⁶ it is straightforward to prove the existence of a Brownian motion. See, e.g., Karatzas/Shreve (1991, pp. 47ff). □
Theorem 3.13 (Kolmogorov's Continuity Theorem)
Suppose that the process X = {X_t}_{t∈[0,∞)} satisfies the following condition: for all T > 0 there exist positive constants α, β, C such that

E[|X_t − X_s|^α] ≤ C |t − s|^{1+β},  0 ≤ s, t ≤ T.

Then there exists a continuous modification of X.
Proof. See Karatzas/Shreve (1991, pp. 53ff).
Corollary 3.14 (Continuous Version of the Brownian Motion)
An n-dimensional Brownian motion possesses a continuous modification.
Proof. One can show that

E[|W_t − W_s|⁴] = n(n + 2) |t − s|².

Hence, we can apply Kolmogorov's continuity theorem with α = 4, C = n(n + 2), and β = 1. □
Remark. We will always work with this continuous version.
Definition 3.12 (Poisson Process)
A Poisson process with intensity λ > 0 is an adapted, integer-valued cadlag process N = {N_t, F_t}_{t∈[0,∞)} such that N_0 = 0, and for 0 ≤ s < t < ∞, the difference N_t − N_s is independent of F_s and is Poisson distributed with mean λ(t − s).⁷
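A standard way to simulate a Poisson path uses the fact that the interarrival times are i.i.d. exponential(λ); the sketch below (with assumed λ, horizon and sample size) checks that the sample mean of N_t is close to λt:

```python
# Simulation sketch of a Poisson process: N_t is the number of i.i.d.
# exponential(lam) interarrival times that fit into [0, t]; the intensity,
# horizon and sample size below are assumed illustration values.
import random

lam, t, n_paths = 2.0, 3.0, 50_000
rng = random.Random(3)

def poisson_count(lam, t, rng):
    """One sample of N_t."""
    count, clock = 0, rng.expovariate(lam)
    while clock <= t:
        count += 1
        clock += rng.expovariate(lam)
    return count

counts = [poisson_count(lam, t, rng) for _ in range(n_paths)]
mean = sum(counts) / n_paths
print(mean)   # should be close to lam * t = 6, so N_t - lam*t is centered at 0
```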
3.4 Martingales
Definition 3.13 (Martingale, Supermartingale, Submartingale)
A process X = {X_t}_{t∈[0,∞)} adapted to the filtration {F_t}_{t∈[0,∞)} is called a martingale (resp. supermartingale, submartingale) with respect to {F_t}_{t∈[0,∞)} if
(i) X_t ∈ L¹(dP), i.e. E[|X_t|] < ∞,

⁶ Kolmogorov's extension theorem basically says that for given (finite-dimensional) distributions satisfying a certain consistency condition there exists a probability space and a stochastic process such that its (finite-dimensional) distributions coincide with the given distributions. See, e.g., Karatzas/Shreve (1991, p. 50).
⁷ A random variable Y taking values in N ∪ {0} is Poisson distributed with parameter λ > 0 if P(Y = n) = e^{−λ} λ^n / n!, n ∈ N ∪ {0}.
(ii) if s ≤ t, then E[X_t | F_s] = X_s a.s. (resp. E[X_t | F_s] ≤ X_s, resp. E[X_t | F_s] ≥ X_s).
Examples.
• Let W be a one-dimensional Brownian motion. Then W is a martingale and W² is a submartingale.
• The arithmetic Brownian motion X_t := µt + σW_t, µ, σ ∈ R, is a martingale for µ = 0, a submartingale for µ > 0, and a supermartingale for µ < 0.
• Let N be a Poisson process with intensity λ. Then {N_t − λt}_t is a martingale.
Definition 3.14 (Square-Integrable)
Let X = {X_t, F_t}_{t∈[0,∞)} be a right-continuous martingale. We say that X is square-integrable if E[X_t²] < ∞ for every t ≥ 0. If, in addition, X_0 = 0 a.s., we write X ∈ M_2 (or X ∈ M_2^c if X is also continuous).
Proposition 3.15 (Functions of Submartingales)
(i) Let X = (X^1, ..., X^n) with real-valued components X^k be a martingale w.r.t. {F_t}_{t∈[0,∞)} and let ϕ : R^n → R be a convex function such that E[|ϕ(X_t)|] < ∞ for all t ∈ [0, ∞). Then ϕ(X) is a submartingale w.r.t. {F_t}_{t∈[0,∞)}.
(ii) Let X = (X^1, ..., X^n) with real-valued components X^k be a submartingale w.r.t. {F_t}_{t∈[0,∞)} and let ϕ : R^n → R be a convex, nondecreasing function such that E[|ϕ(X_t)|] < ∞ for all t ∈ [0, ∞). Then ϕ(X) is a submartingale w.r.t. {F_t}_{t∈[0,∞)}.
Proof. Follows from Jensen's inequality (Exercise). □
For any X ∈ M_2 and 0 ≤ t < ∞, we define

||X||_t := √(E[X_t²])  and  ||X|| := ∑_{n=1}^∞ (||X||_n ∧ 1) / 2^n.

Note that ||X − Y|| defines a pseudometric on M_2. If we identify indistinguishable processes, the pseudometric ||X − Y|| becomes a metric on M_2.⁸
Proposition 3.16
If we identify indistinguishable processes, the pair (M_2, ||·||) is a complete metric space, and M_2^c is a closed subset of M_2.
Proof. See, e.g., Karatzas/Shreve (1991, pp. 37f).
An important question now arises: What happens to the martingale property of a stopped martingale? For bounded stopping times, nothing changes.
Theorem 3.17 (Optional Sampling Theorem, 1st Version)
Let X = {X_t, F_t}_{t∈[0,∞)} be a right-continuous submartingale (supermartingale) and let τ_1 and τ_2 be bounded stopping times of the filtration {F_t} with τ_1 ≤ τ_2 ≤ c ∈ R a.s. Then

E[X_{τ_2} | F_{τ_1}] ≥ (≤) X_{τ_1}.

⁸ This means: (i) ||X − Y|| ≥ 0 and ||X − Y|| = 0 iff X = Y. (ii) ||X − Y|| = ||Y − X||. (iii) ||X − Y|| ≤ ||X − Z|| + ||Z − Y||.
If the stopping times are unbounded, this result in general only holds if X can be extended to ∞ such that X̃ := {X_t, F_t}_{t∈[0,∞]} is still a sub- or supermartingale.
Theorem 3.18 (Sub- and Supermartingale Convergence)
Let X = {X_t, F_t}_{t∈[0,∞)} be a right-continuous submartingale (supermartingale) and assume sup_{t≥0} E[X_t⁺] < ∞ (sup_{t≥0} E[X_t⁻] < ∞). Then the limit X_∞(ω) = lim_{t→∞} X_t(ω) exists for a.e. ω ∈ Ω, and E[|X_∞|] < ∞.
Proof. See Karatzas/Shreve (1991, pp. 17f).
This proposition ensures that a limit exists, but it is not at all clear whether X̃ is a sub- or supermartingale. To solve this problem, we need to require the following condition:
Definition 3.15 (Uniform Integrability)
A family of random variables (U_α)_{α∈A} is uniformly integrable (UI) if

lim_{n→∞} sup_{α∈A} ∫_{{|U_α| ≥ n}} |U_α| dP = 0.
Using the definition, it is sometimes not easy to check whether a family of random variables is uniformly integrable.
Theorem 3.19 (Characterization of UI)
Let (U_α)_{α∈A} be a subset of L¹. The following are equivalent:
(i) (U_α)_{α∈A} is uniformly integrable.
(ii) There exists a positive, increasing, convex function g defined on [0, ∞) such that

lim_{x→∞} g(x)/x = +∞  and  sup_{α∈A} E[g(|U_α|)] < ∞.

Remark. A good choice is often g(x) = x^{1+ε} with ε > 0.
Proposition 3.20
The following conditions are equivalent for a nonnegative, right-continuous submartingale X = {X_t, F_t}_{t∈[0,∞)}:
(i) It is a uniformly integrable family of random variables.
(ii) It converges in L¹ as t → ∞.
(iii) It converges P-a.s. (t → ∞) to an integrable random variable X_∞ such that {X_t, F_t}_{t∈[0,∞]} is a submartingale.
For the implication "(i) ⇒ (ii) ⇒ (iii)", the assumption of nonnegativity can be dropped.
Theorem 3.21 (UI Martingales Can Be Closed)
The following conditions are equivalent for a right-continuous martingale X = {X_t, F_t}_{t∈[0,∞)}:
(i) It is a uniformly integrable family of random variables.
(ii) It converges in L¹ as t → ∞.
(iii) It converges P-a.s. (t → ∞) to an integrable random variable X_∞ such that {X_t, F_t}_{t∈[0,∞]} is a martingale, i.e. the martingale is closed by X_∞.
(iv) There exists an integrable random variable Y such that X_t = E[Y | F_t] a.s. for all t ≥ 0.
If (iv) holds and X_∞ is the random variable in (iii), then E[Y | F_∞] = X_∞ a.s.
We now formulate a more general version of Doob's optional sampling theorem:
Theorem 3.22 (Optional Sampling Theorem, 2nd Version)
Let X = {X_t, F_t}_{t∈[0,∞]} be a right-continuous submartingale (supermartingale) with a last element X_∞, and let τ_1 and τ_2 be stopping times of the filtration {F_t} with τ_1 ≤ τ_2 a.s. Then

E[X_{τ_2} | F_{τ_1}] ≥ (≤) X_{τ_1}.

Proof. See, e.g., Karatzas/Shreve (1991, pp. 19f).
Proposition 3.23 (Stopped (Sub)Martingales)
Let X = {X_t, F_t}_{t∈[0,∞)} be an adapted right-continuous (sub)martingale and let τ be a bounded stopping time. The stopped process {X_{t∧τ}, F_t}_{t∈[0,∞)} is also an adapted right-continuous (sub)martingale.
Proof. Exercise.
Remarks.
• The difficulty in this proposition is to show that the stopped process is a (sub)martingale for the filtration {F_t}. It is obvious that the stopped process is a (sub)martingale for {F_{t∧τ}}.
• If {X_t, F_t}_{t∈[0,∞]} is a UI right-continuous martingale, then the stopping time τ may be unbounded (see Protter (2005, p. 10)).
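A numerical sketch of the stopped-martingale statement: stopping a simulated Brownian motion at the bounded stopping time τ ∧ T, where τ is the first exit time of an assumed band, should leave the expectation at its initial value 0.

```python
# Numerical sketch of Proposition 3.23: stop a simulated Brownian motion
# at the bounded stopping time tau ^ T, where tau is the first exit time
# of the band (-band, band), and check that E[W_{T ^ tau}] is still about
# 0. Band, grid and sample size are assumed illustration values.
import math
import random

rng = random.Random(4)
n_steps, T, band, n_paths = 200, 1.0, 0.5, 20_000
dt = T / n_steps

total = 0.0
for _ in range(n_paths):
    w = 0.0
    for _ in range(n_steps):
        w += rng.gauss(0.0, math.sqrt(dt))
        if abs(w) >= band:        # tau has occurred: freeze the path here
            break
    total += w                    # w = W_{T ^ tau} for this path

mean = total / n_paths
print(mean)   # should be close to 0, the initial value of the martingale
```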
Corollary 3.24 (Conditional Expectation and Stopped σ-Fields)
Let τ_1 and τ_2 be stopping times and Z be an integrable random variable. Then

E[ E[Z | F_{τ_1}] | F_{τ_2} ] = E[ E[Z | F_{τ_2}] | F_{τ_1} ] = E[Z | F_{τ_1 ∧ τ_2}],  P-a.s.

Proof. Exercise.
For technical reasons, we sometimes need to weaken the martingale concept.
Definition 3.16 (Local Martingale)
Let X = {X_t, F_t}_{t∈[0,∞)} be a (continuous) process. If there exists a nondecreasing sequence {τ_n}_{n∈N} of stopping times of {F_t} such that {X^{(n)}_t := X_{t∧τ_n}, F_t}_{t∈[0,∞)} is a martingale for each n ≥ 1 and P(lim_{n→∞} τ_n = ∞) = 1, then we say that X is a (continuous) local martingale.
Remarks.
• Every martingale is a local martingale (choose τ_n = n).
• However, there exist local martingales which are not martingales. In particular, E[X_t] need not exist for a local martingale.
Proposition 3.25
A nonnegative local martingale is a supermartingale.
Proof. Follows from Fatou's lemma (Exercise). □
Deﬁnition 3.17 (Local Property)
Let X be a stochastic process. A property π is said to hold locally if there exists a
sequence of stopping times ¦τ
n
¦
n∈IN
with P(lim
n→∞
τ
n
= ∞) = 1 such that X
(n)
has property π for each n ≥ 1.
3.5 Completion, Augmentation, Usual Conditions
The following definition is very important:
Definition 3.18 (Usual Conditions)
A filtered probability space (Ω, G, P, {G_t}) satisfies the usual conditions if
(i) it is complete,⁹
(ii) G_0 contains all the P-null sets of G, and
(iii) {G_t} is right-continuous.
In the sequel, we impose the following
Standing assumption: We always consider a filtered probability space (Ω, F, P, {F_t}) satisfying the usual conditions.
One of the main reasons for this assumption is that the following proposition holds:
Proposition 3.26
Let (Ω, G, P, {G_t}) satisfy the usual conditions. Then every martingale for {G_t} possesses a unique cadlag modification.
Proof. See, e.g., Karatzas/Shreve (1991, pp. 16f).
Unfortunately, we are confronted with the following result.
Proposition 3.27
The filtration {F^W_t} is left-continuous, but fails to be right-continuous.

⁹ A probability space is complete if for any N ⊂ Ω such that there exists an Ñ ∈ G with N ⊂ Ñ and P(Ñ) = 0, we have N ∈ G.
Fortunately, there is a way out. Let F_∞ := σ( ⋃_{t≥0} F_t ) and consider the space (Ω, F^W_∞, P). Define the collection of P-null sets by

N := {A ⊂ Ω : ∃ B ∈ F^W_∞ with A ⊂ B, P(B) = 0}.

The P-augmentation of {F^W_t} is defined by F_t := σ(F^W_t ∪ N) and is called the Brownian filtration.
Theorem 3.28 (Brownian Filtration)
The Brownian filtration is left- and right-continuous. Besides, W is still a Brownian motion w.r.t. the Brownian filtration.
This is the reason why we work with the Brownian filtration instead of {F^W_t}. For a Poisson process, a similar result holds.
Proposition 3.29 (Augmented Filtration of a Poisson Process)
The P-augmentation of the natural filtration induced by a Poisson process N is right-continuous. Besides, N is still a Poisson process w.r.t. this P-augmentation.
The proofs of the previous results can be found in Karatzas/Shreve (1991), pp. 89ff.
Remark. More generally, one can prove the following results:
(i) If a stochastic process X is left-continuous, then the filtration {F^X_t} is left-continuous.
(ii) Even if X is continuous, {F^X_t} need not be right-continuous.
(iii) For an n-dimensional strong Markov process X the augmented filtration is right-continuous.
3.6 Ito Integral
Our goal is to define an integral of the form

∫_0^t X(s) dW(s),

where X is an appropriate stochastic process. From the Lebesgue-Stieltjes theory we know that for a measurable function f and a process A of finite variation on [0, t] one can define

∫_0^t f(s) dA(s) = lim_{n→∞} ∑_{k=0}^{n−1} f(kt/n) [ A((k+1)t/n) − A(kt/n) ].

However, the variation of a Brownian motion is unbounded on any interval. Therefore, naive stochastic integration is impossible (see Protter (2005, pp. 43ff)).
Good News: We can define a stochastic integral in terms of L²-convergence!
Agenda (disregarding measurability for the moment):
1st Step. Define a stochastic integral for simple processes κ (constant processes except for finitely many jumps):

I_t(κ) := ∫_0^t κ_s dW_s := ?
2nd Step. Every L²[0, T]-process X can be approximated by a sequence {κ^{(n)}}_n of simple processes such that

||X − κ^{(n)}||²_T := E[ ∫_0^T (X(s) − κ^{(n)}(s))² ds ] → 0.
3rd Step. For each n the stochastic integral

I_t(κ^{(n)}) = ∫_0^t κ^{(n)}(s) dW(s)

is a well-defined random variable, and the sequence {I_t(κ^{(n)})}_n converges (in some sense) to a random variable I_t(X).
4th Step. Define the stochastic integral of an L²[0, T]-process X as follows:

∫_0^t X(s) dW(s) := lim_{n→∞} I_t(κ^{(n)}) = lim_{n→∞} ∫_0^t κ^{(n)}(s) dW(s).

5th Step. Extend the definition by localization.
Standing Assumption. {F_t} is the Brownian filtration of a Brownian motion W.
1st Step.
Definition 3.19 (Simple Process)
A stochastic process {κ_t}_{t∈[0,T]} is a simple process if there exist real numbers 0 = t_0 < t_1 < ... < t_p = T, p ∈ N, and bounded random variables Φ_i : Ω → R, i = 0, 1, ..., p, such that Φ_0 is F_0-measurable, Φ_i is F_{t_{i−1}}-measurable for i ≥ 1, and for all ω ∈ Ω we have

κ_t(ω) = Φ_0(ω) 1_{{0}}(t) + ∑_{i=1}^p Φ_i(ω) 1_{(t_{i−1}, t_i]}(t).
Remarks.
• κ_t is F_{t_{i−1}}-measurable for t ∈ (t_{i−1}, t_i] (measurable w.r.t. the left-hand endpoint of the interval).
• κ is a left-continuous step function.
Definition 3.20 (Stochastic Integral for Simple Processes)
For a simple process κ and t ∈ [0, T] the stochastic integral I(κ) is defined by

I_t(κ) := ∫_0^t κ_s dW_s := ∑_{1≤i≤p} Φ_i (W_{t_i ∧ t} − W_{t_{i−1} ∧ t}).
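This definition translates directly into code. The sketch below uses an assumed partition and deterministic (hence trivially F_{t_{i−1}}-measurable) values Φ_i, and checks by Monte Carlo that E[I_T(κ)²] matches ∫_0^T κ_s² ds, anticipating Proposition 3.30 (ii):

```python
# Stochastic integral I_T(kappa) for a simple process, plus a Monte Carlo
# check of the Ito isometry E[I_T(kappa)^2] = E[ int_0^T kappa_s^2 ds ].
# The partition and the deterministic values Phi_i are assumed.
import math
import random

rng = random.Random(5)
times = [0.0, 0.3, 0.7, 1.0]      # 0 = t_0 < t_1 < t_2 < t_3 = T
phis  = [1.0, -2.0, 0.5]          # Phi_1, Phi_2, Phi_3 (constants)

def integral_sample(rng):
    """One sample of I_T(kappa) = sum_i Phi_i (W_{t_i} - W_{t_{i-1}})."""
    total = 0.0
    for (a, b), phi in zip(zip(times, times[1:]), phis):
        total += phi * rng.gauss(0.0, math.sqrt(b - a))   # increment W_{t_i}-W_{t_{i-1}}
    return total

n_paths = 200_000
second_moment = sum(integral_sample(rng) ** 2 for _ in range(n_paths)) / n_paths

# Right-hand side of the isometry: int_0^T kappa_s^2 ds (kappa deterministic)
iso = sum(phi ** 2 * (b - a) for (a, b), phi in zip(zip(times, times[1:]), phis))
print(second_moment, iso)   # the two numbers should be close
```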
Proposition 3.30 (Stochastic Integral for Simple Processes)
For a simple process κ we have:
(i) The integral I(κ) = {I_t(κ)}_{t∈[0,T]} is a continuous martingale for {F_t}_{t∈[0,T]}.
(ii) E[(I_t(κ))²] = E[ ∫_0^t κ_s² ds ] for all t ∈ [0, T].
(iii) E[ sup_{0≤t≤T} |I_t(κ)|² ] ≤ 4 E[ ∫_0^T κ_s² ds ].
Proof. (i) Exercise.
(ii) For simplicity, let t = t_k. Then

E[I_t(κ)²] = ∑_{i=1}^k ∑_{j=1}^k E[ Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}}) ].
1st case: i > j. Conditioning on F_{t_{i−1}} gives

E[ Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}}) ]
= E[ E[ Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}}) | F_{t_{i−1}} ] ]
= E[ Φ_i Φ_j (W_{t_j} − W_{t_{j−1}}) ( E[W_{t_i} | F_{t_{i−1}}] − W_{t_{i−1}} ) ] = 0,

since E[W_{t_i} | F_{t_{i−1}}] = W_{t_{i−1}}.
2nd case: i = j.

E[ Φ_i² (W_{t_i} − W_{t_{i−1}})² ] = E[ Φ_i² E[ (W_{t_i} − W_{t_{i−1}})² | F_{t_{i−1}} ] ] = E[ Φ_i² (t_i − t_{i−1}) ].
Hence,

E[I_t(κ)²] = E[ ∑_{i=1}^k Φ_i² (t_i − t_{i−1}) ] = E[ ∫_0^t κ_s² ds ].
(iii) Apply (i), (ii), and Doob's inequality (see Karatzas/Shreve (1991, p. 14)): for a right-continuous martingale {M_t} with E[M_t²] < ∞ for all t ≥ 0 we have

E[ sup_{0≤t≤T} |M_t|² ] ≤ 4 E[M_T²].

It remains to check:

E[I_t(κ)²] = E[ ∑_{i=1}^k Φ_i² (t_i − t_{i−1}) ] ≤ T ∑_{i=1}^k E[Φ_i²] < ∞,

since, by Definition 3.19, the Φ_i are bounded. □
Remarks.
• Stochastic integrals over [t, T] are defined by

∫_t^T κ_s dW_s := ∫_0^T κ_s dW_s − ∫_0^t κ_s dW_s.
• Let κ¹ and κ² be simple processes and a, b ∈ R. The stochastic integral for simple processes is linear, i.e.

I_t(aκ¹ + bκ²) = a I_t(κ¹) + b I_t(κ²).

This follows directly from Definition 3.20. Note that linear combinations of simple processes are simple.
• For κ ≡ 1 we obtain ∫_0^t 1 dW_s = W_t. Hence,

E[ ( ∫_0^t dW_s )² ] = E[W_t²] = t = ∫_0^t ds.

This is the reason why people sometimes write dW_t = √dt, which is wrong from a mathematical point of view.
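The heuristic behind "dW_t = √dt" is visible in simulation: the sum of squared Brownian increments over [0, t] concentrates near t as the partition is refined (grid size and seed below are assumed illustration values):

```python
# The heuristic "dW = sqrt(dt)" in simulation: for a fine partition the
# sum of squared Brownian increments over [0, t] is close to t; grid
# size and seed are assumed illustration values.
import math
import random

rng = random.Random(6)
t, n = 1.0, 100_000
dt = t / n

sq_sum = 0.0
for _ in range(n):
    dW = rng.gauss(0.0, math.sqrt(dt))
    sq_sum += dW * dW        # each (dW)^2 has mean dt

print(sq_sum)   # close to t = 1
```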
2nd Step. Every L²[0, T]-process X can be approximated by a sequence {κ^{(n)}}_n of simple processes.
Define the vector space

L²[0, T] := L²([0, T], Ω, F, P, {F_t}_{t∈[0,T]})
:= { {X_t, F_t}_{t∈[0,T]} : progressively measurable, real-valued, with E[ ∫_0^T X_s² ds ] < ∞ }

with (semi)norm

||X||²_T := E[ ∫_0^T X_s² ds ],

where we identify processes X and Y with X = Y Lebesgue ⊗ P-a.e.
For simple processes the mapping κ ↦ I(κ) induces a norm on the vector space of the corresponding stochastic integrals via

||I(κ)||²_{L_T} := E[ ( ∫_0^T κ_s dW_s )² ] = E[ ∫_0^T κ_s² ds ] = ||κ||²_T,

where the middle equality is Proposition 3.30 (ii). Since the mapping κ ↦ I(κ) is linear and norm-preserving, it is an isometry, called the Ito isometry.
Proposition 3.31 (Simple Processes Approximate L²[0, T]-Processes)
The linear subspace of simple processes is dense in L²[0, T], i.e. for all X ∈ L²[0, T] there exists a sequence {κ^{(n)}}_n of simple processes with

||X − κ^{(n)}||²_T = E[ ∫_0^T (X_s − κ^{(n)}_s)² ds ] → 0 (as n → ∞).
Proof. Assume that $X \in \mathcal{L}^2[0,T]$ is continuous and bounded. Choose
$$\kappa^{(n)}_t(\omega) := X_0(\omega)\,1_{\{0\}}(t) + \sum_{k=0}^{2^n-1} X_{kT/2^n}(\omega)\,1_{(kT/2^n,\,(k+1)T/2^n]}(t).$$
Since $X$ is bounded, $\{\kappa^{(n)}\}_n$ is uniformly bounded and we can apply the dominated convergence theorem to obtain the desired result. For the general case, see Karatzas/Shreve (1991, p. 133). $\Box$
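The dyadic approximation in the proof can be illustrated numerically. The following sketch (not from the notes; numpy, the seed, and the grid sizes are our choices) takes one Brownian path as the continuous process $X$, freezes it at the left dyadic endpoints to build $\kappa^{(n)}$, and checks that the time-integral part of $\|X - \kappa^{(n)}\|_T^2$ shrinks as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
T, steps = 1.0, 2**12
dt = T / steps

# One Brownian path on a fine grid plays the role of the continuous process X.
dW = rng.normal(0.0, np.sqrt(dt), steps)
X = np.concatenate(([0.0], np.cumsum(dW)))

def l2_error(n):
    """Pathwise approximation of int_0^T (X_s - kappa^(n)_s)^2 ds for the
    dyadic simple process of Proposition 3.31 (left-endpoint freezing)."""
    block = steps // 2**n                      # fine-grid points per dyadic interval
    idx = (np.arange(steps) // block) * block  # index of the left dyadic endpoint
    kappa = X[idx]
    return float(np.sum((X[:-1] - kappa)**2) * dt)  # Riemann sum in time

errors = [l2_error(n) for n in (2, 4, 6, 8)]
```

On average the error halves with each extra dyadic level, so the coarsest approximation should be markedly worse than the finest one.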
3rd and 4th Step. Define
$$\mathcal{M}^2_c([0,T]) := \bigl\{M : M \text{ continuous martingale on } [0,T] \text{ w.r.t. } \{\mathcal{F}_t\},\ E[M_t^2] < \infty \text{ for every } t \in [0,T]\bigr\}.$$
Theorem and Definition 3.32 (Ito Integral for $X \in \mathcal{L}^2[0,T]$)
There exists a unique linear mapping $J : \mathcal{L}^2[0,T] \to \mathcal{M}^2_c([0,T])$ with the following properties:
(i) If $\kappa$ is simple, then $P(J_t(\kappa) = I_t(\kappa)$ for all $t \in [0,T]) = 1$.
(ii) $E[J_t(X)^2] = E\bigl[\int_0^t X_s^2\,ds\bigr]$ ("Ito isometry").
$J$ is unique in the following sense: If there exists a second mapping $J'$ satisfying (i) and (ii) for all $X \in \mathcal{L}^2[0,T]$, then $J(X)$ and $J'(X)$ are indistinguishable.
For $X \in \mathcal{L}^2[0,T]$, we define
$$\int_0^t X_s\,dW_s := J_t(X)$$
for every $t \in [0,T]$ and call it the Ito integral or stochastic integral of $X$ (w.r.t. $W$).
Proof. Let $X \in \mathcal{L}^2[0,T]$.
(a) Let $\{\kappa^{(n)}\}_n$ be an approximating sequence, i.e. $\|X - \kappa^{(n)}\|_T \to 0$ as $n \to \infty$. Since $\{\kappa^{(n)}\}_n$ is also a Cauchy sequence w.r.t. $\|\cdot\|_T$, we obtain for every $t \in [0,T]$ (as $n, m \to \infty$):
$$E\bigl[(I_t(\kappa^{(n)}) - I_t(\kappa^{(m)}))^2\bigr] \overset{I \text{ linear}}{=} E\bigl[I_t(\kappa^{(n)} - \kappa^{(m)})^2\bigr] \overset{\text{Ito isom.}}{=} E\Bigl[\int_0^t (\kappa^{(n)}_s - \kappa^{(m)}_s)^2\,ds\Bigr] \to 0.$$
Hence, $\{I_t(\kappa^{(n)})\}_n$ is a Cauchy sequence in the ordinary $L^2(\Omega, \mathcal{F}_t, P)$ for random variables ($t$ fixed!), which is a complete space, and thus
$$I_t(\kappa^{(n)}) \overset{L^2}{\longrightarrow} Z_t,$$
where $Z_t$ is $\mathcal{F}_t$-measurable and $E[Z_t^2] < \infty$. Since $I_t(\kappa^{(n)}) \overset{P}{\longrightarrow} Z_t$ as well, there exists a subsequence $\{n_k\}_k$ such that
$$I_t(\kappa^{(n_k)}) \overset{\text{a.s.}}{\longrightarrow} Z_t \tag{1}$$
as $k \to \infty$, i.e. $Z_t$ is only P-a.s. unique (may depend on the sequence!).
(b) $Z$ is a martingale: Note that $I_t(\kappa^{(n)})$ is a martingale for all $n \in \mathbb{N}$. By the definition of conditional expectation, we thus get
$$\int_A I_t(\kappa^{(n)})\,dP = \int_A I_s(\kappa^{(n)})\,dP$$
for $s < t$ and for all $A \in \mathcal{F}_s$ and $n \in \mathbb{N}$. The $L^2$-convergence (1) leads to (as $n \to \infty$)^{10}
$$\int_A Z_t\,dP \longleftarrow \int_A I_t(\kappa^{(n)})\,dP = \int_A I_s(\kappa^{(n)})\,dP \longrightarrow \int_A Z_s\,dP.$$
Since $Z$ is adapted, this proves that $Z$ is a martingale.
(c) By Proposition 3.26, $Z$ possesses a right-continuous modification $\{J_t(X), \mathcal{F}_t\}_{t\in[0,T]}$ which is still a square-integrable martingale. Applying Doob's inequality and the Borel–Cantelli lemma, one can prove that $J(X)$ is even continuous (exercise).
(d) $J_t(X)$ is independent of the approximating sequence: Let $\{\kappa^{(n)}\}_n$ and $\{\tilde\kappa^{(n)}\}_n$ be two approximating sequences for $X$ with $I_t(\kappa^{(n)}) \overset{L^2}{\longrightarrow} Z_t$ and $I_t(\tilde\kappa^{(n)}) \overset{L^2}{\longrightarrow} \tilde Z_t$. Then $\{\hat\kappa^{(n)}\}_n$ with $\hat\kappa^{(2n)} := \kappa^{(2n)}$ (even) and $\hat\kappa^{(2n+1)} := \tilde\kappa^{(2n+1)}$ (odd) is an approximating sequence for $X$ such that $\{I_t(\hat\kappa^{(n)})\}_n$ converges in $L^2$ to both $Z_t$ and $\tilde Z_t$. Thus both limits need to coincide with $J_t(X)$ P-a.s.^{11}
^{10} See Bauer (1992, MI, p. 92 and p. 99).
(e) Ito isometry:
$$E\bigl[J_t(X)^2\bigr] \overset{(*)}{=} \lim_{n\to\infty} E\bigl[I_t(\kappa^{(n)})^2\bigr] = \lim_{n\to\infty} E\Bigl[\int_0^t (\kappa^{(n)}_s)^2\,ds\Bigr] \overset{(**)}{=} E\Bigl[\int_0^t X_s^2\,ds\Bigr].$$
($*$) holds because $I_t(\kappa^{(n)}) \overset{L^2}{\longrightarrow} J_t(X)$ (as $n \to \infty$); this is the $L^2$-convergence in the ordinary $L^2(\Omega, \mathcal{F}_t, P)$.
($**$) holds because $\kappa^{(n)} \overset{L^2}{\longrightarrow} X$ (as $n \to \infty$); this is the $L^2$-convergence in $L^2\bigl([0,T] \times \Omega, \mathcal{B}[0,T] \otimes \mathcal{F}, \text{Lebesgue} \otimes P\bigr)$.^{12}
(f) Linearity of $J$ follows from the linearity of $I$ and from the fact that if $\{\kappa^{(n)}_X\}_n$ approximates $X$ and $\{\kappa^{(n)}_Y\}_n$ approximates $Y$, then $\{a\kappa^{(n)}_X + b\kappa^{(n)}_Y\}_n$ approximates $aX + bY$ ($a, b \in \mathbb{R}$). Therefore,
$$aJ_t(X) + bJ_t(Y) = a\lim_{n\to\infty} I_t(\kappa^{(n)}_X) + b\lim_{n\to\infty} I_t(\kappa^{(n)}_Y) = \lim_{n\to\infty} I_t\bigl(a\kappa^{(n)}_X + b\kappa^{(n)}_Y\bigr) = J_t(aX + bY).$$
(g) Uniqueness of $J$: Let $J'$ be a linear mapping with properties (i) and (ii). Then
$$J'_t(X) = J'_t\bigl(\lim_{n\to\infty} \kappa^{(n)}\bigr) \overset{(*)}{=} \lim_{n\to\infty} J'_t(\kappa^{(n)}) \overset{\text{(i)}}{=} \lim_{n\to\infty} I_t(\kappa^{(n)}) = J_t(X) \quad P\text{-a.s.}$$
for all $t \in [0,T]$. Since $J(X)$ and $J'(X)$ are continuous processes, by Proposition 3.2 (ii), they are indistinguishable, i.e. $P(J_t(X) = J'_t(X)$ for all $t \in [0,T]) = 1$. The equality ($*$) holds because $J'$, being norm-preserving by (ii), is a continuous linear mapping. $\Box$
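The Ito isometry of property (ii) can be checked by Monte Carlo simulation. In the following sketch (our own illustration, assuming numpy; seed and discretization are arbitrary choices) the integrand is $X_s = W_s$, so both $E[J_T(X)^2]$ and $E[\int_0^T X_s^2\,ds]$ should be close to $T^2/2 = 0.5$:

```python
import numpy as np

rng = np.random.default_rng(1)
T, steps, paths = 1.0, 500, 20_000
dt = T / steps

dW = rng.normal(0.0, np.sqrt(dt), (paths, steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((paths, 1)), W[:, :-1]])  # left endpoints W_{t_{k-1}}

# Left-point (Ito) sums approximate J_T(W) = int_0^T W_s dW_s path by path.
ito = np.sum(W_left * dW, axis=1)
lhs = float(np.mean(ito**2))                           # E[J_T(W)^2]
rhs = float(np.mean(np.sum(W_left**2, axis=1) * dt))   # E[int_0^T W_s^2 ds]
```

Note that the discrete analogue of the isometry is exact in expectation, because $W_{t_{k-1}}$ is independent of the increment $\Delta W_k$; the two sample averages differ only by Monte Carlo noise.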
5th Step. The class of admissible integrands can be extended. Define
$$\mathcal{H}^2[0,T] := \mathcal{H}^2\bigl([0,T],\Omega,\mathcal{F},P,\{\mathcal{F}_t\}_{t\in[0,T]}\bigr) := \Bigl\{\{X_t,\mathcal{F}_t\}_{t\in[0,T]} : \text{progressively measurable, real-valued with } P\Bigl(\int_0^T X_s^2\,ds < \infty\Bigr) = 1\Bigr\}.$$
In general, for $X \in \mathcal{H}^2[0,T]$ it can happen that
$$\|X\|_T^2 = E\Bigl[\int_0^T X_s^2\,ds\Bigr] = \infty.$$
Therefore, the Ito isometry need not hold and the approximation argument breaks down. Fortunately, one can use a localization argument:^{13}
$$\tau_n(\omega) := T \wedge \inf\Bigl\{t \in [0,T] : \int_0^t X_s^2(\omega)\,ds \ge n\Bigr\},$$
$$X^{(n)}_t(\omega) := X_t(\omega)\,1_{\{\tau_n(\omega) \ge t\}}.$$
^{11} See Bauer (1992, MI, p. 130).
^{12} Both ($*$) and ($**$) are a consequence of Bauer (1992, MI, p. 92).
^{13} Every $\tau_n$ is a stopping time because it is the first hitting time of $[n, \infty)$ for the continuous process $\{\int_0^t X_s^2\,ds\}_t$. See Proposition 3.11.
Hence, $X^{(n)} \in \mathcal{L}^2[0,T]$, and $J(X^{(n)})$ is defined and a martingale for fixed $n \in \mathbb{N}$. Now define
$$J_t(X) := J_t(X^{(n)}) \quad \text{for } X \in \mathcal{H}^2[0,T] \text{ and } 0 \le t \le \tau_n.$$
Note that
• $J_t(X^{(n)}) = J_t(X^{(m)})$ for all $0 \le t \le \tau_n \wedge \tau_m$, i.e. $J$ is well-defined,
• $\lim_{n\to\infty} \tau_n = T$ a.s. (since $P(\int_0^T X_s^2\,ds < \infty) = 1$, i.e. there exists an event $A$ with $P(A) = 1$ such that for all $\omega \in A$ there exists a $c(\omega) > 0$ with $\int_0^T X_s^2(\omega)\,ds \le c(\omega)$).
Important Results.
• By construction, for $X \in \mathcal{H}^2[0,T]$ the process $\{J_t(X)\}_{t\in[0,T]}$ is a continuous local martingale with localizing sequence $\{\tau_n\}_n$.
• $J$ is still linear, i.e. $J_t(aX + bY) = aJ_t(X) + bJ_t(Y)$ for $a, b \in \mathbb{R}$ and $X, Y \in \mathcal{H}^2[0,T]$. (Choose $\tau_n := \tau^X_n \wedge \tau^Y_n$ as localizing sequence of $aX + bY$, where $\{\tau^X_n\}_n$ is the localizing sequence of $X$.)
• $E[J_t(X)]$ need not exist.
Definition 3.21 (Multidimensional Ito Integral)
Let $\{W(t), \mathcal{F}_t\}$ be an $m$-dimensional Brownian motion and $\{X(t), \mathcal{F}_t\}$ an $\mathbb{R}^{n,m}$-valued progressively measurable process with all components $X_{ij} \in \mathcal{H}^2[0,T]$. Then we define the Ito integral of $X$ w.r.t. $W$ as
$$\int_0^t X(s)\,dW(s) := \begin{pmatrix} \sum_{j=1}^m \int_0^t X_{1j}(s)\,dW_j(s) \\ \vdots \\ \sum_{j=1}^m \int_0^t X_{nj}(s)\,dW_j(s) \end{pmatrix}, \quad t \in [0,T],$$
where $\int_0^t X_{ij}(s)\,dW_j(s)$ is a one-dimensional Ito integral.
3.7 Ito's Formula
Recall our
Standing assumption: $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\})$ is a filtered probability space satisfying the usual conditions.
Additional assumption: $\{W_t, \mathcal{F}_t\}$ is a (multidimensional) Brownian motion of appropriate dimension.
Definition 3.22 (Ito Process)
Let $W$ be an $m$-dimensional Brownian motion.
(i) $\{X(t), \mathcal{F}_t\}$ is said to be a real-valued Ito process if for all $t \ge 0$ it possesses the representation
$$X(t) = X(0) + \int_0^t \alpha(s)\,ds + \int_0^t \beta(s)'\,dW(s) = X(0) + \int_0^t \alpha(s)\,ds + \sum_{j=1}^m \int_0^t \beta_j(s)\,dW_j(s)$$
for an $\mathcal{F}_0$-measurable random variable $X(0)$ and progressively measurable processes $\alpha$, $\beta$ with
$$\int_0^t |\alpha(s)|\,ds < \infty, \quad \sum_{j=1}^m \int_0^t \beta_j^2(s)\,ds < \infty, \quad P\text{-a.s. for all } t \ge 0. \tag{2}$$
(ii) An $n$-dimensional Ito process $X = (X^{(1)}, \ldots, X^{(n)})$ is a vector of one-dimensional Ito processes $X^{(i)}$.
Remarks.
• To shorten notation, one sometimes uses the symbolic differential notation
$$dX(t) = \alpha(t)\,dt + \beta(t)'\,dW(t).$$
• It can be shown that the processes $\alpha$ and $\beta$ are unique up to indistinguishability.
• By (2), we have $\beta_j \in \mathcal{H}^2[0,T]$.
• For a real-valued Ito process $X$ and a real-valued progressively measurable process $Y$, we define
$$\int_0^t Y_s\,dX_s := \int_0^t Y_s\alpha_s\,ds + \int_0^t Y_s\beta_s'\,dW_s.$$
Definition 3.23 (Quadratic Variation and Covariation)
Let $X^{(1)}$ and $X^{(2)}$ be two real-valued Ito processes with
$$X^{(i)}(t) = X^{(i)}(0) + \int_0^t \alpha^{(i)}(s)\,ds + \int_0^t \beta^{(i)}(s)'\,dW(s),$$
where $i \in \{1, 2\}$. Then
$$\langle X^{(1)}, X^{(2)}\rangle_t := \sum_{j=1}^m \int_0^t \beta^{(1)}_j(s)\,\beta^{(2)}_j(s)\,ds$$
is said to be the quadratic covariation of $X^{(1)}$ and $X^{(2)}$. In particular, $\langle X\rangle_t := \langle X, X\rangle_t$ is said to be the quadratic variation of an Ito process $X$.
Remarks. The definition may appear a bit unfounded. However, one can show the following (see Karatzas/Shreve (1991, pp. 32ff)): For fixed $t \ge 0$, let $\Pi = \{t_0, \ldots, t_n\}$ with $0 = t_0 < t_1 < \ldots < t_n = t$ be a partition of $[0, t]$. Then the $p$-th variation of $X$ over the partition $\Pi$ is defined by ($p > 0$)
$$V^{(p)}_t(\Pi) = \sum_{k=1}^n |X_{t_k} - X_{t_{k-1}}|^p.$$
The mesh of $\Pi$ is defined by $\|\Pi\| = \max_{1\le k\le n} |t_k - t_{k-1}|$. For $X \in \mathcal{M}^2_c$, it holds that
$$\lim_{\|\Pi\|\to 0} V^{(2)}_t(\Pi) = \langle X\rangle_t \quad \text{in probability}.$$
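For Brownian motion, $\langle W\rangle_t = t$, so $V^{(2)}_t(\Pi)$ over a fine partition should be close to $t$. A minimal numerical sketch of this convergence (our own illustration, assuming numpy; seed and partition sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
t = 1.0

def squared_variation(n):
    """V_t^(2)(Pi) of one Brownian path over a uniform partition of [0, t]
    with n subintervals: the sum of squared increments."""
    increments = rng.normal(0.0, np.sqrt(t / n), n)  # W-increments over Pi
    return float(np.sum(increments**2))

coarse = squared_variation(50)        # mesh 1/50: still visibly random
fine = squared_variation(200_000)     # much finer mesh: concentrates near t
```

Since the summands are independent $\chi^2$-distributed terms, the variance of $V^{(2)}_t(\Pi)$ is $2t^2/n$, which explains the concentration around $t$ as the mesh shrinks.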
The main tool for working with Ito processes is Ito’s formula:
Theorem 3.33 (Ito's Formula)
Let $X$ be an $n$-dimensional Ito process with
$$X_i(t) = X_i(0) + \int_0^t \alpha_i(s)\,ds + \sum_{j=1}^m \int_0^t \beta_{ij}(s)\,dW_j(s).$$
Let $f : [0, \infty) \times \mathbb{R}^n \to \mathbb{R}$ be a $C^{1,2}$-function. Then
$$f(t, X_1(t), \ldots, X_n(t)) = f(0, X_1(0), \ldots, X_n(0)) + \int_0^t f_t(s, X_1(s), \ldots, X_n(s))\,ds$$
$$+ \sum_{i=1}^n \int_0^t f_{x_i}(s, X_1(s), \ldots, X_n(s))\,dX_i(s) + 0.5\sum_{i,j=1}^n \int_0^t f_{x_i x_j}(s, X_1(s), \ldots, X_n(s))\,d\langle X_i, X_j\rangle_s,$$
where all integrals are defined.
Proof. For simplicity, assume $X(0) = \text{const.}$, $n = m = 1$, and $f : \mathbb{R} \to \mathbb{R}$.
1st step. By applying a localization argument, we can assume that
$$M_t := \int_0^t \beta_s\,dW_s, \quad B_t := \int_0^t \alpha_s\,ds, \quad \hat B_t := \int_0^t |\alpha_s|\,ds, \quad \int_0^t \beta_s^2\,ds,$$
and thus $X$ are uniformly bounded by a constant $C$. Hence, the relevant support of $f$, $f'$, and $f''$ is compact, implying that these functions are bounded as well. Besides, $M$ is a martingale (since $\beta \in \mathcal{L}^2[0,T]$).
2nd step. Let $\Pi = \{t_0, \ldots, t_p\}$ be a partition of $[0, t]$. A Taylor expansion leads to (note: $f(X_{t_k}) = f(X_{t_{k-1}}) + f'(X_{t_{k-1}})(X_{t_k} - X_{t_{k-1}}) + \ldots$)
$$f(X_t) - f(X_0) = \sum_{k=1}^p \bigl(f(X_{t_k}) - f(X_{t_{k-1}})\bigr) = \underbrace{\sum_{k=1}^p f'(X_{t_{k-1}})(X_{t_k} - X_{t_{k-1}})}_{(*)} + 0.5\,\underbrace{\sum_{k=1}^p f''(\eta_k)(X_{t_k} - X_{t_{k-1}})^2}_{(**)}$$
with $\eta_k = X_{t_{k-1}} + \theta_k(X_{t_k} - X_{t_{k-1}})$, $\theta_k \in (0, 1)$.
3rd step. Convergence of ($*$). We get
$$(*) = \underbrace{\sum_{k=1}^p f'(X_{t_{k-1}})(B_{t_k} - B_{t_{k-1}})}_{A_1(\Pi)} + \underbrace{\sum_{k=1}^p f'(X_{t_{k-1}})(M_{t_k} - M_{t_{k-1}})}_{A_2(\Pi)}.$$
(i) For $\|\Pi\| \to 0$, we obtain (Lebesgue–Stieltjes theory)
$$A_1(\Pi) \overset{\text{a.s.},\,L^1}{\longrightarrow} \int_0^t f'(X_s)\,dB_s = \int_0^t f'(X_s)\alpha_s\,ds.$$
(ii) Approximate $f'(X_s)$ by a simple process
$$Y^\Pi_s(\omega) := f'(X_0)\,1_{\{0\}}(s) + \sum_{k=1}^p f'(X_{t_{k-1}}(\omega))\,1_{(t_{k-1},\,t_k]}(s).$$
Note that $A_2(\Pi) = \int_0^t Y^\Pi_s\,dM_s$.^{14}
By the Ito isometry,
$$E\Bigl[\Bigl(\int_0^t f'(X_s)\,dM_s - A_2(\Pi)\Bigr)^2\Bigr] = E\Bigl[\Bigl(\int_0^t (f'(X_s) - Y^\Pi_s)\beta_s\,dW_s\Bigr)^2\Bigr] = E\Bigl[\int_0^t (f'(X_s) - Y^\Pi_s)^2\beta_s^2\,ds\Bigr] \longrightarrow 0,$$
where the limit follows from the dominated convergence theorem (as $\|\Pi\| \to 0$). Therefore,^{15} (as $\|\Pi\| \to 0$)
$$(*) = A_1(\Pi) + A_2(\Pi) \overset{L^1}{\longrightarrow} \int_0^t f'(X_s)\alpha_s\,ds + \int_0^t f'(X_s)\,dM_s.$$
4th step. Convergence of ($**$). It can be shown (see Korn/Korn (1999, pp. 53ff)) that
$$(**) \overset{L^1}{\longrightarrow} \int_0^t f''(X_s)\beta_s^2\,ds.$$
5th step. By steps 2–4, there exists a subsequence $\{\Pi_k\}_k$ such that the integrals converge a.s. towards the corresponding limits. Therefore, for fixed $t$ both sides of Ito's formula are equal a.s. Since both sides are continuous processes in $t$, by Proposition 3.2 (ii), they are indistinguishable. $\Box$
Remark. Often a shorthand symbolic differential notation is used ($f : [0, \infty) \times \mathbb{R} \to \mathbb{R}$):
$$df(t, X_t) = f_t(t, X_t)\,dt + f_x(t, X_t)\,dX_t + 0.5 f_{xx}(t, X_t)\,d\langle X\rangle_t.$$
Examples.
• $X_t = t$ and $f \in C^2$ leads to the fundamental theorem of calculus.
• $X_t = h(t)$ and $f \in C^2$ leads to the chain rule of ordinary calculus.
• $X_t = W_t$ and $f(x) = x^2$:
$$W_t^2 = 2\int_0^t W_s\,dW_s + t.$$
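The identity $W_t^2 = 2\int_0^t W_s\,dW_s + t$ can be verified path by path, since the telescoping sum of $W_{t_k}^2 - W_{t_{k-1}}^2$ reproduces it up to the error $\sum (\Delta W_k)^2 - t$. A minimal sketch (our own illustration, assuming numpy; seed and grid are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
T, steps = 1.0, 200_000
dt = T / steps
dW = rng.normal(0.0, np.sqrt(dt), steps)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Left-point (Ito) sum for int_0^T W_s dW_s on this path
ito_sum = float(np.sum(W[:-1] * dW))
lhs = float(W[-1]**2)          # W_T^2
rhs = 2.0 * ito_sum + T        # 2 int_0^T W dW + T
```

A Stratonovich-style midpoint sum would instead reproduce the ordinary chain rule $W_t^2 = 2\int W\,dW$ without the extra $t$; the left-point evaluation is what makes the correction term appear.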
• $X_t = x_0 + \alpha t + \sigma W_t$, $x_0, \alpha, \sigma \in \mathbb{R}$, and $f(x) = e^x$:
$$df(X_t) = f'(X_t)\,dX_t + 0.5 f''(X_t)\,d\langle X\rangle_t = f(X_t)\bigl[\alpha\,dt + \sigma\,dW_t + 0.5\sigma^2\,dt\bigr] = f(X_t)\bigl[(\alpha + 0.5\sigma^2)\,dt + \sigma\,dW_t\bigr].$$
Therefore, the solution of the stock price dynamics in the Black–Scholes model, $dS_t = S_t[\mu\,dt + \sigma\,dW_t]$, reads
$$S_t = s_0\,e^{(\mu - 0.5\sigma^2)t + \sigma W_t}. \tag{3}$$
^{14} This is actually a consequence of the more general definition of Ito integrals in Karatzas/Shreve (1991, pp. 129ff).
^{15} $L^2$-convergence implies $L^1$-convergence.
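The closed-form solution (3) can be cross-checked against a direct Euler–Maruyama discretization of $dS_t = S_t[\mu\,dt + \sigma\,dW_t]$ on the same Brownian path. A minimal sketch (our own illustration, assuming numpy; the parameter values and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
s0, mu, sigma = 100.0, 0.08, 0.2
T, steps = 1.0, 100_000
dt = T / steps
dW = rng.normal(0.0, np.sqrt(dt), steps)

# Closed-form solution (3) at t = T, using W_T = sum of increments
s_exact = s0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())

# Euler-Maruyama on the same path: S_{k+1} = S_k (1 + mu dt + sigma dW_k)
s_euler = float(s0 * np.prod(1.0 + mu * dt + sigma * dW))
```

The $-0.5\sigma^2$ correction in the exponent of (3) is exactly what makes the two agree: without it, the exponential of the naive drift would overstate the terminal price.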
• Ito's product rule: Let $X$ and $Y$ be Ito processes and $f(x, y) = x\,y$. Then (omitting dependencies)
$$d(XY) = f_x\,dX + f_y\,dY + 0.5\,\underbrace{f_{xx}}_{=0}\,d\langle X\rangle + 0.5\,\underbrace{f_{yy}}_{=0}\,d\langle Y\rangle + \underbrace{f_{xy}}_{=1}\,d\langle X, Y\rangle = X\,dY + Y\,dX + d\langle X, Y\rangle.$$
Definition 3.24 (Stochastic Differential Equation)
If for a given filtered probability space $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\})$ there exists an adapted process $X$ with $X(0) = x \in \mathbb{R}^n$ fixed and
$$X_i(t) = x_i + \int_0^t \alpha_i(s, X(s))\,ds + \sum_{j=1}^m \int_0^t \beta_{ij}(s, X(s))\,dW_j(s)$$
P-a.s. for every $t \ge 0$ and $i \in \{1, \ldots, n\}$, such that
$$\int_0^t \Bigl[|\alpha_i(s, X(s))| + \sum_{j=1}^m \beta^2_{ij}(s, X(s))\Bigr]\,ds < \infty$$
P-a.s. for every $t \ge 0$ and $i \in \{1, \ldots, n\}$, then $X$ is a strong solution of the stochastic differential equation (SDE)
$$dX(t) = \alpha(t, X(t))\,dt + \beta(t, X(t))\,dW(t), \quad X(0) = x, \tag{4}$$
where $\alpha : [0, \infty) \times \mathbb{R}^n \to \mathbb{R}^n$ and $\beta : [0, \infty) \times \mathbb{R}^n \to \mathbb{R}^{n,m}$ are given functions.
Theorem 3.34 (Existence and Uniqueness)
If the coefficients $\alpha$ and $\beta$ are continuous functions satisfying the following global Lipschitz and linear growth conditions for a fixed $K > 0$,
$$\|\alpha(t, z) - \alpha(t, y)\| + \|\beta(t, z) - \beta(t, y)\| \le K\|z - y\|,$$
$$\|\alpha(t, y)\|^2 + \|\beta(t, y)\|^2 \le K^2(1 + \|y\|^2),$$
$z, y \in \mathbb{R}^n$, then there exists an adapted continuous process $X$ which is a strong solution of (4) and satisfies
$$E\bigl[\|X_t\|^2\bigr] \le C(1 + \|x\|^2)e^{CT} \quad \text{for all } t \in [0,T],$$
where $C = C(K, T) > 0$. $X$ is unique up to indistinguishability.
Proof. See Korn/Korn (1999, pp. 128ff).
In general, there do not exist explicit solutions to SDEs. However, the important
family of linear SDEs admits explicit solutions:
Proposition 3.35 (Variation of Constants)
Fix $x \in \mathbb{R}$ and let $A$, $a$, $B_j$, $b_j$ be progressively measurable real-valued processes with (P-a.s. for every $t \ge 0$)
$$\int_0^t \bigl(|A(s)| + |a(s)|\bigr)\,ds < \infty, \quad \sum_{j=1}^m \int_0^t \bigl(B_j(s)^2 + b_j^2(s)\bigr)\,ds < \infty.$$
Then the linear SDE
$$dX(t) = (A(t)X(t) + a(t))\,dt + \sum_{j=1}^m (B_j(t)X(t) + b_j(t))\,dW_j(t), \quad X(0) = x,$$
possesses an adapted Lebesgue $\otimes$ P-unique solution
$$X(t) = Z(t)\Bigl[x + \int_0^t \frac{1}{Z(u)}\Bigl(a(u) - \sum_{j=1}^m B_j(u)b_j(u)\Bigr)\,du + \sum_{j=1}^m \int_0^t \frac{b_j(u)}{Z(u)}\,dW_j(u)\Bigr],$$
where
$$Z(t) = \exp\Bigl(\int_0^t \bigl(A(u) - 0.5\|B(u)\|^2\bigr)\,du + \int_0^t B(u)'\,dW(u)\Bigr) \tag{5}$$
is the solution of the homogeneous SDE
$$dZ(t) = Z(t)\bigl[A(t)\,dt + B(t)'\,dW(t)\bigr], \quad Z(0) = 1.$$
Proof. For simplicity, $n = m = 1$. As in (3), one can show that $Z$ solves the homogeneous SDE. To prove uniqueness, note that Ito's formula leads to
$$d\Bigl(\frac{1}{Z}\Bigr) = \frac{1}{Z}\bigl[(-A + B^2)\,dt - B\,dW\bigr].$$
Let $\tilde Z$ be another solution. Then, by Ito's product rule,
$$d\Bigl(\tilde Z\,\frac{1}{Z}\Bigr) = \tilde Z\,d\Bigl(\frac{1}{Z}\Bigr) + \frac{1}{Z}\,d\tilde Z + d\Bigl\langle \frac{1}{Z}, \tilde Z\Bigr\rangle = \frac{\tilde Z}{Z}\bigl[(-A + B^2)\,dt - B\,dW\bigr] + \frac{\tilde Z}{Z}\bigl[A\,dt + B\,dW\bigr] - \frac{\tilde Z}{Z}B^2\,dt = 0,$$
implying $\tilde Z\,\frac{1}{Z} = \text{const}$. Due to the initial condition, we obtain $\tilde Z = Z$.
To show that $X$ satisfies the linear SDE, define $Y := X/Z$. Ito's product rule applied to $YZ$ gives the desired result. To prove uniqueness, let $X$ and $\tilde X$ be two solutions of the linear SDE. Then $\hat X := \tilde X - X$ solves the homogeneous SDE
$$d\hat X = \hat X\bigl[A\,dt + B\,dW\bigr]$$
with $\hat X_0 = 0$. Applying Ito's product rule to $\hat X \cdot \frac{1}{Z}$ shows $\hat X_t = 0$ a.s. for all $t \ge 0$. $\Box$
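The variation-of-constants solution can be checked against an Euler–Maruyama discretization of the linear SDE on the same Brownian path. In the following sketch (our own illustration, assuming numpy; constant coefficients, parameter values, and seed are arbitrary choices) the homogeneous solution $Z$ is taken from the closed form (5) and the integrals are approximated by left-point sums:

```python
import numpy as np

rng = np.random.default_rng(5)
A, a, B, b, x0 = -0.5, 0.1, 0.3, 0.2, 1.0
T, steps = 1.0, 100_000
dt = T / steps
dW = rng.normal(0.0, np.sqrt(dt), steps)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, steps + 1)

# Homogeneous solution (5) for constant coefficients:
# Z(t) = exp((A - 0.5 B^2) t + B W(t))
Z = np.exp((A - 0.5 * B**2) * t + B * W)

# Variation-of-constants formula with left-point Riemann/Ito sums
du_term = np.sum((a - B * b) / Z[:-1] * dt)
dw_term = np.sum(b / Z[:-1] * dW)
x_voc = float(Z[-1] * (x0 + du_term + dw_term))

# Euler-Maruyama on the same path for dX = (A X + a) dt + (B X + b) dW
x = x0
for k in range(steps):
    x += (A * x + a) * dt + (B * x + b) * dW[k]
```

Both approximations converge to the same solution as the step size shrinks, so they should agree closely on a fine grid.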
4 CONTINUOUSTIME MARKET MODEL AND OPTION PRICING 47
Example. In Brownian models the money market account and the stocks are modeled via ($i = 1, \ldots, I$)
$$dS_0(t) = S_0(t)r(t)\,dt, \quad S_0(0) = 1, \tag{6}$$
$$dS_i(t) = S_i(t)\Bigl[\mu_i(t)\,dt + \sum_{j=1}^m \sigma_{ij}(t)\,dW_j(t)\Bigr], \quad S_i(0) = s_{i0} > 0.$$
Applying "variation of constants" leads to
$$S_0(t) = e^{\int_0^t r(s)\,ds},$$
$$S_i(t) = s_{i0}\,\exp\Bigl(\int_0^t \Bigl(\mu_i(s) - 0.5\sum_{j=1}^m \sigma^2_{ij}(s)\Bigr)\,ds + \sum_{j=1}^m \int_0^t \sigma_{ij}(s)\,dW_j(s)\Bigr).$$
4 Continuous-time Market Model and Option Pricing
4.1 Description of the Model
We consider a continuous-time security market as given by (6).
Mathematical Assumptions.
• Recall the standing assumption: $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\})$ is a filtered probability space satisfying the usual conditions.
• $\{W_t, \mathcal{F}_t\}$ is a (multidimensional) Brownian motion of appropriate dimension.
• $\{\mathcal{F}_t\}$ is the Brownian filtration.
• The coefficients $r$, $\mu$, and $\sigma$ are progressively measurable processes which are uniformly bounded.
Notation.
• $\varphi_i(t)$: number of shares of stock $i$ held at time $t$
• $\varphi_0(t)S_0(t)$: money invested in the money market account at time $t$
• $X^\varphi(t) := \sum_{i=0}^I \varphi_i(t)S_i(t)$: wealth of the investor
• Assumption: $x_0 := X(0) = \sum_{i=0}^I \varphi_i(0)S_i(0) \ge 0$ (non-negative initial wealth)
• $\pi_i(t) := \varphi_i(t)S_i(t)/X(t)$: proportion of wealth invested in stock $i$
• $1 - \sum_i \pi_i$: proportion invested in the money market account
Definition 4.1 (Trading Strategy, Self-financing, Admissible)
(i) Let $\varphi = (\varphi_0, \ldots, \varphi_I)'$ be an $\mathbb{R}^{I+1}$-valued process which is progressively measurable w.r.t. $\{\mathcal{F}_t\}$ and satisfies (P-a.s.)
$$\int_0^T |\varphi_0(t)|\,dt < \infty, \quad \sum_{i=1}^I \int_0^T \bigl(\varphi_i(t)S_i(t)\bigr)^2\,dt < \infty. \tag{7}$$
Then $\varphi$ is said to be a trading strategy.
(ii) If
$$dX(t) = \varphi(t)'\,dS(t)$$
holds, then $\varphi$ is said to be self-financing.
(iii) A self-financing $\varphi$ is said to be admissible if $X(t) \ge 0$ for all $t \ge 0$.
Remarks.
• The corresponding portfolio process $\pi$ is said to be self-financing or admissible if $\varphi$ is self-financing or admissible (recall $\pi_i = \varphi_i S_i/X$). Note that if we work with $\pi$, then we need to require that $X(t) > 0$ for all $t \ge 0$. Otherwise, $\pi$ is not well-defined.
• (7) ensures that all integrals are defined.
• Compare the definition in (ii) with the result of Proposition 2.11.
• Every trading strategy satisfies
$$dX(t) = d\bigl(\varphi(t)'S(t)\bigr).$$
Hence, in abstract mathematical terms, self-financing means that $\varphi$ can be put before the d operator.
• In the following, we always consider admissible trading strategies.
• Since admissible strategies are progressively measurable, investors cannot see into the future (no clairvoyants).
Definition 4.2 (Arbitrage, Option, Replication, Fair Value)
(i) An admissible trading strategy $\varphi$ is an arbitrage opportunity if (P-a.s.)
$$X^\varphi(0) = 0, \quad X^\varphi(T) \ge 0, \quad P(X^\varphi(T) > 0) > 0.$$
(ii) An $\mathcal{F}_T$-measurable random variable $C(T) \ge 0$ is said to be a contingent claim, $T \ge 0$.
(iii) An admissible strategy $\varphi$ is a replication strategy for the claim $C(T)$ if
$$X^\varphi(T) = C(T) \quad P\text{-a.s.}$$
(iv) The fair time-0 value of a claim is defined as
$$C(0) = \inf\{p : D(p) \ne \emptyset\},$$
where
$$D(x) := \{\varphi : \varphi \text{ admissible, replicates } C(T), \text{ and } X^\varphi(0) = x\}.$$
Remark. The strategy in (i) is also known as a "free lottery".
Economic Assumptions.
• All investors model the dynamics of the securities as above.
• All investors agree upon $\sigma$, but may disagree upon $\mu$.
• Investors are price takers.
• Investors use admissible strategies.
• Frictionless markets.
• No arbitrage (see Definition 4.2 (i)).
Proposition 4.1 (Wealth Equation)
The wealth dynamics of a self-financing strategy $\varphi$ are described by the SDE
$$dX(t) = X(t)\bigl[\bigl(r(t) + \pi(t)'(\mu(t) - r(t)\mathbf{1})\bigr)\,dt + \pi(t)'\sigma(t)\,dW(t)\bigr], \quad X(0) = x_0,$$
the so-called wealth equation.
Proof. By definition,
$$dX = \varphi'\,dS = \varphi_0 S_0 r\,dt + \sum_{i=1}^I \varphi_i S_i\Bigl(\mu_i\,dt + \sum_{j=1}^m \sigma_{ij}\,dW_j\Bigr)$$
$$= \Bigl(1 - \sum_{i=1}^I \pi_i\Bigr)Xr\,dt + \sum_{i=1}^I \pi_i X\Bigl(\mu_i\,dt + \sum_{j=1}^m \sigma_{ij}\,dW_j\Bigr)$$
$$= X\Bigl[\Bigl(r + \sum_{i=1}^I \pi_i(\mu_i - r)\Bigr)dt + \sum_{j=1}^m \sum_{i=1}^I \pi_i\sigma_{ij}\,dW_j\Bigr]. \qquad\Box$$
Remarks.
• Due to the boundedness of the coefficients, by "variation of constants", the condition (P-a.s.)
$$\int_0^t \pi_i(s)^2\,ds < \infty$$
is sufficient for the wealth equation to possess a unique solution.
• One can directly start with $\pi$ (instead of $\varphi$) and require for the wealth equation that
$$\int_0^t X(s)^2\,\pi_i(s)^2\,ds < \infty.$$
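For constant coefficients and a constant proportion $\pi$, the wealth equation is itself a geometric Brownian motion with drift $r + \pi(\mu - r)$ and volatility $\pi\sigma$, so $E[X(T)] = x_0\,e^{(r + \pi(\mu - r))T}$. A minimal Monte Carlo sketch of this special case (our own illustration, assuming numpy; parameter values and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
r, mu, sigma, pi = 0.03, 0.08, 0.2, 0.5   # constant coefficients, constant proportion
x0, T, paths = 1.0, 1.0, 200_000

# With constant coefficients the wealth equation reduces to a GBM with
# drift r + pi (mu - r) and volatility pi sigma.
drift = r + pi * (mu - r)
vol = pi * sigma
W_T = rng.normal(0.0, np.sqrt(T), paths)
X_T = x0 * np.exp((drift - 0.5 * vol**2) * T + vol * W_T)

mean_wealth = float(X_T.mean())
expected = x0 * np.exp(drift * T)
```

The sample mean of the terminal wealth should match the analytic expectation up to Monte Carlo noise.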
4.2 Black–Scholes Approach: Pricing by Replication
We wish to price a European call written on a stock $S$ with strike $K$, maturity $T$, and payoff
$$C(T) = \max\{S(T) - K; 0\},$$
where
$$dS(t) = S(t)\bigl[\mu\,dt + \sigma\,dW(t)\bigr]$$
with constants $\mu$ and $\sigma$. The interest rate $r$ is constant as well.
Assumptions.
• No payouts are made to the stock during the lifetime of the call.
• There exists a $C^{1,2}$-function $f = f(t, s)$ such that the time-$t$ price of the call is given by $C(t) = f(t, S(t))$.
Then
$$dC(t) = f_t(t, S(t))\,dt + f_s(t, S(t))\,dS(t) + 0.5 f_{ss}(t, S(t))\,d\langle S\rangle_t = f_t\,dt + f_s S\bigl[\mu\,dt + \sigma\,dW\bigr] + 0.5 f_{ss}S^2\sigma^2\,dt$$
$$= \bigl[f_t + f_s S\mu + 0.5 f_{ss}S^2\sigma^2\bigr]\,dt + f_s S\sigma\,dW.$$
Consider a self-financing trading strategy $\varphi = (\varphi_M, \varphi_S, \varphi_C)$ and choose $\varphi_C(t) \equiv -1$. Then
$$dX^\varphi(t) = \varphi_M(t)\,dM(t) + \varphi_S(t)\,dS(t) - dC(t)$$
$$= \varphi_M rM\,dt + \varphi_S S\bigl[\mu\,dt + \sigma\,dW\bigr] - \bigl[f_t + f_s S\mu + 0.5 f_{ss}S^2\sigma^2\bigr]\,dt - f_s S\sigma\,dW$$
$$= \underbrace{\bigl[\varphi_M rM + \varphi_S S\mu - f_t - f_s S\mu - 0.5 f_{ss}S^2\sigma^2\bigr]}_{(*)}\,dt + \underbrace{\bigl[\varphi_S S\sigma - f_s S\sigma\bigr]}_{(**)}\,dW.$$
Choosing $\varphi_S(t) = f_s(t, S(t))$ leads to $(**) = 0$. Hence, the strategy is locally risk-free, implying $(*) = X^\varphi(t)r$ (otherwise there would be an arbitrage opportunity with respect to the money market account). Rewriting ($*$):
$$(*) = \varphi_M rM - f_t - 0.5 f_{ss}S^2\sigma^2 = r\,\underbrace{\bigl[\varphi_M M + \varphi_S S - C\bigr]}_{=X^\varphi} + \underbrace{\bigl[-r f_s S + rC - f_t - 0.5 f_{ss}S^2\sigma^2\bigr]}_{(***)}.$$
Therefore, $(***) = 0$, i.e.
$$r f_s(t, S(t))S(t) - r f(t, S(t)) + f_t(t, S(t)) + 0.5 f_{ss}(t, S(t))S^2(t)\sigma^2 = 0.$$
It follows:
Theorem 4.2 (PDE of Black–Scholes (1973))
In an arbitrage-free market the price function $f(t, s)$ of a European call satisfies the Black–Scholes PDE
$$f_t(t, s) + rsf_s(t, s) + 0.5 s^2\sigma^2 f_{ss}(t, s) - rf(t, s) = 0,$$
$(t, s) \in [0, T] \times \mathbb{R}_+$, with terminal condition $f(T, s) = \max\{s - K, 0\}$.
Remark. We have not used the terminal condition to derive the PDE. Therefore, every European (path-independent) option satisfies this PDE. Of course, the terminal condition will be different.
Question: How can we solve this PDE?
1st approach: Merton (1973). Solve the PDE by transforming it into the heat equation $u_t = u_{xx}$, which has a well-known solution.^{16}
2nd approach. Apply the Feynman–Kac theorem (see below), which states that the time-0 solution of the Black–Scholes PDE is given by
$$C(0) = E\bigl[\max\{Y(T) - K, 0\}\,\big|\,Y(0) = S(0)\bigr]\,e^{-rT},$$
where $Y$ satisfies the SDE
$$dY(t) = Y(t)\bigl[r\,dt + \sigma\,dW(t)\bigr], \quad Y(0) = S(0).$$
Theorem 4.3 (Black–Scholes Formula)
(i) The price of the European call is given by
$$C(t) = S(t)\,N(d_1(t)) - K\,N(d_2(t))\,e^{-r(T-t)}$$
with
$$d_1(t) = \frac{\ln(S(t)/K) + (r + 0.5\sigma^2)(T - t)}{\sigma\sqrt{T - t}}, \quad d_2(t) = d_1(t) - \sigma\sqrt{T - t}.$$
(ii) The trading strategy $\varphi := (\varphi_M, \varphi_S)$ with
$$\varphi_M(t) = \frac{C(t) - f_s(t, S(t))\,S(t)}{M(t)}, \quad \varphi_S(t) = f_s(t, S(t)),$$
is self-financing and a replication strategy for the call.
Proof. (i) Compute (exercise)
$$C(t) = E\bigl[\max\{Y(T) - K, 0\}\,\big|\,Y(t) = S(t)\bigr]\,e^{-r(T-t)}.$$
(ii) We have
$$X^\varphi(t) = \varphi_M(t)\,M(t) + \varphi_S(t)\,S(t) = C(t). \tag{8}$$
Hence, $(\varphi_M, \varphi_S)$ replicates the call, and it is self-financing because
$$dX^\varphi \overset{(8)}{=} dC \overset{\text{Ito}}{=} \bigl[f_t + f_s S\mu + 0.5 f_{ss}S^2\sigma^2\bigr]\,dt + f_s S\sigma\,dW \overset{(+)}{=} \bigl[r(C - f_s S) + f_s S\mu\bigr]\,dt + f_s S\sigma\,dW$$
$$\overset{\text{Def. } (\varphi_M, \varphi_S)}{=} \bigl[rM\varphi_M + \varphi_S S\mu\bigr]\,dt + \varphi_S S\sigma\,dW = \varphi_M rM\,dt + \varphi_S S\bigl[\mu\,dt + \sigma\,dW\bigr] = \varphi_M\,dM + \varphi_S\,dS.$$
^{16} See, e.g., Korn/Korn (1999, p. 124).
Note: $(+)$ holds due to the Black–Scholes PDE
$$f_t + 0.5 f_{ss}S^2\sigma^2 = r[C - f_s S]. \qquad\Box$$
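The formula of Theorem 4.3 is straightforward to implement. A minimal sketch using only the standard library (the function names `black_scholes_call` and `call_delta` and the sample parameters are our choices):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal distribution function N(x) via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s, k, r, sigma, tau):
    """Call price of Theorem 4.3 (i); tau = T - t is the remaining time."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return s * norm_cdf(d1) - k * exp(-r * tau) * norm_cdf(d2)

def call_delta(s, k, r, sigma, tau):
    """Hedge ratio phi_S = f_s(t, S(t)) = N(d1) of Theorem 4.3 (ii)."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return norm_cdf(d1)

price = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)
hedge = call_delta(100.0, 100.0, 0.05, 0.2, 1.0)
```

For an at-the-money call with $r = 0.05$, $\sigma = 0.2$, and one year to maturity, the price comes out close to 10.45 and the hedge ratio lies between 0 and 1, as it must for a call.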
Remark. Since $(\varphi_M, \varphi_S)$ is self-financing, the strategy $(\varphi_M, \varphi_S, \varphi_C)$ with $\varphi_C = -1$ is self-financing as well because
$$d(\varphi_C(t)C(t)) = d(-C(t)) = -dC(t) = \varphi_C(t)\,dC(t).$$
Theorem 4.4 (Feynman–Kac Representation)
Let $\beta$ and $\gamma$ be functions of $(t, x) \in [0, \infty) \times \mathbb{R}$. Under technical conditions (see Korn/Korn (1999, p. 136)), there exists a unique $C^{1,2}$-solution $F = F(t, x)$ of the PDE
$$F_t(t, x) + \beta(t, x)F_x(t, x) + 0.5\gamma^2(t, x)F_{xx}(t, x) - rF(t, x) = 0 \tag{9}$$
with terminal condition $F(T, x) = h(x)$, possessing the representation
$$F(t_0, x) = e^{-r(T - t_0)}\,E[h(X(T)) \mid X(t_0) = x],$$
where $X$ satisfies the SDE
$$dX(s) = \beta(s, X(s))\,ds + \gamma(s, X(s))\,dW(s)$$
with initial condition $X(t_0) = x$.
Sketch of Proof. (i) First assume $r = 0$. If $F$ is a $C^{1,2}$-function, we can apply Ito's formula:
$$dF = F_t\,dt + F_x\,dX + 0.5 F_{xx}\,d\langle X\rangle.$$
Since $d\langle X\rangle = \gamma^2\,dt$, we get
$$dF = F_t\,dt + F_x\bigl[\beta\,dt + \gamma\,dW\bigr] + 0.5\gamma^2 F_{xx}\,dt = \bigl[F_t + \beta F_x + 0.5\gamma^2 F_{xx}\bigr]\,dt + \gamma F_x\,dW.$$
Suppose that there exists a function $F$ satisfying the PDE, i.e.
$$F_t + \beta F_x + 0.5\gamma^2 F_{xx} = 0$$
with $F(T, x) = h(x)$. Then
$$dF = \gamma F_x\,dW,$$
or in integral notation,
$$F(s, X(s)) = F(t_0, X(t_0)) + \int_{t_0}^s \gamma(u, X(u))F_x(u, X(u))\,dW(u).$$
If $E\bigl[\int_{t_0}^s \gamma(u, X_u)^2 F_x(u, X_u)^2\,du\bigr] < \infty$, the stochastic integral has zero expectation and thus
$$F(t_0, x) = E[F(s, X(s)) \mid X(t_0) = x].$$
Choose $s = T$ and note that $F(T, X(T)) = h(X(T))$.
(ii) For $r \ne 0$, multiply the PDE (9) by $e^{-rt}$:
$$F_t(t, x)e^{-rt} + \beta(t, x)F_x(t, x)e^{-rt} + 0.5\gamma^2(t, x)F_{xx}(t, x)e^{-rt} - rF(t, x)e^{-rt} = 0.$$
Set $G(t, x) := F(t, x)e^{-rt}$. Since
$$G_t(t, x) = F_t(t, x)e^{-rt} - rF(t, x)e^{-rt},$$
we get
$$G_t(t, x) + \beta(t, x)G_x(t, x) + 0.5\gamma^2(t, x)G_{xx}(t, x) = 0$$
with terminal condition $G(T, x) = e^{-rT}h(x)$. Now we can apply (i):
$$G(t_0, x) = E\bigl[e^{-rT}h(X(T)) \mid X(t_0) = x\bigr].$$
Substituting $G(t_0, x) = e^{-rt_0}F(t_0, x)$ gives the desired result. $\Box$
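The Feynman–Kac representation turns the Black–Scholes PDE into an expectation that can be estimated by Monte Carlo: with $\beta(t,x) = rx$ and $\gamma(t,x) = x\sigma$, simulate $dY = Y(r\,dt + \sigma\,dW)$ and average the discounted payoff. A minimal sketch (our own illustration, assuming numpy; parameters and seed are arbitrary) compares this estimate with the closed-form Black–Scholes value:

```python
import numpy as np
from math import erf, exp, log, sqrt

rng = np.random.default_rng(7)
s0, k, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
paths = 400_000

# Feynman-Kac with beta(t,x) = r x, gamma(t,x) = x sigma: Y(T) has the
# exact lognormal form, so no time stepping is needed here.
W_T = rng.normal(0.0, np.sqrt(T), paths)
Y_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)
mc_price = float(exp(-r * T) * np.maximum(Y_T - k, 0.0).mean())

# Closed-form Black-Scholes value for comparison
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
bs_price = s0 * norm_cdf(d1) - k * exp(-r * T) * norm_cdf(d1 - sigma * sqrt(T))
```

The Monte Carlo estimate and the closed-form price should agree up to the statistical error of the sample mean.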
Remarks.
• Two crucial points were not proved: Firstly, we have not checked under which conditions a $C^{1,2}$-function $F$ exists. Secondly, we have not checked under which conditions $\gamma(\cdot, X)F_x(\cdot, X) \in \mathcal{L}^2[0,T]$.
• To solve the Black–Scholes PDE, choose $\beta(t, x) = rx$ and $\gamma(t, x) = x\sigma$.
• The Feynman–Kac representation can be generalized to a multidimensional setting, i.e. $X$ a multidimensional Ito process, with stochastic short rate $r = r(X)$.
4.3 Pricing with Deflators
In a single-period model the deflator is given by (see the remark to Proposition 2.7)
$$H = \frac{1}{1 + r}\,\frac{dQ}{dP}.$$
This motivates the definition of the deflator in continuous time:
$$H(t) := \frac{1}{M(t)}\,Z(t),$$
where
$$M(t) := \exp\Bigl(\int_0^t r(s)\,ds\Bigr), \quad Z(t) := \exp\Bigl(-0.5\int_0^t \|\theta(s)\|^2\,ds - \int_0^t \theta(s)'\,dW(s)\Bigr).$$
The process $\theta$ is implicitly defined as the solution of the following system of linear equations:
$$\sigma(t)\theta(t) = \mu(t) - r(t)\mathbf{1}. \tag{10}$$
Applying Ito's product rule yields the following SDE for $H$:
$$dH(t) = d\bigl(Z(t)M(t)^{-1}\bigr) = -H(t)\bigl[r(t)\,dt + \theta(t)'\,dW(t)\bigr].$$
Remarks.
• In this section we are going to work under the physical measure. Nevertheless, we will see later on that $Z$ can be interpreted as a density inducing a change of measure (Girsanov's theorem).
• The system of equations (10) need not have a (unique) solution.
• The process $\theta$ is said to be the market price of risk and can be interpreted as a risk premium (excess return per unit of volatility).
The following theorem shows that $H$ indeed has the desired properties:
Theorem 4.5 (Arbitrage-free Pricing with Deflator and Completeness)
(i) Assume that (10) has a bounded solution $\theta$. Let $\varphi$ be an admissible portfolio strategy. Then $\{H(t)X^\varphi(t)\}_{t\in[0,T]}$ is both a local martingale and a supermartingale, implying
$$E[H(t)X^\varphi(t)] \le X^\varphi(0) \quad \text{for all } t \in [0,T]. \tag{11}$$
Besides, the market is arbitrage-free.
(ii) Assume that $I = m$ ("# stocks = # sources of risk") and that $\sigma$ is uniformly positive definite.^{17} Then (10) has a unique solution which is uniformly bounded. Furthermore, let $C(T)$ be an $\mathcal{F}_T$-measurable contingent claim with
$$x := E[H(T)C(T)] < \infty. \tag{12}$$
Then there exists a P $\otimes$ Lebesgue-unique portfolio process $\varphi$ such that $\varphi$ is a replication strategy for $C(T)$, i.e. (P-a.s.)
$$X^\varphi(T) = C(T),$$
and $X^\varphi(0) = x$. Hence, the market is complete. The time-$t$ price of the claim equals
$$C(t) = \frac{E[H(T)C(T) \mid \mathcal{F}_t]}{H(t)},$$
i.e. the process $\{H(t)C(t)\}_{t\in[0,T]}$ is a martingale.
Remarks.
• The boundedness assumption in (i) can be weakened.
• The price in (ii) is thus equal to the expected discounted payoff under the physical measure, where we discount with the state-dependent discount factor $H$, the deflator.
• Compare (ii) with Theorem 2.13.
Proof. For simplicity, $I = 1$. (i) Applying Ito's product rule, one gets for an admissible $\varphi = (\varphi_0, \varphi_1)$:^{18}
$$d(HX^\varphi) = H\,dX^\varphi + X^\varphi\,dH + d\langle H, X^\varphi\rangle$$
$$= HX^\varphi r\,dt + H\varphi_1 S(\mu - r)\,dt + H\varphi_1 S\sigma\,dW - X^\varphi Hr\,dt - X^\varphi H\theta\,dW - H\varphi_1 S\theta\sigma\,dt$$
$$= H\bigl(\varphi_1 S\sigma - X^\varphi\theta\bigr)\,dW. \tag{13}$$
^{17} The process $\sigma$ is uniformly positive definite if there exists a constant $K > 0$ such that $x'\sigma(t)\sigma(t)'x \ge Kx'x$ for all $x \in \mathbb{R}^m$ and all $t \in [0,T]$, P-a.s. This requirement implies that $\sigma(t)$ is regular.
^{18} Note that $\theta\sigma = \mu - r$.
Since $\varphi$ is admissible, we have $X^\varphi(t) \ge 0$. By definition, $H(t) \ge 0$ and thus $H(t)X(t) \ge 0$. By Proposition 3.25, $HX^\varphi$ is a supermartingale, which proves (11). Note that an arbitrage strategy would violate this supermartingale property.
(ii) Clearly, $\theta = (\mu - r)/\sigma$ is unique and uniformly bounded by some $K_\theta > 0$.^{19}
Define
$$X(t) := \frac{E[H(T)C(T) \mid \mathcal{F}_t]}{H(t)}.$$
By definition, $X(t)$ is adapted, $X(0) = x$, $X(t) \ge 0$, and $X(T) = C(T)$ (P-a.s.).^{20} Set $M(t) := H(t)X(t) = E[H(T)C(T) \mid \mathcal{F}_t]$. Due to (12), $M$ is a martingale with $M(0) = x$. Applying the martingale representation theorem, there exists an $\{\mathcal{F}_t\}$-progressively measurable real-valued process $\psi$ with $\int_0^T \psi(s)^2\,ds < \infty$ such that (P-a.s.)
$$H(t)X(t) = M(t) = x + \int_0^t \psi(s)\,dW(s). \tag{14}$$
Since $H$ and $\int_0^{\,\cdot} \psi(s)\,dW(s)$ are continuous, we can choose $X$ to be continuous. Comparing (13) and (14) yields^{21}
$$H\bigl(\varphi_1 S\sigma - X\theta\bigr) = \psi.$$
Since there exists a $K_\sigma > 0$ such that $\sigma(t) \ge K_\sigma$ for all $t \in [0,T]$,^{22} we get
$$\varphi_1 = \frac{\psi/H + X\theta}{S\sigma}.$$
Since $X^\varphi = \varphi_0 M + \varphi_1 S$ for any strategy $\varphi$, we define
$$\varphi_0 := \frac{X - \varphi_1 S}{M},$$
which implies that $\varphi$ is self-financing and $X = X^\varphi$. Since $M > 0$, $H > 0$, and $X \ge 0$ are continuous, $M$ as well as $H$ are pathwise bounded away from zero and $X$ is pathwise bounded from above, i.e. $M(\omega) \ge K_M(\omega) > 0$, $H(\omega) \ge K_H(\omega) > 0$, and $X(\omega) \le K_X(\omega) < \infty$ for a.a. $\omega \in \Omega$. Therefore, the process $\varphi$ is a trading strategy because^{23}
$$\int_0^T \bigl(\varphi_1(s)S(s)\bigr)^2\,ds = \int_0^T \Bigl(\frac{\psi(s)/H(s) + X(s)\theta(s)}{\sigma(s)}\Bigr)^2\,ds \le \frac{2}{K_\sigma^2}\Bigl(\frac{1}{K_H^2}\int_0^T \psi(s)^2\,ds + K_X^2 K_\theta^2\,T\Bigr) < \infty,$$
$$\int_0^T |\varphi_0(s)|\,ds = \int_0^T \frac{|X(s) - \varphi_1(s)S(s)|}{M(s)}\,ds \le \frac{1}{K_M}\Bigl(K_X T + \int_0^T \bigl(1 + (\varphi_1(s)S(s))^2\bigr)\,ds\Bigr) < \infty$$
P-a.s. $\Box$
^{19} Note that all coefficients are bounded and $\sigma$ is uniformly positive definite.
^{20} Since $\mathcal{F}_0$ is the trivial $\sigma$-algebra augmented by the null sets, every $\mathcal{F}_0$-measurable random variable is a.s. constant, and the conditional expectation equals the unconditional one.
^{21} Note that the representation of an Ito process is unique.
^{22} By assumption, $\sigma$ is uniformly positive definite.
^{23} $(a + b)^2 \le 2a^2 + 2b^2$.
Theorem 4.6 (Martingale Representation Theorem)
Let $\{M_t\}_{t\in[0,T]}$ be a real-valued process adapted to the Brownian filtration.
(i) If $M$ is a square-integrable martingale, i.e. $E[M_t^2] < \infty$ for all $t \in [0,T]$, then there exists a progressively measurable $\mathbb{R}^m$-valued process $\{\psi_t\}_{t\in[0,T]}$ with
$$E\Bigl[\int_0^T \|\psi_s\|^2\,ds\Bigr] < \infty \tag{15}$$
and
$$M_t = M_0 + \int_0^t \psi_s'\,dW_s, \quad P\text{-a.s.}$$
(ii) If $M$ is a local martingale, then the result from (i) holds with
$$\int_0^T \|\psi_s\|^2\,ds < \infty$$
instead of (15).
In both cases, $\psi$ is P $\otimes$ Lebesgue-unique.
Proof. See Korn/Korn (1999, pp. 81ff).
Remarks.
• All Brownian (local) martingales are stochastic integrals and vice versa!
• The quadratic (co)variation is defined for all Brownian (local) martingales.
Proposition and Definition 4.7 (Quadratic (Co)variation)
(i) Let $X$, $Y$ be Brownian local martingales. The quadratic covariation $\langle X, Y\rangle$ is the unique adapted, continuous process of bounded variation with $\langle X, Y\rangle_0 = 0$ such that $XY - \langle X, Y\rangle$ is a Brownian local martingale. If $X = Y$, this process is non-decreasing.
(ii) Let $(\Omega, \mathcal{G}, P, \{\mathcal{G}_t\})$ be a filtered probability space satisfying the usual conditions and let $X$, $Y$ be continuous local martingales for the filtration $\{\mathcal{G}_t\}$. Then the result of (i) holds with $XY - \langle X, Y\rangle$ being a local martingale for $\{\mathcal{G}_t\}$.
Proof. (i) By the martingale representation theorem, $dX = \alpha\,dW$ and $dY = \beta\,dW$ with progressively measurable processes $\alpha$ and $\beta$. Ito's product rule yields
$$d(XY) = Y\,dX + X\,dY + d\langle X, Y\rangle = Y\alpha\,dW + X\beta\,dW + \alpha\beta\,dt.$$
Since Ito integrals are Brownian local martingales, $XY - \langle X, Y\rangle$ is a Brownian local martingale. The rest is obvious.
(ii) See Karatzas/Shreve (1991, p. 36). $\Box$
Remark. One can show that if $X$ and $Y$ are square-integrable, then $XY - \langle X, Y\rangle$ is a martingale.
4.4 Pricing with Risk-neutral Measures
The deflator is defined as
$$H(t) = \frac{1}{M(t)}\,Z(t),$$
where
$$dZ(t) = -Z(t)\theta(t)'\,dW(t)$$
is a local martingale. If it is even a martingale, we have $E[Z(T)] = 1$ and we can define a probability measure on $\mathcal{F}_T$ by
$$Q(A) := E[1_A Z(T)], \quad A \in \mathcal{F}_T.$$
Hence, $Z(T)$ is the density of $Q$, and we can rewrite the result from Theorem 4.5 as
$$C(0) = E[H(T)C(T)] = E\Bigl[Z(T)\,\frac{C(T)}{M(T)}\Bigr] = E_Q\Bigl[\frac{C(T)}{M(T)}\Bigr],$$
i.e. the price of a claim equals its expected discounted payoff, where we discount with the money market account (instead of the deflator) and the expectation is calculated w.r.t. the measure $Q$ (instead of the physical measure).
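The change of measure above can be illustrated numerically: weighting physical-measure call payoffs with the density $Z(T)$ should give (up to Monte Carlo noise) the same price as simulating the stock directly with the riskless drift. A minimal sketch with constant Black–Scholes-type coefficients (our own illustration, assuming numpy; parameters and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)
s0, k, mu, r, sigma, T = 100.0, 100.0, 0.10, 0.05, 0.2, 1.0
theta = (mu - r) / sigma      # market price of risk, equation (10) with I = m = 1
paths = 400_000

W_T = rng.normal(0.0, np.sqrt(T), paths)   # terminal W under the physical measure P

# Stock and call payoff under P (drift mu)
S_T = s0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
payoff = np.maximum(S_T - k, 0.0)

# Density Z(T) and money market account M(T)
Z_T = np.exp(-0.5 * theta**2 * T - theta * W_T)
M_T = np.exp(r * T)
price_P = float(np.mean(Z_T * payoff / M_T))   # E[Z(T) C(T)/M(T)] under P

# Direct "risk-neutral" simulation: drift r instead of mu
S_T_Q = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)
price_Q = float(np.mean(np.maximum(S_T_Q - k, 0.0)) / M_T)
```

The sample mean of $Z(T)$ should also be close to 1, reflecting that $Z(T)$ is a probability density.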
Two important questions arise:
Q1. Under which conditions is $Z$ a martingale? (Novikov condition)
Q2. What happens to the dynamics of the assets if we change to the measure $Q$? (Girsanov's theorem)
Proposition 4.8
Let $\theta$ be progressively measurable and $K > 0$. If $\int_0^T \|\theta(s)\|^2\,ds \le K$, then the process $\{Z(t)\}_{t\in[0,T]}$ defined by
$$dZ(t) = -Z(t)\theta(t)'\,dW(t), \quad Z(0) = 1, \tag{16}$$
is a martingale.
Proof. For simplicity, $m = 1$. The process $Z$ is a local martingale. By variation of constants, we get
$$Z(t) = \exp\Bigl(-0.5\int_0^t \theta(s)^2\,ds - \int_0^t \theta(s)\,dW(s)\Bigr) \ge 0. \tag{17}$$
By Proposition 3.25, it is thus a supermartingale. Hence, it is sufficient to show that $E[Z(T)] = 1$. Define
$$\tau_n := \inf\Bigl\{t \ge 0 : \Bigl|\int_0^t \theta(s)\,dW(s)\Bigr| \ge n\Bigr\}.$$
Due to (17), for fixed n ∈ IN, we get Z(T∧τ_n)² ≤ e^{2n} and thus
∫_0^{T∧τ_n} Z(s)² θ(s)² ds ≤ K e^{2n}. From (16), we obtain E[Z(T∧τ_n)] = 1. Therefore, the claim is proved if
{Z(T∧τ_n)}_{n∈IN} is uniformly integrable (UI) because in this case
lim_{n→∞} E[Z(T∧τ_n)] = E[Z(T)].
We obtain the estimate
|Z(T∧τ_n)|^{1+ε} ≤ exp( 0.5(ε + ε²) ∫_0^T θ(s)² ds ) · Y(T∧τ_n),
where the first factor is independent of n and bounded by assumption and
dY(t) = −Y(t)(1+ε)θ(t) dW(t), Y(0) = 1.
With the same arguments leading to E[Z(T∧τ_n)] = 1 we conclude E[Y(T∧τ_n)] = 1.
Therefore,
sup_{n∈IN} E[|Z(T∧τ_n)|^{1+ε}] < ∞,
which, by Proposition 3.19, proves UI of {Z(T∧τ_n)}_{n∈IN}. 2
Remark. The requirement in the proposition is a strong one. It can be replaced
by the Novikov condition (see Karatzas/Shreve (1991, pp. 198ff)):
E[ exp( 0.5 ∫_0^T ||θ(s)||² ds ) ] < ∞.
We turn to the second question and define (Q := Q_T)
Q_t(A) := E[1_A Z(t)], A ∈ F_t, t ∈ [0, T]. (18)
The following lemma shows that Q_t is well-defined.
Lemma 4.9
For a bounded stopping time τ ∈ [0, T] and A ∈ F_τ, we get Q_T(A) = Q_τ(A).
Proof. The optional sampling theorem (OS) yields (see Theorem 3.17)
Q_T(A) = E[1_A Z(T)] = E[E[1_A Z(T) | F_τ]] = E[1_A E[Z(T) | F_τ]] =(OS) E[1_A Z(τ)] = Q_τ(A).
2
To prove Girsanov’s theorem, we need some additional tools.
Theorem 4.10 (Levy's Characterization of the Brownian Motion)
An m-dimensional stochastic process X with X(0) = 0 is an m-dimensional Brownian
motion if and only if it is a continuous local martingale with
<X_j, X_k>_t = δ_{jk} t. [24]
[24] Note that δ_{jk} = 1_{j=k} is the Kronecker delta.
Proof. For simplicity m = 1. Clearly, the Brownian motion is a continuous local
martingale with <W>_t = t. To prove the converse, fix u ∈ IR and set f(t, x) =
exp(iux + 0.5u²t) and Z_t := f(t, X_t), where i = √−1. Since f ∈ C^{1,2} is analytic, a
complex-valued version of Ito's formula yields
dZ = f_t dt + f_x dX + 0.5 f_xx d<X>
= 0.5u² f dt + iu f dX − 0.5u² f d<X>
=(<X>_t = t) iu f dX.
Since an integral w.r.t. a local martingale is a local martingale as well, [25] Z is a
(complex-valued) local martingale which is bounded on [0, T] for every T ∈ IR.
Hence, Z is a martingale implying E[exp(iuX_t + 0.5u²t) | F_s] = exp(iuX_s + 0.5u²s),
i.e.
E[exp(iu(X_t − X_s)) | F_s] = exp(−0.5u²(t − s)).
Since this holds for any u ∈ IR, the difference X_t − X_s ∼ N(0, t − s) and X_t − X_s is
independent of F_s. 2
Proposition 4.11 (Bayes Rule)
Let µ and ν be two probability measures on a measurable space (Ω, F) such that
dν/dµ = f for some f ∈ L¹(µ). Let X be a random variable on (Ω, F) such that
E_ν[|X|] = ∫ |X| f dµ < ∞.
Let H be a σ-field with H ⊂ F. Then
E_ν[X|H] E_µ[f|H] = E_µ[fX|H] a.s.
Proof. Let H ∈ H. We start with the right-hand side:
∫_H E_µ[fX|H] dµ = ∫_H Xf dµ = ∫_H X dν = ∫_H E_ν[X|H] dν = ∫_H E_ν[X|H] f dµ
= E_µ[ E_ν[X|H] f 1_H ] = E_µ[ E_µ[ E_ν[X|H] f 1_H | H ] ]
= E_µ[ 1_H E_ν[X|H] E_µ[f|H] ] = ∫_H E_ν[X|H] E_µ[f|H] dµ.
2
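The identity is easy to check on a small finite probability space (an illustrative sketch; the weights, the density, and the partition generating H are arbitrary choices):

```python
import numpy as np

# Finite-space check of the Bayes rule
#   E_nu[X|H] * E_mu[f|H] = E_mu[f*X|H]
# on Omega = {0,1,2,3}, with H generated by the partition {0,1} | {2,3}.
mu = np.array([0.1, 0.2, 0.3, 0.4])        # base measure mu
f  = np.array([2.0, 0.5, 1.0, 1.0])        # density dnu/dmu (up to normalization)
f  = f / (mu * f).sum()                    # normalize so nu is a probability measure
nu = mu * f
X  = np.array([1.0, -2.0, 3.0, 0.5])

blocks = [[0, 1], [2, 3]]                  # atoms of the sigma-field H

def cond_exp(p, Y):
    """Conditional expectation of Y given H under the measure with weights p."""
    out = np.empty_like(Y)
    for b in blocks:
        out[b] = (p[b] * Y[b]).sum() / p[b].sum()
    return out

lhs = cond_exp(nu, X) * cond_exp(mu, f)
rhs = cond_exp(mu, f * X)
print(np.allclose(lhs, rhs))  # True
```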
Lemma 4.12
Let {Y_t, F_t}_{t∈[0,T]} be a right-continuous adapted process. The following statements
are equivalent:
(i) {Z_t Y_t, F_t}_{t∈[0,T]} is a local P-martingale.
(ii) {Y_t, F_t}_{t∈[0,T]} is a local Q_T-martingale.
[25] For instance, a Brownian local martingale can be represented as an Ito integral.
Proof. Assume (i) to hold and let {τ_n}_n be a localizing sequence of ZY. Then [26]
E_{Q_T}[Y_{t∧τ_n} | F_s] =(Lem 4.9) E_{Q_{t∧τ_n}}[Y_{t∧τ_n} | F_s]
=(Bayes) E_P[Z_{t∧τ_n} Y_{t∧τ_n} | F_s] / E_P[Z_{t∧τ_n} | F_s]
= Z_{s∧τ_n} Y_{s∧τ_n} / Z_{s∧τ_n} = Y_{s∧τ_n}.
The converse implication can be proved similarly (exercise). 2
Theorem 4.13 (Girsanov's Theorem)
We assume that Z is a martingale and define the process {W^Q(t), F_t} by
W^Q_j(t) := W_j(t) + ∫_0^t θ_j(s) ds, t ≥ 0,
j ∈ {1, ..., m}. For fixed T ∈ [0, ∞), the process {W^Q(t), F_t}_{t∈[0,T]} is an m-dimensional
Brownian motion on (Ω, F_T, Q_T), where Q_T is defined by (18).
Proof. For simplicity, m = 1. By Levy's characterization, we need to show that
(i) {W^Q(t), F_t}_{t∈[0,T]} is a continuous local martingale on (Ω, F_T, Q_T).
(ii) <W^Q>_t = t on (Ω, F_T, Q_T).
ad (i). Ito's product rule yields
d(ZW^Q) = Z dW^Q + W^Q dZ + d<W^Q, Z>
= Z dW + Zθ dt − W^Q Zθ dW − Zθ dt
= (Z − W^Q Zθ) dW,
i.e. ZW^Q is a local P-martingale. By Lemma 4.12, the process W^Q is a local
Q_T-martingale as well.
ad (ii). Ito's product rule leads to
d( Z ((W^Q)² − t) ) = d( W^Q (ZW^Q) ) − d(Z t)
= W^Q d(ZW^Q) + ZW^Q dW^Q + d<ZW^Q, W^Q> − Z dt − t dZ
= W^Q d(ZW^Q) + ZW^Q (θ dt + dW) + (Z − W^Q Zθ) dt − Z dt + tθZ dW
= W^Q d(ZW^Q) + Z(W^Q + tθ) dW,
i.e. {Z_t ((W^Q_t)² − t)} is a local P-martingale. Applying Lemma 4.12 again yields
that {(W^Q_t)² − t} is a local Q_T-martingale. By Proposition 4.7 (ii), the claim (ii)
follows. 2
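Girsanov's theorem can be illustrated by a small simulation (a sketch with arbitrary constants θ = 0.8, T = 2, not part of the lecture). Under P, W^Q(t) = W(t) + θt has drift θ; after reweighting each path with the density Z(T), the first two moments of W^Q(T) match those of a driftless Brownian motion: E^Q[W^Q(T)] = 0 and Var^Q[W^Q(T)] = T.

```python
import numpy as np

# Under P: W is Brownian, W_Q(T) = W(T) + theta*T has mean theta*T.
# Under Q (paths weighted with Z(T)): W_Q should look like a standard BM.
rng = np.random.default_rng(1)
theta, T, n_paths = 0.8, 2.0, 400_000

W_T  = rng.normal(0.0, np.sqrt(T), size=n_paths)
Z_T  = np.exp(-0.5 * theta**2 * T - theta * W_T)     # Girsanov density dQ/dP
WQ_T = W_T + theta * T                               # W^Q(T) = W(T) + theta*T

mean_Q = np.mean(Z_T * WQ_T)                  # E^Q[W^Q(T)], approximately 0
var_Q  = np.mean(Z_T * WQ_T**2) - mean_Q**2   # Var^Q[W^Q(T)], approximately T
print(mean_Q, var_Q)
```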
Corollary 4.14 (Girsanov II)
If {M_t}_{t∈[0,T]} is a Brownian local P-martingale and
D_t := ∫_0^t (1/Z_s) d<Z, M>_s,
then {M_t − D_t}_{t∈[0,T]} is a continuous local Q_T-martingale and <M − D>^Q = <M>, i.e.
the quadratic variations of M − D under Q_T and of M under P coincide.
[26] Note that Y_{t∧τ_n} is F_{t∧τ_n}-measurable, i.e. the conditional expected value can be computed
w.r.t. Q_{t∧τ_n} instead of Q_T. Lemma 4.9 ensures that both measures coincide on F_{t∧τ_n}. Besides,
F_{t∧τ_n} ⊂ F_T, i.e. the Bayes rule can be applied. Furthermore, by Proposition 3.23, E_P[Z_{t∧τ_n} | F_s] = Z_{s∧τ_n}.
Proof. By the martingale representation theorem, we have dM = αdW for some
progressive α. Hence,
dD = (1/Z) d<Z, M> = −(1/Z) θZα dt = −θα dt
and thus
d(M − D) = α dW + θα dt = α dW^Q.
Therefore, M − D is a local Q-martingale. Besides,
d<M − D>^Q_t = α² dt = d<M>_t,
which completes the proof. 2
Theorem 4.15 (Converse Girsanov)
Let Q be a probability measure equivalent to P on F_T. Then the density process
{Z(t), F_t}_{t∈[0,T]} defined by
Z(t) := dQ/dP |_{F_t}
is a positive Brownian P-martingale possessing the representation
dZ(t) = −Z(t)θ(t)'dW(t), Z(0) = 1,
for a progressively measurable m-dimensional process θ with
∫_0^T ||θ(s)||² ds < ∞ P-a.s. (19)
Proof. Since Q and P are equivalent on F_T, both measures are equivalent on F_t,
t ∈ [0, T], i.e. Z(t) > 0 for all t ∈ [0, T]. Besides, for A ∈ F_t,
∫_A Z(T) dP = ∫_A dQ/dP|_{F_T} dP =(Lem 4.9) ∫_A dQ/dP|_{F_t} dP = ∫_A Z(t) dP,
i.e. Z(t) = E[Z(T) | F_t] implying that Z is a Brownian martingale. By the martingale
representation theorem,
Z(t) = 1 + ∫_0^t ψ(s)'dW(s)
with a progressive process ψ satisfying ∫_0^T ||ψ(s)||² ds < ∞ P-a.s. Since Z > 0,
we have Z(ω) ≥ K_Z(ω) > 0 on [0, T] for a.a. ω ∈ Ω. Defining
θ(t) := −ψ(t)/Z(t)
thus implies (19), which completes the proof. 2
Definition 4.3 (Doleans-Dade Exponential)
For an Ito process X with X_0 = 0, the Doleans-Dade exponential (stochastic exponential)
of X, written ℰ(X), is the unique Ito process Y that is the solution of
Y_t = 1 + ∫_0^t Y_s dX_s.
Remarks.
• Therefore, the density Z is sometimes written as ℰ(−∫_0^· θ_s'dW_s).
• One can show that ℰ(X)ℰ(Y) = ℰ(X + Y + <X, Y>) and that (ℰ(X))^{−1} = ℰ(−X + <X, X>).
We now come back to our pricing problem. We set Q := Q_T. Recall our definitions
H(t) := Z(t)/M(t),
where
M(t) := exp( ∫_0^t r(s) ds ),
Z(t) := exp( −0.5 ∫_0^t ||θ(s)||² ds − ∫_0^t θ(s)'dW(s) ),
and θ is implicitly defined by
σ(t)θ(t) = µ(t) − r(t)1. (20)
For bounded θ, by Girsanov's theorem, Z(t) = dQ/dP|_{F_t} is a density process for
the measure Q_t(A) = E[Z(t)1_A] and {W^Q(t), F_t} defined by
W^Q_j(t) := W_j(t) + ∫_0^t θ_j(s) ds, t ≥ 0,
is a Q-Brownian motion.
Definition 4.4 (Equivalent Martingale Measure)
A probability measure Q is a (weak) equivalent martingale measure if
(i) Q ∼ P and
(ii) all discounted price processes S_i/M, i = 1, ..., I, are (local) Q-martingales.
Remark. An equivalent martingale measure (EMM) is also said to be a risk-neutral measure.
Proposition 4.16
If Q is a weak EMM and
E[ exp( 0.5 ∫_0^T ||σ_i(s)||² ds ) ] < ∞ (21)
for every i ∈ {1, ..., I}, then Q is already an EMM.
Proof. Define Y_i := S_i/M. Since Q is a weak martingale measure, Y_i is a local
Q-martingale. By the martingale representation theorem and the uniqueness of the
representation of an Ito process,
dY_i = Y_i σ_i dW^Q.
By the Novikov condition (21), this is a Q-martingale. 2
Remark. Since we have assumed that σ is uniformly bounded, we need not
distinguish between EMM and weak EMM.
Theorem 4.17 (Characterization of EMM)
In the Brownian market (6), the following are equivalent:
(i) The probability measure Q is an EMM.
(ii) There exists a solution θ to the system of equations (20) such that the process
Z = ℰ(−∫_0^· θ_s'dW_s) is a Girsanov density inducing the probability measure Q.
Proof. Define Y_i := S_i/M. Assume (i) to hold. Since Q is a martingale measure,
dY_i = Y_i σ_i dW^Q. (22)
Besides, since Q is equivalent to P, by the converse Girsanov theorem, there exists a process
θ such that dQ/dP = ℰ(−∫_0^· θ_s'dW_s) and dW^Q = dW + θdt is a Brownian increment
under Q. Therefore,
dS_i = S_i [µ_i dt + σ_i dW] = S_i [(µ_i − σ_i θ) dt + σ_i dW^Q]
implying
dY_i = Y_i [(µ_i − σ_i θ − r) dt + σ_i dW^Q]. (23)
Comparing (22) and (23) gives (ii).
On the other hand, assume (ii) to hold. By definition, Q is equivalent to P and
dW^Q = dW + θdt is a Q-Brownian increment. Ito's formula leads to
dY_i = Y_i [(µ_i − r) dt + σ_i dW] = Y_i [(µ_i − r − σ_i θ) dt + σ_i dW^Q] = Y_i σ_i dW^Q,
which is a Q-martingale increment because σ is assumed to be uniformly bounded. 2
Corollary 4.18 (Q-Dynamics of the Assets)
Under an EMM Q, the dynamics of asset i are given by
dS_i(t) = S_i(t) [r(t) dt + σ_i(t)'dW^Q(t)],
i.e. the drift µ does not matter under Q.
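A small simulation illustrates the corollary (a sketch; the parameter values are arbitrary): simulating the stock under Q with drift r, the discounted stock e^{−rT}S(T) has expectation S(0), and the physical drift µ never enters.

```python
import numpy as np

# Under Q: S(T) = S0 * exp((r - 0.5*sigma**2)*T + sigma*W^Q(T)),
# so exp(-r*T)*S(T) is a Q-martingale: E^Q[exp(-r*T)*S(T)] = S0.
rng = np.random.default_rng(2)
S0, r, sigma, T, n_paths = 100.0, 0.03, 0.25, 1.0, 300_000

WQ_T = rng.normal(0.0, np.sqrt(T), size=n_paths)
S_T  = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * WQ_T)

disc_mean = np.exp(-r * T) * S_T.mean()
print(disc_mean)  # close to S0 = 100
```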
Theorem 4.19 (Arbitrage-free Pricing under Q and Completeness)
(i) Let Q be an EMM and let ϕ be an admissible portfolio strategy. Then {X^ϕ(t)/M(t)}_{t∈[0,T]}
is both a local Q-martingale and a Q-supermartingale implying that
E^Q[ X^ϕ(t)/M(t) ] ≤ X^ϕ(0) for all t ∈ [0, T].
Besides, the market is arbitrage-free. If additionally
E^Q[ ∫_0^T ( ϕ_i(s) S_i(s)/M(s) )² ds ] < ∞ (24)
for every i = 1, ..., I, then {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-martingale.
(ii) Assume that I = m ("# stocks = # sources of risk") and that σ is uniformly
positive definite. Then (10) has a unique solution which is uniformly bounded,
implying that there exists a unique EMM Q. Besides, the market is complete.
Furthermore, let C(T) be an F_T-measurable contingent claim with
x := E^Q[C(T)/M(T)] < ∞.
Then the time-t price of the claim equals
C(t) = M(t) E^Q[ C(T)/M(T) | F_t ],
i.e. the discounted price process {C(t)/M(t)}_{t∈[0,T]} is a Q-martingale.
Proof. (i) Consider
dX^ϕ = ϕ'dS = Xr dt + Σ_{i=1}^I ϕ_i S_i σ_i dW^Q.
Since d(1/M) = −(1/M)r dt,
d(X/M) = X d(1/M) + (1/M) dX = (1/M) Σ_{i=1}^I ϕ_i S_i σ_i dW^Q,
which is a positive local Q-martingale and thus a Q-supermartingale. If (24) is
satisfied, then it is a Q-martingale.
(ii) Since θ is unique, by Theorem 4.17, the EMM Q is unique. The claim price
follows from Theorem 4.5 because H = Z/M. 2
Remark.
• An EMM exists if, for instance, θ is bounded. This follows from Theorem 4.17.
• If ϕ is uniformly bounded, then (24) holds because µ, σ, and r are assumed to be
uniformly bounded.
• If the market is incomplete, then there exists more than one market price of risk
θ. Let Q and Q̃ be two corresponding EMMs. If (24) holds, then for an admissible
strategy ϕ we get
M(t) E^Q[ X^ϕ(T)/M(T) | F_t ] = X^ϕ(t) = M(t) E^{Q̃}[ X^ϕ(T)/M(T) | F_t ].
This implies that any EMM assigns the same value to a replicable claim, namely
its replication costs. However, the prices of non-replicable claims may differ
under different EMMs. This is the reason why pricing in incomplete markets is so
complicated and, in some sense, arbitrary.
Corollary 4.20 (Black-Scholes Again)
The price of the European call is given by
C(t) = S(t) N(d_1(t)) − K N(d_2(t)) e^{−r(T−t)}
with
d_1(t) = [ ln(S(t)/K) + (r + 0.5σ²)(T − t) ] / ( σ √(T − t) ),
d_2(t) = d_1(t) − σ √(T − t).
Proof. By Theorem 4.19,
C(t) = E^Q[ max{S(T) − K; 0} | F_t ] e^{−r(T−t)},
where due to Corollary 4.18
dS(t) = S(t) [r dt + σ dW^Q(t)].
Thus S(T) has the same distribution as Y(T) in the proof of Proposition 4.3 and
the result follows. 2
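The corollary translates directly into code (a sketch; the parameter values S = 100, K = 95, r = 0.05, σ = 0.2, τ = 0.75 are arbitrary). The put price is obtained from the put-call parity C − P = S − Ke^{−rτ}:

```python
import math

def norm_cdf(x):
    # standard normal distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call, tau = T - t."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

S, K, r, sigma, tau = 100.0, 95.0, 0.05, 0.2, 0.75
call = bs_call(S, K, r, sigma, tau)
put = call - S + K * math.exp(-r * tau)   # put-call parity
print(call, put)  # roughly 11.7 and 3.2
```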
Corollary 4.21 (Q-Dynamics of Claims)
Under the assumptions of Theorem 4.19, the dynamics of a replicable claim C(T)
are given by
dC(t) = C(t) [r(t) dt + ψ(t)'dW^Q(t)] (25)
with a suitable progressively measurable process ψ.
Proof. By Theorem 4.19, C/M is a (local) Q-martingale and thus (25) follows
from the martingale representation theorem. 2
Proposition 4.22 (Black-Scholes PDE Again and Hedging)
Under the assumptions of Theorem 4.19, let C(T) be a replicable claim with C(t) =
f(t, S(t)) and C(T) = h(S(T)), where f is a C^{1,2}-function.
(i) The pricing function f satisfies the following multidimensional Black-Scholes
PDE:
f_t + Σ_{i=1}^I f_i s_i r + 0.5 Σ_{i,k=1}^I Σ_{j=1}^m s_i s_k σ_{ij} σ_{kj} f_{ik} − rf = 0, f(T, s) = h(s), s ∈ IR^I_+,
where f_i denotes the partial derivative w.r.t. the i-th asset price.
(ii) A replication strategy for C(T) is given by
ϕ_i(t) = f_i(t, S(t)), i = 1, ..., I,
where the remaining funds C(t) − Σ_{i=1}^I ϕ_i(t)S_i(t) are invested in the money market
account, i.e. ϕ_M(t) = (C(t) − Σ_{i=1}^I ϕ_i(t)S_i(t))/M(t).
Proof. (i) Ito's formula yields
dC = [ f_t + Σ_{i=1}^I f_i S_i r + 0.5 Σ_{i,k=1}^I Σ_{j=1}^m f_{ik} S_i S_k σ_{ij} σ_{kj} ] dt + Σ_{i=1}^I Σ_{j=1}^m f_i S_i σ_{ij} dW^Q_j. (26)
Since the representation of an Ito process is unique, comparing the drift with the
drift of (25) yields the Black-Scholes PDE.
(ii) Since C(T) is attainable, there exists a replication strategy ϕ with X^ϕ(T) =
C(T) and Q-dynamics
dX^ϕ = X^ϕ r dt + Σ_{i=1}^I Σ_{j=1}^m ϕ_i S_i σ_{ij} dW^Q_j,
where due to Theorem 4.19 (ii) the drift equals r. [27] Comparing the diffusion part
with the diffusion part of (26) shows that one can choose ϕ_i = f_i. 2
Remarks.
• The strategy in (ii) is said to be a delta hedging strategy.
• Under the conditions of Theorem 4.19 (ii), the delta hedging strategy is the
unique replication strategy.
• In the Black-Scholes model, we get for the European call (see Exercise 29): ϕ_S =
N(d_1). Due to the put-call parity, the delta hedge for the put reads ϕ_S = −N(−d_1).
• If I > m ("# stocks > # sources of risk"), then in general there exist further
replication strategies, i.e. the choice ϕ_i = f_i in the proof of (ii) is not the only
possible choice.
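The delta hedge can be illustrated with a discrete-time simulation (a sketch; the parameters, the seed, and the 2000-step rebalancing grid are arbitrary choices). Starting from the option premium and always holding N(d_1) shares, the self-financing portfolio approximately replicates the call payoff; the error shrinks as the grid is refined. Note that the stock is simulated under P with drift µ ≠ r, yet the hedge works:

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_price_delta(S, K, r, sigma, tau):
    """Black-Scholes call price and delta N(d1) for time to maturity tau."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    price = S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)
    return price, norm_cdf(d1)

random.seed(7)
S, K, r, mu, sigma, T, n_steps = 100.0, 100.0, 0.02, 0.08, 0.2, 1.0, 2000
dt = T / n_steps

price0, _ = call_price_delta(S, K, r, sigma, T)
wealth = price0                     # the hedger starts with the option premium
for i in range(n_steps):
    tau = T - i * dt
    _, delta = call_price_delta(S, K, r, sigma, tau)   # rebalance: hold N(d1) shares
    dW = random.gauss(0.0, math.sqrt(dt))
    S_new = S * math.exp((mu - 0.5 * sigma**2) * dt + sigma * dW)  # stock under P
    # self-financing update: stock position plus money market account
    wealth = delta * S_new + (wealth - delta * S) * math.exp(r * dt)
    S = S_new

payoff = max(S - K, 0.0)
hedge_error = wealth - payoff
print(price0, hedge_error)  # premium roughly 8.92; error small relative to it
```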
In Theorem 4.19, we have proved that the following implications hold:
1) Existence of an EMM =⇒ No arbitrage
2) Uniqueness of the EMM =⇒ Completeness
The following theorems show to what extent the converse implications are true.
Theorem 4.23 (Existence of a Market Price of Risk)
Assume that the market (6) is arbitrage-free. Then there exists a market price of
risk, i.e. there exists a progressively measurable process θ solving the system (10).
Proof. Define φ_i := ϕ_i S_i (amount of money invested in asset i) leading to the
following version of the wealth equation:
dX(t) = X(t)r(t) dt + φ(t)'(µ(t) − r(t)1) dt + φ(t)'σ(t) dW(t).
Assume that there exists a self-financing strategy with
φ(t)'σ(t) = 0 and φ(t)'(µ(t) − r(t)1) ≠ 0.
This strategy would be an arbitrage since it does not involve risk, but its wealth
equation has a drift different from r. Thus, in an arbitrage-free market, every vector
in the kernel Ker(σ(t)') has to be orthogonal to the vector µ(t) − r(t)1. But the
orthogonal complement of Ker(σ(t)') is Range(σ(t)). Hence, µ(t) − r(t)1 has to be
in the range of σ(t), i.e. there exists a θ(t) such that
µ(t) − r(t)1 = σ(t)θ(t).
The progressive measurability of θ is proved in Karatzas/Shreve (1999, pp. 12ff). 2
[27] The representation of the diffusion part follows from rewriting X π'σ dW^Q in terms of ϕ = πX/S.
Remark. In this theorem, nothing is said about whether the corresponding process
Z is a martingale. Although a market price of risk exists, it is not clear if an EMM
exists.
Theorem 4.24 (Uniqueness of the EMM)
Assume that the market is complete and let Q be an EMM such that {X^ϕ(t)/M(t)}_{t∈[0,T]}
is a Q-martingale for all admissible strategies ϕ with E[X^ϕ(T)/M(T)] < ∞. If Q̃
is another EMM, then Q = Q̃, i.e. Q is the unique EMM.
Proof. Fix j ∈ {1, ..., m}. Since the market is complete, there exists an admissible
trading strategy ϕ^j such that X^{ϕ^j}(T) = Y_j(T), where
Y_j(T) = exp( ∫_0^T r(s) ds − 0.5T + W^Q_j(T) ).
From Theorem 4.19, we know that the process X^{ϕ^j}/M is a local martingale under
Q and Q̃. By assumption, X^{ϕ^j}/M is even a Q-martingale implying
X^{ϕ^j}(t) = M(t) E^Q[ X^{ϕ^j}(T)/M(T) | F_t ] = M(t) E^Q[ Y_j(T)/M(T) | F_t ] = Y_j(t) P-a.s.,
i.e. Y_j is a modification of X^{ϕ^j}. By Proposition 3.2, X^{ϕ^j} and Y_j are indistinguishable.
Hence, Y_j/M is a local martingale under Q and Q̃. By Theorem 4.17, both
measures are characterized by market prices of risk θ and θ̃ such that
dW^Q_j(t) = dW_j(t) + θ_j(t) dt,
dW^{Q̃}_j(t) = dW_j(t) + θ̃_j(t) dt.
Therefore,
Y_j(t)/M(t) = 1 + ∫_0^t (Y_j(s)/M(s)) dW^Q_j(s)
= 1 + ∫_0^t (Y_j(s)/M(s)) dW^{Q̃}_j(s) + ∫_0^t (Y_j(s)/M(s)) (θ_j(s) − θ̃_j(s)) ds.
Since Y_j/M is a Brownian local Q̃-martingale, the process Y_j(θ_j − θ̃_j)/M needs to
be indistinguishable from zero. [28] Since Y_j/M > 0, the market prices of risk θ_j and
θ̃_j are indistinguishable. Since this holds for every j, we get Q = Q̃. 2
Remarks.
• Since {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-supermartingale, this process is a Q-martingale
if, for instance,
E^Q[ X^ϕ(T)/M(T) ] = X^ϕ(0).
• Obviously, it is sufficient to check whether {X^{ϕ^j}(t)/M(t)}_{t∈[0,T]} is a Q-martingale for
every j = 1, ..., m.
[28] Recall that the representation of Ito processes is unique up to indistinguishability.
5 Continuoustime Portfolio Optimization
5.1 Motivation
In the theory of option pricing the following problem is tackled:
Given: Payoﬀ C(T)
Wanted: Today’s price C(0) and hedging strategy ϕ
In the theory of portfolio optimization the converse problem is considered:
Given: Initial wealth X(0)
Wanted: Optimal final wealth X*(T) and optimal strategy ϕ*
Question: What is "optimal"?
1st Idea. Maximize expected final wealth, i.e. max_ϕ E[X^ϕ(T)].
Problem: St. Petersburg Paradox. Consider a game where a coin is flipped repeatedly. If
head occurs for the first time in the n-th round, then the player gets 2^{n−1} Euros, i.e. the
expected payoff is
(1/2)·1 + (1/4)·2 + (1/8)·4 + ... + (1/2^n)·2^{n−1} + ... = Σ_{n=1}^∞ 1/2 = ∞.
Hence, investors who maximize expected final wealth should be willing to pay ∞
Euros to play this game! Therefore, maximization of expected final wealth is usually
not a reasonable criterion.
Question: What is the reason for this result?
Answer: People are risk averse, i.e. if they can choose between
• 50 Euros or
• 100 Euros with probability 0.5 and 0 Euros with probability 0.5,
they choose the 50 Euros.
Bernoulli's idea. Measure payoffs in terms of a utility function U(x) = ln(x), i.e.
(1/2) ln(1) + (1/4) ln(2) + (1/8) ln(4) + ... + (1/2^n) ln(2^{n−1}) + ...
= Σ_{n=1}^∞ (1/2^n) ln(2^{n−1}) = ln(2) Σ_{n=1}^∞ (n − 1)/2^n = ln(2).
5 CONTINUOUSTIME PORTFOLIO OPTIMIZATION 69
Hence, an investor following this criterion would be willing to pay at most 2 Euros
to play this game.
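Both computations above can be reproduced numerically (a sketch; the truncation after 60 rounds is an arbitrary cut-off, harmless because the utility terms decay geometrically):

```python
import math

# St. Petersburg game: payoff 2**(n-1) with probability 2**(-n), n = 1, 2, ...
N = 60  # truncation of the series

# every expected-wealth term equals 1/2, so the wealth series diverges
expected_wealth_terms = [2**(-n) * 2**(n - 1) for n in range(1, N + 1)]

# expected log-utility converges to ln 2
expected_utility = sum(2**(-n) * math.log(2**(n - 1)) for n in range(1, N + 1))

print(expected_wealth_terms[:4])          # [0.5, 0.5, 0.5, 0.5]
print(expected_utility, math.log(2))      # both about 0.6931
print(math.exp(expected_utility))         # certainty equivalent: about 2 Euros
```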
This idea can be generalized: For a given initial wealth x_0 > 0, the investor
maximizes the following utility functional
sup_{π∈A} E[U(X^π(T))], (27)
where A := {π : π admissible and E[U(X^π(T))^−] < ∞} and
dX^π(t) = X^π(t) [ (r(t) + π(t)'(µ(t) − r(t)1)) dt + π(t)'σ dW(t) ], X^π(0) = x_0,
and the utility function U has the following properties:
Definition 5.1 (Utility Function)
A strictly concave, strictly increasing, and continuously differentiable function U :
(0, ∞) → IR is said to be a utility function if
U'(0) := lim_{x↓0} U'(x) = ∞, U'(∞) := lim_{x↑∞} U'(x) = 0.
Remarks.
• The investor's greed is reflected by U' > 0, i.e. loosely speaking: "More is better
than less".
• The investor's risk aversion is reflected by the concavity of U, which implies
that for any non-constant random variable Z, by Jensen's inequality,
E[U(Z)] < U(E[Z]),
i.e. loosely speaking: "A bird in the hand is worth two in the bush".
There exist two approaches to solve the dynamic optimization problem (27):
1. Martingale approach: Theorem 4.5 ensures that in a complete market any
claim can be replicated.
=⇒ Determine the optimal wealth via a static optimization problem and interpret
this wealth as a contingent claim.
2. Optimal control approach: We know that expectations can be represented as
solutions to PDEs (Feynman-Kac theorem).
=⇒ Represent the functional (27) as a PDE and solve the PDE.
Standing Assumption. We consider a Brownian market (6) with bounded coefficients.
5.2 Martingale Approach
Assumption in the Martingale Approach: The market is complete.
The martingale approach consists of a threestep procedure:
1st step. Compute the optimal wealth X*(T) by solving a static optimization
problem.
2nd step. Prove that the static optimization problem is equivalent to the dynamic
optimization problem (27).
3rd step. Determine the replication strategy for X*(T) which, in the context of
portfolio optimization, is said to be the optimal portfolio strategy π*.
1st step. Consider the constrained static optimization problem
sup_{A∈C} E[U(A)],
subject to the so-called budget constraint (BC)
E[H(T)A] ≤ x_0
and C := {A ≥ 0 : A is F_T-measurable and E[U(A)^−] < ∞}.
The BC states that the present value of terminal wealth cannot exceed the initial
wealth x_0 (see Theorem 4.5 (i)).
Heuristic solution. Pointwise Lagrangian
L(A, λ) := U(A) + λ(x_0 − H(T)A).
First-order condition (FOC):
U'(A) − λH(T) = 0,
i.e. A* = V(λH(T)), where V := (U')^{−1} : (0, ∞) → (0, ∞) is strictly decreasing.
Since the investor is greedy, the BC is satisfied as equality and we can choose λ*
such that
Γ(λ) := E[H(T)V(λH(T))] = x_0 (28)
leading to λ* = Λ(x_0) := Γ^{−1}(x_0).
2nd step. Verify that the heuristic solution is indeed the optimal solution.
Lemma 5.1 (Properties of Γ)
Assume Γ(λ) < ∞ for all λ > 0. Then Γ is
(i) strictly decreasing,
(ii) continuous on (0, ∞),
(iii) Γ(0) := lim_{λ↓0} Γ(λ) = ∞ and Γ(∞) := lim_{λ↑∞} Γ(λ) = 0.
Proof. (i) V is strictly decreasing on (0, ∞). Since H(t) > 0 for every t ∈ [0, T],
the function Γ is also strictly decreasing.
(ii) follows from the continuity of H and V as well as the dominated convergence
theorem. [29]
[29] To check continuity at λ_0, choose some λ_1 < λ_0. Then, due to (i), we can use Γ(λ_1) < ∞ as a majorant.
(iii) Since
lim_{y↓0} V(y) = ∞, lim_{y↑∞} V(y) = 0,
and V is strictly decreasing, the first result follows from the monotone convergence
theorem and the second from the dominated convergence theorem. 2
Remark. Hence, Λ(x) = Γ^{−1}(x) exists on (0, ∞) with Λ ≥ 0 and
Λ(0) := lim_{x↓0} Λ(x) = ∞, Λ(∞) := lim_{x↑∞} Λ(x) = 0.
Therefore, one can always find a λ such that (28) is satisfied as equality.
Lemma 5.2
For a utility function U with (U')^{−1} = V we get
U(V(y)) ≥ U(x) + y(V(y) − x) for 0 < x, y < ∞.
Proof. By concavity of U, a Taylor expansion leads to U(x_2) ≤ U(x_1) +
U'(x_1)(x_2 − x_1) and thus U(x_1) ≥ U(x_2) + U'(x_1)(x_1 − x_2), implying
U(V(y)) ≥ U(x) + U'(V(y))(V(y) − x) = U(x) + y(V(y) − x).
2
Theorem 5.3 (Optimal Final Wealth)
Assume x_0 > 0 and Γ(λ) < ∞ for all λ > 0. Then the optimal final wealth reads
A* = V(Λ(x_0) H(T))
and there exists an admissible optimal strategy π* ∈ A such that X^{π*}(0) = x_0 and
X^{π*}(T) = A*, P-a.s.
Proof. By definition of Λ(x_0), we get E[H(T)A*] = x_0. Besides, A* > 0 because
V > 0. Hence, A* satisfies all constraints and, by Theorem 4.5 (ii), there exists a
strategy π* replicating A*.
We now verify E[U(A*)^−] < ∞, i.e. π* ∈ A. Due to Lemma 5.2,
U(A*) ≥ U(1) + Λ(x_0)H(T)(A* − 1).
Therefore, [30]
E[U(A*)^−] ≤ |U(1)| + Λ(x_0) E[H(T)(A* + 1)] ≤ |U(1)| + Λ(x_0)(x_0 + E[H(T)]) < ∞.
Finally, we show that π* and A* are indeed optimal. Consider some π ∈ A. Due
to Lemma 5.2,
U(A*) ≥ U(X^π(T)) + Λ(x_0)H(T)(A* − X^π(T))
and thus
E[U(A*)] ≥ E[U(X^π(T))] + Λ(x_0) E[H(T)(A* − X^π(T))]
= E[U(X^π(T))] + Λ(x_0) (x_0 − E[H(T)X^π(T)]) ≥ E[U(X^π(T))],
where the last inequality follows from the budget constraint. This gives the desired result. 2
[30] a ≥ b implies a^− ≤ b^− ≤ |b|.
3rd step. To determine the optimal strategy π*, one can apply a similar method
as in Proposition 4.22 (ii). This is demonstrated in the following example.
Example "Power Utility with Constant Coefficients".
We make the following assumptions:
• Power utility function U(x) = (1/γ)x^γ with γ < 1, γ ≠ 0
• I = m = 1 ("one stock, one Brownian motion")
• The coefficients r, µ, σ are constant.
• σ > 0, i.e. the market is complete.
Since V(y) = (U')^{−1}(y) = y^{1/(γ−1)}, the optimal final wealth reads:
A* = V(Λ(x_0)H(T)) = (Λ(x_0)H(T))^{1/(γ−1)},
where
H(T)^{1/(γ−1)} = exp( (1/(1−γ)) rT + 0.5 (1/(1−γ)) θ²T + (1/(1−γ)) θW(T) ) (29)
and θ = (µ − r)/σ. On the other hand, the final wealth for a strategy π is given by
X^π(T) = x_0 exp( rT + ∫_0^T ( (µ − r)π_s − 0.5σ²π_s² ) ds + ∫_0^T σπ_s dW_s ). (30)
By comparing the diffusion parts of (29) and (30) we guess the optimal strategy:
π*(t) = (1/(1−γ)) (µ − r)/σ²
implying
X^{π*}(T) = x_0 exp( rT + (1/(1−γ)) θ²T − 0.5 (1/(1−γ))² θ²T + (1/(1−γ)) θW(T) ).
Since Γ(λ) = E[H(T)V(λH(T))] = λ^{1/(γ−1)} E[H(T)^{γ/(γ−1)}] = x_0, we get
Λ(x_0)^{1/(γ−1)} = x_0 exp( −(γ/(1−γ)) rT − 0.5 (γ/(1−γ)) θ²T − 0.5 (γ/(1−γ))² θ²T ).
Therefore,
A* = (Λ(x_0)H(T))^{1/(γ−1)} = x_0 exp( rT + 0.5θ²T − 0.5 (γ/(1−γ))² θ²T + (1/(1−γ)) θW(T) ).
It is easy to check that
(1/(1−γ)) − 0.5 (1/(1−γ))² = 0.5 − 0.5 (γ/(1−γ))².
Hence, A* = X^{π*}(T) and due to completeness π* is the unique optimal portfolio
strategy.
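The Merton ratio π* = (µ − r)/((1 − γ)σ²) can be checked numerically (a sketch; the market constants and γ = −1 are arbitrary choices). For a constant strategy π, X^π(T) is lognormal, so E[(1/γ)X^π(T)^γ] is available in closed form, and a grid search over constant strategies recovers π*:

```python
import math

# Merton problem with power utility U(x) = x**gamma / gamma and constant
# coefficients. For constant pi, X^pi(T) is lognormal, which gives the
# closed-form expected utility below.
x0, r, mu, sigma, gamma, T = 1.0, 0.02, 0.07, 0.25, -1.0, 1.0

def expected_utility(pi):
    # E[U(X^pi(T))] = (x0**gamma/gamma) * exp(gamma*T*(r + pi*(mu-r) - 0.5*(1-gamma)*sigma**2*pi**2))
    growth = r + pi * (mu - r) - 0.5 * (1.0 - gamma) * sigma**2 * pi**2
    return (x0**gamma / gamma) * math.exp(gamma * T * growth)

pi_star = (mu - r) / ((1.0 - gamma) * sigma**2)   # Merton ratio, here 0.4

# grid search over constant strategies confirms the maximizer
grid = [i / 1000.0 for i in range(-1000, 2001)]
pi_best = max(grid, key=expected_utility)
print(pi_star, pi_best)  # both 0.4
```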
5.3 Stochastic Optimal Control Approach
Assumption. The coefficients r, µ, σ are constant and I = m = 1.
Define the value function
G(t, x) := sup_{π∈A} E_{t,x}[U(X^π(T))],
where E_{t,x}[U(X^π(T))] := E[U(X^π(T)) | X(t) = x]. Note that G(T, x) = U(x).
5.3.1 Heuristic Derivation of the HJB
We start with a heuristic derivation of the so-called Hamilton-Jacobi-Bellman
equation (HJB), which is a PDE for G.
Assume that there exists an optimal portfolio strategy π*. We now carry out the
following procedure:
(i) For given (t, x) ∈ [0, T] × (0, ∞), we consider the following strategies over
[t, T]:
• Strategy I: Use the optimal strategy π*.
• Strategy II: For an arbitrary π use the strategy
π̂ = π on [t, t + ∆], π* on (t + ∆, T].
(ii) Compute E_{t,x}[U(X^{π*}(T))] and E_{t,x}[U(X^{π̂}(T))].
(iii) Using the fact that π* is at least as good as π̂ and taking the limit ∆ → 0
yields the HJB.
We set X* := X^{π*} and X̂ := X^{π̂}.
ad (ii).
Strategy I. By assumption, E_{t,x}[U(X*(T))] = G(t, x).
Strategy II.
E_{t,x}[U(X̂(T))] = E_{t,x}[ E_{t+∆, X̂(t+∆)}[U(X̂(T))] ] = E_{t,x}[ E_{t+∆, X̂(t+∆)}[U(X*(T))] ]
= E_{t,x}[G(t + ∆, X̂(t + ∆))] = E_{t,x}[G(t + ∆, X^π(t + ∆))],
for any admissible π.
ad (iii). By definition of the strategies,
E_{t,x}[G(t + ∆, X^π(t + ∆))] ≤ G(t, x), (31)
where we have equality if π = π*. Given that G ∈ C^{1,2}, Ito's formula yields
G(t + ∆, X^π(t + ∆)) = G(t, x) + ∫_t^{t+∆} G_t(s, X^π_s) ds + ∫_t^{t+∆} G_x(s, X^π_s) dX^π_s
+ 0.5 ∫_t^{t+∆} G_xx(s, X^π_s) d<X^π>_s
= G(t, x) + ∫_t^{t+∆} [ G_t(s, X^π_s) + G_x(s, X^π_s) X^π_s (r + π_s(µ − r))
+ 0.5 G_xx(s, X^π_s) (X^π_s)² π_s² σ² ] ds
+ ∫_t^{t+∆} G_x(s, X^π_s) X^π_s π_s σ dW_s.
Given E_{t,x}[ ∫_t^{t+∆} G_x(s, X^π_s) X^π_s π_s σ dW_s ] = 0, we can rewrite (31):
E_{t,x}[ ∫_t^{t+∆} ( G_t(s, X^π_s) + G_x(s, X^π_s) X^π_s (r + π_s(µ − r))
+ 0.5 G_xx(s, X^π_s) (X^π_s)² π_s² σ² ) ds ] ≤ 0.
Dividing by ∆ and taking the limit ∆ → 0 we get (note: X^π(t) = x)
G_t(t, x) + G_x(t, x) x (r + π_t(µ − r)) + 0.5 G_xx(t, x) x² π_t² σ² ≤ 0
for any admissible π. As above, we have equality for π = π*. Therefore, we get the
HJB
sup_π { G_t(t, x) + G_x(t, x) x (r + π(µ − r)) + 0.5 G_xx(t, x) x² π² σ² } = 0
with final condition G(T, x) = U(x).
Remarks.
• Since (t, x) was arbitrarily chosen, the HJB holds for all (t, x) ∈ [0, T] × (0, ∞).
• Note that we take the supremum over a number π and not over a process.
5.3.2 Solving the HJB
We consider the HJB for an investor with power utility U(x) = (1/γ)x^γ:
sup_π { G_t(t, x) + G_x(t, x) x (r + π(µ − r)) + 0.5 G_xx(t, x) x² π² σ² } = 0
with final condition G(T, x) = (1/γ)x^γ. Pointwise maximization over π yields the
following FOC:
G_x(t, x) x (µ − r) + G_xx(t, x) x² π(t) σ² = 0 (32)
implying
π*(t) = − [(µ − r)/σ²] · [G_x(t, x) / (x G_xx(t, x))].
Since the derivative of (32) w.r.t. π is G_xx(t, x) x² σ², the candidate π* is the global
maximum if G_xx < 0.
Substituting π* into the HJB yields a PDE for G:
G_t(t, x) + rx G_x(t, x) − 0.5 θ² G_x²(t, x)/G_xx(t, x) = 0
with G(T, x) = (1/γ)x^γ. We try the separation G(t, x) = (1/γ)x^γ f(t)^{1−γ} for some function
f with f(T) = 1 and obtain
f' = − (γ/(1−γ)) (r + 0.5 (1/(1−γ)) θ²) f =: −K f.
This is an ODE for f with the solution f(t) = exp(−K(T − t)). Hence, we get the
following candidates for the value function and the optimal strategy:
G(t, x) = (1/γ)x^γ exp( −(1 − γ)K(T − t) ), (33)
π*(t) = (1/(1−γ)) (µ − r)/σ².
Remark. Our candidate G is indeed a C^{1,2}-function. Since γ < 1, we also get
G_xx(t, x) = (γ − 1) x^{γ−2} f(t)^{1−γ} < 0.
5.3.3 Verification
In this section, we show that our heuristic derivation of the HJB has led to the
correct result.
Proposition 5.4 (Verification Result)
Assume that the coefficients are constant and that the investor has a power utility
function. Then the candidates (33) are the value function and the optimal strategy of
the portfolio problem (27), where for γ < 0 only bounded admissible strategies are
considered.
Proof. Let π ∈ A. Since G ∈ C^{1,2}, Ito's formula yields
G(T, X^π(T)) = G(t, x) + ∫_t^T G_t(s, X^π_s) ds + ∫_t^T G_x(s, X^π_s) dX^π_s
+ 0.5 ∫_t^T G_xx(s, X^π_s) d<X^π>_s
= G(t, x) + ∫_t^T [ G_t(s, X^π_s) + G_x(s, X^π_s) X^π_s (r + π_s(µ − r))
+ 0.5 G_xx(s, X^π_s) (X^π_s)² π_s² σ² ] ds + ∫_t^T G_x(s, X^π_s) X^π_s π_s σ dW_s
≤(∗) G(t, x) + ∫_t^T G_x(s, X^π_s) X^π_s π_s σ dW_s,
where (∗) holds because G satisfies the HJB.
The right-hand side is a local martingale in T. For γ > 0 we have G > 0 and thus
the right-hand side is even a supermartingale implying
E_{t,x}[ (1/γ)(X^π(T))^γ ] ≤ G(t, x), (34)
where we use G(T, x) = (1/γ)x^γ. Besides, since E_{t,x}[ ∫_t^T G_x(s, X*_s) X*_s π*_s σ dW_s ] = 0
(exercise) and for π*
G_t(s, X*_s) + G_x(s, X*_s) X*_s (r + π*_s(µ − r)) + 0.5 G_xx(s, X*_s) (X*_s)² (π*_s)² σ² = 0,
we obtain
E_{t,x}[ (1/γ)(X*(T))^γ ] = G(t, x),
i.e. G is the value function and π* is the optimal solution.
Since for a bounded strategy π we get E[ ∫_t^T G_x(s, X^π_s) X^π_s π_s σ dW_s ] = 0, the relation
(34) also holds for γ < 0 and bounded admissible strategies. 2
Remarks.
• For γ < 0 the boundedness assumption on π can be weakened.
• Note that we have not explicitly used the fact that the market is complete.
5.4 Intermediate Consumption
So far the investor has maximized expected utility from terminal wealth only. We
now consider a situation where the investor maximizes
• expected utility from terminal wealth as well as
• expected utility from intermediate consumption.
Assumption. At time t, the investor consumes at a rate c(t) ≥ 0, i.e. over the
interval [0, T] he consumes ∫_0^T c(t) dt.
Remark. Hence, in our model consumption can be interpreted as a continuous
stream of dividends (see Exercise 40).
To formalize the problem we need to generalize the concept of admissibility:
Definition 5.2 (Consumption Process, Self-financing, Admissible)
(i) A progressively measurable, real-valued process {c_t}_{t∈[0,T]} with c ≥ 0 and
∫_0^T c(t) dt < ∞ P-a.s. is said to be a consumption process.
(ii) Let ϕ be a trading strategy (see Definition 4.1) and c be a consumption process.
If
dX(t) = ϕ(t)'dS(t) − c(t) dt
holds, then the pair (ϕ, c) is said to be self-financing.
(iii) A self-financing pair (ϕ, c) is said to be admissible if X(t) ≥ 0 for all t ≥ 0.
Hence, the investor's goal function reads
sup_{(π,c)∈A} E[ ∫_0^T U_1(t, c(t)) dt + U_2(X^{π,c}(T)) ], (35)
where U_1(t, ·) is a utility function for fixed t ∈ [0, T],
A := { (π, c) : (π, c) admissible and E[ ∫_0^T U_1(t, c(t))^− dt + U_2(X^{π,c}(T))^− ] < ∞ }
and
dX^{π,c}(t) = X^{π,c}(t) [ (r(t) + π(t)'(µ(t) − r(t)1)) dt + π(t)'σ dW(t) ] − c(t) dt,
X^{π,c}(0) = x_0.
We set V_1 := (U_1′)^{−1} and V_2 := (U_2′)^{−1} (the inverses of the marginal utilities with respect to c and x, respectively) and define

Γ(λ) := E[ ∫_0^T H(t) V_1(t, λH(t)) dt + H(T) V_2(λH(T)) ].
It is straightforward to generalize the result of Section 5.2.

Theorem 5.5 (Optimal Consumption and Optimal Final Wealth)
Assume x_0 > 0 and Γ(λ) < ∞ for all λ > 0. Then the optimal consumption and the optimal final wealth read

c*(t) = V_1(t, Λ(x_0) H(t)),
A* = V_2(Λ(x_0) H(T)),

and there exists a portfolio strategy π* such that (π*, c*) ∈ 𝒜, X^{π*,c*}(0) = x_0, and X^{π*,c*}(T) = A*, P-a.s.
We will present the solution by using the stochastic control approach. The value function now reads

G(t, x) := sup_{(π,c)∈𝒜} E_{t,x}[ ∫_t^T U_1(s, c(s)) ds + U_2(X^{π,c}(T)) ].

As in Section 5.3 we make the following
Assumption. The coefficients r, µ, σ are constant and I = m = 1.
Assume that there exists an optimal strategy (π*, c*). We now carry out the procedure of Section 5.3:
(i) For given (t, x) ∈ [0, T] × (0, ∞), we consider the following strategies over [t, T]:
• Strategy I: Use the optimal strategy (π*, c*).
• Strategy II: For an arbitrary (π, c) use the strategy
(π̂, ĉ) = (π, c) on [t, t + ∆] and (π̂, ĉ) = (π*, c*) on (t + ∆, T].
(ii) Compute E_{t,x}[ ∫_t^T U_1(s, c*(s)) ds + U_2(X^{π*,c*}(T)) ] and E_{t,x}[ ∫_t^T U_1(s, ĉ(s)) ds + U_2(X^{π̂,ĉ}(T)) ].
(iii) Using the fact that (π*, c*) is at least as good as (π̂, ĉ) and taking the limit ∆ → 0 yields the HJB.
We set X* := X^{π*,c*} and X̂ := X^{π̂,ĉ}.
ad (ii).
Strategy I. By assumption, E_{t,x}[ ∫_t^T U_1(s, c*(s)) ds + U_2(X*(T)) ] = G(t, x).
Strategy II.

E_{t,x}[ ∫_t^T U_1(s, ĉ(s)) ds + U_2(X̂(T)) ]
= E_{t,x}[ E_{t+∆, X̂(t+∆)}[ ∫_t^T U_1(s, ĉ(s)) ds + U_2(X̂(T)) ] ]
= E_{t,x}[ ∫_t^{t+∆} U_1(s, ĉ(s)) ds + E_{t+∆, X̂(t+∆)}[ ∫_{t+∆}^T U_1(s, ĉ(s)) ds + U_2(X̂(T)) ] ]
= E_{t,x}[ ∫_t^{t+∆} U_1(s, c(s)) ds + E_{t+∆, X̂(t+∆)}[ ∫_{t+∆}^T U_1(s, c*(s)) ds + U_2(X*(T)) ] ]
= E_{t,x}[ ∫_t^{t+∆} U_1(s, c(s)) ds + G(t + ∆, X^{π,c}(t + ∆)) ]

for any admissible (π, c).
ad (iii). By definition of the strategies,

E_{t,x}[ ∫_t^{t+∆} U_1(s, c(s)) ds + G(t + ∆, X^{π,c}(t + ∆)) ] ≤ G(t, x),   (36)

where we have equality if (π, c) = (π*, c*). To shorten notations, we set X := X^{π,c}. Given that G ∈ C^{1,2}, Ito's formula yields

G(t + ∆, X_{t+∆}) = G(t, x) + ∫_t^{t+∆} G_t(s, X_s) ds + ∫_t^{t+∆} G_x(s, X_s) dX_s + 0.5 ∫_t^{t+∆} G_xx(s, X_s) d⟨X⟩_s
= G(t, x) + ∫_t^{t+∆} [ G_t(s, X_s) + G_x(s, X_s) X_s (r + π_s(µ − r)) − G_x(s, X_s) c_s + 0.5 G_xx(s, X_s) (X_s)^2 π_s^2 σ^2 ] ds + ∫_t^{t+∆} G_x(s, X_s) X_s π_s σ dW_s.
Given E_{t,x}[ ∫_t^{t+∆} G_x(s, X_s) X_s π_s σ dW_s ] = 0, we can rewrite (36):

E_{t,x}[ ∫_t^{t+∆} U_1(s, c_s) ds + ∫_t^{t+∆} { G_t(s, X_s) + G_x(s, X_s) X_s (r + π_s(µ − r)) − G_x(s, X_s) c_s + 0.5 G_xx(s, X_s) (X_s)^2 π_s^2 σ^2 } ds ] ≤ 0.
Dividing by ∆ and taking the limit ∆ → 0 we get (note: X(t) = x)

U_1(t, c_t) + G_t(t, x) + G_x(t, x) x (r + π_t(µ − r)) − G_x(t, x) c_t + 0.5 G_xx(t, x) x^2 π_t^2 σ^2 ≤ 0

for any admissible (π, c). As above, we have equality for (π, c) = (π*, c*). Therefore, we get the HJB

sup_{π,c} { U_1(t, c) + G_t(t, x) + G_x(t, x) x (r + π(µ − r)) − G_x(t, x) c + 0.5 G_xx(t, x) x^2 π^2 σ^2 } = 0   (37)

with final condition G(T, x) = U_2(x).
Remarks.
• Since (t, x) was arbitrarily chosen, the HJB holds for all (t, x) ∈ [0, T] × (0, ∞).
• Note that we take the supremum over real numbers (π, c), not over processes.
Example “Power Utility”. We consider the following power utility functions

U_1(t, c) = e^{−αt} (1/γ) c^γ,   U_2(x) = e^{−αT} (1/γ) x^γ,

where α ≥ 0 reflects the investor's time preferences. The HJB reads
sup_{π,c} { e^{−αt} (1/γ) c^γ + G_t(t, x) + G_x(t, x) x (r + π(µ − r)) − G_x(t, x) c + 0.5 G_xx(t, x) x^2 π^2 σ^2 } = 0

with final condition G(T, x) = e^{−αT} (1/γ) x^γ. The FOCs read
G_x(t, x) x (µ − r) + G_xx(t, x) x^2 π(t) σ^2 = 0,
e^{−αt} c^{γ−1} − G_x(t, x) = 0,

implying

π*(t) = −((µ − r)/σ^2) · G_x(t, x)/(x G_xx(t, x)),
c*(t) = e^{αt/(γ−1)} G_x(t, x)^{1/(γ−1)}.

Substituting (π*, c*) into the HJB yields
((1−γ)/γ) e^{αt/(γ−1)} G_x(t, x)^{γ/(γ−1)} + G_t(t, x) + r x G_x(t, x) − 0.5 θ^2 G_x(t, x)^2 / G_xx(t, x) = 0.

We try the separation G(t, x) = (1/γ) x^γ f^β(t) for some function f with f(T) = 1 and some constant β. Simplifying yields
((1−γ)/γ) e^{−αt/(1−γ)} f^{(β+γ−1)/(γ−1)} + (β/γ) f′ + ( r + 0.5 (1/(1−γ)) θ^2 ) f = 0.
Choosing β = 1 − γ leads to

((1−γ)/γ) e^{−αt/(1−γ)} + ((1−γ)/γ) f′ + ( r + 0.5 (1/(1−γ)) θ^2 ) f = 0,

which can be rewritten as

f′ = K f + g

with K := −(γ/(1−γ)) ( r + 0.5 (1/(1−γ)) θ^2 ) and g(t) := −e^{−αt/(1−γ)}. This is an inhomogeneous ODE for f. Variation of constants yields

f(t) = f_H(t) [ f(T) − ∫_t^T (g(s)/f_H(s)) ds ],
where f_H(t) = e^{−K(T−t)} is the solution of the corresponding homogeneous ODE (see Section 5.3). Hence,

f(t) = e^{−K(T−t)} [ 1 + ∫_t^T e^{−αs/(1−γ) + K(T−s)} ds ]    (the integral is ≥ 0)
= e^{−K(T−t)} [ 1 − (e^{KT} (1−γ)/(α + K(1−γ))) ( e^{−(α/(1−γ)+K)T} − e^{−(α/(1−γ)+K)t} ) ].
In particular, f > 0. Hence, we get the following candidates for the value function and the optimal strategy:

G(t, x) = (1/γ) x^γ f^{1−γ}(t),   (38)
π*(t) = (1/(1−γ)) (µ − r)/σ^2,
c*(t) = (X*(t)/f(t)) e^{−αt/(1−γ)}.
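The closed form for f can be checked numerically against the ODE f′ = Kf + g. The sketch below (all parameter values are illustrative assumptions) compares a central finite difference of f with Kf + g and verifies the terminal condition f(T) = 1:

```python
import math

# Check (under assumed sample parameters) that
#   f(t) = e^{-K(T-t)} [ 1 + int_t^T e^{-alpha s/(1-gamma) + K(T-s)} ds ]
# solves f' = K f + g with g(t) = -e^{-alpha t/(1-gamma)} and f(T) = 1.
mu, r, sigma, gamma, alpha, T = 0.08, 0.03, 0.2, 0.5, 0.1, 2.0
theta = (mu - r) / sigma
K = -gamma / (1 - gamma) * (r + 0.5 * theta**2 / (1 - gamma))
a = alpha / (1 - gamma)

def f(t):
    # the s-integral has the closed form used in the text (valid for a + K != 0)
    rate = a + K
    integral = math.exp(K * T) * (math.exp(-rate * t) - math.exp(-rate * T)) / rate
    return math.exp(-K * (T - t)) * (1.0 + integral)

h, t = 1e-6, 0.7
lhs = (f(t + h) - f(t - h)) / (2 * h)   # central difference for f'(t)
rhs = K * f(t) - math.exp(-a * t)       # K f(t) + g(t)
print(abs(lhs - rhs) < 1e-5, abs(f(T) - 1.0) < 1e-12)  # True True
```

The check is a sanity test of the algebra only; it does not replace the verification result below.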
Remarks.
• Our candidate G is indeed a C^{1,2} function. Since γ < 1, we also get G_xx(t, x) = (γ − 1) x^{γ−2} f^{1−γ}(t) < 0.
• Clearly, it needs to be shown that these candidates are indeed the value function and the optimal strategy! This can be done analogously to Proposition 5.4. For the convenience of the reader, the proof of this verification result is also presented.
Proposition 5.6 (Veriﬁcation Result)
Assume that the coeﬃcients are constant and that the investor has a power utility
function. Then the candidates (38) are the value function and the optimal strategy
of the portfolio problem (35), where for γ < 0 only bounded portfolio strategies π
are considered.
Proof. Let (π, c) ∈ 𝒜. Since G ∈ C^{1,2}, Ito's formula yields

G(T, X^{π,c}(T)) + ∫_t^T e^{−αs} (1/γ) c_s^γ ds
= G(t, x) + ∫_t^T G_t(s, X^{π,c}_s) ds + ∫_t^T G_x(s, X^{π,c}_s) dX^{π,c}_s + 0.5 ∫_t^T G_xx(s, X^{π,c}_s) d⟨X^{π,c}⟩_s + ∫_t^T (1/γ) e^{−αs} c_s^γ ds
= G(t, x) + ∫_t^T [ G_t(s, X^{π,c}_s) + G_x(s, X^{π,c}_s) X^{π,c}_s (r + π_s(µ − r)) − G_x(s, X^{π,c}_s) c_s + 0.5 G_xx(s, X^{π,c}_s) (X^{π,c}_s)^2 π_s^2 σ^2 + (1/γ) e^{−αs} c_s^γ ] ds + ∫_t^T G_x(s, X^{π,c}_s) X^{π,c}_s π_s σ dW_s.   (39)
It is straightforward to show that E_{t,x}[ ∫_t^T G_x(s, X*_s) X*_s π*_s σ dW_s ] = 0 (exercise). Besides, G satisfies the HJB, i.e. for (π, c) = (π*, c*)

G_t(s, X*_s) + G_x(s, X*_s) X*_s (r + π*_s(µ − r)) − G_x(s, X*_s) c*_s + 0.5 G_xx(s, X*_s) (X*_s)^2 (π*_s)^2 σ^2 + (1/γ) e^{−αs} (c*_s)^γ = 0,
we obtain

E_{t,x}[ G(T, X*(T)) ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) (c*_s)^γ ds ] = G(t, x),   (40)

implying

E_{t,x}[ e^{−αT} (1/γ) X*(T)^γ ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) (c*_s)^γ ds ] = G(t, x)

because G(T, x) = e^{−αT} (1/γ) x^γ.
Finally, we need to show that for all (π, c) ∈ 𝒜

E_{t,x}[ e^{−αT} (1/γ) X^{π,c}(T)^γ ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x).
Since G satisfies the HJB, for all (π, c) ∈ 𝒜 we get

G_t(s, X^{π,c}_s) + G_x(s, X^{π,c}_s) X^{π,c}_s (r + π_s(µ − r)) − G_x(s, X^{π,c}_s) c_s + 0.5 G_xx(s, X^{π,c}_s) (X^{π,c}_s)^2 π_s^2 σ^2 + (1/γ) e^{−αs} c_s^γ ≤ 0.
Therefore, (39) yields

G(T, X^{π,c}(T)) + ∫_t^T e^{−αs} (1/γ) c_s^γ ds ≤ G(t, x) + ∫_t^T G_x(s, X^{π,c}_s) X^{π,c}_s π_s σ dW_s   (41)
and now two cases are distinguished.
1st case: γ > 0. The right-hand side of (41) is a local martingale in T. Since for γ > 0 the left-hand side is greater than zero, it is even a supermartingale, implying

E_{t,x}[ G(T, X^{π,c}(T)) ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x)   (42)

and thus

E_{t,x}[ e^{−αT} (1/γ) X^{π,c}(T)^γ ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x).

2nd case: γ < 0. Since for a bounded strategy π we obtain E[ ∫_t^T G_x(s, X^π_s) X^π_s π_s σ dW_s ] = 0, the result of the 1st case also holds for γ < 0 and bounded admissible strategies. □
Remark. If we set c ≡ 0, the previous proof is identical to the proof of Proposition 5.4.

5.5 Infinite Horizon
If the optimal portfolio decision over the lifetime of an investor is to be determined, then the investment horizon lies very far in the future. For this reason, it is reasonable to approximate this situation by a portfolio problem with T = ∞.
Hence, the investor's goal function reads

sup_{(π,c)∈𝒜} E[ ∫_0^∞ U_1(t, c(t)) dt ],   (43)

where

𝒜 := { (π, c) : (π, c) admissible and E[ ∫_0^∞ U_1(t, c(t))^− dt ] < ∞ }

and

dX^{π,c}(t) = X^{π,c}(t) [ (r(t) + π(t)′(µ(t) − r(t)1)) dt + π(t)′ σ dW(t) ] − c(t) dt,   X^{π,c}(0) = x_0.
As in the previous section we make the following
Assumption 1. The coefficients r, µ, σ are constant and I = m = 1.
Additionally, we make the
Assumption 2. U_1(t, c(t)) = e^{−αt} U(c(t)), where α ≥ 0 and U is a utility function.
The value function is given by

G(t, x) := sup_{(π,c)∈𝒜} E_{t,x}[ ∫_t^∞ e^{−αs} U(c(s)) ds ].
Lemma 5.7
The value function has the representation

G(t, x) = e^{−αt} H(x),

where H is a function depending on wealth only.

Proof. We obtain

G(t, x) e^{αt} = sup_{(π,c)∈𝒜} E_{t,x}[ ∫_t^∞ e^{−α(s−t)} U(c(s)) ds ] = sup_{(π,c)∈𝒜} E_{0,x}[ ∫_0^∞ e^{−αs} U(c(s)) ds ] =: H(x). □
Hence, using G(t, x) = e^{−αt} H(x), the HJB (37) can be rewritten as follows:

sup_{π,c} { e^{−αt} U(c) + G_t(t, x) + G_x(t, x) x (r + π(µ − r)) − G_x(t, x) c + 0.5 G_xx(t, x) x^2 π^2 σ^2 }
= e^{−αt} sup_{π,c} { U(c) − α H(x) + H_x(x) x (r + π(µ − r)) − H_x(x) c + 0.5 H_xx(x) x^2 π^2 σ^2 } = 0.
Example “Power Utility”. Let U(c) = (1/γ) c^γ. Then the FOCs read

H_x(x) x (µ − r) + H_xx(x) x^2 π(t) σ^2 = 0,
c^{γ−1} − H_x(x) = 0,

implying

π*(t) = −((µ − r)/σ^2) · H_x(x)/(x H_xx(x)),
c*(t) = H_x(x)^{1/(γ−1)}.
Substituting (π*, c*) into the HJB leads to

((1−γ)/γ) H_x(x)^{γ/(γ−1)} − α H(x) + r x H_x(x) − 0.5 θ^2 H_x(x)^2 / H_xx(x) = 0.

We try the ansatz H(x) = (1/γ) x^γ k^β for some constants k and β. Simplifying yields

((1−γ)/γ) k^{β/(γ−1)} − α/γ + r + 0.5 (1/(1−γ)) θ^2 = 0.

Choosing β = γ − 1 leads to

k = (γ/(1−γ)) ( α/γ − r − 0.5 (1/(1−γ)) θ^2 ).
Our ansatz is only reasonable if k ≥ 0. For γ < 0 this is valid in any case. For γ > 0 we need to assume that

α ≥ γ ( r + 0.5 (1/(1−γ)) θ^2 ).   (44)

Hence, we get the following candidates for the value function and the optimal strategy:

G(t, x) = (1/γ) e^{−αt} x^γ k^{γ−1},   (45)
π*(t) = (1/(1−γ)) (µ − r)/σ^2,
c*(t) = k X*(t).
Remark. In contrast to the previous section, the investor now consumes a time-independent, constant fraction of his wealth.

Proposition 5.8 (Verification Result)
Assume that the coefficients are constant and that the investor has a power utility function, where for γ < 0 only bounded portfolio strategies π are considered. If (44) holds as a strict inequality, i.e.

α > γ ( r + 0.5 (1/(1−γ)) θ^2 ),   (46)

then the candidates (45) are the value function and the optimal strategy of the portfolio problem (43).
Proof. Let (π, c) ∈ 𝒜. Analogously to the proof of Proposition 5.6, for the candidate G given by (45) we get the relations (42) and (40):

E_{t,x}[ G(T, X^{π,c}(T)) ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x),
E_{t,x}[ G(T, X*(T)) ] + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) (c*_s)^γ ds ] = G(t, x).
Firstly note that, by the monotone convergence theorem,³¹ for any consumption process c

lim_{T→∞} E_{t,x}[ ∫_t^T e^{−αs} c_s^γ ds ] = E_{t,x}[ ∫_t^∞ e^{−αs} c_s^γ ds ].   (47)
Secondly, since G(T, X*(T)) = (1/γ) e^{−αT} X*(T)^γ k^{γ−1}, we get

E_{t,x}[ G(T, X*(T)) ] = (1/γ) k^{γ−1} e^{−αT} E_{t,x}[ X*(T)^γ ]
= (1/γ) k^{γ−1} e^{−αT} x^γ E_{t,x}[ e^{ γ { r + (1/(1−γ)) θ^2 − 0.5 (1/(1−γ)^2) θ^2 − k }(T−t) + (γ/(1−γ)) θ W(T−t) } ]
= (1/γ) k^{γ−1} x^γ e^{ −αT + γ { r + (1/(1−γ)) θ^2 − 0.5 (1/(1−γ)^2) θ^2 − k + 0.5 (γ/(1−γ)^2) θ^2 }(T−t) }
= (1/γ) k^{γ−1} x^γ e^{−αt} e^{−k(T−t)} −→ 0 as T → ∞,   (48)

since (46) ensures that k > 0. Therefore,

E_{t,x}[ ∫_t^∞ e^{−αs} (1/γ) (c*_s)^γ ds ] = G(t, x).
To prove

E_{t,x}[ ∫_t^∞ e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x)   (49)

two cases are distinguished.
1st case: γ > 0. Since in this case G(T, X^{π,c}(T)) ≥ 0, relation (49) follows immediately from (42).
2nd case: γ < 0. Since in this case G < 0, the proof is more involved. From (42) it follows that
sup_{π,c} { E_{t,x}[ G(T, X^{π,c}(T)) ] } + E_{t,x}[ ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] ≤ G(t, x)

and we obtain the following estimate:

0 ≥ sup_{π,c} { E_{t,x}[ G(T, X^{π,c}(T)) ] }
= sup_{π,c} { E_{t,x}[ (1/γ) e^{−αT} X^{π,c}(T)^γ ] } k^{γ−1}
≥(∗) sup_{π,c} { E_{t,x}[ (1/γ) e^{−αT} X^{π,c}(T)^γ + ∫_t^T e^{−αs} (1/γ) c_s^γ ds ] } k^{γ−1}
≥ sup_π { E_{t,x}[ (1/γ) e^{−αT} X^{π,0}(T)^γ ] } k^{γ−1}
= sup_π { E_{t,x}[ (1/γ) X^{π,0}(T)^γ ] } e^{−αT} k^{γ−1}
=(+) (1/γ) x^γ e^{ γ ( r + 0.5 (1/(1−γ)) θ^2 )(T−t) } e^{−αT} k^{γ−1} −→(46) 0 as T → ∞,

where (∗) is valid because ∫_t^T e^{−αs} (1/γ) c_s^γ ds ≤ 0 and (+) follows from Proposition 5.4. This establishes (49). □
Remarks.
• The relation

lim_{T→∞} E_{t,x}[ G(T, X^{π,c}(T)) ] = 0

is the so-called transversality condition.

³¹ We emphasize that 1/γ is deliberately omitted because without this factor the argument of the expected value is positive.
• For γ > 0 the property (48) can be proved by using a different argument:

0 ≤ E_{t,x}[ G(T, X*(T)) ] = E_{t,x}[ (1/γ) X*(T)^γ ] e^{−αT} k^{γ−1}
≤ E_{t,x}[ (1/γ) X^{π*,0}(T)^γ ] e^{−αT} k^{γ−1}
≤ sup_π { E_{t,x}[ (1/γ) X^{π,0}(T)^γ ] } e^{−αT} k^{γ−1}
=(+) (1/γ) x^γ e^{ γ ( r + 0.5 (1/(1−γ)) θ^2 )(T−t) } e^{−αT} k^{γ−1} −→(46) 0 as T → ∞,

where (+) follows from Proposition 5.4. Applying this argument one can avoid some tedious calculations.
• As in Proposition 5.4, the boundedness assumption for γ < 0 can be weakened.
References
Bingham, N. H.; Kiesel, R. (2004): Risk-neutral valuation, 2nd ed., Springer, Heidelberg.
Björk, T. (2004): Arbitrage theory in continuous time, 2nd ed., Oxford University Press, Oxford.
Dax, A. (1997): An elementary proof of Farkas' lemma, SIAM Review 39, 503-507.
Duffie, D. (2001): Dynamic asset pricing theory, 3rd ed., Princeton University Press, Princeton.
Karatzas, I.; Shreve, S. E. (1991): Brownian motion and stochastic calculus, 2nd ed., Springer, New York.
Karatzas, I.; Shreve, S. E. (1999): Methods of mathematical finance, Springer, New York.
Korn, R. (1997): Optimal portfolios, World Scientific, Singapore.
Korn, R.; Korn, E. (1999): Optionsbewertung und Portfolio-Optimierung, Vieweg, Braunschweig/Wiesbaden.
Korn, R.; Korn, E. (2001): Option pricing and portfolio optimization: modern methods of financial mathematics, AMS, Providence, Rhode Island.
Mangasarian, O. L. (1969): Nonlinear programming, New York.
Merton, R. C. (1990): Continuous-time finance, Basil Blackwell, Cambridge MA.
Protter, P. (2005): Stochastic integration and differential equations, 2nd ed., Springer, New York.
. Two important observations: (i) If an investor buys all ADSs. (ii) In an arbitragefree market. (ii) u0 < E(u1 ): 2. • The deﬂator H is the appropriate discount factor for payoﬀs at time t = 1. EP [H] EP (u1 ) which gives the following result: (i) u0 > E(u1 ) (”‘time preference”’):3 r > 0. • Setting u0 := ∂u/∂c0 and u1 := ∂u/∂c1 leads to r= 1 u0 −1= − 1.1. ∂u/∂c0 where u = u(c0 . . . one needs to discount with the riskless rate r because I EP [H] = i=1 P (ωi ) · π(ωi ) = P (ωi ) I π(ωi ) = i=1 1 . we get I πi = i=1 1 1+r I ⇒ i=1 πi · (1 + r) = 1.2 DISCRETETIME STATE PRICING WITH FINITE STATE SPACE 10 Proof.4 Riskneutral Measure r < 0. he gets one Euro at time t = 1 for sure. it holds that πi > 0. 3 Intuitively. . time preference describes the preference for present consumption over future consumption. c1 ) is the utility function of the representative investor. • On average.e. I. i. I I Sj = i=1 πi · Xij = i=1 P (ωi ) π(ωi ) π Xj = EP [H · Xj ] Xj (ωi ) = EP P (ωi ) P 2 Remarks. • The deﬂator H ist a random variable with realizations π(ωi )/P (ωi ). i = 1. 1+r Interpretation of a Deﬂator: • In an equilibrium model one can show that the deﬂator equals the marginal rate of substitution between consumption at time t = 0 and t = 1 of a representative investor. (roughly speaking) we have H= ∂u/∂c1 . .e. i.
r.6 (RadonNikodym) If Q is continuous w.e. R R 2 Remarks. let us recall some basic deﬁnitions: Deﬁnition 2. • Roughly speaking. there is a third representation of an asset price: Proposition 2.4 (Riskneutral Measure) In an arbitragefree market. • Warning: The riskneutral measure Q is diﬀerent from the physical measure P . there exists a positive random variable D = dQ such that dP EQ [Y ] = EP [D · Y ] for any random variable Y . Proof.t. both measures possess the same null sets.e.t. P . riskneutral probabilities are compounded ArrowDebreu securities.t.). Using the riskneutral measure.r. Theorem 2.5 (Riskneutral Pricing) In an arbitragefree market. Q. R Sj = i=1 πi · Xij = i=1 πi · R · Xij = R I qi · i=1 Xij Xj = EQ .2 DISCRETETIME STATE PRICING WITH FINITE STATE SPACE 11 Proposition 2. i. If Q is continuous w. asset prices possess the following representation Sj = E Q where R := 1 + r. To motivate the notion “equivalent martingale measure”. another probability measure P if for any event A ⊂ Ω P (A) = 0 =⇒ Q(A) = 0. • The notion “riskneutral measure” comes from the fact that under this measure prices are discounted expected values where one needs to discount using the riskless interest rate. P and vice versa.r. Proof.7 (Equivalent Probability Measures) A Probability Measure Q is said to be continuous w. then Q possesses a density w.t.r. This density is uniquely determined (P a. I I 2 Xj . i. follows from (i) and (ii).s. . there exists a probability measure Q deﬁned via Q(ωi ) = qi := πi · (1 + r) This measure is said to be the riskneutral measure or equivalent martingale measure. then P and Q are said to be equivalent.
1 forms a stochastic process. =dQ/dP Xj = Sj = EP [H · Xj ]. If the assumptions (a)(d) are satisﬁed. R i.1. Q (short: Qmartingale) if EQ [Y (1)] = Y (0). we have Q(ωi ) > 0 and P (ωi ) > 0 which gives the desired result. 2 Remarks. 2. • From Propositions 2. The ﬁnite sequence {Y (t)}t=0. . (ii) The physical measure P and a riskneutral measure Q are equivalent. A realvalued stochastic process is said to be a martingale w. I.5.3 and 2.5 Summary Theorem 2. H = (dQ/dP )/R.e.r.2 DISCRETETIME STATE PRICING WITH FINITE STATE SPACE 12 Deﬁnition 2.t. (ii) There exist strictly positive ArrowDebreu prices πi . we get EQ Therefore. i = 1.8 (No Arbitrage) Consider a ﬁnancial market with price vector S and payoﬀ matrix X. then the following statements are equivalent: (i) The ﬁnancial market contains no arbitrage opportunities (assumption (e)).8 (Stochastic Process in a Singleperiod Model) Consider a singleperiod model with time points t = 0. (ii) Since ADPs are strictly positive. 1. We can now characterize riskneutral measures as follows: Proposition 2.7 is the reason why Q is said to be an equivalent martingale measure. • Proposition 2.5. . .9 (Martingale in a Singleperiod Model) Consider a singleperiod model with ﬁnite state space. Deﬁnition 2. (i) follows from Proposition 2. where it is assumed that EQ [Y (1)] < ∞.7 (Properties of a Riskneutral Measure) (i) The discounted price processes {Sj . Proof. such that I Sj = i=1 πi · Xij . we obtain EQ [Xj ] = EP [ H · R ·Xj ]. . . Analogously. Xj /R} are martingales under a riskneutral measure Q. one can show dP/dQ = 1/(H · R). Uncertainty at time t = 1 is modeled by some probability measure Q.
(iii) There exists a state-dependent discount factor H, the so-called deflator, such that for all j = 0, ..., J

  S_j = E_P[H · X_j].

(iv) There exists an equivalent martingale measure Q such that the discounted price processes {S_j, X_j/R} are martingales under Q, i.e. for all j = 0, ..., J

  S_j = E_Q[ X_j / R ].

Theorem 2.9 (Completeness) Consider a financial market with price vector S and payoff matrix X. If the assumptions (a)-(e) are satisfied, then the following statements are equivalent:
(i) The financial market is complete, i.e. each additional asset can be replicated.
(ii) The Arrow-Debreu prices π_i, i = 1, ..., I, are uniquely determined.
(iii) There exists only one deflator H.
(iv) There exists only one equivalent martingale measure Q.

Remarks.
• In an arbitrage-free and complete market, the price of any additional asset is given by the price of its replicating portfolio.
• The value of this portfolio can be calculated by applying the following theorem.

Theorem 2.10 (Pricing of Contingent Claims) Consider an arbitrage-free and complete market with price vector S and payoff matrix X. Let C ∈ R^I be the payoff of a contingent claim. Its today's price S_C can be calculated by using one of the following formulas:
(i) S_C = C'π = Σ_{i=1}^I π_i · C(ω_i)  (pricing with Arrow-Debreu prices),
(ii) S_C = E_P[H · C]  (pricing with the deflator),
(iii) S_C = E_Q[ C / R ]  (pricing with the equivalent martingale measure).

Proof. The no-arbitrage requirement together with the definition of an ADP gives formula (i). Formulas (ii) and (iii) follow from (i) using H(ω_i) = π_i / P(ω_i) and Q(ω_i) = π_i · R. □
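The three formulas of Theorem 2.10 give the same price, and this is easy to verify numerically. Below is a sketch with made-up state prices, physical probabilities, and a call-like payoff; the point is only that (i), (ii), and (iii) agree.

```python
import numpy as np

# Two-state market, hypothetical numbers.
r = 0.05
R = 1 + r
q  = np.array([0.4, 0.6])     # risk-neutral probabilities
pi = q / R                    # Arrow-Debreu prices, pi_i = q_i / R
P  = np.array([0.7, 0.3])     # physical probabilities
C  = np.array([21.0, 0.0])    # claim payoff in the two states

H = pi / P                    # deflator, H(omega_i) = pi_i / P(omega_i)

price_ads      = pi @ C               # (i)   pricing with ADPs
price_deflator = P @ (H * C)          # (ii)  E_P[H * C]
price_q        = (q @ C) / R          # (iii) E_Q[C / R]

assert np.allclose([price_ads, price_deflator], price_q)
```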
2.2 Multiperiod Model

In the previous section we have modeled only two points in time:

  t = 0: The investor chooses the trading strategy.
  t = T: The investor liquidates his portfolio and receives the payoffs.

This has the following consequences:
• We have disregarded the information flow between 0 and T.
• Every trading strategy was a static strategy ("buy and hold"), i.e. intermediate portfolio adjustments were not allowed.

In this section
• we will model the information flow between 0 and T, → filtration
• and the investor can adjust his portfolio according to the arrival of new information about prices. → dynamic (self-financing) trading strategy

2.2.1 Modeling Information Flow

Motivation. Consider the dynamics of a stock in a two-period binomial model:

  t = 0       t = 0.5      t = 1
  100    →    110     →    121
                      →     99
         →     90     →     99
                      →     81

• At t = 0 it is possible that the stock price at time t = 1 equals 81, 99, or 121.
• If, however, the investor already knows that at time t = 0.5 the stock price equals 110 (90), then at time t = 1 the stock price can only take the values 121 or 99 (99 or 81).
• Due to the additional trading date at time t = 0.5, the investor gets more information about the stock price at time t = 1.
• It is one of the goals of this section to model this additional piece of information!

Reference Example. To demonstrate how one can model the information flow, we consider the following example:
  t = 0                  t = 1           t = 2
  {ω1, ω2, ω3, ω4}  →   {ω1, ω2}   →   {ω1}
                                    →   {ω2}
                     →   {ω3, ω4}   →   {ω3}
                                    →   {ω4}

The state space thus equals: Ω = {ω1, ω2, ω3, ω4}.

Example for the interpretation of the states:
  ω1: DAX stands at 5000 at t = 1 and at 6000 at time t = 2.
  ω2: DAX stands at 5000 at t = 1 and at 4000 at time t = 2.
  ω3: DAX stands at 3000 at t = 1 and at 4000 at time t = 2.
  ω4: DAX stands at 3000 at t = 1 and at 2000 at time t = 2.

First Approach to Model the Flow of Information

Definition 2.10 (Partition) A partition of the state space is a set {A_1, ..., A_K} of subsets A_k ⊂ Ω of a state space Ω such that
• the subsets A_k are disjoint, i.e. A_k ∩ A_l = ∅ for k ≠ l,
• Ω = A_1 ∪ ... ∪ A_K.

Definition 2.11 (Information Structure) An information structure is a sequence {F_t}_t of partitions F_0, ..., F_T such that
• F_0 = {{ω_1, ..., ω_I}}  ("At the beginning each state can occur."),
• the partitions become finer over time,
• F_T = {{ω_1}, ..., {ω_I}}  ("At the end, one knows for sure which state has occurred.").

Example. F_0 = {{ω1, ω2, ω3, ω4}}, F_1 = {{ω1, ω2}, {ω3, ω4}}, F_2 = {{ω1}, {ω2}, {ω3}, {ω4}}.

Remarks.
• Problem: This intuitive definition cannot be generalized to the continuous-time case.
• However, it provides a good intuition of what information flow actually means.
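The two defining properties of an information structure (each F_t is a partition, and the partitions refine over time) can be expressed directly in code. A minimal sketch using the four-state example above, with states written as 1, ..., 4:

```python
# The example's information structure as lists of Python sets.
F0 = [{1, 2, 3, 4}]
F1 = [{1, 2}, {3, 4}]
F2 = [{1}, {2}, {3}, {4}]

def is_partition(blocks, omega):
    """Blocks are disjoint and cover the whole state space."""
    union = set().union(*blocks)
    total = sum(len(b) for b in blocks)
    return union == omega and total == len(omega)

def refines(finer, coarser):
    """Every block of the finer partition sits inside one coarser block."""
    return all(any(b <= c for c in coarser) for b in finer)

omega = {1, 2, 3, 4}
assert all(is_partition(F, omega) for F in (F0, F1, F2))
assert refines(F1, F0) and refines(F2, F1)   # partitions become finer over time
```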
Second Approach to Model the Flow of Information: Filtration

Definition 2.12 (σ-Field) A system F of subsets of the state space Ω is said to be a σ-field if
(i) Ω ∈ F,
(ii) A ∈ F ⇒ A^C ∈ F,
(iii) for every sequence {A_n}_{n∈N} with A_n ∈ F we have ∪_{n=1}^∞ A_n ∈ F.

Remark. For a finite σ-field, it is sufficient if finite (instead of infinite) unions of events belong to the σ-field (see requirement (iii) of Definition 2.12).

Example. F_0 = {∅, Ω}, F_1 = {∅, {ω1, ω2}, {ω3, ω4}, Ω}, F_2 = P(Ω).

Definition 2.13 (Filtration) Consider a σ-field F. A family {F_t}_t of sub-σ-fields F_t, t ∈ I, is said to be a filtration if F_s ⊂ F_t for s ≤ t and s, t ∈ I. Here I denotes some ordered index set.

Some facts about filtrations:
• Filtrations are used to model the flow of information over time.
• At time t, one can decide if the event A ∈ F_t has occurred or not.
• Taking the conditional expectation E[Y | F_t] of a random variable Y means that one takes the expectation on the basis of all information available at time t.

Remark. In our discrete-time model, we can always use filtrations with the following properties (P(Ω) denotes the power set of Ω):
• F_0 = {∅, Ω}  ("At the beginning, one has no information."),
• F_0 ⊂ F_1 ⊂ ... ⊂ F_T  ("The state of information increases over time."),
• F_T = P(Ω)  ("At the end one is fully informed.").

2.2.2 Description of the Multiperiod Model

Properties of the Model:
• Trading takes place at T + 1 dates, t = 0, ..., T.
• Investors can trade in J + 1 assets with price processes (S_0(t), ..., S_J(t)).
• S_j(t) denotes the price of the jth asset at time t.
• The range of S_j(t) is finite.
• Trading of an investor is modeled via his trading strategy ϕ(t) = (ϕ_0(t), ..., ϕ_J(t))', where ϕ_j(t) denotes the number of asset j at time t in the investor's portfolio.
• A trading strategy needs to be predictable, i.e. on the basis of all information about prices S(t−1) at time t−1 the investor selects his portfolio, which he holds until time t.
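In the finite-state setting, the conditional expectation E[Y | F_t] given a partition is just the probability-weighted average of Y over the block containing the current state. A sketch, using the four-state example and the stock prices from the binomial tree (uniform physical probabilities are an assumption for illustration):

```python
# Conditional expectation given a partition: on each block, replace Y by its
# probability-weighted average over that block.
def cond_exp(Y, partition, P):
    out = {}
    for block in partition:
        pb = sum(P[w] for w in block)
        avg = sum(P[w] * Y[w] for w in block) / pb
        for w in block:
            out[w] = avg
    return out

P = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
Y = {1: 121.0, 2: 99.0, 3: 99.0, 4: 81.0}   # stock price at the final date
F1 = [{1, 2}, {3, 4}]

E1 = cond_exp(Y, F1, P)
assert E1[1] == E1[2] == 110.0              # constant on blocks: F1-measurable
assert E1[3] == E1[4] == 90.0
```

The result is constant on each block of the partition, which is exactly what measurability w.r.t. F_t means here.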
• The value of a trading strategy at time t is given by:

  V_ϕ(t) = S(t)'ϕ(t) = Σ_{j=0}^J S_j(t)ϕ_j(t)   for t = 1, ..., T,
  V_ϕ(0) = S(0)'ϕ(1) = Σ_{j=0}^J S_j(0)ϕ_j(1)   for t = 0.

Definition 2.14 (Contingent Claim) A contingent claim leads to a state-dependent payoff C(t) at time t, which is measurable w.r.t. F_t.

Remarks.
• A typical contingent claim is a European call with payoff C(T) = max{S(T) − K, 0}, where S(T) denotes the value of the underlying at time T and K the strike price.
• American options are no contingent claims in the sense of Definition 2.14, but one can generalize this definition accordingly.

• Recall the following paradigm: Arbitrage-free pricing of a contingent claim means finding a hedging (replication) strategy of the corresponding payoff.
• In the single-period model: The price of the replication strategy is then the price of the contingent claim.
• In the multiperiod model we need an additional requirement: The replication strategy should not require intermediate inflows or outflows of cash.
• Only in this case, the initial costs are equal to the fair price of the contingent claim.
• This leads to the notion of a self-financing trading strategy.

Definition 2.15 (Self-financing Trading Strategy) A trading strategy ϕ(t) = (ϕ_0(t), ..., ϕ_J(t))' is said to be self-financing if for t = 1, ..., T−1

  S(t)'ϕ(t) = S(t)'ϕ(t+1)  ⇐⇒  Σ_{j=0}^J S_j(t)·ϕ_j(t) = Σ_{j=0}^J S_j(t)·ϕ_j(t+1).

Remarks.
• To buy assets, the investor has to sell other ones.
• The portfolio value changes only if prices change and not if the numbers of assets in the portfolio change.

Notation:
• Change of the value of strategy ϕ at time t:  ΔV_ϕ(t) := V_ϕ(t) − V_ϕ(t−1).
• Price changes at time t:  ΔS(t) = (ΔS_0(t), ..., ΔS_J(t))' := (S_0(t) − S_0(t−1), ..., S_J(t) − S_J(t−1))'.
• Gains process of the trading strategy ϕ at time t:

  ϕ(t)'ΔS(t) = Σ_{j=0}^J ϕ_j(t)ΔS_j(t).

• Cumulative gains process of the trading strategy ϕ at time t:

  G_ϕ(t) := Σ_{τ=1}^t ϕ(τ)'ΔS(τ) = Σ_{τ=1}^t Σ_{j=0}^J ϕ_j(τ)ΔS_j(τ).

Proposition 2.11 (Characterization of a Self-financing Trading Strategy) A self-financing trading strategy ϕ(t) has the following properties:
(i) The value of ϕ changes only due to price changes, i.e.

  ΔV_ϕ(t) = ϕ(t)'ΔS(t).

(ii) The value of ϕ is equal to its initial value plus the cumulative gains, i.e.

  V_ϕ(t) = V_ϕ(0) + G_ϕ(t) = V_ϕ(0) + Σ_{τ=1}^t ϕ(τ)'ΔS(τ).

Proof. (i) For a self-financing strategy ϕ, we obtain

  ΔV_ϕ(t) = V_ϕ(t) − V_ϕ(t−1)
          = Σ_{j=0}^J S_j(t)ϕ_j(t) − Σ_{j=0}^J S_j(t−1)ϕ_j(t−1)
     (*)  = Σ_{j=0}^J S_j(t)ϕ_j(t) − Σ_{j=0}^J S_j(t−1)ϕ_j(t)
          = Σ_{j=0}^J ϕ_j(t)(S_j(t) − S_j(t−1))
          = Σ_{j=0}^J ϕ_j(t)ΔS_j(t) = ϕ(t)'ΔS(t).

The equality (*) is valid because ϕ is self-financing.

(ii) Any (not necessarily self-financing) trading strategy satisfies

  V_ϕ(t) = V_ϕ(0) + Σ_{τ=1}^t ΔV_ϕ(τ).

Since ϕ is assumed to be self-financing, we obtain by (i)

  V_ϕ(t) = V_ϕ(0) + Σ_{τ=1}^t ΔV_ϕ(τ) = V_ϕ(0) + Σ_{τ=1}^t ϕ(τ)'ΔS(τ) = V_ϕ(0) + G_ϕ(t). □
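Proposition 2.11(ii) is an exact path-by-path identity, which makes it easy to test in simulation. The sketch below builds a bond/stock path with made-up dynamics, picks an arbitrary predictable stock position, and chooses the bond position at each date so that the rebalancing is self-financing; the wealth then equals the initial value plus cumulative gains.

```python
import numpy as np

rng = np.random.default_rng(0)

# One bond and one stock over T periods (hypothetical dynamics).
T = 5
S = np.empty((T + 1, 2))
S[:, 0] = 1.05 ** np.arange(T + 1)                          # bond
S[:, 1] = 100 * np.cumprod(np.r_[1.0, 1 + 0.1 * rng.standard_normal(T)])

# Arbitrary predictable stock position; phi1[t] is held over (t-1, t].
phi1 = rng.uniform(0.0, 1.0, size=T + 1)
phi0 = np.empty(T + 1)
phi0[1] = (100.0 - phi1[1] * S[0, 1]) / S[0, 0]             # initial value 100
for t in range(1, T):
    # choose phi0[t+1] so that S(t)' phi(t) = S(t)' phi(t+1)
    wealth = phi0[t] * S[t, 0] + phi1[t] * S[t, 1]
    phi0[t + 1] = (wealth - phi1[t + 1] * S[t, 1]) / S[t, 0]

# Check V(t) = V(0) + sum_{tau<=t} phi(tau)' Delta S(tau)  (Proposition 2.11)
V = np.array([phi0[t] * S[t, 0] + phi1[t] * S[t, 1] for t in range(1, T + 1)])
gains = np.cumsum([phi0[t] * (S[t, 0] - S[t - 1, 0]) +
                   phi1[t] * (S[t, 1] - S[t - 1, 1]) for t in range(1, T + 1)])
assert np.allclose(V, 100.0 + gains)
```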
i.17 (Replication of a Contingent Claim) A contingent claim with payoﬀ C(T ) is said to be replicable if there exists a selfﬁnancing trading strategy ϕ such that C(T ) = Vϕ (T ). Remarks.2 DISCRETETIME STATE PRICING WITH FINITE STATE SPACE 19 2 Remarks.2.e. the relation (i) is still valid if ∆ is replace by d. at every trading date the hedge portfolio needs to be adjusted according to the arrival of new information about prices. 2. the intuitive relation S(t) ϕ(t) = S(t) ϕ(t + 1) has no continuoustime equivalent. replication strategies are dynamic strategies.16 (Arbitrage) We distinguish two kinds of arbitrage opportunities: (i) A free lunch is a selfﬁnancing trading strategy ϕ with Vϕ (0) < 0 und Vϕ (T ) ≥ 0.3 Some Basic Deﬁnitions Revisited The notion of a selfﬁnancing trading strategy is also important to generalize the following deﬁnitions to the multiperiod setting: Deﬁnition 2. • In continuoustime models. Deﬁnition 2. The trading strategy ϕ is said to be a replication (hedging) strategy for the claim C(T ). . (ii) A free lottery is a selfﬁnancing trading strategy ϕ with Vϕ (0) = 0 und Vϕ (T ) ≥ 0 und Vϕ (T ) = 0. The deﬁnition of a complete market remains the same: Deﬁnition 2. a replication strategy of a claim leads to the same payoﬀ as the claim. • Unfortunately.18 (Completeness) An arbitragefree market is said to be complete if each contingent claim can be replicated. • In general. • By deﬁnition.
2.2.4 Assumptions

(a) All investors agree upon which states cannot occur at times t > 0.
(b) All investors agree upon the range of the price processes S(t), t = 1, ..., T.
(c) At time t = 0 there is only one market price for each security, and each investor can buy and sell as many securities as desired without changing the market price (price taker).
(d) Frictionless markets, i.e.
  • no transaction costs,
  • no taxes,
  • no restrictions on short sales,(4)
  • assets are divisible.
(e) The asset market contains no arbitrage opportunities, i.e. there are neither free lunches nor free lotteries.

2.2.5 Main Results

Conventions:
• The 0th asset is a money market account, i.e. it is locally riskfree with

  S_0(0) = 1  and  S_0(t) = Π_{u=1}^t (1 + r_u),  t = 1, ..., T,

where r_u denotes the interest rate for the period [u−1, u].
• The discounted (normalized) price processes are defined by Z_j(t) := S_j(t) / S_0(t).
• The money market account is thus used as a numeraire.

Definition 2.19 (Martingale) A real-valued adapted process {Y(t)}_t, t = 0, ..., T, with E[|Y(t)|] < ∞ is a martingale w.r.t. a probability measure Q if for s ≤ t

  E_Q[Y(t) | F_s] = Y(s).

Remark. A martingale is thus a process that is on average stationary.

(4) A short sale of an asset is equivalent to selling an asset one does not own. An investor can do this by borrowing the asset from a third party and selling the borrowed security. The net position is a cash inflow equal to the price of the security and a liability (borrowing) of the security to the third party. In abstract mathematical terms, a short sale corresponds to buying a negative amount of a security.
• A multiperiod model consists of a sequence of single-period models.
• Therefore, a multiperiod model is arbitrage-free if all single-period models are arbitrage-free (see, e.g., Bingham/Kiesel (2004), Chapter 4).

For these reasons, the following three theorems hold:

Theorem 2.12 (Characterization of No Arbitrage) If assumptions (a)-(d) are satisfied, the following statements are equivalent:
(i) The financial market contains no arbitrage opportunities (assumption (e)).
(ii) There exists a state-dependent discount factor H, the so-called deflator, such that for all time points s, t ∈ {0, ..., T} with s ≤ t and all assets j = 0, ..., J we have

  S_j(s) = E_P[H(t)S_j(t) | F_s] / H(s),

i.e. all prices can be represented as discounted expectations under the physical measure.
(iii) There exists a risk-neutral probability measure Q such that all discounted price processes Z_j(t) are martingales under Q, i.e. for all time points s, t ∈ {0, ..., T} with s ≤ t and all assets j = 0, ..., J we have

  Z_j(s) = E_Q[Z_j(t) | F_s]  ⇐⇒  S_j(s)/S_0(s) = E_Q[ S_j(t)/S_0(t) | F_s ].

Remarks.
• The equivalence "No Arbitrage ⇔ Existence of Q" is known as the Fundamental Theorem of Asset Pricing.
• Unfortunately, in continuous-time models this equivalence breaks down.
• However, the implication "Existence of Q ⇒ No Arbitrage" is still valid.

Theorem 2.13 (Pricing of Contingent Claims) Consider a financial market satisfying assumptions (a)-(e) and a contingent claim with payoff C(T). Then we obtain for s, t ∈ {0, ..., T} with s ≤ t:
(i) The replication strategy ϕ for the claim is unique, and the arbitrage-free price of the claim is uniquely determined by C(s) = V_ϕ(s).
(ii) Using the deflator, we can calculate the claim's time-s price:

  C(s) = E_P[H(t)C(t) | F_s] / H(s).

(iii) The following risk-neutral pricing formula holds:

  C(s) = S_0(s) · E_Q[ C(t)/S_0(t) | F_s ].
Remarks.
• Note that one needs to discount with the money market account.
• From (iii) we get

  C(s)/S_0(s) = E_Q[ C(t)/S_0(t) | F_s ],

i.e. the discounted price processes of contingent claims are Q-martingales as well.
• Result (i) says that today's price of a claim is uniquely determined by its replication costs, C(0) = V_ϕ(0).
• If interest rates are constant, then from (iii)

  C(0) = E_Q[ C(T)/(1 + r)^T ] = E_Q[C(T)] / (1 + r)^T.

Theorem 2.14 (Completeness) Consider a financial market satisfying assumptions (a)-(e). Then the following statements are equivalent:
(i) The market is complete.
(ii) There exists a unique deflator H.
(iii) There exists a unique martingale measure Q.

2.2.6 Summary

The following results are important:
• It is always valid that a financial market contains no arbitrage opportunities if an equivalent martingale measure exists.
• In our discrete model, the converse is also valid: If the market contains no arbitrage opportunities, there exists a martingale measure.
• If a martingale measure Q exists, the discounted price processes are Q-martingales.
• If a unique martingale measure exists, the market is arbitrage-free and complete.
• In an arbitrage-free and complete market, today's price of a contingent claim is given by the following Q-expectation:

  C(0) = E_Q[ C(T)/S_0(T) ].

• Pricing of contingent claims boils down to computing expectations under the risk-neutral measure.
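As a concrete multiperiod illustration, consider a T-period binomial model with constant interest rate (parameters below are hypothetical). The price C(0) = E_Q[C(T)]/(1+r)^T computed directly from the terminal distribution must agree with backward induction through the tree, since the discounted claim price is a Q-martingale.

```python
from math import comb

# T-period binomial model (hypothetical parameters): S -> S*u or S*d each period.
S0, u, d, r, K, T = 100.0, 1.1, 0.9, 0.05, 100.0, 3
R = 1 + r
q = (R - d) / (u - d)                     # risk-neutral up-probability
assert 0 < q < 1                          # no arbitrage: d < R < u

# C(0) = E_Q[C(T)] / (1 + r)^T, summing over the binomial terminal distribution
payoff = lambda s: max(s - K, 0.0)
C0 = sum(comb(T, k) * q**k * (1 - q)**(T - k) * payoff(S0 * u**k * d**(T - k))
         for k in range(T + 1)) / R**T

# Same price by backward induction (discounted prices are Q-martingales)
values = [payoff(S0 * u**k * d**(T - k)) for k in range(T + 1)]
for t in range(T, 0, -1):
    values = [(q * values[k + 1] + (1 - q) * values[k]) / R for k in range(t)]
assert abs(values[0] - C0) < 1e-10
```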
3 Introduction to Stochastic Calculus

3.1 Motivation

3.1.1 Why Stochastic Calculus?

• In the previous section, the investor was able to trade at finitely many dates t = 0, ..., T only.
• In a continuous-time model, trading takes place continuously, i.e. at any date in the time period [0, T].
• Consequently, price dynamics are modeled continuously.
• In continuous-time models asset dynamics are modeled via stochastic differential equations.

Example "Black-Scholes Model". Two investment opportunities:
• money market account: riskless asset with constant interest rate r, modeled via an ordinary differential equation (ODE):

  dM(t) = M(t)r dt,  M(0) = 1.

• stock: risky asset modeled via a stochastic differential equation (SDE):

  dS(t) = S(t)[µ dt + σ dW(t)],  S(0) = s_0,

where W is a Brownian motion.

In this section we answer the following questions:
Q1 What is a Brownian motion?
Q2 What is the meaning of dW(t)? → Stochastic Integral
Q3 What are stochastic differential equations and do explicit solutions exist? → Variation of Constants
Q4 What do the dynamics of contingent claims look like? → Ito Formula

3.1.2 What is a Reasonable Model for Stock Dynamics?

Model for the Money Market Account

The solution of the ODE B'(t) = r · B(t) with B(0) = b_0 is

  B(t) = b_0 exp(rt)
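Before the formal treatment, the two dynamics can be simulated with a naive Euler discretisation (the parameter values below are illustrative assumptions). The deterministic account reproduces exp(rt) up to discretisation error, while the stock path picks up random increments scaled by sqrt(dt).

```python
import numpy as np

rng = np.random.default_rng(42)

# Euler discretisation of the Black-Scholes dynamics (a sketch; the
# parameter values are made up for illustration).
r, mu, sigma, T, n = 0.03, 0.07, 0.2, 1.0, 1000
dt = T / n
dW = np.sqrt(dt) * rng.standard_normal(n)   # Brownian increments ~ N(0, dt)

M = np.empty(n + 1); M[0] = 1.0
S = np.empty(n + 1); S[0] = 100.0
for k in range(n):
    M[k + 1] = M[k] + M[k] * r * dt                       # dM = M r dt
    S[k + 1] = S[k] + S[k] * (mu * dt + sigma * dW[k])    # dS = S(mu dt + sigma dW)

# The deterministic account matches exp(rT) up to discretisation error
assert abs(M[-1] - np.exp(r * T)) < 1e-4
assert (S > 0).all()
```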
and thus

  ln(B(t)) = ln(b_0) + r · t.

Model for the Stock

The result for the money market account motivates the following model for the stock dynamics:

  ln(S(t)) = ln(s_0) + α · t + "randomness".

Notation: Y(t) = "randomness", i.e.

  Y(t) = ln(S(t)) − ln(s_0) − α · t.

Reasonable properties of Y:
1) no tendency, i.e. E[Y(t)] = 0,
2) increasing with time t, i.e. Var(Y(t)) increases with t,
3) no memory, i.e. for Y(t) = Y(s) + [Y(t) − Y(s)] the difference [Y(t) − Y(s)]
  • depends only on t − s and
  • is independent of Y(s).

Properties which would make life easier:
4) normally distributed, i.e. Y(t) ∼ N(0, σ²t), σ > 0,
5) continuous paths.

3.2 On Stochastic Processes

Definition 3.1 (Stochastic Process) A stochastic process X defined on (Ω, F) is a collection of random variables {X_t}_{t∈I}. Here I denotes some ordered index set.

Remark. Unless otherwise stated, our stochastic processes take values in the measurable space (R^n, B(R^n)), i.e. the n-dimensional Euclidean space equipped with the σ-field of Borel sets. Therefore, every X_t is (F, B(R^n))-measurable, i.e. σ{X_t} ⊂ F.

3.2.1 Cadlag and Caglad

Definition 3.2 (Cadlag and Caglad Functions) Let f : [0, T] → R^n be a function with existing left and right limits, i.e. for each t ∈ [0, T] the limits

  f(t−) = lim_{s↑t} f(s),  f(t+) = lim_{s↓t} f(s)

exist. The function f is cadlag if f(t) = f(t+) and caglad if f(t) = f(t−).

A cadlag function cannot jump around too wildly, as the next proposition shows.
Proposition 3.1
(i) A cadlag function f can have at most a countable number of discontinuities, i.e. the set {t ∈ [0, T] : f(t) ≠ f(t−)} is countable.
(ii) For any ε > 0, the number of discontinuities on [0, T] larger than ε is finite.

Proof. See Fristedt/Gray (1997). □

3.2.2 Modification, Indistinguishability, Measurability

Recall the following definition:

Definition 3.3 (Filtration) Consider a σ-field F. A family {F_t} of sub-σ-fields F_t, t ∈ I, is said to be a filtration if F_s ⊂ F_t for s ≤ t and s, t ∈ I. Here I denotes some ordered index set.

Definition 3.4 (Adapted Process) A process X is said to be adapted to the filtration {F_t}_{t∈I} if every X_t is measurable w.r.t. F_t, i.e. σ{X_t} ⊂ F_t.

Remarks.
• The notation {X_t, F_t}_{t∈I} indicates that the process X is adapted to {F_t}_{t∈I}.
• For fixed ω ∈ Ω, the realization {X_t(ω)}_t is said to be a path of X. For this reason, a stochastic process can be interpreted as a function-valued random variable.

Definition 3.5 (Modification, Indistinguishability) Let X and Y be stochastic processes.
(i) Y is a modification of X if, for all t ≥ 0,

  P(ω : X_t(ω) = Y_t(ω)) = 1.

(ii) X and Y are indistinguishable if

  P(ω : X_t(ω) = Y_t(ω) for all t ≥ 0) = 1,

i.e. X and Y have the same paths P-a.s.

Example. Consider a positive random variable τ with a continuous distribution. Set X_t ≡ 0 and Y_t = 1_{t=τ}. Then Y is a modification of X, but P(X_t = Y_t for all t ≥ 0) = 0.

Proposition 3.2
(i) If X and Y are indistinguishable, then Y is a modification of X and vice versa.
(ii) If X and Y have right-continuous paths and Y is a modification of X, then X and Y are indistinguishable.
(iii) Let X be adapted to {F_t}. If Y is a modification of X and F_0 contains all P-null sets of F, then Y is adapted to {F_t}.
Proof. (i) is obvious.
(ii) Since Y is a modification of X, we have

  P(ω : X_t(ω) = Y_t(ω) for all t ∈ [0, ∞) ∩ Q) = 1.

By the right-continuity assumption, the claim follows.
(iii) Exercise. □

A stochastic process X is actually a function-valued random variable. However, for technical reasons, it is often convenient to have some joint measurability properties.

Definition 3.6 (Measurable Stochastic Process) A stochastic process is said to be measurable if for each A ∈ B(R^n) the set {(t, ω) : X_t(ω) ∈ A} belongs to the product σ-field B([0, ∞)) ⊗ F, i.e. if the mapping

  (t, ω) → X_t(ω) : ([0, ∞) × Ω, B([0, ∞)) ⊗ F) → (R^n, B(R^n))

is measurable.

Definition 3.7 (Progressively Measurable Stochastic Process) A stochastic process is said to be progressively measurable w.r.t. the filtration {F_t} if for each t ≥ 0 and A ∈ B(R^n) the set {(s, ω) ∈ [0, t] × Ω : X_s(ω) ∈ A} belongs to the product σ-field B([0, t]) ⊗ F_t, i.e. if the mapping

  (s, ω) → X_s(ω) : ([0, t] × Ω, B([0, t]) ⊗ F_t) → (R^n, B(R^n))

is measurable for each t ≥ 0.

Remark. Any progressively measurable process is measurable and adapted (Exercise). The following proposition provides the extent to which the converse is true.

Proposition 3.3 If a stochastic process is measurable and adapted to the filtration {F_t}, then it has a modification which is progressively measurable w.r.t. {F_t}.

Proof. See Karatzas/Shreve (1991, p. 5). □

Remark. If X is right- or left-continuous, then X is at least measurable, but not necessarily adapted to {F_t}. Fortunately, if a stochastic process is left- or right-continuous and adapted, a stronger result can be proved:

Proposition 3.4 If the stochastic process X is adapted to the filtration {F_t} and right- or left-continuous, then X is also progressively measurable w.r.t. {F_t}.

Proof. See Karatzas/Shreve (1991, p. 5). □
3.2.3 Stopping Times

Consider a measurable space (Ω, F). An F-measurable random variable τ : Ω → I ∪ {∞}, where I = [0, ∞) or I = N, is said to be a random time. Two special classes of random times are of particular importance.

Definition 3.8 (Stopping Time, Optional Time)
(i) A random time is a stopping time w.r.t. the filtration {F_t}_{t∈I} if the event {τ ≤ t} ∈ F_t for every t ∈ I.
(ii) A random time is an optional time w.r.t. the filtration {F_t}_{t∈I} if the event {τ < t} ∈ F_t for every t ∈ I.

Proposition 3.5 If τ is optional and c > 0 is a positive constant, then τ + c is a stopping time.

Proof. Exercise. □

Definition 3.9 (Continuity of Filtrations)
(i) A filtration {G_t} is said to be right-continuous if G_t = G_{t+} := ∩_{ε>0} G_{t+ε}.
(ii) A filtration {G_t} is said to be left-continuous if G_t = G_{t−} := σ( ∪_{s<t} G_s ).

Remarks. Let X be a stochastic process.
• Loosely speaking, left-continuity of {F_t^X} means that X_t can be guessed by observing X_s, 0 ≤ s < t.
• Right-continuity means intuitively that if X_s has been observed for 0 ≤ s ≤ t, then one cannot learn more by peeking infinitesimally far into the future.
In our applications, we will always work with right-continuous filtrations.(5)

Proposition 3.6 Every stopping time is optional, and both concepts coincide if the filtration is right-continuous.

Proof. Consider a stopping time τ. Then {τ < t} = ∪_{t>ε>0} {τ ≤ t − ε} and {τ ≤ t − ε} ∈ F_{t−ε} ⊂ F_t, i.e. τ is optional. On the other hand, consider an optional time τ w.r.t. a right-continuous filtration {F_t}. Since {τ ≤ t} = ∩_{t+ε>u>t} {τ < u}, we have {τ ≤ t} ∈ F_{t+ε} for every ε > 0, implying {τ ≤ t} ∈ ∩_{ε>0} F_{t+ε} = F_{t+} = F_t. □

Remark. Propositions 3.5 and 3.6 show how both concepts are connected.

(5) See Definition 3.18.
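The defining property of a stopping time, that {τ ≤ t} is decided by the history up to time t, is easy to see in discrete time. A sketch with a first-passage time of a deterministic toy path: the answer computed from the truncated path always agrees with the answer computed from the full path.

```python
# In discrete time, the first time a path reaches a level is a stopping time:
# the event {tau <= t} is decided by the path up to time t alone.
def tau(path, level):
    for t, x in enumerate(path):
        if x >= level:
            return t
    return float("inf")                     # never hit: tau = infinity

path = [0, 1, 0, 1, 2, 3, 2]

# {tau <= t} computed from the truncated path agrees with the full-path answer
for t in range(len(path)):
    full = tau(path, 2) <= t
    truncated = tau(path[:t + 1], 2) <= t
    assert full == truncated
```

The hitting time of an open set for a merely right-continuous process fails exactly this property at the boundary, which is why it is in general only optional (Proposition 3.11 below makes this precise).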
Proposition 3.7
(i) Let τ_1 and τ_2 be stopping times. Then τ_1 ∧ τ_2 = min{τ_1, τ_2}, τ_1 ∨ τ_2 = max{τ_1, τ_2}, τ_1 + τ_2, and ατ_1 with α ≥ 1 are stopping times.
(ii) Let {τ_n}_{n∈N} be a sequence of optional times. Then the random times

  sup_{n≥1} τ_n,  inf_{n≥1} τ_n,  lim sup_{n→∞} τ_n,  lim inf_{n→∞} τ_n

are all optional. Furthermore, if the τ_n's are stopping times, then so is sup_{n≥1} τ_n.

Proof. Exercise. □

Proposition and Definition 3.8 (Stopping Time σ-Field) Let τ be a stopping time of the filtration {F_t}. Then the set

  F_τ := {A ∈ F : A ∩ {τ ≤ t} ∈ F_t for all t ≥ 0}

is a σ-field, the so-called stopping time σ-field.

Proof. Exercise. □

Proposition 3.9 (Properties of a Stopped σ-Field) Let τ_1 and τ_2 be stopping times.
(i) If τ_1 = t ∈ [0, ∞), then F_{τ_1} = F_t.
(ii) If τ_1 ≤ τ_2, then F_{τ_1} ⊂ F_{τ_2}.
(iii) F_{τ_1∧τ_2} = F_{τ_1} ∩ F_{τ_2}, and the events {τ_1 < τ_2}, {τ_1 ≤ τ_2}, {τ_2 < τ_1}, {τ_2 ≤ τ_1}, {τ_1 = τ_2} belong to F_{τ_1} ∩ F_{τ_2}.

Definition 3.10 (Stopped Process)
(i) If X is a stochastic process and τ is a random time, we define the function X_τ on the event {τ < ∞} by

  X_τ(ω) := X_{τ(ω)}(ω).

If X_∞(ω) is defined for all ω ∈ Ω, then X_τ can also be defined on Ω by setting X_τ(ω) := X_∞(ω) on {τ = ∞}.
(ii) The process stopped at time τ is the process {X_{t∧τ}}_{t∈[0,∞)}.

Proposition 3.10 (Measurability of a Stopped Process)
(i) If the process X is measurable and the random time τ is finite, then the function X_τ is a random variable.
(ii) If the process X is progressively measurable w.r.t. {F_t}_{t∈[0,∞)} and τ is a stopping time of this filtration, then the random variable X_τ, defined on the set {τ < ∞} ∈ F_τ, is F_τ-measurable, and the stopped process {X_{τ∧t}, F_t}_{t∈[0,∞)} is progressively measurable.
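The stopped process of Definition 3.10(ii) is simply the original path frozen from time τ onwards. A minimal discrete-time sketch:

```python
# The stopped process X_{t ^ tau} agrees with X before tau and is frozen after.
def stop(path, tau):
    return [path[min(t, tau)] for t in range(len(path))]

X = [0, 1, 3, 2, 5, 4]
tau = 2                                     # e.g. first time the path exceeds 2

Xs = stop(X, tau)
assert Xs == [0, 1, 3, 3, 3, 3]
```

This is exactly the identity X_{t∧τ} = X_t·1_{t<τ} + X_τ·1_{t≥τ} from the remark on page 29, evaluated index by index.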
Proof. See Karatzas/Shreve (1991, p. 5 and p. 9). □

Remark. The following result is obvious: If X is adapted and cadlag and τ is a stopping time, then the stopped process

  X_{t∧τ} = X_t · 1_{t<τ} + X_τ · 1_{t≥τ}

is also adapted and cadlag.

Definition 3.11 (Hitting Time) Let X be a stochastic process and let A ∈ B(R^n) be a Borel set in R^n. Define

  τ_A(ω) = inf{t > 0 : X_t(ω) ∈ A}.

Then τ_A is called a hitting time of A for X.

Proposition 3.11 Let X be a right-continuous process adapted to the filtration {F_t}.
(i) If A° is an open set, then the hitting time of A° is an optional time.
(ii) If A^c is a closed set, then the random time

  τ̃_{A^c}(ω) = inf{t > 0 : X_t(ω) ∈ A^c or X_{t−}(ω) ∈ A^c}

is a stopping time.

Proof. (i) For simplicity let n = 1, i.e. suppose that X is real-valued. We have

  {τ_{A°} < t} = ∪_{s ∈ Q∩[0,t)} {X_s ∈ A°}.

The inclusion ⊃ is obvious. Suppose that the inclusion ⊂ does not hold. Then there exist ω and i ∈ [0, t) \ Q with X_i ∈ A°, but X_q ∉ A° for all q ∈ Q with q < t. Since A° is open, there exists a δ > 0 such that (X_i − δ, X_i + δ) ⊂ A°. Moreover, there exists a sequence (q_n)_{n∈N} with q_n ↓ i and q_n ∈ Q ∩ [0, ∞). Then X_{q_n} ∉ (X_i − δ, X_i + δ), but X_{q_n} → X_i, which is a contradiction to the right-continuity of X.
(ii) See Karatzas/Shreve (1991, p. 39) and Protter (2005, p. 5). □

Remarks.
• If the filtration is right-continuous, then the hitting time τ_{A°} is a stopping time.
• Even if X is continuous, in general the hitting time of A° is not a stopping time. For example, let A° = (1, ∞) and suppose that for some ω we have X_t(ω) ≤ 1 for t ≤ 1 and X_1(ω) = 1. Without looking slightly ahead of time 1, we cannot tell whether or not τ_{A°} = 1.
• If X is continuous, then the stopping time τ̃_{A^c} is a hitting time of A^c.

3.3 Prominent Examples of Stochastic Processes

Theorem and Definition 3.12 (Brownian Motion) (i) A Brownian motion is an adapted, real-valued process {W_t, F_t}_{t∈[0,∞)} such that W_0 = 0 a.s. and
e. Proof. 3.. .3 INTRODUCTION TO STOCHASTIC CALCULUS 30 • Wt − Ws ∼ N (0. 7 A random variable Y taking values in I ∪ {0} is Poisson distributed with parameter λ > 0 if N P (Y = n) = e−λ λn /(n!).53ﬀ). Proof. 2 Remark.∞) adapted to the the ﬁltration {Ft }t∈[0. t ≤ T. Proof. 47ﬀ).∞) is called a martingale (resp. . . submartingale) with respect to {Ft }t∈[0. pp. Applying Kolmogorov’s extension theorem. Then there exists a continuous modiﬁcation of X. See. i. See Karatzas/Shreve (1991. . and β = 1.. n ∈ I . p.4 Martingales Deﬁnition 3. Ft }t[0. Karatzas/Shreve (1991. integervalued cadlag process N = {Nt .14 (Continuous Version of the Brownian Motion) An ndimensional Brownian motion possesses a continuous modiﬁcation.g. Wn ) is an ndimensional vector of independent onedimensional Brownian motions. E[Xt ] < ∞. We will always work with this continuous version.13 (Martingale. Deﬁnition 3. One can show that E[Wt − Ws 4 ] = n(n + 2)t − s2 . the diﬀerence Nt − Ns is independent of Fs and is Poisson distributed with mean λ(t − s). we can apply Kolmogorov’s continuity theorem with α = 4. (ii) An ndimensional Brownian motion W := (W1 . 6 Kolmogorov’s extension theorem basically says that for given (ﬁnitedimensional) distributions satisfying a certain consistency condition there exists a probability space and a stochastic process such that its (ﬁnitedimensional) distributions coincide with the given distributions.e. 50). Submartingale) A process X = {Xt }t∈[0. See.∞) if (i) Xt ∈ L1 (dP ). pp.7 0 ≤ s. supermartingale.6 it is straightforward to prove the existence of a Brownian motion. C such that E[Xt − Xs α ] ≤ C · t − s1+β . e.13 (Kolmogorov’s Continuity Theorem) Suppose that the process X = {Xt }t∈[0. 2 Theorem 3.12 (Poisson Process) A Poisson process with intensity λ > 0 is an adapted. N . β.∞) satisﬁes the following condition: For all T > 0 there exist positive constants α. C = n(n + 2). Karatzas/Shreve (1991.g. 
t − s) for 0 ≤ s < t • Wt − Ws independent of Fs for 0 ≤ s < t “stationary increments” “independent increments” exists and is said to be a onedimensional Brownian motion.∞) such that N0 = 0. Corollary 3. and for 0 ≤ t < ∞. Hence. Supermartingale.
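The moment bound used in the proof of Corollary 3.14 can be checked by Monte Carlo for n = 1, where n(n+2) = 3: an increment W_t − W_s ∼ N(0, t − s) has variance t − s and fourth moment 3(t − s)². A seeded sketch with loose tolerances:

```python
import numpy as np

rng = np.random.default_rng(7)

# For a one-dimensional Brownian motion, W_t - W_s ~ N(0, t - s), so
# E[(W_t - W_s)^4] = 3 (t - s)^2  (the case n = 1 of n(n+2)|t-s|^2).
t_minus_s = 0.5
increments = np.sqrt(t_minus_s) * rng.standard_normal(1_000_000)

m2 = np.mean(increments**2)
m4 = np.mean(increments**4)
assert abs(m2 - t_minus_s) < 5e-3            # variance t - s
assert abs(m4 - 3 * t_minus_s**2) < 2e-2     # fourth moment 3 (t - s)^2
```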
µ.r.s.t. and a supermartingale for µ < 0. 2 Proposition 3.∞) be a rightcontinuous martingale.∞) and let ϕ : I n → I be a convex function such that E[ϕ(Xt )] < ∞ R R for all t ∈ [0. Then ϕ(X) is a submartingale w. in addition. E[Xt Fs ] ≤ Xs . For any X ∈ M2 and 0 ≤ t < ∞. . . If. An important question now arises: What happens to the martingale property of a stopped martingale? For bounded stopping times.15 (Functions of Submartingales) (i) Let X = (X1 . (ii) X − Y  = Y − X.∞) be a rightcontinuous submartingale (supermartingale) and let τ1 and τ2 be bounded stopping times of the ﬁltration {Ft } with τ1 ≤ τ2 ≤ c ∈ I R a.s. Then ϕ(X) is a submartingale w. Then W is a martingale and W 2 is a submartingale..r. Xn ) with realvalued components Xk be a submartingale w. X0 = 0 a. See. Deﬁnition 3. .14 (SquareIntegrable) Let X = {Xt . ) is a complete metric space. we write X ∈ M2 (or X ∈ Mc if X is also continuous).t. we deﬁne Xt := 2 E[Xt ] 2 and X := Xn ∧ 1 .g. We say that X is square2 integrable if E[Xt ] < ∞ for every t ≥ 0.r.t. and Mc is a closed subset of M2 .t.3 INTRODUCTION TO STOCHASTIC CALCULUS 31 (ii) if s ≤ t. e. {Ft }t∈[0. ∞). resp.8 Proposition 3. . • The arithmetic Brownian motion Xt := µt + σWt . {Ft }t∈[0. (resp. Then E[Xτ2 Fτ1 ] ≥ (≤)Xτ1 means: (i) X − Y  ≥ 0 and X − Y  = 0 iﬀ X = Y . σ ∈ I is a martingale for R.∞) Proof. . (iii) X − Y  = X − Z + Z − Y .r.16 If we identify indistinguishable processes. the pair (M2 . Ft }t∈[0. . then E[Xt Fs ] = Xs . Karatzas/Shreve (1991. a. • Let W be a onedimensional Brownian motion. {Ft }t∈[0.∞) and let ϕ : I n → I be a convex. ∞). • Let N be a Poisson process with intensity λ. 2 Proof.s. nondecreasing function such that R R E[ϕ(Xt )] < ∞ for all t ∈ [0. 2n n=1 ∞ Note that X − Y  deﬁnes a the pseudometric on M2 . pp. Then {Nt − λt}t is a martingale. Theorem 3. Ft }t∈[0. a submartingale for µ > 0. 1st Version) Let X = {Xt . 8 This . follows from Jensen’s inequality (Exercise). 
Xn ) with realvalued components Xk be a martingale w. the pseudometric X − Y  becomes a metric on M2 . If we identify indistinguishable processes. {Ft }t∈[0. .∞) (ii) Let X = (X1 . µ = 0. Examples. 37f).17 (Optional Sampling Theorem. . E[Xt Fs ] ≥ Xs ). nothing changes.
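The martingale examples above lend themselves to a quick Monte Carlo sanity check. The following sketch (not part of the lecture notes; grid size, sample size, and seed are arbitrary choices) simulates Brownian paths and a Poisson sample and verifies E[W_T] = 0, E[W_T²] = T, and E[N_T − λT] = 0 approximately.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 20_000, 200, 1.0
dt = T / n_steps

# Brownian increments ~ N(0, dt); cumulative sums give W on the time grid
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
WT = W[:, -1]

print(abs(WT.mean()))        # E[W_T] = 0: W is a martingale
print(WT.var())              # E[W_T^2] = T: consistent with W^2 - t being a martingale

# compensated Poisson process: E[N_T - lambda*T] = 0
lam = 3.0
NT = rng.poisson(lam * T, size=n_paths)
print(abs((NT - lam * T).mean()))
```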
If the stopping times are unbounded, this result only holds, in general, if X can be extended to ∞ such that X̃ := {X_t, F_t}_{t ∈ [0,∞]} is still a sub- or supermartingale. The existence of a candidate for the last element X_∞ is ensured by the following convergence theorem:

Theorem 3.18 (Sub- and Supermartingale Convergence) Let X = {X_t, F_t}_{t ∈ [0,∞)} be a right-continuous submartingale (supermartingale) and assume sup_{t≥0} E[X_t^+] < ∞ (sup_{t≥0} E[X_t^−] < ∞). Then the limit X_∞(ω) = lim_{t→∞} X_t(ω) exists for a.e. ω ∈ Ω and E[|X_∞|] < ∞.

This theorem ensures that a limit exists, but it is not at all clear whether X̃ is a sub- or supermartingale. To solve this problem, we need to require the following condition:

Definition 3.15 (Uniform Integrability) A family of random variables (U_α)_{α ∈ A} is uniformly integrable (UI) if
lim_{n→∞} sup_{α ∈ A} ∫_{{|U_α| ≥ n}} |U_α| dP = 0.

Using the definition directly, it is sometimes not easy to check whether a family of random variables is uniformly integrable. The following characterization helps:

Theorem 3.19 (Characterization of UI) Let (U_α)_{α ∈ A} be a subset of L¹. The following are equivalent:
(i) (U_α)_{α ∈ A} is uniformly integrable.
(ii) There exists a positive, increasing, convex function g defined on [0,∞) such that lim_{x→∞} g(x)/x = +∞ and sup_{α ∈ A} E[g(|U_α|)] < ∞.
Remark. A good choice is often g(x) = x^{1+ε} with ε > 0.

Theorem 3.20 The following conditions are equivalent for a nonnegative, right-continuous submartingale X = {X_t, F_t}_{t ∈ [0,∞)}:
(i) It is a uniformly integrable family of random variables.
(ii) It converges in L¹, as t → ∞.
(iii) It converges P-a.s. (t → ∞) to an integrable random variable X_∞ such that {X_t, F_t}_{t ∈ [0,∞]} is a submartingale.
For the implication "(i) ⇒ (ii) ⇒ (iii)", the assumption of nonnegativity can be dropped.
Proof. See Karatzas/Shreve (1991, pp. 17f). □

Theorem 3.21 (UI Martingales Can Be Closed) The following conditions are equivalent for a right-continuous martingale X = {X_t, F_t}_{t ∈ [0,∞)}:
(i) It is a uniformly integrable family of random variables.
(ii) It converges in L¹, as t → ∞.
(iii) It converges P-a.s. (t → ∞) to an integrable random variable X_∞ such that {X_t, F_t}_{t ∈ [0,∞]} is a martingale.
(iv) There exists an integrable random variable Y such that X_t = E[Y | F_t] a.s. for all t ≥ 0, i.e. the martingale is closed by Y.
If (iv) holds and X_∞ is the random variable in (iii), then E[Y | F_∞] = X_∞ a.s.
Proof. See Karatzas/Shreve (1991, pp. 19f). □

We now formulate a more general version of Doob's optional sampling theorem:

Theorem 3.22 (Optional Sampling Theorem, 2nd Version) Let X = {X_t, F_t}_{t ∈ [0,∞]} be a right-continuous submartingale (supermartingale) with a last element X_∞ and let τ_1 and τ_2 be stopping times of the filtration {F_t} with τ_1 ≤ τ_2 a.s. Then E[X_{τ_2} | F_{τ_1}] ≥ (≤) X_{τ_1}.
Remark. In particular, if {X_t, F_t}_{t ∈ [0,∞]} is a UI right-continuous martingale, then the stopping times may be unbounded (see Protter (2005, p. 10)).

Proposition 3.23 (Stopped (Sub)Martingales) Let X = {X_t, F_t}_{t ∈ [0,∞)} be an adapted right-continuous (sub)martingale and let τ be a bounded stopping time. Then the stopped process {X_{t∧τ}, F_t}_{t ∈ [0,∞)} is also an adapted right-continuous (sub)martingale.
Proof. Exercise. □
Remark. The difficulty in this proposition is to show that the stopped process is a (sub)martingale for the filtration {F_t}; it is obvious that the stopped process is a (sub)martingale for {F_{t∧τ}}.

Proposition 3.24 (Conditional Expectation and Stopped σ-Fields) Let τ_1 and τ_2 be stopping times and Z be an integrable random variable. Then
E[E[Z | F_{τ_1}] | F_{τ_2}] = E[E[Z | F_{τ_2}] | F_{τ_1}] = E[Z | F_{τ_1 ∧ τ_2}].
Proof. Exercise. □

For technical reasons, we sometimes need to weaken the martingale concept:

Definition 3.16 (Local Martingale) Let X = {X_t, F_t}_{t ∈ [0,∞)} be a (continuous) process. If there exists a nondecreasing sequence {τ_n}_{n ∈ N} of stopping times of {F_t} such that {X_t^(n) := X_{t∧τ_n}, F_t}_{t ∈ [0,∞)} is a martingale for each n ≥ 1 and P(lim_{n→∞} τ_n = ∞) = 1, then we say that X is a (continuous) local martingale.
Proposition 3.25 A nonnegative local martingale is a supermartingale.
Proof. This follows from Fatou's lemma (Exercise). □

Remarks.
• Every martingale is a local martingale (choose τ_n = n).
• However, there exist local martingales which are not martingales. In particular, E[X_t] need not exist for a local martingale.

This localization concept can be applied to other situations as well (e.g. locally bounded, locally square-integrable):

Definition 3.17 (Local Property) Let X be a stochastic process. A property π is said to hold locally if there exists a sequence of stopping times {τ_n}_{n ∈ N} with P(lim_{n→∞} τ_n = ∞) = 1 such that X^(n) has property π for each n ≥ 1.

3.5 Completion, Augmentation, Usual Conditions

The following definition is very important:

Definition 3.18 (Usual Conditions) A filtered probability space (Ω, G, P, {G_t}) satisfies the usual conditions if
(i) it is complete (a probability space is complete if for any Ñ ⊂ Ω such that there exists an N ∈ G with Ñ ⊂ N and P(N) = 0, we have Ñ ∈ G),
(ii) G_0 contains all the P-null sets of G, and
(iii) {G_t} is right-continuous.

One of the main reasons for this assumption is that the following proposition holds:

Proposition 3.26 Let (Ω, G, P, {G_t}) satisfy the usual conditions. Then every martingale for {G_t} possesses a unique cadlag modification.
Proof. See, e.g., Karatzas/Shreve (1991, pp. 16f). □

In the sequel, we impose the following

Standing assumption: We always consider a filtered probability space (Ω, F, P, {F_t}) satisfying the usual conditions.

Unfortunately, we are confronted with the following result: the filtration {F_t^W} generated by a Brownian motion W is left-continuous, but fails to be right-continuous.
Fortunately, there is a way out. Let F_∞^W := σ{∪_{t≥0} F_t^W} and consider the space (Ω, F_∞^W, P). Define the collection of P-null sets by
N := {A ⊂ Ω : ∃ B ∈ F_∞^W with A ⊂ B, P(B) = 0}.
The P-augmentation of {F_t^W} is defined by F_t := σ{F_t^W ∪ N} and is called the Brownian filtration. More generally, for this P-augmentation one can prove the following results:
(i) If a stochastic process X is left-continuous, then the augmented filtration is left-continuous.
(ii) Even if X is continuous, the augmented filtration need not be right-continuous.
(iii) For an n-dimensional strong Markov process X, the augmented filtration is right-continuous.

Theorem 3.28 (Brownian Filtration) The Brownian filtration is left- and right-continuous, and W is still a Brownian motion w.r.t. the Brownian filtration.
This is the reason why we work with the Brownian filtration instead of {F_t^W}. For a Poisson process, a similar result holds:

Theorem 3.29 (Augmented Filtration of a Poisson Process) The P-augmentation of the natural filtration induced by a Poisson process N is right-continuous, and N is still a Poisson process w.r.t. this augmented filtration.

The proofs of the previous results can be found in Karatzas/Shreve (1991, pp. 89ff).

3.6 Ito Integral

Our goal is to define an integral of the form
∫_0^t X(s) dW(s),
where X is an appropriate stochastic process. From the Lebesgue-Stieltjes theory we know that for a measurable function f and a process A of finite variation on [0,t] one can define
∫_0^t f(s) dA(s) = lim_{n→∞} Σ_{k=0}^{n−1} f(kt/n) · [A((k+1)t/n) − A(kt/n)].
However, the variation of a Brownian motion is unbounded on any interval. Therefore, naive (pathwise) stochastic integration is impossible (see Protter (2005, pp. 43ff)).

Good news: We can define a stochastic integral in terms of L²-convergence!

Agenda (disregarding measurability for the moment).
1st Step. Define a stochastic integral for simple processes κ (processes that are constant except for finitely many jumps):
I_t(κ) := ∫_0^t κ_s dW_s := ?
2nd Step. Show that every L²[0,T]-process X can be approximated by a sequence {κ^(n)}_n of simple processes such that
E[∫_0^T (X(s) − κ^(n)(s))² ds] → 0.
3rd Step. For each n the stochastic integral I_t(κ^(n)) = ∫_0^t κ^(n)(s) dW(s) is a well-defined random variable, and the sequence {I_t(κ^(n))}_n converges (in some sense) to a random variable I_t.
4th Step. Define the stochastic integral for an L²[0,T]-process X by
∫_0^t X(s) dW(s) := lim_{n→∞} I_t(κ^(n)) = lim_{n→∞} ∫_0^t κ^(n)(s) dW(s).
5th Step. Extend the definition by localization.

Standing Assumption. {F_t} is the Brownian filtration of a Brownian motion W.

1st Step.

Definition 3.19 (Simple Process) A stochastic process {κ_t}_{t ∈ [0,T]} is a simple process if there exist real numbers 0 = t_0 < t_1 < ... < t_p = T, p ∈ N, and bounded random variables Φ_i: Ω → R, i = 0, 1, ..., p, such that Φ_0 is F_0-measurable, Φ_i is F_{t_{i−1}}-measurable, and for all ω ∈ Ω we have
κ_t(ω) = Φ_0(ω) 1_{{0}}(t) + Σ_{i=1}^p Φ_i(ω) 1_{(t_{i−1}, t_i]}(t).

Remarks.
• κ is a left-continuous step function.
• κ_t is F_{t_{i−1}}-measurable for t ∈ (t_{i−1}, t_i], i.e. measurable w.r.t. the left-hand endpoint of the interval.

Definition 3.20 (Stochastic Integral for Simple Processes) For a simple process κ and t ∈ [0,T] the stochastic integral I(κ) is defined by
I_t(κ) := ∫_0^t κ_s dW_s := Σ_{1 ≤ i ≤ p} Φ_i (W_{t_i ∧ t} − W_{t_{i−1} ∧ t}).

Proposition 3.30 (Stochastic Integral for Simple Processes) For a simple process κ we have:
(i) The integral I(κ) = {I_t(κ)}_{t ∈ [0,T]} is a continuous martingale for {F_t}_{t ∈ [0,T]}.
(ii) E[(I_t(κ))²] = E[∫_0^t κ_s² ds] for all t ∈ [0,T].
(iii) E[sup_{0 ≤ t ≤ T} |I_t(κ)|²] ≤ 4 E[∫_0^T κ_s² ds].
Proof. (i) Exercise.
(ii) For simplicity, let t = t_{k+1}. Then
E[I_t(κ)²] = Σ_{i=1}^k Σ_{j=1}^k E[Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}})].
1st case, i > j:
E[Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}})] = E[E[Φ_i Φ_j (W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}}) | F_{t_{i−1}}]] = E[Φ_i Φ_j (W_{t_j} − W_{t_{j−1}})(E[W_{t_i} | F_{t_{i−1}}] − W_{t_{i−1}})] = 0,
since E[W_{t_i} | F_{t_{i−1}}] = W_{t_{i−1}}.
2nd case, i = j:
E[Φ_i² (W_{t_i} − W_{t_{i−1}})²] = E[Φ_i² E[(W_{t_i} − W_{t_{i−1}})² | F_{t_{i−1}}]] = E[Φ_i² (t_i − t_{i−1})].
Hence,
E[I_t(κ)²] = E[Σ_{i=1}^k Φ_i² (t_i − t_{i−1})] = E[∫_0^t κ_s² ds],
and this expression is finite since Φ is bounded:
E[Σ_{i=1}^k Φ_i² (t_i − t_{i−1})] ≤ T Σ_{i=1}^k E[Φ_i²] < ∞.
(iii) Apply (i), (ii), and Doob's inequality (see Karatzas/Shreve (1991, p. 14)): for a right-continuous martingale {M_t} with E[M_t²] < ∞ for all t ≥ 0 we have
E[(sup_{0 ≤ t ≤ T} |M_t|)²] ≤ 4 E[M_T²]. □

Remarks.
• Let κ_1 and κ_2 be simple processes and a, b ∈ R. Linear combinations of simple processes are simple, and the stochastic integral for simple processes is linear, i.e.
I_t(aκ_1 + bκ_2) = a I_t(κ_1) + b I_t(κ_2).
This follows directly from Definition 3.20.
• For κ ≡ 1 we obtain ∫_0^t 1 dW_s = W_t and hence
E[(∫_0^t dW_s)²] = E[W_t²] = t = ∫_0^t ds.
This is the reason why people sometimes write dW_t = √dt, which is wrong from a mathematical point of view.
• Stochastic integrals over [t,T] are defined by
∫_t^T κ_s dW_s := ∫_0^T κ_s dW_s − ∫_0^t κ_s dW_s.
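The Ito isometry of Proposition 3.30 (ii) can be illustrated numerically. In this sketch the bounded, F_{t_{i−1}}-measurable weights Φ_i are chosen as the clipped value of W at the left endpoint; any other predictable bounded choice would do, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, p, T = 50_000, 50, 1.0
dt = T / p

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, p))
W = np.cumsum(dW, axis=1)
# Phi_i must be F_{t_{i-1}}-measurable and bounded: clip W at the left endpoint
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])
Phi = np.clip(W_left, -1.0, 1.0)

I_T = (Phi * dW).sum(axis=1)                # I_T(kappa) = sum Phi_i (W_ti - W_t(i-1))
lhs = (I_T ** 2).mean()                     # E[I_T(kappa)^2]
rhs = ((Phi ** 2).sum(axis=1) * dt).mean()  # E[ integral of kappa_s^2 ds ]
print(lhs, rhs)
```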
2nd Step. Define the vector space
L²[0,T] := {X = {X_t, F_t}_{t ∈ [0,T]} : progressively measurable, real-valued, with E[∫_0^T X_s² ds] < ∞}
with (semi-)norm
‖X‖_T² := E[∫_0^T X_s² ds],
where we identify processes for which X = Y holds only Lebesgue ⊗ P-a.e. Further define
M_2^c([0,T]) := {M : M continuous martingale on [0,T] w.r.t. {F_t}, E[M_t²] < ∞ for every t ∈ [0,T]}.
For simple processes the mapping κ → I(κ) induces a norm on the vector space of the corresponding stochastic integrals via
‖I(κ)‖²_{L²,T} := E[(∫_0^T κ_s dW_s)²] = E[∫_0^T κ_s² ds] = ‖κ‖_T²,
i.e. the mapping κ → I(κ) is linear and norm-preserving, an isometry (the so-called Ito isometry).

Proposition 3.31 (Simple Processes Approximate L²[0,T]-Processes) The linear subspace of simple processes is dense in L²([0,T]), i.e. for all X ∈ L²[0,T] there exists a sequence {κ^(n)}_n of simple processes with
‖X − κ^(n)‖_T² = E[∫_0^T (X_s − κ_s^(n))² ds] → 0 (as n → ∞).
Proof. Assume that X ∈ L²[0,T] is continuous and bounded. Choose
κ_t^(n)(ω) := X_0(ω) 1_{{0}}(t) + Σ_{k=0}^{2^n − 1} X_{kT/2^n}(ω) 1_{(kT/2^n, (k+1)T/2^n]}(t).
Since X is bounded, {κ^(n)}_n is uniformly bounded and we can apply the dominated convergence theorem to obtain the desired result. For the general case, see Karatzas/Shreve (1991, p. 133). □

3rd and 4th Step.

Theorem and Definition 3.32 (Ito Integral for X ∈ L²[0,T]) There exists a unique linear mapping J: L²[0,T] → M_2^c([0,T]) with the following two properties:
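This approximation can be watched in action: for X = W, the integrals of the approximating simple processes are left-endpoint sums over refining partitions, and their L²-distance to the Ito integral ∫_0^T W dW = (W_T² − T)/2 (a value obtained from Ito's formula in Section 3.7) shrinks roughly like T²/(2n). A sketch with arbitrary grid sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_fine, T = 10_000, 512, 1.0
dW = rng.normal(0.0, np.sqrt(T / n_fine), size=(n_paths, n_fine))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])
exact = 0.5 * (W[:, -1] ** 2 - T)           # the Ito integral in closed form

errs = []
for n in [8, 64, 512]:                      # number of intervals in the partition
    Wg = W[:, :: n_fine // n]               # W sampled at the partition points
    approx = np.sum(Wg[:, :-1] * np.diff(Wg, axis=1), axis=1)  # left-endpoint sums
    errs.append(np.mean((approx - exact) ** 2))   # L2-error, approx. T^2/(2n)
print(errs)
```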
(i) If κ is simple, then P(J_t(κ) = I_t(κ) for all t ∈ [0,T]) = 1.
(ii) E[J_t(X)²] = E[∫_0^t X_s² ds] ("Ito isometry").
J is unique in the following sense: if there exists a second mapping J' satisfying (i) and (ii) for all X ∈ L²[0,T], then J(X) and J'(X) are indistinguishable. For X ∈ L²[0,T] we define
∫_0^t X_s dW_s := J_t(X)
for every t ∈ [0,T] and call it the Ito integral or stochastic integral of X (w.r.t. W). Note that J maps into M_2^c([0,T]), i.e. the Ito integral on [0,T] is still a square-integrable continuous martingale.

Proof. (a) Let {κ^(n)}_n be an approximating sequence, i.e. ‖X − κ^(n)‖_T → 0 as n → ∞. Then {κ^(n)}_n is also a Cauchy sequence w.r.t. ‖·‖_T, and we obtain for every t ∈ [0,T] (as n, m → ∞)
E[(I_t(κ^(n)) − I_t(κ^(m)))²] = E[I_t(κ^(n) − κ^(m))²] = E[∫_0^t (κ_s^(n) − κ_s^(m))² ds] → 0,
where the first equality holds because I is linear and the second is the Ito isometry. Hence, {I_t(κ^(n))}_n is a Cauchy sequence in the ordinary space L²(Ω, F_t, P) of random variables (t fixed!), which is complete, and thus I_t(κ^(n)) → Z_t in L², where Z_t is F_t-measurable and E[Z_t²] < ∞. Note that Z_t is only P-a.s. unique (and may depend on the sequence!).
(b) Z is a martingale: Note that I_t(κ^(n)) is a martingale for all n ∈ N. By the definition of the conditional expectation, we thus get
∫_A I_t(κ^(n)) dP = ∫_A I_s(κ^(n)) dP for s < t and for all A ∈ F_s and n ∈ N.
The L²-convergence from (a) leads to (as n → ∞; see Bauer (1992, p. 92 and p. 99))
∫_A Z_t dP ←− ∫_A I_t(κ^(n)) dP = ∫_A I_s(κ^(n)) dP −→ ∫_A Z_s dP.
(c) Since Z is adapted, by Proposition 3.26, Z possesses a right-continuous modification {J_t(X), F_t}_{t ∈ [0,T]}. Applying Doob's inequality and the Borel-Cantelli lemma, there exists a subsequence {n_k}_k such that
(1) I_t(κ^(n_k)) → Z_t a.s., as k → ∞,
and one can prove that J(X) is even continuous (Exercise).
(d) J_t(X) is independent of the approximating sequence: Let {κ^(n)}_n and {κ̃^(n)}_n be two approximating sequences for X with I_t(κ^(n)) → Z_t and I_t(κ̃^(n)) → Z̃_t in L².
Then {κ̂^(n)}_n with κ̂^(2n) := κ^(n) (even indices) and κ̂^(2n+1) := κ̃^(n) (odd indices) is an approximating sequence for X such that {I_t(κ̂^(n))}_n converges in L² to both Z_t and Z̃_t. Thus Z_t and Z̃_t coincide with J_t(X) P-a.s.
(e) Ito isometry:
E[J_t(X)²] = lim_{n→∞} E[I_t(κ^(n))²] = lim_{n→∞} E[∫_0^t (κ_s^(n))² ds] = E[∫_0^t X_s² ds].
The first equality (∗) holds because I_t(κ^(n)) → J_t(X) in the ordinary L²(Ω, F_t, P); the last equality (∗∗) holds because κ^(n) → X in L²([0,T] × Ω, B[0,T] ⊗ F, Lebesgue ⊗ P). Both (∗) and (∗∗) are a consequence of the continuity of the norm (see Bauer (1992)).
(f) Linearity of J follows from the linearity of I and from the fact that if {κ_X^(n)}_n approximates X and {κ_Y^(n)}_n approximates Y, then {aκ_X^(n) + bκ_Y^(n)}_n approximates aX + bY (a, b ∈ R):
aJ_t(X) + bJ_t(Y) = a lim_n I_t(κ_X^(n)) + b lim_n I_t(κ_Y^(n)) = lim_n I_t(aκ_X^(n) + bκ_Y^(n)) = J_t(aX + bY).
(g) Uniqueness of J: Let J' be a linear mapping with properties (i) and (ii). Then
J'_t(X) = J'_t(lim_n κ^(n)) = lim_n J'_t(κ^(n)) = lim_n I_t(κ^(n)) = J_t(X) P-a.s.
for all t ∈ [0,T]; the second equality holds because linear mappings satisfying the isometry (ii) are continuous. Therefore, P(J_t(X) = J'_t(X) for all t ∈ [0,T]) = 1, and since J(X) and J'(X) are continuous processes, they are indistinguishable. □

5th Step. Define
H²[0,T] := {X = {X_t, F_t}_{t ∈ [0,T]} : progressively measurable, real-valued, with P(∫_0^T X_s² ds < ∞) = 1}.
In general, for X ∈ H²[0,T] it can happen that
‖X‖_T² = E[∫_0^T X_s² ds] = ∞.
Then the Ito isometry need not hold and the approximation argument breaks down. Fortunately, the class of admissible integrands can be extended by a localization argument: set
τ_n(ω) := T ∧ inf{t ∈ [0,T] : ∫_0^t X_s²(ω) ds ≥ n}, X_t^(n)(ω) := X_t(ω) 1_{{τ_n(ω) ≥ t}}.
Every τ_n is a stopping time because it is the first hitting time of [n,∞) by the continuous process {∫_0^t X_s² ds}_t.
Hence, X^(n) ∈ L²[0,T], and J(X^(n)) is defined and a martingale for fixed n ∈ N. Now define, for X ∈ H²[0,T] and 0 ≤ t ≤ τ_n,
J_t(X) := J_t(X^(n)).
Note that
• J_t(X^(n)) = J_t(X^(m)) for all 0 ≤ t ≤ τ_n ∧ τ_m, so J is well-defined;
• lim_{n→∞} τ_n = T a.s. (since P(∫_0^T X_s² ds < ∞) = 1).

Important Results.
• By construction, for X ∈ H²[0,T] the process {J_t(X)}_{t ∈ [0,T]} is a continuous local martingale with localizing sequence {τ_n^X}_n, where {τ_n^X}_n is the localizing sequence of X.
• J is still linear, i.e. J_t(aX + bY) = aJ_t(X) + bJ_t(Y) for a, b ∈ R and X, Y ∈ H²[0,T]. (Choose τ_n := τ_n^X ∧ τ_n^Y as localizing sequence of aX + bY.)
• E[J_t(X)] need not exist.

Definition 3.21 (Multidimensional Ito Integral) Let {W(t), F_t} be an m-dimensional Brownian motion and {X(t), F_t} an R^{n,m}-valued progressively measurable process with all components X_ij ∈ H²[0,T]. Then we define the Ito integral of X w.r.t. W as
∫_0^t X(s) dW(s) := (Σ_{j=1}^m ∫_0^t X_1j(s) dW_j(s), ..., Σ_{j=1}^m ∫_0^t X_nj(s) dW_j(s))ᵀ,
where each ∫_0^t X_ij(s) dW_j(s) is a one-dimensional Ito integral.

3.7 Ito's Formula

Recall our standing assumption: (Ω, F, P, {F_t}) is a filtered probability space satisfying the usual conditions. Additional assumption: {W_t, F_t} is a (multidimensional) Brownian motion of appropriate dimension.

Definition 3.22 (Ito Process) Let W be an m-dimensional Brownian motion.
(i) {X(t), F_t} is said to be a real-valued Ito process if for all t ≥ 0 it possesses the representation
X(t) = X(0) + ∫_0^t α(s) ds + ∫_0^t β(s)ᵀ dW(s) = X(0) + ∫_0^t α(s) ds + Σ_{j=1}^m ∫_0^t β_j(s) dW_j(s)
for an F_0-measurable random variable X(0) and progressively measurable processes α, β with
∫_0^t |α(s)| ds < ∞ and Σ_{j=1}^m ∫_0^t β_j(s)² ds < ∞ P-a.s. for all t ≥ 0.
In particular, we have β_j ∈ H²[0,T].
(ii) An n-dimensional Ito process X = (X^(1), ..., X^(n)) is a vector of one-dimensional Ito processes X^(i).

Remarks.
• To shorten notation, one sometimes uses the symbolic differential notation
dX(t) = α(t) dt + β(t)ᵀ dW(t).
• It can be shown that the processes α and β are unique up to indistinguishability.
• For a real-valued Ito process X and a real-valued progressively measurable process Y (such that the following integrals are defined), we define
∫_0^t Y_s dX_s := ∫_0^t Y_s α_s ds + ∫_0^t Y_s β_sᵀ dW_s.

Definition 3.23 (Quadratic Variation and Covariation) Let X^(1) and X^(2) be two real-valued Ito processes with
X^(i)(t) = X^(i)(0) + ∫_0^t α^(i)(s) ds + ∫_0^t β^(i)(s)ᵀ dW(s), where i ∈ {1, 2}.
Then
⟨X^(1), X^(2)⟩_t := Σ_{j=1}^m ∫_0^t β_j^(1)(s) β_j^(2)(s) ds
is said to be the quadratic covariation of X^(1) and X^(2), and ⟨X⟩_t := ⟨X, X⟩_t is said to be the quadratic variation of an Ito process X.

The definition may appear a bit unfounded. However, one can show the following (see Karatzas/Shreve (1991, pp. 32ff)): For fixed t ≥ 0, let Π = {t_0, ..., t_n} with 0 = t_0 < t_1 < ... < t_n = t be a partition of [0,t]. The mesh of Π is defined by ‖Π‖ := max_{1 ≤ k ≤ n} |t_k − t_{k−1}|, and the p-th variation (p > 0) of X over the partition Π by
V_t^(p)(Π) = Σ_{k=1}^n |X_{t_k} − X_{t_{k−1}}|^p.
For X ∈ M_2^c it holds that
(2) lim_{‖Π‖→0} V_t^(2)(Π) = ⟨X⟩_t in probability.

The main tool for working with Ito processes is Ito's formula:
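The limit (2) can be observed on a simulated Brownian path: over refining partitions of [0,1] the second variation settles near ⟨W⟩_1 = 1, while the first variation diverges, which is why Lebesgue-Stieltjes integration w.r.t. W fails. A small sketch (grid sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n_fine, T = 2 ** 14, 1.0
dW = rng.normal(0.0, np.sqrt(T / n_fine), size=n_fine)
W = np.concatenate([[0.0], np.cumsum(dW)])   # one Brownian path on a fine grid

v1, v2 = [], []
for n in [2 ** 6, 2 ** 10, 2 ** 14]:         # refining partitions of [0, 1]
    incr = np.diff(W[:: n_fine // n])
    v1.append(np.abs(incr).sum())            # 1st variation: grows like sqrt(n)
    v2.append((incr ** 2).sum())             # 2nd variation: tends to <W>_1 = 1
print(v1)
print(v2)
```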
Theorem 3.33 (Ito's Formula) Let X be an n-dimensional Ito process with
X_i(t) = X_i(0) + ∫_0^t α_i(s) ds + Σ_{j=1}^m ∫_0^t β_ij(s) dW_j(s),
and let f: [0,∞) × R^n → R be a C^{1,2}-function. Then
f(t, X_1(t), ..., X_n(t)) = f(0, X_1(0), ..., X_n(0)) + ∫_0^t f_t(s, X_1(s), ..., X_n(s)) ds + Σ_{i=1}^n ∫_0^t f_{x_i}(s, X_1(s), ..., X_n(s)) dX_i(s) + 0.5 Σ_{i,j=1}^n ∫_0^t f_{x_i x_j}(s, X_1(s), ..., X_n(s)) d⟨X_i, X_j⟩_s,
where all integrals are defined.

Proof. For simplicity, let n = m = 1, X(0) = const, and f = f(x): R → R. By applying a localization argument, we can assume that
M_t := ∫_0^t β_s dW_s,  B_t := ∫_0^t α_s ds,  B̂_t := ∫_0^t |α_s| ds,  ⟨M⟩_t = ∫_0^t β_s² ds,
and thus X, are uniformly bounded by a constant C. Hence, the relevant support of f, f', and f'' is compact, implying that these functions are bounded as well. Besides, M is a martingale (since β ∈ L²[0,T]).
1st step. Let Π = {t_0, ..., t_p} be a partition of [0,t]. A Taylor expansion leads to
f(X_t) − f(X_0) = Σ_{k=1}^p (f(X_{t_k}) − f(X_{t_{k−1}})) = Σ_{k=1}^p f'(X_{t_{k−1}})(X_{t_k} − X_{t_{k−1}}) (∗) + 0.5 Σ_{k=1}^p f''(η_k)(X_{t_k} − X_{t_{k−1}})² (∗∗)
with η_k = X_{t_{k−1}} + θ_k(X_{t_k} − X_{t_{k−1}}), θ_k ∈ (0,1).
2nd step. Convergence of (∗). We get
(∗) = Σ_{k=1}^p f'(X_{t_{k−1}})(B_{t_k} − B_{t_{k−1}}) + Σ_{k=1}^p f'(X_{t_{k−1}})(M_{t_k} − M_{t_{k−1}}) =: A_1(Π) + A_2(Π).
(i) For ‖Π‖ → 0, we obtain from the Lebesgue-Stieltjes theory
A_1(Π) → ∫_0^t f'(X_s) α_s ds a.s. and in L¹.
(ii) Approximate f'(X_s) by the simple process
Y_s^Π(ω) := f'(X_0) 1_{{0}}(s) + Σ_{k=1}^p f'(X_{t_{k−1}}(ω)) 1_{(t_{k−1}, t_k]}(s).
Note that A_2(Π) = ∫_0^t Y_s^Π dM_s (this is actually a consequence of the more general definition of Ito integrals in Karatzas/Shreve (1991, pp. 129ff)). By the Ito isometry,
E[(∫_0^t f'(X_s) dM_s − A_2(Π))²] = E[(∫_0^t (f'(X_s) − Y_s^Π) β_s dW_s)²] = E[∫_0^t (f'(X_s) − Y_s^Π)² β_s² ds] → 0,
where the limit follows from the dominated convergence theorem (as ‖Π‖ → 0).
3rd step. Convergence of (∗∗). It can be shown (see Korn/Korn (1999, pp. 53ff)) that
(∗∗) → 0.5 ∫_0^t f''(X_s) β_s² ds in L¹.
4th step. By steps 2 and 3 (recall that L²-convergence implies L¹-convergence),
(∗) + (∗∗) → ∫_0^t f'(X_s) α_s ds + ∫_0^t f'(X_s) dM_s + 0.5 ∫_0^t f''(X_s) β_s² ds in L¹.
5th step. Therefore, there exists a subsequence {Π_k}_k along which the sums converge a.s. towards the corresponding limits, i.e. for fixed t both sides of Ito's formula are equal a.s. Since both sides are continuous processes in t, they are indistinguishable. □

Remark. Often a shorthand symbolic differential notation is used (f: [0,∞) × R → R):
df(t, X_t) = f_t(t, X_t) dt + f_x(t, X_t) dX_t + 0.5 f_xx(t, X_t) d⟨X⟩_t.

Examples.
• X_t = t and f ∈ C² leads to the fundamental theorem of calculus.
• X_t = h(t) and f ∈ C² leads to the chain rule of ordinary calculus.
• X_t = W_t and f(x) = x²:
W_t² = 2 ∫_0^t W_s dW_s + t.
• X_t = x_0 + αt + σW_t, α, σ, x_0 ∈ R, and f(x) = e^x:
df(X_t) = f'(X_t) dX_t + 0.5 f''(X_t) d⟨X⟩_t = f(X_t)[(α + 0.5σ²) dt + σ dW_t].
Hence, the solution of the stock price dynamics in the Black-Scholes model, dS_t = S_t[µ dt + σ dW_t], reads
(3) S_t = s_0 e^{(µ − 0.5σ²)t + σW_t}.
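The closed-form solution (3) can be sampled directly; as a consistency check, E[S_t] = s_0 e^{µt} for geometric Brownian motion. A sketch with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)
s0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0

# sample S_T via the explicit solution (3): only W_T is needed
WT = rng.normal(0.0, np.sqrt(T), size=1_000_000)
ST = s0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * WT)

print(ST.mean(), s0 * np.exp(mu * T))   # both approximately equal
```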
s. for every t ≥ 0 and i ∈ {1. where C = C(K. y ∈ I then there exists an adapted continuous process X being a strong solution R. such that t m αi (s. ∞) × I n → I n. .m are given functions. Proof. X(t))dW (t) X(0) = x. . z) − β(t. then X is a strong solution of the stochastic diﬀerential equation (SDE) (4) dX(t) = α(t. X(s)) + 0 j=1 2 βij (s. . P. In general.5 fyy d < Y > + fxy d < X. . z) − α(t. X is unique up to indistinguishability. y)2 + β(t.s. y) = x · y Then (omitting dependencies) d(XY ) = fx dX + fy dY + 0. . {Ft }) there exists an adapted process X with X(0) = x ∈ I n ﬁxed and R t m t Xi (t) = xi + 0 αi (s.5 fxx d < X > +0. ∞) × I n → I n and β : [0. α(t. n}. . Y > Deﬁnition 3. X(s)) ds + j=0 0 βij (s. X(t))dt + β(t. T ) > 0. X(s)) dWj (s) P a. where α : [0. X(s)) ds < ∞ P a. for every t ≥ 0 and i ∈ {1. pp. there do not exist explicit solutions to SDEs. y)2 ≤ K 2 (1 + y2 ). z. of (4) and satisfying E[Xt 2 ] ≤ C(1 + x2 )eCT for all t ∈ [0. F.3 INTRODUCTION TO STOCHASTIC CALCULUS 45 • Ito’s product rule: Let X and Y be Ito processes and f (x. n}.24 (Stochastic Diﬀerential Equation) If for a given ﬁltered probability space (Ω.34 (Existence and Uniqueness) If the coeﬃcients α and β are continuous functions satisfying the following global Lipschitz and linear growth conditions for ﬁxed K > 0 α(t. Y > =0 =0 =1 = XdY + Y dX + d < X. 128ﬀ). . y) ≤ Kz − y. R R R R Theorem 3. y) + β(t. . T ]. the important family of linear SDEs admits explicit solutions: . However. See Korn/Korn (1999.
a. Applying Ito’s product rule to 0 · Z shows Xt = 0 a. Z ˜ Let Z be another solution. for every t ≥ 0) t (A(s) + a(s)) ds m 0 t < ∞. n = m = 1. possesses an adapted Lebesgue ⊗ P unique solution m m t 1 X(t) = Z(t) x + a(u) − Bj (u)bj (u) du + 0 Z(u) j=1 j=1 where t t t 0 bj (u) dWj (u) . we obtain Z = Z. As in (3). Z(0) = 1. Ito’s product rule ˜ applied to Y Z gives the desired result. b be progressively measurable realvalued processes with R (P a. is the solution of the homogeneous SDE dZ(t) = Z(t) [A(t)dt + B(t) dW (t)] . Then. Bj (s)2 + b2 (s) ds < ∞.3 INTRODUCTION TO STOCHASTIC CALCULUS 46 Proposition 3. Due to the initial condition. one can show that Z solves the homogeneous SDE. To prove uniqueness. for all t ≥ 0. ˜1 d Z Z ˜ = Zd = 1 Z + 1 ˜ 1 ˜ dZ+ < .s. Proof. To show that X satisﬁes the linear SDE. To prove uniqueness.5B(u)2 du + 0 B(u) dW (u) . = (A(t)X(t) + a(t))dt + j=1 (Bj (t)X(t) + bj (t))dWj (t). note that Ito’s formula leads to d 1 Z = 1 (−A + B 2 )dt − BdW . deﬁne Y := X/Z. Z(u) (5) Z(t) = exp 0 A(u) − 0. Z > Z Z ˜ ˜ ˜ Z Z Z (−A + B 2 )dt − BdW + [Adt + BdW ] − B 2 dt = 0 Z Z Z ˜1 ˜ implying Z Z = const. by Ito’s product rule.35 (Variation of Constants) Fix x ∈ I and let A. let X and X be two ˆ ˜ solutions to the linear SDE. Then X := X − X solves the homogeneous SDE ˆ ˆ dX = X[Adt + BdW ] ˆ ˆ with X0 = 0.s. For simplicity. j j=1 0 Then the linear SDE m dX(t) X(0) = x. B. 2 .
Example. In Brownian models the money market account and the stocks are modeled via (i = 1, ..., I)
(6) dS_0(t) = S_0(t) r(t) dt, S_0(0) = 1,
dS_i(t) = S_i(t) [µ_i(t) dt + Σ_{j=1}^m σ_ij(t) dW_j(t)], S_i(0) = s_i0 > 0.
Applying "variation of constants" leads to
S_0(t) = e^{∫_0^t r(s) ds},
S_i(t) = s_i0 exp( ∫_0^t (µ_i(s) − 0.5 Σ_{j=1}^m σ_ij(s)²) ds + Σ_{j=1}^m ∫_0^t σ_ij(s) dW_j(s) ).

4 Continuous-time Market Model and Option Pricing

4.1 Description of the Model

We consider a continuous-time security market as given by (6).

Mathematical Assumptions.
• Recall the standing assumption: (Ω, F, P, {F_t}) is a filtered probability space satisfying the usual conditions.
• {W_t, F_t} is a (multidimensional) Brownian motion of appropriate dimension.
• {F_t} is the Brownian filtration.
• The coefficients r, µ, and σ are progressively measurable processes which are uniformly bounded.

Notation.
• ϕ_i(t): number of shares of stock i held at time t.
• ϕ_0(t)S_0(t): money invested in the money market account at time t.
• X^ϕ(t) := Σ_{i=0}^I ϕ_i(t)S_i(t): wealth of the investor.
• Assumption: x_0 := X(0) = Σ_{i=0}^I ϕ_i(0)S_i(0) ≥ 0 (nonnegative initial wealth).
• π_i(t) := ϕ_i(t)S_i(t)/X(t): proportion of wealth invested in stock i.
• 1 − Σ_i π_i: proportion invested in the money market account.

Definition 4.1 (Trading Strategy, Self-financing, Admissible)
(i) Let ϕ = (ϕ_0, ..., ϕ_I)ᵀ be an R^{I+1}-valued process which is progressively measurable w.r.t. {F_t} and satisfies (P-a.s.)
(7) ∫_0^T |ϕ_0(t)| dt < ∞ and Σ_{i=1}^I ∫_0^T (ϕ_i(t)S_i(t))² dt < ∞.
Then ϕ is said to be a trading strategy.
(ii) If dX(t) = ϕ(t)ᵀ dS(t)
holds, then ϕ is said to be self-financing. Note that every trading strategy satisfies dX(t) = d(ϕ(t)ᵀS(t)); hence, in abstract mathematical terms, self-financing means that ϕ can be pulled in front of the d-operator.
(iii) A self-financing ϕ is said to be admissible if X(t) ≥ 0 for all t ≥ 0.

Remarks.
• The corresponding portfolio process π is said to be self-financing or admissible if ϕ is self-financing or admissible (recall π_i = ϕ_i S_i/X). Note that if we work with π, then we need to require that X(t) > 0 for all t ≥ 0; otherwise, π is not well-defined.
• (7) ensures that all integrals are defined.
• In the following, we always consider admissible trading strategies.

Definition 4.2 (Arbitrage, Option, Replication, Fair Value)
(i) An admissible trading strategy ϕ is an arbitrage opportunity if for some T ≥ 0 (P-a.s.)
X^ϕ(0) = 0, X^ϕ(T) ≥ 0, P(X^ϕ(T) > 0) > 0.
(ii) An F_T-measurable random variable C(T) ≥ 0 is said to be a contingent claim.
(iii) An admissible strategy ϕ is a replication strategy for the claim C(T) if X^ϕ(T) = C(T) P-a.s.
(iv) The fair time-0 value of a claim is defined as
C(0) = inf{p : D(p) ≠ ∅}, where D(x) := {ϕ : ϕ admissible, ϕ replicates C(T), and X^ϕ(0) = x}.

Remarks.
• The strategy in (i) is also known as a "free lottery".
• Compare the definition (ii) with the result of Proposition 2.11.

Economical Assumptions.
• Investors use admissible strategies.
• Investors are price takers.
• All investors model the dynamics of the securities as above. All investors agree upon σ, but may disagree upon µ.
• Since admissible strategies are progressive, investors cannot see into the future (no clairvoyants).
• Frictionless markets.
• No arbitrage (see Definition 4.2 (i)).

Proposition 4.1 (Wealth Equation) The wealth dynamics of a self-financing strategy ϕ are described by the SDE, the so-called wealth equation,
dX(t) = X(t)[(r(t) + π(t)ᵀ(µ(t) − r(t)1)) dt + π(t)ᵀσ(t) dW(t)], X(0) = x_0.
Proof. By definition,
dX = ϕᵀ dS = ϕ_0 S_0 r dt + Σ_{i=1}^I ϕ_i S_i (µ_i dt + Σ_{j=1}^m σ_ij dW_j)
= (1 − Σ_{i=1}^I π_i) X r dt + Σ_{i=1}^I π_i X (µ_i dt + Σ_{j=1}^m σ_ij dW_j)
= X [(r + Σ_{i=1}^I π_i(µ_i − r)) dt + Σ_{j=1}^m Σ_{i=1}^I π_i σ_ij dW_j]. □

Remarks.
• Due to the boundedness of the coefficients, the condition (P-a.s.)
∫_0^t π_i(s)² ds < ∞
is sufficient for the wealth equation to possess a unique solution (by "variation of constants").
• One can directly start with π (instead of ϕ) and require for the wealth equation that
∫_0^t X(s)² π_i(s)² ds < ∞.

4.2 Black-Scholes Approach: Pricing by Replication

We wish to price a European call written on a stock S with strike K, maturity T, and payoff C(T) = max{S(T) − K, 0}, where
dS(t) = S(t)[µ dt + σ dW(t)]
with constants µ and σ. The interest rate r is constant as well.

Assumptions.
• No payouts are made to the stock during the lifetime of the call.
• There exists a C^{1,2}-function f = f(t,s) such that the time-t price of the call is given by C(t) = f(t, S(t)).

By Ito's formula,
dC(t) = f_t(t, S(t)) dt + f_s(t, S(t)) dS(t) + 0.5 f_ss(t, S(t)) d⟨S⟩_t = [f_t + f_s Sµ + 0.5 f_ss S²σ²] dt + f_s Sσ dW.
Consider a self-financing trading strategy ϕ = (ϕ_M, ϕ_S, ϕ_C) in the money market account, the stock, and the call, and choose ϕ_C(t) ≡ −1. Then
dX^ϕ(t) = ϕ_M(t) dM(t) + ϕ_S(t) dS(t) − dC(t)
= ϕ_M rM dt + ϕ_S S(µ dt + σ dW) − [f_t + f_s Sµ + 0.5 f_ss S²σ²] dt − f_s Sσ dW
= [ϕ_M rM + ϕ_S Sµ − f_t − f_s Sµ − 0.5 f_ss S²σ²] dt (∗) + [ϕ_S Sσ − f_s Sσ] dW. (∗∗)
Choosing ϕ_S(t) = f_s(t) leads to (∗∗) = 0. Hence, the strategy is locally riskfree, implying (∗) = X^ϕ(t)r (otherwise there is an arbitrage opportunity involving the money market account). Rewriting (∗) using X^ϕ = ϕ_M M + ϕ_S S − C and ϕ_S = f_s,
(∗) = ϕ_M rM − f_t − 0.5 f_ss S²σ² = r[ϕ_M M + ϕ_S S − C] − r f_s S + rC − f_t − 0.5 f_ss S²σ².
Therefore, the requirement (∗) = X^ϕ r forces
(∗∗∗) −r f_s(t, S(t)) S(t) + r f(t, S(t)) − f_t(t, S(t)) − 0.5 f_ss(t, S(t)) S²(t)σ² = 0.
It follows:

Theorem 4.2 (PDE of Black-Scholes (1973)) In an arbitrage-free market the price function f(t,s) of a European call satisfies the Black-Scholes PDE
f_t(t,s) + r s f_s(t,s) + 0.5 s²σ² f_ss(t,s) − r f(t,s) = 0, (t,s) ∈ [0,T] × R_+,
with terminal condition f(T,s) = max{s − K, 0}.

Remark. We have not used the terminal condition to derive the PDE. Hence, every European (path-independent) option satisfies this PDE; of course, the terminal condition will be different.
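A numerical plausibility check of Theorem 4.2: plug the closed-form call price (stated in Theorem 4.3 below; hard-coded here as a sketch) into the PDE via central finite differences and verify that the residual vanishes up to discretization error. The parameter values are arbitrary:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call(t, s, K=100.0, r=0.05, sigma=0.2, T=1.0):
    # Black-Scholes call price (closed form from Theorem 4.3)
    tau = T - t
    d1 = (math.log(s / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return s * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

t, s, r, sigma, h = 0.3, 110.0, 0.05, 0.2, 1e-3
f_t = (call(t + h, s) - call(t - h, s)) / (2 * h)          # central differences
f_s = (call(t, s + h) - call(t, s - h)) / (2 * h)
f_ss = (call(t, s + h) - 2 * call(t, s) + call(t, s - h)) / h ** 2

# f_t + r s f_s + 0.5 s^2 sigma^2 f_ss - r f should be (numerically) zero
residual = f_t + r * s * f_s + 0.5 * sigma ** 2 * s ** 2 * f_ss - r * call(t, s)
print(abs(residual))
```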
Question: How should we solve this PDE?

1st approach: Merton (1973). Solve the PDE by transforming it into the heat equation u_t = u_xx, which has a well-known solution (see, e.g., Korn/Korn (1999, p. 124)).

2nd approach. Apply the Feynman-Kac theorem (see below), which states that the time-0 solution of the Black-Scholes PDE is given by
C(0) = e^{−rT} E[max{Y(T) − K, 0}],
where Y satisfies the SDE
dY(t) = Y(t)[r dt + σ dW(t)], Y(0) = S(0).
It follows:

Theorem 4.3 (Black-Scholes Formula)
(i) The price of the European call is given by
C(t) = S(t)·N(d_1(t)) − K·N(d_2(t))·e^{−r(T−t)}
with
d_1(t) = [ln(S(t)/K) + (r + 0.5σ²)(T − t)] / (σ√(T − t)), d_2(t) = d_1(t) − σ√(T − t).
(ii) The trading strategy ϕ := (ϕ_M, ϕ_S) with
ϕ_M(t) = [C(t) − f_s(t, S(t))·S(t)] / M(t), ϕ_S(t) = f_s(t, S(t)),
is self-financing and a replication strategy for the call.

Proof. (i) Compute (Exercise)
C(t) = e^{−r(T−t)} E[max{Y(T) − K, 0} | Y(t) = S(t)].
(ii) We have
(8) X^ϕ(t) = ϕ_M(t)·M(t) + ϕ_S(t)·S(t) = C(t).
Hence, (ϕ_M, ϕ_S) replicates the call and is self-financing because
dX^ϕ = dC (by (8))
= [f_t + f_s Sµ + 0.5 f_ss S²σ²] dt + f_s Sσ dW (by Ito)
= [r(C − f_s S) + f_s Sµ] dt + f_s Sσ dW (+)
= [rM ϕ_M + ϕ_S Sµ] dt + ϕ_S Sσ dW (by definition of ϕ)
= ϕ_M dM + ϕ_S dS.
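The replication strategy of Theorem 4.3 (ii) can be illustrated by discrete delta hedging along simulated paths. This is a sketch: rebalancing happens only at 100 dates, so the terminal wealth matches the payoff only up to a discretization error, and the real-world drift µ is an arbitrary choice (it drops out of the price):

```python
import numpy as np
from math import erf, exp, sqrt

Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))  # standard normal cdf

s0, K, r, sigma, T, mu = 100.0, 100.0, 0.05, 0.2, 1.0, 0.08
n_paths, n_steps = 10_000, 100
dt = T / n_steps
rng = np.random.default_rng(6)

def d1(t, S):
    return (np.log(S / K) + (r + 0.5 * sigma ** 2) * (T - t)) / (sigma * np.sqrt(T - t))

# initial call price from the Black-Scholes formula
C0 = s0 * Phi(d1(0.0, s0)) - K * exp(-r * T) * Phi(d1(0.0, s0) - sigma * sqrt(T))

S = np.full(n_paths, s0)
X = np.full(n_paths, float(C0))             # wealth starts at the call price
for k in range(n_steps):
    delta = Phi(d1(k * dt, S))              # phi_S(t) = f_s(t, S(t)) = N(d1(t))
    dW = rng.normal(0.0, sqrt(dt), n_paths)
    dS = S * (mu * dt + sigma * dW)
    X = X + delta * dS + (X - delta * S) * r * dt   # self-financing update
    S = S + dS

payoff = np.maximum(S - K, 0.0)
print(float(C0), float(np.mean(np.abs(X - payoff))))   # price, small hedging error
```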
Note: (+) holds due to the Black-Scholes PDE, which gives f_t + 0.5 f_ss S²σ² = r[C − f_s S]. □

Remark. Since (ϕ^M, ϕ^S) is self-financing, the strategy (ϕ^M, ϕ^S, ϕ^C) with ϕ^C = −1 is self-financing as well, because d(ϕ^C(t)C(t)) = d(−C(t)) = −dC(t) = ϕ^C(t)dC(t).

Theorem 4.4 (Feynman-Kac Representation) Let β and γ be functions of (t,x) ∈ [0,∞) × ℝ. Under technical conditions (see Korn/Korn (1999, p. 136)), there exists a unique C^{1,2} solution F = F(t,x) of the PDE

(9) F_t(t,x) + β(t,x)F_x(t,x) + 0.5γ²(t,x)F_xx(t,x) − rF(t,x) = 0

with terminal condition F(T,x) = h(x), possessing the representation

F(t₀, x) = e^{−r(T−t₀)} E[h(X(T)) | X(t₀) = x],

where X satisfies the SDE dX(s) = β(s,X(s))ds + γ(s,X(s))dW(s) with initial condition X(t₀) = x.

Sketch of Proof. Suppose that there exists a function F satisfying the PDE.

(i) Firstly assume r = 0. Since F is a C^{1,2} function, we can apply Ito's formula:

dF = F_t dt + F_x dX + 0.5 F_xx d<X>.

Since d<X> = γ² dt, we get

dF = F_t dt + F_x [β dt + γ dW] + 0.5γ²F_xx dt = [F_t + βF_x + 0.5γ²F_xx] dt + γF_x dW = γF_x dW,

because F_t + βF_x + 0.5γ²F_xx = 0. In integral notation,

F(s, X(s)) = F(t₀, X(t₀)) + ∫_{t₀}^{s} γ(u,X(u))F_x(u,X(u)) dW(u).

If E[∫_{t₀}^{s} γ(u,X_u)² F_x(u,X_u)² du] < ∞, the stochastic integral has zero expectation and thus

F(t₀, x) = E[F(s, X(s)) | X(t₀) = x].
Choose s = T and note that F(T, X(T)) = h(X(T)), i.e. F(t₀,x) = E[h(X(T)) | X(t₀) = x].

(ii) For r ≠ 0, multiply the PDE (9) by e^{−rt}:

F_t(t,x)e^{−rt} + β(t,x)F_x(t,x)e^{−rt} + 0.5γ²(t,x)F_xx(t,x)e^{−rt} − rF(t,x)e^{−rt} = 0.

Set G(t,x) := F(t,x)e^{−rt}. Since G_t(t,x) = F_t(t,x)e^{−rt} − rF(t,x)e^{−rt}, we get

G_t(t,x) + β(t,x)G_x(t,x) + 0.5γ²(t,x)G_xx(t,x) = 0

with terminal condition G(T,x) = e^{−rT}h(x). Now we can apply (i):

G(t₀,x) = E[e^{−rT}h(X(T)) | X(t₀) = x].

Substituting G(t₀,x) = e^{−rt₀}F(t₀,x) gives the desired result. □

Remarks.
• The Feynman-Kac representation can be generalized to a multidimensional setting, i.e. X a multidimensional Ito process, with stochastic short rate r = r(X).
• To solve the Black-Scholes PDE, choose β(t,x) = rx and γ(t,x) = xσ.
• Two crucial points are not proved: Firstly, we have not checked under which conditions a C^{1,2} function F exists. Secondly, we have not checked under which conditions γ(·,X)F_x(·,X) ∈ L²[0,T].

4.3 Pricing with Deflators

In a single-period model the deflator is given by (see remark to Proposition 2.7)

H = (1/(1+r)) · dQ/dP.

This motivates the definition of the deflator in continuous time:

H(t) := Z(t)/M(t),

where

M(t) := exp(∫₀ᵗ r(s) ds),   Z(t) := exp(−0.5 ∫₀ᵗ ||θ(s)||² ds − ∫₀ᵗ θ(s)' dW(s)).

The process θ is implicitly defined as the solution to the following system of linear equations:

(10) σ(t)θ(t) = µ(t) − r(t)1.

Applying Ito's product rule yields the following SDE for H:

dH(t) = d(Z(t)M(t)⁻¹) = −H(t)[r(t)dt + θ(t)' dW(t)].
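Anticipating the pricing result of the next theorem, the deflator can already be used for pricing under the physical measure: for constant coefficients, H(T) = e^{−rT}Z(T) is an explicit function of W(T), and a Monte Carlo sketch (illustrative parameters and seed) checks that E[H(T)·max{S(T) − K, 0}] reproduces the Black-Scholes price.

```python
import numpy as np
from math import log, sqrt, exp, erf

def N(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

S0, K, r, mu, sigma, T = 100.0, 100.0, 0.05, 0.10, 0.2, 1.0
theta = (mu - r) / sigma                     # market price of risk, equation (10)
rng = np.random.default_rng(1)
n = 400_000
WT = sqrt(T) * rng.standard_normal(n)

ST = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * WT)   # stock under P
ZT = np.exp(-0.5 * theta**2 * T - theta * WT)              # density part Z(T)
HT = np.exp(-r * T) * ZT                                   # deflator H(T) = Z(T)/M(T)
mc_price = float(np.mean(HT * np.maximum(ST - K, 0.0)))

d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * N(d1) - K * N(d2) * exp(-r * T)
print(mc_price, bs_price)   # the two prices should be close
```

Note that the simulation uses the drift µ of the physical measure; the deflator weights H(T) remove the effect of µ, as the theory predicts.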
Remarks.
• In this section we are going to work under the physical measure. Nevertheless, we will see later on that Z can be interpreted as a density inducing a change of measure (Girsanov's theorem).
• The process θ is said to be the market price of risk and can be interpreted as a risk premium (excess return per unit of volatility).
• The system of equations (10) need not have a (unique) solution.

The following theorem shows that H has indeed the desired properties:

Theorem 4.5 (Arbitrage-free Pricing with Deflator and Completeness)
(i) Assume that (10) has a bounded solution for θ and let ϕ be an admissible portfolio strategy. Then {H(t)X^ϕ(t)}_{t∈[0,T]} is both a local martingale and a supermartingale, implying

(11) E[H(t)X^ϕ(t)] ≤ X^ϕ(0) for all t ∈ [0,T].

Hence, the market is arbitrage-free.
(ii) Assume that I = m ("# stocks = # sources of risk") and that σ is uniformly positive definite.¹⁷ Then (10) has a unique solution which is uniformly bounded. Besides, let C(T) be an F_T-measurable contingent claim with

(12) x := E[H(T)C(T)] < ∞.

Then there exists a P ⊗ Lebesgue-unique portfolio process ϕ such that ϕ is a replication strategy for C(T), i.e. X^ϕ(T) = C(T) (P-a.s.) and X^ϕ(0) = x. Furthermore, the process {H(t)C(t)}_{t∈[0,T]} is a martingale, and the time-t price of the claim equals

C(t) = E[H(T)C(T) | F_t] / H(t).

Hence, the market is complete.

Remarks.
• Compare (ii) with Theorem 2.13.
• The price in (ii) is thus equal to the expected discounted payoff under the physical measure, where we discount with the state-dependent discount factor H, the deflator.
• The boundedness assumption in (i) can be weakened.

Proof. (i) Applying Ito's product rule, one gets for an admissible ϕ = (ϕ⁰, ϕ¹):¹⁸

d(HX^ϕ) = H dX^ϕ + X^ϕ dH + d<H, X^ϕ>

17 The process σ is uniformly positive definite if there exists a constant K > 0 such that x'σ(t)σ(t)'x ≥ K x'x for all x ∈ ℝ^m and all t ∈ [0,T]. This requirement implies that σ(t) is regular.
18 Note θσ = µ − r; for simplicity, I = 1.
= HX^ϕ r dt + Hϕ¹S(µ−r) dt + Hϕ¹Sσ dW − X^ϕ Hr dt − X^ϕ Hθ dW − Hϕ¹Sθσ dt
(13) = H(ϕ¹Sσ − X^ϕ θ) dW.

Since ϕ is admissible, we have X^ϕ(t) ≥ 0. By definition, H(t) ≥ 0 and thus H(t)X^ϕ(t) ≥ 0. By Proposition 3.25, HX^ϕ is a supermartingale, which proves (11). Note that an arbitrage strategy would violate this supermartingale property.

(ii) Clearly, σ is uniformly positive definite, i.e. there exists a K_σ > 0 such that σ(t) ≥ K_σ for all t ∈ [0,T], and θ = (µ−r)/σ is unique and uniformly bounded by some K_θ > 0.¹⁹ Define

X(t) := E[H(T)C(T) | F_t] / H(t).

By definition, X(t) is adapted, X(t) ≥ 0, X(0) = x,²⁰ and X(T) = C(T) (P-a.s.). Due to (12),

M(t) := H(t)X(t) = E[H(T)C(T) | F_t]

is a martingale with M(0) = x. Applying the martingale representation theorem, there exists an {F_t}-progressively measurable real-valued process ψ with ∫₀ᵀ ψ(s)² ds < ∞ (P-a.s.) such that

(14) H(t)X(t) = M(t) = x + ∫₀ᵗ ψ(s) dW(s).

Since H and ∫₀· ψ(s) dW(s) are continuous, we can choose X to be continuous. Comparing (13) and (14) yields²¹

H(ϕ¹Sσ − Xθ) = ψ,

and solving for ϕ¹ gives²²

ϕ¹ = [ψ/H + Xθ] / (Sσ).

Since M > 0, H > 0, and X ≥ 0 are continuous, we have H(ω) ≥ K_H(ω) > 0, M(ω) ≥ K_M(ω) > 0, and X(ω) ≤ K_X(ω) < ∞ for a.a. ω ∈ Ω, i.e. M as well as H are pathwise bounded away from zero and X is pathwise bounded from above. Hence, ϕ¹ is part of a trading strategy because²³

∫₀ᵀ (ϕ¹(s)S(s))² ds = ∫₀ᵀ ([ψ(s)/H(s) + X(s)θ(s)] / σ(s))² ds ≤ (2/(K_H² K_σ²)) ∫₀ᵀ ψ(s)² ds + (2/K_σ²) K_X² K_θ² T < ∞.

Since X^ϕ = ϕ⁰M + ϕ¹S for any strategy ϕ, we define

ϕ⁰ := (X − ϕ¹S)/M,

which implies that ϕ is self-financing and X = X^ϕ.

19 Note that all coefficients are bounded and σ is uniformly positive definite.
20 Since F₀ is the trivial σ-algebra augmented by null sets, every F₀-measurable random variable is a.s. constant and the conditional expected value is equal to the unconditional expected value.
21 Note that the representation of an Ito process is unique.
22 By assumption, σ is uniformly positive definite.
23 (a + b)² ≤ 2a² + 2b².
Besides,

∫₀ᵀ |ϕ⁰(s)| ds = ∫₀ᵀ |X(s) − ϕ¹(s)S(s)| / M(s) ds ≤ (1/K_M) [K_X T + ∫₀ᵀ (1 + (ϕ¹(s)S(s))²) ds] < ∞

for P-a.a. ω, and ψ is P ⊗ Lebesgue-unique. The rest is obvious. □

Theorem 4.6 (Martingale Representation Theorem) Let {M_t}_{t∈[0,T]} be a real-valued process adapted to the Brownian filtration.
(i) If M is a square-integrable martingale, i.e. E[M_t²] < ∞ for all t ∈ [0,T], then there exists a progressively measurable ℝ^m-valued process {ψ_t}_{t∈[0,T]} with

(15) E[∫₀ᵀ ||ψ_s||² ds] < ∞

and

M_t = M₀ + ∫₀ᵗ ψ_s' dW_s.

(ii) If M is a local martingale, then the result of (i) holds with ∫₀ᵀ ||ψ_s||² ds < ∞ P-a.s. instead of (15).
In both cases, ψ is P ⊗ Lebesgue-unique.

Proof. See Korn/Korn (1999, pp. 81ff). □

Remarks.
• All Brownian (local) martingales are stochastic integrals and vice versa!
• Since Ito integrals are local Brownian martingales, the quadratic (co)variation defined below exists for all (local) Brownian martingales.

Proposition and Definition 4.7 (Quadratic (Co)variation)
(i) Let X, Y be Brownian local martingales, i.e. dX = α dW and dY = β dW with progressively measurable processes α and β. The quadratic covariation <X,Y> is the unique adapted, continuous process of bounded variation with <X,Y>₀ = 0 such that XY − <X,Y> is a Brownian local martingale. If X = Y, this process is nondecreasing.
(ii) Let (Ω, G, P, {G_t}) be a filtered probability space satisfying the usual conditions and let X, Y be continuous local martingales for the filtration {G_t}. Then the result of (i) holds with XY − <X,Y> being a local martingale for {G_t}.

Proof. (i) Ito's product rule yields

d(XY) = Y dX + X dY + d<X,Y>   with   d<X,Y> = αβ dt,

i.e. XY − <X,Y> = ∫(Yα + Xβ) dW is a Brownian local martingale.
(ii) See Karatzas/Shreve (1991, p. 36). □

Remark. One can show that if X and Y are square-integrable, then XY − <X,Y> is a martingale.

4.4 Pricing with Risk-neutral Measures

The deflator is defined as H(t) = Z(t)/M(t), where

(16) dZ(t) = −Z(t)θ(t)' dW(t),   Z(0) = 1,

is a local martingale. By variation of constants, we get

(17) Z(t) = exp(−0.5 ∫₀ᵗ ||θ(s)||² ds − ∫₀ᵗ θ(s)' dW(s)) ≥ 0.

By Proposition 3.25, Z is thus a supermartingale. If it is even a martingale, we have E[Z(T)] = 1 and we can define a probability measure on F_T by

Q(A) := E[1_A Z(T)],   A ∈ F_T.

Hence, Z(T) is the density of Q and we can rewrite the result from Theorem 4.5 as

C(0) = E[H(T)C(T)] = E[Z(T)·C(T)/M(T)] = E_Q[C(T)/M(T)],

i.e. the price of a claim equals its expected discounted payoff, where we discount with the money market account (instead of the deflator) and the expectation is calculated w.r.t. the measure Q (instead of the physical measure).

Two important questions arise:
Q1. Under which conditions is Z a martingale? (Novikov condition)
Q2. What happens to the dynamics of the assets if we change to the measure Q? (Girsanov's theorem)

Proposition 4.8 Let θ be progressive and K > 0. If ∫₀ᵀ ||θ(s)||² ds ≤ K, then the process {Z(t)}_{t∈[0,T]} defined by (16) is a martingale.

Proof. For simplicity, m = 1. It is sufficient to show that E[Z(T)] = 1. Define

τ_n := inf{ t ≥ 0 : |∫₀ᵗ θ(s) dW(s)| ≥ n }.
Due to (17), we get Z(T∧τ_n)² ≤ e^{2n} and thus ∫₀^{T∧τ_n} Z(s)²θ(s)² ds ≤ e^{2n} K, so the stochastic integral in (16), stopped at τ_n, is a martingale, and we obtain E[Z(T∧τ_n)] = 1 for fixed n ∈ ℕ. Therefore, the claim is proved if {Z(T∧τ_n)}_{n∈ℕ} is UI, because in this case

lim_{n→∞} E[Z(T∧τ_n)] = E[Z(T)].

We obtain the estimate

Z(T∧τ_n)^{1+ε} ≤ exp(0.5(ε+ε²) ∫₀ᵀ θ(s)² ds) · Y(T∧τ_n),

where dY(t) = −Y(t)(1+ε)θ(t) dW(t), Y(0) = 1. With the same arguments leading to E[Z(T∧τ_n)] = 1, we conclude E[Y(T∧τ_n)] = 1. Therefore,

sup_{n∈ℕ} E[Z(T∧τ_n)^{1+ε}] < ∞,

since the first factor above is independent of n and bounded by assumption, which, by Proposition 3.19, proves UI of {Z(T∧τ_n)}_{n∈ℕ}. □

Remark. The requirement in the proposition is a strong one. It can be replaced by the Novikov condition (see Karatzas/Shreve (1991, pp. 198ff)):

E[exp(0.5 ∫₀ᵀ ||θ(s)||² ds)] < ∞.

We turn to the second question and define (Q := Q_T)

(18) Q_t(A) := E[1_A Z(t)],   A ∈ F_t, t ∈ [0,T].

The following lemma shows that Q_t is well-defined.

Lemma 4.9 For a bounded stopping time τ ∈ [0,T] and A ∈ F_τ, we get Q_T(A) = Q_τ(A).

Proof. The optional sampling theorem (OS) yields (see Theorem 3.17)

Q_T(A) = E[1_A Z(T)] = E[E[1_A Z(T) | F_τ]] = E[1_A E[Z(T) | F_τ]] (OS)= E[1_A Z(τ)] = Q_τ(A). □

To prove Girsanov's theorem, we need some additional tools.

Theorem 4.10 (Levy's Characterization of Brownian Motion) An m-dimensional stochastic process X with X(0) = 0 is an m-dimensional Brownian motion if and only if it is a continuous local martingale with²⁴

<X_j, X_k>_t = δ_jk t.

24 Note that δ_jk = 1_{j=k} is the Kronecker delta.
Proof. Clearly, a Brownian motion is a continuous local martingale with <W>_t = t. To prove the converse, let m = 1 for simplicity, fix u ∈ ℝ and set f(t,x) := exp(iux + 0.5u²t), where i = √(−1). Since f ∈ C^{1,2}, a complex-valued version of Ito's formula yields for Z_t := f(t, X_t):

dZ = f_t dt + f_x dX + 0.5 f_xx d<X> = 0.5u²f dt + iuf dX − 0.5u²f d<X> = iuf dX,

using <X>_t = t. Hence, Z is a (complex-valued) local martingale²⁵ which is bounded on [0,T] for every T ∈ ℝ, and thus a martingale, implying

E[exp(iuX_t + 0.5u²t) | F_s] = exp(iuX_s + 0.5u²s),

i.e.

E[exp(iu(X_t − X_s)) | F_s] = exp(−0.5u²(t−s)).

Since this holds for any u ∈ ℝ, the difference X_t − X_s ∼ N(0, t−s) and X_t − X_s is independent of F_s. □

25 Since an integral w.r.t. a local martingale is a local martingale as well.

Lemma 4.11 (Bayes Rule) Let µ and ν be two probability measures on a measurable space (Ω, G) such that dν/dµ = f for some f ∈ L¹(µ). Let X be a random variable on (Ω, G) such that E_ν[|X|] < ∞, and let H be a σ-field with H ⊂ G. Then

E_ν[X | H] · E_µ[f | H] = E_µ[fX | H] a.s.

Proof. Let H ∈ H. We start with the left-hand side:

∫_H E_µ[fX | H] dµ = ∫_H Xf dµ = ∫_H X dν = ∫_H E_ν[X | H] dν = ∫_H E_ν[X | H] f dµ
= E_µ[E_ν[X | H]·f·1_H] = E_µ[E_µ[E_ν[X | H]·f·1_H | H]] = E_µ[1_H E_ν[X | H]·E_µ[f | H]] = ∫_H E_ν[X | H]·E_µ[f | H] dµ. □

Lemma 4.12 Let {Y_t, F_t}_{t∈[0,T]} be a right-continuous adapted process. The following statements are equivalent:
(i) {Z_t Y_t, F_t}_{t∈[0,T]} is a local P-martingale.
(ii) {Y_t, F_t}_{t∈[0,T]} is a local Q_T-martingale.
Proof. Assume (i) to hold and let {τ_n}_n be a localizing sequence of ZY, i.e. E_P[Z_{t∧τ_n}Y_{t∧τ_n} | F_s] = Z_{s∧τ_n}Y_{s∧τ_n}. Then²⁶

E_{Q_T}[Y_{t∧τ_n} | F_s] (Lem 4.9)= E_{Q_{t∧τ_n}}[Y_{t∧τ_n} | F_s] (Bayes)= E_P[Z_{t∧τ_n}Y_{t∧τ_n} | F_s] / E_P[Z_{t∧τ_n} | F_s] = Z_{s∧τ_n}Y_{s∧τ_n} / Z_{s∧τ_n} = Y_{s∧τ_n}.

The converse implication can be proved similarly (exercise). □

26 Note that Y_{t∧τ_n} is F_{t∧τ_n}-measurable and F_{t∧τ_n} ⊂ F_T. Lemma 4.9 ensures that both measures coincide on F_{t∧τ_n}, i.e. the conditional expected value can be computed w.r.t. Q_{t∧τ_n} instead of Q_T. Besides, E_P[Z_{t∧τ_n} | F_s] = Z_{s∧τ_n}, i.e. Bayes rule can be applied.

Theorem 4.13 (Girsanov's Theorem) We assume that Z is a martingale and define the process {W^Q(t), F_t} by

W_j^Q(t) := W_j(t) + ∫₀ᵗ θ_j(s) ds,   t ≥ 0, j ∈ {1,…,m}.

For fixed T ∈ [0,∞), the process {W^Q(t), F_t}_{t∈[0,T]} is an m-dimensional Brownian motion on (Ω, F_T, Q_T), where Q_T is defined by (18).

Proof. For simplicity, m = 1. By Levy's characterization, we need to show that
(i) {W^Q(t), F_t}_{t∈[0,T]} is a continuous local martingale on (Ω, F_T, Q_T), and
(ii) <W^Q>_t = t on (Ω, F_T, Q_T).

ad (i). Ito's product rule yields

d(ZW^Q) = Z dW^Q + W^Q dZ + d<W^Q, Z> = Z dW + Zθ dt − W^Q Zθ dW − Zθ dt = (Z − W^Q Zθ) dW,

i.e. ZW^Q is a local P-martingale. By Lemma 4.12, the process W^Q is a local Q_T-martingale as well.

ad (ii). Ito's product rule leads to

d(Z·((W^Q)² − t)) = d(W^Q·(ZW^Q)) − d(Z·t)
= W^Q d(ZW^Q) + ZW^Q dW^Q + d<ZW^Q, W^Q> − Z dt − t dZ
= W^Q d(ZW^Q) + ZW^Q(θ dt + dW) + (Z − W^Q Zθ) dt − Z dt + tθZ dW
= W^Q d(ZW^Q) + Z(W^Q + tθ) dW,

i.e. {Z_t·((W_t^Q)² − t)} is a local P-martingale. Applying Lemma 4.12 again yields that {(W_t^Q)² − t} is a local Q_T-martingale, i.e., by Proposition 4.7 (ii), <W^Q>_t = t, and claim (ii) follows. □

Corollary 4.14 (Girsanov II) If {M_t}_{t∈[0,T]} is a Brownian local P-martingale and

D_t := ∫₀ᵗ (1/Z_u) d<Z, M>_u,
then {M_t − D_t}_{t∈[0,T]} is a continuous local Q_T-martingale and <M−D>^Q = <M>, i.e. the quadratic variations of M−D under Q_T and of M under P coincide.

Proof. By the martingale representation theorem, we have dM = α dW for some progressive α. Hence,

dD = (1/Z) d<Z, M> = (1/Z)(−θZα) dt = −θα dt

and thus

d(M − D) = α dW + θα dt = α dW^Q,

i.e. M − D is a local Q-martingale. Besides,

d<M−D>_t^Q = α² dt = d<M>_t,

which completes the proof. □

Theorem 4.15 (Converse Girsanov) Let Q be a probability measure equivalent to P on F_T. Then the density process {Z(t), F_t}_{t∈[0,T]} defined by

Z(t) := dQ/dP |_{F_t}

is a positive Brownian P-martingale possessing the representation

dZ(t) = −Z(t)θ(t)' dW(t),   Z(0) = 1,

for a progressively measurable m-dimensional process θ with

(19) ∫₀ᵀ ||θ(s)||² ds < ∞,   P-a.s.

Proof. Since Q and P are equivalent on F_T, both measures are equivalent on F_t, and, by Lemma 4.9, for A ∈ F_t

Q(A) = ∫_A dQ/dP |_{F_T} dP = ∫_A Z(T) dP = ∫_A Z(t) dP,

i.e. Z(t) = E[Z(T) | F_t], implying that Z is a Brownian martingale. Besides, Z(t) > 0 for all t ∈ [0,T]; since Z is continuous, we even have Z(ω) ≥ K_Z(ω) > 0 for a.a. ω ∈ Ω. By the martingale representation theorem,

Z(t) = 1 + ∫₀ᵗ ψ(s)' dW(s)

with a progressive process ψ satisfying ∫₀ᵀ ||ψ(s)||² ds < ∞ P-a.s. Since Z > 0, defining

θ(t) := −ψ(t)/Z(t)

thus implies (19), which completes the proof. □
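Girsanov's theorem can be illustrated numerically for a constant θ (illustrative values and seed): reweighting by Z(T) makes the shifted variable W^Q(T) = W(T) + θT behave like a Brownian motion at time T, i.e. its Q-mean is 0 and its Q-second moment is T.

```python
import numpy as np

theta, T, n = 0.4, 1.0, 500_000
rng = np.random.default_rng(2)
WT = np.sqrt(T) * rng.standard_normal(n)   # W(T) under P

ZT = np.exp(-0.5 * theta**2 * T - theta * WT)  # Girsanov density Z(T)
WQ = WT + theta * T                            # W^Q(T) = W(T) + theta*T

print(ZT.mean())            # E_P[Z(T)] = 1, since Z is a martingale
print(np.mean(ZT * WQ))     # E_Q[W^Q(T)] = 0
print(np.mean(ZT * WQ**2))  # E_Q[W^Q(T)^2] = T
```

All three quantities match their theoretical values up to Monte Carlo noise; under P alone, by contrast, W^Q(T) has mean θT.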
Definition 4.3 (Doleans-Dade Exponential) For an Ito process X with X₀ = 0, the Doleans-Dade exponential (stochastic exponential) of X, written E(X), is the unique Ito process Y that solves

Y_t = 1 + ∫₀ᵗ Y_s dX_s.

Remarks.
• One can show that E(X)E(Y) = E(X + Y + <X,Y>) and that (E(X))⁻¹ = E(−X + <X>).
• Therefore, by (16), the density Z is sometimes written as E(−∫₀· θ(s)' dW(s)).

Definition 4.4 (Equivalent Martingale Measure) A probability measure Q is a (weak) equivalent martingale measure if
(i) Q ∼ P and
(ii) all discounted price processes S_i/M, i = 1,…,I, are (local) Q-martingales.
An equivalent martingale measure (EMM) is also said to be a risk-neutral measure.

We now come back to our pricing problem. Recall our definitions

H(t) := Z(t)/M(t),   M(t) := exp(∫₀ᵗ r(s) ds),   Z(t) := exp(−0.5 ∫₀ᵗ ||θ(s)||² ds − ∫₀ᵗ θ(s)' dW(s)),

where θ is implicitly defined by

(20) σ(t)θ(t) = µ(t) − r(t)1.

For bounded θ, Z(t) = dQ/dP |_{F_t} is a density process for the measure Q_t(A) = E[Z(t)1_A] and, by Girsanov's theorem, {W^Q(t), F_t} defined by

W_j^Q(t) := W_j(t) + ∫₀ᵗ θ_j(s) ds,   t ≥ 0,

is a Q-Brownian motion. We set Q := Q_T.

Proposition 4.16 If Q is a weak EMM and

(21) E[exp(0.5 ∫₀ᵀ ||σ_i(s)||² ds)] < ∞ for every i ∈ {1,…,I},

then Q is already an EMM.
Proof. Define Y_i := S_i/M. Since Q is a weak martingale measure, Y_i is a local Q-martingale with dY_i = Y_i σ_i' dW^Q. By the Novikov condition (21), this is a Q-martingale. □

Theorem 4.17 (Characterization of EMM) In the Brownian market (6), the following are equivalent:
(i) The probability measure Q is an EMM.
(ii) There exists a solution θ to the system of equations (20) such that the process Z = E(−∫₀· θ(s)' dW(s)) is a Girsanov density inducing the probability measure Q.

Proof. Define Y_i := S_i/M.

Assume (ii) to hold. Then Q is equivalent to P and dW^Q = dW + θ dt is a Q-Brownian increment. Ito's formula leads to

dY_i = Y_i[(µ_i − r) dt + σ_i' dW] = Y_i[(µ_i − r − σ_i'θ) dt + σ_i' dW^Q] = Y_i σ_i' dW^Q,

which is a Q-martingale increment because σ is assumed to be uniformly bounded. Hence, Q is an EMM.

On the other hand, assume (i) to hold. By the converse Girsanov theorem, there exists a process θ such that dQ/dP = E(−∫₀· θ(s)' dW(s)) and dW^Q = dW + θ dt is a Brownian increment under Q. Since Q is a martingale measure, by the martingale representation theorem,

(22) dY_i = Y_i σ_i' dW^Q.

Besides,

dS_i = S_i[µ_i dt + σ_i' dW] = S_i[(µ_i − σ_i'θ) dt + σ_i' dW^Q],

implying

(23) dY_i = Y_i[(µ_i − σ_i'θ − r) dt + σ_i' dW^Q].

Since the representation of an Ito process is unique, comparing (22) and (23) gives (ii). □

Remark. In particular, under our boundedness assumptions we need not distinguish between EMM and weak EMM.

Corollary 4.18 (Q-Dynamics of the Assets) Under an EMM Q, the dynamics of asset i are given by

dS_i(t) = S_i(t)[r(t) dt + σ_i(t)' dW^Q(t)].

Remark. Therefore, the drift µ does not matter under Q.
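Corollary 4.18 implies in particular that the discounted stock price is a Q-martingale. A simulation directly under Q, with constant illustrative coefficients and a fixed seed, makes this visible:

```python
import numpy as np

S0, r, sigma, T, n = 100.0, 0.05, 0.2, 1.0, 400_000
rng = np.random.default_rng(3)
WQ = np.sqrt(T) * rng.standard_normal(n)   # Brownian motion under Q

# Q-dynamics of Corollary 4.18: dS = S [r dt + sigma dW^Q]
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * WQ)
discounted_mean = float(np.exp(-r * T) * ST.mean())
print(discounted_mean)  # close to S0, since S/M is a Q-martingale
```

Note that the physical drift µ appears nowhere in the simulation, in line with the remark above.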
Theorem 4.19 (Arbitrage-free Pricing under Q and Completeness)
(i) Let Q be an EMM and let ϕ be an admissible portfolio strategy. Then {X^ϕ(t)/M(t)}_{t∈[0,T]} is both a local Q-martingale and a Q-supermartingale, implying

E_Q[X^ϕ(t)/M(t)] ≤ X^ϕ(0) for all t ∈ [0,T].

Hence, the market is arbitrage-free. If additionally

(24) E_Q[∫₀ᵀ (ϕ_i(s)S_i(s)/M(s))² ds] < ∞ for every i = 1,…,I,

then {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-martingale.
(ii) Assume that I = m ("# stocks = # sources of risk") and that σ is uniformly positive definite. Then (10) has a unique solution which is uniformly bounded, implying that there exists a unique EMM Q. Besides, let C(T) be an F_T-measurable contingent claim with

x := E_Q[C(T)/M(T)] < ∞.

Then the time-t price of the claim equals

C(t) = M(t) E_Q[C(T)/M(T) | F_t],

and the market is complete.

Proof. (i) Consider

dX^ϕ = ϕ' dS = Xr dt + Σ_{i=1}^I ϕ_i S_i σ_i' dW^Q.

Since d(1/M) = −(1/M)r dt,

d(X/M) = X d(1/M) + (1/M) dX = (1/M) Σ_{i=1}^I ϕ_i S_i σ_i' dW^Q,

which is a positive local Q-martingale and thus a Q-supermartingale. If (24) holds, it is a Q-martingale.

(ii) Since θ is unique and bounded, the EMM Q is unique; this follows from Theorem 4.17. The claim price follows from Theorem 4.5 because H = Z/M. □

Remarks.
• An EMM exists if, for instance, θ is bounded; this follows from Theorem 4.17.
• If ϕ is uniformly bounded, then (24) holds because µ, σ, and r are assumed to be uniformly bounded.
• If the market is incomplete, then there exists more than one market price of risk. Let θ and θ̃ be two of them and Q and Q̃ the corresponding EMM. If (24) is satisfied, then for an admissible strategy ϕ we get

M(t) E_Q[X^ϕ(T)/M(T) | F_t] = X^ϕ(t) = M(t) E_Q̃[X^ϕ(T)/M(T) | F_t].
This implies that any EMM assigns the same value to a replicable claim, namely its replication costs. However, the prices of non-replicable claims may differ under different EMM. This is the reason why pricing in incomplete markets is so complicated and in some sense arbitrary.

Corollary 4.20 (Black-Scholes Again) The price of the European call is given by

C(t) = S(t)·N(d₁(t)) − K·N(d₂(t))·e^{−r(T−t)}

with

d₁(t) = [ln(S(t)/K) + (r + 0.5σ²)(T−t)] / (σ√(T−t)),   d₂(t) = d₁(t) − σ√(T−t).

Proof. By Theorem 4.19,

C(t) = E_Q[max{S(T) − K, 0} | F_t] e^{−r(T−t)},

where due to Corollary 4.18

dS(t) = S(t)[r dt + σ dW^Q(t)].

Thus S(T) has the same distribution as Y(T) in the proof of Theorem 4.3, and the result follows. □

Corollary 4.21 (Q-Dynamics of Claims) Under the assumptions of Theorem 4.19, the dynamics of a replicable claim C(T) are given by

(25) dC(t) = C(t)[r(t) dt + ψ(t)' dW^Q(t)]

with a suitable progressively measurable process ψ.

Proof. By Theorem 4.19, C/M is a (local) Q-martingale, and thus (25) follows from the martingale representation theorem. □

Proposition 4.22 (Black-Scholes PDE Again and Hedging) Under the assumptions of Theorem 4.19, let C(T) be a replicable claim with C(t) = f(t, S(t)) and C(T) = h(S(T)), where f is a C^{1,2} function.
(i) The pricing function f satisfies the following multidimensional Black-Scholes PDE:

f_t + Σ_{i=1}^I f_i s_i r + 0.5 Σ_{i,k=1}^I Σ_{j=1}^m s_i s_k σ_{ij} σ_{kj} f_{ik} − rf = 0,   s ∈ ℝ₊^I,   f(T,s) = h(s),

where f_i denotes the partial derivative w.r.t. the i-th asset price.
(ii) A replication strategy for C(T) is given by

ϕ_i(t) = f_i(t, S(t)),   i = 1,…,I,   ϕ^M(t) = (C(t) − Σ_{i=1}^I ϕ_i(t)S_i(t))/M(t),

i.e. the remaining funds C(t) − Σ_{i=1}^I ϕ_i(t)S_i(t) are invested in the money market account.

Proof. (i) Ito's formula yields

(26) dC = [f_t + Σ_{i=1}^I f_i S_i r + 0.5 Σ_{i,k=1}^I Σ_{j=1}^m f_{ik} S_i S_k σ_{ij} σ_{kj}] dt + Σ_{i=1}^I Σ_{j=1}^m f_i S_i σ_{ij} dW_j^Q,

where due to Theorem 4.19 (ii) and (25) the drift must equal rC. Comparing the drift with the drift of (25) yields the Black-Scholes PDE.

(ii) Since C(T) is attainable, there exists a replication strategy ϕ with X^ϕ(T) = C(T) and Q-dynamics

dX^ϕ = X^ϕ r dt + Σ_{i=1}^I Σ_{j=1}^m ϕ_i S_i σ_{ij} dW_j^Q.²⁷

Comparing the diffusion part with the diffusion part of (26) shows that one can choose ϕ_i = f_i. □

Remarks.
• The strategy in (ii) is said to be a delta hedging strategy.
• Under the conditions of Theorem 4.19 (ii), the delta hedging strategy is the unique replication strategy, since the representation of an Ito process is unique.
• If I > m ("# stocks > # sources of risk"), then in general there exist further replication strategies, i.e. the choice ϕ_i = f_i in the proof of (ii) is not the only possible one.
• In the Black-Scholes model, we get for the European call (see Exercise 29): ϕ^S = N(d₁). Due to the put-call parity, the delta hedge for the put reads ϕ^S = −N(−d₁).

In Theorem 4.19, we have proved that the following implications hold:
1) Existence of an EMM ⟹ No arbitrage
2) Uniqueness of the EMM ⟹ Completeness
The following theorems show to which extent the converse is true.

Theorem 4.23 (Existence of a Market Price of Risk) Assume that the market (6) is arbitrage-free. Then there exists a market price of risk, i.e. there exists a progressively measurable process θ solving the system (10).

Proof. Define φ_i := ϕ_i S_i (the amount of money invested in asset i), leading to the following version of the wealth equation:

dX(t) = X(t)r(t) dt + φ(t)'(µ(t) − r(t)1) dt + φ(t)'σ(t) dW(t).
27 The representation of the diffusion part follows from rewriting X π'σ dW^Q in terms of ϕ = πX/S.
Assume now that there exists a self-financing strategy with φ(t)'σ(t) = 0 and φ(t)'(µ(t) − r(t)1) ≠ 0. This strategy would be an arbitrage, since it does not involve risk, but its wealth equation has a drift different from r. Thus, in an arbitrage-free market every vector in the kernel Ker(σ(t)') has to be orthogonal to the vector µ(t) − r(t)1. But the orthogonal complement of Ker(σ(t)') is Range(σ(t)), i.e. µ(t) − r(t)1 has to be in the range of σ(t). Hence, there exists a θ(t) such that

µ(t) − r(t)1 = σ(t)θ(t).

The progressive measurability of θ is proved in Karatzas/Shreve (1999, pp. 12ff). □

Remark. Although a market price of risk exists, nothing is said about whether the corresponding process Z is a martingale, i.e. it is not clear if an EMM exists.

Theorem 4.24 (Uniqueness of EMM) Assume that the market is complete and let Q be an EMM such that {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-martingale for all admissible strategies ϕ with E[X^ϕ(T)/M(T)] < ∞. If Q̃ is another such EMM, then Q = Q̃.

Proof. By Theorem 4.15, both measures are characterized by market prices of risk θ and θ̃ such that

dW_j^Q(t) = dW_j(t) + θ_j(t) dt,   dW_j^Q̃(t) = dW_j(t) + θ̃_j(t) dt.

Fix j ∈ {1,…,m} and define

Y_j(T) := exp(∫₀ᵀ r(s) ds − 0.5T + W_j^Q(T)).

Since the market is complete, there exists an admissible trading strategy ϕ^j such that X^{ϕ^j}(T) = Y_j(T). From Theorem 4.17, we know that the process X^{ϕ^j}/M is a local martingale under Q and Q̃. By assumption, X^{ϕ^j}/M is even a Q-martingale, implying

X^{ϕ^j}(t) = M(t) E_Q[X^{ϕ^j}(T)/M(T) | F_t] = M(t) E_Q[Y_j(T)/M(T) | F_t] = Y_j(t),   P-a.s.,

i.e. Y_j is a modification of X^{ϕ^j}.
Since both processes are continuous, X^{ϕ^j} and Y_j are even indistinguishable. Hence,

Y_j(t)/M(t) = 1 + ∫₀ᵗ (Y_j(s)/M(s)) dW_j^Q(s) = 1 + ∫₀ᵗ (Y_j(s)/M(s)) dW_j^Q̃(s) + ∫₀ᵗ (Y_j(s)/M(s))(θ_j(s) − θ̃_j(s)) ds.

Since Y_j/M is a Brownian local Q̃-martingale, the process Y_j(θ_j − θ̃_j)/M needs to be indistinguishable from zero.²⁸ Since Y_j/M > 0, the market prices of risk θ_j and θ̃_j are indistinguishable. Since this holds for every j, we get Q = Q̃. □

28 Recall that the representation of Ito processes is unique up to indistinguishability.

Remarks.
• Since {X^ϕ(t)/M(t)}_{t∈[0,T]} is a Q-supermartingale, this process is a Q-martingale if, and only if,

E_Q[X^ϕ(T)/M(T)] = X^ϕ(0).

• Obviously, it is sufficient to check whether {X^{ϕ^j}(t)/M(t)}_{t∈[0,T]} is a Q-martingale for every j = 1,…,m.

5 Continuous-time Portfolio Optimization

5.1 Motivation

In the theory of option pricing the following problem is tackled:

Given: payoff C(T).   Wanted: today's price C(0) and hedging strategy ϕ.

In the theory of portfolio optimization the converse problem is considered:

Given: initial wealth X(0).   Wanted: optimal final wealth X*(T) and optimal strategy ϕ*.

Question: What is "optimal"?

1st idea: Maximize expected final wealth, i.e. max_ϕ E[X^ϕ(T)].

Problem: St. Petersburg paradox. Consider a game where a coin is flipped. If head occurs for the first time in the n-th round, then the player gets 2^{n−1} Euros. The expected value of the payoff is

(1/2)·1 + (1/4)·2 + (1/8)·4 + … + (1/2ⁿ)·2^{n−1} + … = 1/2 + 1/2 + 1/2 + … = ∞.

Hence, investors who maximize expected final wealth should be willing to pay ∞ Euros to play this game! Therefore, maximization of expected final wealth is usually not a reasonable criterion.

Question: What is the reason for this result?
Answer: People are risk averse, i.e., for instance, if they can choose between
• 50 Euros for sure, or
• 100 Euros with probability 0.5 and 0 Euros with probability 0.5,
they choose the 50 Euros.

Bernoulli's idea: Measure payoffs in terms of a utility function U(x) = ln(x), i.e.

(1/2)·ln(1) + (1/4)·ln(2) + (1/8)·ln(4) + … + (1/2ⁿ)·ln(2^{n−1}) + … = Σ_{n=1}^∞ (1/2ⁿ)·ln(2^{n−1}) = ln(2)·Σ_{n=1}^∞ (n−1)/2ⁿ = ln(2).
Hence, an investor following this criterion would be willing to pay at most 2 Euros (= e^{ln 2}) to play this game.

This idea can be generalized: For a given initial wealth x₀ > 0, the investor maximizes the utility functional

(27) sup_{π∈A} E[U(X^π(T))],

where A := {π : π admissible and E[U(X^π(T))⁻] < ∞} and

dX^π(t) = X^π(t)[(r(t) + π(t)'(µ(t) − r(t)1)) dt + π(t)'σ(t) dW(t)],   X^π(0) = x₀.

Standing Assumption. We consider a Brownian market (6) with bounded coefficients, and the utility function U has the following properties:

Definition 5.1 (Utility Function) A strictly concave, strictly increasing, and continuously differentiable function U : (0,∞) → ℝ is said to be a utility function if

U'(0) := lim_{x↓0} U'(x) = ∞,   U'(∞) := lim_{x→∞} U'(x) = 0.

Remarks.
• The investor's greed is reflected by U' > 0; loosely speaking: "More is better than less."
• The investor's risk aversion is reflected by the concavity of U, which implies that for any non-constant random variable Z, by Jensen's inequality, E[U(Z)] < U(E[Z]); loosely speaking: "A bird in the hand is worth two in the bush."

There exist two approaches to solve the dynamic optimization problem (27):
1. Martingale approach: Theorem 4.5 ensures that in a complete market any claim can be replicated. ⟹ Determine the optimal wealth via a static optimization problem and interpret this wealth as a contingent claim.
2. Optimal control approach: We know that expectations can be represented as solutions to PDEs (Feynman-Kac theorem). ⟹ Represent the functional (27) as a PDE and solve the PDE.
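The St. Petersburg computations of Section 5.1 can be reproduced with a few lines of code: the truncated expected payoff grows linearly in the number of rounds, while the expected log-utility converges to ln 2, i.e. a certainty equivalent of 2 Euros.

```python
from math import log, exp

# Payoff 2^(n-1) with probability 2^(-n), n = 1, 2, ..., truncated at N_terms
N_terms = 60
expected_payoff = sum(2.0**-n * 2.0**(n - 1) for n in range(1, N_terms + 1))
expected_log_utility = sum(2.0**-n * log(2.0**(n - 1)) for n in range(1, N_terms + 1))

print(expected_payoff)            # = N_terms / 2, diverges as N_terms grows
print(expected_log_utility)       # converges to ln 2
print(exp(expected_log_utility))  # certainty equivalent, about 2 Euros
```

Each additional round adds exactly 1/2 to the expected payoff, while its contribution to the expected log-utility is geometrically small; this is the risk-aversion effect in numbers.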
5.2 Martingale Approach

Assumption in the martingale approach: The market is complete.

The martingale approach consists of a three-step procedure:
1st step: Compute the optimal wealth X*(T) by solving a static optimization problem.
2nd step: Prove that the static optimization problem is equivalent to the dynamic optimization problem (27).
3rd step: Determine the replication strategy for X*(T); in the context of portfolio optimization, it is said to be the optimal portfolio strategy π*.

1st step. Consider the constrained static optimization problem

sup_{X∈C} E[U(X)]

subject to the so-called budget constraint

(BC) E[H(T)X] ≤ x₀,

where

C := {C(T) ≥ 0 : C(T) is F_T-measurable and E[U(C(T))⁻] < ∞}.

The BC states that the present value of terminal wealth cannot exceed the initial wealth x₀ (see Theorem 4.5 (i)).

Heuristic solution. Consider the pointwise Lagrangian

L(X, λ) := U(X) + λ(x₀ − H(T)X).

The first-order condition (FOC) reads

U'(X) − λH(T) = 0,   i.e.   X* = V(λH(T)),

where V := (U')⁻¹ : (0,∞) → (0,∞) is strictly decreasing. Since the investor is greedy, the BC is satisfied as an equality and we can choose λ* such that

(28) Γ(λ) := E[H(T)V(λH(T))] = x₀,

leading to λ* = Λ(x₀) := Γ⁻¹(x₀).

Lemma 5.1 (Properties of Γ) Assume Γ(λ) < ∞ for all λ > 0. Then Γ is
(i) strictly decreasing,
(ii) continuous on (0,∞), and
(iii) Γ(0) := lim_{λ↓0} Γ(λ) = ∞ and Γ(∞) := lim_{λ→∞} Γ(λ) = 0.

Proof. (i) V is strictly decreasing on (0,∞). Since H(t) > 0 for every t ∈ [0,T], the function Γ is also strictly decreasing.
(ii) follows from the continuity of H and V as well as the dominated convergence theorem.²⁹

29 To check continuity at λ₀, choose some λ₁ < λ₀. Then, due to (i), we can use Γ(λ₁) < ∞ as a majorant.
(iii) Since lim_{y↓0} V(y) = ∞, lim_{y→∞} V(y) = 0, and V is strictly decreasing, the first result follows from the monotone convergence theorem and the second from the dominated convergence theorem. □

Remark. Due to Lemma 5.1, Λ(x) = Γ^{-1}(x) exists on (0, ∞) with Λ ≥ 0 and

Λ(0) := lim_{x↓0} Λ(x) = ∞,  Λ(∞) := lim_{x→∞} Λ(x) = 0.

Hence, one can always find a λ such that (28) is satisfied as an equality.

Lemma 5.2 For a utility function U with (U')^{-1} = V we get

U(V(y)) ≥ U(x) + y(V(y) − x)  for 0 < x, y < ∞.

Proof. By concavity of U, a first-order Taylor approximation leads to U(x2) ≤ U(x1) + U'(x1)(x2 − x1) and thus U(x1) ≥ U(x2) + U'(x1)(x1 − x2), implying

U(V(y)) ≥ U(x) + U'(V(y))(V(y) − x) = U(x) + y(V(y) − x). □

Theorem 5.3 (Optimal Final Wealth) Assume x0 > 0 and Γ(λ) < ∞ for all λ > 0. Then the optimal final wealth reads

X* = V(Λ(x0) · H(T)),

and there exists an admissible optimal strategy π* ∈ A such that X^{π*}(0) = x0 and X^{π*}(T) = X*, P-a.s.

Proof. We now show that π* and X* are indeed optimal. By definition of Λ(x0), we get E[H(T)X*] = x0. Besides, X* > 0 because V > 0. We now verify E[U(X*)^-] < ∞. Due to Lemma 5.2,

U(X*) ≥ U(1) + Λ(x0)H(T)(X* − 1).

Therefore,30

E[U(X*)^-] ≤ |U(1)| + Λ(x0) E[H(T)(X* + 1)] ≤ |U(1)| + Λ(x0)(x0 + E[H(T)]) < ∞.

Hence, X* satisfies all constraints and, by Theorem 4.5 (ii), there exists a strategy π* replicating X*, i.e. π* ∈ A.

Consider some π ∈ A. Due to Lemma 5.2,

U(X*) ≥ U(X^π(T)) + Λ(x0)H(T)(X* − X^π(T))

30 Here we use that a ≥ b implies a^- ≤ b^- ≤ |b|.
and thus

E[U(X*)] ≥ E[U(X^π(T))] + Λ(x0) E[H(T)(X* − X^π(T))]
         = E[U(X^π(T))] + Λ(x0)(x0 − E[H(T)X^π(T)])
         ≥ E[U(X^π(T))],

where the last inequality holds because the BC implies x0 − E[H(T)X^π(T)] ≥ 0. This gives the desired result. □

3rd step. To determine the optimal strategy π*, one can apply a similar method as in Proposition 4.22 (ii). This is demonstrated in the following example.

Example "Power Utility with Constant Coefficients". We make the following assumptions:
• Power utility function U(x) = (1/γ)x^γ.
• I = m = 1 ("one stock, one Brownian motion").
• The coefficients r, µ, σ are constant.
• σ > 0, i.e. the market is complete.

Since V(y) = (U')^{-1}(y) = y^{1/(γ−1)}, the optimal final wealth reads

X* = V(Λ(x0)H(T)) = (Λ(x0)H(T))^{1/(γ−1)},

where

(29)  H(T)^{1/(γ−1)} = exp( (1/(1−γ))(r + 0.5θ²)T + (1/(1−γ))θW(T) )

and θ := (µ − r)/σ. Since

Γ(λ) = E[H(T)V(λH(T))] = λ^{1/(γ−1)} E[H(T)^{γ/(γ−1)}] = x0,

we get

Λ(x0)^{1/(γ−1)} = x0 exp( −(γ/(1−γ))rT − 0.5(γ/(1−γ)²)θ²T ).

Therefore,

X* = (Λ(x0)H(T))^{1/(γ−1)} = x0 exp( rT + 0.5((1−2γ)/(1−γ)²)θ²T + (1/(1−γ))θW(T) ).

On the other hand, the final wealth for a strategy π is given by

(30)  X^π(T) = x0 exp( rT + ∫_0^T ((µ−r)π_s − 0.5σ²π_s²) ds + ∫_0^T σπ_s dW_s ).

By comparing the diffusion parts of (29) and (30) we guess the optimal strategy:

π*(t) = (1/(1−γ)) (µ−r)/σ²,

implying

X^{π*}(T) = x0 exp( rT + (1/(1−γ))θ²T − 0.5(1/(1−γ)²)θ²T + (1/(1−γ))θW(T) ).

It is easy to check that 1/(1−γ) − 0.5/(1−γ)² = 0.5(1−2γ)/(1−γ)², hence X* = X^{π*}(T), and due to completeness π* is the unique optimal portfolio strategy. □
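The closed-form identities of this example are easy to check by simulation. The following sketch (all parameter values are illustrative assumptions) solves the budget constraint (28) by bisection on a Monte Carlo estimate of Γ, builds X* = V(λ*H(T)), and verifies path by path that X* coincides with the terminal wealth of the constant Merton fraction π* = (µ−r)/((1−γ)σ²):

```python
import numpy as np

# Sketch of the three-step procedure for this example; parameters are
# illustrative assumptions, not values from the text.
rng = np.random.default_rng(0)
r, mu, sigma, T, gamma, x0 = 0.03, 0.08, 0.2, 1.0, -1.0, 100.0
theta = (mu - r) / sigma                                  # market price of risk
W_T = np.sqrt(T) * rng.standard_normal(200_000)           # Brownian motion at T
H_T = np.exp(-(r + 0.5 * theta**2) * T - theta * W_T)     # pricing kernel H(T)

def V(y):                       # V = (U')^{-1} for U(x) = x^gamma / gamma
    return y ** (1.0 / (gamma - 1.0))

def Gamma(lam):                 # Gamma(lambda) = E[H(T) V(lambda H(T))]
    return float(np.mean(H_T * V(lam * H_T)))

lo, hi = 1e-10, 1e10            # Gamma is strictly decreasing, so bisect
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if Gamma(mid) > x0 else (lo, mid)
lam_star = np.sqrt(lo * hi)

X_star = V(lam_star * H_T)      # optimal terminal wealth, state by state
assert abs(np.mean(H_T * X_star) - x0) < 1e-6 * x0        # BC binds with equality

pi_star = (mu - r) / ((1.0 - gamma) * sigma**2)           # Merton fraction
X_pi = x0 * np.exp(r * T + ((mu - r) * pi_star - 0.5 * sigma**2 * pi_star**2) * T
                   + sigma * pi_star * W_T)               # formula (30) with pi = pi*
ratio = X_star / X_pi
assert np.allclose(ratio, ratio[0])                       # same Brownian exposure
assert abs(ratio[0] - 1.0) < 0.01                         # scale agrees up to MC error
```

For power utility λ* is of course available in closed form (as computed above); the bisection merely illustrates that (28) pins down λ* for a general utility function.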
5.3 Stochastic Optimal Control Approach

Assumption. The coefficients r, µ, σ are constant and I = m = 1.

5.3.1 Heuristic Derivation of the HJB

We start with a heuristic derivation of the so-called Hamilton-Jacobi-Bellman equation (HJB), which is a PDE for the value function. Define the value function

G(t, x) := sup_{π∈A} E_{t,x}[U(X^π(T))],

where E_{t,x}[U(X^π(T))] := E[U(X^π(T)) | X(t) = x] and (t, x) ∈ [0, T] × (0, ∞). Note that G(T, x) = U(x). Assume that there exists an optimal portfolio strategy π*. We now carry out the following procedure:

(i) For given (t, x), we consider the following strategies over [t, T]:
• Strategy I: Use the optimal strategy π*.
• Strategy II: For an arbitrary π, use the strategy π on [t, t+Δ] and π* on (t+Δ, T], i.e.
π̂ = π on [t, t+Δ],  π̂ = π* on (t+Δ, T].
(ii) Compute E_{t,x}[U(X^{π*}(T))] and E_{t,x}[U(X^{π̂}(T))].
(iii) Using the fact that π* is at least as good as π̂ and taking the limit Δ → 0 yields the HJB.

ad (ii). We set X* := X^{π*} and X̂ := X^{π̂}.

Strategy I. E_{t,x}[U(X*(T))] = G(t, x).

Strategy II. By definition of the strategies, π̂ coincides with π* on (t+Δ, T] and X̂(t+Δ) = X^π(t+Δ), so

E_{t,x}[U(X̂(T))] = E_{t,x}[ E_{t+Δ, X̂(t+Δ)}[U(X̂(T))] ] = E_{t,x}[G(t+Δ, X^π(t+Δ))].

ad (iii). By assumption,

(31)  E_{t,x}[G(t+Δ, X^π(t+Δ))] ≤ G(t, x)

for any admissible π, where we have equality if π = π*. Given that G ∈ C^{1,2}, Ito's formula yields

G(t+Δ, X^π(t+Δ)) = G(t, x) + ∫_t^{t+Δ} G_t(s, X_s^π) ds + ∫_t^{t+Δ} G_x(s, X_s^π) dX_s^π
+ 0.5 ∫_t^{t+Δ} G_xx(s, X_s^π) d⟨X^π⟩_s
= G(t, x) + ∫_t^{t+Δ} [ G_t(s, X_s^π) + G_x(s, X_s^π)X_s^π(r + π_s(µ−r)) + 0.5 G_xx(s, X_s^π)(X_s^π)²π_s²σ² ] ds + ∫_t^{t+Δ} G_x(s, X_s^π)X_s^π π_s σ dW_s.

Given E_{t,x}[ ∫_t^{t+Δ} G_x(s, X_s^π)X_s^π π_s σ dW_s ] = 0, we can rewrite (31):

E_{t,x}[ ∫_t^{t+Δ} ( G_t(s, X_s^π) + G_x(s, X_s^π)X_s^π(r + π_s(µ−r)) + 0.5 G_xx(s, X_s^π)(X_s^π)²π_s²σ² ) ds ] ≤ 0.

Dividing by Δ and taking the limit Δ → 0 we get (note: X^π(t) = x)

G_t(t, x) + G_x(t, x)x(r + π_t(µ−r)) + 0.5 G_xx(t, x)x²π_t²σ² ≤ 0

for any admissible π. As above, we have equality for π = π*. Therefore, we get the HJB

sup_π [ G_t(t, x) + G_x(t, x)x(r + π(µ−r)) + 0.5 G_xx(t, x)x²π²σ² ] = 0

with final condition G(T, x) = U(x).

Remarks.
• Note that we take the supremum over a number π and not over a process.
• Since (t, x) was arbitrarily chosen, the HJB holds for all (t, x) ∈ [0, T] × (0, ∞).

5.3.2 Solving the HJB

We consider the HJB for an investor with power utility U(x) = (1/γ)x^γ:

sup_π [ G_t(t, x) + G_x(t, x)x(r + π(µ−r)) + 0.5 G_xx(t, x)x²π²σ² ] = 0

with final condition G(T, x) = (1/γ)x^γ. Pointwise maximization over π yields the following FOC:

(32)  G_x(t, x)x(µ−r) + G_xx(t, x)x²π(t)σ² = 0,

implying

π*(t) = −((µ−r)/σ²) · G_x(t, x)/(x G_xx(t, x)).

Since the derivative of (32) w.r.t. π is G_xx(t, x)x²σ², the candidate π* is the global maximum if G_xx < 0. Substituting π* into the HJB yields a PDE for G:

G_t(t, x) + r x G_x(t, x) − 0.5 θ² G_x²(t, x)/G_xx(t, x) = 0.
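The nonlinear PDE above can be sanity-checked numerically against its separated solution derived below, G(t,x) = (1/γ)x^γ exp(−(1−γ)K(T−t)) with K = −(γ/(1−γ))(r + 0.5θ²/(1−γ)). A sketch (parameter values are illustrative assumptions):

```python
import math

# Sketch: plug the closed-form derivatives of the separated candidate into
# G_t + r x G_x - 0.5 theta^2 G_x^2 / G_xx = 0 and check the residual vanishes.
# Parameter values are illustrative assumptions.
r, mu, sigma, T, gamma = 0.03, 0.08, 0.2, 1.0, 0.5
theta = (mu - r) / sigma
K = -(gamma / (1.0 - gamma)) * (r + 0.5 * theta**2 / (1.0 - gamma))

def G(t, x):
    return x**gamma / gamma * math.exp(-(1.0 - gamma) * K * (T - t))

def pde_residual(t, x):
    f = math.exp(-K * (T - t))                     # f(t) = e^{-K(T-t)}, so f' = K f
    G_t = x**gamma / gamma * (1.0 - gamma) * f**(-gamma) * (K * f)
    G_x = x**(gamma - 1.0) * f**(1.0 - gamma)
    G_xx = (gamma - 1.0) * x**(gamma - 2.0) * f**(1.0 - gamma)
    return G_t + r * x * G_x - 0.5 * theta**2 * G_x**2 / G_xx

for t in (0.0, 0.3, 0.7):
    for x in (0.5, 1.0, 50.0):
        assert abs(pde_residual(t, x)) < 1e-10 * G(t, x)   # residual is zero
assert abs(G(T, 2.0) - 2.0**gamma / gamma) < 1e-12         # final condition holds
```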
The final condition is G(T, x) = (1/γ)x^γ. We try the separation G(t, x) = (1/γ)x^γ f(t)^{1−γ} for some function f with f(T) = 1 and obtain

f' = K f,  where K := −(γ/(1−γ))( r + 0.5 θ²/(1−γ) ).

This is an ODE for f with the solution f(t) = exp(−K(T−t)). Hence, we get the following candidates for the value function and the optimal strategy:

(33)  G(t, x) = (1/γ) x^γ exp( −(1−γ)K(T−t) ),
      π*(t) = (1/(1−γ)) (µ−r)/σ².

5.3.3 Verification

In this section, we show that our heuristic derivation of the HJB has led to the correct result.

Proposition 5.4 (Verification Result) Assume that the coefficients are constant and that the investor has a power utility function, where for γ < 0 only bounded admissible strategies are considered. Then the candidates (33) are the value function and the optimal strategy of the portfolio problem (27).

Proof. Our candidate G is indeed a C^{1,2} function. Since γ < 1, we also get G_xx(t, x) = (γ−1)x^{γ−2} f(t)^{1−γ} < 0. Since G ∈ C^{1,2}, Ito's formula yields

G(T, X^π(T)) = G(t, x) + ∫_t^T [ G_t(s, X_s^π) + G_x(s, X_s^π)X_s^π(r + π_s(µ−r)) + 0.5 G_xx(s, X_s^π)(X_s^π)²π_s²σ² ] ds + ∫_t^T G_x(s, X_s^π)X_s^π π_s σ dW_s
≤ G(t, x) + ∫_t^T G_x(s, X_s^π)X_s^π π_s σ dW_s,   (*)

where (*) holds because G satisfies the HJB. The right-hand side is a local martingale in T. For γ > 0 we have G > 0 and thus the right-hand side is even a supermartingale, implying

(34)  E_{t,x}[ (1/γ)(X^π(T))^γ ] ≤ G(t, x),

where we use G(T, x) = (1/γ)x^γ. Besides, since E_{t,x}[ ∫_t^T G_x(s, X_s*)X_s* π_s* σ dW_s ] = 0 (exercise) and for π*

G_t(s, X_s*) + G_x(s, X_s*)X_s*(r + π_s*(µ−r)) + 0.5 G_xx(s, X_s*)(X_s*)²(π_s*)²σ² = 0,
we obtain

E_{t,x}[ (1/γ)(X*(T))^γ ] = G(t, x).

Hence, G is the value function and π* is the optimal solution. Since for a bounded strategy π we get E[ ∫_t^T G_x(s, X_s^π)X_s^π π_s σ dW_s ] = 0, the relation (34) also holds for γ < 0 and bounded admissible strategies. □

Remarks.
• For γ < 0 the boundedness assumption on π can be weakened.
• Note that we have not explicitly used the fact that the market is complete.

5.4 Intermediate Consumption

So far the investor has maximized expected utility from terminal wealth only. We now consider a situation where the investor maximizes
• expected utility from terminal wealth as well as
• expected utility from intermediate consumption.

At time t, the investor consumes at a rate c(t) ≥ 0, i.e. over the interval [0, T] he consumes ∫_0^T c(t) dt. Hence, in our model consumption can be interpreted as a continuous stream of dividends (see Exercise 40). To formalize the problem we need to generalize the concept of admissibility:

Definition 5.2 (Consumption Process, Self-financing, Admissible)
(i) A progressively measurable, real-valued process {c_t}_{t∈[0,T]} with c ≥ 0 and ∫_0^T c(t) dt < ∞ P-a.s. is said to be a consumption process.
(ii) Let ϕ be a trading strategy (see Definition 4.1) and c be a consumption process. If

dX(t) = ϕ(t)' dS(t) − c(t) dt

holds, then the pair (ϕ, c) is said to be self-financing.
(iii) A self-financing pair (ϕ, c) is said to be admissible if X(t) ≥ 0 for all t ≥ 0.

Hence, the investor's goal function reads

(35)  sup_{(π,c)∈A} E[ ∫_0^T U1(t, c(t)) dt + U2(X^{π,c}(T)) ],

where U1(t, ·) is a utility function for fixed t ∈ [0, T], U2 is a utility function, and

A := { (π, c) : (π, c) admissible and E[ ∫_0^T U1(t, c(t))^- dt + U2(X^{π,c}(T))^- ] < ∞ }
and

dX^{π,c}(t) = X^{π,c}(t)[ (r(t) + π(t)'(µ(t) − r(t)1)) dt + π(t)'σ dW(t) ] − c(t) dt,  X^{π,c}(0) = x0.

We set V1 := (U1')^{-1} (the inverse of the derivative of U1 with respect to c) and V2 := (U2')^{-1} and define

Γ(λ) := E[ ∫_0^T H(t)V1(t, λH(t)) dt + H(T)V2(λH(T)) ].

It is straightforward to generalize the result of Section 5.2:

Theorem 5.5 (Optimal Consumption and Optimal Final Wealth) Assume x0 > 0 and Γ(λ) < ∞ for all λ > 0. Then the optimal consumption and the optimal final wealth read

c*(t) = V1(t, Λ(x0) · H(t)),
X* = V2(Λ(x0) · H(T)),

and there exists a portfolio strategy π* such that (π*, c*) ∈ A, X^{π*,c*}(0) = x0, and X^{π*,c*}(T) = X*, P-a.s.

We will present the solution by using the stochastic control approach. As in Section 5.3 we make the following

Assumption. The coefficients r, µ, σ are constant and I = m = 1.

The value function now reads

G(t, x) := sup_{(π,c)∈A} E_{t,x}[ ∫_t^T U1(s, c(s)) ds + U2(X^{π,c}(T)) ].

Assume that there exists an optimal strategy (π*, c*). We now carry out the procedure of Section 5.3:

(i) For given (t, x) ∈ [0, T] × (0, ∞), we consider the following strategies over [t, T]:
• Strategy I: Use the optimal strategy (π*, c*).
• Strategy II: For an arbitrary (π, c), use the strategy (π, c) on [t, t+Δ] and (π*, c*) on (t+Δ, T], i.e.
(π̂, ĉ) = (π, c) on [t, t+Δ],  (π̂, ĉ) = (π*, c*) on (t+Δ, T].
(ii) Compute E_{t,x}[ ∫_t^T U1(s, c*(s)) ds + U2(X^{π*,c*}(T)) ] and E_{t,x}[ ∫_t^T U1(s, ĉ(s)) ds + U2(X^{π̂,ĉ}(T)) ].
(iii) Using the fact that (π*, c*) is at least as good as (π̂, ĉ) and taking the limit Δ → 0 yields the HJB.

We set X* := X^{π*,c*} and X̂ := X^{π̂,ĉ}.
X π.c (t + ∆)) ≤ G(t. Strategy I. x) + Gx (t. we have equality for (π. x)c π.x = Et. Xs ) + Gx (s. we can rewrite (36): t+∆ U1 (s. c).x [ Strategy II. Xs ) + Gx (s.x t U1 (s.x t Gx (s. Given that G ∈ C 1.c +0. c(s)) ds + G(t + ∆.5Gxx (t. c(s)) ds + G(t + ∆. where we have equality if (π. As above. Xs )Xs (r + πs (µ − r)) − Gx (s. we set X := X π.x Et+∆. Therefore. Xs )(Xs )2 πs σ 2 ds ≤ 0. By assumption.x t t+∆ U1 (s. c(s)) ds + U2 (X(T )) ˆ U1 (s. Xt+∆ ) = G(t. Xs )ds + t t+∆ Gx (s. x) + t Gt (s. Et. c) + Gt (t. t+∆ (36) Et. Xs )cs + 0. c) = (π ∗ . c∗ ). x). T T t U1 (s. c∗ (s)) ds + U2 (X ∗ (T ))] = G(t.5Gxx (s. X π. Xs )cs 2 +0.c (t + ∆)) for any admissible (π.c .X(t+∆) t t+∆ t+∆ ˆ t+∆ T ˆ U1 (s. Xs )dXs +0. x)x(r + π(µ − r)) − Gx (t. c(s)) ds + U2 (X(T )) ˆ ˆ T = Et.x t ˆ U1 (s.5Gxx (t. Ito’s formula yields t+∆ t+∆ G(t + ∆.5 CONTINUOUSTIME PORTFOLIO OPTIMIZATION 78 ad (ii).5 t t+∆ Gxx (s.2 . By deﬁnition of the strategies.x [ Et. c) = (π ∗ . x). Dividing by ∆ and taking the limit ∆ → 0 we get (Note: X(t) = x) 2 U1 (t. cs ) ds + t Gt (s. c(s)) ds + Et+∆. Xs )d < X >s = G(t.X(t+∆) t t+∆ ˆ U1 (s. c). To shorten notations.x t U1 (s. x)ct + 0. Xs )Xs πs σ dWs t+∆ t t+∆ Given Et. x) + t Gt (s. x)x2 π 2 σ 2 = 0 . Et. Xs )Xs (r + πs (µ − r)) 2 −Gx (s. c∗ (s)) ds + U2 (X ∗ (T )) = Et.X(t+∆) ˆ U1 (s.5Gxx (s. ct ) + Gt (t. c(s)) ds + Et+∆. x)x(r + πt (µ − r)) − Gx (t. c(s)) ds + U2 (X(T )) ˆ ˆ T = Et. x)x2 πt σ 2 ≤ 0 for any admissible (π. Xs )Xs πs σ dWs ] = 0. ad (iii). c∗ ). x) + Gx (t. Xs )(Xs )2 πs σ 2 ds t+∆ + t Gx (s. we get the HJB (37) sup U1 (t.
The final condition is G(T, x) = U2(x).

Remarks.
• Note that we take the supremum over a pair of numbers (π, c) and not over a process.
• Since (t, x) was arbitrarily chosen, the HJB holds for all (t, x) ∈ [0, T] × (0, ∞).

Example "Power Utility". We consider the following power utility functions:

U1(t, c) = e^{−αt} (1/γ) c^γ,  U2(x) = e^{−αT} (1/γ) x^γ,

where α ≥ 0 resembles the investor's time preferences. The HJB reads

sup_{π,c} [ e^{−αt}(1/γ)c^γ + G_t(t, x) + G_x(t, x)x(r + π(µ−r)) − G_x(t, x)c + 0.5 G_xx(t, x)x²π²σ² ] = 0

with final condition G(T, x) = e^{−αT}(1/γ)x^γ. The FOCs read

G_x(t, x)x(µ−r) + G_xx(t, x)x²π(t)σ² = 0,
e^{−αt} c^{γ−1} − G_x(t, x) = 0,

implying

π*(t) = −((µ−r)/σ²) · G_x(t, x)/(x G_xx(t, x)),
c*(t) = e^{(α/(γ−1))t} G_x(t, x)^{1/(γ−1)}.

Substituting (π*, c*) into the HJB yields

((1−γ)/γ) e^{(α/(γ−1))t} G_x(t, x)^{γ/(γ−1)} + G_t(t, x) + r x G_x(t, x) − 0.5 θ² G_x²(t, x)/G_xx(t, x) = 0.

We try the separation G(t, x) = (1/γ)x^γ f^β(t) for some function f with f(T) = 1 and some constant β. Simplifying yields

((1−γ)/γ) e^{(α/(γ−1))t} f^{(β+γ−1)/(γ−1)} + (β/γ) f' + ( r + 0.5 θ²/(1−γ) ) f = 0.

Choosing β = 1−γ leads to

((1−γ)/γ) e^{−(α/(1−γ))t} + ((1−γ)/γ) f' + ( r + 0.5 θ²/(1−γ) ) f = 0,

which can be rewritten as f' = K f + g with

K := −(γ/(1−γ))( r + 0.5 θ²/(1−γ) )  and  g(t) := −e^{−(α/(1−γ))t}.

This is an inhomogeneous ODE for f. Variation of constants yields

f(t) = f^H(t) [ f(T) − ∫_t^T g(s)/f^H(s) ds ],
where f^H(t) = e^{−K(T−t)} is the solution of the corresponding homogeneous ODE (see Section 5.3). Hence,

f(t) = e^{−K(T−t)} [ 1 + ∫_t^T e^{−(α/(1−γ))s + K(T−s)} ds ]
     = e^{−K(T−t)} [ 1 − (e^{KT}(1−γ)/(α + K(1−γ))) ( e^{−(α/(1−γ)+K)T} − e^{−(α/(1−γ)+K)t} ) ],

where the integral is nonnegative, so f > 0. Hence, we get the following candidates for the value function and the optimal strategy:

(38)  G(t, x) = (1/γ) x^γ f^{1−γ}(t),
      π*(t) = (1/(1−γ)) (µ−r)/σ²,
      c*(t) = e^{−(α/(1−γ))t} X*(t)/f(t).

Remark. Clearly, it needs to be shown that these candidates are indeed the value function and the optimal strategy! This can be done analogously to Proposition 5.4. For the convenience of the reader, the proof of this verification result is also presented.

Proposition 5.6 (Verification Result) Assume that the coefficients are constant and that the investor has a power utility function, where for γ < 0 only bounded portfolio strategies π are considered. Then the candidates (38) are the value function and the optimal strategy of the portfolio problem (35).

Proof. Our candidate G is indeed a C^{1,2} function. Since γ < 1 and f > 0, we also get G_xx(t, x) = (γ−1)x^{γ−2} f(t)^{1−γ} < 0. Especially, G satisfies the HJB. Let (π, c) ∈ A. Since G ∈ C^{1,2}, Ito's formula yields

(39)  G(T, X^{π,c}(T)) + ∫_t^T e^{−αs}(1/γ)c_s^γ ds
    = G(t, x) + ∫_t^T [ G_t(s, X_s^{π,c}) + G_x(s, X_s^{π,c})X_s^{π,c}(r + π_s(µ−r)) − G_x(s, X_s^{π,c})c_s + 0.5 G_xx(s, X_s^{π,c})(X_s^{π,c})²π_s²σ² + (1/γ)e^{−αs}c_s^γ ] ds + ∫_t^T G_x(s, X_s^{π,c})X_s^{π,c} π_s σ dW_s.

It is straightforward to show that E_{t,x}[ ∫_t^T G_x(s, X_s*)X_s* π_s* σ dW_s ] = 0 (exercise). Besides, for (π, c) = (π*, c*),

G_t(s, X_s*) + G_x(s, X_s*)X_s*(r + π_s*(µ−r)) − G_x(s, X_s*)c_s* + 0.5 G_xx(s, X_s*)(X_s*)²(π_s*)²σ² + (1/γ)e^{−αs}(c_s*)^γ = 0.
Hence, we obtain

(40)  E_{t,x}[G(T, X*(T))] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)(c_s*)^γ ds ] = G(t, x),

implying

E_{t,x}[ e^{−αT}(1/γ)X*(T)^γ ] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)(c_s*)^γ ds ] = G(t, x),

because G(T, x) = e^{−αT}(1/γ)x^γ. Finally, we need to show that for all (π, c) ∈ A

E_{t,x}[ e^{−αT}(1/γ)X^{π,c}(T)^γ ] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)c_s^γ ds ] ≤ G(t, x).

Since G satisfies the HJB, for all (π, c) ∈ A we get

G_t(s, X_s^{π,c}) + G_x(s, X_s^{π,c})X_s^{π,c}(r + π_s(µ−r)) − G_x(s, X_s^{π,c})c_s + 0.5 G_xx(s, X_s^{π,c})(X_s^{π,c})²π_s²σ² + (1/γ)e^{−αs}c_s^γ ≤ 0.

Therefore, (39) yields

(41)  G(T, X^{π,c}(T)) + ∫_t^T e^{−αs}(1/γ)c_s^γ ds ≤ G(t, x) + ∫_t^T G_x(s, X_s^{π,c})X_s^{π,c} π_s σ dW_s,

and now two cases are distinguished.

1st case: γ > 0. The right-hand side of (41) is a local martingale in T. Since for γ > 0 the left-hand side is greater than zero, the right-hand side is even a supermartingale, implying

(42)  E_{t,x}[G(T, X^{π,c}(T))] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)c_s^γ ds ] ≤ G(t, x)

and thus

E_{t,x}[ e^{−αT}(1/γ)X^{π,c}(T)^γ ] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)c_s^γ ds ] ≤ G(t, x).

2nd case: γ < 0. Since for a bounded strategy π we obtain

E[ ∫_t^T G_x(s, X_s^{π,c})X_s^{π,c} π_s σ dW_s ] = 0,

the result of the 1st case also holds for γ < 0 and bounded admissible strategies. □

Remark. If we set c ≡ 0, the previous proof is identical to the proof of Proposition 5.4.
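The closed-form f obtained from the variation-of-constants step above can be checked numerically against the ODE f' = Kf + g, and with it the deterministic optimal consumption-to-wealth ratio c*(t)/X*(t) = e^{−αt/(1−γ)}/f(t) from (38). A sketch (parameter values are illustrative assumptions; b := α/(1−γ) + K is assumed nonzero for them):

```python
import math

# Sketch: verify the closed-form f from the variation-of-constants step.
# It should solve f' = K f + g with g(t) = -exp(-alpha t/(1-gamma)) and
# f(T) = 1. Parameter values are illustrative assumptions.
r, mu, sigma, T, gamma, alpha = 0.03, 0.08, 0.2, 1.0, 0.5, 0.1
theta = (mu - r) / sigma
K = -(gamma / (1.0 - gamma)) * (r + 0.5 * theta**2 / (1.0 - gamma))
b = alpha / (1.0 - gamma) + K            # nonzero for these parameters

def f(t):  # closed form after evaluating the integral in the text
    return math.exp(-K * (T - t)) * (1.0 + math.exp(K * T) / b
                                     * (math.exp(-b * t) - math.exp(-b * T)))

def g(t):
    return -math.exp(-alpha * t / (1.0 - gamma))

assert abs(f(T) - 1.0) < 1e-12                         # terminal condition f(T) = 1
h = 1e-6
for t in (0.1, 0.5, 0.9):                              # central difference: f' = K f + g
    fprime = (f(t + h) - f(t - h)) / (2.0 * h)
    assert abs(fprime - (K * f(t) + g(t))) < 1e-6
for t in (0.0, 0.5, 1.0):                              # consumption fraction is positive
    assert math.exp(-alpha * t / (1.0 - gamma)) / f(t) > 0.0
```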
5.5 Infinite Horizon

If the optimal portfolio decision over the lifetime of an investor is to be determined, then the investment horizon lies very far in the future. For this reason, it is reasonable to approximate this situation by a portfolio problem with T = ∞.
Hence, the investor's goal function reads

(43)  sup_{(π,c)∈A} E[ ∫_0^∞ U1(t, c(t)) dt ],

where

A := { (π, c) : (π, c) admissible and E[ ∫_0^∞ U1(t, c(t))^- dt ] < ∞ }

and

dX^{π,c}(t) = X^{π,c}(t)[ (r(t) + π(t)'(µ(t) − r(t)1)) dt + π(t)'σ dW(t) ] − c(t) dt,  X^{π,c}(0) = x0.

As in the previous section we make the following

Assumption 1. The coefficients r, µ, σ are constant and I = m = 1.

Additionally, we make the

Assumption 2. U1(t, c(t)) = e^{−αt} U(c(t)), where α ≥ 0 and U is a utility function.

The value function is given by

G(t, x) := sup_{(π,c)∈A} E_{t,x}[ ∫_t^∞ e^{−αs} U(c(s)) ds ].
Lemma 5.7 The value function has the representation G(t, x) = e^{−αt} H(x), where H is a function depending on wealth only.

Proof. We obtain

G(t, x) e^{αt} = sup_{(π,c)∈A} E_{t,x}[ ∫_t^∞ e^{−α(s−t)} U(c(s)) ds ] = sup_{(π,c)∈A} E_{0,x}[ ∫_0^∞ e^{−αs} U(c(s)) ds ] =: H(x). □

Hence, the HJB (37) can be rewritten as follows:

sup_{π,c} [ e^{−αt} U(c) + G_t(t, x) + G_x(t, x)x(r + π(µ−r)) − G_x(t, x)c + 0.5 G_xx(t, x)x²π²σ² ]
= e^{−αt} sup_{π,c} [ U(c) − αH(x) + H_x(x)x(r + π(µ−r)) − H_x(x)c + 0.5 H_xx(x)x²π²σ² ] = 0.
Example "Power Utility". Let U(c) = (1/γ)c^γ. Then the FOCs read

H_x(x)x(µ−r) + H_xx(x)x²π(t)σ² = 0,
c^{γ−1} − H_x(x) = 0,

implying

π*(t) = −((µ−r)/σ²) · H_x(x)/(x H_xx(x)),
c*(t) = H_x(x)^{1/(γ−1)}.

Substituting (π*, c*) into the HJB leads to

((1−γ)/γ) H_x(x)^{γ/(γ−1)} − αH(x) + r x H_x(x) − 0.5 θ² H_x²(x)/H_xx(x) = 0.

We try the ansatz H(x) = (1/γ)x^γ k^β for some constants k and β. Simplifying yields

((1−γ)/γ) k^{β/(γ−1)} − α/γ + r + 0.5 θ²/(1−γ) = 0.

Choosing β = γ−1 leads to

k = (γ/(1−γ)) ( α/γ − r − 0.5 θ²/(1−γ) ).

Our ansatz is only reasonable if k ≥ 0. For γ < 0 this is valid in any case. For γ > 0 we need to assume that

(44)  α ≥ γ ( r + 0.5 θ²/(1−γ) ).

Hence, we get the following candidates for the value function and the optimal strategy:

(45)  G(t, x) = e^{−αt} (1/γ) x^γ k^{γ−1},
      π*(t) = (1/(1−γ)) (µ−r)/σ²,
      c*(t) = k X*(t).
Remark. In contrast to the previous section, the investor now consumes a time-independent constant fraction of wealth.

Proposition 5.8 (Verification Result) Assume that the coefficients are constant and that the investor has a power utility function, where for γ < 0 only bounded portfolio strategies π are considered. If (44) holds as a strict inequality, i.e.

(46)  α > γ ( r + 0.5 θ²/(1−γ) ),

then the candidates (45) are the value function and the optimal strategy of the portfolio problem (43).

Proof. Let (π, c) ∈ A. Analogously to the proof of Proposition 5.6, we get the relations (42) and (40) for the candidate G given by (45):

E_{t,x}[G(T, X^{π,c}(T))] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)c_s^γ ds ] ≤ G(t, x),
E_{t,x}[G(T, X*(T))] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)(c_s*)^γ ds ] = G(t, x).
Firstly note that, by the monotone convergence theorem,31 for any consumption process c

(47)  lim_{T→∞} E_{t,x}[ ∫_t^T e^{−αs} c_s^γ ds ] = E_{t,x}[ ∫_t^∞ e^{−αs} c_s^γ ds ].

Secondly, since G(T, X*(T)) = (1/γ) e^{−αT} X*(T)^γ k^{γ−1}, we get

(48)  E_{t,x}[G(T, X*(T))]
    = (1/γ) k^{γ−1} e^{−αT} E_{t,x}[X*(T)^γ]
    = (1/γ) k^{γ−1} e^{−αT} x^γ E_{t,x}[ exp( γ{ r + θ²/(1−γ) − 0.5 θ²/(1−γ)² − k }T + (γ/(1−γ)) θ W(T−t) ) ]
    = (1/γ) k^{γ−1} x^γ exp( −αT + γ{ r + θ²/(1−γ) − 0.5 θ²/(1−γ)² − k + 0.5 (γ/(1−γ)²) θ² }T )
    = (1/γ) k^{γ−1} x^γ e^{−kT} → 0  as T → ∞,

since (46) ensures that k > 0. Therefore,

E_{t,x}[ ∫_t^∞ e^{−αs}(1/γ)(c_s*)^γ ds ] = G(t, x).
To prove

(49)  E_{t,x}[ ∫_t^∞ e^{−αs}(1/γ)c_s^γ ds ] ≤ G(t, x),

two cases are distinguished.

1st case: γ > 0. Since in this case G(T, X^{π,c}(T)) ≥ 0, the relation (49) follows immediately from (42).

2nd case: γ < 0. Since in this case G < 0, the proof is more involved. From (42) it follows that

sup_{π,c} { E_{t,x}[G(T, X^{π,c}(T))] + E_{t,x}[ ∫_t^T e^{−αs}(1/γ)c_s^γ ds ] } ≤ G(t, x),

and we obtain the following estimate:

0 ≥ sup_{π,c} E_{t,x}[G(T, X^{π,c}(T))]
  = sup_{π,c} E_{t,x}[ (1/γ) e^{−αT} X^{π,c}(T)^γ ] k^{γ−1}
  ≥ sup_{π,c} E_{t,x}[ (1/γ) e^{−αT} X^{π,c}(T)^γ k^{γ−1} + ∫_t^T e^{−αs}(1/γ)c_s^γ ds ]   (∗)
  ≥ sup_π E_{t,x}[ (1/γ) e^{−αT} X^{π,0}(T)^γ ] k^{γ−1}
  = (1/γ) x^γ e^{γ(r + 0.5 θ²/(1−γ))(T−t)} e^{−αT} k^{γ−1} → 0  as T → ∞ by (46),   (+)

where (∗) is valid because ∫_t^T e^{−αs}(1/γ)c_s^γ ds ≤ 0 and (+) follows from Proposition 5.4. This establishes (49). □

Remarks.
• The relation lim_{T→∞} E_{t,x}[G(T, X^{π,c}(T))] = 0 is the so-called transversality condition.
• For γ > 0 the property (48) can be proved by using a different argument:

0 ≤ E_{t,x}[G(T, X*(T))] = E_{t,x}[ (1/γ) X*(T)^γ ] e^{−αT} k^{γ−1}
  ≤ sup_π E_{t,x}[ (1/γ) X^{π,0}(T)^γ ] e^{−αT} k^{γ−1}
  ≤ (1/γ) x^γ e^{γ(r + 0.5 θ²/(1−γ))(T−t)} e^{−αT} k^{γ−1} → 0  as T → ∞ by (46),   (+)

where (+) follows from Proposition 5.4. Applying this argument one can avoid some tedious calculations.
• As in Proposition 5.4, the boundedness assumption for γ < 0 can be weakened.

31 We emphasize that in (47) the factor 1/γ is deliberately omitted because without this factor the argument of the expected value is positive.
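The transversality condition is easy to see numerically from the closed form in (48), E_{t,x}[G(T, X*(T))] = (1/γ)k^{γ−1}x^γ e^{−kT}; a short sketch (parameter values are illustrative assumptions):

```python
import math

# Sketch: transversality for the infinite-horizon problem. With the
# candidate (45), the closed-form expectation (48) decays like e^{-kT},
# and (46) forces k > 0. Parameter values are illustrative assumptions.
r, mu, sigma, gamma, alpha, x = 0.03, 0.08, 0.2, 0.5, 0.1, 100.0
theta = (mu - r) / sigma
k = (gamma / (1.0 - gamma)) * (alpha / gamma - r - 0.5 * theta**2 / (1.0 - gamma))
assert k > 0                      # guaranteed by the strict inequality (46)

def EG(T):                        # closed-form expectation from (48)
    return (1.0 / gamma) * k ** (gamma - 1.0) * x**gamma * math.exp(-k * T)

values = [EG(T) for T in (1.0, 10.0, 100.0, 1000.0)]
assert all(a > b for a, b in zip(values, values[1:]))   # strictly decreasing in T
assert values[-1] < 1e-20                               # tends to 0 as T grows
```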