Financial Mathematics: Martingales & MDPs
Contents
1 Martingales 4
1.1 Step-by-Step Understanding of Martingales . . . . . . . . . . . . . . . . . 4
1.2 Properties of Conditional Expectation (CE) . . . . . . . . . . . . . . . . 6
1.2.1 1. Properties of Conditional Expectation (CE): . . . . . . . . . . 6
1.2.2 2. Recursive Relation for Martingales: . . . . . . . . . . . . . . . 6
1.2.3 3. Continuous-Time Martingales: . . . . . . . . . . . . . . . . . . 6
1.2.4 4. Example 11.2: Random Walks . . . . . . . . . . . . . . . . . . 6
1.2.5 Example 11.3: A Geometric Random Walk . . . . . . . . . . . . . 7
1.3 Stopping Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.1 Definition of Stopping Time . . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Interpretation of Stopping Time . . . . . . . . . . . . . . . . . . . 8
1.3.3 Example: First Hitting Time . . . . . . . . . . . . . . . . . . . . 9
1.3.4 Example: Coin Tossing and Stopping Time . . . . . . . . . . . . . 9
1.3.5 Theorem 11.1: Martingale Property under Stopping Time . . . . 10
1.3.6 Theorem 11.2: Optional Stopping Theorem (OST) . . . . . . . . . 11
1.3.7 Call Options and American Derivative Security (ADS) . . . . . . 12
4 Stochastic Calculus 31
4.1 Stochastic Calculus and Itô’s Integrals . . . . . . . . . . . . . . . . . . . 31
4.1.1 White Noise and Wiener Process . . . . . . . . . . . . . . . . . . 31
4.1.2 Itô’s Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.1.3 Derivation of the Itô Integral Formula . . . . . . . . . . . . . . . 31
4.1.4 Properties of Itô Integrals . . . . . . . . . . . . . . . . . . . . . . 32
4.1.5 Stochastic Integral with Non-Random Integrand . . . . . . . . . . 32
4.1.6 Examples of Itô Integrals . . . . . . . . . . . . . . . . . . . . . . . 32
4.1.7 Summary of Itô’s Integral Formula . . . . . . . . . . . . . . . . . 33
4.1.8 Itô’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.1.9 Multiplication Table and Derivations . . . . . . . . . . . . . . . . 33
4.1.10 Examples of Itô’s Lemma . . . . . . . . . . . . . . . . . . . . . . 34
4.2 Statement 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.2.1 Example: Geometric Brownian Motion (GBM) . . . . . . . . . . . 35
4.3 Product Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.4 Quotient Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.4.1 Example: Product of Processes . . . . . . . . . . . . . . . . . . . 36
4.5 Stochastic Differential Equation (SDE) . . . . . . . . . . . . . . . . . . . 36
4.5.1 Itô Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.5.2 Existence and Uniqueness . . . . . . . . . . . . . . . . . . . . . . 36
4.5.3 Ornstein-Uhlenbeck Process (OU Process) . . . . . . . . . . . . . 37
4.5.4 Step 4: Stationary Distribution . . . . . . . . . . . . . . . . . . . 38
5 Applications in Finance 40
5.1 Applications in Finance: Pricing Stock Options . . . . . . . . . . . . . . 40
5.1.1 Introduction to Stock Options . . . . . . . . . . . . . . . . . . . . 40
5.1.2 Pricing Problem Setup . . . . . . . . . . . . . . . . . . . . . . . . 40
5.1.3 Calculation of Holdings . . . . . . . . . . . . . . . . . . . . . . . . 40
5.1.4 Case Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.2 Option Pricing in Discrete Time . . . . . . . . . . . . . . . . . . . . . . . 42
5.2.1 Stock Pricing and Interest Rates . . . . . . . . . . . . . . . . . . 42
5.2.2 Definitions of Call and Put Options . . . . . . . . . . . . . . . . . 42
5.2.3 Classical Valuation of Options . . . . . . . . . . . . . . . . . . . . 42
5.2.4 Example: Option Valuation . . . . . . . . . . . . . . . . . . . . . 42
5.2.5 Replicating Portfolio (Hedging Strategy) . . . . . . . . . . . . . . 43
5.2.6 Fair Price of the Call Option . . . . . . . . . . . . . . . . . . . . . 43
5.2.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.3 Application in Finance: Option Pricing in Discrete Time . . . . . . . . . 43
5.3.1 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.3.2 Classical Expected Payoff Method . . . . . . . . . . . . . . . . . . 44
5.3.3 Replicating Portfolio Method . . . . . . . . . . . . . . . . . . . . 44
Chapter 1
Martingales
E[Xt+1 |Ft ] = Xt
F0 ⊆ F1 ⊆ F2 ⊆ . . .
Ω = {all possible sequences of up (u) and down (d) moves over time T }
For example, if T = 2:
Ω = {uu, ud, du, dd}
Filtration in Finance:
E[Xt+1 |Ft ] = Xt
Example: In the Binomial Market Model, if the asset price St is adjusted to remove any
drift (e.g., under a risk-neutral measure):
E[St+1 |Ft ] = St
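This drift-removal condition is easy to check numerically. The sketch below uses hypothetical parameters u, d, r (not taken from the notes) and verifies that, under the risk-neutral probability p* = (1 + r − d)/(u − d), the discounted one-step expectation of the price equals the current price:

```python
# Minimal numerical check of the risk-neutral ("drift-removed") property in a
# one-step binomial model.  The parameters u, d, r, S0 are hypothetical.
u, d, r, S0 = 1.2, 0.9, 0.05, 100.0

# Risk-neutral up-probability: chosen exactly so the discounted price has no drift.
p_star = (1 + r - d) / (u - d)

# One-step expectation of the discounted price under p*.
expected_discounted = (p_star * u * S0 + (1 - p_star) * d * S0) / (1 + r)

# Martingale property of the discounted price: E*[S1 / (1 + r)] = S0.
assert abs(expected_discounted - S0) < 1e-12
print(p_star, expected_discounted)
```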
3. Filtration:
Ft = {all events up to time t}
E[ZX|G] = Z E[X|G], provided Z is G-measurable,
E[X|G] = E[X], provided X is independent of G.
E[Xt+s |Ft ] = Xt ∀s ≥ 1.
Expanding step-by-step (by the tower property, the one-step martingale relation extends by induction to every horizon):
E[Xt+s |Ft ] = Xt ∀s ≥ 0.
Special Case
If P (Yi = 1) = p and P (Yi = −1) = 1 − p, then Xn is a martingale if and only if p = 1/2.
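A quick simulation confirms the martingale property of the symmetric walk: the average final position stays at X0 = 0. This is a sketch; path count and horizon are arbitrary choices.

```python
import random

# Simulate N paths of the symmetric (+1/-1) random walk X_n = Y_1 + ... + Y_n
# and check that the sample mean of X_n stays near X_0 = 0,
# as the martingale property predicts.
random.seed(0)
N, n_steps = 100_000, 20

final_positions = []
for _ in range(N):
    x = 0
    for _ in range(n_steps):
        x += 1 if random.random() < 0.5 else -1
    final_positions.append(x)

mean_final = sum(final_positions) / N
# Standard error is sqrt(n_steps / N) ~ 0.014, so the mean should be near 0.
assert abs(mean_final) < 0.1
print(mean_final)
```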
Step 3: Generalization
It is clear that, setting S0 := 0, Sn := Y1 + · · · + Yn , for any v ∈ R such that φY (v) := E[e^{vY1}] < ∞, the process
Xn′ := e^{vSn − n ln φY (v)} , n = 0, 1, 2, . . . ,
is a martingale. Specifically:
Xn′ = e^{vSn} / φY (v)^n .
Remarks
For both {Xn } and {Xn′ }, we could, instead of the filtration
Ft := σ(Y1 , . . . , Yt ),
use any larger filtration to which the processes remain adapted.

1.3 Stopping Time
A random variable τ taking values in {0, 1, 2, . . . } is a stopping time (with respect to {Ft }) if
{τ ≤ t} ∈ Ft for each t = 0, 1, 2, . . .
Equivalently,
{τ = t} = {τ ≤ t} ∩ {τ > t − 1} ∈ Ft ,
which implies:
{τ ≤ t} = ⋃_{s=0}^{t} {τ = s}, for t = 0, 1, 2, . . .
Applications
• Finance: Selling a stock when its price exceeds a target value.
• Gambling: Quitting a game when winnings or losses hit a specific threshold.
• Statistics: Stopping a sequential hypothesis test when a test statistic exceeds a
critical value.
Stopping Time
Define the stopping time τ as the first time Xk ≥ 3, i.e., the first time the player’s net
winnings reach or exceed 3 : τ := inf{k ≥ 0 : Xk ≥ 3}.
E[Xk+1 |Fk ] = Xk .
Summary
The coin-tossing example demonstrates how stopping times arise naturally in games of
chance. The first hitting time τ in this example is the first time winnings reach a target
value.
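The first hitting time of the coin-tossing example can be simulated directly; a minimal sketch (the path cap is an arbitrary truncation, since for a fair coin τ is finite almost surely but has infinite mean):

```python
import random

# Simulate the first hitting time tau = inf{k : X_k >= 3} for the fair +/-1
# coin-tossing walk.  Paths are truncated at a cap because tau can be large.
random.seed(1)
CAP = 10_000
taus = []
for _ in range(2_000):
    x, k = 0, 0
    while x < 3 and k < CAP:
        x += 1 if random.random() < 0.5 else -1
        k += 1
    taus.append(k)

# Winnings move by one unit per toss, so reaching 3 requires at least 3 tosses.
assert min(taus) >= 3
print(min(taus), sum(taus) / len(taus))
```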
Theorem 11.1 states that if {Xt } is a martingale and τ a stopping time, then the stopped process
Zt := Xτ ∧t , t = 0, 1, 2, . . .
is again a martingale.
Proof
1. Verify integrability: For t = 0, we have Z0 = X0 , which is integrable as {Xt } is a martingale. For t ≥ 0, the representation
Z_{t+1} = Σ_{k=0}^{t} Xk 1{τ =k} + X_{t+1} 1{τ >t}
implies
E[|Z_{t+1}|] ≤ Σ_{k=0}^{t+1} E[|Xk |] < ∞,
ensuring integrability.
2. Verify the martingale property: Using the representation of Z_{t+1} and applying conditional expectation:
E[Z_{t+1} |Ft ] = E[ Σ_{k=0}^{t} Xk 1{τ =k} + X_{t+1} 1{τ >t} | Ft ].
Since each Xk 1{τ =k} (k ≤ t) is Ft -measurable and 1{τ >t} is Ft -measurable, we can simplify:
E[Z_{t+1} |Ft ] = Σ_{k=0}^{t} Xk 1{τ =k} + 1{τ >t} E[X_{t+1} |Ft ] = Σ_{k=0}^{t} Xk 1{τ =k} + 1{τ >t} Xt = Xτ ∧t = Zt .
Theorem 11.2 (Optional Stopping Theorem): If {Xt } is a martingale and τ is a bounded stopping time, then
E[Xτ ] = E[X0 ].
Proof
1. By Theorem 11.1, the process Zt := Xτ ∧t is a martingale. From the definition of a
martingale:
E[Zt ] = E[Z0 ], ∀t ≥ 0.
2. As t → ∞, the boundedness of τ ensures Zt → Xτ . Thus, by the Dominated Convergence Theorem:
E[Xτ ] = lim_{t→∞} E[Zt ] = E[Z0 ] = E[X0 ].
Remarks
1. In a fair game, the Optional Stopping Theorem states that there is no strategy based on stopping that can "beat the system": the expected value of the martingale remains constant.
2. In the general case, τ can be unbounded, and additional conditions are required for the theorem to hold. For example, if {Xt } is uniformly integrable, the OST holds even for unbounded τ .
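The bounded case of the OST can be checked by simulation: stop the fair walk at the bounded time τ ∧ N and verify that the mean of the stopped value stays at X0 = 0. This is a sketch; the bound N and the target level are illustrative choices.

```python
import random

# Check the Optional Stopping Theorem for the fair +/-1 walk with the BOUNDED
# stopping time tau ∧ N, where tau is the first time X_k >= 3 and N = 50.
random.seed(2)
N_BOUND, N_PATHS = 50, 100_000

stopped_values = []
for _ in range(N_PATHS):
    x = 0
    for _ in range(N_BOUND):
        x += 1 if random.random() < 0.5 else -1
        if x >= 3:          # tau reached: stop the walk here
            break
    stopped_values.append(x)

mean_stopped = sum(stopped_values) / N_PATHS
# The OST predicts E[X_{tau ∧ N}] = E[X_0] = 0.
assert abs(mean_stopped) < 0.1
print(mean_stopped)
```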
Types of Call Options There are two main types of call options:
• European Call Option (Vanilla Option):
– Can only be exercised at the expiration date (terminal point of the time interval).
– Example Scenarios:
∗ Price Drops: The investor ignores the option and buys the shares at the
lower market price.
∗ Price Rises: The investor exercises the option to buy shares at the lower
strike price.
Chapter 2
Markov Decision Processes (MDP)
2.1 Introduction
Markov Decision Processes (MDPs) provide a framework to model decision-making in en-
vironments where outcomes are influenced by both randomness and the decision-maker’s
actions. These processes are widely used in fields such as operations research, artificial
intelligence, and economics.
• Actions: At each step, the decision-maker chooses an action from a set of available
actions.
[Figure: A decision tree illustrating the stages and uncertainty in MDPs.]
where Xt is the state at time t, at is the action taken, and R(Xt , at ) is the reward.
• Actions (A): The set of possible actions the decision-maker can take at any given
state.
• Reward Function (R(i, a)): The reward earned by taking action a in state i.
Optimality Equation
The main tool for solving finite-stage models is the optimality equation:
Vn (i) = max_a [ R(i, a) + Σ_j pij (a) Vn−1 (j) ],
where Σ_j pij (a) Vn−1 (j) is the expected future reward from transitioning to state j and following the optimal policy thereafter.
Backward Calculation
1. Start with the terminal reward VT (i) = maxa R(i, a) at the final stage T .
2.3 Conclusion
MDPs provide a systematic framework for sequential decision-making under uncertainty.
The finite-stage model optimizes decisions over a fixed horizon by balancing immediate
and future rewards. The dynamic programming approach ensures the solution is both
efficient and optimal.
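The backward calculation can be sketched in a few lines of Python. The two-state, two-action MDP below is entirely hypothetical (made-up transition probabilities and rewards); only the recursion Vn(i) = max_a [R(i, a) + Σ_j p_ij(a) V_{n−1}(j)] is taken from the text.

```python
# Generic finite-horizon backward induction for a toy MDP (hypothetical numbers).
states = [0, 1]
actions = [0, 1]
# P[a][i][j] = transition probability from i to j under action a.
P = {0: [[0.9, 0.1], [0.2, 0.8]],
     1: [[0.5, 0.5], [0.6, 0.4]]}
# R[(i, a)] = immediate reward for taking action a in state i.
R = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.0, (1, 1): 2.0}

T = 5
V = [0.0, 0.0]                      # V_0(i) = 0: no stages remaining
policy = []
for n in range(1, T + 1):           # n = number of stages to go
    V_new, pi_n = [], []
    for i in states:
        # Optimality equation: best immediate + expected future reward.
        best_value, best_action = max(
            (R[(i, a)] + sum(P[a][i][j] * V[j] for j in states), a)
            for a in actions
        )
        V_new.append(best_value)
        pi_n.append(best_action)
    V, policy = V_new, [pi_n] + policy   # prepend: policy[t] is used at stage t

print(V, policy)
```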
where Zj is the price (in thousands of dollars). If the seller rejects an offer, it is lost.
The seller aims to maximize the expected selling price. The problem is to derive the
optimal policy for selling the house and find the maximum expected value of the selling
price.
Formulation
We define the process {Xt } to include all necessary information for making decisions:
Xt = { Zt , if the house is not sold yet; 0, otherwise },
where t = 1, 2, 3 (= T ).
At each step, there are two possible actions:
• a = 1: Sell the house.
• a = 0: Do nothing.
The transition probabilities are:
px0 (1) = 1, p00 (a) = 1 for any a, pxy (0) = P (Zj = y) for any x ̸= 0,
and the rewards are:
R(x, 1) = x, R(x, 0) = 0.
Solution
The total additive reward is simply the selling price, as the property is sold only once.
Starting with:
V0 (x) = 0,
we calculate the optimal value functions step by step using the optimality equation:
Step 1 (t = 3) : At the final stage, the seller accepts the last buyer's offer, regardless of its value:
V1 (x) = max_a R(x, a) = x.
Step 2 (t = 2) : The seller accepts the second offer only if it beats the expected value of waiting for the last offer:
V2 (x) = max{x, 109},
where 109 is the expected value of the final offer:
E[V1 (Z3 )] = E[Z3 ] = 100 · 0.3 + 110 · 0.5 + 120 · 0.2 = 109.
Step 3 (t = 1) : Similarly:
V3 (x) = max{x, 111.7},
where
E[V2 (Z2 )] = E[max{Z2 , 109}] = 109 · 0.3 + 110 · 0.5 + 120 · 0.2 = 111.7.
The maximum expected selling price is therefore
E[V3 (Z1 )] = 111.7 · 0.3 + 111.7 · 0.5 + 120 · 0.2 = 113.36 (thousand dollars),
and the optimal policy is: accept the first offer only if it equals 120, accept the second offer if it is at least 110, and always accept the third offer.
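The backward induction for the house-selling example is easy to verify in code. The offer distribution (100, 110, 120, in thousands, with probabilities 0.3, 0.5, 0.2) is the one used in the example; the final expectation evaluates to 113.36.

```python
# Verify the house-selling backward induction: three offers in total,
# each 100, 110, or 120 (thousands) with probabilities 0.3, 0.5, 0.2.
offers = [(100, 0.3), (110, 0.5), (120, 0.2)]

def expected(V):
    """E[V(Z)] for an offer Z with the distribution above."""
    return sum(p * V(z) for z, p in offers)

V1 = lambda x: x                       # last offer: always accept
e1 = expected(V1)                      # expected value of the final offer
V2 = lambda x: max(x, e1)              # one more offer still to come
e2 = expected(V2)
V3 = lambda x: max(x, e2)              # two more offers still to come
e3 = expected(V3)                      # maximum expected selling price

print(e1, e2, e3)
```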
The stock price is modeled as a random walk, Xt = X0 + Y1 + · · · + Yt , where Yj are i.i.d. random variables with a common distribution F , having finite mean µ = E[Y1 ].
An American call option entitles the holder to buy a block of shares (exercise the
option) of a given company at a stated price at any time during a specified time interval.
Illustration
An American call option is used for:
• Hedging Risks: Protect against future price rises.
Objective
Given an American call option to buy shares at a fixed price c with T -days to expiry, the
task is to maximize the expected profit. The optimal policy prescribes when to exercise
the option based on observed stock prices Xt .
where:
• R(Xt , at ): Reward at time t.
Bellman Equation
• Finite Horizon:
Vn (i) = max_a [ R(i, a) + α Σ_j pij (a) Vn−1 (j) ].
• Infinite Horizon:
V (i) = max_a [ R(i, a) + α Σ_j pij (a) V (j) ].
1 / (Xt − r^{−1} Xt+1 ) = αr · 1 / (Xt+1 − r^{−1} Xt+2 ).
Simplify (cross-multiplying):
Xt+1 − r^{−1} Xt+2 = αr (Xt − r^{−1} Xt+1 ).
The general solution of this linear recurrence is
Xt = b1 r^t + b2 (αr)^t .
The constants are fixed by the boundary conditions:
b1 + b2 = X0 ,
b1 r^T + b2 (αr)^T = XT .
From the first equation:
b2 = X0 − b1 .
Substitute into the second equation:
b1 r^T + (X0 − b1 )(αr)^T = XT .
Simplify:
b1 r^T − b1 (αr)^T = XT − X0 (αr)^T .
Factor out b1 :
b1 (r^T − (αr)^T ) = XT − X0 (αr)^T .
Solve for b1 :
b1 = (XT − X0 (αr)^T ) / (r^T − (αr)^T ).
Substitute back to find b2 :
b2 = X0 − b1 .
The consumption is Ct = Xt − r^{−1} Xt+1 , so
Ct = b1 (r^t − r^t ) + b2 ((αr)^t − α(αr)^t ).
Simplify:
Ct = b2 (1 − α)(αr)^t .
Using b2 = (X0 − r^{−T} XT ) / (1 − α^T ), the final consumption policy is:
Ct = [(X0 − r^{−T} XT ) / (1 − α^T )] (1 − α)(αr)^t .
For T = ∞, b2 = X0 (and b1 = 0), so:
Ct = (1 − α)Xt .
Final Results
1. Consumption policy for finite horizon:
Ct = [(X0 − r^{−T} XT ) / (1 − α^T )] (1 − α)(αr)^t .
2. Consumption policy for infinite horizon:
Ct = (1 − α)Xt .
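The finite-horizon policy can be checked numerically: with Xt = b1 r^t + b2 (αr)^t, consumption Ct = Xt − r^{−1}Xt+1 should equal b2(1 − α)(αr)^t and grow by the factor αr each period. The parameter values below are hypothetical.

```python
# Check the finite-horizon consumption policy with hypothetical parameters.
alpha, r = 0.95, 1.05          # discount factor and gross interest rate
X0, XT, T = 100.0, 20.0, 10    # initial wealth, terminal wealth, horizon

b1 = (XT - X0 * (alpha * r) ** T) / (r ** T - (alpha * r) ** T)
b2 = X0 - b1

X = [b1 * r ** t + b2 * (alpha * r) ** t for t in range(T + 1)]
C = [X[t] - X[t + 1] / r for t in range(T)]        # C_t = X_t - r^{-1} X_{t+1}

# Closed form C_t = b2 (1 - alpha) (alpha r)^t ...
closed = [b2 * (1 - alpha) * (alpha * r) ** t for t in range(T)]
assert all(abs(c - cc) < 1e-8 for c, cc in zip(C, closed))
# ... consumption grows by the factor alpha*r each period (Euler equation) ...
assert all(abs(C[t + 1] / C[t] - alpha * r) < 1e-8 for t in range(T - 1))
# ... and b2 matches (X0 - r^{-T} XT) / (1 - alpha^T).
assert abs(b2 - (X0 - XT / r ** T) / (1 - alpha ** T)) < 1e-6
print(C[0], C[-1])
```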
where:
• Ct : Consumption at time t.
• wt : Fraction invested in the risky asset.
• 1 − wt : Fraction invested in the safe asset.
The objective is to maximize:
max_{Ct ,wt} E[ Σ_{t=0}^{T−1} α^t log(Ct ) ].
where the wealth evolves as
Xt+1 = (Xt − Ct ) [(1 − wt )r + wt Z] .
Chapter 3
Brownian Motion
Step 2: Properties of Xi
Since the Xi are independent and identically distributed (i.i.d.) random variables, we have:
E[Xi ] = (+1) · 1/2 + (−1) · 1/2 = 0,
Var(Xi ) = E[Xi^2 ] − (E[Xi ])^2 = 1 − 0^2 = 1.
Step 5: Limit ∆x → 0, ∆t → 0
Let ∆x = σ√∆t, where σ > 0. As ∆t → 0:
E[X(t)] = 0,
Var(X(t)) = (σ√∆t)^2 · (t/∆t) = σ^2 t.
Therefore, X(t) converges in distribution to a normal distribution:
X(t) ∼ N (0, σ^2 t).
E[Wt+s | Ft ] = Wt , ∀s ≥ 0.
Example: Yt = Wt2 − t
Consider the process Yt = Wt2 − t. We will show that Yt is a martingale.
Thus, Yt is a martingale.
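A Monte Carlo sanity check: since Wt ∼ N(0, t), the average of Wt² − t over many samples should be near 0, consistent with E[Yt] = E[Y0] = 0. The sample size and time point are illustrative.

```python
import random

# Monte Carlo check that Y_t = W_t^2 - t has expectation 0:
# sample W_t ~ N(0, t) directly and average W_t^2 - t.
random.seed(3)
t, n = 2.0, 200_000
samples = [random.gauss(0.0, t ** 0.5) ** 2 - t for _ in range(n)]
mean_Y = sum(samples) / n
# Var(W_t^2) = 2 t^2 = 8, so the standard error is sqrt(8/n) ~ 0.006.
assert abs(mean_Y) < 0.1
print(mean_Y)
```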
with x0 = 0 and t0 = 0.
Step-by-Step Solution:
1. By the properties of Brownian motion, the increment
Y (1) − Y (1/2) ∼ N (0, σ^2 · 1/2),
and it is independent of Y (1/2).
2. Rewrite the probability:
P (Y (1) > 0 | Y (1/2) = σ) = P (Y (1) − Y (1/2) > −σ | Y (1/2) = σ) = P (Y (1) − Y (1/2) > −σ),
using independence of the increment from Y (1/2).
3. Standardizing, with Z ∼ N (0, 1):
P (Y (1) − Y (1/2) > −σ) = P (Z > −σ/(σ/√2)) = P (Z > −√2) ≈ 0.9214.
Step-by-Step Solution:
1. Use the conditional distribution of Brownian motion (the Brownian bridge at the midpoint):
Y (1/2) | Y (1) = σ ∼ N (σ/2, σ^2 /4).
2. The probability of being ahead at the midpoint is:
P (Y (1/2) > 0 | Y (1) = σ) = P (Z > (0 − σ/2)/(σ/2)) = P (Z > −1) = Φ(1) ≈ 0.8413,
where Z ∼ N (0, 1).
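Both conditional probabilities reduce to standard-normal tail values, which Python's `math.erf` evaluates directly (the reduction of the first example to P(Z > −√2) follows the same standardization):

```python
import math

def Phi(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P(Z > -1) = Phi(1): ahead at the midpoint given Y(1) = sigma.
p_midpoint = Phi(1.0)
# P(Z > -sqrt(2)) = Phi(sqrt(2)): ahead at the end given Y(1/2) = sigma.
p_end = Phi(math.sqrt(2.0))

print(round(p_midpoint, 4), round(p_end, 4))
```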
Y (t) = (µ − σ^2 /2) t + σW (t),
where:
• E[Y (t)] = (µ − σ^2 /2) t,
• Var[Y (t)] = σ^2 t,
• Y (t) ∼ N ((µ − σ^2 /2) t, σ^2 t),
and µ − σ^2 /2 is the effective drift term.
Chapter 4
Stochastic Calculus
• W (0) = 0,
where a = t0 < t1 < · · · < tn = b forms a partition of [a, b], and max(ti − ti−1 ) → 0.
• Variance (the Itô isometry):
V[ ∫_a^b f (t) dW (t) ] = ∫_a^b f^2 (t) dt,
Example 2: Yt = e^{Wt}
Using Itô's formula:
dYt = f ′ (Wt ) dWt + (1/2) f ′′ (Wt ) (dWt )^2 .
Here:
f (x) = e^x , f ′ (x) = e^x , f ′′ (x) = e^x .
Substitute into the formula:
d(e^{Wt}) = e^{Wt} dWt + (1/2) e^{Wt} dt.
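The drift term in this example implies, after taking expectations, that E[e^{Wt}] = e^{t/2}. A quick Monte Carlo check (sample size and time point are arbitrary choices):

```python
import math
import random

# The drift in d(e^{W_t}) = e^{W_t} dW_t + (1/2) e^{W_t} dt gives
# dE[Y_t] = (1/2) E[Y_t] dt, hence E[e^{W_t}] = e^{t/2}.
# Check by direct sampling of W_t ~ N(0, t).
random.seed(4)
t, n = 1.0, 200_000
mc_mean = sum(math.exp(random.gauss(0.0, math.sqrt(t))) for _ in range(n)) / n
exact = math.exp(t / 2)
assert abs(mc_mean - exact) < 0.05
print(mc_mean, exact)
```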
Summary
- Itô's Formula (for a twice-differentiable f applied to Brownian motion, Yt = f (Wt )):
dYt = f ′ (Wt ) dWt + (1/2) f ′′ (Wt ) (dWt )^2 .
- For Brownian motion Wt , we use:
(dWt )^2 = dt.
4.2 Statement 1
Let f (t, x) have continuous partial derivatives ∂t f , ∂x f , and ∂x^2 f (i.e., f is twice continuously differentiable in x). If {Xt } is an Itô process
dXt = at dt + bt dWt ,
then Yt := f (t, Xt ) satisfies
dYt = ( ∂t f + at ∂x f + (1/2) bt^2 ∂x^2 f ) dt + bt ∂x f dWt .
Derivation
For Zt = f (t, Wt ) with f (t, x) = e^{µt + σx} (so that ∂t f = µf , ∂x f = σf , ∂x^2 f = σ^2 f ):
d(Zt ) = ∂t f (t, Wt ) dt + ∂x f (t, Wt ) dWt + (1/2) ∂x^2 f (t, Wt ) (dWt )^2 .
Substituting the derivatives:
d(Zt ) = µ f (t, Wt ) dt + σ f (t, Wt ) dWt + (1/2) σ^2 f (t, Wt ) dt
= µ Zt dt + σ Zt dWt + (1/2) σ^2 Zt dt
= ( µ + σ^2 /2 ) Zt dt + σ Zt dWt .
Zt := Xt / Yt .
The quotient rule reads:
dZt = dXt / Yt − (Xt / Yt^2 ) dYt + (Xt / Yt^3 ) (dYt )^2 − (1/Yt^2 ) dXt dYt .
Solution of SDE
The solution of the SDE is given by:
Xt = X0 + ∫_0^t a(s, Xs ) ds + ∫_0^t b(s, Xs ) dWs , t ∈ [0, T ].
Simplify:
d( e^{rt} Xt ) = σ e^{rt} dWt .
Integrating from 0 to t:
e^{rt} Xt − X0 = σ ∫_0^t e^{rs} dWs , so Xt = e^{−rt} X0 + σ ∫_0^t e^{−r(t−s)} dWs .
Thus:
E[Xt ] = e^{−rt} X0 .
(b) Variance of Xt : By the Itô isometry,
Var(Xt ) = σ^2 Var( ∫_0^t e^{−r(t−s)} dWs ) = σ^2 ∫_0^t e^{−2r(t−s)} ds.
Simplify:
∫_0^t e^{−2r(t−s)} ds = e^{−2rt} ∫_0^t e^{2rs} ds.
Evaluate the integral:
∫_0^t e^{2rs} ds = (1/(2r)) ( e^{2rt} − 1 ).
Substitute back:
Var(Xt ) = σ^2 e^{−2rt} · (1/(2r)) ( e^{2rt} − 1 ).
Simplify:
Var(Xt ) = (σ^2 /(2r)) ( 1 − e^{−2rt} ).
As t → ∞, the stationary distribution is:
Xt ∼ N ( 0, σ^2 /(2r) ).
Summary
The solution to the Ornstein-Uhlenbeck process is:
Xt = e^{−rt} X0 + σ ∫_0^t e^{−r(t−s)} dWs .
- Mean:
E[Xt ] = e^{−rt} X0 .
- Variance:
Var(Xt ) = (σ^2 /(2r)) ( 1 − e^{−2rt} ).
- Stationary distribution:
Xt ∼ N ( 0, σ^2 /(2r) ).
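These formulas can be checked against an Euler-Maruyama simulation of dX = −rX dt + σ dW. The parameters and step size below are illustrative; the discretization introduces a small O(dt) bias, so the tolerances are loose.

```python
import math
import random

# Euler-Maruyama simulation of the OU process dX = -r X dt + sigma dW,
# compared with the closed-form mean and variance.
random.seed(5)
r, sigma, X0 = 1.0, 1.0, 2.0
t_end, dt, n_paths = 2.0, 0.01, 10_000
n_steps = int(t_end / dt)

finals = []
for _ in range(n_paths):
    x = X0
    for _ in range(n_steps):
        x += -r * x * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    finals.append(x)

mean_sim = sum(finals) / n_paths
var_sim = sum((x - mean_sim) ** 2 for x in finals) / (n_paths - 1)

mean_exact = math.exp(-r * t_end) * X0
var_exact = sigma ** 2 * (1 - math.exp(-2 * r * t_end)) / (2 * r)
assert abs(mean_sim - mean_exact) < 0.05
assert abs(var_sim - var_exact) < 0.05
print(mean_sim, var_sim)
```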
Chapter 5
Applications in Finance
• If C = 15:
x = 1, y = −3.
Initial cost:
100 − 45 = 55.
Value of holdings at time 1 is:
50.
Guaranteed profit:
55 − 50 = 5.
5.1.5 Conclusion
A sure-win betting scheme is called an arbitrage (risk-free profit). Thus, the only option
cost C that does not result in arbitrage is:
C = 50/3.
• B: Bonds at t = 0.
The value of the portfolio at t = 0:
V0 (Π) = ∆S0 + B.
5.2.7 Summary
The Black-Scholes approach shows that the fair price of a call option can be determined
by constructing a replicating portfolio consisting of stock and risk-free bonds. In this
example, ∆ = 0.4 and B = −30.48 provide the appropriate hedge.
d < 1 + r < u.
2. Filtration:
• ∆: Number of shares
• b: Number of bonds
Solving for ∆:
∆ = (Xu − Xd ) / ((u − d) S0 ),   (3)
and for b:
b = (u Xd − d Xu ) / ((1 + r)(u − d)).   (4)
Equations (3) and (4) provide a perfect hedge for X, and its time t = 0 value is:
V0 = ∆S0 + b = (Xu − Xd )/(u − d) + (u Xd − d Xu )/((1 + r)(u − d)).
Introducing the risk-neutral probabilities
p∗ = (1 + r − d)/(u − d),   1 − p∗ = (u − (1 + r))/(u − d),
the fair price of the claim X at t = 0 is:
C = (1/(1 + r)) [ p∗ Xu + (1 − p∗ ) Xd ].
Calculate:
p∗ = (1 + r − d)/(u − d) = (1.25 − 0.5)/(1.75 − 0.5) = 0.6,   1 − p∗ = 0.4.
The fair price of the European call is:
C = (1/(1 + r)) [ p∗ Xu + (1 − p∗ ) Xd ],
where Xu = 0.75 and Xd = 0. Thus:
C = (1/1.25) [0.6 · 0.75 + 0.4 · 0] = 0.45/1.25 = 0.36.
Portfolio Value at t = 0
V0 = ∆S0 + b = 0.6 × 1 − 0.24 = 0.36.
Portfolio Value at t = 1
Case 1: If S1 = uS0 = 1.75:
V1 = ∆S1 + b(1 + r) = 0.6 · 1.75 − 0.24 · 1.25 = 0.75.
Case 2: If S1 = dS0 = 0.5:
V1 = 0.6 · 0.5 − 0.24 · 1.25 = 0.
Replication Verification
In both cases:
V1 (u) = (S1 − k)+ = 0.75, V1 (d) = (S1 − k)+ = 0.
Thus, the replicating portfolio exactly matches the option's payoff at t = 1.
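The numbers in this example can be reproduced in a few lines:

```python
# One-period binomial hedge from the example: S0 = 1, u = 1.75, d = 0.5,
# r = 0.25, call payoffs Xu = 0.75 (up) and Xd = 0 (down).
S0, u, d, r = 1.0, 1.75, 0.5, 0.25
Xu, Xd = 0.75, 0.0

delta = (Xu - Xd) / ((u - d) * S0)               # shares held
b = (u * Xd - d * Xu) / ((1 + r) * (u - d))      # bonds held
p_star = (1 + r - d) / (u - d)                   # risk-neutral probability

V0 = delta * S0 + b                              # replication cost
C = (p_star * Xu + (1 - p_star) * Xd) / (1 + r)  # risk-neutral price

assert abs(V0 - C) < 1e-12                       # the two valuations agree
# The portfolio replicates the payoff in both states:
assert abs(delta * u * S0 + b * (1 + r) - Xu) < 1e-12
assert abs(delta * d * S0 + b * (1 + r) - Xd) < 1e-12
print(delta, b, p_star, C)
```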
Summary
• At t = 0, V0 = 0.36 (matches the call price C).
Stock Price Tree: 1 → 1.75 (up) or 0.5 (down).   Call Price Tree: 0.36 → 0.75 (up) or 0 (down).
V0 = ∆S0 + b = 0 =⇒ b = −∆S0
Here:
• ∆: Number of stocks in the portfolio,
Portfolio Value at t = 1
At time t = 1, the portfolio value is:
V1 = ∆S1 + b(1 + r)
Conclusion
The condition d < 1 + r < u ensures no arbitrage exists in the binomial market. This
framework guarantees that the financial model is fair and reflects realistic market behav-
ior, avoiding arbitrage opportunities.
Assumptions
1. The market satisfies the No-Arbitrage (NA) condition: d < 1 + r < u.
• At t = 0: S0 = 120.
• At t = 1: S1 = uS0 = 180 (up) or S1 = dS0 = 60 (down).
• At t = 2:
S2 = 270 if the price moves up twice, 90 if it moves up once and down once (in either order), 30 if it moves down twice.
The risk-neutral probability is
p∗ = (1 + r − d)/(u − d) = (1 − 0.5)/(1.5 − 0.5) = 0.5,   1 − p∗ = 0.5.
The option payoff at maturity is max(S2 − k, 0):
V0 = ∆1 × S0 + b1 = 52.5.
Conclusion
The two-period binomial model allows us to replicate the option by creating a hedging
portfolio (∆ and b) at each step, ensuring the portfolio value matches the option payoff.
This framework is fundamental for understanding multi-period extensions and is widely
used in option pricing, particularly in discrete time.
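The backward induction can be sketched as follows. The strike is not restated in this excerpt, so k = 80 below is a hypothetical value, chosen because it reproduces the quoted V0 = 52.5; all other parameters (S0 = 120, u = 1.5, d = 0.5, r = 0, p* = 0.5) come from the example.

```python
# Two-period binomial pricing by backward induction.  The strike k = 80 is an
# ASSUMPTION (not restated in this excerpt), chosen to reproduce V0 = 52.5.
S0, u, d, r, k = 120.0, 1.5, 0.5, 0.0, 80.0
p_star = (1 + r - d) / (u - d)          # = 0.5

def call_payoff(s):
    return max(s - k, 0.0)

# Terminal values at t = 2.
V_uu = call_payoff(S0 * u * u)          # price moved up twice
V_ud = call_payoff(S0 * u * d)          # one up move, one down move
V_dd = call_payoff(S0 * d * d)          # price moved down twice

# Step back to t = 1, then to t = 0, discounting risk-neutral expectations.
V_u = (p_star * V_uu + (1 - p_star) * V_ud) / (1 + r)
V_d = (p_star * V_ud + (1 - p_star) * V_dd) / (1 + r)
V0 = (p_star * V_u + (1 - p_star) * V_d) / (1 + r)

# Hedge ratio over the first period and the matching bond position.
delta_1 = (V_u - V_d) / (S0 * u - S0 * d)
b_1 = V0 - delta_1 * S0

print(V0, delta_1, b_1)
```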