
Financial Mathematics

Numerical Methods

Jaime Casassus

Instituto de Economía
Pontificia Universidad Católica de Chile



Table of Contents

1 Monte Carlo Simulation

2 Finite Difference Methods



Monte Carlo Integration
• Suppose then that we want to compute
  θ = \int_0^1 g(x)\,dx

• We can use Monte-Carlo simulation by noting that

θ = E[g (U)] where U ∼ U(0, 1)


• Can use this to estimate θ as follows:
◦ Generate U_1, U_2, \ldots, U_n ∼ IID U(0, 1)
◦ Estimate θ with

  \hat{θ}_n = \frac{g(U_1) + \cdots + g(U_n)}{n}

• There are two reasons why \hat{θ}_n is a good estimator:
◦ \hat{θ}_n is unbiased, i.e., E[\hat{θ}_n] = θ
◦ \hat{θ}_n is consistent, i.e., \hat{θ}_n → θ as n → ∞ with probability 1.
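• A minimal code sketch of this estimator (not from the slides; the integrand g(x) = x**2 is only an
  illustrative choice, with true integral 1/3):

import numpy as np

# Monte Carlo estimate of θ = \int_0^1 g(x) dx via (g(U_1) + ... + g(U_n)) / n
def g(x):
    return x**2                          # illustrative integrand (assumption)

n = 100000
U = np.random.uniform(0.0, 1.0, n)       # U_1, ..., U_n ~ IID U(0, 1)
theta_hat = np.mean(g(U))                # unbiased and consistent estimator of θ
print(theta_hat)                         # should be close to 1/3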
Monte Carlo Integration (cont.)
• Can also apply Monte Carlo integration to more general problems.

• Suppose we want to estimate


  θ = \iint_A g(x, y)\, f(x, y)\, dx\, dy

where f (x, y ) is a density function on A.

• Then observe that θ = E[g (X , Y )] where X , Y have joint density f (x, y ).

• To estimate θ using simulation we simply generate n random vectors (X, Y) with joint
  density f(x, y) and then estimate θ with

  \hat{θ}_n = \frac{g(X_1, Y_1) + \cdots + g(X_n, Y_n)}{n}
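• A minimal sketch of this more general case (illustrative assumptions, not from the slides): take
  A = [0, 1] × [0, 1] with f(x, y) = 1 (independent uniforms) and g(x, y) = x·y, whose expectation is 1/4.

import numpy as np

# Estimate θ = E[g(X, Y)] by averaging g over simulated (X_i, Y_i) pairs
def g(x, y):
    return x * y                         # illustrative choice of g (assumption)

n = 100000
X = np.random.uniform(0.0, 1.0, n)       # X_i ~ U(0, 1)
Y = np.random.uniform(0.0, 1.0, n)       # Y_i ~ U(0, 1), independent of X_i
theta_hat = np.mean(g(X, Y))
print(theta_hat)                         # should be close to 0.25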



Example: Black-Scholes option pricing
• We want to estimate the price of a European Call on the stock:

  C(0) = E\left[ e^{-rT} \max[S(T) − K, 0] \right]

• when the price of the stock follows a Geometric Brownian Motion:


  S(t) = S(0)\, e^{(r − \frac{σ^2}{2})t + σ W(t)}, \quad\text{so}\quad S(T) = S(0)\, e^{(r − \frac{σ^2}{2})T + σ x \sqrt{T}}

  where x ∼ N(0, 1).


• Let

  g(x) = e^{-rT} \max\left[S(0)\, e^{(r − \frac{σ^2}{2})T + σ x \sqrt{T}} − K, 0\right]
• We generate n standard normal random variables x and estimate:

  \widehat{C(0)}_n = \frac{g(x_1) + \cdots + g(x_n)}{n}
Example: Black-Scholes-Merton Formula (cont.)
import matplotlib.pyplot as plt
import numpy as np

S0 = 250       # spot price
r = 0.03       # risk-free rate
sigma = 0.3    # volatility
T = 1          # maturity (years)
K = 200        # strike

n = 100000     # number of simulations

x = np.random.normal(0, 1, n)

# Simulate S(T) directly from its known terminal distribution
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * x * np.sqrt(T))

# Discounted payoffs of the call
g = np.exp(-r * T) * np.fmax(ST - K, 0)

C0h = np.mean(g)   # Monte Carlo estimate of the call price
C0h



Example: Black-Scholes option pricing (cont.)
• We can also simulate option values for a vector of spot prices.
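• A rough sketch of one way to do this (not the original listing; it reuses the parameters of the
  previous code and an illustrative grid of spot prices):

import numpy as np
import matplotlib.pyplot as plt

# Estimate the call price for a whole vector of spot prices with one set of
# normal draws, using numpy broadcasting. Parameters match the earlier listing;
# the grid of spot prices is an illustrative choice.
r, sigma, T, K, n = 0.03, 0.3, 1, 200, 100000
S0_grid = np.linspace(150, 350, 21)

x = np.random.normal(0, 1, n)
# rows index spot prices, columns index simulations
ST = S0_grid[:, None] * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * x)
g = np.exp(-r * T) * np.fmax(ST - K, 0)
C0_grid = g.mean(axis=1)                 # one price estimate per S(0)

plt.plot(S0_grid, C0_grid)
plt.xlabel("S(0)")
plt.ylabel("Call price estimate")
plt.show()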



Simulating SDEs
• Have a Stochastic Differential Equation (SDE) of the form:

  dX_t = µ(t, X_t)\,dt + σ(t, X_t)\,dW_t

• We wish to simulate values of XT but we don’t know its distribution.


• So simulate a discretized version of the SDE:

  \{\hat{X}_0, \hat{X}_h, \hat{X}_{2h}, \ldots, \hat{X}_{mh}\}

  where h is a constant step-size and m = T/h is the number of time steps.
• The simplest and most commonly used scheme is the Euler scheme (sketched in code below):

  \hat{X}_{kh} = \hat{X}_{(k−1)h} + µ((k − 1)h, \hat{X}_{(k−1)h})\,h + σ((k − 1)h, \hat{X}_{(k−1)h})\,\sqrt{h}\,ϵ_k

  where the ϵ_k's are IID N(0, 1).
• We only care about XT , but we still need to generate intermediate values, so simulating SDEs is
computationally intensive.
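• A minimal code sketch of the Euler scheme for a generic SDE (not from the slides; the drift,
  diffusion and parameters below are illustrative placeholders):

import numpy as np

# Euler scheme for dX_t = mu(t, X_t) dt + sigma(t, X_t) dW_t
def euler_path(x0, mu, sigma, T, m):
    h = T / m                        # constant step-size
    X = np.empty(m + 1)
    X[0] = x0
    for k in range(1, m + 1):
        t = (k - 1) * h
        eps = np.random.normal()     # eps_k ~ N(0, 1)
        X[k] = X[k - 1] + mu(t, X[k - 1]) * h + sigma(t, X[k - 1]) * np.sqrt(h) * eps
    return X

# Illustrative mean-reverting example (placeholder drift and diffusion)
path = euler_path(x0=1.0,
                  mu=lambda t, x: 0.5 * (1.0 - x),
                  sigma=lambda t, x: 0.2,
                  T=1.0, m=100)
XT = path[-1]                        # one simulated draw of X_T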
Example: Black-Scholes option pricing
• We want to estimate the price of a European Call on the stock when the price of the stock
follows a Geometric Brownian Motion:

dSt = r St dt + σ St dWt

• The Euler scheme for this process is:

  \hat{S}_{kh} = \hat{S}_{(k−1)h} + r\,\hat{S}_{(k−1)h}\,h + σ\,\hat{S}_{(k−1)h}\,\sqrt{h}\,ϵ_k

  where the ϵ_k's are IID N(0, 1).

• Generate a simulated path \{\hat{S}_0, \hat{S}_h, \hat{S}_{2h}, \ldots, \hat{S}_{mh}\} to get one draw of \hat{S}_T.

• Define g(S_T) = e^{-rT} \max[S_T − K, 0], generate n random variables \hat{S}_{T,i} to estimate:

  \widehat{C(0)}_n = \frac{g(\hat{S}_{T,1}) + g(\hat{S}_{T,2}) + \cdots + g(\hat{S}_{T,n})}{n}



Example: Black-Scholes option pricing (cont.)



Example: Black-Scholes-Merton Formula (cont.)
# assumes numpy as np is imported and S0, r, sigma, T, K, n, m and the step
# size h = T/m are defined as in the earlier listings
def simulate_path(S0, r, sigma, m, h):
    # Euler scheme for dS = r S dt + sigma S dW
    S = [S0]
    for j in range(m):
        e = np.random.normal(0, 1)
        Sj = S[-1] + r * S[-1] * h + sigma * S[-1] * np.sqrt(h) * e
        S.append(Sj)
    return S

ST = []
for i in range(n):
    S = simulate_path(S0, r, sigma, m, h)
    ST.append(S[-1])       # keep only the terminal value S(T)

g = np.exp(-r * T) * np.fmax(np.array(ST) - K, 0)

C0h = np.mean(g)           # Monte Carlo estimate of the call price



Example: Option Pricing Under Heston
• Consider Heston’s stochastic volatility model:
  dS_t = r\,S_t\,dt + \sqrt{V_t}\,S_t\,dW_t^S
  dV_t = κ(θ − V_t)\,dt + σ\,\sqrt{V_t}\,dW_t^V

  where dW_t^S\,dW_t^V = ρ\,dt. Note that dW_t^V can be expressed in terms of independent Brownian
  motions:

  dW_t^V = ρ\,dW_t^S + \sqrt{1 − ρ^2}\,dW_t^1

• The Euler scheme for this joint process is:

  \hat{S}_{kh} = \hat{S}_{(k−1)h} + r\,\hat{S}_{(k−1)h}\,h + \sqrt{\hat{V}_{(k−1)h}}\,\hat{S}_{(k−1)h}\,\sqrt{h}\,ϵ_k^S
  \hat{V}_{kh} = \hat{V}_{(k−1)h} + κ(θ − \hat{V}_{(k−1)h})\,h + σ\,\sqrt{\hat{V}_{(k−1)h}}\,\sqrt{h}\,ϵ_k^V

  where ϵ_k^V = ρ\,ϵ_k^S + \sqrt{1 − ρ^2}\,ϵ_k^1 and all the ϵ_k^S's and ϵ_k^1's are IID N(0, 1).

• Generate n simulated paths \{(\hat{S}_0, \hat{V}_0), (\hat{S}_h, \hat{V}_h), \ldots, (\hat{S}_{mh}, \hat{V}_{mh})\} and evaluate the g(\hat{S}_{T,i})'s to get \widehat{C(0)}_n.
Example: Option Pricing Under Heston (cont.)
# assumes numpy as np is imported and S0, V0, r, kappa, theta, sigma, rho, n,
# m and h are defined
def simulate_path(S0, V0, r, kappa, theta, sigma, rho, m, h):
    # Joint Euler scheme for Heston: simulate (S, V) step by step
    SV = [(S0, V0)]
    for j in range(m):
        eS = np.random.normal(0, 1)
        ei = np.random.normal(0, 1)
        eV = rho * eS + np.sqrt(1 - rho**2) * ei   # correlated shock for V
        Si = SV[-1][0]
        Vi = SV[-1][1]
        Sj = Si + r * Si * h + np.sqrt(Vi) * Si * np.sqrt(h) * eS
        Vj = Vi + kappa * (theta - Vi) * h + sigma * np.sqrt(Vi) * np.sqrt(h) * eV
        SV.append((Sj, Vj))
    return np.array(SV)

ST = []
for i in range(n):
    SV = simulate_path(S0, V0, r, kappa, theta, sigma, rho, m, h)
    ST.append(SV[-1][0])   # keep only the terminal stock price S(T)



Table of Contents

1 Monte Carlo Simulation

2 Finite Difference Methods



One-dimensional ODEs
• Simple ways to solve ODEs are based on evenly spaced grid points used to approximate the
  differential equation.
• This transforms a differential equation into a system of algebraic equations to solve.
• First, divide the interval [a, b] into n equal subintervals of length h



One-dimensional ODEs (cont.)
• The first derivative, y′(x), can be approximated in many ways

• Consider for now the “central derivative” and also the standard approximation for the second
derivative, y ′′ (x)
  \frac{dy}{dx} = \frac{y_{i+1} − y_{i−1}}{2h}  \quad\text{and}\quad  \frac{d^2 y}{dx^2} = \frac{y_{i−1} − 2y_i + y_{i+1}}{h^2}
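• A small check of these approximations in code (not from the slides; the test function y(x) = sin(x)
  and the step h are illustrative):

import numpy as np

# Central first-derivative and standard second-derivative approximations
h = 0.01
x = 1.0
y = np.sin                            # illustrative test function

dy_central = (y(x + h) - y(x - h)) / (2 * h)      # should be close to cos(1)
d2y = (y(x - h) - 2 * y(x) + y(x + h)) / h**2     # should be close to -sin(1)
print(dy_central, np.cos(x))
print(d2y, -np.sin(x))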
One-dimensional ODEs (cont.)
• Let's see the system of equations with an example: a rocket that we want to be 50 m off the
  ground 5 seconds after launch. The ODE is

  \frac{d^2 y}{dt^2} = −g
with the boundary conditions y (0) = 0 and y (5) = 50.
• Assume n = 10, therefore, h = 0.5. The finite difference equations are:
  y(0) = 0
  y_{i−1} − 2y_i + y_{i+1} = −g h^2   for i = 1, 2, \ldots, n − 1
  y(5) = 50
• or in matrix form

  \begin{pmatrix}
  1 &        &        &        &   \\
  1 & −2     & 1      &        &   \\
    & \ddots & \ddots & \ddots &   \\
    &        & 1      & −2     & 1 \\
    &        &        &        & 1
  \end{pmatrix}
  \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_{n−1} \\ y_n \end{pmatrix}
  =
  \begin{pmatrix} 0 \\ −g h^2 \\ \vdots \\ −g h^2 \\ 50 \end{pmatrix}



One-dimensional ODEs (cont.)
• Solving the system of equations
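• A minimal sketch of the solve in code (not the original listing; g = 9.81 m/s² is assumed):

import numpy as np

# Build and solve the finite difference system for the rocket problem
n = 10
h = 0.5
grav = 9.81                          # gravitational acceleration (assumption)

A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
A[0, 0] = 1.0;  b[0] = 0.0           # boundary condition y(0) = 0
A[n, n] = 1.0;  b[n] = 50.0          # boundary condition y(5) = 50
for i in range(1, n):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    b[i] = -grav * h**2              # y_{i-1} - 2 y_i + y_{i+1} = -g h^2

y = np.linalg.solve(A, b)            # heights y_0, ..., y_n on the grid
print(y)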



Two-dimensional PDEs
• If f (t, S) is a function of two variables, the PDE contains derivatives with respect to more than
one variable (e.g., as in the Black-Scholes PDE)

• In this case we have a 2D grid



Two-dimensional PDEs (cont.)
• When pricing options, the grid has the value of the option for different prices and maturities.

• But we know some points on the grid

• The discretized system of equations relates the inside points of the grid; however, there are
  different ways of doing this.
The Explicit Finite Difference Method
• Consider the Black-Scholes PDE
  0 = \frac{1}{2} f_{SS}(t, S(t))\,σ^2 S(t)^2 + f_S(t, S(t))\,r S(t) + f_t(t, S(t)) − r f(t, S(t))

  with boundary condition f(T) = \max[S(T) − K, 0]
• Use a backward approximation for f_t(t, S(t)):

  \frac{∂f}{∂t} = \frac{f_{i,j} − f_{i−1,j}}{δ_t}
• Therefore, inside the grid the PDE gives

  0 = \frac{1}{2}\,\frac{f_{i,j+1} − 2f_{i,j} + f_{i,j−1}}{δ_S^2}\,σ^2 (j δ_S)^2 + r (j δ_S)\,\frac{f_{i,j+1} − f_{i,j−1}}{2 δ_S} + \frac{f_{i,j} − f_{i−1,j}}{δ_t} − r f_{i,j}

• The PDE relates the known values at time t = i δ_t (i.e., f_{i,j−1}, f_{i,j} and f_{i,j+1}) with f_{i−1,j}.
• The system of equations is really simple, but the explicit finite difference method can be unstable
  (a code sketch follows below).
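• A rough code sketch of the explicit scheme for a European call (not from the slides; the grid
  sizes and the boundary values at S = 0 and S = S_max are my own simple choices):

import numpy as np

# Explicit finite difference scheme for a European call under Black-Scholes.
# Grid: S_j = j*dS for j = 0..M, stepping backwards in time from T to 0.
S0, K, r, sigma, T = 250.0, 200.0, 0.03, 0.3, 1.0
M, N = 200, 20000                  # price steps and time steps (N large for stability)
S_max = 3 * K                      # illustrative truncation of the price grid
dt = T / N

j = np.arange(1, M)                # interior nodes j = 1, ..., M-1
a = 0.5 * dt * (sigma**2 * j**2 - r * j)
b = 1.0 - dt * (sigma**2 * j**2 + r)
c = 0.5 * dt * (sigma**2 * j**2 + r * j)

S = np.linspace(0.0, S_max, M + 1)
f = np.fmax(S - K, 0.0)            # terminal condition f(T, S) = max(S - K, 0)

for i in range(N):                 # step backwards in time
    t = T - (i + 1) * dt
    f_new = np.empty(M + 1)
    f_new[1:M] = a * f[0:M-1] + b * f[1:M] + c * f[2:M+1]
    f_new[0] = 0.0                               # call is worthless at S = 0
    f_new[M] = S_max - K * np.exp(-r * (T - t))  # deep in-the-money value (assumption)
    f = f_new

C0 = np.interp(S0, S, f)           # interpolate the price at today's spot
print(C0)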



The Implicit Finite Difference Method
• Here we consider a forward approximation for f_t(t, S(t)):

  \frac{∂f}{∂t} = \frac{f_{i+1,j} − f_{i,j}}{δ_t}
• Therefore, inside the grid the PDE gives

  0 = \frac{1}{2}\,\frac{f_{i,j+1} − 2f_{i,j} + f_{i,j−1}}{δ_S^2}\,σ^2 (j δ_S)^2 + r (j δ_S)\,\frac{f_{i,j+1} − f_{i,j−1}}{2 δ_S} + \frac{f_{i+1,j} − f_{i,j}}{δ_t} − r f_{i,j}

• The PDE relates the known values at time t = (i + 1) δ_t (i.e., f_{i+1,j}) with f_{i,j−1}, f_{i,j} and f_{i,j+1}.

• A system of equations needs to be solved with the proper boundary conditions.


The Crank-Nicolson Finite Difference Method
• The Crank-Nicolson finite difference method represents an average of the implicit method and the
explicit method:

• This method considers f_{i−\frac{1}{2},j}, but a price will not be calculated for this node. Rather, it is used as
  a mathematical convenience that will not appear in the final equations.
• f_{i−\frac{1}{2},j} relates in an explicit way with the future values f_{i,j±1} and in an implicit way with the
  previous values f_{i−1,j±1}. It uses all three of the left-side nodes based on the values of all three
  of the right-side nodes.
• It converges at a faster rate than the explicit/implicit methods.
Example: American Put Option Pricing
• Consider an American Put Option with maturity T .
• We know that in the continuation region (S(t) > B(t)) the American option solves:
  0 = \frac{1}{2} P_{SS}(S(t), t)\,σ^2 S(t)^2 + P_S(S(t), t)\,r S(t) + P_t(S(t), t) − r P(S(t), t)
• And in the exercise region (S(t) ≤ B(t)):

P(S(t), t) = max[K − S(t), 0]

• Boundary condition for time: the price of the put option at maturity T is

P(S(T ), T ) = max[K − S(T ), 0]

and the exercise boundary B(T ) = K .


• Boundary condition for prices: we will also assume that

  \lim_{S(t)→0} P_{SS}(S(t), t) = \lim_{S(t)→∞} P_{SS}(S(t), t) = 0



Example: American Put Option Pricing (cont.)
• Let’s consider the Implicit Finite Difference Method:
  0 = \frac{1}{2}\,\frac{P_{i,j+1} − 2P_{i,j} + P_{i,j−1}}{δ_S^2}\,σ^2 (j δ_S)^2 + r (j δ_S)\,\frac{P_{i,j+1} − P_{i,j−1}}{2 δ_S} + \frac{P_{i+1,j} − P_{i,j}}{δ_t} − r P_{i,j}   ∀ i < N, j < M

• Reordering:

  \left(\frac{σ^2 j^2}{2} − \frac{r j}{2}\right) P_{i,j−1} − \left(σ^2 j^2 + \frac{1}{δ_t} + r\right) P_{i,j} + \left(\frac{σ^2 j^2}{2} + \frac{r j}{2}\right) P_{i,j+1} = −\frac{P_{i+1,j}}{δ_t}   ∀ i < N, j < M

• Boundary conditions:
  P_{N,j} = \max[K − j δ_S, 0]   ∀ j
  P_{i,−1} − 2P_{i,0} + P_{i,1} = 0   ∀ i
  P_{i,M−1} − 2P_{i,M} + P_{i,M+1} = 0   ∀ i



Example: American Put Option Pricing (cont.)
• Writing the equations in matrix form for any time t = i δt :



Example: American Put Option Pricing (cont.)
• Eliminating the boundaries at j = −1 and j = M + 1:



Example: American Put Option Pricing (cont.)
• Eliminating the lower diagonal with Gaussian elimination:

• We can easily solve for P_{i,0}.

• If P_{i,0} < \max[K − S_0, 0] then we set P_{i,0} = \max[K − S_0, 0].

• We repeat the algorithm for P_{i,1}, P_{i,2} and so on (a code sketch of the full scheme follows below).
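• A rough code sketch of the full scheme (my own simplified variant: each time step solves the full
  tridiagonal system with np.linalg.solve and only then applies the early-exercise constraint, and it
  pins the prices at S = 0 and S = S_max instead of using the zero-second-derivative boundary
  conditions above; parameters are illustrative):

import numpy as np

# Implicit finite difference scheme for an American put (simplified variant)
K, r, sigma, T = 200.0, 0.03, 0.3, 1.0
M, N = 200, 200                    # price and time steps
S_max = 3 * K
dS, dt = S_max / M, T / N
S = np.linspace(0.0, S_max, M + 1)
intrinsic = np.fmax(K - S, 0.0)

P = intrinsic.copy()               # terminal condition P(T, S) = max(K - S, 0)

# Tridiagonal coefficients for interior nodes j = 1, ..., M-1
j = np.arange(1, M)
a = 0.5 * sigma**2 * j**2 - 0.5 * r * j            # multiplies P_{i,j-1}
b = -(sigma**2 * j**2 + 1.0 / dt + r)              # multiplies P_{i,j}
c = 0.5 * sigma**2 * j**2 + 0.5 * r * j            # multiplies P_{i,j+1}

A = np.zeros((M + 1, M + 1))
A[0, 0] = 1.0                      # P_{i,0} pinned by the S = 0 boundary
A[M, M] = 1.0                      # P_{i,M} pinned by the large-S boundary
for k in range(1, M):
    A[k, k - 1], A[k, k], A[k, k + 1] = a[k - 1], b[k - 1], c[k - 1]

for i in range(N):                 # step backwards in time from T to 0
    rhs = np.empty(M + 1)
    rhs[1:M] = -P[1:M] / dt        # right-hand side -P_{i+1,j}/δ_t
    rhs[0] = K                     # put is worth K at S = 0 (exercise immediately)
    rhs[M] = 0.0                   # put is worthless for very large S
    P = np.linalg.solve(A, rhs)
    P = np.fmax(P, intrinsic)      # early-exercise constraint

print(np.interp(250.0, S, P))      # e.g. price at an illustrative spot S(0) = 250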



Example: American Put Option Pricing (cont.)
• The price of the American Put Option today (t = 0) conditional on today’s spot price is:



Example: American Put Option Pricing (cont.)
• And the early exercise boundary B(t) is given by:

