
Chapter 5 Finite Difference Methods

Math6911 S08, HM Zhu


References

1. Chapters 5 and 9, Brandimarte

2. Section 17.8, Hull

3. Chapter 7, “Numerical analysis”, Burden and Faires

Outline

• Finite difference (FD) approximation to the derivatives


• Explicit FD method
• Numerical issues
• Implicit FD method
• Crank-Nicolson method
• Dealing with American options
• Further comments

Chapter 5 Finite Difference Methods

5.1 Finite difference approximations


Finite-difference mesh

• Aim: approximate the values of the continuous function f(t, S) on a set of discrete points in the (t, S) plane
• Divide the S-axis into equally spaced nodes a distance ∆S apart, and the t-axis into equally spaced nodes a distance ∆t apart
• The (t, S) plane becomes a mesh with mesh points (i∆t, j∆S)
• We are interested in the values of f(t, S) at the mesh points (i∆t, j∆S), denoted as
  f_{i,j} = f(i∆t, j∆S)
The mesh for finite-difference approximation

f_{i,j} = f(i∆t, j∆S)

[Figure: the (t, S) mesh; S runs vertically up to Smax = M∆S, t runs horizontally up to T = N∆t, with a generic mesh point at (i∆t, j∆S).]
Black-Scholes equation for a European option with value V(S, t)

∂V/∂t + (1/2)σ²S² ∂²V/∂S² + rS ∂V/∂S − rV = 0    (5.1)

where 0 < S < +∞ and 0 ≤ t < T,
with proper final and boundary conditions.

Notes:
This is a second-order parabolic partial differential equation, solved backward in time from the final condition at t = T.
Its solution is sufficiently well behaved, i.e. the problem is well-posed.


Finite difference approximations

The basic idea of FDM is to replace the partial derivatives by approximations obtained from Taylor expansions near the point of interest.

For example,
∂f(S,t)/∂t = lim_{∆t→0} [f(S, t+∆t) − f(S, t)]/∆t ≈ [f(S, t+∆t) − f(S, t)]/∆t
for small ∆t, using the Taylor expansion at the point (S, t):
f(S, t+∆t) = f(S, t) + [∂f(S,t)/∂t] ∆t + O((∆t)²)
Forward-, backward-, and central-difference approximations to 1st-order derivatives

[Figure: stencil points t − ∆t, t, t + ∆t; backward uses (t − ∆t, t), forward uses (t, t + ∆t), central uses (t − ∆t, t + ∆t).]

Forward:  ∂f(t,S)/∂t = [f(t+∆t, S) − f(t, S)]/∆t + O(∆t)
Backward: ∂f(t,S)/∂t = [f(t, S) − f(t−∆t, S)]/∆t + O(∆t)
Central:  ∂f(t,S)/∂t = [f(t+∆t, S) − f(t−∆t, S)]/(2∆t) + O((∆t)²)
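The orders of accuracy above are easy to check numerically. A small sketch (not from the slides; the test function sin and the step sizes are arbitrary choices of mine): halving ∆t roughly halves the forward-difference error but quarters the central-difference error.

```python
import math

def forward_diff(f, t, dt):
    # (f(t+dt) - f(t)) / dt : first-order accurate, error O(dt)
    return (f(t + dt) - f(t)) / dt

def central_diff(f, t, dt):
    # (f(t+dt) - f(t-dt)) / (2 dt) : second-order accurate, error O(dt^2)
    return (f(t + dt) - f(t - dt)) / (2 * dt)

t, exact = 1.0, math.cos(1.0)        # d/dt sin(t) = cos(t)
for dt in (0.1, 0.05):
    e_fwd = abs(forward_diff(math.sin, t, dt) - exact)
    e_cen = abs(central_diff(math.sin, t, dt) - exact)
    print(f"dt={dt}: forward error={e_fwd:.2e}, central error={e_cen:.2e}")
```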
Symmetric central-difference approximation to 2nd-order derivatives

∂²f(t,S)/∂S² = [f(t, S+∆S) − 2f(t, S) + f(t, S−∆S)]/(∆S)² + O((∆S)²)

Exercise: use Taylor expansions of f(t, S+∆S) and f(t, S−∆S) around the point (t, S):
f(t, S+∆S) = ?
f(t, S−∆S) = ?
Finite difference approximations

Forward difference:  ∂f/∂t ≈ (f_{i+1,j} − f_{i,j})/∆t,   ∂f/∂S ≈ (f_{i,j+1} − f_{i,j})/∆S
Backward difference: ∂f/∂t ≈ (f_{i,j} − f_{i−1,j})/∆t,   ∂f/∂S ≈ (f_{i,j} − f_{i,j−1})/∆S
Central difference:  ∂f/∂t ≈ (f_{i+1,j} − f_{i−1,j})/(2∆t),   ∂f/∂S ≈ (f_{i,j+1} − f_{i,j−1})/(2∆S)

As to the second derivative, we have:
∂²f/∂S² ≈ [(f_{i,j+1} − f_{i,j})/∆S − (f_{i,j} − f_{i,j−1})/∆S] / ∆S
        = (f_{i,j+1} − 2f_{i,j} + f_{i,j−1})/(∆S)²
Finite difference approximations

• Depending on which combination of schemes we use in discretizing the equation, we will have explicit, implicit, or Crank-Nicolson methods
• We also need to discretize the boundary and final conditions accordingly. For example, for a European call:

Final condition:
f_{N,j} = max(j∆S − K, 0), for j = 0, 1, ..., M

Boundary conditions:
f_{i,0} = 0 and f_{i,M} = Smax − K e^{−r(N−i)∆t}, for i = 0, 1, ..., N

where Smax = M∆S.
Chapter 5 Finite Difference Methods

5.2.1 Explicit Finite-Difference Method


Explicit Finite Difference Methods

In ∂f/∂t + rS ∂f/∂S + (1/2)σ²S² ∂²f/∂S² = rf, at the point (i∆t, j∆S), set

backward difference: ∂f/∂t ≈ (f_{i,j} − f_{i−1,j})/∆t
central difference:  ∂f/∂S ≈ (f_{i,j+1} − f_{i,j−1})/(2∆S)
and
∂²f/∂S² ≈ (f_{i,j+1} + f_{i,j−1} − 2f_{i,j})/(∆S)²,   rf = r f_{i,j},   S = j∆S


Explicit Finite Difference Methods

Rewriting the equation, we get an explicit scheme:

f_{i−1,j} = a*_j f_{i,j−1} + b*_j f_{i,j} + c*_j f_{i,j+1}    (5.2)
where
a*_j = (1/2)∆t(σ²j² − rj)
b*_j = 1 − ∆t(σ²j² + r)
c*_j = (1/2)∆t(σ²j² + rj)
for i = N−1, N−2, ..., 1, 0 and j = 1, 2, ..., M−1.
Numerical Computation Dependency

[Figure: on the (t, S) mesh, the value at ((i−1)∆t, j∆S) is computed from the three values at (i∆t, (j−1)∆S), (i∆t, j∆S), and (i∆t, (j+1)∆S).]
Implementation

1. Starting with the final values f_{N,j}, we apply (5.2) to solve for f_{N−1,j} for 1 ≤ j ≤ M−1. We use the boundary conditions to determine f_{N−1,0} and f_{N−1,M}.

2. Repeat the process to determine f_{N−2,j}, and so on.
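These two steps can be sketched in a few lines of code. A minimal pure-Python version of scheme (5.2) for a European put (the language choice is mine; the slides use MATLAB elsewhere). The put boundary values f_{i,0} = K e^{−r(N−i)∆t} and f_{i,M} = 0 are my assumptions, since the slides state only the call boundaries.

```python
import math

def explicit_fd_put(S0, K, r, sigma, T, Smax, dS, dt):
    M = int(round(Smax / dS))   # number of S steps
    N = int(round(T / dt))      # number of time steps
    # final condition: European put payoff at t = T
    f = [max(K - j * dS, 0.0) for j in range(M + 1)]
    for i in range(N - 1, -1, -1):               # step backward in time
        g = [0.0] * (M + 1)
        g[0] = K * math.exp(-r * (N - i) * dt)   # assumed put boundary at S = 0
        g[M] = 0.0                               # assumed put boundary at S = Smax
        for j in range(1, M):
            a = 0.5 * dt * (sigma**2 * j * j - r * j)
            b = 1.0 - dt * (sigma**2 * j * j + r)
            c = 0.5 * dt * (sigma**2 * j * j + r * j)
            g[j] = a * f[j - 1] + b * f[j] + c * f[j + 1]
        f = g
    return f[int(round(S0 / dS))]

# parameters of the worked example: T = 5/12, S0 = K = 50, sigma = 30%, r = 10%
price = explicit_fd_put(50, 50, 0.10, 0.30, 5 / 12, 100, 2, 5 / 1200)
print(price)
```

With Smax = 100, ∆S = 2, ∆t = 5/1200, this returns a value close to the $2.8288 quoted in the example that follows.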
Example

We compare the explicit finite difference solution for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446

EFD method with Smax = $100, ∆S = 2, ∆t = 5/1200: $2.8288
EFD method with Smax = $100, ∆S = 1, ∆t = 5/4800: $2.8406
Example (Stability)

We compare the explicit finite difference solution for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446

EFD method with Smax = $100, ∆S = 2, ∆t = 5/1200: $2.8288
EFD method with Smax = $100, ∆S = 1.5, ∆t = 5/1200: $3.1414
EFD method with Smax = $100, ∆S = 1, ∆t = 5/1200: −$2.8271×10²²
Chapter 5 Finite Difference Methods

5.2.2 Numerical Stability


Numerical Accuracy

The accuracy of a numerical solution depends on:
• The problem itself
• The discretization scheme used
• The numerical algorithm used
Conditioning Issue

Suppose we have a mathematically posed problem:
y = f(x)
where y is to be evaluated given an input x.
Let x* = x + δx for a small change δx.
If f(x*) is near f(x), then we call the problem well-conditioned. Otherwise, it is ill-posed/ill-conditioned.
Conditioning Issue

• The conditioning issue is related to the problem itself, not to the specific numerical algorithm; the stability issue is related to the numerical algorithm
• One cannot expect a good numerical algorithm to solve an ill-conditioned problem any more accurately than the data warrant
• But a bad numerical algorithm can produce poor solutions even to well-conditioned problems
Conditioning Issue

The concept "near" can be measured using further information about the particular problem:

|f(x) − f(x*)| / |f(x)| ≤ C |δx| / |x|    (f(x) ≠ 0)

where C is called the condition number of this problem. If C is large, the problem is ill-conditioned.
Floating Point Number & Error

Let x be any real number.
Infinite decimal expansion: x = ± 0.x_1 x_2 ... x_d ... × 10^e
Truncated floating point number: x ≈ fl(x) = ± 0.x_1 x_2 ... x_d × 10^e
where x_1 ≠ 0, 0 ≤ x_i ≤ 9,
d: an integer, the precision of the floating point system
e: a bounded integer
Floating point or roundoff error: fl(x) − x
Error Propagation

When additional calculations are done, these floating point errors accumulate.
Example: let x = −0.6667 and fl(x) = −0.667 × 10⁰, where d = 3.
Floating point error: fl(x) − x = −0.0003
Error propagation: fl(x)² − x² = 0.00040011
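The example can be reproduced with a toy floating-point system. A sketch (the rounding helper fl is my own hypothetical function, not part of any standard library):

```python
import math

def fl(x, d=3):
    # round x to d significant decimal digits: a toy floating-point system
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x))) + 1   # decimal exponent, x = +/- 0.x1x2... * 10^e
    scale = 10 ** (d - e)
    return round(x * scale) / scale

x = -0.6667
fx = fl(x)                 # -0.667
err = fx - x               # roundoff error: -0.0003
err_sq = fx**2 - x**2      # error after squaring: 0.00040011
print(fx, err, err_sq)
```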
Numerical Stability or Instability

Stability ensures that the error between the numerical solution and the exact solution remains bounded as the numerical computation progresses.
That is, f(x*) (the solution of a slightly perturbed problem) is near f*(x) (the computed solution).

Stability concerns the behavior of f_{i,j} − f(i∆t, j∆S) as the numerical computation progresses for fixed discretization steps ∆t and ∆S.
Convergence Issue

Convergence of the numerical algorithm concerns the behavior of f_{i,j} − f(i∆t, j∆S) as ∆t, ∆S → 0 for fixed values (i∆t, j∆S).

For a well-posed linear initial value problem:
Stability ⇔ Convergence
(Lax's equivalence theorem; Richtmyer and Morton, "Difference Methods for Initial Value Problems", 2nd ed., 1967)
Numerical Accuracy

These factors contribute to the accuracy of a numerical solution. We can find a "good" estimate if our problem is well-conditioned and the algorithm is stable:

Stable: f*(x) ≈ f(x*)
Well-conditioned: f(x*) ≈ f(x)
Together: f*(x) ≈ f(x)
Chapter 5 Finite Difference Methods

5.2.3 Financial Interpretation of Numerical Instability

Financial Interpretation of Instability
(Hull, pages 423-424)

If ∂f/∂S and ∂²f/∂S² are assumed to be the same at (i+1, j) as they are at (i, j), we obtain equations of the form:

f_{i,j} = â_j f_{i+1,j−1} + b̂_j f_{i+1,j} + ĉ_j f_{i+1,j+1}    (5.3)

where
â_j = [1/(1+r∆t)] ((1/2)∆t σ²j² − (1/2)∆t rj) = π_d/(1+r∆t)
b̂_j = [1/(1+r∆t)] (1 − σ²j²∆t) = π_0/(1+r∆t)
ĉ_j = [1/(1+r∆t)] ((1/2)∆t σ²j² + (1/2)∆t rj) = π_u/(1+r∆t)
for i = N−1, N−2, ..., 1, 0 and j = 1, 2, ..., M−1.
Explicit Finite Difference Methods

[Figure: trinomial stencil; f_{i,j} is computed from f_{i+1,j+1} (weight π_u), f_{i+1,j} (weight π_0), and f_{i+1,j−1} (weight π_d).]

These coefficients can be interpreted as probabilities multiplied by a discount factor. If one of these probabilities is negative, instability occurs.
Explicit Finite Difference Method as Trinomial Tree

Check the mean and variance of the increase in asset price during ∆t:

Expected value of the increase:
E[∆] = −∆S π_d + 0·π_0 + ∆S π_u = rj∆S ∆t = rS ∆t
Variance of the increment:
E[∆²] = (−∆S)² π_d + 0²·π_0 + (∆S)² π_u = σ²j²(∆S)² ∆t = σ²S² ∆t
Var[∆] = E[∆²] − E²[∆] = σ²S² ∆t − r²S² (∆t)² ≈ σ²S² ∆t

which is consistent with geometric Brownian motion in a risk-neutral world.


Change of Variable

Define Z = ln S. The Black-Scholes equation becomes

∂f/∂t + (r − σ²/2) ∂f/∂Z + (σ²/2) ∂²f/∂Z² = rf

The corresponding difference equation is

(f_{i+1,j} − f_{i,j})/∆t + (r − σ²/2)(f_{i+1,j+1} − f_{i+1,j−1})/(2∆Z) + (σ²/2)(f_{i+1,j+1} − 2f_{i+1,j} + f_{i+1,j−1})/(∆Z)² = r f_{i,j}

or
f_{i,j} = α*_j f_{i+1,j−1} + β*_j f_{i+1,j} + γ*_j f_{i+1,j+1}    (5.4)
Change of Variable

where
α*_j = [1/(1+r∆t)] [−(r − σ²/2)(∆t/(2∆Z)) + (σ²/2)(∆t/(∆Z)²)]
β*_j = [1/(1+r∆t)] [1 − σ² ∆t/(∆Z)²]
γ*_j = [1/(1+r∆t)] [(r − σ²/2)(∆t/(2∆Z)) + (σ²/2)(∆t/(∆Z)²)]

It can be shown that it is numerically most efficient if ∆Z = σ√(3∆t).
Reduced to Heat Equation

Get rid of the varying coefficients S and S² by the change of variables:

S = E e^x,  t = T − 2τ/σ²,  V(S, t) = E e^{−(1/2)(k−1)x − (1/4)(k+1)²τ} u(x, τ)
where k = r/(σ²/2).

Equation (5.1) becomes the heat equation:

∂u/∂τ = ∂²u/∂x²    (5.5)

for −∞ < x < +∞ and τ > 0.
Explicit Finite Difference Method

With u_n^m = u(n δx, m δτ), this involves solving a system of finite difference equations of the form:

(u_n^{m+1} − u_n^m)/δτ + O(δτ) = (u_{n+1}^m − 2u_n^m + u_{n−1}^m)/(δx)² + O((δx)²)

Ignoring terms of O(δτ) and O((δx)²), we can approximate this by

u_n^{m+1} = α u_{n+1}^m + (1 − 2α) u_n^m + α u_{n−1}^m

where α = δτ/(δx)², for −N⁻ ≤ n ≤ N⁺ and m = 0, 1, ..., M (= (σ²T/2)/δτ).
Stability and Convergence
(P. Wilmott, et al., Option Pricing)

Stability: the solution of Eqn (5.5) is
i) stable if 0 < α = δτ/(δx)² ≤ 1/2;  ii) unstable if α > 1/2

Convergence: if 0 < α ≤ 1/2, then the explicit finite-difference approximation converges to the exact solution as δτ, δx → 0 (in the sense that u_n^m → u(n δx, m δτ) as δτ, δx → 0).
Rate of convergence is O(δτ).
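The α ≤ 1/2 threshold is easy to observe numerically. A sketch (the grid size, the initial data sin(πx), and the step counts are arbitrary choices of mine), with zero boundary values held fixed:

```python
import math

def heat_explicit_step(u, alpha):
    # one explicit step: u_n^{m+1} = alpha*u_{n+1}^m + (1-2alpha)*u_n^m + alpha*u_{n-1}^m
    inner = [alpha * u[n + 1] + (1 - 2 * alpha) * u[n] + alpha * u[n - 1]
             for n in range(1, len(u) - 1)]
    return [u[0]] + inner + [u[-1]]   # keep the (zero) boundary values fixed

x = [n / 20 for n in range(21)]                 # delta x = 0.05
u0 = [math.sin(math.pi * xi) for xi in x]       # initial data; exact solution decays
results = {}
for alpha in (0.4, 0.6):                        # alpha = dtau / dx^2
    u = u0[:]
    for _ in range(200):
        u = heat_explicit_step(u, alpha)
    results[alpha] = max(abs(v) for v in u)
print(results)
```

With α = 0.4 the computed solution decays, as the exact solution does; with α = 0.6 roundoff in the highest-frequency grid mode is amplified at every step and the solution blows up.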
Chapter 5 Finite Difference Methods

5.3.1 Implicit Finite-Difference Method


Implicit Finite Difference Methods

In ∂f/∂t + rS ∂f/∂S + (1/2)σ²S² ∂²f/∂S² = rf, we use

forward difference: ∂f/∂t ≈ (f_{i+1,j} − f_{i,j})/∆t
central difference: ∂f/∂S ≈ (f_{i,j+1} − f_{i,j−1})/(2∆S)
and
∂²f/∂S² ≈ (f_{i,j+1} + f_{i,j−1} − 2f_{i,j})/(∆S)²,   rf = r f_{i,j}


Implicit Finite Difference Methods

Rewriting the equation, we get an implicit scheme:

a_j f_{i,j−1} + b_j f_{i,j} + c_j f_{i,j+1} = f_{i+1,j}    (5.6)
where
a_j = (1/2)∆t(−σ²j² + rj)
b_j = 1 + ∆t(σ²j² + r)
c_j = −(1/2)∆t(σ²j² + rj)
for i = N−1, N−2, ..., 1, 0 and j = 1, 2, ..., M−1.
Numerical Computation Dependency

[Figure: on the (t, S) mesh, the three unknowns at (i∆t, (j−1)∆S), (i∆t, j∆S), and (i∆t, (j+1)∆S) are coupled to the known value at ((i+1)∆t, j∆S).]
Implementation

Equation (5.6) can be rewritten in matrix form:

C f_i = f_{i+1} + b_i    (5.7)

where f_i and b_i are (M−1)-dimensional vectors:
f_i = [f_{i,1}, f_{i,2}, f_{i,3}, ..., f_{i,M−1}]ᵀ
b_i = [−a_1 f_{i,0}, 0, 0, ..., 0, −c_{M−1} f_{i,M}]ᵀ
and C is the (M−1)×(M−1) tridiagonal matrix with b_1, ..., b_{M−1} on the main diagonal, a_2, ..., a_{M−1} on the subdiagonal, and c_1, ..., c_{M−2} on the superdiagonal.
Implementation

1. Starting with the final values f_{N,j}, we solve the linear system (5.7) to obtain f_{N−1,j} for 1 ≤ j ≤ M−1, using LU factorization or iterative methods. We use the boundary conditions to determine f_{N−1,0} and f_{N−1,M}.

2. Repeat the process to determine f_{N−2,j}, and so on.
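Because C is tridiagonal, each time step can be solved in O(M) operations with the Thomas algorithm rather than a full LU factorization. A pure-Python sketch for a European put (the S = 0 boundary f_{i,0} = K e^{−r(N−i)∆t} and the S = Smax boundary f_{i,M} = 0 are my assumptions; the slides state only the call boundaries):

```python
import math

def thomas(sub, diag, sup, rhs):
    # Thomas algorithm: solve sub[k]*x[k-1] + diag[k]*x[k] + sup[k]*x[k+1] = rhs[k]
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for k in range(1, n):
        m = diag[k] - sub[k] * cp[k - 1]
        cp[k] = sup[k] / m
        dp[k] = (rhs[k] - sub[k] * dp[k - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for k in range(n - 2, -1, -1):
        x[k] = dp[k] - cp[k] * x[k + 1]
    return x

def implicit_fd_put(S0, K, r, sigma, T, Smax, dS, dt):
    M, N = int(round(Smax / dS)), int(round(T / dt))
    a = [0.5 * dt * (r * j - sigma**2 * j * j) for j in range(M + 1)]
    b = [1.0 + dt * (sigma**2 * j * j + r) for j in range(M + 1)]
    c = [-0.5 * dt * (sigma**2 * j * j + r * j) for j in range(M + 1)]
    f = [max(K - j * dS, 0.0) for j in range(M + 1)]   # payoff at t = T
    for i in range(N - 1, -1, -1):
        lo = K * math.exp(-r * (N - i) * dt)   # assumed put boundary at S = 0
        rhs = f[1:M]
        rhs[0] -= a[1] * lo                    # move the known boundary value to the RHS
        # f_{i,M} = 0 for a put, so the upper boundary contributes nothing
        f = [lo] + thomas(a[1:M], b[1:M], c[1:M], rhs) + [0.0]
    return f[int(round(S0 / dS))]

price = implicit_fd_put(50, 50, 0.10, 0.30, 5 / 12, 100, 2, 5 / 1200)
print(price)
```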
Example

We compare the implicit finite difference solution for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446

IFD method with Smax = $100, ∆S = 2, ∆t = 5/1200: $2.8194
IFD method with Smax = $100, ∆S = 1, ∆t = 5/4800: $2.8383
Example (Stability)

We compare the implicit finite difference solution for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446

IFD method with Smax = $100, ∆S = 2, ∆t = 5/1200: $2.8288
IFD method with Smax = $100, ∆S = 1.5, ∆t = 5/1200: $3.1325
IFD method with Smax = $100, ∆S = 1, ∆t = 5/1200: $2.8348
Implicit vs Explicit Finite Difference Methods

[Figure: stencils. Implicit method (always stable): the unknowns f_{i,j−1}, f_{i,j}, f_{i,j+1} are coupled to the known f_{i+1,j}. Explicit method: f_{i,j} is computed directly from the known f_{i+1,j−1}, f_{i+1,j}, f_{i+1,j+1}.]
Implicit vs Explicit Finite Difference Method

• The explicit finite difference method is equivalent to the trinomial tree approach:
  – Truncation error: O(∆t)
  – Stability: not always
• The implicit finite difference method is equivalent to a multinomial tree approach:
  – Truncation error: O(∆t)
  – Stability: always
Other Points on Finite Difference Methods

• It is generally better to use ln S rather than S as the underlying variable
• Improvements over the basic implicit and explicit methods:
  – Crank-Nicolson method: the average of the explicit and implicit FD methods, aiming to achieve
    • Truncation error: O((∆t)²)
    • Stability: always


Chapter 5 Finite Difference Methods

5.3.2 Solving a linear system using direct methods


Solve Ax = b

[Figure: the system A x = b, and various shapes of the matrix A: lower triangular, upper triangular, general.]
5.3.2.A Triangular Solvers


Example: 3×3 upper triangular system

[4 6 1; 0 1 1; 0 0 4] [x_1; x_2; x_3] = [100; 10; 20]

⇒ x_3 = 20/4 = 5
⇒ x_2 = 10 − x_3 = 5
⇒ 4x_1 = 100 − x_3 − 6x_2 = 65
∴ x_1 = 65/4
Solve an upper triangular system Ax = b

Row i of the system reads
(0 ... 0 a_{ii} ... a_{in}) x = a_{ii} x_i + Σ_{j>i} a_{ij} x_j = b_i,  i = 1, ..., n
⇒ x_n = b_n / a_{nn}
  x_i = (b_i − Σ_{j>i} a_{ij} x_j) / a_{ii},  i = n−1, ..., 1
Implementation

function x = UpperTriSolve(A, b)
n = length(b);
x(n) = b(n)/A(n,n);
for i = n-1:-1:1
    sum = b(i);
    for j = i+1:n
        sum = sum - A(i,j)*x(j);
    end
    x(i) = sum/A(i,i);
end
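The same back-substitution in Python (my direct translation of the MATLAB above, using 0-based indexing), checked on the 3×3 example from the earlier slide:

```python
def upper_tri_solve(A, b):
    # back substitution for an upper triangular system A x = b
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

# the 3x3 example: x1 = 65/4, x2 = 5, x3 = 5
print(upper_tri_solve([[4, 6, 1], [0, 1, 1], [0, 0, 4]], [100, 10, 20]))
```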
5.3.2.B Gauss Elimination


To solve Ax = b for a general A

[Figure: A = L * U.]

To solve Ax = b, solve two triangular systems:
Suppose A = LU. Then Ax = LUx = b, so
1) solve z from Lz = b
2) solve x from Ux = z
Gauss Elimination

Goal: make A an upper triangular matrix using fundamental row operations.

For example, to zero the elements in the lower triangular part of A:

1) Zero a_21: premultiply by E_21 = [1 0 0; −a_21/a_11 1 0; 0 0 1], which subtracts (a_21/a_11) times row 1 from row 2:

E_21 A = [a_11 a_12 a_13; 0 (a_22 − (a_21/a_11)a_12) (a_23 − (a_21/a_11)a_13); a_31 a_32 a_33]
Gauss Elimination

2) Zero a_31: premultiply by E_31 = [1 0 0; 0 1 0; −a_31/a_11 0 1], which subtracts (a_31/a_11) times row 1 from row 3:

E_31 E_21 A = [a_11 a_12 a_13; 0 (a_22 − (a_21/a_11)a_12) (a_23 − (a_21/a_11)a_13); 0 (a_32 − (a_31/a_11)a_12) (a_33 − (a_31/a_11)a_13)]

Relabeling the updated entries as a_22, a_23, a_32, a_33:

E_31 E_21 A = [a_11 a_12 a_13; 0 a_22 a_23; 0 a_32 a_33]
Gauss Elimination

3) Zero a_32: premultiply by E_32 = [1 0 0; 0 1 0; 0 −a_32/a_22 1]:

E_32 E_31 E_21 A = [a_11 a_12 a_13; 0 a_22 a_23; 0 0 (a_33 − (a_32/a_22)a_23)] ≡ U

Each E is lower triangular.
Gauss Elimination

Claim 1: E_32 E_31 E_21 is lower triangular with unit diagonal (it is a product of unit lower triangular matrices).

Claim 2: its inverse is also unit lower triangular, and the multipliers drop into place:
(E_32 E_31 E_21)^{-1} = [1 0 0; a_21/a_11 1 0; a_31/a_11 a_32/a_22 1]
LU Factorization

Therefore, through Gauss elimination, we have

E_32 E_31 E_21 A = U
A = (E_32 E_31 E_21)^{-1} U
A = LU

This is called LU factorization. When A is symmetric positive definite, we can write A = LLᵀ, which is called the Cholesky decomposition.
To solve Ax = b for a general A

Solving Ax = b becomes:
1) Use Gauss elimination to make A upper triangular, i.e.
   L^{-1} A x = L^{-1} b ⇒ Ux = z
2) Solve x from Ux = z

This suggests that when doing Gauss elimination, we can apply it directly to the augmented matrix [A b] associated with the linear system.
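A minimal sketch of the full pipeline — LU factorization without pivoting followed by the two triangular solves — on the 3×3 system used in the exercise that follows (pure Python rather than MATLAB, my choice; Doolittle form with unit diagonal in L):

```python
def lu_nopivot(A):
    # Doolittle LU without pivoting; assumes nonzero leading principal minors
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for i in range(n):
        for j in range(i + 1, n):
            m = U[j][i] / U[i][i]        # multiplier a_ji / a_ii
            L[j][i] = m
            for k in range(i, n):
                U[j][k] -= m * U[i][k]   # row j <- row j - m * row i
    return L, U

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [0.0, 0.0, 4.0]
L, U = lu_nopivot(A)
# 1) forward-substitute L z = b
z = [0.0] * 3
for i in range(3):
    z[i] = b[i] - sum(L[i][j] * z[j] for j in range(i))
# 2) back-substitute U x = z
x = [0.0] * 3
for i in range(2, -1, -1):
    x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, 3))) / U[i][i]
print(x)   # approximately recovers x_true = [1, 2, 3]
```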
An exercise

Solve the system
[2 −1 0; −1 2 −1; 0 −1 2] [x_1; x_2; x_3] = [0; 0; 4]

% build the matrix A
A = [2, -1, 0; -1, 2, -1; 0, -1, 2]
% build the vector b
x_true = [1:3]';
b = A * x_true;
% lu decomposition of A
[l, u] = lu(A)
% solve z from lz = b where z = ux
z = l\b;
% solve x from ux = z
x = u\z
General Gauss Elimination

At step i, for each row j below the pivot row, replace
row j ← row j − (a_{ji}/a_{ii}) × row i
to zero the entry a_{ji}.
What if we have zero or very small diagonal elements?

Sometimes we need to permute the rows of A so that Gauss elimination can be computed, or computed stably, i.e. PA = LU. This is called partial pivoting.

For example:
A = [0 1; 2 3]:
PA = [0 1; 1 0] [0 1; 2 3] = [2 3; 0 1]

A = [0.0001 1; 1 1] = [1 0; 10000 1] [0.0001 1; 0 −9999] = LU
PA = [0 1; 1 0] [0.0001 1; 1 1] = [1 1; 0.0001 1] = [1 0; 0.0001 1] [1 1; 0 0.9999] = LU
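The small-pivot example can be pushed further to show why pivoting matters in floating point. A sketch with a 2×2 system whose exact solution is approximately (1, 1); the pivot 1e-20 is my exaggerated choice:

```python
def solve2(a11, a12, a21, a22, b1, b2):
    # 2x2 Gauss elimination WITHOUT pivoting
    m = a21 / a11                  # multiplier; huge when the pivot a11 is tiny
    a22p = a22 - m * a12
    b2p = b2 - m * b1
    x2 = b2p / a22p
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

# exact solution of [[1e-20, 1], [1, 1]] x = [1, 2] is x ~ (1, 1)
x1, x2 = solve2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0)
print(x1, x2)      # x1 is badly wrong without pivoting
# swapping the rows first (partial pivoting) fixes it:
x1p, x2p = solve2(1.0, 1.0, 1e-20, 1.0, 2.0, 1.0)
print(x1p, x2p)    # close to (1, 1)
```

Without pivoting the huge multiplier wipes out the information in the second equation, and the computed x1 comes out near 0; with the rows swapped the same elimination returns x1 ≈ 1.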
Can we always have A = LU?

No! But:
If det(A(1:k, 1:k)) ≠ 0 for k = 1, ..., n−1, then A ∈ ℝ^{n×n} has an LU factorization.
Proof: see Golub and Van Loan, Matrix Computations, 3rd edition.
Chapter 5 Finite Difference Methods

5.4.1 Crank-Nicolson Method


Explicit Finite Difference Methods

With ∂f/∂t + rS ∂f/∂S + (1/2)σ²S² ∂²f/∂S² = rf, we can obtain an explicit form:

(f_{i+1,j} − f_{i,j})/∆t + O(∆t)
= −rj∆S (f_{i+1,j+1} − f_{i+1,j−1})/(2∆S) − (1/2)σ²j²(∆S)² (f_{i+1,j+1} + f_{i+1,j−1} − 2f_{i+1,j})/(∆S)²
  + r f_{i+1,j} + O((∆S)²)


Implicit Finite Difference Methods

With ∂f/∂t + rS ∂f/∂S + (1/2)σ²S² ∂²f/∂S² = rf, we can obtain an implicit form:

(f_{i+1,j} − f_{i,j})/∆t + O(∆t)
= −rj∆S (f_{i,j+1} − f_{i,j−1})/(2∆S) − (1/2)σ²j²(∆S)² (f_{i,j+1} + f_{i,j−1} − 2f_{i,j})/(∆S)²
  + r f_{i,j} + O((∆S)²)


Crank-Nicolson Method: the average of the explicit and implicit methods
(Brandimarte, p. 485)

(f_{i+1,j} − f_{i,j})/∆t + O((∆t)²)
= −(rj∆S/2) [(f_{i+1,j+1} − f_{i+1,j−1})/(2∆S) + (f_{i,j+1} − f_{i,j−1})/(2∆S)]
  − (1/4)σ²j²(∆S)² [(f_{i+1,j+1} + f_{i+1,j−1} − 2f_{i+1,j})/(∆S)² + (f_{i,j+1} + f_{i,j−1} − 2f_{i,j})/(∆S)²]
  + (r/2)(f_{i+1,j} + f_{i,j}) + O((∆S)²)
Crank-Nicolson Method

Rewriting the equation, we get the Crank-Nicolson scheme:

−α_j f_{i,j−1} + (1 − β_j) f_{i,j} − γ_j f_{i,j+1} = α_j f_{i+1,j−1} + (1 + β_j) f_{i+1,j} + γ_j f_{i+1,j+1}    (5.5)

where
α_j = (∆t/4)(σ²j² − rj)
β_j = −(∆t/2)(σ²j² + r)
γ_j = (∆t/4)(σ²j² + rj)
for i = N−1, N−2, ..., 1, 0 and j = 1, 2, ..., M−1.
Numerical Computation Dependency

[Figure: on the (t, S) mesh, the Crank-Nicolson stencil couples the three values at time i∆t and the three values at time (i+1)∆t, at levels (j−1)∆S, j∆S, (j+1)∆S.]
Implementation

Equation (5.5) can be rewritten in matrix form:

M_1 f_i = M_2 f_{i+1} + b

where f_i and b are (M−1)-dimensional vectors:
f_i = [f_{i,1}, f_{i,2}, f_{i,3}, ..., f_{i,M−1}]ᵀ
b = [α_1(f_{i,0} + f_{i+1,0}), 0, ..., 0, γ_{M−1}(f_{i,M} + f_{i+1,M})]ᵀ

and M_1 and M_2 are (M−1)×(M−1) tridiagonal matrices. M_1 has 1−β_1, ..., 1−β_{M−1} on the main diagonal, −α_2, ..., −α_{M−1} on the subdiagonal, and −γ_1, ..., −γ_{M−2} on the superdiagonal.
Implementation

Similarly, M_2 has 1+β_1, ..., 1+β_{M−1} on the main diagonal, α_2, ..., α_{M−1} on the subdiagonal, and γ_1, ..., γ_{M−2} on the superdiagonal.
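A pure-Python sketch of a Crank-Nicolson pricer built from these matrices, solving M_1 f_i = M_2 f_{i+1} + b with the Thomas algorithm at each step. It prices a European put; the boundary f_{i,0} = K e^{−r(N−i)∆t} at S = 0 and f_{i,M} = 0 at S = Smax are my assumptions, as before.

```python
import math

def thomas(sub, diag, sup, rhs):
    # Thomas algorithm: solve sub[k]*x[k-1] + diag[k]*x[k] + sup[k]*x[k+1] = rhs[k]
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for k in range(1, n):
        m = diag[k] - sub[k] * cp[k - 1]
        cp[k] = sup[k] / m
        dp[k] = (rhs[k] - sub[k] * dp[k - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for k in range(n - 2, -1, -1):
        x[k] = dp[k] - cp[k] * x[k + 1]
    return x

def cn_put(S0, K, r, sigma, T, Smax, dS, dt):
    M, N = int(round(Smax / dS)), int(round(T / dt))
    al = [0.25 * dt * (sigma**2 * j * j - r * j) for j in range(M + 1)]
    be = [-0.5 * dt * (sigma**2 * j * j + r) for j in range(M + 1)]
    ga = [0.25 * dt * (sigma**2 * j * j + r * j) for j in range(M + 1)]
    f = [max(K - j * dS, 0.0) for j in range(M + 1)]   # payoff at t = T
    for i in range(N - 1, -1, -1):
        lo = K * math.exp(-r * (N - i) * dt)           # assumed put boundary at S = 0
        # right-hand side M_2 f_{i+1} plus the boundary vector b
        rhs = [al[j] * f[j - 1] + (1 + be[j]) * f[j] + ga[j] * f[j + 1]
               for j in range(1, M)]
        rhs[0] += al[1] * lo       # alpha_1 * f_{i,0}; the gamma_{M-1} term is 0 for a put
        sub = [-al[j] for j in range(1, M)]
        diag = [1 - be[j] for j in range(1, M)]
        sup = [-ga[j] for j in range(1, M)]
        f = [lo] + thomas(sub, diag, sup, rhs) + [0.0]
    return f[int(round(S0 / dS))]

price = cn_put(50, 50, 0.10, 0.30, 5 / 12, 100, 2, 5 / 1200)
print(price)
```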
Example

We compare the Crank-Nicolson method for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446

CN method with Smax = $100, ∆S = 2, ∆t = 5/1200: $2.8241
CN method with Smax = $100, ∆S = 1, ∆t = 5/4800: $2.8395
Example (Stability)

We compare the Crank-Nicolson method for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446

CN method with Smax = $100, ∆S = 1.5, ∆t = 5/1200: $3.1370
CN method with Smax = $100, ∆S = 1, ∆t = 5/1200: $2.8395
Example: Barrier Options

Barrier options are options whose payoff depends on whether the underlying asset's price reaches a certain level during a certain period of time.

Types:
• Knock-out: the option ceases to exist when the asset price reaches a barrier
• Knock-in: the option comes into existence when the asset price reaches a barrier
Example: Barrier Option

We apply the Crank-Nicolson method to a European down-and-out put, where T = 5/12 yr, S0 = $50, K = $50, Sb = $50, σ = 40%, r = 10%.

What are the boundary conditions for S?
Example: Barrier Option

We apply the Crank-Nicolson method to a European down-and-out put, where T = 5/12 yr, S0 = $50, K = $50, Sb = $50, σ = 40%, r = 10%.

The boundary conditions are f(t, Smax) = 0 and f(t, Sb) = 0.

Exact price (Hull, pp. 533-535): $0.5424

CN method with Smax = $100, ∆S = 0.5, ∆t = 1/1200: $0.5414
Appendix A.

Matrix Norms

Vector Norms

• Norms serve as a way to measure the length of a vector or a matrix
• A vector norm is a function mapping x ∈ ℝⁿ to a real number ‖x‖ s.t.
  – ‖x‖ > 0 for any x ≠ 0; ‖x‖ = 0 iff x = 0
  – ‖cx‖ = |c| ‖x‖ for any c ∈ ℝ
  – ‖x + y‖ ≤ ‖x‖ + ‖y‖ for any x, y ∈ ℝⁿ
• There are various ways to define a norm:
  ‖x‖_p ≡ (Σ_{i=1}^n |x_i|^p)^{1/p}  (p = 2 is the Euclidean norm)
  ‖x‖_∞ ≡ max_{1≤i≤n} |x_i|
• For example, v = [2 4 −1 3]: ‖v‖_1 = ?, ‖v‖_∞ = ?, ‖v‖_2 = ?
Matrix Norms

• Similarly, a matrix norm is a function mapping A ∈ ℝ^{m×n} to a real number ‖A‖ s.t.
  – ‖A‖ > 0 for any A ≠ 0; ‖A‖ = 0 iff A = 0
  – ‖cA‖ = |c| ‖A‖ for any c ∈ ℝ
  – ‖A + B‖ ≤ ‖A‖ + ‖B‖ for any A, B ∈ ℝ^{m×n}
• Various commonly used matrix norms:
  ‖A‖_p ≡ sup_{x≠0} ‖Ax‖_p / ‖x‖_p
  ‖A‖_F ≡ (Σ_{i=1}^m Σ_{j=1}^n a_{ij}²)^{1/2}
  ‖A‖_1 ≡ max_{1≤j≤n} Σ_{i=1}^m |a_{ij}|
  ‖A‖_∞ ≡ max_{1≤i≤m} Σ_{j=1}^n |a_{ij}|
  ‖A‖_2 ≡ √(ρ(AᵀA)), the spectral norm, where ρ(B) ≡ max{λ_k : λ_k is an eigenvalue of B}
An Example

A = [2 4 −1; 3 1 5; −2 3 −1]

‖A‖_∞ = ?  ‖A‖_2 = ?  ‖A‖_1 = ?  ‖A‖_F = ?
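Three of the four norms can be checked directly from the definitions (the spectral norm ‖A‖_2 needs the largest eigenvalue of AᵀA, so it is left out of this sketch):

```python
A = [[2, 4, -1], [3, 1, 5], [-2, 3, -1]]

norm_1   = max(sum(abs(A[i][j]) for i in range(3)) for j in range(3))  # max column sum
norm_inf = max(sum(abs(a) for a in row) for row in A)                  # max row sum
norm_F   = sum(a * a for row in A for a in row) ** 0.5                 # Frobenius: sqrt(70)
print(norm_1, norm_inf, norm_F)
```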
Basic Properties of Norms

Let A, B ∈ ℝ^{n×n} and x, y ∈ ℝⁿ. Then
1. ‖x‖ ≥ 0; and ‖x‖ = 0 ⇔ x = 0
2. ‖x + y‖ ≤ ‖x‖ + ‖y‖
3. ‖αx‖ = |α| ‖x‖ where α is a real number
4. ‖Ax‖ ≤ ‖A‖ ‖x‖
5. ‖AB‖ ≤ ‖A‖ ‖B‖
Condition number of a square matrix

All norms on ℝⁿ (or ℝ^{m×n}) are equivalent. That is, if ‖·‖_α and ‖·‖_β are norms on ℝⁿ, then there exist c_1, c_2 > 0 such that for all x ∈ ℝⁿ,
c_1 ‖x‖_α ≤ ‖x‖_β ≤ c_2 ‖x‖_α

Condition number of a matrix: C ≡ ‖A‖ ‖A^{-1}‖, where A ∈ ℝ^{n×n}.

The condition number gives a measure of how close a matrix is to being singular. The bigger C is, the harder it is to solve Ax = b accurately.
Convergence

• A sequence of vectors x_k converges to x ⇔ ‖x_k − x‖ converges to 0
• A sequence of matrices A_k → 0 ⇔ ‖A_k − 0‖ → 0
Appendix B.

Basic Row Operations


Basic row operations

Three kinds of basic row operations:

1) Interchange the order of two rows (or equations):
[0 1 0; 1 0 0; 0 0 1] [a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33] = [a_21 a_22 a_23; a_11 a_12 a_13; a_31 a_32 a_33]
Basic row operations

2) Multiply a row by a nonzero constant:
[c 0 0; 0 1 0; 0 0 1] [a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33] = [c·a_11 c·a_12 c·a_13; a_21 a_22 a_23; a_31 a_32 a_33]

3) Add or subtract rows:
[1 0 0; −1 1 0; 0 0 1] [a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33] = [a_11 a_12 a_13; (a_21−a_11) (a_22−a_12) (a_23−a_13); a_31 a_32 a_33]
