L4 Extra Notes
The diagram above shows the fluctuations in the price of Apple stock over a 35-year period. This conspicuous uncertainty is the most important feature of financial modelling. Because there is so much randomness, the most successful mathematical models of financial assets have a probabilistic foundation.
The evolution of financial assets is random and depends on time. Asset prices are examples of stochastic processes: random variables indexed (parameterized) by time. If the movement of an asset is discrete it is called a random walk; a continuous movement is called a diffusion process. We will take the asset price dynamics to exhibit continuous behaviour, and each random path traced out is called a realization. If we use $a$ to denote the price of Apple stock at time $t$, we could express this as $a_t$. This would be an example of a stochastic process.
Hence the need for defining a robust set of properties for the randomness observed in an asset price realization, which is Brownian motion. This was named after the Scottish botanist Robert Brown, who in the summer of 1827, while examining grains of pollen of the plant Clarkia pulchella suspended in water under a microscope, observed minute particles, ejected from the pollen grains, executing a continuous, highly irregular, fidgety motion. Further study showed that finer particles moved more rapidly, and that the motion was stimulated by heat and by a decrease in the viscosity of the liquid. The findings were published in A Brief Account of Microscopical Observations Made in the Months of June, July and August 1827. There is little doubt that, as well as being the most well known (and famous) stochastic process, Brownian motion is the most widely used.
The origins of quantitative finance can be traced back to the start of the twentieth century. Louis Jean-Baptiste Alphonse Bachelier (March 11, 1870 - April 28, 1946) is credited with being the first person to derive the price of an option where the share price movement was modelled by Brownian motion, as part of his PhD at the Sorbonne, entitled Théorie de la Spéculation (published 1900).
Thus, Bachelier may be considered a pioneer in the study of financial mathematics and one of the earliest exponents of Brownian motion. Five years later Einstein used Brownian motion to study diffusions, which describe the microscopic transport of material and heat. In 1920 Norbert Wiener, a mathematician at MIT, provided a mathematical construction of Brownian motion together with numerous results about its properties; in fact he was the first to show that Brownian motion exists and is a well-defined entity. Hence Wiener process is also used as a name for it, denoted $W$, $W(t)$ or $W_t$.
Consider a game of tossing a fair coin, winning \$1 on heads and losing \$1 on tails, and let $R_i = \pm 1$ denote the result of the $i$th toss. Suppose we now wish to keep a score of our winnings after the $n$th toss, so we introduce a new random variable

$$W_n = \sum_{i=1}^{n} R_i.$$

This allows us to keep track of our total winnings. It represents the position of a marker that starts off at the origin (no winnings). Starting with no money means

$$W_0 = 0.$$
Now we can calculate expectations of $W_n$:

$$E[W_n] = E\left[\sum_{i=1}^{n} R_i\right] = \sum_{i=1}^{n} E[R_i] = 0,$$

and, since the tosses are independent and $R_i^2 = 1$,

$$E[W_n^2] = \sum_{i=1}^{n} E[R_i^2] = n.$$
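A quick numerical check of these two expectations (a minimal sketch using NumPy; the function name coin_toss_walk is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def coin_toss_walk(n_tosses, n_paths):
    """Simulate coin-toss walks: R_i = +/-1 with equal probability, W_n = sum of R_i."""
    r = rng.choice([-1, 1], size=(n_paths, n_tosses))   # the tosses R_i
    return np.cumsum(r, axis=1)                          # running winnings W_1, ..., W_n

walks = coin_toss_walk(n_tosses=100, n_paths=200_000)
w_n = walks[:, -1]          # W_100 on each path
print(w_n.mean())           # near 0:   E[W_n] = 0
print(w_n.var())            # near 100: E[W_n^2] = n
```

With 200,000 paths the sample mean and variance land close to the theoretical values 0 and $n = 100$.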
A Note on Variations

Consider a function $f_t$, sampled at times $t_i = i\,\frac{t}{n}$. We can define different measures of how much $f_t$ varies over time:

$$V = \sum_{i=1}^{n} \left| f_{t_i} - f_{t_{i-1}} \right| \qquad \text{(total variation of the trajectory: sum of absolute changes)}$$

$$V_2 = \sum_{i=1}^{n} \left( f_{t_i} - f_{t_{i-1}} \right)^2 \qquad \text{(quadratic variation: sum of squared changes)}$$
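Both measures are easy to compute for any sampled path (a sketch, assuming NumPy; the test function $\sin 2\pi t$ is our choice):

```python
import numpy as np

def total_variation(f):
    """Total variation: sum of absolute changes |f_{t_i} - f_{t_{i-1}}|."""
    return np.abs(np.diff(f)).sum()

def quadratic_variation(f):
    """Quadratic variation: sum of squared changes (f_{t_i} - f_{t_{i-1}})^2."""
    return (np.diff(f) ** 2).sum()

# For a smooth function on a fine grid the quadratic variation is tiny,
# while the total variation approximates the integral of |f'|.
t = np.linspace(0.0, 1.0, 1001)
f = np.sin(2 * np.pi * t)
print(total_variation(f))       # close to 4, the total variation of sin(2*pi*t) on [0, 1]
print(quadratic_variation(f))   # small, O(1/n) for a smooth function
```

This already hints at the key contrast to come: smooth paths have vanishing quadratic variation, Brownian paths do not.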
Now look at the quadratic variation of the random walk. After each toss, we have won or lost \$1. That is,

$$W_n - W_{n-1} = \pm 1 \implies |W_n - W_{n-1}| = 1.$$

Hence

$$\underbrace{\sum_{i=1}^{n} (W_i - W_{i-1})^2}_{\text{each term} = 1} = n.$$

As an example, take $6$ tosses in time $t$, with each bet scaled to $\sqrt{t/6}$:

$$\sum_{i=1}^{6} (W_i - W_{i-1})^2 = \sum_{i=1}^{6} \left(\sqrt{t/6}\right)^2 = 6 \cdot \frac{t}{6} = t.$$
Now speed up the game: perform $n$ tosses within time $t$, with each bet being $\sqrt{t/n}$. The time for each toss is $t/n$, and

$$W_i - W_{i-1} = \pm\sqrt{t/n}.$$

The quadratic variation is

$$\sum_{i=1}^{n} (W_i - W_{i-1})^2 = n \left(\sqrt{t/n}\right)^2 = t.$$
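The sped-up game can be checked directly: however the coin falls, each squared increment is exactly $t/n$, so the quadratic variation is exactly $t$ for every $n$ (a sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)

def sped_up_increments(t, n):
    """Increments W_i - W_{i-1} = +/- sqrt(t/n): n tosses in time t."""
    return rng.choice([-1.0, 1.0], size=n) * np.sqrt(t / n)

for n in (10, 1_000, 100_000):
    dw = sped_up_increments(t=2.0, n=n)
    print(n, (dw ** 2).sum())   # t = 2.0 for every n, up to rounding
```

Note this holds path by path, not just in expectation: squaring removes the randomness of the signs.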
As $n$ becomes larger and larger, the time between subsequent tosses decreases and the bet sizes become smaller. The time step and bet size decrease in turn like

$$\text{time step: } O\left(\frac{1}{n}\right), \qquad \text{bet size: } O\left(\frac{1}{\sqrt{n}}\right).$$

The scaling has been chosen carefully to keep the random walk finite without it collapsing to zero: in the limit $n \to \infty$, the random walk stays finite. Conditional on a starting value of zero, it has expectation

$$E[W_t] = E\left[\lim_{n\to\infty} \sum_{i=1}^{n} R_i\right] = \lim_{n\to\infty} \sum_{i=1}^{n} E[R_i] = 0,$$

so the mean of $W_t$ is $0$.
For the second moment, since the tosses are independent the cross terms have zero expectation, and

$$E\left[W_t^2\right] = E\left[\lim_{n\to\infty} \left(\sum_{i=1}^{n} R_i\right)^2\right] = \lim_{n\to\infty} \sum_{i=1}^{n} E\left[R_i^2\right] = \lim_{n\to\infty} n\left(\sqrt{t/n}\right)^2 = t,$$

so

$$V[W_t] = E\left[W_t^2\right] = t.$$

This limiting process as the time step tends to zero is called Brownian motion and denoted $W_t$.
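A Monte Carlo check of these limiting moments, using scaled bets of $\pm\sqrt{t/n}$ (a sketch; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

t, n, paths = 1.5, 400, 25_000
bets = rng.choice([-1.0, 1.0], size=(paths, n)) * np.sqrt(t / n)  # scaled tosses
w_t = bets.sum(axis=1)      # approximate W_t on each path

print(w_t.mean())           # near 0:  E[W_t] = 0
print(w_t.var())            # near t:  V[W_t] = E[W_t^2] = t = 1.5
```

By the Central Limit Theorem the histogram of `w_t` is also approximately normal, anticipating the Gaussian-increment property below.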
To set this up more formally, define

$$Z = \begin{cases} +1 & \text{if } H \\ -1 & \text{if } T \end{cases}$$

and let

$$X_n = \sum_{i=1}^{n} Z_i.$$

This is the position of a random walker starting at the origin which, after each toss, moves one unit up or down with equal probability of a half. We know

$$E[X_n] = \sum_{i=1}^{n} E[Z_i] = 0, \qquad V[X_n] = \sum_{i=1}^{n} V[Z_i] = n.$$
Now introduce time $t$ and steps $N$; that is, perform $N$ tosses in time $t$. Can we find a continuous-time limit? Each step takes $\delta t = t/N$. If we let $N \to \infty$ with steps of size 1, the walk would become infinite (or might collapse to zero under too strong a scaling). We require a finite random walk, so we look for a suitable scaling of the step size (away from 1), keeping in mind the Central Limit Theorem. Let

$$Y = \sigma_N Z$$

for some scaling $\sigma_N$ to be chosen. Thus

$$E[X_N] = 0 \quad \forall N,$$

and

$$V[X_N] = E\left[(X_N)^2\right] = E\left[\left(\sum_{i=1}^{N} Y_i\right)^2\right] = E\left[\sum_{i=1}^{N} \sigma_N^2 Z_i^2\right] = N\sigma_N^2.$$

We need $N\sigma_N^2 = O(1)$; choose $N\sigma_N^2 = t$, i.e. $\sigma_N = \sqrt{t/N}$. Therefore

$$E\left[(X_N)^2\right] = V[X_N] = t.$$
Quadratic Variation

Consider a function $f_t$ which has at most a finite number of jumps or discontinuities. Define the quadratic variation

$$Q[f] = \lim_{N\to\infty} \sum_{i=0}^{N-1} \left( f_{t_{i+1}} - f_{t_i} \right)^2.$$

Take the time period $[0, T]$ with $N$ partitions, so $dt = T/N$ and $t_i = i\,dt$. For Brownian motion $W_t$ on the interval $[0, T]$ we have

$$Q[W_t] = \lim_{N\to\infty} \sum_{i=0}^{N-1} \left( W_{t_{i+1}} - W_{t_i} \right)^2 = \lim_{N\to\infty} \sum_{i=0}^{N-1} \left(\sqrt{dt}\right)^2 = \lim_{N\to\infty} N\,dt = T,$$

whereas the total variation diverges:

$$V[W_t] = \lim_{N\to\infty} \sum_{i=0}^{N-1} \left| W_{t_{i+1}} - W_{t_i} \right| = \lim_{N\to\infty} \sum_{i=0}^{N-1} \sqrt{dt} = \lim_{N\to\infty} N\sqrt{dt} \to \infty.$$
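Refining the partition of a simulated Brownian path shows exactly this split: the quadratic variation settles at $T$ while the total variation blows up (a sketch; Gaussian increments of variance $dt$ stand in for the $\pm\sqrt{dt}$ tosses):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1.0

for N in (100, 10_000, 1_000_000):
    dt = T / N
    dW = rng.normal(0.0, np.sqrt(dt), size=N)   # Brownian increments over [0, T]
    print(N, (dW ** 2).sum(), np.abs(dW).sum())
    # quadratic variation -> T = 1;  total variation grows like sqrt(N * T)
```

The divergent total variation is why ordinary (Riemann-Stieltjes) calculus cannot be applied along a Brownian path, and a stochastic calculus is needed.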
Properties of a Wiener Process/Brownian Motion

A stochastic process $\{W_t : t \in \mathbb{R}^+\}$ is defined to be Brownian motion (or a Wiener process) if it has independent Gaussian increments, with zero mean and variance equal to the temporal extension of the increment. That is, for each $t > 0$ and $s > 0$, $W_t - W_s$ is normal with mean $0$ and variance $|t - s|$, i.e.

$$W_t - W_s \sim N(0, |t - s|).$$

Coin tosses are binomial, but with a large number of tosses the Central Limit Theorem gives a distribution that is normal. $W_t - W_s\ (= x)$ has a pdf given by

$$p(x) = \frac{1}{\sqrt{2\pi |t - s|}} \exp\left( -\frac{x^2}{2|t - s|} \right).$$

Moreover, for times $0 \le t_0 \le t_1 \le t_2 \le \cdots$ the increments over non-overlapping intervals are independent. If the above properties hold, $W_t$ is also called standard Brownian motion. More important is the result (used in stochastic differential equations)

$$dW = W_{t+dt} - W_t \sim N(0, dt).$$
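The pdf above can be sanity-checked by numerical quadrature: it should integrate to 1 and have second moment $|t - s|$ (a sketch; the values of $s$ and $t$ are arbitrary choices):

```python
import numpy as np

s, t = 0.7, 2.2
var = abs(t - s)                     # |t - s| = 1.5

x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
p = np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)   # pdf of W_t - W_s

print((p * dx).sum())          # ~1.0: the density normalises
print((x**2 * p * dx).sum())   # ~1.5: variance equals |t - s|
```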
Mean Square Convergence

Consider a function $F(X)$. If

$$E\left[ (F(X) - l)^2 \right] \to 0,$$

then we say that $F(X) = l$ in the mean square limit; this is also called mean square convergence. We present a full derivation of the mean square limit of the quadratic variation, starting with the quantity

$$E\left[ \left( \sum_{j=1}^{n} \left( W(t_j) - W(t_{j-1}) \right)^2 - t \right)^2 \right],$$

where $t_j = \frac{jt}{n} = j\,\delta t$. We will show that this quantity tends to zero, so that, up to mean square convergence,

$$dW^2 = dt.$$

This is the symbolic way of writing this property of a Wiener process, as the partitions $\delta t$ become smaller and smaller.
Developing the terms inside the expectation

First, we simplify the notation in order to deal more easily with the outer (rightmost) squaring. Let $Y(t_j) = (W(t_j) - W(t_{j-1}))^2$; then we can rewrite the expectation as

$$E\left[ \left( \sum_{j=1}^{n} Y(t_j) - t \right)^2 \right].$$

Expanding, we have

$$E\left[ \left( Y(t_1) + Y(t_2) + \cdots + Y(t_n) - t \right) \left( Y(t_1) + Y(t_2) + \cdots + Y(t_n) - t \right) \right].$$

The term inside the expectation is equal to

$$\begin{aligned}
& Y(t_1)^2 + Y(t_1)Y(t_2) + \cdots + Y(t_1)Y(t_n) - Y(t_1)t \\
+\; & Y(t_2)^2 + Y(t_2)Y(t_1) + \cdots + Y(t_2)Y(t_n) - Y(t_2)t \\
& \quad\vdots \\
+\; & Y(t_n)^2 + Y(t_n)Y(t_1) + \cdots + Y(t_n)Y(t_{n-1}) - Y(t_n)t \\
-\; & tY(t_1) - tY(t_2) - \cdots - tY(t_n) + t^2.
\end{aligned}$$

Rearranging:

$$Y(t_1)^2 + Y(t_2)^2 + \cdots + Y(t_n)^2 + 2Y(t_1)Y(t_2) + 2Y(t_1)Y(t_3) + \cdots + 2Y(t_{n-1})Y(t_n) - 2Y(t_1)t - 2Y(t_2)t - \cdots - 2Y(t_n)t + t^2.$$

We can now factorize to get

$$\sum_{j=1}^{n} Y(t_j)^2 + 2\sum_{i=1}^{n} \sum_{j<i} Y(t_i)Y(t_j) - 2t \sum_{j=1}^{n} Y(t_j) + t^2.$$
Substituting back $Y(t_j) = (W(t_j) - W(t_{j-1}))^2$ and taking the expectation, we arrive at

$$E\Bigg[ \sum_{j=1}^{n} \left( W(t_j) - W(t_{j-1}) \right)^4 + 2\sum_{i=1}^{n} \sum_{j<i} \left( W(t_i) - W(t_{i-1}) \right)^2 \left( W(t_j) - W(t_{j-1}) \right)^2 - 2t \sum_{j=1}^{n} \left( W(t_j) - W(t_{j-1}) \right)^2 + t^2 \Bigg].$$
Each increment $W(t_j) - W(t_{j-1})$ is normal with mean $0$ and variance $t/n$, so its fourth moment is

$$E\left[ \left( W(t_j) - W(t_{j-1}) \right)^4 \right] = \frac{1}{\sqrt{2\pi t/n}} \int_{\mathbb{R}} z^4 \exp\left( -\frac{z^2}{2t/n} \right) dz.$$

Now put

$$u = \frac{z}{\sqrt{t/n}} \implies du = \sqrt{n/t}\, dz.$$

Our integral becomes

$$\sqrt{\frac{n}{2\pi t}} \int_{\mathbb{R}} \left( \sqrt{\frac{t}{n}}\, u \right)^4 \exp\left( -\tfrac{1}{2} u^2 \right) \sqrt{\frac{t}{n}}\, du = \frac{t^2}{n^2} \cdot \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} u^4 \exp\left( -\tfrac{1}{2} u^2 \right) du = \frac{t^2}{n^2} \cdot E\left[ u^4 \right].$$
So the problem reduces to finding the fourth moment of a standard normal random variable. Here we do not have to explicitly calculate any integral; there are two ways to do this. Either use the MGF, as we did earlier, which gives the fourth moment as three; or make use of the fact that the kurtosis of the standardised normal distribution is 3.
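The value 3 is also easy to confirm by simulation (a minimal sketch):

```python
import numpy as np

rng = np.random.default_rng(5)

u = rng.standard_normal(2_000_000)   # u ~ N(0, 1)
print((u ** 4).mean())               # near 3, the fourth moment of a standard normal
```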
Because of the single summations, the fourth moment $3\,t^2/n^2$ and the variance multiplied by $t$ each occur $n$ times. Because of the double summation, the product of variances $(t/n)^2$ occurs $\frac{n(n-1)}{2}$ times, with a factor of 2. Hence the expectation equals

$$3n\frac{t^2}{n^2} + n(n-1)\frac{t^2}{n^2} - 2tn\frac{t}{n} + t^2 = \frac{3t^2}{n} + t^2 - \frac{t^2}{n} - 2t^2 + t^2 = \frac{2t^2}{n} = O\!\left(\frac{1}{n}\right).$$

So, as our partition becomes finer and finer and $n$ tends to infinity, the quadratic variation tends to $t$ in the mean square limit.
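The $O(1/n)$ decay of the mean square error can be observed directly by simulating many partitions of a Brownian path (a sketch; Gaussian increments of variance $t/n$ are used):

```python
import numpy as np

rng = np.random.default_rng(6)
t, paths = 1.0, 20_000

for n in (4, 16, 64, 256):
    dW = rng.normal(0.0, np.sqrt(t / n), size=(paths, n))  # increments on each path
    err = ((dW ** 2).sum(axis=1) - t) ** 2                 # (quadratic variation - t)^2
    print(n, err.mean())    # approximately 2 * t**2 / n, shrinking like 1/n
```

Quadrupling $n$ should roughly quarter the printed mean square error, matching $2t^2/n$.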
Wiener Process Trajectory

[Figure: a simulated trajectory of a Wiener process, $W(t)$ plotted against $t$.]
Taylor Series and Itô

If we were to do a naive Taylor series expansion of $F(W_t)$, completely disregarding the nature of $W_t$ and treating $dW_t$ as a small increment in $W_t$, we would get

$$F(W_t + dW_t) = F(W_t) + \frac{dF}{dW_t} dW_t + \frac{1}{2} \frac{d^2F}{dW_t^2} dW_t^2,$$

ignoring higher-order terms. We could argue that $F(W_t + dW_t) - F(W_t)$ is just the 'change in' $F$, and so

$$dF = \frac{dF}{dW_t} dW_t + \frac{1}{2} \frac{d^2F}{dW_t^2} dW_t^2.$$

Using $dW_t^2 = dt$, this can be written $dF = A(W_t)\,dt + B(W_t)\,dW_t$ with $A(W_t) = \frac{1}{2}\frac{d^2F}{dW_t^2}$ and $B(W_t) = \frac{dF}{dW_t}$. There are two parts: the term $A(W_t)\,dt$ is the deterministic bit, and the random component is $B(W_t)\,dW_t$. More importantly, $A(W_t)$ is the drift and $B(W_t)$ is the diffusion.
Now consider a slight extension: a function of a Wiener process, $f = f(t, W_t)$, where we allow both $t$ and $W_t$ to change, i.e.

$$t \to t + dt, \qquad W_t \to W_t + dW_t.$$

Using Taylor as before,

$$f(t + dt, W_t + dW_t) = f(t, W_t) + \frac{\partial f}{\partial t} dt + \frac{\partial f}{\partial W_t} dW_t + \frac{1}{2} \frac{\partial^2 f}{\partial W_t^2} dW_t^2 + \cdots,$$

with

$$df = f(t + dt, W_t + dW_t) - f(t, W_t).$$

This gives another form of Itô:

$$df = \left( \frac{\partial f}{\partial t} + \frac{1}{2} \frac{\partial^2 f}{\partial W_t^2} \right) dt + \frac{\partial f}{\partial W_t} dW_t. \tag{*}$$

This is also an SDE.
Examples:

1. Obtain an SDE for $f = t e^{W_t}$. We need

$$\frac{\partial f}{\partial t} = e^{W_t}, \qquad \frac{\partial f}{\partial W_t} = t e^{W_t} = \frac{\partial^2 f}{\partial W_t^2},$$

then substituting in $(*)$ gives

$$df = \left( e^{W_t} + \tfrac{1}{2} t e^{W_t} \right) dt + t e^{W_t}\, dW_t.$$

2. For $f = t^2 W_t^n$ we have

$$\frac{\partial f}{\partial t} = 2t W_t^n, \qquad \frac{\partial f}{\partial W_t} = n t^2 W_t^{n-1}, \qquad \frac{\partial^2 f}{\partial W_t^2} = n(n-1) t^2 W_t^{n-2},$$

and substituting in $(*)$ gives

$$df = \left( 2t W_t^n + \tfrac{1}{2} n(n-1) t^2 W_t^{n-2} \right) dt + n t^2 W_t^{n-1}\, dW_t.$$
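Example 2 can be verified symbolically, e.g. with SymPy for the illustrative case $n = 3$ (a sketch; the drift is the $dt$ coefficient $\partial f/\partial t + \frac{1}{2}\,\partial^2 f/\partial W^2$ and the diffusion is the $dW$ coefficient $\partial f/\partial W$):

```python
import sympy as sp

t, W = sp.symbols("t W")
n = 3
f = t**2 * W**n

drift = sp.diff(f, t) + sp.Rational(1, 2) * sp.diff(f, W, 2)   # dt coefficient in (*)
diffusion = sp.diff(f, W)                                      # dW coefficient in (*)

expected_drift = 2 * t * W**n + sp.Rational(n * (n - 1), 2) * t**2 * W**(n - 2)
expected_diffusion = n * t**2 * W**(n - 1)

print(sp.simplify(drift - expected_drift))          # 0
print(sp.simplify(diffusion - expected_diffusion))  # 0
```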
This was derived in class during lecture 1.4 (30 January 2020).
Comparing this to the stochastic integral formula above, we see that

$$\frac{\partial f}{\partial W} = t + e^{W} \implies f = tW + e^{W}.$$

Also

$$\frac{\partial^2 f}{\partial W^2} = e^{W}, \qquad \frac{\partial f}{\partial t} = W.$$

Substituting all these terms into the formula and noting that $f(0, W(0)) = 1$ verifies the result.
Example

Itô's lemma can be used to deduce the following formula for stochastic differential equations and stochastic integrals:

$$\int_0^t \frac{\partial F}{\partial W}\, dW(\tau) = F(W(t), t) - F(W(0), 0) - \int_0^t \left( \frac{\partial F}{\partial \tau} + \frac{1}{2} \frac{\partial^2 F}{\partial W^2} \right) d\tau$$

for a function $F(W(\tau), \tau)$, where $dW(\tau)$ is an increment of a Brownian motion. If $W(0) = 0$, evaluate

$$\int_0^t \tau^2 \sin W(\tau)\, dW(\tau).$$

Set

$$\frac{\partial F}{\partial W} = \tau^2 \sin W \implies F = -\tau^2 \cos W,$$

so that

$$\frac{\partial^2 F}{\partial W^2} = \tau^2 \cos W, \qquad \frac{\partial F}{\partial \tau} = -2\tau \cos W,$$

and substitute into the integral formula, using $F(W(0), 0) = 0$:

$$\int_0^t \tau^2 \sin W(\tau)\, dW(\tau) = -t^2 \cos W(t) + \int_0^t \left( 2\tau \cos W(\tau) - \frac{1}{2} \tau^2 \cos W(\tau) \right) d\tau.$$
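The choice of $F$ and the resulting $d\tau$ integrand can be checked symbolically (a sketch using SymPy):

```python
import sympy as sp

tau, W = sp.symbols("tau W")
F = -tau**2 * sp.cos(W)   # chosen so that dF/dW = tau^2 * sin(W)

print(sp.simplify(sp.diff(F, W) - tau**2 * sp.sin(W)))   # 0: correct antiderivative

# The d(tau) integrand subtracted off by the formula: dF/dtau + (1/2) d^2F/dW^2
integrand = sp.diff(F, tau) + sp.Rational(1, 2) * sp.diff(F, W, 2)
expected = -2 * tau * sp.cos(W) + sp.Rational(1, 2) * tau**2 * sp.cos(W)
print(sp.simplify(integrand - expected))                 # 0
```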