
# The First Passage Time Problem

Rajarshi Sarkar
University of Nice Sophia Antipolis and INRIA Sophia Antipolis
August 2012
Abstract
This report studies the first-passage time problem. First-passage time problems are deceptively simple to state, yet often very hard to tackle: we know very little about first-passage behaviour except in a few simple cases, such as the standard Wiener process and the Wiener process with drift.
I have studied three approaches to finding an approximate solution of the first-passage time through numerical simulation: the Euler scheme combined with the Brownian bridge idea, the fast algorithm for Gauss-Markov processes that involves a probabilistic variant of dichotomic search, and the exact algorithm studied for some very specific kinds of stochastic differential equations. I have numerically compared the Euler scheme and the fast algorithm, and stated the analytical result giving a very convenient relation for the first-passage time in the case of exact simulation.
Moreover, I have also tried to price a barrier option, a financial instrument whose payoff depends on the first-passage time of the price of the underlying asset.

Introduction
We consider a stochastic differential equation of the form

$$dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dW_t \qquad \text{with} \qquad X_0 = x_0 \tag{1}$$

where $\mu$ and $\sigma$ are smooth enough for us to have existence and uniqueness properties for the above stochastic differential equation (SDE); it suffices that the two functions $\mu$ and $\sigma$ be Lipschitz on every compact interval $[0,T]$, for every $T > 0$. We seek to simulate, or find, a random time of the form
$$\tau = \inf\{t > 0;\ X_t > L\}$$
where $L$ is the boundary, which can be a constant or some continuous deterministic function. We usually look for $\tau$ in some closed interval of the form $[0,T]$: already in the case of a Brownian motion with an affine boundary we get $E(\tau) = \infty$. Thus, we usually restrict the search to a finite interval.
First-passage time, first hitting time, or stopping time problems have been a center of attention for many years now. They have useful applications in biology, physics, and finance, to name a few. First-passage time problems are also very interesting because they are related to many fields of mathematics, such as probability theory, functional analysis, numerical analysis, and number theory.
Despite the importance and varied applications of first-passage times, it is only in very few situations that we explicitly know their law. In the case of a standard Brownian motion, we know the analytical solution for the first-passage time over a constant boundary $L = a$. Bachelier was one of the first to study the first-passage time problem, followed by Lévy. The very famous Bachelier-Lévy formula gives the density $p$ of a Brownian motion's first passage over a linear boundary of the form $a + bt$ as
$$p(t) = \frac{a}{t^{3/2}}\,\phi\!\left(\frac{a + bt}{\sqrt{t}}\right), \qquad \text{where } \phi(y) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{y^2}{2}\right).$$
Of course, many people have since improved upon their work and added more to the theory of first-passage times. The subject nevertheless remains largely unexplored, as there does not exist any simple form of the density of the first-passage time for more complicated diffusion processes. Therein lies the challenge of simulating and correctly approximating the first-passage time for such processes. We embark on this journey looking at Gauss-Markov processes and some other interesting processes.
First, we look into some results for the Brownian motion and then for the Gauss-Markov process, which will be useful later when we use the algorithms.
Second, we look into the Euler scheme for numerically approximating the first-passage time.
Third, we look into the fast algorithm developed by Taillefumier and Magnasco in their paper [1] and try to implement it.
Fourth, we try to compare the Euler scheme and the fast algorithm.
Fifth, we look into a new method of simulating exactly the path of a diffusion process [3]. We very quickly realise the restrictions of such a method; it is nevertheless mathematically very interesting to see its development. Here, we just give an analytical result pertaining to the first-passage time: we find its law as a function of another first-passage time which we already know how to simulate.
Last, but not least, we look into the financial application of the first-passage time to pricing a barrier option.
Notation: a Brownian motion or Wiener process will be denoted $B(\cdot)$ or $W(\cdot)$.

Brownian Motion

We will look into some properties of, first, a standard Brownian motion and then a Brownian motion with drift. We first state the probability distribution function of the first-passage time for a standard Brownian motion, and then the probability density function for a Brownian motion with drift. In the process of doing so, we also look into the law of the maximum of a Brownian bridge conditioned on the two extreme values of the bridge.


## Standard Brownian Motion

Let $B = \{B(t), \mathcal{F}_t;\ 0 \le t \le T\}$ be our standard Brownian motion (Wiener process) on some probability space $(\Omega, \mathcal{F}, P)$. We look for the law of the first-passage time defined as $\tau_a = \inf\{t > 0;\ B(t) \ge a\}$.
In order to find the law of our first-passage time, we look into the law of the maximum $M(T) = \sup_{0 \le t \le T} B(t)$.
Interestingly, this law can be derived in a very simple way by the use of a key property of Brownian motion, the reflection principle. It tells us that a Brownian motion reflected after it hits a barrier has the same law as the original Brownian motion. Thus, if our barrier is a constant $L = a$, we can set
$$\widehat{B}(t) = B(t) \quad \text{if } t < \tau_a, \qquad \widehat{B}(t) = 2B(\tau_a) - B(t) = 2a - B(t) \quad \text{if } t \ge \tau_a.$$
Then $\widehat{B}(t)$ is a Brownian motion with the same law as the original $B(t)$.
It is also very simple to show that $\tau_a < \infty$ almost surely, since $\limsup_{n \to \infty} B(n) = +\infty$.
Our main goal is to find the probability density function of the first-passage time, i.e. $P(\tau_a < T)$. We can look at the same quantity from a different point of view, since $P(\tau_a \le T) = P(M(T) \ge a)$. Thus,
$$P(M(T) \ge a) = P(M(T) \ge a, B(T) \ge a) + P(M(T) \ge a, B(T) < a).$$
For the first term,
$$P(M(T) \ge a, B(T) \ge a) = P(B(T) \ge a) = \int_a^{\infty} \frac{1}{\sqrt{2\pi T}} \exp\left(-\frac{y^2}{2T}\right) dy.$$
Thus, we work with the other part, i.e. $P(M(T) \ge a, B(T) < a)$. We know from the reflection principle that $\widehat{B}(T) = 2B(\tau_a) - B(T) = 2a - B(T)$, as in this case we are assuming $T \ge \tau_a$. Thus, if $B(T) < a$, then $\widehat{B}(T) = 2a - B(T) > 2a - a = a$, and moreover, as the laws of $B$ and $\widehat{B}$ are the same, we get
$$P(M(T) \ge a, B(T) < a) = P(M(T) \ge a, \widehat{B}(T) > a) = P(\widehat{B}(T) > a) = P(B(T) > a).$$
Thus, we get
$$P(M(T) \ge a) = 2P(B(T) > a),$$
and therefore
$$P(\tau_a \le T) = P(M(T) \ge a) = 2P(B(T) > a) = 2\int_a^{\infty} \frac{1}{\sqrt{2\pi T}} \exp\left(-\frac{y^2}{2T}\right) dy.$$
By differentiating the above relation with respect to $T$, we get the probability density function of $\tau_a$:
$$f_{\tau_a}(t) = \frac{a}{\sqrt{2\pi}\, t^{3/2}} \exp\left(-\frac{a^2}{2t}\right).$$
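The reflection-principle identity above is easy to check numerically. The following sketch (my own illustration, not part of the report's algorithms; it assumes only NumPy) estimates $P(\tau_a \le T)$ from the maxima of discretized Brownian paths and compares it with $2P(B(T) > a)$:

```python
import numpy as np
from math import erfc, sqrt

def mc_first_passage_cdf(a, T, n_steps=1000, n_paths=5000, seed=0):
    """Estimate P(tau_a <= T) for standard BM from maxima of discretized paths."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Brownian increments and cumulative paths, one row per path
    increments = rng.normal(0.0, sqrt(dt), size=(n_paths, n_steps))
    running_max = np.max(np.cumsum(increments, axis=1), axis=1)
    return float(np.mean(running_max >= a))

a, T = 1.0, 1.0
analytic = erfc(a / sqrt(2.0 * T))   # = 2 * P(B(T) > a) = P(tau_a <= T)
estimate = mc_first_passage_cdf(a, T)
```

The discrete maximum misses excursions between grid points, so the Monte Carlo value is, on average, slightly below the analytic one; the Brownian bridge correction discussed later in the report removes precisely this bias.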

## Brownian Motion with drift

We now look into the first-passage time of a drifted Brownian motion. Let $W^b = \{W^b(t) = B(t) - bt, \mathcal{F}_t;\ 0 \le t \le T\}$ be a Brownian motion with drift term $-bt$, defined on the same probability space $(\Omega, \mathcal{F}, P)$. We are interested in the first-passage time
$$\tau_a^b = \inf\{t > 0;\ W^b(t) = B(t) - bt \ge a\}.$$
Thus, as before, we look into the law of the maximum of $W^b$. Let $M^b(T) = \sup_{0 \le t \le T} W^b(t) = \sup_{0 \le t \le T}(B(t) - bt)$, so that
$$P(\tau_a^b < T) = P(M^b(T) \ge a).$$
Now, we know that $\left(\exp\left(2bB(t) - 2b^2 t\right)\right)_{0 \le t \le T}$ is a martingale. Thus, letting $R(t) = \exp\left(2bB(t) - 2b^2 t\right)$, we seek to apply Doob's optional stopping theorem with $\tau = \min(\tau_a^b, T)$, a bounded stopping time. Doob's optional stopping theorem says that if $X$ is a martingale with respect to some filtration $\mathcal{F}$ and $\tau$ is a bounded stopping time, i.e. $\tau(\omega) < N$ for some $N \in \mathbb{N}$, then
$$E(X_\tau) = E(X_0).$$
Now, since $R(t)$ is a martingale, $E(R(T) \mid \mathcal{F}_\tau) = R(\tau)$, and restricting to the event $\{\tau_a^b < T\} \in \mathcal{F}_\tau$, on which $\tau = \tau_a^b$, we have
$$E\left(R(T)\mathbf{1}_{\tau_a^b < T}\right) = E\left(R(\tau)\mathbf{1}_{\tau_a^b < T}\right) = E\left(R(\tau_a^b)\mathbf{1}_{\tau_a^b < T}\right).$$
Thus, writing the expectation in integral form, we get
$$\int_{\tau_a^b < T} \exp\left(2bB(T) - 2b^2 T\right) dP = \int_{\tau_a^b < T} \exp\left(2bB(\tau_a^b) - 2b^2 \tau_a^b\right) dP.$$
Now, we know that $W^b(\tau_a^b) = B(\tau_a^b) - b\tau_a^b = a$, thus the right-hand side is
$$\int_{\tau_a^b < T} \exp\left(2bB(\tau_a^b) - 2b^2 \tau_a^b\right) dP = \int_{\tau_a^b < T} \exp\left(2b(B(\tau_a^b) - b\tau_a^b)\right) dP = \int_{\tau_a^b < T} \exp(2ba)\, dP = \exp(2ba)\, P(\tau_a^b < T).$$

Thus we get
$$P(\tau_a^b < T) = \exp(-2ba)\int_{\tau_a^b < T} \exp\left(2bB(T) - 2b^2 T\right) dP = \exp(-2ba)\int_{M^b(T) \ge a} \exp\left(2bB(T) - 2b^2 T\right) dP$$
$$= \exp(-2ba)\int_{\sup_{0\le t\le T}(B(t)-bt)\ge a} \exp\left(2bB(T) - 2b^2 T\right) dP.$$

Now, by Girsanov's theorem, we change the probability measure from $P$ to $Q$, such that under $Q$, $W(t) = B(t) - bt$ is a standard Brownian motion. Recalling Girsanov's theorem:
Given a Brownian motion $B = \{B_t;\ 0 \le t \le T\}$ defined on the probability space $(\Omega, \mathcal{F}, P)$ for some $T > 0$, and given a left-continuous adapted process $u = \{u_t;\ 0 \le t \le T\}$ such that $E\left(\int_0^T u_t^2\, dt\right) < \infty$, set
$$Z = \exp\left(-\int_0^T u_t\, dB_t - \frac{1}{2}\int_0^T u_t^2\, dt\right).$$
If $E(Z) = 1$, then we can define, for all $A$, $Q(A) = E(\mathbf{1}_A Z)$. Then $Q$ is a probability measure equivalent to $P$; moreover, the process $W^u = \{W_t^u = B_t + \int_0^t u_s\, ds;\ 0 \le t \le T\}$ is a standard Brownian motion under this new probability measure $Q$ with respect to the filtration $\mathcal{F} = \{\mathcal{F}_t;\ 0 \le t \le T\}$.


Thus, the change of probability measure is $\frac{dQ}{dP} = \exp\left(bB(T) - \frac{b^2 T}{2}\right)$. We need to change the measure from $P$ to $Q$, thus using
$$\frac{dP}{dQ} = \exp\left(-bB(T) + \frac{b^2 T}{2}\right) = \exp\left(-bW(T) - \frac{b^2 T}{2}\right).$$
$$P(\tau_a^b < T) = \exp(-2ba)\int_{\sup_{0\le t\le T}(B(t)-bt)\ge a} \exp\left(2b(B(T) - bT)\right) \frac{dP}{dQ}\, dQ$$
$$= \exp(-2ba)\int_{\sup_{0\le t\le T}(B(t)-bt)\ge a} \exp\left(2bW(T)\right)\exp\left(-bW(T) - \frac{b^2 T}{2}\right) dQ$$
$$= \exp(-2ba)\int_{\sup_{0\le t\le T}(B(t)-bt)\ge a} \exp\left(bW(T) - \frac{b^2 T}{2}\right) dQ.$$
Again taking help of the reflection principle, we know that
$$Q(M(T) > a, W(T) < x) = Q(W(T) < x - 2a) \quad \text{for } x < a,$$
$$Q(M(T) > a, W(T) > x) = Q(W(T) > x) \quad \text{for } x \ge a.$$
Thus, dividing the domain of integration into two parts, we get
$$P(\tau_a^b < T) = \exp(-2ba)\int_{-\infty}^{a} \exp\left(bx - \frac{b^2 T}{2}\right) f_{W(T)}(x - 2a)\, dx + \exp(-2ba)\int_{a}^{\infty} \exp\left(bx - \frac{b^2 T}{2}\right) f_{W(T)}(x)\, dx,$$

where $f_{W(T)}(x) = \frac{1}{\sqrt{2\pi T}}\exp\left(-\frac{x^2}{2T}\right)$. Denoting the standard normal distribution function by $\Phi(y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{y} \exp\left(-\frac{x^2}{2}\right) dx$, so that $P(W(T) < y) = \Phi\left(\frac{y}{\sqrt{T}}\right)$, we get
$$P(\tau_a^b < T) = \exp(-2ba)\int_{-\infty}^{a} \exp\left(bx - \frac{b^2 T}{2}\right) \frac{1}{\sqrt{2\pi T}}\exp\left(-\frac{(x - 2a)^2}{2T}\right) dx + \exp(-2ba)\int_{a}^{\infty} \exp\left(bx - \frac{b^2 T}{2}\right) \frac{1}{\sqrt{2\pi T}}\exp\left(-\frac{x^2}{2T}\right) dx.$$

Thus, after simplification of the above relation, we get
$$P(\tau_a^b < T) = 1 - \Phi\left(\frac{a + bT}{\sqrt{T}}\right) + \exp(-2ba)\,\Phi\left(\frac{bT - a}{\sqrt{T}}\right).$$
Upon differentiating with respect to $T$, we get the probability density function
$$f_{\tau_a^b}(t) = \frac{a}{\sqrt{2\pi}\, t^{3/2}} \exp\left(-\frac{(a + bt)^2}{2t}\right).$$
This is the very famous Bachelier-Lévy formula.
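As a numerical sanity check on the two closed forms (a check I add here, assuming only NumPy and the standard library), integrating the Bachelier-Lévy density over $[0,T]$ should reproduce the crossing probability $P(\tau_a^b < T)$:

```python
import numpy as np
from math import erf, exp, sqrt, pi

def Phi(y):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

def fpt_cdf_drift(a, b, T):
    """P(tau_a^b < T) for W^b(t) = B(t) - b t and constant level a > 0."""
    return 1.0 - Phi((a + b * T) / sqrt(T)) + exp(-2.0 * a * b) * Phi((b * T - a) / sqrt(T))

def fpt_density_drift(a, b, t):
    """Bachelier-Levy density of tau_a^b."""
    return a / (sqrt(2.0 * pi) * t ** 1.5) * exp(-((a + b * t) ** 2) / (2.0 * t))

a, b, T = 1.0, 0.5, 2.0
ts = np.linspace(1e-9, T, 200001)
vals = np.array([fpt_density_drift(a, b, t) for t in ts])
integral = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts)))  # trapezoid rule
```

The trapezoid-rule integral and `fpt_cdf_drift(a, b, T)` should agree to several decimal places for these illustrative parameter values.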

## Generating the Brownian Bridge

Now, we look at how we can generate a Brownian bridge $B_t^{x,y}$, where $t_1 \le t \le t_2$ and we know the values of the Brownian motion at the two extreme points of the interval $[t_1, t_2]$: $B_{t_1} = x$ and $B_{t_2} = y$. From now on, we fix $t$ to some value between $t_1$ and $t_2$.
Now, let us consider the process $B_t^{x,y} = \{B_t;\ t_1 \le t \le t_2 \mid B_{t_1} = x, B_{t_2} = y\}$ and let
$$Y_{\alpha,\beta} = -\alpha B_{t_1} + B_t^{x,y} - \beta B_{t_2}$$
for $t_1 \le t \le t_2$. Finally, we would want this $Y_{\alpha,\beta}$ to give us some clue on how to generate $B_t^{x,y}$ with the help of independent random variables. Thus, to begin with, let us find the law of $Y_{\alpha,\beta}$. To find the law, we take help of the independent-increments property of Brownian motion: $B_{t_1} \perp (B_t - B_{t_1}) \perp (B_{t_2} - B_t)$.
Thus, splitting the above relation as
$$Y_{\alpha,\beta} = aB_{t_1} + b(B_t^{x,y} - B_{t_1}) + c(B_{t_2} - B_t^{x,y}),$$
we get
$$a - b = -\alpha, \qquad b - c = 1, \qquad c = -\beta.$$

Thus, finally we get the values of $a$, $b$ and $c$ as
$$a = 1 - \alpha - \beta, \qquad b = 1 - \beta, \qquad c = -\beta.$$

Now, we can take the characteristic function of the above expression:
$$E(e^{i\lambda Y_{\alpha,\beta}}) = E\left(e^{i\lambda\left(aB_{t_1} + b(B_t^{x,y} - B_{t_1}) + c(B_{t_2} - B_t^{x,y})\right)}\right) = E\left(e^{i\lambda aB_{t_1}}\right) E\left(e^{i\lambda b(B_t^{x,y} - B_{t_1})}\right) E\left(e^{i\lambda c(B_{t_2} - B_t^{x,y})}\right),$$
since $B_{t_1}$ is independent of $(B_t^{x,y} - B_{t_1})$, which in turn is independent of $(B_{t_2} - B_t^{x,y})$, and the expectation of a product of independent random variables factorizes. Thus,
$$E(e^{i\lambda Y_{\alpha,\beta}}) = e^{-\frac{1}{2}\lambda^2 a^2 t_1}\, e^{-\frac{1}{2}\lambda^2 b^2 (t - t_1)}\, e^{-\frac{1}{2}\lambda^2 c^2 (t_2 - t)} = e^{-\frac{1}{2}\lambda^2 \left(a^2 t_1 + b^2(t - t_1) + c^2(t_2 - t)\right)}.$$
Thus, we get a Gaussian random variable with mean $0$ and variance $a^2 t_1 + b^2(t - t_1) + c^2(t_2 - t)$. We now want $Y_{\alpha,\beta}$ to be independent of both $B_{t_1}$ and $B_{t_2}$, since the idea is to get an unbiased Brownian motion from $Y_{\alpha,\beta}$. Thus, we compute covariances to seek the condition for independence.
$$\mathrm{Cov}(Y_{\alpha,\beta}, B_{t_1}) = \mathrm{Cov}(aB_{t_1}, B_{t_1}) + \mathrm{Cov}(b(B_t^{x,y} - B_{t_1}), B_{t_1}) + \mathrm{Cov}(c(B_{t_2} - B_t^{x,y}), B_{t_1}) = a t_1,$$
as $B_{t_1} \perp (B_t^{x,y} - B_{t_1})$ and also $B_{t_1} \perp (B_{t_2} - B_t^{x,y})$. Now, we want the covariance to be $0$. Thus, we get
$$a t_1 = 0 \implies (1 - \alpha - \beta)\, t_1 = 0,$$
and since $t_1 \ne 0$,
$$\alpha + \beta = 1.$$

Similarly,
$$\mathrm{Cov}(Y_{\alpha,\beta}, B_{t_2}) = -\alpha\,\mathrm{Cov}(B_{t_1}, B_{t_2}) + \mathrm{Cov}(B_t^{x,y}, B_{t_2}) - \beta\,\mathrm{Cov}(B_{t_2}, B_{t_2}) = -\alpha t_1 + t - \beta t_2,$$
as we know that for a Brownian motion $\mathrm{Cov}(B_s, B_t) = \min(s, t)$. Again, this covariance must equal $0$. Thus,
$$-\alpha t_1 + t - \beta t_2 = 0.$$
Now, from the previous result we know $\alpha = 1 - \beta$. Thus,
$$-(1 - \beta)\, t_1 + t - \beta t_2 = 0 \implies \beta = \frac{t - t_1}{t_2 - t_1}, \qquad \alpha = \frac{t_2 - t}{t_2 - t_1}.$$

Thus, we have the complete relation between the biased Brownian motion (the Brownian bridge) $B_t^{x,y}$ and the unbiased one which we construct from $Y_{\alpha,\beta}$. We know $B_{t_1} = x$ and $B_{t_2} = y$. Thus, what we have is
$$B_t^{x,y} = \alpha B_{t_1} + \beta B_{t_2} + Y_{\alpha,\beta} = \frac{t_2 - t}{t_2 - t_1}\, x + \frac{t - t_1}{t_2 - t_1}\, y + Y_{\alpha,\beta}.$$
Now, we know that $Y_{\alpha,\beta} \sim \mathcal{N}\left(0,\, a^2 t_1 + b^2(t - t_1) + c^2(t_2 - t)\right)$. Knowing the values of $a$, $b$, $c$, we can directly substitute into the above expression and get
$$Y_{\alpha,\beta} \sim \mathcal{N}\left(0,\, \frac{(t_2 - t)(t - t_1)}{t_2 - t_1}\right).$$
Thus, finally we get
$$B_t^{x,y} = \frac{t_2 - t}{t_2 - t_1}\, x + \frac{t - t_1}{t_2 - t_1}\, y + B_{\frac{(t_2 - t)(t - t_1)}{t_2 - t_1}},$$
where $B$ is an independent standard Brownian motion evaluated at time $\frac{(t_2 - t)(t - t_1)}{t_2 - t_1}$.
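The final identity translates directly into a sampling routine. The sketch below is my own code (NumPy assumed, not part of the report); it draws one bridge point and then checks the first two moments empirically at the midpoint of a zero-pinned bridge:

```python
import numpy as np

def sample_bridge_point(t, t1, t2, x, y, rng):
    """Sample B_t^{x,y} given B_{t1} = x and B_{t2} = y, for t1 < t < t2."""
    alpha = (t2 - t) / (t2 - t1)
    beta = (t - t1) / (t2 - t1)
    var = (t2 - t) * (t - t1) / (t2 - t1)   # variance of the Y term
    return alpha * x + beta * y + np.sqrt(var) * rng.standard_normal()

rng = np.random.default_rng(0)
samples = np.array([sample_bridge_point(0.5, 0.0, 1.0, 0.0, 0.0, rng)
                    for _ in range(100000)])
# at the midpoint of a bridge pinned to 0 on [0, 1]: mean 0, variance 1/4
```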

## Law of the First-Passage Time for a Brownian Bridge

Now, we look into the law of the first-passage time in the case of a Brownian bridge. For every $0 \le t_x \le t_z \le 1$ and for reals $x, z, L$ with $x, z < L$, consider the first-passage time $\tau_{t_x} = \inf\{t > t_x \mid B_t > L\}$. Then, conditioning on $B_{t_x} = x$ and $B_{t_z} = z$, we get
$$P(\tau_{t_x} < t_z \mid B_{t_x} = x, B_{t_z} = z) = \exp\left(-\frac{2(L - x)(L - z)}{t_z - t_x}\right).$$
Proof: We know that $B_t$ is a Gaussian random variable $\mathcal{N}(0, t)$. To study the probability $P(\tau_{t_x} < t_z \mid B_{t_x} = x, B_{t_z} = z)$, we first introduce the maximum of the process between times $t_x$ and $t_z$:
$$M_{t_x,t_z} = \sup_{t_x \le t \le t_z} B_t.$$


$$P(\tau_{t_x} < t_z \mid B_{t_x} = x, B_{t_z} = z) = P(M_{t_x,t_z} > L \mid B_{t_x} = x, B_{t_z} = z).$$
Now, we know from probability theory that
$$P(M_{t_x,t_z} > L \mid B_{t_x} = x, B_{t_z} = z) = \frac{P(M_{t_x,t_z} > L, B_{t_z} = z \mid B_{t_x} = x)}{P(B_{t_z} = z \mid B_{t_x} = x)}.$$
We know from the strong Markov property of Brownian motion, $(B_{t+s} \mid B_u;\ 0 \le u \le s) = (B_{t+s} \mid B_s)$ for $t \ge 0$, and from the reflection property, as discussed previously, that
$$P(M_{t_x,t_z} > L, B_{t_z} < z \mid B_{t_x} = x) = P(M_{t_x,t_z} > L, B_{t_z} > 2L - z \mid B_{t_x} = x) = P(B_{t_z} > 2L - z \mid B_{t_x} = x),$$
since we have assumed in the beginning that $z < L$.
Thus,
$$P(B_{t_z} > 2L - z \mid B_{t_x} = x) = \frac{1}{\sqrt{2\pi(t_z - t_x)}} \int_{2L - z}^{\infty} \exp\left(-\frac{(y - x)^2}{2(t_z - t_x)}\right) dy.$$
By differentiation of the above expression with respect to $z$ and $L$, we get
$$P(B_{t_z} \in dz, M_{t_x,t_z} \in dl \mid B_{t_x} = x) = \frac{2(2l - (x + z))}{\sqrt{2\pi}\,(t_z - t_x)^{3/2}} \exp\left(-\frac{(2l - (x + z))^2}{2(t_z - t_x)}\right) dz\, dl.$$
Also, since $B_t$ follows a Gaussian distribution, we can write
$$P(B_{t_z} \in dz \mid B_{t_x} = x) = \frac{1}{\sqrt{2\pi(t_z - t_x)}} \exp\left(-\frac{(z - x)^2}{2(t_z - t_x)}\right) dz.$$
Thus, we get
$$P(M_{t_x,t_z} \in dl \mid B_{t_x} = x, B_{t_z} = z) = \frac{2(2l - (x + z))}{t_z - t_x} \exp\left(-\frac{2(l - x)(l - z)}{t_z - t_x}\right) dl.$$

Integrating the above density over $l \ge L$, we get exactly the probability for the process to reach the boundary (barrier) in between the times $t_x$ and $t_z$:
$$P(M_{t_x,t_z} > L \mid B_{t_x} = x, B_{t_z} = z) = \exp\left(-\frac{2(L - x)(L - z)}{t_z - t_x}\right).$$
Thus,
$$P(\tau_{t_x} < t_z \mid B_{t_x} = x, B_{t_z} = z) = P(M_{t_x,t_z} > L \mid B_{t_x} = x, B_{t_z} = z) = \exp\left(-\frac{2(L - x)(L - z)}{t_z - t_x}\right),$$
and we get a very nice closed-form representation for the first-passage probability of a Brownian bridge.
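In path simulation this formula is used as an acceptance test: the endpoints of each simulated sub-interval play the roles of $x$ and $z$, and a uniform draw decides whether a "hidden" crossing occurred between them. A minimal sketch (my own code, not the report's implementation):

```python
import numpy as np

def bridge_crossing_prob(x, z, L, dt):
    """P(bridge over an interval of length dt crosses level L | endpoints x, z)."""
    if x >= L or z >= L:
        return 1.0   # an endpoint already touches the barrier
    return float(np.exp(-2.0 * (L - x) * (L - z) / dt))

# after a simulated step from x to z over dt, declare a hidden crossing when U < p
rng = np.random.default_rng(0)
p = bridge_crossing_prob(0.6, 0.4, 1.0, 0.01)
crossed = rng.uniform() < p
```

Note that the probability increases as either endpoint approaches the barrier or as the interval widens, exactly as intuition suggests.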

Gauss-Markov Process

Mathematical Calculations
Definitions
Given some probability space $(\Omega, \mathcal{F}, P)$, where $\Omega$ is the sample space, $\mathcal{F}$ is its associated $\sigma$-field and $P$ is the probability measure equipped with $(\Omega, \mathcal{F})$, let $X$ be a continuous stochastic process with natural filtration $\mathcal{F}_t$. Then $X$ is a Gauss-Markov process if:
1. $X$ is a Gaussian process: for any integer $n$ and positive reals $t_1 < t_2 < \cdots < t_n$, the random vector $(X_{t_1}, X_{t_2}, \ldots, X_{t_n})$ has a joint Gaussian distribution. Meaning, for any reals $a_1, a_2, \ldots, a_n$, the random variable
$$\sum_{i=1}^{n} a_i X_{t_i}$$
is a Gaussian random variable.
2. $X$ is a Markov process: for any $s, t \ge 0$ and $A \in \mathcal{B}(\mathbb{R})$,
$$P(X_{t+s} \in A \mid \mathcal{F}_s) = P(X_{t+s} \in A \mid X_s),$$
which states that the conditional probability distribution of the future state $X_{t+s}$, given the present state and all the past states $\mathcal{F}_s$, depends only on the present state $X_s$.
Now, we define the first-passage time problem. Let us say $X_{t_0} = x_0 \le L$ and take into account the random variable $\tau_s^L$ on $(\Omega, \mathcal{F}, P)$:
$$\tau_s^L(\omega) = \inf\{t \ge s \mid X_t(\omega) \ge L\}.$$
The random variable $\tau_s^L$ is a stopping time with respect to $\mathcal{F}_t$ and is known as the first-passage time of $X$ for a boundary $L$. The first-passage time problem consists of determining the distribution of $\tau_s^L$ on $[s, +\infty)$.
Doob's Integral Representation
Let $W = \{W_t, \mathcal{F}_t;\ 0 \le t \le T\}$ denote a standard real Wiener process on some probability space $(\Omega, \mathcal{F}, P)$. We consider the class of real Gauss-Markov processes $X = \{X_t, \mathcal{F}_t;\ 0 \le t \le T\}$ solving the following stochastic differential equation:
$$dX_t = \alpha(t) X_t\, dt + \beta(t)\, dW_t \qquad \text{with} \qquad X_0 = x_0,$$
where $\alpha$ is a bounded function and $\beta$ is a continuous, bounded, positive function. Now, we can solve this equation by using variation of parameters. Let $Y_t$ be the solution of the differential equation
$$dY_t = \alpha(t) Y_t\, dt \qquad \text{with} \qquad Y_0 = 1;$$
then we get $Y_t = \exp\left(\int_0^t \alpha(u)\, du\right)$ for any $0 \le t \le T$.
Now we consider $X_t = Y_t Z_t$ for some $Z_t$. Applying Itô's formula (the bracket $d\langle Y, Z\rangle_t$ vanishes since $Y$ is deterministic and of finite variation), we get
$$dX_t = Z_t\, dY_t + Y_t\, dZ_t = Z_t \alpha(t) Y_t\, dt + \exp\left(\int_0^t \alpha(u)\, du\right) dZ_t = \alpha(t) X_t\, dt + \exp\left(\int_0^t \alpha(u)\, du\right) dZ_t.$$
From this we get $\beta(t)\, dW_t = \exp\left(\int_0^t \alpha(u)\, du\right) dZ_t$ with $Z_0 = x_0$, and from further simplification,
$$Z_t = x_0 + \int_0^t \exp\left(-\int_0^s \alpha(u)\, du\right) \beta(s)\, dW_s.$$


Thus we get
$$X_t = \exp\left(\int_0^t \alpha(u)\, du\right)\left(x_0 + \int_0^t \exp\left(-\int_0^s \alpha(u)\, du\right) \beta(s)\, dW_s\right).$$
Thus, defining the functions $g$, $h$ and $f$ as
$$g(t) = \exp\left(\int_0^t \alpha(u)\, du\right), \qquad h(t) = \int_0^t \exp\left(-2\int_0^s \alpha(u)\, du\right) \beta(s)^2\, ds, \qquad f(t) = \exp\left(-\int_0^t \alpha(u)\, du\right) \beta(t),$$
we can write the stochastic equation as
$$d\left(\frac{X_t}{g(t)}\right) = f(t)\, dW_t \qquad \text{with} \qquad X_0 = x_0,$$
with the solution being
$$X_t = g(t)\left(x_0 + \int_0^t f(s)\, dW_s\right).$$
Now, we can see that $X_t$ is a Gaussian random variable with mean $g(t) x_0$ and variance $g(t)^2 \int_0^t f(s)^2\, ds = g(t)^2\, h(t)$.
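For the Ornstein-Uhlenbeck case (constant $\alpha < 0$ and constant $\beta$) the three functions have closed forms, which is convenient later when the algorithms need $g$ and $h$. A small sketch (my own addition; the constants are purely illustrative):

```python
import numpy as np

alpha, beta = -1.0, 0.5   # OU process: dX_t = alpha * X_t dt + beta dW_t

def g(t):
    """g(t) = exp(int_0^t alpha du) = exp(alpha * t)."""
    return np.exp(alpha * t)

def f(t):
    """f(t) = exp(-int_0^t alpha du) * beta."""
    return np.exp(-alpha * t) * beta

def h(t):
    """h(t) = int_0^t f(u)^2 du, in closed form for constant coefficients."""
    return beta ** 2 * (np.exp(-2.0 * alpha * t) - 1.0) / (-2.0 * alpha)

# variance of X_t is g(t)^2 * h(t); for alpha < 0 it tends to beta^2 / (-2 alpha)
t = 5.0
var_t = g(t) ** 2 * h(t)
stationary_var = beta ** 2 / (-2.0 * alpha)
```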
Discrete Construction of Gauss-Markov Processes
Given some reals $t_x < t_y < t_z$, the conditioning formula for a Gauss-Markov process gives the distribution of $X_{t_y}$, knowing $X_{t_x} = x$ and $X_{t_z} = z$ [2]. It can be shown that the corresponding probability density is the Gaussian law $\mathcal{N}(\mu(t_y), \sigma(t_y)^2)$, where $\mu(t_y)$ denotes the time-dependent mean
$$\mu(t_y) = \frac{g(t_y)}{g(t_x)}\, \frac{h(t_z) - h(t_y)}{h(t_z) - h(t_x)}\, x + \frac{g(t_y)}{g(t_z)}\, \frac{h(t_y) - h(t_x)}{h(t_z) - h(t_x)}\, z,$$
$$\sigma(t_y)^2 = g(t_y)^2\, \frac{(h(t_y) - h(t_x))(h(t_z) - h(t_y))}{h(t_z) - h(t_x)}.$$

Proof: The discrete construction of a Gauss-Markov process rests heavily on two results. The first one is Doob's integral representation, which gives an expression for the transition probability of a general Gauss-Markov process:
$$P(X_t \in [x, x + dx] \mid X_{t_0} = x_0) = \frac{1}{g(t)\sqrt{2\pi(h(t) - h(t_0))}} \exp\left(-\frac{\left(\frac{x}{g(t)} - \frac{x_0}{g(t_0)}\right)^2}{2(h(t) - h(t_0))}\right) dx,$$
provided $h(t) \ne h(t_0)$. The second result is the fact that we can actually evaluate $P(X_{t_y} \in [y, y + dy] \mid X_{t_x} = x, X_{t_z} = z)$ with $t_x < t_y < t_z$, the probability for $X_{t_y}$ knowing the values at times $t_x$ and $t_z$ as $x$ and $z$ respectively. This is because $X$ is a Markov process: a sample path which starts from $x$ and terminates at $z$ passing through $y$ can be thought of as two independent paths, one starting at $x$ and finishing at $y$, and another starting at $y$ and ending at $z$. Thus, after normalization by the absolute probability for a path to go to $z$ from $x$, we have
$$P(X_{t_y} \in [y, y+dy] \mid X_{t_x} = x, X_{t_z} = z) = \frac{P(X_{t_y} \in [y, y + dy] \mid X_{t_x} = x)\; P(X_{t_z} \in [z, z + dz] \mid X_{t_y} = y)}{P(X_{t_z} \in [z, z + dz] \mid X_{t_x} = x)}.$$
Thus, we use the above two properties to derive the law of $X_{t_y}$ given the values $X_{t_x} = x$ and $X_{t_z} = z$, with $t_x < t_y < t_z$.
Using Doob's representation for each factor,
$$P(X_{t_y} \in [y, y+dy] \mid X_{t_x} = x, X_{t_z} = z) = \frac{\frac{1}{g(t_y)\sqrt{2\pi(h(t_y)-h(t_x))}}\exp\left(-\frac{\left(\frac{y}{g(t_y)}-\frac{x}{g(t_x)}\right)^2}{2(h(t_y)-h(t_x))}\right)\cdot\frac{1}{g(t_z)\sqrt{2\pi(h(t_z)-h(t_y))}}\exp\left(-\frac{\left(\frac{z}{g(t_z)}-\frac{y}{g(t_y)}\right)^2}{2(h(t_z)-h(t_y))}\right) dy\, dz}{\frac{1}{g(t_z)\sqrt{2\pi(h(t_z)-h(t_x))}}\exp\left(-\frac{\left(\frac{z}{g(t_z)}-\frac{x}{g(t_x)}\right)^2}{2(h(t_z)-h(t_x))}\right) dz}.$$
By factorizing the part inside the exponential, we get
$$P(X_{t_y} \in [y, y + dy] \mid X_{t_x} = x, X_{t_z} = z) = \frac{1}{g(t_y)\sqrt{2\pi\, \frac{(h(t_y)-h(t_x))(h(t_z)-h(t_y))}{h(t_z)-h(t_x)}}} \exp\left(-\frac{\left(y - \frac{g(t_y)}{g(t_x)}\frac{h(t_z)-h(t_y)}{h(t_z)-h(t_x)}\, x - \frac{g(t_y)}{g(t_z)}\frac{h(t_y)-h(t_x)}{h(t_z)-h(t_x)}\, z\right)^2}{2\, g(t_y)^2\, \frac{(h(t_y)-h(t_x))(h(t_z)-h(t_y))}{h(t_z)-h(t_x)}}\right) dy,$$
thus getting the desired result: $X_{t_y}$, with the knowledge that $X_{t_x} = x$ and $X_{t_z} = z$, does indeed have a Gaussian law with mean
$$\mu(t_y) = \frac{g(t_y)}{g(t_x)}\frac{h(t_z)-h(t_y)}{h(t_z)-h(t_x)}\, x + \frac{g(t_y)}{g(t_z)}\frac{h(t_y)-h(t_x)}{h(t_z)-h(t_x)}\, z$$
and variance
$$\sigma(t_y)^2 = g(t_y)^2\, \frac{(h(t_y)-h(t_x))(h(t_z)-h(t_y))}{h(t_z)-h(t_x)}.$$
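This conditional law is the engine of the dichotomic construction: given the values at the two ends of an interval, the value at an interior point can be drawn directly. A sketch of such a sampler (my own code; `g` and `h` are passed in as callables and are assumptions of this illustration):

```python
import numpy as np

def conditional_sample(ty, tx, tz, x, z, g, h, rng):
    """Draw X_{ty} given X_{tx} = x and X_{tz} = z for a Gauss-Markov
    process with Doob factors g and h (callables)."""
    w = (h(tz) - h(ty)) / (h(tz) - h(tx))   # weight of the left endpoint
    mean = g(ty) * (w * x / g(tx) + (1.0 - w) * z / g(tz))
    var = g(ty) ** 2 * (h(ty) - h(tx)) * (h(tz) - h(ty)) / (h(tz) - h(tx))
    return mean + np.sqrt(var) * rng.standard_normal()

# with g(t) = 1 and h(t) = t this reduces to the Brownian bridge midpoint
rng = np.random.default_rng(0)
mids = np.array([conditional_sample(0.5, 0.0, 1.0, 0.0, 0.0,
                                    lambda t: 1.0, lambda t: t, rng)
                 for _ in range(50000)])
```

Taking $g \equiv 1$ and $h(t) = t$ recovers the Brownian bridge construction derived earlier, which is a useful consistency check on the formulas.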

## Results on First-Passage Time

First, we look into the time-changed Wiener process $X$ with a constant boundary $L$. "Time-changed Wiener process" means that $X$ is a solution for $\alpha$ equal to zero; thus $X_t = x_0 + \int_0^t f(s)\, dW_s$.
For every $0 \le t_x \le t_z \le 1$ and for reals $x, z, L$ with $x, z < L$, consider the first-passage time $\tau_{t_x} = \inf\{t > t_x \mid X_t > L\}$. Then, conditioning on $X_{t_x} = x$ and $X_{t_z} = z$, we get
$$P(\tau_{t_x} < t_z \mid X_{t_x} = x, X_{t_z} = z) = \exp\left(-\frac{2(L - x)(L - z)}{h(t_z) - h(t_x)}\right)$$
with $h(t) = \int_{t_x}^{t} f(u)^2\, du$.
Proof: We know that $X_t = x_0 + \int_0^t f(s)\, dW_s$ is a Gaussian random variable $\mathcal{N}\left(x_0, \int_0^t f^2(s)\, ds\right)$. To study the probability $P(\tau_{t_x} < t_z \mid X_{t_x} = x, X_{t_z} = z)$, we first introduce the maximum of the process between times $t_x$ and $t_z$:
$$M_{t_x,t_z} = \sup_{t_x \le t \le t_z} X_t.$$


$$P(\tau_{t_x} < t_z \mid X_{t_x} = x, X_{t_z} = z) = P(M_{t_x,t_z} > L \mid X_{t_x} = x, X_{t_z} = z).$$
Now, we know from probability theory that
$$P(M_{t_x,t_z} > L \mid X_{t_x} = x, X_{t_z} = z) = \frac{P(M_{t_x,t_z} > L, X_{t_z} = z \mid X_{t_x} = x)}{P(X_{t_z} = z \mid X_{t_x} = x)}.$$
We know from the strong Markov property, $(X_{t+s} \mid X_u;\ 0 \le u \le s) = (X_{t+s} \mid X_s)$ for $t \ge 0$, and from the reflection property, as discussed previously, that
$$P(M_{t_x,t_z} > L, X_{t_z} < z \mid X_{t_x} = x) = P(M_{t_x,t_z} > L, X_{t_z} > 2L - z \mid X_{t_x} = x) = P(X_{t_z} > 2L - z \mid X_{t_x} = x),$$
since we have assumed in the beginning that $z < L$.
Thus,
$$P(X_{t_z} > 2L - z \mid X_{t_x} = x) = \frac{1}{\sqrt{2\pi(h(t_z) - h(t_x))}} \int_{2L - z}^{\infty} \exp\left(-\frac{(y - x)^2}{2(h(t_z) - h(t_x))}\right) dy.$$
By differentiation of the above expression with respect to $z$ and $L$, we get
$$P(X_{t_z} \in dz, M_{t_x,t_z} \in dl \mid X_{t_x} = x) = \frac{2(2l - (x + z))}{\sqrt{2\pi}\,(h(t_z) - h(t_x))^{3/2}} \exp\left(-\frac{(2l - (x + z))^2}{2(h(t_z) - h(t_x))}\right) dz\, dl.$$
Also, since $X_t$ follows a Gaussian distribution, we can write
$$P(X_{t_z} \in dz \mid X_{t_x} = x) = \frac{1}{\sqrt{2\pi(h(t_z) - h(t_x))}} \exp\left(-\frac{(z - x)^2}{2(h(t_z) - h(t_x))}\right) dz.$$
Thus, we get
$$P(M_{t_x,t_z} \in dl \mid X_{t_x} = x, X_{t_z} = z) = \frac{2(2l - (x + z))}{h(t_z) - h(t_x)} \exp\left(-\frac{2(l - x)(l - z)}{h(t_z) - h(t_x)}\right) dl.$$

Integrating the above density over $l \ge L$, we get exactly the probability for the process to reach the boundary (barrier) in between the times $t_x$ and $t_z$:
$$P(M_{t_x,t_z} > L \mid X_{t_x} = x, X_{t_z} = z) = \exp\left(-\frac{2(L - x)(L - z)}{h(t_z) - h(t_x)}\right).$$
Thus,
$$P(\tau_{t_x} < t_z \mid X_{t_x} = x, X_{t_z} = z) = P(M_{t_x,t_z} > L \mid X_{t_x} = x, X_{t_z} = z) = \exp\left(-\frac{2(L - x)(L - z)}{h(t_z) - h(t_x)}\right).$$

Next, another result, which will finally help us attain the result on the probability of crossing a constant boundary for a Gauss-Markov process.
Let $W = \{W_t, \mathcal{F}_t;\ 0 \le t \le 1\}$ denote a standard real Wiener process on some probability space $(\Omega, \mathcal{F}, P)$. For every $0 \le t_x \le t_z \le 1$ and every reals $x, z, L_x, L_z$ with $x < L_x$ and $z < L_z$, consider the first-passage time $\tau_{t_x} = \inf\{t \ge t_x \mid W_t \ge L(t)\}$, where $L$ is the affine function joining $(t_x, L_x)$ to $(t_z, L_z)$. Then, conditioning on $W_{t_x} = x$ and $W_{t_z} = z$, we have
$$P(\tau_{t_x} < t_z \mid W_{t_x} = x, W_{t_z} = z) = \exp\left(-\frac{2(L_x - x)(L_z - z)}{t_z - t_x}\right).$$


Proof. The affine barrier has the expression
$$L(t) = \frac{t_z - t}{t_z - t_x}\, L_x + \frac{t - t_x}{t_z - t_x}\, L_z,$$
thus it looks like $L(t) = a + bt$ where
$$a = \frac{t_z L_x - t_x L_z}{t_z - t_x}, \qquad b = \frac{L_z - L_x}{t_z - t_x}.$$
Now, we can change the first-passage time problem with an affine boundary into one for the drifted Brownian motion $W'_t = W_t - bt$ with a constant boundary $a$:
$$\tau_{t_x} = \inf\{t \ge t_x \mid W_t \ge a + bt\} = \inf\{t \ge t_x \mid W_t - bt \ge a\} = \tau'_{t_x}.$$
We will be using Girsanov's theorem to change the probability measure so as to make this drifted Brownian motion into a standard one under the new probability measure. The process defined for $t \ge t_x$ is
$$Z_t = \exp\left(b(W_t - x) - \frac{b^2}{2}(t - t_x)\right).$$
This is a martingale with $Z_{t_x} = 1$. Also, from the above expression of $Z_t$, we can deduce $E(Z_t) = 1$ from the form of the moment generating function of a Brownian motion:
$$E(Z_t) = E\left(\exp\left(b(W_t - x) - \frac{b^2}{2}(t - t_x)\right)\right) = E\left(\exp\left(b(W_t - x)\right)\right) \exp\left(-\frac{b^2}{2}(t - t_x)\right) = \exp\left(\frac{b^2}{2}(t - t_x)\right)\exp\left(-\frac{b^2}{2}(t - t_x)\right) = 1.$$
Thus, we get
$$E(Z_t) = 1.$$
We can now define a new probability measure $Q$, under which $W'_t$ is a standard Wiener process with variance equal to $t$. To check this fact:
$$Q(W'_{t_z} \in [z', z' + dz'] \mid W'_{t_x} = x' = x - bt_x) = E\left(\mathbf{1}_{W_{t_z} \in [z, z+dz] \mid W_{t_x} = x} \exp\left(b(z - x) - \frac{b^2}{2}(t_z - t_x)\right)\right)$$
$$= \frac{1}{\sqrt{2\pi(t_z - t_x)}} \exp\left(-\frac{(z - x)^2}{2(t_z - t_x)} + b(z - x) - \frac{b^2}{2}(t_z - t_x)\right) dz$$
$$= \frac{1}{\sqrt{2\pi(t_z - t_x)}} \exp\left(-\frac{(z - x - b(t_z - t_x))^2}{2(t_z - t_x)}\right) dz$$
$$= \frac{1}{\sqrt{2\pi(t_z - t_x)}} \exp\left(-\frac{(z' - x')^2}{2(t_z - t_x)}\right) dz',$$
if we let $z' = z - bt_z$. Thus, under $Q$, $W'_t$ is indeed a standard Brownian motion. Now we know that the two probability measures are equivalent. Also, we define the relation between the two measures as
$$Q(A) = E_P(\mathbf{1}_A Z_{t_z})$$


where $A \in \mathcal{F}_{t_x,t_z}$, the natural filtration generated by the original Wiener process $W_t$, $t_x \le t \le t_z$. Now, $W'_{t_x} = x' = x - bt_x$ and $W'_{t_z} = z' = z - bt_z$.
The two events, $dB' = \{W'_{t_z} \in [z', z' + dz'] \mid W'_{t_x} = x'\}$ and $dB = \{W_{t_z} \in [z, z + dz] \mid W_{t_x} = x\}$, have the same probability under the two probability measures $Q$ and $P$ respectively. In both cases we just evaluate the probability of a Brownian motion being equal to $z'$ and $z$, thanks to Girsanov's theorem; and when $W'_{t_z} = z'$, simultaneously $W_{t_z} = z$. Thus they have the same law, and thus the same probability:
$$P(dB) = Q(dB') = \frac{1}{\sqrt{2\pi(t_z - t_x)}} \exp\left(-\frac{(z' - x')^2}{2(t_z - t_x)}\right) dz'.$$
Let $M'_{t_x,t_z}$ be the maximum of the path $(W'_t)_{t_x \le t \le t_z}$:
$$M'_{t_x,t_z} = \sup_{t_x \le t \le t_z} W'_t = \sup_{t_x \le t \le t_z} (W_t - bt).$$
Let us consider the infinitesimal events $dE = \{W_{t_z} \in dz,\ \sup_{t_x \le t \le t_z}(W_t - bt) \in d\theta \mid W_{t_x} = x\}$ and $dE' = \{W'_{t_z} \in dz',\ M'_{t_x,t_z} \in d\theta \mid W'_{t_x} = x'\}$. Again by Girsanov's theorem, we know that
$$P(dE) = Q(dE').$$
Now, from the previous result we explicitly know the representation of the above probabilities. Thus,
$$Q(dE') = \frac{2(2\theta - (x' + z'))}{\sqrt{2\pi}\,(t_z - t_x)^{3/2}} \exp\left(-\frac{(2\theta - (x' + z'))^2}{2(t_z - t_x)}\right) dz'\, d\theta.$$
We need $Q(M'_{t_x,t_z} > a \mid W'_{t_x} = x', W'_{t_z} = z')$. By the law of conditional probability, we know that
$$Q(M'_{t_x,t_z} > a \mid W'_{t_x} = x', W'_{t_z} = z') = \frac{Q(M'_{t_x,t_z} > a, W'_{t_z} = z' \mid W'_{t_x} = x')}{Q(W'_{t_z} = z' \mid W'_{t_x} = x')} = \frac{Q(dE')}{Q(dB')}.$$

Thus, we get
$$P\left(\sup_{t_x \le t \le t_z}(W_t - bt) \in d\theta \,\Big|\, W_{t_x} = x, W_{t_z} = z\right) = Q(M'_{t_x,t_z} \in d\theta \mid W'_{t_x} = x', W'_{t_z} = z') = \frac{2(2\theta - (x' + z'))}{t_z - t_x} \exp\left(-\frac{2(\theta - x')(\theta - z')}{t_z - t_x}\right) d\theta.$$
Now, integrating the above relation over all $\theta > a$, we get the final result:
$$P\left(\sup_{t_x \le t \le t_z}(W_t - bt) > a \,\Big|\, W_{t_x} = x, W_{t_z} = z\right) = \exp\left(-\frac{2(a - x')(a - z')}{t_z - t_x}\right).$$
Thus, putting back the original values $a = \frac{t_z L_x - t_x L_z}{t_z - t_x}$, $b = \frac{L_z - L_x}{t_z - t_x}$, $x' = x - bt_x$, $z' = z - bt_z$, and noting that $a - x' = L(t_x) - x = L_x - x$ and $a - z' = L(t_z) - z = L_z - z$, we get
$$P\left(\sup_{t_x \le t \le t_z}(W_t - bt) > a \,\Big|\, W_{t_x} = x, W_{t_z} = z\right) = P(\tau_{t_x} < t_z \mid W_{t_x} = x, W_{t_z} = z) = \exp\left(-\frac{2(L_x - x)(L_z - z)}{t_z - t_x}\right).$$


We can extrapolate this result to a time-changed Wiener process (Brownian motion). If we define $B = \{B_t, \mathcal{F}_t;\ 0 \le t \le T\}$ as the solution of the SDE $dB_t = f(t)\, dW_t$ for some smooth, bounded function $f$, letting $B_{t_x} = x_1$ and $B_{t_z} = z_1$, with the affine boundary being the same, then we can also say
$$P\left(\sup_{t_x < t < t_z} B_t > a + bt \,\Big|\, B_{t_x} = x_1, B_{t_z} = z_1\right) = \exp\left(-\frac{2(L_x - x_1)(L_z - z_1)}{h(t_z) - h(t_x)}\right),$$
where $h(t) = \int_{t_0}^{t} f(u)^2\, du$.

Now, we turn our attention to the problem at hand, that is, the Gauss-Markov process.
Let $W = \{W_t, \mathcal{F}_t;\ 0 \le t \le T\}$ denote a standard real Wiener process on some probability space $(\Omega, \mathcal{F}, P)$, and let $X = \{X_t, \mathcal{F}_t;\ 0 \le t \le T\}$ be a real adapted centered Gauss-Markov process on $(\Omega, \mathcal{F}, P)$, defined by the equation
$$dX_t = \alpha(t) X_t\, dt + \beta(t)\, dW_t \qquad \text{with} \qquad X_0 = x_0,$$
with $\alpha$ and $\beta$ satisfying the regularity criteria: $\alpha$ is a non-positive, bounded function and $\beta$ is a positive, bounded and homogeneously Hölder continuous function. For every $0 \le t_x < t_z \le T$ and for any reals $x, z, L_x, L_z$ with $x < L_x$ and $z < L_z$, we consider the first-passage time $\tau_{t_x} = \inf\{t > t_x \mid X_t > \psi(t)\}$ where, for $t_x < t < t_z$,
$$\psi(t) = \frac{g(t)}{g(t_x)}\, \frac{h(t_z) - h(t)}{h(t_z) - h(t_x)}\, L_x + \frac{g(t)}{g(t_z)}\, \frac{h(t) - h(t_x)}{h(t_z) - h(t_x)}\, L_z$$
is the expectation of $X_t$ knowing that $X_{t_x} = L_x$ and $X_{t_z} = L_z$. Then, conditioning on $X_{t_x} = x$ and $X_{t_z} = z$, we have
$$P(\tau_{t_x} < t_z \mid X_{t_x} = x, X_{t_z} = z) = \exp\left(-\frac{2(L_x - x)(L_z - z)}{g(t_x)\, g(t_z)\, (h(t_z) - h(t_x))}\right).$$

Proof: First, we remember Doob's integral representation for the Gauss-Markov process of the form being discussed here. Thus, $X$ has the form
$$X_t = g(t)\left(x_0 + \int_0^t f(u)\, dW_u\right),$$
where $g(t) = \exp\left(\int_0^t \alpha(u)\, du\right)$, $f(t) = \exp\left(-\int_0^t \alpha(u)\, du\right)\beta(t)$, and $h(t) = \int_0^t \exp\left(-2\int_0^s \alpha(u)\, du\right)\beta(s)^2\, ds$ is the factor in the variance of the process $X_t$. Thus, we see that $\frac{X_t}{g(t)}$ is a time-changed Wiener process, as $d\left(\frac{X_t}{g(t)}\right) = f(t)\, dW_t$, and we can apply both of the results proved above to this proof.
We also see, by the definition of the boundary in consideration, that $\psi(t_x) = L_x$ and $\psi(t_z) = L_z$.


Let $K_x = \frac{L_x}{g(t_x)}$, $K_z = \frac{L_z}{g(t_z)}$, $x' = \frac{x}{g(t_x)}$, $z' = \frac{z}{g(t_z)}$, for any reals $x, z, L_x, L_z$ with $x < L_x$ and $z < L_z$. Now, let us define the affine boundary which we are going to consider in the first step, joining $(h(t_x), K_x)$ and $(h(t_z), K_z)$ for $t_x < t < t_z$:
$$L(t) = \frac{h(t_z) - h(t)}{h(t_z) - h(t_x)}\, K_x + \frac{h(t) - h(t_x)}{h(t_z) - h(t_x)}\, K_z.$$
Now, we consider the first-passage time of the form $\tau'_{h(t_x)} = \inf\{h(t) > h(t_x) \mid W'_{h(t)} > L(t)\}$, where $W'_t$ is the time-changed Wiener process defined by the SDE $dW'_t = f(t)\, dW_t$.
Then, conditioning on $W'_{h(t_x)} = x'$ and $W'_{h(t_z)} = z'$, we get
$$P(\tau'_{h(t_x)} < h(t_z) \mid W'_{h(t_x)} = x', W'_{h(t_z)} = z') = \exp\left(-\frac{2(K_x - x')(K_z - z')}{h(t_z) - h(t_x)}\right).$$
We know that $h(t)$ is an increasing function. Now, we look into the event whose probability we just calculated:
$$\{\tau'_{h(t_x)} < h(t_z) \mid W'_{h(t_x)} = x', W'_{h(t_z)} = z'\} = \{\exists\, h(t) \in (h(t_x), h(t_z)),\ W'_{h(t)} > L(t) \mid W'_{h(t_x)} = x', W'_{h(t_z)} = z'\}$$
$$= \{\exists\, t \in (t_x, t_z),\ W'_{h(t)} > L(t) \mid W'_{h(t_x)} = x', W'_{h(t_z)} = z'\}$$
$$= \{\exists\, t \in (t_x, t_z),\ g(t) W'_{h(t)} > g(t) L(t) \mid g(t_x) W'_{h(t_x)} = x,\ g(t_z) W'_{h(t_z)} = z\}.$$
Now, since we can very easily see that $X_t$ and $g(t) W'_{h(t)}$ have the same law, for $t_x \le t \le t_z$ we can deduce that
$$P(\tau'_{h(t_x)} < h(t_z) \mid W'_{h(t_x)} = x', W'_{h(t_z)} = z') = P(\tau_{t_x} < t_z \mid X_{t_x} = x, X_{t_z} = z),$$
using the facts that $g(t) L(t) = \psi(t)$ and $\tau_{t_x} = \inf\{t > t_x \mid X_t > \psi(t)\}$. Thus, finally putting in all the values, with $K_x - x' = \frac{L_x - x}{g(t_x)}$ and $K_z - z' = \frac{L_z - z}{g(t_z)}$, we get
$$P(\tau_{t_x} < t_z \mid X_{t_x} = x, X_{t_z} = z) = \exp\left(-\frac{2(L_x - x)(L_z - z)}{g(t_x)\, g(t_z)\, (h(t_z) - h(t_x))}\right).$$
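This closed form is what a simulation scheme for a Gauss-Markov process would evaluate on each sub-interval. A direct transcription (my own sketch; `g` and `h` are passed in as callables and are assumptions of this illustration):

```python
import numpy as np

def gm_bridge_crossing_prob(x, z, Lx, Lz, tx, tz, g, h):
    """P(tau < tz | X_tx = x, X_tz = z) for the boundary psi interpolating
    (tx, Lx) and (tz, Lz); requires x < Lx and z < Lz."""
    return float(np.exp(-2.0 * (Lx - x) * (Lz - z)
                        / (g(tx) * g(tz) * (h(tz) - h(tx)))))

# with g(t) = 1 and h(t) = t the formula collapses to the Brownian bridge case
p_gm = gm_bridge_crossing_prob(0.0, 0.0, 1.0, 1.0, 0.0, 1.0,
                               lambda t: 1.0, lambda t: t)
p_bridge = float(np.exp(-2.0))   # exp(-2 (L - x)(L - z) / (tz - tx))
```

The reduction to the Brownian bridge formula when $g \equiv 1$ and $h(t) = t$ is a convenient consistency check.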

Euler Scheme

We look into the Euler scheme for the approximation of the first-passage time of a Gauss-Markov process. We are given a stochastic differential equation of the form
$$dX_t = \mu(X_t, t)\, dt + \sigma(X_t, t)\, dW_t, \qquad 0 \le t \le T, \qquad X_0 = x_0,$$
where $\mu$ and $\sigma$ are smooth functions. The idea is to apply a time discretization scheme to the stochastic differential equation in question, whose solution is $X_t$.


The time discretization step is $\frac{T}{N}$, where $N \in \mathbb{N}$. We denote the quantity $\frac{T}{N}$ by $\Delta$. Thus, the scheme looks like
$$\widehat{X}_{(k+1)\Delta} = \widehat{X}_{k\Delta} + \mu(\widehat{X}_{k\Delta}, k\Delta)\,\Delta + \sigma(\widehat{X}_{k\Delta}, k\Delta)\,\sqrt{\Delta}\, G_{k+1} \quad \text{for } k \in \{0, 1, \ldots, N-1\}, \qquad \widehat{X}_0 = x_0,$$
where $G_1, G_2, \ldots, G_N$ are i.i.d. $\mathcal{N}(0, 1)$, as we know that $W_{(k+1)\Delta} - W_{k\Delta} \sim \mathcal{N}(0, \Delta) = \sqrt{\Delta}\,\mathcal{N}(0, 1)$.
We define the first-passage time as $\tau = \inf\{t \ge 0;\ X_t \ge L_t\}$, where $L_t$ is the boundary in question. The boundary can be a simple constant boundary or an affine boundary; in this report, we deal with a constant boundary only. Now, we define the first-passage time in accordance with our discrete Euler scheme as $\widetilde{\tau} = \inf\{t_i \ge 0;\ \widehat{X}_{t_i} \ge L_{t_i}\}$, where $t_i = i\Delta = i\frac{T}{N}$. Thus, we approximate $\tau$ by $\widetilde{\tau}$ using the Euler scheme.

First we look at the application of the Euler scheme to a Gauss-Markov process.

Gauss-Markov Process
The Gauss-Markov process has the form

$$dX_t = \alpha(t)X_t\,dt + \sigma(t)\,dW_t \quad \text{with} \quad X_0 = x_0$$

where $\alpha$ is a non-positive function which is bounded on every compact [0, T] for every $T > 0$, and $\sigma$ is a positive, homogeneously Hölder continuous function. We work here with an Ornstein-Uhlenbeck process of the form

$$dX_t = \lambda X_t\,dt + \sigma\,dW_t \quad \text{for } 0 \le t \le T, \quad \text{with} \quad X_0 = x_0$$

where $\lambda$ is a non-positive real constant, $\sigma$ is a positive constant, $x_0$ is a real constant, and $T > 0$. The Euler scheme, with $\delta = T/N$ where $N$ is a natural number, is

$$\hat{X}_{(k+1)\delta} = \hat{X}_{k\delta} + \lambda\hat{X}_{k\delta}\,\delta + \sigma\sqrt{\delta}\,G_{k+1} \quad \text{for } k \in \{0, 1, \ldots, N-1\}, \quad \hat{X}_0 = x_0$$

where $G_1, G_2, \ldots, G_N$ are i.i.d. $\mathcal{N}(0,1)$, as we know that $W_{(k+1)\delta} - W_{k\delta} \sim \mathcal{N}(0,\delta) = \sqrt{\delta}\,\mathcal{N}(0,1)$.
We look for the first time instance when the discrete process crosses the boundary $L_t$; thus we define the first-passage time, or first-hitting time, as $\tau_i = \inf\{t_i \ge 0;\ \hat{X}_{t_i} \ge L_{t_i}\}$, where $t_i = iT/N$ for $i \in \{0, 1, \ldots, N\}$. We note the first time the crossing happens and denote that as our first-passage time.
Now, what if none of the discrete values touches or crosses the boundary? Even though the discrete points might not cross, the continuous path may still cross. In such cases, we use an exit probability at each time step to check


what is the probability of the diffusion process crossing the boundary in such a time interval. Thus, the method is to obtain the realizations $(\hat{X}_{t_i})_{0 \le i \le N}$ thanks to the Euler scheme and then, conditioning on the values of $\hat{X}_{t_i}$ and $\hat{X}_{t_{i+1}}$, to look at the probability of a crossing happening between these two values, if and only if neither value crosses the boundary. Also, we know that $(X_t)_{t_i \le t \le t_{i+1}}$ has the law of some form of Brownian bridge. Using results from the previous section, which we have already proved, we get

$$P\left(\tau \in (t_i, t_{i+1}) \,\middle|\, \hat{X}_{t_i} = x,\ \hat{X}_{t_{i+1}} = z\right) = \exp\left(-\frac{2(L_{t_i} - x)(L_{t_{i+1}} - z)}{g(t_i)g(t_{i+1})(h(t_{i+1}) - h(t_i))}\right)$$

where we have acknowledged the fact that $\tau > t_i$, and where $g(t) = \exp(\lambda t)$ and $h(t) = \sigma^2\,\frac{1 - \exp(-2\lambda t)}{2\lambda}$. We get these representations from Doob's integral representation, dealt with in the last section.
Now, after we find this probability of crossing, we generate a uniform random variable $u \sim \mathcal{U}(0,1)$ and check the probability of exit against $u$. The idea is the same as before: we are looking for a Bernoulli event, namely whether the continuous path between the two discrete points crosses the boundary or not. We want to generate a random event whose probability of success is exactly the probability of crossing. In this respect we know that, for a uniform random variable $y \sim \mathcal{U}(0,1)$, $P(y \le x) = x$ if $0 \le x \le 1$. Thus, we generate a uniform random variable, in this case $u$, and compare it with our probability of exit $p$, knowing that

$$P(u \le p) = p$$

Thus, if $u \le p$, then we accept that the continuous path between $\hat{X}_{t_i}$ and $\hat{X}_{t_{i+1}}$ does cross the boundary.
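The whole procedure can be sketched in Python. This is an illustrative toy implementation for the Ornstein-Uhlenbeck case with a constant boundary (function names and the midpoint convention for the accepted bridge crossing are our own choices, not the code used for the reported experiments).

```python
import math
import random

def euler_fpt_ou(lam, sigma, x0, L, T, n_steps, rng):
    """One approximate draw of the first-passage time of
    dX_t = lam*X_t dt + sigma dW_t over a constant boundary L, via the
    Euler scheme combined with the Brownian-bridge exit probability.
    Returns None when no crossing is detected on [0, T]."""
    dt = T / n_steps
    g = lambda t: math.exp(lam * t)
    h = lambda t: sigma ** 2 * (1.0 - math.exp(-2.0 * lam * t)) / (2.0 * lam)
    x, t = x0, 0.0
    for _ in range(n_steps):
        x_new = x + lam * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t_new = t + dt
        if x_new >= L:
            return t_new  # the discrete path itself crosses
        # exit probability of the bridge pinned at (t, x) and (t_new, x_new)
        p = math.exp(-2.0 * (L - x) * (L - x_new)
                     / (g(t) * g(t_new) * (h(t_new) - h(t))))
        if rng.random() <= p:  # Bernoulli event with success probability p
            return t + 0.5 * dt  # interval midpoint as the crossing time
        x, t = x_new, t_new
    return None

rng = random.Random(0)
draws = [euler_fpt_ou(-1.0, 1.0, 0.0, 0.5, 10.0, 500, rng) for _ in range(200)]
```

Repeating the draw many times yields the Monte Carlo sample of first-passage times used in the numerical results below.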

Numerical Results
We look into the Euler scheme simulation of the Ornstein-Uhlenbeck process
with the help of the Brownian bridge concept.
$$dX_t = \lambda X_t\,dt + \sigma\,dW_t \quad \text{with} \quad X_0 = x_0$$

The idea is as follows: we simulate a new value of the process at each time point. If the value exceeds the boundary, we record the time and denote it as the first-passage time. If the value does not cross the boundary, we compute the probability of a crossing (exit) between the newly simulated value and the previously generated one (which was also below the boundary) and compare it with a uniform random variable $u \sim \mathcal{U}(0,1)$. The idea is to generate a random event whose probability of success equals the probability of crossing $p$.
Then, with the parameters $\lambda = -1.0$, $\sigma = 1.0$, $x_0 = 0.0$, initial time $= 0.0$, final time $T = 10.0$, number of time steps $= 2^{16}$, $\delta = T/2^{16} = 10/2^{16}$, $L = 0.5$, and Monte Carlo number $= 200000$,


Figure 1: The Euler scheme generated values for an Ornstein-Uhlenbeck process (x-axis: time, from 0 to 10; y-axis: the value of the process; data: "simulate_path.dat").

We first try to estimate the probability density function (PDF) of the first-passage time using a kernel density estimator.
The kernel density estimator belongs to the class of non-parametric density estimators used to estimate probability density functions. A non-parametric estimator has no fixed structure and depends on all the data points to reach the estimate. We are given a set of independent and identically distributed sample points $(x_1, x_2, \ldots, x_N)$ with an unknown density function $f$, which we want to estimate. Formally, the kernel estimator smooths out the contribution of each observed sample point over a local neighborhood of that data point. Thus, the kernel density estimator for the unknown density function $f$ is
$$\hat{f}_h(x) = \frac{1}{N}\sum_{i=1}^{N} K_h(x - x_i) = \frac{1}{Nh}\sum_{i=1}^{N} K\!\left(\frac{x - x_i}{h}\right)$$

where $h$ is the kernel bandwidth and $K_h(x) = \frac{1}{h}K\!\left(\frac{x}{h}\right)$.
In this case we use a smooth kernel function like the Gaussian kernel. Such a function $K$ integrates to 1, $\int K(x)\,dx = 1$, and is a unimodal, smooth function which peaks at 0, making it a very good choice of kernel. After selecting the kernel function, our job is to choose a proper bandwidth $h$, since there is always a trade-off between the bias of the estimator and its variance. If we choose a very small bandwidth, the estimator gives very spiky estimates (without much smoothing); if instead we choose too large a bandwidth, it leads to over-smoothing. However, for Gaussian kernels we can find an approximately optimal bandwidth $h$ with the formula

$$h = \left(\frac{4\hat{\sigma}^5}{3N}\right)^{1/5} \approx \frac{1.06\,\hat{\sigma}}{N^{1/5}}$$


where $\hat{\sigma}$ is the standard deviation of the set of sample points $(x_1, x_2, \ldots, x_N)$. With this value of $h$, we get a good approximation of the PDF of the first-passage time for the Ornstein-Uhlenbeck process.¹
Figure 2: The estimated probability density function (PDF) of the first-passage time for an Ornstein-Uhlenbeck process (x-axis: time, from 0 to 10; y-axis: f(x), the estimated density; data: "firstpassagekernel.dat").
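The estimator above can be sketched in a few lines of Python; this is a minimal illustrative version (our own naming), using the rule-of-thumb bandwidth just given.

```python
import math
import random

def gaussian_kde(samples, x):
    """Gaussian kernel density estimate at x, using the rule-of-thumb
    bandwidth h = 1.06 * sigma_hat * N^(-1/5)."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
    h = 1.06 * sd * n ** (-0.2)
    norm = n * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm

# Sanity check on synthetic N(0,1) data: the estimate near 0 should be close
# to the true density value 1/sqrt(2*pi) ~ 0.3989.
rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(2000)]
peak = gaussian_kde(data, 0.0)
tail = gaussian_kde(data, 3.0)
```

In the report, `samples` would be the Monte Carlo draws of the first-passage time.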
Now, we try to estimate the cumulative distribution function. If we have sample data $X = \{X_1, X_2, \ldots, X_N\}$, by the law of large numbers we know that $\frac{X_1 + X_2 + \cdots + X_N}{N}$ converges to $E(X)$. Thus,

$$P(X \le x) = E(1_{X \le x}) \approx \frac{1}{N}\sum_{i=1}^{N} 1_{X_i \le x}$$

where $X$ denotes the sample data set of first-passage times and $N$ in the above expression is the Monte Carlo number, i.e. the number of simulated values in our data set.
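This estimator is one line of Python; the sample values below are hypothetical, purely for illustration.

```python
def empirical_cdf(samples, x):
    """Monte-Carlo estimate of P(X <= x): the fraction of samples at or below x."""
    return sum(1 for s in samples if s <= x) / len(samples)

fpt_samples = [0.4, 1.2, 2.7, 0.9, 3.3]   # hypothetical first-passage times
prob = empirical_cdf(fpt_samples, 1.0)
```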
Thus, we have the cumulative distribution of the first-passage time. The cumulative distribution of the first-passage time for the Ornstein-Uhlenbeck process is shown in Figure 3.

Now, we know that for this method we encounter the statistical error due to the Monte Carlo simulation. The statistical error satisfies

$$\left|E(f(X)) - \frac{1}{M}\sum_{i=1}^{M} f(X_i)\right| \le C\sqrt{\frac{\text{Variance}}{M}}$$

where $C$ is some positive constant depending on the confidence interval of our choice. Thus, we need to calculate the variance of our sample data of first-passage times. Now, it is very easy to make a mistake
¹The $N$ in the example denotes our Monte Carlo number.


"cumulative_density.dat"

0.9
P(First-passage Time < X)

0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
0

10

## Figure 3: The cumulative distribution for the First-Passage time for an

Ornstein-Uhlenbeck process
in computationally calculating the variance. If we try to calculate the variance of sample data $\{x_1, x_2, \ldots, x_M\}$ as

$$\text{Variance}_{\text{expected}} = \frac{\sum_{i=1}^{M}(x_i - \bar{x})^2}{M}$$

such a calculation will be biased. In order to get the unbiased form of the variance, we see that

$$M\,E(\text{Variance}_{\text{expected}}) = E\left(\sum_{i=1}^{M}(x_i - \bar{x})^2\right) = E\left(\sum_{i=1}^{M}(x_i^2 + \bar{x}^2 - 2x_i\bar{x})\right) = E\left(\sum_{i=1}^{M}x_i^2\right) + E\left(\sum_{i=1}^{M}\bar{x}^2\right) - 2E\left(\bar{x}\sum_{i=1}^{M}x_i\right)$$

Now, since $\bar{x} = \frac{\sum_{i=1}^{M}x_i}{M}$, we have $\sum_{i=1}^{M}x_i = M\bar{x}$, and therefore

$$M\,E(\text{Variance}_{\text{expected}}) = M\,E(x_1^2) - M\,E(\bar{x}^2)$$

as we know that $\{x_1, x_2, \ldots, x_M\}$ are independent and identically distributed.


Thus, the law of all the sample variables is the same. Now, we know that

$$E(\bar{x}^2) = \text{Var}(\bar{x}) + E(\bar{x})^2 = \text{Var}\left(\frac{\sum_{i=1}^{M}x_i}{M}\right) + \text{mean}^2 = \frac{\text{Var}(x_1)}{M} + \text{mean}^2$$

where $E(\bar{x}) = E(X) = \text{mean}$ by the strong law of large numbers. Also, as the samples are identically distributed, $\text{Var}(x_1) = \text{Variance}$ and $E(x_1) = E(X) = \text{mean}$. Thus, we get

$$M\,E(\text{Variance}_{\text{expected}}) = M\left(\text{Var}(x_1) + E(x_1)^2\right) - M\left(\frac{\text{Var}(x_1)}{M} + \text{mean}^2\right) = (M-1)\,\text{Variance}$$

Thus, $\frac{M\,\text{Variance}_{\text{expected}}}{M-1}$ is the unbiased variance for the sample data. With such a choice of variance, we get an unbiased variance for our sample data and thus an accurate statistical error.
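The unbiased estimator (with the $M-1$ denominator) and the resulting statistical error can be sketched as follows; the function name is our own, and $z = 1.96$ corresponds to a 95% confidence interval.

```python
import math

def mc_mean_and_error(samples, z=1.96):
    """Monte-Carlo mean, unbiased sample variance (M - 1 denominator),
    and the half-width z * sqrt(variance / M) of the confidence interval."""
    m = len(samples)
    mean = sum(samples) / m
    var = sum((s - mean) ** 2 for s in samples) / (m - 1)  # unbiased variance
    return mean, var, z * math.sqrt(var / m)

mean, var, err = mc_mean_and_error([1.0, 2.0, 3.0, 4.0])
```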

A Fast Algorithm for the First-Passage Time Problem for a Gauss-Markov Process

The paper [1] by Taillefumier and Magnasco details the procedure they follow to approximate the first-passage time for a Gauss-Markov process with Hölder continuous boundaries, including the Ornstein-Uhlenbeck process. The main idea behind their algorithm is a probabilistic variant of dichotomic search. Their method evaluates discrete points of a sample path exactly and then refines this evaluation recursively only on regions where a passage is estimated to be more probable.

Introduction
For the class of Gauss-Markov processes, which includes the ever-so-important Ornstein-Uhlenbeck process, the numerical approximation was performed in a new way thanks to the paper [1]. A general Gauss-Markov process has the form

$$dX_t = \alpha(t)X_t\,dt + \sigma(t)\,dW_t \quad \text{with} \quad X_0 = x_0$$

where $\alpha$ is a bounded function and $\sigma$ is a continuous, bounded, positive function. The process is path-wise simulated at high resolution only on regions where such a resolution is needed: the process is simulated accurately when close to the boundary, and coarsely when far away from it.
The Ornstein-Uhlenbeck process is one of the most studied members of the class of Gauss-Markov processes. It is defined as a process $U$ solving a linear stochastic differential equation of the type
$$dU_t = \lambda U_t\,dt + \sigma\,dW_t \quad \text{with} \quad U_0 = u_0 \ (\text{deterministic}).$$

An example of a first-passage problem concerning an Ornstein-Uhlenbeck process is the leaky integrate-and-fire neuron. In this model the process $U_t$ represents the membrane potential of a neuron; any time the voltage crosses a given threshold value, the cell fires, emits an action potential, and resets its potential to a base value. More general Gauss-Markov processes can be seen as Ornstein-Uhlenbeck processes with the parameters $\lambda$ and $\sigma$ depending on time. Certain assumptions are taken into account:
1. Coefficient regularity assumption: the Gauss-Markov processes are solutions of a linear stochastic equation with a time-dependent non-positive, bounded function $\alpha$ and a time-dependent positive, homogeneously Hölder continuous function $\sigma$.
2. Barrier regularity assumption: the barrier is assumed to be homogeneously Hölder continuous and non-negative.
These assumptions determine the regularity of a continuous density function for the first-passage time and prescribe the speed of convergence of the first-passage time computation.

Algorithm
We now define the algorithm, originally constructed by Taillefumier and Magnasco [1]. This algorithm efficiently computes the distribution of the first-passage time for a general class of Gauss-Markov processes $X$ and continuous boundaries $L$. It recursively implements a probabilistic variant of dichotomic search. A dichotomic search is a search algorithm that operates by selecting between two distinct alternatives (dichotomies) at each step; it is a specific type of divide-and-conquer algorithm, a well-known example being binary search.
The idea is to simulate the path of the process using the discrete construction method. We first generate the final position of the process using the formula

$$X_T = g(T)x_0 + g(T)\sqrt{h(T)}\,\xi$$

where $\xi$ is a Gaussian random variable $\mathcal{N}(0,1)$.
Then we simulate $X_{T/2}$ using the result on the discrete construction of Gauss-Markov processes. In this way, we keep on dividing and subdividing the domain to find the values of our process at specific times; the idea behind this is the dichotomic search algorithm. The moment we come across a value of the process $X_t$² reaching or overshooting the boundary $L$, by continuity of the sample path we know that the crossing has occurred before or at $t$; thus, we disregard continuing the simulation of the sample path for times $s$ following $t$. We are only concerned with the part of the process prior to time $t$ and continue our dichotomic search in that part only.
In the case when the newly simulated value does not cross the boundary, and neither do any of the values on its left or on its right, we check whether the probability of a crossing happening (we get this value with the help of the result obtained in the section on Gauss-Markov processes) is larger than some prescribed small positive real $\epsilon$. We will be checking the probability of crossing
²where $t$ has the form $\frac{2k+1}{2^{n+1}}T$


on both sides of that simulated value and then decide to continue the dichotomic search in the part where we get the greater probability of crossing.
We define a certain depth of search initially, i.e. $T/2^N$. When this depth of discretization is reached, we look for the first-passage time of this particular simulation. In the case when one of the simulated values crosses the boundary, the first-passage time is the time corresponding to the first such crossing. In the other case, when none of the simulated values cross the boundary, we compute the probability of crossing and compare it with a uniform random variable. If the probability of crossing is bigger than the uniform random variable, we consider that the continuous path between the two time points of the interval in which the probability of crossing is being checked has actually crossed the boundary, and we take the midpoint of that interval as the first-passage time.
The idea behind this probabilistic variant of dichotomic search is to look for the first-passage time by simulating the process only at points where the outcome happens to be close enough to the boundary $L$.

Definition of the Algorithm

We are looking for the first-passage time in the interval [0, T] for a process $X$ defined by the stochastic differential equation

$$dX_t = \alpha(t)X_t\,dt + \sigma(t)\,dW_t, \quad X_0 = x_0$$

where $\alpha$ is a non-positive function, bounded on every compact [0, T], $T > 0$, $\sigma$ is a positive, homogeneously Hölder continuous function, and $x_0$ is a given deterministic real constant.
We hope to compute an approximation $\tau_N$ of the true first-passage time $\tau$, with a time step as fine as $2^{-N}$, assuming we can generate arbitrarily many independent identically distributed random variables of law $\mathcal{N}(0,1)$ and of law $\mathcal{U}(0,1)$.
1. The value of the sample path at the end point of [0, T] is given by

$$X_T = g(T)x_0 + g(T)\sqrt{h(T)}\,\xi, \quad X_0 = x_0$$

2. Now let $l_{n,k} = \frac{2k}{2^{n+1}}T$, $m_{n,k} = \frac{2k+1}{2^{n+1}}T$, and $r_{n,k} = \frac{2k+2}{2^{n+1}}T$. If we want to generate the sample path at $m_{n,k}$, we use the values of the sample path at $l_{n,k}$ and $r_{n,k}$, thanks to the Markov property of the process, and we generate it as per the result obtained from the discrete construction of the Gauss-Markov process.
Thus, let us say that we have already simulated the values of the process at $l_{n,k}$ and $r_{n,k}$ as $X_{l_{n,k}} = x_{n,k}$ and $X_{r_{n,k}} = z_{n,k}$. Then, from the result obtained in the discrete construction of the Gauss-Markov process section, we get $X_{m_{n,k}} = \mu(m_{n,k}) + \sigma(m_{n,k})\,\xi_{n,k}$, where $\xi_{n,k} \sim \mathcal{N}(0,1)$ and

$$\mu(m_{n,k}) = \frac{g(m_{n,k})}{g(l_{n,k})}\,\frac{h(r_{n,k}) - h(m_{n,k})}{h(r_{n,k}) - h(l_{n,k})}\,x_{n,k} + \frac{g(m_{n,k})}{g(r_{n,k})}\,\frac{h(m_{n,k}) - h(l_{n,k})}{h(r_{n,k}) - h(l_{n,k})}\,z_{n,k}$$

$$\sigma(m_{n,k}) = g(m_{n,k})\sqrt{\frac{(h(m_{n,k}) - h(l_{n,k}))(h(r_{n,k}) - h(m_{n,k}))}{h(r_{n,k}) - h(l_{n,k})}}$$
3. For every $1 \le n \le N$, the algorithm does not take into account any time following the occurrence of a value of the sample path above the barrier $L$.
4. Finally, for $n = N$, we define the approximate first-passage time $\tau_N$ as the first time the sample path crosses the boundary; if it does not cross, we look at the probability of a crossing between the times $l_{N,k}$ and $r_{N,k}$ by comparing the probability of crossing with a uniform random variable $u \sim \mathcal{U}(0,1)$. If the probability of crossing is larger than $u$, then we define the first-passage time $\tau_N$ as the midpoint of $l_{N,k}$ and $r_{N,k}$ for the corresponding $k$.
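The steps above can be sketched in Python. This is an illustrative toy version for the Ornstein-Uhlenbeck case with a constant boundary (the helper names, the plain-midpoint recursion, and the parameter values are our own simplifications, not the authors' code).

```python
import math
import random

lam, sigma, L, T, DEPTH, EPS = -1.0, 1.0, 0.5, 10.0, 10, 1e-3
rng = random.Random(1)
g = lambda t: math.exp(lam * t)
h = lambda t: sigma ** 2 * (1.0 - math.exp(-2.0 * lam * t)) / (2.0 * lam)

def sample_mid(tl, xl, tr, xr):
    """Draw X at the midpoint of (tl, tr) given X_{tl} = xl and X_{tr} = xr,
    using the conditional mean mu(m) and standard deviation sigma(m) above."""
    tm = 0.5 * (tl + tr)
    w = (h(tm) - h(tl)) / (h(tr) - h(tl))
    mu = g(tm) * ((1.0 - w) * xl / g(tl) + w * xr / g(tr))
    sd = g(tm) * math.sqrt((h(tm) - h(tl)) * (h(tr) - h(tm)) / (h(tr) - h(tl)))
    return tm, mu + sd * rng.gauss(0.0, 1.0)

def cross_prob(tl, xl, tr, xr):
    if xl >= L or xr >= L:
        return 1.0
    return math.exp(-2.0 * (L - xl) * (L - xr) / (g(tl) * g(tr) * (h(tr) - h(tl))))

def search(tl, xl, tr, xr, depth):
    """Probabilistic dichotomic search for the first crossing of L on (tl, tr)."""
    if xl >= L:
        return tl
    if depth == 0:
        if xr >= L:
            return tr
        return 0.5 * (tl + tr) if rng.random() <= cross_prob(tl, xl, tr, xr) else None
    if cross_prob(tl, xl, tr, xr) < EPS:
        return None            # crossing too unlikely here: do not refine
    tm, xm = sample_mid(tl, xl, tr, xr)
    tau = search(tl, xl, tm, xm, depth - 1)
    if tau is not None:
        return tau             # a crossing on the left comes first
    return search(tm, xm, tr, xr, depth - 1)

# step 1: endpoint draw, then recurse
x_T = g(T) * 0.0 + g(T) * math.sqrt(h(T)) * rng.gauss(0.0, 1.0)
tau = search(0.0, 0.0, T, x_T, DEPTH)
```

The recursion prunes every subinterval whose crossing probability is below EPS, which is where the speed-up over a uniform grid comes from.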
Simulation of an Ornstein-Uhlenbeck process
In this section, we work with an Ornstein-Uhlenbeck process as a representative of the Gauss-Markov family. The reason for choosing an Ornstein-Uhlenbeck process is that it is one of the most popular Gauss-Markov processes; moreover, it has varied applications in fields such as physics and finance. The Ornstein-Uhlenbeck process we work with is

$$dX_t = \lambda X_t\,dt + \sigma\,dW_t \quad \text{for } 0 \le t \le T, \quad \text{with} \quad X_0 = x_0$$

where $\lambda$ is a non-positive real constant, $\sigma$ is a positive real constant, and $x_0$ is a real constant. Moreover, we work with a constant boundary $L$ to simulate the first-passage time of the above diffusion process.
Of course, all the results pertaining to the general Gauss-Markov process described so far become very simple in this case. However, the idea is to put the algorithm to use and see its efficiency and speed.
The algorithm that was put into code is a slight modification of the one above. The main ideas behind the coded algorithm are:
1. We first choose an initial depth of, say, $2^N$ where $N$ is relatively large. We generate the final value using the formula

$$X_T = g(T)x_0 + g(T)\sqrt{h(T)}\,\xi$$

where $\xi \sim \mathcal{N}(0,1)$, and we already know the initial value $X_0 = x_0$. We now go about dividing the domain [0, T] by first generating the value $X_{T/2}$ using the values $X_0$ and $X_T$ and the result from the discrete construction of the Gauss-Markov process. Similarly, we generate $X_{T/4}$, $X_{3T/4}$, and so on until we reach the depth $2^N$, i.e. until we simulate the value $X_{T/2^N}$.

2. Once we have generated all the values of our first round of simulation, we look for the first time one of these values crosses the boundary. If none of the values cross the boundary, we state that the process does not cross the boundary and we carry on generating a new path for the process. Otherwise, if there exists at least one time instance when the process crosses the boundary, we record the first time this happens as $T_{\text{final}}$.
3. We check the probability of crossing in each time interval of length $T/2^N$, starting from the chosen initial time, which is 0 in our case, up to the time when the boundary is crossed for the first time. Thus, we check the probability of crossing between the time instances $\left(0, \frac{T}{2^N}\right)$, $\left(\frac{T}{2^N}, \frac{2T}{2^N}\right)$, and so on until we reach $T_{\text{final}}$. In each interval the probability $p$ is compared with a small, positive, pre-chosen real $\epsilon$. If $p \ge \epsilon$ for a particular interval, then we subdivide that interval like we did for the interval [0, T]; this time again, the depth is chosen to be $2^N$ with the same $N$ as before.
4. After simulating all the values in an interval $\left[\frac{kT}{2^N}, \frac{(k+1)T}{2^N}\right]$, for some $k \in \{0, 1, \ldots, \frac{T_{\text{final}}\,2^N}{T} - 1\}$, where the probability of crossing exceeds $\epsilon$, we look for the time when a value crosses the boundary. If such a time exists before we reach $T_{\text{final}}$, then we say it is the first-passage time for the simulated path. Otherwise, we check the probability of crossing against a uniform random variable $u \sim \mathcal{U}(0,1)$.
Now the question arises: which probability of exit should we consider? In this simulation, we consider the interval with the highest probability of exit. If the probability of exit (crossing) is greater than or equal to the uniform variable $u$, we take the midpoint of the time interval with respect to which the probability of crossing was measured as the first-passage time of the simulated path.

Numerical Simulation
The equation in question is

$$dX_t = \lambda X_t\,dt + \sigma\,dW_t \quad \text{for } 0 \le t \le T, \quad \text{with} \quad X_0 = x_0$$

with $\lambda = -1.0$, $\sigma = 1.0$, $x_0 = 0.0$, boundary $L = 0.5$, and finally $T = 10$. The depth is chosen to be $2^{10}$. First we simulate the first set of values for $X$.
After the initial simulation, we check for the first time the process value has touched or exceeded the boundary $L$. We find such a time in this case, and we go on checking the probability of exit in each interval against an $\epsilon$, where $\epsilon = 0.001$.³ Wherever the probability is greater than $\epsilon$, we simulate more values of the process in that interval, using the same idea as before.
If one of these newly simulated values touches or crosses the boundary, the first one to do so gives our first-passage time. If not, we check in which interval the probability of crossing was the highest. We will already have new values of $X$ simulated in that interval. This time we check the probability of crossing against a uniform random variable $u \sim \mathcal{U}(0,1)$ in each of the newly formed subintervals from the second round of simulation. The first such probability found to be greater than $u$ tells us that although the discrete path does not touch or cross the boundary, the continuous path between
³The idea behind choosing this value of $\epsilon$ is justified in the next section.

Figure 4: Initial simulation (x-axis: time, from 0 to 10; y-axis: value of the Ornstein-Uhlenbeck process X; data: "Initial_Simulation.dat").

those two time instances does. Thus, we note the midpoint of that particular interval as the first-passage time for this particular simulation.
Here, the interval in which the investigation for the first-passage time goes on is [6.4721, 6.4728]. The precision of the discretization is concentrated only on those intervals where we suspect the first-passage time might lie; this is a huge improvement in terms of speed. In this simulation, the first-passage time comes out as 6.47267, thus in the range described above.
We repeat this approximation of the first-passage time for a Monte Carlo number of 1000000 simulations. Thanks to that, we can approximate the cumulative distribution and the density function of the first-passage time for the above Ornstein-Uhlenbeck process in the same way as we did for the Euler scheme.

Euler Scheme vs Fast Algorithm

Here we compare the two previous algorithms in terms of speed of simulation and efficiency. We expect that with the fast algorithm we will be able to simulate the first-passage time in less computational time. We attain this since we avoid simulating the process under consideration beyond a point once we find that a crossing has occurred; instead of simulating with high precision over the entire interval, we concentrate only on the region where we have found our first-passage time to lie.
Of course, there are some instances when the fast algorithm returns an erroneous first-passage time. This happens when we search for the first-passage time in an interval where a passage might take place, but it is not the first time that the process crosses the barrier.

Figure 5: The simulated values of the Ornstein-Uhlenbeck process, looking for the first-passage time in a confined interval (x-axis: time, from 6.4721 to 6.4728; y-axis: value of the process, roughly 0.45 to 0.52; data: "second_round_of_simulation.dat").

"cumulative_distribution.dat"

0.9
P(First-passage Time < X)

0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
0

10

## Figure 6: The Cumulative Distribution for the First-Passage Time for an

Ornstein-Uhlenbeck Process

Figure 7: The kernel approximation of the probability density function of the first-passage time for an Ornstein-Uhlenbeck process (x-axis: time, from 0 to 10; y-axis: f(x), the estimated density; data: "density_kernel.dat").

It is given that, for a chosen parameter $\epsilon$ with which we compare our probability of crossing in each interval, and for a given depth $2^N$, we have

$$P(\tau_N > \tau + 2^{-N}) = \eta(N, \epsilon)$$

that is, the probability that a simulated first-passage time does not approximate the true first-passage time, where $\eta(N, \epsilon) \sim \epsilon\,2^N$. Thus we see that with a clever choice of $\epsilon$ we can lower this probability. The choice of $\epsilon$ is made so that

$$\ln\epsilon \le \ln\eta - (N-1)\ln 2 - \ln T$$

and with such a choice of $\epsilon$ we obtain an error probability of at most $\eta$.
Thus we look at the running time and efficiency of the two algorithms with the same parameter values and the same depth $2^N$. Also, we simulate the first-passage time in both algorithms using the same Monte Carlo number.

Numerical Results

We look into the following SDE:

$$dX_t = \lambda X_t\,dt + \sigma\,dW_t \quad \text{with} \quad X_0 = x_0$$

where $\lambda = -1.0$, $\sigma = 1.0$, $x_0 = 0.0$, barrier $L = 0.5$, final time $T = 10.0$, depth $= 2^N$, and $\delta = T/2^N$.
We first look at the results obtained from the Euler scheme for the above SDE. We tabulate the Monte Carlo number (MCN), the time step $\delta$, the expected value $E(\tau\,1_{\tau<T})$, the statistical error, and the running time of the algorithm.

Table 1: Euler Scheme Results

  MCN       Delta      N    Expected value   Statistical error   Running time (s)
  1000      0.039      8    1.1566           0.09301             0.11
  11000     0.00976    10   1.1946           0.0286              1.476
  170000    0.002441   12   1.2056           0.00727             117.465
  2700000   0.00061    14   1.2162           0.001828            1786.091
We now look at the numerical results of the fast algorithm for the above SDE. If we want the probability of an erroneous result for the first-passage time to be less than $10^{-10}$, we need to choose $\epsilon = \exp(-38.5)$. However, such a small epsilon is computationally almost equal to zero, so we choose a much bigger $\epsilon = \exp(-18)$, and hence the probability of the algorithm returning an erroneous result also gets higher, about $10^{-3}$.

Table 2: Fast Algorithm Results

  MCN       Delta      N    Expected value   Statistical error   Running time (s)
  1000      0.039      8    1.564            0.1235              0.41
  11000     0.00976    10   1.348            0.0335              5.833
  170000    0.002441   12   1.246            0.00836             189.148
  2700000   0.00061    14   1.2198           0.00202             2023.65

We see that although in theory we expect the fast algorithm to be faster than the Euler scheme, we do not observe that in practice. This may be due to the fact that, for this particular choice of parameters, the probability of the first-passage time lying close to the initial time is bigger than that of it lying near the final time.
Analytical solution for the first-passage time for an Ornstein-Uhlenbeck process
For the boundary $L = 0$ and $X_0 = 1.0$, keeping the other parameter values the same, the density is

$$f(x) = \frac{1}{\sqrt{2\pi}}\left(\frac{1}{\sinh(x)}\right)^{3/2}\exp\left(-\frac{\exp(-x)}{2\sinh(x)} + \frac{x}{2}\right)$$

The graph of the density function for the case $L = 0.5$ and $X_0 = 0$ is shown in Figure 8. Comparing it with the density functions estimated by the Euler scheme and the fast algorithm respectively, we see that the probability density functions we obtain are very good approximations of the density function for the Ornstein-Uhlenbeck process.
We also see that the speed of the fast algorithm increases as the Monte Carlo number increases.


Figure 8: The graph of the density function using the analytical solution for an OU process (x-axis: time, from 0 to 10; y-axis: the pdf, up to about 1.4; data: "firstpassagekernel.dat").

"firstpassagekernel.dat"

0.9
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
0

10

Figure 9: The graph of the density function using the kernel approximation for
an Euler scheme

Figure 10: The graph of the density function using the kernel approximation for the fast algorithm (x-axis: time, from 0 to 10; y-axis: f(x), the estimated density; data: "density_kernel.dat").

Exact Simulation of Diffusion Sample Paths

The following algorithm simulates the diffusion path of a one-dimensional stochastic differential equation exactly. It is a very interesting but quite restricted approach, in the sense that there are some strong restrictions on when we can use this method; for example, it cannot be used in cases as simple as the Black-Scholes model or the Ornstein-Uhlenbeck process. However, whenever it can be put to use, it returns the exact law of the process.

Introduction
This section looks into the exact simulation of a diffusion described by a stochastic differential equation. The paper on exact simulation [3] by Beskos and Roberts gives a very nice algorithm to do so; the idea was further extended in the paper [4] by Beskos, Papaspiliopoulos and Roberts. We look at the more evolved method, mainly presented in [4]. It involves a rejection procedure, whenever applicable, and the use of Poisson point processes (PPP) to return exact draws from any finite-dimensional distribution of the solution of the stochastic differential equation.
Let $W = \{W_t;\ 0 \le t \le T\}$ be a standard Brownian motion, and consider the general type of a one-dimensional stochastic differential equation

$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t, \quad 0 \le t \le T, \quad X_0 = x_0 \in \mathbb{R}$$

where $b : \mathbb{R} \to \mathbb{R}$ is the drift coefficient and $\sigma : \mathbb{R} \to \mathbb{R}$ is the diffusion coefficient. We know that there exists a weakly unique solution to the above stochastic differential equation if it can be shown that $b$ and $\sigma$ are locally Lipschitz, i.e. $|b(x) - b(y)| \le C_1|x-y|$ and $|\sigma(x) - \sigma(y)| \le C_2|x-y|$ for $x, y \in \mathbb{R}$, where $C_1$


and $C_2$ are real constants, and $E\int_0^T \sigma(s)^2\,ds < \infty$ holds for some $T < \infty$. Also, if the functions are locally Lipschitz, they are also locally continuous; thus the functions $b$ and $\sigma$ also satisfy a local boundedness condition, since continuous functions are bounded on a compact set.
Assuming the above-mentioned conditions on the functions $b$ and $\sigma$, the exact algorithm provides an alternative to the Euler scheme which involves no approximation and is still computationally highly efficient; it returns exact sample paths. In the following part, we are going to transform the above stochastic differential equation into one with $\sigma = 1$. This can be done by a simple transformation.
$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$$

Let $Y_t = \eta(X_t)$; applying Itô's formula, we get

$$dY_t = \eta'(X_t)\,dX_t + \frac{1}{2}\eta''(X_t)\sigma(X_t)^2\,dt = \left(\eta'(X_t)b(X_t) + \frac{1}{2}\eta''(X_t)\sigma(X_t)^2\right)dt + \eta'(X_t)\sigma(X_t)\,dW_t$$

Thus, to make the diffusion coefficient 1, we require

$$\eta'(X_t)\sigma(X_t) = 1$$

so we get

$$\eta(x) = \int_{x_0}^{x} \frac{1}{\sigma(u)}\,du$$

and, moreover,

$$\eta''(X_t) = -\frac{\sigma'(X_t)}{\sigma(X_t)^2}$$

Thus we finally get $Y_t = \eta(X_t) = \int_{x_0}^{X_t} \frac{1}{\sigma(s)}\,ds$.
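As a small sanity check of the transform, we can verify $\eta'(x)\sigma(x) = 1$ numerically for a hypothetical choice of diffusion coefficient. The choice $\sigma(u) = u$ (with $x_0 = 1$) is purely illustrative; it gives the closed form $\eta(x) = \ln x$.

```python
import math

# Hypothetical example: for sigma(u) = u and x0 = 1, the transform is
# eta(x) = integral_1^x du/u = ln(x).
def eta(x):
    return math.log(x)

# Check eta'(x) * sigma(x) = 1 at a test point via central differences.
x, d = 2.0, 1e-6
deriv = (eta(x + d) - eta(x - d)) / (2.0 * d)
unit_diffusion = deriv * x  # eta'(x) * sigma(x); should be 1
```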

Rejection Sampling Algorithm
Rejection sampling is a widely used technique in numerical simulation. The idea is to generate random variables which follow a given distribution, say $f(y)$. If we can establish an inequality of the sort $f(y) \le Cg(y)$, where $C$ is a positive real constant, and we can easily generate random variables which follow the law of $g(y)$, then using the rejection algorithm we can generate random variables which follow the law of $f(y)$. The main algorithm is as follows:

Rejection Sampling
1. Generate $Y \sim g$
2. Generate $U \sim \mathcal{U}(0,1)$, where $\mathcal{U}(0,1)$ means a uniform random variable on (0, 1)
3. If $U \le \frac{f(Y)}{Cg(Y)}$, return $Y$
4. Else go to 1.

This algorithm will return a random variable which is distributed as per $f(y)$.
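The four steps translate directly into Python. The Beta(2,2) target below is a hypothetical example of ours, chosen because the uniform proposal gives a simple envelope constant $C = 1.5$.

```python
import random

def rejection_sample(f, g_draw, g_pdf, C, rng):
    """Draw one sample from density f given the envelope f(y) <= C * g(y)."""
    while True:
        y = g_draw(rng)                    # step 1: Y ~ g
        u = rng.random()                   # step 2: U ~ U(0,1)
        if u <= f(y) / (C * g_pdf(y)):     # step 3: accept
            return y                       # step 4 is the implicit retry

# Example: f is the Beta(2,2) density 6y(1-y) on (0,1); a uniform proposal
# g = U(0,1) works with C = 1.5, since max f = 1.5.
f = lambda y: 6.0 * y * (1.0 - y)
rng = random.Random(0)
draws = [rejection_sample(f, lambda r: r.random(), lambda y: 1.0, 1.5, rng)
         for _ in range(5000)]
mean = sum(draws) / len(draws)  # Beta(2,2) has mean 1/2
```

On average one out of $C$ proposals is accepted, which is the result proved next.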
Now, lets state a simple result of rejection sampling.
P roposition :
Let be a test function. Now, the rejection sampling gives us a random variable
x which is distributed according to fY .We use the relation fY CfX ; where C
is any positive real number, to generate candidates from fX and use the rejection sampling method as described before. Now, the comparison is made with
fY (x)
then we
a uniform random variable U U(0, 1), in the sense if, U Cf
X (x)
accept the random variable x which was drawn originally from fX , otherwise
we keep on drawing.
Now, we state that, for Z ~ f_Y and X ~ f_X,

E(φ(Z)) = E( φ(X) | U ≤ f_Y(X) / (C f_X(X)) )

Proof:

E(φ(Z)) = ∫ φ(z) f_Y(z) dz
        = C ∫ φ(z) ( f_Y(z) / (C f_X(z)) ) f_X(z) dz
        = C E( φ(X) f_Y(X) / (C f_X(X)) )

Now, since f_Y(z) / (C f_X(z)) ∈ [0, 1], we can use the following result:

if 0 ≤ a ≤ 1, then a = P(U ≤ a) = E(1_{U ≤ a}).

Thus, taking the help of this result, we get

E(φ(Z)) = C E( φ(X) 1_{U ≤ f_Y(X)/(C f_X(X))} )
        = C E( φ(X) | U ≤ f_Y(X)/(C f_X(X)) ) · P( U ≤ f_Y(X)/(C f_X(X)) )

Now,

P( U ≤ f_Y(X)/(C f_X(X)) ) = E( 1_{U ≤ f_Y(X)/(C f_X(X))} )
                           = ∫_R ( f_Y(z) / (C f_X(z)) ) f_X(z) dz
                           = 1/C

so the two factors of C cancel and the claim follows. Thus, the probability of
rejection or acceptance depends on the constant C only: each candidate is
accepted with probability 1/C.

## EXACT SIMULATION OF DIFFUSION SAMPLE PATHS


Exact Algorithm
Consider the diffusion process Y = {Y_t ; 0 ≤ t ≤ T}, which follows the
stochastic differential equation

dY_t = α(Y_t) dt + dW_t,    0 ≤ t ≤ T,    Y_0 = y_0 ∈ R

The drift function α is assumed to satisfy the Lipschitz condition and also a
local growth bound.
The main idea behind this algorithm is to apply Girsanov's Theorem to make the
process simple enough to simulate. We apply Girsanov's Theorem once to change
the process Y_t into a standard Wiener process under some probability measure.
Then we apply Girsanov's theorem again to change the probability measure so that
under it, the process becomes a Brownian-bridge-like process. There are more
intricate details in the changes of probability, but we will get to them while
explaining the entire procedure.
We first define a sample space (Ω, F) equipped with a probability measure P.
Under this measure, the Brownian motion which drives our diffusion process is
W = {W_t ; 0 ≤ t ≤ T}. If we have to transform the diffusion process Y_t at hand
into a standard Wiener process, we have to take the help of Girsanov's Theorem.
Now,

dY_t = α(Y_t) dt + dW_t

Let Q be the new probability measure under which Y_t is a Wiener process of the
form W̃_t.
Some notes on Girsanov's Theorem:

Let Z(t) = exp{ ∫_0^t θ_s dW_s − (1/2) ∫_0^t |θ_s|² ds }

Novikov's condition: E_P( exp( (1/2) ∫_0^T |θ_t|² dt ) ) < +∞

If {θ_t} is an adapted process satisfying Novikov's condition, then for each
T > 0, Z(T) is a likelihood ratio: that is, the formula

Q(F) = E_P( Z(T) 1_F )

defines a new probability measure on (Ω, F). Girsanov's Theorem describes the
distribution of the stochastic process {W_t} under this new probability measure.
We define

W̃_t = W_t − ∫_0^t θ_s ds

Then Girsanov's Theorem states that under this new probability measure Q, the
stochastic process {W̃_t}_{0 ≤ t ≤ T} is a standard Wiener process.
Now, we have to transform the original equation dY_t = α(Y_t) dt + dW_t into a
Brownian motion under some new probability measure Q. Thus, taking

W̃_t = W_t − ∫_0^t θ_s ds,    dQ/dP = exp{ ∫_0^T θ_s dW_s − (1/2) ∫_0^T |θ_s|² ds }

we want Y_t to be a Brownian motion in t. Writing the dynamics in terms of W̃,

dY_t = ( α(Y_t) + θ_t ) dt + dW̃_t

Now, since we want Y_t to be of the form W̃_t, we want α(Y_t) + θ_t = 0. Thus we
get θ_t = −α(Y_t), and Y_t = W̃_t under the new probability measure Q. Thus,

E^P( F_T(Y) ) = E^Q( F_T(Y) dP/dQ )
             = E^Q( F_T(Y) exp{ −∫_0^T θ_s dW_s + (1/2) ∫_0^T |θ_s|² ds } )
             = E^Q( F_T(W̃) exp{ ∫_0^T α(W̃_s) dW̃_s − (1/2) ∫_0^T α(W̃_s)² ds } )

Thus, we get:

dP/dQ = exp( ∫_0^T α(W̃_t) dW̃_t − (1/2) ∫_0^T α(W̃_t)² dt ) = G(W̃)

This holds if E^Q( exp( (1/2) ∫_0^T α(W̃_s)² ds ) ) < +∞. Assuming Novikov's
condition, we can define the change of the probability measure.

Thus, our goal is now to implement a rejection sampling algorithm using the
above expression, but the difficulty lies in evaluating

exp( ∫_0^T α(W̃_t) dW̃_t − (1/2) ∫_0^T α(W̃_t)² dt ).

We can simplify the expression using Itô's formula to remove the Itô integral.
Assuming that the drift coefficient α is differentiable everywhere, let
A(u) = ∫_0^u α(s) ds. Applying Itô's formula to this function, we get

∫_0^T α(W̃_t) dW̃_t = A(W̃_T) − A(W̃_0) − (1/2) ∫_0^T α'(W̃_t) dt

Putting this relation into the main equation of the probability change, we get

G(W̃) = exp( A(W̃_T) − A(W̃_0) − (1/2) ∫_0^T ( α'(W̃_t) + α²(W̃_t) ) dt )

Now, rejection sampling using candidates from Brownian motion is possible
only if we can assure that G(W̃) is bounded. For that to happen we would


need A to be bounded, thus increasing the restriction on A. In order to get rid
of this strong condition, we introduce a third probability measure Z, which will
finally be used to construct the candidates for the rejection sampling.
We will use candidate paths from a process which is identical to a Brownian
motion except at the end point, i.e. at time T. Such processes are known as
biased Brownian motions, written Ŵ.
Proposition:
Let M = {M_t ; 0 ≤ t ≤ T} and N = {N_t ; 0 ≤ t ≤ T} be two stochastic processes
on (Ω, F) with corresponding probability measures M and N. Assume that f_M and
f_N are the densities of the end points M_T and N_T respectively, with identical
support R. If it is true that (M | M_T = ρ) ∼ (N | N_T = ρ) for all ρ ∈ R, then

dM/dN = ( f_M / f_N )( W̃_T )

Proof:
The property (M | M_T = ρ) ∼ (N | N_T = ρ) for all ρ ∈ R can be expressed more
rigorously as

M[ A | σ(W̃_T) ] = N[ A | σ(W̃_T) ]    a.s.

It is enough to show that

M[A] = E_N[ 1_A ( f_M / f_N )( W̃_T ) ]

Now,

E_N[ 1_A (f_M/f_N)(W̃_T) ] = E_N[ E_N[ 1_A (f_M/f_N)(W̃_T) | σ(W̃_T) ] ]
                          = E_N[ (f_M/f_N)(W̃_T) E_N[ 1_A | σ(W̃_T) ] ]
                          = E_N[ (f_M/f_N)(W̃_T) N[ A | σ(W̃_T) ] ]
                          = E_M[ N[ A | σ(W̃_T) ] ]
                          = E_M[ M[ A | σ(W̃_T) ] ]
                          = E_M[ E_M[ 1_A | σ(W̃_T) ] ]
                          = M[A]

where the fourth equality uses the density change f_M/f_N on the end point.

Now, we use the above proposition to get the density function of the final
position of the biased Brownian motion. We want to get rid of the
exp( A(W̃_T) − A(W̃_0) ) part from G(W̃). Thus, changing the probability measure
from Q to Z, we want the change of probability to be of the form

dQ/dZ = exp( −A(W̃_T) + A(W̃_0) ) = C₁ exp( −A(W̃_T) )

where C₁ is a constant of the form exp( A(W̃_0) ).
Now, from the previous proposition, we know that

dQ/dZ = [ (1/√(2πT)) exp( −(W̃_T − y_0)² / (2T) ) ] / h_T(W̃_T)

where h_T is the density of the end point of the biased Brownian motion. The
numerator in the above expression comes from the fact that under the probability
measure Q, W̃_T ∼ N(y_0, T). Thus, we get the final expression for the density
function h_T:

h_T(u) = ( 1 / (C₁ √(2πT)) ) exp( A(u) − (u − y_0)² / (2T) )
Now, we see that

dP/dZ = (dP/dQ) · (dQ/dZ) = exp( −(1/2) ∫_0^T ( α'(Ŵ_t) + α²(Ŵ_t) ) dt )

Thus, to recapitulate the notation used so far: under the probability measure P,
W is a Brownian motion; under the probability measure Q, W̃ is a Brownian
motion; and under the probability measure Z, Ŵ is a biased Brownian motion in
the sense that its end point Ŵ_T is distributed according to the probability
density function h_T.
Now, we can go ahead in our endeavour if we assume that the expression inside
the integral, (1/2)( α'(·) + α²(·) ), is at least bounded from below. If it is
so, i.e. there exist k₁, k₂ ∈ R such that k₁ ≤ (1/2)( α'(·) + α²(·) ) ≤ k₂, then
we can define a new, non-negative function

φ(·) = (1/2)( α'(·) + α²(·) ) − k₁

Thus, we will get

dP/dZ = C₂ exp( −∫_0^T φ(Ŵ_t) dt )

where C₂ is a constant that comes from exp( −∫_0^T k₁ dt ).
Thus, for any test function F, we get

E^P( F(Y) ) = C₂ · E^Z( F(Ŵ) exp( −∫_0^T φ(Ŵ_t) dt ) )

Since φ ≥ 0, the factor exp( −∫_0^T φ(Ŵ_t) dt ) lies in [0, 1], so given the
path it can be read as the success probability of some random event Γ.
Conditioning on such an event, whose probability of happening given Ŵ is
exp( −∫_0^T φ(Ŵ_t) dt ),

E^P( F(Y) ) = C₂ · E^Z( F(Ŵ) | Γ ) · P(Γ)

Taking F ≡ 1 shows that P(Γ) = E^Z( exp( −∫_0^T φ(Ŵ_t) dt ) ) = 1/C₂, so finally

E^P( F(Y) ) = E^Z( F(Ŵ) | Γ )

that is, paths accepted on the event Γ have exactly the law of Y under P.


Now comes the most important part of this section: the algorithm to generate an
exact sample path of the solution of the stochastic differential equation.

Algorithm
The main difficulty in our approach will be to find an event whose probability
of success equals exp( −∫_0^T φ(Ŵ_t) dt ). However, we are lucky to have the
Point Poisson Process, which gives us a situation where the success probability
of an event is exactly exp( −∫_0^T φ(Ŵ_t) dt ).

## Point Poisson Process

A Point Poisson Process (PPP) on a domain D (e.g. [0,T] × [0,K]) with intensity
I(t, x) is a random collection of points {(t₁, x₁), (t₂, x₂), ..., (t_τ, x_τ)}
in the domain D, where τ is a random function of the intensity I. We define
N(A, ω), for A ⊆ D and ω ∈ Ω, on a probability space (Ω, F, P): N(A, ·) denotes
the counting measure, i.e. the number of points of the Point Poisson Process in
the set A ⊆ D for one realization ω of the process.

Characterization of a Point Poisson Process:
For all A ⊆ D, N(A, ω) is a random variable with a Poisson law of parameter
∫_A I(t, x) dt dx. That is, the random variable N_A = N(A, ·) has the
representation

P(N_A = k) = ( ∫_A I(t, x) dt dx )^k exp( −∫_A I(t, x) dt dx ) / k!

For all subsets A and B of D with empty intersection, N_A and N_B are
independent random variables.
A particular case is the intensity I = 1 on D = [0,T] × [0,K]. In this case the
random number of points in a set A ⊆ D, N_A, has a Poisson law with mean
λ = ∫_A I(t, x) dt dx = Volume(A).
In such a PPP, write the points as {(t₁, x₁), (t₂, x₂), ..., (t_τ, x_τ)} with the
times increasing, t₁ ≤ t₂ ≤ ... ≤ t_τ. Then the gaps between successive times
are exponentially distributed, t_{k+1} − t_k ∼ E(K), with t_k ∈ [0, T] for all
k, where K is the upper bound of the spatial coordinate (here x ∈ [0, K]). Also,
the space variable is distributed according to the uniform law, x_k ∼ U(0, K)
for all k.

Proof: With the intensity we chose, i.e. I = 1, we can derive the above
relations. Keep in mind that these relations, and the proof that follows, are
only valid for the chosen intensity and the chosen set D.
With I = 1, we get ∫_A I(t, x) dt dx = Volume(A) for A ⊆ D. Then, conditioning
on there being exactly one point in the time strip [0, t₁],

P(x₁ > ε) = P( N_{[0,ε]×[0,t₁]} = 0, N_{[ε,K]×[0,t₁]} = 1 | N_{[0,K]×[0,t₁]} = 1 )
          = P( N_{[ε,K]×[0,t₁]} = 1 | N_{[0,K]×[0,t₁]} = 1 )
          = [ Volume([ε,K]×[0,t₁]) exp(−Volume([ε,K]×[0,t₁])) exp(−Volume([0,ε]×[0,t₁])) ]
            / [ Volume([0,K]×[0,t₁]) exp(−Volume([0,K]×[0,t₁])) ]
          = (K − ε) t₁ / (K t₁)
          = 1 − ε/K

Thus,

P(x₁ ≤ ε) = 1 − (1 − ε/K) = ε/K

and we get that x₁ ∼ U(0, K). Thus, we obtain the fact that the space variable
is uniformly distributed on the domain [0, K], owing to the intensity chosen
as 1.
Now, we look into the distribution of t_k by looking at t₁:

P(t₁ > ε) = P( no points in [0, ε] × [0, K] )
          = exp( −Volume([0, ε] × [0, K]) )
          = exp( −εK )

Thus, P(t₁ ≤ ε) = 1 − exp(−εK), so t₁ ∼ E(K). Now, translating the origin to the
previously generated value t_k, we simulate the next value as
t_{k+1} − t_k ∼ E(K).
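The characterization above is easy to exercise in code. The following minimal sketch (the parameter values are illustrative) generates a unit-intensity PPP on [0,T] × [0,K] via exponential gaps and uniform heights, and checks that the average number of points is close to Volume(D) = T·K.

```python
import random

def unit_ppp(T, K, rng):
    """Unit-intensity Poisson point process on [0, T] x [0, K]:
    time gaps t_{k+1} - t_k ~ Exp(K), heights x_k ~ U(0, K)."""
    points, t = [], 0.0
    while True:
        t += rng.expovariate(K)        # exponential with rate K
        if t > T:
            return points
        points.append((t, rng.uniform(0.0, K)))

rng = random.Random(1)
T, K = 10.0, 9.0 / 8.0
counts = [len(unit_ppp(T, K, rng)) for _ in range(5000)]
avg = sum(counts) / len(counts)   # should be close to Volume = T * K = 11.25
```

The values T = 10 and K = 9/8 anticipate the numerical example below, but the construction works for any rectangle.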

We needed to generate an event whose probability of success is
exp( −∫_0^T φ(Ŵ_t) dt ). With the help of a PPP, we can do exactly that:
simulate a PPP with unit intensity on [0,T] × [0, sup_{0≤t≤T} φ(Ŵ_t)], and let
N = the number of points of the PPP below the graph of {(t, φ(Ŵ_t)); t ∈ [0,T]}.
Then

P(N = 0) = exp( −∫_0^T φ(Ŵ_t) dt )

Now, to prove this claim, we see that

P(N = 0) = exp( −Volume of A )

where A is the part of the space-time domain which lies under the graph of
{(t, φ(Ŵ_t)); t ∈ [0, T]}. Thus, Volume of A = ∫_0^T φ(Ŵ_t) dt, so putting the
pieces together we get

P(N = 0) = exp( −∫_0^T φ(Ŵ_t) dt )


Thus, we finally have a random event whose probability of success is exactly
what we were looking for. Now, we are in a position to write the proper
algorithm for the simulation of an exact path of the diffusion process in
question.

Exact Simulation
1. Generate a random variable ω(T) ∼ h_T, and set ω(0) = 0.
2. Generate a Point Poisson Process (PPP) on the domain [0,T] × [0,K], where
K = sup_{0≤t≤T} φ(Ŵ_t), with unit intensity⁴. Thus t_{s+1} − t_s ∼ E(K) and
x_s ∼ U(0, K).
3. Generate all the interim values ω(t_i) of the process at the times produced
by the PPP, using the concept of the Brownian bridge.
4. Compare the values φ(ω(t_i)) = (1/2)( α'(ω(t_i)) + α²(ω(t_i)) ) − k₁, where
ω(t_i) is the value of the process at a time t_i generated thanks to the PPP,
with the space variables of the PPP, i.e. check whether φ(ω(t_i)) < x_i. If this
holds for all i such that t_i ≤ T, then we accept the path as our exact
simulated path of the original process, since the two processes X and ω are
equal in law.
5. If there exists t_i (t_i ≤ T) such that φ(ω(t_i)) ≥ x_i, then we go back to
step 1.

Thus, the proper algorithm will be:

Exact Simulation
1. Generate ω(T) ∼ h_T and set ω(0) = 0
2. Generate a PPP on [0,T] × [0,K] with unit intensity
3. Generate the values of the process at all the interim time points generated
with the help of the PPP, using the concept of the Brownian bridge
4. If φ(ω(t_i)) < x_i for all i such that t_i ≤ T, where ω(t_i) is the value of
the process at the time t_i and x_i is the space variable generated during the
PPP, then end
5. Else go to 1

The above algorithm returns an exact skeleton of the desired process X at
a finite number of time points.
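The five steps above can be sketched in code for the drift of the numerical example that follows, α = sin, for which A(u) = cos x₀ − cos u, φ(x) = (1/2)(cos x + sin²x) + 1/2 and K = 9/8. This is only a sketch under those assumptions; in particular, drawing the end point from h_T by a further rejection step against the Gaussian density (using exp(A(u)) ≤ e²) is our own implementation choice.

```python
import math, random

rng = random.Random(42)

def A(u, x0):
    """A(u) = integral of alpha = sin between x0 and u, i.e. cos(x0) - cos(u)."""
    return math.cos(x0) - math.cos(u)

def phi(x):
    """phi = (alpha' + alpha^2)/2 - k1 with k1 = -1/2; here 0 <= phi <= 9/8."""
    return 0.5 * (math.cos(x) + math.sin(x) ** 2) + 0.5

def sample_end_point(x0, T):
    """Draw the biased end point ~ h_T by rejection against N(x0, T):
    h_T(u) is proportional to exp(A(u)) times the Gaussian density, and
    exp(A(u)) <= exp(2), so accept with probability exp(A(u) - 2)."""
    while True:
        u = rng.gauss(x0, math.sqrt(T))
        if rng.random() <= math.exp(A(u, x0) - 2.0):
            return u

def exact_skeleton(x0, T, K=9.0 / 8.0):
    """One accepted skeleton (list of (t, value) pairs) of the exact algorithm."""
    while True:
        wT = sample_end_point(x0, T)                # step 1
        path, ok, t = [(0.0, x0)], True, 0.0
        while ok:
            t += rng.expovariate(K)                 # step 2: next PPP time
            if t > T:
                break
            prev_t, prev_w = path[-1]
            # step 3: Brownian bridge value at t between (prev_t, prev_w), (T, wT)
            mean = prev_w + (t - prev_t) / (T - prev_t) * (wT - prev_w)
            var = (t - prev_t) * (T - t) / (T - prev_t)
            w = rng.gauss(mean, math.sqrt(var))
            path.append((t, w))
            if phi(w) >= rng.uniform(0.0, K):       # step 4: point below the graph
                ok = False                          # step 5: reject, start over
        if ok:
            path.append((T, wT))
            return path

path = exact_skeleton(10.5, 10.0)   # skeleton for x0 = 10.5, T = 10
```

Note that the bridge values are filled in sequentially: conditionally on the last accepted point and the fixed end point (T, ω(T)), each interim value is again Gaussian with the usual bridge mean and variance.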

Numerical Simulation
We are restricted in our choice of the SDE, mainly because of the boundedness
criterion on (1/2)( α'(·) + α²(·) ). In a later paper [4], the authors showed
how the idea can be extended to the case when
limsup_{x→+∞} (1/2)( α'(x) + α²(x) ) < ∞ or
limsup_{x→−∞} (1/2)( α'(x) + α²(x) ) < ∞.

⁴ Intensity being 1 means there is an equal probability of choosing any point in
the domain, thus no point is more favourable than another.

We take the SDE

dX_t = sin(X_t) dt + dW_t,    X_0 = x_0

We know that X_t and ω(t) are equal in law; thus, simulating the accepted path
for ω gives us the exact path for X. We take

x_0 = X_0 = 10.5,    T_final = 10.0,
sup φ = sup ( (1/2)( cos x + sin² x ) + 1/2 ) = 9/8

Thus K = 9/8. We thereby have the exact skeleton of the process defined by the
SDE.

Figure 11: The exact simulated path for the process X (plot of "samplepath.dat"
against time)

We now look into the first-passage time for such a process: how do we simulate
the first-passage time in this case?

## Evaluation of the first-passage time using Exact Simulation

Now, after implementing the exact simulation, we have accepted a path with
values of the diffusion in question at some time points between the initial
time 0 and the final time T. The time points in between are generated such that
every gap T_{k+1} − T_k is exponentially distributed with parameter K (where K
is the sup of the function φ). Now, we are interested in the first-passage time.
We have T_{k+1}, T_k, X_{T_{k+1}}, X_{T_k}. Let us denote X_{T_k} = x,
X_{T_{k+1}} = y, and T_k = t₁, T_{k+1} = t₂, just for convenience's sake. Now,
we look into the probability of the process crossing the barrier between these
two times t₁ and t₂.

Now, since we are dealing here with a Brownian bridge, we will give the
following results for Brownian bridges. From now on we will identify our
diffusion process, between two skeleton points, with a Brownian bridge of the
form B̃^{x,y}.
What we are dealing with here is a biased Brownian motion (Brownian bridge)
B̃^{x,y} = {B_t ; t₁ ≤ t ≤ t₂ | B_{t₁} = x, B_{t₂} = y}. We want to see how we
can represent this biased Brownian motion with the help of an unbiased Brownian
motion. Let our unbiased Brownian motion be represented as B = {B_t ; t ≥ 0}.
From the previous result on the construction of a Brownian bridge, we can write,
for t₁ ≤ t ≤ t₂,

B̃^{x,y}_t = ((t₂ − t)/(t₂ − t₁)) B_{t₁} + ((t − t₁)/(t₂ − t₁)) B_{t₂} + Y_t

where Y_t is a centred process, vanishing at t₁ and t₂, that carries the bridge
fluctuations. We know B_{t₁} = x and B_{t₂} = y. Thus, what we have is

B̃^{x,y}_t = ((t₂ − t)/(t₂ − t₁)) x + ((t − t₁)/(t₂ − t₁)) y + Y_t

Now, writing Y_t as a combination a B_{t₁} + b (B_t − B_{t₁}) + c (B_{t₂} − B_t)
of independent increments, we know that

Y_t ∼ N( 0, a² t₁ + b² (t − t₁) + c² (t₂ − t) )

Knowing the values of a, b and c, we can directly put them into the above
equation and get the final law of Y_t.

Y_t ∼ N( 0, (t₂ − t)(t − t₁) / (t₂ − t₁) )

Thus, finally, realizing Y_t through a time-changed unbiased Brownian motion,
we get

B̃^{x,y}_t = ((t₂ − t)/(t₂ − t₁)) x + ((t − t₁)/(t₂ − t₁)) y
            + ((t₂ − t)/√(t₂ − t₁)) B_{(t − t₁)/(t₂ − t)}

(one checks that the variance of the last term is indeed
(t₂ − t)(t − t₁)/(t₂ − t₁)). Thus, we have been able to relate our biased
Brownian motion to an unbiased one.
The next job will be to look into the first passage of our biased Brownian
motion with the help of results known to us for unbiased Brownian motions (even
with drift).
We define the first-passage time of our original process as

τ_L^{x,y} = inf{ t ∈ [t₁, t₂] ; B̃^{x,y}_t ≥ L }

and the first-passage time of a Brownian motion over an affine boundary will be
defined as


τ_{α,β} = inf{ t ≥ 0 ; B_t ≥ α + βt }

Now,

P( τ_L^{x,y} > s ) = P( B̃^{x,y}_u < L ; t₁ ≤ u ≤ s )
  = P( ((t₂ − u)/(t₂ − t₁)) x + ((u − t₁)/(t₂ − t₁)) y
       + ((t₂ − u)/√(t₂ − t₁)) B_{(u − t₁)/(t₂ − u)} < L ; t₁ ≤ u ≤ s )
  = P( B_v < (L − x)/√(t₂ − t₁) + ((L − y)/√(t₂ − t₁)) v ;
       0 ≤ v ≤ (s − t₁)/(t₂ − s) )
  = P( τ_{α,β} > (s − t₁)/(t₂ − s) )

where α = (L − x)/√(t₂ − t₁) and β = (L − y)/√(t₂ − t₁).

Thus, τ_L^{x,y} is equal in distribution to g(τ_{α,β}), where

g(x) = (x t₂ + t₁)/(x + 1),    for 0 ≤ x ≤ ∞.

We know the distribution of the random variable τ_{α,β} from the very famous
Bachelier–Lévy formula:

f_{τ_{α,β}}(u) = ( α / (u^{3/2} √(2π)) ) exp( −(α + βu)² / (2u) ),    for u > 0.

Thus, this way we can actually derive the analytical result for the
first-passage time of the biased Brownian motion in each interval, and in turn
get the first-passage time of the original diffusion process X in each such
interval.
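A closely related closed form follows from the same computation: for a Brownian bridge with both end points below L, the probability of crossing L is exp(−2(L−x)(L−y)/(t₂−t₁)). The following numerical sanity check is our own addition, not part of the report's algorithm: it verifies this formula against the reflection principle, P(max of B on [0,1] ≥ L) = 2 P(B₁ ≥ L), by integrating the conditional crossing probability over the Gaussian end point.

```python
import math

def bridge_crossing_prob(x, y, t1, t2, L):
    """P(a Brownian bridge from (t1, x) to (t2, y) exceeds the level L)."""
    if x >= L or y >= L:
        return 1.0
    return math.exp(-2.0 * (L - x) * (L - y) / (t2 - t1))

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_sf(z):   # P(Z >= z) for Z ~ N(0, 1)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Total probability check: integrating the conditional crossing probability
# over the N(0, 1) end point must reproduce the reflection value 2 P(B_1 >= L).
L = 1.3
n, lo, hi = 100000, -10.0, 10.0
h = (hi - lo) / n
total = 0.0
for k in range(n + 1):
    z = lo + k * h
    w = 0.5 if k in (0, n) else 1.0           # trapezoidal weights
    total += w * bridge_crossing_prob(0.0, z, 0.0, 1.0, L) * normal_pdf(z)
total *= h
reflection = 2.0 * normal_sf(L)               # the two should agree closely
```

The agreement is deterministic (no Monte Carlo noise), which makes this a convenient unit test for any implementation of the bridge-crossing correction used later for barrier options.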

## Application in Finance - Barrier Options

A European vanilla option is a contract giving the option holder the right to
buy or sell one unit of the underlying asset at a prescribed price, known as the
exercise or strike price K, at a prescribed time, known as the expiration date
T. Barrier options are similar to standard vanilla options except that the
option is knocked out or in if the underlying asset price hits the barrier
price, B, before the expiration date. Since 1967, barrier options have been
traded in the over-the-counter (OTC) market, and nowadays they are the most
popular class of exotic options. It is therefore quite important to develop
accurate and efficient methods to evaluate barrier option prices in financial
derivative markets.
Pricing a barrier option is done mainly with the expectation approach, which
requires the knowledge of the risk-neutral probability density of the underlying
asset price as it breaches the barrier from above or below. Barrier option
prices are then obtained by integrating the discounted pay-off function for the
barrier option over the calculated density.
Let the underlying asset price S_t follow the stochastic differential equation
(SDE)

dS_t = μ(S_t, t) dt + σ(S_t, t) dW_t,    with    S_{t₀} = s₀

Now, by the above-described method, we will get the price of the barrier option
as

V(s₀, T) = E^Q( Φ(S_T) 1_{τ > T} | S_{t₀} = s₀ )

where τ = inf{ t > 0 ; S_t ≥ L }, with L the boundary or barrier.
Φ(S_T) 1_{τ > T} is the discounted pay-off function, which depends only on the
price of the asset at maturity, and E^Q is the expectation calculated under the
risk-neutral probability. In the case of a complete market, this risk-neutral
probability is unique.
We take into account the case of knock-out barrier options, which become
worthless once the asset price hits the boundary or barrier, in this case B. We
assume a very simple pay-off function of the form
Φ(S_T) 1_{τ > T} = exp( −r(T − t₀) ) max(S_T − K, 0), and Φ(S_T) 1_{τ ≤ T} = 0.
Here K represents the strike price, r is the interest rate obtained from the
riskless asset, and T is the maturity or expiration time.
In order to approximate the price of the option according to the formula
mentioned above, we employ a Monte Carlo method, which is a very simple and
robust numerical method. The main drawback of such a method is its slow rate of
convergence. From the Central Limit Theorem, we know that the statistical error
satisfies

| E(f(S_T)) − (1/MCN) Σ_{j=1}^{MCN} f(S_T^j) | ≤ C · s.d. / √MCN

where MCN denotes the number of Monte Carlo simulations, s.d. is the standard
deviation of our sample data, and C is a constant depending on our confidence
interval. Thus, we see that in order to lower the statistical error, we have to
either increase the number of simulations or decrease the standard deviation, or
in turn the variance.
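The error bound above can be sketched as follows; the payoff and its parameters are made-up illustrations, not the report's setup (C = 1.96 corresponds to a 95% confidence interval).

```python
import math, random

rng = random.Random(3)

def mc_estimate(sample, mcn):
    """Monte Carlo mean together with the CLT half-width C * s.d. / sqrt(MCN),
    with C = 1.96 for a 95% confidence level."""
    data = [sample() for _ in range(mcn)]
    mean = sum(data) / mcn
    sd = math.sqrt(sum((d - mean) ** 2 for d in data) / (mcn - 1))
    return mean, 1.96 * sd / math.sqrt(mcn)

# Illustrative payoff max(S_T - K, 0) with a lognormal-ish terminal price.
payoff = lambda: max(10.5 * math.exp(rng.gauss(0.0, 0.1)) - 11.0, 0.0)
mean, half_width = mc_estimate(payoff, 20000)
```

Quadrupling the number of simulations only halves the half-width, which is the slow 1/√MCN convergence mentioned above.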
We look into the Euler scheme to model the price of a barrier option. The SDE
under consideration is

dS_t = μ(S_t, t) dt + σ(S_t, t) dW_t,    with    S_{t₀} = s₀

where μ and σ are known as the drift function and the volatility function,
respectively. We want these functions to be Lipschitz on any compact support
[0,T] for any T ≥ 0. This condition gives us the existence and uniqueness result
for the above SDE. Once we have established these results, we look to simulate
the asset price. This is done with the help of the Euler scheme:

S_{(k+1)Δ} = S_{kΔ} + μ(S_{kΔ}, kΔ) Δ + σ(S_{kΔ}, kΔ) √Δ G_{k+1}

where Δ = T/N with N the number of discretization points, kΔ and (k+1)Δ denote
the k-th and (k+1)-th time steps of the above SDE, and G ∼ N(0, 1). Also, we
know that W_{(k+1)Δ} − W_{kΔ} ∼ N(0, Δ) = √Δ N(0, 1).
What we are interested in is E(f(S_T)), which is supposed to give us the price
of the option concerned. Here f(S_T) 1_{τ > T} is the pay-off function
associated with the diffusion and T is the expiration or maturity time. For a
typical European call option, the pay-off function is f(S_T) = max(S_T − K, 0),
where K is the strike price⁵.

⁵ The strike price is the price determined by the buyer (seller) of the option,
such that if the price of the underlying goes beyond (under) the determined
strike price, the buyer (seller) of the option can still buy (sell) at the
strike price.


Now we employ the Monte Carlo method to approximate E(f(S_T)). We simulate, let
us say, m = MCN (the number of Monte Carlo simulations) values S_T^i using the
above-mentioned Euler scheme, and approximate E(f(S_T)) by

E(f(S_T)) ≈ (1/m) Σ_{i=1}^{m} max(S_T^i − K, 0)

Now, in following the Euler scheme, we know that there is a weak error due to
discretization, of order 1 for smooth pay-offs. On top of this, for barrier
options an extra error appears: the discrete path may not cross the boundary,
while the continuous path between two discrete values may still cross it. This
can be corrected if we introduce the concept of the crossing probability
associated with the Brownian bridge. This crossing probability has been
introduced before; we are merely recalling it. Thus, as before, we compare this
probability of crossing, p, with a uniform random variable η. If the uniform
random variable η ≤ p, then we accept that the path between two values of the
asset price has indeed crossed the boundary, even when the two values themselves
are below the boundary. Now, the expression for this crossing probability is not
simple in general; we know it for some cases. Since we are dealing with
Gauss-Markov processes, we will assume that the price of the underlying asset is
a Gauss-Markov process whose SDE is given by

dS_t = α(t) S_t dt + σ(t) dW_t,    with    S_{t₀} = s₀

where α and σ are real, bounded and Hölder continuous functions, and s₀ is a
positive constant.
In this case we can very easily define the probability of crossing:

P( sup_{t_i ≤ t ≤ t_{i+1}} S_t ≥ B | S_{t_i} = x, S_{t_{i+1}} = z )
  = exp( −2 (B − x)(B − z) / ( g(t_i) g(t_{i+1}) ( h(t_{i+1}) − h(t_i) ) ) )

where t_i = iT/N for i ∈ {0, 1, ..., N−1},

g(t) = exp( ∫_{t₀}^{t} α(u) du ),
h(t) = ∫_{t₀}^{t} σ²(u) exp( −2 ∫_{t₀}^{u} α(s) ds ) du.

We get these functions g and h from Doob's integral representation, proved
previously.
We now just have to compute this probability of exit or crossing each time we
simulate a new value of the price which is below the boundary, and compare it
with a uniform random variable η. The idea behind doing so is that we are
looking for a random event whose probability of success equals the probability
of exit or crossing. Now, we know that for a uniform random variable
η ∼ U(0, 1), P(η < p) = p for all 0 ≤ p ≤ 1. Thus, the probability of success of
the event {η < p} is the same as the probability of exit. This helps us reduce
the error due to discretization in the application of the Euler scheme for the
numerical approximation.
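Putting the Euler step and the per-interval crossing check together gives a sketch of the corrected pricing loop. Note the assumptions: the constant-coefficient bridge probability exp(−2(B−x)(B−z)/(σ²Δ)) is used per step in place of the full g, h expression, and the horizon T = 1 and the path/step counts are our own choices, not the report's.

```python
import math, random

rng = random.Random(5)

def barrier_call_price(s0, K, B, mu, sigma, T, n_steps, mcn, r=0.0):
    """Knock-out call by Euler + Brownian-bridge crossing correction.
    Per step, the crossing probability of the continuous path between two
    simulated values x, z < B is taken as exp(-2 (B-x)(B-z) / (sigma^2 dt)),
    the constant-coefficient approximation of the Gauss-Markov formula."""
    dt = T / n_steps
    total = 0.0
    for _ in range(mcn):
        s, knocked = s0, False
        for _ in range(n_steps):
            s_next = s + mu * s * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if s_next >= B:                      # a discrete value crossed
                knocked = True
                break
            p_cross = math.exp(-2.0 * (B - s) * (B - s_next) / (sigma ** 2 * dt))
            if rng.random() <= p_cross:          # continuous path crossed in between
                knocked = True
                break
            s = s_next
        if not knocked:
            total += math.exp(-r * T) * max(s - K, 0.0)
    return total / mcn

# Parameters loosely matching the report's example (mu = 0.01, sigma = 1.0,
# s0 = 10.5, K = 11.0, B = 12.8); the tables' exact values are not reproduced
# here since the horizon and sample sizes differ.
price = barrier_call_price(10.5, 11.0, 12.8, 0.01, 1.0, 1.0, 200, 2000)
```

Since every surviving path ends below the barrier B = 12.8, the discounted payoff of this knock-out call can never exceed B − K = 1.8.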


Numerical Simulation
For the numerical simulation, we simulate the price according to the equation

dS_t = μ S_t dt + σ dW_t,    with    S_0 = s₀

where μ = 0.01, σ = 1.0, s₀ = 10.5 and strike K = 11.0. The Euler scheme reads

S_{(k+1)Δ} = S_{kΔ} + μ S_{kΔ} Δ + σ √Δ G_{k+1}

where t_k = kT/N for k ∈ {0, 1, ..., N−1} for a certain N, and we know that
W_{(k+1)Δ} − W_{kΔ} ∼ N(0, Δ) = √Δ N(0, 1).
The pay-off function in this case is chosen to be
Φ( S_T 1_{S_t < B, 0 ≤ t ≤ T} ) = max(S_T − K, 0).
Thus, we simulate the values of the asset price at the discrete points until a
value crosses the boundary B = 12.8. If a value does not cross the boundary, we
check with the exit probability. We stop the simulation once one of the
simulated values crosses the boundary, or once the continuous path between two
simulated values crosses the boundary.
Here is the simulation of the asset price.

Figure 12: The Euler scheme simulation of the asset price S_t (plot of
"process_path.dat" against time)

We simulate the above Euler scheme for

some Monte Carlo number of runs to get the price of the option. We price the
option as follows: we check whether the price of the asset crosses the boundary.
If we obtain a value from the Euler scheme which crosses the boundary, we
immediately set the option price datum of that particular simulation to 0. We
also set the price to 0 when we find that the continuous path between two
simulated values crosses the boundary. Each such Monte Carlo simulation adds to
the sample data for the price of the option, and we finally calculate the price
as

E( Φ( S_T 1_{S_t < B, 0 ≤ t ≤ T} ) ) = E( max(S_T − K, 0) 1_{S_t < B, 0 ≤ t ≤ T} )
  ≈ (1/M) Σ_{i=1}^{M} (S_T^i − K)⁺

taking only those values S_T^i whose entire simulated path stays below the
boundary value, in this case B, and whose final value S_T^i is greater than K.
Table 3: Pricing the Option - Euler Scheme

Monte-Carlo Number    N     Price       Statistical Error
10000                 8     0.000319    0.00028
100000                9     0.001       0.00012
1000000               10    0.0038      0.000082

Similarly, we also apply the fast algorithm of Taillefumier and Magnasco [1] to
simulate the price of the option. We follow the Doob integral representation to
simulate the final value of the price, and use the discrete construction for
Gauss-Markov processes to simulate all the intermediate values.
Now, after the application of the fast algorithm, if we do not find a single
value of the asset price crossing the boundary B, and the crossing probability
always falls short of the generated uniform random variable, we simply add
(S_T − K) to the sample data for our pay-off function, provided S_T > K.
Otherwise, the moment we find one of the values crossing the boundary, either in
the first round of simulation or in the second round of simulation on confined
intervals, we immediately add 0 to our sample data for the pay-off function.
The price of the option, represented as the expected value of the pay-off
function, is finally obtained as

E( Φ( S_T 1_{S_t < B, 0 ≤ t ≤ T} ) ) = E( max(S_T − K, 0) 1_{S_t < B, 0 ≤ t ≤ T} )
  ≈ (1/M) Σ_{i=1}^{M} (S_T^i − K)⁺

where M is the number of Monte Carlo simulations.

Thus, we see a financial application of the first-passage time in pricing a
European barrier option.

Table 4: Pricing the Option - Fast Algorithm

Monte-Carlo Number    N     Price       Statistical Error
10000                 8     0.000298    0.00032
100000                9     0.000989    0.00023
1000000               10    0.00372     0.000076
Conclusion

We have looked into the first-passage problem mainly pertaining to Gauss-Markov
processes, the Exact simulation algorithm being the exception. It is true that
we know what the first-passage probability density looks like in the case of an
Ornstein-Uhlenbeck process; however, the expression is quite cumbersome and
long. What we were most interested in here were general Gauss-Markov processes.
We may have simulated only Ornstein-Uhlenbeck-like processes, but our aim was to
give general results.
We see that, when applicable, the Exact simulation algorithm returns very
accurate results for the first-passage time. It merely takes the help of the
first-passage time of a Brownian motion over an affine boundary, which is very
simple to simulate since we know its density exactly. However, the algorithm's
restrictions limit its usage a great deal.

While working on this report, my knowledge of C programming improved a lot.
Also, my initial motivation for working on this report was mainly the financial
application; however, the more I worked on this topic, the more I started to
cherish it, and thus the section on the financial application finally found
itself almost at the end, reflecting my change of motivation. I found the
problem itself more interesting than its applications, which increased my
interest in first-passage time problems in general.

Acknowledgment

I would like to thank the entire TOSCA team at INRIA Sophia Antipolis,
especially Etienne Tanre, for his constant support and guidance at every step of
my work. Without his support, this report would not have been possible. I would
also like to extend my gratitude to Denis Talay for welcoming me into the TOSCA
group to write my Master's report, to Francois Delarue, our coordinator, and to
James Inglis and Camilo Andres Garcia Trillos for the numerous talks and
discussions which helped me in more ways than one. Finally, I would like to
thank Bruno Rubino of the University of L'Aquila, Italy, for giving me this
wonderful opportunity to study in the MATHMODS course.

References

[1] Thibaud Taillefumier, Marcelo O. Magnasco (2010). A Fast Algorithm for the
First-Passage Times of Gauss-Markov Processes with Hölder Continuous
Boundaries.
[2] Thibaud Taillefumier (2008). A Discrete Construction for Gauss-Markov
Processes.
[3] Alexandros Beskos, Omiros Papaspiliopoulos and Gareth O. Roberts (2006).
Retrospective Exact Simulation of Diffusion Sample Paths with Applications.
[4] Alexandros Beskos and Gareth O. Roberts (2004). Exact Simulation of
Diffusions.
[5] Kyoung-Sook Moon (2008). Efficient Monte-Carlo Algorithm for Pricing
Barrier Options.
[6] Peter W. Buchen. Pricing European Barrier Options.
[7] I. Karatzas and S. Shreve. Brownian Motion and Stochastic Calculus.
[8] Jim Pitman. Point Poisson Process.
[9] A C Primer.
[10] E. Gobet. Weak Approximation of Killed Diffusion Using Euler Scheme.
[11] Griselda Deelstra. Remarks on Boundary Crossing Results for Brownian
Motion.
[12] David Williams. Probability with Martingales.
[13] Pierre Patie. On First Passage Time Problems Motivated by Financial
Applications.