
The Basics of Financial Mathematics

Spring 2003
Richard F. Bass
Department of Mathematics
University of Connecticut
These notes are ©2003 by Richard Bass. They may be used for personal use or
class use, but not for commercial purposes. If you find any errors, I would appreciate
hearing from you: bass@math.uconn.edu
1. Introduction.
In this course we will study mathematical finance. Mathematical finance is not
about predicting the price of a stock. What it is about is figuring out the price of options
and derivatives.
The most familiar type of option is the option to buy a stock at a given price at
a given time. For example, suppose Microsoft is selling today at $40 per share.
A European call option is something I can buy that gives me the right to buy a share of
Microsoft at some future date. To make up an example, suppose I have an option that
allows me to buy a share of Microsoft for $50 in three months' time, but does not compel
me to do so. If Microsoft happens to be selling at $45 in three months' time, the option is
worthless. I would be silly to buy a share for $50 when I could call my broker and buy it
for $45. So I would choose not to exercise the option. On the other hand, if Microsoft is
selling for $60 three months from now, the option would be quite valuable. I could exercise
the option and buy a share for $50. I could then turn around and sell the share on the
open market for $60 and make a profit of $10 per share. Therefore this stock option I
possess has some value. There is some chance it is worthless and some chance that it will
lead me to a profit. The basic question is: how much is the option worth today?
The huge impetus in financial derivatives was the seminal paper of Black and Scholes
in 1973. Although many researchers had studied this question, Black and Scholes gave a
definitive answer, and a great deal of research has been done since. These are not just
academic questions; today the market in financial derivatives is larger than the market
in stock securities. In other words, more money is invested in options on stocks than in
stocks themselves.
Options have been around for a long time. The earliest ones were used by manufacturers
and food producers to hedge their risk. A farmer might agree to sell a bushel of
wheat at a fixed price six months from now rather than take a chance on the vagaries of
market prices. Similarly a steel refinery might want to lock in the price of iron ore at a
fixed price.
The sections of these notes can be grouped into five categories. The first is elementary
probability. Although someone who has had a course in undergraduate probability
will be familiar with some of this, we will talk about a number of topics that are not usually
covered in such a course: $\sigma$-fields, conditional expectations, martingales. The second
category is the binomial asset pricing model. This is just about the simplest model of a
stock that one can imagine, and this will provide a case where we can see most of the major
ideas of mathematical finance, but in a very simple setting. Then we will turn to advanced
probability, that is, ideas such as Brownian motion, stochastic integrals, stochastic differential
equations, Girsanov transformation. Although to do this rigorously requires measure
theory, we can still learn enough to understand and work with these concepts. We then
return to finance and work with the continuous model. We will derive the Black-Scholes
formula, see the Fundamental Theorem of Asset Pricing, work with equivalent martingale
measures, and the like. The fifth main category is term structure models, which means
models of interest rate behavior.
I found some unpublished notes of Steve Shreve extremely useful in preparing these
notes. I hope that he has turned them into a book and that this book is now available.
The stochastic calculus part of these notes is from my own book: Probabilistic Techniques
in Analysis, Springer, New York, 1995.
I would also like to thank Evarist Gine who pointed out a number of errors.
2. Review of elementary probability.
Let's begin by recalling some of the definitions and basic concepts of elementary
probability. We will only work with discrete models at first.

We start with an arbitrary set, called the probability space, which we will denote
by $\Omega$, the capital Greek letter "omega." We are given a class $\mathcal{F}$ of subsets of $\Omega$. These are
called events. We require $\mathcal{F}$ to be a $\sigma$-field.
Definition 2.1. A collection $\mathcal{F}$ of subsets of $\Omega$ is called a $\sigma$-field if
(1) $\emptyset \in \mathcal{F}$,
(2) $\Omega \in \mathcal{F}$,
(3) $A \in \mathcal{F}$ implies $A^c \in \mathcal{F}$, and
(4) $A_1, A_2, \ldots \in \mathcal{F}$ implies both $\cup_{i=1}^{\infty} A_i \in \mathcal{F}$ and $\cap_{i=1}^{\infty} A_i \in \mathcal{F}$.
Here $A^c = \{\omega : \omega \notin A\}$ denotes the complement of $A$. $\emptyset$ denotes the empty set, that
is, the set with no elements. We will use without special comment the usual notations of
$\cup$ (union), $\cap$ (intersection), $\subset$ (contained in), $\in$ (is an element of).
Typically, in an elementary probability course, $\mathcal{F}$ will consist of all subsets of
$\Omega$, but we will later need to distinguish between various $\sigma$-fields. Here is an example.
Suppose one tosses a coin two times and lets $\Omega$ denote all possible outcomes. So
$\Omega = \{HH, HT, TH, TT\}$. A typical $\sigma$-field $\mathcal{F}$ would be the collection of all subsets of $\Omega$.
In this case it is trivial to show that $\mathcal{F}$ is a $\sigma$-field, since every subset is in $\mathcal{F}$. But if
we let $\mathcal{G} = \{\emptyset, \Omega, \{HH, HT\}, \{TH, TT\}\}$, then $\mathcal{G}$ is also a $\sigma$-field. One has to check the
definition, but to illustrate, the event $\{HH, HT\}$ is in $\mathcal{G}$, so we require the complement of
that set to be in $\mathcal{G}$ as well. But the complement is $\{TH, TT\}$ and that event is indeed in
$\mathcal{G}$.
One point of view which we will explore much more fully later on is that the $\sigma$-field
tells you what events you "know." In this example, $\mathcal{F}$ is the $\sigma$-field where you "know"
everything, while $\mathcal{G}$ is the $\sigma$-field where you "know" only the result of the first toss but not
the second. We won't try to be precise here, but to try to add to the intuition, suppose
one knows whether an event in $\mathcal{F}$ has happened or not for a particular outcome. We
would then know which of the events $\{HH\}$, $\{HT\}$, $\{TH\}$, or $\{TT\}$ has happened and so
would know what the two tosses of the coin showed. On the other hand, if we know which
events in $\mathcal{G}$ happened, we would only know whether the event $\{HH, HT\}$ happened, which
means we would know that the first toss was a heads, or we would know whether the event
$\{TH, TT\}$ happened, in which case we would know that the first toss was a tails. But
there is no way to tell what happened on the second toss from knowing which events in $\mathcal{G}$
happened. Much more on this later.
The third basic ingredient is a probability.
Definition 2.2. A function $P$ on $\mathcal{F}$ is a probability if it satisfies
(1) if $A \in \mathcal{F}$, then $0 \le P(A) \le 1$,
(2) $P(\Omega) = 1$,
(3) $P(\emptyset) = 0$, and
(4) if $A_1, A_2, \ldots \in \mathcal{F}$ are pairwise disjoint, then $P(\cup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i)$.
A collection of sets $A_i$ is pairwise disjoint if $A_i \cap A_j = \emptyset$ unless $i = j$.
There are a number of conclusions one can draw from this definition. As one
example, if $A \subset B$, then $P(A) \le P(B)$ and $P(A^c) = 1 - P(A)$. See Note 1 at the end of
this section for a proof.
Someone who has had measure theory will realize that a $\sigma$-field is the same thing
as a $\sigma$-algebra and a probability is a measure of total mass one.
A random variable (abbreviated r.v.) is a function $X$ from $\Omega$ to $\mathbb{R}$, the reals. To
be more precise, to be a r.v. $X$ must also be measurable, which means that $\{\omega : X(\omega) \ge a\} \in \mathcal{F}$ for all reals $a$.
The notion of measurability has a simple definition but is a bit subtle. If we take
the point of view that we know all the events in $\mathcal{G}$, then if $Y$ is $\mathcal{G}$-measurable, then we
know $Y$. Phrased another way, suppose we know whether or not the event has occurred
for each event in $\mathcal{G}$. Then if $Y$ is $\mathcal{G}$-measurable, we can compute the value of $Y$.
Here is an example. In the example above where we tossed a coin two times, let $X$
be the number of heads in the two tosses. Then $X$ is $\mathcal{F}$ measurable but not $\mathcal{G}$ measurable.
To see this, let us consider $A_a = \{\omega : X(\omega) \ge a\}$. This event will equal

$$\begin{cases} \Omega & \text{if } a \le 0;\\ \{HH, HT, TH\} & \text{if } 0 < a \le 1;\\ \{HH\} & \text{if } 1 < a \le 2;\\ \emptyset & \text{if } 2 < a.\end{cases}$$

For example, if $a = \frac{3}{2}$, then the event where the number of heads is $\frac{3}{2}$ or greater is the
event where we had two heads, namely, $\{HH\}$. Now observe that for each $a$ the event $A_a$
is in $\mathcal{F}$ because $\mathcal{F}$ contains all subsets of $\Omega$. Therefore $X$ is measurable with respect to $\mathcal{F}$.
However it is not true that $A_a$ is in $\mathcal{G}$ for every value of $a$; take $a = \frac{3}{2}$ as just one example;
the subset $\{HH\}$ is not in $\mathcal{G}$. So $X$ is not measurable with respect to the $\sigma$-field $\mathcal{G}$.
A discrete r.v. is one where $P(\omega : X(\omega) = a) = 0$ for all but countably many $a$'s,
say $a_1, a_2, \ldots$, and $\sum_i P(\omega : X(\omega) = a_i) = 1$. In defining sets one usually omits the $\omega$;
thus $(X = x)$ means the same as $\{\omega : X(\omega) = x\}$.
In the discrete case, to check measurability with respect to a $\sigma$-field $\mathcal{F}$, it is enough
that $(X = a) \in \mathcal{F}$ for all reals $a$. The reason for this is that if $x_1, x_2, \ldots$ are the values of
$x$ for which $P(X = x) \ne 0$, then we can write $(X \ge a) = \cup_{x_i \ge a} (X = x_i)$ and we have a
countable union. So if $(X = x_i) \in \mathcal{F}$, then $(X \ge a) \in \mathcal{F}$.
Given a discrete r.v. $X$, the expectation or mean is defined by

$$EX = \sum_x x P(X = x)$$

provided the sum converges. If $X$ only takes finitely many values, then this is a finite sum
and of course it will converge. This is the situation that we will consider for quite some
time. However, if $X$ can take an infinite number of values (but countable), convergence
needs to be checked. For example, if $P(X = 2^n) = 2^{-n}$ for $n = 1, 2, \ldots$, then $EX = \sum_{n=1}^{\infty} 2^n \cdot 2^{-n} = \infty$.
There is an alternate definition of expectation which is equivalent in the discrete
setting. Set

$$EX = \sum_{\omega \in \Omega} X(\omega) P(\{\omega\}).$$

To see that this is the same, look at Note 2 at the end of the section. The advantage of the
second definition is that some properties of expectation, such as $E(X + Y) = EX + EY$,
are immediate, while with the first definition they require quite a bit of proof.
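To make the equivalence concrete, here is a small sketch in Python (our addition, not part of the notes) that computes $EX$ both ways on the two-coin-toss space, with $X$ the number of heads; exact fractions avoid any rounding.

```python
# A minimal sketch (not from the notes): the two definitions of expectation
# agree on the two-coin-toss space, with X = number of heads.
from itertools import product
from fractions import Fraction

omega = list(product("HT", repeat=2))          # {HH, HT, TH, TT}
prob = {w: Fraction(1, 4) for w in omega}      # fair, independent tosses
X = {w: w.count("H") for w in omega}           # number of heads

# First definition: sum over values x of x * P(X = x).
values = set(X.values())
EX_by_values = sum(x * sum(prob[w] for w in omega if X[w] == x) for x in values)

# Second definition: sum over outcomes of X(omega) * P({omega}).
EX_by_outcomes = sum(X[w] * prob[w] for w in omega)

assert EX_by_values == EX_by_outcomes == 1     # both give EX = 1
```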
We say two events $A$ and $B$ are independent if $P(A \cap B) = P(A)P(B)$. Two random
variables $X$ and $Y$ are independent if $P(X \in A, Y \in B) = P(X \in A)P(Y \in B)$ for all $A$
and $B$ that are subsets of the reals. The comma in the expression $P(X \in A, Y \in B)$ means
"and." Thus

$$P(X \in A, Y \in B) = P((X \in A) \cap (Y \in B)).$$
The extension of the definition of independence to the case of more than two events or
random variables is not surprising: $A_1, \ldots, A_n$ are independent if

$$P(A_{i_1} \cap \cdots \cap A_{i_j}) = P(A_{i_1}) \cdots P(A_{i_j})$$

whenever $\{i_1, \ldots, i_j\}$ is a subset of $\{1, \ldots, n\}$.
A common misconception is that an event is independent of itself. If $A$ is an event
that is independent of itself, then

$$P(A) = P(A \cap A) = P(A)P(A) = (P(A))^2.$$

The only finite solutions to the equation $x = x^2$ are $x = 0$ and $x = 1$, so an event is
independent of itself only if it has probability 0 or 1.
Two $\sigma$-fields $\mathcal{F}$ and $\mathcal{G}$ are independent if $A$ and $B$ are independent whenever $A \in \mathcal{F}$
and $B \in \mathcal{G}$. A r.v. $X$ and a $\sigma$-field $\mathcal{G}$ are independent if $P((X \in A) \cap B) = P(X \in A)P(B)$
whenever $A$ is a subset of the reals and $B \in \mathcal{G}$.
As an example, suppose we toss a coin two times and we define the $\sigma$-fields
$\mathcal{G}_1 = \{\emptyset, \Omega, \{HH, HT\}, \{TH, TT\}\}$ and $\mathcal{G}_2 = \{\emptyset, \Omega, \{HH, TH\}, \{HT, TT\}\}$. Then $\mathcal{G}_1$ and $\mathcal{G}_2$ are
independent if $P(HH) = P(HT) = P(TH) = P(TT) = \frac{1}{4}$. (Here we are writing $P(HH)$
when a more accurate way would be to write $P(\{HH\})$.) An easy way to understand this
is that if we look at an event in $\mathcal{G}_1$ that is not $\emptyset$ or $\Omega$, then that is the event that the first
toss is a heads or it is the event that the first toss is a tails. Similarly, a set other than
$\emptyset$ or $\Omega$ in $\mathcal{G}_2$ will be the event that the second toss is a heads or that the second toss is a
tails.
If two r.v.'s $X$ and $Y$ are independent, we have the multiplication theorem, which
says that $E(XY) = (EX)(EY)$ provided all the expectations are finite. See Note 3 for a
proof.
Suppose $X_1, \ldots, X_n$ are $n$ independent r.v.'s, such that for each one $P(X_i = 1) = p$,
$P(X_i = 0) = 1 - p$, where $p \in [0, 1]$. The random variable $S_n = \sum_{i=1}^{n} X_i$ is called a
binomial r.v., and represents, for example, the number of successes in $n$ trials, where the
probability of a success is $p$. An important result in probability is that

$$P(S_n = k) = \frac{n!}{k!(n-k)!} p^k (1-p)^{n-k}.$$
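As a quick illustration (our addition, with made-up parameters $n = 5$ and $p = 0.3$), the formula can be checked against brute-force enumeration of all $2^n$ outcomes:

```python
# An illustrative check (not from the notes): the binomial formula
# P(S_n = k) = n!/(k!(n-k)!) p^k (1-p)^(n-k) agrees with brute-force
# enumeration of the n independent 0/1 trials.
from itertools import product
from math import comb

n, p = 5, 0.3
for k in range(n + 1):
    formula = comb(n, k) * p**k * (1 - p)**(n - k)
    brute = sum(
        p**sum(xs) * (1 - p)**(n - sum(xs))
        for xs in product([0, 1], repeat=n)
        if sum(xs) == k
    )
    assert abs(formula - brute) < 1e-12
```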
The variance of a random variable is

$$\mathrm{Var}\, X = E[(X - EX)^2].$$

This is also equal to

$$E[X^2] - (EX)^2.$$

It is an easy consequence of the multiplication theorem that if $X$ and $Y$ are independent,

$$\mathrm{Var}\,(X + Y) = \mathrm{Var}\, X + \mathrm{Var}\, Y.$$

The expression $E[X^2]$ is sometimes called the second moment of $X$.
We close this section with a definition of conditional probability. The probability
of $A$ given $B$, written $P(A \mid B)$, is defined by

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)},$$

provided $P(B) \ne 0$. The conditional expectation of $X$ given $B$ is defined to be

$$E[X \mid B] = \frac{E[X; B]}{P(B)},$$

provided $P(B) \ne 0$. The notation $E[X; B]$ means $E[X 1_B]$, where $1_B(\omega)$ is 1 if $\omega \in B$ and
0 otherwise. Another way of writing $E[X; B]$ is

$$E[X; B] = \sum_{\omega \in B} X(\omega) P(\{\omega\}).$$

(We will use the notation $E[X; B]$ frequently.)
Note 1. Suppose we have two disjoint sets $C$ and $D$. Let $A_1 = C$, $A_2 = D$, and $A_i = \emptyset$ for
$i \ge 3$. Then the $A_i$ are pairwise disjoint and

$$P(C \cup D) = P(\cup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i) = P(C) + P(D) \tag{2.1}$$

by Definition 2.2(3) and (4). Therefore Definition 2.2(4) holds when there are only two sets
instead of infinitely many, and a similar argument shows the same is true when there are an
arbitrary (but finite) number of sets.

Now suppose $A \subset B$. Let $C = A$ and $D = B - A$, where $B - A$ is defined to be
$B \cap A^c$ (this is frequently written $B \setminus A$ as well). Then $C$ and $D$ are disjoint, and by (2.1)

$$P(B) = P(C \cup D) = P(C) + P(D) \ge P(C) = P(A).$$

The other equality we mentioned is proved by letting $C = A$ and $D = A^c$. Then $C$ and
$D$ are disjoint, and

$$1 = P(\Omega) = P(C \cup D) = P(C) + P(D) = P(A) + P(A^c).$$

Solving for $P(A^c)$, we have

$$P(A^c) = 1 - P(A).$$
Note 2. Let us show the two definitions of expectation are the same (in the discrete case).
Starting with the first definition we have

$$EX = \sum_x x P(X = x) = \sum_x x \sum_{\{\omega : X(\omega) = x\}} P(\{\omega\}) = \sum_x \sum_{\{\omega : X(\omega) = x\}} X(\omega) P(\{\omega\}) = \sum_{\omega \in \Omega} X(\omega) P(\{\omega\}),$$

and we end up with the second definition.
Note 3. Suppose $X$ can take the values $x_1, x_2, \ldots$ and $Y$ can take the values $y_1, y_2, \ldots$.
Let $A_i = \{\omega : X(\omega) = x_i\}$ and $B_j = \{\omega : Y(\omega) = y_j\}$. Then

$$X = \sum_i x_i 1_{A_i}, \qquad Y = \sum_j y_j 1_{B_j},$$

and so

$$XY = \sum_i \sum_j x_i y_j 1_{A_i} 1_{B_j}.$$

Since $1_{A_i} 1_{B_j} = 1_{A_i \cap B_j}$, it follows that

$$E[XY] = \sum_i \sum_j x_i y_j P(A_i \cap B_j),$$

assuming the double sum converges. Since $X$ and $Y$ are independent, $A_i = (X = x_i)$ is
independent of $B_j = (Y = y_j)$ and so

$$E[XY] = \sum_i \sum_j x_i y_j P(A_i) P(B_j) = \sum_i x_i P(A_i) \Big( \sum_j y_j P(B_j) \Big) = \sum_i x_i P(A_i)\, EY = (EX)(EY).$$
3. Conditional expectation.
Suppose we have 200 men and 100 women, 70 of the men are smokers, and 50 of
the women are smokers. If a person is chosen at random, then the conditional probability
that the person is a smoker given that it is a man is 70 divided by 200, or 35%, while the
conditional probability the person is a smoker given that it is a woman is 50 divided by
100, or 50%. We will want to be able to encompass both facts in a single entity.

The way to do that is to make conditional probability a random variable rather
than a number. To reiterate, we will make conditional probabilities random. Let $M$, $W$ be
the events that the person is a man, woman, respectively, and $S$, $S^c$ the events smoker and
nonsmoker, respectively. We have

$$P(S \mid M) = .35, \qquad P(S \mid W) = .50.$$
We introduce the random variable

$$(.35) 1_M + (.50) 1_W$$

and use that for our conditional probability. So on the set $M$ its value is .35 and on the
set $W$ its value is .50.

We need to give this random variable a name, so what we do is let $\mathcal{G}$ be the $\sigma$-field
consisting of $\{\emptyset, \Omega, M, W\}$ and denote this random variable $P(S \mid \mathcal{G})$. Thus we are going
to talk about the conditional probability of an event given a $\sigma$-field.

What is the precise definition?
Definition 3.1. Suppose there exist finitely (or countably) many sets $B_1, B_2, \ldots$, all having
positive probability, such that they are pairwise disjoint, $\Omega$ is equal to their union, and
$\mathcal{G}$ is the $\sigma$-field one obtains by taking all finite or countable unions of the $B_i$. Then the
conditional probability of $A$ given $\mathcal{G}$ is

$$P(A \mid \mathcal{G}) = \sum_i \frac{P(A \cap B_i)}{P(B_i)} 1_{B_i}(\omega).$$

In short, on the set $B_i$ the conditional probability is equal to $P(A \mid B_i)$.
Not every $\sigma$-field can be so represented, so this definition will need to be extended
when we get to continuous models. $\sigma$-fields that can be represented as in Definition 3.1 are
called finitely (or countably) generated and are said to be generated by the sets $B_1, B_2, \ldots$.
Let's look at another example. Suppose $\Omega$ consists of the possible results when we
toss a coin three times: HHH, HHT, etc. Let $\mathcal{F}_3$ denote all subsets of $\Omega$. Let $\mathcal{F}_1$ consist of
the sets $\emptyset$, $\Omega$, $\{HHH, HHT, HTH, HTT\}$, and $\{THH, THT, TTH, TTT\}$. So $\mathcal{F}_1$ consists
of those events that can be determined by knowing the result of the first toss. We want to
let $\mathcal{F}_2$ denote those events that can be determined by knowing the first two tosses. This will
include the sets $\emptyset$, $\Omega$, $\{HHH, HHT\}$, $\{HTH, HTT\}$, $\{THH, THT\}$, $\{TTH, TTT\}$. This is
not enough to make $\mathcal{F}_2$ a $\sigma$-field, so we add to $\mathcal{F}_2$ all sets that can be obtained by taking
unions of these sets.
Suppose we tossed the coin independently and suppose that it was fair. Let us
calculate $P(A \mid \mathcal{F}_1)$, $P(A \mid \mathcal{F}_2)$, and $P(A \mid \mathcal{F}_3)$ when $A$ is the event $\{HHH\}$. First
the conditional probability given $\mathcal{F}_1$. Let $C_1 = \{HHH, HHT, HTH, HTT\}$ and $C_2 = \{THH, THT, TTH, TTT\}$. On the set $C_1$ the conditional probability is $P(A \cap C_1)/P(C_1) = P(HHH)/P(C_1) = \frac{1}{8}/\frac{1}{2} = \frac{1}{4}$. On the set $C_2$ the conditional probability is $P(A \cap C_2)/P(C_2) = P(\emptyset)/P(C_2) = 0$. Therefore $P(A \mid \mathcal{F}_1) = (.25) 1_{C_1}$. This is plausible; the probability of
getting three heads given the first toss is $\frac{1}{4}$ if the first toss is a heads and 0 otherwise.
Next let us calculate $P(A \mid \mathcal{F}_2)$. Let $D_1 = \{HHH, HHT\}$, $D_2 = \{HTH, HTT\}$,
$D_3 = \{THH, THT\}$, $D_4 = \{TTH, TTT\}$. So $\mathcal{F}_2$ is the $\sigma$-field consisting of all possible unions
of some of the $D_i$'s. $P(A \mid D_1) = P(HHH)/P(D_1) = \frac{1}{8}/\frac{1}{4} = \frac{1}{2}$. Also, as above, $P(A \mid D_i) = 0$ for $i = 2, 3, 4$. So $P(A \mid \mathcal{F}_2) = (.50) 1_{D_1}$. This is again plausible; the probability
of getting three heads given the first two tosses is $\frac{1}{2}$ if the first two tosses were heads and
0 otherwise.
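The same computation can be done mechanically. The following sketch (our addition) reproduces $P(A \mid \mathcal{F}_1)$ and $P(A \mid \mathcal{F}_2)$ by enumerating the eight outcomes; the function name cond_prob is ours.

```python
# A sketch (not from the notes) reproducing the three-toss computation:
# P(A | F_1) and P(A | F_2) for A = {HHH}, fair independent tosses.
from itertools import product

omega = ["".join(t) for t in product("HT", repeat=3)]
P = {w: 1 / 8 for w in omega}
A = {"HHH"}

def cond_prob(P, A, partition):
    # On each block B of the partition the value is P(A & B)/P(B).
    return {
        w: sum(P[v] for v in B if v in A) / sum(P[v] for v in B)
        for B in partition for w in B
    }

F1_blocks = [{w for w in omega if w[0] == c} for c in "HT"]
F2_blocks = [{w for w in omega if w[:2] == s} for s in ("HH", "HT", "TH", "TT")]

Y1 = cond_prob(P, A, F1_blocks)   # .25 on first-toss-H outcomes, 0 otherwise
Y2 = cond_prob(P, A, F2_blocks)   # .50 on HH* outcomes, 0 otherwise
assert Y1["HHT"] == 0.25 and Y1["THH"] == 0 and Y2["HHT"] == 0.5
```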
What about conditional expectation? Recall $E[X; B_i] = E[X 1_{B_i}]$ and also that
$E[1_B] = 1 \cdot P(1_B = 1) + 0 \cdot P(1_B = 0) = P(B)$. Given a random variable $X$, we define

$$E[X \mid \mathcal{G}] = \sum_i \frac{E[X; B_i]}{P(B_i)} 1_{B_i}.$$

This is the obvious definition, and it agrees with what we had before because $E[1_A \mid \mathcal{G}]$
should be equal to $P(A \mid \mathcal{G})$.
We now turn to some properties of conditional expectation. Some of the following
propositions may seem a bit technical. In fact, they are! However, these properties are
crucial to what follows and there is no choice but to master them.
Proposition 3.2. $E[X \mid \mathcal{G}]$ is $\mathcal{G}$ measurable, that is, if $Y = E[X \mid \mathcal{G}]$, then $(Y > a)$ is a
set in $\mathcal{G}$ for each real $a$.

Proof. By the definition,

$$Y = E[X \mid \mathcal{G}] = \sum_i \frac{E[X; B_i]}{P(B_i)} 1_{B_i} = \sum_i b_i 1_{B_i}$$

if we set $b_i = E[X; B_i]/P(B_i)$. The set $(Y \ge a)$ is a union of some of the $B_i$, namely, those
$B_i$ for which $b_i \ge a$. But the union of any collection of the $B_i$ is in $\mathcal{G}$.
An example might help. Suppose

$$Y = 2 \cdot 1_{B_1} + 3 \cdot 1_{B_2} + 6 \cdot 1_{B_3} + 4 \cdot 1_{B_4}$$

and $a = 3.5$. Then $(Y \ge a) = B_3 \cup B_4$, which is in $\mathcal{G}$.
Proposition 3.3. If $C \in \mathcal{G}$ and $Y = E[X \mid \mathcal{G}]$, then $E[Y; C] = E[X; C]$.

Proof. Since $Y = \sum_i \frac{E[X; B_i]}{P(B_i)} 1_{B_i}$ and the $B_i$ are disjoint, then

$$E[Y; B_j] = \frac{E[X; B_j]}{P(B_j)} E 1_{B_j} = E[X; B_j].$$

Now if $C = B_{j_1} \cup \cdots \cup B_{j_n}$, summing the above over the $j_k$ gives $E[Y; C] = E[X; C]$.
Let us look at the above example for this proposition, and let us do the case where
$C = B_2$. Note $1_{B_2} 1_{B_2} = 1_{B_2}$ because the product is $1 \cdot 1 = 1$ if $\omega$ is in $B_2$ and 0 otherwise.
On the other hand, it is not possible for an $\omega$ to be in more than one of the $B_i$, so
$1_{B_2} 1_{B_i} = 0$ if $i \ne 2$. Multiplying $Y$ in the above example by $1_{B_2}$, we see that

$$E[Y; C] = E[Y; B_2] = E[Y 1_{B_2}] = E[3 \cdot 1_{B_2}] = 3 E[1_{B_2}] = 3 P(B_2).$$

However the number 3 is not just any number; it is $E[X; B_2]/P(B_2)$. So

$$3 P(B_2) = \frac{E[X; B_2]}{P(B_2)} P(B_2) = E[X; B_2] = E[X; C],$$

just as we wanted. If $C = B_2 \cup B_4$, for example, we then write

$$E[X; C] = E[X 1_C] = E[X(1_{B_2} + 1_{B_4})] = E[X 1_{B_2}] + E[X 1_{B_4}] = E[X; B_2] + E[X; B_4].$$

By the first part, this equals $E[Y; B_2] + E[Y; B_4]$, and we undo the above string of equalities
but with $Y$ instead of $X$ to see that this is $E[Y; C]$.
If a r.v. $Y$ is $\mathcal{G}$ measurable, then for any $a$ we have $(Y = a) \in \mathcal{G}$, which means that
$(Y = a)$ is the union of one or more of the $B_i$. Since the $B_i$ are disjoint, it follows that $Y$
must be constant on each $B_i$.
Again let us look at an example. Suppose $Z$ takes only the values 1, 3, 4, 7. Let
$D_1 = (Z = 1)$, $D_2 = (Z = 3)$, $D_3 = (Z = 4)$, $D_4 = (Z = 7)$. Note that we can write

$$Z = 1 \cdot 1_{D_1} + 3 \cdot 1_{D_2} + 4 \cdot 1_{D_3} + 7 \cdot 1_{D_4}.$$

To see this, if $\omega \in D_2$, for example, the right hand side will be $0 + 3 \cdot 1 + 0 + 0$, which agrees
with $Z(\omega)$. Now if $Z$ is $\mathcal{G}$ measurable, then $(Z \ge a) \in \mathcal{G}$ for each $a$. Take $a = 7$, and we
see $D_4 \in \mathcal{G}$. Take $a = 4$ and we see $D_3 \cup D_4 \in \mathcal{G}$. Taking $a = 3$ shows $D_2 \cup D_3 \cup D_4 \in \mathcal{G}$.
Now $D_3 = (D_3 \cup D_4) \cap D_4^c$, so since $\mathcal{G}$ is a $\sigma$-field, $D_3 \in \mathcal{G}$. Similarly $D_2, D_1 \in \mathcal{G}$. Because
sets in $\mathcal{G}$ are unions of the $B_i$'s, we must have $Z$ constant on the $B_i$'s. For example, if it
so happened that $D_1 = B_1$, $D_2 = B_2 \cup B_4$, $D_3 = B_3 \cup B_6 \cup B_7$, and $D_4 = B_5$, then

$$Z = 1 \cdot 1_{B_1} + 3 \cdot 1_{B_2} + 4 \cdot 1_{B_3} + 3 \cdot 1_{B_4} + 7 \cdot 1_{B_5} + 4 \cdot 1_{B_6} + 4 \cdot 1_{B_7}.$$
We still restrict ourselves to the discrete case. In this context, the properties given
in Propositions 3.2 and 3.3 uniquely determine $E[X \mid \mathcal{G}]$.

Proposition 3.4. Suppose $Z$ is $\mathcal{G}$ measurable and $E[Z; C] = E[X; C]$ whenever $C \in \mathcal{G}$.
Then $Z = E[X \mid \mathcal{G}]$.
Proof. Since $Z$ is $\mathcal{G}$ measurable, then $Z$ must be constant on each $B_i$. Let the value of $Z$
on $B_i$ be $z_i$. So $Z = \sum_i z_i 1_{B_i}$. Then

$$z_i P(B_i) = E[Z; B_i] = E[X; B_i],$$

or $z_i = E[X; B_i]/P(B_i)$ as required.
The following propositions contain the main facts about this new definition of conditional
expectation that we will need.

Proposition 3.5. (1) If $X_1 \ge X_2$, then $E[X_1 \mid \mathcal{G}] \ge E[X_2 \mid \mathcal{G}]$.
(2) $E[aX_1 + bX_2 \mid \mathcal{G}] = aE[X_1 \mid \mathcal{G}] + bE[X_2 \mid \mathcal{G}]$.
(3) If $X$ is $\mathcal{G}$ measurable, then $E[X \mid \mathcal{G}] = X$.
(4) $E[E[X \mid \mathcal{G}]] = EX$.
(5) If $X$ is independent of $\mathcal{G}$, then $E[X \mid \mathcal{G}] = EX$.
We will prove Proposition 3.5 in Note 1 at the end of the section. At this point it
is more fruitful to understand what the proposition says.

We will see in Proposition 3.8 below that we may think of $E[X \mid \mathcal{G}]$ as the best
prediction of $X$ given $\mathcal{G}$. Accepting this for the moment, we can give an interpretation of
(1)-(5). (1) says that if $X_1$ is larger than $X_2$, then the predicted value of $X_1$ should be
larger than the predicted value of $X_2$. (2) says that the predicted value of $X_1 + X_2$ should
be the sum of the predicted values. (3) says that if we know $\mathcal{G}$ and $X$ is $\mathcal{G}$ measurable,
then we know $X$ and our best prediction of $X$ is $X$ itself. (4) says that the average of the
predicted value of $X$ should be the average value of $X$. (5) says that if knowing $\mathcal{G}$ gives us
no additional information on $X$, then the best prediction for the value of $X$ is just $EX$.
Proposition 3.6. If $Z$ is $\mathcal{G}$ measurable, then $E[XZ \mid \mathcal{G}] = ZE[X \mid \mathcal{G}]$.

We again defer the proof, this time to Note 2.

Proposition 3.6 says that as far as conditional expectations with respect to a $\sigma$-field
$\mathcal{G}$ go, $\mathcal{G}$-measurable random variables act like constants: they can be taken inside or
outside the conditional expectation at will.
Proposition 3.7. If $\mathcal{H} \subset \mathcal{G} \subset \mathcal{F}$, then

$$E[E[X \mid \mathcal{H}] \mid \mathcal{G}] = E[X \mid \mathcal{H}] = E[E[X \mid \mathcal{G}] \mid \mathcal{H}].$$

Proof. $E[X \mid \mathcal{H}]$ is $\mathcal{H}$ measurable, hence $\mathcal{G}$ measurable, since $\mathcal{H} \subset \mathcal{G}$. The left hand
equality now follows by Proposition 3.5(3). To get the right hand equality, let $W$ be the
right hand expression. It is $\mathcal{H}$ measurable, and if $C \in \mathcal{H} \subset \mathcal{G}$, then

$$E[W; C] = E[E[X \mid \mathcal{G}]; C] = E[X; C]$$

as required.
In words, if we are predicting a prediction of X given limited information, this is
the same as a single prediction given the least amount of information.
Let us verify that conditional expectation may be viewed as the best predictor of
a random variable given a $\sigma$-field. If $X$ is a r.v., a predictor $Z$ is just another random
variable, and the goodness of the prediction will be measured by $E[(X - Z)^2]$, which is
known as the mean square error.
Proposition 3.8. If $X$ is a r.v., the best predictor among the collection of $\mathcal{G}$-measurable
random variables is $Y = E[X \mid \mathcal{G}]$.

Proof. Let $Z$ be any $\mathcal{G}$-measurable random variable. We compute, using Proposition
3.5(3) and Proposition 3.6,

$$\begin{aligned}
E[(X - Z)^2 \mid \mathcal{G}] &= E[X^2 \mid \mathcal{G}] - 2E[XZ \mid \mathcal{G}] + E[Z^2 \mid \mathcal{G}] \\
&= E[X^2 \mid \mathcal{G}] - 2ZE[X \mid \mathcal{G}] + Z^2 \\
&= E[X^2 \mid \mathcal{G}] - 2ZY + Z^2 \\
&= E[X^2 \mid \mathcal{G}] - Y^2 + (Y - Z)^2 \\
&= E[X^2 \mid \mathcal{G}] - 2Y E[X \mid \mathcal{G}] + Y^2 + (Y - Z)^2 \\
&= E[X^2 \mid \mathcal{G}] - 2E[XY \mid \mathcal{G}] + E[Y^2 \mid \mathcal{G}] + (Y - Z)^2 \\
&= E[(X - Y)^2 \mid \mathcal{G}] + (Y - Z)^2.
\end{aligned}$$

We also used the fact that $Y$ is $\mathcal{G}$ measurable. Taking expectations and using Proposition
3.5(4),

$$E[(X - Z)^2] = E[(X - Y)^2] + E[(Y - Z)^2].$$

The right hand side is bigger than or equal to $E[(X - Y)^2]$ because $(Y - Z)^2 \ge 0$. So the
error in predicting $X$ by $Z$ is larger than the error in predicting $X$ by $Y$, and will be equal
if and only if $Z = Y$. So $Y$ is the best predictor.
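A numerical sanity check (ours, not from the notes): on the three-toss space with $\mathcal{G} = \mathcal{F}_1$ and $X$ the number of heads, no $\mathcal{G}$-measurable predictor on a grid beats $Y = E[X \mid \mathcal{G}]$ in mean square error.

```python
# A sanity check (not from the notes) of Proposition 3.8 on the three-toss
# space: among G-measurable predictors, Y = E[X | G] minimizes E[(X - Z)^2].
from itertools import product

omega = ["".join(t) for t in product("HT", repeat=3)]
P = {w: 1 / 8 for w in omega}
X = {w: w.count("H") for w in omega}
blocks = [{w for w in omega if w[0] == c} for c in "HT"]   # partition for F_1

def EXB(B):  # E[X; B] / P(B)
    return sum(X[w] * P[w] for w in B) / sum(P[w] for w in B)

Y = {w: EXB(B) for B in blocks for w in B}                 # E[X | F_1]: 2.0 on H, 1.0 on T

def mse(Z):
    return sum((X[w] - Z[w]) ** 2 * P[w] for w in omega)

# Any F_1-measurable Z is constant on each block; scan a grid of such Z's.
best = min(
    mse({w: (a if w[0] == "H" else b) for w in omega})
    for a in [i / 10 for i in range(31)]
    for b in [i / 10 for i in range(31)]
)
assert abs(best - mse(Y)) < 1e-12   # the grid contains (2.0, 1.0); nothing beats it
```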
There is one more interpretation of conditional expectation that may be useful. The
collection of all random variables is a linear space, and the collection of all $\mathcal{G}$-measurable
random variables is clearly a subspace. Given $X$, the conditional expectation $Y = E[X \mid \mathcal{G}]$
is equal to the projection of $X$ onto the subspace of $\mathcal{G}$-measurable random variables. To
see this, we write $X = Y + (X - Y)$, and what we have to check is that the inner product
of $Y$ and $X - Y$ is 0, that is, $Y$ and $X - Y$ are orthogonal. In this context, the inner
product of $X_1$ and $X_2$ is defined to be $E[X_1 X_2]$, so we must show $E[Y(X - Y)] = 0$. Note

$$E[Y(X - Y) \mid \mathcal{G}] = Y E[X - Y \mid \mathcal{G}] = Y(E[X \mid \mathcal{G}] - Y) = Y(Y - Y) = 0.$$

Taking expectations,

$$E[Y(X - Y)] = E[E[Y(X - Y) \mid \mathcal{G}]] = 0,$$

just as we wished.
If $Y$ is a discrete random variable, that is, it takes only countably many values
$y_1, y_2, \ldots$, we let $B_i = (Y = y_i)$. These will be disjoint sets whose union is $\Omega$. If $\sigma(Y)$
is the collection of all unions of the $B_i$, then $\sigma(Y)$ is a $\sigma$-field, and is called the $\sigma$-field
generated by $Y$. It is easy to see that this is the smallest $\sigma$-field with respect to which $Y$
is measurable. We write $E[X \mid Y]$ for $E[X \mid \sigma(Y)]$.
Note 1. We prove Proposition 3.5. (1) and (2) are immediate from the definition. To prove
(3), note that if $Z = X$, then $Z$ is $\mathcal{G}$ measurable and $E[X; C] = E[Z; C]$ for any $C \in \mathcal{G}$; this
is trivial. By Proposition 3.4 it follows that $Z = E[X \mid \mathcal{G}]$; this proves (3). To prove (4), if we
let $C = \Omega$ and $Y = E[X \mid \mathcal{G}]$, then $EY = E[Y; C] = E[X; C] = EX$.

Last is (5). Let $Z = EX$. $Z$ is constant, so clearly $\mathcal{G}$ measurable. By the independence,
if $C \in \mathcal{G}$, then $E[X; C] = E[X 1_C] = (EX)(E 1_C) = (EX)(P(C))$. But
$E[Z; C] = (EX)(P(C))$ since $Z$ is constant. By Proposition 3.4 we see $Z = E[X \mid \mathcal{G}]$.
Note 2. We prove Proposition 3.6. Note that $ZE[X \mid \mathcal{G}]$ is $\mathcal{G}$ measurable, so by Proposition
3.4 we need to show its expectation over sets $C$ in $\mathcal{G}$ is the same as that of $XZ$. As in the
proof of Proposition 3.3, it suffices to consider only the case when $C$ is one of the $B_i$. Now $Z$
is $\mathcal{G}$ measurable, hence it is constant on $B_i$; let its value be $z_i$. Then

$$E[ZE[X \mid \mathcal{G}]; B_i] = E[z_i E[X \mid \mathcal{G}]; B_i] = z_i E[E[X \mid \mathcal{G}]; B_i] = z_i E[X; B_i] = E[XZ; B_i]$$

as desired.
4. Martingales.
Suppose we have a sequence of $\sigma$-fields $\mathcal{F}_1 \subset \mathcal{F}_2 \subset \mathcal{F}_3 \subset \cdots$. An example would be
repeatedly tossing a coin and letting $\mathcal{F}_k$ be the sets that can be determined by the first
$k$ tosses. Another example is to let $\mathcal{F}_k$ be the events that are determined by the values
of a stock at times 1 through $k$. A third example is to let $X_1, X_2, \ldots$ be a sequence of
random variables and let $\mathcal{F}_k$ be the $\sigma$-field generated by $X_1, \ldots, X_k$, the smallest $\sigma$-field
with respect to which $X_1, \ldots, X_k$ are measurable.
Definition 4.1. A r.v. $X$ is integrable if $E|X| < \infty$. Given an increasing sequence of
$\sigma$-fields $\mathcal{F}_n$, a sequence of r.v.'s $X_n$ is adapted if $X_n$ is $\mathcal{F}_n$ measurable for each $n$.

Definition 4.2. A martingale $M_n$ is a sequence of random variables such that
(1) $M_n$ is integrable for all $n$,
(2) $M_n$ is adapted to $\mathcal{F}_n$, and
(3) for all $n$

$$E[M_{n+1} \mid \mathcal{F}_n] = M_n. \tag{4.1}$$
Usually (1) and (2) are easy to check, and it is (3) that is the crucial property. If
we have (1) and (2), but instead of (3) we have
(3′) for all $n$

$$E[M_{n+1} \mid \mathcal{F}_n] \ge M_n,$$

then we say $M_n$ is a submartingale. If we have (1) and (2), but instead of (3) we have
(3″) for all $n$

$$E[M_{n+1} \mid \mathcal{F}_n] \le M_n,$$

then we say $M_n$ is a supermartingale.
Submartingales tend to increase and supermartingales tend to decrease. The
nomenclature may seem like it goes the wrong way; Doob defined these terms by analogy
with the notions of subharmonic and superharmonic functions in analysis. (Actually,
it is more than an analogy: we won't explore this, but it turns out that the composition
of a subharmonic function with Brownian motion yields a submartingale, and similarly for
superharmonic functions.)
Note that the definition of martingale depends on the collection of $\sigma$-fields. When
it is needed for clarity, one can say that $(M_n, \mathcal{F}_n)$ is a martingale. To define conditional
expectation, one needs a probability, so a martingale depends on the probability as well.
When we need to, we will say that $M_n$ is a martingale with respect to the probability $P$.
This is an issue when there is more than one probability around.

We will see that martingales are ubiquitous in financial math. For example, security
prices and one's wealth will turn out to be examples of martingales.
The word "martingale" is also used for the piece of a horse's bridle that runs from
the horse's head to its chest. It keeps the horse from raising its head too high. It turns out
that martingales in probability cannot get too large. The word also refers to a gambling
system. I did some searching on the Internet, and there seems to be no consensus on the
derivation of the term.
Here is an example of a martingale. Let $X_1, X_2, \ldots$ be a sequence of independent
r.v.'s with mean 0. (Saying a r.v. $X_i$ has mean 0 is the same as saying $EX_i = 0$; this
presupposes that $E|X_i|$ is finite.) Set $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$, the $\sigma$-field generated by
$X_1, \ldots, X_n$. Let $M_n = \sum_{i=1}^{n} X_i$. Definition 4.2(2) is easy to see. Since $E|M_n| \le \sum_{i=1}^{n} E|X_i|$, Definition 4.2(1) also holds. We now check

$$E[M_{n+1} \mid \mathcal{F}_n] = X_1 + \cdots + X_n + E[X_{n+1} \mid \mathcal{F}_n] = M_n + EX_{n+1} = M_n,$$

where we used the independence.
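A conditional expectation cannot be simulated directly, but we can check the martingale property numerically by fixing a past and averaging over continuations. A sketch (ours, with a made-up ±1 step distribution):

```python
# A quick simulation sketch (not from the notes): for M_n = X_1 + ... + X_n
# with independent mean-zero steps, E[M_{n+1} | F_n] = M_n.  Averaging
# M_{n+1} over many continuations of a *fixed* past checks this.
import random

random.seed(0)
past = [random.choice([-1, 1]) for _ in range(10)]   # a fixed history X_1..X_10
M_n = sum(past)

trials = 200_000
avg_next = sum(M_n + random.choice([-1, 1]) for _ in range(trials)) / trials
assert abs(avg_next - M_n) < 0.02    # E[M_11 | F_10] = M_10, up to Monte Carlo error
```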
Another example: suppose in the above that the $X_k$ all have variance 1, and let
$M_n = S_n^2 - n$, where $S_n = \sum_{i=1}^{n} X_i$. Again (1) and (2) of Definition 4.2 are easy to check.
We compute

$$E[M_{n+1} \mid \mathcal{F}_n] = E[S_n^2 + 2X_{n+1} S_n + X_{n+1}^2 \mid \mathcal{F}_n] - (n + 1).$$

We have $E[S_n^2 \mid \mathcal{F}_n] = S_n^2$ since $S_n$ is $\mathcal{F}_n$ measurable. Also

$$E[2X_{n+1} S_n \mid \mathcal{F}_n] = 2S_n E[X_{n+1} \mid \mathcal{F}_n] = 2S_n EX_{n+1} = 0.$$

And $E[X_{n+1}^2 \mid \mathcal{F}_n] = EX_{n+1}^2 = 1$. Substituting, we obtain $E[M_{n+1} \mid \mathcal{F}_n] = M_n$, so $M_n$ is
a martingale.
A third example: Suppose you start with a dollar and you are tossing a fair coin
independently. If it turns up heads you double your fortune, tails you go broke. This is
"double or nothing." Let $M_n$ be your fortune at time $n$. To formalize this, let $X_1, X_2, \ldots$
be independent r.v.'s that are equal to 2 with probability $\frac{1}{2}$ and 0 with probability $\frac{1}{2}$. Then
$M_n = X_1 \cdots X_n$. Let $\mathcal{F}_n$ be the $\sigma$-field generated by $X_1, \ldots, X_n$. Note $0 \le M_n \le 2^n$, and
so Definition 4.2(1) is satisfied, while (2) is easy. To compute the conditional expectation,
note $EX_{n+1} = 1$. Then

$$E[M_{n+1} \mid \mathcal{F}_n] = M_n E[X_{n+1} \mid \mathcal{F}_n] = M_n EX_{n+1} = M_n,$$

using the independence.
Before we give our fourth example, let us observe that

$$|E[X \mid \mathcal{F}]| \le E[|X| \mid \mathcal{F}]. \tag{4.2}$$

To see this, we have $-|X| \le X \le |X|$, so $-E[|X| \mid \mathcal{F}] \le E[X \mid \mathcal{F}] \le E[|X| \mid \mathcal{F}]$. Since
$E[|X| \mid \mathcal{F}]$ is nonnegative, (4.2) follows.
Our fourth example will be used many times, so we state it as a proposition.
Proposition 4.3. Let $\mathcal{F}_1, \mathcal{F}_2, \ldots$ be given and let $X$ be a fixed r.v. with $E|X| < \infty$. Let
$M_n = E[X \mid \mathcal{F}_n]$. Then $M_n$ is a martingale.

Proof. Definition 4.2(2) is clear, while

$$E|M_n| \le E[E[|X| \mid \mathcal{F}_n]] = E|X| < \infty$$

by (4.2); this shows Definition 4.2(1). We have

$$E[M_{n+1} \mid \mathcal{F}_n] = E[E[X \mid \mathcal{F}_{n+1}] \mid \mathcal{F}_n] = E[X \mid \mathcal{F}_n] = M_n.$$
5. Properties of martingales.
When it comes to discussing American options, we will need the concept of stopping
times. A mapping $\tau$ from $\Omega$ into the nonnegative integers is a stopping time if $(\tau = k) \in \mathcal{F}_k$
for each $k$.

An example is $\tau = \min\{k : S_k \ge A\}$. This is a stopping time because $(\tau = k) = (S_1, \ldots, S_{k-1} < A, S_k \ge A) \in \mathcal{F}_k$. We can think of a stopping time as the first time
something happens. $\sigma = \max\{k : S_k \ge A\}$, the last time, is not a stopping time. (We will
use the convention that the minimum of an empty set is $+\infty$; so, for example, with the
above definition of $\tau$, on the event that $S_k$ is never greater than or equal to $A$, we have
$\tau = \infty$.)
Here is an intuitive description of a stopping time. If I tell you to drive to the city
limits and then drive until you come to the second stop light after that, you know when
you get there that you have arrived; you don't need to have been there before or to look
ahead. But if I tell you to drive until you come to the second stop light before the city
limits, either you must have been there before or else you have to go past where you are
supposed to stop, continue on to the city limits, and then turn around and come back two
stop lights. You don't know when you first get to the second stop light before the city
limits that you get to stop there. The first set of instructions forms a stopping time, the
second set does not.
Note $(\tau \le k) = \cup_{j=0}^{k} (\tau = j)$. Since $(\tau = j) \in \mathcal{F}_j \subset \mathcal{F}_k$, then the event $(\tau \le k) \in \mathcal{F}_k$
for all $k$. Conversely, if $\tau$ is a r.v. with $(\tau \le k) \in \mathcal{F}_k$ for all $k$, then

$$(\tau = k) = (\tau \le k) - (\tau \le k - 1).$$

Since $(\tau \le k) \in \mathcal{F}_k$ and $(\tau \le k - 1) \in \mathcal{F}_{k-1} \subset \mathcal{F}_k$, then $(\tau = k) \in \mathcal{F}_k$, and such a $\tau$ must
be a stopping time.
Our first result is Jensen's inequality.

Proposition 5.1. If $g$ is convex, then

$$g(E[X \mid \mathcal{G}]) \le E[g(X) \mid \mathcal{G}]$$

provided all the expectations exist.

For ordinary expectations rather than conditional expectations, this is still true.
That is, if $g$ is convex and the expectations exist, then

$$g(EX) \le E[g(X)].$$

We already know some special cases of this: when $g(x) = |x|$, this says $|EX| \le E|X|$;
when $g(x) = x^2$, this says $(EX)^2 \le EX^2$, which we know because $EX^2 - (EX)^2 = E(X - EX)^2 \ge 0$.
For Proposition 5.1 as well as many of the following propositions, the statement of
the result is more important than the proof, and we relegate the proof to Note 1 below.

One reason we want Jensen's inequality is to show that a convex function applied
to a martingale yields a submartingale.

Proposition 5.2. If $M_n$ is a martingale and $g$ is convex, then $g(M_n)$ is a submartingale,
provided all the expectations exist.

Proof. By Jensen's inequality,

$$E[g(M_{n+1}) \mid \mathcal{F}_n] \ge g(E[M_{n+1} \mid \mathcal{F}_n]) = g(M_n).$$
If $M_n$ is a martingale, then $EM_n = E[E[M_{n+1} \mid \mathcal{F}_n]] = EM_{n+1}$. So $EM_0 = EM_1 = \cdots = EM_n$. Doob's optional stopping theorem says the same thing holds when
fixed times $n$ are replaced by stopping times.

Theorem 5.3. Suppose $K$ is a positive integer, $N$ is a stopping time such that $N \le K$
a.s., and $M_n$ is a martingale. Then

$$EM_N = EM_K.$$

Here, to evaluate $M_N$, one first finds $N(\omega)$ and then evaluates $M_{N(\omega)}(\omega)$ for that value of $N$.
Proof. We have

$$EM_N = \sum_{k=0}^{K} E[M_N; N = k].$$

If we show that the $k$-th summand is $E[M_K; N = k]$, then the sum will be

$$\sum_{k=0}^{K} E[M_K; N = k] = EM_K$$

as desired. We have

$$E[M_N; N = k] = E[M_k; N = k]$$

by the definition of $M_N$. Now $(N = k)$ is in $\mathcal{F}_k$, so by Proposition 3.3 and the fact that
$M_k = E[M_{k+1} \mid \mathcal{F}_k]$,

$$E[M_k; N = k] = E[M_{k+1}; N = k].$$

We have $(N = k) \in \mathcal{F}_k \subset \mathcal{F}_{k+1}$. Since $M_{k+1} = E[M_{k+2} \mid \mathcal{F}_{k+1}]$, Proposition 3.3 tells us
that

$$E[M_{k+1}; N = k] = E[M_{k+2}; N = k].$$

We continue, using $(N = k) \in \mathcal{F}_k \subset \mathcal{F}_{k+1} \subset \mathcal{F}_{k+2} \subset \cdots$, and we obtain

$$E[M_N; N = k] = E[M_k; N = k] = E[M_{k+1}; N = k] = \cdots = E[M_K; N = k].$$
If we change the equalities in the above to inequalities, the same result holds for
submartingales.
As a corollary we have two of Doob's inequalities:

Theorem 5.4. If $M_n$ is a nonnegative submartingale,
(a) $P(\max_{k \le n} M_k \ge \lambda) \le \frac{1}{\lambda} EM_n$.
(b) $E(\max_{k \le n} M_k^2) \le 4EM_n^2$.

For the proof, see Note 2 below.
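Before the proof, here is a simulation sketch (ours, not from the notes) of inequality (a), applied to the nonnegative submartingale $M_k = |X_1 + \cdots + X_k|$ (a convex function of a martingale, hence a submartingale by Proposition 5.2):

```python
# A simulation sketch (not from the notes) of Doob's inequality (a) for the
# nonnegative submartingale M_k = |X_1 + ... + X_k|:
# P(max_{k<=n} M_k >= lam) <= E[M_n] / lam.
import random

random.seed(1)
n, lam, trials = 50, 10.0, 20_000
hits, sum_Mn = 0, 0.0
for _ in range(trials):
    s, running_max = 0, 0
    for _ in range(n):
        s += random.choice([-1, 1])
        running_max = max(running_max, abs(s))
    hits += running_max >= lam
    sum_Mn += abs(s)

lhs = hits / trials            # P(max M_k >= lam), estimated
rhs = (sum_Mn / trials) / lam  # E[M_n] / lam, estimated
assert lhs <= rhs              # Doob's bound (holds comfortably here)
```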
Note 1. We prove Proposition 5.1. If $g$ is convex, then the graph of $g$ lies above all the
tangent lines. Even if $g$ does not have a derivative at $x_0$, there is a line passing through $x_0$
which lies beneath the graph of $g$. So for each $x_0$ there exists $c(x_0)$ such that

$$g(x) \ge g(x_0) + c(x_0)(x - x_0).$$

Apply this with $x = X(\omega)$ and $x_0 = E[X \mid \mathcal{G}](\omega)$. We then have

$$g(X) \ge g(E[X \mid \mathcal{G}]) + c(E[X \mid \mathcal{G}])(X - E[X \mid \mathcal{G}]).$$

If $g$ is differentiable, we let $c(x_0) = g'(x_0)$. In the case where $g$ is not differentiable, then we
choose $c$ to be the left hand upper derivate, for example. (For those who are not familiar with
derivates, this is essentially the left hand derivative.) One can check that if $c$ is so chosen,
then $c(E[X \mid \mathcal{G}])$ is $\mathcal{G}$ measurable.

Now take the conditional expectation with respect to $\mathcal{G}$. The first term on the right is
$\mathcal{G}$ measurable, so remains the same. The second term on the right is equal to

$$c(E[X \mid \mathcal{G}])E[X - E[X \mid \mathcal{G}] \mid \mathcal{G}] = 0.$$
Note 2. We prove Theorem 5.4. Set $M_{n+1} = M_n$. It is easy to see that the sequence
$M_1, M_2, \ldots, M_{n+1}$ is also a submartingale. Let $N = \min\{k : M_k \ge \lambda\} \wedge (n+1)$, the first
time that $M_k$ is greater than or equal to $\lambda$, where $a \wedge b = \min(a, b)$. Then

$$P(\max_{k \le n} M_k \ge \lambda) = P(N \le n)$$

and if $N \le n$, then $M_N \ge \lambda$. Now

$$P(\max_{k \le n} M_k \ge \lambda) = E[1_{(N \le n)}] \le E\Big[\frac{M_N}{\lambda}; N \le n\Big] = \frac{1}{\lambda} E[M_{N \wedge n}; N \le n] \le \frac{1}{\lambda} EM_{N \wedge n}. \tag{5.1}$$

Finally, since $M_n$ is a submartingale, $EM_{N \wedge n} \le EM_n$.
We now look at (b). Let us write $M^*$ for $\max_{k \le n} M_k$. If $EM_n^2 = \infty$, there is nothing
to prove. If it is finite, then by Jensen's inequality, we have

$$EM_k^2 = E[E[M_n \mid \mathcal{F}_k]^2] \le E[E[M_n^2 \mid \mathcal{F}_k]] = EM_n^2 < \infty$$

for $k \le n$. Then

$$E(M^*)^2 = E[\max_{1 \le k \le n} M_k^2] \le E\Big[\sum_{k=1}^{n} M_k^2\Big] < \infty.$$

We have

$$E[M_{N \wedge n}; N \le n] = \sum_{k=0}^{n} E[M_{k \wedge n}; N = k].$$

Arguing as in the proof of Theorem 5.3,

$$E[M_{k \wedge n}; N = k] \le E[M_n; N = k],$$

and so

$$E[M_{N \wedge n}; N \le n] \le \sum_{k=0}^{n} E[M_n; N = k] = E[M_n; N \le n].$$

The last expression is at most $E[M_n; M^* \ge \lambda]$. If we multiply (5.1) by $2\lambda$ and integrate over $\lambda$
from 0 to $\infty$, we obtain

$$\int_0^{\infty} 2\lambda P(M^* \ge \lambda)\,d\lambda \le 2\int_0^{\infty} E[M_n; M^* \ge \lambda]\,d\lambda = 2E\Big[M_n \int_0^{\infty} 1_{(M^* \ge \lambda)}\,d\lambda\Big] = 2E\Big[M_n \int_0^{M^*} d\lambda\Big] = 2E[M_n M^*].$$

Using Cauchy-Schwarz, this is bounded by

$$2(EM_n^2)^{1/2}(E(M^*)^2)^{1/2}.$$

On the other hand,

$$\int_0^{\infty} 2\lambda P(M^* \ge \lambda)\,d\lambda = E\int_0^{\infty} 2\lambda 1_{(M^* \ge \lambda)}\,d\lambda = E\int_0^{M^*} 2\lambda\,d\lambda = E(M^*)^2.$$

We therefore have

$$E(M^*)^2 \le 2(EM_n^2)^{1/2}(E(M^*)^2)^{1/2}.$$

Recall we showed $E(M^*)^2 < \infty$. We divide both sides by $(E(M^*)^2)^{1/2}$, square both sides,
and obtain (b).
Note 3. We will show that bounded martingales converge. (The hypothesis of boundedness
can be weakened; for example, $E|M_n| \le c < \infty$ for some $c$ not depending on $n$ suffices.)

Theorem 5.5. Suppose $M_n$ is a martingale bounded in absolute value by $K$. That is,
$|M_n| \le K$ for all $n$. Then $\lim_{n \to \infty} M_n$ exists a.s.
Proof. Since $M_n$ is bounded, it can't tend to $+\infty$ or $-\infty$. The only possibility is that it
might oscillate. Let $a < b$ be two rationals. What might go wrong is that $M_n$ might be larger
than $b$ infinitely often and less than $a$ infinitely often. If we show the probability of this is 0,
then taking the union over all pairs of rationals $(a, b)$ shows that almost surely $M_n$ cannot
oscillate, and hence must converge.

Fix $a < b$, let $N_n = (M_n - a)^+$, and let $S_1 = \min\{k : N_k \le 0\}$, $T_1 = \min\{k > S_1 : N_k \ge b - a\}$, $S_2 = \min\{k > T_1 : N_k \le 0\}$, and so on. Let $U_n = \max\{k : T_k \le n\}$. $U_n$
is called the number of upcrossings up to time $n$. We want to show that $\max_n U_n < \infty$ a.s.

Note by Jensen's inequality $N_n$ is a submartingale. Since $S_1 < T_1 < S_2 < \cdots$, then $S_{n+1} > n$.
We can write

$$2K \ge N_n = \sum_{k=1}^{n+1} (N_{S_{k+1} \wedge n} - N_{T_k \wedge n}) + \sum_{k=1}^{n+1} (N_{T_k \wedge n} - N_{S_k \wedge n}) + N_{S_1 \wedge n}.$$

Now take expectations. The expectation of the first sum on the right and the last term are
greater than or equal to zero by optional stopping. The middle term is larger than $(b - a)U_n$,
so we conclude

$$(b - a)EU_n \le 2K.$$

Let $n \to \infty$ to see that $E\max_n U_n < \infty$, which implies $\max_n U_n < \infty$ a.s., which is what we
needed.
Note 4. We will state Fatou's lemma in the following form.

If $X_n$ is a sequence of nonnegative random variables converging to $X$ a.s., then $EX \le \sup_n EX_n$.

This formulation is equivalent to the classical one and is better suited for our use.
6. The one step binomial asset pricing model.
Let us begin by giving the simplest possible model of a stock and see how a European
call option should be valued in this context.

Suppose we have a single stock whose price is $S_0$. Let $d$ and $u$ be two numbers with
$0 < d < 1 < u$. Here "d" is a mnemonic for "down" and "u" for "up." After one time unit
the stock price will be either $uS_0$ with probability $P$ or else $dS_0$ with probability $Q$, where
$P + Q = 1$. We will assume $0 < P, Q < 1$. Instead of purchasing shares in the stock, you
can also put your money in the bank where one will earn interest at rate $r$. Alternatives
to the bank are money market funds or bonds; the key point is that these are considered
to be risk-free.
A European call option in this context is the option to buy one share of the stock
at time 1 at price $K$. $K$ is called the strike price. Let $S_1$ be the price of the stock at time
1. If $S_1$ is less than $K$, then the option is worthless at time 1. If $S_1$ is greater than $K$, you
can use the option at time 1 to buy the stock at price $K$, immediately turn around and
sell the stock for price $S_1$ and make a profit of $S_1 - K$. So the value of the option at time
1 is

$$V_1 = (S_1 - K)^+,$$

where $x^+$ is $\max(x, 0)$. The principal question to be answered is: what is the value $V_0$ of
the option at time 0? In other words, how much should one pay for a European call option
with strike price $K$?
It is possible to buy a negative number of shares of a stock. This is equivalent to
selling shares of a stock you don't have and is called selling short. If you sell one share
of stock short, then at time 1 you must buy one share at whatever the market price is at
that time and turn it over to the person that you sold the stock short to. Similarly you
can buy a negative number of options, that is, sell an option.
You can also deposit a negative amount of money in the bank, which is the same
as borrowing. We assume that you can borrow at the same interest rate r, not exactly a
totally realistic assumption. One way to make it seem more realistic is to assume you have
a large amount of money on deposit, and when you borrow, you simply withdraw money
from that account.
We are looking at the simplest possible model, so we are going to allow only one
time step: one makes an investment, and looks at it again one day later.
Let's suppose the price of a European call option is $V_0$ and see what conditions
one can put on $V_0$. Suppose you start out with $V_0$ dollars. One thing you could do is
buy one option. The other thing you could do is use the money to buy $\Delta_0$ shares of
stock. If $V_0 > \Delta_0 S_0$, there will be some money left over and you put that in the bank. If
$V_0 < \Delta_0 S_0$, you do not have enough money to buy the stock, and you make up the shortfall
by borrowing money from the bank. In either case, at this point you have $V_0 - \Delta_0 S_0$ in
the bank and $\Delta_0$ shares of stock.
If the stock goes up, at time 1 you will have

$$\Delta_0 uS_0 + (1+r)(V_0 - \Delta_0 S_0),$$

and if it goes down,

$$\Delta_0 dS_0 + (1+r)(V_0 - \Delta_0 S_0).$$

We have not said what $\Delta_0$ should be. Let us do that now. Let $V_1^u = (uS_0 - K)^+$
and $V_1^d = (dS_0 - K)^+$. Note these are deterministic quantities, i.e., not random. Let

$$\Delta_0 = \frac{V_1^u - V_1^d}{uS_0 - dS_0},$$

and we will also need

$$W_0 = \frac{1}{1+r}\Big[\frac{1+r-d}{u-d} V_1^u + \frac{u-(1+r)}{u-d} V_1^d\Big].$$
In a moment we will do some algebra and see that if the stock goes up and you had
bought stock instead of the option you would now have

$$V_1^u + (1+r)(V_0 - W_0),$$

while if the stock went down, you would now have

$$V_1^d + (1+r)(V_0 - W_0).$$
Let's check the first of these, the second being similar. We need to show

$$\Delta_0 uS_0 + (1+r)(V_0 - \Delta_0 S_0) = V_1^u + (1+r)(V_0 - W_0). \tag{6.1}$$

The left hand side of (6.1) is equal to

$$\Delta_0 S_0 (u - (1+r)) + (1+r)V_0 = \frac{V_1^u - V_1^d}{u-d}(u - (1+r)) + (1+r)V_0. \tag{6.2}$$

The right hand side of (6.1) is equal to

$$V_1^u - \Big[\frac{1+r-d}{u-d} V_1^u + \frac{u-(1+r)}{u-d} V_1^d\Big] + (1+r)V_0. \tag{6.3}$$

Now check that the coefficients of $V_0$, of $V_1^u$, and of $V_1^d$ agree in (6.2) and (6.3).
Suppose that $V_0 > W_0$. What you want to do is come along with no money, sell
one option for $V_0$ dollars, use the money to buy $\Delta_0$ shares, and put the rest in the bank
(or borrow if necessary). If the buyer of your option wants to exercise the option, you give
him one share of stock and sell the rest. If he doesn't want to exercise the option, you sell
your shares of stock and pocket the money. Remember it is possible to have a negative
number of shares. You will have cleared $(1+r)(V_0 - W_0)$, whether the stock went up or
down, with no risk.

If $V_0 < W_0$, you just do the opposite: sell $\Delta_0$ shares of stock short, buy one option,
and deposit or make up the shortfall from the bank. This time, you clear $(1+r)(W_0 - V_0)$,
whether the stock goes up or down.
Now most people believe that you can't make a profit on the stock market without
taking a risk. The name for this is "no free lunch," or "arbitrage opportunities do not
exist." The only way to avoid this is if $V_0 = W_0$. In other words, we have shown that the
only reasonable price for the European call option is $W_0$.
The "no arbitrage" condition is not just a reflection of the belief that one cannot get
something for nothing. It also represents the belief that the market is freely competitive.
The way it works is this: suppose $W_0 = \$3$. Suppose you could sell options at a price
$V_0 = \$5$; this is larger than $W_0$ and you would earn $V_0 - W_0 = \$2$ per option without risk.
Then someone else would observe this and decide to sell the same option at a price less
than $V_0$ but larger than $W_0$, say $4. This person would still make a profit, and customers
would go to him and ignore you because they would be getting a better deal. But then a
third person would decide to sell the option for less than your competition but more than
$W_0$, say at $3.50. This would continue as long as anyone would try to sell an option above
price $W_0$.
We will examine this problem of pricing options in more complicated contexts, and
while doing so, it will become apparent where the formulas for $\Delta_0$ and $W_0$ came from. At
this point, we want to make a few observations.
Remark 6.1. First of all, if $1 + r > u$, one would never buy stock, since one can always
do better by putting money in the bank. So we may suppose $1 + r < u$. We always have
$1 + r \ge 1 > d$. If we set

$$p = \frac{1+r-d}{u-d}, \qquad q = \frac{u-(1+r)}{u-d},$$

then $p, q \ge 0$ and $p + q = 1$. Thus $p$ and $q$ act like probabilities, but they have nothing to
do with $P$ and $Q$. Note also that the price $V_0 = W_0$ does not depend on $P$ or $Q$. It does
depend on $p$ and $q$, which seems to suggest that there is an underlying probability which
controls the option price and is not the one that governs the stock price.
Remark 6.2. There is nothing special about European call options in our argument
above. One could let $V_1^u$ and $V_1^d$ be any two values of any option, which are paid out if the
stock goes up or down, respectively. The above analysis shows we can exactly duplicate
the result of buying any option $V$ by instead buying some shares of stock. If in some model
one can do this for any option, the market is called complete in this model.
Remark 6.3. If we let $P$ be the probability so that $S_1 = uS_0$ with probability $p$ and
$S_1 = dS_0$ with probability $q$ and we let $E$ be the corresponding expectation, then some
algebra shows that

$$V_0 = \frac{1}{1+r} EV_1.$$

This will be generalized later.
Remark 6.4. If one buys one share of stock at time 0, then one expects at time 1 to
have $(Pu + Qd)S_0$. One then divides by $1 + r$ to get the value of the stock in today's
dollars. ($r$, the risk-free interest rate, can also be considered the rate of inflation. A dollar
tomorrow is equivalent to $1/(1+r)$ dollars today.) Suppose instead of $P$ and $Q$ being the
probabilities of going up and down, they were in fact $p$ and $q$. One would then expect to
have $(pu + qd)S_0$ and then divide by $1+r$. Substituting the values for $p$ and $q$, this reduces
to $S_0$. In other words, if $p$ and $q$ were the correct probabilities, one would expect to have
the same amount of money one started with. When we get to the binomial asset pricing
model with more than one step, we will see that the generalization of this fact is that the
stock price at time $n$ is a martingale, still with the assumption that $p$ and $q$ are the correct
probabilities. This is a special case of the fundamental theorem of finance: there always
exists some probability, not necessarily the one you observe, under which the stock price
is a martingale.
Remark 6.5. Our model allows after one time step the possibility of the stock going up or
going down, but only these two options. What if instead there are 3 (or more) possibilities?
Suppose for example, that the stock goes up a factor $u$ with probability $P$, down a factor
$d$ with probability $Q$, and remains constant with probability $R$, where $P + Q + R = 1$.
The corresponding value of a European call option at time 1 would be $(uS_0 - K)^+$, $(dS_0 - K)^+$, or
$(S_0 - K)^+$. If one could replicate this outcome by buying and selling shares of the stock,
then the "no arbitrage" rule would give the exact value of the call option in this model.
But, except in very special circumstances, one cannot do this, and the theory falls apart.
One has three equations one wants to satisfy, in terms of $V_1^u$, $V_1^d$, and $V_1^c$. (The "c" is
a mnemonic for "constant.") There are however only two variables, $\Delta_0$ and $V_0$, at your
disposal, and most of the time three equations in two unknowns cannot be solved.
Remark 6.6. In our model we ruled out the cases that $P$ or $Q$ were zero. If $Q = 0$,
that is, we are certain that the stock will go up, then we would always invest in the stock
if $u > 1 + r$, as we would always do better, and we would always put the money in the
bank if $u \le 1 + r$. Similar considerations apply when $P = 0$. It is interesting to note that
the cases where $P = 0$ or $Q = 0$ are the only ones in which our derivation is not valid.
It turns out that in more general models the true probabilities enter only in determining
which events have probability 0 or 1 and in no other way.
7. The multi-step binomial asset pricing model.
In this section we will obtain a formula for the pricing of options when there are n
time steps, but each time the stock can only go up by a factor u or down by a factor d.
The Black-Scholes formula we will obtain is already a nontrivial result that is useful.
We assume the following.
(1) Unlimited short selling of stock
(2) Unlimited borrowing
(3) No transaction costs
(4) Our buying and selling is on a small enough scale that it does not aect the market.
We need to set up the probability model. $\Omega$ will be all sequences of length $n$ of H's
and T's. $S_0$ will be a fixed number and we define $S_k(\omega) = u^j d^{k-j} S_0$ if the first $k$ elements
of a given $\omega$ has $j$ occurrences of H and $k - j$ occurrences of T. (What we are doing is
saying that if the $j$-th element of the sequence making up $\omega$ is an H, then the stock price
goes up by a factor $u$; if T, then down by a factor $d$.) $\mathcal{F}_k$ will be the $\sigma$-field generated by
$S_0, \ldots, S_k$.
Let

$$p = \frac{(1+r)-d}{u-d}, \qquad q = \frac{u-(1+r)}{u-d}$$

and define $P(\omega) = p^j q^{n-j}$ if $\omega$ has $j$ appearances of H and $n - j$ appearances of T. We
observe that under $P$ the random variables $S_{k+1}/S_k$ are independent and equal to $u$ with
probability $p$ and $d$ with probability $q$. To see this, let $Y_k = S_k/S_{k-1}$. Thus $Y_k$ is the
factor the stock price goes up or down at time $k$. Then $P(Y_1 = y_1, \ldots, Y_n = y_n) = p^j q^{n-j}$,
where $j$ is the number of the $y_k$ that are equal to $u$. On the other hand, this is equal to
$P(Y_1 = y_1) \cdots P(Y_n = y_n)$. Let $E$ denote the expectation corresponding to $P$.

The $P$ we construct may not be the true probabilities of going up or down. That
doesn't matter; it will turn out that using the principle of "no arbitrage," it is $P$ that
governs the price.
Our first result is the fundamental theorem of finance in the current context.

Proposition 7.1. Under $P$ the discounted stock price $(1+r)^{-k} S_k$ is a martingale.

Proof. Since the random variable $S_{k+1}/S_k$ is independent of $\mathcal{F}_k$, we have

$$E[(1+r)^{-(k+1)} S_{k+1} \mid \mathcal{F}_k] = (1+r)^{-k} S_k (1+r)^{-1} E[S_{k+1}/S_k \mid \mathcal{F}_k].$$

Using the independence the conditional expectation on the right is equal to

$$E[S_{k+1}/S_k] = pu + qd = 1 + r.$$

Substituting yields the proposition.
Let $\Delta_k$ be the number of shares held between times $k$ and $k+1$. We require $\Delta_k$
to be $\mathcal{F}_k$ measurable. $\Delta_0, \Delta_1, \ldots$ is called the portfolio process. Let $W_0$ be the amount
of money you start with and let $W_k$ be the amount of money you have at time $k$. $W_k$ is
the wealth process. If we have $\Delta_k$ shares between times $k$ and $k+1$, then at time $k+1$
those shares will be worth $\Delta_k S_{k+1}$. The amount of cash we hold between time $k$ and $k+1$
is $W_k$ minus the amount held in stock, that is, $W_k - \Delta_k S_k$. At time $k+1$ this is worth
$(1+r)[W_k - \Delta_k S_k]$. Therefore

$$W_{k+1} = \Delta_k S_{k+1} + (1+r)[W_k - \Delta_k S_k].$$
Note that in the case where $r = 0$ we have

$$W_{k+1} - W_k = \Delta_k (S_{k+1} - S_k),$$

or

$$W_{k+1} = W_0 + \sum_{i=0}^{k} \Delta_i (S_{i+1} - S_i).$$

This is a discrete version of a stochastic integral. Since

$$E[W_{k+1} - W_k \mid \mathcal{F}_k] = \Delta_k E[S_{k+1} - S_k \mid \mathcal{F}_k] = 0,$$

it follows that in the case $r = 0$ that $W_k$ is a martingale. More generally
Proposition 7.2. Under $P$ the discounted wealth process $(1+r)^{-k} W_k$ is a martingale.

Proof. We have

$$(1+r)^{-(k+1)} W_{k+1} = (1+r)^{-k} W_k + \Delta_k [(1+r)^{-(k+1)} S_{k+1} - (1+r)^{-k} S_k].$$

Observe that

$$E[\Delta_k [(1+r)^{-(k+1)} S_{k+1} - (1+r)^{-k} S_k] \mid \mathcal{F}_k] = \Delta_k E[(1+r)^{-(k+1)} S_{k+1} - (1+r)^{-k} S_k \mid \mathcal{F}_k] = 0.$$

The result follows.
Our next result is that the binomial model is complete. It is easy to lose the idea
in the algebra, so first let us try to see why the theorem is true.

For simplicity let us first consider the case $r = 0$. Let $V_k = E[V \mid \mathcal{F}_k]$; by Proposition
4.3 we see that $V_k$ is a martingale. We want to construct a portfolio process, i.e.,
choose $\Delta_k$'s, so that $W_n = V$. We will do it inductively by arranging matters so that
$W_k = V_k$ for all $k$. Recall that $W_k$ is also a martingale.
Suppose we have $W_k = V_k$ at time $k$ and we want to find $\Delta_k$ so that $W_{k+1} = V_{k+1}$.
At the $(k+1)$-st step there are only two possible changes for the price of the stock and so,
since $V_{k+1}$ is $\mathcal{F}_{k+1}$ measurable, only two possible values for $V_{k+1}$. We need to choose $\Delta_k$
so that $W_{k+1} = V_{k+1}$ for each of these two possibilities. We only have one parameter, $\Delta_k$,
to play with to match up two numbers, which may seem like an overconstrained system of
equations. But both $V$ and $W$ are martingales, which is why the system can be solved.

Now let us turn to the details. In the following proof we allow $r \ge 0$.
Theorem 7.3. The binomial asset pricing model is complete.

The precise meaning of this is the following. If $V$ is any random variable that is $\mathcal{F}_n$
measurable, there exists a constant $W_0$ and a portfolio process $\Delta_k$ so that the wealth
process $W_k$ satisfies $W_n = V$. In other words, starting with $W_0$ dollars, we can trade
shares of stock to exactly duplicate the outcome of any option $V$.
Proof. Let

$$V_k = (1+r)^k E[(1+r)^{-n} V \mid \mathcal{F}_k].$$

By Proposition 4.3, $(1+r)^{-k} V_k$ is a martingale. If $\omega = (t_1, \ldots, t_n)$, where each $t_i$ is an H
or T, let

$$\Delta_k(\omega) = \frac{V_{k+1}(t_1, \ldots, t_k, H, t_{k+2}, \ldots, t_n) - V_{k+1}(t_1, \ldots, t_k, T, t_{k+2}, \ldots, t_n)}{S_{k+1}(t_1, \ldots, t_k, H, t_{k+2}, \ldots, t_n) - S_{k+1}(t_1, \ldots, t_k, T, t_{k+2}, \ldots, t_n)}.$$

Set $W_0 = V_0$, and we will show by induction that the wealth process at time $k$ equals $V_k$.
The first thing to show is that $\Delta_k$ is $\mathcal{F}_k$ measurable. Neither $S_{k+1}$ nor $V_{k+1}$ depends
on $t_{k+2}, \ldots, t_n$. So $\Delta_k$ depends only on the variables $t_1, \ldots, t_k$, hence is $\mathcal{F}_k$ measurable.

Now $t_{k+2}, \ldots, t_n$ play no role in the rest of the proof, and $t_1, \ldots, t_k$ will be fixed,
so we drop the $t$'s from the notation. If we write $V_{k+1}(H)$, this is an abbreviation for
$V_{k+1}(t_1, \ldots, t_k, H, t_{k+2}, \ldots, t_n)$.
We know $(1+r)^{-k} V_k$ is a martingale under $P$ so that

$$V_k = E[(1+r)^{-1} V_{k+1} \mid \mathcal{F}_k] = \frac{1}{1+r}[pV_{k+1}(H) + qV_{k+1}(T)]. \tag{7.1}$$

(See Note 1.) We now suppose $W_k = V_k$ and want to show $W_{k+1}(H) = V_{k+1}(H)$ and
$W_{k+1}(T) = V_{k+1}(T)$. Then using induction we have $W_n = V_n = V$ as required. We show
the first equality, the second being similar.
$$\begin{aligned}
W_{k+1}(H) &= \Delta_k S_{k+1}(H) + (1+r)[W_k - \Delta_k S_k] \\
&= \Delta_k [uS_k - (1+r)S_k] + (1+r)V_k \\
&= \frac{V_{k+1}(H) - V_{k+1}(T)}{(u-d)S_k} S_k [u - (1+r)] + pV_{k+1}(H) + qV_{k+1}(T) \\
&= V_{k+1}(H).
\end{aligned}$$

We are done.
Finally, we obtain the Black-Scholes formula in this context. Let $V$ be any option
that is $\mathcal{F}_n$-measurable. The one we have in mind is the European call, for which $V = (S_n - K)^+$, but the argument is the same for any option whatsoever.

Theorem 7.4. The value of the option $V$ at time 0 is $V_0 = (1+r)^{-n} EV$.
Proof. We can construct a portfolio process $\Delta_k$ so that if we start with $W_0 = (1+r)^{-n} EV$,
then the wealth at time $n$ will equal $V$, no matter what the market does in between. If
we could buy or sell the option $V$ at a price other than $W_0$, we could obtain a riskless
profit. That is, if the option $V$ could be sold at a price $c_0$ larger than $W_0$, we would sell
the option for $c_0$ dollars, use $W_0$ to buy and sell stock according to the portfolio process
$\Delta_k$, have a net worth of $V + (1+r)^n (c_0 - W_0)$ at time $n$, meet our obligation to the buyer
of the option by using $V$ dollars, and have a net profit, at no risk, of $(1+r)^n (c_0 - W_0)$.
If $c_0$ were less than $W_0$, we would do the same except buy an option, hold $-\Delta_k$ shares at
time $k$, and again make a riskless profit. By the "no arbitrage" rule, that can't happen,
so the price of the option $V$ must be $W_0$.
Remark 7.5. Note that the proof of Theorem 7.4 tells you precisely what hedging
strategy (i.e., what portfolio process) to use.
In the binomial asset pricing model, there is no difficulty computing the price of a European call. We have
$$E(S_n - K)^+ = \sum_x (x - K)^+\, P(S_n = x)$$
and
$$P(S_n = x) = \binom{n}{k} p^k q^{n-k} \quad \text{if } x = u^k d^{n-k} S_0.$$
Therefore the price of the European call is
$$(1+r)^{-n} \sum_{k=0}^n (u^k d^{n-k} S_0 - K)^+ \binom{n}{k} p^k q^{n-k}.$$
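Everything on the right side of this formula is explicit, so it is easy to evaluate numerically. Here is a short Python sketch of the computation (the function name and interface are ours, not part of the notes):

```python
from math import comb

def euro_call_binomial(S0, K, r, u, d, n):
    """(1+r)^{-n} * sum_k C(n,k) p^k q^{n-k} (u^k d^{n-k} S0 - K)^+ ."""
    p = (1 + r - d) / (u - d)   # risk-neutral probability of an up move
    q = 1 - p
    total = sum(comb(n, k) * p**k * q**(n - k)
                * max(u**k * d**(n - k) * S0 - K, 0.0)
                for k in range(n + 1))
    return total / (1 + r)**n
```

For the example worked out below ($n = 3$, $u = 2$, $d = \frac{1}{2}$, $r = 0.1$, $S_0 = 10$, $K = 15$) this returns $4.2074$, in agreement with $V_0$.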
The formula in Theorem 7.4 holds for exotic options as well. Suppose
$$V = \max_{i=1,\ldots,n} S_i - \min_{j=1,\ldots,n} S_j.$$
In other words, you sell the stock at the maximum value it takes during the first $n$ time steps and you buy at the minimum value the stock takes; you are allowed to wait until time $n$ and look back to see what the maximum and minimum were. You can even do this if the maximum comes before the minimum. This $V$ is still $\mathcal{F}_n$ measurable, so the theory applies. Naturally, such a "buy low, sell high" option is very desirable, and the price of such a $V$ will be quite high. It is interesting that even without using options, you can duplicate the operation of buying low and selling high by holding an appropriate number of shares $\Delta_k$ at time $k$, where you do not look into the future to determine $\Delta_k$.
Let us look at an example of a European call so that it is clear how to do the calculations. Consider the binomial asset pricing model with $n = 3$, $u = 2$, $d = \frac{1}{2}$, $r = 0.1$, $S_0 = 10$, and $K = 15$. If $V$ is a European call with strike price $K$ and exercise date $n$, let us compute explicitly the random variables $V_1$ and $V_2$ and calculate the value $V_0$. Let us also compute the hedging strategy $\Delta_0$, $\Delta_1$, and $\Delta_2$.

Let
$$p = \frac{(1+r) - d}{u - d} = .4, \qquad q = \frac{u - (1+r)}{u - d} = .6.$$
The following table describes the values of the stock, the payoff $V$, and the probabilities for each possible outcome $\omega$.
  ω     S_1    S_2      S_3        V    Probability
  HHH   10u    10u^2    10u^3     65    p^3
  HHT   10u    10u^2    10u^2 d    5    p^2 q
  HTH   10u    10ud     10u^2 d    5    p^2 q
  HTT   10u    10ud     10ud^2     0    p q^2
  THH   10d    10ud     10u^2 d    5    p^2 q
  THT   10d    10ud     10ud^2     0    p q^2
  TTH   10d    10d^2    10ud^2     0    p q^2
  TTT   10d    10d^2    10d^3      0    q^3
We then calculate
$$V_0 = (1+r)^{-3}EV = (1+r)^{-3}(65p^3 + 15p^2 q) = 4.2074.$$
$V_1 = (1+r)^{-2}E[V \mid \mathcal{F}_1]$, so we have
$$V_1(H) = (1+r)^{-2}(65p^2 + 10pq) = 10.5785, \qquad V_1(T) = (1+r)^{-2}\cdot 5pq = .9917.$$
$V_2 = (1+r)^{-1}E[V \mid \mathcal{F}_2]$, so we have
$$V_2(HH) = (1+r)^{-1}(65p + 5q) = 26.3636, \qquad V_2(HT) = (1+r)^{-1}\cdot 5p = 1.8182,$$
$$V_2(TH) = (1+r)^{-1}\cdot 5p = 1.8182, \qquad V_2(TT) = 0.$$
The formula for $\Delta_k$ is given by
$$\Delta_k = \frac{V_{k+1}(H) - V_{k+1}(T)}{S_{k+1}(H) - S_{k+1}(T)},$$
so
$$\Delta_0 = \frac{V_1(H) - V_1(T)}{S_1(H) - S_1(T)} = \frac{10.5785 - .9917}{20 - 5} = .6391,$$
where $V_1$ and $S_1$ are as above.
$$\Delta_1(H) = \frac{V_2(HH) - V_2(HT)}{S_2(HH) - S_2(HT)} = \frac{26.3636 - 1.8182}{40 - 10} = .8182, \qquad \Delta_1(T) = \frac{V_2(TH) - V_2(TT)}{S_2(TH) - S_2(TT)} = \frac{1.8182 - 0}{10 - 2.5} = .2424.$$
$$\Delta_2(HH) = \frac{V_3(HHH) - V_3(HHT)}{S_3(HHH) - S_3(HHT)} = 1.0, \qquad \Delta_2(HT) = \frac{V_3(HTH) - V_3(HTT)}{S_3(HTH) - S_3(HTT)} = .3333,$$
$$\Delta_2(TH) = \frac{V_3(THH) - V_3(THT)}{S_3(THH) - S_3(THT)} = .3333, \qquad \Delta_2(TT) = \frac{V_3(TTH) - V_3(TTT)}{S_3(TTH) - S_3(TTT)} = 0.0.$$
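These numbers can also be produced mechanically. The following Python sketch (ours, not from the notes) runs the backward induction of Theorem 7.3 for the parameters of this example, computing each $V_k$ from (7.1) and each $\Delta_k$ from the formula above:

```python
from itertools import product

S0, K, r, u, d, n = 10.0, 15.0, 0.1, 2.0, 0.5, 3
p = (1 + r - d) / (u - d)          # = .4
q = 1 - p                          # = .6

def stock(w):                      # stock price along the path w, e.g. w = "HT"
    return S0 * u**w.count("H") * d**w.count("T")

# terminal payoff of the call at every length-n path
V = {w: max(stock(w) - K, 0.0) for w in map("".join, product("HT", repeat=n))}
for k in range(n - 1, -1, -1):     # backward induction via (7.1)
    for w in map("".join, product("HT", repeat=k)):
        delta = (V[w + "H"] - V[w + "T"]) / (stock(w + "H") - stock(w + "T"))
        print(f"Delta_{k}({w or '-'}) = {delta:.4f}")
        V[w] = (p * V[w + "H"] + q * V[w + "T"]) / (1 + r)
print(f"V_0 = {V['']:.4f}")        # 4.2074
```

Running it reproduces the $\Delta$'s listed above and the value $V_0 = 4.2074$.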
Note 1. The second equality in (7.1) is not entirely obvious. Intuitively, it says that one has a heads with probability $p$, in which case the value of $V_{k+1}$ is $V_{k+1}(H)$, and one has tails with probability $q$, in which case the value of $V_{k+1}$ is $V_{k+1}(T)$.

Let us give a more rigorous proof of (7.1). The right hand side of (7.1) is $\mathcal{F}_k$ measurable, so we need to show that if $A \in \mathcal{F}_k$, then
$$E[V_{k+1}; A] = E[pV_{k+1}(H) + qV_{k+1}(T); A].$$
By linearity, it suffices to show this for $A = \{\omega = (t_1 t_2 \cdots t_n) : t_1 = s_1, \ldots, t_k = s_k\}$, where $s_1 s_2 \cdots s_k$ is any sequence of H's and T's. Now
$$\begin{aligned} E[V_{k+1}; s_1 \cdots s_k] &= E[V_{k+1}; s_1 \cdots s_k H] + E[V_{k+1}; s_1 \cdots s_k T] \\ &= V_{k+1}(s_1 \cdots s_k H)\,P(s_1 \cdots s_k H) + V_{k+1}(s_1 \cdots s_k T)\,P(s_1 \cdots s_k T). \end{aligned}$$
By independence this is
$$V_{k+1}(s_1 \cdots s_k H)\,P(s_1 \cdots s_k)\,p + V_{k+1}(s_1 \cdots s_k T)\,P(s_1 \cdots s_k)\,q,$$
which is what we wanted.
8. American options.

An American option is one where you can exercise the option at any time before some fixed time $T$. For example, with a European call, one can only use it to buy a share of stock at the expiration time $T$, while for an American call, at any time before time $T$ one can decide to pay $K$ dollars and obtain a share of stock.

Let us give an informal argument on how to price an American call, giving a more rigorous argument in a moment. One can always wait until time $T$ to exercise an American call, so the value must be at least as great as that of a European call. On the other hand, suppose you decide to exercise early. You pay $K$ dollars, receive one share of stock, and your wealth is $S_t - K$. You hold onto the stock, and at time $T$ you have one share of stock worth $S_T$, for which you paid $K$ dollars. So your wealth is $S_T - K \leq (S_T - K)^+$. In fact, we have strict inequality, because you lost the interest on your $K$ dollars that you would have received if you had waited to exercise until time $T$. Therefore an American call is worth no more than a European call, and hence its value must be the same as that of a European call.

This argument does not work for puts, because selling stock gives you some money on which you will receive interest, so it may be advantageous to exercise early. (A put is the option to sell a stock at a price $K$ at time $T$.)
Here is the more rigorous argument. Suppose that if you exercise the option at time $k$, your payoff is $g(S_k)$. In present day dollars, that is, after correcting for inflation, you have $(1+r)^{-k}g(S_k)$. You have to make a decision on when to exercise the option, and that decision can only be based on what has already happened, not on what is going to happen in the future. In other words, we have to choose a stopping time $\tau$, and we exercise the option at time $\tau(\omega)$. Thus our payoff is $(1+r)^{-\tau}g(S_\tau)$. This is a random quantity. What we want to do is find the stopping time that maximizes the expected value of this random variable. As usual, we work with $P$, and thus we are looking for the stopping time $\tau$ such that $\tau \leq n$ and
$$E(1+r)^{-\tau}g(S_\tau)$$
is as large as possible. The problem of finding such a $\tau$ is called an optimal stopping problem.

Suppose $g(x)$ is convex with $g(0) = 0$. Certainly $g(x) = (x - K)^+$ is such a function. We will show that $\tau \equiv n$ is the solution to the above optimal stopping problem: the best time to exercise is as late as possible.

We have
$$g(\lambda x) = g(\lambda x + (1-\lambda)\cdot 0) \leq \lambda g(x) + (1-\lambda)g(0) = \lambda g(x), \qquad 0 \leq \lambda \leq 1. \tag{8.1}$$
By Jensen's inequality,
$$\begin{aligned} E[(1+r)^{-(k+1)}g(S_{k+1}) \mid \mathcal{F}_k] &= (1+r)^{-k} E\Big[\frac{1}{1+r}\,g(S_{k+1}) \,\Big|\, \mathcal{F}_k\Big] \\ &\geq (1+r)^{-k} E\Big[g\Big(\frac{1}{1+r}\,S_{k+1}\Big) \,\Big|\, \mathcal{F}_k\Big] \\ &\geq (1+r)^{-k} g\Big(E\Big[\frac{1}{1+r}\,S_{k+1} \,\Big|\, \mathcal{F}_k\Big]\Big) = (1+r)^{-k}g(S_k). \end{aligned}$$
For the first inequality we used (8.1); the final equality uses $E[S_{k+1} \mid \mathcal{F}_k] = (1+r)S_k$. So $(1+r)^{-k}g(S_k)$ is a submartingale. By optional stopping,
$$E[(1+r)^{-\tau}g(S_\tau)] \leq E[(1+r)^{-n}g(S_n)],$$
so $\tau \equiv n$ always does best.

For puts, the payoff is $g(S_k)$, where $g(x) = (K - x)^+$. This is also a convex function, but this time $g(0) \neq 0$, and the above argument fails.

Although good approximations are known, an exact solution to the problem of valuing an American put is unknown, and is one of the major unsolved problems in financial mathematics.
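Numerically, however, the binomial analogue is straightforward: at each node one compares the value of exercising immediately with the discounted risk-neutral expectation of continuing. The sketch below is our illustration (the notes do not develop this recursion); it modifies the backward induction used for Theorem 7.3, exploiting the fact that the payoff depends only on the current stock price so the tree recombines:

```python
def american_put_binomial(S0, K, r, u, d, n):
    """Value V_0 of an American put: at each node take the larger of the
    exercise value (K - S)^+ and the discounted continuation value."""
    p = (1 + r - d) / (u - d)
    q = 1 - p
    # values at the terminal nodes, indexed by the number k of up moves
    V = [max(K - S0 * u**k * d**(n - k), 0.0) for k in range(n + 1)]
    for m in range(n - 1, -1, -1):
        V = [max(max(K - S0 * u**k * d**(m - k), 0.0),          # exercise now
                 (q * V[k] + p * V[k + 1]) / (1 + r))           # continue
             for k in range(m + 1)]
    return V[0]
```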
9. Continuous random variables.

We are now going to start working toward continuous times and stocks that can take any positive number as a value, so we need to prepare by extending some of our definitions.

Given any random variable $X \geq 0$, we can approximate it by r.v.'s $X_n$ that are discrete. We let
$$X_n = \sum_{i=0}^{n2^n} \frac{i}{2^n}\, 1_{(i/2^n \leq X < (i+1)/2^n)}.$$
In words, if $X(\omega)$ lies between 0 and $n$, we let $X_n(\omega)$ be the closest value $i/2^n$ that is less than or equal to $X(\omega)$. For $\omega$ where $X(\omega) > n + 2^{-n}$ we set $X_n(\omega) = 0$. Clearly the $X_n$ are discrete, and approximate $X$. In fact, on the set where $X \leq n$, we have that $|X(\omega) - X_n(\omega)| \leq 2^{-n}$.
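As a concrete illustration (ours, not in the notes), here is the map $x \mapsto X_n(\omega)$ written out in Python, together with a Monte Carlo check that $EX_n$ increases toward $EX$ for an exponential random variable:

```python
import math, random

def X_n(x, n):
    """Dyadic approximation: the largest i/2^n <= x if 0 <= x < n + 2^{-n}, else 0."""
    if 0.0 <= x < n + 2.0**(-n):
        return math.floor(x * 2**n) / 2**n
    return 0.0

random.seed(0)
sample = [random.expovariate(1.0) for _ in range(100000)]   # EX = 1
for n in (1, 2, 4, 8):
    print(n, sum(X_n(x, n) for x in sample) / len(sample))  # increases toward ~1
```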
For reasonable $X$ we are going to define $EX = \lim EX_n$. Since the $X_n$ increase with $n$, the limit must exist, although it could be $+\infty$. If $X$ is not necessarily nonnegative, we define $EX = EX^+ - EX^-$, provided at least one of $EX^+$ and $EX^-$ is finite. Here $X^+ = \max(X, 0)$ and $X^- = \max(-X, 0)$.
There are some things one wants to prove, but all this has been worked out in measure theory and the theory of the Lebesgue integral; see Note 1. Let us confine ourselves here to showing this definition is the same as the usual one when $X$ has a density.

Recall $X$ has a density $f_X$ if
$$P(X \in [a,b]) = \int_a^b f_X(x)\,dx$$
for all $a$ and $b$. In this case
$$EX = \int_{-\infty}^\infty x f_X(x)\,dx$$
provided $\int_{-\infty}^\infty |x| f_X(x)\,dx < \infty$. With our definition of $X_n$ we have
$$P(X_n = i/2^n) = P(X \in [i/2^n, (i+1)/2^n)) = \int_{i/2^n}^{(i+1)/2^n} f_X(x)\,dx.$$
Then
$$EX_n = \sum_i \frac{i}{2^n}\,P(X_n = i/2^n) = \sum_i \int_{i/2^n}^{(i+1)/2^n} \frac{i}{2^n}\,f_X(x)\,dx.$$
Since $x$ differs from $i/2^n$ by at most $1/2^n$ when $x \in [i/2^n, (i+1)/2^n)$, this will tend to $\int x f_X(x)\,dx$, unless the contribution to the integral for $|x| \geq n$ does not go to 0 as $n \to \infty$. As long as $\int |x| f_X(x)\,dx < \infty$, one can show that this contribution does indeed go to 0.
We also need an extension of the definition of conditional probability. A r.v. is $\mathcal{G}$ measurable if $(X > a) \in \mathcal{G}$ for every $a$. How do we define $E[Z \mid \mathcal{G}]$ when $\mathcal{G}$ is not generated by a countable collection of disjoint sets?

Again, there is a completely worked out theory that holds in all cases; see Note 2. Let us give an equivalent definition that works except for a very few cases. Let us suppose that for each $n$ the $\sigma$-field $\mathcal{G}_n$ is finitely generated. This means that $\mathcal{G}_n$ is generated by finitely many disjoint sets $B_{n1}, \ldots, B_{nm_n}$. So for each $n$, the number of $B_{ni}$ is finite but arbitrary, the $B_{ni}$ are disjoint, and their union is $\Omega$. Suppose also that $\mathcal{G}_1 \subset \mathcal{G}_2 \subset \cdots$. Now $\bigcup_n \mathcal{G}_n$ will not in general be a $\sigma$-field, but suppose $\mathcal{G}$ is the smallest $\sigma$-field that contains all the $\mathcal{G}_n$. Finally, define $P(A \mid \mathcal{G}) = \lim P(A \mid \mathcal{G}_n)$.

This is a fairly general set-up. For example, let $\Omega$ be the real line and let $\mathcal{G}_n$ be generated by the sets $(-\infty, -n)$, $[n, \infty)$ and $[i/2^n, (i+1)/2^n)$. Then $\mathcal{G}$ will contain every interval that is closed on the left and open on the right, hence $\mathcal{G}$ must be the $\sigma$-field that one works with when one talks about Lebesgue measure on the line.
The question that one might ask is: how does one know the limit exists? Since the $\mathcal{G}_n$ increase, we know by Proposition 4.3 that $M_n = P(A \mid \mathcal{G}_n)$ is a martingale with respect to the $\mathcal{G}_n$. It is certainly bounded above by 1 and bounded below by 0, so by the martingale convergence theorem, it must have a limit as $n \to \infty$.

Once one has a definition of conditional probability, one defines conditional expectation by what one expects. If $X$ is discrete, one can write $X$ as $\sum_j a_j 1_{A_j}$ and then one defines
$$E[X \mid \mathcal{G}] = \sum_j a_j\,P(A_j \mid \mathcal{G}).$$
If $X$ is not discrete, one writes $X = X^+ - X^-$, approximates $X^+$ by discrete random variables and takes a limit, and similarly for $X^-$. One has to worry about convergence, but everything does go through.

With this extended definition of conditional expectation, do all the properties of Section 2 hold? The answer is yes. See Note 2 again.

With continuous random variables, we need to be more cautious about what we mean when we say two random variables are equal. We say $X = Y$ almost surely, abbreviated "a.s.", if
$$P(\omega : X(\omega) \neq Y(\omega)) = 0.$$
So $X = Y$ except for a set of probability 0. The a.s. terminology is used other places as well: $X_n \to Y$ a.s. means that except for a set of $\omega$'s of probability zero, $X_n(\omega) \to Y(\omega)$.
Note 1. The best way to define expected value is via the theory of the Lebesgue integral. A probability $P$ is a measure that has total mass 1. So we define
$$EX = \int_\Omega X(\omega)\,P(d\omega).$$
To recall how the definition goes, we say $X$ is simple if $X(\omega) = \sum_{i=1}^m a_i 1_{A_i}(\omega)$ with each $a_i \geq 0$, and for a simple $X$ we define
$$EX = \sum_{i=1}^m a_i\,P(A_i).$$
If $X$ is nonnegative, we define
$$EX = \sup\{EY : Y \text{ simple},\ Y \leq X\}.$$
Finally, provided at least one of $EX^+$ and $EX^-$ is finite, we define
$$EX = EX^+ - EX^-.$$
This is the same definition as described above.
Note 2. The Radon-Nikodym theorem from measure theory says that if $Q$ and $P$ are two finite measures on $(\Omega, \mathcal{G})$ and $Q(A) = 0$ whenever $P(A) = 0$ and $A \in \mathcal{G}$, then there exists an integrable function $Y$ that is $\mathcal{G}$-measurable such that $Q(A) = \int_A Y\,dP$ for every measurable set $A$.

Let us apply the Radon-Nikodym theorem to the following situation. Suppose $(\Omega, \mathcal{F}, P)$ is a probability space and $X \geq 0$ is integrable: $EX < \infty$. Suppose $\mathcal{G} \subset \mathcal{F}$. Define two new probabilities on $\mathcal{G}$ as follows. Let $P' = P|_{\mathcal{G}}$, that is, $P'(A) = P(A)$ if $A \in \mathcal{G}$ and $P'(A)$ is not defined if $A \in \mathcal{F} \setminus \mathcal{G}$. Define $Q$ by $Q(A) = \int_A X\,dP = E[X; A]$ if $A \in \mathcal{G}$. One can show (using the monotone convergence theorem from measure theory) that $Q$ is a finite measure on $\mathcal{G}$. (One can also use this definition to define $Q(A)$ for $A \in \mathcal{F}$, but we only want to define $Q$ on $\mathcal{G}$, as we will see in a moment.) So $Q$ and $P'$ are two finite measures on $(\Omega, \mathcal{G})$. If $A \in \mathcal{G}$ and $P'(A) = 0$, then $P(A) = 0$ and so it follows that $Q(A) = 0$. By the Radon-Nikodym theorem there exists an integrable random variable $Y$ such that $Y$ is $\mathcal{G}$ measurable (this is why we worried about which $\sigma$-field we were working with) and
$$Q(A) = \int_A Y\,dP'$$
if $A \in \mathcal{G}$. Note

(a) $Y$ is $\mathcal{G}$ measurable, and

(b) if $A \in \mathcal{G}$,
$$E[Y; A] = E[X; A]$$
because
$$E[Y; A] = E[Y 1_A] = \int_A Y\,dP = \int_A Y\,dP' = Q(A) = \int_A X\,dP = E[X 1_A] = E[X; A].$$
We define $E[X \mid \mathcal{G}]$ to be the random variable $Y$. If $X$ is integrable but not necessarily nonnegative, then $X^+$ and $X^-$ will be integrable and we define
$$E[X \mid \mathcal{G}] = E[X^+ \mid \mathcal{G}] - E[X^- \mid \mathcal{G}].$$
We define
$$P(B \mid \mathcal{G}) = E[1_B \mid \mathcal{G}]$$
if $B \in \mathcal{F}$.

Let us show that there is only one r.v., up to almost sure equivalence, that satisfies (a) and (b) above. If $Y$ and $Z$ are $\mathcal{G}$ measurable, and $E[Y; A] = E[X; A] = E[Z; A]$ for $A \in \mathcal{G}$, then the set $A_n = (Y > Z + \frac{1}{n})$ will be in $\mathcal{G}$, and so
$$E[Z; A_n] + \frac{1}{n}P(A_n) = E[Z + \tfrac{1}{n}; A_n] \leq E[Y; A_n] = E[Z; A_n].$$
Consequently $P(A_n) = 0$. This is true for each positive integer $n$, so $P(Y > Z) = 0$. By symmetry, $P(Z > Y) = 0$, and therefore $P(Y \neq Z) = 0$, as we wished.

If one checks the proofs of Propositions 2.3, 2.4, and 2.5, one sees that only properties (a) and (b) above were used. So the propositions hold for the new definition of conditional expectation as well.

In the case where $\mathcal{G}$ is finitely or countably generated, (a) and (b) hold under both the new and old definitions. By the uniqueness result, the new and old definitions agree.
10. Stochastic processes.

We will be talking about stochastic processes. Previously we discussed sequences $S_1, S_2, \ldots$ of r.v.'s. Now we want to talk about processes $Y_t$ for $t \geq 0$. For example, we can think of $S_t$ as being the price of a stock at time $t$. Any nonnegative time $t$ is allowed.

We typically let $\mathcal{F}_t$ be the smallest $\sigma$-field with respect to which $Y_s$ is measurable for all $s \leq t$. So $\mathcal{F}_t = \sigma(Y_s : s \leq t)$. As you might imagine, there are a few technicalities one has to worry about. We will try to avoid thinking about them as much as possible, but see Note 1.

We call a collection of $\sigma$-fields $\mathcal{F}_t$ with $\mathcal{F}_s \subset \mathcal{F}_t$ if $s < t$ a filtration. We say the filtration satisfies the "usual conditions" if the $\mathcal{F}_t$ are right continuous and complete (see Note 1); all the filtrations we consider will satisfy the usual conditions.

We say a stochastic process has continuous paths if the following holds. For each $\omega$, the map $t \to Y_t(\omega)$ defines a function from $[0, \infty)$ to $\mathbb{R}$. If this function is a continuous function for all $\omega$ except for a set of probability zero, we say $Y_t$ has continuous paths.

Definition 10.1. A mapping $\tau : \Omega \to [0, \infty)$ is a stopping time if for each $t$ we have
$$(\tau \leq t) \in \mathcal{F}_t.$$
Typically, $\tau$ will be a continuous random variable and $P(\tau = t) = 0$ for each $t$, which is why we need a definition just a bit different from the discrete case.

Since $(\tau < t) = \bigcup_{n=1}^\infty (\tau \leq t - \frac{1}{n})$ and $(\tau \leq t - \frac{1}{n}) \in \mathcal{F}_{t - \frac{1}{n}} \subset \mathcal{F}_t$, for a stopping time $\tau$ we have $(\tau < t) \in \mathcal{F}_t$ for all $t$.

Conversely, suppose $\tau$ is a nonnegative r.v. for which $(\tau < t) \in \mathcal{F}_t$ for all $t$. We claim $\tau$ is a stopping time. The proof is easy, but we need the right continuity of the $\mathcal{F}_t$ here, so we put the proof in Note 2.

A continuous time martingale (or submartingale) is what one expects: each $M_t$ is integrable, each $M_t$ is $\mathcal{F}_t$ measurable, and if $s < t$, then
$$E[M_t \mid \mathcal{F}_s] = M_s$$
(with $=$ replaced by $\geq$ for a submartingale). Here we are saying the left hand side and the right hand side are equal almost surely; we will usually not write the "a.s." since almost all of our equalities for random variables hold only almost surely.

The analogues of Doob's theorems go through. Note 3 has the proofs.
Note 1. For technical reasons, one typically defines $\mathcal{F}_t$ as follows. Let $\mathcal{F}_t^0 = \sigma(Y_s : s \leq t)$. This is what we referred to as $\mathcal{F}_t$ above. Next add to $\mathcal{F}_t^0$ all sets $N$ for which $P(N) = 0$. Such sets are called null sets, and since they have probability 0, they don't affect anything. In fact, one wants to add all sets $N$ that we think of as being null sets, even though they might not be measurable. To be more precise, we say $N$ is a null set if $\inf\{P(A) : A \in \mathcal{F},\ N \subset A\} = 0$. Recall we are starting with a $\sigma$-field $\mathcal{F}$ and all the $\mathcal{F}_t^0$'s are contained in $\mathcal{F}$. Let $\mathcal{F}_t^{00}$ be the $\sigma$-field generated by $\mathcal{F}_t^0$ and all null sets $N$, that is, the smallest $\sigma$-field containing $\mathcal{F}_t^0$ and every null set. In measure theory terminology, what we have done is to say $\mathcal{F}_t^{00}$ is the completion of $\mathcal{F}_t^0$.

Lastly, we want to make our $\sigma$-fields right continuous. We set $\mathcal{F}_t = \bigcap_{\varepsilon > 0} \mathcal{F}_{t+\varepsilon}^{00}$. Although the union of $\sigma$-fields is not necessarily a $\sigma$-field, the intersection of $\sigma$-fields is. $\mathcal{F}_t$ contains $\mathcal{F}_t^{00}$ but might possibly contain more besides. An example of an event that is in $\mathcal{F}_t$ but that may not be in $\mathcal{F}_t^{00}$ is
$$A = \{\omega : \lim_{n \to \infty} Y_{t + \frac{1}{n}}(\omega) \geq 0\}.$$
$A \in \mathcal{F}_{t + \frac{1}{m}}^{00}$ for each $m$, so it is in $\mathcal{F}_t$. There is no reason it needs to be in $\mathcal{F}_t^{00}$ if $Y$ is not necessarily continuous at $t$. It is easy to see that $\bigcap_{\varepsilon > 0} \mathcal{F}_{t + \varepsilon} = \mathcal{F}_t$, which is what we mean when we say $\mathcal{F}_t$ is right continuous.

When talking about a stochastic process $Y_t$, there are various types of measurability one can consider. Saying $Y_t$ is adapted to $\mathcal{F}_t$ means $Y_t$ is $\mathcal{F}_t$ measurable for each $t$. However, since $Y_t$ is really a function of two variables, $t$ and $\omega$, there are other notions of measurability that come into play. We will be considering stochastic processes that have continuous paths or that are predictable (the definition will be given later), so these various types of measurability will not be an issue for us.
Note 2. Suppose $(\tau < t) \in \mathcal{F}_t$ for all $t$. Then for each positive integer $n_0$,
$$(\tau \leq t) = \bigcap_{n = n_0}^\infty \big(\tau < t + \tfrac{1}{n}\big).$$
For $n \geq n_0$ we have $(\tau < t + \frac{1}{n}) \in \mathcal{F}_{t + \frac{1}{n}} \subset \mathcal{F}_{t + \frac{1}{n_0}}$. Therefore $(\tau \leq t) \in \mathcal{F}_{t + \frac{1}{n_0}}$ for each $n_0$. Hence the set is in the intersection:
$$\bigcap_{n_0 > 1} \mathcal{F}_{t + \frac{1}{n_0}} \subset \bigcap_{\varepsilon > 0} \mathcal{F}_{t + \varepsilon} = \mathcal{F}_t.$$
Note 3. We want to prove the analogues of Theorems 5.3 and 5.4. The proofs of Doob's inequalities are simpler. We will only need the analogue of Theorem 5.4(b).

Theorem 10.2. Suppose $M_t$ is a martingale with continuous paths and $EM_t^2 < \infty$ for all $t$. Then for each $t_0$,
$$E[(\sup_{s \leq t_0} M_s)^2] \leq 4E[|M_{t_0}|^2].$$

Proof. By the definition of martingale in continuous time, $N_k$ is a martingale in discrete time with respect to $\mathcal{G}_k$ when we set $N_k = M_{kt_0/2^n}$ and $\mathcal{G}_k = \mathcal{F}_{kt_0/2^n}$. By Theorem 5.4(b),
$$E[\max_{0 \leq k \leq 2^n} M_{kt_0/2^n}^2] = E[\max_{0 \leq k \leq 2^n} N_k^2] \leq 4EN_{2^n}^2 = 4EM_{t_0}^2.$$
(Recall $(\max_k a_k)^2 = \max_k a_k^2$ if all the $a_k \geq 0$.)

Now let $n \to \infty$. Since $M_t$ has continuous paths, $\max_{0 \leq k \leq 2^n} M_{kt_0/2^n}^2$ increases up to $\sup_{s \leq t_0} M_s^2$. Our result follows from the monotone convergence theorem from measure theory (see Note 4).
We now prove the analogue of Theorem 5.3. The proof is simpler if we assume that $EM_t^2$ is finite; the result is still true without this assumption.

Theorem 10.3. Suppose $M_t$ is a martingale with continuous paths, $EM_t^2 < \infty$ for all $t$, and $\tau$ is a stopping time bounded almost surely by $t_0$. Then $EM_\tau = EM_{t_0}$.

Proof. We approximate $\tau$ by stopping times taking only finitely many values. For $n > 0$ define
$$\tau_n(\omega) = \inf\{kt_0/2^n : \tau(\omega) < kt_0/2^n\}.$$
$\tau_n$ takes only the values $kt_0/2^n$ for some $k \leq 2^n$. The event $(\tau_n \leq jt_0/2^n)$ is equal to $(\tau < jt_0/2^n)$, which is in $\mathcal{F}_{jt_0/2^n}$ since $\tau$ is a stopping time. So $(\tau_n \leq s) \in \mathcal{F}_s$ if $s$ is of the form $jt_0/2^n$ for some $j$. A moment's thought, using the fact that $\tau_n$ only takes values of the form $kt_0/2^n$, shows that $\tau_n$ is a stopping time.

It is clear that $\tau_n \downarrow \tau$ for every $\omega$. Since $M_t$ has continuous paths, $M_{\tau_n} \to M_\tau$ a.s.

Let $N_k$ and $\mathcal{G}_k$ be as in the proof of Theorem 10.2. Let $\sigma_n = k$ if $\tau_n = kt_0/2^n$. By Theorem 5.3,
$$EN_{\sigma_n} = EN_{2^n},$$
which is the same as saying
$$EM_{\tau_n} = EM_{t_0}.$$

To complete the proof, we need to show that $EM_{\tau_n}$ converges to $EM_\tau$. This is almost obvious, because we already observed that $M_{\tau_n} \to M_\tau$ a.s. Without the assumption that $EM_t^2 < \infty$ for all $t$, this is actually quite a bit of work, but with the assumption it is not too bad.

Either $|M_{\tau_n} - M_\tau|$ is less than or equal to 1 or greater than 1. If it is greater than 1, it is less than $|M_{\tau_n} - M_\tau|^2$. So in either case,
$$|M_{\tau_n} - M_\tau| \leq 1 + |M_{\tau_n} - M_\tau|^2. \tag{10.1}$$
Because both $|M_{\tau_n}|$ and $|M_\tau|$ are bounded by $\sup_{s \leq t_0} |M_s|$, the right hand side of (10.1) is bounded by $1 + 4\sup_{s \leq t_0} |M_s|^2$, which is integrable by Theorem 10.2. $|M_{\tau_n} - M_\tau| \to 0$, and so by the dominated convergence theorem from measure theory (Note 4),
$$E|M_{\tau_n} - M_\tau| \to 0.$$
Finally,
$$|EM_{\tau_n} - EM_\tau| = |E(M_{\tau_n} - M_\tau)| \leq E|M_{\tau_n} - M_\tau| \to 0.$$
Note 4. The dominated convergence theorem says that if $X_n \to X$ a.s. and $|X_n| \leq Y$ a.s. for each $n$, where $EY < \infty$, then $EX_n \to EX$.

The monotone convergence theorem says that if $X_n \geq 0$ for each $n$, $X_n \leq X_{n+1}$ for each $n$, and $X_n \uparrow X$, then $EX_n \to EX$.
11. Brownian motion.

First, let us review a few facts about normal random variables. We say $X$ is a normal random variable with mean $a$ and variance $b^2$ if
$$P(c \leq X \leq d) = \int_c^d \frac{1}{\sqrt{2\pi b^2}}\,e^{-(y-a)^2/2b^2}\,dy,$$
and we will abbreviate this by saying $X$ is $\mathcal{N}(a, b^2)$. If $X$ is $\mathcal{N}(a, b^2)$, then $EX = a$, $\operatorname{Var} X = b^2$, and $E|X|^p$ is finite for every positive integer $p$. Moreover
$$Ee^{tX} = e^{at}e^{t^2b^2/2}.$$

Let $S_n$ be a simple symmetric random walk. This means that $Y_k = S_k - S_{k-1}$ equals $+1$ with probability $\frac{1}{2}$, equals $-1$ with probability $\frac{1}{2}$, and is independent of $Y_j$ for $j < k$. We notice that $ES_n = 0$ while $ES_n^2 = \sum_{i=1}^n EY_i^2 + \sum_{i \neq j} EY_iY_j = n$, using the fact that $E[Y_iY_j] = (EY_i)(EY_j) = 0$ for $i \neq j$.

Define $X_t^n = S_{nt}/\sqrt{n}$ if $nt$ is an integer and by linear interpolation for other $t$. If $nt$ is an integer, $EX_t^n = 0$ and $E(X_t^n)^2 = t$. It turns out $X_t^n$ does not converge for any $\omega$.
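To get a feeling for the processes $X^n_t$, one can simulate them; below is a sketch (ours, not part of the notes). For large $n$ the sample paths look like the rough Brownian paths described later, even though, as just noted, the sequence does not converge pathwise:

```python
import numpy as np

def scaled_walk(n, T=1.0, rng=None):
    """X^n_t = S_{nt}/sqrt(n) evaluated on the grid t = k/n, 0 <= k <= nT."""
    rng = rng or np.random.default_rng()
    steps = rng.choice([-1.0, 1.0], size=int(n * T))   # the increments Y_k
    S = np.concatenate(([0.0], np.cumsum(steps)))      # the walk S_0, ..., S_{nT}
    t = np.arange(len(S)) / n
    return t, S / np.sqrt(n)

t, X = scaled_walk(10000)   # one path; X[k] is X^n at time t = k/n
```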
However there is another kind of convergence, called weak convergence, that takes place. There exists a process $Z_t$ such that for each $k$, each $t_1 < t_2 < \cdots < t_k$, and each $a_1 < b_1, a_2 < b_2, \ldots, a_k < b_k$, we have

(1) The paths of $Z_t$ are continuous as a function of $t$.

(2) $P(X_{t_1}^n \in [a_1, b_1], \ldots, X_{t_k}^n \in [a_k, b_k]) \to P(Z_{t_1} \in [a_1, b_1], \ldots, Z_{t_k} \in [a_k, b_k])$.

See Note 1 for more discussion of weak convergence.

The limit $Z_t$ is called a Brownian motion starting at 0. It has the following properties.

(1) $EZ_t = 0$.

(2) $EZ_t^2 = t$.

(3) $Z_t - Z_s$ is independent of $\mathcal{F}_s = \sigma(Z_r, r \leq s)$.

(4) $Z_t - Z_s$ has the distribution of a normal random variable with mean 0 and variance $t - s$. This means
$$P(Z_t - Z_s \in [a, b]) = \int_a^b \frac{1}{\sqrt{2\pi(t-s)}}\,e^{-y^2/2(t-s)}\,dy.$$
(This result follows from the central limit theorem.)

(5) The map $t \to Z_t(\omega)$ is continuous for almost all $\omega$.

See Note 2 for a few remarks on this definition.

It is common to use $B_t$ ($B$ for Brownian) or $W_t$ ($W$ for Wiener, who was the first person to prove rigorously that Brownian motion exists). We will most often use $W_t$.
We will use Brownian motion extensively and develop some of its properties. As one might imagine for a limit of a simple random walk, the paths of Brownian motion have a huge number of oscillations. It turns out that the function $t \to W_t(\omega)$ is continuous, but it is not differentiable; in fact one cannot define a derivative at any value of $t$. Another bizarre property: if one looks at the set of times at which $W_t(\omega)$ is equal to 0, this is a set which is uncountable, but contains no intervals. There is nothing special about 0; the same is true for the set of times at which $W_t(\omega)$ is equal to $a$, for any level $a$.

In what follows, one of the crucial properties of a Brownian motion is that it is a martingale with continuous paths. Let us prove this.
Proposition 11.1. $W_t$ is a martingale with respect to $\mathcal{F}_t$ and $W_t$ has continuous paths.

Proof. As part of the definition of a Brownian motion, $W_t$ has continuous paths. $W_t$ is $\mathcal{F}_t$ measurable by the definition of $\mathcal{F}_t$. Since the distribution of $W_t$ is that of a normal random variable with mean 0 and variance $t$, $E|W_t| < \infty$ for all $t$. (In fact, $E|W_t|^n < \infty$ for all $n$.)

The key property is to show $E[W_t \mid \mathcal{F}_s] = W_s$. We have
$$E[W_t \mid \mathcal{F}_s] = E[W_t - W_s \mid \mathcal{F}_s] + E[W_s \mid \mathcal{F}_s] = E[W_t - W_s] + W_s = W_s.$$
We used here the facts that $W_t - W_s$ is independent of $\mathcal{F}_s$ and that $E[W_t - W_s] = 0$ because $W_t$ and $W_s$ have mean 0.
We will also need

Proposition 11.2. $W_t^2 - t$ is a martingale with continuous paths with respect to $\mathcal{F}_t$.

Proof. That $W_t^2 - t$ is integrable and is $\mathcal{F}_t$ measurable is as in the above proof. We calculate
$$\begin{aligned} E[W_t^2 - t \mid \mathcal{F}_s] &= E[((W_t - W_s) + W_s)^2 \mid \mathcal{F}_s] - t \\ &= E[(W_t - W_s)^2 \mid \mathcal{F}_s] + 2E[(W_t - W_s)W_s \mid \mathcal{F}_s] + E[W_s^2 \mid \mathcal{F}_s] - t \\ &= E[(W_t - W_s)^2] + 2W_sE[W_t - W_s \mid \mathcal{F}_s] + W_s^2 - t. \end{aligned}$$
We used the facts that $W_s$ is $\mathcal{F}_s$ measurable and that $(W_t - W_s)^2$ is independent of $\mathcal{F}_s$ because $W_t - W_s$ is. The second term on the last line is equal to $W_sE[W_t - W_s] = 0$. The first term, because $W_t - W_s$ is normal with mean 0 and variance $t - s$, is equal to $t - s$. Substituting, the last line is equal to
$$(t - s) + 0 + W_s^2 - t = W_s^2 - s,$$
as required.
Note 1. A sequence of random variables $X_n$ converges weakly to $X$ if $P(a < X_n < b) \to P(a < X < b)$ for all $a, b \in [-\infty, \infty]$ such that $P(X = a) = P(X = b) = 0$; $a$ and $b$ can be infinite. If $X_n$ converges to a normal random variable, then $P(X = a) = P(X = b) = 0$ automatically holds for all $a$ and $b$. This is the type of convergence that takes place in the central limit theorem. It will not be true in general that $X_n$ converges to $X$ almost surely.

For a sequence of random vectors $(X_1^n, \ldots, X_k^n)$ to converge to a random vector $(X_1, \ldots, X_k)$, one can give an analogous definition. But saying that the normalized random walks $X^n(t)$ above converge weakly to $Z_t$ actually says more than (2). A result from probability theory says that $X_n$ converges to $X$ weakly if and only if $E[f(X_n)] \to E[f(X)]$ whenever $f$ is a bounded continuous function on $\mathbb{R}$. We use this to define weak convergence for stochastic processes. Let $C([0, \infty))$ be the collection of all continuous functions from $[0, \infty)$ to the reals. This is a metric space, so the notion of a function from $C([0, \infty))$ to $\mathbb{R}$ being continuous makes sense. We say that the processes $X^n$ converge weakly to the process $Z$, and mean by this that $E[F(X^n)] \to E[F(Z)]$ whenever $F$ is a bounded continuous function on $C([0, \infty))$. One example of such a function $F$ would be $F(f) = \sup_{0 \leq t < \infty} |f(t)|$ if $f \in C([0, \infty))$; another would be $F(f) = \int_0^1 f(t)\,dt$.

The reason one wants to show that $X^n$ converges weakly to $Z$ instead of just showing (2) is that weak convergence can be shown to imply that $Z$ has continuous paths.

Note 2. First of all, there is some redundancy in the definition: one can show that parts of the definition are implied by the remaining parts, but we won't worry about this. Second, we actually want to let $\mathcal{F}_t$ be the completion of $\sigma(Z_s : s \leq t)$, that is, we throw in all the null sets into each $\mathcal{F}_t$. One can prove that the resulting $\mathcal{F}_t$ are right continuous, and hence the filtration $\mathcal{F}_t$ satisfies the usual conditions. Finally, the "almost all" in (5) means that $t \to Z_t(\omega)$ is continuous for all $\omega$, except for a set of $\omega$ of probability zero.
12. Stochastic integrals.

If one wants to consider the (deterministic) integral $\int_0^t f(s)\,dg(s)$, where $f$ and $g$ are continuous and $g$ is continuously differentiable, we can define it analogously to the usual Riemann integral as the limit of Riemann sums $\sum_{i=1}^n f(s_i)[g(s_i) - g(s_{i-1})]$, where $s_1 < s_2 < \cdots < s_n$ is a partition of $[0, t]$. This is known as the Riemann-Stieltjes integral. One can show (using the mean value theorem, for example) that
$$\int_0^t f(s)\,dg(s) = \int_0^t f(s)g'(s)\,ds.$$
If we were to take $f(s) = 1_{[0,a]}(s)$ (which is not continuous, but that is a minor matter here), one would expect the following:
$$\int_0^t 1_{[0,a]}(s)\,dg(s) = \int_0^t 1_{[0,a]}(s)g'(s)\,ds = \int_0^a g'(s)\,ds = g(a) - g(0).$$
Note that although we use the fact that $g$ is differentiable in the intermediate stages, the first and last terms make sense for any $g$.

We now want to replace $g$ by a Brownian path and $f$ by a random integrand. The expression $\int f(s)\,dW(s)$ does not make sense as a Riemann-Stieltjes integral because it is a fact that $W(s)$ is not differentiable as a function of $s$. We need to define the expression by some other means. We will show that it can be defined as the limit in $L^2$ of Riemann sums. The resulting integral is called a stochastic integral.
Let us consider a very special case first. Suppose $f$ is continuous and deterministic (i.e., does not depend on $\omega$). Suppose we take a Riemann sum approximation
$$I_n = \sum_{i=0}^{2^n - 1} f\big(\tfrac{i}{2^n}\big)\big[W\big(\tfrac{i+1}{2^n}\big) - W\big(\tfrac{i}{2^n}\big)\big].$$
Since $W_t$ has zero expectation for each $t$, $EI_n = 0$. Let us calculate the second moment:
$$\begin{aligned} EI_n^2 &= E\Big[\Big(\sum_i f\big(\tfrac{i}{2^n}\big)\big[W\big(\tfrac{i+1}{2^n}\big) - W\big(\tfrac{i}{2^n}\big)\big]\Big)^2\Big] \\ &= E\sum_{i=0}^{2^n - 1} f\big(\tfrac{i}{2^n}\big)^2\big[W\big(\tfrac{i+1}{2^n}\big) - W\big(\tfrac{i}{2^n}\big)\big]^2 \\ &\quad + E\sum_{i \neq j} f\big(\tfrac{i}{2^n}\big)f\big(\tfrac{j}{2^n}\big)\big[W\big(\tfrac{i+1}{2^n}\big) - W\big(\tfrac{i}{2^n}\big)\big]\big[W\big(\tfrac{j+1}{2^n}\big) - W\big(\tfrac{j}{2^n}\big)\big]. \end{aligned} \tag{12.1}$$
Since the second moment of $W(\frac{i+1}{2^n}) - W(\frac{i}{2^n})$ is $1/2^n$, the first sum is
$$\sum_i f\big(\tfrac{i}{2^n}\big)^2\,\frac{1}{2^n} \to \int_0^1 f(t)^2\,dt.$$
Using the independence and the fact that $W_t$ has mean zero,
$$E\big[W\big(\tfrac{i+1}{2^n}\big) - W\big(\tfrac{i}{2^n}\big)\big]\big[W\big(\tfrac{j+1}{2^n}\big) - W\big(\tfrac{j}{2^n}\big)\big] = E\big[W\big(\tfrac{i+1}{2^n}\big) - W\big(\tfrac{i}{2^n}\big)\big]\,E\big[W\big(\tfrac{j+1}{2^n}\big) - W\big(\tfrac{j}{2^n}\big)\big] = 0,$$
and so the second sum on the right hand side of (12.1) is zero. This calculation is the key to the stochastic integral.
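The following sketch (ours, not from the notes) carries out this computation numerically: it builds $I_n$ from simulated Brownian increments and checks that the sample mean is near 0 and the sample second moment near $\int_0^1 f(t)^2\,dt$:

```python
import numpy as np

def I_n(f, n, rng):
    """Riemann sum sum_i f(i/2^n)[W((i+1)/2^n) - W(i/2^n)] for one Brownian path."""
    m = 2**n
    dW = rng.normal(0.0, np.sqrt(1.0 / m), size=m)   # independent N(0, 1/2^n) increments
    return np.sum(f(np.arange(m) / m) * dW)

rng = np.random.default_rng(0)
vals = np.array([I_n(lambda t: t, 8, rng) for _ in range(20000)])
print(vals.mean(), (vals**2).mean())   # ~0 and ~1/3 = int_0^1 t^2 dt for f(t) = t
```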
We now turn to the construction. Let $W_t$ be a Brownian motion. We will only consider integrands $H_s$ such that $H_s$ is $\mathcal{F}_s$ measurable for each $s$ (see Note 1). We will construct $\int_0^t H_s\,dW_s$ for all $H$ with
$$E\int_0^t H_s^2\,ds < \infty. \tag{12.2}$$
Before we proceed we will need to define the quadratic variation of a continuous martingale. We will use the following theorem without proof because in our applications we can construct the desired increasing process directly. We often say a process is a continuous process if its paths are continuous, and similarly a continuous martingale is a martingale with continuous paths.

Theorem 12.1. Suppose $M_t$ is a continuous martingale such that $EM_t^2 < \infty$ for all $t$. There exists one and only one increasing process $A_t$ that is adapted to $\mathcal{F}_t$, has continuous paths, and has $A_0 = 0$ such that $M_t^2 - A_t$ is a martingale.

The simplest example of such a martingale is Brownian motion. If $W_t$ is a Brownian motion, we saw in Proposition 11.2 that $W_t^2 - t$ is a martingale. So in this case $A_t = t$ almost surely, for all $t$. Hence $\langle W \rangle_t = t$.

We use the notation $\langle M \rangle_t$ for the increasing process given in Theorem 12.1 and call it the quadratic variation process of $M$. We will see later that in the case of stochastic integrals, where
$$N_t = \int_0^t H_s\,dW_s,$$
it turns out that $\langle N \rangle_t = \int_0^t H_s^2\,ds$.

We will use the following frequently; in fact, these are the only two properties of Brownian motion that play a significant role in the construction.
Lemma 12.1. (a) $E[W_b - W_a \mid \mathcal{F}_a] = 0$.

(b) $E[W_b^2 - W_a^2 \mid \mathcal{F}_a] = E[(W_b - W_a)^2 \mid \mathcal{F}_a] = b - a$.

Proof. (a) This is $E[W_b - W_a] = 0$ by the independence of $W_b - W_a$ from $\mathcal{F}_a$ and the fact that $W_b$ and $W_a$ have mean zero.

(b) $(W_b - W_a)^2$ is independent of $\mathcal{F}_a$, so the conditional expectation is the same as $E[(W_b - W_a)^2]$. Since $W_b - W_a$ is a $\mathcal{N}(0, b-a)$, the second equality in (b) follows.

To prove the first equality in (b), we write
$$\begin{aligned} E[W_b^2 - W_a^2 \mid \mathcal{F}_a] &= E[((W_b - W_a) + W_a)^2 \mid \mathcal{F}_a] - E[W_a^2 \mid \mathcal{F}_a] \\ &= E[(W_b - W_a)^2 \mid \mathcal{F}_a] + 2E[W_a(W_b - W_a) \mid \mathcal{F}_a] + E[W_a^2 \mid \mathcal{F}_a] - E[W_a^2 \mid \mathcal{F}_a] \\ &= E[(W_b - W_a)^2 \mid \mathcal{F}_a] + 2W_aE[W_b - W_a \mid \mathcal{F}_a], \end{aligned}$$
and the first equality follows by applying (a).
We construct the stochastic integral in three steps. We say an integrand $H_s = H_s(\omega)$ is elementary if
$$H_s(\omega) = G(\omega)1_{(a,b]}(s)$$
where $0 \leq a < b$ and $G$ is bounded and $\mathcal{F}_a$ measurable. We say $H$ is simple if it is a finite linear combination of elementary processes, that is,
$$H_s(\omega) = \sum_{i=1}^n G_i(\omega)1_{(a_i, b_i]}(s). \tag{12.3}$$
We first construct the stochastic integral for $H$ elementary; the work here is showing the stochastic integral is a martingale. We next construct the integral for $H$ simple and here the difficulty is calculating the second moment. Finally we consider the case of general $H$.

First step. If $G$ is bounded and $\mathcal{F}_a$ measurable, let $H_s(\omega) = G(\omega)1_{(a,b]}(s)$, and define the stochastic integral to be the process $N_t$, where $N_t = G(W_{t \wedge b} - W_{t \wedge a})$. Compare this to the first paragraph of this section, where we considered Riemann-Stieltjes integrals.
Proposition 12.2. $N_t$ is a continuous martingale, $EN_\infty^2 = E[G^2(b-a)]$ and
$$\langle N \rangle_t = \int_0^t G^2 1_{[a,b]}(s)\,ds.$$

Proof. The continuity is clear. Let us look at $E[N_t \mid \mathcal{F}_s]$. In the case $a < s < t < b$, this is equal to
$$E[G(W_t - W_a) \mid \mathcal{F}_s] = G\,E[W_t - W_a \mid \mathcal{F}_s] = G(W_s - W_a) = N_s.$$
In the case $s < a < t < b$, $E[N_t \mid \mathcal{F}_s]$ is equal to
$$E[G(W_t - W_a) \mid \mathcal{F}_s] = E\big[G\,E[W_t - W_a \mid \mathcal{F}_a] \mid \mathcal{F}_s\big] = 0 = N_s.$$
The other possibilities are $s < t < a < b$, $a < b < s < t$, $s < a < b < t$, and $a < s < b < t$; these are done similarly.

For $EN_\infty^2$, we have, using Lemma 12.1(b),
$$EN_\infty^2 = E\big[G^2E[(W_b - W_a)^2 \mid \mathcal{F}_a]\big] = E\big[G^2E[W_b^2 - W_a^2 \mid \mathcal{F}_a]\big] = E[G^2(b-a)].$$
For $\langle N \rangle_t$, we need to show
$$E[G^2(W_{t \wedge b} - W_{t \wedge a})^2 - G^2(t \wedge b - t \wedge a) \mid \mathcal{F}_s] = G^2(W_{s \wedge b} - W_{s \wedge a})^2 - G^2(s \wedge b - s \wedge a).$$
We do this by checking all six cases for the relative locations of $a$, $b$, $s$, and $t$; we do one of the cases in Note 2.
Second step. Next suppose $H_s$ is simple as in (12.3). In this case define the stochastic integral
$$N_t = \int_0^t H_s\,dW_s = \sum_{i=1}^n G_i(W_{b_i \wedge t} - W_{a_i \wedge t}).$$

Proposition 12.3. $N_t$ is a continuous martingale, $EN_\infty^2 = E\int_0^\infty H_s^2\,ds$, and $\langle N \rangle_t = \int_0^t H_s^2\,ds$.
Proof. We may rewrite $H$ so that the intervals $(a_i, b_i]$ satisfy $a_1 \leq b_1 \leq a_2 \leq b_2 \leq \cdots \leq b_n$. For example, if we had $a_1 < a_2 < b_1 < b_2$, we could write
$$H_s = G_1 1_{(a_1, a_2]} + (G_1 + G_2)1_{(a_2, b_1]} + G_2 1_{(b_1, b_2]},$$
and then if we set $G_1' = G_1$, $G_2' = G_1 + G_2$, $G_3' = G_2$ and $a_1' = a_1$, $b_1' = a_2$, $a_2' = a_2$, $b_2' = b_1$, $a_3' = b_1$, $b_3' = b_2$, we have written $H$ as
$$\sum_{i=1}^3 G_i' 1_{(a_i', b_i']}.$$
So now we have $H$ simple but with the intervals $(a_i', b_i']$ non-overlapping.

Since the sum of martingales is clearly a martingale, $N_t$ is a martingale. The sum of continuous processes will be continuous, so $N_t$ has continuous paths.

We have
$$EN_\infty^2 = E\Big[\sum_i G_i^2(W_{b_i} - W_{a_i})^2\Big] + 2E\Big[\sum_{i<j} G_iG_j(W_{b_i} - W_{a_i})(W_{b_j} - W_{a_j})\Big].$$
The terms in the second sum vanish, because when we condition on $\mathcal{F}_{a_j}$, we have
$$E\big[G_iG_j(W_{b_i} - W_{a_i})\,E[W_{b_j} - W_{a_j} \mid \mathcal{F}_{a_j}]\big] = 0.$$
Taking expectations,
$$E[G_iG_j(W_{b_i} - W_{a_i})(W_{b_j} - W_{a_j})] = 0.$$
For the terms in the first sum, by Lemma 12.1,
$$E[G_i^2(W_{b_i} - W_{a_i})^2] = E\big[G_i^2E[(W_{b_i} - W_{a_i})^2 \mid \mathcal{F}_{a_i}]\big] = E[G_i^2(b_i - a_i)].$$
So
$$EN_\infty^2 = \sum_{i=1}^n E[G_i^2(b_i - a_i)],$$
and this is the same as $E\int_0^\infty H_s^2\,ds$.
Third step. Now suppose $H_s$ is adapted and $E\int_0^\infty H_s^2\,ds < \infty$. Using some results from measure theory (Note 3), we can choose $H_s^n$ simple such that $E\int_0^\infty (H_s^n - H_s)^2\,ds \to 0$. The triangle inequality then implies (see Note 3 again)
$$E\int_0^\infty (H_s^n - H_s^m)^2\,ds \to 0.$$
Define $N_t^n = \int_0^t H_s^n\,dW_s$ using Step 2. By Doob's inequality (Theorem 10.2) we have
$$\begin{aligned} E[\sup_t (N_t^n - N_t^m)^2] &= E\Big[\sup_t \Big(\int_0^t (H_s^n - H_s^m)\,dW_s\Big)^2\Big] \\ &\leq 4E\Big(\int_0^\infty (H_s^n - H_s^m)\,dW_s\Big)^2 \\ &= 4E\int_0^\infty (H_s^n - H_s^m)^2\,ds \to 0. \end{aligned}$$
This should look reminiscent of the definition of Cauchy sequences, and in fact that is what is going on here; Note 3 has the details. In the present context Cauchy sequences converge, and one can show (Note 3) that there exists a process $N_t$ such that
$$E\Big[\Big(\sup_t \Big|\int_0^t H_s^n\,dW_s - N_t\Big|\Big)^2\Big] \to 0.$$
If $H_s^n$ and $H_s^{n'}$ are two sequences converging to $H$, then $E\big(\int_0^t (H_s^n - H_s^{n'})\,dW_s\big)^2 = E\int_0^t (H_s^n - H_s^{n'})^2\,ds \to 0$, so the limit is independent of which sequence $H^n$ we choose. See Note 4 for the proof that $N_t$ is a martingale, $EN_t^2 = E\int_0^t H_s^2\,ds$, and $\langle N \rangle_t = \int_0^t H_s^2\,ds$. Because $\sup_t |\int_0^t H_s^n\,dW_s - N_t| \to 0$ in $L^2$, one can show there exists a subsequence such that the convergence takes place almost surely, and with probability one $N_t$ has continuous paths (Note 5).

We write $N_t = \int_0^t H_s\,dW_s$ and call $N_t$ the stochastic integral of $H$ with respect to $W$.
We discuss some extensions of the definition. First of all, if we replace $W_t$ by a continuous martingale $M_t$ and $H_s$ is adapted with $E\int_0^t H_s^2\,d\langle M \rangle_s < \infty$, we can duplicate everything we just did (see Note 6) with $ds$ replaced by $d\langle M \rangle_s$ and get a stochastic integral. In particular, if $d\langle M \rangle_s = K_s^2\,ds$, we replace $ds$ by $K_s^2\,ds$.

There are some other extensions of the definition that are not hard. If the random variable $\int_0^\infty H_s^2\,d\langle M \rangle_s$ is finite but without its expectation being finite, we can define the stochastic integral by defining it for $t \leq T_N$ for suitable stopping times $T_N$ and then letting $T_N \to \infty$; look at Note 7.
A process $A_t$ is of bounded variation if the paths of $A_t$ have bounded variation. This means that one can write $A_t = A_t^+ - A_t^-$, where $A_t^+$ and $A_t^-$ have paths that are increasing. $|A|_t$ is then defined to be $A_t^+ + A_t^-$. A semimartingale is the sum of a martingale and a process of bounded variation. If $\int_0^\infty H_s^2\,d\langle M \rangle_s + \int_0^\infty |H_s|\,|dA_s| < \infty$ and $X_t = M_t + A_t$, we define
$$\int_0^t H_s\,dX_s = \int_0^t H_s\,dM_s + \int_0^t H_s\,dA_s,$$
where the first integral on the right is a stochastic integral and the second is a Riemann-Stieltjes or Lebesgue-Stieltjes integral. For a semimartingale, we define $\langle X \rangle_t = \langle M \rangle_t$. Note 7 has more on this.
Given two semimartingales $X$ and $Y$ we define $\langle X, Y \rangle_t$ by what is known as polarization:
$$\langle X, Y \rangle_t = \tfrac{1}{2}\big[\langle X + Y \rangle_t - \langle X \rangle_t - \langle Y \rangle_t\big].$$
As an example, if $X_t = \int_0^t H_s\,dW_s$ and $Y_t = \int_0^t K_s\,dW_s$, then $(X + Y)_t = \int_0^t (H_s + K_s)\,dW_s$, so
$$\langle X + Y \rangle_t = \int_0^t (H_s + K_s)^2\,ds = \int_0^t H_s^2\,ds + \int_0^t 2H_sK_s\,ds + \int_0^t K_s^2\,ds.$$
Since $\langle X \rangle_t = \int_0^t H_s^2\,ds$, with a similar formula for $\langle Y \rangle_t$, we conclude
$$\langle X, Y \rangle_t = \int_0^t H_sK_s\,ds.$$
The following holds, which is what one would expect.

Proposition 12.4. Suppose $K_s$ is adapted to $\mathcal{F}_s$ and $E\int_0^\infty K_s^2\,ds < \infty$. Let $N_t = \int_0^t K_s\,dW_s$. Suppose $H_s$ is adapted and $E\int_0^\infty H_s^2\,d\langle N \rangle_s < \infty$. Then $E\int_0^\infty H_s^2K_s^2\,ds < \infty$ and
$$\int_0^t H_s\,dN_s = \int_0^t H_sK_s\,dW_s.$$
The argument for the proof is given in Note 8.
What does a stochastic integral mean? If one thinks of the derivative of $Z_t$ as being a white noise, then $\int_0^t H_s\,dZ_s$ is like a filter that increases or decreases the volume by a factor $H_s$.

For us, an interpretation is that $Z_t$ represents a stock price. Then $\int_0^t H_s\,dZ_s$ represents our profit (or loss) if we hold $H_s$ shares at time $s$. This can be seen most easily if $H_s = G1_{[a,b]}$: we buy $G(\omega)$ shares at time $a$ and sell them at time $b$, and the stochastic integral represents our profit or loss.

Since we are in continuous time, we are allowed to buy and sell continuously and instantaneously. What we are not allowed to do is look into the future to make our decisions, which is where the requirement that $H_s$ be adapted comes in.
Note 1. Let us be more precise concerning the measurability of $H$ that is needed. $H$ is a stochastic process, so it can be viewed as a map from $[0, \infty) \times \Omega$ to $\mathbb{R}$ by $H : (s, \omega) \to H_s(\omega)$. We define a $\sigma$-field $\mathcal{P}$ on $[0, \infty) \times \Omega$ as follows. Consider the collection of processes of the form $G(\omega)1_{(a,b]}(s)$ where $G$ is bounded and $\mathcal{F}_a$ measurable for some $a < b$. Define $\mathcal{P}$ to be the smallest $\sigma$-field with respect to which every process of this form is measurable. $\mathcal{P}$ is called the predictable or previsible $\sigma$-field, and if a process $H$ is measurable with respect to $\mathcal{P}$, then the process is called predictable. What we require of our integrands $H$ is that they be predictable processes.

If $H_s$ has continuous paths, then approximating continuous functions by step functions shows that such an $H$ can be approximated by linear combinations of processes of the form $G(\omega)1_{(a,b]}(s)$. So continuous processes are predictable. The majority of the integrands we will consider will be continuous.

If one is slightly more careful, one sees that processes whose paths are functions which are continuous from the left at each time point are also predictable. This gives an indication of where the name comes from. If $H_s$ has paths which are left continuous, then $H_t = \lim_{n \to \infty} H_{t - \frac{1}{n}}$, and we can predict the value of $H_t$ from the values at times that come before $t$. If $H_t$ is only right continuous and a path has a jump at time $t$, this is not possible.
Note 2. Let us consider the case $a < s < t < b$; again similar arguments take care of the other five cases. We need to show
$$E[G^2(W_t - W_a)^2 - G^2(t - a) \mid \mathcal{F}_s] = G^2(W_s - W_a)^2 - G^2(s - a). \tag{12.4}$$
The left hand side is equal to $G^2E[(W_t - W_a)^2 - (t - a) \mid \mathcal{F}_s]$. We write this as
$$\begin{aligned} G^2&E[((W_t - W_s) + (W_s - W_a))^2 - (t - a) \mid \mathcal{F}_s] \\ &= G^2\big(E[(W_t - W_s)^2 \mid \mathcal{F}_s] + 2E[(W_t - W_s)(W_s - W_a) \mid \mathcal{F}_s] + E[(W_s - W_a)^2 \mid \mathcal{F}_s] - (t - a)\big) \\ &= G^2\big(E[(W_t - W_s)^2] + 2(W_s - W_a)E[W_t - W_s \mid \mathcal{F}_s] + (W_s - W_a)^2 - (t - a)\big) \\ &= G^2\big((t - s) + 0 + (W_s - W_a)^2 - (t - a)\big). \end{aligned}$$
The last expression is equal to the right hand side of (12.4).
Note 3. A definition from measure theory says that if $\mu$ is a measure, $\|f\|_2$, the $L^2$ norm of $f$ with respect to the measure $\mu$, is defined as $\big(\int f(x)^2\,\mu(dx)\big)^{1/2}$. The space $L^2$ is defined to be the set of functions $f$ for which $\|f\|_2 < \infty$. (A technical proviso: one has to identify as equal functions which differ only on a set of measure 0.) If one defines a distance between two functions $f$ and $g$ by $d(f,g) = \|f - g\|_2$, this is a metric on $L^2$, and a theorem from measure theory says that $L^2$ is complete with respect to this metric. Another theorem from measure theory says that the collection of simple functions (functions of the form $\sum_{i=1}^n c_i 1_{A_i}$) is dense in $L^2$ with respect to the metric.

Let us define a norm on stochastic processes; this is essentially an $L^2$ norm. Define
$$\|N\| = \Big(E\sup_{0 \leq t < \infty} N_t^2\Big)^{1/2}.$$
One can show that this is a norm, and hence that the triangle inequality holds. Moreover, the space of processes $N$ such that $\|N\| < \infty$ is complete with respect to this norm. This means that if $N^n$ is a Cauchy sequence, i.e., if given $\varepsilon$ there exists $n_0$ such that $\|N^n - N^m\| < \varepsilon$ whenever $n, m \geq n_0$, then the Cauchy sequence converges: there exists $N$ with $\|N\| < \infty$ such that $\|N^n - N\| \to 0$.

We can define another norm on stochastic processes. Define
$$\|H\|_2 = \Big(E\int_0^\infty H_s^2\,ds\Big)^{1/2}.$$
This can be viewed as a standard $L^2$ norm, namely, the $L^2$ norm with respect to the measure $\mu$ defined on $\mathcal{P}$ by
$$\mu(A) = E\int_0^\infty 1_A(s, \omega)\,ds.$$
Since the set of simple functions with respect to $\mu$ is dense in $L^2$, this says that if $H$ is measurable with respect to $\mathcal{P}$, then there exist simple processes $H_s^n$ that are also measurable with respect to $\mathcal{P}$ such that $\|H^n - H\|_2 \to 0$.
Note 4. We have $\|N^n - N\| \to 0$, where the norm here is the one described in Note 3. Each $N^n$ is a stochastic integral of the type described in Step 2 of the construction, hence each $N_t^n$ is a martingale. Let $s < t$ and $A \in \mathcal{F}_s$. Since $E[N_t^n \mid \mathcal{F}_s] = N_s^n$, then
$$E[N_t^n; A] = E[N_s^n; A]. \tag{12.5}$$
By Cauchy-Schwarz,
$$|E[N_t^n; A] - E[N_t; A]| \leq E[|N_t^n - N_t|; A] \leq \big(E[(N_t^n - N_t)^2]\big)^{1/2}\big(E[1_A^2]\big)^{1/2} \leq \|N^n - N\| \to 0. \tag{12.6}$$
We have a similar limit when $t$ is replaced by $s$, so taking the limit in (12.5) yields
$$E[N_t; A] = E[N_s; A].$$
Since $N_s$ is $\mathcal{F}_s$ measurable and has the same expectation over sets $A \in \mathcal{F}_s$ as $N_t$ does, then by Proposition 4.3, $E[N_t \mid \mathcal{F}_s] = N_s$, and $N_t$ is a martingale.

Suppose $\|N^n - N\| \to 0$. Given $\varepsilon > 0$ there exists $n_0$ such that $\|N^n - N\| < \varepsilon$ if $n \geq n_0$. Take $\varepsilon = 1$ and choose $n_0$. By the triangle inequality,
$$\|N\| \leq \|N^{n_0}\| + \|N^{n_0} - N\| \leq \|N^{n_0}\| + 1 < \infty,$$
since $\|N^n\|$ is finite for each $n$.

That $N_t^2 - \langle N \rangle_t$ is a martingale is similar to the proof that $N_t$ is a martingale, but slightly more delicate. We leave the proof to the reader, but note that in place of (12.6) one writes
$$|E[(N_t^n)^2; A] - E[(N_t)^2; A]| \leq E[|(N_t^n)^2 - (N_t)^2|] \leq E[|N_t^n - N_t|\,|N_t^n + N_t|]. \tag{12.7}$$
By Cauchy-Schwarz this is less than
$$\|N_t^n - N_t\|\,\|N_t^n + N_t\|.$$
Since $\|N_t^n + N_t\| \leq \|N_t^n\| + \|N_t\|$ is bounded independently of $n$, we see that the left hand side of (12.7) tends to 0.
Note 5. We have $\|N^n - N\| \to 0$, where the norm is described in Note 3. This means that
$$E[\sup_t |N_t^n - N_t|^2] \to 0.$$
A result from measure theory implies that there exists a subsequence $n_k$ such that
$$\sup_t |N_t^{n_k} - N_t|^2 \to 0, \quad \text{a.s.}$$
So except for a set of $\omega$'s of probability 0, $N_t^{n_k}(\omega)$ converges to $N_t(\omega)$ uniformly. Each $N_t^{n_k}(\omega)$ is continuous by Step 2, and the uniform limit of continuous functions is continuous, therefore $N_t(\omega)$ is a continuous function of $t$. Incidentally, this is the primary reason we considered Doob's inequalities.
Note 6. If $M_t$ is a continuous martingale, $E[M_b - M_a \mid \mathcal{F}_a] = E[M_b \mid \mathcal{F}_a] - M_a = M_a - M_a = 0$. This is the analogue of Lemma 12.1(a). To show the analogue of Lemma 12.1(b),
$$\begin{aligned} E[(M_b - M_a)^2 \mid \mathcal{F}_a] &= E[M_b^2 \mid \mathcal{F}_a] - 2E[M_bM_a \mid \mathcal{F}_a] + E[M_a^2 \mid \mathcal{F}_a] \\ &= E[M_b^2 \mid \mathcal{F}_a] - 2M_aE[M_b \mid \mathcal{F}_a] + M_a^2 \\ &= E[M_b^2 - \langle M \rangle_b \mid \mathcal{F}_a] + E[\langle M \rangle_b - \langle M \rangle_a \mid \mathcal{F}_a] + \langle M \rangle_a - M_a^2 \\ &= E[\langle M \rangle_b - \langle M \rangle_a \mid \mathcal{F}_a], \end{aligned}$$
since $M_t^2 - \langle M \rangle_t$ is a martingale. That
$$E[M_b^2 - M_a^2 \mid \mathcal{F}_a] = E[\langle M \rangle_b - \langle M \rangle_a \mid \mathcal{F}_a]$$
is just a rewriting of
$$E[M_b^2 - \langle M \rangle_b \mid \mathcal{F}_a] = M_a^2 - \langle M \rangle_a = E[M_a^2 - \langle M \rangle_a \mid \mathcal{F}_a].$$
With these two properties in place of Lemma 12.1, replacing $W_s$ by $M_s$ and $ds$ by $d\langle M \rangle_s$, the construction of the stochastic integral $\int_0^t H_s\,dM_s$ goes through exactly as above.
Note 7. If we let $T_K = \inf\{t > 0 : \int_0^t H_s^2\,d\langle M \rangle_s \geq K\}$, the first time the integral is larger than or equal to $K$, and we let $H_s^K = H_s 1_{(s \leq T_K)}$, then $\int_0^\infty (H_s^K)^2\,d\langle M \rangle_s \leq K$ and there is no difficulty defining $N_t^K = \int_0^t H_s^K\,dM_s$ for every $t$. One can show that if $t \leq T_{K_1}$ and $t \leq T_{K_2}$, then $N_t^{K_1} = N_t^{K_2}$ a.s. If $\int_0^t H_s^2\,d\langle M \rangle_s$ is finite for every $t$, then $T_K \to \infty$ as $K \to \infty$. If we call the common value $N_t$, this allows one to define the stochastic integral $N_t$ for each $t$ in the case where the integral $\int_0^t H_s^2\,d\langle M \rangle_s$ is finite for every $t$, even if the expectation of the integral is not.

We can do something similar if $M_t$ is a martingale but we do not have $E\langle M \rangle_\infty < \infty$. Let $S_K = \inf\{t : |M_t| \geq K\}$, the first time $|M_t|$ is larger than or equal to $K$. If we let $M_t^K = M_{t \wedge S_K}$, where $t \wedge S_K = \min(t, S_K)$, then one can show $M^K$ is a martingale bounded in absolute value by $K$. So we can define $J_t^K = \int_0^t H_s\,dM_s^K$ for every $t$, using the paragraph above to handle the wider class of $H$'s, if necessary. Again, one can show that if $t \leq S_{K_1}$ and $t \leq S_{K_2}$, then the value of the stochastic integral will be the same no matter whether we use $M^{K_1}$ or $M^{K_2}$ as our martingale. We use the common value as a definition of the stochastic integral $J_t$. We have $S_K \to \infty$ as $K \to \infty$, so we have a definition of $J_t$ for each $t$.
Note 8. We only outline how the proof goes. To show
$$\int_0^t H_s\,dN_s = \int_0^t H_sK_s\,dW_s, \tag{12.8}$$
one shows that (12.8) holds for $H_s$ simple and then takes limits. To show this, it suffices to look at $H_s$ elementary and use linearity. To show (12.8) for $H_s$ elementary, first prove this in the case when $K_s$ is elementary, use linearity to extend it to the case when $K$ is simple, and then take limits to obtain it for arbitrary $K$. Thus one reduces the proof to showing (12.8) when both $H$ and $K$ are elementary. In this situation, one can explicitly write out both sides of the equation and see that they are equal.
13. Itô's formula.

Suppose $W_t$ is a Brownian motion and $f : \mathbb{R} \to \mathbb{R}$ is a $C^2$ function, that is, $f$ and its first two derivatives are continuous. Itô's formula, which is sometimes known as the change of variables formula, says that
$$f(W_t) - f(W_0) = \int_0^t f'(W_s)\,dW_s + \frac{1}{2}\int_0^t f''(W_s)\,ds.$$
Compare this with the fundamental theorem of calculus:
$$f(t) - f(0) = \int_0^t f'(s)\,ds.$$
In Itô's formula we have a second order term to carry along.

The idea behind the proof is quite simple. By Taylor's theorem,
$$\begin{aligned} f(W_t) - f(W_0) &= \sum_{i=0}^{n-1} [f(W_{(i+1)t/n}) - f(W_{it/n})] \\ &\approx \sum_{i=0}^{n-1} f'(W_{it/n})(W_{(i+1)t/n} - W_{it/n}) \\ &\quad + \frac{1}{2}\sum_{i=0}^{n-1} f''(W_{it/n})(W_{(i+1)t/n} - W_{it/n})^2. \end{aligned}$$
The first sum on the right is approximately the stochastic integral and the second is approximately the quadratic variation.
For a more general semimartingale $X_t = M_t + A_t$, Itô's formula reads:

Theorem 13.1. If $f \in C^2$, then
$$f(X_t) - f(X_0) = \int_0^t f'(X_s)\,dX_s + \frac{1}{2}\int_0^t f''(X_s)\,d\langle M \rangle_s.$$
Let us look at an example. Let $W_t$ be Brownian motion, $X_t = \sigma W_t - \sigma^2 t/2$, and $f(x) = e^x$. Then $\langle X \rangle_t = \langle \sigma W \rangle_t = \sigma^2 t$, $f'(x) = f''(x) = e^x$, and
$$\begin{aligned} e^{\sigma W_t - \sigma^2 t/2} &= 1 + \int_0^t e^{X_s}\sigma\,dW_s - \frac{1}{2}\int_0^t e^{X_s}\sigma^2\,ds \\ &\quad + \frac{1}{2}\int_0^t e^{X_s}\sigma^2\,ds \\ &= 1 + \int_0^t \sigma e^{X_s}\,dW_s. \end{aligned} \tag{13.1}$$
This example will be revisited many times later on.
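A quick numerical sanity check (ours, not part of the notes): since $e^{\sigma W_t - \sigma^2 t/2}$ equals 1 plus a stochastic integral, it is a martingale, so its mean should stay at 1 for every $t$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
for t in (0.5, 1.0, 4.0):
    W = rng.normal(0.0, np.sqrt(t), size=500000)              # W_t is N(0, t)
    print(t, np.exp(sigma * W - sigma**2 * t / 2).mean())     # ~1.0 for each t
```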
Let us give another example of the use of Itô's formula. Let $X_t = W_t$ and let $f(x) = x^k$. Then $f'(x) = kx^{k-1}$ and $f''(x) = k(k-1)x^{k-2}$. We then have
$$\begin{aligned} W_t^k &= W_0^k + \int_0^t kW_s^{k-1}\,dW_s + \frac{1}{2}\int_0^t k(k-1)W_s^{k-2}\,d\langle W \rangle_s \\ &= \int_0^t kW_s^{k-1}\,dW_s + \frac{k(k-1)}{2}\int_0^t W_s^{k-2}\,ds. \end{aligned}$$
When $k = 3$, this says $W_t^3 - 3\int_0^t W_s\,ds$ is a stochastic integral with respect to a Brownian motion, and hence a martingale.
For a semimartingale $X_t = M_t + A_t$ we set $\langle X \rangle_t = \langle M \rangle_t$. Given two semimartingales $X, Y$, we define
$$\langle X, Y \rangle_t = \tfrac{1}{2}\big[\langle X + Y \rangle_t - \langle X \rangle_t - \langle Y \rangle_t\big].$$
The following is known as Itô's product formula. It may also be viewed as an integration by parts formula.

Proposition 13.2. If $X_t$ and $Y_t$ are semimartingales,
$$X_tY_t = X_0Y_0 + \int_0^t X_s\,dY_s + \int_0^t Y_s\,dX_s + \langle X, Y \rangle_t.$$

Proof. Applying Itô's formula with $f(x) = x^2$ to $X_t + Y_t$, we obtain
$$(X_t + Y_t)^2 = (X_0 + Y_0)^2 + 2\int_0^t (X_s + Y_s)(dX_s + dY_s) + \langle X + Y \rangle_t.$$
Applying Itô's formula with $f(x) = x^2$ to $X$ and to $Y$,
$$X_t^2 = X_0^2 + 2\int_0^t X_s\,dX_s + \langle X \rangle_t$$
and
$$Y_t^2 = Y_0^2 + 2\int_0^t Y_s\,dY_s + \langle Y \rangle_t.$$
Then some algebra and the fact that
$$X_tY_t = \tfrac{1}{2}\big[(X_t + Y_t)^2 - X_t^2 - Y_t^2\big]$$
yield the formula.
There is a multidimensional version of Itô's formula: if $X_t = (X_t^1, \ldots, X_t^d)$ is a vector, each component of which is a semimartingale, and $f \in C^2$, then
$$f(X_t) - f(X_0) = \sum_{i=1}^d \int_0^t \frac{\partial f}{\partial x_i}(X_s)\,dX_s^i + \frac{1}{2}\sum_{i,j=1}^d \int_0^t \frac{\partial^2 f}{\partial x_i \partial x_j}(X_s)\,d\langle X^i, X^j \rangle_s.$$

The following application of Itô's formula, known as Lévy's theorem, is important.
Theorem 13.3. Suppose $M_t$ is a continuous martingale with $\langle M \rangle_t = t$. Then $M_t$ is a Brownian motion.

Before proving this, recall from undergraduate probability that the moment generating function of a r.v. $X$ is defined by $m_X(a) = Ee^{aX}$ and that if two random variables have the same moment generating function, they have the same law. This is also true if we replace $a$ by $iu$. In this case we have $\varphi_X(u) = Ee^{iuX}$ and $\varphi_X$ is called the characteristic function of $X$. The reason for looking at the characteristic function is that $\varphi_X$ always exists, whereas $m_X(a)$ might be infinite. The one special case we will need is that if $X$ is a normal r.v. with mean 0 and variance $t$, then $\varphi_X(u) = e^{-u^2t/2}$. This follows from the formula for $m_X(a)$ with $a$ replaced by $iu$ (this can be justified rigorously).
Proof. We will prove that $M_t$ is a $\mathcal{N}(0, t)$; for the remainder of the proof see Note 1.

We apply Itô's formula with $f(x) = e^{iux}$. Then
$$e^{iuM_t} = 1 + \int_0^t iue^{iuM_s}\,dM_s + \frac{1}{2}\int_0^t (-u^2)e^{iuM_s}\,d\langle M \rangle_s.$$
Taking expectations and using $\langle M \rangle_s = s$ and the fact that a stochastic integral is a martingale, hence has 0 expectation, we have
$$Ee^{iuM_t} = 1 - \frac{u^2}{2}\int_0^t Ee^{iuM_s}\,ds.$$
Let $J(t) = Ee^{iuM_t}$. The equation can be rewritten as
$$J(t) = 1 - \frac{u^2}{2}\int_0^t J(s)\,ds.$$
So $J'(t) = -\frac{1}{2}u^2J(t)$ with $J(0) = 1$. The solution to this elementary ODE is $J(t) = e^{-u^2t/2}$. Since
$$Ee^{iuM_t} = e^{-u^2t/2},$$
then by our remarks above the law of $M_t$ must be that of a $\mathcal{N}(0, t)$, which shows that $M_t$ is a mean 0 variance $t$ normal r.v.
Note 1. If $A \in \mathcal{F}_s$ and we do the same argument with $M_t$ replaced by $M_{s+t} - M_s$, we have
$$e^{iu(M_{s+t} - M_s)} = 1 + \int_0^t iue^{iu(M_{s+r} - M_s)}\,dM_r + \frac{1}{2}\int_0^t (-u^2)e^{iu(M_{s+r} - M_s)}\,d\langle M \rangle_r.$$
Multiply this by $1_A$ and take expectations. Since a stochastic integral is a martingale, the stochastic integral term again has expectation 0. If we let $K(t) = E[e^{iu(M_{t+s} - M_s)}; A]$, we now arrive at $K'(t) = -\frac{1}{2}u^2K(t)$ with $K(0) = P(A)$, so
$$K(t) = P(A)e^{-u^2t/2}.$$
Therefore
$$E\big[e^{iu(M_{t+s} - M_s)}; A\big] = Ee^{iu(M_{t+s} - M_s)}\,P(A). \tag{13.2}$$
If $f$ is a nice function and $\widehat{f}$ is its Fourier transform, replace $u$ in the above by $-u$, multiply by $\widehat{f}(u)$, and integrate over $u$. (To do the integral, we approximate the integral by a Riemann sum and then take limits.) We then have
$$E[f(M_{s+t} - M_s); A] = E[f(M_{s+t} - M_s)]\,P(A).$$
By taking limits we have this for $f = 1_B$, so
$$P(M_{s+t} - M_s \in B,\ A) = P(M_{s+t} - M_s \in B)\,P(A).$$
This implies that $M_{s+t} - M_s$ is independent of $\mathcal{F}_s$.

Note $\operatorname{Var}(M_t - M_s) = t - s$; take $A = \Omega$ in (13.2).
14. The Girsanov theorem.

Suppose $P$ is a probability and
$$dX_t = dW_t + \mu(X_t)\,dt,$$
where $W_t$ is a Brownian motion. This is shorthand for
$$X_t = X_0 + W_t + \int_0^t \mu(X_s)\,ds. \tag{14.1}$$
Let
$$M_t = \exp\Big(-\int_0^t \mu(X_s)\,dW_s - \int_0^t \mu(X_s)^2\,ds/2\Big). \tag{14.2}$$
Then, as we have seen before, by Itô's formula $M_t$ is a martingale. This calculation is reviewed in Note 1. We also observe that $M_0 = 1$.

Now let us define a new probability by setting
$$Q(A) = E[M_t; A] \tag{14.3}$$
if $A \in \mathcal{F}_t$. We had better be sure this $Q$ is well defined. If $A \in \mathcal{F}_s \subset \mathcal{F}_t$, then $E[M_t; A] = E[M_s; A]$ because $M_t$ is a martingale. We also check that $Q(\Omega) = E[M_t; \Omega] = EM_t$. This is equal to $EM_0 = 1$, since $M_t$ is a martingale.
What the Girsanov theorem says is:

Theorem 14.1. Under $Q$, $X_t$ is a Brownian motion.

Under $P$, $W_t$ is a Brownian motion and $X_t$ is not. Under $Q$, the process $W_t$ is no longer a Brownian motion.

In order for a process $X_t$ to be a Brownian motion, we need at a minimum that $X_t$ have mean zero and variance $t$. To define mean and variance, we need a probability. Therefore a process might be a Brownian motion with respect to one probability and not another. Most of the other parts of the definition of being a Brownian motion also depend on the probability.

Similarly, to be a martingale, we need conditional expectations, and the conditional expectation of a random variable depends on what probability is being used.

There is a more general version of the Girsanov theorem.

Theorem 14.2. If $X_t$ is a martingale under $P$, then under $Q$ the process $X_t - D_t$ is a martingale, where
$$D_t = \int_0^t \frac{1}{M_s}\,d\langle X, M \rangle_s.$$
$\langle X \rangle_t$ is the same under both $P$ and $Q$.
Let us see how Theorem 14.1 can be used. Let $S_t$ be the stock price, and suppose
$$dS_t = \sigma S_t\,dW_t + mS_t\,dt.$$
(So in the above formulation, $\mu(x) = m$ for all $x$.) Define
$$M_t = e^{-(m/\sigma)W_t - (m^2/2\sigma^2)t}.$$
Then from (13.1), $M_t$ is a martingale and
$$M_t = 1 + \int_0^t \Big(-\frac{m}{\sigma}\Big)M_s\,dW_s.$$
Let $X_t = W_t$. Then
$$\langle X, M \rangle_t = -\int_0^t \frac{m}{\sigma}\,M_s\,ds,$$
and therefore
$$\int_0^t \frac{1}{M_s}\,d\langle X, M \rangle_s = -\int_0^t \frac{m}{\sigma}\,ds = -(m/\sigma)t.$$
Define $Q$ by (14.3). By Theorem 14.2, under $Q$ the process $\widetilde{W}_t = W_t + (m/\sigma)t$ is a martingale. Hence
$$dS_t = \sigma S_t(dW_t + (m/\sigma)\,dt) = \sigma S_t\,d\widetilde{W}_t,$$
or
$$S_t = S_0 + \int_0^t \sigma S_s\,d\widetilde{W}_s$$
is a martingale. So we have found a probability under which the asset price is a martingale. This means that $Q$ is the risk-neutral probability, which we have been calling $P$.
Let us give another example of the use of the Girsanov theorem. Suppose $X_t = W_t + \mu t$, where $\mu$ is a constant. We want to compute the probability that $X_t$ exceeds the level $a$ by time $t_0$.

We first need the probability that a Brownian motion crosses a level $a$ by time $t_0$. If $A_t = \sup_{s \leq t} W_s$ (note we are not looking at $|W_t|$), we have
$$P(A_t > a,\ c \leq W_t \leq d) = \int_c^d \varphi(t, a, x)\,dx, \tag{14.4}$$
where
$$\varphi(t, a, x) = \begin{cases} \dfrac{1}{\sqrt{2\pi t}}\,e^{-x^2/2t} & x \geq a \\[2mm] \dfrac{1}{\sqrt{2\pi t}}\,e^{-(2a-x)^2/2t} & x < a. \end{cases}$$
This is called the reflection principle, and the name is due to the derivation, given in Note 2. Sometimes one says
$$P(W_t = x,\ A_t > a) = P(W_t = 2a - x), \qquad x < a,$$
but this is not precise because $W_t$ is a continuous random variable and both sides of the above equation are zero; (14.4) is the rigorous version of the reflection principle.
Now let $W_t$ be a Brownian motion under $P$. Let $dQ/dP = M_t = e^{\mu W_t - \mu^2 t/2}$. Let $Y_t = W_t - \mu t$. Theorem 14.1 says that under $Q$, $Y_t$ is a Brownian motion. We have $W_t = Y_t + \mu t$.

Let $A = (\sup_{s \leq t_0} W_s \geq a)$. We want to calculate
$$P\big(\sup_{s \leq t_0}(W_s + \mu s) \geq a\big).$$
$W_t$ is a Brownian motion under $P$ while $Y_t$ is a Brownian motion under $Q$. So this probability is equal to
$$Q\big(\sup_{s \leq t_0}(Y_s + \mu s) \geq a\big).$$
This in turn is equal to
$$Q\big(\sup_{s \leq t_0} W_s \geq a\big) = Q(A).$$
Now we use the expression for $M_t$:
$$\begin{aligned} Q(A) &= E_P\big[e^{\mu W_{t_0} - \mu^2 t_0/2}; A\big] \\ &= \int_{-\infty}^\infty e^{\mu x - \mu^2 t_0/2}\,P\big(\sup_{s \leq t_0} W_s \geq a,\ W_{t_0} = x\big)\,dx \\ &= e^{-\mu^2 t_0/2}\Big[\int_{-\infty}^a \frac{1}{\sqrt{2\pi t_0}}\,e^{\mu x}e^{-(2a-x)^2/2t_0}\,dx + \int_a^\infty \frac{1}{\sqrt{2\pi t_0}}\,e^{\mu x}e^{-x^2/2t_0}\,dx\Big]. \end{aligned}$$
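One can check this formula by simulation. The sketch below (ours, not from the notes) estimates $P(\sup_{s \leq t_0}(W_s + \mu s) \geq a)$ directly from discretized paths; the answer should agree with the integral expression above up to Monte Carlo and time-discretization error:

```python
import numpy as np

def crossing_prob(mu, a, t0, n_steps=2000, n_paths=200000, seed=0):
    """Estimate P(sup_{s<=t0} (W_s + mu*s) >= a) from simulated paths."""
    rng = np.random.default_rng(seed)
    dt = t0 / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X = np.cumsum(dW + mu * dt, axis=1)        # X at the grid times
    return np.mean(X.max(axis=1) >= a)

print(crossing_prob(mu=0.5, a=1.0, t0=1.0))
```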
Proof of Theorem 14.1. Using Itô's formula with $f(x) = e^x$,
$$M_t = 1 - \int_0^t \mu(X_r)M_r\,dW_r.$$
So
$$\langle W, M \rangle_t = -\int_0^t \mu(X_r)M_r\,dr.$$
Let $s < t$ and $A \in \mathcal{F}_s$. Since $Q(A) = E_P[M_t; A]$, it is not hard to see that
$$E_Q[W_t; A] = E_P[M_tW_t; A].$$
By Itô's product formula this is
$$E_P\Big[\int_0^t M_r\,dW_r; A\Big] + E_P\Big[\int_0^t W_r\,dM_r; A\Big] + E_P\big[\langle W, M \rangle_t; A\big].$$
Since $\int_0^t M_r\,dW_r$ and $\int_0^t W_r\,dM_r$ are stochastic integrals with respect to martingales, they are themselves martingales. Thus the above is equal to
$$E_P\Big[\int_0^s M_r\,dW_r; A\Big] + E_P\Big[\int_0^s W_r\,dM_r; A\Big] + E_P\big[\langle W, M \rangle_t; A\big].$$
Using the product formula again, this is
$$E_P[M_sW_s; A] + E_P\big[\langle W, M \rangle_t - \langle W, M \rangle_s; A\big] = E_Q[W_s; A] + E_P\big[\langle W, M \rangle_t - \langle W, M \rangle_s; A\big].$$
The last term on the right is equal to
$$\begin{aligned} E_P\Big[\int_s^t d\langle W, M \rangle_r; A\Big] &= -E_P\Big[\int_s^t M_r\mu(X_r)\,dr; A\Big] \\ &= -E_P\Big[\int_s^t E_P[M_t \mid \mathcal{F}_r]\,\mu(X_r)\,dr; A\Big] \\ &= -E_P\Big[\int_s^t M_t\mu(X_r)\,dr; A\Big] \\ &= -E_Q\Big[\int_s^t \mu(X_r)\,dr; A\Big] \\ &= -E_Q\Big[\int_0^t \mu(X_r)\,dr; A\Big] + E_Q\Big[\int_0^s \mu(X_r)\,dr; A\Big]. \end{aligned}$$
Therefore
$$E_Q\Big[W_t + \int_0^t \mu(X_r)\,dr; A\Big] = E_Q\Big[W_s + \int_0^s \mu(X_r)\,dr; A\Big],$$
which shows $X_t$ is a martingale with respect to $Q$.

Similarly, $X_t^2 - t$ is a martingale with respect to $Q$. By Lévy's theorem, $X_t$ is a Brownian motion.

In Note 3 we give a proof of Theorem 14.2 and in Note 4 we show how Theorem 14.1 is really a special case of Theorem 14.2.
Note 1. Let
$$Y_t = -\int_0^t \mu(X_s)\,dW_s - \frac{1}{2}\int_0^t [\mu(X_s)]^2\,ds.$$
We apply Itô's formula with the function $f(x) = e^x$. Note the martingale part of $Y_t$ is the stochastic integral term and the quadratic variation of $Y$ is the quadratic variation of the martingale part, so
$$\langle Y \rangle_t = \int_0^t [\mu(X_s)]^2\,ds.$$
Then $f'(x) = e^x$, $f''(x) = e^x$, and hence
$$\begin{aligned} M_t = e^{Y_t} &= e^{Y_0} + \int_0^t e^{Y_s}\,dY_s + \frac{1}{2}\int_0^t e^{Y_s}\,d\langle Y \rangle_s \\ &= 1 + \int_0^t M_s\Big(-\mu(X_s)\,dW_s - \frac{1}{2}[\mu(X_s)]^2\,ds\Big) + \frac{1}{2}\int_0^t M_s[\mu(X_s)]^2\,ds \\ &= 1 - \int_0^t M_s\mu(X_s)\,dW_s. \end{aligned}$$
Since stochastic integrals with respect to a Brownian motion are martingales, this completes the argument that $M_t$ is a martingale.
Note 2. Let $S_n$ be a simple random walk. This means that $X_1, X_2, \ldots$ are independent and identically distributed random variables with $P(X_i = 1) = P(X_i = -1) = \frac{1}{2}$; let $S_n = \sum_{i=1}^n X_i$. If you are playing a game where you toss a fair coin and win \$1 if it comes up heads and lose \$1 if it comes up tails, then $S_n$ will be your fortune at time $n$. Let $A_n = \max_{0 \le k \le n} S_k$. We will show the analogue of (14.4) for $S_n$, which is
$$P(S_n = x,\ A_n \ge a) = \begin{cases} P(S_n = x), & x \ge a,\\ P(S_n = 2a - x), & x < a. \end{cases} \qquad (14.5)$$
(14.4) can be derived from this using a weak convergence argument.

To establish (14.5), note that if $x \ge a$ and $S_n = x$, then automatically $A_n \ge a$, so the only case to consider is when $x < a$. Any path that crosses $a$ but is at level $x$ at time $n$ has a corresponding path determined by reflecting across level $a$ at the first time the random walk hits $a$; the reflected path will end up at $a + (a - x) = 2a - x$. The probability on the left hand side of (14.5) is the number of paths that hit $a$ and end up at $x$ divided by the total number of paths. Since the number of paths that hit $a$ and end up at $x$ is equal to the number of paths that end up at $2a - x$, the probability on the left is equal to the number of paths that end up at $2a - x$ divided by the total number of paths; this is $P(S_n = 2a - x)$, which is
the right hand side.
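Because (14.5) is a statement about finitely many equally likely paths, it can be verified exactly by enumeration. The following sketch (illustrative only; $n$ and $a$ are arbitrary small values) counts paths on both sides of the identity.

```python
from itertools import product

# Exact check of (14.5) by enumerating all 2^n coin-toss paths:
# P(S_n = x, A_n >= a) = P(S_n = 2a - x) for x < a.
n, a = 10, 3
hits = {}   # paths with A_n >= a, keyed by terminal value S_n
ends = {}   # all paths, keyed by terminal value S_n
for steps in product((1, -1), repeat=n):
    s, m = 0, 0
    for step in steps:
        s += step
        m = max(m, s)
    ends[s] = ends.get(s, 0) + 1
    if m >= a:
        hits[s] = hits.get(s, 0) + 1

for x in range(-n, a):            # x >= a is the trivial case
    assert hits.get(x, 0) == ends.get(2 * a - x, 0), x
print("(14.5) verified for n =", n, ", a =", a)
```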
Note 3. To prove Theorem 14.2, we proceed as follows. Assume without loss of generality that $X_0 = 0$. Then if $A \in \mathcal{F}_s$,
$$E_Q[X_t; A] = E_P[M_t X_t; A] = E_P\Big[\int_0^t M_r\, dX_r; A\Big] + E_P\Big[\int_0^t X_r\, dM_r; A\Big] + E_P[\langle X, M\rangle_t; A]$$
$$= E_P\Big[\int_0^s M_r\, dX_r; A\Big] + E_P\Big[\int_0^s X_r\, dM_r; A\Big] + E_P[\langle X, M\rangle_t; A]$$
$$= E_Q[X_s; A] + E_P\big[\langle X, M\rangle_t - \langle X, M\rangle_s; A\big].$$
Here we used the fact that stochastic integrals with respect to the martingales $X$ and $M$ are again martingales.

On the other hand,
$$E_P\big[\langle X, M\rangle_t - \langle X, M\rangle_s; A\big] = E_P\Big[\int_s^t d\langle X, M\rangle_r; A\Big] = E_P\Big[\int_s^t M_r\, dD_r; A\Big] = E_P\Big[\int_s^t E_P[M_t \mid \mathcal{F}_r]\, dD_r; A\Big]$$
$$= E_P\Big[\int_s^t M_t\, dD_r; A\Big] = E_P[(D_t - D_s) M_t; A] = E_Q[D_t - D_s; A].$$
Combining the two displays gives $E_Q[X_t - D_t; A] = E_Q[X_s - D_s; A]$, which says that $X_t - D_t$ is a martingale under $Q$. The proof of the quadratic variation assertion is similar.
Note 4. Here is an argument showing how Theorem 14.1 can also be derived from Theorem 14.2.

From our formula for $M$ we have $dM_t = -M_t\, a(X_t)\, dW_t$, and therefore $d\langle X, M\rangle_t = -M_t\, a(X_t)\, dt$. Hence by Theorem 14.2 we see that under $Q$, $X_t$ is a continuous martingale with $\langle X\rangle_t = t$. By Lévy's theorem, this means that $X$ is a Brownian motion under $Q$.
15. Stochastic differential equations.

Let $W_t$ be a Brownian motion. We are interested in the existence and uniqueness for stochastic differential equations (SDEs) of the form
$$dX_t = \sigma(X_t)\, dW_t + b(X_t)\, dt, \qquad X_0 = x_0. \qquad (15.1)$$
This means $X_t$ satisfies
$$X_t = x_0 + \int_0^t \sigma(X_s)\, dW_s + \int_0^t b(X_s)\, ds. \qquad (15.2)$$
Here $W_t$ is a Brownian motion, and (15.2) holds for almost every $\omega$.

We have to make some assumptions on $\sigma$ and $b$. We assume they are Lipschitz, which means:
$$|\sigma(x) - \sigma(y)| \le c|x - y|, \qquad |b(x) - b(y)| \le c|x - y|$$
for some constant $c$. We also suppose that $\sigma$ and $b$ grow at most linearly, which means:
$$|\sigma(x)| \le c(1 + |x|), \qquad |b(x)| \le c(1 + |x|).$$

Theorem 15.1. There exists one and only one solution to (15.2).

The idea of the proof is Picard iteration, which is how existence and uniqueness for ordinary differential equations is proved; see Note 1.

The intuition behind (15.1) is that $X_t$ behaves locally like a multiple of Brownian motion plus a constant drift: locally $X_{t+h} - X_t \approx \sigma\,(W_{t+h} - W_t) + b\,((t+h) - t)$. However the "constants" depend on the current value of $X_t$. When $X_t$ is at different points, the coefficients vary, which is why they are written $\sigma(X_t)$ and $b(X_t)$. $\sigma$ is sometimes called the diffusion coefficient and $b$ is sometimes called the drift coefficient.
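The local picture just described translates directly into the Euler-Maruyama scheme: over a short step $h$, increment $X$ by $\sigma(X)\,\Delta W + b(X)\,h$. Here is a minimal numerical sketch (not from the notes; the coefficients $\sigma(x) = 1$, $b(x) = -x$ are an arbitrary Lipschitz example, for which $X_1$ is Gaussian with known mean and variance).

```python
import math
import numpy as np

# Euler-Maruyama discretization of dX_t = sigma(X_t) dW_t + b(X_t) dt:
# each step adds sigma(X) dW + b(X) dt, mirroring the local approximation.
def euler_maruyama(sigma, b, x0, t, n_steps, rng):
    dt = t / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + sigma(x) * rng.normal(0.0, math.sqrt(dt)) + b(x) * dt
    return x

rng = np.random.default_rng(1)
# Example with sigma(x) = 1, b(x) = -x (an Ornstein-Uhlenbeck process);
# started at 1, X_1 is Gaussian with mean e^{-1} and variance (1 - e^{-2})/2.
samples = np.array([euler_maruyama(lambda x: 1.0, lambda x: -x, 1.0, 1.0, 500, rng)
                    for _ in range(20_000)])
print(samples.mean(), math.exp(-1.0))
print(samples.var(), (1 - math.exp(-2.0)) / 2)
```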
The above theorem also works in higher dimensions. We want to solve
$$dX_t^i = \sum_{j=1}^d \sigma_{ij}(X_t)\, dW_t^j + b_i(X_t)\, dt, \qquad i = 1, \ldots, d.$$
This is an abbreviation for the equation
$$X_t^i = x_0^i + \int_0^t \sum_{j=1}^d \sigma_{ij}(X_s)\, dW_s^j + \int_0^t b_i(X_s)\, ds.$$
Here the initial value is $x_0 = (x_0^1, \ldots, x_0^d)$, the solution process is $X_t = (X_t^1, \ldots, X_t^d)$, and $W_t^1, \ldots, W_t^d$ are $d$ independent Brownian motions. If all of the $\sigma_{ij}$ and $b_i$ are Lipschitz and grow at most linearly, we have existence and uniqueness for the solution.
Suppose one wants to solve
$$dZ_t = aZ_t\, dW_t + bZ_t\, dt.$$
Note that this equation is linear in $Z_t$, and it turns out that linear equations are almost the only ones that have an explicit solution. In this case we can write down the explicit solution and then verify that it satisfies the SDE. The uniqueness result above (Theorem 15.1) shows that we have in fact found the solution.

Let
$$Z_t = Z_0\, e^{aW_t - a^2 t/2 + bt}.$$
We will verify that this is correct by using Itô's formula. Let $X_t = aW_t - a^2 t/2 + bt$. Then $X_t$ is a semimartingale with martingale part $aW_t$ and $\langle X\rangle_t = a^2 t$, and $Z_t = Z_0 e^{X_t}$. By Itô's formula with $f(x) = e^x$,
$$Z_t = Z_0 e^{X_t} = Z_0 + \int_0^t Z_0 e^{X_s}\, dX_s + \frac{1}{2} \int_0^t Z_0 e^{X_s}\, a^2\, ds$$
$$= Z_0 + \int_0^t aZ_s\, dW_s - \int_0^t \frac{a^2}{2} Z_s\, ds + \int_0^t bZ_s\, ds + \frac{1}{2} \int_0^t a^2 Z_s\, ds$$
$$= Z_0 + \int_0^t aZ_s\, dW_s + \int_0^t bZ_s\, ds.$$
This is the integrated form of the equation we wanted to solve.
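As a sanity check, one can drive an Euler discretization of $dZ_t = aZ_t\, dW_t + bZ_t\, dt$ and the closed-form solution with the same Brownian increments and watch them agree as the step size shrinks; the sketch below is illustrative, with arbitrary parameter values.

```python
import math
import numpy as np

# Drive the Euler scheme for dZ = aZ dW + bZ dt and the explicit solution
# Z_t = Z_0 exp(a W_t - a^2 t/2 + b t) with the same Brownian increments.
rng = np.random.default_rng(2)
a, b, z0, t, n = 0.5, 0.2, 1.0, 1.0, 100_000
dt = t / n

dw = rng.normal(0.0, math.sqrt(dt), size=n)
z_euler = z0
for inc in dw:
    z_euler += a * z_euler * inc + b * z_euler * dt

z_exact = z0 * math.exp(a * dw.sum() - a**2 * t / 2 + b * t)
print(f"Euler: {z_euler:.6f}   exact: {z_exact:.6f}")
```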
There is a connection between SDEs and partial differential equations. Let $f$ be a $C^2$ function. If we apply Itô's formula,
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\, dX_s + \frac{1}{2} \int_0^t f''(X_s)\, d\langle X\rangle_s.$$
From (15.2) we know $\langle X\rangle_t = \int_0^t \sigma(X_s)^2\, ds$. If we substitute for $dX_s$ and $d\langle X\rangle_s$, we obtain
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\sigma(X_s)\, dW_s + \int_0^t f'(X_s) b(X_s)\, ds + \frac{1}{2} \int_0^t f''(X_s)\sigma(X_s)^2\, ds$$
$$= f(X_0) + \int_0^t f'(X_s)\sigma(X_s)\, dW_s + \int_0^t Lf(X_s)\, ds,$$
where we write
$$Lf(x) = \frac{1}{2}\sigma(x)^2 f''(x) + b(x) f'(x).$$
$L$ is an example of a differential operator. Since the stochastic integral with respect to a Brownian motion is a martingale, we see from the above that
$$f(X_t) - f(X_0) - \int_0^t Lf(X_s)\, ds$$
is a martingale. This fact can be exploited to derive results about PDEs from SDEs and vice versa.
Note 1. Let us illustrate the uniqueness part, and for simplicity, assume $b$ is identically 0 and $\sigma$ is bounded.

Proof of uniqueness. If $X$ and $Y$ are two solutions,
$$X_t - Y_t = \int_0^t [\sigma(X_s) - \sigma(Y_s)]\, dW_s.$$
So
$$E|X_t - Y_t|^2 = E\int_0^t |\sigma(X_s) - \sigma(Y_s)|^2\, ds \le c \int_0^t E|X_s - Y_s|^2\, ds,$$
using the Lipschitz hypothesis on $\sigma$. If we let $g(t) = E|X_t - Y_t|^2$, we have
$$g(t) \le c \int_0^t g(s)\, ds.$$
Since we are assuming $\sigma$ is bounded, $EX_t^2 = E\int_0^t \sigma(X_s)^2\, ds \le ct$ and similarly for $EY_t^2$, so $g(t) \le ct$. Then
$$g(t) \le c \int_0^t \Big( c \int_0^s g(r)\, dr \Big)\, ds.$$
Iteration implies
$$g(t) \le A t^n / n!$$
for each $n$, which implies $g$ must be 0.
16. Continuous time financial models.

The most common model by far in finance is one where the security price is based on a Brownian motion. One does not want to say the price is some multiple of Brownian motion for two reasons. First of all, a Brownian motion can become negative, which doesn't make sense for stock prices. Second, if one invests \$1,000 in a stock selling for \$1 and it goes up to \$2, one has the same profit, namely \$1,000, as if one invests \$1,000 in a stock selling for \$100 and it goes up to \$200. It is the proportional increase one wants.

Therefore one sets $dS_t/S_t$ to be the quantity related to a Brownian motion. Different stocks have different volatilities $\sigma$ (consider a high-tech stock versus a pharmaceutical). In addition, one expects a mean rate of return $\mu$ on one's investment that is positive (otherwise, why not just put the money in the bank?). In fact, one expects the mean rate of return to be higher than the risk-free interest rate $r$, because one expects something in return for undertaking risk.

So the model that is used is to let the stock price be modeled by the SDE
$$dS_t/S_t = \sigma\, dW_t + \mu\, dt,$$
or what looks better,
$$dS_t = \sigma S_t\, dW_t + \mu S_t\, dt. \qquad (16.1)$$
Fortunately this SDE is one of those that can be solved explicitly, and in fact we gave the solution in Section 15.

Proposition 16.1. The solution to (16.1) is given by
$$S_t = S_0\, e^{\sigma W_t + (\mu - \sigma^2/2)t}. \qquad (16.2)$$

Proof. Using Theorem 15.1 there will only be one solution, so we need to verify that $S_t$ as given in (16.2) satisfies (16.1). We already did this, but it is important enough that we will do it again. Let us first assume $S_0 = 1$. Let $X_t = \sigma W_t + (\mu - \sigma^2/2)t$, let $f(x) = e^x$, and apply Itô's formula. We obtain
$$S_t = e^{X_t} = e^{X_0} + \int_0^t e^{X_s}\, dX_s + \frac{1}{2} \int_0^t e^{X_s}\, d\langle X\rangle_s$$
$$= 1 + \int_0^t S_s \sigma\, dW_s + \int_0^t S_s \big(\mu - \tfrac{1}{2}\sigma^2\big)\, ds + \frac{1}{2} \int_0^t S_s \sigma^2\, ds$$
$$= 1 + \int_0^t \sigma S_s\, dW_s + \int_0^t \mu S_s\, ds,$$
which is (16.1). For general $S_0$, just multiply both sides by $S_0$.
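Since (16.2) gives $S_t$ as an explicit function of $W_t$, geometric Brownian motion can be sampled exactly, with no time-stepping. A quick consistency check (sketch only; parameters arbitrary) is that $ES_t = S_0 e^{\mu t}$, because $Ee^{\sigma W_t} = e^{\sigma^2 t/2}$.

```python
import math
import numpy as np

# Exact sampling of S_t = S_0 exp(sigma W_t + (mu - sigma^2/2) t) and a
# check of the mean E S_t = S_0 e^{mu t}.
rng = np.random.default_rng(3)
s0, sigma, mu, t = 20.0, 0.3, 0.1, 2.0
w = rng.normal(0.0, math.sqrt(t), size=1_000_000)
s_t = s0 * np.exp(sigma * w + (mu - sigma**2 / 2) * t)
print(s_t.mean(), s0 * math.exp(mu * t))
```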
Suppose for the moment that the interest rate $r$ is 0. If one purchases $\Delta_0$ shares (possibly a negative number) at time $t_0$, then changes the investment to $\Delta_1$ shares at time $t_1$, then changes the investment to $\Delta_2$ at time $t_2$, etc., then one's wealth at time $t$ will be
$$X_{t_0} + \Delta_0 (S_{t_1} - S_{t_0}) + \Delta_1 (S_{t_2} - S_{t_1}) + \cdots + \Delta_i (S_{t_{i+1}} - S_{t_i}). \qquad (16.3)$$
To see this, at time $t_0$ one has the original wealth $X_{t_0}$. One buys $\Delta_0$ shares and the cost is $\Delta_0 S_{t_0}$. At time $t_1$ one sells the $\Delta_0$ shares for the price of $S_{t_1}$ per share, and so one's wealth is now $X_{t_0} + \Delta_0(S_{t_1} - S_{t_0})$. One now pays $\Delta_1 S_{t_1}$ for $\Delta_1$ shares at time $t_1$ and continues. The right hand side of (16.3) is the same as
$$X_{t_0} + \int_{t_0}^t \Delta(s)\, dS_s,$$
where we have $t \ge t_{i+1}$ and $\Delta(s) = \Delta_i$ for $t_i \le s < t_{i+1}$. In other words, our wealth is given by a stochastic integral with respect to the stock price. The requirement that the integrand of a stochastic integral be adapted is very natural: we cannot base the number of shares we own at time $s$ on information that will not be available until the future.

How should we modify this when the interest rate $r$ is not zero? Let $P_t$ be the present value of the stock price. So
$$P_t = e^{-rt} S_t.$$
Note that $P_0 = S_0$. When we hold $\Delta_i$ shares of stock from $t_i$ to $t_{i+1}$, our profit in present day dollars will be
$$\Delta_i (P_{t_{i+1}} - P_{t_i}).$$
The formula for our wealth then becomes
$$X_{t_0} + \int_{t_0}^t \Delta(s)\, dP_s.$$
By Itô's product formula,
$$dP_t = e^{-rt}\, dS_t - r e^{-rt} S_t\, dt = e^{-rt} \sigma S_t\, dW_t + e^{-rt} \mu S_t\, dt - r e^{-rt} S_t\, dt = \sigma P_t\, dW_t + (\mu - r) P_t\, dt.$$
Similarly to (16.2), the solution to this SDE is
$$P_t = P_0\, e^{\sigma W_t + (\mu - r - \sigma^2/2)t}. \qquad (16.4)$$
The continuous time model of finance is that the security price is given by (16.1) (often called geometric Brownian motion), that there are no transaction costs, but one can trade as many shares as one wants and vary the amount held in a continuous fashion. This clearly is not the way the market actually works (for example, stock prices are discrete), but this model has proved to be a very good one.
17. Markov properties of Brownian motion.

Let $W_t$ be a Brownian motion. Because $W_{t+r} - W_t$ is independent of $\sigma(W_s : s \le t)$, knowing the path of $W$ up to time $t$ gives no help in predicting $W_{t+r} - W_t$. In particular, if we want to predict $W_{t+r}$ and we know $W_t$, then knowing the path up to time $t$ gives no additional advantage in predicting $W_{t+r}$. Phrased another way, this says that to predict the future, we only need to know where we are and not how we got there.

Let's try to give a more precise description of this property, which is known as the Markov property.

Fix $r$ and let $Z_t = W_{t+r} - W_r$. Clearly the map $t \to Z_t$ is continuous since the same is true for $W$. Since $Z_t - Z_s = W_{t+r} - W_{s+r}$, the distribution of $Z_t - Z_s$ is normal with mean zero and variance $(t + r) - (s + r)$. One can also check the other parts of the definition to show that $Z_t$ is also a Brownian motion.

Recall that a stopping time in the continuous framework is a r.v. $T$ taking values in $[0, \infty)$ such that $(T \le t) \in \mathcal{F}_t$ for all $t$. To make a satisfactory theory, one needs the $\mathcal{F}_t$ to be right continuous (see Section 10), but this is fairly technical and we will ignore it.

If $T$ is a stopping time, $\mathcal{F}_T$ is the collection of events $A$ such that $A \cap (T \le t) \in \mathcal{F}_t$ for all $t$.

Let us try to provide some motivation for this definition of $\mathcal{F}_T$. It will be simpler to consider the discrete time case. The analogue of $\mathcal{F}_T$ in the discrete case is the following: if $N$ is a stopping time, let
$$\mathcal{F}_N = \{A : A \cap (N \le k) \in \mathcal{F}_k \text{ for all } k\}.$$
If $X_k$ is a sequence that is adapted to the $\sigma$-fields $\mathcal{F}_k$, that is, $X_k$ is $\mathcal{F}_k$ measurable for $k = 0, 1, 2, \ldots$, then knowing which events in $\mathcal{F}_k$ have occurred allows us to calculate $X_k$ for each $k$. So a reasonable definition of $\mathcal{F}_N$ should allow us to calculate $X_N$ whenever we know which events in $\mathcal{F}_N$ have occurred or not. Or phrased another way, we want $X_N$ to be $\mathcal{F}_N$ measurable. Where did the sequence $X_k$ come from? It could be any adapted sequence. Therefore one definition of the $\sigma$-field of events occurring before time $N$ might be:

Consider the collection of random variables $X_N$ where $X_k$ is a sequence adapted to $\mathcal{F}_k$. Let $\mathcal{G}_N$ be the smallest $\sigma$-field with respect to which each of these random variables $X_N$ is measurable.

In other words, we want $\mathcal{G}_N$ to be the $\sigma$-field generated by the collection of random variables $X_N$ for all sequences $X_k$ that are adapted to $\mathcal{F}_k$.

We show in Note 1 that $\mathcal{F}_N = \mathcal{G}_N$. The $\sigma$-field $\mathcal{F}_N$ is just a bit easier to work with.

Now we proceed to the strong Markov property for Brownian motion, the proof of which is given in Note 2.
Proposition 17.1. If $X_t$ is a Brownian motion and $T$ is a bounded stopping time, then $X_{T+t} - X_T$ is a mean 0 variance $t$ random variable and is independent of $\mathcal{F}_T$.

This proposition says: if you want to predict $X_{T+t}$, you could do it knowing all of $\mathcal{F}_T$ or just knowing $X_T$. Since $X_{T+t} - X_T$ is independent of $\mathcal{F}_T$, the extra information given in $\mathcal{F}_T$ does you no good at all.

We need a way of expressing the Markov and strong Markov properties that will generalize to other processes.

Let $W_t$ be a Brownian motion. Consider the process $W^x_t = x + W_t$, which is known as Brownian motion started at $x$. Define $\Omega'$ to be the set of continuous functions on $[0, \infty)$, let $X_t(\omega) = \omega(t)$, and let the $\sigma$-field $\mathcal{F}'$ be the one generated by the $X_t$. Define $P^x$ on $(\Omega', \mathcal{F}')$ by
$$P^x(X_{t_1} \in A_1, \ldots, X_{t_n} \in A_n) = P(W^x_{t_1} \in A_1, \ldots, W^x_{t_n} \in A_n).$$
What we have done is gone from one probability space with many processes $W^x_t$ to one process $X_t$ with many probability measures $P^x$.
An example in the Markov chain setting might help. No knowledge of Markov chains is necessary to understand this. Suppose we have a Markov chain with 3 states, $A$, $B$, and $C$. Suppose we have a probability $P$ and three different Markov chains. The first, called $X^A_n$, represents the position at time $n$ for the chain started at $A$. So $X^A_0 = A$, and $X^A_1$ can be one of $A, B, C$, $X^A_2$ can be one of $A, B, C$, and so on. Similarly we have $X^B_n$, the chain started at $B$, and $X^C_n$. Define
$$\Omega' = \{(AAA), (AAB), (ABA), \ldots, (BAA), (BAB), \ldots\}.$$
So $\Omega'$ denotes the possible sequences of states for times $n = 0, 1, 2$. If $\omega = ABA$, set $Y_0(\omega) = A$, $Y_1(\omega) = B$, $Y_2(\omega) = A$, and similarly for all the other 26 values of $\omega$. Define $P^A((AAA)) = P(X^A_0 = A, X^A_1 = A, X^A_2 = A)$. Similarly define $P^A((AAB)), \ldots$. Define $P^B((AAA)) = P(X^B_0 = A, X^B_1 = A, X^B_2 = A)$ (this will be 0 because we know $X^B_0 = B$), and similarly for the other values of $\omega$. We also define $P^C$. So we now have one process, $Y_n$, and three probabilities $P^A$, $P^B$, $P^C$. As you can see, there really isn't all that much going on here.
Here is another formulation of the Markov property.

Proposition 17.2. If $s < t$ and $f$ is bounded or nonnegative, then
$$E^x[f(X_t) \mid \mathcal{F}_s] = E^{X_s}[f(X_{t-s})], \quad \text{a.s.}$$

The right hand side is to be interpreted as follows. Define $\varphi(x) = E^x f(X_{t-s})$. Then $E^{X_s} f(X_{t-s})$ means $\varphi(X_s(\omega))$. One often writes $P_t f(x)$ for $E^x f(X_t)$. We prove this in Note 3.

This formula generalizes: if $s < t < u$, then
$$E^x[f(X_t) g(X_u) \mid \mathcal{F}_s] = E^{X_s}[f(X_{t-s}) g(X_{u-s})],$$
and so on for functions of $X$ at more times.

Using Proposition 17.1, the statement and proof of Proposition 17.2 can be extended to stopping times.

Proposition 17.3. If $T$ is a bounded stopping time, then
$$E^x[f(X_{T+t}) \mid \mathcal{F}_T] = E^{X_T}[f(X_t)].$$

We can also establish the Markov property and strong Markov property in the context of solutions of stochastic differential equations. If we let $X^x_t$ denote the solution to
$$X^x_t = x + \int_0^t \sigma(X^x_s)\, dW_s + \int_0^t b(X^x_s)\, ds,$$
so that $X^x_t$ is the solution of the SDE started at $x$, we can define new probabilities by
$$P^x(X_{t_1} \in A_1, \ldots, X_{t_n} \in A_n) = P(X^x_{t_1} \in A_1, \ldots, X^x_{t_n} \in A_n).$$
This is similar to what we did in defining $P^x$ for Brownian motion, but here we do not have translation invariance. One can show that when there is uniqueness for the solution to the SDE, the family $(P^x, X_t)$ satisfies the Markov and strong Markov property. The statement is precisely the same as the statement of Proposition 17.3.
Note 1. We want to show $\mathcal{G}_N = \mathcal{F}_N$. Since $\mathcal{G}_N$ is the smallest $\sigma$-field with respect to which $X_N$ is measurable for all adapted sequences $X_k$, and it is easy to see that $\mathcal{F}_N$ is a $\sigma$-field, to show $\mathcal{G}_N \subset \mathcal{F}_N$ it suffices to show that $X_N$ is measurable with respect to $\mathcal{F}_N$ whenever $X_k$ is adapted. Therefore we need to show that for such a sequence $X_k$ and any real number $a$, the event $(X_N > a) \in \mathcal{F}_N$.

Now $(X_N > a) \cap (N = j) = (X_j > a) \cap (N = j)$. The event $(X_j > a) \in \mathcal{F}_j$ since $X$ is an adapted sequence. Since $N$ is a stopping time, then $(N \le j) \in \mathcal{F}_j$ and $(N \le j - 1)^c \in \mathcal{F}_{j-1} \subset \mathcal{F}_j$, and so the event $(N = j) = (N \le j) \cap (N \le j - 1)^c$ is in $\mathcal{F}_j$. If $j \le k$, then $(N = j) \in \mathcal{F}_j \subset \mathcal{F}_k$. Therefore
$$(X_N > a) \cap (N \le k) = \bigcup_{j=0}^k \big( (X_N > a) \cap (N = j) \big) \in \mathcal{F}_k,$$
which proves that $(X_N > a) \in \mathcal{F}_N$.

To show $\mathcal{F}_N \subset \mathcal{G}_N$, we suppose that $A \in \mathcal{F}_N$. Let $X_k = 1_{A \cap (N \le k)}$. Since $A \in \mathcal{F}_N$, then $A \cap (N \le k) \in \mathcal{F}_k$, so $X_k$ is $\mathcal{F}_k$ measurable. But $X_N = 1_{A \cap (N \le N)} = 1_A$, so $A = (X_N > 0) \in \mathcal{G}_N$. We have thus shown that $\mathcal{F}_N \subset \mathcal{G}_N$, and combining with the previous paragraph, we conclude $\mathcal{F}_N = \mathcal{G}_N$.
Note 2. Let $T_n$ be defined by $T_n(\omega) = (k+1)/2^n$ if $T(\omega) \in [k/2^n, (k+1)/2^n)$. It is easy to check that $T_n$ is a stopping time. Let $f$ be continuous and $A \in \mathcal{F}_T$. Then $A \in \mathcal{F}_{T_n}$ as well. We have
$$E[f(X_{T_n + t} - X_{T_n}); A] = \sum_k E\big[f(X_{k/2^n + t} - X_{k/2^n}); A \cap (T_n = k/2^n)\big]$$
$$= \sum_k E\big[f(X_{k/2^n + t} - X_{k/2^n})\big]\, P(A \cap (T_n = k/2^n)) = Ef(X_t)\, P(A).$$
Let $n \to \infty$, so
$$E[f(X_{T+t} - X_T); A] = Ef(X_t)\, P(A).$$
Taking limits, this equation holds for all bounded $f$.

If we take $A = \Omega$ and $f = 1_B$, we see that $X_{T+t} - X_T$ has the same distribution as $X_t$, which is that of a mean 0 variance $t$ normal random variable. If we let $A \in \mathcal{F}_T$ be arbitrary and $f = 1_B$, we see that
$$P(X_{T+t} - X_T \in B,\ A) = P(X_t \in B)\, P(A) = P(X_{T+t} - X_T \in B)\, P(A),$$
which implies that $X_{T+t} - X_T$ is independent of $\mathcal{F}_T$.
Note 3. Before proving Proposition 17.2, recall from undergraduate analysis that every bounded function is the limit of linear combinations of functions $e^{iux}$, $u \in \mathbb{R}$. This follows from using the inversion formula for Fourier transforms. There are various slightly different formulas for the Fourier transform. We use $\widehat f(u) = \int e^{iux} f(x)\, dx$. If $f$ is smooth enough and has compact support, then one can recover $f$ by the formula
$$f(x) = \frac{1}{2\pi} \int e^{-iux} \widehat f(u)\, du.$$
We can first approximate this improper integral by
$$\frac{1}{2\pi} \int_{-N}^N e^{-iux} \widehat f(u)\, du$$
by taking $N$ larger and larger. For each $N$ we can approximate $\frac{1}{2\pi}\int_{-N}^N e^{-iux} \widehat f(u)\, du$ by using Riemann sums. Thus we can approximate $f(x)$ by a linear combination of terms of the form $e^{iu_j x}$. Finally, bounded functions can be approximated by smooth functions with compact support.

Proof. Let $f(x) = e^{iux}$. Then
$$E^x[e^{iuX_t} \mid \mathcal{F}_s] = e^{iuX_s}\, E^x[e^{iu(X_t - X_s)} \mid \mathcal{F}_s] = e^{iuX_s}\, e^{-u^2(t-s)/2}.$$
On the other hand,
$$\varphi(y) = E^y[f(X_{t-s})] = E[e^{iu(W_{t-s} + y)}] = e^{iuy}\, e^{-u^2(t-s)/2}.$$
So $\varphi(X_s) = E^x[e^{iuX_t} \mid \mathcal{F}_s]$. Using linearity and taking limits, we have the proposition for all $f$.
18. Martingale representation theorem.

In this section we want to show that every random variable that is $\mathcal{F}_t$ measurable can be written as a stochastic integral of Brownian motion. In the next section we use this to show that under the model of geometric Brownian motion the market is complete. This means that no matter what option one comes up with, one can exactly replicate the result (no matter what the market does) by buying and selling shares of stock.

In mathematical terms, we let $\mathcal{F}_t$ be the $\sigma$-field generated by $W_s$, $s \le t$. From (16.2) we see that $\mathcal{F}_t$ is also the same as the $\sigma$-field generated by $S_s$, $s \le t$, so it doesn't matter which one we work with. We want to show that if $V$ is $\mathcal{F}_t$ measurable, then there exists $H_s$ adapted such that
$$V = V_0 + \int H_s\, dW_s, \qquad (18.1)$$
where $V_0$ is a constant.

Our goal is to prove

Theorem 18.1. If $V$ is $\mathcal{F}_t$ measurable and $EV^2 < \infty$, then there exist a constant $c$ and an adapted integrand $H_s$ with $E\int_0^t H_s^2\, ds < \infty$ such that
$$V = c + \int_0^t H_s\, dW_s.$$

Before we prove this, let us explain why this is called a martingale representation theorem. Suppose $M_s$ is a martingale adapted to $\mathcal{F}_s$, where the $\mathcal{F}_s$ are the $\sigma$-fields generated by a Brownian motion. Suppose also that $EM_t^2 < \infty$. Set $V = M_t$. By Theorem 18.1, we can write
$$M_t = V = c + \int_0^t H_s\, dW_s.$$
The stochastic integral is a martingale, so for $r \le t$,
$$M_r = E[M_t \mid \mathcal{F}_r] = c + E\Big[\int_0^t H_s\, dW_s \mid \mathcal{F}_r\Big] = c + \int_0^r H_s\, dW_s.$$
We already knew that stochastic integrals were martingales; what this says is the converse: every martingale can be represented as a stochastic integral. Don't forget that we need $EM_t^2 < \infty$ and $M_s$ adapted to the $\sigma$-fields of a Brownian motion.

In Note 1 we show that if every martingale can be represented as a stochastic integral, then every random variable $V$ that is $\mathcal{F}_t$ measurable can, too, provided $EV^2 < \infty$.

There are several proofs of Theorem 18.1. Unfortunately, they are all technical. We outline one proof here, giving details in the notes. We start with the following, proved in Note 2.
Proposition 18.2. Suppose
$$V_n = c_n + \int_0^t H^n_s\, dW_s,$$
$c_n \to c$, $E|V_n - V|^2 \to 0$, and for each $n$ the process $H^n$ is adapted with $E\int_0^t (H^n_s)^2\, ds < \infty$. Then there exist a constant $c$ and an adapted $H_s$ with $E\int_0^t H_s^2\, ds < \infty$ so that
$$V = c + \int_0^t H_s\, dW_s.$$

What this proposition says is that if we can represent a sequence of random variables $V_n$ and $V_n \to V$, then we can represent $V$.

Let $\mathcal{H}$ be the collection of random variables that can be represented as stochastic integrals. By this we mean
$$\mathcal{H} = \Big\{V : EV^2 < \infty,\ V \text{ is } \mathcal{F}_t \text{ measurable},\ V = c + \int_0^t H_s\, dW_s \text{ for some adapted } H \text{ with } E\int_0^t H_s^2\, ds < \infty\Big\}.$$
Next we show $\mathcal{H}$ contains a particular collection of random variables. (The proof is in Note 3.)

Proposition 18.3. If $g$ is bounded, the random variable $g(W_t)$ is in $\mathcal{H}$.

An almost identical proof shows that if $f$ is bounded, then
$$f(W_t - W_s) = c + \int_s^t H_r\, dW_r$$
for some $c$ and $H_r$.

Proposition 18.4. If $t_0 \le t_1 \le \cdots \le t_n \le t$ and $f_1, \ldots, f_n$ are bounded functions, then $f_1(W_{t_1} - W_{t_0}) f_2(W_{t_2} - W_{t_1}) \cdots f_n(W_{t_n} - W_{t_{n-1}})$ is in $\mathcal{H}$.

See Note 4 for the proof.

We now finish the proof of Theorem 18.1. We have shown that a large class of random variables is contained in $\mathcal{H}$.

Proof of Theorem 18.1. We have shown that random variables of the form
$$f_1(W_{t_1} - W_{t_0}) f_2(W_{t_2} - W_{t_1}) \cdots f_n(W_{t_n} - W_{t_{n-1}}) \qquad (18.2)$$
are in $\mathcal{H}$. Clearly if $V_i \in \mathcal{H}$ for $i = 1, \ldots, m$, and $a_i$ are constants, then $a_1 V_1 + \cdots + a_m V_m$ is also in $\mathcal{H}$. Finally, from measure theory we know that if $EV^2 < \infty$ and $V$ is $\mathcal{F}_t$ measurable, we can find a sequence $V_k$ such that $E|V_k - V|^2 \to 0$ and each $V_k$ is a linear combination of random variables of the form given in (18.2). Now apply Proposition 18.2.

Note 1. Suppose we know that every martingale $M_s$ adapted to $\mathcal{F}_s$ with $EM_t^2 < \infty$ can be represented as
$$M_r = c + \int_0^r H_s\, dW_s$$
for some suitable $H$. If $V$ is $\mathcal{F}_t$ measurable with $EV^2 < \infty$, let $M_r = E[V \mid \mathcal{F}_r]$. We know this is a martingale, so
$$M_r = c + \int_0^r H_s\, dW_s$$
for suitable $H$. Applying this with $r = t$,
$$V = E[V \mid \mathcal{F}_t] = M_t = c + \int_0^t H_s\, dW_s.$$
Note 2. We prove Proposition 18.2. By our assumptions,
$$E\big|(V_n - c_n) - (V_m - c_m)\big|^2 \to 0$$
as $n, m \to \infty$. So
$$E\Big|\int_0^t (H^n_s - H^m_s)\, dW_s\Big|^2 \to 0.$$
From our formulas for stochastic integrals, this means
$$E\int_0^t |H^n_s - H^m_s|^2\, ds \to 0.$$
This says that $H^n_s$ is a Cauchy sequence in the space $L^2$ (with respect to the norm $\|\cdot\|$ given by $\|Y\| = \big(E\int_0^t Y_s^2\, ds\big)^{1/2}$). Measure theory tells us that $L^2$ is a complete metric space, so there exists $H_s$ such that
$$E\int_0^t |H^n_s - H_s|^2\, ds \to 0.$$
In particular $H^n_s \to H_s$, and this implies $H_s$ is adapted. Another consequence, due to Fatou's lemma, is that $E\int_0^t H_s^2\, ds < \infty$.

Let $U_t = \int_0^t H_s\, dW_s$. Then as above,
$$E\big|(V_n - c_n) - U_t\big|^2 = E\int_0^t (H^n_s - H_s)^2\, ds \to 0.$$
Therefore $U_t = V - c$, and $U$ has the desired form.
Note 3. Here is the proof of Proposition 18.3. By Itô's formula with $X_s = iuW_s + u^2 s/2$ and $f(x) = e^x$,
$$e^{X_t} = 1 + \int_0^t e^{X_s}(iu)\, dW_s + \int_0^t e^{X_s}(u^2/2)\, ds + \frac{1}{2} \int_0^t e^{X_s}(iu)^2\, ds = 1 + iu \int_0^t e^{X_s}\, dW_s.$$
If we multiply both sides by $e^{-u^2 t/2}$, which is a constant and hence adapted, we obtain
$$e^{iuW_t} = c_u + \int_0^t H^u_s\, dW_s \qquad (18.3)$$
for an appropriate constant $c_u$ and integrand $H^u_s$.

If $f$ is a smooth function (e.g., $C^\infty$ with compact support), then its Fourier transform $\widehat f$ will also be very nice. So if we multiply (18.3) by $\widehat f(u)$ and integrate over $u$ from $-\infty$ to $\infty$, we obtain
$$f(W_t) = c + \int_0^t H_s\, dW_s$$
for some constant $c$ and some adapted integrand $H$. (We implicitly used Proposition 18.2, because we approximate our integral by Riemann sums, and then take a limit.) Now using Proposition 18.2 we take limits and obtain the proposition.
Note 4. The argument is by induction; let us do the case $n = 2$ for clarity. So we suppose $V = f(W_t)\, g(W_u - W_t)$ with $t < u$.

From Proposition 18.3 we now have that
$$f(W_t) = c + \int_0^t H_s\, dW_s, \qquad g(W_u - W_t) = d + \int_t^u K_s\, dW_s.$$
Set $H'_r = H_r$ if $0 \le r < t$ and 0 otherwise. Set $K'_r = K_r$ if $t \le r < u$ and 0 otherwise. Let $X_s = c + \int_0^s H'_r\, dW_r$ and $Y_s = d + \int_0^s K'_r\, dW_r$. Then
$$\langle X, Y\rangle_s = \int_0^s H'_r K'_r\, dr = 0.$$
Then by the Itô product formula,
$$X_s Y_s = X_0 Y_0 + \int_0^s X_r\, dY_r + \int_0^s Y_r\, dX_r + \langle X, Y\rangle_s = cd + \int_0^s [X_r K'_r + Y_r H'_r]\, dW_r.$$
If we now take $s = u$, that is exactly what we wanted. Note that $X_r K'_r + Y_r H'_r$ is 0 if $r > u$; this is needed to do the general induction step.
19. Completeness.

Now let $P_t$ be a geometric Brownian motion. As we mentioned in Section 16, if $P_t = P_0 \exp(\sigma W_t + (\mu - r - \sigma^2/2)t)$, then given $P_t$ we can determine $W_t$ and vice versa, so the $\sigma$-fields generated by $P_t$ and $W_t$ are the same. Recall $P_t$ satisfies
$$dP_t = \sigma P_t\, dW_t + (\mu - r) P_t\, dt.$$
Define a new probability $\overline{P}$ by
$$\frac{d\overline{P}}{dP} = M_t = \exp(aW_t - a^2 t/2).$$
By the Girsanov theorem,
$$\widetilde W_t = W_t - at$$
is a Brownian motion under $\overline{P}$. So
$$dP_t = \sigma P_t\, d\widetilde W_t + \sigma P_t\, a\, dt + (\mu - r) P_t\, dt.$$
If we choose $a = -(\mu - r)/\sigma$, we then have
$$dP_t = \sigma P_t\, d\widetilde W_t. \qquad (19.1)$$
Since $\widetilde W_t$ is a Brownian motion under $\overline{P}$, then $P_t$ must be a martingale, since it is a stochastic integral of a Brownian motion. We can rewrite (19.1) as
$$d\widetilde W_t = \sigma^{-1} P_t^{-1}\, dP_t. \qquad (19.2)$$
Given an $\mathcal{F}_t$ measurable variable $V$, we know by Theorem 18.1 that there exist a constant $c$ and an adapted process $H_s$ such that $E\int_0^t H_s^2\, ds < \infty$ and
$$V = c + \int_0^t H_s\, d\widetilde W_s.$$
But then using (19.2) we have
$$V = c + \int_0^t H_s\, \sigma^{-1} P_s^{-1}\, dP_s.$$
We have therefore proved

Theorem 19.1. If $P_t$ is a geometric Brownian motion and $V$ is $\mathcal{F}_t$ measurable and square integrable, then there exist a constant $c$ and an adapted process $K_s$ such that
$$V = c + \int_0^t K_s\, dP_s.$$
Moreover, there is a probability $\overline{P}$ under which $P_t$ is a martingale.

The probability $\overline{P}$ is called the risk-neutral measure. Under $\overline{P}$ the present day value of the stock price is a martingale.
20. Black-Scholes formula, I.

We can now derive the formula for the price of any option. Let $T \ge 0$ be a fixed real. If $V$ is $\mathcal{F}_T$ measurable, we have by Theorem 19.1 that
$$V = c + \int_0^T K_s\, dP_s, \qquad (20.1)$$
and under $\overline{P}$, the process $P_s$ is a martingale.

Theorem 20.1. The price of $V$ must be $\overline{E}\,V$.

Proof. This is the no arbitrage principle again. Suppose the price of the option $V$ at time 0 is $W_0$. Starting with 0 dollars, we can sell the option $V$ for $W_0$ dollars, and use the $W_0$ dollars to buy and trade shares of the stock. In fact, if we use $c$ of those dollars, and invest according to the strategy of holding $K_s$ shares at time $s$, then at time $T$ we will have
$$e^{rT}(W_0 - c) + V$$
dollars. At time $T$ the buyer of our option exercises it and we use $V$ dollars to meet that obligation. That leaves us a profit of $e^{rT}(W_0 - c)$ if $W_0 > c$, without any risk. Therefore $W_0$ must be less than or equal to $c$. If $W_0 < c$, we just reverse things: we buy the option instead of sell it, and hold $-K_s$ shares of stock at time $s$. By the same argument, since we can't get a riskless profit, we must have $W_0 \ge c$, or $W_0 = c$.

Finally, under $\overline{P}$ the process $P_t$ is a martingale. So taking expectations in (20.1), we obtain $\overline{E}\,V = c$.

The formula in the statement of Theorem 20.1 is amenable to calculation. Suppose we have the standard European option, where
$$V = e^{-rT}(S_T - K)^+ = (e^{-rT} S_T - e^{-rT} K)^+ = (P_T - e^{-rT} K)^+.$$
Recall that under $\overline{P}$ the stock price satisfies
$$dP_t = \sigma P_t\, d\widetilde W_t,$$
where $\widetilde W_t$ is a Brownian motion under $\overline{P}$. So then
$$P_t = P_0\, e^{\sigma \widetilde W_t - \sigma^2 t/2}.$$
Hence
$$\overline{E}\,V = \overline{E}\big[(P_T - e^{-rT}K)^+\big] = \overline{E}\big[(P_0\, e^{\sigma \widetilde W_T - (\sigma^2/2)T} - e^{-rT}K)^+\big]. \qquad (20.2)$$
We know the density of $\widetilde W_T$ is just $(2\pi T)^{-1/2}\, e^{-y^2/(2T)}$, so we can do some calculations (see Note 1) and end up with the famous Black-Scholes formula:
$$W_0 = x\,\Phi(g(x, T)) - K e^{-rT}\,\Phi(h(x, T)),$$
where $\Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^z e^{-y^2/2}\, dy$, $x = P_0 = S_0$,
$$g(x, T) = \frac{\log(x/K) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}, \qquad h(x, T) = g(x, T) - \sigma\sqrt{T}.$$
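The formula is straightforward to implement, and (20.2) gives an independent Monte Carlo check, since under $\overline{P}$ one only needs draws of $\widetilde W_T \sim N(0, T)$. The sketch below is illustrative; the parameter values are arbitrary.

```python
import math
import numpy as np

# Black-Scholes call price; Phi is written via erf to avoid dependencies.
def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def black_scholes_call(x, K, r, sigma, T):
    g = (math.log(x / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    h = g - sigma * math.sqrt(T)
    return x * Phi(g) - K * math.exp(-r * T) * Phi(h)

# Monte Carlo evaluation of (20.2): average the discounted payoff
# (P_0 e^{sigma W_T - sigma^2 T/2} - e^{-rT} K)^+ over W_T ~ N(0, T).
x, K, r, sigma, T = 100.0, 95.0, 0.05, 0.2, 1.0
rng = np.random.default_rng(4)
w = rng.normal(0.0, math.sqrt(T), size=1_000_000)
payoff = np.maximum(x * np.exp(sigma * w - sigma**2 * T / 2)
                    - math.exp(-r * T) * K, 0.0)
print(f"formula: {black_scholes_call(x, K, r, sigma, T):.4f}   "
      f"MC: {payoff.mean():.4f}")
```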
It is of considerable interest that the final formula depends on $\sigma$ but is completely independent of $\mu$. The reason for that can be explained as follows. Under $\overline{P}$ the process $P_t$ satisfies $dP_t = \sigma P_t\, d\widetilde W_t$, where $\widetilde W_t$ is a Brownian motion. Therefore, similarly to formulas we have already done,
$$P_t = P_0\, e^{\sigma \widetilde W_t - \sigma^2 t/2},$$
and there is no $\mu$ present here. (We used the Girsanov formula to get rid of the $\mu$.) The price of the option $V$ is
$$\overline{E}\,[P_T - e^{-rT}K]^+, \qquad (20.3)$$
which is independent of $\mu$ since $P_t$ is.
Note 1. We want to calculate
$$\overline{E}\big[(x\, e^{\sigma \widetilde W_T - \sigma^2 T/2} - e^{-rT}K)^+\big], \qquad (20.4)$$
where $\widetilde W_t$ is a Brownian motion under $\overline{P}$ and we write $x$ for $P_0 = S_0$. Since $\widetilde W_T$ is a normal random variable with mean 0 and variance $T$, we can write it as $\sqrt{T}\,Z$, where $Z$ is a standard mean 0 variance 1 normal random variable.

Now
$$x\, e^{\sigma\sqrt{T}\,Z - \sigma^2 T/2} > e^{-rT}K$$
if and only if
$$\log x + \sigma\sqrt{T}\,Z - \sigma^2 T/2 > -rT + \log K,$$
or if
$$Z > \frac{\sigma^2 T/2 - rT + \log K - \log x}{\sigma\sqrt{T}}.$$
We write $z_0$ for the right hand side of the above inequality. Recall that $1 - \Phi(z) = \Phi(-z)$ for all $z$ by the symmetry of the normal density. So (20.4) is equal to
$$\frac{1}{\sqrt{2\pi}} \int_{z_0}^\infty \big(x\, e^{\sigma\sqrt{T}\,z - \sigma^2 T/2} - e^{-rT}K\big)\, e^{-z^2/2}\, dz$$
$$= x\, \frac{1}{\sqrt{2\pi}} \int_{z_0}^\infty e^{-\frac{1}{2}(z^2 - 2\sigma\sqrt{T}\,z + \sigma^2 T)}\, dz - K e^{-rT}\, \frac{1}{\sqrt{2\pi}} \int_{z_0}^\infty e^{-z^2/2}\, dz$$
$$= x\, \frac{1}{\sqrt{2\pi}} \int_{z_0}^\infty e^{-\frac{1}{2}(z - \sigma\sqrt{T})^2}\, dz - K e^{-rT}\,(1 - \Phi(z_0))$$
$$= x\, \frac{1}{\sqrt{2\pi}} \int_{z_0 - \sigma\sqrt{T}}^\infty e^{-y^2/2}\, dy - K e^{-rT}\,\Phi(-z_0)$$
$$= x\,\big(1 - \Phi(z_0 - \sigma\sqrt{T})\big) - K e^{-rT}\,\Phi(-z_0)$$
$$= x\,\Phi(\sigma\sqrt{T} - z_0) - K e^{-rT}\,\Phi(-z_0).$$
This is the Black-Scholes formula if we observe that $\sigma\sqrt{T} - z_0 = g(x, T)$ and $-z_0 = h(x, T)$.
21. Hedging strategies.

The previous section allows us to compute the value of any option, but we would also like to know what the hedging strategy is. This means, if we know
$$V = \overline{E}\,V + \int_0^T H_s\, dS_s,$$
what should $H_s$ be? This might be important to know if we wanted to duplicate an option that was not available in the marketplace, or if we worked for a bank and wanted to provide an option for sale.

It is not always possible to compute $H$, but in many cases of interest it is possible. We illustrate one technique with two examples.

First, suppose we want to hedge the standard European call $V = e^{-rT}(S_T - K)^+ = (P_T - e^{-rT}K)^+$. We are working here with the risk-neutral probability only. It turns out it makes no difference: the definition of $\int_0^t H_s\, dX_s$ for a semimartingale $X$ does not depend on the probability $P$, other than worrying about some integrability conditions.

We can rewrite $V$ as
$$V = \overline{E}\,V + g(\widetilde W_T),$$
where
$$g(x) = (P_0\, e^{\sigma x - \sigma^2 T/2} - e^{-rT}K)^+ - \overline{E}\,V.$$
Therefore the expectation of $g(\widetilde W_T)$ is 0. Recall that under $\overline{P}$, $\widetilde W$ is a Brownian motion. If we write $g(\widetilde W_T)$ as
$$\int_0^T H_s\, d\widetilde W_s, \qquad (21.1)$$
then since $dP_t = \sigma P_t\, d\widetilde W_t$, we have
$$g(\widetilde W_T) = \int_0^T \frac{1}{\sigma P_s}\, H_s\, dP_s. \qquad (21.2)$$
Therefore it suffices to find the representation of the form (21.1).
Recall from the section on the Markov property that
$$P_t f(x) = E^x f(\widetilde W_t) = E f(x + \widetilde W_t) = \int \frac{1}{\sqrt{2\pi t}}\, e^{-y^2/2t}\, f(x + y)\, dy.$$
Let $M_t = \overline{E}[g(\widetilde W_T) \mid \mathcal{F}_t]$. By Proposition 4.3, we know that $M_t$ is a martingale. By the Markov property, Proposition 17.2, we see that
$$M_t = \overline{E}^{\widetilde W_t}\big[g(\widetilde W_{T-t})\big] = P_{T-t}\, g(\widetilde W_t). \qquad (21.3)$$
Now let us apply Itô's formula with the function $f(x_1, x_2) = P_{x_2} g(x_1)$ to the process $X_t = (X^1_t, X^2_t) = (\widetilde W_t, T - t)$. So we need to use the multidimensional version of Itô's formula. We have $dX^1_t = d\widetilde W_t$ and $dX^2_t = -dt$. Since $X^2_t$ is a decreasing process and has no martingale part, then $d\langle X^2\rangle_t = 0$ and $d\langle X^1, X^2\rangle_t = 0$, while $d\langle X^1\rangle_t = dt$. Itô's formula says that
$$f(X^1_t, X^2_t) = f(X^1_0, X^2_0) + \int_0^t \sum_{i=1}^2 f_{x_i}(X_s)\, dX^i_s + \frac{1}{2} \int_0^t \sum_{i,j=1}^2 \frac{\partial^2 f}{\partial x_i \partial x_j}(X_s)\, d\langle X^i, X^j\rangle_s$$
$$= c + \int_0^t f_{x_1}(X_s)\, d\widetilde W_s + \text{some terms with } ds.$$
But we know that $f(X_t) = P_{T-t}\, g(\widetilde W_t) = M_t$ is a martingale, so the sum of the terms involving $ds$ must be zero; if not, $f(X_t)$ would have a bounded variation part. We conclude
$$M_t = \int_0^t \frac{\partial}{\partial x} P_{T-s}\, g(\widetilde W_s)\, d\widetilde W_s.$$
If we take $t = T$, we then have
$$g(\widetilde W_T) = M_T = \int_0^T \frac{\partial}{\partial x} P_{T-s}\, g(\widetilde W_s)\, d\widetilde W_s,$$
and we have our representation.
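The integrand $\frac{\partial}{\partial x} P_{T-s}\, g(\widetilde W_s)$ can be evaluated numerically: compute $P_\tau g(x) = Eg(x + W_\tau)$ by quadrature against the normal density and then difference in $x$. The sketch below does this for the call payoff, with arbitrary parameters; the function names and grid sizes are ad hoc choices, not anything from the notes.

```python
import math
import numpy as np

# Numerically evaluate d/dx P_{T-s} g(x), the integrand in the martingale
# representation of g(tilde W_T), for the call payoff of Section 20.
P0, K, r, sigma, T = 100.0, 95.0, 0.05, 0.2, 1.0

def g(x):
    # the call payoff written as a function of tilde W_T; centering by
    # E-bar V is omitted since constants drop out after differentiation
    return max(P0 * math.exp(sigma * x - sigma**2 * T / 2)
               - math.exp(-r * T) * K, 0.0)

def semigroup(tau, x, n=4001, span=8.0):
    # P_tau g(x) = E g(x + W_tau), integrated against the N(0, tau) density
    y = np.linspace(-span * math.sqrt(tau), span * math.sqrt(tau), n)
    dens = np.exp(-y**2 / (2 * tau)) / math.sqrt(2 * math.pi * tau)
    vals = np.array([g(x + yi) for yi in y])
    return float((vals * dens).sum() * (y[1] - y[0]))

s, x, eps = 0.5, 0.1, 1e-3      # evaluate at time s, at tilde W_s = x
H = (semigroup(T - s, x + eps) - semigroup(T - s, x - eps)) / (2 * eps)
print("d/dx P_{T-s} g at (s, x):", H)
```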
For a second example, let's look at the sell-high option. Here the payoff is $\sup_{s \le T} S_s$, the largest the stock price ever is up to time $T$. This is $\mathcal{F}_T$ measurable, so we can compute its value. How can one get the equivalent outcome without looking into the future?

For simplicity, let us suppose the interest rate $r$ is 0. Let $N_t = \sup_{s \le t} S_s$, the maximum up to time $t$. It is not the case that $N_t$ is a Markov process. Intuitively, the reasoning goes like this: suppose the maximum up to time 1 is \$100, and we want to predict the maximum up to time 2. If the stock price at time 1 is close to \$100, then we have one prediction, while if the stock price at time 1 is close to \$2, we would definitely have another prediction. So the prediction for $N_2$ does not depend just on $N_1$, but also on the stock price at time 1. This same intuitive reasoning does suggest, however, that the triple $Z_t = (S_t, N_t, t)$ is a Markov process, and this turns out to be correct. Adding in the information about the current stock price gives a certain amount of evidence to predict the future values of $N_t$; adding in the history of the stock prices up to time $t$ gives no additional information.

Once we believe this, the rest of the argument is very similar to the first example. Let $P_u f(z) = E^z f(Z_u)$, where $z = (s, n, t)$. Let $g(Z_t) = N_t - \overline{E}\,N_T$. Then
$$M_t = \overline{E}[g(Z_T) \mid \mathcal{F}_t] = \overline{E}^{Z_t}[g(Z_{T-t})] = P_{T-t}\, g(Z_t).$$
We then let $f(s, n, t) = P_{T-t}\, g(s, n, t)$ and apply Itô's formula. The process $N_t$ is always increasing, so has no martingale part, and hence $\langle N\rangle_t = 0$. When we apply Itô's formula, we get a $dS_t$ term, which is the martingale term, we get some terms involving $dt$, which are of bounded variation, and we get a term involving $dN_t$, which is also of bounded variation. But $M_t$ is a martingale, so all the $dt$ and $dN_t$ terms must cancel. Therefore we should be left with the martingale term, which is
$$\int_0^t \frac{\partial}{\partial s} P_{T-s}\, g(S_s, N_s, s)\, dS_s,$$
where, as above, $g(s, n, t) = n - \overline{E}\,N_T$ and $\partial/\partial s$ denotes the derivative in the first (stock price) coordinate. This gives us our hedging strategy for the sell-high option, and it can be explicitly calculated.

There is another way to calculate hedging strategies, using what is known as the Clark-Haussmann-Ocone formula. This is a more complicated procedure, and most cases can be done as well by an appropriate use of the Markov property.
22. Black-Scholes formula, II.

Here is a second approach to the Black-Scholes formula. This approach works for European calls and several other options, but does not work in the generality that the first approach does. On the other hand, it allows one to compute more easily what the equivalent strategy of buying or selling stock should be to duplicate the outcome of the given option. In this section we work with the actual price of the stock instead of the present value.

Let $V_t$ be the value of the portfolio and assume $V_t = f(S_t, T - t)$ for all $t$, where $f$ is some function that is sufficiently smooth. We also want $V_T = (S_T - K)^+$.

Recall Itô's formula. The multivariate version is
$$f(X_t) = f(X_0) + \int_0^t \sum_{i=1}^d f_{x_i}(X_s)\, dX^i_s + \frac{1}{2} \int_0^t \sum_{i,j=1}^d f_{x_i x_j}(X_s)\, d\langle X^i, X^j\rangle_s.$$
Here $X_t = (X^1_t, \ldots, X^d_t)$ and $f_{x_i}$ denotes the partial derivative of $f$ in the $x_i$ direction, and similarly for the second partial derivatives.

We apply this with $d = 2$ and $X_t = (S_t, T - t)$. From the SDE that $S_t$ solves, $d\langle X^1\rangle_t = \sigma^2 S_t^2\, dt$, $\langle X^2\rangle_t = 0$ (since $T - t$ is of bounded variation and hence has no martingale part), and $\langle X^1, X^2\rangle_t = 0$. Also, $dX^2_t = -dt$. Then
$$V_t - V_0 = f(S_t, T - t) - f(S_0, T) \qquad (22.1)$$
$$= \int_0^t f_x(S_u, T - u)\, dS_u - \int_0^t f_s(S_u, T - u)\, du + \frac{1}{2} \int_0^t \sigma^2 S_u^2\, f_{xx}(S_u, T - u)\, du.$$
On the other hand, if $a_u$ and $b_u$ are the number of shares of stock and bonds, respectively, held at time $u$,
$$V_t - V_0 = \int_0^t a_u\, dS_u + \int_0^t b_u\, d\beta_u. \qquad (22.2)$$
This formula says that the increase in net worth is given by the profit we obtain by holding $a_u$ shares of stock and $b_u$ bonds at time $u$. Since the value of the portfolio at time $t$ is
$$V_t = a_t S_t + b_t \beta_t,$$
we must have
$$b_t = (V_t - a_t S_t)/\beta_t. \qquad (22.3)$$
Also, recall
$$\beta_t = \beta_0\, e^{rt}. \qquad (22.4)$$
To match up (22.2) with (22.1), we must therefore have
$$a_t = f_x(S_t, T - t) \qquad (22.5)$$
and
$$r\big[f(S_t, T - t) - S_t f_x(S_t, T - t)\big] = -f_s(S_t, T - t) + \frac{1}{2}\sigma^2 S_t^2\, f_{xx}(S_t, T - t) \qquad (22.6)$$
for all $t$ and all $S_t$. (22.6) leads to the parabolic PDE
$$f_s = \frac{1}{2}\sigma^2 x^2 f_{xx} + rx f_x - rf, \qquad (x, s) \in (0, \infty) \times [0, T), \qquad (22.7)$$
and
$$f(x, 0) = (x - K)^+. \qquad (22.8)$$
Solving this equation for $f$, $f(x, T)$ is what $V_0$ should be, i.e., the cost of setting up the equivalent portfolio. Equation (22.5) shows what the trading strategy should be.
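Equation (22.7) can also be solved numerically. Below is a minimal explicit finite-difference sketch, marching forward in $s$ from the payoff (22.8); the grid sizes are illustrative, and the small time step is needed for stability of the explicit scheme. The output can be compared with the Black-Scholes formula of Section 20.

```python
import numpy as np

# Explicit finite differences for f_s = (1/2) sigma^2 x^2 f_xx + r x f_x - r f
# with f(x, 0) = (x - K)^+, marching forward in s from the payoff.
K, r, sigma, T = 95.0, 0.05, 0.2, 1.0
x_max, nx, ns = 400.0, 400, 20_000
dx, ds = x_max / nx, T / ns          # ds is small enough for stability

x = np.linspace(0.0, x_max, nx + 1)
f = np.maximum(x - K, 0.0)           # f(x, 0), the payoff

for step in range(ns):
    s = (step + 1) * ds
    f_x  = (f[2:] - f[:-2]) / (2 * dx)
    f_xx = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
    fn = f.copy()
    fn[1:-1] += ds * (0.5 * sigma**2 * x[1:-1]**2 * f_xx
                      + r * x[1:-1] * f_x - r * f[1:-1])
    fn[0] = 0.0
    fn[-1] = x_max - K * np.exp(-r * s)   # f ~ x - K e^{-rs} for large x
    f = fn

print("f(100, T) from the PDE:", f[int(100.0 / dx)])
```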
23. The fundamental theorem of finance.

In Section 19, we showed there was a probability measure under which $P_t = e^{-rt}S_t$ was a martingale. This is true very generally. Let $S_t$ be the price of a security in today's dollars. We will suppose $S_t$ is a continuous semimartingale, and can be written $S_t = M_t + A_t$.

Arbitrage means that there is a trading strategy $H_s$ such that there is no chance that we lose anything and there is a positive profit with positive probability. Mathematically, arbitrage exists if there exists $H_s$ that is adapted and satisfies a suitable integrability condition with
$$\int_0^T H_s\, dS_s \ge 0, \quad \text{a.s.}$$
and
$$P\Big(\int_0^T H_s\, dS_s > b\Big) > \varepsilon$$
for some $b, \varepsilon > 0$. It turns out that to get a necessary and sufficient condition for $S_t$ to be a martingale, we need a slightly weaker condition.

The NFLVR condition ("no free lunch with vanishing risk") is that there do not exist a fixed time $T$, $\varepsilon, b > 0$, and $H_n$ (that are adapted and satisfy the appropriate integrability conditions) such that
$$\int_0^t H_n(s)\, dS_s > -\frac{1}{n}, \quad \text{a.s.}$$
for all $t \le T$ and
$$P\Big(\int_0^T H_n(s)\, dS_s > b\Big) > \varepsilon.$$
Here $T, b, \varepsilon$ do not depend on $n$. The condition rules out strategies that, with probability at least $\varepsilon$, make a profit of $b$ while risking a loss no larger than $1/n$.
Two probabilities $P$ and $Q$ are equivalent if $P(A) = 0$ if and only if $Q(A) = 0$, i.e., the two probabilities have the same collection of sets of probability zero. $Q$ is an equivalent martingale measure if $Q$ is a probability measure, $Q$ is equivalent to $P$, and $S_t$ is a martingale under $Q$.

Theorem 23.1. If $S_t$ is a continuous semimartingale and the NFLVR condition holds, then there exists an equivalent martingale measure $Q$.

The proof is rather technical and involves some heavy-duty measure theory, so we will only examine a part of it. Suppose that we happened to have $S_t = W_t + f(t)$, where $f(t)$ is a deterministic increasing continuous function. To obtain the equivalent martingale measure, we would want to let
$$M_t = e^{-\int_0^t f'(s)\, dW_s - \frac{1}{2}\int_0^t (f'(s))^2\, ds}.$$
In order for $M_t$ to make sense, we need $f$ to be differentiable. A result from measure theory says that if $f$ is not differentiable, then we can find a subset $A$ of $[0, \infty)$ such that $\int_0^t 1_A(s)\, ds = 0$ but the amount of increase of $f$ over the set $A$ is positive. This last statement is phrased mathematically by saying
$$\int_0^t 1_A(s)\, df(s) > 0,$$
where the integral is a Riemann-Stieltjes (or better, a Lebesgue-Stieltjes) integral. Then if we hold $H_s = 1_A(s)$ shares at time $s$, our net profit is
$$\int_0^t H_s\, dS_s = \int_0^t 1_A(s)\, dW_s + \int_0^t 1_A(s)\, df(s).$$
The second term would be positive since this is the amount of increase of $f$ over the set $A$. The first term is 0, since
$$E\Big(\int_0^t 1_A(s)\, dW_s\Big)^2 = \int_0^t 1_A(s)^2\, ds = 0.$$
So our net profit is nonrandom and positive, or in other words, we have made a net gain without risk. This contradicts no arbitrage. See Note 1 for more on this.

Sometimes Theorem 23.1 is called the first fundamental theorem of asset pricing. The second fundamental theorem is the following.

Theorem 23.2. The equivalent martingale measure is unique if and only if the market is complete.

We will not prove this.
Note 1. We will not prove Theorem 23.1, but let us give a few more indications of what is going on. First of all, recall the Cantor set. This is where $E_1 = [0, 1]$, $E_2$ is the set obtained from $E_1$ by removing the open interval $(\frac{1}{3}, \frac{2}{3})$, $E_3$ is the set obtained from $E_2$ by removing the middle third from each of the two intervals making up $E_2$, and so on. The intersection, $E = \bigcap_{n=1}^\infty E_n$, is the Cantor set; it is closed, nonempty, in fact uncountable, yet it contains no intervals. Also, the Lebesgue measure of $E$ is 0. We set $A = E$. Let $f$ be the Cantor-Lebesgue function. This is the function that is equal to 0 on $(-\infty, 0]$, 1 on $[1, \infty)$, equal to $\frac{1}{2}$ on the interval $[\frac{1}{3}, \frac{2}{3}]$, equal to $\frac{1}{4}$ on $[\frac{1}{9}, \frac{2}{9}]$, equal to $\frac{3}{4}$ on $[\frac{7}{9}, \frac{8}{9}]$, and is defined similarly on each interval making up the complement of $A$. It turns out we can define $f$ on $A$ so that it is continuous, and one can show $\int_0^1 1_A(s)\, df(s) = 1$. So this $A$ and $f$ provide a concrete example of what we were discussing.
24. American puts.

The proper valuation of American puts is one of the important unsolved problems in mathematical finance. Recall that a European put pays out $(K - S_T)^+$ at time $T$, while an American put allows one to exercise early. If one exercises an American put at time $t < T$, one receives $(K - S_t)^+$. Then during the period $[t, T]$ one receives interest, and the amount one has is $(K - S_t)^+ e^{r(T-t)}$. In today's dollars that is the equivalent of $(K - S_t)^+ e^{-rt}$. One wants to find a rule, known as the exercise policy, for when to exercise the put, and then one wants to see what the value is for that policy. Since one cannot look into the future, one is in fact looking for a stopping time $\tau$ that maximizes
$$\overline{E}\, e^{-r\tau}(K - S_\tau)^+.$$
There is no good theoretical solution to finding the stopping time $\tau$, although good approximations exist. We will, however, discuss just a bit of the theory of optimal stopping, which reworks the problem into another form.
Let $G_t$ denote the amount you will receive at time $t$. For American puts, we set
$$G_t = e^{-rt}(K - S_t)^+.$$
Our problem is to maximize $\overline{E}\,G_\tau$ over all stopping times $\tau$.

We first need

Proposition 24.1. If $S$ and $T$ are bounded stopping times with $S \le T$ and $M$ is a martingale, then
$$E[M_T \mid \mathcal{F}_S] = M_S.$$

Proof. Let $A \in \mathcal{F}_S$. Define $U$ by
$$U(\omega) = \begin{cases} S(\omega) & \text{if } \omega \in A,\\ T(\omega) & \text{if } \omega \notin A. \end{cases}$$
It is easy to see that $U$ is a stopping time, so by Doob's optional stopping theorem,
$$EM_0 = EM_U = E[M_S; A] + E[M_T; A^c].$$
Also,
$$EM_0 = EM_T = E[M_T; A] + E[M_T; A^c].$$
Taking the difference, $E[M_T; A] = E[M_S; A]$, which is what we needed to show.

Given two supermartingales $X_t$ and $Y_t$, it is routine to check that $X_t \wedge Y_t$ is also a supermartingale. Also, if $X^n_t$ are supermartingales with $X^n_t \downarrow X_t$, one can check that $X_t$ is again a supermartingale. With these facts, one can show that given a process such as $G_t$, there is a least supermartingale larger than $G_t$.

So we define $W_t$ to be a supermartingale (with respect to $\overline{P}$, of course) such that $W_t \ge G_t$ a.s. for each $t$, and if $Y_t$ is another supermartingale with $Y_t \ge G_t$ for all $t$, then $W_t \le Y_t$ for all $t$. We set $\tau = \inf\{t : W_t = G_t\}$. We will show that $\tau$ is the solution to the problem of finding the optimal stopping time. Of course, computing $W_t$ and $\tau$ is another problem entirely.

Let
$$\mathcal{T}_t = \{\tau : \tau \text{ is a stopping time},\ t \le \tau \le T\}.$$
Let
$$V_t = \sup_{\tau \in \mathcal{T}_t} E[G_\tau \mid \mathcal{F}_t].$$
Proposition 24.2. $V_t$ is a supermartingale and $V_t \ge G_t$ for all $t$.

Proof. The fixed time $t$ is a stopping time in $\mathcal{T}_t$, so $V_t \ge E[G_t \mid \mathcal{F}_t] = G_t$. So we only need to show that $V_t$ is a supermartingale.

Suppose $s < t$. Let $\pi$ be the stopping time in $\mathcal{T}_t$ for which $V_t = E[G_\pi \mid \mathcal{F}_t]$. Then $\pi \in \mathcal{T}_t \subset \mathcal{T}_s$, and
$$E[V_t \mid \mathcal{F}_s] = E[G_\pi \mid \mathcal{F}_s] \le \sup_{\tau \in \mathcal{T}_s} E[G_\tau \mid \mathcal{F}_s] = V_s.$$
Proposition 24.3. If $Y_t$ is a supermartingale with $Y_t \ge G_t$ for all $t$, then $Y_t \ge V_t$.

Proof. If $\tau \in \mathcal{T}_t$, then since $Y_t$ is a supermartingale, we have
$$E[Y_\tau \mid \mathcal{F}_t] \le Y_t.$$
So
$$V_t = \sup_{\tau \in \mathcal{T}_t} E[G_\tau \mid \mathcal{F}_t] \le \sup_{\tau \in \mathcal{T}_t} E[Y_\tau \mid \mathcal{F}_t] \le Y_t.$$

What we have shown is that $W_t$ is equal to $V_t$. It remains to show that $\tau$ is optimal. There may in fact be more than one optimal time, but in any case $\tau$ is one of them. Recall that $\mathcal{F}_0$ is the $\sigma$-field generated by $S_0$, and hence consists of only $\emptyset$ and $\Omega$.
Proposition 24.4. $\tau$ is an optimal stopping time.

Proof. Since $\mathcal{F}_0$ is trivial, $V_0 = \sup_{\tau \in \mathcal{T}_0} E[G_\tau \mid \mathcal{F}_0] = \sup_\tau E[G_\tau]$. Let $\pi$ be a stopping time where the supremum is attained. Then
$$V_0 \ge E[V_\pi \mid \mathcal{F}_0] = E[V_\pi] \ge E[G_\pi] = V_0.$$
Therefore all the inequalities must be equalities. Since $V_\pi \ge G_\pi$, we must have $V_\pi = G_\pi$. Since $\tau$ was the first time that $W_t$ equals $G_t$ and $W_t = V_t$, we see that $\tau \le \pi$. Then
$$E[G_\tau] = E[V_\tau] \ge EV_\pi = EG_\pi.$$
Therefore the expected value of $G_\tau$ is at least as large as the expected value of $G_\pi$, and hence $\tau$ is also an optimal stopping time.

The above representation of the optimal stopping problem may seem rather bizarre. However, this procedure gives good usable results for some optimal stopping problems. An example is where $G_t$ is a function of just $W_t$.
25. Term structure.

We now want to consider the case where the interest rate is nondeterministic, that is, it has a random component. To do so, we take another look at option pricing.

Accumulation factor. Let $r(t)$ be the (random) interest rate at time $t$. Let
$$\beta(t) = e^{\int_0^t r(u)\, du}$$
be the accumulation factor. One dollar at time $T$ will be worth $1/\beta(T)$ in today's dollars.

Let $V = (S_T - K)^+$ be the payoff on the standard European call option at time $T$ with strike price $K$, where $S_t$ is the stock price. In today's dollars it is worth, as we have seen, $V/\beta(T)$. Therefore the price of the option should be
$$E\Big[\frac{V}{\beta(T)}\Big].$$
We can also get an expression for the value of the option at time $t$. The payoff, in terms of dollars at time $t$, should be the payoff at time $T$ discounted by the interest or inflation rate, and so should be
$$e^{-\int_t^T r(u)\, du}\,(S_T - K)^+.$$
Therefore the value at time $t$ is
$$E\Big[e^{-\int_t^T r(u)\, du}(S_T - K)^+ \mid \mathcal{F}_t\Big] = E\Big[\frac{\beta(t)}{\beta(T)}\, V \mid \mathcal{F}_t\Big] = \beta(t)\, E\Big[\frac{V}{\beta(T)} \mid \mathcal{F}_t\Big].$$
From now on we assume we have already changed to the risk-neutral measure, and we write $P$ instead of $\overline{P}$.

Zero coupon. A zero coupon bond with maturity date $T$ pays \$1 at time $T$ and nothing before. This is equivalent to an option with payoff value $V = 1$. So its price at time $t$, as above, should be
$$B(t, T) = \beta(t)\, E\Big[\frac{1}{\beta(T)} \mid \mathcal{F}_t\Big] = E\Big[e^{-\int_t^T r(u)\, du} \mid \mathcal{F}_t\Big].$$
Let's derive the SDE satisfied by $B(t, T)$. Let $N_t = E[1/\beta(T) \mid \mathcal{F}_t]$. This is a martingale. By the martingale representation theorem,
$$N_t = E[1/\beta(T)] + \int_0^t H_s\, dW_s$$
for some adapted integrand $H_s$. So $B(t, T) = \beta(t) N_t$. Here $T$ is fixed. By Itô's product formula,
$$dB(t, T) = \beta(t)\, dN_t + N_t\, d\beta(t) = \beta(t) H_t\, dW_t + N_t\, r(t) \beta(t)\, dt = \beta(t) H_t\, dW_t + B(t, T)\, r(t)\, dt,$$
and we thus have
$$dB(t, T) = \beta(t) H_t\, dW_t + B(t, T)\, r(t)\, dt. \qquad (25.1)$$
Forward rates. We now discuss forward rates. If one holds $T$ fixed and graphs $B(t, T)$ as a function of $t$, the graph will not clearly show the behavior of $r$. One sometimes specifies interest rates by what are known as forward rates.

Suppose we want to borrow \$1 at time $T$ and repay it with interest at time $T + \varepsilon$. At the present time we are at time $t \le T$. Let us try to accomplish this by buying a zero coupon bond with maturity date $T$ and shorting (i.e., selling) $N$ zero coupon bonds with maturity date $T + \varepsilon$. Our outlay of money at time $t$ is
$$B(t, T) - N B(t, T + \varepsilon) = 0$$
if we set
$$N = B(t, T)/B(t, T + \varepsilon).$$
So our outlay at time $t$ is 0. At time $T$ we receive \$1. At time $T + \varepsilon$ we pay $B(t, T)/B(t, T + \varepsilon)$. The effective rate of interest $R$ over the time period $T$ to $T + \varepsilon$ is
$$e^{\varepsilon R} = \frac{B(t, T)}{B(t, T + \varepsilon)}.$$
Solving for $R$, we have
$$R = \frac{\log B(t, T) - \log B(t, T + \varepsilon)}{\varepsilon}.$$
We now let $\varepsilon \to 0$. We define the forward rate by
$$f(t, T) = -\frac{\partial}{\partial T} \log B(t, T). \qquad (25.2)$$
Sometimes interest rates are specified by giving $f(t, T)$ instead of $B(t, T)$ or $r(t)$.

Recovering B from f. Let us see how to recover $B(t, T)$ from $f(t, T)$. Integrating, we have
$$\int_t^T f(t, u)\, du = -\int_t^T \frac{\partial}{\partial u} \log B(t, u)\, du = -\log B(t, u)\Big|_{u=t}^{u=T} = -\log B(t, T) + \log B(t, t).$$
Since $B(t, t)$ is the value of a zero coupon bond at time $t$ which expires at time $t$, it is equal to 1, and its log is 0. Solving for $B(t, T)$, we have
$$B(t, T) = e^{-\int_t^T f(t, u)\, du}. \qquad (25.3)$$
Recovering r from f. Next, let us show how to recover $r(t)$ from the forward rates. We have
$$B(t, T) = E\Big[e^{-\int_t^T r(u)\, du} \mid \mathcal{F}_t\Big].$$
Differentiating,
$$\frac{\partial}{\partial T} B(t, T) = E\Big[-r(T)\, e^{-\int_t^T r(u)\, du} \mid \mathcal{F}_t\Big].$$
Evaluating this when $T = t$, we obtain
$$\frac{\partial}{\partial T} B(t, T)\Big|_{T=t} = -E[r(t) \mid \mathcal{F}_t] = -r(t). \qquad (25.4)$$
On the other hand, from (25.3) we have
$$\frac{\partial}{\partial T} B(t, T) = -f(t, T)\, e^{-\int_t^T f(t, u)\, du}.$$
Setting $T = t$ we obtain $-f(t, t)$. Comparing with (25.4) yields
$$r(t) = f(t, t). \qquad (25.5)$$
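Relations (25.2), (25.3), and (25.5) are easy to exercise numerically: build a discount curve from a made-up deterministic short rate, difference its log to get forward rates, and rebuild the curve via (25.3). Everything in the sketch below is an illustrative assumption (in particular the hypothetical rate function), not data from the notes.

```python
import numpy as np

# Discount curve from a hypothetical deterministic rate r(u), forward
# rates f(0, T) = -d/dT log B(0, T), and reconstruction of B via (25.3).
grid = np.linspace(0.0, 10.0, 201)
du = grid[1] - grid[0]
r = 0.03 + 0.01 * np.sin(grid)                     # made-up r(u)

integral = np.concatenate([[0.0], np.cumsum(r[:-1] * du)])
B = np.exp(-integral)                              # B(0, T) on the grid

f = -(np.log(B[1:]) - np.log(B[:-1])) / du         # (25.2), differenced
print(f[:3], r[:3])                                # here f(0, T) ~ r(T)
B_rebuilt = np.exp(-np.cumsum(f * du))             # (25.3)
print(np.max(np.abs(B_rebuilt - B[1:])))           # ~ 0
```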
26. Some interest rate models.

Heath-Jarrow-Morton model

Instead of specifying $r$, the Heath-Jarrow-Morton model (HJM) specifies the forward rates:
$$df(t, T) = \sigma(t, T)\, dW_t + \alpha(t, T)\, dt. \qquad (26.1)$$
Let us derive the SDE that $B(t, T)$ satisfies. Let
$$\sigma^*(t, T) = \int_t^T \sigma(t, u)\, du, \qquad \alpha^*(t, T) = \int_t^T \alpha(t, u)\, du.$$
Since $B(t, T) = \exp(-\int_t^T f(t, u)\, du)$, we derive the SDE for $B$ by using Itô's formula with the function $e^x$ and $X_t = -\int_t^T f(t, u)\, du$. We have
$$dX_t = f(t, t)\, dt - \int_t^T df(t, u)\, du = r(t)\, dt - \int_t^T [\alpha(t, u)\, dt + \sigma(t, u)\, dW_t]\, du$$
$$= r(t)\, dt - \Big(\int_t^T \alpha(t, u)\, du\Big)\, dt - \Big(\int_t^T \sigma(t, u)\, du\Big)\, dW_t = r(t)\, dt - \alpha^*(t, T)\, dt - \sigma^*(t, T)\, dW_t.$$
Therefore, using Itô's formula,
$$dB(t, T) = B(t, T)\, dX_t + \frac{1}{2} B(t, T)\, (\sigma^*(t, T))^2\, dt = B(t, T)\Big[r(t) - \alpha^* + \frac{1}{2}(\sigma^*)^2\Big]\, dt - \sigma^* B(t, T)\, dW_t.$$
From (25.1) we know the $dt$ term must be $B(t, T)\, r(t)\, dt$, hence
$$dB(t, T) = B(t, T)\, r(t)\, dt - \sigma^* B(t, T)\, dW_t.$$
Comparing with (25.1), we see that if $P$ is the risk-neutral measure, we have
$$\alpha^* = \frac{1}{2}(\sigma^*)^2.$$
See Note 1 for more on this.
Hull and White model

In this model, the interest rate $r$ is specified as the solution to the SDE
$$dr(t) = \sigma(t)\, dW_t + \big(a(t) - b(t) r(t)\big)\, dt. \qquad (26.2)$$
Here $\sigma, a, b$ are deterministic functions. The stochastic integral term introduces randomness, while the $a - br$ term causes a drift toward $a(t)/b(t)$. (Note that if $\sigma(t) = \sigma$, $a(t) = a$, and $b(t) = b$ are constants with $\sigma = 0$, then (26.2) is the ordinary differential equation $r'(t) = a - br(t)$, whose solution converges to $a/b$.)

(26.2) is one of those SDEs that can be solved explicitly. Let $K(t) = \int_0^t b(u)\, du$. Then
$$d\big(e^{K(t)} r(t)\big) = e^{K(t)} r(t) b(t)\, dt + e^{K(t)}\big(a(t) - b(t) r(t)\big)\, dt + e^{K(t)} \sigma(t)\, dW_t = e^{K(t)} a(t)\, dt + e^{K(t)} \sigma(t)\, dW_t.$$
Integrating both sides,
$$e^{K(t)} r(t) = r(0) + \int_0^t e^{K(u)} a(u)\, du + \int_0^t e^{K(u)} \sigma(u)\, dW_u.$$
Multiplying both sides by $e^{-K(t)}$, we have the explicit solution
$$r(t) = e^{-K(t)} \Big[ r(0) + \int_0^t e^{K(u)} a(u)\, du + \int_0^t e^{K(u)} \sigma(u)\, dW_u \Big].$$
If $F(u)$ is deterministic, then
$$\int_0^t F(u)\, dW_u = \lim \sum F(u_i)\big(W_{u_{i+1}} - W_{u_i}\big).$$
From undergraduate probability, linear combinations of Gaussian r.v.s (Gaussian = normal) are Gaussian, and also limits of Gaussian r.v.s are Gaussian, so we conclude that the r.v. $\int_0^t F(u)\, dW_u$ is Gaussian. We see that the mean at time $t$ is
$$E\, r(t) = e^{-K(t)} \Big[ r(0) + \int_0^t e^{K(u)} a(u)\, du \Big].$$
We know how to calculate the second moment of a stochastic integral, so
$$\operatorname{Var} r(t) = e^{-2K(t)} \int_0^t e^{2K(u)} \sigma(u)^2\, du.$$
(One can similarly calculate the covariance of $r(s)$ and $r(t)$.) Limits of linear combinations of Gaussians are Gaussian, so we can calculate the mean and variance of $\int_0^T r(t)\, dt$ and get an explicit expression for
$$B(0, T) = E\, e^{-\int_0^T r(u)\, du}.$$
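The Gaussian mean and variance formulas are easy to confirm by simulation. The sketch below uses constant coefficients (so $K(t) = bt$ and the integrals have closed forms); the numbers are arbitrary illustrative values.

```python
import numpy as np

# Euler simulation of dr = sigma dW + (a - b r) dt with constant
# coefficients, checked against the Gaussian mean and variance above.
rng = np.random.default_rng(5)
sig, a, b, r0, t, n, paths = 0.01, 0.06, 1.0, 0.03, 2.0, 400, 200_000
dt = t / n

r = np.full(paths, r0)
for _ in range(n):
    r = r + (a - b * r) * dt + sig * rng.normal(0.0, np.sqrt(dt), size=paths)

mean_exact = np.exp(-b * t) * (r0 + (a / b) * (np.exp(b * t) - 1.0))
var_exact = np.exp(-2 * b * t) * sig**2 * (np.exp(2 * b * t) - 1.0) / (2 * b)
print(r.mean(), mean_exact)
print(r.var(), var_exact)
```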
Cox-Ingersoll-Ross model

One drawback of the Hull and White model is that since $r(t)$ is Gaussian, it can take negative values with positive probability, which doesn't make sense. The Cox-Ingersoll-Ross model avoids this by modeling $r$ by the SDE
$$dr(t) = (a - b\, r(t))\, dt + \sigma \sqrt{r(t)}\, dW_t.$$
The difference from the Hull and White model is the square root of $r$ in the stochastic integral term. This square root term implies that when $r(t)$ is small, the fluctuations in $r(t)$ are smaller than they are in the Hull and White model. Provided $a \ge \frac{1}{2}\sigma^2$, it can be shown that $r(t)$ will never hit 0 and will always be positive. Although one cannot solve for $r$ explicitly, one can calculate the distribution of $r$. It turns out to be related to the square of what are known in probability theory as Bessel processes. (The density of $r(t)$,
for example, will be given in terms of Bessel functions.)
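Even without the explicit distribution, the CIR equation is easy to simulate. A common trick, used in the sketch below (our choice, not something from the notes), is a "truncated" Euler scheme that takes the positive part inside the square root, since the discretized path can dip below zero even though the exact process stays positive when $a \ge \sigma^2/2$. The mean can be checked against the ODE $m'(t) = a - b\,m(t)$ satisfied by $m(t) = Er(t)$.

```python
import numpy as np

# Truncated Euler scheme for dr = (a - b r) dt + sigma sqrt(r) dW.
rng = np.random.default_rng(6)
a, b, sig, r0, t, n, paths = 0.04, 1.0, 0.2, 0.03, 2.0, 1_000, 100_000
dt = t / n

r = np.full(paths, r0)
for _ in range(n):
    rp = np.maximum(r, 0.0)          # guard against negative discretized r
    dw = rng.normal(0.0, np.sqrt(dt), size=paths)
    r = r + (a - b * rp) * dt + sig * np.sqrt(rp) * dw

# E r(t) solves m' = a - b m, so m(t) = a/b + (r0 - a/b) e^{-bt}.
print(r.mean(), a / b + (r0 - a / b) * np.exp(-b * t))
```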
Note 1. If $P$ is not the risk-neutral measure, it is still possible that one exists. Let $\theta(t)$ be a function of $t$, let
$$M_t = \exp\Big(\int_0^t \theta(u)\, dW_u - \frac{1}{2}\int_0^t \theta(u)^2\, du\Big),$$
and define $\overline{P}(A) = E[M_T; A]$ for $A \in \mathcal{F}_T$. By the Girsanov theorem,
$$dB(t, T) = B(t, T)\Big[r(t) - \alpha^* + \frac{1}{2}(\sigma^*)^2 + \sigma^*\theta\Big]\, dt - \sigma^* B(t, T)\, d\widetilde W_t,$$
where $\widetilde W_t$ is a Brownian motion under $\overline{P}$. Again, comparing this with (25.1) we must have
$$\alpha^* = \frac{1}{2}(\sigma^*)^2 + \sigma^*\theta.$$
Differentiating with respect to $T$, we obtain
$$\alpha(t, T) = \sigma(t, T)\,\sigma^*(t, T) + \sigma(t, T)\,\theta(t).$$
If we try to solve this equation for $\theta$, there is no reason off-hand that $\theta$ depends only on $t$ and not $T$. However, if $\theta$ does not depend on $T$, $\overline{P}$ will be the risk-neutral measure.
Problems

1. Show $E[X\, E[Y \mid \mathcal{G}]] = E[Y\, E[X \mid \mathcal{G}]]$.

2. Prove that $E[aX_1 + bX_2 \mid \mathcal{G}] = aE[X_1 \mid \mathcal{G}] + bE[X_2 \mid \mathcal{G}]$.

3. Suppose $X_1, X_2, \ldots, X_n$ are independent and for each $i$ we have $P(X_i = 1) = P(X_i = -1) = \frac{1}{2}$. Let $S_n = \sum_{i=1}^n X_i$. Show that $M_n = S_n^3 - 3nS_n$ is a martingale.

4. Let $X_i$ and $S_n$ be as in Problem 3. Let $\varphi(x) = \frac{1}{2}(e^x + e^{-x})$. Show that $M_n = e^{aS_n}\, \varphi(a)^{-n}$ is a martingale for each $a$ real.

5. Suppose $M_n$ is a martingale, $N_n = M_n^2$, and $EN_n < \infty$ for each $n$. Show $E[N_{n+1} \mid \mathcal{F}_n] \ge N_n$ for each $n$. Do not use Jensen's inequality.

6. Suppose $M_n$ is a martingale, $N_n = |M_n|$, and $EN_n < \infty$ for each $n$. Show $E[N_{n+1} \mid \mathcal{F}_n] \ge N_n$ for each $n$. Do not use Jensen's inequality.

7. Suppose $X_n$ is a martingale with respect to $\mathcal{G}_n$ and $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$. Show $X_n$ is a martingale with respect to $\mathcal{F}_n$.

8. Show that if $X_n$ and $Y_n$ are martingales with respect to $\mathcal{F}_n$ and $Z_n = \max(X_n, Y_n)$, then $E[Z_{n+1} \mid \mathcal{F}_n] \ge Z_n$.

9. Let $X_n$ and $Y_n$ be martingales with $EX_n^2 < \infty$ and $EY_n^2 < \infty$. Show
$$EX_nY_n - EX_0Y_0 = \sum_{m=1}^n E(X_m - X_{m-1})(Y_m - Y_{m-1}).$$

10. Consider the binomial asset pricing model with $n = 3$, $u = 3$, $d = \frac{1}{2}$, $r = 0.1$, $S_0 = 20$, and $K = 10$. If $V$ is a European call with strike price $K$ and exercise date $n$, compute explicitly the random variables $V_1$ and $V_2$ and calculate the value $V_0$.

11. In the same model as Problem 10, compute the hedging strategy $\Delta_0, \Delta_1$, and $\Delta_2$.

12. Show that in the binomial asset pricing model the value of the option $V$ at time $k$ is $V_k$.

13. Suppose $X_n$ is a submartingale. Show there exists a martingale $M_n$ such that if $A_n = X_n - M_n$, then $A_0 \le A_1 \le A_2 \le \cdots$ and $A_n$ is $\mathcal{F}_{n-1}$ measurable for each $n$.

14. Suppose $X_n$ is a submartingale and $X_n = M_n + A_n = M'_n + A'_n$, where both $A_n$ and $A'_n$ are $\mathcal{F}_{n-1}$ measurable for each $n$, both $M$ and $M'$ are martingales, both $A_n$ and $A'_n$ increase in $n$, and $A_0 = A'_0$. Show $M_n = M'_n$ for each $n$.

15. Suppose that $S$ and $T$ are stopping times. Show that $\max(S, T)$ and $\min(S, T)$ are also stopping times.
16. Suppose that $S_n$ is a stopping time for each $n$ and $S_1 \le S_2 \le \cdots$. Show $S = \lim_{n\to\infty} S_n$ is also a stopping time. Show that if instead $S_1 \ge S_2 \ge \cdots$ and $S = \lim_{n\to\infty} S_n$, then $S$ is again a stopping time.

17. Let $W_t$ be Brownian motion. Show that $e^{iuW_t + u^2 t/2}$ can be written in the form $1 + \int_0^t H_s\, dW_s$ and give an explicit formula for $H_s$.

18. Suppose $M_t$ is a continuous bounded martingale for which $\langle M\rangle_\infty$ is also bounded. Show that
$$\sum_{i=0}^{2^n - 1} \big(M_{(i+1)/2^n} - M_{i/2^n}\big)^2$$
converges to $\langle M\rangle_1$ as $n \to \infty$.
[Hint: Show that Itô's formula implies
$$\big(M_{(i+1)/2^n} - M_{i/2^n}\big)^2 = \int_{i/2^n}^{(i+1)/2^n} 2\big(M_s - M_{i/2^n}\big)\, dM_s + \langle M\rangle_{(i+1)/2^n} - \langle M\rangle_{i/2^n}.$$
Then sum over $i$ and show that the stochastic integral term goes to zero as $n \to \infty$.]

19. Let $f_\varepsilon(0) = f'_\varepsilon(0) = 0$ and $f''_\varepsilon(x) = \frac{1}{2\varepsilon}\, 1_{(-\varepsilon,\varepsilon)}(x)$. You may assume that it is valid to use Itô's formula with the function $f_\varepsilon$ (note $f_\varepsilon \notin C^2$). Show that
$$\frac{1}{2\varepsilon} \int_0^t 1_{(-\varepsilon,\varepsilon)}(W_s)\, ds$$
converges as $\varepsilon \to 0$ to a continuous nondecreasing process that is not identically zero and that increases only when $W_t$ is at 0.
[Hint: Use Itô's formula to rewrite $\frac{1}{2\varepsilon}\int_0^t 1_{(-\varepsilon,\varepsilon)}(W_s)\, ds$ in terms of $f_\varepsilon(W_t) - f_\varepsilon(W_0)$ plus a stochastic integral term and take the limit in this formula.]

20. Let $X_t$ be the solution to
$$dX_t = \sigma(X_t)\, dW_t + b(X_t)\, dt, \qquad X_0 = x,$$
where $W_t$ is Brownian motion and $\sigma$ and $b$ are bounded $C^\infty$ functions and $\sigma$ is bounded below by a positive constant. Find a nonconstant function $f$ such that $f(X_t)$ is a martingale.
[Hint: Apply Itô's formula to $f(X_t)$ and obtain an ordinary differential equation that $f$ needs to satisfy.]

21. Suppose $X_t = W_t + F(t)$, where $F$ is a twice continuously differentiable function, $F(0) = 0$, and $W_t$ is a Brownian motion under $P$. Find a probability measure $Q$ under which $X_t$ is a Brownian motion and prove your statement. (You will need to use the general Girsanov theorem.)

22. Suppose $X_t = W_t - \int_0^t X_s\, ds$. Show that
$$X_t = \int_0^t e^{s-t}\, dW_s.$$

23. Suppose we have a stock where $\sigma = 2$, $K = 15$, $S_0 = 10$, $r = 0.1$, and $T = 3$. Suppose we are in the continuous time model. Determine the price of the standard European call using the Black-Scholes formula.

24. Let
$$\psi(t, x, y, \mu) = P\big(\sup_{s \le t}(W_s + \mu s) = y,\ W_t = x\big),$$
where $W_t$ is a Brownian motion. More precisely, for each $A, B, C, D$,
$$P\Big(A \le \sup_{s \le t}(W_s + \mu s) \le B,\ C \le W_t \le D\Big) = \int_C^D \int_A^B \psi(t, x, y, \mu)\, dy\, dx.$$
($\psi$ has an explicit formula, but we don't need that here.) Let the stock price $S_t$ be given by the standard geometric Brownian motion. Let $V$ be the option that pays off $\sup_{s \le T} S_s$ at time $T$. Determine the price at time 0 of $V$ as an expression in terms of $\psi$.

25. Suppose the interest rate is 0 and $S_t$ is the standard geometric Brownian motion stock price. Let $A$ and $B$ be fixed positive reals, and let $V$ be the option that pays off 1 at time $T$ if $A \le S_T \le B$ and 0 otherwise.
(a) Determine the price at time 0 of $V$.
(b) Find the hedging strategy that duplicates the claim $V$.

26. Let $V$ be the standard European call that has strike price $K$ and exercise date $T$. Let $r$ and $\sigma$ be constants, as usual, but let $\mu(t)$ be a deterministic (i.e., nonrandom) function. Suppose the stock price is given by
$$dS_t = \sigma S_t\, dW_t + \mu(t) S_t\, dt,$$
where $W_t$ is a Brownian motion. Find the price at time 0 of $V$.