
Till Schröter

St Hugh's College, University of Oxford
A Special Topic Essay
Trinity 2007

Contents

1 Malliavin Calculus
  1.1 The Wiener Chaos Decomposition
  1.2 The Malliavin Derivative
  1.3 The Divergence Operator

2 Some Applications of Malliavin Calculus to Finance
  2.1 Monte Carlo Simulations
    2.1.1 Greeks
    2.1.2 Conditional Expectation
  2.2 Hedging
  2.3 Insider Trading

Bibliography

Chapter 1

Malliavin Calculus

In this chapter we review the basic ideas of Malliavin Calculus, an infinite-dimensional differential calculus on the Wiener space. The theory originated from attempts to describe the probability law of functionals on the Wiener space and was initiated by Paul Malliavin (see [Nua06] and the references therein for the development of the theory). In the first section we develop the analysis on the Wiener space. Its first element is the Wiener Chaos Decomposition, a decomposition of an $L^2$-space that, in a less abstract setting, yields a fundamental representation theorem for square integrable random variables. In the second section we introduce the Malliavin derivative and deduce some basic properties of the derivative that are useful when applying the theory later on in a financial setting. The third and final section of this chapter introduces the adjoint of the derivative operator. This adjoint operator turns out to be an integral that, in the case of adapted integrands, coincides with the well-known Ito integral. The adjoint operator yields a stochastic integration by parts formula that turns out to be extremely useful in applying the theory to finance. In this introduction to Malliavin Calculus we follow [Nua06] and [Øks96].

1.1 The Wiener Chaos Decomposition

For a real, separable Hilbert space $H$ we define

Definition 1.1.1. A stochastic process $W = \{W(h,\omega),\ h \in H\}$ on a probability space $(\Omega, \mathcal{F}, P)$ is an isonormal Gaussian process if $W$ is centred and Gaussian such that
$$E\big(W(h)W(g)\big) = \langle h, g\rangle_H \quad \text{for all } h, g \in H.$$

Here the mapping $h \mapsto W(h)$ is linear, as the above relationship between scalar products yields $E\big(W(\lambda h + \mu g) - \lambda W(h) - \mu W(g)\big)^2 = 0$. Therefore the mapping provides a linear isometry between $H$ and some space $\mathcal{H}_1$ that contains the isonormal random variables, where $\mathcal{H}_1$ is a subspace of $L^2(\Omega,\mathcal{F},P)$ (in short $L^2(\mathcal{F})$), due to the observation that
$$\|W(h)\|^2_{L^2(P)} = E\big(W(h)^2\big) = \|h\|^2_H.$$
We call $\mathcal{H}_1$ the space of Gaussian zero-mean random variables. As we will see later on, given a certain structure of $H$, $\mathcal{H}_1$ is in this case the space induced by the Ito integrals $\{\int h_t\,dW_t,\ h \in H\}$.

Denoting by $\mathcal{G}$ the $\sigma$-field generated by the random variables $\{W(h),\ h \in H\}$, the goal of this section is to find a decomposition of $L^2(\Omega,\mathcal{G},P)$ (in short $L^2(\mathcal{G})$). To do this we give some results concerning the so-called Hermite polynomials. These polynomials are given by
$$H_n(x) := \frac{(-1)^n}{n!}\,e^{x^2/2}\,\frac{d^n}{dx^n}\Big(e^{-x^2/2}\Big), \qquad H_0(x) = 1.$$
The Hermite polynomials are the coefficients of the power expansion in $t$ of the function $F(t,x) = \exp(tx - \frac{t^2}{2})$, as can easily be seen by rewriting $F(t,x) = \exp\big(\frac{x^2}{2} - \frac{1}{2}(x-t)^2\big)$ and expanding the function around $t = 0$. The power expansion combined with some particular properties of $F$, i.e.

$$\frac{\partial F}{\partial x} = tF, \qquad \frac{\partial F}{\partial t} = (x-t)F, \qquad F(-x,t) = F(x,-t),$$
leads to the corresponding behaviour of the Hermite polynomials for $n \ge 1$:
$$H_n'(x) = H_{n-1}(x), \tag{1.1}$$
$$(n+1)H_{n+1}(x) = xH_n(x) - H_{n-1}(x), \tag{1.2}$$
$$H_n(-x) = (-1)^n H_n(x). \tag{1.3}$$
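The recurrence (1.2) makes these normalised Hermite polynomials easy to generate numerically. The following sketch (an illustration, not part of the original text; it assumes NumPy is available) builds $H_0,\ldots,H_6$ from (1.2) and checks the orthogonality relation (1.4) below for $X = Y$ standard normal, where the right-hand side is $1/n!$:

```python
import numpy as np

def hermite(n):
    # Normalised Hermite polynomials H_k = He_k / k! via the recurrence
    # (k+1) H_{k+1}(x) = x H_k(x) - H_{k-1}(x), with H_0 = 1, H_1 = x.
    # Coefficients are stored lowest-degree first.
    H = [np.array([1.0]), np.array([0.0, 1.0])]
    for k in range(1, n):
        new = np.zeros(k + 2)
        new[1:k + 2] += H[k]        # multiply H_k by x
        new[:k] -= H[k - 1]         # subtract H_{k-1}
        H.append(new / (k + 1))
    return H[:n + 1]

def gaussian_moment(k):
    # E[X^k] for X ~ N(0,1): zero for odd k, (k-1)!! for even k
    if k % 2:
        return 0.0
    m = 1.0
    for j in range(k - 1, 0, -2):
        m *= j
    return m

def expect_product(p, q):
    # E[p(X) q(X)] for X ~ N(0,1), by expanding the product polynomial
    prod = np.polynomial.polynomial.polymul(p, q)
    return sum(c * gaussian_moment(k) for k, c in enumerate(prod))

H = hermite(6)
gram = [[expect_product(H[n], H[m]) for m in range(7)] for n in range(7)]
print(gram[3][3])   # diagonal entry E[H_3(X)^2] = 1/3!
```

The diagonal entries come out as $1/n!$ rather than $n!$ because these are the probabilists' Hermite polynomials divided by $n!$, matching the normalisation in the definition above.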

We are able to establish the following orthogonality relationship between the Hermite polynomials:

Lemma 1.1.2. Let $X$, $Y$ be two random variables with joint Gaussian distribution such that $E(X) = E(Y) = 0$ and $E(X^2) = E(Y^2) = 1$. Then for all $n, m \ge 0$ we have
$$E\big(H_n(X)H_m(Y)\big) = \begin{cases} 0 & \text{if } n \neq m, \\[2pt] \frac{1}{n!}\big(E(XY)\big)^n & \text{if } n = m. \end{cases} \tag{1.4}$$

Proof. For all $s, t \in \mathbb{R}$ the multivariate moment generating function leads to the equality
$$E\Big(\exp\Big(sX - \frac{s^2}{2}\Big)\exp\Big(tY - \frac{t^2}{2}\Big)\Big) = \exp\big(stE(XY)\big).$$
Taking on both sides the partial derivative $\frac{\partial^{n+m}}{\partial s^n\,\partial t^m}$ at $s = t = 0$ and taking into account that
$$\frac{\partial^n}{\partial s^n}F(X,s) = n!\,H_n(X) + \sum_{i=1}^\infty \frac{(n+i)!}{i!}\,s^i H_{n+i}(X)$$
yields
$$E\big(n!\,m!\,H_n(X)H_m(Y)\big) = \begin{cases} 0 & \text{if } n \neq m, \\ n!\big(E(XY)\big)^n & \text{if } n = m. \end{cases}$$

Lemma 1.1.3. The random variables $\{e^{W(h)},\ h \in H\}$ form a total subset¹ of $L^2(\mathcal{G})$.

Proof. We choose $X \in L^2(\mathcal{G})$ such that $E(Xe^{W(h)}) = 0$ for all $h \in H$. By the linearity of $W(h)$, for such an $X$ it holds that
$$E\Big(X\exp\Big(\sum_{i=1}^m t_i W(h_i)\Big)\Big) = 0, \qquad t_i \in \mathbb{R},\ h_i \in H,\ m \ge 1.$$
This equation states that the Laplace transform² of the measure $\nu$ is zero, where $\nu$ is given by $\nu(B) = E\big(X\mathbf{1}_B(W(h_1),\ldots,W(h_m))\big)$ for a Borel set $B$ in $\mathbb{R}^m$. As the transform is zero, the measure $\nu$ itself must be zero, and hence $E(X\mathbf{1}_G) = 0$ for every set $G \in \mathcal{G}$. Thus $X$ must be zero.

The linear subspace of $L^2(\mathcal{G})$ generated by $\{H_n(W(h)),\ h \in H \text{ such that } \|h\| = 1\}$ for $n \ge 1$ is denoted by $\mathcal{H}_n$. For $n \neq m$ the previous lemma yields that the spaces $\mathcal{H}_n$ and $\mathcal{H}_m$ are orthogonal. This fact leads to an orthogonal decomposition of the space $L^2(\mathcal{G})$:

¹ We recall briefly the definition of a total subset: for some vector space $E$ and its dual $E^*$, $\Gamma \subset E^*$ is a total subset over $E$ if $\Gamma^\perp = \{0\}$, where $\Gamma^\perp := \{x \in E : \langle f, x\rangle = 0 \text{ for all } f \in \Gamma\}$.

² For a (signed) measure $\nu$ on $\mathbb{R}^m$ the Laplace transform is given for $t \in \mathbb{R}^m$ by:
$$\mathcal{L}\{\nu\}(t) = \int_{\mathbb{R}^m}\exp(\langle t, x\rangle)\,d\nu(x).$$

Theorem 1.1.4. The space $L^2(\mathcal{G})$ can be decomposed into the infinite sum of orthogonal subspaces:
$$L^2(\mathcal{G}) = \bigoplus_{n=0}^{\infty}\mathcal{H}_n. \tag{1.5}$$

Proof. We prove this theorem by taking an arbitrary element $X \in L^2(\mathcal{G})$ that is orthogonal to $\mathcal{H}_n$ for all $n \ge 0$ (orthogonality to $\mathcal{H}_0$, the constants, means $E(X) = 0$), i.e. an $X \in L^2(\mathcal{G})$ satisfying
$$E\big(XH_n(W(h))\big) = 0 \quad \text{for all } h \in H \text{ with } \|h\|_H = 1,\ n \ge 1.$$
Next we show that the only $X \in L^2(\mathcal{G})$ fulfilling this condition is indeed the zero element, thereby establishing that all elements are contained in the r.h.s. of (1.5). Expressing $x^n$ as a linear combination of the Hermite polynomials $H_i(x)$, $0 \le i \le n$, we get $E(XW(h)^n) = 0$ for all $n \ge 0$; by a power expansion of the exponential this leads to $E(X\exp(tW(h))) = 0$ for all $t \in \mathbb{R}$. By the previous lemma $X = 0$ follows.

We have seen in Theorem 1.1.4 that $L^2(\mathcal{G})$ can be decomposed into orthogonal subspaces. This property should, of course, be reflected by the elements of $L^2(\mathcal{G})$: the next goal is to find a decomposition of a given random variable into a sum of suitable orthogonal random variables. To do so we leave the abstract setting and consider the Hilbert space $H = L^2(T,\mathcal{B},\mu) = L^2(T)$, where $(T,\mathcal{B})$ is a measurable space and $\mu$ is a $\sigma$-finite measure without atoms. The Gaussian process $W$ is characterised by the family $\{W(\mathbf{1}_B),\ B \in \mathcal{B} \text{ and } \mu(B) < \infty\}$, as every element of the given Hilbert space can be approximated by linear combinations of suitable indicator functions. For brevity of notation we will also write $W(B)$ for $W(\mathbf{1}_B)$. By definition $W(A)$ has distribution $N(0,\mu(A))$ if $\mu(A) < \infty$.

Next we define the multiple stochastic integral $I_m(f)$, as it plays a key role in establishing an orthogonal decomposition of a random variable. Defining $\mathcal{B}_0 := \{A \in \mathcal{B} : \mu(A) < \infty\}$, we want to define a stochastic integral for a function $f \in L^2(T^m,\mathcal{B}^m,\mu^m)$ (for $m \ge 1$, $T^m$ denotes the $m$-fold product of the space $T$ and $\mu^m$ the corresponding product measure) and denote by $\mathcal{E}_m$ the set of simple functions

$$f(t_1,\ldots,t_m) = \sum_{i_1,\ldots,i_m=1}^{n} a_{i_1\ldots i_m}\,\mathbf{1}_{A_{i_1}\times\cdots\times A_{i_m}}(t_1,\ldots,t_m), \tag{1.6}$$
where $A_1,\ldots,A_n$ are pairwise disjoint sets in $\mathcal{B}_0$ and the coefficients $a_{i_1\ldots i_m}$ vanish if any two indices $i_j$ are equal. The integral with respect to the simple functions is defined as
$$I_m(f) := \sum_{i_1,\ldots,i_m=1}^{n} a_{i_1\ldots i_m}\,W(A_{i_1})\cdots W(A_{i_m}).$$

As any two simple functions can be rewritten with respect to a common set of indicator functions, the linearity of the integral is obvious. Two other important properties hold as well:

Lemma 1.1.5. For the integral we find:

1. $I_m(f) = I_m(\tilde{f})$, where $\tilde{f}$ denotes the symmetrization of $f$, given by
$$\tilde{f}(t_1,\ldots,t_m) = \frac{1}{m!}\sum_{\sigma}f(t_{\sigma(1)},\ldots,t_{\sigma(m)})$$
with $\sigma$ running over all permutations of $\{1,\ldots,m\}$.

2. $$E\big(I_m(f)I_q(g)\big) = \begin{cases} 0 & \text{if } q \neq m, \\ m!\,\langle\tilde{f},\tilde{g}\rangle_{L^2(T^m)} & \text{if } q = m. \end{cases} \tag{1.7}$$

Proof. The first item can easily be checked for a function of the kind $f(t_1,\ldots,t_m) = \mathbf{1}_{A_{i_1}\times\cdots\times A_{i_m}}(t_1,\ldots,t_m)$. Due to the linearity of the integral it is sufficient to consider this case alone.

For the second item we consider two symmetric functions $f \in \mathcal{E}_m$ and $g \in \mathcal{E}_q$. If $m \neq q$ the expectation is always zero, as there is always one random variable independent of the rest with expectation zero. For $m = q$ and a symmetric function the coefficients satisfy $b_{i_1\ldots i_m} = b_{\sigma(i_1)\ldots\sigma(i_m)}$ for all permutations $\sigma$ of $\{i_1,\ldots,i_m\}$. Therefore
$$g(t_1,\ldots,t_m) = \sum_{i_1,\ldots,i_m=1}^{n} b_{i_1\ldots i_m}\mathbf{1}_{A_{i_1}\times\cdots\times A_{i_m}}(t_1,\ldots,t_m) = m!\sum_{i_1<\cdots<i_m} b_{i_1\ldots i_m}\mathbf{1}_{A_{i_1}\times\cdots\times A_{i_m}}(t_1,\ldots,t_m),$$
and this leads to
$$E\big(I_m(f)I_q(g)\big) = m!\sum_{i_1<\cdots<i_m} m!\,a_{i_1\ldots i_m}b_{i_1\ldots i_m}\,\mu(A_{i_1})\cdots\mu(A_{i_m}) = m!\,\langle\tilde{f},\tilde{g}\rangle_{L^2(T^m)}.$$

These results, along with the density of $\mathcal{E}_m$ in $L^2(T^m)$, enable us to extend the integral to an arbitrary function in $L^2(T^m)$. We choose a sequence $(f^n)_{n\in\mathbb{N}} \subset \mathcal{E}_m$ such that $f^n \to f \in L^2(T^m)$. $(f^n)_{n\in\mathbb{N}}$ is obviously a Cauchy sequence and we obtain, by letting $f = g$ in the second item of the previous lemma,
$$E\big(I_m(f^n) - I_m(f^k)\big)^2 = E\big(I_m(f^n - f^k)\big)^2 = m!\,\big\|\widetilde{f^n - f^k}\big\|^2_{L^2(T^m)} \le m!\,\|f^n - f^k\|^2_{L^2(T^m)} \to 0.$$

Therefore $I_m(f^n)$ is a Cauchy sequence in $L^2(\mathcal{F})$ and we denote its limit $I_m(f)$ by
$$I_m(f) =: \int_{T^m} f(t_1,\ldots,t_m)\,dW(t_1)\cdots dW(t_m).$$

This integral, however, is not yet the standard Ito integral. For a simple function $h = \sum_{i_1=1}^n a_{i_1}\mathbf{1}_{A_{i_1}}$ the definition leads to
$$W(h) = W\Big(\sum_{i_1=1}^n a_{i_1}\mathbf{1}_{A_{i_1}}\Big) = \sum_{i_1=1}^n a_{i_1}W(\mathbf{1}_{A_{i_1}}) = \sum_{i_1=1}^n a_{i_1}W(A_{i_1}) = \int_T h\,dW(t).$$

Obviously, by the density of $\mathcal{E}_m$, this property extends to the entire $L^2(T) = H$ and the isonormal Gaussian process is given by $\{W(h) = \int_T h_s\,dW_s,\ h \in H\}$. This integral defined with respect to the Gaussian process is closely related to the Hermite polynomials:

Proposition 1.1.6. Let $H_m(x)$ be the $m$-th Hermite polynomial and let $h \in H = L^2(T)$ be such that $\|h\|_H = 1$. Then
$$m!\,H_m(W(h)) = \int_{T^m} h(t_1)\cdots h(t_m)\,dW(t_1)\cdots dW(t_m)$$
holds and, denoting by $L^2_S(T^m)$ the closed subspace of $L^2(T^m)$ generated by the symmetric functions,
$$I_m\big(L^2(T^m)\big) = I_m\big(L^2_S(T^m)\big) = \mathcal{H}_m.$$

Proof. For $m = 1$ and simple functions this relationship surely holds true. Due to the density of $\mathcal{E}_1$ in $L^2(T)$, it extends to $L^2(T)$. Next we assume that the relationship holds for $m$ and, using the notation $h^{\otimes m} = h(t_1)\cdots h(t_m)$, we obtain:
$$I_{m+1}\big(h^{\otimes m+1}\big) = I_m\big(h^{\otimes m}\big)I_1(h) - m\,I_{m-1}\big(h^{\otimes m-1}\big)\int_T h(t)^2\,\mu(dt)$$
$$= m!\,H_m(W(h))\,W(h) - m\,(m-1)!\,H_{m-1}(W(h)) = m!\,(m+1)H_{m+1}(W(h)) = (m+1)!\,H_{m+1}(W(h)).$$
Here the first equality is proved in [Nua06] (Proposition 1.1.2). The second equality stems from the induction assumption and the fact that $\int_T h(t)^2\mu(dt) = \|h\|^2_H = 1$. Finally we use (1.2) to obtain the result.

For the second part of the statement note that $E\big(I_m(f)^2\big) = m!\,\|f\|^2_{L^2(T^m)}$ holds on $L^2_S(T^m)$. Therefore $I_m(L^2_S(T^m))$ is a closed subspace of $L^2(\mathcal{F})$. Due to the first part of this proposition, $I_m(L^2_S(T^m))$ also contains the random variables $H_m(W(h))$ for $h \in H$ with $\|h\| = 1$. Thus the $m$-th Wiener chaos is in $I_m(L^2_S(T^m))$, i.e. $\mathcal{H}_m \subset I_m(L^2_S(T^m))$. Due to (1.7), the orthogonality of integrals of different order, $\mathcal{H}_n$ and $I_m(L^2_S(T^m))$ are orthogonal for $m \neq n$. As $I_m(L^2_S(T^m)) \subset L^2(\mathcal{G})$, this establishes $I_m(L^2_S(T^m)) = \mathcal{H}_m$.

This proposition establishes the chaos expansion theorem (or chaos decomposition) of a square integrable random variable:

Theorem 1.1.7. A random variable $F \in L^2(\mathcal{G})$ can be expanded into a series of multiple stochastic integrals:
$$F = \sum_{n=0}^{\infty} I_n(f_n).$$
As non-degenerate integrals have an expectation of zero, $f_0$ must equal $E(F)$, and $I_0$ is defined as the identity mapping. If the $f_n$ are chosen symmetric, they are uniquely determined by $F$.

We close this section by considering the relation between the stochastic integral with respect to the isonormal Gaussian process and the standard Ito integral. The probabilistic setting is given by the probability space $(\Omega,\mathcal{F},P)$. Setting $T = \mathbb{R}_+$ and $\mu = \lambda$, $\lambda$ being the Lebesgue measure, we note that we are able to define a Brownian motion $W_t$ via the isonormal Gaussian process $W(h)$ by setting:
$$W_t := W([0,t]) = W(\mathbf{1}_{[0,t]}), \qquad t \in T.$$
Then for a symmetric function $f: T^n \to \mathbb{R}$ the following relationship can easily be checked for simple processes:

$$I_n(f) = n!\int_0^{\infty}\int_0^{t_n}\cdots\int_0^{t_2} f(t_1,\ldots,t_n)\,dW_{t_1}\cdots dW_{t_n}.$$

The multiple Ito integral has the above structure because the integration limits must ensure that the adaptedness of the integrand is preserved. By taking into account the density of the simple functions, the above equality can be extended to the general case.
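As a concrete instance of the chaos decomposition (a standard example, consistent with Proposition 1.1.6 and not part of the original text): take $h = t^{-1/2}\mathbf{1}_{[0,t]}$, so that $\|h\|_H = 1$ and $W(h) = W_t/\sqrt{t}$. Proposition 1.1.6 for $m = 2$ gives

```latex
2!\,H_2(W(h)) = W(h)^2 - 1 = \frac{W_t^2}{t} - 1
\quad\Longrightarrow\quad
I_2\big(\mathbf{1}_{[0,t]^2}\big) = t\,I_2(h\otimes h) = W_t^2 - t,
```

so the chaos expansion of $W_t^2$ is $W_t^2 = t + I_2(\mathbf{1}_{[0,t]^2})$, with $f_0 = E(W_t^2) = t$ and $f_2 = \mathbf{1}_{[0,t]^2}$.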


1.2 The Malliavin Derivative

In this section we will introduce the derivative $DF$ of a square integrable random variable $F:\Omega\to\mathbb{R}$. The goal is to define a derivative with respect to the chance parameter $\omega$. In order to achieve this we consider the set $C_p^\infty(\mathbb{R}^n)$ of all infinitely differentiable functions $f:\mathbb{R}^n\to\mathbb{R}$ such that $f$ and all of its derivatives have at most polynomial growth. For $n \ge 1$ and $f \in C_p^\infty(\mathbb{R}^n)$ we denote by $\mathcal{S}$ the set of all random variables of the form
$$F = f\big(W(h_1),\ldots,W(h_n)\big), \tag{1.8}$$
where $h_1,\ldots,h_n \in H$. $F \in \mathcal{S}$ will be called a smooth random variable. The derivative of a smooth random variable is defined by:

Definition 1.2.1. The derivative of a random variable $F \in \mathcal{S}$ is given by:
$$DF = \sum_{i=1}^{n}\partial_i f\big(W(h_1),\ldots,W(h_n)\big)\,h_i. \tag{1.9}$$
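For instance (a direct application of the definition, not in the original text), for $F = W(h_1)W(h_2)$, i.e. $f(x_1,x_2) = x_1x_2$, and for $F = W(h)^2$:

```latex
D\big(W(h_1)W(h_2)\big) = W(h_2)\,h_1 + W(h_1)\,h_2,
\qquad
D\big(W(h)^2\big) = 2\,W(h)\,h .
```

The derivative is thus an $H$-valued object: it records the sensitivity of $F$ in every direction $h_i$ at once.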

By this definition the derivative is a mapping $DF: \Omega \to H$. Why this can indeed be understood as a derivative with respect to the chance parameter will become clear later on. Next we prove an important integration by parts formula:

Lemma 1.2.2. Let $F \in \mathcal{S}$ and $h \in H$. Then
$$E\big(\langle DF, h\rangle_H\big) = E\big(F\,W(h)\big). \tag{1.10}$$

Proof. Due to the linearity of the scalar product and of $W$, we are able to normalise (1.10) so that $h$ is of norm one. Setting $h = e_1$, there exist orthonormal elements $e_1,\ldots,e_n$ in $H$ such that $F$ can be rewritten as $F = f\big(W(e_1),\ldots,W(e_n)\big)$ for a suitable function $f \in C_p^\infty(\mathbb{R}^n)$. Denoting by $\phi(x)$ the multivariate density of the standard normal distribution, we obtain by the classical integration by parts formula
$$E\big(\langle DF, h\rangle_H\big) = \int_{\mathbb{R}^n}\partial_1 f(x)\phi(x)\,dx = \int_{\mathbb{R}^n} f(x)\phi(x)\,x_1\,dx = E\big(F\,W(e_1)\big) = E\big(F\,W(h)\big).$$

By applying the previous lemma to $FG$ we obtain:

Lemma 1.2.3. Let $F, G \in \mathcal{S}$ and $h \in H$. Then
$$E\big(G\langle DF, h\rangle_H\big) = -E\big(F\langle DG, h\rangle_H\big) + E\big(FG\,W(h)\big). \tag{1.11}$$

For any $p \ge 1$ we denote the domain of the derivative operator in $L^p(\mathcal{F})$ by $\mathbb{D}^{1,p}$, where $\mathbb{D}^{1,p} = \overline{\mathcal{S}}$ and the closure is taken with respect to the norm
$$\|F\|_{1,p} = \Big(E\big(|F|^p\big) + E\big(\|DF\|_H^p\big)\Big)^{1/p}.$$
This definition of $\mathbb{D}^{1,p}$ is sensible, as it is shown in [Nua06] that $D$ is a closable operator.

Remark 1.2.4. For a real separable Hilbert space $V$, we are able to define a (semi-)norm for the family $\mathcal{S}_V$ of $V$-valued smooth random variables of the form
$$F = \sum_{j=1}^n F_j v_j, \qquad v_j \in V,\ F_j \in \mathcal{S},$$
by setting
$$\|F\|_{1,p,V} = \Big(E\big(\|F\|_V^p\big) + E\big(\|DF\|^p_{H\otimes V}\big)\Big)^{1/p},$$
where $DF = \sum_{j=1}^n DF_j \otimes v_j$. The completion of $\mathcal{S}_V$ with respect to the above norm is denoted by $\mathbb{D}^{1,p}(V)$.

We define an auxiliary operator $D^h$ on the set of smooth random variables by $D^h F := \langle DF, h\rangle$. This is an operator from $L^p(\mathcal{F})$ to $L^p(\mathcal{F})$ for $p \ge 1$ and its domain will be denoted by $\mathbb{D}^{h,p}$.

The derivative operator also satisfies a certain chain rule (Proposition 1.2.3 in [Nua06]):

Lemma 1.2.5. Let $\varphi:\mathbb{R}^m\to\mathbb{R}$ be a continuously differentiable function with bounded partial derivatives. Suppose $F = (F^1,\ldots,F^m)$ is a random vector with components in $\mathbb{D}^{1,p}$. Then $\varphi(F)$ is in $\mathbb{D}^{1,p}$ and
$$D\big(\varphi(F)\big) = \sum_{i=1}^m\partial_i\varphi(F)\,DF^i.$$

Similar to the previous section, we now abandon the setting of an arbitrary Hilbert space $H$ and specify $H = L^2(T,\mathcal{B},\mu)$. Again $(T,\mathcal{B})$ is a measurable space and $\mu$ is a $\sigma$-finite atomless measure. In this setting the derivative of a random variable $F \in \mathbb{D}^{1,2}$ will be a stochastic process $\{D_t F,\ t \in T\}$, due to the fact that we are able to identify $L^2(\Omega; H)$ with $L^2(T\times\Omega)$ by identifying each $h \in H$ with $(h_t)_{t\in T}$. The next example shows why we consider the operator $D^h F$ as a derivative w.r.t. the chance parameter $\omega$.

Example 1.2.6. We consider the Wiener space with $\Omega = C_0([0,1])$, a Brownian motion given by $W_t(\omega) = \omega(t)$, and the subspace $H^1$ of $\Omega$ containing the functions of the form $x(t) = \int_0^t x'(s)\,ds$ with $x' \in H = L^2([0,1],\lambda)$. This space is called the Cameron-Martin space and we are able to obtain a Hilbert space structure on it by setting
$$\langle x, y\rangle_{H^1} = \langle x', y'\rangle_H = \int_0^1 x'(s)y'(s)\,ds.$$

We consider a random variable $F \in \mathcal{S}$ of the particular form $F = f\big(W(t_1),\ldots,W(t_n)\big)$, $0 \le t_1 < \cdots < t_n \le 1$. Here $W(\mathbf{1}_{[0,t_i]}) = W(t_i)$ and, by the given choice of $H$, $(W(t))_t$ is a Brownian motion. Therefore $F(\omega) = f\big(\omega(t_1),\ldots,\omega(t_n)\big)$. In this setting $D^h F$ yields:
$$D^h F = \langle DF, h\rangle_H = \sum_{i=1}^n\partial_i f\big(W(t_1),\ldots,W(t_n)\big)\,\langle\mathbf{1}_{[0,t_i]}, h\rangle_H = \sum_{i=1}^n\partial_i f\big(W(t_1),\ldots,W(t_n)\big)\int_0^{t_i}h(s)\,ds = \frac{d}{d\varepsilon}F\Big(\omega + \varepsilon\int_0^{\cdot}h(s)\,ds\Big)\Big|_{\varepsilon=0}.$$

Making use of the Wiener chaos decomposition of a square integrable random variable, i.e.
$$F = \sum_{n=0}^{\infty}I_n(f_n) \tag{1.12}$$
with symmetric functions $f_n$, we can easily compute the derivative of any random variable:

Proposition 1.2.7. Let $F \in \mathbb{D}^{1,2}$ be a square integrable random variable with the decomposition given above. Then we have:
$$D_t F = \sum_{n=1}^{\infty} n\,I_{n-1}\big(f_n(\cdot,t)\big). \tag{1.13}$$

Proof. We show the statement for a simple function; the general case then results from the density of the simple functions. Let $F = I_m(f_m)$ for a symmetric function $f_m$. Then, applying (1.9) to $g\big(W(A_{i_1}),\ldots,W(A_{i_m})\big)$ with $g(x_1,\ldots,x_m) = x_1\cdots x_m$, we obtain:
$$D_t F = \sum_{j=1}^m\sum_{i_1,\ldots,i_m=1}^n a_{i_1\ldots i_m}\,W(A_{i_1})\cdots\mathbf{1}_{A_{i_j}}(t)\cdots W(A_{i_m}) = m\,I_{m-1}\big(f_m(\cdot,t)\big).$$

Using the chaos decomposition also yields a representation for the conditional expectation of a square integrable random variable $F$:

Proposition 1.2.8. Suppose $F$ is a square integrable random variable with the representation (1.12). Let $A \in \mathcal{B}$. Then:
$$E(F|\mathcal{F}_A) = \sum_{n=0}^{\infty}I_n\big(f_n\,\mathbf{1}_A^{\otimes n}\big). \tag{1.14}$$

Proof. It is enough to assume that $F = I_n(f_n)$, where $f_n$ is a function in $\mathcal{E}_n$. By linearity we may also assume that the kernel $f_n$ is of the form $\mathbf{1}_{B_1\times\cdots\times B_n}$ with $B_1,\ldots,B_n$ mutually disjoint sets of finite measure. The linearity of $W$ and the properties of the conditional expectation then lead to
$$E(F|\mathcal{F}_A) = E\big(W(B_1)\cdots W(B_n)\,\big|\,\mathcal{F}_A\big) = E\Big(\prod_{i=1}^n\big(W(B_i\cap A) + W(B_i\cap A^c)\big)\,\Big|\,\mathcal{F}_A\Big) = I_n\big(\mathbf{1}_{(B_1\cap A)\times\cdots\times(B_n\cap A)}\big).$$

This result finally enables us to calculate the derivative of a conditional expectation.

Proposition 1.2.9. Assume $F$ is a member of $\mathbb{D}^{1,2}$ and $A \in \mathcal{B}$. Then the conditional expectation $E[F|\mathcal{F}_A]$ belongs to $\mathbb{D}^{1,2}$ and its derivative is a.s. given by
$$D_t\big(E[F|\mathcal{F}_A]\big) = E[D_t F|\mathcal{F}_A]\,\mathbf{1}_A(t).$$

Proof. By Propositions 1.2.7 and 1.2.8 we obtain
$$D_t\big(E[F|\mathcal{F}_A]\big) = \sum_{n=1}^{\infty} n\,I_{n-1}\big(f_n(\cdot,t)\mathbf{1}_A^{\otimes n-1}\big)\mathbf{1}_A(t) = E[D_t F|\mathcal{F}_A]\,\mathbf{1}_A(t).$$

1.3 The Divergence Operator

In this section we introduce the divergence operator, defined as the adjoint of the derivative operator. If the underlying Hilbert space $H$ is of the form $L^2(T,\mathcal{B},\mu)$, we will see that the divergence operator can be understood as an integral. We therefore define:

Definition 1.3.1. We denote by $\delta$ the adjoint of the operator $D$. That is, $\delta$ is an unbounded operator on $L^2(\Omega;H)$ with values in $L^2(\Omega)$ that fulfils:

• The domain of $\delta$, denoted $\operatorname{Dom}\delta$, is the set of $H$-valued square integrable random variables $u \in L^2(\Omega;H)$ such that
$$\big|E\big(\langle DF, u\rangle_H\big)\big| \le c\,\|F\|_2 \quad \text{for all } F \in \mathbb{D}^{1,2},$$
where $c$ is some constant depending on $u$.

• If $u$ belongs to $\operatorname{Dom}\delta$, then $\delta(u)$ is the element of $L^2(\Omega)$ characterized by
$$E\big(F\delta(u)\big) = E\big(\langle DF, u\rangle_H\big) \tag{1.15}$$
for any $F \in \mathbb{D}^{1,2}$.

Denote by $\mathcal{S}_H$ the class of smooth elementary processes of the form
$$u = \sum_{j=1}^n F_j h_j, \tag{1.16}$$
where the $F_j$ are smooth random variables and the $h_j$ are elements of $H$. We deduce from Lemma 1.2.3, keeping in mind $E[\langle DG, u\rangle_H] = \sum_{j=1}^n E[F_j\langle DG, h_j\rangle_H]$, that
$$\delta(u) = \sum_{j=1}^n F_j W(h_j) - \sum_{j=1}^n\langle DF_j, h_j\rangle_H. \tag{1.17}$$
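As a quick check of (1.17) (an illustration added here, under the assumption $h_1 \perp h_2$ and $\|h_1\| = \|h_2\| = 1$), take $u = W(h_1)h_2$. Then $DW(h_1) = h_1$ and

```latex
\delta(u) = W(h_1)W(h_2) - \langle h_1, h_2\rangle_H = W(h_1)W(h_2),
```

so for orthogonal directions the divergence is just the product, while for $u = W(h)h$ the correction term survives and (1.17) gives $\delta(u) = W(h)^2 - 1$.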

For a process of the form $Fu$ the term $\delta(Fu)$ can be calculated with the help of the next proposition.

Proposition 1.3.2. Let $F \in \mathbb{D}^{1,2}$ and let $u$ be in the domain of $\delta$ such that $Fu \in L^2(\Omega;H)$. Then $Fu$ belongs to the domain of $\delta$ and we obtain the equality
$$\delta(Fu) = F\delta(u) - \langle DF, u\rangle_H,$$
provided the r.h.s. is square integrable.

Proof. For any smooth random variable $G$ we have:
$$E\big[G\delta(Fu)\big] = E\big[\langle DG, Fu\rangle_H\big] = E\big[\langle u, D(FG) - G\,DF\rangle_H\big] = E\big[\big(F\delta(u) - \langle u, DF\rangle_H\big)G\big].$$

Defining $D^h(u) := \sum_{j=1}^n D^h(F_j)h_j$ for $u \in \mathcal{S}_H$ and $h \in H$, the following commutativity relationship holds true:
$$D^h\big(\delta(u)\big) = \langle u, h\rangle_H + \delta\big(D^h u\big). \tag{1.18}$$

Proof: (1.17) yields
$$D^h\big(\delta(u)\big) = \sum_{j=1}^n\Big(D^h\big(F_j W(h_j)\big) - D^h\langle DF_j, h_j\rangle_H\Big) = \sum_{j=1}^n F_j\langle h, h_j\rangle_H + \sum_{j=1}^n\Big(D^h F_j\,W(h_j) - \langle D(D^h F_j), h_j\rangle_H\Big) = \langle u, h\rangle_H + \delta\big(D^h u\big).$$

Remark 1.3.3. This commutativity relationship can be extended to $\mathbb{D}^{1,2}(H)$, a completion of $\mathcal{S}_H$. See [Nua06] for details.

Next we are going to consider the special Hilbert space $H = L^2(T)$ again. In this case $\operatorname{Dom}\delta \subset L^2(T\times\Omega)$ due to the definition of the adjoint operator. The operator $\delta(u)$ is called the Skorohod integral of the process $u$. The following notation will be used:
$$\delta(u) = \int_T u_t\,\delta W_t.$$
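A standard illustration of a genuinely non-adapted Skorohod integrand (added here for illustration, in the Brownian setting $T = [0,T]$, $\mu = \lambda$): take $u_s = W_T$ for all $s$, which anticipates the future. By Proposition 1.3.2 with $F = W_T$ and $u \equiv 1$:

```latex
\int_0^T W_T\,\delta W_s
  = W_T\,\delta(\mathbf{1}) - \langle DW_T, \mathbf{1}\rangle_H
  = W_T^2 - T,
```

which has expectation zero, as every Skorohod integral must by (1.15) applied to $F \equiv 1$; an Ito interpretation is impossible here since the integrand is not adapted.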

To obtain the Wiener chaos decomposition of the Skorohod integral, we first note that any $u \in L^2(T\times\Omega)$ has a Wiener chaos expansion of the form
$$u(t) = \sum_{n=0}^{\infty}I_n\big(f_n(\cdot,t)\big), \tag{1.19}$$
where for each $n \ge 1$, $f_n \in L^2(T^{n+1})$ is a symmetric function in the first $n$ variables. We have the following result:

Proposition 1.3.4. Let $u \in L^2(T\times\Omega)$ have the chaos expansion (1.19). Then $u \in \operatorname{Dom}\delta$ if and only if the series
$$\delta(u) = \sum_{n=0}^{\infty}I_{n+1}\big(\tilde{f}_n\big) \tag{1.20}$$
converges in $L^2(\Omega)$.

For a proof see [Nua06]. Here the functions of $n+1$ variables are symmetric only in the first $n$ variables, and the symmetrization of $f_n$ in all its variables is given by:
$$\tilde{f}_n(t_1,\ldots,t_n,t) = \frac{1}{n+1}\Big(f_n(t_1,\ldots,t_n,t) + \sum_{i=1}^n f_n(t_1,\ldots,t_{i-1},t,t_{i+1},\ldots,t_n,t_i)\Big).$$

As a consequence of Proposition 1.3.4, $\operatorname{Dom}\delta$ is formed by the elements of a subspace of $L^2(T\times\Omega)$ that satisfy:
$$E\big[\delta(u)^2\big] = \sum_{j=0}^{\infty}\sum_{i=0}^{\infty}E\big[I_{j+1}(\tilde{f}_j)I_{i+1}(\tilde{f}_i)\big] = \sum_{j=0}^{\infty}E\big[I_{j+1}(\tilde{f}_j)^2\big] = \sum_{j=0}^{\infty}(j+1)!\,\|\tilde{f}_j\|^2_{L^2(T^{j+1})} < \infty.$$
In what sense can the Skorohod integral be understood as an integral? In our current setting with $H = L^2(T)$ the random variable $W(h_j)$ can be written as the integral $\int_T h_j(t)\,dW_t$, and (1.17) becomes:
$$\int_T u_t\,\delta W_t = \sum_{j=1}^n F_j\int_T h_j(t)\,dW_t - \sum_{j=1}^n\int_T D_t F_j\,h_j(t)\,\mu(dt).$$

Therefore, in this special Hilbert space, the Skorohod integral can be seen as the Ito integral plus an additional term involving the Malliavin derivative. The derivative of the Skorohod integral is given by:

Proposition 1.3.5. Suppose that $u \in \mathbb{D}^{1,2}(L^2(T))$. Assume that for almost all $t$ the process $(D_t u_s)_{s\in T}$ is Skorohod integrable and that the process $\big(\int_T D_t u_s\,\delta W_s\big)_{t\in T}$ has a version in $L^2(T\times\Omega)$. Then
$$D_t\big(\delta(u)\big) = u_t + \int_T D_t u_s\,\delta W_s.$$

Proof. (1.18) yields the commutation relation and the subsequent remark its validity for $\mathbb{D}^{1,2}(L^2(T))$. In this setting we obtain:
$$D^h\big(\delta(u)\big) = \langle u, h\rangle_H + \delta\big(D^h u\big).$$
In the given Hilbert space this becomes:
$$\int_T D_t\big(\delta(u)\big)h_t\,d\mu_t = \int_T u_t h_t\,d\mu_t + \int_T\Big(\int_T D_t u_s\,\delta W_s\Big)h_t\,d\mu_t = \int_T\Big(u_t + \int_T D_t u_s\,\delta W_s\Big)h_t\,d\mu_t.$$

Next, we relate the Skorohod and Ito integrals by observing that the Skorohod integral of an adapted process coincides with the Ito integral of this process. In order to establish this result we recall that $\mathcal{B}_0 = \{A \in \mathcal{B} : \mu(A) < \infty\}$ and quote the following lemma from [Nua06] without proof:

Lemma 1.3.6. Let $A \in \mathcal{B}_0$ and let $F$ be a square integrable random variable that is measurable with respect to the $\sigma$-field $\mathcal{F}_{A^c}$. Then the process $F\mathbf{1}_A$ is Skorohod integrable and
$$\delta(F\mathbf{1}_A) = F\,W(A).$$

This lemma establishes the connection to the Ito integral in the next proposition. Denoting by $L^2_a$ the closed subspace of $L^2([0,1]\times\Omega)$ generated by the adapted processes, we obtain:

Proposition 1.3.7. $L^2_a$ is a subset of $\operatorname{Dom}\delta$ and the operator $\delta$ restricted to $L^2_a$ coincides with the Ito integral, that is,
$$\delta(u) = \int_0^1 u_t\,dW_t.$$

Proof. Suppose $u$ is an elementary adapted process of the form
$$u_t = \sum_{j=1}^n F_j\,\mathbf{1}_{(t_j,t_{j+1}]}(t),$$
where $F_j \in L^2(\Omega,\mathcal{F}_{t_j},P)$ and $0 \le t_1 < \cdots < t_{n+1} \le 1$. Then the above lemma yields that $u \in \operatorname{Dom}\delta$ and for the Skorohod integral we have the representation:
$$\delta(u) = \sum_{j=1}^n F_j\big(W(t_{j+1}) - W(t_j)\big).$$
The elementary processes above are dense in $L^2_a$ and a limit argument yields the desired result.

We close this section by pointing out a representation theorem in terms of the Malliavin derivative, the Clark-Ocone representation formula. It is well known that any square integrable random variable can be written as
$$F = E(F) + \int_0^1\phi(t)\,dW_t$$
for an adapted integrand $\phi(t)$. Next we show that $\phi$ can be written in terms of a Malliavin derivative:

Proposition 1.3.8 (Clark-Ocone). Let $F \in \mathbb{D}^{1,2}$ and suppose that $W$ is a one-dimensional Brownian motion on $[0,1]$. Then
$$F = E(F) + \int_0^1 E\big(D_t F\,|\,\mathcal{F}_t\big)\,dW_t. \tag{1.21}$$

Proof. Suppose that $F = \sum_{n=0}^{\infty}I_n(f_n)$. From (1.13) and (1.14) we obtain:
$$E\big(D_t F\,|\,\mathcal{F}_t\big) = \sum_{n=1}^{\infty}n\,E\big(I_{n-1}(f_n(\cdot,t))\,|\,\mathcal{F}_t\big) = \sum_{n=1}^{\infty}n\,I_{n-1}\big(f_n(t_1,\ldots,t_{n-1},t)\mathbf{1}_{\{t_1\vee\cdots\vee t_{n-1}<t\}}\big).$$
Now, setting $u_t = E(D_t F|\mathcal{F}_t)$ and keeping in mind that the symmetrization of $f_n(t_1,\ldots,t_{n-1},t)\mathbf{1}_{\{t_1\vee\cdots\vee t_{n-1}<t\}}$ equals $\frac{1}{n}f_n(t_1,\ldots,t_{n-1},t)$ if $u$ is not trivially zero, we obtain by Proposition 1.3.4:
$$\delta(u) = \sum_{n=1}^{\infty}I_n(f_n) = F - E(F).$$
In the next chapter we look at applications of the theory developed so far to finance.


Chapter 2

Some Applications of Malliavin Calculus to Finance

The use of Malliavin Calculus in finance was introduced in two strands of literature. One strand is concerned with the use of Malliavin Calculus in Monte Carlo simulations and was introduced in [FLL+] and [FLLL01]. The other strand centres around the use of Malliavin Calculus in hedging and was introduced in [KO91]. Another application of Malliavin Calculus can be found in insider trading models, which we also review in this chapter. In all the applications we encounter, the Hilbert space $H$ will be given by $H = L^2([0,T],\mathcal{B},\lambda)$ and the market setting is a "Black-Scholes economy".

2.1 Monte Carlo Simulations

2.1.1 Greeks

The key idea of using Malliavin Calculus for the numerical computation of the so-called Greeks (the sensitivities of a derivative with respect to different price-relevant parameters) is an extension of the integration by parts formula developed in the previous chapter:

Proposition 2.1.1. Let $F$, $G$ be two random variables such that $F \in \mathbb{D}^{1,2}$. Consider an $H$-valued random variable $u$ such that $D^u F = \langle DF, u\rangle_H \neq 0$ a.s. and $Gu(D^u F)^{-1} \in \operatorname{Dom}\delta$. Then for any continuously differentiable function $f$ with bounded derivative we have
$$E\big[f'(F)G\big] = E\big[f(F)H(F,G)\big],$$
where
$$H(F,G) = \delta\big(Gu(D^u F)^{-1}\big).$$

Proof. By Lemma 1.2.5 a chain rule holds and we obtain $D^u(f(F)) = f'(F)\,D^u F$. This and the duality relationship (1.15) yield:
$$E\big[f'(F)G\big] = E\big[D^u(f(F))\,(D^u F)^{-1}G\big] = E\big[\langle D(f(F)),\ u(D^u F)^{-1}G\rangle_H\big] = E\big[f(F)\,\delta\big(Gu(D^u F)^{-1}\big)\big].$$

If we choose $u = DF$ we obtain:
$$H(F,G) = \delta\Big(\frac{G\,DF}{\|DF\|^2_H}\Big).$$

Considering an option with payoff $B$ at time $T$, the price of the option at time $t = 0$ is given by the expectation under an equivalent martingale measure $Q$:
$$V_0 = e^{-rT}E_Q(B),$$
provided that $E_Q(B^2) < \infty$. Here $r$ denotes the constant interest rate. Suppose we are able to write $B = f(F_\alpha)$, $f$ being continuously differentiable and $\alpha$ being one of the parameters of the problem. Then
$$\frac{\partial V_0}{\partial\alpha} = e^{-rT}E_Q\Big(f'(F_\alpha)\frac{dF_\alpha}{d\alpha}\Big). \tag{2.1}$$
By the above theorem this can be rewritten as:
$$\frac{\partial V_0}{\partial\alpha} = e^{-rT}E_Q\Big(f(F_\alpha)\,H\Big(F_\alpha,\frac{dF_\alpha}{d\alpha}\Big)\Big). \tag{2.2}$$

This is the main idea of using Malliavin Calculus to derive efficient Monte Carlo simulation formulas. [FLL+] have shown that the above relations also hold if $f$ is no longer continuously differentiable – as is the case for many European option payoffs. In that case (2.1) is hard to handle by Monte Carlo methods due to the derivative inside the expectation operator, whereas (2.2) involves only the payoff itself.


Example 2.1.2 (Delta and Gamma in a Black-Scholes World). We give an example of the computation of the Delta and the Gamma of an option in a Black-Scholes world with constant parameters. It is well known that in this setting the value of the stock is given by
$$S_t = S_0\exp\Big(\Big(\mu - \frac{\sigma^2}{2}\Big)t + \sigma W_t\Big) =: v(W_t)$$
for $t \in [0,T]$ (under the pricing measure $Q$, $\mu = r$). Then the Delta of an option with payoff function $\Phi(S_T)$ is given by
$$\Delta = \frac{\partial V_0}{\partial S_0} = E_Q\Big(e^{-rT}\,\Phi'(S_T)\frac{\partial S_T}{\partial S_0}\Big) = \frac{e^{-rT}}{S_0}E_Q\big(\Phi'(S_T)S_T\big).$$

Applying Proposition 2.1.1 with $u = 1$, $F = S_T$, $G = S_T$, and applying the chain rule to $D_t S_T = D_t v(W_T) = v'(W_T)D_t W_T = \sigma S_T$, we obtain by Proposition 1.3.4 (yielding $\delta(1) = I_1(1) = W_T$):
$$\delta\bigg(S_T\Big(\int_0^T D_t S_T\,dt\Big)^{-1}\bigg) = \delta\Big(\frac{1}{\sigma T}\Big) = \frac{W_T}{\sigma T}.$$

Therefore the Delta of the option can be expressed as
$$\Delta = \frac{e^{-rT}}{S_0\,\sigma T}\,E_Q\big(\Phi(S_T)W_T\big).$$
This term can easily be simulated by Monte Carlo methods.

The Gamma of the option is given by
$$\Gamma = \frac{\partial^2 V_0}{\partial S_0^2} = \frac{e^{-rT}}{S_0^2}\,E_Q\big(\Phi''(S_T)S_T^2\big).$$
Taking in Proposition 2.1.1 $u = 1$, $F = S_T$ and $G = S_T^2$, we obtain by Proposition 1.3.2
$$\delta\bigg(S_T^2\Big(\int_0^T D_t S_T\,dt\Big)^{-1}\bigg) = \delta\Big(\frac{S_T}{\sigma T}\Big) = S_T\Big(\frac{W_T}{\sigma T} - 1\Big).$$
This in turn yields:
$$E_Q\big(\Phi''(S_T)S_T^2\big) = E_Q\Big(\Phi'(S_T)\,S_T\Big(\frac{W_T}{\sigma T} - 1\Big)\Big).$$

Applying Proposition 2.1.1 again, now to $u = 1$, $F = S_T$ and $G = S_T\big(\frac{W_T}{\sigma T} - 1\big)$, we obtain first
$$\delta\bigg(S_T\Big(\frac{W_T}{\sigma T} - 1\Big)\Big(\int_0^T D_t S_T\,dt\Big)^{-1}\bigg) = \delta\Big(\Big(\frac{W_T}{\sigma T} - 1\Big)\frac{1}{\sigma T}\Big) = \frac{W_T^2}{\sigma^2 T^2} - \frac{1}{\sigma^2 T} - \frac{W_T}{\sigma T},$$
and finally
$$E_Q\Big(\Phi'(S_T)\,S_T\Big(\frac{W_T}{\sigma T} - 1\Big)\Big) = E_Q\Big(\Phi(S_T)\Big(\frac{W_T^2}{\sigma^2 T^2} - \frac{W_T}{\sigma T} - \frac{1}{\sigma^2 T}\Big)\Big).$$
Therefore the Gamma is given by:
$$\Gamma = \frac{\partial^2 V_0}{\partial S_0^2} = \frac{e^{-rT}}{S_0^2\,\sigma T}\,E_Q\Big(\Phi(S_T)\Big(\frac{W_T^2}{\sigma T} - W_T - \frac{1}{\sigma}\Big)\Big).$$
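These two weighted expectations are straightforward to simulate. The sketch below (illustrative only; the parameter values $S_0 = K = 100$, $r = 0.05$, $\sigma = 0.2$, $T = 1$ and the call payoff are assumptions, and NumPy is assumed available) estimates Delta and Gamma of a European call $\Phi(S_T) = (S_T - K)^+$ with the Malliavin weights derived above and compares them with the closed-form Black-Scholes values:

```python
import numpy as np
from math import erf, exp, log, pi, sqrt

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative parameters

rng = np.random.default_rng(0)
n_paths = 400_000
W = sqrt(T) * rng.standard_normal(n_paths)              # W_T under Q (mu = r)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W)  # terminal stock price
payoff = np.maximum(ST - K, 0.0)                        # call payoff Phi(S_T)
disc = exp(-r * T)

# Malliavin-weight estimators: Delta uses the weight W_T/(S0 sigma T),
# Gamma uses (W_T^2/(sigma T) - W_T - 1/sigma)/(S0^2 sigma T).
delta_mc = disc * np.mean(payoff * W) / (S0 * sigma * T)
gamma_mc = disc * np.mean(payoff * (W**2 / (sigma * T) - W - 1.0 / sigma)) / (S0**2 * sigma * T)

# Closed-form Black-Scholes Greeks for comparison
N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
delta_bs = N(d1)
gamma_bs = exp(-0.5 * d1**2) / (sqrt(2.0 * pi) * S0 * sigma * sqrt(T))
print(delta_mc, delta_bs)
print(gamma_mc, gamma_bs)
```

Note that no pathwise differentiation of the kinked payoff is needed: the weight multiplies the payoff itself, which is exactly what makes the method attractive for non-smooth payoffs.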

Remark 2.1.3. In Proposition 2.1.1 the weight $H(F,G)$ depends on the choice of $u$; therefore different weights are conceivable. In [FLLL01] the variance-optimal choice of weight for a Monte Carlo estimator is derived. However, the variance-optimal weights are difficult to implement in an estimation procedure. The methods demonstrated above can also be used to obtain estimators for exotic options (e.g. see [GKH03]). A comprehensive overview of the use of Malliavin Calculus in finance for practical purposes is given in [KHM04].

2.1.2 Conditional Expectation

The use of Malliavin Calculus to compute conditional expectations was introduced by [FLLL01]. They derived a representation formula that can be used in Monte Carlo simulations with little effort. The main theorem of their work is given by:

Proposition 2.1.4. Let $F, G \in \mathbb{D}^{1,2}$ and assume there exists a process $u \in H$ such that $E[\langle DG, u\rangle_H\,|\,\sigma(F,G)] = 1$. Moreover let $\phi$ be a continuously differentiable function of polynomial growth. Writing $H(y) = \mathbf{1}_{\{y>0\}}$, we obtain:
$$E\big[\phi(F)\,\big|\,G = 0\big] = \frac{E\big[\phi(F)H(G)\delta(u) - \phi'(F)H(G)\,D^u F\big]}{E\big[H(G)\delta(u)\big]}.$$

Proof. First we note that
$$E\big[\phi(F)\,\big|\,G = 0\big] = \lim_{\varepsilon\to 0^+}\frac{E\big[\phi(F)\mathbf{1}_{(-\varepsilon,\varepsilon)}(G)\big]}{E\big[\mathbf{1}_{(-\varepsilon,\varepsilon)}(G)\big]}.$$
Next, by using the chain rule and the duality relation, we obtain
$$E\big[\phi(F)H_\varepsilon(G)\delta(u)\big] = E\big[\langle D(\phi(F)H_\varepsilon(G)), u\rangle_H\big] = E\big[\phi(F)\mathbf{1}_{(-\varepsilon,\varepsilon)}(G)\langle DG, u\rangle_H\big] + E\big[H_\varepsilon(G)\phi'(F)\langle DF, u\rangle_H\big]$$
and
$$E\big[\mathbf{1}_{(-\varepsilon,\varepsilon)}(G)\big] = E\big[\mathbf{1}_{(-\varepsilon,\varepsilon)}(G)\langle DG, u\rangle_H\big] = E\big[\langle DH_\varepsilon(G), u\rangle_H\big] = E\big[H_\varepsilon(G)\delta(u)\big],$$
where $H_\varepsilon(y)$ is given by
$$H_\varepsilon(y) = \begin{cases} 0 & \text{if } y \le -\varepsilon, \\ y+\varepsilon & \text{if } y \in [-\varepsilon,\varepsilon], \\ 2\varepsilon & \text{if } y \ge \varepsilon. \end{cases}$$
Noting that $\frac{1}{\varepsilon}H_\varepsilon(G)$ converges a.s. to $2H(G)$ yields the desired result.

Corollary 2.1.5. If there exists a $u \in H$ additionally satisfying $E[\langle DF, u\rangle_H\,|\,\sigma(F,G)] = 0$, we obtain, keeping everything else as in the above proposition,
$$E\big[\phi(F)\,\big|\,G = 0\big] = \frac{E\big[\phi(F)H(G)\delta(u)\big]}{E\big[H(G)\delta(u)\big]}.$$

Example 2.1.6. In this example we derive a formula for the price of an option with pay-off φ(S_T) at time T that is suitable for Monte Carlo simulations. The price is given by V_0(S, t), meaning today's price of an option yielding an amount φ(S_T) at time T under the condition that S_t = S, where
\[
V_0(S,t) = e^{-rT}\, E_Q^{S_0}[\phi(S_T)\,|\,S_t = S].
\]
In order to handle the conditional expectation in that formula we make use of Corollary 2.1.5 and obtain:
\[
E_Q^{S_0}[\phi(S_T)\,|\,S_t = S] = \frac{E_Q^{S_0}[\phi(S_T)H(S_t - S)\delta(u)]}{E_Q^{S_0}[H(S_t - S)\delta(u)]}
\]
for a suitable u satisfying the conditions of Proposition 2.1.4 and Corollary 2.1.5. Such a u is given by
\[
u_s = \frac{1}{\sigma S_t}\,\tilde u_s = \frac{1}{\sigma S_t}\Big(\frac{1}{t}\mathbf 1_{(0,t)}(s) - \frac{1}{T-t}\mathbf 1_{(t,T)}(s)\Big).
\]

The only unknown term left is δ(u). We calculate the Skorohod integral with the help of Proposition 1.3.2:
\[
\delta(u) = \frac{1}{\sigma}\,\delta(S_t^{-1}\tilde u) = \frac{1}{\sigma}\Big(S_t^{-1}\delta(\tilde u) - \int_0^T D_s(S_t^{-1})\,\tilde u_s\,ds\Big).
\]
Calculating the derivative D_s S_t^{-1} and the Skorohod integral with adapted integrand \tilde u_s, we get for the last term:
\[
S_t^{-1}\delta(\tilde u) - \int_0^T D_s(S_t^{-1})\,\tilde u_s\,ds = S_t^{-1}\Big(\frac{W_t}{t} - \frac{W_T - W_t}{T-t}\Big) + \sigma S_t^{-1}.
\]

This yields the pricing formula suitable for Monte Carlo methods:
\[
V_0(S,t) = e^{-rT}\,
\frac{E_Q^{S_0}\Big[\phi(S_T)\,H(S_t-S)\,S_t^{-1}\Big(\frac{W_t}{\sigma t} - \frac{W_T-W_t}{\sigma(T-t)} + 1\Big)\Big]}
{E_Q^{S_0}\Big[H(S_t-S)\,S_t^{-1}\Big(\frac{W_t}{\sigma t} - \frac{W_T-W_t}{\sigma(T-t)} + 1\Big)\Big]}.
\]
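The ratio representation can be estimated by plain Monte Carlo. The sketch below does so for the (assumed) choice φ(x) = x, for which the exact conditional expectation is S e^{r(T−t)}; all parameter values are illustrative assumptions:

```python
import numpy as np

# Conditional expectation E_Q[phi(S_T) | S_t = S] via the Malliavin weight
#   delta(u) = S_t^{-1} ( W_t/(sigma t) - (W_T - W_t)/(sigma (T - t)) + 1 ).
# Illustrative parameters (assumptions); with phi(x) = x the exact answer
# is S * exp(r (T - t)).
rng = np.random.default_rng(0)
S0, r, sigma, T, t, S = 100.0, 0.05, 0.2, 1.0, 0.5, 100.0
n_paths = 1_000_000

W_t = rng.standard_normal(n_paths) * np.sqrt(t)
W_T = W_t + rng.standard_normal(n_paths) * np.sqrt(T - t)
S_t = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * W_t)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)

heaviside = (S_t > S).astype(float)                         # H(S_t - S)
delta_u = (W_t / (sigma * t) - (W_T - W_t) / (sigma * (T - t)) + 1.0) / S_t

phi = S_T                                                   # phi(x) = x
cond_exp = np.mean(phi * heaviside * delta_u) / np.mean(heaviside * delta_u)
```

Note that no path actually has to hit S_t = S: the Heaviside-plus-weight combination localises the expectation, which is what makes the formula usable in a simulation.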

Remark 2.1.7. The formula representing the conditional expectation may be used to calibrate stochastic volatility models, as is also shown in [FLLL01]. Another frequent application of the formula can be found in the literature on the pricing of American options (e.g. XXX). Depending on the choice of u, more variance-efficient estimators are possible – see [BET03] for details.

2.2 Hedging

In this section we extend the Clark-Ocone formula from Proposition 1.3.8 to the case of an equivalent measure Q under which the Brownian motion is given by
\[
\widetilde W_t = W_t + \int_0^t \theta_s\,ds.
\]

Here (θ_s)_{s∈[0,T]} is a measurable adapted process such that ∫_0^T θ_s^2 ds < ∞ holds a.s. The measure Q is given by the process Z_T,
\[
Z_t = \frac{dQ}{dP}\Big|_{\mathcal F_t}, \qquad
Z_t = \exp\Big(-\int_0^t \theta_s\,dW_s - \frac12\int_0^t \theta_s^2\,ds\Big).
\]
In order to prove the generalized Clark-Ocone formula we first need two results that are given by the next lemmata.

Lemma 2.2.1 (Generalized Bayes Formula). Suppose G ∈ L^1(Q). Then
\[
E_Q[G\,|\,\mathcal F_t] = \frac{E[Z_T G\,|\,\mathcal F_t]}{Z_t}.
\]

Lemma 2.2.2. Let F ∈ D^{1,2} be an F_T-measurable random variable and let θ ∈ D^{1,2}(L^2([0,T])). Assume
\[
E[Z_T^2 F^2] + E\Big[\int_0^T (Z_T D_t F)^2\,dt\Big] < \infty
\]
and
\[
E\Big[\int_0^T \Big\{Z_T F\Big(\theta_t + \int_t^T D_t\theta_s\,dW_s + \int_t^T \theta_s D_t\theta_s\,ds\Big)\Big\}^2 dt\Big] < \infty.
\]
Then Z_T F ∈ D^{1,2} and
\[
D_t(Z_T F) = Z_T D_t F - Z_T F\Big(\theta_t + \int_t^T D_t\theta_s\,dW_s + \int_t^T \theta_s D_t\theta_s\,ds\Big).
\]

Proof. First, by the product and chain rule we obtain
\[
D_t(Z_T F) = Z_T D_t F + F D_t Z_T
= Z_T D_t F - F Z_T\, D_t\Big(\int_0^T \theta_s\,dW_s + \frac12\int_0^T \theta_s^2\,ds\Big).
\]
Proposition 1.3.5 and D_t θ_s^2 = 2θ_s D_t θ_s yield
\[
D_t\Big(\int_0^T \theta_s\,dW_s + \frac12\int_0^T \theta_s^2\,ds\Big)
= \theta_t + \int_0^T D_t\theta_s\,dW_s + \int_0^T \theta_s D_t\theta_s\,ds.
\]
Finally the following observation leads to the desired result:
\[
D_t\theta_s = D_t(E[\theta_s\,|\,\mathcal F_s]) = E[D_t\theta_s\,|\,\mathcal F_s]\,\mathbf 1_{[0,s]}(t),
\]
so D_tθ_s vanishes for t > s and the integrals above may be restricted to [t, T]. The last equality is due to Proposition 1.2.9.

Now we are able to derive the generalized Clark-Ocone representation formula.

Proposition 2.2.3. Let the assumptions of Lemma 2.2.2 hold true. Then
\[
F = E_Q[F] + \int_0^T E_Q\Big[D_t F - F\int_t^T D_t\theta_s\,d\widetilde W_s \,\Big|\, \mathcal F_t\Big]\,d\widetilde W_t. \tag{2.3}
\]

Proof. We define Y_t = E_Q[F | F_t]. With the help of Lemma 2.2.1 we rewrite this as Y_t = Z_t^{-1} E[Z_T F | F_t]. The Clark-Ocone formula yields
\[
E[Z_T F\,|\,\mathcal F_t] = E[Z_T F] + \int_0^t E[D_s(Z_T F)\,|\,\mathcal F_s]\,dW_s.
\]
Now combining the last two steps we obtain
\[
Y_t = Z_t^{-1} E_Q[F] + Z_t^{-1}\int_0^t E[D_s(Z_T F)\,|\,\mathcal F_s]\,dW_s. \tag{2.4}
\]
From the previous lemma we get:

\[
E[D_t(Z_T F)\,|\,\mathcal F_t] = E\Big[Z_T\Big(D_t F - F\Big(\theta_t + \int_t^T D_t\theta_s\,d\widetilde W_s\Big)\Big)\,\Big|\,\mathcal F_t\Big] \tag{2.5}
\]
\[
= Z_t\, E_Q\Big[D_t F - F\Big(\theta_t + \int_t^T D_t\theta_s\,d\widetilde W_s\Big)\,\Big|\,\mathcal F_t\Big] \tag{2.6}
\]
\[
= Z_t\Psi_t - Z_t E_Q[F\theta_t\,|\,\mathcal F_t] = Z_t\Psi_t - Z_t Y_t\theta_t. \tag{2.7}
\]
Here Ψ_t = E_Q[D_t F − F ∫_t^T D_tθ_s dW̃_s | F_t] and the last equality stems from the adaptedness of θ. Now substituting (2.7) into (2.4) yields
\[
Y_t = Z_t^{-1} E_Q[F] + Z_t^{-1}\int_0^t Z_s\Psi_s\,dW_s - Z_t^{-1}\int_0^t Z_s Y_s\theta_s\,dW_s.
\]
Now applying Itô's product formula to the single terms in the above equation, thereby using that d(Z_t^{-1}) = Z_t^{-1}(θ_t dW_t + θ_t^2 dt), we obtain
\[
dY_t = Y_t(\theta_t\,dW_t + \theta_t^2\,dt) + \Psi_t\,dW_t - Y_t\theta_t\,dW_t + \theta_t\Psi_t\,dt - Y_t\theta_t^2\,dt = \Psi_t\,d\widetilde W_t.
\]
This establishes the result.

Next we are going to look at an example where we use the generalized Clark-Ocone representation to obtain a hedging portfolio.

Example 2.2.4. Let B be the pay-off of an option. We are again in a Black-Scholes world with constant coefficients. In this complete market the pay-off is replicable by a suitable hedging strategy β_t, and the value of the discounted replicating portfolio at time t is given by

\[
\widetilde V_t(\beta) := e^{-rT} E_Q[B] + \int_0^t \sigma \widetilde S_s \beta_s\,d\widetilde W_s.
\]

Let us assume that B, the pay-off, and θ, the process that induces Q, satisfy the assumptions of Lemma 2.2.2 (θ_t does so trivially as it is a constant in the present setting). Then by the generalized Clark-Ocone representation formula the following equality must hold true:
\[
\sigma \widetilde S_t \beta_t = e^{-rT} E_Q\Big[D_t B - B\int_t^T D_t\theta_s\,d\widetilde W_s\,\Big|\,\mathcal F_t\Big].
\]
Therefore the portfolio is given by
\[
\beta_t = \frac{e^{-r(T-t)}}{\sigma S_t}\, E_Q\Big[D_t B - B\int_t^T D_t\theta_s\,d\widetilde W_s\,\Big|\,\mathcal F_t\Big].
\]
As θ is a constant this reduces to
\[
\beta_t = \frac{e^{-r(T-t)}}{\sigma S_t}\, E_Q[D_t B\,|\,\mathcal F_t].
\]

If we assume that the pay-off B is given by φ(S_T) for a suitable function φ, we obtain
\[
\beta_t = \frac{e^{-r(T-t)}}{\sigma S_t}\, E_Q[\phi'(S_T)\,\sigma S_T\,|\,\mathcal F_t].
\]
For t = 0 this recovers exactly the Δ of Example 2.1.2, i.e.
\[
\beta_0 = \frac{e^{-rT}}{S_0}\, E_Q[\phi'(S_T) S_T].
\]
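For a concrete check, take φ(x) = (x − K)⁺, so φ'(S_T) = 1_{S_T > K} and β_0 = e^{−rT} E_Q[1_{S_T>K} S_T]/S_0, which in the Black-Scholes model is the classical delta Φ(d_1). A Monte Carlo sketch (parameter values are illustrative assumptions):

```python
import numpy as np

# Hedge ratio at t = 0 for a call:  beta_0 = e^{-rT} E_Q[ 1_{S_T > K} S_T ] / S0.
# In the Black-Scholes model this equals the classical delta Phi(d1).
# Parameter values are assumptions for illustration.
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths = 1_000_000

W_T = rng.standard_normal(n_paths) * np.sqrt(T)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)

beta_0 = np.exp(-r * T) * np.mean((S_T > K) * S_T) / S0
```

With these parameters d_1 = 0.35, so the estimate should be close to Φ(0.35) ≈ 0.637.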

Remark 2.2.5. We have seen in the above example that the Malliavin approach leads to the Δ-hedging formula for an optimal portfolio. In [Ber02] and [Ber03] Bermin analyses how the classical approach and the Malliavin Calculus based approach compare. In [Ber02] he shows that the generalized Clark-Ocone representation formula holds indeed for any square integrable random variable, not only for the members of D^{1,2}. Using this result, [Ber03] establishes that the Malliavin approach yields the Δ-hedging formula under slightly weaker conditions than the traditional approach.

2.3 Insider Trading

Insider trading is a natural application of Malliavin Calculus and the related theory of Skorohod integration. The use of these tools arises from the need to model information that is richer than the market knowledge. Richer information leads to processes that are no longer adapted to the market filtration and are therefore not treatable by Itô integration theory. For this reason we will consider Skorohod integrals, or more precisely 'forward integrals', which are closely related to Skorohod integrals, as they allow anticipative processes. We proceed by first developing a market set-up, then giving some pertinent results from [Nua06], and finally obtaining an optimal portfolio along the lines of [ØA04].

Let (B_t)_{t∈[0,T]} be a standard Brownian motion on a filtered probability space (Ω, F, (F_t), P). An insider is a person who is informed of the information contained in a filtration G = (G_t)_{t∈[0,T]} strictly bigger than the filtration F = (F_t)_{t∈[0,T]}. In this case the question arises how to understand an integral
\[
\int_0^T \phi_t\,dB_t \tag{2.8}
\]

when the process φ_t is G_t-adapted. One common approach is to assume that B_t is still a semi-martingale with respect to G and allows a decomposition B_t = B̃_t + A_t, where B̃_t is a Brownian motion under G and A_t a continuous, finite variation process. In case such a decomposition exists, the above integral can then be defined by
\[
\int_0^T \phi_t\,dB_t := \int_0^T \phi_t\,d\widetilde B_t + \int_0^T \phi_t\,dA_t.
\]

However, such a decomposition does not always exist and we therefore look for a more general way of defining the integral. One possible way to proceed is by considering forward integrals.

Definition 2.3.1. Let φ = {φ_t, t ∈ [0,T]} be a stochastic process. The forward stochastic integral is then given as the limit over all partitions 0 ≤ t_0 ≤ ⋯ ≤ t_n ≤ T,
\[
\int_0^T \phi_t\,d^-B_t := \lim_{\Delta t_i\to 0}\sum_i \phi_{t_i}(B_{t_{i+1}} - B_{t_i}),
\]
where the limit is understood as a limit in probability. If the limit exists in L^2(P) then we write φ ∈ Dom_2 δ^-.

This integral is closely related to the Skorohod integral, as we will see later on, and therefore also allows non-adapted integrands. In the sequel we will always consider an integral (2.8) as a forward integral. The motivation for using forward integrals in the market model is given by the following observations:

• Considering a buy and hold strategy φ_t = 1_{[t_1,t_2]}(t) for 0 ≤ t_1 ≤ t_2 ≤ T and the Brownian motion B_t as price process of an asset, we obtain

\[
\int_0^T \phi_t\,d^-B_t = \int_{t_1}^{t_2} d^-B_t
= \lim_{\Delta t_i\to 0}\sum_i (B_{t_{i+1}} - B_{t_i})
= \int_{t_1}^{t_2} dB_t = B_{t_2} - B_{t_1}.
\]
Therefore, just as in standard Itô integration theory, the forward integral gives exactly the amount gained over the period [t_1, t_2].
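The buy-and-hold computation can be reproduced numerically: on every simulated path the Riemann sum defining the forward integral telescopes to the increment of B. A small sketch (grid size and trading dates are illustrative assumptions):

```python
import numpy as np

# Forward integral of the buy-and-hold strategy phi_t = 1_[t1, t2](t):
# the Riemann sum  sum_i phi_{t_i} (B_{t_{i+1}} - B_{t_i})  telescopes to
# B_{t2} - B_{t1} path by path.  Grid and dates are illustrative.
rng = np.random.default_rng(1)
n_steps, T = 1000, 1.0
dt = T / n_steps
dB = rng.standard_normal(n_steps) * np.sqrt(dt)    # Brownian increments
B = np.concatenate([[0.0], np.cumsum(dB)])         # B at the grid points t_0, ..., t_n

i1, i2 = 250, 750                                  # t1 = 0.25, t2 = 0.75
phi = np.zeros(n_steps)
phi[i1:i2] = 1.0                                   # phi evaluated at left endpoints

forward_integral = np.sum(phi * dB)                # sum phi_{t_i} (B_{t_{i+1}} - B_{t_i})
```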

• Assume that we have the situation pointed out previously, namely that B_t is still a semi-martingale with respect to G and allows a decomposition B_t = B̃_t + A_t, where B̃_t is a Brownian motion under G and A_t a continuous, finite variation process. Then, for a forward integrable, G-adapted process φ_t,
\[
\int_0^T \phi_t\,d^-B_t = \int_0^T \phi_t\,dB_t = \int_0^T \phi_t\,d\widetilde B_t + \int_0^T \phi_t\,dA_t.
\]
Hence, the setting where B_t is still a semi-martingale is just a special instance of the more general forward integral. The above observation can easily be confirmed by considering
\[
\int_0^T \phi_t\,d\widetilde B_t + \int_0^T \phi_t\,dA_t
= \lim_{\Delta t_i\to 0}\sum_i \phi_{t_i}(\Delta \widetilde B_{t_i} + \Delta A_{t_i})
= \lim_{\Delta t_i\to 0}\sum_i \phi_{t_i}\Delta B_{t_i}
= \int_0^T \phi_t\,d^-B_t.
\]

The market modelled by forward integrals has dynamics given by the equations
\[
dS_t^0 = \rho_t S_t^0\,dt, \qquad S_0^0 = 1, \tag{2.9}
\]
\[
dS_t = S_t[\mu_t\,dt + \sigma_t\,d^-B_t], \qquad S_0 > 0. \tag{2.10}
\]
The differential equations are, as always, understood as the corresponding integral equations, and the coefficients satisfy the following conditions:
• μ_t, σ_t and ρ_t are G-adapted,
• E[∫_0^T {|ρ_t| + |μ_t| + σ_t^2} dt] < ∞,
• σ_t is Malliavin differentiable and (D^-σ)_s exists (for a definition see below),
• the equation (2.10) has a unique G-adapted solution.
Moreover we consider three filtrations (some of them already introduced) that are related by H_t ⊂ F_t ⊂ G_t ⊂ F for any t ∈ [0,T]. In the present setting we consider a portfolio π_t as an H_t-adapted process giving the fraction of the total wealth invested in the risky asset S_t.

Definition 2.3.2. The set A_H of admissible portfolios consists of all H_t-adapted processes π_t that are Malliavin differentiable and satisfy the conditions
• π_tσ_t is Skorohod integrable,
• E[∫_0^T |π_t (D^-σ)_t| dt] < ∞,
• E[∫_0^T |μ_t − ρ_t||π_t| dt] < ∞.

In this model a large investor with inside information might have access to the larger filtration G_t and influence the market by his actions. An agent, trying to optimise his wealth in some sense, has only the partial information contained in H_t and behaves accordingly. We are now trying to solve the logarithmic utility maximisation problem given by:
\[
\Phi(x) = \sup_{\pi\in\mathcal A_H} E^x[\log(X_T^\pi)], \tag{2.11}
\]
where the wealth process is given through the dynamics
\[
dX_t = (\rho_t + [\mu_t - \rho_t]\pi_t)X_t\,dt + \pi_t\sigma_t X_t\,d^-B_t, \qquad X_0 = x, \tag{2.12}
\]
if the investor has an initial capital of x. An optimal portfolio is denoted by π_t^* and satisfies Φ(x) = E^x[log(X_T^{π^*})].

Next, before we solve the above problem, we review some pertinent results from Malliavin Calculus and point out the relation between the Skorohod and the forward integral. First we define a random variable that can be understood as a left Malliavin derivative.

Definition 2.3.3. Assume that X ∈ L^{1,2} := D^{1,2}(L^2([0,T])) and p ∈ [1,2]. Then we denote by D^-X the element of L^p([0,T] × Ω) satisfying
\[
\lim_{n\to\infty}\int_0^T \sup_{(s-1/n)\vee 0 \le t < s} E[|D_s X_t - (D^-X)_s|^p]\,ds = 0.
\]
We denote by L^{1,2}_{p-} the class of processes in L^{1,2} that satisfy the above equation.

With the help of this definition we are now able to state the relation between the forward and the Skorohod integral.

Proposition 2.3.4. Let {φ_t, t ∈ [0,T]} be a stochastic process which is continuous in the norm of the space D^{1,2}. Suppose that φ ∈ L^{1,2}_{1-}. Then φ is forward integrable and
\[
\delta(\phi) = \int_0^T \phi_t\,d^-B_t - \int_0^T (D^-\phi)_s\,ds.
\]

Proof. Using equation (1.17) we obtain
\[
\delta\Big(\sum_{i=0}^{n-1}\phi_{t_i}\mathbf 1_{(t_i,t_{i+1}]}\Big)
= \sum_{i=0}^{n-1}\phi_{t_i}(B_{t_{i+1}} - B_{t_i}) - \sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}} D_s\phi_{t_i}\,ds. \tag{2.13}
\]
Due to the continuity of φ in the D^{1,2} norm, the process
\[
\phi_t^- = \sum_{i=0}^{n-1}\phi_{t_i}\mathbf 1_{(t_i,t_{i+1}]}(t)
\]
converges in the norm of L^{1,2} to the process φ. Therefore δ(φ^-) converges in L^2(Ω) to δ(φ). The first term on the r.h.s. converges by definition of the forward integral towards ∫_0^T φ_t d^-B_t, and for the second term on the r.h.s. we have the following approximation:
\[
E\Big[\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}} D_s\phi_{t_i}\,ds - \int_0^T (D^-\phi)_s\,ds\Big|\Big]
\le \sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}} E[|D_s\phi_{t_i} - (D^-\phi)_s|]\,ds
\le \int_0^T \sup_{(s-\Delta t_i)\vee 0 \le t < s} E[|D_s\phi_t - (D^-\phi)_s|]\,ds.
\]
By the definition of the space L^{1,2}_{1-} the last term converges to zero as the partition gets finer, i.e. Δt_i tends to zero.

As the expectation of the Skorohod integral is zero we obtain the following lemma.

Lemma 2.3.5. Let φ be as in the above proposition. Then
\[
E\Big[\int_0^T \phi_t\,d^-B_t\Big] = E\Big[\int_0^T (D^-\phi)_s\,ds\Big],
\]
provided the expectations exist.
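Lemma 2.3.5 can be illustrated numerically for a genuinely anticipating integrand. Take φ_t = B_T for all t (so that (D⁻φ)_s = 1): the forward Riemann sum telescopes to B_T² on every path, so the lemma predicts E[∫₀ᵀ B_T d⁻B_t] = E[B_T²] = T = E[∫₀ᵀ (D⁻φ)_s ds]; consistently, Proposition 2.3.4 then gives δ(φ) = B_T² − T, the familiar Skorohod integral of B_T. A Monte Carlo sketch (sample size is an assumption):

```python
import numpy as np

# Check of Lemma 2.3.5 for the anticipating integrand phi_t = B_T:
# the forward sum  sum_i B_T (B_{t_{i+1}} - B_{t_i})  telescopes to B_T^2,
# and (D^- phi)_s = 1, so both sides of the lemma equal T.
rng = np.random.default_rng(2)
T, n_paths = 1.0, 1_000_000

B_T = rng.standard_normal(n_paths) * np.sqrt(T)
lhs = np.mean(B_T * B_T)   # E[ int_0^T B_T d^- B_t ] = E[B_T^2]
rhs = T                    # E[ int_0^T (D^- phi)_s ds ] = T
```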


An Itô type formula for forward integrals exists ([Nua06], Theorem 3.2.7), yielding a solution to the wealth process (2.12):
\[
X_t^\pi = X_0 \exp\Big(\int_0^t \Big(\rho_s + (\mu_s - \rho_s)\pi_s - \frac12 \pi_s^2\sigma_s^2\Big)ds + \int_0^t \pi_s\sigma_s\,d^-B_s\Big). \tag{2.14}
\]

This finally enables us to solve the logarithmic utility maximisation problem. By using Lemma 2.3.5 we obtain:
\[
\begin{aligned}
E[\log X_T^\pi] - \log x &= E\Big[\int_0^T \Big(\rho_t + (\mu_t-\rho_t)\pi_t - \frac12\pi_t^2\sigma_t^2\Big)dt + \int_0^T \pi_t\sigma_t\,d^-B_t\Big]\\
&= E\Big[\int_0^T \Big(\rho_t + (\mu_t-\rho_t)\pi_t - \frac12\pi_t^2\sigma_t^2 + D^-(\pi\sigma)_t\Big)dt\Big].
\end{aligned}
\]

As π_t is H_t-adapted and H_t ⊂ F_t, we obtain by the same reasoning as in (2.3) that D_sπ_t = 0 for s > t. Applying the Malliavin chain rule yields:
\[
D^-(\pi\sigma)_t = \pi_t D^-(\sigma)_t + \sigma_t D^-(\pi)_t = \pi_t D^-(\sigma)_t.
\]
Therefore the above equation turns into
\[
E[\log X_T^\pi] - \log x = E\Big[\int_0^T \Big(\rho_t + \beta_t\pi_t - \frac12\pi_t^2\sigma_t^2\Big)dt\Big],
\]
where β_t := μ_t − ρ_t + D^-(σ)_t. We could basically maximise under the integral now, were it not for a restriction due to the different measurabilities of the coefficients. We bypass this problem by defining \hatρ_t = E[ρ_t | H_t], and similarly for the other coefficients, and obtain
\[
E[\log X_T^\pi] - \log x = E\Big[\int_0^T \Big(\hat\rho_t + \hat\beta_t\pi_t - \frac12\pi_t^2\hat\sigma_t^2\Big)dt\Big].
\]
We maximise pointwise and obtain the optimal portfolio
\[
\pi_t^* = \frac{\hat\beta_t}{\hat\sigma_t^2}.
\]
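The pointwise maximisation is an elementary concave quadratic problem: for each fixed t the map π ↦ ρ̂_t + β̂_tπ − ½π²σ̂_t² is maximised at π = β̂_t/σ̂_t², with maximal value ρ̂_t + β̂_t²/(2σ̂_t²). A minimal sketch; the constants are illustrative assumptions:

```python
# Pointwise maximisation of the concave quadratic integrand
#   pi  ->  rho_hat + beta_hat * pi - 0.5 * pi**2 * sigma_hat**2
# from the log-utility problem; the maximiser is pi* = beta_hat / sigma_hat**2,
# and the maximal value rho_hat + beta_hat**2 / (2 sigma_hat**2) is the
# integrand appearing in Phi(x).  Constants below are illustrative assumptions.
rho_hat, beta_hat, sigma_hat = 0.05, 0.07, 0.2

def integrand(pi):
    return rho_hat + beta_hat * pi - 0.5 * pi**2 * sigma_hat**2

pi_star = beta_hat / sigma_hat**2   # optimal fraction of wealth in the risky asset
```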

Summarizing the above we get the following result:

Proposition 2.3.6. Suppose that σ_t ≠ 0 a.e. and that for β_t as defined above the following holds true:
\[
E\Big[\int_0^T \frac{\hat\beta_t^2}{\hat\sigma_t^2}\,dt\Big] < \infty;
\]
then
\[
\Phi(x) = \log x + E\Big[\int_0^T \Big(\hat\rho_t + \frac{\hat\beta_t^2}{2\hat\sigma_t^2}\Big)dt\Big] < \infty.
\]
If moreover π^* is an admissible portfolio, i.e.
\[
\pi_t^* = \frac{E[\beta_t\,|\,\mathcal H_t]}{E[\sigma_t^2\,|\,\mathcal H_t]} \in \mathcal A_H,
\]
then π^* is the optimal control.

As a consequence the famous Merton solution can be recovered easily from this result.

Example 2.3.7. Assume that H_t = F_t = G_t. Then D^-σ = 0 and the optimal portfolio to this problem is given by
\[
\pi_t^* = \frac{E[\beta_t\,|\,\mathcal H_t]}{E[\sigma_t^2\,|\,\mathcal H_t]} = \frac{\mu_t - \rho_t}{\sigma_t^2}.
\]

Remark 2.3.8. The use of anticipative calculus in insider problems was developed in [LNN03], where a problem was treated in which the insider information consists of knowing the terminal value of some random variable. [Bk05] generalises the results of this section (mainly taken from [ØA04]) to other utility functions and a more general setting. In [KHS06] results from [ØA04] are also extended. A totally different approach to the use of Malliavin Calculus is taken by [Imk03]. He assumes a set-up where the Brownian motion is a semi-martingale under the enlarged filtration and tries to characterise the additional drift in terms of results from Malliavin Calculus.


Bibliography

[Ber02] H. Bermin. A general approach to hedging options: Applications to barrier and partial barrier options. Mathematical Finance, 12(3):199–218, 2002.

[Ber03] H. Bermin. Hedging options: The Malliavin calculus approach versus the ∆-hedging approach. Mathematical Finance, 13(1):73–84, 2003.

[BET03] B. Bouchard, I. Ekeland, and N. Touzi. On the Malliavin approach to Monte Carlo approximation of conditional expectations. Working Paper, 2003.

[Bk05] F. Biagini and B. Øksendal. A general stochastic calculus approach to insider trading. Applied Mathematics and Optimization, 52(2):167–181, 2005.

[FLL+99] E. Fournié, J. Lasry, J. Lebuchoux, P. Lions, and N. Touzi. Applications of Malliavin calculus to Monte Carlo methods in finance. Finance and Stochastics, 3:391–412, 1999.

[FLLL01] E. Fournié, J. Lasry, P. Lions, and J. Lebuchoux. Applications of Malliavin calculus to Monte-Carlo methods in finance II. Finance and Stochastics, 5(2):201–236, 2001.

[GKH03] E. Gobet and A. Kohatsu-Higa. Computation of Greeks for barrier and lookback options using Malliavin calculus. Electronic Communications in Probability, 8:51–62, 2003.

[Imk03] P. Imkeller. Malliavin's calculus in insider models: Additional utility and free lunches. Mathematical Finance, 13(1):153–169, 2003.

[KHM04] A. Kohatsu-Higa and M. Montero. Malliavin calculus in finance. Handbook of Computational and Numerical Methods in Finance, pages 111–174, 2004.

[KHS06] A. Kohatsu-Higa and A. Sulem. Utility maximization in an insider influenced market. Mathematical Finance, 16(1):153–179, 2006.

[KO91] I. Karatzas and D. Ocone. A generalized Clark representation formula, with applications to optimal portfolios. Stochastics and Stochastics Reports, 34:187–220, 1991.

[LNN03] J. León, R. Navarro, and D. Nualart. An anticipating calculus approach to the utility maximization of an insider. Mathematical Finance, 13(1):171–185, 2003.

[Nua06] D. Nualart. The Malliavin Calculus and Related Topics. Probability and Its Applications. Springer, 2nd edition, 2006.

[ØA04] B. Øksendal and A. Sulem. Partial observation control in an anticipating environment. Russian Math. Surveys, 59(2):355–375, 2004.

[Øks96] B. Øksendal. An introduction to Malliavin calculus with applications to economics. Lecture Notes, 1996.
