$$X(t) = x_0(t), \qquad t \in [-\delta, 0], \tag{1.2}$$

where

$$Y(t) = X(t - \delta), \qquad A(t) = \int_{t-\delta}^{t} e^{-\rho(t-r)} X(r)\, dr, \tag{1.3}$$

and the performance functional is

$$J(u) = \mathrm{E}\left[\int_0^T f(t, X(t), Y(t), A(t), u(t), \omega)\, dt + g(X(T), \omega)\right], \qquad u \in \mathcal{A}_{\mathcal{E}}.$$
Here, and in the following, we suppress the $\omega$ for notational simplicity. The problem we consider in this paper is the following.
Find $\Phi(x_0)$ and $u^* \in \mathcal{A}_{\mathcal{E}}$ such that
$$\Phi(x_0) = \sup_{u \in \mathcal{A}_{\mathcal{E}}} J(u) = J(u^*). \tag{1.4}$$
B. ØKSENDAL ET AL.
and [8] in the stochastic Brownian case. To the best of the authors' knowledge, despite the statement of a result in [17], this kind of approach has not been developed for delay systems driven by a Lévy noise.
Nonetheless, in some cases that are still very interesting for applications, systems with delay can be reduced to finite-dimensional systems, since the information we need from their dynamics can be represented by a finite-dimensional variable evolving in terms of itself. In such a context, the crucial point is to understand when this finite-dimensional reduction of the problem is possible and/or to find conditions ensuring it. There are some papers dealing with this subject in the stochastic Brownian case: we refer the reader to [5], [9], [10], [11], and [12]. The paper [3] represents an extension of [11] to the case when the equation is driven by a Lévy noise.
We also mention the paper [4], where certain control problems of stochastic functional
differential equations are studied by means of the Girsanov transformation. This approach,
however, does not work if there is a delay in the noise components.
In [14] the problem is transformed into a stochastic control problem for Volterra equations.
Our approach in the current paper is different from all of the above. Note that the presence of the terms Y(t) and A(t) in (1.1) makes the problem non-Markovian, so we cannot use a (finite-dimensional) dynamic programming approach. However, we will show that it is possible to obtain a (Pontryagin-Bismut-Bensoussan-type) stochastic maximum principle for the problem.
The paper is organised as follows: we introduce the Hamiltonian and the time-advanced backward stochastic differential equation (BSDE) for the adjoint processes in Section 2; then we prove sufficient and necessary stochastic maximum principles in Sections 3 and 4. Section 5 is devoted to existence and uniqueness theorems for various time-advanced BSDEs with jumps. Finally, an example of application to an optimal consumption problem with delay is given in Section 6.
2. Hamiltonian and time-advanced BSDEs for adjoint equations
We define the Hamiltonian
$$H : [0, T] \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times U \times \mathbb{R} \times \mathbb{R} \times \mathcal{R} \to \mathbb{R}$$
by
$$\begin{aligned}
H(t, x, y, a, u, p, q, r(\cdot), \omega) &= H(t, x, y, a, u, p, q, r(\cdot)) \\
&= f(t, x, y, a, u) + b(t, x, y, a, u)\, p + \sigma(t, x, y, a, u)\, q \\
&\quad + \int_{\mathbb{R}_0} \theta(t, x, y, a, u, z)\, r(z)\, \nu(dz),
\end{aligned} \tag{2.1}$$
where $\mathcal{R}$ is the set of functions $r : \mathbb{R}_0 \to \mathbb{R}$ such that the last term in (2.1) converges.
We assume that $b$, $\sigma$, and $\theta$ are $C^1$ functions with respect to $(x, y, a, u)$ and that
$$\mathrm{E}\left[\int_0^T \left\{ \left|\frac{\partial b}{\partial x_i}(t, X(t), Y(t), A(t), u(t))\right|^2 + \left|\frac{\partial \sigma}{\partial x_i}(t, X(t), Y(t), A(t), u(t))\right|^2 + \int_{\mathbb{R}_0} \left|\frac{\partial \theta}{\partial x_i}(t, X(t), Y(t), A(t), u(t), z)\right|^2 \nu(dz) \right\} dt \right] < \infty \tag{2.2}$$
for $x_i = x, y, a$, and $u$.
Associated to $H$ we define the adjoint processes $p(t)$, $q(t)$, $r(t, z)$, $t \in [0, T]$, $z \in \mathbb{R}_0$, by the BSDE
$$\begin{aligned}
dp(t) &= \mathrm{E}[\mu(t) \mid \mathcal{F}_t]\, dt + q(t)\, dB(t) + \int_{\mathbb{R}_0} r(t, z)\, \tilde{N}(dt, dz), \qquad t \in [0, T], \\
p(T) &= g'(X(T)),
\end{aligned} \tag{2.3}$$
where
$$\begin{aligned}
\mu(t) = {}& -\frac{\partial H}{\partial x}(t, X(t), Y(t), A(t), u(t), p(t), q(t), r(t, \cdot)) \\
& - \frac{\partial H}{\partial y}(t + \delta, X(t + \delta), Y(t + \delta), A(t + \delta), u(t + \delta), p(t + \delta), q(t + \delta), r(t + \delta, \cdot))\, \mathbf{1}_{[0, T-\delta]}(t) \\
& - e^{\rho t} \int_t^{t+\delta} \frac{\partial H}{\partial a}(s, X(s), Y(s), A(s), u(s), p(s), q(s), r(s, \cdot))\, e^{-\rho s}\, \mathbf{1}_{[0, T]}(s)\, ds.
\end{aligned} \tag{2.4}$$
Note that this BSDE is anticipative, or time-advanced, in the sense that the process $\mu(t)$ contains future values of $X(s)$, $u(s)$, $p(s)$, $q(s)$, $r(s, \cdot)$, $s \leq t + \delta$.
In the case when there are no jumps and no integral term in (2.4), anticipative BSDEs (ABSDEs for short) have been studied by Peng and Yang [16], who proved existence and uniqueness of such equations under certain conditions. They also related a class of linear ABSDEs to a class of linear stochastic delay control problems where there is no delay in the noise coefficients. Thus, in our paper we extend this relation to general nonlinear control problems and general nonlinear ABSDEs by means of the maximum principle; throughout the study we include the possibility of delays also in the noise coefficients, as well as the possibility of jumps.
After this paper was written, we became aware of a recent paper [1] on a similar maximum
principle for the stochastic control problem of delayed systems. However, the authors do not
consider delays of moving average type and they do not allow jumps. On the other hand, they
allow delay in the control.
Remark 2.1. The methods used in this paper extend easily to more general delay systems. For example, we can add finitely many delay terms of the form
$$Y_i(t) := X(t - \delta_i), \qquad i = 1, \ldots, N,$$
where the $\delta_i > 0$ are given, in all the coefficients. Moreover, we can include more general moving average terms of the form
$$A_j(t) := \int_{t-\delta}^{t} \gamma_j(t, s)\, X(s)\, ds, \qquad j = 1, \ldots, M,$$
where the $\gamma_j(t, s)$, $0 \leq s \leq t \leq T$, are given locally bounded $\mathcal{F}_s$-adapted processes.
In this case the process $\mu(t)$ in (2.4) must be modified accordingly. More precisely, the last term in (2.4) must be changed to the sum of
$$-\int_t^{t+\delta} \frac{\partial H}{\partial a_j}(s, X(s), Y_1(s), \ldots, Y_N(s), A_1(s), \ldots, A_M(s), u(s), p(s), q(s), r(s, \cdot))\, \gamma_j(t, s)\, \mathbf{1}_{[0, T]}(s)\, ds, \qquad 1 \leq j \leq M.$$
Even more generally, when modified appropriately, our method could also deal with delay terms of the form
$$A^{(\mu)}(t) := \int_{t-\delta}^{t} X(s)\, \mu(ds),$$
where $\mu$ is a given (positive) measure on $[0, T]$. In this case the BSDE (2.3) must be modified to
$$\begin{aligned}
dp(t) = {}& -\mathrm{E}\left[\frac{\partial H}{\partial x}(t) \,\Big|\, \mathcal{F}_t\right] dt - \sum_{i=1}^{N} \mathrm{E}\left[\frac{\partial H}{\partial y_i}(t + \delta_i) \,\Big|\, \mathcal{F}_t\right] \mathbf{1}_{[0, T-\delta_i]}(t)\, dt \\
& - \mathrm{E}\left[\int_t^{t+\delta} \frac{\partial H}{\partial a}(s)\, \gamma(t, s)\, \mathbf{1}_{[0, T]}(s)\, ds \,\Big|\, \mathcal{F}_t\right] d\mu(t) + q(t)\, dB(t) + \int_{\mathbb{R}_0} r(t, z)\, \tilde{N}(dt, dz), \qquad t \in [0, T],
\end{aligned}$$
where we have used the abbreviated notation
$$\frac{\partial H}{\partial x}(t) = \frac{\partial H}{\partial x}(t, X(t), Y_1(t), \ldots, Y_N(t), \ldots), \quad \text{etc.}$$
For simplicity of presentation, however, we have chosen to focus on just the two types of delay, $Y(t)$ and $A(t)$, given in (1.3).
3. A sufficient maximum principle
In this section we establish a maximum principle of sufficient type; i.e. we show that, under some assumptions, maximizing the Hamiltonian leads to an optimal control.
Theorem 3.1. (Sufficient maximum principle.) Let $\hat{u} \in \mathcal{A}_{\mathcal{E}}$ with corresponding state processes $\hat{X}(t)$, $\hat{Y}(t)$, $\hat{A}(t)$ and adjoint processes $\hat{p}(t)$, $\hat{q}(t)$, $\hat{r}(t, z)$ satisfying (2.3)-(2.4). Suppose that the following assertions hold.

(i) The functions
$$(x, y, a, u) \mapsto H(t, x, y, a, u, \hat{p}(t), \hat{q}(t), \hat{r}(t, \cdot)) \quad \text{and} \quad x \mapsto g(x)$$
are concave, for each $t \in [0, T]$ almost surely (a.s.).

(ii)
$$\mathrm{E}\left[\int_0^T \left\{ \hat{p}^2(t) \left( \sigma^2(t) + \int_{\mathbb{R}_0} \theta^2(t, z)\, \nu(dz) \right) + X^2(t) \left( \hat{q}^2(t) + \int_{\mathbb{R}_0} \hat{r}^2(t, z)\, \nu(dz) \right) \right\} dt \right] < \infty$$
for all $u \in \mathcal{A}_{\mathcal{E}}$.

(iii)
$$\mathrm{E}[H(t, \hat{X}(t), \hat{Y}(t), \hat{A}(t), \hat{u}(t), \hat{p}(t), \hat{q}(t), \hat{r}(t, \cdot)) \mid \mathcal{E}_t] = \max_{v \in U} \mathrm{E}[H(t, \hat{X}(t), \hat{Y}(t), \hat{A}(t), v, \hat{p}(t), \hat{q}(t), \hat{r}(t, \cdot)) \mid \mathcal{E}_t] \tag{3.1}$$
for all $t \in [0, T]$ a.s.

Then $\hat{u}(t)$ is an optimal control for problem (1.4).
Proof. Choose $u \in \mathcal{A}_{\mathcal{E}}$ and consider
$$J(u) - J(\hat{u}) = I_1 + I_2, \tag{3.2}$$
where
$$I_1 = \mathrm{E}\left[\int_0^T \{ f(t, X(t), Y(t), A(t), u(t)) - f(t, \hat{X}(t), \hat{Y}(t), \hat{A}(t), \hat{u}(t)) \}\, dt \right] \tag{3.3}$$
and
$$I_2 = \mathrm{E}[g(X(T)) - g(\hat{X}(T))]. \tag{3.4}$$
By the definition (2.1) of $H$ and the concavity assumption (i),
$$\begin{aligned}
I_1 = {}& \mathrm{E}\bigg[\int_0^T \bigg\{ H(t, X(t), Y(t), A(t), u(t), \hat{p}(t), \hat{q}(t), \hat{r}(t, \cdot)) - H(t, \hat{X}(t), \hat{Y}(t), \hat{A}(t), \hat{u}(t), \hat{p}(t), \hat{q}(t), \hat{r}(t, \cdot)) \\
& \quad - (b(t) - \hat{b}(t))\, \hat{p}(t) - (\sigma(t) - \hat{\sigma}(t))\, \hat{q}(t) - \int_{\mathbb{R}_0} (\theta(t, z) - \hat{\theta}(t, z))\, \hat{r}(t, z)\, \nu(dz) \bigg\}\, dt \bigg] \\
\leq {}& \mathrm{E}\bigg[\int_0^T \bigg\{ \frac{\partial \hat{H}}{\partial x}(t)(X(t) - \hat{X}(t)) + \frac{\partial \hat{H}}{\partial y}(t)(Y(t) - \hat{Y}(t)) + \frac{\partial \hat{H}}{\partial a}(t)(A(t) - \hat{A}(t)) + \frac{\partial \hat{H}}{\partial u}(t)(u(t) - \hat{u}(t)) \\
& \quad - (b(t) - \hat{b}(t))\, \hat{p}(t) - (\sigma(t) - \hat{\sigma}(t))\, \hat{q}(t) - \int_{\mathbb{R}_0} (\theta(t, z) - \hat{\theta}(t, z))\, \hat{r}(t, z)\, \nu(dz) \bigg\}\, dt \bigg],
\end{aligned} \tag{3.5}$$
where
$$\frac{\partial \hat{H}}{\partial x}(t) = \frac{\partial H}{\partial x}(t, \hat{X}(t), \hat{Y}(t), \hat{A}(t), \hat{u}(t), \hat{p}(t), \hat{q}(t), \hat{r}(t, \cdot)), \qquad b(t) = b(t, X(t), Y(t), A(t), u(t)), \qquad \hat{b}(t) = b(t, \hat{X}(t), \hat{Y}(t), \hat{A}(t), \hat{u}(t)),$$
and similarly for the other partial derivatives and for $\sigma$ and $\theta$.

By the concavity of $g$, the terminal condition $\hat{p}(T) = g'(\hat{X}(T))$, the Itô formula, and (2.3),
$$\begin{aligned}
I_2 &\leq \mathrm{E}[g'(\hat{X}(T))(X(T) - \hat{X}(T))] = \mathrm{E}[\hat{p}(T)(X(T) - \hat{X}(T))] \\
&= \mathrm{E}\bigg[\int_0^T \hat{p}(t)\, (dX(t) - d\hat{X}(t)) + \int_0^T (X(t) - \hat{X}(t))\, d\hat{p}(t) + \int_0^T (\sigma(t) - \hat{\sigma}(t))\, \hat{q}(t)\, dt + \int_0^T \int_{\mathbb{R}_0} (\theta(t, z) - \hat{\theta}(t, z))\, \hat{r}(t, z)\, \nu(dz)\, dt \bigg] \\
&= \mathrm{E}\bigg[\int_0^T (b(t) - \hat{b}(t))\, \hat{p}(t)\, dt + \int_0^T (X(t) - \hat{X}(t))\, \mathrm{E}[\hat{\mu}(t) \mid \mathcal{F}_t]\, dt + \int_0^T (\sigma(t) - \hat{\sigma}(t))\, \hat{q}(t)\, dt + \int_0^T \int_{\mathbb{R}_0} (\theta(t, z) - \hat{\theta}(t, z))\, \hat{r}(t, z)\, \nu(dz)\, dt \bigg].
\end{aligned} \tag{3.6}$$
Adding (3.5) and (3.6), we get
$$J(u) - J(\hat{u}) \leq \mathrm{E}\bigg[\int_0^T \bigg\{ \frac{\partial \hat{H}}{\partial x}(t)(X(t) - \hat{X}(t)) + \hat{\mu}(t)(X(t) - \hat{X}(t)) + \frac{\partial \hat{H}}{\partial y}(t)(Y(t) - \hat{Y}(t)) + \frac{\partial \hat{H}}{\partial a}(t)(A(t) - \hat{A}(t)) + \frac{\partial \hat{H}}{\partial u}(t)(u(t) - \hat{u}(t)) \bigg\}\, dt \bigg]. \tag{3.7}$$
Since $Y(t) = X(t - \delta)$ and $X(t) = \hat{X}(t) = x_0(t)$ for $t \in [-\delta, 0]$, a change of variables gives
$$\mathrm{E}\bigg[\int_0^T \frac{\partial \hat{H}}{\partial y}(t)(Y(t) - \hat{Y}(t))\, dt \bigg] = \mathrm{E}\bigg[\int_0^T \frac{\partial \hat{H}}{\partial y}(t + \delta)\, \mathbf{1}_{[0, T-\delta]}(t)\, (X(t) - \hat{X}(t))\, dt \bigg],$$
and, by the definition (1.3) of $A$ and the Fubini theorem,
$$\mathrm{E}\bigg[\int_0^T \frac{\partial \hat{H}}{\partial a}(s)(A(s) - \hat{A}(s))\, ds \bigg] = \mathrm{E}\bigg[\int_0^T \frac{\partial \hat{H}}{\partial a}(s) \int_{s-\delta}^{s} e^{-\rho(s-r)}(X(r) - \hat{X}(r))\, dr\, ds \bigg] = \mathrm{E}\bigg[\int_0^T \bigg( e^{\rho t} \int_t^{t+\delta} \frac{\partial \hat{H}}{\partial a}(s)\, e^{-\rho s}\, \mathbf{1}_{[0, T]}(s)\, ds \bigg)(X(t) - \hat{X}(t))\, dt \bigg].$$
Combining this with (3.7) and using (2.4), we obtain
$$J(u) - J(\hat{u}) \leq \mathrm{E}\bigg[\int_0^T \frac{\partial \hat{H}}{\partial u}(t)(u(t) - \hat{u}(t))\, dt \bigg] = \mathrm{E}\bigg[\int_0^T \mathrm{E}\bigg[\frac{\partial \hat{H}}{\partial u}(t) \,\Big|\, \mathcal{E}_t \bigg](u(t) - \hat{u}(t))\, dt \bigg] \leq 0,$$
where the final inequality holds because, by assumption (iii) and the concavity of $H$ in $u$, the map $v \mapsto \mathrm{E}[H(t, \hat{X}(t), \hat{Y}(t), \hat{A}(t), v, \hat{p}(t), \hat{q}(t), \hat{r}(t, \cdot)) \mid \mathcal{E}_t]$ is maximal at $v = \hat{u}(t)$. Hence, $J(u) \leq J(\hat{u})$ for all $u \in \mathcal{A}_{\mathcal{E}}$, so $\hat{u}$ is optimal.
4. A necessary maximum principle

We now make the following assumptions.

(A1) For all $u \in \mathcal{A}_{\mathcal{E}}$ and all bounded $\beta \in \mathcal{A}_{\mathcal{E}}$, there exists $\varepsilon > 0$ such that $u + s\beta \in \mathcal{A}_{\mathcal{E}}$ for all $s \in (-\varepsilon, \varepsilon)$.
(A2) For all $t_0 \in [0, T]$ and all bounded $\mathcal{E}_{t_0}$-measurable random variables $\alpha$, the control process $\beta(t)$ defined by
$$\beta(t) = \alpha\, \mathbf{1}_{[t_0, T]}(t), \qquad t \in [0, T], \tag{4.1}$$
belongs to $\mathcal{A}_{\mathcal{E}}$.
(A3) For all bounded $\beta \in \mathcal{A}_{\mathcal{E}}$, the derivative process
$$\xi(t) := \frac{d}{ds} X^{u + s\beta}(t)\Big|_{s=0} \tag{4.2}$$
exists and satisfies
$$\begin{aligned}
d\xi(t) = {}& \left( \frac{\partial b}{\partial x}(t)\, \xi(t) + \frac{\partial b}{\partial y}(t)\, \xi(t - \delta) + \frac{\partial b}{\partial a}(t) \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr + \frac{\partial b}{\partial u}(t)\, \beta(t) \right) dt \\
& + \left( \frac{\partial \sigma}{\partial x}(t)\, \xi(t) + \frac{\partial \sigma}{\partial y}(t)\, \xi(t - \delta) + \frac{\partial \sigma}{\partial a}(t) \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr + \frac{\partial \sigma}{\partial u}(t)\, \beta(t) \right) dB(t) \\
& + \int_{\mathbb{R}_0} \left( \frac{\partial \theta}{\partial x}(t, z)\, \xi(t) + \frac{\partial \theta}{\partial y}(t, z)\, \xi(t - \delta) + \frac{\partial \theta}{\partial a}(t, z) \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr + \frac{\partial \theta}{\partial u}(t, z)\, \beta(t) \right) \tilde{N}(dt, dz),
\end{aligned} \tag{4.3}$$
where
$$\frac{\partial b}{\partial x}(t) = \frac{\partial b}{\partial x}(t, X(t), X(t - \delta), A(t), u(t)), \quad \text{etc.},$$
and we have used the facts that
$$\frac{d}{ds} Y^{u + s\beta}(t)\Big|_{s=0} = \frac{d}{ds} X^{u + s\beta}(t - \delta)\Big|_{s=0} = \xi(t - \delta)$$
and
$$\frac{d}{ds} A^{u + s\beta}(t)\Big|_{s=0} = \frac{d}{ds} \left( \int_{t-\delta}^{t} e^{-\rho(t-r)}\, X^{u + s\beta}(r)\, dr \right)\Big|_{s=0} = \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr.$$
Note that
$$\xi(t) = 0 \quad \text{for } t \in [-\delta, 0].$$
Theorem 4.1. (Necessary maximum principle.) Suppose that $\hat{u} \in \mathcal{A}_{\mathcal{E}}$, with corresponding solutions $\hat{X}(t)$ of (1.1)-(1.2) and $\hat{p}(t)$, $\hat{q}(t)$, $\hat{r}(t, z)$ of (2.3)-(2.4), and corresponding derivative process $\xi(t)$ given by (4.2). Assume that, for all bounded $\beta \in \mathcal{A}_{\mathcal{E}}$,
$$\begin{aligned}
\mathrm{E}\Bigg[\int_0^T \Bigg\{ \hat{p}^2(t) \Bigg( & \left( \frac{\partial \sigma}{\partial x} \right)^{\!2}(t)\, \xi^2(t) + \left( \frac{\partial \sigma}{\partial y} \right)^{\!2}(t)\, \xi^2(t - \delta) + \left( \frac{\partial \sigma}{\partial a} \right)^{\!2}(t) \left( \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr \right)^{\!2} + \left( \frac{\partial \sigma}{\partial u} \right)^{\!2}(t)\, \beta^2(t) \\
& + \int_{\mathbb{R}_0} \Bigg[ \left( \frac{\partial \theta}{\partial x} \right)^{\!2}(t, z)\, \xi^2(t) + \left( \frac{\partial \theta}{\partial y} \right)^{\!2}(t, z)\, \xi^2(t - \delta) + \left( \frac{\partial \theta}{\partial a} \right)^{\!2}(t, z) \left( \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr \right)^{\!2} + \left( \frac{\partial \theta}{\partial u} \right)^{\!2}(t, z)\, \beta^2(t) \Bigg]\, \nu(dz) \Bigg) \\
& + \xi^2(t) \left( \hat{q}^2(t) + \int_{\mathbb{R}_0} \hat{r}^2(t, z)\, \nu(dz) \right) \Bigg\}\, dt \Bigg] < \infty.
\end{aligned}$$
Then the following assertions are equivalent.

(i) $\dfrac{d}{ds} J(\hat{u} + s\beta)\Big|_{s=0} = 0$ for all bounded $\beta \in \mathcal{A}_{\mathcal{E}}$.

(ii) $\mathrm{E}\left[\dfrac{\partial}{\partial u} H(t, \hat{X}(t), \hat{Y}(t), \hat{A}(t), u, \hat{p}(t), \hat{q}(t), \hat{r}(t, \cdot)) \,\Big|\, \mathcal{E}_t \right]_{u = \hat{u}(t)} = 0$ a.s. for all $t \in [0, T]$.
Proof. By the definitions of $J$ and $H$, and by (A3),
$$\begin{aligned}
\frac{d}{ds} J(u + s\beta)\Big|_{s=0} = \mathrm{E}\Bigg[\int_0^T \Bigg\{ & \left( \frac{\partial H}{\partial x}(t) - \frac{\partial b}{\partial x}(t)\, p(t) - \frac{\partial \sigma}{\partial x}(t)\, q(t) - \int_{\mathbb{R}_0} \frac{\partial \theta}{\partial x}(t, z)\, r(t, z)\, \nu(dz) \right) \xi(t) \\
& + \left( \frac{\partial H}{\partial y}(t) - \frac{\partial b}{\partial y}(t)\, p(t) - \frac{\partial \sigma}{\partial y}(t)\, q(t) - \int_{\mathbb{R}_0} \frac{\partial \theta}{\partial y}(t, z)\, r(t, z)\, \nu(dz) \right) \xi(t - \delta) \\
& + \left( \frac{\partial H}{\partial a}(t) - \frac{\partial b}{\partial a}(t)\, p(t) - \frac{\partial \sigma}{\partial a}(t)\, q(t) - \int_{\mathbb{R}_0} \frac{\partial \theta}{\partial a}(t, z)\, r(t, z)\, \nu(dz) \right) \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr \\
& + \frac{\partial f}{\partial u}(t)\, \beta(t) \Bigg\}\, dt + g'(X(T))\, \xi(T) \Bigg],
\end{aligned} \tag{4.4}$$
where we have used that $\partial f/\partial x = \partial H/\partial x - p\, \partial b/\partial x - q\, \partial \sigma/\partial x - \int_{\mathbb{R}_0} r\, \partial \theta/\partial x\, \nu(dz)$, and similarly for the other variables.
By (4.3), (2.3), and the Itô formula,
$$\begin{aligned}
\mathrm{E}[g'(X(T))\, \xi(T)] = {}& \mathrm{E}[p(T)\, \xi(T)] \\
= {}& \mathrm{E}\Bigg[\int_0^T p(t)\, d\xi(t) + \int_0^T \xi(t)\, dp(t) \\
& + \int_0^T q(t) \left( \frac{\partial \sigma}{\partial x}(t)\, \xi(t) + \frac{\partial \sigma}{\partial y}(t)\, \xi(t - \delta) + \frac{\partial \sigma}{\partial a}(t) \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr + \frac{\partial \sigma}{\partial u}(t)\, \beta(t) \right) dt \\
& + \int_0^T \int_{\mathbb{R}_0} r(t, z) \left( \frac{\partial \theta}{\partial x}(t, z)\, \xi(t) + \frac{\partial \theta}{\partial y}(t, z)\, \xi(t - \delta) + \frac{\partial \theta}{\partial a}(t, z) \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr + \frac{\partial \theta}{\partial u}(t, z)\, \beta(t) \right) \nu(dz)\, dt \Bigg] \\
= {}& \mathrm{E}\Bigg[\int_0^T p(t) \left( \frac{\partial b}{\partial x}(t)\, \xi(t) + \frac{\partial b}{\partial y}(t)\, \xi(t - \delta) + \frac{\partial b}{\partial a}(t) \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr + \frac{\partial b}{\partial u}(t)\, \beta(t) \right) dt + \int_0^T \xi(t)\, \mathrm{E}[\mu(t) \mid \mathcal{F}_t]\, dt \\
& + \int_0^T q(t) \left( \frac{\partial \sigma}{\partial x}(t)\, \xi(t) + \frac{\partial \sigma}{\partial y}(t)\, \xi(t - \delta) + \frac{\partial \sigma}{\partial a}(t) \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr + \frac{\partial \sigma}{\partial u}(t)\, \beta(t) \right) dt \\
& + \int_0^T \int_{\mathbb{R}_0} r(t, z) \left( \frac{\partial \theta}{\partial x}(t, z)\, \xi(t) + \frac{\partial \theta}{\partial y}(t, z)\, \xi(t - \delta) + \frac{\partial \theta}{\partial a}(t, z) \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr + \frac{\partial \theta}{\partial u}(t, z)\, \beta(t) \right) \nu(dz)\, dt \Bigg].
\end{aligned} \tag{4.5}$$
Combining (4.4) and (4.5), we obtain
$$\begin{aligned}
\frac{d}{ds} J(u + s\beta)\Big|_{s=0} = {}& \mathrm{E}\bigg[\int_0^T \bigg\{ \frac{\partial H}{\partial x}(t) + \mathrm{E}[\mu(t) \mid \mathcal{F}_t] \bigg\}\, \xi(t)\, dt + \int_0^T \frac{\partial H}{\partial y}(t)\, \xi(t - \delta)\, dt \\
& + \int_0^T \frac{\partial H}{\partial a}(t) \int_{t-\delta}^{t} e^{-\rho(t-r)}\, \xi(r)\, dr\, dt + \int_0^T \frac{\partial H}{\partial u}(t)\, \beta(t)\, dt \bigg].
\end{aligned}$$
Using $\xi(t) = 0$ for $t \in [-\delta, 0]$, a change of variables gives
$$\int_0^T \frac{\partial H}{\partial y}(t)\, \xi(t - \delta)\, dt = \int_0^T \frac{\partial H}{\partial y}(t + \delta)\, \mathbf{1}_{[0, T-\delta]}(t)\, \xi(t)\, dt,$$
and, by the Fubini theorem,
$$\int_0^T \frac{\partial H}{\partial a}(s) \int_{s-\delta}^{s} e^{-\rho(s-r)}\, \xi(r)\, dr\, ds = \int_0^T \bigg( e^{\rho t} \int_t^{t+\delta} \frac{\partial H}{\partial a}(s)\, e^{-\rho s}\, \mathbf{1}_{[0, T]}(s)\, ds \bigg)\, \xi(t)\, dt.$$
Hence, by the definition (2.4) of $\mu(t)$, all the $\xi$-terms cancel and
$$\frac{d}{ds} J(u + s\beta)\Big|_{s=0} = \mathrm{E}\bigg[\int_0^T \frac{\partial H}{\partial u}(t)\, \beta(t)\, dt \bigg]. \tag{4.6}$$
If assertion (i) holds, then applying (4.6) to $\beta(t) = \alpha\, \mathbf{1}_{[t_0, T]}(t)$ of the form (4.1) gives
$$0 = \mathrm{E}\bigg[\int_{t_0}^T \frac{\partial H}{\partial u}(t)\, \alpha\, dt \bigg].$$
Differentiating with respect to $t_0$, we obtain
$$\mathrm{E}\bigg[\alpha\, \frac{\partial H}{\partial u}(s) \bigg] = 0 \quad \text{for a.a. } s.$$
Since this holds for all $s \geq t_0$ and all bounded $\mathcal{E}_{t_0}$-measurable $\alpha$, we conclude that
$$\mathrm{E}\bigg[\frac{\partial H}{\partial u}(t_0) \,\Big|\, \mathcal{E}_{t_0} \bigg] = 0.$$
This shows that assertion (i) implies assertion (ii).
Conversely, since every bounded $\beta \in \mathcal{A}_{\mathcal{E}}$ can be approximated by linear combinations of controls of the form (4.1), we can prove that assertion (ii) implies assertion (i) by reversing the above argument.
5. Existence and uniqueness theorems for time-advanced BSDEs with jumps
We now study time-advanced BSDEs driven both by a Brownian motion $B(t)$ and a compensated Poisson random measure $\tilde{N}(dt, dz)$. We first provide a constructive procedure to compute the solution of time-advanced BSDEs of the form (2.3)-(2.4), typically satisfied by the adjoint processes of the Hamiltonian. Then we turn to more general BSDEs and provide several theorems on the existence and uniqueness of the solutions under different sets of assumptions on the driver and the terminal condition, which require different treatments in the proof.
(5.1)
Consider the following time-advanced BSDE in the unknown $\mathcal{F}_t$-adapted processes $(p(t), q(t), r(t, z))$:
$$\begin{aligned}
dp(t) = {}& \mathrm{E}[F(t, p(t), p(t + \delta)\, \mathbf{1}_{[0, T-\delta]}(t), p_t\, \mathbf{1}_{[0, T-\delta]}(t), q(t), q(t + \delta)\, \mathbf{1}_{[0, T-\delta]}(t), \\
& \qquad q_t\, \mathbf{1}_{[0, T-\delta]}(t), r(t), r(t + \delta)\, \mathbf{1}_{[0, T-\delta]}(t), r_t\, \mathbf{1}_{[0, T-\delta]}(t)) \mid \mathcal{F}_t]\, dt \\
& + q(t)\, dB(t) + \int_{\mathbb{R}_0} r(t, z)\, \tilde{N}(dt, dz), \qquad t \in [0, T],
\end{aligned} \tag{5.2}$$
$$p(T) = G. \tag{5.3}$$
Under condition (5.1), this BSDE has a unique solution $(p(t), q(t), r(t, z))$ such that
$$\mathrm{E}\left[\int_0^T \left( p^2(t) + q^2(t) + \int_{\mathbb{R}_0} r^2(t, z)\, \nu(dz) \right) dt \right] < \infty. \tag{5.4}$$
Moreover, the solution can be found by inductively solving a sequence of BSDEs backwards
as follows.
Step 0. In the interval $[T - \delta, T]$ we let $p(t)$, $q(t)$, and $r(t, z)$ be defined as the solution of the classical BSDE
$$\begin{aligned}
dp(t) &= F(t, p(t), 0, 0, q(t), 0, 0, r(t, z), 0, 0)\, dt + q(t)\, dB(t) + \int_{\mathbb{R}_0} r(t, z)\, \tilde{N}(dt, dz), \qquad t \in [T - \delta, T], \\
p(T) &= G.
\end{aligned}$$
Step $k$, $k \geq 1$. If the values of $(p(t), q(t), r(t, z))$ have been found for $t \in [T - k\delta, T - (k-1)\delta]$ then, for $t \in [T - (k+1)\delta, T - k\delta]$, the values of $p(t + \delta)$, $p_t$, $q(t + \delta)$, $q_t$, $r(t + \delta, z)$, and $r_t$ entering the driver are already known, so on this interval the equation again reduces to a classical BSDE, which can be solved as in step 0. Proceeding by induction, we obtain the solution on all of $[0, T]$.
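In the deterministic special case with no noise ($q \equiv 0$, $r \equiv 0$) and a driver depending only on $p(t)$ and the advanced value $p(t + \delta)$, the interval-by-interval induction above becomes a simple backward integration. A minimal numerical sketch (our own illustration: the explicit Euler scheme, the driver `F`, and all parameter values are assumptions, not from the text):

```python
import numpy as np

def solve_time_advanced(F, G, T, delta, dt=1e-4):
    """Backward induction for the deterministic (q = r = 0) special case of
    dp(t) = F(t, p(t), p(t+delta) 1_{[0,T-delta]}(t)) dt, p(T) = G.
    Step 0 covers [T-delta, T]; each later step reuses the already computed
    advanced values p(t+delta), exactly as in the inductive scheme above."""
    n = int(round(T / dt))
    t = np.linspace(0.0, T, n + 1)
    p = np.empty(n + 1)
    p[n] = G
    shift = int(round(delta / dt))          # grid offset for the advance delta
    for i in range(n - 1, -1, -1):          # explicit backward Euler
        j = i + shift
        p_adv = p[j] if j <= n else 0.0     # p(t+delta) 1_{[0,T-delta]}(t[i])
        p[i] = p[i + 1] - dt * F(t[i], p[i + 1], p_adv)
    return t, p

# Illustrative driver F(t, p, p_adv) = -lam * p_adv (constant coefficients),
# the no-noise analogue of the adjoint equation in the consumption example.
lam, k, T, delta = 0.5, 1.0, 2.0, 0.5
t, p = solve_time_advanced(lambda s, pv, pa: -lam * pa, k, T, delta)
print(p[0] > k)   # the advance term makes p grow as we integrate backwards
```

On $[T - \delta, T]$ the indicator kills the drift, so the scheme reproduces step 0 exactly; earlier intervals only ever read already-computed values, mirroring step $k$.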
Next, consider the time-advanced BSDE (5.5), given by the dynamics (5.2) for $t \in [0, T]$ together with the terminal condition
$$p(t) = G(t), \qquad t \in [T, T + \delta], \tag{5.6}$$
where $G$ is a given continuous $\mathcal{F}_t$-adapted stochastic process. We shall present three theorems with various conditions on $F$ and $G$.
Theorem 5.2. Assume that $\mathrm{E}[\sup_{T \leq t \leq T+\delta} |G(t)|^2] < \infty$ and that condition (5.1) is satisfied. Then the BSDE (5.5)-(5.6) admits a unique solution $(p(t), q(t), r(t, z))$ such that
$$\mathrm{E}\left[\int_0^T \left( p^2(t) + q^2(t) + \int_{\mathbb{R}_0} r^2(t, z)\, \nu(dz) \right) dt \right] < \infty.$$
Proof. Step 1: assume that $F$ is independent of the $p$-variables. Set $q^0(t) \equiv 0$, $r^0(t, z) \equiv 0$ and, for $n \geq 0$, let $(p^{n+1}, q^{n+1}, r^{n+1})$ be the solution of the classical BSDE obtained by freezing the advanced and sliding arguments of $F$ at the $n$th iterates. Applying the Itô formula to $|p^{n+1}(s) - p^n(s)|^2$ and taking expectations yields
$$\begin{aligned}
& \mathrm{E}[|p^{n+1}(t) - p^n(t)|^2] + \mathrm{E}\int_t^T |q^{n+1}(s) - q^n(s)|^2\, ds + \mathrm{E}\int_t^T \int_{\mathbb{R}_0} |r^{n+1}(s, z) - r^n(s, z)|^2\, \nu(dz)\, ds \\
& \qquad \leq 2\, \mathrm{E}\int_t^T |p^{n+1}(s) - p^n(s)|\, \big| \mathrm{E}[F(s, q^n(s), q^n(s + \delta), q_s^n, r^n(s, \cdot), r^n(s + \delta, \cdot), r_s^n(\cdot)) \\
& \qquad\qquad - F(s, q^{n-1}(s), q^{n-1}(s + \delta), q_s^{n-1}, r^{n-1}(s, \cdot), r^{n-1}(s + \delta, \cdot), r_s^{n-1}(\cdot)) \mid \mathcal{F}_s] \big|\, ds.
\end{aligned} \tag{5.8}$$
By the Lipschitz condition (5.1), with $|r|_H^2 := \int_{\mathbb{R}_0} r^2(z)\, \nu(dz)$, the right-hand side of (5.8) is bounded by a constant multiple of
$$\begin{aligned}
\mathrm{E}\int_t^T |p^{n+1}(s) - p^n(s)| \bigg( & |q^n(s) - q^{n-1}(s)| + |q^n(s + \delta) - q^{n-1}(s + \delta)| + \bigg( \int_s^{s+\delta} |q^n(u) - q^{n-1}(u)|^2\, du \bigg)^{1/2} \\
& + |r^n(s, \cdot) - r^{n-1}(s, \cdot)|_H + |r^n(s + \delta, \cdot) - r^{n-1}(s + \delta, \cdot)|_H + \bigg( \int_s^{s+\delta} |r^n(u, \cdot) - r^{n-1}(u, \cdot)|_H^2\, du \bigg)^{1/2} \bigg)\, ds.
\end{aligned} \tag{5.9}$$
Note that
$$\mathrm{E}\int_t^T |q^n(s + \delta) - q^{n-1}(s + \delta)|^2\, ds \leq \mathrm{E}\int_t^T |q^n(s) - q^{n-1}(s)|^2\, ds$$
and, by the Fubini theorem,
$$\mathrm{E}\int_t^T \int_s^{s+\delta} |q^n(u) - q^{n-1}(u)|^2\, du\, ds \leq \delta\, \mathrm{E}\int_t^T |q^n(s) - q^{n-1}(s)|^2\, ds,$$
and similarly for the $r$-differences. Hence, using the inequality $2ab \leq \varepsilon a^2 + \varepsilon^{-1} b^2$ with $\varepsilon$ small, (5.8) and (5.9) give a constant $C$ such that
$$\begin{aligned}
& \mathrm{E}[|p^{n+1}(t) - p^n(t)|^2] + \mathrm{E}\int_t^T |q^{n+1}(s) - q^n(s)|^2\, ds + \mathrm{E}\int_t^T |r^{n+1}(s, \cdot) - r^n(s, \cdot)|_H^2\, ds \\
& \qquad \leq C\, \mathrm{E}\int_t^T |p^{n+1}(s) - p^n(s)|^2\, ds + \frac{1}{2}\, \mathrm{E}\int_t^T |q^n(s) - q^{n-1}(s)|^2\, ds + \frac{1}{2}\, \mathrm{E}\int_t^T |r^n(s, \cdot) - r^{n-1}(s, \cdot)|_H^2\, ds.
\end{aligned} \tag{5.10}$$
Multiplying by the integrating factor $e^{Ct}$ and integrating over $t \in [0, T]$, the $p$-terms can be absorbed, and we obtain
$$\int_0^T dt\, e^{Ct}\, \mathrm{E}\int_t^T \big( |q^{n+1}(s) - q^n(s)|^2 + |r^{n+1}(s, \cdot) - r^n(s, \cdot)|_H^2 \big)\, ds \leq \frac{1}{2} \int_0^T dt\, e^{Ct}\, \mathrm{E}\int_t^T \big( |q^n(s) - q^{n-1}(s)|^2 + |r^n(s, \cdot) - r^{n-1}(s, \cdot)|_H^2 \big)\, ds. \tag{5.11}$$
In particular, iterating (5.11),
$$\int_0^T dt\, e^{Ct}\, \mathrm{E}\int_t^T \big( |q^{n+1}(s) - q^n(s)|^2 + |r^{n+1}(s, \cdot) - r^n(s, \cdot)|_H^2 \big)\, ds \leq 2^{-n} C' \tag{5.12}$$
for some constant $C'$, and then (5.10) yields a bound of the same geometric type for $\mathrm{E}\int_0^T |p^{n+1}(s) - p^n(s)|^2\, ds$.
In view of (5.10), (5.11), and (5.12), we conclude that there exist progressively measurable processes $(p(t), q(t), r(t, z))$ such that
$$\lim_{n \to \infty} \mathrm{E}\int_0^T |p^n(s) - p(s)|^2\, ds = \lim_{n \to \infty} \mathrm{E}\int_0^T |q^n(s) - q(s)|^2\, ds = \lim_{n \to \infty} \mathrm{E}\int_0^T |r^n(s, \cdot) - r(s, \cdot)|_H^2\, ds = 0.$$
Letting $n \to \infty$ in the iteration scheme, we get
$$p(t) = G(T) - \int_t^T \mathrm{E}[F(s, q(s), q(s + \delta), q_s, r(s, \cdot), r(s + \delta, \cdot), r_s(\cdot)) \mid \mathcal{F}_s]\, ds - \int_t^T q(s)\, dB_s - \int_t^T \int_{\mathbb{R}_0} r(s, z)\, \tilde{N}(ds, dz);$$
i.e. $(p(t), q(t), r(t, z))$ is a solution. Uniqueness follows easily from the Itô formula, a calculation similar to that used to deduce (5.8) and (5.9), and Gronwall's lemma.
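For the reader's convenience, the backward form of Gronwall's lemma used here (a standard statement, recorded as our own aside) is: if $\varphi : [0, T] \to [0, \infty)$ is bounded and measurable and
$$\varphi(t) \leq a + b \int_t^T \varphi(s)\, ds \quad \text{for all } t \in [0, T]$$
with constants $a \geq 0$, $b \geq 0$, then $\varphi(t) \leq a\, e^{b(T - t)}$ for all $t \in [0, T]$; in particular, $a = 0$ forces $\varphi \equiv 0$, which is what yields uniqueness.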
Step 2: general case. Let $p^0(t) = 0$. For $n \geq 1$, define $(p^n(t), q^n(t), r^n(t, z))$ to be the unique solution to the following BSDE:
$$\begin{aligned}
dp^n(t) = {}& \mathrm{E}[F(t, p^{n-1}(t), p^{n-1}(t + \delta), p_t^{n-1}, q^n(t), q^n(t + \delta), q_t^n, r^n(t, \cdot), r^n(t + \delta, \cdot), r_t^n(\cdot)) \mid \mathcal{F}_t]\, dt \\
& + q^n(t)\, dB_t + \int_{\mathbb{R}_0} r^n(t, z)\, \tilde{N}(dt, dz), \qquad t \in [0, T], \\
p^n(t) = {}& G(t), \qquad t \in [T, T + \delta].
\end{aligned}$$
The existence of $(p^n(t), q^n(t), r^n(t, z))$ is proved in step 1. By the same arguments leading to (5.10), we deduce that
$$\begin{aligned}
& \mathrm{E}[|p^{n+1}(t) - p^n(t)|^2] + \frac{1}{2}\, \mathrm{E}\int_t^T \int_{\mathbb{R}_0} |r^{n+1}(s, z) - r^n(s, z)|^2\, \nu(dz)\, ds + \frac{1}{2}\, \mathrm{E}\int_t^T |q^{n+1}(s) - q^n(s)|^2\, ds \\
& \qquad \leq C\, \mathrm{E}\int_t^T |p^{n+1}(s) - p^n(s)|^2\, ds + \frac{1}{2}\, \mathrm{E}\int_t^T |p^n(s) - p^{n-1}(s)|^2\, ds.
\end{aligned}$$
This implies that
$$\frac{d}{dt}\left( e^{Ct}\, \mathrm{E}\int_t^T |p^{n+1}(s) - p^n(s)|^2\, ds \right) \geq -\frac{1}{2}\, e^{Ct}\, \mathrm{E}\int_t^T |p^n(s) - p^{n-1}(s)|^2\, ds. \tag{5.13}$$
Integrating (5.13) from $u$ to $T$, we obtain
$$\mathrm{E}\int_u^T |p^{n+1}(s) - p^n(s)|^2\, ds \leq \frac{1}{2} \int_u^T dt\, e^{C(t-u)}\, \mathrm{E}\int_t^T |p^n(s) - p^{n-1}(s)|^2\, ds \leq \frac{e^{CT}}{2} \int_u^T dt\, \mathrm{E}\int_t^T |p^n(s) - p^{n-1}(s)|^2\, ds.$$
Iterating this inequality shows that $\mathrm{E}\int_0^T |p^{n+1}(s) - p^n(s)|^2\, ds$ decays geometrically, so $(p^n, q^n, r^n)$ is a Cauchy sequence whose limit is the desired unique solution of (5.5)-(5.6). This completes the proof of Theorem 5.2.
Theorem 5.3. Assume that $\mathrm{E}[\sup_{T \leq t \leq T+\delta} |G(t)|^2] < \infty$ and that $F$ satisfies the Lipschitz condition
$$\begin{aligned}
& |F(t, p_1, p_2, p, q_1, q_2, q, r_1, r_2, r) - F(t, \bar{p}_1, \bar{p}_2, \bar{p}, \bar{q}_1, \bar{q}_2, \bar{q}, \bar{r}_1, \bar{r}_2, \bar{r})| \\
& \qquad \leq C \Big( |p_1 - \bar{p}_1| + |p_2 - \bar{p}_2| + \sup_{0 \leq s \leq \delta} |p(s) - \bar{p}(s)| + |q_1 - \bar{q}_1| + |q_2 - \bar{q}_2| + |q - \bar{q}|_{V_1} + |r_1 - \bar{r}_1|_H + |r_2 - \bar{r}_2|_H + |r - \bar{r}|_{V_2} \Big).
\end{aligned} \tag{5.14}$$
Then the BSDE (5.5) admits a unique solution $(p(t), q(t), r(t, z))$ such that
$$\mathrm{E}\left[\sup_{0 \leq t \leq T} |p(t)|^2 + \int_0^T \left( q^2(t) + \int_{\mathbb{R}_0} r^2(t, z)\, \nu(dz) \right) dt \right] < \infty.$$
Proof. Step 1: assume that F is independent of p1 , p2 , and p. In this case condition (5.14)
reduces to assumption (5.1). By step 1 in the proof of Theorem 5.2, there is a unique solution
(p(t), q(t), r(t, z)) to (5.5).
Step 2: general case. Let $p^0(t) = 0$. For $n \geq 1$, define $(p^n(t), q^n(t), r^n(t, z))$ to be the unique solution to the following BSDE:
$$\begin{aligned}
dp^n(t) = {}& \mathrm{E}[F(t, p^{n-1}(t), p^{n-1}(t + \delta), p_t^{n-1}, q^n(t), q^n(t + \delta), q_t^n, r^n(t, \cdot), r^n(t + \delta, \cdot), r_t^n(\cdot)) \mid \mathcal{F}_t]\, dt \\
& + q^n(t)\, dB_t + \int_{\mathbb{R}_0} r^n(t, z)\, \tilde{N}(dt, dz), \qquad t \in [0, T], \\
p^n(t) = {}& G(t), \qquad t \in [T, T + \delta].
\end{aligned}$$
By step 1, $(p^n(t), q^n(t), r^n(t, z))$ exists. We are going to show that $(p^n(t), q^n(t), r^n(t, z))$ forms a Cauchy sequence. Using the Itô formula, we have
$$\begin{aligned}
& |p^{n+1}(t) - p^n(t)|^2 + \int_t^T |q^{n+1}(s) - q^n(s)|^2\, ds + \int_t^T \int_{\mathbb{R}_0} |r^{n+1}(s, z) - r^n(s, z)|^2\, \nu(dz)\, ds \\
& \qquad = -2 \int_t^T (p^{n+1}(s) - p^n(s))\, \mathrm{E}[F(s, p^n(s), p^n(s + \delta), p_s^n, \ldots) - F(s, p^{n-1}(s), p^{n-1}(s + \delta), p_s^{n-1}, \ldots) \mid \mathcal{F}_s]\, ds + (\text{local martingale terms}).
\end{aligned}$$
Take the conditional expectation with respect to $\mathcal{F}_t$, take the supremum over the interval $[u, T]$, and use condition (5.14) to obtain
$$\begin{aligned}
& \sup_{u \leq t \leq T} |p^{n+1}(t) - p^n(t)|^2 + \sup_{u \leq t \leq T} \mathrm{E}\left[\int_t^T |q^{n+1}(s) - q^n(s)|^2\, ds \,\Big|\, \mathcal{F}_t \right] + \sup_{u \leq t \leq T} \mathrm{E}\left[\int_t^T \int_{\mathbb{R}_0} |r^{n+1}(s, z) - r^n(s, z)|^2\, \nu(dz)\, ds \,\Big|\, \mathcal{F}_t \right] \\
& \qquad \leq C \sup_{u \leq t \leq T} \mathrm{E}\left[\int_u^T |p^{n+1}(s) - p^n(s)|^2\, ds \,\Big|\, \mathcal{F}_t \right] + C_1 \sup_{u \leq t \leq T} \mathrm{E}\left[\int_u^T |p^n(s) - p^{n-1}(s)|^2\, ds \,\Big|\, \mathcal{F}_t \right] \\
& \qquad\quad + C_2 \sup_{u \leq t \leq T} \mathrm{E}\left[\int_t^T \mathrm{E}\left[\sup_{s \leq v \leq T} |p^n(v) - p^{n-1}(v)|^2 \,\Big|\, \mathcal{F}_s \right] ds \,\Big|\, \mathcal{F}_t \right] \\
& \qquad\quad + C_3 \varepsilon \sup_{u \leq t \leq T} \mathrm{E}\left[\int_t^T |q^{n+1}(s) - q^n(s)|^2\, ds \,\Big|\, \mathcal{F}_t \right] + C_4 \varepsilon \sup_{u \leq t \leq T} \mathrm{E}\left[\int_t^T \int_{\mathbb{R}_0} |r^{n+1}(s, z) - r^n(s, z)|^2\, \nu(dz)\, ds \,\Big|\, \mathcal{F}_t \right]. 
\end{aligned} \tag{5.15}$$
Choosing $\varepsilon > 0$ such that $C_3 \varepsilon < 1$ and $C_4 \varepsilon < 1$, it follows from (5.15) that
$$\sup_{u \leq t \leq T} |p^{n+1}(t) - p^n(t)|^2 \leq C \sup_{u \leq t \leq T} \mathrm{E}\left[\int_u^T |p^{n+1}(s) - p^n(s)|^2\, ds \,\Big|\, \mathcal{F}_t \right] + (C_1 + C_2) \sup_{u \leq t \leq T} \mathrm{E}\left[\int_u^T \mathrm{E}\left[\sup_{s \leq v \leq T} |p^n(v) - p^{n-1}(v)|^2 \,\Big|\, \mathcal{F}_s \right] ds \,\Big|\, \mathcal{F}_t \right]. \tag{5.16}$$
Note that the two conditional expectations appearing on the right-hand side of (5.16) are right-continuous martingales on $[0, T]$ with terminal random variables $\int_u^T |p^{n+1}(s) - p^n(s)|^2\, ds$ and $\int_u^T \mathrm{E}[\sup_{s \leq v \leq T} |p^n(v) - p^{n-1}(v)|^2 \mid \mathcal{F}_s]\, ds$, respectively. Thus, by the Doob maximal inequality, for $\lambda > 1$ the expectations of their suprema over $[u, T]$ are bounded by constants $c_{T, \lambda}$ times
$$\mathrm{E}\left[\int_u^T \sup_{s \leq v \leq T} |p^{n+1}(v) - p^n(v)|^2\, ds \right] \quad \text{and} \quad \mathrm{E}\left[\int_u^T \sup_{s \leq v \leq T} |p^n(v) - p^{n-1}(v)|^2\, ds \right], \tag{5.17-5.18}$$
respectively. Taking expectations in (5.16) therefore gives constants $C_{1, \lambda}$ and $C_{2, \lambda}$ such that, setting
$$g_n(u) := \mathrm{E}\left[\int_u^T \sup_{s \leq v \leq T} |p^n(v) - p^{n-1}(v)|^2\, ds \right], \tag{5.19}$$
we have $-g_{n+1}'(u) \leq C_{1, \lambda}\, g_{n+1}(u) + C_{2, \lambda}\, g_n(u)$, i.e.
$$\frac{d}{du}\big( e^{C_{1, \lambda} u}\, g_{n+1}(u) \big) \geq -e^{C_{1, \lambda} u}\, C_{2, \lambda}\, g_n(u). \tag{5.20}$$
Integrating (5.20) from $u$ to $T$ and using $g_{n+1}(T) = 0$, we obtain
$$g_{n+1}(u) \leq C_{2, \lambda}\, e^{C_{1, \lambda} T} \int_u^T g_n(s)\, ds,$$
and iterating this inequality shows that $g_n(0)$ decays geometrically. As in the proof of Theorem 5.2, it follows that $(p^n, q^n, r^n)$ is a Cauchy sequence whose limit is the unique solution, with the stated integrability. This completes the proof.
Theorem 5.4. Assume that $\mathrm{E}[\sup_{T \leq t \leq T+\delta} |G(t)|^2] < \infty$ and that $F$ satisfies (5.14), i.e.
$$|F(t, p_1, p_2, p) - F(t, \bar{p}_1, \bar{p}_2, \bar{p})| \leq C \Big( |p_1 - \bar{p}_1| + |p_2 - \bar{p}_2| + \sup_{0 \leq s \leq \delta} |p(s) - \bar{p}(s)| \Big). \tag{5.21}$$
Then the BSDE (5.5) admits a unique solution such that (5.4) holds.

Proof. Let $p^0(t) = 0$. For $n \geq 1$, define $(p^n(t), q^n(t), r^n(t, z))$ to be the unique solution to the following BSDE:
$$\begin{aligned}
dp^n(t) = {}& \mathrm{E}[F(t, p^{n-1}(t), p^{n-1}(t + \delta), p_t^{n-1}) \mid \mathcal{F}_t]\, dt + q^n(t)\, dB_t + \int_{\mathbb{R}_0} r^n(t, z)\, \tilde{N}(dt, dz), \qquad t \in [0, T], \\
p^n(t) = {}& G(t), \qquad t \in [T, T + \delta].
\end{aligned}$$
We will show that $(p^n(t), q^n(t), r^n(t, z))$ forms a Cauchy sequence. Subtracting $p^n$ from $p^{n+1}$ and taking the conditional expectation with respect to $\mathcal{F}_t$, we obtain
$$p^{n+1}(t) - p^n(t) = \mathrm{E}\left[\int_t^T \big( \mathrm{E}[F(s, p^n(s), p^n(s + \delta), p_s^n) \mid \mathcal{F}_s] - \mathrm{E}[F(s, p^{n-1}(s), p^{n-1}(s + \delta), p_s^{n-1}) \mid \mathcal{F}_s] \big)\, ds \,\Big|\, \mathcal{F}_t \right].$$
Take the supremum over the interval $[u, T]$ and use assumption (5.21) to obtain
$$\sup_{u \leq t \leq T} |p^{n+1}(t) - p^n(t)|^2 \leq C \sup_{u \leq t \leq T} \mathrm{E}\left[\int_u^T |p^n(s) - p^{n-1}(s)|^2\, ds \,\Big|\, \mathcal{F}_t \right] + C \sup_{u \leq t \leq T} \mathrm{E}\left[\int_u^T \mathrm{E}\left[\sup_{s \leq v \leq T} |p^n(v) - p^{n-1}(v)|^2 \,\Big|\, \mathcal{F}_s \right] ds \,\Big|\, \mathcal{F}_t \right]. \tag{5.22}$$
As in the proof of Theorem 5.3, the Doob maximal inequality bounds the expectations of both suprema by $c_{T, \lambda}\, \mathrm{E}[\int_u^T \sup_{s \leq v \leq T} |p^n(v) - p^{n-1}(v)|^2\, ds]$, and a Gronwall-type iteration applies. It follows easily from here that $(p^n(t), q^n(t), r^n(t, z))$ converges to some limit $(p(t), q(t), r(t, z))$, which is the unique solution of (5.5).
6. An optimal consumption problem with delay

Consider a cash flow $X(t)$ with delayed dynamics
$$dX(t) = (\lambda(t)\, X(t - \delta) - c(t))\, dt + \sigma(t)\, X(t - \delta)\, dB(t) + X(t - \delta) \int_{\mathbb{R}_0} \theta(t, z)\, \tilde{N}(dt, dz), \qquad t \in [0, T], \tag{6.1}$$
$$X(t) = x_0(t), \qquad t \in [-\delta, 0]. \tag{6.2}$$
We assume that $c \mapsto U_1(t, c, \omega)$ is $C^1$, with
$$\frac{\partial U_1}{\partial c}(t, c, \omega) > 0, \qquad c \mapsto \frac{\partial U_1}{\partial c}(t, c, \omega) \text{ strictly decreasing}, \qquad \lim_{c \to \infty} \frac{\partial U_1}{\partial c}(t, c, \omega) = 0 \quad \text{for all } t \in [0, T], \omega.$$
Set $v_0(t, \omega) = \partial U_1(t, 0, \omega)/\partial c$, and define
$$I(t, v, \omega) = \begin{cases} 0 & \text{if } v \geq v_0(t, \omega), \\ \left( \dfrac{\partial U_1}{\partial c}(t, \cdot, \omega) \right)^{-1}(v) & \text{if } 0 \leq v < v_0(t, \omega). \end{cases}$$
Suppose that we want to find the consumption rate $\hat{c}(t)$ such that
$$J(\hat{c}) = \sup\{J(c) : c \in \mathcal{A}\}, \quad \text{where} \quad J(c) = \mathrm{E}\left[\int_0^T U_1(t, c(t), \omega)\, dt + g(X(T)) \right]. \tag{6.3}$$
Here $g : \mathbb{R} \to \mathbb{R}$ is a given concave $C^1$ function and $\mathcal{A}$ is the family of all càdlàg, $\mathcal{F}_t$-adapted processes $c(t) \geq 0$ such that $\mathrm{E}[|g(X(T))|] < \infty$.
In this case the Hamiltonian given by (2.1) takes the form
$$H(t, x, y, a, c, p, q, r(\cdot), \omega) = U_1(t, c, \omega) + (\lambda(t)\, y - c)\, p + y\, \sigma(t)\, q + y \int_{\mathbb{R}_0} \theta(t, z)\, r(z)\, \nu(dz),$$
with the consumption rate $c$ playing the role of the control.
Maximizing $H$ with respect to $c$ gives the following first-order condition for an optimal $\hat{c}(t)$:
$$\frac{\partial U_1}{\partial c}(t, \hat{c}(t), \omega) = p(t). \tag{6.4}$$
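As a concrete illustration (this particular utility is our own choice, not one made in the text), take $U_1(t, c, \omega) = e^{-\beta t} \ln(1 + c)$ for some constant $\beta \geq 0$. Then
$$\frac{\partial U_1}{\partial c}(t, c) = \frac{e^{-\beta t}}{1 + c}, \qquad v_0(t) = e^{-\beta t}, \qquad I(t, v) = \begin{cases} 0 & \text{if } v \geq e^{-\beta t}, \\ e^{-\beta t}/v - 1 & \text{if } 0 < v < e^{-\beta t}, \end{cases}$$
so (6.4) gives the explicit consumption rule $\hat{c}(t) = e^{-\beta t}/p(t) - 1$ whenever $0 < p(t) < e^{-\beta t}$, and $\hat{c}(t) = 0$ otherwise.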
The time-advanced BSDE (2.3)-(2.4) for $p(t)$, $q(t)$, and $r(t, z)$ is
$$\begin{aligned}
dp(t) = {}& -\mathrm{E}\left[\left( \lambda(t)\, p(t + \delta) + \sigma(t)\, q(t + \delta) + \int_{\mathbb{R}_0} \theta(t, z)\, r(t + \delta, z)\, \nu(dz) \right) \mathbf{1}_{[0, T-\delta]}(t) \,\Big|\, \mathcal{F}_t \right] dt \\
& + q(t)\, dB(t) + \int_{\mathbb{R}_0} r(t, z)\, \tilde{N}(dt, dz), \qquad t \in [0, T],
\end{aligned} \tag{6.5}$$
$$p(T) = g'(X(T)). \tag{6.6}$$
This equation can be solved inductively backwards, as follows.

Step 1. If $t \in [T - \delta, T]$, the indicator vanishes and (6.5) reduces to $dp(t) = q(t)\, dB(t) + \int_{\mathbb{R}_0} r(t, z)\, \tilde{N}(dt, dz)$, so
$$p(t) = \mathrm{E}[g'(X(T)) \mid \mathcal{F}_t], \qquad t \in [T - \delta, T],$$
with corresponding $q(t)$ and $r(t, z)$ given by the martingale representation theorem (found, e.g., by using the Clark-Ocone theorem).
Step 2. If $t \in [T - 2\delta, T - \delta]$ and $T - 2\delta > 0$, we obtain by step 1 the BSDE
$$dp(t) = -\mathrm{E}\left[\left( \lambda(t)\, p(t + \delta) + \sigma(t)\, q(t + \delta) + \int_{\mathbb{R}_0} \theta(t, z)\, r(t + \delta, z)\, \nu(dz) \right) \Big|\, \mathcal{F}_t \right] dt + q(t)\, dB(t) + \int_{\mathbb{R}_0} r(t, z)\, \tilde{N}(dt, dz), \qquad t \in [T - 2\delta, T - \delta],$$
with $p(T - \delta)$ known from step 1. Note that $p(t + \delta)$, $q(t + \delta)$, and $r(t + \delta, \cdot)$ are also known from step 1. Therefore, this is a simple BSDE which can be solved for $p(t)$, $q(t)$, and $r(t, \cdot)$, $t \in [T - 2\delta, T - \delta]$. We continue like this by induction up to and including step $j$, where $j$ is such that $T - j\delta \leq 0 < T - (j - 1)\delta$. With this procedure we end up with a solution $p(t) = p^{X(T)}(t)$ of (6.5)-(6.6) which depends on the (optimal) terminal value $X(T)$. If
$$0 \leq p(t) \leq v_0(t, \omega) \quad \text{for all } t \in [0, T], \tag{6.7}$$
then, by (6.4), the optimal consumption rate is
$$\hat{c}(t) = I(t, p(t), \omega), \qquad t \in [0, T]. \tag{6.8}$$
Proposition 6.1. Let $p(t)$, $q(t)$, and $r(t, z)$ be the solution of the BSDE (6.5)-(6.6), as described above. Suppose that (6.7) holds. Then the optimal consumption rate $\hat{c}(t)$ and the corresponding optimal terminal wealth $X(T)$ are given implicitly by the coupled equations (6.8) and (6.1)-(6.2).
To obtain a more explicit solution presentation, let us now assume that $\lambda(t)$ is deterministic and that $g(x) = kx$, $k > 0$. Since $g'(X(T)) = k$ is deterministic, we can choose $q = r = 0$ in (6.5)-(6.6), and the BSDE becomes
$$dp(t) = -\lambda(t)\, p(t + \delta)\, \mathbf{1}_{[0, T-\delta]}(t)\, dt, \qquad t < T,$$
with
$$p(t) = k \quad \text{for } t \in [T - \delta, T].$$
Define $h(t) := p(T - t)$, $t \in [-\delta, T]$. Then
$$dh(t) = -dp(T - t) = \lambda(T - t)\, p(T - t + \delta)\, dt = \lambda(T - t)\, h(t - \delta)\, dt$$
for $t \in [0, T]$, and
$$h(t) = p(T - t) = k \quad \text{for } t \in [-\delta, 0]. \tag{6.9}$$
Solving this equation inductively on intervals of length $\delta$ gives
$$h(t) = h(j\delta) + \int_{j\delta}^{t} \lambda(T - s)\, h(s - \delta)\, ds \quad \text{for } t \in [j\delta, (j + 1)\delta]. \tag{6.10}$$
We have thus proved the following result.

Proposition 6.2. Assume that $\lambda(t)$ is deterministic and that $g(x) = kx$, $k > 0$. Then the optimal consumption rate $\hat{c}(t)$ for the problem (6.1)-(6.2), (6.3) is given by
$$\hat{c}(t) = I(t, h(T - t), \omega), \tag{6.11}$$
provided that (6.7) holds.
Thus, the optimal consumption rate increases if the delay increases. The explanation for this
may be that the delay postpones the negative effect on the growth of the cash ow caused by
the consumption.
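When $\lambda(t) \equiv \lambda$ is constant, the equation $dh(t) = \lambda\, h(t - \delta)\, dt$ with $h \equiv k$ on $[-\delta, 0]$ has the classical delayed-exponential series solution $h(t) = k \sum_{n \geq 0} \lambda^n \max(t - (n-1)\delta, 0)^n / n!$, obtained by the method of steps, which can be used to sanity-check the inductive construction. A minimal numerical sketch (our own illustration; the parameter values are arbitrary):

```python
import math
import numpy as np

def h_numeric(lam, k, delta, T, dt=1e-4):
    """Forward Euler for h'(t) = lam * h(t - delta), h(t) = k on [-delta, 0]."""
    n = int(round(T / dt))
    pad = int(round(delta / dt))          # history points covering [-delta, 0)
    h = np.full(n + pad + 1, k, dtype=float)
    for i in range(pad, n + pad):         # index i corresponds to t = (i - pad) * dt
        h[i + 1] = h[i] + dt * lam * h[i - pad]
    return h[pad:]                        # values of h on [0, T]

def h_series(lam, k, delta, t, terms=50):
    """Delayed-exponential series solution of the same equation."""
    return k * sum(lam**n * max(t - (n - 1) * delta, 0.0)**n / math.factorial(n)
                   for n in range(terms))

lam, k, delta, T = 0.5, 1.0, 0.5, 2.0
h = h_numeric(lam, k, delta, T)
t_grid = np.linspace(0.0, T, len(h))
err = max(abs(h[i] - h_series(lam, k, delta, t_grid[i])) for i in range(0, len(h), 500))
print(err < 1e-3)
```

Here $h(T - t)$ is the adjoint value $p(t)$ entering the consumption rule (6.11); increasing $\delta$ shrinks $h$, consistent with the comparative statics discussed above.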
Acknowledgements
We thank Joscha Diehl, Martin Schweizer, and an anonymous referee for helpful comments
and suggestions.
The research leading to these results has received funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013), grant agreement number 228087.
References
[1] Chen, L. and Wu, Z. (2010). Maximum principle for the stochastic optimal control problem with delay and application. Automatica 46, 1074-1080.
[2] Chojnowska-Michalik, A. (1978). Representation theorem for general stochastic delay equations. Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys. 26, 635-642.
[3] David, D. (2008). Optimal control of stochastic delayed systems with jumps. Preprint.
[4] El-Karoui, N. and Hamadène, S. (2003). BSDEs and risk-sensitive control, zero-sum and nonzero-sum game problems of stochastic functional differential equations. Stoch. Process. Appl. 107, 145-169.
[5] Elsanousi, I., Øksendal, B. and Sulem, A. (2000). Some solvable stochastic control problems with delay. Stoch. Stoch. Reports 71, 69-89.
[6] Federico, S. (2009). A stochastic control problem with delay arising in a pension fund model. To appear in Finance Stoch.
[7] Gozzi, F. and Marinelli, C. (2004). Stochastic optimal control of delay equations arising in advertising models. In Stochastic Partial Differential Equations and Applications VII (Lecture Notes Pure Appl. Math. 245), Chapman and Hall/CRC, Boca Raton, FL, pp. 133-148.
[8] Gozzi, F., Marinelli, C. and Savin, S. (2009). On controlled linear diffusions with delay in a model of optimal advertising under uncertainty with memory effects. J. Optimization Theory Appl. 142, 291-321.
[9] Kolmanovskii, V. B. and Shaikhet, L. E. (1996). Control of Systems with Aftereffect. American Mathematical Society, Providence, RI.
[10] Larssen, B. (2002). Dynamic programming in stochastic control of systems with delay. Stoch. Stoch. Reports 74, 651-673.
[11] Larssen, B. and Risebro, N. H. (2003). When are HJB-equations in stochastic control of delay systems finite dimensional? Stoch. Anal. Appl. 21, 643-671.
[12] Øksendal, B. and Sulem, A. (2001). A maximum principle for optimal control of stochastic systems with delay with applications to finance. In Optimal Control and Partial Differential Equations, eds J. L. Menaldi, E. Rofman and A. Sulem, IOS Press, Amsterdam, pp. 64-79.
[13] Øksendal, B. and Sulem, A. (2007). Applied Stochastic Control of Jump Diffusions, 2nd edn. Springer, Berlin.
[14] Øksendal, B. and Zhang, T. (2010). Optimal control with partial information for stochastic Volterra equations. Internat. J. Stoch. Anal. 2010, 25 pp.
[15] Pardoux, É. and Peng, S. G. (1990). Adapted solution of a backward stochastic differential equation. Systems Control Lett. 14, 55-61.
[16] Peng, S. and Yang, Z. (2009). Anticipated backward stochastic differential equations. Ann. Prob. 37, 877-902.
[17] Peszat, S. and Zabczyk, J. (2008). Stochastic Partial Differential Equations with Lévy Noise (Encyclopedia Math. Appl. 113). Cambridge University Press.
[18] Situ, R. (1997). On solutions of backward stochastic differential equations with jumps and applications. Stoch. Process. Appl. 66, 209-236.