
ELSEVIER Insurance: Mathematics and Economics 18 (1996) 173-181


Aspects of prospective mean values in risk theory


Christian M. Møller 1
Laboratory of Actuarial Mathematics, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark.
Received September 1995; revised February 1996

Abstract

The present paper deals with conditional mean values for analysing prospective events in risk theory, mainly related to reserve
evaluation. In some (Markov) cases, for instance the classical life insurance set-up, Kolmogorov's backward differential equations
suffice as a constructive tool, together with basic martingale relations. However, in many important (Markov) cases we need more
refined martingale techniques. We shall mainly focus on cases with a random time horizon defined as an exit time. The martingale results
are carried out in a marked point process set-up, by use of the important concept of an intensity measure.

Keywords: Thiele's differential equation; Compound distribution; Martingale; Optional sampling; Marked point process; Exit time

1. Introduction

Applications of stochastic calculus in life and non-life insurance have until now not been a major issue. It has been lightly touched upon in life insurance, where the classical idea is to model the policy in accordance with a Markov (jump) process with finite state space. Using the nice structure of Kolmogorov's backward differential equations, one can establish the so-called Thiele's differential equations, which give a tool for identifying the prospective reserves, see Hoem (1969). However, he did not use any kind of martingale relations.
A generalization of the classical life insurance model was studied by Møller (1993) in a semi-Markov case where payment functions and transition intensities were allowed to depend on the duration in the visiting state. A simple martingale argument seemed sufficient to obtain a result. We shall develop the ideas in Møller (1993, 1995) further to see how conditional expectations can be obtained and evaluated under a random time horizon, which in particular will be defined as an exit time from some measurable set. We give a practical example.
Basic to the analysis is that our stochastic phenomena occur at random times (points), $T_1 < T_2 < \cdots$, and are represented by corresponding marks $Z_1, Z_2, \ldots$. The sequence $(T_n, Z_n)_{n \ge 1}$ is called a marked point process, see below, and in this manner our formulation is applicable to both life and non-life insurance problems, and many other situations of interest in risk theory.
Davis (1993), who introduced the concept of PD (piecewise-deterministic) Markov processes, has also shown interest in conditional means, but his motivation is different (extended generator, strong Markov property, cemetery state).

1 E-mail: chrmax@math.ku.dk

0167-6687/96/$15.00 © 1996 Elsevier Science B.V. All rights reserved


PII: S0167-6687(96)00003-5

In Section 2 we outline the concept of a marked point process and the associated martingale results as stated in Brémaud (1981).
In Section 3, we stress the martingale approach and the techniques to obtain prospective mean values from a system of differential equations. First we pay attention to the semi-Markov model in Møller (1993). We shall present a different proof based on Doob's optional sampling theorem. Then we mention the classical Markov jump case under a fixed period of time. Finally, we shall indicate how a compound distribution function can be obtained, which is also analysed in Møller (1996) for fixed finite time horizons. Throughout we assume that the force of interest is fixed over time. However, generalizations to stochastic interest are indeed feasible.

2. Some elements of point process theory

Assume there is given a filtered probability space $(\Omega, \mathcal F, \mathcal F_t, P)$ satisfying the usual conditions, that is, the space is complete and the filtration $(\mathcal F_t)_{t \ge 0}$ is right-continuous. In the sequel all random variables are assumed to be defined on $(\Omega, \mathcal F, P)$.
Basic to the study are the concept of marked point processes and the associated martingale theory. A marked point process is a sequence $(T_n, Z_n)_{n \ge 1}$ of stochastic pairs, where $T_1, T_2, \ldots$ are non-negative and represent times of occurrence of some phenomena represented by the stochastic elements $Z_1, Z_2, \ldots$, called the marks, which are assumed to take values in some measurable space $\mathcal Z$ endowed with a $\sigma$-algebra $\mathcal E$. This
model framework can be convenient for the analysis of insurance problems. In non-life insurance, the
points typically represent times of occurrence of claims, and the marks represent the individual claim
amounts. In life insurance the points could represent times of transition between states of a process
governing the policy, and the state entered or the pair of states involved in the transition could represent the
mark.
Let $\mathbb R$ and $\mathbb R_+$ denote the real line and the non-negative half-line endowed with their usual Borel $\sigma$-algebras $\mathcal B$ and $\mathcal B_+$, respectively. Let $I(F)$ denote the indicator of a set $F$ in $\mathcal F$. Introduce for each $A \in \mathcal E$ the counting measure

$$N(t,A) = \sum_{i=1}^{\infty} I(T_i \le t,\ Z_i \in A), \qquad (2.1)$$

which counts the number of jumps in the time interval $(0,t]$ with marks taking values in $A$. In particular we write $N(t) = N(t,\mathcal Z)$. The counting processes lead to the natural filtration

$$\mathcal F_t^N = \sigma(N(s,A),\ s \le t,\ A \in \mathcal E).$$

Another important issue is the existence of an intensity process. Assume that $N(t,A)$ admits an $\mathcal F_t$-intensity process $\nu(t,A)$ ($\nu(t,A) \in \mathcal F_t$), assumed to be bounded over finite intervals, informally defined as

$$\nu(t,A)\,dt = E\big(N(dt,A) \mid \mathcal F_{t-}\big), \qquad (2.2)$$

where $\mathcal F_{t-} = \vee_{s<t}\mathcal F_s$ is the information prior to time $t$. We abbreviate $\nu(t) = \nu(t,\mathcal Z)$, which is the intensity of $N(t)$. We can also write the intensity in the form

$$\nu(t,A) = \nu(t)\,G(t,A), \qquad G(t,A) = \int_{z\in A} G(t,dz), \qquad (2.3)$$

where $G(t,A)$ is a probability and is interpreted as the conditional probability, given all information prior to time $t$ and that a jump occurred at time $t$, that the associated mark will belong to $A$. An important result (e.g. Brémaud, 1981, pp. 27, 235) states that the process

$$M_t = \int_{(0,t]}\int_{\mathcal Z} H(s,z)\,\big(N(ds,dz) - \nu(s,dz)\,ds\big),$$

where $H$ is some $\mathcal F_t$-predictable process (indexed by $\mathcal Z$), is a zero-mean $\mathcal F_t$-martingale, that is,

$$E\Big[\int_{(t,v]}\int_{\mathcal Z} H(s,z)\,N(ds,dz)\,\Big|\,\mathcal F_t\Big] = E\Big[\int_{(t,v]}\int_{\mathcal Z} H(s,z)\,\nu(s,dz)\,ds\,\Big|\,\mathcal F_t\Big], \qquad (2.4)$$

for any $t \le v$, whenever

$$E\int_0^t\int_{\mathcal Z} |H(s,z)|\,\nu(s,dz)\,ds < \infty$$

for any $t > 0$. In the sequel it should be sufficient to know that, in particular, any process with left-continuous or deterministic paths (indexed by $\mathcal Z$) is predictable.
In the sequel we will also write $\int_t$ and $\int_A$ instead of $\int_{(t,\cdot]}$ and $\int_{z\in A}$, respectively.
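
As a purely illustrative aside (not part of the original exposition), the intensity decomposition (2.3) lends itself to simulation: candidate points are drawn from a Poisson process with a constant rate $\bar\nu$ dominating $\nu(t)$, each candidate is accepted with probability $\nu(t)/\bar\nu$ (thinning), and an accepted point at time $t$ receives a mark drawn from $G(t,\cdot)$. The following Python sketch assumes a bounded intensity depending on the past only through the previous points; the functions `nu` and `sample_mark` and all numerical values are hypothetical.

```python
import random

random.seed(1)

def simulate_mpp(nu, nu_bar, sample_mark, horizon):
    """Simulate a marked point process (T_n, Z_n) on (0, horizon] by thinning.

    nu(t, history)          -- total intensity nu(t), given the past points
    nu_bar                  -- constant upper bound for nu on (0, horizon]
    sample_mark(t, history) -- draws a mark Z from G(t, .), given the past points
    """
    history, t = [], 0.0
    while True:
        t += random.expovariate(nu_bar)                  # candidate point at rate nu_bar
        if t > horizon:
            return history
        if random.random() <= nu(t, history) / nu_bar:   # thinning (acceptance) step
            history.append((t, sample_mark(t, history)))

# Hypothetical example: claim intensity increasing in the number of past claims
# (bounded by nu_bar), with exponentially distributed claim amounts as marks.
points = simulate_mpp(
    nu=lambda t, h: min(2.0 + 0.5 * len(h), 10.0),
    nu_bar=10.0,
    sample_mark=lambda t, h: random.expovariate(1.0 / 100.0),
    horizon=5.0,
)
print(points[:3])
```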

3. The martingale approach

Let $X = (X_t)_{t \ge 0}$ be a jump process taking values in the measurable space $(\mathcal Z, \mathcal E)$. This leads to the associated marked point process $(T_n, Z_n)_{n \ge 1}$, where $T_1 < T_2 < \cdots$ denote the jump times of $X$, and $Z_1, Z_2, \ldots$ are the corresponding values of $X$ at the jumps, $Z_n = X_{T_n}$. Define the càdlàg (right-continuous with left-hand limits) process

$$U_t = \sum_{n \ge 0} (t - T_n)\, I(T_n \le t < T_{n+1}), \qquad (3.1)$$

which measures the time spent in the visiting state of $X_t$, with $T_0 = 0$. We shall assume that the process $\tilde X = (\tilde X_t)_{t \ge 0}$, $\tilde X_t = (X_t, U_t)$, is a (PD) Markov process given by transition functions

$$p(s, \tilde x; t, B) = P(\tilde X_t \in B \mid \tilde X_s = \tilde x), \qquad s \le t,$$

where $B$ is in the $\sigma$-algebra $\mathcal E \otimes \mathcal B_+$ and $\tilde x = (x,u)$. Further, we assume that $\tilde X$ admits an intensity function $(t,u,x) \to \lambda_{(x,u)}(t,A)$ for each $A \in \mathcal E$, given by

$$\lambda_{(x,u)}(t,A) = \lim_{h \downarrow 0} \frac{1}{h}\, p(t, \tilde x; t+h, A \times \mathbb R_+), \qquad x \notin A.$$

Again we write $\lambda_{(x,u)}(t) = \lambda_{(x,u)}(t, \mathcal Z \setminus x)$. Then the intensity process of $N(t,A)$ exists and is given by

$$\nu(t,A) = \lambda_{\tilde X_{t-}}(t,A)\, I(X_t \notin A).$$

We introduce the measurable payment functions $(t,u,x) \mapsto c_{(x,u)}(t)$ and $(t,u,x,y) \mapsto b_{(x,u)}(t,y)$, where $c_{(x,u)}(t)$ plays the role of an (annual) annuity or premium rate, assumed to be non-negative (negative) if it represents premium (annuity, pension). If $X$ jumps from $x$ to $y$ at time $t$ with a duration of $u$ in state $x$, an amount of $b_{(x,u)}(t,y)$ is paid to the insured. We assume that the annual interest is fixed and let $\delta$ denote the corresponding force of interest.
Let now our stochastic risk business be represented by a stochastic process $(R_t)_{t \ge 0}$ driven by the (stochastic) differential equation

$$dR_t = \delta R_t\, dt + c_{\tilde X_t}(t)\, dt - \int_{\mathcal Z} b_{\tilde X_{t-}}(t,z)\, N(dt,dz), \qquad (3.2)$$

which has a self-explanatory interpretation. The minus in (3.2) indicates that $b_{(x,u)}(t,y)$ is normally interpreted as non-negative. First let $T \le \infty$ be a fixed period of time. Using integration by parts, it is readily checked that the process

$$Q_t(T) = \int_t^T e^{-\delta(s-t)} \Big[ \int_{\mathcal Z} b_{\tilde X_{s-}}(s,z)\, N(ds,dz) - c_{\tilde X_s}(s)\, ds \Big] + l_{\tilde X_T}(T)\, e^{-\delta(T-t)}, \qquad (3.3)$$

satisfies (3.2) with $R_T = l_{\tilde X_T}(T)$, where $(t,u,x) \mapsto l_{(x,u)}(t)$ is some measurable function representing a cost by time $T$. We shall use the convention $l_{\tilde X_T}(T)\, e^{-\delta T} = 0$ if $T = \infty$, obtained for instance by replacing $l_{(x,u)}(t)$ with $l_{(x,u)}(t)\, I(t < \infty)$.
An actuary would then typically be interested in the conditional mean $E[Q_t(T) \mid \mathcal F_t^N]$ for $t \in [0,T]$, which due to the Markov property only depends on $\mathcal F_t^N$ via $\tilde X_t$, and together with (2.4) the expectation is given by

$$E[\,Q_t(T) \mid \tilde X_t = (x,u)\,] = \int_t^T e^{-\delta(s-t)}\, E_{(t,u,x)}\Big[\int_{\mathcal Z\setminus X_s} b_{\tilde X_s}(s,z)\,\lambda_{\tilde X_s}(s,dz) - c_{\tilde X_s}(s)\Big]\, ds + e^{-\delta(T-t)}\, E_{(t,u,x)}[\,l_{\tilde X_T}(T)\,], \qquad (3.4)$$

where $E_{(t,u,x)}$ denotes the conditional expectation given $\tilde X_t = (x,u)$. If we knew a version of 'Kolmogorov's backward differential equations' for $\tilde X$, we could simply evaluate $E_{(t,u,x)}$ with respect to these, and differentiate to obtain a system of differential equations for (3.4) of similar structure. This approach can thus be used if $X$ is of Markov type, implying that $\lambda_{(x,u)}(t)$ is assumed independent of $u$, and if furthermore all payment functions do not depend on $u$, see the comments following (3.10). For classical Markov theory we refer to Doob (1953).
To open for further flexibility and applications, we shall complicate matters further by operating under a random time horizon given by some $\mathcal F_t$-stopping time $\tau$, that is, $\{\tau \le t\} \in \mathcal F_t$ for all $t \ge 0$. In particular we shall assume that $\tau$ is given as the first exit time

$$\tau_{BC} = \inf\{t > 0 : \tilde X_t \notin BC\},$$

for some set $BC = B \times C$, where $B \in \mathcal E$ and $C \in \mathcal B_+$. Define also

$$\tau_{BC,t} = \inf\{s > t : \tilde X_s \notin BC\} = t + \inf\{s \ge 0 : \tilde X_{t+s} \notin BC\}.$$

Note that $\tau_{BC,\tau_{BC}} = \tau_{BC}$. We define $\tau_{BC,t} = \infty$ for any $t$ if $\tilde X_s \in BC$ for all $s \ge t$, and define the process

$$\Gamma_t = \int_0^t e^{-\delta s} \Big[ \int_{\mathcal Z} b_{\tilde X_{s-}}(s,z)\, N(ds,dz) - c_{\tilde X_s}(s)\, ds \Big].$$



Then for each $t \ge 0$, we can decompose as

$$Q_0(\tau_{BC}) = I(\tau_{BC} \le t)\, Q_0(\tau_{BC}) + I(\tau_{BC} > t)\,\{\Gamma_t + Q_t(\tau_{BC,t})\, e^{-\delta t}\}$$
$$= \Gamma_{\tau_{BC} \wedge t} + I(\tau_{BC} \le t)\, l_{\tilde X_{\tau_{BC}}}(\tau_{BC})\, e^{-\delta \tau_{BC}} + I(\tau_{BC} > t)\, Q_t(\tau_{BC,t})\, e^{-\delta t}, \qquad (3.5)$$

where we have used that

$$I(\tau_{BC} > s) = I(\tau_{BC} > t)\, I(\tau_{BC,t} > s), \qquad \forall s \ge t,$$

and we recall from (3.3) that

$$Q_t(\tau_{BC,t}) = \int_t^{\tau_{BC,t}} e^{-\delta(s-t)} \Big[ \int_{\mathcal Z} b_{\tilde X_{s-}}(s,z)\, N(ds,dz) - c_{\tilde X_s}(s)\, ds \Big] + l_{\tilde X_{\tau_{BC,t}}}(\tau_{BC,t})\, e^{-\delta(\tau_{BC,t} - t)}.$$

We are now concerned with evaluating the mean process

$$V_{\mathcal F^N}(t) = E[\,Q_t(\tau_{BC,t}) \mid \mathcal F_t^N\,], \qquad t \ge 0,$$

which again due to the $\mathcal F_t$-Markov property depends on $\mathcal F_t^N$ only via $\tilde X_t$, and which we abbreviate $V_{\tilde X_t}(t)$, where $(t,u,x) \mapsto V_{(x,u)}(t)$ is the measurable function

$$V_{(x,u)}(t) = E[\,Q_t(\tau_{BC,t}) \mid \tilde X_t = (x,u)\,].$$

We now have the boundary condition

$$V_{(x,u)}(t) = l_{(x,u)}(t), \qquad \forall (x,u) \notin BC,\ t \ge 0. \qquad (3.6)$$

For identifying $(t,u) \mapsto V_{(x,u)}(t)$ (for $(x,u) \in BC$) from a system of differential equations, we first assume that $E|Q_0(\tau_{BC})| < \infty$, and define the $\mathcal F_t^N$-martingale

$$M_t = E[\,Q_0(\tau_{BC}) \mid \mathcal F_t^N\,], \qquad t \ge 0.$$

By (3.5) and (3.6) we can write

$$M_t = \Gamma_{\tau_{BC} \wedge t} + I(\tau_{BC} \le t)\, l_{\tilde X_{\tau_{BC}}}(\tau_{BC})\, e^{-\delta \tau_{BC}} + I(\tau_{BC} > t)\, e^{-\delta t}\, V_{\tilde X_t}(t)$$
$$= \Gamma_{\tau_{BC} \wedge t} + e^{-\delta(\tau_{BC} \wedge t)}\, V_{\tilde X_{\tau_{BC} \wedge t}}(\tau_{BC} \wedge t), \qquad t \ge 0. \qquad (3.7)$$

This relation can be used to prove the following generalization of Theorem 3.1 in Møller (1993).

Theorem 3.1. Let $t \mapsto D_t$ be a function playing the role of $U_t$ between the jumps, that is, $D_t$ is non-negative with derivative $dD_t = dt$. Then over the continuity points of $\lambda_{(x,u)}(t,A)$, $b_{(x,u)}(t,z)$ and $c_{(x,u)}(t)$, the functions $t \mapsto V_{(x,D_t)}(t)$ satisfy the system of differential equations

$$\frac{dV_{(x,D_t)}(t)}{dt} = [\delta + \lambda_{(x,D_t)}(t)]\, V_{(x,D_t)}(t) + c_{(x,D_t)}(t) - \int_{\mathcal Z \setminus x} \lambda_{(x,D_t)}(t,dz)\, \big[\,b_{(x,D_t)}(t,z) + V_{(z,0)}(t)\,\big], \qquad t \in (0, \tau_C^*),\ x \in B, \qquad (3.8)$$

with the boundary condition

$$V_{(x,u)}(t) = l_{(x,u)}(t), \qquad \forall (x,u) \notin BC,\ t \ge 0,$$

where $\tau_C^*$ is the (deterministic) time

$$\tau_C^* = \inf\{t \ge 0 : D_t \notin C\}.$$

Proof. Stopping the martingale at $T_1$, the first jump time of $X$, the process $t \mapsto M_{t \wedge T_1}$ becomes an $\mathcal F_t^N$-martingale (optional sampling), and then in particular has the constant mean value

$$E_{(x,u)}[M_{t \wedge T_1}] = V_{(x,u)}(0), \qquad \forall t \ge 0.$$

However, we observe that $\tau_{BC} \wedge T_1 = \tau_C^* \wedge T_1$, $P_{(x,u)}$-a.s. for any $(x,u) \in BC$, since we can only leave $BC$ (via $C$) deterministically before $T_1$ when we start in $BC$, and if we do not leave $BC$ before $T_1$, then this will in particular not occur deterministically. Using (3.7), we then get for any $t \in (0, \tau_C^*)$

$$M_{t \wedge T_1} = \Gamma_{t \wedge T_1} + e^{-\delta(t \wedge T_1)}\, V_{\tilde X_{t \wedge T_1}}(t \wedge T_1)$$
$$= \Gamma_{t \wedge T_1} + I(T_1 > t)\, e^{-\delta t}\, V_{(x,u+t)}(t) + I(T_1 \le t)\, e^{-\delta T_1}\, V_{(Z_1,0)}(T_1). \qquad (3.9)$$

The $P_{(x,u)}$-distribution of $(T_1, Z_1)$ is given by

$$P_{(x,u)}(T_1 \in dt,\ Z_1 \in A) = \exp\Big( -\int_0^t \lambda_{(x,u+s)}(s)\, ds \Big)\, \lambda_{(x,u+t)}(t,A)\, dt.$$

For the first term in (3.9), we can by (2.4) write

$$E_{(x,u)}[\Gamma_{t \wedge T_1}] = E_{(x,u)} \int_0^t I(T_1 \ge s)\, e^{-\delta s} \Big[ \int_{\mathcal Z \setminus x} b_{(x,u+s)}(s,z)\, \lambda_{(x,u+s)}(s,dz) - c_{(x,u+s)}(s) \Big]\, ds,$$

and by evaluating the rest of the (constant) mean in (3.9), we can finally, by differentiation, arrive at (3.8) with $D_t = u + t$, and since this only depends on $t$, $u$ via their sum, we get the desired result. □

Note that (3.8) can be viewed as a system of partial differential equations. Namely, if we assume that $(t,u) \mapsto V_{(x,u)}(t)$ has continuous partial derivatives for $t \in (0, \tau_C^*)$ and $u \ge 0$, we can write

$$\frac{dV_{(x,D_t)}(t)}{dt} = \frac{\partial V_{(x,D_t)}(t)}{\partial t} + \frac{\partial V_{(x,D_t)}(t)}{\partial u}, \qquad t \in (0, \tau_C^*).$$

In the case where $\tau_{BC}$ is bounded by a finite horizon $T$, say, occurring e.g. if $\tau_{BC}$ is replaced by $\tau_{BC} \wedge T$, we can then hope to evaluate (3.8) (numerically) together with the initial condition

$$V_{(x,u)}(T) = l_{(x,u)}(T), \qquad \forall (x,u) \in BC.$$
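
As a minimal sketch of such a numerical evaluation, the scheme below steps (3.8) backward from $T$ with an explicit Euler step along the characteristics $D_t = u + t$, using a common step $h$ in $t$ and $u$ so that the characteristics follow the grid diagonals. The two-state disability-type model, the duration-dependent intensity, the payment functions and the exit rule $C = [0, u_{\max})$ are hypothetical stand-ins for $\lambda_{(x,u)}(t,\cdot)$, $b_{(x,u)}(t,\cdot)$, $c_{(x,u)}(t)$ and $l_{(x,u)}(t)$, not ingredients of the paper.

```python
import numpy as np

delta = 0.03                         # force of interest
T, h = 5.0, 0.02                     # truncation horizon and common step in t and u
K = int(round(T / h))                # number of time steps
u_max = 2.0                          # exit once the duration reaches u_max, i.e. C = [0, u_max)

def lam(x, u, t):                    # jump intensities lambda_(x,u)(t, z), z != x (hypothetical)
    return {1: 0.1} if x == 0 else {0: 0.5 * np.exp(-u)}    # recovery slows with duration

def b(x, u, t, z):                   # lump sum paid on a jump from x to z (hypothetical)
    return 1.0 if (x == 0 and z == 1) else 0.0

def c(x, u, t):                      # premium (>= 0) / annuity (< 0) rate, the paper's sign convention
    return 0.2 if x == 0 else -1.0

def l(x, u, t):                      # terminal / exit cost l_(x,u)(t) (hypothetical)
    return 0.0

# V[k, j, x] approximates V_(x, j*h)(k*h); with U_0 = 0 only durations u <= t occur.
V = np.zeros((K + 1, K + 1, 2))
for j in range(K + 1):
    for x in range(2):
        V[K, j, x] = l(x, j * h, T)                  # initial (terminal) condition at time T

for k in range(K, 0, -1):                            # sweep backward in time
    t = k * h
    for j in range(1, k + 1):
        u = j * h
        for x in range(2):
            if u - h >= u_max:                       # target point outside BC: boundary condition (3.6)
                V[k - 1, j - 1, x] = l(x, u - h, t - h)
                continue
            lx = lam(x, u, t)
            rhs = (delta + sum(lx.values())) * V[k, j, x] + c(x, u, t) \
                  - sum(rate * (b(x, u, t, z) + V[k, 0, z]) for z, rate in lx.items())
            V[k - 1, j - 1, x] = V[k, j, x] - h * rhs    # Euler step of (3.8) along the characteristic

print("V_(active, duration 0)(0) ~", V[0, 0, 0])
```

The coupling term $V_{(z,0)}(t)$ is read off at duration index 0 of the current time level, which the backward sweep has already filled.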

A special case that leads to studying the distribution of the first exit time for $\tilde X$ is obtained with $b_{(x,u)}(t,y)$, $c_{(x,u)}(t)$ and $\delta$ identically equal to zero, and furthermore with $l_{(x,u)}(t) = I(t < T)$ for some $T \le \infty$. Then

$$Q_t(\tau_{BC,t}) = I\big(\tilde X_s \notin BC \text{ for some } s \in [t, T]\big),$$

and

$$V_{(x,u)}(t) = P\big(\tilde X_s \notin BC \text{ for some } s \in [t, T] \mid \tilde X_t = (x,u)\big),$$

is the probability that $\tilde X$ exits $BC$ after time $t$, given $\tilde X_t = (x,u)$. Also, the boundary condition obviously reads

$$V_{(x,u)}(t) = 1, \qquad (x,u) \notin BC.$$

We refer to Møller (1995) for some aspects of this probability for studying ruin probabilities for some PD Markov processes. Another special case is when $X$ becomes Markov, obtained by letting $\lambda_{(x,u)}(t)$ be independent of $u$, which we denote by $\lambda_x(t)$. We then modify $\tau_{BC}$ as

$$\tau_D = \inf\{t > 0 : X_t \notin D\}, \qquad D \in \mathcal E,$$

and $\tau_{D,t}$ accordingly. If we further let the payment functions be independent of $u$, denoted $c_x(t)$, $b_x(t,y)$ and $l_x(t)$ respectively, and modify $Q_t(\cdot)$ to this case, the mean process

$$V_{\mathcal F^N}(t) = E[\,Q_t(\tau_{D,t}) \mid \mathcal F_t^N\,], \qquad t \ge 0,$$

then depends on $\mathcal F_t^N$ only via $X_t$, and is given by the function $t \mapsto V_x(t)$, defined as

$$V_x(t) = E[\,Q_t(\tau_{D,t}) \mid X_t = x\,].$$

Then we immediately obtain the following corollary.

Corollary 3.2. Over the continuity points of $\lambda_x(t,A)$, $b_x(t,z)$ and $c_x(t)$, the functions $t \mapsto V_x(t)$ satisfy the system of differential equations

$$\frac{dV_x(t)}{dt} = [\delta + \lambda_x(t)]\, V_x(t) + c_x(t) - \int_{\mathcal Z \setminus x} \lambda_x(t,dz)\, \big[\,b_x(t,z) + V_z(t)\,\big], \qquad t > 0,\ x \in D, \qquad (3.10)$$

with the boundary condition

$$V_x(t) = l_x(t), \qquad \forall x \notin D,\ t \ge 0.$$

If we restrict to operating under a fixed period of time $T \le \infty$, the system in (3.10) will hold for all $t \in (0,T)$ and $x \in \mathcal Z$ with no boundary condition. This is the trivial case and is merely a consequence of the classical Kolmogorov backward differential equations, which read

$$\frac{\partial p(s,x;t,A)}{\partial s} = \lambda_x(s)\, p(s,x;t,A) - \int_{\mathcal Z \setminus x} p(s,z;t,A)\, \lambda_x(s,dz),$$

where

$$p(s,x;t,A) = P(X_t \in A \mid X_s = x),$$

is the transition function. Modifying (3.4) accordingly, we can namely write

$$V_x(t) = \int_t^T \int_{\mathcal Z} e^{-\delta(s-t)} \Big[ \int_{\mathcal Z \setminus w} b_w(s,z)\, \lambda_w(s,dz) - c_w(s) \Big]\, p(t,x;s,dw)\, ds + e^{-\delta(T-t)} \int_{\mathcal Z} l_z(T)\, p(t,x;T,dz),$$

and by differentiation the desired system for $V_x(t)$ arises.

An application of the above results could be the following:

Example 3.1. Let $X$ be of Markov type given by the particular form $X_t = (S_t, N_t)$, where $S_t$ has finite state space $\mathcal J$, and $N_t$ denotes the number of jumps of $X$ over $[0,t]$. Let $m$ be some fixed integer. In the following we interpret $N_t$ as the number of claims over $[0,t]$.
Say that the company has the rule that it tolerates at most $m$ claims, and if the $(m+1)$th claim occurs at time $t$, it immediately terminates the contract and thereafter pays an amount of $A(t)$ to the policyholder. This corresponds to a case with the random insurance period

$$T = \inf\{t > 0 : N_t = m+1\},$$

which is equal to $\tau_D$ with

$$D = \mathcal J \times \{0, \ldots, m\}.$$

Since $N_t$ only takes jumps of size 1, we obtain the reserves from (3.10) with

$$l_{(i,n)}(t) = A(t)\, I(n = m+1), \qquad \forall i \in \mathcal J,$$

which corresponds to evaluating the system

$$\frac{dV_{(j,n)}(t)}{dt} = [\delta + \lambda_{(j,n)}(t)]\, V_{(j,n)}(t) + c_{(j,n)}(t) - \sum_{i \ne j} \lambda_{(j,n)}(t; i, n+1)\, \big[\,b_{(j,n)}(t; i, n+1) + V_{(i,n+1)}(t)\,\big], \qquad j \in \mathcal J,\ n = 0, 1, \ldots,$$

under the boundary conditions

$$V_{(i,m+1)}(t) = A(t), \qquad V_{(i,n)}(t) = 0, \qquad \forall i \in \mathcal J,\ n = m+2, \ldots,$$

where $\lambda_{(j,n)}(t; i, n+1)$ is the intensity for the transition $(j,n) \to (i,n+1)$ at time $t$. So we have a finite (and thus solvable) system. A reduction to a finite system would not appear if we had to operate under a fixed period of time. Also, contrary to a fixed period of time, it is important to note that $V_{(i,n)}(t)$ still depends on $n$ for any $i \in \mathcal J$ even if the intensity and payment functions are assumed independent of $n$.
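
As a small numerical illustration of why the system is finite and solvable, consider the extra assumption, not made in the example, that the transition intensities, the payment functions and $A$ are constant in time. Then $V_{(j,n)}(t)$ does not depend on $t$, the left-hand side of the system vanishes, and the reserves follow from a backward recursion in the claim count $n$ starting at the boundary value $V_{(i,m+1)} = A$. The Python sketch below uses a hypothetical two-element $\mathcal J$ and hypothetical rates and amounts.

```python
# Hypothetical two-state example (J = {0, 1}) with time-homogeneous ingredients:
# lam[j][i] is the claim intensity for a transition j -> i (claim count n -> n+1),
# b[j][i] the claim amount, c[j] the premium rate (non-negative, the paper's sign
# convention), A the termination payment after the (m+1)th claim, delta the force
# of interest.
delta = 0.04
m = 3
lam = {0: {1: 0.30}, 1: {0: 0.10}}
b = {0: {1: 50.0}, 1: {0: 20.0}}
c = {0: 5.0, 1: 2.0}
A = 100.0

# Boundary: V_(i, m+1) = A; then, with dV/dt = 0, the example's system gives the
# reserve at claim count n explicitly in terms of the reserves at claim count n+1.
V = {(i, m + 1): A for i in lam}
for n in range(m, -1, -1):
    for j in lam:
        total = sum(lam[j].values())
        V[(j, n)] = (sum(lam[j][i] * (b[j][i] + V[(i, n + 1)]) for i in lam[j]) - c[j]) / (delta + total)

for n in range(m + 2):
    print(n, {j: round(V[(j, n)], 2) for j in lam})
```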
There are of course many interesting cases to be studied using similar techniques and the necessary aspects of general martingale theory. For instance, another aspect of conditional means could be to study the distribution of the random variable $C_{\tau_D}$, where $C_t$ is the jump process (and $X$ Markov)

$$C_t = \sum_{i=1}^{N(t)} b_{X_{T_i -}}(T_i, Z_i) = \int_0^t \int_{\mathcal Z} b_{X_{s-}}(s,z)\, N(ds,dz).$$

Define then the function $t \mapsto F_x(t,r)$, $r \in \mathbb R$, as

$$F_x(t,r) = P\Big( \int_t^{\tau_{D,t}} \int_{\mathcal Z} b_{X_{s-}}(s,z)\, N(ds,dz) \le r \,\Big|\, X_t = x \Big),$$

which satisfies the boundary condition

$$F_x(t,r) = I(r \ge 0), \qquad \forall x \notin D.$$



Repeating the technique in (3.5), now with $Q_0(\tau_D) = I\big( \sum_{i=1}^{N(\tau_D)} b_{X_{T_i -}}(T_i, Z_i) \le r \big)$, it will appear that the process

$$M_t = F_{X_{t \wedge \tau_D}}\big(t \wedge \tau_D,\ r - C_{t \wedge \tau_D}\big),$$

becomes an $\mathcal F_t^N$-martingale. Copying the techniques used for proving Theorem 3.1, we can arrive at the following theorem.

Theorem 3.3. Over the continuity points of $\lambda_x(t,A)$ and $b_x(t,z)$, the functions $t \mapsto F_x(t,r)$ satisfy, for each $r \in \mathbb R$, the system of differential equations

$$\frac{dF_x(t,r)}{dt} = \lambda_x(t)\, F_x(t,r) - \int_{\mathcal Z \setminus x} \lambda_x(t,dz)\, F_z\big(t,\ r - b_x(t,z)\big), \qquad t > 0,\ x \in D, \qquad (3.11)$$

with the boundary condition

$$F_x(t,r) = I(r \ge 0), \qquad \forall x \notin D,\ t \ge 0.$$

As above we can evaluate (3.11) numerically if we replace $\tau_D$ with $\tau_D \wedge T$ for some finite $T$, and use the initial condition $F_x(T,r) = I(r \ge 0)$, for all $x \in D$.
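
A minimal sketch of such an evaluation, under assumptions not made in the paper: a finite state space with time-homogeneous hypothetical intensities and non-negative claim amounts (so that $F_z(t,s) = 0$ for $s < 0$), an exit state outside $D$, and a grid of thresholds $r$ on which the shifted argument $r - b_x(t,z)$ is handled by linear interpolation. The loop is the backward Euler discretization of (3.11), started from the initial condition at the truncation horizon $T$.

```python
import numpy as np

# Hypothetical model: D = {0, 1}, exit state 2 (not in D); lam[x][z] is the jump
# intensity for x -> z and bb[x][z] the (non-negative) amount paid at such a jump.
lam = {0: {1: 0.4, 2: 0.1}, 1: {0: 0.2, 2: 0.3}}
bb = {0: {1: 3.0, 2: 5.0}, 1: {0: 1.0, 2: 5.0}}
T, h = 10.0, 0.01
r_grid = np.linspace(0.0, 30.0, 601)                # grid of thresholds r

ones = np.ones_like(r_grid)                         # I(r >= 0) on the grid
F = {0: ones.copy(), 1: ones.copy(), 2: ones.copy()}  # initial condition F_x(T, r) = I(r >= 0)
for _ in range(int(round(T / h))):                  # backward in time from T towards 0
    F_new = {2: ones}                               # boundary condition outside D
    for x in (0, 1):
        total = sum(lam[x].values())
        # F_z(t, r - b_x(t, z)) on the r-grid: 0 below the grid, clipped above it
        shifted = {z: np.interp(r_grid - bb[x][z], r_grid, F[z], left=0.0) for z in lam[x]}
        drift = total * F[x] - sum(lam[x][z] * shifted[z] for z in lam[x])
        F_new[x] = np.clip(F[x] - h * drift, 0.0, 1.0)   # backward Euler step of (3.11)
    F = F_new

# P(claims accumulated over (t, tau_{D,t} ^ T] <= 10 | X_t = 0) at t = 0, under these assumptions
print(float(np.interp(10.0, r_grid, F[0])))
```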

References

Brémaud, P. (1981). Point Processes and Queues. Springer, New York.
Davis, M.H.A. (1993). Markov Models and Optimization. Chapman & Hall, London.
Doob, J.L. (1953). Stochastic Processes. John Wiley, New York.
Hoem, J.M. (1969). Markov chain models in life insurance. Blätter der Deutschen Gesellschaft für Versicherungsmathematik 9, 91-107.
Møller, C.M. (1993). A stochastic version of Thiele's differential equation. Scandinavian Actuarial Journal 1, 1-16.
Møller, C.M. (1995). Stochastic differential equations for ruin probabilities. Journal of Applied Probability 32, 74-89.
Møller, C.M. (1996). Integral equations for compound distribution functions. Journal of Applied Probability 33, to appear.
