Lectures on
Insurance Models
Texts and Readings in Mathematics
Advisory Editor
C. S. Seshadri, Chennai Mathematical Institute, Chennai.
Managing Editor
Rajendra Bhatia, Indian Statistical Institute, New Delhi.
Editors
R. B. Bapat, Indian Statistical Institute, New Delhi.
V. S. Borkar, Tata Inst. of Fundamental Research, Mumbai.
Probal Chaudhuri, Indian Statistical Institute, Kolkata.
V. S. Sunder, Inst. of Mathematical Sciences, Chennai.
M. Vanninathan, TIFR Centre, Bangalore.
Lectures on
Insurance Models
S. Ramasubramanian
Indian Statistical Institute
Bangalore
HINDUSTAN BOOK AGENCY
Published in India by
email: info@hindbook.com
http://www.hindbook.com
All export rights for this edition vest exclusively with Hindustan Book
Agency (India). Unauthorized export is a violation of Copyright Law
and is subject to legal action.
Contents

Preface vii
1 Introduction 1
1.1 Main assumptions 2
1.2 An overview 3
2 Poisson model 6
2.1 Preliminaries 6
2.2 Interarrival and waiting time distributions 10
2.3 Order statistics property 17
2.4 Cramér-Lundberg model 20
2.5 Exercises 23
3 Renewal model 29
3.1 Preliminaries 29
3.2 Renewal function 36
3.3 Renewal theorems 43
3.4 Sparre Andersen model 54
3.4.1 Some basic premium principles 57
3.5 Exercises 61
Appendix 173
A.1 Basic notions 173
A.2 On the central limit problem 180
A.3 Martingales 185
A.4 Brownian motion and Itô integrals 189
Bibliography 196
Index 200
Preface
Introduction
Catastrophes, natural as well as man-made, reinforce the fact that almost everyone would like to be assured of some (non-supernatural) agency to bank upon in times of grave need. If the affected parties are too poor, then it is the responsibility of governments and the "haves" to come to the rescue. It is not uncommon that places of worship, palatial buildings and schools serve as refuges for the affected. However, there are also significant sections of the population who are willing to pay regular premiums to suitable agencies during normal times to have insurance cover to tide over crises.
Insurance policies have been prevalent in Europe since the Middle Ages. Scientists like J. Graunt, E. Halley, A. De Moivre and C. Neumann are believed to have played major roles in formulating some of the procedures in connection with life insurance in the 17th century. In India, as advocated by the educationist and social reformer I. Vidyasagar, insurance schemes aimed at mitigating the plight of widows were introduced in the late 19th century.
By the 18th century the extreme uncertainties encountered in maritime trade gave rise to insurance covers against property loss; these included also fairly sophisticated schemes like reinsurance policies, that is, insuring of insurance companies. The complexities of the 20th and 21st centuries have, of course, spawned a whole gamut of insurance schemes/products (like life, health/medical, accident, disability, fire, theft, car, pension funds, travel, housing, business related insurance, etc.). Thus insurance has become a part of modern life, whether we like it or not.
In an insurance setup, a considerable proportion of the financial risk is shifted to the insurance company. The implicit trust between the insured and the insurance company (perhaps formalized by a legal
for t ≥ 0. Clearly S(t) = cumulative claim over [0, t]. We shall write

S(t) = Σ_{i=1}^{N(t)} X_i,  t ≥ 0,

which shall stand for (1.2). An implicit assumption here is that payment is immediate on receipt of the claim. Note that a sample path of N and the corresponding sample path of S have jumps at the same times {T_i}, by 1 for N and by X_i for S.
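The common jump structure of N and S is easy to see in a small simulation. The following sketch (Python; the function name and parameter values are illustrative, and the exponential interarrivals anticipate the Poisson model of Chapter 2) generates the jump times {T_i} paired with the jump sizes:

```python
import random

def simulate_path(lam, claim_sampler, horizon, rng):
    """One sample path of (N, S): at each arrival time T_i,
    N jumps by 1 and S jumps by the claim size X_i."""
    t, jumps = 0.0, []
    while True:
        t += rng.expovariate(lam)          # next interarrival time
        if t > horizon:
            break
        jumps.append((t, claim_sampler(rng)))
    return jumps

rng = random.Random(1)
jumps = simulate_path(2.0, lambda r: r.expovariate(0.5), 10.0, rng)
N_T = len(jumps)                           # N(10): number of jumps
S_T = sum(x for _, x in jumps)             # S(10): sum of jump sizes
```

Each pair in `jumps` is one claim: the first coordinate is the instant at which both paths jump, the second is the amount by which S jumps.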
Notation: A function f(·) is said to be o(h) if lim_{h→0} f(h)/h = 0; we denote this by f(·) = o(h). That is, f(·) decays at a faster rate than h.
1.2 An overview
We now give an overview of the book, indicating along the way relation
to the main objectives of risk theory. References to more comprehensive
works are also given.
A primary task is to model the times at which claims arrive, and to model the payments involved when claims are made. In the notation of Section 1.1, this means modelling {T_i} (or equivalently N(·)) and {X_j}; consequently the total claim amount process S(·) can be modelled. A reasonable model should be realistic as well as amenable to analysis.
In Chapter 2 we discuss the versatile Poisson process and introduce the classical Cramér-Lundberg model, along with their basic properties. As indicated earlier, this is the first mathematical model in the subject and even after 100 years it continues to hold a central place.
The renewal model, which is a generalization of the Poisson model, is considered in Chapter 3. An important role is played by the renewal function t ↦ E(N(t) + 1), where N(·) is the counting process associated
Poisson model
2.1 Preliminaries
Observe that

P_n(t + h) = P_n(t)(1 − λh) + P_{n−1}(t) λh + o(h),

and hence

P_n'(t) = −λ P_n(t) + λ P_{n−1}(t),  t > 0,   (2.6)

for n = 1, 2, .... To solve (2.6) inductively, set Q_n(t) = e^{λt} P_n(t), t ≥ 0, n = 0, 1, 2, .... By (2.6),

(d/dt) Q_n(t) = λ Q_{n−1}(t),  t > 0,  n = 1, 2, ....   (2.7)
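Iterating the recursion (2.7) with Q_0 ≡ 1 gives Q_n(t) = (λt)^n/n!, that is, P_n(t) = e^{−λt}(λt)^n/n!, the Poisson probabilities. A quick numerical check of the induction (Python; the step size and parameter values are arbitrary choices of ours):

```python
import math

lam, T, steps = 1.5, 2.0, 20000
h = T / steps
grid = [i * h for i in range(steps + 1)]

Q = [1.0 for _ in grid]            # Q_0(t) = e^{lam t} P_0(t) = 1
for n in range(1, 6):
    # Q_n(t) = integral_0^t lam * Q_{n-1}(s) ds  (trapezoidal rule)
    Qn, acc = [0.0], 0.0
    for i in range(1, len(grid)):
        acc += 0.5 * h * lam * (Q[i - 1] + Q[i])
        Qn.append(acc)
    Q = Qn

P5 = math.exp(-lam * T) * Q[-1]    # P_5(T) recovered from Q_5
poisson = math.exp(-lam * T) * (lam * T) ** 5 / math.factorial(5)
```

The numerically integrated P_5(T) agrees with the Poisson probability e^{−λT}(λT)^5/5! to high accuracy.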
Theorem 2.1 Let the stochastic process {N(t) : t ≥ 0} satisfy the postulates (N1)-(N6). Then for any t ≥ 0, s ≥ 0, k = 0, 1, 2, ...

P(N(t + s) − N(s) = k) = e^{−λt} (λt)^k / k!.

Thus the random variable N(t) has the Poisson distribution with parameter λt. The stochastic process {N(t) : t ≥ 0} is called a time homogeneous Poisson process with arrival rate λ > 0. □
(iii) For 0 ≤ s < t the random variable N(t) − N(s) has a Poisson distribution with parameter ∫_s^t λ(y) dy.
The above definition is not the most economical one. For example, the existence of left limits follows from the paths being nondecreasing. This may be a small price for the sake of clarity.

As N(·) is nondecreasing, N(0) = 0, and N(t, ω) < ∞ for P-a.e. ω, note that the sample paths have left and right limits at every t > 0. So condition (ii) above means that we have chosen a right continuous version.
[EK]. We shall assume that a Poisson process exists for a given rate
function. A constructive version of the homogeneous Poisson process
will, however, be indicated later; see Theorem 2.5.
For a Poisson process, let Λ(t) = ∫_0^t λ(s) ds, t ≥ 0. The function Λ(·) can be considered as an intrinsic clock for N. If N is homogeneous the intrinsic clock evolves linearly; that is, claims arrive roughly uniformly over time. If N has nonconstant rate then the clock "slows down" or "accelerates" according to λ(·); in the insurance context, a nonconstant rate may refer to seasonal trends.
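The intrinsic clock gives a concrete recipe: if Ñ is a homogeneous Poisson process of rate 1, then N(t) = Ñ(Λ(t)) has rate function λ(·). A minimal sketch with an illustrative "seasonal" rate (the rate function and all names below are our choices, not the book's):

```python
import math, random

def lam(s):                              # illustrative seasonal rate
    return 2.0 + math.sin(s)

def Lam(t, steps=2000):                  # intrinsic clock: integral of lam over [0, t]
    h = t / steps
    return sum(0.5 * h * (lam(i * h) + lam((i + 1) * h)) for i in range(steps))

def sample_N(clock, rng):
    """N(t) = number of points of a unit-rate Poisson process in [0, Lam(t)]."""
    n, s = 0, rng.expovariate(1.0)
    while s <= clock:
        n += 1
        s += rng.expovariate(1.0)
    return n

rng = random.Random(7)
clock = Lam(5.0)                         # Lambda(5) = 10 + (1 - cos 5)
mean = sum(sample_N(clock, rng) for _ in range(20000)) / 20000
```

Since N(5) is Poisson with parameter Λ(5), the sample mean of the counts should be close to `clock`.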
The Poisson process owes its versatility to the fact that many natural functionals connected with the model can be explicitly determined. The next two sections discuss a few examples which are relevant in the actuarial context as well.
In Section 2.2 it is shown that the interarrival times (that is, the times between successive claim arrivals) of a homogeneous Poisson process form a sequence of independent identically distributed random variables having an exponential distribution. In fact, the homogeneous Poisson process is characterized by this property. Section 2.3 discusses the joint distribution of the first n claim arrival times, given that exactly n arrivals have taken place during [0, t]. This is connected with the order statistics of n independent identically distributed random variables having uniform distribution. Both these properties play a major role in the Cramér-Lundberg risk model, as will be seen in the sequel.
T_n = inf{t ≥ 0 : N(t) = n} = time of arrival of the n-th claim,   (2.9)

A_n = T_n − T_{n−1} = time between the (n−1)-th and n-th claim arrivals.   (2.10)

The random variables T_0, T_1, T_2, ... are called claim arrival times (or waiting times); the sequence {A_n : n = 1, 2, ...} is called the sequence of interarrival times.
2.2 Interarrival and waiting time distributions
f_{(T_1, T_2, ..., T_n)}(t_1, t_2, ..., t_n) =
{ λ^n e^{−λ t_n},  if 0 < t_1 < t_2 < ... < t_n < ∞,   (2.12)
  0,  otherwise.
By postulate (N5) of Theorem 2.1 and Exercise 2.1.4 observe that 0 < T_1 < T_2 < ... < T_n < ∞ a.s., that is, P((T_1, T_2, ..., T_n) ∈ G_n) = 1. If a "rectangle" I = (a_1, b_1] × (a_2, b_2] × ... × (a_n, b_n] ⊂ G_n, then 0 ≤ a_1 < b_1 ≤ a_2 < b_2 ≤ ... ≤ b_{n−1} ≤ a_n < b_n < ∞. Now, by the independent increment property of the Poisson process we get, taking b_0 = 0,
P((T_1, T_2, ..., T_n) ∈ I) = ∫ ... ∫_I λ^n e^{−λ t_n} dt_1 dt_2 ... dt_n.
( )"t)k ()"h)€
q t t
k,kH ( ,
+ h) = e-A(t+h) - - - -
k! R! (2.19)
t::. kH
= L
A
!(Zl,Z2,Z3,Z4)
Ak zk-le-AZ1) ()..e- AZ2 )
( (k-l)! 1
(A(€-2)!
l- 1 z€-2e-AZ3) ()..e- AZ4 )
3 ,
{
if Zl, Z2, Z3, Z4 >0
0, otherwise.
(2.21)
q_{k,k+ℓ}(t, t + h) = ∫_F f(z_1, z_2, z_3, z_4) dz_1 dz_2 dz_3 dz_4,   (2.22)

where F = {(z_1, z_2, z_3, z_4) : 0 ≤ z_1 ≤ t < (z_1 + z_2) < (z_1 + z_2 + z_3) ≤ t + h < (z_1 + z_2 + z_3 + z_4)}. Plugging (2.21) into (2.22), an elementary integration now leads to (2.19).
To show that N(t_1), N(t_2) − N(t_1), ..., N(t_n) − N(t_{n−1}) are independent random variables, for 0 < t_1 < t_2 < ... < t_n < ∞, n = 3, 4, ..., it is clear that the above approach will work, though the book-keeping will be more tedious. Also the above argument shows that N(t + s) − N(s) and N(t) have the same distribution (viz. Poisson distribution with parameter λt) for any t ≥ 0, s ≥ 0.
As 0 < T_1 < T_2 < ... < T_n < ∞ a.s., it is seen from our construction (that is, (2.16)) that the sample paths of N are right continuous at every t ≥ 0 and have left limits at every t > 0. This completes the proof. □
In the jargon of the theory of stochastic processes, the time homogeneous Poisson process is the renewal process with i.i.d. exponential interarrival times. Proposition 2.3 and Theorem 2.5 imply the existence of the Poisson process for a large class of intensity functions, once a sequence of i.i.d. exponential random variables exists. The latter requirement is, of course, guaranteed by the consistency theorem.
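The constructive version is easy to code: sum i.i.d. Exp(λ) interarrival times and count how many partial sums fall in [0, t]. A sketch, with a sanity check that the count has the Poisson mean and variance λt (parameter values are illustrative):

```python
import random

def poisson_count(lam, t, rng):
    """Constructive version: N(t) = #{n >= 1 : A_1 + ... + A_n <= t},
    with i.i.d. A_i ~ Exp(lam)."""
    s, n = rng.expovariate(lam), 0
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    return n

rng = random.Random(42)
lam, t, m = 3.0, 2.0, 30000
draws = [poisson_count(lam, t, rng) for _ in range(m)]
mean = sum(draws) / m
var = sum((d - mean) ** 2 for d in draws) / m
# For a Poisson(lam * t) law, mean and variance both equal lam * t = 6.
```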
The next result concerns age and residual life. These terms have
come from reliability theory.
B(t) is called the backward recurrence time or age, while F(t) is called the forward recurrence time or excess life or residual life. Let G_{(B(t),F(t))} denote the distribution function of the two dimensional random variable (B(t), F(t)). Then

In particular (i) B(t) and F(t) are independent random variables, (ii) F(t) has Exp(λ) distribution, and (iii) B(t) has a truncated exponential distribution (with parameter λ) with a jump at t, that is
that is, given that exactly one arrival has taken place in [0, t], the time
of arrival is uniformly distributed over [0, t].
2.3 Order statistics property
(Ω, F, P), note that (T_1, T_2, ..., T_n) is an ℝ^n-valued random variable. So the question is: what is the probability measure P ∘ (T_1, T_2, ..., T_n)^{−1} on ℝ^n? To appreciate the answer one needs the following.
Theorem 2.8 Let Y_1, Y_2, ..., Y_n be i.i.d. random variables with common probability density function f(·). Let Y_{(1)} ≤ Y_{(2)} ≤ ... ≤ Y_{(n)} denote the order statistics of Y_1, Y_2, ..., Y_n; that is, Y_{(k)}(ω) is the k-th smallest value among {Y_1(ω), Y_2(ω), ..., Y_n(ω)}, k = 1, 2, ..., n, for any ω ∈ Ω. Then the joint probability density function of Y_{(1)}, Y_{(2)}, ..., Y_{(n)} is given by

{ n! f(y_1) f(y_2) ... f(y_n),  if y_1 < y_2 < ... < y_n,
  0,  otherwise.
In particular, if V_1, V_2, ..., V_n are i.i.d. U(0, t) random variables (that is, having uniform distribution over [0, t]), then the joint probability density function of the order statistics V_{(1)}, V_{(2)}, ..., V_{(n)} is given by

{ n!/t^n,  if 0 < s_1 < s_2 < ... < s_n < t,   (2.26)
  0,  otherwise.
Note that the n-dimensional random variable (Y_{(1)}, Y_{(2)}, ..., Y_{(n)}) takes values in the set H = {(y_1, y_2, ..., y_n) ∈ ℝ^n : y_1 < y_2 < ... < y_n}. Let B ⊆ H be a Borel set. Let B_i, i = 1, 2, ..., n!, correspond to the disjoint sets obtained by permutation of coordinates of elements of B, with B_1 = B. Observe that
P((Y_{(1)}, Y_{(2)}, ..., Y_{(n)}) ∈ B) = Σ_{i=1}^{n!} ∫_{B_i} f(z_1) f(z_2) ... f(z_n) dz_1 dz_2 ... dz_n = n! ∫_B f(z_1) f(z_2) ... f(z_n) dz_1 dz_2 ... dz_n,

as f(z_1) f(z_2) ... f(z_n) is symmetric in (z_1, ..., z_n).
Theorem 2.9 Let N(·) be a homogeneous Poisson process with rate λ > 0, and {T_i} the associated arrival times. Fix t > 0, n = 1, 2, .... Then the conditional probability density function of (T_1, T_2, ..., T_n) given N(t) = n is

{ n!/t^n,  if 0 < s_1 < s_2 < ... < s_n < t,   (2.27)
  0,  otherwise.
If a "reet angle" I = (al, bll x (a2, b21 x .. . x (an, bnl C Gn,t then 0 ~
al < bl ~ a2 < ... < bn- l ~ an < bn < t. Now by (2.12) in Theorem
20 2. Poisson model
2.4
JJ... J J
bl b2 bn 00
J... ~~
al a2 an
dS l dS2.·· dSn
I
Since rectangles of the form I generate the Borel a-algebra of Gn,t, (2.27)
is now immediate. 0
(T_1, T_2, ..., T_n) given {N(t) = n}  =_d  (t U_{(1)}, t U_{(2)}, ..., t U_{(n)}),   (2.29)

where U_{(1)} < U_{(2)} < ... < U_{(n)} is the order statistics corresponding to i.i.d. U(0, 1) random variables U_1, U_2, ..., U_n, and =_d denotes that the two sides have the same distribution. Moreover, by the Kolmogorov consistency theorem, the U_i's can be chosen independently of {N(t) : t ≥ 0} or {T_j}. □
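Representation (2.29) is easy to probe by simulation: conditionally on N(t) = n, the first arrival time should behave like t·U_{(1)}, whose mean is t/(n + 1). A rejection-sampling sketch (all parameter values are illustrative):

```python
import random

rng = random.Random(0)
lam, t, n = 1.0, 1.0, 3
first_arrivals = []
while len(first_arrivals) < 5000:
    # simulate arrivals on [0, t]; keep the path only if N(t) = n
    times, s = [], rng.expovariate(lam)
    while s <= t:
        times.append(s)
        s += rng.expovariate(lam)
    if len(times) == n:
        first_arrivals.append(times[0])
mean_T1 = sum(first_arrivals) / len(first_arrivals)
# (2.29): given N(t) = n the arrivals look like t * sorted uniforms,
# so E[T_1 | N(t) = n] = t * E[U_(1)] = t / (n + 1) = 0.25 here.
```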
2.4 Cramér-Lundberg model

1. Claims happen at the arrival times 0 < T_1 < T_2 < ... < T_n < ... of a homogeneous Poisson process {N(t) : t ≥ 0} with rate, say, λ > 0.
2. The i-th claim, arriving at time T_i, causes the claim size X_i. The sequence {X_i} is an i.i.d. sequence of nonnegative random variables.

3. The sequences {T_i} and {X_j} are independent; equivalently, the process {N(t)} and the sequence {X_j} are independent.
The total claim amount up to time t in this model is given by

S(t) = Σ_{i=1}^{N(t)} X_i,  t ≥ 0.   (2.30)
S_0(t) = { 0,  if N(t) = 0,
           Σ_{i=1}^{k} e^{−r T_i} X_i,  if N(t) = k, k = 1, 2, ....   (2.31)
This is the "present value" (at time 0) of the cumulative claim amount
over the time horizon [0, t]. Observe that by Theorem 2.9 and (2.29),
E(So(t)) ~ E ~ )
~E (t, e-'" Xi I N(t) ~ n) . P(N(t) ~ n)
n
L L E[e-
00
(t,
n=li=l
Theorem 2.12 Let S_0(·) be the discounted sum process defined by (2.31) corresponding to the Cramér-Lundberg model. Let λ > 0, r > 0 denote respectively the claim arrival rate and the interest rate. Assume that the claim size has finite expectation. Then

E(S_0(t)) = (λ E(X_1) / r)(1 − e^{−rt}),  t ≥ 0. □
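The display of Theorem 2.12 is garbled in this copy; the standard closed form, consistent with the computation above, is E(S_0(t)) = (λ E(X_1)/r)(1 − e^{−rt}). A Monte Carlo check under illustrative parameters (the exponential claim distribution is our choice, not the book's):

```python
import math, random

def discounted_claims(lam, r, t, mean_claim, rng):
    """One draw of S_0(t) = sum over T_i <= t of e^{-r T_i} X_i,
    with exponential claims of the given mean (illustrative choice)."""
    s, total = rng.expovariate(lam), 0.0
    while s <= t:
        total += math.exp(-r * s) * rng.expovariate(1.0 / mean_claim)
        s += rng.expovariate(lam)
    return total

rng = random.Random(3)
lam, r, t, mu, m = 2.0, 0.5, 4.0, 1.5, 40000
est = sum(discounted_claims(lam, r, t, mu, rng) for _ in range(m)) / m
exact = lam * mu * (1.0 - math.exp(-r * t)) / r   # standard closed form
```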
for t ≥ 0, where r > 0 again denotes the interest rate. This process can be interpreted as follows. Let t > 0 be fixed. Suppose claims which arrive during [0, t] are settled only at time t. So there is a delay in claim settlement, and hence the claim amount accrues interest compounded continuously at rate r > 0. So S_1(t) is the total liability of the insurance company at time t due to claims occurring over [0, t]. Hence E(S_1(t)) is the expected amount that the company should have at time t to honour its commitments.
Notes: The material considered in this Chapter is standard. One
may look up [Bl2], [KT], [Re], [Rol], [Ro2], [RSST], [M], etc.
2.5 Exercises
(b) For any fixed t ≥ 0, show that for P-a.e. ω the sample path N(·, ω) does not have a jump at t.

(c) Show that (b) above does not mean that the sample paths of N(·) are continuous. In fact, with probability one, sample paths have discontinuities.
(d) For P-a.e. ω ∈ Ω, for any t > 0, show that the number of jumps of N(·, ω) over [0, t] is finite. That is, almost every sample path has only finitely many jumps over any time interval of finite length.
That is, the Markov property holds for the Poisson process.
2.1.4. Let {N(t) : t ≥ 0} be a Poisson process with a continuous rate function λ(·). For any t ≥ 0 show that
show that {T_n ≤ t} ∈ F_t, {A_n ≤ t} ∈ F_t. That is, each T_n and hence each A_n is a stopping time w.r.t. the filtration {F_t}.
2.2.2. Let T_1 be the first arrival time corresponding to a time homogeneous Poisson process. For t, s ≥ 0 show that
⋃_{k=1}^{n} ⋃_{0 ≤ j_1 ≤ j_2 ≤ ... ≤ j_{k−1} ≤ m_n − 1} { … }.

Also, Σ_finite denotes a sum of a finite number of terms.)
2.2.5. Let N be as in Exercise 2.2.4. Let {T_i} be the arrival times. Fix an integer n > 1. Let G_n be given by (2.13). Let F_{T_1,...,T_n} denote the
where ∂^n W / (∂t_1 ∂t_2 ⋯ ∂t_n) = 0.
(Hint: (i) One needs to consider only k = n in the hint to Exercise 2.2.4, with m_i = i.

(ii) Show that we need to look only at terms of the form

{ λ^{n−1} / (j_1! (j_2 − j_1)! ⋯ (n − 1 − j_{n−2})!) } × { t_1^{j_1} (t_2 − t_1)^{j_2 − j_1} ⋯ (t_n − t_{n−1})^{n−1−j_{n−2}} e^{−λ t_n} },

where j_{n−1} = n − 1, j_ℓ ≥ ℓ, ℓ = 1, 2, ..., n − 1, j_1 ≤ j_2 ≤ ... ≤ j_{n−1}. That is, terms not of this form will be part of W.

(iii) Unless j_ℓ = ℓ for all ℓ = 1, 2, ..., n − 1, show that the coefficient of t_1 t_2 ⋯ t_{n−1} in the expression in (ii) above is zero.)
2.2.6. Let {T_n} be as in Exercise 2.2.5.

(a) Show that T_n has the gamma distribution Γ(n, λ). This distribution is also called the Erlang distribution with parameters n, λ in queueing theory.

(b) Show that 0 < T_1 < T_2 < ... < T_n < ∞ a.s.

2.2.7. Derive (2.19) in cases other than k ≥ 1, ℓ ≥ 2.
2.2.8. Let {N(t) : t ≥ 0} be a Poisson process with rate function λ(·). Set μ(t) = ∫_0^t λ(s) ds, t ≥ 0. Assume that λ(·) is continuous and λ(r) > 0 for a.e. r (w.r.t. the Lebesgue measure on [0, ∞)). Let {T_i} be the claim arrival times, and {A_j} the interarrival times for the process N(·), defined by the analogues of (2.9), (2.10). Then the following hold.
(a) Fix n ≥ 1. Then the probability density function of (T_1, T_2, ..., T_n) is

{ e^{−μ(t_n)} λ(t_1) λ(t_2) ⋯ λ(t_n),  if 0 < t_1 < t_2 < ... < t_n < ∞,
  0,  otherwise.

(b) The probability density function of (A_1, A_2, ..., A_n) is

{ e^{−μ(a_1 + a_2 + ... + a_n)} [ Π_{i=1}^{n} λ(a_1 + ... + a_i) ],  if a_j > 0, 1 ≤ j ≤ n,
  0,  otherwise.
Thus the interarrival times are i.i.d. only when N(·) is time homogeneous.
2.2.9. Show that T_{N(t)} in (2.23) is a random variable.

2.2.10. Let {A_i} be the sequence of interarrival times of a homogeneous Poisson process with rate λ. Fix n ≥ 1. Find the distribution of min{A_1, A_2, ..., A_n}.
2.3.1. Let N(·), λ(·), μ(·), {T_i} be as in Exercise 2.2.8. For any t > 0, n = 1, 2, ... show that

where 0 < Y_{(1)} < Y_{(2)} < ... < Y_{(n)} is the order statistics corresponding to an i.i.d. sample Y_1, Y_2, ..., Y_n with common probability density function g(x) = (λ(x)/μ(t)) 1_{[0,t]}(x).

(Hint: use part (a) of Exercise 2.2.8 and imitate the proof of Theorem 2.9.)
2.3.2. Let N(·), {T_i}, {U_j} be as in Theorem 2.9 and Remark 2.10. Let n = 1, 2, ... and t > 0 be fixed. For any measurable symmetric function g on ℝ^n show that
2.4.1. (i) In the Cramér-Lundberg model assume that the claim size has finite expectation. Find E(S(t)).

(ii) Assuming that the claim size has finite second moment, show that Var(S(t)) = λt E(X_1^2).
2.4.2. If F_1, F_2 are distribution functions on ℝ, recall that the convolution of F_1 and F_2 is given by

F_1 * F_2 (x) = ∫_ℝ F_1(x − y) dF_2(y) = F_2 * F_1(x),  x ∈ ℝ.

Let F denote the common distribution function of the claim sizes X_i. Let F^{*(0)}(x) = 1_{[0,∞)}(x), x ∈ ℝ, and F^{*(1)} = F. For n = 2, 3, ... define F^{*(n)}(x) = (F^{*(n−1)} * F)(x) = (F * ... * F)(x), x ∈ ℝ, the n-fold convolution of F. Let S(·) be given by (2.30). Show that for x ∈ ℝ

P(S(t) ≤ x) = Σ_{n=0}^{∞} e^{−λt} ((λt)^n / n!) F^{*(n)}(x).
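For exponential claims the n-fold convolution F^{*(n)} is the Erlang distribution function, so the series above can be evaluated directly and compared against simulation. A sketch (the truncation level and parameter values are our choices):

```python
import math, random

def erlang_cdf(n, rate, x):
    """F^{*(n)}(x) for Exp(rate) claims: Erlang(n, rate) distribution function."""
    if n == 0:
        return 1.0 if x >= 0 else 0.0
    s = sum((rate * x) ** k / math.factorial(k) for k in range(n))
    return 1.0 - math.exp(-rate * x) * s

def compound_poisson_cdf(lam, t, rate, x, terms=80):
    """P(S(t) <= x) = sum_n e^{-lam t} (lam t)^n / n! * F^{*(n)}(x)."""
    return sum(math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
               * erlang_cdf(n, rate, x) for n in range(terms))

lam, t, rate, x = 1.0, 2.0, 1.0, 3.0
analytic = compound_poisson_cdf(lam, t, rate, x)

rng = random.Random(9)
m, hits = 40000, 0
for _ in range(m):
    n, s = 0, rng.expovariate(lam)
    while s <= t:                       # draw N(t)
        n += 1
        s += rng.expovariate(lam)
    total = sum(rng.expovariate(rate) for _ in range(n))
    hits += total <= x
mc = hits / m
```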
2.4.3. Assume that the claim size X_i has moment generating function M_X in a neighbourhood of 0. Show that the moment generating function of S(t) is given by

M_{S(t)}(u) = exp( λt (M_X(u) − 1) ).
Renewal model
3.1 Preliminaries
N(t) = max{i ≥ 0 : T_i ≤ t} = #{i ≥ 1 : T_i ≤ t},  t ≥ 0,   (3.1)

is called a renewal process. The sequences {T_n} and {A_n} are referred to respectively as the arrival times and interarrival times of the renewal process N(·). T_n is also called the n-th renewal epoch.
If there are large gaps between arrivals then the Poisson process model for the claim number process is not adequate. This is the case if the interarrival times have thicker tails than the exponential distribution. Of course, many nice properties of the Poisson process are lost in the renewal model. However, many asymptotic properties are similar to those of the earlier model.
In this section we establish the strong law of large numbers for N ( .)
and the elementary renewal theorem.
To begin with, we want to establish that N(t) has finite expectation. For this, and other purposes, it is convenient to treat T_0 = 0 also as one renewal. Indeed define for t ≥ 0

N_0(t) := Σ_{i=0}^{∞} 1_{[0,t]}(T_i) = N(t) + 1.   (3.3)
Proof: Easy. □
Theorem 3.3 Let the interarrival times {A_i} satisfy (3.2). Then E(N(t)) < ∞, t ≥ 0.
Proof: By assumption there exists a > 0 such that P(A_i > a) > 0. By rescaling, if necessary, assume a = 1. Define Ā_i = 1_{(1,∞)}(A_i), i = 1, 2, .... Clearly {Ā_i} is a sequence of i.i.d. Bernoulli random variables with parameter p = P(A_1 > 1). Also Ā_i ≤ A_i for all i. So by Lemma 3.2 it is enough to prove that E(N̄_0(t)) < ∞, where N̄_0 is defined in terms of {Ā_i}. Denoting by {T̄_k} the corresponding renewal sequence, for k ≥ 1 note that T̄_k has a binomial distribution with parameters k and p. By the monotone convergence theorem, denoting [t] = integral part of t,
E(N̄_0(t)) = 1 + Σ_{k=1}^{∞} P(T̄_k ≤ [t])   (∵ T̄_0 ≡ 0, T̄_k integer valued)
= 1 + Σ_{k=1}^{∞} (1 − p)^k + Σ_{k=1}^{∞} Σ_{j=1}^{[t]} P(T̄_k = j)
= 1/p + Σ_{j=1}^{[t]} Σ_{k=j}^{∞} P(T̄_k = j)   (because P(T̄_k = j) = 0 if k < j)
= 1/p + Σ_{j=1}^{[t]} (p^j / j!) Σ_{ℓ=0}^{∞} (ℓ + 1)(ℓ + 2) ⋯ (ℓ + j)(1 − p)^ℓ
≤ 1/p + Σ_{j=1}^{[t]} (p^j / j!) [ j! + 2^j j! Σ_{ℓ=1}^{∞} ℓ^j (1 − p)^ℓ ] < ∞

by the ratio test, as j varies only over a finite set. □
Here is the first asymptotic result.
Theorem 3.4 (SLLN for renewal processes). In addition to the hypothesis of Theorem 3.3, let E(A_1) = 1/λ < ∞. Then

lim_{t→∞} (1/t) N(t) = λ,  a.s.   (3.4)
We need a lemma.
with probability one. Now E(A_i) < ∞ for all i implies P(T_n < ∞) = 1 for every n. It then follows that P( lim_{t→∞} N(t) = ∞ ) = 1. Observe that

for all sufficiently large t, with probability one. By Lemma 3.5 note that P( lim_{n→∞} T_n / n = λ^{−1} ) = 1. The required conclusion (3.4) is now clear from (3.6). □
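The SLLN is easily visible numerically even for markedly non-exponential interarrival times. A sketch with Uniform(0, 2) interarrivals (mean 1, so λ = 1; the distribution is our illustrative choice):

```python
import random

rng = random.Random(5)
t = 50000.0
s, n = rng.uniform(0.0, 2.0), 0
while s <= t:
    n += 1
    s += rng.uniform(0.0, 2.0)     # interarrival ~ Uniform(0, 2), mean 1
ratio = n / t                       # SLLN: ratio -> 1 / E(A_1) = 1
```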
Theorem 3.4 suggests that E(N(t)) will be asymptotically linear.
Indeed we have
So it remains to establish

limsup_{t→∞} (1/t) E(N(t)) ≤ λ.   (3.9)
For this we consider truncated interarrival times. For b > 0 set A_i^{(b)} = min{A_i, b}, i ≥ 1, and let N^{(b)}(·), {T_n^{(b)}} denote the corresponding renewal process and renewal sequence. Assuming the identity

E[ T^{(b)}_{N^{(b)}(t)+1} ] = E[N^{(b)}(t) + 1] · E[A_1^{(b)}],  t > 0,   (3.11)

we shall proceed with the proof; (3.11) will be proved later. By (3.11) we get

limsup_{t→∞} (1/t) E(N(t)) ≤ 1 / E[A_1^{(b)}],  for any b > 0.   (3.12)
and

(In (3.13) and elsewhere, for a random variable Y and a measurable set B, E[Y : B] = ∫_B Y dP = E[1_B · Y].) Note that P(τ < ∞) = 1 and so T^{(b)}_τ is well defined. By Theorem 3.3, E(τ) < ∞ and hence
Theorem 3.7 Assume that 0 < Var(A_1) < ∞. Denote E(A_1) = 1/λ, c = λ^3 Var(A_1). Then for every x ∈ ℝ

lim_{t→∞} P( (N(t) − λt) / √(ct) ≤ x ) = Φ(x),   (3.14)

where Φ(·) is the distribution function of the standard normal distribution. Moreover the convergence in (3.14) is uniform over x ∈ ℝ.
lim_{n→∞} sup_{x∈ℝ} | P( (T_n − nλ^{−1}) / √(n Var(A_1)) ≤ x ) − Φ(x) | = 0.   (3.15)

With m(t) = [λt + x√(ct)],

P( (N(t) − λt) / √(ct) ≤ x ) = P( N(t) ≤ x√(ct) + λt ) = P( N(t) < m(t) + 1 ) = P( T_{m(t)+1} > t )
= P( (T_{m(t)+1} − λ^{−1}(m(t) + 1)) / √((m(t) + 1) Var(A_1)) > (t − λ^{−1}(m(t) + 1)) / √((m(t) + 1) Var(A_1)) )

holds for each x ∈ ℝ. Since the convergence in (3.15) is uniform, and 1 − Φ(−x) = Φ(x), note that (3.16) now implies (3.14) pointwise. □
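A quick simulation check of (3.14), again with Uniform(0, 2) interarrivals so that λ = 1 and c = λ³ Var(A_1) = 1/3 (the parameter values are illustrative):

```python
import math, random

rng = random.Random(11)
lam, varA = 1.0, 1.0 / 3.0             # Uniform(0, 2): mean 1, variance 1/3
c = lam ** 3 * varA
t, m = 400.0, 10000
vals = []
for _ in range(m):
    s, n = rng.uniform(0.0, 2.0), 0
    while s <= t:
        n += 1
        s += rng.uniform(0.0, 2.0)
    vals.append((n - lam * t) / math.sqrt(c * t))
frac = sum(v <= 1.0 for v in vals) / m  # compare with Phi(1) ~ 0.8413
```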
3.2 Renewal function

f̂_F(s) := ∫_0^∞ e^{−sx} dF(x),  s ≥ 0.   (3.19)
F(x) = lim_{s→∞} Σ_{n ≤ sx} ((−1)^n / n!) s^n f̂_F^{(n)}(s).   (3.20)

lim_{s→∞} Σ_{k ≤ sz} e^{−sx} (sx)^k / k! = { 0,  if x > z,
                                             1/2,  if x = z,   (3.21)
                                             1,  if x < z. }

... = lim_{s→∞} Φ( ((z − x)/√x) √s ) = r.h.s. of (3.21). □
Proof of theorem: As F is the distribution function of a probability measure, by repeated application of the dominated convergence theorem,

Σ_{n ≤ sz} ((−1)^n / n!) s^n f̂_F^{(n)}(s) = ∫_0^∞ Σ_{n ≤ sz} (e^{−sx} (sx)^n / n!) dF(x).
Theorem 3.12 Let the hypothesis be as in Theorem 3.3. Then the renewal function H is a right continuous nondecreasing function on ℝ such that H(0−) = 0, 0 ≤ H(t) < ∞, t ≥ 0, and hence μ_H is a σ-finite measure supported on [0, ∞). Moreover

H = Σ_{n=0}^{∞} F^{*(n)},  that is,  μ_H = Σ_{n=0}^{∞} μ_{F^{*(n)}}.   (3.24)
Theorem 3.13 Let the hypothesis be as in Theorem 3.3. Then the Laplace-Stieltjes transform f̂_H of H is well defined on [0, ∞). Moreover f̂_H(s) > 1 for s > 0, and

f̂_H(s) = 1 / (1 − f̂_F(s)),  s > 0.   (3.25)

Consequently

f̂_F(s) = 1 − 1 / f̂_H(s),  s > 0.   (3.26)
Proof: Note that f̂_F(s) = E(e^{−s A_1}). As {A_i} is an i.i.d. sequence, clearly f̂_{F^{*(n)}}(s) = E[e^{−s T_n}] = [f̂_F(s)]^n for all n. Hence for s > 0, by Theorem 3.12,

f̂_H(s) = Σ_{n=0}^{∞} ∫_0^∞ e^{−sx} dF^{*(n)}(x) = 1 / (1 − f̂_F(s)).
H(t) = Σ_{n=0}^{∞} F^{*(n)}(t) = 1_{[0,∞)}(t) + Σ_{n=1}^{∞} ∫_0^t F^{*(n−1)}(t − s) dF(s)
= 1_{[0,∞)}(t) + ∫_0^t Σ_{k=0}^{∞} F^{*(k)}(t − s) dF(s)
= 1_{[0,∞)}(t) + ∫_0^t H(t − s) dF(s).
40 3. Renewal model
Thus we obtain

Theorem 3.14 With the hypothesis as in Theorem 3.3, the renewal function satisfies the renewal equation

H(t) = 1_{[0,∞)}(t) + ∫_0^t H(t − s) dF(s),  t ≥ 0.

More generally, for a locally bounded measurable function u supported on [0, ∞), the renewal equation with forcing term u is

U(t) = u(t) + ∫_0^t U(t − s) dF(s),  t ≥ 0,   (3.28)

and a solution is given by

U(t) = ∫_0^t u(t − s) dμ_H(s),  t ≥ 0.   (3.29)
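The renewal equation can be solved numerically by discretizing the convolution. The sketch below (our discretization, not the book's) does this for exponential interarrivals, where the answer is known in closed form: N is then Poisson, so H(t) = E N(t) + 1 = λt + 1.

```python
import math

lam, T, h = 2.0, 5.0, 0.01
steps = int(T / h)
f = [lam * math.exp(-lam * j * h) for j in range(steps + 1)]  # Exp(lam) density
H = [1.0] * (steps + 1)                 # H(0) = 1
for i in range(1, steps + 1):
    # trapezoidal discretization of H(t) = 1 + int_0^t H(t - s) f(s) ds,
    # solved for the implicit endpoint term H[i] * f[0]
    inner = sum(H[i - j] * f[j] for j in range(1, i)) + 0.5 * H[0] * f[i]
    H[i] = (1.0 + h * inner) / (1.0 - 0.5 * h * f[0])
```

The grid value H[-1] approximates H(5) = 2·5 + 1 = 11.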
Proof: As μ_F ≠ δ_0, by Theorems 3.3 and 3.12, H and μ_H are well defined. As μ_H([0, s]) < ∞ for any 0 ≤ s < ∞, for any locally bounded measurable function α supported on [0, ∞) note that t ↦ ∫_0^t α(t − s) dμ_H(s) is a well defined locally bounded measurable function supported on [0, ∞). Moreover
∫_0^t α(t − s) dμ_H(s) = Σ_{n=0}^{∞} ∫_0^t α(t − s) dμ_{F^{*(n)}}(s),  t ≥ 0.
Thus U(·) given by (3.29) is a well defined locally bounded function supported on [0, ∞). Also, by Exercise 3.2.1 (see below), note that

U(t) = ∫_0^t u(t − s) dμ_H(s) = u(t) + Σ_{k=1}^{∞} ∫_0^t u(t − s) dμ_{F^{*(k)}}(s)
= u(t) + Σ_{j=0}^{∞} ∫_0^t u(t − s) d(μ_{F^{*(j)}} * μ_F)(s)   (3.30)
= u(t) + Σ_{j=0}^{∞} ∫_0^t [ ∫_0^{t−r} u(t − r − s) dμ_{F^{*(j)}}(s) ] dμ_F(r)
= u(t) + ∫_0^t U(t − r) dμ_F(r).   (3.31)
N^{(D)}(t) = Σ_{i=1}^{∞} 1_{[0,t]}(T_i^{(D)}).

Then, imitating the proofs of Theorems 3.12, 3.13, one can get the following.

Theorem 3.18 Let the hypotheses of Theorem 3.3 and Definition 3.17 hold. Then

H^{(D)}(t) = G(t) + ∫_0^t H^{(D)}(t − s) dF(s),  t ≥ 0.   (3.33) □
lim_{t→∞} [H(t + h) − H(t)] = lim_{t→∞} E[N(t + h) − N(t)] = λh.   (3.35)
lim_{t→∞} U(t) = λ ∫_0^∞ u(s) ds   (3.36)
3.2.4 we know that g(·) is the solution to (3.28) with u(t) = 1_{[0,h)}(t). In this case (3.35) can be written as

lim_{t→∞} g(t) = λ ∫_0^∞ 1_{[0,h)}(s) ds = λh.
In the Poisson process model, and in the delayed renewal model given in Proposition 3.19, note that (3.35) holds exactly for all t.

Having guessed the asymptotic behaviour of the solution to the renewal equation, we need the following definition to carry it through for a large class of functions.
[0, ∞) be nonincreasing such that ∫_0^∞ u_1(x) u_2(x) dx < ∞ and lim_{h→0} c(h) = 1, where

Note that u_1((n − 2)h) u_2((n − 1)h) ≥ u_1(x) u_2(x) for (n − 2)h ≤ x ≤ (n − 1)h. Therefore

Similarly u̲(h) ≥ c_1(h) ∫_h^∞ u_1(x) u_2(x) dx. Since c(h) → 1 as h → 0, the required conclusion now follows. □
ū(t) = Σ_{n=1}^{∞} m̄_n 1_{[(n−1)h, nh)}(t),   u̲(t) = Σ_{n=1}^{∞} m̲_n 1_{[(n−1)h, nh)}(t),

where m̄_n and m̲_n denote respectively the supremum and infimum of u over [(n − 1)h, nh).
In a similar fashion,

lim_{t→∞} ∫_0^t u̲(t − r) dH(r) = λh Σ_{n=1}^{∞} m̲_n.
lim_{h↓0} h Σ_n m̄_n = lim_{h↓0} h Σ_n m̲_n = ∫_0^∞ u(s) ds.   (3.41)
for any t > 0, h > 0. As μ_F ≠ δ_0 we can choose h > 0 such that (1 − F(h)) > 0. Consequently the above implies

Since any finite interval is a finite union of intervals of length ≤ h, the claim (3.42) now follows.
By (3.42) and the Helly selection principle outlined in Exercise 3.3.2 there is a sequence t_k → ∞ and a σ-finite measure ν on ℝ such that ν_{t_k} → ν vaguely. Note that ν need not a priori be supported on [0, ∞). (As ν_t is supported on [−t, ∞) we should expect ν to be supported on ℝ, and it turns out to be so.)

Now let u be a continuous function supported on [0, a] for some a > 0. Let U be the corresponding solution to the renewal equation (3.28). By (3.29), for t_k > a note that
U(x + t_k) = ∫_0^{x + t_k} u(x + t_k − r) dH(r) = ∫_{−∞}^{∞} u(x − s) dν_{t_k}(s),  if x ≥ 0.

If x < 0 then the above equality holds for t_k > a + |x|. As ν_{t_k} → ν vaguely, it now follows by Exercise 3.3.1(a) that
Writing ξ(x) for lim_{t_k→∞} U(x + t_k),

lim_{t_k→∞} ∫_0^{x + t_k} U(x + t_k − y) dF(y)   (∵ u is supported on [0, a])
= lim_{t_k→∞} ∫_{−∞}^{∞} U(x + t_k − y) dF(y) = ∫_{−∞}^{∞} ξ(x − y) dF(y).
Thus

ξ(x) = ∫_{−∞}^{∞} ξ(x − y) dF(y),  x ∈ ℝ.   (3.45)
lim_{t→∞} U(t) = a ∫_0^∞ u(s) ds   (3.46)

for any directly Riemann integrable function u(·), where U(·) stands for the corresponding solution to the renewal equation (3.28). In particular take u(t) = 1 − F(t), t ≥ 0. By Lemma 3.25(vi), as F has finite expectation, we have r.h.s. of (3.46) = a λ^{−1}. Since l.h.s. of (3.46) in this case is 1, we get a = λ, completing the proof. □
As an application of the key renewal theorem we shall now prove a limit theorem for the residual waiting time. From the definition of N(·) we know that for every t ≥ 0

T_{N(t)} ≤ t < T_{N(t)+1}.   (3.47)

For ξ > 0 let Q(t, ξ) = P(T_{N(t)+1} − t ≤ ξ), the distribution function of the residual waiting time. Then

lim_{t→∞} Q(t, ξ) = λ ∫_0^ξ (1 − F(s)) ds = F_I(ξ).   (3.49)

That is, the limit distribution of the residual waiting time is the same as the integrated tail distribution given by (3.34).
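A simulation check of this limit, with Uniform(0, 2) interarrivals (mean 1, so λ = 1; our illustrative choice). The integrated tail value at ξ = 1 is F_I(1) = ∫_0^1 (1 − s/2) ds = 0.75:

```python
import random

rng = random.Random(13)
t, m = 1000.0, 4000
hits = 0
for _ in range(m):
    s = rng.uniform(0.0, 2.0)
    while s <= t:
        s += rng.uniform(0.0, 2.0)
    residual = s - t                   # T_{N(t)+1} - t
    hits += residual <= 1.0
frac = hits / m                        # compare with F_I(1) = 0.75
```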
Q(t, ξ) = Σ_{k=0}^{∞} ∫_0^t ∫_{t−x}^{t+ξ−x} dF(y) dF^{*(k)}(x).
Theorem 3.28 Let the notation be as in Definition 3.17, and let the hypotheses of Theorem 3.21 hold. For t ≥ 0, denote by Q^{(D)}(t, ξ) the distribution function of the corresponding residual waiting time.   (3.52)

Then

Q^{(D)}(t, ξ) = G(t + ξ) − G(t) + Σ_{k=0}^{∞} …   (3.53)

and

lim_{t→∞} ∫_0^t [F(t − x + ξ) − F(t − x)] d(G * H)(x) = F_I(ξ).   (3.54)
∫_0^{t+ξ} (1 − F(y)) dy − ∫_0^t (1 − F(y)) dy   (3.55)
Theorem 3.31 Consider the Sparre Andersen model with the interarrival times {A_i} and the claim sizes {X_j} having finite expectation. Then

(i) lim_{t→∞} (1/t) S(t) = E(X_1)/E(A_1)  a.s.;   (3.57)

(ii) lim_{t→∞} (1/t) E(S(t)) = E(X_1)/E(A_1);   (3.58)
as t → ∞.

(1/(σ√(λt))) [S(t) − λμt]
= (1/(σ√(λt))) [S(t) − λμ T_{N(t)}] + (λμ/(σ√(λt))) (T_{N(t)} − t)
= (1/(σ√(λt))) Σ_{i=1}^{N(t)} (X_i − λμ A_i) + (λμ/(σ√(λt))) (T_{N(t)} − t)
= (1/(σ√(λt))) Σ_{i=1}^{N(t)} Y_i + (λμ/(σ√(λt))) (T_{N(t)} − t)
= ([λt]/λt)^{1/2} (1/√([λt])) Z_{[λt]} + (1/√(λt)) (Z_{N(t)} − Z_{[λt]}) + (λμ/(σ√(λt))) (T_{N(t)} − t),   (3.61)

where Y_i = X_i − λμ A_i and Z_n = (1/σ) Σ_{i=1}^{n} Y_i.
Let ε > 0 be arbitrary; put C_ε = { (1/√(λt)) |Z_{N(t)} − Z_{[λt]}| > ε }. Let δ > 0, put B_1 = C_ε ∩ {|N(t) − [λt]| > δλt} and B_2 = C_ε ∩ {|N(t) − [λt]| ≤ δλt}. By Kolmogorov's inequality,

P(B_2) ≤ K δ / ε²,

where K is a positive constant independent of t, ε, δ. For fixed ε > 0, first choose δ > 0 such that P(B_2) is very small for all t. Then choose t large enough so that P(B_1) is small. Thus P(C_ε) → 0 as t → ∞ for any ε > 0. This proves (3.62).
Next for ε > 0,

P(t − T_{N(t)} > ε√t) = Σ_{k=0}^{∞} P(t − T_k > ε√t | N(t) = k) · P(N(t) = k)
≤ Σ_{k=0}^{∞} P(A_{k+1} > ε√t) P(N(t) = k) = P(A_1 > ε√t) → 0 as t → ∞
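The law of large numbers (3.57) is cheap to check numerically; the sketch below uses Uniform(0, 2) interarrivals and exponential claims of mean 2 (illustrative choices), so S(t)/t should approach E(X_1)/E(A_1) = 2:

```python
import random

rng = random.Random(17)
t = 200000.0
arrival, total = rng.uniform(0.0, 2.0), 0.0
while arrival <= t:
    total += rng.expovariate(0.5)       # claim size ~ Exp, mean 2
    arrival += rng.uniform(0.0, 2.0)    # interarrival ~ Uniform(0, 2), mean 1
ratio = total / t                       # -> E(X_1)/E(A_1) = 2
```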
the company loses on the average; similarly, if p(t) > E(S(t)) for large t, then the company may gain on the average. With this perspective, the following premium principles are in vogue; these are also perhaps the most elementary to implement.
Hence

p_{EV}(t) = (1 + ρ) (E(X_1) / E(A_1)) t,  t ≥ 0.   (3.65)
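The expected value principle (3.65) is a one-liner in code; the numbers below are purely illustrative:

```python
def expected_value_premium(mean_claim, mean_interarrival, loading, t):
    """p_EV(t) = (1 + rho) * (E(X_1) / E(A_1)) * t, cf. (3.65)."""
    return (1.0 + loading) * mean_claim / mean_interarrival * t

# Illustrative numbers: mean claim 1000, mean interarrival 0.5,
# safety loading rho = 0.2, horizon t = 1.
p = expected_value_premium(1000.0, 0.5, 0.2, 1.0)   # 2400.0
```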
(3.69)
section of the population is willing to pay a price (for being insured) that is greater than the expected claim amount. In economics, an explanation for such apparently anomalous situations is given in terms of the so-called utility functions. This postulates the existence of a utility function which determines decisions, in our case, collectively for the insured.
Let I ⊆ ℝ be an interval; it need not be a finite interval. A function v : I → ℝ is called a utility function if it is nondecreasing and concave. To each wealth x ∈ I a utility v(x) is attached; v(x) can be different from x. Since utility may be assumed to increase with wealth, v(·) is assumed to be nondecreasing; this is also phrased as "marginal utility is nonnegative", that is, v′(x) ≥ 0 if v′(·) makes sense. Next, note that giving Re. 1 to a poor person makes more sense than giving it to a millionaire. So increments of v(x) for small values of x should be more than those for large values of x; that is, if x_1 < x_2, h > 0 then v(x_1 + h) − v(x_1) ≥ v(x_2 + h) − v(x_2); again, this may be termed "decreasing marginal utility", or v″(x) ≤ 0 whenever v″(·) makes sense. Thus v(·) should be concave.
The concavity assumption is also known as risk aversion. A decision maker is risk averse if a fixed loss is preferred over a random loss that has the same expected value. If we consider a random loss Y, then the expected utility is E[v(−Y)]. By Jensen's inequality E[v(−Y)] ≤ v(E(−Y)), as v is concave. So the fixed loss E(Y) is given greater utility than the expected utility of the random loss. Risk averse decision makers are generally considered 'reasonable' decision makers.
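Jensen's inequality holds for the empirical distribution of any sample as well, which makes risk aversion easy to illustrate numerically (exponential utility with a = 1; the loss distribution below is our choice):

```python
import math, random

def v(x):
    """Exponential utility with a = 1: nondecreasing and concave."""
    return -math.exp(-x)

rng = random.Random(23)
losses = [rng.expovariate(2.0) for _ in range(100000)]   # random loss Y, mean 0.5
expected_utility = sum(v(-y) for y in losses) / len(losses)
utility_of_fixed_loss = v(-sum(losses) / len(losses))
# Jensen: E[v(-Y)] <= v(-E(Y)) -- the fixed loss is preferred.
```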
We assume that there is a utility function v collectively for the potential buyers of an insurance policy. For the insured, the loss is S(t) in the absence of insurance (if the initial wealth is x then it gets depleted to x − S(t) if there is no insurance; for simplicity we take x = 0). So in the absence of insurance the expected utility is E[v(−S(t))]. If insurance cover is taken, the utility at time t is similarly v(−p(t)), where p(t) denotes the premium paid up to t. Therefore insurance will be taken by a potential buyer only if
A deal improving the expected utility for both sides will be possible if p^+(·) ≥ p^−(·); this is a win-win situation.
Some useful utility functions are:

(i) linear utility: v(x) = x;

(ii) quadratic utility: v(x) = { α² − (α − x)², x ≤ α; α², x ≥ α }, where α > 0 is a constant;

(iii) exponential utility: v(x) = −a e^{−ax}, where a > 0.
The exponential premium principle (3.70) can be justified as the zero
utility premium p⁺(·) corresponding to the exponential utility function.
Though the exponential principle is very useful and has nice theoretical
features (see Exercise 3.4.8), as existence of the moment generating function
of S(·) is assumed, it is not applicable in heavy tailed cases.
For more information on premium principles, utility functions, etc.
see [RSST], [KGDD].
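The zero utility premium under exponential utility admits a closed form in the compound Poisson setting: solving v(−p) = E[v(−S(t))] with v(x) = −a e^{−ax} gives p = (1/a) log E[e^{aS(t)}] = (λt/a)(m_X(a) − 1). A minimal sketch; the parameter values and helper names are illustrative, not from the text:

```python
def exp_premium(a, lam, t, mgf_claim):
    # zero-utility premium under exponential utility v(x) = -a*exp(-a*x):
    # v(-p) = E[v(-S(t))] gives p = (1/a) * log E[exp(a*S(t))]; for a
    # compound Poisson S(t) this equals (lam*t/a) * (m_X(a) - 1)
    return (lam * t / a) * (mgf_claim(a) - 1.0)

beta = 2.0                          # Exp(beta) claim sizes (illustrative)
lam, t, a = 1.0, 10.0, 0.5          # arrival rate, horizon, risk aversion
mgf = lambda s: beta / (beta - s)   # m.g.f. of Exp(beta), valid for s < beta

p = exp_premium(a, lam, t, mgf)
mean_total = lam * t / beta         # E[S(t)] = lam * t * E[X_1]
print(p, mean_total)
```

Note that p exceeds the net premium E[S(t)], i.e. the principle has a positive loading, as expected from risk aversion.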
3.5 Exercises
3.1.1. Suppose the i.i.d. interarrival times {Aᵢ} satisfy P(Aᵢ > 0) =
1. For any t > 0 show that P(N(t) − N(t−) > 1) = 0.
3.1.2. Let the interarrival times {Aᵢ} satisfy (3.2). Assume that
P(Aᵢ = ∞) = 0; that is, the distribution of Aᵢ is nondefective.
For i ≥ 1 set

Yᵢ = I_{[i,∞)}(N(t) + 1) = { 1, if i ≤ N(t) + 1;  0, if i > N(t) + 1 }.

Then

E[ Σ_{i=1}^{N(t)+1} g(Aᵢ) ] = Σ_{i=1}^{∞} E[g(Aᵢ) Yᵢ] = Σ_{i=1}^{∞} E[g(A₁)] P(N(t) + 1 ≥ i) = E[g(A₁)] · E[N(t) + 1],

which is Wald's identity.
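Wald's identity can be checked by simulation: stopping a renewal sequence at the first arrival after time t, the mean of T_{N(t)+1} should equal E[A₁] · E[N(t) + 1]. A rough Monte Carlo sketch with Exp(1) interarrivals (the setup and tolerance are my own choices):

```python
import random
random.seed(0)

t, n_sims = 5.0, 20000
sum_T, sum_N1 = 0.0, 0.0
for _ in range(n_sims):
    s, n = 0.0, 0
    while s <= t:                    # stop at the first arrival after t,
        s += random.expovariate(1.0) # i.e. arrival number N(t) + 1
        n += 1
    sum_T += s                       # T_{N(t)+1} = sum of first N(t)+1 A_i
    sum_N1 += n                      # N(t) + 1
lhs = sum_T / n_sims                 # estimates E[ sum_{i<=N(t)+1} A_i ]
rhs = 1.0 * sum_N1 / n_sims          # E[A_1] * E[N(t)+1], with E[A_1] = 1
print(lhs, rhs)
```

The two averages agree up to Monte Carlo error, as Wald's identity predicts.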
3.1.5. Let {Aᵢ}, {Tₙ}, {Fₙ} be as in Exercise 3.1.3. Assume that
E(Aᵢ) < ∞. Set Ãᵢ = Aᵢ − E(Aᵢ), i ≥ 1, and T̃ₙ = Ã₁ + ⋯ + Ãₙ =
Tₙ − nE(A₁), n ≥ 1; of course T̃₀ ≡ 0. Show that {T̃ₙ : n = 0, 1, 2, ...}
is a martingale w.r.t. {Fₙ}; that is, T̃ₙ is integrable and E[T̃ₙ₊₁ | Fₙ] =
T̃ₙ, n ≥ 1.
3.1.6. Let c > 0 be a constant such that the last equality in (3.16)
holds. Show that c = λ³ Var(A₁). This observation is due to Srikant
Iyer and Adimurthy. (Note: If E(N(t)) is asymptotically λt and 0 <
Var A₁ < ∞ one hopes for a CLT for N(·). Exercise 2.1.5 suggests
3.2.1. Let μ, ν be finite measures on ℝ. Let the convolution μ * ν
be defined by (μ * ν)(E) = ∫ μ(E − x) dν(x), for any Borel set E. For
any Borel measurable function g on ℝ show that

∫ g(z) d(μ * ν)(z) = ∫∫ g(z + x) dμ(z) dν(x).

Let the first interarrival time A₁ have distribution function G, and let
the interarrival times Aᵢ, i ≥ 2, have finite expectation λ⁻¹. If H⁽ᴰ⁾(t) =
λt, t ≥ 0, show that G(t) = F_I(t) given by (3.34).
3.3.2. Suppose {μₙ} is a sequence of σ-finite measures on ℝ such
that {μₖ([−x, x]) : k = 1, 2, ...} is a bounded sequence of numbers
for each x ∈ ℝ. Show that there exist a subsequence {μₙₖ} and a
σ-finite measure μ such that μₙₖ → μ in the sense of Definition 3.22.
(Hint: Imitate the proof of the Helly selection principle involving Cantor's
diagonalization argument for every [−K, K]. Take further subsequences.
See [Fe].)
3.3.3. (a) Let u(·) be a nonnegative continuous function on [0, ∞).
Let aₙ = sup{u(x) : n − 1 ≤ x ≤ n}, n = 1, 2, .... Show that u(·) is
directly Riemann integrable if and only if Σₙ aₙ < ∞.
(b) Show that the assertion in (a) holds even if u(·) is a nonnegative
right continuous function having left limit at every t.
3.3.4. (a) Let u(·) be directly Riemann integrable. Show that u(·)
is integrable and lim_{x→+∞} u(x) = 0. What is lim_{h↓0} u(h)? (Hint: Take h =
2⁻ᵏ, k = 1, 2, ....)
(b) Show by an example that the converse of the above is not true
in general.
3.3.5. Show that a nonincreasing, nonnegative function on [0, ∞) is
directly Riemann integrable if and only if it is integrable.
3.3.6. (a) Let u(·) be directly Riemann integrable. Show that
u⁺(·), u⁻(·), |u| are directly Riemann integrable. (Recall u⁺(x) =
max{0, u(x)}, u⁻(x) = max{0, −u(x)}.)
∫₀^∞ (1 − F(r)) dr.

H⁽ᴰ⁾(t + h) − H⁽ᴰ⁾(t) = ∫₀^{t+h} [H(t + h − s) − H(t − s)] dG(s).

Applying the key renewal theorem show that lim_{t→∞} J(t) = F_I(0). Thus
we have another way of proving the result in (a).
3.3.10. Let M ≥ 1 be an integer. Let {Aᵢ : i ≥ 1} be a sequence of
independent nonnegative random variables such that Aᵢ has distribution
function Gᵢ, i = 1, 2, ..., M, and each Aⱼ, j ≥ M + 1 has the distribution
function F. Define T₀ ≡ 0, Tₙ = A₁ + A₂ + ⋯ + Aₙ, n ≥ 1. {Tₙ} may be
called the M-delayed renewal sequence. Let Ñ denote the associated M-delayed
renewal process, H̃ the M-delayed renewal function associated
with Ñ.
(a) Find the renewal equation satisfied by the function
L(t) = H̃(t) − Σ_{ℓ=1}^{M−1} (G₁ * ⋯ * G_ℓ)(t), t ≥ 0. Show that L is the
renewal function associated with a delayed renewal process.
(b) Find the analogue of (3.53) for Ñ.
(c) Show that analogues of Theorems 3.6, 3.21 and 3.28 hold for the
process Ñ.
3.4.1. (a) In the Sparre Andersen model assume that E(X₁) < ∞.
Show that E(S(t)) exists and E(S(t)) = E(X₁)E(N(t)).
(b) Complete the proof of Theorem 3.31.
3.4.2. Assume that E(X₁²) < ∞ in the Sparre Andersen model.
Show that

v(x) = V(x) = −a e^{−ax}, x ∈ ℝ,

where a > 0 is the risk aversion factor. Formulate the utility equilibrium
equations for the insurer and the insured respectively. Show that
p⁺(t) = p⁻(t) = p_Exp(t) given by (3.70). This gives a justification for
the exponential premium principle.
Proposition 4.1 If s_F > 0 then there exist C > 0, 0 < λ < s_F such
that

F̄(x) ≤ C e^{−λx}, x ≥ 0. (4.3)

Conversely, if (4.3) holds for some C > 0, λ > 0 then m_F(s) < ∞ for
all s < λ, and hence s_F > 0.
70 4. Claim size distributions
Proof: Let s_F > 0. So there is λ > 0 such that m_F(λ) < ∞. Now
for any x ≥ 0, by Markov's inequality, F̄(x) = P(X > x) ≤ e^{−λx} E(e^{λX}) =
m_F(λ) e^{−λx}, proving (4.3).
The above is generally regarded as the proper notion of a well-behaved
distribution.
The following are examples of light-tailed distributions:
(i) Exponential distribution, Exp(λ), with parameter λ > 0.
4.1. Light-tailed distributions 71
(iv) Truncated standard normal, that is, F(x) = P(|N(0, 1)| ≤ x), x ∈
ℝ. Note that F(x) = 2Φ(x) − 1, x ≥ 0, where Φ is the distribution
function of the standard normal distribution.
To any light tailed distribution a useful class of associated distributions
can be defined. Let F be a light tailed distribution. Then s_F > 0.
Let t < s_F. Define

F_t(x) = { (1/m_F(t)) ∫_{[0,x]} e^{ty} dF(y), if x ≥ 0;  0, if x < 0. (4.5)
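For a concrete instance of (4.5): tilting F = Exp(λ) by t < λ = s_F yields the density (1/m_F(t)) e^{ty} λe^{−λy} = (λ − t)e^{−(λ−t)y}, i.e. F_t = Exp(λ − t). A small numerical sketch of this (midpoint-rule integration; the names and parameter values are mine):

```python
import math

lam, t = 2.0, 0.5          # tilt parameter t < s_F = lam (illustrative)
mF = lam / (lam - t)       # moment generating function of Exp(lam) at t

def F_tilted(x, n=20000):
    # numerical version of (4.5): (1/m_F(t)) * int_0^x e^{t y} dF(y)
    h = x / n
    s = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        s += math.exp(t * y) * lam * math.exp(-lam * y) * h
    return s / mF

x = 1.0
exact = 1.0 - math.exp(-(lam - t) * x)   # Exp(lam - t) distribution function
print(F_tilted(x), exact)
```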
(4.6)
Proof: Note that s_F > 0. By Exercise 4.1.4 below we know that the
moment generating function of F_t exists in a neighbourhood of 0 and
that for 0 < s < s_F − t, n = 1, 2, ...

lim_{x→∞} L(cx)/L(x) = 1, for all c > 0. (4.8)

lim_{x→∞} ∫_x^{ax} (ε(t)/t) dt = 0, if a > 1, (4.10)

lim_{x→∞} ∫_{ax}^{x} (ε(t)/t) dt = 0, if a < 1. (4.11)

Let a > 1. For δ > 0 there exists t₀ > 0 such that |ε(t)| < δ for all
t ≥ t₀. Without loss of generality t₀ < x < ax. Then

|∫_x^{ax} (ε(t)/t) dt| ≤ δ ∫_x^{ax} (1/t) dt = δ log a,

and (4.10) follows since δ > 0 is arbitrary.
Proposition 4.6 Let L be a slowly varying function of the form (4.9).
Then for any a > 0

lim_{x→∞} x^{−a} L(x) = 0,  lim_{x→∞} x^{a} L(x) = ∞. (4.12)

Proof: Let a > 0 be fixed. For any η ∈ (0, a) there exists t₀ > 0
such that −η < ε(t) < η for all t ≥ t₀. Then for x > t₀

exp(∫_{t₀}^{x} (ε(t)/t) dt) ≤ exp(η log(x/t₀)) = (x/t₀)^η.

So, as c(t) → c > 0, we now get x^{−a} L(x) ≤ C t₀^η x^{−(a−η)} for x > t₀,
where C > 0 is a constant. As a > η the first assertion now follows.
In a similar fashion, for x > t₀ one can show that x^{a} L(x) ≥ C′ t₀^{−η} x^{a−η};
the second assertion follows now since a > η. □
Our next objective is to show that (4.9) gives all slowly varying
functions. For this we need a very basic result due to Karamata.
lim_{x→∞} [h(x + u) − h(x)] = 0, for any u ∈ ℝ. (4.14)
Next let u ∈ [0, A]. For x ≥ x₀, by (4.15), |h(x + u) − h(x)| ≤ 1.
Hence |h(x)| ≤ 1 + |h(A)| on [A, A + 1]. By induction |h(x)| ≤ n + |h(A)|
on [A, A + n], for n = 1, 2, .... The required conclusion is now immediate.
□
o
Theorem 4.9 (Karamata representation theorem): Let L be a positive
measurable function on (0, ∞). Then L is slowly varying at ∞ if and
only if it can be written in the form (4.9) as in Proposition 4.5.
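Slow variation is easy to probe numerically: L(cx)/L(x) should approach 1 as x grows for slowly varying L, but not for a genuine power. A minimal sketch (the sample functions and thresholds are my own choices):

```python
import math

def ratio(L, c, x):
    # the defining ratio of slow variation: L(cx)/L(x) -> 1 as x -> infinity
    return L(c * x) / L(x)

log_like = lambda x: math.log(1.0 + x)   # slowly varying at infinity
power = lambda x: x ** 0.3               # regularly varying, NOT slowly varying

for x in (1e3, 1e6, 1e9):
    print(ratio(log_like, 2.0, x), ratio(power, 2.0, x))
```

The first column drifts toward 1 (slowly — the convergence is logarithmic), while the second stays pinned at 2^{0.3}.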
h(x) = d(x) + ∫_b^x e(t) dt, x ≥ b, (4.16)

where d, e are measurable functions on ℝ such that d(x) → d ∈ ℝ, e(x) →
0 as x → ∞, and b ∈ ℝ; to see this take d(x) = log c(eˣ), e(v) =
ε(eᵛ), v ∈ ℝ, b = log x₀. Therefore to prove the result we need to show
that (4.14) in the proof of Theorem 4.7 implies (4.16).
Note that for x ≥ b

h(x) = ∫_0^1 (h(x) − h(x + s)) ds + ∫_b^x (h(t + 1) − h(t)) dt + ∫_b^{b+1} h(t) dt. (4.17)

The last term on the r.h.s. of (4.17) is a constant. Put e(x) =
h(x + 1) − h(x); then e(x) → 0 as x → ∞ by (4.14). The first term
on the r.h.s. of (4.17) = ∫_0^1 (h(x) − h(x + s)) ds → 0 as x → ∞ by the
uniform convergence theorem (Theorem 4.7). Thus (4.16) holds with

d(x) = ∫_b^{b+1} h(t) dt + ∫_0^1 (h(x) − h(x + s)) ds,  e(x) = h(x + 1) − h(x). □
Proposition 4.6 leads naturally to the following definition, again due
to Karamata. This also plays an important role in the central limit
problem; see Appendix A.2.
lim_{x→∞} f(λx)/(λᵃ f(x)) = 1, uniformly over λ ∈ [A, ∞). (4.19)
4.2. Regularly varying functions and distributions 77
Note that 4A^{−a/2} > A^{−a} and A > 1. Hence, as f(·) > 0 and 4A^{−a/2} < ε,
we get for λ ≥ A, x ≥ x₀ by (4.21)

|f(λx)/(λᵃ f(x)) − 1| < 4λ^{−a/2} < ε. (4.25)
L(y)/L(x) ≤ A max{(y/x)^η, (x/y)^η}, ∀ x, y ≥ K. (4.26)
L(y)/L(x) = (c(y)/c(x)) exp(∫_x^y (ε(t)/t) dt).
As c(·) → c ∈ (0, ∞) and ε(·) → 0, it is now clear that we can choose
K(A, η) > x₀ such that
(1/(x^{p+1} L(x))) ∫_{K′}^{x} t^p L(t) dt = ∫_{K′/x}^{1} s^p (L(sx)/L(x)) ds.

By Lemma 4.13 the integrand on the r.h.s. of the above is dominated by
2s^{p−η} for all x ≥ K′. Also as x → ∞ the integrand converges to s^p. So
by the dominated convergence theorem

lim_{x→∞} (1/(x^{p+1} L(x))) ∫_{K′}^{x} t^p L(t) dt = 1/(p + 1). (4.27)

A similar computation handles the integral over [K, K′], giving (4.28).
Note that (4.27), (4.28) imply (4.22). Divergence of Ip(x) follows from
(4.22) and Proposition 4.6.
Proof of (ii): Here p = −1; write L̃(x) := I₋₁(x) = ∫_K^x (L(t)/t) dt.
Let 0 < θ < 1 be arbitrary. Then for x ≥ K such that θx ≥ K

L̃(x) − L̃(θx) = ∫_{θx}^{x} (L(t)/t) dt = ∫_{θ}^{1} (L(sx)/(s L(x))) L(x) ds.

Therefore

(4.29)

For p < −1, writing

(1/(x^{p+1} L(x))) ∫_x^{∞} t^p L(t) dt = ∫_1^{∞} s^p (L(sx)/L(x)) ds,
we can establish that (analogous to (4.29))
(ii) Let α > 1, and F_I denote the corresponding integrated tail distribution
given by (3.34); that is,

dF_I(t) = I_{(0,∞)}(t) (1/E(X)) (1 − F(t)) dt. (4.32)

Then F_I ∈ R_{−(α−1)}.
Example 4.15 Pareto distribution. Let α > 0, κ > 0. Let the distribution
function F, supported on [0, ∞), be such that its right tail is
given by

F̄(x) = κ^α/(κ + x)^α, x ≥ 0. (4.34)

This distribution is called the Pareto (α, κ) distribution. The parameter
α > 0 is called the tail parameter, and κ > 0 is called the scale parameter.
As the tail parameter is the important one, by usual abuse of notation
we will suppress the scale parameter, and may just write Pareto (α)
distribution to denote (4.34). It is clear that Pareto (α) ∈ R_{−α}, α > 0.
By Exercise 4.2.12 it follows that Pareto (α) is heavy-tailed, that is, its
moment generating function does not exist in any neighbourhood of 0.
By Exercise 4.2.13, if α > 1, F_I has Pareto (α − 1) distribution and
F_I ∈ R_{−(α−1)}.
From the discussion in Appendix A.2, for 0 < α < 2 note that
Pareto (α) belongs to the domain of normal attraction of an α-stable
distribution.
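The Pareto integrated tail claim can be verified directly from (4.32) and (4.34): for Pareto(α, κ) one has E(X) = κ/(α − 1), and a short computation gives F̄_I(x) = (κ/(κ + x))^{α−1}, a Pareto(α − 1, κ) tail. A small sketch (helper names and parameter values are mine):

```python
alpha, kappa = 2.5, 1.0            # illustrative Pareto parameters

def tail(x, a=alpha, k=kappa):
    # Pareto(alpha, kappa) right tail, as in (4.34)
    return (k / (k + x)) ** a

def integrated_tail(x, a=alpha, k=kappa):
    # tail of F_I: (1/mu) * int_x^inf tail(t) dt, with mu = k/(a - 1)
    mu = k / (a - 1)
    return (k ** a) * (k + x) ** (1 - a) / ((a - 1) * mu)

x = 7.0
print(tail(x), integrated_tail(x))  # second should equal Pareto(alpha-1) tail
```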
The Pareto distribution is perhaps the most important heavy tailed distribution.
It is one of the few distributions that has been found to give a
good fit to observed data, especially in the actuarial context. In economics,

F̄₁(δx) F̄₂(δx) / [F̄₁((1 − δ)x) + F̄₂((1 − δ)x)]
≤ F̄₁(δx) · (δ/(1 − δ))^{−α} (L₁ + L₂)(δx)/((L₁ + L₂)((1 − δ)x)) = o(1), as x → ∞,

1 ≤ liminf_{x→∞} Ḡ(x)/(F̄₁(x) + F̄₂(x)) ≤ limsup_{x→∞} Ḡ(x)/(F̄₁(x) + F̄₂(x)) ≤ (1 − δ)^{−α}

for any 0 < δ < ½. Letting δ ↓ 0 in the above we get (4.36), completing
the proof. □
Let Xl, X 2 , ... be an i.i.d. sequence. Let F denote the common distribu-
tion function. Denote by Sn = Xl + ... + X n , Mn = max{X I , ... , X n },
n ~ 2, respectively the partial sum and the partial maximum. For n ~ 2
4.3. Subexponential distributions 85
note that

P(Mₙ > x) = 1 − (1 − F̄(x))ⁿ = 1 − Σ_{k=0}^{n} C(n, k) (−1)ᵏ (F̄(x))ᵏ = n F̄(x) [1 + o(1)], as x → ∞. (4.41)
Thus, for regularly varying distributions, the tail behaviour of the partial
sum Sn is essentially determined by the tail behaviour of the partial
maximum Mn.
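The equivalence of the tails of Sₙ and Mₙ in (4.41) can be observed in simulation: for large x, P(Sₙ > x) ≈ n F̄(x). A rough Monte Carlo sketch for Pareto(α) with κ = 1, drawn by inverse transform (all parameter choices are mine, and the tolerance is deliberately generous):

```python
import random
random.seed(1)

alpha, n, x, sims = 1.5, 5, 200.0, 100000

def pareto():
    # Pareto(alpha) with kappa = 1: inverse transform of the tail (4.34)
    u = random.random()
    return u ** (-1.0 / alpha) - 1.0

hits = sum(1 for _ in range(sims)
           if sum(pareto() for _ in range(n)) > x)
p_sum = hits / sims
tail = (1.0 / (1.0 + x)) ** alpha
print(p_sum, n * tail)    # P(S_n > x) versus n * tail(x)
```

The two numbers agree to within Monte Carlo noise, illustrating the "single big jump" heuristic.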
In situations involving "dangerous risks", it has been observed by
actuaries that whenever the total amount is very large, a very few of
the largest claims (in some cases may be even a single largest claim)
make up almost the entire bulk of the total claim. This is precisely the
scenario envisaged in the above discussion. So the following definition
is the next natural step.
lim_{t→∞} F̄^{*(n)}(t)/F̄(t) = lim_{t→∞} (1 − F^{*(n)}(t))/(1 − F(t)) = n. (4.44)

(iii) m_F(s) = ∞ for all s > 0; so the moment generating function does not
exist in any neighbourhood of 0.
In particular, any subexponential distribution F is heavy tailed, and
F̄(log x) is slowly varying.
F^{*(2)}(x) = ∫_0^x F(x − t) dF(t), and hence

F̄^{*(2)}(x)/F̄(x) = 1 + ∫_0^x (F̄(x − t)/F̄(x)) dF(t). (4.47)

For 0 < y < x,

F̄^{*(2)}(x)/F̄(x) = 1 + ∫_0^y (F̄(x − t)/F̄(x)) dF(t) + ∫_y^x (F̄(x − t)/F̄(x)) dF(t),

whence

1 ≤ F̄(x − y)/F̄(x) ≤ (F̄^{*(2)}(x)/F̄(x) − 1 − F(y)) · (1/(F(x) − F(y))). (4.48)
lim_{x→∞} (1/F̄(x)) ∫_0^x Ḡ(x − t) dF(t) = a. (4.52)

Let ε > 0 be arbitrary. By (4.49) there exists x₀ such that Ḡ(x) <
(a + ε) F̄(x) for all x ≥ x₀. Therefore

(1/F̄(x)) ∫_0^x Ḡ(x − t) dF(t)
= (1/F̄(x)) ∫_0^{x−x₀} Ḡ(x − t) dF(t) + (1/F̄(x)) ∫_{x−x₀}^x Ḡ(x − t) dF(t)
≤ (a + ε) (1/F̄(x)) ∫_0^{x−x₀} F̄(x − t) dF(t) + (F(x) − F(x − x₀))/F̄(x)
≤ (a + ε) (1/F̄(x)) ∫_0^x F̄(x − t) dF(t) + (F(x) − F(x − x₀))/F̄(x).
By Theorem 4.19(a) and Exercise 4.3.2 we see that the r.h.s. of the
above converges to (a + ε) as x → ∞. Hence

liminf_{x→∞} (1/F̄(x)) ∫_0^x Ḡ(x − t) dF(t) ≥ a.
Lemma 4.22 (Kesten): Let F ∈ S. Then for ε > 0 there exists a
constant K ∈ (0, ∞) such that

F̄^{*(n)}(x)/F̄(x) ≤ K (1 + ε)ⁿ, x ≥ 0,

for all n. By (4.51) applied to G = F^{*(n)} we see that for any a < ∞,

αₙ₊₁ ≤ 1 + sup_{x≥0} ∫_0^x (F̄^{*(n)}(x − t)/F̄(x)) dF(t)
≤ 1 + sup_{0≤x≤a} ∫_0^x (F̄^{*(n)}(x − t)/F̄(x)) dF(t) + sup_{x>a} ∫_0^x (F̄^{*(n)}(x − t)/F̄(x)) dF(t)
≤ 1 + 1/F̄(a) + αₙ sup_{x>a} ∫_0^x (F̄(x − t)/F̄(x)) dF(t)
= 1 + 1/F̄(a) + αₙ sup_{x>a} (F̄^{*(2)}(x) − F̄(x))/F̄(x)

by (4.47), where αₙ := sup_{x≥0} F̄^{*(n)}(x)/F̄(x). As F is subexponential, for
ε > 0 we can choose a > 0 such that

(F̄^{*(2)}(x) − F̄(x))/F̄(x) ≤ (1 + ε) for all x ≥ a,

so that

αₙ₊₁ ≤ (1 + 1/F̄(a)) + (1 + ε) αₙ.

lim_{x→∞} Ḡ(x)/F̄(x) = Σ_{k=0}^{∞} k p_k. □
We say that two distributions F, G on (0, ∞) are tail equivalent
(denoted F ∼_t G) if there exists 0 < a < ∞ such that

lim_{x→∞} Ḡ(x)/F̄(x) = a. (4.54)

limsup_{x→∞} (1/Ḡ(x)) ∫_0^x Ḡ(x − t) dG(t) ≤ 1. (4.55)
Let x₀ > 0, to be chosen suitably later, be fixed. Write

(1/Ḡ(x)) ∫_0^x Ḡ(x − t) dG(t) = (1/Ḡ(x)) ∫_0^{x−x₀} Ḡ(x − t) dG(t) + (1/Ḡ(x)) ∫_{x−x₀}^x Ḡ(x − t) dG(t).

h(x) ≤ ((a + ε)/(a − ε)) (1/F̄(x)) ∫_0^{x−x₀} F̄(x − t) dG(t)
≤ ((a + ε)/(a − ε)) (1/F̄(x)) ∫_0^x F̄(x − t) dG(t)
= ((a + ε)/(a − ε)) (1/F̄(x)) [G(x) − ∫_0^x G(x − t) dF(t)]  (because F * G = G * F)
= ((a + ε)/(a − ε)) (1/F̄(x)) [F̄(x) − Ḡ(x) + ∫_0^x Ḡ(x − t) dF(t)]
≤ ((a + ε)/(a − ε)) (1/F̄(x)) {F̄(x) − (a − ε) F̄(x) + (a + ε) ∫_0^{x−x₀} F̄(x − t) dF(t) + F̄(x − x₀) − F̄(x)}.
Proof: By hypothesis note that 0 < Σ_{k=0}^{∞} k p_k < ∞. Apply Lemma
4.24. □
A converse of the above result is also true. See [EKM].
For a distribution F supported on (0, ∞) and with expectation μ ∈
(0, ∞) let F_I denote the corresponding integrated tail distribution. An
important question, that will play a major role in risk theory, is: When
is F_I subexponential? It is perhaps still not known if F ∈ S implies
F_I ∈ S. A related question is: Are there sufficient conditions that ensure
that both F and F_I are subexponential? As an example of the latter, by
Theorem 4.14 we know that if F ∈ R_{−α}, α > 1 then F_I ∈ R_{−(α−1)} and
hence both are subexponential. In particular if F is Pareto (α), α > 1
then both F and F_I are subexponential.
Recall that we consider only distribution functions F on ℝ such
that F(0) = 0, F(x) < 1 for all x. Let F have finite expectation μ.
Rephrasing the first question in the preceding paragraph: When can we
have

Clearly (4.56) will hold if for ε > 0 we can find x₀ > 0 such that

(1/μ) ∫_0^{x/2} (F̄(x − y)/F̄(x)) F̄(y) dy
= (1/μ) ∫_0^v (F̄(x − y)/F̄(x)) F̄(y) dy + (1/μ) ∫_v^{x/2} (F̄(x − y)/F̄(x)) F̄(y) dy,

and

(1/μ) ∫_v^{x/2} (F̄(x − y)/F̄(x)) F̄(y) dy ≥ (F̄(x − v)/F̄(x)) (F_I(x/2) − F_I(v)).
Lemma 4.30 Let F, G ∈ L. Suppose there exist 0 < a₁, a₂ < ∞ such
that

a₁ ≤ Ḡ(x)/F̄(x) ≤ a₂, for all x > 0.

∫_0^x (Ḡ(x − t)/Ḡ(x)) Ḡ(t) dt

for n < x < n + 1, n = 0, 1, 2, .... Details of the proof are left as an exercise; see
Exercise 4.3.10. □
Theorem 4.33 If F ∈ S* then F ∈ S and F_I ∈ S.
Proof: By Lemma 4.29 we have F ∈ L. So in view of Lemmas 4.32,
4.30, 4.24 and Exercise 4.3.11 we may assume without loss of generality
that F has a density function f(·) and a hazard rate function q(·) with
F̄^{*(2)}(x)/F̄(x) = 1 + ∫_0^x (F̄(x − t)/F̄(x)) dF(t)
= 1 + ∫_0^v (F̄(x − t)/F̄(x)) dF(t) + ∫_v^{x−v} (F̄(x − t)/F̄(x)) f(t) dt + ∫_{x−v}^x (F̄(x − t)/F̄(x)) f(t) dt. (4.60)

So (4.60) becomes

F̄^{*(2)}(x)/F̄(x) = 2 ∫_0^v (F̄(x − t)/F̄(x)) dF(t) + F̄(v) (F̄(x − v)/F̄(x)) + ∫_v^{x−v} (F̄(x − t)/F̄(x)) f(t) dt. (4.61)
As F ∈ L, for any fixed v > 0, note that F̄(x − t)/F̄(x) ≤ F̄(x − v)/F̄(x) ≤ 2 for
all sufficiently large x, uniformly over 0 ≤ t ≤ v. So by the dominated
convergence theorem the first term on the r.h.s. of (4.61) converges to
2F(v) as x → ∞.

limsup_{x→∞} F̄(x/2)/F̄(x) < ∞. (4.64)

By hypothesis there exist a > 0, x₀ > 0 such that y q(y) < a for all
y ≥ x₀. Therefore

limsup_{x→∞} [Q(x) − Q(x/2)] = limsup_{x→∞} ∫_{x/2}^x q(y) dy ≤ a log 2 < ∞.

∫_0^x (F̄(x − t)/F̄(x)) F̄(t) dt = 2 ∫_0^{x/2} (F̄(x − t)/F̄(x)) F̄(t) dt.

The second term on the r.h.s. of (4.65) can be made arbitrarily small by
choosing a large enough v. With such a v > 0 (chosen and fixed), as
F ∈ L, by Exercise 4.3.13, it now follows that

limsup_{x→∞} ∫_0^x (F̄(x − t)/F̄(x)) F̄(t) dt ≤ 2μ.

As the reverse inequality always holds (Exercise 4.3.9(a)) we now have
F ∈ S*. □
The next result can be useful if q(x) = O(1/x) as x → ∞ does not hold.

(ii) ∫_0^∞ e^{x q(x)} F̄(x) dx < ∞.

Then F ∈ S*.

lim_{x→∞} ∫_0^{x/2} e^{Q(x)−Q(x−y)} F̄(y) dy = μ. (4.66)

Monotonicity of q over [0, ∞) gives, for 0 ≤ y ≤ x/2,

Q(x) − Q(x − y) = ∫_{x−y}^{x} q(t) dt ≤ y q(x − y) ≤ y q(y),

so the integrand in (4.66) is dominated by e^{y q(y)} F̄(y), which is integrable
by hypothesis (ii); the dominated convergence theorem then yields (4.66).
4.4 Exercises
limsup_{x→∞} (1 − F(x))/e^{−λx} < ∞ for some λ > 0.

lim_{x→∞} (1 − Φ(x))/(x⁻¹ φ(x)) = 1,

m_{F_t}(s) = m_F(t + s)/m_F(t), for s < s_F − t, and s_{F_t} = s_F − t.
4.2.1. Show that the following functions are slowly varying at ∞.
(i) L(·) > 0, measurable with lim_{x→∞} L(x) = c > 0.
(ii) [log(1 + x)]ᵇ, b ∈ ℝ. (Hint: First consider b = integer.)
(iii) log₂ x := log(log x), log_k x := log(log_{k−1} x).
4.2.2. Let L be slowly varying and nondecreasing (resp. nonincreasing).
Show that the function ε(·) in the Karamata representation can
be taken to be nonnegative (resp. nonpositive).
4.2.3. If L₁, L₂ are slowly varying (at ∞), then so are L₁L₂ and
L₁ + L₂. (Hint: Take max{ε₁(t), ε₂(t)} in the Karamata representation.)
4.2.4. If L varies slowly at ∞ show that

where c(·) > 0 is a measurable function such that c(x) → c₀ > 0, and δ(·) is
a measurable function such that lim_{t→∞} δ(t) = α.
4.2.9. Let L be a slowly varying function. For α ∈ ℝ, let L_α(x) =
∫_1^x t^α L(t) dt, x > 1. Show directly, using Proposition 4.6, that L_α(x)
diverges as x → ∞ if α > −1, and that L_α(x) converges if α < −1.
4.2.10. a) Let L(t) = ~ ) t 2: e. Show that L is a slowly varying
J t
00
liminf_{x→∞} F̄^{*(2)}(x)/F̄(x) ≥ 2.

limsup_{x→∞} F̄^{*(2)}(x)/F̄(x) ≤ 2.

Show that F is subexponential.
4.3.2. If F satisfies (4.44) for n = 2 show that

lim_{x→∞} (1/F̄(x)) ∫_0^x F̄(x − y) dF(y) = 1.

(Hint: Use (4.47).)
4.3.3. Using (4.44) for n = 2, show that F̄(x) > 0 for all x ≥ 0.
4.3.4. Derive (4.51).
4.3.5. Let p₀, p₁, ..., pₙ ≥ 0 be such that Σ_{i=0}^{n} pᵢ = 1; so {pᵢ} gives
a probability on {0, 1, ..., n}. Let F ∈ S and G(x) = Σ_{j=0}^{n} pⱼ F^{*(j)}(x).
Note that G is a distribution function. Show that

lim_{x→∞} Ḡ(x)/F̄(x) = Σ_{k=0}^{n} k p_k.
F(x - t) dF*(m)(t).
o
limsup_{x→∞} F̄^{*(n)}(x)/F̄(x) ≤ n.

lim_{x→∞} (1/F̄(x)) ∫_0^x F̄(x − y) F̄(y) dy ≥ 2μ.

(b) Show that F ∈ S* ⇔

lim_{x→∞} ∫_0^{x/2} (F̄(x − y)/F̄(x)) F̄(y) dy = μ.
4.3.10. (a) Let G be a distribution on (0, ∞) with hazard rate
function q_G. If q_G(x) → 0 as x → ∞ show that G ∈ L.
(b) Complete the proof of Lemma 4.32.
4.3.11. Let F, G be distributions supported on (0, ∞). Assume
that lim_{x→∞} Ḡ(x)/F̄(x) = C ∈ (0, ∞). Suppose F has finite expectation μ_F.

lim_{x→∞} ∫_0^v (F̄(x − t)/F̄(x)) F̄(t) dt = ∫_0^v F̄(t) dt.
4.3.14. Let 0 < τ < 1, c > 0, and F̄(x) = exp(−c x^τ), x > 0. This
is the heavy-tailed Weibull distribution. Show that F ∈ S* and hence
F, F_I are both subexponential.
4.3.15. Let F be a distribution function supported on (0, ∞) with
hazard rate function q_F(·). Define Q(x) = ∫_0^x q_F(t) dt, x > 0.
(a) Show that F̄(x) = exp(−Q(x)), x > 0.
(b) Using (4.47) show that

F̄^{*(2)}(x)/F̄(x) − 1 = ∫_0^x {exp[Q(x) − Q(x − t) − Q(t)]} q_F(t) dt.
Ruin Problems
R(t) = x + ct − S(t) = x + ct − Σ_{i=1}^{N(t)} Xᵢ, t ≥ 0. (5.1)

R(·) is called the risk or surplus process; in the Sparre Andersen model
it is called the renewal risk process; in the Cramer-Lundberg model it
is called the Lundberg risk process. Here x is the initial capital of the
company. Note that R(t) gives the surplus or capital balance of the
company at time t.
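The surplus process (5.1) is straightforward to simulate in the Cramer-Lundberg special case (Poisson arrivals, i.i.d. claims). Since R increases between claims, ruin can occur only just after a claim, so it suffices to inspect R at claim epochs. A rough sketch with illustrative parameters; infinite-horizon ruin is approximated by watching only the first 1000 claims, which slightly underestimates the ruin probability:

```python
import random
random.seed(2)

lam, theta, rho, x0 = 1.0, 1.0, 0.2, 5.0   # all parameter values illustrative
c = (1 + rho) * lam / theta                # premium rate; NPC holds as rho > 0

def ruined(n_claims=1000):
    # R(t) = x0 + c*t - S(t) increases between claims, so ruin can only
    # occur immediately after a claim; check the surplus at claim epochs
    t, claims = 0.0, 0.0
    for _ in range(n_claims):
        t += random.expovariate(lam)        # interarrival A_i ~ Exp(lam)
        claims += random.expovariate(theta) # claim X_i ~ Exp(theta)
        if x0 + c * t - claims < 0:
            return True
    return False

n_paths = 1000
psi_est = sum(ruined() for _ in range(n_paths)) / n_paths
print(psi_est)
```

For these parameters the explicit formula derived later in this chapter gives a ruin probability of about 0.36, which the estimate should approach.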
The event that R(·) falls below zero is called ruin; that is, ruin is
{R(t) < 0 for some t ≥ 0} = ∪_{t≥0} {R(t) < 0}. Define the ruin time τ₀ by

τ₀ = inf{t ≥ 0 : R(t) < 0}. (5.2)

{inf_{n>0} R(Tₙ) < 0} = {inf_{n>0} [x + cTₙ − Σ_{i=1}^{n} Xᵢ] < 0}.

Write S₀ ≡ 0, Zₙ = Xₙ − cAₙ, Sₙ = Σ_{i=1}^{n} Zᵢ, n ≥ 1. It is now clear that
Theorem 5.1 With the notation as above, assume that Z₁ is not identically 0, and
that E(Z₁) exists.
a) If E(Z₁) > 0 then lim_{n→∞} Sₙ = +∞, a.s.
b) If E(Z₁) < 0 then lim_{n→∞} Sₙ = −∞, a.s.
c) If E(Z₁) = 0 then

limsup_{n→∞} Sₙ = +∞,  liminf_{n→∞} Sₙ = −∞, a.s. □
5.1. Risk process and an associated random walk 109
Assertions (a) and (b) are easy consequences of the strong law of
large numbers, and hence left as exercise. Though assertion (c) may not
be surprising to those familiar with the symmetric simple random walk
on Z, its proof is by no means obvious. In fact we need to introduce the
following before taking up the proof.
For n = 0, 1, 2, ... let Gₙ := σ{Sᵢ : i ≤ n} denote the filtration
generated by the random walk. Note that Gₙ = σ{Zᵢ : i ≤ n}. Define

ν⁺ = min{n > 0 : Sₙ > 0}, (5.5)
ν⁻ = min{n > 0 : Sₙ ≤ 0}; (5.6)

ν⁺ (resp. ν⁻) is called the first entrance time of {Sₙ} into (0, ∞) (resp.
(−∞, 0]). Set ν₀⁺ = 0, ν₁⁺ = ν⁺, ν₀⁻ = 0, ν₁⁻ = ν⁻; for n ≥ 1 define

ν_{n+1}⁺ = min{j > νₙ⁺ : Sⱼ > S_{νₙ⁺}}, (5.7)
ν_{n+1}⁻ = min{j > νₙ⁻ : Sⱼ ≤ S_{νₙ⁻}}. (5.8)

Note that one can define S_{νₙ⁺} (resp. S_{νₙ⁻}) in an unambiguous manner
whenever νₙ⁺ < ∞ (resp. νₙ⁻ < ∞); see Exercise 5.1.4. The stopping
time νₙ⁺ is called the n-th strong ascending ladder epoch of {Sₙ}; similarly
νₙ⁻ is called the n-th (weak) descending ladder epoch of {Sₙ}. Note the
slight asymmetry between (5.5), (5.7) on the one hand and (5.6), (5.8)
on the other.
Proof of Theorem 5.1: In view of Exercise 5.1.3 we need to consider
only case (c); so assume E(Z₁) = 0. We first prove

P(ν⁺ < ∞) = 1; (5.9)

that is, the random walk {Sₙ} enters (0, ∞) in finite time with probability
1. Put M = min{n : Sₙ = sup_{j≥0} Sⱼ} = first time the maximum is reached;
of course, M = ∞ if the supremum is not attained. For any n ≥ 0, note that
the events {Sᵢ < Sₙ, i ≤ n − 1} and {Sⱼ ≤ Sₙ, j > n} are independent,
because the former involves Z₁, Z₂, ..., Zₙ, while the latter depends on
Z_{n+1}, Z_{n+2}, .... Clearly (Z₁, Z₂, ..., Zₙ) and (Zₙ, Z_{n−1}, ..., Z₁) have the
same distribution. Therefore

where the last step follows by the definition of M. Next note that
Σ_{k=n+1}^{j} Z_k has the same distribution as Σ_{ℓ=1}^{j−n} Z_ℓ for j > n. Hence using
the definition of ν⁺ we have

P(Σ_{k=1}^{j} Z_k ≤ 0, ∀ j > 0) = P(ν⁺ = ∞).

1 = Σ_{n=0}^{∞} P(M = n) = Σ_{n=0}^{∞} P(ν⁺ = ∞) P(ν⁻ > n) = P(ν⁺ = ∞) E(ν⁻). (5.10)

Now, if possible let P(ν⁺ = ∞) > 0. Then (5.10) implies E(ν⁻) <
∞. Consequently Wald's identity and our hypothesis give E(S_{ν⁻}) =
E(ν⁻) E(Z₁) = 0. By definition of S_{ν⁻} note that S_{ν⁻} ≤ 0. Hence
S_{ν⁻} = 0 a.s. Now P(Z₁ = 0) < 1, E(Z₁) = 0 imply P(Z₁ < 0) > 0.
This would mean P(S_{ν⁻} < 0) ≥ P(Z₁ < 0) > 0 which contradicts
S_{ν⁻} = 0 a.s. Hence P(ν⁺ = ∞) = 0, proving (5.9).
We want to prove next that for any m = 1,2, ...
(5.12)
Then one can get the following analogue of (5.10) by proceeding simi-
larly:
(5.13)
This in turn leads to (5.11), just like (5.9) was derived from (5.10). See
Exercise 5.1.6.
By (5.11) and Exercise 5.1.5, it now follows that {S_{ν_{n+1}⁺} − S_{νₙ⁺} : n ≥ 0}
Definition 5.3 We say that the renewal risk model satisfies the net
profit condition (NPC) if E(X₁) − cE(A₁) < 0.
be as in Section 5.1. A question related to ruin is: what is the rate at
which P((X₁ + ⋯ + Xₙ)/n > x) tends to 0 for any x > E(X₁)? More precisely,
does there exist a nice function I, called the rate function, such that for
x > E(X₁)

This question was studied by Esscher and Cramer. If {Zᵢ} has a moment
generating function, a satisfactory answer can be given, with the Esscher
transform (see (4.5)) playing a major role. Under suitable conditions,
the Esscher transform converts an event of "small" probability into one of
"considerable" probability, thus facilitating analysis.
It turns out that the above question in terms of the rate function is
meaningful in problems arising in diverse disciplines. A far reaching
abstract framework of large deviation principles has been synthesized
by the Indian born probabilist S.R.S. Varadhan with implications in statistics,
PDE theory, physics, engineering, etc. For this outstanding work
Varadhan has been awarded the prestigious Abel prize for 2007. For a
quick overview, an interested reader may see M. Raussen and C. Skau:
Interview with Srinivasa Varadhan, Notices of the American Mathematical
Society, vol. 55 (2008), Feb. 2008 issue, pp. 238-246, and R. Bhatia:
A Conversation with S.R.S. Varadhan, The Mathematical Intelligencer,
vol. 30, No. 2, Spring 2008, pp. 24-42. The role of the Esscher tilt is
highlighted in these interviews. See also S. Ramasubramanian: Large
deviations: an introduction to 2007 Abel prize, Proceedings of the Indian
Academy of Sciences (Mathematical Sciences), 118 (2008), 161-182.
We continue with the renewal risk model. We assume that the net
profit condition holds; see Definition 5.3 in the preceding section. We
also assume that the claim size distribution is light tailed, that is, the
claim size Xl has moment generating function in a neighbourhood of
zero.
Definition 5.4 Let Z₁ = X₁ − cA₁, and assume that the moment generating
function m_Z(·) of Z₁ exists in (−h₀, h₀), where h₀ > 0. Suppose
h = r is the unique positive solution to the equation

m_Z(h) = 1;

then r is called the Lundberg coefficient.

Proposition 5.5 Assume that (i) E(Z₁) < 0, that is, the net profit
condition holds, (ii) there exists h₀ > 0 such that m_Z(h) < ∞ for all
|h| < h₀, and lim_{h↑h₀} m_Z(h) = ∞. Then the Lundberg coefficient is well
defined.
Proposition 5.6 Assume that the (NPC) holds and that the Lundberg
coefficient r exists. Let {Sₙ : n ≥ 0} be the random walk as in Section
5.1. Then {e^{rSₙ} : n = 0, 1, 2, ...} is a martingale with respect to {Gₙ}.

Theorem 5.7 (Lundberg bound): Consider the renewal risk model. Assume
that the (NPC) holds and that the Lundberg coefficient r > 0 exists.
Then

ψ(x) ≤ e^{−rx}, for all x ≥ 0.
The case n = 1 is dealt with in Exercise 5.2.3. Assume that (5.16) holds
for n ≤ k. We need to show that it holds for n = k + 1.
Let F_Z denote the distribution function of Z₁. As Z₂ + ⋯ + Z_{j+1} is
independent of Z₁, and Z₂ + ⋯ + Z_{j+1} has the same distribution as Sⱼ, we obtain

P(max_{1≤j≤k+1} Sⱼ > x) = P(Z₁ > x) + ∫_{(−∞,x]} P(max_{1≤j≤k+1} Sⱼ > x | Z₁ = z) dF_Z(z)
= P(Z₁ > x) + ∫_{(−∞,x]} P(max_{1≤j≤k} Sⱼ > x − z) dF_Z(z). (5.20)

(NPC) holds ⇔ (1/θ) − (c/λ) < 0 ⇔ θ − (λ/c) > 0. (5.21)

Clearly

c = (1 + ρ) λ/θ,

where ρ > 0 is the safety loading factor. Clearly (NPC) holds as E(Z₁) =
−ρ/θ < 0. So the Lundberg coefficient is r = θ − λ/c = θρ/(1 + ρ), and hence the
Lundberg bound is

ψ(x) ≤ exp(−(θρ/(1 + ρ)) x), x ≥ 0.

From the above we see that ruin is unlikely if the initial capital is large.
Also note that ρ/(1 + ρ) ↑ 1 as ρ → ∞; so the bound does not change
significantly if ρ is very large. On the other hand, for large ρ the premium
will be high, and hence the policy may look unattractive to a buyer.
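Since m_Z is convex with m_Z(0) = 1 and m_Z′(0) = E(Z₁) < 0, the Lundberg coefficient can be found by bisection on (0, θ). A sketch for exponential claims and Poisson arrivals (parameter values illustrative), where the closed form r = θ − λ/c is available as a cross-check:

```python
lam, theta, rho = 1.0, 1.0, 0.2     # illustrative parameters
c = (1 + rho) * lam / theta         # premium rate with safety loading rho

def m_Z(h):
    # m.g.f. of Z = X - c*A with X ~ Exp(theta), A ~ Exp(lam); needs h < theta
    return (theta / (theta - h)) * (lam / (lam + c * h))

# Lundberg coefficient: the unique r > 0 with m_Z(r) = 1 (bisection sketch);
# m_Z dips below 1 near 0 (negative drift) and blows up as h -> theta
lo, hi = 1e-9, theta - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if m_Z(mid) < 1.0:
        lo = mid
    else:
        hi = mid
r = 0.5 * (lo + hi)
r_closed = theta - lam / c          # closed form for exponential claims
print(r, r_closed)
```

With these parameters both give r = θρ/(1 + ρ) = 1/6.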
∫_{(−∞,x]} P(Σ_{i=2}^{n} Zᵢ ≤ x − z, ∀ n ≥ 2 | Z₁ = z) dF_Z(z)
= ∫_0^∞ ∫_0^{x+cw} P(Sₘ ≤ x − (u − cw), ∀ m ≥ 1) dF_X(u) dF_A(w)
= ∫_0^∞ ∫_0^{x+cw} φ(x − u + cw) dF_X(u) λ e^{−λw} dw.

From this one obtains, after an integration by parts and an interchange in
the order of integration,

φ(t) = φ(0) + (λ/c) ∫_0^t φ(t − u) F̄_X(u) du. (5.29)

Letting t → ∞, as φ(t) ↑ 1 under (NPC), we get

φ(0) = 1 − (λ μ₁)/c, (5.30)

where μ₁ = E(X₁).
We know that the integrated tail distribution corresponding to F_X is
given by dF_{X,I}(x) = (1/μ₁) F̄_X(x) dx. So, using (5.30), we can now write
(5.29) as

φ(t) = (c − λμ₁)/c + ((λμ₁)/c) ∫_0^t φ(t − x) dF_{X,I}(x). (5.31)
φ(t) = ρ/(1 + ρ) + (1/(1 + ρ)) ∫_0^t φ(t − x) dF_{X,I}(x), t ≥ 0, (5.32)

where ρ = (c/(λμ₁)) − 1 > 0, and F_{X,I} is the integrated tail distribution of
the claim size distribution F_X. □
where F^{*(n)}_{X,I} denotes the n-fold convolution of F_{X,I}. Consequently

ψ(t) = 1 − φ(t) = (ρ/(1 + ρ)) Σ_{n=1}^{∞} (1 + ρ)^{−n} F̄^{*(n)}_{X,I}(t), t ≥ 0. (5.34)
V = Σ_{i=1}^{J} Uᵢ = { 0, if J = 0;  U₁ + ⋯ + U_k, if J = k. (5.35)

∫_0^∞ e^{νx} (1/(1 + ρ)) dF_{X,I}(x) = 1. (5.36)

Then the defective renewal equation for ψ(·) given in Exercise 5.3.2 (see
below) leads to

e^{νt} ψ(t) = e^{νt} (1/(1 + ρ)) F̄_{X,I}(t) + ∫_0^t e^{ν(t−x)} ψ(t − x) e^{νx} (1/(1 + ρ)) dF_{X,I}(x), t ≥ 0, (5.37)

where
∫_0^∞ e^{νt} (1/(1 + ρ)) F̄_{X,I}(t) dt
= (1/(1 + ρ)) ∫_0^∞ e^{νt} (1/μ₁) ∫_t^∞ F̄_X(x) dx dt
= (1/(μ₁(1 + ρ))) ∫_0^∞ F̄_X(x) ∫_0^x e^{νt} dt dx
= (1/(μ₁ ν (1 + ρ))) [∫_0^∞ e^{νx} F̄_X(x) dx − ∫_0^∞ F̄_X(x) dx]
= ρ/(ν(1 + ρ)) (5.41)

by (5.36). Therefore applying the key renewal theorem (Theorem 3.26
(b)) to the renewal equation (5.37) we get the existence of

lim_{t→∞} e^{νt} ψ(t),

whence the result follows; in the above we have used (5.38), (5.41). □
dG(x) = (θ/(1 + ρ)) exp{−(θ/(1 + ρ)) x} · 1_{(0,∞)}(x) dx. (5.43)

(5.44)

e^{νt} ψ(t) = ∫_{[0,t]} e^{ν(t−x)} (1/(1 + ρ)) e^{−θ(t−x)} d(Σ_{n=0}^{∞} G^{*(n)}(x))
= (1/(1 + ρ)) e^{−(θ−ν)t} + ∫_0^t (1/(1 + ρ)) e^{−(θ−ν)(t−x)} · (θ/(1 + ρ)) dx
= (1/(1 + ρ)) exp{−(θ/(1 + ρ)) t} + ∫_0^t (1/(1 + ρ)) exp{−(θ/(1 + ρ))(t − x)} (θ/(1 + ρ)) dx
= 1/(1 + ρ).
5.4. Exact asymptotics in Cramer-Lundberg model 123
Hence we have

ψ(t) = (1/(1 + ρ)) e^{−νt} = (1/(1 + ρ)) exp{−(ρθ/(1 + ρ)) t}.

Thus we have an explicit expression for the ruin probability in this case.
□
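For exponential claims the explicit expression above can be cross-checked against a truncation of the Pollaczek-Khinchin series (5.34): here F_{X,I} = Exp(θ), so its n-fold convolution is Gamma(n, θ) and the series can be summed numerically. A sketch (parameter values mine):

```python
import math

theta, rho = 1.0, 0.2   # illustrative parameters

def psi_exact(t):
    # the explicit ruin probability for Exp(theta) claims
    return (1.0 / (1 + rho)) * math.exp(-(rho * theta / (1 + rho)) * t)

def gamma_tail(n, x):
    # P(Gamma(n, theta) > x) = exp(-theta x) * sum_{k<n} (theta x)^k / k!
    s, term = 0.0, 1.0
    for k in range(n):
        s += term
        term *= theta * x / (k + 1)
    return math.exp(-theta * x) * s

def psi_pk(t, n_max=200):
    # truncated Pollaczek-Khinchin series (5.34); for Exp claims the
    # n-fold convolution of F_{X,I} is Gamma(n, theta)
    return (rho / (1 + rho)) * sum(
        (1 + rho) ** (-n) * gamma_tail(n, t) for n in range(1, n_max))

for t in (0.0, 1.0, 5.0, 10.0):
    print(t, psi_exact(t), psi_pk(t))
```

The truncation error is of order (1 + ρ)^{−n_max}, negligible here, so the two columns agree to near machine precision.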
5.4.2 Subexponential claims
We had already mentioned in an earlier section that the distributions
that actually fit claim size data are heavy tailed distributions like the
Pareto, lognormal or heavy tailed Weibull distributions. Clearly the
discussion in subsection 5.4.1 does not apply to such distributions.
To get a handle on the situation we return to the Pollaczek-Khinchin
formula (5.34) for the ruin probability ψ(·). Recall that essentially only
the net profit condition is assumed here in the Cramer-Lundberg model.
Dividing (5.34) by F̄_{X,I}(t) we get

ψ(t)/F̄_{X,I}(t) = (ρ/(1 + ρ)) Σ_{n=1}^{∞} (1 + ρ)^{−n} (F̄^{*(n)}_{X,I}(t)/F̄_{X,I}(t)), ∀ t ≥ 0.

lim_{t→∞} ψ(t)/F̄_{X,I}(t) = (ρ/(1 + ρ)) Σ_{n=1}^{∞} n (1 + ρ)^{−n} = 1/ρ. (5.47)

□
Example 5.15 Consider the Cramer-Lundberg model with the net profit
condition. Suppose the claim size distribution F is Pareto (α), α > 1.
As α > 1 we know that the claim size distribution has finite expectation.
Also the integrated tail distribution F_I is Pareto (α − 1) and hence is
subexponential. So by Theorem 5.14

ψ(t) = (1/ρ) (κ^{α−1}/(κ + t)^{α−1}) (1 + o(1)), as t → ∞. (5.48)

When the claim size is Pareto (α), α > 1, (5.48) indicates that the ruin
probability decays only at a power rate. Exercise 5.4.5 shows that a
similar behaviour is exhibited when the claim size distribution is in R_{−α}, with
α > 1; (in fact, Example 5.15 is a special case of Exercise 5.4.5). This
is in sharp contrast to the situation in Theorem 5.12 where the ruin
probability decays exponentially. So Pareto distributed risks, and risks
with regularly varying tails, are more dangerous than light-tailed risks.
For the Cramer-Lundberg model with net profit condition ρ > 0, the
following are, in fact, equivalent:
(a) F_I ∈ S; (b) φ(·) = 1 − ψ(·) ∈ S; (c) lim_{t→∞} ψ(t)/F̄_I(t) = ρ⁻¹.
While (a) ⇒ (c) is proved in Theorem 5.14, (a) ⇒ (b) is given in Exercise
5.4.4. For a proof the reader is referred to an appendix in [EKM].
Our presentation of this section is heavily influenced by [EKM].
Remark 5.16 For the Sparre Andersen model also similar analysis is
possible. For analogues of Theorems 5.10, 5.12, 5.14 see, respectively,
Theorems 6.5.1, 6.5.7, 6.5.11 in Section 6.5 of [RSST]. These results
make use of the so called ladder height distributions.
5.5 Exercises
5.1.1. (a) Show that τ₀ is a stopping time with respect to the filtration
{F_t : t ≥ 0}, where F_t is the smallest σ-algebra making the family
{S(r) : 0 ≤ r ≤ t} measurable; that is, show that {τ₀ ≤ t} ∈ F_t for all t.
(Hint: {τ₀ > t} = {R(s) ≥ 0 for all s ≤ t} = ∩{R(s) ≥ 0 : s ≤ t, s
rational} ∩ {R(t) ≥ 0}. Here we are using the special structure of R(·),
viz. it is increasing except at jumps, which are negative.)
(b) Show that "Ruin" is an event.
5.1.2. Show that $\inf_{t \ge 0} R(t)$ is a (possibly extended real-valued) random variable.
5.1.3. As $E(Z_1)$ exists, prove assertions (a), (b) of Theorem 5.1 using the strong law of large numbers.
5.1.4. (a) Let $j = 0, 1, 2, \ldots$. Show that $\{\nu_j^+ = k\}, \{\nu_j^- = k\} \in \mathcal{G}_k$, $k = 0, 1, 2, \ldots$; that is, $\nu_j^+, \nu_j^-$ are stopping times w.r.t. $\{\mathcal{G}_n\}$.
(b) Show that $S_{\nu_j^+}$ (resp. $S_{\nu_j^-}$) is a random variable on $\{\nu_j^+ < \infty\}$ (resp. $\{\nu_j^- < \infty\}$).
(c) Show that $S_{\nu_{k+1}^+} - S_{\nu_k^+}$ takes values in $(0, \infty)$.
5.1.5. (a) For any interval $(a,b) \subset (0,\infty)$ show that
$$P\big(S_{\nu_1^+} - S_{\nu_0^+} \in (a,b)\big) = \sum_{n=1}^{\infty} P\big(S_j \in (-\infty,0],\ j \le n-1,\ S_n \in (a,b)\big).$$
(b) For $i = 0, 1, 2, \ldots$, $k = 0, 1, \ldots$, show that $\{\nu_k^+ = i\}$ is independent of $\{Z_{i+\ell} : \ell \ge 1\}$; in fact $\mathcal{G}_i$ and $\{Z_{i+\ell} : \ell \ge 1\}$ are independent.
(c) Let $k \ge 1$. For any interval $(a,b) \subset (0,\infty)$ show that
$$P\big(S_{\nu_{k+1}^+} - S_{\nu_k^+} \in (a,b)\big) = \sum_{i=1}^{\infty}\sum_{n=1}^{\infty} P\Big(\sum_{\ell=1}^{j} Z_{i+\ell} \in (-\infty,0],\ j \le n-1,\ \sum_{\ell=1}^{n} Z_{i+\ell} \in (a,b)\Big)\cdot P(\nu_k^+ = i)$$
$$= \sum_{n=1}^{\infty} P\big(S_j \in (-\infty,0],\ j \le n-1,\ S_n \in (a,b)\big).$$
L Bd
00
f
N=1
P(v: = N)· P ( ~ Ze > 0, V j::::; n -1)
e=N+j+l
P ( L
N+j
Zq ::::; 0, V j > n
)
.
q=N+n+l
$$\psi(t) = \frac{1}{1+\rho}\,\bar F_{X,I}(t) + \frac{1}{1+\rho}\int_0^t \psi(t - x)\, dF_{X,I}(x), \qquad t \ge 0,$$
where $\bar F_{X,I}$ denotes the tail of the integrated tail distribution $F_{X,I}$.
5.3.3. Give a proof of Theorem 5.10.
5.3.4. Let $V$ have a compound geometric distribution with characteristics $(p, F_U)$ where $0 < p < 1$ and $F_U$ is supported on $(0, \infty)$. Show that the distribution function of $V$ is given by
$$F_V(x) = (1 - p)\sum_{k=0}^{\infty} p^k\, F_U^{*k}(x), \qquad x \ge 0.$$
(i) If $0 < \alpha < 1$ show that $\zeta(\cdot) \equiv 0$ is the only bounded solution to the convolution equation. (Hint: Iterate.)
(ii) Let $0 < \alpha < 1$. Let $dG(y) = I_{(-\infty,0)}(y)\, \beta e^{\beta y}\, dy$; so $G$ is supported on $(-\infty, 0)$. Show that we can have a solution of the form $\zeta(x) = e^{\lambda x}$, $x \in \mathbb{R}$.
(iii) Let $\alpha > 1$. Let $dG(y) = I_{(0,\infty)}(y)\, \beta e^{-\beta y}\, dy$. Show that we can have a solution of the form $\zeta(x) = e^{\lambda x}$, $x \in \mathbb{R}$. Thus it is possible to have unbounded solutions to the convolution equation (3.44) if $F$ is not a probability distribution.
5.4.2. Consider the Cramer-Lundberg model with the net profit condition (5.26); denote $\rho = \frac{c}{\lambda\mu} - 1$.
(i) Show that (5.36) is equivalent to
$$\int_0^{\infty} e^{\nu x}\, \bar F_X(x)\, dx = \frac{c}{\lambda},$$
hence conclude that if a $\nu > 0$ satisfying (5.36) exists then it is unique.
(ii) If (5.36) holds, show that the claim size $X$ has a moment generating function in a neighbourhood of 0. So $X$ is light-tailed. (Hint: Use (4.4).)
(iii) Show that (5.36) holds $\Leftrightarrow$ $\nu$ is the Lundberg coefficient as defined in Section 5.2.
5.4.3. Assume that the claim size $X$ has moment generating function $m_X(\cdot)$ in $(-h_0, h_0)$, $h_0 > 0$, and that there is $\nu \in (0, h_0)$ satisfying (5.36). Show that (5.38) holds, and in fact
Around the same time that Lundberg formulated his risk model, the French mathematician Louis Bachelier worked out a quantitative theory of Brownian motion from a study of stock price fluctuations. (It is interesting to note that Bachelier's pioneering work predated the famous work of Einstein on Brownian motion by five years!) Though risk models perturbed by Brownian motion/Levy processes and diffusion approximations of risk processes have been studied since the 1970s (see Chapter 13 of [RSST]), interaction between these two important stochastic models has been sporadic till recently.
Hipp and Plum [HP1, HP2] formulated a few years back a model in which part of the surplus can be invested in a risky asset, and obtained some interesting results. Such a model requires degeneracy of the diffusion at the origin. A systematic study of the set-up will be very useful. This chapter is just an attempt at that. Needless to emphasize, the results are far from complete.
In this chapter we consider the Cramer-Lundberg model with the possibility of investments in a riskless bond and in a risky asset. The investment strategy may help in reducing the ruin probability.
The prerequisites needed for this chapter are substantially more than for the earlier chapters. Reasonable familiarity with the Brownian motion process, stochastic calculus, stochastic differential equations, and rudiments of continuous-time Markov processes will be helpful. Some of the excellent books are [O], [Du], [KS], [IW], [BW]. We shall freely use results from these sources, of course, with appropriate reference. A concise account of Brownian motion and Ito integrals is given in Appendix A.4.
6.1 Lundberg risk process and Markov property
Recall from Chapter 5 that the Lundberg risk process is given by (5.1), where $\{N(t)\}$ is a homogeneous Poisson process with rate $\lambda > 0$. In the Cramer-Lundberg model, by Theorem 2.12 we know that the total claim amount process $\{S(t)\}$ has independent increments. Hence the Lundberg risk process $\{R(t)\}$ also has the independent increment property. That is, $R(\cdot)$ is a Levy process. Though the Markov property is intuitively obvious for a Levy process, we put it in a convenient form.
For $t \ge 0$ define $\mathcal{F}_t = \sigma\{R(s) : 0 \le s \le t\}$ = the smallest $\sigma$-algebra making the family $\{R(s) : 0 \le s \le t\}$ measurable; here
$$R(t) = x + ct - S(t) = x + ct - \sum_{i=1}^{N(t)} X_i, \qquad t \ge 0,$$
as given by (5.1). $\{\mathcal{F}_t : t \ge 0\}$ is the filtration generated by $R(\cdot)$.
If $\mathcal{A}, \mathcal{C}$ are sub-$\sigma$-algebras in $(\Omega, \mathcal{F}, P)$ such that $P(A \cap C) = P(A)\cdot P(C)$ for any $A \in \mathcal{A}$, $C \in \mathcal{C}$, we say that the sub-$\sigma$-algebras $\mathcal{A}$ and $\mathcal{C}$ are independent. In addition, if $\mathcal{C} = \sigma(Y)$ where $Y$ is a random variable, we say that the sub-$\sigma$-algebra $\mathcal{A}$ and the random variable $Y$ are independent. So the independent increment property of $R(\cdot)$ can also be stated as: for any $0 \le s \le t$, the sub-$\sigma$-algebra $\mathcal{F}_s$ and the random variable $R(t) - R(s)$ are independent.
Proof: Let
$$P\big((R(t_1), \ldots, R(t_k)) \in B \mid \mathcal{F}_t\big) = P\big((R(t_1), \ldots, R(t_k)) \in B \mid R(t)\big) \quad \text{a.s.} \qquad (6.1)$$
where the r.h.s. of (6.3) stands for $h(R(t))$ with $h$ denoting the measurable function
$$P(s, x; r, A) = \int_{\mathbb{R}} P(t, y; r, A)\, P(s, x; t, dy). \qquad (6.7)$$
Proof:
$$\begin{aligned} P(s, x; r, A) &= E_{s,x}\big[I_A(R(r))\big] \\ &= E_{s,x}\Big[E_{s,x}\big[I_A(R(r)) \mid \sigma(R(t))\big]\Big] \\ &= E_{s,x}\Big[E_{t,R(t)}\big[I_A(R(r))\big]\Big] \quad \text{(by (6.3), (6.5))} \\ &= E_{s,x}\big[P(t, R(t); r, A)\big] \quad \text{(by (6.6))} \\ &= \int_{\mathbb{R}} P(t, y; r, A)\, P(s, x; t, dy). \end{aligned}$$
$\Box$
Time homogeneity of $R(\cdot)$ implies that for $t, s \ge 0$, $x \in \mathbb{R}$, $A \in \mathcal{B}(\mathbb{R})$
$$P(0, x; t+s, A) = \int_{\mathbb{R}} P(0, y; t, A)\, P(0, x; s, dy). \qquad (6.9)$$
$$\lim_{t \downarrow 0} \frac{1}{t}\big[T_t f(x) - f(x)\big] = c\, f'(x) + \lambda\, E\big[f(x - X) - f(x)\big] = c\, f'(x) + \lambda \int_0^{\infty} \big[f(x - z) - f(x)\big]\, dF_X(z).$$
Here $F_X$ denotes the common distribution function of the claim sizes $X_i$, and $X$ denotes a generic random variable with distribution function $F_X$.
Proof: Note that the r.h.s. of (6.12) is well defined for any $f \in \mathcal{D}$. Now conditioning with respect to $N(t)$, we see that for $t \ge 0$, $x \in \mathbb{R}$
$$\frac{1}{t}\, h(t) = \lambda\, e^{-\lambda t} \int \big[f(x + ct - z) - f(x)\big]\, dF_X(z)$$
Theorem 6.7 For any $f \in \mathcal{D}$, $x \in \mathbb{R}$, the function $t \mapsto T_t f(x)$ has a right derivative at any $t \ge 0$, and $\frac{d^+}{dt} T_t f(x) = T_t L f(x)$. Moreover, for any $0 \le s \le t$,
$$T_t f(x) - T_s f(x) = \int_s^t T_r L f(x)\, dr.$$
By the preceding part of the proof, $t \mapsto H(t)$ is continuous, has a right derivative, and $\frac{d^+ H(t)}{dt} = 0$, $t \ge s$. Also note that $H(s) = 0$. We now claim that $H(t) = 0$, $t \ge s$. To prove the claim we proceed as
$$f(R(t)) - f(R(0)) - \int_0^t L f(R(s))\, ds, \qquad t \ge 0$$
is a martingale w.r.t. $\{\mathcal{F}_t\}$. \qquad (6.18)
Theorem 6.9 Assume that the claim size distribution $F_X$ has a probability density function. Assume also the net profit condition (NPC) (5.26). Then $\psi, \varphi \in \mathcal{D}$. Moreover
(i) $\psi$ solves the problem:
By continuity of $\varphi'$ on $[0, \infty)$, the expression (5.28) holds for all $x \ge 0$. As $\varphi(\cdot) = 0$ on $(-\infty, 0)$, and $F_X$ is supported on $(0, \infty)$, it is now easily seen from (5.28) that $L\varphi(x) = 0$, $x \ge 0$. Finally, as the (NPC) holds, by Exercise 5.1.7 (b) it follows that $\lim_{x\to\infty} \varphi(x) = 1$. $\Box$
Proof: As $g$ is constant on $(-\infty, 0)$ and $F_X$ is supported on $(0, \infty)$, note that $Lg(y) = 0$, $y < 0$. So $Lg(\cdot) = 0$ on $\mathbb{R}$. Let $x \ge 0$. If $R(0) = x$, by Theorem 6.8 it follows that $g(R(t)) - g(x)$ is a martingale. So by the optional sampling theorem $g(x) = E_x\big[g(R(\tau_0 \wedge t))\big]$, for all $t \ge 0$.
By Exercise 6.1.10, $R(\tau_0) < 0$ and hence $g(R(\tau_0)) = 1$, whenever $\tau_0 < \infty$. Therefore it follows that
As the net profit condition holds, by Exercise 5.1.7 (c), $R(t) \to +\infty$ a.s. as $t \to \infty$. So on the set $\{\tau_0 = \infty\}$, $R(t) \to +\infty$ a.s. Since $\lim_{y\uparrow\infty} g(y) = 0$, we now get $\lim_{t\uparrow\infty} E_x\big[I_{\{\tau_0 > t\}}\, g(R(t))\big] = 0$. Consequently from (6.26) we now get $g(x) = \lim_{t\uparrow\infty} P_x(\tau_0 \le t) = P_x(\tau_0 < \infty) = \psi(x)$ as required. $\Box$
Thus, if $F_X$ has a density function, and if (NPC) holds, by Theorems 6.9, 6.10 it follows that the ruin probability $\psi(\cdot)$ is the unique solution in $\mathcal{D}$ to the problem (6.25). So (6.25) can be used to find the ruin probability. We illustrate this in the case of exponentially distributed claim sizes. We follow [H].
Example 6.11 Let $F_X = \mathrm{Exp}(\theta)$, $\theta > 0$; so $\mu = E(X) = \frac{1}{\theta}$. The net profit condition here means $c > \lambda\mu = \frac{\lambda}{\theta}$. Put $h(x) = E[\psi(x - X)]$, $x \in \mathbb{R}$. Since $\psi(\cdot) = 1$ on $(-\infty, 0)$, note that $h(x) = 1$ for $x < 0$. For $x \ge 0$,
we have
$$h(x) = e^{-\theta x} + \int_0^x \psi(z)\, \theta e^{-\theta(x - z)}\, dz,$$
so that
$$h'(x) = -\theta e^{-\theta x} - \theta \int_0^x \psi(z)\, \theta e^{-\theta(x - z)}\, dz + \theta\,\psi(x) = \theta\big[\psi(x) - h(x)\big], \qquad x \ge 0, \qquad (6.28)$$
and hence $h'$ is bounded continuous on $[0, \infty)$; (that is, $h$ restricted to $[0, \infty)$ is a $C^1$-function). From $L\psi(x) = 0$, $x \ge 0$ we get
$$-c\, \psi'(x) = \lambda\, h(x) - \lambda\, \psi(x), \qquad x \ge 0. \qquad (6.29)$$
As the r.h.s. of (6.29) is $C^1$ on $[0, \infty)$, it now follows that $\psi$ is $C^2$ on $[0, \infty)$. So differentiating (6.29), and using (6.28), (6.29), we obtain $\psi''(x) = \big(\frac{\lambda}{c} - \theta\big)\, \psi'(x)$, $x \ge 0$. That is, $\psi(\cdot)$ satisfies the ODE:
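Solving this ODE with $\psi(\infty) = 0$ leads to the classical formula $\psi(x) = \frac{\lambda}{c\theta}\, e^{-(\theta - \lambda/c)x}$ for exponential claims. The check below (our own illustration, with arbitrary parameters satisfying the net profit condition) verifies numerically that this $\psi$ satisfies the relation $-c\psi'(x) = \lambda h(x) - \lambda \psi(x)$ of (6.29), computing $h(x) = E[\psi(x - X)]$ by midpoint quadrature.

```python
from math import exp

lam, theta, c = 1.0, 2.0, 1.0            # net profit condition: c > lam / theta

def psi(x):
    """Classical ruin probability for Exp(theta) claims in the Cramer-Lundberg model."""
    if x < 0:
        return 1.0
    return (lam / (c * theta)) * exp(-(theta - lam / c) * x)

def psi_prime(x):
    return -(theta - lam / c) * psi(x)

def h(x, n=20000):
    """h(x) = E[psi(x - X)], X ~ Exp(theta), via the representation used for (6.28)."""
    if x < 0:
        return 1.0
    dz = x / n
    integral = dz * sum(psi((k + 0.5) * dz) * theta * exp(-theta * (x - (k + 0.5) * dz))
                        for k in range(n))
    return integral + exp(-theta * x)    # the exp(-theta x) term covers the event X > x

x0 = 1.3
residual = -c * psi_prime(x0) - (lam * h(x0) - lam * psi(x0))   # ~ 0 by (6.29)
```

The residual is zero up to the quadrature error of the midpoint rule.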
with initial value $Z(0) = Z_0 > 0$, where $a, b > 0$ are constants, and $\{B(t) : t \ge 0\}$ is a standard one-dimensional Brownian motion independent of the total claim amount process $S(\cdot)$. Let $A(x)$ denote the amount invested in the risky asset when the surplus is $x$; so $A(R(t))$ would represent the amount invested in the risky asset at time $t$. The remaining amount $R(t) - A(R(t))$ is invested in the bond. (If we assume $A(x) \le x$, that would be a budget constraint.) The function $A(\cdot)$ will be treated as a control; it will be assumed to be a reasonably well behaved function. We will write $R^{(A)}(\cdot)$ to denote the risk/surplus process when investment is made according to $A(\cdot)$ as above.
The two (random) "observables" of the system are the total claim amount $S(\cdot)$, and the unit price $Z(\cdot)$ of the risky asset. We assume that the processes $\{S(s) : s \ge 0\}$ and $\{Z(t) : t \ge 0\}$ are independent; this means that the sub-$\sigma$-algebras $\sigma\{S(s) : s \ge 0\}$ and $\sigma\{Z(t) : t \ge 0\}$ are independent.
Note that (6.31) stands for the stochastic integral equation
$$Z(t) = Z_0 + \int_0^t b\, Z(s)\, ds + \int_0^t a\, Z(s)\, dB(s), \qquad t \ge 0,$$
where the last term on the r.h.s. is an Ito integral. The "dispersion" or "volatility" factor $a > 0$ is indicative of fluctuations in the price of the risky asset; the "drift" factor $b > 0$ indicates the rate at which the risky asset would appreciate in the absence of any volatility.
It is intuitively clear that the SDE for the current surplus $R^{(A)}(\cdot)$ corresponding to the control $A(\cdot)$ can be written as (see Exercise 6.2.2)
where $c, b, \sigma > 0$, $\zeta \ge 0$. The functions $\sigma^{(A)}$, $b^{(A)}$ are respectively the dispersion and drift coefficients corresponding to the control $A(\cdot)$. The stochastic process $R^{(A)}(\cdot)$ is required to satisfy the SDE
$$R^{(A)}(t) = R^{(A)}(0) + \int_0^t \sigma^{(A)}\big(R^{(A)}(s)\big)\, dB(s) + \int_0^t b^{(A)}\big(R^{(A)}(s)\big)\, ds - S(t), \qquad t \ge 0. \qquad (6.36)$$
Poisson process (total claim amount process), a real valued random variable. We assume that $\{\mathcal{G}_t : t \ge 0\}$ is a right continuous filtration such that $\sigma\{R^{(A)}(0), Z(s), S(r) : 0 \le s, r \le t\} \subseteq \mathcal{G}_t$, $t \ge 0$. We also assume that for each $t \ge 0$, $\{B(t+r) - B(t),\ S(t+a) - S(t) : r, a \ge 0\}$ is independent of $\mathcal{G}_t$. Here 'right continuous filtration' means that $\mathcal{G}_t$ is a nondecreasing family of sub-$\sigma$-algebras with the property $\mathcal{G}_t = \mathcal{G}_{t+} := \bigcap_{h>0} \mathcal{G}_{t+h}$ for each $t \ge 0$. This is an important technical assumption making arguments involving stopping times simpler; see [KS] and references therein for details and nuances. By a strong solution to the SDE (6.36) we mean a $\{\mathcal{G}_t\}$-adapted process $\{R^{(A)}(t) : t \ge 0\}$ with r.c.l.l. sample paths satisfying the equation (6.36) with probability one.
Note that the corresponding continuous diffusion with drift $b^{(A)}$ and dispersion $\sigma^{(A)}$ is the solution to the Brownian driven SDE
$$Y^{(A)}(t) = Y^{(A)}(0) + \int_0^t \sigma^{(A)}\big(Y^{(A)}(s)\big)\, dB(s) + \int_0^t b^{(A)}\big(Y^{(A)}(s)\big)\, ds, \qquad t \ge 0. \qquad (6.37)$$
It is well known that a unique strong solution of (6.37) exists when $\sigma^{(A)}, b^{(A)}$ are Lipschitz continuous. Moreover the process $Y^{(A)}(\cdot)$ is a strong Markov process with continuous sample paths. See [KS], [IW] for more details and information; see also Appendix A.4.
the claim arrival times. We know that $0 < T_1 < T_2 < \cdots < T_n < \cdots$ and $T_n \uparrow \infty$ a.s.
On the random interval $[0, T_1)$ our (6.36) is just the Brownian driven SDE (6.37) with $Y^{(A)}(0) = R^{(A)}(0)$; so $Y^{(A)}(\cdot)$ is well defined. Now put $R^{(A)}(T_1) = Y^{(A)}(T_1) - X_1$, where $X_1$ is the jump at $T_1$; note that $X_1$ is the claim size for the first claim. Clearly the above is the unique solution on $[0, T_1]$. By the strong Markov property of $B(\cdot)$, $S(\cdot)$, note that $R^{(A)}(T_1)$ is independent of $\{B(T_1 + s) - B(T_1),\ S(T_1 + r) - S(T_1) : s, r \ge 0\}$; see [IW], [KS] for a detailed discussion of the strong Markov property; a brief account is given in Appendix A.4. So, with $R^{(A)}(T_1)$ playing the role of $R^{(A)}(0)$, repeat the above procedure on $[T_1, T_2)$, using the Brownian driven SDE on $[T_1, T_2)$ to get $R^{(A)}(t)$ for $T_1 \le t < T_2$ (and hence for $t \in [0, T_2)$). Then put $R^{(A)}(T_2) = R^{(A)}(T_2-) - X_2$. This determines the solution uniquely on $[0, T_2]$.
Proceed inductively to get the unique strong solution on $[0, T_n]$ for each $n$. As $T_n \uparrow \infty$ we are done. $\Box$
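The interlacing construction can be mimicked numerically. The sketch below (our own illustration; the coefficient functions, claim law and parameter values are placeholders, not the text's) simulates one path of $R^{(A)}$ by running an Euler–Maruyama scheme for the Brownian-driven SDE between the Poisson claim epochs and subtracting a claim size at each epoch, mirroring the proof.

```python
import math
import random

def simulate_path(x0, sigma_A, b_A, lam, claim, T, n_steps=200, rng=None):
    """One path of R^(A) on [0, T]: Euler-Maruyama for the Brownian-driven SDE
    between claim epochs T_1 < T_2 < ..., with R(T_n) = R(T_n-) - X_n at each epoch."""
    rng = rng or random.Random(42)
    epochs, t = [], rng.expovariate(lam)
    while t < T:                          # Poisson claim arrival times in [0, T]
        epochs.append(t)
        t += rng.expovariate(lam)
    r, t0 = x0, 0.0
    for t1 in epochs + [T]:
        h = (t1 - t0) / n_steps
        for _ in range(n_steps):          # diffuse on [t0, t1]
            dB = rng.gauss(0.0, math.sqrt(h))
            r += b_A(r) * h + sigma_A(r) * dB
        if t1 < T:
            r -= claim(rng)               # downward jump by the claim size
        t0 = t1
    return r

# example run: linear investment-type coefficients, exponential claims (placeholders)
r_T = simulate_path(10.0, lambda r: 0.2 * abs(r), lambda r: 1.0 + 0.05 * r,
                    lam=0.5, claim=lambda rng: rng.expovariate(1.0), T=5.0)
```

With the dispersion and claims switched off the scheme reduces to the deterministic drift, which gives a simple correctness check.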
A useful comparison result, which may be of independent interest, is
given next. Note that no moment conditions on claim sizes are assumed
(ii) Let $\xi_1, \xi_2$ be random variables independent of $B(\cdot)$, $S(\cdot)$ such that $\xi_1 \le \xi_2$ a.s. Let $R^{(i)}$ be the solution to the SDE (6.36) with initial value $\xi_i$, $i = 1, 2$. Then $P\big(R^{(1)}(t) \le R^{(2)}(t) \text{ for all } 0 \le t < \infty\big) = 1$.
(iii) Let $R^{(A)}(0) \le Y^{(A)}(0)$. Then for any $T > 0$ there exist constants $C_1, C_2 > 0$ such that
(6.38)
(ii) Let $R^{(A)}(\cdot)$ be as in Theorem 6.12. For any bounded continuous function $f$ on $\mathbb{R}$, $t > 0$, the function $x \mapsto E_x\big[f(R^{(A)}(t))\big]$ is bounded continuous.
Indeed, denoting by $Y^{(n)}$ the solution to the Brownian driven SDE (6.37) with initial value $x_n$, and using Theorem 6.13 (iii), we have
$$E\int_0^T \big|\sigma^{(A)}(R^{(n)}(s))\big|^2\, ds = \big|\sigma^{(A)}(0)\big|^2\, E\int_0^T 1_{(-\infty,0)}\big(R^{(n)}(s)\big)\, ds + E\int_0^T 1_{[0,\infty)}\big(R^{(n)}(s)\big)\cdot \big|\sigma^{(A)}(R^{(n)}(s))\big|^2\, ds$$
condition we get
$$\int_0^T \big|\sigma^{(A)}(R^{(n)}(r)) - \sigma^{(A)}(R^{(0)}(r))\big|^2\, dr$$
In particular, for $k \ge 1$, $0 \le s_1 < s_2 < \cdots < s_k$ and any bounded measurable function $f$ on $\mathbb{R}^k$
$\Box$
In fact, a stronger property holds, which is also very useful. Recall that a random variable $\tau$ taking values in $[0, \infty] := [0, \infty) \cup \{+\infty\}$ is called a stopping time w.r.t. $\{\mathcal{G}_t\}$ if $\{\tau \le t\} \in \mathcal{G}_t$ for all $t \ge 0$. Define the pre-$\tau$ $\sigma$-algebra
$$R^{(A)}(\tau + t) = R^{(A)}(\tau) + \int_{\tau}^{\tau + t} \sigma^{(A)}\big(R^{(A)}(r)\big)\, dB(r) + \int_{\tau}^{\tau + t} b^{(A)}\big(R^{(A)}(r)\big)\, dr - \big(S(\tau + t) - S(\tau)\big)$$
In particular, for any $k \ge 1$, $0 \le s_1 < \cdots < s_k < \infty$ and any bounded measurable function $f$ on $\mathbb{R}^k$
$$g\big(Y^{(A)}(t)\big) - g\big(Y^{(A)}(0)\big) - \int_0^t \tilde L^{(A)} g\big(Y^{(A)}(s)\big)\, ds \quad \text{is a continuous martingale}, \qquad (6.48)$$
where
(6.49)
(6.51)
when the net profit condition holds; see (5.30). See Exercise 6.2.7. So, having a nondegenerate investment plan at 0 increases the ruin probability. This is contrary to our objective, and hence nondegeneracy at 0 is not desirable.
$$h(t) = 0 + \int_0^t b^{(A)}(h(s))\, ds + \int_0^t \sigma^{(A)}(h(s))\, \varphi(s)\, ds \quad \text{for some } \varphi \in C^{\infty}(\mathbb{R}) \Big\}.$$
By the support theorem of Stroock and Varadhan (see [SV1]), the support of $P\circ\big(Y_0^{(A)}\big)^{-1}$ = closure of $H$ in $C_0[0, \infty)$. As $\sigma^{(A)}(0) = 0$, $b^{(A)}(\cdot) \ge c > 0$ and $Y^{(A)}(0) = 0$, any $h \in H$ will satisfy $h(t) > 0$ for all sufficiently small $t > 0$. So for any $w$ in the support of $P\circ\big(Y_0^{(A)}\big)^{-1}$ we have $w(t) \ge 0$ for all sufficiently small $t \ge 0$. Hence $P(\tilde\tau_0 > 0) = 1$.
Next suppose $P(\tilde\tau_0 < \infty) > 0$. By continuity of sample paths, $Y^{(A)}(\tilde\tau_0(\omega)) = 0$ for any $\omega$ with $\tilde\tau_0(\omega) < \infty$. Then by the strong Markov property of $Y^{(A)}$ and the preceding paragraph, it follows that $Y^{(A)}(\tilde\tau_0(\omega) + t) \ge 0$ for all sufficiently small $t \ge 0$, for a.e. $\omega$. This would contradict the definition of $\tilde\tau_0(\omega)$. Hence $P(\tilde\tau_0 = \infty) = 1$. $\Box$
Thus the first two cases can occur only with probability zero. 0
The next result identifies the infinitesimal generator of the Markov
process R(A)(.).
where $F_X$ is the distribution function of claim size, and $f'$, $f''$ are as in Definitions 6.5, 6.17.
$$e^{-\lambda t}\, E_y\big[f\big(Y^{(A)}(t)\big) - f(y)\big] \qquad (6.54)$$
where $C$ is independent of $y$, $t$.
Let $y \ge 0$. Then by Lemma 6.19, $Y^{(A)}(t) \ge 0$ for all $t$. By our assumption on $f$, note that $\tilde L^{(A)} f$ is bounded. Hence, as $f$ is $C^2$ on $[0, \infty)$, by (6.48) we get (extending $f$ to a $C^2$-function on $\mathbb{R}$ if necessary),
$$E_y\big[f\big(Y^{(A)}(t)\big) - f(y)\big]$$
Boundedness of $\tilde L^{(A)} f$, and $t - \eta \le t$, now imply that (6.54) holds for $y < 0$. $\Box$
Proof of Theorem 6.21: We may assume that $f$ is as in Lemma 6.22; and by Exercise 6.2.9 it is enough to consider $x \ge 0$. Fix $x \ge 0$. Note that for $t \ge 0$,
Clearly $h(t) = o(t)$, $t \downarrow 0$, as $f$ is bounded and $P(N(t) \ge 2) = o(t)$, $t \downarrow 0$. On $\{N(t) = 0\}$, we know that $R^{(A)}(s) = Y^{(A)}(s) \ge 0$, $S(s) = 0$, $0 \le s \le t$. So as (in the derivation of (6.55)) in the proof of Lemma 6.23, using (6.48) and independence of $Y^{(A)}$ and $N$ we get
$$\frac{1}{t}\, e^{-\lambda t}\, E_x\Big[\int_0^t \tilde L^{(A)} f\big(Y^{(A)}(r)\big)\, dr\Big] \longrightarrow \tilde L^{(A)} f(x) = \frac{1}{2}\big(\sigma^{(A)}(x)\big)^2 f''(x) + b^{(A)}(x)\, f'(x). \qquad (6.58)$$
Here we have used the fact that $\tilde L^{(A)} f$ is a bounded continuous function when restricted to $[0, \infty)$, because $f$ is as in Lemma 6.22.
Next, on the set $\{N(t) = 1\}$ clearly $S(t) = X_1$, the first claim size. Therefore
$$|J_1(t)| = \Big| E_x\Big[ E\Big[ \big\{ f\big(R^{(A)}(t)\big) - f\big(R^{(A)}(T_1)\big) \big\} \,\Big|\, \mathcal{G}_{T_1} \Big] \Big] \Big|$$
Proof: (i) We will prove the estimate in (i); well-definedness will be evident on the way. Let $t_0 > 0$ be fixed. For any event $U$ we denote $P_x(U) := P\big(U \mid R^{(A)}(0) = x\big)$. Consider first $x \ge 0$. By our hypotheses on $g$, Theorem 6.13 and known estimates concerning Brownian driven SDEs we get for $0 \le t \le t_0$
(6.62)
where $C_1, C_2$ are constants depending only on $t_0$, $K$, the Lipschitz and growth constants of $\sigma$, $b$.
Next let $x < 0$, and $\eta = \inf\{t \ge 0 : R^{(A)}(t) \ge 0\}$. Since jumps are negative it is clear that $R^{(A)}(\eta) = 0$ on $\{\eta < \infty\}$. Therefore by the strong Markov property (Theorem 6.16) we obtain for $0 \le t \le t_0$
for all $0 \le t \le 1$; cf. (6.62). As the r.h.s. of the above is known to be integrable, an application of the dominated convergence theorem completes the proof of (ii).
(iii) See Exercise 6.2.11.
(iv) Let $t > 0$ be fixed; (for $t = 0$ there is nothing to prove). Let $x_n \downarrow x_0$. For $n = 0, 1, 2, \ldots$ let $R^{(n)}(\cdot)$ denote the solution to the SDE (6.36) with initial value $x_n$. Let $y > 0$ be such that $x_0 \le x_n < y$ for all $n$. Let $Y(\cdot)$ be the solution to the Brownian driven SDE (6.37) with initial value $y$. By the comparison result (Theorem 6.13) we have
$$R^{(0)}(t) \le R^{(n)}(t) \le Y(t) \quad \text{a.s. for all } n. \qquad (6.64)$$
By Theorem 6.14 (i) we have $R^{(n)}(t) \to R^{(0)}(t)$ in probability. So, for any subsequence $\{x_{n'}\}$ of $\{x_n\}$ there is a further subsequence $\{x_{n''}\}$ such that $R^{(n'')}(t) \to R^{(0)}(t)$ a.s. As $g$ is right continuous, (6.64) now implies that $g\big(R^{(n'')}(t)\big) \to g\big(R^{(0)}(t)\big)$ a.s. By our hypothesis on $g$ and (6.64) we see that
$$\big|g\big(R^{(n'')}(t)\big)\big| \le C_1 + C_2\, |Y(t)|^2, \quad \text{for all } n''.$$
Since $Y(t)$ is square integrable, by the dominated convergence theorem $E\big[g\big(R^{(n'')}(t)\big)\big] \to E\big[g\big(R^{(0)}(t)\big)\big]$ as $n'' \to \infty$. So any subsequence has a further subsequence converging to $E\big[g\big(R^{(0)}(t)\big)\big]$. Therefore $E\big[g\big(R^{(n)}(t)\big)\big] \to E\big[g\big(R^{(0)}(t)\big)\big]$ as $n \to \infty$; that is, $T_t^{(A)} g(x_n) \to T_t^{(A)} g(x_0)$ whenever $x_n \downarrow x_0$. Hence $T_t^{(A)} g(\cdot)$ is right continuous, completing the proof. $\Box$
Lemma 6.26 Let the hypotheses of Theorem 6.21 hold. For $f \in \mathcal{D}_1$, $x \in \mathbb{R}$, $t \mapsto T_t^{(A)} f(x)$ is a continuous function on $[0, \infty)$.
Proof: Though we need to prove only left continuity at $t > 0$, our proof will also give continuity at $t > 0$. We follow the approach in [Dy], Vol. 1, p. 35; we just sketch the proof.
Let $0 < a < b < t$, $0 < s < t - h$, $s < t$, $h$ small. As $\{T_r^{(A)} : r \ge 0\}$ is a contraction semigroup,
$$\sup_x \Big| T_s^{(A)} T_{t-h-s}^{(A)} f(x) - T_s^{(A)} T_{t-s}^{(A)} f(x) \Big| \le \big\| T_{t-h-s}^{(A)} f - T_{t-s}^{(A)} f \big\|. \qquad (6.65)$$
Since $r \mapsto \|T_r^{(A)} f\|$ is integrable over $[0, t+1]$, by Exercise 6.1.6, the r.h.s. of (6.66) tends to 0 as $h \to 0$. In fact we have proved $t \mapsto T_t^{(A)} f$ is continuous in the 'sup' norm. $\Box$
In the proof above, note that $f'$, $f''$ do not enter the picture; only boundedness of $f$ is used. This leads to the next lemma.
$$\frac{T_{t+h}^{(A)} f(x) - T_t^{(A)} f(x)}{h} = T_t^{(A)}\Big\{ \frac{1}{h}\big[T_h^{(A)} f(x) - f(x)\big] \Big\},$$
proving the first assertion. As $T_t^{(A)} f(x)$ and $T_t^{(A)} L^{(A)} f(x)$ are respectively continuous and right continuous functions of $t$ (by Lemma 6.26 and Proposition 6.25), by Lemma 0.10 on p. 237 of [Dy], Vol. 2, (6.67) and (6.68) are equivalent. $\Box$
The next result is the analogue of Theorem 6.8.
Theorem 6.29 Under the hypotheses of Theorem 6.21, for any $f \in \mathcal{D}_1$, any nonnegative square integrable $R^{(A)}(0)$,
$$f\big(R^{(A)}(t)\big) - f\big(R^{(A)}(0)\big) - \int_0^t L^{(A)} f\big(R^{(A)}(s)\big)\, ds, \qquad t \ge 0$$
is a martingale.
Proof: By Corollary 6.24 and Theorem 6.13, for $0 \le s \le t$ we have
where $Y^{(A)}(0) = R^{(A)}(0)$. It follows that $L^{(A)} f\big(R^{(A)}(s)\big)$, $0 \le s \le t$, and $M^{(A)}(t)$ are integrable. In view of the preceding proposition, imitating the proof of Theorem 6.8 the required result can now be established. $\Box$
Next, define the ruin probability of the process $R^{(A)}(\cdot)$ by
(6.71)
(i) $L^{(A)} u(x) = 0$, if $x \ge 0$;
(ii) $u(x) = 0$, if $x < 0$; and \qquad (6.73)
(iii) $\lim_{x\to\infty} u(x) = 1$.
Now by (6.72), (iii) of (6.73) and (6.74) it follows that $u(x) = P_x\big(\tau_0^{(A)} = \infty\big) = \varphi^{(A)}(x)$. $\Box$
The requirement (6.72) means that the process $R^{(A)}(\cdot)$ wanders off to $+\infty$ with probability one. Recall that the analogous property of the Lundberg risk process $R(\cdot)$ played a key role earlier; the net profit condition ensured this property for $R(\cdot)$ by Theorem 5.1 and Exercise 5.1.7. We want to give a sufficient condition for (6.72) to hold. For this we need a lemma.
$$f\big(R^{(A)}(t)\big) - f(x) - \int_0^t L^{(A)} f\big(R^{(A)}(s)\big)\, ds, \qquad t \ge 0$$
is a $P_x$-martingale. Fix $r > 0$. Then for $x < r$, $t > 0$, by the optional sampling theorem
Theorem 6.32 Let the hypotheses of Theorem 6.21 and the net profit condition (5.26) hold. Suppose there exist $r_0 > 0$, $u \in \mathcal{D}_1$ such that (i) $u \ge 0$ and non-increasing, (ii) $u$ is strictly decreasing on $[r_0, \infty)$, and (iii) $L^{(A)} u(\cdot) \le 0$ on $[r_0, \infty)$. Then (6.72) holds.
Proof: Let $r_1 > r_0$ be arbitrary; let $\tau_1^{(A)} = \inf\{t \ge 0 : R^{(A)}(t) \le r_1\}$. We claim that $P_x\big(\tau_1^{(A)} < \infty\big) < 1$ for all $x > r_1$. Suppose not; then there is $x > r_1$ with $P_x\big(\tau_1^{(A)} < \infty\big) = 1$. By Theorem 6.29, the optional sampling theorem and (iii) above, for any $t > 0$
By the strong Markov property, (6.75) and Lemma 6.31 we have for $i \ge 1$
J
00
0< K o < mm
. {2bar' (), -;l () (
c- 7i).) } . (6.76)
Define
J
00
J
x
~ )) )) ) + [(x + (b - ()A(x)](<p(A))'(x)
~ )) (ip(A))"'(x)
2
J
00
6.4 Exercises
6.2.1. (i) Using stochastic calculus show that (see [KS]) $Z(\cdot)$ satisfying (6.31) is given by
(ii) Show that the total claim amount $S(\cdot)$ and the unit price $Z(\cdot)$ of the risky asset are independent as processes $\Leftrightarrow$ $\{X_j, N(s) : j = 1, 2, \ldots,\ s \ge 0\}$ and $\{B(t) : t \ge 0\}$ are independent.
(iii) For any $t \ge 0$ show that $\sigma\{Z(u), S(r) : 0 \le r, u \le t\} = \sigma\{B(r), S(u) : 0 \le r, u \le t\}$.
(iv) It will generally be assumed that $b > \zeta$. Why is this a reasonable assumption?
6.2.2. (i) Let $\theta(t)$ = number of units of the risky asset bought at time $t$. So $A\big(R^{(A)}(t-)\big) = \theta(t)\, Z(t)$ = amount invested in the risky asset at time $t$. (Why?) Argue that $R^{(A)}$ is governed by the SDE
$$dR^{(A)}(t) = c\, dt - dS(t) + \theta(t)\, dZ(t) + \zeta\,\big[R^{(A)}(t) - \theta(t)\, Z(t)\big]\, dt, \qquad t > 0.$$
Using Exercise 6.2.1 (i), now obtain (6.32).
(ii) Show that the SDEs (6.32) and (6.33) are the same with probability one.
6.2.3. Formulate a suitable notion of a strong solution to (6.39), and establish the analogues of Theorems 6.12–6.14 for the SDE (6.39).
6.2.4. Work out the details of the proof of Theorem 6.15.
6.2.5. (i) Let $\tau$ be a stopping time w.r.t. $\{\mathcal{G}_t\}$ and $\mathcal{G}_\tau$ the associated pre-$\tau$ $\sigma$-algebra. Show that $\mathcal{G}_\tau$ is indeed a $\sigma$-algebra, and that $\tau$ is $\mathcal{G}_\tau$-measurable. (It can be shown that $R^{(A)}(\tau)$, defined on $\{\tau < \infty\}$, is also $\mathcal{G}_\tau$-measurable; see p. 9 of [KS].)
(ii) Show that the jump times $T_n$, $n \ge 1$ and the ruin time $\tau_0^{(A)}$ are stopping times w.r.t. $\{\mathcal{G}_t\}$; (for the second assertion you may need the assumption that the filtration $\{\mathcal{G}_t\}$ is right continuous, that is, $\mathcal{G}_t = \mathcal{G}_{t+} := \bigcap_{h>0} \mathcal{G}_{t+h}$ for all $t \ge 0$).
6.2.6. The purpose of this exercise is to show that nondegeneracy of the dispersion coefficient at $x = 0$ leads to immediate ruin of $R^{(A)}$. Let $A, \sigma^{(A)}, b^{(A)}, R^{(A)}, Y^{(A)}$ be as in (6.36), (6.37). Let the ruin time $\tau_0^{(A)}$ of $R^{(A)}$ be defined by (6.51) and put $\tilde\tau_0 = \inf\{t \ge 0 : Y^{(A)}(t) < 0\}$. Assume that $\big(\sigma^{(A)}(0)\big)^2 > 0$; so by continuity, $\inf\{(\sigma^{(A)}(y))^2 : y \in [-x_0, x_0]\} > 0$ for some $x_0 > 0$.
(i) To prove $P\big(\tau_0^{(A)} = 0 \mid R^{(A)}(0) = 0\big) = 1$ it is enough to prove that $P\big(\tilde\tau_0 = 0 \mid Y^{(A)}(0) = 0\big) = 1$.
(ii) Show that one may assume without loss of generality that $b^{(A)} \equiv 0$. (Hint: It is enough to prove that $P\big(\tilde\tau_0 \wedge T = 0 \mid Y^{(A)}(0) = 0\big) = 1$ for all $T > 0$. Then use Girsanov's theorem for continuous diffusions; see p. 202 of [Du] or p. 302 of [KS].)
(iii) Now we can take $Y^{(A)}(t) = \int_0^t \sigma^{(A)}\big(Y^{(A)}(s)\big)\, dB(s)$, $t \ge 0$. Assume that $\inf_{y\in\mathbb{R}} \big(\sigma^{(A)}(y)\big)^2 > 0$. Using Theorem 3.4.6, p. 174 of [KS] (or Theorem 4.4, Chap. 3, p. 113 of [Du]) conclude that $Y^{(A)}$ is a time-changed Brownian motion. Then use Blumenthal's 0-1 law (p. 14 of [Du] or p. 94 of [KS]) to conclude that $P\big(\tilde\tau_0 = 0 \mid Y^{(A)}(0) = 0\big) = 1$.
6.2.8. Give a proof of Lemma 6.22.
(Hint: Fix $x \in \mathbb{R}$, $f \in \mathcal{D}_1$; let $K > 0$, which will be chosen suitably. Let $f_K \in \mathcal{D}_1$ be such that $x + K > 0$, $f_K(y) = f(y)$, $y \le x + K$, $f_K(y) = 0$, $y \ge x + 2K$, and $\|f_K\|_\infty := \sup |f_K(y)| \le \sup |f(y)| =: \|f\|_\infty$. By the comparison result Theorem 6.13 and Chebyshev's inequality, for all $0 < t \le 1$,
$$E_x\big[f\big(R^{(A)}(t)\big) - f_K\big(R^{(A)}(t)\big)\big]$$
Using standard arguments for SDEs driven by Brownian motion, show that for $0 \le t \le 1$, $E_x\big(|Y^{(A)}(t) - x|^2\big) \le ct$ where $c$ is a constant independent of $t$, $x$, $K$.)
6.2.9. For $f \in \mathcal{D}_1$, $x < 0$, show that both sides of (6.52) are zero.
(Hint: As the jumps are negative, note that $\eta_0 := \inf\{t \ge 0 : R^{(A)}(t) \ge 0\} = \inf\{t \ge 0 : R^{(A)}(t) = 0\}$. Since $\sigma^{(A)}(\cdot) = 0$, $b^{(A)}(\cdot) = c > 0$ on $(-\infty, 0)$, clearly $\eta_0 = \frac{1}{c}\,|x|$ a.s. $P_x$, for $x < 0$.)
6.2.10. Take $A(x) = ax$, $x \ge 0$ in Lemma 6.19, where $a > 0$ is a constant. Show directly (without invoking the support theorem) that the conclusion of the lemma holds.
6.2.11. (a) Let $g$ be as in Proposition 6.25; (assume also the hypotheses and notation of Proposition 6.25). For $h \ge 0$ set $g_h(y) = \ldots$, $y \in \mathbb{R}$. By (ii) of Proposition 6.25, $\lim_{h\downarrow 0} g_h(y) = 0$, $y \in \mathbb{R}$. Using (i) of Proposition 6.25 for $t \ge 0$, $0 \le h \le 1$, show that
$$f\big(R^{(A)}(t)\big) - f(x) - \int_0^t L^{(A)} f\big(R^{(A)}(s)\big)\, ds, \qquad t \ge 0$$
~ u(x) ~ 0, x ~ 1.
J
00
°
able $X$. It can be easily proved that $F_X$ is a nondecreasing right continuous function on $\mathbb{R}$ with $F_X(-\infty) := \lim_{x\downarrow -\infty} F_X(x) = 0$ and $F_X(+\infty) := \lim_{x\uparrow +\infty} F_X(x) = 1$. Also the set of discontinuity points of $F_X$ can be at
$$\int_{\mathbb{R}} |g(x)|\, dF_X(x) := \int_{\mathbb{R}} |g(x)|\, d\nu_X(x) < \infty$$
$$P(X_1 \le b_1, X_2 \le b_2, \ldots, X_n \le b_n) = \int_{-\infty}^{b_n} \cdots \int_{-\infty}^{b_2} \int_{-\infty}^{b_1} f(x_1, \ldots, x_n)\, dx_1\, dx_2 \cdots dx_n.$$
Here the function $f(x_1, \ldots, x_n)$ is also referred to as the joint probability density function of $X_1, X_2, \ldots, X_n$. We may also say that $X_1, X_2, \ldots, X_n$ are jointly distributed according to $f(x_1, \ldots, x_n)$.
Real valued random variables $X_1, X_2, \ldots, X_n$ are said to be independent if for any Borel sets $B_1, B_2, \ldots, B_n$ in $\mathbb{R}$
(A.1.5)
$$E\big[g(X_1, \ldots, X_n)\big] = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1, \ldots, x_n)\, f(x_1, \ldots, x_n)\, dx_1 \cdots dx_n,$$
provided the integral exists. In particular let $X_1, X_2$ have joint p.d.f. $f(x_1, x_2)$; let $X_1, X_2$ have finite expectations $m_1, m_2$ respectively. One can then define the covariance between $X_1, X_2$ by
$$\mathrm{Cov}(X_1, X_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x_1 - m_1)(x_2 - m_2)\, f(x_1, x_2)\, dx_1\, dx_2$$
for any Borel set $G \subseteq \mathbb{R}$. So if $X_1, X_2, \ldots, X_n$ are i.i.d. r.v.'s with common distribution $\nu$, then the distribution of $X_1 + \cdots + X_n$ is the $n$-fold convolution $\nu^{*(n)} = \nu * \cdots * \nu$.
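For integer-valued random variables the $n$-fold convolution can be computed directly; the sketch below (our own illustration, not from the text) convolves the law of a fair die with itself and recovers the familiar distribution of the sum of two dice.

```python
from itertools import product

def convolve(p, q):
    """Convolution of two finitely supported laws on the integers,
    each given as a {value: probability} dict; returns the law of the sum."""
    out = {}
    for (x, px), (y, qy) in product(p.items(), q.items()):
        out[x + y] = out.get(x + y, 0.0) + px * qy
    return out

die = {k: 1.0 / 6.0 for k in range(1, 7)}   # law nu of one fair die
two_dice = convolve(die, die)               # nu * nu, the law of the sum
```

Here `two_dice[7]` equals 6/36, and iterating `convolve` yields $\nu^{*(n)}$ for any $n$.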
Various notions of convergence are possible in the context of random variables. Let $X_n$, $n \ge 1$ and $X$ be random variables. We say that $\{X_n\}$ converges to $X$ almost surely (or with probability one) if there is $M \in \mathcal{F}$ such that $P(M) = 0$ and $X_n(\omega) \to X(\omega)$ for $\omega \notin M$; in other words $P(\lim X_n = X) = 1$. This is often denoted $X_n \to X$ a.s. or $X_n \to X$ w.p.1. We say that $\{X_n\}$ converges to $X$ in probability if for any $\epsilon > 0$, $\lim_{n\to\infty} P(|X_n - X| > \epsilon) = 0$.
$$\lim_{n\to\infty} \int_{\mathbb{R}} g(z)\, dF_n(z) = \int_{\mathbb{R}} g(z)\, dF(z)$$
The preceding theorem forms the philosophical basis for taking averages in many situations.
and $p_X(x) = 0$ if $x \notin \{0, 1, \ldots, n\}$. This is the binomial distribution with parameters $n$ and $p$. In this case $E(X) = np$ and $\mathrm{Var}(X) = np(1 - p)$.
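As a quick sanity check (our own illustration), these two moments can be recomputed directly from the binomial probabilities $p_X(k) = \binom{n}{k} p^k (1-p)^{n-k}$:

```python
from math import comb

def binomial_moments(n, p):
    """Mean and variance computed directly from the binomial pmf."""
    pmf = [comb(n, k) * p ** k * (1.0 - p) ** (n - k) for k in range(n + 1)]
    mean = sum(k * pk for k, pk in enumerate(pmf))
    var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))
    return mean, var

m, v = binomial_moments(10, 0.3)    # agrees with np = 3.0 and np(1-p) = 2.1
```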
The classical central limit theorem says that if $\{X_i\}$ is an i.i.d. sequence with finite nonzero variance, then $\dfrac{S_n - n\,E(X_1)}{\sqrt{n\,\mathrm{Var}(X_1)}}$ is asymptotically standard normal (in the sense of convergence in distribution), where $S_n = (X_1 + X_2 + \cdots + X_n)$, $n \ge 1$, denotes the sequence of partial sums. So, if the variance exists then the partial sum $S_n$ is approximately normal (or Gaussian) for large $n$; this approach has proved to be extremely useful. There are also very interesting/significant extensions giving asymptotic normality.
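As a concrete illustration of the statement (ours, not the text's), one can compare an exact binomial probability with its normal approximation: for $X_i$ i.i.d. Bernoulli$(p)$, $P\big(S_n \le np + \sqrt{np(1-p)}\big)$ should approach $\Phi(1) \approx 0.8413$ as $n$ grows.

```python
from math import comb, erf, sqrt

def normal_cdf(z):
    """Standard normal distribution function Phi."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def binomial_cdf(n, p, x):
    """Exact P(S_n <= x) for S_n ~ Binomial(n, p)."""
    return sum(comb(n, k) * p ** k * (1.0 - p) ** (n - k) for k in range(int(x) + 1))

n, p = 400, 0.5
threshold = n * p + sqrt(n * p * (1.0 - p))   # one standard deviation above the mean
exact = binomial_cdf(n, p, threshold)         # close to Phi(1) for large n
```

The small remaining discrepancy is the usual continuity-correction effect for a lattice distribution.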
However, when second moments do not exist the above result is of limited use, as does happen in actuarial contexts involving heavy-tailed distributions. Still, a discussion of the central limit problem gives a good perspective regarding motivation for the various definitions encountered in the sequel. It may be mentioned that the central limit problem was investigated purely as a mathematical curiosity, and many outstanding results were obtained by Levy, Doeblin, Kolmogorov, Khinchin, Feller and others.
Motivated by the CLT, the following question is meaningful.
Question: What are the possible nondegenerate limit distributions for partial sums $S_n$ (of i.i.d.'s) when suitably centered and normalized?
This is related to the following definition.
where $Y_1, Y_2, \ldots, Y_n$ are i.i.d.'s with $R$ as distribution function, and $Y$ also has $R$ as its distribution function.
Proposition A.2.3 $R$ is stable $\Leftrightarrow$ for any real numbers $c_1, c_2$ and $Y, Y_1, Y_2$ independent random variables with common distribution function $R$, there exist constants $c \neq 0$, $\gamma$ such that $c_1 Y_1 + c_2 Y_2 \stackrel{d}{=} c\,Y + \gamma$. Thus the class of stable distributions coincides with the class of distributions closed under convolutions and multiplication by real numbers, up to changes of location and scale. $\Box$
The connection between our question and the definitions above is the following.
In fact one can say more than the preceding theorem. A major role is played in the analysis by the function
and
$$\frac{1 - P(x)}{1 - P(x) + P(-x)} \to p, \qquad \frac{P(-x)}{1 - P(x) + P(-x)} \to (1 - p), \qquad \text{as } x \to \infty, \qquad (A.2.4)$$
where $\alpha \in (0, 2)$, $L$ is slowly varying, $0 \le p \le 1$.
(b)$'$ (equivalent form of (b)): $P$ belongs to the domain of attraction of a non-normal stable distribution if and only if
Remark A.2.7 As can be discerned from the theorem above, the parameter $\alpha \in (0, 2]$ plays a dominant role in the description of a stable distribution. To give an idea of this, let $X$ be a random variable with distribution function $F$. Suppose the corresponding Fourier transform (or characteristic function) is given by
$$E\, e^{itX} = e^{-c|t|^{\alpha}}, \qquad t \in \mathbb{R}, \qquad (A.2.6)$$
where $c > 0$, $0 < \alpha \le 2$ are constants, and $i = \sqrt{-1}$. Then $F$ is a stable distribution. Note that $\alpha = 2$ corresponds to the Gaussian case. In this case the random variable $X$ is symmetric about the origin, and hence is also called the symmetric $\alpha$-stable distribution. For $0 < \alpha < 2$ it can be shown that $X$ has finite moments only of order $< \alpha$. A representation for the characteristic function/Fourier transform of a nonnormal stable distribution can be given, with $\alpha \in (0, 2)$ once again determining tail behaviour and existence of moments. See [Fe], [EKM], [L], ... In fact, for studying stable distributions, Fourier transforms seem to be most amenable to analysis. $\Box$
Remark A.2.8 (i) If $F$ has finite nonzero variance $\sigma^2$, then we know from the classical CLT that $b_n = n\, E(X_1)$, $a_n = \sqrt{n}\, \sigma$.
(ii) Let $G$ denote the symmetric $\alpha$-stable distribution. Let $\{X_i\}$ be an i.i.d. sequence distributed according to $G$. Let $S_n = X_1 + \cdots + X_n$, $n \ge 1$. As the $X_i$'s are symmetric about 0, it is natural to take $b_n = 0$ as the centering factor. Now $S_n \stackrel{d}{=} a_n X_1$ by definition of stable distribution for some $a_n > 0$. So by (A.2.6), for $t \in \mathbb{R}$
$$e^{-nc|t|^{\alpha}} = E\, e^{itS_n} = E\, e^{i t a_n X_1} = e^{-c|a_n t|^{\alpha}}, \qquad (A.2.8)$$
and hence
$$a_n = n^{1/\alpha}. \qquad (A.2.9)$$
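The scaling $a_n = n^{1/\alpha}$ can be verified mechanically at the level of characteristic functions (our own illustration): raising $e^{-c|t|^{\alpha}}$ to the $n$-th power must reproduce the characteristic function of $n^{1/\alpha} X_1$.

```python
from math import exp, isclose

def stable_chf(t, c, alpha):
    """Characteristic function of a symmetric alpha-stable law: exp(-c |t|^alpha)."""
    return exp(-c * abs(t) ** alpha)

c, alpha, n = 1.0, 1.5, 7
a_n = n ** (1.0 / alpha)
for t in (-2.0, -0.3, 0.1, 1.7):
    # chf of S_n is the n-th power; chf of a_n * X_1 is the chf evaluated at a_n * t
    assert isclose(stable_chf(t, c, alpha) ** n, stable_chf(a_n * t, c, alpha),
                   rel_tol=1e-12)
```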
Thus there are very good probabilistic reasons, not just aesthetic ones, why power laws (especially Pareto distributions) form an important class among the heavy tailed distributions.
For proofs of the various results discussed above, extensions/generalizations, and for more insight and applications, one may see [Fe], [EKM], [L], [ST], etc.
A.3 Martingales
The conditional expectation Y = E(X | 𝒜) of an integrable random
variable X, given a sub-σ-field 𝒜, is the 𝒜-measurable random variable
characterized by

    ∫_A X(ω) dP(ω) = ∫_A Y(ω) dP(ω),   for all A ∈ 𝒜.      (A.3.1)
4. For 0 ≤ s ≤ t the random variable B(t) - B(s) has Gaussian (nor-
mal) distribution with mean 0 and variance (t - s); i.e., B(t) -
B(s) ~ N(0, t - s). Then {B(t) : t ≥ 0} is called an {F_t}-
adapted one-dimensional Brownian motion. If a process {B₀(t)}
satisfies the above and B₀(0) ≡ 0 a.s. then B₀ is called standard
Brownian motion. With B₀ as above, x ∈ ℝ, the process
B(t) = x + B₀(t), t ≥ 0 is the one-dimensional Brownian motion
starting at x.
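The defining properties above suggest the standard simulation recipe (a sketch, not part of the text): build B₀ on a time grid from independent N(0, Δt) increments and take cumulative sums.

```python
import numpy as np

# Simulate standard Brownian motion on a grid via independent Gaussian
# increments: B0(t_{k+1}) - B0(t_k) ~ N(0, dt), with B0(0) = 0.
rng = np.random.default_rng(1)
M, N, T = 50_000, 100, 1.0              # sample paths, time steps, horizon
dt = T / N
increments = rng.normal(0.0, np.sqrt(dt), size=(M, N))
B = np.cumsum(increments, axis=1)       # B[:, k] approximates B0(t_{k+1})

# Monte Carlo check of the defining property at t = T = 1:
print(B[:, -1].mean())                  # ≈ 0     (E B0(1) = 0)
print(B[:, -1].var())                   # ≈ 1 = T (Var B0(1) = 1)
```

Adding a constant x to every path gives the Brownian motion starting at x, exactly as in the definition above.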
    p(s, x; t, z) = (1/√(2π(t-s))) exp{ -(z - x)²/(2(t - s)) },      (A.4.1)

and, for any Borel set A ⊂ ℝ and 0 ≤ s < t,

    P(B(t) ∈ A | F_s) = ∫_A p(s, B(s); t, z) dz.      (A.4.2)
This is an easy consequence of the independent increments property.
Clearly with 0 ≤ s < t, x ∈ ℝ fixed, z ↦ p(0, x; t-s, z) is the probability
density function of B(t - s) given B(0) = x; this is also the p.d.f. of
B(t) conditional on B(s) = x. Note that (A.4.2) holds even if B(0) is a
random variable, of course measurable w.r.t. F₀. It is not difficult to
prove that the distribution of B(0), called the initial distribution, and
the function p given by (A.4.1) determine the distribution of the process
{B(t) : t ≥ 0}. The distribution of the Brownian motion process is
a probability measure on the space C[0, ∞) of continuous functions on
[0, ∞); this probability measure is referred to as Wiener measure.
In fact a stronger version of (A.4.2), called the strong Markov prop-
erty, holds. Let T be a finite stopping time w.r.t. {F_t}. Then the
stochastic process {B̃(t) = B(T + t) - B(T) : t ≥ 0} is again distributed
as a standard Brownian motion, and for any Borel set A,

    P(B(T+s) ∈ A | σ{T, B(T)}) = ∫_A p(0, B(T); s, z) dz.      (A.4.3)
Moreover, the transition density p satisfies the backward heat equation

    ∂p/∂s + (1/2) ∂²p/∂x² = 0,   0 < s < t, x ∈ ℝ.      (A.4.4)
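One can verify (A.4.4) directly from the Gaussian density (A.4.1) by differentiation; writing τ = t - s, a sketch of the computation:

```latex
% Verification that p = (2\pi\tau)^{-1/2} e^{-(z-x)^2/(2\tau)},
% with \tau = t - s, satisfies the backward heat equation (A.4.4).
\begin{aligned}
\frac{\partial p}{\partial s} &= -\frac{\partial p}{\partial \tau}
  = \Big(\frac{1}{2\tau} - \frac{(z-x)^2}{2\tau^2}\Big)\, p, \\[2pt]
\frac{\partial p}{\partial x} &= \frac{z-x}{\tau}\, p, \qquad
\frac{\partial^2 p}{\partial x^2}
  = \Big(\frac{(z-x)^2}{\tau^2} - \frac{1}{\tau}\Big)\, p, \\[2pt]
\frac{\partial p}{\partial s} + \frac{1}{2}\,\frac{\partial^2 p}{\partial x^2}
 &= \Big(\frac{1}{2\tau} - \frac{(z-x)^2}{2\tau^2}
   + \frac{(z-x)^2}{2\tau^2} - \frac{1}{2\tau}\Big)\, p \;=\; 0.
\end{aligned}
```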
    I_t(f)(ω) ≜ ∫₀ᵗ f(s, ω) dB(s, ω)
              = Σ_{i=0}^{n-1} a_i(ω) [B(t_{i+1} ∧ t, ω) - B(t_i ∧ t, ω)],   0 ≤ t ≤ T.      (A.4.6)
Note that for each i, a_i and [B(t_{i+1} ∧ t) - B(t_i ∧ t)] are independent.
Consequently it is easily proved that {I_t(f) : t ∈ [0, T]} is a martingale
w.r.t. {F_t : t ∈ [0, T]}. With a little effort one can prove that

    E[(I_t(f) - I_s(f))² | F_s] = E[ ∫ₛᵗ |f(u)|² du | F_s ]      (A.4.7)

for 0 ≤ s ≤ t ≤ T.
Next let g : [0, T] × Ω → ℝ be a measurable function satisfying
(i) for each t ∈ [0, T], the map (s, ω) ↦ g(s, ω), 0 ≤ s ≤ t, ω ∈ Ω is
jointly measurable as a map from ([0, t] × Ω, B[0, t] × F_t) to the real line
(that is, g is progressively measurable);
(ii) E ∫₀ᵀ |g(s)|² ds < ∞.
Such a g is the limit, in the norm given by (ii), of a sequence {f_n} of
simple functions as above. Now define I_t(g) ≡ ∫₀ᵗ g(s) dB(s) = lim_{n→∞} I_t(f_n),
where the limit is taken in L²(Ω, P).
The process I_t(g) = ∫₀ᵗ g(s) dB(s), t ≥ 0 is called the Ito integral of g; it is
clearly linear in g. The following properties of the process {I_t(g) : t ≥ 0}
are not difficult to establish:
i) progressive measurability w.r.t. {F_t}, and continuity of sample paths
with probability one;
ii) {I_t(g) : t ≥ 0} is a martingale with respect to {F_t : t ≥ 0};
iii) {(I_t(g))² - ∫₀ᵗ |g(s)|² ds : t ≥ 0} is also a martingale w.r.t. {F_t}.
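Properties ii) and iii) can be illustrated numerically (a sketch, not part of the text) for the integrand g(s) = B(s): approximating I_T = ∫₀ᵀ B dB by its left-endpoint Ito sums, property ii) gives E[I_T] = I_0 = 0, while property iii) gives E[I_T²] = E ∫₀ᵀ B(s)² ds = ∫₀ᵀ s ds = T²/2.

```python
import numpy as np

# Left-endpoint Ito sums for the integral of B dB over [0, T]:
#   I_T ≈ sum_i B(t_i) (B(t_{i+1}) - B(t_i)).
# Checks: E[I_T] = 0 (martingale property ii) and
#         E[I_T^2] = T^2/2 (from property iii, via Ito isometry).
rng = np.random.default_rng(42)
M, N, T = 50_000, 200, 1.0                        # paths, steps, horizon
dt = T / N
dB = rng.normal(0.0, np.sqrt(dt), size=(M, N))    # Brownian increments
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((M, 1)), B[:, :-1]]) # B(t_i), left endpoints

I = np.sum(B_left * dB, axis=1)                   # Ito sums I_T per path

print(I.mean())        # ≈ 0    (property ii)
print((I**2).mean())   # ≈ 0.5  (= T^2/2, from property iii)
```

Evaluating the integrand at the left endpoint of each interval is what makes the sums a martingale; a midpoint or right-endpoint rule would converge to a different (Stratonovich-type) limit.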
With more effort one can define the Ito integral ∫₀ᵗ g(s) dB(s) for pro-
gressively measurable functions g satisfying only

    P( ∫₀ᵗ |g(s)|² ds < ∞ ) = 1   for each t ∈ [0, T].      (A.4.8)

For f : [0, T] × ℝ → ℝ continuously differentiable in the first variable
and twice continuously differentiable in the second (write f₀ = ∂f/∂s,
f₁ = ∂f/∂x, f₁₁ = ∂²f/∂x²), Ito's formula states

    f(t, B(t)) - f(0, B(0)) = ∫₀ᵗ f₁(s, B(s)) dB(s) + ∫₀ᵗ f₀(s, B(s)) ds
                              + (1/2) ∫₀ᵗ f₁₁(s, B(s)) ds.      (A.4.9)
Presence of the "second order term" (the last term) on the r.h.s. of
(A.4.9) is the special feature of Ito's calculus. For a proof of Ito's formula
(A.4.9) and extensions, see [KS], [V2], [O], etc.
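As a standard illustration (not spelled out in the text), take f(t, x) = x² in Ito's formula (A.4.9): then f₀ = 0, f₁ = 2x, f₁₁ = 2, giving

```latex
B(t)^2 - B(0)^2 \;=\; 2\int_0^t B(s)\, dB(s) \;+\; t .
```

The second order term is precisely what produces the extra "+ t"; in particular, with B(0) = 0, ∫₀ᵗ B dB = (B(t)² - t)/2, which differs from what classical calculus would give.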
Now that Ito integrals are at our disposal we can talk about stochas-
tic integral equations. Let σ : [0, T] × ℝ → ℝ, b : [0, T] × ℝ → ℝ be
functions, and X(0) an F₀-measurable random variable. Consider the
stochastic integral equation

    X(t) = X(0) + ∫₀ᵗ σ(s, X(s)) dB(s) + ∫₀ᵗ b(s, X(s)) ds,   0 ≤ t ≤ T.

If X(·) solves this equation and f is as in Ito's formula, then

    f(t, X(t)) - f(0, X(0)) = ∫₀ᵗ f₁(s, X(s)) σ(s, X(s)) dB(s)
                              + ∫₀ᵗ {f₀(s, X(s)) + Lf(s, X(s))} ds      (A.4.12)
where

    Lf(s, x) = (1/2) σ²(s, x) (∂²f/∂x²)(s, x) + b(s, x) (∂f/∂x)(s, x).      (A.4.13)
    f(t, X(t)) - f(0, X(0)) - ∫₀ᵗ {f₀(s, X(s)) + Lf(s, X(s))} ds
    = a continuous martingale w.r.t. {F_t}.
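The stochastic integral equation above can be simulated by the Euler-Maruyama scheme: replace the integrals over one small step by σ(t, X)ΔB + b(t, X)Δt. A minimal sketch (not part of the text), for the illustrative choice σ(s, x) = σ₀x, b(s, x) = b₀x (geometric Brownian motion), for which E[X(t)] = X(0) e^{b₀t} gives a checkable target:

```python
import numpy as np

# Euler-Maruyama scheme for X(t) = X(0) + int sigma(s,X) dB + int b(s,X) ds.
# Illustrative coefficients (geometric Brownian motion):
#   sigma(s,x) = sig0 * x,  b(s,x) = b0 * x,
# for which E[X(t)] = X(0) * exp(b0 * t).
rng = np.random.default_rng(0)
M, N, T = 20_000, 200, 1.0            # sample paths, time steps, horizon
dt = T / N
b0, sig0, x0 = 0.05, 0.2, 1.0

X = np.full(M, x0)
for _ in range(N):
    dB = rng.normal(0.0, np.sqrt(dt), size=M)
    X = X + sig0 * X * dB + b0 * X * dt    # one Euler-Maruyama step

print(X.mean())   # ≈ exp(0.05) ≈ 1.051
```

The scheme converges weakly as dt → 0; for coefficients σ, b that are Lipschitz in x (the standard existence/uniqueness setting), this is the simplest workable discretization.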
Bibliography
[Ro2] S.M. Ross: Stochastic Processes. 2nd edition. Wiley (Asia), Sin-
gapore, 2004.