Ruin Probabilities
Søren Asmussen
World Scientific
Ruin Probabilities
ADVANCED SERIES ON STATISTICAL SCIENCE & APPLIED PROBABILITY
Editor: Ole E. Barndorff-Nielsen
Published:
Vol. 1: Random Walks of Infinitely Many Particles by P. Revesz
Vol. 2: Ruin Probabilities by S. Asmussen
Vol. 3: Essentials of Stochastic Finance: Facts, Models, Theory by Albert N. Shiryaev
Vol. 4: Principles of Statistical Inference from a Neo-Fisherian Perspective by L. Pace and A. Salvan
Vol. 5: Local Stereology by Eva B. Vedel Jensen
Vol. 6: Elementary Stochastic Calculus with Finance in View by T. Mikosch
Vol. 7: Stochastic Methods in Hydrology: Rain, Landforms and Floods, eds. O. E. Barndorff-Nielsen et al.
Vol. 8: Statistical Experiments and Decisions: Asymptotic Theory by A. N. Shiryaev and V. G. Spokoiny
Ruin Probabilities
Søren Asmussen
Mathematical Statistics, Centre for Mathematical Sciences, Lund University
Sweden
World Scientific
Singapore • New Jersey • London • Hong Kong
Published by World Scientific Publishing Co. Pte. Ltd.
P O Box 128, Farrer Road, Singapore 912805
USA office: Suite 1B, 1060 Main Street, River Edge, NJ 07661
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
Library of Congress Cataloging-in-Publication Data
Asmussen, Søren.
Ruin probabilities / Søren Asmussen.
p. cm. -- (Advanced series on statistical science and applied probability ; vol. 2)
Includes bibliographical references and index.
ISBN 981-02-2293-9 (alk. paper)
1. Insurance -- Mathematics. 2. Risk. I. Title. II. Series: Advanced series on statistical science & applied probability ; vol. 2.
HG8781 .A83 2000
368'.01 -- dc21
00-038176
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
First published 2000 Reprinted 2001
Copyright © 2000 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
Printed by Fulsland Offset Printing (S) Pte Ltd, Singapore
Contents

Preface ix

I Introduction 1
1 The risk process 1
2 Claim size distributions 5
3 The arrival process 11
4 A summary of main results and methods 13
5 Conventions 19

II Some general tools and results 23
1 Martingales 24
2 Likelihood ratios and change of measure 26
3 Duality with other applied probability models 30
4 Random walks in discrete or continuous time 33
5 Markov additive processes 39
6 The ladder height distribution 47

III The compound Poisson model 57
1 Introduction 58
2 The Pollaczeck-Khinchine formula 61
3 Special cases of the Pollaczeck-Khinchine formula 62
4 Change of measure via exponential families 67
5 Lundberg conjugation 69
6 Further topics related to the adjustment coefficient 75
7 Various approximations for the ruin probability 79
8 Comparing the risks of different claim size distributions 83
9 Sensitivity estimates 86
10 Estimation of the adjustment coefficient 93

IV The probability of ruin within finite time 97
1 Exponential claims 98
2 The ruin probability with no initial reserve 103
3 Laplace transforms 108
4 When does ruin occur? 110
5 Diffusion approximations 117
6 Corrected diffusion approximations 121
7 How does ruin occur? 127

V Renewal arrivals 131
1 Introduction 131
2 Exponential claims. The compound Poisson model with negative claims 134
3 Change of measure via exponential families 137
4 The duality with queueing theory 141

VI Risk theory in a Markovian environment 145
1 Model and examples 145
2 The ladder height distribution 152
3 Change of measure via exponential families 160
4 Comparisons with the compound Poisson model 168
5 The Markovian arrival process 173
6 Risk theory in a periodic environment 176
7 Dual queueing models 185

VII Premiums depending on the current reserve 189
1 Introduction 189
2 The model with interest 196
3 The local adjustment coefficient. Logarithmic asymptotics 201

VIII Matrix-analytic methods 215
1 Definition and basic properties of phase-type distributions 215
2 Renewal theory 223
3 The compound Poisson model 227
4 The renewal model 229
5 Markov-modulated input 234
6 Matrix-exponential distributions 240
7 Reserve-dependent premiums 244

IX Ruin probabilities in the presence of heavy tails 251
1 Subexponential distributions 251
2 The compound Poisson model 259
3 The renewal model 261
4 Models with dependent input 264
5 Finite-horizon ruin probabilities 271
6 Reserve-dependent premiums 279

X Simulation methodology 281
1 Generalities 281
2 Simulation via the Pollaczeck-Khinchine formula 285
3 Importance sampling via Lundberg conjugation 287
4 Importance sampling for the finite horizon case 290
5 Regenerative simulation 292
6 Sensitivity analysis 294

XI Miscellaneous topics 297
1 The ruin problem for Bernoulli random walk and Brownian motion. The two-barrier ruin problem 297
2 Further applications of martingales 304
3 Large deviations 306
4 The distribution of the aggregate claims 316
5 Principles for premium calculation 323
6 Reinsurance 326

Appendix 331
A1 Renewal theory 331
A2 Wiener-Hopf factorization 336
A3 Matrix-exponentials 340
A4 Some linear algebra 344
A5 Complements on phase-type distributions 350

Bibliography 363
Index 383
Preface

The most important thing to say about the history of this book is: it took too long a time to write it! In 1991, I was invited to give a course on ruin probabilities at the Laboratory of Insurance Mathematics, University of Copenhagen. Since I was to produce some handouts for the students anyway, the idea came close to expanding these to a short book on the subject, and my belief was that this could be done rather quickly. The course was never realized, but the handouts were written and the book was started (even a contract was signed, with a deadline I do not dare to write here!). But the pace was much slower than expected, and other projects absorbed my interest. As an excuse: many of these projects were related to the book, and the result is now that the book is much more related to my own research than the initial outline. Let me take this opportunity to thank above all my publisher, World Scientific Publishing Co., and the series editor Ole Barndorff-Nielsen for their patience. A similar thank goes to all colleagues who encouraged me to finish the project and continued to refer to 'the book by Asmussen' which was to appear in a year which continued to be postponed.

Risk theory in general and ruin probabilities in particular are traditionally considered part of insurance mathematics, and have been an active area of research from the days of Lundberg all the way up to today. One reason for writing this book is a feeling that the area has in recent years achieved a considerable mathematical maturity, which has in particular removed one of the standard criticisms of the area, that it can only say something about very simple models and questions. However, it would not be fair not to say that the practical relevance of the area has been questioned repeatedly. Thus, the book is basically mathematical in its flavour; I have deliberately stayed away from discussing the practical relevance of the theory, and if the formulations occasionally give a different impression, it is not by intention.

It has obviously not been possible to cover all subareas. In particular, this applies to long-range dependence, which is intensely studied in the neighboring field of queueing theory. The main motivation there comes from statistical data for network traffic (e.g. Willinger et al. [381]); concerning ruin probabilities, see in particular Michna [259], and for the effects on tail probabilities, Resnick & Samorodnitsky [303] and references therein. The book does not go into the broader aspects of the interface between insurance mathematics and mathematical finance, an area which is becoming increasingly important. Another interesting area which is not covered is dynamic control; for a brief orientation: in the classical setting of Cramér-Lundberg models, some basic discussion can be found in the books by Bühlmann [82] and Gerber [157], while more recently the standard stochastic control setting of diffusion models has been considered, see e.g. Asmussen, Højgaard & Taksar [35], Højgaard & Taksar [206] and Paulsen & Gjessing [284], see also Schmidli [325] and the references in Asmussen & Taksar [52].

A book like this can be organized in many ways. One is by model, another by method; the present book is in between these two possibilities. Chapters III-VII introduce some of the main models and give a first derivation of some of their properties. Chapters IX-X then go in more depth with some of the special approaches for analyzing specific models and add a number of results on the models in Chapters III-VII (also Chapter II is essentially methodological in its flavor). Here is a suggestion on how to get started with the book. For a first reading, read Chapter I, the first part of II.1-2 and II.6 (to understand the Pollaczeck-Khinchine formula in III.2 more properly), III.1-5, IV.1-3 and IV.4a; for a second reading, incorporate II.3-5, III.6-9, IV.5-7 and the introductory sections of the remaining chapters. The rest is up to your specific interests. Good luck!

I have tried to be fairly exhaustive in citing references close to the text. In addition, some papers not cited in the text but judged to be of interest are included in the Bibliography. It is obvious that such a system involves a number of inconsistencies and omissions, for which I apologize to the reader and the authors of the many papers who ought to have been on the list. Finally, I regret that, due to time constraints, it has not been possible to incorporate more numerical examples than the few there are.

I intend to keep a list of misprints and remarks posted on my web page, http://www.maths.lth.se/matstat/staff/asmus, and I am therefore grateful to get relevant material sent by email to asmus@maths.lth.se.

Lund, February 2000
Søren Asmussen
The second printing differs from the first only by minor corrections, many of which were pointed out by Hanspeter Schmidli. More substantial remarks, of which there are not many at this stage, as well as some additional references, continue to be at the web page.

Lund, September 2001
Søren Asmussen

Acknowledgements

Many of the figures, not least the more complicated ones, were produced by Lone Juul Hansen, Aarhus, supported by the Center for Mathematical Physics and Stochastics (MaPhySto). A number of other figures were supplied by Christian Geisler Asmussen, Fig. IV.1 by Bjarne Højgaard, the table in Example III.8.6 by my 1999 simulation class in Lund, and Fig. VII.2 by Rafal Kulik.

Parts of II.6 are reprinted from Asmussen & Schmidt [49] and parts of IX.4 from Asmussen, Schmidli & Schmidt [47] with the permission from the Applied Probability Trust. Section VII.3 is reprinted from Asmussen & Nielsen [39] and parts of IX.5 from Asmussen & Klüppelberg [36] with the permission from Elsevier Science. Parts of X.1 and X.3 are reprinted from Asmussen & Rubinstein [46] and parts of VIII.5 from Asmussen [21] with permission from CRC Press. Section VIII.1 is almost identical to Section 2 of Asmussen [26] and reprinted with permission of Blackwell Publishers.
Chapter I

Introduction

1 The risk process

In this chapter, we introduce some general notation and terminology, and give a very brief summary of some of the models, results and topics to be studied in the rest of the book.

A risk reserve process {R_t}_{t≥0}, as defined in broad terms, is a model for the time evolution of the reserves of an insurance company. We denote throughout the initial reserve by u = R_0. The probability ψ(u) of ultimate ruin is the probability that the reserve ever drops below zero,

  ψ(u) = P( inf_{t≥0} R_t < 0 | R_0 = u ).   (1.1)

The probability of ruin before time T is

  ψ(u, T) = P( inf_{0≤t≤T} R_t < 0 | R_0 = u ).   (1.2)

We also refer to ψ(u) and ψ(u, T) as ruin probabilities with infinite horizon and finite horizon, respectively. They are the main topics of study of the present book.

For mathematical purposes, it is frequently more convenient to work with the claim surplus process {S_t}_{t≥0} defined by S_t = u - R_t. Letting

  τ(u) = inf{t > 0 : R_t < 0} = inf{t > 0 : S_t > u},   (1.3)

  M = sup_{0≤t<∞} S_t,   M_T = sup_{0≤t≤T} S_t   (1.4)

be the time to ruin and the maxima with infinite and finite horizon, respectively, the ruin probabilities can then alternatively be written as

  ψ(u) = P(τ(u) < ∞) = P(M > u),   (1.5)

  ψ(u, T) = P(M_T > u) = P(τ(u) ≤ T).   (1.6)

So far we have not imposed any assumptions on the risk reserve process. However, the following setup will cover the vast majority of the book:

• There are only finitely many claims in finite time intervals. We denote the interarrival times of claims by T_2, T_3, ..., and T_1 is the time of the first claim. That is, the time of arrival of the nth claim is σ_n = T_1 + ... + T_n, and the number N_t of arrivals in [0, t] is finite,

  N_t = min{n ≥ 0 : σ_{n+1} > t} = max{n ≥ 0 : σ_n ≤ t}.

The size of the nth claim is denoted by U_n.
• Premiums flow in at rate p, say, per unit time.

Putting things together, we see that

  R_t = u + pt - Σ_{k=1}^{N_t} U_k,   S_t = Σ_{k=1}^{N_t} U_k - pt.   (1.7)

The sample paths of {R_t} and {S_t} and the connection between the two processes are illustrated in Fig. 1.1.

Figure 1.1
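The pathwise relations above can be checked on a small simulated example. The following sketch (not from the original text; all parameter values are arbitrary illustrative choices) builds one path at claim epochs and verifies that ruin defined via {R_t} dropping below 0 agrees with ruin defined via {S_t} exceeding u, and with the running maximum M_T:

```python
import random

# One path of a risk process with premium rate p, Poisson(beta) arrivals and
# Exp(1) claims, evaluated at claim epochs.  Since the reserve only increases
# between claims, ruin can only occur at a claim instant.
rng = random.Random(42)
u, p, beta = 5.0, 1.0, 0.8           # initial reserve, premium rate, arrival rate

t, total_claims = 0.0, 0.0
ruin_via_R = ruin_via_S = False
max_S = 0.0                          # running maximum M_T of the claim surplus
for _ in range(1000):
    t += rng.expovariate(beta)       # interarrival time T_n
    total_claims += rng.expovariate(1.0)   # claim size U_n
    R = u + p * t - total_claims     # reserve just after the nth claim, (1.7)
    S = u - R                        # claim surplus just after the nth claim
    max_S = max(max_S, S)
    if R < 0:
        ruin_via_R = True
    if S > u:
        ruin_via_S = True

# R < 0 iff S = u - R > u, so the two ruin indicators agree, and ruin occurs
# on the path iff the maximum of the claim surplus exceeds u, cf. (1.5)-(1.6).
assert ruin_via_R == ruin_via_S == (max_S > u)
print("ruin on this path:", ruin_via_R)
```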
Note that it is a matter of taste (or mathematical convenience) whether one allows {R_t} and/or {S_t} to continue its evolution after the time τ(u) of ruin. Thus, one could well replace R_t by R_{t∧τ(u)} or R_{t∧τ(u)} ∨ 0. For the purpose of studying ruin probabilities this distinction is, of course, immaterial.

Some main examples of models not incorporated in the above setup are:

• Models with a premium depending on the reserve (i.e., the slope of {R_t} on Fig. 1.1 should depend also on the level). We study this case in Ch. VII; a basic reference is Gerber [127].
• General Lévy processes (defined as continuous time processes with stationary independent increments) where the jump component has infinite Lévy measure, allowing a countable infinity of jumps on Fig. 1.1. We shall not deal with this case either, though many results are straightforward to generalize from the compound Poisson model.
• Brownian motion or more general diffusions. We shall discuss Brownian motion somewhat in Chapter IV, but as an approximation to the risk process rather than as a model of intrinsic merit. However, since any modeling involves some approximative assumptions, one may well argue that Brownian motion in itself could be a reasonable model, and the basic ruin probabilities are derived in XI.1.

The models we consider will typically have the property that there exists a constant ρ such that

  (1/t) Σ_{k=1}^{N_t} U_k → ρ a.s.,   t → ∞.   (1.8)

The interpretation of ρ is as the average amount of claim per unit time. A further basic quantity is the safety loading (or the security loading) η, defined as the relative amount by which the premium rate p exceeds ρ,

  η = (p - ρ)/ρ.

It is sometimes stated in the theoretical literature that the typical values of the safety loading η are relatively small, say 10%-20%; we shall, however, not discuss whether this actually corresponds to practice. It would appear obvious, however, that the insurance company should try to ensure η > 0, and in fact:

Proposition 1.1 Assume that (1.8) holds. If η < 0, then M = ∞ a.s., and hence ψ(u) = 1 for all u. If η > 0, then M < ∞ a.s., and hence ψ(u) < 1 for all sufficiently large u.

Proof. It follows from (1.8) that

  S_t/t = ( Σ_{k=1}^{N_t} U_k - pt ) / t → ρ - p,   t → ∞.

If η < 0, then this limit is > 0, which implies S_t → ∞ and hence M = ∞ a.s. If η > 0, then similarly lim S_t/t < 0, so that M < ∞ a.s. and ψ(u) = P(M > u) < 1 for all sufficiently large u.

In concrete models, we obtain typically a somewhat stronger conclusion, namely that ψ(u) < 1 for all u when η > 0, and that ψ(u) = 1 for all u holds also when η = 0. However, this needs to be verified in each separate case. The simplest concrete example (to be studied in Chapter III) is the compound Poisson model, where {N_t} is a Poisson process with rate β (say) and U_1, U_2, ... are i.i.d. and independent of {N_t}. Here it is easy to see that ρ = βEU (on the average, β claims arrive per unit time and the mean of a single claim is EU) and that also

  lim_{t→∞} (1/t) E Σ_{k=1}^{N_t} U_k = ρ.   (1.9)

Again, (1.9) is a property which we will typically encounter. However, not all models considered in the literature have this feature:

Example 1.2 (Cox processes) Here {N_t} is a Poisson process with random rate β(t) (say) at time t, and U_1, U_2, ... are i.i.d. and independent of {(β(t), N_t)}. In this case, it is not too difficult to show that ρ as defined by (1.8) is given by

  ρ = EU · lim_{t→∞} (1/t) ∫_0^t β(s) ds   (1.10)

(provided the limit exists). Thus ρ may well be random for such processes, namely if {β(t)} is non-ergodic. The simplest example is β(t) = V, where V is a r.v.; this case is referred to as the mixed Poisson process, with the most notable special case being V having a Gamma distribution, corresponding to the Pólya process. We shall only encounter a few instances of a Cox process, in connection with risk processes in a Markovian or periodic environment (Chapter VI), and here (1.8) and (1.9) hold with ρ constant.

Proposition 1.3 Assume p ≠ 1 and define R̃_t = R_{t/p}. Then the connection between the ruin probabilities for the given risk process {R_t} and those ψ̃(u), ψ̃(u, T) for {R̃_t} is given by

  ψ(u) = ψ̃(u),   ψ(u, T) = ψ̃(u, Tp).   (1.11)
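The law of large numbers (1.8) and the sign condition of Proposition 1.1 are easy to illustrate numerically. The sketch below (parameter values are arbitrary choices, not from the text) simulates a compound Poisson model with Exp(δ) claims and checks that the claims per unit time are close to ρ = βEU, so that the drift of S_t has the sign of -η:

```python
import random

# Compound Poisson model: arrival rate beta, Exp(delta) claims, premium rate p.
rng = random.Random(0)
beta, delta, p = 2.0, 4.0, 0.6
EU = 1.0 / delta                 # mean claim size
rho = beta * EU                  # average claim amount per unit time, = 0.5

T = 50_000.0                     # long horizon, to see the a.s. limit (1.8)
t, claims = 0.0, 0.0
while True:
    t += rng.expovariate(beta)   # next claim arrival
    if t > T:
        break
    claims += rng.expovariate(delta)

rho_hat = claims / T             # empirical claim amount per unit time
eta = (p - rho) / rho            # safety loading, here 20%

assert abs(rho_hat - rho) < 0.02   # (1.8): rho_hat is close to rho
assert eta > 0                     # positive loading: S_t/t -> rho - p < 0
print(f"rho_hat = {rho_hat:.3f}, eta = {eta:.0%}")
```

With η > 0 the claim surplus drifts to -∞, which is exactly why M is finite and ruin is not certain in Proposition 1.1.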
The proof of Proposition 1.3 is trivial. Note that when p = 1, the assumption η > 0 is equivalent to ρ < 1; in a number of models, we shall be able to identify ρ with the traffic intensity of an associated queue, and in fact ρ < 1 is the fundamental assumption of queueing theory ensuring steady-state behaviour (existence of a limiting stationary distribution). Since {R̃_t} has premium rate 1, the role of the result is to justify taking p = 1, which is feasible since in most cases the process {R̃_t} has a similar structure as {R_t} (for example, the claim arrivals are Poisson or renewal at the same time).

Notes and references The study of ruin probabilities, often referred to as collective risk theory or just risk theory, was largely initiated in Sweden in the first half of the century. Some of the main general ideas were laid down by Lundberg [250], while the first mathematically substantial results were given in Lundberg [251] and Cramér [91]; another important early Swedish work is Täcklind [373]. The Swedish school was pioneering not only in risk theory, but in probability and applied probability as a whole; many results and methods in random walk theory originate from there, and the area was ahead of related ones like queueing theory. Some early surveys are given in Cramér [91], Segerdahl [334] and Philipson [289]. Some main later textbooks are (in alphabetical order) Bühlmann [82], Daykin, Pentikäinen & Pesonen [101], Gerber [157], Grandell [171], Heilmann [191], Hipp & Michel [198], Rolski, Schmidli, Schmidt & Teugels [307], Seal [326], [330], Straub [353], Sundt [354] and Taylor [364]. Cox processes are treated extensively in Grandell [171]; for mixed Poisson processes and Pólya processes, see e.g. the recent survey by Grandell [173] and references therein.

The term risk theory is often interpreted in a broader sense than as just to comprise the study of ruin probabilities; an idea of the additional topics and problems one may incorporate under risk theory can be obtained from the survey paper [273] by Norberg. In the even more general area of non-life insurance mathematics, some main texts (typically incorporating some ruin theory but emphasizing the topic to a varying degree) are Bowers et al. [76], Bühlmann [82], Daykin et al. [101], De Vylder [110], Embrechts et al. [134], Gerber [157], Hipp & Michel [198], Straub [353], Sundt [354] and Taylor [364]. Note that life insurance (e.g. Gerber [159]) has a rather different flavour, and we do not get near to the topic anywhere in this book.

Besides in standard journals in probability and applied probability, the research literature is often published in journals like Astin Bulletin, Insurance: Mathematics and Economics, Mitteilungen der Vereinigung der Schweizerischen Versicherungsmathematiker and the Scandinavian Actuarial Journal.
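For the compound Poisson model just introduced, the finite-horizon ruin probability can be estimated by crude Monte Carlo, taking p = 1 as Proposition 1.3 justifies. This is only an illustrative sketch (all parameter values are arbitrary, and exponential claims are an assumption made here for simplicity):

```python
import random

def psi_estimate(u, T, beta=0.8, n_paths=4000, seed=7):
    """Crude Monte Carlo estimate of psi(u, T) for the compound Poisson model
    with premium rate p = 1, Poisson(beta) arrivals and Exp(1) claims, so that
    rho = beta.  Ruin can only occur at claim epochs, since the reserve
    increases between claims."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(beta)        # next claim arrival
            if t > T:
                break                          # survived the horizon [0, T]
            claims += rng.expovariate(1.0)     # claim size U_n
            if u + t - claims < 0:             # reserve (1.7) with p = 1
                ruined += 1
                break
    return ruined / n_paths

est0, est10 = psi_estimate(0.0, 200.0), psi_estimate(10.0, 200.0)
assert 0.0 < est10 < est0 < 1.0     # psi(u, T) decreases in the reserve u
print(est0, est10)
```

Here ρ = 0.8 < 1 = p, i.e. η > 0, so the estimates stay bounded away from 1 and decrease with u, in line with Proposition 1.1.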
2 Claim size distributions

This section contains a brief survey of some of the most popular classes of distributions B which have been used to model the claims U_1, U_2, .... We roughly classify these into two groups, light-tailed distributions (sometimes the term 'Cramér-type conditions' is used), and heavy-tailed distributions.

Here light-tailed means that the tail B̄(x) = 1 - B(x) satisfies B̄(x) = O(e^{-sx}) for some s > 0; equivalently, the m.g.f. B̂[s] is finite for some s > 0. In contrast, B is heavy-tailed if B̂[s] = ∞ for all s > 0. On the more heuristical side, one could mention also the folklore in actuarial practice to consider B heavy-tailed if '20% of the claims account for more than 80% of the total claims', i.e. if

  (1/µ_B) ∫_{b_{0.2}}^∞ x B(dx) > 0.8,

where B̄(b_{0.2}) = 0.2 and µ_B is the mean of B.

2a Light-tailed distributions

Example 2.1 (the exponential distribution) Here the density is

  b(x) = δ e^{-δx}.   (2.1)

The parameter δ is referred to as the rate or the intensity, and can also be interpreted as the (constant) failure rate b(x)/B̄(x). The m.g.f. is B̂[s] = δ/(δ - s), s < δ. As in a number of other applied probability areas, the exponential distribution is by far the simplest to deal with in risk theory as well. For example, for the compound Poisson model with exponential claim sizes the ruin probability ψ(u) can be found in closed form. The crucial feature is the lack of memory: if X is exponential with rate δ, then the conditional distribution of X - x given X > x is again exponential with rate δ (this is essentially equivalent to the failure rate being constant). In the compound Poisson model, a simple stopping time argument shows that this implies that the conditional distribution of the overshoot S_{τ(u)} - u at the time of ruin given τ(u) is again exponential with rate δ, a fact which turns out to contain considerable information.

Example 2.2 (the gamma distribution) The gamma distribution with parameters p, δ has density

  b(x) = (δ^p / Γ(p)) x^{p-1} e^{-δx}   (2.2)

and m.g.f.

  B̂[s] = ( δ/(δ - s) )^p,   s < δ.

The mean EX is p/δ and the variance Var X is p/δ². In particular, the squared coefficient of variation (s.c.v.) Var X/(EX)² = 1/p is < 1 for p > 1, > 1 for p < 1 and = 1 for p = 1 (the exponential case). The exact form of the tail B̄(x) is given by the incomplete Gamma function,

  B̄(x) = Γ(δx, p)/Γ(p),   where Γ(x, p) = ∫_x^∞ t^{p-1} e^{-t} dt.

Asymptotically, one has

  B̄(x) ~ (δ^{p-1}/Γ(p)) x^{p-1} e^{-δx}.

In the sense of the theory of infinitely divisible distributions, the gamma density (2.2) can be considered as the pth power of the exponential density (2.1) (or the 1/pth root if p < 1). In particular, if p is integer and X has the gamma distribution with parameters p, δ, then X ~ X_1 + ... + X_p, where X_1, X_2, ... are i.i.d. and exponential with rate δ. This special case is referred to as the Erlang distribution with p stages, or just the Erlang(p) distribution. An appealing feature is its simple connection to the Poisson process: B̄(x) = P(X_1 + ... + X_p > x) is the probability of at most p - 1 Poisson events in [0, x], so that

  B̄(x) = Σ_{i=0}^{p-1} e^{-δx} (δx)^i / i!.

Ruin probabilities for the general case have been studied, among others, by Grandell & Segerdahl [175] and Thorin [369]. In the present text, we develop computationally tractable results mainly for the Erlang case (p = 1, 2, ...).

Example 2.3 (the hyperexponential distribution) This is defined as a finite mixture of exponential distributions,

  b(x) = Σ_{i=1}^p α_i δ_i e^{-δ_i x},

where Σ_i α_i = 1, 0 < α_i < 1. An important property of the hyperexponential distribution is that its s.c.v. is > 1.
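The contrast between the s.c.v. of the gamma family (1/p, so below 1 for Erlang with p > 1) and of hyperexponential mixtures (always above 1) can be verified directly from moments. The following sketch is illustrative only; the particular mixture is an arbitrary choice:

```python
# Squared coefficient of variation (s.c.v.) = Var X / (E X)**2, from moments.

def scv_gamma(p: float) -> float:
    # Gamma(p, delta): mean p/delta, variance p/delta**2, so the s.c.v. is
    # 1/p, independently of the rate delta.
    return 1.0 / p

def scv_hyperexp(alphas, deltas) -> float:
    # Mixture of exponentials: E X = sum a_i/d_i, E X^2 = sum 2 a_i/d_i**2.
    m1 = sum(a / d for a, d in zip(alphas, deltas))
    m2 = sum(2.0 * a / d**2 for a, d in zip(alphas, deltas))
    return m2 / m1**2 - 1.0

assert scv_gamma(1.0) == 1.0                          # exponential case
assert scv_gamma(3.0) < 1.0                           # Erlang: less variable
assert scv_hyperexp([0.5, 0.5], [1.0, 10.0]) > 1.0    # mixture: more variable
assert abs(scv_hyperexp([1.0], [2.0]) - 1.0) < 1e-12  # one component = exponential
print(scv_hyperexp([0.5, 0.5], [1.0, 10.0]))
```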
Example 2.4 (phase-type distributions) A phase-type distribution is the distribution of the absorption time in a Markov process with finitely many states, of which one is absorbing and the rest transient. The parameters of a phase-type distribution are the set E of transient states, the restriction T of the intensity matrix of the Markov process to E, and the row vector a = (a_i)_{i∈E} of initial probabilities. The couple (a, T), or sometimes the triple (E, a, T), is called the representation. The density and c.d.f. are

  b(x) = a e^{Tx} t,   B̄(x) = a e^{Tx} e,

where t = -Te and e = (1, ..., 1)' is the column vector with 1 at all entries. Important special cases are the exponential, the Erlang and the hyperexponential distributions. This class of distributions plays a major role in this book, as the one within which computationally tractable exact forms of the ruin probability ψ(u) can be obtained. We give a more comprehensive treatment in VIII.1 and defer further details to Chapter VIII.

Example 2.5 (distributions with rational transforms) A distribution B has a rational m.g.f. (or, equivalently, a rational Laplace transform) if B̂[s] = p(s)/q(s) with p(s) and q(s) being polynomials of finite degree. Equivalent characterizations are that the density b(x) has one of the forms

  b(x) = Σ_{j=1}^{q} c_j x^j e^{λ_j x},   (2.7)

  b(x) = Σ_{j=1}^{q1} c_j x^j e^{-ω_j x} + Σ_{j=1}^{q2} d_j x^j cos(a_j x) e^{-η_j x} + Σ_{j=1}^{q3} e_j x^j sin(b_j x) e^{-c_j x},   (2.8)

where the parameters in (2.7) are possibly complex-valued but the parameters in (2.8) are real-valued. This class of distributions is popular in older literature on both risk theory and queues, but the current trend in applied probability is to restrict attention to the class of phase-type distributions, which is slightly smaller but more amenable to probabilistic reasoning. We give some theory for matrix-exponential distributions in VIII.6.

Example 2.6 (distributions with bounded support) This example (i.e., there exists an x_0 < ∞ such that B̄(x) = 0 for x ≥ x_0, B̄(x) > 0 for x < x_0) is of course a trivial instance of a light-tailed distribution. However, it is notable from a practical point of view because of reinsurance: if excess-of-loss reinsurance has been arranged with retention level x_0, then the claim size which is relevant from the point of view of the insurance company itself is U ∧ x_0 rather than U (the excess (U - x_0)+ is covered by the reinsurer). See XI.6.

2b Heavy-tailed distributions

Example 2.7 (the Weibull distribution) This distribution originates from reliability theory. Here failure rates b(x)/B̄(x) play an important role, the exponential distribution representing the simplest example since here the failure rate is constant. However, in practice one may observe that the failure rate is either decreasing or increasing, and may try to model smooth (increasing or decreasing) deviations from constancy by the failure rate d x^{r-1} (0 < r < ∞). Writing c = d/r, we obtain the Weibull distribution

  B̄(x) = e^{-cx^r},   b(x) = c r x^{r-1} e^{-cx^r},   (2.9)

which is heavy-tailed when 0 < r < 1. All moments are finite.

Example 2.8 (the lognormal distribution) The lognormal distribution with parameters µ, σ² is defined as the distribution of e^V where V ~ N(µ, σ²), or equivalently as the distribution of e^{σU+µ} where U ~ N(0, 1). It follows that the density is

  b(x) = (d/dx) Φ( (log x - µ)/σ ) = (1/(xσ)) φ( (log x - µ)/σ )   (2.10)

     = (1/(xσ√(2π))) exp{ -(log x - µ)²/(2σ²) }.   (2.11)

Asymptotically, the tail is

  B̄(x) ~ (σ/(√(2π) log x)) exp{ -(log x - µ)²/(2σ²) }.   (2.12)

The lognormal distribution has moments of all orders; the mean is e^{µ+σ²/2} and the second moment is e^{2µ+2σ²}.

Example 2.9 (the Pareto distribution) Here the essence is that the tail B̄(x) decreases like a power of x. There are various variants of the definition around, one being

  B̄(x) = 1/(1 + x)^α,   b(x) = α/(1 + x)^{α+1},   x > 0.   (2.13)

Sometimes also a location parameter a > 0 and a scale parameter λ > 0 are allowed,

  b(x) = (α/λ) (1 + (x - a)/λ)^{-(α+1)},   x > a,

and then b(x) = 0 for x < a. The pth moment is finite if and only if p < α.
Choudhury & Whitt [1] as the class of distributions of r. where Y is Pareto distributed with a = (p . x + 00. the loggamma distribution (with exponent 5) and a Pareto mixture of exponentials. A = 1 and X is standard exponential.13).'s of the form YX. x 4 oo (any L having a limit in (0. { s () 1s+3s29s3log(1+2s I p=3.12 (DISTRIBUTIONS WITH REGULARLY VARYING TAILS) The tail B(x) of a distribution B is said to be regularly varying with exponent a if B(x) .15) x2 + 16x3 ) a3x/2) 3 (1 . i. Thus. in particular. satisfies L(xt)/L(x) 4 1. examples of distributions with regularly varying tails are the Pareto distribution (2. oo) is slowly varying .2).11 (PARETO MIXTURES OF EXPONENTIALS) This class was introduced by Abate. In general.(1 + 2x + 2x2)e2x) p = 2 (2.10 CHAPTER I.16) 11 Example 2. The density is 8p(log x)pi b(x) .17) where L (x) is slowly varying. (2. B(x) = O(xP).14) The pth moment is finite if p < 5 and infinite if p > 5.L( x ). The motivation for this class is the fact that the Laplace transform is explicit (which is not the case for the Pareto or other standard heavytailed distributions).e. The simplest examples correspond to p small and integervalued.(1 + Zx + $ p = 3. u Example 2 . (2. INTRODUCTION Example 2. u . the density is { 3 (1 . 6 is defined as the distribution of et' where V has the gamma density (2.x6+lr(p) (2.12) (here L (x) * 1) and ( 2. the loggamma distribution is a Pareto distribution.1)/p.v. in particular. another standard example is (log x)').10 (THE LOGGAMMA DISTRIBUTION) The loggamma distribution with parameters p. For p = 1.
Example 2.13 (THE SUBEXPONENTIAL CLASS OF DISTRIBUTIONS) We say that a distribution B is subexponential if

lim_(x→∞) B̄*²(x)/B̄(x) = 2. (2.18)

It can be proved (see IX.1) that any distribution with a regularly varying tail is subexponential. Also, for example, the lognormal distribution is subexponential (but not regularly varying), and so is the Weibull distribution with 0 < r < 1, though the proof of this is non-trivial. Thus, the subexponential class of distributions provides a convenient framework for studying large classes of heavy-tailed distributions. We return to a closer study in IX.1.

When studying ruin probabilities, it will be seen that we obtain completely different results depending on whether the claim size distribution is exponentially bounded or heavy-tailed. From a practical point of view, the knowledge of the claim size distribution will typically be based upon statistical data, and based upon such information it seems questionable to extrapolate to tail behaviour. We give some discussion on standard methods to distinguish between light and heavy tails in Section 4f. However, one may argue that this difficulty is not restricted to ruin probability theory alone. Similar discussion applies to the distribution of the accumulated claims (XI.4), or even to completely different applied probability areas like extreme value theory: if we are using a Gaussian process to predict extreme value behaviour, we may know that such a process (with a covariance function estimated from data) is a reasonable description of the behaviour of the system under study in typical conditions, but we can never be sure whether this is also so for atypical levels, for which far less detailed statistical information is available. This phenomenon represents one of the true controversies of the area.

3 The arrival process

For the purpose of modeling a risk process, the claim size distribution represents of course only one aspect (though a major one). At least as important is the specification of the structure of the point process {Nt} of claim arrivals and its possible dependence with the claims.

By far the most prominent case is the compound Poisson (Cramér-Lundberg) model, where {Nt} is Poisson and independent of the claim sizes U1, U2, .... The reason is in part mathematical, since this model is the easiest to analyze, but the model also admits a natural interpretation: a large portfolio of insurance holders, which each have a (time-homogeneous) small rate of experiencing a claim, gives rise to an arrival process which is very close to a Poisson process, in just the same way as the Poisson process arises in telephone traffic (a large number of subscribers each calling with a small rate), radioactive decay (a huge number of atoms each splitting with a tiny rate) and many other applications. The compound Poisson model is studied in detail in Chapters III, IV (and, with the extension to premiums depending on the reserve, in Chapter VII).

Historically, the first extension to be studied in detail was {Nt} renewal (the interarrival times T1, T2, ... are i.i.d., but with a general not necessarily exponential distribution). This model, to be studied in Chapter V, has some mathematically appealing random walk features, but it is more questionable whether it provides a model with a similar intuitive content as the Poisson model. In this respect, the periodic and the Markov-modulated models below also have attractive features.

Nevertheless, getting away from the simple Poisson process seems a crucial step in making the model more realistic. To the author's knowledge, not many detailed studies of the goodness-of-fit of the Poisson model in insurance are available. Some of them have concentrated on the marginal distribution of N_T (say T = one year), found the Poisson distribution to be inadequate and suggested various other univariate distributions as alternatives, e.g. the negative binomial distribution. The difficulty in such an approach lies in that it may be difficult or even impossible to imbed such a distribution into the continuous set-up of {Nt} evolving over time, and also that the ruin problem may be hard to analyze.

A more appealing way to allow for certain inhomogeneities is by means of an intensity β(t) fluctuating over time. An obvious example is β(t) depending on the time of the year (the season), so that β(t) is a periodic function of t; we study this case in VI.6. Another one is Cox processes, where {β(t)}_(t≥0) is an arbitrary stochastic process. Mathematically, Cox processes are, however, too general, and in order to prove reasonably substantial and interesting results one needs to specialize to more concrete assumptions. The one we focus on (Chapter VI) is a Markovian environment: the environmental conditions are described by a finite Markov process {Jt}_(t≥0), such that β(t) = βi when Jt = i (using a common term, {Nt} is then a Markov-modulated Poisson process). Its basic feature is to allow more variation (bursty arrivals) than inherent in the simple Poisson process. This model can be intuitively understood in some simple cases like {Jt} describing weather conditions in car insurance, epidemics in life insurance etc. In others, it may be used in a purely descriptive way when it is empirically observed that the claim arrivals are more bursty than allowed for by the simple Poisson process. This applies also to the case where the claim size distribution depends on the time of the year or the environment (VI.6). The point of view we take here is Markov-dependent random walks in continuous time (Markov additive processes), see II.5, which facilitates the analysis.
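The Poisson limit invoked for the large-portfolio interpretation is easy to check numerically. The following sketch (illustrative only; the numbers are arbitrary) compares the claim-count distribution of n holders, each with a small claim probability λ/n in the period, with the Poisson(λ) distribution:

```python
import math

def binom_pmf(n, p, k):
    """P(Binomial(n, p) = k)."""
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def poisson_pmf(lam, k):
    """P(Poisson(lam) = k)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# n holders, each producing a claim with the small probability lam/n;
# the total count Binomial(n, lam/n) approaches Poisson(lam) as n grows
lam, n = 3.0, 10_000
max_err = max(abs(binom_pmf(n, lam / n, k) - poisson_pmf(lam, k)) for k in range(30))
print(f"max pmf difference for n={n}: {max_err:.2e}")
```

The error shrinks roughly like 1/n, which is the quantitative content of "very close to a Poisson process" for a large portfolio.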
4 A summary of main results and methods

4a Duality with other applied probability models

Risk theory may be viewed as one of many applied probability areas, others being branching processes, genetics models, queueing theory, dam/storage processes, reliability, interacting particle systems, stochastic geometry, time series and Gaussian processes, extreme value theory, stochastic differential equations, point processes and so on. Some of these have a certain resemblance in flavour and methodology, others are quite different. The ones which appear most related to risk theory are queueing theory and dam/storage processes, and it is a recurrent theme of this book to stress this connection, which is often neglected in the specialized literature on risk theory and which seems well motivated from a practical point of view as well: mathematically, methods or modeling ideas developed in one area often have relevance for the other one.

A stochastic process {Vt} is said to be in the steady state if it is strictly stationary (in the Markov case, this amounts to V0 having the stationary distribution of {Vt}), and the limit t → ∞ is the steady-state limit. The study of the steady state is by far the most dominant topic of queueing and storage theory, and a lot of information on steady-state r.v.'s like V is available. It should be noted, however, that quite often the emphasis is on computing expected values like EV; in the setting of (4.1) below, this gives only ∫₀^∞ ψ(u) du, which is of limited intrinsic interest.

The classical result is that the ruin probabilities for the compound Poisson model are related to the workload (virtual waiting time) process {Vt}_(t≥0) of an initially empty M/G/1 queue by means of

ψ(u,T) = P(V_T > u), ψ(u) = P(V > u), (4.1)

where V is the limit in distribution of Vt as t → ∞. The M/G/1 workload process {Vt} may also be seen as one of the simplest storage models, with Poisson arrivals and constant release rule p(x) = 1. A general release rule p(x) means that {Vt} decreases according to the differential equation V̇ = -p(V) in between jumps, and here (4.1) holds as well provided the risk process has a premium rule depending on the reserve, Ṙ = p(R) in between jumps. More generally, ruin probabilities for risk processes with an input process which is renewal, Markov-modulated or periodic can be related to queues with similar characteristics. Thus, it is desirable to have a set of formulas like (4.1) permitting to translate freely between risk theory and the queueing/storage setting.
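The identity (4.1) can be checked by simulation. The sketch below (an illustration with arbitrary parameters, not the book's construction) estimates the two sides of ψ(u,T) = P(V_T > u) from independent samples, for the compound Poisson model with Exp(δ) claims and the initially empty M/G/1 workload with the same traffic:

```python
import random

random.seed(1)
beta, delta = 0.5, 1.0          # Poisson arrival rate, Exp(delta) claims (rho = 0.5)
u, T, N = 2.0, 30.0, 20_000

def ruin_by_T():
    """Ruin before T: the sup of S_t = sum U_i - t is attained at claim epochs."""
    t = s = smax = 0.0
    while True:
        a = random.expovariate(beta)
        t += a
        if t > T:
            return smax > u
        s += random.expovariate(delta) - a
        smax = max(smax, s)

def workload_exceeds_u_at_T():
    """Workload V_T of an initially empty M/G/1 queue with the same traffic."""
    t = v = 0.0
    while True:
        a = random.expovariate(beta)
        if t + a > T:
            return max(v - (T - t), 0.0) > u   # linear decay after the last arrival
        t += a
        v = max(v - a, 0.0) + random.expovariate(delta)

p_ruin = sum(ruin_by_T() for _ in range(N)) / N
p_load = sum(workload_exceeds_u_at_T() for _ in range(N)) / N
print(p_ruin, p_load)   # the two estimates should agree within Monte Carlo error
```

Agreement of the two estimates is of course no proof, but it illustrates what the duality asserts at the level of distributions.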
A prototype of the duality results in this book is Theorem II.3.1, which gives a sample path version of (4.1) in the setting of a general premium rule p(x): the events {V_T > u} and {τ(u) ≤ T} coincide when the risk process and the storage process are coupled in a suitable way (via time-reversion). The infinite horizon (steady state) case is covered by letting T → ∞. The fact that Theorem II.3.1 is a sample path relation should be stressed: in this way the approach also applies to models having supplementary r.v.'s like the environmental process {Jt} in a Markov-modulated setting. It should be noted, however, that much of the study of finite horizon problems (often referred to as transient behaviour) in queueing theory deals with busy period analysis, which has no interpretation in risk theory at all. Thus, the two areas, though overlapping, have to some extent a different flavour.

4b Exact solutions

Of course, the ideal is to be able to come up with closed-form solutions for the ruin probabilities ψ(u), ψ(u,T). The cases where this is possible are basically the following for the infinite horizon ruin probability ψ(u):

• The compound Poisson model with constant premium rate p = 1 and exponential claim size distribution B, B̄(x) = e^(-δx). Here ψ(u) = ρe^(-γu), where β is the arrival intensity, ρ = β/δ and γ = δ - β; see Corollary III.3.1.

• The compound Poisson model with constant premium rate p = 1 and B being phase-type with just a few phases. Here ψ(u) is given in terms of a matrix-exponential function (Corollary VIII.3.1), which can be expanded into a sum of exponential terms by diagonalization (see, e.g., Example VIII.3.2). The qualifier 'with just a few phases' refers to the fact that the diagonalization has to be carried out numerically in higher dimensions; see VIII.3.

• The compound Poisson model with a claim size distribution degenerate at one point.

• The compound Poisson model with some rather special heavy-tailed claim size distributions; see Boxma & Cohen [74] and Abate & Whitt [3].

• The compound Poisson model with premium rate p(x) depending on the reserve and exponential claim size distribution B. Here ψ(u) is explicit provided that, as is typically the case, the functions ∫₀^x dy/p(y) and ∫₀^x (1/p(y)) exp{-δy + β ∫₀^y dz/p(z)} dy can be written in closed form; see Corollary VII.1.8.
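For the first case on the list, the closed form is easy to verify numerically. The sketch below (parameters chosen arbitrarily, not from the text) computes ψ(u) = ρe^(-γu) and cross-checks it with a crude Monte Carlo estimate over a long but finite horizon:

```python
import math
import random

random.seed(2)
beta, delta = 0.5, 1.0                  # rho = 0.5, gamma = delta - beta = 0.5
rho, gamma = beta / delta, delta - beta

def psi_exact(u):
    """psi(u) = rho * exp(-gamma * u) for exponential claims."""
    return rho * math.exp(-gamma * u)

def ruined(u, horizon=100.0):
    """Crude check: follow the claim surplus at claim epochs up to `horizon`;
    the negative drift makes ruin after the horizon negligible here."""
    t = s = smax = 0.0
    while True:
        a = random.expovariate(beta)
        t += a
        if t > horizon:
            return smax > u
        s += random.expovariate(delta) - a
        smax = max(smax, s)

u, N = 2.0, 20_000
est = sum(ruined(u) for _ in range(N)) / N
print(psi_exact(u), est)
```

With these parameters ψ(2) = 0.5 e^(-1) ≈ 0.184, and the simulation should land within a couple of standard errors of that value.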
• The compound Poisson model with a two-step premium rule p(x) and B being phase-type with just a few phases; see VIII.7.

• An α-stable Lévy process with drift, where Furrer [150] recently computed ψ(u) as an infinite series involving the Mittag-Leffler function.

Also Brownian models or certain skip-free random walks lead to explicit solutions (see XI.1), but are somewhat out of the mainstream of the area. A notable fact (see again XI.1) is the explicit form of the ruin probability when {Rt} is a diffusion with infinitesimal drift µ(x) and variance σ²(x):

ψ(u) = 1 - S(u)/S(∞), (4.2)

where

S(u) = ∫₀^u exp{-∫₀^x 2µ(y)/σ²(y) dy} dx

is the natural scale.

For the finite horizon ruin probability ψ(u,T), the only example of something like an explicit expression is the compound Poisson model with constant premium rate p = 1 and exponential claim size distribution, where the formulas of IV.1 are so complicated that they should rather be viewed as basis for numerical methods than as closed-form solutions.

4c Numerical methods

Next to a closed-form solution, the second best alternative is a numerical procedure which allows to calculate the exact values of the ruin probabilities. Here are some of the main approaches:

Laplace transform inversion Often, it is easier to find the Laplace transforms

ψ̂[s] = ∫₀^∞ e^(su) ψ(u) du, ψ̂[s,θ] = ∫₀^∞ ∫₀^∞ e^(su-θT) ψ(u,T) du dT,

in closed form than the ruin probabilities ψ(u), ψ(u,T) themselves. Given this can be done, ψ(u), ψ(u,T) can then be calculated numerically by some method for transform inversion, say the fast Fourier transform (FFT) as implemented in Grübel [179] for infinite horizon ruin probabilities for the renewal model. We don't discuss Laplace transform inversion much; however, relevant references are Grübel [179], Abate & Whitt [2], Embrechts, Grübel & Pitts [132] and Grübel & Hermesmeier [180] (see also the Bibliographical Notes in [307] p. 191).
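The natural-scale formula (4.2) can be sanity-checked against the constant-coefficient case µ(x) ≡ µ, σ²(x) ≡ σ², where it must reduce to the Brownian first-passage probability e^(-2µu/σ²). A quick numerical illustration (not from the book):

```python
import math

mu, sigma2 = 0.5, 2.0      # constant drift and variance

def S(u, n=10_000):
    """Natural scale S(u) = int_0^u exp(-2*mu*x/sigma2) dx, by the midpoint rule."""
    h = u / n
    return sum(math.exp(-2.0 * mu * (i + 0.5) * h / sigma2) * h for i in range(n))

S_inf = sigma2 / (2.0 * mu)    # closed form of S(infinity) for constant coefficients
for u in (0.5, 1.0, 3.0):
    print(u, 1.0 - S(u) / S_inf, math.exp(-2.0 * mu * u / sigma2))
```

The two columns agree to the accuracy of the quadrature, confirming that (4.2) reproduces the classical Brownian ruin probability.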
Matrix-analytic methods This approach is relevant when the claim size distribution is of phase-type (or matrix-exponential), see Chapter VIII. ψ(u) is then given in terms of a matrix-exponential function e^(Uu) (here U is some suitable matrix), which can be computed by diagonalization, as the solution of linear differential equations or by some series expansion (not necessarily the straightforward Σ₀^∞ Uⁿuⁿ/n! one!). In the compound Poisson model with p = 1, U is explicit in terms of the model parameters, whereas for the renewal arrival model and the Markovian environment model U has to be calculated numerically, either as the iterative solution of a fixpoint problem or by finding the diagonal form in terms of the complex roots to certain transcendental equations.

Differential- and integral equations The idea is here to express ψ(u) or ψ(u,T) as the solution to a differential- or integral equation, and carry out the solution by some standard numerical method. One example where this is feasible is the renewal equation for ψ(u) (Corollary III.3.3) in the compound Poisson model, which is an integral equation of Volterra type. However, most often it is more difficult to come up with reasonably simple equations than one may believe at a first sight, and in particular the naive idea of conditioning upon process behaviour in [0, dt] most often leads to equations involving both differential and integral terms. An example where this idea can be carried through by means of a suitable choice of supplementary variables is the case of state-dependent premium p(x) and phase-type claims.

4d Approximations

The Cramér-Lundberg approximation This is one of the most celebrated results of risk theory (and probability theory as a whole). For the compound Poisson model with p = 1 and claim size distribution B with moment generating function (m.g.f.) B̂[s], it states that

ψ(u) ~ Ce^(-γu), u → ∞, (4.3)

where C = (1 - ρ)/(βB̂'[γ] - 1) and γ > 0 is the solution of the Lundberg equation

β(B̂[γ] - 1) - γ = 0, (4.4)

which can equivalently be written as B̂[γ] = 1 + γ/β.
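The Lundberg equation (4.4) is easy to solve numerically once B̂[s] is available. The sketch below uses plain bisection (an illustration with made-up parameters, not the book's algorithm); for Exp(δ) claims the root must come out as γ = δ - β, which serves as a sanity check:

```python
import math

def lundberg_gamma(beta, mgf, hi, lo=1e-9, iters=200):
    """Solve beta*(B[s] - 1) - s = 0 on (lo, hi) by bisection.

    `hi` must lie strictly below the abscissa of convergence of the m.g.f.,
    where kappa(s) = beta*(B[s] - 1) - s is positive; near 0 it is negative
    (positive safety loading), so the root gamma is bracketed.
    """
    kappa = lambda s: beta * (mgf(s) - 1.0) - s
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if kappa(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# sanity check: Exp(1) claims with beta = 0.5 give gamma = 1 - 0.5 = 0.5
g_exp = lundberg_gamma(0.5, lambda s: 1.0 / (1.0 - s), hi=1.0 - 1e-6)
# Gamma(shape 2, rate 4) claims with beta = 1 (rho = 0.5)
g_gamma = lundberg_gamma(1.0, lambda s: (4.0 / (4.0 - s)) ** 2, hi=4.0 - 1e-6)
print(g_exp, g_gamma)
```

For the gamma-claims example the equation reduces to a cubic with positive root (7 - √17)/2 ≈ 1.438, so the bisection output can be checked exactly.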
It is rather standard to call γ the adjustment coefficient, but a variety of other terms are also frequently encountered. The Cramér-Lundberg approximation is renowned not only for its mathematical beauty but also for being very precise, often for all u ≥ 0 and not just for large u. It has generalizations to the models with renewal arrivals, a Markovian environment or periodically varying parameters. However, in such cases the evaluation of C is more cumbersome. In fact, when the claim size distribution is of phase-type, the exact solution is as easy to compute as the Cramér-Lundberg approximation, at least in the first two of these three models.

Large claims approximations In order for the Cramér-Lundberg approximation to be valid, the claim size distribution should have an exponentially decreasing tail B̄(x). In the case of heavy-tailed distributions, other approaches are thus required; in some cases the results are even more complete than for light tails. For example, for the compound Poisson model

ψ(u) ~ (ρ/(1 - ρ)) (1/µ_B) ∫_u^∞ B̄(x) dx, u → ∞. (4.5)

See Chapter IX.

Diffusion approximations Here the idea is simply to approximate the risk process by a Brownian motion (or a more general diffusion) by fitting the first and second moment, and use the fact that first passage probabilities are more readily calculated for diffusions than for the risk process itself. Diffusion approximations are easy to calculate, but typically not very precise in their first naive implementation. However, incorporating correction terms may change the picture dramatically; in particular, corrected diffusion approximations (see IV.6 and IV.7) are by far the best one can do in terms of finite horizon ruin probabilities ψ(u,T).

Approximations for ψ(u) as well as for ψ(u,T) for large u are available in most of the models we discuss. This list of approximations does by no means exhaust the topic; some further possibilities are surveyed in III.2.

4e Bounds and inequalities

The outstanding result in the area is Lundberg's inequality

ψ(u) ≤ e^(-γu). (4.6)
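For exponential claims all of the above can be computed in closed form, which makes a compact comparison possible. The sketch below (illustrative parameters only) tabulates the exact value, the Cramér-Lundberg approximation (exact in this case, since C reduces to ρ), the Lundberg bound (4.6), and a naive diffusion approximation fitted to the first two moments of the claim surplus:

```python
import math

beta, delta = 0.5, 1.0                    # Exp(delta) claims, premium rate 1
rho, gamma = beta / delta, delta - beta

psi_exact = lambda u: rho * math.exp(-gamma * u)
# C = (1 - rho)/(beta*B'[gamma] - 1) with B'[s] = delta/(delta - s)^2
C = (1.0 - rho) / (beta * delta / (delta - gamma) ** 2 - 1.0)
psi_cl = lambda u: C * math.exp(-gamma * u)            # Cramer-Lundberg approximation
psi_bound = lambda u: math.exp(-gamma * u)             # Lundberg's inequality
mu_S, var_S = rho - 1.0, 2.0 * beta / delta ** 2       # drift and variance rate of {S_t}
psi_diff = lambda u: math.exp(2.0 * mu_S * u / var_S)  # Brownian first-passage probability

for u in (1.0, 5.0):
    print(u, psi_exact(u), psi_cl(u), psi_bound(u), psi_diff(u))
```

The table makes the qualitative remarks above concrete: the Cramér-Lundberg value coincides with the exact one, the bound always lies above it, and the uncorrected diffusion value decays at the wrong rate.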
We return to various extensions and sharpenings of Lundberg's inequality (finite horizon versions, lower bounds etc.) at various places and in various settings. Compared to the Cramér-Lundberg approximation (4.3), it has the advantage of not involving approximations and also, as a general rule, of being somewhat easier to generalize beyond the compound Poisson setting.

When comparing different risk models, it is a general principle that adding random variation to a model increases the risk. For example, one expects a model with a deterministic claim size distribution B, say degenerate at m, to have smaller ruin probabilities than when B is non-degenerate with the same mean m. This is proved for the compound Poisson model in III.8. However, though not too many precise mathematical results have been obtained, empirical evidence shows that the general principle holds in a broad variety of settings.

4f Statistical methods

Any of the approaches and results above assume that the parameters of the model are completely known. In practice, they have however to be estimated from data, obtained say by observing the risk process in [0, T]. For example, in the compound Poisson model, this splits up into the estimation of the Poisson intensity (the estimator is β̂ = N_T/T) and of the parameter(s) of the claim size distribution, which is a standard statistical problem since the claim sizes U1, ..., U_(N_T) are i.i.d. given N_T. This procedure in itself is fairly straightforward; the difficulty comes in when drawing inference about the ruin probabilities. For example, fitting a parametric model to U1, ..., U_(N_T) may be viewed as an interpolation in or smoothing of the histogram. How do we produce a confidence interval? And, more importantly, can we trust the confidence intervals for the large values of u which are of interest? In the present author's opinion, this is extrapolation from data, due to the extreme sensitivity of the ruin probabilities to the tail of the claim size distribution in particular.

In the same vein, one may question whether it is possible to distinguish between claim size distributions which are heavy-tailed or have an exponentially decaying tail. The standard suggestion is to observe that the mean residual life

E[U - x | U > x] = (1/B̄(x)) ∫_x^∞ (y - x) B(dy)

typically has a finite limit (possibly 0) in the light-tailed case and goes to ∞ in the heavy-tailed case, and to plot the empirical mean residual life

(1/(N - k)) Σ_(i=k+1)^N (U_(i) - U_(k))

as function of U_(k), where U_(1) < ... < U_(N) are the order statistics based upon N i.i.d. claims U1, ..., U_N, to observe whether one or the other limiting behaviour is apparent in the tail. See further Embrechts, Klüppelberg & Mikosch [134].

4g Simulation

The development of modern computers have made simulation a popular experimental tool in all branches of applied probability and statistics, and of course the method is relevant in risk theory as well. Simulation may be used just to get some vague insight in the process under study: simulate one or several sample paths, and look at them to see whether they exhibit the expected behaviour or some surprises come up. However, the more typical situation is to perform a Monte Carlo experiment to estimate probabilities (or expectations or distributions) which are not analytically available. For example, this is a straightforward way to estimate finite horizon ruin probabilities. The infinite horizon case presents a difficulty, because it appears to require an infinitely long simulation. Truncation to a finite horizon has been used, but is not very satisfying. Still, good methods exist in a number of models; they are based upon representing the ruin probability ψ(u) as expected value of a r.v. (or a functional of the expectation of a set of r.v.'s) which can be generated by simulation. A main problem is that ruin is typically a rare event (i.e. having small probability) and that therefore naive simulation is expensive or even infeasible in terms of computer time. We look at a variety of such methods in Chapter X, and also discuss how to develop methods which are efficient in terms of producing a small variance for a fixed simulation budget. The problem is entirely analogous to estimating steady-state characteristics by simulation in queueing/storage theory, and in fact methods from that area can often be used in risk theory as well; see e.g. [14].

5 Conventions

Numbering and reference system

The basic principles are just as in the author's earlier book Applied Probability and Queues (Wiley 1987), in this book referred to as [APQ]. The chapter number is specified only when it is not the current one. Thus Proposition 4.2, formula (5.3) or Section 3 of Chapter VI are referred to as Proposition VI.4.2, formula VI.(5.3) and Section VI.3, respectively, in all other chapters than VI, where we just write Proposition 4.2, formula (5.3) or Section 3.
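One of the standard variance-reduction ideas alluded to here is importance sampling via an exponential change of measure (Lundberg conjugation, treated later in the book). As a hedged illustration with exponential claims, where the conjugated process simply swaps the roles of β and δ and the exact answer ψ(u) = ρe^(-γu) is available for comparison:

```python
import math
import random

random.seed(3)
beta, delta = 1.0, 2.0           # rho = 0.5; Lundberg exponent gamma = delta - beta = 1
gamma = delta - beta
u, N = 5.0, 10_000

def one_sample():
    """Simulate under the conjugated measure (rates swapped: interarrivals Exp(delta),
    claims Exp(beta)), where the surplus drifts upward and ruin is certain;
    return the likelihood-ratio weight exp(-gamma * S_tau)."""
    s = 0.0
    while s <= u:
        s += random.expovariate(beta) - random.expovariate(delta)   # U - A
    return math.exp(-gamma * s)

est = sum(one_sample() for _ in range(N)) / N
exact = (beta / delta) * math.exp(-gamma * u)
print(est, exact)
```

Every simulated path terminates (ruin is certain under the tilted measure), and the relative error is small even though ψ(5) ≈ 0.003 would be hard to estimate by naive simulation with the same budget: this is exactly the rare-event problem described above.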
References like Proposition A.1.2 or (A.29) refer to the Appendix.

Abbreviations

c.d.f. cumulative distribution function P(X ≤ x)
c.g.f. cumulant generating function, i.e. log B̂[s] where B̂[s] is the m.g.f.
i.i.d. independent identically distributed
i.o. infinitely often
l.h.s. left hand side (of equation)
m.g.f. moment generating function, see under B̂[s] below
r.h.s. right hand side (of equation)
r.v. random variable
s.c.v. squared coefficient of variation, EX²/(EX)²
w.p. with probability
w.r.t. with respect to

Mathematical notation

P probability, E expectation.
~ Used in asymptotic relations to indicate that the ratio between two expressions is 1 in the limit, e.g. n! ~ √(2π) n^(n+1/2) e^(-n), n → ∞.
≈ A different type of asymptotics: less precise, say a heuristic approximation, or a more precise one like e^h ≈ 1 + h + h²/2, h → 0.
B(dx), B(x) The same symbol B is used for a probability measure B(dx) = P(X ∈ dx) and its c.d.f. B(x) = P(X ≤ x) = ∫_(-∞)^x B(dy).
B̄(x) The tail 1 - B(x) = P(X > x) of B.
B̂[s] The m.g.f. ∫ e^(sx) B(dx) of the distribution B. Usually, B is concentrated on [0, ∞); then B̂[s] is defined always if ℜs ≤ 0 and sometimes in a larger strip (for example, if, as for typical claim size distributions, B̄(x) ~ ce^(-δx), then for ℜs < δ).
||G|| The total mass (variation) of a (signed) measure G. In particular, for a probability distribution ||G|| = 1, and for a defective probability distribution ||G|| < 1.
µ_B The mean EX = ∫ x B(dx) of B.
µ_B^(n) The nth moment EXⁿ = ∫ xⁿ B(dx) of B.
D[0, ∞) The space of R-valued functions which are right-continuous and have left limits. Unless otherwise stated, all stochastic processes considered in this book are assumed to have sample paths in this space. Usually, the processes we consider are piecewise continuous, i.e. only have finitely many jumps in each finite interval; then the assumption of D-paths just means that we use the convention that the value at each jump epoch is the right limit rather than the left limit. In the French-inspired literature, often the term 'càdlàg' (continue à droite avec limites à gauche) is used for the D-property.
X_(t-) The left limit lim_(s↑t) X_s, i.e. the value just before t.
ℜ(s) The real part of a complex number s.
I(A) The indicator function of the event A; E[X; A] means E[X I(A)].
N(µ, σ²) The normal distribution with mean µ and variance σ².

Matrices and vectors are denoted by bold letters. Usually, matrices have uppercase Roman or Greek letters like T, Λ, row vectors have lowercase Greek letters like α, π, and column vectors have lowercase Roman letters like t, a (the dimension is usually clear from the context and left unspecified in the notation). In particular: I is the identity matrix; e is the column vector with all entries equal to 1; e_i is the ith unit column vector, i.e. the ith entry is 1 and all other 0; the ith unit row vector is e_i'. For a given set x1, ..., xn of numbers, (x_i)_diag denotes the diagonal matrix with the x_i on the diagonal, (x_i)_row denotes the row vector with the x_i as components, and (x_i)_col denotes the column vector with the x_i as components.

□ marks the end of a proof, an example or a remark.

Special notation for risk processes

β the arrival intensity (when the arrival process is Poisson). Notation like β_i and β(t) in Chapter VI has a similar, though slightly more complicated, intensity interpretation.
B the claim size distribution. Notation like B_i and B^(t) in Chapter VI has a similar, though slightly more complicated, interpretation.
δ the rate parameter of B for the exponential case B̄(x) = e^(-δx).
ρ the net amount βµ_B of claims per unit time, or quantities with a similar time average interpretation, cf. I.1.
η the safety loading, cf. I.1.
γ the adjustment coefficient, cf. I.4.
P_L, E_L the probability measure and its corresponding expectation corresponding to the exponential change of measure given by Lundberg conjugation, cf. III.5.
Chapter II Some general tools and results

The present chapter collects and surveys some topics which repeatedly show up in the study of ruin probabilities. Due to the generality of the theory, the level of the exposition is, strictly speaking, somewhat more advanced than in the rest of the book. The reader should therefore observe that it is possible to skip most of the chapter, in particular at a first reading of the book. More precisely, the relevance for the mainstream of the exposition is the following:

The martingale approach in Section 1 is essentially only used here. The topic is, however, fundamental (at least in the author's opinion) and the probability involved is rather simple and intuitive.

The likelihood ratio approach in Section 2 is basic for most of the models under study: all main results elsewhere in the book are proved, in most cases, via likelihood ratio arguments. When encountered for the first time in connection with the compound Poisson model in Chapter III, a parallel self-contained treatment is given of the facts needed there.

The duality results in Section 3 (and, in part, Section 5) are not crucial for the rest of the book.

Sections 4, 5 on random walks and Markov additive processes are used in Chapter VI on risk processes in a Markovian (or periodic) environment; the general theory is, however, not needed before then, and the sections can be skipped until reading Chapter VI on the Markovian environment model.
1 Martingales

We consider the claim surplus process {St} of a general risk process. As usual, the time to ruin is τ(u) = inf{t > 0 : St > u}, ξ(u) = S_(τ(u)) - u denotes the overshoot, and the ruin probabilities are

ψ(u) = P(τ(u) < ∞), ψ(u,T) = P(τ(u) ≤ T).

Our first result is a representation formula for ψ(u), obtained by using the martingale optional stopping theorem.

Proposition 1.1 Assume that (a) for some γ > 0, {e^(γSt)}_(t≥0) is a martingale, and (b) St → -∞ a.s. on {τ(u) = ∞}. Then

ψ(u) = e^(-γu) / E[e^(γξ(u)) | τ(u) < ∞]. (1.1)

Proof. We shall use optional stopping at time τ(u) ∧ T. (We cannot use the stopping time τ(u) directly because P(τ(u) = ∞) > 0 and also because the conditions of the optional stopping time theorem present a problem; however, using τ(u) ∧ T invokes no problems because τ(u) ∧ T is bounded by T.) We get

1 = Ee^(γS₀) = Ee^(γS_(τ(u)∧T)) = E[e^(γS_(τ(u))); τ(u) ≤ T] + E[e^(γS_T); τ(u) > T]. (1.2)

As T → ∞, the second term converges to 0 by (b) and dominated convergence (e^(γS_T) ≤ e^(γu) on {τ(u) > T}), and in the limit (1.2) takes the form

1 = E[e^(γS_(τ(u))); τ(u) < ∞] + 0 = e^(γu) E[e^(γξ(u)); τ(u) < ∞] = e^(γu) E[e^(γξ(u)) | τ(u) < ∞] ψ(u). □

Example 1.2 Consider the compound Poisson model with Poisson arrival rate β, claim size distribution B and ρ = βµ_B < 1. Here St = Σ_(i=1)^(Nt) U_i - t, where {Nt} is a Poisson process with rate β and the U_i are i.i.d. with common distribution B (and independent of {Nt}). A simple calculation shows that Ee^(αSt) = e^(tκ(α)) where κ(α) = β(B̂[α] - 1) - α. From this it is readily seen (see III.6a for details) that typically a solution γ > 0 to the Lundberg equation κ(γ) = 0 exists, and thus Ee^(γSv) = 1 for all v. Since {St} has stationary independent increments, it follows with Ft = σ(Sv : v ≤ t) that

E[e^(γS_(t+v)) | Ft] = e^(γSt) E[e^(γ(S_(t+v) - St)) | Ft] = e^(γSt) Ee^(γSv) = e^(γSt),

so that {e^(γSt)} is a martingale and condition (a) of Proposition 1.1 is satisfied; (b) follows from ρ < 1 and the law of large numbers (see Proposition III.1.2(c)). Thus the conditions of Proposition 1.1 are satisfied. □

Example 1.3 Assume that {Rt} is Brownian motion with variance constant σ² and drift µ > 0. Then {St} is Brownian motion with variance constant σ² and drift -µ < 0. By standard formulas for the m.g.f. of the normal distribution, Ee^(αSt) = e^(tκ(α)) where κ(α) = α²σ²/2 - αµ. From this it is immediately seen that the solution of the Lundberg equation κ(γ) = 0 is γ = 2µ/σ², and thus Ee^(γSt) = 1. Since {St} has stationary independent increments, the martingale property now follows just as in Example 1.2, and the conditions of Proposition 1.1 are satisfied. □

Corollary 1.4 (LUNDBERG'S INEQUALITY) Under the conditions of Proposition 1.1, ψ(u) ≤ e^(-γu).

Proof. Just note that ξ(u) ≥ 0. □

Corollary 1.5 For the compound Poisson model with B exponential, B̄(x) = e^(-δx), and ρ = β/δ < 1, the ruin probability is ψ(u) = ρe^(-γu), where γ = δ - β.

Proof. Since κ(α) = β(B̂[α] - 1) - α = αβ/(δ - α) - α, it is immediately seen that γ = δ - β. Now at the time τ(u) of ruin, {St} upcrosses level u by making a jump. The available information on this jump is that the distribution given τ(u) = t and S_(τ(u)-) = x is that of a claim size U given U > u - x, and thus by the memoryless property of the exponential distribution, the conditional distribution of the overshoot ξ(u) = U - (u - x) is again just exponential with rate δ. Thus

E[e^(γξ(u)) | τ(u) < ∞] = ∫₀^∞ e^(γx) δe^(-δx) dx = δ/(δ - γ) = δ/β,

and Proposition 1.1 yields ψ(u) = e^(-γu) β/δ = ρe^(-γu). □
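The memorylessness step in the proof of Corollary 1.5 is easy to confirm empirically: collect the overshoots over level u from simulated paths and check that their mean is ≈ 1/δ. A small illustration with arbitrary parameters (not part of the text):

```python
import random

random.seed(4)
beta, delta, u = 0.9, 1.0, 1.0     # rho = 0.9, so ruin is frequent
overshoots = []
while len(overshoots) < 5_000:
    t = s = 0.0
    while t < 500.0:               # paths that never ruin in this window are discarded
        a = random.expovariate(beta)
        t += a
        s += random.expovariate(delta) - a
        if s > u:
            overshoots.append(s - u)   # xi(u) = S_{tau(u)} - u
            break

mean_xi = sum(overshoots) / len(overshoots)
print(mean_xi)   # memorylessness predicts E[xi(u) | ruin] = 1/delta = 1
```

Discarding the non-ruined paths is harmless here: given ruin at any time, the overshoot is exactly Exp(δ), so conditioning on ruin within the window does not bias the overshoot distribution.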
CHAPTER II. SOME GENERAL TOOLS AND RESULTS

Corollary 1.6 If {R_t} is Brownian motion with variance constant σ² and drift µ > 0, then ψ(u) = e^{−γu} where γ = 2µ/σ².

Proof Just note that ξ(u) = 0 by continuity of Brownian motion. □

Notes and references The first use of martingales in risk theory is due to Gerber [156], and is further exploited in his book [157]. More recent references are Dassios & Embrechts [98], Delbaen & Haezendonck [103], Embrechts, Grandell & Schmidli [131], Grandell [171], [172], and Schmidli [320].

2 Likelihood ratios and change of measure

We consider stochastic processes {X_t} with a Polish state space E and paths in the Skorohod space D_E = D_E[0, ∞), which we equip with the natural filtration {F_t}_{t≥0} and the Borel σ-field F. Two such processes may be represented by probability measures P̃, P on (D_E, F), and in analogy with the theory of measures on finite-dimensional spaces one could study conditions for the Radon-Nikodym derivative dP̃/dP to exist. However, as shown by the following example, this set-up is too restrictive: typically¹, P̃ and P are singular (concentrated on two disjoint measurable sets).

Example 2.1 Let P̃, P correspond to the claim surplus processes of two compound Poisson risk processes with Poisson rates β̃, β and claim size distributions B̃, B. The number N_t^{(ε)} of jumps > ε before time t is a (measurable) r.v. on (D_E, F); hence so is N_t = lim_{ε↓0} N_t^{(ε)}. Thus the sets

  S = { lim_{t→∞} N_t/t = β },  S̃ = { lim_{t→∞} N_t/t = β̃ }

are both in F, and by the law of large numbers for the Poisson process, P(S) = 1, P̃(S̃) = 1. But if β ≠ β̃, then S and S̃ are disjoint, i.e. P̃, P are singular. A somewhat similar argument gives singularity when B ≠ B̃: the parameters of the two processes can be reconstructed from a single infinite path. □

The interesting concept is therefore to look for absolute continuity only on finite time intervals (possibly random, cf. Theorem 2.3 below), i.e. to ask that the restriction of P̃ to (D_E, F_t) be absolutely continuous w.r.t. the restriction of P to (D_E, F_t). Thus, we look for a process {L_t} (the likelihood ratio process) such that

  P̃(A) = E[L_t; A],  A ∈ F_t.  (2.1)

¹Though not always: it is not difficult to construct a counterexample, say in terms of transient Markov processes.
Proposition 2.2 Let {F_t}_{t≥0} be the natural filtration on D_E, F the Borel σ-field and P a given probability measure on (D_E, F).
(i) If {L_t}_{t≥0} is a nonnegative martingale w.r.t. ({F_t}, P) such that E L_t = 1, then there exists a unique probability measure P̃ on F such that (2.1) holds.
(ii) Conversely, if for some probability measure P̃ and some {F_t}-adapted process {L_t}_{t≥0} (2.1) holds, then {L_t} is a nonnegative martingale w.r.t. ({F_t}, P) such that E L_t = 1.

Proof Under the assumptions of (i), define P̃_t by P̃_t(A) = E[L_t; A], A ∈ F_t. Then L_t ≥ 0 and E L_t = 1 ensure that P̃_t is a probability measure on (D_E, F_t). Let s < t, A ∈ F_s. Then, using the martingale property in the fourth step,

  P̃_t(A) = E[L_t; A] = E E[L_t I(A) | F_s] = E I(A) E[L_t | F_s] = E I(A) L_s = P̃_s(A).

Hence the family {P̃_t}_{t≥0} is consistent and hence extendable to a probability measure P̃ on (D_E, F) such that P̃(A) = P̃_t(A), A ∈ F_t. This proves (i). Conversely, under the assumptions of (ii) we have for A ∈ F_s and s < t that A ∈ F_t as well, and hence E[L_s; A] = P̃(A) = E[L_t; A]. The truth of this for all A ∈ F_s implies that E[L_t | F_s] = L_s, which is the martingale property. Finally, E L_t = 1 follows by taking A = D_E in (2.1), and nonnegativity follows by letting A = {L_t < 0}: P̃(A) = E[L_t; L_t < 0] can only be nonnegative if P(A) = 0. □

The following likelihood ratio identity (typically with τ being the time τ(u) to ruin) is a fundamental tool throughout the book:

Theorem 2.3 Let {L_t}, P̃ be as in Proposition 2.2(i). If τ is a stopping time and G ∈ F_τ, G ⊆ {τ < ∞}, then

  P(G) = Ẽ[ 1/L_τ ; G ].  (2.3)

Proof Assume first G ⊆ {τ ≤ T} for some fixed deterministic T < ∞. By the martingale property, we have E[L_T | F_τ] = L_τ on {τ ≤ T}. Hence

  Ẽ[1/L_τ; G] = E[L_T · (1/L_τ) I(G)] = E[(1/L_τ) I(G) E[L_T | F_τ]] = E[I(G)] = P(G).

In the general case, applying this to G ∩ {τ ≤ T} we get

  P(G ∩ {τ ≤ T}) = Ẽ[1/L_τ; G ∩ {τ ≤ T}].

Since everything is nonnegative, both sides are increasing in T, and letting T → ∞, (2.3) follows by monotone convergence. □

From Theorem 2.3 we obtain a likelihood ratio representation of the ruin probability ψ(u) parallel to the martingale representation (1.1) in Proposition 1.1:

Corollary 2.4 Under condition (a) of Proposition 1.1,

  ψ(u) = e^{−γu} Ẽ[e^{−γξ(u)}; τ(u) < ∞].  (2.4)

Proof Letting G = {τ(u) < ∞}, we have P(G) = ψ(u). Now just rewrite the r.h.s. of (2.3) by noting that L_{τ(u)} = e^{γS_{τ(u)}} = e^{γu} e^{γξ(u)}. □

The advantage of (2.4) compared to (1.1) is that it seems in general easier to deal with the (unconditional) expectation Ẽ[e^{−γξ(u)}; τ(u) < ∞] occurring there than with the (conditional) expectation E[e^{−γξ(u)} | τ(u) < ∞] in (1.1).

The crucial step is to obtain information on the process evolving according to P̃. The problem is thus to investigate which characteristics of {X_t} and {L_t} ensure a given set of properties of the changed probability measure, and this problem will now be studied, first in the Markov case and next (Sections 4, 5) for processes with some random-walk-like structure.

Consider a (time-homogeneous) Markov process {X_t} with state space E, in continuous time (the discrete time case is parallel but slightly simpler). In the context of ruin probabilities, one would typically have X_t = R_t, where {R_t} is the risk reserve process, X_t = S_t, where {S_t} = {u − R_t} is the claim surplus process, or X_t = (J_t, R_t) or X_t = (J_t, S_t), with {J_t} a process of supplementary variables possibly needed to make the process Markovian. A change of measure is performed by finding a process {L_t} which is a martingale w.r.t. {F_t} and each P_x, is nonnegative and has E_x L_t = 1 for all x and t. First we ask when the Markov property is preserved. To this end, we need the concept of a multiplicative functional. For the definition, we assume for simplicity that {X_t} has D-paths.
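As an illustration of how the likelihood ratio identity is used in practice, the following sketch (not from the text; all numerical parameters are hypothetical) estimates ψ(u) by simulating under the exponentially tilted measure P̃, where ruin is certain. For Exp(δ) claims, premium rate 1 and Poisson rate β, the Lundberg exponent is γ = δ − β, and under P̃ the arrival rate becomes δ and the claims Exp(β); the estimator e^{−γu}e^{−γξ(u)} then averages to ψ(u).

```python
import math
import random

def psi_importance_sampling(u, beta=1.0, delta=2.0, n=20000, seed=1):
    """Estimate psi(u) for the compound Poisson model with premium rate 1,
    Poisson rate beta and Exp(delta) claims, via the identity
    psi(u) = e^{-gamma u} * E-tilde[e^{-gamma xi(u)}], gamma = delta - beta.
    Under P-tilde the arrival rate is delta and claims are Exp(beta),
    so the drift is positive and ruin is certain (hypothetical parameters)."""
    rng = random.Random(seed)
    gamma = delta - beta                     # adjustment (Lundberg) coefficient
    total = 0.0
    for _ in range(n):
        s = 0.0                              # claim surplus at claim epochs
        while s <= u:                        # run until level u is upcrossed
            s -= rng.expovariate(delta)      # premium earned over an Exp(delta) interarrival
            s += rng.expovariate(beta)       # tilted claim size, Exp(beta)
        total += math.exp(-gamma * (s - u))  # e^{-gamma xi(u)}, xi(u) the overshoot
    return math.exp(-gamma * u) * total / n
```

With β = 1, δ = 2, u = 3 the exact value is (β/δ)e^{−γu} = 0.5·e^{−3}, and the estimate should agree closely.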
Consider then {X_t} with D-paths on D_E, and define {L_t} to be a multiplicative functional if {L_t} is adapted to {F_t}, nonnegative and

  L_{t+s} = L_t · (L_s ∘ θ_t)  (2.5)

P_x-a.s. for all x, t, s, where θ_t is the shift operator. The precise meaning of this is the following: being F_t-measurable, L_t has the form L_t = φ_t({X_u}_{0≤u≤t}) for some mapping φ_t : D_E[0, t] → [0, ∞), and then L_s ∘ θ_t = φ_s({X_{t+u}}_{0≤u≤s}).

Theorem 2.5 Let {X_t} be Markov w.r.t. the natural filtration {F_t} on D_E, let {L_t} be a nonnegative martingale with E_x L_t = 1 for all x, t, and let P̃_x be the probability measure given by P̃_x(A) = E_x[L_t; A], A ∈ F_t. Then the family {P̃_x}_{x∈E} defines a time-homogeneous Markov process if and only if {L_t} is a multiplicative functional.

Proof Since both sides of (2.5) are F_{t+s}-measurable, (2.5) is equivalent to

  E_x[L_{t+s} V_{t+s}] = E_x[L_t · (L_s ∘ θ_t) V_{t+s}]  (2.6)

for any F_{t+s}-measurable r.v. V_{t+s}. The class of r.v.'s of the form Z_t · (Y_s ∘ θ_t), with Z_t an F_t-measurable r.v. and Y_s an F_s-measurable r.v., comprises all r.v.'s of the form ∏ f_i(X_{t(i)}) with all t(i) ≤ t + s; hence (2.6) is equivalent to

  E_x[L_{t+s} Z_t (Y_s ∘ θ_t)] = E_x[L_t (L_s ∘ θ_t) Z_t (Y_s ∘ θ_t)]  (2.7)

for any such Z_t, Y_s. Similarly, since Z_t · (Y_s ∘ θ_t) is F_{t+s}-measurable, the Markov property of {P̃_x} can be written

  Ẽ_x[Z_t (Y_s ∘ θ_t)] = Ẽ_x[Z_t Ẽ_{X_t} Y_s]

for all x, t, s. By definition of P̃_x, this in turn means

  E_x[L_{t+s} Z_t (Y_s ∘ θ_t)] = E_x[L_t Z_t Ẽ_{X_t}[Y_s]],  (2.8)

where Ẽ_{X_t}[Y_s] = E_{X_t}[L_s Y_s]. Indeed, since E_{X_t}[L_s Y_s] = E[(Y_s ∘ θ_t)(L_s ∘ θ_t) | F_t] P_x-a.s. by the Markov property of {X_t}, (2.8) is the same as

  E_x[L_{t+s} Z_t (Y_s ∘ θ_t)] = E_x[L_t Z_t (Y_s ∘ θ_t)(L_s ∘ θ_t)],

which is (2.7). □
Remark 2.6 In order to define a time-homogeneous Markov process from the {P̃_x}_{x∈E}, it suffices to assume that {L_t} is a multiplicative functional with E_x L_t = 1 for all x, t. Indeed,

  E[L_{t+s} | F_t] = L_t E[L_s ∘ θ_t | F_t] = L_t E_{X_t} L_s = L_t

(using the Markov property in the second step), so that the martingale property is automatic. □

Notes and references The results of the present section are essentially known in a very general Markov process formulation; see Dynkin [128] and Kunita [239]. A more elementary version along the lines of Theorem 2.5, with a proof somewhat different from the present one, can be found in Kuchler & Sorensen [240].

3 Duality with other applied probability models

In this section, we shall establish a general connection between ruin probabilities and certain stochastic processes which occur for example as models for queueing and storage. The result is a sample path relation, and thus for the moment no parametric assumptions (on say the structure of the arrival process) are needed.

We work on a finite time interval [0, T] in the following set-up: The risk process {R_t}_{0≤t≤T} has arrivals at epochs σ_1, ..., σ_N, 0 < σ_1 < ... < σ_N < T. The corresponding claim sizes are U_1, ..., U_N. In between jumps, the premium rate is p(r) when the reserve is r (i.e., Ṙ = p(R)). Thus

  R_t = R_0 + ∫_0^t p(R_s) ds − A_t, where A_t = Σ_{k: σ_k ≤ t} U_k.  (3.1)

The initial condition is arbitrary, R_0 = u (say), and the time to ruin is τ(u) = inf{t > 0 : R_t < 0}.

The storage process {V_t}_{0≤t≤T} is essentially defined by time reversion, reflection at zero and initial condition V_0 = 0. More precisely, the arrival epochs are σ*_1, ..., σ*_N, where σ*_k = T − σ_{N−k+1}, and just after time σ*_k, {V_t} makes an upwards jump of size U*_k = U_{N−k+1}.
In between jumps, {V_t} decreases at rate p(r) when V_t = r (i.e., V̇ = −p(V)), and we use the convention p(0) = 0 to make zero a reflecting barrier (when hitting 0, {V_t} remains at 0 until the next arrival). That is, instead of (3.1) we have

  V_t = A*_t − ∫_0^t p(V_s) ds, where A*_t = Σ_{k: σ*_k ≤ t} U*_k = A_T − A_{T−t},  (3.2)

subject to reflection at zero. Note that these definitions make {R_t} right-continuous (as standard) and {V_t} left-continuous. The sample path relation between these two processes is illustrated in Fig. 3.1.

[Figure 3.1: corresponding sample paths of the risk process {R_t} with arrival epochs σ_1, ..., σ_N and of the time-reversed storage process {V_t} with arrival epochs σ*_1, ..., σ*_N.]

Theorem 3.1 Define τ(u) = inf{t > 0 : R_t < 0} (τ(u) = ∞ if R_t ≥ 0 for all t ≤ T) and let

  ψ(u, T) = P( inf_{0≤t≤T} R_t < 0 ) = P(τ(u) ≤ T)  (3.3)

be the ruin probability. Then the events {τ(u) ≤ T} and {V_T > u} coincide. In particular, ψ(u, T) = P(V_T > u).

Proof Let r_t^{(u)} denote the solution of Ṙ = p(R) subject to r_0^{(u)} = u. Then r_t^{(u)} ≥ r_t^{(v)} for all t when u ≥ v.
Suppose first V_T > u (this situation corresponds to the solid path of {R_t} in Fig. 3.1 with R_0 = u = u_1). Comparing the path of {V_t} backwards from T with the path of {R_t} forwards from 0 on the corresponding intervals and using the monotonicity of u ↦ r_t^{(u)}, we get

  V_{σ*_N} ≥ r^{(V_T)}_{σ_1} − U_1 > r^{(u)}_{σ_1} − U_1 = R_{σ_1}.

If V_{σ*_N} > 0, we can repeat the argument and get V_{σ*_{N−1}} > R_{σ_2}, and so on. Hence if n satisfies V_{σ*_{N−n+1}} = 0 (such an n exists, if nothing else n = N), we have R_{σ_n} < 0, so that indeed τ(u) ≤ T.

Suppose next V_T ≤ u (this situation corresponds to the broken path of {R_t} in Fig. 3.1 with R_0 = u = u_2). Then similarly

  V_{σ*_N} = r^{(V_T)}_{σ_1} − U_1 ≤ r^{(u)}_{σ_1} − U_1 = R_{σ_1},  V_{σ*_{N−1}} ≤ R_{σ_2},

and so on. Hence R_{σ_n} ≥ 0 for all n ≤ N, and since ruin can only occur at the times of claims, we have τ(u) > T. □

A basic example is when {R_t} is the risk reserve process corresponding to claims arriving at Poisson rate β and being i.i.d. with distribution B, and a general premium rule p(r) when the reserve is r. Then the time reversibility of the Poisson process ensures that {A_t} and {A*_t} have the same distribution (for finite-dimensional distributions, the distinction between right- and left-continuity is immaterial because the probability of a Poisson arrival at any fixed time t is zero). Thus we may think of {V_t} as having compound Poisson input and being defined for all t < ∞. Historically, this represents a model for storage, say of water in a dam, though other interpretations like the amount of goods stored are also possible. The arrival epochs correspond to rainfalls, and in between rainfalls water is released at rate p(r) when V_t (the content) is r. We get:

Corollary 3.2 Consider the compound Poisson risk model with a general premium rule p(r). Then the storage process {V_t} has a proper limit in distribution, say V, if and only if ψ(u) < 1 for all u, and then ψ(u) = P(V > u).

Proof Let T → ∞ in the identity ψ(u, T) = P(V_T > u) of Theorem 3.1. □

Notes and references Theorem 3.1 and its proof are from Asmussen & Schock Petersen [50], Corollary 3.2 from Harrison & Resnick [188]. Some main references on storage processes are Harrison & Resnick [187] and Brockwell, Resnick & Tweedie [79]. Some further relevant more general references are Asmussen [21] and Asmussen & Sigman [51]; the results can be viewed as special cases of Siegmund duality, see Siegmund [344]. Historically, the connection between risk theory and other applied probability areas appears first to have been noted by Prabhu [293] in a queueing context. Nevertheless, one may feel that the interaction between the different areas has been surprisingly limited even up to today.
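The sample-path duality of Theorem 3.1 is easy to check by simulation in the special case p(r) ≡ 1 (a minimal sketch with hypothetical parameters, not part of the text): for each sampled path, ruin of {R_t} before T should occur exactly when the time-reversed, reflected storage process satisfies V_T > u.

```python
import random

def ruin_before_T(u, arrivals, claims):
    """Ruin check for R_t = u + t - (claims up to t), premium rate p(r) = 1;
    ruin can only occur at claim epochs."""
    s = 0.0
    for t, c in zip(arrivals, claims):
        s += c
        if u + t - s < 0:
            return True
    return False

def storage_level_at_T(T, arrivals, claims):
    """V_T of the dual storage process: jumps of size U_{N-k+1} at the
    reversed epochs T - sigma_{N-k+1}, unit release rate, reflection at 0."""
    rev = sorted((T - t, c) for t, c in zip(arrivals, claims))
    v, prev = 0.0, 0.0
    for t_star, c in rev:
        v = max(v - (t_star - prev), 0.0) + c
        prev = t_star
    return max(v - (T - prev), 0.0)

# check Theorem 3.1 path by path: {tau(u) <= T} = {V_T > u}
rng = random.Random(0)
T, u, ok = 10.0, 2.0, True
for _ in range(2000):
    n = rng.randrange(0, 8)
    arrivals = sorted(rng.uniform(0, T) for _ in range(n))
    claims = [rng.expovariate(1.0) for _ in range(n)]
    ok &= (ruin_before_T(u, arrivals, claims)
           == (storage_level_at_T(T, arrivals, claims) > u))
```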
4 Random walks in discrete or continuous time

A random walk in discrete time is defined as X_n = X_0 + Y_1 + ... + Y_n with X_0 = 0, where the Y_i are i.i.d. with common distribution F (say). Here F is a general probability distribution on the real line (the special case of F being concentrated on {−1, 1} is often referred to as simple or Bernoulli random walk).

For a given i.i.d. real-valued sequence Z_1, Z_2, ..., the Lindley process {W_n}_{n=0,1,...} generated by Z_1, Z_2, ... is defined by assigning W_0 some arbitrary value ≥ 0 and letting

  W_{n+1} = (W_n + Z_{n+1})⁺.  (4.1)

Thus {W_n}_{n=0,1,...} evolves as a random walk with increments Z_1, Z_2, ... as long as the random walk only takes nonnegative values, and is reset to 0 once the random walk hits (−∞, 0). In particular, if W_0 = 0, then

  W_N = Z_1 + ... + Z_N − min_{n=0,...,N}(Z_1 + ... + Z_n)  (4.2)

(for a rigorous proof, just verify that the r.h.s. of (4.2) satisfies the same recursion as in (4.1)). Thus {W_n} can be viewed as the reflected version of the random walk with increments Z_1, Z_2, ....

For discrete time random walks, there is an analogue of Theorem 3.1 in terms of Lindley processes:

Theorem 4.1 Let τ(u) = inf{n : u + Y_1 + ... + Y_n < 0}. Let further N be fixed and let W_0, W_1, ..., W_N be the Lindley process generated by Z_1 = −Y_N, Z_2 = −Y_{N−1}, ..., Z_N = −Y_1 according to W_0 = 0. Then the events {τ(u) ≤ N} and {W_N > u} coincide.

Proof By (4.2),

  W_N = −(Y_1 + ... + Y_N) + max_{n=0,...,N}(Y_N + ... + Y_{N−n+1}) = − min_{n=0,...,N}(Y_1 + ... + Y_n).

From this the result immediately follows. □

Corollary 4.2 The following assertions are equivalent:
(a) ψ(u) = P(τ(u) < ∞) < 1 for all u ≥ 0;
(b) ψ(u) = P(τ(u) < ∞) → 0 as u → ∞;
(c) the Lindley process {W_n} generated by Z_1 = −Y_1, Z_2 = −Y_2, ... and W_0 = 0 has a proper limit W in distribution as n → ∞;
(d) m = inf_{n=0,1,...}(Y_1 + ... + Y_n) > −∞ a.s.;
(e) Y_1 + ... + Y_n → ∞ a.s.
In that case, ψ(u) = P(W > u).

Proof Since (Y_N, ..., Y_1) has the same distribution as (Y_1, ..., Y_N), the Lindley processes in (c) and in Theorem 4.1 have the same distribution for n = 0, 1, ..., N. Thus the assertion of Theorem 4.1 is equivalent to W_N =_D M_N = max_{n=0,...,N}(Z_1 + ... + Z_n), so that W_N →_D M = sup_{n=0,1,...}(Z_1 + ... + Z_n) = −m and P(W > u) = P(M > u) = ψ(u). By Kolmogorov's 0-1 law, either M = ∞ a.s. or M < ∞ a.s., and clearly M < ∞ a.s. is the same as (d). Combining these facts gives easily the equivalence of (a)-(d). Clearly, (e) implies (d); the converse follows from general random walk theory, since it is standard that lim inf_{n}(Y_1 + ... + Y_n) = −∞ when Y_1 + ... + Y_n does not converge to ∞ a.s.

By the law of large numbers, a sufficient condition for (e) is that EY is well-defined and > 0. □

Remark 4.3 The i.i.d. assumption on the Y_1, ..., Y_N in Theorem 4.1 is actually not necessary: the result is a sample path relation, as is Theorem 3.1. For Corollary 4.2, one then assumes {Y_n} to be a stationary sequence, doubly infinite (n = 0, ±1, ±2, ...), and defines Z_n = −Y_{−n}. In general, the condition

  Σ_{n=1}^∞ (1/n) P(Y_1 + ... + Y_n < 0) < ∞

is known to be necessary and sufficient ([APQ] p. 176) but appears to be rather intractable. □

Next consider change of measure via likelihood ratios. For a random walk, a Markovian change of measure as in Theorem 2.5 does not necessarily lead to a random walk: if, e.g., F has a strictly positive density and the P̃_x correspond to a Markov chain such that the density of X_1 given X_0 = x is also strictly positive, then the restrictions of P_x, P̃_x to F_n are equivalent (have the same null sets), so that the likelihood ratio L_n exists. The following result gives the necessary and sufficient condition for {L_n} to define a new random walk:
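A quick numerical check of Theorem 4.1 (a sketch with an arbitrary increment distribution; not part of the text): ruin of u + Y_1 + ... + Y_n before time N occurs exactly when the Lindley process generated by the reversed, negated increments Z_1 = −Y_N, ..., Z_N = −Y_1 ends above u.

```python
import random

def lindley_final(zs):
    """Run the Lindley recursion W_{n+1} = (W_n + Z_{n+1})^+ from W_0 = 0."""
    w = 0.0
    for z in zs:
        w = max(w + z, 0.0)
    return w

rng = random.Random(42)
ok = True
for _ in range(1000):
    N = rng.randrange(1, 20)
    ys = [rng.gauss(-0.2, 1.0) for _ in range(N)]   # increments Y_1, ..., Y_N
    u = rng.uniform(0.0, 3.0)
    # ruin: does u + Y_1 + ... + Y_n drop below 0 for some n <= N?
    s, ruin = 0.0, False
    for y in ys:
        s += y
        if u + s < 0:
            ruin = True
    # Lindley process generated by Z_1 = -Y_N, ..., Z_N = -Y_1
    wN = lindley_final([-y for y in reversed(ys)])
    ok &= (ruin == (wN > u))
```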
Proposition 4.4 Let {L_n} be a multiplicative functional of a random walk with E_x L_n = 1 for all n and x. Then the change of measure in Theorem 2.5 corresponds to a new random walk if and only if

  L_n = h(Y_1) ··· h(Y_n)  (4.3)

P_x-a.s. for some function h with Eh(Y) = 1. In that case, the changed increment distribution is F̃(x) = E[h(Y); Y ≤ x].

Proof If (4.3) holds, then

  Ẽ_x[ ∏_{i=1}^n f_i(Y_i) ] = E_x[ ∏_{i=1}^n f_i(Y_i) h(Y_i) ] = ∏_{i=1}^n E f_i(Y_i) h(Y_i) = ∏_{i=1}^n ∫ f_i(y) F̃(dy),

from which the random walk property is immediate with the asserted form of F̃. Conversely, since L_1 has the form g(X_0, Y_1), the random walk property implies that Ẽ_x f(Y_1) = Ẽ_0 f(Y_1); this means E[g(x, Y) f(Y)] = E[g(0, Y) f(Y)] for all f and x, implying g(x, Y) = g(0, Y) a.s. (e.g. Breiman [78] p. 100), where h(y) = g(0, y). For n = 2, we get L_2 = L_1 (L_1 ∘ θ_1) = h(Y_1) g(X_1, Y_2) = h(Y_1) h(Y_2), and so on for n = 3, 4, .... □

A particularly important example is exponential change of measure, h(y) = e^{αy − κ(α)}, where κ(α) = log F̂[α] = log E e^{αY} is the c.g.f. of F. We get:

Corollary 4.5 Consider a random walk and an α such that κ(α) = log F̂[α] = log E e^{αY} is finite. Then the change of measure in Theorem 2.5 with h(y) = e^{αy − κ(α)}, i.e. with likelihood ratio

  L_n = exp{α(Y_1 + ... + Y_n) − nκ(α)}  (4.4)

({L_n} is the familiar Wald martingale), corresponds to a new random walk with changed increment distribution

  F̃(x) = e^{−κ(α)} ∫_{−∞}^x e^{αy} F(dy).
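The Wald martingale of Corollary 4.5 gives a simple sanity check: E exp{α(Y_1 + ... + Y_n) − nκ(α)} = 1 for every n. A minimal sketch with normal increments, for which κ(α) = αµ + α²σ²/2 in closed form (all parameter values are arbitrary choices for the check):

```python
import math
import random

# Check E exp{alpha*S_n - n*kappa(alpha)} = 1 for a random walk with
# N(mu, sigma^2) increments, where kappa(alpha) = alpha*mu + alpha^2*sigma^2/2.
mu, sigma, alpha, n = -0.5, 1.0, 0.7, 5
kappa = alpha * mu + alpha ** 2 * sigma ** 2 / 2
rng = random.Random(3)
m = 200000
total = 0.0
for _ in range(m):
    s = sum(rng.gauss(mu, sigma) for _ in range(n))
    total += math.exp(alpha * s - n * kappa)
est = total / m   # should be close to 1 by the martingale property
```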
Discrete time random walks have classical applications in queueing theory via the Lindley process representation of the waiting time. In risk theory, they arise as models for the reserve or claim surplus at a discrete sequence of instants, say the beginning of each month or year, or imbedded into continuous time processes, say by recording the reserve or claim surplus just before or just after claims (see Chapter V for some fundamental examples). However, the tradition in the area is to use continuous time models.

The appropriate generalization of random walks to continuous time is processes with stationary independent increments (Lévy processes). The traditional formal definition is that {X_t} is real-valued with the increments X_{t(1)}−X_{t(0)}, X_{t(2)}−X_{t(1)}, ..., X_{t(n)}−X_{t(n−1)} being independent whenever t(0) < t(1) < ... < t(n), and with X_{t(i)}−X_{t(i−1)} having distribution depending only on t(i) − t(i−1). An equivalent characterization is {X_t} being Markov with state space the real line and

  E[f(X_{t+s} − X_t) | F_t] = E_0 f(X_s).  (4.5)

In discrete time, this just means that {X_t} is a random walk, given by the increment distribution F(x) = P(X_{n+1} − X_n ≤ x).

Note that in continuous time the structure of such a process admits a complete description: {X_t} can be written as the independent sum of a pure drift {µt}, a Brownian component {σB_t} (scaled by a variance constant) and a pure jump process {M_t},

  X_t = X_0 + µt + σB_t + M_t.  (4.6)

Roughly, the pure jump process is given by its Lévy measure ν(dx), a positive measure on the real line with the properties

  ∫_{|x|≤ε} x² ν(dx) < ∞,  ∫_{|x|>ε} ν(dx) < ∞  (4.7)

for all ε > 0; the interpretation is that the rate of a jump of size x is ν(dx). The simplest case is β = ||ν|| < ∞, which corresponds to the compound Poisson case: here jumps of {M_t} occur at rate β and have distribution B = ν/β (in particular, the claim surplus process for the compound Poisson risk model with premium rate p corresponds to a process with stationary independent increments with µ = −p, σ² = 0 and ν = βB). A general jump process can be thought of as a limit of compound Poisson processes with drift, by considering a sequence ν^{(n)} of bounded measures with ν^{(n)} ↑ ν (if ∫_{−ε}^{ε} |x| ν(dx) = ∞, the interpretation and this description need some amendments, but we omit the details).
In risk theory, we are almost solely concerned with the compound Poisson case and shall therefore not treat the intricacies of unbounded Lévy measures in detail.

First assume in the setting of Section 3 that {R_t} is the risk reserve process for the compound Poisson risk model with constant premium rate p(r) ≡ 1. Then the storage process {V_t} has constant release rate 1, i.e. it has upwards jumps governed by B at the epochs of a Poisson process with rate β and decreases linearly at rate 1 in between jumps. A different interpretation is as the workload or virtual waiting time process in an M/G/1 queue, defined as a system with a single server working at a unit rate, having Poisson arrivals with rate β and distribution B of the service times of the arriving customers. Here workload refers to the fact that we can interpret V_t as the amount of time the server will have to work until the system is empty provided no new customers arrive; virtual waiting time refers to V_t being the amount of time a customer would have to wait before starting service if he arrived at time t (this interpretation requires the FIFO = First In First Out queueing discipline: the customers are served in the order of arrival). We get:

Proposition 4.6 In the compound Poisson risk model with constant premium rate p(r) ≡ 1,

  ψ(u, T) = P(V_T > u),

where V_T is the virtual waiting time at time T in an initially empty M/G/1 queue with the same arrival rate β and the service times having the same distribution B as the claims in the risk process. Furthermore, V_T → V for some r.v. V ∈ [0, ∞], and ψ(u) = P(V > u). [The condition for V < ∞ a.s. is easily seen to be βµ_B < 1, cf. Chapter III.]

Processes with a more complicated path structure like Brownian motion or jump processes with unbounded Lévy measure are not covered by Section 3; their reflected versions are then defined by means of the abstract reflection operator as in (4.2),

  W_T = X_T − min_{0≤t≤T} X_t  (4.8)

(assuming W_0 = X_0 = 0 for simplicity).

Proposition 4.7 If {X_t} has stationary independent increments as in (4.6), then

  E e^{α(X_t − X_0)} = E_0 e^{αX_t} = e^{tκ(α)},  (4.9)

where

  κ(α) = αµ + α²σ²/2 + ∫_{−∞}^∞ (e^{αx} − 1) ν(dx),  (4.10)

provided the Lévy measure of the jump part {M_t} satisfies ∫ |x| ν(dx) < ∞.
Proof By standard formulas for the normal distribution, E e^{α(µt + σB_t)} = e^{t(αµ + α²σ²/2)}. For the jump part, we show in the compound Poisson case (||ν|| < ∞) in Proposition III.1 that E e^{αM_t} = exp{t ∫ (e^{αx} − 1) ν(dx)}. In the general case, use the representation as a limit of compound Poisson processes. □

Note that (4.10) is the Lévy-Khinchine representation of the c.g.f. of an infinitely divisible distribution (see, e.g., Chung [86]). This is of course no coincidence, since the distribution of X_1 − X_0 is necessarily infinitely divisible when {X_t} has stationary independent increments.

Theorem 4.8 Assume that {X_t} has stationary independent increments and that {L_t} is a nonnegative multiplicative functional of the form L_t = g(t, X_t − X_0) with E_x L_t = 1 for all x, t. Then the Markov process given by Theorem 2.5 has stationary independent increments as well. In particular, if L_t = e^{θ(X_t − X_0) − tκ(θ)}, then the changed parameters in the representation (4.6) are

  µ̃ = µ + θσ²,  σ̃² = σ²,  ν̃(dx) = e^{θx} ν(dx).  (4.11)

Proof For the first statement, we use the characterization (4.5) and get

  Ẽ[f(X_{t+s} − X_t) | F_t] = E[f(X_{t+s} − X_t)(L_s ∘ θ_t) | F_t]
    = E[f(X_{t+s} − X_t) g(s, X_{t+s} − X_t) | F_t] = E_0[f(X_s) g(s, X_s)] = E_0[f(X_s) L_s] = Ẽ_0 f(X_s).

For the second, let e^{κ(α)} = E_0 e^{αX_1}. Then

  e^{κ̃(α)} = E_0[L_1 e^{αX_1}] = e^{−κ(θ)} E_0[e^{(α+θ)X_1}] = e^{κ(α+θ) − κ(θ)},

i.e. κ̃(α) = κ(α + θ) − κ(θ). By explicit calculation,

  κ(α + θ) − κ(θ) = αµ + ((α + θ)² − θ²)σ²/2 + ∫_{−∞}^∞ (e^{(α+θ)x} − e^{θx}) ν(dx)
                  = α(µ + θσ²) + α²σ²/2 + ∫_{−∞}^∞ (e^{αx} − 1) e^{θx} ν(dx),

which corresponds to the stated parameters µ̃, σ̃², ν̃(dx). □
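The relation κ̃(α) = κ(α + θ) − κ(θ) behind Theorem 4.8 can be verified numerically in the compound Poisson case with Exp(δ) jumps, where everything is available in closed form (a sketch; all parameter values are hypothetical):

```python
def kappa(alpha, mu, sigma2, beta, delta):
    """kappa(alpha) from (4.10) for drift mu, Brownian variance sigma2 and
    compound Poisson jumps at rate beta with Exp(delta) sizes,
    using B-hat[alpha] = delta / (delta - alpha)."""
    return alpha * mu + alpha ** 2 * sigma2 / 2 + beta * (delta / (delta - alpha) - 1)

# arbitrary parameters with theta and alpha + theta below delta
mu, sigma2, beta, delta, theta = 0.3, 0.8, 1.2, 2.5, 0.6
Bhat = delta / (delta - theta)
# tilted parameters per (4.11): mu + theta*sigma2, beta*B-hat[theta], Exp(delta - theta) jumps
mu_t, beta_t, delta_t = mu + theta * sigma2, beta * Bhat, delta - theta
diffs = [abs(kappa(a, mu_t, sigma2, beta_t, delta_t)
             - (kappa(a + theta, mu, sigma2, beta, delta)
                - kappa(theta, mu, sigma2, beta, delta)))
         for a in (0.1, 0.4, 0.9)]
```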
Remark 4.9 If X_0 = 0, then the martingale {e^{θX_t − tκ(θ)}} is the continuous time analogue of the Wald martingale (4.4). □

Example 4.10 Let X_t be the claim surplus process of a compound Poisson risk process with Poisson rate β and claim size distribution B, corresponding to µ = −1 (premium rate p = 1), σ² = 0, ν(dx) = βB(dx). Then we can write

  ν̃(dx) = β e^{θx} B(dx) = β̃ B̃(dx), where β̃ = βB̂[θ], B̃(dx) = (e^{θx}/B̂[θ]) B(dx).

Thus (since µ̃ = µ = −1, σ̃² = σ² = 0) the changed process is the claim surplus process of another compound Poisson risk process with Poisson rate β̃ = βB̂[θ] and claim size distribution B̃. □

Example 4.11 For an example of a likelihood ratio not covered by Theorem 4.8, let the given Markov process (specified by the P_x) be the claim surplus process of a compound Poisson risk process with Poisson rate β and claim size distribution B, and let the P̃_x refer to the claim surplus process of another compound Poisson risk process with Poisson rate β̃ and claim size distribution B̃ ≠ B. Recalling that σ_1, σ_2, ... are the arrival times and U_1, U_2, ... the corresponding claim sizes, it is then easily seen that

  L_t = e^{(β − β̃)t} ∏_{i: σ_i ≤ t} (β̃/β) (dB̃/dB)(U_i)

whenever the Radon-Nikodym derivative dB̃/dB exists (e.g. dB̃/dB = b̃/b when B̃, B have densities b̃, b with b(x) > 0 for all x such that b̃(x) > 0). □

5 Markov additive processes

A Markov additive process, abbreviated as MAP in this section², is defined as a bivariate Markov process {X_t} = {(J_t, S_t)}, where {J_t} is a Markov process with state space E (say) and the increments of {S_t} are governed by {J_t} in the sense that

  E[f(S_{t+s} − S_t) g(J_{t+s}) | F_t] = E_{J_t,0}[f(S_s) g(J_s)].  (5.1)

For shorthand, we write P_i, E_i instead of P_{i,0}, E_{i,0} in the following. As for processes with stationary independent increments, the structure of MAP's is completely understood when E is finite, and only there.

²One reason for reserving the abbreviation for this section is that in parts of the applied probability literature, MAP stands for the Markovian arrival process discussed below.
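Before proceeding, a quick numerical check on Example 4.11 above: the likelihood ratio there must satisfy E L_t = 1 for each t. The sketch below simulates a compound Poisson path with Exp(d) claims under P and averages L_t for a tilted pair (β̃, Exp(d̃)); all parameter values are hypothetical.

```python
import math
import random

beta, beta_t, t = 1.0, 1.5, 2.0   # Poisson rates of P and P-tilde, time horizon
d, d_t = 1.0, 2.0                 # Exp rates of claim distributions B and B-tilde
rng = random.Random(5)
total, N = 0.0, 100000
for _ in range(N):
    L = math.exp((beta - beta_t) * t)        # e^{(beta - beta~) t}
    s = rng.expovariate(beta)                # first arrival under P
    while s < t:
        U = rng.expovariate(d)               # claim size under P
        # factor (beta~ * b~(U)) / (beta * b(U)) with exponential densities
        L *= (beta_t * d_t * math.exp(-d_t * U)) / (beta * d * math.exp(-d * U))
        s += rng.expovariate(beta)
    total += L
est = total / N   # should be close to 1
```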
In discrete time, a MAP is specified by the measure-valued matrix (kernel) F(dx), whose ijth element is the defective probability distribution

  F_ij(dx) = P_{i,0}(J_1 = j, Y_1 ∈ dx), where Y_n = S_n − S_{n−1}.

An alternative description is in terms of the transition matrix P = (p_ij)_{i,j∈E} (here p_ij = P_i(J_1 = j)) and the probability measures

  H_ij(dx) = P(Y_1 ∈ dx | J_0 = i, J_1 = j) = F_ij(dx)/p_ij.

In simulation language, this means that the MAP can be simulated by first simulating the Markov chain {J_n} and next the Y_1, Y_2, ..., by generating Y_n according to H_ij when J_{n−1} = i, J_n = j. If all F_ij are concentrated on (0, ∞), a MAP is the same as a semi-Markov or Markov renewal process, with the Y_n being interpreted as interarrival times.

In continuous time (assuming D-paths), {J_t} is specified by its intensity matrix Λ = (λ_ij)_{i,j∈E}. On an interval [t, t+s) where J_t ≡ i, {S_t} evolves like a process with stationary independent increments, with the parameters µ_i, σ_i², ν_i(dx) in (4.6) depending on i. In addition, a jump of {J_t} from i to j ≠ i has probability q_ij of giving rise to a jump of {S_t} at the same time, the distribution of which has some distribution B_ij. (That a process with this description is a MAP is obvious; the converse requires a proof, which we omit and refer to Neveu [272] or Çinlar [87].) If E is infinite, a MAP may be much more complicated. As an example, let {J_t} be standard Brownian motion on the line. Then a Markov additive process can be defined by letting

  S_t = lim_{ε↓0} (1/2ε) ∫_0^t I(|J_s| ≤ ε) ds

be the local time at 0 up to time t.

As a generalization of the m.g.f., consider the matrix F̂[α] with ijth element E_i[e^{αS_1}; J_1 = j].

Proposition 5.1 For a MAP in discrete time and with E finite, F̂_n[α] = F̂[α]^n, where F̂_n[α] is the matrix with ijth element E_i[e^{αS_n}; J_n = j] and F̂[α] = (E_i[e^{αY_1}; J_1 = j])_{i,j∈E} = (p_ij Ĥ_ij[α])_{i,j∈E}.
Proof Conditioning upon (J_n, S_n) yields

  E_i[e^{αS_{n+1}}; J_{n+1} = j] = Σ_{k∈E} E_i[e^{αS_n}; J_n = k] E_k[e^{αY_1}; J_1 = j],

which in matrix formulation is the same as F̂_{n+1}[α] = F̂_n[α] F̂[α]. □

Proposition 5.2 Let E be finite and consider a continuous time Markov additive process with parameters Λ, µ_i, σ_i², ν_i(dx), q_ij, B_ij (i, j ∈ E) and S_0 = 0. Then the matrix F̂_t[α] with ijth element E_i[e^{αS_t}; J_t = j] is given by e^{tK[α]}, where

  K[α] = Λ + (κ^{(i)}(α))_diag + (λ_ij q_ij (B̂_ij[α] − 1)),

with

  κ^{(i)}(α) = αµ_i + α²σ_i²/2 + ∫ (e^{αx} − 1) ν_i(dx).

Proof Up to o(h) terms,

  E_i[e^{αS_{t+h}}; J_{t+h} = j]
    = (1 + λ_jj h) E_i[e^{αS_t}; J_t = j] (1 + hκ^{(j)}(α))
      + h Σ_{k≠j} λ_kj E_i[e^{αS_t}; J_t = k] {1 − q_kj + q_kj B̂_kj[α]}

(recall that q_jj = 0). In matrix formulation, this means that

  F̂_{t+h}[α] = F̂_t[α] ( I + h(κ^{(i)}(α))_diag + hΛ + h(λ_ij q_ij(B̂_ij[α] − 1)) ),

i.e. (d/dt) F̂_t[α] = F̂_t[α] K[α], which in conjunction with F̂_0[α] = I implies F̂_t[α] = e^{tK[α]} according to the standard solution formula for systems of linear differential equations. □
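Proposition 5.1 can be tested numerically on a small example. The sketch below uses E = {0, 1} with N(m_ij, 1) increments given the chain transition i → j (all parameters are hypothetical) and compares F̂[α]^n against a Monte Carlo estimate of E_0[e^{αS_n}; J_n = 0]:

```python
import math
import random

# hypothetical two-state discrete MAP: Y_n ~ N(m[i][j], 1) given J_{n-1}=i, J_n=j
P = [[0.7, 0.3], [0.4, 0.6]]
m = [[0.5, -1.0], [2.0, 0.0]]
alpha, n = 0.3, 4

def Fhat():
    """F-hat[alpha]_{ij} = p_ij * E e^{alpha Y} with Y ~ N(m_ij, 1)."""
    return [[P[i][j] * math.exp(alpha * m[i][j] + alpha ** 2 / 2)
             for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Fn = Fhat()
for _ in range(n - 1):
    Fn = matmul(Fn, Fhat())            # F-hat[alpha]^n

# Monte Carlo estimate of E_0[e^{alpha S_n}; J_n = 0]
rng = random.Random(7)
total, N = 0.0, 200000
for _ in range(N):
    j, s = 0, 0.0
    for _ in range(n):
        jn = 0 if rng.random() < P[j][0] else 1
        s += rng.gauss(m[j][jn], 1.0)
        j = jn
    if j == 0:
        total += math.exp(alpha * s)
est = total / N                        # should match Fn[0][0]
```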
In the following, we assume that the Markov chain/process {J_t} is ergodic. By Perron-Frobenius theory (see A.4c), we infer that in the discrete time case the matrix F̂[α] has a real eigenvalue e^{κ(α)} with maximal absolute value, and that in the continuous time case K[α] has a real eigenvalue κ(α) with maximal real part. The corresponding left and right eigenvectors ν^{(α)}, h^{(α)} may be chosen with strictly positive components. Since ν^{(α)}, h^{(α)} are only given up to constants, we are free to impose two normalizations, and we shall take

  ν^{(α)} h^{(α)} = 1,  π h^{(α)} = 1,  (5.2)

where π = ν^{(0)} is the stationary distribution. Then h^{(0)} = e.

The function κ(α) plays in many respects the same role as the cumulant g.f. of a random walk, as will be seen from the following results: its derivatives are 'asymptotic cumulants' (cf. Corollary 5.7), and appropriate generalizations of the Wald martingale (and the associated change of measure) can be defined in terms of κ(α) and h^{(α)}.

Corollary 5.3 E_i[e^{αS_t}; J_t = j] ~ h_i^{(α)} ν_j^{(α)} e^{tκ(α)}, t → ∞.

Proof By Perron-Frobenius theory (see A.4c). □

We also get an analogue of the Wald martingale for random walks:

Proposition 5.4 E_i[e^{αS_t} h^{(α)}_{J_t}] = h_i^{(α)} e^{tκ(α)}. Furthermore,

  { e^{αS_t − tκ(α)} h^{(α)}(J_t) }_{t≥0}

is a martingale.

Proof For the first assertion, just note that

  E_i[e^{αS_t} h^{(α)}_{J_t}] = e_i' F̂_t[α] h^{(α)} = e_i' e^{tK[α]} h^{(α)} = e^{tκ(α)} e_i' h^{(α)} = h_i^{(α)} e^{tκ(α)}.

It then follows that

  E[e^{αS_{t+v} − (t+v)κ(α)} h^{(α)}_{J_{t+v}} | F_t] = e^{αS_t − tκ(α)} E_{J_t}[e^{αS_v − vκ(α)} h^{(α)}_{J_v}] = e^{αS_t − tκ(α)} h^{(α)}_{J_t}. □

Let k^{(α)} denote the derivative of h^{(α)} w.r.t. α, and write k = k^{(0)}.

Corollary 5.5 E_i S_t = tκ'(0) + k_i − E_i k_{J_t}.
Proof By differentiation in Proposition 5.4,

  E_i[S_t e^{αS_t} h^{(α)}_{J_t} + e^{αS_t} k^{(α)}_{J_t}] = e^{tκ(α)} (k_i^{(α)} + tκ'(α) h_i^{(α)}).  (5.3)

Let α = 0 and recall that h^{(0)} = e, so that E_i[S_t + k_{J_t}] = k_i + tκ'(0). □

In the same way, one obtains a generalization of Wald's identity E S_τ = Eτ · E S_1 for a random walk:

Corollary 5.6 For any stopping time τ with finite mean,

  E_i S_τ = κ'(0) E_i τ + k_i − E_i k_{J_τ}.

Corollary 5.7 No matter the initial distribution ν of J_0,

  E_ν S_t / t → κ'(0)  and  Var_ν S_t / t → κ''(0),  t → ∞.

Proof The first assertion is immediate by dividing by t in Corollary 5.5. For the second, we differentiate (5.3) once more to get

  E_i[S_t² e^{αS_t} h^{(α)}_{J_t} + 2 S_t e^{αS_t} k^{(α)}_{J_t} + e^{αS_t} (d/dα) k^{(α)}_{J_t}]
    = e^{tκ(α)} ( (d/dα) k_i^{(α)} + 2tκ'(α) k_i^{(α)} + t{κ''(α) + tκ'(α)²} h_i^{(α)} ).

Multiplying by ν_i, summing and letting α = 0 yields

  E_ν[S_t² + 2 S_t k_{J_t}] = t²κ'(0)² + 2tκ'(0) νk + tκ''(0) + O(1).

Squaring in Corollary 5.5,

  [E_ν S_t]² = t²κ'(0)² + 2tκ'(0) νk − 2tκ'(0) E_ν k_{J_t} + O(1).

Since it is easily seen by an asymptotic independence argument that E_ν[S_t k_{J_t}] = tκ'(0) E_ν k_{J_t} + O(1), subtraction yields Var_ν S_t = tκ''(0) + O(1). □

The argument is slightly heuristic (e.g. the existence of exponential moments is assumed), but can be made rigorous by passing to characteristic functions.

Remark 5.8 Also for E being infinite (possibly uncountable), E e^{αS_t} typically grows asymptotically exponentially with a rate κ(α) independent of the initial condition (i.e., of the distribution of J_0). More precisely, there is typically a function h = h^{(α)} on E and a κ(α) such that

  E_x e^{αS_t} e^{−tκ(α)} → h(x), t → ∞,  (5.4)
for all x ∈ E. From (5.4) one then (at least heuristically) obtains

  lim_{v→∞} E_x e^{αS_v} e^{−vκ(α)} = lim_{v→∞} E_x[ e^{αS_t − tκ(α)} E_{J_t}[e^{αS_{v−t} − (v−t)κ(α)}] ] = E_x[ e^{αS_t − tκ(α)} h(J_t) ].

It then follows as in the proof of Proposition 5.4 that

  { e^{αS_t − tκ(α)} h(J_t) }_{t≥0}  (5.5)

is a martingale. In view of this discussion, we take the martingale property as our basic condition below (though this is automatic in the finite case). □

Remark 5.9 The condition that (5.5) is a martingale can be expressed via the infinitesimal generator G of {X_t} = {(J_t, S_t)} as follows. Usually, G is defined as

  Gf(x) = lim_{t↓0} ( E_x f(X_t) − f(x) ) / t

provided the limit exists; some extra conditions are imposed, in particular that f is bounded. This is, however, inconvenient for the present purposes due to the unboundedness of e^{αs}, so we shall not aim for complete rigour but interpret G in a broader sense. Given a function h on E, let h_α(i, s) = e^{αs} h(i). For t small, the martingale property leads to h(i) + t G h_α(i, 0) = h(i)(1 + tκ(α)), i.e.

  G h_α(i, 0) = κ(α) h(i).  (5.6)

We shall not exploit this approach systematically. An example beyond the finite case occurs for periodic risk processes, cf. VI.3b and Remark VI.6, where {J_t} is deterministic period motion on E = [0, 1) (i.e., J_t = (s + t) mod 1 P_s-a.s. for s ∈ E). □

Proposition 5.10 Let {(J_t, S_t)} be a MAP, and let θ be such that

  {L_t}_{t≥0} = { (h(J_t)/h(J_0)) e^{θS_t − tκ(θ)} }_{t≥0}

is a P_x-martingale for each x ∈ E. Then {L_t} is a multiplicative functional, and the family {P̃_x}_{x∈E} given by Theorem 2.5 defines a new MAP.
Proof That {L_t} is a multiplicative functional follows from

  L_s ∘ θ_t = (h(J_{t+s}) / h(J_t)) e^{θ(S_{t+s} − S_t) − sκ(θ)}.

The proof that we have a MAP is contained in the proof of Theorem 5.11 below in the finite case. In the infinite case, one can directly verify that (5.1) holds for the P̃_i. We omit the details. □

Theorem 5.11 Consider the irreducible case with E finite. Then the MAP in Proposition 5.10 is given by

  P̃ = e^{−κ(θ)} Δ_h^{−1} F̂[θ] Δ_h,   H̃_ij(dx) = e^{θx} H_ij(dx) / Ĥ_ij[θ]

in the discrete time case, and by

  Λ̃ = Δ_h^{−1} K[θ] Δ_h − κ(θ)I,   μ̃_i = μ_i + θσ_i^2,   ν̃_i(dx) = e^{θx} ν_i(dx),

  q̃_ij = q_ij B̂_ij[θ] / (1 + q_ij(B̂_ij[θ] − 1)),   B̃_ij(dx) = e^{θx} B_ij(dx) / B̂_ij[θ]   (5.7)

in the continuous time case. Here Δ_h is the diagonal matrix with the h_i^{(θ)} on the diagonal. In particular, if ν_i(dx) is compound Poisson, ν_i(dx) = β_i B_i(dx) with β_i < ∞ and B_i a probability measure, then also ν̃_i(dx) is compound Poisson with

  β̃_i = β_i B̂_i[θ],   B̃_i(dx) = e^{θx} B_i(dx) / B̂_i[θ].

Remark 5.12 The expression for Λ̃ means

  λ̃_ij = (h_j^{(θ)} / h_i^{(θ)}) λ_ij [1 + q_ij(B̂_ij[θ] − 1)],   i ≠ j.

In particular, this gives a direct verification that Λ̃ is an intensity matrix: the off-diagonal elements are non-negative because λ_ij ≥ 0, 0 < q_ij ≤ 1 and B̂_ij[θ] > 0. That the rows sum to 0 follows from

  Λ̃e = Δ_h^{−1} K[θ] Δ_h e − κ(θ)e = Δ_h^{−1} K[θ] h^{(θ)} − κ(θ)e = Δ_h^{−1} κ(θ) h^{(θ)} − κ(θ)e = κ(θ)e − κ(θ)e = 0.

That 0 < q̃_ij ≤ 1 follows from the inequality

  qb / (1 + q(b − 1)) ≤ 1,   0 < q ≤ 1,  0 < b < ∞.
Proof of Theorem 5.11. Consider first the discrete time case. First note that the ij-th element of F̃_t[α] is

  Ẽ_i[e^{αS_t}; J_t = j] = E_i[L_t e^{αS_t}; J_t = j] = (h_j^{(θ)} / h_i^{(θ)}) e^{−tκ(θ)} E_i[e^{(α+θ)S_t}; J_t = j].

In matrix notation, this means

  F̃_t[α] = e^{−tκ(θ)} Δ_h^{−1} F̂_t[α + θ] Δ_h.   (5.8)

Here the stated formula for P̃ follows immediately by letting t = 1, α = 0 in (5.8). Further

  F̃_ij(dx) = P̃_i(Y_1 ∈ dx, J_1 = j) = E_i[L_1; Y_1 ∈ dx, J_1 = j] = (h_j^{(θ)} / h_i^{(θ)}) e^{−κ(θ)} e^{θx} F_ij(dx).

This shows that F̃_ij is absolutely continuous w.r.t. F_ij with a density proportional to e^{θx}. Hence the same is true for H̃_ij and H_ij, and since H̃_ij, H_ij are probability measures, it follows that indeed the normalizing constant is Ĥ_ij[θ].

Similarly, in continuous time (5.8) yields

  e^{tK̃[α]} = Δ_h^{−1} e^{t(K[α+θ] − κ(θ)I)} Δ_h.

By a general formula (A.13) for matrix-exponentials, this implies

  K̃[α] = Δ_h^{−1} (K[α + θ] − κ(θ)I) Δ_h.

Letting α = 0 yields the stated expression for Λ̃. That κ^{(i)}(α + θ) − κ^{(i)}(θ) corresponds to the stated parameters μ̃_i, σ̃_i^2, ν̃_i(dx) of a process with stationary independent increments follows from Theorem 4.8. Finally note that by (5.7),

  λ̃_ij q̃_ij (B̃̂_ij[α] − 1) = (h_j^{(θ)} / h_i^{(θ)}) λ_ij q_ij (B̂_ij[α + θ] − B̂_ij[θ]),

which completes the identification of the jump parts. □

Notes and references The earliest paper on treatment of MAP's in the present spirit we know of is Nagaev [265]. Though the literature on MAP's is extensive, there is, however, hardly a single comprehensive treatment; an extensive bibliography on aspects of the theory can be found in Asmussen [16]. Much of the pioneering was done in the sixties in papers like Keilson & Wishart [224], [225], [226] and Miller [260], [261], [262] in discrete time; the literature on the continuous time case tends more to deal with special cases. The closest reference on exponential families of random walks on a Markov chain we know of within the more statistically oriented literature is Hoglund [203], which, however, is slightly less general than the present setting. For the Wald identity in Corollary 5.6, see also Fuh & Lai [149] and Moustakides [264]. Conditions for analogues of Corollary 5.3 for an infinite E are given by Ney & Nummelin [266].

6 The ladder height distribution

We consider the claim surplus process {S_t} of a general risk process and the time

  τ(u) = inf {t ≥ 0 : S_t > u}

to ruin in the particular case u = 0. Write τ_+ = τ(0) and define the associated ladder height S_{τ_+} and ladder height distribution by

  G_+(x) = P(S_{τ_+} ≤ x) = P(S_{τ_+} ≤ x, τ_+ < ∞).

Note that G_+ is concentrated on (0, ∞), i.e. has no mass on (−∞, 0], and is typically defective, i.e.

  ||G_+|| = G_+(∞) = P(τ_+ < ∞) = ψ(0) < 1

when η > 0 (there is positive probability that {S_t} will never come above level 0).
[Figure 6.1: the process {M_t} of relative maxima; the ladder points S_{τ+(1)}, S_{τ+(2)}, ... and the maximum M.]

The term ladder height is motivated from the shape of the process {M_t} of relative maxima, see Fig. 6.1. The first ladder step is precisely S_{τ_+}; the second ladder point is S_{τ+(2)}, where τ_+(2) is the time of the next relative maximum after τ_+(1) = τ_+; the second ladder height (step) is S_{τ+(2)} − S_{τ+(1)}, and so on. The maximum M is the total height of the ladder, i.e. the sum of all the ladder steps (if η > 0, there are only finitely many). In simple cases like the compound Poisson model, the ladder heights are i.i.d., a fact which turns out to be extremely useful. In other cases like the Markovian environment model, they have a semi-Markov structure (but in complete generality, the dependence structure seems too complicated to be useful).

The main result of this section is Theorem 6.5 below, which gives an explicit expression for G_+ in a very general setting, where basically only stationarity is assumed. To illustrate the ideas, we shall first consider the compound Poisson model in the notation of Example 1.1; at present we concentrate on the first ladder height.

Theorem 6.1 For the compound Poisson model with ρ = βμ_B < 1, G_+ is given by the defective density

  g_+(x) = βB̄(x) = ρ b_0(x)  on (0, ∞).

Here b_0(x) = B̄(x)/μ_B, and B̄(x) = 1 − B(x) denotes the tail of B.

For the proof of Theorem 6.1, define the pre-τ_+ occupation measure R_+ by

  R_+(A) = E ∫_0^∞ I(S_t ∈ A, τ_+ > t) dt = E ∫_0^{τ_+} I(S_t ∈ A) dt.   (6.1)

The interpretation of R_+(A) is as the expected time {S_t} spends in the set A before τ_+. In any case, R_+ is concentrated on (−∞, 0], i.e. has no mass on (0, ∞). Also, by approximation with step functions, it follows that for g ≥ 0 measurable,

  ∫_{−∞}^0 g(y) R_+(dy) = E ∫_0^{τ_+} g(S_t) dt.   (6.2)
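As a quick numerical illustration of Theorem 6.1 (a sketch added here, not part of the original text; the parameter values are arbitrary): for exponential claims B = Exp(δ), integrating the defective density g_+(x) = βB̄(x) over (0, ∞) should recover the total ladder-step mass ||G_+|| = βμ_B = ρ.

```python
import math

def ladder_mass(beta, delta, h=1e-4, xmax=40.0):
    """Numerical integral of g_+(x) = beta * Bbar(x) for Exp(delta) claims,
    using the midpoint rule; the total defective mass should be beta/delta = rho."""
    n = int(xmax / h)
    return h * sum(beta * math.exp(-delta * (i + 0.5) * h) for i in range(n))

print(ladder_mass(0.5, 1.0))  # close to rho = 0.5
```

The same integral with any other claim tail B̄ would give βμ_B, illustrating that only the mean of B enters the total mass.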
Lemma 6.2 R_+ is the restriction of Lebesgue measure to (−∞, 0].

Proof Let T be fixed and define S*_t = S_T − S_{T−t}. That is, {S*_t}_{0≤t≤T} is constructed from {S_t}_{0≤t≤T} by time-reversion and hence, since the distribution of the Poisson process is invariant under time reversion, has the same distribution as {S_t}_{0≤t≤T}. Thus

  P(S_T ∈ A, τ_+ > T) = P(S_T ∈ A, S_t ≤ 0, 0 ≤ t < T)
    = P(S*_T ∈ A, S*_T ≤ S*_T − S*_{T−t}, 0 ≤ t < T)
    = P(S*_T ∈ A, S*_T ≤ S*_t, 0 ≤ t < T)
    = P(S_T ∈ A, S_T ≤ S_t, 0 ≤ t < T),

cf. Fig. 6.2.

[Figure 6.2(a): the event τ_+ > T.  Figure 6.2(b): the time-reversed path, with S*_T at a minimum.]
Integrating w.r.t. dT, it follows that R_+(A) is the expected time when S_T is in A and at a minimum at the same time. But since S_t → −∞ a.s. on the relevant paths, cf. Fig. 6.3 where the bold lines correspond to minimal values, each level in (−∞, 0] is traversed at a minimum at unit rate exactly once, and this is just the Lebesgue measure of A. □

[Figure 6.3: a sample path with the segments at a running minimum marked by bold lines.]

Lemma 6.3 G_+ is the restriction of βR_+ * B to (0, ∞). That is, for A ⊆ (0, ∞),

  G_+(A) = β ∫_{−∞}^0 B(A − y) R_+(dy).

Proof A jump of {S_t} at time t and of size U contributes to the event {S_{τ_+} ∈ A} precisely when τ_+ > t, U + S_{t−} ∈ A. The probability of this given {S_u}_{u<t} is B(A − S_{t−}) I(τ_+ > t), and since the jump rate is β, we get

  G_+(A) = β ∫_0^∞ E[B(A − S_{t−}); τ_+ > t] dt = β ∫_0^∞ E[B(A − S_t); τ_+ > t] dt
    = βE ∫_0^{τ_+} g(S_t) dt = β ∫_{−∞}^0 g(y) R_+(dy),

where g(y) = B(A − y) (here we used the fact that the probability of a jump at t is zero in the second step, and (6.2) in the last). □

Proof of Theorem 6.1 With r_+(y) = I(y ≤ 0) the density of R_+, Lemma 6.3 yields

  g_+(x) = β ∫ r_+(x − z) B(dz) = β ∫ I(x ≤ z) B(dz) = βB̄(x). □

Generalizing the setup, we consider the claim surplus process {S*_t}_{t≥0} of a risk reserve process in a very general setup, assuming basically stationarity in time and space. The sample path structure is assumed to be as for the compound Poisson case: {S*_t} is generated from interclaim times T*_k and claim sizes U*_k according to premium 1 per unit time, i.e.

  S*_t = Σ_{k=1}^{N*_t} U*_k − t,  where N*_t = max{k = 0, 1, ... : T*_1 + ... + T*_k ≤ t}.

The traditional representation of the input sequence {(T*_k, U*_k)}_{k=1,2,...} is as a marked point process M*, the first component representing time (the arrival time σ*_k = T*_1 + ... + T*_k) and the second the mark (the claim size U*_k). The points in the plane (marked by × on Fig. 6.4) are the (σ*_k, U*_k), i.e. M* may be viewed as a point process on [0, ∞) × (0, ∞). The marked point process M* ∘ θ_s shifted by s is defined the obvious way, with points (σ*_k − s, U*_k) for those k for which σ*_k − s > 0. We call M* stationary if M* ∘ θ_s has the same distribution as M* for all s > 0; obviously, this is equivalent to the risk process {S*_t} being stationary in the sense of

  {S*_{t+s} − S*_s}_{t≥0} = {S*_t}_{t≥0} in distribution for all s ≥ 0.   (6.4)

In the stationary case, we define the arrival rate as β = E#{k : σ*_k ∈ [0, h]} / h (by stationarity, this does not depend on h). The first ladder epoch τ*_+ is defined as inf{t ≥ 0 : S*_t > 0}, and the corresponding ladder height distribution is

  G*_+(A) = P(S*_{τ*_+} ∈ A) = P(S*_{τ*_+} ∈ A, τ*_+ < ∞).

[Figure 6.4: the marked point process M*; points (σ*_k, U*_k) marked by ×.]

Given a stationary marked point process M*, we define its Palm version M as a marked point process having the conditional distribution of M* given an arrival at time 0. We represent M by the sequence (T_k, U_k)_{k=1,2,...}, where T_1 = 0, and let T = T_2 denote the first proper interarrival time. The two fundamental formulas connecting M* and M are

  Eφ(M) = (1/(βh)) E Σ_{k: σ*_k ∈ [0,h]} φ(M* ∘ θ_{σ*_k}),   Eφ(M*) = βE ∫_0^T φ(M ∘ θ_t) dt,   (6.5)

where h > 0 is an arbitrary constant (in the literature, most often one takes h = 1). This more or less gives a proof that indeed (6.5) does not depend on h: letting h ↓ 0, βh becomes the approximate probability P(σ*_1 ≤ h) of an arrival in [0, h], and the sum approximately φ(M*)I(σ*_1 ≤ h); i.e., the l.h.s. of (6.5) represents the conditional distribution of M* given σ*_1 = 0. Note also that (again by stationarity) the Palm distribution also represents the conditional distribution of M* ∘ θ_t given an arrival at time t. See, e.g., Sigman [348] for these and further aspects of Palm theory.

Example 6.4 Consider a finite Markov additive process (cf. Section 5) which has pure jump structure corresponding to μ_i = σ_i^2 = 0, ν_i(dx) = β_i B_i(dx). Assume {J_t} irreducible, so that a stationary distribution π = (π_i)_{i∈E} exists. Interpreting jump times as arrival times and jump sizes as marks, we get a marked point process generated by Poisson arrivals at rate β_i and mark distribution B_i when J_t = i, and by some additional arrivals which occur w.p. q_ij when {J_t} jumps from i to j and have mark distribution B_ij. A stationary marked point process M* is obtained by assigning J_0 distribution π.

If J_{t−} = i, an arrival for M* occurs before time t + dt w.p.

  (β_i + Σ_{j≠i} λ_ij q_ij) dt.

Thus the arrival rate for M* is

  β = Σ_{i∈E} π_i (β_i + Σ_{j≠i} λ_ij q_ij).

Given that an arrival occurs at time t, the probability α_ij that J_{t−} = i, J_t = j is π_i β_i / β for i = j and π_i λ_ij q_ij / β for i ≠ j. It follows that we can describe the Palm version M as follows. First choose (J_{0−}, J_0) w.p. α_ij for (i, j), and let the initial mark U_1 have distribution B_i when i = j and B_ij otherwise. After that, let the arrivals and their marks be generated by {J_t} starting from J_0 = j. Note in particular that the Palm distribution of the mark size (i.e., the distribution of U_1) is the mixture

  B = Σ_{i∈E} α_ii B_i + Σ_{i∈E} Σ_{j≠i} α_ij B_ij = (1/β) Σ_{i∈E} π_i (β_i B_i + Σ_{j≠i} λ_ij q_ij B_ij). □

Theorem 6.5 Consider a general stationary claim surplus process {S*_t}_{t≥0}. Assume that S*_t → −∞ a.s., and that ρ = βEU_0 < 1, where U_0 is a r.v. having the Palm distribution of the claim size and F(x) = P(U_0 ≤ x) its distribution. Then the ladder height distribution G*_+ is given by the (defective) density g_+(x) = βF̄(x).

Before giving the proof, we note:

Corollary 6.6 Under the assumptions of Theorem 6.5, the ruin probability ψ*(0) with initial reserve u = 0 is ρ = βEU_0. This follows by noting that

  ψ*(0) = ||G*_+|| = ∫_0^∞ g_+(x) dx = β ∫_0^∞ F̄(x) dx = βEU_0.
By (6.5),

  ρ = βEU_0 = E Σ_{k: σ*_k ∈ [0,1]} U*_k

has a very simple interpretation as the average amount of claims received per unit time. The result is notable by giving an explicit expression for a ruin probability in great generality, and by only depending on the parameters of the model through the arrival rate β and the average (in the Palm sense) claim size EU_0. The last property is referred to as insensitivity in the applied probability literature.

Proof of Theorem 6.5. A standard argument for stationary processes ([78] p. 105) shows that one can assume w.l.o.g. that M* and M have doubly infinite time, i.e. are point processes on (−∞, ∞) × (0, ∞). We then represent M by the mark (claim size) U_0 of the arrival at time 0, the arrival times 0 < σ_1 < σ_2 < ... in (0, ∞) and the arrival times 0 > σ_{−1} > σ_{−2} > ... in (−∞, 0); the mark at time σ_k is denoted by U_k.

Let p(t) be the conditional probability that S_{τ_+} ∈ A, τ_+ = t given the event A_t that an arrival at t occurs. Then clearly

  G*_+(A) = β ∫_0^∞ p(t) dt.

Now conditionally upon A_t, {S_u}_{0≤u<t} is distributed as a process where a claim arrives at time t and has size U_0, and the k-th preceding claim arrives at time t + σ_{−k} and has size U_{−k}. It follows that for A ⊆ (0, ∞),

  p(t) = P(S_t ∈ A, S_u < 0, 0 ≤ u < t | A_t).

Consider a process {Ŝ_u}_{u≥0} which makes an upwards jump of size U_{−k} at time u = −σ_{−k} (k = 1, 2, ...), moves down linearly at a unit rate in between jumps, and starts from Ŝ_0 = U_0. The sample path relation between {Ŝ_u} and {S_u} amounts to S_u = S_t − Ŝ_{t−u} (left limit) when 0 ≤ u < t, with S_t = Ŝ_t, and is illustrated on Fig. 6.5. Hence

  p(t) = P(Ŝ_t ∈ A, Ŝ_t < Ŝ_u, 0 ≤ u < t).
[Figure 6.5: the paths {S_u}_{0≤u≤t} and {Ŝ_u}; time instants where {Ŝ_u} is at a relative minimum are marked with bold lines, and the boxes on the time axis correspond to time intervals where the minimum belongs to A.]

In terms of {Ŝ_u}, Â_t = {Ŝ_t < Ŝ_u, 0 ≤ u < t} is the event that {Ŝ_u} has a relative minimum at t, so that

  G*_+(A) = β ∫_0^∞ P(Ŝ_t ∈ A, Â_t) dt = βE L(A),

where L(dy) is the random measure

  L(A) = ∫_0^∞ I(Ŝ_t ∈ A, Â_t) dt.

A sample path inspection just as in the proof of Lemma 6.2 therefore immediately shows that L(dy) is Lebesgue measure on (−∞, U_0]: since Ŝ_0 = U_0, the support of L has right endpoint U_0, and since by assumption Ŝ_t → −∞ a.s. as t → ∞, the left endpoint of the support is −∞. Thus

  G*_+(A) = βE L(A) = βE ∫_0^∞ I(U_0 > y) I(y ∈ A) dy = β ∫_A P(U_0 > y) dy = β ∫_A F̄(y) dy. □

Notes and references Theorem 6.5 is due to Schmidt & co-workers [48], [147], [263] (a special case of the result appears in Proposition VI.2.1). A further relevant reference related to Corollary 6.6 is Björk & Grandell [67].
Chapter III

The compound Poisson model

We consider throughout this chapter a risk reserve process {R_t}_{t≥0} in the terminology and notation of Chapter I, and assume that

• {N_t}_{t≥0} is a Poisson process with rate β;
• the claim sizes U_1, U_2, ... are i.i.d. with common distribution B, and independent of {N_t};
• the premium rate is p = 1.

Thus, {R_t} and the associated claims surplus process {S_t} are given by

  R_t = u + t − Σ_{i=1}^{N_t} U_i,   S_t = u − R_t = Σ_{i=1}^{N_t} U_i − t.

An important omission of the discussion in this chapter is the numerical evaluation of the ruin probability. Some possibilities are numerical Laplace transform inversion via Corollary 3.4 below, Panjer's recursion (Corollary XI.4.6), exact matrix-exponential solutions under the assumption that B is phase-type (see further VIII.3), and simulation methods (Chapter X). For finite horizon ruin probabilities, see Chapter IV.

It is worth mentioning that much of the analysis of this chapter can be carried over in a straightforward way to more general Lévy processes. A common view of the literature is to consider such processes as perturbed compound Poisson risk processes, being of the form R_t = R̂_t + B_t + J_t, where {R̂_t} is a compound
Poisson risk process, {B_t} a Brownian motion and {J_t} a pure jump process, say stable Lévy motion. We do not spell out in detail such generalizations; see e.g. Furrer [150], Schmidli [319], [324], Dufresne & Gerber [126], and Schlegel [316].

1 Introduction

For later reference, we shall start by giving the basic formulas for moments, cumulants, m.g.f.'s etc. of the claim surplus S_t. Write

  μ_B^{(n)} = EU^n,   μ_B = μ_B^{(1)} = EU,   ρ = βμ_B = 1/(1 + η),

where η is the safety loading.

Proposition 1.1 (a) ES_t = t(βμ_B − 1) = t(ρ − 1); (b) Var S_t = tβμ_B^{(2)}; (c) Ee^{sS_t} = e^{tκ(s)}, where κ(s) = β(B̂[s] − 1) − s; (d) the k-th cumulant of S_t is tβμ_B^{(k)} for k ≥ 2.

Proof It was noted in Chapter I that ρ − 1 is the expected claim surplus per unit time, and this immediately yields (a). A more formal proof goes as follows:

  ES_t = E Σ_{k=1}^{N_t} U_k − t = E E[Σ_{k=1}^{N_t} U_k | N_t] − t = E[N_t μ_B] − t = βtμ_B − t = t(ρ − 1).

The same method yields also the variance as

  Var S_t = Var Σ_{k=1}^{N_t} U_k = Var E[Σ_{k=1}^{N_t} U_k | N_t] + E Var[Σ_{k=1}^{N_t} U_k | N_t]
    = Var[N_t μ_B] + E[N_t Var U] = tβμ_B^2 + tβ Var U = tβμ_B^{(2)}.

For (c), we get

  Ee^{sS_t} = e^{−st} Σ_{k=0}^∞ Ee^{s(U_1 + ... + U_k)} P(N_t = k) = e^{−st} Σ_{k=0}^∞ B̂[s]^k e^{−βt} (βt)^k / k!
    = exp{−st − βt + βtB̂[s]} = e^{tκ(s)}.

Finally, for (d) just note that the k-th cumulant of S_t is tκ^{(k)}(0), where κ^{(k)}(0) is the k-th derivative of κ at 0, and that B̂^{(k)}[0] = μ_B^{(k)}. □
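The moment formulas of Proposition 1.1(a)-(b) are easy to check numerically by conditioning on N_t. The following sketch is not part of the original text; exponential claims and all parameter values are arbitrary choices for illustration, and the Poisson distribution is truncated at a large k.

```python
import math

def moments_by_conditioning(beta, t, mu, m2, kmax=100):
    """E S_t and Var S_t for S_t = sum_{i<=N_t} U_i - t with N_t ~ Poisson(beta*t),
    computed by conditioning on N_t (truncated at kmax); mu = EU, m2 = EU^2."""
    lam, var_u = beta * t, m2 - mu ** 2
    p = math.exp(-lam)          # P(N_t = 0)
    es = es2 = 0.0
    for k in range(kmax):
        if k > 0:
            p *= lam / k        # P(N_t = k) from P(N_t = k-1)
        es += p * k * mu                        # E[sum | N_t = k] = k*mu
        es2 += p * (k * var_u + (k * mu) ** 2)  # E[(sum)^2 | N_t = k]
    return es - t, es2 - es ** 2

# Exp(1) claims, beta = 0.5, t = 10: Proposition 1.1 gives
# E S_t = t(beta*mu - 1) = -5 and Var S_t = t*beta*m2 = 10.
mean, var = moments_by_conditioning(0.5, 10.0, 1.0, 2.0)
```

The recursive update of P(N_t = k) avoids overflowing factorials; the truncation error is negligible for kmax well beyond βt.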
The linear way the index t enters in the formulas in Proposition 1.1 is the same as if {S_t} was a random walk indexed by t = 0, 1, 2, .... The connections to random walks are in fact fundamental, and there are at least two ways to exploit this.

Recalling that σ_k is the time of the k-th claim, we have S_{σ_k} − S_{σ_{k−1}} = U_k − T_k, where T_k is the time between the (k−1)-th and the k-th claim. The X_k = U_k − T_k are i.i.d., so that {S_{σ_k}} is a random walk with mean EU − ET = μ_B − 1/β, imbedded in the claim surplus process {S_t}. Obviously, ψ(u) = P(max_k S_{σ_k} > u), which is often used in the literature for obtaining information about {S_t} and the ruin probabilities. We return to this approach in Chapter V.

The point of view in the present chapter is, however, rather to view {S_t} directly as a random walk in continuous time, meaning that the increments are stationary and independent; in particular, {S_nh}_{n=0,1,2,...} is a discrete-time random walk for any h > 0. Here is one immediate application:

Proposition 1.2 (drift and oscillation) (a) No matter the value of η, S_t/t → ρ − 1 a.s. as t → ∞. (b) If η < 0, then S_t → ∞. (c) If η > 0, then S_t → −∞. (d) If η = 0, then lim inf_{t→∞} S_t = −∞, lim sup_{t→∞} S_t = ∞.

For the proof, we need the following lemma:

Lemma 1.3 If nh ≤ t < (n + 1)h, then S_nh − h ≤ S_t ≤ S_{(n+1)h} + h.

Proof We first note that for u, v ≥ 0, S_{u+v} ≥ S_u − v. Indeed, S_{u+v} − S_u attains its minimal value when there are no arrivals in (u, u + v], and the value is then precisely −v. Thus, if t = nh + v with 0 ≤ v < h, then S_t ≥ S_nh − v ≥ S_nh − h. The right-hand inequality is proved similarly. □
Proof of Proposition 1.2 For any fixed h, {S_nh}_{n=0,1,2,...} is a discrete-time random walk, and hence by the strong law of large numbers, S_nh/n → ES_h = h(ρ − 1) a.s. Thus using Lemma 1.3, we get

  lim inf_{t→∞} S_t/t = lim inf_{n→∞} inf_{nh≤t<(n+1)h} S_t/t ≥ lim inf_{n→∞} (S_nh − h)/((n + 1)h) = ρ − 1.

A similar argument for lim sup proves (a), and (b), (c) are immediate consequences of (a). Part (d) follows by a (slightly more intricate) general random walk result ([APQ] p. 169) stating that lim inf_n S_nh = −∞, lim sup_n S_nh = ∞ (the lemma is not needed for (d)). □

Proposition 1.4 The ruin probability ψ(u) is 1 for all u when η ≤ 0, and < 1 for all u when η > 0.

Proof The case η < 0 is immediate, since then M = ∞ by Proposition 1.2(b); the case η = 0 follows similarly from Proposition 1.2(d). If η > 0, it suffices to prove ψ(0) = P(M > 0) < 1. However, if P(M > 0) = 1, then {S_t} upcrosses level 0 a.s. at least once. Considering the next downcrossing (which occurs w.p. 1 since S_t → −∞) and repeating the argument, it is seen that upcrossing occurs at least twice, hence by induction i.o. This contradicts S_t → −∞. □

There is also a central limit version of Proposition 1.2:

Corollary 1.5 The limiting distribution of (S_t − t(ρ − 1))/√t as t → ∞ is normal with mean zero and variance βμ_B^{(2)}.
Proof Since {S_t}_{t≥0} is a Lévy process (a random walk in continuous time), {S_nh}_{n=0,1,2,...} is a discrete-time random walk for any h > 0, and hence it follows from standard central limit theory and the expression Var(S_t) = tβμ_B^{(2)} (Proposition 1.1(b)) that the assertion holds as t → ∞ through values of the form t = 0, h, 2h, .... The general case now follows either by another easy application of Lemma 1.3, or by a general result on discrete skeletons ([APQ] p. 307). □

Remark 1.6 Often it is of interest to consider size fluctuations, where the size of the portfolio at time t is M(t). Assuming that each risk generates claims at Poisson intensity β and pays premium 1 per unit time, this case can be reduced to the compound Poisson model by an easy operational time transformation t → T^{−1}(t), where T(s) = β ∫_0^s M(t) dt.

Notes and references All material of the present section is standard.
2 The Pollaczeck-Khinchine formula

The time to ruin τ(u) is defined as in Chapter I as inf{t > 0 : S_t > u}, and we shall here exploit the decomposition of the maximum M as a sum of ladder heights. We assume throughout η > 0 or, equivalently, ρ < 1.

It is crucial to note that for the compound Poisson model, the ladder heights are i.i.d. This follows simply by noting that the process repeats itself after reaching a relative maximum. Thus, we may view the ladder heights as a terminating renewal process, and M becomes then the lifetime. The decomposition of M as a sum of ladder heights now yields:

Theorem 2.1 The distribution of M is

  (1 − ρ) Σ_{n=0}^∞ ρ^n B_0^{*n},   (2.1)

where B_0 is the distribution with density b_0(x) = B̄(x)/μ_B; the ladder height distribution G_+ is given by the defective density g_+(x) = βB̄(x) = ρ b_0(x) on (0, ∞). Here B̄(x) = 1 − B(x) denotes the tail of B.

Proof The expression for g_+ was proved in Theorem II.6.1; in particular, ||G_+|| = ρ and the normalized ladder height distribution G_+/||G_+|| is B_0. The probability that M is attained in precisely n ladder steps and does not exceed x is G_+^{*n}(x)(1 − ||G_+||) (the parenthesis gives the probability that there are no further ladder steps after the n-th). Summing over n, the formula for the distribution of M follows. □

Combined with ψ(u) = P(M > u), Theorem 2.1 provides a representation formula for ψ(u), which we henceforth refer to as the Pollaczeck-Khinchine formula.
Equivalently, we can rewrite the Pollaczeck-Khinchine formula as

  ψ(u) = P(M > u) = (1 − ρ) Σ_{n=1}^∞ ρ^n B̄_0^{*n}(u),   (2.2)

representing the distribution of M as a geometric compound. Note that the distribution B_0 with density b_0 is familiar from renewal theory as the limiting stationary distribution of the overshoot (forwards recurrence time), cf. [APQ] and A.1. Note also that (2.2) is not entirely satisfying because of the infinite sum of convolution powers, but we shall nevertheless be able to extract substantial information from the formula.

The following result generalizes the fact that the conditional distribution of the deficit S_{τ(0)} just after ruin, given that ruin occurs (i.e. that τ(0) < ∞), is B_0: taking y = 0 shows that the conditional distribution of (minus) the surplus S_{τ(0)−} just before ruin is again B_0, and we further get information about the joint conditional distribution of the surplus and the deficit.
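The geometric compound can be evaluated numerically by truncating the sum. The following sketch (not part of the original text; all parameters are arbitrary) takes exponential claims with rate δ, for which B_0 is again Exp(δ), so each convolution power is an Erlang distribution; the result should reproduce the closed form ψ(u) = ρe^{−δ(1−ρ)u} derived for exponential claims in Section 3b.

```python
import math

def erlang_tail(n, rate, x):
    """P(Erlang(n, rate) > x) = e^{-rate*x} * sum_{j=0}^{n-1} (rate*x)^j / j!"""
    s = term = 1.0
    for j in range(1, n):
        term *= rate * x / j
        s += term
    return math.exp(-rate * x) * s

def psi_pk(u, rho, delta, nmax=200):
    """Truncated Pollaczeck-Khinchine sum: for Exp(delta) claims, B0 = Exp(delta),
    so the n-fold convolution B0^{*n} is Erlang(n, delta)."""
    return sum((1 - rho) * rho ** n * erlang_tail(n, delta, u)
               for n in range(1, nmax))

rho, delta, u = 0.7, 1.0, 5.0
print(psi_pk(u, rho, delta), rho * math.exp(-delta * (1 - rho) * u))  # nearly equal
```

The truncation error is bounded by the geometric tail ρ^nmax, so a couple of hundred terms suffice even for ρ close to 1.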
Theorem 2.2 The joint distribution of (S_{τ(0)−}, S_{τ(0)}) is given by the following four equivalent statements:

(a) P(S_{τ(0)−} > x, S_{τ(0)} > y; τ(0) < ∞) = β ∫_{x+y}^∞ B̄(z) dz;

(b) the joint distribution of (S_{τ(0)−}, S_{τ(0)}) given τ(0) < ∞ is the same as the distribution of (VW, (1 − V)W), where V, W are independent, V is uniform on (0, 1), and W has distribution F_W given by dF_W/dB(x) = x/μ_B;

(c) the marginal distribution of S_{τ(0)−} is B_0, and the conditional distribution of S_{τ(0)} given S_{τ(0)−} = y is the overshoot distribution B_0^{(y)} given by B̄_0^{(y)}(z) = B̄_0(y + z)/B̄_0(y);

(d) the marginal distribution of S_{τ(0)} is B_0, and the conditional distribution of S_{τ(0)−} given S_{τ(0)} = z is B_0^{(z)}.

The proof is given in IV. Note that this distribution is the same as the limiting joint distribution of the age and excess life in a renewal process governed by B, cf. Theorem A1; in particular, Theorem 2.2 gives an alternative derivation of the distribution of the deficit S_{τ(0)}.

Notes and references The Pollaczeck-Khinchine formula is standard in queueing theory; see for example [APQ], Feller [143] or Wolff [384]. The proof of Theorem 2.1 is traditionally carried out for the imbedded discrete time random walk, where it requires slightly more calculation. In the risk theory literature, the Pollaczeck-Khinchine formula is often referred to as Beekman's convolution formula, cf. Beekman [61], [62]. Theorem 2.2(a) is from Dufresne & Gerber [125]. For the study of the joint distribution of the surplus S_{τ(u)−} just before ruin and the deficit S_{τ(u)} just after ruin, see Schmidli [323] and references there. As shown in Theorem II.6.5, the form of G_+ is surprisingly insensitive to the form of {S_t} and holds in a certain general marked point process setup, cf. Asmussen & Schmidt [49]. However, in this setting there is no decomposition of M as a sum of i.i.d. ladder heights, so the results do not appear too useful for estimating ψ(u) for u > 0.

3 Special cases of the Pollaczeck-Khinchine formula

The model and notation is the same as in the preceding sections. We assume η > 0 throughout.

3a The ruin probability when the initial reserve is zero

The case u = 0 is remarkable by giving a formula for ψ(u) which depends only on the claim size distribution through its mean:

Corollary 3.1 ψ(0) = ρ = βμ_B = 1/(1 + η).

Proof Just note that (recall that τ_+ = τ(0))

  ψ(0) = P(τ_+ < ∞) = ||G_+|| = β ∫_0^∞ B̄(x) dx = βμ_B. □

Notes and references The fact that ψ(0) only depends on B through μ_B is often referred to as an insensitivity property. As shown in II.6, the formula for ψ(0) holds in a more general setting; a further relevant reference is Björk & Grandell [67].

3b Exponential claims

Corollary 3.2 If B is exponential with rate δ, then ψ(u) = ρe^{−(δ−β)u}.

Proof The distribution B_0 of the ascending ladder height (given that it is defined) is the distribution of the overshoot of {S_t} at time τ_+ over level 0. But claims are exponential, hence without memory, and hence this overshoot has the same distribution as the claims themselves. I.e., B_0 is exponential with rate δ, and the result can now be proved from the Pollaczeck-Khinchine formula by elementary calculations: B_0^{*n} is the Erlang distribution with n phases, and thus the density of M at x > 0 is

  (1 − ρ) Σ_{n=1}^∞ ρ^n (δ^n x^{n−1} / (n − 1)!) e^{−δx} = (1 − ρ) ρδ e^{−δx} Σ_{n=0}^∞ (ρδx)^n / n! = ρ(δ − β) e^{−(δ−β)x}.

Integrating from u to ∞, the result follows. Alternatively, use Laplace transforms.

The result can, however, also be seen probabilistically without summing infinite series. Let r(x) be the failure rate of M at x > 0. For a failure at x, the current ladder step must terminate, which occurs at rate δ, and there must be no further ones, which occurs w.p. 1 − ρ. Thus r(x) = δ(1 − ρ) = δ − β, so that the conditional distribution of M given M > 0 is exponential with rate δ − β, and

  ψ(u) = P(M > u) = P(M > 0) P(M > u | M > 0) = ρe^{−(δ−β)u}. □
3b Exponential claims Corollary 3.6. then.3 so that the conditional distribution of M given M > 0 is exponential with rate S '3 and 0(u) = P(M > u) = P(M > 0)P(M > uIM > 0) = pe(6Mu. Bon is the Erlang distribution with n phases and thus the density of M at x > 0 is (1 . SPECIAL CASES OF POLLACZECKKHINCHINE 3a The ruin probability when the initial reserve is zero 63 The case u = 0 is remarkable by giving a formula for V)(u) which depends only on the claim size distribution through its mean: Corollary 3.1 0(0) = p = Nl2B = 1 1 +71 Proof Just note that (recall that T+ = r(0)) 00 z/^(0) = I' (r+ < oo) = IIG+II = )3 f(x)dx =l3LB• Notes and references The fact that tp(u) only depends on B through µB is often referred to as an insensitivity property. however . As shown in 11.p) E pn S n x n. the formula for P(O) holds in a more general setting.0(u) = pe(aA)" Proof The distribution Bo of the ascending ladder height ( given that it is defined ) is the distribution of the overshoot of {St} at time r+ over level 0.. hence without memory. Alternatively.p. 1 .1)1 00 ( 1 . and hence this overshoot has the same distribution as the claims themselves . also be seen probabilistically without summing infinite series . 0 . Thus r(x) = S(1 .e. the result follows .p) = S . The result can.p.2 If B is exponential with rate S. Integrating from u to oo.3. the current ladder step must terminate which occurs at rate S and there must be no further ones which occurs w. I. a further relevant reference is Bjork & Grandell [67]. use Laplace transforms.1 e ax = n1 (n .O)e(b0)x. For a failure at x. Thus . B0 is exponential with rate S and the result can now be proved from the Pollaczeck Khinchine formula by elementary calculations . But claims are exponential . Let r ( x) be the failure rate of M at x > 0.p)pSe a ( l v)x = p( S .
y)G+(dy ) = f U V(u .4) is similar (equivalently. THE COMPOUND POISSON MODEL In VIII.+ >u.1 p pBo(u). then 24 1 V. (b) use stopped martingales .3. and conditioning upon S. Then the first term on the r. T+ <00) (3.p + G+ * Z(u) = 1 . u + oo. (Example VIII. (3.y)G+(dy) For the last identity in (3.3). (u) 35eu + 35e6u. is ?7+ ( u).2).3) Equivalently.3)). we use the PollaczeckKhinchine formula in Chapter IX to show that b(u) .S.i(u) satisfies the defective renewal equation Z(u) = 1 .h. 2 is one of the main classical early results in the area.s. (3. and weights 1/2 for each. cf. the survival probability Z(u) = 1 .S.+ <u. 3c Some classical analytical results Recall the notation G+(u) = f^°° G+(dx).T+ <oo)+P(M> u. Corollary 3. A variety of proofs are available .+ <U.y)/3B (y) dy. (3.p + f u Z(u . The case of (3.3. 0 Proof Write o (u) as P(M>u) = P(S. We mention in particular the following: (a) check that ip (u) = pe (60)u is solution of the renewal equation (3.+ = y yields P(M>u.3) below. (3.1. if 3 = 3 and B is a mixture of two exponential distributions with rates 3 and 7.2) Notes and references Corollary 3. just insert the explicit form of G+.4) zu P(M > u . E.g. II. u .1) For a heavytailed B.64 CHAPTER III.4) can be derived by elementary algebra from (3. we show that expression for /'(u) which are explicit (up to matrix exponentials) come out in a similar way also when B is phasetype.3 The ruin probability Vi(u) satisfies the defective renewal equation ik (u) = 6+ (u) + G+ * 0(u) = Q f B(y) dy + u 0 f u 0(u .T+ <oo).y)f3 (y) dy.
5) Proof We first find the m.p)s .pBo[s] no (1 .4 The Laplace transform of the ruin probability is 65 fo Hence Ee8M 00 e8uiP(u)du . Bo of B0 as m e8u B(du) = B[s] . Griibel [179] and Thorin & Wikstad [370] (see also the Bibliographical Notes in [307] p.. 191).PPB2) EM2 = PPB) + QZPBl 2(1 .1 Bo[s] = f oc. for example.5 The first two moments of M are 2 EM . Corollary 3.P)pB' (3. numerical inversion of the Laplace transform is one of the classical approaches for computing ruin probabilities.(3B[s] 1 .Ps s(.3 .5 can be found in virtually any queueing book. g.s . it is not surprising that such arguments are more cumbersome since the ladder height representation is not used. which yields the survival probability as 00 f u }t Z(u) = f f3eRtdt 0 from which (3.3 is standard ./3B[s] which is the same as (3. e.(3 .p)s s /3 .g.p) E p"Bo[s]" = 1 . Also (3. In fact.f. 111112 or Feller [143].4) can be derived by elementary but tedious manipulations. see e. (3. Of course.6) 00 = (I .g.3 .7) s +. The approach there is to condition upon the first claim occuring at time t and having size x . 206207)./3B[s] .3. either of these sets of formulas are what many authors call the PollaczeckKhinchine formula. [APQ] pp.7).8) Proof This can be shown. In view of (3.5). 0 Notes and references Corollary 3.s . We omit the details (see. SPECIAL CASES OF POLLACZECKKHINCHINE Corollary 3.)3B[s]) (3.. Embrechts. .7) and Corollary 3. [APQ] pp. Griibel & Pitts [132]. eau B(u) du = f PB 3PB SPB 0 o (3.P)PB 2(1 .Ee8M) f ao e8' ( u)du = a8uP (M > u)du = 0 o 1 ( 1+ (1 . by analytical manipulations (L'Hospital's rule) from (3.5). Some relevant references are Abate & Whitt [2].p)2 3(1 .p = (1 .
Q) k=0 k! E e0( = /32(u) .9) shown for n .3I( 0<y<1)dy Z(y)/3I(0<u. For n < u < n + 1.u/p)]k ko k! Proof By replacing {St} by {Stu/p} if necessary. Z^ =eR(k.)3(1 .z/'(u) takes the form Z(u) L^J L. then p) 1: ep(k u/. eO('u) [)3(k . differentiation yields Z'(u) _ /3Z(u) which together with the boundary condition Z(0) = 1/3 yields Z(u) _ (1/3) eAu so that (3.u)]k k! k0 The renewal equation (3./32(u .y<1)dy 0<u<1 1 < u < oo uu ulhu 1a+/3 J0 uZ(y)dy U Z(y) dy 113+0 For 0 < u < 1./3Z(u .u) a)Qea" + (1 . of (3.1).u)]k k! (1 L3) 1: e_O(ku) NIN (k (k .1 < u < n and let Z(u ) denote the r.Q (k 1 k= n  [O(k .1)! k=1 u1 . Assume (3.9) follows for 0 < u < 1. THE COMPOUND POISSON MODEL 3d Deterministic claims Corollary 3.4) for Z( u) means f lhu Z(u) = 1.1).u) [p(k . differentiation yields Z(u) _ /3Z(u) .u)]k d 1 u) _ a) n ( du ( k! (1  .u)]k1 ku+1) [/3( k .u + 1 )]k = QZ(u) . we may assume p = 1 so that the stated formula in terms of the survival probability Z(u) = 1 .3+ 18+ J0 Z(uy).s.66 CHAPTER III. .6 If B is degenerate at p.9).h.Q) 3e.u) [N(k .
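Corollary 3.6 can be cross-checked numerically against the Pollaczeck-Khinchine formula (an illustrative sketch, not from the text; β and u below are arbitrary, and p = 1 is assumed): for unit deterministic claims, B_0 is uniform on (0, 1), so B_0^{*n} is the Irwin-Hall distribution, and both routes should give the same survival probability.

```python
import math

def Z_erlang(u, beta):
    """Survival probability from Corollary 3.6 with unit claims (p = 1):
    Z(u) = (1-beta) * sum_{k<=u} e^{beta(u-k)} [beta(k-u)]^k / k!"""
    return (1 - beta) * sum(
        math.exp(beta * (u - k)) * (beta * (k - u)) ** k / math.factorial(k)
        for k in range(int(u) + 1))

def Z_pk(u, beta, nmax=80):
    """Cross-check: 1 - psi(u) via the Pollaczeck-Khinchine sum; for unit
    deterministic claims B0 is uniform(0,1), so B0^{*n} is Irwin-Hall(n)."""
    def ih_cdf(x, n):                 # P(U_1 + ... + U_n <= x), U_i ~ uniform(0,1)
        if x >= n:
            return 1.0
        return sum((-1) ** j * math.comb(n, j) * (x - j) ** n
                   for j in range(int(x) + 1)) / math.factorial(n)
    psi = sum((1 - beta) * beta ** n * (1.0 - ih_cdf(u, n)) for n in range(1, nmax))
    return 1.0 - psi

print(Z_erlang(2.5, 0.5), Z_pk(2.5, 0.5))  # the two routes should agree
```

The alternating Irwin-Hall sum is numerically stable here because u and n stay small; for large u the closed form of Corollary 3.6 is the preferable route.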
Since Ẑ(n) = Z(n) by the induction hypothesis, it follows that Z(u) = Ẑ(u) for n ≤ u < n + 1. □

Notes and references Corollary 3.6 is identical to the formula for the M/D/1 waiting time distribution derived by Erlang [139]. See also Iversen & Staalhagen [208] for a discussion of computational aspects and further references.

4 Change of measure via exponential families

If X is a random variable with c.d.f. F and c.g.f.

  κ(α) = log Ee^{αX} = log ∫ e^{αx} F(dx) = log F[α],

the standard definition of the exponential family {F_θ} generated by F is

  F_θ(dx) = e^{θx − κ(θ)} F(dx).  (4.1)

(Here θ is any number such that κ(θ) is well-defined.) The adaptation of this construction to stochastic processes with stationary independent increments as {S_t} has been carried out in Chapter II, but will now be repeated for the sake of self-containedness. We could first tentatively consider the claim surplus X = S_t for a single t, say t = 1: recall from Proposition 1.1 that κ(α) = β(B[α] − 1) − α, and define κ_θ by

  κ_θ(α) = κ(α + θ) − κ(θ).  (4.2)

The question then naturally arises whether κ_θ is the c.g.f. corresponding to a compound Poisson risk process, in the sense that for a suitable arrival intensity β_θ and a suitable claim size distribution B_θ we have κ_θ(α) = β_θ(B_θ[α] − 1) − α. The answer is yes: inserting in (4.2) shows that the solution is

  β_θ = βB[θ],  B_θ(dx) = (e^{θx}/B[θ]) B(dx),  (4.3)

or equivalently

  B_θ[α] = B[α + θ]/B[θ].  (4.4)

Repeating for t ≠ 1, we just have to multiply (4.3) by t, and thus (4.4) works as well. Formalizing this for the purpose of studying the whole process {S_t}, we set up
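The identities (4.2)-(4.4) can be verified numerically in the exponential case, where the tilted claim distribution is again exponential with rate δ − θ. A hedged Python sketch (parameter values are illustrative):

```python
beta, delta, theta = 0.6, 2.0, 0.5           # requires theta < delta

def B_hat(a):                                # m.g.f. of an exponential(delta) claim
    return delta / (delta - a)

def kappa(a):                                # kappa(alpha) = beta*(B[alpha]-1) - alpha
    return beta * (B_hat(a) - 1) - a

# Tilted model (4.3)-(4.4): beta_theta = beta*B[theta], and B_theta is
# exponential with rate delta - theta.
beta_t  = beta * B_hat(theta)
delta_t = delta - theta

def kappa_theta(a):                          # c.g.f. of the tilted risk process
    return beta_t * (delta_t / (delta_t - a) - 1) - a
```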
Definition 4.1 Let P be the probability measure on D[0, ∞) governing a given compound Poisson risk process with arrival intensity β and claim size distribution B, and define β_θ, B_θ by (4.3). Then P_θ denotes the probability measure governing the compound Poisson risk process with arrival intensity β_θ and claim size distribution B_θ; the corresponding expectation operator is E_θ.

Let F_T = σ(S_t : t ≤ T) denote the σ-algebra spanned by the S_t, t ≤ T, and P_θ^(T) the restriction of P_θ to F_T.

Proposition 4.2 For any fixed T, the P_θ^(T) are mutually equivalent on F_T, and

  dP^(T)/dP_θ^(T) = exp{−θS_T + Tκ(θ)}.  (4.6)

That is, if Z is F_T-measurable, then

  EZ = E_θ[Z exp{−θS_T + Tκ(θ)}].  (4.7)

Proof By standard measure theory, it suffices to consider the case where Z is measurable w.r.t. F_T^(n) = σ(S_{kT/n} : k = 0, 1, ..., n) for a given n. But let X_k = S_{kT/n} − S_{(k−1)T/n}. Then the X_k are i.i.d. with common c.g.f. Tκ(α)/n, and the relevant density (with T/n taking the role of t) is the analogue of the expression

  exp{θ(x₁ + ⋯ + x_n) − nκ(θ)}  (4.5)

for the density of n i.i.d. replications from F_θ (replace x by x_i in (4.1) and multiply from 1 to n). Thus

  E_θZ = E[Z exp{θS_T − Tκ(θ)}]  (4.8)

follows by discrete exponential family theory, and (4.7) now follows by taking Z exp{−θS_T + Tκ(θ)} in the role of Z. □

Theorem 4.3 Let τ be any stopping time and let G ∈ F_τ, G ⊆ {τ < ∞}. Then

  P(G) = E_θ[exp{−θS_τ + τκ(θ)}; G].  (4.9)

Proof We first note that for any fixed t,

  E_θ exp{−θS_t + tκ(θ)} = 1,  (4.10)

as follows from (4.8) with Z = 1 (or from Ee^{θS_t} = e^{tκ(θ)}).
Now assume first that G ⊆ {τ ≤ T} for some deterministic T < ∞. Given F_τ, the post-τ increments are independent of F_τ with the same P_θ-distribution as {S_t}, so that by (4.10),

  E_θ[exp{−θ(S_T − S_τ) + (T − τ)κ(θ)} | F_τ] = 1 on {τ ≤ T}.

Hence, by Proposition 4.2,

  P(G) = E_θ[exp{−θS_T + Tκ(θ)} I(G)]
       = E_θ[exp{−θS_τ + τκ(θ)} I(G) E_θ[exp{−θ(S_T − S_τ) + (T − τ)κ(θ)} | F_τ]]
       = E_θ[exp{−θS_τ + τκ(θ)} I(G)].

Now consider a general G. Then G_T = G ∩ {τ ≤ T} satisfies G_T ∈ F_T, G_T ⊆ {τ ≤ T}, so that, according to what has just been proved, (4.9) holds with G replaced by G_T. Letting T ↑ ∞ and using monotone convergence then shows that (4.9) holds for G as well. □

5 Lundberg conjugation

Being a c.g.f., κ(α) is a convex function of α. The behaviour at zero is given by the first order Taylor expansion

  κ(α) ≈ κ(0) + κ'(0)α = 0 + αES₁ = α(ρ − 1).

Thus, subject to the basic assumption η > 0 of a positive safety loading, the typical shape of κ is as in Fig. 5.1(a).

[Figure 5.1: (a) the c.g.f. κ(α); (b) the conjugate c.g.f. κ_L(α).]

It is seen that typically¹ a γ > 0 satisfying

  0 = κ(γ) = β(B[γ] − 1) − γ  (5.1)

exists.

¹ Some discussion further supporting this statement is given in the next section.
Equation (5.1) is known as the Lundberg equation and plays a fundamental role in risk theory. An established terminology is to call γ the adjustment coefficient, but there are various alternatives around, e.g. the Lundberg exponent. An equivalent version, illustrated in Fig. 5.2, is

  B[γ] = 1 + γ/β.  (5.2)

[Figure 5.2: the Lundberg equation as the crossing of B[s] with the line 1 + s/β.]

Taking θ = γ in the exponential change of measure of Section 4, we write P_L instead of P_γ, β_L instead of β_γ and so on in the following. Note that

  κ_L(α) = β_L(B_L[α] − 1) − α = κ(α + γ),  (5.3)

cf. (4.2) and κ(γ) = 0. It is a crucial fact that when governed by P_L, the claim surplus process has positive drift

  E_L S₁ = κ'_L(0) = κ'(γ) > 0,  (5.4)

cf. Fig. 5.1(b).

Example 5.1 Consider the case of exponential claims, B[s] = δ/(δ − s). It is then readily seen that the nonzero solution of (5.1) (or (5.2)) is γ = δ − β; indeed, B[γ] = δ/(δ − γ) = δ/β = 1 + γ/β. Further, (4.3) yields β_L = δ, and B_L is again exponential, with rate δ_L = β. Thus, Lundberg conjugation corresponds to interchanging the rates of the interarrival times and the claim sizes. □
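In practice the Lundberg equation must usually be solved numerically; since κ is convex with κ(0) = 0 and κ'(0) < 0, bisection on an interval strictly inside the domain of convergence suffices. A Python sketch (illustrative, not from the original text; it uses the mixture example with β = 3 that reappears in Section 6, for which γ = 1):

```python
def adjustment_coefficient(kappa, hi, tol=1e-12):
    """Bisection for the positive root of kappa(gamma) = 0 on (0, hi).

    Assumes kappa is convex with kappa(0) = 0, kappa'(0) < 0 and kappa(hi) > 0.
    """
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kappa(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# beta = 3 and B a 50/50 mixture of exponentials with rates 3 and 7;
# the adjustment coefficient of this example is gamma = 1.
kap = lambda g: 3 * (0.5 * 3/(3 - g) + 0.5 * 7/(7 - g) - 1) - g
gamma = adjustment_coefficient(kap, hi=2.9)
```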
Observe that (5.1) is precisely what is needed for one of the terms in the exponent of (4.9) to vanish, so that Theorem 4.3 takes a particularly simple form. Taking θ = γ, T = τ(u), G = {τ(u) < ∞} in Theorem 4.3,

  ψ(u) = P(τ(u) < ∞) = E_L[exp{−γS_{τ(u)}}; τ(u) < ∞].

Letting ξ(u) = S_{τ(u)} − u be the overshoot and noting that P_L(τ(u) < ∞) = 1 by (5.4), we can rewrite this as

  ψ(u) = e^{−γu} E_L e^{−γξ(u)}.  (5.5)

Theorem 5.2 (Lundberg's inequality) For all u ≥ 0, ψ(u) ≤ e^{−γu}.

Proof Just note that ξ(u) ≥ 0 in (5.5). □

Theorem 5.3 (the Cramér-Lundberg approximation) ψ(u) ~ Ce^{−γu} as u → ∞, where

  C = (1 − ρ)/(βB'[γ] − 1).  (5.6)

Proof By renewal theory, ξ(u) has a limit ξ(∞) (in the sense of weak convergence w.r.t. P_L) with density (1 − G₊^(L)(x))/µ₊^(L), where G₊^(L) is the P_L-ascending ladder height distribution and µ₊^(L) its mean (see the Appendix). Since e^{−γx} is continuous and bounded, we therefore have E_L e^{−γξ(u)} → C where

  C = E_L e^{−γξ(∞)} = (1/µ₊^(L)) ∫₀^∞ e^{−γx} (1 − G₊^(L)(x)) dx,  (5.7)

and all that is needed to check is that (5.7) is the same as (5.6). To this end, take first θ = γ, τ = τ₊, G = {S_{τ₊} ∈ A} in Theorem 4.3. Then P(S_{τ₊} ∈ A) = E_L[exp{−γS_{τ₊}}; S_{τ₊} ∈ A], which shows that

  G₊^(L)(dx) = e^{γx} G₊(dx) = e^{γx} βB̄(x) dx.  (5.8)

In particular, ||G₊^(L)|| = 1, as it must be since the drift is positive under P_L. Using (5.8) and integration by parts,

  ∫₀^∞ e^{−γx} (1 − G₊^(L)(x)) dx = (1/γ) (1 − ∫₀^∞ e^{−γx} G₊^(L)(dx)) = (1/γ) (1 − ∫₀^∞ βB̄(x) dx) = (1 − ρ)/γ.  (5.9)
It remains to compute

  µ₊^(L) = ∫₀^∞ x G₊^(L)(dx) = ∫₀^∞ x e^{γx} βB̄(x) dx.  (5.10)

Now for α ≥ 0,

  ∫₀^∞ e^{αx} B̄(x) dx = (B[α] − 1)/α,  (5.11)

so that

  µ₊^(L) = β d/dγ [(B[γ] − 1)/γ] = [γβB'[γ] − β(B[γ] − 1)]/γ² = (βB'[γ] − 1)/γ,  (5.12)

using the Lundberg equation β(B[γ] − 1) = γ in the last step. Combining with (5.9), C = (1 − ρ)/(γµ₊^(L)) = (1 − ρ)/(βB'[γ] − 1), which is (5.6). □

In principle, this solves the problem of evaluating C, but some tedious (though elementary) calculations may remain to bring the expressions on a final form.

Example 5.4 Consider first the exponential case B̄(x) = e^{−δx}. Then ψ(u) = ρe^{−(δ−β)u} where ρ = β/δ, so that indeed γ = δ − β (this was found already in Example 5.1 above) and C = ρ. A direct proof of C = ρ is of course easy:

  B'[γ] = d/dγ [δ/(δ − γ)] = δ/(δ − γ)² = δ/β²,
  C = (1 − ρ)/(βB'[γ] − 1) = (1 − ρ)/(δ/β − 1) = (1 − ρ)/((1 − ρ)/ρ) = ρ.

The accuracy of Lundberg's inequality in the exponential case thus depends on how close ρ is to one, or equivalently on how close the safety loading η is to zero. □
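As a numerical counterpart of Example 5.4, the following sketch (illustrative values, not from the original text) evaluates C = (1 − ρ)/(βB'[γ] − 1) for exponential claims and confirms that it collapses to ρ:

```python
beta, delta = 0.7, 1.0
rho   = beta / delta
gamma = delta - beta                         # adjustment coefficient, Example 5.1

# Cramer-Lundberg constant (5.6); for exponential claims B'[alpha] equals
# delta/(delta - alpha)^2, so at alpha = gamma it is delta/beta^2.
B_hat_prime = delta / (delta - gamma)**2
C = (1 - rho) / (beta * B_hat_prime - 1)
```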
Remark 5.5 Noting that

  ρ_L − 1 = β_L µ_{B_L} − 1 = κ'_L(0) = κ'(γ) = βB'[γ] − 1,

we can rewrite the Cramér-Lundberg constant C in the nice symmetrical form

  C = (1 − ρ)/(ρ_L − 1) = (−κ'(0))/κ'(γ).  (5.13)

In Chapter IV, we shall need the following result, which follows by a variant of the calculations in the proof of Theorem 5.3:

Lemma 5.6 For α ≠ γ,

  E_L e^{−αξ(∞)} = (γ/(ακ'(γ))) (1 − β(B[γ − α] − 1)/(γ − α)).

Proof Replacing γ by α in (5.7) and using (5.8), we obtain

  E_L e^{−αξ(∞)} = (1/µ₊^(L)) (1/α) (1 − β ∫₀^∞ e^{(γ−α)x} B̄(x) dx),

using integration by parts as in (3.6) in the last step. Inserting (5.12) and evaluating the integral by (5.11), the result follows. □
Notes and references The results of this section are classical, with Lundberg's inequality being given first in Lundberg [251] and the Cramér-Lundberg approximation in Cramér [91]. Extensions and generalizations are main topics in the area of ruin probabilities, and numerous such results can be found later in this book; see in particular Sections IV.4, V.3, VI.3, VI.6.
The mathematical approach we have taken is less standard in risk theory (some of the classical proofs can be found in the next subsection). The techniques are basically standard ones from sequential analysis; see for example Wald [376] and Siegmund [346].
5a Alternative proofs
For the sake of completeness, we shall here give some classical proofs, first one of Lundberg's inequality which is slightly longer but maybe also slightly more elementary:
74 CHAPTER III. THE COMPOUND POISSON MODEL
Alternative proof of Lundberg's inequality Let X be the value of {S_t} just after the first claim, F(x) = P(X ≤ x). Then, since X is the independent difference U − T between a claim U and an interarrival time T,

  F[γ] = Ee^{γ(U−T)} = Ee^{γU} · Ee^{−γT} = B[γ] · β/(β + γ) = 1,

where the last equality follows from κ(γ) = 0. Let ψ^(n)(u) denote the probability of ruin after at most n claims. Conditioning upon the value x of X and considering the cases x > u and x ≤ u separately yields
  ψ^(n+1)(u) = F̄(u) + ∫₀^u ψ^(n)(u − x) F(dx).
We claim that this implies ψ^(n)(u) ≤ e^{−γu}, which completes the proof since ψ(u) = lim_{n→∞} ψ^(n)(u). Indeed, this is obvious for n = 0 since ψ^(0)(u) = 0. Assuming it proved for n, we get
  ψ^(n+1)(u) ≤ F̄(u) + e^{−γu} ∫₀^u e^{γx} F(dx)
            ≤ ∫_u^∞ e^{−γ(u−x)} F(dx) + ∫₀^u e^{−γ(u−x)} F(dx)
            = e^{−γu} F[γ] = e^{−γu}

(the second inequality uses e^{−γ(u−x)} ≥ 1 for x ≥ u).
Of further proofs of Lundberg's inequality, we mention in particular the martingale approach, see II.1. Next consider the Cramér-Lundberg approximation. Here the most standard proof is via the renewal equation in Corollary 3.3 (however, as will be seen, the calculations needed to identify the constant C are precisely the same as above):

Alternative proof of the Cramér-Lundberg approximation Recall from Corollary
3.3 that

  ψ(u) = β ∫_u^∞ B̄(x) dx + β ∫₀^u ψ(u − x) B̄(x) dx.
Multiplying by e^{γu} and letting Z(u) = e^{γu}ψ(u), we can rewrite this as
  Z(u) = z(u) + ∫₀^u Z(u − x) F(dx), where
  z(u) = e^{γu} β ∫_u^∞ B̄(x) dx,  F(dx) = e^{γx} βB̄(x) dx;

indeed,

  e^{γu} β ∫₀^u ψ(u − x) B̄(x) dx = ∫₀^u e^{γ(u−x)} ψ(u − x) · e^{γx} βB̄(x) dx,
i.e. Z = z + F * Z. Note that by (5.11) and the Lundberg equation, γ is precisely the correct exponent which will ensure that F is a proper distribution (||F|| = 1). It is then a matter of routine to verify the conditions of the key renewal theorem (Proposition A1.1) to conclude that Z(u) has the limit C = ∫₀^∞ z(x) dx / µ_F, so that it only remains to check that C reduces to the expression given above. However, µ_F is immediately seen to be the same as µ₊^(L) calculated in (5.10)-(5.12), whereas
  ∫₀^∞ z(u) du = ∫₀^∞ e^{γu} β ∫_u^∞ B̄(x) dx du = β ∫₀^∞ B̄(x) ∫₀^x e^{γu} du dx
             = (β/γ) ∫₀^∞ B̄(x) (e^{γx} − 1) dx = (β/γ) [(B[γ] − 1)/γ − µ_B] = (1 − ρ)/γ,

using the Lundberg equation and the calculations in (5.11). Easy calculus now gives (5.6). □
6 Further topics related to the adjustment coefficient
6a On the existence of y
In order that the adjustment coefficient y exists, it is of course necessary that B is lighttailed in the sense of I.2a, i.e. that b[a] < oo for some a > 0. This excludes heavytailed distributions like the lognormal or Pareto, but may in many other cases not appear all that restrictive, and the following possibilities then occur: 1. B[a] < oo for all a < oo. 2. There exists a* < oo such that b[a] < oo for all a < a* and b[a] = 00 for all a > a*. 3. There exists a* < oo such that fl[a] < oo for all a < a* and b[a] = 00 for all a > a*. In particular , monotone convergence yields b[a] T oo as a T oo in case 1, and B[a] T oo as a f a* in case 2 (in exponential family theory , this is often referred to as the steep case). Thus the existence of y is automatic in cases 1 , 2; standard examples are distributions with finite support or tail satisfying B(x) = o(eax)
76 CHAPTER III. THE COMPOUND POISSON MODEL
for all a in case 1, and phasetype or Gamma distributions in case 2. Case 3 may be felt to be rather atypical, but some nonpathological examples exist, for example the inverse Gaussian distribution (see Example 9.7 below for details). In case 3, y exists provided B[a*] > 1+a*/,3 and not otherwise, that is, dependent on whether 0 is larger or smaller than the threshold value a*/(B[a*]  1). Notes and references Ruin probabilities in case 3 with y nonexistent are studied, e.g., by Borovkov [73] p. 132 and Embrechts & Veraverbeeke [136]. To the present authors mind, this is a somewhat special situation and therefore not treated in this book.
6b Bounds and approximations for 'y
Proposition 6.1 ry <
2(1  aps) 2µs
OMB PB)
Proof From U > 0 it follows that B[a] = Eea' > 1 + µsa + pB2)a2/2. Hence 1 = a(B[7]  1) > Q (YPB +72µs)/2) = 3µs + OYµa2) 2 (6.1) 7 'Y from which the results immediately follows. u
The upper bound in Proposition 6.1 is also an approximation for small safety loadings (heavy traffic, cf. Section 7c): Proposition 6.2 Let B be fixed but assume that 0 = ,3(77) varies with the safety loading such that 0 = 1 Then as 77 .0, µB(1 +rl) 2) Y = Y(77) 277 PB Further, the CramerLundberg constant satisfies C = C(r1)  1. Proof Since O(u) + 1 as r7 , 0, it follows from Lundberg's inequality that y * 0. Hence by Taylor expansion, the inequality in (6.1) is also an approximation so that OAY]  1) N Q (711s + 72µB2) /2) = p + 3,,,(2) B 'y 7 2 2(1  p) _ 271µB
QPB PB)
6. MORE ON THE ADJUSTMENT COEFFICIENT 77
That C 4 1 easily follows from y 4 0 and C = ELe7V°O) (in the limit, b(oo) is distributed as the overshoot corresponding to q = 0 ). For an alternative analytic proof, note that C  1P = rlµB 73B' [7]  1 B' [ry)  1/0 711µB µB +7µB2 )  µB(1 +77 ) 'l = 1. 277q
77
7PBIPB
 77
13 Obviously, the approximation (6.2) is easier to calculate than y itself. However, it needs to be used with caution say in Lundberg's inequality or the CramerLundberg approximation, in particular when u is large.
6c A refinement of Lundberg 's inequality
The following result gives a sharpening of Lundberg 's inequality (because obviously C+ < 1) as well as a supplementary lower bound:
Theorem 6 .3 C_eryu < ,)(u) < C+ eryu where
= B(x) = C_ x>o f °° e7( Yx)B(dy )' C+
B(x) xuo f 0 e'r( vx)B(dy)
Proof Let H(dt, dx ) be the PLdistribution of the time r(u) of ruin and the reserve u  S7(„)_ just before ruin . Given r(u) = t, u  ST (u) = x, a claim occurs at time t and has distribution BL(dy)/BL(x), y > x. Hence ELe7£(u) 0
J
°o
H(dt, dx)
fX
eY(Y x) 00 f°° B(dy) x
BL dy
BL(x)
o
f
f H(dt, dx)
L ^ H(dt, dx) f e7B( x)B(dy) Jo oc, < C+
J0 0 o" H(dt, dx) = C. o" J
The upper bound then follows from ik(u) = e7uELeVu), and the proof of the u lower bound is similar.
78 CHAPTER III. THE COMPOUND POISSON MODEL
Example 6.4 If B(x) = eax, then an explicit calculation shows easily that B(x) _ e6X fz ° e7(Yx)B(dy) f x' e(6,6)(Yx)8esydy = 5 = P. Hence C_ = C+ = p so that the bounds in Theorem 6.3 collapse and yield the exact expression pey" for O(u). u The following concluding example illustrates a variety of the topics discussed above (though from a general point of view the calculations are deceivingly simple: typically, 7 and other quantities will have to be calculated numerically). Example 6.5 Assume as for (3.1) that /3 = 3 and b(x) = 2 .3e3x + 2 .7e7x, and recall that the ruin probability is 24 5su 5eu + 3e *(u) = 3 Since the dominant term is 24/35 • e", it follows immediately that 7 = 1 and C = 24/35 = 0.686 (also, bounding aS" by a" confirms Lundberg's inequality). For a direct verification, note that the Lundberg equation is
7 = /3(B['Y]1)
= 3\
2.337
+2.7771
which after some elementary algebra leads to the cubic equation 273  1472 + 127 = 0 with roots 0, 1, 6. Thus indeed 7 = 1 (6 is not in the domain of convergence of B[7] and therefore excluded). Further, 1P = B [7] 181B = 13 2.3+2.71 = 1 3 1 7 I 7'
_ 17
2 (3 a )2 + 2 (7  a)2 «=7=1 2 1p _ 7 _ 24
36 '
3.171 35* 36 For Theorem 6.3, note that the function QB[Y]1 f°°{L 3e_3x+
u
• 7e7x 1 dx
J
3 + 3e4u
f 0c, ex .
I 2 . 3e3x + 2 . 7e7x l dx
l J
9/2 + 7/2e4u
7. VARIOUS APPROXIMATIONS FOR THE RUIN PROBABILITY 79
attains its minimum C_ = 2/3 = 0.667 for u = oo and its maximum C+ = 3/4 = 0.750 for u = 0, so that 0.667 < C < 0.750 in accordance with C = 0.686.
Notes and references Theorem 6.3 is from Taylor [360]. Closely related results are given in a queueing setting in Kingman [231], Ross [308] and Rossberg & Siegel [309]. Some further references on variants and extensions of Lundberg's inequality are Kaas & Govaaerts [217], Willmot [382], Dickson [114] and Kalashnikov [218], [220], all of which also go into aspects of the heavytailed case.
7 Various approximations for the ruin probabil
ity
7a The BeekmanBowers approximation
The idea is to write i (u) as F(M > u), fit a gamma distribution with parameters A, 6 to the distribution of M by matching the two first moments and use the approximation
0(u)
f
u
Sa
r(A)
xa  leax dx.
According to Corollary 3.5, this means that A, 8 are given by A/S = a1, 2A/52 = a2 (2) PIB3) ^ZP(B)2 __ PPB a2 al 2(1  P)PB 3(1  P)µ8 + 2(1  p)2' i.e. S = 2a1 /a2, A = 2a2 1/a2.
Notes and references The approximation was introduced by Beekman [60], with the present version suggested by Bowers in the discussion of [60].
7b De Vylder's approximation
Given a risk process with parameters ,(3, B, p = 1, the idea is to approximate the ruin probability with the one for a different process with exponential claims, say with rate parameter S, arrival intensity a and premium rate p. In order to make the processes look so much as possible alike, we make the first three cumulants match, which according to Proposition 1.1 means p=AUB1=P1,
2N
(2) 6^= =OP
,
/3,4)
.
)3 )PBo µB  . Proposition 7.2) was suggested by De Vylder [109]. p* _ .3 and Corollary 3. we have according to the PollaczeckKhinchine formula in the form (3. the approximating risk process has ruin probability z. Proposition 1.1.Ps(/3max .b(u) = p*e.1 As /3 f Nmax./3)] 1 .3 )1 } _ 1p 1 . numerical evidence (e.s(/3max . Letting Bo be the stationary excess life distribution. (/3max . heavy traffic conditions mean that the safety loading q is positive but small. Though of course it is based upon purely empirical grounds. Mathematically. Notes and references The approximation (7. 7c The heavy traffic approximation The term heavy traffic comes from queueing theory.P .3* /S.80 CHAPTER III. That is. THE COMPOUND POISSON MODEL These three equations have solutions 9/3µB2)3 30µa2)2 3µa2) (3) P+ (3) ' 0 .p .PBo [s (/3max . the premiums exceed only slightly the expected claims. [174]) shows that it may produce surprisingly good results. Grandell [171] pp.8µBo  Ss' u where 6 = µB/µBo = 2µa/µB 2) .g. we shall represent this situation with a limit where /3 T fl but B is fixed.1.(bA*)".7) that Ee$(Amex /j)M _ 1p _ 1p Eo [s (0max 1 .p + p { 1 1p ti 1 ./3)PBo PB . or equivalently that /3 is only slightly smaller than /3max = 1/µ8.p = (/3max 0)µB. and hence the ruin probability approximation is b(u) e(bAln)u. cf. 1924. but has an obvious interpretation also in risk theory: on the average./3)M converges in distribution to the 2a exponential distribution with rate S = B' Proof Note first that 1 .(3)2 P PB 2µB 2µB Letting /3* = /3/P.
It is worth noting that this is essentially the same as the approximation (2) z/i(u) Ce. Mathematically. the term light traffic comes from queueing theory.Q T /3max. and hence 2µ2B 1 .l3)M > (/3max . 7d The light traffic approximation As for heavy traffic . but has an obvious interpretation also in risk theory: on the average ./3)u * v.2 If .ze a2unµB laB (7. then P(u) 4 e6„ Proof Write z'(u) as P((/3max .g.7. in risk theory heavy traffic is most often argued to be the typical case rather than light traffic . VARIOUS APPROXIMATIONS FOR THE RUIN PROBABILITY 81 Corollary 7. light traffic conditions mean that the safety loading rl is positive and large . obviously Corollary 7. Notes and references Heavy traffic limit theory for queues goes back to Kingman [230]. light traffic is of some interest as a complement to heavy traffic . as well as it is needed for the interpolation approximation to be studied in the next subsection. . or equivalently that 0 is small compared to µB . we shall represent this situation with a limit where 3 10 but B is fixed. the premiums are much larger than the expected claims . u * oo in such a way that (3max . These results suggest the approximation Vi(u) e6(0_. We return to heavy traffic from a different point of view (diffusion approximations) in Chapter IV and give further references there .ryu . That is . the first results of heavy traffic type seem to be due to Hadwiger [184]./3)u). However .0)u.p. This follows since rl = 1/p . 2 provides the better mathematical foundation. [APQ] Ch.1 1 . In the setting of risk theory.2. Numerical evidence shows that the fit of (7. while the approximation may be far off for large u.p _ 2rl11B PB p. The present situation of Poisson arrivals is somewhat more elementary to deal with than the renewal case (see e .B AB ) 6()3max _'3) = However . Of course.4) suggested by the Cramer Lundberg approximation and Proposition 6. VIII).3) is reasonable for g being say 1020% and u being small or moderate.
5) follow by integration by parts. THE COMPOUND POISSON MODEL Proposition 7.Q limIP ( u) + Q lim z/'(u) Amax &0 amax ATAm. Sigman [347]. the Poisson case is much easier than the renewal case.3 is the same which comes out by saying that basically ruin can only occur at the F(U . [97]. Again. by monotone time T of the first claim . Indeed.e. ( 3 J O B dx.3 As . and hence 00 (U) /3pBBo (u) = 0 / B(x)dx.T > u) = J o" B(x + u)/3eax dx . 10 ( u The alternative expressions in (7. En'=2 • • • = O(/32) so that only the first terms matters. The crude idea of interpolating between light and heavy traffic leads to 0 (u) C1 . . Another way to understand that the present analysis is much simpler than in these references is the fact that in the queueing setting light traffic theory is much easier for virtual waiting times (the probability of the conditioning event {M > 0} is explicit) than for actual waiting times .u.= 1 aJ 1 a 0+ 1 = = p. 0 u Notes and references Light traffic limit theory for queues was initiated by Bloomfield & Cox [69]. u Note that heuristically the light traffic approximation in Proposition 7.5) u Proof According to the PollaczeckKhinchine formula. Omax max m. 7e Interpolating between light and heavy traffic We shall now outline an idea of how the heavy and light traffic approximations can be combined. z/' (u) convergence P(U . (7.T > u).(3 10. see Daley & Rolski [96].u)+.82 CHAPTER III. For a more comprehensive treatment. U > u] = /3iE(U . 0(u) /3 J B(x)dx = /3E[U . i. cf. ao n=1 00 n=1 (u) P) anllBBon(U) onPaBon(u) • Asymptotically. Light traffic does not appear to have been studied in risk theory. Asmussen [19] and references there.
8 Comparing the risks of different claim size distributions Given two claim size distributions B(1). even if the safety loading is not very small. _(E) (u). with rate 1/µB = /3max.O0 M. ./3)) .VHT) ( ax QmQ ) h (B) ( .O(E)(u) 1 (1 .6) (1p) The particular features of this approximation is that it is exact for the exponential distribution and asymptotically correct both in light and heavy traffic. one may hope that some correction of the heavy traffic approximation has been obtained. (U). we may ask which one carries the larger risk in the sense of larger values of the ruin probability V(') (u) for a fixed value of 0. Let OLT) (u) denote the light traffic approximation given by Proposition 7. Another main queueing paper is Whitt [380]. (7. z/i(E) (u) = pe(QmaxQ)u. Thus . "/Qmex Cu) CLT(u ( /3max 0) + O16 CHT( U(Qmaz .3 and use similar notation for %(B) (u) = (u). that is.x . ^ LT Q maxQ m"^ Qlo V LT) ( CHT(v) (say). however.8.3). The adaptation to risk theory is new. . Substituting v = u(. ) M.6) is . COMPARISONS OF CLAIM SIZE DISTRIBUTIONS 83 which is clearly useless . we combine with our explicit knowledge of ip(u) for the exponential claim size distribution E whith the same mean PB as the given one B. no empirical study of the fit of (7. [84]. we see that the following limits HT) (u'). Al . the idea of interpolating between light and heavy traffic is due to Burman & Smith [83 ]. f / Qmax B(x)dx 00 eQmaxxdx 4/ Qmax 00 QmaxQ amaze" and the approximation we suggest is J B(x) dx = cLT(v) (say). Instead. to get nondegenerate limits . available. B(2). where further references can be found .Wmax f(x ) dx + pee6mQ. ^IE) exist: 1 (B) HT QmsxQ hm J e e6" 2µE/µE2)'" = e(1 6)" =  Q1Qm. Notes and references In the queueing setting .3n.
B(') <i. then Bill = B(2). this ordering measures difference in variability. whereas (consider x2) B(2) has the larger variance. cf.ill(u) < V)(2) (U) for all u. B(2) and PB(1) = µB(2). the proof is complete. U(2) such that U(l) has distribution B(').1 is quite weak.s. an equivalent characterization is f f dB(') < f f dB (2) for any nondecreasing convex function f. u Of course. In particular (consider the convex functions x and x) the definition implies that B(1) and B(2) must have the same mean. THE COMPOUND POISSON MODEL To this end.84 CHAPTER III. most often the term stoploss ordering is used instead of increasing convex ordering because for a given distribution B.2 If B(') <j. Bill is said to be convexly smaller than B(2) (in symbols. Recall that B(') is said to be stochastically smaller than B(2) (in symbols. XI. Proof According to the above characterization of stochastical ordering. then i. B(2)) if f fdB(1) < f fdB(2) for any convex function f. Finally. or the existence of random variables U(l). we have the convex ordering. B(') <d B(2)) if B(1)(x) < B(2)(x) for all x. then . A weaker concept is increasing convex ordering: B(1) is said to be smaller than B(2) (in symbols. In the literature on risk theory. In terms of the time to ruin. Taking probabilities. one can interpret f x°° B(y) dy as the net stoploss premium in a stoploss or excessofloss reinsurance arrangement with retention limit x.6. we shall need various ordering properties of distributions. Here convex ordering is useful: Proposition 8. Rather than measuring difference in size.1 If B(') <d B(2). U(2) distribution B(2) and U(1) < U(2) a. for more detail and background on which we refer to Stoyan [352] or Shaked & Shantikumar [337]. this implies St T(l)(u) > r(2)(u) for all u so that 17(I) (U) < oo} C_ {T(2)(u) < oo}. we can assume that 1) < St 2l for all t.' 1)(u) < V)(2) (U) for all u. equivalent characterizations are f f dB(') < f f dB (2) for any nondecreasing function f. Proposition 8. . 
B(2)) in the increasing convex order if f BM (y) dy < f 00 Bi2i (y) dy x x for all x. and a particular deficit is that we cannot compare the risks of claim size distributions with the same mean: if BM <d B(2) and µB«) = /IB(2). Proposition 8. B(' <.
Bo1) <_d Bo2) which implies the same order relation for all convolution powers.3 If B(1) <. COMPARISONS OF CLAIM SIZE DISTRIBUTIONS Proof Since the means are equal. This u implies that D <.p ) E /3"µ"Bo2)* n(u) _ V(2) (u) n=1 = Corollary 8. Then V. Proof Consider the light traffic approximation in Proposition 7. say to p. A partial converse to Proposition 8. Hence by the PollaczeckKhinchine formula . The problem is to specify what 'variation' means.8.(1) (. A first attempt would of course be to identify 'variation' with variance. with fixed mean.1.. .e. then /'(')(u) < 0(2)(u) for all u. Corollary 8.5 If '0(1)(u) < p(2) (U) for all u and a.4 Let D refer to the distribution degenerate at 'LB . B. larger variance is paramount to larger second moment.1 and µB at 1 so that the safety loading 11 is 10%. and consider the following claim size distributions: B1: the standard exponential distribution with density ay. u We finally give a numerical example illustrating how differences in the claim size distribution B may lead to very different ruin probabilities even if we fix the mean p = PB.3 provides another instance of this..2 is the following: Proposition 8. B(2).6 Fix /3 at 1/1. B(2). Proof If f is convex.6 below is that (in a rough formulation) increased variation in B increases the risk (assuming that we fix the mean). we have Bol) (x) f ' B(1) (y) dy < ' f' B(2) (y) dy = Bo2) (x)• µ 85 I. A general picture that emerges from these results and numerical studies like in Example 8. and here is one more result of the same flavor: Corollary 8. then B(1) <. The heavy traffic approximation (7. it is seen that asymptotically in heavy traffic larger claim size variance leads to larger ruin probabilities.u) = (1 _ P) E /3npnBo( 1):n(u) n=1 00 < (1. (D) (u) < O(B) (U ) for all u. from which the result immediately follows. we have by Jensen 's inequality that E f (U) > f ( EU). Example 8.4) certainly supports this view: noting that.
9 Sensitivity estimates In a broad setting. in comparison to B2 the effect on the ua does not show before a = 0. 11 Notes and references Further relevant references are Goovaerts et al.lA. [166]. One then obtains the following table: U005 U0.1%.e'\1x + 0. we have 0r3 = 2 < or2 = 1 < 02 = 10 < 04 = 00 so that in this sense B4 is the most variable. i. B4: the Pareto distribution with density 3/(1 + 2x)5/2. B3: the Erlang distribution with density 4xe2x. 1%. 0.0' U0.4142. Let ua denote the a fractile of the ruin function.86 CHAPTER III.. which appears to be smaller than the range of interest in insurance risk (certainly not in queueing applications!). sensitivity analysis (or pertubation analysis) deals with the calculation of the derivative (the gradient in higher dimensions) of a performance measure s(O) of a stochastic or deterministic system. We return to ordering of ruin probabilities in a special problem in VI. = 0. In terms of variances o2. van Heerwarden [189].e. However. For B1i B2. A standard example from queueing theory is .) = a. 0.000. and this is presumably a consequence of a heavier tail rather than larger variance.4.9A2e'2r where A. A2 = 3. 32 50 75 100 B2 B3 B4 35 181 24 282 37 70 245 425 56 568 74 1100 (the table was produced using simulation and the numbers are therefore subject to statistical uncertainty). THE COMPOUND POISSON MODEL B2: the hyperexponential distribution with density 0. Pellerey [287] and (for the convex ordering) Makowski [ 252]. and consider a = 5%. B3 the comparison is as expected from the intutition concerning the variability of these distributions. with the hyperexponential distribution being more variable than the exponential distribution and the Erlang distribution less.001 u0. B. the behaviour of which is governed by a parameter 9. Note to make the figures comparable.1358. 1/)(u. all distributions have mean 1.01%. Kluppelberg [234].01%.
and hence a _ e(60)u + u e(60)u = ( i + which is of the order of magnitude uV.2 Consider a risk process { Rt} with a general premium rate p. Let R(P) = Rtli.Ap). Example 9. Assume for example that 8 is known. increasing in u. i. where Q2 = fl ( l2 1113 / _ Ou2v)2.(u) for large u. SENSITIVITY ESTIMATES 87 a queueing network.e. Similar conclusions will be found below. Then the arrival rate /3(P) for { R(P) } is )31p.3.01/2u.1. and s(9) the expected sojourn time of a customer in the network. Proof This is an easy time transformation argument in a similar way as in Proposition 1. Then ib = Pe(613)u. a/3 0 . with 0 the vector of service rates at different nodes and routing probabilities.1 Consider the case of claims which are exponential with rate 8 (the premium rate is one). s(9) is of course the ruin probability t' = Vi(u) (with u fixed) and 0 a set of parameters determining the arrival rate 0. and hence the effect of changing p from 1 to 1 + Ap corresponds to changing /3 to /3/(1 + Op) /3(1 . the premium rate p and the claim size distribution B.9. the standard deviation on the normalized estimate ^/1' (the relative error ) is approximatively . we may be interested in a'/ap for assesing the effects of a small change in the premium. In the present setting. a2/t).19P a/ . Thus at p = 1. obtained say in the natural way as the empirical arrival rate Nt/t in [0. In particular . where the partial derivatives are evaluated at p = 1. it follows that ' is approximatively normal N(0.. For example. or we may be interested in aV)/0/3 as a measure of the uncertainty on '0 if 0 is only approximatively known. Then a p ao = 00 Qa/. a0 as ao 80 19P . Thus. if = a e(6A)u. say estimated from data. Then if t is large . the distribution of %3 0 is approximatively normal N(0„ Q/t). while /3 = j3 is an estimate. t]. u Proposition 9.
As a consequence, it suffices to fix the premium at p = 1 and consider only the effects of changing β or/and B. Of course, we cannot expect in general to find explicit expressions like in Example 9.1, but must look for approximations for the sensitivities, e.g. for the ruin probabilities ψ = ψ(u) and the Cramér-Lundberg constant C. The most intuitive approach is to rely on the accuracy of the Cramér-Lundberg approximation ψ(u) ≈ Ce^{−γu}, so that heuristically we obtain

  ψ_β ≈ C_β e^{−γu} − uγ_β Ce^{−γu} ≈ −uγ_β Ce^{−γu}.

As will be seen below, this intuition is indeed correct; however, mathematically a proof is needed basically to show that two limits (u → ∞ and the differentiation as limit of finite differences) are interchangeable.

Consider first the adjustment coefficient γ as function of β, and write γ_β = ∂γ/∂β and so on; similar notation for partial derivatives is used below. In the case of the claim size distribution B, various parametric families of claim size distributions could be considered, but we shall concentrate on a special structure covering a number of important cases, namely that of a two-parameter exponential family of the form

  B_{θ,ζ}(dx) = exp{θx + ζt(x) − ω(θ,ζ)} µ(dx),  x > 0,  (9.2)

(see Remark 9.8 below for some discussion of this assumption). Since B̂_{θ,ζ}[γ] = exp{ω(θ+γ,ζ) − ω(θ,ζ)}, we can rewrite the Lundberg equation as

  ω(θ+γ,ζ) − ω(θ,ζ) = log(1 + γ/β).  (9.3)

Proposition 9.3

  γ_β = γ / (β[1 − (β+γ)ω_θ(θ+γ,ζ)]),
  γ_θ = (β+γ)[ω_θ(θ+γ,ζ) − ω_θ(θ,ζ)] / [1 − (β+γ)ω_θ(θ+γ,ζ)],  (9.4)
  γ_ζ = (β+γ)[ω_ζ(θ+γ,ζ) − ω_ζ(θ,ζ)] / [1 − (β+γ)ω_θ(θ+γ,ζ)].  (9.5)

Proof Differentiating (9.3) w.r.t. β yields

  ω_θ(θ+γ,ζ)γ_β = (βγ_β − γ)/(β(β+γ)).

From this the expression for γ_β follows by straightforward algebra, and the proofs of (9.4), (9.5) are similar. □

Now consider the ruin probability ψ = ψ(u) itself. Consider first the case of ∂/∂β:
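A quick numerical check of the implicit differentiation behind such formulas (a sketch in Python; all parameter values are hypothetical): for exponential claims the Lundberg equation β(B̂[γ] − 1) = γ can be solved directly, and the generic sensitivity γ_β = −(B̂[γ] − 1)/κ'(γ) obtained by differentiating it can be compared with the exact value dγ/dβ = −1 (since γ = δ − β in this case).

```python
from scipy.optimize import brentq

# Hypothetical parameter values: exponential claims with rate delta,
# arrival rate beta, premium rate 1 (so gamma = delta - beta exactly).
beta, delta = 0.9, 1.0

def B_hat(a):                 # m.g.f. of the exponential(delta) claim distribution
    return delta / (delta - a)

def kappa(a):                 # kappa(a) = beta*(B[a] - 1) - a
    return beta * (B_hat(a) - 1.0) - a

gamma = brentq(kappa, 1e-9, delta - 1e-6)   # adjustment coefficient

# Implicit differentiation of beta*(B[gamma] - 1) = gamma w.r.t. beta:
# (B[gamma] - 1) + kappa'(gamma) * dgamma/dbeta = 0
kappa_prime = beta * delta / (delta - gamma) ** 2 - 1.0
gamma_beta = -(B_hat(gamma) - 1.0) / kappa_prime
```

The same finite computation applies verbatim to any claim distribution with an explicit m.g.f.; only `B_hat` changes.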
z2(U) = e7" J u b(u .w(9. ()} .p)/C'y.x). we get p(u) = J "O B(x) dx + J U O(u . 11 For the following. Be. Further write de = [we (9 +'y. w((9 + a. we note the formulas Ee.2 of the Appendix ).x)B(x) dx + J U W(u .C). z2(u) _ 1 ^ e'ri`i7i( u . Hence by a variant of the key renewal theorem (Proposition A1. the proof is complete.8). Combining these estimates . () .x). SENSITIVITY ESTIMATES Proposition 9.3 (see in particular (5.4t (U)e°`U = which are wellknown and easy to show (see e. BarndorffNielsen [58]). 0(u) = /3 Ju"O B(x) dx + f 0 0(u .10) (9.12)). F(dx) = e'yy/3B(x)dx. But from the proof of Theorem 5. () . O} (9.9) (9. Z= zl + z2 where zl (u) = e7u J m B(x)dx. u 0 Proceeding in a similar way as in the proof of the CramerLundberg approximation based upon (9.QB(x) dx.11) Ee. ()] exp {w (O + y.w(O. and alsoo zl(u) + 0 because of B['y ] < oo. PF = (1 .([a] = exp {w(9 + a. (9. By dominated convergence.w(9. ()} . () .9. it holds that 89 a ue ryu a/3 Q(1 P) 7C2 Proof We shall use the renewal equation (3.(e"U = = wS(O.8) Letting cp = e0/e/3 and differentiating (9. () . Z(u)/u a C//3PF where PF is the mean of F. we multiply by e7" and let Z(u) = elt" cp(u).3) for z/'(u).4 As u oo.g.x)B(x)dx.we(9 .8) (Section 5). () exp {w(9 + a. u 0 Then Z = z + F * Z and F is a proper probability distribution .St (U) Ee.3(x) dx.x) F(dx ) f u J C F(dx) = C as u 4 oo.
()]B(dy) dx x 0 0C T ON O .x).w (9. ^) .2) holds. Then as u > oo.8) that cp(u) . ()]e7vB(dy) 'fCd 7 c . ()]B(dy) dx. 0 x Multiplying by e7" and letting Z(u) = e"uV(u).QB(x)dx.w(e. 8^ ue7u. ())B(dy) dx.lB(x) dx = e7uzl(u) + e7°zz(u) + V(u T where zl (u) = .wc(O.x)f3 f ^[t(y) . u Z2(U) = e7° f u ^/i(u . By dominated convergence and (9. THE COMPOUND POISSON MODEL [we(e+7. oo z2 (u) f C . this implies Z = z + F * Z. ()](e7v .wc (O.12) f exp {O y + (t(y) .11).w( (0. 8 8() 8( (9.w( (0.we (0. 01 (i+) do = +'Y.6e7u f "o f[t(y) .5 Assume that (9. 2 z 07P N ue7u (3C de . ^)} [wc (0 + 7.wc(9. )}B(dy)• Letting cp it thus follows from (9. F(dx) = e7x. C) .1) B(dy) 'f '[t(y) .9)(9. z = zl + z2. ()} 1z(dy) = f [t(y) .e7x/3 f 00 [t(y) . C)] (1 + 7 ) Proposition 9.90 CHAPTER III.6C do 89 1p 8( 1p Proof By straightforward differentiation.w(0.
and also zj (u) 4 0 because of f Hence. w(e. we (9. It follows after some elementary calculus that p = a)3/5 and. Here (9.(log r(a) a log S)} • r(a) 1.a/35a&y' ' (9.w((9.6 Consider the gamma density b (x ) = Sa xa..a log S = log r(c) . Example 9. ue_Yu 'C2d( 8a 8( 1 p . by inserting in the above formulas. that C = a.15) (9.9.pa+1 .18) (05 + 57 _'3_y .rye) S 5rya.ry) 5a1 cry (5 . () = C/9 = a/S.1edz = 1 exp {Sx + a log x . t(x) = logx. < = a..1 ./35' a/i'y + aryl 625ry. a /(S .12) takes the form y) alp a . () = log r(a) .14) de = d( 7!3 76 = 7e = log ( \ ( \5a_ / \SSry ) 72 .3ary tog('Finally.Sry a/32 + a/37 + /37 .17) (9.2) holds with p(dx) = xldx.QS 1 . ( 9.yu/3C2do u86 89 1p' az/) = 8z/. SENSITIVITY ESTIMATES as u 3 oo. () ='I'(t. U 7µF from which the second assertion of (9.Y)a+1 ' (9..C log(9). . 9 = S. ())B(dy) < oo.12) follows. We get w( (0. and the proof of the first one u is similar.13) (9.) log(9) = %F(a) logs where %1 = F'/]F is the Digamma function. Z(u) /3C 91 o c'o e11(t (y) .16) (9.
l3 of Section 6a needed for the existence of ry becomes e^Q > 1+62 / 2."62 .log c = 2 In particular.21og 2.3.2 .9) (() .22. further yield .([Y] .7 Consider the inverse Gaussian density ( b(x) Zx37 exp This has the form (9.92 CHAPTER III. C) = B = Yc = de = do = .2) with µ(dx) = 2x3zrdx. C = .2 log (0.2a) } Thus the condition B[a*] > 1 + a* /. THE COMPOUND POISSON MODEL Example 9. ()} = exp {c (C .CZ try)} 1 C C2 try .1 = eXP {c(C .w(9. which we omit in part .1 16 +ry c C22ry 2( = + 70 We (e. for a < a* = z (. C) . () = Cc .3Ee. w(e. 9 = . Straightforward but tedious calculations . t(x) _ . Be.S[a] = exp {w (9 + a.
Inserting these expressions in (9.12) yields the asymptotic sensitivities for the inverse Gaussian case, which are again of the form ∂ψ/∂θ ∼ ue^{−γu}βC²d_θ/(1−ρ), ∂ψ/∂ζ ∼ ue^{−γu}βC²d_ζ/(1−ρ).

Remark 9.8 The specific form of (9.2) is motivated as follows. In general, the exponent of the density in an exponential family has the form θ₁t₁(x) + ⋯ + θ_k t_k(x). That it is no restriction to assume one of the t_i(x) to be linear follows since the whole setup requires exponential moments to be finite (thus we can always extend the family if necessary by adding a term θx). That it is no restriction to assume k ≤ 2 follows since if k > 2, we can just fix k − 2 of the parameters. Finally if k = 1, the exponent is either θx, in which case we can just let t(x) = 0, or ζt(x), in which case the extension just described applies. Thus, we have assumed k = 2 and t₁(x) = x. □

Notes and references The general area of sensitivity analysis (gradient estimation) is currently receiving considerable interest in queueing theory. However, the models there (e.g. queueing networks) are typically much more complicated than the one considered here, and hence explicit or asymptotic estimates are in general not possible; the main tool is simulation, for which we refer to X.7 and references there. Comparatively less work seems to have been done in risk theory; to our knowledge, the results presented here are new. Van Wouve et al. [379] consider a special problem related to reinsurance.

10 Estimation of the adjustment coefficient

We consider a non-parametric setup where β, B are assumed to be completely unknown, and we estimate γ by means of the empirical solution γ_T to the Lundberg equation. To this end, let

  β_T = N_T/T,  B̂_T[α] = (1/N_T) Σ_{i=1}^{N_T} e^{αU_i},  κ_T(α) = β_T(B̂_T[α] − 1) − α,

and let γ_T be defined by κ_T(γ_T) = 0. Note that if N_T = 0, then B̂_T and hence γ_T is undefined. Also, if ρ_T = β_T(U₁ + ⋯ + U_{N_T})/N_T ≥ 1, then γ_T ≤ 0. However, by the LLN both P(N_T = 0) and P(ρ_T ≥ 1) converge to 0 as T → ∞.
Theorem 10.1 As T → ∞, γ_T →a.s. γ.  (10.1) If furthermore B̂[2γ] < ∞, then

  √T(γ_T − γ) →D N(0, σ²_γ), where σ²_γ = β{(B̂[γ]−1)² + B̂[2γ] − B̂[γ]²}/κ'(γ)² = κ(2γ)/κ'(γ)².  (10.2)

For the proof, we need a lemma.

Lemma 10.2 As T → ∞,

  √T κ_T(γ) →D N(0, β{(B̂[γ]−1)² + B̂[2γ] − B̂[γ]²}).  (10.3)

Proof Since Var(e^{γU}) = B̂[2γ] − B̂[γ]², we have β_T ≈ N(β, β/T) and, by Anscombe's theorem and N_T/T →a.s. β, B̂_T[γ] ≈ N(B̂[γ], (B̂[2γ] − B̂[γ]²)/N_T). More generally, it is easy to see that we can write

  β_T ≈ β + √(β/T) V₁,  B̂_T[γ] ≈ B̂[γ] + √((B̂[2γ] − B̂[γ]²)/(βT)) V₂,

where V₁, V₂ are independent N(0,1) r.v.'s. Hence

  κ_T(γ) = β_T(B̂_T[γ] − 1) − γ = (β_T − β)(B̂[γ] − 1) + β_T(B̂_T[γ] − B̂[γ])
        ≈ (B̂[γ]−1)√(β/T) V₁ + √(β(B̂[2γ]−B̂[γ]²)/T) V₂,

which is the same as (10.3). □
Proof of Theorem 10.1 By the law of large numbers, β_T →a.s. β and B̂_T[α] →a.s. B̂[α], hence κ_T(α) →a.s. κ(α) for all α. Let 0 < ε < γ. Then κ(γ−ε) < 0 < κ(γ+ε), and hence

  κ_T(γ−ε) < 0 < κ_T(γ+ε)  (10.4)

for all sufficiently large T, which implies γ_T ∈ (γ−ε, γ+ε) eventually; the truth of this for all ε > 0 implies γ_T →a.s. γ.

Now write κ_T(γ_T) − κ_T(γ) = κ'_T(γ*_T)(γ_T − γ), where γ*_T is some point between γ_T and γ. By the law of large numbers,

  κ'_T(α) = β_T (1/N_T) Σ_{i=1}^{N_T} U_i e^{αU_i} − 1 →a.s. βB̂'[α] − 1 = κ'(α)

for all α. If γ*_T ∈ (γ−ε, γ+ε), we have κ'_T(γ−ε) ≤ κ'_T(γ*_T) ≤ κ'_T(γ+ε), which implies κ'_T(γ*_T) →a.s. κ'(γ). Combining this with κ_T(γ_T) = 0, (10.2) follows from Lemma 10.2 since

  √T(γ_T − γ) = −√T κ_T(γ)/κ'_T(γ*_T) →D N(0, σ²_γ). □

Theorem 10.1 can be used to obtain error bounds on the ruin probabilities when the parameters β, B are estimated from data. To this end, first note that

  e^{−γ_T u} ≈ N(e^{−γu}, u²e^{−2γu}σ²_γ/T).

Thus an asymptotic upper α confidence bound for e^{−γu} (and hence by Lundberg's inequality for ψ(u)) is

  e^{−γ_T u} + f_α u e^{−γ_T u} σ_{γ,T}/√T,

where σ²_{γ,T} = κ_T(2γ_T)/κ'_T(γ_T)² is the empirical estimate of σ²_γ and f_α satisfies Φ(−f_α) = α (e.g., f_α = 1.96 if α = 2.5%).
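A minimal simulation sketch of the estimator (all parameter values hypothetical; exponential claims are used so that the true γ = δ − β is known and the condition B̂[2γ] < ∞ of Theorem 10.1 holds, since 2γ < δ):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

# Hypothetical true model: Poisson(beta) arrivals, exponential(delta) claims.
beta, delta, T = 1.0, 1.5, 5000.0
gamma_true = delta - beta                    # = 0.5 for exponential claims

N_T = rng.poisson(beta * T)                  # number of claims in [0, T]
U = rng.exponential(1.0 / delta, size=N_T)   # observed claim sizes

beta_T = N_T / T                             # empirical arrival rate

def kappa_T(a):                              # empirical kappa_T(a) = beta_T*(B_T[a]-1) - a
    return beta_T * (np.exp(a * U).mean() - 1.0) - a

# kappa_T is negative just to the right of 0 (empirical safety loading > 0)
# and clearly positive well before a reaches delta, so bracket the root there.
gamma_T = brentq(kappa_T, 1e-6, 1.2)
```

With T this large the estimate falls within a few multiples of σ_γ/√T of the true value; the same empirical κ_T evaluated at 2γ_T gives the plug-in variance estimate for the confidence bound above.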
1 : Vt = 0.g. Deheuvels & Steinebach [102].T VIT where r7ry... One (see Schmidli [321]) is to let {Vt} be the workload process of an M /G/1 queue with the same arrival epochs as the risk process and service times U1. Letting Wo = 0. i ..96 CHAPTER III. THE COMPOUND POISSON MODEL Thus an asymptotic upper a confidence bound for a7' (and hence by Lundberg's inequality for 0(u)) is e"TU + f. A major restriction of the approach is the condition B[2ry] < oo which may be quite restrictive. [197]. U2. For example .ueryuU ". Csorgo & Teugels [95].Wn) are i .C1e"a ( see e. it means 2 (8 . satisfies b(. > 0 for some t E [Wn_ 1. i.. various alternatives have been developed.info< „< t S. and the known fact that the Y„ = max Vt tE[W„1. V. Mammitzsch [253] and Pitts. with a tail of the form P(Y > y) . if B is exponential with rate 8 so that ry = 8 . Hipp [196].f. t]}... Griibel & Embrechts [292].) = a (e. 6 < 2.e.Q. Asmussen [23]) can then be used to produce an estimate of ry.0) < 5. Wn).g. For this reason . Notes and references Theorem 10.3 or equivalently p > 1/2 or 11 < 100%. = 1. wn = inf{t > W. Frees [146]. ft.i. This approach in fact applies also for many models more general than the compound Poisson one.5%). . Embrechts & Mikosch [133].T = 3TKT ( 21T)IKT (^T)2 is the empirical estimate of vy and fc.e. the nth busy cycle is then [Wn1. Further work on estimation of y with different methods can be found in Csorgo & Steinebach [94].. Herkenrath [192]..d. Vt = St .1 is from Grandell [170].96 if a = 2.
Chapter IV

The probability of ruin within finite time

This chapter is concerned with the finite time ruin probabilities

  ψ(u,T) = P(τ(u) ≤ T) = P( inf_{0≤t≤T} R_t < 0 | R₀ = u ) = P( sup_{0≤t≤T} S_t > u ).

Only the compound Poisson case is treated; generalizations to other models are either discussed in the Notes and References or in relevant chapters. The notation is essentially as in Chapter III. In particular, the Poisson intensity is β and the claim size distribution is B with m.g.f. B̂[·] and mean µ_B. Unless otherwise stated, the premium rate is 1. The safety loading is η = 1/ρ − 1 where ρ = βµ_B.

Unless otherwise stated, it is assumed that η > 0 and that the adjustment coefficient (Lundberg exponent) γ, defined as solution of κ(γ) = 0 where κ(s) = β(B̂[s] − 1) − s, exists. Further let γ₀ be the unique point in (0, γ) where κ(α) attains its minimum value. See Fig. 0.1 (the role of γ_y will be explained in Section 4b).
Figure 0.1 The claims surplus is {S_t}, the time of ruin is τ(u), and ξ(u) = S_{τ(u)} − u is the overshoot.

1 Exponential claims

Proposition 1.1 In the compound Poisson model with exponential claims with rate δ and safety loading η > 0, the conditional mean and variance of the time to ruin are given by

  E[τ(u) | τ(u) < ∞] = (βu + 1)/(δ − β),  (1.1)
  Var[τ(u) | τ(u) < ∞] = (2βδu + β + δ)/(δ − β)³.  (1.2)

Proof Let P_L, E_L refer to the exponentially tilted process with arrival intensity δ and exponential claims with rate β (thus ρ_L = δ/β = 1/ρ > 1). By the likelihood identity III.(4.5), we have for k = 1, 2 that

  E[τ(u)^k; τ(u) < ∞] = E_L[τ(u)^k e^{−γξ(u)}] = e^{−γu} E_L e^{−γξ(u)} · E_L τ(u)^k = ψ(u) E_L τ(u)^k,

using that the overshoot ξ(u) is exponential with rate β w.r.t. P_L and independent of τ(u). In particular,

  E[τ(u) | τ(u) < ∞] = E_L τ(u),  Var[τ(u) | τ(u) < ∞] = Var_L τ(u).
For (1.1), note that by Wald's identity (E_L S_t = t(ρ_L − 1)),

  u + E_L ξ(u) = E_L S_{τ(u)} = (ρ_L − 1) E_L τ(u),

and hence

  E_L τ(u) = (u + E_L ξ(u))/(ρ_L − 1) = (u + 1/β)/(δ/β − 1) = (βu + 1)/(δ − β).

For (1.2), Wald's second moment identity yields

  E_L (S_{τ(u)} − (ρ_L − 1)τ(u))² = σ_L² E_L τ(u),

where σ_L² = 2δ/β² is the variance per unit time of {S_t} under P_L. Since S_{τ(u)} = u + ξ(u), and ξ(u) and (ρ_L − 1)τ(u) are independent with Var_L ξ(u) = 1/β², the l.h.s. is Var_L ξ(u) + (ρ_L − 1)² Var_L τ(u). Thus

  Var_L τ(u) = (σ_L² E_L τ(u) − Var_L ξ(u)) β²/(δ − β)² = (2βδu + β + δ)/(δ − β)³,

which is the same as the r.h.s. of (1.2). □

Proposition 1.2 In the compound Poisson model with exponential claims with rate δ and safety loading η > 0, the Laplace transform of the time to ruin is given by

  E[e^{−ατ(u)}; τ(u) < ∞] = (1 − θ/δ) e^{−θu} for α ≥ κ(γ₀), where

  θ = θ(α) = ((δ − β − α) + √((δ − β − α)² + 4αδ))/2.  (1.3)

Proof It is readily checked that γ₀ = δ − √(βδ), and hence that the value of κ(γ₀) is 2√(βδ) − β − δ. Let θ > γ₀ be determined by κ(θ) = α. This means that β(δ/(δ − θ) − 1) − θ = α, which leads to the quadratic θ² + (β − δ + α)θ − δα = 0 with solution θ = θ(α) given by (1.3) (the sign of the square root is + because θ > γ₀).
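Both the closed-form conditional mean and the transform are easy to sanity-check numerically (a sketch with hypothetical parameter values; premium rate 1): θ(α) from the quadratic should satisfy κ(θ(α)) = α, and differentiating E[e^{−ατ(u)}; τ(u) < ∞] = (1 − θ(α)/δ)e^{−θ(α)u} at α = 0 should reproduce ψ(u)(βu+1)/(δ−β).

```python
import math

# Hypothetical parameter values with positive safety loading (premium rate 1).
beta, delta, u = 0.7, 1.0, 5.0

def kappa(s):
    return beta * (delta / (delta - s) - 1.0) - s

def theta(a):        # the root > gamma_0 of kappa(theta) = a
    return ((delta - beta - a) + math.sqrt((delta - beta - a) ** 2 + 4 * a * delta)) / 2.0

def transform(a):    # E[exp(-a*tau(u)); tau(u) < infinity]
    th = theta(a)
    return (1.0 - th / delta) * math.exp(-th * u)

# Central difference at a = 0 recovers E[tau(u); tau(u) < infinity],
# to be compared with psi(u)*(beta*u + 1)/(delta - beta).
h = 1e-6
mean_restricted = -(transform(h) - transform(-h)) / (2 * h)
psi = (beta / delta) * math.exp(-(delta - beta) * u)
exact = psi * (beta * u + 1.0) / (delta - beta)
```

At α = 0 the transform reduces to ψ(u) itself, since θ(0) = δ − β = γ.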
Note that it follows from Proposition 1.v.3.'s with rate 5.4.OuEee 04(u) = ee u be BB+B where we used that PB(T(u) < oo ) = 1 because 0 > ryo and hence E9S1 = K'(0) u > 0.100 CHAPTER IV. are the lengths of of the ladder segments 2. St Ti F. T2 . But by the fundamental likelihood ratio identity ( Theorem 111.. Fig. Cf. the result follows.3) we have E [e«T(u ). M(u) T(u) = T + E Tk k=1 where T = T(0) is the length of the first ladder segment .3 that we can write EeaT( u) = eeuEe 017(o).. are the ladder heights which form a terminating sequence of exponential r.1 where Y1. . T(u) < oo] = EB [exp {aT(u ) . PROBABILITY OF RUIN IN FINITE TIME sign of the square root is + because 0 > 0). Ti...0. and M(u)+1 is the index of the ladder segment corresponding to T(u).Y1 Y2 Figure 1..... 1. T(u) < oo] = e. .1 . Using 5 = 6 . Y2.4) The interpretation of this that T(u) can be written as the independent sum of T(0) plus a r. More precisely.9ST(u) +T(u)!c(0)} . (1.v.T+ Ti a t U T I 1 a i F. Y(u) belonging to a convolution semigroup .
For numerical purposes, the following formula is convenient by allowing ψ(u,T) to be evaluated by numerical integration. Note that the case δ ≠ 1 is easily reduced to the case δ = 1 via the formula ψ_{β,δ}(u,T) = ψ_{β/δ,1}(δu, δT).

Proposition 1.3 Assume that claims are exponential with rate δ = 1. Then

  ψ(u,T) = βe^{−(1−β)u} − (1/π) ∫₀^π (f₁(θ)f₂(θ)/f₃(θ)) dθ,  (1.5)

where

  f₁(θ) = β exp{2√β T cos θ − (1+β)T + u(√β cos θ − 1)},
  f₂(θ) = cos(u√β sin θ) − cos(u√β sin θ + 2θ),  (1.6)
  f₃(θ) = 1 + β − 2√β cos θ.

Proof We use the formula ψ(u,T) = P(V_T > u), where {V_t} is the workload process in an initially empty M/M/1 queue with arrival rate β and service rate δ = 1 (cf. Chapter II). Let {Q_t} be the queue length process of the queue (number in system, including the customer being currently served). If Q_T = N > 0, then V_T = U_{1,T} + ⋯ + U_{N,T}, where U_{1,T} is the residual service time of the customer being currently served and U_{2,T}, …, U_{N,T} the service times of the customers awaiting service. Since U_{1,T}, …, U_{N,T} are conditionally i.i.d. and exponential with rate δ = 1 given Q_T = N, the conditional distribution of V_T given Q_T = N is that of E_N, where the r.v. E_N has an Erlang distribution with parameters (N, 1), i.e. density x^{N−1}e^{−x}/(N−1)!. Hence

  P(V_T > u) = Σ_{N=1}^∞ P(Q_T = N) P(E_N > u) = Σ_{N=1}^∞ P(Q_T = N) Σ_{k=0}^{N−1} e^{−u}u^k/k! = Σ_{k=0}^∞ (e^{−u}u^k/k!) P(Q_T ≥ k+1).

For j = 0, 1, 2, …, let (cf. [4])

  I_j(x) = Σ_{n=0}^∞ (x/2)^{2n+j}/(n!(n+j)!) = (1/π) ∫₀^π e^{x cos θ} cos(jθ) dθ
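Proposition 1.3 is straightforward to implement (a sketch assuming the formula as stated, using SciPy's `quad`; parameter values in the usage below are hypothetical). Note that the T-dependent terms in the exponent of f₁ equal −T f₃(θ) ≤ −T(1−√β)², so the integral vanishes as T → ∞ and ψ(u,T) increases to the infinite-horizon value βe^{−(1−β)u}.

```python
import numpy as np
from scipy.integrate import quad

def psi_finite(u, T, beta):
    """psi(u, T) for exponential(1) claims via the integral of Proposition 1.3."""
    sb = np.sqrt(beta)
    def integrand(th):
        f1 = beta * np.exp(2 * sb * T * np.cos(th) - (1 + beta) * T
                           + u * (sb * np.cos(th) - 1))
        f2 = np.cos(u * sb * np.sin(th)) - np.cos(u * sb * np.sin(th) + 2 * th)
        f3 = 1 + beta - 2 * sb * np.cos(th)
        return f1 * f2 / f3
    val, _ = quad(integrand, 0.0, np.pi)
    return beta * np.exp(-(1 - beta) * u) - val / np.pi
```

For example, psi_finite(1.0, 200.0, 0.5) is already indistinguishable from the infinite-horizon value 0.5·e^{−0.5}, while psi_finite(u, 0.0, beta) returns 0 as it must.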
ie .1)] L _112 /(k+1)/2 [.38). Then (see Prabhu [294] pp.)3k+1 = e(1+0)T e201/2Tcos 7r 0 e )3(k +l)/2 [31/ 2 cos ( kO) .13(k +l)/2ei(k +1)9 R E .112 l 1( k +1)/2 [ 31/ 2 cos(kO) . similar formulas are in [APQ] pp.31 /2eie L 1)] 1 I/31/2eie .(31/2 cos (( k + 2)9) . 8789) 00 E aj j= 00 = 1.cos((k + 1)0)] f3(0) 00 flk +1 > j=k1 3j/2 COS(jB) l)/2ei(k+1)e )3j/2eije = R)3(k+ (31 /2eie . let I _ j (x) = Ij (x).1 R [. k k2 + $k+1 E bj 00 t j . PROBABILITY OF RUIN IN FINITE TIME denote the modified Bessel function of order j. 912.(31/2eie .3(k +1)/ 2ei(k + l)6 (. (1. f3(0) .i(k +1)e R [/3( klal/2e:0 (01 /2 e .102 CHAPTER IV. and define tj = e(1+R)Taj/2Ij(2vT T).8 ) yields F(QT > k + 1) .cos (( k + 1)0)] f3(9) Hence the integral expression in (1. 00 E '3j/2 cos(je) j=k+1 00 _ j=k+1 ^j/ zeij = .k + 1) = 1 k +1 + bj j=00 j=00 00 j=kk+1 j=k1 By Euler 's formulas.)3k +1 tj g'(QT >.cos((k + 2)9)] d9.44).1 00 ok+lR 00 j=k1 +1)/2e . in particular equations (1.
Related formulas are in Takacs [359]. is numerically unstable for large T. t )). k! k=O k0 i/z Co Uk ate" o'/z e . however. We allow a general claim size distribution B and recall that we have the explicit formula z/i(0) _ P(7(0) Goo) = p. there are several misprints in the formula there. it follows as in (1. t) = P . expresses V)(0. equivalently. The first formula. . Seal [327] gives a different numerical integration fomula for 1 . going back to Cramer. T). however.0(u. oo (u)31/2e^e)k = )3k z cos(k9) = R k. 2 The ruin probability with no initial reserve In this section . or.. E Fk. F(x.3 was given in Asmussen [12] (as pointed out by BarndorffNielsen & Schmidli [59].e = e' COS a cos(uf31/2 sin 0). the numerical examples in [12] are correct). We first prove two classical formulas which are remarkable by showing that the ruin probabilities can be reconstructed from the distributions of the St.7) that _ [^ au ak+l (30 k L. T) in terms of F(. and the next one (often called Seal's formula but originating from Prabhu [293]) shows how to reduce the case u 54 0 to this. THE RUIN PROBABILITY WITH NO INITIAL RESERVE Since P(QOO > k + 1) = flk+1. we are concerned with describing the distribution of the ruin time T(0) in the case where the initial reserve is u = 0. u Notes and references Proposition 1. from the accumulated claim distribution N.T) which.2. The rest of the proof is easy algebra. k=0 103 Cu) A further application of Euler's formulas yields cc k =0 k 'ese)k __ U #kJ2 cos((k + 2)9) = R eNO ^` (u^1 L k= = eup i/z L OI = =ateU161/2 e '0+2iO COS a cos(u(31/2 sin 9 + 20). Ui < x I / (note that P(St < x ) = F(x + t.
T)) 1 fT P(M(v. Proof For any v E [0.b (0.T)dx.t)= {Stv) < SM. . [v.T))dv.(6. Then 1 .1.T]. T T o where the second equality follows from II. we define a new claim surplus process St StM NJ Figure 2. T].i. 2.3) with A = (0. co ). and the third from the obvious fact (exchangeability properties of the Poisson process) that has the same distribution as St = { Si0)} so that P(M(v.1 In formulas. meaning that we interchange the two segments of the arrival process of {St}o<t<_T corresponding to the intervals [0. See Fig.(.(0.T)) does not {Stv)} depend on v.S„ 0 <t<Tv STS„+St_T+v Tv<t<T as the event that IS. resp. Stv^ _ Define M(v.0<w<t} St+v .T) T F(x. f T lStv)} 0<t<T by a 'cyclic translation'.104 CHAPTER IV. 1 1 . ") } is at a minimum at time t. PROBABILITY OF RUIN IN FINITE TIME Theorem 2 . v]. T) = P(Tr(0) > T) = P(M(0.T))dv E^T I(M(v.
2.T) and Sv < 0 on M(0. this integral is 0 if STv) . T)) dv = TEST = T fP(ST < x) dx T T NT 1 f P(ST < x) dx = 1 f P Ui T . Fig 2. cf. then M(v. v < t < T} n M(0. letting w = inf It > 0 : St_ = mino<w<T Sw}. then i fT I(M(v. T) as {ST<St+ vS. Hence T TE f I( M(v.T)) dv f T I(M(0. T].T) occurs or not as long as ST < 0. T)) dv. 0<t <Tv}n{ST<ST Sv+St T+v.xdx.Sv. we can take v E (w E. v) = M(0. v). .. T. Indeed.2. We claim that if M(0. v)) dv = ST T T o (note that the Lebesgue measure of the v for which {St} is at a minimum at v is exactly . or it occurs. 0<t<v} = {ST < St . v). w) for some small E. T) occurs. T T o i =1 Let f (•. in which case there is a last time o where St downcrosses level u. THE RUIN PROBABILITY WITH NO INITIAL RESERVE 105 Now consider the evaluation of fo I(M(v. T)). ST > 0.Tt))f(u+t. Obviously. T) occurs. t). there exist v such that M(v. t) denote the density of F(•. where the last equality follows from ST < St on M(0. T) = M(0. If ST < 0.t)dt. v<t<T}n{ST<STSv+St. T Theorem 2 . we can write M(v.T) occurs.T)f(I z /)(0.. v). For example. It follows that if M(0 .T) = F(u+T. It is then clear from the cyclical nature of the problem that this holds irrespective of whether M(0.v<t<T} = {ST<StSv.2 10(u.ST on M (0. Proof The event {ST < u} = { Ei T Ui < u + T j can occur in two ways: either ruin does not occur in [0.
Proposition 2. 0 < t <T. t + dt] occurs if and only if St E [u. Then P(T(0) E • I T(0) < oo) = P(T_ (Z) E •).ST_ t_ and let A(z. PROBABILITY OF RUIN IN FINITE TIME u Q II T Figure 2.u+dt]).3 Define r_ (z) = inf It > 0 : St = z}.z. {St > . {S t > z. Hence P(ST<u) = 1 .2 . define St = ST . E [t. which is independent of St and has the stationary excess distribution B0.T) = . The proof is combined with the proof of Theorem 111. z > 0.2.(0. The following representation of T(0) will be used in the next section. u + dt] and there is no upcrossing of level u after time t. For a fixed T > 0. ST_ _ z}.2. Let Z be a r. 0 < t <T . ST_ _ z}. which occurs w. C*(z. ST_ _ z} .T) = {St < 0. u which is the same as the assertion of the theorem.106 CHAPTER IV. O(T .Tt))P(StE[u.T) = C(z. 2.t). Proof of Theorem 111.b(u. 0 < t < T.2 Here o.T)+ J0 T (1V.v.p.
T))dT = Off(z) dz P(T_ (z ) < oo) = 3B(z) dz. Thus P(Sr(o)_>x. u which is the assertion of Theorem 111. T + dT] I S7(o)_ E [z.T) = C*(z. Hence integrating (2. 2. Fig. z + dz]. z + dz].1) that P(T(0) E [T. A(z. r(0) < oo) = 3R(z) dz JP(C(z. z + dz]. THE RUIN PROBABILITY WITH NO INITIAL RESERVE Then 107 P(r(0) E [T.T + dT]. (2. 7( 0) < oo) = P (C(z)) dT.3). z + dz].3 But by sample path inspection (cf. ST(o)_ E [z. It follows by division by P(ST(o)_ E [z.T(0)<oc) = f x F(U > y + z U > z) P(Sr(o)_ E [z. Proof of Proposition 2.T))f3B(z) dz dT. .T)).1) z T . and since {St}o<t<T. we therefore have P(A(z.2. T(0) < oo) B(y B(z) + z) f3B(z) dz = 3 f °^ B(y + z) dz = f3 + x v f B(z) dz.1) yields P(ST(o)_ E [z.T).2. Figure 2.2. T(0) < oo) = OR(z) dz in (2.T)) = P(Cx. {St }o<t<T have the same distribution . z + dz]) = P(A(z.3.ST(o) >y.
2.1) where a > r.6.r(a). cf. (3. T(0) < oo) 0 = dT f 0 P(C(z))P(Z E [z. a martingale proof is in Delbaen & Haezendonck [103].s. because of77>0. r(0) < oo. PROBABILITY OF RUIN IN FINITE TIME ]P(7(0) E [T.1) .5a). Theorem 2. z + dz]. r(a) denotes the solution < 'Yo of the equation a = ic(r (a)) = .(yo).(3(B[r( a)] .108 Hence CHAPTER IV.T + dT] T(0) < oo) dT f ' P(C(z))P(Sr( o)_ E [z.c(r(a)) l = l er( a)se+at } u yields 1 = eyr(a)Eear(y). 3 Laplace transforms Throughout in this section. some relevant references are Shtatland [338] and Gusak & Korolyuk [181]. Proposition 2. one based upon a result of Asmussen & Schmidt [49] generalizing Theorem 11. In the setting of general Levy processes.1 and the present proof is in the spirit of Ballot theorems. Tak'ecs [359]. z + dz]. Proof Optional stopping of the martingale I er (a) 9 t. Let T_ (y) be defined as Proposition 2. who instead of the present direct proof gave two arguments.3.2. Notes and references For Theorems 2.2 ga(x) = Qexr(a) f "o eyr(a)B(dy) x . Note that T_ (y) < oo a.5 and one upon excursion theory for Markov processes (see IX. [329].1.T+dT]).3 was noted by Asmussen & Kl(ippelberg [36]. T(0) < oo) = dTP(T_(Z) E [T.1 Eear( y) = eyr(a). I L Let ga(x) be the density of the measure E[ear(°). Lemma 3. Lemma 3 . see in addition to Prabhu [293] also Seal [326]. ^(0) E dx] (recall that ^(0) = Sr(o)) and write ga[b] = f OD ebxga(x) dx.
2. It is then easily seen that Za(u) is the solution of the renewal equation Za (u) = za (u) + fo Z. T(u) < oo] du = Proof Define Za(u) = E [eaT(" ). Z = y] = EeaT.ga [b] 1 . y + dy]. u . Further by Theorem 111.1] evr(a)B(dy)[ b .(v) = ev''(a).4 E[eaT (o). Hence eb"du E[eaT(").ga [b] 0 TO Using Lemma 3.g.ST(o)_ just before ruin .3. time T(u): u u Here is a classical result : the double m.r(a) The result now follows by inserting /3B[s] = ic( s) +/3+ s and ic(r(a)) =a.T(0) < oo] = 20[b] = za[b] (9a[b] 9a[0])/b 1 .3 ga[b] = c(b) Proof + b + a .f.ic(b)/b x(b) + a eb"E[eaT(" ). Then by Proposition 2. Corollary 3.5 f 00 o a/r(a) . (Laplace transform) of the ruin Corollary 3. (u .x)(a) B(dy)• Lemma 3 .2 P(Z E [y.3.°° ga(x)dx.r(a) b . rr(0) < oo) = 1_ r(a) Proof Let b = 0.r(a) oo Q f ex(br(a))dx f00 eyr(a)B(dy) x 0 Q f evraB(dy) e(a))dx 0 Q cc ev(br (a)) . the result follows after simple algebra. £(0) E dx) = /3B(x + dy) dx and hence ga(x) = f e r)/3B(x + dy) _ /3 f x e(v.r(a) = a [B[b] B[r(a)]] . r(u) < oo). b . E[ear (o) I T(0) < oo .3.x)ga (x) dx where za(u) = f. LAPLACE TRANSFORMS 109 Proof Let Z be the surplus .
4 When does ruin occur?

For the general compound Poisson model, the known results are even less explicit than for the exponential claims case, and take basically the form of approximations and inequalities. The first main result of the present section is that the value um_L, where

  m_L = 1/κ'(γ) = 1/(βB̂'[γ] − 1) = 1/(β_L E_L U − 1) = 1/(ρ_L − 1),

is in some appropriate sense critical as the most 'likely' time of ruin (here C is the Cramér-Lundberg constant). Later results then deal with more precise and refined versions of this statement.

Theorem 4.1 Assume η > 0. Then given τ(u) < ∞, τ(u)/u → m_L in probability; i.e., for any ε > 0,

  P( |τ(u)/u − m_L| > ε | τ(u) < ∞ ) → 0 as u → ∞.

Further,

  e^{γu} ψ(u, mu) → { 0, m < m_L;  C, m > m_L }.  (4.1)

For the proof, we need the following auxiliary result:

Proposition 4.2 Assume η < 0, i.e. ρ = βµ_B > 1, and let m = 1/(ρ − 1). Then as u → ∞,

  τ(u)/u →a.s. m,  Eτ(u)/u → m,  ξ(u) = o(u) a.s.,  (τ(u) − mu)/√u →D N(0, ω²),

where ω² = βµ_B^{(2)} m³ and µ_B^{(2)} is the second moment of B.

Proof The assumption η < 0 ensures that P(τ(u) < ∞) = 1 and τ(u) →a.s. ∞. Since S_t/t →a.s. ρ − 1 = 1/m and ξ(u) = o(u) a.s., we have

  m = lim_{t→∞} t/S_t = lim_{u→∞} τ(u)/S_{τ(u)} = lim_{u→∞} τ(u)/(u + ξ(u)) = lim_{u→∞} τ(u)/u.

For the second assertion, note that by Wald's identity u + Eξ(u) = ES_{τ(u)} = (ρ − 1)Eτ(u), so that Eτ(u)/u → m using Eξ(u) = o(u) (cf. Proposition A1.2 of the Appendix). The CLT assertion follows from Anscombe's theorem: since (S_t − t/m)/√t →D N(0, βµ_B^{(2)}), the same conclusion holds with t replaced by τ(u), and substituting S_{τ(u)} = u + o(u), τ(u) ≈ mu yields (τ(u) − mu)/√u →D N(0, βµ_B^{(2)}m³). □
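For exponential claims the critical slope m_L can be checked directly against Proposition 1.1 (a sketch; parameter values hypothetical): 1/κ'(γ) must coincide with the limiting slope of the conditional mean time to ruin, E[τ(u) | τ(u) < ∞]/u = (βu + 1)/((δ − β)u) → β/(δ − β).

```python
# Hypothetical parameters: exponential claims with rate delta, premium rate 1.
beta, delta = 0.7, 1.0
gamma = delta - beta                      # adjustment coefficient in this case

# m_L = 1/kappa'(gamma) with kappa'(s) = beta*delta/(delta - s)^2 - 1
m_L = 1.0 / (beta * delta / (delta - gamma) ** 2 - 1.0)

# Slope of the conditional expected ruin time from Proposition 1.1:
# (beta*u + 1)/((delta - beta)*u)  ->  beta/(delta - beta) as u grows.
u = 1e6
slope = (beta * u + 1.0) / ((delta - beta) * u)
```

Here both quantities equal β/(δ − β); the agreement is what Theorem 4.1 asserts in general.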
T(u) < oo f / 00) e7uE L [e_7 (t1). which may be viewed both as a refinement of Theorem 4.3.mL U > E. and (4.6µB2) Z v m (3µB2) Z. of (4. this can be rewritten as u + 1(u) . Theorem 7.N(0.1). the result comes out not only by the present direct proof but also from any of the results in the following subsections.1 is standard. and as a timedependent version of the CramerLundberg approximation.t/m D (2) 111 . Tu) T( u) .2.^ N (o. If Z . Notes and references Theorem 4.mu (2) '• m3/2 µB 7 .mu m . 4a Segerdahl's normal approximation We shall now prove a classical result due to Segerdahl.6. According to Anscombe' s theorem (e. implying T(u) . WHEN DOES RUIN OCCUR? and that Ee(u)/u a 0. proving (4. .2 of [86]) and (4.r(u)/m T(u) ti µB2) Z.2) follows immediately from u (4. cf. 4). again Proposition A1.h.mL >E By Proposition 4.1 The l.5) St . T) for T which are close to the critical value umL).1 (by considering 0(u. Thus.3).1. T (u) < 00 J 0(u) e7'PL U \ I T u) . the same conclusion holds with t replaced by r(u). 1'r(U) .1). For (4 . note first that ( Proposition 111.g.7 6  11 Proof of Theorem 4.4. PL (•)+ 0. apB ) .1).1) is T (u)  U mL P( T (u) < I > E. though it is not easy to attribute priority to any particular author.s.
PROBABILITY OF RUIN IN FINITE TIME Corollary 4. we can replace T(u) by r(u').r.112 CHAPTER IV. and similarly as above we get E[f(^(u)) I Fr(u. letting Z be a N(0.t. resp . Then for any y.e(u') oo w . Proof Define u' = u . g are continuous and bounded on [0. Then h(u) 4 h(oo) = E f (6(oo)).VU T.)mu \ h(oo)Eg (r(ul) .5) For the proof.f ( (oo)) .3).mul h(oo)Eg(Z).T(u') given F. Using ( 4. (oo.v.um.(u.u1/4)I(S(u') > u1 /4) h(oo) + 0. Then the distribution of T(u) .4 (SIAM'S LEMMA) If 71 < 0.ul/4.3 (SEGERDAHL [333]) Let C be the CramerLundberg constant and define wL = f3LELU2mL = f3B"[ry]mL where ML = 1/(pL1) = 1/($B'[ry]1).4).w2) r.6). one has 9 (r(u)_rnu) Ef (^(u)) * E. O . S( u ) < ul/4] < ET(ul / 4) = O(ul/4). e'°'/b (u.) is readily seen to be degenerate at zero if ST(u•) > u and otherwise that of T(v) with v = u . Let h(u) = E f (^(u)). we need the following auxiliary result: Proposition 4.l:(oo) (recall that rt < 0). and thus in (4. using that ul/4 .T ( u')] = E[ T ( ul /4 .a C4'(y )• ( 4. Hence Ef (Vu )) 9 (T(u.6) whenever f. E9(Z) (4.ST( u') = u1/4 .))I h(ul /4  ^(u)) I(6 (u') C ) f < ul /4 + f(e(u') . with w2 as in (4. oo).L+YWLV'U) . P because of ^(u') . then e(u) and r(u) are asymptotically independent in the sense that. we get E[ T (u) . oo ).^(T(u')).
where we used Stain's lemma in the third step and (4. Segerdahl 's result suggests the approximation b(u. see also von Bahr [55 ] and Gut [182]. CL Fig. T(u) < umL + ywL f.(ay) = 17 7y = ay . PL(T(u ) < umL + ywL) 113 4 C4(y). The precise condition for (4. For refinements of Corollary 4.oo.yK(ay)• (4. . For practical purposes . Theorem 4.7) To arrive at this . yy by 1 K.8) Note that ay > 7o and that 7y > •y (unless for the critical value y = 1/ML). e7v" y < ^'(7) (4 .3 in terms of Edgeworth expansions . Cf. The present proof is basically that of Siegmund [342].9) ( 4 . that for the fit of (4.7) to be valid is that T varies with u in such a way that y(T) has a limit in (. 3 is due to Segerdahl [333].z/)(u . however . define ay. y u) < e 7v" .5 '(u . y u) < . 10) '5(u) . just substitute T = umL + ywL in (4. also Hoglund [204]. in practice one would trust (4.T) Ce7"4 (T .7) to be good.4. 0. Notes and references Corollary 4 .umL wI V"U u (4.5) and solve for y = y(T). y > k'(7) . ELe7E (") . u needs to be very large). oo ) as u * oo.3 ery"z/i(u . see Asmussen [12] and Malinovskii [254]. Thus . WHEN DOES RUIN OCCUR? Proof of Corollary 4. umL + ywL f) = e"P(T (u) < umL + ywL) = EL [e7V ").dependent version of Lundberg's inequality For y > 0.1.4) in the last. 4b Gerber's time.7) whenever u is large and ly(T)l moderate or small (numerical evidence presented in [12 ] indicates .
b (u. Numerical comparisons are in Grandell [172 ]. if y > 1/ic'(y). see MartinLM [257] . In view of Theorem 4. For a different proof.7 i.3 yields easily the following sharpening of (4. a. yu 11 < T(u) < oo j < eayu +Y UK(ay) Remark 4. Then ic(ay) > 0 (see Fig .h(u.1). which shows that the correct rate of decay of tp(u. u Differentiating w.ay4(u)+ T(u)K(ay ). and hence t. and generalizations to more general models are given in Chapter VI. From the proof it is seen that this amounts to that a should maximize ayic(a).yu ) = eayuEav [e . we have rc(ay) < 0 and get (u) . the bound a7y° turns out to be rather crude . which may be understood from Theorem 4. yu < T (u) < oo 1 l e ayuEav [eT ( u)K(ay). T(u) < yu] < eayu + yUr(ay) Y < eayuEav [ eT(u)K(av )L T(u) < yu} Similarly. we arrive at the expression in (4.6. the point is that we want to select an a which produces the largest possible exponent in the inequalities. . f Some urther discussion is given in XI. Hoglund [203] treats the renewal case.v"U.114 CHAPTER IV.2.r. dy) Notes and references Theorem 4 .8 below .9): Proposition 4.t. However.5. yu ) = < eayuEay [eay^ ( u)+T(U)K ( ay). yu) < C+(ay)e7a„ where l C+(ay) = sup f 00 eayR(xy)B( . 0. 5 is due to Gerber [156 ]. An easy combination with the proof of Theorem 111. PROBABILITY OF RUIN IN FINITE TIME Proof Consider first the case y < 1/K'(y).8).Y' (u.6 It may appear that the proof uses considerably less information on ay than is inherent in the definition (4.8). yy is sometimes called the timedependent Lundberg exponent. who used a martingale argument. yu) is e 'Yyu/ .
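A sketch of computing a_y and the time-dependent Lundberg exponent γ_y for exponential claims (hypothetical parameter values): solve κ'(a_y) = 1/y, which has a closed form here, and set γ_y = a_y − yκ(a_y). For y ≠ 1/κ'(γ) one gets γ_y > γ, so the finite-horizon bounds are strictly sharper than e^{−γu}.

```python
import math

# Hypothetical parameters: exponential claims with rate delta, premium rate 1.
beta, delta = 0.7, 1.0
gamma = delta - beta                          # adjustment coefficient

def kappa(a):
    return beta * a / (delta - a) - a

def kappa_prime(a):
    return beta * delta / (delta - a) ** 2 - 1.0

def a_y(y):        # solution of kappa'(a_y) = 1/y (closed form for exp. claims)
    return delta - math.sqrt(beta * delta * y / (y + 1.0))

def gamma_y(y):    # time-dependent Lundberg exponent gamma_y = a_y - y*kappa(a_y)
    ay = a_y(y)
    return ay - y * kappa(ay)

y_crit = 1.0 / kappa_prime(gamma)             # at y = y_crit, gamma_y = gamma
```

The exponent is minimized exactly at y = 1/κ'(γ), where a_y = γ and κ(a_y) = 0, matching the critical time um_L of Section 4a.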
with P_L replaced by P_{α_y} and F_L by F_{α_y}.

4c Arfwedson's saddlepoint approximation

Our next objective is to strengthen the time-dependent Lundberg inequalities to approximations. As a motivation, it is instructive to reinspect the choice of the change of measure in the proof, i.e. the choice of α_y. For any α > γ_0, Proposition 4.2 yields E_α τ(u) ≈ u/κ'(α); i.e., if we want E_α τ(u) ≈ T, then the relevant choice is precisely α = α_y where y = T/u. We thereby obtain that T is 'in the center' of the P_α-distribution of τ(u). This idea is precisely what characterizes the saddlepoint method: the traditional application of the saddlepoint method is to derive approximations, not inequalities, and in case of ruin probabilities the approach leads to the following result:

Theorem 4.8 If y < 1/κ'(γ), then

ψ(u, yu) ∼ e^{-γ_y u} / (α_y |â_y| √(2πy³ β B̂''[α_y]) √u),  u → ∞,  (4.11)

where â_y denotes the second root of κ(â) = κ(α_y). If y > 1/κ'(γ), then the solution â_y < α_y of κ(â) = κ(α_y) is < 0, and the analogous asymptotics hold for ψ(u) - ψ(u, yu).

Proof In view of Stam's lemma, the choice α = α_y, and the formula

ψ(u, yu) = e^{-α_y u} E_{α_y}[e^{-α_y ξ(u) + τ(u)κ(α_y)}; τ(u) ≤ yu],  (4.12)

the first expectation can be estimated similarly as in the proof of the Cramér-Lundberg approximation in Chapter III.
7ruw2 Inserting these estimates in (4.ay + ayl /BLay] . where V is normal(0.c(ay)ul/2W p 2ir = eyu(ay) dz 1 rc(ay ) 2. Example 4.B[ay] /ay &y y(ay .l'B)y /(Pay .9 Assume that B(x) = eay.1B[ay]1 ) y(ay . V < 01 Ir 00 er(ay)"1'2"'x eyur.I ay &y a ^c'(ay) a (1 +. The difficulties in making the proof precise is in part to show (4.(ay) _ y(ay . i B[7ay . we get heuristically that Eay Ler (u)r(ay).13) rigorously. (ay) J0 1 K(ay )u 1 00 c2(x) dx /2 w 1 ezcp(z /( k(ay)u1 /2w)) dz /O° _ 1 1 J e Z . and in part that for the final calculation one needs a sharpened version of the CLT for t(u) (basically a local CLT with remainder term). (4.1.116 CHAPTER IV.1) . Then ic(a) = .1)3 = y3/3B"[ay].1) under Pay mation (4. a nr=.13).a.1)3 = (jB"[ay]l (Pay . . it seems tempting to apply the normal approxiyu + ul/2wV.4).ay) ay +.ay)K(ay) ay ayI&YI For the second term in (4.12) is 0 entirely similar.13).(j (1 .11) follows.c'(a) _ /3a/(8 . PROBABILITY OF RUIN IN FINITE TIME ry I i .a)2 . Writing r(u) and W2 = I3ay{. The proof of (4. and the equation ic'(a) = 1/y is easily seen to have .a) . T(U) < yu] = eyuk (ay)E''ay (ek(ay )"1/2WV.3(5/(S .ay ) r.
because the c. (5.1.5. then { __ . 5 Diffusion approximations The idea behind the diffusion approximation is to first approximate the claim surplus process by a Brownian motion with drift by matching the two first moments.= (s. in discrete time: if p = ES. .. It follows that 5^y =5ay = /«y =f3+ay=l3+d 1+1/y' V 1+^1/y /35 1+1/y /3' ay ay =Qay say =.1) .p.g.3+52 1+/351/y' sy 7 B ii[ay] 25 _ 251/2(1 + y)3/2 (5 . c a 00.. The mathematical result behind is Donsker's theorem for a simple random walk {Sn}n=o. and next to note that such an approximation in particular implies that the first passage probabilities are close.. A related result appears in BarndorffNielsen & Schmidli [59].f.ay)3 0 3/2 and (4. is undefined for a > 5).8 is from Arfwedson [9].i )( v s vc ('3 + s _2 / . 2 = Var(Si ) the variance. yu) when y < 1/ic'('y) = p/1 .11) gives the expression '31/4 ( .tcp) Lo {Wo ( t)}t>0 . 0 Notes and references Theorem 4./4 ^y for 1/i (u. DIFFUSION APPROXIMATIONS solution ay=5 117 V 1 (the sign of the square root is . y) a''y" L '3 _ fl ) 51 /4(1 +1IY)3/4 \. is the drift and o.
1 As p J. 0 . and consider the limit p j p. n/c < t < (n + 1)/c. (5. Indeed .3) whenever c = cp f oo as p 1 p.3) takes the form LI S(P) { a2 to2/µ2 + t LI S (P) { a2 ta2/µ2 {W0(t)}.p. a2 =/3µB2) Proof The first step is to note that { WC (St P) .1 below). of which a particular case is the claim surplus process (see the proof of Theorem 5. However.e.. we shall represent this assumption on 77 by a family {StP) L of claim surplus processes indexed by the premium rate p.3.2) t>o where p = pp = p .tcpp) y = { WC (Sct) pct) } {Wo( t)}t>o (5. This is the regime of the diffusion approximation (note that this is just the same as for the heavy traffic approximation for infinite horizon ruin probabilities studied in III..118 CHAPTER IV. such that the claim size distribution B and the Poisson rate a are the same for all p (i.1. We want an approximation of the claim surplus process itself.a = Snp) and the inequalities Sn )C . this is an easy consequence of (5. for the purpose of approximating ruin probabilities the centering around the mean (the tcp term in (5.z } {W_1(t )}t>o (5. cf. + {Wo(t ) .tp).t} _ {W_1(t)} .p/c < St(p) < S((n+l)/ c + Pp/c. where p is the critical premium rate APBTheorem 5 . Lemma 111. PROBABILITY OF RUIN IN FINITE TIME where {W( (t)} is Brownian motion with drift S and variance (diffusion constant) 1 (here 2 refers to weak convergence in D = D[0.1)) is inconvenient.7c). and this can be obtained under the assumption that the safety loading rt is small and positive. p. oo)). It is fairly straightforward to translate Donsker's theorem into a parallel statement for continuous time random walks (Levy processes).1) with S. St = EN` U= . Letting c = a2/pp. we have o {i!t s: . Mathematically.
It is well known (see Grandell [168], [169] or [APQ] pp. 196, 263) that the distribution IG(·; ζ, u) of the passage time τ_ζ(u) = inf{t > 0: W_ζ(t) > u} (often referred to as the inverse Gaussian distribution) is given by

IG(x; ζ, u) = P(τ_ζ(u) ≤ x) = 1 - Φ(u/√x - ζ√x) + e^{2ζu} Φ(-u/√x - ζ√x).  (5.4)

Note that IG(·; ζ, u) is defective when ζ < 0, with IG(∞; ζ, u) = e^{2ζu}.

Corollary 5.2 As p ↓ p_0,

ψ_p(uσ²/|μ_p|, Tσ²/μ_p²) → IG(T; -1, u).  (5.5)

Proof Since f ↦ sup_{0≤t≤T} f(t) is continuous on D a.s. w.r.t. any probability measure concentrated on the continuous functions, the continuous mapping theorem yields

sup_{0≤t≤T} (|μ|/σ²) S^{(p)}_{tσ²/μ²} → sup_{0≤t≤T} W_{-1}(t).

Since the r.h.s. has a continuous distribution, this implies

P(sup_{0≤t≤T} (|μ|/σ²) S^{(p)}_{tσ²/μ²} > u) → P(sup_{0≤t≤T} W_{-1}(t) > u).

But the l.h.s. is ψ_p(uσ²/|μ|, Tσ²/μ²), and the r.h.s. is IG(T; -1, u).

For practical purposes, Corollary 5.2 suggests the approximation

ψ(u, T) ≈ IG(Tμ²/σ²; -1, u|μ|/σ²).  (5.6)

Note that letting T → ∞ in (5.6) yields ψ(u) ≈ IG(∞; -1, u|μ|/σ²) = e^{-2u|μ|/σ²}. This is the same as the heavy traffic approximation derived in III.7c. However, since ψ(u) has infinite horizon, the continuity argument above does not generalize immediately, and in fact some additional arguments are needed to justify (5.6) from Theorem 5.1. Because of the direct argument in Chapter III, we omit the details.
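In concrete terms, (5.6) is the first-passage distribution of Brownian motion with drift μ = βμ_B - 1 < 0 and variance σ² = βE U², written via the normal distribution function as in (5.4). A minimal sketch (the parameter values used below, corresponding to β = 1 and Exp(2) claims, are assumptions for illustration):

```python
import math

def Phi(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def psi_diffusion(u, T, mu, sigma2):
    """Diffusion approximation (5.6) to psi(u, T): the probability that
    Brownian motion with drift mu < 0 and variance sigma2 crosses level
    u > 0 before time T, i.e. IG(T*mu**2/sigma2; -1, u*|mu|/sigma2)."""
    s = math.sqrt(sigma2 * T)
    return (Phi((mu * T - u) / s)
            + math.exp(2.0 * mu * u / sigma2) * Phi((-mu * T - u) / s))
```

Letting T grow recovers the heavy-traffic limit e^{-2u|μ|/σ²}; for β = 1, Exp(2) claims one has μ = -1/2, σ² = 1/2, so ψ(u) ≈ e^{-2u}.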
Then as 0 _+ 90. The first application in risk theory is Iglehart [207]. e.Po = 09µB6 . the claim size distribution B9 and the premium rate p9 depends on 0.00µB6 + 0. All material of this section can be found in these references..1 and Section VIII. as an example of such a generalization we mention the paper [129] by Emanuel et al.5) combined with the fact that finite horizon ruin probabilities are so hard to deal with even for the compound Poisson model makes this approximation more appealing.6) are presented. pt? 4 peo.3 Consider a family {Ste) } oc claim surplus processes indexed by a parameter 9. The picture which emerges is that the approximations are not terribly precise. 0) { 2 StQ2 /µ2 D { W_ i(t)}t>o t>o D 2 where p = pe = pe . for more general models it may be easier to generalize the diffusion approximation than the CramerLundberg approximation.1. as 0 * 00 and that the U2 are uniformly integrable w. in the next subsection we shall derive a refinement of (5. See for example Billingsley [64]. a2 = ae = 00µa6 Notes and references Diffusion approximations of random walks via Donsker's theorem is a classical topic of probability theory. Furrer. PROBABILITY OF RUIN IN FINITE TIME Checks of the numerical fits of (5. in Asmussen [12].Pe.6 of [APQ]. The proof is a straightforward combination of the proof of Theorem 5. pe . . the simplicity of (5.r. (5. on the premium rule involving interest. we have ^A. Further relevant references in this direction are Furrer [151] and Boxma & Cohen [75]. We conclude this section by giving a more general triangular array version of Theorem 5. In contrast. Theorem 5.g. However. For claims with infinite variance.6) therefore does not appear to of much practical relevance for the compound Poisson model. such that the Poisson rate Oe. the B9.5) for the compound Poisson model which does not require much more computation. Assume further that 039µB6 < pe.5) and (5. 
Michna & Weron [152] suggested an approximation by a stable Levy process rather than a Brownian motion. in particular for large u. B0 * Boo. In view of the excellent fit of the CramerLundberg approximation. and which is much more precise. However. [169].120 CHAPTER IV. that 00 4090. and two further standard references in the area are Grandell [168].t.
6 Corrected diffusion approximations

The idea behind the simple diffusion approximation is to replace the risk process by a Brownian motion (by fitting the two first moments) and use the Brownian first passage probabilities as approximation for the ruin probabilities. However, since Brownian motion is skip-free, this idea ignores (among other things) the presence of the overshoot ξ(u), which we have seen to play an important role for example for the Cramér-Lundberg approximation. The objective of the corrected diffusion approximation is to take this and other deficits into consideration.

The setup is the exponential family of compound risk processes with parameters (β_θ, B_θ) constructed in Chapter III. However, it is more convenient here to use some value θ_0 < 0 and let θ = 0 correspond to η = 0 (zero drift), whereas there we let the given risk process with safety loading η > 0 correspond to θ = 0. In terms of the given risk process with Poisson intensity β, claim size distribution B, κ(s) = β(B̂[s] - 1) - s, and p = βμ_B < 1, η = 1/p - 1 > 0, this means the following:

1. Determine γ_0 > 0 by κ'(γ_0) = 0 and let θ_0 = -γ_0.
2. Let P_0 refer to the risk process with parameters β_0 = βB̂[γ_0], B_0(dx) = (e^{γ_0 x}/B̂[γ_0]) B(dx). Then E_0 U^k = B_0^{(k)}[0] = B^{(k)}[γ_0]/B̂[γ_0], and κ_0(s) = κ(s + γ_0) - κ(γ_0) satisfies κ_0'(0) = 0.
3. For each θ, let P_θ refer to the risk process with parameters β_θ = β_0 B̂_0[θ] = βB̂[θ + γ_0], B_θ(dx) = (e^{θx}/B̂_0[θ]) B_0(dx) = (e^{(θ+γ_0)x}/B̂[θ + γ_0]) B(dx), so that κ_θ(s) = κ_0(s + θ) - κ_0(θ). Then P_θ(τ(u) < ∞) = 1 for θ ≥ 0 and P_θ(τ(u) < ∞) < 1 for θ < 0.

The given risk process corresponds to P_{θ_0}, and we are studying ψ(u, T) = P_{θ_0}(τ(u) ≤ T) for θ_0 < 0; we want to consider the limit η ↓ 0 corresponding to θ_0 ↑ 0.
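The tilting parameter γ_0 solving κ'(γ_0) = 0 is explicit for exponential claims: with κ(s) = βs/(δ - s) - s one gets γ_0 = δ - √(βδ). A numerical check by bisection (the exponential-claims model and the values β = 1, δ = 4 are assumptions for illustration):

```python
def gamma0(beta, delta):
    """Solve kappa'(gamma0) = 0 for the compound Poisson model with
    Exp(delta) claims and premium rate 1, where
    kappa(s) = beta*s/(delta - s) - s.  kappa' is increasing on
    (0, delta) and negative at 0 when beta < delta, so bisection works."""
    dkappa = lambda s: beta * delta / (delta - s) ** 2 - 1.0
    lo, hi = 0.0, delta - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dkappa(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With β = 1, δ = 4 this gives γ_0 = 4 - √4 = 2, which lies strictly between 0 and the adjustment coefficient γ = δ - β = 3, as it should since γ_0 is the minimizer of the convex function κ on (0, γ).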
Varo S1 = f30Eo U2 = S1. Theorem 5. 0o to. C.3) this implies (take u = 1) Ego exp { . (.S.(y) = 0.1) IG(x. bl IG(t81. i. IGu+u2. (U. C . . The corrected diffusion approximation to be derived is (u. S2 = 3E0U2 Bier [Yo] 3B"[Yo] Write the initial reserve u for the given risk process as u = C/Oo ( note that C < 0) and. Vargo S.7(u)/u2} eh(A.T) 1+u2 (6. PROBABILITY OF RUIN IN FINITE TIME Recall that IG(x. () where h (A. the solution of r. (6... 9otc0" (0) = 0061 = ul. One has (6. _ ^(u) = ST . for brevity. u) = IG(x/u2. C) = 2A + (2 .3 applies and yields 1061 U61 Stdlu2/CZdi {W_1(t)}t>0 t>0 which easily leads to 1 StU2 {W( J(t)1t>0 { u S1 t>o Y'(u. 1) • Since L eatIG (dt. u) = euh(a .u. and Si = QoEoU2 = Q B"'['Yo Eo U3 ]. .. The first step in the derivation is to note that µ = k (0) = r0 (00) . means up to o(u1) terms): .() The idea of the proof is to improve upon this by an O (u1) term (in the following.1) . write r = T(u).122 CHAPTER IV.2' where as ususal ry > 0 is the adjustment coefficient for the given risk process. (01..e.2) . (.C. tu2 ) i IG (t. u) denotes the distribution function of the passage time of Brownian motion {W((t)} with unit variance and drift C from level 0 to level u > 0.
however.5) according to (6.3. the r. In ( 1) and (2).1 + u2 I Indeed. A numerical illustration is given in Fig.7.h.5) Once this is established .exp { h(A. . that the saddlepoint approximation of BarndorffNielsen & Schmidli [59] is a serious competitor and is in fact preferable if 77 is large] . in (3) and (4). 1. 1% in (2) and (4). that whereas the proof of Proposition 6. p = 0.2). is the c.z . 9o T 0 in such as way that C = Sou is fixed. calculated using numerical integration and Proposition 1. of (6. .4. (6. we get by formal Laplace transform inversion that C 2 u. distributed as Z . But the Laplace transform of such a r. The solid line represents the exact value .6. .'yu /2)(1 + b2/u)} + Aug 1I J . . u is Eeazead2/++ Eeaz[1 + ab2/u] where the last expression coincides with the r. Note.52/u where Z has distribution IG (•.1 + 629.d.3). and the dotted line the corrected diffusion approximation (6. The justification for the procedure is the wonderful numerical fit which has been found in numerical examples and which for a small or moderate safety loading 77 is by far the best amoung the various available approximations [note. bl I IG I t +2 . the formal Laplace transform inversion is heuristic: an additional argument would be required to infer that the remainder term in (6.1 below is exact. 6 .h.2 ). just replace t by Tb1/u2. however .2) is indeed o(u1). which is based upon exponential claims with mean µB = 1.s. it holds for any fixed A > 0 that Ego exp { Ab1rr(u)/u2} . CORRECTED DIFFUSION APPROXIMATIONS 123 Proposition 6. The initial reserve u has been selected such that the infinite horizon ruin probability b(u) is 10% in (1) and (3). of a (defective) r. To arrive at (6.f.1 As u + oo. we have p =.ry2 .v.v.3 = 0.s.
PROBABILITY OF RUIN IN FINITE TIME 0. Note that the ordinary diffusion approximation requires p to be close to 1 and '0 (u) to be not too small.7.1 proceeds in several steps..7 or at values of Vi(u) like 1% is unsatisfying.EB 0 p ex p ( 7 S h ^)u .OOIi O.W21 0.^) .19)2 11 20 20 i0 T 1n0 Figure 6.T1 00. .(061 0.aa1 .T) 0.00 0. see Asmussen [12].u2 2u3 (e .08 0.114 0. For further numerical illustrations.T) 111 0. (Inc 0s 0.01 0.() Lemma 6. OM 0. it gives the right order of magnitude and the ordinary diffusion approximation hopelessly fails for this value of p.08 a.07 0. A51 7(SAT 3 3 h(X.4 may not be outstanding but nevertheless.02 I 90 120 160 2W A0 Z WT 40 80 120 160 100 240 280 T 111 WI.1 W IU.111 W(U.05{ 0.1 It is seen that the numerical fit is extraordinary for p = 0.. BarndorffNielsen & Schmidli [59] and Asmussen & Hojgaard [34]. and all of the numerical studies the author knows of indicate that its fit at p = 0. Similarly.124 0.199 0.011 L1 60 T IM 11. the fit at p = 0.TI CHAPTER IV.2 e.0 0. The proof of Proposition 6.
. () 62 Eeo exp u u2 J . 3 lim Eof (u) = EoC(oo) = a2 Ep = 3EoU2 uroo Proof By partial integration .00)(u +C)  'r (. C) 1 1 + u2/ 111 + 2u CZ Z  (2A + ()1/2 J 1 Proof It follows by a suitable variant of Stam's lemma (Proposition 4.61a2T (B3 .1) h(A.6. 1 = PB(T < oo) = Eo0 exp 125 {(B .r0 (00)) } Replacing B by 8/u and Bo by C/u yields e(B() = E eo exp { (e .3 EoU2 + 103OoEoU3 + " 2 6 Using d2 . + a1b2 + .+ h (A. 1 / Po(C(0) > y) dy EoC(0) x k EDUk + 1 k Eo[(0)k+1 EoC(0) _ (k + 1)EoU' EoC(^) _ (k + 1) Eo£(0) Lemma 6 . () + C and note that 2 KO (0) = 102.. exp ue } al 1J 3 exP I [2).C)C/u .h.6) u U3 Lemma 6 . the formulas Po(C(0) > x) Po(C(co) > x) imply 1 °° Po(ST(o) > x) = EIU fIP0 (U>y)dy . (6.4) that the r..(3) J t _ aa1T l + eh(A..2u (B3 . in Lemma 6.C2 = 2). CORRECTED DIFFUSION APPROXIMATIONS Proof For a>0.T (co (8/u) .(3)Eea LauT exp i 3J . (6. the result follows.4 Ea.s.7) 2 2 .co (e) .2 behaves like C l Eeo eXp r _ ^81T 1 Sl u2 1 u 2u3 [1+h(AC) S .co ((/u)) } Let 8 = (2a + (2)1/2 = h().
and inserting this and 9o 2 = S/u on the r.2. and the correction terms which need to be added cancels conveniently with some of the more complicated expressions in Lemma 6.\+ (2 (3 e 2u [ (2.e h(aS)h (^^ 262 exp {_h(.h (A. () by h(\.2) for O(u) (indeed . C) ( 1+ u2 The result follows by combining Lemma 6 . 2 and (6.() I 1 + u2 ) y . Thus by Taylor expansion around ( = 90u.6) and 7co (Oo) = ico('y + Bo) to get 0 = 21 (^/2 + 2y90) + 1112 (_Y3 + 3_Y200 + 3y9o) + O(u4)..4. yields +90 62 0 + O(u 3) 2u2 +O(u 3). we get the correct asymptotic exponential decay parameter ^/ in the approximation ( 6. 5 exp { _h(A) (1 + / y u J)) exp 1.2 (^/2 + 3y9o + 390) + O(u3).s. PROBABILITY OF RUIN IN FINITE TIME The last term is approximately (e 3 (3) 27. we get h(A. yu/2). 0 The last step is to replace h(A.126 CHAPTER IV.7) and using eh(a. l Lemma 6 .h.() . letting formally T * oo yields 7/)(u) C'e7u where C' = e7a2).x. 2 + 00 = . Thus a2 y = 290 + O (u2).(2A + ()1/21 exp S h(A.2u [2A+ (2 3 .1 (y/2 + Oo)u . yu/2) h(A. () . [2+ (2 .\ + () 1 2 / .S) d e 62 . There are two reasons for this : in this way. yu/2) 11+ 62 I} S 1 \\\ u/11 l 62 (3 2u 2A Proof Use first (6.6  d h(A.
() I 1 + u2 )I 2u L 2A+C2_(2 exp { _h.1 (y/2 + Oo)u )} 1 (i + U ) [2+ C2 2u 62 S Pt^ exP { J 62(2 exp { h(A. the approach to the finite horizon case is in part different and uses local central limit theorems.7. The corrected diffusion approximation was extended to the renewal model in Asmussen & Hojgaard [34]. i. ()} 3 h (A.(i+ 62 exP{ h(A.4.e. In Siegmund's book [346]. and to the Markovmodulated model of Chapter VI in Asmussen [16]. u Notes and references Corrected diffusion approximations were introduced by Siegmund [345] in a discrete random walk setting. that is. the 'typical' value (say in sense of the conditional mean) was umL. . Hogan [200] considered a variant of the corrected diffusion approximation which does not require exponential moments.5 in Lemma 6. () I 1 + u 2 ) } S 1 . 'yu/2) 127 ( i+ M pz^ exP { h (A.1: Just insert Lemma 6.T) has not been carried out and seems nontrivial. The answer is similar: the process behaved as if it changed its whole distribution to FL. HOW DOES RUIN OCCUR? exp { h (x. with the translation to risk processes being carried out by the author [12]. The adaptation to risk theory has not been carried out. ()} . We shall now generalize this question by asking what a sample path of the risk process looks like given it leads to ruin. the same as for the unconditional Lundberg process. 7 How does ruin occur? We saw in Section 4 that given that ruin occurs. this case is in part simpler than the general random walk case because the ladder height distribution G+ can be found explicitly (as pBo) which avoids the numerical integration involving characteristic functions which was used in [345] to determine the constants. His ideas were adapted by Asmussen & Binswanger [27] to derive approximations for the infinite horizon ruin probability 'i(u) when claims are heavytailed. the analogous analysis of finite horizon ruin probabilities O(u. Fuh [148] considers the closely related case of discrete time Markov additive processes. 
0 1 Proof of Proposition 6. () (i+a ) 2A + (2 .
We are concerned with describing the P^{(u)}-distribution of {S_t}_{0≤t<τ(u)} (note that the behaviour after τ(u) is trivial: by the strong Markov property, {S_{τ(u)+t} - S_{τ(u)}}_{t≥0} is just an independent copy of {S_t}_{t≥0}). Here F_{τ(u)} is the stopping time σ-algebra carrying all relevant information about τ(u) and {S_t}_{0≤t<τ(u)}; note that basically the difference between F_{τ(u)} and F_{τ(u)-} is that ξ(u) is F_{τ(u)}-measurable but not F_{τ(u)-}-measurable. Define P^{(u)} = P(· | τ(u) < ∞) as the distribution of the risk process given ruin with initial reserve u, and recall that β_L = βB̂[γ] and B_L(dx) = e^{γx}B(dx)/B̂[γ].

Theorem 7.1 Let {F(u)}_{u>0} be any family of events with F(u) ∈ F_{τ(u)-} and satisfying P_L(F(u)) → 1, u → ∞. Then also P^{(u)}(F(u)) → 1.

Proof Write e^{-γS_{τ(u)}} = e^{-γu} e^{-γξ(u)}. Then

P^{(u)}(F(u)^c) = P(F(u)^c, τ(u) < ∞)/P(τ(u) < ∞) = E_L[e^{-γS_{τ(u)}}; F(u)^c]/ψ(u) ≤ e^{-γu} P_L(F(u)^c)/ψ(u) → 0,

using ψ(u) ∼ Ce^{-γu} and P_L(F(u)^c) → 0.

Corollary 7.2 If B is exponential with rate δ, then ξ(u) and F_{τ(u)-} are independent, ξ(u) is exponential with rate δ w.r.t. P^{(u)} (and with rate δ_L = δ - γ w.r.t. P_L), and P^{(u)} and P_L coincide on F_{τ(u)-}.

Proof In the exponential case, ξ(u) and F_{τ(u)-} are independent, so for F(u) ∈ F_{τ(u)-} the numerator in the proof of Theorem 7.1 becomes e^{-γu} E_L e^{-γξ(u)} P_L(F(u)) = e^{-γu} C P_L(F(u)), and similarly the denominator is exactly equal to Ce^{-γu}.

As example, we give a typical application of Theorem 7.1, stating roughly that under P^{(u)} the claim surplus process behaves as if it changed its arrival rate from β to β_L and its claim size distribution from B to B_L, i.e. the same as for the unconditional Lundberg process. Let M(u) be the index of the claim leading to ruin (thus τ(u) = T_1 + T_2 + ... + T_{M(u)}).

Corollary 7.3

(1/M(u)) Σ_{k=1}^{M(u)} I(T_k ≤ x) → 1 - e^{-β_L x},  (1/M(u)) Σ_{k=1}^{M(u)} I(U_k ≤ x) → B_L(x)

in P^{(u)}-probability as u → ∞.

Proof For the first assertion, take F(u) = {|(1/M(u)) Σ_{k=1}^{M(u)} I(T_k ≤ x) - (1 - e^{-β_L x})| ≤ ε}; by the LLN, P_L(F(u)) → 1, so Theorem 7.1 applies. The proof of the second is similar.

Notes and references The results of the present section are part of a more general study carried out by the author [11]. A somewhat similar study was carried out in the queueing setting by Anantharam [6], who also treated the heavy-tailed case; however, the queueing results are of a somewhat different type because of the presence of reflection at 0. From a mathematical point of view, the subject treated in this section leads into the area of large deviations theory, which is currently a very active area of research; see further XI.3.
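Theorem 7.1 and Corollaries 7.2 and 7.3 are easy to illustrate by simulation: conditionally on ruin, the claims should look like draws from B_L, and the overshoot ξ(u) should be Exp(δ). A Monte Carlo sketch for the compound Poisson case β = 1, Exp(2) claims (so γ = 1, B_L = Exp(δ - γ) = Exp(1)); the parameters, the truncation barrier and the sample sizes are ad hoc choices for illustration:

```python
import random

def simulate(u=5.0, n_paths=100_000, barrier=-20.0, seed=1):
    """Random walk of the claim surplus at claim epochs, with increments
    U - T, U ~ Exp(2) (claims), T ~ Exp(1) (interarrival times, premium
    rate 1).  Ruin occurs iff the walk ever exceeds u; paths drifting
    below the barrier are abandoned (the residual ruin probability from
    there is negligible).  On ruin paths, collect all claims up to ruin
    and the overshoot over u."""
    random.seed(seed)
    claims, overshoots = [], []
    for _ in range(n_paths):
        s, path_claims = 0.0, []
        while barrier < s <= u:
            U = random.expovariate(2.0)   # claim size, Exp(delta=2)
            T = random.expovariate(1.0)   # interarrival time, Exp(beta=1)
            s += U - T
            path_claims.append(U)
        if s > u:                          # ruin: keep conditional data
            claims.extend(path_claims)
            overshoots.append(s - u)
    return claims, overshoots
```

The unconditional claim mean is 1/2; conditionally on ruin the empirical claim mean moves to roughly 1, the mean of B_L = Exp(1), while the overshoot has mean close to 1/δ = 1/2.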
Chapter V Renewal arrivals

1 Introduction

The basic assumption of this chapter states that the arrival epochs σ_1, σ_2, ... of the risk process form a renewal process: letting T_1 = σ_1, T_n = σ_n - σ_{n-1}, the T_n are independent, with the same distribution A (say) for T_2, T_3, ..., whereas T_1 may have a different distribution A_1. The claim sizes U_1, U_2, ... are i.i.d. with common distribution B. Thus the premium rate is 1, {S_t} is the claim surplus process given by I.(1.1), with N_t = #{n: σ_n ≤ t} the number of arrivals before t; τ(u) is the time to ruin, and M is the maximum of {S_t}.

In the so-called zero-delayed case, the distribution A_1 of T_1 is A as well. A different important possibility is A_1 to be the stationary delay distribution A^0 with density Ā(x)/μ_A. Then the arrival process is stationary, which could be a reasonable assumption in many cases (for these and further basic facts from renewal theory, see A.1). The ruin probability corresponding to the zero-delayed case is denoted by ψ(u), the one corresponding to the stationary case by ψ^{(0)}(u), and the one corresponding to T_1 = s by ψ_s(u).

Proposition 1.1 Define p = μ_B/μ_A. Then no matter the distribution A_1 of T_1,

lim_{t→∞} S_t/t = lim_{t→∞} ES_t/t = p - 1,  (1.1)

lim_{t→∞} Var(S_t)/t = (μ_B² σ_A² + μ_A² σ_B²)/μ_A³.  (1.2)
Furthermore, for any a > 0,

lim_{t→∞} E[S_{t+a} - S_t] = a(p - 1).  (1.3)

Proof Obviously, ES_t = E Σ_{i=1}^{N_t} U_i - t = EN_t μ_B - t, and by the elementary renewal theorem (cf. A.1), EN_t/t → 1/μ_A, from which (1.1) follows. For (1.2), we get similarly, by using known facts about EN_t and Var N_t, that

Var(S_t) = Var(μ_B N_t) + E(σ_B² N_t) = μ_B² (σ_A²/μ_A³) t + σ_B² t/μ_A + o(t),

and (1.3) follows similarly by Blackwell's renewal theorem, stating that E[N_{t+a} - N_t] → a/μ_A.

Of course, Proposition 1.1 gives the desired interpretation of the constant p as the expected claims per unit time; thus, the definition η = 1/p - 1 of the safety loading appears reasonable here as well.

The simplest case is of course the Poisson case where A and A_1 are both exponential with rate β. This has a direct physical interpretation (a large portfolio with claims arising with small rates and independently). However, in general the mechanism generating a renewal arrival process appears much harder to understand. Here are two special cases of the renewal model with a similar direct interpretation:

Example 1.2 (DETERMINISTIC ARRIVALS) If A is degenerate, say at a, one could imagine that the claims are recorded only at discrete epochs (say each week or month) and thus each U_i is really the accumulated claims over a period of length a.

Example 1.3 (SWITCHED POISSON ARRIVALS) Assume that the process has a random environment with two states ON, OFF, such that no arrivals occur in the OFF state, but the arrival rate in the ON state is β > 0. If the environment is Markovian with transition rate λ from ON to OFF and μ from OFF to ON, the
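Proposition 1.1 can be checked by simulation. A sketch with Erlang(2) interarrival times built from two Exp(2) stages (so μ_A = 1) and Exp(2) claims (so p = μ_B/μ_A = 1/2); all parameter choices are assumptions for illustration:

```python
import random

def claim_surplus_at(T, seed=0):
    """Simulate S_T = (sum of claims arriving before T) - T for the
    renewal model with Erlang(2) interarrivals (two Exp(2) stages,
    mean 1) and Exp(2) claims (mean 1/2), premium rate 1."""
    random.seed(seed)
    t, s = 0.0, 0.0
    while True:
        t += random.expovariate(2.0) + random.expovariate(2.0)  # Erlang(2)
        if t > T:
            break
        s += random.expovariate(2.0)   # claim U ~ Exp(2)
    return s - T
```

With T = 20000 one finds S_T/T close to p - 1 = -1/2, in accordance with (1.1); the fluctuations around the limit are of order √(Var(S_T))/T, i.e. below 0.01 here by (1.2).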
initial vector (1 0) and phase generator 11 However. u For later use.d. in general the mechanism generating a renewal arrival process appears much harder to understand. INTRODUCTION 133 interarrival times become i. However. The following representation of the ruin probability will be a basic vehicle for studying the ruin probabilities: Proposition 1.4) w.. S o<t<oo n=0. For the stationary case.4) fo Indeed.4) with phase space {oN. (1. Therefore. . and then the whole process repeats itself).1.1.s < u). we feel it reasonable to present at least some basic features of the model. The values of the claim surplus process just after claims has the same distri bution as {Snd^ }• Since the claim surplus process {St} decreases in between max St = max ^d^. if for nothing else then for the mathematical elegance of the subject. as follows easily by noting that the evolution of the risk process after time s is that of a renewal risk model with initial reserve U1 .t.. arrival times. we have From this the result immediately follows. U1 .s. integrate (1. the relevance of the model has been questioned repeatedly.i. and for historical reasons.. and the present author agrees to a large extent to this criticism.2.1.s > u) of ruin at the time s of the first claim whereas the second is P(r(u) < oo.T between a claim U and an interarrival time T.. (an arrival occurs necessarily in the ON state. Proof The essence of the argument is that ruin can only occur at claim times. A is phasetype (Example 1.. Ao..r. the fundamental connections to the theory of queues and random walks. More precisely. the first term represents the probability F(U1 .oFF}.y)B(dy).} with {S(d)} a discrete time random walk with increments distributed as the independent difference U . 
For later use, we note that the ruin probabilities for the delayed case T_1 = s can be expressed in terms of the ones for the zero-delayed case as

ψ_s(u) = B̄(u + s) + ∫_0^{u+s} ψ(u + s - y) B(dy).  (1.4)

Indeed, the first term represents the probability P(U_1 - s > u) of ruin at the time s of the first claim, whereas the second is P(τ(u) < ∞, U_1 - s ≤ u), as follows easily by noting that the evolution of the risk process after time s is that of a renewal risk model with initial reserve u - (U_1 - s). For the stationary case, integrate (1.4) w.r.t. A^0(ds).

The following representation of the ruin probability will be a basic vehicle for studying the ruin probabilities:

Proposition 1.4 The ruin probabilities for the zero-delayed case can be represented as ψ(u) = P(M^{(d)} > u), where M^{(d)} = max{S_n^{(d)}: n = 0, 1, ...} with {S_n^{(d)}} a discrete time random walk with increments distributed as the independent difference U - T between a claim U and an interarrival time T.

Proof The essence of the argument is that ruin can only occur at claim times. The values of the claim surplus process just after claims have the same distribution as {S_n^{(d)}}, and since the claim surplus process {S_t} decreases in between arrival times, we have sup_{0≤t<∞} S_t = max_{n=0,1,...} S_n^{(d)}. From this the result immediately follows.
2 Exponential claims. The compound Poisson model with negative claims

We first consider a variant of the compound Poisson model obtained essentially by sign-reversion: the claims and the premium rate are negative, so that the risk reserve process, resp. the claim surplus process, are given by

R_t^* = u + Σ_{i=1}^{N_t} U_i - t,  S_t^* = t - Σ_{i=1}^{N_t} U_i,

where {N_t} is a Poisson process with rate β^* (say) and the U_i are independent of {N_t} and i.i.d. with common distribution B^* (say) concentrated on (0, ∞). A typical sample path of {R_t^*} is illustrated in Fig. 2.1.

One situation where this model is argued to be relevant is life annuities. The initial reserve is obtained by prepayments from the policy holders, each of which receives a payment at constant rate during the lifetime. At the time of death, the remaining part of the prepayment (if any) is made available to the company.

Using Lundberg conjugation, we shall be able to compute the ruin probabilities ψ^*(u) = P(τ^*(u) < ∞), where τ^*(u) = inf{t > 0: R_t^* < 0}, for this model very quickly. A simple sample path comparison will then provide us with the ruin probabilities for the renewal model with exponential claim size distribution.

Theorem 2.1 If β^* μ_{B^*} > 1, then ψ^*(u) = e^{-γu}, where γ > 0 is the unique solution of

0 = κ^*(γ) = β^*(B̂^*[-γ] - 1) + γ.  (2.1)

If β^* μ_{B^*} ≤ 1, then ψ^*(u) = 1 for all u > 0.
Rt.2 Assume now . and the Lundberg conjugate of {St} is { St } and vice versa. and thus 1 = P(T.g. 0 Now return to the renewal model.. Define T_ (u) = inf It > 0 : St = u} . . Then the c.2. the safety loading of { St} is > 0. Hence T_(u) < oo a. Then { St } is the claim surplus process of a standard compound Poisson risk process with parameters 0 *. Fig. B*. B* [7] and let {St} be a compound Poisson risk process with parameters . (a) is*(a) (b) .f. St=Rtu=St.(a) 7 Figure 2. Then the function k* is defined on the whole of (oo. of {St} is c(a) = is*(a7). Proof Define 135 St =u . T_ (u) = inf { t > 0 : St = u 'r* (u).s.3*. Since ic'(0) < 0. > 1 . Hence y exists and is unique. then by Proposition 111. Let B(dx) = ^e7x B*(dx).(u ) < oo) = E {e7sr_ (u). cf.0.* (a) = log Ee'st I. 2. 2.2 sup St = inf St = 00 t>o t>o and hence 0* (u) = 1 follows.2(b).UB.2(a). If I3*pB* < 1. EXPONENTIAL OR NEGATIVE CLAIMS [Note that r.1. 0) and has typically the shape on Fig. B. T_ ( u) < 001 e7"P(T_ (u) < oo) = e"V)* (u).
.7r+ 7r EeTo b/(Sa) + +.4.• • • .1) + ry = 0 which is easily seen to be the same as (2.)(u) _ 1r+e7" where ry > 0 is the unique solution of 1 = Ee'Y(uT ) = S 8 A[. with rate S (say).. and hence the failure rate .a) = 1 .1 means that M* is exponentially distributed with rate ry. the distribution of M(d) is a mixture of an atom at zero and an exponential distribution with rate parameter ry with weights 1 . A variant of the last part of the proof.1. Hence M* max {To + Ti + • • • + Tn .136 CHAPTER V. and (2 .+Tn U1 Un.. T2 = U2. with the probability that a particular jump time is not followed by any later maximum values being 1 .Ui .1.Tn} n=0. Now the value of {St*} just before the nth claim is To +T1* +... alternatively termination occurs at a jump time (having rate 8).. 2. respectively. 1) means that 8(A[ry] .4 goes as follows: define 7r+ = P(M(d) > 0) and consider {St*} only when the process is at a maximum value. To + max {Ul+•••+UnTI. To + M(d) in the notation of Proposition 1..'s and noting that V)*(u) = P(M* > u) so that Theorem 2.•.2).2) 7 and7r+=1Proof We can couple the renewal model { St} and the compound Poisson model {St*} with negative claims in such a way the interarrival times of { St*} are To .1 it is seen that ruin is equivalent to one of these values being > u.Un } = max St = t>0 n=0. then . RENEWAL ARRIVALS Theorem 2 .Tr+.Y a I.u+ and lr+.2 If B is exponential. which has the advantage of avoiding transforms and leading up to the basic ideas of the study of the phasetype case in VIII.e.. the failure rate of this process is y. u Hence P(M(d) > u) _ 1r+e'r"...1... and from Fig .f. and 5PA > 1.. Taking m.Ti = U1.Y] (2.. we get Ee'M(d) = Ee°M* _ Y/(. However. Then B* = A. 3* = 6.g. According to Theorem 2.
we see that ry = 6(1.B(dx).3. B^d) where Aad> (dt) = ^[ a] A(dt). It only remains to note that this change of measure can be achieved by changing the interarrival distribution A and the service time distribution B to Aad^..7r+). the relevant exponential change of measure corresponds to changing the distribution F(d) of Y = U . CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES 137 is b(1.3 A[a] B[a] F( d) [a +)3] F(d) [a] = Fad) [^] Letting M(u) = inf in = 1. the imbedded discrete time random walk and Markov additive processes.6. Thus a ladder step terminates at rate b and is followed by one more with probability 7r+.7r+) and hence r+ = 1.5.y/b. Putting this equal to y.2. we have ] A[a )3] E«d'efl' = Bad> [a] A ad> [Q] = B[a +. Hence the failure rate of M(') is 6(1 .. hence exponential with rate b. The probability that the first ladder step is finite is 7r+. 3a The imbedded random walk The key steps have already been carried out in Corollary 11. 0 3 Change of measure via exponential families We shall discuss two points of view. However.T to F(d)(x) = eK^d^(«) ^x e"vFidi(dy) 00 K(d) (a) = log F(d) [a] = log B[a] + log A[a] . resp.4. letting P(d) refer to the renewal risk model with these changed parameters . 111.. Furthermore. Bads (dx) = ..7r+) = ry and hence P(M(d) > u) = P(M(d) > 0)e7u = 7r+e'r". : S(d) > u} .2. consider instead the failure rate of M(d) and decompose M(d) into ladder steps as in II. This follow since. which states that for a given a. a ladder step is the overshoot of a claim size.
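Theorem 2.2 makes the renewal ruin probability with exponential claims fully computable once (2.2) is solved for γ. A sketch for Exp(2) claims and Erlang(2) interarrivals with rate 2 per stage, where (2.2) reads (2/(2-γ))·(2/(2+γ))² = 1 and has the closed-form root γ = √5 - 1 to check against (the model choice is an assumption for illustration):

```python
import math

def renewal_exponential_ruin(u, delta=2.0, lam=2.0):
    """Renewal model with Exp(delta) claims and Erlang(2, lam)
    interarrivals.  Solve (2.2), 1 = (delta/(delta-gamma)) * A[-gamma]
    with A[-g] = (lam/(lam+g))**2, by bisection on (0, delta); then
    psi(u) = pi_plus * exp(-gamma*u) with pi_plus = 1 - gamma/delta."""
    f = lambda g: (delta / (delta - g)) * (lam / (lam + g)) ** 2 - 1.0
    lo, hi = 1e-9, delta - 1e-9     # f < 0 near 0 (since p < 1), f > 0 near delta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    gamma = 0.5 * (lo + hi)
    pi_plus = 1.0 - gamma / delta
    return pi_plus * math.exp(-gamma * u), gamma
```

Here μ_B = 1/2, μ_A = 1, so p = 1/2 < 1, and the bisection root agrees with γ = √5 - 1 ≈ 1.2361, giving ψ(0) = π_+ = (3 - √5)/2 ≈ 0.382.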
Let ξ(u) = S^(d)_{M(u)} − u denote the overshoot. By exponential change of measure, we get:

Proposition 3.1 For any α such that κ^(d)'(α) > 0,

ψ(u) = e^{−αu} E_α^(d) e^{−αξ(u)+M(u)κ^(d)(α)}.

Consider now the Lundberg case, i.e. let γ > 0 be the solution of κ^(d)(γ) = 0. We have the following versions of Lundberg's inequality and the Cramér–Lundberg approximation:

Theorem 3.2 In the zero-delayed case, provided the distribution F of U − T is non-lattice:
(a) ψ(u) ≤ e^{−γu};
(b) ψ(u) ∼ Ce^{−γu}, where C = lim_{u→∞} E_γ^(d) e^{−γξ(u)}.

Proof Proposition 3.1 with α = γ implies ψ(u) = e^{−γu}E_γ^(d)e^{−γξ(u)}, and claim (a) follows immediately from this and ξ(u) ≥ 0. For claim (b), just note that F_γ^(d) is non-lattice when F is so; this is known to be sufficient for ξ(u) to converge in distribution w.r.t. P_γ^(d) ([APQ] Proposition 3.2, p. 187). Here ξ(u) is well defined for all u, since P_γ^(d)(τ(u) < ∞) = 1 because of κ^(d)'(γ) > 0. □

It should be noted that the computation of the Cramér–Lundberg constant C is much more complicated for the renewal case than for the compound Poisson case, where C = (1 − ρ)/(βB̂'[γ] − 1). In fact, the evaluation of C is at the same level of difficulty as the evaluation of ψ(u) in matrix-exponential form; even in the easiest non-exponential case, where B is phase-type, it is explicit only given γ, cf. VIII.

Corollary 3.3 For the delayed case T₁ = s, ψ_s(u) ∼ C_s e^{−γu}, where C_s = Ce^{−γs}B̂[γ]. For the stationary case, ψ^(0)(u) ∼ C^(0)e^{−γu}, where C^(0) = CÂ₀[−γ]B̂[γ] = C(B̂[γ] − 1)/(γµ_A).
Proof Using (1.1), we get

e^{γu}ψ_s(u) = e^{γu}B̄(u+s) + ∫₀^{u+s} e^{−γ(y−s)} · e^{γ(u+s−y)}ψ(u+s−y) B(dy).

Letting u → ∞ and using B̄(x) = o(e^{−γx}), ψ(u) ∼ Ce^{−γu} and dominated convergence, the right-hand side converges to

0 + ∫₀^∞ e^{−γ(y−s)} C B(dy) = Ce^{−γs}B̂[γ].

For the stationary case, another use of dominated convergence combined with Â₀[s] = (Â[s] − 1)/(sµ_A) yields

e^{γu}ψ^(0)(u) = ∫₀^∞ e^{γu}ψ_s(u) Ao(ds) → ∫₀^∞ Ce^{−γs}B̂[γ] Ao(ds) = CB̂[γ]Â₀[−γ] = C(B̂[γ] − 1)/(γµ_A),

using B̂[γ]Â[−γ] = 1 in the last step. □

Of course, a delayed version of Lundberg's inequality can be obtained in a similar manner. The expressions are slightly more complicated and we omit the details.

3b Markov additive processes

We take the Markov additive point of view of II.5. The underlying Markov process {J_t} for the Markov additive process {X_t} = {(J_t, S_t)} can be defined by taking J_t as the residual time until the next arrival. Let P_s, E_s refer to the case J₀ = s. According to Remark II.5.9, we look for a function h(s) and a κ (both depending on α) such that Gh_α(s,0) = κ(α)h(s), where G is the infinitesimal generator of {X_t} = {(J_t, S_t)} and h_α(s,y) = e^{αy}h(s). For s > 0,

E_s h_α(J_dt, S_dt) = h(s − dt)e^{−α dt} = h(s) − dt(αh(s) + h'(s)),

so that Gh_α(s,0) = −αh(s) − h'(s). Equating this to κh(s) and dividing by h(s) yields h'(s)/h(s) = −α − κ, i.e. (normalizing by h(0) = 1)

h(s) = e^{−(α+κ(α))s}.  (3.1)

To determine κ, we invoke the behavior at the boundary 0: 1 = h_α(0,0) = E₀[h_α(J_dt, S_dt)] means

1 = ∫₀^∞ e^{αy}B(dy) ∫₀^∞ h(s)A(ds),

which is easily seen to be the same as

B̂[α]Â[−α − κ(α)] = 1,  (3.2)

where κ(α) is the solution of (3.2). As in II.5, we can now for each α define a new probability measure P_{α,s} governing {(J_t, S_t)}_{t≥0} by letting the likelihood ratio L_t restricted to F_t = σ((J_v, S_v) : 0 ≤ v ≤ t) be

L_t = e^{αS_t − tκ(α)} h(J_t)/h(s) = e^{αS_t − tκ(α)} e^{−(α+κ(α))(J_t − s)}.

Proposition 3.4 The probability measure P_{α,s} is the probability measure governing a renewal risk process with J₀ = s and the interarrival distribution A and the service time distribution B changed to A_α, B_α, where

A_α(dt) = (e^{−(α+κ(α))t}/Â[−α−κ(α)]) A(dt) = B̂[α]e^{−(α+κ(α))t}A(dt),  B_α(dx) = (e^{αx}/B̂[α]) B(dx).

Proof P_{α,s}(J₀ = s) = 1 follows trivially from L₀ = 1. Since J_{T₁} = T₂ P_s-a.s., we have

E_{α,s}[e^{aU₁+bT₂}] = E_s[L_{T₁}e^{aU₁+bT₂}] = E_s[e^{aU₁+bT₂} e^{α(U₁−s)−sκ(α)} e^{−(α+κ(α))(T₂−s)}]
= B̂[α+a] Â[b−α−κ(α)] = (B̂[α+a]/B̂[α]) (Â[b−α−κ(α)]/Â[−α−κ(α)]) = B̂_α[a]Â_α[b],

which shows that U₁, T₂ are independent with distributions B_α, A_α, respectively. An easy extension of the argument shows that U₁, T₂, U₂, T₃, ... are independent, with distribution A_α for the T_k and B_α for the U_k. □

Note that the changed distributions of A and B are in general not the same for P_{α,s} and for P_α^(d). An important exception is, however, the determination of the adjustment coefficient γ, where the defining equations κ^(d)(γ) = 0 and κ(γ) = 0 are the same.

Remark 3.5 For the compound Poisson case where A is exponential with rate β, (3.2) means 1 = B̂[α]β/(β + α + κ(α)), i.e. κ(α) = β(B̂[α] − 1) − α, in agreement with Chapter III. □
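Equation (3.2) determines κ(α) implicitly, and since Â[−α−k] is decreasing in k, a bisection suffices. The sketch below uses exponential A and B with made-up rates, precisely so that the closed form of Remark 3.5 is available as a check:

```python
# Solving (3.2), B_hat[alpha] * A_hat[-alpha - kappa(alpha)] = 1, numerically for kappa(alpha).
# Illustrative sketch with A ~ Exp(beta), B ~ Exp(delta); this choice lets the result be
# checked against the closed form of Remark 3.5, kappa(alpha) = beta*(B_hat[alpha]-1) - alpha.

beta, delta = 1.0, 2.0

def B_mgf(a):                      # B_hat[a], valid for a < delta
    return delta / (delta - a)

def A_mgf(s):                      # A_hat[s], valid for s < beta
    return beta / (beta - s)

def kappa(alpha):
    # f(k) = B_hat[alpha] * A_hat[-alpha - k] - 1 is decreasing in k; bisect on k.
    f = lambda k: B_mgf(alpha) * A_mgf(-alpha - k) - 1.0
    lo, hi = -beta - alpha + 1e-9, 50.0    # keep the A_mgf argument inside its domain
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The same bisection applies verbatim to any A, B with known m.g.f.'s; only `A_mgf` and `B_mgf` change.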
The Markov additive point of view is relevant when studying problems which cannot be reduced to the imbedded random walk, say finite horizon ruin probabilities, where the approach via the imbedded random walk yields results on the probability of ruin after N claims, not after time T. Using the Markov additive approach yields for example the following analogue of Theorem IV.4.6:

Theorem 3.6 Let y < 1/κ'(γ), let α_y > 0 be the solution of κ'(α_y) = 1/y, and define γ_y = α_y − yκ(α_y). Then for the delayed case T₁ = s,

ψ_s(u, yu) ≤ e^{−(α_y+κ(α_y))s} B̂[α_y] e^{−γ_y u},

and in particular, for the zero-delayed case, ψ(u, yu) ≤ e^{−γ_y u}.

Proof For y < 1/κ'(γ), it is easily seen that κ(α_y) > 0. As in the proof of Theorem IV.4.6,

ψ_s(u, yu) = E_{α_y,s}[e^{−α_y S_{τ(u)} + τ(u)κ(α_y)} h(s)/h(J_{τ(u)}); τ(u) ≤ yu].

Let M(u) be the number of claims leading to ruin. Then J_{τ(u)} = T_{M(u)+1}, and hence

ψ_s(u, yu) ≤ e^{−(α_y+κ(α_y))s} e^{−α_y u + yuκ(α_y)} E_{α_y,s} e^{(α_y+κ(α_y))T_{M(u)+1}}
= e^{−(α_y+κ(α_y))s} Â_{α_y}[α_y + κ(α_y)] e^{−γ_y u} = e^{−(α_y+κ(α_y))s} B̂[α_y] e^{−γ_y u}.

The claim for the zero-delayed case follows by integration w.r.t. A(ds), since by (3.2)

B̂[α_y] ∫₀^∞ e^{−(α_y+κ(α_y))s}A(ds) = B̂[α_y]Â[−α_y − κ(α_y)] = 1. □

Notes and references The approach via the imbedded random walk is standard, see e.g. [APQ] Ch. XII. For the approach via Markov additive processes, see in particular Dassios & Embrechts [98] and Asmussen & Rubinstein [45].

4 The duality with queueing theory

We first review some basic facts about the GI/G/1 queue, defined as the single server queue with first in first out (FIFO, or FCFS = first come first served) queueing discipline and renewal interarrival times. Label the customers 1, 2, ..., and assume that T_n is the time between the arrivals of customers n − 1 and n, and U_n the service time of customer n. The actual waiting time W_n of customer n is defined as his time spent in queue (excluding the service time), that is, the time from he arrives to the queue till he starts service. The virtual waiting time V_t at time t is the residual amount of work at time t.
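Returning to Theorem 3.6 above, the quantities α_y and γ_y are obtained by elementary root-finding once κ is known. The sketch below works them out for the compound Poisson special case, where Remark 3.5 gives κ in closed form; all numbers are illustrative:

```python
from math import sqrt, exp

# The quantities of Theorem 3.6 for the compound Poisson special case A ~ Exp(beta),
# B ~ Exp(delta), where Remark 3.5 gives
#     kappa(alpha) = beta*(B_hat[alpha] - 1) - alpha = beta*alpha/(delta - alpha) - alpha
# in closed form.  Parameters are illustrative only.

beta, delta = 1.0, 3.0
gamma = delta - beta                          # adjustment coefficient: kappa(gamma) = 0

def kappa(a):
    return beta * a / (delta - a) - a

def kappa_prime(a):
    return beta * delta / (delta - a) ** 2 - 1.0

y = 0.25                                      # horizon slope; y < 1/kappa'(gamma) here
alpha_y = delta - sqrt(beta * delta * y / (y + 1.0))   # solves kappa'(alpha_y) = 1/y
gamma_y = alpha_y - y * kappa(alpha_y)

def finite_horizon_bound(u):                  # psi(u, y*u) <= exp(-gamma_y * u)
    return exp(-gamma_y * u)
```

Note that γ_y > γ, so for ruin before time yu the bound improves on the infinite-horizon Lundberg inequality.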
That is, V_t is the amount of time the server will have to work until the system is empty provided no new customers arrive (for this reason often the term workload process is used) or, equivalently, the waiting time a customer would have if he arrived at time t. The traffic intensity of the queue is ρ = EU/ET.

The following result shows that {W_n} is a Lindley process in the sense of II.4:

Proposition 4.1 W_{n+1} = (W_n + U_n − T_n)⁺.

Proof Since customer n arrives at time σ_n, the amount of residual work just before customer n arrives is V_{σ_n−} (left limit), and we have W_n = V_{σ_n−}. The workload then jumps to V_{σ_n} = W_n + U_n, whereas in [σ_n, σ_{n+1}) = [σ_n, σ_n + T_n) the residual work decreases linearly until possibly zero is hit, in which case {V_t} remains at zero until time σ_{n+1}. Thus V_{σ_{n+1}−} = (W_n + U_n − T_n)⁺, and the proposition follows. □

Applying the general results on Lindley processes from Chapter II, we get:

Proposition 4.2 Let M_n^(d) = max_{k=0,...,n−1}(U₁ + ⋯ + U_k − T₁ − ⋯ − T_k). If W₁ = 0, then W_n =_D M_n^(d).  (4.1)

The next result summarizes the fundamental duality relations between the steady-state behaviour of the queue and the ruin probabilities (part (a) was essentially derived already in Chapter II, but we shall present a slightly different proof via the duality result given there):

Corollary 4.3 Assume η > 0 or, equivalently, ρ < 1. Then:
(a) as n → ∞, W_n converges in distribution to a random variable W, and we have P(W > u) = ψ(u);  (4.2)
(b) as t → ∞, V_t converges in distribution to a random variable V, and we have P(V > u) = ψ^(0)(u).  (4.3)

Proof For part (a), let the T of the duality result be the random time σ_N. Then P(τ(u) ≤ T) is the probability ψ^(N)(u) of ruin after at most N claims, and obviously ψ(u) = lim_{N→∞} ψ^(N)(u).
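Proposition 4.1 translates directly into code, and by part (a) of the duality corollary above the long-run frequency of {W_n > u} estimates the zero-delayed ruin probability ψ(u). A minimal simulation sketch (M/M/1-type input with made-up rates; the estimate is crude and for illustration only):

```python
import random

# The Lindley recursion W_{n+1} = (W_n + U_n - T_n)^+ of Proposition 4.1, simulated.
# Long-run frequencies of {W_n > u} estimate P(W > u) = psi(u).

def lindley_path(U, T):
    """Waiting times W_1, W_2, ... starting from W_1 = 0."""
    W = [0.0]
    for u, t in zip(U, T):
        W.append(max(W[-1] + u - t, 0.0))
    return W

rng = random.Random(1)
n = 100_000
U = [rng.expovariate(3.0) for _ in range(n)]   # service times,  mean 1/3
T = [rng.expovariate(2.0) for _ in range(n)]   # interarrivals,  mean 1/2  (rho = 2/3)
W = lindley_path(U, T)
# discard the first half as burn-in; crude estimate of P(W > 1)
tail_estimate = sum(w > 1.0 for w in W[n // 2:]) / (n - n // 2)
```

For this M/M/1 choice the exact value P(W > 1) = (2/3)e^{−1} is available from Chapter III, so the simulation can be sanity-checked.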
Also, {Z_t}_{0≤t≤T} evolves like the left-continuous version of the virtual waiting time process up to just before the Nth arrival, but interchanging (T₁, ..., T_N) with (T_N, ..., T₁), and similarly for the U's. However, by an obvious reversibility argument this does not affect the distribution, and hence in particular Z_T is distributed as the virtual waiting time just before the Nth arrival. It follows that P(W_N > u) = ψ^(N)(u), which implies the convergence in distribution and (4.2).

For part (b), we let T be deterministic. Then the arrivals of {R_t} in [0, T] form a stationary renewal process with interarrival distribution A, and hence (since the residual lifetime at 0 and the age at T have the same distribution, cf. A.1e) the same is true for the time-reversed point process, which is the interarrival process for {Z_t}_{0≤t≤T}. Thus as before, {Z_t}_{0≤t≤T} has the same distribution as the left-continuous version of the virtual waiting time process, so that

P^(s)(V_T > u) = P^(s)(τ(u) ≤ T),  lim_{T→∞} P^(s)(V_T > u) = lim_{T→∞} P^(s)(τ(u) ≤ T) = ψ^(0)(u).  (4.4)

□

It should be noted that this argument only establishes the convergence in distribution subject to certain initial conditions, namely W₁ = 0 in (a) and V₀ = 0, A₀ in (b). However, convergence in distribution holds for arbitrary initial conditions, but this requires some additional arguments (involving regeneration at 0 but not difficult) that we omit.

Letting n → ∞ in Proposition 4.2, we obtain:

Corollary 4.4 The steady-state actual waiting time W has the same distribution as M^(d).

Letting n → ∞ in Proposition 4.1, we get W =_D (W + U* − T*)⁺, where U*, T* are independent and distributed as U₁, T₁, resp. Hence for x ≥ 0, letting K(x) = P(W ≤ x) and conditioning upon U* − T* = y yields

K(x) = P((W + U* − T*)⁺ ≤ x) = P(W + U* − T* ≤ x) = ∫_{−∞}^x K(x − y)F(dy)

(x ≥ 0 is crucial for the second equality!), and we get:

Corollary 4.5 (LINDLEY'S INTEGRAL EQUATION) Let F(x) = P(U₁ − T₁ ≤ x), K(x) = P(W ≤ x). Then

K(x) = ∫_{−∞}^x K(x − y)F(dy),  x ≥ 0.  (4.5)
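Corollary 4.5 admits a simple discrete sanity check: for U, T with small finite supports (made-up numbers below, with EU < ET so that the queue is stable), the stationary law of the Lindley chain can be computed by iteration and then verified against (4.5) on the lattice:

```python
from collections import defaultdict

# Discrete check of Lindley's integral equation (4.5): iterate the Lindley chain
# W' = (W + U - T)^+ to (near) stationarity, then verify
#     K(x) = sum_y K(x - y) * P(U - T = y)   for x >= 0.

F = defaultdict(float)                      # distribution of Y = U - T
for u, pu in [(1, 0.5), (3, 0.5)]:          # service times
    for t, pt in [(2, 0.6), (3, 0.4)]:      # interarrival times
        F[u - t] += pu * pt

probs = {0: 1.0}                            # law of W_n, iterated towards stationarity
for _ in range(3000):
    nxt = defaultdict(float)
    for w, pw in probs.items():
        for y, py in F.items():
            nxt[max(w + y, 0)] += pw * py
    probs = {w: p for w, p in nxt.items() if p > 1e-16}

def K(x):                                   # K(x) = P(W <= x)
    return sum(p for w, p in probs.items() if w <= x)

def lindley_rhs(x):                         # right-hand side of (4.5)
    return sum(K(x - y) * py for y, py in F.items())
```

The restriction to x ≥ 0 matters: for x < 0 one has K(x) = 0 while the convolution on the right-hand side need not vanish, which is exactly why (4.5) is not a plain convolution equation.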
Now return to the Poisson case. Then the corresponding queue is M/G/1, and we get:

Corollary 4.6 For the M/G/1 queue with ρ < 1, the actual and the virtual waiting time have the same distribution in the steady state. That is, W =_D V.

Proof For the Poisson case, the zero-delayed and the stationary renewal processes are identical. Hence ψ(u) = ψ^(0)(u), implying P(W > u) = P(V > u) for all u. □

Notes and references The GI/G/1 queue is a favourite of almost any queueing book (see e.g. Cohen [88] or [APQ] Ch. VIII), despite the fact that the extension from M/G/1 is of equally doubtful relevance as we argued in Section 1 to be the case in risk theory. Some early classical papers are Smith [350] and Lindley [246]; see also Asmussen [24] and references there. Note that (4.5) looks like the convolution equation K = F * K but is not the same (one would need (4.5) to hold for all x ∈ R and not just x ≥ 0). The equation (4.5) is in fact a homogeneous Wiener–Hopf equation.
Chapter VI

Risk theory in a Markovian environment

1 Model and examples

We assume that arrivals are not homogeneous in time but determined by a Markov process {J_t}_{0≤t<∞} with a finite state space E as follows:

• The arrival intensity is β_i when J_t = i.
• Claims arriving when J_t = i have distribution B_i.
• The premium rate when J_t = i is p_i.

Thus, {J_t} describes the environmental conditions for the risk process. The intensity matrix governing {J_t} is denoted by Λ = (λ_ij)_{i,j∈E} and its stationary limiting distribution by π; here π exists whenever Λ is irreducible, which is assumed throughout, and can be computed as the positive solution of πΛ = 0, πe = 1. As usual, {S_t} denotes the claim surplus process,

S_t = Σ_{i=1}^{N_t} U_i − ∫₀^t p_{J_v} dv,

M = sup_{t≥0} S_t, and τ(u) = inf{t ≥ 0 : S_t > u}. The ruin probabilities with initial environment i are

ψ_i(u) = P_i(τ(u) < ∞) = P_i(M > u),  ψ_i(u, T) = P_i(τ(u) ≤ T).
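The model definition above translates directly into a simulation: alternate between environment jumps (rate −λ_ii), claim arrivals (rate β_i), and linear premium inflow. A minimal sketch with unit premium rates and made-up parameters:

```python
import random

# Direct simulation of the claim surplus S_t in a Markovian environment with
# unit premium rates (p_i = 1).  All parameters below are illustrative only.

def simulate_surplus(Lam, betas, samplers, T, rng, i0=0):
    """Return S_T for one path of {J_t} with intensity matrix Lam, arrival
    rates betas, and one claim-size sampler per environmental state."""
    t, i, S = 0.0, i0, 0.0
    while t < T:
        jump_rate, claim_rate = -Lam[i][i], betas[i]
        total = jump_rate + claim_rate
        dt = rng.expovariate(total) if total > 0.0 else T - t
        dt = min(dt, T - t)
        S -= dt                              # premiums flow in at rate 1
        t += dt
        if t >= T:
            break
        if rng.random() * total < jump_rate:          # environment changes state
            others = [j for j in range(len(betas)) if j != i]
            i = rng.choices(others, weights=[Lam[i][j] for j in others])[0]
        else:                                         # a claim arrives in state i
            S += samplers[i](rng)
    return S

Lam = [[-0.1, 0.1], [0.2, -0.2]]
betas = [0.9, 0.3]
samplers = [lambda r: r.expovariate(1.0), lambda r: r.expovariate(2.0)]
S_end = simulate_surplus(Lam, betas, samplers, T=100.0, rng=random.Random(7))
```

With all β_i = 0 the path is deterministic, S_t = −t, which gives a simple structural check on the event loop.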
where as usual P_i refers to the case J₀ = i. We let

ρ_i = β_iµ_{B_i},  ρ = Σ_{i∈E} π_iρ_i,  η = 1/ρ − 1.  (1.1)

Then ρ_i is the average amount of claims received per unit time when the environment is in state i, and ρ is the overall average amount of claims per unit time, cf. Proposition 1.11 below. Unless otherwise stated, we shall assume that p_i = 1; by the operational time argument given in Example 1.5 below, this is no restriction when studying infinite horizon ruin probabilities.

Example 1.1 Consider car insurance, and assume that weather conditions play a major role for the occurence of accidents. For example, we could distinguish between normal and icy road conditions, leading to E having two states n, i and corresponding arrival intensities β_n, β_i and claim size distributions B_n, B_i, with the environment alternating between the two states with rates λ_ni and λ_in. One expects that β_i > β_n, and presumably also that B_n ≠ B_i, meaning that accidents occuring during icy road conditions lead to claim amounts which are different from the normal ones. □

Example 1.2 (ALTERNATING RENEWAL ENVIRONMENT) The model of Example 1.1 implicitly assumes that the sojourn times of the environment in the normal and the icy states are exponential, which is clearly unrealistic. Assume instead that the sojourn time in the icy state has a more general distribution A^(i). According to Theorem A5.14, we can approximate A^(i) with a phase-type distribution (cf. Chapter VIII) with representation (E^(i), α^(i), T^(i)), say. Assume similarly that the sojourn time in the normal state has distribution A^(n), which we approximate with a phase-type distribution with representation (E^(n), α^(n), T^(n)). Then the state space for the environment is the disjoint union of E^(n) and E^(i), and, in block-partitioned form, the intensity matrix is

Λ = ( T^(n)  t^(n)α^(i)
      t^(i)α^(n)  T^(i) ),

where t^(n) = −T^(n)e, t^(i) = −T^(i)e are the exit rates. We have β_j = β_n when j ∈ E^(n), β_j = β_i when j ∈ E^(i), and similarly for the claim size distributions. □

Example 1.3 Consider again the alternating renewal model for car insurance in Example 1.2, but assume now that the arrival intensity changes during the icy period, say it is larger initially. One way to model this would be to take A^(i) to be of phase-type with states i₁, ..., i_q (visited in that order), and to let the arrival intensity depend on the current phase, β_{i₁} ≥ ⋯ ≥ β_{i_q}. □

Example 1.4 (SEMI-MARKOVIAN ENVIRONMENT) Dependence between the length of an icy period and the following normal one (and vice versa) can be modelled by semi-Markov structure. This amounts to a family (A^(η))_{η∈H} of sojourn time distributions, such that a sojourn time of type η is followed by one of type ζ with probability w_{ηζ}, where W = (w_{ηζ})_{η,ζ∈H} is a transition matrix. In the car insurance example, one could for example have H = {i_l, i_s, n_l, n_s}, such that the icy period is of two types (long and short), each with their sojourn time distribution, and similarly for the normal period. Then, for example, w_{i_l n_s} is the probability that a long icy period is followed by a short normal one. Approximating each A^(η) by a phase-type distribution with representation (E^(η), α^(η), T^(η)), say, the state space E for the environment is {(η, i) : η ∈ H, i ∈ E^(η)}, and, in block-partitioned form (with q = |H| and t^(η) = −T^(η)e),

Λ = ( T^(1)+w₁₁t^(1)α^(1)  w₁₂t^(1)α^(2)  ⋯  w_{1q}t^(1)α^(q)
      w₂₁t^(2)α^(1)  T^(2)+w₂₂t^(2)α^(2)  ⋯  w_{2q}t^(2)α^(q)
      ⋮
      w_{q1}t^(q)α^(1)  w_{q2}t^(q)α^(2)  ⋯  T^(q)+w_{qq}t^(q)α^(q) ). □

Example 1.5 (MARKOV-MODULATED PREMIUMS) Returning for a short while to the case of general premium rates p_i depending on the environment i, let

θ(T) = ∫₀^T p_{J_t} dt,  J̃_t = J_{θ⁻¹(t)},  S̃_t = S_{θ⁻¹(t)}.

Then (by standard operational time arguments) {S̃_t} is a risk process in a Markovian environment with unit premium rate, and the parameters are λ̃_ij = λ_ij/p_i, β̃_i = β_i/p_i, B̃_i = B_i, and ψ̃_i(u) = ψ_i(u). □

From now on, we assume again p_i = 1, so that the claim surplus is

S_t = Σ_{i=1}^{N_t} U_i − t.
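The stationary distribution π and the average claim rate ρ of (1.1) are elementary to compute; for a two-state environment, π is proportional to the opposite off-diagonal rates. A small sketch with made-up numbers, using exact rational arithmetic:

```python
from fractions import Fraction as Fr

# Computing the stationary environment distribution pi (from pi*Lambda = 0, pi*e = 1)
# and the average claim rate rho of (1.1), for a made-up two-state example.

lam12, lam21 = Fr(1, 2), Fr(1, 3)      # Lambda = [[-lam12, lam12], [lam21, -lam21]]
pi = (lam21 / (lam12 + lam21), lam12 / (lam12 + lam21))

betas = (Fr(2), Fr(1))                 # arrival intensities beta_i
means = (Fr(1, 4), Fr(1, 2))           # claim size means mu_{B_i}

rho_i = tuple(b * m for b, m in zip(betas, means))
rho = sum(p * r for p, r in zip(pi, rho_i))
eta = 1 / rho - 1                      # safety loading
```

Here η > 0, so the model is in the profitable regime of Corollary 1.12 below.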
We now turn to some more mathematically oriented basic discussion. The key property for much of the analysis presented below is the following immediate observation:

Proposition 1.6 The claim surplus process {S_t} of a risk process in a Markovian environment is a Markov additive process, corresponding to the parameters µ_i = −p_i, ν_i(dx) = β_iB_i(dx), q_ij = 0 in the notation of Chapter II.

Next we note a semi-Markov structure of the arrival process:

Proposition 1.7 The P_i-distribution of T₁ is phase-type with representation (e_i', Λ − (β_i)_diag). More precisely,

P_i(T₁ ∈ dx, J_{T₁} = j) = β_j · e_i' e^{(Λ−(β_i)_diag)x} e_j dx.

Proof The result immediately follows by noting that T₁ is obtained as the lifelength of {J_t} killed at the time of the first arrival, and that the exit rate obviously is β_j in state j. □

A remark which is fundamental for much of the intuition on the model consists in noting that to each risk process in a Markovian environment, one can associate in a natural way a standard Poisson one by averaging over the environment. More precisely, we put

β* = Σ_{i∈E} π_iβ_i,  B* = (1/β*) Σ_{i∈E} π_iβ_iB_i.

These parameters are the ones which the statistician would estimate if he ignored the presence of Markov-modulation:

Proposition 1.8 As t → ∞,

N_t/t → β* a.s.,  (1/N_t) Σ_{l=1}^{N_t} I(U_l ≤ x) → B*(x) a.s.

Note that the last statement of the proposition just means that in the limit, the empirical distribution of the claims is B*. Note also that (as the proof shows) π_iβ_i/β* gives the proportion of the claims which are of type i (arrive in state i). A different interpretation of B* is as the Palm distribution of the claim size.

Proof Let t_i = ∫₀^t I(J_s = i) ds be the time spent in state i up to time t and N_t^(i) the number of claim arrivals in state i. Then it is standard that t_i/t → π_i as t → ∞. Since, given {J_t}_{0≤t<∞}, we may view N_t^(i) as the number of events in a Poisson process where the accumulated intensity at time t is β_it_i, we have N_t^(i)/t_i → β_i, and hence

N_t/t = Σ_{i∈E} (N_t^(i)/t_i)(t_i/t) → Σ_{i∈E} π_iβ_i = β*.

Also, denoting the sizes of the claims arriving in state i by U_1^(i), U_2^(i), ..., the standard law of large numbers yields

(1/N) Σ_{k=1}^N I(U_k^(i) ≤ x) → B_i(x),  N → ∞.

Hence

(1/N_t) Σ_{l=1}^{N_t} I(U_l ≤ x) = Σ_{i∈E} (N_t^(i)/N_t) · (1/N_t^(i)) Σ_{k=1}^{N_t^(i)} I(U_k^(i) ≤ x) → Σ_{i∈E} (π_iβ_i/β*) B_i(x) = B*(x). □

The next result shows that we can think of the averaged compound Poisson risk model as the limit of the Markov-modulated one obtained by speeding up the Markov-modulation:

Proposition 1.9 Consider a Markov-modulated risk process {S_t^(a)} with parameters β_i, B_i, aΛ, and let {S_t*} refer to the compound Poisson model with parameters β*, B*. Then {S_t^(a)} → {S_t*} in D[0,∞) as a → ∞. In particular, ψ^(a)(u) → ψ*(u) for all u.

Proof According to Proposition 1.7, the P_i-distribution of T₁ in {S_t^(a)} is phase-type with representation (e_i', aΛ − (β_i)_diag). This converges to the exponential distribution with rate β* as a → ∞, and furthermore in the limit J_{T₁} has distribution (π_iβ_i/β*)_{i∈E} and is independent of T₁. In particular, the limiting distribution of the first claim size U₁ is B*.
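Proposition 1.7 above gives P_i(T₁ > t) = e_i' exp{(Λ − (β_i)_diag)t} e, which is easy to evaluate numerically. The sketch below keeps the example dependency-free by hand-rolling a small matrix exponential (scaling and squaring of a truncated Taylor series, adequate for the tiny matrices used here):

```python
from math import exp

# Numerical sketch of the phase-type representation of T_1 (Proposition above):
#     P_i(T_1 > t) = e_i' exp{(Lambda - (beta_i)diag) t} e.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, s=6, terms=18):
    """exp(A) via scaling (by 2**s) and squaring of a truncated Taylor series."""
    n, m = len(A), 2 ** s
    B = [[a / m for a in row] for row in A]
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    P = [row[:] for row in E]
    for k in range(1, terms):
        P = [[p / k for p in row] for row in mat_mul(P, B)]   # P = B^k / k!
        E = [[E[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    for _ in range(s):
        E = mat_mul(E, E)
    return E

def survival_T1(Lam, betas, i, t):
    n = len(betas)
    Q = [[Lam[r][c] - (betas[r] if r == c else 0.0) for c in range(n)] for r in range(n)]
    Et = mat_exp([[q * t for q in row] for row in Q])
    return sum(Et[i])       # e_i' exp(Qt) e
```

With a single environmental state this reduces to the compound Poisson case P(T₁ > t) = e^{−βt}, which serves as a check.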
Conditioning upon F_{T₁} shows similarly that in the limit, (T₂, U₂) are independent of F_{T₁}, with T₂ being exponential with rate β* and U₂ having distribution B*. Continuing in this manner shows that the limiting distribution of (T₁, U₁, T₂, U₂, ...) is as in {S_t*}. From this the convergence in distribution follows by general facts on weak convergence in D[0, ∞). The fact that indeed ψ^(a)(u) → ψ*(u) follows, e.g., from Theorem 3.2.1 of [145], which also yields ψ^(a)(u, T) → ψ*(u, T) for all u and T. □

Example 1.10 Let E = {1, 2},

Λ = ( −a  a
       a  −a ),  β₁ = 9/2, β₂ = 3/2,  B₁ = (3/5)E₃ + (2/5)E₇,  B₂ = (1/5)E₃ + (4/5)E₇,

where E_δ denotes the exponential distribution with intensity parameter δ and a > 0 is arbitrary. That is, we may imagine that we have two types of claims such that the claim size distributions are E₃ and E₇: claims of type E₃ arrive with intensity (9/2)·(3/5) = 27/10 in state 1 and with intensity (3/2)·(1/5) = 3/10 in state 2, those of type E₇ with intensity (9/2)·(2/5) = 9/5 in state 1 and with intensity (3/2)·(4/5) = 6/5 in state 2. Since E₃ is a more dangerous claim size distribution than E₇ (the mean is larger and the tail is heavier), state 1 appears as more dangerous than state 2, and in fact

ρ₁ = β₁µ_{B₁} = (9/2)((3/5)(1/3) + (2/5)(1/7)) = 81/70,  ρ₂ = β₂µ_{B₂} = (3/2)((1/5)(1/3) + (4/5)(1/7)) = 19/70.

Thus, in state 1 where ρ₁ > 1, the company even suffers an average loss, and (at least when a is small such that state changes of the environment are infrequent) the paths of the surplus process will exhibit the type of behaviour in Fig. 1.1, with periods with positive drift alternating with periods with negative drift; the periods in state 1, resp. state 2, are marked by thin, resp. thick, lines in the path of {S_t}. However, the overall drift is negative since π = (1/2, 1/2), so that

ρ = π₁ρ₁ + π₂ρ₂ = 5/7.

Figure 1.1

Computing the parameters of the averaged compound Poisson model, we first get

β* = (1/2)(9/2) + (1/2)(3/2) = 3.

Thus, a fraction π₁β₁/β* = 3/4 of the claims occur in state 1 and the remaining fraction 1/4 in state 2. Hence

B* = (3/4)((3/5)E₃ + (2/5)E₇) + (1/4)((1/5)E₃ + (4/5)E₇) = (1/2)E₃ + (1/2)E₇.

That is, the averaged compound Poisson model is the same as that considered in Chapter III. □

The definition (1.1) of the safety loading is (as for the renewal model in Chapter V) based upon an asymptotic consideration given by the following result:

Proposition 1.11 (a) ES_t/t → ρ − 1; (b) S_t/t → ρ − 1 a.s., t → ∞.

Proof In the notation of Proposition 1.8, we have

E[S_t + t | (t_i)_{i∈E}] = Σ_{i∈E} t_iβ_iµ_{B_i} = Σ_{i∈E} t_iρ_i.

Taking expectations and using the well-known fact Et_i/t → π_i yields (a). For (b), note first that (1/N) Σ_{k=1}^N U_k^(i) → µ_{B_i}. Hence

(S_t + t)/t = Σ_{i∈E} (N_t^(i)/t) · (1/N_t^(i)) Σ_{k=1}^{N_t^(i)} U_k^(i) → Σ_{i∈E} π_iβ_iµ_{B_i} = ρ. □
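The averaged-model computations of Example 1.10 above can be reproduced mechanically with exact rational arithmetic, which makes them convenient to verify:

```python
from fractions import Fraction as Fr

# Recomputing the averaged compound Poisson parameters of Example 1.10:
# beta* = sum_i pi_i beta_i, B* = mixture of the B_i weighted by pi_i beta_i / beta*,
# and rho = pi_1 rho_1 + pi_2 rho_2.

pi = (Fr(1, 2), Fr(1, 2))
betas = (Fr(9, 2), Fr(3, 2))
# B_1 = 3/5 E_3 + 2/5 E_7,  B_2 = 1/5 E_3 + 4/5 E_7  (E_d = exponential with rate d)
w1 = (Fr(3, 5), Fr(2, 5))
w2 = (Fr(1, 5), Fr(4, 5))
means = (Fr(1, 3), Fr(1, 7))            # means of E_3 and E_7

beta_star = pi[0] * betas[0] + pi[1] * betas[1]
c1 = pi[0] * betas[0] / beta_star       # fraction of claims arriving in state 1
mix_E3 = c1 * w1[0] + (1 - c1) * w2[0]  # weight of E_3 in B*
mix_E7 = c1 * w1[1] + (1 - c1) * w2[1]

mu1 = w1[0] * means[0] + w1[1] * means[1]   # mu_{B_1}
mu2 = w2[0] * means[0] + w2[1] * means[1]   # mu_{B_2}
rho = pi[0] * betas[0] * mu1 + pi[1] * betas[1] * mu2
```

The output reproduces β* = 3, B* = (1/2)E₃ + (1/2)E₇ and ρ = 5/7 as in the example.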
Corollary 1.12 If η > 0, then M < ∞ a.s., and hence ψ_i(u) < 1 for all i and u. If η ≤ 0, then M = ∞ a.s., and hence ψ_i(u) = 1 for all i and u.

Proof The case η < 0 is trivial, since then the a.s. limit ρ − 1 of S_t/t is > 0, and hence M = ∞. The case η > 0 is similarly easy. Now let η = 0, let some state i be fixed and define

ω = ω₁ = inf{t > 0 : J_{t−} ≠ i, J_t = i},  ω₂ = inf{t > ω₁ : J_{t−} ≠ i, J_t = i},  ...,

X₁ = S_{ω₁}, X₂ = S_{ω₂} − S_{ω₁}, and so on. Now obviously the ω_n form a renewal process, and hence ω_n/n → E_iω a.s.; further X₂, X₃, ... are i.i.d., with X₂, X₃, ... having the P_i-distribution of X = X₁. By standard Markov process formulas (e.g. [APQ] p. 38), E_iω₁ = 1/(π_iλ_i) (with λ_i = −λ_ii), and

E_iX = E_i ∫₀^ω (β_{J_t}µ_{B_{J_t}} − 1) dt = E_iω · ( Σ_{j∈E} π_jβ_jµ_{B_j} − 1 ) = (ρ − 1)E_iω = 0.

Since the X_n are independent with E_iX = 0, {S_{ω_n}} is a discrete time random walk with mean zero, and hence oscillates between −∞ and ∞, so that also here M = ∞. □

Notes and references The Markov-modulated Poisson process has become very popular in queueing theory during the last decade, see e.g. [APQ]. In risk theory, some early studies are in Janssen & Reinhard [211], [212]; see also [302], [315]. The mainstream of the present chapter follows [16], with some important improvements being obtained in Asmussen [17] in the queueing setting and being implemented numerically in Asmussen & Rolski [43]. A more comprehensive treatment is in Asmussen [16]; see also the Notes to Section 7. The proof of Proposition 1.11(b) is essentially the same as the proof of the strong law of large numbers for cumulative processes, see e.g. [APQ]. Statistical aspects are not treated here; see Meier [258] and Ryden [314]. There seems still to be more to be done in this area.

2 The ladder height distribution

Our mathematical treatment of the ruin problem follows the model of Chapter III for the simple compound Poisson model, and involves a version of the Pollaczeck–Khinchine formula (see Proposition 2.2(a) below) where the ladder height distribution is evaluated by a time reversion argument.
G+ the probability that there are no further ladder steps starting from environment j is e^ ( I . and let further {my} be the Evalued process obtained by observing {Jt } only when {St*} is at a minimum value.3. To this end . MARKOVIAN ENVIRONMENT Proof The probability that there are n proper ladder steps not exceeding x and (x)ej. resp. From this (2. St < S* for u < t.3) We let {St*} be defined as {St}.IIG+II)e. the intensity matrix A* has ijth element * 7r ^i3 7ri and we have Pi(JT = j) = 7rj P2(JT = i)7ri (2. JJ = j. marked by thin.3 When q > 0. thick. see Figure 2. we need as in Chapters II. 0  x Figure 2. The u proof of (2. {mx} is a non terminating Markov process on E. only with {Jt} replaced by {Jt } (the /3i and Bi are the same ). III to bring R and G+ on a more explicit form . and that the environment is j at the nth when we start from i is e . mx = j when for some (necessarily unique) t we have St = x.154 CHAPTER VI. lines in the path of {St}. we need to invoke the timereversed version {Jt } of {Jt} .6. To make Proposition 2.1 The following observation is immediate: Proposition 2.2) is just the same as the proof of Lemma 11. . hence uniquely specified by its intensity matrix Q (say). That is.1) follows by summing over n and j.1 for an illustration in the case of p = 2 environmental states of {Jt}.2 useful .
.(/3i) diag. If there are no jumps in (t. Proof The argument relies on an interpretation in terms of excursions. ( Q( n)) converges monotonically to Q. we say that the excursion has depth 0. = x}. For example the excursion of depth 2 has one subexcursion which is of depth 1.4 Q satisfies the nonlinear matrix equation Q = W(Q) where 0 co(Q) = n* . Figure 2.2.0. Q( n+l) _ ^. and the excursion is said to have depth 1 if each of these subexcursions have depth 0.and a jump (claim arrival) occurs at time t. The definitions are illustrated on Fig. and S(dx) is the diagonal matrix with the f3iBi(dx) on the diagonal.2 where there are three excursions of depth 1. 2. s].(/3i)diag + T S(dx) eQx. 0 mms1   ^O \ T. THE LADDER HEIGHT DISTRIBUTION 155 Proposition 2.2. and the excursion ends at time s = inf {v > t : S. {S. In general. An excursion of {St*} above level x starts at time t if St = x.*. we recursively define the depth of an excursion as 1 plus the maximal depth of a subexcursion.2 . corresponding to two subexcursions of depth 0. the sequence {Q(n)} A* defined by Q(O) = . Otherwise each jump at a minimum level during the excursion starts a subexcursion. } is a minimum value at v = t. Note that the integral in the definition of W(Q) is the matrix whose ith row is the ith row of _ 3 f e2Bi(dx). Furthermore.
mx+dx = j) occurs in two ways . we first compute qij for i $ j. j. h. the subintensity matrix of {min+i ) } is cp (Q(n)) = Q(n +l) which implies that qgj +1) = \!.j +/3ipij. A) = f Pi(mx = j) dx eie4xej dx A u (2.St <S*.(01)diag = Q.j.156 CHAPTER VI. Now let {m ( n) } be {mx } killed at the first time i7n (say) a subexcursion of depth at least n occurs . By considering minimum values within the excursion.. Q = W(Q) follows. MARKOVIAN ENVIRONMENT Let p=7) be the probability that an excursion starting from Jt = i has depth at most n and terminates at J8 = j and pij the probability that an excursion starting from Jt = i terminates at J8 = j. 7rE Proof We shall show that Fi(Jt=j. either due to a jump of {Jt } which occurs with intensity A= j.s. (2.5) A (note that we use A = {x : x E Al on the r. Then a jump to j (i. StEA .St EA. Suppose mx = i.Qi + )%pij) Now just note that t pij and insert (2. It follows that qij = A.. p1^) Define a further kernel U by f U(i. Theorem 2 . or through an arrival starting an excursion terminating with J. Writing out in matrix notation .4). of the definition to make U be concentrated on (co. e. it becomes clear that pij = r [eQh] 0 ij Bi (dy) • (2. A) = L' U(j. It is clear that { mini } is a terminating Markov process and that { mio) } has subintensity matrix A* .4) To show Q = cp(Q).u< t). Similarly by induction . The proof of Q = W(Q) then immediately carries over to show that the subintensity matrix of {mil) } is cp (Q(o)) = Q(l). Fi(mh =i ) = 1 + =hflh+Qihpii+o(h) implies qii = 'iii /i +)3ipii.6) . Similarly. = j.T+>t) _ ^iF 7ri (JJ =i. A). 0)). i.5 R(i.
St EA. it is readily checked that 7r is a left eigenvector of K corresponding to the eigenvalue 0 (when p < 1). we shall see that nevertheless we have enough information to derive.7 It is instructive to see how Proposition 2..4]. From Qe = 0. dt.0<u<t. and get irPi(Jt =j. (c) the matrix K satisfies the nonlinear matrix equation K = W(K) where W( K) = A ( i) diag + fi J "O eKx S(dx). (b) for z > 0. {Jt }. K( n (d) the sequence converges monotonically to K. oo)) = f o' eIXS((x + z. THE LADDER HEIGHT DISTRIBUTION 157 from which the result immediately follows by integrating from 0 to oo w..6 (a) R(dx) = eKxdx..S„<0.=StSt.r.1 can be rederived using the more detailed form of G+ in Corollary 2. Remark 2. and to obtain a simple solution in the ..Qi)diag. x < 0. We may then assume Ju=Jtu. S.(.St EA. To this end. the CramerLundberg approximation (Section 3). e. `` {K(n)} [the W(•) here is of course not the same as in Proposition 2.St <Su.0<u<t) = P.6 is hardly all that explicit in general.t. 0<u<t). St < St U.6). 0 +1) = cp (K( n)) defined by K(o) = A . 0 < u < t) = 7rjPj(Jt =i. oo))dx. consider stationary versions of {Jt}. where A is the diagonal matrix with 7r on the diagonal: Corollary 2. (Jo = j.Jo=i. Jt = i. and this immediately yields (2.z+>t) = P.g. u It is convenient at this stage to rewrite the above results in terms of the matrix K = 0'Q'A. St E A.2.6(b): from 7rK = 0 we get 7rG+(dy)e = J W 7reKx(fiiBi(dy + x))diag dx • e 0 w(fiiB1(dy + x))col dx f 0 EirifiiBi(y)dy = fi*B*(y)dy• iEE 0 Though maybe Corollary 2.(Jt=j.StEA.. and we let k be the corresponding right eigenvector normalized by Irk = 1. G+((z.
special case of phase-type claims (Chapter VIII). As preparation, we shall give at this place some simple consequences of Corollary 2.6.

Lemma 2.8 (I − ||G_+||)e = (1 − ρ)k.

Proof Using Corollary 2.6(b) with z = 0, we get

||G_+|| = ∫_0^∞ e^{Kx} S((x, ∞)) dx.   (2.7)

In particular, multiplying by K and integrating by parts yields

K||G_+|| = ∫_0^∞ (e^{Kx} − I) S(dx) = K − Λ + (β_i)_diag − ∫_0^∞ S(dx) = K − Λ.   (2.8)

Let L = (kπ − K)^{−1}. Then (kπ − K)k = k implies Lk = k. Now using (2.7), (2.8) and πe^{Kx} = π, we get

kπ||G_+||e = k ∫_0^∞ π S((x, ∞))e dx = k (π_i β_i μ_{B_i})_row e = ρk,
K||G_+||e = Ke − Λe = Ke,
(kπ − K)(I − ||G_+||)e = k − ρk − Ke + Ke = (1 − ρ)k.

Multiplying by L to the left, the proof is complete. □
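The fixed-point iteration of Corollary 2.6(d) is straightforward to implement. The sketch below uses a hypothetical two-state example with exponential claims B_i = Exp(δ_i), for which the integral in φ(K) has a closed form; none of the numerical parameters are from the text.

```python
import numpy as np

# Iteration K^(n+1) = phi(K^(n)) of Corollary 2.6(d). For exponential claims
# B_i = Exp(delta_i), the j-th column of int_0^inf e^{Kx} S(dx) equals
# beta_j*delta_j*(delta_j*I - K)^{-1} e_j. All parameters are hypothetical.
Lam = np.array([[-1.0, 1.0], [2.0, -2.0]])   # environment intensity matrix Lambda
beta = np.array([0.5, 1.0])                   # arrival rates beta_i
delta = np.array([2.0, 2.0])                  # claim intensities delta_i

def phi(K):
    d = len(beta)
    integral = np.zeros_like(K)
    for j in range(d):
        integral[:, j] = beta[j] * delta[j] * np.linalg.solve(
            delta[j] * np.eye(d) - K, np.eye(d)[:, j])
    return Lam - np.diag(beta) + integral

K = Lam - np.diag(beta)                       # K^(0)
for _ in range(500):
    K = phi(K)                                # converges monotonically to K

pi = np.array([2.0, 1.0]) / 3.0               # stationary law of Lambda: pi @ Lam = 0
rho = pi @ (beta / delta)                     # rho = sum_i pi_i beta_i mu_{B_i} = 1/3
print(rho, np.abs(pi @ K).max(), np.abs(K - phi(K)).max())
```

Since ρ = 1/3 < 1 here, the converged K should satisfy both K = φ(K) and πK = 0, which the printed residuals confirm.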
Here is an alternative algorithm to the iteration scheme in Corollary 2.6 for computing K. Let |A| denote the determinant of the matrix A and d the number of states in E.

Proposition 2.9 The following assertions are equivalent:
(a) all d eigenvalues of K are distinct;
(b) there exist d distinct solutions s_1, ..., s_d ∈ {s ∈ C : ℜs ≤ 0} of

|Λ + (β_i(B̂_i[s] − 1))_diag − sI| = 0.   (2.9)

In that case, s_1, ..., s_d are precisely the eigenvalues of K, and the corresponding left (row) eigenvectors a_1, ..., a_d can be computed from

a_i (Λ + (β_i(B̂_i[s_i] − 1))_diag − s_i I) = 0.   (2.10)

Thus,

K = (a_1; ...; a_d)^{−1} (s_1 a_1; ...; s_d a_d),   (2.11)

where (c_1; ...; c_d) denotes the matrix whose rows are c_1, ..., c_d.

Proof Since K is similar to the intensity matrix Q, all eigenvalues must indeed lie in {s ∈ C : ℜs ≤ 0}. Assume aK = sa. Then a e^{Kx} = e^{sx} a, and multiplying K = φ(K) by a to the left we get

sa = a (Λ − (β_i)_diag + ∫_0^∞ e^{sx} S(dx)) = a (Λ − (β_i)_diag + (β_i B̂_i[s])_diag) = a (Λ + (β_i(B̂_i[s] − 1))_diag).

It follows that if (a) holds, then so does (b), and the eigenvalues and eigenvectors can be computed as asserted. The proof that (b) implies (a) is more involved and omitted; see Asmussen [16]. □

In the computation of the Cramér-Lundberg constant C, we shall also need some formulas which are only valid if ρ > 1 instead of (as up to now) ρ < 1. Let M_+ denote the matrix with ijth entry

M_+(i, j) = ∫_0^∞ x G_+(i, j; dx).

Lemma 2.10 Assume ρ > 1. Then ||G_+|| is stochastic with invariant probability vector ξ_+ (say) proportional to πK,

ξ_+ = πK / (πKe).

Furthermore, πKM_+e = ρ − 1.

Proof From ρ > 1 it follows that S_t → ∞ a.s. and hence ||G_+|| is stochastic. That πK = e'Q'Δ_π is non-zero and has non-negative components follows since Qe has the same property for ρ > 1. Thus the formula for ξ_+ follows immediately by multiplying (2.8) by π, which yields πK||G_+|| = πK. Further

M_+ = ∫_0^∞ dz ∫_0^∞ e^{Kx} S((x + z, ∞)) dx = −K^{−1} ∫_0^∞ (e^{Ky} − I) S((y, ∞)) dy,

πKM_+e = π ∫_0^∞ dy (I − e^{Ky}) S((y, ∞)) e = π(β_i μ_{B_i})_diag e − π||G_+||e = ρ − 1

(since ||G_+|| being stochastic implies ||G_+||e = e). □
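The spectral characterization in Proposition 2.9 can be sanity-checked numerically: each eigenvalue of K (computed by the iteration of Corollary 2.6(d)) should make the matrix in (2.9) singular. The two-state exponential-claims parameters below are hypothetical.

```python
import numpy as np

# Check of Proposition 2.9: every eigenvalue s of K solves
# |Lam + (beta_i*(Bhat_i[s]-1))_diag - s*I| = 0, where Bhat_i[s] = delta_i/(delta_i - s)
# for exponential claims B_i = Exp(delta_i). Hypothetical two-state parameters.
Lam = np.array([[-1.0, 1.0], [2.0, -2.0]])
beta = np.array([0.5, 1.0])
delta = np.array([2.0, 2.0])

def phi(K):
    d = len(beta)
    cols = [beta[j] * delta[j] * np.linalg.solve(delta[j] * np.eye(d) - K, np.eye(d)[:, j])
            for j in range(d)]
    return Lam - np.diag(beta) + np.column_stack(cols)

K = Lam - np.diag(beta)
for _ in range(500):
    K = phi(K)                                # fixed point of Corollary 2.6(d)

def M(s):                                     # the matrix in (2.9)
    return Lam + np.diag(beta * (delta / (delta - s) - 1.0)) - s * np.eye(2)

eigs = np.linalg.eigvals(K)
residuals = [np.linalg.svd(M(s.real), compute_uv=False).min() for s in eigs]
print(sorted(s.real for s in eigs), residuals)
```

With ρ < 1 one of the eigenvalues is 0 (the left eigenvector being π), and the smallest singular value of M(s) vanishes at every eigenvalue, in agreement with (2.9).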
Notes and references The exposition follows Asmussen [17] closely (the proof of Proposition 2.4 is different). The problem of computing G+ may be viewed as a special case of WienerHopf factorization for continuoustime random walks with Markovdependent increments (Markov additive processes ); the discretetime case is surveyed in Asmussen [15] and references given there.
3 Change of measure via exponential families
We first recall some notation and some results which were given in Chapter II
in a more general Markov additive process context. Define Ft as the measurevalued matrix with ijth entry Ft(i, j; x) = Pi[St < x; Jt = j], and Ft[s] as the matrix with ijth entry Ft[i, j; s] = Ei[e8St; Jt = j] (thus, F[s] may be viewed as the matrix m.g.f. of Ft defined by entrywise integration). Define further
K[a] = A + ((3i(Bi[a]  1))  aI
diag
(the matrix function K[a] is of course not related to the matrix K of the preceding section]. Then (Proposition 11.5.2):
Proposition 3.1 Ft[a] = etK[a] It follows from II.5 that K[a] has a simple and unique eigenvalue x(a) with maximal real part, such that the corresponding left and right eigenvectors VW, h(a) may be taken with strictly positive components. We shall use the normalization v(a)e = v(a)hi') = 1. Note that since K[0] = A, we have vi°> = 7r, h(°) = e. The function x(a) plays the role of an appropriate generalization of the c.g.f., see Theorem 11.5.7. Now consider some 9 such that all Bi[9] and hence ic(9), v(8), h(e) etc. are welldefined. The aim is to define governing parameters f3e;i, Be;i, Ae = 0!^1)i,jEE for a risk process, such that one can obtain suitable generalizations of the likelihood ratio identitites of Chapter II and thereby of Lundberg's inequality, the CramerLundberg approximation etc. According to Theorem 11.5.11, the appropriate choice is
e9x
09;i =13ihi[9], Bo;i (dx) = Bt[B]Bi(dx),
Ae = AB 1K[9]De  r.(9)I oB 1 ADe + (i3i(Bi[9] 
1))diag  (#c(9) + 9)I
3. CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES
161
where AB is the diagonal matrix with h(e) as ith diagonal element . That is,
hie) DEB) _ ^Y' Me)
iii
i#j i=j
+ /i(Bi[9] 1)  r. (9)  0
We recall that it was shown in II . 5 that Ae is an intensity matrix, that Eie°St h(o) = etK(e)hEe ) and that { eest  t(e)h(9 ) } is a martingale. t>o We let Pe;i be the governing probability measure for a risk process with parameters ,69;i, B9; i, A9 and initial environment Jo = i. Recall that if PBT) is ]p(T) the restriction of Pe ;i to YT = a {(St, Jt) : t < T} and PET) = PoT), then and PET) are equivalent for T < oo. More generally, allowing T to be a stopping time, Theorem II.2.3 takes the following form: Proposition 3.2 Let r be any stopping time and let G E Pr, G C {r < oo}. Then
PiG = Po;iG = hE°) Ee;i lh
1 j,)
exp {BST + rrc(0 ) }; G .
J
(3.1)
Let F9;t[s], ice ( s) and pe be defined the same way as Ft[s], c (s) and p, only with the original risk process replaced by the one with changed parameters. Lemma 3.3 Fe;t [s] = et"(B) 0 1 Ft[s + O]0. Proof By II.( 5.8). u
Lemma 3.4 rte ( s) = rc(s+B )  rc(O). In particular, pe > 1 whenever ic'(s) > 0. Proof The first formula follows by Lemma 3.3 and the second from Pe = rc'' (s).
Notes and references The exposition here and in the next two subsections (on likelihood ratio identities and Lundberg conjugation) follows Asmussen [16] closely (but is somewhat more selfcontained).
3a Lundberg conjugation
Since the definition of c( s) is a direct extension of the definition for the classical Poisson model, the Lundberg equation is r. (y) = 0. We assume that a solution
162
CHAPTER VI. MARKOVIAN ENVIRONMENT
y > 0 exists and use notation like PL;i instead of P7;i; also, for brevity we write h = h(7) and v = v(7).
Substituting 0 = y, T = T(u), G = {T(u) < oo} in Proposition 3.2, letting ^(u) = S7(u)  u be the overshoot and noting that PL;i(T(u) < oo) = 1 by Lemma 3.4, we obtain: Corollary 3.5
V)i(u,
T) =
h ie 7uE L,i
e 7{(u)
h =(u)
e WO
; T(u) < T ,
(3 . 2) (3.3)
ioi(u)
= h ie 7u E
hj,(„)
.
Noting that 6(u) > 0, (3.3) yields
Corollary 3.6 (LUNDBERG'S INEQUALITY) Oi(u)  < hi efu. min2EE h9
Assuming it has been shown that C = limo, 0 EL;i[e7^(u)/hj,(„j exists and is independent of i (which is not too difficult, cf. the proof of Lemma 3.8 below), it also follows immediately that 0j(u)  hiCe7u. However, the calculation of C is nontrivial. Recall the definition of G+, K, k from Section 2.
Theorem 3 .7 (THE CRAMERLUNDBERG APPROXIMATION) In the lighttailed case, 0j(u)  hiCe7u, where
C (PL 1) "Lk.
(3.4)
To calculate C, we need two lemmas . For the first, recall the definition of (+, M+ in Lemma 2.10. Lemma 3 .8 As u 4 oo, (^(u), JT(u)) converges in distribution w.r.t. PL;i, with the density gj(x) (say) of the limit (e(oo), JT(,,,,)) at b(oo) = x, JT(oo) = j being independent of i and given by
gi (x) = L 1 L E CL;'GL (e,.1; (x, oo)) S+M+e LEE
Proof We shall need to invoke the concept of semiregeneration , see A.1f. Interpreting the ladder points as semiregeneration points (the types being the environmental states in which they occur), {e(u),JJ(u))} is semiregenerative with the first semiregeneration point being (^(0), JT(o)) _ (S,+, J,+). The formula for gj (x) now follows immediately from Proposition A1.7, noting that the u nonlattice property is obvious because all GL (j, j; •) have densities.
3. CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES
Lemma 3 .9 KL = 01K0  ryI, G+[ry] _
163
111G+IIA, G+['y]h = h.
Proof Appealing to the occupation measure interpretation of K, cf. Corollary 2.6, we get for x < 0 that eteKxej dx =
fPs(StE dx,J =j,r > t)dt
= hie7x f O PL;i(St E dx, Jt = j, T+ > t) dt hj o
= ht e7xe^eK`xej dx,
which is equivalent to the first statement of the lemma. The proof of the second is a similar but easier application of the basic likelihood ratio identity Proposition 3.2. In the same way we get G+['y] = AIIG+IIT1, and since IIG+ IIe = e, it follows that
G +[ry l h
= oIIG+IIo 1h = AIIG+ IIe =
De
= h.
Proof of Theorem 3.7 Using Lemma 3.8, we get EL (e'W ); JT(.) = jl = f 00 e 7xgj (x) dx L J o 1 °°
f e7^G+( t, j; (x, oo)) dx S+M+e LEE °

1 (+;l f S +M +e LEE 0
rr ry S +M +e LEE
0 1(1  e7 x ) G+(1,j; dx)

1
E(+(IIG+(e,j)IIG+[t,j;
In matrix formulation, this means that
C =
E L;i
e7f()
hj,r(_) L
 L
ryC M e
L
c+
(IIG+II  G +[ 7]) 0le
1
L
YC+M+e
'y(PL  1)
(ir KL) (I  G+[ y]) 0le,
164
CHAPTER VI. MARKOVIAN ENVIRONMENT
using Lemma 2.10 for the two last equalities. Inserting first Lemma 3.9 and next Lemma 2.8, this becomes 1 7r LA 1(YI  K)(I  IIG+II)e 'Y(PL  1) = 1 P 7r LA 1(yI  K) k = 1P 7rLO1k. Y(PL  1) (PL  1 ) Thus, to complete the proof it only remains to check that irL = vL A. The normalization vLhL = 1 ensures vLOe = 1. Finally, VLOAL = vLAA'K['Y]A = 0
since by definition vLK[y] = k(y)vL = 0.
u
3b Ramifications of Lundberg 's inequality
We consider first the timedependent version of Lundberg 's inequality, cf. IV.4. The idea is as there to substitute T = yu in 'Pi (u, T) and to replace the Lundberg exponent y by yy = ay  yk(ay ), where ay is the unique solution of rc(ay)= 1 Y Graphically, the situation is just as in Fig. 0.1 of Chapter IV. Thus, one has always yy > y, whereas ay > y, k( ay) > 0 when y < 1/k'(y), and ay < y, k(ay) < 0 when y > 1/k'(y). Theorem 3 .10 Let C+°) (y) _ 1
miniEE hiav)
Then 1 y< (y)
y>
Vi(u,yu)
Pi(u) 
C+°)(y)hiav)
e7vu,
(3.6)
V,i(u,yu)
< C+)(y)hiar )e 'Yvu,
(y) (3.7)
Proof Consider first the case y <
Then, since k (ay) > 0, (3 .1) yields
'12(u,yu)
hiav)]E'iav,i
h(ay ) J*(u)
exp {ayST(,L ) +r(u)k( ay)}; T(u) < yu
11 Let Bj (x) C_ = min 1 • inf jEE hj x>o f2° e'r( vx)Bj(dy) ' C+ _ mE 1 Bj(x) J Y x)Bj (dy). we let G+ * W(u) be the vector with ith component E(G+(i. 1 Similarly. hj P . oo)) and. (3. if y > 1lk'(ry). for a vector <p(u) = (cpi (u))iEE of functions .i [e*(u)K(av). Our next objective is to improve upon the constant in front of a7u in Lundberg's inequality as well as to supplement with a lower bound: Theorem 3.3.(ay)}.V)i(u. yu < r(u) < 00 h 4(u) < h(av)C+o)(y)eavuEav . as in the classical case (3. yu < r(u) < 00] < hiav)C+o)( y)eavu+yuw(av) 0 Note that the proof appears to use less information than is inherent in the definition (3. we shall need the matrices G+ and R of Section 2.y)G+(z. Chie ryu < Vi(u ) < C+hie 7u. r(u) yu o)(y)eavuEav. We further write G(u) for the vector with ith component Gi(u) = EiEE G+(i. we have ic(ay) < 0 and get 'i(u) .j. yu) f h(av)e v avuE«v.9) For the proof. However.i [eT(u)K(av ).5).5) will produce the maximal ryy for which the argument works. av 'i [h. r(u) < yu] hiay)C+ h=av)C+ o) (y)eayu+yuw(av).7. (u.j) * coj)(u) _ f u ^Pj(u . dy)• o iEE jEE .i I (a) exp {aye(u) + r(u)r. CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES 165 hiav)e _avuE. exp {e() + r(u))} ..00 su e7( ( 3.8 ) Then for all i E E and all u > 0.
U = U". j. Then cpin)(u) sit (u) as n + oo. MARKOVIAN ENVIRONMENT Lemma 3 . if r+ (n) is the nth ladder epoch. = Eo G+ G.u Iv 2°)(u)I Pi(rr+(N + 1) < oo) + 0. we have G *(N +1) * ^. j. dy) = aj f Bj(dy .x) jEE 00 u 0 //^^ C+E.& (u).13 For all i and u. dx). 0 G+(i. 00 Thus C+ > hj f"o e7(Yu)G +(i. just note that the recursion <p(n+1) = G + G+ * (p(n) holds for the particular case where cpin)(u) is the probability of ruin after at most n ladder steps and that then obviously u cp2n) (u) + t.12 Assume sup1. 00 f C_ hj f e(Y)G+(i. jEE u 0 j.(0) ] (u) < sup Jt t. Lemma 3 . Hence lim cp(n) exists and equals U * G.x ) = Gi(u).j. dx) f e7( vu)Bj (dy . and define W(n+1) (u) = G(u) + (G+ * tp(n))(u). dx) 100 C . Then iterating the defining equation ip(n+1) = G + G+ * V(n) we get W(N+1) = UN * G + G+N+1) * ^(o) However.7. dy) : 1(u) < C+ > hj u e(1tL)G+(i. dy) 00 C+ ijhj f R(i. j.u IMP:°) (u) I < oo. dx ) Bj (u .x ) R(i. n > oo. j. Proof Write UN = EN G+ .x) x) jEE 0 E Qj f jEE R(i.166 CHAPTER VI. _ To see that the ith component of U * G(u) equals ?Pi (u).j.ery(&u+x)Bj (dy) Bj(u Bj (u .3jhj // f 00 R(i. °O .dy).
10 ) by Lemma 3 .10: Theorem 3 . CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES proving the upper inequality. and assuming it shown for n. taking cps°) (u) = 0. and let 8 = e'(70). We claim by induction that then cpin) (u) > C_ hie7u for all n. (3. jEE estimating the first term in (3.tpi(u. it follows that Vi(u) < C_(yo) h=70)e7ou. Here is an estimate of the rate of convergence of the finite horizon ruin probabilities 'i (u.11).Pi(MT > u) = Pi(MT < u.y)G+(i. and the proof of the lower one is similar. and using Lemma 3 .MT<u. 13 and the second by the induction hypothesis . dy) jEE u U +C_ hje7( yu)G jEE"" +(i. T) < C+(')' o)hi7u)e7ou8T . letting MT = maxo<t<T St.u)G+(i.ST).10) C_ 1 f hje7(y.M>u) = Ei [VGJT (u .13) Hence. we get Wo n +1) (u) = ? 7 i ( U ) + E J u gyp.(u) < T ) to 0i (u) which is different from Theorem 3. T) = Pi (7.11) C_e7u 57 O+[i. j. +i . ST < u] < C+(yo)e7ouEi [h^7o)e70ST1 l T J = C h(7o)e7ou8T . this is obvious if n = 0. MT < u.12) Proof We first note that just as in the proof of Theorem 3. (3. dy) jEE o (3. Then 0< Vi (u )  0i(u.M > u) = Pi(ST<u. j.3. 9 for the last equality in (3. 14 Let yo > 0 be the solution of 'c'(yo ) = 0. from which the lower inequality follows by letting n * oo.11.8) with y replaced by yo and hi by h=7o ).13 Let first cp=°)(u) = C_ hie"u in Lemma 3. we have Vii (u) . u The proof of the upper inequality is similar . let C+(yo) be as in (3. y]hj = C_ e7uhi. dy) (3.n) ( u . Indeed. j.T) = Pi(M > u) . j.13. 167 u Proof of Theorem 3.
. M" of the corresponding two claim surplus proceses (note that 0'(u) _ P(M' > u). we also assume that there exist i # j such that either /3i <.o.o. u > 0. we refer to . <s.3).168 CHAPTER VI.. but that in general the picture is more diverse.2) (4.2) alone just amounts to an ordering of the states.3) Bl <_s. MARKOVIAN ENVIRONMENT Notes and references The results and proofs are from Asmussen and Rolski [44]. 4 Comparisons with the compound Poisson model 4a Ordering of the ruin functions For two risk functions 0'. we define the stochastic ordering by 0' < s. and finally in part from queueing theory. where o*(u) is the ruin probability for the averaged compound Poisson model defined in Section 1 and . Occasionally we strengthen (4. < . It was long conjectured that 0* Vi. The motivation that such a result should be true came in part from numerical studies. is the one for the Markovmodulated one in the stationary case (the distribution of J0 is 7r). B2 <_s.0. this correponds to the usual stochastic ordering of the maxima M'. The results to be presented show that quite often this is so.3) to B = Bi does not depend on i.33 or Bi 0 Bj.1) Obviously.3p. [177].o... V)" if z/i'(u) <'c"" (u). in part from the folklore principle that any added stochastic variation increases the risk. 0"(u) = P(M" > u)) Now consider the risk process in a Markovian environment and define i' (u) _ >iEE irioi(u). Further related discussion is given in Grigelionis [176].o. The conditions which play a role in the following are: .31:5)32 . ". this is not the case for (4.. For the notion of monotone Markov processes.4) To avoid trivialities.... Bp. where it has been observed repeatedly that Markovmodulation increases waiting times and in fact some partial results had been obtained. (4. (4.5) Note that whereas (4. (4. The Markov process {Jt} is stochastically monotone (4.
.. Lemma 4 .r(u x)dx.8) ^j Tri/iBd(x) . the second follows from an extension of Theorem I1.x) dx u o i =1 i=1 (4. ^i 7ri = 1.4) say basically that if i < j . COMPARISONS WITH THE COMPOUND POISSON MODEL 169 Stoyan [352].9 ) below).1 Assume that conditions (4.x)dx _ /3*B*(u) + f u / ^ t=1 > 3 * B* ( ) + f (4. it follows by a standard ...5 (cf.7) 7ri. and it is in fact easy to show that Vii(u ) < t/j(u) (this is used in the derivation of (4. . Conditions (4. 1:7riaibi > E 7riai i=1 i=1 j=1 The equality holds if and only if a1 = .9) (4.1. The first is a standard result going back to Chebycheff and appearing in a more general form in Esary. Section 4. Then V. b1 < .6). T(0) < oo) = Bi(x) dx/tcai ..2.* For the proof.. = aP or b1 = .x)B*(x) dx. also Proposition 2. (4. we obtain (cf. 7(0) < oo) = pirf+). where 7r2+) = QiµBilri/p.3* f uB(x) z/^. Lemma 4 .. Comparing (4.r (Sr(o) E dx Jr(o) = i. Proof of Theorem 4.10) Q*B*(u)+.7) and Lemma 4. Theorem 4 . Proschan & Walkup [140]. E 7r i Wi(u .13* J0 u 0*(u . 3 (a) P.4) is automatic in some simple examples like birthdeath processes or p = 2 .4..2)(4.. then j is the more risky state . = b. 0 Here (4. .10) and (4..r (JT(o) = i. then P P P 7rjbj. Conditioning upon the first ladder epoch.3iBi(x)YPi(u .1) which with basically the same proof can be found in Asmussen & Schmidt [49]. note that (4.6) 7r= fl*B*(u) + p> s=1 +) fu 0 b (u  x)Bt (x) /pB.x) of i and using Lemma 4. 2 If al < .. < bp and 7ri > 0 (i = 1.9) follows by considering the increasing functions 3iBi (x) and Oi (u . Proposition 2..1 for the first term in (4.3 for the second) *(u) _ /3 *B* (u) +.2)(4..4) hold. we need two lemmas.. < a. p).2. dx (4. (b) P.6.
6)./3*.4 Assume that . 4b Ordering of adjustment coefficients Despite the fact that V)* (u) < *. Using (4.4 is not vacuous. 01 = 103.11) i=1` and that A has the form eAo for some fixed intensity matrix A0. it will hold for all sufficiently large u.. Frey.0.0*• i=1 But it is intuitively clear (see Theorem 3. Recall that .s. What is missing in relation to Theorem 4.4 is the understanding of whether the stochastic monotonicity condition (4. (u) may fail for some u. As is seen. Rolski & Schmidt [32].6).3i. Then the l...s. u Here is a counterexample showing that the inequality tp* (u) < V). this ruin probability is /3iPBi. dominates the solution 0* to the renewal equation (4.8) we get P P '*' (0) = 3* + /3*1* (0) _ > lri'3qqi • E 7i/ipBi . i=1 i=1 7'r(0) _ EFioiwi(0) . Proof Since 0. u To see that Proposition 4.(0) = V. (4.4) is essential (the present author conjectures it is). it is sufficient to show that 0'. let = ( 1/2 1/2 ) .170 CHAPTER VI. Q2 = 1.2.(0) < b *'(0) for e small enough.11) is of order 104 and the r. MARKOVIAN ENVIRONMENT argument from renewal theory that tk. Notes and references The results are from Asmussen.3µi < 1 for all i. For u = 0. of order 101. that P P /^ 1r1NiµBi /^2 /^ ^i/ji pBi < 1il3i i=1 i=1 (4. µB. and from this the claim follows.h.h.1 of [145] for a formal proof) that z/ii(u) converges to the ruin probability for the compound Poisson model with parameters . they are at present not quite complete. µB2 = 104. (u) is not in general true: Proposition 4. = 102. except possibly for a very special situation .r (u ) fails for all sufficiently small e > 0.1 and Proposition 4.* (0). Then i/i*(u) < .. Bi as e J.. of (4. 0.
1) . Now we can view {Xt} as a cumulative process (see A.4(b) that the limit in (4. (4. with strict inequality unless rci (y*) does not depend on iEE.13) (4. (4.5. in particular . The adjustment coefficient y* for the averaged compound Poisson model is the solution > 0 of rc*(ry*) = 0 where rc*(a) _ 13*(B*[a] . It is clear that the distribution of X. and by Proposition II. Hence if 5i 54 0 for some i E E. which in view of EiEE 1ibi = 0 is only possible if Si = 0 for all i E E.14) is nonzero so that A"(0) > 0. COMPARISONS WITH THE COMPOUND POISSON MODEL 171 the adjustment coefficient for the Markovmodulated model is defined as the solution y > 0 of ic(y) = 0 where c(a) is the eigenvalue with maximal real part of the matrix A + (rci(a))diag where rci(a) = ai(Bi[a] .g.5 y < ry*. (4.2 we have (Ei[e"X'.g.. with strict inequality unless a = 0 or bi = 0 for all i E E.(a) > 0 for all a 0 0. e. Asmussen [20]) as discussed in 11.12) iEE Theorem 4.)a.7) )i is convex with A'(0) = lim EXt tioo t = iEE 70i = 0.13) implies A(a) > 0 for all a. Further (see Corollary 11. it follows by Proposition A1.14) A„(O) iioo varXt t t By convexity. This implies that A is strictly convex. Jt = i])' EE = vA+n(6. Lemma 4.4. is nondegenerate unless bi does not depend on i E E.5.1) .a.Jt=kI A (the return time of k) where k E E is some arbitrary but fixed state. cf.a = E irirci(a). Then {(Jt.5.ld) with generic cycle w = inf{t>0: Jt_54 k.6 Let (di)iEE be a given set of constants satisfying EiEE iribi = 0 and define A(a) as the eigenvalue with maximal real part of the matrix A + a(bi)diag• Then )t(a) > 0. Xt)} is a Markov additive process (a socalled Markovian fluid model.. 0 . Proof Define X= f &ids.
h'(0) = (Ao . improving upon more incomplete results from Asmussen. (4. this implies that the solution y > 0 of K(y) = 0 must satisfy y < y*. 4c Sensitivity estimates for the adjustment coefficient Now assume that the intensity matrix for the environment is Ae = Ao/ e. h depend on the parameter (e or a). Then > risi = 0 because of (4. Hence rc (y*) > 0. Notes and references Theorem 4.Qi and Bi are fixed .6. the basic equation is (A + (rci(y))diag)h = 0. If rci(y* ) is not a constant function of i E E. h(0) = e. 0 = ((ri(Y))diag + ery (4{('Y))diag)h + (A0 + e(?i'Y))diag)h'.. Here we put a = 1/e.12) and rc*(y*) = 0. y.) and rc (•).5. Frey.5 is from Asmussen & O'Cinneide [40]. In the case of e. Hence letting e = 0 in (4. Further a(1) = rc(y*) by definition of A(. we have 7rh' = 0. note that y(a) + mins=1. Since ic is convex with rc'(0) < 0 . The corresponding adjustment coefficient is denoted by ry(e). a = 1 in Lemma 4. (4. and our aim is to compute the sensitivity ay e a E=O A dual result deals with the limit a 4 oo.eir)h'(0)... multiply the basic equation by a to obtain 0 = (A0 + e(r£i(y))diag)h. whereas the .15) Normalizing h by 7rh = 0. MARKOVIAN ENVIRONMENT Proof of Theorem 4.p yi and compute 8y 8a a=0 In both cases.e7r)1 (Ici(Y*))diage. Thus y(e) * y* as e 10. Let bi = rci(y*).16) Differentiating (4.172 CHAPTER VI.15) yields 0 = (Ii(y*)) diage + Aoh'(0) = (rci('Y*)) diage + (Ao . Rolski & Schmidt [32]. where A.15) once more and letting e = 0 we get . we get rc (y*) > 0 which in a similar manner implies that u y < y*..
. . 5 The Markovian arrival process We shall here briefly survey an extension of the model. the intensity for such a transition (referred to as marked in the following) is denoted by Aii l and the remaining intensity . multiplying (4..8 If (4. Frey.8 when ryi < 0 for some i is open. (4. (4. then 8a a=o All rci (0) Notes and references The results are from Asmussen. THE MARKOVIAN ARRIVAL PROCESS 173 0 = 27'(0)(ri(`Y *)) diage + 2(ci('Y* )) diag h' (0) + Aoh" (0) . (4.18).19) Then 'y ^ ryl as a ^ 0 and we may take h(0) = el (the first unit vector). Inserting (4..5. We get 0 = (aAo + ( lc&Y))diag)h. which has recently received much attention in the queueing literature.17) by 7r to the left to get (4.16) yields Proposition 4.19) holds. and we have proved: Proposition 4. i = 2. and may have some relevance in risk theory as well (though this still remains to be implemented).7 8ry aE = 1 7r(ci ('Y*))diag ( Ao e7r)1(Xi(Y*))diage *=0 P Now turn to the case of a. The additional feature of the model is the following: • Certain transitions of {Jt} from state i to state j are accompanied by a claim with distribution Bid. Rolski & Schmidt [32].i(7' *))diagh'(0). p.20) and multiplying by el to the left we get 0 = All + 7'(0)rci (0) + 0 (here we used icl (ry(0)) = 0 to infer that the first component of K[7(0)]h'( 0) is 0).20) Letting a = 0 in (4. The analogue of Proposition 4. We assume that 0 < y < 7i.18) 0 = 27'(0)p+27r(rs. 0 = (Ao + ry'(ii(Y)) diag )h + (aAo + (Ki(7'))diag)h'.17) (4.
In the above setting. T). and thus 1i = 0. and that are determined by A = A(l ) +A(2) where A is the intensity matrix the governing {Jt}.i.174 CHAPTER VI. Bii = Bi . The extension of the model can also be motivated via Markov additive processes: if {Nt} is the counting process of a point process.d. MARKOVIAN ENVIRONMENT f o r a transition i + j by A . the claim surplus is a Markov additive process (cf. where qij is the probability that a transition i * j is accompanied by a claim with distribution. . refer to notation) { Jt k) }. B.2) A(1) = A(' 1) ® A(1. Bij = B.2). A(1) = A . A(1'k) A(2 k1). Here are some main examples: Example 5 .1 (PHASETYPE RENEWAL ARRIVALS) Consider a risk process where the claim sizes are i. the definition of Bij is redundant for i i4 j. we use the convention that a1i = f3i where 3i is the Poisson rate in state i. A ( 2) = A (2`1 ) ® A. that Bii = Bi .2 for details). For i = j.6i ) diag. Indeed.^) etc. A(l) = tv.2 (SUPERPOSITIONS) A nice feature of the setup is that it is closed under superposition of independent arrival streams . is neither 0 or 1 is covered by letting Bij have an atom of size qij at 0. u Example 5 . This is the only way in which arrivals can occur. then {Nt} is a Markov additive process if and only if it corresponds to an arrival mechanism of the type just considered.4). we may let {Jt} represent the phase processes of the individual interarrival times glued together (see further VIII. but the point process of arrivals is not Poisson but renewal with interclaim times having common distribution A of phasetype with representation (v. Jt = (Jtl). the definition of Bi is redundant because of f3i = 0. and the marked transitions are then the ones corresponding to arrivals. the Markovmodulated compound Poisson model considered sofar corresponds to A(l) = (. Again .(13i )diag. II. A(l) = T. j(2) } be two independent environmental processes and let E(k). Jt2)) (2. with common distribution B. 
We then let (see the Appendix for the Kronecker E = E(1) x E(2). Thus . let { Jt 1) }. Note that the case that 0 < qij < 1.
This means that the environmental states are of the form i1i2 • • • iN with il.... RETIRED. assume that there is a finite number N of policies.}.. Bilo.iN. iN = all BOi2. However . i2i .4 (A SINGLE LIFE INSURANCE POLICY ) Consider the life insurance of a single policy holder which can be in one of several states.1i2. the kth policy enters a recovering state. E 10. u Example 5 . The individual pays at rate pi when in state i and receives an amount having distribution Bij when his/her state changes from i to j. INVALIDIZED.3 (AN INDIVIDUAL MODEL) In contrast to the collective assumptions (which underly most of the topics treated sofar in this book and lead to Poisson arrivals).g... after which it starts afresh. 11. Thus.. superpositions of renewal processes. or.iN = C27 All other offdiagonal elements of A are zero so that all other Bii are redundant. In this way we can model. e...5... more recently. where ik = 0 means that the kth policy has not yet expired and ik = 1 that it has expired.. claims occur only at state transitions for the environment so that AN2..iil... In fact . Similarly. MARRIED. say.kl is redundant). the idea of arrivals at transition epochs can be found in Hermann [193] and Rudemo [313]. as the Markovian arrival process ( MAP). Example 5 .iN C17 AilO. The versatility of the setup is even greater than for the Markovmodulated model. u Notes and references The point process of arrivals was studied in detail by Neuts [267] and is often referred to in the queueing literature as Neuts ' versatile point process . iN. possibly having a general phasetype sojourn time. THE MARKOVIAN ARRIVAL PROCESS Bij.iil.iN = a2..iN.kj = Bik) B13 4k = Bak) 175  (the definition of the remaining Bij. E = { WORKING. • upon a claim. all Al i2. with rate ai... DEAD etc. WIDOWED. DIVORCED. and that the policy then expires.1i2 . Easy modifications apply to allow for • the time until expiration of the kth policy is general phasetype rather than exponential. 
Hermann [193 ] and Asmussen & Koole [37] showed that in some appropriate ...iN are zero and all Bi are redundant. iN. Assume further that the ith policy leads to a claim having distribution Ci after a time which is exponential..
Some main queueing references using the MAP are Ramaswami [298], Sengupta [336], Lucantoni [248], Lucantoni et al. [248], Neuts [271] and Asmussen & Perry [42]. In a certain sense any arrival stream to a risk process can be approximated by a model of the type studied in this section: any marked point process is the weak limit of a sequence of such models. Obviously, one limitation for approximation purposes is the inequality Var N_t \ge E N_t which needs not hold for all arrival streams.

6 Risk theory in a periodic environment

6a The model

We assume as in the previous part of the chapter that the arrival mechanism has a certain time-inhomogeneity, but now exhibiting (deterministic) periodic fluctuations rather than (random) Markovian ones. Without loss of generality, let the period be 1; for s \in E = [0,1), we talk of s as the 'time of the year'. The basic assumptions are as follows:

- The arrival intensity at time t of the year is \beta(t) for a certain function \beta(t), 0 \le t < 1;
- Claims arriving at time t of the year have distribution B^{(t)};
- The premium rate at time t of the year is p(t).

By periodic extension, we may assume that the functions \beta(t), p(t) and B^{(t)} are defined also for t \notin [0,1). We denote throughout the initial season by s and by P^{(s)} the corresponding governing probability measure for the risk process. Thus at time t the premium rate is p(s+t), and a claim arrives with rate \beta(s+t) and is distributed according to B^{(s+t)}. Obviously, from an application point of view, one needs to assume also (as a minimum) that these functions are measurable in t; continuity would hold in presumably all reasonable examples. Let

\beta^* = \int_0^1 \beta(t)\,dt, \quad p^* = \int_0^1 p(t)\,dt, \quad B^* = \frac{1}{\beta^*} \int_0^1 \beta(t) B^{(t)}\,dt, \quad \rho = \int_0^1 \beta(v) \mu_B^{(v)}\,dv,   (6.1)

where \mu_B^{(v)} = \int_0^\infty x\,B^{(v)}(dx). Then the average arrival rate is \beta^* and the safety loading \eta is

\eta = (p^* - \rho)/\rho.   (6.2)

Note that \rho is the average net claim amount per unit time and \mu^* = \rho/\beta^* the average mean claim size. The claim surplus process \{S_t\} is defined in the obvious way; the arrival process \{N_t\}_{t\ge 0} is a time-inhomogeneous Poisson process with intensity function \{\beta(s+t)\}_{t\ge 0}, and the conditional distribution of U_i, given that the ith claim occurs at time t, is B^{(s+t)}.

In a similar manner as before, one may think of the standard compound Poisson model with parameters \beta^*, B^*, p^* as an averaged version of the periodic model, or, equivalently, of the periodic model as arising from the compound Poisson model by adding some extra variability. Many of the results given below indicate that the averaged and the periodic model share a number of main features. In particular, it is easily seen that, and we shall see below that, they have the same adjustment coefficient. In contrast, for the Markov-modulated model typically the adjustment coefficient is larger than for the averaged model, in agreement with the general principle of added variation increasing the risk (cf. Section 4b). The behaviour of the periodic model needs not be seen as a violation of this principle, since the added variation is deterministic, not random.

Remark 6.2 Define \theta(T) = \int_0^T p(t)\,dt and \tilde S_t = S_{\theta^{-1}(t)}. Then (by standard operational time arguments) \{\tilde S_t\} is a periodic risk process with unit premium rate and the same infinite horizon ruin probabilities. We assume in the rest of this section that p(t) \equiv 1.

Example 6.1 As an example to be used for numerical illustration throughout this section, let

\beta(t) = 3\lambda(1 + \sin 2\pi t),   (6.3)

and let B^{(t)} be a mixture of two exponential distributions with intensities 3 and 7 and weights w(t) = (1 + \cos 2\pi t)/2 and 1 - w(t). Then \beta^* = 3\lambda, whereas B^* is a mixture of exponential distributions with intensities 3 and 7 and weights 1/2 for each (1/2 = \int_0^1 w(t)\,dt = \int_0^1 (1 - w(t))\,dt). Thus the averaged compound Poisson model coincides with an example considered in Chapter III, and we recall from there that its ruin probability is

\psi^*(u) = \frac{24}{35} e^{-u} + \frac{1}{35} e^{-6u}.

Note that \lambda enters just as a scaling factor of the time axis, and thus the averaged standard compound Poisson models have the same risk for all \lambda. In contrast, we shall see that for the periodic model increasing \lambda increases the effect of the periodic fluctuations.
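For the averaged model of Example 6.1 (taking \lambda = 1 and unit premium rate), the adjustment coefficient and the ruin probability at zero can be checked numerically. The sketch below (plain Python; the function names are ours, and all parameters are those of the example) solves the Lundberg equation \kappa^*(\gamma) = \beta^*(\hat B^*[\gamma] - 1) - \gamma = 0 by bisection and compares \psi^*(0) with \rho = \beta^* \mu_{B^*}:

```python
import math

# Averaged parameters from Example 6.1 (lambda = 1, premium rate 1):
# beta* = 3, B* = mixture of Exp(3) and Exp(7) with weights 1/2, 1/2.
beta_star = 3.0

def mgf_Bstar(a):
    # m.g.f. of the averaged claim distribution B* (defined for a < 3)
    return 0.5 * 3 / (3 - a) + 0.5 * 7 / (7 - a)

def kappa_star(a):
    # c.g.f. of the averaged compound Poisson model
    return beta_star * (mgf_Bstar(a) - 1.0) - a

def adjustment_coefficient():
    # kappa* is convex with kappa*(0) = 0; bisect on a bracket of the
    # positive root (sign change between 0.5 and 2.0 for these parameters)
    lo, hi = 0.5, 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if kappa_star(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def psi_star(u):
    # explicit ruin probability of the averaged model quoted in the text
    return 24 / 35 * math.exp(-u) + 1 / 35 * math.exp(-6 * u)

gamma = adjustment_coefficient()
rho = beta_star * (0.5 / 3 + 0.5 / 7)   # beta* times mean of B*, = 5/7
```

Here \gamma = 1 (so the exponent in \psi^*(u) above is indeed the adjustment coefficient) and \psi^*(0) = 25/35 = \rho = 5/7, as it must be for a compound Poisson model with unit premium.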
As usual, \tau(u) = \inf\{t \ge 0 : S_t > u\} is the time to ruin, and the ruin probabilities are

\psi^{(s)}(u) = P^{(s)}(\tau(u) < \infty), \qquad \psi^{(s)}(u,T) = P^{(s)}(\tau(u) \le T).

The claim surplus process \{S_t\} may be seen as a Markov additive process, with the underlying Markov process \{J_t\} being deterministic periodic motion on E = [0,1),

J_t = (s + t) \bmod 1, \quad P^{(s)}\text{-a.s.}   (6.4)

At a first sight this point of view may appear quite artificial, but it turns out to have obvious benefits in terms of guidelining the analysis of the model as a parallel of the analysis for the Markovian environment risk process.

Notes and references The model has been studied in risk theory by, e.g., Daykin et al. [101], Dassios & Embrechts [98] and Asmussen & Rolski [43], [44] (the literature in the mathematically equivalent setting of queueing theory is somewhat more extensive; see the Notes to Section 7). The exposition of the present chapter is basically an extract from [44], with some variants in the proofs.

6b Lundberg conjugation

Motivated by the discussion in Chapter II.5, we start by deriving formulas giving the m.g.f. of the claim surplus process. To this end, let

\kappa^*(\alpha) = \int_s^{s+1} \beta(v)\big(\hat B^{(v)}[\alpha] - 1\big)\,dv - \alpha = \beta^*(\hat B^*[\alpha] - 1) - \alpha

be the c.g.f. of the averaged compound Poisson model (the last expression is independent of s by periodicity), and define

h(s, \alpha) = \exp\Big\{ -\int_0^s \big[ \beta(v)(\hat B^{(v)}[\alpha] - 1) - \alpha - \kappa^*(\alpha) \big]\,dv \Big\}.

Theorem 6.3 E^{(s)} e^{\alpha S_t} = \dfrac{h(s,\alpha)}{h(s+t,\alpha)}\, e^{t\kappa^*(\alpha)}. In particular, h(\cdot, \alpha) is periodic on R.

Proof Conditioning upon whether a claim occurs in [t, t+dt] or not, we obtain

E^{(s)}\big[e^{\alpha S_{t+dt}} \mid F_t\big] = (1 - \beta(s+t)dt)\, e^{\alpha(S_t - dt)} + \beta(s+t)dt\, e^{\alpha(S_t - dt)} \hat B^{(s+t)}[\alpha]
= e^{\alpha S_t}\big( 1 - \alpha\,dt + \beta(s+t)[\hat B^{(s+t)}[\alpha] - 1]\,dt \big),

so that

\frac{d}{dt} \log E^{(s)} e^{\alpha S_t} = -\alpha + \beta(s+t)\big[\hat B^{(s+t)}[\alpha] - 1\big];

integrating from 0 to t yields \log E^{(s)} e^{\alpha S_t} = t\kappa^*(\alpha) + \log h(s,\alpha) - \log h(s+t,\alpha). \square

Corollary 6.4 For each \theta such that the integrals in the definition of h(t, \theta) exist and are finite,

L_{\theta,t} = \frac{h(s+t, \theta)}{h(s, \theta)}\, e^{\theta S_t - t\kappa^*(\theta)}

is a P^{(s)}-martingale with mean one.

Proof In the Markov additive sense of (6.4), we can write L_{\theta,t} = \frac{h(J_t, \theta)}{h(J_0, \theta)} e^{\theta S_t - t\kappa^*(\theta)}, so that obviously \{L_{\theta,t}\} is a multiplicative functional for the Markov process \{(J_t, S_t)\}. It then suffices (cf. II.5) to note that E^{(s)} L_{\theta,t} = 1, which follows from Theorem 6.3. \square

Remark 6.5 The formula for h(s) = h(s,\alpha), as well as the fact that \kappa = \kappa^*(\alpha) is the correct exponential growth rate of E e^{\alpha S_t}, can also be derived as follows. With G the infinitesimal generator of \{X_t\} = \{(J_t, S_t)\} and h_\alpha(s,y) = e^{\alpha y} h(s,\alpha), the requirement is G h_\alpha(s, 0) = \kappa h(s). As above,

E^{(s)} h_\alpha(J_{dt}, S_{dt}) = h(s + dt)\, e^{-\alpha\,dt}(1 - \beta(s)dt) + \beta(s)dt\, \hat B^{(s)}[\alpha] h(s)
= h(s) + dt\,\big\{ -\alpha h(s) - \beta(s)h(s) + h'(s) + \beta(s)\hat B^{(s)}[\alpha] h(s) \big\}.

Equating this to h(s) + \kappa h(s)\,dt and dividing by h(s) yields

\frac{h'(s)}{h(s)} = \kappa + \alpha - \beta(s)\big(\hat B^{(s)}[\alpha] - 1\big),

and hence the asserted formula for h (normalizing by h(0) = 1). That \kappa = \kappa^*(\alpha) then follows by noting that h(1) = h(0) by periodicity. \square

Theorem 6.6 For each \theta satisfying the conditions of Corollary 6.4, there exist governing probability measures \tilde P^{(s)}, say, corresponding to a new periodic risk model with parameters

\beta_\theta(t) = \beta(t)\hat B^{(t)}[\theta], \qquad B_\theta^{(t)}(dx) = \frac{e^{\theta x}}{\hat B^{(t)}[\theta]}\, B^{(t)}(dx),

such that for any s and T < \infty, the restrictions of P^{(s)} and \tilde P^{(s)} to F_T are equivalent with likelihood ratio L_{\theta,T}.

Proof Since \{L_{\theta,t}\} is a mean-one multiplicative functional, we can define a new Markov process \{(J_t, S_t)\} with governing probability measures \tilde P^{(s)} such that the restrictions of P^{(s)} and \tilde P^{(s)} to F_T are equivalent with likelihood ratio L_{\theta,T}. That the new measures correspond to the asserted periodic risk model can be seen in several ways: (i) check that the m.g.f. of S_t is as for the asserted periodic risk model; (ii) use Markov-modulated approximations (Section 6c); (iii) use approximations with piecewise constant \beta(s), B^{(s)}; see [44] for a formal proof. \square

Now define \gamma as the positive solution of the Lundberg equation for the averaged model; that is, \gamma solves \kappa^*(\gamma) = 0. When \alpha = \gamma, we put for short h(s) = h(s, \gamma). A further important constant is the value \gamma_0 (located in (0, \gamma)) at which \kappa^*(\alpha) attains its minimum; that is, \gamma_0 is determined by

0 = \kappa^{*\prime}(\gamma_0) = \beta^* \hat B^{*\prime}[\gamma_0] - 1.

Lemma 6.7 When \alpha > \gamma_0, \tilde P^{(s)}(\tau(u) < \infty) = 1 for all u > 0.

Proof For the tilted model, the average net claim amount per unit time is

\rho_\alpha = \int_0^1 \beta_\alpha(v)\, \mu_{B_\alpha}^{(v)}\,dv = \int_0^1 \beta(v)\,dv \int_0^\infty x e^{\alpha x} B^{(v)}(dx) = \beta^* \hat B^{*\prime}[\alpha] = \kappa^{*\prime}(\alpha) + 1,

which is > 1 by convexity when \alpha > \gamma_0. Thus the tilted model has positive drift and ruin occurs a.s. \square

The relevant likelihood ratio representation of the ruin probabilities now follows immediately:

Theorem 6.8 The ruin probabilities can be computed as

\psi^{(s)}(u, T) = h(s,\alpha)\, e^{-\alpha u}\, \tilde E^{(s)}\Big[ \frac{e^{-\alpha \xi(u) + \tau(u)\kappa^*(\alpha)}}{h(\theta(u), \alpha)};\ \tau(u) \le T \Big],   (6.7)

\psi^{(s)}(u) = h(s,\alpha)\, e^{-\alpha u}\, \tilde E^{(s)}\Big[ \frac{e^{-\alpha \xi(u) + \tau(u)\kappa^*(\alpha)}}{h(\theta(u), \alpha)} \Big], \quad \alpha > \gamma_0,   (6.8)

\psi^{(s)}(u) = h(s)\, e^{-\gamma u}\, \tilde E^{(s)}\Big[ \frac{e^{-\gamma \xi(u)}}{h(\theta(u))} \Big].   (6.9)

Here and in the following, \xi(u) = S_{\tau(u)} - u is the overshoot and \theta(u) = (\tau(u) + s) \bmod 1 the season at the time of ruin.

To obtain the Cramer-Lundberg approximation, we need the following auxiliary result. Its proof involves machinery from the ergodic theory of Markov chains on a general state space, which is not used elsewhere in the book, and we refer to [44].

Lemma 6.9 Assume that there exist open intervals I \subset [0,1), J \subset R_+ such that the B^{(s)}, s \in I, have components with densities b^{(s)}(x) satisfying

\inf_{s \in I,\ x \in J} \beta(s) b^{(s)}(x) > 0.   (6.10)

Then for each \alpha, the Markov process \{(\xi(u), \theta(u))\}_{u \ge 0}, considered with governing probability measures \{\tilde P^{(s)}\}_{s \in [0,1)}, has a unique stationary distribution, say the distribution of (\xi(\infty), \theta(\infty)), and no matter what the initial season s, (\xi(u), \theta(u)) \to (\xi(\infty), \theta(\infty)) weakly.

Letting u \to \infty in (6.9), and noting that weak convergence entails convergence of E f(\xi(u), \theta(u)) for any bounded continuous function f (e.g. f(x,q) = e^{-\gamma x}/h(q)), we get:
6 for the Markovmodulated model: Theorem 6 . where e. this provides an algorithm for computing C as a limit.W. 11 7/'O (u) < C+°)h(s) ery". we obtain immediately the following version of Lundberg ' s inequality which is a direct parallel of the result given in Corollary 3.11) Note that ( 6.ir) } Plots of h for different values of A are given in Fig.) C = E1 h(B(oo)) u + oo.16.9).Ch(s)ery". Noting that ^(u) > 0 in ( 6. which may provide one among many motivations for the Markovmodulated approximation procedure to be considered in Section 6c. MARKOVIAN ENVIRONMENT Theorem 6. (6. 10 shows that certainly ry is the correct Lundberg exponent.11) gives an interpretation of h(s ) as a measure of how the risks of different initial seasons s vary. it does not seem within the range of our methods to compute C explicitly.1.1. Among other things.1 In contrast to h. illustrating that the effect of seasonality increases with A. For our basic Example 6 . Vi(8) (u) . 6.10) of Lemma 3. At this stage . A=1/4 A=1 A=4 0 Figure 6.10 Under the condition (6. elementary calculus yields h(s) = exp { A C 2^ cos 2irs  4^ sin 21rs + 11 cos 41rs . where C(o) = 1 + info < t<i h(t) . 1. Theorem 6 .182 CHAPTER VI.
e7 ( yx)B(t)(dy) > Then for all $ E [0. We state the results below.(ay) > 0 when y < 1/ic' (7).(8) (u. e. where ay is the unique solution of W(ay) =y• (6.yr.12 Let 00)(y) 1 Then info < t<i h(t. Lundberg's inequality can be con siderably sharpened and extended.3x + (1 . we first note that the function fu° exu {w • 3e .42 so that 183 tp(8) (u) < 1.13) Elementary convexity arguments show that we always have ryy > Y and ay > ry.13 to our basic example. (6. Theorem 6.47r sin 27rs + 167r cos 47rs .4.w)e4u ..167r I Cu. in our basic example with A = 1. we substitute T = yu in 0(u.7x j dx _7x } _ 6w + 6(1 . r. T) and replace the Lundberg exponent ry by ryy = ay . whereas ay < y. Consider first the timedependent version of Lundberg's inequality.w) .17) (6. (ay).15) The next result improves upon the constant C+) in front of eryu in Theorem 6. Just as in IV. 1 (6.16 In order to apply Theorem 6.14) < C+)(y)h(s) e7yu.7e .13 Let = 1 B(t) C o<tf i h(t) 2no f °O e'r(Yx)B( t) (dy)' (x) x 1 B(t) (x) C+ = sup sup o<t<i h ( t) xo J. Theorem 6 .w ) • 7e u{w • 3e3x + ( 1 .w)e4u dx 9w + 7(1 .11 as well as it supplements with a lower bound. RISK THEORY IN A PERIODIC ENVIRONMENT Thus.42 • exp {J_ cos 27rs . 1 ) and all u > 0. ay) • (6. .12) As for the Markovian environment model. yu) 000 (u) .6. we obtain Co) = 1.0(8) (u+ yu) (6. #c( ay) < 0 when y > 1/tc'('y).(s)(u) < C+h(s)e7".g. the proofs are basically the same as in Section 3 and we refer to [44] for details. C_h(s)e7u < V.
66. for A = 1 (where 3 e0. 1/i18 1 s (u) > 0. 1) for the environment). much of the analysis of the preceding section is modelled after the techniques developed in the preceding sections for the case of a finite E. 14 Let C+('yo) be as in (6. completing a cycle . The idea is basically to approximate the (deterministic) continuous clock by a discrete (random) Markovian one with n 'months'.T) < C+('Yo)h( s. but thereby also slightly longer. the nth Markovian environmental process {Jt} moves cyclically on {1.013. Then 7oudT .\ = 0 . Thus. Thus C_ = 2 inf ex cos 2irs . MARKOVIAN ENVIRONMENT attains its minimum 2 /3 for u = oo and its maximum 6 /(7 + 2w) for u = 0.19 I eu.I eu. with s the initial season.(8)(u. Some of the present proofs are more elementary by avoiding the general point process machinery of [44]. and let 8 = er' (Y0).9 3 0<8<1 p 27r 47r 167r 161r 2 _ _e.20).20 •exp { 2n cos 27rs .013.cos 27rs . such a deterministic periodic environment may be seen as a special case of a Markovian one (allowing a continuous state space E = [0. 6c Markovmodulated approximations A periodic risk model may be seen as a varying environment model. . 1).184 CHAPTER VI. Of course.. This observation motivates to look for a more formal connection between the periodic model and the one evolving in a finite Markovian environment. exp 2^ cos 21rs . 0 <'p(8)(u ) .16) with 'y replaced by yo and h(t) by h(t.18) Notes and references The material is from Asmussen & Rolski [44].L sin 27rs + 1 I cos 47rs . we have the following result: Theorem 6 .66. n}.1 sin 2irs + 16_ cos 47rs . Finally. C+ = 1. where the environment at time t is (s + t) mod 1 E [0.4^ sin 2irs + 16^ cos 41rs .1 sin 27rs + 1 cos 47rs .\ 3 C+ = sup 6 exp { A (. .0..19 } 0 <8<1 8 + cos 21rs Thus e.181 s(u) < 1.16. and in fact. yo). .'Yo)e (6.g.
within one unit of time on the average, completing a cycle. Thus, the nth Markovian environmental process \{J_t^{(n)}\} moves cyclically on \{1, \ldots, n\}, so that the intensity matrix is \Lambda^{(n)} given by

\Lambda^{(n)} = \begin{pmatrix} -n & n & 0 & \cdots & 0 \\ 0 & -n & n & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ n & 0 & 0 & \cdots & -n \end{pmatrix}.   (6.19)

Arrivals occur at rate \beta_{ni} and their claim sizes are distributed according to B_{ni} if the governing Markov process is in state i. We want to choose the \beta_{ni} and B_{ni} in order to achieve good convergence to the periodic model. Obviously, one simple choice is \beta_{ni} = \beta((i-1)/n), B_{ni} = B^{((i-1)/n)}, but others are also possible. We let

\{S_t^{(n)}\}_{t \ge 0}   (6.20)

be the claim surplus process of the nth approximating Markov-modulated model, M^{(n)} = \sup_{t \ge 0} S_t^{(n)}, and the ruin probability corresponding to the initial state i of the environment is then

\psi_i^{(n)}(u) = P(M^{(n)} > u),   (6.21)

which serves as an approximation to \psi^{(s)}(u) whenever n is large and i/n \approx s.

Notes and references See Rolski [306].

7 Dual queueing models

The essence of the results of the present section is that the ruin probabilities \psi_i(u), \psi_i(u,T) can be expressed in a simple way in terms of the waiting time probabilities of a queueing system with the input being the time-reversed input of the risk process. Thus, since the settings are equivalent from a mathematical point of view, it is desirable to have formulas permitting freely to translate from one setting into the other. To this end, let \beta_i, B_i, \Lambda be the parameters defining the risk process in a random environment and consider a queueing system governed by a Markov process \{J_t^*\} ('Markov-modulated') as follows:

- The intensity matrix for \{J_t^*\} is the time-reversed intensity matrix \Lambda^* = (\lambda_{ij}^*)_{i,j \in E} of the risk process, \lambda_{ij}^* = \lambda_{ji}\pi_j/\pi_i;
- The arrival intensity is \beta_i when J_t^* = i;
- Customers arriving when J_t^* = i have service time distribution B_i;
- The queueing discipline is FIFO.

This queue is commonly denoted as the Markov-modulated M/G/1 queue and has received considerable attention in the last decade. The actual waiting time process \{W_n\}_{n=1,2,\ldots} and the virtual waiting time (workload) process \{V_t\}_{t \ge 0} are defined exactly as for the renewal model in Chapter V.

Proposition 7.1 Assume V_0 = 0. Then

\pi_i P_i(\tau(u) \le T,\ J_T = j) = \pi_j P_j(V_T > u,\ J_T^* = i).   (7.1)

In particular,

\psi_i(u, T) = \frac{1}{\pi_i} P_\pi(V_T > u,\ J_T^* = i),   (7.2)

\psi_i(u) = \frac{1}{\pi_i} P(V > u,\ J^* = i),   (7.3)

where (V, J^*) is the steady-state limit of (V_t, J_t^*).

Proof Consider stationary versions of \{J_t\}_{0 \le t \le T}, \{J_t^*\}_{0 \le t \le T}. Then we may assume that J_t^* = J_{T-t}, 0 \le t \le T, and that the risk process \{R_t\}_{0 \le t \le T} is coupled to the virtual waiting process \{V_t\}_{0 \le t \le T} as in the basic duality lemma (Theorem II.3.1). The first conclusion of that result then states that the events \{\tau(u) \le T, J_0 = i, J_T = j\} and \{V_T > u, J_0^* = j, J_T^* = i\} coincide. Taking probabilities and using the stationarity yields (7.1). For (7.2), just sum (7.1) over j, and for (7.3), let T \to \infty in (7.2) and use that \lim P_j(V_T > u, J_T^* = i) = P(V > u, J^* = i) for all j. \square

Now let I_n denote the environment when customer n arrives and I^* its steady-state limit.

Proposition 7.2 The relation between the steady-state distributions of the actual and the virtual waiting time is given by

P(W > u,\ I^* = i) = \frac{\beta_i}{\beta^*} P(V > u,\ J^* = i),   (7.4)

where \beta^* = \sum_{j \in E} \pi_j \beta_j. In particular,

\psi_i(u) = \frac{\beta^*}{\pi_i \beta_i} P(W > u,\ I^* = i).   (7.5)

Proof If T is large, on average \beta^* T customers arrive in [0,T], and of these, on average \beta_i T\, P(V > u, J^* = i) arrive in environment i seeing a workload exceeding u; taking the ratio yields (7.4). More formally, \frac{1}{N}\sum_{n=1}^N I(W_n > u, I_n = i) \to P(W > u, I^* = i) a.s. as N \to \infty, identifying P(W > u, I^* = i) with the time-average; a general formalism allowing this type of conclusion is 'conditional PASTA'. The relation (7.5) then follows from (7.3) and (7.4). \square

In the setting of the periodic model of Section 6, the dual queueing model is a periodic M/G/1 queue with arrival rate \beta(-t) and service time distribution B^{(-t)} at time t of the year (assuming w.l.o.g. that \beta(t), B^{(t)} have been periodically extended to negative t). With \{V_t\} denoting the workload process of the periodic queue, one has

\psi^{(s)}(u, T) = P^{(s)}(\tau(u) \le T) = P^{(s-T)}(V_T > u).   (7.7)

Here \rho < 1 ensures that V_\infty^{(s)} = \lim_{N \to \infty} V_{N+s} exists in distribution (with N \to \infty through the integers), and

\psi^{(s)}(u) = P^{(s)}(\tau(u) < \infty) = P(V_\infty^{(s)} > u).   (7.8)

Notes and references One of the earliest papers drawing attention to the Markov-modulated M/G/1 queue is Burman & Smith [84]. The first comprehensive solution of the waiting time problem is Regterschot & de Smit [301], a paper relying heavily on classical complex plane methods. A more probabilistic treatment was given by Asmussen [17]. Proposition 7.1 is from Asmussen [16], with (7.3) improving somewhat upon (2.7) of that paper. The relation (7.4) can be found in Regterschot & de Smit [301]; for 'conditional PASTA', see Regterschot & van Doorn [123]. For treatments of the periodic M/G/1 queue, see in particular Harrison & Lemoine [186], Lemoine [242], [243], and Rolski [306]; further references (to which we add Prabhu & Zhu [296]) can be found there.
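The duality between ruin probabilities and workload probabilities is easy to illustrate by simulation. The minimal sketch below (plain Python; the parameter values are illustrative choices, not taken from the text) specializes to a single environment state, i.e. the compound Poisson model with unit premium and exponential claims, and checks the finite-horizon identity \psi(u,T) = P(V_T > u) with V_0 = 0 by estimating both sides by Monte Carlo:

```python
import random, math

random.seed(1)
beta, delta, T, u, n = 0.7, 1.0, 30.0, 1.5, 20000   # illustrative parameters

def arrivals():
    # Poisson(beta) arrival epochs on [0, T] with Exp(delta) claim sizes
    t, out = 0.0, []
    while True:
        t += random.expovariate(beta)
        if t > T:
            return out
        out.append((t, random.expovariate(delta)))

def ruined(u):
    # with unit premium, ruin before T <=> the claim surplus A_t - t
    # exceeds u at some claim epoch (the maxima occur just after claims)
    s = 0.0
    for t, x in arrivals():
        s += x
        if s - t > u:
            return True
    return False

def workload_exceeds(u):
    # virtual waiting time V_T of the dual M/M/1-type queue, V_0 = 0,
    # decreasing at unit rate between arrivals and reflected at 0
    v, last = 0.0, 0.0
    for t, x in arrivals():
        v = max(v - (t - last), 0.0) + x
        last = t
    return max(v - (T - last), 0.0) > u

p_ruin = sum(ruined(u) for _ in range(n)) / n
p_work = sum(workload_exceeds(u) for _ in range(n)) / n
```

The two estimates agree within Monte Carlo error, and for large T both approach the classical \psi(u) = (\beta/\delta) e^{-(\delta-\beta)u}.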
Chapter VII

Premiums depending on the current reserve

1 Introduction

We assume as in Chapter III that the claim arrival process \{N_t\} is Poisson with rate \beta, and that the claim sizes U_1, U_2, \ldots are i.i.d. with common distribution B and independent of \{N_t\}. Thus, the aggregate claims in [0,t] are

A_t = \sum_{i=1}^{N_t} U_i   (1.1)

(other terms are accumulated claims or total claims). However, the premium charged is assumed to depend upon the current reserve R_t, so that the premium rate is p(r) when R_t = r. Thus in between jumps, \{R_t\} moves according to the differential equation \dot R = p(R), and the evolution of the reserve may be described by the equation

R_t = u - A_t + \int_0^t p(R_s)\,ds.   (1.2)

As earlier,

\psi(u) = P\Big( \inf_{t \ge 0} R_t < 0 \,\Big|\, R_0 = u \Big), \qquad \psi(u,T) = P\Big( \inf_{0 \le t \le T} R_t < 0 \,\Big|\, R_0 = u \Big)

denote the ruin probabilities with initial reserve u and infinite, resp. finite, horizon, and \tau(u) = \inf\{t \ge 0 : R_t < 0\} is the time to ruin starting from R_0 = u, so that \psi(u) = P(\tau(u) < \infty), \psi(u,T) = P(\tau(u) \le T).
The following examples provide some main motivation for studying the model:

Example 1.1 Assume that the company reduces the premium rate from p_1 to p_2 when the reserve comes above some critical value v. That is, p_1 > p_2 and p(r) = p_1 for r \le v, p(r) = p_2 for r > v. One reason could be competition, where one would try to attract new customers as soon as the business has become reasonably safe. Another could be the payout of dividends: here the premium paid by the policy holders is the same for all r, but when the reserve comes above v, dividends are paid out at rate p_1 - p_2. \square

Example 1.2 (INTEREST) If the company charges a constant premium rate p but invests its money at interest rate \epsilon, we get p(r) = p + \epsilon r. \square

Example 1.3 (ABSOLUTE RUIN) Consider the same situation as in Example 1.2, but assume now that the company borrows the deficit in the bank when the reserve goes negative, say at interest rate \delta. Thus at deficit x > 0 (meaning R_t = -x), the payout rate of interest is \delta x, and absolute ruin occurs when this exceeds the premium inflow p, i.e. when x > p/\delta. In this situation, we can put \tilde R_t = R_t + p/\delta,

\tilde p(r) = \begin{cases} p + \epsilon(r - p/\delta), & r \ge p/\delta, \\ p - \delta(p/\delta - r), & 0 \le r < p/\delta. \end{cases}

Then the ruin problem for \{\tilde R_t\} is of the type defined above, and the probability of absolute ruin with initial reserve u \in [-p/\delta, \infty) is given by \tilde\psi(u + p/\delta). \square

Now return to the general model. A basic question is thus which premium rules p(r) ensure that \psi(u) < 1. No tractable necessary and sufficient condition is known in complete generality of the model. However, it seems reasonable to assume monotonicity (p(r) is decreasing in Example 1.1 and increasing in Example 1.2), say that p(r) is monotone for r sufficiently large so that p(\infty) = \lim_{r \to \infty} p(r) exists.

Proposition 1.4 Either \psi(u) = 1 for all u, or \psi(u) < 1 for all u.

Proof Obviously \psi(u) \le \psi(v) when u \ge v. Assume \psi(u) < 1 for some u. If R_0 = v < u, there is positive probability, say e, that \{R_t\} will reach level u before the first claim arrives. Hence 1 - \psi(v) \ge e(1 - \psi(u)) > 0, so that \psi(v) < 1. \square

This is basically covered by the following result (but note that the case p(r) \approx \beta\mu_B requires a more detailed analysis, and that \mu_B < \infty is not always necessary for \psi(u) < 1 when p(r) \to \infty):

Theorem 1.5 (a) If p(r) \le \beta\mu_B for all sufficiently large r, then \psi(u) = 1 for all u. (b) If p(r) \ge \beta\mu_B + \epsilon for all sufficiently large r and some \epsilon > 0, then \psi(u) < 1 for all u.

Proof This follows by a simple comparison with the compound Poisson model. Let \psi_p(u) refer to the compound Poisson model with the same \beta, B and (constant) premium rate p. In case (a), let u_0 be chosen such that p(r) \le p = \beta\mu_B for r \ge u_0. Starting from R_0 = u_0, the probability that R_t \le u_0 for some t is at least \psi_p(0) = 1 (cf. Proposition III.1.2(d)); hence R_t \le u_0 for a whole sequence of t's converging to \infty, and hence, by a geometric trials argument, \psi(u_0) = 1, so that \psi(u) = 1 for all u by Proposition 1.4. In case (b), let u_0 be chosen such that p(r) \ge p = \beta\mu_B + \epsilon for r \ge u_0. Then if u \ge u_0, we have \psi(u) \le \psi_p(u - u_0) < 1, appealing to Proposition III.1.2(d) once more, and obviously \inf_{u \le u_0}(1 - \psi(u)) > 0. Hence \psi(u) < 1 for all u by Proposition 1.4. \square

We next recall the following results, which were proved in II.3 and [APQ] pp. 296-297, respectively. Here \{V_t\}_{t \ge 0} is a storage process which has reflection at zero and initial condition V_0 = 0: instead of (1.2) we have

V_t = A_t - \int_0^t p(V_s)\,ds,   (1.4)

where we use the convention p(0) = 0 to make zero a reflecting barrier (when hitting 0, \{V_t\} remains at 0 until the next arrival; in between jumps, \{V_t\} decreases at rate p(v) when V_t = v, i.e. \dot V = -p(V)).

Theorem 1.6 For any T < \infty,

\psi(u, T) = P(V_T > u).   (1.5)

In particular, \psi(u) < 1 for all u if and only if \{V_t\} has a proper limit V in distribution, and then

\psi(u) = P(V > u).   (1.6)

Proof On [0,T], one can couple the risk process and the storage process in such a way that the events \{\tau(u) \le T\} and \{V_T > u\} coincide; this yields (1.5), and (1.6) follows by letting T \to \infty. \square

In order to make Theorem 1.
6 applicable, we thus need to look more into the stationary distribution G of V. It is intuitively obvious, and not too hard to prove, that G is a mixture of two components, one having an atom at 0 of size \gamma_0, say, and the other being given by a density g(x) on (0, \infty). It follows in particular that \psi(u) = \int_u^\infty g(y)\,dy. Define

\omega(x) = \int_0^x \frac{1}{p(t)}\,dt;   (1.7)

then \omega(x) is the time it takes for the reserve to reach level x provided it starts with R_0 = 0 and no claims arrive. Note that it may happen that \omega(x) = \infty for all x > 0, say if p(r) goes to 0 at rate 1/r or faster as r \downarrow 0.

Proposition 1.7

p(x)g(x) = \gamma_0 \beta \bar B(x) + \beta \int_0^x \bar B(x - y) g(y)\,dy.   (1.8)

Proof In stationarity, the flow of mass from [0, x] to (x, \infty) must be the same as the flow the other way. In view of the path structure of \{V_t\}, an attempt of an upcrossing of level x occurs as result of an arrival, say when \{V_t\} is in state y, and is succesful if the jump size is larger than x - y. Considering the cases y = 0 and 0 < y \le x separately, we arrive at the desired interpretation of the r.h.s. of (1.8) as the rate of upcrossings. Now obviously, the l.h.s. of (1.8) is the rate of downcrossings (the event of an arrival in [t, t+dt] can be neglected, so that a path of \{V_t\} corresponds to a downcrossing in [t, t+dt] if and only if V_t \in [x, x + p(x)dt]). \square

Corollary 1.8 Assume that B is exponential with rate \delta, \bar B(x) = e^{-\delta x}, and that \omega(x) < \infty for all x > 0. Then the ruin probability is \psi(u) = \int_u^\infty g(y)\,dy, where

g(x) = \frac{\gamma_0 \beta}{p(x)} \exp\{\beta\omega(x) - \delta x\}, \qquad \frac{1}{\gamma_0} = 1 + \int_0^\infty \frac{\beta}{p(x)} \exp\{\beta\omega(x) - \delta x\}\,dx.   (1.9)

Proof We may rewrite (1.8) as

g(x) = \frac{\beta}{p(x)} \Big\{ \gamma_0 e^{-\delta x} + e^{-\delta x} \int_0^x e^{\delta y} g(y)\,dy \Big\} = \frac{\beta}{p(x)}\, e^{-\delta x} \kappa(x),

where \kappa(x) = \gamma_0 + \int_0^x e^{\delta y} g(y)\,dy, so that

\kappa'(x) = e^{\delta x} g(x) = \frac{\beta}{p(x)} \kappa(x).

Thus

\log \kappa(x) = \log \kappa(0) + \int_0^x \frac{\beta}{p(t)}\,dt = \log \kappa(0) + \beta\omega(x),

\kappa(x) = \kappa(0) e^{\beta\omega(x)} = \gamma_0 e^{\beta\omega(x)}, \qquad g(x) = \frac{\beta}{p(x)} e^{-\delta x} \kappa(x) = \frac{\gamma_0 \beta}{p(x)} e^{\beta\omega(x) - \delta x},

which is the same as the expression in (1.9). That \gamma_0 has the asserted value is a consequence of 1 = \|G\| = \gamma_0 + \int_0^\infty g. \square

Remark 1.9 The exponential case in Corollary 1.8 is the only one in which explicit formulas are known (or almost so; see further the notes to Section 2), and thus it becomes important to develop algorithms for computing the ruin probabilities. We next outline one possible approach based upon the integral equation (1.8) (another one is based upon numerical solution of a system of differential equations which can be derived under phase-type assumptions, see further VIII.7). A Volterra integral equation has the general form

g(x) = h(x) + \int_0^x K(x, y) g(y)\,dy,   (1.10)
where g(x) is an unknown function (x \ge 0), h(x) is known and K(x,y) is a suitable kernel. Dividing (1.8) by p(x) and letting

K(x, y) = \frac{\beta \bar B(x - y)}{p(x)}, \qquad h(x) = \frac{\gamma_0 \beta \bar B(x)}{p(x)},

we see that for fixed \gamma_0, the function g(x) in (1.8) satisfies (1.10). For the purpose of explicit computation of g(x) (and thereby \psi(u)), the general theory of Volterra equations does not seem to lead beyond the exponential case already treated in Corollary 1.8. However, one might try instead a numerical solution. We consider the simplest possible approach, based upon the most basic numerical integration procedure, the trapezoidal rule

\int_{x_0}^{x_N} f(x)\,dx \approx \frac{h}{2}\big[ f(x_0) + 2f(x_1) + 2f(x_2) + \cdots + 2f(x_{N-1}) + f(x_N) \big],

where x_k = x_0 + kh. Fixing h > 0, letting x_0 = 0 (i.e. x_k = kh) and writing g_k = g(x_k), K_{k,l} = K(x_k, x_l), this leads to

g_N = h_N + \frac{h}{2}\{K_{N,0}g_0 + K_{N,N}g_N\} + h\{K_{N,1}g_1 + \cdots + K_{N,N-1}g_{N-1}\},

i.e.

g_N = \frac{h_N + \frac{h}{2}K_{N,0}g_0 + h\{K_{N,1}g_1 + \cdots + K_{N,N-1}g_{N-1}\}}{1 - \frac{h}{2}K_{N,N}}.   (1.11)

In the case of (1.8), the unknown \gamma_0 is involved. However, (1.11) is easily seen to be linear in \gamma_0. One therefore first makes a trial solution g^*(x) corresponding to \gamma_0 = 1, i.e. h(x) = h^*(x) = \beta\bar B(x)/p(x), and computes \int_0^\infty g^*(x)\,dx numerically (by truncation and using the g_k^*). Then g(x) = \gamma_0 g^*(x), and \|G\| = 1 then yields

\frac{1}{\gamma_0} = 1 + \int_0^\infty g^*(x)\,dx,   (1.12)

from which \gamma_0 and hence g(x) and \psi(u) can be computed.
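A minimal implementation of this scheme is sketched below (plain Python; the function name is ours, and we assume p(x) is bounded away from 0 so that the kernel is well-defined). It builds the trial solution g^* by (1.11), recovers \gamma_0 from (1.12), and returns \psi(u):

```python
import math

def ruin_prob_volterra(beta, Bbar, p, h=0.01, xmax=20.0):
    """Solve (1.8) numerically by the trapezoidal scheme (1.11)-(1.12).

    beta: arrival rate; Bbar: claim-size tail Bbar(x); p: premium rule
    p(x), assumed bounded away from 0.  Returns a function u -> psi(u).
    """
    N = int(xmax / h)
    x = [k * h for k in range(N + 1)]
    g = [beta * Bbar(0.0) / p(0.0)]          # g*_0 = h*(0), gamma_0 = 1
    for n in range(1, N + 1):
        s = h / 2 * beta * Bbar(x[n]) / p(x[n]) * g[0]      # (h/2) K_{n,0} g_0
        for l in range(1, n):
            s += h * beta * Bbar(x[n] - x[l]) / p(x[n]) * g[l]
        h_n = beta * Bbar(x[n]) / p(x[n])    # trial h*(x_n)
        K_nn = beta * Bbar(0.0) / p(x[n])
        g.append((h_n + s) / (1 - h / 2 * K_nn))            # (1.11)
    # gamma_0 from (1.12); integrals again by the trapezoidal rule
    I = h * (g[0] / 2 + sum(g[1:-1]) + g[-1] / 2)
    gamma0 = 1.0 / (1.0 + I)
    def psi(u):
        k0 = min(round(u / h), N)
        tail = h * (g[k0] / 2 + sum(g[k0 + 1:-1]) + g[-1] / 2)
        return gamma0 * tail
    return psi
```

As a sanity check, a constant premium p(x) = p reduces the model to the classical compound Poisson case, where for exponential claims \psi(u) = (\beta/(p\delta)) e^{-(\delta - \beta/p)u}.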
1a Two-step premium functions

We now assume the premium function to be constant on two levels as in Example 1.1,

p(r) = \begin{cases} p_1, & r \le v, \\ p_2, & r > v. \end{cases}   (1.13)

We may think of the risk reserve process R_t as pieced together of two risk reserve processes R_t^1 and R_t^2 with constant premiums p_1, p_2, such that R_t coincides with R_t^1 under level v and with R_t^2 above level v. For an example of a sample path, see Fig. 1.1.

Figure 1.1
Proposition 1.10 Let \psi^i(u) denote the ruin probability of \{R_t^i\}, define \sigma = \inf\{t > 0 : R_t < v\}, let p_1(u) be the probability of ruin between \sigma and the next upcrossing of v (including ruin possibly at \sigma), and let

q(u) = \frac{1 - \psi^1(u)}{1 - \psi^1(v)}, \quad 0 \le u \le v.

Then

\psi(u) = \begin{cases} 1 - q(u) + q(u)\psi(v), & 0 \le u < v, \\[4pt] \dfrac{p_1(v)}{1 + p_1(v) - \psi^2(0)}, & u = v, \\[4pt] p_1(u) + \big(\psi^2(u - v) - p_1(u)\big)\psi(v), & v < u < \infty. \end{cases}   (1.14)
Proof Let \omega = \inf\{t > 0 : R_t = v \text{ or } R_t < 0\} and let q_1(u) = P_u(R_\omega = v) be the probability of upcrossing level v before ruin, given the process starts at u \le v. If we for a moment consider the process under level v, \{R_t^1\}, only, we get \psi^1(u) = 1 - q_1(u) + q_1(u)\psi^1(v). Solving for q_1(u), it follows that q_1(u) = q(u). With this interpretation of q(u), it follows that if u < v, then the probability of ruin is the sum of the probability of being ruined before upcrossing v, 1 - q(u), and the probability of ruin given we hit v first, q(u)\psi(v). Similarly, if u > v, then the probability of ruin is the sum of the probability of being ruined between \sigma and the next upcrossing of v, which is p_1(u), and the probability of ruin given the process hits v before (-\infty, 0) again after \sigma, (P_u(\sigma < \infty) - p_1(u))\psi(v) = (\psi^2(u-v) - p_1(u))\psi(v). This yields the expression for u > v, and the one for u = v then immediately follows. \square

Example 1.11 Assume that B is exponential, \bar B(x) = e^{-\delta x}. Then
\psi^1(u) = \frac{\beta}{p_1\delta} e^{-\gamma_1 u}, \qquad \psi^2(u) = \frac{\beta}{p_2\delta} e^{-\gamma_2 u}, \qquad q(u) = \frac{1 - \frac{\beta}{p_1\delta} e^{-\gamma_1 u}}{1 - \frac{\beta}{p_1\delta} e^{-\gamma_1 v}},

where \gamma_i = \delta - \beta/p_i. Furthermore, for u > v, P(\sigma < \infty) = \psi^2(u - v), and the conditional distribution of v - R_\sigma given \sigma < \infty is exponential with rate \delta. If R_\sigma < 0, ruin occurs at time \sigma. If v - R_\sigma = x \in [0, v], the probability of ruin before the next upcrossing of v is 1 - q(v - x). Hence

p_1(u) = \psi^2(u - v) \Big\{ e^{-\delta v} + \int_0^v (1 - q(v - x))\, \delta e^{-\delta x}\,dx \Big\}
= \psi^2(u - v) \Big\{ e^{-\delta v} + \frac{\beta}{p_1\delta} \cdot \frac{1}{1 - \frac{\beta}{p_1\delta} e^{-\gamma_1 v}} \int_0^v \big( e^{-\gamma_1(v-x)} - e^{-\gamma_1 v} \big)\, \delta e^{-\delta x}\,dx \Big\}.

Evaluating the integral (using \gamma_1 + \beta/p_1 = \delta),

\frac{\beta}{p_1\delta} \int_0^v \big( e^{-\gamma_1(v-x)} - e^{-\gamma_1 v} \big)\, \delta e^{-\delta x}\,dx = \big( e^{-\gamma_1 v} - e^{-\delta v} \big) - \frac{\beta}{p_1\delta}\, e^{-\gamma_1 v} \big( 1 - e^{-\delta v} \big),

so that

p_1(u) = \frac{\beta}{p_2\delta}\, e^{-\gamma_2(u-v)} \Bigg\{ e^{-\delta v} + \frac{ \big( e^{-\gamma_1 v} - e^{-\delta v} \big) - \frac{\beta}{p_1\delta}\, e^{-\gamma_1 v} \big( 1 - e^{-\delta v} \big) }{ 1 - \frac{\beta}{p_1\delta}\, e^{-\gamma_1 v} } \Bigg\}.

\square
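The formulas of Proposition 1.10 and Example 1.11 are straightforward to evaluate numerically. The sketch below (plain Python; the parameter values are illustrative and the helper names are ours) computes \psi(u) for exponential claims, with p_1(u) obtained by numerical integration of the integral expression above. As a consistency check, taking p_1 = p_2 = p must reproduce the classical compound Poisson ruin probability (\beta/(p\delta)) e^{-(\delta - \beta/p)u}:

```python
import math

def two_step_ruin(beta, delta, p1, p2, v):
    """psi(u) for the two-step premium rule (1.13) with Exp(delta) claims."""
    def psi1(u):
        return beta / (p1 * delta) * math.exp(-(delta - beta / p1) * u)
    def psi2(u):
        return beta / (p2 * delta) * math.exp(-(delta - beta / p2) * u)
    def q(u):
        return (1 - psi1(u)) / (1 - psi1(v))

    def pone(u):
        # p_1(u) = psi2(u-v) { e^{-delta v}
        #                      + int_0^v (1 - q(v-x)) delta e^{-delta x} dx }
        n = 20000
        h = v / n
        f = lambda x: (1 - q(v - x)) * delta * math.exp(-delta * x)
        integral = h * (f(0.0) / 2
                        + sum(f(k * h) for k in range(1, n)) + f(v) / 2)
        return psi2(u - v) * (math.exp(-delta * v) + integral)

    psi_v = pone(v) / (1 + pone(v) - psi2(0.0))   # the u = v case of (1.14)
    def psi(u):
        if u < v:
            return 1 - q(u) + q(u) * psi_v
        return pone(u) + (psi2(u - v) - pone(u)) * psi_v
    return psi
```

With unequal premiums, raising p_2 lowers the ruin probability, as one would expect.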
Also for general phasetype distributions, all quantities in Proposition 1.10 can be found explicitly, see VIII.7.
Notes and references Some early references drawing attention to the model are Dawidson [100] and Segerdahl [332]. For the absolute ruin problem, see Gerber [155] and Dassios & Embrechts [98]. Equation (1.6) was derived by Harrison & Resnick [186] by a different approach, whereas (1.5) is from Asmussen & Schock Petersen [50]; see further the notes to II.3. One would think that it should be possible to derive the representations (1.7), (1.8) of the ruin probabilities without reference to storage processes. No such direct derivation is, however, known to the author. For some explicit solutions beyond Corollary 1.8, see the notes to Section 2. Remark 1.9 is based upon Schock Petersen [288]; for complexity and accuracy aspects, see the Notes to VIII.7. Extensive discussion of the numerical solution of Volterra equations can be found in Baker [57]; see also Jagerman [209], [210].
2 The model with interest
In this section, we assume that p(x) = p + \epsilon x. This example is of particular application relevance because of the interpretation of \epsilon as interest rate. However, it also turns out to have nice mathematical features.
A basic tool is a representation of the ruin probability in terms of a discounted stochastic integral

Z = -\int_0^\infty e^{-\epsilon t}\,dS_t   (2.1)

w.r.t. the claim surplus process S_t = A_t - pt = \sum_{i=1}^{N_t} U_i - pt of the associated compound Poisson model without interest. Write R_t^{(u)} when R_0 = u. We first note that:

Proposition 2.1 R_t^{(u)} = e^{\epsilon t}u + R_t^{(0)}.

Proof The result is obvious if one thinks in economic terms and represents the reserve at time t as the initial reserve u with added interest, plus the gains/deficit from the claims and incoming premiums. For a more formal mathematical proof, note that

dR_t^{(u)} = \big(p + \epsilon R_t^{(u)}\big)dt - dA_t,

d\big[R_t^{(u)} - e^{\epsilon t}u\big] = \big(p + \epsilon \big[R_t^{(u)} - e^{\epsilon t}u\big]\big)dt - dA_t.

Since R_0^{(u)} - e^{\epsilon \cdot 0}u = 0 for all u, R_t^{(u)} - e^{\epsilon t}u must therefore be independent of u, which yields the result. \square

Let

Z_t = e^{-\epsilon t} R_t^{(0)} = e^{-\epsilon t}\Big( \int_0^t \big(p + \epsilon R_s^{(0)}\big)\,ds - A_t \Big).

Then

dZ_t = -\epsilon e^{-\epsilon t}\Big( \int_0^t \big(p + \epsilon R_s^{(0)}\big)\,ds - A_t \Big)dt + e^{-\epsilon t}\Big( \big(p + \epsilon R_t^{(0)}\big)dt - dA_t \Big) = e^{-\epsilon t}\big(p\,dt - dA_t\big) = -e^{-\epsilon t}\,dS_t.

Thus

Z_v = -\int_0^v e^{-\epsilon t}\,dS_t,

where the last integral exists pathwise because \{S_t\} is of locally bounded variation.

Proposition 2.2 The r.v. Z in (2.1) is well-defined and finite, with distribution H(z) = P(Z \le z) given by the m.g.f.

\hat H[\alpha] = E e^{\alpha Z} = \exp\Big\{ \int_0^\infty \kappa\big(\alpha e^{-\epsilon t}\big)\,dt \Big\} = \exp\Big\{ \frac{1}{\epsilon} \int_0^\alpha \frac{\kappa(y)}{y}\,dy \Big\},

where \kappa(\alpha) = \beta(\hat B[\alpha] - 1) - p\alpha. Further Z_t \to Z a.s. as t \to \infty.
Proof Let M_t = A_t - t\beta\mu_B. Then S_t = M_t + t(\beta\mu_B - p), and \{M_t\} is a martingale. From this it follows immediately that \{\int_0^v e^{-\epsilon t}\,dM_t\} is again a martingale. The mean is 0 and (since Var(dM_t) = \beta\mu_B^{(2)}dt)

Var\Big( \int_0^v e^{-\epsilon t}\,dM_t \Big) = \int_0^v e^{-2\epsilon t} \beta\mu_B^{(2)}\,dt = \frac{\beta\mu_B^{(2)}}{2\epsilon}\big(1 - e^{-2\epsilon v}\big).

Hence the limit as v \to \infty exists by the convergence theorem for L_2-bounded martingales, and we have

Z_v = -\int_0^v e^{-\epsilon t}\,dS_t = -\int_0^v e^{-\epsilon t}\big( dM_t + (\beta\mu_B - p)\,dt \big) \to -\int_0^\infty e^{-\epsilon t}\big( dM_t + (\beta\mu_B - p)\,dt \big) = -\int_0^\infty e^{-\epsilon t}\,dS_t = Z.

Now if X_1, X_2, \ldots are i.i.d. with c.g.f. \phi and \rho < 1, we obtain the c.g.f. of \sum_{n=1}^\infty \rho^n X_n at \alpha as

\log E \prod_{n=1}^\infty e^{\alpha\rho^n X_n} = \log \prod_{n=1}^\infty e^{\phi(\alpha\rho^n)} = \sum_{n=1}^\infty \phi(\alpha\rho^n).

Letting \rho = e^{-\epsilon h}, X_n = S_{nh} - S_{(n+1)h}, we have \phi(\alpha) = h\kappa(\alpha), and obtain the c.g.f. of Z = -\int_0^\infty e^{-\epsilon t}\,dS_t as

\lim_{h \downarrow 0} \sum_{n=1}^\infty \phi(\alpha\rho^n) = \lim_{h \downarrow 0} h \sum_{n=1}^\infty \kappa\big(\alpha e^{-\epsilon nh}\big) = \int_0^\infty \kappa\big(\alpha e^{-\epsilon t}\big)\,dt;

the last expression for \hat H[\alpha] follows by the substitution y = \alpha e^{-\epsilon t}. \square

Theorem 2.3 \psi(u) = \dfrac{H(-u)}{E\big[ H\big(-R_{\tau(u)}^{(u)}\big) \,\big|\, \tau(u) < \infty \big]}.
Proof Write \tau = \tau(u) for brevity. On \{\tau < \infty\}, we have

u + Z = (u + Z_\tau) + (Z - Z_\tau) = e^{-\epsilon\tau}\Big[ e^{\epsilon\tau}(u + Z_\tau) - \int_\tau^\infty e^{-\epsilon(t-\tau)}\,dS_t \Big] = e^{-\epsilon\tau}\big[ R_\tau^{(u)} + Z^* \big],

where Z^* = -\int_\tau^\infty e^{-\epsilon(t-\tau)}\,dS_t is independent of F_\tau and distributed as Z. The last equality followed from R_t^{(u)} = e^{\epsilon t}(Z_t + u), cf. Proposition 2.1, which also yields \tau < \infty on \{Z < -u\}. Hence

H(-u) = P(u + Z < 0) = P\big( R_\tau^{(u)} + Z^* < 0;\ \tau < \infty \big) = \psi(u)\, E\big[ P\big( R_\tau^{(u)} + Z^* < 0 \mid F_\tau,\ \tau < \infty \big) \big] = \psi(u)\, E\big[ H\big(-R_{\tau(u)}^{(u)}\big) \mid \tau(u) < \infty \big]. \square
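The representation (2.1) is also convenient for simulation: integrating the premium part explicitly, Z can be generated as p/\epsilon minus the discounted claim sum \sum_i e^{-\epsilon t_i} U_i over the Poisson arrival epochs t_i. The sketch below (plain Python; all parameter values are illustrative) checks the first two moments of the discounted claim sum against the exact values \beta\mu_B/\epsilon and \beta\mu_B^{(2)}/(2\epsilon), which follow from Campbell's formula for Poisson processes:

```python
import random, math

random.seed(7)
beta, delta, eps = 1.0, 2.0, 0.5   # illustrative parameters, Exp(delta) claims
n = 20000

def discounted_claims():
    # sum of e^{-eps t_i} U_i over Poisson(beta) epochs t_i; truncate the
    # series once the discount factor is negligible
    t, s = 0.0, 0.0
    while True:
        t += random.expovariate(beta)
        d = math.exp(-eps * t)
        if d < 1e-12:
            return s
        s += d * random.expovariate(delta)

sample = [discounted_claims() for _ in range(n)]
mean = sum(sample) / n
var = sum((z - mean) ** 2 for z in sample) / n
# exact moments: mean = beta/(delta*eps), variance = beta/(delta**2*eps);
# for exponential claims the sum is in fact Gamma-distributed (this is
# identified in the proof of Corollary 2.4 below)
```

Since Z = p/\epsilon minus this sum, the same run also gives the moments of Z itself.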
Corollary 2.4 Assume that B is exponential, \bar B(x) = e^{-\delta x}, and that p(x) = p + \epsilon x with p > 0. Then

\psi(u) = \frac{ \beta \epsilon^{\beta/\epsilon - 1} e^{\delta p/\epsilon} (\delta p)^{-\beta/\epsilon}\, \Gamma\big( \delta(p + \epsilon u)/\epsilon;\ \beta/\epsilon \big) }{ 1 + \beta \epsilon^{\beta/\epsilon - 1} e^{\delta p/\epsilon} (\delta p)^{-\beta/\epsilon}\, \Gamma\big( \delta p/\epsilon;\ \beta/\epsilon \big) },   (2.2)

where \Gamma(x; \eta) = \int_x^\infty t^{\eta-1} e^{-t}\,dt is the incomplete Gamma function.

Proof 1 We use Corollary 1.8 and get

\omega(x) = \int_0^x \frac{dt}{p + \epsilon t} = \frac{1}{\epsilon}\log(p + \epsilon x) - \frac{1}{\epsilon}\log p,

g(x) = \frac{\gamma_0\beta}{p + \epsilon x} \exp\Big\{ \frac{\beta}{\epsilon}\log(p + \epsilon x) - \frac{\beta}{\epsilon}\log p - \delta x \Big\} = \gamma_0 \beta\, p^{-\beta/\epsilon} (p + \epsilon x)^{\beta/\epsilon - 1} e^{-\delta x},

\frac{1}{\gamma_0} = 1 + \int_0^\infty \frac{\beta}{p(x)} \exp\{\beta\omega(x) - \delta x\}\,dx = 1 + \beta\, p^{-\beta/\epsilon} \int_0^\infty (p + \epsilon x)^{\beta/\epsilon - 1} e^{-\delta x}\,dx

= 1 + \frac{\beta}{\epsilon}\, p^{-\beta/\epsilon} e^{\delta p/\epsilon} \int_p^\infty y^{\beta/\epsilon - 1} e^{-\delta y/\epsilon}\,dy = 1 + \frac{\beta \epsilon^{\beta/\epsilon - 1} e^{\delta p/\epsilon}}{(\delta p)^{\beta/\epsilon}}\, \Gamma\Big( \frac{\delta p}{\epsilon};\ \frac{\beta}{\epsilon} \Big)

(substituting first y = p + \epsilon x and then t = \delta y/\epsilon). Similarly,

\psi(u) = \gamma_0 \int_u^\infty \frac{\beta}{p(x)} \exp\{\beta\omega(x) - \delta x\}\,dx = \gamma_0\, \frac{\beta \epsilon^{\beta/\epsilon - 1} e^{\delta p/\epsilon}}{(\delta p)^{\beta/\epsilon}}\, \Gamma\Big( \frac{\delta(p + \epsilon u)}{\epsilon};\ \frac{\beta}{\epsilon} \Big),
2) follows by elementary algebra.f. with density x(3/e1aQ/e fV (x) _ e 6X ' x > 0.3a/ (5 .g. of Z is IogH[a] = f ytc(y)dy = e fa (0. RT(u) has an exponential distribution with rate (S) and hence E [H(RT(u))I r(u) < oo] L Pe6'r (P/C . then {Rt} is the diffusion with drift function p+Ex and constant variance a2. The process {St} corresponds to {Wt} so that c(a) or2a2/2 .a) .2y +µ ) dy . From ic(a) = . /^ u Example 2 .e.3. . Proof 2 We use Theorem 2.pa. r (j3/E) In particular.pa. 13/E).b P/E dx /' P/ ' (p/  x)p/e 150/f I' (/3/E) (6P1'E.01'E) + (p/E)al aO l febP/E } IF (0 /0 jF From this (2.3 is also valid if {Rt} is obtained by adding interest to a more general process {Wt} with stationary independent increments.200 CHAPTER VII.V. it follows that logH[a] = f 1 c(y)dy = 1 f '(pa/(a +y))dy f 0 0 Ey R/E 1 [pa + )3log 8 .13 /E) r (. As an example. assume that {Wt} is Brownian motion with drift µ and variance v2. i. H(u) = P(Z r < u) = P(V > u + p/E) = (8(p + Eu)/E.3/E) By the memoryless property of the exponential distribution. and the c./3 log(b + a)] = log ePa/f (a + a ) e which shows that Z is distributed as p/E . where V is Gamma(b.2) follows by elementary algebra.x) dx e.V < x)]0 + f P(V > p/E ) + eby fv (p/E . RESERVEDEPENDENT PREMIUMS u from which (2.5 The analysis leading to Theorem 2.
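The incomplete-Gamma expression (2.2) is straightforward to evaluate numerically. The sketch below (with illustrative parameter values of our own choosing, not from the text) computes ψ(u) by brute-force quadrature of Γ(x; η):

```python
import math

def upper_gamma(x, eta, cutoff=60.0, n=100000):
    # Incomplete Gamma function Gamma(x; eta) = int_x^inf t^(eta-1) e^(-t) dt,
    # evaluated by a plain trapezoidal rule on [x, x + cutoff].
    a, b = x, x + cutoff
    h = (b - a) / n
    s = 0.5 * (a ** (eta - 1) * math.exp(-a) + b ** (eta - 1) * math.exp(-b))
    for k in range(1, n):
        t = a + k * h
        s += t ** (eta - 1) * math.exp(-t)
    return s * h

def psi(u, beta=1.0, delta=3.0, p=1.0, eps=0.5):
    # Ruin probability of Corollary 2.4: exponential claims with rate delta,
    # premium rule p(x) = p + eps * x (parameters are illustrative only).
    k = beta / eps
    c = beta * eps ** (k - 1) * math.exp(delta * p / eps)
    num = c * upper_gamma(delta * (p + eps * u) / eps, k)
    den = delta ** k * p ** k + c * upper_gamma(delta * p / eps, k)
    return num / den
```

For these parameters ψ(0) works out to 0.28, and ψ(u) decays rapidly in u, reflecting the extra safety provided by the interest term εx.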
Example 2.5 The analysis leading to Theorem 2.3 is also valid if {R_t} is obtained by adding interest to a more general process {W_t} with stationary independent increments. As an example, assume that {W_t} is Brownian motion with drift µ and variance σ²; then {R_t} is the diffusion with drift function µ + εx and constant variance σ². The process {S_t} corresponds to {−W_t}, so that κ(α) = σ²α²/2 − µα, and the c.g.f. of Z is

log H[α] = (1/ε) ∫_0^α (κ(−y)/y) dy = (1/ε) ∫_0^α (σ²y/2 + µ) dy = σ²α²/4ε + µα/ε,

i.e. Z is normal (µ/ε, σ²/2ε). Since R_τ = 0 by the continuity of Brownian motion, it follows from Theorem 2.3 that the ruin probability is

ψ(u) = H(−u)/H(0).   (2.3)

Notes and references Theorem 2.3 is from Harrison [185]; for a martingale proof, see e.g. Gerber [157] p. 134 (the time scale there is discrete but the argument is easily adapted to the continuous case). The formula (2.3) was derived by Emanuel et al. [129] and Harrison [185]; it is also used as basis for a diffusion approximation by these authors. Corollary 2.4 is classical. A r.v. of the form Σ_1^∞ ρ^n X_n with the X_n i.i.d., as in the proof of Proposition 2.2, is a special case of a perpetuity; see e.g. Goldie & Grübel [167].

Paulsen & Gjessing [286] found some remarkable explicit formulas for ψ(u) beyond the exponential case in Corollary 2.4. The solution is in terms of Bessel functions for an Erlang(2) B and in terms of confluent hypergeometric functions for a H2 B (a mixture of two exponentials). It must be noted, however, that the analysis does not seem to carry over to general phase-type distributions, not even Erlang(3) or H3, or to nonlinear premium rules p(·).

Further studies of the model with interest can be found in Boogaert & Crijns [71], Delbaen & Haezendonck [104], Emanuel et al. [129], Gerber [155], Paulsen [281], [282], [283], Paulsen & Gjessing [286] and Sundt & Teugels [356], [357]. Some of these references also go into a stochastic interest rate.

3 The local adjustment coefficient. Logarithmic asymptotics

For the classical risk model with constant premium rule p(x) ≡ p*, write γ* for the solution of the Lundberg equation

β(B̂[γ*] − 1) − γ*p* = 0,   (3.1)

write ψ*(u) for the ruin probability etc., and recall Lundberg's inequality

ψ*(u) ≤ e^{−γ*u}   (3.2)
and the Cramér-Lundberg approximation

ψ*(u) ~ C*e^{−γ*u}, u → ∞.   (3.3)

When trying to extend these results to the model of this chapter where p(x) depends on x, the intuitive idea is that the classical risk model with premium rate p* = p(x) serves as a 'local approximation' at level x for the general model when the reserve is close to x. Correspondingly, for fixed x we define the local adjustment coefficient γ(x) as the adjustment coefficient of the classical risk model with p* = p(x), i.e. as the solution of

κ(x, γ(x)) = 0, where κ(x, α) = β(B̂[α] − 1) − αp(x).   (3.4)

In the proof as well as in the remaining part of the section, we assume existence of γ(x) for all x, as will hold under the steepness assumption of Theorem 3.1, and (for simplicity) that

inf_{x>0} p(x) > βµ_B,   (3.5)

which implies inf_{x>0} γ(x) > 0. A first step is the following:

Theorem 3.1 Assume that for some 0 < δ_0 ≤ ∞ it holds that B̂[s] ↑ ∞ as s ↑ δ_0, that for each ε > 0 there is a c^{(2)} = c^{(2)}(ε) with B̄(x) ≥ c^{(2)}e^{−(δ_0+ε)x} for all x, and that p(x) → ∞ while e^{−εx}p(x) → 0, x → ∞, for all ε > 0. Then

−log ψ(u)/u → δ_0, u → ∞.

Proof of Theorem 3.1 Let γ* < δ_0, let p* be such that the corresponding adjustment coefficient in (3.1) is γ*, and, for a given ε ∈ (0, 1), choose u_0 such that p(x) ≥ p* when x ≥ u_0ε. When u ≥ u_0, ψ(u) can obviously be bounded by the probability that the Cramér-Lundberg compound Poisson model with premium rate p* downcrosses level uε starting from u, which in turn by Lundberg's inequality (3.2) is at most e^{−γ*(1−ε)u}. Hence liminf −log ψ(u)/u ≥ γ*(1 − ε); letting first ε → 0 and next γ* ↑ δ_0 (the steepness assumption and p(x) → ∞ ensure γ(x) → δ_0) yields liminf −log ψ(u)/u ≥ δ_0.

For the converse inequality, choose, for a given ε > 0, a c^{(3)} with p(x) ≤ c^{(3)}e^{εx} for all x. Then for fixed v, we have the following lower bound for the time it takes the reserve to go from level u to level u + v without a claim:

ω(u + v) − ω(u) = ∫_0^v dt / p(u + t) ≥ (v/c^{(3)}) e^{−ε(u+v)},

so the probability that a claim arrives before the reserve has reached level u + v is at least c^{(4)}e^{−εu}. Given such an arrival, ruin will occur if the claim is at least u + v, and hence

ψ(u) ≥ c^{(4)}e^{−εu} · c^{(2)}e^{−(δ_0+ε)(u+v)}.

The truth of this for all ε > 0 implies limsup −log ψ(u)/u ≤ δ_0, completing the proof.

The rest of this section deals with tail estimates involving the local adjustment coefficient. The first main result in this direction is the following version of Lundberg's inequality:

Theorem 3.2 Assume that p(x) is a non-decreasing function of x and let I(u) = ∫_0^u γ(x) dx. Then

ψ(u) ≤ e^{−I(u)}.   (3.6)

The second main result to be derived states that the bound in Theorem 3.2 is also an approximation under appropriate conditions. For ε > 0, let ψ_ε(u) be the ruin probability evaluated for the process {R_t^{(ε)}} defined as in (1.2) but with β replaced by β/ε and the U_i by εU_i.

Theorem 3.3 Assume that either (a) p(r) is a non-decreasing function of r, or (b) Condition 3.13 below holds. Then

lim_{ε↓0} ε log ψ_ε(u) = −I(u).   (3.7)

Remarks:
1. If p(x) ≡ p is constant, then {R_t^{(ε)}} is distributed as {εR_{t/ε}}, so that ψ_ε(u) = ψ(u/ε); in this case the asymptotics u → ∞ and ε → 0 are the same. Correspondingly, one can assume that ε = 1 is small enough for Theorem 3.3 to be reasonably precise and use e^{−I(u)} as approximation to ψ(u), noting that in many cases the constant C is close to 1.
2. The form of the result is superficially similar to the Cramér-Lundberg approximation. However, the limit is not u → ∞ but the slow Markov walk limit of large deviations theory (see e.g. Bucklew [81]). The slow Markov walk limit is appropriate if p(x) does not vary too much compared to the given mean interarrival time 1/β and the size U of the claims.
3. Theorem 3.1 only presents a first step, and the result is not very informative if δ_0 = ∞. Condition 3.13 below is a technical condition on the claim size distribution B, which essentially says that an overshoot U − x | U > x cannot have a much heavier tail than the claim U itself.
4. As typical in large deviations theory, the logarithmic form of (3.7) only captures 'the main term in the exponent' but is not precise enough to describe the asymptotic form of ψ(u) in terms of ratio limit theorems (the precise asymptotics could be, say, log I(u) e^{−I(u)} or I(u)^η e^{−I(u)} rather than e^{−I(u)}).
5. One would expect the behaviour in 2. to be important for the quantitative performance of the Lundberg inequality (3.6).

3a Examples

Before giving the proofs of Theorems 3.2, 3.3, we consider some simple examples. First, we show how to rewrite the explicit solution for ψ(u) in Corollary 1.8 in terms of I(u) when the claims are exponential:

Example 3.4 Consider again the exponential case B̄(x) = e^{−δx} as in Corollary 1.8. Then γ(x) = δ − β/p(x) and

I(u) = ∫_0^u γ(x) dx = δu − β ∫_0^u p(x)^{−1} dx = δu − βω(u),

so that exp{βω(x) − δx} = e^{−I(x)}. Since the derivative of e^{−I(x)} is (β/p(x) − δ)e^{−I(x)}, integrating by parts yields (using I(∞) = ∞, cf. (3.5))

γ_0 = 1 + ∫_0^∞ (β/p(x)) e^{−I(x)} dx = 1 + [e^{−I(x)}]_0^∞ + δ ∫_0^∞ e^{−I(x)} dx = δ ∫_0^∞ e^{−I(x)} dx,

∫_u^∞ (β/p(x)) e^{−I(x)} dx = δ ∫_u^∞ e^{−I(x)} dx − e^{−I(u)},

and hence

ψ(u) = ( δ ∫_u^∞ e^{−I(y)} dy − e^{−I(u)} ) / ( δ ∫_0^∞ e^{−I(y)} dy ).   (3.8)

Example 3.5 Assume that {R_t} is a diffusion on [0, ∞) with drift µ(x) and variance σ²(x) > 0 at x. The appropriate definition of the local adjustment coefficient γ(x) is then as the quantity 2µ(x)/σ²(x) for the locally approximating Brownian motion. It is well known (see Theorem XI.1.10 or Karlin & Taylor [222] pp. 191-195) that

ψ(u) = ∫_u^∞ e^{−I(y)} dy / ∫_0^∞ e^{−I(y)} dy = e^{−I(u)} ∫_0^∞ e^{−∫_0^y γ(x+u) dx} dy / ∫_0^∞ e^{−I(y)} dy.   (3.9)

If γ(x) is increasing, applying the inequality γ(x + u) ≥ γ(x) in (3.9) yields immediately the conclusion of Theorem 3.2. For Theorem 3.3, note first that the appropriate slow Markov walk assumption amounts to µ_ε(x) = µ(x), σ_ε²(x) = εσ²(x), so that γ_ε(x) = γ(x)/ε and I_ε(u) = I(u)/ε. Hence (3.9) yields

ε log ψ_ε(u) = −I(u) + A_ε − B_ε,   (3.10)

where A_ε = ε log ∫_0^∞ e^{−∫_0^y γ(x+u)dx/ε} dy, B_ε = ε log ∫_0^∞ e^{−I(y)/ε} dy. By the analogue of (3.5), inf_{x>0} γ(x) = γ_* > 0, the integral in A_ε is at most ∫_0^∞ e^{−yγ_*/ε} dy = ε/γ_* eventually, so limsup A_ε ≤ lim ε log(ε/γ_*) = 0. Conversely, choosing y_0, γ_0 > 0 such that γ(x + u) ≤ γ_0 for x ≤ y_0, we get

∫_0^∞ e^{−∫_0^y γ(x+u)dx/ε} dy ≥ ∫_0^{y_0} e^{−yγ_0/ε} dy = (ε/γ_0)(1 − e^{−y_0γ_0/ε}),

which implies liminf A_ε ≥ lim ε log(ε/γ_0) = 0 and hence A_ε → 0. Similarly B_ε → 0, and (3.7) follows.

The analogue of Example 3.5 for risk processes with exponential claims is as follows:

Example 3.6 Assume that B is exponential with rate δ, as in Example 3.4. Then the solution of the Lundberg equation is γ* = δ − β/p*, and

I(u) = δu − β ∫_0^u p(x)^{−1} dx

(note that this expression shows up also in the explicit formula (3.8) for ψ(u) given in Example 3.4). The slow Markov walk assumption means δ_ε = δ/ε, β_ε = β/ε; thus γ_ε(x) = γ(x)/ε, and by (3.8), (3.10) holds if we redefine A_ε as

A_ε = ε log ( ∫_0^∞ e^{−∫_0^y γ(x+u)dx/ε} dy − ε/δ )

and similarly for B_ε. Ignoring the term ε/δ, limsup A_ε ≤ 0 follows as in Example 3.5; since δ > γ_0, we also get liminf A_ε ≥ lim ε log(ε/γ_0 − ε/δ) = 0, and (3.7) follows just as in Example 3.5.

We next investigate what the upper bound / approximation e^{−I(u)} looks like in the case p(x) = a + bx (interest) subject to various forms of the tail B̄(x) of B. Here γ(x) is typically not explicit, so our approach is to determine standard functions G_1(u), ..., G_q(u) representing the first few terms in the asymptotic expansion of I(u) as u → ∞, i.e.

I(u) = G_1(u) + ... + G_q(u) + o(G_q(u)), G_{i+1}(u) = o(G_i(u)).

It should be noted, however, that the interchange of the slow Markov walk limit ε → 0 and the limit u → ∞ is not justified, and in fact the slow Markov walk approximation deteriorates as x becomes large. Nevertheless, the results are suggestive in their form and much more explicit than anything else in the literature.
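Before turning to these expansions, note that for exponential claims and p(x) = a + bx the integral I(u) is explicit, which gives a simple consistency check. A minimal sketch (parameter values are our own illustration):

```python
import math

# Example 3.4 with p(x) = a + b*x: gamma(x) = delta - beta/p(x), so
# I(u) = delta*u - beta * int_0^u dx/(a + b*x) = delta*u - (beta/b)*log(1 + b*u/a).
beta, delta, a, b = 1.0, 3.0, 1.0, 0.2

def gamma_loc(x):
    return delta - beta / (a + b * x)

def I_numeric(u, n=20000):
    # Trapezoidal rule for I(u) = int_0^u gamma(x) dx.
    h = u / n
    s = 0.5 * (gamma_loc(0.0) + gamma_loc(u))
    for k in range(1, n):
        s += gamma_loc(k * h)
    return s * h

def I_closed(u):
    return delta * u - (beta / b) * math.log(1.0 + b * u / a)
```

The two agree to quadrature accuracy, and e^{−I(u)} then serves as the Lundberg-type bound of Theorem 3.2.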
Example 3.7 Assume that

B̄(x) ~ c_1 x^{α−1} e^{−δx}, x → ∞,   (3.11)

with α > 0. This covers mixtures or convolutions of exponentials or, more generally, phase-type distributions (Example 1.4) or gamma distributions. Here B̂[s] ↑ ∞ as s ↑ δ, and more precisely

B̂[s] = 1 + s ∫_0^∞ e^{sx} B̄(x) dx = 1 + c_1 s Γ(α)(1 + o(1)) (δ − s)^{−α}, s ↑ δ.

It follows that γ* ↑ δ as p* → ∞, and the Lundberg equation β(B̂[γ*] − 1) = γ*p* leads to

δ − γ* ~ c_2 p*^{−1/α}, c_2 = (βc_1Γ(α))^{1/α}.

Hence, with p(x) = a + bx,

I(u) ≈ δu − ∫_0^u c_2 (a + bx)^{−1/α} dx ~ { δu, α < 1; δu − c_3 log u, α = 1; δu − c_4 u^{1−1/α}, α > 1 },

where c_3 = c_2/b, c_4 = c_2 b^{−1/α}/(1 − 1/α). In the phase-type case, the typical case is α = 1, which holds, e.g., if the phase generator is irreducible (Proposition VIII.1.8).

Example 3.8 Assume next that B has bounded support, say 1 is the upper limit, and that

B̄(x) ~ c_5 (1 − x)^{η−1}, x ↑ 1,   (3.12)

with η > 0; η = 1 if B is degenerate at 1, η = 2 if B is uniform on (0, 1) and η = k + 1 if B is the convolution of k uniforms on (0, 1/k). Here B̂[s] is defined for all s, and

B̂[s] − 1 = s ∫_0^1 e^{sx} B̄(x) dx ~ c_5 s e^s ∫_0^1 e^{−sy} y^{η−1} dy ~ c_5 e^s s^{1−η} Γ(η), s ↑ ∞.

The Lundberg equation then leads to

γ* ≈ log p* + η log log p*,

and with p(x) = a + bx,

I(u) ≈ u (log u + η log log u).

Example 3.9 As a case intermediate between (3.11) and (3.12), assume that B̄(x) ~ c_6 e^{−x²/2c_7}, x → ∞. We get

B̂[s] − 1 ~ c_6 s ∫_0^∞ e^{sx} e^{−x²/2c_7} dx ~ c_6 s √(2πc_7) e^{c_7 s²/2}, s ↑ ∞,

so that the Lundberg equation leads to γ* ~ c_8 (log p*)^{1/2} with c_8 = (2/c_7)^{1/2}, and with p(x) = a + bx,

I(u) ~ c_8 u (log u)^{1/2}.

3b Proof of Theorem 3.2

We first remark that the definition (3.4) of the local adjustment coefficient is not the only possible one: whereas the motivation for (3.4) is the formula

(1/h) log E_u e^{s(R_h − u)} → κ(u, s), h ↓ 0,   (3.14)

for the m.g.f. of the increment in a small time interval [0, h], one could also have considered the increment r_u(T_1) − U_1 up to the first claim (here r_u(·) denotes the solution of ṙ = p(r) starting from r_u(0) = u). This leads to an alternative local adjustment coefficient γ_0(u), defined as solution of

1 = E e^{γ_0(u)(U_1 + u − r_u(T_1))}.   (3.15)

Proposition 3.10 Assume that p(x) is a non-decreasing function of x. Then: (a) γ(x) and γ_0(x) are also non-decreasing functions of x; (b) γ(x) ≤ γ_0(x).

Proof That γ(x) is non-decreasing follows easily by inspection of (3.4). For γ_0, note that the assumption implies that r_u(t) − u is a non-decreasing function of u. Hence for u ≤ v,

1 = E e^{γ_0(v)(U_1 + v − r_v(T_1))} ≤ E e^{γ_0(v)(U_1 + u − r_u(T_1))}.

By convexity of the m.g.f. of U_1 + u − r_u(T_1), which equals 1 at 0 and at γ_0(u), this is only possible if γ_0(v) ≥ γ_0(u).
Further.. 0 It follows from Proposition 3.n. For these reasons.n = sup p(x). RESERVEDEPENDENT PREMIUMS where the last identity immediately follows from (3.E (u) denote the ruin probability for the classical model with 0 replaced by . C*e where the first equality follows by an easy scaling argument and the approximation by (3.3).n u k}1. yo(u) appears more difficult to evaluate than y(u). P k.nbe C*. Lemma 3...n <Z auk}l.11 is sharper than the one given by Theorem 3.3 The idea of the proof is to bound { R( f) } above and below in a small interval [x . (un2. for either of Theorems 3.n.. we used also Proposition 3.I. W O .n) must first downcross unl. let Op*.15).3 is required.12 lim sup4^o f log O. and here it is easily seen that yo(u) . the value of {R(E)} at the time of downcrossing is < unl.3). y* evaluated for p* = Pk.n (starting from u/n) without that 2u/n is upcrossed before ruin.n inf n uk1.E (u/n) Now as e . op*.n. (3. 0.11 be reasonably tight something like the slow Markov walk conditions in Theorem 3.2. by €U=.: y(u). in accordance with the notation i/iE (u).. and. resp. 0.2. The probability of this is at least n n. 3c Proof of Theorem 3.x/n.3/e and U. 3.n AX). ryk.n) > k =1 II v ^k n. {RtE)} (starting from u = un.E ( u/n) ^•e.10(b ) that the bound provided by Theorem 3. To this end.2). given downcrossing occurs.n) pn niE (u /n) n n_1 n. pk n = uk_l. Also.n so that n.n. we have chosen to work with y(u) as the fundamental local adjustment coefficient.e (u) = v'.. However. (u) < I(u).n = ku. in . (u). x + x/n] by two classical risk processes with a constant p and appeal to the classical results (3.210 CHAPTER VII. Let Ck.E (u/ n) Y'E (un . the probability that ruin occurs in the CramerLundberg model with p* = pn. Proof For ruin to occur. Y*u /E.E (u/n). define uk.10(a) for some of the inequalities.
It follows from Proposition 3.10(b) that the bound provided by Theorem 3.11 is sharper than the one given by Theorem 3.2. However, γ_0(u) appears more difficult to evaluate than γ(u) (we used also Proposition 3.10(a) for some of the inequalities), and in the examples it is easily seen that γ_0(u) ≈ γ(u). For these reasons, we have chosen to work with γ(u) as the fundamental local adjustment coefficient.

3c Proof of Theorem 3.3

The idea of the proof is to bound {R_t^{(ε)}} above and below in a small interval [x − x/n, x + x/n] by two classical risk processes with a constant premium rate and appeal to the classical results (3.2), (3.3). To this end, define

u_{k,n} = ku/n, p_{k,n} = inf_{u_{k−1,n} ≤ x ≤ u_{k+1,n}} p(x), P_{k,n} = sup_{u_{k−1,n} ≤ x ≤ u_{k+1,n}} p(x), γ_{k,n} = sup_{u_{k−1,n} ≤ x ≤ u_{k+1,n}} γ(x),

and let γ*_{k,n}, C*_{k,n} denote γ*, C* evaluated for p* = P_{k,n}. In accordance with the notation ψ_ε(u), let ψ*_{p*,ε}(u) denote the ruin probability for the classical model with premium rate p*, β replaced by β/ε and the U_i by εU_i.

Lemma 3.12 liminf_{ε↓0} ε log ψ_ε(u) ≥ −I(u).

Proof For ruin to occur, {R_t^{(ε)}} (starting from u = u_{n,n}) must first downcross u_{n−1,n}, and given downcrossing occurs, the value of {R_t^{(ε)}} at the time of downcrossing is ≤ u_{n−1,n}; then u_{n−2,n} must be downcrossed, and so on. Hence, in obvious notation,

ψ_ε(u) ≥ Π_{k=1}^n ψ̃_{k,n,ε}(u/n),

where ψ̃_{k,n,ε}(u/n) is the probability that the Cramér-Lundberg model with p* = P_{k,n}, starting from u/n, suffers ruin without 2u/n being upcrossed before.
. ) (u u /n)) .E (2u/n .E(E) (u.nu/En0(1) .v. RESERVEDEPENDENT PREMIUMS To complete the proof. T() (u.EV) = El + E2.n V.eV) • P (T(E) (u.QEU 1 . v ) = inf { t > 0 : R(c ) < v R) = u } .^(E) (u. Then the standard Lundberg inequality yields El < E?.n < ery1. u/n)) .18) for the last equality). u /en 0(i) _n so that E2 < e2ryl nu/En0(1).5) and the standard formula for b(0). u/n) < oo) EV). (u/n .E (u/n . Write EO. Then Y'E (u) ^(E) (u. . The probability of ruin in between two downcrossings is bounded by Epp .EV) = EiI 1 .. u /n) < oo] l = = < E [OE (u/n . infx>2u /n P(x) . let v < u and define T(E) (u.E (0) cf.212 CHAPTER VII. (R. N with EN < 1 = infx>2u/nA(x) = 0(1).. T(E) (u.^'' = E [ . where El is the contribution from the event that the process does not reach level 2u/n before ruin and E2 is the rest. we first note that the number of downcrossings of 2u/n starting from RoE) = 2u/n is bounded by a geometric r. u/n) < oo) . (u/n .EV) = e.. For E2. u/n) < oo] E [OE (u/n .1 n. v ) = v .of:>2 in n(x).nu /EnE [e71.2y 1 ' . u/n) < oo] . V < u/En] + P(V > u/En) (u/En .( .R<) (u v). u/n)) I T(E) (u. P (T(E) (u. Ei + E2 < e71. (3.V) = e71 nu/Eno(l) (using (3.
7) then comes out (at least heuristically) by analytical manipulations with the action integral.=1 J An approximation similar to (3. whereas the most probable path leading to ruin is the solution of r(x) _ k (x.13. s). Djehiche [122] gives an approximation for tp(u. Typically. 0 ) (= p(x) . . it might be possible to show that the limits e . one can in fact arrive at the optimal path by showing that the approximation for 0(u. the approximation (3. u/n) < oo { 40 )I U nryl n+liminfelogP (T(')(u.g.) = exp .)Ui } .21) to pass from u to 0. Whereas the result of [122] is given in terms of an action integral which does not look very explicit.r.1.J y(Rs)dR. the results are from Asmussen & Nielsen [39].20) (with ic(x.3. 0 and b T 00 are interchangeable in the setting of [89]. Bucklew [81]). T) is maximized over T by taking T as the time for (3. the rigorous implementation of these ideas via large deviations techniques would require slightly stronger smoothness conditions on p(x) than ours and conditions somewhat different from Condition 3.u/n) < oo) CI  > u n n ryi n' i=1 Another Riemann sum approximation completes the proof. (u) 40 213 lim inf e log(Ei +E2) + logP (r(`) (u.3EU) (3.T) = P „(info<t <T Rt < 0) via related large deviations techniques. where the key mathematical tool is the deep WentzellFreidlin theory of slow Markov walks (see e .21) (the initial condition is r(0) = u in both cases). u Notes and references With the exception of Theorem 3. [89]. s) as in (3. l o JJJ o . Comparing these references with the present work shows that in the slow Markov walk setup. the risk process itself is close to the solution of the differential equation r(x) _ r (x.7(x)) (3.7) for ruin probabilities in the presence of an upper barrier b appears in Cottrell et al. THE LOCAL ADJUSTMENT COEFFICIENT Hence lim inf e log Ali.J r(Rs)p(R.4) and the prime meaning differentiation w.)ds + Y(R2.t. 
they also discuss simulation based upon 'local exponential change of measure' for which the likelihood ratio is ( /'t /'t Ns Lt = exp S . Similarly.
the simplest being to require B̂[s] to be defined for all s > 0 (thus excluding, e.g., the exponential distribution). We should like, however, to point out as a maybe much more important fact that the present approach is far more elementary and self-contained than that using large deviations theory. For different types of applications of large deviations to ruin probabilities, see XI.3.
Chapter VIII

Matrix-analytic methods

1 Definition and basic properties of phase-type distributions

Phase-type distributions are the computational vehicle of much of modern applied probability. Typically, if a problem can be solved explicitly when the relevant distributions are exponentials, then the problem may admit an algorithmic solution involving a reasonable degree of computational effort if one allows for the more general assumption of phase-type structure, and not in other cases. A proper knowledge of phase-type distributions seems therefore a must for anyone working in an applied probability area like risk theory.

More precisely, a distribution B on (0, ∞) is said to be of phase-type if B is the distribution of the lifetime of a terminating Markov process {J_t}_{t≥0} with finitely many states and time-homogeneous transition rates. Here a terminating Markov process {J_t} with state space E and intensity matrix T is defined as the restriction to E of a Markov process {J̄_t}_{0≤t<∞} on E_Δ = E ∪ {Δ}, where Δ is some extra state which is absorbing, that is, P_i(J̄_t = Δ eventually) = 1 for all i ∈ E¹, and where all states i ∈ E are transient. We often write p for the number of elements of E. This implies in particular that the intensity matrix for {J̄_t} can be written in block-partitioned form as

( T  t ; 0  0 ).   (1.1)

¹Here as usual, P_i refers to the case J_0 = i; if ν = (ν_i)_{i∈E} is a probability distribution, we write P_ν for the case where J_0 has distribution ν, so that P_ν = Σ_{i∈E} ν_i P_i.
Note that since (1.1) is the intensity matrix of a non-terminating Markov process, the rows sum to zero, which in matrix notation can be rewritten as t + Te = 0, where e is the column E-vector with all components equal to one; that is,

t = −Te.   (1.2)

The interpretation of the column vector t is as the exit rate vector: the ith component t_i gives the intensity in state i for leaving E and going to the absorbing state Δ. We now say that B is of phase-type with representation (E, α, T) (or sometimes just (α, T)) if B is the P_α-distribution of the absorption time ζ = inf{t > 0 : J_t = Δ}, i.e. B(t) = P_α(ζ ≤ t); equivalently, ζ is the lifetime sup{t > 0 : J_t ∈ E} of {J_t}. The initial vector α is written as a row vector, and T is a sub-intensity matrix². A convenient graphical representation is the phase diagram in terms of the entrance probabilities α_i, the exit rates t_i and the transition rates (intensities) t_ij.

Figure 1.1 The phase diagram of a phase-type distribution with 3 phases, E = {i, j, k}.

Here are some important special cases:

Example 1.1 Suppose that p = 1 and write β = −t_11. Then α = α_1 = 1, t_1 = β, and the phase-type distribution is the lifetime of a particle with constant failure rate β, i.e. an exponential distribution with rate parameter β. Thus the phase-type distributions with p = 1 constitute exactly the class of exponential distributions.

²This means that t_ii < 0, t_ij ≥ 0 for i ≠ j and Σ_{j∈E} t_ij ≤ 0.
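As a small numerical illustration of these objects (the two-phase hyperexponential parameters below are our own choice, not from the text): the phase-type density αe^{Tx}t (cf. Theorem 1.5 below) and the mean −αT^{−1}e can be computed directly, here with the matrix exponential evaluated from its Taylor series:

```python
# Sketch: phase-type density alpha e^{Tx} t and mean -alpha T^{-1} e for a
# two-phase hyperexponential example (illustrative parameters).
def expm_vec(M, x, v, terms=80):
    # Return e^{Mx} v via the Taylor series of the matrix exponential.
    res, term = v[:], v[:]
    for n in range(1, terms):
        term = [x / n * sum(M[i][j] * term[j] for j in range(len(v)))
                for i in range(len(v))]
        res = [r + s for r, s in zip(res, term)]
    return res

alpha = [0.3, 0.7]                     # entrance probabilities
d = [1.0, 4.0]                         # exit rates of the two parallel phases
T = [[-d[0], 0.0], [0.0, -d[1]]]       # phase generator (diagonal here)
t = [d[0], d[1]]                       # exit rate vector, t = -T e

def density(x):
    w = expm_vec(T, x, t)
    return sum(a * wi for a, wi in zip(alpha, w))

mean = alpha[0] / d[0] + alpha[1] / d[1]   # -alpha T^{-1} e, explicit here
```

Since T is diagonal in this example, the density reduces to the mixture Σ α_i δ_i e^{−δ_i x}, which the series-based computation reproduces.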
. PHASETYPE DISTRIBUTIONS 217 Example 1. a = (1 0 0 .1)!e Since this corresponds to a convolution of p exponential densities with the same rate S... . 6.. ... . ..3 The hyperexponential distribution HP with p parallel channels is defined as a mixture of p exponential distributions with rates 51. 0 •. 0 0 0 T= t= 0 ••• S S 0 0 0 0 0 0 .2 The Erlang distribution EP with p phases is defined Gamma distribution with integer parameter p and density bp XP1 6x (p. 0 SP 0 and the phase diagram is (p = 2) . 0 ••• 0 0 Sp1 0 0 t= 0 0 00 •..2 corresponding to E = {1.x i=1 Thus E _ Si 0 T 0 S2 0 0 .. p}....1... . so that the density is P E ai6ie6.. the EP distribution may be represented by the phase diagram (p = 3) Figure 1. 0 0 0 0 S 6 . 0 S 6 Example 1. 00)) S s o .
f. E t ikp kj = kEE kEE 3For a number of additional important properties of matrixexponentials and discussion of computational aspects .g. dp.e. Recall that the matrixexponential eK is defined by the standard series expansion Eo K"/n! 3. and is defined as the class of phasetype distributions with a phase diagram of the following form: 1 617 ti t2 2 b2.3 0 Example 1 . 36) yields s d. ds^ = ds' = ttlaj + tikpkj. MATRIXANALYTIC METHODS Figure 1. [APQ ] p.3 . the backwards equation for {Jt} (e. .d. the Erlang distribution is a special case of a Coxian distribution. the restriction of P8 to E.4 For example. T). see A. Proof Let P8 = (p ^) be the sstep EA x EA transition matrix for {Jt } and P8 the sstep E x Etransition matrix for {Jt} .1 tP1 1 Figure 1.1)"n! aT"e.4 (COXIAN DISTRIBUTIONS) This class of distributions is popular in much of the applied literature. The basic analytical properties of phase type distributions are given by the following result . j E E.t2 yt bP. (b) the density is b(x ) = B'(x) = aeTxt.aeTxe. p:.218 CHAPTER VIII. Then for i .f is B (x) = 1 . i. Theorem 1 .g. 5 Let B be phasetype with representation (E. Then: (a) the c. (c) the m. a. B[s] = f0°O esxB (dx) is a(sI T)lt (d) the nth moment f0°O xnB(dx) is (.
tii and have an additional time to absorption either go to state j which has m .tii tii .s) is the m .jEE B'(x) _ cx Pxe = aeTxTe = aeTxt (since T and eTx commute). . and since b[s] = ah.s j# tii i (1. the rule (A.B(x) = 1'a (( > x) = P.6) . Since 1 . the solution is P8 = eT8. in which case the time to absorption is 0 with m .tii we go to A. Then h tit ti + ti3 h j . 1.T) 't = (. hj . tij / . d8 P8 = TP8. After that. ti/ . j#i jEE tijhj + his = ti.T) 1t.5) as hi(tii + s) = ti  t ij hj.. and (b) then follows from 1: aipF. we i w. = aPxe.n lt .s I . (Jx E E) = this proves (a). Rewriting ( 1.p. h = (T + sI)1t. Alternatively. define hi = Eie8S.f. i. For (c).. B(n)[0] = _ Alternatively. this means in vector notation that (T + sI)h = t. of the initial sojourn in state i. for n = 1 we may put ki = Ei( and get as in (1.f. Part (d) follows by differentiating the m. PHASETYPE DISTRIBUTIONS 219 That is.tii is the rate of the exponential holding time of state i and hence (tii)/(tii .1.p.g.g.g. we arrive once more at the stated expression for B[s]. or w.1 ) n +l n ! a (s I + T ) .f.12) for integrating matrixexponentials yields B[s] = J esxaeTxt dx = a ( f°°e(81+T)dx ) t a(sI .f.5) ki = 1 + tii L jj:Ai tii (1.g.5) Indeed .e. i. d" dsn a (. (1) n+1n!aT . and since obviously P° = I.n1t = (1)nn!aTn1Te (1)nn! aTne.
T= 2 111 so that 2 2 Then (cf."n! ( ( l 2 2 ) 17 9 0 \ 1 / 10 10 32 n! 35 6" +n!353 Similarly.6 Though typically the evaluation of matrixexponentials is most conveniently carried out on a computer. another the case p = 2 where explicit diagonalization formulas are always available.s.h. making the problem trivial. 0 Example 1. Example A3. One obvious instance is the hyperexponential distribution. there are some examples where it is appealing to write T on diagonal form. This implies that we can compute the nth moment as (1)"n! aT "e 1"n! 1 1 22 9 9 10 70 7 1 10 10 1 9 +6. MATRIXANALYTIC METHODS which is solved as above to get k = aTle. Consider for example 3 9 a= (2 2). we get the density as 9 9 6 (1 1) 10 7 1 0 10 2 aeTyt = e x .220 CHAPTER VIII. see the Appendix. are idempotent.7) the diagonal form of T is 9 9 1 9 T 10 7 10 70 1 10 6 10 7 0 70 9 1 10 where the two matrices on the r.
where the initial vector a is substochastic. hail = E=EE a. (1. .7) Proof According to (A. This is the traditional choice in the literature. T) is then defined to be oo on a set of probability 1.29) and Proposition A4. B[Q] of B is f3[Q] = J e'1zB(dx) _ (v (9 I)(T ® Q)1(t ® I). PHASETYPE DISTRIBUTIONS 1 10 7 10 221 9 6 70 7 9 10 2 +e 6x (1 11 2 2 35ex + 18e6x 35 The following result becomes basic in Sections 4. a random variable U having a defective phasetype distribution with representation (a.e 11BIJ = 1laDD < 1.T) with weight hall and an atom at zero with weight 1 . then the matrix m.11aDD. i. 00 B[Q] = J0 f veTxteQx dx = (v ® I) ( f° eT x edx I (t I) (v (& I) ( (T ®Q)xdx f o" e o )( t ® I) _ (v ® I)(T ® Q)1(t ® I). 0 Sometimes it is relevant also to consider phasetype distributions. < 1.7 If B is phasetype with representation (v.e a mixture of a phasetype distribution with representation (a/llall.4. and in fact one also most often there allows a to have a component ao at A.4b for definitions and basic rules): Proposition 1. • The phasetype distribution B is zeromodified. There are two ways to interpret this: • The phasetype distribution B is defective. or one just lets U be undefined on this additional set. 5 and serves at this stage to introduce Kronecker notation and calculus (see A. i.1.hall.f.g.T).
1a Asymptotic exponentiality

Writing T in Jordan canonical form, it is easily seen that the tail of a general phase-type distribution has the asymptotic form

  B̄(x) ~ C x^k e^{-ηx},  x → ∞,

where C, η > 0 and k = 0, 1, 2, .... The Erlang distribution gives an example where k > 0 (in fact, here k = p - 1), but in many practical cases one has k = 0. Here is a sufficient condition:

Proposition 1.8 Let B be phase-type with representation (α, T), assume that T is irreducible, let -η be the eigenvalue of T with largest real part, let ν, h be the corresponding left and right eigenvectors normalized by νh = 1, and define C = αh · νe. Then η is real and positive, and the tail B̄(x) is asymptotically exponential,

  B̄(x) ~ C e^{-ηx},  x → ∞.  (1.8)

Proof By Perron-Frobenius theory (A.4c), η is real and positive, ν, h can be chosen with strictly positive components, and we have e^{Tx} ~ e^{-ηx} hν. Using B̄(x) = α e^{Tx} e, the result follows (with C = (αh)(νe)). □

Of course, the conditions of Proposition 1.8 are far from necessary: a mixture of phase-type distributions with the respective T^{(i)} irreducible has obviously an asymptotically exponential tail, but the relevant T is not irreducible. In Proposition A5.1 of the Appendix, we give a criterion for asymptotical exponentiality of a phase-type distribution B, not only in the tail but in the whole distribution.

Notes and references The idea behind using phase-type distributions goes back to Erlang, but today's interest in the topic was largely initiated by M.F. Neuts; see his book [269] (a historically important intermediate step is Jensen [214]). Other expositions of the basic theory of phase-type distributions can be found in [APQ], Lipsky [247], Rolski, Schmidli, Schmidt & Teugels [307] and Wolff [384]. All material of the present section is standard; the text is essentially identical to Section 2 of Asmussen [26]. In older literature, distributions with a rational m.g.f. (or Laplace transform) are often used where one would now work instead with phase-type distributions; see in particular the notes to Section 6. O'Cinneide [276] gave a necessary and sufficient condition for a distribution B with a rational m.g.f. B̂[s] = p(s)/q(s) to be phase-type: the density b(x) should be strictly positive for x > 0, and the root of q(s) with the smallest real part should be unique (not necessarily simple, cf. the Erlang case). No satisfying algorithm is known, however, for finding a phase-type representation of a distribution B which is known to be phase-type and for which the m.g.f. or the density is available.
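The Perron-Frobenius asymptotics of Proposition 1.8 are easy to check numerically. The following is a minimal sketch (the particular α and T are hypothetical illustration values, not taken from the text), comparing the exact tail B̄(x) = αe^{Tx}e with the asymptote Ce^{-ηx} for a two-phase irreducible example.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical irreducible two-phase example (illustration only):
alpha = np.array([0.6, 0.4])
T = np.array([[-3.0, 1.0],
              [2.0, -4.0]])
e = np.ones(2)

# -eta = eigenvalue of T with largest real part; h, nu are the
# corresponding right/left eigenvectors, normalized so that nu @ h = 1.
vals, right = np.linalg.eig(T)
i = int(np.argmax(vals.real))
eta = -vals[i].real
h = right[:, i].real
wvals, left = np.linalg.eig(T.T)
j = int(np.argmin(np.abs(wvals - vals[i])))
nu = left[:, j].real
if h[0] < 0:
    h = -h
if nu[0] < 0:
    nu = -nu
nu = nu / (nu @ h)

C = (alpha @ h) * (nu @ e)
x = 10.0
exact = alpha @ expm(T * x) @ e      # exact tail bar-B(x)
approx = C * np.exp(-eta * x)        # asymptote C e^{-eta x}
```

For this T the eigenvalues are -2 and -5, so the relative error of the asymptote at x decays like e^{-3x}.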
A related important unsolved problem deals with minimal representations: given a phase-type distribution, what is the smallest possible dimension of the phase space E?

2 Renewal theory

A summary of renewal theory in general is given in A.1 of the Appendix, but part of it is repeated below. Let U₁, U₂, ... be i.i.d. with common distribution B and define

  U(A) = E #{n = 0, 1, ... : U₁ + ... + Uₙ ∈ A} = Σ_{n=0}^∞ P(U₁ + ... + Uₙ ∈ A)

(here the empty sum U₁ + ... + U₀ is 0). We may think of the Uᵢ as the lifetimes of items (say electrical bulbs) which are replaced upon failure; U(A) is then the expected number of replacements (renewals) in A, and we refer to U as the renewal measure. If U is absolutely continuous on (0, ∞) w.r.t. Lebesgue measure, we denote the density by u(x) and refer to u as the renewal density.

If B is exponential with rate β, the renewals form a Poisson process and we have u(x) = β. The explicit calculation of the renewal density (or the renewal measure) is often thought of as infeasible for other distributions. Nevertheless, the problem has an algorithmically tractable solution if B is phase-type:

Theorem 2.1 Consider a renewal process with interarrivals which are phase-type with representation (α, T). Then the renewal density exists and is given by

  u(x) = α e^{(T+tα)x} t.  (2.1)

Proof Let {J_t^{(k)}} be the governing phase process for U_k and define {J_t} by piecing the {J^{(k)}} together,

  J_t = J_t^{(1)},  0 ≤ t < U₁;  J_t = J_{t-U₁}^{(2)},  U₁ ≤ t < U₁ + U₂;  ....

Then {J_t} is Markov and has two types of jumps: the jumps of the individual {J^{(k)}}, and the jumps corresponding to a transition from one J^{(k)} to the next J^{(k+1)}. The jumps of the first type are governed by T, and a jump of the last type from i to j occurs at rate tᵢαⱼ. Hence the intensity matrix is T + tα, and the distribution of J_x is αe^{(T+tα)x}. The renewal density at x is now just the rate of jumps of the second type, which is tᵢ in state i, and hence (2.1) follows by the law of total probability. □

The argument goes through without change if the renewal process is terminating, i.e. B is defective, ‖B‖ < 1, and (2.1) remains valid for that case.

Corollary 2.2 Consider a terminating renewal process with interarrivals which are defective phase-type with representation (α, T). Then the lifetime is zero-modified phase-type with representation (α, T + tα).

Here the lifetime of the renewal process is defined as U₁ + ... + U_{κ-1}, where κ is the first k with U_k = ∞; since U_k = ∞ with probability 1 - ‖B‖, which is > 0 in the defective case, this is well-defined. Equivalently, the lifetime is the time of the last renewal.

Proof Just note that {J_t} is a governing phase process for the lifetime. □

Returning to non-terminating renewal processes, the phase-type assumptions also yield the distribution of a further quantity of fundamental importance in later parts of this chapter: define the excess life ξ(t) at time t as the time until the next renewal following t, see Fig. 2.1.

[Figure 2.1: the renewal epochs U₁, U₁+U₂, U₁+U₂+U₃, ... and the excess life ξ(t).]

Corollary 2.3 Consider a renewal process with interarrivals which are phase-type with representation (α, T), and let μ_B = -αT⁻¹e be the mean of B. Then:
(a) the excess life ξ(t) at time t is phase-type with representation (ν_t, T), where ν_t = αe^{(T+tα)t};
(b) ξ(t) has a limiting distribution as t → ∞, which is phase-type with representation (ν, T), where ν = -αT⁻¹/μ_B.
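Theorem 2.1 is immediate to implement. The sketch below (with assumed example parameters) evaluates u(x) = αe^{(T+tα)x}t and checks it against the one case where the answer is elementary: exponential interarrivals, for which the renewal density is constant.

```python
import numpy as np
from scipy.linalg import expm

def renewal_density(alpha, T, x):
    """u(x) = alpha e^{(T + t alpha) x} t with t = -T e (Theorem 2.1)."""
    t = -T @ np.ones(len(alpha))
    return alpha @ expm((T + np.outer(t, alpha)) * x) @ t

# Sanity check: exponential interarrivals with rate beta form a Poisson
# process, so the renewal density is identically beta.
beta = 2.5
u_vals = [renewal_density(np.array([1.0]), np.array([[-beta]]), x)
          for x in (0.1, 1.0, 5.0)]
```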
Proof Consider again the process {J_t} in the proof of Theorem 2.1. The time of the next renewal after t is the time of the next jump of the second type after t; hence ξ(t) is phase-type with representation (ν_t, T), where ν_t is the distribution of J_t, which is obviously given by the expression in (a). Since {J_t} has intensity matrix T + tα, in (b) it is immediate that ν exists and is the stationary limiting distribution of J_t, i.e. the unique positive solution of

  νe = 1,  ν(T + tα) = 0.  (2.2)

Here are two different arguments that this yields the asserted expression ν = -αT⁻¹/μ_B:
(i) Just check that -αT⁻¹/μ_B satisfies (2.2): using αT⁻¹t = αT⁻¹(-Te) = -αe = -1,

  -αT⁻¹e/μ_B = μ_B/μ_B = 1,
  -αT⁻¹(T + tα)/μ_B = (-α - (αT⁻¹t)α)/μ_B = (-α + α)/μ_B = 0.

(ii) First check the asserted identity for the density: since T, T⁻¹ and e^{Tx} commute,

  νe^{Tx}t = -αT⁻¹e^{Tx}t/μ_B = αT⁻¹e^{Tx}Te/μ_B = αe^{Tx}e/μ_B = B̄(x)/μ_B.

Next appeal to the standard fact from renewal theory that the limiting distribution of ξ(t) has density B̄(x)/μ_B. □

Example 2.4 Consider a non-terminating renewal process with two phases. The formulas involve the matrix-exponential of the intensity matrix

  Q = T + tα = ( t₁₁+t₁α₁  t₁₂+t₁α₂ ; t₂₁+t₂α₁  t₂₂+t₂α₂ ) = ( -q₁  q₁ ; q₂  -q₂ )  (say).

To compute e^{Qt}, we first compute the stationary distribution of Q, i.e. the unique positive solution π of πe = 1, πQ = 0,

  π = (π₁ π₂) = ( q₂/(q₁+q₂)  q₁/(q₁+q₂) ),

and the non-zero eigenvalue λ = -q₁ - q₂. According to Example A3. of the Appendix,

  e^{Qt} = ( π₁ π₂ ; π₁ π₂ ) + e^{λt} ( π₂ -π₂ ; -π₁ π₁ ).

The renewal density is then

  u(t) = αe^{Qt}t = π₁t₁ + π₂t₂ + e^{λt}(α₁π₂ - α₂π₁)(t₁ - t₂). □

Example 2.5 Let B be Erlang(2) with rate δ, i.e. α = (1 0), T = ( -δ δ ; 0 -δ ), t = (0 δ)'. Then

  Q = ( -δ δ ; 0 -δ ) + ( 0 0 ; δ 0 ) = ( -δ δ ; δ -δ ).

Hence π = (1/2 1/2), λ = -2δ, and Example 2.4 yields the renewal density as

  u(t) = (δ/2)(1 - e^{-2δt}). □

Example 2.6 Let B be hyperexponential, b(x) = α₁δ₁e^{-δ₁x} + α₂δ₂e^{-δ₂x}. Then

  Q = ( -δ₁ 0 ; 0 -δ₂ ) + ( δ₁α₁ δ₁α₂ ; δ₂α₁ δ₂α₂ ) = ( -δ₁α₂ δ₁α₂ ; δ₂α₁ -δ₂α₁ ).

Hence λ = -δ₁α₂ - δ₂α₁, π = ( δ₂α₁  δ₁α₂ )/(δ₁α₂ + δ₂α₁), and Example 2.4 yields the renewal density as

  u(t) = δ₁δ₂/(δ₁α₂+δ₂α₁) + e^{λt} α₁α₂(δ₁-δ₂)²/(δ₁α₂+δ₂α₁). □

Notes and references Renewal theory for phase-type distributions is treated in Neuts [268] and Kao [221]. The present treatment is somewhat more probabilistic.
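Example 2.5 can be verified numerically: the matrix-exponential form of Theorem 2.1 must reproduce the closed form (δ/2)(1 - e^{-2δt}). A sketch (δ = 3 is an arbitrary test value):

```python
import numpy as np
from scipy.linalg import expm

delta = 3.0
alpha = np.array([1.0, 0.0])             # Erlang(2) representation
T = np.array([[-delta, delta],
              [0.0, -delta]])
t_vec = -T @ np.ones(2)                  # exit rate vector (0, delta)'
Q = T + np.outer(t_vec, alpha)           # intensity matrix of {J_t}

def u_matrix(x):
    return alpha @ expm(Q * x) @ t_vec   # Theorem 2.1

def u_closed(x):
    return delta / 2 * (1 - np.exp(-2 * delta * x))   # Example 2.5

diffs = [abs(u_matrix(x) - u_closed(x)) for x in (0.05, 0.5, 2.0)]
```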
3 The compound Poisson model

3a Phase-type claims

Consider the compound Poisson (Cramér-Lundberg) model in the notation of Section 1, with β denoting the Poisson intensity, B the claim size distribution, {S_t} the claim surplus process, τ(u) the time of ruin with initial reserve u, G₊(·) = P(S_{τ(0)} ∈ ·, τ(0) < ∞) the ladder height distribution and M = sup_{t≥0} S_t. We assume that B is phase-type with representation (α, T).

Corollary 3.1 Assume that the claim size distribution B is phase-type with representation (α, T). Then:
(a) G₊ is defective phase-type with representation (α₊, T), where a₊ is given by α₊ = -βαT⁻¹;
(b) M is zero-modified phase-type with representation (α₊, T + tα₊), and

  ψ(u) = α₊e^{(T+tα₊)u}e.

Note in particular that ρ = ‖G₊‖ = α₊e.

Proof The result follows immediately by combining the Pollaczeck-Khinchine formula with general results on phase-type distributions: for (a), use the phase-type representation of B₀; for (b), represent the maximum M as the lifetime of a terminating renewal process and use Corollary 2.2. □

Since the result is so basic, we shall, however, add a more self-contained explanation of why the phase-type structure is preserved. The essence is contained in Fig. 3.1 on the next page. Here we have taken the terminating Markov process underlying B with two states, marked by thin and thick lines on the figure; each claim (jump) then corresponds to one (finite) sample path of the Markov process, and the stars represent the ladder points S_{τ₊(k)}. Considering the first ladder step, we see that the ladder height S_{τ₊} is just the residual lifetime of the Markov process corresponding to the claim causing upcrossing of level 0, i.e. itself phase-type with the same phase generator T and the initial vector α₊ being the distribution of the upcrossing Markov process at time S_{τ₊}−. Next, the Markov processes representing ladder steps can be pieced together to one process {m_x}. Within ladder steps, the transitions are governed by T, whereas termination of ladder steps may lead to some additional ones: a transition from i to j occurs if the ladder step terminates in state i, which occurs at rate tᵢ, and if there is a subsequent ladder step starting in j, which occurs w.p. α₊ⱼ. Thus the total rate is tᵢⱼ + tᵢα₊ⱼ, and rewriting in matrix form yields the phase generator of {m_x} as T + tα₊. Now just observe that the initial vector of {m_x} is α₊ and that the lifelength is M.

This derivation is a complete proof except for the identification of α₊ with -βαT⁻¹; this is in fact a simple consequence of the form of the excess distribution B₀, see Corollary 2.3.
[Figure 3.1: the claim surplus process {S_t}, its ladder points S_{τ₊(1)}, S_{τ₊(2)}, ... (stars), the two phases of the underlying Markov process (thin and thick lines), and the pieced-together process {m_x} with lifelength M.]

Example 3.2 Assume that β = 3 and

  b(x) = ½ · 3e^{-3x} + ½ · 7e^{-7x}.

Thus b is hyperexponential (a mixture of exponential distributions) with

  α = (½ ½),  T = ( -3 0 ; 0 -7 ),  t = (3 7)',

so that

  α₊ = -βαT⁻¹ = 3 · (½ ½) ( 1/3 0 ; 0 1/7 ) = ( ½  3/14 ),

  T + tα₊ = ( -3 0 ; 0 -7 ) + ( 3 ; 7 )( ½  3/14 ) = ( -3/2  9/14 ; 7/2  -11/2 ).
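The quantities of Example 3.2 are easy to reproduce numerically; the sketch below computes α₊ = -βαT⁻¹ and evaluates ψ(u) = α₊e^{(T+tα₊)u}e against the closed form for this example.

```python
import numpy as np
from scipy.linalg import expm

beta = 3.0
alpha = np.array([0.5, 0.5])
T = np.diag([-3.0, -7.0])
t = -T @ np.ones(2)                                # = (3, 7)'
alpha_plus = -beta * alpha @ np.linalg.inv(T)      # = (1/2, 3/14)
Q = T + np.outer(t, alpha_plus)                    # = [[-3/2, 9/14], [7/2, -11/2]]

def psi(u):
    return alpha_plus @ expm(Q * u) @ np.ones(2)   # Corollary 3.1(b)

# Q has eigenvalues -1 and -6, and psi(u) = (24/35)e^{-u} + (1/35)e^{-6u}
errs = [abs(psi(u) - (24/35 * np.exp(-u) + 1/35 * np.exp(-6 * u)))
        for u in (0.0, 0.7, 2.0)]
```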
This is the same matrix as in the example of Section 1, so that as there

  e^{(T+tα₊)u} = e^{-u} ( 9/10  9/70 ; 7/10  1/10 ) + e^{-6u} ( 1/10  -9/70 ; -7/10  9/10 ).

Thus

  ψ(u) = α₊e^{(T+tα₊)u}e = (24/35)e^{-u} + (1/35)e^{-6u}. □

Notes and references Corollary 3.1 can be found in Neuts [269] (in the setting of M/G/1 queues; his derivation of ψ(u) is different, and the vector α₊ is there not explicit but needs to be calculated, typically by an iteration). But that such a simple and general solution exists does not appear to have been well known to the risk theoretic community. The parameters of Example 3.2 are taken from Gerber [157]. For further more or less explicit computations of ruin probabilities, see Shin [340]. It is notable that the phase-type assumption does not seem to simplify the computation of finite horizon ruin probabilities substantially; for an attempt, see Stanford & Stroinski [351]. The result carries over to B being matrix-exponential, see Section 6. In the next sections, we encounter similar expressions for the ruin probabilities in the renewal and Markov-modulated models, but there the vector α₊ is not explicit and needs to be calculated (typically by an iteration).

4 The renewal model

We consider the renewal model in the notation of Chapter V, with A denoting the interarrival distribution and B the service time distribution. We assume ρ = μ_B/μ_A < 1 and that B is phase-type with representation (α, T). We shall derive phase-type representations of the ruin probabilities ψ(u) and ψ^{(s)}(u) (recall that ψ(u) refers to the zero-delayed case and ψ^{(s)}(u) to the stationary case).
The argument starts in just the same way as for the compound Poisson case (cf. the discussion around Fig. 3.1, which does not use that A is exponential): if we define {m_x} just as for the Poisson case, the distribution G₊ of the ascending ladder height S_{τ₊} is necessarily (defective) phase-type with representation (α₊, T) for some vector α₊ = (α₊,ᵢ). That is:

Proposition 4.1 In the zero-delayed case:
(a) G₊ is phase-type with representation (α₊, T), where α₊ is the (defective) distribution of m₀;
(b) the maximum claim surplus M is the lifetime of {m_x};
(c) {m_x} is a (terminating) Markov process on E, with intensity matrix Q given by Q = T + tα₊.
The key difference from the Poisson case is that it is more difficult to evaluate α₊. In fact, the form in which we derive α₊ for the renewal model is as the unique solution of a fixpoint problem α₊ = φ(α₊), where

  φ(β) = αÂ[T + tβ] = α ∫₀^∞ e^{(T+tβ)y} A(dy),  (4.1)

which for numerical purposes can be solved by iteration. Nevertheless, the calculation of the first ladder height is simple in the stationary case:

Proposition 4.2 The distribution G^{(s)} of the first ladder height of the claim surplus process {S_t^{(s)}} for the stationary case is phase-type with representation (α^{(s)}, T), where α^{(s)} = -αT⁻¹/μ_A.

Proof Obviously, the Palm distribution of the claim size is just B. Hence, by the duality result of Chapter II, G^{(s)} = ρB₀, where B₀ is the stationary excess life distribution corresponding to B. But by Corollary 2.3, B₀ is phase-type with representation (-αT⁻¹/μ_B, T), and since ρ = μ_B/μ_A, it follows that G^{(s)} has representation (ρ · (-αT⁻¹/μ_B), T) = (-αT⁻¹/μ_A, T). □

Proposition 4.3 α₊ satisfies α₊ = φ(α₊), with φ given by (4.1).

Proof We condition upon T₁ = y and define {m_x*} from {S_{t+y} - S_y} in the same way as {m_x} is defined from {S_t}, cf. Fig. 4.1. Then {m_x*} is Markov with the same transition intensities as {m_x}, but with initial distribution α rather than α₊; obviously, m₀ = m*_y. Since the conditional distribution of m*_y given T₁ = y is αe^{Qy} with Q = T + tα₊, it follows by integrating y out that the distribution α₊ of m₀ is given by the final expression in (4.1). □

[Figure 4.1: the first interarrival interval T₁ = y and the processes {m_x}, {m_x*}.]

We have now almost collected all pieces of the main result of this section:

Theorem 4.4 Consider the renewal model with interarrival distribution A and the claim size distribution B being of phase-type with representation (α, T). Then

  ψ(u) = α₊e^{(T+tα₊)u}e,  (4.2)

where α₊ satisfies (4.1), and

  ψ^{(s)}(u) = α^{(s)}e^{(T+tα₊)u}e,  (4.3)

where α^{(s)} = -αT⁻¹/μ_A. Furthermore, α₊ can be computed by iteration of (4.1), i.e. by α₊ = lim_{n→∞} α₊^{(n)} where

  α₊^{(0)} = 0,  α₊^{(1)} = φ(α₊^{(0)}),  α₊^{(2)} = φ(α₊^{(1)}),  ....

Proof The first expression (4.2) follows from Proposition 4.1 by noting that the distribution of m₀ is α₊. The second follows in a similar way by noting that only the first ladder step has a different distribution in the stationary case, that this is given by Proposition 4.2, and that hence the maximum claim surplus for the stationary case has a similar representation as in Proposition 4.1(b), only with initial distribution α^{(s)} for m₀.

It remains to prove convergence of the iteration scheme. The term tβ in φ(β) represents feedback with rate vector t and feedback probability vector β; hence φ(β) is an increasing function of β (defined on the domain of subprobability vectors). In particular, α₊ ≥ 0 = α₊^{(0)} implies α₊ = φ(α₊) ≥ φ(α₊^{(0)}) = α₊^{(1)}; similarly 0 = α₊^{(0)} ≤ α₊^{(1)} yields α₊^{(1)} = φ(α₊^{(0)}) ≤ φ(α₊^{(1)}) = α₊^{(2)}, and by induction {α₊^{(n)}} is an increasing sequence such that lim_{n→∞} α₊^{(n)} exists and lim_{n→∞} α₊^{(n)} ≤ α₊.

To prove the converse inequality, let F_n = {T₁ + ... + T_{n+1} > τ₊} be the event that {m_x} has at most n arrivals in [T₁, τ₊], and define ã₊^{(n)} by ã₊,ᵢ^{(n)} = P(m_{T₁} = i; F_n). Obviously ã₊^{(n)} ↑ α₊ (for n = 0, both quantities are just 0), so to complete the proof it suffices to show ã₊^{(n)} ≤ α₊^{(n)} for all n. Assume the assertion shown for n - 1. Then each subexcursion of {S_{t+T₁} - S_{T₁}} can contain at most n - 1 arrivals (n arrivals are excluded because of the initial arrival at time T₁), so on F_n the feedback to {m_x} after each ladder step cannot exceed ã₊^{(n-1)}, and hence

  ã₊^{(n)} ≤ α ∫₀^∞ e^{(T+tã₊^{(n-1)})y} A(dy) ≤ α ∫₀^∞ e^{(T+tα₊^{(n-1)})y} A(dy) = φ(α₊^{(n-1)}) = α₊^{(n)}. □

We next give an alternative algorithm, which links together the phase-type setting and the classical complex plane approach to the renewal model (see further the notes). To this end, let F be the distribution of U₁ - T₁, with m.g.f. F̂[s] = Â[-s]B̂[s] whenever Ee^{ℜ(s)U} < ∞. For other s we interpret F̂[s] in the sense of the analytical continuation of the m.g.f.,

  F̂[s] = -Â[-s] · α(sI + T)⁻¹t,  (4.4)

which makes sense and provides an analytic continuation of F̂[·] as long as -s ∉ sp(T).

Theorem 4.5 Let s be some complex number with ℜ(s) > 0, -s ∉ sp(T). Then -s is an eigenvalue of Q = T + tα₊ if and only if 1 = F̂[s], F̂[s] being interpreted in the sense of (4.4). In that case, the corresponding right eigenvector may be taken as h = -(sI + T)⁻¹t.

Proof Suppose first Qh = -sh. Then e^{Qy}h = e^{-sy}h, and hence α₊h = αÂ[Q]h = Â[-s]·αh, so that

  -sh = Qh = Th + (α₊h)t = Th + Â[-s](αh)t.  (4.5)

Since -s ∉ sp(T) and h ≠ 0, (4.5) implies Â[-s]·αh ≠ 0, and we may assume that h has been normalized such that Â[-s]·αh = 1. Then (4.5) yields h = -(sI + T)⁻¹t, and since αh is then the analytic continuation of B̂[s], the normalization is equivalent to F̂[s] = 1.
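The iteration scheme of Theorem 4.4 can be sketched as follows. For exponential interarrivals with rate λ, the integral in (4.1) is available in closed form, φ(β) = λα(λI - T - tβ)⁻¹, and the model reduces to the compound Poisson case, so the limit can be checked against α₊ = -λαT⁻¹ from Corollary 3.1. (The parameters are those of Example 3.2; this is an illustration only — in general the integral in (4.1) must be evaluated numerically.)

```python
import numpy as np

lam = 3.0
alpha = np.array([0.5, 0.5])
T = np.diag([-3.0, -7.0])
t = -T @ np.ones(2)
I = np.eye(2)

# phi(b) = lam * alpha @ inv(lam I - (T + t b)) for A exponential(lam)
b = np.zeros(2)                    # alpha_+^{(0)} = 0
for _ in range(500):               # b increases monotonically to alpha_+
    b = lam * alpha @ np.linalg.inv(lam * I - (T + np.outer(t, b)))

target = -lam * alpha @ np.linalg.inv(T)   # compound Poisson value (1/2, 3/14)
```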
Suppose next F̂[s] = 1. Since ℜ(s) > 0 and G₋ is concentrated on (-∞, 0], we have |Ĝ₋[s]| < 1, and hence by the Wiener-Hopf factorization identity (A.9), Ĝ₊[s] = 1, which, G₊ being phase-type with representation (α₊, T), means -α₊(sI + T)⁻¹t = 1. Hence with h = -(sI + T)⁻¹t we get α₊h = 1, and

  Qh = (T + tα₊)h = Th + t = -sh,

using (sI + T)h = -t. Thus -s is an eigenvalue of Q with eigenvector h. □

Let d denote the number of phases.

Corollary 4.6 Suppose μ_B < μ_A and that the equation F̂(s) = 1 has d distinct roots ρ₁, ..., ρ_d in the domain ℜ(s) > 0, and define hᵢ = -(ρᵢI + T)⁻¹t. Then G₊ is phase-type with representation (α₊, T) with

  α₊ = α(Q - T)/αt,  where Q = -DC⁻¹,

C being the matrix with columns h₁, ..., h_d and D that with columns ρ₁h₁, ..., ρ_dh_d. Furthermore, letting νᵢ be the left eigenvector of Q corresponding to -ρᵢ and normalized by νᵢhᵢ = 1, Q has the diagonal form

  Q = -Σ_{i=1}^d ρᵢhᵢνᵢ = -Σ_{i=1}^d ρᵢ νᵢ ⊗ hᵢ.  (4.6)

Proof Appealing to Theorem 4.5, Q has the d distinct eigenvalues -ρ₁, ..., -ρ_d with corresponding right eigenvectors h₁, ..., h_d. This immediately implies that Q has the form -DC⁻¹, as well as the last assertion on the diagonal form. Further, from tα₊ = Q - T we get αtα₊ = α(Q - T), and the solution is α₊ = α(Q - T)/αt. □

Notes and references Results like those of the present section have a long history, and the topic is classic both in risk theory and queueing theory (recall that we can identify ψ(u) with the tail P(W > u) of the GI/PH/1 waiting time W; W ~ M^{(d)} in the notation of Chapter V). In older literature, explicit expressions for the ruin/queueing probabilities are most often derived under the slightly more general assumption that b̂ is rational (say with degree d of the polynomial in the denominator), as discussed in Section 6. As in Corollary 4.6, the classical algorithm starts by looking for roots in the complex plane of the equation B̂[γ]Â[-γ] = 1, ℜ(γ) > 0. The roots are counted and located by Rouché's theorem (a classical result from complex analysis giving a criterion for two complex functions to have the same number of zeros within the unit circle); this gives d roots γ₁, ..., γ_d satisfying ℜ(γᵢ) > 0.
In transform terms the solution is then

  1 + θ ∫₀^∞ e^{θu}ψ(u) du = Ee^{θW} = Π_{i=1}^d [γᵢ(δᵢ - θ)] / [δᵢ(γᵢ - θ)],

where δ₁, ..., δ_d are the roots of the denominator polynomial of b̂. This complex plane approach has been met with substantial criticism for a number of reasons, like lacking probabilistic interpretation and not giving the waiting time distribution/ruin probability itself but only the transform. In risk theory, a pioneering paper in this direction is Täcklind [373]; a similar discussion appears in Kemperman [227] and much of the queueing literature like Cohen [88], whereas the approach was introduced in queueing theory by Smith [350]. For further explicit computations of ruin probabilities in the phase-type renewal case, see Dickson & Hipp [118], [119].

The matrix-exponential form of the distribution was found by Sengupta [335] and the phase-type form by the author [18]. The exposition here is based upon [18], which contains somewhat stronger results concerning the fixpoint problem and the iteration scheme; see also Asmussen & O'Cinneide [41] for a short self-contained derivation. Numerical examples appear in Asmussen & Rolski [43].

In queueing theory, an alternative approach (the matrix-geometric method) has been developed largely by M.F. Neuts and his students, starting around 1975, and appearing already in some early work by Wallace [377]. Here phase-type assumptions are basic, but the models solved are basically Markov chains and processes with countably many states (for example queue length processes). The solutions are based upon iteration schemes like in Theorem 4.4; the fixpoint problems look like

  R = A₀ + RA₁ + R²A₂ + ...,

where R is an unknown matrix. The distribution of W comes out from the approach, but in a rather complicated form. For surveys, see Neuts [269], [270] and Latouche & Ramaswami [241].

5 Markov-modulated input

We consider a risk process {S_t} in a Markovian environment in the notation of Chapter VI. That is, the background Markov process with p states is {J_t}, the intensity matrix is Λ and the stationary row vector is π; the arrival rate in background state i is βᵢ, and the distribution of an arrival claim in state i is Bᵢ. We assume that each Bᵢ is phase-type, with representation say (E^{(i)}, α^{(i)}, T^{(i)}). The number of elements of E^{(i)} is denoted by qᵢ.

It turns out that subject to the phase-type assumption, the ruin probability can be found in matrix-exponential form just as for the renewal model, involving some parameters like the ones T or α₊ for the renewal model which need to be determined by similar algorithms. We start in Section 5a with an algorithm involving roots in a similar manner as Corollary 4.6; this calculation in a special case also gives the ruin probabilities for the Markov-modulated risk process with phase-type claims. Section 5b then gives a representation along the lines of Theorem 4.4, for which the relevant fixpoint problem and iteration scheme has already been studied in VI.2; there the key unknown is the matrix K. However, the analysis involves new features like an equivalence with first passage problems for Markovian fluids and the use of martingales (these ideas also apply to phase-type renewal models, though we have not given the details).

5a Calculations via fluid models. Diagonalization

Consider a process {(I_t, V_t)}_{t≥0} such that {I_t} is a Markov process with a finite state space F and {V_t} has piecewise linear paths, say with slope r(i) on intervals where I_t = i. The version of the process obtained by imposing reflection on the V component is denoted a Markovian fluid and is of considerable interest in telecommunications engineering as a model for an ATM (Asynchronous Transfer Mode) switch. The stationary distribution is obtained by finding the maximum of the V-component of the version of {(I_t, V_t)} obtained by time reversing the I component.

The connection between the two models is a fluid representation of the Markov-modulated risk process, given in Fig. 5.1.

[Figure 5.1: (a) the Markov-modulated risk process; (b) its fluid representation. The two environmental states are denoted ∘, •, and the phase spaces of B_∘ and B_• are E^{(∘)} and E^{(•)}.]
In Fig. 5.1(b), the fluid model {(I_t, V_t)} is obtained from the risk process by changing the vertical jumps to segments with slope 1: a claim in state i is then represented by an E^{(i)}-valued Markov process as on the figure. This implies that the probability in the Markov-modulated model of upcrossing level u in state i of {J_t} and phase a ∈ E^{(i)} is the same as the probability that the fluid model upcrosses level u in state (i, a).

In the general formulation, F is the disjoint union of E and the E^{(i)},

  F = E ∪ {(i, a) : i ∈ E, a ∈ E^{(i)}},

with r(i) = -1 for i ∈ E and r(i, a) = 1 for the phase states. The intensity matrix for {I_t} is (taking p = 3 for simplicity)

  Λ_I = ( Λ - (βᵢ)_diag  β₁α^{(1)}  β₂α^{(2)}  β₃α^{(3)} ;
          t^{(1)}        T^{(1)}    0          0         ;
          t^{(2)}        0          T^{(2)}    0         ;
          t^{(3)}        0          0          T^{(3)}   ),

where in the first block row, βᵢα^{(i)} occupies row i and the columns of E^{(i)}, and in the first block column, t^{(i)} occupies the rows of E^{(i)} and column i. We denote the four blocks of Λ_I, corresponding to the partitioning of F into E and E^{(1)} + ... + E^{(p)}, by E₁₁, E₁₂, E₂₁, E₂₂.

The reasons for using the fluid representation are twofold. First, in the fluid model Ee^{sV_t} < ∞ for all s and t, whereas in the risk model Ee^{sS_t} = ∞ for all t and all s ≥ s₀ where s₀ < ∞; this implies that in the fluid context we have more martingales at our disposal. Second, the eigenvalues needed for diagonalization can be found from a characteristic equation involving only p × p determinants, as the following result shows. Let Δ_r denote the diagonal matrix with the r(i), i ∈ F, on the diagonal, and write B̂ᵢ[s] = α^{(i)}(sI - T^{(i)})⁻¹t^{(i)}.

Proposition 5.1 A complex number s satisfies

  |Λ + (βᵢ(B̂ᵢ[s] - 1))_diag + sI| = 0  (5.1)

if and only if s is an eigenvalue of E = Δ_r⁻¹Λ_I. In that case, the right eigenvector of E is b = (c; d), where c = a is a vector satisfying (Λ + (βᵢ(B̂ᵢ[s] - 1))_diag + sI)a = 0, and d = (sI - E₂₂)⁻¹E₂₁a, i.e. blockwise dᵢ = aᵢ(sI - T^{(i)})⁻¹t^{(i)}; here c, d correspond to the partitioning of b into components indexed by E, resp. E^{(1)} + ... + E^{(p)}.

Proof The eigenvalue problem Eb = sb is equivalent to (Λ_I - sΔ_r)b = 0. Partitioning b = (c; d), the rows indexed by the phase states give E₂₁c + E₂₂d = sd, i.e. d = (sI - E₂₂)⁻¹E₂₁c, with the asserted blocks; substituting into the rows indexed by E gives

  (E₁₁ + sI + E₁₂(sI - E₂₂)⁻¹E₂₁)c = 0,

and since E₁₂(sI - E₂₂)⁻¹E₂₁ = (βᵢα^{(i)}(sI - T^{(i)})⁻¹t^{(i)})_diag = (βᵢB̂ᵢ[s])_diag, this is precisely (Λ + (βᵢ(B̂ᵢ[s] - 1))_diag + sI)c = 0. A non-trivial c exists if and only if the determinant (5.1) vanishes; by the well-known block determinant identity

  |Λ_I - sΔ_r| = |E₂₂ - sI| · |E₁₁ + sI - E₁₂(E₂₂ - sI)⁻¹E₂₁|,

this holds (for s ∉ sp(E₂₂)) if and only if s is an eigenvalue of E. □
. j.5. pi(u.. a)). define w(u. u) Iw(u. .v}. I' i( V P2 (w (u) < oo.4 Assume that E has two states and that B1. Proof Writing Or'Alb( v) = svb( v) as (AI . To determine 0 (u). a) = (j. the result u follows.2 Assume that E = Or 'Al has q = ql + + qp distinct eigenvalues si.v) = = p i( u .238 CHAPTER VIII.( u. q. j. < 0 and let b(v) = I d(„)) be the right eigenvector corresponding to s..v. v = 1. v) yields C{V) = e8 . a)). w(u)=inf{t >O:Vtu}. Iw(u... we first look for the negative eigenvalue s of E = I 0 I which is s = ry with yy = b .. For u. v.. j.. Here E has one state only. v > 0.a Solving for the pi(u. a) = Pi (Vw(u. sq with $2s.Q. v. .4 that {e"1b(v) is a martingale . s2 are the negative eigenvalues of Al +01 A1 E _ A 2 b1 0 52 A2 +32 0 ../' u = e' (esiuc ( 1) .upi(u. . it follows by Proposition II. Example 5 .. B2 are both exponential with rates 51 i b2.sv)b(v) = 0.v) = Optional stopping at time w (u. .3 Consider the Poisson model with exponential claims with rate 5. a). Thus 0(u) = esu/d = pe7 ° as u should be.v) = (j.v) = v) I. e89uc(e)) (d(1) .. Letting v ^ oo and using Rsv < 0 yields e8'u = Epi(u. MATRIXANALYTIC METHODS Theorem 5. a) and noting that i1 (u) = >I j.. w(u. Then .pi(u. a )d(a + e8 °vpi (u . ..O. . j. v.v) = j).j.. Example 5 .j)c v . j) pi( u . j.v)=inf{t >0:Vtu orVt=. d("))1 e. c j. Then we get V)i (u) as sum of two exponential terms where the rates s1.a)d^ ). We can take a = c = 1 and get d = (s + b)16 = 5/(3 = 1/p.
5b Computations via K

Recall the definition of the matrix K from VI.2. In terms of K, we get the following phase-type representation for the ladder heights (see the Appendix for the definitions of the Kronecker product ⊗ and the Kronecker sum ⊕):

Proposition 5.5 G₊(i, j; ·) is phase-type with representation (E^{(j)}, θ^{(ij)}, T^{(j)}), where

  θ^{(ij)} = -βⱼ(eᵢ' ⊗ α^{(j)})(K ⊕ T^{(j)})⁻¹(eⱼ ⊗ I).  (5.2)

Theorem 5.6 For i ∈ E,

  ψᵢ(u) = Pᵢ(M > u) = θ^{(i)}e^{Uu}e,

i.e. the Pᵢ-distribution of M is phase-type with representation (E^{(1)} + ... + E^{(p)}, θ^{(i)}, U), where θ^{(i)} = (θ^{(ij)})_{j∈E} and U is given by

  u_{ja,kγ} = t^{(j)}_{aγ} + t^{(j)}_a θ^{(jk)}_γ  (j = k),   u_{ja,kγ} = t^{(j)}_a θ^{(jk)}_γ  (j ≠ k).  (5.3)
MATRIXANALYTIC METHODS Proof We decompose M in the familiar way as sum of ladder steps .1 Let b(x) be an integrable function on [0. a) to (k. intensity matrix U. equivalently.2. the ratio between two polynomials (for the form of the density. which occurs at rate t^^7. Starting from Jo = i. 0 1)'.) which is rational. bn1 bn). we have sofar concentrated on a claim size distribution B of phasetype. u Notes and references Section 5a is based upon Asmussen [21] and Section 5b upon Asmussen [17].. However. Numerical illustrations are given in Asmussen & Rolski [43]. i. t = (0 0 . +aii10+anI then a matrixexponential representation is given by b(x) = aeTxt where a = (b1 b2 .. For a transition from (j. Bk7 .f. (6.5). Then b*[0] is rational if and only b(x) is matrixexponential. i. Piecing together these phase processes yields a terminating Markov process with state space EiEE E(').240 CHAPTER VIII.e. which occurs at rate t(i). that the density b(x) can be written as aeTxt for some row vector a.. t) is the representation of the matrixexponential distribution/density): Proposition 6. and lifelength M. and it just remains to check that U has the asserted form. .. in many cases where such expressions are available there are classical results from the prephasetypeera which give alternative solutions under the slightly more general assumption that B has a Laplace transform (or. 6 Matrixexponential distributions When deriving explicit or algorithmically tractable expressions for the ruin probability.y) to occur when j # k.k y. a) is obviously chosen according to e(`). the current ladder step of type j must terminate.g. some square matrix T and some column vector t (the triple (a. with phase space EU> whenever the corresponding arrival occurs in environmental state j (the ladder step is of type j). Associated with each ladder step is a phase process. +bn0i1 0n +a10n1 +. say. Furthermore. if b* [0] = b1 +b20+b302 +. the initial value of (i. For j = k.p.e. 
we have the additional possibility of a phase change from a to ry within the ladder step. An alternative characterization is that such a distribution is matrixexponential. T.. see Example 1.2) . which occurs w.. and a new ladder step of type k must start in phase y. This yields the asserted form of uja.. oo) and b* [0] = f °O eBxb(x) dx the Laplace transform. a m..
Namely,

  T = ( 0     1       0       ...  0    0  ;
        0     0       1       ...  0    0  ;
        ...                                ;
        0     0       0       ...  0    1  ;
        -a_n  -a_{n-1} -a_{n-2} ... -a₂  -a₁ ).  (6.3)

Proof If b(x) = αe^{Tx}t, then b*[θ] = α(θI - T)⁻¹t, which is rational since each element of (θI - T)⁻¹ is so. The converse follows from the last statement of the theorem, for a proof of which see Asmussen & Bladt [29] (the representation (6.3) was suggested by Colm O'Cinneide, personal communication). □

Remark 6.2 A remarkable feature of Proposition 6.1 is that it gives an explicit Laplace transform inversion, which may appear more appealing than the first attempt to invert b*[θ] one would do, namely to assume the roots δ₁, ..., δ_n of the denominator of (6.1) to be distinct and expand the r.h.s. of (6.1) as Σ_{i=1}^n cᵢ/(θ + δᵢ), giving b(x) = Σ_{i=1}^n cᵢe^{-δᵢx}. □

Example 6.3 A set of necessary and sufficient conditions for a distribution to be phase-type is given in O'Cinneide [276]. One of his elementary criteria, b(x) > 0 for x > 0, shows that the distribution B with density

  b(x) = c(1 - cos(2πx))e^{-x},  where c = 1 + 1/(4π²),

cannot be phase-type (the density vanishes at x = 1, 2, ...). But B is matrix-exponential: writing

  b(x) = c( e^{-x} - ½e^{(-1+2πi)x} - ½e^{(-1-2πi)x} ),

it follows that a matrix-exponential representation (β, S, s) is given by

  β = (1 1 1),  S = ( -1+2πi 0 0 ; 0 -1-2πi 0 ; 0 0 -1 ),  s = ( -c/2 ; -c/2 ; c ).  (6.4)

This representation is complex, but as follows from Proposition 6.1, we can always obtain a real one (α, T, t): since

  b*[θ] = (1 + 4π²) / (θ³ + 3θ² + (3 + 4π²)θ + 1 + 4π²),

it follows by (6.1)-(6.3) that we can take

  α = (1+4π²  0  0),  T = ( 0 1 0 ; 0 0 1 ; -1-4π²  -3-4π²  -3 ),  t = (0 0 1)'. □
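The real representation in Example 6.3 can be checked numerically; the following sketch verifies that αe^{Tx}t reproduces the density c(1 - cos(2πx))e^{-x}, including its zero at x = 1 (which is what rules out a phase-type representation).

```python
import numpy as np
from scipy.linalg import expm

four_pi2 = 4 * np.pi ** 2
c = 1 + 1 / four_pi2
alpha = np.array([1 + four_pi2, 0.0, 0.0])
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-(1 + four_pi2), -(3 + four_pi2), -3.0]])   # companion matrix (6.3)
t = np.array([0.0, 0.0, 1.0])

def b_matrix(x):
    return alpha @ expm(T * x) @ t                 # alpha e^{Tx} t

def b_direct(x):
    return c * (1 - np.cos(2 * np.pi * x)) * np.exp(-x)

errs = [abs(b_matrix(x) - b_direct(x)) for x in (0.25, 1.0, 2.5)]
zero_at_1 = b_matrix(1.0)      # density vanishes at integer points
```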
As for the role of matrix-exponential distributions in ruin probability calculations, we shall only consider the compound Poisson model with arrival rate β and a matrix-exponential claim size distribution B, and present two algorithms for calculating ψ(u) in that setting.

For the first, we use a representation (α, T, t) of b(x). We recall from Section 3 (where t = -Te) that if B is phase-type with α the initial vector and T the phase generator, then

    ψ(u) = α_+ e^{(T + tα_+)u} e,  where α_+ = -βαT^{-1}.    (6.6)

For the second algorithm, we take as starting point a representation of b*[θ] as p(θ)/q(θ), where p, q are polynomials without common roots. Then (cf. the Pollaczeck-Khinchine formula of Chapter III) the Laplace transform of the ruin probability is, with ρ = βμ_B,

    ∫_0^∞ e^{-θu} ψ(u) du = (ρθq(θ) + β(p(θ) - q(θ))) / (θ(θq(θ) + β(p(θ) - q(θ)))).    (6.5)

Thus, we have represented the transform of ψ as a ratio between polynomials (note that θ = 0 is necessarily a root of the numerator and cancels), and we can use this to invert by the method of Proposition 6.1 to get ψ(u) on the form β̂e^{Ŝu}ŝ for a suitable matrix-exponential triple.

The remarkable fact is that, despite that the proof of (6.6) in Section 3 seems to use the probabilistic interpretation of phase-type distributions in an essential way, (6.6) holds true also in the matrix-exponential case, with e read as -T^{-1}t:

Proposition 6.5 Consider the compound Poisson model with arrival rate β and a matrix-exponential claim size distribution B with representation (α, T, t), ρ = βμ_B < 1, and let α_+ = -βαT^{-1}. Then

    ψ(u) = -α_+ e^{(T + tα_+)u} T^{-1} t.

Presumably, this could be verified by analytic continuation from the phase-type domain to the matrix-exponential domain, but we shall give an algebraic proof.

Proof. Write D = (θI - T)^{-1} and b* = b*[θ] = αDt. From the general matrix identity ([331] p. 519)

    (A + UBV)^{-1} = A^{-1} - A^{-1}UB(B + BVA^{-1}UB)^{-1}BVA^{-1},

with A = θI - T, U = t, B = 1, V = -α_+, we get

    (θI - T - tα_+)^{-1} = D + Dtα_+D / (1 - α_+Dt),

so that the Laplace transform of the proposed expression for ψ is

    -α_+ (θI - T - tα_+)^{-1} T^{-1}t = -α_+DT^{-1}t / (1 - α_+Dt).

Now T^{-1} + (θI - T)^{-1} = θ(θI - T)^{-1}T^{-1}, and ∫_0^∞ b(x) dx = -αT^{-1}t = 1, ∫_0^∞ x b(x) dx = αT^{-2}t = μ_B. Hence

    α_+Dt = -βαT^{-1}Dt = -(β/θ)α(T^{-1} + D)t = (β/θ)(1 - b*),
    -α_+DT^{-1}t = (β/θ)α(T^{-1} + D)T^{-1}t = (β/θ)(μ_B + (b* - 1)/θ),

and therefore

    -α_+ (θI - T - tα_+)^{-1} T^{-1}t = (ρθ + β(b* - 1)) / (θ(θ - β(1 - b*))),

which is precisely the transform (6.5) of the ruin probability. Since the transforms agree, the assertion follows. □
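As a quick numerical illustration of the formula ψ(u) = -α_+ e^{(T+tα_+)u} T^{-1}t, a sketch assuming numpy (our naming). The test case is the hyperexponential distribution used in Example 7.4 below, for which, with β = 3 and unit premium rate, ψ(u) = (24e^{-u} + e^{-6u})/35:

```python
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition (fine for diagonalizable A)."""
    w, V = np.linalg.eig(A)
    return (V * np.exp(w)) @ np.linalg.inv(V)

def ruin_prob_me(beta, alpha, T, t, u):
    """psi(u) = -alpha_+ e^{(T + t alpha_+)u} T^{-1} t with alpha_+ = -beta*alpha*T^{-1}."""
    Tinv = np.linalg.inv(T)
    alpha_plus = -beta * alpha @ Tinv
    return float((-alpha_plus @ expm((T + np.outer(t, alpha_plus)) * u) @ Tinv @ t).real)

# Hyperexponential claims: B = (1/2)Exp(3) + (1/2)Exp(7), beta = 3, premium rate 1.
alpha = np.array([0.5, 0.5]); T = np.diag([-3.0, -7.0]); t = -T @ np.ones(2)
for u in [0.0, 1.0, 2.0]:
    print(u, ruin_prob_me(3.0, alpha, T, t, u), (24*np.exp(-u) + np.exp(-6*u))/35)
```

At u = 0 this returns ρ = 5/7, as it must.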
Notes and references As noted in the references to Section 4, a key early paper is Cox [90] (from where the distribution in Remark 6.3 is taken). For expositions on the general theory of matrix-exponential distributions, see Asmussen & Bladt [29], Lipsky [247] and Asmussen & O'Cinneide [41]. Some key early references using distributions with a rational transform for applied probability calculations are Täcklind [373] (ruin probabilities) and Smith [350] (queueing theory); a key tool of this classical approach is identifying poles and zeroes of transforms via Wiener-Hopf factorization, and much of its flavor and many examples are in Cohen [88]. The proof of Proposition 6.5 is similar to arguments used in [29] for formulas in renewal theory.

7 Reserve-dependent premiums

We consider the model of Chapter VII with Poisson arrivals at rate β, premium rate p(r) at level r of the reserve {R_t} and claim size distribution B, which we assume to be of phase-type with representation (E, α, T). In Corollary VII.1.8, the ruin probability ψ(u) was found in explicit form for the case of B being exponential (for some remarkable explicit formulas due to Paulsen & Gjessing [286], see the Notes to VII.1a; the argument of [286], however, does not apply in any reasonable generality). We present here first a computational approach for the general phase-type case (Section 7a) and next (Section 7b) a set of formulas covering the case of a two-step premium rule.

7a Computing ψ(u) via differential equations

The representation we use is essentially the same as the ones used in Sections 3 and 4: to piece together the phases at downcrossing times of {R_t} (upcrossing times of {S_t}) to a Markov process {m_x} with state space E; see Fig. 7.1, which is self-explanatory.
Figure 7.1 [The reserve process {R_t} started from R_0 = u, with downcrossings of level u at times t_1 < t_2 < u; the phases at these times piece together to the process {m_x}.]

The difference from the case p(r) = p is that {m_x}, though still Markov, is no longer time-homogeneous; in contrast to Section 3, the definition of {m_x} depends on the initial reserve u = R_0. Define further v_i(u) as the probability that the risk process starting from R_0 = u downcrosses level u for the first time in phase i. Note that in general Σ_{i∈E} v_i(u) < 1; in fact, Σ_{i∈E} v_i(u) is the ruin probability for a risk process with initial reserve 0 and premium function p(u + ·).

Let P(t_1, t_2), 0 ≤ t_1 ≤ t_2 ≤ u, be the matrix with ij-th element P(m_{t_2} = j | m_{t_1} = i). Since v(u) = (v_i(u))_{i∈E} is the (defective) initial probability vector for {m_s}, we obtain

    ψ(u) = P(m_u ∈ E) = v(u)P(0, u)e = A(u)e,    (7.1)

where A(t) = v(u)P(0, t) is the vector of state probabilities for m_t, A_i(t) = P(m_t = i). By general results on time-inhomogeneous Markov processes,

    P(t_1, t_2) = exp { ∫_{t_1}^{t_2} Q(v) dv },  where  Q(t) = (d/ds)[P(t, t + s) - I] |_{s=0}.

Given the v(t) have been computed, the A(t), and hence ψ(u), are available by solving differential equations:

Proposition 7.1 A(0) = v(u) and A'(t) = A(t)(T + t v(u - t)), 0 < t < u.

Proof. The first statement is clear by definition. Also, the interpretation of Q(t) as the intensity matrix of {m_x} at time t shows that Q(t) is made up of two terms: {m_x} has jumps of two types, those corresponding to state changes in the underlying phase process and those corresponding to the present jump of {R_t} being terminated at level u - t and being followed by a downcrossing. The intensity of a jump from i to j is t_{ij} for jumps of the first type and t_i v_j(u - t) for the second. Hence

    Q(t) = T + t v(u - t),  and  A'(t) = A(t)Q(t) = A(t)(T + t v(u - t)). □

Thus, from a computational point of view, the remaining problem is to evaluate the v(t), 0 ≤ t ≤ u.
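In the special case of a constant premium rate p(r) ≡ p the downcrossing vector is constant in the level, v(·) ≡ -βαT^{-1}/p, and Proposition 7.1 reduces to integrating the linear system A'(t) = A(t)(T + tα_+). A minimal sketch with classical Runge-Kutta (our naming; the test case is the hyperexponential example of Example 7.4 with β = 3, p = 1, where the known answer is ψ(u) = (24e^{-u} + e^{-6u})/35):

```python
import numpy as np

beta, p = 3.0, 1.0
alpha = np.array([0.5, 0.5]); T = np.diag([-3.0, -7.0]); t = -T @ np.ones(2)
alpha_plus = -beta/p * alpha @ np.linalg.inv(T)   # v(.) is constant here
Q = T + np.outer(t, alpha_plus)                   # generator from Proposition 7.1

def psi_ode(u, n=2000):
    """Integrate A'(t) = A(t) Q, A(0) = alpha_plus, by RK4; psi(u) = A(u)e."""
    h = u / n
    A = alpha_plus.copy()
    f = lambda A: A @ Q
    for _ in range(n):
        k1 = f(A); k2 = f(A + h/2*k1); k3 = f(A + h/2*k2); k4 = f(A + h*k3)
        A = A + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return A.sum()

for u in [0.5, 1.0, 2.0]:
    print(u, psi_ode(u), (24*np.exp(-u) + np.exp(-6*u))/35)
```

For a genuinely reserve-dependent p(r) the same integrator applies once the v(t) have been obtained from (7.4) below.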
Proposition 7.2 For i ∈ E,

    v_i'(u)p(u) = -(βa_i + v_i(u)[p(u) Σ_{j∈E} v_j(u)t_j - β] + p(u) Σ_{j∈E} v_j(u)t_{ji}).    (7.4)

Proof. Consider the event A that there are no arrivals in the interval [0, dt]. Given A^c (an arrival, probability βdt), the probability of downcrossing level u in phase i is a_i. Given A, the probability that level u + p(u)dt is downcrossed for the first time in phase j is v_j(u + p(u)dt). Given this occurs, two things can happen: either the current jump continues from u + p(u)dt to u, or it stops between levels u + p(u)dt and u. In the first case, the probability of downcrossing level u in phase i is δ_{ji}(1 + p(u)dt·t_{ii}) + (1 - δ_{ji})p(u)dt·t_{ji} = δ_{ji} + p(u)t_{ji}dt, whereas in the second case the probability is p(u)dt·t_j v_i(u). Hence, given A, the probability of downcrossing level u in phase i for the first time is

    Σ_{j∈E} v_j(u + p(u)dt)(δ_{ji} + p(u)dt·t_{ji} + p(u)dt·t_j v_i(u))
    = v_i(u) + v_i'(u)p(u)dt + p(u)dt Σ_{j∈E} v_j(u){t_{ji} + t_j v_i(u)}.

Thus

    v_i(u) = βa_i dt + (1 - βdt)[v_i(u) + v_i'(u)p(u)dt + p(u)dt Σ_{j∈E} v_j(u){t_{ji} + t_j v_i(u)}].

Subtracting v_i(u) on both sides and dividing by dt yields the asserted differential equation. □

When solving the differential equation in Proposition 7.2, we face the difficulty that no boundary condition is immediately available. To deal with this, consider a modification of the original process {R_t} obtained by linearizing the process with some rate p̄, say, after a certain level v, say. Thus

    p_v(r) = p(r), r ≤ v;  p_v(r) = p̄, r > v,

and (no matter how p̄ is chosen) we let v_i^v(u), ψ^v(u), P^v etc. refer to the modified process. Then:

Lemma 7.3 For any fixed u > 0, v_i(u) = lim_{v→∞} v_i^v(u).

Proof. Let A be the event that the process downcrosses level u in phase i given that it starts at u, and let

    B_v = { sup_{t≤σ} R_t > v, σ < ∞ },

where σ denotes the time of downcrossing level u. Then P(B_v) is the tail of a (defective) random variable, so that P(B_v) → 0 as v → ∞, and similarly P^v(B_v) → 0. Since the processes R_t and R_t^v coincide under level v, we have P(A ∩ B_v^c) = P^v(A ∩ B_v^c). Now since both P(A ∩ B_v) → 0 and P^v(A ∩ B_v) → 0 as v → ∞,

    P(A) - P^v(A) = P(A ∩ B_v) - P^v(A ∩ B_v) → 0,

which implies v_i^v(u) → v_i(u). □

Thus, we can first, for a given v, solve (7.4) backwards for {v^v(t)}_{v≥t≥0}, starting from v^v(v) = -βαT^{-1}/p̄ (from Section 3, when the premium rate is constant, the downcrossing vector does not depend on the level). This yields v^v(u) for any values of u and v such that u ≤ v.
Next, consider a sequence of solutions obtained from a sequence of initial values {v^v(·)} with v = u, 2u, 3u, etc. By Lemma 7.3, we obtain a convergent sequence of solutions that converges to {v_i(t)}_{u≥t≥0}, and hence to v_i(u); ψ(u) is then available from Proposition 7.1.

Notes and references The exposition is based upon Asmussen & Bladt [30], which also contains numerical illustrations. The algorithm based upon numerical solution of a Volterra integral equation (Remark VII.1.9, numerically implemented in Schock Petersen [288]) and the present one based upon differential equations both require discretization along a discrete grid 0, 1/n, 2/n, .... The precision depends on the particular quadrature rule being employed: the trapezoidal rule used in [288] gives a precision of O(n^{-3}), while the fourth-order Runge-Kutta method implemented in [30] gives O(n^{-5}). However, typically the complexity in n is O(n²) for integral equations but O(n) for differential equations.

7b Two-step premium rules

We now assume the premium function to be constant in two levels as in VII.1.10,

    p(r) = p_1, r ≤ v;  p(r) = p_2, r > v.    (7.6)

We may think of the process {R_t} as pieced together of two standard risk processes {R_t^1} and {R_t^2} with constant premiums p_1, p_2, such that {R_t} coincides with {R_t^1} under level v and with {R_t^2} above level v.
Let ψ^1(u) = α_+^{(1)} e^{(T+tα_+^{(1)})u} e denote the ruin probability for {R_t^1}, where α_+^{(1)} = -βαT^{-1}/p_1. We recall from Proposition VII.1.10 that, in addition to the ψ^1(·), the evaluation of ψ(u) requires q(w), 0 ≤ w ≤ v, the probability of upcrossing level v before ruin given the process starts at w ≤ v; for the present rule,

    q(w) = (1 - ψ^1(w)) / (1 - ψ^1(v)).

Next consider u ≥ v and let σ = inf{t > 0 : R_t < v}. Then for u > v, the distribution of v - R_σ (defined for σ < ∞ only) is defective phase-type with representation (v(u), T), where

    v(u) = α_+^{(2)} e^{(T+tα_+^{(2)})(u-v)},  α_+^{(2)} = -βαT^{-1}/p_2;

that is, v(u) is the initial distribution of the undershoot when downcrossing level v, given that the process starts at u (cf. Corollary 3.1 applied to {R_t^2}). Denote by p1(u) the probability of ruin between σ and the next upcrossing of v. Collecting the contributions from {R_σ > 0} (the integral below) and from {R_σ < 0} (the last term), we get

    p1(u) = (1/(1 - ψ^1(v))) ∫_0^v v(u)e^{Tx}t (ψ^1(v - x) - ψ^1(v)) dx + v(u)e^{Tv}e,    (7.7)

which, using ∫_0^v v(u)e^{Tx}t dx = v(u)e - v(u)e^{Tv}e, can be rewritten as

    p1(u) = (1/(1 - ψ^1(v))) { v(u)e^{Tv}e - ψ^1(v) v(u)e + ∫_0^v v(u)e^{Tx}t ψ^1(v - x) dx }.    (7.8)
The integral in (7.8) equals

    ∫_0^v v(u)e^{Tx} t · α_+^{(1)} e^{(T+tα_+^{(1)})(v-x)} e dx,

which, using Kronecker calculus (see A.4), can be written as

    (v(u) ⊗ α_+^{(1)}) (T ⊗ I - I ⊗ (T + tα_+^{(1)}))^{-1} (e^{Tv} ⊗ I - I ⊗ e^{(T+tα_+^{(1)})v}) (t ⊗ e).

Thus, all quantities involved in the computation of ψ(u) have been found in matrix form.

Example 7.4 Let {R_t} be as in Example 3.2: the arrival rate is β = 3 and B is hyperexponential, corresponding to

    α = (1/2  1/2),  T = ( -3  0 ),  t = ( 3 )
                         (  0 -7 )       ( 7 ).

Since μ_B = 5/21, the net profit condition requires a premium rate exceeding βμ_B = 5/7; a single constant rate is covered by Example 3.2, so we consider the non-trivial two-step case p_1 = 1, p_2 = 3/4. From Example 3.2 (premium rate 1),

    ψ^1(u) = (24e^{-u} + e^{-6u})/35,  so that  1 - ψ^1(v) = (35e^{6v} - 24e^{5v} - 1)/(35e^{6v}).

Further α_+^{(2)} = -βαT^{-1}/p_2 = (2/3, 2/7), and the eigenvalues of T + tα_+^{(2)} are λ_1 = -3 + 2√2 and λ_2 = -3 - 2√2. Diagonalizing, the entries of v(u) = α_+^{(2)}e^{(T+tα_+^{(2)})(u-v)} come out as explicit linear combinations of e^{λ_1(u-v)} and e^{λ_2(u-v)}; in particular, the total undershoot mass at u = v is v(v)e = 20/21 = ρ_2. From (7.7) we see that we can write p1(u) = v(u)V_2, where the column vector V_2 depends only on v, and inserting the explicit expressions one arrives after some algebra at

    ψ(v) = (192e^{5v} + 8) / (35e^{6v} + 168e^{5v} + 7),

which indeed tends to ρ_2 = 20/21 as v ↓ 0. Thus all terms involved in the formulae for the ruin probability have been explicitly derived. □

Notes and references The analysis and the example are from Asmussen & Bladt [30].
Chapter IX

Ruin probabilities in the presence of heavy tails

1 Subexponential distributions

We are concerned with distributions B with a heavy right tail B̄(x) = 1 - B(x). A rough distinction between light and heavy tails is that the m.g.f. B̂[s] = ∫ e^{sx} B(dx) is finite for some s > 0 in the light-tailed case and infinite for all s > 0 in the heavy-tailed case. For example, the exponential change of measure techniques discussed in Chapters II-III and at numerous later occasions require a light tail. Some main cases where this light-tail criterion is violated are:

(a) distributions with a regularly varying tail, B̄(x) = L(x)/x^α, where α > 0 and L(x) is slowly varying, i.e. L(tx)/L(x) → 1, x → ∞, for all t > 0;

(b) the lognormal distribution (the distribution of e^U where U ~ N(μ, σ²)) with density

    (1/(x√(2πσ²))) e^{-(log x - μ)²/(2σ²)};

(c) the Weibull distribution with decreasing failure rate, B̄(x) = e^{-x^β} with 0 < β < 1.

For further examples, see I.2b.

The definition B̂[s] = ∞ for all s > 0 of heavy tails is too general to allow for general non-trivial results on ruin probabilities, and instead we shall work within the class S of subexponential distributions. For the definition, we require that B is concentrated on (0, ∞), and say then that B is subexponential (B ∈ S) if
    B̄*²(x)/B̄(x) → 2,  x → ∞.    (1.1)

Here B*2 is the convolution square, that is, the distribution of the sum of two independent r.v.'s X_1, X_2 with common distribution B. In terms of r.v.'s, (1.1) then means P(X_1 + X_2 > x) ~ 2P(X_1 > x). To capture the intuition behind this definition, note first the following fact:

Proposition 1.1 Let B be any distribution on (0, ∞). Then:
(a) P(max(X_1, X_2) > x) ~ 2B̄(x);  (b) liminf_{x→∞} B̄*²(x)/B̄(x) ≥ 2.

Proof. By the inclusion-exclusion formula, P(max(X_1, X_2) > x) is

    P(X_1 > x) + P(X_2 > x) - P(X_1 > x, X_2 > x) = 2B̄(x) - B̄(x)² ~ 2B̄(x),

proving (a). Since B is concentrated on (0, ∞), we have {max(X_1, X_2) > x} ⊆ {X_1 + X_2 > x}, and thus the liminf in (b) is at least liminf P(max(X_1, X_2) > x)/B̄(x) = 2. □

The proof shows that the condition for B ∈ S is that the probability of the set {X_1 + X_2 > x} is asymptotically the same as the probability of its subset {max(X_1, X_2) > x}. That is, in the subexponential case, the only way X_1 + X_2 can get large is by one of the X_i becoming large. We later show:

Proposition 1.2 If B ∈ S, then P(X_1 > x | X_1 + X_2 > x) → 1/2 and, for each fixed y, P(X_1 ≤ y | X_1 + X_2 > x) → B(y)/2.

That is, given X_1 + X_2 > x, X_1 is w.p. 1/2 'typical' (with distribution B) and w.p. 1/2 it has the distribution of X_1 | X_1 > x. Thus, if X_1 + X_2 is large, then (with high probability) one of X_1, X_2 is large, but not both of them.

In contrast, the behaviour in the light-tailed case is illustrated in the following example:

Example 1.3 Consider the standard exponential distribution, B̄(x) = e^{-x}. Then X_1 + X_2 has an Erlang(2) distribution with density ye^{-y}, so that B̄*²(x) ~ xe^{-x}, x → ∞. Thus the liminf in Proposition 1.1(b) is ∞. Further, given X_1 + X_2 > x, neither of the X_i needs to be large; one can check that the conditional distribution of X_1 is approximately that of xU, where U is uniform on (0, 1). □
Here is the simplest example of subexponentiality:

Proposition 1.4 Any B with a regularly varying tail is subexponential.

Proof. Assume B̄(x) = L(x)/x^α with L slowly varying and α ≥ 0, and let 0 < δ < 1/2. If X_1 + X_2 > x, then either one of the X_i exceeds (1 - δ)x, or they both exceed δx. Hence

    limsup_{x→∞} B̄*²(x)/B̄(x) ≤ limsup_{x→∞} [2B̄((1 - δ)x) + B̄(δx)²]/B̄(x)
    = limsup_{x→∞} 2L((1 - δ)x)/((1 - δ)^α L(x)) + 0 = 2/(1 - δ)^α.

Letting δ ↓ 0, we get limsup B̄*²(x)/B̄(x) ≤ 2, and combining with Proposition 1.1(b) we get B̄*²(x)/B̄(x) → 2. □

We now turn to the mathematical theory of subexponential distributions.

Proposition 1.5 If B ∈ S, then B̄(x - y)/B̄(x) → 1 uniformly in y ∈ [0, y_0] as x → ∞.

[In terms of r.v.'s: if X ~ B ∈ S, then the overshoot X - x | X > x converges in distribution to ∞. This follows since the probability of the overshoot to exceed y is B̄(x + y)/B̄(x), which has limit 1.]

Proof. Consider first a fixed y. Using the identity

    B̄*^{(n+1)}(x)/B̄(x) = 1 + ∫_0^x (B̄*ⁿ(x - z)/B̄(x - z)) (B̄(x - z)/B̄(x)) B(dz)    (1.2)

with n = 1, and splitting the integral into two corresponding to the intervals [0, y] and (y, x], we get

    B̄*²(x)/B̄(x) ≥ 1 + B(y) + (B̄(x - y)/B̄(x)) (B(x) - B(y)).

If limsup_{x→∞} B̄(x - y)/B̄(x) > 1, we would therefore get limsup B̄*²(x)/B̄(x) > 1 + B(y) + (1 - B(y)) = 2, a contradiction. Finally, liminf B̄(x - y)/B̄(x) ≥ 1 since y ≥ 0. The uniformity now follows from what has been shown for y = y_0 and the obvious inequality

    1 ≤ B̄(x - y)/B̄(x) ≤ B̄(x - y_0)/B̄(x),  y ∈ [0, y_0]. □
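The defining ratio (1.1) can be illustrated numerically (a sketch assuming numpy; the helper names are ours). For a Pareto tail, B̄*²(x)/B̄(x) settles near 2, while for the standard exponential of Example 1.3 the ratio equals 1 + x exactly and diverges:

```python
import numpy as np

def conv_sq_tail(tail, dens, x, n=200_000):
    """P(X1+X2 > x) = 2*int_0^{x/2} tail(x-y) dens(y) dy + tail(x/2)**2 (trapezoid rule)."""
    y = np.linspace(0.0, x/2, n + 1)
    return 2*np.trapz(tail(x - y)*dens(y), y) + tail(x/2)**2

a = 2.0  # Pareto: tail (1+x)^(-a), density a(1+x)^(-a-1)
par_tail = lambda x: (1 + x)**(-a)
par_dens = lambda x: a*(1 + x)**(-a - 1)
exp_tail = lambda x: np.exp(-x)
exp_dens = lambda x: np.exp(-x)

for x in [10.0, 50.0, 200.0]:
    r_par = conv_sq_tail(par_tail, par_dens, x)/par_tail(x)
    r_exp = conv_sq_tail(exp_tail, exp_dens, x)/exp_tail(x)   # exactly 1 + x
    print(x, r_par, r_exp)
```

The split at x/2 uses that on {X_1 + X_2 > x} the events {X_1 ≤ x/2} and {X_2 ≤ x/2} are disjoint, exactly the dichotomy behind Proposition 1.1.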
Proof of Proposition 1.2. Since B ∈ S,

    P(X_1 > x | X_1 + X_2 > x) = P(X_1 > x)/P(X_1 + X_2 > x) = B̄(x)/B̄*²(x) → 1/2.

Similarly, for fixed y, by Proposition 1.5 and dominated convergence,

    P(X_1 ≤ y | X_1 + X_2 > x) = (B̄(x)/B̄*²(x)) ∫_0^y (B̄(x - z)/B̄(x)) B(dz) → B(y)/2. □

Corollary 1.6 If B ∈ S, then e^{εx}B̄(x) → ∞ and B̂[ε] = ∞ for all ε > 0.

Proof. For 0 < δ < ε, we have by Proposition 1.5 that B̄(n) ≥ e^{-δ}B̄(n - 1) for all large n, so that B̄(n) ≥ c_1 e^{-δn} for all n. This implies B̄(x) ≥ c_2 e^{-δx} for all x, and this immediately yields the desired conclusions. □

The following result is extremely important and is often taken as the definition of the class S; its intuitive content is the same as discussed in the case n = 2 above.

Proposition 1.7 If B ∈ S, then for any n, B̄*ⁿ(x)/B̄(x) → n, x → ∞.

Proof. We use induction. The case n = 2 is just the definition, so assume the proposition has been shown for n. Given ε > 0, choose y such that |B̄*ⁿ(x)/B̄(x) - n| ≤ ε for x ≥ y. Splitting the integral in (1.2) at x - y,

    B̄*^{(n+1)}(x)/B̄(x) = 1 + (∫_0^{x-y} + ∫_{x-y}^x) (B̄*ⁿ(x - z)/B̄(x - z)) (B̄(x - z)/B̄(x)) B(dz).

Here the second integral can be bounded, using B̄*ⁿ ≤ 1, by

    ∫_{x-y}^x B(dz)/B̄(x) = (B̄(x - y) - B̄(x))/B̄(x),

which converges to 0 by Proposition 1.5. The first integral is

    (n + O(ε)) ∫_0^{x-y} (B̄(x - z)/B̄(x)) B(dz) = (n + O(ε)) [B̄*²(x)/B̄(x) - 1 - o(1)] → n + O(ε),

using (1.2) with n = 1 and the bound just derived. Letting ε ↓ 0 completes the proof. □
Proposition 1.8 (Kesten's bound) If B ∈ S and ε > 0, then there exists a constant K = K_ε such that B̄*ⁿ(x) ≤ K(1 + ε)ⁿ B̄(x) for all n and x.

Proof. Let a_n = sup_{x≥0} B̄*ⁿ(x)/B̄(x). Define δ > 0 by (1 + δ)² = 1 + ε, choose T such that (B̄*²(x) - B̄(x))/B̄(x) ≤ 1 + δ for x ≥ T, and let A = 1/B̄(T). Then by (1.2),

    a_{n+1} ≤ 1 + sup_{x≤T} ∫_0^x (B̄*ⁿ(x - z)/B̄(x)) B(dz)
              + sup_{x≥T} ∫_0^x (B̄*ⁿ(x - z)/B̄(x - z)) (B̄(x - z)/B̄(x)) B(dz)
         ≤ 1 + A + a_n(1 + δ).

The truth of this for all n, together with a_1 = 1, implies a_n ≤ K(1 + δ)ⁿ ≤ K(1 + ε)ⁿ, where K = (1 + A)/δ. □

Proposition 1.9 Let A_1, A_2 be distributions on (0, ∞) such that Ā_i(x) ~ a_i B̄(x) for some B ∈ S and some constants a_1, a_2 with a_1 + a_2 > 0. Then the tail of A_1*A_2 satisfies Ā_1*Ā_2(x) ~ (a_1 + a_2)B̄(x).

Proof. Let X_1, X_2 be independent r.v.'s such that X_i has distribution A_i; then Ā_1*Ā_2(x) = P(X_1 + X_2 > x). For any fixed v and i = 1, 2, dominated convergence and Proposition 1.5 yield

    P(X_1 + X_2 > x, X_{3-i} ≤ v) = ∫_0^v Ā_i(x - y) A_{3-i}(dy) ~ a_i B̄(x) A_{3-i}(v) = a_i B̄(x)(1 + o_v(1)),

whereas P(X_1 > x - v, X_2 > x - v) ≤ Ā_1(x - v)Ā_2(x - v) ~ a_1 a_2 B̄(x)², which can be neglected. It follows that it is necessary and sufficient for the assertion to be true that

    ∫_v^{x-v} Ā_i(x - y) A_{3-i}(dy) = B̄(x) o_v(1).    (1.3)

Using the necessity part in the case A_1 = A_2 = B yields

    ∫_v^{x-v} B̄(x - y) B(dy) = B̄(x) o_v(1).    (1.4)

By a change of variables (integration by parts), the l.h.s. of (1.3) is bounded by a constant multiple of the l.h.s. of (1.4) plus terms of order B̄(x)o_v(1), and (1.3) follows. □
Corollary 1.10 The class S is closed under tail-equivalence. That is, if Ā(x) ~ aB̄(x) for some B ∈ S and some constant a > 0, then A ∈ S.

Proof. Taking A_1 = A_2 = A, a_1 = a_2 = a in Proposition 1.9 yields Ā*²(x) ~ 2aB̄(x) ~ 2Ā(x). □

Corollary 1.11 Let B ∈ S and let A be any distribution with a lighter tail, Ā(x) = o(B̄(x)). Then A*B ∈ S and the tail of A*B satisfies Ā*B̄(x) ~ B̄(x).

Proof. Take A_1 = A, A_2 = B, so that a_1 = 0, a_2 = 1. □

It is tempting to conjecture that S is closed under convolution; that is, that B_1, B_2 ∈ S should imply B_1*B_2 ∈ S and B̄_1*B̄_2(x) ~ B̄_1(x) + B̄_2(x). However, B_1*B_2 ∈ S does not hold in full generality (but once B_1*B_2 ∈ S has been shown, the tail relation follows precisely as in the proof of Corollary 1.10). In the regularly varying case, the conjecture is true:

Corollary 1.12 Assume that B̄_i(x) = L_i(x)/x^α, i = 1, 2, with α ≥ 0 and L_1, L_2 slowly varying. Then B_1*B_2 ∈ S and B̄_1*B̄_2(x) ~ B̄_1(x) + B̄_2(x).

Proof. It is easy to see that if L_1, L_2 are slowly varying, then so is L = L_1 + L_2. Then B̄_1(x) + B̄_2(x) ~ L(x)/x^α, and an estimate as in the proof of Proposition 1.4 gives B̄_1*B̄_2(x) ~ L(x)/x^α; subexponentiality then follows from Proposition 1.4 and Corollary 1.10. □

We next give a classical sufficient (and close to necessary) condition for subexponentiality, due to Pitman [290]. Recall that the failure rate λ(x) of a distribution B with density b is λ(x) = b(x)/B̄(x).

Proposition 1.13 Let B have density b and failure rate λ(x) such that λ(x) is decreasing for x ≥ x_0 with limit 0 at ∞. Then B ∈ S provided ∫_0^∞ e^{xλ(x)} b(x) dx < ∞.
Proof. We may assume that λ(x) is everywhere decreasing (otherwise, replace B by a tail-equivalent distribution with a failure rate which is everywhere decreasing). Define Λ(x) = ∫_0^x λ(y) dy; then B̄(x) = e^{-Λ(x)}, and by (1.2),

    B̄*²(x)/B̄(x) - 1 = ∫_0^x e^{Λ(x) - Λ(x-y) - Λ(y)} λ(y) dy
    = ∫_0^{x/2} e^{Λ(x) - Λ(x-y) - Λ(y)} λ(y) dy + ∫_0^{x/2} e^{Λ(x) - Λ(x-y) - Λ(y)} λ(x - y) dy,

where the second term comes from substituting y → x - y. For y ≤ x/2, Λ(x) - Λ(x - y) ≤ yλ(x - y) ≤ yλ(y), so the integrand in the first integral is bounded by e^{yλ(y) - Λ(y)} λ(y) = e^{yλ(y)} b(y), an integrable function by assumption; since λ(x - y) → 0, the integrand converges to e^{-Λ(y)}λ(y) = b(y) for each fixed y. Thus by dominated convergence, the first integral has limit ∫_0^∞ b(y) dy = 1. We can use the same domination for the second integral (λ(x - y) ≤ λ(y) for y ≤ x/2), but now the integrand has limit 0, so the second integral vanishes. Hence B̄*²(x)/B̄(x) → 2, proving B ∈ S. □

Example 1.14 Consider the DFR Weibull case, B̄(x) = e^{-x^β} with 0 < β < 1. Then b(x) = βx^{β-1}e^{-x^β}, λ(x) = βx^{β-1} is everywhere decreasing with limit 0 at ∞, and e^{xλ(x)}b(x) = βx^{β-1}e^{-(1-β)x^β} is integrable. Thus, the DFR Weibull distribution is subexponential. □

Example 1.15 In the lognormal distribution,

    b(x) = e^{-(log x - μ)²/2σ²} / (xσ√(2π)),  B̄(x) = Φ̄((log x - μ)/σ) ~ σ e^{-(log x - μ)²/2σ²} / (√(2π) log x).

Elementary but tedious calculations (which we omit) show that λ(x) is ultimately decreasing with limit 0, and this yields easily that e^{xλ(x)}b(x) is integrable. Thus, the lognormal distribution is subexponential. □

In the regularly varying case, subexponentiality has already been proved in Proposition 1.4. To illustrate how Proposition 1.13 works in this setting, we first quote Karamata's theorem (Bingham, Goldie & Teugels [66]):

Proposition 1.16 For L(x) slowly varying and α > 1,

    ∫_x^∞ (L(y)/y^α) dy ~ L(x) / ((α - 1)x^{α-1}).
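The integrability condition of Example 1.14 can be checked numerically (a sketch assuming numpy; names are ours). For the DFR Weibull, the substitution s = x^β shows that ∫_0^∞ e^{xλ(x)} b(x) dx = 1/(1 - β) exactly, which quadrature reproduces:

```python
import numpy as np

beta_ = 0.5  # Weibull shape parameter, 0 < beta_ < 1 (decreasing failure rate)

def integrand(x):
    lam = beta_ * x**(beta_ - 1)                     # failure rate lambda(x)
    b = beta_ * x**(beta_ - 1) * np.exp(-x**beta_)   # density b(x)
    return np.exp(x * lam) * b                       # e^{x lambda(x)} b(x)

# log-spaced grid handles the integrable singularity at 0 and the stretched tail
x = np.logspace(-10, 4, 200_001)
I = np.trapz(integrand(x), x)
print(I, 1/(1 - beta_))   # the substitution s = x**beta_ gives exactly 1/(1-beta_)
```

For beta_ = 0.5 the integral is 2, so Pitman's criterion applies; as beta_ ↑ 1 the value 1/(1 - beta_) blows up, consistent with the exponential boundary case not being subexponential.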
)/Y(x) > y) (1 + y/(a . we get (1.1))a .6) EX(x) . yo] .1/A(x) and P(X ixil'Y (x) > y) * e'. (1 + y/(a . then 7(x) x/(a .1)]) xa L(x) (x[1 + y/(a .ea b(x) is integrable. 'y(x) = EXix>.1)X(x)/x > y) = P(X > x[1 + y/(a .4 is necessary in full generality.1) and P(X (.258 From this we get CHAPTER IX.17 If B has a density of the form b(x) = aL(x)/x°+1 with L(x) slowly varying and a > 1.1)])a 1 1 . . the overshoot properly normalized has a limit which is Pareto if B is regularly varying and exponential for distributions like the lognormal or Weibull.1)] I X > x) L(x[1 + y/(a . Thus exa(x)b(x) .13 may present a problem in some cases so that the direct proof in Proposition 1.1))^ ' (b) Assume that for any yo )t(x + y/A(x)) 1 A(x) uniformly for y E (0.18 (a) If B has a density of the form b(x) = aL(x)/xa with L(x) slowly varying and a > 1. the monotonicity condition in Proposition 1. let X W = X . Then 7(x) . However.xjX > x.a/x.x)+ _ 1 °° P(X > x) P(X>x )J L PX >y)dy 1 x L(y)/ydy L(x)/((a1)x'1) x )l ° J °° ( ()l a x a1 Further P ((a . HEAVY TAILS Proposition 1. Then: Proposition 1.y(x)B(x). f O B(y) dy . We conclude with a property of subexponential distributions which is often extremely important: under some mild smoothness assumptions.E(X . Proof ( a): Using Karamata's theorem. then B(x) . More precisely.L(x)/x" and )t(x) . (c) Under the assumptions of either ( a) or (b).
We omit the proof of (c) and that γ(x) ~ 1/λ(x) in (b). The remaining statement (1.8) in (b) then follows from

    P(λ(x)X^{(x)} > y) = P(X > x + y/λ(x) | X > x) = exp{Λ(x) - Λ(x + y/λ(x))}
    = exp{ -∫_0^y [λ(x + u/λ(x))/λ(x)] du } = exp{-y(1 + o(1))}.

The property (1.7) is referred to as 1/λ(x) being self-neglecting. It is trivially verified to hold for the Weibull and lognormal distributions. □

Notes and references A good general reference for subexponential distributions is Embrechts, Klüppelberg & Mikosch [134].

2 The compound Poisson model

Consider the compound Poisson model with arrival intensity β and claim size distribution B. Let S_t = Σ_{i=1}^{N_t} U_i - t be the claim surplus at time t and M = sup_{t≥0} S_t, τ(u) = inf{t > 0 : S_t > u}. We assume ρ = βμ_B < 1 and are interested in the ruin probability

    ψ(u) = P(M > u) = P(τ(u) < ∞).

Recall that B_0 denotes the stationary excess distribution, B_0(x) = ∫_0^x B̄(y) dy / μ_B.

Theorem 2.1 If B_0 ∈ S, then

    ψ(u) ~ (ρ/(1 - ρ)) B̄_0(u),  u → ∞.

The proof is based upon the following lemma (stated slightly more generally than needed at present).

Lemma 2.2 Let Y_1, Y_2, ... be i.i.d. with common distribution G ∈ S and let K be an independent integer-valued r.v. with Ez^K < ∞ for some z > 1. Then

    P(Y_1 + ... + Y_K > u) ~ EK · Ḡ(u),  u → ∞.

Proof. Recall from Section 1 that Ḡ*ⁿ(u)/Ḡ(u) → n, and that for each z > 1 there is a D < ∞ such that Ḡ*ⁿ(u) ≤ Ḡ(u)Dzⁿ for all u. We get

    P(Y_1 + ... + Y_K > u)/Ḡ(u) = Σ_n P(K = n) Ḡ*ⁿ(u)/Ḡ(u) → Σ_n P(K = n)·n = EK,
the result follows immediately from Lemma 2. Weibull) one has Bo(x ( B(x) .x400 PBB(x) PB Leta+oo. see Abate. and for the lognormal and Weibull cases it can be verified using Pitman 's criterion (Proposition 1. u The condition Bo E S is for all practical purposes equivalent to B E S.1. a]. u x+a Notes and references Theorem 2. Proof of Theorem 2. we have fx B(y)dy = a B0 (x) > lim inf lim inf x+oo B(x) . Bo is more heavytailed than B . Bo E S is immediate in the regularly varying case.1)xa1' vxe(109x11)2/202 2 +° /2 µB = eµ Bo(x) eµ+O2/2(log x)2 27r' = µB = F(1/0 ) Bo(x 1 ) . in our three main examples (regular variation . HEAVY TAILS u using dominated convergence with >2 P(K = n) Dz" as majorant.13).?(xµ 8 (x). Bo E S.2. Proof Since B(x + y)/B(x) * 1 uniformly in y E [0. Since EK = p/(1. r(1/Q) xlQexp B(x) = ex' From this . then Bo(x)/B(x) + 00. as well as examples where B ¢ S.. mathematically one must note that there exist (quite intricate) examples where B E S. (2. x 4 00. Borovkov [73] and Pakes [280].p)p'.18. However.3 If B E S.1) In particular . P(K = k) = (1.µ J ) . lognormal .µB(01 . .. _ B(x^sx Bo(x) µ8 I aoB(y )dy = (^) . The approximation in Theorem 2. The tail of Bo is easily expressed in terms of the tail of B and the function y(x) in Proposition 1. Note that in these examples . For some numerical studies. See also Embrechts & Veraverbeeke [136].260 CHAPTER IX.1 is essentially due to von Bahr [56]. The problem is a very slow rate of convergence as u ^ oo. In general: Proposition 2.1 is notoriously not very accurate.2) M = Yl + • • • +YK where the Yt have distribution Bo and K is geometric with parameter p.p) and EzK < oo whenever pz < 1. The PollaczeckKhinchine formula states that (in the setup of Lemma 2. Bo ¢ S.x^ ) B(x) _ f or ( lox .
In [1].9+ < oo) = P(S..+ E A...1 gives 1010. .1) this end .Ti. Define further 0 = IIG+II = P(r9+ < oo). To Bo(u) u + 00. + Xn. Let U= be the ith claim . Snd) = Xl +. (3.e.] The proof is based upon the observation that also in the renewal setting. T+ < oo) where r+ = T1 + • • • + T. there is a representation of M similar to the PollaczeckKhinchine formula.y + as usual denotes the first ascending ladder epoch of the continuous time claim surplus process {St}. i=1 . i. 1 Assume that (a) the stationary excess distribution Bo of B is subexponential and that (b) B itself satisfies B(x . Asmussen & Binswanger [27] suggested an approximation which is substantially better than Theorem 2. M = sup s$ . Then K M=EY.1 when u is small or moderately large.. 3 The renewal model We consider the renewal model with claim size distribution B and interarrival distribution A as in Chapter V. Somewhat related work is in Omey & Willekens [278]. Thus G+ is the ascending ladder height distribution (which is defective because of PB < PA).g. [279]. G+ (A) = P(Sq+ E A. The main result is: Theorem 3 . This shows that even the approximation is asymptotically correct in the tail.. also a second order term is introduced but unfortunately it does not present a great improvement.} Then ik(u) = F ( M > u) = P(i9 (u) < oo).1.y)/B (x) > 1 uniformly on compact y internals. Kalashnikov [219] and Asmussen & Binswanger [27]. p = iB /µA < 1.3.. Then l/i(u) 1 P P [Note that (b) in particular holds if B E S. T1 the ith interarrival time and Xi = U. Based upon ideas of Hogan [200]. E. in [219] p. let t9+ = i9(0) be the first ascending ladder epoch of {Snd> }. We assume positive safety loading. {n= 0.. 195 there are numerical examples where tp(u) is of order 105 but Theorem 2. one may have to go out to values of 1/'(u) which are unrealistically small before the fit is reasonable. t9(u) = inf {n : Snd> > u} . THE RENEWAL MODEL 261 Choudhury & Whitt [1].
This representation will be our basic vehicle to derive tail asymptotics of M, but we face the added difficulties that neither the constant θ nor the distribution of the Y_i are explicit. Let F denote the distribution of the X_i and F_I the integrated tail, F_I(x) = ∫_x^∞ F̄(y) dy, x ≥ 0.

Lemma 3.2 F̄(x) ~ B̄(x), x → ∞, and hence F_I(x) ~ μ_B B̄_0(x).

Proof. By dominated convergence and (b),

    F̄(x)/B̄(x) = ∫_0^∞ (B̄(x + y)/B̄(x)) A(dy) → ∫_0^∞ 1 · A(dy) = 1. □

The lemma implies that (3.1) is equivalent to

    P(M > u) ~ F_I(u)/|μ_F|,  u → ∞,    (3.3)

and we will prove it in this form. (In the next section, we will use the fact that the proof of (3.3) holds for a general random walk satisfying the analogues of (a), (b), and does not rely on the structure X_i = U_i - T_i.)

Let further ϑ_- = inf{n > 0 : S_n^{(d)} ≤ 0} be the first descending ladder epoch, G_-(A) = P(S_{ϑ_-}^{(d)} ∈ A) the descending ladder height distribution (||G_-|| = 1 because of μ_B < μ_A), and let μ_{G_-} be the mean of G_-. Write Ḡ_+(x) = G_+(x, ∞) = P(S_{ϑ_+}^{(d)} > x, ϑ_+ < ∞), x > 0.

Lemma 3.3 Ḡ_+(x) ~ F_I(x)/|μ_{G_-}|, x → ∞.

Proof. Let R_+(A) = E Σ_{n=0}^{ϑ_+ - 1} I(S_n^{(d)} ∈ A) denote the pre-ϑ_+ occupation measure, and let U_- = Σ_{n=0}^∞ G_-*ⁿ be the renewal measure corresponding to G_-. Then

    Ḡ_+(x) = ∫_{-∞}^0 F̄(x - y) R_+(dy) = ∫_{-∞}^0 F̄(x - y) U_-(dy)

(the first identity is obvious, and the second follows since an easy time-reversion argument shows that R_+ = U_-). The heuristics is now that, because of (b), the contribution from the interval (-N, 0] to the integral is O(F̄(x)) = o(F_I(x)), whereas for large negative y, U_-(dy) is close to Lebesgue measure on (-∞, 0] normalized by |μ_{G_-}|, so that we should have

    Ḡ_+(x) ≈ (1/|μ_{G_-}|) ∫_{-∞}^0 F̄(x - y) dy = F_I(x)/|μ_{G_-}|.
We now make this precise. If G_- is non-lattice, then by Blackwell's renewal theorem U_-(-n-1, -n] → 1/|μ_{G_-}|. In the lattice case, we can assume that the span is 1, and then the same conclusion holds, since U_-(-n-1, -n] is just the probability of a renewal at -n. Given ε, choose N such that F̄(n - 1)/F̄(n) ≤ 1 + ε for n ≥ N (this is possible by (b) and Lemma 3.2) and U_-(-n-1, -n] ≤ (1 + ε)/|μ_{G_-}| for n ≥ N. We then get

    limsup_{x→∞} Ḡ_+(x)/F_I(x)
    ≤ limsup_{x→∞} ∫_{(-N,0]} F̄(x - y) U_-(dy) / F_I(x) + limsup_{x→∞} Σ_{n=N}^∞ F̄(x + n) U_-(-n-1, -n] / F_I(x)
    ≤ 0 + ((1 + ε)/|μ_{G_-}|) limsup_{x→∞} Σ_{n=N}^∞ F̄(x + n) / F_I(x)
    ≤ ((1 + ε)²/|μ_{G_-}|) limsup_{x→∞} ∫_0^∞ F̄(x + y) dy / F_I(x) = (1 + ε)²/|μ_{G_-}|.

Here in the first step we used that (b) implies F̄(x)/F_I(x) → 0, and in the last that F_I is asymptotically proportional to B̄_0 ∈ S. Similarly,

    liminf_{x→∞} Ḡ_+(x)/F_I(x) ≥ (1 - ε)²/|μ_{G_-}|.

Letting ε ↓ 0, the proof is complete. □

Proof of Theorem 3.1. By Lemma 3.3, P(Y_i > x) = Ḡ_+(x)/θ ~ F_I(x)/(θ|μ_{G_-}|). Differentiating the Wiener-Hopf factorization identity (A.1)

    1 - F̂[s] = (1 - Ĝ_+[s])(1 - Ĝ_-[s])

and letting s = 0 yields
    μ_F = -(1 - ||G_+||) μ_{G_-} = -(1 - θ) μ_{G_-}.

Hence, using (3.2) and dominated convergence precisely as for the compound Poisson model (Lemma 2.2 with EK = θ/(1 - θ)),

    P(M > u) ~ (θ/(1 - θ)) · F_I(u)/(θ|μ_{G_-}|) = F_I(u)/((1 - θ)|μ_{G_-}|)
             = F_I(u)/(μ_A - μ_B) ~ μ_B B̄_0(u)/(μ_A - μ_B) = (ρ/(1 - ρ)) B̄_0(u). □

We conclude by a lemma needed in the next section:

Lemma 3.4 For any a < ∞, P(M > u, S_{ϑ(u)} - S_{ϑ(u)-1} ≤ a) = o(F_I(u)).

Proof. Let ω(u) = inf{n : S_n^{(d)} ∈ (u - a, u], M_n ≤ u}. On the set {M > u, S_{ϑ(u)} - S_{ϑ(u)-1} ≤ a} we have ω(u) < ∞, since before exceeding u the random walk must enter (u - a, u]. Given ω(u) < ∞, exceeding u later requires the post-ω(u) walk to attain a positive maximum, so that

    P(M > u, S_{ϑ(u)} - S_{ϑ(u)-1} ≤ a) ≤ P(ω(u) < ∞) ψ(0).

On the other hand, on {ω(u) < ∞}, the post-ω(u) walk attains a maximum ≤ 0 with probability 1 - ψ(0), in which case M ∈ (u - a, u]; hence

    P(M ∈ (u - a, u]) ≥ P(ω(u) < ∞)(1 - ψ(0)).

But since P(M > u - a) ~ P(M > u), we have P(M ∈ (u - a, u]) = o(P(M > u)) = o(F_I(u)), and combining the two bounds yields the assertion. □

Note that substantially sharper statements than Lemma 3.4 on the joint distribution of (S_{ϑ(u)}, S_{ϑ(u)-1}) are available; see Asmussen & Klüppelberg [36].

Notes and references Theorem 3.1 is due to Embrechts & Veraverbeke [136], with roots in von Bahr [56] and Pakes [280].

4 Models with dependent input

We now generalize one step further and consider risk processes with dependent interclaim times, allowing also for possible dependence between the arrival process and the claim sizes. In view of the 'one large claim' heuristics, it seems reasonable to expect that similar results as for the compound Poisson and renewal models should hold in great generality even when allowing for such dependence.
< 0 and EoX < oo where X = X2 . {SX1+t ..4.1.1) .. assume pp.X2 < . examples and counterexamples. such that {SXo+t  SXo}0<t< X 1Xo .1. The zerodelayed case corresponds to Xo = Xl = 0 and we write then F0..X1 is the generic cycle.1 = max k=0. M = sup St... Thus the assumption . M* = max S. Assume that the claim surplus process {St}t>o has a regenerative structure in the sense that there exists a renewal process Xo = 0 < Xl <. .. 4..1 Note that no specific sample path structure of {St} (like in Fig. {Sn}n=o. and apply it to the Markovmodulated model of Chapter VI. 0o(u) etc.. Define S.1 where the filled circles symbolize a regeneration in the path. G(x) (4. E0. +1. see [47].. 4. (viewed as random elements of the space of Dfunctions with finite lifelengths) are i... 4....1) is assumed..4 below. Schmidli & Schmidt [47].d. The idea is now to observe that in the zerodelayed case..n n=0. For further approaches.F*(X) = P0(Si > x) .1 based upon a regenerative assumption.. We give here one of them.1 except for the first one) is a random walk. Theorem 4. t>0 S. (corresponding to the filled circles on Fig.. MODELS WITH DEPENDENT INPUT 265 Various criteria for this to be true were recently given by Asmussen.. M. We return to this point in Example 4. = Sx.Sxi}0<t<x2Xl . Figure 4. We let F* denote the Podistribution of Si. 2...i.Sxk}o<t<xk+1xk is the same for all k = 1. See Fig. and the distribution of {Sxk+t .
2) to show F(M* > u) > 1.2 Theorem 4..266 CHAPTER IX.4) liminf u>oo F(M > u) . (4. (4. Fo(Si > X). The one we focus on is Fo (Mix) > x) .2) Imposing suitable conditions on the behaviour of {St} within a cycle will then ensure that M and M* are sufficiently close to be tail equivalent.3) where Mnx) = sup o<t<xn +1 X.S. the assumption means that Mix) and Sl are not too far away. See Fig. it suffices by (4. N N Xi=0 N Figure 4. jF11 F* (U). u p 00. Then '00 (u) = Fo(M > u) .Sxn = sup Sxn+t .1 Assume that (4.2. HEAVY TAILS for some G such that both G E S and Go E S makes (3. 4.1) and (4. Sxn +t .3) applicable so that F(M* > u) 141 F*(u)..3) hold.* i o<t<xn+1x. (4.. Since clearly M(x) > Sl .. Proof Since M > M*.
. )) > (1 ..e)Po (MMX> > x). Then by Lemma 3.E) Po ( n max St u.S. Given e > 0. M^xu)+l > a) . MODELS WITH DEPENDENT INPUT Define 79* (u) = inf {n = 1 ..a.(u) . E (u . /3(u) = inf{n=1.5) which follows since Po (M > u.1 can be rewritten as 00 (U) (4. assume the path structure Nt St = EUit+Zt i=1 . u)} < P(M* E (u . Theorem 4. Po(M* > u) . Under suitable conditions .Sn+1Sn>aV(uSn*)) n=1 00 > (1E)EPo(Mn<u. MW O(u)+1 < a) IN ( U n=1 A1. 2. Letting first u + oo and next e .(1 .4.6) 1 p pBo(u) u where B is the Palm distribution of claims and p . choose a such that Po(Si > x ) > (1 .a.. Mn+l > a V (u .+Mn+1>u} 267 (note that {M> u} = {3(u) < oo}).4). S. To this end.1 = limti00 St/t. We shall use the estimate Po(M > u) Miu^+ 1 < a) = o(Po (M > u)) (4.: S.Mn +1 >aV(u n=1 00 S.Sn 0<t<x„+j ( 1 .4.Po (M* > u.e)Po (M > U). . u))/P(M* = 0) = o(Po(M* > u)). x > a. Let a > 0 be fixed..: Sn > u} . 0 yields (4...2.e)Po (M > u.( u)1 > a) 00 1: Po(Mn<u.
and also for Mix) since Nx FNX U. since the tail of Zx is lighter than B(x) by (iv). are Fmeasurable and NX Po J:U=>x i=1 (iv) Po sup Zt > x / (0:5t<x o(B(x)) Then (4.I u J Po(Sl > x) dx 1 EoNxB(x) dx EoX(1 .2 Assume that {St} is regenerative and satisfies (4. Mix) < > UE + i=1 o<t<x Thus Theorem 4.268 with {Zt} continuous. Then the Palm distribution of claims is B(x) = E N Eo 0 I( U1 < x) . Assume further that (i) both B and Bo are subexponential. (iii) For some o field Y. Corollary 4.v. a4' 0.6) holds with p = .X both have tails of sup Zt. and the rest is just rewriting of constants: since p = 1+tlim St = 1+ . HEAVY TAILS N` U. X and N.Q = EoNx/EoX.7).'s order EoNx • B(x). and ENX Ui .1 is in force. the proof of Lemma 4.6 below. we get 00 (u) 1 IPF.8) x Write . (ii) EozNX < oo for some z > 1. cf. i=1 (4.4). Proof It is easily seen that the r. independent of {> CHAPTER IX.3PB.} and satisfying Zt/t N. The same is true for Sl.p) Ju P Bo(u) 1p 0 . oX (see Proposition A1.
.. and taking F = o. X2 = 1. then (iv) holds since the distribution of supo<t<i Z(t) is the same as that of I Zl 1.. The key step of the proof is the following lemma. consider the periodic model of VI. Assume that B E S. we assume that B E S. i=1 B = >2 7riaiBi i=1 and we assume p = 014 B = Ep ri/3ipB. of claims arriving in [0. 1) is Poisson with rate /3 = fo /3(s) ds so that (ii) holds.3 As a first quick application.6) u holds. (i) holds.t + EN'I Ui where {>N`1 Ui .6) holds.6) holds. Theorem 4. Then (4. .4 Assume that St = Zt . Again . In particular.3 that (4. Thus we conclude that (4.6 with arrival rate /3(t) at time t (periodic with period 1) and claims with distribution B (independent of the time at which they arrive). > 0. X3 = 2. we conclude just as in Example 4.9). 3 The average arrival rate / and the Palm distribution B of the claim sizes are given by P P Q = ir i/i. (iii) is obvious. The number N...e. . X3 = 2. The regenerative assumption is satisfied if we take Xo = Xi = 0. Zt . We consider the case where one or more of the claim size distributions Bi are heavytailed.4. We now return to the Markovmodulated risk model of Chapter VI with background Markov process {Jt} with p < oo states and stationary distribution 7r. MODELS WITH DEPENDENT INPUT 269 Example 4 .t} is standard compound Poisson and {Zt} an independent Brownian motion with mean zero and variance constant a2. More precisely.5 Consider the Markovmodulated risk model with claim size distributions satisfying (4. Bo E S. .. in particular lighttailed. note that the asymptotics of i/io( u) is the same irrespective of whether the Brownian term Zt u in St is present or not.0 (thus (iv) is trivial). X2 = 1. i. we will assume that lim B2(x) = ci x+oo G(x) for some distribution G such that both G and the integrated tail fx°O G(y) dy are subexponential . Taking again Xo = Xi = 0. The arrival rate is /3i and the claim size distribution Bi when Jt = i. Bo E S. 
and for some constants ci < oo such that cl + • • • + c.(NX). < 1. Example 4 .
. u Proof of Theorem 4.. NP ) and X are . as x a oo. i=1 Proof Consider first the case X = 0. ."+Np .}P. .G( x ) > ciNi . . we can define the regenerations points as the times of returns to i.F) < CG(x)zn'1+. For lighttailed distributions. . Let {Fi}t=1 P be a family of distributions on [0. oo) such that G E S and some c1... If Jo = i. and that for some + cp distribution G on [0. and the rest of the argument is then just as the proof of Corollary 4. NP ) be a random vector in {0. The same dominated convergence argument completes the proof.X i=1 j=1 where conditionally upon F the Xi. and F a aalgebra such that (N1. i=1 P(Yx > x ^) < P(Y0 > x I.c'(x) where c = ciENi . It follows by a slight extension of results from Section 1 that P P(Yo > x I Y) G( x) ci Ni. . cp with cl + > 0 it holds that Fi(x) . Markovmodulation typically decreases the adjustment coefficient y and thereby changes the order of magnitude of the ruin . . An easy conditioning argument then yields the result when Jo is u random. oo) and define p Ni Yx = EEX'i . are independent with distribution Fi for Xij. P P P(YX and > x I.F) = P(Yo > X+x I •^) G (x +x)>2ciNi i=1 .. Then P P(Yx > x) ..5. i1 = E\ G(x) In the general case.v. X > 0 a r.2. 6 Let (N1.ciG(x).270 CHAPTER IX.. HEAVY TAILS Lemma 4 .Fmeasurable. Assume EzN1+"'+Np < oo for some z > 1 and all i. i =1 P(Yo > x I ^ ) < CG(x)zN1+ +Np for some C = C(z) < oo. Thus dominated convergence yields ( P(Yo>x P(Yo>x ..^•) G(x) P ^ E ciNi = C. 2 .. 1.
5 Finitehorizon ruin probabilities We consider the compound Poisson model with p = /3pB < 1 and the stationary excess distribution Bo subexponential. That paper also contains further criteria for regenerative input (in particular also a treatment of the delayed case which we have omitted here). The present approach via Theorem 4. As usual. ) form a general stationary sequence and the U.pl(1 . Theorem 4. For further studies of perturbations like in Corollary 4. 5a Excursion theory for Markov processes Let until further notice {St} be an arbitrary Markov process with state space E (we write Px when So = x) and m a stationary measure. We start by reviewing some general facts which are fundamental for the analysis.1. there exist constants Y(u) such that the F(u)distribution of r(u)/y(u) has a limit which is either Pareto (when B is regularly varying) or exponential (for B's such as the lognormal or DFR Weibull).4. 5 was first proved by Asmussen. we let PN"N = P(.4. cf... FINITEHORIZON RUIN PROBABILITIES 271 probabilities for large u.. and the final reduction by Jelenkovic & Lazar [213]. Then O(u) . this then easily yields approximations for the finite horizon ruin probabilities (Corollary 5. Theorem 2.p)Bo(u). Within the class of risk processes in a Markovian environment. in particular Proposition 2.2 and Example 4. Combined with the approximation for O(u). this is applied for example to risk processes with Poisson cluster arrivals.. states that under mild additional conditions.5. VI. I T(u) < oo). IV.. the discussion provides an alternative point of view to some results in Chapter IV.7. Schmidli & Schmidt [47].d. Essentially. i. The main result of this section. It follows from Theorem 4. and independent of (T1. m is a (orfinite) .e.4.. An improvement was given in Asmussen & Hojgaard [33]. Floe Henriksen & Kliippelberg [31] by a lengthy argument which did not provide the constant in front of Bo(u) in final form. ).7).4. cf. 
for lighttailed distributions the value of the adjustment coefficient y is given by a delicate interaction between all B. Notes and references Theorem 4. this should be compared with the normal limit for the lighttailed case.i. > 0) matter for determining the order of magnitude of the ruin probabilities in the heavytailed case. as well as a condition for (4.5 that the effect of Markovmodulation is in some sense less dramatical for heavytailed distributions: the order of magnitude of the ruin probabilities remains ft°° B(x) dx. see Schlegel [316]. In contrast.T2. r(u) is the time of ruin and as in IV. Theorem 5.T2.5 shows that basically only the tail dominant claim size distributions (those with c.3.1 is from Asmussen. cf.6) to hold in a situation where the interclaim times (T1. i.
Rt is distributed as x + t . u For F C E. Then there is a Markov process {Rt} on E such that fE m(dx)h(x)Exk(Rt) = Lm(dy)k(y)Eyh(St) (5.272 CHAPTER IX. and (5. y to vary in. The equality of the l. resp. For the present purposes it suffices .= y.t + EI U. say. x = 0+ and F = (0. k as indicator functions.y = Qx (.2) means ffh(a. t. .r. y = 0). Then (5. k on E.2) for all bounded measurable functions h. The simplest example is a discrete time discrete state space chain. an excursion in F starting from x E F is the (typically finite) piece of sample path' {St}o<t<w(F°) I So = x where w(Fc) = inf It > 0: St 0 F} .s.z) dx G(dz) = ffh(y + z) k(y)dy G(dz). a familiar case is time reversion (here m is the stationary distribution). in the terminology of general Markov process theory. {St} and {Rt} are in classical duality w. j. Lebesgue measure.t. however . Proof Starting from Ro = x. where we can take h. for states i.00). to the r. Sw(F. w(Fc) < oo ) 'In general Markov process theory. the whole of R and not as usual impose the restrictions x > 0. Thus. to consider only the case Px(w(F`) = 0) 0.).rij = mjsji where r13. Say {St} is reflected Brownian motion on [0.z. . follows by the substitution y = x .h. m.>N` Ui. We let QS be the corresponding distribution and Qx. oo). St is distributed as y . and starting from So = y.2) with t = 1 means m. a main difficulty is to make sense to such excursions also when Px(w(F°) = 0) = 1. (note that we allow x. Let G denote the distribution of ENt U. {Rt}.s.h. HEAVY TAILS measure on E such that L for all measurable A C E and all t > 0.1 A compound Poisson risk process {Rt} and its associated claim surplus process {St} are in classical duality w . r. but the example of relevance for us is the following: Proposition 5.s=j are the transition probabilities for {St}.)k(x .t.
5. The theorem states that the path in (b) has the same distribution as an excursion of {Rt} conditioned to start in y < 0 and to end in x = 0.1 The sample path in (a) is the excursion of {St} conditioned to start in x = 0 and to end in y > 0. We can then view Qy. In particular: Corollary 5. in E F.1 for the case F = (oo. S. That is. . in with i0.. Sn+1 E Fc) nx.y() = P ({SW(F`)t} 0<t<w(F °) E So = x.= y) Theorem 5 .itt) = P Px(w(Fc) < 00.. i1. x = 0.). We consider the discrete time discrete state space case only (wellbehaved cases such as the risk process example can then easily be handled by discrete approximations). z > 0.. the one in (b) is the time reversed path..2. Qx y is the distribution of an excursion of {St} conditioned to start in x E F and terminate in y E F. Sw(Fo)_ should be interpreted as Sw(F^)_1).5. QR and QRy are defined similarly. in = y.13AB < 1] Proof of Theorem 5. /^s x (S1 = Z1. .y(2p21 . . Sn = in = y. this simply means the distribution of the path of {Rt} starting from y and stopped when 0 is hit. FINITEHORIZON RUIN PROBABILITIES 273 y E F (in discrete time. oo) = r(0) x= St y (a) Figure 5. Thus..2 Qy.s.3 The distribution of r(0) given r(0) < oo.(0)_ = y < 0 is the same as the distribution of w(y) where w(z) = inf It > 0 : Rt = z}. Qx.. Sw(F)1 = y) . .SS(F. when p = . w(0. But in the risk theory example (corresponding to which the sample paths are drawn)..y = Qy Q. io = x. and we let Qy y refer to the time reversed excursion . The theorem is illustrated in Fig .y as a measure on all strings of the form i0i1 .. [note that w(z) < oo a. 0].
... . Si1y k=1 i1 . t' y and Qy x are measures on all strings of the form ipi l . ...... 2n) = Qx..ik_1EF .. in) = Qx... in E F. Rn+1 E FC) TioilTili2.. Silt' E SO k=1 i1..i„_iEF Similarly. HEAVY TAILS E E Px (Si = 21i . . . Si1y 00 jEF° E E 5xik_ 1 .274 note that Fx(w(Fc) < 00. Rn = in = x. in) = oo jEF^ Sxin1 . in = x....rin_1in E Txj jEFC m21 s2120 m2252221 m in Ssn n1 mjSjx Mx m2p mil min1 jEF` 1 Sinin _ 1 . Rn+1 E F`) F (w(Fc) < 00. Rn = in = x. .... note first that Pt' (R l = il.TI( 2n2n _1 .... S.. i0) Q x..in1 . 21 ..gilt' k=1 ii ... 2p) when 20.. R Qy x(2p21 ...ii .in E F. Sn+1 E Fc) n=1 i1.(F<)1 = Y) S S and Qx y( ipil .. MY Thus Qx(ioii ... (Fc)1 = y) 00 CHAPTER IX... in with 20.. . Silt' E Sxik_1 ... i0 = y..ik1EF Similarly but easier Sxin_1 . . = in = y. Si l io E mjSjx.ik_1EF Sxin_1 ... R . in = x.. S. Si11 S 1 .... 2p)......J (i. in)  Pt' (R1 = ii... To show Q y x (i0 i 1 ... 20 = y. .....y(inin _ E SYj jEF` 00 Sxik _1 ..
')distribution of Yu is Bo").p.UBBo(u)]. see Fig.'s are defined w. FINITEHORIZON RUIN PROBABILITIES 275 5b The time to ruin Our approach to the study of the asymptotic distribution of the ruin time is to decompose the path of { St} in ladder segments .B(a) +a PBBo(u) . We are interested in the conditional distribution of T(u) = T(0) given {T(0) < oo. U T(O) = T (u) Y Figure 5. the case r (O) < oo. ST(o) > y. the P(u.5. Y > y} . that is. Z = Zl = ST+( 1)_ the value just before the first ladder epoch (these r.(o) > y} = {T(0) < oo. To clarify the ideas we first consider the case where ruin occurs already in the first ladder segment .t. The formulation relevant for the present purposes states that Y has distribution Bo and that conditionally upon Y = y. 7(0) < oo.')density of Y is B(y)/[. Y > u). 1 w .2. that is.2. y > u. That is.r. Z follows the excess distribution B(Y) given by B(Y) (x) _ B(y + x)/B(y).v. Now the P(u.2.t. Bo") is also the P(u.r. 5. P(o) ). S.')distribution of Z since P(Z>aIY>u) = 1 °° B(y) B(y + a) dy FLBBo(u) B (y) J°° (z) dy . Let Y = Yl = Sr+( 1) be the value of the claim surplus process just after the first ladder epoch .2 The distribution of (Y. the distribution w. Z) is described in Theorem 111. P(") = P(.
this in principle determines the asymptotic behaviour of r(u). then by the subexponential property Yn must be large. i.1) of the last ladder segment can be estimated by the same approach as we used above when n = 1. We now turn to the general case and will see that this conclusion also is true in P(")distribution: Theorem 5 . The idea is now to observe that if K(u) = n. Z1). Recall the definition of the auxiliary function y(x) in Section 1. Z/'y(u) * W in Pi "' ')distribution . we get the same asymptotics as when n = 1.. Bo") ).e. z ^ oo. . 4 Assume that Bo E S and that (5.. 1/(1 . P(Z < a I Y > u) 3 0.. Zk be defined similarly as Y = Y1. 5. Z = ZI but relative to the kth ladder segment. However. Zn_1 'typical' which implies that the first n1 ladder segment must be short and the last long.. That is . Then. conditionally upon r+ (n) < oo. HEAVY TAILS Let {w(z)}Z^.. Then Corollary 5. it therefore follows that T(u)/Z converges in Pi"'')probability to 1/(1 .T+(2)..18(c) Bo")(yY (u)) + P(W > y) ( 5.p). Zn).. . the random vectors (YI. > u with high probability. let r+(1) = T(0).o be defined by w(z) = inf It > 0: Rt = z} where {Rt} is is independent of {St}. Now Bo E S implies that the Bo ")(a) + 0 for any fixed a.: r+ (n) < oo. a slight rewriting may be more appealing.. Hence Z. . i.3. .. Then 7(u)/y(u) ^ W/(1 . . Fig.d.r+ (n . cf. In the proof. denote the ladder epochs and let Yk. must be large and Z1. It is straightforward that under the conditions of Proposition 1. ..i. the duration T+ (n) .p). r(u)/Z 4 1/(1 .3) holds. 2.. in particular of Z.3) where the distribution of W is Pareto with mean one in case ( a) and exponential with mean one in case (b).p) then yields the final result T(u)/y(u) + W/(1 . and YI.3 implies that the P("'1)distribution of T(u) = r(0) is that of w(Z). Yn_1 'typical'.. We let K(u) = inf In = 1. K(u) = n). Since w(z)/z a$..p) in F(u) distribution. and since its dominates the first n .276 CHAPTER IX.1. Z).p) in Pi"'')distribution. are i.. . 
more precisely. (Y. Y1 + • • • + Yn > u} denote the number of ladder steps leading to ruin and P("'n) = P(• I r(u) < oo. and distributed as (Y.e. Since the conditional distribution of Z is known (viz..
u) E •) .5 Ilp(u. I A'(u)) = P(u. the condition on A'(u) A A"(u) follows from Bo being subexponential (Proposition 1. A"(u) _ {K(u)=n} = {Y1+ P(. I A"(u ))II + 0.u) E • I A'(u)) = Bo (n1) ®Bou) . Yn . Y„1. P(.3 In the following.. > u}.n) (y1. then IIP( I A'(u)) Taking A'(u) = {Y. Further.2.. FINITEHORIZON RUIN PROBABILITIES 277 16 Z3 Z1 r+(1) T+(1) T+(1) Figure 5. .u) II 0.. ..n). Lemma 5..Yn1iYn . P (Yj.. . Proof We shall use the easily proved fact that if A'(u). suitably adapted).Yl+ +Yf1>u}. .5.. A"(u) are events such that P(A'(u) AA"(u)) = o(F (A'(u)) (A = symmetrical difference of events). II ' II denotes the total variation norm between probability measures and ® product measure.Bo (ri1) ®B( . +Yn1<u.
.+y 1 p"F(Yn > u) P)Pn1 P/(1 .. Similarly (replace u by 0). Let {wl(z)}.. . .. . the discussion just before the statement of Theorem 5.1.u has distribution Bout That is.. the density of Yn is B(y)/[IBBO(u)].P) Bo(u) for n = 1.6. Z. Zn are independent.Bo (n1) ®Bo' 0.. . Z' are arbitrary random vectors.1 and Y„ . y > u. (Y. Notes and references Excursion theory for general Markov processes is a fairly abstract and advanced topic.2.278 Lemma 5 . For Theorem 5. Y") u etc. . Y1 +. whereas wn(Zn) has the same limit behaviour as when n = 1 (cf. ...' = y is BM. + Y" > u) Flul (K (u ) = n) _ Cu) P"F(1'i +... see Fitzsimmons [144]). Thus F(u'n)(T(u) /7(u) > y) = F(u'n)((wl (Z1) + .6 IIPIu'n ) CHAPTER IX.. Zn) E •) . . in particular his Proposition (2. the marginal distribution of Zk is Bo for k < n.. Now use that if the conditional distribution of Z' given Y' is the same as the conditional distribution of Z given Y and JIF(Y E •) . . .. It therefore suffices to show that the P(u'")distribution of T(u) has the asserted limit. copies of {w(z)}.).1). +wn(Z n))l7( u ) > 1y) ^' P(u'n)(wn (Zn)/7(u) > y) 4 NW/(1 .t.4).y(u)T) .d. the F'distribution of r(u) is the same as the P'distribution of w1(Zl) + • • • + wn(Zn)..7 O (u. Proof Let (Y11.. By Lemma 5. 2. . Y'. .. n.. k = 1.P(Y' E •)II * 0... wk(Zk) has a proper limit distribution as u + oo for k < n.. n . Then according to Section 5a... . be independent random vectors such that the conditional distribution of Zk given Y. HEAVY TAILS ((Z1'.P) > y) Corollary 5..i.r.. The same calculation as given above when n = 1 shows then that the marginal distribution of Zn is Bou). and clearly Zi. then 11P(Z E •) ..P(Z' E •)II > 0 (here Y.p) < y).1 P PBo(u) • P(W/(1 .4. in our example Y = (Y1.. P(u) since by Theorem 2.. n_1 < u. Z11). Zn). Proof of Theorem 5. The first step is to observe that K(u) has a proper limit distribution w.. {wn(z)} be i. and that Yk has marginal distribution B0 for k = 1..
Notes and references The results of Section 5b are from Asmussen & Klüppelberg [36], who also treated the renewal model and gave a sharp total variation limit result. However, the results only cover the regularly varying case. Extensions to the Markov-modulated model of Chapter VI are in Asmussen & Højgaard [33]. Asmussen & Teugels [53] studied approximations of ψ(u, T) when T → ∞ with u fixed.

6 Reserve-dependent premiums

We consider the model of Chapter VII with Poisson arrivals at rate β, claim size distribution B, and premium rate p(x) at level x of the reserve.

Theorem 6.1 Assume that B is subexponential and that p(x) → ∞, x → ∞. Then

ψ(u) ~ β ∫_u^∞ B̄(y)/p(y) dy, u → ∞.   (6.1)

The key step in the proof is the following lemma on the cycle maximum of the associated storage process {V_t}. Assume for simplicity that {V_t} regenerates in state 0, i.e. that ∫_0^∞ p(x)^{-1} dx < ∞, define the cycle as

σ = inf{ t > 0 : V_t = 0, max_{0≤s≤t} V_s > 0 | V_0 = 0 },

and let M_σ = sup_{0≤t≤σ} V_t.

Lemma 6.2 P(M_σ > u) ~ βEσ B̄(u), u → ∞.

The heuristic motivation is the usual one in the heavy-tailed area, that M_σ becomes large as a consequence of one big jump. More precisely, one expects the level y from which the big jump occurs to be O(1); the probability that the jump then exceeds u is B̄(u - y) ≈ B̄(u). The form of the result then follows by noting that the process has mean time Eσ to make this big jump and that the jump occurs with intensity βB̄(u). The rigorous proof is, however, nontrivial and we refer to Asmussen [22].

Proof of Theorem 6.1 We will show that the stationary density f(x) of {V_t} satisfies

f(x) ~ βB̄(x)/p(x), x → ∞.

We then get

ψ(u) = P(V > u) = ∫_u^∞ f(y) dy ~ β ∫_u^∞ B̄(y)/p(y) dy,

and the result follows (for ψ(u) = P(V > u), cf. the duality results of Chapter II).

Define D(u) as the steady-state rate of downcrossings of {V_t} of level u and D_σ(u) as the expected number of downcrossings of level u during a cycle. Then D(u) = f(u)p(u) and, by regenerative process theory, D(u) = D_σ(u)/µ where µ = Eσ. Further, the conditional distribution of the number of downcrossings of u during a cycle, given M_σ > u, is geometric with parameter q(u) = P(M_σ > u | V_0 = u), so that D_σ(u) = P(M_σ > u)/(1 - q(u)). Hence, by Lemma 6.2,

f(u)p(u) = D(u) = D_σ(u)/µ = P(M_σ > u)/(µ(1 - q(u))) ~ βB̄(u)/(1 - q(u)).

Now just use that p(x) → ∞ implies q(x) → 0. □

Notes and references The results are from Asmussen [22], where also the (easier) case of p(x) having a finite limit is treated. It is also shown in that paper that typically, there exist constants c(u) → 0 such that the limiting distribution of τ(u)/c(u) given τ(u) < ∞ is exponential.
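The tail integral in the approximation of Theorem 6.1, ψ(u) ≈ β ∫_u^∞ B̄(y)/p(y) dy, rarely has closed form, so in practice it must be evaluated numerically. The sketch below is illustrative only and not from the text: it applies the trapezoidal rule on a geometric grid (so the slowly decaying integrand is resolved over many decades), and the Pareto-type tail with a linear premium rule is chosen precisely so that an exact answer is available for comparison.

```python
import math

def psi_approx(u, beta, tail, p, y_max=1e7, n=20_000):
    """Approximate psi(u) ~ beta * int_u^inf tail(y)/p(y) dy (Theorem 6.1)
    by the trapezoidal rule on a geometric grid from u to y_max; the grid is
    log-spaced so a heavy-tailed integrand is resolved far into the tail."""
    a, b = math.log1p(u), math.log1p(y_max)
    h = (b - a) / n
    total, y0, v0 = 0.0, u, tail(u) / p(u)
    for i in range(1, n + 1):
        y1 = math.expm1(a + i * h)        # next grid point, 1+y log-spaced
        v1 = tail(y1) / p(y1)
        total += 0.5 * (v0 + v1) * (y1 - y0)
        y0, v0 = y1, v1
    return beta * total

# Illustrative choices (not from the text): Pareto-type tail Bbar(y) = (1+y)^-2
# and linear premium p(y) = 1+y, so the integrand is (1+y)^-3 and the exact
# value of the approximation is beta * (1+u)^-2 / 2.
val = psi_approx(5.0, 0.5, lambda y: (1 + y) ** -2, lambda y: 1 + y)
```

The truncation at y_max only discards a term of order (1 + y_max)^-2, which is negligible here.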
We shall be brief concerning general aspects and refer to standard textbooks like Bratley.i. The crude Monte Carlo ( CMC) method then amounts to simulating i. estimating z by the empirical mean (Z1 + • • + ZN)/N and the variance of Z by the empirical variance N s2 = E(Z{  N 2. where a2 = Var(Z ).z) 4 N(0.. la The crude Monte Carlo method Let Z be some random variable and assume that we want to evaluate z = EZ in a situation where z is not available analytically but Z can be simulated. replicates Zl.2) is an asymptotic 95% confidence interval .96s z f (1. .Chapter X Simulation methodology 1 Generalities This section gives a summary of some basic issues in simulation and Monte Carlo methods . 4Z). 281 ... ZN. topics of direct relevance for the study of ruin probabilities are treated in more depth. and this is the form in which the result of the simulation experiment is commonly reported.d. Hence 1. z) 2 = Zit NE ii ii According to standard central limit theory . vrN(z . Fox & Schrage [77]. Ripley [304]. Rubinstein [310] or Rubinstein & Melamed [311] for more detail .
and a longer CPU time to produce one replication. an added programming effort.282 CHAPTER X. Z = I inf Rt < 0 (0<t<T = I('r(u) < T). variance reduction is hardly worthwhile. This is a classical area of the simulation literature. We mention in particular ( regression adjusted) control variates and common random numbers. Say that Var(Z') = Var(Z)/2. Typically variance reduction involves both some theoretical idea (in some cases also a mathematical calculation). so that Z' is a candidate for a Monte Carlo estimator of z. typically by modifying Z to an alternative estimator Z' with EZ' = EZ = z and (hopefully) Var(Z') < Var(Z). v. Then replacing the number of replications N by 2N will give the same precision for the CMC method as when simulating N' = N replications of Z'.b(u. lb Variance reduction techniques The purpose of the techniques we study is to reduce the variance on a CMC estimator Z of z. writing Var(Z) = Var(E [Z I Y]) + E(Var[Z I Y]) . Sections 24 deal with alternative representations of Vi(u) allowing to overcome this difficulty. and many sophisticated ideas have been developed. The difficulty in the naive choice Z = I(T(u) < oo) is that Z can not be simulated in finite time: no finite segment of {St} can tell whether ruin will ultimately occur or not. conditional Monte Carlo and importance sampling. we then have EZ = EZ = z. there are others which are widely used in other areas and potentially useful also for ruin probabilities. T): just simulate the risk process {Rt} up to time T (or T n 7(u)) and let Z be the indicator that ruin has occurred. However. Therefore. and in most cases this modest increase of N is totally unproblematic. generated at the same time as Z. Conditional Monte Carlo Let Z be a CMC estimator and Y some other r . one can argue that unless Var(Z') is considerable smaller than Var(Z). it is straightforward to use the CMC method to simulate the finite horizon ruin probability z = i. Letting Z' = E[Z I Y]. 
SIMULATION METHODOLOGY In the setting of ruin probabilities. Further. We survey two methods which are used below to study ruin probabilities. The situation is more intricate for the infinite horizon ruin probability 0(u).
and the problem is to make an efficient choice.z2 = 0. Variance reduction may or may not be obtained: it depends on the choice of the alternative measure P. i. This may also be difficult to assess . it gives a guidance: choose P such that dP/dP is as proportional to Z as possible.1.E [Z Z]2 = z2 .v. one would try to choose P to make large values of Z more likely. the argument cheats because we are simulating since z is not avaliable analytically.zrs) = 2 1 N 2 2 2 i=1 i=1 N > Lt Zi . L1). .96 sis v^ N 2 1 where srs = N j(LiZi . it appears that we have produced an estimator with variance zero.zrs. the obvious possibility is to take F and P mutually equivalent and L = dP/dP as the likelihood ratio. Then z Var(LZ) = E(LZ)2 . However. even if the optimal change of measure is not practical.3). L such that z = EZ = E[LZ]. a crucial observation is that there is an optimal choice of P: define P by dP/dP = Z/EZ = Z/z. . Importance sampling The idea is to compute z = EZ by simulating from a probability measure P different from the given probability measure F and having the property that there exists a r. In order to achieve (1. LN) from P and uses the estimator N zrs = N > L:Zj i=1 and the confidence interval zrs f 1.. Nevertheless.3) Thus. Thus we cannot compute L = Z/z (further. GENERALITIES 283 and ignoring the last term shows that Var(Z') < Var(Z) so that conditional Monte Carlo always leads to variance reduction. it may often be impossible to describe P in such a way that it is straightforward to simulate from P). Thus. L = z/Z (the event {Z = 0} is not a concern because P(Z = 0) = 0).[E(LZ)] = E Z2 Zz . but tentatively.e.. using the CMC method one generates (Z1. (ZN. (1. To this end. .
5 or even much smaller . if z is small. and let Z(u) be a Monte Carlo estimator of z(u).e.0.100 . Thus. This leads to the equation 1. However.e. i. say 10%. it does not help telling whether z is of the magnitude 104. For each u.96 2Z ( 1 . Another way to illustrate the problem is in terms of the sample size N needed to acquire a given relative precision . An example where this works out nicely is given in Section 3. the issue is not so much that the precision is good as that relative precision is bad: oZ z(1 . Z z V5 In other words .96 2 z2 z increases like z1 as z . assume that the A(u) are rare in the sense that z(u) * 0. We shall focuse on importance sampling as a potential (though not the only) way to overcome this problem. The optimal change of measure ( as discussed above) is given by P(B) = E [ Z] i. We then .B = iP(AB) = P(BIA).z) 1001.z) 1 > 00. say of the order 103 or less.1.96oz /(zV) = 0. However. In ruin probability theory. z I. To introduce these. N .z) which tends to zero as z ^ 0. large sample sizes are required. let z(u) = P(A(u)). assume that the rare event A = A(u) depends on a parameter u (say A = {r(u) < oo}).e. as is the case of typical interest.1. in terms of the halfwidth of the confidence interval. just the same problem as for importance sampling in general comes up: we do not know z which is needed to compute the likelihood ratio and thereby the importance sampling estimator. SIMULATION METHODOLOGY 1c Rare events simulation The problem is to estimate z = P(A) when z is small . I.284 CHAPTER X.. u + oo.. but if the point estimate z is of the order 105. The CMC method leads to a variance of oZ = z(1 . Again. Z = I(A) and A is a rare event. we may try to make P look as much like P(•IA) as possible. Two established efficiency criteria in rare events simulation are bounded relative error and logarithmic efficiency. A = {T(u) < T} or A = {r(u) < oo} and the rare events assumption amount to u being large. 
and further it is usually not practicable to simulate from P(•IA). a confidence interval of width 10 4 may look small. 10 . the optimal P is the conditional distribution given A.
If M > u. O (u) = z = EZ.e. the mathematical definition puts certain restrictions on this growth rate. it is not efficient for large u . F(K = k) = (1 .1) may be written as V) (u) = P(M > u). 2.2. logarithmic efficiency is almost as good as bounded relative error. Thus.0. 2 Simulation via the PollaczeckKhinchine formula For the compound Poisson model.log z(u) of (1. . Notes and references For surveys on rare events simulation. and in practice.p)pk. We shall here present an algorithm developed by Asmussen & Binswanger [ 271.d. SIMULATION VIA THE POLLACZECKKHINCHINE FORMULA 285 say that {Z(u)} has bounded relative error if Var(Z(u))/z(u)2 remains bounded as u 3 oo. i.1.(2. where M = X1 + • • • + XK. However. Logarithmic efficiency is defined by the slightly weaker requirement that one can get as close to the power 2 as desired: Var(Z(u)) should go to 0 as least as fast as z(u)2E. the PollaczeckKhinchine formula III.i. .. are i. this means that the sample size N = NE(u) required to obtain a given fixed relative precision (say a =10%) remains bounded. Var(Z(u)) hm sup U+00 z (u) 2E < oo (1. let Z +.. where Z = I(M > u) may be generated as follows: 1. P(K = k) = (1 . XK from the density bo(x). Therefore . with common density bo(x) = B(x)/µB and K is geometric with parameter p. Generate K as geometric. X2.log Var(Z(u)) lim inf > 2 u+oo . Otherwise.4) for any e > 0. Let M . see Asmussen & Rubinstein [45] and Heidelberger [190]. where X1.4). According to the above discussion. which gives a logarithmically efficient estimator .. let Z +.p)pk. but as a CMC method . .. it is appealing to combine with some variance reduction method .X1 + + XK. Generate X1. The term logarithmic comes from the equivalent form . The algorithm gives a solution to the infinite horizon problem . This allows Var(Z(u)) to decrease slightly slower than z(u)2. so that NE (u) may go to infinity. 3.
when the claim size distribution B (and hence B_0) has a regularly varying tail. Thus, assume in the following that B̄_0(x) ~ L(x)/x^α with α > 0 and L(x) slowly varying. Then (cf. Theorem IX.2.1)

  ψ(u) ~ ρ/(1-ρ) · B̄_0(u),

and the problem is to produce an estimator Z(u) with a variance going to zero not slower (in the logarithmic sense) than B̄_0(u)^2.

A first obvious idea is to use conditional Monte Carlo: write

  ψ(u) = P(X_1 + ··· + X_K > u) = E P(X_1 + ··· + X_K > u | X_1, ..., X_{K-1}).

For the simulation, we thus generate K and X_1, ..., X_{K-1}, compute Y = u - X_1 - ··· - X_{K-1}, and let Z^(1)(u) = B̄_0(Y) (if K = 0, Z^(1)(u) is defined as 0; note that B̄_0(y) = 1 for y < 0). As a conditional Monte Carlo estimator, Z^(1)(u) has a smaller variance than Z(u). However, asymptotically it presents no improvement: the variance is of the same order of magnitude B̄_0(u). To see this, just note that

  E Z^(1)(u)^2 ≥ E[B̄_0(u - X_1 - ··· - X_{K-1})^2; X_1 > u, K ≥ 2] ≥ ρ^2 P(X_1 > u) = ρ^2 B̄_0(u)

(here we used that by positivity of the X_i, X_1 + ··· + X_{K-1} > u when X_1 > u, and that B̄_0(y) = 1, y ≤ 0). This calculation shows that the reason this algorithm does not work well is that the probability of one single X_i becoming large is too big.

The idea of [27] is to avoid this problem by discarding the largest X_i and considering only the remaining ones: form the order statistics X_(1) < X_(2) < ··· < X_(K), throw away the largest one X_(K), and let

  Z^(2)(u) = P(S_K > u | X_(1), X_(2), ..., X_(K-1)) = B̄_0((u - S_(K-1)) ∨ X_(K-1)) / B̄_0(X_(K-1)),

where S_(K-1) = X_(1) + X_(2) + ··· + X_(K-1). To check the formula for the conditional probability, note first that

  P(X_(n) > x | X_(1), ..., X_(n-1)) = B̄_0(X_(n-1) ∨ x) / B̄_0(X_(n-1)).
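To make the construction concrete, here is a small Python sketch (ours, not from [27]) of both the crude estimator and Z^(2)(u) for the Pareto-type choice B̄_0(x) = (1+x)^{-α}. Since both estimators are unbiased for ψ(u), their sample averages can be checked against each other.

```python
import random

ALPHA = 2.0  # tail index of B_0 (illustrative choice)

def bar_b0(x):
    """Tail B_0-bar(x) = (1 + x)^(-alpha) for x >= 0, and 1 for x < 0."""
    return 1.0 if x < 0 else (1.0 + x) ** -ALPHA

def sample_ladder(rho, rng):
    """K geometric with P(K = k) = (1 - rho) rho^k, X_i from B_0 by inversion."""
    xs = []
    while rng.random() < rho:
        xs.append((1.0 - rng.random()) ** (-1.0 / ALPHA) - 1.0)
    return xs

def z_crude(u, rho, rng):
    return 1.0 if sum(sample_ladder(rho, rng)) > u else 0.0

def z2(u, rho, rng):
    """Asmussen-Binswanger estimator: condition on all but the largest X."""
    xs = sorted(sample_ladder(rho, rng))
    if not xs:
        return 0.0
    rest = sum(xs[:-1])                       # S_(K-1)
    second = xs[-2] if len(xs) > 1 else 0.0   # X_(K-1), taken as 0 if K = 1
    return bar_b0(max(u - rest, second)) / bar_b0(second)
```

With ρ = 0.3 and u = 2, averaging 100,000 replications of each estimator gives values agreeing within simulation error, while the empirical variance of z2 is markedly smaller.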
We then get

  P(S_n > x | X_(1), ..., X_(n-1)) = P(X_(n) + S_(n-1) > x | X_(1), ..., X_(n-1)) = B̄_0((x - S_(n-1)) ∨ X_(n-1)) / B̄_0(X_(n-1)).

Theorem 2.1 Assume that B̄_0(x) = L(x)/x^α with L(x) slowly varying. Then the algorithm given by {Z^(2)(u)} is logarithmically efficient.

Notes and references The proof of Theorem 2.1 is elementary but lengthy, and we refer to [27]. Asmussen, Binswanger and Højgaard [28] give a general survey of rare events simulation for heavy-tailed distributions; that paper contains one more logarithmically efficient algorithm for the compound Poisson model, using the Pollaczeck-Khinchine formula and importance sampling. Also in other respects the findings of [28] are quite negative: the large deviations ideas which are the main approach to rare events simulation in the light-tailed case do not seem to work for heavy tails. So far these are the only efficient algorithms which have been developed for the heavy-tailed case. However, it must be noted that a main restriction of both algorithms is that they are intimately tied up with the compound Poisson model, because the explicit form of the Pollaczeck-Khinchine formula is crucial (say, in the renewal or Markov-modulated model P(τ_+ < ∞) and G_+ are not explicit).

3 Importance sampling via Lundberg conjugation

We consider again the compound Poisson model and assume the conditions of the Cramér-Lundberg approximation ψ(u) ~ Ce^{-γu}, so that

  z(u) = ψ(u) = e^{-γu} E_L e^{-γξ(u)}, where ξ(u) = S_{τ(u)} - u

is the overshoot (cf. III.5). This representation suggests simulating from F_L, that is, with the Lundberg conjugated parameters β_L = βB̂[γ] and B_L(dx) = e^{γx}B(dx)/B̂[γ] instead of β, B. The continuous-time process {S_t} is simulated by considering it at the discrete epochs {σ_k} corresponding to claim arrivals, for the purpose of recording Z(u) = e^{-γS_{τ(u)}}. Thus, the algorithm for generating Z = Z(u) is:

1. Compute γ > 0 as the solution of the Lundberg equation 0 = κ(γ) = β(B̂[γ] - 1) - γ, and define β_L, B_L.
2. Let S ← 0.
3. Generate T as being exponential with parameter β_L and U from B_L. Let S ← S - T + U.
4. If S > u, let Z ← e^{-γS}. Otherwise, return to 3.

There are various intuitive reasons that this should be a good algorithm. It resolves the infinite horizon problem since F_L(τ(u) < ∞) = 1. We may expect a small variance since we have used our knowledge of the form of ψ(u) to isolate what is really unknown, namely E_L e^{-γξ(u)}, and avoid simulating the known part e^{-γu}. Further, the results of IV.7 tell that P(·|τ(u) < ∞) and F_L (both measures restricted to {τ(u) < ∞}) asymptotically coincide, so that changing the measure to F_L is close to the optimal scheme for importance sampling, cf. the discussion at the end of Section 1b. In fact:

Theorem 3.1 The estimator Z(u) = e^{-γS_{τ(u)}} (simulated from F_L) has bounded relative error.

Proof. Just note that EZ(u)^2 ≤ e^{-2γu} = z(u)^2/C^2.

The algorithm generalizes easily to the renewal model¹, the change of measure F → F_L corresponding to B → B_L, A → A_L as in Chapter V. It is tempting to ask whether choosing importance sampling parameters β̃, B̃ different from β_L, B_L could improve the variance of the estimator. In detail, simulating with parameters β̃, B̃ one uses the estimator

  Z(u) = Π_{j=1}^{M(u)} [β e^{-βT_j} / (β̃ e^{-β̃T_j})] (dB/dB̃)(U_j),  (3.1)

where M(u) is the number of claims leading to ruin. The answer is no:

Theorem 3.2 The estimator (3.1) (simulated with parameters β̃, B̃) is not logarithmically efficient when (β̃, B̃) ≠ (β_L, B_L).

The proof is given below as a corollary to Theorem 3.3, which we formulate in a slightly more general random walk setting. Let X_1, X_2, ... be i.i.d. with common distribution F (in detail, X_i = U_i - T_i in the risk setting), and assume that µ_F < 0 and that F̂[γ] = 1, F̂'[γ] < ∞ for some γ > 0. Let S_n = X_1 + ··· + X_n, M(u) = inf{n : S_n > u}, ψ(u) = P(M(u) < ∞).

¹For the renewal model, one must restrict attention to the case β̃µ_B̃ > 1, to deal with the infinite horizon problem.
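For exponential claims the algorithm takes a particularly simple form: with B = Exp(δ), premium rate 1 and β < δ, one has γ = δ - β, β_L = δ and B_L = Exp(β), i.e. the arrival rate and the claim rate are simply swapped under F_L. A minimal Python sketch (our own), with the closed-form ψ(u) = (β/δ)e^{-γu} as a check:

```python
import math
import random

def psi_lundberg_is(u, beta, delta, n=20_000, seed=1):
    """Importance sampling via Lundberg conjugation for the compound
    Poisson model with Exp(delta) claims, premium rate 1, beta < delta.
    Simulate under (beta_L, B_L) = (delta, Exp(beta)); Z = exp(-gamma * S_tau)."""
    rng = random.Random(seed)
    gamma = delta - beta
    total = 0.0
    for _ in range(n):
        s = 0.0
        while s <= u:                    # ruin is certain under F_L
            t = rng.expovariate(delta)   # interarrival ~ Exp(beta_L)
            v = rng.expovariate(beta)    # claim ~ B_L = Exp(beta)
            s += v - t
        total += math.exp(-gamma * s)
    return total / n
```

With β = 0.5, δ = 1 and u = 5 the exact value is 0.5e^{-2.5} ≈ 0.0410, and the relative error of the estimate stays essentially constant as u grows, in line with Theorem 3.1.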
Let F_L(dx) = e^{γx}F(dx). The importance sampling estimator is then Z(u) = e^{-γS_{M(u)}}, and we have:

Theorem 3.3 The estimator Z(u) (simulated from F_L) has bounded relative error. More generally, let F̃ be an importance sampling distribution equivalent to F and

  Z(u) = Π_{i=1}^{M(u)} (dF/dF̃)(X_i).  (3.2)

Then the estimator (3.2) (simulated with distribution F̃ of the X_i) has bounded relative error when F̃ = F_L, and is not logarithmically efficient when F̃ ≠ F_L.

Proof. The first statement is proved exactly as Theorem 3.1. For the second, write W(F|F̃) = Π_{i=1}^{M(u)} (dF/dF̃)(X_i). By the chain rule for Radon-Nikodym derivatives,

  E_F̃ Z(u)^2 = E_F̃ W^2(F|F̃) = E_F̃ [W^2(F|F_L) W^2(F_L|F̃)] = E_L [W^2(F|F_L) W(F_L|F̃)] = E_L exp{K_1 + ··· + K_{M(u)} - 2γS_{M(u)}},

where K_i = log (dF_L/dF̃)(X_i). Here ε' = E_L K_1 = E_L log (dF_L/dF̃)(X_1) > 0 by the information inequality. Since K_1, K_2, ... are i.i.d., Jensen's inequality and Wald's identity yield

  E_F̃ Z(u)^2 ≥ exp{E_L(K_1 + ··· + K_{M(u)}) - 2γ E_L S_{M(u)}} = exp{E_L M(u) (ε' - 2γ E_L X_1)}.

Since E_L M(u)/u → 1/E_L X_1, it thus follows that for ε > 0 small enough (0 < ε < ε'/(γ E_L X_1)),

  limsup_{u→∞} E_F̃ Z(u)^2 / z(u)^{2-ε} ≥ limsup_{u→∞} e^{u ε'/E_L X_1 - 2γu} / (C^{2-ε} e^{-(2-ε)γu}) = limsup_{u→∞} C^{ε-2} e^{u(ε'/E_L X_1 - εγ)} = ∞,

so that (3.2) is not logarithmically efficient.
Proof of Theorem 3.2. Consider compound Poisson risk processes with intensities β', β'', claim size distributions B', B'', generic claim sizes U', U'' and generic interarrival times T', T''. In the random walk formulation the increments are U - T, so all that needs to be shown is that if U' - T' =_D U'' - T'', then β' = β'' and B' = B''. First, by the memoryless property of the exponential distribution, U' - T' has a left exponential tail with rate β', and similarly for U'' - T''; this immediately yields β' = β''. Next, for x > 0,

  P(U' - T' > x) = ∫_0^∞ β' e^{-β'y} B̄'(x + y) dy = β' e^{β'x} ∫_x^∞ e^{-β'z} B̄'(z) dz,

and likewise with double primes, so that

  ∫_x^∞ e^{-β'z} B̄'(z) dz = ∫_x^∞ e^{-β'z} B̄''(z) dz, x > 0;

we conclude by differentiation that B̄'(x) = B̄''(x) for all x > 0, i.e. B' = B''.

Notes and references The importance sampling method was suggested by Siegmund [343] for discrete time random walks and further studied by Asmussen [13] in the setting of compound Poisson risk models. The optimality result Theorem 3.1 is from Lehtonen & Nyrhinen [244], with the present (shorter and more elementary) proof taken from Asmussen & Rubinstein [45]; further discussion is in Lehtonen & Nyrhinen [245]. The extension to the Markovian environment model is straightforward and was suggested in Asmussen [16]. In [13], optimality is discussed in a heavy traffic limit rather than when u → ∞. The queueing literature on related algorithms is extensive, see e.g. the references in Asmussen & Rubinstein [45] and Heidelberger [190].

4 Importance sampling for the finite horizon case

The problem is to produce efficient simulation estimators for ψ(u, T) with T < ∞. As in IV.4, we write T = yu. The results of IV.4 indicate that we can expect a major difference according to whether y < 1/κ'(γ) or y > 1/κ'(γ). The easy case is y > 1/κ'(γ), where ψ(u, yu) is close to ψ(u), so that one would expect the change of measure F → F_L to produce close to optimal results also here. In fact:

Proposition 4.1 If y > 1/κ'(γ), then the estimator Z(u) = e^{-γS_{τ(u)}} I(τ(u) ≤ yu) (simulated with parameters β_L, B_L) has bounded relative error.
Proof. The assumption y > 1/κ'(γ) ensures that ψ(u, yu)/ψ(u) → 1, so that z(u) = ψ(u, yu) is of the same order of magnitude e^{-γu} as ψ(u). Bounding E_L Z(u)^2 above by e^{-2γu}, the result follows as in the proof of Theorem 3.1.

We next consider the case y < 1/κ'(γ). We recall that α_y is defined as the solution of κ'(α_y) = 1/y, that γ_y = α_y - yκ(α_y), and that γ_y > γ. Further, γ_y determines the order of magnitude of ψ(u, yu) in the sense that

  -(1/u) log ψ(u, yu) → γ_y  (4.1)

(Theorem IV.4.8). Since the definition of α_y is equivalent to E_{α_y} τ(u) ≈ yu, one would expect that the change of measure F → F_{α_y} is in some sense optimal. The corresponding estimator is

  Z(u) = e^{-α_y S_{τ(u)} + τ(u)κ(α_y)} I(τ(u) ≤ yu),  (4.3)

and we have:

Theorem 4.2 The estimator (4.3) (simulated with parameters β_{α_y}, B_{α_y}) is logarithmically efficient.

Remark 4.3 Theorem IV.4.8 has a stronger conclusion than (4.1); in fact (4.1), which is all that is needed here, can be shown much more easily, as in the argument below.

Proof. Since y < 1/κ'(γ), we have κ(α_y) > 0 and get

  E_{α_y} Z(u)^2 = E_{α_y}[e^{-2α_y S_{τ(u)} + 2τ(u)κ(α_y)}; τ(u) ≤ yu] ≤ e^{-2α_y u + 2yuκ(α_y)} = e^{-2γ_y u}.

Hence by (4.1), liminf_{u→∞} (-log Var Z(u)) / (-log z(u)^{2-ε}) ≥ 2γ_y/((2-ε)γ_y) > 1 for every ε > 0, which is logarithmic efficiency. For the lower bound behind (4.1), let σ_y² = Var_{α_y}(τ(u))/u, so that (τ(u) - yu)/(σ_y u^{1/2}) is asymptotically standard normal under F_{α_y}. Then

  z(u) = E_{α_y} Z(u) ≥ E_{α_y}[e^{-α_y S_{τ(u)} + τ(u)κ(α_y)}; yu - σ_y u^{1/2} ≤ τ(u) ≤ yu] ≥ e^{-γ_y u - σ_y u^{1/2}|κ(α_y)|} E_{α_y}[e^{-α_y ξ(u)}; yu - σ_y u^{1/2} ≤ τ(u) ≤ yu],

and the last expectation stays bounded away from 0 by Stam's lemma (Proposition IV.4.4). Hence liminf_{u→∞} (1/u) log z(u) ≥ -γ_y; that limsup ≤ follows similarly but easier, as when bounding E_{α_y} Z(u)^2 above.

Notes and references The results of the present section are new.

5 Regenerative simulation

Our starting point is the duality representations in II.3: for many risk processes {R_t}, there exists a dual process {V_t} such that

  ψ(u, T) = P(inf_{0≤t≤T} R_t < 0) = P(V_T > u),  (5.1)
  ψ(u) = P(inf_{t≥0} R_t < 0) = P(V_∞ > u),  (5.2)

where the identity for ψ(u) requires that V_t has a limit in distribution V_∞. In most of the simulation literature (say in queueing applications), the object of interest is {V_t} rather than {R_t}, but by duality, information on ruin probabilities can be obtained: (5.1)-(5.2) are used to study V_∞ by simulating {R_t} (for example, the algorithm in Section 3 produces simulation estimates for the tail P(W > u) of the GI/G/1 waiting time W). However, we believe that there are examples also in risk theory where the converse direction may be useful. One main example is {V_t} being regenerative (see A.1): then, by Proposition A1.1,

  ψ(u) = P(V_∞ > u) = (1/Eω) E ∫_0^ω I(V_t > u) dt,
where ω is the generic cycle for {V_t}. Thus, the method of regenerative simulation, which we survey below, goes as follows:

1. Simulate a zero-delayed version of {V_t} until a large number N of cycles have been completed.
2. For the i-th cycle, record Z^(i) = (Z_1^(i), Z_2^(i)), where Z_1^(i) = ω_i is the cycle length and Z_2^(i) is the time during the cycle where {V_t} exceeds u; let z_j = EZ_j^(i), so that z_1 = Eω and z_2 = E ∫_0^ω I(V_t > u) dt.

Considering first the case of independent cycles, Z^(1), ..., Z^(N) are i.i.d., and with Z_1 = (Z_1^(1) + ··· + Z_1^(N))/N, Z_2 = (Z_2^(1) + ··· + Z_2^(N))/N, the LLN yields Z_1 → z_1 a.s., Z_2 → z_2 a.s. Therefore the regenerative estimator

  ψ̂(u) = Z_2/Z_1 → z_2/z_1 = ψ(u), N → ∞,

is consistent. Thus the method provides one answer on how to avoid simulating {R_t} for an infinitely long time period, and it provides estimates for P(V_∞ > u) (and more general expectations Eg(V_∞)).

To derive confidence intervals, let Σ denote the 2×2 covariance matrix of Z^(1). Then (Z_1 - z_1, Z_2 - z_2) → N_2(0, Σ/N), and a standard transformation technique (sometimes called the delta method) yields

  √N (h(Z_1, Z_2) - h(z_1, z_2)) → N(0, σ_h²), σ_h² = ∇h Σ ∇h',

for smooth h : R² → R. Taking h(z_1, z_2) = z_2/z_1 yields ∇h = (-z_2/z_1², 1/z_1) and

  √N (ψ̂(u) - ψ(u)) → N(0, σ²), where σ² = (z_2²/z_1⁴) Σ_11 + (1/z_1²) Σ_22 - (2z_2/z_1³) Σ_12.

The natural estimator for Σ is the empirical covariance matrix

  S = (1/(N-1)) Σ_{i=1}^N (Z^(i) - Z̄)(Z^(i) - Z̄)',

so σ² can be estimated by

  s² = (Z_2²/Z_1⁴) S_11 + (1/Z_1²) S_22 - (2Z_2/Z_1³) S_12,

and the 95% confidence interval is ψ̂(u) ± 1.96 s/√N.

Notes and references The literature on regenerative simulation is extensive, see e.g. Rubinstein [310] and Rubinstein & Melamed [311]. The regenerative method is not likely to be efficient for large u but is rather a brute force one. However, it has the advantage of its generality: it has the potential of applying to substantially more complex situations, say risk processes with a complicated structure of the point process of claim arrivals and heavy-tailed claims, and in some situations it may be the only one resolving the infinite horizon problem. There is potential also for combining with some variance reduction method.

6 Sensitivity analysis

We return to the problem of III.9, to evaluate the sensitivity ψ_ζ(u) = (d/dζ)ψ(u), where ζ is some parameter governing the risk process. In III.9, asymptotic estimates were derived using the renewal equation for ψ(u); we here consider simulation algorithms instead. Before going into the complications of ruin probabilities, consider an extremely simple example: the expectation z = EZ of a single r.v. Z of the form Z = φ(X), where X is a r.v.
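As an illustration of steps 1-2 and the delta-method interval (our own sketch, not from the book), take {V_t} to be the workload dual of the compound Poisson model with Exp(δ) claims — Poisson(β) arrivals, Exp(δ) jumps, drift -1 — for which P(V_∞ > u) = (β/δ)e^{-δ(1-β/δ)u} is known in closed form:

```python
import math
import random

def regen_estimate(u, beta, delta, cycles=20_000, seed=7):
    """Regenerative estimator of P(V_inf > u) for the workload process.
    A cycle runs between successive arrivals to an empty system; by
    memorylessness, leftover interarrival time at emptying can be ignored.
    Returns the ratio estimate and its delta-method 95% half-width."""
    rng = random.Random(seed)
    z = []  # per-cycle pairs (cycle length, time spent above u)
    for _ in range(cycles):
        w = rng.expovariate(beta)            # idle period opening the cycle
        above, v = 0.0, rng.expovariate(delta)
        while v > 0:
            s = min(v, rng.expovariate(beta))  # run until empty or next arrival
            above += max(0.0, min(s, v - u))   # portion of s with V > u
            v -= s
            w += s
            if v > 0:
                v += rng.expovariate(delta)    # arrival during busy period
        z.append((w, above))
    n = len(z)
    z1 = sum(a for a, _ in z) / n
    z2 = sum(b for _, b in z) / n
    s11 = sum((a - z1) ** 2 for a, _ in z) / (n - 1)
    s22 = sum((b - z2) ** 2 for _, b in z) / (n - 1)
    s12 = sum((a - z1) * (b - z2) for a, b in z) / (n - 1)
    var = (z2**2 / z1**4) * s11 + s22 / z1**2 - 2 * (z2 / z1**3) * s12
    return z2 / z1, 1.96 * math.sqrt(var / n)
```

With β = 0.5, δ = 1 and u = 1, the exact value 0.5e^{-0.5} ≈ 0.303 falls inside the reported confidence interval; note that the interval widens rapidly as u grows, the brute-force limitation mentioned above.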
with distribution depending on a parameter ζ. Here are the ideas of the two main approaches in today's simulation literature:

The score function (SF) method. Let X have a density f(x, ζ) depending on ζ. Then

  z(ζ) = ∫ φ(x) f(x, ζ) dx,

so that differentiation yields

  z_ζ = ∫ φ(x) (d/dζ) f(x, ζ) dx = ∫ φ(x) [(d/dζ) f(x, ζ) / f(x, ζ)] f(x, ζ) dx = E[SZ],

where

  S = (d/dζ) f(X, ζ) / f(X, ζ) = (d/dζ) log f(X, ζ)

is the score function familiar from statistics. Thus, SZ is an unbiased Monte Carlo estimator of z_ζ.

Infinitesimal perturbation analysis (IPA) uses sample path derivatives. So assume that a r.v. with density f(x, ζ) can be generated as h(U, ζ), where U is uniform(0,1). Then z(ζ) = E φ(h(U, ζ)) and

  z_ζ = E[φ'(h(U, ζ)) h_ζ(U, ζ)],

where h_ζ(u, ζ) = (∂/∂ζ) h(u, ζ). Thus, φ'(h(U, ζ)) h_ζ(U, ζ) is an unbiased Monte Carlo estimator of z_ζ.

The derivations of these two estimators are heuristic, in that both use an interchange of expectation and differentiation that needs to be justified.
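For a concrete toy check (ours, not from the book), take X ~ Exp(ζ) with φ(x) = e^{-x}, so that z(ζ) = ζ/(ζ+1) and z_ζ = 1/(ζ+1)² exactly. The SF estimator is SZ = (1/ζ - X)e^{-X}, and — because this φ is smooth — the IPA estimator φ'(h(U,ζ))h_ζ(U,ζ) = Xe^{-X}/ζ is unbiased here too:

```python
import math
import random

def sf_and_ipa(zeta, n=200_000, seed=11):
    """SF and IPA estimators of d/dzeta E[exp(-X)] for X ~ Exp(zeta).
    SF:  (1/zeta - X) * exp(-X)   (score of the exponential density)
    IPA: X * exp(-X) / zeta       (valid here because phi is smooth)."""
    rng = random.Random(seed)
    sf = ipa = 0.0
    for _ in range(n):
        x = rng.expovariate(zeta)
        w = math.exp(-x)
        sf += (1.0 / zeta - x) * w
        ipa += x * w / zeta
    return sf / n, ipa / n
```

At ζ = 1 both averages are close to the exact value 0.25. Replacing φ by an indicator breaks the IPA estimator, as discussed next, while the SF estimator is unaffected.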
For the SF method, this is usually unproblematic and involves some application of dominated convergence. For IPA there are, however, nonpathological examples where sample path derivatives fail to produce estimators with the correct expectation. For example, for the exponential density f(x, ζ) = ζe^{-ζx} one can take h(U, ζ) = -log U/ζ, giving h_ζ(U, ζ) = log U/ζ². Now take φ as an indicator function, say φ(x) = I(x > x_0), and note that h(U, ζ) is increasing in ζ. Then φ(h(U, ζ)) is 0 for ζ < ζ_0 and 1 for ζ > ζ_0, for some ζ_0 = ζ_0(U), so that the sample path derivative φ'(h(U, ζ)) h_ζ(U, ζ) is 0 w.p. one; thus IPA will estimate z_ζ by 0, which is obviously not correct. In the setting of ruin probabilities, this phenomenon is particularly unpleasant since indicators occur widely in the CMC estimators. A related difficulty occurs in situations involving the Poisson number N_t of claims: also here the sample path derivative w.r.t. β is 0. The following example demonstrates how the SF method handles this situation.

Example 6.1 Consider the sensitivity ψ_β(u) w.r.t. the Poisson rate β in the compound Poisson model. Let M(u) be the number of claims up to the time τ(u) of ruin (thus, τ(u) = T_1 + ··· + T_{M(u)}). The likelihood ratio up to τ(u) for two Poisson processes with rates β, β_0 is

  Π_{i=1}^{M(u)} [β e^{-βT_i}] / [β_0 e^{-β_0 T_i}].

Taking expectation, differentiating w.r.t. β and letting β_0 = β, we get

  ψ_β(u) = E[(M(u)/β - τ(u)) I(τ(u) < ∞)].

To resolve the infinite horizon problem, change the measure to F_L as when simulating ψ(u); that is, to generate Z_β(u), the risk process should be simulated with parameters β_L, B_L. We then arrive at the estimator

  Z_β(u) = (M(u)/β - τ(u)) e^{-γS_{τ(u)}}

for ψ_β(u). We recall (Proposition III.9.4) that ψ_β(u) is of the order of magnitude ue^{-γu}. Thus, the estimation of ψ_β(u) is subject to the same problem concerning relative precision as in rare events simulation. However, since

  E_L Z_β(u)^2 ≤ E_L (M(u)/β - τ(u))^2 e^{-2γu} = O(u²) e^{-2γu},

we have

  Var_L(Z_β(u)) / z_β(u)^2 = O(u²) e^{-2γu} / (u² e^{-2γu}) = O(1),

so that in fact the estimator Z_β(u) has bounded relative error.

Notes and references A survey of IPA and references is given by Glasserman [161] (see also Suri [358] for a tutorial), whereas for the SF method we refer to Rubinstein & Shapiro [312]. There has been much work on resolving the difficulties associated with IPA pointed out above. Example 6.1 is from Asmussen & Rubinstein [46], who also work out a number of similar sensitivity estimators, in part for different models, for the sensitivities w.r.t. different parameters, and for different measures of risk than ruin probabilities. In the setting of ruin probabilities, a relevant reference is Vazquez-Abad [374].
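Continuing the exponential-claims setting used earlier (our own illustrative choice, not from [46]), Z_β(u) is easy to implement: under F_L one records the number of claims M(u) and the ruin time τ(u) along with S_{τ(u)}; for B = Exp(δ) the exact sensitivity ψ_β(u) = (1 + βu)e^{-(δ-β)u}/δ is available as a check.

```python
import math
import random

def psi_beta_sens(u, beta, delta, n=50_000, seed=2):
    """SF estimator of d psi(u)/d beta for the compound Poisson model with
    Exp(delta) claims, simulated under the Lundberg conjugate F_L:
    Z_beta(u) = (M(u)/beta - tau(u)) * exp(-gamma * S_tau)."""
    rng = random.Random(seed)
    gamma = delta - beta
    total = 0.0
    for _ in range(n):
        s = tau = 0.0
        m = 0
        while s <= u:
            t = rng.expovariate(delta)       # interarrival under beta_L = delta
            s += rng.expovariate(beta) - t   # claim under B_L = Exp(beta)
            tau += t
            m += 1
        total += (m / beta - tau) * math.exp(-gamma * s)
    return total / n
```

With β = 0.5, δ = 1 and u = 5 the exact value is 3.5e^{-2.5} ≈ 0.287, and the relative error of the estimate remains stable in u, in agreement with the bounded relative error property derived above.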
Chapter XI

Miscellaneous topics

1 The ruin problem for Bernoulli random walk and Brownian motion. The two-barrier ruin problem

Consider first a Bernoulli random walk, defined as R_0 = u (with u ∈ {0, 1, 2, ...}), R_n = u + X_1 + ··· + X_n, where X_1, X_2, ... are i.i.d. and {-1, +1}-valued with P(X_k = 1) = θ. The two-barrier ruin probability ψ_a(u) is defined as the probability of being ruined (starting from u) before the reserve reaches level a > u. That is,

  ψ_a(u) = P(τ(u, a) = τ(u)) = 1 - P(τ(u, a) = τ_+(a)),

where¹ τ(u) = inf{t ≥ 0 : R_t ≤ 0}, τ_+(a) = inf{t ≥ 0 : R_t ≥ a}, τ(u, a) = τ(u) ∧ τ_+(a). Besides its intrinsic interest, ψ_a(u) can also be a useful vehicle for computing ψ(u), by letting a → ∞.

Proposition 1.1 For a Bernoulli random walk with θ ≠ 1/2,

  ψ_a(u) = (z^a - z^u) / (z^a - 1), where z = (1-θ)/θ.  (1.1)

If θ = 1/2, then

  ψ_a(u) = (a - u)/a.  (1.2)

We give two proofs, one elementary but difficult to generalize to other models, and the other more advanced but applicable also in some other settings.

Proof 1. Conditioning upon X_1 yields immediately the recursion

  ψ_a(1) = (1-θ) + θψ_a(2), ψ_a(u) = (1-θ)ψ_a(u-1) + θψ_a(u+1) (2 ≤ u ≤ a-2), ψ_a(a-1) = (1-θ)ψ_a(a-2),

and insertion shows that (1.1) is a solution.

Proof 2. In a general random walk setting, Wald's exponential martingale is defined as

  {e^{α(u + X_1 + ··· + X_n)} / F̂[α]^n}, n = 0, 1, ...,

where α is any number such that F̂[α] = Ee^{αX} < ∞. We choose α = γ, where γ is the Lundberg exponent, i.e. the nonzero solution of F̂[γ] = 1, and in view of the discrete nature of a Bernoulli random walk we write z = e^γ. The Lundberg equation becomes

  1 = F̂[γ] = θz + (1-θ)/z,

and the nontrivial solution is z = (1-θ)/θ. The martingale is then {z^{R_n}} = {z^u z^{X_1+···+X_n}}. By optional stopping,

  z^u = E z^{R_0} = E z^{R_{τ(u,a)}} = z^0 P(R_{τ(u,a)} = 0) + z^a P(R_{τ(u,a)} = a) = ψ_a(u) + z^a (1 - ψ_a(u)),

and solving for ψ_a(u) yields (1.1).

¹Note that the definition of τ(u) differs from the rest of the book, where we use τ(u) = inf{t > 0 : R_t < 0} (two sharp inequalities). In most cases, either this makes no difference or it is trivial to translate from one setup to the other, as e.g. in the Bernoulli random walk example below.
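Formula (1.1) is easy to verify numerically: the closed form satisfies the boundary conditions and the one-step recursion of Proof 1 exactly. A small self-contained check (ours):

```python
def psi_a(u, a, theta):
    """Two-barrier ruin probability (1.1) for a Bernoulli random walk:
    hit 0 before a, starting from u, with P(step = +1) = theta != 1/2."""
    z = (1.0 - theta) / theta
    return (z**a - z**u) / (z**a - 1.0)
```

For any θ ≠ 1/2 the function returns 1 at u = 0, 0 at u = a, and reproduces ψ_a(u) = θψ_a(u+1) + (1-θ)ψ_a(u-1) to machine precision at all interior points.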
If θ = 1/2, (1.1) degenerates (z = 1); {R_n} is then itself a martingale, and we get in a similar manner

  u = E R_0 = E R_{τ(u,a)} = 0 · ψ_a(u) + a (1 - ψ_a(u)),

i.e. ψ_a(u) = (a - u)/a, which is (1.2).

Corollary 1.2 For a Bernoulli random walk with θ > 1/2,

  ψ(u) = ((1-θ)/θ)^u.  (1.3)

If θ ≤ 1/2, then ψ(u) = 1.

Proof. Let a → ∞ in (1.1) resp. (1.2), noting that z < 1 when θ > 1/2 and z ≥ 1 when θ ≤ 1/2.

Proposition 1.3 Let {R_t} be Brownian motion starting from u and with drift µ and unit variance. Then for µ ≠ 0,

  ψ_a(u) = (e^{-2µu} - e^{-2µa}) / (1 - e^{-2µa}).  (1.4)

If µ = 0, then ψ_a(u) = (a - u)/a.

Proof. Since Ee^{α(R_t - R_0)} = e^{t(α²/2 + αµ)}, the Lundberg equation is γ²/2 - γµ = 0 with nonzero solution γ = 2µ, so that {e^{-2µR_t}} is a martingale. Applying optional stopping to this exponential martingale yields

  e^{-2µu} = ψ_a(u) + e^{-2µa} (1 - ψ_a(u)),

and solving for ψ_a(u) yields (1.4). If µ = 0, {R_t} is itself a martingale, and just the same calculation as in the proof of Proposition 1.1 yields ψ_a(u) = (a - u)/a.

Corollary 1.4 For Brownian motion with drift µ > 0, ψ(u) = e^{-2µu}. If µ ≤ 0, then ψ(u) = 1.
Proof. Let a → ∞ in (1.4).

The reason that the calculations work out so smoothly for Bernoulli random walks and Brownian motion is the skip-free nature of the paths. For most standard risk processes, the paths are upwards skip-free but not downwards, and thus one encounters the problem of controlling the undershoot under level 0. Here is one more case where this is feasible:

Example 1.5 Consider the compound Poisson model with premium rate 1 and exponential claims with rate δ, so that γ = δ - β and ρ = β/δ. Here {e^{-γR_t}} is a martingale, and the undershoot under 0 is exponential with rate δ, so that

  E[e^{-γR_{τ(u,a)}} | R_{τ(u,a)} < 0] = δ/(δ - γ) = δ/β.

Optional stopping at τ(u, a) therefore gives

  e^{-γu} = (δ/β) ψ_a(u) + e^{-γa} (1 - ψ_a(u)), i.e. ψ_a(u) = (e^{-γu} - e^{-γa}) / (δ/β - e^{-γa}).

Again, letting a → ∞ yields the standard expression ρe^{-γu} for ψ(u), valid if ρ < 1 (otherwise ψ(u) = 1).

However, passing to even more general cases the method quickly becomes unfeasible (see, however, Chapter VIII). It may then be easier to first compute the one-barrier ruin probability ψ(u):

Proposition 1.6 If the paths of {R_t} are upwards skip-free and ψ(a) < 1, then

  ψ_a(u) = (ψ(u) - ψ(a)) / (1 - ψ(a)), 0 ≤ u ≤ a.  (1.7)

Proof. By the upwards skip-free property, R_{τ(u,a)} = a on {τ(u, a) = τ_+(a)}. Hence ruin occurs either before reaching a, or after first passing through level a, so that

  ψ(u) = ψ_a(u) + (1 - ψ_a(u)) ψ(a).

If ψ(a) < 1, solving for ψ_a(u) yields (1.7).
Note that this argument has already been used in VII.1a for computing ruin probabilities for a two-step premium function.

We now return to Bernoulli random walk and Brownian motion to consider finite horizon ruin probabilities. For the symmetric (drift 0) case these are easily computable by means of the reflection principle:

Proposition 1.7 For Brownian motion with drift 0,

  ψ(u, T) = 2(1 - Φ(u/√T)).  (1.8)

Proof. In terms of the claim surplus process {S_t} = {u - R_t} (here Brownian motion with drift 0 starting from 0), we have ψ(u, T) = P(M_T > u), where M_T = max_{0≤t≤T} S_t. Now

  P(M_T > u) = P(S_T > u) + P(S_T ≤ u, M_T > u).

Brownian motion is symmetric; in particular, from time τ(u) (where the level is u) it is equally likely to go to levels < u and levels > u in the remaining time. Hence

  P(S_T ≤ u, M_T > u) = P(S_T > u, M_T > u) = P(S_T > u),

so that

  ψ(u, T) = P(M_T > u) = 2P(S_T > u),  (1.9)

which is the same as (1.8).

Corollary 1.8 Let {R_t} be Brownian motion starting from u and with drift -µ, so that {S_t} is Brownian motion with drift µ starting from 0. Then the density and c.d.f. of τ(u) are

  P_µ(τ(u) ∈ dT) = (u / √(2πT³)) exp{-(u - µT)²/(2T)} dT,  (1.10)
  P_µ(τ(u) ≤ T) = 1 - Φ(u/√T - µ√T) + e^{2µu} Φ(-u/√T - µ√T).  (1.11)

Proof. For µ = 0, (1.10) follows from (1.8) by straightforward differentiation. For µ ≠ 0, the density dP_µ/dP_0 of S_t is e^{µS_t - tµ²/2}, and hence

  P_µ(τ(u) ∈ dT) = E_0[e^{µS_{τ(u)} - τ(u)µ²/2}; τ(u) ∈ dT] = e^{µu - Tµ²/2} P_0(τ(u) ∈ dT) = e^{µu - Tµ²/2} (u/√(2πT³)) e^{-u²/(2T)} dT,

which is the same as (1.10). (1.11) then follows by checking that the derivative of the r.h.s. of (1.11) w.r.t. T is (1.10) and that the value at T = 0 is 0.

Small modifications also apply to Bernoulli random walks:

Proposition 1.9 For a Bernoulli random walk with θ = 1/2,

  ψ(u, T) = P(S_T = u) + 2P(S_T > u),  (1.12)

whenever u, T are integer-valued and non-negative. Here

  P(S_T = v) = C(T, (v+T)/2) / 2^T for v = -T, -T+2, ..., T-2, T, and P(S_T = v) = 0 otherwise.

Proof. The argument leading to (1.9) goes through unchanged, except that the event {S_T = u}, which has probability 0 for Brownian motion, now carries positive probability and is counted once rather than twice. The expression for P(S_T = v) is just a standard formula for the binomial distribution.

We finally consider a general diffusion {R_t} on [0, ∞), with drift µ(x) and variance σ²(x) at x. We assume that µ(x) and σ²(x) are continuous with σ²(x) > 0 for x > 0. Close to x, {R_t} behaves as Brownian motion with drift µ = µ(x) and variance σ² = σ²(x), and in a similar spirit as in VII.3 we can define the local adjustment coefficient γ(x) = 2µ(x)/σ²(x) as the adjustment coefficient of the locally approximating Brownian motion. Let

  s(y) = exp{-∫_0^y γ(x) dx}, S(x) = ∫_0^x s(y) dy, S(∞) = ∫_0^∞ s(y) dy.  (1.13)

The following result gives a complete solution of the ruin problem for the diffusion, subject to the assumption that S(x), as defined in (1.13), is finite for all x > 0. Here ψ(u) is defined as the probability of actually hitting 0; if the assumption fails, the behaviour at the boundary 0 is more complicated, and it may happen, e.g., that ψ(u), as defined above, is zero for all u > 0 but that nevertheless R_t → 0 (the problem leads into the complicated area of boundary classification of diffusions, see e.g. Breiman [78] or Karlin & Taylor [222] p. 226).

Theorem 1.10 Consider a diffusion process {R_t} on [0, ∞), such that the drift µ(x) and the variance σ²(x) are continuous functions of x and σ²(x) > 0 for x > 0. Assume further that S(x) as defined in (1.13) is finite for all x > 0. If

  S(∞) < ∞,  (1.14)

then 0 < ψ(u) < 1 for all u > 0 and

  ψ(u) = (S(∞) - S(u)) / S(∞).  (1.15)

Conversely, if (1.14) fails, then ψ(u) = 1 for all u > 0.
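Formula (1.12) can be checked directly against brute-force path enumeration for small T (our own check, reading ψ(u, T) as P(max_{n≤T} S_n ≥ u)):

```python
from itertools import product
from math import comb

def psi_reflect(u, T):
    """(1.12): P(max S_n >= u) = P(S_T = u) + 2 P(S_T > u), theta = 1/2."""
    def p_eq(v):  # P(S_T = v); zero when v and T have opposite parity
        return comb(T, (v + T) // 2) / 2**T if (v + T) % 2 == 0 else 0.0
    return p_eq(u) + 2 * sum(p_eq(v) for v in range(u + 1, T + 1))

def psi_brute(u, T):
    """Enumerate all 2^T paths and count those whose running max reaches u."""
    hit = 0
    for steps in product((-1, 1), repeat=T):
        s = m = 0
        for x in steps:
            s += x
            m = max(m, s)
        hit += m >= u
    return hit / 2**T
```

For T = 12 the two functions agree exactly for every barrier u, confirming the reflection argument.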
Lemma 1.11 Let 0 < b < u < a and let ψ_{a,b}(u) be the probability that {R_t} hits b before a starting from u. Then

  ψ_{a,b}(u) = (S(a) - S(u)) / (S(a) - S(b)).  (1.16)

Proof. Recall that under mild conditions on q, E_u q(R_{dt}) = q(u) + Lq(u) dt, where

  Lq(u) = (σ²(u)/2) q''(u) + µ(u) q'(u)

is the differential operator associated with the diffusion. If b < u < a, we can ignore the possibility of ruin or hitting the upper barrier a before dt, so that

  ψ_{a,b}(u) = E_u ψ_{a,b}(R_{dt}) = ψ_{a,b}(u) + Lψ_{a,b}(u) dt.

Hence Lψ_{a,b} = 0. Using s'/s = -2µ/σ², elementary calculus shows that we can rewrite L as

  Lq(u) = (1/2) σ²(u) s(u) (d/du)[q'(u)/s(u)],  (1.17)

so that Lψ_{a,b} = 0 implies that ψ'_{a,b}/s is constant, i.e. ψ_{a,b} = α + βS. The obvious boundary conditions ψ_{a,b}(b) = 1, ψ_{a,b}(a) = 0 then yield the result.

Proof of Theorem 1.10. Letting b ↓ 0 in (1.16) yields ψ_a(u) = 1 - S(u)/S(a). Letting a ↑ ∞ and considering the cases S(∞) = ∞, S(∞) < ∞ separately completes the proof.

Notes and references All material of the present section is standard. A classical reference for further aspects of Bernoulli random walks is Feller [142]. A good introduction to diffusions is in Karlin & Taylor [222]; see in particular pp. 191-195 for material related to Theorem 1.10. For generalizations of Proposition 1.6 to Markov-modulated models, see Asmussen & Perry [42]. Further references on two-barrier ruin problems include Dickson & Gray [116], [117]. In case of integrability problems at 0, the function S(x) is
referred to as the natural scale in the general theory of diffusions (one then works instead with a lower limit δ > 0 of integration in (1.13)). Another basic quantity is the speed measure M, defined by the density 1/(σ²(u)s(u)) showing up in (1.17). The emphasis in the diffusion literature is often on stationary distributions, but by duality, information on ruin probabilities can be obtained. Markov-modulated Brownian models, with the drift and the variance depending on an underlying Markov process, are currently an extremely active area of research, motivated from the study of modern ATM (asynchronous transfer mode) technology in telecommunications; much of the literature deals with the pure drift case, corresponding to piecewise linear paths. See Asmussen [20] and Rogers [305] for some recent treatments and references to the vast literature.

2 Further applications of martingales

Consider the compound Poisson model with adjustment coefficient γ and the following versions of Lundberg's inequality (cf. III.6 and IV.4):

  ψ(u) ≤ e^{-γu},  (2.1)

  C_- e^{-γu} ≤ ψ(u) ≤ C_+ e^{-γu},  (2.2)

where

  C_- = inf_{x≥0} B̄(x) / ∫_x^∞ e^{γ(y-x)} B(dy), C_+ = sup_{x≥0} B̄(x) / ∫_x^∞ e^{γ(y-x)} B(dy);

  ψ(u, yu) ≤ e^{-γ_y u}, y < 1/κ'(γ),  (2.3)

  ψ(u) - ψ(u, yu) ≤ e^{-γ_y u}, y > 1/κ'(γ),  (2.4)

where γ_y = α_y - yκ(α_y) with κ'(α_y) = 1/y.  (2.5)

A martingale proof of (2.1) was given already in II.1, and here are alternative martingale proofs of the rest. They all use the fact that {e^{-αR_t - tκ(α)}} = {e^{-αu + αS_t - tκ(α)}} is a martingale (cf. Remark II.4), together with optional stopping applied to the stopping time τ(u) ∧ T, yielding

  e^{-αu} = E e^{-αR_{τ(u)∧T} - (τ(u)∧T)κ(α)}  (2.6)
. dr) 1 = 1 I0 /o C+ C+ From this the upper inequality follows.yuk (ay)(u&(u. and the proof of the lower inequality is similar. u Proof of (2.7R. Proof of ( 2. Hence E [e7Rr (u) Jr(u) < ool ^00 H( dt.B(r))/B(r). . it follows easily from (2. we have ic(ay ) < 0 and use the lower bound E [e7Rr („).1. For (2. Equivalently.E [e. we have tc(ay) > 0 and we can bound (2.6) with = 'y that eyu .d. so that i/1(uL yu) < eayu .f.T)  V.yu) Y Similarly for (2.4).1. RT(u)_) given r(u) < oo.( u ) I T(U) < 00] .T(u)K(ay) I yu < r(u) < T] F(yu < r(u) < T) > e.2. yu))• Letting T + oo yield e_ayu > eyur4ay)(0(u)  Notes and references See II. when Rt_ = r. dr) e 7( yr)B(dy) B(r) f oo o 0 r > H(dt. Rt has distribution B(r + dy)/B(r).4): We take a = ay in (2. (B(y) .3).2): As noted in Proposition II.yu))• b(u.1 . dr) denote the conditional distribution of (T(u). (2. y > r. eyuk (ay) = e7yu e > eyu"(ay ) ij(u.3). Let H(dt.)r(u)r.(u. dr JO Zoo ) f e7'B(r + dy) B(r) Jo ^00 ^00 H(dt.6) below by 1 E Le7Rr(. A claim leading to ruin at time t has c. FURTHER APPLICATIONS OF MARTINGALES 305 (we cannot use the stopping time r(u) directly because P(r(u) = oo) > 0 and also because the conditions of the optional stopping time theorem present a problem).6).(ay)I T(u) < yu] P(r(u) < yu) (using RT(u) < 0).
Accordingly.. Thus . v2 later. its generality. the parameter will be u rather than n).?n typically only give the dominant term in an asymptotic expression . For example.gn if nioo lim 109 fn = 1 log gn (later in this section. og For sequences fn. MISCELLANEOUS TOPICS 3 Large deviations The area of large deviations is a set of asymptotic results on rare event probabilities and a set of methods to derive such results. logarithmic asymptotics . gn with fn + 0 . and that a considerable body of theory has been developed.1) is an example of sharp asymptotics : .^ e nn 1 > x n 0o 2xn (3. if x > EX1.means (as at other places in the book) that the ratio is one in the limit (here n * oo).1) does not capture the \ in (3. The limit result (3.. (3. + X.. cle . nroo n n /// Note in particular that (3. The classical result in the area is Cramer's theorem. Example 3.1) amounts to the weaker statement lim 1 log P I Sn > x I = 17./n E I) for intervals I C R.(B) = log EeOX 1 is defined for sufficiently many 0. . such that the cumulant generating function r. e. then P C S. not quite so much in insurance risk. gn 4 0. large deviations results been..g.306 CHAPTER XI. Cramer considered a random walk Sn = X1 + . The advantage of the large deviations approach is.1).1 We will go into some more detail concerning (3. 1) but only the dominant exponential term . however . (3.2) can be rewritten as F (Sn/n > x) 1g a'fin.3na with a < 1. logarithmic asymptotics is usually much easier to derive than sharp asymptotics but also less informative . However .2). Thus. .1) where we return to the values of 0. in being capable of treating many models beyond simple random walks which are not easily treated by other models .nn or C2e.the correct sharp asymptotics might as well have +. and gave sharp asymptotics for probabilities of the form P (S. The last decades have seen a boom in the area and a considerable body of applications in queueing theory. we will write fn 1. ri. 
Define κ* as the convex conjugate of κ,

κ*(x) = sup_θ (θx - κ(θ))

(other names are the entropy, the Legendre-Fenchel transform or just the Legendre transform or the large deviations rate function). Most often, the sup in the definition of κ* can be evaluated by differentiation: κ*(x) = θx - κ(θ) where θ = θ(x) is the solution of x = κ'(θ). This is a saddlepoint equation: the mean κ'(θ) of the distribution of X_1 exponentially tilted with θ, i.e. of

P~(X_1 ∈ dx) = E[e^{θX_1 - κ(θ)}; X_1 ∈ dx],  (3.3)

is put equal to x. In fact, exponential change of measure is a key tool in large deviations methods. Define η = κ*(x). Since

P(S_n/n > x) = E~[e^{-θS_n + nκ(θ)}; S_n/n > x],

replacing S_n in the exponent by nx and ignoring the indicator yields the Chernoff bound

P(S_n/n > x) ≤ e^{-θnx + nκ(θ)} = e^{-nη}.  (3.4)

Next, S_n is asymptotically normal w.r.t. P~ with mean nx and variance nσ², where σ² = σ²(x) = κ''(θ), so that

P~(nx < S_n < nx + 1.96σ√n) → 0.475,

and hence for all large n

P(S_n/n > x) ≥ E~[e^{-θS_n + nκ(θ)}; nx < S_n < nx + 1.96σ√n] ≥ 0.4 e^{-nη - 1.96θσ√n},

which in conjunction with (3.4) immediately yields (3.2). More precisely, if we replace S_n by nx + σ√n V where V is N(0,1), we get

P(S_n/n > x) ≈ E~[e^{-θ(nx + σ√n V) + nκ(θ)}; V > 0] = e^{-nη} ∫_0^∞ e^{-θσ√n y} (1/√(2π)) e^{-y²/2} dy ≈ e^{-nη} · 1/(θσ√(2πn)).
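The rate function and the Chernoff bound are explicit in simple cases. The following sketch (a hypothetical numerical illustration, not part of the text) takes Bernoulli(p) increments, for which κ*(x) is the relative entropy, and compares the bound with the exact binomial tail:

```python
import math

def rate_function(x, p=0.5):
    """kappa*(x) = sup_theta (theta*x - kappa(theta)) for Bernoulli(p)
    increments, kappa(theta) = log(1 - p + p*e^theta); the sup is attained
    at theta(x) = log(x(1-p)/((1-x)p)) and equals the relative entropy."""
    return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

n, x, p = 100, 0.7, 0.5
# Exact tail P(S_n/n >= x) for S_n ~ Binomial(n, p).
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(round(n * x), n + 1))
chernoff = math.exp(-n * rate_function(x, p))   # the Chernoff bound
```

The bound holds, and -(1/n) log P(S_n/n >= x) is already close to κ*(x) at n = 100, illustrating how the logarithmic asymptotics miss only subexponential factors.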
. however. there exists z E (0. . Mogulskii's theorem which gives path asymptotics....s. and the WentzellFreidlin theory of slow Markov walks.. . .'s. r(u) = inf {n : Sn > u} and o(u) = P('r(u) < oo). Xn given by Fn(dxl.. that is. and write Sn = X1 + • • • + Xn.. MISCELLANEOUS TOPICS which is the same as (3. We shall need: Lemma 3 . we shall concentrate on a result which give asymptotics under conditions similar to the GartnerEllis theorem: Theorem 3 . asymptotics for probabilities of the form P ({S[nti/n}o<t<l E r) for a suitable set r of functions on [0.h. be a sequence of r. X2.. to be made rigorous. The substitution by V needs. 1) and no such that Sn . .. integrates to 1 by the definition of Icn). 1]..dxn) where Fn is the distribution of (X1i . is differentiable at ry with 0 < K'(y) < 00.v..2 (GLYNN & WHITT [163]) Let X1. 260 for details.1). For the proof. which is of similar spirit as the dicussion in VII. e > 0 such that (i) Kn (0) = log Ee°Sn is welldefined and finite for 'y .3 For each i > 0.p > 7 < zn. Further main results in large deviations theory are the GartnerEllis theorem. see Jensen u [215] or [APQ] p.e < 8 < y + e. (iii) #c (8) = limn.o log Ee9Sn /n.. commonly denoted as is the saddlepoint approximation. Xn) and sn = x1 + • • • + xn (note that the r.e < 8 < y + e. Pn Sn1 . Assume that there exists 'y. which is a version of Cramer's theorem where independence is weakened to the existence of c(O) = limn. In the application of large deviations to ruin probabilities.308 CHAPTER XI. Then i/.. (ii) lim supn.. n Icn(0) exists and is finite for ry .'(u) )Ng a"u. Sanov's theorem which give rare events asymptotics for empirical distributions../^ >7 < zn n for n n0. Ee9X n < oo for e < 0 < e.. (iv) tc(ry) = 0 and r. We further write µ = tc'(ry)..3..dxn) = 05nKn(7)Fn(dx1. we introduce a change of measure for X1.
.+r.. can be chosen strictly negative by taking 9 small enough.+r7) < zn for n > no. u Proof of Theorem 3. This proves the existence of z < 1 and no such that Pn (Sn/n > µ. the r .n e(µ +o)w"(7) [Eep(B +7)Sn]1 /p [EegoX. For Sn1i we have Fn(Sn 1/n > µ+r7) < ene(µ+ 1?)EneeS„1 = ene ( µ+n)EneeSneX„ eno(µ +n) Ee(e+7)Sn ex„ wn (7) < e.ne(p+ 17).Bµ . h. log zl'(u)/u > 'y. Let r7 > 0 be given and let m = m(77) = [u(1 + 77)/µ] + 1.µ?7 .91) + o(O ) as 0 J.77) follows by symmetry (note that the argument did not use µ > 0). S.077 n^oo n and by Taylor expansion and (iv ).h. S. > 1 +17] m(7). This establishes the first claim of the lemma .ne(µ limsup 1 log Pn (Sn/n > µ + 17) < ic(9 + ry) .m(7).2.n m µ 1 + rl .r (7) n = e.2. Since I EeqOX „ ] 1/q is bounded for large n by (ii)..n > u ) = [ Em [em Em 1e. The corresponding claim for Pn(Sn/n < µ . We first show that lim inf„_. h.s. mµ Sm > u] km e7Sm+n. Clearly.3. is of order ..s.]1/q = e. we get lim sup 1 log Pn (Sn1 /n > µ + r7) < 0(1i + r7) + i(p(0 +'Y))/p n+oo n and by Taylor expansion.. in particular the r.W.s. P n(Sn/n > {c+77) < e no(µ 309 +n)Enees n +n)elcn(B +7).YS. it is easy to see that the r.Kn(7)e'n (p(O +7))/p I Ee geXn]1/q where we used Holder's inequality with 1/p+ 1/q = 1 and p chosen so close to 1 and 0 so close to 0 that j p(0 +. for Sn. LARGE DEVIATIONS Proof Let 0 < 9 < e where a is as in Theorem 3.> .y) .71 < e and jq9j < e. Then V. The rest of the argument is as before. can be chosen strictly negative by taking p close enough to 1 and 0 close enough to 0. ( U) P(S. 0.
. 3.log z) /2 and Sn Fn\ n >lb+S) <Zn.(•) goes to 1 by Lemma 3.6) for some z < 1 and all n > n(E).n(ry)/u 4 0andm/u* (1 + r7)/µ. I > IL exp `S.+wn(7). n=1 . Obviously...I < µl1 1+77 I M 1_ 1+277 S.n Yµ 1 + m + r ('Y) } U n \ 77 m µ µ7 1 < 1+ 77 ) Here E.7) so that n(b) I1 < e'Yu E en.. n=1 n=n(b)+1 00 Lu(1 +6) /µJ 13 F( T (u) = P(T(u) n).. and since Ic.(Y).. I2 = F(T(u) = n). MISCELLANEOUS TOPICS (7). logO(u)/u > ry.310 ]Em I e.0 log i'(u )/u < 'y. Sn > u] < eYu+Kn(7)pn(Sn > u) (3. this is possible by (iii). 0 yields liminfu __.. (iv) and Lemma 3.3. Pn \ > la+ 8 I < zn (3. For lim supu. P(T(u) = n) < P(Sn > u) = En [e7S.YS +^c CHAPTER XI. we write P(T(u) = n) = Il + I2 + I3 + I4 'i/I(u) _ E00 n=1 where n(b) Lu(10/µJ Ii = 1: F(T(u) = n). we get lum inf z/i(u) 1 +12r7 >_ ry + 77 Letting r7 J. 14 = = E Lu(16)/aJ+1 Lu(1+6)/µJ+l = n) and n(S) is chosen such that icn('y )/n < 6 A (.
' 1 + b) n e7u x 1 /2 1 n x n / 2x (3.11) [u(1+6)/µJ+1 1  Thus an upper bound for z/'(u) is n(6) e'Yu n=1 eKn (7) + 2 + (28U + 1) e6u(1+6)/µ Fi 1 zl /2 and using (i).zl/z en6 [u(1 +6)/µJ 1u (1 +6) /µJ ekn(7) < e' 13 < C" E Yu l u(16)/lij+1 Lu(16)/µJ+l1 < e7U Finally. Sn1 C U.3.10) 00 I4 < E F(Sn_1 < u. µ n=n(6)+1 \ 1u(16)/µ1 00 1 zn < e7u E Z n/2 < e(U xn/2 E n=n(6)+1 n=0 eYu = 1 . S. LARGE DEVIATIONS Lu(16)/µJ 311 I2 < e"u n=n(6)+1 e'n(Y)P(Sn > u) < Lu(16)/µJ ^. > u) Lu(1+6) /µJ +l 00 )^n 'YSn+kn (7) . we get lim sup log u/00 O (U) < y + b(1 + b) U Letbl0. Sn > U] [ e(u(1+6)/µJ+l < eYu (u(1+6)/µJ+1 7u r 0 0 e L^ en('Y ) fPn (I Sn 1 . C 26u `p / +1 I e6u(1+6)/µ (3. u . eryu en logz/2p n nt n.
u(1 + b)/i(7)) Proof Since V. ryue«iu . (7 + a) < 2arc'(7).8) by P(S. 13 = P(T (u) E (u(1 b)l^ (7).4.2.9) can then be sharpened to x LQuJ /2 I2 < e7u 1 . we get Lou] E exp {( 7 + a)u + Kn(a +7)} n=1 Il Lou] exp {(y + a)u} { 111 + exp {4narc'(7)} n=1 exp {('y + a)u} c1 exp {4/3uarc'(7)} = clewhere a1 = aw. we have rcn (a + 7) < 2n^c(7 + a) < 4narc' (7).3ui where . it holds for each b > 0 that 0(u) 1' g F(T(u) E (u(1 . the typical time is u/rc'(7) just as for the compound Poisson model.11 ) can be sharpened to x 4 [u(1+6)/µJ /2 1 . 4 there is an aj > 0 and a cj < oo such that Ij < c3e. e'.. Corollary 3.z 1/z For I1.u(1+b)/rc'(7)).312 CHAPTER XI.Q is so small that w = 1 . it suffices to show that for j = 1. 2. we need to redefine n(b) as L. u . I2. For I.b)/i(7). IV. say n n1.7' a"ju. MISCELLANEOUS TOPICS The following corollary shows that given that ruin occurs.(u) = I1+I2+I3+I4'^ ery( u)..xl/2 to give the desired conclusion. the last steps of (3. > u) < e"' E eIsn = ectueKn (a+'Y)Kn(7) where 0 < a < e and a is so small that r. cf.4/3rc'(y) > 0. For 12. we replace the bound P(Sn > u ) < 1 used in (3.4 Under the assumptions of Theorem 3. this is straightforward since the last inequality in (3. For 14. Letting c11 = maxn<n.('+'Y). Then for n large.
and we conclude that Theorem 3 .g. (iv) becomes existence of a limit tc(9) of tct(9) _ log Ee8S° It and a y > 0 with a(y) = 0. r. Assuming that the further regularity conditions can be verified. An event occuring at time s is rewarded by a r.5 Assume the Xn form a stationary Gaussian sequence with mean p < 0.. The reader not satisfied by this gap in the argument can easily construct a discrete time version of the models! The following formula (3. V(s) with m.1. The problem is whether this is also the correct logarithmic asymptotics for the (larger) ruin probability O(u) of the whole process. 11 Inspection of the proof of Theorem 3.LARGE DEVIATIONS 313 Example 3 . 09(9).12) k=0.14) is needed in both examples . the key condition similar to (iii). Xk+l) k=1 00 naoo n provided the sum converges absolutely. Hence z z\ 2 z nrn(9) _ n Cn0p+BZn/ * . Let {Nt}t>0 be a possibly inhomogeneous Poisson process with arrival rate . i.e.3(s) at time s. It is then wellknown and easy to prove that Sn has a normal distribution with mean np and a variance wn satisfying i lim wn = wz = Var(X1 ) + 2 E Cov(Xl. but nevertheless. for the ruin probability z/'h(u) of any discrete skeleton {Skh}k=0.'(y) > 0. To verify these in concrete examples may well present considerable difficulties.. Theorem 3.2 then immediately yields the estimate log F( sup Skh > u) a7u (3..1. Obviously many of the most interesting examples have a continuous time scale.(O) = 9µ+02 for all 9 E R. whether P ( sup St > u ltg a ^" 0<t<oo // (3.2 shows that the discrete time structure is used in an essential way.....13) One would expect this to hold in considerable generality..v. If {St}t> 0 is the claims surplus process.f. 2 is in force with y = 2p/wz. we shall give two continuous time examples and tacitly assume that this can be done.3. t] is Rt = E V (Un) n: o„ <t . Thus the total reward in the interval [0. criteria are given in Duffield & O'Connell [124]. and in fact.
e. the CramerLundberg model implicitly assumes that the Poisson intensity /3 and the claim size distribution B (or at least its mean µB) are known. if the nth claim arrives at time a.0 and assume there are y. 0 and since EeOUn(8) + Ee°U^ as s * oo.1) ds rt (3. At = .15) .. one would take p(t) = (1 + rt)At/ t. it contributes to St by the amount Un(t .14) (to see this . An apparent solution to this problem is to calculate the premium rate p = p(t) at time t based upon claims statistics . Thus. More precisely. nondecreasing and with finite limits Un as s T oo ( thus. then the payments from the company in [on. .2 are trivial to verify. is At . = U„ ( t . Since the remaining conditions of Theorem 3. <t which is a shotnoise process. the Cramer. leading to St = At(1+77) Joo t S8 ds. Then logEeOR° = J0 /3(s)(^8(9) .noise model is the same as the one for the Cramer Lundberg model where a claim is immediately settled by the amount Un. this is not realistic . we have rct (9)/t 4 ic (9). we conclude that Cu) log e7 u (cf. (3. assuming a continuous premium inflow at unit rate. Example 3. Of course .1) ds . the above discussion of discrete skeletons). the best estimator of /3µB based upon Ft. but that a claim is not settled immediately. MISCELLANEOUS TOPICS are the event times. It is interesting and intuitively reasonable to note that the adjustment coefficient ry for the shot .. If the nth claim arrives at time Qn = s. We further assume that the processes {U1(s)}8>0 are i. Thus by (3.. (9) < oo for 9 < 'y + C. 7 Given the safety loading 77.'`1 U. Of course. Kt (0) t (Ee9U"it8i J0 . a differential equation in t).It. O'n +S] is a r . we have S. derive .v.. e > 0 such that ic('y) = 0 and that r.d.1) ds .9t = /3 J t (Ee8U° i8l . Un(s).14). Thus.Lundberg model has the larger ruin probability.314 where the an CHAPTER XI. We let ic (9) = 3(EeWU° .Q„) .s).1) .g. 0 Example 3 . n: o.9t. where Ft = a(A8 : 0 < s < t). Un represents the total payment for the nth claim).t. 
i.6 We assume that claims arrive according to a homogeneous Poisson process with intensity 0 . Most obviously.
2 hold.(1 +i) f > i= 1 s ds = E Ui 1 . the Vi = . Indeed. Ui Nt / t 01i 315 St = Ui .17) K(a) f o 1 O (a[I + (1 + 77) log u]) du )3. typically the adaptive premium rule leads to a ruin probability which is asymptotically smaller than for the CramerLundberg model . To see this . again the above discussion of discrete skeletons) where y solves ic('y) = 0 It is interesting to compare the adjustment coefficient y with the one y* of the CramerLundberg model.d. LARGE DEVIATIONS With the Qi the arrival times.i.1) .14) that rt _ 13 Jo _ (a [1_( i+77)log]) ds_flt = t (a) (3. we conclude that t.19) with equality if and only if U is degenerate. standard exponential . the solution of /3(Eelu ./3. (3. equivalently. we have Nt t N. i.b(u) IN a'Yu (cf. Thus. rewrite first rc as te(a) _ /3E 1 1 +(1+77)aUJ eau 1 .18) Thus (iii) of Theorem 3.20) (3. one has y > y' (3. It then follows from (3.16) i=1 o i=1 Let ict (a) = log Eeast . uniform (0.log Oi are i.3.d.i. and since the remaining conditions are trivial to verify. (3.e.(1 + 17)0µB = 0. which yields eau f 1 t(1+n )audtl = E r Ee°Y = E [O(1+n)aueaul = E [eau J L Jo J L1+(l+r))aUJ .(1 + r7) log t (3.1) or .21) This follows from the probabilistic interpretation Si EN '1 Yi where Yi = Ui( 1+(1 +r7)log ©i) = Ui(1(1 +17)Vi) where the Oi are i .
k(0) = 0. the proof of (3. rc*' (0 ) < 0. k'(0) < 0.d. using that Ek(U) = 0 because of (3. so there exists a unique zero xo = xo(r7) > 0 such that k(x) > 0. . assuming that the U.7. 0 < x < x0. MartinL6f [256]. with common distribution B and independent of Nt. a* (s) are convex with tc'(0) < 0 . and since tc(s).1 E [1+(1+77)y*U] 0 k (+ *y B(+ 1 + (1(+71)y*y B(dy) L xa 1 + f + (1 + rl) Y* xo jJxo k(y) B(dy ) + f' k(y) B(dy) } = 0.xo. though we do not always spell this out. This implies n(y*) < 0. see also Nyrhinen [275] for Theorem 3. Therefore e7'U _ k(U) E [1+(1+77)y*U] . In addition to Glynn & Whitt [163]. Dembo & Zeitouni [105] and Shwartz & Weiss [339]. In particular. The main example is Nt being Poisson with rate fit. the study is motivated from the formulas in IV. MISCELLANEOUS TOPICS Next. much of the analysis carries over to more general cases. Further. For notational simplicity. at time t. this in turn yields y > y*. we are interested in estimating P(A > x) for large x.2 expressing the finite horizon ruin probabilities in terms of the distribution of A. = P(N = n) = e(3an However. [257] and Nyrhinen [275]. This is a topic of practical importance in the insurance business for assessing the probability of a great loss in a period of length t. are i.2.(1 + ri)y*x is convex with k(oo) = 00. and k(x) < 0. say one year.316 CHAPTER XI. 4 The distribution of the aggregate claims We study the distribution of the aggregate claims A = ^N' U. [245]. see Nyrhinen [275] and Asmussen [25]. 11 Notes and references Some standard textbooks on large deviations are Bucklew [81]. the function k(x) = e7*x . Further.20) is due to Tatyana Turova. x > x0. we then take t = 1 so that p.19). Lehtonen & Nyrhinen [244].1 . y = y* can only occur if U .. Further applications of large deviations idea in risk theory occur in Djehiche [122].i. For Example 3.
(4.1)

4a The saddlepoint approximation

We impose the Poisson assumption (4.1). Then Ee^{αA} = e^{κ(α)} where κ(α) = β(B^[α] - 1). The exponential family generated by A is given by

P_θ(A ∈ dx) = E[e^{θA - κ(θ)}; A ∈ dx].

In particular,

κ_θ(α) = log E_θ e^{αA} = κ(α + θ) - κ(θ) = β_θ(B^_θ[α] - 1),

where β_θ = βB^[θ] and B_θ is the distribution given by

B_θ(dx) = (e^{θx}/B^[θ]) B(dx).

This shows that the P_θ-distribution of A has a similar compound Poisson form as the P-distribution, only with β replaced by β_θ and B by B_θ. For a given x, we define the saddlepoint θ = θ(x) by E_θA = x, i.e. κ'(θ) = βB^'[θ] = x.

Proposition 4.1 Assume that B^''[s] → ∞ and

B^'''[s]/(B^''[s])^{3/2} → 0, s ↑ s*,

where s* = sup{s : B^[s] < ∞}. Then as x → ∞,

P(A > x) ~ e^{-θx + κ(θ)} / (θ√(2πβB^''[θ])).  (4.2)

Proof Since E_θA = x, Var_θ(A) = κ''(θ) = βB^''[θ], the assumption implies that the limiting P_θ-distribution of (A - x)/√(βB^''[θ]) is standard normal. Hence

P(A > x) = E_θ[e^{-θA + κ(θ)}; A > x] = e^{-θx + κ(θ)} E_θ[e^{-θ(A - x)}; A > x]
≈ e^{-θx + κ(θ)} ∫_0^∞ e^{-θ√(βB^''[θ]) z} (1/√(2π)) e^{-z²/2} dz
≈ e^{-θx + κ(θ)} / (θ√(2πβB^''[θ])). □
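The approximation is available in closed form when B is exponential. The following sketch (hypothetical parameter values, not from the text) uses Exp(1) claims, for which B^[θ] = 1/(1-θ) and the saddlepoint solves β/(1-θ)² = x:

```python
import math

def esscher_tail(beta, x):
    """Saddlepoint (Esscher) approximation of P(A > x) for a compound
    Poisson sum with rate beta and Exp(1) claims.  Here
    kappa(a) = beta*(1/(1-a) - 1), the saddlepoint theta solves
    kappa'(theta) = beta/(1-theta)^2 = x, and Bhat''[s] = 2/(1-s)^3.
    Requires x > beta = EA."""
    theta = 1.0 - math.sqrt(beta / x)
    kappa = beta * (1.0 / (1.0 - theta) - 1.0)
    bpp = 2.0 / (1.0 - theta) ** 3
    return math.exp(-theta * x + kappa) / (theta * math.sqrt(2.0 * math.pi * beta * bpp))
```

For instance, with β = 5 and x = 15 (three times the mean) the approximation gives a tail probability just below one percent.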
3) and related results u for the case of main interest . it is quite questionable to use (4. Y satisfies 9(u) ti eu2/2(1 + ibu3) (4. The present proof is somewhat heuristical in the CLT steps. For example. under the Poisson assumption (4. either of the following is sufficient: A. 4b The NP approximation In many cases . more generally. In particular. 2). Furthermore 00 b(x)Sdx < oo for some ( E (1. b(x) = q(x)eh(z). then P(A > x) . MISCELLANEOUS TOPICS It should be noted that the heavytailed asymptotics is much more straightforward. or.2) is often referred to as the Esscher approximation.2i and that (A .v.318 CHAPTER XI.1 yields: Proposition 4.4) . some regularity of the density b(x) of B is required.e. Remark 4 .2 If B is subexponential and EzN < oo for some z > 1. b is gammalike. see Embrechts et al. The (first order) Edgeworth expansion states that if the characteristic function g(u) = Ee"`}' of a r. and (4. Var(A) _ ^3p.Q{AB (4. [138].EN B(x). In fact. For a rigorous proof. bounded with b(x) . just the same dominated convergence argument as in the proof of Theorem 2.(3µB)/(0µB^))1/2 has a limiting standard normal distribution as Q ^ oo. Thus . i.3) The result to be surveyed below improve upon this and related approximations by taking into account second order terms from the Edgeworth expansion.ycix °ie6x B.x') where x' = sup {x : b(x) > 0}. A covers the exponential distribution and phasetype distributions. Notes and references Proposition 4. B covers distributions with finite support or with a density not too far from ax° with a > 1. large x. For details. 1 . where q(x) is bounded away from 0 and oo and h (x) is convex on an interval of the form [xo. b is logconcave. the distribution of A is approximately normal .1). Jensen [215] and references therein. it holds that EA = . leading to P(A > x) :. For example.(D X .1 goes all the way back to Esscher [141]. 
Remark 4.3 A word of warning should be said right away: the CLT (and the Edgeworth expansion) can only be expected to provide a good fit in the center of the distribution.
where δ is a small parameter. If this holds, then

P(Y ≤ y) ≈ Φ(y) - δ(1 - y²)φ(y).  (4.5)

Note as a further warning that the r.h.s. of (4.5) may be negative and is not necessarily an increasing function of y for |y| large. (4.5) is obtained by noting that by Fourier inversion, the density of Y is

g(y) = (1/2π) ∫ e^{-iuy} g^(u) du ≈ (1/2π) ∫ e^{-iuy} e^{-u²/2}(1 + iδu³) du = φ(y) - δ(y³ - 3y)φ(y),

and from this (4.5) follows by integration.

Heuristically, in concrete examples the CLT for Y = Y_β is usually derived via expanding the ch.f. as

g^(u) = Ee^{iuY} = exp{iuκ_1 - (u²/2)κ_2 - i(u³/6)κ_3 + (u⁴/24)κ_4 + ...},

where κ_1, κ_2, ... are the cumulants; in particular, κ_1 = EY, κ_2 = Var(Y), κ_3 = E(Y - EY)³. Thus if EY = 0, Var(Y) = 1 as above, one expects the u³ term to dominate the terms of order u⁴, u⁵, ..., so that

g^(u) ≈ exp{-u²/2 - i(u³/6)κ_3} ≈ e^{-u²/2}(1 - i(u³/6)κ_3),

so that we should take δ = -κ_3/6 in (4.5).

Rather than with the tail probabilities P(A > x), the NP (normal power) approximation deals with the quantile a_{1-ε}, defined as the solution of P(A ≤ a_{1-ε}) = 1 - ε, which is often denoted VaR (the Value at Risk). Let Y = (A - EA)/√Var(A) and let y_{1-ε}, resp. z_{1-ε}, be the 1 - ε-quantile in the distribution of Y, resp. the standard normal distribution. If the distribution of Y is close to N(0,1), y_{1-ε} should be close to z_{1-ε} (cf., however, Remark 4.3!), and so as a first approximation we obtain

a_{1-ε} = EA + y_{1-ε}√Var(A) ≈ EA + z_{1-ε}√Var(A).  (4.6)
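As a numerical sketch (hypothetical parameter values, not from the text), the first-order normal quantile and the third-moment NP correction derived below can be computed as follows in the compound Poisson model, where the kth cumulant of A is βμ_B^{(k)}:

```python
import math

def np_quantile(beta, mu1, mu2, mu3, z):
    """Normal-power approximation of the 1-eps quantile of the aggregate
    claims A in the compound Poisson model: EA = beta*mu1, Var(A) = beta*mu2,
    and the skewness correction (z^2 - 1)*mu3/(6*mu2) comes from the
    Edgeworth term; z is the standard normal 1-eps quantile."""
    return beta * mu1 + z * math.sqrt(beta * mu2) + (z * z - 1.0) * mu3 / (6.0 * mu2)

# Exp(1) claims (mu_k = k!), beta = 100, 97.5% level (z = 1.96):
normal = 100 * 1.0 + 1.96 * math.sqrt(100 * 2.0)    # plain normal quantile
np_corr = np_quantile(100.0, 1.0, 2.0, 6.0, 1.96)   # with skewness correction
```

The positive skewness of the claim size pushes the NP quantile above the plain normal quantile, as one would expect for a right-skewed aggregate.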
A correction term may be computed from (4.5) by noting that the Φ(y) terms dominate the δ(1 - y²)φ(y) term. This leads to

1 - ε ≈ Φ(y_{1-ε}) - δ(1 - z_{1-ε}²)φ(z_{1-ε})
≈ Φ(z_{1-ε}) + (y_{1-ε} - z_{1-ε})φ(z_{1-ε}) - δ(1 - z_{1-ε}²)φ(z_{1-ε}),

which combined with δ = -EY³/6 leads to

y_{1-ε} ≈ z_{1-ε} + (EY³/6)(z_{1-ε}² - 1).

Using Y = (A - EA)/√Var(A), this yields the NP approximation

a_{1-ε} ≈ EA + z_{1-ε}(Var(A))^{1/2} + ((z_{1-ε}² - 1)/6) · E(A - EA)³/Var(A).  (4.7)

Under the Poisson assumption (4.1), the kth cumulant of A is βμ_B^{(k)}, and so the skewness is κ_3 = βμ_B^{(3)}/(βμ_B^{(2)})^{3/2}. In particular, κ_3 is small for large β but dominates κ_4, κ_5, ..., as required. We can rewrite (4.7) as

a_{1-ε} ≈ βμ_B^{(1)} + z_{1-ε}(βμ_B^{(2)})^{1/2} + (z_{1-ε}² - 1)μ_B^{(3)}/(6μ_B^{(2)}).  (4.8)

Notes and references We have followed largely Sundt [354]. Another main reference is Daykin et al. [101]. Note, however, that [101] distinguishes between the NP and Edgeworth approximations.

4c Panjer's recursion

Consider A = Σ_{i=1}^N U_i, let p_n = P(N = n), and assume that there exist constants a, b such that

p_n = (a + b/n) p_{n-1}, n = 1, 2, ....

For example, this holds with a = 0, b = β for the Poisson distribution with rate β, since

p_n = e^{-β}β^n/n! = p_{n-1} · β/n.
Proposition 4.4 Assume that B is concentrated on {0, 1, 2, ...} and write g_j = P(U_1 = j), f_j = P(A = j). Then f_0 = Σ_{n=0}^∞ g_0^n p_n and

f_j = (1/(1 - ag_0)) Σ_{k=1}^j (a + bk/j) g_k f_{j-k}, j = 1, 2, ....  (4.9)

In particular, if g_0 = 0, then

f_0 = p_0,  (4.10)

f_j = Σ_{k=1}^j (a + bk/j) g_k f_{j-k}, j = 1, 2, ....  (4.11)

Remark 4.5 The crux of Proposition 4.4 is that the algorithm is much faster than the naive method, which would consist in noting that (in the case g_0 = 0)

f_j = Σ_{n=1}^∞ p_n g_j^{*n},  (4.12)

where g^{*n} is the nth convolution power of g, and calculating the g_j^{*n} recursively by g_j^{*1} = g_j,

g_j^{*n} = Σ_{k=n-1}^{j-1} g_k^{*(n-1)} g_{j-k}.  (4.13)

Namely, the complexity (number of arithmetic operations required) is O(j³) for (4.13) but only O(j²) for Proposition 4.4.

Proof of Proposition 4.4. The expression for f_0 is obvious. By symmetry,

E[a + bU_i/j | Σ_{i=1}^n U_i = j]  (4.14)

is independent of i = 1, ..., n. Since the sum over i of a + bU_i/j is na + b, the value of (4.14) is therefore a + b/n. Hence by (4.12) we get for j > 0 that

f_j = Σ_{n=1}^∞ p_n g_j^{*n} = Σ_{n=1}^∞ (a + b/n) p_{n-1} g_j^{*n}
= Σ_{n=1}^∞ p_{n-1} E[a + bU_1/j; Σ_{i=1}^n U_i = j]
= Σ_{n=1}^∞ p_{n-1} Σ_{k=0}^j (a + bk/j) g_k g_{j-k}^{*(n-1)}
= Σ_{k=0}^j (a + bk/j) g_k Σ_{n=0}^∞ g_{j-k}^{*n} p_n
= Σ_{k=0}^j (a + bk/j) g_k f_{j-k} = ag_0 f_j + Σ_{k=1}^j (a + bk/j) g_k f_{j-k},

and (4.9) follows. (4.11) is a trivial special case. □
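As a sketch (not from the text) of how the recursion might be coded in the Poisson case a = 0, b = β, where (4.9) reduces to f_j = (β/j) Σ_{k=1}^j k g_k f_{j-k}:

```python
import math

def panjer_poisson(beta, g, jmax):
    """Panjer recursion for A = U_1 + ... + U_N with N ~ Poisson(beta)
    and g[k] = P(U = k) on {0, 1, 2, ...}: returns [f_0, ..., f_jmax].
    Complexity O(jmax^2), against O(jmax^3) for naive convolution."""
    g0 = g[0] if g else 0.0
    f = [math.exp(-beta * (1.0 - g0))]  # f_0 = sum_n g_0^n p_n = e^{-beta(1-g_0)}
    for j in range(1, jmax + 1):
        kmax = min(j, len(g) - 1)
        f.append(beta / j * sum(k * g[k] * f[j - k] for k in range(1, kmax + 1)))
    return f
```

A convenient sanity check: for U identically 1 the recursion must reproduce the Poisson(β) distribution itself, and for U ∈ {0, 1} the nonzero claims are a thinned Poisson stream.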
If the distribution B of the U_i is nonlattice, it is natural to use a discrete approximation. To this end, let U_{i,+}^{(h)}, U_{i,-}^{(h)} be U_i rounded upwards, resp. downwards, to the nearest multiple of h and let A_±^{(h)} = Σ_{i=1}^N U_{i,±}^{(h)}. An obvious modification of Proposition 4.4 applies to evaluate the distribution F_±^{(h)} of A_±^{(h)}, letting f_{j,±}^{(h)} = P(A_±^{(h)} = jh) and

g_{k,-}^{(h)} = P(U_{i,-}^{(h)} = kh) = B((k + 1)h) - B(kh), k = 0, 1, 2, ...,

g_{k,+}^{(h)} = P(U_{i,+}^{(h)} = kh) = B(kh) - B((k - 1)h) = g_{k-1,-}^{(h)}, k = 1, 2, ....

Then the error on the tail probabilities (which can be taken arbitrarily small by choosing h small enough) can be evaluated by

Σ_{j=⌊x/h⌋+1}^∞ f_{j,-}^{(h)} ≤ P(A > x) ≤ Σ_{j=⌊x/h⌋}^∞ f_{j,+}^{(h)}.
Further examples (and in fact the only ones, cf. Sundt & Jewell [355]) where (4.9) holds are the binomial distribution and the negative binomial (in particular, geometric) distribution. The geometric case is of particular importance because of the following result, which immediately follows by combining Proposition 4.4 and the Pollaczeck-Khinchine representation:

Corollary 4.6 Consider a compound Poisson risk process with Poisson rate β and claim size distribution B. Then for any h > 0, the ruin probability ψ(u) satisfies

Σ_{j=⌊u/h⌋+1}^∞ f_{j,-}^{(h)} ≤ ψ(u) ≤ Σ_{j=⌊u/h⌋}^∞ f_{j,+}^{(h)},  (4.15)

where f_{j,+}^{(h)}, f_{j,-}^{(h)} are given by the recursions

f_{j,+}^{(h)} = ρ Σ_{k=1}^j g_{k,+}^{(h)} f_{j-k,+}^{(h)}, j = 1, 2, ...,

f_{j,-}^{(h)} = (ρ/(1 - ρg_{0,-}^{(h)})) Σ_{k=1}^j g_{k,-}^{(h)} f_{j-k,-}^{(h)}, j = 1, 2, ...,

starting from f_{0,+}^{(h)} = 1 - ρ, f_{0,-}^{(h)} = (1 - ρ)/(1 - ρg_{0,-}^{(h)}), and using

g_{k,-}^{(h)} = B_0((k + 1)h) - B_0(kh) = (1/μ_B) ∫_{kh}^{(k+1)h} B̄(x) dx, k = 0, 1, 2, ...,

g_{k,+}^{(h)} = B_0(kh) - B_0((k - 1)h) = g_{k-1,-}^{(h)}, k = 1, 2, ....

Here ρ = βμ_B and B_0 is the stationary excess distribution; the geometric case p_n = (1 - ρ)ρ^n of (4.9) corresponds to a = ρ, b = 0.
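A sketch of Corollary 4.6 in code (an assumed concrete example, not from the text: Exp(1) claims, for which B_0 = B and the exact value ψ(u) = ρe^{-(1-ρ)u} is available to check against):

```python
import math

def ruin_bounds(rho, b0_cdf, h, u):
    """Bounds (4.15) for psi(u): Panjer's recursion in the geometric case
    p_n = (1-rho)*rho^n (a = rho, b = 0), applied to the discretization of
    the stationary excess distribution B_0 with cdf b0_cdf."""
    j0 = int(u / h)
    gm = [b0_cdf((k + 1) * h) - b0_cdf(k * h) for k in range(j0 + 1)]  # g_{k,-}
    fm = [(1 - rho) / (1 - rho * gm[0])]  # f_{0,-}
    fp = [1 - rho]                        # f_{0,+}
    for j in range(1, j0 + 1):
        fm.append(rho / (1 - rho * gm[0]) * sum(gm[k] * fm[j - k] for k in range(1, j + 1)))
        fp.append(rho * sum(gm[k - 1] * fp[j - k] for k in range(1, j + 1)))  # g_{k,+} = g_{k-1,-}
    lower = 1.0 - sum(fm[: j0 + 1])  # sum over j > u/h of f_{j,-}
    upper = 1.0 - sum(fp[: j0])      # sum over j >= u/h of f_{j,+}
    return lower, upper

# Exp(1) claims: B_0(x) = 1 - e^{-x}; with rho = 0.5 the exact ruin
# probability is psi(u) = 0.5*e^{-0.5u}.
lo, hi = ruin_bounds(0.5, lambda x: 1.0 - math.exp(-x), 0.01, 2.0)
```

The bounds tighten as h decreases, at the price of the O((u/h)²) cost of the recursion.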
Notes and references The literature on recursive algorithms related to Panjer's recursion is extensive, see e.g. Dickson [115] and references therein.
5 Principles for premium calculation
The standard setting for discussing premium calculation in the actuarial literature does not involve stochastic processes, but only a single risk X ≥ 0. By this we mean that X is a r.v. representing the random payment to be made (possibly 0). A premium rule is then a [0, ∞)-valued function H of the distribution of X, often written H(X), such that H(X) is the premium to be paid, i.e. the amount for which the company is willing to insure the given risk. The standard premium rules discussed in the literature (not necessarily the same which are used in practice!) are the following:

The net premium principle H(X) = EX (also called the equivalence principle). As follows from the fluctuation theory of random walks with mean zero increments, this principle will lead to ruin if many independent risks are insured. This motivates the next principle,

The expected value principle H(X) = (1 + η)EX where η is a specified safety loading. For η = 0, we are back to the net premium principle. A criticism of the expected value principle is that it does not take into account the variability of X, which leads to

The variance principle H(X) = EX + ηVar(X). A modification (motivated from EX and Var(X) not having the same dimension) is
The standard deviation principle H(X) = EX + η√Var(X).
The principle of zero utility. Here v(x) is a given utility function, assumed to be concave and increasing with (w.l.o.g.) v(0) = 0; v(x) represents the utility of a capital of size x. The zero utility principle then means

v(0) = Ev(H(X) - X);  (5.1)

a generalization v(u) = Ev(u + H(X) - X) takes into account the initial reserve u of the company. By Jensen's inequality, v(H(X) - EX) ≥ Ev(H(X) - X) = 0, so that H(X) ≥ EX. For v(x) = x, we have equality and are back to the net premium principle. There is also an approximate argument leading to the variance principle as follows. Assuming that the Taylor approximation
v(H(X) - X) ≈ v(0) + v'(0)(H(X) - X) + v''(0)(H(X) - X)²/2

is reasonable, taking expectations leads to the quadratic

v''H(X)² + H(X)(2v' - 2v''EX) + v''EX² - 2v'EX = 0

(with v', v'' evaluated at 0) with solution

H(X) = EX - v'/v'' ± √((v'/v'')² - Var(X)).

Write

√((v'/v'')² - Var(X)) = (-v'/v'')√(1 - Var(X)(v''/v')²) ≈ (-v'/v'')(1 - Var(X)(v''/v')²/2).

If v''/v' is small, we can ignore the higher order terms. Taking the root with the - sign then yields

H(X) ≈ EX - (v''(0)/(2v'(0))) Var(X);

since v''(0) < 0 by concavity, this is approximately the variance principle. The most important special case of the principle of zero utility is

The exponential principle which corresponds to v(x) = (1 - e^{-ax})/a for some a > 0. Here (5.1) is equivalent to 0 = 1 - e^{-aH(X)}Ee^{aX}, and we get

H(X) = (1/a) log Ee^{aX}.
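For a concrete comparison of the rules above, the following sketch evaluates them for an exponentially distributed risk (hypothetical parameter values; all quantities are in closed form for this choice):

```python
import math

def premiums(mu=1.0, eta=0.1, a=0.5, eps=0.05):
    """Premium rules for X ~ Exp(1/mu) (mean mu): eta is the safety loading,
    a the risk aversion (needs a < 1/mu for the m.g.f. to exist), eps the
    percentile level."""
    EX, VarX = mu, mu**2
    return {
        "net": EX,
        "expected value": (1 + eta) * EX,
        "variance": EX + eta * VarX,
        "standard deviation": EX + eta * math.sqrt(VarX),
        # exponential principle: H(X) = (1/a) log E e^{aX} = -(1/a) log(1 - a*mu)
        "exponential": -math.log(1.0 - a * mu) / a,
        # percentile principle: P(X <= H) = 1 - eps  =>  H = -mu log(eps)
        "percentile": -mu * math.log(eps),
    }
```

Note how the exponential premium exceeds the net premium for every a > 0, in line with H_a(X) being increasing in a from EX.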
Since m.g.f.'s are log-concave, it follows that H_a(X) = H(X) is increasing as a function of a. Further, lim_{a↓0} H_a(X) = EX (the net premium principle) and, provided b = ess sup X < ∞, lim_{a→∞} H_a(X) = b (the premium principle H(X) = b is called the maximal loss principle but is clearly not very realistic). In view of this, a is called the risk aversion.

The percentile principle Here one chooses a (small) number α, say 0.05 or 0.01, and determines H(X) by P(X ≤ H(X)) = 1 - α (assuming a continuous distribution for simplicity).

Some standard criteria for evaluating the merits of premium rules are

1. η ≥ 0, i.e. H(X) ≥ EX.
2. H(X) ≤ b when b (the ess sup above) is finite.
3. H(X + c) = H(X) + c for any constant c.
4. H(X + Y) = H(X) + H(Y) when X, Y are independent
5. H(X) = H(H(X|Y)). For example, if X = Σ_{i=1}^N U_i is a random sum with the U_i independent of N, this yields

H(Σ_{i=1}^N U_i) = H(H(U)N)
(where, of course, H(U) is a constant). Note that H(cX) = cH(X) is not on the list! Considering the examples above, the net premium principle and the exponential principle can be seen to be the only ones satisfying all five properties. The expected value principle fails to satisfy, e.g., 3), whereas (at least) 4) is violated for the variance principle, the standard deviation principle, and the zero utility principle (unless it is the exponential or net premium principle). For more detail, see e.g. Gerber [157] or Sundt [354].

Proposition 5.1 Consider the compound Poisson case and assume that the premium p is calculated using the exponential principle with time horizon h > 0. That is,
Ev(ph - Σ_{i=1}^{N_h} U_i) = 0, where v(x) = (1/a)(1 - e^{-ax}).
Then γ = a, i.e. the adjustment coefficient γ coincides with the risk aversion a.
Proof The assumption means
0 = (1/a)(1 - e^{-aph}Ee^{aΣU_i}) = (1/a)(1 - e^{-aph + βh(B^[a]-1)}),

i.e. β(B^[a] - 1) - ap = 0, which is the same as saying that a solves the Lundberg equation. □

Notes and references The theory exposed is standard and can be found in many texts on insurance mathematics, e.g. Gerber [157], Heilman [191] and Sundt [354]. For an extensive treatment, see Goovaerts et al. [165].
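Proposition 5.1 is easy to check numerically: with p chosen by the exponential principle, the positive root of the Lundberg equation β(B^[γ] - 1) - γp = 0 comes out at the risk aversion a. A sketch under assumed parameters (β = 1, Exp(1) claims, a = 0.5):

```python
import math

def lundberg_root(beta, bhat, p, lo=1e-9, hi=1.0 - 1e-9):
    """Bisection for the positive root gamma of beta*(bhat(g) - 1) - g*p = 0;
    with Exp(1) claims bhat(s) = 1/(1-s) and the root lies in (0, 1)."""
    f = lambda g: beta * (bhat(g) - 1.0) - g * p
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:   # f < 0 to the left of the positive root, > 0 to its right
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

beta, a = 1.0, 0.5
bhat = lambda s: 1.0 / (1.0 - s)
p = beta * (bhat(a) - 1.0) / a         # exponential-principle premium rate
gamma = lundberg_root(beta, bhat, p)   # should coincide with a
```

For Exp(1) claims the root can also be read off in closed form as γ = 1 - β/p, which agrees with the bisection result.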
6 Reinsurance
Reinsurance means that the company (the cedent) insures a part of the risk at another insurance company (the reinsurer). Again, we start by formulating the basic concepts within the framework of a single risk X ≥ 0. A reinsurance arrangement is then defined in terms of a function h(x) with the property h(x) ≤ x. Here h(x) is the amount of the claim x to be paid by the reinsurer and x - h(x) the amount to be paid by the cedent. The function x - h(x) is referred to as the retention function. The most common examples are the following two:

Proportional reinsurance h(x) = θx for some θ ∈ (0, 1). Also called quota share reinsurance.

Stop-loss reinsurance h(x) = (x - b)+ for some b ∈ (0, ∞), referred to as the retention limit. Note that the retention function is x ∧ b.

Concerning terminology, note that in the actuarial literature the stop-loss transform of F(x) = P(X ≤ x) (or, equivalently, of X), is defined as the function
b ↦ E(X - b)+ = ∫_b^∞ (x - b) F(dx) = ∫_b^∞ F̄(x) dx.
An arrangement closely related to stop-loss reinsurance is excess-of-loss reinsurance, see below.
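For an exponential risk the stop-loss transform is explicit, which makes a small numerical check easy. The following sketch (an assumed example, not from the text) compares the closed form with a direct numerical integration of the survival function:

```python
import math

def stop_loss_exp(mu, b):
    """Stop-loss transform E(X - b)+ = int_b^inf Fbar(x) dx for X ~ Exp(1/mu):
    the tail is Fbar(x) = e^{-x/mu}, so E(X - b)+ = mu * e^{-b/mu}."""
    return mu * math.exp(-b / mu)

def stop_loss_numeric(sf, b, upper, n=50000):
    """Same quantity via trapezoidal integration of the survival function sf
    over [b, upper]; works for any distribution with negligible tail beyond
    the cutoff 'upper'."""
    h = (upper - b) / n
    s = 0.5 * (sf(b) + sf(upper)) + sum(sf(b + i * h) for i in range(1, n))
    return s * h
```

The second function is the generic fallback when no closed form is available, e.g. for empirical claim distributions.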
Stop-loss reinsurance and excess-of-loss reinsurance have a number of nice optimality properties. The first we prove is in terms of maximal utility:

Proposition 6.1 Let X be a given risk, v a given concave nondecreasing utility function and h a given retention function. Let further b be determined by E(X - b)+ = Eh(X). Then for any x,

Ev(x - {X - h(X)}) ≤ Ev(x - X ∧ b).
Remark 6.2 Proposition 6.1 can be interpreted as follows. Assume that the cedent charges a premium P > EX for the risk X and is willing to pay P_1 < P for reinsurance. If the reinsurer applies the expected value principle with safety loading η, this implies that the cedent is looking for retention functions with Eh(X) = P_2 = P_1/(1 + η). The expected utility after settling the risk is thus

Ev(u + P - P_1 - {X - h(X)}),

where u is the initial reserve. Letting x = u + P - P_1, Proposition 6.1 shows that the stop-loss rule h(X) = (X - b)+ with b chosen such that E(X - b)+ = P_2 maximizes the expected utility.

For the proof of Proposition 6.1, we shall need the following lemma:

Lemma 6.3 (OHLIN'S LEMMA) Let X_1, X_2 be two risks with the same mean, such that

F_1(x) ≤ F_2(x), x < b,  F_1(x) ≥ F_2(x), x ≥ b

for some b, where F_i(x) = P(X_i ≤ x). Then Eg(X_1) ≤ Eg(X_2) for any convex function g.

Proof Let Y_i = X_i ∧ b, Z_i = X_i ∨ b.
Then
P(Y_1 ≤ x) = F_1(x) ≤ F_2(x) = P(Y_2 ≤ x), x < b,
P(Y_1 ≤ x) = 1 = P(Y_2 ≤ x), x ≥ b,

so that Y_1 is larger than Y_2 in the sense of stochastical ordering. Similarly,

P(Z_1 ≤ x) = 0 = P(Z_2 ≤ x), x < b,
P(Z_1 ≤ x) = F_1(x) ≥ F_2(x) = P(Z_2 ≤ x), x ≥ b,

so that Z_2 is larger than Z_1 in stochastical ordering. Since by convexity, v(x) = g(x) - g(b) - g'(b)(x - b) is nonincreasing on [0, b] and nondecreasing on [b, ∞), it follows that Ev(Y_1) ≤ Ev(Y_2), Ev(Z_1) ≤ Ev(Z_2). Using v(Y_i) + v(Z_i) = v(X_i), it follows that

0 ≤ Ev(X_2) - Ev(X_1) = Eg(X_2) - Eg(X_1),
using EX1 = EX2 in the last step. u
Proof of Proposition 6.1. It is easily seen that the assumptions of Ohlin's lemma hold when X_1 = X ∧ b, X_2 = X - h(X); in particular, the requirement EX_1
= EX_2 is then equivalent to E(X - b)+ = Eh(X). Now just apply Ohlin's lemma with the convex function g(y) = -v(x - y). □
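The content of Ohlin's lemma can be checked numerically. The following sketch (hypothetical numbers, not from the text) takes X ~ Exp(1) and compares the retained loss X ∧ b under stop-loss with the retained loss (1 - c)X under proportional reinsurance, calibrated so that both arrangements cede the same expected amount; for any convex g the stop-loss retention should give the smaller Eg:

```python
import math, random

random.seed(7)
b = 1.0
c = math.exp(-b)     # E(X - b)+ = e^{-b} = E(cX) for X ~ Exp(1): equal cession
xs = [random.expovariate(1.0) for _ in range(100000)]
g = lambda x: x * x  # any convex g works in Ohlin's lemma
stop_loss = sum(g(min(x, b)) for x in xs) / len(xs)          # retained X ^ b
proportional = sum(g((1.0 - c) * x) for x in xs) / len(xs)   # retained (1-c)X
```

With this seed and sample size the separation is far outside Monte Carlo noise, while the two retained means agree, as the calibration requires.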
We now turn to the case where the risk can be written as

X = Σ_{i=1}^N Uᵢ
with the Uᵢ independent; N may be random but should then be independent of the Uᵢ. Typically, N could be the number of claims in a given period, say a year, and the Uᵢ the corresponding claim sizes. A reinsurance arrangement of the form h(X) as above is called global; if instead h is applied to the individual claims so that the reinsurer pays the amount Σ_{i=1}^N h(Uᵢ), the arrangement is called local (more generally, one could consider Σ_{i=1}^N hᵢ(Uᵢ), but we shall not discuss this). The following discussion will focus on maximizing the adjustment coefficient. For a global rule with retention function h*(x) and a given premium P* charged for X − h*(X), the cedent's adjustment coefficient γ* is determined by
1 = E exp{γ*[X − h*(X) − P*]};  (6.2)

for a local rule corresponding to h(u) and premium P for X − Σ_{i=1}^N h(Uᵢ), we look instead for the γ solving

1 = E exp{γ[Σ_{i=1}^N (Uᵢ − h(Uᵢ)) − P]}.  (6.3)

This definition of the adjustment coefficients is motivated by considering ruin at a sequence of equally spaced time points, say consecutive years, such that N is the generic number of claims in a year and P, P* the total premiums charged in a year, and referring to the results of V.3a. The following result shows that if we compare only arrangements with P = P*, a global rule is preferable to a local one.

Proposition 6.4 To any local rule with retention function h(u) and any

P ≥ E[X − Σ_{i=1}^N h(Uᵢ)],  (6.4)

there is a global rule with retention function h*(x) such that

Eh*(X) = E Σ_{i=1}^N h(Uᵢ)  (6.5)

and γ* ≥ γ, where γ* is evaluated with P* = P in (6.2).
Proof Define h*(x) = E[Σ_{i=1}^N h(Uᵢ) | X = x]. Then Eh*(X) = E Σ_{i=1}^N h(Uᵢ), so that (6.5) holds trivially. Applying the inequality Eφ(Y) ≥ Eφ(E(Y | X)) (with φ convex) to φ(y) = e^{γy}, Y = Σ_{i=1}^N (Uᵢ − h(Uᵢ)) − P, we get

1 = E exp{γ[Σ_{i=1}^N (Uᵢ − h(Uᵢ)) − P]} ≥ E exp{γ[X − h*(X) − P]}.

But since γ > 0 and, appealing to (6.4), γ* > 0, this implies γ* ≥ γ. □

Remark 6.5 Because of the independence assumptions, expectations like those in (6.2), (6.3) reduce quite a lot. Assuming for simplicity that the Uᵢ are i.i.d., we get EX = EN · EU, E Σ_{i=1}^N h(Uᵢ) = EN · Eh(U), and

E exp{γ[Σ_{i=1}^N (Uᵢ − h(Uᵢ)) − P]} = E Ĉ[γ]^N · e^{−γP},

where Ĉ[γ] = E e^{γ(U−h(U))}, and so on. □

The arrangement used in practice is, however, as often local as global. Local reinsurance with h(u) = (u − b)+ is referred to as excess-of-loss reinsurance and plays a particular role:

Proposition 6.6 Assume the Uᵢ are i.i.d. Then for any local retention function u − h(u) and any P satisfying (6.4), the excess-of-loss rule h₁(u) = (u − b)+ with b determined by E(U − b)+ = Eh(U) (and the same P) satisfies γ₁ ≥ γ.

Proof As in the proof of Proposition 6.4, it suffices to show that Ĉ₁[γ] ≤ Ĉ[γ], where Ĉ[γ] = E e^{γ(U−h(U))} and Ĉ₁[γ] = E e^{γ(U∧b)}, i.e. that

E exp{γ(U ∧ b)} ≤ E exp{γ(U − h(U))}.

This follows by taking X₁ = U ∧ b, X₂ = U − h(U) (as in the proof of Proposition 6.1) and g(x) = e^{γx} in Ohlin's lemma. □
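With N Poisson and the Uᵢ i.i.d., the local equation (6.3) reduces to λ(Ĉ[γ] − 1) = γP, which can be solved by bisection for different retention rules; Proposition 6.6 then predicts that the matched excess-of-loss rule gives the larger adjustment coefficient. A numerical sketch (U exponential(1), all parameter values illustrative assumptions):

```python
import numpy as np

lam, theta, P = 1.0, 0.3, 0.9      # claims/year ~ Poisson(lam); E h(U) = theta; premium
b = -np.log(theta)                 # E(U - b)+ = theta for U ~ Exp(1)

def C_prop(g):                     # E e^{g(U - h(U))} for the proportional rule h(u) = theta*u
    return 1.0 / (1.0 - g * (1.0 - theta))

def C_xl(g):                       # E e^{g * min(U, b)} for the excess-of-loss rule h(u) = (u - b)+
    return (1.0 - np.exp(-(1.0 - g) * b)) / (1.0 - g) + np.exp(-(1.0 - g) * b)

def adj_coeff(C, hi):
    """Positive root of lam*(C[g] - 1) = g*P, i.e. the local equation (6.3) for Poisson N."""
    lo = 1e-9
    for _ in range(200):
        g = (lo + hi) / 2
        if lam * (C(g) - 1.0) < g * P:
            lo = g
        else:
            hi = g
    return g

g_prop = adj_coeff(C_prop, hi=1.0 / (1.0 - theta) - 1e-6)
g_xl = adj_coeff(C_xl, hi=5.0)
assert g_xl > g_prop               # Proposition 6.6
```

Both rules cede the same expected amount θ·EU per claim and use the same premium P, so the comparison isolates the effect of the retention shape.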
Notes and references The theory exposed is standard and can be found in many texts on insurance mathematics, e.g. Bowers et al. [76], Heilman [191] and Sundt [354]. The original reference for Ohlin's lemma is Ohlin [277]; the present proof is from van Dawen [99], see also Sundt [354]. See further Hesselager [194] and Dickson & Waters [120].
Appendix

A1 Renewal theory

1a Renewal processes and the renewal theorem

By a simple point process on the line we understand a random collection of time epochs without accumulation points and without multiple points. The mathematical representation is either the ordered set 0 ≤ T₀ < T₁ < ... of epochs or the set Y₁, Y₂, ... of interarrival times Yₙ = Tₙ − Tₙ₋₁ together with the time Y₀ = T₀ of the first arrival. The point process is called a renewal process if Y₀, Y₁, ... are independent and Y₁, Y₂, ... all have the same distribution, denoted by F in the following and referred to as the interarrival distribution; the distribution of Y₀ is called the delay distribution. If Y₀ = 0, the renewal process is called zero-delayed. The number max{k : Tₖ₋₁ ≤ t} of renewals in [0, t] is denoted by Nₜ.

The associated renewal measure U is defined by U = Σ_{n=0}^∞ F*ⁿ, where F*ⁿ is the nth convolution power of F; note in particular that U({0}) = 1. Thus U(A) is the expected number of renewals in A ⊆ ℝ in a zero-delayed renewal process, and U(t + a) − U(t) is the expected number of renewals in (t, t + a] (here U(t) = U([0, t])). The renewal theorem asserts that U(dt) is close to dt/μ when t is large, where μ denotes the mean of F. Technically, some condition is needed: that F is non-lattice, i.e. not concentrated on {h, 2h, ...} for any h > 0. Then Blackwell's renewal theorem holds, stating that

U(t + a) − U(t) → a/μ, t → ∞.  (A.1)

If F satisfies the stronger condition of being spread-out (F*ⁿ is nonsingular w.r.t. Lebesgue measure for some n ≥ 1), then Stone's decomposition holds: U = U₁ + U₂, where U₁ is a finite measure and U₂(dt) = u(t) dt, where
u(t) has limit 1/μ as t → ∞. Note in particular that F is spread-out if F has a density f. A weaker (and much easier to prove) statement than Blackwell's renewal theorem is the elementary renewal theorem, stating that U(t)/t → 1/μ. Both results are valid for delayed renewal processes, the statements being

EN(t + a) − EN(t) → a/μ,  ENₜ/t → 1/μ.

1b Renewal equations and the key renewal theorem

The renewal equation is the convolution equation

Z(u) = z(u) + ∫₀ᵘ Z(u − x) F(dx),  (A.2)

where Z(u) is an unknown function of u ∈ [0, ∞), z(u) a known function and F(dx) a known probability measure; equivalently, in convolution notation, Z = z + F * Z. Under weak regularity conditions (see [APQ] Ch. IV), (A.2) has the unique solution Z = U * z, i.e.

Z(u) = ∫₀ᵘ z(x) U(dx).  (A.3)

Further, the asymptotic behavior of Z(u) is given by the key renewal theorem:

Proposition A1.1 If F is non-lattice and z(u) is directly Riemann integrable (d.R.i.; see [APQ] Ch. IV), then

Z(u) → (1/μF) ∫₀^∞ z(x) dx, u → ∞.  (A.4)

If F is spread-out, then it suffices for (A.4) that z is Lebesgue integrable with lim_{x→∞} z(x) = 0.

In Chapter III, we shall need the following less standard parallel to the key renewal theorem:

Proposition A1.2 Assume that Z solves the renewal equation (A.2), that z(u) has a limit z(∞) (say) as u → ∞, and that F has a bounded density². Then

Z(u)/u → z(∞)/μF, u → ∞.  (A.5)

²This condition can be weakened considerably, but suffices for the present purposes.
i. Y1 . cycles. 0 PF µF 11 In risk theory.5a. The kth cycle is defined as {XTk+t}o<t<Yk . equivalently. This program has been carried out in III. the postTk process {XT. that the existence of y may fail for heavytailed F. 1c Regenerative processes Let {T. multiply (A. that F is a probability measure. Yk ). refer to the zerodelayed case. z(x) = e7xz(x). .. and its distribution does not depend on k. Eo etc. F(dx) = e7xF(dx).i. Note. Hence by dominated convergence. Tk and {Xt }o<t<Tk • For example. • .d.} be a renewal process. A regenerative process converges in distribution under very mild conditions: . Y2. However. results from the case fo F(dx) = 1 can then be used to study Z and thereby Z. ..APPENDIX 333 Proof The condition on F implies that U(dx) has a bounded density u(x) with limit 1/µF as x * oo. {Tn} if for any k.(3. T1. Z(u) U = 1 u 1 u f z(u . The property of independent cycles is equivalent to the postTk process {XTk+t}t>0 being independent of To.x)u(x) dx = z(u( 1 .t. Here the relevant F does not have mass one (F is defective).. is called the cycle length distribution and as before. However. The distribution F of Y1..r. of Yo. T1. asymptotic properties can easily be obtained from the key renewal equation by an exponential transformation also when F(dx) does not integrate to one. this covers discrete Markov chains where we can take the Tn as the instants with Xt = i for some arbitrary but fixed state i.. .k+t }t>o is independent of To..3) satisfied by the ruin probability for the compound Poisson model.e. Tk (or. A stochastic process {Xt}t>0 with a general state space E is called regenerative w.. this expression is to be interpreted as a random element of the space of all Evalued sequences with finite lifelengths. we let µ denote its mean. . a basic reason that renewal theory is relevant is the renewal equation II.. To this end.. however. .2) by e7x to obtain Z = z +P * Z where Z(x) = e'Y'Z(x). 
The simplest case is when {Xₜ} has i.i.d. cycles, e.g. many queueing processes, where the Tₙ are the instants where a customer enters an empty system (then cycles = busy cycles); this is the case considered in [APQ] V. However, the present more general definition is needed to deal with, say, Harris recurrent Markov chains. In fact, just the same proof as there carries over to show:

Proposition A1.3 Consider a regenerative process such that the cycle length distribution is non-lattice with μ < ∞. Then Xₜ → X∞ in distribution, where the distribution of X∞ is given by

Eg(X∞) = (1/μ) E₀ ∫₀^{Y₁} g(Xₜ) dt.  (A.6)

If F is spread-out, then Xₜ → X∞ in total variation.

1d Cumulative processes

Let {Tₙ} be a renewal process with i.i.d. cycles (we allow a different distribution of the first cycle). A process {Zₜ}ₜ≥₀ is called cumulative w.r.t. {Tₙ} if the processes {Z_{Tₙ+t} − Z_{Tₙ}}₀≤t<Yₙ₊₁ are i.i.d. for n = 1, 2, .... An example is Zₜ = ∫₀ᵗ f(Xₛ) ds, where {Xₜ} is regenerative w.r.t. {Tₙ}. Assume that μ < ∞ and define Uₙ = Z_{Tₙ₊₁} − Z_{Tₙ}. Then:

Proposition A1.4 Let {Zₜ}ₜ≥₀ be cumulative w.r.t. {Tₙ}. Then:
(a) If E sup_{0≤t<Y₁} |Z_{T₀+t} − Z_{T₀}| < ∞, then Zₜ/t → EU₁/μ a.s.
(b) If in addition Var(U₁) < ∞, Var(Y₁) < ∞, then (Zₜ − tEU₁/μ)/√t has a limiting normal distribution with mean 0 and variance

{Var(U₁) + (EU₁/μ)² Var(Y₁) − 2(EU₁/μ) Cov(U₁, Y₁)}/μ.

1e Residual and past lifetime

Consider a renewal process and define ξ(t) as the residual lifetime of the renewal interval straddling t, ξ(t) = inf{Tₖ − t : Tₖ > t}, and η(t) = sup{t − Tₖ : Tₖ ≤ t} as the age. Then {ξ(t)}, {η(t)} are Markov with state spaces (0, ∞), [0, ∞), resp. If μ = ∞, then ξ(t) → ∞ (i.e. P(ξ(t) ≤ a) → 0 for any a < ∞) and η(t) → ∞. If μ < ∞, then ξ(t) and η(t) both have a limiting stationary distribution F₀ given by the density F̄(x)/μ. We denote the limiting r.v.'s by ξ, η. It holds more generally that (η(t), ξ(t)) → (η, ξ), and we have:
Theorem A1.5 Under the condition of Blackwell's renewal theorem, the joint distribution of (η, ξ) is given by the following four equivalent statements:
(a) P(η > x, ξ > y) = (1/μ) ∫_{x+y}^∞ F̄(z) dz;
(b) the joint distribution of (η, ξ) is the same as the distribution of (VW, (1 − V)W), where V, W are independent, V is uniform on (0, 1) and W has distribution F_W given by dF_W/dF(x) = x/μF;
(c) the marginal distribution of η is F₀, and the conditional distribution of ξ given η = y is the overshoot distribution F₀^{(y)} given by F̄₀^{(y)}(z) = F̄₀(y + z)/F̄₀(y);
(d) the marginal distribution of ξ is F₀, and the conditional distribution of η given ξ = z is F₀^{(z)}.

The proof of (a) is straightforward by viewing {(η(t), ξ(t))} as a regenerative process, and the equivalence of (a) with (b)-(d) is an easy exercise.

In IV, we used:

Proposition A1.6 Consider a renewal process with μ < ∞. Then ξ(t)/t → 0 a.s. and Eξ(t)/t → 0; if in addition EY₀ < ∞, the same conclusions hold in the delayed case.

Proof The number Nₜ of renewals before t satisfies Nₜ/t → 1/μ a.s. Hence for t large enough, we can bound ξ(t) by M(t) = max{Yₖ : k ≤ 2t/μ}. Since the maximum Mₙ of n i.i.d. r.v.'s with finite mean satisfies Mₙ/n → 0 a.s. (Borel-Cantelli), the first statement follows. For the second, assume first that the renewal process is zero-delayed. Then E₀ξ(t) satisfies a renewal equation with z(t) = E[Y₁ − t; Y₁ > t], so that

E₀ξ(t) = ∫₀ᵗ U(dy) z(t − y) ≤ c Σ_{k=0}^{⌊t⌋} z(k),

where c = supₓ {U(x + 1) − U(x)} (c < ∞ because it is easily seen that U(x + 1) − U(x) ≤ U(1)). Since z(k) ≤ E[Y₁; Y₁ > k] → 0, the sum is o(t), so that E₀ξ(t)/t → 0. In the general case, use

Eξ(t)/t = E[Y₀ − t; Y₀ > t]/t + (1/t) ∫₀ᵗ E₀ξ(t − y) P(Y₀ ∈ dy). □
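The limiting distribution F₀ of the residual lifetime can be checked by simulation; a sketch with Gamma(2, 1) interarrivals (an illustrative choice, for which μ = 2 and F̄(z) = (1 + z)e^{−z}):

```python
import numpy as np

rng = np.random.default_rng(2)
t, y = 50.0, 1.0
n_paths = 20_000

# Gamma(2,1) interarrivals: mu = 2, Fbar(z) = (1 + z) exp(-z)
Y = rng.gamma(shape=2.0, scale=1.0, size=(n_paths, 60))
T = Y.cumsum(axis=1)                       # renewal epochs for each path
idx = (T <= t).sum(axis=1)                 # number of renewals in [0, t]
xi = T[np.arange(n_paths), idx] - t        # residual lifetime xi(t)

# limiting tail F0bar(y) = (1/mu) * int_y^inf (1+z) e^{-z} dz = (2 + y) e^{-y} / 2
F0bar = (2.0 + y) * np.exp(-y) / 2.0
assert abs((xi > y).mean() - F0bar) < 0.03
```

Note that the limit is not F̄(y) itself: longer intervals are more likely to straddle t, which is the length-biasing expressed by part (b) of the theorem.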
1f Markov renewal theory

By a Markov renewal process we understand a point process where the interarrival times Y₀, Y₁, Y₂, ... are not i.i.d. but governed by a Markov chain {Jₙ} (we assume here that the state space E is finite) in the sense that

P(Yₙ ≤ y | 𝒥) = F_{ij}(y) on {Jₙ = i, Jₙ₊₁ = j},

where 𝒥 = σ(J₀, J₁, ...) and (F_{ij})_{i,j∈E} is a family of distributions on (0, ∞). A Markov renewal process {Tₙ} contains an imbedded renewal process, namely {T_{ωₖ}}, where {ωₖ} is the sequence of instants ω where J_ω = i₀ for some arbitrary but fixed reference state i₀ ∈ E. These facts allow many definitions and results to be reduced to ordinary renewal- and regenerative processes.

A stochastic process {Xₜ}ₜ≥₀ is called semi-regenerative w.r.t. the Markov renewal process if for any n, the conditional distribution of {X_{Tₙ+t}}ₜ≥₀ given Y₀, ..., Yₙ, J₀, ..., Jₙ₋₁, Jₙ = i is the same as the Pᵢ-distribution of {Xₜ}ₜ≥₀ itself, where Pᵢ refers to the case J₀ = i. The semi-regenerative process is called non-lattice if {T_{ωₖ}} is non-lattice (it is easily seen that this definition does not depend on i₀). Further:

Proposition A1.7 Consider a non-lattice semi-regenerative process. Assume that μⱼ = EⱼY₀ < ∞ for all j and that {Jₙ} is irreducible with stationary distribution (νⱼ)ⱼ∈E. Then Xₜ → X∞ in distribution, where the distribution of X∞ is given by

Eg(X∞) = (1/μ) Σ_{j∈E} νⱼ Eⱼ ∫₀^{Y₀} g(Xₜ) dt,  where μ = Σ_{j∈E} νⱼμⱼ.

Notes and references Renewal theory and regenerative processes are treated, e.g., in [APQ], Alsmeyer [5] and Thorisson [372].

A2 Wiener-Hopf factorization

Let F be a distribution which is not concentrated on (−∞, 0] or (0, ∞), let X₁, X₂, ... be i.i.d. with common distribution F, and let Sₙ = X₁ + ··· + Xₙ be the associated random walk. Define

τ₊ = inf{n > 0 : Sₙ > 0},  τ₋ = inf{n > 0 : Sₙ ≤ 0}.

We call τ₊ (τ₋) the strict ascending (weak descending) ladder epoch and G₊ (G₋) the corresponding ladder height distributions,

G₊(x) = P(S_{τ₊} ≤ x, τ₊ < ∞),  G₋(x) = P(S_{τ₋} ≤ x, τ₋ < ∞).
Probabilistic Wiener-Hopf theory deals with the relation between F, G₊, G₋, the renewal measures

U₊ = Σ_{n=0}^∞ G₊*ⁿ,  U₋ = Σ_{n=0}^∞ G₋*ⁿ,

and the pre-τ₊ and pre-τ₋ occupation measures

R₊(A) = E Σ_{n=0}^{τ₊−1} I(Sₙ ∈ A),  R₋(A) = E Σ_{n=0}^{τ₋−1} I(Sₙ ∈ A)

(note that R₊ is concentrated on (−∞, 0] and R₋ on [0, ∞)). The basic identities are the following:

Theorem A2.1 (a) F = G₊ + G₋ − G₊ * G₋;
(b) G₋(A) = ∫₀^∞ F(A − x) R₋(dx), A ⊆ (−∞, 0];
(c) G₊(A) = ∫_{−∞}^0 F(A − x) R₊(dx), A ⊆ (0, ∞);
(d) R₊ = U₋;
(e) R₋ = U₊.

Proof Considering the restrictions of measures to (−∞, 0] and (0, ∞), we may rewrite (a) as

G₋(A) = F(A) + (G₊ * G₋)(A), A ⊆ (−∞, 0],  (A.7)
G₊(A) = F(A) + (G₊ * G₋)(A), A ⊆ (0, ∞)  (A.8)

(e.g., (A.7) follows since G₊(A) = 0 when A ⊆ (−∞, 0]). In (A.7), F(A) is the contribution from the event {τ₋ = 1} = {X₁ ≤ 0}. On {τ₋ ≥ 2}, define ω as the time where the pre-τ₋ path S₁, ..., S_{τ₋−1} is at its minimum; more rigorously, we consider the last such time (to make ω unique), so that

{ω = m, τ₋ = n} = {Sⱼ > 0, 0 < j < m; Sⱼ − Sₘ ≥ 0, m < j < n; Sₙ ≤ 0}.

Figure A.1

Reversing the time points 0, ..., m, it follows (see Fig. A.1) that

P(Sⱼ − Sₘ > 0, 0 ≤ j < m, Sₘ ∈ du) = P(τ₊ = m, S_{τ₊} ∈ du),

and for the remaining path that P(τ₋ = n, ω = m, Sₘ ∈ du, S_{τ₋} ∈ A) = P(τ₊ = m, S_{τ₊} ∈ du) P(τ₋ = n − m, S_{τ₋} ∈ A − u). It follows that

P(τ₋ ≥ 2, S_{τ₋} ∈ A) = Σ_{m=1}^∞ Σ_{n=m+1}^∞ ∫₀^∞ P(τ₊ = m, S_{τ₊} ∈ du) P(τ₋ = n − m, S_{τ₋} ∈ A − u) = (G₊ * G₋)(A),

and the proof of (A.7) follows; the proof of (A.8) is similar.

(c) follows from

G₊(A) = Σ_{n=1}^∞ P(Sₙ ∈ A, τ₊ = n) = Σ_{n=1}^∞ ∫ P(Sₖ ≤ 0, 0 ≤ k ≤ n − 1, S_{n−1} ∈ dx) F(A − x) = ∫_{−∞}^0 F(A − x) R₊(dx),

and the proof of (b) is similar.

For (d), consider a fixed n and let X̃ₖ = X_{n−k+1}, S̃ₖ = X̃₁ + ··· + X̃ₖ = Sₙ − S_{n−k}. Then for A ⊆ (−∞, 0],

P(Sₙ ∈ A, τ₊ > n) = P(Sₖ ≤ 0, 0 < k ≤ n, Sₙ ∈ A) = P(S̃ₙ ≤ S̃ₖ, 0 ≤ k < n, S̃ₙ ∈ A),

which is the probability that n is a weak descending ladder epoch with Sₙ ∈ A. Summing over n yields R₊(A) = U₋(A), and the proof of (e) is similar. □

Remark A2.2 In terms of m.g.f.'s, we can rewrite (a) as

1 − F̂[s] = (1 − Ĝ₊[s])(1 − Ĝ₋[s])  (A.9)

whenever F̂[s], Ĝ₊[s], Ĝ₋[s] are defined at the same time; this holds always on the line ℜs = 0, and sometimes in a larger strip. The classical analytical form of the Wiener-Hopf problem is to write 1 − F̂ as a product Ĥ₊Ĥ₋ of functions with such properties: Ĥ₊(s) = 1 − Ĝ₊[s] is defined and bounded in the half-plane {s : ℜs ≤ 0} and non-zero in {s : ℜs < 0} (because ‖G₊‖ ≤ 1), and similarly Ĥ₋(s) = 1 − Ĝ₋[s] is defined and bounded in {s : ℜs ≥ 0} and non-zero in {s : ℜs > 0}. □

Notes and references In its above discrete time version, Theorem A2.1(a) is classical; the present proof is from Kennedy [228]. Wiener-Hopf theory is only used at a few places in this book. For example, the derivation of the form of G₊ for the compound Poisson model in Chapter II, which is basic for the Pollaczeck-Khinchine formula, is based upon representing G₊ as in (c), and using time-reversion as in (d) to obtain the explicit form of R₊ (Lebesgue measure). Nevertheless, the theory serves as model and motivation for a number of results and arguments in continuous time. In continuous time, the analogue of a random walk is a process with stationary independent increments (a Lévy process, cf. II.4). In this generality there is no direct analogue of Theorem A2.1: e.g., if {Sₜ} is Brownian motion, then τ₊ = inf{t > 0 : Sₜ > 0} is 0 a.s., and G₊, G₋ are trivial, being concentrated at 0. However, a number of related identities can be derived, see for example Bingham [65]. Another main extension of the theory deals with Markov dependence, see e.g. the survey [15] by the author and the extensive list of references there; such developments motivate the approach in Chapter VI on the Markovian environment model.
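In discrete time, the factorization identity (a) can be verified exactly for the simple random walk with steps +1, −1 (probabilities p < q), where classically G₊ = (p/q)δ₁ (defective) and G₋ = qδ₋₁ + pδ₀. A small sketch representing the measures as dictionaries:

```python
from collections import defaultdict

p, q = 0.3, 0.7          # P(X = +1) = p, P(X = -1) = q, negative drift (assumed example)

F = {1: p, -1: q}
Gp = {1: p / q}          # strict ascending ladder height: mass p/q at +1
Gm = {-1: q, 0: p}       # weak descending ladder height: mass q at -1, p at 0

def conv(a, b):
    """Convolution of two finitely supported measures on the integers."""
    c = defaultdict(float)
    for x, ax in a.items():
        for y, by in b.items():
            c[x + y] += ax * by
    return c

# Theorem A2.1(a): F = G+ + G- - G+ * G-
GpGm = conv(Gp, Gm)
for x in set(F) | set(Gp) | set(Gm) | set(GpGm):
    lhs = F.get(x, 0.0)
    rhs = Gp.get(x, 0.0) + Gm.get(x, 0.0) - GpGm.get(x, 0.0)
    assert abs(lhs - rhs) < 1e-12
```

The check at x = 1 reduces to p(1 − p)/q = p, i.e. 1 − p = q, so the identity holds with equality, not just numerically.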
340 APPENDIX

A3 Matrix-exponentials

The exponential e^A of a p × p matrix A is defined by the usual series expansion

e^A = Σ_{n=0}^∞ Aⁿ/n!.

The series is always convergent because Aⁿ = O(nᵏ|λ|ⁿ) for some integer k < p, where λ is the eigenvalue of largest absolute value, |λ| = max{|μ| : μ ∈ sp(A)}, and sp(A) is the set of all eigenvalues of A (the spectrum). Some fundamental properties are the following:

sp(e^A) = {e^λ : λ ∈ sp(A)},  (A.10)
(d/dt) e^{At} = Ae^{At} = e^{At}A,  (A.11)
A ∫₀ᵗ e^{Ax} dx = e^{At} − I,  (A.12)
e^{Δ⁻¹AΔ} = Δ⁻¹ e^A Δ  (A.13)

whenever Δ is a diagonal matrix with all diagonal elements non-zero.

It is seen from the results of Chapter VIII that when handling phase-type distributions, one needs to compute matrix inverses Q⁻¹ and matrix-exponentials e^{Qt} (or just e^Q). Here it is standard to compute matrix inverses by Gauss-Jordan elimination with full pivoting, whereas there is no similar single established approach in the case of matrix-exponentials. Here are, however, three of the currently most widely used ones:

Example A3.1 (SCALING AND SQUARING) The difficulty in directly applying the series expansion e^Q = Σ₀^∞ Qⁿ/n! arises when the elements of Q are large. Then the elements of Qⁿ/n! do not decrease very rapidly to zero and may contribute a non-negligible amount to e^Q even when n is quite large, and very many terms of the series may be needed (one may even experience floating point overflow when computing Qⁿ). To circumvent this, write e^Q = (e^K)^m, where K = Q/m for some suitable integer m (this is the scaling step). Thus, if m is sufficiently large, Σ₀^∞ Kⁿ/n! converges rapidly and can be evaluated without problems, and e^Q can then be computed as the mth power (by repeated squaring if m = 2ᵏ). □
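A minimal sketch of the scaling and squaring procedure (plain Taylor series after scaling; production implementations would combine the squaring step with a Padé approximation):

```python
import numpy as np

def expm_ss(Q, tol=1e-12):
    """Matrix exponential by scaling and squaring with a plain Taylor series."""
    Q = np.asarray(Q, dtype=float)
    # scaling step: choose s with ||Q / 2^s|| <= 1, so e^Q = (e^{Q/2^s})^{2^s}
    s = max(0, int(np.ceil(np.log2(max(1e-16, np.linalg.norm(Q, 1))))))
    K = Q / 2 ** s
    # Taylor series for e^K, safe since the norm of K is at most 1
    E = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    n = 0
    while np.linalg.norm(term, 1) > tol:
        n += 1
        term = term @ K / n
        E = E + term
    for _ in range(s):                 # squaring step
        E = E @ E
    return E
```

For the 2 × 2 intensity matrix Q with rows (−1, 1) and (2, −2), the result can be compared entry by entry with the explicit two-state formula derived later in this section.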
Example A3.2 (UNIFORMIZATION) Formally, the procedure consists in choosing some suitable η > 0, letting

P = I + Q/η  (A.15)

and truncating the series in the identity

e^{Qt} = e^{−ηt} Σ_{n=0}^∞ Pⁿ (ηt)ⁿ/n!,  (A.14)

which is easily seen to be valid as a consequence of e^{Qt} = e^{η(P−I)t} = e^{−ηt}e^{ηPt}. The idea which lies behind is uniformization of a Markov process {Xₜ}: assume that Q is the intensity matrix for {Xₜ} and choose η with η ≥ maxᵢ |qᵢᵢ|. Then it is easily checked that P is a transition matrix, and we may consider a new Markov process {X̃ₜ} which has jumps governed by P and occurring at epochs of a Poisson process {Nₜ} with constant intensity η only (note that since pᵢᵢ is typically non-zero, some jumps are dummy in the sense that no state transition occurs); this is a construction of {Xₜ} by realizing the jump times as a thinning of {Nₜ}. The intensity matrix Q̃ for {X̃ₜ} is the same as Q, since a jump from i to j ≠ i occurs at rate q̃ᵢⱼ = ηpᵢⱼ = qᵢⱼ. The probabilistic reason that (A.14) holds is therefore that the t-step transition matrix for {X̃ₜ} is the r.h.s. of (A.14); to see this, condition upon the number n of Poisson events in [0, t]. In practice, what is needed is quite often only Zₜ = πe^{Qt} (or e^{Qt}h), with π (h) a given row (column) vector. □

Example A3.3 (DIFFERENTIAL EQUATIONS) Letting Kₜ = e^{Qt}, we have K̇ = QK (or KQ), which is a system of p² linear differential equations which can be solved numerically by standard algorithms (say the Runge-Kutta method) subject to the boundary condition K₀ = I. One can then reduce to p linear differential equations by noting that Ż = ZQ, Z₀ = π (or Ż = QZ, Z₀ = h). The approach is in particular convenient if one wants e^{Qt} for many different values of t. □

Example A3.4 (DIAGONALIZATION) Assume that Q has diagonal form, i.e. p different eigenvalues λ₁, ..., λ_p.
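Uniformization as in (A.14) translates directly into code; a sketch computing πe^{Qt} by truncating the Poisson-weighted series (the stopping rule based on accumulated Poisson mass is one simple choice):

```python
import numpy as np
from math import exp

def uniformized_dist(pi, Q, t, eps=1e-12):
    """pi * exp(Qt) for an intensity matrix Q, via the uniformization series (A.14)."""
    Q = np.asarray(Q, float)
    pi = np.asarray(pi, float)
    eta = max(-Q.diagonal())            # eta >= max |q_ii|
    P = np.eye(Q.shape[0]) + Q / eta
    w = exp(-eta * t)                   # Poisson(eta * t) weight for n = 0
    v = pi.copy()
    out = w * v
    total = w
    n = 0
    while total < 1.0 - eps:            # stop once almost all Poisson mass is used
        n += 1
        w *= eta * t / n
        v = v @ P
        out += w * v
        total += w
    return out
```

Since each πPⁿ is a probability vector, the truncation error is bounded by the neglected Poisson tail mass, which the loop keeps below eps.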
Let v₁, ..., v_p be the corresponding left (row) eigenvectors and h₁, ..., h_p the corresponding right (column) eigenvectors, vᵢQ = λᵢvᵢ, Qhᵢ = λᵢhᵢ. Then vᵢhⱼ = 0 for i ≠ j, vᵢhᵢ ≠ 0, and we may adopt some normalization convention ensuring vᵢhᵢ = 1. Then

Q = Σ_{i=1}^p λᵢhᵢvᵢ = Σ_{i=1}^p λᵢ hᵢ ⊗ vᵢ,  (A.16)

e^{Qt} = Σ_{i=1}^p e^{λᵢt}hᵢvᵢ = Σ_{i=1}^p e^{λᵢt} hᵢ ⊗ vᵢ.  (A.17)

Thus, we have an explicit formula for e^{Qt} once the λᵢ, vᵢ, hᵢ have been computed. Namely, we can take H as the matrix with columns h₁, ..., h_p and write e^{Qt} as

e^{Qt} = He^{Λt}H⁻¹ = H (e^{λᵢt})_{diag} H⁻¹,  (A.18)

where Λ = (λᵢ)_{diag}; this last step is equivalent to finding a matrix H such that H⁻¹QH is a diagonal matrix. There are, however, two serious drawbacks of this approach:

Numerical instability: If the λᵢ are too close, (A.18) contains terms which almost cancel and the loss of digits may be disastrous. The phenomenon occurs not least when the dimension p is large. In view of this phenomenon alone, care should be taken when using diagonalization as a general tool for computing matrix-exponentials.

Complex calculus: Typically, not all λᵢ are real, and we need to have access to software permitting calculations with complex numbers or to perform the cumbersome translation into real and imaginary parts.

Nevertheless, some cases remain where diagonalization may still be appealing. E.g., the eigenvalue λ₁ of largest real part is often real (say, under the conditions of the Perron-Frobenius theorem).

Example A3.5 If Q = (q₁₁ q₁₂; q₂₁ q₂₂) is 2 × 2, everything is nice and explicit:

λ₁ = (q₁₁ + q₂₂)/2 + D,  λ₂ = (q₁₁ + q₂₂)/2 − D,  where D = √((q₁₁ − q₂₂)²/4 + q₁₂q₂₁)

(λ₂ is real because λ₁ + λ₂ = tr(Q)). Write π (= v₁) for the left eigenvector corresponding to λ₁ and k (= h₁) for the right eigenvector. Then

π = a(q₂₁, λ₁ − q₁₁),  k = b(q₁₂, λ₁ − q₁₁)ᵀ,

where a, b are any constants ensuring πk = 1, i.e. ab(q₁₂q₂₁ + (λ₁ − q₁₁)²) = 1. Of course, v₂ and h₂ can be computed in just the same way, replacing λ₁ by λ₂. However, it is easier to note that πh₂ = 0 and v₂k = 0, which implies h₂ ∝ (π₂, −π₁)ᵀ, v₂ ∝ (k₂, −k₁); with v₂h₂ = πk = 1, one gets

e^{Qt} = e^{λ₁t} (k₁π₁ k₁π₂; k₂π₁ k₂π₂) + e^{λ₂t} (π₂k₂ −π₂k₁; −π₁k₂ π₁k₁).  (A.19)

Example A3.6 A particularly important case arises when Q = (−q₁ q₁; q₂ −q₂) is an intensity matrix. Then λ₁ = 0, and the corresponding left and right eigenvectors are the stationary probability distribution π and e, where

π = (q₂/(q₁ + q₂), q₁/(q₁ + q₂)).  (A.20)

The other eigenvalue is λ₂ = −q₁ − q₂, and after some trivial calculus one gets

e^{Qt} = (π₁ π₂; π₁ π₂) + e^{−(q₁+q₂)t} (π₂ −π₂; −π₁ π₁).  (A.21)

Here the first term is the stationary limit and the second term thus describes the rate of convergence to stationarity. □

Example A3.
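The closed form for a two-state intensity matrix, e^{Qt} = eπ + e^{−(q₁+q₂)t}(I − eπ) as in (A.20)-(A.21), is easy to check numerically against diagonalization (the particular rates below are arbitrary):

```python
import numpy as np

q1, q2, t = 0.5, 1.5, 0.7                      # arbitrary rates and time
Q = np.array([[-q1, q1], [q2, -q2]])
pi = np.array([q2, q1]) / (q1 + q2)            # stationary distribution (A.20)
e_pi = np.outer(np.ones(2), pi)                # the rank-1 matrix e pi
closed = e_pi + np.exp(-(q1 + q2) * t) * (np.eye(2) - e_pi)   # cf. (A.21)

lam, H = np.linalg.eig(Q * t)                  # reference by diagonalization (A.18)
ref = (H @ np.diag(np.exp(lam)) @ np.linalg.inv(H)).real
assert np.allclose(closed, ref)
```

The matrix I − eπ is the spectral projection belonging to λ₂ = −(q₁ + q₂), which is why the formula is a two-term instance of (A.17).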
7 Let
3 9 2 14 7 11 2 2
(A. Generalized inverses play an important role in statistics. but only that dimensions match .5 . (AA+)' = AA+. e_6u A4 Some linear algebra 4a Generalized inverses A generalized inverse of a matrix A is defined as any matrix A.344 Then D= 2+ 11)' 7 T4 2 =52. They are most often constructed by imposing some additional properties .. A+AA+ = A+. (A+A)' = A+A.23) . for example AA+A = A. 2 2 1=ab(142+(1+2)2 ) = tab.11/2 + 5 1. and a generalized inverse may not unique.6. APPENDIX x1 3/2 . (A. ir =a(2 9 9 14 2 1 3 2 2)' k=b 14 =b 1+ 2 ir1 k1 ir2 k1 _ 9 2 10 5 7 9 70 1 ' 7r1 k2 7r2 k2 10 9 9 10 10 + 7 1 10 10 10 1 10 7 10 9 70 9 10 0 e4" = e_.22) Note that in this generality it is not assumed that A is necessarily square..satisfying AAA = A.11/2 . A2 = 3/2 .
.e ® 7r)1.eir )1. (I . Here is a typical result on the role of such matrices in applied probability: Proposition A4.23) is called the MoorePenrose inverse of A.= (I .g. Am+1 = . and exists and is unique (see for example Rao [300]).g. Then for some b > 0. (A.1Q = Q(Q .APPENDIX 345 A matrix A+ satisfying (A. one is also faced with singular matrices .ew.. These matrices are not generalized inverses but act roughly as inverses except that 7r and e play a particular role .eir)1 = I .1 Let A be an irreducible intensity matrix with stationary row vector it. and define D = (A .D + O(ebt).24) = te7r . most often either an intensity matrix Q or a matrix of the form IP where P is a transition matrix. are ordered such that Al > 0.P + e7r ). then there exists an orthogonal matrix C such that A = CDC' where 0 0 D = AP Here we can assume that the A ..P + e7r)1 (here ( I . ( Q . 0 01 In applied probability.. lt o eAx dx = te7r + D(eAt ..eir ). Am > 0. . .I) (A. E.25) . _ A.e. Assume that a unique stationary distribution w exists . = 0 where m < p is the rank of A.. and can define /ail 0 0 0 0 0 0 A+ = C A' 0 0 0 C' . Rather than with generalized inverses . if A is a possibly singular covariance matrix (nonnegative definite). one then works with Q = (Q .P).1 goes under the name fundamental matrix of the Markov chain).
B(t) denote the l.h.s. and r.h.s. of (A.24), resp. Then A(0) = B(0) = 0 and, using DA = AD = I − eπ and eπe^{At} = eπ,

B′(t) = eπ + DAe^{At} = eπ + (I − eπ)e^{At} = eπ + e^{At} − eπ = e^{At} = A′(t),

so that the first identity in (A.24) holds exactly; the O(e^{−bt}) form follows since e^{At} = eπ + O(e^{−bt}) by Perron-Frobenius theory. Finally, (A.25) follows from (A.24) by integration by parts:

∫₀ᵗ x e^{Ax} dx = t{teπ + D(e^{At} − I)} − ∫₀ᵗ {xeπ + D(e^{Ax} − I)} dx. □

4b The Kronecker product ⊗ and the Kronecker sum ⊕

We recall that if A⁽¹⁾ is a k₁ × m₁ and A⁽²⁾ a k₂ × m₂ matrix, then the Kronecker (tensor) product A⁽¹⁾ ⊗ A⁽²⁾ is the (k₁k₂) × (m₁m₂) matrix with (i₁i₂)(j₁j₂)th entry a⁽¹⁾_{i₁j₁}a⁽²⁾_{i₂j₂}; in block notation (2 × 2 case),

A ⊗ B = (a₁₁B a₁₂B; a₂₁B a₂₂B).

Example A4.2 Let π be a row vector with m components and h a column vector with k components. Interpreting π, h as 1 × m and k × 1 matrices, respectively, it follows that h ⊗ π is the k × m matrix with ijth element hᵢπⱼ; equivalently, h ⊗ π reduces to hπ in standard matrix notation. Note that h ⊗ π has rank 1 (the rows are proportional to π and the columns to h), and in fact any rank 1 matrix can be written on this form. E.g., (ξ) ⊗ (6 7 8) = (6ξ 7ξ 8ξ). □

Example A4.3 Let

A = (2 3; 4 5),  B = (√6 √7; √8 √9).

Then

A ⊗ B = (2√6 2√7 3√6 3√7; 2√8 2√9 3√8 3√9; 4√6 4√7 5√6 5√7; 4√8 4√9 5√8 5√9). □

A fundamental formula is

(A₁B₁C₁) ⊗ (A₂B₂C₂) = (A₁ ⊗ A₂)(B₁ ⊗ B₂)(C₁ ⊗ C₂).  (A.28)

In particular, if A₁ = v₁, A₂ = v₂ are row vectors and C₁ = h₁,
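The block structure of A ⊗ B and the rank 1 property of h ⊗ π from Example A4.2 can be illustrated directly (numbers arbitrary):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])
K = np.kron(A, B)                        # blocks: [[1*B, 2*B], [3*B, 4*B]]
assert np.array_equal(K[:2, :2], 1 * B)
assert np.array_equal(K[:2, 2:], 2 * B)

# Example A4.2: h (tensor) pi equals the rank-1 matrix h pi
h = np.array([[1.0], [2.0], [3.0]])      # 3 x 1 column
pi = np.array([[0.2, 0.8]])              # 1 x 2 row
assert np.array_equal(np.kron(h, pi), h @ pi)
```

The same `np.kron` call is what makes the Kronecker-sum constructions below convenient to test numerically.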
C₂ = h₂ are column vectors, then v₁B₁h₁ and v₂B₂h₂ are real numbers, and

v₁B₁h₁ · v₂B₂h₂ = v₁B₁h₁ ⊗ v₂B₂h₂ = (v₁ ⊗ v₂)(B₁ ⊗ B₂)(h₁ ⊗ h₂).  (A.29)

If A⁽¹⁾ and A⁽²⁾ are both square (k₁ = m₁ and k₂ = m₂), then the Kronecker sum is defined by

A⁽¹⁾ ⊕ A⁽²⁾ = A⁽¹⁾ ⊗ I_{k₂} + I_{k₁} ⊗ A⁽²⁾.  (A.30)

A crucial property is the fact that the functional equation for the exponential generalizes to Kronecker notation (note that, in contrast, e^{A+B} = e^Ae^B typically only holds when A and B commute):

Proposition A4.4 e^{A⊕B} = e^A ⊗ e^B.

Proof We shall use the binomial formula

(A ⊕ B)^ℓ = Σ_{k=0}^ℓ (ℓ choose k) Aᵏ ⊗ B^{ℓ−k}.  (A.31)

Indeed, (A ⊕ B)^ℓ = (A ⊗ I + I ⊗ B)^ℓ is the sum of all products of ℓ factors, each of which is A ⊗ I or I ⊗ B. Since A ⊗ I and I ⊗ B commute, a product in which A ⊗ I occurs k times is Aᵏ ⊗ B^{ℓ−k} according to (A.28), and the number of such factors is precisely given by the relevant binomial coefficient. Using (A.31), it follows that

e^{A⊕B} = Σ_{ℓ=0}^∞ (A ⊕ B)^ℓ/ℓ! = Σ_{ℓ=0}^∞ (1/ℓ!) Σ_{k=0}^ℓ (ℓ choose k) Aᵏ ⊗ B^{ℓ−k} = (Σ_{k=0}^∞ Aᵏ/k!) ⊗ (Σ_{n=0}^∞ Bⁿ/n!) = e^A ⊗ e^B. □
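Proposition A4.4 can be checked numerically; a sketch with random matrices and a plain Taylor series for the matrix exponential:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(3, 3))
ksum = np.kron(A, np.eye(3)) + np.kron(np.eye(2), B)   # the Kronecker sum (A.30)

def expm(M, terms=60):
    """Plain Taylor-series matrix exponential; adequate for the small norms here."""
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for n in range(1, terms):
        T = T @ M / n
        E = E + T
    return E

assert np.allclose(expm(ksum), np.kron(expm(A), expm(B)))
```

Note that the analogous check with e^{A+B} versus e^Ae^B would fail for these non-commuting A, B, which is exactly the point of the proposition.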
Now note that the eigenvalues of A ® B are of the form a +. E (0. the integrand can be written as ( 7r (9 v)( eAt ® eBt )(h ®k ) = ( 7r ®v)(eA (DBt)(h (& k). . p there should exist io. . . so that by asssumption A ® B is u invertible. = j and atk_li.. j = 1. h such that vh = 1. Similarly. We call A irreducible if the pattern of zero and nonzero elements is the same as for an irreducible transition matrix. [APQ] X. A is called aperiodic if the pattern of zero and nonzero elements is the same as for an aperiodic transition matrix. ao). . . we have AO = 1. 4c The PerronFrobenius theorem Let A be a p x pmatrix with nonnegative elements.APPENDIX 349 Proof According to (A.8 Let B be an irreducible3 p x pmatrix with nonnegative offdiagonal elements..34) Note that for a transition matrix. in such that io = i. il. . Then the eigenvalue Ao with largest real part is simple and real.. h can be chosen with 3By this.g. and appeal to (A.. and if we normalize v. (A. . That is. and the corresponding left and right eigenvectors v. > 0 for k = 1. Then: (a) The spectral radius Ao = max{JAI : A E sp(A)} is itself a strictly positive and simple eigenvalue of A... and the corresponding left and right eigenvectors v. Here is the PerronFrobenius theorem. i.3 whenever a is an eigenvalue of A and 3 is an eigenvalue of B.The PerronFrobenius theorem has an analogue for matrices B with properties similar to intensity matrices: Corollary A4. n. then IN < Ao for all A E sp(A). then An = Aohv+O(µ") = Aoh®v+O(µ") for some u. (b) if in addition A is aperiodic. . . which can be found in a great number of books. f o r each i. h can be chosen with strictly positive elements.1 and references there (to which we add Berman & Plemmons [63]): Theorem A4. see e. we mean that the pattern of nonzero offdiagonal elements is the same as for an irreducible intensity matrix.29).7 Let A be a p x pmatrix with nonnegative elements..12). h = e and v = 7r (the stationary row vector).
strictly positive elements. Furthermore, if we normalize ν, h such that νh = 1, then

    e^{Bt} = e^{λ_0 t} hν + O(e^{μt}) = e^{λ_0 t} h ⊗ ν + O(e^{μt})    (A.35)

for some μ ∈ (−∞, λ_0). Note that for an intensity matrix, we have λ_0 = 0, h = e and ν = π (the stationary row vector).

Corollary A4.8 is most often not stated explicitly in textbooks, but is an easy consequence of the Perron-Frobenius theorem. For example, one can consider A = ηI + B where η > 0 is so large that all diagonal elements of A are strictly positive (then A is irreducible and aperiodic), relate the eigenvalues of B to those of A, and use the formula

    e^{Bt} = e^{−ηt} e^{At} = e^{−ηt} Σ_{n=0}^∞ A^n t^n / n!

(cf. the analogy of this procedure with uniformization).

A5 Complements on phase-type distributions

5a Asymptotic exponentiality

In Chapter VIII it was shown that under mild conditions the tail of a phase-type distribution B is asymptotically exponential. The next result gives a condition for asymptotic exponentiality, not only in the tail but in the whole distribution. The content is that B is approximately exponential if the exit rates t_i are small compared to the feedback intensities t_{ij} (i ≠ j). To this end, note that we can write the phase generator T as Q − (t_i)_diag, where Q = T + (t_i)_diag is a proper intensity matrix (Qe = 0); the condition is then that t is small compared to Q.

Proposition A5.1 Let Q be a proper irreducible intensity matrix with stationary distribution α, let t = (t_i)_{i∈E} ≠ 0 have non-negative entries, and define T^(a) = aQ − (t_i)_diag. Then for any initial distribution β, the phase-type distribution B^(a) with representation (β, T^(a)) is asymptotically exponential with parameter t* = Σ_{i∈E} α_i t_i as a → ∞.

Proof Let {Y_t^(a)} be a Markov process with initial distribution β and intensity matrix aQ, and let {J_t^(a)} be the phase process associated with B^(a), with lifelength ζ^(a). We can assume that J_t^(a) = Y_t^(a) for t < ζ^(a), and that Y_t^(a) = Y_{at} for all t, where {Y_t} is the corresponding process for a = 1. Let further V be exponential with intensity 1 and independent of everything else. We can think of ζ^(a) as the first event in an inhomogeneous Poisson process (Cox process) with intensity process {t_{Y_t^(a)}}_{t≥0}, at which the phase process is terminated. Hence we can represent ζ^(a) as

    ζ^(a) = inf{ t > 0 : ∫_0^t t_{Y_v^(a)} dv = V }
          = inf{ t > 0 : (1/a) ∫_0^{at} t_{Y_v} dv = V } = σ(aV)/a,

where σ(x) = inf{ t > 0 : ∫_0^t t_{Y_v} dv = x }. By the law of large numbers for Markov processes, ∫_0^t t_{Y_v} dv / t → t* a.s., and this easily yields σ(x)/x → 1/t* a.s. Hence ζ^(a) = σ(aV)/a → V/t*, so that P(ζ^(a) > x) → P(V > t*x) = e^{−t*x}. □

In addition to the asymptotic exponentiality, the state from which the phase process is terminated has a limit distribution; we shall in fact prove a somewhat more general result, which was used in the proof of Proposition VI.2.

Proposition A5.2 P_i(ζ^(a) > x, J^(a)_{ζ^(a)−} = j) → e^{−t*x} · α_j t_j / t*.

Proof Assume first t_i > 0 for all i and let I_x = Y_{σ(x)}. Then {I_x} is a Markov process with I_0 = Y_0. Conditioning upon whether {Y_t} changes state in [0, dx/t_i] or not, we get

    P(I_{x+dx} = j | I_x = i) = (1 + q_{ii} dx/t_i) δ_{ij} + (q_{ij} dx/t_i)(1 − δ_{ij}).

Hence the intensity matrix of {I_x} is (q_{ij}/t_i)_{i,j∈E}, from which it is easily checked that the limiting stationary distribution is (α_i t_i / t*)_{i∈E}. Now let a′ → ∞ with a in such a way that a′ < a, a′/a → 1, a − a′ → ∞ (e.g. a′ = a − a^ε where 0 < ε < 1). Then σ(a′V)/σ(aV) → 1. Since J^(a)_{ζ^(a)−} = Y^(a)_{ζ^(a)−} = Y_{aζ^(a)} = Y_{σ(aV)} = I_{aV}, it follows that

    P_i(ζ^(a) > x, J^(a)_{ζ^(a)−} = j) = P_i(σ(aV) > ax, I_{aV} = j)
        ≈ E_i[ I(σ(a′V)/a > x) P(I_{aV} = j | σ(a′V)) ] → e^{−t*x} · α_j t_j / t*,

using that σ(a′V)/a → V/t*, whereas given σ(a′V) the remaining stretch aV − a′V → ∞ brings {I_x} into its limiting stationary distribution.
Reducing the state space of {I_x} to {i ∈ E : t_i > 0}, an easy modification of the argument finally yields the result for the case where t_i = 0 for one or more i. □

Notes and references Propositions A5.1 and A5.2 do not appear to be in the literature. These results are in the spirit of rare events theory for regenerative processes (e.g. Keilson [223], Gnedenko & Kovalenko [164] and Glasserman & Kou [162]). See also Korolyuk, Penev & Turbin [238].

5b Discrete phase-type distributions

The theory of discrete phase-type distributions is a close parallel of the continuous case, so we shall be brief. A distribution B on {1, 2, ...} is said to be discrete phase-type with representation (E, P, α) if B is the lifelength of a terminating Markov chain (in discrete time) on E which has transition matrix P = (p_{ij}) and initial distribution α. Then P is substochastic and the vector of exit probabilities is p = e − Pe.

Example A5.3 As the exponential distribution is the simplest continuous phase-type distribution, so is the geometric distribution, with point probabilities b_k = (1 − p)^{k−1} p, k = 1, 2, ..., the simplest discrete phase-type distribution: here E has only one element (and α = 1), and thus the parameter p of the geometric distribution can be identified with the exit probability. □

Example A5.4 Any discrete distribution B with finite support, say b_k = 0 for k > K, is discrete phase-type. Indeed, let E = {1, ..., K}, α = b = (b_k)_{k=1,...,K} and

    p_{kj} = 1 if j = k − 1, k > 1;  p_{kj} = 0 otherwise

(the chain starts in state k w.p. b_k, counts down, and exits from state 1 after k steps). □

Theorem A5.5 Let B be discrete phase-type with representation (P, α). Then: (a) the point probabilities are b_k = αP^{k−1}p, k = 1, 2, ...; (b) the generating function b̂[z] = Σ_{k=1}^∞ z^k b_k is zα(I − zP)^{−1}p; (c) the nth factorial moment Σ_k k(k−1)⋯(k−n+1) b_k is n! αP^{n−1}(I − P)^{−n}e.
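Theorem A5.5 is easy to verify numerically. The sketch below (assuming NumPy; the two-phase example in the second part is an arbitrary choice) checks (a) against the geometric distribution of Example A5.3, and the mean formula α(I − P)^{−1}e obtained from (c) with n = 1:

```python
import numpy as np

# Geometric as discrete phase-type: one state, P = (1-p), exit prob p.
p = 0.3
P = np.array([[1.0 - p]])
alpha = np.array([1.0])
pvec = np.ones(1) - P @ np.ones(1)          # exit probabilities e - Pe
bk = [alpha @ np.linalg.matrix_power(P, k - 1) @ pvec for k in range(1, 6)]
geo = [(1 - p)**(k - 1) * p for k in range(1, 6)]
print(np.allclose(bk, geo))                 # True

# Two-phase example: mean alpha (I-P)^{-1} e vs. direct summation.
P2 = np.array([[0.2, 0.5], [0.0, 0.6]])
a2 = np.array([0.7, 0.3])
p2 = np.ones(2) - P2 @ np.ones(2)
mean = a2 @ np.linalg.inv(np.eye(2) - P2) @ np.ones(2)
direct = sum(k * (a2 @ np.linalg.matrix_power(P2, k - 1) @ p2)
             for k in range(1, 400))        # tail beyond k=400 is negligible
print(abs(mean - direct) < 1e-8)            # True
```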
5c Closure properties

Example A5.6 (Convolutions) Let B_1, B_2 be phase-type with representations (E^(1), α^(1), T^(1)), (E^(2), α^(2), T^(2)), resp. Then the convolution B = B_1 * B_2 is phase-type with representation (E, α, T), where E = E^(1) + E^(2) is the disjoint union of E^(1) and E^(2), α_i = α_i^(1) for i ∈ E^(1), α_i = 0 for i ∈ E^(2), and

    T = ( T^(1)  t^(1)α^(2) )
        ( 0      T^(2)      )                                    (A.36)

in block-partitioned notation (where we could also write α as (α^(1) 0)). The form of these results is easily recognized if one considers two independent phase processes {J_t^(1)}, {J_t^(2)} with lifetimes U_1, U_2, resp., and pieces the processes together by

    J_t = J_t^(1), 0 ≤ t < U_1;  J_t = J^(2)_{t−U_1}, U_1 ≤ t < U_1 + U_2;  J_t = Δ, t ≥ U_1 + U_2.

Then {J_t} has lifetime U_1 + U_2, initial distribution α and phase generator T. A reduced phase diagram (omitting transitions within the two blocks) is shown in Figure A.1 [α^(1) → E^(1), exit t^(1)α^(2) → E^(2), exit t^(2)]. □

Example A5.7 (The negative binomial distribution) The most trivial special case of Example A5.6 is the Erlang distribution E_r, which is the convolution of r exponential distributions. The discrete counterpart is the negative binomial distribution, with point probabilities

    b_k = C(k−1, r−1) p^r (1 − p)^{k−r},  k = r, r + 1, ....

This corresponds to a convolution of r geometric distributions with the same parameter p (each with a single phase and α = 1), and hence the negative binomial distribution is discrete phase-type, as is seen by minor modifications of Example A5.6. □
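The block representation (A.36) can be checked against the Erlang case of Example A5.7: gluing two exponential(δ) phases in series must reproduce the Erlang density δ²x e^{−δx}. A sketch assuming NumPy/SciPy, with δ and x arbitrary:

```python
import numpy as np
from scipy.linalg import expm

delta = 1.7
T1 = np.array([[-delta]]); t1 = np.array([delta])   # exit rates t = -T e
T2 = np.array([[-delta]])
a1 = np.array([1.0]);      a2 = np.array([1.0])

# Block generator (A.36) for the convolution, and alpha = (a1, 0).
T = np.block([[T1, np.outer(t1, a2)],
              [np.zeros((1, 1)), T2]])
alpha = np.concatenate([a1, np.zeros(1)])
texit = -T @ np.ones(2)

x = 0.9
density = alpha @ expm(T * x) @ texit               # phase-type density
erlang = delta**2 * x * np.exp(-delta * x)          # Erlang E2 density
print(abs(density - erlang) < 1e-10)                # True
```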
Example A5.8 (Finite mixtures) Let B_1, B_2 be phase-type with representations (E^(1), α^(1), T^(1)), (E^(2), α^(2), T^(2)), resp. Then the mixture B = θB_1 + (1 − θ)B_2 (0 ≤ θ ≤ 1) is phase-type with representation (E, α, T), where E = E^(1) + E^(2) is the disjoint union of E^(1) and E^(2),

    α_i = θα_i^(1), i ∈ E^(1);  α_i = (1 − θ)α_i^(2), i ∈ E^(2)

(i.e. α = (θα^(1) (1 − θ)α^(2))), and

    T = ( T^(1)  0     )
        ( 0      T^(2) )                                        (A.37)

in block-partitioned notation. A reduced phase diagram is shown in Figure A.2 [θα^(1) → E^(1); (1 − θ)α^(2) → E^(2)]. □

Example A5.9 (Infinite mixtures with T fixed) Assume that α = α^(a) depends on a parameter a ∈ A, whereas E and T are the same for all a. Let B^(a) be the corresponding phase-type distribution, and consider B^(ν) = ∫_A B^(a) ν(da), where ν is a probability measure on A. Then it is trivial to see that B^(ν) is phase-type with representation (α^(ν), E, T), where α^(ν) = ∫_A α^(a) ν(da). □

In exactly the same way, a mixture of more than two phase-type distributions is seen to be phase-type. In risk theory, one obvious interpretation of the claim size distribution B being a mixture is several types of claims.

Example A5.10 (Geometric compounds) Let B be phase-type with representation (E, α, T) and C = Σ_{n=1}^∞ (1 − ρ)ρ^{n−1} B^{*n}. Equivalently, if U_1, U_2, ... are i.i.d. with common distribution B and N is independent of the U_k and geometrically distributed with parameter ρ, P(N = n) = (1 − ρ)ρ^{n−1}, then C is the distribution of U_1 + ⋯ + U_N. To obtain a phase process for C, we need to restart the phase process for B w.p. ρ at each termination. A reduced phase diagram is shown in Figure A.3 [α → E, exit t, restart w.p. ρ],
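The resulting representation (E, α, T + ρtα) (stated just below) can be sanity-checked via means: the compound C must have mean μ_B / (1 − ρ). A sketch with an arbitrary two-phase example, assuming NumPy:

```python
import numpy as np

rho = 0.4
T = np.array([[-3.0, 1.0], [0.0, -2.0]])
alpha = np.array([0.5, 0.5])
t = -T @ np.ones(2)                         # exit rate vector

muB = alpha @ np.linalg.inv(-T) @ np.ones(2)        # mean of B
Tc = T + rho * np.outer(t, alpha)                   # generator of C
muC = alpha @ np.linalg.inv(-Tc) @ np.ones(2)       # mean of C
print(np.isclose(muC, muB / (1 - rho)))             # True: E[C] = mu_B/(1-rho)
```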
T) and C = F.T) where F[T] = J0 "o eTx F(dx) u is the matrix m.APPENDIX 355 and C is phasetype with representation (E. it follows by mixing (Example A5. Minor modifications of the argument show that 1. cf. then Jy has distribution aeTx.11 (OVERSHOOTS) The overshoot of U over x is defined as the distribution of (U . B2 of phasetype with representations (E('). Example A5. let the phase space be E x F = {i j : i E E. v. To obtain a phase representation for C . be the point probabilities of a discrete phasetype distribution with representation (E. If U1 has a different initial vector. we then let the governing phase process be {Jt} _ {(411 Jt2))} 2) interpreting exit of either of {4 M }. 12 (PHASETYPE COMPOUNDS ) Let fl. let {Jtl)}. if B is defective and N + 1 is the first n with U„ = oo. U2. Note that this was exactly the structure of the lifetime of a terminating renewal u process.1. resp. say with distribution F.TWWW). a. X independent of U.a(1). It is zeromodified phasetype with representation (E.f.g. . cf. are i. v..aeTx. f2. U2. (E(2).. a(1) ® a(2 ). Then the minimum U1 A U2 and the maximum U1 V U2 are again phasetype.. P).2. T).v. To see this. Example A5 . let B be a continuous phasetype distribution with representation (F. if {Jt} is a phase process for U. U2 be random variables with distributions B1.. T(2) ). resp.9) that (U .°_1 f„ B*?l. then U1 +• is phasetype with representation (E..d. a(2). T + pta). of F. For U1 A U2. a. if U1. i. E). Thus the representation is (E(1) x E(2).. Proposition VIII. j E F}. Example A5 .x)+. then C is the distribution of U1 + • • • + UN. { Jt2) } be independent with lifetimes U1. Equivalently. T(1) ® T(2)).T) if U is phasetype with representation (E.. . with common distribution B and N is independent of the Uk with P(N = n) = f.T + pta).X)+ is zeromodified phasetype with representation (E.aF[T]. then U1 + • • + UN is zeromodified phasetype with representation (a.7. If we replace x by a r. { 4 } as exit of {Jt}. 
but the same T.2.. 13 (MINIMA AND MAXIMA ) Let U1.°. a. Corollary VIII. let the initial vector be a ® v and u let the phase generator be I ® T + P ® (ta). say v. +UN 2. Indeed. T + ta.
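For the minimum, this representation gives the survival function (α^(1) ⊗ α^(2)) e^{(T^(1)⊕T^(2))x} e, which must factor into P(U_1 > x) P(U_2 > x). A numerical check, assuming NumPy/SciPy and arbitrary example parameters:

```python
import numpy as np
from scipy.linalg import expm

T1 = np.array([[-2.0, 1.0], [0.5, -3.0]]); a1 = np.array([0.6, 0.4])
T2 = np.array([[-1.0]]);                   a2 = np.array([1.0])

Tmin = np.kron(T1, np.eye(1)) + np.kron(np.eye(2), T2)   # T1 ⊕ T2
amin = np.kron(a1, a2)                                   # a1 ⊗ a2

x = 0.8
lhs = amin @ expm(Tmin * x) @ np.ones(2)                 # P(U1 ∧ U2 > x)
rhs = (a1 @ expm(T1 * x) @ np.ones(2)) * (a2 @ expm(T2 * x) @ np.ones(1))
print(np.isclose(lhs, rhs))                              # True
```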
For U_1 ∨ U_2, we need to allow {J_t^(2)} to go on (on E^(2)) when {J_t^(1)} exits, and vice versa. Thus the state space is E^(1) × E^(2) ∪ E^(1) ∪ E^(2), the initial vector is (α^(1) ⊗ α^(2)  0  0), and the phase generator is

    ( T^(1) ⊕ T^(2)   I ⊗ t^(2)   t^(1) ⊗ I )
    ( 0               T^(1)       0          )
    ( 0               0           T^(2)      )

Notes and references The results of the present section are standard; see Neuts [269] (where the proof, however, relies more on matrix algebra than the probabilistic interpretation exploited here).

5d Phase-type approximation

A fundamental property of phase-type distributions is denseness: a given distribution B on (0, ∞) can be approximated 'arbitrarily close' by a phase-type distribution:

Theorem A5.14 To a given distribution B on (0, ∞), there is a sequence {B_n} of phase-type distributions such that B_n → B weakly as n → ∞.

Here are the details at two somewhat different levels of abstraction:

1 (diagonal argument, elementary) Let {b_k} be any dense sequence of continuity points for B(x). Then we must find phase-type distributions B_n with B_n(b_k) → B(b_k) for all k as n → ∞. Assume first that B is a one-point distribution, say degenerate at b, and let B_n be the Erlang distribution E_n(δ_n) with δ_n = n/b. The mean of B_n is n/δ_n = b and the variance is n/δ_n² = b²/n; hence it is immediate that B_n → B. The general case now follows easily from this, from the fact that any distribution B can be approximated arbitrarily close by a distribution with finite support, and from the closedness of the class of phase-type distributions under the formation of finite mixtures. In more detail: first find a sequence {D_n} of distributions with finite support such that D_n(b_k) → B(b_k) for all k as n → ∞. By the diagonal argument (subsequent thinnings), we can assume that |D_n(b_k) − B(b_k)| ≤ 1/n for n ≥ k. Let the support of D_n be {x_1(n), ..., x_{q(n)}(n)}, with weight p_i(n) for x_i(n). Then from above,

    C_{r,n} = Σ_{i=1}^{q(n)} p_i(n) E_r(r/x_i(n)) → D_n,  r → ∞.
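The degeneration of the Erlang distribution E_n(n/b) to the one-point distribution at b, which drives the proof above, is visible numerically. A sketch assuming SciPy (b = 2 is an arbitrary choice; the Erlang is the gamma distribution with integer shape n and scale b/n):

```python
from scipy.stats import gamma

# E_n(n/b) has mean b and variance b^2/n: its c.d.f. tends to 0 below b
# and to 1 above b as n grows.
b = 2.0
for n in (1, 10, 1000):
    cdf_below = gamma.cdf(0.9 * b, a=n, scale=b / n)
    cdf_above = gamma.cdf(1.1 * b, a=n, scale=b / n)
    print(n, cdf_below, cdf_above)
```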
Hence we can choose r(n) in such a way that |C_{r(n),n}(b_k) − D_n(b_k)| ≤ 1/n, k ≤ n. Then |C_{r(n),n}(b_k) − B(b_k)| ≤ 2/n for k ≤ n, so that C_{r(n),n}(b_k) → B(b_k) for all k, and we can take B_n = C_{r(n),n}. □

2 (abstract topological) The essence of the argument above is that the closure PH̄ (w.r.t. the topology for weak convergence) of the class PH of phase-type distributions contains all one-point distributions. Since PH is closed under the continuous operation of formation of finite mixtures, PH̄ contains all finite mixtures of one-point distributions, i.e. the class D of all discrete distributions. But D̄ is the class G of all distributions on [0, ∞). Hence G ⊆ PH̄ and G = PH̄. □

Theorem A5.14 is fundamental and can motivate phase-type assumptions, say on the claim size distribution B in risk theory, in at least two ways:

insensitivity Suppose we are able to verify a specific result when B is of phase-type, say that two functionals φ_1(B) and φ_2(B) coincide. If φ_1 and φ_2 are weakly continuous, then it is immediate that φ_1(B) = φ_2(B) for all distributions B on [0, ∞).

approximation Assume that we can compute a functional φ(B) when B is phase-type, and that φ is known to be continuous. For a general B_0, we can then approximate B_0 by a phase-type B, compute φ(B) and use this quantity as an approximation to φ(B_0). In practice, if information on B_0 is given in terms of observations (i.i.d. replications), one would use the B given by some statistical fitting procedure (see below). It should be noted, however, that this procedure should be used with care if φ(B) is the ruin probability ψ(u) and u is large.

Weak convergence can be strengthened to convergence of exponential moments. Let E be the class of functions f : [0, ∞) → [0, ∞) such that f(x) = O(e^{ax}), x → ∞, for some a < ∞.

Theorem A5.15 To a given distribution B on (0, ∞) and f_1, f_2, ... ∈ E, there is a sequence {B_n} of phase-type distributions such that B_n → B weakly as n → ∞ and ∫_0^∞ f_i(x) B_n(dx) → ∫_0^∞ f_i(x) B(dx) for each i.

Proof By Fatou's lemma, B_n → B weakly implies that

    liminf_{n→∞} ∫_0^∞ f_i(x) B_n(dx) ≥ ∫_0^∞ f_i(x) B(dx),

and hence it is sufficient to show that we can obtain

    limsup_{n→∞} ∫_0^∞ f_i(x) B_n(dx) ≤ ∫_0^∞ f_i(x) B(dx),  i = 1, 2, ....

We may assume that in the proof of Theorem A5.14, D_n has been chosen such that

    ∫_0^∞ f_i(x) D_n(dx) ≤ (1 + 1/n) ∫_0^∞ f_i(x) B(dx),  i = 1, ..., n.   (A.38)

We first show that for each f ∈ E and each n,

    ∫_0^∞ f(x) C_{r,n}(dx) → ∫_0^∞ f(x) D_n(dx),  r → ∞.                    (A.39)

Indeed, if f(x) = e^{ax}, then for B_r = E_r(r/z) (the Erlang distribution approximating the one-point distribution at z),

    ∫_0^∞ f(x) B_r(dx) = (1 − az/r)^{−r} → e^{az} = f(z),  r → ∞,

and the case of a general f ∈ E then follows from the definition of the class E and a uniform integrability argument. Now returning to the limsup assertion: by (A.39), we may choose r(n) such that

    ∫_0^∞ f_i(x) C_{r(n),n}(dx) ≤ (1 + 1/n) ∫_0^∞ f_i(x) D_n(dx),  i = 1, ..., n,

which combined with (A.38) yields the assertion with B_n = C_{r(n),n}. □

Corollary A5.16 To a given distribution B on (0, ∞), there is a sequence {B_n} of phase-type distributions such that B_n → B weakly as n → ∞ and all moments converge: ∫_0^∞ x^i B_n(dx) → ∫_0^∞ x^i B(dx), i = 1, 2, ....
In compound Poisson risk processes with arrival intensity β and claim size distribution B satisfying βμ_B < 1, the adjustment coefficient γ = γ(B, β) is defined as the unique solution γ > 0 of B̂[γ] = 1 + γ/β. The adjustment coefficient is a fundamental quantity, and therefore the following result is highly relevant as support for phase-type assumptions in risk theory:

Corollary A5.17 To a given β > 0 and a given distribution B on (0, ∞) with B̂[γ + ε] < ∞ for some ε > 0, where γ = γ(B, β), there is a sequence {B_n} of phase-type distributions such that B_n → B weakly as n → ∞ and γ_n → γ, where γ_n = γ(B_n, β).

Proof Let f_i(x) = e^{(γ+ε_i)x} for some sequence {ε_i} with ε_i ∈ (0, ε) and ε_i ↓ 0, and choose {B_n} according to Theorem A5.15. If ε_i > 0, then B̂_n[γ + ε_i] → B̂[γ + ε_i] > 1 + (γ + ε_i)/β implies that γ_n ≤ γ + ε_i for all sufficiently large n. Thus limsup γ_n ≤ γ; liminf ≥ is proved similarly. □

We state without proof the following result:

Corollary A5.18 In the setting of Corollary A5.17, one can obtain γ(B_n, β) = γ for all n.

Notes and references Theorem A5.14 is classical; the remaining results may be slightly stronger than those given in the literature, but are certainly not unexpected.

5e Phase-type fitting

As has been mentioned a number of times already, there is substantial advantage in assuming the claim sizes to be phase-type when one wants to compute ruin probabilities. For practical purposes, the problem thus arises of how to fit a phase-type distribution B to a given set of data ζ_1, ..., ζ_N or a given distribution B_0. The present section is a survey of some of the available approaches and software for implementing this. We shall formulate the problem in the slightly broader setting of fitting a phase-type distribution to a given set of data ζ_1, ..., ζ_N. This is motivated in part from the fact that a number of non-phase-type distributions like the lognormal, the log-gamma or the Weibull have been argued to provide adequate descriptions of claim size distributions, and in part from the fact that many of the algorithms that we describe below have been formulated within the setup of fitting distributions to data.
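Once a phase-type distribution with representation (α, T) is in hand, quantities such as the adjustment coefficient above are computable: B̂[s] = α(−T − sI)^{−1}t with t = −Te, so the Lundberg equation B̂[γ] = 1 + γ/β can be solved by any scalar root finder. A sketch assuming NumPy/SciPy; the one-phase case used below is the exponential distribution with rate δ, for which γ = δ − β is classical:

```python
import numpy as np
from scipy.optimize import brentq

def mgf(s, alpha, T):
    """Moment generating function of a phase-type distribution."""
    t = -T @ np.ones(len(alpha))
    return alpha @ np.linalg.inv(-T - s * np.eye(len(alpha))) @ t

beta = 1.0
alpha = np.array([1.0]); T = np.array([[-3.0]])   # exponential(3), beta*mu_B = 1/3 < 1
gam = brentq(lambda s: mgf(s, alpha, T) - (1 + s / beta), 1e-9, 2.999)
print(gam)   # ≈ 2.0, i.e. delta - beta
```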
However, from a more conceptual point of view the two sets of problems are hardly different: an equivalent representation of a set of data ζ_1, ..., ζ_N is the empirical distribution B_e, giving mass 1/N to each ζ_i. Of course, one could argue that the results of the preceding section concerning phase-type approximation contain a solution to our problem: given B_0 (or B_e), we have constructed a sequence {B_n} of phase-type distributions such that B_n → B_0, and as fitted distribution we may take B_n for some suitable large n. The problem is that the constructions of {B_n} are not economical: the number of phases grows rapidly, and in practice this sets a limitation to the usefulness (the curse of dimensionality; we do not want to perform matrix calculus in hundreds or thousands of dimensions).

A number of approaches restrict the phase-type distribution to a suitable class of mixtures of Erlang distributions. The earliest such reference is Bux & Herzog [85], who assumed that the Erlang distributions have the same rate parameter and used a nonlinear programming approach; the constraints were the exact fit of the two first moments, and the objective function to be minimized involved the deviation of the empirical and fitted c.d.f.'s at a number of selected points. Johnson & Taaffe (see e.g. [202], [216]) considered a mixture of two Erlangs (with different rates) and matched (when possible) the first three moments. In a series of papers (e.g. [317]), Schmickler (the MEDA package) has considered an extension of this setup, where more than two Erlangs are allowed and, in addition to the exact matching of the first three moments, a more general deviation measure is minimized (e.g. the L_1 distance between the c.d.f.'s).

The characteristic of all of these methods is that even though the number of parameters may be low (e.g. three for a mixture of two Erlangs), the number of phases required for a good fit will typically be much larger, and this is what matters when using phase-type distributions as a computational vehicle in, say, renewal theory, risk theory, reliability or queueing theory. It seems therefore a key issue to develop methods allowing for a more general phase diagram, and we next describe two such approaches, which also have the feature of being based upon the traditional statistical tool of maximum likelihood.

A method developed by Bobbio and co-workers (see e.g. [70]) restricts attention to acyclic phase-type distributions, defined by the absence of loops in the phase diagram. The likelihood function is maximized by a local linearization method allowing the use of linear programming techniques. Asmussen & Nerman [38] implemented maximum likelihood in the full class of phase-type distributions via the EM algorithm; a program package written in C for the SUN workstation or the PC is available as shareware. In practice, the methods of [70] and [38] appear to produce almost identical results; in fact, it seems open whether the restriction to the acyclic case is a severe loss of generality.
The observation is that the statistical problem would be straightforward if the whole (E_Δ-valued) phase process {J_t^(k)}_{0≤t<ζ_k} associated with each observation ζ_k was available: then the estimators would be of simple occurrence-exposure type,

    α̂_i = (1/N) Σ_{k=1}^N I(J_0^(k) = i),   t̂_{ij} = N_{ij} / T_i,   i ∈ E, j ∈ E_Δ,

where T_i = Σ_{k=1}^N ∫_0^{ζ_k} I(J_t^(k) = i) dt is the total time spent in state i and N_{ij} is the total number of jumps from i to j. The general idea of the EM algorithm ([106]) is to replace such unobserved quantities by their conditional expectations given the observations. Since these are parameter-dependent, one is led to an iterative scheme, e.g.

    t̂_{ij}^(n+1) = E_{α^(n),T^(n)}[N_{ij} | ζ_1, ..., ζ_N] / E_{α^(n),T^(n)}[T_i | ζ_1, ..., ζ_N],

and similarly for the α_i^(n+1). The crux is the computation of the conditional expectations. E.g., it is easy to see that

    E_{α^(n),T^(n)}[T_i | ζ_1, ..., ζ_N]
        = Σ_{k=1}^N ( ∫_0^{ζ_k} α^(n) e^{T^(n)x} e_i · e_i′ e^{T^(n)(ζ_k−x)} t^(n) dx ) / ( α^(n) e^{T^(n)ζ_k} t^(n) ),

and this and similar expressions are then computed by numerical solution of a set of differential equations.
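The occurrence-exposure structure of the complete-data estimators can be illustrated by simulating full phase paths. The two-phase generator below is an assumed example; in the actual EM setting, T_i and N_{ij} would be replaced by their conditional expectations given only the lifelengths ζ_1, ..., ζ_N:

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.array([[-2.0, 1.0], [0.5, -1.5]])   # assumed true phase generator
alpha = np.array([0.6, 0.4])               # assumed true initial distribution
texit = -T @ np.ones(2)                    # exit rates t_i (here [1.0, 1.0])

N = 5000
Ti = np.zeros(2)                           # total exposure per state
Nij = np.zeros((2, 3))                     # jump counts; third column = exit
init = np.zeros(2)
for _ in range(N):
    i = int(rng.choice(2, p=alpha)); init[i] += 1
    alive = True
    while alive:
        rate = -T[i, i]
        Ti[i] += rng.exponential(1.0 / rate)         # holding time in state i
        if rng.random() < texit[i] / rate:           # exit vs. internal jump
            Nij[i, 2] += 1
            alive = False
        else:
            j = 1 - i                                # only one other state here
            Nij[i, j] += 1
            i = j

alpha_hat = init / N                       # occurrence estimates of alpha_i
tij_hat = Nij / Ti[:, None]                # occurrence/exposure rate estimates
print(alpha_hat, tij_hat)                  # close to alpha and the rates of T
```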
Bibliography

[1] J. Abate, G.L. Choudhury & W. Whitt (1994) Waiting-time tail probabilities in queues with long-tail service time distributions. Queueing Systems 16, 311-338.
[2] J. Abate, G.L. Choudhury & W. Whitt (1998) Explicit M/G/1 waiting-time distributions for a class of long-tail service time distributions. Preprint, AT&T.
[3] J. Abate & W. Whitt (1992) The Fourier-series method for inverting transforms of probability distributions. Queueing Systems 10, 5-87.
[4] M. Abramowitz & I. Stegun (1972) Handbook of Mathematical Functions (10th ed.). Dover, New York.
[5] G. Alsmeyer (1991) Erneuerungstheorie. Teubner, Stuttgart.
[6] V. Anantharam (1988) How large delays build up in a GI/GI/1 queue. Queueing Systems 5, 345-368.
[7] E. Sparre Andersen (1957) On the collective theory of risk in the case of contagion between the claims. Transactions XVth International Congress of Actuaries, II, 219-229.
[8] G. Arfwedson (1954) Research in collective risk theory. Skand. Aktuar Tidskr. 37, 191-223.
[9] G. Arfwedson (1955) Research in collective risk theory. The case of equal risk sums. Skand. Aktuar Tidskr. 38, 53-100.
[10] K. Arndt (1984) On the distribution of the supremum of a random walk on a Markov chain. In: Limit Theorems and Related Problems (A.A. Borovkov, ed.), 253-267. Optimization Software, New York.
[11] S. Asmussen (1982) Conditioned limit theorems relating a random walk to its associate, with applications to risk reserve processes and the GI/G/1 queue. Adv. Appl. Probab. 14, 143-170.
[12] S. Asmussen (1984) Approximations for the probability of ruin within finite time. Scand. Act. J. 1984, 31-57; ibid. 1985, 57.
[13] S. Asmussen (1985) Conjugate processes and the simulation of ruin problems. Stoch. Proc. Appl. 20, 213-229.
[14] S. Asmussen (1987) Applied Probability and Queues. John Wiley & Sons, Chichester New York.
[15] S. Asmussen (1989a) Aspects of matrix Wiener-Hopf factorisation in applied probability. The Mathematical Scientist 14, 101-116.
[16] S. Asmussen (1989b) Risk theory in a Markovian environment. Scand. Act. J. 1989, 69-100.
[17] S. Asmussen (1991) Ladder heights and the Markov-modulated M/G/1 queue. Stoch. Proc. Appl. 37, 313-326.
[18] S. Asmussen (1992a) Phase-type representations in random walk and queueing problems. Ann. Probab. 20, 772-789.
[19] S. Asmussen (1992b) Light traffic equivalence in single server queues. Ann. Appl. Probab. 2, 555-574.
[20] S. Asmussen (1992c) Stationary distributions for fluid flow models and Markov-modulated reflected Brownian motion. Stochastic Models 11, 21-49.
[21] S. Asmussen (1995) Stationary distributions via first passage times. Advances in Queueing: Models, Methods & Problems (J. Dshalalow, ed.), 79-102. CRC Press, Boca Raton, Florida.
[22] S. Asmussen (1998a) Subexponential asymptotics for stochastic processes: extremal behaviour, stationary distributions and first passage probabilities. Ann. Appl. Probab. 8, 354-374.
[23] S. Asmussen (1998b) Extreme value theory for queues via cycle maxima. Extremes 1, 137-168.
[24] S. Asmussen (1998c) A probabilistic look at the Wiener-Hopf equation. SIAM Review 40, 189-201.
[25] S. Asmussen (1999) On the ruin problem for some adapted premium rules. Probabilistic Analysis of Rare Events (V. Kalashnikov & A. Andronov, eds.), 3-15. Riga Aviation University.
[26] S. Asmussen (2000) Matrix-analytic models and their analysis. Scand. J. Statist. 27, 193-226.
[27] S. Asmussen & K. Binswanger (1997) Simulation of ruin probabilities for subexponential claims. Astin Bulletin 27, 297-318.
[28] S. Asmussen, K. Binswanger & B. Højgaard (2000) Rare events simulation for heavy-tailed distributions. Bernoulli 6, 303-322.
[29] S. Asmussen & M. Bladt (1996) Renewal theory and queueing algorithms for matrix-exponential distributions. Matrix-Analytic Methods in Stochastic Models (A.S. Alfa & S. Chakravarty, eds.), 313-341. Marcel Dekker, New York.
[30] S. Asmussen & M. Bladt (1996) Phase-type distributions and risk processes with premiums dependent on the current reserve. Scand. Act. J. 1996, 19-36.
[31] S. Asmussen, L. Floe Henriksen & C. Klüppelberg (1994) Large claims approximations for risk processes in a Markovian environment. Stoch. Proc. Appl. 54, 29-43.
[32] S. Asmussen, A. Frey, T. Rolski & V. Schmidt (1995) Does Markov-modulation increase the risk? ASTIN Bull. 25, 49-66.
[33] S. Asmussen & B. Højgaard (1996) Finite horizon ruin probabilities for Markov-modulated risk processes with heavy tails. Th. Random Processes 2, 96-107.
[34] S. Asmussen & B. Højgaard (1999) Approximations for finite horizon ruin probabilities in the renewal model. Scand. Act. J. 1999, 106-119.
[35] S. Asmussen, B. Højgaard & M. Taksar (2000) Optimal risk control and dividend distribution policies. Example of excess-of-loss reinsurance for an insurance corporation. Finance and Stochastics 4, 299-324.
[36] S. Asmussen & C. Klüppelberg (1996) Large deviations results for subexponential tails, with applications to insurance risk. Stoch. Proc. Appl. 64, 103-125.
[37] S. Asmussen & G. Koole (1993) Marked point processes as limits of Markovian arrival streams. J. Appl. Probab. 30, 365-372.
[38] S. Asmussen, O. Nerman & M. Olsson (1996) Fitting phase-type distributions via the EM algorithm. Scand. J. Statist. 23, 419-441.
[39] S. Asmussen & H.M. Nielsen (1995) Ruin probabilities via local adjustment coefficients. J. Appl. Probab. 32, 736-755.
[40] S. Asmussen & C.A. O'Cinneide (2000/01) On the tail of the waiting time in a Markov-modulated M/G/1 queue. Oper. Res. (to appear).
[41] S. Asmussen & C.A. O'Cinneide (2000/2001) Matrix-exponential distributions [Distributions with a rational Laplace transform]. Encyclopedia of Statistical Sciences, Supplementary Volume (Kotz, Johnson & Read, eds.). Wiley, New York.
[42] S. Asmussen & D. Perry (1992) On cycle maxima, first passage problems and extreme value theory for queues. Stochastic Models 8, 421-458.
[43] S. Asmussen & S. Schock Petersen (1989) Ruin probabilities expressed in terms of storage processes. Adv. Appl. Probab. 20, 913-916.
[44] S. Asmussen & T. Rolski (1991) Computational methods in risk theory: a matrix-algorithmic approach. Insurance: Mathematics and Economics 10, 259-274.
[45] S. Asmussen & T. Rolski (1994) Risk theory in a periodic environment: Lundberg's inequality and the Cramér-Lundberg approximation. Scand. Act. J. 1994, 58-85.
[46] S. Asmussen & R.Y. Rubinstein (1995) Steady-state rare events simulation in queueing models and its complexity properties. Advances in Queueing: Models, Methods & Problems (J. Dshalalow, ed.), 429-466. CRC Press, Boca Raton, Florida.
[47] S. Asmussen & R.Y. Rubinstein (1999) Sensitivity analysis of insurance risk models. Management Science 45, 1125-1141.
[48] S. Asmussen, H. Schmidli & V. Schmidt (1999) Tail approximations for non-standard risk and queueing processes with subexponential tails. Adv. Appl. Probab. 31, 422-447.
[49] S. Asmussen & V. Schmidt (1993) The ascending ladder height distribution for a class of dependent random walks. Statistica Neerlandica 47.
[50] S. Asmussen & V. Schmidt (1995) Ladder height distributions with marks. Stoch. Proc. Appl. 58, 105-119.
[51] S. Asmussen & K. Sigman (1996) Monotone stochastic recursions and their duals. Prob. Eng. Inf. Sc. 10, 1-20.
[52] S. Asmussen & M. Taksar (1997) Controlled diffusion models for optimal dividend pay-out. Insurance: Mathematics and Economics 20, 1-15.
[53] S. Asmussen & J.L. Teugels (1996) Convergence rates for M/G/1 queues and ruin problems with heavy tails. J. Appl. Probab. 33, 1181-1190.
[54] K.B. Athreya & P. Ney (1972) Branching Processes. Springer-Verlag, Berlin.
[55] B. von Bahr (1974) Ruin probabilities expressed in terms of ladder height distributions. Scand. Act. J. 1974, 190-204.
[56] B. von Bahr (1975) Asymptotic ruin probabilities when exponential moments do not exist. Scand. Act. J. 1975, 6-10.
[57] C.T.H. Baker (1977) The Numerical Solution of Integral Equations. Clarendon Press, Oxford.
[58] O. Barndorff-Nielsen (1978) Information and Exponential Families in Statistical Theory. Wiley, Chichester.
[59] O. Barndorff-Nielsen & H. Schmidli (1995) Saddlepoint approximations for the probability of ruin in finite time. Scand. Act. J. 1995, 169-186.
[60] J.A. Beekman (1969) A ruin function approximation. Trans. Soc. Actuaries 21, 41-48, 275-279.
[61] J.A. Beekman (1974) Two Stochastic Processes. Halsted Press, New York.
[62] J.A. Beekman (1985) A series for infinite time ruin probabilities. Insurance: Mathematics and Economics 4, 129-134.
[63] A. Berman & R.J. Plemmons (1994) Nonnegative Matrices in the Mathematical Sciences. SIAM.
[64] P. Billingsley (1968) Convergence of Probability Measures. Wiley, New York.
[65] N.H. Bingham (1975) Fluctuation theory in continuous time. Adv. Appl. Probab. 7, 705-766.
[66] N.H. Bingham, C.M. Goldie & J.L. Teugels (1987) Regular Variation. Cambridge University Press, Cambridge.
[67] T. Björk & J. Grandell (1985) An insensitivity property of the ruin probability. Scand. Act. J. 1985, 148-156.
[68] T. Björk & J. Grandell (1988) Exponential inequalities for ruin probabilities in the Cox case. Scand. Act. J. 1988, 77-111.
[69] P. Bloomfield & D.R. Cox (1972) A low traffic approximation for queues. J. Appl. Probab. 9, 832-840.
[70] A. Bobbio & M. Telek (1994) A benchmark for PH estimation algorithms: results for acyclic PH. Stochastic Models 10, 661-667.
[71] P. Boogaert & V. Crijns (1987) Upper bounds on ruin probabilities in case of negative loadings and positive interest rates. Insurance: Mathematics and Economics 6, 221-232.
[72] P. Boogaert & A. de Waegenaere (1990) Simulation of ruin probabilities. Insurance: Mathematics and Economics 9, 95-99.
[73] A.A. Borovkov (1976) Asymptotic Methods in Queueing Theory. Springer-Verlag, Berlin.
[74] N.L. Bowers, Jr., H.U. Gerber, J.C. Hickman, D.A. Jones & C.J. Nesbitt (1986) Actuarial Mathematics. The Society of Actuaries, Itasca, Illinois.
[75] O.J. Boxma & J.W. Cohen (1998) The M/G/1 queue with heavy-tailed service time distribution. IEEE J. Sel. Area Commun. 16, 749-763.
[76] O.J. Boxma & J.W. Cohen (1999) Heavy-traffic analysis for the GI/G/1 queue with heavy-tailed service time distributions. Queueing Systems 33, 177-204.
[77] P. Bratley, B.L. Fox & L. Schrage (1987) A Guide to Simulation. Springer-Verlag, New York.
[78] L. Breiman (1968) Probability. Addison-Wesley, Reading.
[79] P.J. Brockwell, S.I. Resnick & R.L. Tweedie (1982) Storage processes with general release rule and additive inputs. Adv. Appl. Probab. 14, 392-433.
[80] J. van Broeck, M. Goovaerts & F. De Vylder (1986) Ordering of risks and ruin probabilities. Insurance: Mathematics and Economics 5, 35-39.
[81] J.A. Bucklew (1990) Large Deviation Techniques in Decision, Simulation and Estimation. Wiley, New York.
[82] H. Bühlmann (1970) Mathematical Methods in Risk Theory. Springer-Verlag, Berlin.
[83] D.Y. Burman & D.R. Smith (1983) Asymptotic analysis of a queueing system with bursty traffic. Bell Syst. Tech. J. 62, 1433-1453.
[84] D.Y. Burman & D.R. Smith (1986) An asymptotic analysis of a queueing system with Markov-modulated arrivals. Oper. Res. 34, 105-119.
[85] W. Bux & U. Herzog (1977) The phase concept: approximations of measured data and performance analysis. Computer Performance (K.M. Chandy & M. Reiser, eds.), 23-38. North-Holland, Amsterdam.
[86] K.L. Chung (1974) A Course in Probability Theory (2nd ed.). Academic Press, New York.
[87] E. Çinlar (1972) Markov additive processes. II. Z. Wahrscheinlichkeitsth. verw. Geb. 24, 93-121.
[88] J.W. Cohen (1982) The Single Server Queue (2nd ed.). North-Holland, Amsterdam.
[89] M. Cottrell, J.-C. Fort & G. Malgouyres (1983) Large deviations and rare events in the study of stochastic algorithms. IEEE Trans. Aut. Control AC-28, 907-920.
[90] D.R. Cox (1955) Use of complex probabilities in the theory of stochastic processes. Proc. Cambr. Philos. Soc. 51, 313-319.
[91] H. Cramér (1930) On the Mathematical Theory of Risk. Skandia Jubilee Volume, Stockholm.
[92] H. Cramér (1955) Collective risk theory. The Jubilee volume of Forsakringsbolaget Skandia, Stockholm.
[93] K. Croux & N. Veraverbeke (1990) Nonparametric estimators for the probability of ruin. Insurance: Mathematics and Economics 9, 127-130.
[94] M. Csörgő & J. Steinebach (1991) On the estimation of the adjustment coefficient in risk theory via intermediate order statistics. Insurance: Mathematics and Economics 10, 37-50.
[95] M. Csörgő & J.L. Teugels (1990) Empirical Laplace transform and approximation of compound distributions. J. Appl. Probab. 27, 88-101.
[96] D.J. Daley & T. Rolski (1984) A light traffic approximation for a single server queue. Math. Oper. Res. 9, 624-628.
[97] D.J. Daley & T. Rolski (1991) Light traffic approximations in queues. Math. Oper. Res. 16, 57-71.
[98] A. Dassios & P. Embrechts (1989) Martingales and insurance risk. Stochastic Models 5, 181-217.
[99] R. Davidson (1946) On the ruin problem of collective risk theory under the assumption of a variable safety loading (in Swedish). Forsäkringsmatematiska Studier Tillägnade Filip Lundberg, Stockholm. English version published in Skand. Aktuar. Tidskr. Suppl. 1969, 70-83.
[100] R. van Dawen (1986) Ein einfacher Beweis für Ohlins Lemma. Blätter der deutschen Gesellschaft für Versicherungsmathematik XVII, 435-436.
[101] C.D. Daykin, T. Pentikäinen & M. Pesonen (1994) Practical Risk Theory for Actuaries. Chapman & Hall, London.
[102] P. Deheuvels & J. Steinebach (1990) On some alternative estimates of the adjustment coefficient in risk theory. Scand. Act. J. 1990, 135-159.
[103] F. Delbaen & J. Haezendonck (1985) Inversed martingales in risk theory. Insurance: Mathematics and Economics 4.
[104] F. Delbaen & J. Haezendonck (1987) Classical risk theory in an economic environment. Insurance: Mathematics and Economics 6, 85-116.
[105] A. Dembo & O. Zeitouni (1993) Large Deviations Techniques and Applications. Jones and Bartlett, Boston.
[106] A.P. Dempster, N.M. Laird & D.B. Rubin (1977) Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Statist. Soc. B 39, 1-38.
[107] L. Devroye (1986) Non-Uniform Random Variate Generation. Springer-Verlag, New York.
[108] F. De Vylder (1978) A practical solution to the problem of ultimate ruin probability. Scand. Act. J. 1978, 114-119.
[109] F. De Vylder (1996) Advanced Risk Theory. Editions de l'Université de Bruxelles.
[110] F. De Vylder & M. Goovaerts (1984) Bounds for classical ruin probabilities. Insurance: Mathematics and Economics 3, 121-131.
[111] F. De Vylder & M. Goovaerts (1988) Recursive calculation of finite-time ruin probabilities. Insurance: Mathematics and Economics 7, 1-8.
[112] F.
N. Steinebach (1990) On some alternative estimators of the adjustment coefficient in risk theory. Insurance: Mathematics and Economics 6. Math. 201206. [109] F. Dempster. Insurance: Mathematics and Economics 4.M. Dembo & O. J. De Vylder (1977) A new proof of a known result in risk theory. [96] D. Dassios & P. J. Act. Rubin (1977) Maximum likelihood from incomplete data via the EM algorithm. Pentikainen & E. Statist. Actuarial J. Roy. Boston. [105] A. Rolski (1984) A light traffic approximation for a single server queue.
J. J. Scand.R.B.K. 118.C. [130] P. 191207. Dickson & J. Act. [124] N. Springer . Insurance: Mathematics and Economics 7. Scand. J. Dickson & C.S. Taylor ( 1975) A diffusion approximation for the ruin probability with compounding assets. Dickson & C. J. 1993 . J. Scand.C. Schmidli (1993) Finitetime Lundberg inequalities in the Cox case . Soc. Oper.M.M. Proc. 4962. Ruin estimates for large claims . 1994 . Insurance : Mathematics and Economics 11. 177192. J. Gray (1984) Approximations to the ruin probability in the presence of an upper absorbing barrier. Dufresne & H. 1984 .R. [126] F. [118] D. Act.J. Waters (1999) Ruin probabilities wih compounding.M. and the amount of claim causing ruin. 174186. [123] E.W. Astin Bulletin 21.M.M. [131] P.C. O'Connell (1995) Large deviations and overflow probabilities for the general singleserver queue. Emanuel . H. 105115. Gray (1984) Exact solutions for ruin probability in the presence of an upper absorbing barrier. Djehiche (1993) A large deviation estimate for ruin probabilities. 193199. 107124. 6180. Dufresne & H. 1975 . Act. Embrechts ( 1988 ). 1741. Harrison & A.U.C. Berlin Gottingen Heidelberg. [128] E. [121] D.M.M. 363374. [129] D . 229232. [114] D. Gerber (1988) The surpluses immediately before and at ruin. Scand. [120] D. 1984 . Act. Act. Grandell & H. Insurance : Mathematics and Economics 10. [125] F. Scand. Dickson & J. Hipp (1998) Ruin probabilities for Erlang (2) risk processes. [117] D. Dickson & H.U. with applications. Act.BIBLIOGRAPHY 369 [113] D.G.M. 4259. Regterschot (1988) Conditional PASTA. Hipp (1999) Ruin problems for phasetype(2) risk processes. Dickson & H. J. J. [119] D. van Doorn & G.R. Waters (1996 ) Reinsurance and ruin . Camb. 1. Res. Gerber (1991) Risk theory for the compound Poisson process that is perturbed by a diffusion. Dickson (1992) On the distribution of the surplus prior to ruin . Dickson (1994) An upper bound for the probability of ultimate ruin.M. Gerber & E. 2000 . 269274.C. [122] B. Scand. 
.C. [115] D. Dynkin (1965) Markov Processes I. Insurance: Mathematics and Economics 19. British Actuarial J. J. [116] D. 3745.M. 251262. [127] F. Dickson (1995) A review of Panjer's recursion formula and its applications.C. 1993 . 5159. Math. 147167. Insurance : Mathematics and Economics 22.C.C. Shiu (1991) Risk theory with the Gamma process. Embrechts .U. Insurance : Mathematics and Economics 7. Letters 7. Philos.R. Duffield & N. Insurance: Mathematics and Economics 25.C.J. Act. 131138. Scand. Dufresne.
Schmidt (1992) An insensitivity property of ladder height distributions. Esary. Probab. Appl. Heidelberg. Embrechts & J. Konig. Fuh & T. Insurance: Mathematics and Economics 10. [145] P. J. Embrechts. [149] C. [137] P. Reprinted 1948 as 'The theory of probabilities and telephone conversations' in The Life and work of A. with applications. Adv. Probab. Probab. [141] F.L. Nyt Tidsskrift for Matematik B20. Proschan & D. 35. Probab. Schmidli (1994) Ruin estimation for a general insurance risk model. Insurance: Mathematics and Economics 7. Lai (1998) Wald's equations. Rel. 1998 . New York.W. 8190. Statistica Neerlandica 47. Insurance : Mathematics and Economics 1. Griibel & S. Math. J. Skand. Adv. Veraverbeke (1982) Estimates for the probability of ruin with special emphasis on the possibility of large claims . 269274. Akt. M. Pitts (1993) Some applications of the fast Fourier transform in insurance mathematics . Kliippelberg & T. Embrechts. [150] H. Scand.370 BIBLIOGRAPHY [132] P.W. Erlang (1909) Sandsynlighedsregning og telefonsamtaler . Franken. Springer. [134] P. 181190. D. [139] A. Danish Academy Tech. J. Embrechts & T. Wiley. Jensen. Adv. 616624.A. On the excursions of Markov processes in classical duality. Ann.L. Wiley. Appl.D. 159178. Mikosch (1997) Modelling Extremal Events for Finance and Insurance. first passage times and moments of ladder variables in Markov random walks. J. R. 29. 5572. Fuh (1997) Corrected ruin probabilities for ruin probabilities in Markov random walks.D. 17. [135] P. 29. Feller (1966) An Introduction to Probability Theory and its Applications I (3nd ed. [133] P. F. Appl. Mikosch (1991). 26. Fitzsimmons (1987). Astin Bulletin 16. Probab. Schmidt (1982) Queues and Point Processes. . Probab. U. 14661474. 695712. 623637. [148] C. Act. Walkup (1967) Association of random variables. 175195. Frees (1986) Nonparametric estimation of the probability of ruin. 38. Wiley. Tidsskr.K. [138] P. Arndt & V. 
Esscher (1932) On the probability function in the collective theory of risk. 131137. Embrechts.). Trans. Furrer (1998) Risk processes perturbed by astable Levy motion. Th. Erlang 2. New York. [140] J. Statist. Appl. A bootstrap procedure for estimating the adjustment coefficient. 5975. 404422. [136] P. Fields 75. 566580. [147] M. [143] W. Teugels (1985). Appl. C. Maejima & J. Sci. Embrechts & N.M.D. [144] P. Frenz & V. 3339. Approximations for compound Poisson and Polya processes. [142] W.J. [146] E.).K. Embrechts & H. 5974. Feller (1971) An Introduction to Probability Theory and its Applications II (2nd ed. Villasenor (1988) Ruin estimates for large claims.
BIBLIOGRAPHY 371 [151] H. Gerber (1981) On the probability of ruin in the presence of a linear dividend barrier. Kluwer. Griibel (1996) Perpetuities with thin tails. de Vylder & J. Schweiz. van Heerwarden & T. Gerber (1988) Mathematical fun with ruin theory. Appl. 1977. R. 3752. Furrer & H. [153] H. Z. Ver. Vers. Probab. [156] H. Insurance : Mathematics and Economics 7. [159] H. Boston. Probab. Gerber (1973) Martingales in risk theory. Grandell (1978) A remark on 'A class of approximations of ruin probabilities'. NorthHolland. Glasserman & S. Appl. Kou (1995) Limits of first passage times to rare sets in regenerative processes. University of Pennsylvania. Gerber (1971) Der Einfluss von Zins auf die Ruinwahrscheinlichkeit. F. Appl. Furrer. contained in the authors PhD thesis. Scand. J. [166] M. [158] H.V. . [163] P. Vers.U. 1978 . [155] H. Gnedenko & I. 28. Life Insurance Mathematics.. Gerber (1979) An Introduction to Mathematical Risk Theory.G. Basel. Gani. London. [157] H. J. eds. 205210. Scand. Amsterdam. Mitt.).. Scand. Schmidli (1994) Exponential inequalities for ruin probabilities of risk processes perturbed by diffusion. 73. Goldie & R. 463480. Glynn & W. Glasserman (1991) Gradient Estimation via Perturbation Analysis.S. 5. 105115. Unpublished. Goovaerts. 31A. Amsterdam. Kovalenko (1989) Introduction to Queueing Theory (2nd ed. S. Whitt (1994) Logarithmic asymptotics for steadystate tail probabilities in a singleserver queue. Insurance: Mathematics and Economics 15. Probab. Insurance: Mathematics and Economics 20. [152] H. Dordrecht. Act. A. J. NorthHolland. Springer. Galambos & J. Math.J. [162] P. Act.E. [168] J.U. 7778. Bauwelinckx (1990) Effective Actuarial Methods. Suppl. [160] H. Grandell (1977) A class of approximations of ruin probabilities. 205216. 1523.U. 131156.M. [167] C.N.U. [161] P. 424445. Goovaerts. Skand. Act.71. Ann.U. Huebner Foundation Monographs. Adv. 
Gerber (1970) An extension of the renewal equation and its application in the collective theory of risk. J. 97114. Schweiz.J.U. [154] H.W. 6370. [164] B.).U. [165] M. Aktuarietidskrift 1970 . Gerber (1986). Birkhauser . [169] J. In Studies in Applied Probability (J. Kaas. Math. Weron (1997) Stable Levy motion approximation in collective risk theory. Michna & A. 2336. Furrer (1996) A note on the convergence of the infinitetime ruin probabilities when the weak approximation is astable Levy motion. Haezendonck (1984) Insurance Premiums. Mitt. 1981 . Ver. ETH Zurich.
Harrison & A. Harrison (1977) Ruin problems with compounding assets . Probability Theory and Mathematical Statistics (I. [179] R. Res. 400409. E. Griibel & R. Resnick (1977) The recurrence classification of risk and storage processes .372 BIBLIOGRAPHY [170] J. Applied Statistics 40. 3041. Lemoine (1977) Limit theorems for periodic queues. 14. Stoch. Statistical Algorithm 265.V.131135. Winter School on Stochastic Analysis and Appl . [171] J. 5 . Probab. 1. Hermesmeier (1999) Computation of compound distributions I: aliasing errors and exponential tilting. 14. [176] B. Tinbergen Institute Research Series 20. Theory and Applications. 33. [187] J. [182] A. Opns .and Sozialforschung 6. ed. Segerdahl (1971) A comparison of some approximations of ruin probabilities. 332. 143158. M. [180] R. . [186] J. Grandell ( 1992) Finite time ruin probabilities and martingales .J.243255. M. Korolyuk (1969) On the joint distribution of a process with stationary increments and its maximum . Proc. Opns . AkademieVerlag. [189] A. 7590. [177] B. Matem. Appl. [181] D. [185] J. Ibragimov. 6779. Silvestrov (2000) Cram€rLundberg approximations for nonlinearly perturbed risk processes . [172] J. Astin Bulletin 29. 54. Skand. M. Grandell (1979) Empirical bounds for ruin probabilities. Preprint. 167176.S. 566576. [173] J. Grandell (1999) Simple approximations of ruin functions. Th. van Heerwarden (1991) Ordering of Risks: Theory and Actuarial Applications.). Gyllenberg & D. Res. Resnick (1976 ) The stationary distribution and first exit probabilities of a storage process with general release rule . J. Appl. Stoch. Chapman & Hall. M. Appl.I.I. Aktuar Tidskr. [174] J. Gut (1988 ) Stopped Random Walks . 8. Proc . SpringerVerlag. Gusak & V. 5766. Liet. [184] H . New York. [178] B. Proc. 355365. Grandell & C. Probab.. [175] J. 197214. KTH. Math. Archiv v. Harrison & S. Grigelionis ( 1992) On Lundberg inequalities in a Markovian environment. Grnbel (1991) G/G/1 via FFT. [183] M. 
mathematische Wirtschaft . Insurance: Mathematics and Economics 26. 347358. Rink. Appl.A.O. Grigelionis ( 1993) Twosided Lundberg inequalities in a Markovian environment. Hadwiger (1940) Uber die Wahrscheinlichkeit des Ruins bei einer grossen Zahl von Geschiiften . Grandell (1997) Mixed Poisson Processes. SpringerVerlag. Grigelionis (1996) Lundbergtype stochastic processes. Harrison & S. Informatica 2. Grandell (1990) Aspects of Risk Theory. Amsterdam. Math. Berlin. [188] J. 3 .
Wiley. Chalmers University of Technology and the University of Goteborg. [205] T. Insurance: Mathematics and Economics 5. 19. [196] C. Taksar (2000) Optimal proportional reinsurance policies for diffusion models. [191] W.A. Act. Ann. 285292. Heidelberger (1995) Fast simulation of rare events in queueing and reliability models.V. 285293. 5770.M. Statist. 166180. ACM TOMACS 6. 1990 . Probab. J. Electronic Letters 35. eds. Scand. Iversen & L. Huskova. Michel (1990) Risikotheorie: Stochastische Modelle and Statistische Methoden. Mandl & M. Asmussen & 0. Act. Nerman (1992) EMPHT . Prob. Hipp (1989b) Estimators and bootstrap confidence intervals for ruin probabilities. Astin Bulletin 19.L. 123151. [208] V. No. Hipp (1989a) Efficient estimators for ruin probabilities.L. [202] 0. on Asymptotic Statistics (P. J. New York. [195] B. 6. Geb. [197] C. 1998 .. [193] U. 18. Horvath & E. Foruth Prague Symp. [203] T. Appl. [201] L. Hogg & S. Ann. Math. 377381. Hermann (1965) Bin Approximationssatz fiir Verteilungen stationarer zufalliger Punktfolgen. 12981310. Wahrscheinlichkeitsth. 3. Hojgaard & M. Iglehart (1969) Diffusion approximations in collective risk theory. Hesselager (1990) Some results on optimal reinsurance in terms of the adjustment coefficient. Heilmann (1987) Grundbegriffe der Risikotheorie. Z. 378389. [192] U.V. Probab. Staalhagen (1999) Waiting time distributions in M/D/1 queueing systems. Hogan (1986) Comment on corrected diffusion approximations in certain random walk problems. Hill (1975) A simple general approach to inference about the tail of a distribution. Karlsruhe. Probab. Hoglund (1991). 23. [204] T.B. 8996. 305313. The ruin problem for finite Markov chains. [198] C. [200] M. [194] 0. Insurance: Mathematics and Economics 5. verve. Studies in Statistical Quality Control and Reliability 1992 :4. Hipp & R. [206] B. [199] R. Proc. [207] D. Verlag Versicherungswirtschaft e. J.R. 11631174. .BIBLIOGRAPHY 373 [190] P. J. Ann. 25. 
Hoglund (1974) Central limit theorems and statistical inference for Markov chains. 29. Scand. Appl. Herkenrath (1986) On the estimation of the adjustment coefficient in risk theory by means of stochastic approximation procedures. Klugman (1984) Loss Distributions.). Nachrichten 30. Hoglund (1990) An asymptotic expression for the probability of ruin within finite time.a program for fitting phasetype distributions. Versicherungswirtschaft. Mathematical Statistics. Karlsruhe. S. 8095. Haggstrom. 4385. 259268. Willekens (1986) Estimates for the probability of ruin starting with a large initial reserve.
Copenhagen. Proc. [225] J. Probab. 4151. 3. 711743. Kemperman (1961) The Passage Problem for a Markov Chain. Munksgaard. Statist. [215] J. Stochastic Models 1. Kalashnikov (1997) Geometric Sums: Bounds for Rare Event with Applications. Astin Bull. Scand.M. [211] J. Johnson & M. [218] V. Clarendon Press.G. 31.H. 61. [217] R. Cambridge Philos. 239256. Janssen & J. 283306. Addenda to for processes defined on a finite Markov chain. 259281. 118. [220] V.R. Keilson & D. Academic Press. ibid. Soc. 8793.G. Soc. 547567. J.A. 35. Chicago. Princeton. 1996. Act. 60. 63. Kalashnikov (1999) Bounds for ruin probabilities in the presence of large claims and their comparison. Insurance: Mathematics and Economics 5. Taaffe (1989/90) Matching moments to phase distributions.G. New York. Kao (1988) Computing the phasetype renewal and related functions. 173190. 15. [222] S. Jensen (1995) Saddle Point Approximations. ibid. Reinhard (1985) Probabilities de ruine pour une classe de modeles de risque semiMarkoviens. [210] D. Kalashnikov (1996) Twosided bounds of ruin probabilities. [226] J.J. J. Ann. Taylor (1981) A Second Course in Stochastic Processes. [221] E. Karlin & H. N. Lazar (1998) Subexponential asymptotics of a Markovmodulated random walk with a queueing application. Manuscript.C. Keilson & D.374 BIBLIOGRAPHY [209] D. Cambridge Philos. Proc. 11.M. Keilson (1966) A limit theorem for passage times in ergodic regenerative processes. 187193. 165167. Jelenkovic & A.P. 561563. NEC Research Institute. Wishart (1964). Amer. N. Keilson & D. Act. Technometrics 30.L. [214] A. 116129. J.M. Kennedy (1994) Understanding the WienerHopf factorization for the simple random walk. Oxford. Janssen (1980) Some transient results on the M/SM/1 special semiMarkov model in risk and queueing theories.B. Math. Goovaerts (1986) General bound on ruin probabilities. [213] P. Stochastic Models 5. Jagerman (1985) Certain Volterra integral equations arising in queueing. J. 
Wishart (1964) A central limit theorem for processes defined on a finite Markov chain. [228] J. 866870. Probab. University of Chicago Press. Kluwer. Soc. 37. Proc. Jensen (1953) A Distribution Model Applicable to Economics. [216] M. Jagerman (1991) Analytical and numerical solution of Volterra integral equations with applications to queues. Appl. [224] J. Appl. 325247. Astin Bull.J. Kaas & M. [227] J. 6. 6. .M. Cambridge Philos.M. [212] J. [219] V. Wishart (1964) Boundary problems for additive processes defined on a finite Markov chain. 123133. [223] J.
Soc.J. 48. Appl. Lipsky (1992) Queueing Theory . J. Appl. Seminaire de Probabilties X . Oxford 12. J. Appl. Scand.BIBLIOGRAPHY 375 [229] J. Probab. . [239] H. Nyrhinen (1992b) On asymptotically efficient simulation of ruin probabilities in a Markovian environment. Lucantoni (1991) New results on the single server queue with a batch Markovian arrival process. B24. 133135 (in Russian). Latouche & V. Insurance : Mathematics and Economics 12. Kingman (1962) On queues in heavy traffic. [236] C. Quart. 359361. 18. Roy. Probab. J.F. Statist. SIAM. Insurance: Mathematics and Economics 8. 6075. Proc. [244] T. Macmillan. [248] D. Soc.C. I. Act. 132141. J.F. Ramaswami (1999) Introduction to MatrixAnalytic Methods in Stochastic Modelling. Kunita (1976) Absolute continuity of Markov processes. [230] J. Kliippelberg & U. Mikosch (1995) Explosive Poisson shot noise with applications to risk retention. SpringerVerlag. 60 . Klnppelberg (1989) Estimation of ruin probabilities by means of hazard rates. [234] C. Bernoulli 1 . Mikosch (1995) Modelling delay in claim settlement. Lemoine (1989) Waiting time and workload in queues with periodic Poisson input. [246] D. Appl.S. [233] C. Probab. 279285. Prob. 25.F. Lehtonen & H. 858874. Lindley (1952) The theory of queues with a single server. Klnppelberg & T. Sorensen (1997) Exponential Families of Stochastic Processes. 383392. Klnppelberg (1993) Asymptotic ordering of risks and ruin probabilities. [232] C. 889900. 259264. 4958. [245] T. Klnppelberg & T. Lecture Notes in Mathematics 511. Actuarial J. 390397. Scand. [238] V.J. [241] G.P. Kingman (1961) A convexity property of positive matrices . Penev & A. Philos. Soc. Stadtmiiller (1998) Ruin probabilities in the presence of heavytails and interest rates . 125147. [242] A. Camb. Math. Turbin (1973) An asymptotic expansion for the absorption time of a Markov chain distribution. J. Quart. [231] J. Scand. Lehtonen & H. 146. 1995 . J. 277289. Korolyuk. SpringerVerlag. Proc. [247] L. Cambr. 
(235] C. Philos. 24. 1998 . 283284.C. Kliippelberg (1988) Subexponential distributions and integrated tails.. Kuchler & M. 4477. J. 154168. [240] U. New York. [243] A.F. 26. [237] C. Stochastic Models 7.C. Lemoine (1981) On queues with periodic Poisson input. Kingman (1964) A martingale inequality in the theory of queues. Act. Adv.a Linear Algebraic Approach. Cybernetica 4. Nyrhinen (1992a) Simulating levelcrossing probabilities by importance sampling.
376 BIBLIOGRAPHY [249] D. Math.K. 29 . 676705. [260] H. Appl. Miller (1962 ) A matrix factorization problem in the theory of random variables defined on a finite Markov chain . [256] A. 161174. 223235. Makowski ( 1994) On an elementary characterization of the increasing convex order . [264] G. Probab. [253] V. Hoist eds. 286298. Almqvist & Wiksell. 3. 834841. 1996 . Gut & J. 22. Europ. 1261 (0?)1270. F.M. J. Scand.). Malinovskii ( 1994) Corrected normal approximation for the probability of ruin within finite time.F. 15. Act. [266] P. [254] V . 11 . 429448.S. Lundberg (1926) Forsdkringsteknisk Riskutjdmning. Appl. [265] S. Act. Probab.K. Act. 32. Mammitzsch ( 1986 ) A note on the adjustment coefficient in ruin theory. 147149. Miller ( 1962 ) Absorption probabilities for sums of random variables defined on a finite Markov chain . Stockholm. 763776. Insurance : Mathematics and Economics 5.D.S. [255] V. Ann. 268285. Probab. J. Uppsala. [262] H . K. Probab. Cambridge Philos.D. Michna ( 1998) Selfsimilar processes in collective risk theory. with an application . 378406. Proc. Nummelin (1987) Markov additive processes I. [259] Z . [251] F. Res. Th. [263] M . Neuts ( 1990) A single server queue with server vacations and a class of nonrenewal arrival processes . Lundberg (1903) I Approximerad Framstdllning av Sannolikhetsfunktionen. Adv. [258] K . Soc. Appl. Stoch. Miyazawa & V. [261] H. 1994. 4859. Malinovskii ( 1996) Approximation and upper bounds on probabilities of large deviations of ruin within finite time. Probability and Mathematical Statistics (A. 370377. 2939. Eigenvalue properties and limit theorems .V.D. Nagaev (1957) Some limit theorems for stationary Markov chains. MartinL6f (1983) Entropy estimates for ruin probabilities . J. . 2. Soc. Probab. J. Lucantoni . J. Schmidt ( 1993) On ladder height distributions of general risk processes . 36.V. Ann. Appl. Cambridge Philos. J. [250] F. [257] A. MeierHellstern & M . [252] A. Ann. 
II Aterforsdkring av Kollektivrisker . Statist. Appl. Meier ( 1984). 1986 . 561592. J. A fitting algorithm for Markovmodulated Poisson processes having two arrival rates . Anal. Proc. Probab . 31. 58 . 124147. Opns. Ney & E. Moustakides ( 1999) Extension of Wald 's first lemma to Markov processes. . MartinL3f (1986) Entropy. Math. a useful concept in risk theory. Miller ( 1961 ) A convexity property in the theory of random variables defined on a finite Markov chain. Appl. Scand. 58. Scand. Englunds Boktryckeri AB.
Paulsen & H. 445454. 215223. Paulsen (1998) Ruin theory with compounding assets . [282] J . Probab. Stoch. Norberg ( 1990) Risk theory and its statistics environment . Paulsen (1993) Risk theory in a stochastic economic environment. IEICE Trans. J.K. Probab. Neuts ( 1978) Renewal processes of phase type.K. Willekens (1986) Second order behaviour of the tail of a subordinated probability distributions. Math. [284] J. 555564. Appl.F. Hamburg.F. 764779. Stoch. 316. Johns Hopkins University Press. 10081026. Insurance: Mathematics and Economics 20. Stoch. Proc. 71. [277] J. 12. 339353. Paulsen ( 1998) Sharp conditions for certain ruin in a risk process with stochastic return on investments. Nyrhinen (1999) Large deviations for the time of ruin. Naval Research Logistics Quarterly 25. Appl. Omey & E. 327361.F. Appl. 36. Models 6. Adv. [274] H . Pakes (1975) On the tail of waiting time distributions. Appl.A. 157. Statistics 21. 30. Proc. Paulsen & H. E75B . 75. [273] R. Probab. 311342. Appl. Appl. 249266. New York.G. J. [276] C. Proc. Willekens (1987) Second order behaviour of distributions subordinate to a distribution with finite mean . 21. 46. [270] M . Marcel Dekker. O'Cinneide (1990) Characterization of phasetype distributions. Appl. Neuts ( 1989) Structured Stochastic Matrices of the M/G/1 Type and their Applications. [269] M .BIBLIOGRAPHY 377 [267] M . Gjessing (1997a) Optimal choice of dividend barriers for a risk process with stochastic return on investments. Proc. Insurance: Mathematics and Economics 22. 135148. Gjessing (1997b) Present value distributions with applications to ruin theory and stochastic equations. 12551265. Probab. [275] H.F. . 123144. [278] E. [279] E. Omey & E. Sem. [268] M . Neuts (1992) Models based on the Markovian arrival process. 273299. Nyrhinen ( 1998 ) Rough descriptions of ruin for a general class of surplus processes. Baltimore . [283] J. 733746. J. [272] J . [280] A. Astin Bulletin 5 . [281] J. Commun. [271] M. Stoch. [285] J. 16. 
London.a survey. Neveu ( 1961 ) Une generalisation des processus a accroisances independantes. Ohlin (1969) On a class of measures for dispersion with application to optimal insurance. Stoch. Stochastic Models 3 . Abh. Neuts (1977) A versatile Markovian point process. Appl.F. Neuts ( 1981 ) MatrixGeometric Solutions in Stochastic Models.
[291] S.U. Ripley (1987) Stochastic Simulation . 337347.A. Heidelberg. Statist. 965985. Probab. 19. [289] C. Res. On the ruin problem of collective risk theory. [304] B. Paulsen & H. Appl. Rao (1965 ) Linear Statistical Inference and Its Applications. [294] N. Insurance: Mathematics and Economics 3. 820827. Wiley. 12. [303] S. 46. [299] C.M. [296] N. and Dams. 222261. Ann. Adv. 139143. Pitts (1994) Nonparametric estimation of compound distributions with applications in insurance. Pitts.K.J. Prabhu (1965) Queues and Inventories. Act. [292] S. Prabhu (1980) Stochastic Storage Processes. Oper. Astin Bulletin XIV. Soc. 390413. 4568. Aktuar. 32. 4. 691707. [300] C. Ann.G.R. 537555. 2343. J. Appl. [305] C. Inst. 215246. Skand. 465483. [287] F. New York. Probab.U. 117133. Reinhard (1984) On a class of semiMarkov risk models obtained as classical risk models in a markovian environment. Math. Math.M. Austr. Probab. New York. R. J. 1989 . Probab .H. 29. 568576. Ann. [302] J.U. 29A.J. Philipson (1968) A review of the collective theory of risk. [297] R.G. Math. . Pitman (1980) Subexponential distribution functions. J. Embrechts (1996) Confidence bounds for the adjustment coefficient. Appl. [295] N. 30. 147159. Math.. Opns. Ramsay (1984) The asymptotic ruin problem when the healthy and sick periods form an alternating renewal process. Schock Petersen (1989) Calculation of ruin probabilities when the premium depends on the current reserve. 61. Appl. Prabhu & Zhu (1989) Markovmodulated queueing systems. Griibel & P. Prabhu (1961). [290] E. Probab. Scand.U. Res. Math.M. Adv. [298] V. Tidskr. Queues. [288] S. Wiley. Insurance: Mathematics and Economics 16. Pyke (1959) The supremum and infimum of the Poisson process. Adv. Springer. [301] G.L.K. Statist. Rolski (1987) Approximation of periodic queues. 28. Wiley. Insurance Risk. Queueing Systems 5. 11. Berlin. Stat. Pellerey (1995) On the preservation of some orderings of risks under convolution. 
Rogers (1994) Fluid models in queueing theory and WienerHopf factorisation of Markov chains. 235243. Ann. 757764. [293] N. [306] T. Gjessing (1997c) Ruin theory with stochastic return on investments. de Smit (1986) The queue M/G/1 with Markovmodulated arrivals and services. Resnick & G. 2330. Samorodnitsky (1997) Performance degradation in a single server exponential queueing model with long range dependence. Regterschot & J. Ramaswami (1980) The N/G/1 queue and its detailed analysis. Appl.378 BIBLIOGRAPHY [286] J.
M. Stoch. Wiley. [322] H. [308] S. Rubinstein & B. Ann. Science Culture Technology Publishing . 5. [312] R. [325] H . . 145165. Schlegel ( 1998) Ruin probabilities in perturbed risk models. Stochastic Models 6. [309] H.L. Scand. [310] R. 365388. Schmickler ( 1992 ) MEDA : mixed Erlang distributions as phasetype representations of empirical distribution functions . 7. [314] T. Schmidli ( 1999b) Perturbed risk processes: a review . 11. Adv.BIBLIOGRAPHY 379 [307] T. 4857. Schmidt & J. Teugels (1999) Stochastic Processes for Insurance and Finance. Th. 795829. Shapiro (1993) Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization via the Score Function Method. Schmidli ( 1997a) Estimation of the Lundberg coefficient for a Markov modulated risk model .Y. [320] H. J.Y. Astin Bulletin 29. 417421. 227244. 93104. Melamed (1998) Modern Simulation and Modelling. Stochastic Models 10. Probab. [319] H. Singapore. 131156. [318] H. 21. Schmidli ( 1996) Martingales and insurance risk. Ryden (1994) Parameter estimation for Markov modulated Poisson processes. Rossberg & GSiegel (1974) Die Bedeutung von Kingmans Integralgleichungen bei der Approximation der stationiiren Wartezeitverteilung im Modell GI/C/1 mit and ohne Verzogerung beim Beginn einer Beschiiftigungsperiode. V. Rudemo (1973) Point processes generated by transitions of a Markov chain. [324] H . 262286. J. Wiley. [321] H. Data Anal. Wiley. Proc. Schmidli ( 1997b) An extension to the renewal theorem and an application in risk theory. 431447 [316] S. Lecture Notes of the 8th Summer School on Probability and Mathematical Statistics ( Varna). 121133. Operationsforsch. Scand. 5. 2001 . Appi. 1997. 4068. 687699. Act. Statist. Ryden (1996) An EM algorithm for estimation in Markovmodulated Poisson processes. H. Probab. [317] L. Math. J. Rolski. Ross (1974) Bounds on the delay distribution in GI/G/1 queues. Wiley. 
Schmidli ( 1995 ) CramerLundberg approximations for ruin probabilities of risk processes perturbed by a diffusion . Insurance: Mathematics and Economics 16. Rubinstein & A. Schmidli ( 1999a) On the distribution of the surplus prior and at ruin. Comp. Stochastic Models 10 . 135149. 155188. [315] T. [311] R. Appl. [323] H.J. Act. [313] M. Schmidli ( 1994) Diffusion approximations for a risk process with the possibility of borrowing and interest. Schmidli ( 2001 ) Optimal proportional reinsurance policies in a dynamic setting . Rubinstein (1981) Simulation and the Monte Carlo Method. Wiley. [326] H. 5. Schmidli. Appl. Statist. Probab .Y. Insurance: Mathematics and Economics 22. Seal ( 1969) The Stochastic Theory of a Risk Business.
Index

adjustment coefficient
aggregate claims
Bessel function
Brownian motion
central limit theorem
change of measure
compound Poisson model
Cox process
Coxian distribution
Cramér-Lundberg approximation
Cramér-Lundberg model: see compound Poisson model
cumulative process
dams: see storage process
differential equation
diffusion
diffusion approximation
   corrected
duality
Edgeworth expansion
Erlang distribution
excursion
gamma distribution
heavy traffic
heavy-tailed distribution
hyperexponential distribution
integral equation
   Lindley
   renewal
   Volterra
   Wiener-Hopf
interest rate
inverse Gaussian distribution
Kronecker product and sum
ladder heights
Laplace transform
large deviations
Lévy process
life insurance
light traffic
likelihood ratio: see change of measure
Lindley integral equation
Lindley process
lognormal distribution
Lundberg conjugation
Lundberg inequality
Markov additive process
Markov modulation
martingale
matrix equation, nonlinear
matrix-exponential distribution
matrix-exponentials
multiplicative functional
NP approximation
Palm distribution
Panjer's recursion
Pareto distribution
periodicity
Perron-Frobenius theory
perturbation, see also sensitivity analysis
phase-type distribution
   terminating
Poisson process
   Markov-modulated
   nonhomogeneous
   periodic
Pollaczeck-Khinchine formula
queue
   GI/G/1
   M/D/1
   M/G/1
   M/M/1
   Markov-modulated
   periodic
random walk
rational Laplace transform, see also matrix-exponential distribution
regenerative process
regular variation
reinsurance
renewal
   equation
   model
   process
reserve-dependent premiums
Rouché roots
saddlepoint method
semi-Markov
sensitivity analysis
shot-noise process
simulation
stable process
statistics
stochastic control
stochastic ordering
storage process
subexponential distribution
time change
time-reversion
utility
waiting time
   virtual: see workload
Weibull distribution
Wiener-Hopf theory
workload
Advanced Series on Statistical Science & Applied Probability - Vol. 2

Ruin Probabilities

The book is a comprehensive treatment of classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g. for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation or periodicity. Special features of the book are the emphasis on change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas like queueing theory.

"This book is a must for anybody working in applied probability. It is a comprehensive treatment of the known results on ruin probabilities." - Short Book Reviews

ISBN 9810222939
www.worldscientific.com