Advanced Series on Statistical Science & I Applied Probability ^^^A£J

Ruin Probabilities

Seren Asmussen

World Scientific

Ruin Probabilities

ADVANCED SERIES ON STATISTICAL SCIENCE & APPLIED PROBABILITY

Editor: Ole E. Barndorff-Nielsen

Published Vol. 1: Random Walks of Infinitely Many Particles by P. Revesz Vol. 2: Ruin Probabilities by S. Asmussen Vol. 3: Essentials of Stochastic Finance : Facts, Models, Theory by Albert N. Shiryaev Vol. 4: Principles of Statistical Inference from a Neo-Fisherian Perspective by L. Pace and A. Salvan Vol. 5: Local Stereology by Eva B. Vedel Jensen Vol. 6: Elementary Stochastic Calculus - With Finance in View by T. Mikosch Vol. 7: Stochastic Methods in Hydrology: Rain, Landforms and Floods eds. O. E. Barndorff- Nielsen et al. Vol. 8: Statistical Experiments and Decisions : Asymptotic Theory by A. N. Shiryaev and V. G. Spokoiny

Ruin P robabilities

Soren Asmussen
Mathematical Statistics Centre for Mathematical Sciences Lund University

Sweden

World Scientific
Singapore • NewJersey • London • Hong Kong

Published by World Scientific Publishing Co. Pte. Ltd. P O Box 128, Fatter Road , Singapore 912805 USA office: Suite 1B, 1060 Main Street, River Edge, NJ 07661 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

Library of Congress Cataloging-in-Publication Data Asmussen, Soren

Ruin probabilities / Soren Asmussen. p. cm. -- (Advanced series on statistical science and applied probability ; vol. 2) Includes bibliographical references and index. ISBN 9810222939 (alk. paper) 1. Insurance--Mathematics. 2. Risk. I. Tide. II. Advanced series on statistical science & applied probability ; vol. 2. HG8781 .A83 2000 368'.01--dc2l 00-038176

British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.

First published 2000 Reprinted 2001

Copyright ® 2000 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

Printed by Fulsland Offset Printing (S) Pte Ltd, Singapore

Contents
Preface I ix

Introduction 1 1 The risk process . . . . . . . . . . . . . .. . . . .. .. . . . . 1 2 Claim size distributions .. . . . . . . . .. . . . . . . . . . . . 5 3 The arrival process . . . . . . . . . . . . . . . . . . . . . . . . 11 4 A summary of main results and methods . . . . .. . . . . . . 13 5 Conventions . .. . .. .. . . . . . . . . . . . . . . . . . . . . 19

II Some general tools and results 23 1 Martingales . .. . .. .. . . . . . .. . . . . . . . . . . . . . 24 2 Likelihood ratios and change of measure . . .. . . . . . .. . 26 3 Duality with other applied probability models . . .. . . . . . 30 4 Random walks in discrete or continuous time . . . . . . . . . . 33 5 Markov additive processes . . . . . . . .. . . . . . . . . . . . 39 6 The ladder height distribution . . . .. . .. .. . . . . . . . . 47
III The compound Poisson model 57 1 Introduction . . . . . . . . .. .. .. . .. .. . . . . . . 58 . . . . . . . . . . . . . . . 61 3 Special cases of the Pollaczeck-Khinchine formula . . . . . . . 62 4 Change of measure via exponential families . . . .... . .. . 67 5 Lundberg conjugation . .. . . . . . . . . . . . . . . . . . . . . 69 6 Further topics related to the adjustment coefficient .. . . . . 75 7 Various approximations for the ruin probability . . . . . . . . 79 8 Comparing the risks of different claim size distributions . . . . 83 9 Sensitivity estimates . . . . . . . . . . . . . . . . . . . . . . . 10 Estimation of the adjustment coefficient . . . . . . . . . . . . 86 93 2 The Pollaczeck-Khinchine formula

v

vi

CONTENTS

IV The probability of ruin within finite time 97 1 Exponential claims . . . . . . . . . . . . . . . . . . . . . . . . 98 2 The ruin probability with no initial reserve . . . . . . . . . . . 103 3 Laplace transforms . . . . . . . . . . . . . . . . . . . . . . . . 108 4 When does ruin occur? . . . . . . . . . . . . . . . . . . . . . . 110 5 Diffusion approximations . . . . . . . . . . . . .. . . .. . . . 117 6 Corrected diffusion approximations . . . . . . . . . . .. . . . 121 7 How does ruin occur ? . . .. . . . . . . . . . . . . . . . . . . . 127 V Renewal arrivals 131 1 Introduction .. . . . . . . . . . . . . . . . . . . . . . . . . . . 131 2 Exponential claims. The compound Poisson model with negative claims . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 3 Change of measure via exponential families . . . . . . . . . . . 137 4 The duality with queueing theory .. .. .. . . . .. . . . . . 141 VI Risk theory in a Markovian environment 145 1 Model and examples . . . . . . . . . . . .. . .. . . . . . . . 145 2 The ladder height distribution . . . . . . . . . .. . . . . . . . 152 3 Change of measure via exponential families ........... 160 4 Comparisons with the compound Poisson model ........ 168 5 The Markovian arrival process . . . . . . .. .. . . ... . . . 173 6 Risk theory in a periodic environment .. . . . .. . . . . . . . 176 7 Dual queueing models .... ... ................ 185 VII Premiums depending on the current reserve 189 1 Introduction . . . . . . . . . . . . . . . . . . . .. . . . . . . . 189 2 The model with interest . . . . . .. . . . . . . . . . .. . . . 196 3 The local adjustment coefficient. Logarithmic asymptotics . . 201 VIII Matrix-analytic methods 215 1 Definition and basic properties of phase-type distributions .. 215 2 Renewal theory . . . . . . . . . . . . . . . . . . . . . . . . . . 223 3 The compound Poisson model . . . . . . . . . .. . . . . . . . 227 4 The renewal model . . . . . . . . . . . . . . . .. . . . . . . . 229 5 Markov-modulated input . . .. . . . . . . . . . . . . . . . . . 234 6 Matrix-exponential distributions . . . . . . . . . . . .. . . . 240 7 Reserve-dependent premiums . . . . .. . . . .. . . . . . . . 244

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344 AS Complements on phase-type distributions . . . 251 2 The compound Poisson model . . . . . . . . . . . . . . . . . . . . . 304 3 Large deviations . . . . 326 Appendix 331 Al Renewal theory . . . 340 A4 Some linear algebra . . . ... . 281 2 Simulation via the Pollaczeck-Khinchine formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336 A3 Matrix-exponentials . . . . 294 XI Miscellaneous topics 297 1 The ruin problem for Bernoulli random walk and Brownian motion. . 297 2 Further applications of martingales . . . . . . . . . . . 287 4 Importance sampling for the finite horizon case . . . . . . . . . . . 292 6 Sensitivity analysis . . . 261 4 Models with dependent input . . . 259 3 The renewal model . . . . . . . . . . . .. . . . .. .. . . . . . . . . . .. . . 279 X Simulation methodology 281 1 Generalities . . . . . . . . . . . . . . . . . . . . 264 5 Finite-horizon ruin probabilities . . . . . . 290 5 Regenerative simulation . . . . . . .. . . . .. . . 350 Bibliography Index 363 383 . . . . ... . . 316 5 Principles for premium calculation . .. . . . . . . . . .CONTENTS vii IX Ruin probabilities in the presence of heavy tails 251 1 Subexponential distributions . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . 331 A2 Wiener-Hopf factorization . . . . . . . . . . ... . . . .. . . . . . . . . . . . . . . . . . . . . . 285 3 Importance sampling via Lundberg conjugation . . . . . . . . . . . . .. . . . . . . .. 271 6 Reserve-dependent premiums . . . . . . . .. . . .. 306 4 The distribution of the aggregate claims . .. .. . . . . .. . . . . . . 323 6 Reinsurance . . . . . . . . . . . . . . . . . . .. .. . . . . . . . . . . . . . . . . . . . . The two-barrier ruin problem . . . . . . . .

This page is intentionally left blank .

One reason for writing this book is a feeling that the area has in the recent years achieved a considerable mathematical maturity. I was invited to give a course on ruin probabilities at the Laboratory of Insurance Mathematics. A similar thank goes to all colleagues who encouraged me to finish the project and continued to refer to the book by Asmussen which was to appear in a year which continued to be postponed. and the series editor Ole Barndorff-Nielsen for their patience. and the result is now that the book is much more related to my own research than the initial outline. I have deliberately stayed away from discussing the practical relevance of the theory. Risk theory in general and ruin probablities in particular is traditionally considered as part of insurance mathematics. and has been an active area of research from the days of Lundberg all the way up to today. Let me take this opportunity to thank above all my publisher World Scientific Publishing Co. But the pace was much slower than expected. Thus. Apart from these remarks. Since I was to produce some hand-outs for the students anyway. but the hand-outs were written and the book was started (even a contract was signed with a deadline I do not dare to write here!). and other projects absorbed my interest. and my belief was that this could be done rather quickly. that it can only say something about very simple models and questions.Preface The most important to say about the history of this book is: it took too long time to write it! In 1991. the idea was close to expand these to a short book on the subject. University of Copenhagen. However. It has obviously not been possible to cover all subareas. the book is basically mathematical in its flavour. if the formulations occasionally give a different impression. it would not be fair not to say that the practical relevance of the area has been questioned repeatedly. As an excuse: many of these projects were related to the book. which has in particular removed one of the standard criticisms of the area. this applies to long-range dependence which is intensely studied in the neighboring ix . The course was never realized. it is not by intention. In particular.

Good luck! I have tried to be fairly exhaustive in citing references close to the text. see e. VII. For a brief orientation. read Chapter I. see in particular Michna [259]. for the effects on tail probabilities. an area which is becoming increasingly important. another by method.1-3. Chapters III-VII introduce some of the main models and give a first derivation of some of their properties.g. Here is a suggestion on how to get started with the book.g. I regret that due to time constraints.1. Chapters IX-X then go in more depth with some of the special approaches for analyzing specific models and add a number of results on the models in Chapters III-VII (also Chapter II is essentially methodological in its flavor). More recently. VI. Resnick & Samorodnitsky [303] and references therein. IX. The rest is up to your specific interests. IV. For a second reading. e. It is obvious that such a system involves a number of inconsistencies and omissions. Asmussen. A book like this can be organized in many ways. Willinger et al. http:// www.1-3.3. Hojgaard & Taksar [35] and Paulsen & Gjessing [284]. some papers not cited in the text but judged to be of interest are included in the Bibliography. 111.1-3 and XI. the first part of 11.maths . Hojgaard & Taksar [206].1-3 and IX.1-5.4a. Finally.6 (to understand the PollaczeckKhinchine formula in 111. VIII.g.1-4. Another interesting area which is not covered is dynamic control. Concerning ruin probabilities.se Lund February 2000 Soren Asmussen . VII.4-5.2. In addition. The main motivation comes from statistical data for network traffic (e. [381]). IV. for which I apologize to the reader and the authors of the many papers who ought to have been on the list. IV.x PREFACE field of queueing theory.8-9.2 more properly).5. some basic discussion can be found in the books by Biihlmann [82] and Gerber [157]. The book does not go into the broader aspects of the interface between insurance mathematics and mathematical finance. 111. incorporate 11. The present book is in between these two possibilities.lth. X. it has not been possible to incorporate more numerical examples than the few there are. In the classical setting of Cramer-Lundberg models.lth. see also Schmidli [325] and the references in Asmussen & Taksar [52]. the standard stochastic control setting of diffusion models has been considered.se/matstat / staff/asmus and I am therefore grateful to get relevant material sent by email to asmusfmaths . I intend to keep a list of misprints and remarks posted on my web page.2. One is by model.

Fig.PREFACE xi The second printing differs from the first only by minor corrections. 5.4 from Asmussen.3 are reprinted from Asmussen & Rubinstein [46] and parts of VIII. . Lund September 2001 Soren Asmussen Acknowledgements Many of the figures .1 by Bjarne Hojgaard and the table in Example 111. 1 is almost identical to Section 2 of Asmussen [26] and reprinted with permission of Blackwell Publishers. 111 . not least the more complicated ones. Parts of II. of which there are not many at this stage . Fig.6. More substantial remarks.5 from Asmussen [21] with permission from CRC Press. were produced by Lone Juul Hansen . Parts of X.8 . 3 is reprinted from Asmussen & Nielsen [39] and parts of IX.6 by my 1999 simulation class in Lund. Section VIII.6 is reprinted from Asmussen & Schmidt [49] and parts of IX. Section VII . many of which were pointed out by Hanspeter Schmidli .1 and X. supported by Center for Mathematical Physics and Stochastics (MaPhySto). Aarhus. A number of other figures were supplied by Christian Geisler Asmussen . 5 from Asmussen & Kliippelberg [36] with the permission from Elsevier Science .2 by Rafal Kulik . as well as some additional references continue to be at the web page. Schmidli & Schmidt [47] with the permission from Applied Probability Trust . IV.

This page is intentionally left blank .

They are the main topics of study of the present book.3) sup St. it is frequently more convenient to work with the claim surplus process {St}t>0 defined by St = u . (1. (1. A risk reserve process { Rt}t>o. results and topics to be studied in the rest of the book. T) as ruin probabilities with infinite horizon and finite horizon . respectively. (1. as defined in broad terms . For mathematical purposes.T) = P inf Rt < 0 I .i(u. t/i(u) = P (infRt < 0) = P (infR t < 0 t>0 t>0 The probability of ruin before time T is t. M = (1. and give a very brief summary of some of the models.1) We also refer to t/) ( u) and 0(u. is a model for the time evolution of the reserves of an insurance company. Letting T(u) = inf {t > 0 : Rt < 0} = inf It > 0 : St > u}. The probability O(u) of ultimate ruin is the probability that the reserve ever drops below zero. we introduce some general notation and terminology.Chapter I Introduction 1 The risk process In this chapter . MT = sup St.4) O<t<oo O<t<T 1 .Rt.2) (O<t<T Ro=ul. We denote throughout the initial reserve by u = Ro.

(1.5) i.2 CHAPTER I. Thus. the number Nt of arrivals in [0.T) = F (MT > u) = P(r(u) < T). we see that Nt Nt Rt = u + pt . and T1 is the time of the first claim. say. the time of arrival of the nth claim is an = T1 + • • • + Tn. We denote the interarrival times of claims by T2. 1. INTRODUCTION be the time to ruin and the maxima with infinite and finite horizon. t] is finite. Putting things together.b(u) = P (r(u) < oo) = P(M > u). .1 .7) k=1 k=1 The sample paths of {Rt} and {St} and the connection between the two processes are illustrated in Fig.. That is. T3. and Nt = min {n > 0 : 0rn+1 > t} = max {n > 0: Un < t}• The size of the nth claim is denoted by Un.i(u. (1. • Premiums flow in at rate p.6) Sofar we have not imposed any assumptions on the risk reserve process. respectively.1. (1. the following set-up will cover the vast majority of the book: • There are only finitely many claims in finite time intervals.. However.E Uk. the ruin probabilities can then alternatively be written as . Figure 1.pt. per unit time. St = E Uk .

20%..1. then M = oo a. If 77 > 0. and hence O(u) < 1 for all sufficiently large u.b(u) = 1 for all u. • Brownian motion or more general diffusions. then M < oo a. THE RISK PROCESS 3 Note that it is a matter of taste (or mathematical convenience) whether one allows {Rt} and/or {St} to continue its evolution after the time T(u) of ruin.s. but as an approximation to the risk process rather than as a model of intrinsic merit. though many results are straightforward to generalize from the compound Poisson model.(.1 Assume that (1. on Fig.1 the slope of {Rt} should depend also on the level). VII. . t -* oo. For the purpose of studying ruin probabilities this distinction is.8) holds. Some main examples of models not incorporated in the above set-up are: • Models with a premium depending on the reserve (i. rl= p-P P It is sometimes stated in the theoretical literature that the typical values of the safety loading 77 are relatively small. We study this case in Ch. however.. of course. a basic references is Gerber [127].8) The interpretation of p is as the average amount of claim per unit time. since any modeling involves some approximative assumptions. say 10% . • General Levy processes (defined as continuous time processes with stationary independent increments) where the jump component has infinite Levy measure. However. not discuss whether this actually corresponds to practice.s. one could well replace Rt by Rtnr(u) or RtA. (1. however. We shall not deal with this case either. and the basic ruin probabilities are derived in XI.e. We shall discuss Brownian motion somewhat in Chapter IV. It would appear obvious. If 77 < 0. for example. immaterial. and hence . A further basic quantity is the safety loading (or the security loading) n defined as the relative amount by which the premium rate p exceeds p.1. allowing a countable infinity of jumps on Fig. Thus. The models we consider will typically have the property that there exists a constant p such that Nt a E Uk k=1 p. that the insurance company should try to ensure 77 > 0. we shall.1. 1. 1.) V 0. one may well argue that Brownian motion in itself could be a reasonable model. and in fact: Proposition 1.

(1. rl > 0. namely that M = oo a.s.2 (Cox PROCESSES) Here {Nt} is a Poisson process with random rate /3(t) (say) at time t.i. not all models considered in the literature have this feature: Example 1.4 CHAPTER I. Then the connection between the ruin probabilities for the given risk process {Rt} and those ^(u). This case is referred to as the mixed Poisson process. corresponding to the Pdlya process.. Proposition 1.. this needs to be verified in each separate case. 0 We shall only encounter a few instances of a Cox process.v. M < oo a. .d. The simplest example is 3(t) = V where V is a r . if {(3(t)} is non-ergodic. k=1 (1. Nt)}. are i.i.T) for {Rt} is given by V)(u) = t/i (u).Q claims arrive per unit time and the mean of a single claim is EU) and that also Nt t aoo t lira EEUk = p. St In concrete models... (1.8) that F N. U2. with the most notable special case being V having a Gamma distribution.Tp).T) = i.Q (say) and U1. Here it is easy to see that p = .b(u) < 1 for all u when rl > 0.11) . U2. t t p - p' t -^ oo.s.oo t 0 J (provided the limit exists). The simplest concrete example (to be studied in Chapter III) is the compound Poisson model. If u -oo. tb(u) = 1 for all u holds also when rl = 0. are i.. and independent of {Nt}. namely. . If 77 < 0. then this limit is > 0 which implies St a$ oo and hence M = oo a.10) hold with p constant.3 Assume p 54 1 and define Rt = Rt1p.s.i(u.d. However. INTRODUCTION Proof It follows from (1. zP(u . If U1. where {Nt} is a Poisson process with rate . in connection with risk processes in a Markovian or periodic environment (Chapter VI). it is not too difficult to show that p as defined by (1.6EU (on the average. _ St __ k =1 Uk pt a4. However. 0(u.10) is a property which we will typically encounter. and independent of {(0(t). (1.8) is given by ^t p = EU • lim it (3(s) ds t-. Thus p may well be random for such processes.10) Again. we obtain typically a somewhat stronger conclusion. then similarly limSt/t < 0. and here (1. . and that .8).

in a number of models. The Swedish school was pioneering not only in risk theory. For mixed Poisson processes and Polya processes. see also Chapter XI. Sundt [354]. Daykin et al. 2 Claim size distributions This section contains a brief survey of some of the most popular classes of distributions B which have been used to model the claims U1. light-tailed distributions (sometimes the term . Pentikainen & Pesonen [101]. Embrechts et al. CLAIM SIZE DISTRIBUTIONS 5 The proof is trivial. Besides in standard journals in probability and applied probability. Gerber [157]. [330]. often referred to as collective risk theory or just risk theory. Cox processes are treated extensively in Grandell [171]. De Vylder [110].. the assumption > 0 is equivalent to p < 1. Gerber [159]) has a rather different flavour. In the even more general area of non-life insurance mathematics. Hipp & Michel [198]. Straub [353]. another important early Swedish work is Tacklind [373]. Since { Rt } has premium rate 1. but in probability and applied probability as a whole. Grandell [171]. Mitteilungen der Verein der Schweizerischen Versicherungsmathematiker and the Scandinavian Actuarial Journal. Taylor [364]. Heilmann [191].. An idea of the additional topics and problems one may incorporate under risk theory can be obtained from the survey paper [273] by Norberg. Insurance: Mathematics and Economics. Note that when p = 1. and in fact p < 1 is the fundamental assumption of queueing theory ensuring steady-state behaviour (existence of a limiting stationary distribution). in particular.. the role of the result is to justify to take p = 1. and we do not get near to the topic anywhere in this book. the claim arrivals are Poisson or renewal at the same time).g. Buhlmann [82]. Some early surveys are given in Cramer [91]. we shall be able to identify p with the traffic intensity of an associated queue. Schmidli.g. Some main later textbooks are (in alphabetical order) Buhlmann [82]. [134]. while the first mathematically substantial results were given in Lundberg [251] and Cramer [91]. U2. Rolski. the research literature is often published in journals like Astin Bulletin . [76]. see e . Notes and references The study of ruin probabilities. Schmidt & Teugels [307] and Seal [326]. [101]..2. some main texts (typically incorporating some ruin theory but emphasizing the topic to a varying degree) are Bowers et al. the recent survey by Grandell [173] and references therein. Segerdahl [334] and Philipson [289]. Some of the main general ideas were laid down by Lundberg [250]. which is feasible since in most cases the process { Rt } has a similar structure as {Rt} (for example. The term risk theory is often interpreted in a broader sense than as just to comprise the study of ruin probabilities. Daykin. was largely initiated in Sweden in the first half of the century. We roughly classify these into two groups . Note that life insurance (e. many results and methods in random walk theory originate from there and the area was ahead of related ones like queueing theory.

g. regularly varying (see below) or even regularly varying with infinite variance.1) The parameter 6 is referred to as the rate or the intensity. P B[s]= (8Is ) . 6 has density r(p)xP-le-ax b(x) P and m.1 (THE EXPONENTIAL DISTRIBUTION) Here the density is b(x) = be-ax (2. i.8.e. 2a Light-tailed distributions Example 2.x given X > x is again exponential with rate b (this is essentially equivalent to the failure rate being constant). and heavy-tailed distributions. As in a number of other applied probability areas. For example in the compound Poisson model. s<8.f.O(u) can be found in closed form.2 (THE GAMMA DISTRIBUTION) The gamma distribution with parameters p.6 CHAPTER I. a simple stopping time argument shows that this implies that the conditional distribution of the overshoot ST(u) . then the conditional distribution of X . a fact which turns out to contain considerable information.g. INTRODUCTION 'Cramer-type conditions' is used).f. Example 2 . where B(bo. but different more restrictive definitions are often used: subexponential. the exponential distribution is by far the simplest to deal with in risk theory as well. The crucial feature is the lack of memory: if X is exponential with rate 6. Equivalently.3) . the m. B[s] is finite for some s > 0.2 and /LB is the mean of B. On the more heuristical side. and can also be interpreted as the (constant) failure rate b(x)/B(x). B is heavy-tailed if b[s] = oo for all s > 0. for the compound Poisson model with exponential claim sizes the ruin probability . if 1 °O AB Jbos x B(dx) > 0. In particular. In contrast.B(x) satisfies B(x) = O(e-8x) for some s > 0.2) = 0.u at the time of ruin given r(u) is again exponential u with rate 8. Here lighttailed means that the tail B(x) = 1 . (2. one could mention also the folklore in actuarial practice to consider B heavy-tailed if '20% of the claims account for more than 80% of the total claims'.

. p) = J tP-le-tdt. x] so that B(x) = r` e. u . . .y i=1 where >i ai = 1.. CLAIM SIZE DISTRIBUTIONS 7 The mean EX is p/b and the variance Var X is p/b2.v. Ruin probabilities for the general case has been studied. if p is integer and X has the gamma distribution p. the squared coefficient of variation (s. In particular. .ate (b2 ): L• i=o In the present text. 0. p. 0 < ai < 1.c.1) (or the 1/pth root if p < 1)... P b(x) = r` aibie-a.2. p) °° where r (x. In particular.). B(x) = r(p) Asymptotically... > 1 for p < 1 and = 1 for p = 1 (the exponential case). X2. among others.c. i = 1. The exact form of the tail B(x) is given by the incomplete Gamma function r(x. and exponential with rate d. An appealing feature is its simple connection to the Poisson process: B(x) = P(Xi + • • • + XP > x) is the probability of at most p . are i.d. then X v Xl + • • • + X. This special case is referred to as the Erlang distribution with p stages. the Gamma density (2.3 (THE HYPEREXPONENTIAL DISTRIBUTION) This is defined as a finite mixture of exponential distributions.) VarX1 (EX )2 p is < 1 for p > 1. p). An important property of the hyperexponential distribution is that its s. by Grandell & Segerdahl [175] and Thorin [369]. we develop computationally tractable results mainly for the Erlang case (p = 1.2) can be considered as the pth power of the exponential density (2. or just the Erlang(p) distribution.i. JP -1 B(x) r(p ) XP ie -ax In the sense of the theory of infinitely divisible distributions.1 Poisson events in [0.. 2. u Example 2 . where X1. one has r(bx.v. is > 1.

d. However. Equivalent characterizations are that the density b(x) has one of the forms q b(x) j=1 = cjxienbx.6. which is slightly smaller but more amenable to probabilistic reasoning.f. B(x) = aeTxe where t = Te and e = (1 . T) or sometimes the triple (E. resp. The density and c. (or.6 (DISTRIBUTIONS WITH BOUNDED SUPPORT) This example (i. there exists a xo < oo such that B(x) = 0 for x > xo.(2. then the claim size which is relevant from the point of view of the insurance company itself is U A xo rather than U u (the excess (U . are b(x) = aeTxt. equivalently. a rational Laplace transform) if B[s] _ p(s)/q(s) with p(s) and q(s) being polynomials of finite degree.8) are real-valued.xo)+ is covered by the reinsurer). Example 2 . We give some theory for matrixu exponential distribution in VIII. We give a more comprehensive treatment in VIII. Example 2 . a.8 CHAPTER I. q2 q3 (2. it is notable from a practical point of view because of reinsurance: if excess-of-loss reinsurance has been arranged with retention level xo. The parameters of a phase-type distribution is the set E of transient states. the Erlang and the hyperexponential distributions.f. The couple (a.1 and defer further details to u Chapter VIII. .8) j=1 j=1 j=1 where the parameters in (2. Important special cases are the exponential. 1)' is the column vector with 1 at all entries. of which one is absorbing and the rest transient. INTRODUCTION Example 2 .7) q1 b(x) = cjxieWWx + djxi cos(ajx)ea'x + > ejxi sin(bjx)e`ix .e..4 (PHASE-TYPE DISTRIBUTIONS) A phase-type distribution is the distribution of the absorption time in a Markov process with finitely many states..g. This class of distributions is popular in older literature on both risk theory and queues. T) is called the representation. This class of distributions plays a major role in this book as the one within computationally tractable exact forms of the ruin probability z/)(u) can be obtained.6.7) are possibly complex-valued but the parameters in (2.5 (DISTRIBUTIONS WITH RATIONAL TRANSFORMS) A distribution B has a rational m. but the current trend in applied probability is to restrict attention to the class of phase-type distributions. See XI. B(x) > 0 for x < xo) is of course a trivial instance of a light-tailed distribution. the restriction T of the intensity matrix of the Markov process to E and the row vector a = (ai)iEE of initial probabilities.

2.9 (THE PARETO DISTRIBUTION) Here the essence is that the tail B(x) decreases like a power of x. and then b(x) = 0. the tail is B (x ) 2 x.13) u . in practice one may observe that b(x) is either decreasing or increasing and may try to model smooth (incerasing or decreasing) deviations from constancy by 6(x) = dx''-1 (0 < r < oo).1). we obtain the Weibull distribution B(x) = e-Cx'.12) Sometimes also a location parameter a > 0 and a scale parameter A > 0 is allowed. In particular. There are various variants of the definition around. (2. the mean u is eµ+a /2 and the second moment is e2µ+2o2. It follows that the density is 't (1ogX . All moments are finite.8 (THE LOGNORMAL DISTRIBUTION) The lognormal distribution with parameters a2. a)/A)-a+1' x > a.7 (THE WEIBULL DISTRIBUTION) This distribution originates from reliability theory. one being B(x) (1 + X)-b(x) (1 + x)a+1' x > 0.p a 1 (2. (2. Here failure rates b(x) = b(x)/B(x) play an important role. (2. CLAIM SIZE DISTRIBUTIONS 9 2b Heavy-tailed distributions Example 2. u Example 2 . the exponential distribution representing the simplest example since here b(x) is constant. or equivalently as the distribution of a°U+µ where U .10) The loinormal distribution has moments of all orders. However.u l b(x) = d dx or J ax lor 1 exp Asymptotically. a2). x < a.N(0.pl = 1 W (logx -. Writing c = d/r.9) which is heavy-tailed when 0 < r < I.1. b(x) _ A(1 + (x a The pth moment is finite if and only if p < a .11) ex log logx 2r p 1 1 2 ( a ) f -1 (lox_P)2} (2. b(x) = crx''-le-`xr. p is defined as the distribution of ev where V . Example 2 .N(p.

The motivation for this class is the fact that the Laplace transform is explicit (which is not the case for the Pareto or other standard heavy-tailed distributions). B(x) = O(x-P). another standard example is (log x)').L( x ).10 CHAPTER I. oo) is slowly varying .15) x2 + 16x3 ) a-3x/2) 3 (1 . u Example 2 . Choudhury & Whitt [1] as the class of distributions of r.(1 + Zx + $ p = 3. { s () 1-s+3s2-9s3log(1+2s I p=3.12) (here L (x) -* 1) and ( 2. INTRODUCTION Example 2.v. examples of distributions with regularly varying tails are the Pareto distribution (2. where Y is Pareto distributed with a = (p . x -4 oo (any L having a limit in (0. x -+ 00.14) The pth moment is finite if p < 5 and infinite if p > 5.16) 11 Example 2.17) where L (x) is slowly varying. satisfies L(xt)/L(x) -4 1. (2. u .'s of the form YX. A = 1 and X is standard exponential. For p = 1. In general. the loggamma distribution is a Pareto distribution.x6+lr(p) (2. The density is 8p(log x)p-i b(x) .12 (DISTRIBUTIONS WITH REGULARLY VARYING TAILS) The tail B(x) of a distribution B is said to be regularly varying with exponent a if B(x) .2).1)/p. 6 is defined as the distribution of et' where V has the gamma density (2.13).11 (PARETO MIXTURES OF EXPONENTIALS) This class was introduced by Abate. the loggamma distribution (with exponent 5) and a Pareto mixture of exponentials. (2. the density is { 3 (1 .10 (THE LOGGAMMA DISTRIBUTION) The loggamma distribution with parameters p.(1 + 2x + 2x2)e-2x) p = 2 (2. Thus. i. in particular. The simplest examples correspond to p small and integer-valued.e. in particular.

..18) B(x) It can be proved (see IX. U2. The reason is in part mathematical since this model is the easiest to analyze. one may argue that this difficulty is not resticted to ruin probability theory alone. and based upon such information it seems questionable to extrapolate to tail behaviour. but can never be sure whether this is also so for atypical levels for which far less detailed statistical information is available. We give some discussion on standard methods to distinguish between light and heavy tails in Section 4f.1. When studying ruin probabilities. THE ARRIVAL PROCESS 11 Example 2. for example the lognormal distribution is subexponential (but not regularly varying). At least as important is the specification of the structure of the point process {Nt } of claim arrivals and its possible dependence with the claims.3. (2. From a practical point of view. but the model also admits a natural interpretation : a large portfolio of insurance holders . However. Thus. we may know that such a process (with a covariance function estimated from data) is a reasonable description of the behaviour of the system under study in typical conditions. By far the most prominent case is the compound Poisson (Cramer-Lundberg) model where {Nt} is Poisson and independent of the claim sizes U1. which each have a ( time-homogeneous) small rate of experiencing a .1) that any distribution with a regularly varying tail is subexponential. this phenomenon represents one of the true controversies of the area.13 (THE SUBEXPONENTIAL CLASS OF DISTRIBUTIONS) We say that a distribution B is subexponential if x-roo lim B `2^ = 2. though the proof of this is non-trivial. We return to a closer study in IX. the subexponential class of distributions provide a convenient framework for studying large classes of heavyu tailed distributions. 3 The arrival process For the purpose of modeling a risk process . Namely.. it will be seen that we obtain completely different results depending on whether the claim size distribution is exponentially bounded or heavy-tailed. the claim size distribution represents of course only one aspect (though a major one).. the knowledge of the claim size distribution will typically be based upon statistical data.4) or even to completely different applied probability areas like extreme value theory: if we are using a Gaussian process to predict extreme value behaviour. Similar discussion applies to the distribution of the accumulated claims (XI. and so is the Weibull distribution with 0 < r < 1. Also.

found the Poisson distribution to be inadequate and suggested various other univariate distributions as alternatives . e. Mathematically. However . Another one is Cox processes. are i. such that 8(t) = . which facilitate the analysis.(3.. 5. A more appealing way to allow for inhomogeneity is by means of an intensity . it is more questionable whether it provides a model with a similar intuitive content as the Poisson model. This model . to be studied in Chapter V. The difficulty in such an approach lies in that it may be difficult or even impossible to imbed such a distribution into the continuous set-up of {Nt } evolving over time . the first extension to be studied in detail was {Nt } to be renewal (the interarrival times T1 .d. In order to prove reasonably substantial and interesting results . This applies also to the case where the claim size distribution depends on the time of the year or .. the periodic and the Markov -modulated models also have attractive features . The compound Poisson model is studied in detail in Chapters III. INTRODUCTION claim . too general and one neeed to specialize to more concrete assumptions .e. it may be used in a purely descriptive way when it is empirically observed that the claim arrivals are more bursty than allowed for by the simple Poisson process. has some mathematically appealing random walk features . in Chapter VII). gives rise to an arrival process which is very close to a Poisson process.. however.i. with a common term {Nt} is a Markov-modulated Poisson process . in particular to allow for certain inhomogeneities. so that . the negative binomial distribution. see 11. The one we focus on (Chapter VI) is a Markovian environment : the environmental conditions are described by a finite Markov process {Jt }too. Cox processes are. epidemics in life insurance etc. we study this case in VI . The point of view we take here is Markov -dependent random walks in continuous time (Markov additive processes ).. In others.8 (t) is a periodic function of t. T2. To the author 's knowledge . with the extension to premiums depending on the reserve. I. not many detailed studies of the goodness-of-fit of the Poisson model in insurance are available . in just the same way as the Poisson process arises in telephone traffic (a large number of subscribers each calling with a small rate). its basic feature is to allow more variation (bursty arrivals ) than inherent in the simple Poisson process. and also that the ruin problem may be hard to analyze . Some of them have concentrated on the marginal distribution of NT (say T = one year ). getting away from the simple Poisson process seems a crucial step in making the model more realistic. An obvious example is 3(t) depending on the time of the year (the season). radioactive decay (a huge number of atoms each splitting with a tiny rate ) and many other applications. where {/3 (t)}too is an arbitrary stochastic process . when Jt = i. IV (and.3(t) fluctuating over time. Nevertheless .6.g. Historically. but with a general not necessarily exponential distribution ).12 CHAPTER I. This model can be intuitively understood in some simple cases like { Jt} describing weather conditions in car insurance .

The ones which appear most related to risk theory are queueing theory and dam/storage processes. with Poisson arrivals and constant release rule p(x) = 1.1) permitting to translate freely between risk theory and the queueing/storage setting. it is desirable to have a set of formulas like (4. this amounts to Vo having the stationary distribution of {Vt}). queueing theory. Mathematically. however. The study of the steady state is by far the most dominant topic of queueing and storage theory. (4. In fact. point processes and so on. and which seems well motivated from a practical point of view as well. reliability. ruin probabilities for risk processes with an input process which is renewal.1) holds as well provided the risk process has a premium rule depending on the reserve. It should be noted. Markovmodulated or periodic can be related to queues with similar characteristics. extreme value theory.0 (u. and a lot of information on steady-state r. Similarly. the classical result is that the ruin probabilities for the compound Poisson model are related to the workload (virtual waiting time) process {Vt}too of an initially empty M/G/1 queue by means of . Some of these have a certain resemblance in flavour and methodology. interacting particle systems. 0(u) = P(V > u).v. this gives only f0 O°i (u)du which is of limited . Thus.4.1). stochastic differential equations. methods or modeling ideas developed in one area often has relevance for the other one as well. More generally. dam/storage processes. and here (4. others being branching processes.T) = P(VT > u). R = p(R) in between jumps.6) . 4 A summary of main results and methods 4a Duality with other applied probability models Risk theory may be viewed as one of many applied probability areas. A SUMMARY OF MAIN RESULTS AND METHODS 13 the environment (VI. A stochastic process {Vt } is said to be in the steady state if it is strictly stationary (in the Markov case.'s like V is available.1) where V is the limit in distribution of Vt as t -+ oo. The M/G/1 workload process { Vt } may also be seen as one of the simplest storage models. that quite often the emphasis is on computing expected values like EV. genetics models. In the setting of (4. stochastic geometry. it is a recurrent theme of this book to stress this connection which is often neglected in the specialized literature on risk theory. others are quite different. time series and Gaussian processes. A general release rule p(x) means that {Vt} decreases according to the differential equation V = -p(V) in between jumps. and the limit t -4 oo is the steady-state limit.

3.3. the ideal is to be able to come up with closed form solutions for the ruin probabilities 0(u).2). Here ?P(u) is explicit provided that . • The compound Poisson model with some rather special heavy-tailed claim size distributions. Here Vi(u) is given in terms of a matrix-exponential function ( Corollary VIII. 3. e . Example VIII. which can be expanded into a sum of exponential terms by diagonalization (see. The infinite horizon (steady state ) case is covered by letting T oo.1). Vi(u.T). A prototype of the duality results in this book is Theorem 11. the two areas.8. • The compound Poisson model with constant premium rate p = 1 and B being phase-type with a just few phases . p = 0/8 and -y = 8 -.1) in the setting of a general premium rule p(x): the events {VT > u} and {r (u) < T} coincide when the risk process and the storage process are coupled in a suitable way (via time-reversion ). 3. The qualifier 'with just a few phases ' refers to the fact that the diagonalization has to be carried out numerically in higher dimensions. Thus . 4b Exact solutions Of course . the functions w x f d 1 exdx () .p(y) y^ Jo p(x) can be written in closed form. • The compound Poisson model with a claim size distribution degenerate at one point.1 . The cases where this is possible are basically the following for the infinite horizon ruin probability 0(u): • The compound Poisson model with constant premium rate p = 1 and exponential claim size distribution B. much of the study of finite horizon problems (often referred to as transient behaviour) in queueing theory deals with busy period analysis which has no interpretation in risk theory at all. INTRODUCTION intrinsic interest .6. Here O(u) = pe-ryu where 3 is the arrival intensity. B(x) = e-bx. have to some extent a different flavour.v. though overlapping. • The compound Poisson model with premium rate p(x) depending on the reserve and exponential claim size distribution B. as is typically the case.3.'s like the environmental process {Jt} in a Markov-modulated setting.14 CHAPTER I.1. which gives a sample path version of (4. . The fact that Theorem H.1 is a sample path relation should be stressed : in this way the approach also applies to models having supplementary r. see Corollary III.3. see Boxma & Cohen [74] and Abate & Whitt [3]..g. Similarly. see Corollary VII.

the only example of something like an explicit expression is the compound Poisson model with constant premium rate p = 1 and exponential claim size distribution . For the finite horizon ruin probability 0(u.1) is the explicit form of the ruin probability when {Rt} is a diffusion with infinitesimal drift and variance µ(x). Also Brownian models or certain skip -free random walks lead to explicit solutions (see XI . T) can then be calculated numerically by some method for transform inversion. the formulas ( IV. esu-Tb( u. 1).2) is the natural scale. However. see VIII. the second best alternative is a numerical procedure which allows to calculate the exact values of the ruin probabilities. A SUMMARY OF MAIN RESULTS AND METHODS 15 • The compound Poisson model with a two -step premium rule p(x) and B being phase-type with just a few phases. Abate & Whitt [2]. A notable fact ( see again XI. [-s. T) du dT 0 TO 00 in closed form than the ruin probabilities z/'(u).Lef$er function. Here are some of the main approaches: Laplace transform inversion Often. We don't discuss Laplace transform inversion much. T). Grubel & Pitts [132] and Grubel & Hermesmeier [180] (see also the Bibliographical Notes in [307] p.4. 191). . f {eXp U LX 2. (u. Given this can be done.S(u) 1S(oo) f °D exp {.b(u)du . where Furrer [150] recently computed ii(u) as an infinite series involving the Mittag. Ab(u).7. it is easier to find the Laplace transforms = f e8 .u(y)/a2(y) dy} 4c Numerical methods Next to a closed-form solution. T) themselves. O(u. but are somewhat out of the mainstream of the area .1) are so complicated that they should rather be viewed as basis for numerical methods than as closed-form solutions.ff 2µ(y)/a2(y) dy} dx . say the fast Fourier transform (FFT) as implemented in Grubel [179] for infinite horizon ruin probabilities for the renewal model. a2 (x): Ip (u) = where S(u) = f °O exp {.f f 2µ(y)/a2(y) dy} dx - (4. relevant references are Grubel [179]. • An a-stable Levy process with drift . Embrechts.

7. most often it is more difficult to come up with reasonably simple equations than one may believe at a first sight. U is explicit in terms of the model parameters. However.g.16 CHAPTER L INTRODUCTION Matrix-analytic methods This approach is relevant when the claim size distribution is of phase-type (or matrix-exponential). which can equivalently be written as f3 [7] = 1 +13 . One example where this is feasible is the renewal equation for tl'(u) (Corollary III.4) 00['Y]-1)-'Y = 0. either as the iterative solution of a fixpoint problem or by finding the diagonal form in terms of the complex roots to certain transcendental equations. In the compound Poisson model with p = 1. it states that i/i(u) . and in quite a few cases (Chapter VIII). and carry out the solution by some standard numerical method.3. dt] most often leads to equations involving both differential and integral terms.) B[s]. whereas for the renewal arrival model and the Markovian environment model U has to be calculated numerically. see VIII.1) and -y > 0 is the solution of the Lundberg equation (4. and in particular the naive idea of conditioning upon process behaviour in [0. 4d Approximations The Cramdr-Lundberg approximation This is one of the most celebrated result of risk theory (and probability theory as a whole). 0(u) is then given in terms of a matrix-exponential function euu (here U is some suitable matrix) which can be computed by diagonalization. Differential. T) as the solution to a differential.3) where C = (1 .and integral equations The idea is here to express 'O(u) or '(u.p)/(13B'[ry] . u -* oo. .or integral equation.Ce-"u. An example where this idea can be carried through by means of a suitable choice of supplementary variables is the case of state-dependent premium p(x) and phase-type claims. For the compound Poisson model with p = 1 and claim size distribution B with moment generating function (m.f.3) in the compound Poisson model which is an integral equation of Volterra type. (4. as the solution of linear differential equations or by some series expansion (not necessarily the straightforward Eo U'u/n! one!).

6) are by far the best one can do in terms of finite horizon ruin probabilities '(u. the claim size distribution should have an exponentially decreasing tail B(x). and use the fact that first passage probabilities are more readily calculated for diffusions than for the risk process itself. Approximations for O(u) as well as for 1(u. some further possibilities are surveyed in 111 . for the compound Poisson model ^(u) p pu In fact . J B dx. In the case of heavy-tailed distributions. other approaches are thus required. However. the exact solution is as easy to compute as the Cramer-Lundberg approximation at least in the first two of these three models. in such cases the evaluation of C is more cumbersome. However. T) for large u are available in most of the models we discuss. in some cases the results are even more complete than for light tails. It has generalizations to the models with renewal arrivals. In fact. See Chapter IX. This list of approximations does by no means exhaust the topic. (4.2. A SUMMARY OF MAIN RESULTS AND METHODS 17 It is rather standard to call ry the adjustment coefficient but a variety of other terms are also frequently encountered. For example. corrected diffusion approximations (see IV. . Diffusion approximations are easy to calculate. Diffusion approximations Here the idea is simply to approximate the risk process by a Brownian motion (or a more general diffusion) by fitting the first and second moment. In particular. incorporating correction terms may change the picture dramatically. when the claim size distribution is of phase-type. u -> oo. The Cramer-Lundberg approximation is renowned not only for its mathematical beauty but also for being very precise.7 and IV. often for all u > 0 and not just for large u. T). a Markovian environment or periodically varying parameters.6) 4e Bounds and inequalities The outstanding result in the area is Lundberg's inequality (u) < e-"lu.4. but typically not very precise in their first naive implementation. Large claims approximations In order for the Cramer-Lundberg approximation to be valid.

However. one expects a model with a deterministic claim size distribution B. empirical evidence shows that the general principle holds in a broad variety of settings. UNT are i. However. T]... fitting a parametric model to U1. . . INTRODUCTION Compared to the Cramer-Lundberg approximation (4.18 CHAPTER I. given NT. obtained say by observing the risk process in [0. they have however to be estimated from data. it is a general principle that adding random variation to a model increases the risk.3). say degenerate at m.8.. In practice. the difficulty comes in when drawing inference about the ruin probabilities. This procedure in itself is fairly straightforward. to have smaller ruin probabilities than when B is non-degenerate with the same mean m. How do we produce a confidence interval? And. UNT may be viewed as an interpolation in or smoothing of the histogram). can we trust the confidence intervals for the large values of u which are of interest? In the present author's opinion. . . For example. as a general rule. more importantly. this is extrapolation from data due to the extreme sensitivity of the ruin probabilities to the tail of the claim size distribution in particular (in contrast. though not too many precise mathematical results have been obtained.d. This is proved for the compound Poisson model in 111. of being somewhat easier to generalize beyond the compound Poisson setting. .) at various places and in various settings.i.g. For example. which is a standard statistical problem since the claim sizes Ui. it splits up into the estimation of the Poisson intensity (the estimator is /l3 = NT/T) and of the parameter(s) of the claim size distribution. 4f Statistical methods Any of the approaches and results above assume that the parameters of the model are completely known. When comparing different risk models. The standard suggestion is to observe that the mean residual life E[U . e. in the compound Poisson model. . lower bounds etc. one may question whether it is possible to distinguish between claim size distributions which are heavy-tailed or have an exponentially decaying tail.U(k)) i =k+ i . it has the advantage of not involving approximations and also.k (U(`) . We return to various extensions and sharpenings of Lundberg's inequality (finite horizon versions.x U > x] = B(x) f '(y-x)B(dx) typically has a finite limit (possibly 0) in the light-tailed case and goes to oo in the heavy-tailed case. and to plot the empirical mean residual life 1 N .

.d.v.3) or Section 3 of Chapter VI are referred to as Proposition VI.. We look at a variety of such methods in Chapter X.3) and Section VI. The chapter number is specified only when it is not the current one. Truncation to a finite horizon has been used.. to observe whether one or the other limiting behaviour is apparent in the tail. good methods exist in a number of models and are based upon representing the ruin probability zb(u) as expected value of a r. having small probability) and that therefore naive simulation is expensive or even infeasible in terms of computer time.2. 4g Simulation The development of modern computers have made simulation a popular experimental tool in all branches of applied probability and statistics.i. The infinite horizon case presents a difficulty. The problem is entirely analogous to estimating steady-state characteristics by simulation in queueing/storage theory. (or a functional of the expectation of a set of r. and in fact methods from that area can often be used in risk theory as well . A main problem is that ruin is typically a rare event (i. Thus Proposition 4. and look at them to see whether they exhibit the expected behaviour or some surprises come up. where U(1) < . and also discuss how to develop methods which are efficient in terms of producing a small variance for a fixed simulation budget. For example. and of course the method is relevant in risk theory as well. respectively. CONVENTIONS 19 as function of U(k).(5. . formula VI..4.3 (or just VI. However. claims U1. this is a straightforward way to estimate finite horizon ruin probabilities. < U(N) are the order statistics based upon N i. in this book referred to as [APQ]). See further Embrechts. Klnppelberg & Mikosch [134]..5.3). formula (5.2. UN. . in all other chapters than VI where we just write . 5 Conventions Numbering and reference system The basic principles are just as in the author's earlier book Applied Probability and Queues (Wiley 1987. Still. because it appears to require an infinitely long simulation. Simulation may be used just to get some vague insight in the process under study: simulate one or several sample paths.e. the more typical situation is to perform a Monte Carlo experiment to estimate probabilities (or expectations or distributions) which are not analytically available. reference [14].v's) which can be generated by simulation. but is not very satisfying.

p. h -+ 0. then for 1s < 5).e.r. i. n -i oo. A different type of asymptotics: less precise.v.f. w. EX2/(EX)2.f. if B(x) . or a more precise one like eh . b[s] is defined always if Rs < 0 and sometimes in a larger strip (for example. independent identically distributed i.g. squared coefficient of variation. B[s] the m. B(x) = P(X < x) = fx.Used in asymptotic relations to indicate that the ratio between two expressions is 1 in the limit.f.s.ce-ax. B(x) the tail 1 . mation. B is concentrated on [0. with probability Mathematical notation P probability. r. n! 27r nn+1/2e-n.3) or Section 3. oo).d.The same symbol B is used for a probability measure B(dx) = P(X E dx) and its c. IIGII the total mass (variation ) of a (signed ) measure G . for a probability distribution IIGII = 1. .g.f.d. B(dy). and for a defective probability distribution IIGII < 1. Abbreviations c. random variable s.29) refer to the Appendix.20 CHAPTER L INTRODUCTION Proposition 4. E.t.i. . formula (5.g.4. The Laplace transform is b[-s]. In particular. moment generating function.g.d.2.h. cumulative distribution function P(X < x) c.B(x) = P(X > x) of B. E expectation. left hand side (of equation) m. (A. i. References like Proposition A.g. log E[s] where b[s] is the m. cumulant generating function.f. as for typical claim size distributions. (moment generating function) fm e82B(dx) of the distribution B. see under b[s] below.f. infinitely often l. right hand side (of equation) r.s.c. If. with respect to w.v.h. say a heuristic approxi1 + h + h2/2.o.

the value just before t. all stochastic processes considered in this book are assumed to have sample paths in this space. 0 marks the end of a proof. often the term 'cadlag' (continues a droite avec limites a gauche) is used for the D-property. E[X. the processes we consider are piecewise continuous. Usually. . oo) the space of R-valued functions which are right-contionuous and have left limits. . F o r a given set x1. a2) the normal distribution with mean p and variance oa2. CONVENTIONS {6B the mean EX = f xB(dx) of B ABA' the nth moment EXn = f x"B(dx) of B. the ith entry is 1 and all other 0. (the dimension is usually clear from the context and left unspecified in the notation). of numbers. In particular: I is the identity matrix e is the column vector with all entries equal to 1 ei is the ith unit column vector. the ith unit row vector is e'i. N(it. Matrices and vectors are denoted by bold letters.e.e. xa. Thus. an example or a remark. matrices have uppercase Roman or Greek letters like T. i. though slightly more complicated. Usually. 7r. Notation like f3i and 3(t) in Chapter VI has a similar .5. Xt_ the left limit limstt X8f i. . (xi)diag denotes the diagonal matrix with the xi on the diagonal (xi)row denotes the row vector with the xi as components (xi). intensity interpretation. 21 D [0. a. and column vectors have lowercase Roman letters like t.. i. R(s) the real part of a complex number s. row vectors have lowercase Greek letters like a. Unless otherwise stated.A] means E[XI(A)]. A.e. I(A) the indicator function of the event A. In the French-inspired literature.oi denotes the column vector with the xi as components Special notation for risk processes /3 the arrival intensity (when the arrival process is Poisson). Then the assumption of D-paths just means that we use the convention that the value at each jump epoch is the right limit rather than the left limit.. only have finitely many jumps in each finite interval.

B  the claim size distribution. Notation like B_i and B^(t) in Chapter VI has a similar, though slightly more complicated, interpretation.
γ  the adjustment coefficient, cf. I.1, III.5.
δ  the rate parameter of B for the exponential case B̄(x) = e^{-δx}.
η  the safety loading, cf. I.1.
ρ  the net amount βµB of claims per unit time, or quantities with a similar time average interpretation, cf. I.1, III.1.
F_L, E_L  the probability measure and its corresponding expectation corresponding to the exponential change of measure given by Lundberg conjugation, cf. e.g. III.5, VI.5.

Chapter II

Some general tools and results

The present chapter collects and surveys some topics which repeatedly show up in the study of ruin probabilities. The reader should therefore observe that it is possible to skip most of the chapter, in particular at a first reading of the book. More precisely, the relevance for the mainstream of exposition is the following:

The martingale approach in Section 1 is essentially only used here. All results are proved elsewhere, in most cases via likelihood ratio arguments.

The likelihood ratio approach in Section 2 is basic for most of the models under study. When encountered for the first time in connection with the compound Poisson model in Chapter III, a parallel self-contained treatment is given of the facts needed there. The general theory is, however, used in Chapter VI on risk processes in a Markovian (or periodic) environment.

The duality results in Section 3 (and, in part, Sections 4, 5) are, strictly speaking, not crucial for the rest of the book. The topic is, however, fundamental (at least in the author's opinion) and the probability involved is rather simple and intuitive.

Sections 4, 5 on random walks and Markov additive processes can be skipped until reading Chapter VI on the Markovian environment model. Due to the generality of the theory, the level of the exposition is, however, somewhat more advanced than in the rest of the book.

V) (u. Example 1 .2) As T -> oo. (b) St a$ -oo on {T(u) = oo}. f-1 .5 can be skipped. the time to ruin r(u) is inf It > 0 : St > u}.QµB < 1. (1.T(u) < cc] = e7uE {e7f(u) I T(u) < cc] z/. T(u) < oo] + 0 = eryuE [e7Vu).24 CHAPTER II. {e'YS° }t>0 is a martingale. however..1 Assume that (a) for some ry > 0.(u). StUi-t. and the ruin probabilities are ip(u) = P (T(u) < oo). and in the limit (1. 1 Martingales We consider the claim surplus process {St} of a general risk process. Thus N.1 is basic for the study of the compound Poisson model in Chapter III. The more general Theorem 6.u denote the overshoot. T(u) < T] + E [eryST .)AT = E [e7ST(°). Our first result is a representation formula for O(u) obtained by using the martingale optional stopping theorem . Let e(u) = ST(u) .2 Consider the compound Poisson model with Poisson arrival rate .2) takes the form 1 = E [e'ys-(-). T) = P(T(u) < T). Proposition 1.0. As usual. SOME GENERAL TOOLS AND RESULTS The ladder height formula in Theorem 6. claim size distribution B and p = . We get 1 = Ee7So = E e'Y S-(. Then e-7u (u) = E[e74(u)j7-(u) < oo] Proof We shall use optional stopping at time r(u)AT (we cannot use the stopping time T(u) directly because P(T(u) = oo) > 0 and also because the conditions of the optional stopping time theorem present a problem. using r(u) A T invokes no problems because r(u) A T is bounded by T). T(u) > T] . the second term converges to 0 by (b) and dominated convergence (e7ST < eryu on {r(u) > T}).

2.. the ruin probability is O(u) = pe. it follows that E [e7st+v I J] = e"rstE [e7(st+v-St) I Ft] = e7StEe"rs° = elst where . B(x) _ e-dx. condition (a) of Proposition 1. are i.g.a it is immediately seen that y = S . Thus. the conditional distribution of the overshoot e(u) = U .u + x is again just exponential with rate S.3 Assume that {Rt} is Brownian motion with variance constant o.1 is satisfied. Eeas° = e"(°) where n(a) = a2a2/2 .i.f. Thus 00 E [e'rt (") I T(u) < oo] = I e5e - dx = f 5edx . and p =.-(„)_ = x is that of a claim size U given U > u .ap.1) . u Corollary 1.1) shows that Eels.x.1 are satisfied. Now at the time r(u) of ruin {St} upcrosses level u by making a jump . 1.1.r" where -y = S .3/6 < 1.4 (LUNDBERG ' S INEQUALITY ) tion 1 .Q(B[a] . From this it is immediately seen that the solution to the Lundberg equation ic(y) = 0 is -y = 2p/a2. From this it is readily seen (see III.d./3.1. By standard formulas for the m.2 and drift p > 0. Under the conditions of Proposi- Proof Just note that C(u) > 0. and (b) follows from p < 1 and the law of large numbers (see Proposition III.Q and the U. O(u ) < e-7".= e"(') where K(a) = . The available information on this jump is that the distribution given r(u) = t and S. u Corollary 1. with common distribution B (and independent of {Nt}). Then {St } is Brownian motion with variance constant o2 and drift -p < 0.a = -a . A simple calculation (see Proposition III. Since {St} has stationary independent increments. and thus by the memoryless property of the exponential distribution . Example 1 . the conditions of Proposition 1.Ft = a(S" : v < t).5 For the compound Poisson model with B exponential. the martingale property now follows just as in Example 1. and thus Ee7s° = 1. and thus Ee'rs° = 1.2(c)). Thus.6a for details) that typically a solution to the Lundberg equation K(y) = 0 exists. Proof Since c(a) = /3 (B[a] .1.a.1) . of the normal distribution. Since {St} has stationary independent increments. MARTINGALES 25 where {Nt} is a Poisson process with rate .Q.
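As a small numerical illustration of the last two corollaries (the parameter values below are an arbitrary example), the Lundberg equation κ(γ) = 0 can be solved with SciPy's root finder and, for exponential claims, checked against the closed form γ = δ − β and the exact ruin probability ψ(u) = ρe^{-γu}.

    import math
    from scipy.optimize import brentq

    # Compound Poisson model with premium rate 1 and exponential claims
    # B(x) = 1 - exp(-delta*x); arbitrary example values with rho = beta/delta < 1.
    beta, delta = 1.0, 2.0

    def kappa(a):
        # kappa(a) = beta*(B[a] - 1) - a with B[a] = delta/(delta - a) for a < delta
        return beta * (delta / (delta - a) - 1.0) - a

    # The adjustment coefficient gamma is the positive root of kappa(gamma) = 0.
    gamma = brentq(kappa, 1e-9, delta - 1e-9)
    print("gamma numerically:", gamma, "  closed form delta - beta:", delta - beta)

    # Corollary 1.5 gives psi(u) = rho*exp(-gamma*u) here, which in particular
    # respects the Lundberg bound psi(u) <= exp(-gamma*u) of Corollary 1.4.
    rho = beta / delta
    for u in (0.0, 1.0, 5.0):
        print(f"u={u}:  psi(u) = {rho * math.exp(-gamma * u):.6f}"
              f"   Lundberg bound = {math.exp(-gamma * u):.6f}")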

P on (DE. A E Ft.6 N S = { lim Nt I t +00 t gJ are both in F. as shown by the following example this set-up is too restrictive: typically'.1) 'though not always: it is not difficult to construct a counterexample say in terms of transient Markov processes. Grandell & Schmidli [131]. But if a $ ^ . hence so is Nt = limfyo N2`i.Ft}too and the Borel a-field F. and is further exploited in his book [157]. A]. then z/'(u) = e-7" where 'y = 21A/a2. Proof Just note that ^(u) = 0 by continuity of Brownian motion. F(S) = P(S) = 1.F).3 below). we look for a process {Lt} (the likelihood ratio process) such that P(A) = E[Lt. (2. and by the law of large numbers for the Poisson process .26 CHAPTER IL SOME GENERAL TOOLS AND RESULTS Corollary 1. . However. Delbaen & Haezendonck [103] and Schmidli [320]. Thus the sets S = I tlim -+oot t =. The number Nt F) of jumps > e before time t is a (measurable) r. on (DE. 2 Likelihood ratios and change of measure We consider stochastic processes {Xt} with a Polish state space E and paths in the Skorohod space DE = DE[0. F).6 If {Rt} is Brownian motion with variance constant a2 and drift p > 0.v. Two such processes may be represented by probability measures F. Example 2 . cf. and in analogy with the theory of measures on finite dimensional spaces one could study conditions for the RadonNikodym derivative dP/dP to exist. B. then S and S are disjoint . u Notes and references The first use of martingales in risk theory is due to Gerber [156]. Grandell [171]. which we equip with the natural filtration {. More recent references are Dassios & Embrechts [98]. A somewhat similar u argument gives singularity when B $ B. Embrechts. The interesting concept is therefore to look for absolute continuity only on finite time intervals (possibly random. oo). and F. 0 and claim size distributions B.e. P correspond to the claim surplus process of two compound Poisson risk processes with Poisson rates /3.1 Let F. [172]. the parameters of the two processes can be reconstructed from a single infinite path. P are then singular (concentrated on two disjoint measurable sets). Theorem 2. I..

A] = E[Lt.F8] = L8 and the martingale property. _. Finally. ({. A E Ft . Then Ft (A) = E[Lt. G C {T < oo}. Lets < t.F such that (2. A].r. Hence E [_ . u The following likelihood ratio identity (typically with r being the time r(u) to ruin) is a fundamental tool throughout the book: Theorem 2 . A E F8. Then P(A) = E[Lt. then {Lt} is a non-negative martingale w. the restriction of P to (DE. (i) If {Lt}t> o is a non-negative martingale w. we have E [ LTIFT]1 = LT on {T < T}.F).2. G l ] = E [_I(G)E[LTIFT] ] = E { _I(G)Lr ] = P(G).2) Proof Assume first G C {T < T} for some fixed deterministic T < oo. then there exists a unique probability measure Pon . Proposition 2.3 Let {Lt}. (ii) Conversely. under the assumptions of (ii) we have for A E rg and s < t that A E Ft as well and hence E[L8. If r is a stopping time and G E PT.. Proof Under the assumptions of (i).t. Lt < 0] can only be non-negative if P(A) = 0. P be as in Proposition 2.Ft}.1) holds.1) and non-negativity by letting A = {Lt < 0}.Y) such that P(A) = Pt(A). if for some probability measure P and some {. .e. A E F.t.A] = EE[LtI(A)IF8] = EI(A)E[LtIFB] = EI(A)L8 = PS(A).2(i). By the martingale property.Tt) is absolutely continuous w.Pt}-adapted process {Lt}t>o (2. ({Ft} . that the restriction of P to (DE. This proves (i). F) such that ELt = 1. P) such that LLt = 1. Then Lt > 0 and ELt = 1 ensure that Pt is a probability measure on (DE. using the martingale property in the fourth step. ELt = 1 follows by taking A = DE in (2.1) holds. 1 J (2.Pt)) The following result gives the connection to martingales. .r.2 Let {Ft}t>o be the natural filtration on DE. then { 1 P(G) = EG . G ] = E [LT .r. The truth of this for all A E Y. Ft). F the Borel o•field and P a given probability measure on (DE. LIKELIHOOD RATIOS AND CHANGE OF MEASURE 27 (i. Hence the family {Pt} is t>o consistent and hence extendable to a probability measure F on (DE. define P by Pt (A) = E[Lt. Conversely. implies that E[LtI.t. A].

2) follows by monotone convergence. of (2.t. t. Consider a (time-homogeneous) Markov process {Xt} with state space E. each F.Gn {r <T} . A change of measure is performed by finding a process {Lt} which is a martingale w.r. 5) for processes with some random-walk-like structure. first in the Markov case and next (Sections 4.h.4) Proof Letting G = {r(u) < oo}.4) compared to (1. To this end. T(u) < oo]. one would typically have Xt = Rt.s. r(u) < oo] occuring there than with the (conditional) expectation E[e'r{(u ) Jr(u) < oo] in (1.. Now just rewrite the r. . Xt = St. The crucial step is to obtain information on the process evolving according to F.1. we need the concept of a multiplicative functional. (2. 1 Since everything is non -negative. we have F(G) = V )(u). First we ask when the Markov property is preserved.3) to G of{r < T} we get 1111 F(Gn {r <T}) = E[ 1 .Rt} the claim surplus process and {Jt} a process of supplementary variables possibly needed to make the process Markovian. the natural filtration {. applying (2. St).3 we obtain a likelihood ratio representation of the ruin probability V) (u) parallel to the martingale representation (1. Rt) or Xt = (Jt.1: Corollary 2.t..28 CHAPTER II. where {Rt} is the risk reserve process. Xt = (Jt. and this problem will now be studied. SOME GENERAL TOOLS AND RESULTS In the general case .r.2) by noting that 1 = e--rsr(„) = e-1'ue-7Ou). say.4 Under condition (a) of Proposition 1. For the definition.1) is that it seems in general easier to deal with the (unconditional) expectation E[e-ryVu). (2. Lr(u) 11 The advantage of (2.O(u) = e-ryuE[e-'YC(u).Ft} . in continuous time (the discrete time case is parallel but slightly simpler). {St} = {u . and letting T -* oo.1). is non-negative and has Ey Lt = 1 for all x. is Markov w.1) in Proposition 1. we assume for simplicity that {Xt} has D-paths. u From Theorem 2. both sides are increasing in T. The problem is thus to investigate which characteristics of {Xt} and {Lt} ensure a given set of properties of the changed probability measure. In the context of ruin probabilities.
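A sketch of how the identity of Corollary 2.4 is used in practice (this anticipates the Lundberg conjugation of Chapter III and the simulation methods of Chapter X; the exponential-claims parameters below are an arbitrary example): simulating under the conjugate measure makes ruin certain, and ψ(u) is recovered from the simulated overshoot ξ(u).

    import numpy as np

    rng = np.random.default_rng(2)

    # Importance sampling via the likelihood ratio identity, for the compound
    # Poisson model with arrival rate beta, Exp(delta) claims and premium rate 1;
    # gamma = delta - beta is the adjustment coefficient.  Under the Lundberg-
    # conjugate measure P_L the arrival rate becomes delta and the claims become
    # Exp(beta), so the drift is positive and tau(u) < oo a.s.; then
    # psi(u) = exp(-gamma*u) * E_L[exp(-gamma*xi(u))], xi(u) the overshoot.
    beta, delta = 1.0, 2.0
    gamma = delta - beta

    def estimate_psi(u, n_paths=20_000):
        vals = np.empty(n_paths)
        for i in range(n_paths):
            s = 0.0                                   # claim surplus at claim epochs
            while True:
                s -= rng.exponential(1.0 / delta)     # premium earned until next arrival (rate delta under P_L)
                s += rng.exponential(1.0 / beta)      # claim size, Exp(beta) under P_L
                if s > u:
                    vals[i] = np.exp(-gamma * (s - u))
                    break
        return np.exp(-gamma * u) * vals.mean()

    for u in (1.0, 5.0, 10.0):
        print(u, estimate_psi(u), (beta / delta) * np.exp(-gamma * u))   # exact: rho*e^{-gamma*u}

The weights e^{-γξ(u)} are bounded by 1, which is what makes this estimator far more efficient than crude simulation when u is large.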

[Y. A].r.8) which is the same as (2. The precise meaning of this is the following: being . Indeed. non-negative and Lt+8 = Lt•(Lso9t) (2.7).s.F8-measurable r. where Ot is the shift operator. s. (2.(Xtitl) with all t(i) < t + s.7) for any Ft-measurable Zt and any . (2. Ex[Lt+8Zt(Y8 o et)] = Ex[LtZt(Y8 o 0t)(L8 o Bt)]. Similarly. since Ext [L8Y8] = E[(Y8 o et)(L8 o 8t)I. o 9tI. ({Xt+u} 0<u<8) Theorem 2. Lt has the form Lt = 'Pt ({x }0<u<t) for some mapping cot : DE[O. t and let Px be the probability measure given by t. oo). which is the same as Ex[Zt(Y8 o Bt)] = E8[ZtEx. for all x.Ft] = Ex. or.7).YB] for any Ft-measurable r.v.'s of the form fl' f.v.6) implies (2.5) Px-a. t] -* [0.Ft]. since Zt • (Y8 o Ot ) is .2.5) are Tt+e measurable. 0 . the natural filtration {Ft} on DE. let {Lt} be a non-negative martingale with Ex Lt = 1 for all x..5) is equivalent to Ex[Lt+8Vt+8] = E8[Lt • (L8 o 91)Vt+8] for any . Y8.. Zt.Ft+8-measurable.'s of the form Zt • (Y8 o 0t) comprises all r.Ft }.(A) = Ex [Lt. Vt+e.Pt+8-measurable r.Y8f t < s. this in turn means Ex[Lt+8Zt(V8 oet)] = Ex[LtZtExt[L8Y8]].T9-measurable Y8. The converse follows since the class of r. (2. o 9t = V.v. Proof Since both sides of (2.t. and then L. By definition of Px.Ft-measurable. (2.6) for any .v. Then the family {Px}xEE defines a time-homogeneous Markov process if and only if {Lt} is a multiplicative functional. LIKELIHOOD RATIOS AND CHANGE OF MEASURE 29 on DE and define {Lt} to be a multiplicative functional if {Lt} is adapted to {. the Markov property can be written E.v. which in turn is the same as Ex[Lt+8Zt • (V8 o Bt)] = Ex[Lt • (L8 o 91)Z1 • (Y8 o et)] (2.5 Let {Xt} be Markov w. t.

More precisely .... 0 < vl < .At where At = k: vk <t U. (3. < aN < T. (using the Markov property in the second step) so that the martingale property is automatic. The corresponding claim sizes are Ul. In between jumps. {Vt} . the premium rate is p(r) when the reserve is r (i. . 3 Duality with other applied probability models In this section. see Dynkin [128] and Kunita [239]. Indeed. aN where or* = T -UN_k+l.6 For {u . and the time to ruin is 7-(u) = inf {t > 0: Rt < 0}. The formulation has applications to virtually all the risk models studied in this book. and just after time or* {Vt} makes an upwards jump of size UU = UN _k+l. it xEE suffices to assume that {Lt} is a multiplicative functional with Ex Lt = 1 for all x.5 can be found in Kuchler & Sorensen [240].. In between jumps. with a proof somewhat different from the present one. } E[Lt+B I. The storage process {Vt }o<t<T is essentially defined by time -reversion.. .1) The initial condition is arbitrary. R = p(R)). Thus R = Ro + f p(R8) ds . u Notes and references The results of the present section are essentially known in a very general Markov process formulation. CN. reflection at zero and initiar condition Vo = 0.. A more elementary version along the lines of Theorem 2... UN. We work on a finite time interval [0. t. SOME GENERAL TOOLS AND RESULTS to define a time-homogeneous Markov process. we shall establish a general connection between ruin probabilities and certain stochastic processes which occurs for example as models for queueing and storage.30 CHAPTER H.Ft] = LtE[L8 o 9t I.. and thus for the moment no parametric assumptions (on say the structure of the arrival process) are needed.. The result is a sample path relation. the arrival epochs are Qi.e. . Ro = u (say). . T] in the following set-up: The risk process {Rt}o<t<T has arrivals at epochs or.. t] = LtExt L8 = Lt. . then Remark 2.

_: 1} 0 011 =T-01N ^N-3 T-o 0 011 014 01N Figure 3. {Vt} remains at 0 until the next arrival). V)(u.3) (u) Proof Let rt' denote the solution of R = p(R) subject to r0 = u.. Then rt°) > rt°) for all t when u > v..T) = P(VT > u).1 Define r(u) = inf It > 0: Rt < 0} (r(u) = oo if Rt > 0 for all t < T) and let ii(u. DUALITY WITH OTHER APPLIED PROBABILITY MODELS 31 decreases at rate p(r) when Vt = r (i.__.T) = inf Rt < 0 P (O<t<T P(r(u) < T) be the ruin probability.2) k: ok <t and we use the convention p(O) = 0 to make zero a reflecting barrier (when hitting 0. That is.1) we have Vt = At - f P(Vs)ds where A= U= AT . .1 The events {T(u) < T} and {VT > u} coincide. (3. V = -p(V))..1.. :.. Theorem 3. 3..3.11 --4. The sample path relation between these two processes is illustrated in Fig. Note that these definitions make {Rt} right-continuous (as standard) and {Vt} left-continuous.AT_t.__. In particular... (3.___ .____•_._. instead of (3.e.x..

and in between rainfalls water is released at rate p(r) when Vt (the content) is r.d.2 from Harrison & Resnick [188]. Nevertheless.1 with Ro = u = ul). the distinction between right. 3.2 Consider the compound Poisson risk model with a general premium rule p(r). say V. Some further relevant more general references are Asmussen [21] and Asmussen & Sigman [51]. the connection between risk theory and other applied probability areas appears first to have been noted by Prabhu [293] in a queueing context. and then '0 (u) = P(V > u). Proof Let T -^ oo in (3. if nothing else n = N). and since ruin can only occur at the times of claims. If VaN > 0. Then the storage process {Vt} has a proper limit in distribution. see Siegmund [344]. we have r(u) > T.1 and its proof is from Asmussen & Schock Petersen [50]. with distribution B. Historically. Suppose next VT < u (this situation corresponds to the broken path of {Rt} in Fig. = r(VT) .U1 > roil . 3. if and only if O(u) < 1 for all u. u Notes and references Some main reference on storage processes are Harrison & Resnick [187] and Brockwell.3). one may feel that the interaction between the different areas has been surprisingly limited even up to today. Then Vo. say of water in a dam though other interpretations like the amount of goods stored are also possible.and left continuity is immaterial because the probability of a Poisson arrival at any fixed time t is zero). we have RQ„ < 0 so that indeed r(u) < T. we can repeat the argument and get VoN_1 > Ra2 and so on. The arrival epochs correspond to rainfalls.1 with Ro = u = u2). Historically.32 CHAPTER IL SOME GENERAL TOOLS AND RESULTS Suppose first VT > u (this situation corresponds to the solid path of {Rt} in Fig.3 and being i. u A basic example is when {Rt} is the risk reserve process corresponding to claims arriving at Poisson rate . Hence if n satisfies VVN_n+1 = 0 (such a n exists. .Ul < roil - Ul = RQ„ Va1V_1 < RQ2. this represents a model for storage. Corollary 3.i. We get: Corollary 3. The results can be viewed as special cases of Siegmund duality. Hence RQ„ > 0 for all n < N. Then similarly VVN = r0. and so on. Then the time reversibility of the Poisson process ensures that {At } and {At } have the same distribution (for finite-dimensional distributions. Thus we may think of {Vt} as having compound Poisson input and being defined for all t < oo. Resnick & Tweedie [79]. and a general premium rule p(r) when the reserve is r.T l .U1 = Rol. Theorem 3.
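A quick simulation check of the duality (example parameters chosen arbitrarily): with p(r) ≡ 1 and Poisson arrivals, the finite horizon ruin probability ψ(u, T) and P(V_T > u) for the storage process started empty should agree up to Monte Carlo error, as stated in Theorem 3.1.

    import numpy as np

    rng = np.random.default_rng(3)
    beta, mean_claim, T, u = 1.0, 0.7, 50.0, 3.0       # arbitrary example values
    n_paths = 20_000

    def poisson_path():
        # arrival epochs of a Poisson(beta) process on [0, T] and i.i.d. Exp claims
        times, t = [], 0.0
        while True:
            t += rng.exponential(1.0 / beta)
            if t > T:
                break
            times.append(t)
        return np.array(times), rng.exponential(mean_claim, size=len(times))

    ruin = storage = 0
    for _ in range(n_paths):
        # (i) ruin before T with initial reserve u: S_t = (claims so far) - t exceeds u
        times, claims = poisson_path()
        ruin += np.any(np.cumsum(claims) - times > u)
        # (ii) independent copy of the storage process: unit release rate,
        #      reflection at 0, V_0 = 0, evaluated at time T
        times, claims = poisson_path()
        v, last = 0.0, 0.0
        for t, c in zip(times, claims):
            v = max(v - (t - last), 0.0) + c
            last = t
        v = max(v - (T - last), 0.0)
        storage += (v > u)

    print("P(tau(u) <= T) ~", ruin / n_paths, "  P(V_T > u) ~", storage / n_paths)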

..2 The following assertions are equivalent: (a) 0(u) = P(r(u) < oo) < 1 for all u > 0..s. = Xo + Y1 + • • • + Y.1 Let r(u) = inf In: u + Y1 + • • • + Yn < 0}. . where the Yi are i .1. just verify that the r . ZN = . W1.. Theorem 4.. evolves as a random walk with increments Z1i Z2.1)). For a given i.. N min (Y1 + + Yn).d. W2. I. .. Xo = 0...... Here F is a general probability distribution on R (the special case of F being concentrated on {-1.Yl min (-YN ...2). hits (-oo. i.. as long as the random walk only takes non-negative values. generated by Z1... ..d. has a proper limit W in distribution as n -+ oo. the Lindley process Wo. can be viewed as the reflected version of the random walk with increments Z1. 0 Corollary 4.1..N (4.. R -valued sequence Z1..... WN be the Lindley process generated by Z1 = -YN. Z2... 1} is often referred to as simple random walk or Bernoulli random walk). n=0. Let further N be fixed and let Wo.. Z2 = -YN_1 i ... RANDOM WALKS IN DISCRETE OR CONTINUOUS TIME 33 4 Random walks in discrete or continuous time A random walk in discrete time is defined as X. ...... .h. WN = -YN .. . {Wn}n=0. of (4. .. Most often.. Then the events {r(u) < N} and {WN > u} coincide.. is defined by assigning Wo some arbitrary value > 0 and letting Wn+1 = (Wn + Zn+1)+• (4...4.. (c) The Lindley process {WN} generated by Zl = -Y1..1. (b) 1/i(u) = P(•r(u) < oo) -> 0 as u -* oo.i. .min n=0. with common distribution F (say).e. For discrete time random walks . W1. Proof By (4.1.N From this the result immediately follows.1.. if Wo = 0 then (Z1+•••+Zn) WN = Zl+•••+ZN. and is reset to 0 once the r. there is an analogue of Theorem 3.w. Z2 .2) (for a rigorous proof.. Z2. In particular. Z2 = -Y2... 0). N min (Y1 + • • • + YN-n) n=0..1. .2) satisfies the same recursion as in (4.1) Thus {Wn}n=o.Y1 according to Wo = 0..YN-n+1) n=0. .1 in terms of Lindley processes ...
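The Lindley recursion (4.1) is also convenient computationally; the following sketch (an arbitrarily chosen exponential example) illustrates Theorem 4.1 and Corollary 4.2 by comparing the empirical tails of W_N and of the running maximum M_N, and, since this example is the random walk imbedded in a compound Poisson risk process, against the exact ruin probability.

    import numpy as np

    rng = np.random.default_rng(4)
    n_paths, n_steps = 5_000, 1_000

    # Increments of the imbedded random walk: Z_k = U_k - T_k with claims
    # U_k ~ Exp(mean 0.7) and interarrival times T_k ~ Exp(mean 1), so EZ < 0
    # and the reflected (Lindley) walk has a proper limit W.
    Z = (rng.exponential(0.7, size=(n_paths, n_steps))
         - rng.exponential(1.0, size=(n_paths, n_steps)))

    # Lindley recursion W_{n+1} = (W_n + Z_{n+1})^+, W_0 = 0, vectorised over paths.
    W = np.zeros(n_paths)
    for n in range(n_steps):
        W = np.maximum(W + Z[:, n], 0.0)

    # W_N has the same distribution (not pathwise the same value) as
    # M_N = max(0, Z_1, Z_1+Z_2, ...); for large N, P(M_N > u) is close to the
    # ruin probability psi(u) = rho*exp(-gamma*u) with rho = 0.7, gamma = 1/0.7 - 1.
    M = np.maximum(np.cumsum(Z, axis=1).max(axis=1), 0.0)
    rho, gamma = 0.7, 1.0 / 0.7 - 1.0
    for u in (1.0, 3.0, 6.0):
        print(f"u={u}:  P(W_N > u) = {np.mean(W > u):.4f}   "
              f"P(M_N > u) = {np.mean(M > u):.4f}   "
              f"rho*e^(-gamma*u) = {rho * np.exp(-gamma * u):.4f}")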

Clearly. The converse follows from general random walk theory since it is standard that lim sup (Y1 + • • + Yn) = oo when Y1 + • • • + Yn 74 -oo.....1.the result is a sample path relation as is Theorem 3. W v -m and P(W > u) = P (-m > u) = 0(u).. the Lindley processes in Corollary 4.s. a sufficient condition for (e) is that EY is welldefined and > 0. (Yi + • • • + Yn) > -oo a..s. ..) and defines Zn = -Y-n. Next consider change of measure via likelihood ratios. the condition 00 F(YI+•••+ Yn<0)<00 n=1 is known to be necessary and sufficient ((APQ] p. .. In general. .. F has a strictly positive density and the Px corresponds to a Markov chain such that the density of X1 given Xo = x is also strictly positive. .l.2. . doubly u infinite (n'= 0. In that case . Y1) has the same distribution as (Y1. Proof Since (YN.g. For a random walk. . YN). or M < oo a.l..o.. By Kolmogorov's 0-1 law... . either M = oo a. (d) #.. (Z1 + • • • + Zn) = -m and P(W > u) = P(M > u) = i (u ). w.1 have the same distribution for n = 0.. .34 CHAPTER II.1. there is a more general version of Corollary 4. then the restrictions of Fx.i.1..1. 0 By the law of large numbers.N so that WN _P4 M = supra=0. + Z. .d. 176) but appears to be rather intractable. . Combining these facts gives easily the equivalence of (a)-(d). . ±2. equivalently. SOME GENERAL TOOLS AND RESULTS (d) m = inf. (e)Yi+•••+Yn -74 .. YN in Theorem 4.1 is equivalent to WN D MN = (Z1 + . a Markovian change of measure as in Theorem 2. Thus the assertion of Theorem 4.. e. ZN or. Similarly.s .5 does not necessarily lead to a random walk: if.2 and Theorem 4.ooa. the Y1.1 is actually not necessary . The following result gives the necessary and sufficient condition for {Ln} to define a new random walk: .) sup n=0.3 The i.. ±1..s. (e)...=o..g. Px to Fn are equivalent (have the same null sets) so that the likelihood ratio Ln exists. Remark 4 .. N.. One then assumes Yn to be a stationary sequence. assumption on the Z1..

4) ({Ln} is the familiar Wald martingale . Y) = h(Y ) a.5 Consider a random walk and an a such that c(a) = log F[a] = log Ee° ' is finite.g. = 1 for all n and x. . we get L2 = L1 (L1 o91 ) = h(Y1)g(X1.g. then n n Ex [f f = Ex H fi a( YY) i=1 i_1 ( Y=) h(YY) H Ef=(Y=)h(Y=) = II J fi(Y )P( d) from which the random walk property is immediate with the asserted form of F. Since L1 has the form g (Xo. Y ) f (Y)] = E[g(O.4 Let {Ln} be a multiplicative functional of a random walk with E_-L.3) holds for n = 1. (a) = log F(a] is the c. and define Ln by (4.f. The corresponding likelihood ratio is Ln = exp {a (Y1 + • • • + Yn) .s...Y2) = h(1'i)h(I'a)..5 corresponds to a new random walk with changed increment distribution F(x) = e-'(a) Jr e"'F(dy) . the random walk property implies Ex f (Y1) = Eo f (Y1 ). Breiman [78] p. Conversely. Y1). y ). of F). 100 ).3) 1Px-a. h(Yn) (4. for some function h with Eh(Y) = 1. We get: Corollary 4. where h (y) = g(0. RANDOM WALKS INDISCRETE OR CONTINUOUS TIME 35 Proposition 4. this means E(g(x. the changed increment distribution is F(x ) = E[h(Y). In particular. Y < x]. (4.. Then the change of measure in Theorem 2..3) holds. implying g (x..4. u A particular important example is exponential change of measure (h(y) = e°y-'(") where r.4.5 corresponds to a new random walk if and only if Ln = h(Y1) .s.nrc(a )} (4. Then the change of measure in Theorem 2.4).. cf. and so onforn =3. For n = 2. Proof If (4. e. Y) f (Y)] for all f and x. In that case.
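For a concrete feel for Corollary 4.5, here is a small numerical check with an assumed normal increment distribution F = N(µ, σ²), for which κ(α) = αµ + α²σ²/2 and the exponentially tilted distribution is N(µ + ασ², σ²); the check is done by importance weighting with the density ratio.

    import numpy as np

    rng = np.random.default_rng(7)

    # Exponential tilting of a single increment distribution (Corollary 4.5),
    # for the assumed example F = N(mu, sigma^2): kappa(alpha) = alpha*mu +
    # alpha^2*sigma^2/2 and the tilted law F_alpha is N(mu + alpha*sigma^2, sigma^2).
    mu, sigma, alpha = -1.0, 2.0, 0.75
    kappa = alpha * mu + alpha**2 * sigma**2 / 2.0

    Y = rng.normal(mu, sigma, size=500_000)
    w = np.exp(alpha * Y - kappa)                  # Radon-Nikodym derivative dF_alpha/dF
    print("E[w]          =", w.mean(), "  (should be close to 1)")
    print("tilted mean   =", np.mean(w * Y), " vs ", mu + alpha * sigma**2)
    print("tilted E[Y^2] =", np.mean(w * Y * Y), " vs ",
          sigma**2 + (mu + alpha * sigma**2)**2)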

a positive measure on R with the properties e J x2v(dx) < oo. the tradition in the area is to use continuous time models. The appropriate generalization of random walks to continuous time is processes with stationary independent increments (Levy processes). {Xt} can be written as the independent sum of a pure drift {pt}. The simplest case is 3 = JhvMM < oo. the claim surplus process for the compound Poisson risk model . Xt (n)-t(n-1) being independent whenever t(O) < t(1 ) < . say the beginning of each month or year . (4. In risk theory.6) More precisely. An equivalent characterisation is {Xt} being Markov with state space R and E [f (Xt+e ..36 CHAPTER II. given by a the increment distribution F(x) = P(Xn+l . the interpretation is that the rate of a jump of size x is v(dx) (if f of Ixlv (dx) = oo.1). with premium rate p. e f x:IxJ>e} v(dx) < oo (4.Ft] = Eof (X. corresponds to a process with stationary independent increments and u = -p.5) Note that the structure of such a process admits a complete description. see Chapter V. The traditional formal definition is that {Xt} is R-valued with the increments Xt(1)_t(o)..). they arise as models for the reserve or claim surplus at a discrete sequence of instants.t(i . but we omit the details ). (4. SOME GENERAL TOOLS AND RESULTS Discrete time random walks have classical applications in queueing theory via the Lindley process representation of the waiting time . .Xt)I. .7) for all e > 0. v2 = 0 and v = 3B). the pure jump process is given by its Levy measure v(dx).e. Xt =Xo+pt+oBt+Mt. In continuous time. A general jump process can be thought of as limit of compound Poisson processes with drift by considering a sequence v(n) of bounded measures with v(n) T v. Xt(2)_t(l).. {Xt} is a random walk. this description needs some amendments. or imbedded into continuous time processes .. In discrete time. Roughly.Xn < x). < t(n) and with Xt( i)_t(i_l) having distribution depending only on t(i) . which corresponds to the compound Poisson case: here jumps of {Mt} occur at rate 0 and have distribution B = v/0 (in particular . i. However. we are . say by recording the reserve or claim surplus just before or just after claims (see Chapter V for some fundamental examples). a Brownian component {Bt} (scaled by a variance constant) and a pure jump process {Mt}. however.

where c(a) = ap + a2a2/2 + f 00 provided the Levy measure of the jump part {Mt} satisfies f".] Processes with a more complicated path structure like Brownian motion or jump processes with unbounded Levy measure are not covered by Section 3.4. Now consider reflected versions of processes with stationary independent increments. V E [0.6 In the compound Poisson risk model with constant premium rate p(r) . Then the storage process {Vt} has constant release rate 1. RANDOM WALKS IN DISCRETE OR CONTINUOUS TIME 37 almost solely concerned with the compound Poisson case and shall therefore not treat the intricacies of unbounded Levy measures in detail. Chapter III.s. then Ee'(xt-xo) = Eoeaxt = etx(a).min Xt (4. and the reflected version is then defined by means of the abstract reflection operator as in (4.3 and decreases linearly at rate 1 in between jumps. jxJ v(dx) < oo.10) . where VT is the virtual waiting time at time T in an initially empty M/G/1 queue with the same arrival rate /3 and the service times having the same distribution B as the claims in the risk process.7 If {Xt} has stationary independent increments as in (4. has upwards jumps governed by B at the epochs of a Poisson process with rate . O(u. oo]. Proposition 4. First assume in the setting of Section 3 that {Rt} is the risk reserve process for the compound Poisson risk model with constant premium rate p(r) = 1. [The condition for V < oo a. Furthermore. WT = XT . Here workload refers to the fact that we can interpret Vt as the amount of time the server will have to work until the system is empty provided no new customers arrive.1. i. is easily seen to be f3pB < 1.6).1)v(dx) (4. and b(u) = P(V > u). defined as a system with a single server working at a unit rate. (ex . virtual waiting time refers to Vt being the amount of time a customer would have to wait before starting service if he arrived at time t (this interpretation requires FIFO = First In First Out queueing discipline: the customers are served in the order of arrival). A different interpretation is as the workload or virtual waiting time process in an M/G/1 queue.2). having Poisson arrivals with rate . T) = P(VT > u).v. VT + V for some r.e. cf. Corollary 4.8) O<t<T (assuming Wo = Xo = 0 for simplicity).Q and distribution B of the service times of the arriving customers.

5 has stationary independent increments as well. Theorem 4 .Xt)I Ftl = Eof (X8)g(s.. Q2 = v2. Then l e" (a) = Eo [ Li ea "] = e-K (9)Eo {e ( a+9)x1 J I = er(a+o )-K(B) R(a) = K(a + 0) . t. we use the characterization (4.10) is the Levy-Khinchine representation of the c.5) and get E [f(Xt+B . Xt +B . v(dx) = e9xv (dx). X8) = Eof (X8)L8 = Eof (X8)• For the second.1 ) v(dx) . This is of course no coincidence since the distribution of Xl .1)eexv(dx).Xt)L8 o 0tIFt] = E [f (Xt+s .Xo is necessarily infinitely divisible when {Xt} has stationary independent increments. Chung [86]).11) (eax .g. Proof For the first statement . Then the Markov process given by Theorem 2. use the representation as limit of compound Poisson processes. e.1. if Lt = e9(xt .38 CHAPTER IL SOME GENERAL TOOLS AND RESULTS Proof By standard formulas for the normal distribution.1 that E eaMt = exp fmoo In the general case . then the changed parameters in the representation (4. u Note that (4.Xt)g(s.xo)-tk ( e). . 8 Assume that {Xt} has stationary independent increments and that {Lt} is a non-negative multiplicative functional of the form Lt = g(t.f. we show in the compound Poisson case ( IlvIl < oo) in Proposition III.4 o) aµ + ((a + 0 ) 2 - 0 2 )o 2 /2+ r w J 00 (e (a + 9)x - a 9x )v(dx) 00 a(µ + O 2) + a2a2 / 2 + J (eax .g. In particular. By explicit calculation .6) are µ = µ + Oo2 . (4. Eea(µt + QBt) = et{aµ +a2OZ/2}.Xt)I-'Ftl = E [f(Xt+B . of an infinitely divisible distribution (see. let e" (a ) = Eoeaxl. Xt Xo) with E2Lt = 1 for all x.

MARKOV ADDITIVE PROCESSES 39 Remark 4..3 and claim size distribution B. Ei instead of P2. 0.. v(dx) _ . <t whenever the Radon-Nikodym derivative dB/dB exists (e.. MAP stands for the Markovian arrival process discussed below.0.(3 = . Recalling that U1.o[f (S8)g(J8)].Ft] = Ejt.(3B(dx).3B[B]. Example 4 .0 in the following. are the arrival times and U1. and let the Px refer to the claim surplus process of another compound Poisson risk process with Poisson rate.9 If X0 = 0. As for processes with stationary independent increments . the corresponding claim sizes .g.4).8. is defined as a bivariate Markov process {Xt} = {(Jt. b with b(x) > 0 for all x such that b(x) > 0). we write Pi. the structure of MAP's is completely understood when E is finite: 2and only there .. Ei. where . one reason is that in parts of the applied probability literature.10 Let Xt be the claim surplus process of a compound Poisson risk process with Poisson rate .2.3 =.3 and claim size distribution B # B. . then the martingale {eex(t)-tk(e)} is the continuous u time analogue of the Wald martingale (4. Example 4 . u 5 Markov additive processes A Markov additive processes. St)} where {Jt} is a Markov process with state space E (say) and the increments of {St} are governed by {Jt} in the sense that E [f (St+8 . U2. b = a = 0) the changed process is the claim surplus process of another compound Poisson risk process with Poisson rate . (5. B have densities b. it is then easily seen that Lt = H dB(Ui) i:o.. a = 0. B(dx) = B[9] B(dx). Thus (since µ = p = -1.5. abbreviated as MAP in this section2.11 For an example of a likelihood ratio not covered by Theorem 4. corresponding to p = -1. let the given Markov process (specified by the Px) be the claim surplus process of a compound Poisson risk process with Poisson rate 0 and claim size distribution B. Then we can write v(dx) _ /3eOxB(dx) = / (dx). .1) For shorthand .l3 and claim u size distribution B.St)g(Jt+s)I. dB/dB = b/b when B.

oo).9 EE = (iii&ij[a])i j EE . SOME GENERAL TOOLS AND RESULTS In discrete time. v. Proposition 5.40 CHAPTER H. ..g. a jump of {Jt} from i to j # i has probability qij of giving rise to a jump of {St} at the same time.i.6) depending on i. Then a Markov additive process can be defined by letting t St = lim 1 I(IJB1 < e)ds E1o 2d o be the local time at 0 up to time t. which we omit and refer to Neveu [272] or cinlar [87]. An alternative description is in terms of the transition matrix P = (piA.1 For a MAP in discrete time and with E finite. by generating Yn according to Hij when J„_1 = i. Jn = j.. this means that the MAP can be simulated by first simulating the Markov chain {J„} and next the Y1. a MAP is specified by the measure-valued matrix (kernel) F(dx) whose ijth element is the defective probability distribution Fij(dx) = Pi..it = A.f.J1=j)= Fij (dx) Pij In simulation language. As a generalization of the m.. {St} evolves like a process with stationary independent increments and the parameters pi. 1 J1 ='^])iJEE = (Fij[a])i . If all Fij are concentrated on (0.[a) = (Ei[easl. vi(dx) in (4. let {Jt} be standard Brownian motion on the line. Fn[a] = F[a]n where P[a] = P . the converse requires a proof. {Jt} is specified by its intensity matrix A = (Aij)i. a MAP is the same as a semi-Markov or Markov renewal process. In addition.Sr_1.o(Ji = j.jEE (here pij = Pi(J1 = j)) and the probability measures Hij(dx)=P(Y1 EdxlJo=i. the distribution of which has some distribution Bij. with the Y„ being interpreted as interarrival times.. Y2. In continuous time (assuming D-paths).jEE• On an interval [t. As an example. (That a process with this description is a MAP is obvious. Y1 E dx) where Y„ = S„ . consider the matrix Ft [a] with ijth element least Ei .) If E is infinite a MAP may be much more complicated. t+s) where Jt .

1)) .(')(a)) diag + (). u Proposition 5. Jn+1 = A] = 41 Ei[ e 5„. where K[a] = A+ (r. kEE Jn = k]Ek[e"Y" .qkj + k?^j qkj Bkj [a] } = Ei [east.1) } (recall that qjj = 0). MARKOV ADDITIVE PROCESSES Proof Conditioning upon (Jn. this means that F't+h [a] = Ft[a] II+h(rc(i)(a)) +hA+h(Aijgij(Bij[a]-1)) I. j E E) and So = 0. a= . J1 = A which in matrix formulation is the same as Fn+1 [a] = Fn[a]F[a]. \ diag Ft[a] = Ft[a]K. qij. By Perron-Frobenius theory (see A. Jt = j] (1 + htc (j) (a)) j + Ak j qk j (Bk +h E Ei [east . u In the following.ijgij(Bij[a] . Jt = k] { xk kEE j la] . 013 .4c).2 Let E be finite and consider a continuous time Markov additive process with parameters A.1 )v(dx). up to o( h) terms. we infer that in the discrete time case the . In matrix formulation . 00 r(i) (a) = api + a2ot /2 + f (e° . Proof Let {Stt) } be a process with stationary independent increments and pa- rameters pi . which in conjunction with Fo[a] = I implies Ft[a] = etK[a) according to the standard solution formula for systems of linear differential equations. Then the matrix Pt[a] with ijth element Ei [east. aSt h = (1 + Ajjh) Ei [east . assume that the Markov chain/process {Jt} is ergodic. vi(dx) (i E E). pi. vi(dx). Jt = j] Ejesh'^ + E Ak j hEi [ease . Bij (i. Then. Jt = k] { 1 . Sn ) yields Ei[easn+ '.5. Jt = j] is given by etK[a].

just note that [a]h(a) = eietK (a)h(a) = etK(a)h(a).Eikjt = ttc'(0) + ki . Then h(°) = e.4. Corollary 5.5 EiSt = tK'(0) + ki . (5.2) where 7r = v(°) is the stationary distribution. SOME GENERAL TOOLS AND RESULTS matrix F[a] has a real eigenvalue ic(a) with maximal absolute value and that in the continuous time case K[a] has a real eigenvalue K(a) with maximal real part.7. we are free to impose two normalizations.f. and we shall take V(a)h(a) = 1.c(a) (and h(")). of a random walk. In particular.etx It then follows that E feast+^-(t+v)K(a)h(a) I ^tl l . Yrh(a ) = 1. Corollary 5. a.42 CHAPTER II. and write k = k(°). . h(") may be chosen with strictly positive components.e=e°tk. The function ic(a) plays in many respects the same role as the cumulant g. Eie"sth^a) = e'Pt[a]h( a) = e. Jt = j] .4 Eie"sth(a) = h=a)et?(").4c). and appropriate generalizations of the Wald martingale (and the associated change of measure) can be defined in terms of . We also get an analogue of the Wald martingale for random walks: Proposition 5.r. Proof By Perron-Frobenius theory (see A. Since v("). cf. h(") are only given up to a constants.3 Ei [east. Proof For the first assertion. The corresponding left and right eigenvectors v(").h(a)vva)etw(a).Jt+v = east-tK( a)E [ee (st+v-st)-vK(a)h(a) jt+v I ^tJ = east-tt(a)EJt (eases-vK(a )h^a)1 = east-tK(a)h^a). u Let k(a) denote the derivative of h() w. Proposition 5. as will be seen from the following results. Jeast.tK(a)h(a) J jj it L o is a martingale. Furthermore. Corollary 5. its derivatives are 'asymptotic cumulants'. cf.t.

5. subtraction yields Vary St = tic"(0) + O(1).4.. we differentiate (5. u The argument is slightly heuristic (e. for a random walk: Corollary 5.5 yields + W (a)k. E=ST = tc'(0)E7. Squaring in Corollary 5.5.e. one obtains a generalization of Wald's identity EST = E-r • ES.") }) .a) + ttc (a)2hia ) Multiplying by v=. . t im v^"St = '(0) Proof The first assertion is immediate by dividing by tin Corollary 5. In the same way.St]2 = t2/c'(0 ) 2 + 2ttc (0)vk . the distribution of Jo).3) to get Ej [St a " st h i(a ) + 2Ste"st k(a) + e"st k^a) J etI(a) (kia )' + ttc (a)ki") + t {ic"(a)h. there is typically a function h = h(") on E and a ^c(a) such that Ey a"st -t" (") -* h(x). [E. summing and letting a = 0 yields E„ [St + 2Stkj. 8 Also for E being infinite (possibly uncountable ).+ k.Eikjr . tam E tSt a (0). the existence of exponential moments is assumed ) but can be made rigorous by passing to characteristic functions. MARKOV ADDITIVE PROCESSES Proof By differentiation in Proposition 5.3) Let a = 0 and recall that h(°) = e so that 0=°) = h(o) = 1. (5. For the second .6 For any stopping time T with finite mean.7 No matter the initial distribution v of Jo. Ee"st typically grows asymptotically exponential with a rate ic(a) independent of the initial condition (i.4) . Since it is easily seen by an asymptotic independence argument that E„ [Stkjt] u = trc'(0) E„kjt + O(1). t --a oo.2ttc (0)Evkjt + 0(i). More precisely.. 43 Ei [Steast h(a) + east k^a)1 = et"(a) (kia) + tic (a)hia)) . Corollary 5. Remark 5 . (5.g. ] = t2tc (0)2 + 2tK'(0)vk + ttc"(0) + O(1) .

Given a function h on E. however.5 defines a new MAP. let ha(i. G is defined as Gf (x) = lim Exf (Xt) . V. u forsEE).(9) {Lt}t>o = . We then want to determine h and x(a) such that Ejeasth (Jt) = etK(a)h(i). Then {Lt } is a multiplicative functional. Remark 5. 0 Proposition 5. (5. this leads to h(i) + tcha( i. and the family {f LEE given by Theorem 2.1) one then ( at least heuristically) obtains lim Ex eaSv -v a) K( v-+oo nEx east-tK(a)EJt eas-t-(v-t)K(a) u[J = Ex east-tk(a)h(Jt) It then follows as in the proof of Proposition 5..5) is a martingale .6) We shall not exploit this approach systematically.3b and Remark VI.for the present purposes.10 Let {(Jt. however. in particular that f is bounded.44 CHAPTER IL SOME GENERAL TOOLS AND RESULTS for all x E E. xEE . we take the martingale property as our basic condition below (though this is automatic in the finite case). Usually.. 0) = h(i )( 1 + ttc(a)). see. this is. 1) (i. St) } as follows. Jt = (s+t) mod 1 P8-a.s. 0) = n(a) h(i).6.f (x) tyo t provided the limit exists.5) is a martingale can be expressed via the infinitesimal generator G of {Xt } = { (Jt. From (5. where {Jt} is deterministic period motion on E = [0. some extra conditions are imposed. In view of this discussion . First. s) = ea8h(i). inconvenient due to the unboundedness of ea8 so we shall not aim for complete rigour but interpret C in a broader sense.9 The condition that (5.e. gha(i.6.5. h(Jo) Lo is a Px -martingale for each x E E.4 that { h(Jt) east-tK(a) L o (5. St)} be a MAP and let 0 be such that h(Jt) OSt-t. For t small . An example beyond the finite case occurs for periodic risk processes in VI.

0<b<oo. in the discrete time case.1) eft ea' f ij (dx) = Hij (dx) Hij [0] .10 is given by P = e-K(e) Oh e) F[e]Oh(').1) . MARKOV ADDITIVE PROCESSES Proof That { Lt} is a multiplicative functional follows from L8 ogt = h(Jt+s) es(St+ . Ai = µi + 0Q. Bi [0] Remark 5..Qi < oo and Bi a probability measure. (5.12 The expression for A means h(e) Aij = hie) Aij [1 + gij(Bij[0] i 0 j.7) In particular. 0 < qij < 1 and Bij [0] > 0. That 0 < qij < 1 follows from the inequality qb <1. We omit the details. vi (dx) = f3 Bi(dx) with . Here Oh(e) is the diagonal matrix with the h=e) on the diagonal. if vi(dx) is compound Poisson.ic(0)e = ic(0)Oh e) h(e) . In the infinite case . That the rows sum to 1 follows from Ae = Oh(e) K[O]h(B) .(0)j.tc(0)e = 0 . 1 + q(b . 0<q<1. Bi. .7(dx) Bij [0] Bij(dx) in the continuous time case .St)-sl(e) h(Jt) 45 The proof that we have a MAP is contained in the proof of Theorem 5.c(0)e = tc(0)e . Bi(dx) = Bi(dx). qij = r. this gives a direct verification that A is an intensity matrix: the off-diagonal elements are non-negative because Aij > 0.5.. u Theorem 5. ^i = of qij Bij [0] 1 + qij ( Bij [0] . and by A = Oh(°) K [0]Oh(e ) vi(dx) = e"xvi (dx). Then the MAP in Proposition 5.11 below in the finite case.1) holds for the P. In particular. then also vi (dx) is compound Poisson with e Ox ^i = /3iBi[0]. one can directly verify that (5.11 Consider the irreducible case with E finite.

11. Yi E dx. Letting a = 0 yields the stated expression for A. Now we can write K[a] =A+A ) ( K[a + 0] . Ji = j) h(e) eey-K(B)p h(8) h(e) eex-K ( h=e) e)Fi. Here the stated formula for P follows immediately by letting t = 1.tc(0)I. . Further Fib (dx ) = P=(YI E dx. in continuous time ( 5. a = 0 in (5. is absolutely continuous w. since Hij. are probability measures . F:j with a density proportional to eei .. v. In matrix notation .8) h(.K [O])Oh(e) (0) l + ( A + (tc(') (a + 0) . First note that the ijth element of Ft[a] is e-tK(e)Ej [e(a+B)st E:[east Jt = j] = Ej[Lteas' .r.tc (') (0) corresponds to the stated parameters µ. (dx).tc(') (8)/ d)ag h 7 Aiiii (Bii[a + 0] .8. v= . Similarly. . Jt = A.tc(0)' )Ah() = Oh(o) K[a + 0]Oh() . Hence the same is true for H=j and H.8).e) Consider first the discrete time case . Jl = j) = Ei[Lt. H1. it follows that indeed the normalizing constant is H1 [0].t. SOME GENERAL TOOLS AND RESULTS Proof of Theorem 5.13) for matrix-exponentials . this means that Ft[a] = e-tw ( e)Ohc) Ft[a + 9]oh (e) (5.Bay [0]) That k(') (a + 0) .. Jl = j] :(Yi E dx. This shows that F.8) yields et'[a] = Ohie )et (K[a +e]-K(e)I)Oh(°) By a general formula (A. (dx) of a process with stationary independent increments follows from Theorem 4. this implies k[a] = A -1 ) (K[a + 0] . Jt = j] = hie) ..46 CHAPTER II.

-. 6 The ladder height distribution We consider the claim surplus process {St } of a general risk process and the time 7. Though the literature on MAP's is extensive. .6.. however.1). h. [262] in discrete time. the literature on the continuous time case tends more to deal with special cases. Write r+ = T(0) and define the associated ladder height ST+ and ladder height distribution by G+(x) = 11 (S. has no mass on (-oo. oo). IIG+II = G+(oo) = P(T+ < oo) = 0(0) < 1 when 77 > 0 (there is positive probability that {St} will never come above level 0). see also Fuh & Lai [149] and Moustakides [264]. hardly a single comprehensive treatment. Conditions for analogues of Corollary 5.3 for an infinite E are given by Ney & Nummelin [266].e. THE LADDER HEIGHT DISTRIBUTION 47 Finally note that by (5. which. however.6. [225]. Notes and references The earliest paper on treatment of MAP's in the present spirit we know of is Nagaev [265].-+ < x. 0]. Note that G+ is concentrated on (0. < x) = 11 (S. i. Much of the pioneering was done in the sixties in papers like Keilson & Wishart [224].Bij[0]) = hjel)ijgijBij[0](Bij[a] . is slightly less general than the present setting. 7-+ < oo). there is.(u) = inf {t > 0 : St > u} to ruin in the particular case u = 0 .)Ajjgij(Bij[a+0] . [261]. [226] and Miller [260].7). The closest reference on exponential families of random walks on a Markov chain we know of within the more statistical oriented literature is Hoglund [203]. an extensive bibliography on aspects of the theory can be found in Asmussen [16]. For the Wald identity in Corollary 5.1) = Aij4ij(Bij[a] . and is typically defective.

the second ladder height (step) is ST+(2) . where basically only stationarity is assumed. at present we concentrate on the first ladder height.. i.. o 00 (6. 6. there are only finitely many).00 ). it follows that for g > 0 measurable. In other cases like the Markovian environment model.B(x) denotes the tail of B. For the proof of Theorem 6. g(y)R+(dy) = E f g(St)dt. Thus. In simple cases like the compound Poisson model. a fact which turns out to be extremely useful. which gives an explicit expression for G+ in a very general setting. In any case. Here bo(x) _ B(x)/µB.ST+(1) and so on.1) The interpretation of R+(A ) is as the expected time {St} spends in the set A before T+. G+ is given by the defective density g + (x) =. the dependence structure seems too complicated to be useful). and the maximum M is the total height of the ladder.d. = ST+(1) Figure 6. R+ is concentrated on (-oo. Recall that B(x) = 1 . Theorem 6 . has no mass on ( 0. define the pre-r+-occupation measure R+ by R+(A) = E f o "o I(St E A.e.e. 0]. 6. Also.(3B(x ) = pbo(x) on (0.1 The term ladder height is motivated from the shape of the process {Mt} of relative maxima.48 CHAPTER K. i. the sum of all the ladder steps (if rl > 0. see Fig. the second ladder point is ST+(2) where r+(2) is the time of the next relative maximum after r+(1) = r+. the ladder heights are i. The main result of this section is Theorem 6.5 below.2.2) . 1 For the compound Poisson model with p = 01-LB < 1. To illustrate the ideas. 0 f T+ (6.1. oo).1. we shall first consider the compound Poisson model in the notation of Example 1. On Fig. by approximation with step functions .1. SOME GENERAL TOOLS AND RESULTS M ST+(2) Sr. The first ladder step is precisely ST+.T+ > t)dt = E f 0T+I(St E A) dt.i. they have a semi-Markov structure (but in complete generality.
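Before turning to the proof, here is a small simulation check of Theorem 6.1 for an exponential example (parameters arbitrary): the empirical defective distribution of the first ladder height S_{τ+} is compared with G_+(x, ∞) = ∫_x^∞ βB̄(y) dy = ρe^{-δx}.

    import numpy as np

    rng = np.random.default_rng(5)

    # Compound Poisson model with beta = 1 and Exp(delta) claims, delta = 2,
    # so rho = 1/2 and the ladder height density is g_+(x) = beta*Bbar(x) = exp(-2x),
    # a defective density of total mass rho.
    beta, delta = 1.0, 2.0
    T_max = 100.0          # truncation horizon; P(T_max < tau_+ < oo) is negligible here
    n_paths = 20_000

    heights = []
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while t < T_max:
            t += rng.exponential(1.0 / beta)          # next claim epoch
            claims += rng.exponential(1.0 / delta)
            if claims - t > 0.0:                      # first ladder epoch: S_t > 0
                heights.append(claims - t)
                break

    heights = np.array(heights)
    print("P(tau_+ < oo) ~", len(heights) / n_paths, "  (theory: rho =", beta / delta, ")")
    for x in (0.5, 1.0, 2.0):
        emp = np.sum(heights > x) / n_paths           # defective tail G_+(x, oo)
        print(f"x={x}:  empirical {emp:.4f}   theory rho*exp(-delta*x) = "
              f"{(beta / delta) * np.exp(-delta * x):.4f}")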

. see Fig.T+>T) = P(STEA. 0 < t < T. 49 Proof Let T be fixed and define St = ST .2. P(STEA. {St }o<t<T is constructed from {St}o<t<T by time-reversion and hence. ST < ST_t. That is. 0 < t < T) P(STEA.2(b): r+ < t Thus. has the same distribution as {St}o<t<T.ST<St.0<t<T) = F(ST E A.2(a): T+ > t Figure 6.O<t<T) = P(STEA. 6. THE LADDER HEIGHT DISTRIBUTION Lemma 6 .St<0.2 R+ is the restriction of Lebesgue measure to (-00. St S* t a Figure 6.O<t<T). since the distribution of the Poisson process is invariant under time reversion.ST_t.6. 0].ST<St.

this is just the Lebesgue measure of A. U + St_ E A. and (6.. T+ > t] 0 _ /3 f E[B( A . for A C (0.3 Lemma 6 . we get G+ (A) = f 00 /3 dt E[B(A . Fig.2) in the last). and since the jump rate is /3.50 CHAPTER II.T+ > t] dt 0 T+ _ /3E f g( St) dt = 0 f g(y) R+(dy) 0 00 where g(y) = B(A .3 where the bold lines correspond to minimal values. it follows that R+ (A) is the expected time when ST is in A and at a minimum at the same time . SOME GENERAL TOOLS AND RESULTS Integrating w.t dT. . G+(A) = Q f 0 B(A . Figure 6. 6. That is.St _)I(-r+ > t).y) (here we used the fact that the probability of a jump at u t is zero in the second step..3 G+ is the restriction of /3R+*B to (0. oo). But since St -4 -oo a. The probability of this given { Su}u<t is B(A . E A} precisely when r+ > t.y)R+(dy) 00 Proof A jump of {St} at time t and of size U contributes to the event IS.St).St _).r. oo). s. cf.

this is equivalent to the risk process {St*} being stationary in the sense of (6. The sample path structure is assumed to be as for the compound Poisson case: {St*} is generated from interclaim times Tk and claim sizes Uk according to premium 1 per unit time.4) are (ak.. The traditional representation of the input sequence {(TT. 6 . 0 Generalizing the set-up. oo) x (0. oo).e. The points in the plane (marked by x on Fig. we consider the claim surplus process {St }t>o of a risk reserve process in a very general set-up.s > 0). 4 (the points in the plane are (ak .M o 08 shifted by s is defined the obvious way. Uk) (k = 1.6.s. i.* ) and the second the mark (the claim size Uk ).Q f r+(x . assuming basically stationarity in time and space. obviously. this does not depend on h). i. {St+8 . .4).T+ < oo).. The marked point process .) where ak = Ti + • • • + Tk . Nt St =>Uk k=1 -t where Nt = max{k = O. The first ladder epoch r is defined as inf It > 0 : St > 0} and the corresponding ladder height distribution is * G+ (A) = P(S** E A) = P(ST+ E A.e.1. 6 . In the stationary case. we define the arrival rate as E# { k : ak E [0 . Uk) for those k for which ak . Fig. 2. .1 With r+(y) = I(y < 0) the density of R+. Lemma 6. THE LADDER HEIGHT DISTRIBUTION 51 Proof of Theorem 6.S8 )t> o = {St }t>o for all s > 0.:T1 +•••+Tk <t}. the first component representing time (the arrival time o. h]} /h (by stationarity..3 yields g+ (x) = . cf.. U k)} k=1 a is as a marked point process M *.. We call M * stationary if M* o B8 has the same distribution as M* for all s > 0.z)B(dz) _ f I(x < z)B(dz) _ f (x). as a point process on [0.

h] Eco(M*) = 1 E f co(M o Bt)dt. Section 5) which has pure jump structure corresponding to pi = a = 0. i 1 U2 Us -1_ 0 or Q2 $ U3 *1 L 0 7 X I 11 1 Figure 6. = 0 . h. Assume {Jt} irreducible so that a stationary distribution 7r = (1i)iGE exists. vi(dx) = . Uk) k=1. h] and the sum approximately ^o(M*)I(ul < h). most often one takes h = 1).. i.52 CHAPTER II.4 Given a stationary marked point process M*.QiBi(dx). e. k: vk E [0.g. and let T = T2 denote the first proper interarrival time .5) does not depend on h. See. This more or less gives a proof that indeed (6. Sigman [348] for these and further aspects of Palm theory.2. V(M* o eak ). the r. letting h J. . of (6. where T is the first arrival time > 0 of M and h > 0 an arbitrary constant (in the literature.e.4 Consider a finite Markov additive process (cf. where TI = 0. Oh becomes the approximate probability F(ri < h) of an arrival in [0. As above . Note also that (again by stationarity) the Palm distribution also represents the conditional distribution of M* o Ot given an arrival at time t.. 0. The two fundamental formulas connecting M* and M are Eco(M) = aE E. Example 6 .s. we define its Palm version M as a marked point process having the conditional distribution of M* given an arrival at time 0 . We represent M by the sequence (Tk.5) represents the conditional distribution of M* given vi = 0. SOME GENERAL TOOLS AND RESULTS M* U. o...

6iBi + Aijgij Bij j#i iEE iEE 0 Theorem 6 . let U0 be a r.e. If Jt_ = i. dt A + E Aijgij j#i Thus the arrival rate for M* is 1] it A + E Aijgij iEE i#i Given that an arrival occurs at time t .OF(x). and by some additional arrivals which occur w.6. First choose (Jo_.s. let the arrivals and their marks be generated by {Jt} starting from Jo = j.oo a. A stationary marked point process M* is obtained by assigning Jo distribution Tr. j) and let the initial mark Ul have distribution Bi when i = j and Bij otherwise. After that. an arrival for M* occurs before time t + dt w. v.OEU0.p.5. .. Before giving the proof. we note: Corollary 6. 5 Consider a general stationary claim surplus process {St }t>o. qij when {Jt} jumps from i to j and have mark distribution Bij. THE LADDER HEIGHT DISTRIBUTION 53 Interpreting jump times as arrival times and jump sizes as marks.O for i # j.6 Under the assumptions of Theorem 6. Then the ladder height distribution G+ is given by the (defective) density g+(x) = . the distribution of Ul) is the mixture B = E aii Bi + aij Bij J = j#i !i J. It follows that we can describe the Palm version M as follows . and that p = 0EU0 < 1. the probability aij of Jt . Assume that St -* .= i. This follows by noting that iP*(0) = IIG+JI = J0 "o g+(x)dx = .p.O fo "o F(x)dx = . we get a marked point process generated by Poisson arrivals at rate /3i and mark distribution Bi when Jt = i.O for i = j and iriAijgij/.*(0) with initial reserve u = 0 is p = /3EU0. having the Palm distribution of the claim size and F (x) = F(Uo < x) its distribution . Jt = j is iri(3i /.p. the ruin probability ./. aij for (i. Jo) w. Note in particular that the Palm distribution of the mark size (i.

Su-<0. 0<u<t) = P(St EA.-A. has a very simple interpretation as the average amount of claims received per unit time .0<u<t) = P(StEA. CHAPTER H.0<u<tIAt) = P(St EA.St<Su. 2. The last property is referred to as insensitivity in the applied probability literature. It follows that for A C (0. Proof of Theorem 6. are point processes on (-oo .o.s. The sample path relation between { Su } and { Su } amounts to S„ = St .$St_ u.54 By (6. h.Mt). which makes an upwards jump at time .0<u<tIAt) = P(St EA.St <.Su_ <0.. . Now conditionally upon At . The result is notable by giving an explicit expression for a ruin in great generality and by only depending on the parameters of the model through the arrival rate 0 and the average ( in the Palm sense) claim size EU0. the mark at time Qk is denoted by Uk.Su< 0.St*_ u. oo ) and the arrival times 0 > 0_1 > a_2 > . (k = St}t>o 1.. A standard argument for stationary processes ([78] p.1] here the r .5.Q_k and has size U_ k. 105) shows that one can assume w. SOME GENERAL TOOLS AND RESULTS V` (0) = E E Uk k: ak E [0..o<u<t where a claim arrives at time t and has size Uo. in (0.5.. T+ = t given the event At that an arrival at t occurs . oo) p(t) = P(St EA..(left limit) when 0 < it < t and is illustrated on Fig .. 0).o. We then represent M by the mark (claim size ) Uo of the arrival at time 0. . Let p(t) be the conditional probability that ST+ E A. moves down linearly at a unit rate in between jumps and starts from S0 = U. oo) x (0 . that M* and M have doubly infinite time (i. in (-oo.. { Su}0<u<t is distributed as a process {Su} .0<u<t) = P(St EA.e. the arrival times 0 < 0'1 < Q2 < .g.)..5). oo)). Then clearly * G+ (A) = P(ST+ E A) = Consider a process { f p(t)f3dt. 6.l. and the kth preceding claim arrives at time t . .

. A sample path inspection just as in the proof of Lemma 6 . Fig. and we let L(dy) be the random measure L(A) = fo°° I(St E A. Since So = U0.s. NIt)dt . G' (A) = 3 f P(St E A. time instants corresponding to such minimal values have been marked with bold lines in the path of { St}.6. THE LADDER HEIGHT DISTRIBUTION 55 { A Su}0<u<t U0 U0 \t tt u>0 N U_1 Figure 6. 6. In Fig. Uo].5 where the boxes on the time axis correspond to time intervals where {St } is at a minimum belonging to A and split A into pieces corresponding to segments where {Su} is at a relative minimum. Thus. Mt)dt = i3EL(A) o"o .5 where it = { St < Su. the left endpoint of the support is -oo.5. 0 < u < t } is the event that { Su } has a relative minimum at t . the support of L has right endpoint U0. 6. t -a oo. and since by assumption St -* -oo a. cf. 2 therefore immediately shows that L(dy) is Lebesgue measure on (-oo.

SOME GENERAL TOOLS AND RESULTS = OE f 0 I(Uo>y)I (yEA)dy = Q f IP (Uo>y)dy A 0o a fA P(y) dy• 0 Notes and references Theorem 6.56 CHAPTER II. [147]. .2.6 is Bjork & Grandell [67].5 is due to Schmidt & co-workers [48]. A further relevant reference related to Corollary 6. [263] (a special case of the result appear in Proposition VI.1).

Chapter III

The compound Poisson model

We consider throughout this chapter a risk reserve process {Rt}t≥0 in the set-up of Chapter I, and assume that

• {Nt}t≥0 is a Poisson process with rate β;
• the claim sizes U1, U2, ... are i.i.d. with common distribution B, and independent of {Nt};
• the premium rate is p = 1.

Thus, {Rt} and the associated claim surplus process {St} are given by

  Rt = u + t − Σ_{i=1}^{Nt} Ui,    St = u − Rt = Σ_{i=1}^{Nt} Ui − t.

An important omission of the discussion in this chapter is the numerical evaluation of the ruin probability. Some possibilities are numerical Laplace transform inversion via Corollary 3.4 below, Panjer's recursion (Corollary XI.6), exact matrix-exponential solutions under the assumption that B is phase-type (see further VIII.3), and simulation methods (Chapter X). For finite horizon ruin probabilities, see Chapter IV.

It is worth mentioning that much of the analysis of this chapter can be carried over in a straightforward way to more general Lévy processes. A common view of the literature is to consider such processes as perturbed compound Poisson risk processes, being of the form Rt = R̃t + Bt + Jt where {R̃t} is a compound Poisson risk process, {Bt} a Brownian motion and {Jt} a pure jump process, say stable Lévy motion. See e.g. Dufresne & Gerber [126], Furrer [150], Schmidli [319], [324], and Schlegel [316]. We do not spell out in detail such generalizations.

1 Introduction

For later reference, we shall start by giving the basic formulas for moments, cumulants, m.g.f.'s etc. of the claim surplus St. Write µB^(n) = EU^n, µB = µB^(1) = EU, ρ = βµB = 1/(1 + η).

Proposition 1.1
(a) ESt = t(βµB − 1) = t(ρ − 1);
(b) Var St = tβµB^(2);
(c) Ee^{sSt} = e^{tκ(s)} where κ(s) = β(B[s] − 1) − s;
(d) the kth cumulant of St is tβµB^(k) for k ≥ 2.

Proof. It was noted in Chapter I that ρ − 1 is the expected claim surplus per unit time. A more formal proof goes as follows:

  ESt = E[ Σ_{k=1}^{Nt} Uk ] − t = E E[ Σ_{k=1}^{Nt} Uk | Nt ] − t = E[Nt µB] − t = tβµB − t = t(ρ − 1),

and this immediately yields (a). The same method yields also the variance as

  Var St = Var[ Σ_{k=1}^{Nt} Uk ] = Var E[ Σ_{k=1}^{Nt} Uk | Nt ] + E Var[ Σ_{k=1}^{Nt} Uk | Nt ]
         = Var[Nt µB] + E[Nt Var U] = tβ(µB)² + tβ Var U = tβµB^(2).

For (c), we get

  Ee^{sSt} = e^{−st} Σ_{k=0}^{∞} Ee^{s(U1+...+Uk)} P(Nt = k) = e^{−st} Σ_{k=0}^{∞} B[s]^k e^{−βt} (βt)^k / k!
           = exp{−st − βt + βtB[s]} = e^{tκ(s)}.

Finally, for (d) just note that the kth cumulant of St is tκ^(k)(0), where κ^(k)(0) is the kth derivative of κ at 0, and that B^(k)[0] = µB^(k). □
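The formulas of Proposition 1.1 are easy to check numerically; the following sketch (arbitrary example parameters) simulates St directly from its compound Poisson representation, i.e. by drawing Nt ~ Poisson(βt) and then Nt i.i.d. claims.

    import numpy as np

    rng = np.random.default_rng(6)

    # Example: beta = 1, Exp(2) claims, t = 5, so muB = 1/2, muB2 = EU^2 = 1/2, rho = 1/2.
    beta, delta, t, n = 1.0, 2.0, 5.0, 200_000
    muB, muB2 = 1.0 / delta, 2.0 / delta**2

    N = rng.poisson(beta * t, size=n)                          # number of claims in [0, t]
    S = np.array([rng.exponential(muB, size=k).sum() for k in N]) - t

    print("E S_t:   ", S.mean(), " vs  t*(rho - 1) =", t * (beta * muB - 1.0))
    print("Var S_t: ", S.var(),  " vs  t*beta*muB2 =", t * beta * muB2)

    s = 0.8                                                    # any s < delta
    kappa = beta * (delta / (delta - s) - 1.0) - s             # kappa(s) = beta*(B[s]-1) - s
    print("E e^{sS_t}:", np.mean(np.exp(s * S)), " vs  e^{t*kappa(s)} =", np.exp(t * kappa))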

i. (c) If 77 > 0. Sn+0 . we have Sok . lim supt. In this way. (a) No matter the value of 77. S„+V > S„ . where Tk is the time between the kth and the (k . Indeed. then St -00. The right hand inequality in (1. which is often used in the literature for obtaining information about {St} and the ruin probabilities. cf.1.. however. We return to this approach in Chapter V.3EU0-1 = -1µs where rt is the safety loading. . In particular.1)th claim.3) is proved similarly. we need the following lemma: Lemma 1. (d) If 17 = 0.V. u + v]. For the proof... and the value is then precisely v. 1. INTRODUCTION 59 The linear way the index t enters in the formulas in Proposition 1.4. II. the Uk . Here is one immediate application: Proposition 1.1 is the same as if {St} was a random walk indexed by t = 0. v > 0. For example. meaning that the increments are stationary and independent.h < St < S(n+1)h + h. St = oo. so that {Sok } is a random walk with mean EU-ET = EU.Tk.3 If nh < t < (n + 1)h.Sok_l = Uk .d. and there are at least two ways to exploit this: Recalling that ok is the time of the kth claim.2 (DRIFT AND OSCILLATION) St/ta3'p-1 ast ->oo. Obviously. (b) If 77 < 0. we get a discrete time random walk imbedded in the claim surplus process {St}. then St> Snh-V>Snh-h.1 = . then St 4 co. The point of view in the present chapter is. rather to view {St} directly as a random walk in continuous time.Tk are i. then Snh . Proof We first note that for u. then lien inft.. St = -oo.S„ attains its minimal value when there are no arrivals in (u. if t = nh + v with 0 < v < h. The connections to random walks are in fact fundamental. 2. obviously 0(u) = F(maxk Sok > u).

This contradicts u St-4-00.3.. if P(M > 0) = 1.1(b)) that the assertion holds as t -4 oo through values of the form t = 0.p. lim supn_.1). Thus using Lemma 1. Part (d) follows by a (slightly more intricate) general random walk result ([APQ].4 The ruin probability 0(u) is 1 for all u when 77 < 0. 0 Snh = -00...2: Proposition 1. and hence by the strong law of large numbers. where the size of the portfolio at time t is M(t). and hence it folz lows from standard central limit theory and the expression Var(St) = tf3pB (Proposition 1.1) as t -4 oo is normal vtwith mean zero and variance )3µsz) Proof Since {St}t>o is a Levy process (a random walk in continuous time).s.t .. Considering the next downcrossing (which occurs w.2.. Proof The case of 17 < 0 is immediate since then M = oo by Proposition 1. Corollary 1.1.. hence by induction i. The general case now follows either by another easy application of Lemma 1. and (b). 2h._.1. . h A similar argument for lim sup proves (a).1. we get lim inf St t->oo t n-roo nh<t<(n+1)h t = lim inf inf St h l++m of Sn 7t h = -ESh = p .6 Often it is of interest to consider size fluctuations. and < 1 for all u when 77 > 0.. There is also a central limit version of Proposition 1. or by a general result on discrete skeletons ([APQ] p..2. For any fixed h. Remark 1 .. h. Notes and references All material of the present section is standard.o. THE COMPOUND POISSON MODEL Proof of Proposition 1. it is seen that upcrossing occurs at least twice. then {St} upcrosses level 0 a. {Snh}n=o. However. Snh u = 00 (the lemma is not needed for (d)). (c) are immediate consequences of (a). {Snh}n=o. this case can be reduced to the compound Poisson model by an easy operational time transformation u T-1(t) where T(s) = )3 fo M(t)dt. p. it suffices to prove 4'(0) = F(M > 0) < 1. 1 since St -4 -oo) and repeating the argument. Assuming that each risk generates claims at Poisson intensity /3 and pays premium 1 per unit time.. is a discrete time random walk. If rl > 0.5 The limiting distribution of St . Snh/n a4' ESh = h(p . is a discrete-time random walk for any h > 0.60 CHAPTER III. 169) stating that lim infra.3. at least once. u 307).

2 The Pollaczeck-Khinchine formula

The time to ruin τ(u) is defined as in Chapter I as inf{t > 0: S_t > u}. We assume throughout η > 0 or, equivalently, ρ < 1.

As a vehicle for computing ψ(u), we shall here exploit the decomposition of the maximum M as a sum of ladder heights, cf. II.6. It is crucial to note that for the compound Poisson model, the ladder heights are i.i.d.; this follows simply by noting that the process repeats itself after reaching a relative maximum. The ladder height distribution G_+ is given by the defective density g_+(x) = βB̄(x) = ρb_0(x) on (0, ∞), where b_0(x) = B̄(x)/μ_B; the expression for g_+ was proved in Theorem II.6.6. Note that the distribution B_0 with density b_0 is familiar from renewal theory as the limiting stationary distribution of the overshoot (forwards recurrence time), cf. [APQ].

The decomposition of M as a sum of ladder heights now yields:

Theorem 2.1 The distribution of M is (1 − ρ) Σ_{n=0}^∞ ρ^n B_0^{*n}. Combined with ψ(u) = P(M > u), this gives the Pollaczeck-Khinchine formula

ψ(u) = P(M > u) = (1 − ρ) Σ_{n=1}^∞ ρ^n B̄_0^{*n}(u),   (2.1)

representing the distribution of M as a geometric compound.

Proof The probability that M is attained in precisely n ladder steps and does not exceed x is G_+^{*n}(x)(1 − ||G_+||) (the parenthesis gives the probability that there are no further ladder steps after the nth). Summing over n, the formula for the distribution of M follows.  □

Theorem 2.1 thus provides a representation formula for ψ(u), which we henceforth refer to as the Pollaczeck-Khinchine formula. Note that (2.1) is not entirely satisfying because of the infinite sum of convolution powers, but we shall nevertheless be able to extract substantial information from the formula. Alternatively, we may view the ladder heights as a terminating renewal process, and M becomes then the lifetime.

The following result generalizes the fact that the conditional distribution of the deficit S_{τ(0)} just after ruin, given that ruin occurs (i.e. that τ(0) < ∞), is B_0: taking y = 0 shows that the conditional distribution of (minus) the surplus −S_{τ(0)−} just before ruin is again B_0, and we further get information about the joint conditional distribution of the surplus and the deficit.

Theorem 2.i.5. 1) and W has distribution Fw given by dFyy/ dB(x) = x/µB. cf. Beekman [61].6. ST(o)) is given by the following four equivalent statements: B(z) dz. As shown in Theorem 11. see for example [APQ]. 7r(0 ) < oo) = Q 3 Special cases of the Pollaczeck-Khinchine formula The model and notation is the same as in the preceding sections. (d) the marginal distribution of ST(o)_ is B0. 2 The joint distribution of (-ST(o )_.just after ruin.2 and it gives an alternative derivation of the distribution of the deficit ST(o) Notes and references The Pollaczeck-Khinchine formula is standard in queueing theory. Theorem 2 . ladder heights so that the results do not appear not too useful for estimating 0(u) for u>0. and the conditional distribution of ST(o) given -ST(o)_ = y is the overshoot distribution B(Y) given by Bov)(z) _ Bo (y + z )/Bo(y). the Pollaczeck-Khinchine formula is often referred to as Beekman 's convolution formula. V is uniform on (0.2(a) is from Dufresne & Gerber [125].V)W) where V. We assume rt > 0 throughout. For the study of the joint distribution of the surplus ST(u)_ just before ruin and the deficit ST(„). ST(o) > y. there is a general marked point process version. The proof of Theorem 11. In the risk theory literature. Feller [143] or Wolff [384]. (a) 11 (-ST(o)_ > x.62 CHAPTER III. Asmussen & Schmidt [49]. where it requires slightly more calculation. (1 . ST(o )) given r (0) < oo is the same as the distribution of (VW.6. the form of G+ is surprisingly insensitive to the form of {St} and holds in a certain general marked point process set-up. [62]. cf. . in this setting there is no decomposition of M as a sum of i.d. Theorem A1. However. (c) the marginal distribution of -ST(o)_ is Bo . THE COMPOUND POISSON MODEL distribution is the same as the limiting joint distribution of the age and excess life in a renewal process governed by B.1 is traditionally carried out for the imbedded discrete time random walk.5. and the conditional distribution of -ST(o)_ given ST(o)_ = z is Bo z) The proof is given in IV. f +b (b) the joint distribution of (-ST( o)-. W are independent. see Schmidli [323] and references there. Again. cf.

3a The ruin probability when the initial reserve is zero

The case u = 0 is remarkable by giving a formula for ψ(u) which depends only on the claim size distribution through its mean:

Corollary 3.1 ψ(0) = ρ = βμ_B = 1/(1+η).

Proof Just note that (recall that τ_+ = τ(0))

ψ(0) = P(τ_+ < ∞) = ||G_+|| = β ∫_0^∞ B̄(x) dx = βμ_B.  □

Notes and references The fact that ψ(0) only depends on B through μ_B is often referred to as an insensitivity property. As shown in II.6, the formula for ψ(0) holds in a more general setting; a further relevant reference is Björk & Grandell [67].

3b Exponential claims

Corollary 3.2 If B is exponential with rate δ, then ψ(u) = ρ e^{−(δ−β)u}.

Proof The distribution B_0 of the ascending ladder height (given that it is defined) is the distribution of the overshoot of {S_t} at time τ_+ over level 0. But claims are exponential, hence without memory, and hence this overshoot has the same distribution as the claims themselves. Thus B_0 is exponential with rate δ, and the result can now be proved from the Pollaczeck-Khinchine formula by elementary calculations: B_0^{*n} is the Erlang distribution with n phases and thus the density of M at x > 0 is

(1 − ρ) Σ_{n=1}^∞ ρ^n δ^n x^{n−1} e^{−δx}/(n−1)! = (1 − ρ) ρ δ e^{−δ(1−ρ)x} = ρ(δ − β) e^{−(δ−β)x}.

Integrating from u to ∞, the result follows.

The result can, however, also be seen probabilistically without summing infinite series. Let r(x) be the failure rate of M at x > 0. For a failure at x, the current ladder step must terminate, which occurs at rate δ, and there must be no further ones, which occurs w.p. 1 − ρ. Thus r(x) = δ(1 − ρ) = δ − β, so that the conditional distribution of M given M > 0 is exponential with rate δ − β and

ψ(u) = P(M > u) = P(M > 0) P(M > u | M > 0) = ρ e^{−(δ−β)u}.

Alternatively, use Laplace transforms.  □
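The geometric-compound form (2.1) lends itself directly to Monte Carlo evaluation: draw a geometric number of i.i.d. ladder heights from B_0 and compare their sum with u. The following sketch (hypothetical parameter choices; exponential claims are assumed so that B_0 = B and Corollary 3.2 supplies an exact value to compare against) illustrates this:

```python
import numpy as np

# Pollaczeck-Khinchine simulation of psi(u): M is a geometric(1-rho) compound
# of i.i.d. ladder heights with distribution B0.  For exponential claims B0 = B,
# so the exact answer rho*exp(-(delta-beta)*u) of Corollary 3.2 is available.
rng = np.random.default_rng(1)
beta, delta, u = 1.0, 2.0, 5.0           # illustrative parameters only
rho = beta/delta
n = 100_000

K = rng.geometric(1 - rho, size=n) - 1   # number of ladder steps: P(K=k) = (1-rho)*rho**k
M = np.array([rng.exponential(1/delta, k).sum() for k in K])
print("simulated psi(u) =", (M > u).mean())
print("exact     psi(u) =", rho*np.exp(-(delta - beta)*u))
```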

2 is one of the main classical early results in the area.y)f3 (y) dy. E. we show that expression for /'(u) which are explicit (up to matrix exponentials) come out in a similar way also when B is phase-type.4) is similar (equivalently. u -+ oo. Then the first term on the r. II.h. (3. The case of (3. (Example VIII. 3c Some classical analytical results Recall the notation G+(u) = f^°° G+(dx). the survival probability Z(u) = 1 .g.1. then 24 1 V.3) Equivalently.T+ <oo)+P(M> u.3) below.4) can be derived by elementary algebra from (3. (3.+ >u.y)G+(dy) For the last identity in (3. (u) 35e-u + 35e-6u. A variety of proofs are available .3 The ruin probability Vi(u) satisfies the defective renewal equation ik (u) = 6+ (u) + G+ * 0(u) = Q f B(y) dy + u 0 f u 0(u .S. and weights 1/2 for each. is ?7+ ( u). cf. just insert the explicit form of G+.y)/3B (y) dy.3. u . Corollary 3.64 CHAPTER III.y)G+(dy ) = f U V(u .+ <U.i(u) satisfies the defective renewal equation Z(u) = 1 .1) For a heavy-tailed B.p + G+ * Z(u) = 1 . (3. (3.4) zu P(M > u . we use the Pollaczeck-Khinchine formula in Chapter IX to show that b(u) -. THE COMPOUND POISSON MODEL In VIII.S. T+ <00) (3.3. if 3 = 3 and B is a mixture of two exponential distributions with rates 3 and 7.p + f u Z(u .+ <u. and conditioning upon S. 0 Proof Write o (u) as P(M>u) = P(S.2).+ = y yields P(M>u.s.T+ <oo). (b) use stopped martingales .2) Notes and references Corollary 3. We mention in particular the following: (a) check that ip (u) = pe -(6-0)u is solution of the renewal equation (3.3).3)).1 p pBo(u).

eau B(u) du = f PB 3PB SPB 0 o (3.)3B[-s]) (3.7) s +. see e.7) and Corollary 3. it is not surprising that such arguments are more cumbersome since the ladder height representation is not used.p)s s /3 . g.6) 00 = (I .3. 0 Notes and references Corollary 3. which yields the survival probability as 00 f u }t Z(u) = f f3e-Rtdt 0 from which (3.(3 ..5 The first two moments of M are 2 EM . In view of (3.Ee-8M) f ao e-8' ( u)du = a-8uP (M > u)du = 0 o 1 ( 1+ (1 . either of these sets of formulas are what many authors call the Pollaczeck-Khinchine formula. Of course. Bo of B0 as m e8u B(du) = B[s] . .(3B[s] 1 .p)s .P)pB' (3. 111-112 or Feller [143].g.5). 206-207).7). 191). Corollary 3.Ps s(.p = (1 . We omit the details (see. by analytical manipulations (L'Hospital's rule) from (3. e.5) Proof We first find the m. Embrechts.PPB2) EM2 = PPB) + QZPBl 2(1 .8) Proof This can be shown. In fact.f./3B[-s] which is the same as (3.pBo[s] n-o (1 .5 can be found in virtually any queueing book. Griibel [179] and Thorin & Wikstad [370] (see also the Bibliographical Notes in [307] p. SPECIAL CASES OF POLLACZECK-KHINCHINE Corollary 3.3 . [APQ] pp.p)2 3(1 .5). The approach there is to condition upon the first claim occuring at time t and having size x .s .4 The Laplace transform of the ruin probability is 65 fo Hence Ee8M 00 e-8uiP(u)du .3 ..4) can be derived by elementary but tedious manipulations.P)PB 2(1 .s ./3B[-s] .p) E p"Bo[s]" = 1 . (3. numerical inversion of the Laplace transform is one of the classical approaches for computing ruin probabilities.1 Bo[s] = f oc. Also (3. [APQ] pp. Some relevant references are Abate & Whitt [2]. Griibel & Pitts [132].g.3 is standard . for example.

1 < u < n and let Z(u ) denote the r.3I( 0<y<1)dy Z(y)/3I(0<u.3+ 1-8+ J0 Z(u-y).u) a)Qea" + (1 .1)! k=1 u-1 .u + 1 )]k = QZ(u) . Z^ =e-R(k.s.4) for Z( u) means f lhu Z(u) = 1-.u)]k d 1 u) _ a) n ( du ( k! (1 - .)3(1 .Q) 3e.u)]k k! (1 L3) 1: e_O(k-u) NIN (k (k . of (3.z/'(u) takes the form Z(u) L^J L. e-O('-u) [)3(k .u)]k-1 k-u+1) [/3( k .9) shown for n .h./3Z(u .u) [N(k .66 CHAPTER III.u) [p(k . differentiation yields Z'(u) _ /3Z(u) which together with the boundary condition Z(0) = 1-/3 yields Z(u) _ (1-/3) eAu so that (3.9). then p) 1: e-p(k -u/. differentiation yields Z(u) _ /3Z(u) . For n < u < n + 1.u)]k k! k-0 The renewal equation (3.6 If B is degenerate at p. THE COMPOUND POISSON MODEL 3d Deterministic claims Corollary 3.u/p)]k k-o k! Proof By replacing {St} by {Stu/p} if necessary.1). .Q) k=0 k! E e-0( = /32(u) . we may assume p = 1 so that the stated formula in terms of the survival probability Z(u) = 1 . Assume (3.Q (k 1 k= n - [O(k .9) follows for 0 < u < 1./32(u .1).y<1)dy 0<u<1 1 < u < oo uu  u-lhu 1-a+/3 J0 uZ(y)dy U Z(y) dy 1-13+0 For 0 < u < 1.

co(a) = rc(a + 9) . The question then naturally arises whether ie is the c.4. The answer is yes: inserting in (4.) The adaptation of this construction to stochastic processes with stationary independent increment as {St} has been carried out in 11.rc(9) = . and thus (4.Qe(Bo[a] . of F9. corresponding to a compound Poisson risk process in the sense that for a suitable arrival intensity 00 and a suitable claim size distribution BB we have no(a) = rc(a + 9) .1) or equivalently.g. it follows that Z(u) = 2(u) for n<u<n+1.a. we set up .1 that c(a) = /3(B[a] . we just have to multiply (4.f.1) .(9) is well-defined.d.4. Formalizing this for the purpose of studying the whole process {St}.2) (Here 9 is any such number such that r. F and c.a. (4. 4 Change of measure via exponential families If X is a random variable with c. say t = 1: recall from Proposition 1.r. (4.6 is identical to the formula for the M/D/1 waiting time distribution derived by Erlang [139].g. K(a) = logEe'X = 109f 00 eaxF(dx) = logF[a].3) by t.3B[9].f.1) . 0 Notes and references Corollary 3. (4. B9(dx) = B[9] B(dx). but will now be repeated for the sake of self-containedness. in terms of the c.(9). See also Iversen & Staalhagen [208] for a discussion of computational aspects and further references.4) .3B = . We could first tentatively consider the claim surplus X = St for a single t.2). or equivalently BB[a] = B[^+ Repeating for t 54 1. and define rce by (4.g.f. CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES 67 Since Z(n) = 2(n) by the induction hypothesis.f.4) works as well.2) shows that the solution is Ox [O]0]. 00 the standard definition of the exponential family {F9} generated by F is FB(dx) = e°x-K(e)F(dx).

f.i. Let FT = o(St : t < T) denote the o•-algebra spanned by the St.1 Let P be the probability measure on D[0. The following result (Proposition 4.(9)} (4. Ti(a)/n. = exp {BST . it suffices to consider the case where Z is measurable w.7) Proof We must prove that if Z is FT-measurable. . with common c. G C {T < oo}. with T taking the role of n) is the analogue of the expression exp{8(x1 + • • • + xn) . .3 Let T be any stopping time and let G E FT.2.r.8) By standard measure theory..r. in particular the expression (4. (4.g.8) follows by discrete exponential family theory.5) for the density of n i.1.d.9) Proof We first note that for any fixed t. Then FB denotes the probability measure governing the compound Poisson risk process with arrival intensity. (4.t. for G E FT. Z is measurable w.Tic (0)} .7) now follows by taking Z = e-BST+TK(e)I(G) u Theorem 4 . the PBT) are mutually equivalent on. .d.4). t < T. and PBT) the restriction of PB to FT. But let Xk = SkT/n . G]. (4. oo) governing a given compound Poisson risk process with arrival intensity. Then P(G) = Fo(G) = EB [exp {-BST + TK(O)} .2 For any fixed T. .S(k_1)Tln. and thus (4. Xn). The identity (4. (4.68 CHAPTER III.i. Then the Xk are i.FT. v(Xi. .FTn) = Q(SkTIn : k = 0. Eee-BSt + tk(B) = 1. G].3 and claim size distribution B. n) for a given n. replications from Fe (replace x by xi in (4.10) .. and define 09.5) for the density.1) and multiply from 1 to n).0e and claim size distribution Be. and dP(T) dP^T) That is. BB by (4. (4. the corresponding expectation operator is E9.. . THE COMPOUND POISSON MODEL Definition 4. then EBZ = E [Ze9ST _T"(9)I . Proposition 4.6) F(G) = Po (G) = EB [exp {-BST + Ttc(0)} .nr.t.

10).7) holds.5. GT C_ Jr < T}. Ee [exp { -BST +Trc(9)} I(G) FT)] = 1. (a) rc (a) (b) KL(a) 'Y -'Y Figure 5. according to what has just been proved.f. and hence (4. The behaviour at zero is given by the first order Taylor expansion c(a) r. LUNDBERG CONJUGATION 69 Now assume first that G C Jr < T} for some deterministic T.1 It is seen that typically) a ry > 0 satisfying 0 = r. the typical shape of rc is as in Fig.ST) + (T . 77 Thus.FT]] = EB [exp { -BST + Trc(9)} I(G)] . t = T -.r is deterministic.(-Y) = 13(B['Y] .9) holds for G as well.g. (4. .1) . Thus by (4.7 1 Some discussion further supporting this statement is given in the next section. 5.1(a). subject to the basic assumption ij > 0 of a positive safety loading. Then G E FT. so that PG = EeE0 [exp { -9ST+Trc(9)}I(G)I FT)] = Ee [exp { -BST + rrrc(O)} I(G)EB [ exp {-9 (ST . (0) + rc'(0)a = 0 + ES1 a = a (p .. Then GT = G n Jr < T} satisfies GT E FT.1) _ -1 + a. Given FT.9) holds with G replaced by GT. 5 Lundberg conjugation Being a c. Now consider a general G. Letting T t oo and using monotone convergence then shows u that (4. c(a) is a convex function of a. Thus.r)rc(9)}I .

Thus. Lundberg conjugation corresponds to interchanging the rates of the interarrival times and the claim sizes. and (4. we write FL instead of F7. It is then readily seen that the non-zero solution of (5. an equivalent version illustrated in Fig.2) 7 Figure 5. Fig. 5. G = {T(u) < oo} in Theorem 4.3.4) yields /3L = b and that BL is again exponential with rate bL =. Note that KL (a) = /L (BL [a] .3) cf.QL instead of /37 and so on in the following . Equation (5. we further note that ( 5.s).3. Taking T = r(u).g.3. Example 5 . Thus B[7] = 6/.a = i(a + 7).4) ELS1 = #L(0) cf. the claim surplus process has positive drift > 0.70 CHAPTER III.1) . . 5. b[s] = 5/(b . e.2 s As support for memory.2 is B(7) = 1 + ^.1) (or (5.1(b). Fig.1(b).2)) is 7 = 5-/3. u It is a crucial fact that when governed by FL.1 Consider the case of exponential claims. THE COMPOUND POISSON MODEL exists . An established terminology is to call -y the adjustment coefficient but there are various alternatives around.1) is precisely what is needed for one of the terms in the exponent . the Lundberg exponent. (5. (5. 5. (5.1) is known as the Lundberg equation and plays a fundamental role in risk theory .

ψ(u) = P(τ(u) < ∞) = E_L[exp{−γS_{τ(u)}}; τ(u) < ∞],

taking θ = γ, T = τ(u), G = {τ(u) < ∞} in Theorem 4.3. Letting ξ(u) = S_{τ(u)} − u be the overshoot and noting that P_L(τ(u) < ∞) = 1 by (5.4), we can rewrite this as

ψ(u) = e^{−γu} E_L e^{−γξ(u)}.   (5.5)

Theorem 5.2 (LUNDBERG'S INEQUALITY) For all u ≥ 0, ψ(u) ≤ e^{−γu}.

Proof Just note that ξ(u) ≥ 0 in (5.5).  □

Theorem 5.3 (THE CRAMÉR-LUNDBERG APPROXIMATION) ψ(u) ~ C e^{−γu} as u → ∞, where

C = (1 − ρ)/(βB̂'[γ] − 1).   (5.6)

Proof By renewal theory (see A.1), ξ(u) has a limit ξ(∞) (in the sense of weak convergence w.r.t. P_L). Since e^{−γx} is continuous and bounded, we therefore have E_L e^{−γξ(u)} → C where

C = E_L e^{−γξ(∞)} = (1/μ_+^{(L)}) ∫_0^∞ e^{−γx} (1 − G_+^{(L)}(x)) dx = (1/(γμ_+^{(L)})) ∫_0^∞ (e^{γx} − 1) G_+(dx),   (5.7)

with G_+^{(L)} the P_L-ascending ladder height distribution and μ_+^{(L)} its mean. To identify G_+^{(L)}, take first θ = γ, T = τ_+, G = {S_{τ_+} ∈ A} in Theorem 4.3. Then P(S_{τ_+} ∈ A) = E_L[exp{−γS_{τ_+}}; S_{τ_+} ∈ A], which shows that

G_+^{(L)}(dx) = e^{γx} G_+(dx) = e^{γx} βB̄(x) dx.   (5.8)

Hence ∫_0^∞ (e^{γx} − 1) G_+(dx) = ||G_+^{(L)}|| − ||G_+|| = 1 − ρ, so that C = (1 − ρ)/(γμ_+^{(L)}), and all that is needed to check is that this is the same as (5.6); this follows from the expression for μ_+^{(L)} obtained in (5.10)-(5.12) below.  □
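In practice γ and C are found by numerical root-finding. A minimal sketch (illustrative parameters; exponential claims are assumed only so that the exact γ = δ − β and C = ρ are available for comparison) is:

```python
import numpy as np
from scipy.optimize import brentq

# Adjustment coefficient and Cramer-Lundberg constant, computed numerically.
# Exponential claims with rate delta are assumed purely for illustration.
beta, delta = 1.0, 2.0
Bhat  = lambda s: delta/(delta - s)               # B[s], valid for s < delta
dBhat = lambda s: delta/(delta - s)**2            # B'[s]
kappa = lambda s: beta*(Bhat(s) - 1) - s

gamma = brentq(kappa, 1e-9, delta - 1e-6)         # root of the Lundberg equation
rho = beta/delta
C = (1 - rho)/(beta*dBhat(gamma) - 1)             # constant (5.6)
for u in (1.0, 5.0, 10.0):
    print(u, C*np.exp(-gamma*u), rho*np.exp(-(delta - beta)*u))
```

For exponential claims the approximation C e^{−γu} coincides with the exact ruin probability; for general light-tailed B it is only an asymptotic equality as u → ∞.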

or equivalently of how close the safety loading 77 is to zero.12) Example 5 .-")G + (dx ) = 1 - J0 00 3B(x) dx = 1-p. A direct proof of C = p is of course easy: B ['y] d S S (S-7 )2 d7S --y S 02' C 1-p 1-p _ 1-p /3B' [7] 2 -1 P-1 p.1 above) and that C = p.e. we get L where 00 (1 .72 CHAPTER III. .1) (5.4 Consider first the exponential case b(x) = Se-ax.10) VW = JI c* e° (x) dx = a (B[a] . this solves the problem of evaluating (5.1 = ^7- The accuracy of Lundberg's inequality in the exponential case thus depends on how close p is to one. Using (5. From this it follows. that 7 = S -. of course. Then 0(u) = pe-(a_Q)u where p = /3/S.11) so that I 7B ['Y]-(B[7]-1) BI [7]-Q VP (7) 72 7 (using (5. but some tedious (though elementary) calculations remain to bring the expressions on a final form. Noting that SIG(L)II = 1 because of (5. u .1 . THE COMPOUND POISSON MODEL In principle. (5.7).4).3 (this was found already in Example 5.1)) and 7µ+L) = 'y/3 [7] 7 1/0 = /3B ['y] .8) yields +L) J0 xel'B ( x) dx (5.

Remark 5.5 Noting that ρ_L − 1 = β_L μ_{B_L} − 1 = κ_L'(0) = κ'(γ) = βB̂'[γ] − 1, we can rewrite the Cramér-Lundberg constant C in the nice symmetrical form

C = −κ'(0)/κ'(γ) = (1 − ρ)/(ρ_L − 1).   (5.13)

In Chapter IV, we shall need the following result which follows by a variant of the calculations in the proof of Theorem 5.3:

Lemma 5.6 For α ≠ γ,

E_L e^{−αξ(∞)} = (γ/(ακ'(γ))) ( 1 − β(B̂[γ−α] − 1)/(γ − α) ).

Proof Replacing γ by α in (5.7) and using (5.8), we obtain

E_L e^{−αξ(∞)} = (1/(αμ_+^{(L)})) ( 1 − ∫_0^∞ e^{(γ−α)x} βB̄(x) dx ),

using integration by parts as in (3.6) in the last step. Inserting (5.12), the result follows.  □
Notes and references The results of this section are classical, with Lundberg's inequality being given first in Lundberg [251] and the Cramer-Lundberg approximation in Cramer [91]. Therefore, extensions and generalizations are main topics in the area of ruin probabilities, and in particular numerous such results can be found later in this book; in particular, see Sections IV.4, V.3, VI.3, VI.6.

The mathematical approach we have taken is less standard in risk theory (some of the classical ones can be found in the next subsection). The techniques are basically standard ones from sequential analysis, see for example Wald [376] and Siegmund [346].

5a Alternative proofs
For the sake of completeness, we shall here give some classical proofs, first one of Lundberg's inequality which is slightly longer but maybe also slightly more elementary:

Alternative proof of Lundberg's inequality Let X be the value of {S_t} just after the first claim, F(x) = P(X ≤ x). Then, since X is the independent difference U − T between a claim U and an interarrival time T,

F̂[γ] = Ee^{γ(U−T)} = Ee^{γU} · Ee^{−γT} = B̂[γ] · β/(β + γ) = 1,

where the last equality follows from κ(γ) = 0. Let ψ^{(n)}(u) denote the probability of ruin after at most n claims. Conditioning upon the value x of X and considering the cases x > u and x ≤ u separately yields

ψ^{(n+1)}(u) = F̄(u) + ∫_{−∞}^u ψ^{(n)}(u − x) F(dx).

We claim that this implies ψ^{(n)}(u) ≤ e^{−γu}, which completes the proof since ψ(u) = lim_{n→∞} ψ^{(n)}(u). Indeed, this is obvious for n = 0 since ψ^{(0)}(u) = 0. Assuming it proved for n, we get

ψ^{(n+1)}(u) ≤ F̄(u) + ∫_{−∞}^u e^{−γ(u−x)} F(dx)
 ≤ ∫_u^∞ e^{−γ(u−x)} F(dx) + ∫_{−∞}^u e^{−γ(u−x)} F(dx)
 = e^{−γu} F̂[γ] = e^{−γu},

where the second inequality uses that e^{−γ(u−x)} ≥ 1 for x ≥ u.

Of further proofs of Lundberg's inequality, we mention in particular the martingale approach, see II.1.

Next consider the Cramér-Lundberg approximation. Here the most standard proof is via the renewal equation in Corollary 3.3 (however, as will be seen, the calculations needed to identify the constant C are precisely the same as above):

Alternative proof of the Cramér-Lundberg approximation Recall from Corollary 3.3 that

ψ(u) = β ∫_u^∞ B̄(x) dx + ∫_0^u ψ(u − x) βB̄(x) dx.

Multiplying by e^{γu} and letting Z(u) = e^{γu} ψ(u), we can rewrite this as Z = z + F * Z, where

z(u) = e^{γu} β ∫_u^∞ B̄(x) dx,   F(dx) = e^{γx} βB̄(x) dx,

since

e^{γu} ∫_0^u ψ(u − x) βB̄(x) dx = ∫_0^u e^{γ(u−x)} ψ(u − x) · e^{γx} βB̄(x) dx = ∫_0^u Z(u − x) F(dx).

Note that by (5.11) and the Lundberg equation, γ is precisely the correct exponent which will ensure that F is a proper distribution (||F|| = 1). It is then a matter of routine to verify the conditions of the key renewal theorem (Proposition A1.1) to conclude that Z(u) has the limit C = ∫_0^∞ z(x) dx / μ_F, so that it only remains to check that C reduces to the expression given above. However, μ_F is immediately seen to be the same as μ_+^{(L)} calculated in (5.10), whereas

∫_0^∞ z(u) du = ∫_0^∞ e^{γu} β ∫_u^∞ B̄(x) dx du = β ∫_0^∞ B̄(x) ∫_0^x e^{γu} du dx
 = (β/γ) ∫_0^∞ B̄(x) (e^{γx} − 1) dx = (β/γ) [ (B̂[γ] − 1)/γ − μ_B ] = (1 − ρ)/γ,

using the Lundberg equation and the calculations in (5.11). Easy calculus now gives (5.6).  □

6 Further topics related to the adjustment coefficient
6a On the existence of y
In order that the adjustment coefficient γ exists, it is of course necessary that B is light-tailed in the sense of I.2a, i.e. that B̂[α] < ∞ for some α > 0. This excludes heavy-tailed distributions like the log-normal or Pareto, but may in many other cases not appear all that restrictive, and the following possibilities then occur:

1. B̂[α] < ∞ for all α < ∞.
2. There exists α* < ∞ such that B̂[α] < ∞ for all α < α* and B̂[α] = ∞ for all α ≥ α*.
3. There exists α* < ∞ such that B̂[α] < ∞ for all α ≤ α* and B̂[α] = ∞ for all α > α*.

In particular, monotone convergence yields B̂[α] ↑ ∞ as α ↑ ∞ in case 1, and B̂[α] ↑ ∞ as α ↑ α* in case 2 (in exponential family theory, this is often referred to as the steep case). Thus the existence of γ is automatic in cases 1, 2; standard examples are distributions with finite support or tail satisfying B̄(x) = o(e^{−αx}) for all α in case 1, and phase-type or Gamma distributions in case 2. Case 3 may be felt to be rather atypical, but some non-pathological examples exist, for example the inverse Gaussian distribution (see Example 9.7 below for details). In case 3, γ exists provided B̂[α*] > 1 + α*/β and not otherwise, that is, dependent on whether β is larger or smaller than the threshold value α*/(B̂[α*] − 1).

Notes and references Ruin probabilities in case 3 with γ non-existent are studied, e.g., by Borovkov [73] p. 132 and Embrechts & Veraverbeke [136]. To the present author's mind, this is a somewhat special situation and therefore not treated in this book.

6b Bounds and approximations for 'y
Proposition 6.1 γ < 2(1 − βμ_B)/(βμ_B^{(2)}) = 2(1 − ρ)/(βμ_B^{(2)}).

Proof From U ≥ 0 it follows that B̂[α] = Ee^{αU} ≥ 1 + μ_B α + μ_B^{(2)} α^2/2. Hence

1 = β(B̂[γ] − 1)/γ ≥ β(γμ_B + γ^2 μ_B^{(2)}/2)/γ = βμ_B + βγμ_B^{(2)}/2,   (6.1)

from which the result immediately follows.  □
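The bound is easy to evaluate and compare with the exact γ in cases where the latter is known; a tiny numerical illustration (parameter values are arbitrary, exponential claims assumed so that γ = δ − β exactly):

```python
# Proposition 6.1 for exponential claims with rate delta (illustrative values only).
beta, delta = 1.0, 2.0
rho, muB2 = beta/delta, 2/delta**2
bound = 2*(1 - rho)/(beta*muB2)          # upper bound of Proposition 6.1
print("exact gamma =", delta - beta, " upper bound =", bound)   # 1.0 versus 2.0
```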

The upper bound in Proposition 6.1 is also an approximation for small safety loadings (heavy traffic, cf. Section 7c):

Proposition 6.2 Let B be fixed but assume that β = β(η) varies with the safety loading such that β(η) = 1/(μ_B(1+η)). Then as η ↓ 0,

γ = γ(η) ≈ 2ημ_B/μ_B^{(2)}.   (6.2)

Further, the Cramér-Lundberg constant satisfies C = C(η) → 1.

Proof Since ψ(u) → 1 as η ↓ 0, it follows from Lundberg's inequality that γ → 0. Hence by Taylor expansion, the inequality in (6.1) is also an approximation, so that

1 = β(B̂[γ] − 1)/γ ≈ βμ_B + βγμ_B^{(2)}/2 = ρ + βγμ_B^{(2)}/2,   i.e.   γ ≈ 2(1 − ρ)/(βμ_B^{(2)}) = 2ημ_B/μ_B^{(2)}.

That C → 1 easily follows from γ → 0 and C = E_L e^{−γξ(∞)} (in the limit, ξ(∞) is distributed as the overshoot corresponding to η = 0). For an alternative analytic proof, note that

C = (1 − ρ)/(βB̂'[γ] − 1) = ημ_B/(B̂'[γ] − 1/β) ≈ ημ_B/(μ_B + γμ_B^{(2)} − μ_B(1+η)) = ημ_B/(γμ_B^{(2)} − ημ_B) ≈ ημ_B/(2ημ_B − ημ_B) = 1.  □

Obviously, the approximation (6.2) is easier to calculate than γ itself. However, it needs to be used with caution, say in Lundberg's inequality or the Cramér-Lundberg approximation, in particular when u is large.
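The quality of (6.2) as a function of η is easily probed numerically. In the sketch below (illustrative choice: exponential claims with rate 1, so that γ(η) = η/(1+η) exactly), the approximation is close for small η but overshoots noticeably already for moderate loadings:

```python
# Accuracy of the small-eta approximation (6.2) for exponential claims with rate 1.
muB, muB2 = 1.0, 2.0
for eta in (0.05, 0.1, 0.3, 1.0):
    beta = 1/(muB*(1 + eta))
    gamma_exact = 1/muB - beta           # delta - beta with delta = 1/muB = 1
    gamma_approx = 2*eta*muB/muB2        # approximation (6.2)
    print(eta, round(gamma_exact, 4), round(gamma_approx, 4))
```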

6c A refinement of Lundberg 's inequality
The following result gives a sharpening of Lundberg 's inequality (because obviously C+ < 1) as well as a supplementary lower bound:
Theorem 6.3 C_− e^{−γu} ≤ ψ(u) ≤ C_+ e^{−γu}, where

C_− = inf_{x≥0} B̄(x) / ∫_x^∞ e^{γ(y−x)} B(dy),   C_+ = sup_{x≥0} B̄(x) / ∫_x^∞ e^{γ(y−x)} B(dy).

Proof Let H(dt, dx) be the P_L-distribution of the time τ(u) of ruin and the reserve u − S_{τ(u)−} just before ruin. Given τ(u) = t, u − S_{τ(u)−} = x, a claim occurs at time t and has distribution B_L(dy)/B̄_L(x), y > x. Hence

E_L e^{−γξ(u)} = ∫_0^∞ ∫_0^∞ H(dt, dx) ∫_x^∞ e^{−γ(y−x)} B_L(dy)/B̄_L(x)
 = ∫_0^∞ ∫_0^∞ H(dt, dx) · B̄(x) / ∫_x^∞ e^{γ(y−x)} B(dy)
 ≤ C_+ ∫_0^∞ ∫_0^∞ H(dt, dx) = C_+.

The upper bound then follows from ψ(u) = e^{−γu} E_L e^{−γξ(u)}, and the proof of the lower bound is similar.  □

Example 6.4 If B̄(x) = e^{−δx}, then an explicit calculation shows easily that

B̄(x) / ∫_x^∞ e^{γ(y−x)} B(dy) = e^{−δx} / ∫_x^∞ e^{(δ−β)(y−x)} δe^{−δy} dy = β/δ = ρ.

Hence C_− = C_+ = ρ, so that the bounds in Theorem 6.3 collapse and yield the exact expression ρe^{−γu} for ψ(u).  □

The following concluding example illustrates a variety of the topics discussed above (though from a general point of view the calculations are deceivingly simple: typically, γ and other quantities will have to be calculated numerically).

Example 6.5 Assume as for (3.1) that β = 3 and b(x) = (1/2)·3e^{−3x} + (1/2)·7e^{−7x}, and recall that the ruin probability is

ψ(u) = (24/35) e^{−u} + (1/35) e^{−6u}.

Since the dominant term is (24/35)e^{−u}, it follows immediately that γ = 1 and C = 24/35 = 0.686 (also, bounding e^{−6u} by e^{−u} confirms Lundberg's inequality). For a direct verification, note that the Lundberg equation is

γ = β(B̂[γ] − 1) = 3( (1/2)·3/(3−γ) + (1/2)·7/(7−γ) − 1 ),

which after some elementary algebra leads to the cubic equation 2γ^3 − 14γ^2 + 12γ = 0 with roots 0, 1, 6. Thus indeed γ = 1 (6 is not in the domain of convergence of B̂[γ] and therefore excluded). Further,

1 − ρ = 1 − βμ_B = 1 − 3( (1/2)(1/3) + (1/2)(1/7) ) = 2/7,

B̂'[γ] = (1/2)·3/(3−γ)^2 + (1/2)·7/(7−γ)^2 evaluated at γ = 1, i.e. 3/8 + 7/72 = 17/36,

so that

C = (1 − ρ)/(βB̂'[γ] − 1) = (2/7)/(3·17/36 − 1) = (2/7)/(5/12) = 24/35.

For Theorem 6.3, note that the function

B̄(u) / ∫_u^∞ e^{γ(y−u)} B(dy) = ( (1/2)e^{−3u} + (1/2)e^{−7u} ) / ( (3/4)e^{−3u} + (7/12)e^{−7u} ) = (3 + 3e^{−4u}) / (9/2 + (7/2)e^{−4u})

attains its minimum C_− = 2/3 = 0.667 for u = ∞ and its maximum C_+ = 3/4 = 0.750 for u = 0, so that 0.667 ≤ C ≤ 0.750 in accordance with C = 0.686.
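Since the example stresses that γ, C and the bounds are usually obtained numerically, a short sketch reproducing the numbers of Example 6.5 may be useful (the parameters below are exactly those of the example; the bounds are simply evaluated on a grid):

```python
import numpy as np
from scipy.optimize import brentq

# Numerical treatment of Example 6.5: beta = 3, B a 50/50 mixture of
# exponential distributions with rates 3 and 7.
beta, p1, d1, p2, d2 = 3.0, 0.5, 3.0, 0.5, 7.0

Bhat  = lambda s: p1*d1/(d1-s) + p2*d2/(d2-s)        # B[s] for s < 3
dBhat = lambda s: p1*d1/(d1-s)**2 + p2*d2/(d2-s)**2  # B'[s]
kappa = lambda s: beta*(Bhat(s) - 1) - s

gamma = brentq(kappa, 1e-8, d1 - 1e-8)               # adjustment coefficient (= 1)
rho = beta*(p1/d1 + p2/d2)
C = (1 - rho)/(beta*dBhat(gamma) - 1)                # Cramer-Lundberg constant (= 24/35)

# Bounds of Theorem 6.3: h(x) = Bbar(x) / int_x^inf e^{gamma(y-x)} B(dy)
Bbar = lambda x: p1*np.exp(-d1*x) + p2*np.exp(-d2*x)
den  = lambda x: p1*d1/(d1-gamma)*np.exp(-d1*x) + p2*d2/(d2-gamma)*np.exp(-d2*x)
x = np.linspace(0, 20, 2001)
h = Bbar(x)/den(x)
print(f"gamma={gamma:.4f}  C={C:.4f}  C-={h.min():.4f}  C+={h.max():.4f}")
```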

Notes and references Theorem 6.3 is from Taylor [360]. Closely related results are given in a queueing setting in Kingman [231], Ross [308] and Rossberg & Siegel [309]. Some further references on variants and extensions of Lundberg's inequality are Kaas & Govaaerts [217], Willmot [382], Dickson [114] and Kalashnikov [218], [220], all of which also go into aspects of the heavy-tailed case.

7 Various approximations for the ruin probability
7a The Beekman-Bowers approximation
The idea is to write ψ(u) as P(M > u), fit a gamma distribution with parameters λ, δ to the distribution of M by matching the two first moments and use the approximation

ψ(u) ≈ ∫_u^∞ (δ^λ/Γ(λ)) x^{λ−1} e^{−δx} dx.

According to Corollary 3.5, this means that λ, δ are given by λ/δ = a_1, 2λ/δ^2 = a_2, where

a_1 = βμ_B^{(2)}/(2(1−ρ)),   a_2 = βμ_B^{(3)}/(3(1−ρ)) + β^2 (μ_B^{(2)})^2/(2(1−ρ)^2),

i.e. δ = 2a_1/a_2, λ = 2a_1^2/a_2.
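The approximation is trivial to compute once the moments of B are known. A minimal sketch, following the moment-matching rule stated above (exponential claims with rate d are used only so that the exact value ρe^{−(d−β)u} is available for comparison; the quality of the fit naturally varies from case to case):

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

# Beekman-Bowers approximation, illustrated for exponential claims (rate d).
beta, d = 1.0, 2.0
muB, muB2, muB3 = 1/d, 2/d**2, 6/d**3
rho = beta*muB

a1 = beta*muB2/(2*(1 - rho))                                       # EM (Corollary 3.5)
a2 = beta*muB3/(3*(1 - rho)) + beta**2*muB2**2/(2*(1 - rho)**2)    # EM^2
delta_fit, lam = 2*a1/a2, 2*a1**2/a2                               # fitted gamma parameters

for u in (1.0, 5.0, 10.0):
    approx = gamma_dist.sf(u, lam, scale=1/delta_fit)              # gamma tail at u
    exact = rho*np.exp(-(d - beta)*u)
    print(u, approx, exact)
```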
Notes and references The approximation was introduced by Beekman [60], with the present version suggested by Bowers in the discussion of [60].

7b De Vylder's approximation
Given a risk process with parameters β, B, p = 1, the idea is to approximate the ruin probability with the one for a different process with exponential claims, say with rate parameter δ̃, arrival intensity β̃ and premium rate p̃. In order to make the processes look as much alike as possible, we make the first three cumulants match, which according to Proposition 1.1 means

β̃/δ̃ − p̃ = βμ_B − 1 = ρ − 1,   2β̃/δ̃^2 = βμ_B^{(2)},   6β̃/δ̃^3 = βμ_B^{(3)}.
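These matching equations can of course be solved numerically (the closed-form solution is given below); a minimal sketch, with an illustrative claim size distribution B = Gamma(2, 2) whose moments are hard-coded:

```python
import numpy as np
from scipy.optimize import fsolve

# De Vylder's idea: find an exponential-claims process (arrival b, claim rate d,
# premium p) matching the first three cumulants of the given process.
beta = 0.7
muB, muB2, muB3 = 1.0, 1.5, 3.0          # EU, EU^2, EU^3 of the chosen example B

def equations(x):
    b, d, p = x
    return (b/d - p - (beta*muB - 1),    # first cumulant per unit time
            2*b/d**2 - beta*muB2,        # second cumulant
            6*b/d**3 - beta*muB3)        # third cumulant

b, d, p = fsolve(equations, x0=(beta, 2.0, 1.0))
rho_star = (b/p)/d                        # the approximating process has exponential claims,
for u in (1.0, 5.0, 10.0):                # so its ruin probability is explicit (Corollary 3.2)
    print(u, rho_star*np.exp(-(d - b/p)*u))
```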

These three equations have the solutions

δ̃ = 3μ_B^{(2)}/μ_B^{(3)},   β̃ = 9β(μ_B^{(2)})^3/(2(μ_B^{(3)})^2),   p̃ = 1 − ρ + 3β(μ_B^{(2)})^2/(2μ_B^{(3)}).

Letting β* = β̃/p̃, ρ* = β*/δ̃, the approximating risk process has ruin probability ψ*(u) = ρ* e^{−(δ̃−β*)u}, and hence the ruin probability approximation is

ψ(u) ≈ ρ* e^{−(δ̃−β*)u}.   (7.2)

Notes and references The approximation (7.2) was suggested by De Vylder [109]. Though of course it is based upon purely empirical grounds, numerical evidence (e.g. [174]) shows that it may produce surprisingly good results.

7c The heavy traffic approximation

The term heavy traffic comes from queueing theory, but has an obvious interpretation also in risk theory: on the average, the premiums exceed only slightly the expected claims. That is, heavy traffic conditions mean that the safety loading η is positive but small, or equivalently that β is only slightly smaller than β_max = 1/μ_B. Mathematically, we shall represent this situation with a limit where β ↑ β_max but B is fixed.

Proposition 7.1 As β ↑ β_max, (β_max − β)M converges in distribution to the exponential distribution with rate δ = μ_B/μ_{B_0} = 2μ_B^2/μ_B^{(2)}.

Proof Note first that 1 − ρ = (β_max − β)μ_B. Letting B_0 be the stationary excess life distribution, we have according to the Pollaczeck-Khinchine formula in the form (3.7) that

E e^{s(β_max−β)M} = (1 − ρ)/(1 − ρ B̂_0[s(β_max − β)]) ≈ (1 − ρ)/(1 − ρ − ρ s(β_max − β)μ_{B_0}),

and since 1 − ρ = (β_max − β)μ_B, the right hand side converges to μ_B/(μ_B − sμ_{B_0}) = δ/(δ − s), proving the assertion.  □

Corollary 7.2 If β ↑ β_max, u → ∞ in such a way that (β_max − β)u → v, then ψ(u) → e^{−δv}.

Proof Write ψ(u) as P((β_max − β)M > (β_max − β)u) and appeal to Proposition 7.1.  □

These results suggest the approximation

ψ(u) ≈ e^{−δ(β_max−β)u}.   (7.3)

It is worth noting that this is essentially the same as the approximation

ψ(u) ≈ C e^{−γu}   (7.4)

suggested by the Cramér-Lundberg approximation and Proposition 6.2. This follows since η = 1/ρ − 1 ≈ 1 − ρ in heavy traffic, and hence by (6.2), γ ≈ 2ημ_B/μ_B^{(2)} ≈ 2(1 − ρ)μ_B/μ_B^{(2)} = δ(β_max − β). Numerical evidence shows that the fit of (7.3) is reasonable for η being say 10-20% and u being small or moderate, while the approximation may be far off for large u.

Notes and references Heavy traffic limit theory for queues goes back to Kingman [230]. In the setting of risk theory, the first results of heavy traffic type seem to be due to Hadwiger [184]. The present situation of Poisson arrivals is somewhat more elementary to deal with than the renewal case (see e.g. [APQ] Ch. VIII). We return to heavy traffic from a different point of view (diffusion approximations) in Chapter IV and give further references there.

7d The light traffic approximation

As for heavy traffic, the term light traffic comes from queueing theory, but has an obvious interpretation also in risk theory: on the average, the premiums are much larger than the expected claims. That is, light traffic conditions mean that the safety loading η is positive and large, or equivalently that β is small compared to β_max = 1/μ_B. Mathematically, we shall represent this situation with a limit where β ↓ 0 but B is fixed. Of course, in risk theory heavy traffic is most often argued to be the typical case rather than light traffic. However, light traffic is of some interest as a complement to heavy traffic, as well as it is needed for the interpolation approximation to be studied in the next subsection.

Omax max m.82 CHAPTER III. z/' (u) convergence P(U .e. 0 u Notes and references Light traffic limit theory for queues was initiated by Bloomfield & Cox [69]. [97]. En'=2 • • • = O(/32) so that only the first terms matters. Another way to understand that the present analysis is much simpler than in these references is the fact that in the queueing setting light traffic theory is much easier for virtual waiting times (the probability of the conditioning event {M > 0} is explicit) than for actual waiting times . 10 ( u The alternative expressions in (7. For a more comprehensive treatment. u Note that heuristically the light traffic approximation in Proposition 7. 7e Interpolating between light and heavy traffic We shall now outline an idea of how the heavy and light traffic approximations can be combined.Q limIP ( u) + Q lim z/'(u) Amax &0 amax ATAm. Light traffic does not appear to have been studied in risk theory.5) follow by integration by parts.u)+. (7. see Daley & Rolski [96]. The crude idea of interpolating between light and heavy traffic leads to 0 (u) C1 . 0(u) /3 J B(x)dx = /3E[U . ( 3 J O B dx.3 As .T > u).u.3 is the same which comes out by saying that basically ruin can only occur at the F(U . U > u] = /3iE(U . Sigman [347]. Again. ao n=1 00 n=1 (u) P) anllBBon(U) onPaBon(u) • Asymptotically. cf. i. by monotone time T of the first claim .T > u) = J o" B(x + u)/3e-ax dx . the Poisson case is much easier than the renewal case. and hence 00 (U) /3pBBo (u) = 0 / B(x)dx. .= 1- aJ 1 a 0+ 1 = = p. Asmussen [19] and references there. Indeed.(3 10.5) u Proof According to the Pollaczeck-Khinchine formula. THE COMPOUND POISSON MODEL Proposition 7.

f / Qmax B(x)dx 00 e-Qmaxxdx 4/ Qmax 00 Qmax-Q amaze" and the approximation we suggest is J B(x) dx = cLT(v) (say). we combine with our explicit knowledge of ip(u) for the exponential claim size distribution E whith the same mean PB as the given one B.Wmax f(x ) dx + pee6mQ. however. ^IE) exist: 1 (B) HT Qmsx-Q hm J e e-6" 2µE/µE2)'" = e(1 -6)" = - Q1Qm. no empirical study of the fit of (7.O(E)(u) 1 (1 . Al .O0 M. _(E) (u). Another main queueing paper is Whitt [380]. available.6) is . (7.VHT) ( ax Qm-Q ) h (B) ( . with rate 1/µB = /3max. Notes and references In the queueing setting . COMPARISONS OF CLAIM SIZE DISTRIBUTIONS 83 which is clearly useless . "/Qmex Cu) CLT(u ( /3max -0) + O16 CHT( U(Qmaz . 8 Comparing the risks of different claim size distributions Given two claim size distributions B(1). B(2).3n. Substituting v = u(. (U). where further references can be found ./3)) . -. .3 and use similar notation for -%(B) (u) = (u). z/i(E) (u) = pe-(Qmax-Q)u. Let OLT) (u) denote the light traffic approximation given by Proposition 7. to get non-degenerate limits . Thus .6) (1-p) The particular features of this approximation is that it is exact for the exponential distribution and asymptotically correct both in light and heavy traffic. one may hope that some correction of the heavy traffic approximation has been obtained. that is. ) M. Instead.x . we may ask which one carries the larger risk in the sense of larger values of the ruin probability V(') (u) for a fixed value of 0. [84]. ^ LT Q max-Q m"^ Qlo V LT) ( CHT(v) (say). even if the safety loading is not very small.3). we see that the following limits HT) (u'). The adaptation to risk theory is new.8. the idea of interpolating between light and heavy traffic is due to Burman & Smith [83 ].

B(' <. this implies St T(l)(u) > r(2)(u) for all u so that 17-(I) (U) < oo} C_ {T(2)(u) < oo}.6.s. A weaker concept is increasing convex ordering: B(1) is said to be smaller than B(2) (in symbols. Proposition 8. an equivalent characterization is f f dB(') < f f dB (2) for any nondecreasing convex function f. Bill is said to be convexly smaller than B(2) (in symbols.2 If B(') <j. u Of course. B(2)) if f fdB(1) < f fdB(2) for any convex function f. equivalent characterizations are f f dB(') < f f dB (2) for any non-decreasing function f. and a particular deficit is that we cannot compare the risks of claim size distributions with the same mean: if BM <d B(2) and µB«) = /IB(2). or the existence of random variables U(l).1 is quite weak. In terms of the time to ruin. Here convex ordering is useful: Proposition 8.84 CHAPTER III. cf. one can interpret f x°° B(y) dy as the net stop-loss premium in a stop-loss or excess-of-loss reinsurance arrangement with retention limit x. this ordering measures difference in variability. Proposition 8. B(2) and PB(1) = µB(2).ill(u) < V)(2) (U) for all u. Taking probabilities. then i. B(2)) in the increasing convex order if f BM (y) dy < f 00 Bi2i (y) dy x x for all x. Rather than measuring difference in size. XI. we can assume that 1) < St 2l for all t. THE COMPOUND POISSON MODEL To this end. we have the convex ordering. U(2) such that U(l) has distribution B('). then Bill = B(2). B(') <i. In particular (consider the convex functions x and -x) the definition implies that B(1) and B(2) must have the same mean. we shall need various ordering properties of distributions. B(') <d B(2)) if B(1)(x) < B(2)(x) for all x. Recall that B(') is said to be stochastically smaller than B(2) (in symbols. Proof According to the above characterization of stochastical ordering. then .' 1)(u) < V)(2) (U) for all u. for more detail and background on which we refer to Stoyan [352] or Shaked & Shantikumar [337]. most often the term stop-loss ordering is used instead of increasing convex ordering because for a given distribution B. In the literature on risk theory. U(2) distribution B(2) and U(1) < U(2) a. Finally. whereas (consider x2) B(2) has the larger variance.1 If B(') <d B(2). the proof is complete. .

with fixed mean.4 Let D refer to the distribution degenerate at 'LB . u We finally give a numerical example illustrating how differences in the claim size distribution B may lead to very different ruin probabilities even if we fix the mean p = PB. Proof Consider the light traffic approximation in Proposition 7. and here is one more result of the same flavor: Corollary 8. A general picture that emerges from these results and numerical studies like in Example 8.3 provides another instance of this.. The heavy traffic approximation (7.2 is the following: Proposition 8. B(2). COMPARISONS OF CLAIM SIZE DISTRIBUTIONS Proof Since the means are equal.p ) E /3"µ"Bo2)* n(u) _ V(2) (u) n=1 = Corollary 8.4) certainly supports this view: noting that. The problem is to specify what 'variation' means.3 If B(1) <.(1) (. then /'(')(u) < 0(2)(u) for all u. Proof If f is convex.5 If '0(1)(u) < p(2) (U) for all u and a. This u implies that D <. B. it is seen that asymptotically in heavy traffic larger claim size variance leads to larger ruin probabilities. then B(1) <. Then V. and consider the following claim size distributions: B1: the standard exponential distribution with density a-y.6 Fix /3 at 1/1.6 below is that (in a rough formulation) increased variation in B increases the risk (assuming that we fix the mean). Corollary 8. B(2). we have Bol) (x) f ' B(1) (y) dy < -' f' B(2) (y) dy = Bo2) (x)• µ 85 I.e. A partial converse to Proposition 8.u) = (1 _ P) E /3npnBo( 1):n(u) n=1 00 < (1. say to p. Hence by the Pollaczeck-Khinchine formula .8.1. .1 and µB at 1 so that the safety loading 11 is 10%. we have by Jensen 's inequality that E f (U) > f ( EU).. (D) (u) < O(B) (U ) for all u. larger variance is paramount to larger second moment. Bo1) <_d Bo2) which implies the same order relation for all convolution powers. Example 8. A first attempt would of course be to identify 'variation' with variance. from which the result immediately follows.

Kluppelberg [234].4142. A2 = 3.4. One then obtains the following table: U005 U0. Pellerey [287] and (for the convex ordering) Makowski [ 252]. Note to make the figures comparable. B. all distributions have mean 1. = 0. which appears to be smaller than the range of interest in insurance risk (certainly not in queueing applications!).e-'\1x + 0.e. B3 the comparison is as expected from the intutition concerning the variability of these distributions.01%. 0. i. and this is presumably a consequence of a heavier tail rather than larger variance.001 u0. we have 0r3 = 2 < or2 = 1 < 02 = 10 < 04 = 00 so that in this sense B4 is the most variable. the behaviour of which is governed by a parameter 9. A standard example from queueing theory is . However.0' U0. 9 Sensitivity estimates In a broad setting.01%. In terms of variances o2. sensitivity analysis (or pertubation analysis) deals with the calculation of the derivative (the gradient in higher dimensions) of a performance measure s(O) of a stochastic or deterministic system. van Heerwarden [189].9A2e-'2r where A.1%. We return to ordering of ruin probabilities in a special problem in VI.000. B3: the Erlang distribution with density 4xe-2x.) = a.. 1/)(u.86 CHAPTER III. Let ua denote the a fractile of the ruin function. 0. 32 50 75 100 B2 B3 B4 35 181 24 282 37 70 245 425 56 568 74 1100 (the table was produced using simulation and the numbers are therefore subject to statistical uncertainty). THE COMPOUND POISSON MODEL B2: the hyperexponential distribution with density 0. in comparison to B2 the effect on the ua does not show before a = 0. and consider a = 5%. B4: the Pareto distribution with density 3/(1 + 2x)5/2.lA. with the hyperexponential distribution being more variable than the exponential distribution and the Erlang distribution less. For B1i B2. 11 Notes and references Further relevant references are Goovaerts et al. [166]. 1%.1358.

with 0 the vector of service rates at different nodes and routing probabilities. increasing in u. and s(9) the expected sojourn time of a customer in the network. i.Ap). In the present setting. Then ib = Pe-(6-13)u. s(9) is of course the ruin probability t' = Vi(u) (with u fixed) and 0 a set of parameters determining the arrival rate 0. the premium rate p and the claim size distribution B.01/2u. Similar conclusions will be found below. t]. Proof This is an easy time transformation argument in a similar way as in Proposition 1. SENSITIVITY ESTIMATES 87 a queueing network.1 Consider the case of claims which are exponential with rate 8 (the premium rate is one). where the partial derivatives are evaluated at p = 1. In particular . Then if t is large . obtained say in the natural way as the empirical arrival rate Nt/t in [0. while /3 = j3 is an estimate.2 Consider a risk process { Rt} with a general premium rate p. and hence a _ e-(6-0)u + u e-(6-0)u = ( -i + which is of the order of magnitude uV. Then the arrival rate /3(P) for { R(P) } is )31p. if = a e-(6-A)u. the standard deviation on the normalized estimate ^/1' (the relative error ) is approximatively . it follows that -' is approximatively normal N(0. and hence the effect of changing p from 1 to 1 + Ap corresponds to changing /3 to /3/(1 + Op) /3(1 . Thus at p = 1. Thus. u Proposition 9. For example.19P a/ . the distribution of %3 -0 is approximatively normal N(0„ Q/t). we may be interested in a'/ap for assesing the effects of a small change in the premium.- a/3 0 .9.1. Assume for example that 8 is known. Let R(P) = Rtli. Example 9. say estimated from data. a2/t).(u) for large u. a0 as ao 80 19P .3. where Q2 = fl ( l2 1113 / _ Ou2v)2. or we may be interested in aV)/0/3 as a measure of the uncertainty on '0 if 0 is only approximatively known. Then a p ao = 00 -Qa/.e..

()YC = 1 +y/ /3 \ Q2 From this (9. we cannot expect in general to find explicit expressions like in Example 9. u Now consider the ruin probability 0 = 0 (u) itself. The most intuitive approach is to rely on the accuracy of the Cramer-Lundberg approximation . but must look for approximations for the sensitivities 0.3) follows by straightforward algebra.()(0 +'0) ' (9 .6 below for some discussion of this assumption). but we shall concentrate on a special structure covering a number of important cases.^)] 1-(/3+y)we(9+'y. In the case of the claim size distribution B.4). 4) (9 .u-ypCe-7u -urypO. various parametric families of claim size distributions could be considered. this intuition is indeed correct. (. we can rewrite the Lundberg equation as w(9+ -y. (9.()-wC(e. (3+'y)PC (0+7.((dx ) = exp {Ox + (t(x) .3 70 = 'Ye = = 7 /3(1- we(e +'y. (9. x > 0 (9. THE COMPOUND POISSON MODEL As a consequence.(/3 + y)we(9 + 7.t. Similar notation for partial derivatived are used below. mathematically a proof is needed basically to show that two limits (u -* oo and the differentiation as limit of finite differences) are interchangeable. Consider first the case of 8/8/3: .3. Differentiating w. Viei '0(.()^ 1 .w(O. () Proof According to (9. However . namely that of a two-parameter exponential family of the form Bo.3. /3 yields w e(e + Y. so that heuristically we obtain '00 50-ryu = Coe-"u . ^) . 3) ( 9 . 9.g. Of course.1 or Proposition 9.w(6.6) As will be seen below. Consider first the adjustment coefficient y as function of 3. e.88 CHAPTER III.3 or/and B.10) below. () = log(1 + -y//3).2) (see Remark 9.r. and write -yp = 8-y/8/3 and so on .5) are similar. Proposition 9. 5) (Q+'Y)[we(0+7.0 = t/'(u) and the Cramer-Lundberg constant C.O-we (9. ()} p(dx) . for the ruin probabilities . and the proofs of (9. it suffices to fix the premium at p = 1 and consider only the effects of changing .

()] exp {w (O + -y.w(9. () exp {w(9 + a.x) F(dx ) --f u J C F(dx) = C as u -4 oo.8). the proof is complete.x)B(x) dx + J U W(u . () . 11 For the following. u 0 Proceeding in a similar way as in the proof of the Cramer-Lundberg approximation based upon (9. Further write de = [we (9 +'y. and alsoo zl(u) -+ 0 because of B['y ] < oo. () . Z(u)/u -a C//3PF where PF is the mean of F. SENSITIVITY ESTIMATES Proposition 9. Hence by a variant of the key renewal theorem (Proposition A1.g. F(dx) = e'yy/3B(x)dx. But from the proof of Theorem 5.3) for z/'(u). PF = (1 . ()} .10) (9.12)). z2(U) = e7" J u b(u .x).St (U) Ee.we(9 .w(9. w((9 + a. Be. () .11) Ee.8) Letting cp = e0/e/3 and differentiating (9. u 0 Then Z = z + F * Z and F is a proper probability distribution .x)B(x)dx. we note the formulas Ee.8) (Section 5).w(O. ()} . it holds that 89 a ue -ryu a/3 Q(1 P) 7C2 Proof We shall use the renewal equation (3.C).3(x) dx. () . we get p(u) = J "O B(x) dx + J U O(u .9) (9.2 of the Appendix ).([a] = exp {w(9 + a. Barndorff-Nielsen [58]).p)/C'y.3 (see in particular (5.9.QB(x) dx. By dominated convergence. we multiply by e7" and let Z(u) = elt" cp(u).4 As u oo. Combining these estimates .4t (U)e°`U = which are well-known and easy to show (see e. 0(u) = /3 Ju"O B(x) dx + f 0 0(u . z2(u) _ 1 ^ e'ri`i7i( u . Z= zl + z2 where zl (u) = e7u J m B(x)dx.x). O} (9. (9.(e"U = = wS(O.

2) holds. 0 x Multiplying by e7" and letting Z(u) = e"uV(u). ()]B(dy) dx.x).lB(x) dx = e-7uzl(u) + e-7°zz(u) + V(u T where zl (u) = .w( (0. By dominated convergence and (9. ()} 1z(dy) = f [t(y) .wc(9.1) B(dy) 'f '[t(y) . 8^ ue-7u.8) that cp(u) . ()]--(e7v . ()]e7vB(dy) 'fCd 7 c . ^)} [wc (0 + 7. Then as u -> oo. z = zl + z2.6e7u f "o f[t(y) . F(dx) = e7x.QB(x)dx.12) f exp {O y + (t(y) .e7x/3 f 00 [t(y) .11).6C do 89 1-p 8( 1-p Proof By straightforward differentiation.9)-(9.w( (0. u Z2(U) = e7° f u ^/i(u . 2 z 07P N ue-7u (3C de . ())B(dy) dx. C)] (1 + 7 ) Proposition 9.w(0.90 CHAPTER III. )}B(dy)• Letting cp it thus follows from (9.5 Assume that (9.wc(O. THE COMPOUND POISSON MODEL [we(e+7. C) .x)f3 f ^[t(y) .w(e. ()]B(dy) dx x 0 0C T ON O . this implies Z = z + F * Z. 8 8() 8( (9. oo z2 (u) f C . ^) .w (9.we (0. 01 (i+) do = +'Y.wc (O.

() ='I'(t. 9 = -S.12) follows.Sry a/32 + a/37 + /37 . () = -C/9 = a/S.9. It follows after some elementary calculus that p = a)3/5 and.a/35-a&y' ' (9.pa+1 . w(e.w((9.rye) S 5-ry-a.15) (9. We get w( (0.17) (9.13) (9. < = a. Example 9. ())B(dy) < oo. () = log r(a) .1e-dz = 1 exp {-Sx + a log x .C log(-9).6 Consider the gamma density b (x ) = Sa xa.1 .ry) 5a-1 cry (5 .yu/3C2do u86 89 1-p' az/) = 8z/. U 7µF from which the second assertion of (9. t(x) = logx.Y)a+1 ' (9. and also zj (u) -4 0 because of f Hence. SENSITIVITY ESTIMATES as u -3 oo. Z(u) /3C 91 o c'o e11(t (y) . by inserting in the above formulas.a log S = log r(c) . we (9.. and the proof of the first one u is similar..16) (9. ( 9. that C = a.. ue-_Yu 'C2d( 8a 8( 1 -p .18) (05 + 57 _'3_y ./35' a/i'y + aryl 62-5ry.) -log(-9) = %F(a) -logs where %1 = F'/]F is the Digamma function.2) holds with p(dx) = x-ldx.QS 1 .12) takes the form y)- alp a . a /(S .14) de = d( 7!3 76 = -7e = log ( \ ( \5a_ / \SSry ) 72 .3-ary tog('Finally. Here (9. .(log r(a) -a log S)} • r(a) 1.

([Y] . C) .2a) } Thus the condition B[a*] > 1 + a* /. C = . further yield . THE COMPOUND POISSON MODEL Example 9. Be.22.7 Consider the inverse Gaussian density ( b(x) Zx37 exp This has the form (9. C) = B = -Yc = de = do = .21og 2.1 16 +ry c C2-2ry 2( = + 70 We (e. () = -Cc .92 CHAPTER III.w(9. for a < a* = z (.2) with µ(dx) = 2x3zrdx.S[a] = exp {w (9 + a.9) (-() .CZ -try)} 1 C C2 -try . 9 = .log c = -2 In particular. Straightforward but tedious calculations .2 .3Ee.3."62 . ()} = exp {c (C .2 -log (-0. t(x) _ -.1 = eXP {c(C . which we omit in part . w(e.l3 of Section 6a needed for the existence of ry becomes e^Q > 1+62 / 2.

2 of the parameters. the exponent of the density in an exponential family has the form 01 tl (x) + • • • + 9ktk (x).3C2de 1-p' z a = -c . However..-cue_7u)3C P Remark 9.8 The specific form of (9. BT [a]= NT ^` e"U.12) takes the form a = a 93 ar. the exponent is either Ox. That it is no restriction to assume k < 2 follows since if k > 2. sj=1 and let -YT be defined by IKT('ryT) = 0. Thus. thus.. In general. by the LLN both F (NT = 0) and F (PT > 1) converge to 0 as T -.a.oo.2) is motivated as follows. we have assumed k = 2 and ti (x) = x. then ryT < 0.7 and references there. (9. the results presented here are new. and hence explicit or asymptotic estimates are in general not possible. Van Wouve et al. However . Note that if NT = 0. Comparatively less work seems to have been done in risk theory.. we can just fix k . queueing networks) are typically much more complicated than the one considered here. then BT and hence ryT is undefined. or Ct(x). in which case we can just let t(x) = 0. Thus. if 1 PT = /3TNT(U1+. [379] consider a special problem related to reinsurance. To this end. in u which case the extension just described applies. Notes and references The general area of sensitivity analysis (gradient estimation) is currently receiving considerable interest in queueing theory. B are assumed to be completely unknown.+UNT) > 1. to our knowledge. the main tool is simulation. and we estimate -y by means of the empirical solution ryT to the Lundberg equation. . 10 Estimation of the adjustment coefficient We consider a non-parametric set-up where /3. kT (a) = /T (BT [a] . Finally if k = 1.1) . for which we refer to X. let NT 16T = ^T .10.g. That it is no restriction to assume one of the ti(x) to be linear follows since the whole set-up requires exponential moments to be finite (thus we can always extend the family if necessary by adding a term Ox). ae t 1lEY u -S _ . the models there (e. Also. ESTIMATION OF THE ADJUSTMENT COEFFICIENT Finally.

B[7]) 0+ Iv/o-(b[-y]-.1 As T -4 oo.2) follows from NT/T a4' . B[2'Y] - /3T ) . a2 where a2 = /3r.2) r-T(7) N N (0.)vl+ N CO.3) Proof Since Var(eryU) = we have B[7].Q and Anscombe 's theorem. .1) 'YT ..T y .3T . For the proof. vfo-VFB[2-y].B[7]) + B [7] . 1) r. 16T where V1. then (10.'s. N ( n[7].: N 0. More generally.a BT[7] I B[7] I + . since NT /T .i3)(B[7] -1) + (3(BT[7] - . we need a lemma.If . B [7]2 (10.2 As T -* oo.1) ./^ B[27] .B[7]2 }) ( T 0 . THE COMPOUND POISSON MODEL Theorem 10 . 7T a4' 7. B[27] .(27)/K'(7)2. V2 are independent N (0. Lemma 10 .1)2 + E[27] .B[7]2 V2 . If furthermore B[27] < oo.: N ()3.v. Hence KT(7) = (F' + (OT a(B[7l 0))((BT [7] .94 CHAPTER III.b[-Yp'V21 T { (E[7] .3/ T). (10.'Y .7 + (. it is easy to see that we can write \ V1 1 l _ .1) .B[7]2 n Hence ( 10.

Combining ( 10. and the truth of this for all e > 0 implies ryT a-t 'y. 6"Y (10.e. NT BT [a] Hence r. NT i =1 n'(a) for all a so that for all sufficiently large T K7 .4) and Lemma 10. 7T E (-y . If ryT E (7 - we have KT(7 .E) < 0 < kT(7 + E) for all sufficiently large T .3). By the law of large numbers. first note that e-7TU N (e-7U u2e-27Uo'2/T) 7 . where ryT is some point between ryT and ry.1 can be used to obtain error bounds on the ruin probabilities when the parameters .E) < 4T(7T) < 4T(7 + E).KT(7) kT(7) K'(7) .c'(7) N (0' T (2(7) / N (0.1 By the law of large numbers. BT[a] -3 B[a]. lcT(a ) 4 /c(a).Q.2.E ) < 4T(7T) < (7 +0' which implies 'T(ry4) a$' r. Theorem 10.'(-y). Now write KT(7T) - kT(7) = 4T(7T)( 7T -7).e. Then r. it follows that 7T-7 KT(7T) .(ry .4) + E).(ry + e) and hence KT(7 . Let 0 < E < ry. -y + E) eventually..'T(a) = 1 E Uie°U' a$' EUe "u = B'[a]. °7IT) . OT a 95 u 4 /3. I. Proof of Theorem 10. To this end . 0 are estimated from data .10. ESTIMATION OF THE ADJUSTMENT COEFFICIENT which is the same as (10.e) < 0 < r.

Theorem 10.1 can be used to obtain error bounds on the ruin probabilities when the parameters $\beta$, $B$ are estimated from data. To this end, first note that
$$e^{-\gamma_T u} \approx N\bigl(e^{-\gamma u},\, u^2 e^{-2\gamma u}\sigma_\gamma^2/T\bigr).$$
Thus an asymptotic upper $\alpha$ confidence bound for $e^{-\gamma u}$ (and hence by Lundberg's inequality for $\psi(u)$) is
$$e^{-\gamma_T u} + f_\alpha\, u\, e^{-\gamma_T u}\,\frac{\hat\sigma_{\gamma,T}}{\sqrt{T}},$$
where $\hat\sigma^2_{\gamma,T} = \hat\beta_T\{(\hat B_T[\gamma_T]-1)^2 + \hat B_T[2\gamma_T] - \hat B_T[\gamma_T]^2\}/\hat\kappa_T'(\gamma_T)^2$ is the empirical estimate of $\sigma_\gamma^2$ and $f_\alpha$ satisfies $\Phi(f_\alpha) = 1 - \alpha$ (e.g. $f_\alpha = 1.96$ if $\alpha = 2.5\%$).

Notes and references Theorem 10.1 is from Grandell [170]. Further work on estimation of $\gamma$ with different methods can be found in Csörgő & Steinebach [94], Csörgő & Teugels [95], Deheuvels & Steinebach [102], Embrechts & Mikosch [133], Frees [146], Herkenrath [192], Hipp [196], [197], Mammitzsch [253] and Pitts, Grübel & Embrechts [292].

A major restriction of the approach is the condition $\hat B[2\gamma] < \infty$, which may be quite restrictive. For example, if $B$ is exponential with rate $\delta$ so that $\gamma = \delta - \beta$, it means $2(\delta - \beta) < \delta$, i.e. $\delta < 2\beta$, or equivalently $\rho > 1/2$ or $\eta < 100\%$. For this reason, various alternatives have been developed. One (see Schmidli [321]) is to let $\{V_t\}$ be the workload process of an M/G/1 queue with the same arrival epochs as the risk process and service times $U_1, U_2, \ldots$ Letting $W_0 = 0$, $W_n = \inf\{t > W_{n-1} : V_t = 0,\ V_s > 0$ for some $s \in (W_{n-1}, t)\}$, the $n$th busy cycle is then $[W_{n-1}, W_n)$, and the known fact that the
$$Y_n = \max_{t \in [W_{n-1}, W_n)} V_t$$
are i.i.d. with a tail of the form $\mathbb{P}(Y > y) \sim C_1 e^{-\gamma y}$ (see e.g. Asmussen [23]) can then be used to produce an estimate of $\gamma$. This approach in fact applies also for many models more general than the compound Poisson one.
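Continuing the simulation sketch above (again with assumed, purely illustrative parameter values), the asymptotic upper confidence bound of the preceding display can be computed as follows; the variance estimate is the plug-in version $\hat\sigma^2_{\gamma,T}$ described there.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

rng = np.random.default_rng(0)
beta, delta, T, u = 1.0, 1.5, 5000.0, 10.0   # assumed illustrative values

U = rng.exponential(1.0 / delta, rng.poisson(beta * T))  # simulated claim data
beta_hat = len(U) / T

def B_hat(a):
    return np.mean(np.exp(a * U))            # empirical B_T[a]

def Bp_hat(a):
    return np.mean(U * np.exp(a * U))        # empirical B_T'[a]

def kappa_hat(a):
    return beta_hat * (B_hat(a) - 1.0) - a

def kappap_hat(a):
    return beta_hat * Bp_hat(a) - 1.0        # empirical kappa_T'(a)

gamma_T = brentq(kappa_hat, 1e-8, 0.999 * delta)

# Plug-in estimate of the asymptotic variance of gamma_T (Theorem 10.1)
sigma2_hat = beta_hat * ((B_hat(gamma_T) - 1.0) ** 2
                         + B_hat(2 * gamma_T) - B_hat(gamma_T) ** 2) / kappap_hat(gamma_T) ** 2

alpha = 0.025
f_alpha = norm.ppf(1.0 - alpha)              # = 1.96 for alpha = 2.5%
upper = np.exp(-gamma_T * u) * (1.0 + f_alpha * u * np.sqrt(sigma2_hat / T))
print("point estimate exp(-gamma_T u):", np.exp(-gamma_T * u))
print("asymptotic upper confidence bound:", upper)
```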

Chapter IV

The probability of ruin within finite time

This chapter is concerned with the finite time ruin probabilities
$$\psi(u,T) = \mathbb{P}(\tau(u) \le T) = \mathbb{P}\Bigl(\inf_{0\le t\le T} R_t < 0 \,\Big|\, R_0 = u\Bigr) = \mathbb{P}\Bigl(\sup_{0\le t\le T} S_t > u\Bigr).$$
Only the compound Poisson case is treated; generalizations to other models are either discussed in the Notes and References or in relevant chapters.

The notation is essentially as in Chapter III. In particular, the Poisson intensity is $\beta$, the claim size distribution is $B$ with m.g.f. $\hat B[\cdot]$ and mean $\mu_B$, the premium rate is 1, and the safety loading is $\eta = 1/\rho - 1$ where $\rho = \beta\mu_B$. Unless otherwise stated, it is assumed that $\eta > 0$ and that the adjustment coefficient (Lundberg exponent) $\gamma$, defined as solution of $\kappa(\gamma) = 0$ where $\kappa(s) = \beta(\hat B[s] - 1) - s$, exists. Further let $\gamma_0$ be the unique point in $(0, \gamma)$ where $\kappa(\alpha)$ attains its minimum value. See Fig. 0.1 (the role of $\gamma_y$ will be explained in Section 4b).
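Since $\gamma$ and $\gamma_0$ appear throughout the chapter, it may help to see how they are computed numerically for a concrete model. The following sketch (not part of the original text) does so for an assumed Gamma claim size distribution; the distribution and all parameter values are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

# Assumed illustrative parameters: Poisson rate beta and Gamma(shape, rate)
# claim sizes; premium rate is 1 and rho = beta * mu_B must be < 1.
beta, shape, rate = 0.6, 2.0, 2.0          # mu_B = shape / rate = 1, rho = 0.6

def B_mgf(s):
    """m.g.f. B[s] of the Gamma(shape, rate) claim size distribution (s < rate)."""
    return (rate / (rate - s)) ** shape

def kappa(s):
    """kappa(s) = beta * (B[s] - 1) - s."""
    return beta * (B_mgf(s) - 1.0) - s

# gamma: positive root of kappa; gamma0: the point in (0, gamma) minimizing kappa.
gamma = brentq(kappa, 1e-9, rate - 1e-6)
gamma0 = minimize_scalar(kappa, bounds=(0.0, gamma), method="bounded").x
print("gamma =", gamma, " gamma0 =", gamma0, " kappa(gamma0) =", kappa(gamma0))
```

For exponential claims with rate $\delta$ the same computation gives $\gamma = \delta - \beta$ in closed form, which is the case treated in Section 1 below.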

Figure 0.1 (the claims surplus process $\{S_t\}$; the time of ruin is $\tau(u)$ and $\xi(u) = S_{\tau(u)} - u$ is the overshoot)

1 Exponential claims

Proposition 1.1 In the compound Poisson model with exponential claims with rate $\delta$ and safety loading $\eta > 0$, the conditional mean and variance of the time to ruin are given by
$$\mathbb{E}[\tau(u) \mid \tau(u) < \infty] = \frac{\beta u + 1}{\delta - \beta}, \qquad (1.1)$$
$$\mathrm{Var}[\tau(u) \mid \tau(u) < \infty] = \frac{2\beta\delta u + \beta + \delta}{(\delta - \beta)^3}. \qquad (1.2)$$

Proof Let as in Chapter III $\mathbb{P}_L$, $\mathbb{E}_L$ refer to the exponentially tilted process with arrival intensity $\delta$ and exponential claims with rate $\beta$ (thus $\rho_L = \delta/\beta = 1/\rho > 1$). By the likelihood identity from III.4, using that the overshoot $\xi(u)$ is exponential with rate $\beta$ w.r.t. $\mathbb{P}_L$ and independent of $\tau(u)$, we have for $k = 1, 2$ that
$$\mathbb{E}[\tau(u)^k;\, \tau(u) < \infty] = \mathbb{E}_L\bigl[\tau(u)^k e^{-\gamma S_{\tau(u)}}\bigr] = e^{-\gamma u}\,\mathbb{E}_L e^{-\gamma\xi(u)}\,\mathbb{E}_L\tau(u)^k = e^{-\gamma u}\frac{\beta}{\delta}\,\mathbb{E}_L\tau(u)^k = \psi(u)\,\mathbb{E}_L\tau(u)^k.$$
In particular,
$$\mathbb{E}[\tau(u) \mid \tau(u) < \infty] = \mathbb{E}_L\tau(u), \qquad \mathrm{Var}[\tau(u) \mid \tau(u) < \infty] = \mathrm{Var}_L\tau(u).$$
For (1.1), we have by Wald's identity that (note that $\mathbb{E}_L S_t = t(\rho_L - 1)$)
$$\mathbb{E}_L\tau(u) = \frac{\mathbb{E}_L S_{\tau(u)}}{\rho_L - 1} = \frac{u + \mathbb{E}_L\xi(u)}{\rho_L - 1} = \frac{u + 1/\beta}{\delta/\beta - 1} = \frac{\beta u + 1}{\delta - \beta}.$$
For (1.2), Wald's second moment identity yields
$$\mathbb{E}_L\bigl(S_{\tau(u)} - (\rho_L - 1)\tau(u)\bigr)^2 = \omega_L^2\,\mathbb{E}_L\tau(u), \qquad \text{where } \omega_L^2 = \mathrm{Var}_L S_1 = \kappa''(\gamma) = 2\delta/\beta^2.$$
Since $S_{\tau(u)}$ and $(\rho_L - 1)\tau(u)$ have the same mean, and $S_{\tau(u)} = u + \xi(u)$ with $\xi(u)$ and $\tau(u)$ independent, the l.h.s. is
$$\mathrm{Var}_L\xi(u) + (\rho_L - 1)^2\mathrm{Var}_L\tau(u) = \frac{1}{\beta^2} + \Bigl(\frac{\delta}{\beta} - 1\Bigr)^2\mathrm{Var}_L\tau(u).$$
Solving for $\mathrm{Var}_L\tau(u)$ gives
$$\mathrm{Var}_L\tau(u) = \frac{2\delta(\beta u + 1)/(\delta - \beta) - 1}{(\delta - \beta)^2} = \frac{2\beta\delta u + \beta + \delta}{(\delta - \beta)^3},$$
which is the same as the r.h.s. of (1.2). $\square$
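The formulas (1.1) and (1.2) are easy to cross-check by simulation. The following sketch (not part of the original text; all parameter values are assumptions chosen for illustration) simulates the risk process with exponential claims and compares the empirical conditional moments of $\tau(u)$ with the two formulas.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative parameters (premium rate 1): Poisson rate beta,
# exponential claims with rate delta, initial reserve u.
beta, delta, u = 1.0, 1.5, 3.0
n_paths, t_max, block = 50_000, 200.0, 300   # 300 claims cover [0, t_max] w.h.p.

ruin_times = []
for _ in range(n_paths):
    gaps = rng.exponential(1.0 / beta, block)      # interarrival times
    arrivals = np.cumsum(gaps)
    claims = rng.exponential(1.0 / delta, block)
    surplus = np.cumsum(claims) - arrivals         # claim surplus just after each claim
    hit = np.flatnonzero((surplus > u) & (arrivals <= t_max))
    if hit.size:                                   # ruin within [0, t_max]
        ruin_times.append(arrivals[hit[0]])

# t_max is chosen large enough that truncating at t_max gives negligible bias.
tau = np.array(ruin_times)
print("P(ruin before t_max) ~", len(tau) / n_paths)
print("simulated E[tau | ruin]  :", tau.mean(),
      "  formula:", (beta * u + 1) / (delta - beta))
print("simulated Var[tau | ruin]:", tau.var(),
      "  formula:", (2 * beta * delta * u + beta + delta) / (delta - beta) ** 3)
```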

Proposition 1.2 In the compound Poisson model with exponential claims with rate $\delta$ and safety loading $\eta > 0$, the Laplace transform of the time to ruin is given by
$$\mathbb{E}e^{-\alpha\tau(u)} = \mathbb{E}\bigl[e^{-\alpha\tau(u)};\, \tau(u) < \infty\bigr] = \Bigl(1 - \frac{\theta}{\delta}\Bigr)e^{-\theta u} \qquad \text{for } \alpha > \kappa(\gamma_0) = 2\sqrt{\beta\delta} - \beta - \delta,$$
where
$$\theta = \theta(\alpha) = \frac{\delta - \beta - \alpha + \sqrt{(\delta - \beta - \alpha)^2 + 4\alpha\delta}}{2}. \qquad (1.3)$$

Proof It is readily checked that $\gamma_0 = \delta - \sqrt{\beta\delta}$ and hence that the value of $\kappa(\gamma_0)$ is as asserted. Let $\theta > \gamma_0$ be determined by $\kappa(\theta) = \alpha$. This means that $\beta\bigl(\delta/(\delta - \theta) - 1\bigr) - \theta = \alpha$, which leads to the quadratic $\theta^2 + (\beta - \delta + \alpha)\theta - \delta\alpha = 0$ with solution $\theta$ (the

's with rate 5.. and M(u)+1 is the index of the ladder segment corresponding to T(u).3 that we can write Ee-aT( u) = e-euEe -017(o).. .0.4) The interpretation of this that T(u) can be written as the independent sum of T(0) plus a r. PROBABILITY OF RUIN IN FINITE TIME sign of the square root is + because 0 > 0). But by the fundamental likelihood ratio identity ( Theorem 111.1 . T2 .. St Ti F. M(u) T(u) = T + E Tk k=1 where T = T(0) is the length of the first ladder segment . More precisely..100 CHAPTER IV. Y2.T+ Ti a t U T I 1 a i F. the result follows.. Using 5 = 6 .v.v.3. T(u) < oo] = EB [exp {-aT(u ) . T(u) < oo] = e.3) we have E [e-«T(u ). are the lengths of of the ladder segments 2. Cf. .Y1 -Y2 Figure 1. (1. Y(u) belonging to a convolution semigroup .9ST(u) +T(u)!c(0)} . Note that it follows from Proposition 1.1 where Y1..4...OuEee -04(u) = e-e u be BB+B where we used that PB(T(u) < oo ) = 1 because 0 > ryo and hence E9S1 = K'(0) u > 0. are the ladder heights which form a terminating sequence of exponential r. Ti. Fig. 1.

and exponential with rate S = 1.ST).1. let (cf. cf.T) 1 I fl(O)h(0) fdO where (1. .. EXPONENTIAL CLAIMS 101 For numerical purposes .6(u) = Vfl/j l(Su. including the customer being currently served). T) to be evaluated by numerical integration: Proposition 1.cos (u/. U2.. Since U1 .e. Hence 00 F(VT > u ) P(QT = N)P(EN > u) N=1 00 N-1 k F(QT = N) e-u N=1 k=1 °O -u k! k Ee k=0 1t P(QT ..I ex cos B cos j O dB fo " .1.4.3 sin0 + 29) f3(0) = 1+/3-2/cos9. .T are conditionally i. 1). .T is the residual service time of the customer being currently served and U2 . UN. where U1.k + 1).6. the conditional distribution of VT given QT = N is that of EN where the r. the following formula is convenient by allowing t.i (u.3 Assume that claims are exponential with rate b = 1. Proof We use the formula .1(u. For j = 0. Then V(u.T the service times of the customers awaiting service . then VT = U1. 2. UN.0. density xN -le-x/(N .T + • • • + UN.T. If QT = N > 0...1 )!.d.6) fl(9) f2(0) = = fexp {2iTcos9-(1+/3)T+u(/cos9-1) cos (uisin9) .v. T.. [4]) 00 (x/2)2n+3 Ij (x) OnI(n+j)! . Let {Qt} be the queue length process of the queue (number in system. Corollary 11. Note that the case 6 # 1 is easily reduced to the case S = 1 via the formula V.T. . EN has an Erlang distribution with parameters (N.T. i. ..i.T) = P(VT > u) where {Vt } is the workload process in an initially empty M/M/1 queue with arrival rate 0 and service rate S = 1.

1 00 ok+lR 00 j=-k-1 +1)/2e . k -k-2 + $k+1 E bj 00 t j .(31/2 cos (( k + 2)9) .cos((k + 1)0)] f3(0) 00 flk +1 > j=-k-1 3j/2 COS(jB) l)/2e-i(k+1)e )3j/2eije = R)3(k+ (31 /2eie .38).(31/2eie .ie .i(k +1)e R [/3( klal/2e:0 (01 /2 e .44). and define tj = e-(1+R)Taj/2Ij(2vT T). in particular equations (1. (1.)3k +1 tj g'(QT >. let I _ j (x) = Ij (x). 87-89) 00 E aj j= -00 = 1. PROBABILITY OF RUIN IN FINITE TIME denote the modified Bessel function of order j. f3(0) .13(k +l)/2ei(k +1)9 R E . similar formulas are in [APQ] pp.1)] L _112 /(k+1)/2 [.8 ) yields F(QT > k + 1) .102 CHAPTER IV.3(k +1)/ 2ei(k + l)6 (.cos((k + 2)9)] d9.)3k+1 = e-(1+0)T e201/2Tcos 7r  0 e )3(k +l)/2 [31/ 2 cos ( kO) . 9-12. 00 E '3j/2 cos(je) j=k+1 00 _ j=k+1 ^j/ zeij = .cos (( k + 1)0)] f3(9) Hence the integral expression in (1.31 /2e-ie L 1)] 1 I/31/2eie .112 l 1( k +1)/2 [ 31/ 2 cos(kO) . Then (see Prabhu [294] pp.1 R [.k + 1) = 1 k +1 + bj j=-00 j=-00 00 j=kk+1 j=-k-1 By Euler 's formulas.

T) in terms of F(. T). or.2.0(u. the numerical examples in [12] are correct).3 was given in Asmussen [12] (as pointed out by Barndorff-Nielsen & Schmidli [59]. We allow a general claim size distribution B and recall that we have the explicit formula z/i(0) _ P(7(0) Goo) = p. t )). Seal [327] gives a different numerical integration fomula for 1 .. however. is numerically unstable for large T. expresses V)(0. we are concerned with describing the distribution of the ruin time T(0) in the case where the initial reserve is u = 0. however. Related formulas are in Takacs [359]. 2 The ruin probability with no initial reserve In this section . oo (u)31/2e^e)k = )3k z cos(k9) = R k.T) which. The rest of the proof is easy algebra.7) that _ [^ a-u ak+l (30 k L. going back to Cramer. there are several misprints in the formula there. u Notes and references Proposition 1. and the next one (often called Seal's formula but originating from Prabhu [293]) shows how to reduce the case u 54 0 to this. THE RUIN PROBABILITY WITH NO INITIAL RESERVE Since P(QOO > k + 1) = flk+1. it follows as in (1. t) = P . Ui < x I / (note that P(St < x ) = F(x + t. E Fk. . from the accumulated claim distribution N. equivalently.e = e' COS a cos(uf31/2 sin 0). We first prove two classical formulas which are remarkable by showing that the ruin probabilities can be reconstructed from the distributions of the St. k=0 103 Cu) A further application of Euler's formulas yields cc k =0 k 'ese)k __ U #kJ2 cos((k + 2)9) = R eNO ^` (u^1 L k= = eup i/z L OI = =ateU161/2 e '0+2iO COS a cos(u(31/2 sin 9 + 20). k! k=O k-0 i/z Co Uk ate" o'/z e . F(x. The first formula.

T]. f T lStv)} 0<t<T by a 'cyclic translation'.T))dv.(. T T o where the second equality follows from II. See Fig. .(0. Proof For any v E [0. [v.b (0. Stv^ _ Define M(v. meaning that we interchange the two segments of the arrival process of {St}o<t<_T corresponding to the intervals [0. we define a new claim surplus process St StM NJ Figure 2.3) with A = (0. 2.1.(6.T].t)= {Stv) < SM. and the third from the obvious fact (exchangeability properties of the Poisson process) that has the same distribution as St = { Si0)} so that P(M(v.i. PROBABILITY OF RUIN IN FINITE TIME Theorem 2 .T))dv E^T I(M(v. 1 1 .S„ 0 <t<T-v ST-S„+St_T+v T-v<t<T as the event that IS. co ).0<w<t} St+v . Then 1 .T) T F(x.104 CHAPTER IV.1 In formulas. T) = P(Tr(0) > T) = P(M(0.T)) 1 fT P(M(v.T)) does not {Stv)} depend on v. v]. resp. ") } is at a minimum at time t.T)dx.

then M(v. cf. letting w = inf It > 0 : St_ = mino<w<T Sw}. T) = M(0.T)) dv f T I(M(0. we can write M(v.T) occurs or not as long as ST < 0. T)) dv. T T o i =1 Let f (•. T Theorem 2 . THE RUIN PROBABILITY WITH NO INITIAL RESERVE 105 Now consider the evaluation of fo I(M(v. Hence T TE f I( M(v. It is then clear from the cyclical nature of the problem that this holds irrespective of whether M(0. Indeed. where the last equality follows from ST < St on M(0. T) as {ST<St+ v-S. there exist v such that M(v.. We claim that if M(0. this integral is 0 if STv) . t). T]. then i fT I(M(v.T)-f(I -z /)(0.T) = F(u+T. v).xdx. v)) dv = -ST T T o (note that the Lebesgue measure of the v for which {St} is at a minimum at v is exactly . we can take v E (w E.. v). It follows that if M(0 . v < t < T} n M(0. Proof The event {ST < u} = { Ei T Ui < u + T j can occur in two ways: either ruin does not occur in [0.T) and Sv < 0 on M(0. w) for some small E.2 1-0(u. Fig 2. Obviously. T)).t)dt. T. T) occurs. 0<t <T-v}n{ST<ST -Sv+St -T+v. T)) dv = TEST = T fP(ST < -x) dx T T NT 1 f P(ST < -x) dx = 1 f P Ui T . v<t<T}n{ST<ST-Sv+St.Sv.2.ST on M (0. v) = M(0.2. T) occurs. t) denote the density of F(•.v<t<T} = {ST<St-Sv. If ST < 0. . ST > 0. v).T-t))f(u+t. in which case there is a last time o where St downcrosses level u.T) occurs. For example. or it occurs. 0<t<v} = {ST < St .

p. 0 < t <T .2.T) = C(z. E [t.(0.ST_ t_ and let A(z.t). C*(z. define St = ST . 2.z. 0 < t < T. ST_ _ -z} .u+dt]). PROBABILITY OF RUIN IN FINITE TIME u Q II T Figure 2. ST_ _ -z}.106 CHAPTER IV.T-t))P(StE[u. which is independent of St and has the stationary excess distribution B0. For a fixed T > 0.2. {St > . z > 0. O(T . Hence P(ST<u) = 1 .T) = {St < 0.2 . {S t > -z. ST_ _ -z}. which occurs w. The proof is combined with the proof of Theorem 111.3 Define r_ (z) = inf It > 0 : St = -z}.b(u. Proposition 2. 0 < t <T. u which is the same as the assertion of the theorem.v. t + dt] occurs if and only if St E [u. Then P(T(0) E • I T(0) < oo) = P(T_ (Z) E •).2 Here o. Let Z be a r. The following representation of T(0) will be used in the next section.T) = .T)+ J0 T (1-V. u + dt] and there is no upcrossing of level u after time t. Proof of Theorem 111.

T) = C*(z.T + dT]. r(0) < oo) = 3R(z) dz JP(C(z. T + dT] I S7(o)_ E [z.T).T)) = P(Cx.T(0)<oc) = f x F(U > y + z U > z) P(Sr(o)_ E [z. z + dz]. 2.T)). we therefore have P(A(z.1) that P(T(0) E [T. Hence integrating (2.------- Figure 2.T))f3B(z) dz dT. It follows by division by P(ST(o)_ E [z. T(0) < oo) B(y B(z) + z) f3-B(z) dz = 3 f °^ B(y + z) dz = f3 + x v f B(z) dz. A(z. Thus P(-Sr(o)_>x. z + dz].2. -ST(o)_ E [z.3). z + dz]) = P(A(z. 7-( 0) < oo) = P (C(z)) dT. and since {St}o<t<T.3. {St }o<t<T have the same distribution .1) -z T -------------. Fig. Proof of Proposition 2.2.3 But by sample path inspection (cf.T))dT = Off(z) dz P(T_ (z ) < oo) = 3B(z) dz.ST(o) >y. (2. z + dz].1) yields P(-ST(o)_ E [z. .2. THE RUIN PROBABILITY WITH NO INITIAL RESERVE Then 107 P(r(0) E [T. T(0) < oo) = OR(z) dz in (2. u which is the assertion of Theorem 111. z + dz].

one based upon a result of Asmussen & Schmidt [49] generalizing Theorem 11.r(a). because of77>0. Theorem 2.2 ga(x) = Qe-xr(a) f "o eyr(a)B(dy) x . T(0) < oo) 0 = dT f 0 P(C(z))P(Z E [z.s.c(r(a)) l = l er( a)se+at } u yields 1 = e-yr(a)Eear-(y). Note that T_ (y) < oo a. T(0) < oo) = dTP(T_(Z) E [T.1) .2. r(0) < oo.3 was noted by Asmussen & Kl(ippelberg [36]. r(a) denotes the solution < 'Yo of the equation -a = ic(r (a)) = .5a). cf. Proposition 2.108 Hence CHAPTER IV. some relevant references are Shtatland [338] and Gusak & Korolyuk [181]. see in addition to Prabhu [293] also Seal [326].T+dT]).6. z + dz]. Proof Optional stopping of the martingale I er (a) 9 -t. Let T_ (y) be defined as Proposition 2. who instead of the present direct proof gave two arguments. 2. PROBABILITY OF RUIN IN FINITE TIME ]P(7-(0) E [T. a martingale proof is in Delbaen & Haezendonck [103].T + dT] T(0) < oo) dT f ' P(C(z))P(Sr( o)_ E [z.1) where -a > r.1 and the present proof is in the spirit of Ballot theorems. Lemma 3 .5 and one upon excursion theory for Markov processes (see IX. (3.(3(B[r( a)] . ^(0) E dx] (recall that ^(0) = Sr(o)) and write ga[b] = f OD ebxga(x) dx. 3 Laplace transforms Throughout in this section. Tak'ecs [359].1 Eear-( y) = eyr(a).1. I L Let ga(x) be the density of the measure E[ear(°). Lemma 3. [329]. In the setting of general Levy processes. z + dz]. Notes and references For Theorems 2.(-yo).3.

y + dy].5 f 00 o -a/r(a) .f.2. (u .T(0) < oo] = 20[b] = za[b] (9a[b] -9a[0])/b 1 .3. Further by Theorem 111.x)(a) B(dy)• Lemma 3 . time T(u): u u Here is a classical result : the double m. E[ear (o) I T(0) < oo .1] evr(a)B(dy)[ b . (Laplace transform) of the ruin Corollary 3.2 P(Z E [y. T(u) < oo] du = Proof Define Za(u) = E [eaT(" ).g.3. Hence eb"du E[eaT(").r(a) = a [B[b] -B[r(a)]] .ga [b] 1 . £(0) E dx) = /3B(x + dy) dx and hence ga(x) = f e r)/3B(x + dy) _ /3 f x e(v. the result follows after simple algebra.4 E[eaT (o). b . rr(0) < oo) = 1_ r(a) Proof Let b = 0.3. It is then easily seen that Za(u) is the solution of the renewal equation Za (u) = za (u) + fo Z.(v) = ev''(a). r(u) < oo).3 ga[b] = c(b) Proof + b + a . Then by Proposition 2. LAPLACE TRANSFORMS 109 Proof Let Z be the surplus . Corollary 3.ga [b] 0 TO Using Lemma 3.°° ga(x)dx. u .r(a) oo Q f ex(b-r(a))dx f00 eyr(a)B(dy) x 0 Q f evraB(dy) e-(a))dx 0 Q cc ev(b-r (a)) .r(a) The result now follows by inserting /3B[s] = ic( s) +/3+ s and ic(r(a)) =-a. Z = y] = EeaT.ic(b)/b x(b) + a eb"E[eaT(" ).x)ga (x) dx where za(u) = f.ST(o)_ just before ruin .r(a) b .

This proves the first assertion of (4.1)Er(u) . and hence a. PROBABILITY OF RUIN IN FINITE TIME 4 When does ruin occur? For the general compound Poisson model. Proposition A1. = (p . Later results then deal with more precise and refined versions of this statement. the known results are even less explicit than for the exponential claims case.. we need the following auxiliary result: Proposition 4.UProof The assumption 11 < 0 ensures that P(T(u) < oo) = 1 and r(u) a4' oo. u 1 ET(u) 1 p-1 u where Pw2 = 311B)m3• 7-(u) . For the second . cf.e. Theorem 4 . Then given r(u) < 00. The first main result of the present section is that the value umL. That is.1.. t T(u) T(u) T(u) t m = lim = lim = lim U-tioo u + Sr(u) u-+oo S. P = /3µB > 1. for any m T(u) u .h(u.mu D 2 -4 N(0. By Proposition 111. For the proof.mL > E T(u) < 00 ) -40.s.3).r(u) = Er(u) • ES.1) i. St/t 1 1/m.s. T(u)/u mL as u -+ oo. i.1 Assume 77 > 0. and take basically the form of approximations and inequalities.w ) v/.2 Assume ri < 0. for any c > 0 P( Further. mu ) ( 0 m < ML '(u) 1 m > rL. note that by Wald's identity u + EC(u) = ES. (4. T(u) a. Then as u -* oo.(u ) = o(u) a.110 CHAPTER IV. (u) t.6. uoo u using e.3LELU -1 1-p' is in some appropriate sense critical as the most 'likely' time of ruin (here C is the Cramer-Lundberg constant). where _ 1 _ 1 1 C ML w(ry) 6B'[7J -1 .2.00 St = lim .

.s.h. Tu) T( u) .2 of [86]) and (4. and (4.1).1.1) is T (u) - U mL P( T (u) < I > E.4.1). According to Anscombe' s theorem (e. T(u) < oo f / 00) e-7uE L [e_7 (t1).mL >E By Proposition 4. PL (•)-+ 0.N(0.2) follows immediately from u (4.1 (by considering 0(u.mL U > E.r(u)/m T(u) ti µB2) Z.3.1). For (4 . implying T(u) . note first that ( Proposition 111. 4a Segerdahl's normal approximation We shall now prove a classical result due to Segerdahl.2. this can be rewritten as u + 1(u) -.t/m D (2) 111 .3). WHEN DOES RUIN OCCUR? and that Ee(u)/u -a 0. and as a time-dependent version of the CramerLundberg approximation. proving (4. T) for T which are close to the critical value umL).g. 1'r(U) . 4). the same conclusion holds with t replaced by r(u).1 is standard. though it is not easy to attribute priority to any particular author. of (4. apB ) .1 The l. cf.5) St .mu (2) '• m3/2 µB 7 .mu -m .^ N (o.6µB2) Z v m (3µB2) Z. T (u) < 00 J 0(u) e-7'PL U \ I T u) . If Z .6. the result comes out not only by the present direct proof but also from any of the results in the following subsections. Theorem 7. Thus. which may be viewed both as a refinement of Theorem 4. Notes and references Theorem 4. again Proposition A1.-7 6 - 11 Proof of Theorem 4.

T(u') given F. one has 9 (r(u)_rnu) Ef (^(u)) -* E.6) whenever f.f ( (oo)) . S( u ) < ul/4] < ET(ul / 4) = O(ul/4).r.T ( u')] = E[ T ( ul /4 . we get E[ T (u) .t.a C4'(y )• ( 4.ST( u') = u1/4 .(u. resp . P because of ^(u') . Then h(u) -4 h(oo) = E f (6(oo)). with w2 as in (4.) is readily seen to be degenerate at zero if ST(u•) > u and otherwise that of T(v) with v = u .l:(oo) (recall that rt < 0). O .112 CHAPTER IV.5) For the proof. PROBABILITY OF RUIN IN FINITE TIME Corollary 4. Using ( 4. e'°'/b (u. we need the following auxiliary result: Proposition 4.e(u') oo w .um.u1/4)I(S(u') > u1 /4) h(oo) + 0. Let h(u) = E f (^(u)).3 (SEGERDAHL [333]) Let C be the Cramer-Lundberg constant and define wL = f3LELU2mL = f3B"[ry]mL where ML = 1/(pL-1) = 1/($B'[ry]1). (-oo. oo ).w2) r. and similarly as above we get E[f(^(u)) I -Fr(u. Then for any y.4 (SIAM'S LEMMA) If 71 < 0.ul/4.)-mu \ h(oo)Eg (r(ul) . Proof Define u' = u . oo).))I h(ul /4 - ^(u)) I(6 (u') C ) f < ul /4 + f(e(u') . Hence Ef (Vu )) 9 (T(u. E9(Z) (4.VU T.3). then e(u) and r(u) are asymptotically independent in the sense that.mul h(oo)Eg(Z). using that ul/4 .v. g are continuous and bounded on [0.6). letting Z be a N(0. we can replace T(u) by r(u'). Then the distribution of T(u) .L+YWLV'U) .^(T(u')).4). and thus in (4.

10) '5(u) .4. .(ay) = 17 7y = ay . 4b Gerber's time. 0. see also von Bahr [55 ] and Gut [182].5 '(u . For practical purposes .z/)(u .7) to be valid is that T varies with u in such a way that y(T) has a limit in (.dependent version of Lundberg's inequality For y > 0. For refinements of Corollary 4. The present proof is basically that of Siegmund [342]. y > k'(7) . The precise condition for (4.umL wI V"U u (4. oo ) as u -* oo.3 in terms of Edgeworth expansions . ELe-7E (") .7) whenever u is large and ly(T)l moderate or small (numerical evidence presented in [12 ] indicates .7) to be good.5) and solve for y = y(T). umL + ywL f) = e"P(T (u) < umL + ywL) = EL [e-7V "). e-7v" y < ^'(7) (4 .9) ( 4 . Cf. y u) < e -7v" . CL Fig. Segerdahl 's result suggests the approximation b(u.8) Note that ay > 7o and that 7y > •y (unless for the critical value y = 1/ML). Theorem 4. just substitute T = umL + ywL in (4. WHEN DOES RUIN OCCUR? Proof of Corollary 4. PL(T(u ) < umL + ywL) 113 -4 C4(y). Thus .yK(ay)• (4.3 ery"z/i(u . however .4) in the last. define ay. where we used Stain's lemma in the third step and (4. y u) < . Notes and references Corollary 4 . 3 is due to Segerdahl [333].7) To arrive at this .oo. that for the fit of (4. yy by 1 K. in practice one would trust (4.1. see Asmussen [12] and Malinovskii [254]. u needs to be very large). T(u) < umL + ywL f.T) Ce-7"4 (T . also Hoglund [204].

dy) Notes and references Theorem 4 . Hoglund [203] treats the renewal case. which shows that the correct rate of decay of tp(u.v"U-.6 It may appear that the proof uses considerably less information on ay than is inherent in the definition (4.6. we arrive at the expression in (4. yu) < C+(ay)e-7a„ where l C+(ay) = sup f 00 eayR(xy)B( . yy is sometimes called the time-dependent Lundberg exponent. u Differentiating w. Numerical comparisons are in Grandell [172 ]. 0. a. we have rc(ay) < 0 and get (u) . T(u) < yu] < e-ayu + yUr-(ay) Y < e-ayuEav [ eT(u)K(av )L T(u) < yu} Similarly. yu 11 < T(u) < oo j < e-ayu +Y UK(ay) Remark 4.2. However.1). 5 is due to Gerber [156 ].5. and hence t.8 below . and generalizations to more general models are given in Chapter VI. see Martin-LM [257] .Y' (u. who used a martingale argument.b (u. yu ) = < e-ayuEay [e-ay^ ( u)+T(U)K ( ay).h(u.ay4(u)+ T(u)K(ay ). yu) is e -'Yyu/ . f Some urther discussion is given in XI.3 yields easily the following sharpening of (4. the point is that we want to select an a which produces the largest possible exponent in the inequalities. In view of Theorem 4. An easy combination with the proof of Theorem 111.7 i.8).yu ) = e-ayuEav [e .9): Proposition 4. yu < T (u) < oo 1 l e- ayuEav [eT ( u)K(ay). From the proof it is seen that this amounts to that a should maximize a-yic(a). PROBABILITY OF RUIN IN FINITE TIME Proof Consider first the case y < 1/K'(y). Then ic(ay) > 0 (see Fig .r. if y > 1/ic'(y). For a different proof.8).t.114 CHAPTER IV. . the bound a-7y° turns out to be rather crude . which may be understood from Theorem 4.

yu ) e-aauEaye .ayC(-) . T(u) < yu] .yu) c ay . if we want EaT(u) .c(&) = ic(ay) is < 0. Ea . Proposition 4.6 with P replaced by Pay and FL by Pay.i(u. [eT(u )K( ay). and b(u.ayuEay f e-ay^ ( u)+T(u)K(ay). we have ryas = ay .ay a-. u -4 oo. yu) = e.e.12) < yu] Here the first expectation can be estimated similarly as in the proof of the Cramer-Lundberg ' s approximation in Chapter III. the choice of ay. then ay > 0.(u. T(u) suggests heuristically that l t/. it is instructive to reinspect the choice of the change of measure in the proof.4. Using Lemma 111. and in case of ruin probabilities the approach leads to the following result: Theorem 4 .. (4. then the relevant choice is precisely a = ay where y = T/u. yu ) ay-ay e -ryyu ayay 27ry/3B"[ay] u Proof In view of Stam 's lemma. i.yyu y l ay I 21ry/3B" [ay] V fU_ u -+ 00. As a motivation. then the solution &y < ay of .11) ' If y > 1/ r . (0) r1 (a) ' I. not inequalities. (4. WHEN DOES RUIN OCCUR? 115 4c Arfwedson's saddlepoint approximation Our next objective is to strengthen the time-dependent Lundberg inequalities to approximations. (4.13) ..: T.ay and get Ea e -ayf (00) y _ 'Ya( ayKal lay C 1 .5. This idea is precisely what characterizes the saddlepoint method.8 If y < 1/ic'(ry).z. the formula 0(u.e.^3 ]-1/ Bay [lay .2 yields EaT(u) u u r. For any a > yo. and ii(u) . We thereby obtain that T is 'in the center' of the Pa-distribution of T(u).ay y 'Yay - ay .. The traditional application of the saddlepoint method is to derive approximations.'(-y ).

a nr=.7ruw2 Inserting these estimates in (4.c'(a) _ /3a/(8 .1. The proof of (4. Writing r(u) and W2 = I3ay{.1) under Pay mation (4.ay ) r.13).1-B[ay]1 ) y(ay .c(ay)ul/2W p 2ir = eyu-(ay) dz 1 rc(ay ) 2. Example 4. where V is normal(0.1)3 = (jB"[ay]l (Pay .116 CHAPTER IV. we get heuristically that Eay Ler (u)r-(ay).12) is 0 entirely similar. V < 01 Ir 00 e-r(ay)"1'2"'x eyur.4). and in part that for the final calculation one needs a sharpened version of the CLT for t(u) (basically a local CLT with remainder term).ay)K(ay) ay ayI&YI For the second term in (4.11) follows. The difficulties in making the proof precise is in part to show (4.3(5/(S .9 Assume that B(x) = e-ay.1)3 = y3/3B"[ay].1) . (ay) J0 1 K(ay )u 1 00 c2(x) dx /2 w 1 e-zcp(z /( k(ay)u1 /2w)) dz /O° _ 1 1 J e Z . it seems tempting to apply the normal approxiyu + ul/2wV.ay) ay +. i B[7ay .(j (1 .a)2 . T(U) < yu] = eyuk (ay)E''ay (ek(ay )"1/2WV.l'B)y /(Pay . .13) rigorously. and the equation ic'(a) = 1/y is easily seen to have . PROBABILITY OF RUIN IN FINITE TIME ry I i .13). (4.B[ay] /ay &y -y(ay .a) .(ay) _ y(ay .ay + ayl /BLay] .I ay -&y a ^c'(ay) a (1 +.a. Then ic(a) = .

in discrete time: if p = ES.tcp) Lo {Wo ( t)}t>0 . is the drift and o. The mathematical result behind is Donsker's theorem for a simple random walk {Sn}n=o.1) . yu) when y < 1/ic'('y) = p/1 .because the c.11) gives the expression '31/4 ( . ./4 ^y for 1/i (u. A related result appears in Barndorff-Nielsen & Schmidli [59].. y) a-''y" L '3 _ fl ) 51 /4(1 +1IY)3/4 \.1. c -a 00.ay)3 0 3/2 and (4. 0 Notes and references Theorem 4. (5. 2 = Var(Si ) the variance.f.g..5. then { __ . 5 Diffusion approximations The idea behind the diffusion approximation is to first approximate the claim surplus process by a Brownian motion with drift by matching the two first moments.3+5-2 1+/351/y' sy 7 B ii[ay] 25 _ 251/2(1 + y)3/2 (5 .8 is from Arfwedson [9].. It follows that 5^y =5-ay = /«y =f3+ay=l3+d- 1+1/y' V 1+^1/y /35 1+1/y -/3' ay -ay =Qay -say =.p. and next to note that such an approximation in particular implies that the first passage probabilities are close.= (s.i )( v s vc ('3 + s _2 / . DIFFUSION APPROXIMATIONS solution ay=5- 117 V 1 (the sign of the square root is . is undefined for a > 5).

tcpp) y = { WC (Sct) -pct) } {Wo( t)}t>o (5.a = Snp) and the inequalities Sn )C . It is fairly straightforward to translate Donsker's theorem into a parallel statement for continuous time random walks (Levy processes).p. for the purpose of approximating ruin probabilities the centering around the mean (the tcp term in (5. and this can be obtained under the assumption that the safety loading rt is small and positive. where p is the critical premium rate APBTheorem 5 . This is the regime of the diffusion approximation (note that this is just the same as for the heavy traffic approximation for infinite horizon ruin probabilities studied in III. of which a particular case is the claim surplus process (see the proof of Theorem 5. such that the claim size distribution B and the Poisson rate a are the same for all p (i. Indeed .1) with S.3) takes the form LI S(P) { a2 to2/µ2 + t LI S (P) { a2 ta2/µ2 {W0(t)}. Letting c = a2/pp.7c). we shall represent this assumption on 77 by a family {StP) L of claim surplus processes indexed by the premium rate p. n/c < t < (n + 1)/c. PROBABILITY OF RUIN IN FINITE TIME where {W( (t)} is Brownian motion with drift S and variance (diffusion constant) 1 (here 2 refers to weak convergence in D = D[0. p.118 CHAPTER IV..e.1 below). 0 . oo)). this is an easy consequence of (5.1.t} _ {W_1(t)} . and consider the limit p j p. Lemma 111.1)) is inconvenient. St = EN` U= .z } {W_1(t )}t>o (5.3.p/c < St(p) < S((n+l)/ c + Pp/c. (5. We want an approximation of the claim surplus process itself. + {Wo(t ) .tp).2) t>o where p = pp = p . Mathematically. cf. However. a2 =/3µB2) Proof The first step is to note that { WC (St P) .. we have o {i!t s: .3) whenever c = cp f oo as p 1 p.1 As p J.

196. is 1/ip (ua2 /IpI.f I \\\ J \ (5. this implies P sup 0<t<T a 12 Stu2 /µ2 > u -4 P ( sup W_1( t) > u O<t<T But the l. Corollary 5. is IG(T.. ('.(u) ti IG(oo. (.8 or [APQ] p.7c. 263) that the distribution IG(•.e.. Corollary 5 . we obtain formally the approximation V. (5.4) Note that IG(.1. (ua2 To-2 op \ IPI -> IG ( T .h. and the r. see Grandell [ 168]. we omit the details . . w. DIFFUSION APPROXIMATIONS Now let Tp(u) = inf{t>0: S?)>u}. C.s. any probability measure concentrated on the continuous functions.1 .6) from Theorem 5.1. since ti(u) has infinite horizon ..1 I 7= .2 suggests the approximation u 0(u.h.Ta2 /p2). TS(u)=inf{t>0: WW(t)>u}. [169] or [APQ] pp.h. the continuous mapping theorem yields sup W Sz2 to lP 4 sup W-i(t)• O<t<T O<t<T a2 Since the r. ^ p2 Proof Since f -4 SUP0<t<T f (t) is continuous on D a. and in fact some additional arguments are needed to justify (5. For practical purposes .5. ulpl /a2) = e-2"1µl / or2.( ^ I + e2( \ I .s. u) of r( (u) (often referred to as the inverse Gaussian distribution) is given by IG(x. ulpI/a2). the continuity argument above does not generalize immediately. 119 It is well-known (Corollary XI.5) Note that letting T --* oo in ( 5.6) This is the same as the heavy -traffic approximation derived in III. (5.r. has a continuous distribution.2 As p j p.5).s.t. However. u). 199. u) =PIT( (u) < x) = 1 .T) IG(Tp2/ a2). u) is defective when < 0. -1. Because of the direct argument in Chapter III.u).

B0 * Boo. In view of the excellent fit of the CramerLundberg approximation.5) for the compound Poisson model which does not require much more computation. The proof is a straightforward combination of the proof of Theorem 5. as an example of such a generalization we mention the paper [129] by Emanuel et al.t. See for example Billingsley [64].6 of [APQ]. Theorem 5. and which is much more precise. Then as 0 _+ 90. we have ^A. that 00 -4090. as 0 -* 00 and that the U2 are uniformly integrable w.00µB6 -+ 0. Michna & Weron [152] suggested an approximation by a stable Levy process rather than a Brownian motion. (5. the B9. pt? -4 peo. Assume further that 039µB6 < pe. In contrast. in particular for large u. For claims with infinite variance.g. [169]. Further relevant references in this direction are Furrer [151] and Boxma & Cohen [75].Pe. However. for more general models it may be easier to generalize the diffusion approximation than the CramerLundberg approximation.6) therefore does not appear to of much practical relevance for the compound Poisson model.1 and Section VIII. a2 = ae = 00µa6 Notes and references Diffusion approximations of random walks via Donsker's theorem is a classical topic of probability theory. PROBABILITY OF RUIN IN FINITE TIME Checks of the numerical fits of (5. e.6) are presented. the simplicity of (5. Furrer. such that the Poisson rate Oe.1..5) and (5. in the next subsection we shall derive a refinement of (5.r. However. We conclude this section by giving a more general triangular array version of Theorem 5. pe .5) combined with the fact that finite horizon ruin probabilities are so hard to deal with even for the compound Poisson model makes this approximation more appealing.3 Consider a family {Ste) } oc claim surplus processes indexed by a parameter 9. on the premium rule involving interest. 0) { 2 StQ2 /µ2 D { W_ i(t)}t>o t>o D 2 where p = pe = pe .120 CHAPTER IV. . The picture which emerges is that the approximations are not terribly precise. and two further standard references in the area are Grandell [168]. All material of this section can be found in these references.Po = 09µB6 . in Asmussen [12]. the claim size distribution B9 and the premium rate p9 depends on 0. The first application in risk theory is Iglehart [207].
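To make the diffusion approximation concrete, the following sketch (not part of the original text) implements the inverse Gaussian first passage probabilities of (5.4) and the resulting approximation (5.5), as read from the preceding displays; the claim size distribution and all numerical values are assumptions chosen only for illustration, and the $T \to \infty$ value serves as a check against the heavy-traffic limit (5.6).

```python
import numpy as np
from scipy.stats import norm

def inverse_gaussian_cdf(x, zeta, u):
    """IG(x; zeta, u): first passage time c.d.f. of unit-variance Brownian
    motion with drift zeta from level 0 to level u (defective when zeta < 0)."""
    sx = np.sqrt(x)
    return (1.0 - norm.cdf(u / sx - zeta * sx)
            + np.exp(2.0 * zeta * u) * norm.cdf(-u / sx - zeta * sx))

# Assumed illustrative compound Poisson parameters (premium rate 1):
beta, muB, muB2 = 1.0, 0.9, 2 * 0.9 ** 2   # exponential claims with mean 0.9
mu = beta * muB - 1.0                      # drift of the claim surplus (negative)
sigma2 = beta * muB2                       # variance per unit time

def psi_diffusion(u, T):
    """Diffusion approximation of the finite horizon ruin probability psi(u, T)."""
    return inverse_gaussian_cdf(T * mu ** 2 / sigma2, -1.0, u * abs(mu) / sigma2)

u = 10.0
for T in (50.0, 100.0, 500.0):
    print(f"T = {T:6.0f}:  psi(u, T) ~ {psi_diffusion(u, T):.5f}")
print("T -> infinity limit exp(-2 u |mu| / sigma2):",
      np.exp(-2 * u * abs(mu) / sigma2))
```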

and we want to consider the limit 77 10 corresponding to Oo f 0.6. 77 is close to zero. and we are studying b(u. claim size distribution B . 2.c(s) = .90) .'(-yo) = 0 and let 90 = -'Yo.Q (B[s] . PB('r(u ) < oo) < 1 for 9 < 0. . In terms of the given risk process with Poisson intensity . In this set-up. it is more convenient here to use some value 9o < 0 and let 9 = 0 correspond to n = 0 (zero drift). Since Brownian motion is skip -free. risk process with safety loading 77 > 0 correspond to 9 = 0 . Bo(dx) = B[-eo]B(dx).s and p = /3µB < 1. whereas there we let the given 3B. CORRECTED DIFFUSION APPROXIMATIONS 121 6 Corrected diffusion approximations The idea behind the simple diffusion approximation is to replace the risk process by a Brownian motion (by fitting the two first moments ) and use the Brownian first passage probabilities as approximation for the ruin probabilities. 0(0) = 0. this means the following: 1.9(s) = Ico ( s + 9) . The set-up is the exponential family of compound risk processes with parameters ( B9 constructed in III. Then r.6. let P9 refer to the risk process with parameters Q9 = QoB0[9] = QB[9 -9o]. Let PO refer to the risk process with parameters e-9oz Qo = QB[-90]. Determine yo > 0 by r. B9(dx) =Bale] Bo(dx) e9z keo)z = B[9 . . Then EOU' = Boki[0] = Biki[-eo]/E[-9o] and "(s) = k(s-Bo)-k(-9o). 3.ao (0) _ /c(s + 9 . 77 = 1/p .1 > 0.90] B(dx). 9o T 0. this is because in the regime of the diffusion approximation . The objective of the corrected diffusion approximation is to take this and other deficits into consideration.4. For each 9. However ./c(9 .1) . P9(r (u) < oo) = 1 for 9 > 0. this idea ignores (among other things) the presence of the overshoot e(u). which we have seen to play an important role for example for the Cramer-Lundberg approximation .90) and the given risk process corresponds to Poo where 90 = -'yo.T) = Peo(-r(u) < T) for 90 < 0.

The corrected diffusion approximation to be derived is (u..122 CHAPTER IV. i.() The idea of the proof is to improve upon this by an O (u-1) term (in the following. () where h (A. means up to o(u-1) terms): . (. 0o to. _ ^(u) = ST .C.3 applies and yields 1061 U61 Stdlu2/CZdi {W_1(t)}t>0 t>0 which easily leads to 1 StU2 {W( J(t)1t>0 { u S1 t>o Y'(u.Varo S1 = f30Eo U2 = S1.1) .(-y) = 0.-2' where as ususal ry > 0 is the adjustment coefficient for the given risk process. and Si = QoEoU2 = Q B"'['Yo Eo U3 ]. C) = 2A + (2 . tu2 ) -i IG (t.1) IG(x. Vargo S. C .u. (U. bl IG(t81. the solution of r. write r = T(u). u) = IG(x/u2. (01. u) denotes the distribution function of the passage time of Brownian motion {W((t)} with unit variance and drift C from level 0 to level u > 0. (. 9otc0" (0) = 0061 = ul. 1) • Since L e-atIG (dt.. .S. for brevity. IGu+u2. C.T) 1+u2 (6.e. S2 = 3E0U2 Bier [Yo] 3B"[Yo] Write the initial reserve u for the given risk process as u = C/Oo ( note that C < 0) and. PROBABILITY OF RUIN IN FINITE TIME Recall that IG(x. (6..3) this implies (take u = 1) Ego exp { -.. Theorem 5. The first step in the derivation is to note that µ = k (0) = r-0 (00) . . u) = e-uh(a .2) .7-(u)/u2} e-h(A. One has (6.

u is Ee-azead2/++ Ee-az[1 + ab2/u] where the last expression coincides with the r.2). In ( 1) and (2).v. however . we have p =.5) according to (6. however. 1. The solid line represents the exact value .z . To arrive at (6.f.exp { -h(A. distributed as Z .s.1 As u -+ oo. the formal Laplace transform inversion is heuristic: an additional argument would be required to infer that the remainder term in (6. it holds for any fixed A > 0 that Ego exp { -Ab1rr(u)/u2} -.d.1 below is exact. (6. p = 0. The justification for the procedure is the wonderful numerical fit which has been found in numerical examples and which for a small or moderate safety loading 77 is by far the best amoung the various available approximations [note.2 ). Note. . is the c.6. of a (defective) r. which is based upon exponential claims with mean µB = 1. 9o T 0 in such as way that C = Sou is fixed. calculated using numerical integration and Proposition 1. 6 . the r.7. .v. .2) is indeed o(u-1). just replace t by Tb1/u2.'yu /2)(1 + b2/u)} + Aug 1I J .h. 1% in (2) and (4). The initial reserve u has been selected such that the infinite horizon ruin probability b(u) is 10% in (1) and (3).1 + -629. CORRECTED DIFFUSION APPROXIMATIONS 123 Proposition 6. in (3) and (4). A numerical illustration is given in Fig.s. and the dotted line the corrected diffusion approximation (6. bl I IG I t +2 .h. we get by formal Laplace transform inversion that C 2 u. . that whereas the proof of Proposition 6.52/u where Z has distribution IG (•.5) Once this is established . But the Laplace transform of such a r. of (6.3. that the saddlepoint approximation of Barndorff-Nielsen & Schmidli [59] is a serious competitor and is in fact preferable if 77 is large] .4.3 = 0.1 + u2 I Indeed.3).ry2 .

4 may not be outstanding but nevertheless. it gives the right order of magnitude and the ordinary diffusion approximation hopelessly fails for this value of p. (Inc 0s- 0.EB 0 p ex p ( 7 S h ^)u .1 W IU.1 It is seen that the numerical fit is extraordinary for p = 0.W21 0.T1 00. the fit at p = 0.OOIi O.01 0. Similarly.011 L1 60 T IM 11.aa1 .05{ 0. see Asmussen [12]. OM 0.1 proceeds in several steps.() Lemma 6.199 0.T) 111 0. and all of the numerical studies the author knows of indicate that its fit at p = 0.0 0.08 0. A51 7(SAT 3 3 h(X.7. BarndorffNielsen & Schmidli [59] and Asmussen & Hojgaard [34]. For further numerical illustrations.114 0.124 0. PROBABILITY OF RUIN IN FINITE TIME 0.TI CHAPTER IV.07 0. .T) 0...00 0. Note that the ordinary diffusion approximation requires p to be close to 1 and '0 (u) to be not too small.08 a.111 W(U.^) .(061 0.19)2 11 20 20 i0 T 1n0 Figure 6.u2 2u3 (e . The proof of Proposition 6.7 or at values of Vi(u) like 1% is unsatisfying.02 I 90 120 160 2W A0 Z WT 40 80 120 160 100 240 280 T 111 WI.2 e.

r-0 (00)) } Replacing B by 8/u and Bo by C/u yields e-(B-() = E eo exp { (e .. CORRECTED DIFFUSION APPROXIMATIONS Proof For a>0.7) 2 2 . exp ue } al 1J 3 exP I- [2). C) 1 1 + u2/ 111 + 2u CZ Z - (2A + ()1/2 J 1 Proof It follows by a suitable variant of Stam's lemma (Proposition 4..2u (B3 .4 Ea. 3 lim Eof (u) = EoC(oo) = a2 Ep = 3EoU2 u-roo Proof By partial integration .61a2T (B3 . () + C and note that 2 KO (0) = 102. (6.2 behaves like C l Eeo eXp r _ ^81T 1 Sl u2 1 u 2u3 [1+h(AC) S .(3)Eea LauT exp --i 3J .6.co ((/u)) } Let 8 = (2a + (2)1/2 = h()... 1 = PB(T < oo) = Eo0 exp 125 {(B . the formulas Po(C(0) > x) Po(C(co) > x) imply 1 °° Po(ST(o) > x) = EIU fIP0 (U>y)dy .C)C/u . (6.00)(u +C) - 'r (. 1 / Po(C(0) > y) dy EoC(0) x k EDUk + 1 k Eo[(0)k+1 EoC(0) _ (k + 1)EoU' EoC(^) _ (k + 1) Eo£(0) Lemma 6 . the result follows.T (co (8/u) .(3) J t _ aa1T l + e-h(A.+ h (A.h. () 62 Eeo exp u u2 J .co (e) . + a1b2 + .s.6) u U3 Lemma 6 . in Lemma 6.C2 = 2).3 EoU2 + 103OoEoU3 + " 2 6 Using d2 .1) h(A.4) that the r.

--yu/2) 11+ 62 I} S 1 \\\ u/11 l 62 (3 2u 2A Proof Use first (6. -yu/2) h(A. () .1 (y/2 + Oo)u .S) d e- 62 . Thus by Taylor expansion around ( = 90u. C) ( 1+ u2 The result follows by combining Lemma 6 . () by h(\. 0 The last step is to replace h(A.6) and 7co (Oo) = ico('y + Bo) to get 0 = 21 (^/2 + 2y90) + 1112 (_Y3 + 3_Y200 + 3y9o) + O(u-4).() . we get h(A. yields +90 62 0 + O(u -3) 2u2 +O(u -3).7) and using e-h(a. 2 and (6.x. and inserting this and 9o 2 = S/u on the r.126 CHAPTER IV.s. 5 exp { _h(A) (1 + / y u J)) exp 1. There are two reasons for this : in this way.\+ (2 (3 e 2u [ (2. 2 + 00 = .e -h(aS)h (^^ 262 exp {_h(.h. [2+ (2 . -yu/2).. letting formally T -* oo yields 7/)(u) C'e-7u where C' = e-7a2).2u [2A+ (2 3 . Thus a2 -y = -290 + O (u-2).4.(2A + ()1/21 exp S -h(A. we get the correct asymptotic exponential decay parameter ^/ in the approximation ( 6.2) for O(u) (indeed .h (A.2.() I 1 + u2 ) y .6 - d h(A.2 (^/2 + 3y9o + 390) + O(u-3). and the correction terms which need to be added cancels conveniently with some of the more complicated expressions in Lemma 6.\ + () 1 2 / . PROBABILITY OF RUIN IN FINITE TIME The last term is approximately (e 3 (3) 27. l Lemma 6 .

()} 3 -h (A.e.5 in Lemma 6. () (i+a ) 2A + (2 . with the translation to risk processes being carried out by the author [12]. and to the Markov-modulated model of Chapter VI in Asmussen [16].(i+ 62 exP{ -h(A. His ideas were adapted by Asmussen & Binswanger [27] to derive approximations for the infinite horizon ruin probability 'i(u) when claims are heavy-tailed. i. -'yu/2) 127 ( i+ M pz^ exP { -h (A.7. that is. the analogous analysis of finite horizon ruin probabilities O(u. . this case is in part simpler than the general random walk case because the ladder height distribution G+ can be found explicitly (as pBo) which avoids the numerical integration involving characteristic functions which was used in [345] to determine the constants. () I 1 + u 2 ) } S 1 . 7 How does ruin occur? We saw in Section 4 that given that ruin occurs. the 'typical' value (say in sense of the conditional mean) was umL. u Notes and references Corrected diffusion approximations were introduced by Siegmund [345] in a discrete random walk setting. () I 1 + u2 )I 2u L 2A+C2_(2 exp { _h. We shall now generalize this question by asking what a sample path of the risk process looks like given it leads to ruin. HOW DOES RUIN OCCUR? exp { -h (x. The answer is similar: the process behaved as if it changed its whole distribution to FL.4.1: Just insert Lemma 6.T) has not been carried out and seems non-trivial. ()} . Fuh [148] considers the closely related case of discrete time Markov additive processes.1 (-y/2 + Oo)u )} -1 (i + U ) [2+ C2 2u 62 S Pt^ exP { J 62(2 exp { -h(A. The corrected diffusion approximation was extended to the renewal model in Asmussen & Hojgaard [34]. the approach to the finite horizon case is in part different and uses local central limit theorems. Hogan [200] considered a variant of the corrected diffusion approximation which does not require exponential moments. The adaptation to risk theory has not been carried out. In Siegmund's book [346]. the same as for the unconditional Lundberg process. 0 1 Proof of Proposition 6.

we give a typical application of Theorem 7.(u) is not .(u)..EL[e-7S.128 CHAPTER IV. {St}0< t<T(u)) Proof Write e-'rsr(u ) = e-'rue-'r£(u).1.FT(u) is the stopping time o-algebra carrying all relevant information about r(u) and {St}o<t<T(u)• Define P(u) = P(•IT(u) < oo) as the distribution of the risk process given ruin with initial reserve u.t. {ST(u)+t .1 Let {F(u)}u>0 be any family of events with F(u) E F.t. r(u) < oo) . and let M(u) be the index of the claim leading to ruin (thus T(u) = Ti + T2 + . We are concerned with describing the F(u) -distribution of {St}o<t<T(u) (note that the behaviour after rr(u) is trivial: by the strong Markov property.FT(u)- = o' (T(u ).F.3 to . In fact. the numerator becomes e-'ruELe-7^ (u)PL(F( u)t) = e-7uCFL (F(u)°) when F(u) E .T.(u)_ and similarly the denominator is exactly equal to Ce-7u. (u) and satifying PL(F(u)) -* 1. ^(u) is exponential with rate 8 w. the Poisson rate changes from . u -* oo. PROBABILITY OF RUIN IN FINITE TIME changed its arrival rate from 0 to /3L and its claim size distribution from B to BL.(u)_ is that i. so in the in the proof.3L and the claim size distribution from B to BL. Recall that 13L = (3B[ry] and BL(dx) = e'rxB(dx)/B[7]. Proof P(u) (F(u)c) = F(flu)c. FL As example. In the exponential case.F. .r. Then also P(u)(F(u)) -+ 1.ST(u)}t> o is just an independent copy of {St}t>o).vi(u) Ce-'Yu Corollary 7.2 If B is exponential.r. F(u)c] P(r(u) < oo) ?P(U) < EL[e-7u. Theorem 7 .TT(u) _-measurable. .(u)_ and ^(u) are independent . Recall that . then P(u) and FL coincide on . stating roughly that under F(u). F(u)c] ti e-' ru]PL (F(u)`) --> 0. Note that basically the difference between FT(u) and . + TMOO ).. . P(u) and rate = aL w.

Notes and references The results of the present section are part of a more general study carried out by the author [11]. take I(Tk < x) . see further XI. This is currently a very active area of research. however.3. 129 M(u) >2 I(Tk < x) M(tu) p(u) M(u) >2 I(Uk < x) BL(x).3 M(u) pcu) 1 .(1 . who also treated the heavy-tailed case. A somewhat similar study was carried out in the queueing setting by Anantharam [6]. the subject treated in this section leads into the area of large deviations theory. the queueing results are of a somewhat different type because of the presence of reflection at 0.e-ALx) M(u) k=1 u The proof of the second is similar. From a mathematical point of view. HOW DOES RUIN OCCUR? Corollary 7.7. Proof For the first assertion.e-aLx. .


. AA t-*oo lim St = lim ESt t t-ioo t = p .. with common distribution B. with Nt = # {n: Un <t} the number of arrivals before t. the Tn are independent.Then no matter the distribution Al of T1i B. A different important possibility is Al to be the stationary delay distribution A° with density A(x)/µA. U2. and the one corresponding to T1 = s by 1/i8 (u). Thus the premium rate is 1. Proposition 1.Q.d. . T3. In the so-called zero-delayed case. Var(St) = 11Ba2A + I4AaB lim t goo t PA 131 .. .1 Define p = !µ. the claim sizes U1.1.1)... of the risk process form a renewal process: letting Tn = Qn . We use much of the same notation as in Chapter I.7). with the same distribution A (say) for T2. r(u) the time to ruin.i. D'2. the one corresponding to the stationary case by 00)(u). Then the arrival process is stationary which could be a reasonable assumption in many cases (for these and further basic facts from renewal theory. the distribution Al of T1 is A as well. The ruin probability corresponding to the zero-delayed case is denoted by 1/'(u). see A.(1..-1 (T1 = a1). and M is the maximum of {St}..Chapter V Renewal arrivals 1 Introduction The basic assumption of this chapter states that the arrival epochs O'1. . {St} is the claim surplus process given by I. . are i.

Thus. Nt = Var(PBNt) + E(4Nt) Q2 2 0`2 A tpB B + o(t). one could imagine that the claims are recorded only at discrete epochs (say each week or month) and thus each U.3 (SWITCHED POISSON ARRIVALS) Assume that the process has a random environment with two states ON. t 4oo Proof Obviously. 2). A. Proposition 1. the definition 77 = 1/p .St] = a(p . CHAPTER V. For (1 .2 (DETERMINISTIC ARRIVALS) If A is degenerate. 3) follows similarly by Blackwell 's renewal theorem. we get similarly by using known facts about ENt and Var Nt that Nt Var(St) = Var E Nt U.0 > 0. However .1) ENt/t -+ 1/µA.1). but the arrival rate in the ON state is .1 gives the desired interpretation of the constant p as the expected claims per unit time. by the elementary renewal theorem (cf. and (1 .132 Furthermore for any a > 0. Here are two special cases of the renewal model with a similar direct interpretation: Example 1. Example 1 . say at a. The renewal model is often referedd to as the Sparre Andersen process. From this ( 1. Sparre Andersen whose 1959 paper [7] was the first to treat renewal assumptions in risk theory in more depth. such that no arrivals occur in the off state. RENEWAL ARRIVALS lim E [St+a . The simplest case is of course the Poisson case where A and Al are both exponential with rate 0. s + t µA PA 0 Of course. This has a direct physical interpretation (a large portfolio with claims arising with small rates and independently). the .a is really the accumulated claims over a period u of length a.Nt] -* a/PA.1) follows . Nt + EVar U. OFF. stating that E[Nt+a .1 of the safety loading appears reasonable here as well.t. after E. If the environment is Markovian with transition rate A from on to off and u from OFF to ON. Nt ESt = E E UI Nt -t = ENt•pB .

A is phase-type (Example 1.T between a claim U and an interarrival time T. arrival times. as follows easily by noting that the evolution of the risk process after time s is that of a renewal risk model with initial reserve U1 .s < u).. S o<t<oo n=0.i.1. the relevance of the model has been questioned repeatedly.t.. the fundamental connections to the theory of queues and random walks. For the stationary case.} with {S(d)} a discrete time random walk with increments distributed as the independent difference U .y)B(dy). (1. and for historical reasons.. However.4) with phase space {oN.4) fo Indeed. if for nothing else then for the mathematical elegance of the subject. The following representation of the ruin probability will be a basic vehicle for studying the ruin probabilities: Proposition 1.4 The ruin probabilities for the zero-delayed case can be represented as 0(u) = P(M(d) > u) where M(d) = Max {Snd) : n = 0.4) w. we feel it reasonable to present at least some basic features of the model. Proof The essence of the argument is that ruin can only occur at claim times. and the present author agrees to a large extent to this criticism.s. Therefore.1. we note that the ruin probabilities for the delayed case T1 = s can be expressed as in terms of the ones for the zero-delayed case as u+8 z/i8(u) = B(u + s) + '( u + s .oFF}.r. initial vector (1 0) and phase generator 11 However. (an arrival occurs necessarily in the ON state. u For later use. More precisely. and then the whole process repeats itself). . integrate (1.s > u) of ruin at the time s of the first claim whereas the second is P(r(u) < oo. INTRODUCTION 133 interarrival times become i.. we have From this the result immediately follows. the first term represents the probability F(U1 ..2. Ao.1.. in general the mechanism generating a renewal arrival process appears much harder to understand. The values of the claim surplus process just after claims has the same distri- bution as {Snd^ }• Since the claim surplus process {St} decreases in between max St = max ^d^. U1 .d.

That is . -t. with common distribution B* (say) concentrated on (0. < 1. U Figure 2. then 0 * (u) = 1 for all u > 0. If . i.1) . resp . the claims and the premium rate are negative so that the risk reserve process . b=1 !=1 where {Nt } is a Poisson process with rate .1.1 r* (u) One situation where this model is argued to be relevant is life annuities. The initial reserve is obtained by pre-payments from the policy holders. St = t . each of which receive a payment at constant rate during the lifetime . then 0*(u) = e -'r" where ry > 0 is the unique solution of 0 = k*(-ry) = *(B*[-ry] . are independent of {Nt} and i.Ut.0* (u) = P (rr* (u) < oo) where rr* (u) = inf It > 0: Rt < 0} ) .1 If. A simple sample path comparison will then provide us with the ruin probabilities for the renewal model with exponential claim size distribution. the remaining part of the pre-payment (if any ) is made available to the company. At the time of death . Using Lundberg conjugation . 00). The compound Poisson model with negative claims We first consider a variant of the compound Poisson model obtained essentially by sign-reversion . 2. we shall be able to compute the ruin probabilities i(i* (u) for this model very quickly (. RENEWAL ARRIVALS 2 Exponential claims.134 CHAPTER V. Theorem 2 .a*PB• > 1.1) +ry. the claim surplus process are given by Nt Nt Rt = u+^U. (2.3* pB. A typical sample path of {Rt } is illustrated in Fig.d.3* (say ) and the U.

B* [-7] and let {St} be a compound Poisson risk process with parameters .3*. and thus 1 = P(T.0. If I3*pB* < 1. 2. Proof Define 135 St =u . cf.(a) -7 Figure 2.g.2 Assume now . . Then the function k* is defined on the whole of (-oo.s. 0 Now return to the renewal model. T_ ( u) < 001 e7"P(T_ (u) < oo) = e"V)* (u).2(a).2 sup St = -inf St = 00 t>o t>o and hence -0* (u) = 1 follows. > 1 . Let B(dx) = ^e-7x B*(dx). the safety loading of { St} is > 0. B.1. Fig. Hence T_(u) < oo a. (a) is*(a) (b) . 0) and has typically the shape on Fig. Then the c. then by Proposition 111.UB. T_ (u) = inf { t > 0 : St = -u 'r* (u).2(b). EXPONENTIAL OR NEGATIVE CLAIMS [Note that r. of {St} is c(a) = is*(a-7). B*..(u ) < oo) = E {e-7sr_ (u). Define T_ (u) = inf It > 0 : St = -u} . 2.Rt. and the Lundberg conjugate of {St} is { St } and vice versa.2. Hence -y exists and is unique. Then { St } is the claim surplus process of a standard compound Poisson risk process with parameters 0 *.* (a) = log Ee-'st I.f. Since ic'(0) < 0. St=Rt-u=-St.

1 means that M* is exponentially distributed with rate ry.. To + M(d) in the notation of Proposition 1.7r+ 7r Ee-To b/(S-a) + +. Then B* = A.. Hence M* max {To + Ti + • • • + Tn . To + max {Ul+•••+Un-TI-. which has the advantage of avoiding transforms and leading up to the basic ideas of the study of the phase-type case in VIII.+Tn -U1 Un. Now the value of {St*} just before the nth claim is To +T1* +..1. However.'s and noting that V)*(u) = P(M* > u) so that Theorem 2..u+ and lr+.g.4. T2 = U2. with the probability that a particular jump time is not followed by any later maximum values being 1 .Y -a I. 2.f. 1) means that 8(A[-ry] . RENEWAL ARRIVALS Theorem 2 .1.Tr+. the distribution of M(d) is a mixture of an atom at zero and an exponential distribution with rate parameter ry with weights 1 ...)(u) _ 1r+e-7" where ry > 0 is the unique solution of 1 = Ee'Y(u-T ) = S 8 A[.136 CHAPTER V. then .2).2) 7 and7r+=1Proof We can couple the renewal model { St} and the compound Poisson model {St*} with negative claims in such a way the interarrival times of { St*} are To .. According to Theorem 2.1.. the failure rate of this process is y.1) + ry = 0 which is easily seen to be the same as (2. and from Fig .e. u Hence P(M(d) > u) _ 1r+e-'r". with rate S (say).Y] (2. alternatively termination occurs at a jump time (having rate 8). 3* = 6.• • • .-Tn} n=0.4 goes as follows: define 7r+ = P(M(d) > 0) and consider {St*} only when the process is at a maximum value..Un } = max St = t>0 n=0. A variant of the last part of the proof. Taking m.Ui ...a) = 1 . respectively.1 it is seen that ruin is equivalent to one of these values being > u.2 If B is exponential.. and (2 . and hence the failure rate . and 5PA > 1.Ti = U1.•... we get Ee'M(d) = Ee°M* _ -Y/(-.

7r+) and hence r+ = 1. The probability that the first ladder step is finite is 7r+. Putting this equal to -y.7r+) = ry and hence P(M(d) > u) = P(M(d) > 0)e-7u = 7r+e-'r". Thus a ladder step terminates at rate b and is followed by one more with probability 7r+. consider instead the failure rate of M(d) and decompose M(d) into ladder steps as in II. B^d) where Aad> (dt) = ^[ a] A(dt).3 A[-a] B[a] F( d) [a +)3] F(d) [a] = Fad) [^] Letting M(u) = inf in = 1. CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES 137 is b(1. This follow since..-y/b. : S(d) > u} . Furthermore.2. However.7r+). we have ] A[-a -)3] E«d'efl' = Bad> [a] A ad> [-Q] = B[a +. 111...T to F(d)(x) = e-K^d^(«) ^x e"vFidi(dy) 00 K(d) (a) = log F(d) [a] = log B[a] + log A[-a] . Hence the failure rate of M(') is 6(1 . which states that for a given a.3. It only remains to note that this change of measure can be achieved by changing the interarrival distribution A and the service time distribution B to Aad^. Bads (dx) = . the relevant exponential change of measure corresponds to changing the distribution F(d) of Y = U . the imbedded discrete time random walk and Markov additive processes. 3a The imbedded random walk The key steps have already been carried out in Corollary 11.6. we see that ry = 6(1.5. resp. 0 3 Change of measure via exponential families We shall discuss two points of view.2.4.B(dx). letting P(d) refer to the renewal risk model with these changed parameters . hence exponential with rate b. a ladder step is the overshoot of a claim size..

In fact. For the stationary case. For claim (b).Ce-"u where C = limu. the evaluation of C is at the same level of difficulty as the evaluation of i/i(u) in matrix-exponential form.e. E(d)e -1' (u).p)/($B'[7] . in the easiest non-exponential case where B is phase-type.C8e-7u where Cs = Ce-78B[7]. i . This is known to be sufficient for ^(O) ]p (d) ([APQ] Proposition 3. VIII. cf.1 implies Cu) = e-«uE ( 7d)e-«^(u) .T is non-lattice.3 For the delayed case Tl = s.r.. (a) '(u) < e-ryu. Consider now the Lundberg case. Corollary 3.2 In the zero-delayed case. to converge in distribution since p(yd) (r(0) < oo) = 1 because of r (d)' (-y) > 0.(u) .1 For any a such that k(d)' (a) > 0.4. RENEWAL ARRIVALS be the number of claims leading to ruin and ^(u) u = SM(u) .. O(u) = e-auE (d)e-a{ (u)+M(u)K (d)(a) . (d) (7) _ 0. and claim (a) follows immediately from this and e (u) > 0. let 7 > 0 be the solution of r. ik. It should be noted that the computation of the Cramer-Lundberg constant C is much more complicated for the renewal case than for the compound Poisson case where C = (1 .. 187) and thereby for ^(u) to be non-lattice w. just note that F7(d) is non-lattice when F is so .138 CHAPTER V. Proof Proposition 3. (b) V)(u) .t. 7µA .1). We have the following versions of Lundberg' s inequality and the CramerLundberg approximation: Theorem 3 . we get: Proposition 3.2 p. 00)(u) .u the overshoot .C(°)e-ryu where C(O) = C0[7] . provided the distribution F of U .1) is explicit given 7.

delayed version of Lundberg's inequality can be obtained in a e7u.9. IPA 0 Of course. where G is the infinitesimal generator of {Xt} = {(Jt.5. According to Remark 11. we invoke the behavior at the 1 = h«(0.3. 0 0 . we get r u +8 e"8(u) 139 e7uB(u + s) + --4 0 + L 00 J e7(v-8)e7(u+8-v).5. Let P8f E8 refer to the case Jo = s. another use of dominated convergence combined with Ao[s] = (A[s] -1)/SPA yields 00 u) e7u iP8(u) Ao(ds) -+ f 0 = CB['Y](A[-y] . 0) = tc(a)h(s). h(s) = e-(a +x( a))8 (3. Equating this to tch(s) and dividing by h(s) yields h'(s)/h(s) _ .h'(s). 0) = -ah (s) . To determine boundary 0.1) (normalizing by h(0) = 1). Here K. Sdt) = h(s .0 ) = Eo[ha ( Jdt.dt ) e-adt = h ( s) . we look for a function h(s) and a k (both depending on a) such that Gh. B(x) = o(e-7x) and dominated convergence..St)} can be defined by taking Jt as the residual time until the next arrival.(°) ( Ce-8B[7] Ao(ds) similar manner. CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES Proof Using (1. The underlying Markov process {Jt} for the Markov additive process {Xt} = {(Jt. (u + s . 3b Markov additive processes We take the Markov additive point of view of II.. For s > 0.dt(ah ( s) + h'(s)) so that Gha ( s.1) = C(O).Sdt] = Ee'uh(T) means 1 = f ' e°^B(dy) f ' h( s)A(ds).a . E8h0 (Jdt./c.(s. y) = e°yh(s). (s. St)} and h. The expressions are slightly more complicated and we omit the details.4).y) B(dy) 0 For the stationary case.
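The implicit characterization of $\kappa(\alpha)$ obtained from $h(s) = e^{-(\alpha+\kappa(\alpha))s}$, namely $1 = \hat B[\alpha]\,\hat A[-(\alpha+\kappa(\alpha))]$, lends itself to simple numerics. The sketch below uses the same illustrative Erlang/exponential choices as in the previous sketch; these are assumptions for the example, not part of the text.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative assumptions: Exp(delta) claims and Erlang(2, lam) interarrivals.
delta, lam = 1.0, 1.5
B_mgf = lambda a: delta / (delta - a)            # a < delta
A_mgf = lambda a: (lam / (lam - a)) ** 2         # a < lam

def kappa(alpha):
    # kappa(alpha) is the unique k with \hat B[alpha] \hat A[-(alpha + k)] = 1;
    # the left-hand side is decreasing in k, from +infinity down to 0.
    f = lambda k: B_mgf(alpha) * A_mgf(-(alpha + k)) - 1.0
    lo = -alpha - lam + 1e-6                     # f(lo) > 0
    return brentq(f, lo, 50.0)                   # f(50) < 0

for a in (0.0, 0.2, 0.4, 0.6):
    print(a, kappa(a))                           # kappa(0) = 0, kappa is convex
```

The Lundberg exponent of the previous sketch is recovered as the positive zero of $\kappa$.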

c(a)] B[a] Proof Pa. = J8 = T2. i.rc(a)] = B = Ba[13]Aa[5]. rc(a) = 0 (B[a] . (3. [a + /3] A[b . Proposition 3. U. ] = E. .tK( a)e. . .140 CHAPTER V. St)}too by letting the likelihood ratio Lt restricted to Yt = a((J. resp.c(a)] which shows that U1.8. the determination of the adjustment coefficient ry where the defining equations rc(d) (ry) = 0 and rc(ry) = 0 are the same. [e1U1 + 6T2ea ( U1-s)-stc ( a)e-(a+K(a ))( T2-s)I B[a +. resp. An important exception is.. .s governing {(Jt. Ba where Aa (dt) . T2.5.( a+r' (a))(Jt -s) h(s) where c(a) is the solution of (3.. RENEWAL ARRIVALS B[a]A[-a .2) As in 11. however. . 5 For the compound Poisson case where A is exponential with rate .. Further.rc(a)] = 1.2) means 1 = B[a]/3/(/3+a+rc (a)).13]A[b .a . Remark 3 .a . Ease AU1+6T2 [ AU1+6T2 = Ea a LT. An easy extension of the argument shows that U1. J n+1 u are independent with distribution Aa for the Tk and Ba for the Uk. we can now for each a define a new probability measure Pa. since JT. T2 are independent with distributions Ba. (3.s(Jo = s) = 1 follows trivially from Lo = 1.s is the probability measure governing a renewal risk process with Jo = s and the interarrival distribution A and the service time distribution B changed to Aa.s and P(d).rc(a)] B[a] A[-a . Note that the changed distributions of A and B are in general not the same for Pa.2).4 The probability measure Pa. A[-a .1)-a in agreement u with Chapter III.. Ba(dx) = -B(dx).. . Aa as asserted .e.e-(«+k(a))t esy A(dt)..S„):0<v< )be Lt = eaSt -tK(a) h(Jt) = east .

which is the same as the asserted inequality for 0. Notes and references 4 The duality with queueing theory We first review some basic facts about the GI/G/1 queue. [APQ] Ch.6 Let y < let ay > 0 be the solution of ic'(ay) = 1/y. Using the Markov additive approach yields for example the following analogue of Theorem IV.1 and n. .)+1 e J j e-(ay+w(ay))8 e . The actual waiting time Wn of customer n is defined as his time spent in queue (excluding the service time).t. see in particular Dassios & Embrechts [98] and Asmussen & Rubinstein [45]. Label the customers 1. or FCFS = first come first served) queueing discipline and renewal interarrival times.5 Proposition 3. u The approach via the embedded random walk is standard. and U„ the service time of customer n.4.r. and assume that T„ is the time between the arrivals of customers n . the time from he arrives to the queue till he starts service. A(ds). see e. yu). and define yy = ay . THE DUALITY WITH QUEUEING THEORY 141 The Markov additive point of view is relevant when studying problems which cannot be reduced to the imbedded random walk.yx(ay).ay+ray))TM(. Then J(rr(u)) = TM(u)+1 and hence Ws(u.(u. For the approach via Markov additive processes. yu ) < e-7yu. The virtual waiting time Vt at time t is the residual amount of work at time t. Proof As in the proof of Theorem IV. .s e-aysr(")+r(u ) K(ay) h (s) .-y yuAa y [ay + K(ay) .4. the amount of .yu ) e-7vu A[-ay . it is easily seen that ic(ay ) > 0. T(u) < yu h(JT(u)) < e-ayu+yuk(ay ) ( Eia y Le-(a(+k(ay))s v.rc( ay)] = e-(aa+-(-r ))sb[a ]e-7yu L y1 In particular. Then "^ e-(ay+w(aY))8 Ys(u.. Let M(u) be the number of claims leading to ruin .g.. say finite horizon ruin probabilities where the approach via the imbedded random walk yields results on the probability of ruin after N claims. 2. . that is. XII.4. that is. for the zero-delayed case zp8(u. defined as the single server queue with first in first out (FIFO.5. The claim for the zero-delayed case follows by integration w. not after time T. yu) = F'ay.

1 and Corollary 11. (4.2. p < 1. If W1 = 0. on + Tn) the residual work decreases linearly until possibly zero is hit.n-1 (U1 +• • •+Uk -Tl . Let the T there be the random time UN. since customer n arrives at time on. 0 Applying Theorem 11.4. (4."^ Vi(N) (u)...Tn)+ Proof The amount of residual work just before customer n arrives is VQ„ -. (u). (4. Thus. an+1) = [on.3 Assume rl > 0 or. in which case {V} remains at zero until time on+1. whereas in [On .1 Wn+1 = (Wn + U.+ Un. Wn converges i n distribution to a random variable W. but interchanging the set . the proposition follows.4.3.• • • Tk ). but we shall present a slightly different proof via the duality result given in Theorem II.3) Proof Part (a) is contained in Theorem 11. Then P(r(u) < T) is the probability z/iiNi (u) of ruin after at most N claims. equivalently. we get: Corollary 4.4: Proposition 4. and we have P(W > u) = V.4. the waiting time a customer would have if he arrived at time t. . Thus Vos}1 _ = (Wn + Un . It then jumps to VQ„ .1. then Wn v M. Then: (a) as n -+ oo..1).1. RENEWAL ARRIVALS time the server will have to work until the system is empty provided no new customers arrive (for this reason often the term workload process is used) or.1) The following result shows that {Wn} is a Lindley process in the sense of II.142 CHAPTER V. The next result summarizes the fundamental duality relations between the steady-state behaviour of the queue and the ruin probabilities (part (a) was essentially derived already in 11.Tn)+. Vt converges in distribution to a random variable V. and combining with (4. The traffic intensity of the queue is p = EU/ET. and obviously z/'(u) = limN-. Also {Zt}o<t<T evolves like the left-continuous version of the virtual waiting time process up to just before the Nth arrival.2) (b) as t -* oo.2 Let Mnd) = maxk=o. we have Wn = Van(left limit). equivalently.4): Proposition 4.. and we have P(V > u) = ?/iiol(u).

T* are independent and distributed as U1. Letting n oo in Corollary 4. TN) with (TN.1.e. Then K(x) = J x00K(x . Hence for x > 0. T] form a stationary renewal process with interarrival distribution A..T*)+ < x) = P(W + U* .. u Now return to the Poisson case .T* = y yields K(x) = P ((W + U* .. and we get: . Then the corresponding queue is M/G/1.2). i.(N)(u) has the limit tp(u) for all u.5 (LINDLEY'S INTEGRAL EQUATION) Let F(x) = P(U1-T1 < x).. hence (since the residual lifetime at 0 and the age at T have the same distribution ... T1.. by an obvious reversibility argument this does not affect the distribution . K(x) = P(W < x). It follows that P(WN > u) =. convergence in distribution hold for arbitrary initial conditions .y)F(dy). we obtain: Corollary 4. we let T be deterministic . For part (b). and hence in particular ZT is distributed as the virtual waiting time just before the Nth arrival. but this requires some additional arguments (involving regeneration at 0 but not difficult) that we omit. Corollary 4.. In fact .4) Tlim F(s) (VT > u) = limo P(s) (r(u) < T) = '+^io) (u)• 0 It should be noted that this argument only establishes the convergence in distribution subject to certain initial conditions. as WN. x > 0.. Ti) and similarly for the U.oo in Proposition 4. Then the arrivals of {Rt} in [0. T1 . { Zt}o<t < T has the same distribution as the left-continuous version of the virtual waiting time process so that P(s)(VT > u) = P(s)(r(u) < T). However. where U*. cf.le) the same is true for the time-reversed point process which is the interarrival process for { Zt}o<t < T• Thus as before .T*)+.4. conditioning upon U* . namely W1 = 0 in (a) and Vo = 0. which implies the convergence in distribution and (4.T* < x) fK(x_y)F(dy) (x > 0 is crucial for the second equality!).4 The steady-state actual waiting time W has the same distribution as M(d). A. resp .. (4. THE DUALITY WITH QUEUEING THEORY 143 (T1.5) Proof Letting n .Ao in (b). we get W = (W + U* .2. (4.

W v V.5) to hold for all x E R and not just x > 0). Asmussen [24] and references there. Note that (4. Proof For the Poisson case. despite the fact that the extension from M/G/1 is of equally doubtful relevance as we argued in Section 1 to be the case in risk theory. the actual and the virtual waiting time have the same distribution in the steady state. . Some early classical papers are Smith [350] and Lindley [246].5) looks like the convolution equation K = F * K but is not the same (one would need (4. the zero-delayed and the stationary renewal processes are identical.g. Cohen [88] or [APQ] Ch. RENEWAL ARRIVALS Corollary 4. see e.6 For the M/G/1 queue with p < 1. The equation (4.5) is in fact a homogeneous Wiener-Hopf equation.g. That is.144 CHAPTER V. implying P(W > u) = P(V > u) for all u. Hence '(u) = Ali(°)(u). VIII). 0 Notes and references The GI/G/1 queue is a favourite of almost any queueing book (see e .

• Claims arriving when Jt = i have distribution Bi. {Jt} describes the environmental conditions for the risk process. {St} denotes the claim surplus process. and can be computed as the positive solution of WA = 0.f pi.(3i when Jt = i. t St = E Ui .T) = Pi (T(u) < T).)iJEE and its stationary limiting distribution by lr. Ire = 1. • The premium rate when Jt = i is pi.. M = supt>o St. dv. Thus. The intensity matrix governing {Jt} is denoted by A = (A. The ruin probabilities with initial environment i are '+ki(u) = pi(T(u ) < oo) = Pi (M > u). i=1 0 and r(u) = inf It > 0: St > u}.Chapter VI Risk theory in a Markovian environment 1 Model and examples We assume that arrivals are not homogeneous in time but determined by a Markov process {Jt}0<t<oo with a finite state space E as follows: • The arrival intensity is . As in Chapter I. . here it exists whenever A is irreducible which is assumed throughout. 145 Oj( u. N.

Unless otherwise stated. the operational time argument given in Example 1. Example 1.1 Consider car insurance. Example 1 . and we have f3. According to Theorem A5.146 CHAPTER VI.2 (ALTERNATING RENEWAL ENVIRONMENT) The model of Example 1. which is clearly unrealistic. in block-partitioned form. cf. = iii when j E E(i). say.4) with representation (E(i). MARKOVIAN ENVIRONMENT where as usual Pi refers to the case Jo = i. P = E 7riPi. say. meaning that accidents occuring during icy road conditions lead to claim amounts which are different from the normal ones. u . with rates Ani and Ain.a('). Assume similarly that the sojourn time in the normal state has distribution A(n) which we approximate with a phase-type distribution with representation (E('). For example.2. the intensity matrix is A OW-) T(i) T(n) t(n)a(i) where t(n) = -T(n)e. one expects that 3i > on and presumably also that Bn # Bi. We let p Pi = /ji/AB. Then the state space for the environment is the disjoint union of E(n) and E(i).T(n)). we could distinguish between normal and icy road conditions. a(i).1) iEE Then pi is the average amount of claims received per unit time when the environment is in state i. Cl The versatility of the model in terms of incorporating (or at least approximating) many phenomena which look very different or more complicated at a first sight goes in fact much further: Example 1. Thus. we can approximate A(i) with a phase-type distribution (cf. assume that the sojourn time in the icy state has a more general distribution A(i). and p is the overall average amount of claims per unit time.. respectively. cf. f3i and claim size distributions Bn. this is no restriction when studying infinite horizon ruin probabilities. and assume that weather conditions play a major role for the occurence of accidents. Proposition 1. /3 = Nn when j E E(n). Bi. i and corresponding arrival intensities Qn.5 below. An example of how such a mechanism could be relevant in risk theory follows. leading to E having two states n. t(i) = -T(')e are the exit rates. T(=)).14.1 implicitly assumes that the sojourn times of the environment in the normal and the icy states are exponential.11 below. we shall assume that pi = 1. r^ = P (1.

3i/pi. In the car insurance example. 1 .. MODEL AND EXAMPLES 147 Example 1 . Indeed. t(n) = -T("i)e.J017. The simplest model for the arrival intensity amounts to . T(1) +w11t(1)a(1) w12t (1)a(2) w21t(2)a(1) w1gt(1)a(9) w2gt ( 2)a(q) T(2) +w22t( 2)a(2) A = wg1t(9)a(1) wg2t(9)a(2) .. say it is larger initially. 0 Then (by standard operational time arguments) {St } is a risk process in a Markovian environment with unit premium rate. 4 (SEMI-MARKOVIAN ENVIRONMENT) Dependence between the length of an icy period and the following normal one (and vice versa) can be modelled by semi-Markov structure. w. and 1/ii(u) = t/ii(u). where W = (w. say. Approximating each A('?) by a phase-type distribution with representation (E('l).5 (MARKOV-MODULATED PREMIUMS) Returning for a short while to the case of general premium rates pi depending on the environment i.a(n). Example VIII.>...j = . St = SB-=(t).1. iq (visited in that order) and letfOil >.2. it = Je-l(t)... (9) where q = CHI. the state space E for the environment is { ('q. i8f n1. n8}.4) with states i1..3 Consider again the alternating renewal model for car insurance in Example 1.Q.1. resp.. one could for example have H = {i1. such that a sojourn time of type rt is followed by one of type c w. . Qi = . is the probability that a long icy period is followed by a short normal one. u From now on. A('^).3. One way to model this would be to take A(') to be Coxian (cf. i E E(n) }. and similarly for the normal period. but assume now that the arrival intensity changes during the icy period. Then for example wi.3i. we assume again pi = 1 so that the claim surplus is Nt St = ?Ui_t. i ) : n E H. .n.T(n)). the parameters are ^ij = aid/pi... let T 9(T) = f pi.. u Example 1 .tEH is a transition matrix. such that the icy period is of two types (long and short) each with their sojourn time distribution A('L). and . T(9) +wggt(9)0. depending only on 77..p. dt. This amounts to a family (A(")) ?CH Of sojourn time distributions. u Example 1.

Next we note a semi-Markov structure of the arrival process: Proposition 1.6 The claim surplus process {St} of a risk process in a Markovian environment is a Markov additive process corresponding to the parameters µi = -pi. . the Markov additive structure will be used for exponential change of measure and thereby versions of Lundberg's inequality and the CramerLundberg approximation. A remark which is fundamental for much of the intuition on the model consists in noting that to each risk process in a Markovian environment. vi(dx) = . Note also that (as the proof shows) 7ri/3i//3* gives the proportion of the claims which are of type i (arrive in state i). MARKOVIAN ENVIRONMENT We now turn to some more mathematically oriented basic discussion. .e(A-(Oi)d'sg)xe.(Qi)diag)• More precisely. B* = 1 /^* Bi. Nt Nt a . iEE iEE )3 These parameters are the ones which the statistician would estimate if he ignored the presence of Markov-modulation: Proposition 1.148 CHAPTER VI. In particular.5.(3iBi(dx). dx. one can associate in a natural way a standard Poisson one by averaging over the environment. the empirical distribution of the claims is B*. we put )3* = E 7fi/3i.8 As t oo. qij = 0 in the notation of Chapter 11. Pi (Ti E dx. The key property for much of the analysis presented below is the following immediate observation: Proposition 1.7 The Pi-distribution of T1 is phase-type with representation (ei. More precisely. N > 1(Ul < x) a4 B*(x). o = 0. t l=1 Note that the last statement of the proposition just means that in the limit.A . Proof The result immediately follows by noting that T1 is obtained as the lifelength of {Jt} killed at the time of the first arrival and that the exit rate obviu ously is f3j in state j. JT1 = j) = Qj • e. )3*.

Bi. cf. {St} to the compound Poisson model with parameters 0 *.. Bi(x). Hence Nt'> a . aA. the Fi-distribution of T1 in {St(a ) } is phase- type with representation (E. By Proposition A5. the limiting distribution of the first claim size U1 is B*. In particular.4. ^j 7riNi. Then {St-)} + {St*} in D[0. we may view Nt`i as the number of events in a Poisson process where the accumulated intensity at time t is Niti. Then it is standard that ti lt '4' iri as t -> oo. given {Jt}0<t<0. However . MODEL AND EXAMPLES 149 Proof Let ti = f1 I(JJ = i) ds be the time spent in state i up to time t and Nti) the number of claim arrivals in state i . Proof According to Proposition 1. zli( (u) .9 Consider a Markov-modulated risk process {St} with param- eters Ni.. this converges to the exponential distribution with rate 0* as a -* oo. In particular. i.aA . The next result shows that we can think of the averaged compound Poisson risk model as the limit of the Markov-modulated one obtained by speeding up the Markov-modulation.(/3i)aiag).2. Nt a' t t iEE Also. y Ni) i Nti) t a. Example 11.* (u) for all u. N -+ oo Hence 1 Nt 1 N`+) Nits Nt E I ( Ut <. iEE 13 A different interpretation of B* is as the Palm distribution of the claim size.. Bi. oo) as a -4 oo.. B*. Conditioning .x) = Nt E > I (Uk) X) Nt Bi(x) 1=1 iEE k=1 iEE 1: t5 Bi( x) = B*(x).1. denoting the sizes of the claims arriving in state i by U(') 1 standard law of large numbers yields U(') the N 1: I(Ukik < x) k=1 N a$.. has distribution (7ri)3i //3*)iEE and is independent of Ti. and let {St °i} refer to the one with parameters Pi.6. A. and furthermore in the limit JT. e. Proposition 1.7.

1 of [145]. e. those of type E7 with intensity z s = 5 in state 1 and with intensity z . marked by thin. and (at least when a is small such that state changes of the environment are infrequent). 1. > 1. s 5 in state 2. 132=2. lines in the path of {St}. is as in {St }.s = 1o in state 2. we first get that 3 (3* = 2. with T2 being exponential with rate .2.2}.. B2=1E3+4E7.s = o in state 1 and with intensity 1 . Continuing in this manner shows that the limiting distribution of (T.=1.).2 +2 2 = 3. U2) are independent of . shows similarly that in the limit (T2. which also yield O(a) (u. T) for all u and T.. 1. 0 Example 1. since E3 is a more dangerous claim size distribution than E7 (the mean is larger and the tail is heavier). 9 . That is. oo).. resp. thick.. The fact that indeed 0(a) (U) -3 0* (u) follows. from Theorem 3. A= ( - a -a ) \ a a 5 5 J 9 3 2 a1=2. there are p = 2 background states of {Jt}. T) -+ ?P* (u. we may imagine that we have two types of claims such that the claim size distributions are E3 and E7. MARKOVIAN ENVIRONMENT upon FT. the company even suffers an average loss. 5 5 where E5 denotes the exponential distribution with intensity parameter 5 and a > 0 is arbitrary. Computing the parameters of the averaged compound Poisson model.l3* and U2 having distribution B*. From this the convergence in distribution follows by general facts on weak convergence in D[0.31µB 2 = 2 5 3 7 70 Thus in state 1 where p. On Fig.1 with periods with positive drift alternating with periods with negative drift.10 Let E_{1.. state 1 appears as more dangerous than state 2.FT. Claims of type E3 arrive with intensity 2 .. the overall drift is negative since it = (2 2) so that p = 71P1 + 112P2 = 7.2. U. the paths of the surplus process will exhibit the type of behaviour in Fig. Thus..1.. B1=3E3+2E7.g. and in fact P1 = 31AB1 = 9 3 1 2 (5 3 3 1 1 2 1 5 7 1 81 70 ' _ 19 4 5 P2 = .150 CHAPTER VI..

1. That is.1 a. iEE . t -* oo.8.3* = 3/4 of the claims occur in state 1 and the remaining fraction 1/4 in state 2. MODEL AND EXAMPLES 151 Figure 1. a fraction r.11 (a ) ESt/t -* p . Hence B* = 415E3+5E7/ +4 ( 51E3 +5 E7) = 1E 3 +2E7. 0 The definition (1. Proof In the notation of Proposition 1. note first that EN Uk')/N a4' µgi. we have E[St + t I (t(i))iE EI = E t(i)OW = iEE t(i)Pi• iEE Taking expectations and using the well -known fact Et(i)/t -* 7ri yields (a).1 Thus. t -+ oo. = P.s. the averaged compound Poisson model is the same as in III. For (b).1). (b) St/t -* p ..(3. 01 /.1. Hence (i) Nti) 1 U(i) k' N(i)k=1 E t -4 St + t = iEE Nt t 1: 7ri Qi µs.1) of the safety loading is (as for the renewal model in Chapter V) based upon an asymptotic consideration given by the following result: Proposition 1.

1 and the Corollary are standard.4. X2 =SW2 -So. Proof The case 77 < 0 is trivial since then the a. X 1 =Sty. and . The proof of Proposition 1. 2 The ladder height distribution Our mathematical treatment of the ruin problem follows the model of Chapter III for the simple compound Poisson model. then M < oo a. In risk theory. [212]. see [APQ] p. let some state i be fixed and define w=wl=inf{t >0:Jt_#i. Then by standard Markov process formulas (e. with X2. some early studies are in Janssen & Reinhard [211].. There seems still to be more to be done in this area. If 77 > 0. 0 Notes and references The Markov-modulated Poisson process has become very popular in queueing theory during the last decade. w2=inf {t>w1:Jt_#i. X3. n n Thus {SWn l is a discrete time random walk with mean zero.. and hence wn /n a4.0i(u) < 1 for all i and u. [302].1(b) is essentially the same as the proof of the strong law of large numbers for cumulative processes..a form a renewal process ..Jt=i}. [APQ].g.ld. Now let r) = 0.s.. See Meier [258] and Ryden [314]. MARKOVIAN ENVIRONMENT Corollary 1. limit p . Now obviously the w. Eiw. [315]. Since the X„ are independent .s. and so on. see the Notes to Section 7.Eiw o'o Eiw • E ^ifjµs. EiX = 0.1)Eiw = 0. Theorem II. with some important improvements being obtained in Asmussen [17] in the queueing setting and being implemented numerically in Asmussen & Rolski [43]. having the Pi-distribution of X. and hence 1/ii(u ) = 1 for all i and u. Proposition 1. dt ..1 jEE = (p . also + . Statistical aspects are not treated here. and hence oscillates between -0o and oo so that also here M = oo. and hence M = 00..12 If 77 < 0.. PB. and a more comprehensive treatment in Asmussen [16]. s. .152 CHAPTER VI. The mainstream of the present chapter follows [16]. .2(a) p.Jt=i}..\ i and EiX1 Ei f 13 J.. The case 77 > 0 is similarly easy. and involves a version of the .1 of St / t is > 0. then M = 00 a. + Xn SWn ](1 a . 38) Eiw1 = -1/ir. 136 or A.

a/i.2 (a) The distribution of M is given by 00 1 . but is substantially more involved than for the compound Poisson case . T R(i. j. oo)). However . T+ < oo) and let G+ be the measure-valued matrix with ijth element G+(i.x. For measure-valued matrices.j E E.4) we obtain the following result . 00 (2.IIG +II)e.g. The form of G+ turns out to be explicit (or at least computable). Proposition 2. n=0 (2. let G+(i. Thus. j. •) II = JG+(i. That is. e.5. . Proposition 2.2) R(dx)S((y .6. for i. cf.2(a) below ) where the ladder height distribution is evaluated by a time reversion argument.Jr+ =j.2. G+ is the matrix whose ijth element is E G +(i. which represents a nice simplified form of the ladder height distribution G+ when taking certain averages : starting {Jt} stationary.dx). j. •)• kEE Also.x). 6. we get the same ladder height distribution as for the averaged compound Poisson model. oo)) = f R(i. •).(u) = Pi(M < u) = e' E G+ (u)(I . see also Example II.EA. Define the ladder epoch T+ by T+ = inf It : St > 0} = r(0).Jt=j)dt. Let further R denote the pre-T+ occupation kernel. •) * G +(k. dx)/jBj(y . oo) = J ao 0 G+(i.j. IIG+ II denotes the matrix with ijth element IIG+(i. the definition of . j. k.A) =ZI(St E.A) = Pt(ST+ E A. B* in Section 1.6*.3*B *(y)dy.j. j. (y. THE LADDER HEIGHT DISTRIBUTION 153 Pollaczeck-Khinchine formula (see Proposition 2.1) 0 (b) G+ (y..i. only with the product of real numbers replaced by convolution of measures. and S (dx) the measure -valued diagonal matrix with /3 Bj(dx) as ith diagonal element.1 irG+(dy)e =. we define the convolution operation by the same rule as for multiplication of real-valued matrices. by specializing results for general stationary risk processes (Theorem II .

we need to invoke the time-reversed version {Jt } of {Jt} .3 When q > 0.6.3. To make Proposition 2.2) is just the same as the proof of Lemma 11. see Figure 2. hence uniquely specified by its intensity matrix Q (say). thick.IIG+II)e. mx = j when for some (necessarily unique) t we have St = -x. MARKOVIAN ENVIRONMENT Proof The probability that there are n proper ladder steps not exceeding x and (x)ej. 0 ---------------------------- x Figure 2. and let further {my} be the E-valued process obtained by observing {Jt } only when {St*} is at a minimum value.3) We let {St*} be defined as {St}. To this end . lines in the path of {St}. The u proof of (2. we need as in Chapters II. III to bring R and G+ on a more explicit form . and that the environment is j at the nth when we start from i is e .154 CHAPTER VI. G+ the probability that there are no further ladder steps starting from environment j is e^ ( I . marked by thin. That is. the intensity matrix A* has ijth element * 7r ^i3 7ri and we have Pi(JT = j) = 7rj P2(JT = i)7ri (2. only with {Jt} replaced by {Jt } (the /3i and Bi are the same ). {mx} is a non -terminating Markov process on E. JJ = j.1) follows by summing over n and j.2 useful .1 for an illustration in the case of p = 2 environmental states of {Jt}. resp.1 The following observation is immediate: Proposition 2. From this (2. . St < S* for u < t.

2 where there are three excursions of depth 1. and the excursion ends at time s = inf {v > t : S.0.(/3i)diag + T S(dx) eQx. Figure 2.(/3i) diag.2 . we say that the excursion has depth 0. 0 mms1 - ---------------------------- ^O \ -T.. and the excursion is said to have depth 1 if each of these subexcursions have depth 0. Furthermore. ( Q( n)) converges monotonically to Q. Q( n+l) _ ^. Proof The argument relies on an interpretation in terms of excursions. 2.4 Q satisfies the non-linear matrix equation Q = W(Q) where 0 co(Q) = n* .and a jump (claim arrival) occurs at time t. In general.*. s]. THE LADDER HEIGHT DISTRIBUTION 155 Proposition 2. The definitions are illustrated on Fig. Otherwise each jump at a minimum level during the excursion starts a subexcursion. and S(dx) is the diagonal matrix with the f3iBi(dx) on the diagonal. corresponding to two subexcursions of depth 0. {S.2. the sequence {Q(n)} A* defined by Q(O) = . If there are no jumps in (t. An excursion of {St*} above level -x starts at time t if St = -x. For example the excursion of depth 2 has one subexcursion which is of depth 1. Note that the integral in the definition of W(Q) is the matrix whose ith row is the ith row of _ 3 f e2Bi(dx). we recursively define the depth of an excursion as 1 plus the maximal depth of a subexcursion.2. = -x}. } is a minimum value at v = t.

e. (2.4) To show Q = cp(Q).Qi + )%pij) Now just note that t pij and insert (2. It follows that qij = A.T+>t) _ ^iF 7ri (JJ =i. i. MARKOVIAN ENVIRONMENT Let p=7) be the probability that an excursion starting from Jt = i has depth at most n and terminates at J8 = j and pij the probability that an excursion starting from Jt = i terminates at J8 = j..j +/3ipij. Now let {m ( n) } be {mx } killed at the first time i7n (say) a subexcursion of depth at least n occurs . of the definition to make U be concentrated on (-co. Q = W(Q) follows. Writing out in matrix notation . Fi(mh =i ) = 1 + =h-flh+Qihpii+o(h) implies qii = 'iii -/i +)3ipii. Similarly. By considering minimum values within the excursion. p1^) Define a further kernel U by f U(i.St EA. Suppose mx = i. we first compute qij for i $ j.St <S*. 0)).5 R(i.s.6) . It is clear that { mini } is a terminating Markov process and that { mio) } has subintensity matrix A* . Similarly by induction .(01)diag = Q.5) -A (note that we use -A = {x : -x E Al on the r. either due to a jump of {Jt } which occurs with intensity A= j.156 CHAPTER VI.. The proof of Q = W(Q) then immediately carries over to show that the subintensity matrix of {mil) } is cp (Q(o)) = Q(l). it becomes clear that pij = r [eQh] 0 ij Bi (dy) • (2.u< t). Then a jump to j (i. StEA . = j. Theorem 2 . j.4). A) = f Pi(mx = j) dx eie4xej dx A u (2.j. h. or through an arrival starting an excursion terminating with J. A). the subintensity matrix of {min+i ) } is cp (Q(n)) = Q(n +l) which implies that qgj +1) = \!. mx+dx = j) occurs in two ways . A) = L' U(j. 7rE Proof We shall show that Fi(Jt=j.

z+>t) = P.=StSt-. and we let k be the corresponding right eigenvector normalized by Irk = 1. St EA.0<u<t) = P. {Jt }. and this immediately yields (2.7 It is instructive to see how Proposition 2. the CramerLundberg approximation (Section 3).t. Jt = i. From Qe = 0.St <Su.0<u<t. and to obtain a simple solution in the .(. 0 +1) = cp (K( n)) defined by K(o) = A . To this end. where A is the diagonal matrix with 7r on the diagonal: Corollary 2.St EA. oo))dx.r. oo)) = f o' eIXS((x + z. consider stationary versions of {Jt}. (Jo = j. x < 0. (b) for z > 0. and get irPi(Jt =j. e.(Jt=j. We may then assume Ju=Jt-u. 0<u<t). it is readily checked that 7r is a left eigenvector of K corresponding to the eigenvalue 0 (when p < 1).Qi)diag.StEA. we shall see that nevertheless we have enough information to derive. `` {K(n)} [the W(•) here is of course not the same as in Proposition 2. K( n (d) the sequence converges monotonically to K.6 (a) R(dx) = e-Kxdx. S. dt. u It is convenient at this stage to rewrite the above results in terms of the matrix K = 0-'Q'A. 0 < u < t) = 7rjPj(Jt =i. St E A. (c) the matrix K satisfies the non-linear matrix equation K = W(K) where W( K) = A ( i) diag + fi J "O eKx S(dx).2.6).6 is hardly all that explicit in general. St < St U..4]..S„<0.1 can be rederived using the more detailed form of G+ in Corollary 2.g.. Remark 2.Jo=i.6(b): from 7rK = 0 we get 7rG+(dy)e = J W 7reKx(fiiBi(dy + x))diag dx • e 0 w(fiiB1(dy + x))col dx f 0 EirifiiBi(y)dy = fi*B*(y)dy• iEE 0 Though maybe Corollary 2... G+((z. THE LADDER HEIGHT DISTRIBUTION 157 from which the result immediately follows by integrating from 0 to oo w.


special case of phase-type claims (Chapter VIII). As preparation, we shall give at this place some simple consequences of Corollary 2.6.

Lemma 2.8 $(I - \|G_+\|)e = (1 - \rho)k$.

Proof. Using Corollary 2.6(b) with $z = 0$, we get
$$\|G_+\| = \int_0^\infty e^{Kx} S\bigl((x,\infty)\bigr)\,dx. \qquad (2.7)$$
In particular, multiplying by $K$ and integrating by parts yields
$$K\|G_+\| = \int_0^\infty \bigl(e^{Kx} - I\bigr) S(dx) = K - \Lambda + (\beta_i)_{\mathrm{diag}} - \int_0^\infty S(dx) = K - \Lambda. \qquad (2.8)$$
Let $L = (k\pi - K)^{-1}$. Then $(k\pi - K)k = k$ implies $Lk = k$. Now using (2.7), (2.8) and $\pi e^{Kx} = \pi$, we get
$$k\pi\|G_+\|e = k\int_0^\infty \pi S\bigl((x,\infty)\bigr)e\,dx = k\,(\pi_i\beta_i\mu_{B_i})_{\mathrm{row}}\,e = \rho k,$$
$$K\|G_+\|e = (K - \Lambda)e = Ke,$$
$$(k\pi - K)(I - \|G_+\|)e = k - Ke - \rho k + Ke = (1-\rho)k.$$
Multiplying by $L$ to the left, the proof is complete. $\Box$
Here is an alternative algorithm to the iteration scheme in Corollary 2.6 for computing K. Let IAI denote the determinant of the matrix A and d the number of states in E. Proposition 2.9 The following assertions are equivalent: (a) all d eigenvalues of K are distinct; (b) there exist d distinct solutions 8 1 ,- .. , sd E {s E C : its < 0} of (A + (131(Bi[s] - 1))diag - sIl = 0. (2.9) I n that case , then Si, ... , sd are precisely the eigenvalues of K, and the corresponding left row eigenvectors al, ... , ad can be computed by

ai (A -

(fi(Bi[Si]

-

1))d iag - siI) = 0.

(2.10)

2. THE LADDER HEIGHT DISTRIBUTION
Thus, al seal K=

159

(2.11)

ad sdad Proof Since K is similar to the subintensity matrix Q, all eigenvalues must indeed be in Is E C : 2s < 0}.
Assume aK = sa. Then multiplying K = W(K) by a to the left, we get sa = a

f A It follows that if (a) holds, then so does (b), and the eigenvalues and eigenvectors

(

- (f3i)diag +

eS(dx)

= a (A - (/3i) diag + (/3iEi[s])diag)

can be computed as asserted. The proof that (b) implies (a) is more involved and omitted; see Asmussen u [16]. In the computation of the Cramer-Lundberg constant C, we shall also need some formulas which are only valid if p > 1 instead of (as up to now) p < 1. Let M+ denote the matrix with ijth entry M+(i,j) = xG+(i,j;dx). 0 Lemma 2 .10 Assume p > 1. Then IIG+II is stochastic with invariant probability vector C+ (say) proportional to -irK, S+ _ -7rK/(-7rKe). Furthermore, -irKM+e = p - 1. Proof From p > 1 it follows that St a4' oo and hence IIG+II is stochastic. That -7rK = -e'Q'0 is non-zero and has nonnegative components follows since -Qe has the same property for p > 1. Thus the formula for C+ follows immediately by multiplying (2.8) by --7r, which yields -irKIIG+II = -irK. Further M+ = fdzfeS(( x+z oo)) dx f 00 dy fy eKx dx S((y, oo)) 0 0 m K-' f (eKy - I) S((y, oo))dy, 0 00

-7rKM+e = 7r f d y(I - eKy) S((y, oo))e
= lr(/3ipB;) diage -

irII G +Ile

=p-1

160

CHAPTER VI. MARKOVIAN ENVIRONMENT
u

(since IIG+II being stochastic implies IIG+ IIe = e).

Notes and references The exposition follows Asmussen [17] closely (the proof of Proposition 2.4 is different). The problem of computing G+ may be viewed as a special case of Wiener-Hopf factorization for continuous-time random walks with Markov-dependent increments (Markov additive processes ); the discrete-time case is surveyed in Asmussen [15] and references given there.

3 Change of measure via exponential families
We first recall some notation and some results which were given in Chapter II
in a more general Markov additive process context. Define Ft as the measurevalued matrix with ijth entry Ft(i, j; x) = Pi[St < x; Jt = j], and Ft[s] as the matrix with ijth entry Ft[i, j; s] = Ei[e8St; Jt = j] (thus, F[s] may be viewed as the matrix m.g.f. of Ft defined by entrywise integration). Define further
K[a] = A + ((3i(Bi[a] - 1)) - aI

diag

(the matrix function K[a] is of course not related to the matrix K of the preceding section]. Then (Proposition 11.5.2):

Proposition 3.1 Ft[a] = etK[a] It follows from II.5 that K[a] has a simple and unique eigenvalue x(a) with maximal real part, such that the corresponding left and right eigenvectors VW, h(a) may be taken with strictly positive components. We shall use the normalization v(a)e = v(a)hi') = 1. Note that since K[0] = A, we have vi°> = 7r, h(°) = e. The function x(a) plays the role of an appropriate generalization of the c.g.f., see Theorem 11.5.7. Now consider some 9 such that all Bi[9] and hence ic(9), v(8), h(e) etc. are well-defined. The aim is to define governing parameters f3e;i, Be;i, Ae = 0!^1)i,jEE for a risk process, such that one can obtain suitable generalizations of the likelihood ratio identitites of Chapter II and thereby of Lundberg's inequality, the Cramer-Lundberg approximation etc. According to Theorem 11.5.11, the appropriate choice is
e9x

09;i =13ihi[9], Bo;i (dx) = Bt[B]Bi(dx),

Ae = AB 1K[9]De - r.(9)I oB 1 ADe + (i3i(Bi[9] -

1))diag - (#c(9) + 9)I

3. CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES

161

where AB is the diagonal matrix with h(e) as ith diagonal element . That is,

hie) DEB) _ ^Y' Me)
iii

i#j i=j

+ /i(Bi[9] -1) - r. (9) - 0

We recall that it was shown in II . 5 that Ae is an intensity matrix, that Eie°St h(o) = etK(e)hEe ) and that { eest - t(e)h(9 ) } is a martingale. t>o We let Pe;i be the governing probability measure for a risk process with parameters ,69;i, B9; i, A9 and initial environment Jo = i. Recall that if PBT) is ]p(T) the restriction of Pe ;i to YT = a {(St, Jt) : t < T} and PET) = PoT), then and PET) are equivalent for T < oo. More generally, allowing T to be a stopping time, Theorem II.2.3 takes the following form: Proposition 3.2 Let r be any stopping time and let G E Pr, G C {r < oo}. Then

PiG = Po;iG = hE°) Ee;i lh

1 j,)

exp {-BST + -rrc(0 ) }; G .

J

(3.1)

Let F9;t[s], ice ( s) and pe be defined the same way as Ft[s], c (s) and p, only with the original risk process replaced by the one with changed parameters. Lemma 3.3 Fe;t [s] = e-t"(B) 0 -1 Ft[s + O]0. Proof By II.( 5.8). u

Lemma 3.4 rte ( s) = rc(s+B ) - rc(O). In particular, pe > 1 whenever ic'(s) > 0. Proof The first formula follows by Lemma 3.3 and the second from Pe = rc'' (s).
Notes and references The exposition here and in the next two subsections (on likelihood ratio identities and Lundberg conjugation) follows Asmussen [16] closely (but is somewhat more self-contained).

3a Lundberg conjugation
Since the definition of c( s) is a direct extension of the definition for the classical Poisson model, the Lundberg equation is r. (-y) = 0. We assume that a solution

162

CHAPTER VI. MARKOVIAN ENVIRONMENT

y > 0 exists and use notation like PL;i instead of P7;i; also, for brevity we write h = h(7) and v = v(7).
Substituting 0 = y, T = T(u), G = {T(u) < oo} in Proposition 3.2, letting ^(u) = S7(u) - u be the overshoot and noting that PL;i(T(u) < oo) = 1 by Lemma 3.4, we obtain: Corollary 3.5
V)i(u,

T) =

h ie -7uE L,i

e -7{(u)
h =(u)
e -WO

; T(u) < T ,

(3 . 2) (3.3)

ioi(u)

= h ie -7u E

hj,(„)

.

Noting that 6(u) > 0, (3.3) yields
Corollary 3.6 (LUNDBERG'S INEQUALITY) Oi(u) - < hi e--fu. min2EE h9

Assuming it has been shown that C = limo, 0 EL;i[e-7^(u)/hj,(„j exists and is independent of i (which is not too difficult, cf. the proof of Lemma 3.8 below), it also follows immediately that 0j(u) - hiCe-7u. However, the calculation of C is non-trivial. Recall the definition of G+, K, k from Section 2.
Theorem 3 .7 (THE CRAMER-LUNDBERG APPROXIMATION) In the light-tailed case, 0j(u) - hiCe-7u, where

C (PL -1) "Lk.

(3.4)

To calculate C, we need two lemmas . For the first, recall the definition of (+, M+ in Lemma 2.10. Lemma 3 .8 As u -4 oo, (^(u), JT(u)) converges in distribution w.r.t. PL;i, with the density gj(x) (say) of the limit (e(oo), JT(,,,,)) at b(oo) = x, JT(oo) = j being independent of i and given by
gi (x) = L 1 L E CL;'GL (e,.1; (x, oo)) S+M+e LEE

Proof We shall need to invoke the concept of semi-regeneration , see A.1f. Interpreting the ladder points as semi-regeneration points (the types being the environmental states in which they occur), {e(u),JJ(u))} is semi-regenerative with the first semi-regeneration point being (^(0), JT(o)) _ (S,+, J,+). The formula for gj (x) now follows immediately from Proposition A1.7, noting that the u non-lattice property is obvious because all GL (j, j; •) have densities.

3. CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES
Lemma 3 .9 KL = 0-1K0 - ryI, G+[-ry] _

163

-111G+IIA, G+['y]h = h.

Proof Appealing to the occupation measure interpretation of K, cf. Corollary 2.6, we get for x < 0 that ete-Kxej dx =

fPs(StE dx,J =j,r > t)dt

= hie-7x f O PL;i(St E dx, Jt = j, T+ > t) dt hj o

= ht e-7xe^e-K`xej dx,
which is equivalent to the first statement of the lemma. The proof of the second is a similar but easier application of the basic likelihood ratio identity Proposition 3.2. In the same way we get G+['y] = AIIG+IIT-1, and since IIG+ IIe = e, it follows that

G +[ry l h

= oIIG+IIo -1h = AIIG+ IIe =

De

= h.

Proof of Theorem 3.7 Using Lemma 3.8, we get EL (e-'W- ); JT(.) = jl = f 00 e- 7xgj (x) dx L J o 1 °°
f e-7^G+( t, j; (x, oo)) dx S+M+e LEE °

-

1 (+;l f S +M +e LEE 0
rr ry S +M +e LEE

0 1(1 - e-7 x ) G+(1,j; dx)

-

1

E(+(IIG+(e,j)II-G+[t,j;

In matrix formulation, this means that

C =

E L;i

e-7f(-)

hj,r(_) L

- L

ryC M e

L

c+

(IIG+II - G +[- 7]) 0-le

1

L

YC+M+e
'y(PL - 1)

(-ir KL) (I - G+[- y]) 0-le,

164

CHAPTER VI. MARKOVIAN ENVIRONMENT

using Lemma 2.10 for the two last equalities. Inserting first Lemma 3.9 and next Lemma 2.8, this becomes 1 7r LA -1(-YI - K)(I - IIG+II)e 'Y(PL - 1) = 1 P 7r LA -1(yI - K) k = 1-P 7rLO-1k. Y(PL - 1) (PL - 1 ) Thus, to complete the proof it only remains to check that irL = vL A. The normalization vLhL = 1 ensures vLOe = 1. Finally, VLOAL = vLAA-'K['Y]A = 0

since by definition vLK[y] = k(y)vL = 0.

u

3b Ramifications of Lundberg 's inequality
We consider first the time-dependent version of Lundberg 's inequality, cf. IV.4. The idea is as there to substitute T = yu in 'Pi (u, T) and to replace the Lundberg exponent y by yy = ay - yk(ay ), where ay is the unique solution of rc(ay)= 1 Y Graphically, the situation is just as in Fig. 0.1 of Chapter IV. Thus, one has always yy > y, whereas ay > -y, k( ay) > 0 when y < 1/k'(y), and ay < y, k(ay) < 0 when y > 1/k'(-y). Theorem 3 .10 Let C+°) (y) _ 1
miniEE hiav)

Then 1 y< (y)
y>

Vi(u,yu)
Pi(u) -

C+°)(y)hiav)

e-7vu,

(3.6)

V,i(u,yu)

< C+)(y)hiar )e -'Yvu,

(y) (3.7)

Proof Consider first the case y <

Then, since k (ay) > 0, (3 .1) yields

'12(u,yu)

hiav)]E'iav,i

h(ay ) J*(u)

exp {-ayST(,L ) +r(u)k( ay)}; T(u) < yu

yu) f h(av)e v -avuE«v.y)G+(z. dy)• o iEE jEE . Our next objective is to improve upon the constant in front of a-7u in Lundberg's inequality as well as to supplement with a lower bound: Theorem 3. However.8 ) Then for all i E E and all u > 0. av 'i [h. (u. we have ic(ay) < 0 and get 'i(u) . oo)) and. exp {-e() + r(u))} . hj P . yu < r(u) < 00 h 4(u) < h(av)C+o)(y)e-avuEav .11 Let Bj (x) C_ = min 1 • inf jEE hj x>o f2° e'r( v-x)Bj(dy) ' C+ _ mE 1 Bj(x) J Y -x)Bj (dy).5) will produce the maximal ryy for which the argument works.00 su e7( ( 3. we let G+ * W(u) be the vector with ith component E(G+(i. yu < r(u) < 00] < hiav)C+o)( y)e-avu+yuw(av) 0 Note that the proof appears to use less information than is inherent in the definition (3.9) For the proof.(ay)}.i I (a) exp {-aye(u) + r(u)r.j.5).. r(u) < yu] hiay)C+ h=av)C+ o) (y)e-ayu+yuw(av). if y > 1lk'(ry).i [eT(u)K(av ). as in the classical case (3. 1 Similarly. We further write G(u) for the vector with ith component Gi(u) = EiEE G+(i.7.j) * coj)(u) _ f u ^Pj(u . CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES 165 hiav)e _avuE.V)i(u. r(u) yu o)(y)e-avuEav. we shall need the matrices G+ and R of Section 2. (3.3.i [e*(u)K(av). C-hie -ryu < Vi(u ) < C+hie -7u. for a vector <p(u) = (cpi (u))iEE of functions .

jEE u 0 j. dx). j. _ To see that the ith component of U * G(u) equals ?Pi (u). dy) = aj f Bj(dy .dy). Lemma 3 . Then iterating the defining equation ip(n+1) = G + G+ * V(n) we get W(N+1) = UN * G + G+N+1) * ^(o) However. °O . = Eo G+ G. j. Hence lim cp(n) exists and equals U * G. we have G *(N +1) * ^.ery(&-u+x)Bj (dy) Bj(u Bj (u . U = U". and define W(n+1) (u) = G(u) + (G+ * tp(n))(u). dy) 00 C+ ijhj f R(i. dx) f e7( v-u)Bj (dy .3jhj // f 00 R(i. 0 G+(i.x) x) jEE 0 E Qj f jEE R(i.166 CHAPTER VI. just note that the recursion <p(n+1) = G + G+ * (p(n) holds for the particular case where cpin)(u) is the probability of ruin after at most n ladder steps and that then obviously u cp2n) (u) -+ t. dy) : 1(u) < C+ > hj u e(1tL)G+(i.(0) ] (u) < sup Jt t. n -> oo.j. 00 Thus C+ > hj f"o e7(Y-u)G +(i.13 For all i and u.12 Assume sup1.7.x ) R(i. dx) 100 C .x) jEE 00 u 0 //^^ C+E. j. MARKOVIAN ENVIRONMENT Lemma 3 . dx ) Bj (u . j. if r+ (n) is the nth ladder epoch. Then cpin)(u) sit (u) as n -+ oo.j.u IMP:°) (u) I < oo.u Iv 2°)(u)I Pi(rr+(N + 1) < oo) --+ 0. Proof Write UN = EN G+ .& (u).x ) = Gi(u). 00 f C_ hj f e(Y)G+(i.

y)G+(i.10: Theorem 3 . u The proof of the upper inequality is similar .11) C_e-7u 57 O+[i. T) = Pi (7.10) C_ 1 f hje7(y.12) Proof We first note that just as in the proof of Theorem 3.u)G+(i. dy) jEE u U +C_ hje7( y-u)G jEE"" +(i. 167 u Proof of Theorem 3. let C+(yo) be as in (3.(u) < T ) to 0i (u) which is different from Theorem 3. jEE estimating the first term in (3. Indeed. j.13) Hence.ST). 9 for the last equality in (3. y]hj = C_ e-7uhi. (3. 14 Let yo > 0 be the solution of 'c'(yo ) = 0. and using Lemma 3 . we have Vii (u) .10 ) by Lemma 3 . j.8) with -y replaced by yo and hi by h=7o ). CHANGE OF MEASURE VIA EXPONENTIAL FAMILIES proving the upper inequality.MT<u. we get Wo n +1) (u) = ? 7 i ( U ) + E J u gyp. ST < u] < C+(yo)e-7ouEi [h^7o)e70ST1 l T J = C h(7o)e-7ou8T . We claim by induction that then cpin) (u) > C_ hie-7u for all n. taking cps°) (u) = 0. 13 and the second by the induction hypothesis . and assuming it shown for n. j.13 Let first cp=°)(u) = C_ hie-"u in Lemma 3. it follows that Vi(u) < C_(yo) h=70)e-7ou. dy) (3. +i .T) = Pi(M > u) . Then 0< Vi (u ) - 0i(u. MT < u. T) < C+(')' o)hi7u)e-7ou8T . j.3. and the proof of the lower one is similar. dy) jEE o (3.tpi(u.Pi(MT > u) = Pi(MT < u.13.M>u) = Ei [VGJT (u .M > u) = Pi(ST<u. letting MT = maxo<t<T St. (3. this is obvious if n = 0. and let 8 = e'(70). from which the lower inequality follows by letting n -* oo. Here is an estimate of the rate of convergence of the finite horizon ruin probabilities 'i (u.11.11).n) ( u .

The results to be presented show that quite often this is so. It was long conjectured that -0* Vi..o. this correponds to the usual stochastic ordering of the maxima M'. we refer to . we define the stochastic ordering by 0' < s.33 or Bi 0 Bj. B2 <_s. where o*(u) is the ruin probability for the averaged compound Poisson model defined in Section 1 and . is the one for the Markov-modulated one in the stationary case (the distribution of J0 is 7r).3) Bl <_s. V)" if z/i'(u) <'c"" (u). u > 0. [177]. 4 Comparisons with the compound Poisson model 4a Ordering of the ruin functions For two risk functions 0'.o.3) to B = Bi does not depend on i.5) Note that whereas (4..3p. in part from the folklore principle that any added stochastic variation increases the risk...1) Obviously.o. Bp. 0"(u) = P(M" > u)) Now consider the risk process in a Markovian environment and define i' (u) _ >iEE irioi(u).168 CHAPTER VI.31:5)32 .. For the notion of monotone Markov processes. Occasionally we strengthen (4.o. (4. Further related discussion is given in Grigelionis [176]. and finally in part from queueing theory. where it has been observed repeatedly that Markov-modulation increases waiting times and in fact some partial results had been obtained.3). but that in general the picture is more diverse. ". The conditions which play a role in the following are: .. we also assume that there exist i # j such that either /3i <.. (4. <s. The motivation that such a result should be true came in part from numerical studies.. M" of the corresponding two claim surplus proceses (note that 0'(u) _ P(M' > u). this is not the case for (4.4) To avoid trivialities. < . MARKOVIAN ENVIRONMENT Notes and references The results and proofs are from Asmussen and Rolski [44].0.2) alone just amounts to an ordering of the states.2) (4. The Markov process {Jt} is stochastically monotone (4. (4.

9 ) below). 2 If al < . Comparing (4. we need two lemmas. Proschan & Walkup [140].1 Assume that conditions (4. Lemma 4 .x) dx u o i =1 i=1 (4. Section 4. = aP or b1 = .6. Proof of Theorem 4. Conditioning upon the first ladder epoch. then P P P 7rjbj. < bp and 7ri > 0 (i = 1. (4. Lemma 4 ... Theorem 4 . the second follows from an extension of Theorem I1..4) hold..1) which with basically the same proof can be found in Asmussen & Schmidt [49]. < a.2. = b. it follows by a standard . E 7r i Wi(u . 7-(0) < oo) = pirf+).2)-(4.1 for the first term in (4.1. (b) P.6) 7r= fl*B*(u) + p> s=1 +) fu 0 b (u - x)Bt (x) /pB.13* J0 u 0*(u . The first is a standard result going back to Chebycheff and appearing in a more general form in Esary. we obtain (cf. Proposition 2.4) say basically that if i < j .10) Q*B*(u)+. also Proposition 2.8) ^j Tri/iBd(x) . .. dx (4.r(u -x)dx. then j is the more risky state .9) follows by considering the increasing functions 3iBi (x) and Oi (u .10) and (4. . and it is in fact easy to show that Vii(u ) < t/j(u) (this is used in the derivation of (4. Conditions (4. 1:7riaibi > E 7riai i=1 i=1 j=1 The equality holds if and only if a1 = ... COMPARISONS WITH THE COMPOUND POISSON MODEL 169 Stoyan [352]. where 7r2+) = QiµBilri/p.2.3* f uB(x) z/^.9) (4.r (JT(o) = i.x) of i and using Lemma 4.7) 7ri.3iBi(x)YPi(u ...4.r (Sr(o) E dx Jr(o) = i.2)-(4. p).7) and Lemma 4. ^i 7ri = 1. T(0) < oo) = Bi(x) dx/tcai ..3 for the second) *(u) _ /3 *B* (u) +.4) is automatic in some simple examples like birth-death processes or p = 2 . Then V. note that (4..x)B*(x) dx.* For the proof... b1 < .5 (cf..6). 3 (a) P.x)dx _ /3*B*(u) + f u / ^ t=1 > 3 * B* ( ) + f (4. 0 Here (4.

of order 10-1. let = ( 1/2 1/2 ) .8) we get P P '*' (0) = -3* + /3*1* (0) _ > lri'3qqi • E 7i/ipBi .(0) < b *'(0) for e small enough.6). µB2 = 10-4. u To see that Proposition 4.4 is the understanding of whether the stochastic monotonicity condition (4. As is seen.1 of [145] for a formal proof) that z/ii(u) converges to the ruin probability for the compound Poisson model with parameters . Rolski & Schmidt [32]. that P P /^ 1r1NiµBi /^2 /^ ^i/ji pBi < 1il3i i=1 i=1 (4. u Here is a counterexample showing that the inequality tp* (u) < V). Proof Since 0./3*.s. dominates the solution 0* to the renewal equation (4. this ruin probability is /3iPBi.* (0).(0) = V. Notes and references The results are from Asmussen. For u = 0. of (4.h. it will hold for all sufficiently large u. 0. they are at present not quite complete. Bi as e J. except possibly for a very special situation .. i=1 i=1 7'r(0) _ EFioiwi(0) . Recall that . µB.2. 01 = 10-3. Then i/i*(u) < .r (u ) fails for all sufficiently small e > 0.11) is of order 10-4 and the r. MARKOVIAN ENVIRONMENT argument from renewal theory that tk. (4.s.3i.0*• i=1 But it is intuitively clear (see Theorem 3.3µi < 1 for all i. What is missing in relation to Theorem 4. Then the l..1 and Proposition 4.4 is not vacuous.. Frey. = 102.. and from this the claim follows. 4b Ordering of adjustment coefficients Despite the fact that V)* (u) < *. Using (4.0.11) i=1` and that A has the form eAo for some fixed intensity matrix A0.4 Assume that . it is sufficient to show that 0'.4) is essential (the present author conjectures it is). Q2 = 1.h.. (u) may fail for some u. (u) is not in general true: Proposition 4.6).170 CHAPTER VI.

Further (see Corollary 11. 0 .2 we have (Ei[e"X'.ld) with generic cycle w = inf{t>0: Jt_54 k. cf.13) implies A(a) > 0 for all a. Then {(Jt. It is clear that the distribution of X.Jt=kI A (the return time of k) where k E E is some arbitrary but fixed state. which in view of EiEE 1ibi = 0 is only possible if Si = 0 for all i E E. with strict inequality unless rci (y*) does not depend on iEE. The adjustment coefficient -y* for the averaged compound Poisson model is the solution > 0 of rc*(ry*) = 0 where rc*(a) _ 13*(B*[a] .12) iEE Theorem 4. Hence if 5i 54 0 for some i E E. Asmussen [20]) as discussed in 11. Lemma 4.g.1) . e. in particular .. (4. This implies that A is strictly convex.g.5.4(b) that the limit in (4. and by Proposition II.13) (4. (4. with strict inequality unless a = 0 or bi = 0 for all i E E. Jt = i])' EE = vA+n(6. Now we can view {Xt} as a cumulative process (see A. is non-degenerate unless bi does not depend on i E E.5.5..)a.a. it follows by Proposition A1.7) )i is convex with A'(0) = lim EXt t-ioo t = iEE 70i = 0. COMPARISONS WITH THE COMPOUND POISSON MODEL 171 the adjustment coefficient for the Markov-modulated model is defined as the solution -y > 0 of ic(-y) = 0 where c(a) is the eigenvalue with maximal real part of the matrix A + (rci(a))diag where rci(a) = ai(Bi[a] .14) is non-zero so that A"(0) > 0.a = E irirci(a). (4.(a) > 0 for all a 0 0.14) A„(O) iioo varXt t t By convexity.1) . Proof Define X= f &ids.4.6 Let (di)iEE be a given set of constants satisfying EiEE iribi = 0 and define A(a) as the eigenvalue with maximal real part of the matrix A + a(bi)diag• Then )t(a) > 0.5 y < ry*. Xt)} is a Markov additive process (a so-called Markovian fluid model.

12) and rc*(y*) = 0. multiply the basic equation by a to obtain 0 = (A0 + e(r£i(y))diag)h.eir)h'(0). this implies that the solution y > 0 of K(y) = 0 must satisfy y < y*. In the case of e.. we get rc (y*) > 0 which in a similar manner implies that u y < y*. Rolski & Schmidt [32].16) Differentiating (4.. Notes and references Theorem 4. h depend on the parameter (e or a). note that y(a) -+ mins=1.5. Frey.15) Normalizing h by 7rh = 0.e7r)-1 (Ici(Y*))diage..5 is from Asmussen & O'Cinneide [40]. Let bi = rci(y*). (4.Qi and Bi are fixed . Since ic is convex with rc'(0) < 0 . Thus -y(e) -* y* as e 10. The corresponding adjustment coefficient is denoted by ry(e). Hence letting e = 0 in (4. 4c Sensitivity estimates for the adjustment coefficient Now assume that the intensity matrix for the environment is Ae = Ao/ e. Hence rc (y*) > 0.15) once more and letting e = 0 we get .. we have 7rh' = 0. whereas the . (4. h(0) = e.6.) and rc (•). where A. and our aim is to compute the sensitivity ay e a E=O A dual result deals with the limit a -4 oo.p yi and compute 8y 8a a=0 In both cases. y. Further a(1) = rc(y*) by definition of A(. MARKOVIAN ENVIRONMENT Proof of Theorem 4. a = 1 in Lemma 4. 0 = ((ri(-Y))diag + ery (4{('Y))diag)h + (A0 + e(?i'Y))diag)h'. the basic equation is (A + (rci(y))diag)h = 0. Then > risi = 0 because of (4.172 CHAPTER VI. Here we put a = 1/e. If rci(y* ) is not a constant function of i E E.15) yields 0 = (Ii(y*)) diage + Aoh'(0) = (rci('Y*)) diage + (Ao . improving upon more incomplete results from Asmussen. h'(0) = -(Ao .

The analogue of Proposition 4. i = 2. Frey.17) (4. multiplying (4.19) holds.18) 0 = 27'(0)p+27r(rs.8 when ryi < 0 for some i is open.. (4. the intensity for such a transition (referred to as marked in the following) is denoted by Aii l and the remaining intensity . 0 = (Ao + ry'(ii(-Y)) diag )h + (aAo + (Ki(7'))diag)h'.8 If (4. We get 0 = (aAo + ( lc&Y))diag)h.18). 5 The Markovian arrival process We shall here briefly survey an extension of the model. We assume that 0 < -y < 7i. which has recently received much attention in the queueing literature.20) and multiplying by el to the left we get 0 = All + 7'(0)rci (0) + 0 (here we used icl (ry(0)) = 0 to infer that the first component of K[7(0)]h'( 0) is 0).17) by 7r to the left to get (4. . Rolski & Schmidt [32]. The additional feature of the model is the following: • Certain transitions of {Jt} from state i to state j are accompanied by a claim with distribution Bid. and we have proved: Proposition 4.20) Letting a = 0 in (4. and may have some relevance in risk theory as well (though this still remains to be implemented). Inserting (4. p. then 8a a=o All rci (0) Notes and references The results are from Asmussen.5. (4. THE MARKOVIAN ARRIVAL PROCESS 173 0 = 27'(0)(r-i(`Y *)) diage + 2(ci('Y* )) diag h' (0) + Aoh" (0) .16) yields Proposition 4.19) Then 'y -^ ryl as a ^ 0 and we may take h(0) = el (the first unit vector)..7 8ry aE = 1 7r(ci ('Y*))diag ( Ao -e7r)-1(Xi(-Y*))diage *=0 P Now turn to the case of a. .i(7' *))diagh'(0). (4.

Bii = Bi . In the above setting. Again . the Markov-modulated compound Poisson model considered sofar corresponds to A(l) = (. Thus . A(l) = T. the claim surplus is a Markov additive process (cf. we use the convention that a1i = f3i where 3i is the Poisson rate in state i. A(1'k) A(2 k1).(13i )diag. refer to notation) { Jt k) }. is neither 0 or 1 is covered by letting Bij have an atom of size qij at 0. The extension of the model can also be motivated via Markov additive processes: if {Nt} is the counting process of a point process.2 for details). Here are some main examples: Example 5 .1 (PHASE-TYPE RENEWAL ARRIVALS) Consider a risk process where the claim sizes are i. with common distribution B. where qij is the probability that a transition i -* j is accompanied by a claim with distribution. Bij = B. Note that the case that 0 < qij < 1. Jt2)) (2. that Bii = Bi . the definition of Bij is redundant for i i4 j. and that are determined by A = A(l ) +A(2) where A is the intensity matrix the governing {Jt}. u Example 5 .6i ) diag. and thus 1i = 0. we may let {Jt} represent the phase processes of the individual interarrival times glued together (see further VIII. but the point process of arrivals is not Poisson but renewal with interclaim times having common distribution A of phase-type with representation (v. II. T). MARKOVIAN ENVIRONMENT f o r a transition i -+ j by A . This is the only way in which arrivals can occur. Jt = (Jtl). . let { Jt 1) }. then {Nt} is a Markov additive process if and only if it corresponds to an arrival mechanism of the type just considered. Indeed.^) etc.i.d.2 (SUPERPOSITIONS) A nice feature of the set-up is that it is closed under superposition of independent arrival streams . A ( 2) = A (2`1 ) ® A. A(l) = tv. the definition of Bi is redundant because of f3i = 0.4). and the marked transitions are then the ones corresponding to arrivals. A(1) = A . We then let (see the Appendix for the Kronecker E = E(1) x E(2). j(2) } be two independent environmental processes and let E(k).174 CHAPTER VI. B.2) A(1) = A(' 1) ® A(1.2). For i = j.

after which it starts afresh. and that the policy then expires.kl is redundant). Similarly. This means that the environmental states are of the form i1i2 • • • iN with il..}.5.. Bilo.. i2i .. u Example 5 . iN. Example 5 . 11. However . Thus. Easy modifications apply to allow for • the time until expiration of the kth policy is general phase-type rather than exponential. DEAD etc... E = { WORKING...4 (A SINGLE LIFE INSURANCE POLICY ) Consider the life insurance of a single policy holder which can be in one of several states. Assume further that the ith policy leads to a claim having distribution Ci after a time which is exponential. the idea of arrivals at transition epochs can be found in Hermann [193] and Rudemo [313].iN are zero and all Bi are redundant. Hermann [193 ] and Asmussen & Koole [37] showed that in some appropriate .. claims occur only at state transitions for the environment so that AN2. superpositions of renewal processes.3 (AN INDIVIDUAL MODEL) In contrast to the collective assumptions (which underly most of the topics treated sofar in this book and lead to Poisson arrivals)... all Al i2..1i2.g..iN = C27 All other off-diagonal elements of A are zero so that all other Bii are redundant. WIDOWED.. The versatility of the set-up is even greater than for the Markov-modulated model. the kth policy enters a recovering state..iN = a2. possibly having a general phase-type sojourn time. • upon a claim.iN..iil.1i2 .kj = Bik) B13 4k = Bak) 175 - (the definition of the remaining Bij.. RETIRED.iil. or. as the Markovian arrival process ( MAP). with rate ai. THE MARKOVIAN ARRIVAL PROCESS Bij. more recently. say. INVALIDIZED.iN C17 AilO. iN = all BOi2. The individual pays at rate pi when in state i and receives an amount having distribution Bij when his/her state changes from i to j.iN.. iN. where ik = 0 means that the kth policy has not yet expired and ik = 1 that it has expired. E 10. assume that there is a finite number N of policies. u Notes and references The point process of arrivals was studied in detail by Neuts [267] and is often referred to in the queueing literature as Neuts ' versatile point process . In this way we can model. e. MARRIED. DIVORCED.... In fact .

0 < t < 1. Without loss of generality. The basic assumptions are as follows: • The arrival intensity at time t of the year is 3(t) for a certain function /3(t). • Claims arriving at time t of the year have distribution B(t). we talk of s as the 'time of the year'. one limitation for approximation purposes is the inequality Var Nt > ENt which needs not hold for all arrival streams. • The premium rate at time t of the year is p(t). . )3 t 1 J (6. p(t) and B(t) are defined also for t t [0. Neuts [271] and Asmussen & Perry [42]. B* = J f B(t) ((*) dt. from an application point of view.176 CHAPTER VI. For the Markov-modulated model. Let 1 1 /3* _ f /3(t) dt.1) Then the average arrival rate is /3* and the safety loading rt is 77 = (p* . Lucantoni et at. [248]. Obviously. By periodic extension. one needs to assume also (as a minimum) that they are measurable in t. Lucantoni [248]. but now exhibiting (deterministic) periodic fluctuations rather than (random ) Markovian ones. let the period be 1. for s E E = [0.3*µs • p = f /3(v) dv 0 0 (6. 1). MARKOVIAN ENVIRONMENT sense any arrival stream to a risk process can be approximated by a model of the type studied in this section : any marked point process is the weak limit of a sequence of such models . Some main queueing references using the MAP are Ramaswami [298]. We denote throughout the initial season by s and by P(8) the corresponding governing probability measure for the risk process. where i f00 xB(°) (dx) _ . 6 Risk theory in a periodic environment 6a The model We assume as in the previous part of the chapter that the arrival mechanism has a certain time-inhomogeneity. p * = 0 p(t) dt.2) Note that p is the average net claim amount per unit time and µ* = p//3* the average mean claim size. 1). a claim arrives with rate /3(s + t) and is distributed according to B(8+0 . we may assume that the functions /3(t).p)/p. Thus at time t the premium rate is p(s + t). continuity would hold in presumably all reasonable examples. Sengupta [336].

p* as an averaged version of the periodic model. The behaviour of the periodic model needs not to be seen as a violation of this principle. respectively. In particular. and thus the averaged standard compound Poisson models have the same risk for all A. We u assume in the rest of this section that p(t) . or. Section 4b).1 As an example to be used for numerical illustration throughout this section. The claim surplus process {St } two is defined in the obvious way as St = ^N° Ui . It is easily seen that .w(t)) dt).3* = 3A.3) Note that A enters just as a scaling factor of the time axis. B*.t.6. p(t) = A and let B(t) be a mixture of two exponential distributions with intensities 3 and 7 and weights w(t) _ (1 +cos27rt)/2 and 1 . we shall see that for the periodic model increasing A increases the effect of the periodic fluctuations. of the periodic model as arising from the compound Poisson model by adding some extra variability. for Markov-modulated model typically the adjustment coefficient is larger than for the averaged model (cf. the average compound Poisson model is the same as in III. (6. equivalently. since the added variation is deterministic.3(t) = 3A(1 + sin 27rt). in agreement with the general principle of added variation increasing the risk (cf. let . it turns out that they have the same adjustment coefficient. and we recall from there that the ruin probability is 24 1 *(u) _ 3 5e-u + 35e-6u. RISK THEORY IN A PERIODIC ENVIRONMENT 177 In a similar manner as in Proposition 1.(3. Thus . In contrast. St = Se-I(t). p* = A whereas B* is a mixture of exponential distributions with intensities 3 and 7 and weights 1/2 for each (1/2 = ff w(t)dt = f o (1.10.1.8.2 Define T 6(T) = p(t ) dt. In contrast. Thus. Many of the results given below indicate that the averaged and the periodic model share a number of main features.1) and Example 1. not random. Example 6 . u Remark 6 .w(t). 0 Then (by standard operational time arguments ) {St} is a periodic risk process with unit premium rate and the same infinite horizon ruin probabilities. The arrival process {Nt}t>0 is a time-inhomogeneous Poisson process with intensity function {/3(s + t)}t>0 . one may think of the standard compound Poisson model with parameters 3*. the discussion in 111. the conditional distribution .9).
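A quick numerical check of the averaged quantities attached to Example 6.1 is given below (a sketch only; the quadrature grid is arbitrary). The bisection at the end recovers the adjustment coefficient γ = 1 of the averaged compound Poisson model, which, as noted above, is also the adjustment coefficient of the periodic model.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

A = 1.0                                               # scaling factor of the time axis
beta = lambda t: 3 * A * (1 + np.sin(2 * np.pi * t))  # arrival intensity beta(t)
w    = lambda t: (1 + np.cos(2 * np.pi * t)) / 2      # weight of the Exp(3) component of B^(t)
muB  = lambda t: w(t) / 3 + (1 - w(t)) / 7            # mean claim size at season t

t = np.linspace(0.0, 1.0, 20001)
beta_star = trapz(beta(t), t)                 # (6.1): beta* = 3A
rho       = trapz(beta(t) * muB(t), t)        # (6.2): rho  = 5A/7
mu_star   = rho / beta_star                   # average mean claim size 5/21
eta       = (A - rho) / rho                   # safety loading (premium rate p(t) = A)
print(beta_star, rho, mu_star, eta)

# Adjustment coefficient of the averaged compound Poisson model (take A = 1):
Bhat_star  = lambda a: 0.5 * 3 / (3 - a) + 0.5 * 7 / (7 - a)
kappa_star = lambda a: 3 * (Bhat_star(a) - 1) - a
lo, hi = 1e-9, 3 - 1e-9                       # kappa* < 0 near 0, -> +oo as a -> 3
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if kappa_star(mid) < 0 else (lo, mid)
print((lo + hi) / 2)                          # gamma = 1, consistent with psi*(u) above
```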

a be the c.5 (see in particular Remark 11.tc* (a)] dv then h (.a) Proof Conditioning upon whether a claim occurs in [t.g. Daykin et.e. i. e.T) = P(8)(r(u) <T).1) .1) dv . of the claim surplus process. with the underlying Markov process {Jt} being deterministic period motion on E = [0.f.178 CHAPTER VL MARKOVIAN ENVIRONMENT of U. 0 (5)(u. we obtain E.al. with some variants in the proofs..a .adt +.g. (6.5.east B(8+t) [a] east . let f 8+1 tc *(a) _ (B* [a] . .(3(s + t)dt)e«St -adt + /3(s + t)dt .8).a) = exp { . t + dt] or not.s .4) At a first sight this point of view may appear quite artificial.1]) .3(v)(B(vl [a] . [101] . 6b Lundberg conjugation Motivated by the discussion in Chapter II. but it turns out to have obvious benefits in terms of guidelining the analysis of the model as a parallel of the analysis for the Markovian environment risk process. To this end. Notes and references The model has been studied in risk theory by.^8 [. 1). given that the ith claim occurs at time t is B(8+t).f.Q(v) (B(„) [a] . and define h(s.(8) [eaSt+dt I7t] = = (1 . J Theorem 6 ..a. a) etw*(a) h(s+t.3(s + t)dt[B(8+t)[a] . 3 E(8)eaSt = h(s.. we start by deriving formulas giving the m.g. and the ruin probabilities are 0(8) (U) = P(s )(r(u) < 00).1) -a = J8 . of the averaged compound Poisson model (the last expression is independent of s by periodicity). [44] (the literature in the mathematical equivalent setting of queueing theory is somewhat more extensive. The claim surplus process {St} may be seen as a Markov additive process. r(u) _ inf It > 0 : St > u} is the time to ruin . As usual. Jt = (s + t) mod 1 P(8) .(1 . The exposition of the present chapter is basically an extract from [44]. a) is periodic on R. Dassios & Embrechts [98] and Asmussen & Rolski [43]. see the Notes to Section 7).

a) as well as the fact that rc = k` (a) is the correct exponential growth rate of Eeast can be derived via Remark 11.9) east-t.(8)east 179 = = = = = E(8)east (1 . it then suffices to note that E(8)Le. a) h(s + t. -at + f log h(s + t.t = 1 by Theorem 6.5 The formula for h(s) = h(s.1)dv - o h(t. a). RISK THEORY IN A PERIODIC ENVIRONMENT E(8)east+ dt d Et.(e) Let = h( h(Jo.9 as follows. St)} and .1)dv l og E(8) et where atetk•(a) h(t. B) eoSt -t. With g the infinitesimal generator of {Xt} = {(Jt.1]) .1]. 0) exist and are finite. h(s + t. a) = h(s. u Remark 6. a) Thus E(8)east = h(s + t.adt +.6. Proof In the Markov additive sense of (6.4).6 .4 For each 0 such that the integrals in the definition of h(t . a) et.. so that obviously {Lo.3(s + t)[D(8 +t)[a] .s. dt log E(8)east -a + f3(s + t) [B(8+t) [a] .log h(s. According to Remark 11.c* (e) {Le.t}t>o = h(s.t. E (8)east (-a +. we can write Lo Jt. St)} . a) = exp I f t3(v)(kv)[a) . + v)(B([a] . a) Corollary 6.3. 9) is a P ( 8)-martingale with mean one.0(s + t)dt[B(8+t)[a] .5.2. 0) P(8)-a.1]) . a) .* (a) h(s.t} is a multiplicative functional for the Markov process { (Jt.

[70] .2. That rc = is*(a) then follows by noting that h(1) _ u h(0) by periodicity. That is. Now define 'y as the positive solution of the Lundberg equation for the averaged model.6 ( s ) exp { 0( s )&s) [a] + tc . -y solves n* (-y) = 0. P(s) (T(u) < oo) = 1 for all u > 0. -yo is determined by 0 = k* (70) = QB*. Equating this to rch (s) and dividing by h(s) yields h(s ) = h(s) = a + . A further important constant is the value -yo (located in (0. of St is as for the asserted periodic risk model. see [44] for 11 a formal proof. say. When a = y.3(s)h(s) + h'(s) +. as above E (s) ha(Jdt.'y).g. it follows by Theorem II.60(t) = a(t)B(t)[0]. 0 < s < 1.3(s)B(s) [a]h(s). cf.(3(s)dt) +. such that for any s and T < oo. 0) = h(s) + dt {-ah(s) -.1.y) = eayh(s).3. St)} with governing probability measures Fes).4. correspond to a new periodic risk model with parameters ex .5 that we can define a new Markov process {(Jt.tc] dv} (normalizing by h(0) = 1). (ii) use Markov-modulated approximations (Section 6c). Bet)(dx) = ^ B(t ) (dx). the restrictions of Plsi and Pest to Ft are equivalent with likelihood ratio Le.T.180 CHAPTER VI. (iv) finally.3(v)( Bi"i [a] .6 The P(s). ( iii) use approximations with piecewiese constant /3(s). Sdt) = h(s + dt) e-adt (1 -. .f.0) = Kh(s). Lemma 6 . MARKOVIAN ENVIRONMENT ha(s. Proof (i) Check that m. Proposition 6. ry)) at which n* (a) attains its minimum.a . B(s). That is.3(s)ks)[a]h(s)} -ah(s) -13(s)h(s) + h'(s) +.1) . the requirement is cha(i. Proposition 6. For each 0 satisfying the conditions of Corollary 6. we put for short h(s) = h(s.7 When a > -yo. J s [. However.3(s)dt • B(s)[a]h(s) = gha(s.

6(v) dv Jo ' xe«xB (°) (dx) r^ xe«xB'(dx) = Q'B' [ a] = ^' J 0 = ^c"'(a) + 1.g.1. B(oo)). The relevant likelihood ratio representation of the ruin probabilities now follows immediately from Corollary 11. has a unique stationary distribution.9(u))} u>0. we need the following auxiliary result . e(cc)) Letting u --> oo in (6. a) e-«uE(8 ) e «^ .2). Here and in the following. 0(u)) -* (b(oo). The proof involves machinery from the ergodic theory of Markov chains on a general state space.9) and noting that weak convergence entails convergence of E f (^(u). the Markov process {(^(u). which is not used elsewhere in the book. a) a > ry0 (6. ^(u) = ST(u) . we get: . and no matter what is the initial season s. 1). xEJ 0 (s)b(8)(x) > 0. 9(u)) for any bounded continuous function (e.9) 0(')(u) = h(s. J C R+ such that the B(8).10) Then for each a. q) = e-ryx/h(q)). considered with governing probability measures { E(8) }E[ .1) the distribution of (l: (oo).9 Assume that there exist open intervals I C [0. and we refer to [44].4. f (x.6.7) h(B(u). have components with densities b(8)(x) satisfying inf sEI. (6.8) (6. Wu).8 The ruin probabilities can be computed as (u)+T(u)k'(a) ^/i(8) (u. u which is > 1 by convexity. T(u) < (6. Lemma 6 . a)e-«uE (a iP(s) (u) = h( s)e-7uE(` ) h(O(u)) To obtain the Cramer-Lundberg approximation from Corollary 3. say s0.u is the overshoot and 9(u) = (T(u) + s) mod 1 the season at the time of ruin. s E I. a) TI h(9(u). the mean number of claims per unit time is p« 181 = Jo 1. Corollary 6.2. RISK THEORY IN A PERIODIC ENVIRONMENT Proof According to (6. T) = h(s.

16.11) gives an interpretation of h(s ) as a measure of how the risks of different initial seasons s vary.10 Under the condition (6. For our basic Example 6 .6 for the Markov-modulated model: Theorem 6 . 11 7/'O (u) < C+°)h(s) e-ry".10) of Lemma 3. Vi(8) (u) . Theorem 6 .1 In contrast to h. MARKOVIAN ENVIRONMENT Theorem 6. where C(o) = 1 + info < t<i h(t) .Ch(s)e-ry". (6.9). At this stage . which may provide one among many motivations for the Markovmodulated approximation procedure to be considered in Section 6c.182 CHAPTER VI. A=1/4 A=1 A=4 0 Figure 6.-W.) C = E1 h(B(oo)) u -+ oo. elementary calculus yields h(s) = exp { A C 2^ cos 2irs - 4^ sin 21rs + 11 cos 41rs .1. Among other things. Noting that ^(u) > 0 in ( 6. where e. it does not seem within the range of our methods to compute C explicitly.1. 10 shows that certainly ry is the correct Lundberg exponent. we obtain immediately the following version of Lundberg ' s inequality which is a direct parallel of the result given in Corollary 3. 1.11) Note that ( 6. 6. this provides an algorithm for computing C as a limit.ir) } Plots of h for different values of A are given in Fig. illustrating that the effect of seasonality increases with A.
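The function h(s) = h(s, γ) and the constant C_+^(0) = 1 / inf_{0≤t≤1} h(t) are easily computed numerically. The sketch below does this for Example 6.1 with A = 1, evaluating the defining integral of h(s, α) by cumulative quadrature; it should roughly reproduce the value C_+^(0) ≈ 1.42 quoted below for A = 1. The grid size is arbitrary.

```python
import numpy as np

# Example 6.1 with A = 1; gamma = 1 is the adjustment coefficient of the averaged model.
gamma = 1.0
beta  = lambda t: 3 * (1 + np.sin(2 * np.pi * t))
w     = lambda t: (1 + np.cos(2 * np.pi * t)) / 2
Bhat  = lambda t, a: w(t) * 3 / (3 - a) + (1 - w(t)) * 7 / (7 - a)   # m.g.f. of B^(t)

# h(s) = h(s, gamma) = exp( -int_0^s [ beta(v)(Bhat^(v)[gamma] - 1) - gamma ] dv ),
# periodic with h(0) = h(1) = 1 because kappa*(gamma) = 0.
v = np.linspace(0.0, 1.0, 10001)
f = -(beta(v) * (Bhat(v, gamma) - 1.0) - gamma)
logh = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(v))))
h = np.exp(logh)

print(abs(h[-1] - 1.0))          # ~ 0, a check on the periodicity of h
C0 = 1.0 / h.min()               # C_+^(0) of the Lundberg-type inequality
print(C0)                        # approx. 1.42 for A = 1

def lundberg_bound(s, u):
    """Upper bound C_+^(0) h(s) e^(-gamma u) on psi^(s)(u)."""
    return C0 * np.interp(s % 1.0, v, h) * np.exp(-gamma * u)

print(lundberg_bound(0.0, 5.0))
```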

1 ) and all u > 0. Theorem 6 . #c( ay) < 0 when y > 1/tc'('y).(s)(u) < C+h(s)e-7". in our basic example with A = 1..13 to our basic example.7x j dx _7x } _ 6w + 6(1 .yr.w)e-4u dx 9w + 7(1 . T) and replace the Lundberg exponent ry by ryy = ay .16 In order to apply Theorem 6. e.17) (6.w ) • 7e u{w • 3e-3x + ( 1 .13) Elementary convexity arguments show that we always have ryy > -Y and ay > ry. We state the results below.15) The next result improves upon the constant C+) in front of e-ryu in Theorem 6. e7 ( y-x)B(t)(dy) > Then for all $ E [0. yu) 000 (u) .(8) (u.13 Let = 1 B(t) C o<tf i h(t) 2no f °O e'r(Y-x)B( t) (dy)' (x) x 1 B(t) (x) C+ = sup sup o<t<i h ( t) xo J. we first note that the function fu° ex-u {w • 3e . ay) • (6. whereas ay < -y.47r sin 27rs + 167r cos 47rs .0(8) (u+ yu) (6.w)e-4u .g. Lundberg's inequality can be con- siderably sharpened and extended.6. Theorem 6. (6. where ay is the unique solution of W(ay) =y• (6.4. we obtain Co) = 1. Just as in IV.w) .12 Let 00)(y) 1 Then info < t<i h(t. C_h(s)e-7u < V.42 so that 183 tp(8) (u) < 1. r.11 as well as it supplements with a lower bound. the proofs are basically the same as in Section 3 and we refer to [44] for details. (ay). RISK THEORY IN A PERIODIC ENVIRONMENT Thus. we substitute T = yu in 0(u.42 • exp {J_ cos 27rs .14) < C+)(y)h(s) e-7yu.12) As for the Markovian environment model. Consider first the time-dependent version of Lundberg's inequality.167r I Cu.7e .3x + (1 . . 1 (6.(ay) > 0 when y < 1/ic' (7).

we have the following result: Theorem 6 .19 } 0 <8<1 8 + cos 21rs Thus e. Then -7oudT . where the environment at time t is (s + t) mod 1 E [0. and let 8 = er' (Y0).T) < C+('Yo)h( s. 0 <'p(8)(u ) -.\ = 0 . . exp 2^ cos 21rs . C+ = 1.I e-u.16) with 'y replaced by -yo and h(t) by h(t.1 sin 27rs + 1 cos 47rs . . with s the initial season.20).4^ sin 2irs + 16^ cos 41rs .\ 3 C+ = sup 6 exp { -A (. completing a cycle .66. n}.9 3 0<8<1 p 27r 47r 167r 161r 2 _ _e.(8)(u.18) Notes and references The material is from Asmussen & Rolski [44]. The idea is basically to approximate the (deterministic) continuous clock by a discrete (random) Markovian one with n 'months'.184 CHAPTER VI. but thereby also slightly longer. Some of the present proofs are more elementary by avoiding the general point process machinery of [44]. 1/i18 1 s (u) > 0.g.66. much of the analysis of the preceding section is modelled after the techniques developed in the preceding sections for the case of a finite E. This observation motivates to look for a more formal connection between the periodic model and the one evolving in a finite Markovian environment. Finally. 1) for the environment). and in fact.19 I e-u. -yo). 14 Let C+('yo) be as in (6.'Yo)e (6. such a deterministic periodic environment may be seen as a special case of a Markovian one (allowing a continuous state space E = [0. the nth Markovian environmental process {Jt} moves cyclically on {1. for A = 1 (where 3 e-0.. Thus C_ = 2 inf ex cos 2irs . 1).013.16. Of course. 6c Markov-modulated approximations A periodic risk model may be seen as a varying environment model.013..1 sin 2irs + 16_ cos 47rs .0. MARKOVIAN ENVIRONMENT attains its minimum 2 /3 for u = oo and its maximum 6 /(7 + 2w) for u = 0. . Thus.181 s(u) < 1.cos 27rs .-L sin 27rs + 1 I cos 47rs .20 •exp { 2n cos 27rs .

within one unit of time on the average, so that the intensity matrix is A^(n) given by

    A^(n) =  [ -n   n   0  ...   0 ]
             [  0  -n   n  ...   0 ]
             [  :             .. : ]
             [  n   0   0  ...  -n ]          (6.19)

Arrivals occur at rate β_ni and their claim sizes are distributed according to B_ni if the governing Markov process is in state i. We let {S_t^(n)}_{t≥0}   (6.20) be the claim surplus process of the nth approximating Markov-modulated model, M^(n) = sup_{t≥0} S_t^(n), and the ruin probability corresponding to the initial state i of the environment is then

    ψ_i^(n)(u) = P(M^(n) > u),          (6.21)

which serves as an approximation to ψ^(s)(u) whenever n is large and i/n ≈ s. We want to choose the β_ni and B_ni in order to achieve good convergence to the periodic model. To this end, one simple choice is β_ni = β((i-1)/n) and B_ni = B^((i-1)/n), but others are also possible.

Notes and references See Rolski [306].

7 Dual queueing models

The essence of the results of the present section is that the ruin probabilities ψ_i(u), ψ_i(u,T) can be expressed in a simple way in terms of the waiting time probabilities of a queueing system with the input being the time-reversed input of the risk process. This queue is commonly denoted as the Markov-modulated M/G/1 queue and has received considerable attention in the last decade. Since the settings are equivalent from a mathematical point of view, it is desirable to have formulas permitting freely to translate from one setting into the other. To this end, let β_i, B_i, Λ be the parameters defining the risk process in a random environment and consider a queueing system governed by a Markov process {J_t*} ('Markov-modulated') as follows:

• The intensity matrix for {J_t*} is the time-reversed intensity matrix Λ* = (λ_ij*)_{i,j∈E} of the risk process, λ_ij* = λ_ji π_j / π_i.

• The arrival intensity is β_i when J_t* = i.
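A sketch of how the nth approximating Markov-modulated model can be set up numerically with the simple choice β_ni = β((i-1)/n), B_ni = B^((i-1)/n) mentioned above; the arrival rate function used for concreteness is the one of Example 6.1 with A = 1, and the resulting matrices could then be fed into matrix-analytic machinery.

```python
import numpy as np

def periodic_to_markov_modulated(beta, n):
    """Cyclic 'n-month' environment (6.19) approximating a periodic risk model.

    Returns the intensity matrix A_n and the arrival rates beta_ni = beta((i-1)/n);
    the claim size distributions would similarly be B_ni = B^((i-1)/n).
    """
    A_n = np.zeros((n, n))
    for i in range(n):
        A_n[i, i] = -n              # leave each 'month' at rate n ...
        A_n[i, (i + 1) % n] = n     # ... moving cyclically to the next one
    beta_n = np.array([beta((i - 1) / n) for i in range(1, n + 1)])
    return A_n, beta_n

beta = lambda t: 3 * (1 + np.sin(2 * np.pi * t))   # Example 6.1 with A = 1
A_n, beta_n = periodic_to_markov_modulated(beta, n=12)

# The environment completes a cycle in mean time n * (1/n) = 1, i.e. one unit of time.
print(A_n[:3, :4])
print(beta_n.round(3))
```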

JT = j} and {VT > u. and the virtual waiting time (workload) process {Vt}too are defined exactly as for the renewal model in Chapter V.1) over j. • The queueing discipline is FIFO.4) where 0* = >jEE 7rj/3j. (7.. T) = 7ri 1 P.T(V > u I J* = i). (7. Jt ). and (7. 2 . Now let In denote the environment when customer n arrives and I* the steady-state limit. just sum (7. Proposition 7.oo in u (7. J* = i). (7.3..P(V > u.2) Oi(u) = -1. J* = i) for all j.0i (u .186 CHAPTER VI.3).2 The relation between the steady-state distributions of the actual and the virtual waiting time distribution is given by F(W > u.1). . I* = i). For (7.2). {Jt }o<t<T• Then we may assume that Jt = JT-t. (VT > u I JT = 2). Taking probabilities and using the stationarity yields 7riPi(T(u) < T. JT = i) = 'P. 0 < t < T and that the risk process {Rt}o<t<T is coupled to the virtual waiting process {Vt}o<t<T as in the basic duality-lemma (Theorem 11.1) follows. Jo = j.1) 7ri In particular. The actual waiting time process 1W-1. JT = j) = 7rjPj(VT > u.=1 . Proof Consider stationary versions of {Jt}o<t<T.1 Assume V0 = 0. J*) is the steady-state limit of (Vt. .. JT = i} coincide. Then Pi(T(u) < T.n(VT > u. let T . In particular. MARKOVIAN ENVIRONMENT • Customers arriving when Jt = i have service time distribution Bi.3) 7ri where (V. Jo = i. JT = j) = LjPj (VT > u.2) and use that limF (VT > u. I* )3i P(V > u. The first conclusion of that result then states that the events {T(u) < T. JT = Z). Proposition 7. JJ = i). J* = i) = P. ii (u) = it /3 P(W > u. (7. JT = i) = P(V > u. and for (7.

B(t) have been periodically extended to negative t).l. and of these. DUAL QUEUEING MODELS 187 Proof Identifying the distribution of (W.7.I. Proposition 7.7) of that paper.5) follows from (7. and further references (to which we add Prabhu & Zhu [296]) can be found there. >u. The first comprehensive solution of the waiting time problem is Regterschot & de Smit [301]. p < 1 then ensures that V(*) = limN-loo VN+9 exists in distribution. with (7. T].1 is from Asmussen [16].8) For treatments of periodic M/G/1 queue. a general formalism allowing this type of conclusion is 'conditional PASTA'. the dual queueing model is a periodic M/G/1 queue with arrival rate 0(-t) and service time distribution B(-') at time t of the year (assuming w. and (7.3).I *=i). A more probabilistic treatment was given by Asmussen [17]..g. Taking the ratio yields (7. N -* oo. see Regterschot & van Doorn [123].6) (7. P(.4) and (7. (7. n=1 N However. on average /32TP(V > u.o.4). we have 1: I(W. [243]. P(1-')(r(u) < oo) = P(')(00) > u).=i) a4. on average 0*T customers arrive in [0. u Notes and references One of the earliest papers drawing attention to the Markovmodulated M/G/1 queue is Burman & Smith [84]. In the setting of the periodic model of Section 6.3) improving somewhat upon (2. a paper relying heavily on classical complex plane methods. J* = i) see W > u. I* = i. P(W >u. I*) with the time-average . if T is large. and Rolski [306]. With {Vt} denoting the workload process of the periodic queue. that /3(t). and one has PI'>(rr(u) < T) = P(-'_T)(VT > u).T)(T(u) <T) = P(8)(VT > u). Lemoine [242]. . The relation (7.7) (7.4) can be found in Regterschot & de Smit [301]. see in particular Harrison & Lemoine [186].


Chapter VII

Premiums depending on the current reserve

1 Introduction

We assume as in Chapter III that the claim arrival process {N_t} is Poisson with rate β, and that the claim sizes U_1, U_2, ... are i.i.d. with common distribution B and independent of {N_t}. As earlier, the aggregate claims in [0, t] are

    A_t = Σ_{i=1}^{N_t} U_i          (1.1)

(other terms are accumulated claims or total claims). However, the premium charged is assumed to depend upon the current reserve R_t, so that the premium rate is p(r) when R_t = r. Thus in between jumps, {R_t} moves according to the differential equation Ṙ = p(R), and the evolution of the reserve may be described by the equation

    R_t = u - A_t + ∫_0^t p(R_s) ds.          (1.2)

As earlier,

    ψ(u) = P( inf_{t≥0} R_t < 0 | R_0 = u ),    ψ(u, T) = P( inf_{0≤t≤T} R_t < 0 | R_0 = u )

denote the ruin probabilities with initial reserve u and infinite, resp. finite, horizon, and τ(u) = inf{t > 0 : R_t < 0} is the time to ruin starting from R_0 = u, so that ψ(u) = P(τ(u) < ∞), ψ(u, T) = P(τ(u) ≤ T).

the payout rate of interest is Sx and absolute ruin occurs when this exceeds the premium inflow p. and the probability of absolute ruin with initial reserve u E [-p/S.4 Either i.p/S) r > p/S p-5(p/5-r) 0<r<p/5 Then the ruin problem for {Rt } is of the type defined above. say at interest rate b.2.'(u)) > 0 so that V'(v) < 1. that {Rt} will reach level u before the first claim arrives. In this situation.p2. there is positive probability. but when the reserve comes above v. Assume 0(u) < 1 for some u. Example 1.e. dividends are paid out at rate pi . Thus at deficit x > 0 (meaning Rt = -x). oo) is given by i (u + p/S). 1 . No tractable necessary and sufficient condition is known in complete generality of the model. i.2 (INTEREST) If the company charges a constant premium rate p u but invests its money at interest rate e. A basic question is thus which premium rules p(r) ensure that 'O(u) < 1. we can put Rt = Rt + p/S. when x > p/S. If Ro = v < u.i(u) = 1 for all u. pi > p2 and p(r) = One reason could be competition.1 Assume that the company reduces the premium rate from pi to p2 when the reserve comes above some critical value v. Proposition 1. where one would try to attract new customers as soon as the business has become reasonably safe. RESERVE-DEPENDENT PREMIUMS The following examples provide some main motivation for studying the model: Example 1 .Vi(v) u > e(1 .190 CHAPTER VII. say e. Hence in terms of survival probabilities. Example 1. rather than when the reserve itself becomes negative. That is. or o(u) < 1 for all u.3 (ABSOLUTE RUIN) Consider the same situation as in Example 1. P(r) _ p + e(r . Proof Obviously '(u) < ilb(v) when u > v. Now return to the general model. but assume now that the company borrows the deficit in the bank when the reserve goes negative. However. it seems reasonable to assume monotonicity (p(r) is u . Another could be the payout of dividends: here the premium paid by the policy holders is the same for all r. we get p(r) = p + er.
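The examples above are easy to experiment with numerically. The following sketch estimates the finite-horizon ruin probability ψ(u, T) by crude Monte Carlo, integrating Ṙ = p(R) between claims by small Euler steps; the two-step rule of Example 1.1 is used purely as an illustration, and the step size and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def psi_MC(u, T, p, beta, claim_sampler, n_paths=2000, dt=0.02):
    """Crude Monte Carlo estimate of psi(u, T) for a premium rule p(r)."""
    ruined = 0
    for _ in range(n_paths):
        r, t = u, 0.0
        t_claim = rng.exponential(1.0 / beta)      # next claim epoch
        while t < T:
            step = min(dt, t_claim - t, T - t)
            r += p(r) * step                       # R' = p(R) between jumps
            t += step
            if t >= t_claim - 1e-12:               # a claim arrives
                r -= claim_sampler()
                t_claim = t + rng.exponential(1.0 / beta)
                if r < 0:
                    ruined += 1
                    break
    return ruined / n_paths

# Example 1.1: two-step premium rule (all numbers illustrative)
p1, p2, v = 2.0, 1.2, 5.0
p_two_step = lambda r: p1 if r < v else p2
beta, delta = 1.0, 1.0                             # Poisson rate, Exp(delta) claims

print(psi_MC(u=3.0, T=20.0, p=p_two_step, beta=beta,
             claim_sampler=lambda: rng.exponential(1.0 / delta)))
```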

Proposition I1I.5) and the process {Vt} has a proper limit in distribution . Here {Vt}two is a storage process which has reflection at zero and initial condition Vo = 0.6) .1. and hence by a geometric trials argument.4) 0 and we use the convention p(O) = 0 to make zero a reflecting barrier (when hitting 0.2 once more.I3IB requires a more detailed analysis and that µB < oo is not always necessary for O(u) < 1 when p(r) -4 oo. (1. In case (b). Let Op(u) refer to the compound Poisson model with the same 0.e. INTRODUCTION 191 decreasing in Example 1. This is basically covered by the following result (but note that the case p(r) . { Vt} remains at 0 until the next arrival).1 and increasing in Example 1. obviously infu<uo z/'(u) > 0. In particular.+ p(r) exists. instead of (1. [APQ] pp. Then if u > no. . Hence ik(u) < 1 for all u by Proposition III. one can couple the risk process and the storage process on [0.b(u. if and only if V)(u) < 1 for all u. Then 0(u) = P(V > u).. say V. that u zPp(u . T] i n such a way that the events {-r(u) <T} and {VT > u} coincide.1. appealing to Proposition 111.6 For any T < oo.2(d).T) = P(VT > u). V = -p(V)). That is. let uo be chosen such that p(r) < p = /3µB for r > uo. cf.f p(Vs) ds.3. let uo be chosen such that p(r) > p = 0I-LB + e for r > uo.4.2) we have t Vt = At . (1. which was proved in 11. Starting from Ro = uo. and P(Rt -+ oo) > 0.2(d)). hence Rt < uo also for a whole sequence of is converging to oo.o(uo) = 1 so that t/'(u) = 1 for all u by Proposition 1. Theorem 1. (1. In case (a).1. However.uo) < 1. B and (constant) premium rate p. 296-297): Theorem 1. (b) If p(r) > /3µB + e for all sufficiently large r and some e > 0. Proof This follows by a simple comparison with the compound Poisson model.2) for r sufficiently large so that p(oo) = limr. we have z/i(u) <p(u . In between jumps.3µB for all sufficiently large r.uo) and. then l/i(u) < 1 for all u.5 (a) If p(r) < /.1. then ?(u) = 1 for all u. We next recall the following results. {Vt} decreases at rate p(v) when Vt = v (i. the probability that Rt < uo for some t is at least tp(0) = 1 (cf.

Oe-ax f x e'Yg (y) dy } = p) e-axa(x) .h.Qw(x) .6x and that w(x) < oo for all x > 0.Sx} dx. Note that it may happen that w (x) = oo for all x > 0. of (1. of (1.192 CHAPTER VII. say if p(r) goes to 0 at rate 1 /r or faster as r j 0.8) Proof In stationarity.8) is the rate of downcrossings (the event of an arrival in [t. we thus need to look more into the stationary distribution G. Considering the cases y = 0 and 0 < y < x separately. (1. and the other being given by a density g(x) on (0. the l. It follows in particular that 0(u) = fg(Y)dy. x] to (x.y. the flow of mass from [0. Corollary 1. say when {Vt} is in state y.y)g(y) dy. An attempt of an upcrossing occurs as result of an arrival. we arrive at the desired interpretation of the r. for the storage process {Vt}. say.6w(x) .6 applicable.Sx}. Then w(x) is the time it takes for the reserve to reach level x provided it starts with Ro = 0 and no claims arrive. x + p(x)dt]).9) Proof We may rewrite (1.h. RESERVE-DEPENDENT PREMIUMS In order to make Theorem 1. yo ^ 1 + oo Q exp {.7 p(x)g(x) = -tofB (x) + a f (x .8) as g(x) = p 1 {yo13e_6x +.7) Proposition 1. In view of the path structure of {V t }. where g(x) = p( ^ exp {. oo). one having an atom at 0 of size 'yo.s. It is intuitively obvious and not too hard to prove that G is a mixture of two components.8 Assume that B is exponential with rate b. say. oo) must be the same as the flow the other way.8) as the rate of upcrossings. B(x) = e. Jo AX) (1. Then the ruin probability is tp (u) = f' g(y)dy.s. and is succesful if the jump size is larger than x . t + dt] can be neglected so that a path of {Vt} corresponds to a downcrossing in [t. Now obviously. u Define ^x 1 w(x) Jo p(t) dt. t + dt] if and only if Vt E [x. this means that the rate of upcrossings of level x must be the same as the rate of downcrossings. (1.
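Corollary 1.8 is convenient for numerical work: for exponential claims, ψ(u) can be obtained for essentially any premium rule with ω(x) < ∞ by computing ω(x) and the density g(x) of (1.9) by quadrature. A sketch follows; the affine rule p(r) = p + εr is only a convenient test case, and the truncation point and grid are arbitrary.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

def psi_exponential_claims(u, p, beta, delta, x_max=200.0, n=100001):
    """psi(u) = int_u^inf g(y) dy with g(x) as in (1.9), B exponential with rate delta."""
    x = np.linspace(0.0, x_max, n)
    px = p(x)
    inv_p = 1.0 / px
    # omega(x) = int_0^x dt / p(t), computed cumulatively by the trapezoidal rule
    omega = np.concatenate(([0.0], np.cumsum((inv_p[1:] + inv_p[:-1]) / 2 * np.diff(x))))
    g_over_g0 = beta / px * np.exp(beta * omega - delta * x)   # g(x) / gamma_0
    gamma0 = 1.0 / (1.0 + trapz(g_over_g0, x))                 # from ||G|| = 1, cf. (1.12)
    mask = x >= u
    return gamma0 * trapz(g_over_g0[mask], x[mask])

# test case: p(r) = p + eps*r, for which Section 2 also gives a closed form
print(psi_exponential_claims(5.0, p=lambda r: 1.0 + 0.1 * r, beta=1.0, delta=1.0))
```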

1. INTRODUCTION
where c(x) = 1o + fo elyg(y) dy so that (x) = eaxg(x) _

193

1
p(x)

nkx).

Thus log rc(x) = log rc(0) + Jo X L dt = log rc(0) + /3w(x), p(t) c(x) = rc (0)em"lxl = Yoes"lxl, g(x) = e-axK' (x) = e-6x ,Yo)3w'(x)e'6"lxl which is the same as the expression in (1.9). That 'Yo has the asserted value is u a consequence of 1 = I I G I I = yo + f g• Remark 1.9 The exponential case in Corollary 1.8 is the only one in which explicit formulas are known (or almost so; see further the notes to Section 2), and thus it becomes important to develop algorithms for computing the ruin probabilities. We next outline one possible approach based upon the integral equation (1.8) (another one is based upon numerical solution of a system of differential equations which can be derived under phase-type assumptions, see further VIII.7). A Volterra integral equation has the general form x g(x) = h(x) + f K(x, y)9(y) dy, 0 (1.10)

where g(x) is an unknown function (x > 0), h(x) is known and K(x,y) is a suitable kernel. Dividing (1.8) by p(x) and letting K(x, y) _ ,QB(x - y) _ 'YoIB(x) p(x) , h(x) p(x) we see that for fixed -to, the function g(x) in (1.8) satisfies (1.10). For the purpose of explicit computation of g(x) (and thereby -%(u)), the general theory of Volterra equations does not seem to lead beyond the exponential case already treated in Corollary 1.8. However, one might try instead a numerical solution. We consider the simplest possible approach based upon the most basic numerical integration procedure, the trapezoidal rule hfxN() dx = 2 [f ( xo) + 2f (xi) + 2f ( x2) + ... + 2f (XN-1) + f (xN)1
p

194

CHAPTER VII. RESERVE-DEPENDENT PREMIUMS

where xk = x0 + kh. Fixing h > 0, letting x0 = 0 (i.e. xk = kh) and writing 9k = 9(xk ), Kk,e = K(xk, xe), this leads to h 9N = hN + 2 {KN,09o+KN,N9N}+h{KN,191+'''+KN,N-19N-1},

i.e. 9 N=

hN+ ZKN ,ogo +h{KN,lgl+•••+KN,N-19N-1} 1 - ZKNN

(

1.11

)

In the case of (1.8), the unknown yo is involved. However, (1.11) is easily seen to be linear in yo. One therefore first makes a trial solution g*(x) corresponding to yo = 1, i.e. h(x) = h*(x) = (3B(x)/p(x), and computes f o' g*(x)dx numerically (by truncation and using the gk). Then g(x) = yog*(x), and IIGII = 1 then yields f 00 g*(x)dx (1.12) 1= 1+ 'Yo from which yo and hence g(x) and z/'(u) can be computed. u

la Two-step premium functions
We now assume the premium function to be constant in two levels as in Example 1.1, p(r) _ J 1'1 r < v P2 r > v. (1.13)

We may think of the risk reserve process Rt as pieced together of two risk reserve processes R' and Rt with constant premiums p1, P2, such that Rt coincide with Rt under level v and with above level v. For an example of a sample path, Rt see Fig. 1.1.

Rt

V

Figure 1.1

1. INTRODUCTION

195

Proposition 1.10 Let V)' (u) denote the ruin probability of {Rt}, define a = inf It > 0 : Rt < v}, let pi ( u) be the probability of ruin between a and the next upcrossing of v (including ruin possibly at a), and let q(u) = 1 - V" (u) Then
1 - q(u) + q ( u)z,b(v) p1(v) u = 0<u<v v

0 < u < v. (1.14)

1 + pi (v ) - '02 (0) pi (u) + (0, (u - v) - pi (u)) z/i(v ) v < u < oo.

Proof Let w = inf{ t > 0 1 Rt= v or Rt < 0} and let Q1 (u) = Pu(RC,, = v) be the probability of upcrossing level v before ruin given the process starts at u < v. If we for a moment consider the process under level v, Rt , only, we get Vil (u ) = 1 - q, (u ) + g1(u),O1( v). Solving for ql (u), it follows that q1 (u) = q(u). With this interpretation of q(u) is follows that if u < v then the probability of ruin will be the sum of the probability of being ruined before upcrossing v, 1 - q(u), and the probability of ruin given we hit v first , q(u)z'(v). Similarly, if u > v then the probability of ruin is the sum of being ruined between a and the next upcrossing of v which is pl (u), and the probability of ruin given the process hits v before (- oo, 0) again after a, (Pu(a < oo ) - p1(u))''(v) = (Vi2(u - v) - p1 (u))''(v)• This yields the expression for u > v, and the one for u = v then immediately follows. u Example 1 .11 Assume that B is exponential, B(x) = e-62. Then
01 (u)

_

0 e -.yiu ,,2 (u) = )3 e -72u p1S P2S
1 - ~ e-ry1u p1S 1 - Q e-ryly P1S

where ry; = S - ,Q/p;, so that

q

-

Furthermore , for u > v P(a < oo ) = 02(u - v) and the conditional distribution of v - Ro given a < oo is exponential with rate S . If v - Ro < 0, ruin occurs at time a . If v - R, = x E [0, v], the probability of ruin before the next upcrossing of v is 1 - q(v - x). Hence

196

CHAPTER VII. RESERVE-DEPENDENT PREMIUMS

( pi(u) _ 02 ( u - v){ a-av + J (1 - q(v - x))be-dxdx 0 I
1- a e- 7i(v -x)

eP2,e 7z(u-v)

1

_

P16 0 1 - a e-7iv P16

Se-6xdx

1 - e -6V Qbe-72(u-v)
P2 1 -

a

e -71v (e(71 -6)v - 1)

1 - p1(71 - b)
Ie-71v P16

p2be- 7z(u-v) 1 _

1 - e-71v a

1 - -e -7iv P '6

0
Also for general phase-type distributions, all quantities in Proposition 1.10 can be found explicitly, see VIII.7.
Notes and references Some early references drawing attention to the model are Dawidson [100] and Segerdahl [332]. For the absolute ruin problem, see Gerber [155] and Dassios & Embrechts [98]. Equation (1.6) was derived by Harrison & Resnick [186] by a different approach, whereas (1.5) is from Asmussen & Schock Petersen [50]; see further the notes to II.3. One would think that it should be possible to derive the representations (1.7), (1.8) of the ruin probabilities without reference to storage processes. No such direct derivation is, however, known to the author. For some explicit solutions beyond Corollary 1.8, see the notes to Section 2 Remark 1.9 is based upon Schock Petersen [288]; for complexity- and accuracy aspects, see the Notes to VIII.7. Extensive discussion of the numerical solution of Volterra equations can be found in Baker [57]; see also Jagerman [209], [210].

2 The model with interest
In this section, we assume that p(x) = p + Ex. This example is of particular application relevance because of the interpretation of f as interest rate. However, it also turns out to have nice mathematical features.
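Before going into the analysis, here is a small simulation sketch of the discounted-integral representation derived below: the r.v. Z = -∫_0^∞ e^{-εt} dS_t of (2.1) reduces to p/ε - Σ_k e^{-εσ_k} U_k, where the σ_k are the claim epochs, and Theorem 2.3 below, combined with H(-R_{τ(u)}) ≥ H(0), yields the simple upper bound ψ(u) ≤ H(-u)/H(0). All numerical values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
p, eps, beta, delta = 1.0, 0.1, 1.0, 1.0      # premium, interest rate, Poisson rate, Exp(delta) claims

def sample_Z(n_paths=20000, t_max=200.0):
    """Z = p/eps - sum_k exp(-eps*sigma_k) U_k, truncating the claim epochs at t_max."""
    out = np.empty(n_paths)
    for i in range(n_paths):
        n = rng.poisson(beta * t_max)
        sigma = rng.uniform(0.0, t_max, n)    # arrival epochs, given their number
        U = rng.exponential(1.0 / delta, n)
        out[i] = p / eps - np.sum(np.exp(-eps * sigma) * U)
    return out

Z = sample_Z()
H = lambda z: np.mean(Z <= z)                 # empirical distribution function of Z

u = 5.0
print(H(-u) / H(0.0))                         # upper bound on psi(u)
```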

2. THE MODEL WITH INTEREST

197

A basic tool is a representation of the ruin probability in terms of a discounted stochastic integral Z = - f e-EtdSt 0 (2.1)

w.r.t. the claim surplus process St = At - pt = EN` U; - pt of the associated compound Poisson model without interest . Write Rt") when Ro = u. We first note that: Proposition 2.1 Rt") = eetu + Rt°) Proof The result is obvious if one thinks in economic terms and represents the reserve at time t as the initial reserve u with added interest plus the gains/deficit from the claims and incoming premiums. For a more formal mathematical proof, note that

dR(u) = p + eR(u) - dAt,
d [R(") - eetu] = p + e [R(u) - eEtu] - dAt . Since R( ;u) - eE'0u = 0 for all u, Rt") - eEtu must therefore be independent of u which yields the result. 0 Let

Zt = e-etR(0) = e-et (ft (p + eR(°)) ds - At I
Then dZt = e -Et (_edt

f t (p + eR°) ds + (p + eR°)) dt + e dt A- dA
v Z,, = - e-etdSt,

= e_et (pdt - dAt) = -e-EtdSt. / Thus 0 where the last integral exists pathwise because {St} is of locally bounded variation. Proposition 2.2 The r.v. Z in (2.1) is well-defined and finite, with distribution H(z) = P(Z < z) given by the m.g.f.

H[a] = Ee" = exp
where k(a) _

(-ae-Et) dt} = exp {f °° k

k

{fa

(-y) dy}

13(B[a] - 1) - pa. Further Zt a ' Z

as t --+ oo.

198

CHAPTER VII. RESERVE-DEPENDENT PREMIUMS

Proof Let Mt =At -tAUB. Then St = Mt+t(/3pB-p) and {M„} is a martingale. e-EtdMt} From this it follows immediately that {fo is again a martingale. The mean is 0 and (since Var(dMt) = /3PB2)dt)

Var (

Z

'

e-'tdMt )

J e- eft/3p(B)dt = a2B (1 - e-2ev). o

/' v

(2)

Hence the limit as v -3 oo exists by the convergence theorem for L2-bounded martingales, and we have v
Zv =

v
e-EtdSt = -f e-t(dMt + (,3pB - p)dt)
o o

-

0 - f0"

J

a'
0 - f 0 oo

e-Et

(dMt + (3p$ -

p)dt)

e-EtdSt = Z.

Now if X1i X2, ... are i.i.d. with c.g.f. 0 and p < 1, we obtain the c .g.f. of E0° p'Xn at c as
00

00

00

log E fl ea°n X„
n=1

= log 11 e0(av ") _
n=1

E 0(apn). n=1

Letting p = e-Eh, Xn = Snh - S( n+1)h, we have q5(a) = hic(- a), and obtain the c.g.f. of Z = - f0,30 e-'tdSt as 00 00 00 lim E 0(apn ) = li h E rc(-ae -Fnh) = f tc (-ae-t) dt;
n=1 1 n=1 0

the last expression for H[a] follows by the substitution y = ae-Et Theorem 2.3 z/'(u) = H(-u) E [H(-RT(u)) I r(u) < oo] .

u

Proof Write r = r(u) for brevity. On {r < oo }, we have
-

u + Z =

(u + Zr ) + ( Z - Zr) = e

ET {e

(u + Zr) - f '* e-E(t-T )dSt] T

e-

ET [

R( u)

+ Z`],

2. THE MODEL WITH INTEREST

199

where Z* = - K* e-E(t-T)dSt is independent of F, and distributed as Z. The last equality followed from Rt") = eEt(Zt + u), cf. Proposition 2.1, which also yields r < oo on {Z < -u}. Hence H(-u) = P(u + Z < 0) = P(RT + Z* < 0; r < oo) zb(u)E [P(RT + Z* < 0 I)7T, r < oo)] _ O(u)E [H(-RT(")) I r(u) < oo] .

Corollary 2.4 Assume that B is exponential, B(x) = e-6', and that p(x) _ p + Ex with p > 0. Then
. o€Q/E -Ir, (8(p + cu);

V) (u)

aA/Epal Ee -6n1 E +^3E1 / E

1\ E E

1r

Cbp;

E El al

where 1'(x; i) = f 2°° tn-le-tdt is the incomplete Gamma function. Proof 1 We use Corollary 1.8 and get

w(x) fo P + Etdt = g(x) = p +0x

e log(p + Ex) - e loge,

exp { - log(p + Ex) - - log p - 6x }

pal(p + ex)plE-1e-6^ J ryo)3 70 = 1 + J p) exp {Ow(x) - Sx} dx x r^ = 1+ ' /E (p + Ex)01'-le-ax dx + 0

f J

= 1+

a
Epo/ E

f yI/ E- 1e- 6(Y -P)/E dy
P (

1+ OEA/E- 1e6 P /Er
60/e po/ e

,;,3 )
E E

lp(u) = -to foo a exp {w(x) - bx} AX)
acO/E" 1 ePE l

Yo

50 1epolE

(

+ cu); 0)

5(p

E E

f.200 CHAPTER VII.x) dx e.3a/ (5 . . where V is Gamma(b.3/E) By the memoryless property of the exponential distribution.2) follows by elementary algebra.V.g. The process {St} corresponds to {-Wt} so that c(a) or2a2/2 .e. assume that {Wt} is Brownian motion with drift µ and variance v2. RESERVE-DEPENDENT PREMIUMS u from which (2.3.a) . it follows that logH[a] = f 1 c(-y)dy = 1 f '(p-a/(a +y))dy f 0 0 Ey R/E 1 [pa + )3log 8 .pa. of Z is IogH[a] = f ytc(-y)dy = e fa (0.3 is also valid if {Rt} is obtained by adding interest to a more general process {Wt} with stationary independent increments. then {Rt} is the diffusion with drift function p+Ex and constant variance a2. -RT(u) has an exponential distribution with rate (S) and hence E [H(-RT(u))I r(u) < oo] L Pe-6'r (P/C . i.b P/E dx /' P/ ' (p/ - x)p/e -150/f I' (/3/E) (6P1'E. and the c.2y +µ ) dy .V < x)]0 + f P(V > p/E ) + e-by fv (p/E .2) follows by elementary algebra.5 The analysis leading to Theorem 2./3 log(b + a)] = log ePa/f (a + a ) e which shows that Z is distributed as p/E .01'E) + (p/E)al aO l fe-bP/E } IF (0 /0 jF From this (2. Proof 2 We use Theorem 2.pa. with density x(3/e-1aQ/e fV (x) _ e -6X ' x > 0.13 /E) r (. From ic(a) = . H(-u) = P(Z r < -u) = P(V > u + p/E) = (8(p + Eu)/E. As an example. /^ u Example 2 . r (j3/E) In particular. 13/E).
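Returning to Corollary 2.4, the closed form is easy to evaluate with the regularized upper incomplete Gamma function Q from SciPy, writing Γ(x; η) = Γ(η) Q(η, x). The parameter values below are illustrative, and the result can be cross-checked against the quadrature based on Corollary 1.8 with p(x) = p + εx.

```python
import numpy as np
from scipy.special import gammaincc, gamma as Gamma

def Gamma_inc(x, eta):
    """Incomplete Gamma function Gamma(x; eta) = int_x^inf t^(eta-1) e^(-t) dt."""
    return gammaincc(eta, x) * Gamma(eta)

def psi_interest(u, p, eps, beta, delta):
    """psi(u) for p(x) = p + eps*x and B exponential with rate delta, cf. Corollary 2.4."""
    eta = beta / eps
    num = beta * eps ** (eta - 1) * Gamma_inc(delta * (p + eps * u) / eps, eta)
    den = (delta ** eta * p ** eta * np.exp(-delta * p / eps)
           + beta * eps ** (eta - 1) * Gamma_inc(delta * p / eps, eta))
    return num / den

# same parameters as in the Corollary 1.8 quadrature sketch, for comparison
print(psi_interest(5.0, p=1.0, eps=0.1, beta=1.0, delta=1.0))
```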

write Vi* (u) for the ruin probability etc. Gerber [157] p. Logarithmic asymptotics For the classical risk model with constant premium rule p(x) . Z is normal (p/E. It must be noted. Corollary 2. as in the proof of Proposition 2.p*. Paulsen [281]. see e. Further studies of the model with interest can be found in Boogaert & Crijns [71]. THE LOCAL ADJUSTMENT COEFFICIENT _ Q2a2 pa 4e E 201 I. 134 (the time scale there is discrete but the argument is easily adapted to the continuous case). and since RT = 0 by the continuity of Brownian motion. [282].g.-Y*p* W*(u) < e-ry*u = 0. that the analysis does not seem to carry over to general phase-type distributions. Goldie & Griibel [167]. of the form Ei° p"X" with the X„ i. [129] and Harrison [185]. The solution is in terms of Bessel functions for an Erlang(2) B and in terms of confluent hypergeometric functions for a H2 B (a mixture of two exponentials). or to non-linear premium rules p(•).. Paulsen & Gjessing [286] found some remarkable explicit formulas for 0(u) beyond the exponential case in Corollary 1. Gerber [155]. and recall Lundberg 's inequality .1) .2 is a special case of a perpetuity.e.8. Some of these references also go into a stochastic interest rate. Emanuel et at. se e. [357].3 is from Harrison [185].3) was derived by Emanuel et at. [129]. not even Erlang(3) or H3. it follows that the ruin probability is Cu) H(-u) H(0) 11 Notes and references Theorem 2. 3 The local adjustment coefficient. Q2/2E).i. [283].v. it is also used as basis for a diffusion approximation by these authors. Paulsen & Gjessing [286] and Sundt & Teugels [356]. Delbaen & Haezendonck [104]. The formula (2. A r.g. write y* for the solution of the Lundberg equation f3(B[ry *] .3.d.4 is classical.. for a martingale proof. however.

a) = f3(B[a] . and that p(x) -* oo.1) . x>0 (3.E).2) such that p(x) < c(. i.C*e--f*". a first step is the following: Theorem 3 . RESERVE-DEPENDENT PREMIUMS and the Cramer-Lundberg approximation V... the function -y(x) of the reserve x obtained by for a fixed x to define -y(x) as the adjustment coefficient of the classical risk model with p* = p(x). i. e(1o+e)2 (x ) u -> 00.>o 7(x) > 0. Then we have the following lower bound for the time for the reserve to go from level u to level u + v without a claim: w(u + v) . x -* oo.4) we assume existence of -y(x) for all x. choose uo such that p( x) > p* when x > u0E. then log u (u) In the proof as well as in the remaining part of the section . (3.*(u) .3) When trying to extend these results to the model of this chapter where p(x) depends on x. When u > uo. oo for all E > 0. as solution of the equation n(x. Let y* < So. c(.log '(u)/u < -ry*(1 . The steepness assumption and p(x) -+ oo ensure 'y(x) -* So. obviously O(u) can be bounded with the probability that the Cramer -Lundberg compound Poisson model with premium rate p* downcrosses level uE starting from u .'y ( x)) = 0 where r.1 Assume that for some 0 < 5o < oo. log ?i(u) < < 00 -JO . (3. Then lim sup u->oo u and e -E''p(r) -+ 0.e.1 ).5) which implies inf. as will hold under the steepness assumption of Theorem 3. choose c(. The intuitive idea behind introducing local adjustment coefficients is that the classical risk model with premium rate p* = p(x) serves as a 'local approximation ' at level x for the general model when the reserve is close to x.1. Letting first E -* 0 and next ry * T 5o yields the first statement of the theorem. and (for simplicity) that inf p(x) > (3µs . we will use the local adjustment coefficient 'y(x). which in turn by Lundberg's inequality can be bounded by e-ry*(1-E)" Hence limsup„. For the last asssertion .202 CHAPTER VII. it holds that f3[s] T oo. 1) and for a given E > 0.e.ap(x). B(x) > C(2)e-(ao+f)x for all x. let p* be a in (3. If 60 s f 6o.w (u) J dt > c(3)e-eu v 1 p(u+ t) .1. Proof of Theorem 3. (x.i)eex.
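The quantities entering these results are easy to compute numerically: γ(x) is found by root-finding in the local Lundberg equation β(B̂[γ] - 1) - γ p(x) = 0 for each x, and I(u) = ∫_0^u γ(x) dx by quadrature, giving the bound/approximation e^{-I(u)}. The sketch below uses exponential claims, for which γ(x) = δ - β/p(x) (cf. Example 3.4 below) provides a check; the premium rule is illustrative.

```python
import numpy as np

beta, delta = 1.0, 1.0
Bhat = lambda a: delta / (delta - a)           # m.g.f. of B (exponential claims)
p = lambda x: 1.2 + 0.1 * x                    # illustrative premium rule with p(x) -> oo

def gamma_local(x):
    """gamma(x): positive root of beta*(Bhat[g] - 1) - g*p(x) = 0."""
    kappa = lambda g: beta * (Bhat(g) - 1.0) - g * p(x)
    lo, hi = 1e-12, delta - 1e-12              # kappa < 0 near 0, -> +oo as g -> delta
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if kappa(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

u = 10.0
x = np.linspace(0.0, u, 2001)
gam = np.array([gamma_local(xi) for xi in x])
I_u = float(np.sum((gam[1:] + gam[:-1]) / 2 * np.diff(x)))     # I(u) = int_0^u gamma(x) dx

print(np.max(np.abs(gam - (delta - beta / p(x)))))   # agrees with gamma(x) = delta - beta/p(x)
print(I_u, np.exp(-I_u))                             # bound e^(-I(u)) of Theorem 3.2
```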

(u) = O(u/e). Theorem 3. UJU > x cannot have a much heavier tail than the claim U itself.2 is also an approximation under appropriate conditions.2). The first main result in this direction is the following version of Lundberg's inequality: Theorem 3 . The slow Markov walk limit is appropriate if p(x) does not vary too much compared to the given mean interarrival time 1/0 and the size U of the claims.4)e-E" Given such an arrival.e. Then lim-elog l/ie (u) = I(u).13 is a technical condition on the claim size distribution B. u Obviously.1 only presents a first step. However.3 Assume that either (a) p(r) is a non -decreasing function of r. 3) = (1 . and hence '(u) > c(4)e-euc( 2)e-(do+e)u The truth of this for all e > 0 implies lim inf log V. the limit is not u -+ oo but the slow Markov walk limit in large deviations theory (see e. Bucklew [81]).g.13 below holds.0 are the same.6) The second main result to be derived states that the bound in Theorem 3. or (b) Condition 3.' (u) < e-I("). The rest of this section deals with tail estimates involving the local adjustment coefficient. For e > 0. which essentially says that an overshoot r. 2 Assume that p(x) is a non-decreasing function of x and let I(u) = fo ry(x)dx.(u) > -so. Then .. (3.7) CIO Remarks: 1. one can then assume that e = 1 is small enough for Theorem 3.e-a°/(ecf1)). Therefore the probability that a claim arrives ( before the reserve has reached level u + v is at least c(. The form of the result is superficially similar to the Cramer-Lundberg approximation. . noting that in many cases the constant C is close to 1.3 to be reasonably precise and use e` (u) as approximation to 0 (u). 2. {Rte)} defined as in (1. I. (3. Theorem 3 . and in particular. 3. the result is not very informative if bo = oo. let 0e (u) be evaluated for the process only with 3 replaced by /0/e and U.3. Condition 3. then Rte) = CRtie for all t so that V). the asymptotics u -* oo and c -. If p(x) = pis constant . ruin will occur if the claim is at least u + v. THE LOCAL ADJUSTMENT COEFFICIENT 203 where c. by cU2.v.

2.bx} dx oo exp low(x) bx dx 70 Ju r oo = b J exp low (x) .(3/p(x). RESERVE-DEPENDENT PREMIUMS 4. we get = 1+ J" AX) exp {(3w (x) .8.8 in terms of I(u) when the claims are exponential: Example 3 . it is formally needed only for Theorem 3.4 Consider again the exponential case B(x) = e-ax as in Corollary 1. the logaritmic form of (3.3.bx} dx .(x) dx. and r j 1 'Yo v(x)dx = bu - a J0 p(x)-ldx = Integrating by parts. Then y(x) = b . 5.bx} dx fo 00 1 + [exp {/(3w(x) . One would expect the behaviour in 2) to be important for the quantitative performance of the Lundberg inequality (3. rather than e-I(u)). J0 ^oo g(x ) dx f AX) lexp IOW (X ) bx + b u 1 exp low(x) .204 CHAPTER VII. As typical in large deviations theory.6). u .(iw(x) .exp {/33w(u) .bx} dx 1+0.3. However.bu}. we consider some simple examples.1 + b f e-. 3a Examples Before giving the proofs of Theorems 3.7) is only captures 'the main term in the exponent' but is not precise to describe the asymptotic form of O(u) in terms of ratio limit theorems (the precise asymptotics could be logI(u)e-1(U) or I(u)"e_I(u). 3. First.bx}]o + b /' oo exp low (x) . we show how to rewrite the explicit solution for ti(u) in Corollary 1.bx} dx = 1 + J0 dodx(x) exp {. say.

(3.3 in the particularly simple case of diffusions: Example 3.I ( v )dy fo +u) dxdy .10) where AE = e log 000 e.e-v 0 O /E) J0 70 70 Yo This implies lim inf A. and (3.fo 7(x) dx /E dy > a-v 'yo /Edy = E (1 . The appropriate definition of the local adjustment coefficient 7(x) is then as the one 2p(x)la2(x) for the locally approximating Brownian motion.1.8) 7(x)dxdy 1-1 We next give a direct derivations of Theorems 3.f y(x)dxd y If 7(x) is increasing .9 ) 11000 e-I(v)dy f000 e. 3.3. applying the inequality 7(x + u) > 7(x) yields immediately the conclusion of Theorem 3. (u) = I(u) + AE .fa 7(x+u)dx/Edy o The analogue of (3.fo 7(x)dx/Edy f . in the definition of AE converges to 0. 0.2(X) = ev2(x) so that 7e(x) = 7(x)/e. IE(u) = I(u)/e. note first that the appropriate slow Markov walk assumption amounts to u.e. ry(x /b -I u o e -f0 °° e - e.10 or Karlin & Taylor [222] pp. It is well known that (see Theorem XI. and (3.BE.0.1/8 .7) follows.2.I ( u) fool. Be = e log U000 e. the integral is bounded by 1 eventually and hence lim sup AE < lim sup a log 1 = 0.2. we get r 00 e.. Similarly.9) yields -e log .5) is infx>o 7(x) > 0 which implies that f °O .3.fory(x+u)dxdy ( 3..5 Assume that {Rt} is a diffusion on [0. oo) with drift µ(x) and variance a2 (x) > 0 at x. 191-195) that 1P (U) = fu0 e-I(v)dy = e-I(u) follo e. 70 > 0 such that 7(x) < 7o for y < yo. Choosing yo. (X) = µ(x). (3. BE -* 0. For Theorem 3. THE LOCAL ADJUSTMENT COEFFICIENT and hence 205 f°° e-I(v )dy . > lime log e = 0 and AE -* 0. In particular. u .

I.0) = 0.5) and 7* = 5 -..Q/p*. ) Note that this expression shows up also in the explicit formula for lk(u) in the form given in Example 3.0/e. _ .7) follows just as in Example We next investigate what the upper bound / approximation a-I (°) looks like in the case p(x) = a + bx (interest) subject to various forms of the tail B(x) of B. . we have 5 > 7o and get lim inf AE > lime log e .0. As in Example 3. 7(x) is typically not explicit. ./3 1 AX dx. this leads to (3. E-+o e-*O By (3. + Gq(u) + o(G9(u))• Gi (u) It should be noted .4.5 for risk processes with exponential claims is as follows: Example 3 .6/p* so that u 1 I (U) = bu . G. > . . G. G.6 Assume that B is exponential with rate S. so our approach is to determine standard functions Gl (u)...206 CHAPTER VII. Ignoring 1/5 in the formula there.10) holds if we redefine AE as AE = flog (j °° efo 7(x)dx/edy _ E/5 I and similarly for B. Then the solution of the Lundberg equation is -y* = b . however .. Nevertheless . 0. RESERVE-DEPENDENT PREMIUMS The analogue of Example 3.7o C 15 I I. the slow Markov walk assumption means 5E = b/c.1 3. (u) representing the first few terms in the asymptotic expansion of I(u) as u -+ oo.+1 (u) = o( 1). the slow Markov limit a -* 0 and the limit u walk approximation deteriorates as x becomes large.e.5. that the interchange of the slow Markov walk oo is not justified and in fact. . Thus 7e(x) _7(x)/e and (3.5. . Of course.6) exactly as in Example 3. the results are suggestive in their form and much more explicit than anything else in the literature. Further.5.(u) oo. lim sup Af < lim sup c log(1 . 0 Now (3. I(u ) = G1(u) + .

x T 1. . and hence (3. y = 2 if B is uniform on (0. It follows from (3.3.. Here B[s] is defined for all s and B[s] . 1.12) with y > 1.1) leads to (S-7T N Ocp a. more generally.c3 logu a= 1 J 0 a + bx 1/ ( c4ul -1/° a > 1 where c3 = c2 /b.clxa-le-5x 207 (3. c4 = c2b -1/'/(1 . This covers mixtures or convolutions of exponentials or. fu I(u) Su . ry* loge*+ g7loglogp*. Hence (3. phase-type distributions (Example 1.11) that b[s] -* co as s f S and hence 7* T S as p* -+ oo.ry*°p*. THE LOCAL ADJUSTMENT COEFFICIENT Example 3 .8 Assume next that B has bounded support. I(u) Pt.y/s)dy sn -1 -1 f ' e-vy'7-ldy = cse8r(T7) as s T oc. u(logu + r7loglogu).11) with a > 0. say 1 is the upper limit and B(x) . For example.1) leads to . B[s] = 1 + s exB(x)dx = 1 +c1SF(a) ('+o(')) (S .7 Assume that B(x) . if the phase generator is irreducible ( Proposition VIII. u Example 3 . More precisely.c2 Su a dx ) Su a<1 Su .:.C2p* C2 = (3clr( a))11'.Y . in the phase-type case . 2. 1) and 17 = k + 1 if B is the convolution of k uniforms on (0. (3. 77 = 1 if B is degenerate at 1.cs(1 . e.g.3cse7*I7(77) .4) or gamma distributions.8).s)C' f "o o as s T S.1 =$ f cse8 Sn f e"B(x)dx = e8 Jo s e-IB ( 1 . .1/a).x)n-1.1/k). the typical case is a = 1 which holds .

4) of the local adjustment coefficient is not the only possible one: whereas the motivation for (3.u .10 Assume that p(x) is a non-decreasing function of x.(T1)) > Ee7o(u)(ul+v-r»(Ti)).12). h 10.9 As a case intermediate between (3.f.4) is the formula h logEues ( Rh-u) .14) for the m . 1 0 3e. Hence for u<V. g.g. this is only possible if 7o(v) 2 7o(u)• .u is a non-decreasing function of u.ru(TI)) .3 (B[s] .15) Proposition 3. of U1 + v . Then: (a) -y(x) and 7o(x) are also non-decreasing functions of x.13) We get b[s] .2 We first remark that the definition (3. RESERVE-DEPENDENT PREMIUMS Example 3 . of the increment in a small time interval [0.. e-c78)2/2c7 dx C7 . 7 * .Ul up to the first claim (here ru (•) denotes the solution of i = p (r) starting from ru(0) = u). I (u) c8u log u 0 where c8 = 2/c7. h]. . (b) 'y(x) <'Yo(x)• Proof That 7(x) is non-decreasing follows easily by inspection of (3.1 Cgs o"O 0 esxe-x2/2c7 dx = cgsec782/2 f .208 CHAPTER VII.1) . By convexity of the m .c8 log . x f oo .4).css 2%rc7eC782/2.(t))dt. This leads to an alternative local adjustment coefficient 7o(u) defined as solution of 1 = Ee''o(u)(vi+u . (3. 1 = E.sp(u).f.Ote7o( u)(u.r„(Ti).•.e7o ( u)(ul+u -r. (3. 3b Proof of Theorem 3. one could also have considered the increment ru (T1) . assume that B(x) CO -x2/2c7. (3.B[7o (u)] .11) and (3.r^. The assumption implies that ru(t) .log p*.

es'Yo(u)Fu(dx)} o0 e- fo -yo( x)dx j.2 in terms of 7o. Hence 1 = Ee-Yo(u)(U1+u-ru(T1)) < E. THE LOCAL ADJUSTMENT COEFFICIENT For (b).10(b): Theorem 3. this is only possible if -yo(u) > 7(u). The case n = 0 is clear since here To = 0 so that ik(°)(u) = 0.(n+l) (u) 1 .u > tp(u). note that the assumption implies that ru(t) .16) Proof Define 411(n)(u) = P('r(u) < on) as the ruin probability after at most n claims (on = TI + • • • + Tn).7o (u)p(u)• Since (3. We shall show by induction that (' Y'(n) (u) < e- fo 'Yo(x)dx (3. (3.17) shown for n and let Fu(x) = P(U1 + u . Also. Then (u) < efo Yo(x)dx. Hence „/.1) .u[70(u)] fo e-yo(x)dx .17) from which the theorem follows by letting n -+ oo.x)Fu(dx) 00 U efo J = o (y) dYF (dx) )+f I 11 /' / 00 e f oFu fu dx) + of u :7o(Y)dYFu(dx) 00 J u 1 l` Considering the cases x > 0 and x < 0 separately.e70(u)(U1-P(u)T1) 209 0 + 7o(u)p(u)' 0 <_ 00['Yo( u)] .Fu(u ) + J  ^(n)(u .4) considered as function of 7 is convex and 0 for -y = 0. Assume (3. the case of 7 then follows immediately by Proposition 3.11 Assume that p(x) is a non-decreasing function of x.(n+1) (u) e-fo Yo(x)dxI^"Q exyo( I u u)Fu(dx )+ J .ru(T1) < x). it is easily seen that fu x7o(y)dy < x-yo (u).3. we obtain „I. u We prove Theorem 3. Separating after whether ruin occurs at the first claim or not. fa 7o(y)dy < u7o(u) < x-yo (u) for x > u.

let Op*. Lemma 3.x/n.15). in . x + x/n] by two classical risk processes with a constant p and appeal to the classical results (3. {RtE)} (starting from u = un. Proof For ruin to occur.3 is required. Further. However.nbe C*. Let Ck. 3.E (u/n) Now as e .12 lim sup4^o -f log O. define uk.e (u) = v'.E (u) denote the ruin probability for the classical model with 0 replaced by .10(b ) that the bound provided by Theorem 3. (3.n.2..3). (u) < I(u).2.n u k}1. we have chosen to work with -y(u) as the fundamental local adjustment coefficient..n) > k =1 II v ^k n.n AX). Also.E (u/ n) Y'E (un .: y(u). for either of Theorems 3. P k. and.. yo(u) appears more difficult to evaluate than y(u).11 be reasonably tight something like the slow Markov walk conditions in Theorem 3. For these reasons.10(a) for some of the inequalities.n.n <Z auk}l.11 is sharper than the one given by Theorem 3.n inf n uk-1.3). 0 It follows from Proposition 3. To this end. by €U=.n) pn niE (u /n) n n_1 n.3/e and U.n = sup p(x). (u). pk n = uk_l. -Y*u /E. y* evaluated for p* = Pk.n so that n. we used also Proposition 3. resp.E ( u/n) ^•e. 3c Proof of Theorem 3. and here it is easily seen that yo(u) .E (u/n). ryk.n. (un-2.210 CHAPTER VII.3 The idea of the proof is to bound { R( f) } above and below in a small interval [x ..n (starting from u/n) without that 2u/n is upcrossed before ruin. op*. RESERVE-DEPENDENT PREMIUMS where the last identity immediately follows from (3. in accordance with the notation i/iE (u).. the probability that ruin occurs in the Cramer-Lundberg model with p* = pn. given downcrossing occurs.. 0.2).n = ku. The probability of this is at least n n. the value of {R(E)} at the time of downcrossing is < un-l.I. C*e- where the first equality follows by an easy scaling argument and the approximation by (3.n) must first downcross un-l.n. 0. W O .

F (2u/n). k=1 k=1 n u _ nE7 k. Indeed . we need the following condition: Condition 3.E (urn) < \ *I.a( u)z.n <X<Uk.n . since ry' is an increasing function of p'.nu/en(1 + where o(1) refers to the limit e .e.. /' (u/n) -'T nk. 40 Combining with the upper bound of Lemma 3.E (u/n) OP- +^p•.7k. i.. 0 with n and u fixed.12 completes the proof.i.nk=1 limsup-elogv). . B(x) (3.. in obvious notation one has -tC (x) = y(x)/e. (3. for all x .n.! (u/n) n n m 7k.18) (ii) the family of claim overshoot distributions is stochastically dominated by V. 11 Theorem 3. It follows that n -log V'C (u) k =1 log Ypk. y > 0 it holds that F(U>x +yIU>x) B(x + y) < F (V > y).n cE (2u/n) Ck ne-7k. ne-7k..log Ck.n + 0(1).19) .E (u/n) -Op•. (u) CIO < Letting n -4 oo and using a Riemann sum approximation completes the proof.nu /fn( 1 Ck - e. *p•.3. 3 now follows easily in case (a).n = sup ?'(x). 211 Clearly. so that Theorem 3. uk_1. v. also ryk. V < oo such that (i) for any u < oo there exist Cu < oo and a (u) > supy <„ 7(x) such that P(V > x) < Cue. THE LOCAL ADJUSTMENT COEFFICIENT particular.13 There exists a r.2 gives 7PE (u) < e-Ii"i/f = lim inf -Clog 0E (u) > I (u). In case (b).nu /En) o(1)).

( . RESERVE-DEPENDENT PREMIUMS To complete the proof.R<) (u v).eV) • P (T(E) (u. Then Y'E (u) ^(E) (u. Write EO. (R. For E2.^'' = E [ . u/n)) I T(E) (u. The probability of ruin in between two downcrossings is bounded by Epp .. (3.5) and the standard formula for b(0).v.E (2u/n . u/n) < oo] .212 CHAPTER VII. v ) = v .nu /EnE [e71. let v < u and define T(E) (u. v ) = inf { t > 0 : R(c ) < v R) = u } . u /n) < oo] l = = < E [OE (u/n .2-y 1 ' . u/n)) .EV) = e.E (0) cf. u/n) < oo) EV). Ei + E2 < e-71. T(E) (u..n V..E(E) (u.V) = e-71 nu/Eno(l) (using (3.E (u/n . (u/n .^(E) (u.of:>2 in n(x). u/n) < oo) . V < u/En] + P(V > u/En) (u/En . u/n) < oo] E [OE (u/n . .EV) = El + E2.1 n.EV) = EiI 1 .nu/En0(1) .. infx>2u /n P(x) . Then the standard Lundberg inequality yields El < E?. (u/n . we first note that the number of downcrossings of 2u/n starting from RoE) = 2u/n is bounded by a geometric r. ) (u u /n)) . u /en 0(i) _n so that E2 < e-2ryl nu/En0(1).QEU 1 .n < e-ry1. N with EN < 1 = infx>2u/nA(x) = 0(1). where El is the contribution from the event that the process does not reach level 2u/n before ruin and E2 is the rest. T() (u. P (T(E) (u.18) for the last equality).

Whereas the result of [122] is given in terms of an action integral which does not look very explicit. [89].21) to pass from u to 0.) = exp .7(x)) (3. s). the approximation (3. they also discuss simulation based upon 'local exponential change of measure' for which the likelihood ratio is ( /'t /'t Ns Lt = exp S .4) and the prime meaning differentiation w.21) (the initial condition is r(0) = u in both cases). Djehiche [122] gives an approximation for tp(u. whereas the most probable path leading to ruin is the solution of r(x) _ -k (x. l o JJJ o . it might be possible to show that the limits e . s) as in (3. where the key mathematical tool is the deep Wentzell-Freidlin theory of slow Markov walks (see e . one can in fact arrive at the optimal path by showing that the approximation for 0(u.3EU) (3. . Similarly.7) for ruin probabilities in the presence of an upper barrier b appears in Cottrell et al. 0 and b T 00 are interchangeable in the setting of [89].20) (with ic(x. T) is maximized over T by taking T as the time for (3. Bucklew [81]). Comparing these references with the present work shows that in the slow Markov walk set-up.-)Ui } .13. THE LOCAL ADJUSTMENT COEFFICIENT Hence lim inf -e log Ali.7) then comes out (at least heuristically) by analytical manipulations with the action integral.u/n) < oo) CI - > u n n ryi n' i=1 Another Riemann sum approximation completes the proof. u Notes and references With the exception of Theorem 3.J -r(Rs)p(R. (u) 40 213 lim inf -e log(Ei +E2) + logP (r(`) (u. Typically.t.r.1. u/n) < oo { 40 )I U nryl n+liminf-elogP (T(')(u.J y(Rs-)dR.g.)ds + -Y(R2. the results are from Asmussen & Nielsen [39].3.T) = P „(info<t <T Rt < 0) via related large deviations techniques. 0 ) (= p(x) -. the risk process itself is close to the solution of the differential equation r(x) _ -r (x.=1 J An approximation similar to (3. the rigorous implementation of these ideas via large deviations techniques would require slightly stronger smoothness conditions on p(x) than ours and conditions somewhat different from Condition 3.

RESERVE-DEPENDENT PREMIUMS the simplest being to require b[s] to be defined for all s > 0 (thus excluding ..3. see XI. We should like. however. e. the exponential distribution ).g. . For different types of applications of large deviations to ruin probabilities .214 CHAPTER VII. to point out as a maybe much more important fact that the present approach is far more elementary and self-contained than that using large deviations theory.
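To supplement these remarks, the local adjustment coefficient itself is straightforward to compute numerically. The sketch below is a minimal illustration only: for each reserve level x it solves the Lundberg equation with the premium rate frozen at p(x), and then evaluates the action integral I(u) = ∫_0^u γ(x) dx appearing in the logarithmic asymptotics. The arrival rate, the exponential claim distribution and the premium rule are illustrative assumptions, not parameters taken from the text.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

# Illustrative data (assumptions, not from the text): Poisson rate beta,
# exponential(delta) claims and a linear premium rule p(x).
beta, delta = 3.0, 7.0
B_mgf = lambda g: delta / (delta - g)          # claim m.g.f., defined for g < delta
p = lambda x: 1.0 + 0.5 * x

def gamma(x):
    # local adjustment coefficient: positive root of beta*(B_mgf[g] - 1) - g*p(x) = 0
    f = lambda g: beta * (B_mgf(g) - 1.0) - g * p(x)
    return brentq(f, 1e-9, delta - 1e-9)

I = lambda u: quad(gamma, 0.0, u)[0]           # action integral I(u) = int_0^u gamma(x) dx

u = 5.0
# For exponential claims the root is explicit, gamma(x) = delta - beta/p(x),
# which provides a check on the numerics.
print(gamma(0.0), delta - beta / p(0.0), I(u), np.exp(-I(u)))
```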

Chapter VIII

Matrix-analytic methods

1 Definition and basic properties of phase-type distributions

Phase-type distributions are the computational vehicle of much of modern applied probability. Typically, if a problem can be solved explicitly when the relevant distributions are exponentials, then the problem may admit an algorithmic solution involving a reasonable degree of computational effort if one allows for the more general assumption of phase-type structure, and not in other cases. A proper knowledge of phase-type distributions seems therefore a must for anyone working in an applied probability area like risk theory.

A distribution B on (0, ∞) is said to be of phase-type if B is the distribution of the lifetime of a terminating Markov process {J_t}_{t≥0} with finitely many states and time-homogeneous transition rates. More precisely, a terminating Markov process {J_t} with state space E and intensity matrix T is defined as the restriction to E of a Markov process {J̄_t}_{0≤t<∞} on E_Δ = E ∪ {Δ}, where Δ is some extra state which is absorbing, that is, P_i(J̄_t = Δ eventually) = 1 for all i ∈ E¹, and where all states i ∈ E are transient. This implies in particular that the intensity matrix for {J̄_t} can be written in block-partitioned form as

    ( T  t )
    ( 0  0 ),                                                          (1.1)

that is, T is a subintensity matrix² and t = −Te. We often write p for the number of elements of E.

¹Here as usual, P_i refers to the case J_0 = i; if ν = (ν_i)_{i∈E} is a probability distribution, we write P_ν for the case where J_0 has distribution ν, so that P_ν = Σ_{i∈E} ν_i P_i.
²This means that t_ii ≤ 0, t_ij ≥ 0 for i ≠ j and Σ_{j∈E} t_ij ≤ 0.

Note that since (1.1) is the intensity matrix of a non-terminating Markov process, the rows sum to zero, which in matrix notation can be rewritten as

    t + Te = 0,                                                        (1.2)

where e is the column E-vector with all components equal to one.

We now say that B is of phase-type with representation (E, α, T) (or sometimes just (α, T)) if B is the P_α-distribution of the absorption time ζ = inf{t > 0 : J_t = Δ}, i.e. B(t) = P_α(ζ ≤ t). Equivalently, ζ is the lifetime sup{t > 0 : J_t ∈ E} of {J_t}. The initial vector α is written as a row vector. The interpretation of the column vector t is as the exit rate vector: the ith component t_i gives the intensity in state i for leaving E and going to the absorbing state Δ. A convenient graphical representation is the phase diagram in terms of the entrance probabilities α_i, the exit rates t_i and the transition rates (intensities) t_ij; see Fig. 1.1, where E = {i, j, k}.

Figure 1.1 The phase diagram of a phase-type distribution with 3 phases.

Here are some important special cases:

Example 1.1 Suppose that p = 1 and write β = −t_11. Then α = α_1 = 1, t_1 = β, and the phase-type distribution is the lifetime of a particle with constant failure rate β, i.e. an exponential distribution with rate parameter β. Thus the phase-type distributions with p = 1 is exactly the class of exponential distributions. □
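Before turning to further standard examples, note that the defining construction translates directly into a simulation recipe: draw the initial phase from α, follow the Markov process until absorption, and record the lifetime. The following is a minimal sketch; the rate 2 in the final line is an arbitrary choice illustrating Example 1.1.

```python
import numpy as np

def sample_phase_type(alpha, T, size=1, rng=np.random.default_rng(0)):
    """Simulate absorption times of the terminating Markov process (alpha, T).
    A direct transcription of the definition, not an optimized routine."""
    n = len(alpha)
    t = -T @ np.ones(n)                       # exit rate vector t = -Te
    out = np.empty(size)
    for k in range(size):
        i = rng.choice(n, p=alpha)            # initial phase drawn from alpha
        life = 0.0
        while i < n:                          # i == n encodes the absorbing state
            life += rng.exponential(1.0 / -T[i, i])      # holding time in phase i
            rates = np.concatenate([T[i], [t[i]]])       # targets: phases 0..n-1, then exit
            rates[i] = 0.0                               # no self-transition
            i = rng.choice(n + 1, p=rates / rates.sum())
        out[k] = life
    return out

# Example 1.1: p = 1 reduces to an ordinary exponential distribution (rate 2 here)
print(sample_phase_type(np.array([1.0]), np.array([[-2.0]]), size=10_000).mean())  # ~ 0.5
```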

. 6.3 The hyperexponential distribution HP with p parallel channels is defined as a mixture of p exponential distributions with rates 51.2 The Erlang distribution EP with p phases is defined Gamma distribution with integer parameter p and density bp XP-1 -6x (p. 0 •... the EP distribution may be represented by the phase diagram (p = 3) Figure 1.... p}. .1)!e Since this corresponds to a convolution of p exponential densities with the same rate S. .. . 0 0 0 0 -S 6 . . 0 -SP 0 and the phase diagram is (p = 2) .2 corresponding to E = {1. PHASE-TYPE DISTRIBUTIONS 217 Example 1. 0 ••• 0 0 -Sp-1 0 0 t= 0 0 00 •... .x i=1 Thus E _ -Si 0 T 0 -S2 0 0 .. so that the density is P E ai6ie-6. .. 0 0 0 T= t= 0 ••• -S S 0 0 0 0 0 0 . 00)) -S s o ... 0 -S 6 Example 1.1. a = (1 0 0 ..

g. the restriction of P8 to E. Theorem 1 . . 36) yields s d-. the Erlang distribution is a special case of a Coxian distribution.f is B (x) = 1 . (b) the density is b(x ) = B'(x) = aeTxt. Then for i . [APQ ] p. 5 Let B be phase-type with representation (E.1 tP-1 1 Figure 1. T). (c) the m.3 . Recall that the matrix-exponential eK is defined by the standard series expansion Eo K"/n! 3. a.3 0 Example 1 . i.d. B[s] = f0°O esxB (dx) is a(-sI -T)-lt (d) the nth moment f0°O xnB(dx) is (.t2 yt bP. ds^ = ds' = ttlaj + tikpkj. p:.aeTxe. dp.g.f.1)"n! aT-"e. Then: (a) the c.4 (COXIAN DISTRIBUTIONS) This class of distributions is popular in much of the applied literature. and is defined as the class of phase-type distributions with a phase diagram of the following form: 1 617 ti t2 2 b2. MATRIX-ANALYTIC METHODS Figure 1. E t ikp kj = kEE kEE 3For a number of additional important properties of matrix-exponentials and discussion of computational aspects .4 For example.e. Proof Let P8 = (p ^) be the s-step EA x EA transition matrix for {Jt } and P8 the s-step E x E-transition matrix for {Jt} . the backwards equation for {Jt} (e. j E E. see A. The basic analytical properties of phase -type distributions are given by the following result .218 CHAPTER VIII.

f.g.n-1t = (-1)nn!aT-n-1Te (-1)nn! aT-ne. and since obviously P° = I. tij / . Then h -tit ti + ti3 h j .e.tii is the rate of the exponential holding time of state i and hence (-tii)/(-tii . we arrive once more at the stated expression for B[s]. ti/ ...5) as hi(tii + s) = -ti - t ij hj. = aPxe. For (c). j#i jEE tijhj + his = -ti.n -lt .g.B(x) = 1'a (( > x) = P.g. Part (d) follows by differentiating the m.12) for integrating matrixexponentials yields B[s] = J esxaeTxt dx = a ( f°°e(81+T)dx ) t a(-sI . (Jx E E) = this proves (a). i. and since b[s] = ah.s j# -tii i (1.tii -tii . this means in vector notation that (T + sI)h = -t. . the solution is P8 = eT8.1. of the initial sojourn in state i.p. B(n)[0] = _ Alternatively. define hi = Eie8S. 1.p.f.5) Indeed .jEE B'(x) _ -cx Pxe = -aeTxTe = aeTxt (since T and eTx commute). for n = 1 we may put ki = Ei( and get as in (1. i. Rewriting ( 1.f.T) -'t = (. PHASE-TYPE DISTRIBUTIONS 219 That is.f. we i w. Alternatively. d8 P8 = TP8.T) -1t. or w.1 ) n +l n ! a (s I + T ) .5) ki = 1 + tii -L jj:Ai -tii (1. After that. Since 1 .tii we go to A.s I . d" dsn a (.g.6) . hj . in which case the time to absorption is 0 with m .tii and have an additional time to absorption either go to state j which has m . and (b) then follows from 1: aipF.s) is the m . the rule (A. h = -(T + sI)-1t. (-1) n+1n!aT .

we get the density as 9 9 6 (1 1) 10 7 1 0 10 2 aeTyt = e x . are idempotent. T= 2 111 so that 2 2 Then (cf."n! ( ( l 2 2 ) 17 9 0 \ 1 / 10 10 32 n! 35 6" +n!353 Similarly. This implies that we can compute the nth moment as (-1)"n! aT -"e 1"n! 1 1 22 9 9 10 70 7 1 10 10 1 9 +6. making the problem trivial. another the case p = 2 where explicit diagonalization formulas are always available.7) the diagonal form of T is 9 9 1 9 T 10 7 10 70 1 10 6 10 7 0 70 9 1 10 where the two matrices on the r. MATRIX-ANALYTIC METHODS which is solved as above to get k = -aT-le.220 CHAPTER VIII.h. Consider for example 3 9 a= (2 2). there are some examples where it is appealing to write T on diagonal form. see the Appendix. One obvious instance is the hyperexponential distribution.s.6 Though typically the evaluation of matrix-exponentials is most conveniently carried out on a computer. 0 Example 1. Example A3.

7) Proof According to (A. then the matrix m.hall.1.e a mixture of a phasetype distribution with representation (a/llall. B[Q] of B is f3[Q] = J e'1zB(dx) _ (v (9 I)(-T ® Q)-1(t ® I).4b for definitions and basic rules): Proposition 1. 5 and serves at this stage to introduce Kronecker notation and calculus (see A. i. i. 00 B[Q] = J0 f veTxteQx dx = (v ® I) ( f° eT x edx I (t I) (v (& I) ( (T ®Q)xdx f o" e o )( t ® I) _ (v ® I)(-T ® Q)-1(t ® I). a random variable U having a defective phase-type distribution with representation (a. (1. and in fact one also most often there allows a to have a component ao at A. • The phase-type distribution B is zero-modified. This is the traditional choice in the literature. 0 Sometimes it is relevant also to consider phase-type distributions.f.T) with weight hall and an atom at zero with weight 1 . . There are two ways to interpret this: • The phase-type distribution B is defective.7 If B is phase-type with representation (v.11aDD.29) and Proposition A4. < 1.4. PHASE-TYPE DISTRIBUTIONS 1 10 7 10 221 9 6 70 7 9 10 2 +e -6x (1 11 2 2 35e-x + 18e-6x 35 The following result becomes basic in Sections 4.T). hail = E=EE a.g. or one just lets U be undefined on this additional set. T) is then defined to be oo on a set of probability 1. where the initial vector a is substochastic.e 11BIJ = 1laDD < 1.

1.f.hve-7x. cf. Then the tail B(x) is asymptotically exponential. where C. x -* oo. B(x) . i is real and positive.q be the eigenvalue of largest real part of T. let v. MATRIX-ANALYTIC METHODS la Asymptotic exponentiality Writing T on the Jordan canonical form. B[s] = p(s)/q(s) to be phase-type: the density b(x) should be strictly positive for x > 0 and the root of q(s) with the smallest real part should be unique (not necessarily simple. but todays interest in the topic was largely initiated by M.. In older literature.4c). distributions with a rational m. Rolski. Lipsky [247]. h can be chosen with strictly positive component. The Erlang distribution gives an example where k > 0 (in fact. See in particular the notes to Section 6. 0 Of course. In Proposition A5. we give a criterion for asymptotical exponentiality of a phase-type distribution B. No satisfying .222 CHAPTER VIII. Here is a sufficient condition: Proposition 1.F.8).f. the conditions of Proposition 1. Using B(x) = aeTxe . assume that T is irreducible .8) Proof By Perron-Frobenius theory (A. see his book [269] (a historical important intermediate step is Jensen [214]). 77 > 0 and k = 0. Schmidt & Teugels [307] and Wolff [384]. the result follows (with C = (ah)(ve)). not only in the tail but in the whole distribution.g. and we have eTx . Notes and references The idea behind using phase-type distributions goes back to Erlang.g. 2.1 of the Appendix. Other expositions of the basic theory of phase-type distributions can be found in [APQ]. (1. Neuts. O'Cinneide [276] gave a necessary and sufficient for a distribution B with a rational m. All material of the present section is standard. but the relevant T is not irreducible.. cf. here k = p-1). the text is essentially identical to Section 2 of Asmussen [26]. T). Schmidli.8 Let B be phase-type with representation (a. but in many practical cases. the Erlang case). (or Laplace transform) are often used where one would now work instead with phase-type distributions. v. Example A5. let -. it is easily seen that the asymptotic form of the tail of a general phase-type distribution has the form B(x) _ Cxke-nx. . h be the corresponding left and right eigenvectors normalized by vh = 1 and define C = ah • ve .Ce-7'. one has k = 0.8 are far from necessary ( a mixture of phase-type distributions with the respective T(') irreducible has obviously an asymptotically exponential tail.

and U(A) is then the expected number of replacements (renewals) in A. we denote the density by u(x) and refer to u as the renewal density. is Markov and has two types of jumps ...d. what is the smallest possible dimension of the phase space E? 2 Renewal theory A summary of the renewal theory in general is given in A.: U1 + .i.+UnEA). For this reason. but is in part repeated below..t. + U0 is 0 . known. U1<t < U1+U2..r. JtJt1) Then { 0<t<U1 . . +UnEA} 00 = EEI(U1 +.f. RENEWAL THEORY 223 algorithm for finding a phase representation of a distribution B (which is known to be phase-type and for which the m.. Let U1. Jt={Jt?ul}. . (2. U2. Lebesgue measure. we refer to U as the renewal measure. A related important unsolved problem deals with minimal representations: given a phase-type distribution . . with common distribution B and define4 U(A) = E# {n = 0.. the jumps of the j(k) and the it } k) to the next J( k+l) A jump jumps corresponding to a transition from one Jt 4Here the empty sum U1 +.2. oo) w. n=O We may think of the U. Then the renewal density exists and is given by u(x) = ae(T+ta)xt.1 Consider a renewal process with interarrivals which are phasetype with representation (cr.T).. be i... The explicit calculation of the renewal density (or the renewal measure) is often thought of as infeasible for other distributions.. or the density is available ) is. but nevertheless...1) Proof Let {Jtk)} be the governing phase process for Uk and define {Jt} by piecing the { J(k) } together. the problem has an algorithmically tractable solution if B is phase-type: Theorem 2.g.1 of the Appendix.1.. If B is exponential with rate 0. the renewals form a Poisson process and we have u(x) = 0. however. as the lifetimes of items (say electrical bulbs) which are replaced upon failure. if U is absolutely continuous on (0.

IIafl < 1. which is phase -type with representation (v. 2. that is. T).1. The renewal density at x is now just the rate of jumps of the second type. This is defined as U1 + . Then: (a) the excess life t(t) at time t is phase-type with representation ( vt. However. . i.U1 U3 U2 U3 U4 Figure 2.3 Consider a renewal process with interarrivals which are phasetype with representation (a. the density is veTxt = B(x)/µB. this is well-defined. define the excess life e(t) at time t as the time until the next renewal following t. u The argument goes through without change if the renewal process is terminating. Hence the intensity matrix is T + ta.IIBII which is > 0 in the defective case.1) follows by the law of total probability. i. and let µB = -aT-le be the mean of B. since Uk = oo with probability 1 . (b) £(t) has a limiting distribution as t -* oo. is the first k with Uk = 00.e.1) remains valid for that case.2 Consider a terminating renewal process with interarrivals which are defective phase-type with representation (a.T).224 CHAPTER VIII.1 Corollary 2. as the time of the last renewal. Then the lifetime is zero-modified phase -type with representation (a.. the lifetime of the renewal process. which is ti in state i. Equivalently. Proof Just note that { it } is a governing phase process for the lifetime. the phase-type assumptions also yield the distribution of a further quantity of fundamental importance in later parts of this chapter .T) where v = -aT-1 /µB. and the jumps of the first type are governed by T. see Fig. B is defective . u Returning to non-terminating renewal processes . Hence ( 2. and the distribution of Jx is ae ( T+t«)x.T + ta).e. and hence ( 2. Corollary 2. fi(t) U2 U1 . MATRIX-ANALYTIC METHODS of the last type from i to j occurs at rate tiaj .T) where vt = ae (T+ta)t . + Uit_1 where s.. .

hence e(t) is phase-type with representation (vt.6. The time of the next renewal after t is the time of the next jump of the second type. we first compute the stationary distribution of Q. Here are two different arguments that this yields the asserted expression: (i) Just check that -aT-1/µB satisfies (2. the unique positive solution of ve = 1. The renewal density is then aeQtt = (al a2) ( 7i 7"2. we get B(x) aeTxe aT-1eTxTe µB µB PB = veTxt.e.T) where vt is the distribution of it which is obviously given by the expression in (a).e. cf. i. (2.2.) ( t2 ) . Next appeal to the standard fact from renewal theory that the limiting distribution of e(x) has density B(x)/µB.q2. u Example 2 . (ii) First check the asserted identity for the density: since T. RENEWAL THEORY 225 Proof Consider again the process { Jt } in the proof of Theorem 2.2) v(T + ta) = 0. T-1 and eTx commute. The formulas involve the matrix-exponential of the intensity matrix Q = T + to = ( tll + tlal t12 + t2al tlz + tlaz _ -q1 ql t22 + t2a2 q2 -q2 (say).1. Al. Hence in (b) it is immediate that v exists and is the stationary limiting distribution of it.4 Consider a non-terminating renewal process with two phases. According to Example A3. = qz ql (x1 xz) = ql + qz ql + q ' and the non-zero eigenvalue A = -ql .2): -aT-1 e = AB = 1 µB µB -a + aT-'Tea -aT-1(T + ta) µB PB -a + aea -a + a µB µB =0.

Hence 7r = (1/2 1/2).5 Let B be Erlang(2). Then Q= 0 55 )+(1o)=( j ad ).t2) .4 yields the renewal density as u(t) = 2 (1 .6 Let B be hyperexponential. . A = -25. and Example 2. )t (51 . Then _ Q Hence 51 0 0 -52 + 51 52 _ -5152 51a2 ) (al a2) 52a1 -62a1 Slat + 52a1 51a2 51a2+52a1 A = -51a2 . MATRIX-ANALYTIC METHODS e.52) 25152 51x2+5251 51a2+5251 Notes and references Renewal theory for phase-type distributions is treated in Neuts [268] and Kao [221].(biaz + aza.t2) 1 + eat (a17r2 .52a1. t1B 0 Example 2 .226 CHAPTER VIII.e-2bt) 13 Example 2 .tl) 7r2t2 + eat (a17r2 .a27rl) (tl .`t (al a2) + C 11 172 ir12 / \ t 2 ) r1 (7r1 7r2) ( t2 7rltl + J + eAt (al a2) ( 71(t2 .a27r1) (t1 . and Example 2. The present treatment is somewhat more probabilistic.4 yields the renewal density as u(t) = 5152 e.

Next. T + to+). the Markov processes representing ladder steps can be pieced together to one {my}. we shall. Then: (a) G+ is defective phase-type with representation (a+. 3. G+(. Proof The result follows immediately by combining the Pollaczeck-Khinchine formula by general results on phase-type distributions: for (a). marked by thin and thick lines on the figure.p. r(u) the time of ruin with initial reserve u. For (b). Since the results is so basic. T). itself phasetype with the same phase generator T and the initial vector a+ being the distribution of the upcrossing Markov process at time -ST+_. however.1 Assume that the claim size distribution B is phase-type with representation (a.e. B the claim size distribution. represent the maximum M as the lifetime of a terminating renewal process and use Corollary 2. add a more self-contained explanation of why of the phase-type structure is preserved. Within ladder steps.) = F(ST(o) E •. we see that the ladder height Sr+ is just the residual lifetime of the Markov process corresponding to the claim causing upcrossing of level 0. Here we have taken the terminating Markov process underlying B with two states. a+j. cf. Thus the total rate is tip + tia+. T). Now just observe that the initial vector of {mx} is a+ and that the lifelength is M. We asssume that B is phase-type with representation (a. i. and M is zero-modified phase-type with representation (a+. with 0 denoting the Poisson intensity. (b) V.1 on the next page.2. T(0) < oo) the ladder height distribution and M = supt>o St. Corollary 2. which occurs at rate ti. The essence is contained in Fig. and rewriting in matrix form yields the phase generator of {my} as T + ta+. .f3aT-1. and if there is a subsequent ladder step starting in j whic occurs w. Then each claim (jump) corresponds to one (finite) sample path of the Markov process.3.i. use the phasetype representation of Bo. THE COMPOUND POISSON MODEL 227 3 The compound Poisson model 3a Phase-type claims Consider the compound Poisson (Cramer-Lundberg) model in the notation of Section 1. The stars represent the ladder points ST+(k). {St} the claim surplus process.3. Considering the first. T) where a+ is given by a+ = . Corollary 3. the transitions are governed by T whereas termination of ladder steps may lead to some additional ones: a transition from i to j occurs if the ladder step terminates in state i.(u) = a+e(T+tQ+)u Note in particular that p = IIG+II = a+e.

1 .t t d kkt --S. 7e-7x 2 2 Thus b is hyperexponential (a mixture of exponential distributions) with a (2 2 ).2 Assume that ..3. see Corollary 2. 3e-3x + . 0 Example 3.------- Figure 3.1 This derivation is a complete proof except for the identification of a+ with -. MATRIX-ANALYTIC METHODS t -. This is in fact a simple consequence of the form of the excess distribution B0.QaT-1.228 CHAPTER VIII..M--------------------------------------- -------{mx} ST+-(2-) - S . T = (-3 .Q = 3 and b(x) = ..7)diag so that a+ = -QaT 1 = -3 ( 3 2 2) 0 3 9 2 14 7 2 11 2 T+ta+ = 3 0 07/+( 7I \ 2 14 .

6). see Stanford & Stroinski [351] . The parameters of Example 3.j).1): Proposition 4. so that as there 229 9 9 e(T+ta+)u 1 9 e_u 10 70 10 70 7 10 Thus 1 7 9 10 ) + e6'4 ( 10 10 .4. For further more or less explicit computations of ruin probabilities.and Markov-modulated models. this was obtained in Section 3. T) for some vector a+ = (a+. if we define {mz} just as for the Poisson case (cf. For an attempt. we encounter similar expressions for the ruin probabilities in the renewal. 3.1 which does not use that A is exponential) by noting that the distribution G+ of the ascending ladder height ST+ is necessarily (defective) phase-type with representation (a+. see Section 6. with A denoting the interarrival distribution and B the service time distribution. It is notable that the phase-type assumption does not seem to simplify the computation of finite horizon ruin probabilities substantially. That is.2 are taken from Gerber [157]. the discussion around Fig. 0(8) (u) (recall that z/i(u) refers to the zero-delayed case and iY(8) (u) to the stationary case).^(u) = a+e( T+ta+)ue = 24e-u + 1 e-6u 35 35 0 Notes and references Corollary 3. 3. We assume p = PB/µA < 1 and that B is phase-type with representation (a. THE RENEWAL MODEL This is the same matrix as is Example 1. the duality result given in Corollary 11. In the next sections.1 In the zero-delayed case.6. 4 The renewal model We consider the renewal model in the notation of Chapter V. but there the vector a+ is not explicit but needs to be calculated (typically by an iteration). We shall derive phase-type representations of the ruin probabilities V) (u). T). cf. his derivation of +'(u) is different. and the argument for the renewal case starts in just the same way (cf. (a) G+ is of phase-type with representation (a+. For the compound Poisson model. but that such a simple and general solution exists does not appear to have been well known to the risk theoretic community.1 can be found in Neuts [269] (in the setting of M/G/1 queues. where a+ is the (defective) .T). The result carries over to B being matrix-exponential. Fig.4. see Shin [340].

230 distribution of mo. CHAPTER VIII. but with initial distribution a rather than a+.4 Consider the renewal model with interarrival distribution A and the claim size distribution B being of phase-type with representation (a. 4.2 The distribution G(s) of the first ladder height of the claim surplus process {Ste) } for the stationary case is phase -type with representation (a(8). the calculation of the first ladder height is simple in the stationary case: Proposition 4. We have now almost collected all pieces of the main result of this section: Theorem 4 . Nevertheless.6.1) Proof We condition upon T1 = y and define {m. the Palm distribution of the claim size is just B. (4.3. B0 is phase-type with representation (-aT-1/µa. But by Corollary 2. G(') = pBo.T)• Proposition 4.*'} is Markov with the same transition intensities as {mx}. (c) {mx } is a (terminating) Markov process on E. where B0 is the stationary excess life distribution corresponding to B. it follows by integrating y out that the distribution a+ u of mo is given by the final expression in (4. Then . Hence by Theorem 11.5. obviously mo = m.1.Sy-} in the same way as {mx} is defined from {St}. Also. Since the conditional distribution of my given T1 = y is ae4y. Proof Obviously. the form in which we derive a+ for the renewal model is as the unique solution of a fixpoint problem a+ = cp(a+). cf.3 a+ satisfies a+ = V(a+).T). Fig. with intensity matrix Q given by Q = T + to+. Then {m. which for numerical purposes can be solved by iteration. The key difference from the Poisson case is that it is more difficult to evaluate a+. In fact.T).1).*} from {St+y . where u w(a +) = aA[T + to+) = a J0 e(T+t-+)1A(dy). where a(8) = -aT-1/PA. MATRIX-ANALYTIC METHODS (b) The maximum claim surplus M is the lifetime of {mx}.

Hence ^p(.2 ) follows from Proposition 4.3) (defined on the domain of subprobability vectors .e.1(b).1 by noting that the distribution of mo is a+. .. and that this is given by Proposition 4. thus . i. The second follows in a similar way by noting that only the first ladder step has a different distribution in the stationary case. THE RENEWAL MODEL 231 . In particular . a+2) = ^p (a+l)) ..M----------------------------.2) where a+ satisfies (4.^(8)(u) = a ( 8)e(T+ta +) xe..1) and a(8) _ -aT.0. a+l ) = cp (a+°)) .1 .1). a+ can be computed by iteration of (4.^(u) = a+e ( T+ta+)xe.1/pA. (4.3). a+) > 0 = a+o) implies a+) _ (a+) > W (a+)) = a+) .2.4. I {mx} ------------------. Furthermore . . It remains to prove convergence of the iteration scheme (4. the maximum claim surplus for the stationary case has a similar representation as in Proposition 4..---------- i y ^-- T1= y -`•r--------------- Figure 4.3) Proof The first expression in (4. The term tf3 in cp(i3) represents feedback with rate vector t and feedback probability vector (3. by a+ = lim a +n) where a+°) .0) is an increasing function of /3. only with initial distribution a(*) for mo. (4.•.

which links together the phase-type setting and the classical complex plane approach to the renewal model (see further the notes). MATRIX-ANALYTIC METHODS and (by induction ) that { a+ n) } is an increasing sequence such that limn. In that case. -s ¢ sp(T). Theorem 4. 0 = a+) < a+ yields a+) _ (a+0)) (a+) = a+ (n and by induction that a(n) < a+ for all n . limn-4oo a ) < a+.P[s] = A[-s]B[s]. 0 0 We next give an alternative algorithm.. Fn ). To prove the converse inequality. F[s] being interpreted in the sense of the analytical continuation of the m. Then (4.4) whenever EeR(S)U < oo.T)-It.5) Since -s $ sp(T).1. To this end.T1. and let &+". both quantities are just 0 .g.2. Thus by (4. n) &+n) T a+.232 CHAPTER VIII.1 arrivals (n arrivals are excluded because of the initial arrival at time T1 ).5) yields h = (-sI . and hence we may assume that h has been normalized such that ahA[-s] = 1.T)-'t • A[-s] (4. Obviously.T)-1t. let F be the distribution of U1 . the corresponding right eigenvector may be taken as (-sI . For n = 0. . Then -s is an eigenvalue of Q = T + ta+ if and only if 1 =. It follows that n-1) so that on Fn the feedback to {mz} after each ladder step cannot exceed &+ a+ n) < a f ^ e(T+ t&+ -1))YA(dy) o < a is e(T+t«+-1')YA(dy) _ w (a+-1 )) = a+n).4) makes sense and provides an analytic continuation of F[•] as long as -s ¢ sp(T). the normalization is equivalent to F(s) = 1. 7-+ ].ST. Assume the assertion shown for n .-} can contain at most n . we use an argument similar to the proof of Proposition VI. Then each subexcursion of {St+Tl .) = P(mTl = i. However. Let Fn = {T1 + • • • + Tn+1 > r+}be the event that {my} has at most n arrivals in [T1. (4. this implies that ahA[-s] # 0.f.4. Similarly.4). Thus . (4. so to complete the proof it suffices to show that &+ < a+) for all n. a+ ) exists .5 Let s be some complex number with k(s) > 0. Proof Suppose first Qh = -sh. with B[s]. Then F[s] = a(-sI . Then e4'h = e-82h and hence -sh = Qh = (T + taA[Q])h = Th + A[-s]tah.

-Pd with corresponding eigenvectors hl... Corollary 4... Given T has been computed..T) = 1 ata+ = a+. hd. (4. In older literature . . Hence with h = (-sI -T). -pdhd.T)-It.9) we have G+[s] = 1 which according to Theorem 1. .. This immediately implies that Q has the form CD-1 and the last assertion on the diagonal form . Pd in the domain ER(s) > 0 .. 0). This gives d roots 'y..... T) with a+ = a(Q-T)/at. and the solution is .1 has the d distinct eigenvalues .. and define hi = (-piI .p1i .T)-lt + t = -s(-sI . Further. The roots are counted and located by Rouche' s theorem (a classical result from complex analysis giving a criterion for two complex functions to have the same number of zeros within the unit circle ). Since R(s) > 0 and G _ is concentrated on (-oo.T)-lt = -sh..6 Suppose u < 0..type with representation (a+. -yd satisfying R(ryi) > 0. . letting vi be the left eigenvector of Q corresponding to -pi and normalised by vihi = 1 .4. D that with columns -p1 hl. the matrix Q in Theorem 2..6) i=1 i=1 Proof Appealing to Theorem 4. Notes and references Results like those of the present section have a long history. . Let d denote the number of phases. in turn. . Q = CD-1 where C is the matrix with columns hl. .. THE RENEWAL MODEL 233 Suppose next F(s) = 1. the classical algorithm starts by looking for roots in the complex plane of the equation f3[y]A[-ry] = 1.5. and hence by the Wiener-Hopf factorization identity (A. Q has diagonal form d d Q = -dpivi®hi = -dpihivi. hd.' that the equation F(s) = 1 has d distinct roots p1.lt we get Qh = (T + to+)h = T(-sI .5(c) means that a+(-sI T)-1t = 1. . we have IG_ [s] I < 1 . W v M(d) in the notation of Chapter V). Then G+ is phase. explicit expressions for the ruin/ queueing probabilities are most often derived under the slightly more general assumption that b is rational (say with degree d of the polynomial in the denominator) as discussed in Section 6. t(ry) > 0.6. As in Corollary 4. we get at a(Q . and the topic is classic both in risk theory and queueing theory (recall that we can identify 0(u) with the tail P(W > u) of the GI/PH /1 waiting time W..

. We assume that each B.contained derivation). The solutions are based upon iterations schemes like in Theorem 4. The distribution of W comes out from the approach but in a rather complicated form . involving . In risk theory.. 5 Markov-modulated input We consider a risk process {St } in a Markovian environment in the notation of Chapter VI. Asmussen & O'Cinneide [ 41] for a short self. [119]. the ruin probability can be found in matrix-exponential form just as for the renewal model. It turns out that subject to the phase.exponential form of the distribution was found by Sengupta [335] and the phase-type form by the author [18]. similar discussion appears in Kemperman [227] and much of the queueing literature like Cohen [88]. an alternative approach (the matrix-geometric method ) has been developed largely by M.g. This complex plane approach has been met with substantial criticism for a number of reasons like being lacking probabilistic interpretation and not giving the waiting time distribution / ruin probability itself but only the transform. MATRIX-ANALYTIC METHODS d F 1 + a J e°" ip(u) du = Ee°w = 11(--t. see Neuts [269]. the fixpoint problems look like R=Ao+RAI+R2A2+ . and the distribution of an arrival claim is B. The arrival rate in background state i is a. the intensity matrix is A and the stationary row vector is ir . For further explicit computations of ruin probabilities in the phase-type renewal case . T('). which contains somewhat stronger results concerning the fixpoint problem and the iteration scheme. Neuts and his students. a pioneering paper in this direction is Tacklind [373]. with representation say (a(' ).234 then in transform terms CHAPTER VIII.F.4. In queueing theory. but the models solved are basically Markov chains and -processes with countably many states ( for example queue length processes ). the background Markov process with p states is {Jt}. whereas the approach was introduced in queueing theory by Smith [350]. is phase-type. E(t)). see Dickson & Hipp [118]. That is . The matrix. For surveys . Here phase. e. The exposition here is based upon [18]. where R is an unknown matrix. [270] and Latouche & Ramaswami [241]. Numerical examples appear in Asmussen & Rolski [43]. starting around in 1975.type assumptions are basic.) d (see.type assumption . The number of elements of El=> is denoted by q.. and appears already in some early work by Wallace [377].

However.Vt)} obtained by time reversing the I component. MARKOV-MODULATED INPUT 235 some parameters like the ones T or a+ for the renewal model which need to be determined by similar algorithms. Vt)}t>o such that {It} is a Markov process with a finite state space F and {Vt} has piecewiese linear paths. the phase space E(°) for B. has states o. Diagonalization Consider a process {(It.1. The connection between the two models is a fluid representation of the Markov-modulated risk process given in Fig. We start in Section 5a with an algorithm involving roots in a similar manner as Corollary 4.1. and the one E(•) for B. O. 5. states . The version of the process obtained by imposing reflection on the V component is denoted a Markovian fluid and is of considerable interest in telecommunications engineering as model for an ATM (Asynchronuous Transfer Mode) switch. the analysis involves new features like an equivalence with first passage problems for Markovian fluids and the use of martingales (these ideas also apply to phase-type renewal models though we have not given the details).5. This calculation in a special case gives also the ruin probabilities for the Markov-modulated risk process with phase-type claims. (a) 0 0 ♦ o ° tl ♦ • 0 0 o } o o (b) 0 } ♦ • 0 o f o Figure 5. p = ql = Q2 = 2.2. Section 5b then gives a representation along the lines of Theorem 4.4. The key unknown is the matrix K. The two environmental states are denoted o. The stationary distribution is obtained by finding the maximum of the V-component of the version of {(It.1 In Fig.6. •. 5a Calculations via fluid models. for which the relevant fixpoint problem and iteration scheme has already been studied in VI. 5. say with slope r(i) on intervals where It = i.

1) if and only if s is an eigenvalue of E. '31a(1) 0 0 f32a(2) 0 0 AI = t(1) 0 0 0 t(2) 0 0 0 t(3) 0 T1 0 0 0 0 T(2) 0 '33a(3) 0 0 T(3) The reasons for using the fluid representation are twofold.236 CHAPTER VIII. 4. < oo for all s. 5.31a(l) (/3i)diag . V. If s is such a number. a E E(i) } . i E E.1(b) {(It . F = E U { (i. Bi[s] = -a(i)(T(i) + sI)-it('). F is the disjoint union of E and the Eli). Let E denote the matrix -. a) of {It}. The fluid model on Fig . of E into components indexed by E. consider the vector a satisfying (A + (13i(Bi[ -s] .1 A complex number s satisfies 'A+ (f3i(Bi[-s] . Thus F = {o. we have more martingales at our disposal.1))diag ) a = -sa and the eigenvector b = . whereas Ee8s' = oo for all t and all s > so where so < oo. t. a) : i E E. This implies that in the fluid context. 4}.(Ni)diag r(i.Vt)} is then obtained by changing the vertical jumps to segments with slope 1.A 0 Or 1A/ _ t(i) 0 t(2) 0 0 0 0 0 t(3) 0 T1 0 0 0 . 4. j = 1. the probability in the Markov-modulated model of upcrossing level u in state i of {Jt} and phase a E Eli) is the same as the probability that the fluid model upcrosses level u in state (i. A claim in state i can then be represented by an E()-valued Markov process as on Fig. MATRIX-ANALYTIC METHODS 4. 2. 5. First. •. resp. In the general formulation . corresponding to the partitioning + Epp). a) = 1. Second. The intensity matrix for { It} is (taking p = 3 for simplicity) I A .1(a).1))diag + sII = 0 (5. o. r(i) _ -1.92a(2) 0 0 T(2) 0 0 0 -f33a(3) 0 0 T(3) with the four blocks denoted by Ei„ i. Eli) + Proposition 5. in the fluid model Eel'. Recall that in the phase-type case.

E22)-1 E21) a = 0.sI + E12 (sI . d correspond to the partitioning of b into components Proof Using the well-known determinant identity Ell E12 E21 E22 E22 I ' I Ell .E12E22 E21 I . d = (sI E22)-1E21a = E ai(sI .1).E22 . it follows that Ell E12 ( E 21 E22) (d) = s 1 d I .sI)-1t)) iag I = 0 which is the same as (5. For the assertions on the eigenvectors. with Eii replaced by Eii .sI ()3i)diag . 0 .(sI . E(1) + + E(P). and let d = (sI .E22)-1 E21a E21a . Noting that E11c + E12d = se by definition.sI 0 0 0 T(2) .32a(2) (/3i)diag .E21a + sd = sd.A .sI+ ((3ia(i)(T(i) .T('))-1t(i) . it follows that if -Qla(1) 0 0 -.sI 0 0 0 T(3) .sI 0 0 t(3) 0 0 = 0.E22)-1 E21a. MARKOV-MODULATED INPUT 237 indexed by E. Then (up to a constant) c = a.Nla(1) 0 0 T 1.A . iEE (a> of 0* 1 AI. c = a. where c. assume that a is chosen as asserted which means (Ell . t(1) 0 0 then also 0 t(2) 0 . Then E21c+E22d = E21a . resp .sI.sI) (sI .5.

v) = -v) I.. w(u. a) = (j.. q.4 that {e--"1b(v) is a martingale . j. e89uc(e)) (d(1) . .v) = = p i( u .j)c v . v > 0./' u = e' (esiuc ( 1) .. c j. Then we get V)i (u) as sum of two exponential terms where the rates s1.v)=inf{t >0:Vtu orVt=.. . For u. d("))-1 e. a).5. Here E has one state only.2 Assume that E = Or 'Al has q = ql + + qp distinct eigenvalues si. j) pi( u .. v.a)d^ )...v) = Optional stopping at time w (u. a) = Pi (Vw(u.( u...3 Consider the Poisson model with exponential claims with rate 5. v..v) = j).upi(u. a )d(a + e8 °vpi (u . .Q.. v = 1. pi(u.v}. we first look for the negative eigenvalue s of E = I -0 I which is s = -ry with yy = b -. a)). We can take a = c = 1 and get d = (s + b)-16 = 5/(3 = 1/p. . v.v) = (j.pi(u. j. j. v) yields C{V) = e8 .j. I' i( V P2 (w (u) < oo.. Then . define w(u.v.O.238 CHAPTER VIII. Example 5 . a)). To determine 0 (u). it follows by Proposition II.. MATRIX-ANALYTIC METHODS Theorem 5. the result u follows.. < 0 and let b(v) = I d(„)) be the right eigenvector corresponding to s.a Solving for the pi(u. sq with $2s. Letting v -^ oo and using Rsv < 0 yields e8'u = Epi(u.sv)b(v) = 0. a) and noting that i1 (u) = >I j. j. Example 5 . Thus 0(u) = esu/d = pe-7 ° as u should be. u) Iw(u. j. . w(u)=inf{t >O:Vt-u}. . Iw(u.4 Assume that E has two states and that B1. B2 are both exponential with rates 51 i b2. s2 are the negative eigenvalues of Al +01 -A1 E _ -A 2 b1 0 52 A2 +32 0 . Proof Writing Or-'Alb( v) = svb( v) as (AI .

oo)) j)ye. 0 Theorem 5 . is 0 /3 f R(i . MARKOV-MODULATED INPUT 239 5b Computations via K Recall the definition of the matrix K from VI. dx)Bj(y .h.Qj eie 0 f e (j) T(') x T(j)y ej a e dx e e 00 00 eKx ® e T(')' dx (ej (& I)e T(')ye eKa®T(')x dx (ej (9 I)eT(') Ye e(i)eT(')ye.5. j. (y. j. U) where t(j) + t(j)O(j j = k uja.y = to B k7 j # k In particular.6 For i E E. the Pi-distribution of M is phase-type with representation (E(1) + + E(P). 9(').5 G+(i. 8^')IT(j)) where e 3^') =. (5. i.s.b (u) = Pi(M > u) = 9(i)euue.3) .x) 00 f ° (') (j) eT (y-y)edx . j.2.2) the l.( 2.k. we get the following phase-type representation for the ladder heights (see the Appendix for the definition of the Kronecker product 0 and the Kronecker sum ®): Proposition 5.xxej • a 00 oo el .3j eye.33(e = 0 a(j))(-K ®T ( j))(ej (9 I). according to VI. In terms of K. •) is phase-type with representation (E(i). Proof We must show that G+ (i. (') a T( However .

i. the initial value of (i.e. 0 1)'. a) to (k. the ratio between two polynomials (for the form of the density. we have the additional possibility of a phase change from a to ry within the ladder step. a) is obviously chosen according to e(`). T. with phase space EU> whenever the corresponding arrival occurs in environmental state j (the ladder step is of type j). oo) and b* [0] = f °O e-Bxb(x) dx the Laplace transform. +aii-10+anI then a matrix-exponential representation is given by b(x) = aeTxt where a = (b1 b2 . equivalently... Furthermore. t = (0 0 .. An alternative characterization is that such a distribution is matrix-exponential.. (6. we have sofar concentrated on a claim size distribution B of phase-type. Associated with each ladder step is a phase process. Bk7 . For j = k.) which is rational.1 Let b(x) be an integrable function on [0. bn-1 bn). i. Numerical illustrations are given in Asmussen & Rolski [43]. which occurs w. u Notes and references Section 5a is based upon Asmussen [21] and Section 5b upon Asmussen [17]. which occurs at rate t(i).g. and a new ladder step of type k must start in phase y.240 CHAPTER VIII. Piecing together these phase processes yields a terminating Markov process with state space EiEE E('). +bn0i-1 0n +a10n-1 +. t) is the representation of the matrix-exponential distribution/density): Proposition 6. For a transition from (j.y) to occur when j # k.. if b* [0] = b1 +b20+b302 +.p.2) . see Example 1. some square matrix T and some column vector t (the triple (a. intensity matrix U.e. in many cases where such expressions are available there are classical results from the pre-phase-type-era which give alternative solutions under the slightly more general assumption that B has a Laplace transform (or. . However. 6 Matrix-exponential distributions When deriving explicit or algorithmically tractable expressions for the ruin probability.k y. that the density b(x) can be written as aeTxt for some row vector a. and it just remains to check that U has the asserted form. and lifelength M. a m. which occurs at rate t^^7. This yields the asserted form of uja. say... MATRIX-ANALYTIC METHODS Proof We decompose M in the familiar way as sum of ladder steps . Starting from Jo = i.2.5). Then b*[0] is rational if and only b(x) is matrix-exponential..f. the current ladder step of type j must terminate.

One of his elementary criteria. For a proof. see Asmussen & Bladt [29] (the representation (6.2). t= 0 . Namely.47r2 -3 . shows that the distribution B with density b(x) = c(1 cos(21r x))e-x. b(x) > 0 for x > 0. Writing b(x) = c(-e( 2ni-1 ) y/2 ..1 0 . S. . MATRIX-EXPONENTIAL DISTRIBUTIONS 241 T = 0 1 0 0 0 .e(-tai-1)x/2 + e-'T) it follows that a matrix-exponential representation ()3. of (6.6. .1.1 is that it gives an explicit Laplace tranform inversion which may appear more appealing than the first attempt to invert b* [0] one would do. t). then b*[0] = a(0I -T)-1t which is rational since each element of (01 . The converse follows from the last statement of the theorem.. where c = 1 + 1/47r 2./(0 + bi)..4) 0 0 -1 c This representation is complex. u giving b(x) = E 1 cie-biz/bY..h. Example 6 .3) 0 0 0 0 0 .3) was suggested by Colm O'Cinneide. T= 0 0 1 . (6.an_3 -an _ 4 . bn of the denominator to be distinct and expand the r. Thus.2)..3 A set of necessary and sufficient conditions for a distribution to be phase-type are given in O'Cinneide [276].1) as E 1 c. (6. personal communication).. -a2 -a1 Proof If b(x) = aeTxt. since 1 + 4ir2 03 + 302 + (3 + 47x2)0 + 1 + 47r2 it follows by (6. . namely to asssume the roots 6l. 0 0 .s.. we can always obtain a real one (a. s = -c/ 2 .1 0 0 )3 = (111). T. 0 0 0 0 1 0 0 . S = f -c/2 0 -21ri .3) that we can take 0 1 0 0 a= (1 + 47r2 0 0). but as follows from Proposition 6. ..47x2 -3 1 0 .(6.T)-1 is so. .2 A remarkable feature of Proposition 6. u Remark 6.. (6. 0 1 -an -an-1 -an _2 . s) is given by 27r i . -1 . matrix-exponentiality implies a rational transform. cannot be phase-type.

Corollary 111. For the second algorithm. MATRIX-ANALYTIC METHODS Example 6 .6) holds true also in the matrix-exponential case. Consider the distribution with density = 15 ((2e-2x . t) of b(x). T. Then (cf. and can use this to invert by the method of Proposition 6. leading to matrix calculus in high dimensions when b is small. we shall only consider the compound Poisson model with arrival rate 0 and a matrix-exponential claim size distribution B. 7 + 155e-x b(x) Then it is known from O'Cinneide [276] that b is phase-type when 6 > 0. (6. 0. T.6) in Section 3 seems to use the probabilistic interpretation of phase-type distribution in an essential way. t) a phase-type representation with a the initial vector. We recall (see Section 3. For the first.242 CHAPTER VIII. then 5(u) = -a+e(T+t-+)uT-le where a+ = -/3aT-1. then: Proposition 6. we take as starting point a representation of b* [0] as p( O)/q(9) where p. As for the role of matrix-exponential distributions in ruin probability calculations. and that the minimal number of phases in a phase-type representation increases to 0o as 5 . we use a representation (a.4 This example shows why it is sometimes useful to work with matrix-exponential distributions instead of phase-type distributions: for dimension reasons . and present two algorithms for calculating '(u) in that setting.4) the Laplace transform of the ruin probability is /g(e)-PO 0*[e] _ /' e-eu^G(u)dU = 0 9(/3--a0p(-9)ap (9)/q(9)) . that despite that the proof of (6.1 to get i (u) = f3esus. . But since 15(1 +6)02 + 1205 0 + 2255 + 105 b* [9] _ (7 + 155)03 + (1355 + 63)92 + (161 + 3455)9 + 2256 + 105 Proposition 6. q are polynomials without common roots. (6.5 (6. recall that t = -Te) that if B is phase-type and (a.1 shows that a matrix-exponential representation can always be u obtained in dimension only 3 independently of J.5) Thus.3.6) The remarkable fact is. T the phase generator and t = -Te. we have represented ti* [0] as ratio between polynomials (note that 0 must necessarily be a root of the numerator and cancels).1)2 + 6).

T)-1 J0 00 b(x) dx = f -aT-1t. the assertion is equivalent to -a+(BI . we get (91.1t = -b* .5 ).a+(9I .6). but we shall give an algebraic proof.T)-1 (91.1 = ^(T-1 + ( 91-T)-1).1t = -f3a (0I -T)-1T-1t .T . (6.1 + 82 (9I .T)-1 + (6I .T . b+ = a +(BI . U =.T)-1t.1UB(B + BVA-1UB).b* (6.T).to+)-1T .'t.T)-1 so that b* b** b** -a+(9I .T)-1 + 1 ib* (91. 519) (A + UBV ). Then in Laplace transform formulation .6.1 + b+ = b++ 1 .A . MATRIX-EXPONENTIAL DISTRIBUTIONS 243 Proof Write b* = a(9I .T)-1T-1t.7) 9( cf. .1 = A-1 . (6.T)-1T -2 = and 1 = AB IT-2 + 82T . this can be verified by analytic continuation from the phase-type domain to the matrix-exponential domain .T)-1t)-1a +(9I . b+ = a+(9I . Now. From the general matrix identity ([331] p.t.B=land V=a+.T . xb(x) dx = aT2t. with A = 91-T.T)-1ta+(OI .6b* . since (91-T)-1T .1BVA-1. (91. Presumably.T)-1t ( l .to+)-1T .1t du = .to+)-1 = (BI . we get b+ = -0aT-1(9I -T).

h. a key early paper is Cox [90] (from where the distribution in Example 6.1. For expositions on the general theory of matrix-exponential distributions.8. to piece together the phases at downcrossing times of {Rt} (upcrossing times of {St}) to a Markov process {mx} with state space E.1) is the same as the r. We present here first a computational approach for the general phase-type case (Section 7a) and next (Section 7b) a set of formulas covering the case of a two-step premium rule.1. From this it is straightforward to check that b**/(b+ . some key early references using distributions with a rational transform for applied probability calculations are Tacklind [373] (ruin probabilities) and Smith [350] (queueing theory).T)-1T-2t -. 7a Computing O(u) via differential equations The representation we use is essentially the same as the ones used in Sections 3 and 4. cf. 3.244 CHAPTER VIII.1. Lipsky [247] and Asmussen & O'Cinneide [41]. 0 Notes and references As noted in the references to section 4.1t = -/3a (9I .3 is taken). In Corollary VII. but the argument of [286] does not apply in any reasonable generality). the ruin probability(u) was found in explicit form for the case of B being exponential. premium rate p(r) at level r of the reserve {Rt} and claim size distribution B which we assume to be of phase-type with representation (E. of (6.5 is similar to arguments used in [29] for formulas in renewal theory. Much of the flavor of this classical approach and many examples are in Cohen [88]. a. see the Notes to VII.1. which is self-explanatory given Fig.7). See Fig.s. T).8 a(T-1 + (01.b*).T)-1)t = 8 (1 . see Asmussen & Bladt [29]. 7 Reserve-dependent premiums We consider the model of Chapter VII with Poisson arrivals at rate 0.3a (1 0 T -2 + 1 T -102 (9I + 02 1 -T)-1) t -P + 7.82b*.T)-1T. 7.la. . The proof of Proposition 6. VII. -/3aT-1(0I . A key tool is identifying poles and zeroes of transforms via Wiener-Hopf factorization. (for some remarkable explicit formulas due to Paulsen & Gjessing [286]. MATRIX-ANALYTIC METHODS .

Proof The first statement is clear by definition. in contrast to Section 3.I] I 8-0 { tq f Q(v) dvl t1 1 . we obtain V)(u) = P(m„ E E) = v(u)P(0. though still Markov.7. P(tl. is no longer time-homogeneous. In fact.tl < t2 < u.1 The difference from the case p(r) = p is that {m2}. the definition of {m8} depends on the initial reserve u = Ro. RESERVE-DEPENDENT PREMIUMS 245 Rt l0 -u --------------------. Let P(tl. O<.1 A(0) = v(u) and A'(t) = A(t)(T + tv(u . Ai(t) = P(mt = i).t)). Since v(u) = (vi(u))iEE is the (defective) initial probability vector for {m8}.1z I. Also. 0 < t < u. By general results on timeinhomogeneous Markov processes. >iEE Vi (U) is the ruin probability for a risk process with initial reserve 0 and premium function p(u + •). t2) = exp where Q(t) = ds [P(t.1) where A(t) = v(u)P(0. Figure 7. Given the v(t) have been computed.t2) be the matrix with ijth element P (mt2 =j I mtl = i). Define further vi(u) as the probability that the risk process starting from RD = u downcrosses level u for the first time in phase i. i.e. Note that in general >iEE Vi (U) < 1. t) is the vector of state probabilities for mt. t + s) . the A(t) and hence Vi(u) is available by solving differential equations: Proposition 7.u)e = A(u)e (7.

2 For i E E. Given A. those corresponding to state changes in the underlying phase process and those corresponding to the present jump of {Rt} being terminated at level u .t)).(u) p ( u) = . The intensity of a jump from i to j is tij for jumps of the first type and tivj(u .Qdt) vi(u) + vi'(u)p(u)dt + p(u) dt E{tji+tjvi(u)}. {mx} has jumps of two types.t). two things can happen: either the current jump continues from u + p(u)dt to u.4) jEE jEE Proof Consider the event A that there are no arrivals in the interval [0. Thus. the probability of downcrossing level u in phase i is 8ji(1 + p(u)dt • tii) + (1 .t) for the second.Sj i)p(u)dt • tji = Sji + p(u)tji dt. given A. dt]. 0 Thus. the probability of which is 1 -.t and being followed by a downcrossing. from a computational point of view the remaining problem is to evaluate the v(t).246 CHAPTER VIII. MATRIX-ANALYTIC METHODS However. Hence Q(t) _ T + tv(u . the probability of downcrossing level u in phase i for the first time is E vj (u + p(u)dt) (Sji + p( u)dt • tji + p(u)dt • tjvi(u)) jEE vi(u) + vi' (u)p(u)dt + p(u) dt E {tji + tjvi(u)} jEE Collecting terms. whereas in the second case the probability is p(u)dt • tjvi(u). the interpretation of Q(t) as the intensity matrix of {my} at time t shows that Q(t) is made up of two terms: obviously. the probability that level u + p(u)dt is downcrossed for the first time in phase j is vj (u + p(u)dt).(tai + vi(u) E vj(u)tjp (u) - Q + vj (u)tjip ( u). Given this occurs. the probability that level u is downcrossed for the first time in phase i is ai. Proposition 7. A'(t) = A(t)Q(t) = A(t)(T + tv(u . Given A'.3dt. 0 < t < u. or it stops between level u + p(u)dt and u. -vi. In the first case. we get vi(u) = aidt + (1 -. jEE . (7.

4) backwards for {va (t)}v>t>o.) -P"(AnBv) = P(AnB. (v) is given by the r. of (7. This yields v. Rt .5). Then P(B. F" etc. Since the processes Rt and Rt coincide under level B.0 as v -+ 00 we have P(A) -P"(A) = P(AnBv)+P(AnBv) -P"(AnB. supRt>v l t<7 I where o. starting from v"(v) = -..^ 0.)-P"(AnB. say.7.3 For any fixed u > 0.. then P(A n Bv) _ P"(A n BV'). we can first for a given v solve (7. When solving the differential equation in Proposition 7. Thus. From Section 3.s. Now since both P(A n Bv) -3 0 and P"(A n Bv) -. we have p(r) = p = vi (u) -0aTe. . RESERVE-DEPENDENT PREMIUMS 247 Subtracting v. consider a modification of the original process {Rt} by linearizing the process with some rate p. V . (u) for any values of u and v such that u < v.h. vi (U) = lim v= (u).) -+ 0 as v -+ oo. after a certain level v.i7rT-1/p.00 Proof Let A be the event that the process downcrosses level u in phase i given that it starts at u and let B" be the event By={o. denotes the time of downcrossing level u . Then pv(r) p(r) r < v p r>v ' and (no matter how p is chosen) we have: Lemma 7.) is the tail of a (defective) random variable so that P(Bv) -+ 0 as v -4 oo. we face the difficulty that no boundary conditions is immediately available. refer to the modified process.2. To deal with this.. and similarly P"(Bv) . <oo.. (u) on both side and dividing by dt yields the asserted differential u equation. P u which implies that v. Let p" (t). say.

MATRIX-ANALYTIC METHODS Next consider a sequence of solutions obtained from a sequence of initial values {v. i. numerically implemented in Schock Petersen [288]) and the present one based upon differential equations require both discretization along a discrete grid 0. as well p1(u)...zp1(u)/(1 . 1/n. (7.1.248 CHAPTER VIII. We recall from Propositon VII. To evaluate p1(u).6) We may think of process Rt as pieced together of two standard risk processes RI and Rte with constant premiums p1.7) f o (the integral is the contribution from {R. The trapezoidal rule used in [288] gives a precision of 0(n 3). < 0}). r<v r > v. cf. Thus we obtain a convergent sequence of solutions that converges to {vi(t)}u>t>o• Notes and references The exposition is based upon Asmussen & Bladt [30] which also contains numerical illustrations.V" M 0 . 2/n.z51(v)). the evaluation of Vi(u) requires q(u) = 1 . However.) are so. 3u etc. say. > 0} and the last term the contribu- tion from {R. The f iin in (7. such that Rt coincide with RI under level v and with Rt above level v.1a.1. typically the complexity in n is 0(n2) for integral equations but 0(n) for integral equations. the probability of ruin between a and the next upcrossing of v.x) dx f v(u)eT xt dx .7) equals -01 (v . 7b Two-step premium rules We now assume the premium function to be constant in two levels as in VII.e. 2u. while the fourth-order Runge-Kutta method implemented in [30] gives 0(n-5). where. p(r) P.10 that in addition to the O'(•). The precision depends on the particular quadrature rule being employed. v = u.. Recall that q(w) is the probability of upcrossing level v before ruin given the process starts at w < v.. which is available since the z/i'(. Then v(u) is the initial distribution of the undershoot when downcrossing level v given that the process starts at u. T). Therefore u pl(vvueTa t 1. Let ii'( u) = a+'ie(T+ta +^)"e denote the ruin probability for R't where a+ = a+i) = -laT-1/pi.. p2. assuming u > v for the moment. Corollary 3. for u > v the distribution of v . 0 < u < v.1.RQ (defined for or < oo only) is defective phase-type with representation (v(u). The algorithm based upon numerical solution of a Volterra integral equation (Remark VII. where v = inf It > 0 : Rt < v}.9.v v(u)eTat 1 1 . let v(u) = a+2ieiT +ta+>)(u-v). (u)}.q(v dx +( ) ) = ( ) ( q( )) vueTva (7.

v) + (2^ + 3v2 ea'(u " .e. From Example 3.7. Since µB = 5/21. so we consider the non-trivial case example p2 = 4 and p1 = 1.24e-v .(7 The arrival rate is (i = 3.01 (v .to+))1-1 {e{T®(-T-toy+ ))}„ . Then one gets X20 20 21 f 1ea1(u -v) + 1 3 3 ^ A 2(u e .v(u ) eTV e J v(u)eTxtz/)l (v .2.21 = ? yields 0(u) = 1.e-6u 35 .u . RESERVE-DEPENDENT PREMIUMS 1 .4 Let {Rt } be as in Example 3.x) dx} V 1 -1(v) f V v(u) eTxt.e-6v Let Al = -3 + 2V'2.2.v(u)eTve). I.x) dx 1 -^(v) ( 1 .and A2 = -3 . p2 < 3.8) equals v v(u)eTxta+2) e(T+ta +))( v-x)edx which using Kronecker calculus (see A. all quantities involved in the computation of b(u) have been found in matrix form.2V"2.v(u)eTVe .1 from which we see that pl (u) = 1 + 1 249 - 1 .8) The integral in (7.v) 1eai(u -v) + 7 7 1 e\2(u -v) 1 3 ^') eA2 (u.be the eigenvalues of T + to( 2 ).^1(v) 1 .24e. B is hyperexponential corresponding to -3 0 3 a-(2 2)' T= ( 0 7 t.4) can be written as (Y(u) ®a+)e(T+t°+>)°1 (T ® (-T . Example 7..jl (t ®e) Thus. (7. 01(u) _ 24 -u + 35 e-6u 1 35 e 4(u) _ 35 .

/2- ea1(u-") ./-2-) ea 1(u . Notes and references [30].250 CHAPTER VIII. 21 3 In particular.24e5v .24es" .+ it (3 4'I 1 ea2(u-v e1\2(u-") 7 + ( 32 +4. The analysis and the example are from Asmussen & Bladt . pi (u) = p12(u)/p1 l(u) where p1i(u) p12(u) 35e6v . 192esv + 8 P1 . and one gets 12e5" .1 V2 = 4e5"+6 35e6v .24e5v .7) we see that we can write pi (u) = v(u)V2 where V2 depends only on v.1. MATRIX-ANALYTIC METHODS From (7.b(v) = 192esv +8 35e6v + 168esv + 7* Thus all terms involved in the formulae for the ruin probability have been exu plicitly derived. ) e sv + ( 2v/2.21(35e6v .1)' ?.v)esv + 7 4_ 2.1 Thus.24es" .2 35e6v ..

Chapter IX

Ruin probabilities in the presence of heavy tails

1 Subexponential distributions

We are concerned with distributions B with a heavy right tail B̄(x) = 1 - B(x). A rough distinction between light and heavy tails is that the m.g.f. B[s] = ∫ e^{sx} B(dx) is finite for some s > 0 in the light-tailed case and infinite for all s > 0 in the heavy-tailed case. For example, the exponential change of measure techniques discussed in II.4, III.4-6 and at numerous later occasions require a light tail. Some main cases where this light-tail criterion is violated are:

(a) distributions with a regularly varying tail, B̄(x) = L(x)/x^α, where α > 0 and L(x) is slowly varying, L(tx)/L(x) → 1, x → ∞, for all t > 0;

(b) the lognormal distribution (the distribution of e^U where U ~ N(µ, σ²)) with density

  (1/(x√(2πσ²))) e^{-(log x - µ)²/(2σ²)};

(c) the Weibull distribution with decreasing failure rate, B̄(x) = e^{-x^β} with 0 < β < 1.

For further examples, see I.2b.

The definition B[s] = ∞ for all s > 0 of heavy tails is too general to allow for general non-trivial results on ruin probabilities, and instead we shall work within the class S of subexponential distributions. For the definition, we require that B is concentrated on (0, ∞) and say then that B is subexponential (B ∈ S) if

  B̄*²(x)/B̄(x) → 2,  x → ∞.    (1.1)

Here B*² is the convolution square, that is, the distribution of X1 + X2 where X1, X2 are independent r.v.'s with distribution B. In terms of r.v.'s, (1.1) then means P(X1 + X2 > x) ~ 2P(X1 > x). To capture the intuition behind this definition, note first the following fact:

Proposition 1.1 Let B be any distribution on (0, ∞). Then:
(a) P(max(X1, X2) > x) ~ 2B̄(x), x → ∞;
(b) liminf_{x→∞} B̄*²(x)/B̄(x) ≥ 2.

Proof By the inclusion-exclusion formula, P(max(X1, X2) > x) is

  P(X1 > x) + P(X2 > x) - P(X1 > x, X2 > x) = 2B̄(x) - B̄(x)² ~ 2B̄(x),

proving (a). Since B is concentrated on (0, ∞), we have {max(X1, X2) > x} ⊆ {X1 + X2 > x}, and thus the liminf in (b) is at least liminf P(max(X1, X2) > x)/B̄(x) = 2. □

The proof shows that the condition for B ∈ S is that the probability of the set {X1 + X2 > x} is asymptotically the same as the probability of its subset {max(X1, X2) > x}. That is, in the subexponential case the only way X1 + X2 can get large is by one of the Xi becoming large. We later show:

Proposition 1.2 If B ∈ S, then

  P(X1 > x | X1 + X2 > x) → 1/2,   P(X1 ≤ y | X1 + X2 > x) → (1/2)B(y).

That is, given X1 + X2 > x, X1 is w.p. 1/2 'typical' (with distribution B) and w.p. 1/2 it has the distribution of X1 | X1 > x. In contrast, in the light-tailed case, if X1 + X2 is large, then (with high probability) so are both of X1, X2, but none of them exceeds x; this behaviour is illustrated in the following example:

Example 1.3 Consider the standard exponential distribution, B̄(x) = e^{-x}. Then X1 + X2 has an Erlang(2) distribution with density ye^{-y}, so that B̄*²(x) ~ xe^{-x}, x → ∞. Thus the liminf in Proposition 1.1(b) is ∞. Further, given X1 + X2 > x, one can check that the conditional distribution of X1/x converges to that of U, where U is uniform on (0, 1). □
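The defining relation (1.1) is easy to examine numerically. The following small sketch (an added illustration with arbitrarily chosen parameters, not part of the original text) estimates P(X1 + X2 > x)/B̄(x) by simulation for a Pareto distribution and compares with the exact ratio 1 + x of Example 1.3 for the standard exponential distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10**6
alpha = 1.5                       # Pareto index: B-bar(x) = x**(-alpha) for x >= 1

X1 = rng.pareto(alpha, N) + 1.0   # i.i.d. Pareto samples on (1, infinity)
X2 = rng.pareto(alpha, N) + 1.0

for x in [10.0, 50.0, 200.0]:
    ratio = np.mean(X1 + X2 > x) / x ** (-alpha)   # estimates B*2-bar(x)/B-bar(x)
    # For the standard exponential, the same ratio equals 1 + x exactly (Erlang(2) tail),
    # so it grows without bound, in contrast to the subexponential limit 2.
    print(f"x = {x:6.0f}   Pareto ratio ~ {ratio:5.2f}   exponential ratio = {1 + x:6.1f}")
```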

y] and (y. If lim sup B(x . Finally lim inf B(x .5)x)' + 0 _ 2 L(x)l xa (1-6)- Letting S 10. Let 0 < 5 < 1/2.z B(x) . yo] as X -+ 00.S)x + B(Sx)2 < lim sup B(x) x-aoo B(x) lim sup 2L((1 x-^oo . we get BZ(x)) > 1 + B(y) + B(B(-)y) (B(x) . The uniformity now follows from what has been shown for y = yo and the obvious inequality y E [0. or they both exceed Sx. [In terms of r.B*(n ) B(dz) (1.B(y)) .B(y) = 2.S)x.5 If B E S. 253 Proof Assume B(x) = L(x)/xa with L slowly varying and a > 0.B*n(x . x].2) B(x) B(x ) B(x) Jo with n = 1 and splitting the integral into two corresponding to the intervals [0. we therefore get lim sup B*2(x)/B(x) > 1+B(y)+ 1 . B( 0 .4 Any B with a regularly varying tail is subexponential.xIX > x converges in distribution tooo. SUBEXPONENTIAL DISTRIBUTIONS Here is the simplest example of subexponentiality: Proposition 1. We now turn to the mathematical theory of subexponential distributions.'s: if X . Hence lim sup a--+oo B*2(x) 2B((1 . then the overshoot X .y)/B(x) > 1. then B(B(x)y) -* 1 uniformly in y E [0.v. This follows since the probability of the overshoot to exceed y is B (x + y)/B(x ) which has limit 1. 1 < B(x ) B( x) Y) < B( 0). then either one of the Xi exceeds (1 .] Proof Consider first a fixed y. we get limsupB*2(x)/B(x) < 2. a contradiction.yo].1. Proposition 1. If X1 + X2 > x.6)x)/((1 .B E S. and combining with Proposition u 1.1(b) we get B*2(x)/B(x) -* 2.y)/B(x) > 1 since y > 0. Using the identity B*(n+1)(x) = 1+ + 1)(x) 1+ 2 1 .

z) B(x) Here the second integral can be bounded by B*n(y) B(x) .2. Proposition 1. B*(n+1) (x I x-y + Jxx y) W. The case n = 2 is just the definition.B(x . Proof For 0 < 5 < e. Proof We use induction.254 CHAPTER IX..1) for all large n so that B(n) > cle-6n for all n.z) B(dz) _y B(x) 111 Lx B .6 If B E 8.5 that B(n) > e-6B(n . HEAVY TAILS Corollary 1. Given e > 0.y) sup v>o B(v) B(x) which converges to 0 by Proposition 1. 0 Proof of Proposition 1.z) B(dz) (n + O(e)) ^x JO B(x) (n + 0(0) I B (x) . P(X1 > xIX1 + X2 > x) _ P(Xi > x) _ B(x) 1 P(X1 + X2 > x) B2(x) 2 1 y P(X1<y X1 + X2 > x) B(x .z) B(dz). choose y such that IB*n(x)/B(x) .7 If B E S. then e"R(x) -* oo. so assume the proposition has been shown for n. O The following result is extremely important and is often taken as definition of the class S. b[c] = oo for all e > 0. B(x) \Jo _ B(x .z) B(dz) 2B(x) o rv 2 0 2 using Proposition 1.2).B*2 (x) B(x) (x .z ) -(x ) = 1 + (^ B(x . then for any n B*n(x)/B(x) -* n. x oo. we have by Proposition 1.5 and the induction hypothesis.5 and dominated convergence. its intuitive content is the same as discussed in the case n = 2 above. Then by (1. The first integral is y B(x . This implies B(x) > c2e-5x for all x. and this immediately yields the desired conclusions.nI < e for x > y.

y)Ai(dy) v) f o . Then by (1.y)Ai(dy) = (x)o(1) (1. SUBEXPONENTIAL DISTRIBUTIONS 255 Here the first term in {•} converges to 1 (by the definition of B E S) and the second to 0 since it is bounded by (B(x) . Proof Define 5 > 0 by (1+5)2 = 1+e.1. Proof Let X1. an = supx>o B*n(x)/B(x).5 easily yields P(X1 + X2 > x. Proposition 1.z) B(dz ) + sup < 1 + sup f x<T B ( x) x>T 0 B(x .(al + a2)B(x).v. X2 be independent r. A2 be distributions on (0.ala2B(x)2 which can be neglected. Then Al * A2 (x) .2).X2 > x-v) < A1(x-v)A2(x -v) . Then Al * A2(x) = P(X1 + X2 > x).z) B(dz) < 1 + A + an(1 + d) . then there exists a constant K = KE such that B*n(x) < K(1 + e)nB(x) for all n and x.9 Let A1. an+1 fX B*n( *n(x .z) B(x .z) B(dz) x . Xi <= v Ai (x .'s such that Xi has distribution Ai.y)B(dy) = B(x)ov (1)• v (1. it follows that it is necessary and sufficient for the assertion to be true that JX_VA (x .3) Using the necessity part in the case Al = A2 = B yields f x-v B(x . a2 with a1 + a2 > 0.4) . For any fixed v.B(x .z) B(x) < 1 + A + an sup f x B(x .i). choose T such that (B(x)-B*2(x))/B(x) < 1 + b for x > T and let A = 1/B(T). x>T o B(x) The truth of this for all n together with al = 1 implies an < K(1 + 5)2n where K = (1 + A)/e.X1 > x-v. oo) such that Ai (x) _ aiB(x) for some B E S and some constants al.y))/B(x). 0 Proposition 1.ajB(x)Ai(v) = ajB( x)(1+o„(1)) (j = 3 . Since P(X1+X2 > x.0 completes the proof. 0 Lemma 1. Combining these estimates and letting a 4.8 If B E S. e > 0.

a2 = 1. it is easy to see that if L1. B2 E S. i = 1.2A(x).3) follows if CHAPTER LX.s. V (1.B(x) Proof Take Al = A.y)B(dy). L2 slowly varying. B1 * B2 (x) .13 Let B have density b and failure rate A(x) such that . Recall that the failure rate A(x) of a distribution B with density b is A(x) = b(x)/B(x) Proposition 1.10 The class S is closed under tail-equivalence. That is. We next give a classical sufficient (and close to necessary) condition for subexponentiality due to Pitman [290]. A(x) = o(B(x)). u Corollary 1.2aB(x) . it should hold that B1 * B2 E S and B1 * B2 (x) . then so is L = L1 + L2.2.(x) is decreasing for x > x0 with limit 0 at oo. Hence Corollary 1.12 Assume that Bi(x) = Li(x)lxa. Then L = L1 + L2 is slowly varying and B1 * B2(x) sim L(x)/x«.h. u It is tempting to conjecture that S is closed under convolution.5) Here approximately the last term is B(x)o„(1) by ( 1.v)Ai(v) .5) becomes x B(x . the l. a1 = a2 = a yields A*2(x) .11 Let B E S and let A be any distribution with a ligther tail. B1 * B2 E S does not hold in full generality (but once B1 * B2 E S has been shown. Proof Taking Al = A2 = A. HEAVY TAILS 'V-V B(x .Bl (x) + B2 (x) when B1.Ai(x . with a > 0 and L1. whereas the two first yield B(x)(Ai(v) .Bl (x) + B2 (x) follows precisely as in the proof of Proposition 1.aiB(v)) = B(x)o„(1). . then A E S. A2 = B so that a1 = 0.9). However. of (1. L2 are slowly varying.4). if q(x) aB(x) for some B E S and some constant a > 0. Then A * B E S and A * B(x) .256 Now (1. Then B E S provided fo "O exA(x) b(x) dx < oo. In the regularly varying case. That is.v)B(v) + -_'U Aq(x . f " By a change of variables.y)Ai(dy) = B(x)o„(1). u Corollary 1.

13 works in this setting. we first quote Karamata's theorem (Bingham. Thus B*2(x )/ B(x) . subexponentiality has alrady been proved in Corollary 1. SUBEXPONENTIAL DISTRIBUTIONS 257 Proof We may assume that A(x) is everywhere decreasing (otherwise. f ' L(y) dy . Goldie & Teugels [66]): Proposition 1.`(x)b(x) is integrable. a(x) = ax0-1. In the regularly varying case. and exa(x)b(x) = (3x0-1e-(1-0)x9 is integrable. Since ) (x . we can use the same domination for the second integral but now the integrand has limit 0 .12. Example 1. B*2(x) .y) < yA(x .y) dy.e-009x-v)2/2a2/(x 2irv2) logx ( ) 't (-(logx . replace B by a tail equivalent distribution with a failure rate which is everywhere decreasing).U) /or) v 2x This yields easily that ex. elementary but tedious calculations (which we omit) show that A(x) is ultimately decreasing.1 has limit 1 + 0.y ) b(y)dy = B (x) o ox _ J = ox/2 eA( x)-A(x-y ).. Thus.3 < 1.1.2).A(y)a(y ) = ev'(y) b(y).. the u lognormal distribution is subexponential. proving B E S.y) y\(y)• The rightmost bound shows that the integrand in the first integral is bounded by ey"(v). Thus.16 For L(x) slowly varying and a > 1.(x . Then B(x) = e-A(x). the first integral has limit 1 . Then b(x) = Ox0-le-xp. Thus by dominated convergence .1 B(x) eA( x)-A(x-v )-A(y)A(y) dy f B(x . L(x) y° (a .1)xcl-1 .14 Consider the DFR Weibull case B(x) = e-x0 with 0 <. an integrable function by assumption. Further.y) < A (y) for y < x/2. Example 1.A(x .(y) dy. Thus A(x) is everywhere decreasing.y) -* 0. 0 A(x) . the DFR Weibull distriu bution is subexponential. The middle bound shows that it converges to b(y) for any fixed y since \ (x .A(y)\(y) dy + fox/ 2 eA(x ).15 In the lognormal distribution. x .A(x-y)-A ( y). Define A(x) = fo . By (1. To illustrate how Proposition 1. Jo For y < x/2.

(c) Under the assumptions of either ( a) or (b). Then 7(x) . 'y(x) = EXix>.258 From this we get CHAPTER IX.1)])a 1 1 . However. HEAVY TAILS Proposition 1.1/A(x) and P(X ixil'Y (x) > y) -* e-'.18 (a) If B has a density of the form b(x) = aL(x)/xa with L(x) slowly varying and a > 1. let X W = X . Proof ( a): Using Karamata's theorem.a/x. yo] .6) EX(x) .L(x)/x" and )t(x) .1)X(x)/x > y) = P(X > x[1 + y/(a .y(x)B(x).1))^ ' (b) Assume that for any yo )t(x + y/A(x)) 1 A(x) uniformly for y E (0. Then: Proposition 1. More precisely.ea b(x) is integrable. the monotonicity condition in Proposition 1. f O B(y) dy .1))a . We conclude with a property of subexponential distributions which is often extremely important: under some mild smoothness assumptions.1)] I X > x) L(x[1 + y/(a . Thus exa(x)b(x) . (1 + y/(a . then 7(x) x/(a .13 may present a problem in some cases so that the direct proof in Proposition 1.E(X .17 If B has a density of the form b(x) = aL(x)/x°+1 with L(x) slowly varying and a > 1.1) and P(X (.xjX > x.x)+ _ 1 °° P(X > x) P(X>x )J L PX >y)dy 1 x L(y)/y-dy L(x)/((a1)x'-1) x )l ° J °° ( ()l a x a-1 Further P ((a . we get (1. .)/-Y(x) > y) (1 + y/(a . the overshoot properly normalized has a limit which is Pareto if B is regularly varying and exponential for distributions like the lognormal or Weibull.1)]) xa L(x) (x[1 + y/(a . then B(x) .4 is necessary in full generality.

then Vi(u) P Bo(u). P The proof is based upon the following lemma (stated slightly more generally than needed at present). Theorem 2 .14.EK G(u).. Kliippelberg & Mikosch [134].t be the claim surplus at time t and M = sups>0 St.1/. We get p(yl+.A(x) I X > x) = exp {A(x) .n-0 1•P(K= n)•n = EK. Let St = Ei ` Ui . Then P(Y1 + • • • + YK > u) .. 1. with EzK < oo for some z > 1. Lemma 2. It is trivially verified to hold for the Weibull. Y2. Recall that B0 denotes the stationary excess distribution.f yl 0 0 = exp {-y (1 + 0(1))} 0 fY A( x + u /A( x)) a(x) du } The property (1. We assume p = /3µB < 1 and are interested in the ruin probability V)(u) = P(M > u) = P(r(u) < oo). be i. nG(u).2 Let Y1. 0 G(u) L G(u) .2. Notes and references A good general reference for subexponential distribution is Embrechts. cf. The remaining statement (1. THE COMPOUND POISSON MODEL 259 We omit the proof of (c) and that EX (x) .8) in (b) then follows from P (A(x)X (x) > y) = F(X > x + y/..7) is referred to as 1/A(x) being self-neglecting.nn-. Bo(x) = f0 B(y) dy / µB. St > u}. d.A(x + y/A(x))} =a(x) a(x + x) dx = ex p ex P . with common distribution G E S and let K be an independent integer-valued r. 2 The compound Poisson model Consider the compound Poisson model with arrival intensity /3 and claim size distribution B. i.(x). u -a oo.v. .and lognormal distributions .15. r(u) = inf it > 0. Examples 1.+YK> u) = ^•P(K = n)G* n(u ) -.1 If Bo E S.. . and that for each Proof Recall from Section 1 that G*n (u) z > 1 there is a D < oo such that G*n(u) < G(u)Dzn for all u.

u x+a Notes and references Theorem 2. Weibull) one has Bo(x ( B(x) .13).. The approximation in Theorem 2. x -4 00. and for the lognormal and Weibull cases it can be verified using Pitman 's criterion (Proposition 1. In general: Proposition 2. we have fx B(y)dy = a B0 (x) > lim inf lim inf x-+oo B(x) . as well as examples where B ¢ S.2) M = Yl + • • • +YK where the Yt have distribution Bo and K is geometric with parameter p.x^ ) B(x) _ f or ( lox . a]. Bo ¢ S. r(1/Q) xl-Qe-xp B(x) = e-x' From this . The Pollaczeck-Khinchine formula states that (in the set-up of Lemma 2. Bo is more heavy-tailed than B . the result follows immediately from Lemma 2. Bo E S. see Abate. The problem is a very slow rate of convergence as u -^ oo. _ B(x^sx Bo(x) µ8 I aoB(y )dy = (^) .µB(01 . then Bo(x)/B(x) -+ 00.1.2.260 CHAPTER IX.1 is essentially due to von Bahr [56].3 If B E S. See also Embrechts & Veraverbeeke [136]. u The condition Bo E S is for all practical purposes equivalent to B E S. Note that in these examples . Bo E S is immediate in the regularly varying case. lognormal . in our three main examples (regular variation .p)p'. P(K = k) = (1. Borovkov [73] and Pakes [280].. . mathematically one must note that there exist (quite intricate) examples where B E S.µ J ) . The tail of Bo is easily expressed in terms of the tail of B and the function y(x) in Proposition 1. HEAVY TAILS u using dominated convergence with >2 P(K = n) Dz" as majorant.x-400 PBB(x) PB Leta-+oo. (2.1)xa-1' vxe-(109x-11)2/202 2 +° /2 µB = eµ Bo(x) eµ+O2/2(log x)2 27r' = µB = F(1/0 ) Bo(x 1 ) .1 is notoriously not very accurate. Proof of Theorem 2.?(xµ 8 (x). For some numerical studies.1) In particular .18. Since EK = p/(1. However. Proof Since B(x + y)/B(x) -* 1 uniformly in y E [0.p) and EzK < oo whenever pz < 1.
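Since B̄0 is explicit in these three examples, the approximation of Theorem 2.1 is trivial to evaluate numerically. Here is a small illustrative sketch (added here, with hypothetical parameter values) for Pareto claims; as discussed in the surrounding notes, the asymptotic approximation can be quite inaccurate for moderate u, so such numbers are indicative only.

```python
import numpy as np

# Pareto claims (hypothetical parameters): B-bar(x) = (1 + x)**(-a), a > 1,
# so that mu_B = 1/(a - 1) and B0-bar(u) = (1 + u)**(-(a - 1)).
a = 2.5
beta = 1.0                      # Poisson arrival rate
mu_B = 1.0 / (a - 1.0)
rho = beta * mu_B               # must be < 1

def psi_heavy_tail_approx(u):
    """psi(u) ~ rho/(1 - rho) * B0-bar(u), cf. Theorem 2.1."""
    B0_bar = (1.0 + u) ** (-(a - 1.0))
    return rho / (1.0 - rho) * B0_bar

for u in [10.0, 100.0, 1000.0]:
    print(u, psi_heavy_tail_approx(u))
```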

The main result is: Theorem 3 . Based upon ideas of Hogan [200]. let t9+ = i9(0) be the first ascending ladder epoch of {Snd> }. To Bo(u) u -+ 00. E. (3. Define further 0 = IIG+II = P(r9+ < oo).9+ < oo) = P(S. ... G+ (A) = P(Sq+ E A.Ti. [279]. M = sup s$ . also a second order term is introduced but unfortunately it does not present a great improvement..] The proof is based upon the observation that also in the renewal setting.y)/B (x) -> 1 uniformly on compact y -internals. T1 the ith interarrival time and Xi = U. + Xn. Asmussen & Binswanger [27] suggested an approximation which is substantially better than Theorem 2. Thus G+ is the ascending ladder height distribution (which is defective because of PB < PA). This shows that even the approximation is asymptotically correct in the tail. T+ < oo) where r+ = T1 + • • • + T. in [219] p. 1 Assume that (a) the stationary excess distribution Bo of B is subexponential and that (b) B itself satisfies B(x .1 when u is small or moderately large.. Let U= be the ith claim . i.1) this end . there is a representation of M similar to the Pollaczeck-Khinchine formula.} Then ik(u) = F ( M > u) = P(i9 (u) < oo). In [1].e. t9(u) = inf {n : Snd> > u} ..g. i=1 . 195 there are numerical examples where tp(u) is of order 10-5 but Theorem 2. Kalashnikov [219] and Asmussen & Binswanger [27]. THE RENEWAL MODEL 261 Choudhury & Whitt [1].. Then l/i(u) 1 P P [Note that (b) in particular holds if B E S. Snd) = Xl +.1 gives 10-10. We assume positive safety loading. Somewhat related work is in Omey & Willekens [278].. p = iB /µA < 1. one may have to go out to values of 1/'(u) which are unrealistically small before the fit is reasonable. Then K M=EY.1.3. {n= 0.+ E A.y + as usual denotes the first ascending ladder epoch of the continuous time claim surplus process {St}. 3 The renewal model We consider the renewal model with claim size distribution B and interarrival distribution A as in Chapter V.

B(x). Proof By dominated convergence and (b). U_ (dy) is close to Lebesgue measure on (. Lemma 3 .y+ given r+ < oo).FI(x) /IPG_I. we will use the fact that the proof of (3. A. 0] normalized by IPG_ I so that we should have to G+(x) . oo) = F(S.Ti). the contribution from the interval (. Let further 19_ _ inf {n > 0: S^d^ < 0} be the first descending ladder epoch.2). u -a 00.i. G_(A) = P(S.1) is equivalent to P(M > u) " -. As for the compound Poisson model.1) holds for a general random walk satisfying the analogues of (a).d)) E A) denote the pre-19+ occupation measure and let and U_ = Eo G'_" be the renewal measure corresponding to G_.262 CHAPTER IX. Write G+( x) = G+ ( x. and hence FI(x) .d. Then 0 0 F( x . d+ < oo). 0 The lemma implies that (3.3) and we will prove it in this form (in the next Section. 0] to the integral is O(F(x)) = o(FI(x)). Lemma 3 . HEAVY TAILS where K is geometric with parameter 9. B(x) _ J O° B(B(x)y) A(dy) f 1 . P(K = k) = (1 . x -* oo..y_ E A) the descending ladder height distribution (IIG -II = 1 because of PB < P A) and let PG_ be the mean of G_. cf.3 G+ (x) .y) dy = 1 Pi (X) oo IPG_ I . x -+ oo. The heuristics is now that because of (b). A(dy) = 1.g+ > x. (b) and does not rely on the structure Xi = Ui .y) R+(dy ) _ j (x_y)U_(dY) G+ (x) = J 00 00 (the first identity is obvious and the second follows since an easy time reversion argument shows that R+ = U_.FI(u).N.PBBo(x).Y2. whereas for large y . this representation will be our basic vehicle to derive tail asymptotics of M but we face the added difficulties that neither the constant 9 nor the distribution of the Yi are explicit. FI (x) _ fz ° F(y) dy.. Let F denote the distribution of the Xi and F1 the integrated tail.(.1 IPG_ I / F(x .2 F(x) . x > 0. with distribution G+/9 (the distribution of S. are independent of K and i. Proof Let R+(A) = E E'+ -' I(S.9)9'' and Y1.oo.. (3.

u Proof of Theorem 3. choose N such that F(n .1. We then get lim sup G+(x) x-ro0 Fj(x) < lim sup X---)00 o F(x . the proof is complete. By Lemma 3. F(Y= > x) FI(x)/(OIp _ 1). then by Blackwell 's renewal theorem U_ (-n . In the lattice case. (3.F[s] = (1 . > (1 .O-[s])(1 .0)0k k I(u) A.UG_ I x-.9)IpG_ I Differentiating the Wiener-Hopf factorization identity (A.9) 1 . THE RENEWAL MODEL 263 We now make this precise.1)/F(n) < 1 + e for n > N (this is possible by (b) and Lemma 3.1.1. and in the last that FI is asymptotically proportional to Bo E S. and that U_(-n . -n] is just the probability of a renewal at n.2) yields 00 F F I (u) P(M > u) _ E(1 .1.2).(dy) fN FI ( x) + lim sup Z-Y00 N F(x . -n] F1 ( n=N _1 1+e E F(x+n) 0 + limsup x-r00 FI(x) FAG.e) z lim inf G+(x) - FI (x) Ip G_ I Letting a 10.I n=N (1 E)2 r00 F(x + y) dy + e) lim sup . we can assume that the span is 1 and then the same conclusion holds since then U-(-n .3.1 I .G+[s]) . If G_ is non-lattice. Similarly.oo Fj(x) N J (1 +6)2 I {IC_ I lim sup X-400 FI(x + N) _ (1 + e)z (x) I Pi µ G_ I Here in the third step we used that (b) implies B(x)/Bo(x) -+ 0 and hence F(x)/FI(x) -4 0. Hence using dominated convergence precisely as for the compound Poisson model. -n] -+ 1/I µG_ I.3. Given e.=1 BIp G_ I (1.y) U. 0] x-+00 FI(x) 00 + lim up 1 x) E F(x + n) U_ (-n . -n] < (1 + e)/1µc_ I for n > N.1.y) U_ (dy) 00 FI (x) < lim sup F(x) U-(-N.

IIG+II)µc_ = -(1 . and {Su.SS(u)}n=o.So(u)) are available. Notes and references Theorem 3.(u)+n .l.AB i-P We conclude by a lemma needed in the next section: Lemma 3 .. 1-0(0) But since P(M > u .yiui_1.4 For any a < oo. .0)ua_ . with roots in von Bahr [56] and Pakes [280].u)) = o(P (M > u)) = o(FI(u)). u). u))..1 is due to Embrechts & Veraverbeke [136].Se(u)_1 < a) = o(Fj(u)).So( u)_1 < a) < P (w(u) < oo)j/i(0) < 0(0) P(M E (u . 4 Models with dependent input We now generalize one step further and consider risk processes with dependent interclaim times.2. allowing also for possible dependence between the arrival process and the claim sizes. see Asmussen & Kliippelberg [36].a.a.(1 .4 on the joint distribution of (S.264 and letting s = 0 yields CHAPTER IX.a) N P(M > u). we have P(M E (u . Sty(u) .Sty(u)_I < a} we have w(u) < oo. Mn < u}. Then P(M E (u . P(M > u.1)6+[0] . Proof Let w(u) = inf {n : Sid) E (u .a. S+q(u) . on the set {M > u. must attain a maximum > 0 so that P(M > u. u)) > P(w(u) < oo)(i -lp (0))• On the other hand.a. HEAVY TAILS -µF = -(1 . In view of the `one large claim' heuristics it seems reasonable to expect that similar results as for the compound Poisson and renewal models should hold in great generality even when allowing for such dependence. FJ(u) UBBO(U) PBo(u) N = (1-0)Ipc_I JUA .. S+9(u) . Note that substantially sharper statements than Lemma 3. Therefore by Lemma 3.

and apply it to the Markov-modulated model of Chapter VI... and the distribution of {Sxk+t . The idea is now to observe that in the zero-delayed case. Assume that the claim surplus process {St}t>o has a regenerative structure in the sense that there exists a renewal process Xo = 0 < Xl <.... Define S. Theorem 4.1 except for the first one) is a random walk.1.. We give here one of them..d. assume pp.1 based upon a regenerative assumption. (viewed as random elements of the space of D-functions with finite lifelengths) are i. M.. 0o(u) etc.4.. (corresponding to the filled circles on Fig. {Sn}n=o.n n=0. 4.. Thus the assumption . The zero-delayed case corresponds to Xo = Xl = 0 and we write then F0. Figure 4. 2..1 = max k=0. For further approaches. We let F* denote the Po-distribution of Si. examples and counterexamples..4 below. see [47]. MODELS WITH DEPENDENT INPUT 265 Various criteria for this to be true were recently given by Asmussen. G(x) (4. .1) . We return to this point in Example 4.i.F*(X) = P0(Si > x) .Sxk}o<t<xk+1-xk is the same for all k = 1. {SX1+t . M = sup St..X1 is the generic cycle.. such that {SXo+t - SXo}0<t< X 1-Xo . t>0 S.. Schmidli & Schmidt [47].1 Note that no specific sample path structure of {St} (like in Fig..X2 < . E0. = Sx...1 where the filled circles symbolize a regeneration in the path..Sxi}0<t<x2-Xl . M* = max S.1..1) is assumed. +1. 4.. See Fig. 4. < 0 and EoX < oo where X = X2 .

4) liminf u->oo F(M > u) .2.Sxn = sup Sxn+t . u -p 00.S..2) to show F(M* > u) > 1. Proof Since M > M*. Fo(Si > X). Since clearly M(x) > Sl . HEAVY TAILS for some G such that both G E S and Go E S makes (3.3) hold. jF11 F* (U). (4.2 Theorem 4.3) where Mnx) = sup o<t<xn +1 -X.3) applicable so that F(M* > u) 141 F*(u). (4. Sxn +t .* -i o<t<xn+1-x. it suffices by (4. The one we focus on is Fo (Mix) > x) . --------------N N Xi=0 N Figure 4..1 Assume that (4. the assumption means that Mix) and Sl are not too far away. (4. See Fig. 4..266 CHAPTER IX.1) and (4..2) Imposing suitable conditions on the behaviour of {St} within a cycle will then ensure that M and M* are sufficiently close to be tail equivalent. Then '00 (u) = Fo(M > u) .

e)Po (M > U). S.+Mn+1>u} 267 (note that {M> u} = {3(u) < oo}).Po (M* > u.( u)-1 > a) 00 1: Po(Mn<u. 0 yields (4..2. To this end. Given e > 0. /3(u) = inf{n=1.6) 1 p pBo(u) u where B is the Palm distribution of claims and p ..5) which follows since Po (M > u. MW O(u)+1 < a) IN ( U n=1 A1..S.4. 2..Sn 0<t<x„+j ( 1 .. choose a such that Po(Si > x ) > (1 . Mn+l > a V (u . Letting first u -+ oo and next e . x > a. Po(M* > u) .Sn+1-Sn>aV(u-Sn*)) n=1 00 > (1-E)EPo(Mn<u. u))/P(M* = 0) = o(Po(M* > u))... We shall use the estimate Po(M > u) Miu^+ 1 < a) = o(Po (M > u)) (4.Mn +1 >aV(u n=1 00 -S. Then by Lemma 3. Theorem 4. Let a > 0 be fixed. )) > (1 . M^xu)+l > a) .(1 .a.E) Po ( n max St u.1 = limti00 St/t.(u) .a.: S.: Sn > u} .e)Po (M > u. E (u . Under suitable conditions .4.4). u)} < P(M* E (u .1 can be rewritten as 00 (U) (4. MODELS WITH DEPENDENT INPUT Define 79* (u) = inf {n = 1 .e)Po (MMX> > x). . assume the path structure Nt St = EUi-t+Zt i=1 .

p) Ju P Bo(u) 1-p 0 .6) holds with p = .2 Assume that {St} is regenerative and satisfies (4. Corollary 4.268 with {Zt} continuous. and ENX Ui . we get 00 (u) 1 IPF.4). and also for Mix) since Nx FNX U. Then the Palm distribution of claims is B(x) = E N Eo 0 I( U1 < x) . (ii) EozNX < oo for some z > 1.'s order EoNx • B(x).I u J Po(Sl > x) dx 1 EoNxB(x) dx EoX(1 . Proof It is easily seen that the r. and the rest is just rewriting of constants: since p = 1+tlim St = 1+ .1 is in force. (iii) For some o -field Y. independent of {> CHAPTER IX. Mix) < > UE + i=1 o<t<x Thus Theorem 4. oX (see Proposition A1.v.6 below. a4' 0. are F-measurable and NX Po J:U=>x i=1 (iv) Po sup Zt > x / (0:5t<x o(B(x)) Then (4.Q = EoNx/EoX.8) x Write .7). cf. X and N. since the tail of Zx is lighter than B(x) by (iv). Assume further that (i) both B and Bo are subexponential.3PB. the proof of Lemma 4. i=1 (4. HEAVY TAILS N` U. The same is true for Sl.} and satisfying Zt/t N.X both have tails of sup Zt.

< 1. note that the asymptotics of i/io( u) is the same irrespective of whether the Brownian term Zt u in St -is present or not. More precisely. We consider the case where one or more of the claim size distributions Bi are heavytailed. . 1) is Poisson with rate /3 = fo /3(s) ds so that (ii) holds. Thus we conclude that (4. .5 Consider the Markov-modulated risk model with claim size distributions satisfying (4. i=1 B = >2 7riaiBi i=1 and we assume p = 01-4 B = Ep ri/3ipB. of claims arriving in [0.. (iii) is obvious.0 (thus (iv) is trivial).4.3 As a first quick application. i. in particular light-tailed. X2 = 1.. . then (iv) holds since the distribution of supo<t<i Z(t) is the same as that of I Zl 1. we assume that B E S. Taking again Xo = Xi = 0. Assume that B E S.6 with arrival rate /3(t) at time t (periodic with period 1) and claims with distribution B (independent of the time at which they arrive). . > 0. consider the periodic model of VI. Then (4. X3 = 2. X3 = 2. Bo E S. The key step of the proof is the following lemma. we conclude just as in Example 4.6) holds. 3 The average arrival rate / and the Palm distribution B of the claim sizes are given by P P Q = ir i/i. Theorem 4.(NX).t} is standard compound Poisson and {Zt} an independent Brownian motion with mean zero and variance constant a2. MODELS WITH DEPENDENT INPUT 269 Example 4 .6) u holds.9). In particular. Zt . The regenerative assumption is satisfied if we take Xo = Xi = 0. We now return to the Markov-modulated risk model of Chapter VI with background Markov process {Jt} with p < oo states and stationary distribution 7r.4 Assume that St = Zt . The number N. Bo E S. The arrival rate is /3i and the claim size distribution Bi when Jt = i. and for some constants ci < oo such that cl + • • • + c. (i) holds.e. and taking F = o.6) holds. X2 = 1.t + EN'I Ui where {>N`1 Ui . we will assume that lim B2(x) = ci x-+oo G(x) for some distribution G such that both G and the integrated tail fx°O G(y) dy are subexponential . Again .. Example 4 .3 that (4...

2 .F-measurable.. we can define the regenerations points as the times of returns to i. cp with cl + > 0 it holds that Fi(x) .. Thus dominated convergence yields ( P(Yo>x P(Yo>x . oo) such that G E S and some c1. and F a a-algebra such that (N1. 6 Let (N1.ciG(x). i-1 = E\ G(x) In the general case.. P P P(YX and > x I. as x -a oo. Then P P(Yx > x) . and that for some + cp distribution G on [0. i=1 Proof Consider first the case X = 0. If Jo = i. Let {Fi}t=1 P be a family of distributions on [0. An easy conditioning argument then yields the result when Jo is u random. .^•) G(x) P -^ E ciNi = C. 1. . Assume EzN-1+"'+Np < oo for some z > 1 and all i.F) = P(Yo > X+x I •^) G (x +x)>2ciNi i=1 .5.. NP ) and X are . X > 0 a r.270 CHAPTER IX.X i=1 j=1 where conditionally upon F the Xi. The same dominated convergence argument completes the proof. Markov-modulation typically decreases the adjustment coefficient -y and thereby changes the order of magnitude of the ruin . . .."+Np .. are independent with distribution Fi for Xij. It follows by a slight extension of results from Section 1 that P P(Yo > x I Y) G( x) ci Ni.v.}P.2.. NP ) be a random vector in {0. . oo) and define p Ni Yx = EEX'i .. . u Proof of Theorem 4. i=1 P(Yx > x ^) < P(Y0 > x I.c'(x) where c = ciENi . .F) < CG(x)zn'1+. For light-tailed distributions. HEAVY TAILS Lemma 4 . and the rest of the argument is then just as the proof of Corollary 4.G( x ) > ciNi . i =1 P(Yo > x I ^ ) < CG(x)zN1+ +Np for some C = C(z) < oo.

> 0) matter for determining the order of magnitude of the ruin probabilities in the heavy-tailed case. and the final reduction by Jelenkovic & Lazar [213]. Floe Henriksen & Kliippelberg [31] by a lengthy argument which did not provide the constant in front of Bo(u) in final form. cf. In contrast.2 and Example 4.5. states that under mild additional conditions. in particular Proposition 2.e.1. there exist constants -Y(u) such that the F(u)distribution of r(u)/y(u) has a limit which is either Pareto (when B is regularly varying) or exponential (for B's such as the lognormal or DFR Weibull). ) form a general stationary sequence and the U. as well as a condition for (4.6) to hold in a situation where the inter-claim times (T1. Within the class of risk processes in a Markovian environment. and independent of (T1. this then easily yields approximations for the finite horizon ruin probabilities (Corollary 5.4. We start by reviewing some general facts which are fundamental for the analysis. The main result of this section. m is a (or-finite) . see Schlegel [316]..d.i. Theorem 4. r(u) is the time of ruin and as in IV. VI.4. Theorem 2.T2. cf. That paper also contains further criteria for regenerative input (in particular also a treatment of the delayed case which we have omitted here).4. 5 Finite-horizon ruin probabilities We consider the compound Poisson model with p = /3pB < 1 and the stationary excess distribution Bo subexponential. i.pl(1 . Schmidli & Schmidt [47]. the discussion provides an alternative point of view to some results in Chapter IV. 5a Excursion theory for Markov processes Let until further notice {St} be an arbitrary Markov process with state space E (we write Px when So = x) and m a stationary measure. An improvement was given in Asmussen & Hojgaard [33]... The present approach via Theorem 4. cf.5 shows that basically only the tail dominant claim size distributions (those with c..1 is from Asmussen. Combined with the approximation for O(u).7). FINITE-HORIZON RUIN PROBABILITIES 271 probabilities for large u. IV.3. Essentially. As usual. Then O(u) . ). I T(u) < oo).T2. 5 was first proved by Asmussen.p)Bo(u).4.. this is applied for example to risk processes with Poisson cluster arrivals. for light-tailed distributions the value of the adjustment coefficient -y is given by a delicate interaction between all B. we let PN"N = P(. Theorem 5. this should be compared with the normal limit for the light-tailed case. It follows from Theorem 4.. For further studies of perturbations like in Corollary 4. Notes and references Theorem 4.7. i.5 that the effect of Markov-modulation is in some sense less dramatical for heavy-tailed distributions: the order of magnitude of the ruin probabilities remains ft°° B(x) dx.

oo).)k(x .y = Qx (.z) dx G(dz) = ffh(y + z) k(y)dy G(dz). resp.272 CHAPTER IX. HEAVY TAILS measure on E such that L for all measurable A C E and all t > 0. m. Let G denote the distribution of ENt U.>N` Ui.00). where we can take h. Then (5.t + EI U. (note that we allow x.h.h. .z.s.2) for all bounded measurable functions h.2) means ffh(a.t.). follows by the substitution y = x .2) with t = 1 means m. to consider only the case Px(w(F`) = 0) 0.r.s. say. in the terminology of general Markov process theory. an excursion in F starting from x E F is the (typically finite) piece of sample path' {St}o<t<w(F°) I So = x where w(Fc) = inf It > 0: St 0 F} . Say {St} is reflected Brownian motion on [0. Rt is distributed as x + t . k on E. but the example of relevance for us is the following: Proposition 5. {Rt}. St is distributed as y . Then there is a Markov process {Rt} on E such that fE m(dx)h(x)Exk(Rt) = Lm(dy)k(y)Eyh(St) (5. The simplest example is a discrete time discrete state space chain. j. . a familiar case is time reversion (here m is the stationary distribution). k as indicator functions. Lebesgue measure. The equality of the l. x = 0+ and F = (0. Thus. r. and starting from So = y. y to vary in. We let QS be the corresponding distribution and Qx. the whole of R and not as usual impose the restrictions x > 0. w(Fc) < oo ) 'In general Markov process theory. to the r.1 A compound Poisson risk process {Rt} and its associated claim surplus process {St} are in classical duality w . For the present purposes it suffices . however .t. y = 0). Sw(F.= y. and (5. for states i. t. u For F C E. a main difficulty is to make sense to such excursions also when Px(w(F°) = 0) = 1. Proof Starting from Ro = x. {St} and {Rt} are in classical duality w.rij = mjsji where r13.s=j are the transition probabilities for {St}.

/^s x (S1 = Z1. Sn = in = y.3 The distribution of r(0) given r(0) < oo.. S.2. We can then view Qy.s.5... in E F. x = 0.13AB < 1] Proof of Theorem 5. 0].1 for the case F = (-oo.SS(F..). in = y. QR and QRy are defined similarly. w(0. The theorem is illustrated in Fig . Sn+1 E Fc) nx. [note that w(z) < oo a. But in the risk theory example (corresponding to which the sample paths are drawn). Sw(Fo)_ should be interpreted as Sw(F^)_1). Sw(F)-1 = y) . and we let Qy y refer to the time reversed excursion . oo) = r(0) x= St y (a) Figure 5. when p = . . We consider the discrete time discrete state space case only (well-behaved cases such as the risk process example can then easily be handled by discrete approximations).y(2p21 .. ... 5. io = x.y as a measure on all strings of the form i0i1 .2 Qy. The theorem states that the path in (b) has the same distribution as an excursion of {Rt} conditioned to start in y < 0 and to end in x = 0. the one in (b) is the time reversed path. . That is. Qx y is the distribution of an excursion of {St} conditioned to start in x E F and terminate in y E F. In particular: Corollary 5. .= y) Theorem 5 . Qx. Thus.itt) = P Px(w(Fc) < 00. in with i0. FINITE-HORIZON RUIN PROBABILITIES 273 y E F (in discrete time. this simply means the distribution of the path of {Rt} starting from y and stopped when 0 is hit.. i1. z > 0.y(-) = P ({SW(F`)-t-} 0<t<w(F °) E So = x.y = Qy Q.(0)_ = y < 0 is the same as the distribution of w(-y) where w(z) = inf It > 0 : Rt = z}.1 The sample path in (a) is the excursion of {St} conditioned to start in x = 0 and to end in y > 0.

. MY Thus Qx(ioii ... To show Q y x (i0 i 1 . Rn+1 E F`) F (w(Fc) < 00. Si1y k=1 i1 ... in E F. Rn = in = x. t' y and Qy x are measures on all strings of the form ipi l . in with 20..in-1 .. R .. Sn+1 E Fc) n=1 i1.. 2p)...ik_1EF .ik-1EF Similarly but easier Sxin_1 . = in = y.. 2p) when 20.... in) = oo jEF^ Sxin-1 .. i0 = y. in) - Pt' (R1 = ii. . ..ik_1EF Sxin_1 .. 21 . S.....rin_1in E Txj jEFC m21 s2120 m2252221 m in Ssn n-1 mjSjx Mx m2p mil min-1 jEF` 1 Sinin _ 1 . (Fc)-1 = y) 00 CHAPTER IX.... in = x. 20 = y. Rn = in = x.i„_iEF Similarly.... . Si1y 00 jEF° E E 5xik_ 1 . Silt' E SO k=1 i1.. 2n) = Qx.y(inin _ E SYj jEF` 00 Sxik _1 ... HEAVY TAILS E E Px (Si = 21i .in E F.. in = x....J (i...274 note that Fx(w(Fc) < 00. S. i0) Q x........ ......TI( 2n2n _1 . .... . Rn+1 E FC) TioilTili2..... in) = Qx. Silt' E Sxik_1 . note first that Pt' (R l = il.gilt' k=1 ii . Si l io E mjSjx.ii .. . .(F<)-1 = Y) S S and Qx y( ipil . .. Si11 S 1 . R Qy x(2p21 ..

Y > y} .UBBo(u)].2. That is. FINITE-HORIZON RUIN PROBABILITIES 275 5b The time to ruin Our approach to the study of the asymptotic distribution of the ruin time is to decompose the path of { St} in ladder segments .t. y > u. see Fig.')distribution of Y-u is Bo"). Now the P(u. P(o) ). that is. Y > u). ST(o) > y.(o) > y} = {T(0) < oo. the P(u.2. 7-(0) < oo.2 The distribution of (Y. the case r (O) < oo. Z = Zl = ST+( 1)_ the value just before the first ladder epoch (these r. Z) is described in Theorem 111. Z follows the excess distribution B(Y) given by B(Y) (x) _ B(y + x)/B(y).r.')-density of Y is B(y)/[.v. P(") = P(. Bo") is also the P(u. The formulation relevant for the present purposes states that Y has distribution Bo and that conditionally upon Y = y. Let Y = Yl = Sr+( 1) be the value of the claim surplus process just after the first ladder epoch .'s are defined w. U T(O) = T (u) Y Figure 5.2.p.5. that is. We are interested in the conditional distribution of T(u) = T(0) given {T(0) < oo. 5.')-distribution of Z since P(Z>aIY>u) = 1 °° B(y) B(y + a) dy FLBBo(u) B (y) J°° (z) dy .r. S.t. 1 w . To clarify the ideas we first consider the case where ruin occurs already in the first ladder segment . the distribution w.B(a) +a PBBo(u) .

cf.. HEAVY TAILS Let {w(z)}Z^.. Yn_1 'typical'..p) then yields the final result T(u)/y(u) -+ W/(1 .d. in particular of Z. Y1 + • • • + Yn > u} denote the number of ladder steps leading to ruin and P("'n) = P(• I r(u) < oo.i. .. Recall the definition of the auxiliary function y(x) in Section 1. . Since w(z)/z a$. Then. In the proof. That is .18(c) Bo")(yY (u)) -+ P(W > y) ( 5.T+(2)..o be defined by w(z) = inf It > 0: Rt = z} where {Rt} is is independent of {St}..e. more precisely.p) in Pi"'')distribution. i. and distributed as (Y.. and YI.. z -^ oo. the random vectors (YI.. P(Z < a I Y > u) -3 0. Z/'y(u) -* W in Pi "' ')-distribution . K(u) = n).3) holds.. i. . it therefore follows that T(u)/Z converges in Pi"'')probability to 1/(1 . . the duration T+ (n) .. a slight rewriting may be more appealing.. we get the same asymptotics as when n = 1. 1/(1 . Zn). Zn_1 'typical' which implies that the first n-1 ladder segment must be short and the last long. and since its dominates the first n .3 implies that the P("'1)-distribution of T(u) = r(0) is that of w(Z). this in principle determines the asymptotic behaviour of r(u).276 CHAPTER IX.3) where the distribution of W is Pareto with mean one in case ( a) and exponential with mean one in case (b). The idea is now to observe that if K(u) = n.3. Then 7-(u)/-y(u) --^ W/(1 . denote the ladder epochs and let Yk. Hence Z. . 4 Assume that Bo E S and that (5. (Y. Now Bo E S implies that the Bo ")(a) -+ 0 for any fixed a. 5. Zk be defined similarly as Y = Y1. Bo") ). conditionally upon r+ (n) < oo. Fig.1) of the last ladder segment can be estimated by the same approach as we used above when n = 1.p) in F(u) -distribution. Then Corollary 5.r+ (n .p)..: r+ (n) < oo. > u with high probability. Z1). . However.p). r(u)/Z -4 1/(1 . let r+(1) = T(0). .e. are i. 2. We now turn to the general case and will see that this conclusion also is true in P(")-distribution: Theorem 5 .. Z). Since the conditional distribution of Z is known (viz. then by the subexponential property Yn must be large. Z = ZI but relative to the kth ladder segment. It is straightforward that under the conditions of Proposition 1.. We let K(u) = inf In = 1. must be large and Z1.1.

Further. Y„-1.n) (y1.2.u) E •) . . II ' II denotes the total variation norm between probability measures and ® product measure.Yl+ +Yf1>u}.5 Ilp(u. Lemma 5. . FINITE-HORIZON RUIN PROBABILITIES 277 16 Z3 Z1 r+(1) T+(1) T+(1) Figure 5. A"(u) are events such that P(A'(u) AA"(u)) = o(F (A'(u)) (A = symmetrical difference of events). then IIP( I A'(u)) Taking A'(u) = {Y. I A"(u ))II -+ 0.Bo (ri-1) ®B( ..Yn-1iYn . P (Yj.5.n).. > u}. .. the condition on A'(u) A A"(u) follows from Bo being subexponential (Proposition 1. Yn . Proof We shall use the easily proved fact that if A'(u)... A"(u) _ {K(u)=n} = {Y1+ P(. suitably adapted).u) E • I A'(u)) = Bo (n-1) ®Bou) . I A'(u)) = P(u. P(..3 In the following. +Yn-1<u.u) II 0.. .

copies of {w(z)}. Z' are arbitrary random vectors. .. . Z. n_1 < u.. .. ... Notes and references Excursion theory for general Markov processes is a fairly abstract and advanced topic.. . Thus F(u'n)(T(u) /7(u) > y) = F(u'n)((wl (Z1) + ..P) Bo(u) for n = 1.. ... Zn are independent. be independent random vectors such that the conditional distribution of Zk given Y..d. Let {wl(z)}.. +wn(Z n))l7( u ) > 1y) ^' P(u'n)(wn (Zn)/7(u) > y) -4 NW/(1 .P) > y) Corollary 5. 2. For Theorem 5.1 and Y„ . Then according to Section 5a.2.278 Lemma 5 . The same calculation as given above when n = 1 shows then that the marginal distribution of Zn is Bou). n . Y") u etc.).4).t. wk(Zk) has a proper limit distribution as u -+ oo for k < n. (Y. whereas wn(Zn) has the same limit behaviour as when n = 1 (cf.' = y is BM. Proof Let (Y11. ..u has distribution Bout That is. and that Yk has marginal distribution B0 for k = 1.p) < y).. k = 1. ..P(Y' E •)II -* 0. Zn) E •) .. the discussion just before the statement of Theorem 5. in particular his Proposition (2. the marginal distribution of Zk is Bo for k < n. the density of Yn is B(y)/[IBBO(u)]. and clearly Zi... P(u) since by Theorem 2.r. Y'. y > u.4. The first step is to observe that K(u) has a proper limit distribution w.+y 1 p"F(Yn > u) P)Pn-1 P/(1 .P(Z' E •)II -> 0 (here Y. the F'-distribution of r(u) is the same as the P'-distribution of w1(Zl) + • • • + wn(Zn).... Now use that if the conditional distribution of Z' given Y' is the same as the conditional distribution of Z given Y and JIF(Y E •) . {wn(z)} be i. see Fitzsimmons [144]).1). HEAVY TAILS ((Z1'. + Y" > u) Flul (K (u ) = n) _ Cu) P"F(1'i +. n.. Zn).. then 11P(Z E •) . Y1 +. .6. It therefore suffices to show that the P(u'")-distribution of T(u) has the asserted limit. . Proof of Theorem 5. By Lemma 5..i.1. in our example Y = (Y1. Z11)...7 O (u. .6 IIPIu'n ) CHAPTER IX... Similarly (replace u by 0).Bo (n-1) ®Bo' 0.1 P PBo(u) • P(W/(1 .y(u)T) . ..

More precisely. Asmussen & Teugels [53] studied approximations of i (u.1 Assume that B is subexponential and that p(x) -> 00. Corollary II. the probability that is exceeds u is then B(u .6.B(u). 6 Reserve-dependent premiums We consider the model of Chapter VII with Poisson arrivals at rate /3. The rigorous proof is.y) . max VB>0I Vo=0^ o<s<t J11JJJ Lemma 6 . Then 0 (u) Qf "O ^) dy. however.(3 u u J B(y) dy . . p(Y) and the result follows.(u) = P(V > u) = f f (y) dy . claim size distribution B. and premium rate p(x) at level x of the reserve. i. Theorem 6 . one expects the level y form which the big jump occurs to be 0(1). Extensions to the Markov-modulated model of Chapter VI are in Asmussen & Hojgaard [33].2./3Ea B(u).1) The key step in the proof is the following lemma on the cycle maximum of the associated storage process {Vt}. RESERVE-DEPENDENT PREMIUMS 279 The results of Section 5b are from Asmussen & Kluppelberg [36] who also treated the renewal model and gave a sharp total variation limit result . = supo<t<0. We will show that the stationary density f (x) of {Vt} satisfies f (x) /B(x) r(x) We then get V. that MQ becomes large as consequence of one big jump. T) when T -+ oo with u fixed. The heuristic motivation is the usual in the heavy-tailed area. x -> oo. The form of the result then follows by noting that the process has mean time Ea to make this big jump and that it then occurs with intensity /3B(u). Assume for simplicity that {Vt} regenerates in state 0 . and define the cycle as a = inf{t>0: Vt=0. non-trivial and we refer to Asmussen [22].2 Define M. Then P(MT > u) .e.1. cf.. 3. Proof of Theorem 6. u (6. the results only cover the regularly varying case. that fo p(x)-1 dx < oo. V.

Then D(u) = f(u)p(u) and. . by regenerative process theory.q ( u)) 1 .280 CHAPTER IX.P(MT > u) $B(u) Ft µ(1 . Hence f (u)r(u) = D(u) = Do(u) . where also the (easier) case of p(x) having a finite limit is treated .q(u) Now just use that p(x) -* oo implies q (x) -+ 0. u Notes and references The results are from Asmussen [22]. there exist constants c(u) -4 0 such that the limiting distribution of r(u)/c(u) given r(u) < oo is exponential. HEAVY TAILS Define D(u) as the steady-state rate of downcrossings of {Vt} of level u and Da (u) as the expected number of downcrossings of level u during a cycle. Further the conditional distribution of the number of downcrossings of u during a cycle given Mo > u is geometric with parameter q(u) = P(Mo > u I Vo = u). It is also shown in that paper that typically. D(u) = DQ(u)/µ.
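As a quick numerical illustration of Theorem 6.1, the approximation ψ(u) ≈ β ∫_u^∞ B̄(y)/p(y) dy can be evaluated by straightforward quadrature once B̄ and p(·) are specified. The sketch below is added here for illustration; the Pareto tail, the linear premium rule and all parameter values are hypothetical, and scipy is assumed available for the integration.

```python
import numpy as np
from scipy.integrate import quad

beta = 1.0                                   # arrival rate (hypothetical)
alpha = 2.5                                  # Pareto tail: B-bar(y) = (1 + y)**(-alpha)
p = lambda x: 1.0 + 0.5 * x                  # reserve-dependent premium rate, p(x) -> infinity

B_bar = lambda y: (1.0 + y) ** (-alpha)

def psi_approx(u):
    """psi(u) ~ beta * integral_u^infinity of B-bar(y)/p(y) dy (Theorem 6.1)."""
    val, _ = quad(lambda y: B_bar(y) / p(y), u, np.inf)
    return beta * val

for u in [10.0, 50.0, 200.0]:
    print(u, psi_approx(u))
```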

Chapter X

Simulation methodology

1 Generalities

This section gives a summary of some basic issues in simulation and Monte Carlo methods. We shall be brief concerning general aspects and refer to standard textbooks like Bratley, Fox & Schrage [77], Ripley [304], Rubinstein [310] or Rubinstein & Melamed [311] for more detail; topics of direct relevance for the study of ruin probabilities are treated in more depth.

1a The crude Monte Carlo method

Let Z be some random variable and assume that we want to evaluate z = EZ in a situation where z is not available analytically but Z can be simulated. The crude Monte Carlo (CMC) method then amounts to simulating i.i.d. replicates Z1, ..., ZN, estimating z by the empirical mean ẑ = (Z1 + ... + ZN)/N and the variance of Z by the empirical variance

  s² = (1/N) Σ_{i=1}^N (Zi - ẑ)² = (1/N) Σ_{i=1}^N Zi² - ẑ².

According to standard central limit theory, √N(ẑ - z) → N(0, σ²_Z), where σ²_Z = Var(Z). Hence

  ẑ ± 1.96 s/√N    (1.2)

is an asymptotic 95% confidence interval, and this is the form in which the result of the simulation experiment is commonly reported.
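As a minimal sketch of the CMC recipe (added here; the choice of Z is a toy example, not tied to any risk model), the empirical mean and the confidence interval (1.2) can be produced as follows:

```python
import numpy as np

def crude_monte_carlo(simulate_Z, N, rng):
    """CMC estimate of z = EZ with the half-width of the 95% confidence interval (1.2)."""
    Z = np.array([simulate_Z(rng) for _ in range(N)])
    z_hat = Z.mean()
    s = Z.std()                      # empirical standard deviation
    return z_hat, 1.96 * s / np.sqrt(N)

# toy example: Z = I(U1 + U2 > 1.5) with U1, U2 uniform on (0,1), so z = 1/8
rng = np.random.default_rng(0)
z_hat, hw = crude_monte_carlo(lambda r: float(r.uniform() + r.uniform() > 1.5), 10**5, rng)
print(f"{z_hat:.4f} +/- {hw:.4f}")
```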

In the setting of ruin probabilities, it is straightforward to use the CMC method to simulate the finite horizon ruin probability z = ψ(u, T): just simulate the risk process {Rt} up to time T (or T ∧ τ(u)) and let Z be the indicator that ruin has occurred,

  Z = I( inf_{0≤t≤T} Rt < 0 ) = I(τ(u) ≤ T).

The situation is more intricate for the infinite horizon ruin probability ψ(u). The difficulty in the naive choice Z = I(τ(u) < ∞) is that Z cannot be simulated in finite time: no finite segment of {St} can tell whether ruin will ultimately occur or not. Sections 2-4 deal with alternative representations of ψ(u) allowing to overcome this difficulty.

1b Variance reduction techniques

The purpose of the techniques we study is to reduce the variance on a CMC estimator Z of z, typically by modifying Z to an alternative estimator Z' with EZ' = EZ = z and (hopefully) Var(Z') < Var(Z). Say that Var(Z') = Var(Z)/2. Then replacing the number of replications N by 2N will give the same precision for the CMC method as when simulating N' = N replications of Z', and in most cases this modest increase of N is totally unproblematic. Therefore, one can argue that unless Var(Z') is considerably smaller than Var(Z), variance reduction is hardly worthwhile. Typically, variance reduction involves both some theoretical idea (in some cases also a mathematical calculation), an added programming effort, and a longer CPU time to produce one replication. This is a classical area of the simulation literature, and many sophisticated ideas have been developed. We survey two methods which are used below to study ruin probabilities, conditional Monte Carlo and importance sampling. However, there are others which are widely used in other areas and potentially useful also for ruin probabilities. We mention in particular (regression adjusted) control variates and common random numbers.

Conditional Monte Carlo

Let Z be a CMC estimator and Y some other r.v. generated at the same time as Z. Letting Z' = E[Z | Y], we then have EZ' = EZ = z, so that Z' is a candidate for a Monte Carlo estimator of z. Further, writing

  Var(Z) = Var(E[Z | Y]) + E(Var[Z | Y])

and ignoring the last term shows that Var(Z') ≤ Var(Z), so that conditional Monte Carlo always leads to variance reduction. However, whether the reduction is substantial enough to be worthwhile is typically hard to say beforehand.

Importance sampling

The idea is to compute z = EZ by simulating from a probability measure P̃ different from the given probability measure P and having the property that there exists a r.v. L such that

  z = EZ = Ẽ[LZ].    (1.3)

In order to achieve (1.3), the obvious possibility is to take P and P̃ mutually equivalent and L = dP/dP̃ as the likelihood ratio. Thus, using the CMC method one generates (Z1, L1), ..., (ZN, LN) from P̃ and uses the estimator

  z_IS = (1/N) Σ_{i=1}^N Li Zi

and the confidence interval z_IS ± 1.96 s_IS/√N, where

  s²_IS = (1/N) Σ_{i=1}^N (Li Zi - z_IS)².

Variance reduction may or may not be obtained: it depends on the choice of the alternative measure P̃, and the problem is to make an efficient choice. To this end, a crucial observation is that there is an optimal choice of P̃: define P̃ by dP̃/dP = Z/EZ = Z/z, i.e. L = z/Z (the event {Z = 0} is not a concern because P̃(Z = 0) = 0). Then

  Var_{P̃}(LZ) = Ẽ(LZ)² - [Ẽ(LZ)]² = Ẽ[Z² z²/Z²] - z² = z² - z² = 0.

Thus, it appears that we have produced an estimator with variance zero. However, the argument cheats because we are simulating precisely because z is not available analytically. Thus we cannot compute L = z/Z (further, it may often be impossible to describe P̃ in such a way that it is straightforward to simulate from P̃). Nevertheless, even if the optimal change of measure is not practical, it gives a guidance: choose P̃ such that dP̃/dP is as proportional to Z as possible. This may also be difficult to assess, but tentatively, one would try to choose P̃ to make large values of Z more likely.
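In code, the importance sampling estimator z_IS and its confidence interval take the same form as in the CMC case, with Zi replaced by LiZi. A minimal sketch (added here; simulate_ZL stands for any user-supplied routine returning one pair (Z, L) simulated under P̃):

```python
import numpy as np

def importance_sampling(simulate_ZL, N, rng):
    """z_IS = average of L*Z over N replications from P-tilde, with a 95% CI half-width."""
    LZ = np.array([L * Z for Z, L in (simulate_ZL(rng) for _ in range(N))])
    return LZ.mean(), 1.96 * LZ.std() / np.sqrt(N)
```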

1c Rare events simulation

The problem is to estimate z = P(A) when z is small, say of the order 10^{-3} or less. In ruin probability theory, Z = I(A) and A is a rare event, e.g. A = {τ(u) ≤ T} or A = {τ(u) < ∞}, and the rare events assumption amounts to u being large. The CMC method leads to a variance of σ²_Z = z(1 - z) which tends to zero as z ↓ 0. However, the issue is not so much that the precision is good as that relative precision is bad:

  σ_Z / z = √(z(1 - z)) / z ≈ 1/√z → ∞,  z ↓ 0.

In other words, a confidence interval of width 10^{-4} may look small, but if the point estimate ẑ is of the order 10^{-5}, it does not help telling whether z is of the magnitude 10^{-4}, 10^{-5} or even much smaller. Another way to illustrate the problem is in terms of the sample size N needed to acquire a given relative precision, say 10%, in terms of the half-width of the confidence interval. This leads to the equation 1.96 σ_Z/(z√N) = 0.1, i.e.

  N = 100 · 1.96² · z(1 - z)/z² ≈ 100 · 1.96²/z,

which increases like z^{-1} as z ↓ 0. Thus, if z is small, large sample sizes are required.

We shall focus on importance sampling as a potential (though not the only) way to overcome this problem. The optimal change of measure (as discussed above) is given by

  P̃(B) = E[Z; B]/z = P(A ∩ B)/P(A) = P(B | A),

i.e. the optimal P̃ is the conditional distribution given A. Again, we may try to make P̃ look as much like P(· | A) as possible. However, just the same problem as for importance sampling in general comes up: we do not know z which is needed to compute the likelihood ratio and thereby the importance sampling estimator, and further it is usually not practicable to simulate from P(· | A). An example where this works out nicely is given in Section 3.

Two established efficiency criteria in rare events simulation are bounded relative error and logarithmic efficiency. To introduce these, assume that the rare event A = A(u) depends on a parameter u (say A = {τ(u) < ∞}). For each u, let z(u) = P(A(u)) and let Z(u) be a Monte Carlo estimator of z(u), and assume that the A(u) are rare in the sense that z(u) → 0, u → ∞, as is the case of typical interest. We then

say that {Z(u)} has bounded relative error if Var(Z(u))/z(u)² remains bounded as u → ∞. In terms of the discussion above, this means that the sample size N = N_ε(u) required to obtain a given fixed relative precision (say ε = 10%) remains bounded. Logarithmic efficiency is defined by the slightly weaker requirement that one can get as close to the power 2 as desired: Var(Z(u)) should go to 0 at least as fast as z(u)^{2-ε}, i.e.

  limsup_{u→∞} Var(Z(u)) / z(u)^{2-ε} < ∞    (1.4)

for any ε > 0. This allows Var(Z(u)) to decrease slightly slower than z(u)², so that N_ε(u) may go to infinity. However, the mathematical definition puts certain restrictions on this growth rate, and in practice, logarithmic efficiency is almost as good as bounded relative error. The term logarithmic comes from the equivalent form

  liminf_{u→∞} [-log Var(Z(u))] / [-log z(u)] ≥ 2

of (1.4).

Notes and references For surveys on rare events simulation, see Asmussen & Rubinstein [45] and Heidelberger [190].

2 Simulation via the Pollaczeck-Khinchine formula

For the compound Poisson model, the Pollaczeck-Khinchine formula III.(2.1) may be written as ψ(u) = P(M > u), where M = X1 + ... + XK, where X1, X2, ... are i.i.d. with common density b0(x) = B̄(x)/µ_B and K is geometric with parameter ρ, P(K = k) = (1 - ρ)ρ^k. Thus ψ(u) = z = EZ, where Z = I(M > u) may be generated as follows:

1. Generate K as geometric, P(K = k) = (1 - ρ)ρ^k.
2. Generate X1, ..., XK from the density b0(x). Let M ← X1 + ... + XK.
3. If M > u, let Z ← 1. Otherwise, let Z ← 0.

The algorithm gives a solution to the infinite horizon problem, but as a CMC method, it is not efficient for large u. Therefore, it is appealing to combine with some variance reduction method.
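Steps 1-3 are straightforward to code. The sketch below (added here) uses Pareto claims as a concrete illustration, since then the integrated tail density b0 is again of Pareto form and can be sampled directly; the parameter values are arbitrary.

```python
import numpy as np

def psi_pk_cmc(u, rho, alpha, N, rng):
    """Crude Monte Carlo for psi(u) via the Pollaczeck-Khinchine representation.

    Claims are Pareto: B-bar(x) = (1 + x)**(-alpha), alpha > 1, so that the
    integrated-tail density b0(x) = B-bar(x)/mu_B is again Pareto, with index alpha - 1
    (an illustrative choice; any distribution with a samplable b0 would do).
    """
    K = rng.geometric(1.0 - rho, size=N) - 1        # P(K = k) = (1 - rho) * rho**k, k = 0, 1, ...
    Z = np.empty(N)
    for i, k in enumerate(K):
        M = rng.pareto(alpha - 1.0, size=k).sum()   # X_1 + ... + X_K, each X_j ~ B_0
        Z[i] = float(M > u)
    return Z.mean(), 1.96 * Z.std() / np.sqrt(N)

rng = np.random.default_rng(0)
print(psi_pk_cmc(u=10.0, rho=0.5, alpha=2.5, N=10**5, rng=rng))
```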

. Z(1)(u) is defined as 0). y < 0).S( K_1)) V X(K-1)) / Bo(X(K -1)) where S(K_l) = X(1) + X(2) + • • • + X(K_1).b(u) = P (Xl +•••+XK>u) = EF[Xl + .. and considering only the remaining ones.. The idea of [27] is to avoid this problem by discarding the largest X. This calculation shows that the reason that this algorithm does not work well is that the probability of one single Xi to become large is too big. just note that EZ(1)(u ) 2 > E[Bo (x .. and that Bo(y) = 1. conditional probability. Theorem IX.1) V)(u) . .p)Bo(x). Xl > x. and let Z(2)(u) = _ P (SK B0((u > u I X(l).X(2). SIMULATION METHODOLOGY when the claim size distribution B (and hence Bo) has a regularly varying tail.2.-XK_1)...XK_1 and let Z( 1)(u) = Bo (Y) (if K = 0. A first obvious idea is to use conditional Monte Carlo: write i..+XK > uIXl..Xl . and the problem is to produce an estimator Z(u) with a variance going to zero not slower (in the logarithmic sense ) than Bo(u)2... form the order statistics X(1) < X(2) < .286 CHAPTER X. K > 2] = P2p(Xl > x) = P2Bo(x) (here we used that by positivity of the X.X(n_1)) Bo(X(„_l) V X) Bo(X(n-1)) ...XK-1] = EBo(u-X1 .... we generate only X1. ... Thus.. Then (cf. asymptotically it presents no improvement : the variance is of the same order of magnitude F(x)...... we thus generate K and X1i .X1 .SK-1)2. .. < X(K) throw away the largest one X(K). To see this. compute Y = u . assume in the following that Bo(x) .X(2).L(x)/x`' with a > 0 and L(x) slowly varying.X(K-1)) . note first that To check the formula for the P(X(n) > x I X(1). As a conditional Monte Carlo estimator . XK. So. X1 + + XK_ 1 > x when X1 > x.p/(l ... . For the simulation. XK-1.. Z(1) (u) has a smaller variance than Zl (x). However..

Theorem 2.1 Assume that B̄_0(x) = L(x)/x^α with L(x) slowly varying. Then the algorithm given by {Z^(2)(u)} is logarithmically efficient.

Notes and references The proof of Theorem 2.1 is elementary but lengthy, and we refer to [27]. The algorithm is so far the only efficient one which has been developed for the heavy-tailed case. Asmussen, Binswanger and Højgaard [28] give a general survey of rare events simulation for heavy-tailed distributions, and that paper contains one more logarithmically efficient algorithm for the compound Poisson model, using the Pollaczeck-Khinchine formula and importance sampling. Also in other respects the findings of [28] are quite negative: the large deviations ideas which are the main approach to rare events simulation in the light-tailed case do not seem to work for heavy tails. However, it must be noted that a main restriction of both algorithms is that they are so intimately tied up with the compound Poisson model, because the explicit form of the Pollaczeck-Khinchine formula is crucial (say, in the renewal or Markov-modulated model P(τ_+ < ∞) and G_+ are not explicit).

3 Importance sampling via Lundberg conjugation

We consider again the compound Poisson model and assume the conditions of the Cramér-Lundberg approximation ψ(u) ~ Ce^{−γu}. The starting point is the representation

   ψ(u) = e^{−γu} E_L e^{−γξ(u)},

where ξ(u) = S_{τ(u)} − u is the overshoot (cf. III.5) and the change of measure P → P_L corresponds to Lundberg conjugation, β → β_L = βB̂[γ], B(dx) → B_L(dx) = e^{γx}B(dx)/B̂[γ]. The idea is thus to use the Cramér-Lundberg approximation for z(u) = ψ(u) and simulate from P_L, that is, using β_L, B_L instead of β, B. For the purpose of recording Z(u) = e^{−γS_{τ(u)}}, the continuous-time process {S_t} is simulated by considering it at the discrete epochs {σ_k} corresponding to claim arrivals. Thus, the algorithm for generating Z = Z(u) is:

1. Compute γ > 0 as the solution of the Lundberg equation 0 = κ(γ) = β(B̂[γ] − 1) − γ, and define β_L, B_L by β_L = βB̂[γ], B_L(dx) = e^{γx}B(dx)/B̂[γ].
2. Let S ← 0.
3. Generate T as exponential with parameter β_L and U from B_L. Let S ← S + U − T.
4. If S > u, let Z ← e^{−γS}. Otherwise, return to 3.
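For exponential claims with mean µ_B the Lundberg quantities are explicit (γ = 1/µ_B − β, β_L = 1/µ_B, and B_L is exponential with rate β), so the four steps can be coded directly. The following Python sketch, including the returned standard error, is our own illustration under that assumption and is not taken from the book.

```python
import numpy as np

def psi_is(u, beta, mu_B, n_rep=10_000, rng=None):
    """Importance sampling estimate of psi(u) via Lundberg conjugation,
    for exponential claims with mean mu_B, where gamma = 1/mu_B - beta,
    beta_L = 1/mu_B and B_L is exponential with rate beta."""
    rng = np.random.default_rng(rng)
    delta = 1.0 / mu_B
    gamma = delta - beta              # solution of the Lundberg equation
    beta_L = delta                    # arrival rate under P_L
    est = np.empty(n_rep)
    for i in range(n_rep):
        S = 0.0
        while S <= u:                              # steps 3-4: run until ruin
            T = rng.exponential(1.0 / beta_L)      # interarrival time under P_L
            U = rng.exponential(1.0 / beta)        # claim from B_L = Exp(beta)
            S += U - T
        est[i] = np.exp(-gamma * S)                # Z(u) = exp(-gamma * S_tau(u))
    return est.mean(), est.std(ddof=1) / np.sqrt(n_rep)

# Example; the exact value is (beta * mu_B) * exp(-gamma * u).
print(psi_is(20.0, beta=1.0, mu_B=0.5))
```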

There are various intuitive reasons why this should be a good algorithm. It resolves the infinite horizon problem since P_L(τ(u) < ∞) = 1. We may expect a small variance since we have used our knowledge of the form of ψ(u) to isolate what is really unknown, namely E_L e^{−γξ(u)}, and avoid simulating the known part e^{−γu}. More precisely, the results of IV.7 tell that P(· | τ(u) < ∞) and P_L (both measures restricted to F_{τ(u)}) asymptotically coincide on {τ(u) < ∞}, so that changing the measure to P_L is close to the optimal scheme for importance sampling, cf. the discussion at the end of Section 1b. And indeed we have:

Theorem 3.1 The estimator Z(u) = e^{−γS_{τ(u)}} (simulated from P_L) has bounded relative error.

Proof Just note that E_L Z(u)² ≤ e^{−2γu} ~ z(u)²/C². □

The algorithm generalizes easily to the renewal model, where one changes B, A to B_L, A_L as in Chapter V¹.

It is tempting to ask whether choosing importance sampling parameters β̃, B̃ different from β_L, B_L could improve the variance of the estimator. In detail, one simulates the T_j, U_j with the changed parameters and records

   Z(u) = Π_{j=1}^{M(u)} [ β e^{−βT_j} / (β̃ e^{−β̃T_j}) ] · (dB/dB̃)(U_j),   (3.1)

where M(u) is the number of claims leading to ruin. The answer is no:

Theorem 3.2 The estimator (3.1) (simulated with parameters β̃, B̃) is not logarithmically efficient when (β̃, B̃) ≠ (β_L, B_L).

The proof is given below as a corollary to Theorem 3.3. We formulate this in a slightly more general random walk setting. Let X_1, X_2, ... be i.i.d. with common distribution F, and assume that µ_F < 0 and that F̂[γ] = 1, F̂′[γ] < ∞ for some γ > 0. Let S_n = X_1 + · · · + X_n and M(u) = inf{n : S_n > u}; for the compound Poisson model, X_i = U_i − T_i and ψ(u) = P(M(u) < ∞). Let F_L(dx) = e^{γx}F(dx); the importance sampling estimator corresponding to F_L is then Z(u) = e^{−γS_{M(u)}}.

¹For the renewal model, to deal with the infinite horizon problem, one must restrict attention to changed parameters under which ruin occurs a.s. (in the compound Poisson case, β̃µ_{B̃} > 1).

More generally, let F̃ be an importance sampling distribution equivalent to F and let

   Z(u) = Π_{i=1}^{M(u)} (dF/dF̃)(X_i).   (3.2)

Theorem 3.3 The estimator (3.2) (simulated with distribution F̃ of the X_i) has bounded relative error when F̃ = F_L. When F̃ ≠ F_L, it is not logarithmically efficient.

Proof The first statement is proved exactly as Theorem 3.1. For the second, write

   W(F | F̃) = (dF/dF̃)(X_1) · · · (dF/dF̃)(X_{M(u)}).

By the chain rule for Radon-Nikodym derivatives,

   E_{F̃} Z(u)² = E_{F̃} W²(F | F̃) = E_{F̃}[ W²(F | F_L) W²(F_L | F̃) ] = E_L[ W²(F | F_L) W(F_L | F̃) ] = E_L exp{ K_1 + · · · + K_{M(u)} },

where

   K_i = log( (dF/dF_L)²(X_i) (dF_L/dF̃)(X_i) ) = −2γX_i − log (dF̃/dF_L)(X_i).

Here E_L K_i = −2γ E_L X_i + ε′, where ε′ = −E_L log (dF̃/dF_L)(X_i) > 0 by the information inequality. Since K_1, K_2, ... are i.i.d., Jensen's inequality and Wald's identity yield

   E_{F̃} Z(u)² ≥ exp{ E_L(K_1 + · · · + K_{M(u)}) } = exp{ E_L M(u) (ε′ − 2γ E_L X_i) }.

Since E_L M(u)/u → 1/E_L X_i, it thus follows that for 0 < ε < ε′/E_L X_i,

   lim sup_{u→∞} E_{F̃}Z(u)² / (z(u)² e^{εu}) = lim sup_{u→∞} E_{F̃}Z(u)² / (C² e^{−2γu+εu}) ≥ lim sup_{u→∞} e^{−2γu+ε′u/E_L X_i} / (C² e^{−2γu+εu}) = ∞,

which completes the proof. □

Proof of Theorem 3.2 According to Theorem 3.3, all that needs to be shown is that two compound Poisson models can only produce the same increment distribution if their parameters agree: if U′ − T′ =_D U″ − T″, where the risk processes have intensities β′, β″, claim size distributions B′, B″, generic claim sizes U′, U″ and generic exponential interarrival times T′, T″, then β′ = β″ and B′ = B″. First, by the memoryless property of the exponential distribution, U′ − T′ has a left exponential tail with rate β′ and U″ − T″ has a left exponential tail with rate β″. This immediately yields β′ = β″. Next, for x > 0,

   P(U′ − T′ > x) = ∫_0^∞ β′ e^{−β′y} B̄′(x + y) dy = β′ e^{β′x} ∫_x^∞ e^{−β′z} B̄′(z) dz,

   P(U″ − T″ > x) = ∫_0^∞ β″ e^{−β″y} B̄″(x + y) dy = β″ e^{β″x} ∫_x^∞ e^{−β″z} B̄″(z) dz,

and since β′ = β″, we conclude by differentiation that B̄′(x) = B̄″(x) for all x > 0. □

Notes and references The importance sampling method was suggested by Siegmund [343] for discrete time random walks and further studied by Asmussen [13] in the setting of compound Poisson risk models. In [13], optimality is discussed in a heavy traffic limit η ↓ 0 rather than when u → ∞. The optimality result is from Lehtonen & Nyrhinen [244], with the present (shorter and more elementary) proof taken from Asmussen & Rubinstein [45]. Further discussion is in Lehtonen & Nyrhinen [245]. The extension to the Markovian environment model is straightforward and was suggested in Asmussen [16]. The queueing literature on related algorithms is extensive, see e.g. the references in Asmussen & Rubinstein [45] and Heidelberger [190].

4 Importance sampling for the finite horizon case

The problem is to produce efficient simulation estimators for ψ(u, T) with T < ∞. As in IV.4, we write T = yu. The results of IV.4 indicate that we can expect a major difference according to whether y < 1/κ′(γ) or y > 1/κ′(γ). The easy case is y > 1/κ′(γ), where ψ(u, yu) is close to ψ(u), so that one would expect the change of measure P → P_L to produce close to optimal results. In fact:

Proposition 4.1 If y > 1/κ′(γ), then the estimator Z(u) = e^{−γS_{τ(u)}} I(τ(u) ≤ yu) (simulated with parameters β_L, B_L) has bounded relative error.

Proof The assumption y > 1/κ′(γ) ensures that ψ(u, yu)/ψ(u) → 1 (Theorem IV.4.1). Bounding Z(u) above by e^{−γu}, the result follows as in the proof of Theorem 3.1. □

We next consider the case y < 1/κ′(γ). We recall that α_y is defined as the solution of κ′(α) = 1/y, and that γ_y = α_y − yκ(α_y) determines the order of magnitude of ψ(u, yu) in the sense that

   −(log ψ(u, yu))/u → γ_y,   (4.1)

and that γ_y > γ (Theorem IV.4.8). Further,

   ψ(u, yu) = e^{−α_y u} E_{α_y}[ e^{−α_y ξ(u) + τ(u)κ(α_y)} ; τ(u) ≤ yu ].   (4.2)

Since the definition of α_y is equivalent to E_{α_y} τ(u) ≈ yu, one would expect that the change of measure P → P_{α_y} is in some sense optimal. The corresponding estimator is

   Z(u) = e^{−α_y S_{τ(u)} + τ(u)κ(α_y)} I(τ(u) ≤ yu),   (4.3)

and we have:

Theorem 4.2 The estimator (4.3) (simulated with parameters β_{α_y}, B_{α_y}) is logarithmically efficient.

Proof Since γ_y > γ, we have κ(α_y) > 0 and get

   E_{α_y} Z(u)² = E_{α_y}[ e^{−2α_y S_{τ(u)} + 2τ(u)κ(α_y)} ; τ(u) ≤ yu ] ≤ e^{−2γ_y u} E_{α_y}[ e^{−2α_y ξ(u)} ; τ(u) ≤ yu ] ≤ e^{−2γ_y u}.

Hence by (4.1),

   lim inf_{u→∞} (−log Var(Z(u))) / (−log z(u)) ≥ lim inf_{u→∞} 2γ_y u / (γ_y u) = 2,

so that (1.5) follows. □
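To illustrate how the quantities entering (4.3) can be obtained, the sketch below again specializes to exponential claims, where κ(α) = βα/(δ − α) − α with δ = 1/µ_B, so that α_y solves κ′(α) = 1/y in closed form. The code and all names are our own; it is a sketch under these assumptions, not the book's implementation.

```python
import numpy as np

def psi_finite_is(u, y, beta, mu_B, n_rep=10_000, rng=None):
    """Importance sampling for psi(u, y*u) in the case y < 1/kappa'(gamma),
    using the tilt alpha_y and the estimator (4.3); exponential claims, so
    kappa and the tilted parameters are explicit."""
    rng = np.random.default_rng(rng)
    delta = 1.0 / mu_B
    kappa = lambda a: beta * a / (delta - a) - a
    alpha_y = delta - np.sqrt(beta * delta * y / (y + 1.0))   # solves kappa'(a) = 1/y
    kappa_y = kappa(alpha_y)
    beta_a = beta * delta / (delta - alpha_y)   # arrival rate under P_{alpha_y}
    rate_B = delta - alpha_y                    # tilted claims are Exp(delta - alpha_y)
    est = np.empty(n_rep)
    for i in range(n_rep):
        t, S, Z = 0.0, 0.0, 0.0
        while t <= y * u:
            T = rng.exponential(1.0 / beta_a)
            t += T
            if t > y * u:                       # no further claim before the horizon
                break
            S += rng.exponential(1.0 / rate_B) - T
            if S > u:                           # ruin at time t <= y*u
                Z = np.exp(-alpha_y * S + t * kappa_y)
                break
        est[i] = Z
    return est.mean(), est.std(ddof=1) / np.sqrt(n_rep)
```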

Remark 4.3 Theorem IV.4.8 has a stronger conclusion than (4.1), and (4.1), which is all that is needed here, can be shown much easier. Let σ_y² = Var_{α_y}(τ(u))/u, so that (τ(u) − yu)/(σ_y u^{1/2}) has a limiting standard normal distribution under P_{α_y}. Then

   z(u) = E_{α_y} Z(u) ≥ E_{α_y}[ e^{−α_y S_{τ(u)} + τ(u)κ(α_y)} ; yu − σ_y u^{1/2} ≤ τ(u) ≤ yu ]
        ≥ e^{−α_y u + (yu − σ_y u^{1/2})κ(α_y)} E_{α_y}[ e^{−α_y ξ(u)} ; yu − σ_y u^{1/2} ≤ τ(u) ≤ yu ]
        = e^{−γ_y u − σ_y u^{1/2} κ(α_y)} E_{α_y}[ e^{−α_y ξ(u)} ; yu − σ_y u^{1/2} ≤ τ(u) ≤ yu ],

where the last expectation stays bounded away from 0 by Stam's lemma (cf. IV.4) and the central limit theorem for τ(u). Hence

   lim inf_{u→∞} (log z(u))/u ≥ lim inf_{u→∞} (−γ_y u − σ_y u^{1/2} κ(α_y))/u = −γ_y.

That lim sup ≤ follows similarly, but easier, as when estimating E_{α_y}Z(u)² above. □

Notes and references The results of the present section are new. In Asmussen [13], related discussion is given in a heavy traffic limit η ↓ 0 rather than when u → ∞.

5 Regenerative simulation

Our starting point is the duality representations in II.3: for many risk processes {R_t}, there exists a dual process {V_t} such that

   ψ(u, T) = P( inf_{0≤t≤T} R_t < 0 ) = P(V_T > u),   ψ(u) = P( inf_{t≥0} R_t < 0 ) = P(V_∞ > u),   (5.1)

where the identity for ψ(u) requires that V_t has a limit in distribution V_∞. In most of the simulation literature (say in queueing applications), the object of interest is {V_t} rather than {R_t}, and (5.1) is used to study V_∞ by simulating {R_t} (for example, the algorithm in Section 3 produces simulation estimates for the tail P(W > u) of the GI/G/1 waiting time W). However, we believe that there are examples also in risk theory where (5.1) may be useful. One main example is {V_t} being regenerative (see A.1): then

   ψ(u) = P(V_∞ > u) = (1/Eω) E ∫_0^ω I(V_t > u) dt,   (5.2)

where ω is the generic cycle for {V_t}.

Thus the method provides one answer to the question of how to avoid simulating {R_t} for an infinitely long time period. The method of regenerative simulation, which we survey below, provides estimates for P(V_∞ > u) (and more general expectations Eg(V_∞)).

To this end, consider first the case of independent cycles. Simulate a zero-delayed version of {V_t} until a large number N of cycles have been completed. For the ith cycle, record Z^(i) = (Z_1^(i), Z_2^(i)), where Z_1^(i) is the cycle length and Z_2^(i) is the time during the cycle where {V_t} exceeds u, and let z_j = E Z_j^(i), i.e.

   z_1 = Eω,   z_2 = E ∫_0^ω I(V_t > u) dt.

Then Z^(1), ..., Z^(N) are i.i.d., so that, letting

   Z̄_1 = (Z_1^(1) + · · · + Z_1^(N))/N,   Z̄_2 = (Z_2^(1) + · · · + Z_2^(N))/N,

the LLN yields Z̄_1 → z_1, Z̄_2 → z_2, and thus

   ψ̂(u) = Z̄_2/Z̄_1 → z_2/z_1 = ψ(u)

as N → ∞, i.e. the regenerative estimator ψ̂(u) is consistent.

To derive confidence intervals, let Σ denote the 2 × 2 covariance matrix of Z^(i). Then

   √N (Z̄_1 − z_1, Z̄_2 − z_2) → N_2(0, Σ),

and a standard transformation technique (sometimes called the delta method) yields

   √N ( h(Z̄_1, Z̄_2) − h(z_1, z_2) ) → N(0, σ_h²)

for smooth h : R² → R, where σ_h² = ∇h Σ ∇h′ and ∇h = (∂h/∂z_1, ∂h/∂z_2). Taking h(z_1, z_2) = z_2/z_1 yields ∇h = (−z_2/z_1², 1/z_1) and

   √N ( ψ̂(u) − ψ(u) ) → N(0, σ²),   (5.3)

where

   σ² = (z_2²/z_1⁴) Σ_11 + (1/z_1²) Σ_22 − (2z_2/z_1³) Σ_12.   (5.4)

The natural estimator for Σ is the empirical covariance matrix

   S = (1/(N − 1)) Σ_{i=1}^N (Z^(i) − Z̄)(Z^(i) − Z̄)′,

so that σ² can be estimated by

   s² = (Z̄_2²/Z̄_1⁴) S_11 + (1/Z̄_1²) S_22 − (2Z̄_2/Z̄_1³) S_12,   (5.5)

and the 95% confidence interval is ψ̂(u) ± 1.96 s/√N.

Notes and references The literature on regenerative simulation is extensive, see e.g. Rubinstein [310] and Rubinstein & Melamed [311]. The regenerative method is not likely to be efficient for large u but is rather a brute force one. However, in some situations it may be the only one resolving the infinite horizon problem, say for risk processes with a complicated structure of the point process of claim arrivals and heavy-tailed claims. There is potential also for combining with some variance reduction method.
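The confidence interval (5.5) is straightforward to implement once cycle data are available. In the sketch below, simulate_cycle is a hypothetical user-supplied routine returning the pair (Z_1^(i), Z_2^(i)) for one regeneration cycle; the rest is our own illustration of (5.3)-(5.5), not code from the book.

```python
import numpy as np

def regenerative_ci(simulate_cycle, u, N, rng=None, z=1.96):
    """Regenerative estimate psi_hat(u) = Z2bar/Z1bar with a delta-method
    confidence interval, following (5.3)-(5.5)."""
    rng = np.random.default_rng(rng)
    Z = np.array([simulate_cycle(u, rng) for _ in range(N)])  # rows: (Z1, Z2)
    Z1, Z2 = Z.mean(axis=0)
    S = np.cov(Z, rowvar=False)                   # empirical covariance matrix
    psi_hat = Z2 / Z1
    grad = np.array([-Z2 / Z1**2, 1.0 / Z1])      # gradient of h(z1, z2) = z2/z1
    s2 = grad @ S @ grad                          # estimate of sigma^2
    half = z * np.sqrt(s2 / N)
    return psi_hat, (psi_hat - half, psi_hat + half)
```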

6 Sensitivity analysis

We return to the problem of III.9, to evaluate the sensitivity ψ_ζ(u) = (d/dζ)ψ(u), where ζ is some parameter governing the risk process. In III.9, asymptotic estimates were derived using the renewal equation for ψ(u). We here consider simulation algorithms which have the potential of applying to substantially more complex situations.

Before going into the complications of ruin probabilities, consider an extremely simple example, the expectation z = EZ of a single r.v. Z of the form Z = ϕ(X), where X is a r.v. with distribution depending on a parameter ζ. Here are the ideas of the two main approaches in today's simulation literature:

The score function (SF) method. Let X have a density f(x, ζ) depending on ζ. Then

   z(ζ) = ∫ ϕ(x) f(x, ζ) dx,

so that differentiation yields

   z_ζ = ∫ ϕ(x) (d/dζ)f(x, ζ) dx = ∫ ϕ(x) [ (d/dζ)f(x, ζ) / f(x, ζ) ] f(x, ζ) dx = E[SZ],

where

   S = (d/dζ)f(X, ζ) / f(X, ζ) = (d/dζ) log f(X, ζ)

is the score function familiar from statistics. Thus, SZ is an unbiased Monte Carlo estimator of z_ζ.

Infinitesimal perturbation analysis (IPA) uses sample path derivatives. So assume that a r.v. with density f(x, ζ) can be generated as h(U, ζ), where U is uniform(0,1). Then z(ζ) = Eϕ(h(U, ζ)) and

   z_ζ = E[ (d/dζ) ϕ(h(U, ζ)) ] = E[ ϕ′(h(U, ζ)) h_ζ(U, ζ) ],

where h_ζ(u, ζ) = (∂/∂ζ)h(u, ζ). For example, if f(x, ζ) = ζe^{−ζx}, one can take h(U, ζ) = −log U/ζ, giving h_ζ(U, ζ) = log U/ζ².

The derivations of these two estimators are heuristic in that both use an interchange of expectation and differentiation that needs to be justified. For the SF method, this is usually unproblematic and involves some application of dominated convergence. For IPA there are, however, non-pathological examples where sample path derivatives fail to produce estimators with the correct expectation. To see this, just take ϕ as an indicator function, say ϕ(x) = I(x > x_0), and assume that h(U, ζ) is increasing in ζ. Then ϕ(h(U, ζ)) is 0 for ζ < ζ_0 and 1 for ζ > ζ_0, for some ζ_0 = ζ_0(U), so that the sample path derivative ϕ′(h(U, ζ))h_ζ(U, ζ) is 0 w.p. one. Thus, IPA will estimate z_ζ by 0, which is obviously not correct. In the setting of ruin probabilities, this phenomenon is particularly unpleasant since indicators occur widely in the CMC estimators. A related difficulty occurs in situations involving the Poisson number N_t of claims: also here the sample path derivative w.r.t. β is 0. The following example demonstrates how the SF method handles this situation.
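For the exponential density f(x, ζ) = ζe^{−ζx} the score is S = 1/ζ − X, so the SF estimator ϕ(X)(1/ζ − X) is unbiased even for indicator-type ϕ where IPA fails. A small sketch (our own example and notation):

```python
import numpy as np

def sf_sensitivity(phi, zeta, n_rep=100_000, rng=None):
    """Score-function estimate of d/dzeta E[phi(X)] for X ~ Exp(zeta).
    The score of the exponential density zeta*exp(-zeta*x) is 1/zeta - X."""
    rng = np.random.default_rng(rng)
    X = rng.exponential(1.0 / zeta, size=n_rep)
    vals = phi(X) * (1.0 / zeta - X)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_rep)

# Example with phi = indicator I(X > x0); the exact derivative is -x0*exp(-zeta*x0).
x0, zeta = 2.0, 1.0
print(sf_sensitivity(lambda x: (x > x0).astype(float), zeta))
```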

Example 6.1 Consider the sensitivity ψ_β(u) w.r.t. the Poisson rate β in the compound Poisson model. Let M(u) be the number of claims up to the time τ(u) of ruin (thus, τ(u) = T_1 + · · · + T_{M(u)}). The likelihood ratio up to τ(u) for two Poisson processes with rates β, β_0 is

   Π_{i=1}^{M(u)} [ β e^{−βT_i} / (β_0 e^{−β_0 T_i}) ],

so that ψ(u) = E_{β_0}[ Π_{i=1}^{M(u)} β e^{−βT_i}/(β_0 e^{−β_0 T_i}) ; τ(u) < ∞ ]. Taking expectation, differentiating w.r.t. β and letting β_0 = β, we get

   ψ_β(u) = E[ Σ_{i=1}^{M(u)} (1/β − T_i) ; τ(u) < ∞ ] = E[ (M(u)/β − τ(u)) I(τ(u) < ∞) ].

We recall from III.9 that ψ_β(u) is of the order of magnitude ue^{−γu}. Thus, the estimation of ψ_β(u) is subject to the same problem concerning relative precision as in rare events simulation. To resolve the infinite horizon problem, change the measure to P_L as when simulating ψ(u). We then arrive at the estimator

   Z_β(u) = (M(u)/β − τ(u)) e^{−γu} e^{−γξ(u)}

for ψ_β(u) (to generate Z_β(u), the risk process should be simulated with parameters β_L, B_L). However, since

   E_L Z_β(u)² ≤ E_L (M(u)/β − τ(u))² e^{−2γu} = O(u²) e^{−2γu},

we have

   Var_L(Z_β(u)) / ψ_β(u)² ≈ O(u²) e^{−2γu} / (u² e^{−2γu}) = O(1),

so that in fact the estimator Z_β(u) has bounded relative error. □

Notes and references A survey of IPA and references is given by Glasserman [161] (see also Suri [358] for a tutorial), whereas for the SF method we refer to Rubinstein & Shapiro [312]. There has been much work on resolving the difficulties associated with IPA pointed out above; in the setting of ruin probabilities, a relevant reference is Vazquez-Abad [374]. Example 6.1 is from Asmussen & Rubinstein [46], who also work out a number of similar sensitivity estimators, for different models and for the sensitivities w.r.t. different parameters, in part for different measures of risk than ruin probabilities.

Chapter XI

Miscellaneous topics

1 The ruin problem for Bernoulli random walk and Brownian motion. The two-barrier ruin problem

The two-barrier ruin probability ψ_a(u) is defined as the probability of being ruined (starting from u) before the reserve reaches level a > u. That is,

   ψ_a(u) = P(τ(u, a) = τ(u)) = 1 − P(τ(u, a) = τ_+(a)),

where¹

   τ(u) = inf{t ≥ 0 : R_t ≤ 0},   τ_+(a) = inf{t ≥ 0 : R_t ≥ a},   τ(u, a) = τ(u) ∧ τ_+(a).

Besides its intrinsic interest, ψ_a(u) can also be a useful vehicle for computing ψ(u) by letting a → ∞, as e.g. in the Bernoulli random walk example below.

Consider first a Bernoulli random walk, defined as R_0 = u (with u ∈ {0, 1, ...}), R_n = u + X_1 + · · · + X_n, where X_1, X_2, ... are i.i.d. and {−1, 1}-valued with P(X_k = 1) = θ.

¹Note that the definition of τ(u) differs from the rest of the book, where we use τ(u) = inf{t > 0 : R_t < 0} (two sharp inequalities). In most cases, either this makes no difference (P(R_{τ(u)} = 0) = 0) or it is trivial to translate from one set-up to the other.
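Before turning to the closed-form expressions, note that the definition of ψ_a(u) can always be checked by direct simulation of the walk. The following small sketch (our own code, not the book's) does this for the Bernoulli case.

```python
import numpy as np

def psi_a_mc(u, a, theta, n_rep=100_000, rng=None):
    """Monte Carlo estimate of the two-barrier ruin probability psi_a(u) for
    the Bernoulli random walk R_n = u + X_1 + ... + X_n with P(X = 1) = theta."""
    rng = np.random.default_rng(rng)
    ruined = 0
    for _ in range(n_rep):
        R = u
        while 0 < R < a:                       # run until tau(u, a)
            R += 1 if rng.random() < theta else -1
        ruined += R <= 0
    return ruined / n_rep

# For theta != 1/2 the exact value is, with r = (1 - theta)/theta,
#   (r**u - r**a) / (1 - r**a).
print(psi_a_mc(u=5, a=20, theta=0.55))
```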

(1-B)u oJ 0.298 CHAPTER XI.1) is solution. Proof 1. The martingale is then {zuzXl+•••+X„ } = {zR° }. = z°Va(u) + za(1 - . then 'Oa(u) _ au a We give two proofs .2)... one elementary but difficult to generalize to other models.2) Oa(a .+Xn) F[ a]n n=0. We choose a = -ry where ry is the Lundberg exponent. z and the solution is z = (1 .0)/0. and the other more advanced but applicable also in some other settings.a) Y.. Wald's exponential martingale is defined as in 11.1 For a Bernoulli random walk with 0 0 1/2.r(u. i. The Lundberg equation becomes 1=F[-ry]=(1-9)+9z.o)'t/1a(a .1) = (1-9)4/'0(a-3)+9ba(a-1).(u) I\ e = 1 oa ' ()i a = u. = (1 .o)T/la (1) + 8z/'u(3). (1.e.1) o If 0 = 1/ 2.1. MISCELLANEOUS TOPICS Proposition 1.. u Proof 2.. where a is any number such that Ee°X = F[a] <oo. the solution of F[-.o» = z°P (RT ( u. In a general random walk setting . tba(2) _ (1 . Conditioning upon X1 yields immediately the recursion 'a(1) = 1-9+00a(2).y] = 1. C1_0\a.. u + 1.a) = 0) + zap ( R. By optional stopping.(4.a(u)). and insertion shows that ( 1.. 7/la(a ..4) by ea(u+Xl+.. zu = EzRO = EzRT(u. and in view of the discrete nature of a Bernoulli random walk we write z = e-7.

If p<0. {R. thenz1 (u)=1. then Proof Since 'Oa (U) -- a-u a Eea(R°.4 For a Brownian motion with drift u > 0. {Rt} is itself a martingale and just the same calculation as in the u proof of Proposition 1. i1(u) = e-211 . TWO BARRIERS 299 and solving for 4/la(u) yields t/ia(u) = (za .2 For a Bernoulli random walk with 9 > 1/2.u) = et(a2 /2 +aµ) the Lundberg equation is rye/2-'yp = 0 with solution y = 2p. } yields e-7u = Ee-7R° = e°Wa(u) + e-7a(1 . However.e-7u)/(e-7° .1).2) is trivial (z = 1).u)/u..5) .zu)/(za .1.3 Let {Rt} be Brownian motion starting from u and with drift p and unit variance . BROWNIAN MOTION. Then for p 0 0. and solving for 9/la(u) yields Z/)a(u) = (e -76 . If p = 0. u Proposition 1. pa( u) _ u Corollary 1.1). (1. then Vi(u) = 1. . 1h (u) = a el u \1 If 9 < 1/ 2. Proof Let a-+ oo in (1. If 9 = 1/2.1) for p # 0.ba(u) = e-2µa .1 If p = 0. (1.• a-2µa e-2µu .1 yields 't/la(u) = (a . Applying optional stopping to the exponential martingale {e-7R. Corollary 1. RANDOM WALK.0a(u)).} is then itself a martingale and we get in a similar manner u = ER° = ER ra( u) = 0 • Y'a (u) + all - a-u Y'a( u)).

CHAPTER XI.a ) < 0) + e -7aF ( R (u.616). say.7).300 Proof Let a -* oo in (1.a) = a) = 5 y = P (R (u. 5). (u) _ O(u) . VIII.0(a) 0 < u < a. For most standard risk processes .a) = a ) + e -' ° ( 1 . 1 .5 Consider the compound Poisson model with exponential claims (with rate.3.4). this immediately yields (1. we obtain 'Oa a-7u .e-7a (u) = 6 /0 . 7/'(u) = 1).. Ic 5-ry 'pa(u) Using y = 6 . passing to even more general cases the method quickly becomes unfeasible (see.7/la(u)). a) I R(u a ) < 0] P (R(u .a) = a on {r (u. the paths are upwards skip-free but not downwards. and thus one encounters the problem of controlling the undershoot under level 0. Here is one more case where this is feasible: Example 1.a) = -r+ (a)} and similarly for the boundary 0. . It may then be easier to first compute the one-barrier ruin probability O(u): Proposition 1.0 (u) (where u p =. implying R(u. 7O(u) = 7/la(u) + (1 .+^a(u))^(a) If 7k(a) < 1. (1.e-7a Again .a) < 0) + e-7°P (R(u. and hence e-7u = Ee-7Ro E [e-7R(.5a).vi(a) Proof By the upwards skip-free property. However. however. MISCELLANEOUS TOPICS u The reason that the calculations work out so smoothly for Bernoulli random walks and Brownian motion is the skip-free nature of the paths. letting a -* oo yields the standard expression pe-7u for . Here the undershoot under 0 is exponential with rate 5. valid if p < 1 (otherwise .7) . 0.6 If the paths of {Rt} are upwards skipfree and 7//(a) < 1.

of -r(u) are ( U2 Pµ (T(u ) E dT) = 2^T -3/2 exp µu . + µ2T) } . we have ili(u. Then the density and c. P(MT > u) = P(ST > u) + P(ST < u.8) Proof In terms of the claim surplus process { St} = {u .. Hence P(MT>u.1.µ T I + e2µ"4) ( .T) P(MT > u) where MT = maxo<t<T St. We now return to Bernoulli random walk and Brownian motion to consider finite horizon ruin probabilities.8 Let {Rt} be Brownian motion with drift .. BROWNIAN MOTION. and (1 . Here {St } is Brownian motion with drift 0 (starting from 0).11 ) is the same as (1. 0(u. ( 1.4) I = . T ) = P(T(u) < T ) = 241.7 For Brownian motion with drift 0. = 1 . the density dPµ / dP0 of St is eµst-tµ2/2. and hence Pµ('r(u) E dT) = Eo [e µsr(.8 ).9) = 2P(ST > u)..ST>U). For µ # 0. Corollary 1. (1. (i).d. TWO BARRIERS 301 Note thas this argument has already been used in VII. (1.f. For the symmetric (drift 0) case these are easily computable by means of the reflection principle: Proposition 1. T(u) E dT.. 10) follows then by straightforward differentiation.)_ _( u)µ2 /2. RANDOM WALK.11) VIT ) Proof For p = 0.1a for computing ruin probabilities for a two-step premium function.µ%T (1.Rt}.10) Pµ (T(u) < T) !. in particular symmetric so that from time r(u) (where the level is level u) it is equally likely to go to levels < u and levels > u in time T . = eµu-Tµ2/2Po (T( u) E dT) 2 eµu-Tµ2/2 u T-3/2 ex p u 27r p 1-2 T . MT > u) = P (ST > u) + P (ST > u. MT > U) = P(ST > u) + P(ST > u) (1.µ so that {St} is Brownian motion with drift µ .2 ..r(u).ST<u) = P(MT>u.

h. see e. u Small modifications also apply to Bernoulli random walks: Proposition 1.. If this assumption fails. the behaviour at the boundary 0 is more complicated and it may happen.T (1. S(x) = f x s(y)dy.T) = P(ST = u) + 2P (ST > u). such that the drift µ(x) and the variance a2(x) are continuous functions of x and that a2(x) > 0 .12) P(ST = v) = 0 otherwise.T-2.13) The following results gives a complete solution of the ruin problem for the diffusion subject to the assumption that S(x).10).g. MISCELLANEOUS TOPICS which is the same as (1. Vi(u.9 For Bernoulli random walk with 9 = 1/2.s. e. whenever u.8 also applies to the case 9 54 1/2. The expression for F ( ST = v) is just a standard formula for the u binomial distribution. We finally consider a general diffusion {Rt} on [0.12) is the same as ( 1. and (1. is (1. Breiman [78] or Karlin & Taylor [222] p.302 CHAPTER XI.. The same argument as used for Corollary 1.. as defined in (1. is zero for all u > 0 but that nevertheless Rt ^4 0 (the problem leads into the complicated area of boundary classification of diffusions. 226).11) then follows by checking that the derivative of the r.9). as defined above as the probability of actually hitting 0.9) goes through unchanged. is finite for all x > 0.10) and that the value at 0 is 0. that 0(u).13) with 0 as lower limit of integration. T are integer-valued and non-negative. Thus.-T+2. 0 0 (1. (1. but we omit the details.3 we can define the local adjustment coefficient y(x) as the one -2µ(x)/a2(x) for the locally approximating Brownian motion. Theorem 1. close to x {Rt} behaves as Brownian motion with drift µ = u(x) and variance a2 = a2(x). oo). Here {2-T( (v-}TT)/2) v=-T. We assume that u(x) and a2 (x) are continuous with a2 (x) > 0 for x > 0..T)dx.g. S(oo) = f c s(y)dy. oo) with drift µ(x) and variance a2 (x) at x. and in a similar spirit as in VII.10 Consider a diffusion process {Rt} on [0. Proof The argument leading to ( 1. Let s(y) = ef0 ry(.

see in particular pp. (1 . so that Y)n. If (1.13) is finite for all x > 0. the function S(x) is . Notes and references All material of the present section is standard.16) yields 4b (u) = 1 . 15) i. see Asmussen & Perry [42]. A good introduction to diffusions is in Karlin & Taylor [222].S(u) (1.b(b) = 1. we can ignore the possibility of ruin or hitting the upper barrier a before dt. The obvious boundary conditions '0a. 0 Proof of Theorem 1.b(Rdt).16) S(a) .e LVa.(u) < 1 for all u > 0 and ^ S^ Conversely.ba. E„ q(Rdt) = q(u)+Lq(u)dt. For generalizations of Proposition 1. Assume further that S (x) as defined in (1. S(oo) < oo separately u completes the proof. Lemma 1.6 to Markov-modulated models .e.b('u) = Eu .11 Let 0 < b < u < a and let t&0. Then YIa.b(u) be the probability that {Rt} hits b before a starting from u. [117]. where Lq(u) = 0'22u) q "(u) + p(u)q(u) is the differential operator associated with the diffusion. BROWNIAN MOTION.b(u) + L. A classical reference for further aspects of Bernoulli random walks is Feller [142].10. i. Using s'/ s = -2p/a2.ba. then 0 < 2l.17) Hence L.14) S(oo) < 00.b(a) = 0 then yield the result.b = 0 implies that VQ b/s is constant. b = 0. Wa. Letting b J.b = a+/3S. and we get Wo.1.b('u) = Eu &0. elementary calculus shows that we can rewrite L as Lq(u) d 1a2 (u)s(u)d [ s (u) ? ] .0(u) = 1 for all u > 0.16). then. 1'. RANDOM WALK.b (Rdt) = Oa.S(u)/S(a).b(u)dt.14) fails. Letting a T oo and considering the cases S(oo) = oo. If b < u < a.S(b) Proof Recall that under mild conditions on q. . Further references on two-barrier ruin problems include Dickson & Gray [116].b(u) = S(a) . 191-195 for material related to Theorem 1. TWO BARRIERS 303 for x > 0. In view of (1. O. if (1. 0 in (1.10. (1.

ytc (ay).9 ) and optional stopping applied to the stopping time r(u) A T. Another basic quantity is the speed measure M .3) < e -7yu. (2. and here are alternative martingale proofs of the rest . Remark 11.13)). with the drift and the variance depending on an underlying Markov process .5): _ z/'(u) < e 7u. IV.t&(u. defined by the density 1/va(u)s(u) showing up in (1.5. (2.1 ) was given already in II. See Asmussen [20] and Rogers [305] for some recent treatments and references to the vast literature. Lo is a martingale (cf.5) A martingale proof of (2. 111 .o•K(a) = Ee . yu) '+/1(u) . but by duality.17).aR.4. 7y = ay . information on ruin probabilities can be obtained . much of the literature dels with the pure drift case. correponding to piecewise linear paths or . 2 Further applications of martingales Consider the compound Poisson model with adjustment coefficient ry and the following versions of Lundberg 's inequality (see Theorems 111.3.(T(u)AT) r.(7) . which is motivated from the study of modern ATM (asynchronous transfer mode ) technology in telecommunications. 1 y < k (y). Lo I.2) C_e-7u < t(u) < C+e _7u. MISCELLANEOUS TOPICS referred to as the natural scale in the general theory of diffusions (in case of integrability problems at 0.4. yu) where W (ay) = y. yielding e-au = Ee.4) I. where C_ = B(x) _ B(x) sup 2no fy° e7(Y )B(dy)' f2e7(Y-2)B(dy)' C+ i/i(u.304 CHAPTER XI. equivalently.(. y > .1.6. (2. one works instead with a lower limit 5 > 0 of integration in (1. (2. Markov-modulated Brownian models .2.aRo . is currently an extremely active area of research.6) .(a) (2.1) (2. variance 0.)AT . They all use the fact that ( tx(a) l ( e-aRt = e-au + aSt-tx(a) < e-7yu. The emphasis is often on stationary distributions ..

2): As noted in Proposition II.)-r(u)r. Hence E [e-7Rr (u) Jr(u) < ool ^00 H( dt. dr) e 7( y-r)B(dy) B(r) f oo o 0 r > H(dt. it follows easily from (2.6) with = 'y that e--yu . (2..yuk (ay)(u&(u.6) below by 1 E Le-7Rr(. we have tc(ay) > 0 and we can bound (2. RT(u)_) given r(u) < oo. y > r.2.1. so that i/1(uL yu) < e-ayu .4). .1. yu))• Letting T -+ oo yield e_ayu > e-yur4ay)(0(u) - Notes and references See II.d. dr JO Zoo ) f e7'B(r + dy) B(r) Jo ^00 ^00 H(dt. For (2.T) - V. Equivalently. dr) 1 = 1 I0 /o C+ C+ From this the upper inequality follows.(ay)I T(u) < yu] P(r(u) < yu) (using RT(u) < 0).1 . we have ic(ay ) < 0 and use the lower bound E [e-7Rr („). -Rt has distribution B(r + dy)/B(r).4): We take a = ay in (2. A claim leading to ruin at time t has c.(u. FURTHER APPLICATIONS OF MARTINGALES 305 (we cannot use the stopping time r(u) directly because P(-r(u) = oo) > 0 and also because the conditions of the optional stopping time theorem present a problem).E [e. when Rt_ = r. (B(y) . u Proof of (2.B(r))/B(r). Proof of ( 2.yu) Y Similarly for (2. Let H(dt.( u ) I T(U) < 00] .7R.yu))• b(u.f.3).6). dr) denote the conditional distribution of (T(u). and the proof of the lower inequality is similar. eyuk (ay) = e-7yu e > e-yu"(ay ) ij(u.T(u)K(ay) I yu < r(u) < T] F(yu < r(u) < T) > e.3).

+ X. e. The classical result in the area is Cramer's theorem. logarithmic asymptotics . Thus . For example.1 We will go into some more detail concerning (3. The advantage of the large deviations approach is.1) where we return to the values of 0. and that a considerable body of theory has been developed. The last decades have seen a boom in the area and a considerable body of applications in queueing theory.. n--roo n n /// Note in particular that (3. we will write fn 1.2). However . its generality. gn with fn -+ 0 . og For sequences fn. not quite so much in insurance risk.1) amounts to the weaker statement lim 1 log P I Sn > x I = -17.nn or C2e-. ri.gn if n-ioo lim 109 fn = 1 log gn (later in this section. (3./n E I) for intervals I C R. Accordingly.1) is an example of sharp asymptotics : .^ e -nn 1 > x n 0o 2xn (3.?n typically only give the dominant term in an asymptotic expression . and gave sharp asymptotics for probabilities of the form P (S. logarithmic asymptotics is usually much easier to derive than sharp asymptotics but also less informative . MISCELLANEOUS TOPICS 3 Large deviations The area of large deviations is a set of asymptotic results on rare event probabilities and a set of methods to derive such results. cle . Thus.g. large deviations results been. The limit result (3. such that the cumulant generating function r.. (3. v2 later. .. large deviations results have usually a weaker form. if x > EX1.the correct sharp asymptotics might as well have +.1). then P C S.1) does not capture the \ in (3. 1) but only the dominant exponential term . gn -4 0.. which in the setting of (3.306 CHAPTER XI.3na with a < 1. Cramer considered a random walk Sn = X1 + . the parameter will be u rather than n).(B) = log EeOX 1 is defined for sufficiently many 0. Example 3. . however .2) can be rewritten as F (Sn/n > x) 1-g a-'fin. in being capable of treating many models beyond simple random walks which are not easily treated by other models .means (as at other places in the book) that the ratio is one in the limit (here n -* oo).

sseo f which in conjunction with (3.r.9S„+n' ( 9).425. Most often. if we replace Sn by nx + o / V where V is N(0.3) is put equal to x. V > 0 e.960/) -* 0. exponential change of measure is a key tool in large deviations methods.(0)) e 307 (other names are the entropy. we get P(Sn/n > x) E [e-9nx +nK(9)-9" '. which is a saddlepoint equation . the sup in the definition of rc* can be evaluated by differentiation: rc*(x) = Ox .r.4 e-nn +1.the mean rc'(0) of the distribution of X1 exponentially tilted with 0. and hence for large n P(Sn/n > x) > E [e. S rtn > x 1. LARGE DEVIATIONS Define rc* as the convex conjugate of rc. nx < Sn < nx + 1.e. we have P(nx < Sn < nx + 1.t. Since P nn > x) = E {e_8 ' ( 9). (3.(e)i XI E dx]. In fact. since Sn is asymptotically normal w.96o /] > 0.2).q = rc* (x). the Legendre-Fenchel transform or just the Legendre transform or the large deviations rate function). of P(X1 E dx) = E[e9X1-K. P with mean nx and variance no.3.tin f o') o e-9o^y 1 1 e-y2/2 dy 21r = e-tin 1 Bo 27rn .1). 2 where o2 = o2(x) = rc"(0). rc*(x) = sup(Ox . i.4) immediately yields (3. Define .rc(0) where 0 = 0(x) is the solution of x = rc'(0). More precisely. replacing Sn in the exponent and ignoring the indicator yields the Chernoff bound P Sn > x 1 < e-°n (3.4) n Next.

... (iv) tc(ry) = 0 and r. 260 for details.3.'(u) )Ng a-"u. be a sequence of r. and write Sn = X1 + • • • + Xn. For the proof.. r(u) = inf {n : Sn > u} and o(u) = P('r(u) < oo). Further main results in large deviations theory are the Gartner-Ellis theorem.h.. to be made rigorous.. Xn given by Fn(dxl.o log Ee9Sn /n. which is of similar spirit as the dicussion in VII..e < 8 < -y + e..1).s./^ >7 < zn n for n n0.2 (GLYNN & WHITT [163]) Let X1.p > 7 < zn. that is.3 For each i > 0. integrates to 1 by the definition of Icn). X2. asymptotics for probabilities of the form P ({S[nti/n}o<t<l E r) for a suitable set r of functions on [0. we shall concentrate on a result which give asymptotics under conditions similar to the Gartner-Ellis theorem: Theorem 3 .. 1) and no such that Sn . . (iii) #c (8) = limn. however. .. we introduce a change of measure for X1.. . We further write µ = tc'(ry). e > 0 such that (i) Kn (0) = log Ee°Sn is well-defined and finite for 'y . Mogulskii's theorem which gives path asymptotics. is differentiable at ry with 0 < K'(-y) < 00.. Ee9X n < oo for -e < 0 < e. (ii) lim supn. Then i/. .'s. MISCELLANEOUS TOPICS which is the same as (3. which is a version of Cramer's theorem where independence is weakened to the existence of c(O) = limn.e < 8 < y + e. We shall need: Lemma 3 .. there exists z E (0. 1].308 CHAPTER XI.dxn) = 05n-Kn(7)Fn(dx1. Pn Sn-1 . n Icn(0) exists and is finite for ry .dxn) where Fn is the distribution of (X1i . commonly denoted as is the saddlepoint approximation..v. The substitution by V needs. Sanov's theorem which give rare events asymptotics for empirical distributions. and the Wentzell-Freidlin theory of slow Markov walks.. In the application of large deviations to ruin probabilities. . Assume that there exists 'y.. see Jensen u [215] or [APQ] p. Xn) and sn = x1 + • • • + xn (note that the r..

s. We first show that lim inf„_.2. the r .ne(p+ 17). we get lim sup 1 log Pn (Sn-1 /n > µ + r7) < -0(1i + r7) + i(p(0 +'Y))/p n-+oo n and by Taylor expansion. For Sn-1i we have Fn(Sn -1/n > µ+r7) < e-ne(µ+ 1?)EneeS„-1 = e-ne ( µ+n)EneeSn-eX„ e-no(µ +n) Ee(e+7)Sn -ex„ -wn (7) < e. S.y) .> . Let r7 > 0 be given and let m = m(77) = [u(1 + 77)/µ] + 1.2. Since I Ee-qOX „ ] 1/q is bounded for large n by (ii).n e(µ +o)-w"(7) [Eep(B +7)Sn]1 /p [Ee-goX. LARGE DEVIATIONS Proof Let 0 < 9 < e where a is as in Theorem 3. P n(Sn/n > {c+77) < e no(µ 309 +n)Enees n +n)elcn(B +7).h.71 < e and jq9j < e.77) follows by symmetry (note that the argument did not use µ > 0). This proves the existence of z < 1 and no such that Pn (Sn/n > µ. The corresponding claim for Pn(Sn/n < µ . for Sn. > 1 +17] m(7). can be chosen strictly negative by taking p close enough to 1 and 0 close enough to 0. in particular the r.91) + o(O ) as 0 J. Clearly.+r-.-YS..Bµ . mµ Sm > u] km e-7Sm+n.r (7) n = e.Kn(7)e'n (p(O +7))/p I Ee -geXn]1/q where we used Holder's inequality with 1/p+ 1/q = 1 and p chosen so close to 1 and 0 so close to 0 that j p(0 +. The rest of the argument is as before. is of order .+r7) < zn for n > no. can be chosen strictly negative by taking 9 small enough.077 n-^oo n and by Taylor expansion and (iv )... This establishes the first claim of the lemma . log zl'(u)/u > -'y...s.3. h.n m µ 1 + rl .]1/q = e. it is easy to see that the r.s.W.µ?7 . Then V. h. ( U) P(S. 0.n > u ) = [ Em [em Em 1e.ne(µ limsup 1 log Pn (Sn/n > µ + 17) < ic(9 + ry) . S. u Proof of Theorem 3.m(7).

6) for some z < 1 and all n > n(E).. logO(u)/u > -ry.YS +^c CHAPTER XI. MISCELLANEOUS TOPICS (7).+wn(7). I > IL exp `S. n=1 n=n(b)+1 00 Lu(1 +6) /µJ 13 F( T (u) = P(T(u) n). P(T(u) = n) < P(Sn > u) = En [e-7S.(-Y). 3. For lim supu.. Sn > u] < e-Yu+Kn(7)pn(Sn > u) (3.7) so that n(b) I1 < e-'Yu E en.. (iv) and Lemma 3. we write P(T(u) = n) = Il + I2 + I3 + I4 'i/I(u) _ E00 n=1 where n(b) Lu(1-0/µJ Ii = 1: F(T(u) = n)...0 log i'(u )/u < -'y.log z) /2 and Sn Fn\ n >lb+S) <Zn. 14 = = E Lu(1-6)/aJ+1 Lu(1+6)/µJ+l = n) and n(S) is chosen such that icn('y )/n < 6 A (.I < µl1 1+77 I M 1-_ 1+277 S..3. n=1 . this is possible by (iii).310 ]Em I e. Obviously. we get lum inf z/i(u) 1 +12r7 >_ -ry + 77 Letting r7 J.n Yµ 1 + m + r ('Y) } U n \ 77 m µ µ7 1 < 1+ 77 ) Here E. and since Ic. I2 = F(T(u) = n).n(ry)/u -4 0andm/u-* (1 + r7)/µ. Pn \ > la+ 8 I < zn (3..(•) goes to 1 by Lemma 3. 0 yields liminfu __.

Sn > U] [ e(u(1+6)/µJ+l < e--Yu (u(1+6)/µJ+1 -7u r 0 0 e L^ e-n('Y ) fPn (I Sn 1 .zl/z en6 [u(1 +6)/µJ 1u (1 +6) /µJ ekn(7) < e' 13 < C" E Yu l u(1-6)/lij+1 Lu(1-6)/µJ+l1 < e-7U Finally.10) 00 I4 < E F(Sn_1 < u.' 1 + b) n e-7u x 1 /2 1 n x n / 2x (3. we get lim sup log u-/00 O (U) < -y + b(1 + b) U Letbl0.3. C 26u `p / +1 I e6u(1+6)/µ (3. LARGE DEVIATIONS Lu(1-6)/µJ 311 I2 < e-"u n=n(6)+1 e'n(Y)P(Sn > u) < Lu(1-6)/µJ ^. S. Sn-1 C U. e-ryu e-n logz/2p n nt n. > u) Lu(1+6) /µJ +l 00 )^n 'YSn+kn (7) . -µ n=n(6)+1 \ 1u(1-6)/µ1 00 1 zn < e-7u E Z n/2 < e--(U xn/2 E n=n(6)+1 n=0 e--Yu = 1 .11) [u(1+6)/µJ+1 1 - Thus an upper bound for z/'(u) is n(6) e-'Yu n=1 eKn (7) + 2 + (28U + 1) e6u(1+6)/µ Fi 1- zl /2 and using (i). u .

2. it holds for each b > 0 that 0(u) 1' g F(T(u) E (u(1 .312 CHAPTER XI. (7 + a) < 2arc'(7). it suffices to show that for j = 1.(u) = I1+I2+I3+I4'^ e-ry( u). 4 there is an aj > 0 and a cj < oo such that Ij < c3e. u(1 + b)/i(7)) Proof Since V.7' a-"ju. ryue-«iu . > u) < e-"' E eIsn = e-ctueKn (a+'Y)-Kn(7) where 0 < a < e and a is so small that r. IV. MISCELLANEOUS TOPICS The following corollary shows that given that ruin occurs. we have rcn (a + 7) < 2n^c(7 + a) < 4narc' (7). 2. u . For 12. Letting c11 = maxn<n. we need to redefine n(b) as L. Then for n large. Corollary 3.3ui where . this is straightforward since the last inequality in (3.4.4 Under the assumptions of Theorem 3. 13 = P(T (u) E (u(1 -b)l^ (7).4/3rc'(-y) > 0. the typical time is u/rc'(7) just as for the compound Poisson model. the last steps of (3.8) by P(S.('+'Y).z 1/z For I1.9) can then be sharpened to x LQuJ /2 I2 < e-7u 1 .11 ) can be sharpened to x 4 [u(1+6)/µJ /2 1 . we replace the bound P(Sn > u ) < 1 used in (3. say n n1. e'. I2.. cf. For I. we get Lou] E exp {-( 7 + a)u + Kn(a +7)} n=1 Il Lou] exp {-(-y + a)u} { 111 + exp {4narc'(7)} n=1 exp {-('y + a)u} c1 exp {4/3uarc'(7)} = clewhere a1 = aw.b)/i(7). For 14.xl/2 to give the desired conclusion.Q is so small that w = 1 ..u(1+b)/rc'(7)).

Thus the total reward in the interval [0. To verify these in concrete examples may well present considerable difficulties.e. r. The reader not satisfied by this gap in the argument can easily construct a discrete time version of the models! The following formula (3. for the ruin probability z/-'h(u) of any discrete skeleton {Skh}k=0.1. Obviously many of the most interesting examples have a continuous time scale.5 Assume the Xn form a stationary Gaussian sequence with mean p < 0. criteria are given in Duffield & O'Connell [124].1. 09(9). but nevertheless. Theorem 3. and in fact. the key condition similar to (iii)... The problem is whether this is also the correct logarithmic asymptotics for the (larger) ruin probability O(u) of the whole process. we shall give two continuous time examples and tacitly assume that this can be done. Let {Nt}t>0 be a possibly inhomogeneous Poisson process with arrival rate .v..f.'(-y) > 0. t] is Rt = E V (Un) n: o„ <t .12) k=0.. whether P ( sup St > u ltg a ^" 0<t<oo // (3. and we conclude that Theorem 3 . V(s) with m.2 shows that the discrete time structure is used in an essential way. (iv) becomes existence of a limit tc(9) of tct(9) _ log Ee8S° It and a y > 0 with a(y) = 0. It is then well-known and easy to prove that Sn has a normal distribution with mean np and a variance wn satisfying i lim -wn = wz = Var(X1 ) + 2 E Cov(Xl..g. Hence z z\ 2 z nr-n(9) _ n Cn0p+BZn/ -* .. An event occuring at time s is rewarded by a r.3.-LARGE DEVIATIONS 313 Example 3 . Xk+l) k=1 00 n-aoo n provided the sum converges absolutely.3(s) at time s. If {St}t> 0 is the claims surplus process. 2 is in force with -y = -2p/wz. i. 11 Inspection of the proof of Theorem 3.14) is needed in both examples ..13) One would expect this to hold in considerable generality.2 then immediately yields the estimate log F( sup Skh > u) a-7u (3. Assuming that the further regularity conditions can be verified.(O) = 9µ+02 for all 9 E R..

It.Q„) . An apparent solution to this problem is to calculate the premium rate p = p(t) at time t based upon claims statistics . Most obviously. we conclude that Cu) log e-7 u (cf.. derive . then the payments from the company in [on. one would take p(t) = (1 + rt)At-/ t. assuming a continuous premium inflow at unit rate.0 and assume there are -y..'`1 U.14). this is not realistic .15) . a differential equation in t). it contributes to St by the amount Un(t .2 are trivial to verify.9t = /3 J t (Ee8U° i8l .1) ds rt (3. i.. e.g. Example 3. At = . .Lundberg model has the larger ruin probability. If the nth claim arrives at time Qn = s. Since the remaining conditions of Theorem 3.s). MISCELLANEOUS TOPICS are the event times. 0 and since EeOUn(8) -+ Ee°U^ as s -* oo. the Cramer-Lundberg model implicitly assumes that the Poisson intensity /3 and the claim size distribution B (or at least its mean µB) are known. Thus. but that a claim is not settled immediately. where Ft = a(A8 : 0 < s < t). we have rct (9)/t -4 ic (9).1) .v. Of course . We let ic (9) = 3(EeWU° .. Of course. we have S.1) ds .t. non-decreasing and with finite limits Un as s T oo ( thus.d. Thus by (3. Then logEeOR° = J0 /3(s)(^8(9) . It is interesting and intuitively reasonable to note that the adjustment coefficient ry for the shot . <t which is a shot-noise process.314 where the an CHAPTER XI. 0 Example 3 . More precisely.14) (to see this . Un represents the total payment for the nth claim).9t. 7 Given the safety loading 77. = U„ ( t . Un(s). e > 0 such that ic('y) = 0 and that r. We further assume that the processes {U1(s)}8>0 are i. O'n +S] is a r .1) ds . the above discussion of discrete skeletons).noise model is the same as the one for the Cramer -Lundberg model where a claim is immediately settled by the amount Un. Kt (0) t (Ee9U"it-8i J0 . the best estimator of /3µB based upon Ft-. Thus. n: o. the Cramer. leading to St = At-(1+77) Joo t S8 ds. (3. is At . (9) < oo for 9 < 'y + C. if the nth claim arrives at time a.6 We assume that claims arrive according to a homogeneous Poisson process with intensity 0 .

typically the adaptive premium rule leads to a ruin probability which is asymptotically smaller than for the Cramer-Lundberg model . uniform (0. one has y > y' (3. the Vi = .17) K(a) f o 1 O (a[I + (1 + 77) log u]) du -)3. and since the remaining conditions are trivial to verify. Thus. we have Nt t N.i. i.i.e.21) This follows from the probabilistic interpretation Si EN '1 Yi where Yi = Ui( 1+(1 +r7)log ©i) = Ui(1-(1 +17)Vi) where the Oi are i .1) .d.(1 +i) f > i= 1 s ds = E Ui 1 . rewrite first rc as te(a) _ /3E 1 1 +(1+77)aUJ eau 1 .3. (3.2 hold./3. again the above discussion of discrete skeletons) where y solves ic('y) = 0 It is interesting to compare the adjustment coefficient y with the one y* of the Cramer-Lundberg model.18) Thus (iii) of Theorem 3. standard exponential . Ui Nt / t 01i 315 St = Ui . we conclude that t. (3. It then follows from (3.log Oi are i. the solution of /3(Eelu .d.19) with equality if and only if U is degenerate.(1 + 17)0µB = 0.1) or . which yields eau f 1 t(1+n )audtl = E r Ee°Y = E [O(1+n)aueaul = E [eau J L Jo J L1+(l+r))aUJ .20) (3. To see this .16) i=1 o i=1 Let ict (a) = log Eeast .14) that rt _ 13 Jo _ (a [1_( i+77)log]) ds_flt = t (a) (3. Indeed.(1 + r7) log t (3. LARGE DEVIATIONS With the Qi the arrival times.b(u) IN a-'Yu (cf. equivalently.

and k(x) < 0. see also Nyrhinen [275] for Theorem 3. using that Ek(U) = 0 because of (3.(1 + ri)y*x is convex with k(oo) = 00. rc*' (0 ) < 0. [257] and Nyrhinen [275].2. this in turn yields y > y*.316 CHAPTER XI. the function k(x) = e7*x . much of the analysis carries over to more general cases. Lehtonen & Nyrhinen [244]. Further.19). assuming that the U. Martin-L6f [256]. and since tc(s). MISCELLANEOUS TOPICS Next. so there exists a unique zero xo = xo(r7) > 0 such that k(x) > 0..i. . a* (s) are convex with tc'(0) < 0 . For Example 3.20) is due to Tatyana Turova.7. with common distribution B and independent of Nt. 4 The distribution of the aggregate claims We study the distribution of the aggregate claims A = ^N' U. = P(N = n) = e-(3an However. say one year. [245].2 expressing the finite horizon ruin probabilities in terms of the distribution of A. The main example is Nt being Poisson with rate fit. x > x0. k(0) = 0.d. Therefore e7'U _ k(U) E [1+(1+77)y*U] . the proof of (3. This implies n(y*) < 0.xo. k'(0) < 0. Dembo & Zeitouni [105] and Shwartz & Weiss [339]. Further applications of large deviations idea in risk theory occur in Djehiche [122]. 11 Notes and references Some standard textbooks on large deviations are Bucklew [81]. y = y* can only occur if U . In addition to Glynn & Whitt [163]. see Nyrhinen [275] and Asmussen [25]. Further. This is a topic of practical importance in the insurance business for assessing the probability of a great loss in a period of length t.1 . are i. In particular. we are interested in estimating P(A > x) for large x.1 E [1+(1+77)y*U] 0 k (+ *y B(+ 1 + (1(+71)y*y B(dy) L xa 1 + f + (1 + rl) Y* xo jJxo k(y) B(dy ) + f' k(y) B(dy) } = 0. the study is motivated from the formulas in IV. at time t. though we do not always spell this out. we then take t = 1 so that p. 0 < x < x0. For notational simplicity.

Vare(A) = s. we define the saddlepoint 9 = 9(x) by EBA = x. A > x)] = e-ex+K( e)E9 [e . (4. no(a) = logE9e'A = rc(a + 9) .e.1).1).[s])3/2 = 0. In particular. Proposition 4. THE DISTRIBUTION OF THE AGGREGATE CLAIMS 317 4a The saddlepoint approximation We impose the Poisson assumption (4. The exponential family generated by A is given by Pe(A E dx) = E [eeA -K(9). A E dx] . A > x) e-ex+K(e ) e-e AB°[ely 1 e-v2/2 dy 0 2^ 00 -9x+p(e) e e-ze-z2/(2BZpB „[9)) dz 9 27r/3B" [9] fo e-ex+w ( e) oo z x)] ] 0 27r /3B" [9] o e 9 2 /3B" [9] J e-ex+w(B) dz . B"' [s] lim (B". K'(0) _ ic'(9) = x.ic(9) = . 818' where s' = sup{s : B[s] < oo}.1.1 Assume that lim8T8.1) where )30 = . The analysis largely follows Example 3. Then Ee"A = e'(") where x(a) _ 0(B[a] . For a given x. Hence P(A > x) = E e [e-9A+ ic(9).3e(bo[a] . i.2) implies that the limiting Pe-distribution of (A . only with 0 replaced by a9 and B by B9. B"[s] = oo. This shows that the Pe-distribution of A has a similar compound Poisson form as the F-distribution.3B"[9]. Then as x -* oo.x)//3B"[9] is standard normal.9(A-x).3B[9] and Be is the distribution given by eox B9(dx) = B [9] B(dx)."(0) = .4. e-9x+K(°) P(A > x) B 2ir /3 B" [9] Proof Since EBA = x.

1). the distribution of A is approximately normal .2) is often referred to as the Esscher approximation. and (4.(3µB)/(0µB^))1/2 has a limiting standard normal distribution as Q -^ oo. Furthermore 00 b(x)Sdx < oo for some ( E (1. it holds that EA = . under the Poisson assumption (4. In fact. Notes and references Proposition 4. Y satisfies 9(u) ti e-u2/2(1 + ibu3) (4. i. either of the following is sufficient: A.Q{AB (4. more generally. just the same dominated convergence argument as in the proof of Theorem 2. see Embrechts et al. it is quite questionable to use (4. 4b The NP approximation In many cases . [138].3) The result to be surveyed below improve upon this and related approximations by taking into account second order terms from the Edgeworth expansion. For example. 2).4) . Remark 4 . or. B covers distributions with finite support or with a density not too far from a-x° with a > 1. b is gamma-like.v. The present proof is somewhat heuristical in the CLT steps. MISCELLANEOUS TOPICS It should be noted that the heavy-tailed asymptotics is much more straightforward. The (first order) Edgeworth expansion states that if the characteristic function g(u) = Ee"`}' of a r. leading to P(A > x) :. Jensen [215] and references therein.e. For details. A covers the exponential distribution and phase-type distributions.2 If B is subexponential and EzN < oo for some z > 1.1 goes all the way back to Esscher [141].EN B(x). 1 . Thus .(D X . large x. In particular.3) and related results u for the case of main interest . b(x) = q(x)e-h(z). then P(A > x) .x') where x' = sup {x : b(x) > 0}. 3 A word of warning should be said right away : the CLT (and the Edgeworth expansion) can only be expected to provide a good fit in the center of the distribution .ycix °-ie-6x B.l3pB.1 yields: Proposition 4. some regularity of the density b(x) of B is required. Var(A) _ ^3p. b is log-concave. where q(x) is bounded away from 0 and oo and h (x) is convex on an interval of the form [xo. bounded with b(x) .318 CHAPTER XI. For a rigorous proof. For example.2i and that (A .

so that 1(u) 3 exp { .f.6) . are the cumulants . defined as the the solution of P(A < yl-e) = 1 . the CLT for Y = Y6 is usually derived via expanding the ch. Thus if EY = 0..6(1 .99. u5. the NP (normal power) approximation deals with the quantile al_E.. Heuristically.5) follows by integration. resp.l = EY.i 3 K3 } Pt^ exp .e-quantile in the distribution of Y. A particular case is a.. . the density of Y is 1 °° _ e-iuy f(u) du 2x _. Remark 4..EA)/ Var(A) and let yl_E. one needs to show that 163. Var(Y) = 1 as above .h.2K3 + 4i 64 + .2X2 .. one expects the u3 term to dominate the terms of order u4. where Kl .4. (4. K3 = E(Y . the standard normal distribution. as u2 u3 u4 9(u) = Ee'uY = exp {iuci .y2)^P(y)• 319 Note as a further warning that the r. (4..5). and so as a first approximation we obtain a1_E = EA + yl-e Var(A) . however.2 2 . then P(Y < y) 4(y) . f °o 9(y) = 1 e-'uye -u2/2(1 + iSu3) du 27r _ cc(y) .EY)3. and from this (4. If the distribution of Y is close to N(0.. which is often denoted VaR (the Value at Risk). s.e.3!). If this holds . In concrete examples . .. yl-E should be close to zl_E (cf.s. THE DISTRIBUTION OF THE AGGREGATE CLAIMS where b is a small parameter.i 6 r 1 3 so that we should take b = -ic3/6 in (4.2 ^ \1 . K2 = Var (Y).. are small. .5) is obtained by noting that by Fourier inversion.5) may be negative and is not necessarily an increasing function of y for jyj large. of (4.5 (y3 .3& (y). in particular.1). Let Y = (A .c2i.: EA + zl_E Var(A) . Rather than with the tail probabilities F(A > x). zl_e be the 1 . K4 .

this holds with a = 0..6 (1 .1)EY3.. however. the kth cumulant of A is /3PBk' and so s.yi.6pBki) d/2.zi. k3 is small for large /3 but dominates 1c4. b such that EN 1 U%.. For example. n = 1. 4c Panjer 's recursion Consider A = constants a.zl-E)W(zl-E) 1 . as required . Note..EA ) / Var(A).5) by noting that the 4.zl -E)V(zl_E) .3n-i /3 .S(1 .zl- E)^o(zl -E) . . [101]..E)A1 l -E)  1- E 4)(yl -E) ^' .k = /3µB^1 / (. Another main reference is Daykin et at.E . and assume that there exist n ) Pn_i .y2)cp( y) term.zl-E )w(zl _E) = which combined with S = -EY3/6 leads to q^ 1 Y1 .E + (yl. 21 .. We can rewrite (4. b = /3 for the Poisson distribution with rate /3 since Pn = -Pn-1 n! n (n .1)^ 2) µ'E Notes and references We have followed largely Sundt [354].S(1 . In particular .E(/3PB^1 )1^2 + s(z1-E . K5 .1). that [101] distinguishes between the NP and Edgeworth approximations.5(1 . let pn Pn = (a+ = P(N = n).(y) terms dominate the S(1 ..7) as 1 (3) a1-E = Qµa +z1 .320 CHAPTER XI. This leads to -t( yl -E) .EA)3 a1_E = EA + z1_E(Var (A))1/2 + 1 Var(A) Under the Poisson assumption (4.E )Azl -E) 4(z1-E) + ( yl-E . Using Y = (A .1) E (A .1)! n ^e-Q . MISCELLANEOUS TOPICS A correction term may be computed from (4. this yields the NP approximation 6(Z1 _E .E = z1-E + S(zi_E .

if go = 0...k . 1. (4. j = 0. The expression for fo is obvious. .4.. then j (a + b!) 1-ag k_1 3 gkfj. 2. the value of (4.4. j-1 g. j = 1.} and write gj = 2 .12) we get for j > 0 that fj n a b + n p n-lgj *n 00 U I n 1 *n = E a+bUi=j pn-19j n=1 j i=1 CC) n Ui EE n=1 Ia +b Ul i=1 =j pn_1 .9).13) but only O(j2) for Proposition 4. and calculating the gj*n recursively by 9*1 = 9j.4 Assume that B is concentrated on {0.11) Remark 4..14) is independent of i = 1.1... (4. . (4. n = k=n-1 9k(n-1 )9j -k • (4. . which would consist in noting that (in the case go = 0) fj = pn9jn n=1 (4. fj = P(A = j).10) f o = po.. THE DISTRIBUTION OF THE AGGREGATE CLAIMS 321 Proposition 4.13) Namely. n. fj = E (a+ b k =1 )9kfi_k . 2. . .. Then fo = >20 9onpn and fi = 1 E In particular. By symmetry.. .12) where g*n is the nth convolution power of g. . u Proof of Proposition 4. .. . Hence by (4. 2.. E[a +bU=I >Ui =j l i=1 J (4.14) is therefore a + b/n. (4.4 is that the algorithm is much faster than the naive method. j = 1. Since the sum over i is na + b.5 The crux of Proposition 4..12).4. the complexity (number of arithmetic operations required) is O(j3) for (4.

322
00 J

CHAPTER XI. MISCELLANEOUS TOPICS

EE (a + bk I gkg3 _ k lien-i n=ik=0 (a+bk l gkE g j'`kpn = E (a+b!)9kfi_k n=0 k=0 k=0 ^I 1 E(a+b. agofj+ k Jgkfj-k, k=i /

and and (4.9) follows . (4.11) is a trivial special case.

u

If the distribution B of the Ui is non-lattice , it is natural to use a discrete approximation . To this end, let U(;+, U(h) be U; rounded upwards, resp. downwards , to the nearest multiple of h and let A}h) = EN U. An obvious modification of Proposition 4.4 applies to evaluate the distribution F(h) of A(h) letting f( ) = P(A() = jh) and

g(h) gkh+

= P (U(h2 = kh) = B((k + 1)h) - B(kh ), k = 0, 1, 2, ... , = P (U4;+ = kh) = B(kh) - B (( k - 1)h) = gk - l,-, k = 1, 2, ... .

Then the error on the tail probabilities (which can be taken arbitrarily small by choosing h small enough ) can be evaluated by
00 00

< P(A > x ) f (h) j=Lx/hl j=Lx/hl
Further examples ( and in fact the only ones , cf. Sundt & Jewell [355]) where (4.9) holds are the binomial distribution and the negative binomial (in particular, geometric ) distribution . The geometric case is of particular importance because of the following result which immediately follows from by combining Proposition 4.4 and the Pollaczeck-Khinchine representation: Corollary 4.6 Consider a compound Poisson risk process with Poisson rate 0 and claim size distribution B. Then for any h > 0, the ruin probability zb(u) satisfies 00 00
f^,h) Cu) < E ff,+, j=Lu/hJ j=Lu/hJ (4.15)

f! h)

5. PRINCIPLES FOR PREMIUM CALCULATION
where f^ +, f^ h) are given by the recursions
(h) 3 (h) (h)

323

fj,+ = P 9k fj-k,+ ' I = 17 2, .. .
k=1 3 (h)

(h)

=

P

(h)

f9,- - (h) gk,-fA-k,- e 1 - ago,- k=1

j = 1+2,

starting from fo + = 1 - p, f(h) = (1 - p)/(1 - pgoh-) and using 07
g(kh) 1 (k+1)h

=

Bo((k + 1 ) h) - Bo(kh ) = - f
AB
kh

B(x) dx, k = 0, 1, 2, ... , k = 1,2 .....

gkh+

Bo(kh ) - Bo((k - 1 ) h) = 9kh)1 ,

Notes and references The literature on recursive algorithms related to Panjer's recursion is extensive, see e.g. Dickson [115] and references therein.

5 Principles for premium calculation
The standard setting for discussing premium calculation in the actuarial literature does not involve stochastic processes, but only a single risk X > 0. By this we mean that X is a r.v. representing the random payment to be made (possibly 0). A premium rule is then a [0, oo)-valued function H of the distribution of X, often written H(X), such that H(X) is the premium to be paid, i.e. the amount for which the company is willing to insure the given risk. The standard premium rules discussed in the literature (not necessarily the same which are used in practice!) are the following: The net premium principle H(X) = EX (also called the equivalence principle). As follows from the fluctuation theory of r.v.'s with mean, this principle will lead to ruin if many independent risks are insured. This motivates the next principle, The expected value principle H(X) = (1 + 77)EX where 77 is a specified safety loading. For 77 = 0, we are back to the net premium principle. A criticism of the expected value principle is that it does not take into account the variability of X which leads to The variance principle H(X) = EX+77Var(X). A modification (motivated from EX and Var(X) not having the same dimension) is

The standard deviation principle H(X) = EX + η\sqrt{Var(X)}.

The principle of zero utility. Here v(x) is a given utility function, assumed to be concave and increasing with (w.l.o.g.) v(0) = 0; v(x) represents the utility of a capital of size x. The zero utility principle then means

v(0) = Ev(H(X) - X); \qquad (5.1)

a generalization v(u) = Ev(u + H(X) - X) takes into account the initial reserve u of the company. By Jensen's inequality, v(H(X) - EX) ≥ Ev(H(X) - X) = 0, so that H(X) ≥ EX. For v(x) = x, we have equality and are back to the net premium principle.

There is also an approximate argument leading to the variance principle as follows. Assuming that the Taylor approximation

v(H(X) - X) \approx 0 + v'(0)(H(X) - X) + v''(0)(H(X) - X)^2/2

is reasonable, taking expectations leads to the quadratic

v'' H(X)^2 + H(X)\big(2v' - 2v'' EX\big) + v'' EX^2 - 2v' EX = 0

(with v', v'' evaluated at 0) with solution

H(X) = EX - \frac{v'}{v''} \pm \sqrt{ \Big( \frac{v'}{v''} \Big)^2 - Var(X) }.

Write

\sqrt{ \Big( \frac{v'}{v''} \Big)^2 - Var(X) } = -\frac{v'}{v''} \sqrt{ 1 - Var(X)\Big( \frac{v''}{v'} \Big)^2 } \approx -\frac{v'}{v''} + \frac{v''}{2v'}\, Var(X),

where the last step expands the square root to first order; if v''/v' is small, we can ignore the remaining terms. Choosing the root that keeps H(X) finite as v'' → 0 then yields

H(X) \approx EX - \frac{v''(0)}{2v'(0)}\, Var(X);

since v''(0) < 0 by concavity, this is approximately the variance principle.

The most important special case of the principle of zero utility is

The exponential principle which corresponds to v(x) = (1 - e^{-ax})/a for some a > 0. Here (5.1) is equivalent to 0 = 1 - e^{-aH(X)} Ee^{aX}, and we get

H(X) = \frac{1}{a} \log Ee^{aX}.

Since m.g.f.'s are log-concave, it follows that H_a(X) = H(X) is increasing as a function of a. Further, lim_{a↓0} H_a(X) = EX (the net premium principle) and, provided b = ess sup X < ∞, lim_{a→∞} H_a(X) = b (the premium principle H(X) = b is called the maximal loss principle but is clearly not very realistic). In view of this, a is called the risk aversion.

The percentile principle. Here one chooses a (small) number α, say 0.05 or 0.01, and determines H(X) by P(X ≤ H(X)) = 1 - α (assuming a continuous distribution for simplicity).

Some standard criteria for evaluating the merits of premium rules are

1. η > 0, i.e. H(X) > EX;

2. H(X) ≤ b when b (the ess sup above) is finite;

3. H(X + c) = H(X) + c for any constant c;

4. H(X + Y) = H(X) + H(Y) when X, Y are independent;

5. H(X) = H(H(X|Y)). For example, if X = \sum_{i=1}^N U_i is a random sum with the U_i independent of N, this yields

H\Big( \sum_{i=1}^N U_i \Big) = H\big(H(U)N\big)

(where, of course, H(U) is a constant).

Note that H(cX) = cH(X) is not on the list! Considering the examples above, the net premium principle and the exponential principle can be seen to be the only ones satisfying all five properties. The expected value principle fails to satisfy, e.g., 3, whereas (at least) 4 is violated for the variance principle, the standard deviation principle, and the zero utility principle (unless it is the exponential or net premium principle). For more detail, see e.g. Gerber [157] or Sundt [354].

Proposition 5.1 Consider the compound Poisson case and assume that the premium p is calculated using the exponential principle with time horizon h > 0. That is,
Ev\Big( ph - \sum_{i=1}^{N_h} U_i \Big) = 0, \quad \text{where } v(x) = \frac{1}{a}\big(1 - e^{-ax}\big).

Then γ = a, i.e. the adjustment coefficient γ coincides with the risk aversion a.

Proof The assumption means

0 = Ev\Big( ph - \sum_{i=1}^{N_h} U_i \Big) = \frac{1}{a}\Big( 1 - e^{-aph}\, e^{\beta h (B[a] - 1)} \Big),

i.e. β(B[a] - 1) - ap = 0, which is the same as saying that a solves the Lundberg equation. □

Notes and references The theory exposed is standard and can be found in many texts on insurance mathematics, e.g. Gerber [157], Heilmann [191] and Sundt [354]. For an extensive treatment, see Goovaerts et al. [165].
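Before turning to reinsurance, the following small numerical sketch makes the premium rules above concrete (the exponential claim distribution, the parameter values and the variable names are illustrative assumptions, not taken from the text). It simply evaluates H(X) under each principle for a risk X with mean 1.

```python
import numpy as np

# Illustrative claim: X ~ exponential with mean 1 (purely an example choice).
mean, var = 1.0, 1.0
mgf = lambda a: 1.0 / (1.0 - a)          # E e^{aX}, defined for a < 1
quantile = lambda p: -np.log(1.0 - p)    # inverse c.d.f. of X

eta, a, alpha = 0.1, 0.5, 0.05           # safety loading, risk aversion, percentile level

premiums = {
    "net":            mean,
    "expected value": (1 + eta) * mean,
    "variance":       mean + eta * var,
    "std deviation":  mean + eta * np.sqrt(var),
    "exponential":    np.log(mgf(a)) / a,          # H(X) = (1/a) log E e^{aX}
    "percentile":     quantile(1 - alpha),         # P(X <= H(X)) = 1 - alpha
}
for rule, H in premiums.items():
    print(f"{rule:>15s}: H(X) = {H:.4f}")
```

All of the loaded rules return a value above EX = 1, in line with criterion 1 above.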

6 Reinsurance
Reinsurance means that the company (the cedent) insures a part of the risk at another insurance company (the reinsurer). Again, we start by formulating the basic concepts within the framework of a single risk X ≥ 0. A reinsurance arrangement is then defined in terms of a function h(x) with the property h(x) ≤ x. Here h(x) is the amount of the claim x to be paid by the reinsurer and x - h(x) the amount to be paid by the cedent. The function x - h(x) is referred to as the retention function. The most common examples are the following two:

Proportional reinsurance h(x) = θx for some θ ∈ (0, 1). Also called quota share reinsurance.

Stop-loss reinsurance h(x) = (x - b)^+ for some b ∈ (0, ∞), referred to as the retention limit. Note that the retention function is x ∧ b.

Concerning terminology, note that in the actuarial literature the stop-loss transform of F(x) = P(X ≤ x) (or, equivalently, of X) is defined as the function

b \mapsto E(X - b)^+ = \int_b^\infty (x - b)\, F(dx) = \int_b^\infty \overline{F}(x)\, dx.

An arrangement closely related to stop-loss reinsurance is excess-of-loss reinsurance, see below.
Stop-loss reinsurance and excess-of-loss reinsurance have a number of nice optimality properties. The first we prove is in terms of maximal utility:

Proposition 6.1 Let X be a given risk, v a given concave non-decreasing utility function and h a given retention function. Let further b be determined by E(X - b)^+ = Eh(X). Then for any x,

Ev\big(x - \{X - h(X)\}\big) \le Ev\big(x - X \wedge b\big).

Remark 6.2 Proposition 6.1 can be interpreted as follows. Assume that the cedent charges a premium P > EX for the risk X and is willing to pay P1 < P for reinsurance. If the reinsurer applies the expected value principle with safety loading η, this implies that the cedent is looking for retention functions with Eh(X) = P2 = P1/(1 + η). The expected utility after settling the risk is thus

Ev\big(u + P - P1 - \{X - h(X)\}\big),

where u is the initial reserve. Letting x = u + P - P1, Proposition 6.1 shows that the stop-loss rule h(X) = (X - b)^+ with b chosen such that E(X - b)^+ = P2 maximizes the expected utility. □

For the proof of Proposition 6.1, we shall need the following lemma:

Lemma 6.3 (OHLIN'S LEMMA) Let X_1, X_2 be two risks with the same mean, such that

F_1(x) \le F_2(x), \; x < b, \qquad F_1(x) \ge F_2(x), \; x \ge b

for some b, where F_i(x) = P(X_i \le x). Then Eg(X_1) \le Eg(X_2) for any convex function g.

Proof Let Y_i = X_i \wedge b, Z_i = X_i \vee b.

Then

P(Y_1 \le x) = \begin{cases} F_1(x) \le F_2(x) = P(Y_2 \le x), & x < b, \\ 1 = P(Y_2 \le x), & x \ge b, \end{cases}

so that Y_1 is larger than Y_2 in the sense of stochastic ordering. Similarly,

P(Z_1 \le x) = \begin{cases} 0 = P(Z_2 \le x), & x < b, \\ F_1(x) \ge F_2(x) = P(Z_2 \le x), & x \ge b, \end{cases}

so that Z_2 is larger than Z_1 in stochastic ordering. Since by convexity, v(x) = g(x) - g(b) - g'(b)(x - b) is non-increasing on [0, b] and non-decreasing on [b, ∞), it follows that Ev(Y_1) \le Ev(Y_2), Ev(Z_1) \le Ev(Z_2). Using v(Y_i) + v(Z_i) = v(X_i), it follows that

0 \le Ev(X_2) - Ev(X_1) = Eg(X_2) - Eg(X_1),

using EX_1 = EX_2 in the last step. □

Proof of Proposition 6.1. It is easily seen that the assumptions of Ohlin's lemma hold when X_1 = X \wedge b, X_2 = X - h(X); in particular, the requirement EX_1 = EX_2 is then equivalent to E(X - b)^+ = Eh(X). Now just note that -v is convex. □
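As a quick numerical check of Proposition 6.1 (a sketch under stated assumptions: the exponential risk, the proportional rule used for comparison, the utility function and the initial capital are all illustrative choices, not taken from the text), one can verify by simulation that the stop-loss rule gives the larger expected utility once the expected reinsured amounts are matched.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
X = rng.exponential(1.0, size=200_000)   # illustrative risk, EX = 1
v = lambda y: 1.0 - np.exp(-y)           # concave, non-decreasing utility with v(0) = 0

theta = 0.5                               # reinsurer pays h(x) = theta*x in the comparison rule
Eh = theta * X.mean()                     # expected reinsured amount under that rule
# choose b so that E(X - b)+ = Eh(X), as required in Proposition 6.1
b = brentq(lambda t: np.mean(np.maximum(X - t, 0.0)) - Eh, 0.0, 50.0)

x0 = 2.0                                  # the capital x of the proposition
u_prop = np.mean(v(x0 - (X - theta * X)))   # cedent keeps X - h(X) = (1 - theta) X
u_stop = np.mean(v(x0 - np.minimum(X, b)))  # cedent keeps X ^ b under stop-loss
print(b, u_prop, u_stop)                  # expect u_stop >= u_prop
```

With the sample used here the stop-loss side is visibly larger, as the proposition predicts.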
We now turn to the case where the risk can be written as

X = \sum_{i=1}^N U_i

with the U_i independent; N may be random but should then be independent of the U_i. Typically, N could be the number of claims in a given period, say a year, and the U_i the corresponding claim sizes. A reinsurance arrangement of the form h(X) as above is called global; if instead h is applied to the individual claims so that the reinsurer pays the amount \sum_{i=1}^N h(U_i), the arrangement is called local (more generally, one could consider \sum_{i=1}^N h_i(U_i) but we shall not discuss this).

The following discussion will focus on maximizing the adjustment coefficient. For a global rule with retention function h^*(x) and a given premium P^* charged for X - h^*(X), the cedent's adjustment coefficient γ^* is determined by

1 = E\exp\{γ^*[X - h^*(X) - P^*]\}; \qquad (6.2)

for a local rule corresponding to h(u) and premium P for X - \sum_{i=1}^N h(U_i), we look instead for the γ solving

1 = E\exp\Big\{γ\Big[X - \sum_{i=1}^N h(U_i) - P\Big]\Big\} = E\exp\Big\{γ\Big[\sum_{i=1}^N [U_i - h(U_i)] - P\Big]\Big\}. \qquad (6.3)

This definition of the adjustment coefficients is motivated by considering ruin at a sequence of equally spaced time points, say consecutive years, such that N is the generic number of claims in a year and P, P^* the total premiums charged in a year, and referring to the results of V.3a. The following result shows that if we compare only arrangements with P = P^*, a global rule is preferable to a local one.

Proposition 6.4 To any local rule with retention function h(u) and any

P \ge E\Big[X - \sum_{i=1}^N h(U_i)\Big] \qquad (6.4)

there is a global rule with retention function h^*(x) such that

Eh^*(X) = E\sum_{i=1}^N h(U_i)

and γ^* \ge γ, where γ^* is evaluated with P^* = P in (6.2).

Proof Define

h^*(x) = E\Big[\sum_{i=1}^N h(U_i) \,\Big|\, X = x\Big], \qquad P^* = P.

Then Eh^*(X) = E\sum_{i=1}^N h(U_i), and γ^* > 0 because of (6.4). Applying the inequality Eφ(Y) \ge Eφ(E(Y|X)) (with φ convex) to φ(y) = e^{γy} and Y = \sum_{i=1}^N [U_i - h(U_i)] - P, we get

1 = E\exp\Big\{γ\Big[\sum_{i=1}^N [U_i - h(U_i)] - P\Big]\Big\} \ge E\exp\big\{γ[X - h^*(X) - P]\big\}. \qquad (6.5)

But since γ > 0, this implies γ^* \ge γ. □

Remark 6.5 Because of the independence assumptions, expectations like those in (6.3), (6.5) reduce quite a lot. Assuming for simplicity that the U_i are i.i.d., we get EX = EN \cdot EU, E\sum_{i=1}^N h(U_i) = EN \cdot Eh(U), and

E\exp\Big\{γ\Big[\sum_{i=1}^N [U_i - h(U_i)] - P\Big]\Big\} = e^{-γP}\, E\,\hat C[γ]^N, \qquad (6.6)

where \hat C[γ] = Ee^{γ(U - h(U))}. The arrangement used in practice is, however, as often local as global. □

Local reinsurance with h(u) = (u - b)^+ is referred to as excess-of-loss reinsurance and plays a particular role:

Proposition 6.6 Assume the U_i are i.i.d. Then for any local retention function u - h(u) and any P satisfying (6.4), the excess-of-loss rule h_1(u) = (u - b)^+ with b determined by E(U - b)^+ = Eh(U) (and the same P) satisfies γ_1 \ge γ.

Proof As in the proof of Proposition 6.4, it suffices to show that

E\exp\Big\{γ\Big[\sum_{i=1}^N [U_i \wedge b] - P\Big]\Big\} \le 1 = E\exp\Big\{γ\Big[\sum_{i=1}^N [U_i - h(U_i)] - P\Big]\Big\},

or, appealing to (6.6), that \hat C_1[γ] \le \hat C[γ] where \hat C_1[γ] = Ee^{γ(U \wedge b)}. This follows by taking X_1 = U \wedge b, X_2 = U - h(U) (as in the proof of Proposition 6.1) and g(x) = e^{γx} in Ohlin's lemma. □

Notes and references The theory exposed is standard and can be found in many texts on insurance mathematics, e.g. Bowers et al. [76], Heilmann [191] and Sundt [354]. The original reference for Ohlin's lemma is Ohlin [277]; the present proof is from van Dawen [99], see also Sundt [354]. See further Hesselager [194] and Dickson & Waters [120].
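As a numerical companion to Propositions 6.4 and 6.6, the sketch below compares adjustment coefficients for two local rules with the same expected reinsured amount and the same premium, using the reduction of Remark 6.5. The Poisson claim numbers, standard exponential claim sizes, the 10% loading and the helper names are illustrative assumptions, not part of the text.

```python
import numpy as np
from scipy.optimize import brentq

lam, theta = 5.0, 0.4          # N ~ Poisson(lam); reinsurer pays h(u) = theta*u in the proportional rule
b = -np.log(theta)             # matched retention: E(U - b)+ = theta*EU for U ~ exp(1)

def C_prop(g):                 # E e^{g(U - h(U))} = E e^{g(1-theta)U}
    return 1.0 / (1.0 - (1.0 - theta) * g)

def C_xol(g):                  # E e^{g(U ^ b)} for U ~ exp(1)
    return (1.0 - np.exp(-(1.0 - g) * b)) / (1.0 - g) + np.exp(-(1.0 - g) * b)

P = 1.1 * lam * (1.0 - theta)  # same premium for both, satisfying (6.4)

def gamma(C, gmax):
    # 1 = E exp{g[sum(U_i - h(U_i)) - P]} = exp(-gP + lam*(C(g) - 1)); solve for g > 0
    f = lambda g: -g * P + lam * (C(g) - 1.0)
    return brentq(f, 1e-9, gmax)

g_prop = gamma(C_prop, 1.0 / (1.0 - theta) - 1e-6)
g_xol = gamma(C_xol, 1.0 - 1e-6)
print(g_prop, g_xol)           # expect g_xol >= g_prop, as in Proposition 6.6
```

For these parameters the excess-of-loss coefficient comes out clearly above the proportional one, in line with Proposition 6.6.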

} for any h > 0. of interarrival times and the time Yo = To of the first arrival (that is. ..U(t) is the expected number of renewals in (t. stating that U(t+a)-U (t) -^ a. . the distribution of Yo is called the delay distribution. The point process is called a renewal process if Yo.r. are independent and Y1.. note in particular that U({0}) = 1. ... t] is denoted by Nt. t +a]). . i. The associated renewal measure U is defined by U = u F*" where F*" is the nth convolution power of F. t -00 (A.Appendix Al Renewal theory la Renewal processes and the renewal theorem By a simple point process on the line we understand a random collection of time epochs without accumulation points and without multiple points.t. The number max k : Tk_j < t of renewals in [0. Y2. Lebesgue measure for some n > 1). If F satisfies the stronger condition of being spread-out (F*' is nonsingular w . + U2 where U1 is a finite measure and U2(dt) = u(t)dt where 331 . U(A) is the expected number of renewals in A C R in a zero-delayed renewal process. then Stone 's decomposition holds : U = U.. Y1. The renewal theorem asserts that U(dt) is close to dt/µ.. all have the same distribution.T„_1).. That is.e. Lebesgue measure dt normalized by the mean to of F.. t]) so that U(t + a) . If Yo = 0. of epochs or the set Y1. not concentrated on {h. The mathematical representation is either the ordered set 0 < To < T1 < . Technically.. Y2.. Y. 2h. some condition is needed: that F is non-lattice. = T„ . denoted by F in the following and referred to as the interarrival distribution.. the renewal process is called zero-delayed. when t is large. Then Blackwell 's renewal theorem holds.1) (here U(t) = U([0.

. that z(u) has a limit z(oo) (say) as u -4 oo. wee shall need the following less standard parallel to the key renewal theorem: Proposition A1.9.332 APPENDIX u(t) has limit 1/µ as t -4 oo.2). ENt -4 1 lb Renewal equations and the key renewal theorem The renewal equation is the convolution equation Z(u) = z(u) + f where Z(u) is an unknown function of u E [0 . stating that U(t)/t --> 1/p. then it suffices for (A. (A. Equivalently.2) has the unique solution Z = U * z. and that F has a bounded density2. resp.a. and F(dx) a known probability measure . A weaker (and much easier to prove) statement than Blackwell's renewal theorem is the elementary renewal theorem. i. Then Z(u) -4 z(oo). Note in particular that F is spread-out if F has a density f. IV).2) Z(u) = J0 u z(x)U(dx).3) Further. z(u) a known function. then Z(u) -i f0 z(x)dx .EN(t) .5) 2This condition can be weakened considerably .R. z(x) = 0. U Z(u . in convolution notation Z = z + F * Z. In 111. (A.x)F(dx). Under weak regularity conditions (see [APQJ Ch. see [APQ] Ch. but suffices for the present purposes .out. µF (A. oo).e. Both result are valid for delayed renewal processes. the statements being EN(t + a) .4) that z is Lebesgue integrable with limZ. IV).i". the asymptotic behavior of Z(u) is given by the key renewal theorem: Proposition A1. (A. u u PF -4 00.4) If F is spread.i. (A.1 if F is non-lattice and z (u) is directly Riemann integrable (d.2 Assume that Z solves the renewal equation (A.

Y1 . is called the cycle length distribution and as before. However.. However. where the Tn are the instants where a customer enters an empty system (then cycles = busy cycles). i.d. the present more general definition is needed to deal with say Harris recurrent Markov chains. Tk and {Xt }o<t<Tk • For example.i. Eo etc.5a. The kth cycle is defined as {XTk+t}o<t<Yk . equivalently.e.k+t }t>o is independent of To.3) satisfied by the ruin probability for the compound Poisson model. cycles... of Yo. asymptotic properties can easily be obtained from the key renewal equation by an exponential transformation also when F(dx) does not integrate to one.. this covers discrete Markov chains where we can take the Tn as the instants with Xt = i for some arbitrary but fixed state i. or many queueing processes.(3.} be a renewal process. we let µ denote its mean. refer to the zero-delayed case. This program has been carried out in III. The simplest case is when {Xt} has i.x)u(x) dx = z(u( 1 . . . Y2.. Hence by dominated convergence. and its distribution does not depend on k. that the existence of y may fail for heavy-tailed F. To this end. . however. Assuming that y can be chosen such that f °° Ox F(dx) = 1. A stochastic process {Xt}t>0 with a general state space E is called regenerative w. 1c Regenerative processes Let {T..t. a basic reason that renewal theory is relevant is the renewal equation II. Here the relevant F does not have mass one (F is defective).2) by e7x to obtain Z = z +P * Z where Z(x) = e'Y'Z(x). results from the case fo F(dx) = 1 can then be used to study Z and thereby Z. A regenerative process converges in distribution under very mild conditions: . that F is a probability measure.. T1. . T1. multiply (A. this expression is to be interpreted as a random element of the space of all E-valued sequences with finite lifelengths. The property of independent cycles is equivalent to the post-Tk process {XTk+t}t>0 being independent of To. Note.. Tk (or. F(dx) = e7xF(dx).t))u(ut) dt 0 0 J f z(oo) • 1 dt = z(OO). • .r. {Tn} if for any k. We let FO.. Yk ). the post-Tk process {XT. .APPENDIX 333 Proof The condition on F implies that U(dx) has a bounded density u(x) with limit 1/µF as x -* oo. z(x) = e7xz(x). 0 PF µF 11 In risk theory. Z(u) U = 1 u 1 u f z(u . The distribution F of Y1. .

and we have: holds more generally that (rl(t). i. {i7(t)} are Markov with state spaces (0.tEU1/µ)/f has a limiting normal distribution with mean 0 and variance Var(Ui) + (!)2Var (Yi)_ 2EU1 Cov(U1. where the distribution of X...ZT }0<t<Y„+. Otherwise . Y1) le Residual and past lifetime Consider a renewal process and define e ( t) as the residual lifetime of the renewal interval straddling t. {Tn}.. Then {Zt}t^.'s by e. oo).v. 0<t<Yi then Zt /t a$• EU1/µ.. then (Zt . We denote the limiting r... µ 0 If F is spread-out. Then it (ii. {Tn} if the processes {ZT +t . If p = oo. under the condition of Blackwell's renewal theorem.3.334 APPENDIX Proposition A1. C). 2.4 Let {Zt}t^. just the same proof as there carries over to show: Proposition A1. for n = 1.t : t < Tk}.+ X.3 Consider a regenerative process such that the cycle length distribution is non-lattice with p < oo. Then {e(t)}. This is the case considered in [APQ] V. is given by Eg(Xoo) = 1 E0 f Ylg (Xt)dt. (A..i. then Xt .0 is called cumulative w.. resp . in total variation.d. but in fact. C(t) and ij (t) both have a limiting stationary distribution F0 given by the density F (x)/p.ZTOI < 00.r.ZT Then: (a) If E sup I ZTo+t .t. assume that p < 00 and define Un = ZT}1 . (b) If in addition Var(Ul ) < oo. e(t )) .e. oo). P(C ( t) < a) -4 0 for any a < oo) and ij (t) * oo.e.Tk : t < Tk} as the age.r.6) id Cumulative processes Let {Tn} be a renewal process with i. r.. Then Xt -Di X.oo (i.. An example is Zt = fo f (X8) ds where {Xt} is regenerative w.t.i.t.r. {Tn}. fi (t) = inf {Tk .0 be cumulative w. and q(t) = sup It . [0..d. are i. then e (t) . cycles (we allow a different distribution of the first cycle)..

W are independent.U(x) < U( 1)). r. are not i. assume first the renewal process is zero-delayed. use t E^(t)/t = E[Yo .. .y) = f U(t . Since the maximum Mn of n i. Then fi(t)/t a4' 0 and.v.i. the first statement follows. Yo > 0] + f Eo^ (t . (d) the marginal distribution of ^ is FO.d. Y1i Y2. For the second. and the conditional distribution of given 17 = y is the overshoot distribution R0(Y) given by FO(Y) (z) = Fo (y+z)/Fo(y). U(x + 1) . and the equivalence of (a) with (b)-(d) is an easy exercise. 1) and W has distribution Fw given by dFw/dF(x) = x/pF.d. 0 If Markov renewal theory By a Markov renewal process we understand a point process where the interarrival times Yo . Since z ( k) < E[Yi . = z is Foz) The proof of (a) is straightforward by viewing {(r. In IV. Hence for t large enough.APPENDIX 335 Theorem A1. if in addition EYo < oo. Then Eo^(t) satisfies a renewal equation with z(t) _ E[Y1 .5 Under the condition of Blackwell's renewal theorem.(t)...4.dy )z(y) < c ^ l z(k) Eoe(t 0 0 k=o where c = sup.y)P(Yo E dy) . (b) the joint distribution of (ri. (1 V)W) where V.^(t))} as a regenerative process.t. we used: Proposition A1. the joint distribution of (rl.i. Y1 > t] -4 0. Yl > t]. ^) is given by the following four equivalent statements: (a) P (77 > x. (c) the marginal distribution of q is FO. Proof The number Nt of renewal before t satisfies Nt/t a4' p. l:) is the same as the distribution of (VW. Hence t t lt ) = f U(dy)z(t . EC(t)/t -+ 0. we can bound e(t) by M(t) = max {Yk : k < 2t/p}.'s with finite mean satisfies Mn/n a$• 0 (BorelCantelli). In the general case.6 Consider a renewal process with µ < oo. and the conditional distribution of ri given l. ^ > y) = 1 f +Y (z)dz. the sum is o(t) so that Eo£(t)/t -+ 0 .t.U(x) (c < oo because it is easily seen that U(x + 1) . but governed by a Markov chain {Jn} (we . V is uniform on (0.

+ < x.i . .}.7 Consider a non-lattice semi-regenerative process. J1 i . distribution ofjXt}t>o itself where Pi refers to the case Jo = i... ... Then Xt 4 Xo. Jn +1=j} where J = a(JO.r.. Let X1. IT. be i. Yn. -r+ < oo).d. the semi-regenerative process is called non-lattice if {T. A2 Wiener-Hopf factorization Let F be a distribution which is not concentrated on (-oo.t. A stochastic process {Xt}t>o is called semi-regenerative w.. ...) and (Fij )i. G_(x) = P(ST_ < x.336 APPENDIX assume here that /the state space E is// finite) in the sense that P(Y.. Jo.. For example... = io for some arbitrary but fixed reference state io E E. the conditional distribution of {XT„+t}t>o given Yo. Alsmeyer [5] and Thorisson [372]. with common distribution F. Jn_1. . < yIJ) = Fij( y) on {Jn= i.and regenerative processes. Y1. in [APQ]. and define r+=inf{n>0: Sn>0}. oo). T_=inf{n>0: Sn<0}.. . Notes and references Renewal theory and regenerative processes are treated. .} is non-lattice (it is easily seen that this definition does not depend on i). Jn = i is the same as the P. the Markov renewal process if for any n.t. . namely {Twk } where {Wk } is the sequence of instants w where Jo. Further: Proposition A1. oo). We call r+ (T_) the strict ascending (weak descending) ladder epoch and G+ (G_) the corresponding ladder height distributions. A Markov renewal process {Tn} contains an imbedded renewal process. G+(x) = P(S.jEE is a family of distributions on (0. X2. . e.r. Assume that uj = EjYo < oo for all j and that {J„} is irreducible with stationary distribution (v3)jEE. is given by Eg(X00) = 1 YO vjEj f g(Xt) dt µ jEE o where p = ujEEViAj.g. These facts allow many definitions and results to be reduced to ordinary renewal.. where the distribution of X. .. 0] or (0 . The semi-regenerative process is then regenerative w. .T_ < oo). Sn = X1 + • • • + Xn the associated random walk.

A C (0. F(A) is the contribution from the event {T_ = 1} = {X1 < 0}.7).7) follows since G+(A) = 0 when A C (-oo.. F(A . A C (-oo. G+. (d) R+ = U_. define w as the time where the pre-T_ path S1. Sr_ _1 is at its minimum .8) (e. we may rewrite (a) as G_ (A) = G+(A) = F(A) + (G+ * G_)(A). the renewal measures U+=>G+.APPENDIX 337 Probabilistic Wiener-Hopf theory deals with the relation between F.-S. 0]).x)R_ (dx).and r_ pre-occupation measures T+-1 r_-1 R+(A) = E E I(Sn E A). oo). 0<j<m. S.x)R+(dx).7) (A. (A. In (A..-S. m<j<n}. oo) (A.>0.=EGn.G+ * G_: (b) G_ (A) = f °° F(A .. 0]. F(A) + (G+ * G_)(A). n=0 The basic identities are the following: Theorem A2. . (c) G+(A) = f °. >0.r.=n w=m i Figure A. More rigorously. On {T_ > 2}.g.1 . . n -0 R_(A) = E I(Sn E A). 0] and (0. Proof Considering the restrictions of measures to (-oc. oo). G_. U.1 (a) F = G+ + G_ . A C (0. 0).T_=n} = {S. we consider the last such time (to make w unique) so that {w=m. n=0 n=0 00 00 and the T+. (e) R_ = U+. u . A C (-oo.

3 8 APPENDIX Reversing the time points 0.. It follows that for n > 2 F (7-. and reversing the order of summation yields P(T_ > 2. .Sn_1Edx. clearly (Sj -Sm>0.8) is similar. m it follows (see Fig. ST_ E A .._ E A . Sr_ E A-du) (s ee again Fig . 0 < k < n.. 0<j<m._ = n .+ E du)P(S...+ E du) E P(S.du) (G+ * G-)(A)• C llecting terms. SnEAIS.F(r_n_mSrEA_u). -r+ = n) n=1 n=1 0 - C-0 E fF(Sk< 0..m.>0. m=1 f S mming over n = 2. Sn-1 E dx) n=1 - F(A . A. SmEdu) = P(T+=m.7) follows. .XnEA-x) 00 f 0 f 0 00 00 1: F(A . and the proof of (A. .0<k<ri . (A. E du) = P(T_=n-m.= n.x)R+(dx).1). Aso..1.x)P(Sk < 0._ E A) n-1 f P(r_=nw=m Sm EduSrEA) m=1 n-1 F(r+=mSr+Edu). A. S.. ST_ E A) P(T+ = m. (b) follows from 00 G+ (A) _ E F(Sn E A. m < j <n.u) f0m m=1 n=m+1 00 J0 OO P(S. ST+Edu). S.1) that P(Sj -Sn.3.

Again. being concentrated at 0. 11. a number of related identities can be derived. In continuous time. E. .1. Since G+ is concentrated on (0. However. the survey [15] by the author and the extensive list of references there.1(a) is from Kennedy [228].1. u Remark A2. this holds always on the line its = 0.SnEA) = P(Sn<Sk. cf. and G+. the derivation of the form of G+ for the compound Poisson model (Theorem 11. P(SnEA . which is basic for the Pollaczeck-Khinchine formula. The classical analytical form of the Wiener-Hopf problem is to write 1 -.s. u Notes and references In its above discrete time version.6. and using time-reversion as in (d) to obtain the explicit form of R+ (Lebesgue measure).0<k<n.SnEA) = P(Sn<Sk. G_ are trivial. if {St} is Brownian motion. there is no direct analogue of Theorem A2. 0].O<k<n.P as a product H+H_ of functions with such properties. then T+ = inf It > 0 : St = 0} is 0 a. oo). For (d).1).G_ [s] is defined and bounded in the half-plane is : ERs > 01 and non-zero in Is : ERs > 0}.2 In terms of m. The present proof of Theorem A2. Summing over n yields R+ (A) = U_ (A). Another main extension of the theory deals with Markov dependence. is based upon representing G+ as in (b). Nevertheless.g. the analogue of a random walk is a process with stationary independent increments (a Levy process.. it serves as model and motivation for a number of results and arguments in continuous time.O<k<n. In this generality of.9) whenever F[s].Sn_k. 6+ [s]. Then for A C (-oo.f. Wiener-Hopf theory is only used at a few places in this book. In discrete time. see e. and similarly H_ (s) = 1 .T+> n) = P(Sk < O. For example..APPENDIX 339 and the proof of (c) is similar.0+[s])(1 . see for example Bingham [65]. there are direct analogues of Theorem A2. we can rewrite (a) as 1 . such developments motivate the approach in Chapter VI on the Markovian environment model.G_[s]) (A. G_ [s] are defined at the same time.0<k<n.SnEA) = P(SnSn_ k. consider a fixed n and let Xk = Xn_k+l.'s. and the proof of (e) is similar. H+ (s) = 1-G+[s] is defined and bounded in the half-plane Is : ERs < 0} and non-zero in Is: Rs < 01 (because IIG+lI _< 1). and sometimes in a larger strip.4).F[s] = (1 .g.SnEA) is the probability that n is a weak descending ladder point with Sn E A.g. Sk = X1 + • • • + Xk = Sn .

A3 Matrix-exponentials

The exponential e^A of a p × p matrix A is defined by the usual series expansion

e^A = \sum_{n=0}^\infty \frac{A^n}{n!}.

The series is always convergent because A^n = O(n^k |λ|^n) for some integer k < p, where λ is the eigenvalue of largest absolute value, |λ| = max\{|μ| : μ ∈ sp(A)\}, and sp(A) is the set of all eigenvalues of A (the spectrum).

Some fundamental properties are the following:

sp(e^A) = \{e^λ : λ ∈ sp(A)\}, \qquad (A.10)

\frac{d}{dt} e^{At} = A e^{At} = e^{At} A, \qquad (A.11)

A \int_0^t e^{Ax}\, dx = e^{At} - I, \qquad (A.12)

e^{Δ^{-1} A Δ} = Δ^{-1} e^{A} Δ \qquad (A.13)

whenever Δ is a diagonal matrix with all diagonal elements non-zero.

It is seen from Theorem VIII.1.5 that when handling phase-type distributions, one needs to compute matrix-inverses Q^{-1} and matrix-exponentials e^{Qt} (or just e^Q). Here it is standard to compute matrix-inverses by Gauss-Jordan elimination with full pivoting, whereas there is no similar single established approach in the case of matrix-exponentials. Here are, however, three of the currently most widely used ones:

Example A3.1 (SCALING AND SQUARING) The difficulty in directly applying the series expansion e^Q = \sum_0^\infty Q^n/n! arises when the elements of Q are large. Then the elements of Q^n/n! do not decrease very rapidly to zero and may contribute a non-negligible amount to e^Q even when n is quite large, and very many terms of the series may be needed (one may even experience floating point overflow when computing Q^n). To circumvent this, write e^Q = (e^K)^m where K = Q/m for some suitable integer m (this is the scaling step). Thus, if m is sufficiently large, \sum_0^\infty K^n/n! converges rapidly and can be evaluated without problems, and e^Q can then be computed as the mth power (by repeated squaring if m = 2^k). □
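A minimal sketch of the scaling and squaring idea of Example A3.1 follows (the function name, the choice m = 2^s and the truncation level are assumptions made for illustration; in practice a library routine such as scipy.linalg.expm, which selects these quantities from the norm of Q, would normally be preferred):

```python
import numpy as np

def expm_scale_square(Q, s=10, nterms=20):
    """e^Q by scaling and squaring: evaluate the truncated series for
    K = Q / 2**s, then square the result s times to undo the scaling."""
    K = Q / 2.0**s
    term = np.eye(Q.shape[0])
    out = term.copy()
    for n in range(1, nterms + 1):       # accumulate K^n / n!
        term = term @ K / n
        out += term
    for _ in range(s):                   # unscaling: (e^K)^(2^s)
        out = out @ out
    return out

# small check on a 2x2 intensity matrix: the rows of e^Q sum to 1
Q = np.array([[-3.0, 3.0], [7.0, -7.0]])
print(expm_scale_square(Q))
```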

Example A3.2 (UNIFORMIZATION) Formally, the procedure consists in choosing some suitable η > 0, letting P = I + Q/η and truncating the series in the identity

e^{Qt} = \sum_{n=0}^\infty e^{-ηt} \frac{(ηt)^n}{n!} P^n, \qquad (A.14)

which is easily seen to be valid as a consequence of e^{Qt} = e^{η(P - I)t} = e^{-ηt} e^{ηPt}.

The idea which lies behind is uniformization of a Markov process {X_t}, i.e. construction of {X_t} by realizing the jump times as a thinning of a Poisson process {N_t} with constant intensity η. To this end, assume that Q is the intensity matrix for {X_t} and choose η with

η > \max_i |q_{ii}| = \max_i (-q_{ii}). \qquad (A.15)

Then it is easily checked that P is a transition matrix, and we may consider a new Markov process {\tilde X_t} which has jumps governed by P and occurring at epochs of {N_t} only (note that since p_{ii} is typically non-zero, some jumps are dummy in the sense that no state transition occurs). However, the intensity matrix \tilde Q of {\tilde X_t} is the same as the one Q for {X_t}, since a jump from i to j ≠ i occurs at rate ηp_{ij} = q_{ij}. The probabilistic reason that (A.14) holds is therefore that the t-step transition matrix for {\tilde X_t} is e^{Qt} = \sum_{n=0}^\infty e^{-ηt} (ηt)^n/n! \cdot P^n (to see this, condition upon the number n of Poisson events in [0, t]). The approach is in particular convenient if one wants e^{Qt} for many different values of t. □

Example A3.3 (DIFFERENTIAL EQUATIONS) Letting K_t = e^{Qt}, we have \dot K_t = QK_t (or K_tQ), which is a system of p^2 linear differential equations which can be solved numerically by standard algorithms (say the Runge-Kutta method) subject to the boundary condition K_0 = I. In practice, what is needed is quite often only Z_t = πe^{Qt} (or e^{Qt}h) with π (h) a given row (column) vector. One then can reduce to p linear differential equations by noting that \dot Z = ZQ, Z_0 = π (\dot Z = QZ, Z_0 = h). □

Here is a further method which appears quite appealing at a first sight:

Example A3.4 (DIAGONALIZATION) Assume that Q has diagonal form, i.e. p different eigenvalues λ_1, ..., λ_p. Let v_1, ..., v_p be the corresponding left

two serious drawbacks of this approach: u Numerical instability : If the A5 are too close. (A. under the conditions of the Perron-Frobenius theorem). In view of this phenomenon alone care should be taken when using diagonalization as a general tool for computing matrix-exponentials.17) eQt = E e\`thivi = E ea:thi ® vi. Everything is nice and explicit here: 411+q2+-D' )12_g11+q2-^^ where (411-422z + 4412421... we can take H as the matrix with columns hl.. i # j.. hp. of largest real part is often real (say. and hence A2 is so because of A2 = tr(Q)..342 APPENDIX (row) eigenvectors and hl.. (A. Then P P Q = > Aihivi = E Aihi (9 vi. say A = (Ai)diag.18) contains terms which almost cancel and the loss of digits may be disasterous.. i=1 i=1 Thus.5 If Q= ( 411 ( q21 q12 q22 is 2 x 2. and we need to have access to software permitting calculations with complex numbers or to perform the cumbersome translation into real and imaginary parts. hi have been computed. Qhi = vihi.g H-1. v5Q = Aivi. we have an explicit formula for eQt once the A j.18) Namely. however. some cases remain where diagonalization may still be appealing.16) (A. hp the corresponding right (column) eigenvectors. vi. and vihi ¢ 0. Then vihj = 0. i= 1 i=1 P P (A.. and writing eQt as eQt = He°tH-1 = H (e\it)di. say Al. Example A3. D = ) 2 2 . the eigenvalue. Complex calculus : Typically. and we may adapt some normalization convention ensuring vihi = 1. Nevertheless. this last step is equivalent to finding a matrix H such that H-1QH is a diagonal matrix. There are. The phenomenon occurs not least when the dimension p is large. not all ai are real.

k - C k2 ) =b ( A1 q 1 Q11 / where a . replacing ai by A2. v2 and h2 can be computed in just the same way.7 Let 3 9 2 14 7 11 2 2 .6 A particular important case arises when Q = -q1 qi ) q2 -q2 J is an intensity matrix. i. Then 7r = (ir1 7r2 ) = a (q21 Al . However. The other eigenvalue is A = A2 = -q1 .20) ir = q2 ql qi +q 2 9l +q2 (A. h2 = Thus.Q2i and after some trivial calculus one gets eQt = 7r 1 112 + eat 7r1 7r2 / (7fl 7r2) = ( 7r2 -1r2 -7r1 IF.21) Here the first term is the stationary limit and the second term thus describes the rate of convergence to stationarity.APPENDIX 343 Write 7r (= v1) for the left eigenvector corresponding to a1 and k (= hl) for the right eigenvector.k1).19) Example A3 . eqt = eNlt ( ir1ki i2k1 \ ir1 k2 72 k2 + e azt 7r2k2 -i2k1 -7ri k2 7r1 k1 (A. u Example A3. l ab (g12g21 + (A1 - 411) 2) = 1. Then Al = 0 and the corresponding left and right eigenvectors are the stationary probability distribution 7r and e. 1) .e.q. b are any constants ensuring//Irk = 1. where (A. Of course. it is easier to note that 7rh2 = 0 and v2k = 1 implies v2 = (k2 .

2 2 1=ab(142+(-1+2)2 ) = tab. (A+A)' = A+A.5 .22) Note that in this generality it is not assumed that A is necessarily square.11/2 + 5 -1.344 Then D= 2+ 11)' 7 T4 -2 =52. (A. e_6u A4 Some linear algebra 4a Generalized inverses A generalized inverse of a matrix A is defined as any matrix A. A+AA+ = A+. but only that dimensions match .-6.. (A. Generalized inverses play an important role in statistics. ir =a(2 9 9 14 2 1 3 2 2)' k=b 14 =b -1+ 2 ir1 k1 ir2 k1 _ 9 2 10 5 7 9 70 1 ' 7r1 k2 7r2 k2 10 9 9 10 10 + 7 1 10 10 10 1 10 7 10 9 70 9 10 0 e4" = e_.satisfying AA-A = A. (AA+)' = AA+. and a generalized inverse may not unique. for example AA+A = A.23) . A2 = -3/2 .. They are most often constructed by imposing some additional properties .11/2 . APPENDIX x1 -3/2 .

P + e7r )...P).eir ). 0 01 In applied probability. .25) .= (I .D + O(e-bt).1Q = Q(Q . if A is a possibly singular covariance matrix (non-negative definite). Then for some b > 0. Assume that a unique stationary distribution w exists .e.23) is called the Moore-Penrose inverse of A.1 goes under the name fundamental matrix of the Markov chain).e ® 7r)-1. . These matrices are not generalized inverses but act roughly as inverses except that 7r and e play a particular role . E.ew. Am+1 = . then there exists an orthogonal matrix C such that A = CDC' where 0 0 D = AP Here we can assume that the A . and can define /ail 0 0 0 0 0 0 A+ = C A' 0 0 0 C' .24) = te7r .eir)-1 = I . and exists and is unique (see for example Rao [300]).. and define D = (A . = 0 where m < p is the rank of A. ( Q ..APPENDIX 345 A matrix A+ satisfying (A..g. one then works with Q = (Q . (A. (I .g. are ordered such that Al > 0. Am > 0. _ A. Here is a typical result on the role of such matrices in applied probability: Proposition A4. Rather than with generalized inverses . one is also faced with singular matrices . most often either an intensity matrix Q or a matrix of the form I-P where P is a transition matrix.eir )-1.I) (A.1 Let A be an irreducible intensity matrix with stationary row vector it. lt o eAx dx = te7r + D(eAt ..P + e7r)-1 (here ( I .

Equivalently.I)} dx.D + D2 + O(e-bt). B(t) denote the l. Then A(O) _ B(O) = 0. I. in block notation i2h A®B= ( a11B a21 B a12B a22 B Example A4.DZ(ent .J {xe^r + D(e . h ® it reduces to hit in standard matrix notation. resp. (A.s. ()®(6 f 6/ 7f 8^ 7 8 )=! ^)( 6 7 8 )=(6^ 7^ 8^) \ u Example A4.eir)eAt = eAt = A'(t). the rows are proportional to it.3 Let 2 A= 4 3 Vf' N7 5 )' B= ( 8 ).I) . see below. Note that h ® it has rank 1.2 Let it be a row vector with m components and h a column vector with k components.24).e. and the columns to h.h. of (A. it follows that h ® it is the k x m matrix with ijth element hi7rj . then the Kronecker (tensor) product A(') ®A(2) is the (k1 x k2) x (ml x m2) matrix with (il i2) (jl j2)th entry a.h. and in fact any rank 1 matrix can be written on this form. u 4b The Kronecker product ® and the Kronecker sum We recall that if A(1) is a k1 x ml and A(2) a k2 x m2 matrix. respectively.26) follows by integration by parts: t f t /' xeAx dx = [x {xe7r + D(eAx .s.2e7r . (A. . For example. the formulas involving O(e-6t) follow by Perron-Frobenius theory.I)}.27) Proof Let A(t).346 t APPENDIX 2 xe Ax dx = eir + t(D + e-7r) + D(eAt . the r.26) 2 = 2 e7r + tD .. Interpreting 7r. . h as 1 x m and k x 1 matrices.I) (A.91a(2) . o Finally. B'(t) = e7r + DAeAt = eir + (I .

and the number of such factors is precisely given by the relevant binomial coefficient.k)! ( n-0 n=0 t=0 k=0 J _ ® Ak ®Bl-k r ^. (AED B)1 = (A®I+I(9 B)l is the sum of all products of t factors. Using (A.31). (A B)' = eA®B e! L 1=0 0 .APPENDIX 347 Then A®B = 2 f 20.5v/.4 eA® B = eA ®eB. A2 = v2 are row vectors and C1 = h1.29). each of which is A ® I or I ® B. and v1B1h1 • v2B2h2 = v1B1h1 ® v2B2h2 = ( v1(&v2 )( B1(&B2 )( h1(&h2 ) . it follows that e® ® e B An _ 0o oo oo Bn 7 I F n! = ` k! (I .3f 4v/.3V8.3v'6. then the Kronecker sum is defined by A(1) ®A(2) = A(1) ®Ik2 + k ®A(2). if A ® I occurs k times.29) If A and B are both square (k1 = ml and k2 = m2).28) In particular. C2 = h2 are column vectors.4vf. if Al = vi. (A.50 6 7 6 4f 4-.30) eA+B = eAeB function generalizes to Kronecker notation (note that in contrast typically only holds when A and B commute): Proposition A4.3vV/72f 20. (A. Proof We shall use the binomial formula A crucial property is the fact that the functional equation for the exponential t / l (A ®B)t = I k Ak 0 B1-k k=0 (A.31) Indeed. then v1B1h1 and v2B2h2 are real numbers.(A.A9. such a factor is Ak (&B 1-k according to (A.5v'-8 5vf9- 11 A fundamental formula is (A1B1C1) ®(A2B2C2) = (A1 (9 A2)(B1 (9 B2)(C1®C2).

Let P8f P(Sl). independent Markov chains. Let further it.33) . and Q = Q(1) ® Q (2) = Q(1) ® I + I ® Q(2) (A.348 APPENDIX Remark A4. p = P(1) ® {X }. Then 2 0 ire At h • ve Bt kdt = (^®v)(A®B)-1(e A®Ba . n2 n1 ) {X(2) } are independent Markov chains with transition matrices P(1). where transition matrix of the bivariate Markov chain {X n1). we have P8 = Pal) ® p(2).3 < 0 Lemma A4 . the {Yt(2) } transitions in the {Yt(1) } component and the second transitions in the component . From what has been said about matrices of {Yt( 1). {Yt(1). (A. { On the other hand.32) is the intensity matrix of the bivariate continuous Markov process {Yt(1).4 can easily be obtained by probabilistic be the s-step transition reasoning along the same lines . the same time.I)(h ® k). Q(2). h. resp . P(2). X ) }. in the definition (A.32). k any column vectors. v whenever a is an eigenvalue of A and 0 is an eigenvalue be any row vectors and h. P8 = exp {sQ} = exp {s (Q(1) ®Q(2)) } . P8 = Pal ) ® P82) exp {Q ( 1) ® Q(2)1 = eXp {Q( 1) } ® exp {Q(2) } Also the following formula is basic: B are both square such that a +. { 1't(1) }. first term on the r . Yt(2) where independent Markov processes with intensity matri{y(2) } are {Y(1) }.5 Many of the concepts and results in Kronecker calculus have p(2) is the intuitive illustrations in probabilistic terms.6 Suppose that A and of B. P(t) Yt(2) }. Thus . represents ces Q( 1). A special case of Proposition A4. and the form of the bivariate intensity matrix reflects the fact that Yt(2) } cannot change state in both components at due to independence . Ps 1) = exp {sQ ( 1) } > p(2 ) = exp {sQ(2) } can therefore be rewritten as Taking s = 1 for simplicity . Yt(2 ) }.s.

.3 whenever a is an eigenvalue of A and 3 is an eigenvalue of B. ao). p there should exist io. f o r each i. so that by asssumption A ® B is u invertible.1 and references there (to which we add Berman & Plemmons [63]): Theorem A4. Then the eigenvalue Ao with largest real part is simple and real... and the corresponding left and right eigenvectors v. E (0. 4c The Perron-Frobenius theorem Let A be a p x p-matrix with non-negative elements.g. in such that io = i. h can be chosen with 3By this. . = j and atk_li. Now note that the eigenvalues of A ® B are of the form a +. j = 1. (b) if in addition A is aperiodic.. A is called aperiodic if the pattern of zero and non-zero elements is the same as for an aperiodic transition matrix. and if we normalize v. i. (A. > 0 for k = 1. then IN < Ao for all A E sp(A). .34) Note that for a transition matrix. h = e and v = 7r (the stationary row vector). . see e. .12). il.8 Let B be an irreducible3 p x p-matrix with non-negative offdiagonal elements. then An = Aohv+O(µ") = Aoh®v+O(µ") for some u. We call A irreducible if the pattern of zero and non-zero elements is the same as for an irreducible transition matrix. That is. we mean that the pattern of non-zero off-diagonal elements is the same as for an irreducible intensity matrix.. . which can be found in a great number of books. Similarly. .APPENDIX 349 Proof According to (A. h such that vh = 1. the integrand can be written as ( 7r (9 v)( eAt ® eBt )(h ®k ) = ( 7r ®v)(eA (DBt)(h (& k). .29). h can be chosen with strictly positive elements. and the corresponding left and right eigenvectors v.. .The Perron-Frobenius theorem has an analogue for matrices B with properties similar to intensity matrices: Corollary A4. Then: (a) The spectral radius Ao = max{JAI : A E sp(A)} is itself a strictly positive and simple eigenvalue of A. we have AO = 1. [APQ] X. .7 Let A be a p x p-matrix with non-negative elements. n. Here is the Perron-Frobenius theorem. and appeal to (A..

h = e and v = 7r (the stationary row vector). Proposition A5. relate the eigenvalues of B to those of B via (A. 10) and use the formula -me at e Bt = e 00 Antn = e . the analogy of this procedure with unformization.(ti)diag where Q = T + (ti)diag is a proper intensity matrix (Qe = 0).. A5 Complements on phase-type distributions 5a Asymptotic exponentiality In Proposition VIII. let t = (ti)iEE # 0 have non-negative entries and define T(°) = aQ . I. but is an easy consequence of the Perron-Frobenius theorem. the condition is that t is small compared to Q. The next result gives a condition for asymptotical exponentiality. For example. we have A0 = 0. The content is that B is approximately exponential if the exit rates ti are small compared to the feedback intensities tij (i # j).1.(3. To this end.8. Corollary A4. it was shown that under mild conditions the tail of a phase-type distribution B is asymptotical exponential. Note that for an intensity matrix. not only in the tail but in the whole distribution. Ao). then eBt = ea0thv + O(eµt) = eA0th ® v + O(et t) (A.350 APPENDIX strictly positive elements. the phase-type distribution B(a) with representation (.1 Let Q be a proper irreducible intensity matrix with stationary distribution a. note that we can write the phase generator T as Q . Bi° (x) -+ a-t*x Proof Let { 4 } be the phase process associated with B(a) and (°) its lifelength. Furthermore.35) for some p E (-oo.n t AL n=0 n! (cf.2).8 is most often not stated explicitly in textbooks.(ti)ding. Example A3. Then for any (3. one can consider A = 77I + B where rl > 0 is so large that all diagonal elements of A are strictly positive (then A is irreducible and aperiodic).e. h such that vh = 1. let {Yti°i } be a Markov process with initial distribution a and intensity . T(°)) is asymptotically exponential with parameter t* _ r EiEE aiti as a -4 oo. if we normalize v.

We shall .YQ(av) = j) Pi ( ci(a'V) > x.x (1 . a . {t Y( a) } v>0 . a'/a -+ 1. in fact . In addition to the asymptotic exponentiality.2 Pi (c(a) > x. from which the phase process is terminated . By the law of large numbers for Markov processes . and that Yt(a) = Yat for all t. Conditioning upon whether { Yt} changes state in [0. dx/ti] or not. Then a(a'V)/a (aV) a' 1. Hence O ((a) aa.APPENDIX 351 ((1) etc. We can think of ( ( a) as the first event in an inhomogeneous Poisson process ( Cox process ) with intensity process matrix aQ . J^O)_ = j) Pi (v(aaV) > x.jEE. J(()) _ = i) -+ a-t•x t tt' .9. we get dx F (Idx = j) = (1 + qij t )Sij + qij dt. from which it is easily checked that the limiting stationary distribution is (aiti/t*)iEE• Now let a' -4 oo with a in such a way that a' < a.g.)_ = Y(a) = 1'aS(a) = Ya(av)^ it follows that Pi ((.Yj(av) = j f . v/ t-. fo tY dv/t a$' t*.bij) Hence the intensity matrix of { Ix} is (qij/ti)i.(a) > x .aE where 0 < e < 1). it states that the state.a' -+ oo (e. t < (a).1. has a limit distribution: Proposition A5. and this easily yields a(x)/x a-' 1/t*. and write Yt = Yt(1). Since JJ(. prove a somewhat more general result which was used in the proof of Proposition VI. Then {Ix} is a Markov process with to = Yo. Proof Assume first ti > 0 for all i and let I. Hence we can represent ( (a) as ((a) = inf { t > O : f tY( )dv=V } ^l = inf { t > O : t adv = V } l jat inf{t > 0: tydv =aV} = JJJ a J J where o (x) = inf {t >0: fo tY dv = x}. = YQ(x). Let further V be exponential with intensity V and independent of everything else. a' = a . We can assume that Jta) = Yt(°).

a) if B is the lifelength of a terminating Markov chain (in discrete time) on E which has transition matrix P = (p.j) and initial distribution a. A distribution B on {1.g. Indeed. Penev & Turbin [238]. . the simplest discrete phase-type distribution: here E has only one element. > 0}. . See also Korolyuk... = 0 for one or more i. so we shall be brief.. 5b Discrete phase-type distributions The theory of discrete phase-type distributions is a close parallel of the continuous case. Then: (a) The point probabilities are bk = aPk-lp.. .} is said to be discrete phase-type with representation (E. 1 k=1 1 0 otherwise. ' pk 0 k>1 11 Theorem A5. 2. a = b = (bk)k=1.. an easy modification of the argument yields finally the result for the case where t. P.. 2. is discrete phase-type. (c) the nth moment k 1 k"bkis 1)"n!aP-"p. Then P is substochastic and the vector of exit probabilities is p = e .3 As the exponential distribution is the simplest continuous phasetype distribution.352 rr Ia(a'V) Ei I ( > x) P APPENDIX L at (Yo (aV) .5 Let B be discrete phase-type with representation (P. k>1. K}. these results are in the spirit of rare events theory for regenerative processes (e. (b) the generating function b[z] _ E' . Example A5. u Notes and references Propositions A5. However.1 and A5.2 do not appear to be in the literature. and thus the parameter p of the geometric distribution u can be identified with the exit probability vector p.p)k-1 p. k = 1. a). Keilson [223]. Et II I a(a^V) > x) at' . say bk = 0. Example A5.4 Any discrete distribution B with finite support. zkbk is za(I . Gnedenko & Kovalenko [164] and Glasserman & Kou [162]). let E and Pkj j=k-1.. so is the geometric distribution..+ a-t*x • a't' L ` at t* t* J Reducing the state space of {Ix } to {i E E : t. with point probabilities bk = (1 ...Pe.zP)-'p.x k > K..

6 (CONVOLUTIONS) Let B1. r . and a=1). initial distribution a and phase generator T. T) where E = E(1) + E(2) is the disjoint union of E(1) and E(2).a(1).APPENDIX 353 5c Closure properties Example A5. B2 be phase-type with representations (E(1)... resp.1 This corresponds to a convolution of r geometric distributions with the same parameter p. A.36) in block-partitioned notation (where we could also write a as (a (1) 0)). 11 Example A5. The discrete counterpart is the negative binomial distribution with point probabilities bk k1) (1 k = r. a.T(1)).{ 0. _ i E E(1) T(1) t(1)a(2) i E E(2) .r + 1. (E(2). A reduced phase diagram (omitting transitions within the two blocks) is am E(1) t(1) a(2) (2) t(2) Figure A. .7 (THE NEGATIVE BINOMIAL DISTRIBUTION) The most trivial special case of Example A5. as is seen by minor modifications of Example A5.a(2). resp. T= ( 0 T(2) ) (A. Then {Jt} has lifetime U1 + U2 . a' . Jt t > U1 + U2.. and piece the processes together by it = 41) 0<t<U1 U1 < t < U1 + U2 2U. U2.T(2)). { Jt 2) } with lifetimes U1 .2 The form of these results is easily recognized if one considers two independent phase processes { Jt 1) }.6. and hence the negative binomial distribution is discrete phaseu type..6 is the Erlang distribution Er which is the convolution of r exponential distributions. Then the convolution B = B1 * B2 is phase-type with representation (E.

i E E(2) 0 T(2) =IT (in block-partitioned notation.4 . Thus. Equivalently. T) where E = E(1) + E(2) is the disjoint union of E(1) and E(2). a.a(2). i E E(1) T 0 I (A. are i. and o'i Oa. Example A5.0)a(2))).10 (GEOMETRIC COMPOUNDS) Let B be phase-type with representation (E. with common distribution and N is independent of the Uk and geometrically distributed with parameter p. and consider B(") = fA B(a) v(da) where v is a probability measure on A..T(1)).0)ai2). resp. p at each termination. a.O)B2 (0 < 0 < 1) is phase-type with representation (E. (E(2). B2 be phase-type with representations (E(1). we need to restart the phase process for B w.p)pn-1B*n. Then the mixture B = 9B1 + (1 ..p)pn-1. if U1.i. T) and C = EO°_1(1 . U2. then C is the distribution of Ul + • • • + UN.').d.37) (1) (1 .8 (FINITE MIXTURES) Let B1.354 APPENDIX Example A5. A reduced phase diagram is 0a(1) E(1) A . To obtain a phase process for C. this means that a = (Oa(1) (1 . Then it is trivial to see that B(") is u phase-type with representation (a("). Let B(") be the corresponding phase-type distribution. P(N = n) = (1 . a mixture of more than two phase-type distributions is seen to be phase-type.p..0)a(2) E(2) Figure A.T(2)).a(1). one obvious interpretation of the claim u size distribution B to be a mixture is several types of claims.T.9 (INFINITE MIXTURES WITH T FIXED) Assume that a = a(°) depends on a parameter a E A whereas E and T are the same for all a.3 In exactly the same way. In risk theory. Example A5.E) where a(°) = fAa(a)v(da). a reduced phase diagram is f a E t Figure A.

1. Corollary VIII. E). . but the same T.g. Thus the representation is (E(1) x E(2). Then the minimum U1 A U2 and the maximum U1 V U2 are again phase-type. if {Jt} is a phase process for U. we then let the governing phase process be {Jt} _ {(411 Jt2))} 2) interpreting exit of either of {4 M }. with common distribution B and N is independent of the Uk with P(N = n) = f. it follows by mixing (Example A5. To obtain a phase representation for C ..v.°_1 f„ B*?l. Example A5 .a(1). a.. { Jt2) } be independent with lifetimes U1.9) that (U . cf. For U1 A U2. If U1 has a different initial vector. (E(2). are i. T(2) ). Indeed. cf. P). resp. v.. a. let the phase space be E x F = {i j : i E E. X independent of U. T(1) ® T(2)).2. 13 (MINIMA AND MAXIMA ) Let U1.. Note that this was exactly the structure of the lifetime of a terminating renewal u process.x)+. Minor modifications of the argument show that 1. then U1 +• is phase-type with representation (E.T) if U is phase-type with representation (E. T + pta). a(2). then C is the distribution of U1 + • • • + UN.f.TWWW).T + pta). 12 (PHASE-TYPE COMPOUNDS ) Let fl. If we replace x by a r.APPENDIX 355 and C is phase-type with representation (E. T). let {Jtl)}. U2 be random variables with distributions B1. then U1 + • • + UN is zero-modified phase-type with representation (a.. f2. To see this. T + ta. of F. Example A5 ..7. say with distribution F. T) and C = F.aF[T]. say v.. U2. U2. Equivalently. then Jy has distribution aeTx. a. let B be a continuous phase-type distribution with representation (F. Proposition VIII.T) where F[T] = J0 "o eTx F(dx) u is the matrix m.d. It is zero-modified phase-type with representation (E. let the initial vector be a ® v and u let the phase generator be I ® T + P ® (ta). v. a(1) ® a(2 ). B2 of phase-type with representations (E('). { 4 } as exit of {Jt}. be the point probabilities of a discrete phase-type distribution with representation (E. +UN 2.. if U1.°. if B is defective and N + 1 is the first n with U„ = oo.2. Example A5.X)+ is zero-modified phase-type with representation (E. . resp.11 (OVERSHOOTS) The overshoot of U over x is defined as the distribution of (U . j E F}.aeTx. i.

xq(n)(n)}. elementary) Let {bk} be any dense sequence of continuity points for B(x). and the phase generator is T(1) ®T(2) T(1) ®t(2) t(1) ® T(2) 0 T(1) 0 0 0 T(2) Notes and references The results of the present section are standard . The mean of B„ is n/Sn = b and the variance is n/Sn = b2/n. i= 1 C. with weight pi(n) for xi(n).. we need to allow { Jt. q(n) q(n) pi(n)a .356 APPENDIX For U1 V U2.8. That is. Example A5.(n) = D. Let the support of Dn be {xl(n). Hence it is immediate that Bn 4 B.n = I:pi(n)Er v ( __ ) n) ) a= 1 . Here are the details at two somewhat different levels of abstraction: (diagonal argument ...14 To a given distribution B on (0.(bk)'. we can assume that ID. Then from above.B(bk) I < 1/n for n > k. see Neuts [269] (where the proof. the fact that any distribution B can be approximated arbitrarily close by a distribution with finite support. Thus the state space is E(1 ) x E(2) U E(1) U E( 2).-. The general case now follows easily from this. oo). Then we must find phase-type distributions Bn with B.. Now we can find first a sequence {Dm} of distributions with finite support such that D. there is a sequence {B.2) } to go on (on E(2)) when { i 1) } exits. and let Bn be the Erlang distribution E. and the closedness of the class of phase-type distributions under the formation of finite mixtures. By the diagonal argument (subsequent thinnings). say degenerate at b. any distribution B on (0.(Sn) with Sn = n/b.} of phase-type distributions such that Bn 3 B as n -+ oo. cf.. oo) can be approximated 'arbitrarily close' by a phase-type distribution B: Theorem A5. r # oo. however. 5d Phase-type approximation A fundamental property of phase-type distributions is denseness . Proof Assume first that B is a one-point distribution.(bk) -+ B(bk) for all k as n -* oo. relies more on matrix algebra than the probabilistic interpretation exploited here)...(bk) -+ B(bk) for all k. the initial vector is (a(1) (& a (2) 0 0). and vice versa.

n. 2. f2. x -4 oo. u Theorem A5. It should be noted. compute W(B) and use this quantity as an approximation to cp(B0).( dx) -* f r f{(x)B(dx). Let E be the class of functions f : [0.APPENDIX 357 Hence we can choose r(n) in such a way that ICr( n). there is a sequence {Bn} of phase -type distributions such that Bn Di B as n -4 oo and f ' f.n( b k ) . the class CO of all discrete distributions. the topology for weak convergence) PET of the class PET of phase-type distributions contains all one-point distributions. Then ICr( n ). . i = 1..d. and that cp is known to be continuous. PIT contains all finite mixtures of one-point distributions.. replications). for some a < oo. Corollary A5.14 is fundamental and can motivate phase-type assumptions. oo) approximation Assume that we can compute a functional W(B) when B is phase-type. If Cpl (B) and ^02(B) are weakly continuous. oo) and any fl.i.. u 2 (abstract topological ) The essence of the argument above is that the closure (w.15 To a given distribution B on (0 .e.(x)Bf. one would use the B given by some statistical fitting procedure (see below). oo) -* [0. In particular.n (bk) .t. k < n. in at least two ways: insensitivity Suppose we are able to verify a specific result when B is of phasetype say that two functionals Cpl (B) and W2 (B) coincide. if information on Bo is given in terms of observations (i. that this procedure should be used with care if ^p(B) is the ruin probability O(u) and u is large. Since PET is closed under the continuous operation of formation of finite mixtures. oo) such that f (x) = O(e«x). and we can take Bn = Cr(n). k < n.B(bk )I < . For a general Bo. Hence G C PET and L = PIT. But To is the class G of all distributions on [0... we can then approximate Bo by a phase-type B.D(bk)I < n. E E. however.r. say on the claim size distribution B in risk theory.. oo)... i. then it is immediate that WI(B) = p2(B) for all distributions B on [0.

and hence it is sufficient to show that we can obtain limsup n-4oo fi(x)Bn(dx) < Jo 0 f fi( x)B(dx ). . liminf B.358 Proof By Fatou' s lemma. we may assume that in the proof of Theorem A5. .39) Indeed.(dx) > J fi(x)B(dx). there is a sequence {Bn} of phase -type distributions such that Bn -Di B as n -+ oo and all moments converge. n B=az. i=1.n(dx) -+ f 0 fi(x)Dn(dx). n. Bn=En z f f (x)Bn(dx) -fof (x)B(dx) = ° (A. i = 1. .16 To a given distribution B on (0 ..f (x)B(dx). f° xtBn(dx ) -* f °° x`B( dx). if f (x ) = e°x. 2.oo J fi(x)B.. i = 1.2 ... By (A. f00 fi(x)Cr.39).f ' f (x)B(dx). Now returning to the proof of (A. for each i.. and the case of a general f then follows from the definition of the class E and a uniform integrability argument. then cc f (x)Bn ( dx) = (?!c ) e'= .38 )...f (z) = f = 1 1 1 1-n/ o . .38) We first show that for each f E E. n.14 Dn has been chosen such that 00 1 °° f fi(x)D n(dx ) < 1++ ' - o \ n o f fi(x)B(dx)... oo).. .. \\ 0 Corollary A5. - APPENDIX B implies that 00 o o 00 n-. i = 1. TO (A..n(dx) < 1+. and hence we may choose r(n) such that L 9l) f (x)Cr(n)..

. 5e Phase-type fitting As has been mentioned a number of times already. . oo) with B[-y +e] < oo for some e > y = 7(B. the problem thus arises of how to fit a phase-type distribution B to a given set of data (1..} of phase-type distributions such that Bfz + B as n -* oo and -Yn -4 ry where ryn = y(Bn. The adjustment coefficient is a fundamental quantity. The present section is a survey of some of the available approaches and software for inplementing this.14 is classical. the remaining results may be slightly stronger than those given in the literature. I.17 To a given /3 > 0 and a given distribution B on (0. /3) = ry for all n.APPENDIX 359 In compound Poisson risk processes with arrival intensity /3 and claim size distribution B satisfying . the adjustment coefficient 'y = 7(B. . (N or a given distribution Bo. (N. one can obtain 7(Bn./3). there is substantial advantage in assuming the claim sizes to be phase-type when one wants to compute ruin probabilities. . . the loggamma or the Weibull have been argued to provide adequate descriptions of claim size distributions. then Bn['Y + ei] -* B[y + ei] > 1 + 7 Q implies that 'yn < ry + ei for all sufficiently large n .. lim inf > is proved similarly. e ) and ei J.l3µb < 1.> y for some sequence {ei} with ei E (0.e. We shall formulate the problem in the slightly broader setting of fitting a phase-type distribution B to a given set of data (1i . For practical purposes. This is motivated in part from the fact that a number of non-phase-type distributions like the lognormal. If ei > 0. 0 as i -* oo. However./3) is defined as the unique solution > 0 of B[-y] = l+y/j3.16.18 In the setting of Corollary A5. O We state without proof the following result: Corollary A5.3). from a more conceptual . Proof Let fi(x) = el'r+E. Notes and references Theorem A5.. . and in part from the fact that many of the algorithms that we describe below have been formulated within the set-up of fitting distributions. . and therefore the following result is highly relevant as support for phase-type assumptions in risk theory: Corollary A5. but are certainly not unexpected. there is a sequence {B. lim sup ryn < 7.

Notes and references Theorem A5.14 is classical; the remaining results may be slightly stronger than those given in the literature, but are certainly not unexpected.

5e Phase-type fitting

As has been mentioned a number of times already, there is substantial advantage in assuming the claim sizes to be phase-type when one wants to compute ruin probabilities. For practical purposes, the problem thus arises of how to fit a phase-type distribution $B$ to a given set of data $\zeta_1,\dots,\zeta_N$ or a given distribution $B_0$. The present section is a survey of some of the available approaches and software for implementing this.

We shall formulate the problem in the slightly broader setting of fitting a phase-type distribution $B$ to a given distribution $B_0$. This is motivated in part from the fact that a number of non-phase-type distributions like the lognormal, the loggamma or the Weibull have been argued to provide adequate descriptions of claim size distributions, and in part from the fact that many of the algorithms that we describe below have been formulated within the set-up of fitting distributions. However, from a more conceptual point of view the two sets of problems are hardly different: an equivalent representation of a set of data $\zeta_1,\dots,\zeta_N$ is the empirical distribution $B_e$, giving mass $1/N$ to each $\zeta_i$.

Of course, one could argue that the results of the preceding section concerning phase-type approximation contain a solution to our problem: given $B_0$ (or $B_e$), we have constructed a sequence $\{B_n\}$ of phase-type distributions such that $B_n\to B_0$, and as fitted distribution we may take $B_n$ for some suitable large $n$. The problem is that the constructions of $\{B_n\}$ are not economical: the number of phases grows rapidly, and in practice this sets a limitation to the usefulness (the curse of dimensionality; we do not want to perform matrix calculus in hundreds or thousands of dimensions).

A number of approaches restrict the phase-type distribution to a suitable class of mixtures of Erlang distributions. The earliest such reference is Bux & Herzog [85], who assumed that the Erlang distributions have the same rate parameter and used a non-linear programming approach; the constraints were the exact fit of the first two moments, and the objective function to be minimized involved the deviation of the empirical and fitted c.d.f.'s at a number of selected points. Johnson & Taaffe [216] considered a mixture of two Erlangs (with different rates) and matched (when possible) the first three moments. Schmickler (the MEDA package; [317]) has considered an extension of this set-up, where more than two Erlangs are allowed and, in addition to the exact matching of the first three moments, a more general deviation measure is minimized (e.g. the $L_1$ distance between the c.d.f.'s).

The characteristic of all of these methods is that even if the number of parameters may be low (e.g. three for a mixture of two Erlangs), the number of phases required for a good fit will typically be much larger, and this is what matters when using phase-type distributions as a computational vehicle in, say, renewal theory, risk theory, reliability or queueing theory. It therefore seems a key issue to develop methods allowing for a more general phase diagram, and we next describe two such approaches, which also have the feature of being based upon the traditional statistical tool of maximum likelihood.

A method developed in a series of papers by Bobbio and co-workers (see e.g. [70]) restricts attention to acyclic phase-type distributions, defined by the absence of loops in the phase diagram. The likelihood function is maximized by a local linearization method allowing the use of linear programming techniques.
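For orientation, the following sketch fits a two-phase Coxian (an acyclic phase-type distribution) by direct numerical maximisation of the log-likelihood based on the density $\alpha e^{Tx}t$. It only illustrates the structure of the likelihood; it is not the specialised linear-programming scheme of [70] nor the EM scheme discussed next, and the simulated data and parametrisation are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.gamma(shape=2.0, scale=0.5, size=200)   # stand-in claim sizes

def neg_loglik(theta):
    # Two-phase Coxian: start in phase 1 with rate l1, move to phase 2
    # with probability q (rate l2 there), otherwise exit directly.
    l1, l2 = np.exp(theta[0]), np.exp(theta[1])
    q = 1.0 / (1.0 + np.exp(-theta[2]))
    alpha = np.array([1.0, 0.0])
    T = np.array([[-l1, q * l1],
                  [0.0, -l2]])
    t = -T @ np.ones(2)                            # exit rate vector
    dens = np.array([alpha @ expm(T * x) @ t for x in data])
    return -np.sum(np.log(dens))

res = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
print(np.exp(res.x[:2]), 1.0 / (1.0 + np.exp(-res.x[2])))
```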

Asmussen & Nerman [38] implemented maximum likelihood in the full class of phase-type distributions via the EM algorithm; a program package written in C for the SUN workstation or the PC is available as shareware, cf. [202]. The observation is that the statistical problem would be straightforward if the whole ($E_\Delta$-valued) phase process $\{J_t^{(k)}\}_{0\le t\le\zeta_k}$ associated with each observation $\zeta_k$ was available. In fact, then the estimators would be of simple occurrence-exposure type,
$$\widehat\alpha_i\;=\;\frac 1N\sum_{k=1}^N I\bigl(J_0^{(k)}=i\bigr),\qquad \widehat t_{ij}\;=\;\frac{N_{ij}}{T_i},\qquad i\in E,\ j\in E_\Delta,\ j\ne i,$$
where
$$T_i\;=\;\sum_{k=1}^N\int_0^{\zeta_k} I\bigl(J_t^{(k)}=i\bigr)\,dt$$
is the total time spent in state $i$ and $N_{ij}$ is the total number of jumps from $i$ to $j$. The general idea of the EM algorithm ([106]) is to replace such unobserved quantities by the conditional expectation given the observations. Since this is parameter-dependent, one is led to an iterative scheme, e.g.
$$\alpha_i^{(n+1)}\;=\;\frac 1N\sum_{k=1}^N \mathbb E_{\alpha^{(n)},T^{(n)}}\bigl[I\bigl(J_0^{(k)}=i\bigr)\bigm|\zeta_1,\dots,\zeta_N\bigr],\qquad t_{ij}^{(n+1)}\;=\;\frac{\mathbb E_{\alpha^{(n)},T^{(n)}}\bigl[N_{ij}\mid\zeta_1,\dots,\zeta_N\bigr]}{\mathbb E_{\alpha^{(n)},T^{(n)}}\bigl[T_i\mid\zeta_1,\dots,\zeta_N\bigr]},$$
and similarly for the $t_i^{(n+1)}$. The crux is the computation of the conditional expectations. E.g., it is easy to see that
$$\mathbb E_{\alpha^{(n)},T^{(n)}}\bigl[T_i\mid\zeta_1,\dots,\zeta_N\bigr]\;=\;\sum_{k=1}^N\int_0^{\zeta_k}\frac{\alpha^{(n)}e^{T^{(n)}x}e_i\cdot e_i'e^{T^{(n)}(\zeta_k-x)}t^{(n)}}{\alpha^{(n)}e^{T^{(n)}\zeta_k}t^{(n)}}\,dx,$$
and this and similar expressions are then computed by numerical solution of a set of differential equations.

In practice, the methods of [70] and [38] appear to produce almost identical results. In fact, it seems open whether the restriction to the acyclic case is a severe loss of generality.
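When the dimension is small, the conditional expectation of $T_i$ can also be evaluated by straightforward numerical quadrature rather than by the differential-equation approach mentioned above. The following sketch does so for a single observation $\zeta$ and an illustrative two-phase representation $(\alpha,T)$ (all parameter values are assumptions for the example); a useful sanity check is that the conditional expected times in the phases sum to $\zeta$.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 2-phase representation (alpha, T); t is the exit rate vector.
alpha = np.array([0.6, 0.4])
T = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
t = -T @ np.ones(2)

def cond_time_in_phase(i, zeta, m=400):
    # E[ time spent in phase i | observation zeta ], by midpoint-rule quadrature
    # of alpha e^{Tx} e_i * e_i' e^{T(zeta-x)} t, normalised by the density.
    x = (np.arange(m) + 0.5) * zeta / m
    vals = [(alpha @ expm(T * s))[i] * (expm(T * (zeta - s)) @ t)[i] for s in x]
    return (zeta / m) * sum(vals) / (alpha @ expm(T * zeta) @ t)

zeta = 1.3
times = [cond_time_in_phase(i, zeta) for i in range(2)]
print(times, sum(times))   # the sum should equal zeta (up to quadrature error)
```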


). Asmussen (1985) Conjugate processes and the simulation of ruin problems. [14] S. Chichester New York. Arfwedson ( 1954) Research in collective risk theory. Arfwedson ( 1955) Research in collective risk theory. Preprint. [10] K. New York. Stoch. 191-223. Asmussen (1987) Applied Probability and Queues. The case of equal risk sums. pp. 53-100. Choudhury & W. 38. Aktuar Tidskr. Asmussen ( 1982 ) Conditioned limit theorems relating a random walk to its associate. [2] J. 14.Bibliography [1] J. [9] G. Sparre Andersen (1957) On the collective theory of risk in the case of contagion between the claims . 31-57 . Queueing Systems 5 . Probab. In: Limit Theorems and Related Problems (A. [3] J. 37. Aktuar Tidskr. [7] E. Abate. 101-116. Skand. Transactions XVth International Congress of Actuaries. Queueing Systems 10. [11) S. John Wiley & Sons. Whitt (1998) Explicit M/G/1 waiting-time distributions for a class of long-tail service time distributions . Whitt (1994) Waiting-time tail probabilities in queues with long-tail service -time distributions . 5-87. AT&T. [12] S. [8] G. 143-170.L. Queueing Systems 16. 219-229. Appl. [4] M. B. Proc. 345-368. 311-338. Asmussen (1984) Approximations for the probability of ruin within finite time . Optimizations Software. Whitt ( 1992) The Fourier-series method for inverting transforms of probability distributions . Skand. 213-229. Arndt (1984) On the distribution of the supremum of a random walk on a Markov chain . J. Teubner. Abramowitz & I. The Mathematical Scientist 14. 20. Abate & W. G. Borokov. with applications to risk reserve processes and the GI /G/1 queue. 363 .G. New York. ibid. Alsmeyer (1991) Erneuerungstheorie. Act. Anantharam (1988 ) How large delays build up in a GI/GI11 queue. Scand. 57. Appl. [6] V. Stegun ( 1972 ) Handbook of Mathematical Functions (10th ed. [15] S. 253-267. Stuttgart. 1985. [13] S.). [5] G. Abate & W. New York. 1984 . ed. Adv. II.A. Asmussen (1989a) Aspects of matrix Wiener-Hopf factorisation in applied probability. Dover.

Floe Henriksen & C. Asmussen (2000) Matrix-analytic models and their analysis. J. Scand. 29-43. Asmussen. J. 79-102. Ann. [20] S. Methods V Problems (J. Asmussen. [18] S. Alfa & S. Asmussen (1998c) A probabilistic look at the Wiener-Hopf equation. Stochastic Models 11. 27.S. 1996. Probab. Matrix-Analytic Methods in Stochastic Models (A. [30] S. Scand. Frey. Stoch. Random Processes 2.). Asmussen (1999) On the ruin problem for some adapted premium rules. Hojgaard (2000) Rare events simulation for heavy-tailed distributions. [29] S. Appl.364 BIBLIOGRAPHY [16] S. Boca Raton. Asmussen & B. 354-374. Appl. Asmussen (1992b) Light traffic equivalence in single server queues. [19] S. Asmussen & M. SIAM Review 40. Advances in Queueing: Models. Florida. Proc. 193-226. Bladt (1996) Renewal theory and queueing algorithms for matrix-exponential distributions. 2. 1989 .M. 8. Asmussen & K. [31] S. 106-119. 772-789. 49-66. Asmussen. Proc. eds. [33] S. L. Rolski & V. Kalashnikov & A. [23] S. [22] S. Astin Bulletin 27. [28] S. Probab. Hojgaard (1999) Approximations for finite horizon ruin prob- abilities in the renewal model. Bladt (1996) Phase-type distributions and risk processes with premiums dependent on the current reserve. Binswanger & B. Schmidt (1995) Does Markov-modulation increase the risk? ASTIN Bull. 189-201. eds. Act. New York. Ann. 69-100. Probabilistic Analysis of Rare Events (V. 20. CRC Press. Kliippelberg (1994) Large claims approximations for risk processes in a Markovian environment.K. Th. Bernoulli 6. 137-168. [34] S. Appl. Asmussen (1995) Stationary distributions via first passage times. Binswanger (1997) Simulation of ruin probabilities for subexponential claims. J. Scand. J.). [25] S. 96-107. A. 21-49. Asmussen & B. Andronov. .). Asmussen (1991) Ladder heights and the Markov-modulated M/G/1 queue. 313-341. 555-574. 313-326. 37. [32] S. 1999 . [17] S. Asmussen (1992c) Stationary distributions for fluid flow models and Markovmodulated reflected Brownian motion. stationary distributions and first passage probabilities. 3-15. Asmussen (1989b) Risk theory in a Markovian environment. 303-322. [21] S. Asmussen (1998a) Subexponential asymptotics for stochastic processes: extremal behaviour. Asmussen (1998b) Extreme value theory for queues via cycle maxima. Scand. Hojgaard (1996) Finite horizon ruin probabilities for Markovmodulated risk processes with heavy tails. Stoch. T. Asmussen & M. Extremes 1 . Dshalalow ed. Asmussen (1992a) Phase-type representations in random walk and queueing problems. Marcel Dekker. Statist. 54. Act. [24] S. 297-318. Act. 25. K. [26] S. Probab. Appl. Riga Aviation University. 19-36. [27] S. Chakravarty. Ann.

(to appear). Asmussen & T. Fitting phase-type distributions via the EM algorithm. Insurance: Mathematics and Economics 10. Sc. Kliippelberg (1996) Large deviations results for subexponential tails. J. Inf. Asmussen & M. 23. Oper. [45] S. Rolski (1991) Computational methods in risk theory: a matrix-algorithmic approach. Probab. [38] S. [42] S. O. Koole (1993). Sigman (1996) Monotone stochastic recursions and their duals. first passage problems and extreme value theory for queues. . Taksar (2000) Optimal risk control and dividend distribution policies . 20. Dshalalow ed. [47] S. Asmussen & S. Appl. Rolski (1994) Risk theory in a periodic environment: Lundberg's inequality and the Cramer-Lundberg approximation. Asmussen & K. [43] S. Res. Example of excess-off-loss reinsurance for an insurance corporation. Asmussen & C. Asmussen . [49] S.BIBLIOGRAPHY 365 [35] S. Asmussen & R. 419-441. Asmussen & H. Proc. Schock Petersen (1989) Ruin probabilities expressed in terms of storage processes. Appl. 913-916. Math. [41] S. Nielsen (1995) Ruin probabilities via local adjustment coefficients . Johnson. [48] S. 31. 64. Asmussen & C. 30. Appl. Wiley. Asmussen & V. [44] S. Probab. [40] S. Adv. Opns. 421-458. Scand. 1-20. 365-372. Appl. Read eds. Supplementary Volume (Kotz. [52] S. Probab.A. [39] S. Asmussen & G.A. 1-15. 299-324. Res. [50] S. Insurance: Mathematics and Economics 20. Marked point processes as limits of Markovian arrival streams. Asmussen & V. Statistica Neerlandica 47. Asmussen & R. Methods 8 Problems (J. Advances in Queueing: Models. Schmidt (1995) Ladder height distributions with marks.Y. Prob 32 . 736-755. Olsson (1996). Finance and Stochastics 4.). J. Hojgaard & M. Proc.M. Management Science 45. Rubinstein (1999) Sensitivity analysis of insurance risk models. [51] S. Asmussen & D. [36] S. Appl. H. B. Eng. O'Cinneide (2000/2001) Matrix-exponential distributions [Distributions with a rational Laplace transform] Encyclopedia of Statistical Sciences. with applications to insurance risk Stoch. 105-119. Rubinstein (1995) Steady-state rare events simulation in queueing models and its complexity properties. O'Cinneide (2000/01) On the tail of the waiting time in a Markov-modulated M/G/1 queue. Statist.Y.). 410-433. Asmussen & T. Perry (1992) On cycle maxima. 58. Nerman & M. 1-9. J. Florida. Appl. Stochastic Models 8. Adv. Schmidt (1993) The ascending ladder height distribution for a class of dependent random walks. 429-466. 422-447. Probab. CRC Press. Taksar (1997) Controlled diffusion models for optimal dividend payout. Schmidt (1999) Tail approximations for nonstandard risk and queueing processes with subexponential tails. Asmussen. Asmussen & C. 259274. 10. [46] S. Boca Raton. [37] S. Schmidli & V. Th. Asmussen. Stoch. 1125-1141. 103-125.

1988 . Baker (1977) The Numerical Solution of Integral Equations. Berman & R.366 BIBLIOGRAPHY [53] S. 275-279. Beekman (1969) A ruin function approximation. Wiley. New York. Cox (1972) A low traffic approximation for queues. [58] O. von Bahr (1975) Asymptotic ruin probabilities when exponential moments do not exist. [69] P. Cambridge University Press. Insurance: Mathematics and Economics 6. Probab.M. 661-667. SpringerVerlag. 41-48. [66] N. [65] N. Actuaries 21. Wiley. Goldie & J. Ney ( 1972) Branching Processes .A. 6-10. J. Bjork & J. [71] P. Scand. Soc. Scand.L. Plemmons (1994) Nonnegative Matrices in the Mathematical Sciences. Beekman (1985) A series for infinite time ruin probabilities. [63] A. 705-766.H. von Bahr (1974) Ruin probabilities expressed in terms of ladder height distributions. Bingham (1975) Fluctuation theory in continuous time. Athreya & P. 169-186. [67] T. Teugels (1987) Regular Variation. 129-134. Bloomfield & D. 190-204. J. Probab.H. Bingham. de Waegenaere (1990) Simulation of ruin probabilities. Adv. Appl. Teugels ( 1996) Convergence rates for M /G/1 queues and ruin problems with heavy tails . 1985 . 148-156. Beekman (1974) Two Stochastic Processes. [55] B. 832-840.J. 221-232. J. Appl. Act. Borovkov (1976) Asymptotic Methods in Queueing Theory. 77-111.B . 95-99. Schmidli (1995) Saddlepoint approximations for the probability of ruin in finite time. Act. 1995. J. J. Bjork & J. Barndorff-Nielsen & H. Scand. Insurance: Mathematics and Economics 4. Appl. Act. 1181-1190. 1975. Telek (1994) A bencmark for PH estimation algorithms: results for acyclic PH.L. Barndorff-Nielsen (1978) Information and Exponential Families in Statistical Theory. . 7. Berlin. [57] C. [61] J. Billingsley (1968) Convergence of Probability Measures. Oxford. New York. SIAM. Grandell (1988) Exponential inequalities for ruin probabilities in the Cox case. 9. [62] J.A. Bobbio & M. Boogaert & A. [70] A.T. Act. Scand. Halsted Press. [59] O. [68] T. Cambridge. [64] P. Asmussen & J. Trans. Act. Clarendon Press. Probab. Springer . C. Boogaert & V. J. [73] A. 1974. [56] B. Grandell (1985) An insensitivity property of the ruin probability. Scand.H.R. Insurance: Mathematics and Economics 9. 33. Stochastic Models 10. Crijns (1987) Upper bounds on ruin probabilities in case of negative loadings and positive interest rates. [72] P. J. New York. [54] K. [60] J.

Springer. Bell. [88] J. Appl. Veraverbeke (1990). Proc. Brockwell . Amsterdam. J. S. [80] F.J. Amsterdam. Cramer (1955) Collective risk theory. Illinois.W. 392-433. Breiman ( 1968) Probability. Gerber. 313-319. H. Schrage (1987) A Guide to Simulation. Burman & D . Boxma & J. Qinlar (1972) Markov additive processes.BIBLIOGRAPHY 367 [74] O. Cramer (1930) On the Mathematical Theory of Risk. M. [83] D. New York. Addison-Wesley.A. Herzog (1977) The phase concept : approximations of measured data and performance analysis. Berlin. [85] W. Goovaerts & F. Smith ( 1983) Asymptotic analysis of a queueing system with bursty traffic. De Vylder (1986) Ordering of risks and ruin probabilities. [86] K. Cohen (1999) Heavy-traffic analysis for the GI/G/1 queue with heavy-tailed service time distributions . Area Commun. Geb. 34. Tech. 907920. [82] H. Malgouyres (1983) Large deviations and rare events in the study of stochastic algorithms . North-Holland. verve. Bux & U. Cottrell.L. 93-121.L. Wiley. Skandia Jubilee Volume. Croux & N. [87] E. [84] D. Wahrscheinlichkeitsth.I. Probab. Bucklew ( 1990) Large Deviation Techniques in Decision. 105-119. Simulation and Estimation. 1433-1453. Boxma & J.R. Chandy & M. Stockholm. Philos. Bratley.R. [89] M. Academic Press. Res. The Society of Actuaries . Cambr. Insurance : Mathematics and Economics 5.U. Burman & D. Tweedie (1982) Storage processes with general release rule and additive inputs .W. 16. [92] H. Sel. Chung (1974) A Course in Probability Theory (2nd ed. Resnick & R. Adv. 23-38. 177-204. Jones & C. [77] P. [78] L.. Bowers .). New York. Cohen (1998) The M/G/1 queue with heavy-tailed service time distribution. 51. 749-763. Itasca. D. Cox (1955) Use of complex probabilities in the theory of stochastic processes. Smith ( 1986) An asymptotic analysis of a queueing system with Markov-modulated arrivals . 127-130.L. 14. Springer-Verlag. Soc. Oper. The Jubilee volume of Forsakringsbolaget Skandia. Fox & L. Computer Performance (K.W.J.) North-Holland. B. Aut.M. [90] D.L.A. Fort & G. Biihlmann ( 1970) Mathematical Methods in Risk Theory. IEEE Trans. Queueing Systems 33. Hickman. Control AC-28. Insurance : Mathematics and Economics 9. Jr.C. J. Cohen (1982) The Single Server Queue (2nd ed. Reiser eds. Nesbitt (1986) Actuarial Mathematics . [75] O. [93] K. Syst. Broeck . 24. Stockholm. [76] N. Z. J. II. Non-parametric estimators for the probability of ruin.Y. New York San Francisco London. [79] P. Reading.-C. . IEEE J.Y . 62. 35-39.R. [91] H. [81] J.).

Pentikainen & E. [104] F. 3.. van Dawen (1986) Ein einfacher Beweis fur Ohlins Lemma. Suppl. [106] A. J. Blotter der deutschen Gesellschaft fur Versicherungsmathematik XVII. J.B. 9 . 37-50. Oper. Stockholm. [110] F. Steinebach ( 1991 ) On the estimation of the adjustment coefficient in risk theory via intermediate order statistics . De Vylder (1978) A practical solution to the problem of ultimate ruin probability. Dassios & P. Steinebach (1990) On some alternative estimators of the adjustment coefficient in risk theory. . [108] F. Dembo & O.P. [101] C.D. [112] F. [99] R. Math. Insurance: Mathematics and Economics 3. De Vylder (1996) Advanced Risk Theory. 624-628. Goovaerts (1984) Bounds for classical ruin probabilities. Rubin (1977) Maximum likelihood from incomplete data via the EM algorithm. [100] A. Rolski (1991) Light traffic approximations in queues. Daley & T.. Haezendonck (1987) Classical risk theory in an economic environment. [105] A. 22.M. Zeitouni (1992) Large Deviations Techniques and Applications. [97] D. T. 70-83 (1969). Csorgo & J. Csorgo & J. 1-7. Goovaerts (1988) Recursive calculation of finite-time ruin probabilities. Martingales and insurance risk. 181-217. Scand. Editions de 1'Universite de Bruxelles. English version published in Skand. Springer. 114-119. 1990 . Soc. Act. [107] L. Appl. Daley & T. Daykin. Pesonen (1994) Practical Risk Theory for Actuaries. Insurance: Mathematics and Economics 4. [111] F. Embrechts (1989). 121-131. Delbaen & J. Davidson (1946) On the ruin problem of collective risk theory under the assumption of a variable safety loading (in Swedish). New York. 88-101. Boston. 85-116. [103] F. Aktuar. [95] M. Dempster. Insurance: Mathematics and Economics 10. Probab. 57-71. Math. Insurance: Mathematics and Economics 6. 201-206. Math. Scand. 135-159. Devroye (1986) Non-uniform Random Variate Generation. Oper. Rolski (1984) A light traffic approximation for a single server queue. 227-229. De Vylder & M. De Vylder & M. Insurance: Mathematics and Economics 7. Tidskr. 16. Jones and Bartlett. Res. Appl. 583-602. 27. 435-436. Haezendonck (1985) Inversed martingales in risk theory. [102] P. J. De Vylder (1977) A new proof of a known result in risk theory. Laird & D. Comp. Deheuvels & J. Teugels ( 1990) Empirical Laplace transform and approximation of compound distributions .368 BIBLIOGRAPHY [94] M. Roy. N. Forsiikringsmatematiska Studier Tillagnade Filip Lundberg. Chapman & Hall.L. Statist. J. [98] A. Res. [109] F. Stochastic Models 5. Actuarial J. [96] D. Delbaen & J.

Res. Dufresne & H. Act. Gerber (1991) Risk theory for the compound Poisson process that is perturbed by a diffusion. Embrechts ( 1988 ). Embrechts . J. Taylor ( 1975) A diffusion approximation for the ruin probability with compounding assets.J. Shiu (1991) Risk theory with the Gamma process. Act. [117] D. Scand.C. Insurance: Mathematics and Economics 19. Philos. J. Insurance: Mathematics and Economics 7.B.C. 5159. Dickson & J. [129] D . [116] D. [123] E.C. J. [114] D. Hipp (1998) Ruin probabilities for Erlang (2) risk processes. Hipp (1999) Ruin problems for phase-type(2) risk processes. Dynkin (1965) Markov Processes I.J.R. Regterschot (1988) Conditional PASTA. 363-374. Scand. J.M. 17-41. Gerber & E. 105-115.C. 1984 . [125] F. and the amount of claim causing ruin.R. 177-192.C. Dickson (1992) On the distribution of the surplus prior to ruin . J. Dufresne.BIBLIOGRAPHY 369 [113] D. 107-124. Gerber (1988) The surpluses immediately before and at ruin. Grandell & H. [122] B. J. 61-80.G. Insurance : Mathematics and Economics 22. Act. [127] F. Scand. Letters 7.C. 49-62. 229-232. Camb. with applications.U.S.U. Harrison & A. British Actuarial J. Dickson (1994) An upper bound for the probability of ultimate ruin. [126] F. Djehiche (1993) A large deviation estimate for ruin probabilities. Act.K. H. Emanuel .W. [115] D. Duffield & N.M. J. Dickson & H.C. [119] D. [120] D. [130] P. Dickson (1995) A review of Panjer's recursion formula and its applications.C. 42-59. Dickson & H. Act. 1984 . O'Connell (1995) Large deviations and overflow probabilities for the general single-server queue. Scand. 269-274. Act. Astin Bulletin 21.R. . [118] D. 191-207. 37-45. 1993 . 118. Insurance : Mathematics and Economics 7. [128] E.M. 2000 .M. 193-199.U. Gray (1984) Approximations to the ruin probability in the presence of an upper absorbing barrier. J. van Doorn & G. Schmidli (1993) Finite-time Lundberg inequalities in the Cox case . 1975 . Waters (1996 ) Reinsurance and ruin .C. Scand. Act. Scand.R. Ruin estimates for large claims . [131] P. Insurance : Mathematics and Economics 10. Insurance : Mathematics and Economics 11. 1993 .M. Springer . Dufresne & H. Insurance: Mathematics and Economics 25. Gray (1984) Exact solutions for ruin probability in the presence of an upper absorbing barrier. 174-186. Dickson & C. Dickson & J. Dickson & C. 1. 251-262. Scand. Math.M. 131-138. Proc. Waters (1999) Ruin probabilities wih compounding.M.C. Soc. [121] D. 1994 . Oper. [124] N.M.M. 147-167. J.M. Berlin Gottingen Heidelberg.

Embrechts & J. [133] P. Probab.W. 181-190.D.D. [149] C. [147] M. Probab. Esscher (1932) On the probability function in the collective theory of risk. Astin Bulletin 16. Embrechts. J. Mikosch (1997) Modelling Extremal Events for Finance and Insurance.L. . [143] W.).W. 1466-1474.K. Jensen. Springer. Math. Veraverbeke (1982) Estimates for the probability of ruin with special emphasis on the possibility of large claims . Adv. Wiley. [141] F. Appl. Erlang 2. Walkup (1967) Association of random variables. Statist. 35. 175-195. 59-74. A bootstrap procedure for estimating the adjustment coefficient. Feller (1971) An Introduction to Probability Theory and its Applications II (2nd ed. with applications. Teugels (1985). New York. 33-39. [145] P. [144] P. 159-178. 269-274. Skand. Erlang (1909) Sandsynlighedsregning og telefonsamtaler . [138] P. Reprinted 1948 as 'The theory of probabilities and telephone conversations' in The Life and work of A. Franken. [139] A. Embrechts. Fields 75. Schmidt (1982) Queues and Point Processes. Griibel & S.370 BIBLIOGRAPHY [132] P. [146] E. Appl. 695-712. 26. Pitts (1993) Some applications of the fast Fourier transform in insurance mathematics . Feller (1966) An Introduction to Probability Theory and its Applications I (3nd ed. Ann. Insurance : Mathematics and Economics 1. Scand. Lai (1998) Wald's equations. Furrer (1998) Risk processes perturbed by a-stable Levy motion. [140] J. Frenz & V. Arndt & V. 566-580. Mikosch (1991). D. Insurance: Mathematics and Economics 10. Embrechts & T. U. Embrechts. Akt. Embrechts & N.J. 17. On the excursions of Markov processes in classical duality.D. 404-422. [135] P. Fitzsimmons (1987). Probab. Appl. 623-637. 81-90. 29. Wiley. 616-624. [137] P. Tidsskr. Adv. 55-72. New York. Maejima & J. Sci. [142] W. 38. 131-137. Konig. F. Frees (1986) Nonparametric estimation of the probability of ruin. 1998 .L. J. Esary. Probab. Schmidli (1994) Ruin estimation for a general insurance risk model. Wiley. Danish Academy Tech. [136] P. C. Rel. Probab. M. Heidelberg. Fuh (1997) Corrected ruin probabilities for ruin probabilities in Markov random walks. 29. Insurance: Mathematics and Economics 7. Nyt Tidsskrift for Matematik B20. J. Schmidt (1992) An insensitivity property of ladder height distributions. [134] P. J. Villasenor (1988) Ruin estimates for large claims. 59-75. [150] H. Approximations for compound Poisson and Polya processes. Th. Trans. Adv. Appl. Appl.A.). R.M. Fuh & T. Embrechts & H.K. Proschan & D. Statistica Neerlandica 47. Act. Kliippelberg & T. first passage times and moments of ladder variables in Markov random walks. Probab. [148] C.

Furrer & H. . [156] H. de Vylder & J. North-Holland. A.W. Gerber (1973) Martingales in risk theory. 205-216. F. [169] J. 5. Act. Act. North-Holland. R. Gerber (1986). J.E. [168] J. [166] M.M. Glynn & W. [163] P. Appl.U. Goovaerts. 105-115. Weron (1997) Stable Levy motion approximation in collective risk theory. In Studies in Applied Probability (J. [164] B. Appl. Gerber (1971) Der Einfluss von Zins auf die Ruinwahrscheinlichkeit. Goovaerts.U. 1981 . Schweiz. Suppl.J.. Amsterdam. Gerber (1979) An Introduction to Mathematical Risk Theory.U. Insurance: Mathematics and Economics 15.V. Mitt. Vers. Scand. Insurance : Mathematics and Economics 7.. Adv. 73. Michna & A. 97-114. [155] H. Basel. Grandell (1977) A class of approximations of ruin probabilities. ETH Zurich. Amsterdam. Skand. 1977. Probab. Bauwelinckx (1990) Effective Actuarial Methods. [161] P. Furrer. Gnedenko & I.BIBLIOGRAPHY 371 [151] H. Kluwer. Springer. Kou (1995) Limits of first passage times to rare sets in regenerative processes. Gerber (1981) On the probability of ruin in the presence of a linear dividend barrier. Ann. Vers.71. S. Gani. van Heerwarden & T. 1978 . [152] H. Scand. J. University of Pennsylvania. 77-78. 463-480. [157] H. Birkhauser . Griibel (1996) Perpetuities with thin tails. [162] P. [165] M. Life Insurance Mathematics. 424-445. [154] H. eds. Probab.S. J. Huebner Foundation Monographs. [159] H. Goldie & R. Aktuarietidskrift 1970 . Schmidli (1994) Exponential inequalities for ruin probabilities of risk processes perturbed by diffusion.J. Scand. Schweiz. Insurance: Mathematics and Economics 20.N. [167] C. Act. Unpublished.). Glasserman (1991) Gradient Estimation via Perturbation Analysis. Math. Haezendonck (1984) Insurance Premiums.-G. Kovalenko (1989) Introduction to Queueing Theory (2nd ed. Kaas. contained in the authors PhD thesis. Ver. 63-70. Gerber (1970) An extension of the renewal equation and its application in the collective theory of risk. 23-36. Whitt (1994) Logarithmic asymptotics for steady-state tail probabilities in a single-server queue. Ver. [160] H.U. Math. Appl. 28.U. J. [153] H. 37-52.U. [158] H. Boston.). 205-210. 131-156. Gerber (1988) Mathematical fun with ruin theory.U. 15-23. Grandell (1978) A remark on 'A class of approximations of ruin probabilities'. Glasserman & S. 31A. Probab. Z. Mitt. Furrer (1996) A note on the convergence of the infinite-time ruin probabilities when the weak approximation is a-stable Levy motion. Galambos & J. London. Dordrecht.

8. Matem. Probability Theory and Mathematical Statistics (I. Skand. Math. Preprint. Proc. Harrison & S.I. Stoch. [172] J. 1. Liet. Math. [176] B. Opns . Grandell ( 1992) Finite time ruin probabilities and martingales . [187] J. Harrison & S. 67-79. [175] J. Gusak & V. Gut (1988 ) Stopped Random Walks . Grnbel (1991) G/G/1 via FFT. E.243-255.A. New York. Grigelionis ( 1992) On Lundberg inequalities in a Markovian environment. M.-O. Informatica 2. Amsterdam. 75-90.V.I. Grigelionis (1996) Lundberg-type stochastic processes. Appl. [174] J. Springer-Verlag. . Tinbergen Institute Research Series 20. Res. Akademie-Verlag. 143-158. Proc. [185] J. M. 54. [177] B. Griibel & R. SpringerVerlag. 400-409. 3 . Silvestrov (2000) Cram€r-Lundberg approximations for nonlinearly perturbed risk processes . Korolyuk (1969) On the joint distribution of a process with stationary increments and its maximum .). Grigelionis ( 1993) Two-sided Lundberg inequalities in a Markovian environment.S. Proc . 30-41. Resnick (1976 ) The stationary distribution and first exit probabilities of a storage process with general release rule . [181] D. Grandell (1979) Empirical bounds for ruin probabilities. Hadwiger (1940) Uber die Wahrscheinlichkeit des Ruins bei einer grossen Zahl von Geschiiften . [183] M. Rink. ed.131-135. Stoch. 197-214. Probab. [180] R. Appl. [171] J. Berlin. Th. Appl. Lemoine (1977) Limit theorems for periodic queues. [182] A. Hermesmeier (1999) Computation of compound distributions I: aliasing errors and exponential tilting.372 BIBLIOGRAPHY [170] J. 167-176. Opns . 14. Astin Bulletin 29. KTH. [188] J. Applied Statistics 40. Gyllenberg & D. 3-32. [189] A. mathematische Wirtschaft .. M. [173] J. Theory and Applications. 5 . Harrison (1977) Ruin problems with compounding assets . Resnick (1977) The recurrence classification of risk and storage processes . 57-66. Grandell (1990) Aspects of Risk Theory. Ibragimov. van Heerwarden (1991) Ordering of Risks: Theory and Actuarial Applications. Insurance: Mathematics and Economics 26. 14. J. Chapman & Hall. [184] H . Statistical Algorithm 265. M. Res. Aktuar Tidskr. Grandell & C. 566-576. Archiv v. Winter School on Stochastic Analysis and Appl . [179] R. 347-358. Harrison & A. Probab. Appl. Grandell (1997) Mixed Poisson Processes.J. 33. 355-365. [178] B.and Sozialforschung 6. Segerdahl (1971) A comparison of some approximations of ruin probabilities. [186] J. Grandell (1999) Simple approximations of ruin functions.

Wahrscheinlichkeitsth. Hipp & R. Hesselager (1990) Some results on optimal reinsurance in terms of the adjustment coefficient. [196] C. Willekens (1986) Estimates for the probability of ruin starting with a large initial reserve. Taksar (2000) Optimal proportional reinsurance policies for diffusion models. Karlsruhe. Insurance: Mathematics and Economics 5.-R. [199] R. 25.). [192] U. 378-389. Hoglund (1990) An asymptotic expression for the probability of ruin within finite time. Versicherungswirtschaft.V. Geb. Prob. Huskova. Hipp (1989b) Estimators and bootstrap confidence intervals for ruin probabilities.BIBLIOGRAPHY 373 [190] P. 285-292. Nachrichten 30. [205] T. . 18. Wiley. The ruin problem for finite Markov chains. J. Probab. [200] M. [198] C. Math. Z. [197] C.A. [193] U. Studies in Statistical Quality Control and Reliability 1992 :4. [203] T. 19. Insurance: Mathematics and Economics 5. Astin Bulletin 19. 6. 1990 . [191] W. 89-96. Foruth Prague Symp. Heilmann (1987) Grundbegriffe der Risikotheorie.L. [201] L. 1163-1174. 3.. Hoglund (1974) Central limit theorems and statistical inference for Markov chains. Hipp (1989a) Efficient estimators for ruin probabilities. Herkenrath (1986) On the estimation of the adjustment coefficient in risk theory by means of stochastic approximation procedures. Hogan (1986) Comment on corrected diffusion approximations in certain random walk problems. Mandl & M. 1298-1310. Hill (1975) A simple general approach to inference about the tail of a distribution. 285-293. Probab. 80-95. Klugman (1984) Loss Distributions. [208] V. Horvath & E. Mathematical Statistics. Iglehart (1969) Diffusion approximations in collective risk theory. [202] 0. No. 1998 . Ann. Hojgaard & M. Appl. Karlsruhe. 305-313. Electronic Letters 35. Asmussen & 0. [207] D. Scand. 23. [194] 0. [206] B. Scand. [195] B. Ann. Statist. Act. Act. ACM TOMACS 6. S. J. Appl. Heidelberger (1995) Fast simulation of rare events in queueing and reliability models. on Asymptotic Statistics (P. verve. Staalhagen (1999) Waiting time distributions in M/D/1 queueing systems. eds. Proc. Hoglund (1991).V. Nerman (1992) EMPHT . 57-70. Hermann (1965) Bin Approximationssatz fiir Verteilungen stationarer zufalliger Punktfolgen. 43-85. New York. Verlag Versicherungswirtschaft e. Probab. 123-151. 29. J. [204] T. Michel (1990) Risikotheorie: Stochastische Modelle and Statistische Methoden. 377-381.B. Hogg & S. Ann. Iversen & L. Haggstrom. Chalmers University of Technology and the University of Goteborg.L. 166-180.M. 259-268. J.a program for fitting phase-type distributions.

Math. [213] P. Jensen (1995) Saddle Point Approximations. ibid. J. 325247. Johnson & M. 116-129. Amer. Scand. Karlin & H. N. 15. Kao (1988) Computing the phase-type renewal and related functions. Jensen (1953) A Distribution Model Applicable to Economics. NEC Research Institute. [214] A. Jagerman (1985) Certain Volterra integral equations arising in queueing.M. [225] J. [224] J. Soc. [223] J. Act. Oxford.G. Appl. Wishart (1964). Cambridge Philos. Cambridge Philos. Academic Press. Cambridge Philos. 711-743. 41-51. [212] J. Probab.G. 561-563. Princeton. 173-190.B. 239-256. Addenda to for processes defined on a finite Markov chain. Taaffe (1989/90) Matching moments to phase distributions. Manuscript. Taylor (1981) A Second Course in Stochastic Processes. J. Janssen & J. Proc. New York. Appl. 61. [210] D. Statist. . Soc. 259-281.A. [222] S. Lazar (1998) Subexponential asymptotics of a Markovmodulated random walk with a queueing application. [220] V. Act.M. N. Proc.L.C. 866-870. [218] V.J. Kennedy (1994) Understanding the Wiener-Hopf factorization for the simple random walk. [227] J.M. Goovaerts (1986) General bound on ruin probabilities. Stochastic Models 1. Jagerman (1991) Analytical and numerical solution of Volterra integral equations with applications to queues.P.M. Kalashnikov (1999) Bounds for ruin probabilities in the presence of large claims and their comparison. Proc. [228] J. Keilson & D. 283-306. Janssen (1980) Some transient results on the M/SM/1 special semi-Markov model in risk and queueing theories. [226] J. J. Insurance: Mathematics and Economics 5. 37. Keilson (1966) A limit theorem for passage times in ergodic regenerative processes. University of Chicago Press.374 BIBLIOGRAPHY [209] D. Technometrics 30.J. Kalashnikov (1996) Two-sided bounds of ruin probabilities. 165-167. 3. Clarendon Press. [211] J. Jelenkovic & A. Kemperman (1961) The Passage Problem for a Markov Chain. 11. Stochastic Models 5.H. 6. 123-133. 63. Keilson & D. Munksgaard. 87-93. 35. [215] J. Astin Bull. Reinhard (1985) Probabilities de ruine pour une classe de modeles de risque semi-Markoviens. [217] R. 1996. Keilson & D. 1-18. 187-193. 547-567. Ann.R. Soc. 6. J. Copenhagen. Kluwer. [216] M.G. Chicago. Astin Bull. 60. Wishart (1964) A central limit theorem for processes defined on a finite Markov chain. Kalashnikov (1997) Geometric Sums: Bounds for Rare Event with Applications. 31. Wishart (1964) Boundary problems for additive processes defined on a finite Markov chain. Kaas & M. [219] V.M. [221] E. Probab. ibid.

858-874. J. Scand. Scand. Macmillan. Philos. Stochastic Models 7. 1998 .S. [232] C. J.BIBLIOGRAPHY 375 [229] J. 259-264. 60-75. Kingman (1961) A convexity property of positive matrices . Soc. Lipsky (1992) Queueing Theory . Cambr. Adv. Seminaire de Probabilties X .a Linear Algebraic Approach. Nyrhinen (1992a) Simulating level-crossing probabilities by importance sampling. [237] C. B24. Kuchler & M. 48. Lehtonen & H. J. Probab. J. SIAM. [247] L.. Bernoulli 1 . Probab. Actuarial J.C. Kingman (1964) A martingale inequality in the theory of queues. Probab. Springer-Verlag. 1995 . [244] T. Prob. [248] D. Penev & A. 24. 383-392. Stadtmiiller (1998) Ruin probabilities in the presence of heavy-tails and interest rates . Insurance : Mathematics and Economics 12. Proc. [234] C. J. 133-135 (in Russian). [236] C. 277-289. Soc. Kingman (1962) On queues in heavy traffic. [243] A. 1-46. Latouche & V. Scand. Quart. [246] D. Sorensen (1997) Exponential Families of Stochastic Processes. Quart. 25. [230] J. New York. 60 . Ramaswami (1999) Introduction to Matrix-Analytic Methods in Stochastic Modelling. 18. Springer-Verlag. [239] H. Korolyuk. Kliippelberg (1988) Subexponential distributions and integrated tails. J. 49-58. Lemoine (1981) On queues with periodic Poisson input. [245] T. Camb. 125-147. Statist.C. Appl. 390-397. Math. Lemoine (1989) Waiting time and workload in queues with periodic Poisson input. 283-284. Appl. Lindley (1952) The theory of queues with a single server.F. Appl. Klnppelberg & T. 359-361.P. Turbin (1973) An asymptotic expansion for the absorption time of a Markov chain distribution. Klnppelberg (1993) Asymptotic ordering of risks and ruin probabilities. Cybernetica 4. Insurance: Mathematics and Economics 8. 889-900. [241] G. Kunita (1976) Absolute continuity of Markov processes. I. Appl. [231] J. 132-141.C. Klnppelberg (1989) Estimation of ruin probabilities by means of hazard rates. 44-77. 154-168.J. Mikosch (1995) Modelling delay in claim settlement.F. [238] V. Lucantoni (1991) New results on the single server queue with a batch Markovian arrival process. [233] C. Oxford 12. Soc. Lecture Notes in Mathematics 511. Klnppelberg & T.J. Philos. Roy. . [240] U. Lehtonen & H. Nyrhinen (1992b) On asymptotically efficient simulation of ruin probabilities in a Markovian environment. 279-285. 26. Kliippelberg & U. J. [242] A.F.F. (235] C. Act. Act. Mikosch (1995) Explosive Poisson shot noise with applications to risk retention. Proc.

Probab. 58. Probability and Mathematical Statistics (A. F. 147-149. [262] H . J. Eigenvalue properties and limit theorems . [263] M . [258] K . a useful concept in risk theory. Appl. Miyazawa & V. 286-298. J. Englunds Boktryckeri AB. Insurance : Mathematics and Economics 5. 15. 22. Appl. J. [256] A. 763-776.V.D.K. Lucantoni . Opns. [259] Z .). Appl. Ney & E. Malinovskii ( 1994) Corrected normal approximation for the probability of ruin within finite time.M. 370-377. 378-406. II Aterforsdkring av Kollektivrisker . Scand. 1996 . with an application . [261] H. 834-841. A fitting algorithm for Markov-modulated Poisson processes having two arrival rates . 31.S. [257] A. J. Cambridge Philos. [266] P. Nagaev (1957) Some limit theorems for stationary Markov chains.K. 429-448. 124-147. 36. 32. Europ. Act. Gut & J. Michna ( 1998) Self-similar processes in collective risk theory. Ann. Lundberg (1926) Forsdkringsteknisk Riskutjdmning. 3. Lundberg (1903) I Approximerad Framstdllning av Sannolikhetsfunktionen. 48-59. [251] F. Appl. Moustakides ( 1999) Extension of Wald 's first lemma to Markov processes. Stoch. 11 . . Ann.D. Scand. Act. Miller (1962 ) A matrix factorization problem in the theory of random variables defined on a finite Markov chain . Appl. Statist. Probab . Mammitzsch ( 1986 ) A note on the adjustment coefficient in ruin theory. 268-285.V.S.F. 1994. Neuts ( 1990) A single server queue with server vacations and a class of non-renewal arrival processes . Miller ( 1962 ) Absorption probabilities for sums of random variables defined on a finite Markov chain . Math. J. Res. [255] V. Th. Meier ( 1984). 2. Nummelin (1987) Markov additive processes I. Adv. Anal. Soc. Makowski ( 1994) On an elementary characterization of the increasing convex order .376 BIBLIOGRAPHY [249] D. [250] F. Appl. Probab. Soc. . 161-174. Martin-L6f (1983) Entropy estimates for ruin probabilities . K. Schmidt ( 1993) On ladder height distributions of general risk processes . Scand. Cambridge Philos. Probab. [254] V . Proc. Probab. 561-592. Miller ( 1961 ) A convexity property in the theory of random variables defined on a finite Markov chain. J. Probab. Act. Meier-Hellstern & M . Stockholm. [265] S. [252] A. Proc. J. Math. Uppsala. [253] V. [264] G. Malinovskii ( 1996) Approximation and upper bounds on probabilities of large deviations of ruin within finite time. 1261 (0?)-1270. Martin-L3f (1986) Entropy. 1986 .D. 223-235. [260] H. 29-39. Almqvist & Wiksell. Ann. 58 . 29 . Hoist eds. 676-705.

12. Appl. London. J. Appl. [277] J. Insurance: Mathematics and Economics 22. 123-144. 555-564. IEICE Trans. [285] J. Pakes (1975) On the tail of waiting time distributions. [276] C. Appl.A. Baltimore . [278] E. 16. [274] H . Probab. New York. [275] H.F. Naval Research Logistics Quarterly 25. Hamburg. Statistics 21. 339-353. Neuts (1977) A versatile Markovian point process. [283] J. . Paulsen ( 1998) Sharp conditions for certain ruin in a risk process with stochastic return on investments. Omey & E. Proc.BIBLIOGRAPHY 377 [267] M . Paulsen (1993) Risk theory in a stochastic economic environment. 764-779. Stoch. Neveu ( 1961 ) Une generalisation des processus a accroisances independantes. 3-16. [269] M . Stoch. [284] J. 445-454. Paulsen & H. 21. [268] M .F. 71. Adv. Paulsen (1998) Ruin theory with compounding assets . 1008-1026. 249-266. Models 6.K. Neuts (1992) Models based on the Markovian arrival process. Insurance: Mathematics and Economics 20. [271] M. Commun. Sem. 36. Willekens (1986) Second order behaviour of the tail of a subordinated probability distributions. Proc. 311-342. Appl. Math. [279] E. [273] R. Neuts ( 1989) Structured Stochastic Matrices of the M/G/1 Type and their Applications. Omey & E. Probab. 135-148. Stoch. 273-299. [272] J . Stoch. Norberg ( 1990) Risk theory and its statistics environment . Gjessing (1997a) Optimal choice of dividend barriers for a risk process with stochastic return on investments. 75. 1255-1265. Abh.F. Nyrhinen (1999) Large deviations for the time of ruin. 46.a survey. Neuts ( 1981 ) Matrix-Geometric Solutions in Stochastic Models. 215-223. Appl.G. [280] A. Appl. Johns Hopkins University Press. Astin Bulletin 5 . Probab. [282] J . Probab. E75-B . 327-361. Willekens (1987) Second order behaviour of distributions subordinate to a distribution with finite mean . Nyrhinen ( 1998 ) Rough descriptions of ruin for a general class of surplus processes. 1-57.F. Marcel Dekker. Proc. Paulsen & H. Appl. Stoch. Neuts ( 1978) Renewal processes of phase type. [270] M .K. Proc. 30. [281] J. J.F. 733-746. Appl. Gjessing (1997b) Present value distributions with applications to ruin theory and stochastic equations. Ohlin (1969) On a class of measures for dispersion with application to optimal insurance. J. Stochastic Models 3 . O'Cinneide (1990) Characterization of phase-type distributions.

[298] V. Res. Statist. Ripley (1987) Stochastic Simulation . [291] S. R. J. Embrechts (1996) Confidence bounds for the adjustment coefficient. Soc. J.M. Pyke (1959) The supremum and infimum of the Poisson process. Appl. Inst. 215-246. Probab. 1989 . Ann. Math.J. Math. Rogers (1994) Fluid models in queueing theory and Wiener-Hopf factorisation of Markov chains.M.. 222-261. and Dams.M. Rao (1965 ) Linear Statistical Inference and Its Applications. Regterschot & J. 139-143. 61. Probab. 117-133. [306] T. 45-68. 757-764. Opns. 691707. Queueing Systems 5. [289] C. 12. Prabhu (1961). Math. Probab . 46. Ramsay (1984) The asymptotic ruin problem when the healthy and sick periods form an alternating renewal process. Reinhard (1984) On a class of semi-Markov risk models obtained as classical risk models in a markovian environment. 820-827. Ann. Act. Prabhu (1965) Queues and Inventories. Ann. 23-30. 465-483. [287] F. Heidelberg. 147-159. Insurance: Mathematics and Economics 3. Appl. Astin Bulletin XIV. New York. [294] N. 4. Schock Petersen (1989) Calculation of ruin probabilities when the premium depends on the current reserve. 965-985. [296] N. Prabhu (1980) Stochastic Storage Processes. Math. [288] S. Austr.H. Ann. 390-413. 19. Res.R. Appl. Statist. Springer. Scand. Tidskr. Wiley.K. Resnick & G. J. [299] C. . Samorodnitsky (1997) Performance degradation in a single server exponential queueing model with long range dependence. Gjessing (1997c) Ruin theory with stochastic return on investments.K. Adv.G. [295] N. 537-555. Math. 23-43. [292] S. Philipson (1968) A review of the collective theory of risk. Paulsen & H. 29. Appl. Queues. 32. Berlin. de Smit (1986) The queue M/G/1 with Markovmodulated arrivals and services. Oper.378 BIBLIOGRAPHY [286] J. Skand. [303] S. [290] E. [301] G.A. Pitts. [297] R. Stat. [302] J. 28.G.U. 337-347. Adv. Wiley. [300] C. Insurance: Mathematics and Economics 16.U. 11. Pitts (1994) Nonparametric estimation of compound distributions with applications in insurance. On the ruin problem of collective risk theory. 235-243. [305] C. Wiley. 29A. Probab. 30. Prabhu & Zhu (1989) Markov-modulated queueing systems. 568-576.J. Aktuar. Pellerey (1995) On the preservation of some orderings of risks under convolution. Ramaswami (1980) The N/G/1 queue and its detailed analysis. Adv. New York. [304] B.U. Pitman (1980) Subexponential distribution functions.U.L. Rolski (1987) Approximation of periodic queues. [293] N. Insurance Risk. Appl. Griibel & P. Probab.

Ryden (1994) Parameter estimation for Markov modulated Poisson processes. Act. [318] H. Shapiro (1993) Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization via the Score Function Method. [308] S. Scand. Ross (1974) Bounds on the delay distribution in GI/G/1 queues. Th. [319] H. Wiley. Schmidli ( 1994) Diffusion approximations for a risk process with the possibility of borrowing and interest. Wiley. Stochastic Models 10.Y.M. 5. Rubinstein & B. [324] H . 131-156. Wiley. Schmidli ( 1999b) Perturbed risk processes: a review . Probab . Rudemo (1973) Point processes generated by transitions of a Markov chain. Appl. Teugels (1999) Stochastic Processes for Insurance and Finance. 7. Statist.Y. J. 48-57. 155-188. Schmidt & J. Appl. 365-388. Rossberg & G--Siegel (1974) Die Bedeutung von Kingmans Integralgleichungen bei der Approximation der stationiiren Wartezeitverteilung im Modell GI/C/1 mit and ohne Verzogerung beim Beginn einer Beschiiftigungsperiode. 93-104. [314] T.-J. Schmidli ( 1999a) On the distribution of the surplus prior and at ruin. Schmidli ( 1997a) Estimation of the Lundberg coefficient for a Markov modulated risk model . Schmidli ( 1995 ) Cramer-Lundberg approximations for ruin probabilities of risk processes perturbed by a diffusion . V. Insurance: Mathematics and Economics 22. 1997. Ryden (1996) An EM algorithm for estimation in Markov-modulated Poisson processes. J. J.L. 687-699. Astin Bulletin 29. 2001 . Stochastic Models 10 . [315] T. 121-133. Math. Schmidli ( 1996) Martingales and insurance risk. 795-829. 145-165. [323] H. Schmidli ( 1997b) An extension to the renewal theorem and an application in risk theory. [310] R. 262-286. Comp. Melamed (1998) Modern Simulation and Modelling. Act. Scand. . Rubinstein (1981) Simulation and the Monte Carlo Method. Proc. Schmidli ( 2001 ) Optimal proportional reinsurance policies in a dynamic setting . Statist. [317] L. [326] H. 5. Schmickler ( 1992 ) MEDA : mixed Erlang distributions as phase-type representations of empirical distribution functions . Data Anal. Adv. 40-68. 135-149. [325] H . Probab. Stoch. Schlegel ( 1998) Ruin probabilities in perturbed risk models. Lecture Notes of the 8th Summer School on Probability and Mathematical Statistics ( Varna). Singapore. [320] H. Stochastic Models 6. [312] R. Science Culture Technology Publishing . 5. Schmidli.BIBLIOGRAPHY 379 [307] T. [311] R. Insurance: Mathematics and Economics 16. 227-244. 417-421. Wiley.Y. Operationsforsch. H. [321] H. [309] H. 11. Appi. Seal ( 1969) The Stochastic Theory of a Risk Business. Probab. [322] H. Rolski. Rubinstein & A. 21. Wiley. Ann. 431-447 [316] S. [313] M.

[340] E. Siegmund (1976) Importance sampling in the Monte Carlo study of sequential tests. 279299. Stochastic Models 6. Math. Probab. Probab. Appl. Scand. Segerdahl ( 1942) Uber einige Risikotheoretische Fagestellungen . Appl. Shin (1987) Convolution of uniform distributions and ruin probability. [336] B. Siegmund (1975) The time until ruin in collective risk theory. Verein Schweiz Versich. Seal (1978) Survival Probabilities. Versich. Math. Seal ( 1972). Aktuar Tidsskr. Statist. Aktuar Tidsskr. 157-166. 72.S. 1987. Shwartz & A. [347] K. [343] D. 21.W. Shaked & J. 429-442. 243-249. Sigman ( 1994 ) Stationary Marked Point Processes: An Intuitive Approach. [348] K . Sengupta ( 1989) Markov processes whose steady-state distribution is matrixgeometric with an application to the GI/PH/1 queue. 1955. Skand. Weiss (1995) Large Deviations for Performance Analysis. Mitt. [335] B . Scand. Springer-Verlag. [339] A. Grenander). 61. Shtatland ( 1966) On the distribution of the maximum of a process with independent increments. 383-413. 171-178. Segerdahl (1955) When does ruin occur in the collective theory of risk? Skand. Sengupta (1990) The semi-Markov queue: theory and applications. Verein Schweiz. 75. Versich. Math. [331] G. 191-197. J.G.S. [337] M. 4. Probab . [345] D. [344] D.L. Verein Schweiz. 77-100. Almqvist & Wiksell. 11. 4. Act.L. . Act. [342] D. 701-719. Insurance: Mathematics and Economics 8. 121-139. Sigman (1992) Light traffic for workload in queues.S. 43-83. Academic Press. Siegmund (1976) The equivalence of absorbing and reflecting barrier problems for stochastically monotone Markov processes .L. 673-684. 22-36. 483-487. Shin (1989) Ruin probability by operational calculus. 11.380 BIBLIOGRAPHY [327] H. Mitt.A. pp. In Probability and statistics . Chapman & Hall. [338] E. Mitteil. Siegmund ( 1985) Sequential Analysis . Seal (1974) The numerical calculation of U(w. Appl. 72. [329] H. Seber (1984) Multivariate Observations. Stockholm. L. Adv. 914-924.W.F. J. [333] C: O. Chapman and Hall. Risk teory and the single server queue . 159-180. Probab. [341] E. [330] H. Ann. Segerdahl (1959) A survey of results in the collective theory of risk. Ann. [346) D . Th. New York. [332] C : O. t). Adv. Shantikumar (1993) Stochastic Orders and Their Applications. 1974. Seal (1972) Numerical calculcation of the probability of ruin in the Poisson/Exponential case. Siegmund (1979) Corrected diffusion approximations in certain random walk problems. [328] H. . Wiley.the Harald Cramer volume (ed. Wiley. the probability of nonruin in an interval (0. [334] C: O. t). Queueing Systems 11.

163-175. Stoyan (1983) Comparison Methods for Queues and Other Stochastic Models (D. [351] D . Daley ed. J. 1982 . [369] O. Teugels ( 1995 ) Ruin estimates under interest force . Scand.L. 114-137. Springer. [361] G. [353] E.C. Thorin (1986) Ruin probabilities when the claim amounts are gamma distributed. [363] G. Unpublished manuscript. Actuarial J. 85-94. 149-162. [357] B. [360] G.J. Scand. Appl. Probab. 449-461. J. Slud & C. John Wiley & Sons. Takhcs (1967) Combinatorial Methods in the Theory of Stochastic Processes. Insurance: Mathematics and Economics 1. Scand. 1-18. [352] D. . [366] O. Taylor (1976 ) Use of differential and integral inequalities to bound ruin and queueing probabilities . 57-76. 1976. Teugels (1997) The adjustment coefficient in ruin estimates under interest force . [368] O. [362] G. Taylor (1986) Claims reserving in Non-Life Insurance. Taylor (1979) Probability of ruin with variable premium rate. New York. Scand. Sundt & J. Teugels (1982) Estimation of ruin probabilities .J. New York.C.L. J. 21. [364] G.C. [350] W. Sundt & W. 725-741. Smith ( 1953) Distribution of queueing times. Insurance: Mathematics and Economics 16. Proc. [355] B . Verlag Versicherungswirtschaft e. [359] L. Insurance: Mathematics and Economics 19. Act. Act. Astin Bulletin 16 . Scand.L.). Soc 49. [356] B .S. [365] J. Scand. Stroinski ( 1994) Recursive method for computing finitetime ruin probabilities for phase-distributed claim sizes. Sundt & J.C.A..). Astin Bulletin 12. Adv. Taylor (1978) Representation and explicit calculation of finite-time ruin probabilities. 235254. 65-102. 7-22. Thorin (1982) Probabilities of ruin. Thorin (1974) On the asymptotic behaviour of the ruin probability when the epochs of claims form a renewal process. Thorin (1977). 27-39. Taylor (1979) Probability of ruin under inflationary conditions or under experience ratings . Act. Proceeedings of the IEEE 77. Act. [354] B. J. 1974. Wiley. Suri ( 1989) Perturbation analysis : the state of the art and research issues explained via the GI/G/1 queue. 1980 . 65-102.and large-deviations probabilities in actuarial risk theory.L. [367] O.BIBLIOGRAPHY 381 [349] E. Ruin probabilities prepared for numerical calculations. Astin Bulletin 24.V. [358] R. Act. 197-208. Stanford & K.. Hoesman ( 1989) Moderate. Straub (1988) Non-Life Insurance Mathematics. 81-99. Jewell ( 1981 ) Further results on recursive evaluation of compound distributions .C. Sundt ( 1993) An Introduction to Non-Life Insurance Mathematics (3rd ed. Cambridge Philos. Karlsruhe. J. North-Holland. 1978.

Willinger. Vazquez-Abad (1999) RPA pathwise derivatives of ruin probabilities.). [372] H. Wald (1947) Sequential Analysis.W. Reidel. Goovaerts. Astin Bulletin IX. Stationarity and Regeneration. Tacklind (1942) Sur le risque dans les jeux inequitables.382 BIBLIOGRAPHY [370] 0. Willmot (1994) Refinements and distributional generalizations of Lundberg's inequality. M. [376] A. Dordrecht Boston Lancaster. M. 1942. Goovaerts (1983) The influence of reinsurance limits on infinite time ruin probabilities. Oper. 49-63. University of Michigan.R. [379] M. [383] G. [381] W. Res. Lin (1994) Lundberg bounds on the tails of compound distributions. In: Premium Calculation in Insurance (F. Sherman & D. Willmot & X. 1983 . R. Whitt (1989) An interpolation approximation for the mean workload in a GI/G/1 queue. Skand. Tidskr.E. De Vylder & M. Taqqu. Unpublished Ph. Insurance: Mathematics and Economics 15. Waters (1983) Probability of ruin for a risk process with claims cost inflation. Springer-Verlag. [377] V. Prentice-Hall. Act. [375] N. [384] R. Thorisson (2000) Coupling. F. 5762. Wilson (1995) Self-similarity in highspeed traffic: analysis and modeling of ethernet traffic measurements. J. Probab. [380] W. Thorin & N. 148-164. thesis. [378] H.E. Wallace (1969) The solution of quasi birth and death processes arising from multiple access computer systems. Haezendonck eds. Wolff (1990) Stochastic Modeling and the Theory of Queues. Appl. 1-42. J. 743-756. Thorin & N. [382] G. De Vylder. 231-246. [373] S. Aktuar. 37. Statistical Science 10. 936-952. [374] F. D. Astin Bulletin VII. [371] 0. Wikstad (1977) Numerical evaluation of ruin probabilities for a finite period. Wiley. Scand. . Van Wouve. Insurance : Mathematics and Economics 13. Submitted. 31. Wikstad (1977) Calculation of ruin probabilities when the claim distribution is lognormal. J. Veraverbeke (1993) Asymptotic estimates for the probability of ruin in a Poisson model with diffusion. 67-85. 137-153.

201 Brownian motion 3 .135. 39.160-167.44-47. 37. 9396. 301 Kronecker product.299. 316323 Bessel function 102. 94-96.292-293 Edgeworth expansion 113.318-320 change of measure 26-30. 89. 302-303 diffusion approximation 17. 25-26. 7879. 135. 217. 122. 162164. 111-117. 318-319 Erlang distribution 7. 70-79. 57-96. 11-12. 5.150.308 Cramer-Lundberg model: see compound Poisson model cumulative process 334 dams: see storage process differential equation 16. 332-333 Volterra 192-194.287-292. 9293. 341. 170-173. 14. 248 Wiener-Hopf 144 interest rate 190.98-99. 119. 308. 15.121-129. 34-36.242.301 central limit theorem 60 .and sum 221. 323 Coxian distribution 147. 141144. 205. 14-15. 18-19.200-201.328330. 86.251-280 heavy traffic 76.182.314-316. 4851.67-79.203. 82-83 hyperexponential distribution 7.272. 17.217. 91.228229.178-184. 40. 360 excursion 155-156. 271-274.137141.203.226.100.86. 180-182.249-250 integral equation 16 Lindley 143 renewal 64.307-312 compound Poisson model 4.259-261. 3839. 218 Cox process 4. 79. 201-214. 24-25.346-349 383 . 239. 71-79. 227229.249. 138-139. 189. 245-248.Index adjustment coefficient 17. 196-201 inverse Gaussian distribution 76.185-187. 30-32. 278 gamma distribution 6-7. 97. 110113. 17.269.359 aggregate claims 103-106. 226. 117-127 corrected 121-127 duality 13-14. 361 diffusion 3. 12 Cramer-Lundberg approximation 1617. 207 heavy-tailed distribution 6. 283.281. 74-75. 80 -81.293-294. 33-34.285-292. 97-129. 117128.

108109.139-141. 229 M/M/1 101 Markov-modulated 185-187 periodic 187 martingale 24-26. 57-58. 175 light traffic 81-83 Lindley integral equation 143 process 33-34. 185-187 GI/G/1 141-144 M/D/1 66-67 equation 16.218-221. 227-228. 61-62. 145187. 261-264.336-339 Laplace transform 15. 142 likelihood ratio : see change of measure lognormal distribution 9.315 inequality 17-18.146-148.384 ladder heights 47-56. 35.261-264. non-linear 155.287-291 INDEX matrix equation . 171. 71-79.180.287. 203 Markov additive process 12. 25. 39. 295. 35.108. 98-99. 176-185 non-homogeneous 60 Pollaczeck-Khinchine formula 61-67.178-182.275-278. 44.161. 27-30. 14.148.174. 257. 41. 37. 113114. see also sensitivity analysis phase-type distribution 8. 179 NP approximation 318-320 Palm distribution 52-53. 75-76. 86 periodicity 12.339 large deviations 129.160-161. 144. 260 Lundberg conjugation 69-79 .288-290. 15. 96. 157. 134.348 terminating 215-216.234-240. 4446. 65.161164. 112113.259-261. 108 life insurance 5.269-271. 36-39. 176-185. 234 matrix-exponential distribution 240244 matrix-exponentials 14. 137139.234.201. 271-274.227-230. 304-305 random walk 33-36. 59. 39-47. 133.285-287 queue 14 . 132-133.240-244. 52- 53. 42.302. 133.123.128-129. 178 -modulation 12. 267269 Panjer's recursion 320-323 Pareto distribution 9-10.161. 245 M/G/1 13. 100. 44. 162. 203-204.215250.304 process 28-30. 306-316 Levy process 3. 138.349- 350 perturbation 172-173. 149. 134-135.238. 16.298-299. 16.297299.134-135. 69-70. 213214.336-339 . 106-108. 38. 251. 38. 269 Perron-Frobenius theory 41-42. 154. 141-144. 25. 99.152-160. 71. 32.350-361 Poisson process Markov-modulated 12 periodic 12. 39-47. 230. 80.340-350 multiplicative functional 28-30.

335-336 sensitivity analysis 86-93. 326-330 Weibull distribution 9. 253. 251.314. 186-187 virtual: see workload rational Laplace transform 8. 279-280 subexponential distribution 11. see also matrix-exponential distribution regenerative process 264 -268. 168172 storage process 13. 12. 251280 time change 4. 31. 87. 37. 131-144. 141-144. 257. 338 utility 324. 233. 331-336 equation 64. 317-318 semi-Markov 147. 251. 123. 223226. 186-187 renewal process 131. 18-19. 333-334 regular variation 10. 146.244-250.359-361 stochastic control x stochastic ordering 18. 260 reinsurance 8. 191-192. 172-173. 30-32.273-274. 307-308. 120 statistics x.336-339 workload 13.279-280 Rouche roots 158.262-263. 213. 332-333 model 12. 54-55. 160. 280. 327 . 189214. 177 time-reversion 14.186. 147. 83-86. 281-296 stable process 15. 294-296 shot-noise process 314 simulation 19. 222. 11. 60. 89. 96-93. 240. 256258. 238 saddlepoint method 115-117. 49-50. 162. 261264 reserve-dependent premiums 14. 229-234. 74-75. 152. 233-234. 107. 260 Wiener-Hopf theory 144. 244.INDEX 385 waiting time 141. 174. 292-294.154-157.

Advanced Series on Statistical Science & Applied Probability, Vol. 2

Ruin Probabilities

The book is a comprehensive treatment of classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramer-Lundberg approximation, exact solutions, other approximations (e.g. for heavy-tailed claim size distributions), finite horizon ruin probabilities, and extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation or periodicity. Special features of the book are the emphasis on change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas like queueing theory.

"This book is a must for anybody working in applied probability. It is a comprehensive treatment of the known results on ruin probabilities." Short Book Reviews

ISBN 981-02-2293-9
www.worldscientific.com
