
The reading material for Exam C/4 includes a variety of textbooks.

Each text has a set of probability distributions that are used in its readings. For those distributions used in more than one text, the choices of parameterization may not be the same in all of the books. This may be of educational value while you study, but could add a layer of uncertainty in the examination. For this latter reason, we have adopted one set of parameterizations to be used in examinations, based on Appendices A and B of Loss Models: From Data to Decisions by Klugman, Panjer and Willmot. A slightly revised version of these appendices is included in this note. A copy of this note will also be distributed to each candidate at the examination.

Each text also has its own system of dedicated notation and terminology; sometimes these conflict. If alternative meanings could apply in an examination question, the symbols will be defined.

For Exam C/4, in addition to the abridged table from Loss Models, sets of values from the standard normal and chi-square distributions will be available for use in examinations. These are also included in this note. When using the normal distribution, choose the nearest z-value to find the probability; if the probability is given, choose the z-value corresponding to the nearest tabulated probability. No interpolation should be used. Example: if the given z-value is 0.759 and you need to find Pr(Z < 0.759) from the normal distribution table, then choose the probability for z = 0.76: Pr(Z < 0.76) = 0.7764. When using the normal approximation to a discrete distribution, use the continuity correction.
The density function for the standard normal distribution is $\phi(x) = \dfrac{1}{\sqrt{2\pi}}\,e^{-x^2/2}$.
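As a quick sketch of these rules (plain Python; the helper `Phi` computes the cdf exactly via the built-in error function, so it may differ from the printed table in the fourth decimal):

```python
import math

def Phi(z):
    """Standard normal cdf, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Table-lookup rule: round the given z-value to the nearest tabulated value,
# with no interpolation.
z = 0.759
print(round(Phi(round(z, 2)), 4))  # Pr(Z < 0.76) → 0.7764

# Continuity correction: approximate Pr(N <= 20) for a discrete N with
# mean 16 and standard deviation 4 by Pr(Z < (20 + 0.5 - 16)/4),
# not Pr(Z < (20 - 16)/4).
mu, sigma = 16.0, 4.0
print(Phi((20 + 0.5 - mu) / sigma))
```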

Excerpts from the Appendices to Loss Models: From Data to Decisions, 3rd edition

August 27, 2009

Appendix A

An Inventory of Continuous Distributions


A.1 Introduction
The incomplete gamma function is given by
$$\Gamma(\alpha; x) = \frac{1}{\Gamma(\alpha)} \int_0^x t^{\alpha-1} e^{-t}\,dt, \qquad \alpha > 0,\ x > 0,$$
with
$$\Gamma(\alpha) = \int_0^\infty t^{\alpha-1} e^{-t}\,dt, \qquad \alpha > 0.$$
Also, define
$$G(\alpha; x) = \int_x^\infty t^{\alpha-1} e^{-t}\,dt, \qquad x > 0.$$
At times we will need this integral for nonpositive values of $\alpha$. Integration by parts produces the relationship
$$G(\alpha; x) = -\frac{x^{\alpha} e^{-x}}{\alpha} + \frac{1}{\alpha}\,G(\alpha+1; x).$$
This can be repeated until the first argument of $G$ is $\alpha + k$, a positive number. Then it can be evaluated from $G(\alpha+k; x) = \Gamma(\alpha+k)[1 - \Gamma(\alpha+k; x)]$. The incomplete beta function is given by
$$\beta(a, b; x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \int_0^x t^{a-1}(1-t)^{b-1}\,dt, \qquad a > 0,\ b > 0,\ 0 < x < 1.$$
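A numerical sketch of these definitions (plain Python; the series-based helper `reg_inc_gamma` is our own, not part of the text) shows how the integration-by-parts recursion extends $G$ to nonpositive first arguments:

```python
import math

def reg_inc_gamma(alpha, x, terms=200):
    """Regularized incomplete gamma Gamma(alpha; x) for alpha > 0, via the
    series gamma(alpha, x) = x^alpha e^{-x} sum_n x^n / [alpha(alpha+1)...(alpha+n)]."""
    total, term = 0.0, 1.0 / alpha
    for n in range(1, terms):
        total += term
        term *= x / (alpha + n)
    total += term
    return total * x ** alpha * math.exp(-x) / math.gamma(alpha)

def G(alpha, x):
    """G(alpha; x) = integral from x to infinity of t^{alpha-1} e^{-t} dt.
    For alpha <= 0, apply G(alpha; x) = [-x^alpha e^{-x} + G(alpha+1; x)]/alpha
    until the first argument is positive."""
    if alpha > 0:
        return math.gamma(alpha) * (1.0 - reg_inc_gamma(alpha, x))
    return (-x ** alpha * math.exp(-x) + G(alpha + 1.0, x)) / alpha

print(G(1.0, 2.0), math.exp(-2.0))  # G(1; x) = e^{-x}, so these agree
```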


A.2 Transformed beta family

A.2.2 Three-parameter distributions

A.2.2.1 Generalized Pareto (beta of the second kind): $\alpha$, $\theta$, $\tau$

$$f(x) = \frac{\Gamma(\alpha+\tau)}{\Gamma(\alpha)\Gamma(\tau)}\,\frac{\theta^{\alpha} x^{\tau-1}}{(x+\theta)^{\alpha+\tau}}, \qquad F(x) = \beta(\tau, \alpha; u), \quad u = \frac{x}{x+\theta}$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(\tau+k)\,\Gamma(\alpha-k)}{\Gamma(\alpha)\,\Gamma(\tau)}, \qquad -\tau < k < \alpha$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\tau(\tau+1)\cdots(\tau+k-1)}{(\alpha-1)\cdots(\alpha-k)} \qquad \text{if $k$ is an integer}$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\theta^k\,\Gamma(\tau+k)\,\Gamma(\alpha-k)}{\Gamma(\alpha)\,\Gamma(\tau)}\,\beta(\tau+k, \alpha-k; u) + x^k[1 - F(x)], \qquad k > -\tau$$

$$\mathrm{mode} = \theta\,\frac{\tau-1}{\alpha+1}, \quad \tau > 1, \text{ else } 0$$

A.2.2.2 Burr (Burr Type XII, Singh-Maddala): $\alpha$, $\gamma$, $\theta$

$$f(x) = \frac{\alpha\gamma (x/\theta)^{\gamma}}{x\,[1+(x/\theta)^{\gamma}]^{\alpha+1}}, \qquad F(x) = 1 - u^{\alpha}, \quad u = \frac{1}{1+(x/\theta)^{\gamma}}$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(1+k/\gamma)\,\Gamma(\alpha-k/\gamma)}{\Gamma(\alpha)}, \qquad -\gamma < k < \alpha\gamma$$

$$\mathrm{VaR}_p(X) = \theta\left[(1-p)^{-1/\alpha} - 1\right]^{1/\gamma}$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\theta^k\,\Gamma(1+k/\gamma)\,\Gamma(\alpha-k/\gamma)}{\Gamma(\alpha)}\,\beta(1+k/\gamma, \alpha-k/\gamma; 1-u) + x^k u^{\alpha}, \qquad k > -\gamma$$

$$\mathrm{mode} = \theta\left(\frac{\gamma-1}{\alpha\gamma+1}\right)^{1/\gamma}, \quad \gamma > 1, \text{ else } 0$$
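As a sanity check on the Burr quantile formula, the sketch below (plain Python; the parameter values are illustrative) confirms that $\mathrm{VaR}_p$ inverts the distribution function:

```python
def burr_cdf(x, alpha, gamma, theta):
    """Burr F(x) = 1 - u^alpha with u = 1/(1 + (x/theta)^gamma)."""
    u = 1.0 / (1.0 + (x / theta) ** gamma)
    return 1.0 - u ** alpha

def burr_var(p, alpha, gamma, theta):
    """Closed-form quantile: theta * [(1-p)^(-1/alpha) - 1]^(1/gamma)."""
    return theta * ((1.0 - p) ** (-1.0 / alpha) - 1.0) ** (1.0 / gamma)

alpha, gamma, theta = 2.0, 1.5, 1000.0
x95 = burr_var(0.95, alpha, gamma, theta)
print(burr_cdf(x95, alpha, gamma, theta))  # → 0.95 (up to rounding)
```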

A.2.2.3 Inverse Burr (Dagum): $\tau$, $\gamma$, $\theta$

$$f(x) = \frac{\tau\gamma (x/\theta)^{\gamma\tau}}{x\,[1+(x/\theta)^{\gamma}]^{\tau+1}}, \qquad F(x) = u^{\tau}, \quad u = \frac{(x/\theta)^{\gamma}}{1+(x/\theta)^{\gamma}}$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(\tau+k/\gamma)\,\Gamma(1-k/\gamma)}{\Gamma(\tau)}, \qquad -\tau\gamma < k < \gamma$$

$$\mathrm{VaR}_p(X) = \theta\left(p^{-1/\tau} - 1\right)^{-1/\gamma}$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\theta^k\,\Gamma(\tau+k/\gamma)\,\Gamma(1-k/\gamma)}{\Gamma(\tau)}\,\beta(\tau+k/\gamma, 1-k/\gamma; u) + x^k[1 - u^{\tau}], \qquad k > -\tau\gamma$$

$$\mathrm{mode} = \theta\left(\frac{\tau\gamma-1}{\gamma+1}\right)^{1/\gamma}, \quad \tau\gamma > 1, \text{ else } 0$$


A.2.3 Two-parameter distributions

A.2.3.1 Pareto (Pareto Type II, Lomax): $\alpha$, $\theta$

$$f(x) = \frac{\alpha\theta^{\alpha}}{(x+\theta)^{\alpha+1}}, \qquad F(x) = 1 - \left(\frac{\theta}{x+\theta}\right)^{\alpha}$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(k+1)\,\Gamma(\alpha-k)}{\Gamma(\alpha)}, \qquad -1 < k < \alpha$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,k!}{(\alpha-1)\cdots(\alpha-k)} \qquad \text{if $k$ is an integer}$$

$$\mathrm{VaR}_p(X) = \theta\left[(1-p)^{-1/\alpha} - 1\right]$$

$$\mathrm{TVaR}_p(X) = \mathrm{VaR}_p(X) + \frac{\theta(1-p)^{-1/\alpha}}{\alpha-1}, \qquad \alpha > 1$$

$$\mathrm{E}[X \wedge x] = \frac{\theta}{\alpha-1}\left[1 - \left(\frac{\theta}{x+\theta}\right)^{\alpha-1}\right], \qquad \alpha \neq 1$$

$$\mathrm{E}[X \wedge x] = -\theta \ln\left(\frac{\theta}{x+\theta}\right), \qquad \alpha = 1$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\theta^k\,\Gamma(k+1)\,\Gamma(\alpha-k)}{\Gamma(\alpha)}\,\beta[k+1, \alpha-k; x/(x+\theta)] + x^k\left(\frac{\theta}{x+\theta}\right)^{\alpha}, \qquad \text{all } k$$

$$\mathrm{mode} = 0$$
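The Pareto limited expected value can be verified against a direct numerical integration, since $\mathrm{E}[X \wedge d] = \int_0^d [1-F(x)]\,dx$ (plain Python; $\alpha = 3$, $\theta = 1000$ are illustrative values):

```python
def pareto_sf(x, alpha, theta):
    """Survival function 1 - F(x) = (theta/(x+theta))^alpha."""
    return (theta / (x + theta)) ** alpha

def pareto_lev(d, alpha, theta):
    """E[X ∧ d] = theta/(alpha-1) * [1 - (theta/(d+theta))^(alpha-1)], alpha != 1."""
    return theta / (alpha - 1.0) * (1.0 - (theta / (d + theta)) ** (alpha - 1.0))

alpha, theta, d = 3.0, 1000.0, 500.0
closed = pareto_lev(d, alpha, theta)
# Midpoint-rule integration of the survival function on [0, d].
h = d / 100000
numeric = sum(h * pareto_sf(h * (i + 0.5), alpha, theta) for i in range(100000))
print(closed, numeric)  # both ≈ 277.78
```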

A.2.3.2 Inverse Pareto: $\tau$, $\theta$

$$f(x) = \frac{\tau\theta x^{\tau-1}}{(x+\theta)^{\tau+1}}, \qquad F(x) = \left(\frac{x}{x+\theta}\right)^{\tau}$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(\tau+k)\,\Gamma(1-k)}{\Gamma(\tau)}, \qquad -\tau < k < 1$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,(-k)!}{(\tau-1)\cdots(\tau+k)} \qquad \text{if $k$ is a negative integer}$$

$$\mathrm{VaR}_p(X) = \theta\left[p^{-1/\tau} - 1\right]^{-1}$$

$$\mathrm{E}[(X \wedge x)^k] = \theta^k \tau \int_0^{x/(x+\theta)} y^{\tau+k-1}(1-y)^{-k}\,dy + x^k\left[1 - \left(\frac{x}{x+\theta}\right)^{\tau}\right], \qquad k > -\tau$$

$$\mathrm{mode} = \theta\,\frac{\tau-1}{2}, \quad \tau > 1, \text{ else } 0$$

A.2.3.3 Loglogistic (Fisk): $\gamma$, $\theta$

$$f(x) = \frac{\gamma (x/\theta)^{\gamma}}{x\,[1+(x/\theta)^{\gamma}]^2}, \qquad F(x) = u, \quad u = \frac{(x/\theta)^{\gamma}}{1+(x/\theta)^{\gamma}}$$

$$\mathrm{E}[X^k] = \theta^k\,\Gamma(1+k/\gamma)\,\Gamma(1-k/\gamma), \qquad -\gamma < k < \gamma$$

$$\mathrm{VaR}_p(X) = \theta\left(p^{-1} - 1\right)^{-1/\gamma}$$

$$\mathrm{E}[(X \wedge x)^k] = \theta^k\,\Gamma(1+k/\gamma)\,\Gamma(1-k/\gamma)\,\beta(1+k/\gamma, 1-k/\gamma; u) + x^k(1-u), \qquad k > -\gamma$$

$$\mathrm{mode} = \theta\left(\frac{\gamma-1}{\gamma+1}\right)^{1/\gamma}, \quad \gamma > 1, \text{ else } 0$$

A.2.3.4 Paralogistic: $\alpha$, $\theta$

This is a Burr distribution with $\gamma = \alpha$.

$$f(x) = \frac{\alpha^2 (x/\theta)^{\alpha}}{x\,[1+(x/\theta)^{\alpha}]^{\alpha+1}}, \qquad F(x) = 1 - u^{\alpha}, \quad u = \frac{1}{1+(x/\theta)^{\alpha}}$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(1+k/\alpha)\,\Gamma(\alpha-k/\alpha)}{\Gamma(\alpha)}, \qquad -\alpha < k < \alpha^2$$

$$\mathrm{VaR}_p(X) = \theta\left[(1-p)^{-1/\alpha} - 1\right]^{1/\alpha}$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\theta^k\,\Gamma(1+k/\alpha)\,\Gamma(\alpha-k/\alpha)}{\Gamma(\alpha)}\,\beta(1+k/\alpha, \alpha-k/\alpha; 1-u) + x^k u^{\alpha}, \qquad k > -\alpha$$

$$\mathrm{mode} = \theta\left(\frac{\alpha-1}{\alpha^2+1}\right)^{1/\alpha}, \quad \alpha > 1, \text{ else } 0$$

A.2.3.5 Inverse paralogistic: $\tau$, $\theta$

This is an inverse Burr distribution with $\gamma = \tau$.

$$f(x) = \frac{\tau^2 (x/\theta)^{\tau^2}}{x\,[1+(x/\theta)^{\tau}]^{\tau+1}}, \qquad F(x) = u^{\tau}, \quad u = \frac{(x/\theta)^{\tau}}{1+(x/\theta)^{\tau}}$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(\tau+k/\tau)\,\Gamma(1-k/\tau)}{\Gamma(\tau)}, \qquad -\tau^2 < k < \tau$$

$$\mathrm{VaR}_p(X) = \theta\left(p^{-1/\tau} - 1\right)^{-1/\tau}$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\theta^k\,\Gamma(\tau+k/\tau)\,\Gamma(1-k/\tau)}{\Gamma(\tau)}\,\beta(\tau+k/\tau, 1-k/\tau; u) + x^k[1 - u^{\tau}], \qquad k > -\tau^2$$

$$\mathrm{mode} = \theta\,(\tau-1)^{1/\tau}, \quad \tau > 1, \text{ else } 0$$

A.3 Transformed gamma family

A.3.2 Two-parameter distributions

A.3.2.1 Gamma: $\alpha$, $\theta$

$$f(x) = \frac{(x/\theta)^{\alpha} e^{-x/\theta}}{x\,\Gamma(\alpha)}, \qquad F(x) = \Gamma(\alpha; x/\theta)$$

$$M(t) = (1-\theta t)^{-\alpha}, \quad t < 1/\theta$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(\alpha+k)}{\Gamma(\alpha)}, \quad k > -\alpha, \qquad \mathrm{E}[X^k] = \theta^k\,\alpha(\alpha+1)\cdots(\alpha+k-1) \quad \text{if $k$ is an integer}$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\theta^k\,\Gamma(\alpha+k)}{\Gamma(\alpha)}\,\Gamma(\alpha+k; x/\theta) + x^k[1 - \Gamma(\alpha; x/\theta)], \qquad k > -\alpha$$

$$= \alpha(\alpha+1)\cdots(\alpha+k-1)\,\theta^k\,\Gamma(\alpha+k; x/\theta) + x^k[1 - \Gamma(\alpha; x/\theta)], \qquad k \text{ an integer}$$

$$\mathrm{mode} = \theta(\alpha-1), \quad \alpha > 1, \text{ else } 0$$
A.3.2.2 Inverse gamma (Vinci): $\alpha$, $\theta$

$$f(x) = \frac{(\theta/x)^{\alpha} e^{-\theta/x}}{x\,\Gamma(\alpha)}, \qquad F(x) = 1 - \Gamma(\alpha; \theta/x)$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(\alpha-k)}{\Gamma(\alpha)}, \quad k < \alpha, \qquad \mathrm{E}[X^k] = \frac{\theta^k}{(\alpha-1)\cdots(\alpha-k)} \quad \text{if $k$ is an integer}$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\theta^k\,\Gamma(\alpha-k)}{\Gamma(\alpha)}\,[1 - \Gamma(\alpha-k; \theta/x)] + x^k\,\Gamma(\alpha; \theta/x)$$

$$= \frac{\theta^k}{\Gamma(\alpha)}\,G(\alpha-k; \theta/x) + x^k\,\Gamma(\alpha; \theta/x), \qquad \text{all } k$$

$$\mathrm{mode} = \theta/(\alpha+1)$$

A.3.2.3 Weibull: $\theta$, $\tau$

$$f(x) = \frac{\tau (x/\theta)^{\tau} e^{-(x/\theta)^{\tau}}}{x}, \qquad F(x) = 1 - e^{-(x/\theta)^{\tau}}$$

$$\mathrm{E}[X^k] = \theta^k\,\Gamma(1+k/\tau), \qquad k > -\tau$$

$$\mathrm{VaR}_p(X) = \theta\left[-\ln(1-p)\right]^{1/\tau}$$

$$\mathrm{E}[(X \wedge x)^k] = \theta^k\,\Gamma(1+k/\tau)\,\Gamma[1+k/\tau; (x/\theta)^{\tau}] + x^k e^{-(x/\theta)^{\tau}}, \qquad k > -\tau$$

$$\mathrm{mode} = \theta\left(\frac{\tau-1}{\tau}\right)^{1/\tau}, \quad \tau > 1, \text{ else } 0$$

A.3.2.4 Inverse Weibull (log Gompertz): $\theta$, $\tau$

$$f(x) = \frac{\tau (\theta/x)^{\tau} e^{-(\theta/x)^{\tau}}}{x}, \qquad F(x) = e^{-(\theta/x)^{\tau}}$$

$$\mathrm{E}[X^k] = \theta^k\,\Gamma(1-k/\tau), \qquad k < \tau$$

$$\mathrm{VaR}_p(X) = \theta\left(-\ln p\right)^{-1/\tau}$$

$$\mathrm{E}[(X \wedge x)^k] = \theta^k\,\Gamma(1-k/\tau)\left\{1 - \Gamma[1-k/\tau; (\theta/x)^{\tau}]\right\} + x^k\left[1 - e^{-(\theta/x)^{\tau}}\right]$$

$$= \theta^k\,G[1-k/\tau; (\theta/x)^{\tau}] + x^k\left[1 - e^{-(\theta/x)^{\tau}}\right], \qquad \text{all } k$$

$$\mathrm{mode} = \theta\left(\frac{\tau}{\tau+1}\right)^{1/\tau}$$


A.3.3 One-parameter distributions

A.3.3.1 Exponential: $\theta$

$$f(x) = \frac{e^{-x/\theta}}{\theta}, \qquad F(x) = 1 - e^{-x/\theta}$$

$$M(t) = (1-\theta t)^{-1}, \quad t < 1/\theta$$

$$\mathrm{E}[X^k] = \theta^k\,\Gamma(k+1), \quad k > -1, \qquad \mathrm{E}[X^k] = \theta^k\,k! \quad \text{if $k$ is an integer}$$

$$\mathrm{VaR}_p(X) = -\theta \ln(1-p), \qquad \mathrm{TVaR}_p(X) = -\theta \ln(1-p) + \theta$$

$$\mathrm{E}[X \wedge x] = \theta(1 - e^{-x/\theta})$$

$$\mathrm{E}[(X \wedge x)^k] = \theta^k\,\Gamma(k+1)\,\Gamma(k+1; x/\theta) + x^k e^{-x/\theta}, \quad k > -1$$

$$= \theta^k\,k!\,\Gamma(k+1; x/\theta) + x^k e^{-x/\theta}, \qquad k \text{ an integer}$$

$$\mathrm{mode} = 0$$

A.3.3.2 Inverse exponential: $\theta$

$$f(x) = \frac{\theta e^{-\theta/x}}{x^2}, \qquad F(x) = e^{-\theta/x}$$

$$\mathrm{E}[X^k] = \theta^k\,\Gamma(1-k), \qquad k < 1$$

$$\mathrm{VaR}_p(X) = \theta\left(-\ln p\right)^{-1}$$

$$\mathrm{E}[(X \wedge x)^k] = \theta^k\,G(1-k; \theta/x) + x^k(1 - e^{-\theta/x}), \qquad \text{all } k$$

$$\mathrm{mode} = \theta/2$$
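For the exponential distribution, the risk-measure formulas are simple enough to check in a couple of lines: TVaR exceeds VaR by exactly $\theta$, which reflects the memoryless property (the expected excess over any threshold is the original mean). A plain-Python sketch with an illustrative $\theta$:

```python
import math

def exp_var(p, theta):
    """VaR_p = -theta * ln(1 - p)."""
    return -theta * math.log(1.0 - p)

def exp_tvar(p, theta):
    """TVaR_p = VaR_p + theta, by the memoryless property."""
    return exp_var(p, theta) + theta

theta = 100.0
print(exp_var(0.99, theta), exp_tvar(0.99, theta))
```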

A.5 Other distributions

A.5.1.1 Lognormal: $\mu$, $\sigma$ ($\mu$ can be negative)

$$f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\,\exp(-z^2/2) = \frac{\phi(z)}{\sigma x}, \qquad z = \frac{\ln x - \mu}{\sigma}, \qquad F(x) = \Phi(z)$$

$$\mathrm{E}[X^k] = \exp(k\mu + k^2\sigma^2/2)$$

$$\mathrm{E}[(X \wedge x)^k] = \exp(k\mu + k^2\sigma^2/2)\,\Phi\!\left(\frac{\ln x - \mu - k\sigma^2}{\sigma}\right) + x^k[1 - F(x)]$$

$$\mathrm{mode} = \exp(\mu - \sigma^2)$$
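Using $\Phi$ via the error function, the lognormal limited expected value (the $k = 1$ case of the formula above) can be sketched as follows (plain Python; the $\mu$, $\sigma$, and cap values are illustrative):

```python
import math

def Phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognormal_lev(x, mu, sigma):
    """E[X ∧ x] = e^{mu + sigma^2/2} Phi((ln x - mu - sigma^2)/sigma)
                 + x [1 - Phi((ln x - mu)/sigma)]."""
    z = (math.log(x) - mu) / sigma
    return math.exp(mu + 0.5 * sigma ** 2) * Phi(z - sigma) + x * (1.0 - Phi(z))

mu, sigma = 7.0, 1.0
print(lognormal_lev(1e9, mu, sigma), math.exp(mu + 0.5 * sigma ** 2))
```

For a very large cap the limit is never reached, so $\mathrm{E}[X \wedge x]$ approaches the unlimited mean $e^{\mu + \sigma^2/2}$, which is what the two printed values show.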

A.5.1.2 Inverse Gaussian: $\mu$, $\theta$

$$f(x) = \left(\frac{\theta}{2\pi x^3}\right)^{1/2} \exp\!\left(-\frac{\theta z^2}{2x}\right), \qquad z = \frac{x-\mu}{\mu}$$

$$F(x) = \Phi\!\left[z\left(\frac{\theta}{x}\right)^{1/2}\right] + \exp\!\left(\frac{2\theta}{\mu}\right)\Phi\!\left[-y\left(\frac{\theta}{x}\right)^{1/2}\right], \qquad y = \frac{x+\mu}{\mu}$$

$$M(t) = \exp\!\left[\frac{\theta}{\mu}\left(1 - \sqrt{1 - \frac{2\mu^2 t}{\theta}}\right)\right], \quad t < \frac{\theta}{2\mu^2}, \qquad \mathrm{E}[X] = \mu, \qquad \mathrm{Var}[X] = \mu^3/\theta$$

$$\mathrm{E}[X \wedge x] = x - \mu z\,\Phi\!\left[z\left(\frac{\theta}{x}\right)^{1/2}\right] - \mu y \exp\!\left(\frac{2\theta}{\mu}\right)\Phi\!\left[-y\left(\frac{\theta}{x}\right)^{1/2}\right]$$

A.5.1.3 log-t: $r$, $\mu$, $\sigma$ ($\mu$ can be negative)

Let $Y$ have a $t$ distribution with $r$ degrees of freedom. Then $X = \exp(\sigma Y + \mu)$ has the log-t distribution. Positive moments do not exist for this distribution. Just as the $t$ distribution has a heavier tail than the normal distribution, this distribution has a heavier tail than the lognormal distribution.

$$f(x) = \frac{\Gamma\!\left(\frac{r+1}{2}\right)}{x\sigma\sqrt{\pi r}\;\Gamma\!\left(\frac{r}{2}\right)\left[1 + \dfrac{1}{r}\left(\dfrac{\ln x - \mu}{\sigma}\right)^2\right]^{(r+1)/2}}$$

$$F(x) = F_r\!\left(\frac{\ln x - \mu}{\sigma}\right) \quad \text{with $F_r(t)$ the cdf of a $t$ distribution with $r$ d.f.,}$$

$$F(x) = \frac{1}{2}\,\beta\!\left(\frac{r}{2}, \frac{1}{2}; \frac{r}{r + \left(\frac{\ln x - \mu}{\sigma}\right)^2}\right), \qquad 0 < x \le e^{\mu},$$

$$F(x) = 1 - \frac{1}{2}\,\beta\!\left(\frac{r}{2}, \frac{1}{2}; \frac{r}{r + \left(\frac{\ln x - \mu}{\sigma}\right)^2}\right), \qquad x \ge e^{\mu}.$$

A.5.1.4 Single-parameter Pareto: $\alpha$, $\theta$

$$f(x) = \frac{\alpha\theta^{\alpha}}{x^{\alpha+1}}, \quad x > \theta, \qquad F(x) = 1 - (\theta/x)^{\alpha}, \quad x > \theta$$

$$\mathrm{VaR}_p(X) = \theta(1-p)^{-1/\alpha}, \qquad \mathrm{TVaR}_p(X) = \frac{\alpha\theta(1-p)^{-1/\alpha}}{\alpha-1}, \quad \alpha > 1$$

$$\mathrm{E}[X^k] = \frac{\alpha\theta^k}{\alpha-k}, \qquad k < \alpha$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\alpha\theta^k}{\alpha-k} - \frac{k\theta^{\alpha}}{(\alpha-k)x^{\alpha-k}}, \qquad x \ge \theta$$

$$\mathrm{mode} = \theta$$

Note: Although there appear to be two parameters, only $\alpha$ is a true parameter. The value of $\theta$ must be set in advance.


A.6 Distributions with finite support

For these two distributions, the scale parameter $\theta$ is assumed known.

A.6.1.1 Generalized beta: $a$, $b$, $\theta$, $\tau$

$$f(x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,u^a(1-u)^{b-1}\,\frac{\tau}{x}, \qquad 0 < x < \theta, \quad u = (x/\theta)^{\tau}$$

$$F(x) = \beta(a, b; u)$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(a+b)\,\Gamma(a+k/\tau)}{\Gamma(a)\,\Gamma(a+b+k/\tau)}, \qquad k > -a\tau$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\theta^k\,\Gamma(a+b)\,\Gamma(a+k/\tau)}{\Gamma(a)\,\Gamma(a+b+k/\tau)}\,\beta(a+k/\tau, b; u) + x^k[1 - \beta(a, b; u)]$$

A.6.1.2 beta: $a$, $b$, $\theta$

$$f(x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,u^a(1-u)^{b-1}\,\frac{1}{x}, \qquad 0 < x < \theta, \quad u = x/\theta$$

$$F(x) = \beta(a, b; u)$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,\Gamma(a+b)\,\Gamma(a+k)}{\Gamma(a)\,\Gamma(a+b+k)}, \qquad k > -a$$

$$\mathrm{E}[X^k] = \frac{\theta^k\,a(a+1)\cdots(a+k-1)}{(a+b)(a+b+1)\cdots(a+b+k-1)} \qquad \text{if $k$ is an integer}$$

$$\mathrm{E}[(X \wedge x)^k] = \frac{\theta^k\,a(a+1)\cdots(a+k-1)}{(a+b)(a+b+1)\cdots(a+b+k-1)}\,\beta(a+k, b; u) + x^k[1 - \beta(a, b; u)]$$

Appendix B

An Inventory of Discrete Distributions


B.1 Introduction

The 16 models fall into three classes. The divisions are based on the algorithm by which the probabilities are computed. For some of the more familiar distributions these formulas will look different from the ones you may have learned, but they produce the same probabilities. After each name, the parameters are given. All parameters are positive unless otherwise indicated. In all cases, $p_k$ is the probability of observing $k$ losses.

For finding moments, the most convenient form is to give the factorial moments. The $j$th factorial moment is $\mu_{(j)} = \mathrm{E}[N(N-1)\cdots(N-j+1)]$. We have $\mathrm{E}[N] = \mu_{(1)}$ and $\mathrm{Var}(N) = \mu_{(2)} + \mu_{(1)} - \mu_{(1)}^2$.

The estimators which are presented are not intended to be useful estimators but rather for providing starting values for maximizing the likelihood (or other) function. For determining starting values, the following quantities are used [where $n_k$ is the observed frequency at $k$ (if, for the last entry, $n_k$ represents the number of observations at $k$ or more, assume it was at exactly $k$) and $n$ is the sample size]:

$$\hat{\mu} = \frac{1}{n}\sum_{k=1}^{\infty} k\,n_k, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{k=1}^{\infty} k^2\,n_k - \hat{\mu}^2.$$

When the method of moments is used to determine the starting value, a circumflex (e.g., $\hat{\lambda}$) is used. For any other method, a tilde (e.g., $\tilde{\lambda}$) is used. When the starting value formulas do not provide admissible parameter values, a truly crude guess is to set the product of all $\lambda$ and $\beta$ parameters equal to the sample mean and set all other parameters equal to 1. If there are two $\lambda$ and/or $\beta$ parameters, an easy choice is to set each to the square root of the sample mean. The last item presented is the probability generating function, $P(z) = \mathrm{E}[z^N]$.
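A small sketch of the starting-value quantities $\hat{\mu}$ and $\hat{\sigma}^2$ (plain Python; the frequency data are made up for illustration):

```python
def sample_moments(freqs):
    """freqs: dict mapping count k to observed frequency n_k.
    Returns (mu_hat, sigma2_hat) as defined above."""
    n = sum(freqs.values())
    mu = sum(k * nk for k, nk in freqs.items()) / n
    sigma2 = sum(k * k * nk for k, nk in freqs.items()) / n - mu * mu
    return mu, sigma2

# Hypothetical data: 40 policies with 0 claims, 30 with 1, 20 with 2, 10 with 3.
mu_hat, sigma2_hat = sample_moments({0: 40, 1: 30, 2: 20, 3: 10})
print(mu_hat, sigma2_hat)  # → 1.0 1.0
```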

B.2 The $(a, b, 0)$ class

B.2.1.1 Poisson: $\lambda$

$$p_0 = e^{-\lambda}, \qquad a = 0, \qquad b = \lambda$$

$$p_k = \frac{e^{-\lambda}\lambda^k}{k!}$$

$$\mathrm{E}[N] = \lambda, \qquad \mathrm{Var}[N] = \lambda$$

$$P(z) = e^{\lambda(z-1)}$$

B.2.1.2 Geometric: $\beta$

$$p_0 = \frac{1}{1+\beta}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = 0$$

$$p_k = \frac{\beta^k}{(1+\beta)^{k+1}}$$

$$\mathrm{E}[N] = \beta, \qquad \mathrm{Var}[N] = \beta(1+\beta)$$

$$P(z) = [1 - \beta(z-1)]^{-1}$$

This is a special case of the negative binomial with $r = 1$.

B.2.1.3 Binomial: $q$, $m$ ($0 < q < 1$, $m$ an integer)

$$p_0 = (1-q)^m, \qquad a = \frac{-q}{1-q}, \qquad b = \frac{(m+1)q}{1-q}$$

$$p_k = \binom{m}{k} q^k (1-q)^{m-k}, \qquad k = 0, 1, \ldots, m$$

$$\mathrm{E}[N] = mq, \qquad \mathrm{Var}[N] = mq(1-q)$$

$$P(z) = [1 + q(z-1)]^m$$

B.2.1.4 Negative binomial: $\beta$, $r$

$$p_0 = (1+\beta)^{-r}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = \frac{(r-1)\beta}{1+\beta}$$

$$p_k = \frac{r(r+1)\cdots(r+k-1)\,\beta^k}{k!\,(1+\beta)^{r+k}}$$

$$\mathrm{E}[N] = r\beta, \qquad \mathrm{Var}[N] = r\beta(1+\beta)$$

$$P(z) = [1 - \beta(z-1)]^{-r}$$
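All four members of this class satisfy the recursion $p_k = (a + b/k)\,p_{k-1}$, which is what makes the $(a, b, 0)$ label meaningful. The sketch below (plain Python) generates Poisson probabilities both from the recursion and from the closed form and confirms they agree:

```python
import math

def abzero_probs(p0, a, b, nmax):
    """Generate p_0..p_nmax via the (a, b, 0) recursion p_k = (a + b/k) p_{k-1}."""
    probs = [p0]
    for k in range(1, nmax + 1):
        probs.append((a + b / k) * probs[-1])
    return probs

lam = 2.0
# Poisson has a = 0, b = lambda.
rec = abzero_probs(math.exp(-lam), a=0.0, b=lam, nmax=5)
direct = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(6)]
print(rec)
print(direct)
```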

B.3 The $(a, b, 1)$ class

To distinguish this class from the $(a, b, 0)$ class, the probabilities are denoted $\Pr(N = k) = p_k^M$ or $\Pr(N = k) = p_k^T$ depending on which subclass is being represented. For this class, $p_0^M$ is arbitrary (that is, it is a parameter), and then $p_1^M$ or $p_1^T$ is a specified function of the parameters $a$ and $b$. Subsequent probabilities are obtained recursively as in the $(a, b, 0)$ class: $p_k^M = (a + b/k)\,p_{k-1}^M$, $k = 2, 3, \ldots$, with the same recursion for $p_k^T$. There are two subclasses of this class. When discussing their members, we often refer to the corresponding member of the $(a, b, 0)$ class. This refers to the member of that class with the same values for $a$ and $b$. The notation $p_k$ will continue to be used for probabilities for the corresponding $(a, b, 0)$ distribution.

B.3.1 The zero-truncated subclass

The members of this class have $p_0^T = 0$, and therefore it need not be estimated. These distributions should only be used when a value of zero is impossible. The first factorial moment is $\mu_{(1)} = (a+b)/[(1-a)(1-p_0)]$, where $p_0$ is the value for the corresponding member of the $(a, b, 0)$ class. For the logarithmic distribution (which has no corresponding member), $\mu_{(1)} = \beta/\ln(1+\beta)$. Higher factorial moments are obtained recursively with the same formula as with the $(a, b, 0)$ class. The variance is $(a+b)[1 - (a+b+1)p_0]/[(1-a)(1-p_0)]^2$. For those members of the subclass which have corresponding $(a, b, 0)$ distributions, $p_k^T = p_k/(1-p_0)$.

B.3.1.1 Zero-truncated Poisson: $\lambda$

$$p_1^T = \frac{\lambda}{e^{\lambda}-1}, \qquad a = 0, \qquad b = \lambda$$

$$p_k^T = \frac{\lambda^k}{k!\,(e^{\lambda}-1)}$$

$$\mathrm{E}[N] = \frac{\lambda}{1-e^{-\lambda}}, \qquad \mathrm{Var}[N] = \frac{\lambda[1-(\lambda+1)e^{-\lambda}]}{(1-e^{-\lambda})^2}, \qquad \tilde{\lambda} = \ln(n\hat{\mu}/n_1)$$

$$P(z) = \frac{e^{\lambda z} - 1}{e^{\lambda} - 1}$$

B.3.1.2 Zero-truncated geometric: $\beta$

$$p_1^T = \frac{1}{1+\beta}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = 0$$

$$p_k^T = \frac{\beta^{k-1}}{(1+\beta)^k}$$

$$\mathrm{E}[N] = 1+\beta, \qquad \mathrm{Var}[N] = \beta(1+\beta), \qquad \tilde{\beta} = \hat{\mu} - 1$$

$$P(z) = \frac{[1-\beta(z-1)]^{-1} - (1+\beta)^{-1}}{1 - (1+\beta)^{-1}}$$

This is a special case of the zero-truncated negative binomial with $r = 1$.

B.3.1.3 Logarithmic: $\beta$

$$p_1^T = \frac{\beta}{(1+\beta)\ln(1+\beta)}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = \frac{-\beta}{1+\beta}$$

$$p_k^T = \frac{\beta^k}{k(1+\beta)^k \ln(1+\beta)}$$

$$\mathrm{E}[N] = \frac{\beta}{\ln(1+\beta)}, \qquad \mathrm{Var}[N] = \frac{\beta\left[1 + \beta - \beta/\ln(1+\beta)\right]}{\ln(1+\beta)}$$

$$\tilde{\beta} = \frac{2(\hat{\mu}-1)}{\hat{\mu}} \quad \text{or} \quad \frac{n\hat{\mu}}{n_1} - 1$$

$$P(z) = 1 - \frac{\ln[1-\beta(z-1)]}{\ln(1+\beta)}$$

This is a limiting case of the zero-truncated negative binomial as $r \to 0$.

B.3.1.4 Zero-truncated binomial: $q$, $m$ ($0 < q < 1$, $m$ an integer)

$$p_1^T = \frac{m(1-q)^{m-1}q}{1-(1-q)^m}, \qquad a = \frac{-q}{1-q}, \qquad b = \frac{(m+1)q}{1-q}$$

$$p_k^T = \frac{\binom{m}{k} q^k (1-q)^{m-k}}{1-(1-q)^m}, \qquad k = 1, 2, \ldots, m$$

$$\mathrm{E}[N] = \frac{mq}{1-(1-q)^m}, \qquad \mathrm{Var}[N] = \frac{mq[(1-q) - (1-q+mq)(1-q)^m]}{[1-(1-q)^m]^2}, \qquad \tilde{q} = \frac{\hat{\mu}}{m}$$

$$P(z) = \frac{[1+q(z-1)]^m - (1-q)^m}{1-(1-q)^m}$$

B.3.1.5 Zero-truncated negative binomial: $\beta$, $r$ ($r > -1$, $r \neq 0$)

$$p_1^T = \frac{r\beta}{(1+\beta)^{r+1} - (1+\beta)}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = \frac{(r-1)\beta}{1+\beta}$$

$$p_k^T = \frac{r(r+1)\cdots(r+k-1)}{k!\,[(1+\beta)^r - 1]}\left(\frac{\beta}{1+\beta}\right)^k$$

$$\mathrm{E}[N] = \frac{r\beta}{1-(1+\beta)^{-r}}, \qquad \mathrm{Var}[N] = \frac{r\beta[(1+\beta) - (1+\beta+r\beta)(1+\beta)^{-r}]}{[1-(1+\beta)^{-r}]^2}$$

$$\tilde{\beta} = \frac{\hat{\sigma}^2}{\hat{\mu}} - 1, \qquad \tilde{r} = \frac{\hat{\mu}^2}{\hat{\sigma}^2 - \hat{\mu}}$$

$$P(z) = \frac{[1-\beta(z-1)]^{-r} - (1+\beta)^{-r}}{1-(1+\beta)^{-r}}$$

This distribution is sometimes called the extended truncated negative binomial distribution because the parameter $r$ can extend below 0.

B.3.2 The zero-modified subclass

A zero-modified distribution is created by starting with a truncated distribution and then placing an arbitrary amount of probability at zero. This probability, $p_0^M$, is a parameter. The remaining probabilities are adjusted accordingly. Values of $p_k^M$ can be determined from the corresponding zero-truncated distribution as $p_k^M = (1 - p_0^M)\,p_k^T$, or from the corresponding $(a, b, 0)$ distribution as $p_k^M = (1 - p_0^M)\,p_k/(1 - p_0)$. The same recursion used for the zero-truncated subclass applies. The mean is $1 - p_0^M$ times the mean for the corresponding zero-truncated distribution. The variance is $1 - p_0^M$ times the zero-truncated variance plus $p_0^M(1 - p_0^M)$ times the square of the zero-truncated mean. The probability generating function is $P^M(z) = p_0^M + (1 - p_0^M)P^T(z)$, where $P^T(z)$ is the probability generating function for the corresponding zero-truncated distribution. The maximum likelihood estimator of $p_0^M$ is always the sample relative frequency at 0.
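These relations can be checked numerically for a zero-modified Poisson (plain Python; the values of $\lambda$ and $p_0^M$ are illustrative, and the probability list is truncated at a point where the tail is negligible):

```python
import math

lam, p0M = 2.0, 0.3           # illustrative parameters
p0 = math.exp(-lam)           # (a, b, 0) Poisson probability at zero

# p_k^M = (1 - p0M) p_k / (1 - p0) for k >= 1, with p0M placed at zero.
pM = [p0M] + [(1.0 - p0M) * math.exp(-lam) * lam ** k / math.factorial(k) / (1.0 - p0)
              for k in range(1, 60)]

mean_zt = lam / (1.0 - p0)               # zero-truncated Poisson mean
mean_zm = sum(k * p for k, p in enumerate(pM))
print(sum(pM))                           # probabilities sum to 1 (up to truncation)
print(mean_zm, (1.0 - p0M) * mean_zt)    # mean is (1 - p0M) times the ZT mean
```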
