Exam STAM
The reading material for Exam STAM includes a variety of textbooks.
Each text has a set of probability distributions that are used in its readings.
For those distributions used in more than one text, the choices of
parameterization may not be the same in all of the books. This may be of
educational value while you study, but could add a layer of uncertainty in
the examination. For this latter reason, we have adopted one set of
parameterizations to be used in examinations. This set will be based
on Appendices A & B of Loss Models: From Data to Decisions by
Klugman, Panjer and Willmot. A slightly revised version of these
appendices is included in this note. A copy of this note will also be distributed
to each candidate at the examination.
Each text also has its own system of dedicated notation and
terminology. Sometimes these may conflict. If alternative meanings could
apply in an examination question, the symbols will be defined.
For Exam STAM, in addition to the abridged table from Loss Models,
sets of values from the standard normal and chi-square distributions
will be available for use in examinations. These are also included in this note.
When using the normal distribution table, choose the tabulated z-value nearest
to the one required and read off the corresponding probability; if a probability
is given, choose the nearest tabulated probability and read off the corresponding
z-value. No interpolation should be used.
Example: If the given z-value is 0.759, and you need to find Pr(Z < 0.759) from
the normal distribution table, then choose the probability for z-value = 0.76: Pr(Z
< 0.76) = 0.7764.
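This rounding convention is easy to express in code. The sketch below uses the exact CDF via `math.erf` purely to reproduce the table value; the function names are mine, and at the exam only the printed table is available.

```python
import math

def std_normal_cdf(z: float) -> float:
    """Exact standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def table_lookup(z: float) -> float:
    """Mimic the exam rule: round z to the nearest 0.01 table entry,
    then report the probability to four decimals -- no interpolation."""
    return round(std_normal_cdf(round(z, 2)), 4)

print(table_lookup(0.759))  # 0.7764, as in the example above
```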
The density function for the standard normal distribution is
\[ \varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}. \]
Excerpts from the Appendices to Loss Models: From Data
to Decisions, 5th edition
A.1 Introduction
The incomplete gamma function is given by
\[ \Gamma(\alpha; x) = \frac{1}{\Gamma(\alpha)} \int_0^x t^{\alpha-1} e^{-t}\, dt, \qquad \alpha > 0,\ x > 0, \]
with
\[ \Gamma(\alpha) = \int_0^\infty t^{\alpha-1} e^{-t}\, dt, \qquad \alpha > 0. \]
Also, define
\[ G(\alpha; x) = \int_x^\infty t^{\alpha-1} e^{-t}\, dt, \qquad x > 0. \]
At times we will need this integral for nonpositive values of α. Integration by parts produces
the relationship
\[ G(\alpha; x) = -\frac{x^\alpha e^{-x}}{\alpha} + \frac{1}{\alpha}\, G(\alpha+1; x). \]
This process can be repeated until the first argument of G is α + k, a positive number.
Then it can be evaluated from
\[ G(\alpha+k; x) = \Gamma(\alpha+k)\,[1 - \Gamma(\alpha+k; x)], \]
which follows directly from the definitions above.
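The downward recursion can be checked numerically. In this sketch (illustrative only; the function names are my own), G is evaluated by brute-force midpoint quadrature for a positive first argument, and the integration-by-parts identity extends it to a negative one:

```python
import math

def G(alpha: float, x: float, upper: float = 60.0, n: int = 100_000) -> float:
    """Midpoint-rule approximation of the integral of t^(alpha-1) e^(-t)
    from x to infinity; the tail beyond `upper` is negligible here."""
    h = (upper - x) / n
    total = 0.0
    for i in range(n):
        t = x + (i + 0.5) * h
        total += t ** (alpha - 1.0) * math.exp(-t)
    return total * h

def G_any(alpha: float, x: float) -> float:
    """Extend G to negative non-integer alpha via the recursion
    G(alpha; x) = -x^alpha e^(-x)/alpha + G(alpha+1; x)/alpha."""
    if alpha > 0:
        return G(alpha, x)
    return (-(x ** alpha) * math.exp(-x) + G_any(alpha + 1.0, x)) / alpha

# For x > 0 the integral also converges directly at alpha = -0.5,
# so the recursion can be compared against brute force.
assert abs(G_any(-0.5, 1.0) - G(-0.5, 1.0)) < 1e-4
```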
A.2 Transformed Beta Family

A.2.2.1 Generalized Pareto—α, θ, τ
\[ f(x) = \frac{\Gamma(\alpha+\tau)}{\Gamma(\alpha)\Gamma(\tau)}\, \frac{\theta^\alpha x^{\tau-1}}{(x+\theta)^{\alpha+\tau}}, \qquad F(x) = \beta(\tau, \alpha; u), \qquad u = \frac{x}{x+\theta}, \]
\[ E[X^k] = \frac{\theta^k\, \Gamma(\tau+k)\,\Gamma(\alpha-k)}{\Gamma(\alpha)\Gamma(\tau)}, \qquad -\tau < k < \alpha, \]
\[ E[X^k] = \frac{\theta^k\, \tau(\tau+1)\cdots(\tau+k-1)}{(\alpha-1)\cdots(\alpha-k)} \quad \text{if } k \text{ is a positive integer}, \]
\[ E[(X \wedge x)^k] = \frac{\theta^k\, \Gamma(\tau+k)\,\Gamma(\alpha-k)}{\Gamma(\alpha)\Gamma(\tau)}\, \beta(\tau+k, \alpha-k; u) + x^k[1 - F(x)], \qquad k > -\tau, \]
\[ \text{Mode} = \theta\, \frac{\tau-1}{\alpha+1}, \quad \tau > 1, \text{ else } 0. \]
A.2.2.2 Burr—α, θ, γ
\[ f(x) = \frac{\alpha\gamma\, (x/\theta)^\gamma}{x\,[1+(x/\theta)^\gamma]^{\alpha+1}}, \qquad F(x) = 1 - u^\alpha, \qquad u = \frac{1}{1+(x/\theta)^\gamma}, \]
\[ \mathrm{VaR}_p(X) = \theta\left[(1-p)^{-1/\alpha} - 1\right]^{1/\gamma}, \]
\[ E[X^k] = \frac{\theta^k\, \Gamma(1+k/\gamma)\,\Gamma(\alpha-k/\gamma)}{\Gamma(\alpha)}, \qquad -\gamma < k < \alpha\gamma, \]
\[ E[(X \wedge x)^k] = \frac{\theta^k\, \Gamma(1+k/\gamma)\,\Gamma(\alpha-k/\gamma)}{\Gamma(\alpha)}\, \beta(1+k/\gamma, \alpha-k/\gamma; 1-u) + x^k u^\alpha, \qquad k > -\gamma, \]
\[ \text{Mode} = \theta \left(\frac{\gamma-1}{\alpha\gamma+1}\right)^{1/\gamma}, \quad \gamma > 1, \text{ else } 0. \]
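Because VaR_p is simply the inverse of F, the Burr VaR formula can be verified in a few lines (an illustrative sketch with arbitrary parameter values):

```python
def burr_cdf(x: float, alpha: float, theta: float, gamma: float) -> float:
    """Burr CDF: F(x) = 1 - u^alpha with u = 1 / (1 + (x/theta)^gamma)."""
    u = 1.0 / (1.0 + (x / theta) ** gamma)
    return 1.0 - u ** alpha

def burr_var(p: float, alpha: float, theta: float, gamma: float) -> float:
    """VaR_p(X) = theta * [(1 - p)^(-1/alpha) - 1]^(1/gamma)."""
    return theta * ((1.0 - p) ** (-1.0 / alpha) - 1.0) ** (1.0 / gamma)

# Round trip: F(VaR_p(X)) should recover p.
alpha, theta, gamma = 2.0, 1000.0, 1.5
for p in (0.5, 0.9, 0.99):
    assert abs(burr_cdf(burr_var(p, alpha, theta, gamma), alpha, theta, gamma) - p) < 1e-12
```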
A.2.2.3 Inverse Burr—τ, θ, γ
\[ f(x) = \frac{\tau\gamma\, (x/\theta)^{\gamma\tau}}{x\,[1+(x/\theta)^\gamma]^{\tau+1}}, \qquad F(x) = u^\tau, \qquad u = \frac{(x/\theta)^\gamma}{1+(x/\theta)^\gamma}, \]
\[ \mathrm{VaR}_p(X) = \theta\left(p^{-1/\tau} - 1\right)^{-1/\gamma}, \]
\[ E[X^k] = \frac{\theta^k\, \Gamma(\tau+k/\gamma)\,\Gamma(1-k/\gamma)}{\Gamma(\tau)}, \qquad -\tau\gamma < k < \gamma, \]
\[ E[(X \wedge x)^k] = \frac{\theta^k\, \Gamma(\tau+k/\gamma)\,\Gamma(1-k/\gamma)}{\Gamma(\tau)}\, \beta(\tau+k/\gamma, 1-k/\gamma; u) + x^k[1-u^\tau], \qquad k > -\tau\gamma, \]
\[ \text{Mode} = \theta \left(\frac{\tau\gamma-1}{\gamma+1}\right)^{1/\gamma}, \quad \tau\gamma > 1, \text{ else } 0. \]
A.2.3.2 Inverse Pareto—τ, θ
\[ f(x) = \frac{\tau\theta\, x^{\tau-1}}{(x+\theta)^{\tau+1}}, \qquad F(x) = \left(\frac{x}{x+\theta}\right)^\tau, \]
\[ \mathrm{VaR}_p(X) = \theta\left[p^{-1/\tau} - 1\right]^{-1}, \]
\[ E[X^k] = \frac{\theta^k\, \Gamma(\tau+k)\,\Gamma(1-k)}{\Gamma(\tau)}, \qquad -\tau < k < 1, \]
\[ E[X^k] = \frac{\theta^k\, (-k)!}{(\tau-1)\cdots(\tau+k)} \quad \text{if } k \text{ is a negative integer}, \]
\[ E[(X \wedge x)^k] = \theta^k \tau \int_0^{x/(x+\theta)} y^{\tau+k-1}(1-y)^{-k}\, dy + x^k\left[1 - \left(\frac{x}{x+\theta}\right)^\tau\right], \qquad k > -\tau, \]
\[ \text{Mode} = \theta\, \frac{\tau-1}{2}, \quad \tau > 1, \text{ else } 0. \]
A.2.3.3 Loglogistic (Fisk)—γ, θ
\[ f(x) = \frac{\gamma\, (x/\theta)^\gamma}{x\,[1+(x/\theta)^\gamma]^2}, \qquad F(x) = u, \qquad u = \frac{(x/\theta)^\gamma}{1+(x/\theta)^\gamma}, \]
\[ \mathrm{VaR}_p(X) = \theta\left(p^{-1} - 1\right)^{-1/\gamma}, \]
\[ E[X^k] = \theta^k\, \Gamma(1+k/\gamma)\,\Gamma(1-k/\gamma), \qquad -\gamma < k < \gamma, \]
\[ E[(X \wedge x)^k] = \theta^k\, \Gamma(1+k/\gamma)\,\Gamma(1-k/\gamma)\, \beta(1+k/\gamma, 1-k/\gamma; u) + x^k(1-u), \qquad k > -\gamma, \]
\[ \text{Mode} = \theta \left(\frac{\gamma-1}{\gamma+1}\right)^{1/\gamma}, \quad \gamma > 1, \text{ else } 0. \]
A.2.3.4 Paralogistic—α, θ
This is a Burr distribution with γ = α.
\[ f(x) = \frac{\alpha^2\, (x/\theta)^\alpha}{x\,[1+(x/\theta)^\alpha]^{\alpha+1}}, \qquad F(x) = 1 - u^\alpha, \qquad u = \frac{1}{1+(x/\theta)^\alpha}, \]
\[ \mathrm{VaR}_p(X) = \theta\left[(1-p)^{-1/\alpha} - 1\right]^{1/\alpha}, \]
\[ E[X^k] = \frac{\theta^k\, \Gamma(1+k/\alpha)\,\Gamma(\alpha-k/\alpha)}{\Gamma(\alpha)}, \qquad -\alpha < k < \alpha^2, \]
\[ E[(X \wedge x)^k] = \frac{\theta^k\, \Gamma(1+k/\alpha)\,\Gamma(\alpha-k/\alpha)}{\Gamma(\alpha)}\, \beta(1+k/\alpha, \alpha-k/\alpha; 1-u) + x^k u^\alpha, \qquad k > -\alpha, \]
\[ \text{Mode} = \theta \left(\frac{\alpha-1}{\alpha^2+1}\right)^{1/\alpha}, \quad \alpha > 1, \text{ else } 0. \]
A.2.3.5 Inverse Paralogistic—τ, θ
\[ f(x) = \frac{\tau^2\, (x/\theta)^{\tau^2}}{x\,[1+(x/\theta)^\tau]^{\tau+1}}, \qquad F(x) = u^\tau, \qquad u = \frac{(x/\theta)^\tau}{1+(x/\theta)^\tau}, \]
\[ \mathrm{VaR}_p(X) = \theta\left(p^{-1/\tau} - 1\right)^{-1/\tau}, \]
\[ E[X^k] = \frac{\theta^k\, \Gamma(\tau+k/\tau)\,\Gamma(1-k/\tau)}{\Gamma(\tau)}, \qquad -\tau^2 < k < \tau, \]
\[ E[(X \wedge x)^k] = \frac{\theta^k\, \Gamma(\tau+k/\tau)\,\Gamma(1-k/\tau)}{\Gamma(\tau)}\, \beta(\tau+k/\tau, 1-k/\tau; u) + x^k[1-u^\tau], \qquad k > -\tau^2, \]
\[ \text{Mode} = \theta\, (\tau-1)^{1/\tau}, \quad \tau > 1, \text{ else } 0. \]
A.3.2.1 Gamma—α, θ
\[ f(x) = \frac{(x/\theta)^\alpha\, e^{-x/\theta}}{x\,\Gamma(\alpha)}, \qquad F(x) = \Gamma(\alpha; x/\theta), \]
\[ E[X^k] = \frac{\theta^k\, \Gamma(\alpha+k)}{\Gamma(\alpha)}, \qquad k > -\alpha, \]
\[ E[X^k] = \theta^k\, (\alpha+k-1)\cdots\alpha \quad \text{if } k \text{ is a positive integer}, \]
\[ E[(X \wedge x)^k] = \frac{\theta^k\, \Gamma(\alpha+k)}{\Gamma(\alpha)}\, \Gamma(\alpha+k; x/\theta) + x^k[1 - \Gamma(\alpha; x/\theta)], \qquad k > -\alpha, \]
\[ E[(X \wedge x)^k] = \alpha(\alpha+1)\cdots(\alpha+k-1)\,\theta^k\, \Gamma(\alpha+k; x/\theta) + x^k[1 - \Gamma(\alpha; x/\theta)] \quad \text{if } k \text{ is a positive integer}, \]
\[ M(t) = (1-\theta t)^{-\alpha}, \qquad t < 1/\theta, \]
\[ \text{Mode} = \theta(\alpha-1), \quad \alpha > 1, \text{ else } 0. \]
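The integer-moment shortcut for the gamma distribution can be checked against direct numerical integration of x^k f(x). This sketch (my own helper names; midpoint quadrature is a crude but adequate choice here) verifies E[X²] = θ²α(α+1):

```python
import math

def gamma_pdf(x: float, alpha: float, theta: float) -> float:
    """f(x) = (x/theta)^alpha * exp(-x/theta) / (x * Gamma(alpha))."""
    return (x / theta) ** alpha * math.exp(-x / theta) / (x * math.gamma(alpha))

def raw_moment_numeric(k: int, alpha: float, theta: float,
                       upper: float = 300.0, n: int = 150_000) -> float:
    """Midpoint-rule approximation of E[X^k] on [0, upper]."""
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x ** k * gamma_pdf(x, alpha, theta)
    return total * h

alpha, theta = 2.5, 10.0
closed_form = theta ** 2 * alpha * (alpha + 1.0)  # E[X^2] = theta^2 * alpha * (alpha+1)
assert abs(raw_moment_numeric(2, alpha, theta) - closed_form) / closed_form < 1e-3
```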
A.3.2.2 Inverse Gamma (Vinci)—α, θ
\[ f(x) = \frac{(\theta/x)^\alpha\, e^{-\theta/x}}{x\,\Gamma(\alpha)}, \qquad F(x) = 1 - \Gamma(\alpha; \theta/x), \]
\[ E[X^k] = \frac{\theta^k\, \Gamma(\alpha-k)}{\Gamma(\alpha)}, \qquad k < \alpha, \]
\[ E[X^k] = \frac{\theta^k}{(\alpha-1)\cdots(\alpha-k)} \quad \text{if } k \text{ is a positive integer}, \]
\[ E[(X \wedge x)^k] = \frac{\theta^k\, \Gamma(\alpha-k)}{\Gamma(\alpha)}\,[1 - \Gamma(\alpha-k; \theta/x)] + x^k\, \Gamma(\alpha; \theta/x) = \frac{\theta^k\, G(\alpha-k; \theta/x)}{\Gamma(\alpha)} + x^k\, \Gamma(\alpha; \theta/x), \qquad \text{all } k, \]
\[ \text{Mode} = \theta/(\alpha+1). \]
A.3.2.3 Weibull—θ, τ
\[ f(x) = \frac{\tau\, (x/\theta)^\tau\, e^{-(x/\theta)^\tau}}{x}, \qquad F(x) = 1 - e^{-(x/\theta)^\tau}, \]
\[ \mathrm{VaR}_p(X) = \theta\,[-\ln(1-p)]^{1/\tau}, \]
\[ E[X^k] = \theta^k\, \Gamma(1+k/\tau), \qquad k > -\tau, \]
\[ E[(X \wedge x)^k] = \theta^k\, \Gamma(1+k/\tau)\, \Gamma[1+k/\tau; (x/\theta)^\tau] + x^k\, e^{-(x/\theta)^\tau}, \qquad k > -\tau, \]
\[ \text{Mode} = \theta \left(\frac{\tau-1}{\tau}\right)^{1/\tau}, \quad \tau > 1, \text{ else } 0. \]
A.3.2.4 Inverse Weibull—θ, τ
\[ f(x) = \frac{\tau\, (\theta/x)^\tau\, e^{-(\theta/x)^\tau}}{x}, \qquad F(x) = e^{-(\theta/x)^\tau}, \]
\[ \mathrm{VaR}_p(X) = \theta\,(-\ln p)^{-1/\tau}, \]
\[ E[X^k] = \theta^k\, \Gamma(1-k/\tau), \qquad k < \tau, \]
\[ E[(X \wedge x)^k] = \theta^k\, \Gamma(1-k/\tau)\left\{1 - \Gamma[1-k/\tau; (\theta/x)^\tau]\right\} + x^k\left[1 - e^{-(\theta/x)^\tau}\right] = \theta^k\, G[1-k/\tau; (\theta/x)^\tau] + x^k\left[1 - e^{-(\theta/x)^\tau}\right], \qquad \text{all } k, \]
\[ \text{Mode} = \theta \left(\frac{\tau}{\tau+1}\right)^{1/\tau}. \]
A.3.3 One-Parameter Distributions
A.3.3.1 Exponential—θ
\[ f(x) = \frac{e^{-x/\theta}}{\theta}, \qquad F(x) = 1 - e^{-x/\theta}, \]
\[ \mathrm{VaR}_p(X) = -\theta \ln(1-p), \]
\[ E[X^k] = \theta^k\, \Gamma(k+1), \qquad k > -1, \]
\[ E[X^k] = \theta^k\, k! \quad \text{if } k \text{ is a positive integer}, \]
\[ E[X \wedge x] = \theta(1 - e^{-x/\theta}), \]
\[ \mathrm{TVaR}_p(X) = -\theta \ln(1-p) + \theta, \]
\[ E[(X \wedge x)^k] = \theta^k\, \Gamma(k+1)\, \Gamma(k+1; x/\theta) + x^k\, e^{-x/\theta}, \qquad k > -1, \]
\[ E[(X \wedge x)^k] = \theta^k\, k!\, \Gamma(k+1; x/\theta) + x^k\, e^{-x/\theta} \quad \text{if } k > -1 \text{ is an integer}, \]
\[ M(z) = (1-\theta z)^{-1}, \qquad z < 1/\theta, \]
\[ \text{Mode} = 0. \]
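The limited-expected-value formula follows from E[X ∧ x] = ∫₀ˣ S(t) dt with survival function S(t) = e^{−t/θ}; a short numerical check (illustrative only, with my own function name):

```python
import math

def exp_limited_ev_numeric(x: float, theta: float, n: int = 100_000) -> float:
    """E[X ^ x] as the integral of the survival function e^(-t/theta) on [0, x]."""
    h = x / n
    return sum(math.exp(-(i + 0.5) * h / theta) for i in range(n)) * h

theta, x = 500.0, 1000.0
closed_form = theta * (1.0 - math.exp(-x / theta))
assert abs(exp_limited_ev_numeric(x, theta) - closed_form) < 1e-4
```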
A.3.3.2 Inverse Exponential—θ
\[ f(x) = \frac{\theta\, e^{-\theta/x}}{x^2}, \qquad F(x) = e^{-\theta/x}, \]
\[ \mathrm{VaR}_p(X) = \theta\,(-\ln p)^{-1}, \]
\[ E[X^k] = \theta^k\, \Gamma(1-k), \qquad k < 1, \]
\[ E[(X \wedge x)^k] = \theta^k\, G(1-k; \theta/x) + x^k(1 - e^{-\theta/x}), \qquad \text{all } k, \]
\[ \text{Mode} = \theta/2. \]
A.5.1.1 Lognormal—µ, σ
\[ f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\, \exp(-z^2/2) = \varphi(z)/(\sigma x), \qquad z = \frac{\ln x - \mu}{\sigma}, \]
\[ F(x) = \Phi(z), \]
\[ E[X^k] = \exp\left(k\mu + \tfrac{1}{2}k^2\sigma^2\right), \]
\[ E[(X \wedge x)^k] = \exp\left(k\mu + \tfrac{1}{2}k^2\sigma^2\right) \Phi\!\left(\frac{\ln x - \mu - k\sigma^2}{\sigma}\right) + x^k[1 - F(x)], \]
\[ \text{Mode} = \exp(\mu - \sigma^2). \]
A.5.1.2 Inverse Gaussian—µ, θ
\[ f(x) = \left(\frac{\theta}{2\pi x^3}\right)^{1/2} \exp\left(-\frac{\theta z^2}{2x}\right), \qquad z = \frac{x-\mu}{\mu}, \]
\[ F(x) = \Phi\!\left[z\left(\frac{\theta}{x}\right)^{1/2}\right] + \exp\!\left(\frac{2\theta}{\mu}\right) \Phi\!\left[-y\left(\frac{\theta}{x}\right)^{1/2}\right], \qquad y = \frac{x+\mu}{\mu}, \]
\[ E[X] = \mu, \qquad \mathrm{Var}[X] = \mu^3/\theta, \]
\[ E[X^k] = \sum_{n=0}^{k-1} \frac{(k+n-1)!}{(k-n-1)!\, n!}\, \frac{\mu^{n+k}}{(2\theta)^n}, \qquad k = 1, 2, \ldots, \]
\[ E[X \wedge x] = x - \mu z\, \Phi\!\left[z\left(\frac{\theta}{x}\right)^{1/2}\right] - \mu y \exp\!\left(\frac{2\theta}{\mu}\right) \Phi\!\left[-y\left(\frac{\theta}{x}\right)^{1/2}\right], \]
\[ M(z) = \exp\left[\frac{\theta}{\mu}\left(1 - \sqrt{1 - \frac{2\mu^2 z}{\theta}}\right)\right], \qquad z < \frac{\theta}{2\mu^2}. \]
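The finite sum for the raw moments is easy to implement; this sketch (my own function name) confirms that k = 1 returns µ and that k = 2 reproduces Var[X] = µ³/θ:

```python
import math

def ig_raw_moment(k: int, mu: float, theta: float) -> float:
    """E[X^k] = sum_{n=0}^{k-1} (k+n-1)! / ((k-n-1)! n!) * mu^(n+k) / (2 theta)^n."""
    return sum(
        math.factorial(k + n - 1)
        / (math.factorial(k - n - 1) * math.factorial(n))
        * mu ** (n + k) / (2.0 * theta) ** n
        for n in range(k)
    )

mu, theta = 3.0, 7.0
assert ig_raw_moment(1, mu, theta) == mu                    # E[X] = mu
variance = ig_raw_moment(2, mu, theta) - mu ** 2
assert abs(variance - mu ** 3 / theta) < 1e-12              # Var[X] = mu^3 / theta
```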
A.5.1.3 Log-t—r, µ, σ
A.5.1.4 Single-Parameter Pareto—α, θ
\[ f(x) = \frac{\alpha\theta^\alpha}{x^{\alpha+1}}, \quad x > \theta, \qquad F(x) = 1 - \left(\frac{\theta}{x}\right)^\alpha, \quad x > \theta, \]
\[ \mathrm{VaR}_p(X) = \theta\,(1-p)^{-1/\alpha}, \]
\[ E[X^k] = \frac{\alpha\theta^k}{\alpha-k}, \qquad k < \alpha, \]
\[ E[(X \wedge x)^k] = \frac{\alpha\theta^k}{\alpha-k} - \frac{k\theta^\alpha}{(\alpha-k)\,x^{\alpha-k}}, \qquad x \ge \theta,\ k \ne \alpha, \]
\[ E[(X \wedge x)^\alpha] = \theta^\alpha\,[1 + \alpha \ln(x/\theta)], \]
\[ \mathrm{TVaR}_p(X) = \frac{\alpha\theta\,(1-p)^{-1/\alpha}}{\alpha-1}, \qquad \alpha > 1, \]
\[ \text{Mode} = \theta. \]
Note: Although there appear to be two parameters, only α is a true parameter. The
value of θ must be set in advance.
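TVaR_p is the average of VaR_u over u ∈ (p, 1), so the closed form above can be checked by quadrature (an illustrative sketch; parameter values and function names are my own):

```python
def sp_pareto_var(p: float, alpha: float, theta: float) -> float:
    """VaR_p(X) = theta * (1 - p)^(-1/alpha)."""
    return theta * (1.0 - p) ** (-1.0 / alpha)

def sp_pareto_tvar(p: float, alpha: float, theta: float) -> float:
    """TVaR_p(X) = alpha * theta * (1 - p)^(-1/alpha) / (alpha - 1), alpha > 1."""
    return alpha * theta * (1.0 - p) ** (-1.0 / alpha) / (alpha - 1.0)

# TVaR_p should equal the average of VaR_u for u between p and 1.
alpha, theta, p = 3.0, 100.0, 0.95
n = 200_000
h = (1.0 - p) / n
avg_var = sum(sp_pareto_var(p + (i + 0.5) * h, alpha, theta) for i in range(n)) * h / (1.0 - p)
assert abs(avg_var - sp_pareto_tvar(p, alpha, theta)) / sp_pareto_tvar(p, alpha, theta) < 1e-3
```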
A.6.1.1 Generalized Beta—a, b, θ, τ
\[ f(x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, u^a (1-u)^{b-1}\, \frac{\tau}{x}, \qquad 0 < x < \theta, \qquad u = (x/\theta)^\tau, \]
\[ F(x) = \beta(a, b; u), \]
\[ E[X^k] = \frac{\theta^k\, \Gamma(a+b)\,\Gamma(a+k/\tau)}{\Gamma(a)\,\Gamma(a+b+k/\tau)}, \qquad k > -a\tau, \]
\[ E[(X \wedge x)^k] = \frac{\theta^k\, \Gamma(a+b)\,\Gamma(a+k/\tau)}{\Gamma(a)\,\Gamma(a+b+k/\tau)}\, \beta(a+k/\tau, b; u) + x^k[1 - \beta(a, b; u)]. \]
A.6.1.2 Beta—a, b, θ
The case θ = 1 has no special name but is the commonly used version of this distribution.
\[ f(x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, u^a (1-u)^{b-1}\, \frac{1}{x}, \qquad 0 < x < \theta, \qquad u = x/\theta, \]
\[ F(x) = \beta(a, b; u), \]
\[ E[X^k] = \frac{\theta^k\, \Gamma(a+b)\,\Gamma(a+k)}{\Gamma(a)\,\Gamma(a+b+k)}, \qquad k > -a, \]
\[ E[X^k] = \frac{\theta^k\, a(a+1)\cdots(a+k-1)}{(a+b)(a+b+1)\cdots(a+b+k-1)} \quad \text{if } k \text{ is a positive integer}, \]
\[ E[(X \wedge x)^k] = \frac{\theta^k\, a(a+1)\cdots(a+k-1)}{(a+b)(a+b+1)\cdots(a+b+k-1)}\, \beta(a+k, b; u) + x^k[1 - \beta(a, b; u)]. \]
Appendix B
When the method of moments is used to determine the starting value, a circumflex (e.g., \(\hat\lambda\))
is used. For any other method, a tilde (e.g., \(\tilde\lambda\)) is used. When the starting-value formulas
do not provide admissible parameter values, a truly crude guess is to set the product of
all λ and β parameters equal to the sample mean and set all other parameters equal to 1.
If there are two or more of these parameters, an easy choice is to set each to the square root of the
sample mean.
The last item presented is the probability generating function,
\[ P(z) = E[z^N]. \]
B.2 The (a, b, 0) Class
The distributions in this class have support on 0, 1, .... For this class, a particular distribution
is specified by setting \(p_0\) and then using \(p_k = (a + b/k)\,p_{k-1}\). Specific members are
created by setting \(p_0\), a, and b. For any member, \(\mu^{(1)} = (a+b)/(1-a)\), and for higher j,
\(\mu^{(j)} = (aj+b)\,\mu^{(j-1)}/(1-a)\), where \(\mu^{(j)}\) is the jth factorial moment.
The variance is \((a+b)/(1-a)^2\).
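The recursion p_k = (a + b/k)p_{k−1} can be exercised directly; this sketch (my own helper name) confirms it reproduces the Poisson pmf when a = 0 and b = λ:

```python
import math

def ab0_pmf(p0: float, a: float, b: float, kmax: int) -> list[float]:
    """Generate p_0 .. p_kmax from the (a, b, 0) recursion p_k = (a + b/k) p_{k-1}."""
    probs = [p0]
    for k in range(1, kmax + 1):
        probs.append((a + b / k) * probs[-1])
    return probs

# Poisson(lambda) is the member with p0 = e^(-lambda), a = 0, b = lambda.
lam = 2.0
probs = ab0_pmf(math.exp(-lam), a=0.0, b=lam, kmax=10)
for k, pk in enumerate(probs):
    assert abs(pk - math.exp(-lam) * lam ** k / math.factorial(k)) < 1e-12
```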
B.2.1.1 Poisson—λ
\[ p_0 = e^{-\lambda}, \qquad a = 0, \qquad b = \lambda, \qquad p_k = \frac{e^{-\lambda}\lambda^k}{k!}, \]
\[ E[N] = \lambda, \qquad \mathrm{Var}[N] = \lambda, \]
\[ \hat\lambda = \hat\mu, \]
\[ P(z) = e^{\lambda(z-1)}. \]
B.2.1.2 Geometric—β
\[ p_0 = \frac{1}{1+\beta}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = 0, \qquad p_k = \frac{\beta^k}{(1+\beta)^{k+1}}, \]
\[ E[N] = \beta, \qquad \mathrm{Var}[N] = \beta(1+\beta), \]
\[ \hat\beta = \hat\mu, \]
\[ P(z) = [1 - \beta(z-1)]^{-1}, \qquad -(1+1/\beta) < z < 1+1/\beta. \]
B.2.1.3 Binomial—q, m
\[ p_0 = (1-q)^m, \qquad a = -\frac{q}{1-q}, \qquad b = \frac{(m+1)q}{1-q}, \]
\[ p_k = \binom{m}{k} q^k (1-q)^{m-k}, \qquad k = 0, 1, \ldots, m, \]
\[ E[N] = mq, \qquad \mathrm{Var}[N] = mq(1-q), \]
\[ \hat{q} = \hat\mu/m, \]
\[ P(z) = [1 + q(z-1)]^m. \]
B.2.1.4 Negative Binomial—β, r
\[ p_0 = (1+\beta)^{-r}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = \frac{(r-1)\beta}{1+\beta}, \]
\[ p_k = \frac{r(r+1)\cdots(r+k-1)\,\beta^k}{k!\,(1+\beta)^{r+k}}, \]
\[ E[N] = r\beta, \qquad \mathrm{Var}[N] = r\beta(1+\beta), \]
\[ \hat\beta = \frac{\hat\sigma^2}{\hat\mu} - 1, \qquad \hat{r} = \frac{\hat\mu^2}{\hat\sigma^2 - \hat\mu}, \]
\[ P(z) = [1 - \beta(z-1)]^{-r}, \qquad -(1+1/\beta) < z < 1+1/\beta. \]
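The method-of-moments starting values can be computed from a sample in a few lines. The sketch below uses a hypothetical claim-count sample and my own function name; note the formulas require σ̂² > µ̂ (overdispersion):

```python
from statistics import mean, pvariance

def neg_binomial_start(counts):
    """Method-of-moments starting values:
    beta_hat = sigma^2 / mu - 1,  r_hat = mu^2 / (sigma^2 - mu).
    Requires sample variance > sample mean (overdispersion)."""
    mu, s2 = mean(counts), pvariance(counts)
    if s2 <= mu:
        raise ValueError("sample variance must exceed the sample mean")
    return s2 / mu - 1.0, mu ** 2 / (s2 - mu)

# Hypothetical claim-count sample (illustration only).
counts = [0, 0, 1, 1, 2, 2, 3, 5]
beta_hat, r_hat = neg_binomial_start(counts)
# Moment matching: r*beta = mu and r*beta*(1+beta) = sigma^2.
mu, s2 = mean(counts), pvariance(counts)
assert abs(r_hat * beta_hat - mu) < 1e-9
assert abs(r_hat * beta_hat * (1.0 + beta_hat) - s2) < 1e-9
```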
B.3.1.1 Zero-Truncated Poisson—λ
\[ p_1^T = \frac{\lambda}{e^\lambda - 1}, \qquad a = 0, \qquad b = \lambda, \]
\[ p_k^T = \frac{\lambda^k}{k!\,(e^\lambda - 1)}, \]
\[ E[N] = \lambda/(1 - e^{-\lambda}), \qquad \mathrm{Var}[N] = \lambda[1 - (\lambda+1)e^{-\lambda}]/(1 - e^{-\lambda})^2, \]
\[ \tilde\lambda = \ln(n\hat\mu/n_1), \]
\[ P(z) = \frac{e^{\lambda z} - 1}{e^\lambda - 1}. \]
B.3.1.2 Zero-Truncated Geometric—β
\[ p_1^T = \frac{1}{1+\beta}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = 0, \]
\[ p_k^T = \frac{\beta^{k-1}}{(1+\beta)^k}, \]
\[ E[N] = 1+\beta, \qquad \mathrm{Var}[N] = \beta(1+\beta), \]
\[ \hat\beta = \hat\mu - 1, \]
\[ P(z) = \frac{[1 - \beta(z-1)]^{-1} - (1+\beta)^{-1}}{1 - (1+\beta)^{-1}}, \qquad -(1+1/\beta) < z < 1+1/\beta. \]
B.3.1.3 Logarithmic—β
\[ p_1^T = \frac{\beta}{(1+\beta)\ln(1+\beta)}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = -\frac{\beta}{1+\beta}, \]
\[ p_k^T = \frac{\beta^k}{k\,(1+\beta)^k \ln(1+\beta)}, \]
\[ E[N] = \beta/\ln(1+\beta), \qquad \mathrm{Var}[N] = \frac{\beta[1+\beta - \beta/\ln(1+\beta)]}{\ln(1+\beta)}, \]
\[ \tilde\beta = \frac{n\hat\mu}{n_1} - 1 \quad \text{or} \quad \frac{2(\hat\mu-1)}{\hat\mu}, \]
\[ P(z) = 1 - \frac{\ln[1 - \beta(z-1)]}{\ln(1+\beta)}, \qquad -(1+1/\beta) < z < 1+1/\beta. \]
B.3.1.5 Zero-Truncated Negative Binomial—r, β
\[ p_1^T = \frac{r\beta}{(1+\beta)^{r+1} - (1+\beta)}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = \frac{(r-1)\beta}{1+\beta}, \]
\[ p_k^T = \frac{r(r+1)\cdots(r+k-1)}{k!\,[(1+\beta)^r - 1]} \left(\frac{\beta}{1+\beta}\right)^k, \]
\[ E[N] = \frac{r\beta}{1 - (1+\beta)^{-r}}, \]
\[ \mathrm{Var}[N] = \frac{r\beta[(1+\beta) - (1+\beta+r\beta)(1+\beta)^{-r}]}{[1 - (1+\beta)^{-r}]^2}, \]
\[ \tilde\beta = \frac{\hat\sigma^2}{\hat\mu} - 1, \qquad \tilde{r} = \frac{\hat\mu^2}{\hat\sigma^2 - \hat\mu}, \]
\[ P(z) = \frac{[1 - \beta(z-1)]^{-r} - (1+\beta)^{-r}}{1 - (1+\beta)^{-r}}, \qquad -(1+1/\beta) < z < 1+1/\beta. \]
This distribution is sometimes called the extended truncated negative binomial distribution
because the parameter r can extend below zero.