© 2013 James E. Gentle, Elements of Computational Statistics, Second Edition
Appendix D. Important Probability Distributions
There are situations for which these two distinct classes are not appropriate. For many such situations, however, a mixture distribution provides an appropriate model. We can express a PDF of a mixture distribution as

   p_M(y) = Σ_{j=1}^{m} ω_j p_j(y | θ_j),

where 0 ≤ ω_j ≤ 1 and Σ_{j=1}^{m} ω_j = 1.
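As a concrete illustration (not from the original text), a finite mixture density of this form can be evaluated and sampled in a few lines; the normal component family, the weights, and the parameter values below are arbitrary choices for the sketch.

```python
import math
import random

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) at y (an arbitrary choice of component family)."""
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def mixture_pdf(y, weights, params):
    """p_M(y) = sum_j omega_j p_j(y | theta_j), with normal components p_j."""
    return sum(w * normal_pdf(y, mu, s) for w, (mu, s) in zip(weights, params))

def sample_mixture(weights, params, rng):
    """Two-stage sampling: pick component j with probability omega_j, then draw from p_j."""
    mu, s = rng.choices(params, weights=weights, k=1)[0]
    return rng.gauss(mu, s)

weights = [0.3, 0.7]                # omega_j: nonnegative, summing to 1
params = [(0.0, 1.0), (5.0, 2.0)]   # theta_j = (mu_j, sigma_j), illustration values
rng = random.Random(0)
draws = [sample_mixture(weights, params, rng) for _ in range(10000)]
print(mixture_pdf(0.0, weights, params))  # density of the mixture at y = 0
```

The two-stage sampler mirrors the interpretation of the weights as probabilities of component membership.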
Another example of a mixture distribution is a binomial with constant
parameter n, but with a nonconstant parameter π. In many applications, if
an identical binomial distribution is assumed (that is, a constant π), it is often
the case that “over-dispersion” will be observed; that is, the sample variance
exceeds what would be expected given an estimate of some other parameter
that determines the population variance. This situation can occur in a model,
such as the binomial, in which a single parameter determines both the first
and second moments. The mixture model above in which each pj is a binomial
PDF with parameters n and πj may be a better model.
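To see the over-dispersion concretely, one can compute the exact mean and variance of a two-component binomial mixture and compare with the single binomial that has the same mean; the component values below are arbitrary illustration numbers, not from the text.

```python
def binomial_mixture_moments(n, weights, pis):
    """Exact mean and variance of a mixture of Binomial(n, pi_j) components."""
    mean = sum(w * n * p for w, p in zip(weights, pis))
    # For Binomial(n, p): E[X^2] = n p (1 - p) + (n p)^2; mix the second moments.
    second = sum(w * (n * p * (1 - p) + (n * p) ** 2) for w, p in zip(weights, pis))
    return mean, second - mean ** 2

n, weights, pis = 20, [0.5, 0.5], [0.2, 0.6]   # illustration values
mean, var = binomial_mixture_moments(n, weights, pis)
pi_bar = mean / n   # a single binomial fit with the same mean uses this pi
print(var, n * pi_bar * (1 - pi_bar))  # mixture variance 20.0 vs binomial 4.8
```

The mixture's variance exceeds n·π̄(1 − π̄) because the between-component spread of the means adds to the within-component binomial variance.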
Of course, we can extend this kind of mixing even further. Instead of Σ_{j=1}^{m} ω_j p_j(y | θ_j) with ω_j ≥ 0 and Σ_{j=1}^{m} ω_j = 1, we can take ω(θ)p(y | θ) with ω(θ) ≥ 0 and ∫ ω(θ) dθ = 1, from which we recognize that ω(θ) is a PDF and θ can be considered to be the realization of a random variable.
Extending the example of the mixture of binomial distributions, we may choose some reasonable PDF ω(π). An obvious choice is a beta PDF. This yields the beta-binomial distribution, with PDF

   p_{X,Π}(x, π) = C(n, x) (Γ(α + β)/(Γ(α)Γ(β))) π^{x+α−1} (1 − π)^{n−x+β−1} I_{{0,1,...,n}×]0,1[}(x, π),

where C(n, x) is the binomial coefficient. This is a standard distribution but I did not include it in the tables below. This distribution may be useful in situations in which a binomial model is appropriate, but the probability parameter is changing more-or-less continuously.
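Integrating π out of the joint PDF above gives the standard marginal beta-binomial PMF, C(n, x)·B(x + α, n − x + β)/B(α, β), where B is the beta function. A small sketch (with arbitrary parameter values) checks that this marginal sums to one and has the known mean nα/(α + β).

```python
import math

def beta_binomial_pmf(x, n, a, b):
    """Marginal P(X = x) from integrating pi out of the joint PDF:
    C(n, x) * B(x + a, n - x + b) / B(a, b), computed via log-gamma for stability."""
    log_beta = lambda p, q: math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    return math.comb(n, x) * math.exp(log_beta(x + a, n - x + b) - log_beta(a, b))

n, a, b = 10, 2.0, 3.0          # arbitrary illustration values
pmf = [beta_binomial_pmf(x, n, a, b) for x in range(n + 1)]
print(sum(pmf))                               # total mass, close to 1
print(sum(x * p for x, p in enumerate(pmf)))  # mean, close to n*a/(a+b) = 4
```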
(entry continued from previous page)
                     mean      θ
                     variance  θ

power series         PDF       h_y θ^y / c(θ),   y = 0, 1, 2, . . .
θ ∈ IR+;             CF        Σ_y h_y (θ e^{it})^y / c(θ)
{h_y} positive       mean      θ (d/dθ) log(c(θ))
constants;           variance  θ (d/dθ) log(c(θ)) + θ² (d²/dθ²) log(c(θ))
c(θ) = Σ_y h_y θ^y

logarithmic          PDF       −π^y / (y log(1 − π)),   y = 1, 2, 3, . . .
π ∈ ]0, 1[           mean      −π/((1 − π) log(1 − π))
                     variance  −π(π + log(1 − π))/((1 − π)² (log(1 − π))²)

Benford’s            PDF       log_b(y + 1) − log_b(y),   y = 1, . . . , b − 1
b integer ≥ 3        mean      b − 1 − log_b((b − 1)!)
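The tabulated moment formulas can be spot-checked numerically. For instance (an illustration, not part of the tables), the logarithmic distribution's mean formula agrees with direct summation of y times the PDF, truncating the series once the terms are negligible.

```python
import math

def logarithmic_pdf(y, pi):
    """PDF (wrt counting measure) of the logarithmic distribution:
    -pi^y / (y log(1 - pi)), y = 1, 2, 3, ..."""
    return -pi ** y / (y * math.log(1 - pi))

pi = 0.4                                   # arbitrary illustration value
# Truncate at y = 200; the terms decay geometrically, so the tail is negligible.
mass = sum(logarithmic_pdf(y, pi) for y in range(1, 200))
mean_by_sum = sum(y * logarithmic_pdf(y, pi) for y in range(1, 200))
mean_formula = -pi / ((1 - pi) * math.log(1 - pi))
print(mass, mean_by_sum, mean_formula)     # mass close to 1; the two means agree
```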
normal; N(µ, σ²)         PDF        φ(y | µ, σ²) = (1/(√(2π) σ)) e^{−(y−µ)²/(2σ²)}
µ ∈ IR; σ² ∈ IR+         CF         e^{iµt − σ²t²/2}
                         mean       µ
                         variance   σ²

multivariate normal;     PDF        (1/((2π)^{d/2} |Σ|^{1/2})) e^{−(y−µ)^T Σ^{−1}(y−µ)/2}
N_d(µ, Σ)                CF         e^{iµ^T t − t^T Σ t/2}
µ ∈ IR^d;                mean       µ
Σ ≻ 0 ∈ IR^{d×d}         covariance Σ

matrix normal            PDF        (1/((2π)^{nm/2} |Ψ|^{n/2} |Σ|^{m/2})) e^{−tr(Ψ^{−1}(Y−M)^T Σ^{−1}(Y−M))/2}
M ∈ IR^{n×m};            mean       M
Ψ ≻ 0 ∈ IR^{m×m};        covariance Ψ ⊗ Σ
Σ ≻ 0 ∈ IR^{n×n}

complex multivariate     PDF        (1/((2π)^{d/2} |Σ|^{1/2})) e^{−(z−µ)^H Σ^{−1}(z−µ)/2}
normal                   mean       µ
µ ∈ C^d;                 covariance Σ
Σ ≻ 0 ∈ C^{d×d}
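As a quick numerical sanity check of the univariate normal entry (an illustration, not part of the tables), the density φ(y | µ, σ²) can be summed on a fine grid to recover total mass 1 and the stated mean and variance; the parameter values are arbitrary.

```python
import math

def phi(y, mu, sigma2):
    """phi(y | mu, sigma^2), the N(mu, sigma^2) density from the table."""
    return math.exp(-(y - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

mu, sigma2, h = 1.5, 4.0, 0.001                   # illustration values
ys = [mu + k * h for k in range(-20000, 20001)]   # covers +-10 standard deviations
dens = [phi(y, mu, sigma2) for y in ys]
total = h * sum(dens)
mean = h * sum(y * p for y, p in zip(ys, dens))
var = h * sum((y - mu) ** 2 * p for y, p in zip(ys, dens))
print(total, mean, var)  # close to 1, mu, and sigma2
```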
chi-squared; χ²_ν        PDF        (1/(Γ(ν/2) 2^{ν/2})) y^{ν/2−1} e^{−y/2} I_{ĪR+}(y)
ν ∈ IR+ (if ν ∈ ZZ+,     mean       ν
it is called the         variance   2ν
degrees of freedom)

t                        PDF        (Γ((ν + 1)/2)/(Γ(ν/2) √(νπ))) (1 + y²/ν)^{−(ν+1)/2}
ν ∈ IR+                  mean       0
                         variance   ν/(ν − 2), for ν > 2

F                        PDF        (ν₁^{ν₁/2} ν₂^{ν₂/2} Γ((ν₁ + ν₂)/2) y^{ν₁/2−1}) /
                                    (Γ(ν₁/2) Γ(ν₂/2) (ν₂ + ν₁y)^{(ν₁+ν₂)/2}) I_{ĪR+}(y)
ν₁, ν₂ ∈ IR+             mean       ν₂/(ν₂ − 2), for ν₂ > 2
                         variance   2ν₂²(ν₁ + ν₂ − 2)/(ν₁(ν₂ − 2)²(ν₂ − 4)), for ν₂ > 4

Wishart                  PDF        (|W|^{(ν−d−1)/2}/(2^{νd/2} |Σ|^{ν/2} Γ_d(ν/2))) exp(−trace(Σ^{−1}W)/2) I_{{M | M ≻ 0 ∈ IR^{d×d}}}(W)
d = 1, 2, . . . ;        mean       νΣ
ν > d − 1, ν ∈ IR;       covariance Cov(W_{ij}, W_{kl}) = ν(σ_{ik}σ_{jl} + σ_{il}σ_{jk}), where Σ = (σ_{ij})
Σ ≻ 0 ∈ IR^{d×d}

noncentral chi-squared   PDF        (e^{−λ/2}/2^{ν/2}) y^{ν/2−1} e^{−y/2} Σ_{k=0}^{∞} ((λ/2)^k/k!) (y^k/(Γ(ν/2 + k) 2^k)) I_{ĪR+}(y)
ν, λ ∈ IR+               mean       ν + λ
                         variance   2(ν + 2λ)

noncentral t             PDF        (ν^{ν/2} e^{−λ²/2}/(Γ(ν/2) π^{1/2})) (ν + y²)^{−(ν+1)/2} ×
                                    Σ_{k=0}^{∞} Γ((ν + k + 1)/2) ((λy)^k/k!) (2/(ν + y²))^{k/2}
ν ∈ IR+, λ ∈ IR          mean       λ(ν/2)^{1/2} Γ((ν − 1)/2)/Γ(ν/2), for ν > 1
                         variance   (ν/(ν − 2))(1 + λ²) − λ²(ν/2)(Γ((ν − 1)/2)/Γ(ν/2))², for ν > 2

noncentral F             PDF        e^{−λ/2} (ν₁/ν₂)^{ν₁/2} (ν₂/(ν₂ + ν₁y))^{ν₁/2+ν₂/2} y^{ν₁/2−1} ×
                                    Σ_{k=0}^{∞} ((λ/2)^k Γ((ν₁ + ν₂)/2 + k)/(Γ(ν₂/2) Γ(ν₁/2 + k) k!)) (ν₁/ν₂)^k (ν₂/(ν₂ + ν₁y))^k y^k I_{ĪR+}(y)
ν₁, ν₂, λ ∈ IR+
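The tabulated moments can again be verified numerically; the sketch below (an illustration with an arbitrary ν) integrates the chi-squared density on a midpoint grid and recovers total mass 1, mean ν, and variance 2ν.

```python
import math

def chi2_pdf(y, nu):
    """Chi-squared density with nu degrees of freedom, as in the table."""
    return y ** (nu / 2 - 1) * math.exp(-y / 2) / (math.gamma(nu / 2) * 2 ** (nu / 2))

nu, h = 5.0, 0.0005                          # illustration values
ys = [(k + 0.5) * h for k in range(200000)]  # midpoint grid on (0, 100)
dens = [chi2_pdf(y, nu) for y in ys]
total = h * sum(dens)
mean = h * sum(y * p for y, p in zip(ys, dens))
var = h * sum((y - mean) ** 2 * p for y, p in zip(ys, dens))
print(total, mean, var)  # close to 1, nu = 5, and 2*nu = 10
```

The tail beyond y = 100 is negligible for ν = 5, so the truncated integral is accurate to well within the tolerances used here.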
Table D.6. Other Continuous Distributions (PDFs are wrt Lebesgue measure)
beta                 PDF       (Γ(α + β)/(Γ(α)Γ(β))) y^{α−1} (1 − y)^{β−1} I_{[0,1]}(y)
α, β ∈ IR+           mean      α/(α + β)
                     variance  αβ/((α + β)²(α + β + 1))

Dirichlet            PDF       (Γ(Σ_{i=1}^{d+1} α_i)/Π_{i=1}^{d+1} Γ(α_i)) (Π_{i=1}^{d} y_i^{α_i−1}) (1 − Σ_{i=1}^{d} y_i)^{α_{d+1}−1} I_{[0,1]^d}(y)
α ∈ IR_+^{d+1}       mean      α/‖α‖₁   (α_{d+1}/‖α‖₁ is the “mean of Y_{d+1}”.)
                     variance  α(‖α‖₁ − α)/(‖α‖₁²(‖α‖₁ + 1))

uniform; U(θ₁, θ₂)   PDF       (1/(θ₂ − θ₁)) I_{[θ₁,θ₂]}(y)
θ₁ < θ₂ ∈ IR         mean      (θ₂ + θ₁)/2
                     variance  (θ₂² − 2θ₁θ₂ + θ₁²)/12

Cauchy               PDF       1/(πβ(1 + ((y − γ)/β)²))
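The beta entry can be checked the same way (an illustration with arbitrary α and β, not part of the tables): midpoint integration on (0, 1) recovers total mass 1 and the tabulated mean and variance.

```python
import math

def beta_pdf(y, a, b):
    """Beta(a, b) density from the table."""
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return c * y ** (a - 1) * (1 - y) ** (b - 1)

a, b, h = 2.0, 3.0, 1e-5                      # illustration values
ys = [(k + 0.5) * h for k in range(100000)]   # midpoint grid on (0, 1)
dens = [beta_pdf(y, a, b) for y in ys]
total = h * sum(dens)
mean = h * sum(y * p for y, p in zip(ys, dens))
var = h * sum((y - mean) ** 2 * p for y, p in zip(ys, dens))
print(total, mean, var)  # close to 1, a/(a+b) = 0.4, and ab/((a+b)^2(a+b+1)) = 0.04
```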