
The University of Hong Kong

Department of Statistics and Actuarial Science


STAT2602B Probability and Statistics II
Semester 2 2023/2024
Assignment 1

Due Date: 14th February 2024


Answer ALL FOUR questions. Unless otherwise specified, numerical answers
should be either exact or correct to 4 significant figures.

1. The probability mass function of a negative binomial random variable X is
\[
f(x) = C_{k-1}^{x-1} p^k q^{x-k}, \quad \text{for } x = k, k+1, \dots.
\]
(a) Show that the moment-generating function of X is
\[
M_X(t) = \left(pe^t\right)^k \left(1-qe^t\right)^{-k}.
\]
Hints: Use the fact that \(\sum_x f(x) = 1\) to show
\[
p^{-k} = \sum_{x=k}^{\infty} C_{k-1}^{x-1} (1-p)^{x-k}.
\]
Take \(p = 1 - qe^t\); then
\[
\sum_{x=k}^{\infty} C_{k-1}^{x-1} \left(qe^t\right)^{x-k} = \left(1-qe^t\right)^{-k}.
\]
(b) Using (a), or otherwise, show that the mean and the variance of X are, respectively,
\[
\mu = \frac{k}{p} \quad\text{and}\quad \sigma^2 = \frac{kq}{p^2}.
\]

2. If X has the discrete uniform distribution with probability mass function
\[
f(x) = \frac{1}{k}, \quad \text{for } x = 1, 2, \dots, k,
\]
(a) show that the moment-generating function of X is
\[
M_X(t) = \frac{e^t\left(1-e^{kt}\right)}{k\left(1-e^t\right)};
\]
(b) using (a), find the mean of X.

Hints: Recall l'Hôpital's rule: if
\[
\lim_{x\to c} f(x) = \lim_{x\to c} g(x) = 0 \text{ or } \pm\infty,
\quad\text{and}\quad \lim_{x\to c} f'(x)/g'(x) \text{ exists},
\]
then
\[
\lim_{x\to c} f(x)/g(x) = \lim_{x\to c} f'(x)/g'(x).
\]


3. Let MX(t) be the moment-generating function of a random variable X and µ be the mean of X.

(a) Show that the moment-generating function of (X − µ) is
\[
M_{X-\mu}(t) = e^{-\mu t} M_X(t).
\]
(b) Show that the rth derivative of MX−µ(t) with respect to t at t = 0 gives the rth moment about the mean of X.
(c) The moment-generating function of a normal random variable X with mean µ and variance σ² is
\[
M_X(t) = \exp\left(\mu t + \frac{1}{2}\sigma^2 t^2\right).
\]
Using (a) and (b), find the skewness and the kurtosis of X, which are, respectively,
\[
\alpha_3 = \frac{E\left[(X-\mu)^3\right]}{\sigma^3}
\quad\text{and}\quad
\alpha_4 = \frac{E\left[(X-\mu)^4\right]}{\sigma^4}.
\]

4. Given a random sample of size n from a Pareto population whose probability density function is
\[
f(x) =
\begin{cases}
\dfrac{\alpha}{x^{\alpha+1}}, & x > 1, \\
0, & \text{otherwise},
\end{cases}
\]
where α > 0, let α̂ be the maximum likelihood estimator of α. Find α̂.

The University of Hong Kong
Department of Statistics and Actuarial Science
STAT2602B Probability and Statistics II
Semester 2 2023/2024
Assignment 1 Suggested Solution
1. (a) The moment-generating function of X is
\[
\begin{aligned}
M_X(t) &= \sum_x e^{tx} f(x)
        = \sum_{x=k}^{\infty} e^{tx} C_{k-1}^{x-1} p^k q^{x-k}
        = p^k \sum_{x=k}^{\infty} C_{k-1}^{x-1} e^{tx} q^x q^{-k} \\
       &= p^k \sum_{x=k}^{\infty} C_{k-1}^{x-1} \left(qe^t\right)^x \left(qe^t\right)^{-k} e^{kt} \\
       &= \left(pe^t\right)^k \sum_{x=k}^{\infty} C_{k-1}^{x-1} \left(qe^t\right)^{x-k}
        = \left(pe^t\right)^k \left(1-qe^t\right)^{-k}.
\end{aligned}
\]
The last step requires
\[
p^{-k} = \sum_{x=k}^{\infty} C_{k-1}^{x-1} (1-p)^{x-k}
\]
(applied with p replaced by \(1 - qe^t\)), arising from
\[
\sum_x f(x) = 1
\;\Rightarrow\; \sum_{x=k}^{\infty} C_{k-1}^{x-1} p^k q^{x-k} = 1
\;\Rightarrow\; p^k \sum_{x=k}^{\infty} C_{k-1}^{x-1} (1-p)^{x-k} = 1.
\]
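
Remarks (supplementary, not part of the original solution): the closed form can be sanity-checked numerically. The sketch below uses illustrative values k = 3, p = 0.4, t = 0.1 and truncates the defining sum at x = 2000, by which point the terms are negligible.

```python
# Supplementary check: compare the closed-form MGF against a truncated
# version of the defining sum. k, p, t are illustrative values.
import math

def nb_pmf(x, k, p):
    # f(x) = C(x-1, k-1) p^k q^(x-k), for x = k, k+1, ...
    q = 1.0 - p
    return math.comb(x - 1, k - 1) * p**k * q**(x - k)

def nb_mgf(t, k, p):
    # Closed form (p e^t)^k (1 - q e^t)^(-k); requires (1 - p) e^t < 1.
    q = 1.0 - p
    return (p * math.exp(t))**k * (1.0 - q * math.exp(t))**(-k)

k, p, t = 3, 0.4, 0.1
direct = sum(math.exp(t * x) * nb_pmf(x, k, p) for x in range(k, 2000))
assert abs(direct - nb_mgf(t, k, p)) < 1e-9
```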

(b) Evaluate the first-order and second-order derivatives of MX(t) with respect to t:
\[
\begin{aligned}
M_X'(t) &= kpe^t \left(pe^t\right)^{k-1} \left(1-qe^t\right)^{-k}
         - k\left(-qe^t\right) \left(1-qe^t\right)^{-k-1} \left(pe^t\right)^k \\
        &= \left(pe^t\right)^k \left(1-qe^t\right)^{-k-1}
           \left[k\left(1-qe^t\right) + kqe^t\right] \\
        &= k\left(pe^t\right)^k \left(1-qe^t\right)^{-k-1},
\end{aligned}
\]
\[
\begin{aligned}
M_X''(t) &= k^2 pe^t \left(pe^t\right)^{k-1} \left(1-qe^t\right)^{-k-1}
          + k(-k-1)\left(-qe^t\right) \left(1-qe^t\right)^{-k-2} \left(pe^t\right)^k \\
         &= \left(pe^t\right)^k \left(1-qe^t\right)^{-k-2}
            \left[k^2\left(1-qe^t\right) + k(k+1)qe^t\right] \\
         &= \left(pe^t\right)^k \left(1-qe^t\right)^{-k-2} \left(k^2 + kqe^t\right).
\end{aligned}
\]
Evaluating the derivatives at t = 0 gives
\[
\begin{aligned}
E(X) &= M_X'(0) = k p^k (1-q)^{-k-1} = k p^k p^{-k-1} = kp^{-1}, \\
E\left(X^2\right) &= M_X''(0) = p^k (1-q)^{-k-2} \left(k^2 + kq\right)
                 = p^k p^{-k-2} \left(k^2 + kq\right)
                 = \left(k^2 + kq\right) p^{-2}.
\end{aligned}
\]
 


Hence, we have
\[
\begin{aligned}
\mu &= E(X) = kp^{-1}, \\
\sigma^2 &= E\left(X^2\right) - \mu^2
          = \left(k^2 + kq\right) p^{-2} - k^2 p^{-2} = kqp^{-2}.
\end{aligned}
\]
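
Remarks (supplementary): the mean and variance formulas can be checked directly against the pmf. The sketch below uses the same illustrative values k = 3, p = 0.4 and a truncated sum.

```python
# Supplementary check: pmf-based mean and variance agree with k/p and kq/p^2
# for the illustrative values k = 3, p = 0.4.
import math

k, p = 3, 0.4
q = 1.0 - p

def pmf(x):
    # Negative binomial pmf: C(x-1, k-1) p^k q^(x-k)
    return math.comb(x - 1, k - 1) * p**k * q**(x - k)

xs = range(k, 2000)
mean = sum(x * pmf(x) for x in xs)
var = sum(x * x * pmf(x) for x in xs) - mean**2
assert abs(mean - k / p) < 1e-9        # k/p = 7.5
assert abs(var - k * q / p**2) < 1e-9  # kq/p^2 = 11.25
```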

2. The moment-generating function of X is
\[
M_X(t) = E\left(e^{tX}\right)
       = \frac{1}{k} \sum_{x=1}^{k} e^{tx}
       = \frac{1}{k} \times \frac{e^t\left(1-e^{kt}\right)}{1-e^t}.
\]
Note that by geometric sum,
\[
\begin{aligned}
S &= e^t + e^{2t} + \dots + e^{kt} \\
e^t S &= e^{2t} + e^{3t} + \dots + e^{kt} + e^{(k+1)t} \\
\Rightarrow \left(1-e^t\right) S &= e^t - e^{(k+1)t} = e^t\left(1-e^{kt}\right).
\end{aligned}
\]
 

To find the first moment of X, consider the first-order derivative of MX(t):
\[
\begin{aligned}
M_X'(t) &= \frac{e^t\left(1-e^{kt}\right)}{k\left(1-e^t\right)}
         - \frac{ke^{kt}e^t}{k\left(1-e^t\right)}
         + \frac{e^{2t}\left(1-e^{kt}\right)}{k\left(1-e^t\right)^2} \\
        &= \frac{e^t - (k+1)e^{(k+1)t} + ke^{(k+2)t}}{k\left(1-e^t\right)^2}.
\end{aligned}
\]
Then, evaluate the derivative at t = 0, applying l'Hôpital's rule twice:
\[
\begin{aligned}
\lim_{t\to 0} M_X'(t)
&= \lim_{t\to 0} \frac{e^t - (k+1)e^{(k+1)t} + ke^{(k+2)t}}{k\left(1-e^t\right)^2} \\
&= \lim_{t\to 0} \frac{e^t - (k+1)^2 e^{(k+1)t} + k(k+2)e^{(k+2)t}}{-2ke^t\left(1-e^t\right)} \\
&= \lim_{t\to 0} \frac{e^t - (k+1)^3 e^{(k+1)t} + k(k+2)^2 e^{(k+2)t}}{-2ke^t + 4ke^{2t}} \\
&= \frac{1 - (k+1)^3 + k(k+2)^2}{-2k + 4k}
 = \frac{k^2 + k}{2k}
 = \frac{k+1}{2}.
\end{aligned}
\]
Hence,
\[
\mu = E(X) = \lim_{t\to 0} M_X'(t) = \frac{k+1}{2}.
\]
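
Remarks (supplementary): with an illustrative k = 10, the closed-form MGF can be checked against the defining sum, and the mean (k + 1)/2 can be recovered by a central finite difference approximating M′(0).

```python
# Supplementary check for the discrete uniform MGF and its mean (k = 10 is
# an illustrative choice).
import math

k = 10

def mgf(t):
    # Closed form e^t (1 - e^{kt}) / (k (1 - e^t)); M(0) = 1 by definition.
    if t == 0.0:
        return 1.0
    return math.exp(t) * (1 - math.exp(k * t)) / (k * (1 - math.exp(t)))

t = 0.3
direct = sum(math.exp(t * x) for x in range(1, k + 1)) / k
assert abs(direct - mgf(t)) < 1e-9

h = 1e-5
mean_approx = (mgf(h) - mgf(-h)) / (2 * h)  # central difference ~ M'(0) = E(X)
assert abs(mean_approx - (k + 1) / 2) < 1e-4
```
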
3. (a) Consider
\[
M_{X-\mu}(t) = E\left[e^{t(X-\mu)}\right]
             = e^{-t\mu} E\left(e^{tX}\right)
             = e^{-\mu t} M_X(t).
\]


 


(b) Consider the Taylor series expansion of e^{t(x−µ)}:
\[
e^{t(x-\mu)} = \sum_{k=0}^{\infty} \frac{\left(t(x-\mu)\right)^k}{k!}
= 1 + t(x-\mu) + \frac{1}{2!} t^2 (x-\mu)^2 + \dots + \frac{1}{r!} t^r (x-\mu)^r + \dots.
\]
The moment-generating function of (X − µ) can be written as
\[
M_{X-\mu}(t) = 1 + tE(X-\mu) + \frac{1}{2!} t^2 E\left[(X-\mu)^2\right]
+ \dots + \frac{1}{r!} t^r E\left[(X-\mu)^r\right] + \dots.
\]
Consider the first-order and the second-order differentiation with respect to t:
\[
\begin{aligned}
\frac{d}{dt} M_{X-\mu}(t) &= E(X-\mu) + tE\left[(X-\mu)^2\right] + \dots
+ \frac{1}{(r-1)!} t^{r-1} E\left[(X-\mu)^r\right] + \dots, \\
\frac{d^2}{dt^2} M_{X-\mu}(t) &= E\left[(X-\mu)^2\right] + tE\left[(X-\mu)^3\right] + \dots
+ \frac{1}{(r-2)!} t^{r-2} E\left[(X-\mu)^r\right] + \dots.
\end{aligned}
\]
In general,
\[
\frac{d^r}{dt^r} M_{X-\mu}(t) = E\left[(X-\mu)^r\right] + \text{sum of terms in } t.
\]
Substituting t = 0, we obtain, for example,
\[
\left.\frac{d}{dt} M_{X-\mu}(t)\right|_{t=0} = E(X-\mu),
\qquad
\left.\frac{d^2}{dt^2} M_{X-\mu}(t)\right|_{t=0} = E\left[(X-\mu)^2\right].
\]
In general, we have
\[
\left.\frac{d^r}{dt^r} M_{X-\mu}(t)\right|_{t=0} = E\left[(X-\mu)^r\right].
\]
Remarks: Alternatively, let X = Y − µ in Theorem 1.1 to get
\[
\left.\frac{d^r}{dt^r} M_X(t)\right|_{t=0} = E\left(X^r\right)
\;\Rightarrow\;
\left.\frac{d^r}{dt^r} M_{Y-\mu}(t)\right|_{t=0} = E\left[(Y-\mu)^r\right].
\]
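
Remarks (supplementary): the result in (b) can be illustrated numerically. For a small discrete distribution whose values and probabilities are chosen arbitrarily, finite differences of M_{X−µ} at t = 0 reproduce the central moments.

```python
# Supplementary check of (b): finite differences of the MGF of X - mu at
# t = 0 approximate the central moments of an arbitrary toy distribution.
import math

xs, ps = [0.0, 1.0, 3.0], [0.2, 0.5, 0.3]
mu = sum(p * x for p, x in zip(ps, xs))  # E(X) = 1.4

def M(t):
    # MGF of X - mu
    return sum(p * math.exp(t * (x - mu)) for p, x in zip(ps, xs))

h = 1e-4
first = (M(h) - M(-h)) / (2 * h)             # ~ E(X - mu) = 0
second = (M(h) - 2 * M(0.0) + M(-h)) / h**2  # ~ E[(X - mu)^2]
m2 = sum(p * (x - mu) ** 2 for p, x in zip(ps, xs))
assert abs(first) < 1e-6
assert abs(second - m2) < 1e-5
```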

(c) Consider the moment-generating function of (X − µ):
\[
M_{X-\mu}(t) = e^{-\mu t} \exp\left(\mu t + \frac{1}{2}\sigma^2 t^2\right)
             = e^{\frac{1}{2}\sigma^2 t^2}.
\]
Evaluate the following derivatives of MX−µ(t) at t = 0:
\[
\begin{aligned}
M_{X-\mu}'(t) &= \sigma^2 t\, e^{\frac{1}{2}\sigma^2 t^2}
&&\Rightarrow\; M_{X-\mu}'(0) = 0, \\
M_{X-\mu}''(t) &= \sigma^2 e^{\frac{1}{2}\sigma^2 t^2}
+ \left(\sigma^2 t\right)^2 e^{\frac{1}{2}\sigma^2 t^2}
&&\Rightarrow\; M_{X-\mu}''(0) = \sigma^2, \\
M_{X-\mu}^{(3)}(t) &= 3\sigma^4 t\, e^{\frac{1}{2}\sigma^2 t^2}
+ \left(\sigma^2 t\right)^3 e^{\frac{1}{2}\sigma^2 t^2}
&&\Rightarrow\; M_{X-\mu}^{(3)}(0) = 0, \\
M_{X-\mu}^{(4)}(t) &= 3\sigma^4 e^{\frac{1}{2}\sigma^2 t^2}
+ 6\sigma^6 t^2 e^{\frac{1}{2}\sigma^2 t^2}
+ \left(\sigma^2 t\right)^4 e^{\frac{1}{2}\sigma^2 t^2}
&&\Rightarrow\; M_{X-\mu}^{(4)}(0) = 3\sigma^4.
\end{aligned}
\]
The skewness of X is
\[
\alpha_3 = \frac{E\left[(X-\mu)^3\right]}{\sigma^3}
         = \frac{\mu_3}{\sigma^3}
         = \frac{M_{X-\mu}^{(3)}(0)}{\sigma^3} = 0.
\]
The kurtosis of X is
\[
\alpha_4 = \frac{E\left[(X-\mu)^4\right]}{\sigma^4}
         = \frac{\mu_4}{\sigma^4}
         = \frac{M_{X-\mu}^{(4)}(0)}{\sigma^4} = 3.
\]
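
Remarks (supplementary): a Monte Carlo illustration of the same fact. The parameters µ = 2, σ = 1.5, the sample size, and the tolerances below are arbitrary choices; with a fixed seed the run is reproducible.

```python
# Supplementary Monte Carlo illustration: sample moments of a normal
# distribution give skewness near 0 and kurtosis near 3.
import random

random.seed(0)
mu, sigma, n = 2.0, 1.5, 200_000
xs = [random.gauss(mu, sigma) for _ in range(n)]
m = sum(xs) / n
m2 = sum((x - m) ** 2 for x in xs) / n   # sample variance
m3 = sum((x - m) ** 3 for x in xs) / n   # sample third central moment
m4 = sum((x - m) ** 4 for x in xs) / n   # sample fourth central moment
skew = m3 / m2 ** 1.5
kurt = m4 / m2 ** 2
assert abs(skew) < 0.05 and abs(kurt - 3.0) < 0.1
```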

4. The log-likelihood function is
\[
\ln L(\alpha; x_1, x_2, \dots, x_n)
= \ln\left(\prod_{i=1}^{n} \frac{\alpha}{x_i^{\alpha+1}}\right)
= \sum_{i=1}^{n} \ln\left(\frac{\alpha}{x_i^{\alpha+1}}\right)
= \sum_{i=1}^{n} \left(\ln\alpha - (\alpha+1)\ln x_i\right).
\]
Differentiate ln L(α; x₁, x₂, …, xₙ) with respect to α:
\[
\frac{d}{d\alpha} \ln L(\alpha; x_1, x_2, \dots, x_n)
= \sum_{i=1}^{n} \left(\frac{1}{\alpha} - \ln x_i\right).
\]
Set the first-order derivative to zero:
\[
\frac{d}{d\alpha} \ln L(\alpha; x_1, x_2, \dots, x_n) = 0
\;\Rightarrow\; \sum_{i=1}^{n} \left(\frac{1}{\alpha} - \ln x_i\right) = 0
\;\Rightarrow\; \alpha = \frac{n}{\sum_{i=1}^{n} \ln x_i}.
\]
Since \(\frac{d^2}{d\alpha^2} \ln L = -n/\alpha^2 < 0\), this stationary point is a maximum. Hence, the maximum likelihood estimator of α is
\[
\hat{\alpha} = \frac{n}{\sum_{i=1}^{n} \ln X_i}.
\]
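
Remarks (supplementary): the estimator can be checked by simulation. Since the Pareto CDF here is F(x) = 1 − x^{−α} for x > 1, inverse-CDF sampling gives X = (1 − U)^{−1/α} for U uniform on [0, 1). The values of α, n, the seed, and the tolerance below are illustrative.

```python
# Supplementary check: simulate Pareto(alpha) data by inverse-CDF sampling
# and confirm that the MLE n / sum(ln X_i) recovers alpha.
import math
import random

random.seed(1)
alpha, n = 2.5, 100_000
# (1 - random.random()) lies in (0, 1], so the power is always well defined.
xs = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]
alpha_hat = n / sum(math.log(x) for x in xs)
assert abs(alpha_hat - alpha) < 0.05
```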
