2.1 INTRODUCTION
2.2 PROBABILITY GENERATING FUNCTION (P.G.F) FOR DISCRETE RANDOM VARIABLES
The probability generating function (PGF) is a useful tool for dealing with discrete random variables taking values 0, 1, 2, …. Its particular strength is that it gives us an easy way of characterizing the distribution of X + Y when X and Y are independent. In general it is difficult to find the distribution of a sum using the traditional probability function. The PGF transforms a sum into a product and so makes it much easier to handle.
Sums of random variables are particularly important in the study of stochastic processes, because many stochastic processes are formed from the sum of a sequence of repeating steps: for example, the Gambler's Ruin.
The name probability generating function also gives us another clue to the role of the PGF: it can be used to generate all the probabilities of the distribution. This is generally tedious and is not often an efficient way of calculating probabilities. However, the fact that it can be done demonstrates that the PGF tells us everything there is to know about the distribution.
Definition 2.1: Let u(X) = s^X and suppose that E(s^X) exists. Then the probability generating function u(s), or G_X(s), of a random variable X taking values 0, 1, 2, … is defined to be
G_X(s) = E(s^X) = Σ_{x=0}^∞ s^x P(X = x).
We say that the probability generating function of X exists if there is a positive constant h such that G_X(s) is finite for |s| ≤ h. Since 0 ≤ P(X = x) ≤ 1 and Σ_x P(X = x) = 1, the sum always converges for |s| ≤ 1, so the PGF of a discrete random variable always exists at least on that interval.
Common sums for probability generating functions
1. Geometric Series:
1 + r + r² + r³ + ⋯ = Σ_{x=0}^∞ r^x = 1/(1 − r), when |r| < 1.
2. Geometric progression with common ratio r:
a + ar + ar² + ⋯ + ar^{n−1} = a(1 − r^n)/(1 − r) = first term × (1 − r^{number of terms})/(1 − r).
3. Binomial Theorem: For any p, q ∈ ℝ and integer n,
(p + q)^n = Σ_{x=0}^n (n choose x) p^x q^{n−x}; note that (n choose x) = n!/((n − x)! x!).
4. Exponential Power Series: For any λ ∈ ℝ,
Σ_{x=0}^∞ λ^x/x! = e^λ and lim_{n→∞} (1 + λ/n)^n = e^λ.
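These identities can be checked mechanically; here is a minimal sketch using sympy (the library choice and the specific test values are our own, not part of the notes):

import sympy as sp

x, lam = sp.symbols('x lam', positive=True)

# 1. Geometric series with r = 1/2: sum is 1/(1 - 1/2) = 2
print(sp.summation(sp.Rational(1, 2)**x, (x, 0, sp.oo)))          # 2

# 2. Finite geometric progression: a = 3, r = 1/2, n = 5 terms
a, r, n = 3, sp.Rational(1, 2), 5
print(sum(a * r**k for k in range(n)), a * (1 - r**n) / (1 - r))  # 93/16 both ways

# 3. Binomial theorem with n = 4, p = 3/10, q = 7/10: sum must be (p+q)^4 = 1
p, q = sp.Rational(3, 10), sp.Rational(7, 10)
print(sum(sp.binomial(4, k) * p**k * q**(4 - k) for k in range(5)))  # 1

# 4. Exponential power series
print(sp.summation(lam**x / sp.factorial(x), (x, 0, sp.oo)))      # exp(lam)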
Definition 2.2: The radius of convergence of a probability generating function is a number R > 0 such that the sum G_X(S) = Σ_{x=0}^∞ S^x P(X = x) converges if |S| < R and diverges (→ ∞) if |S| > R.
(No general statement is made about what happens when |S| = R.)
Note that the region of convergence {S : |S| < R} is always an interval symmetric about 0. This is surprising because the PGF itself is not usually symmetric about 0: i.e. G_X(−S) ≠ G_X(S) in general.
Q24. (a) If P(s) is the probability generating function for X, find the generating function for (X − a)/b. (b) Let X be a random variable with generating function P(s); find the generating function of (a) X + 1, (b) 2X.
ANSWER
(a) E(S^{(X−a)/b}) = S^{−a/b} · E(S^{X/b}) = S^{−a/b} · E[(S^{1/b})^X] = S^{−a/b} · P(S^{1/b})
(b) For X + 1: E(S^{X+1}) = S · Σ_x p_x S^x = S · E(S^X) = S · P(S).
For 2X: E(S^{2X}) = E[(S²)^X] = P(S²).
Q25. Calculate the probability generating function of a discrete uniform random variable with support {1/n, 2/n, …, n/n}:
P(X = k/n) = 1/n; k = 1, 2, …, n
ANSWER
Before anything we recall the formula for a geometric progression with common ratio S^{1/n}:
G_X(S) = Σ_{k=1}^n (1/n)(S^{1/n})^k = (1/n)[(S^{1/n})¹ + (S^{1/n})² + ⋯ + (S^{1/n})^n]
= (1/n) · S^{1/n}[1 − (S^{1/n})^n]/(1 − S^{1/n}) = (1/n) · S^{1/n}(1 − S)/(1 − S^{1/n})
Q26. Let X be the score shown by a fair six-sided die, so that its probability generating function is
G_X(S) = 0S⁰ + (1/6)S¹ + (1/6)S² + (1/6)S³ + (1/6)S⁴ + (1/6)S⁵ + (1/6)S⁶
Obtain the mean and variance of X.
ANSWER
G_X(S) = 0S⁰ + (1/6)S¹ + (1/6)S² + (1/6)S³ + (1/6)S⁴ + (1/6)S⁵ + (1/6)S⁶
G′_X(S) = (1/6)(1 + 2S + 3S² + 4S³ + 5S⁴ + 6S⁵)
G″_X(S) = (1/6)(2 + 6S + 12S² + 20S³ + 30S⁴)
Accordingly:
G_X(S = 1) = 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6 = 1
G′_X(S = 1) = (1 + 2 + 3 + 4 + 5 + 6)/6 = 21/6 = 7/2
G″_X(S = 1) = (2 + 6 + 12 + 20 + 30)/6 = 70/6 = 35/3
The variance is obtained as
Var(X) = G″_X(S = 1) + G′_X(S = 1) − (G′_X(S = 1))²
= 35/3 + 7/2 − (7/2)² = 35/3 + 7/2 − 49/4
= (140 + 42 − 147)/12 = 35/12
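The same derivative-at-1 calculation can be reproduced symbolically; a minimal sympy sketch (our own check, not part of the notes):

import sympy as sp

S = sp.symbols('S')
G = sum(sp.Rational(1, 6) * S**k for k in range(1, 7))   # PGF of a fair die

mean = sp.diff(G, S).subs(S, 1)                          # G'(1) = 7/2
g2 = sp.diff(G, S, 2).subs(S, 1)                         # G''(1) = 35/3
print(mean, g2 + mean - mean**2)                         # 7/2 35/12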
Q27. Two identical dice were thrown, so that G_X(S) = G_Y(S) = G(S), and the square G² gives the generating function for X + Y:
G_X(S) = G_Y(S) = G(S) = 0S⁰ + (1/6)S¹ + (1/6)S² + (1/6)S³ + (1/6)S⁴ + (1/6)S⁵ + (1/6)S⁶
and
G_{X+Y}(S) = (G(S))² = 0S⁰ + 0S¹ + (1/36)S² + (2/36)S³ + (3/36)S⁴ + (4/36)S⁵ + (5/36)S⁶ + (6/36)S⁷ + (5/36)S⁸ + (4/36)S⁹ + (3/36)S¹⁰ + (2/36)S¹¹ + (1/36)S¹²
Obtain the expectation and variance of the distribution.
ANSWER
G′_{X+Y}(S) = (2/36)S + (6/36)S² + (12/36)S³ + (20/36)S⁴ + (30/36)S⁵ + (42/36)S⁶ + (40/36)S⁷ + (36/36)S⁸ + (30/36)S⁹ + (22/36)S¹⁰ + (12/36)S¹¹
G″_{X+Y}(S) = (2/36) + (12/36)S + (36/36)S² + (80/36)S³ + (150/36)S⁴ + (252/36)S⁵ + (280/36)S⁶ + (288/36)S⁷ + (270/36)S⁸ + (220/36)S⁹ + (132/36)S¹⁰
Accordingly:
G_{X+Y}(S = 1) = (1 + 2 + 3 + 4 + 5 + 6 + 5 + 4 + 3 + 2 + 1)/36 = 36/36 = 1
G′_{X+Y}(S = 1) = (2 + 6 + 12 + 20 + 30 + 42 + 40 + 36 + 30 + 22 + 12)/36 = 252/36 = 7
G″_{X+Y}(S = 1) = (2 + 12 + 36 + 80 + 150 + 252 + 280 + 288 + 270 + 220 + 132)/36 = 1722/36 = 287/6
The variance is obtained as
Var(X + Y) = G″_{X+Y}(S = 1) + G′_{X+Y}(S = 1) − (G′_{X+Y}(S = 1))²
= 287/6 + 7 − 7² = 287/6 − 42 = 35/6
Q28. Let X be a discrete random variable with probability generating function (PGF)
G_X(S) = (S/5)(2 + 3S²). Find the distribution of X.
ANSWER
G_X(S) = (2/5)S + (3/5)S³: G_X(S = 0) = P(X = 0) = 0
G′_X(S) = 2/5 + (9/5)S²: G′_X(S = 0) = P(X = 1) = 2/5
G″_X(S) = (18/5)S: (1/2!)G″_X(S = 0) = P(X = 2) = 0
G‴_X(S) = 18/5: (1/3!)G‴_X(S = 0) = P(X = 3) = (1/6)(18/5) = 3/5
G_X^{(r)}(S) = 0 ∀ r ≥ 4: (1/r!)G_X^{(r)}(S = 0) = P(X = r) = 0 ∀ r ≥ 4
Thus,
X = 1 with probability 2/5, and X = 3 with probability 3/5.
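Reading probabilities off a PGF is exactly reading off Taylor coefficients at 0; a minimal sympy sketch of Q28 (our own check):

import sympy as sp

S = sp.symbols('S')
G = S * (2 + 3 * S**2) / 5                 # the PGF from Q28

# coefficients of S^0, S^1, S^2, S^3 are P(X=0), ..., P(X=3)
probs = sp.Poly(G, S).all_coeffs()[::-1]
print(probs)                               # [0, 2/5, 0, 3/5]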
Q29. Let X ~ Bern(p); find the probability generating function (PGF) of X and hence find the mean and variance of X.
ANSWER
G_X(S) = Σ_{x=0}^1 S^x P(X = x) = S⁰q + S¹p = Sp + q
G′_X(S) = d/dS (Sp + q) = p
G″_X(S) = d/dS [G′_X(S)] = d/dS (p) = 0
G_X(S = 1) = 1·p + q = p + q = 1
μ₁′ = G′_X(S = 1) = p
μ₁′ = p = mean
μ₂′ = G″_X(S = 1) = 0
Var(X) = G″_X(S = 1) + G′_X(S = 1) − (G′_X(S = 1))²
= 0 + p − p² = p(1 − p)
= pq
Q30. Derive the probability generating function of a random variable having the Binomial distribution. Hence, or otherwise, obtain the mean and variance of X.
ANSWER
P(x) = (n choose x) p^x (1 − p)^{n−x}; x = 0, 1, 2, …, n
G_X(S) = E(S^X) = Σ_{x=0}^n S^x (n choose x) p^x (1 − p)^{n−x} = Σ_{x=0}^n (n choose x)(Sp)^x (1 − p)^{n−x}   (∗)
By the Binomial theorem,
(a + b)^n = a^n + (n choose 1) a^{n−1}b + (n choose 2) a^{n−2}b² + ⋯ + b^n = Σ_x (n choose x) a^x b^{n−x}   (∗∗)
so, applying (∗∗) to (∗) with a = Sp and b = q = 1 − p,
G_X(S) = (Sp + q)^n
G′_X(S) = d/dS ((Sp + q)^n) = np(Sp + q)^{n−1}
G″_X(S) = d/dS [np(Sp + q)^{n−1}] = n(n − 1)p²(Sp + q)^{n−2}
μ₁′ = G′_X(S = 1) = np(p + q)^{n−1} = np = mean
G″_X(S = 1) = n(n − 1)p²
Var(X) = G″_X(S = 1) + G′_X(S = 1) − (G′_X(S = 1))²
= n(n − 1)p² + np − n²p² = np − np² = np(1 − p)
= npq
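A quick symbolic confirmation of the Binomial mean and variance from this PGF; a minimal sympy sketch (our own check):

import sympy as sp

S, p, n = sp.symbols('S p n', positive=True)
G = (S * p + 1 - p)**n                    # Binomial PGF with q = 1 - p

mean = sp.simplify(sp.diff(G, S).subs(S, 1))
var = sp.simplify(sp.diff(G, S, 2).subs(S, 1) + mean - mean**2)
print(mean, var)                          # n*p, and an expression equal to n*p*(1 - p) = npq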
Q31. We have a pile of identical biased coins, each of which has a probability p of landing heads when tossed. Suppose we take 3 of these coins for ourselves and hand 5 others to a friend. We then toss our coins and count the number of heads, clearly a number in the range 0 to 3. Our friend (independently) tosses the other coins and again counts the number of heads, a number in the range 0 to 5. We then add our head count to our friend's head count. Obtain the probability generating function of the combined experiment. Hence, or otherwise, find the expected value and variance of the distribution.
ANSWER
Let X be the random variable whose value r is the number of heads we obtain, and Y the random variable whose value t is the number of heads our friend obtains. Clearly, X is distributed Binomial(3, p) and Y is distributed Binomial(5, p). It will be shown that X + Y is distributed Binomial(8, p). Given that X and Y are binomially distributed:
P(X = r) = (3 choose r) p^r q^{3−r}; r = 0, 1, 2, 3 and P(Y = t) = (5 choose t) p^t q^{5−t}; t = 0, 1, 2, 3, 4, 5
G_X(S) = Σ_{r=0}^3 S^r (3 choose r) p^r q^{3−r} = Σ_{r=0}^3 (3 choose r)(Sp)^r q^{3−r} = (q + Sp)³
and
G_Y(S) = Σ_{t=0}^5 S^t (5 choose t) p^t q^{5−t} = Σ_{t=0}^5 (5 choose t)(Sp)^t q^{5−t} = (q + Sp)⁵
Then, since X and Y are independent,
G_{X+Y}(S) = G_X(S) · G_Y(S) = (q + Sp)³(q + Sp)⁵ = (q + Sp)⁸
Since (q + Sp)⁸ is the PGF of the Binomial(8, p) distribution, X + Y ~ Bin(8, p) by uniqueness of PGFs.
G′_{X+Y}(S) = d/dS ((Sp + q)⁸) = 8p(Sp + q)⁷
G″_{X+Y}(S) = d/dS [8p(Sp + q)⁷] = 8(7)p²(Sp + q)⁶ = 56p²(Sp + q)⁶
At S = 1: G′_{X+Y}(1) = 8p and G″_{X+Y}(1) = 56p², so
E(X + Y) = 8p
Var(X + Y) = G″_{X+Y}(1) + G′_{X+Y}(1) − (G′_{X+Y}(1))²
= 56p² + 8p − 64p² = 8p − 8p²
= 8p(1 − p) = 8pq
which are the mean and variance of the Binomial(8, p) distribution, as expected.
In general, for X ~ Bin(n, p):
P(x) = (n choose x) p^x (1 − p)^{n−x}; x = 0, 1, 2, …, n
G_X(S) = Σ_{x=0}^n (n choose x)(Sp)^x (1 − p)^{n−x}   (∗)
G_X(S) = (Sp + (1 − p))^n = (Sp + q)^n, by the Binomial theorem: true for all S.
Q33. The set of probabilities for the Geometric distribution can be defined as
P(X = x) = p q^x; x = 0, 1, 2, …, where q = 1 − p.
Obtain the probability generating function, G_X(S); hence or otherwise, find the expectation and variance of X.
ANSWER
Remember, this represents x successive failures (each of probability q) before a single success (probability p). Note that x is unbounded; there can be an indefinite number of failures before the first success.
G_X(S) = Σ_{x=0}^∞ S^x P(X = x) = Σ_{x=0}^∞ S^x p q^x
= p Σ_{x=0}^∞ (qS)^x
The sum is easily recognized as an infinite geometric progression, the expansion of
1 + qS + q²S² + q³S³ + ⋯ = 1/(1 − qS)
G_X(S) = p · 1/(1 − qS) = p/(1 − qS) = p(1 − qS)^{−1} for |S| < 1/q
G′_X(S) = p d/dS [(1 − qS)^{−1}] = pq(1 − qS)^{−2}
G″_X(S) = d/dS [G′_X(S)] = pq d/dS [(1 − qS)^{−2}] = 2pq²(1 − qS)^{−3}
G_X(S = 1) = p/(1 − q·1) = p/(1 − q) = p/p = 1
μ₁′ = G′_X(S = 1) = pq/(1 − q)² = pq/p² = q/p = (1 − p)/p = mean
G″_X(S = 1) = 2pq²/(1 − q)³ = 2pq²/p³
= 2q²/p²
Var(X) = G″_X(1) + G′_X(1) − (G′_X(1))²
= 2q²/p² + q/p − q²/p² = q²/p² + q/p = (q² + pq)/p² = q(q + p)/p², where p + q = 1
= q/p² = (1 − p)/p²
Note: both the mean and variance of the Geometric distribution are difficult to derive without using
the generating function.
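The two Geometric results can still be confirmed symbolically once the PGF is in hand; a minimal sympy sketch (our own check):

import sympy as sp

S, p = sp.symbols('S p', positive=True)
q = 1 - p
G = p / (1 - q * S)                    # Geometric(p) PGF on 0, 1, 2, ...

g1 = sp.diff(G, S).subs(S, 1)
g2 = sp.diff(G, S, 2).subs(S, 1)
print(sp.simplify(g1))                 # (1 - p)/p      = q/p
print(sp.simplify(g2 + g1 - g1**2))    # (1 - p)/p**2   = q/p^2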
Q34. Let X have a Geometric distribution with p = 0.8, so that P(X = x) = (0.8)(0.2)^x for x = 0, 1, 2, …. Find the PGF of X and state the values of S for which it is valid.
ANSWER
G_X(S) = Σ_{x=0}^∞ S^x p(1 − p)^x = Σ_{x=0}^∞ S^x (0.8)(0.2)^x
= 0.8/(1 − 0.2S) for all S such that |0.2S| < 1.
This is valid for all S with |0.2S| < 1, so it is valid for all S with |S| < 1/0.2 = 5 (i.e. −5 < S < 5).
Q35. My aunt was asked by her neighbours to feed the prize goldfish in their garden pond while they were on holiday. Although my aunt dutifully went and fed them every day, she never saw a single fish for the whole three weeks. It turned out that all the fish had been eaten by a heron when she wasn't looking!
Let N be the number of times the heron visits the pond during the neighbours' absence. Suppose that N ~ Geometric(1 − λ), so P(N = n) = (1 − λ)λ^n, for n = 0, 1, 2, …. When the heron visits the pond it has probability p of catching a prize goldfish, independently of what happens on any other visit. (This assumes that there are infinitely many goldfish to be caught!)
Find the distribution of T = total number of goldfish caught.
ANSWER
Let X_i = 1 if the heron catches a goldfish on visit i, and 0 otherwise, so each X_i ~ Bernoulli(p) and T = X₁ + ⋯ + X_N. Now
G_X(S) = E(S^X) = Σ_x S^x P(X = x) = S⁰ × P(X = 0) + S¹ × P(X = 1) = q + pS
= 1 − p + pS.
Also,
G_N(r) = Σ_{n=0}^∞ r^n P(N = n) = Σ_{n=0}^∞ r^n (1 − λ)λ^n
= (1 − λ)[1 + λr + λ²r² + λ³r³ + ⋯]
= (1 − λ)/(1 − λr)   (r < 1/λ)
So, by the randomly stopped sum result (see Q39 below),
G_T(S) = G_N(G_X(S)) = (1 − λ)/(1 − λG_X(S))   (putting r = G_X(S))
Giving:
G_T(S) = (1 − λ)/(1 − λ(1 − p + pS)), where G_X(S) = 1 − p + pS
= (1 − λ)/(1 − λ + λp − λpS)
[Could this be geometric? i.e. G_T(S) = (1 − π)/(1 − πS) for some π?]
G_T(S) = (1 − λ)/((1 − λ + λp) − λpS)   (divide both numerator and denominator by 1 − λ + λp)
= [(1 − λ)/(1 − λ + λp)] / [((1 − λ + λp) − λpS)/(1 − λ + λp)]
= [(1 − λ + λp − λp)/(1 − λ + λp)] / [1 − (λp/(1 − λ + λp))S]
= [1 − λp/(1 − λ + λp)] / [1 − (λp/(1 − λ + λp))S]
This is the PGF of the Geometric(1 − λp/(1 − λ + λp)) distribution, so by uniqueness of PGFs, and since 1 − λp/(1 − λ + λp) = (1 − λ)/(1 − λ + λp), we have:
T ~ Geometric((1 − λ)/(1 − λ + λp))
Note: we could have solved Q35 without using the PGF, but it is much more difficult. PGFs are very useful for dealing with sums of random variables, which are difficult to tackle using the standard probability function.
ALTERNATIVE METHOD
Here are the first few steps of solving Q35 without the PGF. Recall the problem:
Let N ~ Geometric(1 − λ), so P(N = n) = (1 − λ)λ^n;
Let X₁, X₂, … be independent of each other and of N, with X_i ~ Bin(1, p)
(remember X_i = 1 with probability p, and 0 otherwise);
Let T = X₁ + X₂ + ⋯ + X_N be the randomly stopped sum.
Find the distribution of T.
Without using the PGF, we would tackle this by looking for an expression for P(T = t) for any t. Once we have obtained that expression, we might be able to see that T has a distribution we recognize (e.g. Geometric), or otherwise we would just state that T is defined by the probability function we have obtained.
To find P(T = t), we have to partition over different values of N:
P(T = t) = Σ_{n=0}^∞ P(T = t | N = n) P(N = n)   (⋆⋆)
We can evaluate the sum in (⋆⋆) using the fact that Negative Binomial probabilities sum to 1. We can try this if we like, but it is quite tricky.
[Hint: use the Negative Binomial(t + 1, 1 − λ(1 − p)) distribution.]
Overall, we obtain the same answer, that T ~ Geometric((1 − λ)/(1 − λ + λp)), but hopefully we can see why the PGF is so useful.
Without the PGF, we have two major difficulties:
1. Writing down 𝑃(𝑇 = 𝑡| 𝑁 = 𝑛);
2. Evaluating the sum over 𝑛 𝑖𝑛 (⋆⋆).
For a general problem, both of these steps might be too difficult to do without a computer. The
PGF has none of these difficulties, and even if 𝐺𝑇 (𝑆) does not simplify readily, it still tells us
everything there is to know about the distribution of 𝑇.
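The conclusion T ~ Geometric((1 − λ)/(1 − λ + λp)) is also easy to sanity-check by simulation; a minimal numpy sketch with arbitrary parameter values (our own check, not part of the notes):

import numpy as np

rng = np.random.default_rng(0)
lam, p, reps = 0.7, 0.4, 200_000

# N ~ Geometric(1 - lam) on 0, 1, 2, ...; numpy's geometric counts trials, so subtract 1
N = rng.geometric(1 - lam, size=reps) - 1
T = rng.binomial(N, p)                      # N Bernoulli(p) catches, summed

pi = (1 - lam) / (1 - lam + lam * p)        # claimed Geometric parameter
print(T.mean(), (1 - pi) / pi)              # simulated vs theoretical mean
print((T == 0).mean(), pi)                  # P(T = 0) should be close to pi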
Q36. The set of probabilities for the Poisson distribution (i.e. X ~ Po(λ)) can be defined as:
P(X = x) = (λ^x/x!) e^{−λ}, where x = 0, 1, …
Obtain the probability generating function. Hence or otherwise, find E(X) and Var(X).
ANSWER
G_X(S) = E(S^X) = Σ_{x=0}^∞ S^x (λ^x/x!) e^{−λ} = e^{−λ} Σ_{x=0}^∞ S^x λ^x/x!
= e^{−λ} Σ_{x=0}^∞ (λS)^x/x! = e^{−λ}[(λS)⁰/0! + (λS)¹/1! + (λS)²/2! + (λS)³/3! + ⋯]
G_X(S) = e^{−λ} · e^{λS} = e^{−λ(1−S)}
G′_X(S) = d/dS [e^{−λ(1−S)}] = λe^{−λ(1−S)}
G″_X(S) = d/dS [G′_X(S)] = λ d/dS [e^{−λ(1−S)}] = λ²e^{−λ(1−S)}
G_X(S = 1) = e^{−λ(1−1)} = e⁰ = 1
μ₁′ = G′_X(S = 1) = λe⁰ = λ·1 = λ
μ₁′ = λ = mean
G″_X(S = 1) = λ²e⁰ = λ²·1 = λ²
Var(X) = G″_X(1) + G′_X(1) − (G′_X(1))² = λ² + λ − (λ)² = λ² + λ − λ²
= λ
Q37. Suppose that X₁, …, Xₙ are independent random variables and let Y = X₁ + ⋯ + Xₙ. Show that G_Y(S) = G_{X₁}(S) ⋯ G_{Xₙ}(S).
ANSWER
G_Y(S) = E(S^Y) = E(S^{X₁+⋯+Xₙ})
= E(S^{X₁} S^{X₂} S^{X₃} ⋯ S^{Xₙ})
= E(S^{X₁}) E(S^{X₂}) ⋯ E(S^{Xₙ})   (because X₁, …, Xₙ are independent)
= G_{X₁}(S) G_{X₂}(S) ⋯ G_{Xₙ}(S)
Q38. Suppose that X and Y are independent with X ~ Poisson(λ) and Y ~ Poisson(θ). Find the distribution of X + Y using probability generating functions.
ANSWER
If X ~ Poisson(λ), then the probability generating function is
G_X(S) = e^{λ(S−1)}
Similarly, if Y ~ Poisson(θ), then
G_Y(S) = e^{θ(S−1)}
Therefore, the PGF of X + Y is obtained as follows:
G_{X+Y}(S) = E(S^{X+Y}) = E(S^X · S^Y) = E(S^X)E(S^Y)
= G_X(S) · G_Y(S)
= e^{(λ+θ)(S−1)}
This is the probability generating function (PGF) of the Poisson(λ + θ) distribution. So, by the uniqueness of PGFs, X + Y ~ Poisson(λ + θ).
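A one-line symbolic confirmation of this product identity; a minimal sympy sketch (our own check):

import sympy as sp

S, lam, th = sp.symbols('S lam theta', positive=True)
GX, GY = sp.exp(lam * (S - 1)), sp.exp(th * (S - 1))
print(sp.simplify(GX * GY - sp.exp((lam + th) * (S - 1))))   # 0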
Q39. Let X₁, X₂, … be a sequence of independent and identically distributed random variables with common PGF G_X. Let N be a random variable, independent of the X_i's, with PGF G_N, and let T_N = X₁ + ⋯ + X_N = Σ_{i=1}^N X_i.
Show that the PGF of T_N is G_{T_N}(S) = G_N(G_X(S)). Find the mean of T_N.
ANSWER
G_{T_N}(S) = E(S^{T_N}) = E(S^{X₁+⋯+X_N})
Conditioning on N and using the independence of the X_i's,
E(S^{T_N} | N = n) = E(S^{X₁}) ⋯ E(S^{Xₙ}) = (G_X(S))ⁿ,
so G_{T_N}(S) = E[(G_X(S))^N] = G_N(G_X(S)).
For the mean, differentiate with the chain rule and set S = 1: since G_X(1) = 1,
E(T_N) = G′_{T_N}(1) = G_N′(G_X(1)) · G_X′(1) = G_N′(1) · G_X′(1) = E(N)E(X₁).
Q40. Cash machine model. N customers arrive during the day. Customer i withdraws amount X_i. The total amount withdrawn during the day is T_N = X₁ + ⋯ + X_N. Use the laws of total expectation and variance to show that E(T_N) = μE(N) and Var(T_N) = σ²E(N) + μ²Var(N), where μ = E(X_i) and σ² = Var(X_i).
ANSWER
E(T_N) = G′_{T_N}(S = 1) = d/dS [G_N(G_{X_i}(S))] at S = 1
= G_N′(G_{X_i}(S)) · G′_{X_i}(S) at S = 1
= G_N′(1) · G′_{X_i}(1) = E(N)μ = μE(N)
Similarly, differentiating twice gives E[T_N(T_N − 1)] = G″_{T_N}(1) = G_N″(1)μ² + G_N′(1)E[X(X − 1)], and
Var(T_N) = E[T_N(T_N − 1)] + E(T_N) − (E(T_N))²;
substituting and simplifying yields Var(T_N) = σ²E(N) + μ²Var(N).
Q41. Let X ~ NB(r, p). Obtain the probability generating function. Hence or otherwise find the expectation and variance of the distribution of X.
ANSWER
If X is distributed negative binomial(r, p) with probability mass function
f(X = x) = (x−1 choose r−1) p^r (1 − p)^{x−r}; x = r, r + 1, r + 2, …
To obtain the probability generating function:
G_X(S) = E(S^X) = Σ_{x=r}^∞ S^x f(X = x) = Σ_{x=r}^∞ S^x (x−1 choose r−1) p^r (1 − p)^{x−r}
= Σ_{x=r}^∞ S^{x−r+r} (x−1 choose r−1) p^r q^{x−r}, where x = x − r + r and q = 1 − p
= Σ_{x=r}^∞ S^{x−r} · S^r (x−1 choose r−1) p^r q^{x−r} = Σ_{x=r}^∞ (x−1 choose r−1)(pS)^r (qS)^{x−r}
= (pS)^r Σ_{x=r}^∞ (x−1 choose r−1)(qS)^{x−r} = (pS)^r (1 − qS)^{−r}
where Σ_{x=r}^∞ (x−1 choose r−1)(qS)^{x−r} = (1 − qS)^{−r} is the negative binomial series (an infinite series).
G_X(S) = (pS/(1 − qS))^r
So the generating function and its first two derivatives are:
G_X(S) = (pS/(1 − qS))^r
G′_X(S) = r p^r S^{r−1}/(1 − qS)^{r+1}
G″_X(S) = d/dS [G′_X(S)] = r p^r S^{r−2}[(r − 1)(1 − qS) + (r + 1)qS]/(1 − qS)^{r+2}
G_X(S = 1) = (p·1/(1 − q·1))^r = (p/(1 − q))^r = (p/p)^r = 1^r = 1
μ₁′ = G′_X(S = 1) = r p^r · 1^{r−1}/(1 − q)^{r+1} = r p^r/p^{r+1} = r/p
μ₁′ = r/p = mean
G″_X(S = 1) = r p^r [(r − 1)p + q(r + 1)]/p^{r+2}
= r[rp − p + rq + q]/p²
= (r/p²)[r(p + q) + (q − p)], where p + q = 1
= (r/p²)(r + q − p) = r(r + q − p)/p²
Var(X) = G″_X(1) + G′_X(1) − (G′_X(1))²
= r(r + q − p)/p² + r/p − (r/p)² = r(r + q − p)/p² + r/p − r²/p²
= (r² + rq − rp + rp − r²)/p²
= rq/p² = r(1 − p)/p²
Q42. Calculate the probability generating function (PGF) of the Uniform(a, b) distribution.
ANSWER
If X is distributed uniform(a, b) with probability density function
f(X = x) = 1/(b − a); a < x < b
To obtain the probability generating function:
G_X(S) = E(S^X) = ∫_a^b S^x f(X = x) dx = ∫_a^b S^x · 1/(b − a) dx
= (1/(b − a)) ∫_a^b S^x dx = (1/(b − a)) [S^x/ln S]_{x=a}^{x=b}
= (1/(b − a)) ((S^b − S^a)/ln S)
= (S^b − S^a)/((b − a) ln S)
Hence, the probability generating function G_X(S) is obtained as
G_X(S) = (S^b − S^a)/((b − a) ln S)
2.3 MOMENT GENERATING FUNCTION (M.G.F) FOR DISCRETE AND CONTINUOUS RANDOM VARIABLES
Let u(x) = e^{tx} and suppose E(e^{tX}) exists; then the moment generating function m(t) or M_X(t) for a random variable is defined to be E(e^{tX}). We say that the moment generating function for X exists if there is a positive constant h such that M_X(t) is finite for |t| ≤ h.
e^{tx} = 1 + xt/1! + (xt)²/2! + ⋯ + (xt)^r/r! + ⋯
For a discrete random variable,
M_X(t) = E(e^{tX}) = Σ_x [1 + xt/1! + (xt)²/2! + ⋯ + (xt)^r/r! + ⋯] p(x)
= Σ_x p(x) + t Σ_x x p(x) + (t²/2!) Σ_x x² p(x) + ⋯ + (t^r/r!) Σ_x x^r p(x) + ⋯
= 1 + tμ₁′ + (t²/2!)μ₂′ + ⋯ + (t^r/r!)μ_r′ + ⋯
M_X(t) = 1 + Σ_{r=1}^∞ (t^r/r!) μ_r′ for a discrete random variable.
For a continuous random variable the sums become integrals:
M_X(t) = ∫ p(x) dx + t ∫ x p(x) dx + (t²/2!) ∫ x² p(x) dx + ⋯ + (t^r/r!) ∫ x^r p(x) dx + ⋯
= 1 + tμ₁′ + (t²/2!)μ₂′ + ⋯ + (t^r/r!)μ_r′ + ⋯
so M_X(t) = 1 + Σ_{r=1}^∞ (t^r/r!) μ_r′ for a continuous random variable as well. In either case, μ_r′ = E(X^r) appears as the coefficient of t^r/r! in M_X(t).
Theorem 2.1: If M_X(t) exists, then the r-th raw moment of X is obtained by differentiating r times and evaluating at t = 0: μ_r′ = E(X^r) = M_X^{(r)}(t = 0).
For example, let X take the values 0, 1, 2 with probabilities 1/3, 1/2, 1/6 respectively. Then
M_X(t) = E(e^{tX}) = (1/3)e^{0t} + (1/2)e^t + (1/6)e^{2t}
= 1/3 + e^t/2 + e^{2t}/6
To find the first and second moments:
E(X) = M′_X(t) = d/dt (M_X(t)) = d/dt (1/3 + e^t/2 + e^{2t}/6)
= e^t/2 + e^{2t}/3
E(X²) = M″_X(t) = d/dt (M′_X(t)) = d/dt (e^t/2 + e^{2t}/3) = e^t/2 + 2e^{2t}/3
To find the mean and variance:
E(X) = M′_X(t = 0) = e⁰/2 + e^{2(0)}/3 = 1/2 + 1/3 = (3 + 2)/6 = 5/6
E(X²) = M″_X(t = 0) = e⁰/2 + 2e^{2(0)}/3 = 1/2 + 2/3 = (3 + 4)/6 = 7/6
Var(X) = M″_X(t = 0) − (M′_X(t = 0))² = 7/6 − (5/6)² = 7/6 − 25/36 = (42 − 25)/36 = 17/36
Q45. Suppose X has the moment generating function
M_X(t) = (1 − 2t)^{−1/2}
Find the first and second moments of X. Then obtain the variance of the distribution of X.
ANSWER
To find the first and second moments:
M′_X(t) = d/dt (M_X(t)) = d/dt ((1 − 2t)^{−1/2}) = −(1/2)(1 − 2t)^{−1/2−1}(−2) = (1 − 2t)^{−3/2}
M″_X(t) = d/dt (M′_X(t)) = d/dt ((1 − 2t)^{−3/2}) = −(3/2)(1 − 2t)^{−3/2−1}(−2) = 3(1 − 2t)^{−5/2}
To find the mean and variance:
E(X) = M′_X(t = 0) = (1 − 2(0))^{−3/2} = 1
E(X²) = M″_X(t = 0) = 3(1 − 2(0))^{−5/2} = 3 × 1 = 3
Var(X) = M″_X(t = 0) − (M′_X(t = 0))² = 3 − (1)² = 3 − 1 = 2
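The differentiation is easy to confirm symbolically; a minimal sympy sketch (our own check — incidentally, (1 − 2t)^{−1/2} is the chi-square MGF with one degree of freedom):

import sympy as sp

t = sp.symbols('t')
M = (1 - 2 * t)**sp.Rational(-1, 2)
m1 = sp.diff(M, t).subs(t, 0)
m2 = sp.diff(M, t, 2).subs(t, 0)
print(m1, m2, m2 - m1**2)    # 1 3 2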
Q46. Suppose that we have a fair 5-sided die and let X be the random variable representing the value of the number rolled.
(a) Write down the moment generating function for X.
(b) Use this moment generating function to compute the first and second moments and the variance of X.
ANSWER
(a) The moment generating function is obtained as follows:
M_X(t) = E(e^{tX}) = Σ_{x=1}^5 e^{tx} p(x) = e^{t(1)}p(1) + e^{t(2)}p(2) + e^{t(3)}p(3) + e^{t(4)}p(4) + e^{t(5)}p(5)
= (1/5)e^t + (1/5)e^{2t} + (1/5)e^{3t} + (1/5)e^{4t} + (1/5)e^{5t}
= (1/5)(e^t + e^{2t} + e^{3t} + e^{4t} + e^{5t})
(b) To find the first and second moments:
E(X) = M′_X(t) = d/dt (M_X(t)) = (1/5) d/dt (e^t + e^{2t} + e^{3t} + e^{4t} + e^{5t})
= (1/5)(e^t + 2e^{2t} + 3e^{3t} + 4e^{4t} + 5e^{5t})
E(X²) = M″_X(t) = d/dt (M′_X(t)) = (1/5) d/dt (e^t + 2e^{2t} + 3e^{3t} + 4e^{4t} + 5e^{5t})
= (1/5)(e^t + 4e^{2t} + 9e^{3t} + 16e^{4t} + 25e^{5t})
To find the mean and variance:
E(X) = M′_X(t = 0) = (1/5)(e⁰ + 2e^{2(0)} + 3e^{3(0)} + 4e^{4(0)} + 5e^{5(0)})
= (1/5)(1 + 2 + 3 + 4 + 5) = (1/5)(15) = 3
E(X²) = M″_X(t = 0) = (1/5)(e⁰ + 4e^{2(0)} + 9e^{3(0)} + 16e^{4(0)} + 25e^{5(0)})
= (1/5)(1 + 4 + 9 + 16 + 25) = (1/5)(55) = 11
Var(X) = M″_X(t = 0) − (M′_X(t = 0))² = 11 − (3)² = 11 − 9 = 2
Q47. Suppose that an economist determines that the daily income of the UConn Dairy Bar is a random variable X with moment generating function
M_X(t) = 1/(1 − 2000t)⁵
Find the standard deviation of the revenue the UConn Dairy Bar makes in a day.
ANSWER
To find the first and second moments:
M′_X(t) = d/dt (M_X(t)) = d/dt ((1 − 2000t)^{−5}) = 5(2000)(1 − 2000t)^{−6}
M″_X(t) = d/dt (M′_X(t)) = d/dt (5(2000)(1 − 2000t)^{−6}) = 30(2000)²(1 − 2000t)^{−7}
To find the mean and variance:
E(X) = M′_X(t = 0) = 5(2000)(1 − 2000(0))^{−6} = 10,000
E(X²) = M″_X(t = 0) = 30(2000)²(1 − 2000(0))^{−7} = 30 × 4,000,000 = 120,000,000
Var(X) = M″_X(t = 0) − (M′_X(t = 0))² = 120,000,000 − (10,000)² = 20,000,000
so the standard deviation is √20,000,000 = 2000√5 ≈ 4472.14.
Q48. Let X ~ Bern(p). Obtain the moment generating function of X, and hence find the mean and variance.
ANSWER
M_X(t) = E(e^{tX}) = e^{0·t}q + e^t p = q + pe^t
M′_X(t) = d/dt [q + pe^t] = pe^t
M″_X(t) = d/dt [M′_X(t)] = p d/dt [e^t] = pe^t
M_X(t = 0) = q + pe⁰ = p + q = 1
μ₁′ = M′_X(t = 0) = pe⁰ = p·1 = p
μ₁′ = p = mean
μ₂′ = M″_X(t = 0) = pe⁰ = p·1 = p
Var(X) = μ₂′ − (μ₁′)² = p − (p)² = p(1 − p)
= pq
Q49. If the moments of a variate X are defined by E(X^r) = 0.6; r = 1, 2, 3, …, show that P(X = 0) = 0.4, P(X = 1) = 0.6, P(X ≥ 2) = 0.
(b) If M_X(t) of a random variable is (1 − t)^{−1}, t < 1, find E(X^r).
ANSWER
(a) The m.g.f. of the variate X is:
M_X(t) = 1 + Σ_{r=1}^∞ (t^r/r!) μ_r′ = 1 + Σ_{r=1}^∞ (t^r/r!)(0.6)
= (1 − 0.6) + 0.6 Σ_{r=0}^∞ t^r/r! = 0.4 + 0.6e^t   (i)
But
M_X(t) = E(e^{tX}) = Σ_{x=0}^∞ e^{tx} P(X = x) = P(X = 0) + e^t·P(X = 1) + Σ_{x=2}^∞ e^{tx} P(X = x)   (ii)
Comparing (i) and (ii): P(X = 0) = 0.4, P(X = 1) = 0.6, and P(X = x) = 0 for all x ≥ 2, so P(X ≥ 2) = 0.
(b) M_X(t) = (1 − t)^{−1} = 1 + t + t² + ⋯ = Σ_{r=0}^∞ t^r = Σ_{r=0}^∞ r! (t^r/r!)
Since μ_r′ is the coefficient of t^r/r! in M_X(t),
E(X^r) = r!
2.3.1.1 MOMENT GENERATING FUNCTION OF THE BINOMIAL DISTRIBUTION
Q50. Let X ~ Bin(n, p). Obtain the moment generating function of X; hence find the mean and variance, and the central moments μ₂, μ₃ and μ₄.
ANSWER
M_X(t) = E(e^{tX}) = Σ_{x=0}^n (n choose x)(pe^t)^x q^{n−x} = (q + pe^t)^n   (binomial expansion)
Hence,
M_X(t) = (q + pe^t)^n
So the generating function and its first two derivatives are:
M_X(t) = (q + pe^t)^n
M′_X(t) = d/dt [q + pe^t]^n = npe^t(q + pe^t)^{n−1}
M″_X(t) = d/dt [M′_X(t)] = np d/dt [e^t(q + pe^t)^{n−1}] = npe^t(q + pe^t)^{n−1} + n(n − 1)p²e^{2t}(q + pe^t)^{n−2}
M_X(t = 0) = (q + pe⁰)^n = (p + q)^n = 1^n = 1
μ₁′ = M′_X(t = 0) = npe⁰(q + pe⁰)^{n−1} = np(q + p)^{n−1} = np
μ₁′ = np = mean
μ₂′ = M″_X(t = 0) = np + n(n − 1)p² = np(1 + np − p)
μ₂ = Var(X) = μ₂′ − (μ₁′)²
= n²p² + np − np² − (np)²
= n²p² + np − np² − n²p²
= np(1 − p) = npq
For the central moments, take the m.g.f. about the mean np:
M_{X−np}(t) = e^{−npt} M_X(t) = (qe^{−tp} + pe^{tq})^n
= [q{1 − pt + p²t²/2! − p³t³/3! + ⋯} + p{1 + tq + t²q²/2! + t³q³/3! + ⋯}]^n
= [{q − pqt + (p²t²/2!)q − (p³t³/3!)q + ⋯} + {p + tpq + (t²q²/2!)p + (t³q³/3!)p + ⋯}]^n
= [q + p + pq(p + q)(t²/2!) + pq(q² − p²)(t³/3!) + pq(q³ + p³)(t⁴/4!) + ⋯]^n; where p + q = 1
= [1 + pq(t²/2!) + pq(q − p)(t³/3!) + pq(q³ + p³)(t⁴/4!) + ⋯]^n, where q³ + p³ = (p + q)³ − 3pq(p + q)
= [1 + pq(t²/2!) + pq(q − p)(t³/3!) + pq{(p + q)³ − 3pq(p + q)}(t⁴/4!) + ⋯]^n
= [1 + pq(t²/2!) + pq(q − p)(t³/3!) + pq{1 − 3pq}(t⁴/4!) + ⋯]^n
= 1 + (n choose 1){pq(t²/2!) + pq(q − p)(t³/3!) + pq(1 − 3pq)(t⁴/4!) + ⋯}
+ (n choose 2){pq(t²/2!) + pq(q − p)(t³/3!) + pq(1 − 3pq)(t⁴/4!) + ⋯}² + ⋯
Now
μ₂ = coefficient of t²/2! = npq
μ₃ = coefficient of t³/3! = npq(q − p)
μ₄ = coefficient of t⁴/4! = npq(1 − 3pq) + 3n(n − 1)p²q²
= 3n²p²q² + npq(1 − 6pq)
Q51. If X ~ Bin(n, p), find the moment generating function of n − X and identify its distribution.
ANSWER
M_{n−X}(t) = E(e^{t(n−X)}) = e^{nt} E(e^{−tX}) = e^{nt}(q + pe^{−t})^n = [e^t(q + pe^{−t})]^n
= (p + qe^t)^n
which is the m.g.f. of the Bin(n, q) distribution; so n − X (the number of failures) is distributed Bin(n, q).
Q52. The moment generating function of a r.v. X is (2/5 + (3/5)e^t)¹⁰. Show that:
P(μ − σ < X < μ + σ) = Σ_{x=5}^{7} (10 choose x)(3/5)^x(2/5)^{10−x}
ANSWER
Since
M_X(t) = (2/5 + (3/5)e^t)¹⁰ = (q + pe^t)^n,
by the uniqueness theorem of m.g.f.s, X ~ Bin(n = 10, p = 3/5).
Hence, E(X) = μ_X = np = 10 × 3/5 = 6; σ_X² = npq = 10 × 3/5 × 2/5 = 2.4, so σ_X = √2.4 ≈ 1.55.
Therefore
P(μ − σ < X < μ + σ) = P(4.45 < X < 7.55) = P(5 ≤ X ≤ 7)
= Σ_{x=5}^{7} p(x) = Σ_{x=5}^{7} (n choose x) p^x q^{n−x} = Σ_{x=5}^{7} (10 choose x)(3/5)^x(2/5)^{10−x}
Q53. Find the m.g.f. of the standardized binomial variate (X − np)/√(npq) and obtain its limiting form as n → ∞. Also interpret the result.
ANSWER
The m.g.f. of
Z = (X − np)/√(npq) = (X − μ)/σ, where μ = np and σ² = npq, is given by
M_Z(t) = e^{−μt/σ} · M_X(t/σ) = e^{−npt/√(npq)} · (q + pe^{t/√(npq)})^n
= [e^{−pt/√(npq)} · (q + pe^{t/√(npq)})]^n
= [qe^{−pt/√(npq)} + pe^{qt/√(npq)}]^n
= [q{1 − pt/√(npq) + p²t²/(2!npq) + η′(n^{−3/2})} + p{1 + qt/√(npq) + q²t²/(2!npq) + η″(n^{−3/2})}]^n
where η′(n^{−3/2}) and η″(n^{−3/2}) involve terms containing n^{3/2} and higher powers of n in the denominator.
∴ M_Z(t) = [(q + p) + (t²pq/(2npq))(p + q) + η(n^{−3/2})]^n, where p + q = 1
= [1 + t²/(2n) + η(n^{−3/2})]^n
where η(n^{−3/2}) involves terms with n^{−3/2} and higher powers of n in the denominator.
∴ log M_Z(t) = n log[1 + t²/(2n) + η(n^{−3/2})]
= n[{t²/(2n) + η(n^{−3/2})} − (1/2){t²/(2n) + η(n^{−3/2})}² + ⋯]
= t²/2 + η‴(n^{−1/2})
where η‴(n^{−1/2}) involves terms with n^{1/2} and higher powers of n in the denominator. Proceeding to the limit,
lim_{n→∞} log M_Z(t) = t²/2
⇒ lim_{n→∞} M_Z(t) = exp(t²/2)   (∗)
Interpretation. (∗) is the moment generating function of a standard normal variate. Hence, by the uniqueness theorem of moment generating functions, the standard binomial variate tends to a standard normal variate as n → ∞.
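A quick numerical illustration of this limit, computing M_Z(t) exactly from the Binomial m.g.f. for increasing n (our own check; p = 0.3 and t = 1 are arbitrary):

import math

def mgf_Z(t, n, p):
    q = 1 - p
    s = math.sqrt(n * p * q)                 # sigma
    return math.exp(-n * p * t / s) * (q + p * math.exp(t / s))**n

for n in (10, 100, 10_000):
    print(n, mgf_Z(1.0, n, 0.3))             # approaches exp(1/2) = 1.6487...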
Q54. If the moment generating function of X is
M_X(t) = (3/4 + (1/4)e^t)²⁰ for all t, then find the probability mass function of X.
ANSWER
We previously determined that the moment generating function of a binomial random variable is:
M_X(t) = [(1 − p) + pe^t]^n for −∞ < t < ∞
Comparing the given moment generating function with that of a binomial random variable, we see that X must be a binomial random variable with n = 20 and p = 1/4. Therefore
f(x) = (20 choose x)(1/4)^x(3/4)^{20−x} for x = 0, 1, 2, …, 20
2.3.1.2 MOMENT GENERATING FUNCTION OF THE POISSON DISTRIBUTION
Q55. Let X ~ Poi(λ). Obtain the moment generating function and the mean and variance of the distribution.
ANSWER
The probability mass function is defined as
f(X = x) = λ^x e^{−λ}/x!; x = 0, 1, 2, …
The moment generating function:
M_X(t) = E(e^{tX}) = Σ_{x=0}^∞ e^{tx} · λ^x e^{−λ}/x! = e^{−λ} Σ_{x=0}^∞ (λe^t)^x/x!
= e^{−λ}{1 + λe^t + (λe^t)²/2! + ⋯}, where e^{λe^t} = 1 + λe^t + (λe^t)²/2! + ⋯
= e^{−λ} · e^{λe^t} = e^{λ(e^t−1)}   (exponential series)
Hence,
M_X(t) = e^{λ(e^t−1)}
So the generating function and its first two derivatives are:
M_X(t) = e^{λ(e^t−1)}
M′_X(t) = d/dt [e^{λ(e^t−1)}] = λe^t e^{λ(e^t−1)}
M″_X(t) = d/dt [M′_X(t)] = λ d/dt [e^t e^{λ(e^t−1)}]
= λ[e^t e^{λ(e^t−1)} + λe^t e^{λ(e^t−1)} e^t]
= λe^t [e^{λ(e^t−1)} + λe^t e^{λ(e^t−1)}]
M_X(t = 0) = e^{λ(e⁰−1)} = e⁰ = 1
μ₁′ = M′_X(t = 0) = λe⁰ e^{λ(e⁰−1)} = λ·1 = λ
μ₁′ = λ = mean
μ₂′ = M″_X(t = 0) = λe⁰[e^{λ(e⁰−1)} + λe⁰ e^{λ(e⁰−1)}]
= λ(1 + λ)
= λ² + λ
μ₂ = Var(X) = M″_X(t = 0) − [M′_X(t = 0)]²
= λ² + λ − (λ)²
= λ² + λ − λ²
= λ
Q56. (a) If X is a Poisson variate with mean λ, show that the expectation of e^{−kX} is exp[−λ(1 − e^{−k})]. Hence show that, if X̄ is the arithmetic mean of n independent random variables X₁, X₂, …, Xₙ, each having a Poisson distribution with parameter λ, then e^{−X̄} is a biased estimate of e^{−λ}, although X̄ is an unbiased estimate of λ.
(b) If X is a Poisson variate with mean λ, what would be the expectation of kXe^{−kX}, k a constant?
ANSWER
(a) E(e^{−kX}) = Σ_{x=0}^∞ e^{−kx} λ^x e^{−λ}/x! = e^{−λ} Σ_{x=0}^∞ (λe^{−k})^x/x!
= e^{−λ}[1 + λe^{−k} + (λe^{−k})²/2! + (λe^{−k})³/3! + ⋯]
= e^{−λ} · e^{λe^{−k}}
E(e^{−kX}) = e^{−λ(1−e^{−k})}   (∗)
We have
E(X̄) = E((1/n) Σ_{i=1}^n X_i) = (1/n) Σ_{i=1}^n E(X_i) = (1/n)(nλ) = λ,
so X̄ is an unbiased estimate of λ. On the other hand,
E(e^{−X̄}) = E(e^{−X₁/n}) E(e^{−X₂/n}) E(e^{−X₃/n}) ⋯ E(e^{−Xₙ/n})   (since X₁, X₂, …, Xₙ are independent)
∴ E(e^{−X̄}) = ∏_{i=1}^n E(e^{−X_i/n})   (∗∗)
Applying (∗) with k = 1/n to each factor gives E(e^{−X̄}) = exp[−nλ(1 − e^{−1/n})] ≠ e^{−λ}, so e^{−X̄} is a biased estimate of e^{−λ}.
(b) E(kXe^{−kX}) = k Σ_{x=1}^∞ x e^{−kx} λ^x e^{−λ}/x! = kλe^{−λ−k} Σ_{x=1}^∞ (λe^{−k})^{x−1}/(x − 1)!
= kλe^{−λ−k}[1 + λe^{−k} + (λe^{−k})²/2! + (λe^{−k})³/3! + ⋯]
= kλe^{−λ−k} · e^{λe^{−k}} = λk exp[λ(e^{−k} − 1) − k]
Q57. If X is a Poisson variate with mean λ, show that (X − λ)/√λ is a variable with mean zero and variance unity. Find the moment generating function for this variable and show that it approaches e^{t²/2} as λ → ∞. Also interpret the result.
ANSWER
Let Y = (X − λ)/√λ.
∴ E(Y) = E((X − λ)/√λ) = (1/√λ)E(X − λ) = (1/√λ)(λ − λ) = 0, where E(X) = λ
Var(Y) = Var((X − λ)/√λ) = (1/λ)E(X − λ)² = (1/λ)μ₂ = λ/λ = 1
The m.g.f. of Y is
M_Y(t) = E(e^{tY}) = E(e^{t(X−λ)/√λ}) = e^{−λ−t√λ} Σ_{x=0}^∞ (λe^{t/√λ})^x/x!
= e^{−λ−t√λ}[1 + λe^{t/√λ}/1! + (λe^{t/√λ})²/2! + ⋯], where 1 + λe^{t/√λ}/1! + (λe^{t/√λ})²/2! + ⋯ = e^{λe^{t/√λ}}
= e^{−λ−t√λ} · e^{λe^{t/√λ}} = exp[−λ − t√λ + λe^{t/√λ}]
= exp[−λ − t√λ + λ(1 + t/√λ + t²/(2!λ) + t³/(3!λ^{3/2}) + ⋯)]
= exp[(1/2)t² + (1/3!)(t³/√λ) + ⋯]
Hence M_Y(t) → e^{t²/2} as λ → ∞.
Interpretation. e^{t²/2} is the m.g.f. of a standard normal variate, so by the uniqueness theorem of m.g.f.s the standardized Poisson variate tends to a standard normal variate as λ → ∞.
Q58. Let X have the Geometric distribution with P(X = x) = pq^{x−1}; x = 1, 2, 3, …, where q = 1 − p. Obtain the moment generating function of X; hence find the mean and variance.
ANSWER
M_X(t) = E(e^{tX}) = Σ_{x=1}^∞ e^{tx} pq^{x−1} = pe^t Σ_{x=1}^∞ (qe^t)^{x−1}
= pe^t[1 + qe^t + (qe^t)² + (qe^t)³ + ⋯]
(1 + r + r² + r³ + ⋯ = 1/(1 − r); sum of an infinite geometric series)
M_X(t) = pe^t × 1/(1 − qe^t) = pe^t/(1 − qe^t)
Hence,
M_X(t) = pe^t/(1 − qe^t)
So the generating function and its first two derivatives are:
M_X(t) = pe^t/(1 − qe^t)
M′_X(t) = d/dt [pe^t/(1 − qe^t)] = [(1 − qe^t)pe^t − pe^t(−qe^t)]/(1 − qe^t)² = [pe^t − pe^t qe^t + pe^t qe^t]/(1 − qe^t)² = pe^t/(1 − qe^t)²
M″_X(t) = d/dt [M′_X(t)] = p d/dt [e^t/(1 − qe^t)²]
= pe^t(1 − qe^t)[(1 − qe^t) + 2qe^t]/(1 − qe^t)⁴
= pe^t(1 + qe^t)/(1 − qe^t)³
M_X(t = 0) = pe⁰/(1 − qe⁰) = p/(1 − q) = p/p = 1
μ₁′ = M′_X(t = 0) = pe⁰/(1 − qe⁰)² = p/(1 − q)² = p/p² = 1/p
μ₁′ = 1/p = mean
μ₂′ = M″_X(t = 0) = pe⁰(1 + qe⁰)/(1 − qe⁰)³ = p(1 + q)/(1 − q)³
= p(1 + q)/p³ = (1 + q)/p²
μ₂ = Var(X) = M″_X(t = 0) − [M′_X(t = 0)]²
= (1 + q)/p² − (1/p)² = (1 + q)/p² − 1/p²
= (1 + q − 1)/p² = q/p²
Q59. A random variable Y has moment generating function
M_Y(t) = e^t/(4 − 3e^t), t < −ln(0.75)
Find the mean and variance of Y.
ANSWER
M_Y(t) = e^t/(4 − 3e^t)
E(Y) = M′_Y(t) = d/dt (e^t/(4 − 3e^t)) = [(4 − 3e^t)·e^t − e^t(−3e^t)]/(4 − 3e^t)²
= (4e^t − 3e^{2t} + 3e^{2t})/(4 − 3e^t)² = 4e^t/(4 − 3e^t)²
E(Y²) = M″_Y(t) = d/dt (M′_Y(t)) = d/dt (4e^t/(4 − 3e^t)²)
= [(4 − 3e^t)²·4e^t − 8e^t(4 − 3e^t)(−3e^t)]/(4 − 3e^t)⁴
= 4e^t(4 − 3e^t)[4 − 3e^t + 6e^t]/(4 − 3e^t)⁴ = 4e^t(4 + 3e^t)/(4 − 3e^t)³
E(Y) = M′_Y(t = 0) = 4e⁰/(4 − 3e⁰)² = 4/(4 − 3)² = 4/1 = 4
E(Y²) = M″_Y(t = 0) = 4e⁰(4 + 3e⁰)/(4 − 3e⁰)³ = 4(4 + 3)/(4 − 3)³ = 4(7)/1 = 28
Var(Y) = E(Y²) − (E(Y))² = 28 − 4² = 12
Method 2: we can recognize that this is the moment generating function of a geometric distribution. We can match this MGF to a known MGF of one of the distributions we have considered and then apply its results. Observe that:
M_Y(t) = ((1/4)e^t)/(1 − (3/4)e^t) = pe^t/(1 − qe^t) where p = 1/4 and q = 1 − 1/4 = 3/4
E(Y) = 1/p = 1/(1/4) = 1 × 4/1 = 4
Var(Y) = q/p² = (3/4) ÷ (1/4)² = (3/4) × 16 = 3 × 4 = 12
Q60. Let X ~ NB(r, p), as in Q41. Show that its moment generating function is M_X(t) = (pe^t/(1 − qe^t))^r and hence find the mean and variance.
ANSWER
Replacing S by e^t in the PGF of Q41 gives
M_X(t) = (pe^t/(1 − qe^t))^r
M′_X(t) = d/dt (pe^t/(1 − qe^t))^r = r(pe^t/(1 − qe^t))^{r−1} × [(1 − qe^t)pe^t − pe^t(−qe^t)]/(1 − qe^t)²
= r(pe^t/(1 − qe^t))^{r−1} × [pe^t − pe^t qe^t + pe^t qe^t]/(1 − qe^t)² = r(pe^t/(1 − qe^t))^{r−1} × pe^t/(1 − qe^t)²
= r(pe^t)^r/(1 − qe^t)^{r+1}
M″_X(t) = d/dt [M′_X(t)] = rp^r d/dt [e^{rt}/(1 − qe^t)^{r+1}]
= rp^r e^{rt}[(r − rqe^t + rqe^t + qe^t)/(1 − qe^t)^{r+2}]
= rp^r e^{rt}[(r + qe^t)/(1 − qe^t)^{r+2}]
μ₁′ = M′_X(t = 0) = rp^r/(1 − q)^{r+1} = rp^r/p^{r+1} = r/p
μ₁′ = r/p = mean
μ₂′ = M″_X(t = 0) = rp^r e^{r(0)}[(r + qe⁰)/(1 − qe⁰)^{r+2}]
= rp^r ((r + q)/(1 − q)^{r+2}) = rp^r ((r + q)/(p^r × p²))
= r(r + q)/p²
μ₂ = Var(X) = μ₂′ − (μ₁′)²
= r(r + q)/p² − (r/p)²
= (r² + rq)/p² − r²/p²
= (r² + rq − r²)/p² = rq/p²
2.3.2 MOMENT GENERATING FUNCTION FOR CONTINUOUS RANDOM VARIABLES
Q62. Let X be a random variable whose probability density function is given by
f_X(x) = e^{−2x} + (1/2)e^{−x} for x > 0, and 0 otherwise.
Find the moment generating function of X, and hence the mean and variance.
ANSWER
M_X(t) = E(e^{tX}) = ∫₀^∞ e^{tx}(e^{−2x} + (1/2)e^{−x}) dx = ∫₀^∞ (e^{−x(2−t)} + (1/2)e^{−x(1−t)}) dx
= [−(1/(2 − t))e^{−x(2−t)}]_{x=0}^{x=∞} − [(1/(2(1 − t)))e^{−x(1−t)}]_{x=0}^{x=∞}
= −(1/(2 − t))(e^{−∞} − e⁰) − (1/(2(1 − t)))(e^{−∞} − e⁰)
= −(1/(2 − t))(0 − 1) − (1/(2(1 − t)))(0 − 1) = 1/(2 − t) + 1/(2(1 − t))   (for t < 1)
= (2 − 2t + 2 − t)/(2(2 − t)(1 − t))
= (4 − 3t)/(2(2 − t)(1 − t))
M_X(t) = 1/(2 − t) + 1/(2(1 − t))
M′_X(t) = d/dt (1/(2 − t) + 1/(2(1 − t))) = 1/(2 − t)² + 1/(2(1 − t)²)
M″_X(t) = d/dt [M′_X(t)] = d/dt [1/(2 − t)² + 1/(2(1 − t)²)] = 2/(2 − t)³ + 1/(1 − t)³
M_X(t = 0) = (4 − 3(0))/(2(2 − 0)(1 − 0)) = 4/(2(2)) = 4/4 = 1
μ₁′ = M′_X(t = 0) = 1/(2 − 0)² + 1/(2(1 − 0)²) = 1/4 + 1/2 = (1 + 2)/4 = 3/4
μ₁′ = 3/4 = mean
μ₂′ = M″_X(t = 0) = 2/(2 − 0)³ + 1/(1 − 0)³ = 2/8 + 1 = (2 + 8)/8 = 10/8 = 5/4
μ₂ = Var(X) = M″_X(t = 0) − [M′_X(t = 0)]² = 5/4 − (3/4)² = 5/4 − 9/16 = (20 − 9)/16 = 11/16
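The integral can be reproduced directly; a minimal sympy sketch (our own check — t is taken negative so both integrals obviously converge, and the closed form extends to t < 1):

import sympy as sp

x = sp.symbols('x', positive=True)
t = sp.symbols('t', negative=True)
f = sp.exp(-2 * x) + sp.Rational(1, 2) * sp.exp(-x)

M = sp.integrate(sp.exp(t * x) * f, (x, 0, sp.oo))
print(sp.simplify(M - (1/(2 - t) + 1/(2 * (1 - t)))))   # 0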
Q63. Let X be a continuous random variable with values in [0, 2] and probability density function f_X. Find the moment generating function of X when
(i) f_X(x) = (1/2)x   (ii) f_X(x) = 1 − (1/2)x
ANSWER
(i) M_X(t) = (1/2)∫₀² xe^{tx} dx = (1/2)[(xe^{tx}/t)₀² − ∫₀² (e^{tx}/t) dx]
= (1/2)[(xe^{tx}/t)₀² − (e^{tx}/t²)₀²] = (1/2)[2e^{2t}/t − (e^{2t}/t² − e⁰/t²)] = (1/2)[2e^{2t}/t − e^{2t}/t² + 1/t²]
= e^{2t}/t − e^{2t}/(2t²) + 1/(2t²) = (2te^{2t} − e^{2t} + 1)/(2t²)
(ii) M_X(t) = ∫₀² e^{tx} dx − (1/2)∫₀² xe^{tx} dx
= (e^{tx}/t)₀² − (1/2)[(xe^{tx}/t)₀² − (e^{tx}/t²)₀²]
= (e^{2t} − 1)/t − (1/2)[2e^{2t}/t − e^{2t}/t² + 1/t²]
= e^{2t}/t − 1/t − e^{2t}/t + e^{2t}/(2t²) − 1/(2t²)
= −1/t + e^{2t}/(2t²) − 1/(2t²) = (e^{2t} − 2t − 1)/(2t²)
Q64. A random variable X is said to have a continuous uniform distribution over an interval (a, b) if its probability density function is
f_X(x) = 1/(b − a) for a < x < b, and 0 otherwise.
Find the moment generating function of X.
ANSWER
M_X(t) = E(e^{tx}) = ∫_{−∞}^∞ e^{tx} f(x) dx = ∫_a^b e^{tx} (1/(b − a)) dx
= (1/(b − a)) ∫_a^b e^{tx} dx = (1/(b − a)) [(e^{tx}/t)]_a^b
= (1/(b − a)) [e^{bt}/t − e^{at}/t] = (1/(b − a)) [(e^{bt} − e^{at})/t]
= (e^{bt} − e^{at})/(t(b − a))
If t = 0:
M_X(0) = (e^{b(0)} − e^{a(0)})/(0(b − a)) = (1 − 1)/0 = 0/0 (indeterminate)
so we take the limit, by L'Hôpital's rule:
lim_{t→0} (e^{bt} − e^{at})/(t(b − a)) = lim_{t→0} [d/dt (e^{bt} − e^{at})]/[d/dt (t(b − a))] = lim_{t→0} (be^{bt} − ae^{at})/(b − a)
= (be^{b(0)} − ae^{a(0)})/(b − a) = (b × 1 − a × 1)/(b − a) = (b − a)/(b − a) = 1
M_X(t) = { 1 for t = 0; (e^{bt} − e^{at})/(t(b − a)) for t ≠ 0 }
Q65. Show that for the rectangular distribution
f(x) = 1/(2a), −a < x < a,
the m.g.f. about the origin is (1/at) sinh(at). Also show that the moments of even order are given by
μ_{2n} = a^{2n}/(2n + 1)
ANSWER
f(x) = 1/(2a), −a < x < a
M_X(t) = E(e^{tx}) = ∫_{−∞}^∞ e^{tx} f(x) dx = ∫_{−a}^a e^{tx} (1/(2a)) dx
= (1/(2a)) ∫_{−a}^a e^{tx} dx = (1/(2a)) [(e^{tx}/t)]_{−a}^a
= (1/(2a)) [e^{at}/t − e^{−at}/t] = (1/(2at))(e^{at} − e^{−at}) = (1/at) sinh(at)
Expanding,
M_X(t) = (1/at)[at + (at)³/3! + (at)⁵/5! + ⋯] = Σ_{n=0}^∞ a^{2n} t^{2n}/(2n + 1)!
Since there are no terms with odd powers of t in M_X(t), all moments of odd order about the origin vanish, i.e.
μ′_{2n+1} (about origin) = 0
μ_{2n} = coefficient of t^{2n}/(2n)! in M_X(t) = a^{2n}(2n)!/(2n + 1)! = a^{2n}/(2n + 1)
2.3.2.2 MOMENT GENERATING FUNCTION OF THE TRIANGULAR DISTRIBUTION
Q66. (a) A random variable X is said to have a triangular distribution on the interval (a, b) if its probability density function is given by:
f(x) = 2(x − a)/((b − a)(c − a)) for a < x ≤ c; f(x) = 2(b − x)/((b − a)(b − c)) for c < x < b   (∗)
Find its moment generating function.
(b) In particular, taking a = 0, c = 1 and b = 2 in (∗), the probability density function of the Trg(0, 2) variate with peak at x = 1 is given by
f(x) = x for 0 ≤ x ≤ 1; f(x) = 2 − x for 1 ≤ x ≤ 2; f(x) = 0 otherwise   (∗∗)
Show that its m.g.f. is
M_X(t) = (e^t − 1)²/t²
(c) In equation (∗) above, replacing a by −2a, b by 2a and c by 0, the probability density function of the triangular distribution on the interval (−2a, 2a) with peak at x = 0 is given by:
f(x) = (2a + x)/(4a²) for −2a < x ≤ 0; f(x) = (2a − x)/(4a²) for 0 < x < 2a   (∗∗∗)
Find its moment generating function.
ANSWER
(a) The probability density function is given as
f(x) = 2(x − a)/((b − a)(c − a)) for a < x ≤ c; f(x) = 2(b − x)/((b − a)(b − c)) for c < x < b
M_X(t) = E(e^{tx}) = ∫_a^b e^{tx} f(x) dx = (∫_a^c + ∫_c^b) e^{tx} f(x) dx
= (2/((b − a)(c − a))) ∫_a^c e^{tx}(x − a) dx + (2/((b − a)(b − c))) ∫_c^b e^{tx}(b − x) dx
Evaluating ∫_a^c e^{tx}(x − a) dx and ∫_c^b e^{tx}(b − x) dx by integration by parts:
= (2/((b − a)(c − a))) [(x − a)e^{tx}/t − e^{tx}/t²]_a^c + (2/((b − a)(b − c))) [(b − x)e^{tx}/t + e^{tx}/t²]_c^b
= 2{e^{ct}/((b − a)t) − e^{ct}/((b − a)(c − a)t²) + e^{at}/((b − a)(c − a)t²) + e^{bt}/((b − a)(b − c)t²) − e^{ct}/((b − a)t) − e^{ct}/((b − a)(b − c)t²)}
= (2/t²){e^{at}/((b − a)(c − a)) − e^{ct}/((b − a)(c − a)) − e^{ct}/((b − a)(b − c)) + e^{bt}/((b − a)(b − c))}
= (2/t²){e^{at}/((b − a)(c − a)) − e^{ct}(b − a)/((b − a)(c − a)(b − c)) + e^{bt}/((b − a)(b − c))}
= (2/t²){e^{at}/((a − b)(a − c)) + e^{ct}/((c − a)(c − b)) + e^{bt}/((b − a)(b − c))}; a < x < b
(b) The probability density function is given as
f(x) = x for 0 ≤ x ≤ 1; f(x) = 2 − x for 1 ≤ x ≤ 2; f(x) = 0 otherwise
M_X(t) = E(e^{tx}) = ∫₀² e^{tx} f(x) dx = (∫₀¹ + ∫₁²) e^{tx} f(x) dx
= ∫₀¹ xe^{tx} dx + ∫₁² e^{tx}(2 − x) dx = ∫₀¹ xe^{tx} dx + ∫₁² 2e^{tx} dx − ∫₁² xe^{tx} dx
Evaluating ∫ xe^{tx} dx by integration by parts:
M_X(t) = [xe^{tx}/t − e^{tx}/t²]₀¹ + 2[e^{tx}/t]₁² − [xe^{tx}/t − e^{tx}/t²]₁²
= [e^t/t − e^t/t² + 1/t² + 2e^{2t}/t − 2e^t/t − 2e^{2t}/t + e^{2t}/t² + e^t/t − e^t/t²]   (collect the like terms)
= [(e^t/t + e^t/t − 2e^t/t) + (2e^{2t}/t − 2e^{2t}/t) + (e^{2t}/t² − e^t/t² − e^t/t² + 1/t²)]
= (1/t²)(e^{2t} − 2e^t + 1)   (factorize the quadratic expression)
= (1/t²)[(e^{2t} − e^t) − (e^t − 1)]
= (1/t²)[e^t(e^t − 1) − 1(e^t − 1)] = (1/t²)[(e^t − 1)(e^t − 1)]
= (1/t²)(e^t − 1)²
M_X(t) = (e^t − 1)²/t², as required.
(c) The probability density function is given as
f(x) = (2a + x)/(4a²) for −2a < x ≤ 0; f(x) = (2a − x)/(4a²) for 0 < x < 2a
M_X(t) = E(e^{tx}) = ∫_{−2a}^{2a} e^{tx} f(x) dx = (∫_{−2a}^0 + ∫₀^{2a}) e^{tx} f(x) dx
= ∫_{−2a}^0 e^{tx}((2a + x)/(4a²)) dx + ∫₀^{2a} e^{tx}((2a − x)/(4a²)) dx
= (1/(4a²))[∫_{−2a}^0 (2a + x)e^{tx} dx + ∫₀^{2a} (2a − x)e^{tx} dx]
Evaluating ∫(2a + x)e^{tx} dx and ∫(2a − x)e^{tx} dx by integration by parts:
M_X(t) = (1/(4a²))[2a ∫_{−2a}^0 e^{tx} dx + ∫_{−2a}^0 xe^{tx} dx + 2a ∫₀^{2a} e^{tx} dx − ∫₀^{2a} xe^{tx} dx]
= (1/(4a²))[2a(e^{tx}/t)_{−2a}^0 + (xe^{tx}/t − e^{tx}/t²)_{−2a}^0 + 2a(e^{tx}/t)₀^{2a} − (xe^{tx}/t − e^{tx}/t²)₀^{2a}]
= (1/(4a²))[2a/t − 2ae^{−2at}/t − 1/t² + 2ae^{−2at}/t + e^{−2at}/t² + 2ae^{2at}/t − 2a/t − 2ae^{2at}/t + e^{2at}/t² − 1/t²]
= (1/(4a²))[(2a/t − 2a/t) + (2ae^{−2at}/t − 2ae^{−2at}/t) + (2ae^{2at}/t − 2ae^{2at}/t) + e^{−2at}/t² + e^{2at}/t² − 1/t² − 1/t²]
= (1/(4a²))[0 + 0 + 0 + e^{−2at}/t² + e^{2at}/t² − 2/t²] = (1/(4a²t²))(e^{2at} + e^{−2at} − 2)
= (1/(4a²t²))(e^{at}·e^{at} + e^{−at}·e^{−at} − e^{at}·e^{−at} − e^{at}·e^{−at})
= (1/(4a²t²))[(e^{at}·e^{at} − e^{at}·e^{−at}) + (e^{−at}·e^{−at} − e^{at}·e^{−at})]
= (1/(4a²t²))[e^{at}(e^{at} − e^{−at}) − e^{−at}(e^{at} − e^{−at})] = (1/(4a²t²))[(e^{at} − e^{−at})(e^{at} − e^{−at})]
= (1/(4a²t²))(e^{at} − e^{−at})² = (e^{at} − e^{−at})²/(4a²t²)
M_X(t) = ((e^{at} − e^{−at})/(2at))²
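A quick numeric spot-check of the part (b) result M_X(t) = (e^t − 1)²/t², by quadrature (scipy assumed available; t = 0.7 is arbitrary):

from scipy.integrate import quad
import math

t = 0.7
left, _ = quad(lambda x: math.exp(t * x) * x, 0, 1)
right, _ = quad(lambda x: math.exp(t * x) * (2 - x), 1, 2)
print(left + right, (math.exp(t) - 1)**2 / t**2)   # both approximately 2.0973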
Q67. Find the moment generating function of the normal random variable. Hence, find the mean and the variance.
ANSWER
f(x) = (1/√(2πσ²)) e^{−(x−μ)²/(2σ²)} for −∞ < x < ∞
M_X(t) = E(e^{tX}) = ∫_{−∞}^∞ (1/√(2πσ²)) e^{−(x−μ)²/(2σ²)} e^{tx} dx = (1/√(2πσ²)) ∫_{−∞}^∞ e^{−(x²−2μx+μ²)/(2σ²)} e^{tx} dx
= (1/√(2πσ²)) ∫_{−∞}^∞ e^{−(x²−2μx+μ²−2σ²tx)/(2σ²)} dx
where x² − 2μx + μ² − 2σ²tx
= x² − 2(μ + σ²t)x + μ²   (complete the square in x)
= (x − (μ + σ²t))² − (μ + σ²t)² + μ² = (x − (μ + σ²t))² − 2μσ²t − σ⁴t²
so
M_X(t) = e^{(2μσ²t+σ⁴t²)/(2σ²)} · (1/√(2πσ²)) ∫_{−∞}^∞ e^{−(x−(μ+σ²t))²/(2σ²)} dx
The remaining integral is that of a normal density with mean μ + σ²t and variance σ², so it equals 1, and
M_X(t) = exp(μt + (1/2)σ²t²)
M_X(t) = exp(μt + (1/2)σ²t²)
M′_X(t) = dM_X(t)/dt = (μ + σ²t) exp(μt + (1/2)σ²t²)
M″_X(t) = d/dt (M′_X(t)) = d/dt [(μ + σ²t) exp(μt + (1/2)σ²t²)]
= σ² exp(μt + (1/2)σ²t²) + (μ + σ²t)·(μ + σ²t) exp(μt + (1/2)σ²t²)
= (σ² + (μ + σ²t)²) exp(μt + (1/2)σ²t²)
The third derivative of the m.g.f. of X is given as
M‴_X(t) = d/dt (M″_X(t)) = d/dt [(σ² + (μ + σ²t)²) exp(μt + (1/2)σ²t²)]
= 2σ²(μ + σ²t) exp(μt + (1/2)σ²t²) + (σ² + (μ + σ²t)²)(μ + σ²t) exp(μt + (1/2)σ²t²)
= (μ + σ²t) exp(μt + (1/2)σ²t²)(2σ² + σ² + (μ + σ²t)²)
= (3σ² + (μ + σ²t)²)(μ + σ²t) exp(μt + (1/2)σ²t²)
= [3σ²(μ + σ²t) + (μ + σ²t)³] exp(μt + (1/2)σ²t²)
= [3μσ² + 3σ⁴t + (μ + σ²t)³] exp(μt + (1/2)σ²t²)
M_X^{(4)}(t) = d/dt (M‴_X(t)) = d/dt [[3μσ² + 3σ⁴t + (μ + σ²t)³] exp(μt + (1/2)σ²t²)]
= [3σ⁴ + 3σ²(μ + σ²t)²] exp(μt + (1/2)σ²t²) + [3μσ² + 3σ⁴t + (μ + σ²t)³](μ + σ²t) exp(μt + (1/2)σ²t²)
= exp(μt + (1/2)σ²t²)[3σ⁴ + 3σ²(μ + σ²t)² + 3σ²(μ + σ²t)(μ + σ²t) + (μ + σ²t)⁴]
= [3σ⁴ + 6σ²(μ + σ²t)² + (μ + σ²t)⁴] exp(μt + (1/2)σ²t²)
The first four moments about the origin (at t = 0) of X are given as
M′_X(0) = μ₁′ = E(X) = (μ + σ²(0)) exp(μ(0) + (1/2)σ²(0)²) = μ
M″_X(0) = μ₂′ = E(X²) = (σ² + (μ + σ²(0))²) exp(μ(0) + (1/2)σ²(0)²) = μ² + σ²
M‴_X(0) = μ₃′ = [3μσ² + 3σ⁴(0) + (μ + σ²(0))³] exp(μ(0) + (1/2)σ²(0)²) = 3μσ² + μ³
M_X^{(4)}(0) = μ₄′ = [3σ⁴ + 6σ²(μ + σ²·0)² + (μ + σ²·0)⁴] exp(μ·0 + (1/2)σ²·0²) = 3σ⁴ + 6μ²σ² + μ⁴
Hence the mean is μ₁′ = μ and the variance is μ₂′ − (μ₁′)² = μ² + σ² − μ² = σ².
The central moments follow: μ₂ = σ², μ₃ = μ₃′ − 3μμ₂′ + 2μ³ = 0, and μ₄ = μ₄′ − 4μμ₃′ + 6μ²μ₂′ − 3μ⁴ = 3σ⁴. Therefore
β₁ = μ₃²/μ₂³ = 0/σ⁶ = 0; β₂ = μ₄/μ₂² = 3σ⁴/σ⁴ = 3
γ₁ = √β₁ = 0; γ₂ = β₂ − 3 = 0
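The moment calculations above can be verified in one go; a minimal sympy sketch (our own check):

import sympy as sp

t, mu, sig = sp.symbols('t mu sigma', positive=True)
M = sp.exp(mu * t + sig**2 * t**2 / 2)

moments = [sp.expand(sp.diff(M, t, r).subs(t, 0)) for r in range(1, 5)]
print(moments)
# [mu, mu**2 + sigma**2, mu**3 + 3*mu*sigma**2, mu**4 + 6*mu**2*sigma**2 + 3*sigma**4]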