
International Journal of Scientific Research and Engineering Development-– Volume 2 Issue 5, Sep – Oct 2019

Available at www.ijsred.com
RESEARCH ARTICLE OPEN ACCESS

Moment-Generating Functions and Reproductive Properties of Distributions
Ngwe Kyar Su Khaing*, Soe Soe Kyaw**, Nwae Oo Htike***
*(Department of Mathematics, Technological University Yamethin, Yamethin, Myanmar
Email: ngwekhaingjune@gmail.com)
**(Department of Mathematics, Technological University Yamethin, Yamethin, Myanmar
Email: dawsoesoekyaw@gmail.com)
***(Department of Mathematics, Technological University Yamethin, Yamethin, Myanmar
Email: warso2015feb@gmail.com)

----------------------------------------************************----------------------------------
Abstract:
In this paper, an important mathematical concept which has many applications to probabilistic models is presented. Some of the important applications of the moment-generating function to the theory of probability are discussed. Each probability distribution has a unique moment-generating function, which makes moment-generating functions especially useful for solving problems such as finding the distribution of a sum of random variables. Reproductive properties of probability distributions are also described, with illustrated examples.

Keywords — Density function, distributions, moment-generating function, probability, random variable.
----------------------------------------************************----------------------------------
I. INTRODUCTION

In probability theory, an experiment whose outcome depends on chance is called a random experiment. It is assumed that all possible distinct outcomes of a random experiment are known and that they are elements of a fundamental set known as the sample space. Each possible outcome is called a sample point, and an event is generally a subset of the sample space having one or more sample points as its elements [5].

The behavior of a random variable is characterized by its probability distribution, that is, by the way probabilities are distributed over the values it assumes. A probability distribution function and a probability mass function are two ways to characterize this distribution for a discrete random variable. The corresponding functions for a continuous random variable are the probability distribution function and the probability density function (pdf) [5].

Assume that X is a random variable; that is, X is a function from the sample space to the real numbers. In computing various characteristics of the random variable X, such as E(X) or V(X), we work directly with the probability distribution of X. The moment-generating function M_X(t) is the value which the function M_X assumes for the real variable t. The notation, indicating the dependence on X, is used because we consider two random variables, X and Y, and then investigate the moment-generating function of each, M_X and M_Y. The moment-generating function is written as an infinite series or improper integral, depending on whether the random variable is discrete or continuous [4].
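As a concrete numerical illustration of E(X), V(X), and M_X(t) = E(e^{tX}) (an illustration of ours, not part of the original exposition), the following Python sketch evaluates these quantities for a hypothetical fair six-sided die:

```python
import math

# Hypothetical example: a fair six-sided die (not from the paper).
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

# E(X) and V(X) computed directly from the probability distribution.
E_X = sum(x * p for x, p in zip(values, probs))
V_X = sum(x ** 2 * p for x, p in zip(values, probs)) - E_X ** 2

# The moment-generating function M_X(t) = E(e^{tX}) evaluated at a real t.
def mgf(t):
    return sum(math.exp(t * x) * p for x, p in zip(values, probs))

print(E_X)       # 3.5
print(V_X)       # ≈ 2.9167
print(mgf(0.0))  # 1.0, since M_X(0) = E(e^0) = 1
```

Evaluating the function at t = 0 always returns 1, since M_X(0) = E(e^0) = 1; this is a convenient sanity check for any moment-generating function.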

ISSN : 2581-7175 ©IJSRED: All Rights are Reserved Page 654


II. SOME DISTRIBUTION FUNCTIONS

In nondeterministic or random mathematical models, parameters may also be used to characterize the probability distribution. With each probability distribution we may associate certain parameters which yield valuable information about the distribution [5].

A. Definition
Let X be a discrete random variable with possible values x_1, ..., x_n, ... . Let p(x_i) = P(X = x_i), i = 1, 2, ... . Then the expected value of X, denoted by E(X), is defined as

E(X) = ∑_{i=1}^{∞} x_i p(x_i)    (1)

if the series ∑_{i=1}^{∞} x_i p(x_i) converges absolutely. This number is also referred to as the mean value of X.

B. Definition
Let X be a continuous random variable with probability density function f. The expected value of X is defined as

E(X) = ∫_{−∞}^{+∞} x f(x) dx,    (2)

if the improper integral is absolutely convergent, that is, if ∫_{−∞}^{+∞} |x| f(x) dx converges.

C. Binomial Distribution
Consider an experiment ε and let A be some event associated with ε. Suppose that P(A) = p and hence P(Ā) = 1 − p. Consider n independent repetitions of ε. Hence the sample space consists of all possible sequences {a_1, a_2, ..., a_n}, where each a_i is either A or Ā, depending on whether A or Ā occurred on the ith repetition of ε. Furthermore, assume that P(A) = p remains the same for all repetitions [2].

Let the random variable X be defined as follows: X = number of times the event A occurred. Then X is called a binomial random variable with parameters n and p, and its possible values are obviously 0, 1, 2, ..., n. Equivalently, X has a binomial distribution. The individual repetitions of ε will be called Bernoulli trials.

D. Uniform Distribution
Suppose that X is a continuous random variable assuming all values in the interval [a, b], where both a and b are finite. If the pdf of X is given by

f(x) = 1/(b − a),  a ≤ x ≤ b    (3)
     = 0,  elsewhere,

then X is uniformly distributed over the interval [a, b].

Fig. 1 X has a uniform distribution (constant density between x = a and x = b)

E. The Poisson Distribution
Let X be a discrete random variable assuming the possible values 0, 1, 2, ..., n, ... . If

P(X = k) = e^{−α} α^k / k!,  k = 0, 1, 2, ..., n, ...,    (4)

then X has a Poisson distribution with parameter α > 0.

F. Geometric Distribution
Assume, as in the discussion of the binomial distribution, that we perform ε repeatedly, that the repetitions are independent, and that on each repetition P(A) = p and P(Ā) = 1 − p = q remain the same. Suppose that we repeat the experiment until

A occurs for the first time. Define the random variable X as the number of repetitions required up to and including the first occurrence of A. Thus X assumes the possible values 1, 2, ... . Since X = k if and only if the first (k − 1) repetitions of ε result in Ā while the kth repetition results in A,

P(X = k) = q^{k−1} p,  k = 1, 2, ... .    (5)

A random variable with probability distribution (5) is said to have a geometric distribution [1].

G. The Normal Distribution
The random variable X, assuming all real values −∞ < x < ∞, has a normal (or Gaussian) distribution if its pdf is of the form

f(x) = (1/(√(2π) σ)) exp[−(1/2)((x − µ)/σ)²],  −∞ < x < ∞.    (6)

The parameters µ and σ must satisfy the conditions −∞ < µ < ∞ and σ > 0. X has the distribution N(µ, σ²) if and only if its probability distribution is given by (6).

Fig. 2 X has a normal distribution (density symmetric about x = µ)

H. The Exponential Distribution
A continuous random variable X assuming all nonnegative values is said to have an exponential distribution with parameter α > 0 if its pdf is given by

f(x) = α e^{−αx},  x > 0    (7)
     = 0,  elsewhere.

Fig. 3 X has an exponential distribution

I. The Gamma Distribution
The Gamma function, denoted by Γ, is defined as follows:

Γ(p) = ∫_0^{∞} x^{p−1} e^{−x} dx,  defined for p > 0.    (8)

Let X be a continuous random variable assuming only non-negative values. Then X has a Gamma probability distribution if its pdf is given by

f(x) = (α/Γ(r)) (αx)^{r−1} e^{−αx},  x > 0    (9)
     = 0,  elsewhere.

This distribution depends on two parameters, r and α, both of which must satisfy r > 0 and α > 0.

Fig. 4 X has a Gamma distribution (densities shown for r = 1, r = 2, r = 4)

III. THE MOMENT-GENERATING FUNCTIONS

A. Definitions
Let X be a discrete random variable with probability distribution p(x_j) = P(X = x_j), j = 1, 2, ... . The function M_X, called the moment-generating function of X, is defined by

M_X(t) = ∑_{j=1}^{∞} e^{t x_j} p(x_j).    (10)
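The series in definition (10) can be checked numerically. The following Python sketch (our own illustration; the parameter p = 0.4 is hypothetical) truncates the series for the geometric distribution of Section II and compares it with the standard closed form p e^t / (1 − q e^t), valid for q e^t < 1, a known result quoted here only for comparison rather than derived in the paper:

```python
import math

# Geometric distribution (Section II.F): P(X = k) = q^(k-1) p, k = 1, 2, ...
p = 0.4          # hypothetical parameter chosen for illustration
q = 1 - p

# Definition (10): M_X(t) = sum over k of e^{tk} P(X = k), truncated at `terms`.
def mgf_series(t, terms=2000):
    return sum(math.exp(t * k) * q ** (k - 1) * p for k in range(1, terms + 1))

# Known closed form p e^t / (1 - q e^t), valid when q e^t < 1.
def mgf_closed(t):
    return p * math.exp(t) / (1 - q * math.exp(t))

print(mgf_series(0.0))                                # ≈ 1.0: probabilities sum to 1
print(abs(mgf_series(0.1) - mgf_closed(0.1)) < 1e-9)  # True
```

Because the terms of the truncated series decay geometrically (ratio q e^t < 1), a few thousand terms already agree with the closed form to machine precision.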

If X is a continuous random variable with pdf f, the moment-generating function is defined by

M_X(t) = ∫_{−∞}^{+∞} e^{tx} f(x) dx.    (11)

In either the discrete or the continuous case, M_X(t) is simply the expected value of e^{tX}:

M_X(t) = E(e^{tX}).    (12)

M_X(t) is the value which the function M_X assumes for the real variable t.

B. Example
Suppose that X is uniformly distributed over the interval [a, b]. Therefore the moment-generating function is given by

M_X(t) = ∫_a^b (e^{tx}/(b − a)) dx
       = (1/(t(b − a))) [e^{bt} − e^{at}],  t ≠ 0.    (13)

C. Example
Suppose that X is binomially distributed with parameters n and p. Then

M_X(t) = ∑_{k=0}^{n} e^{tk} \binom{n}{k} p^k (1 − p)^{n−k}
       = ∑_{k=0}^{n} \binom{n}{k} (p e^t)^k (1 − p)^{n−k}
       = [p e^t + (1 − p)]^n.    (14)

D. Example
Suppose that X has a Poisson distribution with parameter λ. Thus

M_X(t) = ∑_{k=0}^{∞} e^{tk} e^{−λ} λ^k / k!
       = e^{−λ} ∑_{k=0}^{∞} (λ e^t)^k / k!
       = e^{−λ} e^{λ e^t} = e^{λ(e^t − 1)}.    (15)

E. Example
Suppose that X has an exponential distribution with parameter α. Therefore

M_X(t) = ∫_0^{∞} e^{tx} α e^{−αx} dx
       = α ∫_0^{∞} e^{x(t − α)} dx
       = α/(α − t),  t < α.    (16)

F. Example
Suppose that X has the normal distribution N(µ, σ²). Hence

M_X(t) = (1/(√(2π) σ)) ∫_{−∞}^{∞} e^{tx} exp[−(1/2)((x − µ)/σ)²] dx.

Let (x − µ)/σ = s; thus x = σs + µ and dx = σ ds. Therefore

M_X(t) = (1/√(2π)) ∫_{−∞}^{∞} exp[t(σs + µ)] e^{−s²/2} ds
       = e^{tµ} (1/√(2π)) ∫_{−∞}^{∞} exp[−(1/2)(s² − 2σts)] ds
       = e^{tµ} (1/√(2π)) ∫_{−∞}^{∞} exp[−(1/2)((s − σt)² − σ²t²)] ds
       = e^{tµ + σ²t²/2} (1/√(2π)) ∫_{−∞}^{∞} exp[−(1/2)(s − σt)²] ds.

Let s − σt = γ; then ds = dγ and

M_X(t) = e^{tµ + σ²t²/2} (1/√(2π)) ∫_{−∞}^{∞} e^{−γ²/2} dγ
       = e^{tµ + σ²t²/2}.    (17)

G. Example
Let X have a Gamma distribution with parameters α and r. Then
M_X(t) = (α/Γ(r)) ∫_0^{∞} e^{tx} (αx)^{r−1} e^{−αx} dx
       = (α^r/Γ(r)) ∫_0^{∞} x^{r−1} e^{−x(α − t)} dx.

Let x(α − t) = u; thus dx = du/(α − t), and

M_X(t) = (α^r/((α − t) Γ(r))) ∫_0^{∞} (u/(α − t))^{r−1} e^{−u} du
       = (α/(α − t))^r (1/Γ(r)) ∫_0^{∞} u^{r−1} e^{−u} du.

Since the integral equals Γ(r),

M_X(t) = (α/(α − t))^r.    (18)

If r = 1, the Gamma distribution reduces to the exponential distribution, and (18) agrees with (16).

IV. PROPERTIES OF THE MOMENT-GENERATING FUNCTIONS

The Maclaurin series expansion of the function e^x is

e^x = 1 + x + x²/2! + x³/3! + ... + x^n/n! + ... .

Thus

e^{tx} = 1 + tx + (tx)²/2! + ... + (tx)^n/n! + ... .

Now

M_X(t) = E(e^{tX}) = E[1 + tX + (tX)²/2! + ... + (tX)^n/n! + ...]
       = 1 + tE(X) + t²E(X²)/2! + ... + t^n E(X^n)/n! + ... .

Since M_X is a function of the real variable t, we may consider the derivative of M_X(t) with respect to t, that is, M′_X(t):

M′_X(t) = E(X) + tE(X²) + t²E(X³)/2! + ... + t^{n−1} E(X^n)/(n − 1)! + ... .

Setting t = 0,

M′(0) = E(X).

Thus the first derivative of the moment-generating function evaluated at t = 0 yields the expected value of the random variable [6]. The second derivative of M_X(t) is

M″(t) = E(X²) + tE(X³) + ... + t^{n−2} E(X^n)/(n − 2)! + ...,

and setting t = 0,

M″(0) = E(X²).

The nth derivative of M_X(t) evaluated at t = 0 is

M^{(n)}(0) = E(X^n).

The numbers E(X^n), n = 1, 2, ..., are called the nth moments of the random variable X about zero.

The general Maclaurin series expansion of the function M_X is

M_X(t) = M_X(0) + M′_X(0) t + ... + M_X^{(n)}(0) t^n/n! + ...
       = 1 + µ_1 t + µ_2 t²/2! + ... + µ_n t^n/n! + ...,

where µ_i = E(X^i), i = 1, 2, ... . In particular,

V(X) = E(X²) − (E(X))² = M″(0) − [M′(0)]².

A. Theorem
Suppose that the random variable X has moment-generating function M_X. Let Y = αX + β. Then M_Y, the moment-generating function of the random variable Y, is given by

M_Y(t) = e^{βt} M_X(αt).    (20)

Proof:

M_Y(t) = E(e^{Yt}) = E(e^{(αX + β)t})
       = e^{βt} E(e^{αtX})
       = e^{βt} M_X(αt).

B. Theorem
Suppose that X and Y are independent random variables. Let Z = X + Y. Let M_X(t), M_Y(t) and M_Z(t) be the moment-generating functions of the random variables X, Y and Z, respectively. Then

M_Z(t) = M_X(t) M_Y(t).    (21)

Proof:

M_Z(t) = E(e^{Zt}) = E(e^{(X + Y)t})
       = E(e^{Xt} e^{Yt})
       = E(e^{Xt}) E(e^{Yt}) = M_X(t) M_Y(t),

where the factorization of the expectation uses the independence of X and Y.

V. REPRODUCTIVE PROPERTIES OF DISTRIBUTIONS

If two or more independent random variables having a certain distribution are added, the resulting random variable has a distribution of the same type as that of the summands. This result is the reproductive property [6].

A. Example
Suppose that X and Y are independent random variables with distributions N(µ_1, σ_1²) and N(µ_2, σ_2²), respectively. Let Z = X + Y. Hence

M_Z(t) = M_X(t) M_Y(t) = exp(µ_1 t + σ_1² t²/2) exp(µ_2 t + σ_2² t²/2)
       = exp[(µ_1 + µ_2) t + (σ_1² + σ_2²) t²/2].

Thus Z has the normal distribution N(µ_1 + µ_2, σ_1² + σ_2²).

B. Example
The length of a rod is a normally distributed random variable with mean 4 inches and variance 0.01 inch². Two such rods are placed end to end and fitted into a slot. The length of this slot is 8 inches with a tolerance of ±0.1 inch. The probability that the two rods will fit can be evaluated.

Letting L_1 and L_2 represent the lengths of rod 1 and rod 2, L = L_1 + L_2 is normally distributed with E(L) = 8 and V(L) = 0.02. Hence, with √0.02 ≈ 0.14,

P[7.9 ≤ L ≤ 8.1] = P[(7.9 − 8)/0.14 ≤ (L − 8)/0.14 ≤ (8.1 − 8)/0.14]
                 = Φ(+0.714) − Φ(−0.714) = 0.526,

from the tables of the normal distribution [4].

C. Theorem
Let X_1, ..., X_n be independent random variables. Suppose that X_i has a Poisson distribution with parameter α_i, i = 1, 2, ..., n. Let Z = X_1 + ... + X_n. Then Z has a Poisson distribution with parameter α = α_1 + ... + α_n.

Proof:
For the case of n = 2:

M_{X_1}(t) = e^{α_1(e^t − 1)},  M_{X_2}(t) = e^{α_2(e^t − 1)}.

Hence M_Z(t) = e^{(α_1 + α_2)(e^t − 1)}. This is the moment-generating function of a random variable with a Poisson distribution having parameter α_1 + α_2. By mathematical induction, the theorem is proved.

D. Example
Suppose that the number of calls coming into a telephone exchange between 9 a.m. and 10 a.m., X_1, is a random variable with a Poisson distribution with parameter 3. Similarly, the number of calls arriving between 10 a.m. and 11 a.m., X_2, also has a Poisson distribution, with parameter 5. If X_1 and X_2 are
independent, the probability that more than 5 calls come in between 9 a.m. and 11 a.m. can be found [3].

Let Z = X_1 + X_2. From the above theorem, Z has a Poisson distribution with parameter 3 + 5 = 8. Hence

P(Z > 5) = 1 − P(Z ≤ 5)
         = 1 − ∑_{k=0}^{5} e^{−8} 8^k/k!
         = 1 − 0.1912 = 0.8088.

E. Theorem
Suppose that X_1, ..., X_k are independent random variables, each having the distribution N(0, 1). Then S = X_1² + X_2² + ... + X_k² has the distribution χ_k².

F. Example
Suppose that X_1, ..., X_n are independent random variables, each with distribution N(0, 1). Let T = √(X_1² + ... + X_n²). Since T² has the distribution χ_n²,

H(t) = P(T ≤ t) = P(T² ≤ t²)
     = ∫_0^{t²} (1/(2^{n/2} Γ(n/2))) z^{n/2 − 1} e^{−z/2} dz.

Hence

h(t) = H′(t) = 2t (1/(2^{n/2} Γ(n/2))) (t²)^{n/2 − 1} e^{−t²/2}
     = (2 t^{n−1} e^{−t²/2})/(2^{n/2} Γ(n/2))  if t ≥ 0.

VI. CONCLUSIONS

The moment-generating function as defined above is written as an infinite series or improper integral, depending on whether the random variable is discrete or continuous. The method of moment-generating functions has been used to evaluate the expectation and variance of a random variable with a given probability distribution. We have then discussed a number of distributions for which a reproductive property holds. We have seen that the moment-generating function can be a powerful tool for studying various aspects of probability distributions, and we found it very helpful in studying sums of independent, identically distributed random variables and in obtaining various reproductive laws.

ACKNOWLEDGEMENT

I would like to thank Dr. Khin Mg Swe, Professor (Retired), Department of Mathematics, who shared ideas and helpful suggestions. I am also grateful to my supervisor Dr. Daw Win Kyi, Professor (Retired), Department of Mathematics, Yangon University, who motivated me to do this work. I am thankful to my parents and my teachers for their patience, understanding and encouragement during this work.

REFERENCES
[1] B. A. Robert, Basic Probability Theory, Mineola, New York: Dover Publications, Inc., pp. 46-95, 1970.
[2] J. L. Devore, Probability and Statistics for Engineering and the Sciences, Canada: Nelson Education, Ltd., pp. 130-183, 2009.
[3] M. R. Spiegel, Theory and Problems of Statistics, New York: McGraw-Hill, pp. 139-146, 1961.
[4] P. L. Meyer, Introductory Probability and Statistical Applications, London: Addison-Wesley Publishing Company, pp. 209-222, 2004.
[5] T. T. Soong, Fundamentals of Probability and Statistics for Engineers, New York: John Wiley & Sons, Ltd., pp. 161-219, 2004.
[6] W. Feller, An Introduction to Probability Theory and Its Applications, USA: John Wiley & Sons, Inc., pp. 146-193, 1967.