Lecture X

 Definition 2.3.3. Let X be a random variable with cdf $F_X$. The moment generating function (mgf) of X (or $F_X$), denoted $M_X(t)$, is

$$M_X(t) = E\left[e^{tX}\right],$$

provided that the expectation exists for t in some neighborhood of 0. That is, there is an h > 0 such that, for all t in −h < t < h, $E[e^{tX}]$ exists.
 If the expectation does not exist in a neighborhood of
0, we say that the moment generating function does
not exist.
 More explicitly, the moment generating function can be defined as:

$$M_X(t) = \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx \quad \text{for continuous random variables, and}$$

$$M_X(t) = \sum_{x} e^{tx} P\left[X = x\right] \quad \text{for discrete random variables.}$$
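To make the definition concrete, here is a small sympy sketch (our own illustration, not part of the original notes; the Exponential(1) and Bernoulli(p) examples are assumptions chosen for convenience) evaluating both the continuous and the discrete form:

```python
# A minimal sketch of Definition 2.3.3: the mgf is E[e^{tX}], an
# integral for continuous X and a sum for discrete X.
import sympy as sp

t, x = sp.symbols('t x', real=True)

# Continuous example (assumed): X ~ Exponential(1), f(x) = e^{-x} on [0, oo).
f = sp.exp(-x)
M_cont = sp.integrate(sp.exp(t*x) * f, (x, 0, sp.oo), conds='none')
print(sp.simplify(M_cont))   # 1/(1 - t), valid for t < 1, a neighborhood of 0

# Discrete example (assumed): X ~ Bernoulli(p), P[X=0] = 1-p, P[X=1] = p.
p = sp.symbols('p', positive=True)
M_disc = (1 - p)*sp.exp(t*0) + p*sp.exp(t*1)
print(sp.simplify(M_disc))   # 1 - p + p*exp(t)
```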
 Theorem 2.3.2: If X has mgf $M_X(t)$, then

$$E\left[X^n\right] = M_X^{(n)}(0),$$

where we define

$$M_X^{(n)}(0) = \left.\frac{d^n}{dt^n} M_X(t)\right|_{t=0}.$$
 First note that $e^{tx}$ can be approximated around zero using a Taylor series expansion:

$$M_X(t) = E\left[e^{tx}\right] = E\left[e^{0} + t e^{0}(x - 0) + \frac{1}{2}t^2 e^{0}(x - 0)^2 + \frac{1}{6}t^3 e^{0}(x - 0)^3 + \cdots\right]$$

$$= 1 + E[x]\,t + E\left[x^2\right]\frac{t^2}{2} + E\left[x^3\right]\frac{t^3}{6} + \cdots$$
Note that for any moment n:

$$M_X^{(n)}(t) = \frac{d^n}{dt^n} M_X(t) = E\left[x^n\right] + E\left[x^{n+1}\right]t + \frac{1}{2}E\left[x^{n+2}\right]t^2 + \cdots$$

Thus, as t → 0,

$$M_X^{(n)}(0) = E\left[x^n\right].$$
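The theorem is easy to check symbolically. The sketch below (our addition, assuming X ~ Exponential(1) with mgf 1/(1 − t)) differentiates the mgf and compares the result against moments computed by direct integration:

```python
# Sketch of Theorem 2.3.2: the n-th derivative of the mgf at t = 0
# recovers E[X^n]. Assumed test case: X ~ Exponential(1), n-th moment n!.
import sympy as sp

t, x = sp.symbols('t x', real=True)
M = 1 / (1 - t)                  # mgf of Exponential(1)

for k in (1, 2, 3):
    moment = sp.diff(M, t, k).subs(t, 0)                      # M_X^{(k)}(0)
    direct = sp.integrate(x**k * sp.exp(-x), (x, 0, sp.oo))   # E[X^k]
    print(k, moment, direct)     # both give k!: 1, 2, 6
```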
 Leibnitz’s Rule: If f(x,θ), a(θ), and b(θ) are differentiable with respect to θ, then

$$\frac{d}{d\theta}\int_{a(\theta)}^{b(\theta)} f(x,\theta)\,dx = f\big(b(\theta),\theta\big)\,\frac{d}{d\theta}b(\theta) - f\big(a(\theta),\theta\big)\,\frac{d}{d\theta}a(\theta) + \int_{a(\theta)}^{b(\theta)} \frac{\partial}{\partial\theta} f(x,\theta)\,dx.$$
 Casella and Berger proof: Assuming that we can differentiate under the integral sign using Leibnitz’s rule, we have

$$\frac{d}{dt}M_X(t) = \frac{d}{dt}\int_{-\infty}^{\infty} e^{tx} f(x)\,dx = \int_{-\infty}^{\infty} \left(\frac{d}{dt}\,e^{tx}\right) f(x)\,dx = \int_{-\infty}^{\infty} x\,e^{tx} f(x)\,dx.$$
 Letting t → 0, this integral simply becomes

$$\int_{-\infty}^{\infty} x f(x)\,dx = E[x].$$

 This proof can be extended to any moment of the distribution.
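As an illustration of the argument (our own sketch, again assuming an Exponential(1) density), sympy can differentiate the integrand under the integral sign and confirm that the resulting integral at t = 0 equals E[x]:

```python
# Sketch of the differentiate-under-the-integral step: d/dt of the mgf
# integrand is x e^{tx} f(x); integrating and letting t -> 0 gives E[x].
import sympy as sp

t, x = sp.symbols('t x', real=True)
f = sp.exp(-x)                          # assumed density on [0, oo)

inner = sp.diff(sp.exp(t*x) * f, t)     # x * e^{tx} * f(x)
M_prime = sp.integrate(inner, (x, 0, sp.oo), conds='none')
print(sp.simplify(M_prime.subs(t, 0)))  # 1 = E[x] for Exponential(1)
```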
Moment Generating Functions for Specific Distributions
 Application to the Uniform Distribution:

$$M_X(t) = \int_a^b e^{tx}\,\frac{1}{b-a}\,dx = \frac{1}{t(b-a)}\,e^{tx}\Big|_a^b = \frac{e^{bt} - e^{at}}{t(b-a)}.$$
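A quick symbolic check of this derivation (our addition, not part of the original notes):

```python
# Sketch: integrate e^{tx}/(b-a) over [a, b] and compare with the
# closed form (e^{bt} - e^{at}) / (t (b - a)) derived above.
import sympy as sp

t = sp.symbols('t', nonzero=True)       # t != 0 avoids the removable singularity
x, a, b = sp.symbols('x a b', real=True)

M = sp.integrate(sp.exp(t*x) / (b - a), (x, a, b))
target = (sp.exp(b*t) - sp.exp(a*t)) / (t*(b - a))
print(sp.simplify(M - target))          # 0
```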
 Following the expansion developed earlier, we have:

$$M_X(t) = \frac{1}{(b-a)t}\left[\left(1 + bt + \frac{1}{2}b^2 t^2 + \frac{1}{6}b^3 t^3 + \cdots\right) - \left(1 + at + \frac{1}{2}a^2 t^2 + \frac{1}{6}a^3 t^3 + \cdots\right)\right]$$

$$= \frac{(b-a)t}{(b-a)t} + \frac{\left(b^2 - a^2\right)t^2}{2(b-a)t} + \frac{\left(b^3 - a^3\right)t^3}{6(b-a)t} + \cdots$$

$$= 1 + \frac{1}{2}(a+b)\,t + \frac{1}{6}\left(a^2 + ab + b^2\right)t^2 + \cdots$$
 Letting b = 1 and a = 0, the last expression becomes:

$$M_X(t) = 1 + \frac{1}{2}t + \frac{1}{6}t^2 + \frac{1}{24}t^3 + \cdots$$

 The first three moments of the uniform distribution are then:

$$M_X^{(1)}(0) = \frac{1}{2}, \qquad M_X^{(2)}(0) = 2\cdot\frac{1}{6} = \frac{1}{3}, \qquad M_X^{(3)}(0) = 6\cdot\frac{1}{24} = \frac{1}{4}.$$
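These three values can be confirmed from the mgf of U(0, 1) directly; the sketch below (our addition) uses limits at t = 0 because $(e^t - 1)/t$ has a removable singularity there:

```python
# Sketch: recover the first three moments of U(0, 1) from its mgf,
# matching the 1/2, 1/3, 1/4 computed above.
import sympy as sp

t = sp.symbols('t')
M = (sp.exp(t) - 1) / t          # mgf of U(0, 1): b = 1, a = 0

for k in (1, 2, 3):
    # limit rather than substitution, since M is undefined at t = 0
    print(k, sp.limit(sp.diff(M, t, k), t, 0))   # 1/2, 1/3, 1/4
```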
 Application to the Univariate Normal Distribution:

$$M_X(t) = \int_{-\infty}^{\infty} e^{tx}\,\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}\,dx = \frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^{\infty} \exp\left[tx - \frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}\right] dx.$$
 Focusing on the term in the exponent, we have

$$tx - \frac{1}{2}\frac{(x-\mu)^2}{\sigma^2} = -\frac{1}{2\sigma^2}\left[(x-\mu)^2 - 2\sigma^2 tx\right] = -\frac{1}{2\sigma^2}\left[x^2 - 2\mu x + \mu^2 - 2\sigma^2 tx\right] = -\frac{1}{2\sigma^2}\left[x^2 - 2\left(\mu + \sigma^2 t\right)x + \mu^2\right].$$
 The next step is to complete the square in the numerator. Adding a constant c,

$$x^2 - 2\left(\mu + \sigma^2 t\right)x + \mu^2 + c = \left(x - \left(\mu + \sigma^2 t\right)\right)^2 = x^2 - 2\left(\mu + \sigma^2 t\right)x + \mu^2 + 2\mu\sigma^2 t + \sigma^4 t^2,$$

so that

$$c = 2\mu\sigma^2 t + \sigma^4 t^2.$$
 The complete expression then becomes:

$$tx - \frac{(x-\mu)^2}{2\sigma^2} = -\frac{\left(x - \left(\mu + \sigma^2 t\right)\right)^2 - 2\mu\sigma^2 t - \sigma^4 t^2}{2\sigma^2} = -\frac{\left(x - \left(\mu + \sigma^2 t\right)\right)^2}{2\sigma^2} + \mu t + \frac{1}{2}\sigma^2 t^2.$$
 The moment generating function then becomes:

$$M_X(t) = \exp\left(\mu t + \frac{1}{2}\sigma^2 t^2\right)\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\left[-\frac{\left(x - \left(\mu + \sigma^2 t\right)\right)^2}{2\sigma^2}\right] dx = \exp\left(\mu t + \frac{1}{2}\sigma^2 t^2\right),$$

since the remaining integral is that of a normal density with mean $\mu + \sigma^2 t$ and variance $\sigma^2$, and hence equals one.
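sympy can reproduce this result by evaluating the Gaussian integral directly (our sketch, not part of the notes; it assumes σ > 0 so the integral converges):

```python
# Sketch: evaluate the normal mgf integral and confirm the closed form
# exp(mu*t + sigma^2 * t^2 / 2) derived above.
import sympy as sp

t, x, mu = sp.symbols('t x mu', real=True)
sigma = sp.symbols('sigma', positive=True)

density = sp.exp(-(x - mu)**2 / (2*sigma**2)) / sp.sqrt(2*sp.pi*sigma**2)
M = sp.integrate(sp.exp(t*x) * density, (x, -sp.oo, sp.oo), conds='none')
print(sp.simplify(M))            # exp(mu*t + sigma**2*t**2/2)
```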
 Taking the first derivative with respect to t, we get:

$$M_X^{(1)}(t) = \left(\mu + \sigma^2 t\right)\exp\left(\mu t + \frac{1}{2}\sigma^2 t^2\right).$$

 Letting t → 0, this becomes:

$$M_X^{(1)}(0) = \mu.$$
 The second derivative of the moment generating function with respect to t yields:

$$M_X^{(2)}(t) = \sigma^2\exp\left(\mu t + \frac{1}{2}\sigma^2 t^2\right) + \left(\mu + \sigma^2 t\right)^2\exp\left(\mu t + \frac{1}{2}\sigma^2 t^2\right).$$

 Again, letting t → 0 yields

$$M_X^{(2)}(0) = \sigma^2 + \mu^2.$$
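Both moments can be checked by differentiating the mgf symbolically (our addition):

```python
# Sketch: differentiate the normal mgf and evaluate at t = 0 to confirm
# the first two moments mu and sigma^2 + mu^2 computed above.
import sympy as sp

t, mu = sp.symbols('t mu', real=True)
sigma = sp.symbols('sigma', positive=True)
M = sp.exp(mu*t + sigma**2 * t**2 / 2)

print(sp.diff(M, t, 1).subs(t, 0))             # mu
print(sp.expand(sp.diff(M, t, 2).subs(t, 0)))  # mu**2 + sigma**2
```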
 Let X and Y be independent random variables with moment generating functions $M_X(t)$ and $M_Y(t)$. Consider their sum Z = X + Y and its moment generating function:

$$M_Z(t) = E\left[e^{tz}\right] = E\left[e^{t(x+y)}\right] = E\left[e^{tx}e^{ty}\right] = E\left[e^{tx}\right]E\left[e^{ty}\right] = M_X(t)\,M_Y(t),$$

where the factorization of the expectation uses the independence of X and Y.
 We conclude that the moment generating function of the sum of two independent random variables is equal to the product of the moment generating functions of each variable.
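A small symbolic check of this product rule (our sketch, using two independent normals as an assumed example; their product should be the mgf of N(μ₁ + μ₂, σ₁² + σ₂²)):

```python
# Sketch: if X ~ N(mu1, s1^2) and Y ~ N(mu2, s2^2) are independent,
# M_X(t) * M_Y(t) equals the mgf of Z = X + Y ~ N(mu1+mu2, s1^2+s2^2).
import sympy as sp

t, mu1, mu2 = sp.symbols('t mu1 mu2', real=True)
s1, s2 = sp.symbols('s1 s2', positive=True)

M_X = sp.exp(mu1*t + s1**2 * t**2 / 2)
M_Y = sp.exp(mu2*t + s2**2 * t**2 / 2)
M_Z = sp.exp((mu1 + mu2)*t + (s1**2 + s2**2) * t**2 / 2)

print(sp.simplify(M_X * M_Y - M_Z))   # 0
```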
 Skipping ahead slightly, the multivariate normal distribution function can be written as:

$$f(x) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x-\mu)'\,\Sigma^{-1}(x-\mu)\right),$$

where Σ is the variance matrix and μ is a vector of means.
 In order to derive the moment generating function, we now need a vector t. The moment generating function can then be defined as:

$$M_X(t) = \exp\left(\mu' t + \frac{1}{2}\,t'\,\Sigma\,t\right).$$
 Normal variables are independent if the variance
matrix is a diagonal matrix.
 Note that if the variance matrix is diagonal, the moment generating function for the normal can be written as:

$$M_X(t) = \exp\left(\mu' t + \frac{1}{2}\,t'\begin{bmatrix} \sigma_1^2 & 0 & 0 \\ 0 & \sigma_2^2 & 0 \\ 0 & 0 & \sigma_3^2 \end{bmatrix} t\right)$$

$$= \exp\left(\mu_1 t_1 + \mu_2 t_2 + \mu_3 t_3 + \frac{1}{2}\left(t_1^2\sigma_1^2 + t_2^2\sigma_2^2 + t_3^2\sigma_3^2\right)\right)$$

$$= \exp\left(\mu_1 t_1 + \frac{1}{2}t_1^2\sigma_1^2\right)\exp\left(\mu_2 t_2 + \frac{1}{2}t_2^2\sigma_2^2\right)\exp\left(\mu_3 t_3 + \frac{1}{2}t_3^2\sigma_3^2\right)$$

$$= M_{X_1}(t_1)\,M_{X_2}(t_2)\,M_{X_3}(t_3).$$
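The factorization can be verified symbolically; the sketch below (our addition) builds the joint mgf with a diagonal Σ and checks that it equals the product of the three univariate mgfs:

```python
# Sketch: with a diagonal variance matrix, the multivariate normal mgf
# exp(mu't + t' Sigma t / 2) factors into three univariate normal mgfs.
import sympy as sp

t1, t2, t3, m1, m2, m3 = sp.symbols('t1 t2 t3 m1 m2 m3', real=True)
s1, s2, s3 = sp.symbols('s1 s2 s3', positive=True)

t = sp.Matrix([t1, t2, t3])
mu = sp.Matrix([m1, m2, m3])
Sigma = sp.diag(s1**2, s2**2, s3**2)

M_joint = sp.exp((mu.T * t)[0] + sp.Rational(1, 2) * (t.T * Sigma * t)[0])
M_product = sp.prod(sp.exp(m*ti + sg**2 * ti**2 / 2)
                    for m, sg, ti in zip([m1, m2, m3], [s1, s2, s3], [t1, t2, t3]))
print(sp.simplify(M_joint - M_product))   # 0
```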