
Addis Ababa University

College of Business and Economics


Department of Economics
Econ 2042: Statistics for Economists
2. Random Variables & Probability Distributions
Fantu Guta Chemrie (PhD)

F. Guta (CoBE) Econ 2042 March, 2024 1 / 54


2. Random Variables and Probability
Distributions (2 weeks)
2.1. The Concept of a Random Variable
2.2. Discrete Random Variables and Their Probability
Distributions
2.3. Continuous Random Variables and Their Probability
Density Functions (pdf)
2.4. The Expected Value of a Random Variable and
Moments
2.5. Moment Generating Function (MGF)



2. Random Variables & Probability Distributions
2.1. The Concept of a Random Variable

Definition (2.1)
A random variable (rv) is a real-valued function defined on the elements of a sample space; i.e., if S is a sample space with a probability measure and X is a real-valued function defined over the elements of S, then X is called a random variable.

Alternatively, a random variable is a real-valued function X : S → R such that A_x = {ω : X(ω) ≤ x} is a member of F (meaning P(A_x) is defined).

Example (2.1)
Toss a fair coin, S = {H, T}. Define X by

X(ω) = 0 if ω = T and X(ω) = 1 if ω = H.

Consider x = 1/2 and A_x = A_{1/2}; then A_{1/2} = {ω : X(ω) ≤ 1/2} = {T}.



Example (2.2)
Consider the experiment of tossing a single coin. Let the random variable X denote the number of heads. Then S = {H, T} and X(ω) = 1 if ω = H, and X(ω) = 0 if ω = T.

Example (2.3)
Consider an experiment of tossing two coins. Let the random variable X denote the number of heads, and let the random variable Y denote the number of tails. Then S = {HH, HT, TH, TT}, and X(ω) = 2 if ω = HH, X(ω) = 1 if ω = HT or TH, X(ω) = 0 if ω = TT.

Similarly, Y(ω) = 2 if ω = TT, Y(ω) = 1 if ω = HT or TH, Y(ω) = 0 if ω = HH.

ω     X    Y    X + Y
HH    2    0    2
HT    1    1    2
TH    1    1    2
TT    0    2    2

We say that the space of the rv X = {0, 1, 2}.
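As an illustrative sketch (not part of the original notes), the two-coin example can be checked in Python by enumerating the sample space and evaluating X, Y and X + Y on each sample point:

```python
from itertools import product

# Sample space S = {HH, HT, TH, TT} for tossing two coins
S = ["".join(pair) for pair in product("HT", repeat=2)]

X = {omega: omega.count("H") for omega in S}  # number of heads
Y = {omega: omega.count("T") for omega in S}  # number of tails

for omega in S:
    print(omega, X[omega], Y[omega], X[omega] + Y[omega])

# The space of X is {0, 1, 2}, and X + Y = 2 on every sample point.
```

Note that X + Y is a degenerate random variable here: it takes the value 2 with probability 1.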

Example (2.4)
The sample space for tossing a die once is S = {1, 2, 3, 4, 5, 6}.
Let the rv X denote the number on the face that turns up; then we can write

X(1) = 1, X(2) = 2, X(3) = 3, X(4) = 4, X(5) = 5, X(6) = 6.

Note that in this case X(ω) = ω: we call such a function an identity function.

Example (2.5)
The sample space for tossing a coin until a head turns up is

S = {H, TH, TTH, TTTH, ...}.

Let the rv X be the number of trials required to produce the first head; then we can write X(H) = 1, X(TH) = 2, X(TTH) = 3, and so on.

In this case, the space (or range) of the rv X is {1, 2, 3, ...}.

2.2 Discrete Random Variables and their Probability Distributions

Definition (2.2)
The distribution function of a random variable X is the function F_X : R → [0, 1] such that

F_X(x) = P({ω : X(ω) ≤ x}) = P(A_x) = P(X ≤ x),

where the last expression is informal usage but standard.

Properties of the Distribution Function (DF):

i). As x → -∞, F_X(x) → 0.
Proof: As x → -∞, A_x → ∅, and P(∅) = 0; hence as x → -∞, F_X(x) → 0.
ii). As x → +∞, F_X(x) → 1.
Proof: As x → +∞, A_x → S, and P(S) = 1; hence as x → +∞, F_X(x) → 1.
iii). If x1 ≤ x2, then F_X(x1) ≤ F_X(x2).
Proof: x1 ≤ x2 implies A_x1 ⊆ A_x2, so P(A_x1) ≤ P(A_x2), i.e., F_X(x1) ≤ F_X(x2).
iv). lim_{h→0, h>0} F_X(x + h) = F_X(x), i.e., F_X is continuous from the right.

Discrete random variable: a random variable is discrete if its range is either finite or countably infinite, i.e., ∀ω ∈ S, X(ω) ∈ {x1, x2, x3, ...} (possibly finite or countably infinite).

Probability distribution: if X is a discrete random variable, the function given by

f_X(x) = P(X = x)

for each x within the range of X is called the probability mass function (or the probability distribution) of X, i.e.,

f_X(x) = P(X = x_i) if x = x_i, i = 1, 2, ...; and f_X(x) = 0 if x ≠ x_i for all i.

Note: In this case F_X(x) = Σ_{i : x_i ≤ x} f_X(x_i) = P(X ≤ x).

Example (2.6)
In the experiment of tossing an unbiased coin once we have the sample space S = {H, T} with P(H) = P(T) = 0.5.
Let the rv X = number of heads in a sample point; then

f_X(x) = 1/2 if x = 0, 1; and f_X(x) = 0 otherwise.

Note: Properties common to all probability functions are

1). f_X(x) ≥ 0
2). Σ_x f_X(x) = 1.

This can also be presented as

x    f_X(x) = P(X = x)
0    0.5
1    0.5
Example (2.7)
In an experiment of tossing an unbiased coin twice we have the sample space S = {HH, HT, TH, TT}.
Let the rv X = number of heads in a sample point; then, letting P(H) = p and P(T) = 1 - p, we can present this as follows (the numerical column gives the values for p = 0.5):

x    f_X(x) = P(X = x)
0    (1 - p)^2         0.25
1    2p(1 - p)         0.50
2    p^2               0.25
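A quick numerical sketch (not in the original slides) showing that this pmf is a valid probability distribution for any p:

```python
# pmf of X = number of heads in two independent tosses with P(H) = p
def f_X(x, p):
    return {0: (1 - p) ** 2, 1: 2 * p * (1 - p), 2: p ** 2}.get(x, 0.0)

# For the unbiased coin (p = 0.5) the table's numerical column is recovered
probs = [f_X(x, 0.5) for x in (0, 1, 2)]
print(probs)  # [0.25, 0.5, 0.25]

# The probabilities sum to 1 for any p, e.g. p = 0.3
print(sum(f_X(x, 0.3) for x in (0, 1, 2)))
```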



Example (2.8)
In the experiment of flipping a coin and recording the number of tosses required to produce a head, the sample space is

S = {H, TH, TTH, TTTH, ...}.

Let the rv X = the number of tosses required to produce the first head, and let P(H) = p, P(T) = 1 - p; then



Example (2.8 continued. . . )

x    f_X(x) = P(X = x)
1    p
2    (1 - p)p
3    (1 - p)^2 p
4    (1 - p)^3 p
.    .
.    .

The probabilities of this process must sum to 1.

Proof: The probability function (distribution) for the above experiment is given by:

f_X(x) = P(X = x) = p(1 - p)^(x-1) for x = 1, 2, 3, ...; and 0 elsewhere.

⇒ Σ_x f_X(x) = Σ_{x=1}^∞ P(X = x) = Σ_{x=1}^∞ p(1 - p)^(x-1) = 1.
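As a sketch, the claim that the geometric probabilities sum to 1 can be checked numerically by truncating the infinite sum:

```python
# Geometric pmf: f(x) = p * (1 - p)**(x - 1) for x = 1, 2, 3, ...
def geometric_pmf(x, p):
    return p * (1 - p) ** (x - 1)

# A long partial sum is numerically indistinguishable from 1
for p in (0.5, 0.2):
    total = sum(geometric_pmf(x, p) for x in range(1, 500))
    print(p, total)
```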

Example (2.9)
In the experiment of rolling a die we have the sample space S = {1, 2, 3, 4, 5, 6}.
Here, X(ω) = ω and P(ω) = 1/6. Such distributions are known as uniform distributions.

x    f_X(x) = P(X = x)    F_X(x) = P(X ≤ x)
1    1/6                  1/6
2    1/6                  2/6
3    1/6                  3/6
4    1/6                  4/6
5    1/6                  5/6
6    1/6                  1

The probability mass function of the above uniform distribution is given by:

f_X(x) = 1/6 for x = 1, 2, ..., 6; and 0 elsewhere.

F_X(1.5) = 1/6 and F_X(1.9) = 1/6, implying that the probability that X lies strictly between 1 and 2 is zero, i.e.,

P(1 < X < 2) = 0.
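A minimal sketch of this computation, with the CDF of the die built directly from the pmf:

```python
# Uniform die: f_X(y) = 1/6 for y = 1..6; F_X(x) = sum of f_X(y) over y <= x
def F_X(x):
    return sum(1 / 6 for y in range(1, 7) if y <= x)

print(F_X(1.5), F_X(1.9))   # both 1/6: no probability mass between 1 and 2
print(F_X(1.9) - F_X(1.0))  # P(1 < X <= 1.9) = 0
print(F_X(6))               # approximately 1
```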



Example (2.9 continued. . . )
[Figure: graph of the step-function CDF F_X(x) of the die roll, rising by 1/6 at each of x = 1, 2, ..., 6.]

Graphically, F_X(x) is a step function with the height of the step at x_i equal to f_X(x_i). Formally,

F_X(x) = P(X ≤ x) = Σ_{y ≤ x} f_X(y).
Let R = the space of the random variable X, with

R = {x1, x2, x3, ..., xk}; x1 < x2 < x3 < ... < xk.

Then

F_X(x) = 0 for x < x1
F_X(x1) = f_X(x1)
F_X(x) = f_X(x1) for x1 ≤ x < x2
F_X(x2) = f_X(x1) + f_X(x2)
F_X(x) = f_X(x1) + f_X(x2) for x2 ≤ x < x3
F_X(x3) = f_X(x1) + f_X(x2) + f_X(x3)
...
F_X(x) = 1 when x ≥ xk.

Properties of F_X(x): the following five properties hold for the CDF

a). 0 ≤ F_X(x) ≤ 1
b). F_X(x) is non-decreasing
c). F_X(x) = 0 for x < x1, x1 being the minimum of the values of the random variable X.
d). F_X(x) = 1 for x ≥ xk, xk being the largest value of X.
e). P(x < X ≤ x′) = P(X ≤ x′) - P(X ≤ x) = F_X(x′) - F_X(x).

Thus we can go from f_X(x) to F_X(x) or vice versa; i.e., given f_X(x) we can derive F_X(x), or given F_X(x) we can derive f_X(x).
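This two-way passage can be sketched in Python for a small discrete distribution (the two-coin heads pmf is assumed as the example):

```python
# pmf of the number of heads in two fair tosses (assumed example)
pmf = {0: 0.25, 1: 0.50, 2: 0.25}
xs = sorted(pmf)

# pmf -> CDF: cumulative sums over the ordered support
cdf, running = {}, 0.0
for x in xs:
    running += pmf[x]
    cdf[x] = running

# CDF -> pmf: successive differences f(x_i) = F(x_i) - F(x_{i-1})
recovered = {xs[0]: cdf[xs[0]]}
for prev, x in zip(xs, xs[1:]):
    recovered[x] = cdf[x] - cdf[prev]

print(cdf)        # {0: 0.25, 1: 0.75, 2: 1.0}
print(recovered)  # matches the original pmf
```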

2.3 Continuous Random Variables and their Probability Density Functions (pdf)

Continuous random variable: a rv is continuous if the sample space of X is an interval or a union of intervals (e.g. height, weight, and the time elapsing between two telephone calls).

Let x1, x2, ..., xn be observations on a rv, and let

F_n(x) = (number of observations with x_i ≤ x) / n,

the relative frequency of the event x_i ≤ x. For any given value of x, as n increases F_n(x) tends to a limit, which we denote by F_X(x) and refer to as the probability that X ≤ x. Thus F_X is called the distribution function of the random variable X.

Definition (2.3)
A rv X is called continuous if there exists a function f_X(·) such that P(X ≤ x) = F_X(x) = ∫_{-∞}^{x} f_X(u) du for every real number x, and F_X(x) is called the distribution function of the rv X.

Definition (2.4)
If X is a continuous rv, the function f_X(·) in F_X(x) = ∫_{-∞}^{x} f_X(u) du is called the pdf of X.
Properties of continuous random variables

i). F_X(+∞) = lim_{x→+∞} F_X(x) = 1
ii). F_X(-∞) = lim_{x→-∞} F_X(x) = 0
iii). Properties i and ii imply that 0 ≤ F_X(x) ≤ 1.
iv). F_X(x) is non-decreasing, i.e., if a > b then F_X(a) - F_X(b) ≥ 0, which is P(X ≤ a) ≥ P(X ≤ b).
v). If F_X(x) is continuous and differentiable then

dF_X(x)/dx = f_X(x)

Proof:

P(a < X ≤ b) = P(X ≤ b) - P(X ≤ a) = F_X(b) - F_X(a)

Consider the interval (x, x + ∆x):

P(x < X ≤ x + ∆x) = F_X(x + ∆x) - F_X(x)

Supposing that ∆x is so small that f_X is approximately constant over the range (x, x + ∆x), then

P(x < X ≤ x + ∆x) = ∫_{x}^{x+∆x} f_X(u) du ≈ f_X(x) ∆x

⇒ F_X(x + ∆x) - F_X(x) ≈ f_X(x) ∆x

⇒ lim_{∆x→0} [F_X(x + ∆x) - F_X(x)] / ∆x = f_X(x)

⇒ dF_X(x)/dx = f_X(x)

The function f_X(x) is known as the pdf of the rv X and measures the density of probability at a point. As ∆x → 0, x + ∆x → x and P(x < X < x + ∆x) → 0.

vi). dF_X(x)/dx = f_X(x) ⇒ F_X(x) - F_X(-∞) = ∫_{-∞}^{x} f_X(u) du
vii). Since F_X(x) is non-decreasing, it follows that
a). f_X(x) ≥ 0
b). ∫_{-∞}^{+∞} f_X(x) dx = 1

viii). Note, however, that f_X(a) ≠ P(X = a), and f_X(a) could actually be greater than 1. For a continuous rv X, P(X = a) = lim_{∆a→0} P(a ≤ X < a + ∆a) = 0.
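As a sketch, properties v) and viii) can be checked numerically for an assumed example density f(x) = 2x on [0, 1], whose CDF on that interval is F(x) = x^2 (neither appears in the original notes):

```python
# Assumed example density: f(x) = 2x on [0, 1], with CDF F(x) = x**2 there
def F(x):
    if x < 0:
        return 0.0
    return x * x if x <= 1 else 1.0

def f(x):
    return 2 * x if 0 <= x <= 1 else 0.0

# dF/dx = f(x): compare a finite-difference slope of F with f at x = 0.4
x, dx = 0.4, 1e-6
slope = (F(x + dx) - F(x)) / dx
print(slope, f(x))  # both approximately 0.8

# f can exceed 1 (here f(0.9) = 1.8) even though probabilities cannot
print(f(0.9))
```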

2.4 The Expected Value of a RV and Moments

Mathematical Expectation:
1). Let X be a discrete rv taking values x1, x2, x3, ... with f_X(x) as its probability distribution; then the expected value of X, E(X), is defined as

E(X) = Σ_i x_i f_X(x_i)

2). Let X be a continuous rv with pdf f_X(x); then the expected value of X, E(X), is defined as

E(X) = ∫_{-∞}^{∞} x f_X(x) dx

Suppose we have an empirical frequency distribution given by

Mid value    Frequency
x1           f1
x2           f2
x3           f3
.            .
.            .

Sample mean = x̄ = (x1 f1 + x2 f2 + ...) / N, where N = Σ_i f_i.
Probability = limit of the relative frequency, e.g. f_X(x1) = lim f1/N, etc., so that

E(X) = Σ_i x_i f_X(x_i)

E(X) is also denoted by µ.

Bernoulli Random Variable: A random variable with only two outcomes (0 and 1) is known as a Bernoulli random variable.
Example (2.10)
Let X be a rv with probability p of success and (1 - p) of failure:

           x    f_X(x)
Failure    0    1 - p
Success    1    p

E(X) = 0(1 - p) + 1(p) = p.

The above tabular expression for the probability of a Bernoulli rv can be written as

f_X(x) = p^x (1 - p)^(1-x) if x = 0, 1; and 0 otherwise.

Example (2.11)
Let X be the number of trials required to produce the 1st success, say a head in a toss of a fair coin. This is described by a geometric rv and is given as

x    f_X(x)
1    p
2    (1 - p)p
3    (1 - p)^2 p
4    (1 - p)^3 p
.    .
.    .

E(X) = 1·p + 2(1 - p)p + 3(1 - p)^2 p + 4(1 - p)^3 p + ···
     = p[1 + 2(1 - p) + 3(1 - p)^2 + 4(1 - p)^3 + ···]

Let

S = 1 + 2(1 - p) + 3(1 - p)^2 + 4(1 - p)^3 + ···; then

(1 - p)S = (1 - p) + 2(1 - p)^2 + 3(1 - p)^3 + 4(1 - p)^4 + ···

S - (1 - p)S = 1 + (1 - p) + (1 - p)^2 + (1 - p)^3 + ···

pS = 1/p

S = 1/p^2

⇒ E(X) = p · (1/p^2) = 1/p
Example (2.11 continued. . . )
Alternatively,

f_X(x) = P(X = x) = p(1 - p)^(x-1) if x = 1, 2, 3, ...; and 0 otherwise.

E(X) = Σ_{x=1}^∞ x f_X(x) = Σ_{x=1}^∞ x p(1 - p)^(x-1) = p Σ_{x=1}^∞ x(1 - p)^(x-1)

     = -p Σ_{x=1}^∞ d/dp (1 - p)^x, since d/dp (1 - p)^x = -x(1 - p)^(x-1)

     = -p d/dp [Σ_{x=1}^∞ (1 - p)^x] = -p d/dp [Σ_{x=0}^∞ (1 - p)^x] (adding the constant x = 0 term does not change the derivative)

     = -p d/dp (1/p) = p · (1/p^2) = 1/p

Note: If p = 1/2 then E(X) = 2; if p = 1/3 then E(X) = 3; if p = 1/4 then E(X) = 4; if p = 1/5 then E(X) = 5; if p = 1/10 then E(X) = 10.

As p becomes smaller the expected number of trials required to get a success increases.
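The closed form E(X) = 1/p can be sketched numerically by truncating the defining sum:

```python
# Truncated sum of x * p * (1 - p)**(x - 1): numerically approaches E(X) = 1/p
def geometric_mean(p, n_terms=5000):
    return sum(x * p * (1 - p) ** (x - 1) for x in range(1, n_terms + 1))

for p in (0.5, 0.25, 0.1):
    print(p, geometric_mean(p))  # approximately 2, 4 and 10
```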
Expectation of a Function of a RV
Let g(X) be a function of X, where X is a discrete random variable; then

E(g(X)) = Σ_x g(x) f_X(x)
Example (2.12)
Let g(X) = X^2; then

E(g(X)) = E(X^2) = Σ_x x^2 f_X(x)

The variance of the rv X, denoted by σ^2, is defined as

σ^2 = E(X - µ)^2 = Σ_x (x - µ)^2 f_X(x), where µ = E(X).

Let g(X) be a function of X, where X is a continuous rv; then

E[g(X)] = ∫_{-∞}^{∞} g(x) f_X(x) dx

Properties of Mathematical Expectations

a). If c is a constant, E(c) = c, i.e. E(c) = c · 1 = c.
b). E(aX + b) = aE(X) + b, where a and b are constants in R.
c). E(cg(X)) = cE(g(X)), i.e.,

E(cg(X)) = Σ_x cg(x) f_X(x) = c Σ_x g(x) f_X(x) = cE(g(X)).
Moments of a probability distribution

The mean of a distribution is the expected value of the rv X. A generalization is to compute µ′_m = E(X^m), m = 2, 3, ..., the moment of order m about the origin.

The mth moment about the origin of a rv X is:

µ′_m = E(X^m) = Σ_x x^m f_X(x); m = 1, 2, 3, ... (discrete)

µ′_m = E(X^m) = ∫_{-∞}^{∞} x^m f_X(x) dx; m = 1, 2, 3, ... (continuous)

Moments can also be taken around the mean, known as central moments or moments around the mean, defined as:

µ_m = E(X - µ)^m = Σ_x (x - µ)^m f_X(x); m = 1, 2, 3, ... or

µ_m = E(X - µ)^m = ∫_{-∞}^{∞} (x - µ)^m f_X(x) dx; m = 1, 2, 3, ...

Propositions:
a). µ_0 = 1.
b). µ_1 = 0.
Proof of proposition (a) is trivial, while the proof of (b) goes as follows:

µ_1 = E(X - µ)^1 = E(X - µ) = E(X) - µ = 0.

Definition (2.5)
a). µ_2 = E[(X - µ)^2] = σ^2 is defined as the variance of a rv, and is also denoted by var(X) or V(X).

µ_2 = E[(X - µ)^2] = E(X^2 - 2Xµ + µ^2)
    = E(X^2) - 2E(X)µ + µ^2 = E(X^2) - 2µ^2 + µ^2
    = E(X^2) - µ^2
    = µ′_2 - (µ′_1)^2

Thus, the variance of a rv is the expected value of the square of the rv less the square of the expected value of the rv.
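The equivalence of the two variance formulas can be sketched for the fair die of Example 2.9:

```python
# Fair-die check: E[(X - mu)^2] equals E(X^2) - mu^2
xs = range(1, 7)
w = 1 / 6  # uniform probability on each face
mu = sum(x * w for x in xs)                       # 3.5
var_central = sum((x - mu) ** 2 * w for x in xs)  # definition
var_raw = sum(x * x * w for x in xs) - mu ** 2    # shortcut formula
print(mu, var_central, var_raw)  # the variance of a fair die is 35/12
```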

b). µ_3 = E[(X - µ)^3] = E(X^3 - 3X^2 µ + 3Xµ^2 - µ^3)
       = E(X^3) - 3E(X^2)µ + 3E(X)µ^2 - µ^3
       = µ′_3 - 3µ′_2 µ + 3µ^3 - µ^3
       = µ′_3 - 3µ′_2 µ + 2µ^3

is the third moment about the mean and is equal to: µ_3 = µ′_3 - 3µ′_2 µ + 2µ^3.

c). µ_4 = E[(X - µ)^4] = E(X^4 - 4X^3 µ + 6X^2 µ^2 - 4Xµ^3 + µ^4)
       = E(X^4) - 4E(X^3)µ + 6E(X^2)µ^2 - 4E(X)µ^3 + µ^4
       = µ′_4 - 4µ′_3 µ + 6µ′_2 µ^2 - 4µ^4 + µ^4
       = µ′_4 - 4µ′_3 µ + 6µ′_2 µ^2 - 3µ^4

is the fourth moment about the mean and is equal to: µ_4 = µ′_4 - 4µ′_3 µ + 6µ′_2 µ^2 - 3µ^4.

Interpretations

1). µ′_1 = µ is a measure of central tendency.

2). µ_2, denoted by σ^2, var(X), or V(X), is known as the variance, and is a measure of dispersion of the rv. If X is a rv measured in centimeters, σ^2's dimension is cm^2.

3). σ, the positive root of the variance, is called the standard deviation of the rv and its dimension is given in centimeters. Thus, to convert this measure of dispersion into a dimensionless one, we divide it by the mean to get the cv = σ/µ, a dimensionless measure of dispersion of the rv (the coefficient of variation).

4). µ_3 is the third moment about the mean and is used to calculate the measure of skewness given as α_3 = µ_3/σ^3, known as Pearson's measure of skewness.

If α_3 = 0 then the distribution is symmetric. If α_3 > 0 then the distribution is positively skewed, and there is a spread to the right: a few observations to the right of the mean pull the mean to the right. If α_3 < 0 then the distribution is negatively skewed, and there is a spread to the left: a few observations to the left of the mean pull the mean to the left.

5). µ_4 is the fourth moment about the mean and is used to calculate the measure of peakedness or flatness (known as kurtosis), given as α_4 = µ_4/σ^4. α_4 = 3 for a normal distribution. α_4 > 3 if the distribution is more sharply peaked and has heavier tails than the normal distribution (leptokurtic). α_4 < 3 if the distribution is flatter and has thinner tails than the normal distribution (platykurtic).
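These formulas can be sketched for a Bernoulli rv (the value p = 0.3 is an assumption for illustration):

```python
# Central moments, skewness and kurtosis for a Bernoulli(p) rv
p = 0.3
xs, probs = (0, 1), (1 - p, p)

mu = sum(x * q for x, q in zip(xs, probs))
m2 = sum((x - mu) ** 2 * q for x, q in zip(xs, probs))  # variance
m3 = sum((x - mu) ** 3 * q for x, q in zip(xs, probs))
m4 = sum((x - mu) ** 4 * q for x, q in zip(xs, probs))

sigma = m2 ** 0.5
alpha3 = m3 / sigma ** 3  # positive here: with p < 1/2 the long tail is to the right
alpha4 = m4 / sigma ** 4
print(alpha3, alpha4)
```

For the Bernoulli case α_3 reduces to (1 - 2p)/√(p(1 - p)), so α_3 = 0 exactly when p = 1/2 (the symmetric case).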
Example (2.13)
We have obtained the expected value of a Bernoulli rv to be E(X) = 0(1 - p) + 1(p) = p.

To obtain the variance of the Bernoulli rv we first get

E(X^2) = 0^2 (1 - p) + 1^2 (p) = p.

Thus

σ^2 = E(X^2) - [E(X)]^2 = p - p^2 = p(1 - p)

and hence

σ = √(p(1 - p))  and  cv = √(p(1 - p))/p.
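A minimal numerical sketch of these Bernoulli results, straight from the pmf (p = 0.25 is an assumed illustration):

```python
# Bernoulli(p): mean, variance and cv computed directly from the pmf
p = 0.25
pmf = {0: 1 - p, 1: p}

mean = sum(x * q for x, q in pmf.items())
ex2 = sum(x * x * q for x, q in pmf.items())
var = ex2 - mean ** 2
cv = var ** 0.5 / mean
print(mean, var, cv)  # cv = sqrt((1 - p)/p), here sqrt(3)
```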
2.5 Moment Generating Function (MGF)

Definition (2.6)
The moment generating function of a rv X, M_X(θ), is defined as M_X(θ) = E(e^(θX)); it generates moments.

The Taylor series expansion about a point x0 of an infinitely differentiable function f(x) is:

f(x) = f(x0) + Σ_{k=1}^∞ [f^(k)(x0)/k!] (x - x0)^k, where f^(k)(x0) = d^k f(x)/dx^k evaluated at x = x0.

The Taylor series expansion about zero of such a function f(x) is given by:

f(x) = f(0) + Σ_{k=1}^∞ [f^(k)(0)/k!] x^k

Example (2.14)
Let f(x) = e^x; then

f(x) = 1 + x + x^2/2! + x^3/3! + x^4/4! + ···

Similarly,

e^(θx) = 1 + θx + (θx)^2/2! + (θx)^3/3! + (θx)^4/4! + ···

Hence,

M_X(θ) = E(e^(θX)) = 1 + θE(X) + (θ^2/2!)E(X^2) + (θ^3/3!)E(X^3) + (θ^4/4!)E(X^4) + ···

d/dθ M_X(θ) = E(X) + θE(X^2) + (θ^2/2!)E(X^3) + (θ^3/3!)E(X^4) + (θ^4/4!)E(X^5) + ···

⇒ [d/dθ M_X(θ)] at θ = 0 gives M′_X(0) = E(X), the first moment about zero.

Similarly,

d^2/dθ^2 M_X(θ) = E(X^2) + θE(X^3) + (θ^2/2!)E(X^4) + (θ^3/3!)E(X^5) + ···
Example (2.14 continued. . . )
⇒ [d^2/dθ^2 M_X(θ)] at θ = 0 gives M″_X(0) = E(X^2), the second moment about zero.

Note:
1). Taking higher-order derivatives of the MGF and then evaluating the resulting function at the origin (θ = 0) generates all higher-order moments about zero of a rv.
2). There is a one-to-one correspondence between the pdf of a rv and its MGF, if the latter exists.
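As a sketch, the moment-generating property can be checked numerically for a Bernoulli rv, whose MGF has the closed form M(θ) = (1 - p) + p e^θ (the value p = 0.3 is assumed for illustration); finite differences of M at θ = 0 approximate the derivatives:

```python
import math

# MGF of a Bernoulli(p) rv: M(theta) = (1 - p) + p * e**theta,
# so M'(0) = p = E(X) and M''(0) = p = E(X^2).
p = 0.3

def M(theta):
    return (1 - p) + p * math.exp(theta)

h = 1e-5
first = (M(h) - M(-h)) / (2 * h)             # central difference ~ E(X)
second = (M(h) - 2 * M(0) + M(-h)) / h ** 2  # ~ E(X^2)
print(first, second)  # both approximately p = 0.3
```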