
Continuous Distributions

The Uniform distribution from a to b

$$
f(x) = \begin{cases} \dfrac{1}{b-a} & a \le x \le b \\ 0 & \text{otherwise} \end{cases}
$$

[Figure: the uniform density f(x), a constant of height 1/(b-a) on the interval from a to b.]
The Normal distribution
(mean $\mu$, standard deviation $\sigma$)

$$
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
$$
The Exponential distribution

$$
f(x) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases}
$$

[Figure: the exponential density f(x), decreasing from its maximum at x = 0.]
The Weibull distribution with parameters $\alpha$ and $\beta$:

$$
F(x) = 1 - e^{-\alpha x^{\beta}}, \qquad x \ge 0
$$

and

$$
f(x) = F'(x) = \alpha \beta\, x^{\beta - 1} e^{-\alpha x^{\beta}}, \qquad x \ge 0
$$

[Figure: the Weibull density f(x) for three parameter combinations: (0.9, 2), (0.7, 2) and (0.5, 2).]
The Gamma distribution

Let the continuous random variable X have density function:

$$
f(x) = \begin{cases} \dfrac{\lambda^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases}
$$

Then X is said to have a Gamma distribution with parameters $\alpha$ and $\lambda$.
Expectation
Let X denote a discrete random variable with
probability function p(x) (probability density function
f(x) if X is continuous) then the expected value of X,
E(X) is defined to be:

$$
E(X) = \sum_{x} x\, p(x) = \sum_{i} x_i\, p(x_i)
$$
and if X is continuous with probability density function f(x)

$$
E(X) = \int_{-\infty}^{\infty} x\, f(x)\, dx
$$
Interpretation of E(X)
1. The expected value of X, E(X), is the centre of
gravity of the probability distribution of X.
2. The expected value of X, E(X), is the long-run average value of X (shown later: Law of Large Numbers).
[Figure: a probability distribution with its expected value E(X) marked as the balance point.]
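As a small illustrative sketch of interpretation 2 (not part of the original notes), the running average of simulated values settles near E(X) as the number of observations grows; the Uniform(0, 10) variable below, with E(X) = 5, is an assumed example.

```python
import numpy as np

# Sketch: the long-run average of simulated values approaches E(X).
# The Uniform(0, 10) variable (mean 5) is an illustrative assumption.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100_000)

running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
for n in (10, 1_000, 100_000):
    print(f"average of first {n:>7} values: {running_mean[n - 1]:.4f}")
# The printed averages drift toward E(X) = 5 as n grows.
```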
Example:
The Uniform distribution
Suppose X has a uniform distribution from a to b.
Then:

$$
f(x) = \begin{cases} \dfrac{1}{b-a} & a \le x \le b \\ 0 & x < a,\ x > b \end{cases}
$$
The expected value of X is:

$$
E(X) = \int_{-\infty}^{\infty} x\, f(x)\, dx = \int_{a}^{b} x\, \frac{1}{b-a}\, dx
= \frac{1}{b-a} \left[ \frac{x^2}{2} \right]_{a}^{b}
= \frac{b^2 - a^2}{2(b-a)} = \frac{a+b}{2}
$$
Example:
The Normal distribution
Suppose X has a Normal distribution with parameters $\mu$ and $\sigma$. Then:

$$
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
$$
The expected value of X is:

$$
E(X) = \int_{-\infty}^{\infty} x\, f(x)\, dx = \int_{-\infty}^{\infty} x\, \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\, dx
$$

Make the substitution:

$$
z = \frac{x-\mu}{\sigma}, \qquad dz = \frac{1}{\sigma}\, dx, \qquad x = \mu + \sigma z
$$

Hence

$$
E(X) = \int_{-\infty}^{\infty} (\mu + \sigma z)\, \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}\, dz
= \mu \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}\, dz
+ \sigma \int_{-\infty}^{\infty} \frac{z}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}\, dz
$$

Now

$$
\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}\, dz = 1
\qquad \text{and} \qquad
\int_{-\infty}^{\infty} \frac{z}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}\, dz = 0
$$

Thus $E(X) = \mu$.
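A quick numerical check of this result (a sketch, not from the notes); the values mu = 3 and sigma = 2 are assumed purely for illustration.

```python
import numpy as np

# Check E(X) = mu for a Normal(mu, sigma) variable by simulation.
rng = np.random.default_rng(1)
mu, sigma = 3.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)
print("simulated mean:", x.mean())  # close to mu = 3.0
print("mu:            ", mu)
```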
Example:
The Gamma distribution
Suppose X has a Gamma distribution with parameters $\alpha$ and $\lambda$. Then:

$$
f(x) = \begin{cases} \dfrac{\lambda^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases}
$$

Note:

$$
\int_{-\infty}^{\infty} f(x)\, dx = \int_{0}^{\infty} \frac{\lambda^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\lambda x}\, dx = 1
\qquad \text{if } \alpha > 0,\ \lambda > 0
$$

This is a very useful formula when working with the Gamma distribution.

The expected value of X is:

$$
E(X) = \int_{-\infty}^{\infty} x\, f(x)\, dx
= \int_{0}^{\infty} x\, \frac{\lambda^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\lambda x}\, dx
= \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_{0}^{\infty} x^{\alpha} e^{-\lambda x}\, dx
$$

$$
= \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \cdot \frac{\Gamma(\alpha+1)}{\lambda^{\alpha+1}}
\int_{0}^{\infty} \frac{\lambda^{\alpha+1}}{\Gamma(\alpha+1)}\, x^{\alpha} e^{-\lambda x}\, dx
\qquad \text{(this last integral is now equal to 1)}
$$

$$
= \frac{\Gamma(\alpha+1)}{\lambda\,\Gamma(\alpha)} = \frac{\alpha\,\Gamma(\alpha)}{\lambda\,\Gamma(\alpha)} = \frac{\alpha}{\lambda}
$$

Thus if X has a Gamma($\alpha$, $\lambda$) distribution then the expected value of X is:

$$
E(X) = \frac{\alpha}{\lambda}
$$
Special Cases: if X has a Gamma($\alpha$, $\lambda$) distribution then $E(X) = \alpha/\lambda$, so:

1. Exponential ($\lambda$) distribution: $\alpha = 1$, $\lambda$ arbitrary:

$$
E(X) = \frac{1}{\lambda}
$$

2. Chi-square ($\chi^2$) distribution: $\alpha = \nu/2$, $\lambda = \tfrac{1}{2}$:

$$
E(X) = \frac{\nu/2}{1/2} = \nu
$$
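A sketch (not from the notes) checking $E(X) = \alpha/\lambda$ and the two special cases by simulation; $\alpha = 3$, $\lambda = 2$ and $\nu = 5$ are assumed values.

```python
import numpy as np

# Check E(X) = alpha/lambda for the Gamma distribution and its special cases.
rng = np.random.default_rng(2)
alpha, lam, n = 3.0, 2.0, 1_000_000

gamma_x = rng.gamma(shape=alpha, scale=1 / lam, size=n)
print("Gamma mean:      ", gamma_x.mean(), "vs alpha/lambda =", alpha / lam)

expo_x = rng.exponential(scale=1 / lam, size=n)   # Gamma with alpha = 1
print("Exponential mean:", expo_x.mean(), "vs 1/lambda =", 1 / lam)

nu = 5
chi2_x = rng.chisquare(nu, size=n)                # Gamma with alpha = nu/2, lambda = 1/2
print("Chi-square mean: ", chi2_x.mean(), "vs nu =", nu)
```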
[Figure: the Gamma distribution with its mean $E(X) = \alpha/\lambda$ marked.]

[Figure: the Exponential distribution with its mean $E(X) = 1/\lambda$ marked.]

[Figure: the Chi-square ($\chi^2$) distribution with its mean $E(X) = \nu$ marked.]
Expectation of functions of
Random Variables
Definition
Let X denote a discrete random variable with probability
function p(x) (probability density function f(x) if X is
continuous) then the expected value of g(X), E[g(X)] is
defined to be:

$$
E[g(X)] = \sum_{x} g(x)\, p(x) = \sum_{i} g(x_i)\, p(x_i)
$$
and if X is continuous with probability density function f(x)

$$
E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f(x)\, dx
$$
Example:
The Uniform distribution
Suppose X has a uniform distribution from 0 to b.
Then:

$$
f(x) = \begin{cases} \dfrac{1}{b} & 0 \le x \le b \\ 0 & x < 0,\ x > b \end{cases}
$$
Find the expected value of A = X². If X is the length of a side of a square (chosen at random from 0 to b) then A is the area of the square.
$$
E(X^2) = \int_{-\infty}^{\infty} x^2 f(x)\, dx = \int_{0}^{b} x^2\, \frac{1}{b}\, dx
= \frac{1}{b}\left[ \frac{x^3}{3} \right]_{0}^{b} = \frac{b^2}{3}
$$

i.e. 1/3 the maximum area of the square.
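A quick numerical check (a sketch, not from the notes); b = 4 is an assumed value.

```python
import numpy as np

# Check E(X^2) = b^2/3 for X uniform on (0, b).
rng = np.random.default_rng(3)
b = 4.0
x = rng.uniform(0, b, size=1_000_000)
print("simulated E(X^2):", (x ** 2).mean())   # close to b^2/3
print("formula b^2/3:   ", b ** 2 / 3)
```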


Example:
The Geometric distribution
Suppose X (discrete) has a geometric distribution with
parameter p.
Then:
$$
p(x) = p\,(1-p)^{x-1} \qquad \text{for } x = 1, 2, 3, \ldots
$$

Find the expected value of X and the expected value of X².


$$
E(X) = \sum_{x=1}^{\infty} x\, p(x) = 1\,p + 2\,p(1-p) + 3\,p(1-p)^2 + 4\,p(1-p)^3 + \cdots
$$

$$
E(X^2) = \sum_{x=1}^{\infty} x^2\, p(x) = 1^2\,p + 2^2\,p(1-p) + 3^2\,p(1-p)^2 + 4^2\,p(1-p)^3 + \cdots
$$
Recall: the sum of a geometric series

$$
a + ar + ar^2 + ar^3 + \cdots = \frac{a}{1-r}
$$

or, with $a = 1$,

$$
1 + r + r^2 + r^3 + \cdots = \frac{1}{1-r}
$$

Differentiating both sides with respect to r we get:

$$
1 + 2r + 3r^2 + 4r^3 + \cdots = (-1)(1-r)^{-2}(-1) = \frac{1}{(1-r)^2}
$$

Thus

$$
1 + 2r + 3r^2 + 4r^3 + \cdots = \frac{1}{(1-r)^2}
$$

This formula could also be developed by noting:

$$
1 + 2r + 3r^2 + 4r^3 + \cdots
= (1 + r + r^2 + r^3 + \cdots) + (r + r^2 + r^3 + \cdots) + (r^2 + r^3 + \cdots) + (r^3 + \cdots) + \cdots
$$

$$
= \frac{1}{1-r} + \frac{r}{1-r} + \frac{r^2}{1-r} + \frac{r^3}{1-r} + \cdots
= \frac{1}{1-r}\left(1 + r + r^2 + r^3 + \cdots\right)
= \frac{1}{1-r}\cdot\frac{1}{1-r} = \frac{1}{(1-r)^2}
$$
This formula can be used to calculate:

$$
E(X) = \sum_{x=1}^{\infty} x\, p(x) = 1\,p + 2\,p(1-p) + 3\,p(1-p)^2 + 4\,p(1-p)^3 + \cdots
$$

$$
= p\left[1 + 2r + 3r^2 + 4r^3 + \cdots\right] \qquad \text{where } r = 1-p
$$

$$
= p\left(\frac{1}{1-r}\right)^2 = p\left(\frac{1}{p}\right)^2 = \frac{1}{p}
$$
To compute the expected value of X²:

$$
E(X^2) = \sum_{x=1}^{\infty} x^2 p(x)
= 1^2\,p + 2^2\,p(1-p) + 3^2\,p(1-p)^2 + 4^2\,p(1-p)^3 + \cdots
= p\left[1^2 + 2^2 r + 3^2 r^2 + 4^2 r^3 + \cdots\right] \quad \text{where } r = 1-p
$$

we need to find a formula for

$$
1^2 + 2^2 r + 3^2 r^2 + 4^2 r^3 + \cdots = \sum_{x=1}^{\infty} x^2 r^{x-1}
$$

Note:

$$
S(r) = 1 + r + r^2 + r^3 + \cdots = \sum_{x=0}^{\infty} r^x = \frac{1}{1-r}
$$

Differentiating with respect to r we get

$$
S'(r) = 1 + 2r + 3r^2 + 4r^3 + \cdots = \sum_{x=1}^{\infty} x\, r^{x-1} = \frac{1}{(1-r)^2}
$$

Differentiating again with respect to r we get

$$
S''(r) = 2 + 3(2)r + 4(3)r^2 + 5(4)r^3 + \cdots
= \sum_{x=2}^{\infty} x(x-1)\, r^{x-2}
= (-2)(1-r)^{-3}(-1) = \frac{2}{(1-r)^3}
$$

Thus

$$
\sum_{x=2}^{\infty} x(x-1)\, r^{x-2}
= \sum_{x=2}^{\infty} x^2 r^{x-2} - \sum_{x=2}^{\infty} x\, r^{x-2}
= \frac{2}{(1-r)^3}
$$

which implies

$$
2^2 + 3^2 r + 4^2 r^2 + 5^2 r^3 + \cdots
= \frac{2}{(1-r)^3} + \left(2 + 3r + 4r^2 + 5r^3 + \cdots\right)
$$

Thus

$$
1^2 + 2^2 r + 3^2 r^2 + 4^2 r^3 + \cdots
= \frac{2r}{(1-r)^3} + \left(1 + 2r + 3r^2 + 4r^3 + \cdots\right)
= \frac{2r}{(1-r)^3} + \frac{1}{(1-r)^2}
$$

$$
= \frac{2r}{(1-r)^3} + \frac{1-r}{(1-r)^3}
= \frac{1+r}{(1-r)^3}
= \frac{2-p}{p^3} \qquad \text{if } r = 1-p
$$

Thus

$$
E(X^2) = p\left[1^2 + 2^2 r + 3^2 r^2 + 4^2 r^3 + \cdots\right] = \frac{2-p}{p^2}
$$
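A sketch (not from the notes) checking both results by simulation; p = 0.3 is an assumed value.

```python
import numpy as np

# Check E(X) = 1/p and E(X^2) = (2 - p)/p^2 for a geometric variable on {1, 2, 3, ...}.
rng = np.random.default_rng(4)
p = 0.3
x = rng.geometric(p, size=1_000_000).astype(float)
print("simulated E(X):  ", x.mean(), "vs 1/p =", 1 / p)
print("simulated E(X^2):", (x ** 2).mean(), "vs (2-p)/p^2 =", (2 - p) / p ** 2)
```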
Moments of Random Variables
Definition
Let X be a random variable (discrete or
continuous), then the kth moment of X is defined
to be:

$$
\mu_k = E\left(X^k\right) =
\begin{cases}
\displaystyle\sum_{x} x^k\, p(x) & \text{if } X \text{ is discrete} \\[2ex]
\displaystyle\int_{-\infty}^{\infty} x^k f(x)\, dx & \text{if } X \text{ is continuous}
\end{cases}
$$

The first moment of X, $\mu = \mu_1 = E(X)$, is the centre of gravity of the distribution of X. The higher moments give different information regarding the distribution of X.
Definition
Let X be a random variable (discrete or
continuous), then the kth central moment of X is
defined to be:

$$
\mu_k^0 = E\left[(X - \mu)^k\right] =
\begin{cases}
\displaystyle\sum_{x} (x-\mu)^k\, p(x) & \text{if } X \text{ is discrete} \\[2ex]
\displaystyle\int_{-\infty}^{\infty} (x-\mu)^k f(x)\, dx & \text{if } X \text{ is continuous}
\end{cases}
$$

where $\mu = \mu_1 = E(X)$ is the first moment of X.

The central moments describe how the probability distribution is distributed about the centre of gravity, $\mu$.

$$
\mu_1^0 = E[X - \mu] = 0
$$

$$
\mu_2^0 = E\left[(X-\mu)^2\right] = \text{the 2nd central moment}
$$

depends on the spread of the probability distribution of X about $\mu$.

$\mu_2^0 = E\left[(X-\mu)^2\right]$ is called the variance of X and is denoted by the symbol var(X).

$\sigma = \sqrt{E\left[(X-\mu)^2\right]}$ is called the standard deviation of X and is denoted by the symbol $\sigma$.

$$
\operatorname{var}(X) = \mu_2^0 = E\left[(X-\mu)^2\right] = \sigma^2
$$

The third central moment

$$
\mu_3^0 = E\left[(X-\mu)^3\right]
$$

contains information about the skewness of a distribution.

Measure of skewness

$$
\gamma_1 = \frac{\mu_3^0}{\sigma^3} = \frac{\mu_3^0}{\left(\mu_2^0\right)^{3/2}}
$$
Positively skewed distribution: $\mu_3^0 > 0$, $\gamma_1 > 0$

[Figure: a positively skewed distribution, with a long right tail.]

Negatively skewed distribution: $\mu_3^0 < 0$, $\gamma_1 < 0$

[Figure: a negatively skewed distribution, with a long left tail.]

Symmetric distribution: $\mu_3^0 = 0$, $\gamma_1 = 0$

[Figure: a symmetric distribution.]
The fourth central moment

$$
\mu_4^0 = E\left[(X-\mu)^4\right]
$$

also contains information about the shape of a distribution. The property of shape that is measured by the fourth central moment is called kurtosis.

The measure of kurtosis

$$
\gamma_2 = \frac{\mu_4^0}{\sigma^4} - 3 = \frac{\mu_4^0}{\left(\mu_2^0\right)^2} - 3
$$
Mesokurtic distribution: $\gamma_2 = 0$, $\mu_4^0$ moderate in size

[Figure: a mesokurtic distribution.]

Platykurtic distribution: $\gamma_2 < 0$, $\mu_4^0$ small in size

[Figure: a platykurtic (flat-topped) distribution.]

Leptokurtic distribution: $\gamma_2 > 0$, $\mu_4^0$ large in size

[Figure: a leptokurtic (sharply peaked, heavy-tailed) distribution.]
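A sketch (assuming scipy is available; not part of the notes): sample skewness and excess kurtosis of simulated data track $\gamma_1$ and $\gamma_2$. The exponential and uniform choices below are illustrative; their theoretical values are $\gamma_1 = 2$, $\gamma_2 = 6$ for the exponential and $\gamma_1 = 0$, $\gamma_2 = -1.2$ for the uniform.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Sample estimates of gamma_1 (skewness) and gamma_2 (excess kurtosis)
# for two illustrative distributions.
rng = np.random.default_rng(5)
n = 1_000_000

expo = rng.exponential(scale=1.0, size=n)   # positively skewed, leptokurtic
unif = rng.uniform(0, 1, size=n)            # symmetric, platykurtic

print("exponential: skew =", skew(expo), " excess kurtosis =", kurtosis(expo))
print("uniform:     skew =", skew(unif), " excess kurtosis =", kurtosis(unif))
```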
Example: The uniform distribution from 0 to 1
1 0  x  1
f  x  
0 x  0, x  1

Finding the moments


 1
1
x k 1
1
k   x f  x  dx   x 1dx  
k k
 
 0  k  1 0 k  1
Finding the central moments:
 1

  x    f  x  dx    x   1 k
k
 
0
k 2 1dx
 0

making the substitution w  x  12


    
1
1 k 1 1 k 1
1
2
w  k 1 2

   w dw  
0
k
k
 
2 2

 12  k  1   12 k 1
 1
1   1
k 1
 2k k  1 if k even
 k 1   
2  k  1 
 0 if k odd
1 1 0 1 1
Hence   2
0
 , 3  0, 4  4
0

2  3 12 2  5  80
2

Thus 1
var  X    
0
2
12
The standard deviation
1
  var  X     0
2
12
The measure of skewness
30
1  3  0

The measure of kurtosis
40 1 80
1  4  3   3  1.2
  1 12 
2
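A quick numerical check of these values (a sketch, not part of the original notes):

```python
import numpy as np

# Check var(X) = 1/12, gamma_1 = 0 and gamma_2 = -1.2 for X uniform on (0, 1).
rng = np.random.default_rng(6)
x = rng.uniform(0, 1, size=2_000_000)

mu = x.mean()
sigma = x.std()
gamma1 = np.mean((x - mu) ** 3) / sigma ** 3
gamma2 = np.mean((x - mu) ** 4) / sigma ** 4 - 3

print("variance:", x.var(), "vs 1/12 =", 1 / 12)
print("skewness:", gamma1)    # close to 0
print("kurtosis:", gamma2)    # close to -1.2
```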
Rules for expectation

$$
E[g(X)] =
\begin{cases}
\displaystyle\sum_{x} g(x)\, p(x) & \text{if } X \text{ is discrete} \\[2ex]
\displaystyle\int_{-\infty}^{\infty} g(x) f(x)\, dx & \text{if } X \text{ is continuous}
\end{cases}
$$
Rules:
1. $E(c) = c$ where c is a constant.

Proof: if $g(X) = c$ then

$$
E[g(X)] = E(c) = \int_{-\infty}^{\infty} c\, f(x)\, dx = c \int_{-\infty}^{\infty} f(x)\, dx = c
$$

The proof for discrete random variables is similar.


2. $E(aX + b) = aE(X) + b$ where a, b are constants.

Proof: if $g(X) = aX + b$ then

$$
E(aX + b) = \int_{-\infty}^{\infty} (ax + b)\, f(x)\, dx
= a \int_{-\infty}^{\infty} x f(x)\, dx + b \int_{-\infty}^{\infty} f(x)\, dx
= aE(X) + b
$$

The proof for discrete random variables is similar.


3. $\operatorname{var}(X) = \mu_2^0 = E\left[(X - \mu)^2\right] = E\left(X^2\right) - \mu^2 = \mu_2 - \mu_1^2$

Proof:

$$
\operatorname{var}(X) = E\left[(X-\mu)^2\right]
= \int_{-\infty}^{\infty} (x-\mu)^2 f(x)\, dx
= \int_{-\infty}^{\infty} \left(x^2 - 2\mu x + \mu^2\right) f(x)\, dx
$$

$$
= \int_{-\infty}^{\infty} x^2 f(x)\, dx
- 2\mu \int_{-\infty}^{\infty} x f(x)\, dx
+ \mu^2 \int_{-\infty}^{\infty} f(x)\, dx
= E\left(X^2\right) - 2\mu\, E(X) + \mu^2
= \mu_2 - 2\mu^2 + \mu^2 = \mu_2 - \mu_1^2
$$

The proof for discrete random variables is similar.


4. $\operatorname{var}(aX + b) = a^2 \operatorname{var}(X)$

Proof:

$$
\mu_{aX+b} = E(aX + b) = aE(X) + b = a\mu + b
$$

$$
\operatorname{var}(aX + b) = E\left[\left(aX + b - \mu_{aX+b}\right)^2\right]
= E\left[\left(aX + b - (a\mu + b)\right)^2\right]
= E\left[a^2 (X - \mu)^2\right]
= a^2 E\left[(X-\mu)^2\right] = a^2 \operatorname{var}(X)
$$
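A sketch (not from the notes) checking rules 3 and 4 by simulation; the Normal(3, 2) variable and constants a = 2, b = 5 are assumed values.

```python
import numpy as np

# Check var(X) = E(X^2) - mu^2 and var(aX + b) = a^2 var(X).
rng = np.random.default_rng(7)
x = rng.normal(3.0, 2.0, size=1_000_000)
a, b = 2.0, 5.0

print("E(X^2) - E(X)^2:", (x ** 2).mean() - x.mean() ** 2)
print("var(X):         ", x.var())
print("var(aX + b):    ", (a * x + b).var())
print("a^2 var(X):     ", a ** 2 * x.var())
```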

 
Moment generating functions
Recall

$$
E[g(X)] =
\begin{cases}
\displaystyle\sum_{x} g(x)\, p(x) & \text{if } X \text{ is discrete} \\[2ex]
\displaystyle\int_{-\infty}^{\infty} g(x) f(x)\, dx & \text{if } X \text{ is continuous}
\end{cases}
$$
Definition
Let X denote a random variable. Then the moment generating function of X, $m_X(t)$, is defined by:

$$
m_X(t) = E\left(e^{tX}\right) =
\begin{cases}
\displaystyle\sum_{x} e^{tx}\, p(x) & \text{if } X \text{ is discrete} \\[2ex]
\displaystyle\int_{-\infty}^{\infty} e^{tx} f(x)\, dx & \text{if } X \text{ is continuous}
\end{cases}
$$
Examples
1. The Binomial distribution (parameters p, n)

$$
p(x) = \binom{n}{x} p^x (1-p)^{n-x} \qquad x = 0, 1, 2, \ldots, n
$$

The moment generating function of X, $m_X(t)$, is:

$$
m_X(t) = E\left(e^{tX}\right) = \sum_{x} e^{tx}\, p(x)
= \sum_{x=0}^{n} e^{tx} \binom{n}{x} p^x (1-p)^{n-x}
= \sum_{x=0}^{n} \binom{n}{x} \left(e^t p\right)^x (1-p)^{n-x}
$$

Using the binomial theorem $\displaystyle\sum_{x=0}^{n} \binom{n}{x} a^x b^{n-x} = (a+b)^n$ with $a = e^t p$ and $b = 1-p$:

$$
m_X(t) = \left(e^t p + 1 - p\right)^n
$$
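A sketch (not in the notes, assuming scipy is available) comparing the closed form with a direct evaluation of $E\left(e^{tX}\right)$; n = 10, p = 0.4 and t = 0.3 are assumed values.

```python
import numpy as np
from scipy.stats import binom

# Compare the closed-form MGF (e^t p + 1 - p)^n with the direct sum E(e^{tX}).
n, p, t = 10, 0.4, 0.3
x = np.arange(0, n + 1)

direct = np.sum(np.exp(t * x) * binom.pmf(x, n, p))
closed_form = (np.exp(t) * p + 1 - p) ** n

print("direct sum: ", direct)
print("closed form:", closed_form)   # the two agree
```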
2. The Poisson distribution (parameter $\lambda$)

$$
p(x) = \frac{\lambda^x}{x!}\, e^{-\lambda} \qquad x = 0, 1, 2, \ldots
$$

The moment generating function of X, $m_X(t)$, is:

$$
m_X(t) = E\left(e^{tX}\right) = \sum_{x} e^{tx}\, p(x)
= \sum_{x=0}^{\infty} e^{tx}\, \frac{\lambda^x}{x!}\, e^{-\lambda}
= e^{-\lambda} \sum_{x=0}^{\infty} \frac{\left(\lambda e^t\right)^x}{x!}
$$

$$
= e^{-\lambda}\, e^{\lambda e^t}
\qquad \left(\text{using } e^u = \sum_{x=0}^{\infty} \frac{u^x}{x!}\right)
$$

$$
= e^{\lambda\left(e^t - 1\right)}
$$
3. The Exponential distribution (parameter $\lambda$)

$$
f(x) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases}
$$

The moment generating function of X, $m_X(t)$, is:

$$
m_X(t) = E\left(e^{tX}\right) = \int_{-\infty}^{\infty} e^{tx} f(x)\, dx
= \int_{0}^{\infty} e^{tx}\, \lambda e^{-\lambda x}\, dx
= \lambda \int_{0}^{\infty} e^{-(\lambda - t)x}\, dx
$$

$$
= \lambda \left[\frac{e^{-(\lambda - t)x}}{-(\lambda - t)}\right]_{0}^{\infty}
= \begin{cases} \dfrac{\lambda}{\lambda - t} & t < \lambda \\[1ex] \text{undefined} & t \ge \lambda \end{cases}
$$
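A sketch (not from the notes) comparing the closed form with a Monte Carlo estimate of $E\left(e^{tX}\right)$; $\lambda = 2$ and $t = 0.5$ (which satisfies $t < \lambda$) are assumed values.

```python
import numpy as np

# Compare the closed-form MGF lambda/(lambda - t) with a Monte Carlo estimate of E(e^{tX}).
rng = np.random.default_rng(8)
lam, t = 2.0, 0.5
x = rng.exponential(scale=1 / lam, size=2_000_000)

print("Monte Carlo E(e^{tX}):", np.exp(t * x).mean())
print("lambda/(lambda - t):  ", lam / (lam - t))
```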
4. The Standard Normal distribution ($\mu = 0$, $\sigma = 1$)

$$
f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}
$$

The moment generating function of X, $m_X(t)$, is:

$$
m_X(t) = E\left(e^{tX}\right) = \int_{-\infty}^{\infty} e^{tx} f(x)\, dx
= \int_{-\infty}^{\infty} e^{tx}\, \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}\, dx
= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2 - 2tx}{2}}\, dx
$$

We will now use the fact that

$$
\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi a^2}}\, e^{-\frac{(x-b)^2}{2a^2}}\, dx = 1
\qquad \text{for all } a > 0 \text{ and } b
$$

Completing the square in the exponent:

$$
m_X(t) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2 - 2tx}{2}}\, dx
= e^{\frac{t^2}{2}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2 - 2tx + t^2}{2}}\, dx
= e^{\frac{t^2}{2}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(x-t)^2}{2}}\, dx
= e^{\frac{t^2}{2}}
$$

since the last integral equals 1.
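A sketch (not from the notes) comparing the closed form with a Monte Carlo estimate; t = 1.0 is an assumed value.

```python
import numpy as np

# Compare the closed-form MGF e^{t^2/2} with a Monte Carlo estimate of E(e^{tZ}), Z standard normal.
rng = np.random.default_rng(9)
t = 1.0
z = rng.normal(0.0, 1.0, size=2_000_000)

print("Monte Carlo E(e^{tZ}):", np.exp(t * z).mean())
print("exp(t^2/2):           ", np.exp(t ** 2 / 2))
```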
5. The Gamma distribution (parameters $\alpha$, $\lambda$)

$$
f(x) = \begin{cases} \dfrac{\lambda^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases}
$$

The moment generating function of X, $m_X(t)$, is:

$$
m_X(t) = E\left(e^{tX}\right) = \int_{-\infty}^{\infty} e^{tx} f(x)\, dx
= \int_{0}^{\infty} e^{tx}\, \frac{\lambda^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\lambda x}\, dx
= \int_{0}^{\infty} \frac{\lambda^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-(\lambda - t)x}\, dx
$$

We use the fact that

$$
\int_{0}^{\infty} \frac{b^a}{\Gamma(a)}\, x^{a-1} e^{-bx}\, dx = 1
\qquad \text{for all } a > 0,\ b > 0
$$

Hence, for $t < \lambda$,

$$
m_X(t) = \frac{\lambda^{\alpha}}{(\lambda - t)^{\alpha}}
\int_{0}^{\infty} \frac{(\lambda - t)^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-(\lambda - t)x}\, dx
= \left(\frac{\lambda}{\lambda - t}\right)^{\alpha}
$$

since the last integral equals 1.
Properties of
Moment Generating Functions
1. mX(0) = 1
$m_X(t) = E\left(e^{tX}\right)$, hence $m_X(0) = E\left(e^{0 \cdot X}\right) = E(1) = 1$.

Note: the moment generating functions of the following distributions satisfy the property $m_X(0) = 1$:

i) Binomial Dist'n: $m_X(t) = \left(e^t p + 1 - p\right)^n$

ii) Poisson Dist'n: $m_X(t) = e^{\lambda\left(e^t - 1\right)}$

iii) Exponential Dist'n: $m_X(t) = \dfrac{\lambda}{\lambda - t}$

iv) Std Normal Dist'n: $m_X(t) = e^{\frac{t^2}{2}}$

v) Gamma Dist'n: $m_X(t) = \left(\dfrac{\lambda}{\lambda - t}\right)^{\alpha}$

2. $m_X(t) = 1 + \mu_1 t + \mu_2 \dfrac{t^2}{2!} + \mu_3 \dfrac{t^3}{3!} + \cdots + \mu_k \dfrac{t^k}{k!} + \cdots$

We use the expansion of the exponential function:

$$
e^u = 1 + u + \frac{u^2}{2!} + \frac{u^3}{3!} + \cdots + \frac{u^k}{k!} + \cdots
$$

$$
m_X(t) = E\left(e^{tX}\right)
= E\left[1 + tX + \frac{t^2}{2!} X^2 + \frac{t^3}{3!} X^3 + \cdots + \frac{t^k}{k!} X^k + \cdots\right]
$$

$$
= 1 + t\, E(X) + \frac{t^2}{2!} E\left(X^2\right) + \frac{t^3}{3!} E\left(X^3\right) + \cdots + \frac{t^k}{k!} E\left(X^k\right) + \cdots
$$

$$
= 1 + \mu_1 t + \mu_2 \frac{t^2}{2!} + \mu_3 \frac{t^3}{3!} + \cdots + \mu_k \frac{t^k}{k!} + \cdots
$$
3. $\displaystyle m_X^{(k)}(0) = \left.\frac{d^k}{dt^k} m_X(t)\right|_{t=0} = \mu_k$

Now

$$
m_X(t) = 1 + \mu_1 t + \mu_2 \frac{t^2}{2!} + \mu_3 \frac{t^3}{3!} + \cdots + \mu_k \frac{t^k}{k!} + \cdots
$$

$$
m_X'(t) = \mu_1 + \mu_2 \frac{2t}{2!} + \mu_3 \frac{3t^2}{3!} + \cdots + \mu_k \frac{k\, t^{k-1}}{k!} + \cdots
= \mu_1 + \mu_2 t + \mu_3 \frac{t^2}{2!} + \cdots + \mu_k \frac{t^{k-1}}{(k-1)!} + \cdots
$$

and $m_X'(0) = \mu_1$.

$$
m_X''(t) = \mu_2 + \mu_3 t + \mu_4 \frac{t^2}{2!} + \cdots + \mu_k \frac{t^{k-2}}{(k-2)!} + \cdots
$$

and $m_X''(0) = \mu_2$.

Continuing we find $m_X^{(k)}(0) = \mu_k$.
Property 3 is very useful in determining the moments of a
random variable X.
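As a sketch of property 3 (not part of the notes, assuming sympy is available): differentiating an MGF k times and evaluating at t = 0 returns $\mu_k$. The standard normal MGF derived above is used as the illustrative example.

```python
import sympy as sp

# Property 3 in action: mu_k = k-th derivative of m_X(t) evaluated at t = 0.
# Illustrated with the standard normal MGF m_X(t) = exp(t^2 / 2).
t = sp.symbols('t')
m = sp.exp(t**2 / 2)

for k in range(1, 5):
    mu_k = sp.diff(m, t, k).subs(t, 0)
    print(f"mu_{k} =", mu_k)
# Prints mu_1 = 0, mu_2 = 1, mu_3 = 0, mu_4 = 3.
```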
Examples
i) Binomial Dist'n: $m_X(t) = \left(e^t p + 1 - p\right)^n$

$$
m_X'(t) = n \left(e^t p + 1 - p\right)^{n-1} p\, e^t
$$

$$
m_X'(0) = n \left(e^0 p + 1 - p\right)^{n-1} p\, e^0 = np = \mu_1
$$

$$
m_X''(t) = n(n-1)\left(e^t p + 1 - p\right)^{n-2} \left(e^t p\right)^2
+ n\left(e^t p + 1 - p\right)^{n-1} e^t p
= n\, e^t p \left(e^t p + 1 - p\right)^{n-2} \left[(n-1)\, e^t p + \left(e^t p + 1 - p\right)\right]
$$

$$
m_X''(0) = np\left[(n-1)p + 1\right] = np\left(np + 1 - p\right) = np\left(np + q\right) = n^2 p^2 + npq = \mu_2
$$
ii) Poisson Dist'n: $m_X(t) = e^{\lambda\left(e^t - 1\right)}$

$$
m_X'(t) = e^{\lambda\left(e^t - 1\right)}\, \lambda e^t = \lambda\, e^{\lambda\left(e^t - 1\right) + t}
$$

$$
m_X''(t) = \lambda\, e^{\lambda\left(e^t - 1\right) + t} \left(\lambda e^t + 1\right)
= \lambda^2 e^{\lambda\left(e^t - 1\right) + 2t} + \lambda\, e^{\lambda\left(e^t - 1\right) + t}
$$

$$
m_X'''(t) = \lambda^2 e^{\lambda\left(e^t - 1\right) + 2t}\left(\lambda e^t + 2\right)
+ \lambda\, e^{\lambda\left(e^t - 1\right) + t}\left(\lambda e^t + 1\right)
= \lambda^3 e^{\lambda\left(e^t - 1\right) + 3t}
+ 3\lambda^2 e^{\lambda\left(e^t - 1\right) + 2t}
+ \lambda\, e^{\lambda\left(e^t - 1\right) + t}
$$

To find the moments we set t = 0:

$$
\mu_1 = m_X'(0) = \lambda\, e^{\lambda\left(e^0 - 1\right) + 0} = \lambda
$$

$$
\mu_2 = m_X''(0) = \lambda^2 e^0 + \lambda e^0 = \lambda^2 + \lambda
$$

$$
\mu_3 = m_X'''(0) = \lambda^3 + 3\lambda^2 + \lambda
$$

iii) Exponential Dist'n: $m_X(t) = \dfrac{\lambda}{\lambda - t}$

$$
m_X'(t) = \frac{d}{dt}\left[\lambda (\lambda - t)^{-1}\right]
= \lambda (-1)(\lambda - t)^{-2}(-1) = \lambda (\lambda - t)^{-2}
$$

$$
m_X''(t) = \lambda (-2)(\lambda - t)^{-3}(-1) = 2\lambda (\lambda - t)^{-3}
$$

$$
m_X'''(t) = 2\lambda (-3)(\lambda - t)^{-4}(-1) = 2(3)\lambda (\lambda - t)^{-4}
$$

$$
m_X^{(4)}(t) = 2(3)\lambda (-4)(\lambda - t)^{-5}(-1) = 4!\,\lambda (\lambda - t)^{-5}
$$

$$
m_X^{(k)}(t) = k!\,\lambda (\lambda - t)^{-(k+1)}
$$

Thus

$$
\mu_1 = m_X'(0) = \lambda\, \lambda^{-2} = \frac{1}{\lambda}, \qquad
\mu_2 = m_X''(0) = 2\lambda\, \lambda^{-3} = \frac{2}{\lambda^2}, \qquad \ldots, \qquad
\mu_k = m_X^{(k)}(0) = k!\,\lambda\, \lambda^{-(k+1)} = \frac{k!}{\lambda^k}
$$

The moments for the exponential distribution can be calculated in an alternative way. This is done by expanding $m_X(t)$ in powers of t and equating the coefficients of $t^k$ to the coefficients in:

$$
m_X(t) = 1 + \mu_1 t + \mu_2 \frac{t^2}{2!} + \mu_3 \frac{t^3}{3!} + \cdots + \mu_k \frac{t^k}{k!} + \cdots
$$

Here

$$
m_X(t) = \frac{\lambda}{\lambda - t} = \frac{1}{1 - \frac{t}{\lambda}}
= 1 + u + u^2 + u^3 + \cdots \qquad \text{with } u = \frac{t}{\lambda}
$$

$$
= 1 + \frac{t}{\lambda} + \frac{t^2}{\lambda^2} + \frac{t^3}{\lambda^3} + \cdots
$$

Equating the coefficients of $t^k$ we get:

$$
\frac{\mu_k}{k!} = \frac{1}{\lambda^k}
\qquad \text{or} \qquad
\mu_k = \frac{k!}{\lambda^k}
$$
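A sketch (not from the notes) checking $\mu_k = k!/\lambda^k$ by simulation; $\lambda = 2$ is an assumed value.

```python
import numpy as np
from math import factorial

# Check mu_k = k!/lambda^k for an exponential variable.
rng = np.random.default_rng(11)
lam = 2.0
x = rng.exponential(scale=1 / lam, size=2_000_000)

for k in (1, 2, 3):
    print(f"E(X^{k}): {np.mean(x ** k):.4f}  vs  k!/lambda^k = {factorial(k) / lam ** k:.4f}")
```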
The moments for the standard normal distribution

$$
m_X(t) = e^{\frac{t^2}{2}}
$$

We use the expansion of $e^u$:

$$
e^u = \sum_{k=0}^{\infty} \frac{u^k}{k!} = 1 + u + \frac{u^2}{2!} + \frac{u^3}{3!} + \cdots + \frac{u^k}{k!} + \cdots
$$

$$
m_X(t) = e^{\frac{t^2}{2}}
= 1 + \frac{t^2}{2} + \frac{\left(\frac{t^2}{2}\right)^2}{2!} + \frac{\left(\frac{t^2}{2}\right)^3}{3!} + \cdots + \frac{\left(\frac{t^2}{2}\right)^k}{k!} + \cdots
$$

$$
= 1 + \frac{1}{2}\, t^2 + \frac{1}{2^2\, 2!}\, t^4 + \frac{1}{2^3\, 3!}\, t^6 + \cdots + \frac{1}{2^k\, k!}\, t^{2k} + \cdots
$$

We now equate the coefficients of $t^k$ in:

$$
m_X(t) = 1 + \mu_1 t + \mu_2 \frac{t^2}{2!} + \cdots + \mu_k \frac{t^k}{k!} + \cdots + \mu_{2k} \frac{t^{2k}}{(2k)!} + \cdots
$$

If k is odd: $\mu_k = 0$.

For even powers $2k$:

$$
\frac{\mu_{2k}}{(2k)!} = \frac{1}{2^k\, k!}
\qquad \text{or} \qquad
\mu_{2k} = \frac{(2k)!}{2^k\, k!}
$$

Thus

$$
\mu_1 = 0, \qquad
\mu_2 = \frac{2!}{2^1\, 1!} = 1, \qquad
\mu_3 = 0, \qquad
\mu_4 = \frac{4!}{2^2\, 2!} = 3
$$
