
Chapter 6 Continuous Distributions

6.1-6.2 The Uniform Density


Definition: A random variable X has a uniform density if and only if its probability density is given by
$$f(x) = \begin{cases} \dfrac{1}{\beta - \alpha}, & \alpha < x < \beta \\ 0, & \text{otherwise.} \end{cases}$$
$\alpha$ and $\beta$ are parameters such that $\alpha < \beta$. The mean and variance of the uniform density are $\mu = (\alpha + \beta)/2$ and $\sigma^2 = (\beta - \alpha)^2/12$.

6.3 The Gamma Distribution


Definition: A random variable X has a gamma distribution if and only if its probability density is given by
$$f(x) = \begin{cases} \dfrac{x^{\alpha-1} e^{-x/\beta}}{\beta^\alpha\, \Gamma(\alpha)}, & x > 0 \\ 0, & \text{otherwise,} \end{cases}$$
where $\alpha > 0$ and $\beta > 0$.

By definition, $\Gamma(\alpha) = \int_0^\infty y^{\alpha-1} e^{-y}\,dy$, for $\alpha > 0$. Also, $\Gamma(1) = \int_0^\infty e^{-y}\,dy = 1$.

Integrating by parts repeatedly, $\Gamma(\alpha) = (\alpha - 1)!$ when $\alpha$ is a positive integer.


Note that (i) $\Gamma(\alpha) = (\alpha - 1)\,\Gamma(\alpha - 1)$, for $\alpha > 1$; (ii) $\Gamma(1/2) = \sqrt{\pi}$.

From the density function, $\int_0^\infty x^{\alpha-1} e^{-x/\beta}\,dx = \beta^\alpha\, \Gamma(\alpha)$; this is the gamma integral.

Special cases of Gamma Distribution:


1. When $\alpha = 1$ and $\beta = \theta$, we obtain
$$f(x) = \begin{cases} (1/\theta)\, e^{-x/\theta}, & x > 0 \\ 0, & \text{otherwise,} \end{cases}$$
where $\theta > 0$. The above density is called an exponential density function.

If X, the number of successes, has a Poisson distribution with parameter $\lambda = \alpha t$, then Y, the waiting time until the first success, has an exponential density function $f(y) = \alpha e^{-\alpha y}$, for y > 0.
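This connection can be illustrated by simulation (a sketch assuming numpy; the rate $\alpha$ and window t are arbitrary choices): with exponential waiting times between successes, the number of successes observed by time t behaves like a Poisson count with mean $\alpha t$:

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, t = 2.0, 3.0                  # illustrative rate and time window
    gaps = rng.exponential(scale=1 / alpha, size=(100_000, 50))  # exponential inter-arrival times
    counts = (np.cumsum(gaps, axis=1) <= t).sum(axis=1)          # successes observed by time t
    print(counts.mean(), alpha * t)      # sample mean ~ 6.0 = alpha * t
    print(counts.var(), alpha * t)       # Poisson: sample variance also ~ alpha * t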

2. When $\alpha = \nu/2$ and $\beta = 2$, we obtain
$$f(x) = \begin{cases} \dfrac{x^{\nu/2-1} e^{-x/2}}{2^{\nu/2}\, \Gamma(\nu/2)}, & x > 0 \\ 0, & \text{otherwise.} \end{cases}$$
The above is called a chi-square distribution with $\nu$ degrees of freedom. The chi-square distribution is very useful in sampling theory.
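Since the chi-square distribution is just a gamma distribution with $\alpha = \nu/2$ and $\beta = 2$, the two scipy densities should agree pointwise (a sketch with an illustrative $\nu$):

    import numpy as np
    from scipy.stats import chi2, gamma

    nu = 5                                   # illustrative degrees of freedom
    x = np.linspace(0.1, 20.0, 50)
    same = np.allclose(chi2(nu).pdf(x), gamma(a=nu / 2, scale=2).pdf(x))
    print(same)                              # True: chi-square(nu) = gamma(nu/2, 2)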

For the gamma distribution, the rth moment about the origin is $\mu_r' = \beta^r\, \Gamma(\alpha + r)/\Gamma(\alpha)$.
Proof:
$$\mu_r' = E(X^r) = \int_0^\infty x^r f(x)\,dx = \int_0^\infty \frac{x^{\alpha+r-1} e^{-x/\beta}}{\beta^\alpha\, \Gamma(\alpha)}\,dx = \frac{\beta^{\alpha+r}\, \Gamma(\alpha+r)}{\beta^\alpha\, \Gamma(\alpha)} = \beta^r\, \Gamma(\alpha+r)/\Gamma(\alpha),$$
by the gamma integral above.

The mean and variance of the gamma distribution are $\mu = \alpha\beta$ and $\sigma^2 = \alpha\beta^2$.


The mean and variance of the exponential distribution are $\mu = \theta$ and $\sigma^2 = \theta^2$.
The mean and variance of the chi-square distribution are $\mu = \nu$ and $\sigma^2 = 2\nu$.
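These three mean-variance pairs can be confirmed against scipy's built-in moments (illustrative parameter values):

    from scipy.stats import gamma, expon, chi2

    a, b = 3.0, 2.0                                   # illustrative alpha, beta
    print(gamma(a=a, scale=b).stats(moments='mv'))    # (6.0, 12.0) = (alpha*beta, alpha*beta^2)
    print(expon(scale=b).stats(moments='mv'))         # (2.0, 4.0)  = (theta, theta^2)
    print(chi2(5).stats(moments='mv'))                # (5.0, 10.0) = (nu, 2*nu)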

The moment generating function of the gamma distribution is given by $M_X(t) = (1 - \beta t)^{-\alpha}$.

Proof:
$$M_X(t) = E(e^{tX}) = \int_0^\infty e^{tx}\, \frac{x^{\alpha-1} e^{-x/\beta}}{\beta^\alpha\, \Gamma(\alpha)}\,dx = \int_0^\infty \frac{x^{\alpha-1} e^{-x(1-\beta t)/\beta}}{\beta^\alpha\, \Gamma(\alpha)}\,dx = \frac{\Gamma(\alpha)\,[\beta/(1-\beta t)]^\alpha}{\beta^\alpha\, \Gamma(\alpha)} = (1 - \beta t)^{-\alpha},$$
for $t < 1/\beta$, again by the gamma integral.

For the exponential distribution, $M_X(t) = (1 - \theta t)^{-1}$.


For the chi-square distribution, $M_X(t) = (1 - 2t)^{-\nu/2}$.
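The gamma mgf can be verified by computing $E(e^{tX})$ with numerical quadrature (a sketch with illustrative $\alpha$, $\beta$, and a t inside the region of existence $t < 1/\beta$):

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import gamma

    a, b, t = 2.0, 0.5, 0.6                 # illustrative values; note t < 1/beta = 2
    pdf = gamma(a=a, scale=b).pdf
    mgf, _ = quad(lambda x: np.exp(t * x) * pdf(x), 0, np.inf)
    print(mgf, (1 - b * t) ** (-a))         # both ~ 2.0408 = (1 - 0.3)^(-2)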

6.4 The Beta Distribution


Definition: A random variable X has a beta distribution if and only if its probability density is given by
$$f(x) = \begin{cases} \dfrac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\, x^{\alpha-1} (1-x)^{\beta-1}, & 0 < x < 1 \\ 0, & \text{otherwise,} \end{cases}$$
where $\alpha > 0$ and $\beta > 0$.

1 ( )(  )

1
Since
0
f ( x)dx = 1 , we have that  0
x −1 (1 − x)  −1 dx =
( +  )
= B( ,  ) , beta integral.

B( ,  ) is called the beta function.


The mean and variance of the beta distribution are given by
$$\mu = \frac{\alpha}{\alpha+\beta} \quad\text{and}\quad \sigma^2 = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}.$$
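A quick check with scipy (the parameters chosen here, $\alpha = 2$ and $\beta = 9$, are the ones that reappear in Problem 58 below):

    from scipy.stats import beta as beta_dist

    a, b = 2.0, 9.0                                         # same parameters as Problem 58 below
    X = beta_dist(a, b)
    print(X.mean(), a / (a + b))                            # both 2/11 ~ 0.1818
    print(X.var(), a * b / ((a + b) ** 2 * (a + b + 1)))    # both 18/1452 ~ 0.0124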

Problem 50 page 202


Solution
$f(x) = 1/(\beta - \alpha) = 1/(0.015 + 0.015) = 1/0.03$, for $-0.015 < x < 0.015$.
(a) $P(-0.002 < X < 0.003) = \int_{-0.002}^{0.003} (1/0.03)\,dx = 0.005/0.03 = 1/6$
(b) $P(X > 0.005) = \int_{0.005}^{0.015} (1/0.03)\,dx = 0.010/0.03 = 1/3$

Problem 58 page 202


( +  )  −1 .1 .9
f ( x) = x (1 − x)  −1 = 90 x(1 − x)8 . P( X  .1) =  90 x(1 − x)8 dx = −90  (1 − y ) y 8 dy
( )(  ) 0 1

By using y = 1 – x with –dy = dx [For x = 0, y = 1 and for x = .1, y = .9]. Answer = 0.2639
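The answer is just the beta cdf at 0.1, so scipy can confirm it (a one-line check, assuming scipy):

    from scipy.stats import beta

    print(beta(2, 9).cdf(0.1))   # ~ 0.2639, matching the hand computation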

Problem 54 page 202


(a) $P(X \le 24) = \int_0^{24} \frac{1}{120}\, e^{-x/120}\,dx = 1 - e^{-24/120} = 0.1813$

(b) $P(X \ge 180) = \int_{180}^{\infty} \frac{1}{120}\, e^{-x/120}\,dx = e^{-180/120} = 0.2231$
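Both tail computations can be confirmed with scipy's exponential distribution (scale = $\theta$ = 120):

    from scipy.stats import expon

    X = expon(scale=120)
    print(X.cdf(24))    # ~ 0.1813 = 1 - exp(-24/120)
    print(X.sf(180))    # ~ 0.2231 = exp(-180/120); sf is the survival function 1 - cdf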

Problem 15 page 184


$f(x) = (1/\theta)\,e^{-x/\theta}$ and $P[X \le -\theta \ln(1-p)] = (1/\theta)\int_0^{-\theta \ln(1-p)} e^{-x/\theta}\,dx$, since $0 < p < 1$.

On integration, we have $-e^{\ln(1-p)} + 1 = -(1-p) + 1 = p$.
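The result says that $-\theta \ln(1-p)$ is the pth quantile of the exponential distribution, which scipy's ppf (inverse cdf) confirms (illustrative $\theta$ and p):

    import numpy as np
    from scipy.stats import expon

    theta, p = 120.0, 0.75               # illustrative values
    print(expon(scale=theta).ppf(p))     # ~ 166.36
    print(-theta * np.log(1 - p))        # same value: -theta * ln(1 - p)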

Problem 6.26 page 184


( +  )  −1
f ( x) = x (1 − x)  −1 and
( )(  )
( +  )
f ( x) = [( − 1) x − 2 (1 − x)  −1 − (  − 1) x −1 (1 − x)  −2 ] . On setting f ( x) = 0 and solving
( )(  )
the equation, we get 0 = ( − 1)( x − 1) − (  − 1) x  x = ( − 1) /( +  + 1) is the relative maximum
point.
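The mode formula can be checked by maximizing the beta density numerically (a sketch assuming scipy, with illustrative $\alpha, \beta > 1$):

    from scipy.optimize import minimize_scalar
    from scipy.stats import beta

    a, b = 2.0, 9.0                          # illustrative parameters with a, b > 1
    res = minimize_scalar(lambda x: -beta(a, b).pdf(x), bounds=(0, 1), method='bounded')
    print(res.x, (a - 1) / (a + b - 2))      # both ~ 1/9, the relative maximum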

6.5 The Normal Distribution


Definition: A random variable X has a normal distribution if and only if its probability density is given by
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}[(x-\mu)/\sigma]^2}, \qquad -\infty < x < \infty,$$
where $\sigma > 0$ and $-\infty < \mu < \infty$.
The curve is bell-shaped. The moment generating function of the normal distribution is given by
$M_X(t) = e^{\mu t + \sigma^2 t^2/2}$.

From this, $M_X'(t) = (\mu + \sigma^2 t)\,M_X(t)$ and $M_X''(t) = [(\mu + \sigma^2 t)^2 + \sigma^2]\,M_X(t)$.

Therefore, $E(X) = M_X'(0) = \mu$ and $E(X^2) = M_X''(0) = \mu^2 + \sigma^2$. Hence, $V(X) = \sigma^2$.
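scipy can confirm these moment formulas directly (illustrative $\mu$ and $\sigma$; .moment(2) returns the raw second moment $E(X^2)$):

    from scipy.stats import norm

    mu, sigma = 1.5, 2.0                     # illustrative values
    X = norm(loc=mu, scale=sigma)
    print(X.mean(), X.var())                 # 1.5 and 4.0, i.e. mu and sigma^2
    print(X.moment(2), mu**2 + sigma**2)     # both 6.25: E(X^2) = mu^2 + sigma^2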
Proof for mgf: Note that $1 = \int_{-\infty}^{\infty} f(x)\,dx = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}[(x-\mu)/\sigma]^2}\,dx$.

$$M_X(t) = E(e^{tX}) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2\sigma^2}[(x-\mu)^2 - 2xt\sigma^2]}\,dx = e^{\mu t + \sigma^2 t^2/2}\, \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2\sigma^2}[x-(\mu+t\sigma^2)]^2}\,dx = e^{\mu t + \sigma^2 t^2/2}$$

Note that $(x-\mu)^2 - 2xt\sigma^2 = (x-\mu)^2 - 2(x-\mu)t\sigma^2 - 2\mu t\sigma^2 = [(x-\mu) - t\sigma^2]^2 - (t\sigma^2)^2 - 2\mu t\sigma^2$. Dividing $(t\sigma^2)^2 + 2\mu t\sigma^2$ by $2\sigma^2$ gives the exponent $\mu t + \sigma^2 t^2/2$, and the remaining integral equals 1 because it integrates a normal density with mean $\mu + t\sigma^2$.

Definition: The normal distribution with $\mu = 0$ and $\sigma = 1$ is called the standard normal distribution. Its density is given by $f(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}$, for $-\infty < z < \infty$, where $Z = (X - \mu)/\sigma$.
The moment generating function of the standard normal distribution is $M_Z(t) = e^{t^2/2}$.

To find P(0 < Z < 1): $P(0 < Z < 1) = \int_0^1 f(z)\,dz = \int_0^1 \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}z^2}\,dz$. This integral cannot be obtained
directly. We use the normal probability table. [See Table III on page 500]. The tabulated area is
that of $\int_0^z \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}x^2}\,dx$.
[Figure: graph of the standard normal density, f(z) versus z, for z from −3 to 3.]

P(0 < Z < 1) = 0.3413
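In place of Table III, the same area can be computed with scipy's standard normal cdf:

    from scipy.stats import norm

    print(norm.cdf(1) - norm.cdf(0))   # ~ 0.3413, the tabulated area from 0 to 1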


To find any (standard normal) probability:
• Sketch the normal curve
• Shade the required area
• Use the table accordingly

$P(a \le X \le b) = P(a < X \le b) = P(a \le X < b) = P(a < X < b)$ because X is continuous.
Because of symmetry, $P(Z > 0) = P(Z < 0) = 0.5$ and $P(Z < -a) = P(Z > a)$.

Suppose we want to find the probability of any normal random variable X with mean $\mu$ and standard deviation $\sigma$:
• Standardize the random variable X to obtain $Z = \dfrac{X - \mu}{\sigma}$
• Follow the previous method to find the required probability.
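The standardization step in code (a sketch using the mean and standard deviation that appear in Problem 70 below; the exact cdf values differ slightly from the table answers because the table rounds z to two decimals):

    from scipy.stats import norm

    mu, sigma = 37.6, 4.6                           # values from Problem 70 below
    a, b = 30.0, 40.0
    z_a, z_b = (a - mu) / sigma, (b - mu) / sigma   # standardize the endpoints
    print(norm.cdf(z_b) - norm.cdf(z_a))            # ~ 0.6498; rounding z to 2 decimals gives 0.6490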

Problem 63 page 202


Solution (using half table)
(a) P(Z > 1.14) = 0.5 − 0.3729 = 0.1271 (b) P(Z > −0.36) = 0.5 + 0.1406 = 0.6406
(c) P(−0.46 < Z < −0.09) = 0.1772 − 0.0359 = 0.1413
(d) P(−0.58 < Z < 1.12) = 0.2190 + 0.3686 = 0.5876

Using full table

(a) P(Z > 1.14) = 1 − P(Z < 1.14) = 1 − 0.8729 = 0.1271
(b) P(Z > −0.36) = 1 − P(Z < −0.36) = 1 − 0.3594 = 0.6406
(c) P(−0.46 < Z < −0.09) = 0.4641 − 0.3228 = 0.1413
(d) P(−0.58 < Z < 1.12) = 0.8686 − 0.2810 = 0.5876

Example 1: Find z* such that P(Z < z* ) = 0.7777.


Solution: Select the z value whose tabulated probability is closer to 0.7777; if 0.7777 were midway between two table entries, average the corresponding z values. Here P(Z < 0.76) = 0.7764 and P(Z < 0.77) = 0.7794, so z* ≈ 0.76.

Problem 70 page 203


(a) $P(X > 44.5) = P(Z > (44.5 - 37.6)/4.6) = P(Z > 1.50) = 0.0668$
(c) $P(30 < X < 40) = P(-1.65 < Z < 0.52) = 0.6490$

6.6 Normal Approximation to the Binomial Distribution


When n is large and $\theta$ is close to 0.5, the normal approximation to the binomial is very satisfactory. A rule of thumb is to use the normal approximation when $n\theta$ and $n(1-\theta)$ are both greater than 5. To use the normal approximation, one needs to apply the continuity correction:
$P(X \le b) \approx P(X \le b + 0.5)$, $\qquad P(X \ge a) \approx P(X \ge a - 0.5)$,
$P(a \le X \le b) \approx P(a - 0.5 \le X \le b + 0.5)$, $\qquad P(X = c) \approx P(c - 0.5 \le X \le c + 0.5)$.
For other inequalities, first convert to the above before introducing the continuity correction.
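A small helper that applies these corrections (a hypothetical sketch assuming scipy, not from the text; the function name and interface are my own):

    import numpy as np
    from scipy.stats import norm

    def binom_normal_approx(n, theta, a=None, b=None):
        """Approximate P(a <= X <= b) for X ~ Binomial(n, theta) with continuity correction.

        Pass a=None for P(X <= b), or b=None for P(X >= a)."""
        mu, sd = n * theta, np.sqrt(n * theta * (1 - theta))
        lo = -np.inf if a is None else (a - 0.5 - mu) / sd
        hi = np.inf if b is None else (b + 0.5 - mu) / sd
        return norm.cdf(hi) - norm.cdf(lo)

    print(binom_normal_approx(120, 0.23, a=28, b=28))   # P(X = 28) ~ 0.086, using the setup of Problem 78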

Problem 78 page 203

$\theta = 0.23$, $n = 120$; $\mu = n\theta = 120(0.23) = 27.6$, $\sigma = \sqrt{n\theta(1-\theta)} = \sqrt{120(0.23)(0.77)} = 4.61$

$P(X > 32) = P(X \ge 33) \approx P(X \ge 32.5) = P(Z \ge 1.06) = 0.1446$
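For comparison, the exact binomial tail from scipy, which the approximation tracks closely:

    from scipy.stats import binom

    print(binom.sf(32, 120, 0.23))   # exact P(X > 32) = P(X >= 33), ~ 0.14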

Theorem 6.8 on page 191:


If X is a binomial random variable with parameters n and $\theta$, then the moment generating function of $Z = (X - n\theta)/\sqrt{n\theta(1-\theta)}$ approaches that of the standard normal distribution as $n \to \infty$.
(See the proof on page 191.)

6.7 The Bivariate Normal Distribution


Definition: A pair of random variables X and Y has a bivariate normal distribution if and only if their joint probability density is given by
$$f(x, y) = \frac{\exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \left(\frac{x-\mu_1}{\sigma_1}\right)^2 - 2\rho \left(\frac{x-\mu_1}{\sigma_1}\right)\left(\frac{y-\mu_2}{\sigma_2}\right) + \left(\frac{y-\mu_2}{\sigma_2}\right)^2 \right] \right\}}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}, \quad -\infty < x, y < \infty,$$
where $\sigma_1, \sigma_2 > 0$, $-\infty < \mu_1, \mu_2 < \infty$, and $-1 < \rho < 1$.
The parameters $\mu_1$, $\mu_2$, $\sigma_1$ and $\sigma_2$ are, respectively, the means and the standard deviations of the two random variables X and Y. The marginal density of X is the univariate normal density with parameters $\mu_1$ and $\sigma_1$; this can be obtained by integrating out y. The parameter $\rho$ is the correlation coefficient between X and Y.

Theorem 6.9: If X and Y have a bivariate normal distribution, the conditional density of Y given that X = x is normal with mean $\mu_{Y|x} = \mu_2 + \rho\frac{\sigma_2}{\sigma_1}(x - \mu_1)$ and variance $\sigma^2_{Y|x} = \sigma_2^2(1 - \rho^2)$.
Similarly, the conditional density of X given Y = y is normal with mean $\mu_{X|y} = \mu_1 + \rho\frac{\sigma_1}{\sigma_2}(y - \mu_2)$ and variance $\sigma^2_{X|y} = \sigma_1^2(1 - \rho^2)$.
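Theorem 6.9 can be checked by simulation: draw from a bivariate normal, keep the draws with X near some x, and compare the conditional mean and variance of Y with the formulas (a sketch assuming numpy; all parameter values illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    mu1, mu2, s1, s2, rho = 1.0, -2.0, 2.0, 3.0, 0.6        # illustrative parameters
    cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
    xy = rng.multivariate_normal([mu1, mu2], cov, size=500_000)

    x = 2.0
    y_given_x = xy[np.abs(xy[:, 0] - x) < 0.05, 1]          # Y values with X near x
    print(y_given_x.mean(), mu2 + rho * (s2 / s1) * (x - mu1))   # both ~ -1.1
    print(y_given_x.var(), s2**2 * (1 - rho**2))                 # both ~ 5.76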

Theorem 6.10: If two random variables have a bivariate normal distribution, they are independent if and only if $\rho = 0$.

Note that the above is not true in general: for random variables that are not bivariate normal, $\rho = 0$ does not imply independence.

Lack of Memory Property for Exponential and Geometric Distributions:


A random variable X is said to lack memory if $P(X > s + t \mid X > t) = P(X > s)$ for all $s, t \ge 0$.
Suppose X = lifetime of some instrument. The above states that the probability that an instrument
lives for at least ( s + t ) hours given that it has survived t hours is the same as the initial
probability that it lives for at least s hours. If the instrument is alive at time t, then the
distribution of the remaining amount of time that it survives is the same as the original lifetime
distribution, that is, the instrument does not remember that it has already been in use for a time t.

For the exponential distribution,

$$P(X > s+t \mid X > t) = \frac{P[(X > s+t) \cap (X > t)]}{P(X > t)} = \frac{P(X > s+t)}{P(X > t)} = \frac{\int_{s+t}^\infty \frac{1}{\theta}\, e^{-x/\theta}\,dx}{\int_t^\infty \frac{1}{\theta}\, e^{-x/\theta}\,dx} = \frac{e^{-(s+t)/\theta}}{e^{-t/\theta}} = e^{-s/\theta} = P(X > s).$$

For the geometric distribution $f(x) = \theta(1-\theta)^x$, $x = 0, 1, 2, \ldots$ [x = number of failures until the first success],

$$P(X \ge s+t \mid X \ge t) = \frac{P[(X \ge s+t) \cap (X \ge t)]}{P(X \ge t)} = \frac{P(X \ge s+t)}{P(X \ge t)} = \frac{\sum_{x=s+t}^\infty \theta(1-\theta)^x}{\sum_{x=t}^\infty \theta(1-\theta)^x} = \frac{(1-\theta)^{s+t}}{(1-\theta)^t} = (1-\theta)^s = P(X \ge s).$$
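The same check for the geometric case, computing the tails directly from $P(X \ge k) = (1-\theta)^k$ (illustrative $\theta$, s, t):

    theta, s, t = 0.3, 4, 7                    # illustrative values

    def tail(k):
        # P(X >= k) for f(x) = theta * (1 - theta)**x, x = 0, 1, 2, ...
        return (1 - theta) ** k

    print(tail(s + t) / tail(t), tail(s))      # both (1 - theta)**s = 0.7**4 = 0.2401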
