
Chapter 5: Joint Probability Distributions

CHAPTER OUTLINE
5-1 Two or More Random Variables
    5-1.1 Joint Probability Distributions
    5-1.2 Marginal Probability Distributions
    5-1.3 Conditional Probability Distributions
    5-1.4 Independence
    5-1.5 More Than Two Random Variables
5-2 Covariance and Correlation
5-3 Common Joint Distributions
    5-3.1 Multinomial Probability Distribution
    5-3.2 Bivariate Normal Distribution
5-4 Linear Functions of Random Variables
5-5 General Functions of Random Variables

Chapter 5 Title and Outline 1
Learning Objectives for Chapter 5
After careful study of this chapter, you should be able to do the
following:
1. Use joint probability mass functions and joint probability density functions
to calculate probabilities.
2. Calculate marginal and conditional probability distributions from joint
probability distributions.
3. Interpret and calculate covariances and correlations between random
variables.
4. Use the multinomial distribution to determine probabilities.
5. Understand properties of a bivariate normal distribution and be able to draw
contour plots for the probability density function.
6. Calculate means and variances for linear combinations of random variables,
and calculate probabilities for linear combinations of normally distributed
random variables.
7. Determine the distribution of a general function of a random variable.

Chapter 5 Learning Objectives 2


© John Wiley & Sons, Inc.  Applied Statistics and Probability for Engineers, by Montgomery and Runger.
Concept of Joint Probabilities
• Some random variables are not independent of each
other, i.e., they tend to be related.
– Urban atmospheric ozone and airborne particulate matter
tend to vary together.
– Urban vehicle speeds and fuel consumption rates tend to vary
inversely.
• The length (X) of an injection-molded part might not be
independent of the width (Y). Individual parts will vary
due to random variation in materials and pressure.
• A joint probability distribution will describe the behavior
of several random variables, say, X and Y. The graph of
the distribution is 3-dimensional: x, y, and f(x,y).
Chapter 5 Introduction 3
Example 5-1: Signal Bars
You use your cell phone to check your airline reservation. The airline system
requires that you speak the name of your departure city to the voice
recognition system.
• Let Y denote the number of times that you have to state your departure
  city.
• Let X denote the number of bars of signal strength on your cell phone.

  y = number of times    x = number of bars of signal strength
  city name is stated       1       2       3
          1               0.01    0.02    0.25
          2               0.02    0.03    0.20
          3               0.02    0.10    0.05
          4               0.15    0.10    0.05

Figure 5-1 Joint probability distribution of X and Y. The table cells are
the probabilities. The companion bar chart ("Number of Repeats vs. Cell
Phone Bars") plots the same probabilities; observe that more bars relate
to less repeating.
Sec 5-1.1 Joint Probability Distributions 4
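The table above can be stored and queried directly; a minimal Python sketch (the dictionary layout and names are ours, the values are from Figure 5-1):

```python
# Joint PMF of Example 5-1, keyed by (x, y) = (bars of signal, repeats).
joint = {
    (1, 1): 0.01, (2, 1): 0.02, (3, 1): 0.25,
    (1, 2): 0.02, (2, 2): 0.03, (3, 2): 0.20,
    (1, 3): 0.02, (2, 3): 0.10, (3, 3): 0.05,
    (1, 4): 0.15, (2, 4): 0.10, (3, 4): 0.05,
}

# All cell probabilities sum to 1, as any joint PMF must.
print(round(sum(joint.values()), 2))  # → 1.0
# A joint probability is a direct lookup.
print(joint[(3, 1)])                  # P(X = 3, Y = 1) → 0.25
```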
Joint Probability Mass Function Defined

The joint probability mass function of the discrete random variables X
and Y, denoted as f_XY(x, y), satisfies:

(1) f_XY(x, y) ≥ 0                      (all probabilities are non-negative)

(2) ∑_x ∑_y f_XY(x, y) = 1              (the sum of all probabilities is 1)

(3) f_XY(x, y) = P(X = x, Y = y)        (5-1)
Sec 5-1.1 Joint Probability Distributions 5


Joint Probability Density Function Defined
The joint probability density function for the continuous random
variables X and Y, denoted as f_XY(x, y), satisfies the following
properties:

(1) f_XY(x, y) ≥ 0 for all x, y

(2) ∫_-∞^∞ ∫_-∞^∞ f_XY(x, y) dx dy = 1

(3) P((X, Y) ∈ R) = ∫∫_R f_XY(x, y) dx dy        (5-2)

Figure 5-2 Joint probability density function for the random variables
X and Y. The probability that (X, Y) is in the region R is determined by
the volume of f_XY(x, y) over the region R.
Sec 5-1.1 Joint Probability Distributions 6


Joint Probability Density Function Graph

Figure 5-3 Joint probability density function for the continuous random
variables X and Y of different dimensions of an injection-molded part.
Note the asymmetric, narrow ridge shape of the PDF, indicating that
small values in the X dimension are more likely to occur when small
values in the Y dimension occur.
Sec 5-1.1 Joint Probability Distributions 7
Example 5-2: Server Access Time-1
Let the random variable X denote the time (in milliseconds) until a
computer server connects to your machine. Let Y denote the time until
the server authorizes you as a valid user. X and Y measure the wait
from a common starting point (x < y). The ranges of x and y are shown
here.

Figure 5-4 The joint probability density function of X and Y is nonzero
over the shaded region where x < y.

Sec 5-1.1 Joint Probability Distributions 8


Example 5-2: Server Access Time-2
• The joint probability density function is:

  f_XY(x, y) = k e^(-0.001x - 0.002y)  for 0 ≤ x < y < ∞, with k = 6 × 10^-6

• We verify that it integrates to 1 as follows:

  ∫_0^∞ ∫_x^∞ k e^(-0.001x - 0.002y) dy dx
      = k ∫_0^∞ e^(-0.001x) [ ∫_x^∞ e^(-0.002y) dy ] dx
      = k ∫_0^∞ e^(-0.001x) (e^(-0.002x) / 0.002) dx
      = 0.003 ∫_0^∞ e^(-0.003x) dx
      = 0.003 (1 / 0.003) = 1
Sec 5-1.1 Joint Probability Distributions 9
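The verification above can also be sanity-checked numerically; a sketch (the step size and cutoff are our own choices, not from the slides):

```python
import math

# The inner integral ∫_x^∞ e^(-0.002 y) dy = e^(-0.002 x) / 0.002 collapses
# the double integral to 0.003 ∫_0^∞ e^(-0.003 x) dx. Check that single
# integral with a midpoint rule; the integrand is negligible past ~5000 msec.
h = 1.0
total = sum(0.003 * math.exp(-0.003 * (i + 0.5) * h) * h for i in range(5000))
print(round(total, 3))  # → 1.0
```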
Example 5-2: Server Access Time-2
Now calculate a probability:

P(X < 1000, Y < 2000) = ∫_0^1000 ∫_x^2000 k e^(-0.001x - 0.002y) dy dx
    = k ∫_0^1000 e^(-0.001x) [ (e^(-0.002x) - e^(-4)) / 0.002 ] dx
    = 0.003 ∫_0^1000 [ e^(-0.003x) - e^(-4) e^(-0.001x) ] dx
    = 0.003 [ (1 - e^(-3)) / 0.003 - e^(-4) (1 - e^(-1)) / 0.001 ]
    = 0.003 (316.738 - 11.578) = 0.915

Figure 5-5 Region of integration for the probability that X < 1000 and
Y < 2000 is darkly shaded.
Sec 5-1.1 Joint Probability Distributions 10


Marginal Probability Distributions (discrete)
For a discrete joint PMF, there are marginal distributions for each
random variable, formed by summing the joint PMF over the other
variable:

  f_X(x) = ∑_y f_XY(x, y)        f_Y(y) = ∑_x f_XY(x, y)

  y = number of times    x = number of bars of signal strength
  city name is stated       1       2       3      f(y) =
          1               0.01    0.02    0.25      0.28
          2               0.02    0.03    0.20      0.25
          3               0.02    0.10    0.05      0.17
          4               0.15    0.10    0.05      0.30
               f(x) =     0.20    0.25    0.55      1.00

Figure 5-6 From the prior example, the joint PMF is shown in green
while the two marginal PMFs are shown in blue.
Sec 5-1.2 Marginal Probability Distributions 11
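The two marginal PMFs can be reproduced by summing the joint table; a small Python sketch (names are ours, values from Figure 5-6):

```python
# Joint PMF keyed by (x, y); each marginal sums over the other index.
joint = {
    (1, 1): 0.01, (2, 1): 0.02, (3, 1): 0.25,
    (1, 2): 0.02, (2, 2): 0.03, (3, 2): 0.20,
    (1, 3): 0.02, (2, 3): 0.10, (3, 3): 0.05,
    (1, 4): 0.15, (2, 4): 0.10, (3, 4): 0.05,
}
f_x = {x: round(sum(p for (xi, y), p in joint.items() if xi == x), 2)
       for x in (1, 2, 3)}
f_y = {y: round(sum(p for (x, yi), p in joint.items() if yi == y), 2)
       for y in (1, 2, 3, 4)}
print(f_x)  # → {1: 0.2, 2: 0.25, 3: 0.55}
print(f_y)  # → {1: 0.28, 2: 0.25, 3: 0.17, 4: 0.3}
```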
Marginal Probability Distributions (continuous)
• Rather than summing a discrete joint PMF, we integrate a
continuous joint PDF.
• The marginal PDFs are used to make probability
statements about one variable.
• If the joint probability density function of random
variables X and Y is fXY(x,y), the marginal probability
density functions of X and Y are:

f_X(x) = ∫ f_XY(x, y) dy    and    f_Y(y) = ∫ f_XY(x, y) dx        (5-3)

where each integral is taken over the range of the other variable.
Sec 5-1.2 Marginal Probability Distributions 12
Example 5-4: Server Access Time-1
For the random variables
times in Example 5-2, find
the probability that Y
exceeds 2000.
Integrate the joint PDF directly
using the picture to
determine the limits.
P(Y > 2000) = ∫_0^2000 ∫_2000^∞ f_XY(x, y) dy dx + ∫_2000^∞ ∫_x^∞ f_XY(x, y) dy dx

(dark region = left dark region + right dark region)
Sec 5-1.2 Marginal Probability Distributions 13


Example 5-4: Server Access Time-2
Alternatively, find the marginal PDF and then integrate that to find
the desired probability.

  f_Y(y) = ∫_0^y k e^(-0.001x - 0.002y) dx
         = k e^(-0.002y) ∫_0^y e^(-0.001x) dx
         = k e^(-0.002y) (1 - e^(-0.001y)) / 0.001
         = 6 × 10^-3 e^(-0.002y) (1 - e^(-0.001y))   for y > 0

  P(Y > 2000) = ∫_2000^∞ f_Y(y) dy
              = 6 × 10^-3 ∫_2000^∞ e^(-0.002y) (1 - e^(-0.001y)) dy
              = 6 × 10^-3 [ e^(-4) / 0.002 - e^(-6) / 0.003 ]
              = 0.05

Sec 5-1.2 Marginal Probability Distributions 14
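The marginal-PDF route can be checked numerically; a midpoint-rule sketch (the grid and cutoff are our own choices):

```python
import math

# Marginal PDF of Y from Example 5-4.
def f_y(y):
    return 6e-3 * math.exp(-0.002 * y) * (1 - math.exp(-0.001 * y))

# P(Y > 2000) ≈ integral from 2000 out to where the tail is
# negligible (~20000 msec), via the midpoint rule.
h = 1.0
p = sum(f_y(2000 + (i + 0.5) * h) * h for i in range(18000))
print(round(p, 3))  # → 0.05
```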


Mean & Variance of a Marginal Distribution

Means E(X) and E(Y) are calculated from the discrete and continuous
marginal distributions:

                Discrete                          Continuous
  E(X) = ∑ x · f_X(x) = μ_X             E(X) = ∫ x · f_X(x) dx = μ_X
  E(Y) = ∑ y · f_Y(y) = μ_Y             E(Y) = ∫ y · f_Y(y) dy = μ_Y
  V(X) = ∑ x² · f_X(x) − μ_X² = σ_X²    V(X) = ∫ x² · f_X(x) dx − μ_X² = σ_X²
  V(Y) = ∑ y² · f_Y(y) − μ_Y² = σ_Y²    V(Y) = ∫ y² · f_Y(y) dy − μ_Y² = σ_Y²

Sec 5-1.2 Marginal Probability Distributions 15


Mean & Variance for Example 5-1
  y = number of times    x = number of bars of signal strength
  city name is stated       1       2       3     f(y) =   y·f(y) =   y²·f(y) =
          1               0.01    0.02    0.25     0.28      0.28       0.28
          2               0.02    0.03    0.20     0.25      0.50       1.00
          3               0.02    0.10    0.05     0.17      0.51       1.53
          4               0.15    0.10    0.05     0.30      1.20       4.80
               f(x) =     0.20    0.25    0.55     1.00      2.49       7.61
             x·f(x) =     0.20    0.50    1.65     2.35
            x²·f(x) =     0.20    1.00    4.95     6.15

E(X) = 2.35    V(X) = 6.15 − 2.35² = 6.15 − 5.5225 = 0.6275

E(Y) = 2.49    V(Y) = 7.61 − 2.49² = 7.61 − 6.2001 = 1.4099


Sec 5-1.2 Marginal Probability Distributions 16
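The tabled means and variances follow directly from the marginals; a quick Python check (marginal values from the table above):

```python
# Marginal PMFs from the signal-bars example.
f_x = {1: 0.20, 2: 0.25, 3: 0.55}
f_y = {1: 0.28, 2: 0.25, 3: 0.17, 4: 0.30}

ex = sum(x * p for x, p in f_x.items())
vx = sum(x * x * p for x, p in f_x.items()) - ex ** 2
ey = sum(y * p for y, p in f_y.items())
vy = sum(y * y * p for y, p in f_y.items()) - ey ** 2
print(round(ex, 2), round(vx, 4))  # → 2.35 0.6275
print(round(ey, 2), round(vy, 4))  # → 2.49 1.4099
```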
Conditional Probability Distributions
P  A B 
Recall that P  B A  
P  A

From Example 5-1 y = number of x = number of bars


times city of signal strength
P(Y=1|X=3) = 0.25/0.55 = 0.455
name is stated 1 2 3 f (y ) =
P(Y=2|X=3) = 0.20/0.55 = 0.364 1 0.01 0.02 0.25 0.28
2 0.02 0.03 0.20 0.25
P(Y=3|X=3) = 0.05/0.55 = 0.091
3 0.02 0.10 0.05 0.17
P(Y=4|X=3) = 0.05/0.55 = 0.091 4 0.15 0.10 0.05 0.30
f (x ) = 0.20 0.25 0.55 1.00
Sum = 1.001

Note that there are 12 probabilities conditional on X, and 12 more


probabilities conditional upon Y.
Sec 5-1.3 Conditional Probability Distributions 17
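The conditional PMF of Y given X = 3 shown above is just the x = 3 column of the joint table divided by the marginal f_X(3); a minimal sketch (names are ours):

```python
# f_XY(3, y) column from the joint table, and f_X(3) from the marginal.
col_x3 = {1: 0.25, 2: 0.20, 3: 0.05, 4: 0.05}
f_x3 = sum(col_x3.values())  # f_X(3) = 0.55

# f_{Y|3}(y) = f_XY(3, y) / f_X(3)
cond = {y: round(p / f_x3, 3) for y, p in col_x3.items()}
print(cond)  # → {1: 0.455, 2: 0.364, 3: 0.091, 4: 0.091}
```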
Conditional Probability Density Function Defined
Given continuous random variables X and Y with joint probability
density function f_XY(x, y), the conditional probability density
function of Y given X = x is

  f_{Y|x}(y) = f_XY(x, y) / f_X(x)   for f_X(x) > 0        (5-4)

which satisfies the following properties:

(1) f_{Y|x}(y) ≥ 0

(2) ∫ f_{Y|x}(y) dy = 1

(3) P(Y ∈ B | X = x) = ∫_B f_{Y|x}(y) dy for any set B in the range of Y

Sec 5-1.3 Conditional Probability Distributions 18


Example 5-6: Conditional Probability-1
From Example 5-2, determine the conditional PDF for Y given X = x.

  f_X(x) = ∫_x^∞ k e^(-0.001x - 0.002y) dy
         = k e^(-0.001x) (e^(-0.002x) / 0.002)
         = 0.003 e^(-0.003x)   for x > 0

  f_{Y|x}(y) = f_XY(x, y) / f_X(x)
             = k e^(-0.001x - 0.002y) / (0.003 e^(-0.003x))
             = 0.002 e^(0.002x - 0.002y)   for 0 < x < y
Sec 5-1.3 Conditional Probability Distributions 19
Example 5-6: Conditional Probability-2
Now find the probability that Y exceeds 2000 given that X = 1500:

P(Y > 2000 | X = 1500) = ∫_2000^∞ f_{Y|1500}(y) dy
    = ∫_2000^∞ 0.002 e^(0.002·1500 - 0.002y) dy
    = 0.002 e³ ∫_2000^∞ e^(-0.002y) dy
    = 0.002 e³ (e^(-4) / 0.002)
    = e^(-1) = 0.368

Figure 5-8 (again) The conditional PDF is nonzero on the solid line in
the shaded region.
Sec 5-1.3 Conditional Probability Distributions 20
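The closed form generalizes: for this conditional density, P(Y > a | X = x) = e^(0.002(x − a)) for a > x. A one-line check (the function name is our own):

```python
import math

# P(Y > a | X = x) for the server example: integrate
# f_{Y|x}(y) = 0.002·e^(0.002(x - y)) from a to infinity.
def p_y_exceeds(a, x):
    return math.exp(0.002 * (x - a))

print(round(p_y_exceeds(2000, 1500), 3))  # → 0.368
```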
Example 5-7: Conditional Discrete PMFs
Conditional discrete PMFs can be shown as tables.
Joint and marginal probabilities (Example 5-1):

  y = number of times    x = number of bars of signal strength
  city name is stated       1       2       3      f(y) =
          1               0.01    0.02    0.25      0.28
          2               0.02    0.03    0.20      0.25
          3               0.02    0.10    0.05      0.17
          4               0.15    0.10    0.05      0.30
               f(x) =     0.20    0.25    0.55

  f(x|y):      x = 1    x = 2    x = 3    Sum of f(x|y) =
    y = 1      0.036    0.071    0.893        1.000
    y = 2      0.080    0.120    0.800        1.000
    y = 3      0.118    0.588    0.294        1.000
    y = 4      0.500    0.333    0.167        1.000

  f(y|x):      x = 1    x = 2    x = 3
    y = 1      0.050    0.080    0.455
    y = 2      0.100    0.120    0.364
    y = 3      0.100    0.400    0.091
    y = 4      0.750    0.400    0.091
    Sum =      1.000    1.000    1.000

Sec 5-1.3 Conditional Probability Distributions 21


Mean & Variance of Conditional Random Variables

• The conditional mean of Y given X = x, denoted as E(Y|x) or μ_{Y|x}, is:

    E(Y|x) = ∫_y y · f_{Y|x}(y) dy        (5-6)

• The conditional variance of Y given X = x, denoted as V(Y|x) or σ²_{Y|x}, is:

    V(Y|x) = ∫_y (y − μ_{Y|x})² f_{Y|x}(y) dy = ∫_y y² f_{Y|x}(y) dy − μ²_{Y|x}

Sec 5-1.3 Conditional Probability Distributions 22


Example 5-8: Conditional Mean & Variance
From Examples 5-2 and 5-6, what is the conditional mean of Y given
that x = 1500? Integrate by parts.

E(Y | X = 1500) = ∫_1500^∞ y · 0.002 e^(0.002·1500 - 0.002y) dy
    = 0.002 e³ ∫_1500^∞ y e^(-0.002y) dy
    = 0.002 e³ [ (−y e^(-0.002y) / 0.002)|_1500^∞ + (1/0.002) ∫_1500^∞ e^(-0.002y) dy ]
    = 0.002 e³ [ 1500 e^(-3) / 0.002 + e^(-3) / 0.002² ]
    = e³ e^(-3) (1500 + 500) = 2000
If the connect time is 1500 ms, then the expected time to be authorized is 2000 ms.
Sec 5-1.3 Conditional Probability Distributions 23
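The integration-by-parts result E(Y | X = 1500) = 2000 can be confirmed numerically; a midpoint-rule sketch (the grid choices are our own):

```python
import math

# E(Y | X = 1500) = ∫ y · f_{Y|1500}(y) dy with
# f_{Y|1500}(y) = 0.002·e^(0.002(1500 - y)) for y > 1500.
h = 1.0
mean = 0.0
for i in range(20000):  # tail beyond ~21500 msec is negligible
    y = 1500 + (i + 0.5) * h
    mean += y * 0.002 * math.exp(0.002 * (1500 - y)) * h
print(round(mean))  # → 2000
```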
Example 5-9
For the discrete random variables in Example 5-1, what is the
conditional mean of Y given X = 1?

  y = number of times    x = number of bars of signal strength
  city name is stated       1       2       3      f(y) =
          1               0.01    0.02    0.25      0.28
          2               0.02    0.03    0.20      0.25
          3               0.02    0.10    0.05      0.17
          4               0.15    0.10    0.05      0.30
               f(x) =     0.20    0.25    0.55

  f(y|x):      x = 1    x = 2    x = 3    y·f(y|x=1)    y²·f(y|x=1)
    y = 1      0.050    0.080    0.455       0.05          0.05
    y = 2      0.100    0.120    0.364       0.20          0.40
    y = 3      0.100    0.400    0.091       0.30          0.90
    y = 4      0.750    0.400    0.091       3.00         12.00
    Sum =      1.000    1.000    1.000       3.55         13.35

E(Y | X = 1) = 3.55    V(Y | X = 1) = 13.35 − 3.55² = 13.35 − 12.6025 = 0.7475

The mean number of attempts given one bar is 3.55, with a variance of 0.7475.
Sec 5-1.3 Conditional Probability Distributions 24
Joint Random Variable Independence
• Random variable independence means that
knowledge of the values of X does not change
any of the probabilities associated with the
values of Y.
• X and Y vary independently.
• Dependence implies that the values of X are
influenced by the values of Y.
• Do you think that a person’s height and weight
are independent?

Sec 5-1.4 Independence 25


Exercise 5-10: Independent Random Variables
In a plastic molding operation, each part is classified as to whether it
conforms to color and length specifications:

  X = 1 if the part conforms to color specs, 0 otherwise
  Y = 1 if the part conforms to length specs, 0 otherwise

Figure 5-10(a) shows the marginal & joint probabilities, fXY(x, y) = fX(x) · fY(y).
Figure 5-10(b) shows the conditional probabilities, fY|x(y) = fY(y).

Sec 5-1.4 Independence 26


Properties of Independence
For random variables X and Y, if any one of the following properties is
true, the others are also true, and X and Y are independent.

(1) f_XY(x, y) = f_X(x) · f_Y(y) for all x and y
(2) f_{Y|x}(y) = f_Y(y) for all x and y with f_X(x) > 0
(3) f_{X|y}(x) = f_X(x) for all x and y with f_Y(y) > 0
(4) P(X ∈ A, Y ∈ B) = P(X ∈ A) · P(Y ∈ B) for any sets A and B
    in the range of X and Y, respectively.        (5-7)

Sec 5-1.4 Independence 27


Rectangular Range for (X, Y)
• A rectangular range for X and Y is a necessary,
but not sufficient, condition for the
independence of the variables.
• If the range of X and Y is not rectangular, then
the range of one variable is limited by the
value of the other variable.
• If the range of X and Y is rectangular, then one
of the properties of (5-7) must be
demonstrated to prove independence.

Sec 5-1.4 Independence 28


Example 5-11: Independent Random Variables
• Suppose that Example 5-2 is modified so that the joint PDF is:

  f_XY(x, y) = 2 × 10^-6 e^(-0.001x - 0.002y)  for x ≥ 0 and y ≥ 0

• Are X and Y independent? Does the product of the marginal PDFs equal
  the joint PDF? Yes, by inspection.

  f_X(x) = ∫_0^∞ 2 × 10^-6 e^(-0.001x - 0.002y) dy = 0.001 e^(-0.001x)  for x > 0
  f_Y(y) = ∫_0^∞ 2 × 10^-6 e^(-0.001x - 0.002y) dx = 0.002 e^(-0.002y)  for y > 0

• Find this probability:

  P(X > 1000, Y < 1000) = P(X > 1000) · P(Y < 1000)
                        = e^(-1) (1 − e^(-2)) = 0.318
Sec 5-1.4 Independence 29
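Because the modified joint PDF factors, probabilities factor too; a quick Python check of the product above:

```python
import math

# Marginals from Example 5-11: X ~ exponential(0.001), Y ~ exponential(0.002).
p_x = math.exp(-0.001 * 1000)      # P(X > 1000) = e^-1
p_y = 1 - math.exp(-0.002 * 1000)  # P(Y < 1000) = 1 - e^-2
print(round(p_x * p_y, 3))  # → 0.318
```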
Example 5-12: Machined Dimensions
Let the random variables X and Y denote the lengths of two dimensions
of a machined part. Assume that X and Y are independent and normally
distributed:

  Normal random variables:    X        Y
  Mean                       10.5      3.2
  Variance                   0.0025    0.0036

Find the desired probability:

P(10.4 < X < 10.6, 3.15 < Y < 3.25)
    = P(10.4 < X < 10.6) · P(3.15 < Y < 3.25)
    = P((10.4 − 10.5)/0.05 < Z < (10.6 − 10.5)/0.05)
      · P((3.15 − 3.2)/0.06 < Z < (3.25 − 3.2)/0.06)
    = P(−2 < Z < 2) · P(−0.833 < Z < 0.833) = 0.568

Sec 5-1.4 Independence 30
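The two normal interval probabilities can be computed with the standard normal CDF; a sketch using `math.erf` (the `phi` helper is our own):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# P(10.4 < X < 10.6) with X ~ N(10.5, 0.05^2), and
# P(3.15 < Y < 3.25) with Y ~ N(3.2, 0.06^2); independence multiplies them.
p_x = phi((10.6 - 10.5) / 0.05) - phi((10.4 - 10.5) / 0.05)
p_y = phi((3.25 - 3.2) / 0.06) - phi((3.15 - 3.2) / 0.06)
print(round(p_x * p_y, 3))  # → 0.568
```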


Example 5-13: More Than Two Random Variables

• Many dimensions of a machined part are


routinely measured during production. Let
the random variables X1, X2, X3 and X4 denote
the lengths of four dimensions of a part.
• What we have learned about joint, marginal
and conditional PDFs in two variables extends
to many (p) random variables.

Sec 5-1.5 More Than Two Random Variables 31


Joint Probability Density Function Redefined
The joint probability density function for the continuous random
variables X1, X2, X3, ..., Xp, denoted as f_{X1 X2 ... Xp}(x1, x2, ..., xp),
satisfies the following properties:

(1) f_{X1 X2 ... Xp}(x1, x2, ..., xp) ≥ 0

(2) ∫ ∫ ... ∫ f_{X1 X2 ... Xp}(x1, x2, ..., xp) dx1 dx2 ... dxp = 1

(3) For any region B of p-dimensional space,

    P((X1, X2, ..., Xp) ∈ B) = ∫ ... ∫_B f_{X1 X2 ... Xp}(x1, x2, ..., xp) dx1 dx2 ... dxp    (5-8)

Sec 5-1.5 More Than Two Random Variables 32


Example 5-14: Component Lifetimes
In an electronic assembly, let X1, X2, X3, X4 denote the lifetimes of
four components in hours. The joint PDF is:

  f_{X1 X2 X3 X4}(x1, x2, x3, x4) = 9 × 10^-12 e^(-0.001x1 - 0.002x2 - 0.0015x3 - 0.003x4)  for xi ≥ 0

What is the probability that the device operates more than 1000 hours?
The joint PDF is a product of exponential PDFs, so

  P(X1 > 1000, X2 > 1000, X3 > 1000, X4 > 1000) = e^(-1-2-1.5-3) = e^(-7.5) = 0.00055

Sec 5-1.5 More Than Two Random Variables 33
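Because the joint PDF factors into exponential marginals, the four survival probabilities simply multiply; a short check (the `rates` tuple is our own naming; `math.prod` needs Python 3.8+):

```python
import math

# P(X_i > 1000) = e^(-rate_i · 1000) for each independent exponential lifetime.
rates = (0.001, 0.002, 0.0015, 0.003)
p = math.prod(math.exp(-r * 1000) for r in rates)
print(round(p, 5))  # → 0.00055
```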


Example 5-15: Probability as a Ratio of Volumes

• Suppose the joint PDF of the continuous random variables X and Y is
  constant over the region x² + y² ≤ 4. The graph is a round cake with
  radius 2 and height 1/(4π).
• A cake round of radius 1 is cut from the center of the cake, the
  region x² + y² ≤ 1.
• What is the probability that a randomly selected bit of cake came
  from the center round?
• The volume of the whole cake is π · 2² · 1/(4π) = 1. The volume of the
  center round is π · 1² · 1/(4π) = 1/4. The desired probability is 1/4.
Sec 5-1.5 More Than Two Random Variables 34
Marginal Probability Density Function
If the joint probability density function of continuous random variables
X1, X2, ..., Xp is f_{X1 X2 ... Xp}(x1, x2, ..., xp), the marginal
probability density function of Xi is

  f_{Xi}(xi) = ∫ ... ∫ f_{X1 X2 ... Xp}(x1, x2, ..., xp) dx1 dx2 ... dx(i−1) dx(i+1) ... dxp    (5-9)

where the integral is over all points in the range of X1, X2, ..., Xp
for which Xi = xi. (Do not integrate out xi.)

Sec 5-1.5 More Than Two Random Variables 35


Mean & Variance of a Joint PDF
The mean and variance of Xi can be determined from either the marginal
PDF of Xi, or directly from the joint PDF, as follows:

  E(Xi) = ∫ ... ∫ xi · f_{X1 X2 ... Xp}(x1, x2, ..., xp) dx1 dx2 ... dxp

and                                                                      (5-10)

  V(Xi) = ∫ ... ∫ (xi − μ_{Xi})² · f_{X1 X2 ... Xp}(x1, x2, ..., xp) dx1 dx2 ... dxp

Sec 5-1.5 More Than Two Random Variables 36


Example 5-16
• There are 10 points in this discrete joint PMF.
• Note that x1 + x2 + x3 = 3.
• List the marginal PMF of X2:

P(X2 = 0) = P_{X1X2X3}(0,0,3) + P_{X1X2X3}(1,0,2) + P_{X1X2X3}(2,0,1) + P_{X1X2X3}(3,0,0)
P(X2 = 1) = P_{X1X2X3}(0,1,2) + P_{X1X2X3}(1,1,1) + P_{X1X2X3}(2,1,0)
P(X2 = 2) = P_{X1X2X3}(0,2,1) + P_{X1X2X3}(1,2,0)
P(X2 = 3) = P_{X1X2X3}(0,3,0)

Note the index pattern.

Sec 5-1.5 More Than Two Random Variables 37


Reduced Dimensionality
If the joint probability density function of continuous random variables
X1, X2, ..., Xp is f_{X1 X2 ... Xp}(x1, x2, ..., xp), then the probability
density function of X1, X2, ..., Xk, k < p, is

  f_{X1 X2 ... Xk}(x1, x2, ..., xk) = ∫ ... ∫ f_{X1 X2 ... Xp}(x1, x2, ..., xp) dx(k+1) dx(k+2) ... dxp    (5-11)

where the integral is over all points in the range of X1, X2, ..., Xp
for which Xi = xi for i = 1 through k. (Integrate out the p − k
remaining variables.)

Sec 5-1.5 More Than Two Random Variables 38


Conditional Probability Distributions
• Conditional probability distributions can be
developed for multiple random variables by
extension of the ideas used for two random
variables.
• Suppose p = 5 and we wish to find the distribution
conditional on X4 and X5.
  f_{X1 X2 X3 | x4 x5}(x1, x2, x3) = f_{X1 X2 X3 X4 X5}(x1, x2, x3, x4, x5) / f_{X4 X5}(x4, x5)

  for f_{X4 X5}(x4, x5) > 0.

Sec 5-1.5 More Than Two Random Variables 39


Independence with Multiple Variables
The concept of independence can be extended to
multiple variables.
Random variables X1, X2, ..., Xp are independent if and only if

  f_{X1 X2 ... Xp}(x1, x2, ..., xp) = f_{X1}(x1) · f_{X2}(x2) · ... · f_{Xp}(xp)  for all x1, x2, ..., xp    (5-12)

(The joint PDF equals the product of all the marginal PDFs.)

Sec 5-1.5 More Than Two Random Variables 40


Example 5-17
• In Chapter 3, we showed that a negative
binomial random variable with parameters p
and r can be represented as a sum of r
geometric random variables X1, X2,…, Xr , each
with parameter p.
• Because the binomial trials are independent,
X1, X2,…, Xr are independent random variables.

Sec 5-1.5 More Than Two Random Variables 41


Example 5-18: Layer Thickness
Suppose X1, X2, X3 represent the thickness in μm of a substrate, an
active layer, and a coating layer of a chemical product. Assume that
these variables are independent and normally distributed with
parameters and specified limits as tabled.

  Normal random variables:    X1        X2       X3
  Mean (μ)                  10,000     1,000     80
  Std dev (σ)                  250        20      4
  Lower limit                9,200       950     75
  Upper limit               10,800     1,050     85
  P(in limits)             0.99863   0.98758   0.78870

What proportion of the three-layer product meets all specifications?
Answer: P(all in limits) = 0.99863 × 0.98758 × 0.78870 = 0.7778.

Which one of the three thicknesses has the least probability of meeting
specs? Answer: Layer 3 has the least probability.
Sec 5-1.5 More Than Two Random Variables 42
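The per-layer probabilities and their product can be reproduced with the standard normal CDF; a sketch (the `layers` list layout is our own):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

layers = [  # (mean, std dev, lower limit, upper limit), from the table
    (10000, 250, 9200, 10800),
    (1000, 20, 950, 1050),
    (80, 4, 75, 85),
]
p_all = 1.0
for mu, sd, lo, hi in layers:
    p_layer = phi((hi - mu) / sd) - phi((lo - mu) / sd)
    p_all *= p_layer
print(round(p_all, 4))  # → 0.7778
```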
Covariance
• Covariance is a measure of the relationship between
two random variables.
• First, we need to describe the expected value of a function of two
  random variables. Let h(X, Y) denote the function of interest.

  E[h(X, Y)] = ∑_x ∑_y h(x, y) · f_XY(x, y)      for X, Y discrete
                                                                    (5-13)
  E[h(X, Y)] = ∫ ∫ h(x, y) · f_XY(x, y) dx dy    for X, Y continuous
Sec 5-2 Covariance & Correlation 43


Example 5-19: E(Function of 2 Random Variables)

Task: Calculate E[(X − μX)(Y − μY)] = covariance

  x     y    f(x, y)   x − μX   y − μY   (x − μX)(y − μY) f(x, y)
  1     1      0.1      −1.4     −1.0            0.14
  1     2      0.2      −1.4      0.0            0.00
  3     1      0.2       0.6     −1.0           −0.12
  3     2      0.2       0.6      0.0            0.00
  3     3      0.3       0.6      1.0            0.18
                                  covariance =   0.20

  Marginals: f_X(1) = 0.3, f_X(3) = 0.7
             f_Y(1) = 0.3, f_Y(2) = 0.4, f_Y(3) = 0.3
  Means:     μX = 2.4, μY = 2.0

Figure 5-12 Discrete joint distribution of X and Y.
Sec 5-2 Covariance & Correlation 44
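The covariance table reduces to a few lines of Python (the point list is from Figure 5-12; names are ours):

```python
# (x, y, f(x, y)) triples for the discrete distribution of Example 5-19.
pts = [(1, 1, 0.1), (1, 2, 0.2), (3, 1, 0.2), (3, 2, 0.2), (3, 3, 0.3)]
mu_x = sum(x * p for x, _, p in pts)  # 2.4
mu_y = sum(y * p for _, y, p in pts)  # 2.0
cov = sum((x - mu_x) * (y - mu_y) * p for x, y, p in pts)
print(round(cov, 2))  # → 0.2
```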
Covariance Defined
The covariance between the random variables X and Y, denoted as
cov(X, Y) or σ_XY, is

  σ_XY = E[(X − μX)(Y − μY)] = E(XY) − μX · μY        (5-14)

The units of σ_XY are units of X times units of Y. For example, if the
units of X are feet and the units of Y are pounds, the units of the
covariance are foot-pounds. Unlike a variance, the covariance may be
negative: −∞ < σ_XY < +∞.
Sec 5-2 Covariance & Correlation 45
Covariance and Scatter Patterns

Figure 5-13 Joint probability distributions and the sign of cov(X, Y).
Note that covariance is a measure of linear relationship. Variables
with non-zero covariance are correlated.
Sec 5-2 Covariance & Correlation 46
Example 5-20: Intuitive Covariance
y = number of x = number of bars
times city of signal strength
name is stated 1 2 3
1 0.01 0.02 0.25
2 0.02 0.03 0.20
3 0.02 0.10 0.05
4 0.15 0.10 0.05

The probability distribution of Example 5-1 is shown.

By inspection, note that the larger probabilities occur as X


and Y move in opposite directions. This indicates a negative
covariance.

Sec 5-2 Covariance & Correlation 47


Correlation (ρ = rho)
The correlation between random variables X and Y, denoted as ρ_XY, is

  ρ_XY = cov(X, Y) / √(V(X) · V(Y)) = σ_XY / (σ_X · σ_Y)        (5-15)

Since σ_X > 0 and σ_Y > 0, ρ_XY and cov(X, Y) have the same sign.
We say that ρ_XY is normalized, so

  −1 ≤ ρ_XY ≤ +1        (5-16)

Note that ρ_XY is dimensionless.
Variables with non-zero correlation are correlated.
Sec 5-2 Covariance & Correlation 48
Example 5-21: Covariance & Correlation
Determine the covariance and correlation.

  x     y    f(x, y)   x − μX   y − μY   (x − μX)(y − μY) f(x, y)
  0     0      0.2      −1.8     −1.8            0.648
  1     1      0.1      −0.8     −0.8            0.064
  1     2      0.1      −0.8      0.2           −0.016
  2     1      0.1       0.2     −0.8           −0.016
  2     2      0.1       0.2      0.2            0.004
  3     3      0.4       1.2      1.2            0.576
                                  covariance =   1.260
                                  correlation =  0.926

  Marginals: f_X = f_Y = {0: 0.2, 1: 0.2, 2: 0.2, 3: 0.4}
  Means:     μX = μY = 1.8
  Std devs:  σX = σY = 1.1662

Note the strong positive correlation.

Figure 5-14 Discrete joint distribution, f(x, y).

Sec 5-2 Covariance & Correlation 49


Example 5-21
Calculate the correlation.

  x     y    f(x, y)   x·y·f(x, y)
  1     7      0.2         1.4
  2     9      0.6        10.8
  3    11      0.2         6.6
                E(XY) =   18.8

  Marginals: f_X = {1: 0.2, 2: 0.6, 3: 0.2}
             f_Y = {7: 0.2, 9: 0.6, 11: 0.2}
  Means:     μX = 2, μY = 9.0
  Std devs:  σX = 0.6325, σY = 1.2649

Using Equations 5-14 & 5-15:
  cov(X, Y) = E(XY) − μX·μY = 18.8 − 18.0 = 0.80
  ρ_XY = 0.80 / (0.6325 × 1.2649) = 1.00

Figure 5-15 Discrete joint distribution. The steepness of the line
connecting the points is immaterial; any exact linear relationship
gives ρ_XY = ±1.

Sec 5-2 Covariance & Correlation 50
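Here Y = 2X + 5 at every point, so the correlation must come out as 1; a check following Equations 5-14 & 5-15 (names are ours):

```python
import math

# (x, y, f(x, y)) triples; note y = 2x + 5 for every point.
pts = [(1, 7, 0.2), (2, 9, 0.6), (3, 11, 0.2)]
mu_x = sum(x * p for x, _, p in pts)   # 2.0
mu_y = sum(y * p for _, y, p in pts)   # 9.0
cov = sum((x - mu_x) * (y - mu_y) * p for x, y, p in pts)
sd_x = math.sqrt(sum((x - mu_x) ** 2 * p for x, _, p in pts))
sd_y = math.sqrt(sum((y - mu_y) ** 2 * p for _, y, p in pts))
print(round(cov / (sd_x * sd_y), 3))  # → 1.0
```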


Independence Implies ρ = 0
• If X and Y are independent random variables,
σXY = ρXY = 0 (5-17)

• ρXY = 0 is necessary, but not a sufficient


condition for independence.
– Figure 5-13d (x, y plots as a circle) provides an
example.
– Figure 5-13b (x, y plots as a square) indicates
independence, but a non-rectangular pattern would
indicate dependence.
Sec 5-2 Covariance & Correlation 51
Example 5-23: Independence Implies Zero Covariance
Let f_XY(x, y) = xy/16 for 0 < x < 2 and 0 < y < 4.
Show that σ_XY = E(XY) − E(X)·E(Y) = 0.

E(X) = (1/16) ∫_0^4 ∫_0^2 x²y dx dy = (1/16)(8/3)(8) = 4/3

E(Y) = (1/16) ∫_0^4 ∫_0^2 xy² dx dy = (1/16)(2)(64/3) = 8/3

E(XY) = (1/16) ∫_0^4 ∫_0^2 x²y² dx dy = (1/16)(8/3)(64/3) = 32/9

σ_XY = E(XY) − E(X)·E(Y) = 32/9 − (4/3)(8/3) = 32/9 − 32/9 = 0

Figure 5-15 A planar joint distribution.
Sec 5-2 Covariance & Correlation 52


Common Joint Distributions
• There are two common joint distributions
– Multinomial probability distribution (discrete), an
extension of the binomial distribution
– Bivariate normal probability distribution
(continuous), a two-variable extension of the
normal distribution. Although they exist, we do
not deal with more than two random variables.
• There are many lesser known and custom joint
probability distributions as you have already
seen.
Sec 5-3 Common Joint Distributions 53
Multinomial Probability Distribution
• Suppose a random experiment consists of a series of n trials. Assume that:
  1) The outcome of each trial can be classified into one of k classes.
  2) The probability of a trial resulting in one of the k outcomes is
     constant, denoted as p1, p2, ..., pk.
  3) The trials are independent.
• The random variables X1, X2, ..., Xk denote the number of outcomes in
  each class and have a multinomial distribution with probability mass
  function:

  P(X1 = x1, X2 = x2, ..., Xk = xk) = [n! / (x1! x2! ... xk!)] · p1^x1 · p2^x2 · ... · pk^xk    (5-18)

  for x1 + x2 + ... + xk = n and p1 + p2 + ... + pk = 1.

Sec 5-3.1 Multinomial Probability Distribution 54


© John Wiley & Sons, Inc.  Applied Statistics and Probability for Engineers, by Montgomery and Runger.
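Equation 5-18 is easy to transcribe directly. Below is a minimal sketch using only the standard library (the function name `multinomial_pmf` is ours):

```python
# Multinomial PMF, Equation 5-18, written from the formula above.
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """P(X1 = x1, ..., Xk = xk) for a multinomial with n = sum(counts)."""
    if len(counts) != len(probs) or abs(sum(probs) - 1.0) > 1e-9:
        raise ValueError("need one probability per class, summing to 1")
    n = sum(counts)
    coeff = factorial(n)
    for x in counts:
        coeff //= factorial(x)   # multinomial coefficient stays integral
    return coeff * prod(p ** x for p, x in zip(probs, counts))
```

Plugging in the digital-channel numbers from Examples 5-24 and 5-25 reproduces the probabilities 0.0063 and 0.0358 computed on the next two slides.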
Example 5-24: Digital Channel
Of the 20 bits received over a digital channel, 14 are of
excellent (E) quality, 3 are good (G), 2 are fair (F), and 1 is
poor (P), with P(E) = 0.60, P(G) = 0.30, P(F) = 0.08, and
P(P) = 0.02. The sequence received was
EEEEEEEEEEEEEEGGGFFP.

The probability of that particular sequence is
  0.6¹⁴ · 0.3³ · 0.08² · 0.02¹ = 2.708 × 10⁻⁹

However, the number of different orderings of those bits is large:
  20! / (14! 3! 2! 1!) = 2,325,600

The combined result is a multinomial probability:
  P(x1 = 14, x2 = 3, x3 = 2, x4 = 1)
    = [20! / (14! 3! 2! 1!)] · 0.6¹⁴ · 0.3³ · 0.08² · 0.02¹ ≈ 0.0063

Sec 5-3.1 Multinomial Probability Distribution 55
Example 5-25: Digital Channel
Refer again to the prior Example 5-24.
What is the probability that 12 bits are E, 6 bits
are G, 2 are F, and 0 are P?

  P(x1 = 12, x2 = 6, x3 = 2, x4 = 0)
    = [20! / (12! 6! 2! 0!)] · 0.6¹² · 0.3⁶ · 0.08² · 0.02⁰ ≈ 0.0358

Using Excel
  0.03582 = (FACT(20)/(FACT(12)*FACT(6)*FACT(2))) * 0.6^12*0.3^6*0.08^2

Sec 5-3.1 Multinomial Probability Distribution 56
Multinomial Means and Variances
The marginal distributions of the multinomial
are binomial.

If X1, X2, …, Xk have a multinomial distribution,
the marginal probability distribution of Xi is
binomial with:

  E(Xi) = npi  and  V(Xi) = npi(1 − pi)          (5-19)

Sec 5-3.1 Multinomial Probability Distribution 57
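A quick Monte Carlo experiment makes Equation 5-19 concrete. This sketch (illustrative, not from the text; the variable names and seed are ours) simulates the 20-bit channel of Example 5-24 and compares the sample mean and variance of the "excellent" count with npi and npi(1 − pi):

```python
# Simulation check of Equation 5-19: the marginal count of one
# multinomial class behaves like a binomial(n, p_i) variable.
import random
from statistics import fmean, pvariance

random.seed(7)
n, probs = 20, [0.6, 0.3, 0.08, 0.02]
reps = 20000

excellent_counts = []
for _ in range(reps):
    draws = random.choices(range(4), weights=probs, k=n)
    excellent_counts.append(draws.count(0))   # class 0 = "excellent"

mean_hat = fmean(excellent_counts)   # theory: n * p1 = 12
var_hat = pvariance(excellent_counts)  # theory: n * p1 * (1 - p1) = 4.8
```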
Example 5-26: Marginal Probability Distributions
Refer again to the prior Example 5-25.
The classes are now {G}, {F}, and {E, P}.
The multinomial becomes:

  P(X2 = x2, X3 = x3)
    = [n! / (x2! x3! (n − x2 − x3)!)] · p2^x2 · p3^x3 · (1 − p2 − p3)^(n − x2 − x3)

  for x2 = 0 to n − x3 and x3 = 0 to n − x2.

Sec 5-3.1 Multinomial Probability Distribution 58
Example 5-27: Bivariate Normal Distribution
Earlier, we discussed the two dimensions of an
injection-molded part as two random variables
(X and Y). Let each dimension be modeled as a
normal random variable. Since the dimensions
are from the same part, they are typically not
independent and hence are correlated.

Now we have five parameters to describe the
bivariate normal distribution:
  μX, σX, μY, σY, and ρXY

Sec 5-3.2 Bivariate Normal Distribution 59
Bivariate Normal Distribution Defined

  fXY(x, y; μX, σX, μY, σY, ρ) = [1 / (2π σX σY √(1 − ρ²))] · e^(−u)

where

  u = [1 / (2(1 − ρ²))] · [ (x − μX)²/σX²
        − 2ρ(x − μX)(y − μY)/(σX σY) + (y − μY)²/σY² ]

for −∞ < x < ∞ and −∞ < y < ∞.

Parameter limits:  σX > 0, σY > 0, −∞ < μX < ∞, −∞ < μY < ∞, −1 < ρ < 1

Sec 5-3.2 Bivariate Normal Distribution 60
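The density above transcribes directly into code. A minimal sketch (the function name is ours): when ρ = 0 the density factors into the product of two univariate normal densities, which gives a handy sanity check.

```python
# The bivariate normal density, transcribed from the definition above.
from math import pi, exp, sqrt

def bivariate_normal_pdf(x, y, mu_x, sig_x, mu_y, sig_y, rho):
    zx = (x - mu_x) / sig_x
    zy = (y - mu_y) / sig_y
    u = (zx * zx - 2 * rho * zx * zy + zy * zy) / (2 * (1 - rho ** 2))
    return exp(-u) / (2 * pi * sig_x * sig_y * sqrt(1 - rho ** 2))
```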
Role of Correlation

Figure 5-17 These illustrations show the shapes and contour lines of two
bivariate normal distributions. The left distribution has independent X, Y
random variables (ρ = 0). The right distribution has dependent X, Y random
variables with positive correlation (ρ > 0, actually ρ = 0.9). The center of the
contour ellipses is the point (μX, μY).

Sec 5-3.2 Bivariate Normal Distribution 61
Example 5-28: Standard Bivariate Normal Distribution
Figure 5-18 This is a standard bivariate normal because its means
are zero, its standard deviations are one, and its correlation is zero
since X and Y are independent. The density function is:

  fXY(x, y) = [1 / (2π)] · e^(−0.5(x² + y²))

Sec 5-3.2 Bivariate Normal Distribution 62
Marginal Distributions of the Bivariate Normal
If X and Y have a bivariate normal distribution with joint
probability density function fXY(x, y; σX, σY, μX, μY, ρ), the
marginal probability distributions of X and Y are normal with
means μX and μY and standard deviations σX and σY,
respectively.                                      (5-21)

Figure 5-19 The marginal probability density functions of a bivariate
normal distribution are simply projections of the joint onto each of
the axis planes. Note that the correlation (ρ) has no effect on the
marginal distributions.

Sec 5-3.2 Bivariate Normal Distribution 63
Conditional Distributions of the Joint Normal
If X and Y have a bivariate normal distribution with
joint probability density fXY(x, y; σX, σY, μX, μY, ρ), the
conditional probability distribution of Y given X = x is
normal with mean and variance as follows:

  μ(Y|x) = μY + ρ (σY / σX)(x − μX)

  σ²(Y|x) = σY² (1 − ρ²)

Sec 5-3.2 Bivariate Normal Distribution 64
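The two conditional formulas are a one-liner each. A sketch (the function name is ours), illustrated with the injection-molded-part parameters that appear in Example 5-29 (μX = 3, σX = 0.04, μY = 7.7, σY = 0.08, ρ = 0.8):

```python
# Conditional mean and variance of Y given X = x for a bivariate
# normal, straight from the two formulas above.
def conditional_y_given_x(x, mu_x, sig_x, mu_y, sig_y, rho):
    mean = mu_y + rho * (sig_y / sig_x) * (x - mu_x)
    var = sig_y ** 2 * (1 - rho ** 2)
    return mean, var

# Observing a length one sigma above its mean shifts the expected
# width up, and shrinks its variance by the factor (1 - rho**2):
m, v = conditional_y_given_x(3.04, 3, 0.04, 7.7, 0.08, 0.8)
```

Note that the conditional variance does not depend on the observed x; only the conditional mean moves.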
Correlation of Bivariate Normal Random Variables
If X and Y have a bivariate normal distribution
with joint probability density function
fXY(x, y; σX, σY, μX, μY, ρ), the correlation between X
and Y is ρ.                                        (5-22)

Sec 5-3.2 Bivariate Normal Distribution 65
Bivariate Normal Correlation & Independence
• In general, zero correlation does not imply
  independence.
• But in the special case that X and Y have a
  bivariate normal distribution, if ρ = 0, then X
  and Y are independent.                           (5-23)

Sec 5-3.2 Bivariate Normal Distribution 66
Example 5-29: Injection-Molded Part
The injection-molded part dimensions have the parameters
tabled below and are graphed as shown.
The probability that X and Y are both within limits is the volume
under the PDF between the limit values.
This volume is determined by numerical integration – beyond the
scope of this text.

  Bivariate        X       Y
  Mean            3.00    7.70
  Std Dev         0.04    0.08
  Correlation        0.8
  Upper Limit     3.05    7.80
  Lower Limit     2.95    7.60

Figure 5-3

Sec 5-3.2 Bivariate Normal Distribution 67
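Although the text leaves this volume to numerical integration, a rough sketch of that integration is easy to write (illustrative; the parameter constants come from the table above, while the helper names and grid size are ours):

```python
# Midpoint-rule estimate of P(2.95 < X < 3.05, 7.60 < Y < 7.80)
# for the Example 5-29 bivariate normal.
from math import pi, exp, sqrt

MU_X, SIG_X = 3.0, 0.04
MU_Y, SIG_Y = 7.7, 0.08
RHO = 0.8

def pdf(x, y):
    zx, zy = (x - MU_X) / SIG_X, (y - MU_Y) / SIG_Y
    u = (zx * zx - 2 * RHO * zx * zy + zy * zy) / (2 * (1 - RHO ** 2))
    return exp(-u) / (2 * pi * SIG_X * SIG_Y * sqrt(1 - RHO ** 2))

def prob_in_limits(x_lo, x_hi, y_lo, y_hi, n=300):
    """Sum pdf over an n-by-n midpoint grid covering the rectangle."""
    dx, dy = (x_hi - x_lo) / n, (y_hi - y_lo) / n
    return sum(pdf(x_lo + (i + 0.5) * dx, y_lo + (j + 0.5) * dy) * dx * dy
               for i in range(n) for j in range(n))

p = prob_in_limits(2.95, 3.05, 7.60, 7.80)
```

With these limits (±1.25 standard deviations on each axis, ρ = 0.8) the volume comes out near 0.70, noticeably below the product of the two marginal probabilities because of the correlation. If SciPy is available, `scipy.stats.multivariate_normal` offers a higher-accuracy route to the same kind of computation.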
Linear Functions of Random Variables
• A function of random variables is itself a
  random variable.
• A function of random variables can be formed
  by either linear or nonlinear relationships. We
  limit our discussion here to linear functions.
• Given random variables X1, X2, …, Xp and
  constants c1, c2, …, cp,

    Y = c1X1 + c2X2 + … + cpXp                     (5-24)

  is a linear combination of X1, X2, …, Xp.

Sec 5-4 Linear Functions of Random Variables 68
Mean & Variance of a Linear Function
Let Y = c1X1 + c2X2 + … + cpXp and use Equation 5-10:

  E(Y) = c1E(X1) + c2E(X2) + … + cpE(Xp)           (5-25)

  V(Y) = c1²V(X1) + c2²V(X2) + … + cp²V(Xp)
           + 2 Σ Σ_{i<j} ci cj cov(Xi, Xj)         (5-26)

If X1, X2, …, Xp are independent, then cov(Xi, Xj) = 0, and

  V(Y) = c1²V(X1) + c2²V(X2) + … + cp²V(Xp)        (5-27)

Sec 5-4 Linear Functions of Random Variables 69
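Equations 5-25 through 5-27 can be packaged as a small helper (a sketch; the function name and the covariance-dictionary convention are ours):

```python
# Mean and variance of Y = sum(c[i] * X[i]), Equations 5-25 and 5-26.
def linear_combo_moments(c, means, variances, cov=None):
    """cov[(i, j)] holds cov(Xi, Xj) for i < j; omit it when the Xi
    are independent (Equation 5-27)."""
    mean = sum(ci * mi for ci, mi in zip(c, means))
    var = sum(ci ** 2 * vi for ci, vi in zip(c, variances))
    if cov:
        for (i, j), cij in cov.items():
            var += 2 * c[i] * c[j] * cij
    return mean, var
```

For instance, the perimeter Y = 2X1 + 2X2 of Example 5-32 (means 2 and 5 cm, standard deviations 0.1 and 0.2 cm, independent) gives E(Y) = 14 and V(Y) = 0.20.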
Example 5-30: Negative Binomial Distribution
Let Xi be a geometric random variable with
parameter p, so that μ = 1/p and σ² = (1 − p)/p².
Let Y = X1 + X2 + … + Xr, a linear combination of r
independent geometric random variables.
Then Y is a negative binomial random variable with
μ = r/p and σ² = r(1 − p)/p² by Equations 5-25 and
5-27.
Thus, a negative binomial random variable is a sum
of r identically distributed and independent
geometric random variables.
Sec 5-4 Linear Functions of Random Variables 70
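A quick simulation illustrates the claim (illustrative sketch; the seed, sample size, and function name are ours): summing r independent geometric variables produces a sample mean near r/p and a sample variance near r(1 − p)/p².

```python
# Simulate Y = X1 + ... + Xr for independent geometric Xi.
import random
from statistics import fmean, pvariance

random.seed(42)
r, p, reps = 5, 0.3, 20000

def geometric(p):
    """Number of Bernoulli(p) trials up to and including the first success."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

ys = [sum(geometric(p) for _ in range(r)) for _ in range(reps)]
mean_hat = fmean(ys)     # theory: r/p = 16.67
var_hat = pvariance(ys)  # theory: r*(1-p)/p**2 = 38.89
```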
Example 5-31: Error Propagation
A semiconductor product consists of three layers.
The variances of the thicknesses of the layers are
25, 40, and 30 nm², respectively. What is the
variance of the thickness of the finished product?

Answer:
  X = X1 + X2 + X3

  V(X) = Σᵢ₌₁³ V(Xi) = 25 + 40 + 30 = 95 nm²

  SD(X) = √95 = 9.747 nm

Sec 5-4 Linear Functions of Random Variables 71
Mean & Variance of an Average

If X̄ = (X1 + X2 + … + Xp) / p and E(Xi) = μ, then

  E(X̄) = pμ / p = μ                               (5-28a)

If the Xi are independent with V(Xi) = σ², then

  V(X̄) = pσ² / p² = σ² / p                        (5-28b)

Sec 5-4 Linear Functions of Random Variables 72
Reproductive Property of the Normal Distribution

If X1, X2, …, Xp are independent, normal random variables
with E(Xi) = μi and V(Xi) = σi², for i = 1, 2, …, p,
then

  Y = c1X1 + c2X2 + … + cpXp

is a normal random variable with

  E(Y) = c1μ1 + c2μ2 + … + cpμp

and

  V(Y) = c1²σ1² + c2²σ2² + … + cp²σp²             (5-29)

Sec 5-4 Linear Functions of Random Variables 73
Example 5-32: Linear Function of Independent Normals
Let the random variables X1 and X2 denote the
independent length and width of a rectangular
manufactured part. Their parameters are:

  Parameters of:   X1     X2
  Mean              2      5
  Std Dev         0.1    0.2

What is the probability that the perimeter
exceeds 14.5 cm?

Let Y = 2X1 + 2X2 = perimeter
  E(Y) = 2E(X1) + 2E(X2) = 2(2) + 2(5) = 14 cm
  V(Y) = 2²V(X1) + 2²V(X2) = 4(0.1)² + 4(0.2)² = 0.04 + 0.16 = 0.20
  SD(Y) = √0.20 = 0.4472 cm

  P(Y > 14.5) = 1 − Φ((14.5 − 14)/0.4472) = 1 − Φ(1.1180) = 0.1318

Using Excel
  0.1318 = 1 - NORMDIST(14.5, 14, SQRT(0.2), TRUE)

Sec 5-4 Linear Functions of Random Variables 74
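The same computation can be done with Python's standard library as an alternative to the Excel line above (illustrative; variable names are ours):

```python
# Example 5-32 via statistics.NormalDist: Y = 2*X1 + 2*X2 is normal
# by the reproductive property, so one cdf call finishes the problem.
from statistics import NormalDist
from math import sqrt

mean_y = 2 * 2 + 2 * 5                # E(Y) = 14 cm
var_y = 4 * 0.1 ** 2 + 4 * 0.2 ** 2   # V(Y) = 0.20 cm^2
p_exceeds = 1 - NormalDist(mean_y, sqrt(var_y)).cdf(14.5)
```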
Example 5-33: Beverage Volume
Soft drink cans are filled by an automated filling machine. The
mean fill volume is 12.1 fluid ounces, and the standard
deviation is 0.1 fl oz. Assume that the fill volumes are
independent, normal random variables. What is the probability
that the average volume of 10 cans is less than 12 fl oz?

Let Xi denote the fill volume of the i-th can, and let
  X̄ = Σᵢ₌₁¹⁰ Xi / 10

  E(X̄) = Σᵢ₌₁¹⁰ E(Xi) / 10 = 10(12.1)/10 = 12.1 fl oz

  V(X̄) = (1/10²) Σᵢ₌₁¹⁰ V(Xi) = 10(0.1)²/100 = 0.001 fl oz²

  P(X̄ < 12) = Φ((12 − 12.1)/√0.001) = Φ(−3.16) = 0.00079

Using Excel
  0.000783 = NORMDIST(12, 12.1, SQRT(0.001), TRUE)

Sec 5-4 Linear Functions of Random Variables 75
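The same answer can be seen by brute force (an illustrative simulation; the seed and replication count are ours): averaging 10 independent N(12.1, 0.1) fills many times, the average rarely dips below 12.

```python
# Monte Carlo view of Example 5-33: the sample average of 10 cans
# has standard deviation 0.1/sqrt(10), so 12 fl oz is 3.16 sigmas out.
import random
from statistics import fmean

random.seed(1)
reps = 200000
below = sum(
    fmean(random.gauss(12.1, 0.1) for _ in range(10)) < 12
    for _ in range(reps)
)
p_hat = below / reps   # theory: about 0.00079
```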
General Functions of Random Variables
• A linear combination is not the only way that we
  can form a new random variable (Y) from one or
  more random variables (Xi) that we already
  know. We will focus on single variables, i.e., Y =
  h(X), the transform function.
• The transform function must be monotone:
  – Each value of x produces exactly one value of y.
  – Each value of y translates to only one value of x.
• This methodology produces the probability mass
  or density function of Y from that of X.
Sec 5-5 General Functions of Random Variables 76
General Function of a Discrete Random Variable
Suppose that X is a discrete random variable
with probability distribution fX(x). Let Y = h(X)
define a one-to-one transformation between
the values of X and Y so that the equation y =
h(x) can be solved uniquely for x in terms of y.
Let this solution be x = u(y), the inverse
transform function. Then the probability mass
function of the random variable Y is

  fY(y) = fX[u(y)]                                 (5-30)

Sec 5-5 General Functions of Random Variables 77
Example 5-34: Function of a Discrete Random Variable
Let X be a geometric random variable with PMF:
  fX(x) = p(1 − p)^(x−1) for x = 1, 2, …
Find the probability distribution of Y = X².

Solution:
  – Since X > 0, the transformation is one-to-one.
  – The inverse transform function is x = u(y) = √y.
  – fY(y) = p(1 − p)^(√y − 1) for y = 1, 4, 9, 16, …

Sec 5-5 General Functions of Random Variables 78
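Because the transform only relabels the support points, fY must still sum to 1 over y = 1, 4, 9, … — a property worth checking numerically (an illustrative sketch; the choice p = 0.25 and the function name are ours):

```python
# Sanity check of Example 5-34: the PMF of Y = X**2 sums to 1 over
# the perfect squares, just as the geometric PMF sums to 1 over
# x = 1, 2, 3, ...
p = 0.25

def f_y(y):
    """PMF of Y = X**2; valid only when y is a perfect square >= 1."""
    x = int(round(y ** 0.5))
    return p * (1 - p) ** (x - 1)

total = sum(f_y(x * x) for x in range(1, 200))  # tail beyond x=199 is negligible
```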
General Function of a Continuous Random Variable
Suppose that X is a continuous random variable with
probability distribution fX(x). Let Y = h(X) define a
one-to-one transformation between the values of
X and Y so that the equation y = h(x) can be solved
uniquely for x in terms of y. Let this solution be
x = u(y), the inverse transform function. Then the
probability density function of the random
variable Y is

  fY(y) = fX[u(y)] · |J|                           (5-31)

where J = u′(y) is called the Jacobian of the
transformation and the absolute value is used.
Sec 5-5 General Functions of Random Variables 79
Example 5-35: Function of a Continuous Random Variable
Let X be a continuous random variable with probability
distribution:
  fX(x) = x/8 for 0 ≤ x < 4

Find the probability distribution of Y = h(X) = 2X + 4.

Note that Y has a one-to-one relationship to X.

  x = u(y) = (y − 4)/2, and the Jacobian is J = u′(y) = 1/2

  fY(y) = [(y − 4)/2] / 8 · (1/2) = (y − 4)/32 for 4 ≤ y < 12

Sec 5-5 General Functions of Random Variables 80
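The result of Example 5-35 can be verified numerically (illustrative sketch; the grid size is ours): fY(y) = (y − 4)/32 should integrate to 1 over 4 ≤ y ≤ 12, and its mean should equal E(2X + 4) = 2·E(X) + 4 = 2(8/3) + 4 = 28/3.

```python
# Check that f_Y(y) = (y - 4)/32 on [4, 12] is a valid density with
# the mean implied by Y = 2X + 4 and E(X) = 8/3.
def f_y(y):
    return (y - 4) / 32

n = 100000
dy = 8 / n
ys = [4 + (i + 0.5) * dy for i in range(n)]   # midpoint grid on [4, 12]
area = sum(f_y(y) * dy for y in ys)           # should be ~1
mean_y = sum(y * f_y(y) * dy for y in ys)     # should be ~28/3
```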
Important Terms & Concepts for Chapter 5
Bivariate distribution
Bivariate normal distribution
Conditional mean
Conditional probability density function
Conditional probability mass function
Conditional variance
Contour plots
Correlation
Covariance
Error propagation
General functions of random variables
Independence
Joint probability density function
Joint probability mass function
Linear functions of random variables
Marginal probability distribution
Multinomial distribution
Reproductive property of the normal distribution

Chapter 5 Summary 81