Chapter 7 Page 1 of 5

Chapter 7 Functions of Random Variables

7.1 Introduction
We shall be finding the probability distributions or densities of functions of one or more random
variables. Let X_1, X_2, ..., X_n be random variables and let Y = U(X_1, X_2, ..., X_n) be another
random variable. Methods to find the probability distribution of Y (or U) are
1. Method of Distribution Functions:
Find P(Y ≤ y) = F(y).
We find the region in the (x_1, x_2, ..., x_n) space for which Y ≤ y and find P(Y ≤ y) by
integrating f(x_1, x_2, ..., x_n) over this region. The density f(y) can be obtained by
differentiation.

2. Method of Transformation:
Given the density function of X, the density of Y = U(X_1, X_2, ..., X_n) can be found provided
U is monotone, i.e. either strictly increasing or strictly decreasing.

3. Method of Moment Generating Function:


This method is based on the following uniqueness theorem.
Uniqueness Theorem: Suppose the moment generating functions of random variables X and Y
exist and are given by M_X(t) and M_Y(t) respectively. If M_X(t) = M_Y(t) for all values of t,
then X and Y have the same probability distribution.

We find the moment generating function of Y and see if it resembles any known moment
generating function. If it is identical to some well-known moment generating function, the
probability distribution of Y can be identified.

7.2 Distribution Function Technique


Let X_1, X_2, ..., X_n be continuous random variables and let Y = U(X_1, X_2, ..., X_n).
(i) Find the region Y = y in the (x_1, x_2, ..., x_n) space.
(ii) Find the region Y ≤ y.
(iii) Find F(y) = P(Y ≤ y) by integrating f(x_1, x_2, ..., x_n) over the region Y ≤ y.
(iv) Find the density function f(y) by differentiating F(y).

Example 1: A random variable X has the density function f ( x) = 2 x for 0 < x < 1. Find the
density function of Y = 3X – 1.
Solution:
F(y) = P(Y ≤ y) = P(3X − 1 ≤ y) = P[X ≤ (y + 1)/3].
When 0 < x < 1, 0 < (y + 1)/3 < 1, i.e. −1 < y < 2, with
F(y) = P(X ≤ (y + 1)/3) = ∫_0^((y+1)/3) 2x dx = [(y + 1)/3]².
f(y) = F′(y) = 2(y + 1)/9, −1 < y < 2.

Problem 2 page 208

Here X has density f(x) = 2x e^(−x²) for x > 0, and Y = X².
(a) F(y) = P(Y ≤ y) = P(X² ≤ y) = P(−√y ≤ X ≤ √y) = P(X ≤ √y) = 1 − e^(−(√y)²) = 1 − e^(−y),
since x > 0. Thus F(y) = 0 for y ≤ 0 and F(y) = 1 − e^(−y) for y > 0.
(b) f(y) = e^(−y) for y > 0 and f(y) = 0 elsewhere.

7.3 Transformation Technique: One Variable


Let Y be either an increasing or a decreasing function of the random variable X, say Y = h(X).
To use the above technique to find f(y),
(i) Find the inverse function X = h⁻¹(Y), i.e. solve for x in terms of y.
(ii) Evaluate dx/dy.
(iii) Find f_Y(y) by f_Y(y) = f_X(x) |dx/dy|, where x = h⁻¹(y).

Note: Always transform the domain of f(x) to that of f(y).

Example 2: A random variable X has the density function f ( x) = 2 x for 0 < x < 1. Find the
density function of Y = 3X – 1 by the transformation method.

Solution:
Now, y = 3x − 1 ⇒ 3x = y + 1, or x = (y + 1)/3; dx/dy = 1/3.
Hence, f_Y(y) = f_X(x) |dx/dy| = 2[(y + 1)/3](1/3) = 2(y + 1)/9 [note that x = (y + 1)/3].
When x = 0, y = −1 and when x = 1, y = 2. Therefore, f(y) = 2(y + 1)/9, −1 < y < 2.

Problem 2 page 208

f(x) = 2x e^(−x²) for x > 0, f(x) = 0 elsewhere, and Y = X².
Now, x = ±√y = +√y since x > 0. Also, |dx/dy| = 1/(2√y).
Hence, f_Y(y) = f_X(x) |dx/dy| = 2√y e^(−y) / (2√y) = e^(−y) for y > 0.
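The derived density e^(−y) is the exponential(1) density, which can be checked numerically (an addition to the text; a sketch assuming NumPy) by sampling X via its inverse CDF F_X(x) = 1 − e^(−x²):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample X with density 2x e^(-x^2), x > 0, via inverse CDF: F_X(x) = 1 - e^(-x^2).
u = rng.random(200_000)
x = np.sqrt(-np.log(1 - u))
y = x ** 2

# Y should be exponential(1): check the mean and one CDF point.
assert abs(y.mean() - 1.0) < 0.02
assert abs((y <= 1.0).mean() - (1 - np.exp(-1))) < 0.01
```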

Problem 9 page 222

P(X = x) = C(3, x) C(3, 2 − x) / C(6, 2) = C(3, x) C(3, 2 − x) / 15, and Y = X − (2 − X) = 2X − 2.

   x     0     1     2
  p(x)  1/5   3/5   1/5

When x = 0, y = −2; when x = 1, y = 0; and when x = 2, y = 2. Hence we have

   y    −2     0     2
  p(y)  1/5   3/5   1/5
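The table can also be produced mechanically; the short sketch below (an addition, using only the Python standard library) computes the hypergeometric pmf of X and carries each probability over to Y = 2X − 2, which is one-to-one in X:

```python
from fractions import Fraction
from math import comb

# pmf of X: P(X = x) = C(3, x) * C(3, 2 - x) / C(6, 2) for x = 0, 1, 2.
p_x = {x: Fraction(comb(3, x) * comb(3, 2 - x), comb(6, 2)) for x in range(3)}

# Y = 2X - 2 is one-to-one in X, so each probability carries over unchanged.
p_y = {2 * x - 2: p for x, p in p_x.items()}

assert p_y == {-2: Fraction(1, 5), 0: Fraction(3, 5), 2: Fraction(1, 5)}
```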

7.4 Transformation Technique: Two Variables


Let X_1 and X_2 be continuous random variables with joint density f(x_1, x_2). We are interested
in finding f(y_1, y_2), the joint probability density of Y_1 = U_1(X_1, X_2) and Y_2 = U_2(X_1, X_2).
(i) Find the inverse functions x_1 = w_1(y_1, y_2) and x_2 = w_2(y_1, y_2). That is, solve for x_1
and x_2 in terms of y_1 and y_2.
(ii) Evaluate J, the Jacobian of the transformation,

        | ∂x_1/∂y_1   ∂x_1/∂y_2 |
    J = |                       |
        | ∂x_2/∂y_1   ∂x_2/∂y_2 |

(iii) Find the joint probability density of Y_1 and Y_2 by
g(y_1, y_2) = f(x_1, x_2) |J| = f(w_1(y_1, y_2), w_2(y_1, y_2)) |J|.

If only one function Y_1 is defined, we will introduce a second function, say Y_2, and use the
above technique.

Problem 37 page 223

f(x, y) = 24xy for 0 < x < 1, 0 < y < 1, x + y < 1. Z = X + Y and W = X.
Now, x = w and y = z − w.

        | ∂x/∂z   ∂x/∂w |   | 0    1 |
    J = |               | = |        | = −1,
        | ∂y/∂z   ∂y/∂w |   | 1   −1 |

and so f(z, w) = 24w(z − w) for 0 < w < 1, w < z, z < 1, and f(z, w) = 0 elsewhere.

Example 4: Let f(x_1, x_2) = x_1 x_2 / 18 for x_1 = 1, 2 and x_2 = 1, 2, 3.

(a) Find the joint probability distribution of Y_1 = X_1 + X_2 and Y_2 = X_1 − X_2.
(b) Find the marginal distribution of Y_1.

Solution:
Solve to obtain x_1 = (y_1 + y_2)/2 and x_2 = (y_1 − y_2)/2.
f(y_1, y_2) = (1/18)(1/4)(y_1 + y_2)(y_1 − y_2) = (y_1² − y_2²)/72, for y_1 = 2, 3, 4, 5;
y_2 = −2, −1, 0, 1; y_1 + y_2 = 2, 4; and y_1 − y_2 = 2, 4, 6.
Now, f(y_1) = Σ_{y_2} (y_1² − y_2²)/72.
When y_1 = 2: y_2 = 0, so f(2) = 4/72 = 1/18 [x_1 = 1, x_2 = 1].
When y_1 = 3: y_2 = −1, 1, so f(3) = [(9 − 1) + (9 − 1)]/72 = 4/18 [x_1 = 1, x_2 = 2; x_1 = 2, x_2 = 1].
When y_1 = 4: y_2 = −2, 0, so f(4) = [(16 − 4) + (16 − 0)]/72 = 7/18 [x_1 = 1, x_2 = 3; x_1 = x_2 = 2].
When y_1 = 5: y_2 = −1, so f(5) = (25 − 1)/72 = 24/72 = 6/18 [x_1 = 2, x_2 = 3].

[An alternative solution is to set up tables for (x_1, x_2) and (y_1, y_2).]

                  x_1
               1      2
        1    1/18   2/18
 x_2    2    2/18   4/18
        3    3/18   6/18

                      y_1
               2      3      4      5
       −2      0      0     3/18    0
 y_2   −1      0     2/18    0     6/18
        0    1/18     0     4/18    0
        1      0     2/18    0      0

   y_1        2      3      4      5
 f(y_1)     1/18   4/18   7/18   6/18
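The marginal table can also be reproduced by direct enumeration; the sketch below (an addition, standard library only) sums f(x_1, x_2) = x_1 x_2 / 18 over each level set of y_1 = x_1 + x_2:

```python
from fractions import Fraction

# Joint pmf f(x1, x2) = x1 * x2 / 18 for x1 in {1, 2}, x2 in {1, 2, 3}.
p_y1 = {}
for x1 in (1, 2):
    for x2 in (1, 2, 3):
        y1 = x1 + x2
        p_y1[y1] = p_y1.get(y1, Fraction(0)) + Fraction(x1 * x2, 18)

assert p_y1 == {2: Fraction(1, 18), 3: Fraction(4, 18),
                4: Fraction(7, 18), 5: Fraction(6, 18)}
```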

7.5 Moment Generating Function Technique


Let Y be a function of the random variables X_1, X_2, ..., X_n.
(i) Find the moment generating function of Y, M_Y(t).
(ii) Compare M_Y(t) with other well-known moment generating functions. If M_Y(t) = M_U(t) for
all values of t, then Y and U have identical probability distributions.

Theorem: If X_1, X_2, ..., X_n are independent random variables and Y = X_1 + X_2 + ... + X_n,
then M_Y(t) = Π_{i=1}^{n} M_{X_i}(t), where M_{X_i}(t) is the moment generating function of X_i.

Proof: M_{X_i}(t) = E(e^(t X_i)) and
M_Y(t) = E(e^(tY)) = E[e^(t X_1 + ... + t X_n)] = E[e^(t X_1)] ... E[e^(t X_n)],
because of independence. Hence M_Y(t) = M_{X_1}(t) M_{X_2}(t) ... M_{X_n}(t) = Π_{i=1}^{n} M_{X_i}(t).
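The theorem can be illustrated numerically. The sketch below (an addition to the text, assuming NumPy; θ = 2 and t = 0.1 are illustrative choices, not from the text) estimates E[e^(tY)] for a sum of two independent exponential(θ) variables and compares it with the product M_{X_1}(t) M_{X_2}(t) = (1 − θt)^(−2):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, t = 2.0, 0.1  # illustrative values; need t < 1/theta so the MGF exists

x1 = rng.exponential(theta, 500_000)
x2 = rng.exponential(theta, 500_000)
y = x1 + x2

mgf_y_hat = np.exp(t * y).mean()       # Monte Carlo estimate of M_Y(t) = E[e^(tY)]
mgf_product = (1 - theta * t) ** (-2)  # M_X1(t) * M_X2(t) for exponential(theta)

assert abs(mgf_y_hat - mgf_product) < 0.05
```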

Problem 7.43 page 226

X_i has a gamma distribution with parameters α and β, so M_{X_i}(t) = (1 − βt)^(−α), and
Y = X_1 + X_2 + ... + X_n. Then M_Y(t) = Π M_{X_i}(t) = [(1 − βt)^(−α)]^n = (1 − βt)^(−nα).
This is the moment generating function of a gamma random variable with parameters nα and β.
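This closure property can be checked by simulation; the sketch below (an addition, assuming NumPy and SciPy; α = 2, β = 1.5, n = 4 are illustrative values, not from the text) compares a sum of n gamma samples against the gamma(nα, β) distribution via a Kolmogorov–Smirnov statistic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, beta, n = 2.0, 1.5, 4  # illustrative parameters, not from the text

# Sum of n independent gamma(alpha, beta) variables, 200,000 replications.
y = rng.gamma(shape=alpha, scale=beta, size=(200_000, n)).sum(axis=1)

# The sum should follow gamma(n*alpha, beta); the KS statistic should be tiny.
ks_stat, _ = stats.kstest(y, "gamma", args=(n * alpha, 0, beta))
assert ks_stat < 0.01
```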

Example 5: Find the distribution of Y = Σ_{i=1}^{n} X_i where f(x_i) = (1/θ) e^(−x_i/θ) for x_i > 0.

Solution:
M_{X_i}(t) = (1 − θt)^(−1). Therefore, M_Y(t) = Π M_{X_i}(t) = [(1 − θt)^(−1)]^n = (1 − θt)^(−n).
This is the moment generating function of a gamma random variable with parameters α = n and β = θ.

Example 6: The joint density function of X_1 and X_2 is given by f(x_1, x_2) = 3x_1,
0 < x_2 < x_1 < 1. Find the probability density function for Y = X_1 − X_2.

Solution:
Using the distribution function method: F(y) = P(Y ≤ y) = P(X_1 − X_2 ≤ y).
F(y) = 1 − P(Y > y) = 1 − ∫_y^1 ∫_0^(x_1 − y) 3x_1 dx_2 dx_1 = (3/2)y − (1/2)y³, 0 < y < 1.

F(y) = 0 for y ≤ 0; F(y) = (3y − y³)/2 for 0 < y < 1; F(y) = 1 for y ≥ 1,
and f(y) = 3(1 − y²)/2 for 0 < y < 1; f(y) = 0 elsewhere.



Using the transformation technique:
Let Y = X_1 − X_2 and Z = X_1. Solve to obtain x_1 = z and x_2 = x_1 − y = z − y.

        | ∂x_1/∂z   ∂x_1/∂y |   | 1    0 |
    J = |                   | = |        | = −1,
        | ∂x_2/∂z   ∂x_2/∂y |   | 1   −1 |

and so f(y, z) = 3z for 0 < y < z < 1.
x_2 = 0 ⇒ z = y; x_2 = x_1 ⇒ z − y = z, or y = 0; x_1 < 1 ⇒ z < 1; x_2 < x_1 ⇒ z − y < z, or
y > 0; x_2 > 0 ⇒ z − y > 0, or z > y.
Hence, f(y) = ∫_y^1 f(y, z) dz = ∫_y^1 3z dz = 3(1 − y²)/2, 0 < y < 1.
y y
