
TRANSFORMATIONS OF RANDOM VARIABLES: C.D.F. technique

Let X be a random variable with cumulative distribution function F(x) and let Y
be a function of X such that Y = u(X). If A_y = {x | u(x) ≤ y}, then Y ≤ y and
X ∈ A_y are equivalent events and

F_Y(y) = P(Y ≤ y)
       = P(u(X) ≤ y)
       = P(X ∈ A_y),

where the last probability is evaluated as the sum or integral of f(x) over the region
A_y.

Example 1

If the random variable X has probability density function

f(x) = 2e^{-2x},   0 < x < ∞,

and distribution function

F_X(x) = 1 - e^{-2x},   0 < x < ∞,

the random variable Y defined by Y = e^X has distribution function F_Y(y) given by

F_Y(y) = P(Y ≤ y)
       = P(e^X ≤ y)
       = P(X ≤ ln y)
       = F_X(ln y)
       = 1 - e^{-2 ln y}
       = 1 - y^{-2},   1 < y < ∞.

The probability density function of Y is

f_Y(y) = (d/dy) F_Y(y) = 2y^{-3},   1 < y < ∞.
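The result can be checked by simulation. A minimal sketch of the c.d.f. technique (assuming numpy is available; the seed and sample size are arbitrary):

    # Check: if X has density 2e^{-2x} and Y = e^X, then F_Y(y) = 1 - y^{-2}, y > 1.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.exponential(scale=0.5, size=200_000)   # rate 2 means scale 1/2
    y = np.exp(x)

    for q in [1.5, 2.0, 5.0]:
        empirical = np.mean(y <= q)                # P(Y <= q) from the sample
        print(f"y={q}: empirical {empirical:.4f}, exact {1 - q**-2:.4f}")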


Example 2

If the continuous random variable X has strictly increasing cumulative distribution
function F(x), the random variable Y = F(X) has a uniform distribution on the
interval [0, 1].

Proof


F_Y(y) = P(F(X) ≤ y)
       = P(X ≤ F^{-1}(y))
       = F(F^{-1}(y))
       = y,   0 ≤ y ≤ 1,

which is the U(0, 1) distribution function.
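This is the probability integral transform. A minimal numerical sketch (assuming numpy and scipy are available), using the exponential distribution from Example 1:

    # Check: Y = F(X) should be uniform on (0,1) when F is the continuous,
    # strictly increasing c.d.f. of X.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.exponential(scale=0.5, size=100_000)   # X with F(x) = 1 - e^{-2x}
    u = 1 - np.exp(-2 * x)                         # Y = F(X)

    # Kolmogorov-Smirnov test against U(0,1); a large p-value is consistent
    # with uniformity.
    print(stats.kstest(u, "uniform"))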

Example 3

If X is a continuous random variable and Y = X^2, the distribution function of Y
is given by

F_Y(y) = P(X^2 ≤ y)
       = P(-√y ≤ X ≤ √y)
       = F_X(√y) - F_X(-√y),   y > 0,

and the probability density function of Y is

f_Y(y) = (d/dy) F_Y(y)
       = [f_X(√y) + f_X(-√y)] · 1/(2√y).
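For X standard normal this formula reproduces the chi-squared density with one degree of freedom. A minimal sketch (assuming numpy and scipy are available):

    # Evaluate f_Y(y) = [f_X(sqrt(y)) + f_X(-sqrt(y))] / (2 sqrt(y)) for
    # standard normal X and compare with the chi-squared(1) density.
    import numpy as np
    from scipy import stats

    def f_Y(y, f_X):
        r = np.sqrt(y)
        return (f_X(r) + f_X(-r)) / (2 * r)

    y = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
    print(f_Y(y, stats.norm.pdf))      # via the formula above
    print(stats.chi2.pdf(y, df=1))     # reference density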


Example 4

If X1 and X2 are independent and each has an exponential distribution with
parameter 1, let Y = X1 + X2. Then

A_y = {(x1, x2) | 0 ≤ x1 + x2 ≤ y}
    = {(x1, x2) | 0 ≤ x1 ≤ y - x2, 0 ≤ x2 ≤ y}

and the distribution function of Y is

F_Y(y) = ∫_0^y ∫_0^{y-x2} e^{-(x1+x2)} dx1 dx2
       = ∫_0^y e^{-x2} [-e^{-x1}]_0^{y-x2} dx2
       = ∫_0^y e^{-x2} (1 - e^{-(y-x2)}) dx2
       = ∫_0^y (e^{-x2} - e^{-y}) dx2
       = [-e^{-x2} - x2 e^{-y}]_0^y
       = 1 - e^{-y} - y e^{-y},   y > 0,

and the probability density function of Y is

f(y) = y e^{-y},   y > 0.
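A Monte Carlo sketch of this distribution function (assuming numpy is available):

    # Check: Y = X1 + X2 for independent Exp(1) variables has
    # F_Y(y) = 1 - e^{-y} - y e^{-y}.
    import numpy as np

    rng = np.random.default_rng(2)
    y = rng.exponential(size=(2, 200_000)).sum(axis=0)

    for q in [0.5, 1.0, 2.0, 4.0]:
        print(q, np.mean(y <= q), 1 - np.exp(-q) - q * np.exp(-q))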

TRANSFORMATIONS OF RANDOM VARIABLES: Transformation technique

Let the discrete random variable X have probability density function f(x) and
let the random variable Y be a function of the random variable X such that
Y = u(X). If u(x) is a 1–1 function, so that y = u(x) has a unique solution
x = w(y), the probability density function of Y is f_Y(y) = f_X(w(y)) for y ∈ B,
where B = {y | f_Y(y) > 0}.

Proof

For a discrete random variable

f_Y(y) = P(Y = y)
       = P(u(X) = y)
       = P(X = w(y))
       = f_X(w(y)).

Example

Let the random variable X have a geometric distribution with probability density
function f(x) = p q^{x-1}, x = 1, 2, 3, . . .

If Y = X - 1 then u(x) = x - 1 and w(y) = y + 1, so that

f_Y(y) = f_X(y + 1)
       = p q^y,   y = 0, 1, 2, . . .

and Y is the number of failures before the first success.
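A small sketch of this shift (assuming numpy is available; numpy's geometric sampler counts trials, starting at 1):

    # Check: Y = X - 1 has P(Y = y) = p q^y, y = 0, 1, 2, ...
    import numpy as np

    rng = np.random.default_rng(3)
    p, q = 0.3, 0.7
    y = rng.geometric(p, size=200_000) - 1   # shift trials to failures

    for k in range(4):
        print(k, np.mean(y == k), p * q**k)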


Let the continuous random variable X have probability density function f_X(x)
and let the random variable Y be a function of the random variable X such that
Y = u(X), where y = u(x) is a 1–1 transformation with inverse x = w(y). The set
A = {x | f_X(x) > 0} maps to the set B = {y | f_Y(y) > 0} and, if (d/dy)w(y) is
continuous and non-zero on B, the probability density function of Y is

f_Y(y) = f_X(w(y)) |(d/dy)w(y)|.

Proof

If y = u(x) is a 1–1 transformation it is either monotonic increasing or monotonic
decreasing. If it is monotonic increasing then u(x) ≤ y iff x ≤ w(y) and

F_Y(y) = P(u(X) ≤ y)
       = P(X ≤ w(y))
       = F_X(w(y))

and

f_Y(y) = (d/dy) F_X(w(y))
       = (d/dw(y)) F_X(w(y)) · (d/dy)w(y)
       = f_X(w(y)) |(d/dy)w(y)|   since (d/dy)w(y) > 0.

If the transformation is monotonic decreasing then u(x) ≤ y iff w(y) ≤ x and

F_Y(y) = P(u(X) ≤ y)
       = P(X ≥ w(y))
       = 1 - F_X(w(y))

and

f_Y(y) = -f_X(w(y)) (d/dy)w(y)
       = f_X(w(y)) |(d/dy)w(y)|   since (d/dy)w(y) < 0.


In general, the probability density function of Y = u(X), where the inverse transform
x = w(y) exists, is given by

f_Y(y) = f_X(w(y)) |J|

where the Jacobian J is (d/dy)w(y).

Example 1

If the random variable X has probability density function

f(x) = 2e^{-2x},   0 < x < ∞,

and the random variable Y is defined by Y = e^X, then x = w(y) = ln y with
w′(y) = 1/y and

f_Y(y) = f_X(ln y) · (1/y)
       = 2e^{-2 ln y} · (1/y)
       = 2y^{-3},   1 < y < ∞.
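The transformation formula translates directly into code. A generic sketch (assuming numpy is available; the function and argument names are illustrative):

    # f_Y(y) = f_X(w(y)) |w'(y)| for a 1-1 transformation with inverse w.
    import numpy as np

    def transform_pdf(y, f_X, w, w_prime):
        """Density of Y = u(X), given the inverse x = w(y)."""
        return f_X(w(y)) * np.abs(w_prime(y))

    f_X = lambda x: 2 * np.exp(-2 * x)                       # f(x) = 2 e^{-2x}
    y = np.array([1.5, 2.0, 3.0])
    print(transform_pdf(y, f_X, np.log, lambda t: 1 / t))    # via the formula
    print(2 * y**-3.0)                                       # derived density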

Example 2

If X ∼ U(0, 1), so that f(x) = 1 for 0 < x < 1, and Y = u(X) where u(x) = -2 ln x,
we have

y = u(x) = -2 ln x
x = w(y) = e^{-y/2}
J = (d/dy) e^{-y/2} = -(1/2) e^{-y/2}

and the probability density function of Y is

f_Y(y) = f_X(e^{-y/2}) |J|
       = (1/2) e^{-y/2},   0 < y < ∞.
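This is the standard way of generating exponential variables (equivalently, chi-squared variables with 2 degrees of freedom) from uniform ones. A minimal sketch (assuming numpy and scipy are available):

    # Check: Y = -2 ln X for X ~ U(0,1) is exponential with scale 2.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    y = -2 * np.log(rng.uniform(size=100_000))
    print(stats.kstest(y, "expon", args=(0, 2)))   # loc=0, scale=2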


Let (X1, X2, . . . , Xk) be a k-dimensional random variable with joint probability
density function f_X(x1, x2, . . . , xk) and let (Y1, Y2, . . . , Yk) be defined by the 1–1
transformations

Yi = ui(X1, X2, . . . , Xk),   i = 1, 2, . . . , k,

with inverse transformations

xi = wi(y1, y2, . . . , yk),   i = 1, 2, . . . , k.

If the Jacobian of the inverse transformations

J = | ∂x1/∂y1 . . . ∂x1/∂yk |
    |    ...   ...    ...   |
    | ∂xk/∂y1 . . . ∂xk/∂yk |

is continuous and non-zero over the range of the transformations, the joint pdf of
(Y1, Y2, . . . , Yk) is

f_Y(y1, y2, . . . , yk) = f_X(x1, x2, . . . , xk) |J|.

Example 1

Let X1, X2 be independent exponential variables with joint probability density
function

f_X(x1, x2) = e^{-(x1+x2)} for (x1, x2) ∈ A,

where A = {(x1, x2) : 0 < x1, x2 < ∞}.

Let

Y1 = X1
Y2 = X1 + X2

so that

x1 = y1
x2 = y2 - y1

and

J = | ∂x1/∂y1  ∂x1/∂y2 |   |  1  0 |
    | ∂x2/∂y1  ∂x2/∂y2 | = | -1  1 | = 1


and the joint probability density function of Y1 and Y2 is

f_Y(y1, y2) = e^{-(y1+(y2-y1))} |1|
            = e^{-y2},   (y1, y2) ∈ B,

where B = {(y1, y2) : 0 < y1 < y2 < ∞}.

The marginal probability density function of Y1 is

f_{Y1}(y1) = ∫_{y1}^∞ e^{-y2} dy2
           = [-e^{-y2}]_{y1}^∞
           = e^{-y1},   y1 > 0,

and the marginal probability density function of Y2 is

f_{Y2}(y2) = ∫_0^{y2} e^{-y2} dy1
           = [e^{-y2} y1]_0^{y2}
           = y2 e^{-y2},   y2 > 0.

The marginal probability density function of Y2 is that of a gamma variable, and so
the sum of independent exponential variables is a gamma variable.
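A Monte Carlo sketch of this joint transformation (assuming numpy is available):

    # Check: with Y1 = X1 and Y2 = X1 + X2, samples lie in B = {0 < y1 < y2}
    # and Y2 has the gamma distribution function 1 - e^{-q} - q e^{-q}.
    import numpy as np

    rng = np.random.default_rng(5)
    x1, x2 = rng.exponential(size=(2, 200_000))
    y1, y2 = x1, x1 + x2

    print(np.all((0 < y1) & (y1 < y2)))     # support check
    q = 2.0
    print(np.mean(y2 <= q), 1 - np.exp(-q) - q * np.exp(-q))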

Example 2

Let X1, X2 be independent exponential variables with joint probability density
function

f_X(x1, x2) = e^{-(x1+x2)} for (x1, x2) ∈ A,

where A = {(x1, x2) : 0 < x1, x2 < ∞}.

Now let

Y1 = X1 - X2
Y2 = X1 + X2

so that

x1 = (y1 + y2)/2
x2 = (y2 - y1)/2


J = | ∂x1/∂y1  ∂x1/∂y2 |   |  1/2  1/2 |
    | ∂x2/∂y1  ∂x2/∂y2 | = | -1/2  1/2 | = 1/2

and the joint probability density function of Y1 and Y2 is

f(y1, y2) = e^{-[(y1+y2)/2 + (y2-y1)/2]} |1/2|
          = (1/2) e^{-y2},   (y1, y2) ∈ B,

where B = {(y1, y2) : -y2 < y1 < y2, y2 > 0}.

The marginal probability density function of Y1 is

f_{Y1}(y1) = ∫_{-y1}^∞ (1/2) e^{-y2} dy2 = (1/2) e^{y1},   y1 < 0,
f_{Y1}(y1) = ∫_{y1}^∞ (1/2) e^{-y2} dy2 = (1/2) e^{-y1},   y1 > 0,

so that

f_{Y1}(y1) = (1/2) e^{-|y1|},   -∞ < y1 < ∞,

and the marginal probability density function of Y2 is

f_{Y2}(y2) = ∫_{-y2}^{y2} (1/2) e^{-y2} dy1
           = y2 e^{-y2},   y2 > 0.
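The marginal of Y1 is the standard Laplace density. A minimal check (assuming numpy and scipy are available):

    # Check: Y1 = X1 - X2 for independent Exp(1) variables is standard
    # Laplace, with density (1/2) e^{-|y1|}.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    x1, x2 = rng.exponential(size=(2, 200_000))
    print(stats.kstest(x1 - x2, "laplace"))   # loc 0, scale 1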


Example 3

Let X1, X2 be independent exponential variables with joint probability density
function

f_X(x1, x2) = e^{-(x1+x2)} for (x1, x2) ∈ A,

where A = {(x1, x2) : 0 < x1, x2 < ∞}.

Let

Y1 = X1 + X2
Y2 = X1/(X1 + X2)

so that

x1 = y1 y2
x2 = y1 (1 - y2)

and

J = |   y2      y1 |
    | 1 - y2   -y1 | = -y1 y2 - y1 (1 - y2) = -y1.

The joint probability density function of Y1 and Y2 is

f_Y(y1, y2) = e^{-(y1 y2 + y1 (1-y2))} |-y1|
            = y1 e^{-y1},   0 < y1 < ∞, 0 < y2 < 1.

The marginal probability density function of Y1 is

f_{Y1}(y1) = ∫_0^1 y1 e^{-y1} dy2
           = y1 e^{-y1} [y2]_0^1
           = y1 e^{-y1},   0 < y1 < ∞,

and the marginal probability density function of Y2 is

f_{Y2}(y2) = ∫_0^∞ y1 e^{-y1} dy1
           = 1,   0 < y2 < 1.

The variables Y1 and Y2 are independent, since their joint density factors into the
product of the marginal densities.
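Independence can be probed by simulation: joint probabilities should factor. A minimal sketch (assuming numpy is available):

    # Check: Y1 = X1 + X2 and Y2 = X1/(X1 + X2) are independent, so
    # P(Y1 <= a, Y2 <= b) should equal P(Y1 <= a) P(Y2 <= b).
    import numpy as np

    rng = np.random.default_rng(7)
    x1, x2 = rng.exponential(size=(2, 500_000))
    y1, y2 = x1 + x2, x1 / (x1 + x2)

    a, b = 2.0, 0.5
    joint = np.mean((y1 <= a) & (y2 <= b))
    product = np.mean(y1 <= a) * np.mean(y2 <= b)
    print(joint, product)   # close agreement supports independence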

TRANSFORMATIONS OF RANDOM VARIABLES: MGF technique

If f_X(x1, x2, . . . , xk) is the joint probability density function of the k random
variables X1, X2, . . . , Xk and Y1 = u1(X1, X2, . . . , Xk) is a function of these random
variables, the moment generating function of Y1, if it exists, is given by

E(e^{tY1}) = ∫_{-∞}^∞ e^{ty1} g(y1) dy1

where g(y1) is the probability density function of Y1.

Assume that the integral

∫_{-∞}^∞ . . . ∫_{-∞}^∞ e^{t u1(x1, x2, . . . , xk)} f(x1, x2, . . . , xk) dx1 dx2 . . . dxk   (1)

exists for -h < t < h. If the k functions

yi = ui(x1, x2, . . . , xk),   i = 1, 2, . . . , k,

define a 1–1 transformation with inverse functions

xi = wi(y1, y2, . . . , yk),   i = 1, 2, . . . , k,

and Jacobian J, equation (1) becomes

∫_{-∞}^∞ . . . ∫_{-∞}^∞ e^{ty1} |J| f(w1, w2, . . . , wk) dy2 dy3 . . . dyk dy1   (2)

where

|J| f(w1(y1, y2, . . . , yk), w2(y1, y2, . . . , yk), . . . , wk(y1, y2, . . . , yk))

is the joint probability density function of Y1, Y2, . . . , Yk.

The marginal probability density function of Y1 is obtained by integrating the
joint probability density function over y2, y3, . . . , yk and, as e^{ty1} does not involve
the variables Y2, Y3, . . . , Yk, equation (2) can be written

∫_{-∞}^∞ e^{ty1} g(y1) dy1,

which is the moment generating function of Y1.

Computing E(e^{t u1(X1, X2, . . . , Xk)}) therefore gives E(e^{tY1}), where Y1 = u1(X1, X2, . . . , Xk).





Example 1

If X1 and X2 are independent normal variables with

X1 ∼ N(µ1, σ1^2) and X2 ∼ N(µ2, σ2^2),

the moment generating function of Y = X1 - X2 is given by

M(t) = E(e^{t(X1-X2)})
     = E(e^{tX1} e^{-tX2})
     = E(e^{tX1}) E(e^{-tX2})
as X1 and X2 are independent.

As

E(e^{tX1}) = exp(µ1 t + σ1^2 t^2/2)
E(e^{tX2}) = exp(µ2 t + σ2^2 t^2/2)
E(e^{-tX2}) = exp(-µ2 t + σ2^2 t^2/2)

we have

M(t) = exp((µ1 - µ2) t + (σ1^2 + σ2^2) t^2/2)

and, since the distribution of Y is completely determined by its moment generating
function, the distribution of Y is

N(µ1 - µ2, σ1^2 + σ2^2).
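A quick numerical check (assuming numpy is available; the parameter values are arbitrary):

    # Check: Y = X1 - X2 has mean mu1 - mu2 and variance sigma1^2 + sigma2^2.
    import numpy as np

    rng = np.random.default_rng(8)
    mu1, s1, mu2, s2 = 1.0, 2.0, 3.0, 1.5
    y = rng.normal(mu1, s1, 500_000) - rng.normal(mu2, s2, 500_000)
    print(y.mean(), mu1 - mu2)        # about -2
    print(y.var(), s1**2 + s2**2)     # about 6.25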


Example 2

If X has the standard normal distribution, X ∼ N(0, 1), and Y = X^2, the moment
generating function of Y is

E(e^{tY}) = E(e^{tX^2})
          = ∫_{-∞}^∞ (1/√(2π)) e^{tx^2} e^{-x^2/2} dx
          = (1/√(2π)) ∫_{-∞}^∞ e^{-x^2(1-2t)/2} dx
          = (1/√(2π)) ∫_{-∞}^∞ e^{-y^2/2} dy/√(1-2t)   with y = x√(1-2t)
          = (1 - 2t)^{-1/2},   t < 1/2.

The moment generating function of the chi-squared variable with ν degrees of
freedom is

(1 - 2t)^{-ν/2}

and so the square of a standard normal variable has the chi-squared distribution
with 1 degree of freedom.
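The m.g.f. itself can be estimated by simulation. A minimal sketch (assuming numpy is available; estimates become noisy as t approaches 1/2):

    # Check: empirical E[e^{t X^2}] against (1 - 2t)^{-1/2} for t < 1/2.
    import numpy as np

    rng = np.random.default_rng(9)
    x = rng.standard_normal(1_000_000)

    for t in [-1.0, 0.1, 0.25]:
        print(t, np.mean(np.exp(t * x**2)), (1 - 2 * t) ** -0.5)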
