
9. Two Functions of Two Random Variables

In the spirit of the previous lecture, let us look at an immediate generalization: Suppose $X$ and $Y$ are two random variables with joint p.d.f $f_{XY}(x,y)$. Given two functions $g(x,y)$ and $h(x,y)$, define the new random variables
$$Z = g(X,Y), \tag{9-1}$$
$$W = h(X,Y). \tag{9-2}$$
How does one determine their joint p.d.f $f_{ZW}(z,w)$? Obviously, with $f_{ZW}(z,w)$ in hand, the marginal p.d.fs $f_Z(z)$ and $f_W(w)$ can be easily determined.

PILLAI
The procedure is the same as that in (8-3). In fact, for given $z$ and $w$,
$$F_{ZW}(z,w) = P\{Z(\xi) \le z,\ W(\xi) \le w\} = P\{g(X,Y) \le z,\ h(X,Y) \le w\} = P\{(X,Y) \in D_{z,w}\} = \iint_{(x,y)\in D_{z,w}} f_{XY}(x,y)\,dx\,dy, \tag{9-3}$$
where $D_{z,w}$ is the region in the $xy$ plane such that the inequalities $g(x,y) \le z$ and $h(x,y) \le w$ are simultaneously satisfied.
We illustrate this technique in the next example.
Fig. 9.1: the region $D_{z,w}$ in the $xy$ plane.
Example 9.1: Suppose $X$ and $Y$ are independent random variables, uniformly distributed in the interval $(0,\theta)$. Define $Z = \min(X,Y)$, $W = \max(X,Y)$. Determine $f_{ZW}(z,w)$.
Solution: Obviously both $z$ and $w$ vary in the interval $(0,\theta)$. Thus
$$F_{ZW}(z,w) = 0 \quad \text{if } z < 0 \text{ or } w < 0. \tag{9-4}$$

$$F_{ZW}(z,w) = P\{Z \le z,\ W \le w\} = P\{\min(X,Y) \le z,\ \max(X,Y) \le w\}. \tag{9-5}$$
We must consider two cases, $w \ge z$ and $w < z$, since they give rise to different regions for $D_{z,w}$ (see Figs. 9.2 (a)-(b)).
Fig. 9.2: the region $D_{z,w}$ (shaded) for (a) $w > z$ and (b) $w < z$.
For w  z, from Fig. 9.2 (a), the region Dz ,w is represented
by the doubly shaded area. Thus
FZW ( z, w)  FXY ( z, w)  FXY ( w, z )  FXY ( z, z ) , w  z, (9-6)
and for w  z, from Fig. 9.2 (b), we obtain
FZW ( z, w)  FXY ( w, w) , w  z. (9-7)
With
$$F_{XY}(x,y) = F_X(x)F_Y(y) = \frac{x}{\theta}\cdot\frac{y}{\theta} = \frac{xy}{\theta^2}, \tag{9-8}$$
we obtain
$$F_{ZW}(z,w) = \begin{cases} (2w - z)z/\theta^2, & 0 < z < w < \theta, \\ w^2/\theta^2, & 0 < w < z < \theta. \end{cases} \tag{9-9}$$
Thus
$$f_{ZW}(z,w) = \begin{cases} 2/\theta^2, & 0 < z < w < \theta, \\ 0, & \text{otherwise}. \end{cases} \tag{9-10}$$
From (9-10), we also obtain
$$f_Z(z) = \int_z^{\theta} f_{ZW}(z,w)\,dw = \frac{2}{\theta}\Big(1 - \frac{z}{\theta}\Big), \quad 0 < z < \theta, \tag{9-11}$$
and
$$f_W(w) = \int_0^{w} f_{ZW}(z,w)\,dz = \frac{2w}{\theta^2}, \quad 0 < w < \theta. \tag{9-12}$$

If $g(x,y)$ and $h(x,y)$ are continuous and differentiable functions, then, as in the case of one random variable (see (5-30)), it is possible to develop a formula to obtain the joint p.d.f $f_{ZW}(z,w)$ directly. Towards this, consider the equations
$$g(x,y) = z, \quad h(x,y) = w. \tag{9-13}$$

For a given point $(z,w)$, equation (9-13) can have many solutions. Let us say
$$(x_1, y_1),\ (x_2, y_2),\ \ldots,\ (x_n, y_n)$$
represent these multiple solutions, such that (see Fig. 9.3)
$$g(x_i, y_i) = z, \quad h(x_i, y_i) = w. \tag{9-14}$$

y 2
w 1

( x2 , y2 )
w  w 
( x1, y1 ) i
w 
( z , w)  ( xi , yi )
z  z x
z

n
z ( xn , yn )
(a) (b)
Fig. 9.3

Consider the problem of evaluating the probability
$$P\{z < Z \le z + \Delta z,\ w < W \le w + \Delta w\} = P\{z < g(X,Y) \le z + \Delta z,\ w < h(X,Y) \le w + \Delta w\}. \tag{9-15}$$
Using (7-9), we can rewrite (9-15) as
$$P\{z < Z \le z + \Delta z,\ w < W \le w + \Delta w\} \simeq f_{ZW}(z,w)\,\Delta z\,\Delta w. \tag{9-16}$$
But to translate this probability in terms of $f_{XY}(x,y)$, we need to evaluate the equivalent region for $\Delta z\,\Delta w$ in the $xy$ plane. Towards this, referring to Fig. 9.4, we observe that the point $A$ with coordinates $(z,w)$ gets mapped onto the point $A'$ with coordinates $(x_i, y_i)$ (as well as onto other points, as in Fig. 9.3(b)). As $z$ changes to $z + \Delta z$, moving $A$ to the point $B$ in Fig. 9.4 (a), let $B'$ represent its image in the $xy$ plane. Similarly, as $w$ changes to $w + \Delta w$, moving $A$ to $C$, let $C'$ represent its image in the $xy$ plane.
Fig. 9.4: (a) the rectangle $ABCD$ in the $zw$ plane; (b) its image, the parallelogram $A'B'C'D'$ with area $\Delta_i$, in the $xy$ plane.
Finally, $D$ goes to $D'$, and $A'B'C'D'$ represents the equivalent parallelogram in the $xy$ plane with area $\Delta_i$. Referring back to Fig. 9.3, the probability in (9-16) can be alternatively expressed as
$$\sum_i P\{(X,Y) \in \Delta_i\} = \sum_i f_{XY}(x_i, y_i)\,\Delta_i. \tag{9-17}$$

Equating (9-16) and (9-17), we obtain
$$f_{ZW}(z,w) = \sum_i f_{XY}(x_i, y_i)\,\frac{\Delta_i}{\Delta z\,\Delta w}. \tag{9-18}$$
To simplify (9-18), we need to evaluate the area $\Delta_i$ of the parallelograms in Fig. 9.3 (b) in terms of $\Delta z\,\Delta w$. Towards this, let $g_1$ and $h_1$ denote the inverse transformation in (9-14), so that
$$x_i = g_1(z,w), \quad y_i = h_1(z,w). \tag{9-19}$$
As the point $(z,w)$ goes to $(x_i, y_i) = A'$, the point $(z + \Delta z, w)$ goes to $B'$, the point $(z, w + \Delta w)$ to $C'$, and the point $(z + \Delta z, w + \Delta w)$ to $D'$. Hence the respective $x$ and $y$ coordinates of $B'$ are given by
$$g_1(z + \Delta z, w) = g_1(z,w) + \frac{\partial g_1}{\partial z}\Delta z = x_i + \frac{\partial g_1}{\partial z}\Delta z, \tag{9-20}$$
and
$$h_1(z + \Delta z, w) = h_1(z,w) + \frac{\partial h_1}{\partial z}\Delta z = y_i + \frac{\partial h_1}{\partial z}\Delta z. \tag{9-21}$$
Similarly, those of $C'$ are given by
$$x_i + \frac{\partial g_1}{\partial w}\Delta w, \quad y_i + \frac{\partial h_1}{\partial w}\Delta w. \tag{9-22}$$
The area of the parallelogram $A'B'C'D'$ in Fig. 9.4 (b) is given by
$$\Delta_i = (A'B')(A'C')\sin(\varphi - \theta) = (A'B'\cos\theta)(A'C'\sin\varphi) - (A'B'\sin\theta)(A'C'\cos\varphi). \tag{9-23}$$
But from Fig. 9.4 (b) and (9-20)-(9-22),
$$A'B'\cos\theta = \frac{\partial g_1}{\partial z}\Delta z, \quad A'C'\sin\varphi = \frac{\partial h_1}{\partial w}\Delta w, \tag{9-24}$$
$$A'B'\sin\theta = \frac{\partial h_1}{\partial z}\Delta z, \quad A'C'\cos\varphi = \frac{\partial g_1}{\partial w}\Delta w, \tag{9-25}$$
so that
$$\Delta_i = \left(\frac{\partial g_1}{\partial z}\frac{\partial h_1}{\partial w} - \frac{\partial g_1}{\partial w}\frac{\partial h_1}{\partial z}\right)\Delta z\,\Delta w, \tag{9-26}$$
and
$$\frac{\Delta_i}{\Delta z\,\Delta w} = \frac{\partial g_1}{\partial z}\frac{\partial h_1}{\partial w} - \frac{\partial g_1}{\partial w}\frac{\partial h_1}{\partial z} = \det\begin{pmatrix} \dfrac{\partial g_1}{\partial z} & \dfrac{\partial g_1}{\partial w} \\[2mm] \dfrac{\partial h_1}{\partial z} & \dfrac{\partial h_1}{\partial w} \end{pmatrix}. \tag{9-27}$$
The right side of (9-27) represents the Jacobian $J(z,w)$ of the transformation in (9-19). Thus
 g1 g1 
 z w 
 
J ( z , w)  det  .
 h
(9-28)
h1 
 1 
 z w 

Substituting (9-27)-(9-28) into (9-18), we get
$$f_{ZW}(z,w) = \sum_i |J(z,w)|\,f_{XY}(x_i, y_i) = \sum_i \frac{1}{|J(x_i, y_i)|}\,f_{XY}(x_i, y_i), \tag{9-29}$$
since
$$|J(z,w)| = \frac{1}{|J(x_i, y_i)|}, \tag{9-30}$$

where $J(x_i, y_i)$ represents the Jacobian of the original transformation in (9-13), given by
$$J(x_i, y_i) = \det\begin{pmatrix} \dfrac{\partial g}{\partial x} & \dfrac{\partial g}{\partial y} \\[2mm] \dfrac{\partial h}{\partial x} & \dfrac{\partial h}{\partial y} \end{pmatrix}\Bigg|_{x = x_i,\ y = y_i}. \tag{9-31}$$
Next we shall illustrate the usefulness of the formula in
(9-29) through various examples:
Example 9.2: Suppose $X$ and $Y$ are zero mean independent Gaussian r.vs with common variance $\sigma^2$. Define $Z = \sqrt{X^2 + Y^2}$, $W = \tan^{-1}(Y/X)$, where $|W| < \pi/2$. Obtain $f_{ZW}(z,w)$.
Solution: Here
$$f_{XY}(x,y) = \frac{1}{2\pi\sigma^2}\,e^{-(x^2 + y^2)/2\sigma^2}. \tag{9-32}$$

Since
$$z = g(x,y) = \sqrt{x^2 + y^2}; \quad w = h(x,y) = \tan^{-1}(y/x), \quad |w| < \pi/2, \tag{9-33}$$
if $(x_1, y_1)$ is a solution pair, so is $(-x_1, -y_1)$. From (9-33),
$$\frac{y}{x} = \tan w, \quad \text{or} \quad y = x\tan w. \tag{9-34}$$
Substituting this into $z$, we get
$$z = \sqrt{x^2 + y^2} = x\sqrt{1 + \tan^2 w} = x\sec w, \quad \text{or} \quad x = z\cos w, \tag{9-35}$$
and
$$y = x\tan w = z\sin w. \tag{9-36}$$
Thus there are two solution sets
$$x_1 = z\cos w,\ y_1 = z\sin w; \qquad x_2 = -z\cos w,\ y_2 = -z\sin w. \tag{9-37}$$
We can use (9-35)-(9-37) to obtain $J(z,w)$. From (9-28),
$$J(z,w) = \begin{vmatrix} \dfrac{\partial x}{\partial z} & \dfrac{\partial x}{\partial w} \\[2mm] \dfrac{\partial y}{\partial z} & \dfrac{\partial y}{\partial w} \end{vmatrix} = \begin{vmatrix} \cos w & -z\sin w \\ \sin w & z\cos w \end{vmatrix} = z, \tag{9-38}$$
so that
$$|J(z,w)| = z. \tag{9-39}$$
We can also compute $J(x,y)$ using (9-31). From (9-33),
$$J(x,y) = \begin{vmatrix} \dfrac{x}{\sqrt{x^2 + y^2}} & \dfrac{y}{\sqrt{x^2 + y^2}} \\[2mm] \dfrac{-y}{x^2 + y^2} & \dfrac{x}{x^2 + y^2} \end{vmatrix} = \frac{1}{\sqrt{x^2 + y^2}} = \frac{1}{z}. \tag{9-40}$$
Notice that $|J(z,w)| = 1/|J(x_i, y_i)|$, agreeing with (9-30).


Substituting (9-37) and (9-39) or (9-40) into (9-29), we get
$$f_{ZW}(z,w) = z\big(f_{XY}(x_1, y_1) + f_{XY}(x_2, y_2)\big) = \frac{z}{\pi\sigma^2}\,e^{-z^2/2\sigma^2}, \quad 0 < z < \infty,\ |w| < \frac{\pi}{2}. \tag{9-41}$$
Thus
$$f_Z(z) = \int_{-\pi/2}^{\pi/2} f_{ZW}(z,w)\,dw = \frac{z}{\sigma^2}\,e^{-z^2/2\sigma^2}, \quad 0 < z < \infty, \tag{9-42}$$
which represents a Rayleigh r.v with parameter $\sigma^2$, and
$$f_W(w) = \int_0^{\infty} f_{ZW}(z,w)\,dz = \frac{1}{\pi}, \quad |w| < \frac{\pi}{2}, \tag{9-43}$$
which represents a uniform r.v in the interval $(-\pi/2, \pi/2)$. Moreover, by direct computation,
$$f_{ZW}(z,w) = f_Z(z) \cdot f_W(w), \tag{9-44}$$
implying that $Z$ and $W$ are independent. We summarize these results in the following statement: If $X$ and $Y$ are zero mean independent Gaussian random variables with common variance, then $\sqrt{X^2 + Y^2}$ has a Rayleigh distribution and $\tan^{-1}(Y/X)$ has a uniform distribution. Moreover, these two derived r.vs are statistically independent. Alternatively, with $X$ and $Y$ as independent zero mean r.vs as in (9-32), $X + jY$ represents a complex Gaussian r.v. But
$$X + jY = Z e^{jW}, \tag{9-45}$$
where $Z$ and $W$ are as in (9-33), except that for (9-45) to hold good on the entire complex plane we must have $-\pi < W \le \pi$, and hence it follows that the magnitude and phase of a complex Gaussian r.v are independent, with Rayleigh and uniform $U(-\pi, \pi)$ distributions, respectively. The statistical independence of these derived r.vs is an interesting observation.
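The Rayleigh/uniform conclusion of Example 9.2 is easy to check by simulation (a sketch with the illustrative choice $\sigma = 1$):

```python
import math, random

# Monte Carlo check of Example 9.2 with sigma = 1: the magnitude should be
# Rayleigh and the phase uniform on (-pi/2, pi/2).
rng = random.Random(1)
N = 100_000
mags, phases = [], []
for _ in range(N):
    x = rng.gauss(0.0, 1.0)
    y = rng.gauss(0.0, 1.0)
    mags.append(math.hypot(x, y))    # Z = sqrt(X^2 + Y^2)
    phases.append(math.atan(y / x))  # W = arctan(Y/X), in (-pi/2, pi/2)

# Rayleigh with sigma = 1 has mean sqrt(pi/2) ~ 1.2533
print(sum(mags) / N)
# Uniform on (-pi/2, pi/2) has mean 0
print(sum(phases) / N)
```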
Example 9.3: Let $X$ and $Y$ be independent exponential random variables with common parameter $\lambda$. Define $U = X + Y$, $V = X - Y$. Find the joint and marginal p.d.f of $U$ and $V$.
Solution: It is given that
$$f_{XY}(x,y) = \frac{1}{\lambda^2}\,e^{-(x+y)/\lambda}, \quad x > 0,\ y > 0. \tag{9-46}$$

Now since $u = x + y$ and $v = x - y$, we always have $|v| < u$, and there is only one solution, given by
$$x = \frac{u + v}{2}, \quad y = \frac{u - v}{2}. \tag{9-47}$$
Moreover, the Jacobian of the transformation is given by
$$J(x,y) = \begin{vmatrix} 1 & 1 \\ 1 & -1 \end{vmatrix} = -2,$$
and hence
$$f_{UV}(u,v) = \frac{1}{2\lambda^2}\,e^{-u/\lambda}, \quad 0 < |v| < u < \infty, \tag{9-48}$$
represents the joint p.d.f of $U$ and $V$. This gives
$$f_U(u) = \int_{-u}^{u} f_{UV}(u,v)\,dv = \frac{1}{2\lambda^2}\,e^{-u/\lambda}\int_{-u}^{u} dv = \frac{u}{\lambda^2}\,e^{-u/\lambda}, \quad 0 < u < \infty, \tag{9-49}$$
and
$$f_V(v) = \int_{|v|}^{\infty} f_{UV}(u,v)\,du = \frac{1}{2\lambda^2}\int_{|v|}^{\infty} e^{-u/\lambda}\,du = \frac{1}{2\lambda}\,e^{-|v|/\lambda}, \quad -\infty < v < \infty. \tag{9-50}$$
Notice that in this case the r.vs $U$ and $V$ are not independent. As we show below, the general transformation formula in (9-29), which makes use of two functions, can be useful even when only one function is specified.
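The marginals (9-49)-(9-50), a Gamma-type density for $U$ and a Laplace density for $V$, can also be sanity-checked by simulation (a sketch with the illustrative choice $\lambda = 1$):

```python
import random

# Monte Carlo check of (9-49)-(9-50) with lambda = 1 (illustrative choice)
rng = random.Random(2)
N = 200_000
u_hits = v_hits = 0
for _ in range(N):
    x = rng.expovariate(1.0)  # exponential with mean 1
    y = rng.expovariate(1.0)
    u, v = x + y, x - y
    u_hits += (u <= 1.0)  # F_U(1) = 1 - 2/e ~ 0.2642 for f_U(u) = u e^{-u}
    v_hits += (v <= 0.0)  # F_V(0) = 1/2 by the symmetry of (9-50)

print(u_hits / N)  # near 0.2642
print(v_hits / N)  # near 0.5
```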
Auxiliary Variables:
Suppose
$$Z = g(X,Y), \tag{9-51}$$
where $X$ and $Y$ are two random variables. To determine $f_Z(z)$ by making use of the above formulation in (9-29), we can define an auxiliary variable
$$W = X \quad \text{or} \quad W = Y, \tag{9-52}$$
and the p.d.f of $Z$ can be obtained from $f_{ZW}(z,w)$ by proper integration.
Example 9.4: Suppose $Z = X + Y$ and let $W = Y$, so that the transformation is one-to-one and the solution is given by $y_1 = w$, $x_1 = z - w$.
The Jacobian of the transformation is given by
$$J(x,y) = \begin{vmatrix} 1 & 1 \\ 0 & 1 \end{vmatrix} = 1,$$
and hence
$$f_{ZW}(z,w) = f_{XY}(x_1, y_1) = f_{XY}(z - w, w),$$
or
$$f_Z(z) = \int_{-\infty}^{\infty} f_{ZW}(z,w)\,dw = \int_{-\infty}^{\infty} f_{XY}(z - w, w)\,dw, \tag{9-53}$$
which agrees with (8-7). Note that (9-53) reduces to the convolution of $f_X(z)$ and $f_Y(z)$ if $X$ and $Y$ are independent random variables. Next, we consider a less trivial example.
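A minimal numerical sketch of (9-53): for independent $X$ and $Y$ the integral is the convolution of the marginal densities. With $X$ and $Y$ both exponential with $\lambda = 1$ (an illustrative choice), the closed form is $f_Z(z) = z e^{-z}$, the same density as in (9-49).

```python
import math

# Sketch of (9-53) as a numerical convolution, X and Y independent
# exponential(1); closed form: f_Z(z) = z * exp(-z).
def f_x(t):
    return math.exp(-t) if t >= 0 else 0.0

def f_z(z, steps=4000):
    # midpoint Riemann sum for the convolution integral in (9-53);
    # the integrand vanishes outside 0 < w < z
    dw = z / steps
    return sum(f_x(z - (k + 0.5) * dw) * f_x((k + 0.5) * dw)
               for k in range(steps)) * dw

print(f_z(2.0), 2.0 * math.exp(-2.0))  # the two values agree closely
```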
Example 9.5: Let $X \sim U(0,1)$ and $Y \sim U(0,1)$ be independent. Define
$$Z = (-2\ln X)^{1/2}\cos(2\pi Y). \tag{9-54}$$
Find the density function of $Z$.
Solution: We can make use of the auxiliary variable $W = Y$ in this case. This gives the only solution to be
$$x_1 = e^{-[z\sec(2\pi w)]^2/2}, \tag{9-55}$$
$$y_1 = w, \tag{9-56}$$
and using (9-28),
$$J(z,w) = \begin{vmatrix} \dfrac{\partial x_1}{\partial z} & \dfrac{\partial x_1}{\partial w} \\[2mm] \dfrac{\partial y_1}{\partial z} & \dfrac{\partial y_1}{\partial w} \end{vmatrix} = \begin{vmatrix} -z\sec^2(2\pi w)\,e^{-[z\sec(2\pi w)]^2/2} & \dfrac{\partial x_1}{\partial w} \\[2mm] 0 & 1 \end{vmatrix} = -z\sec^2(2\pi w)\,e^{-[z\sec(2\pi w)]^2/2}. \tag{9-57}$$

Substituting (9-55)-(9-57) into (9-29), we obtain
$$f_{ZW}(z,w) = z\sec^2(2\pi w)\,e^{-[z\tan(2\pi w)]^2/2}\,e^{-z^2/2}, \quad -\infty < z < \infty,\ 0 < w < 1, \tag{9-58}$$
and
$$f_Z(z) = \int_0^1 f_{ZW}(z,w)\,dw = e^{-z^2/2}\int_0^1 z\sec^2(2\pi w)\,e^{-[z\tan(2\pi w)]^2/2}\,dw. \tag{9-59}$$
Let $u = z\tan(2\pi w)$, so that $du = 2\pi z\sec^2(2\pi w)\,dw$. Notice that as $w$ varies from 0 to 1, $u$ varies from $-\infty$ to $+\infty$. Using this in (9-59), we get
$$f_Z(z) = e^{-z^2/2}\int_{-\infty}^{\infty} e^{-u^2/2}\,\frac{du}{2\pi} = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}, \quad -\infty < z < \infty, \tag{9-60}$$
which represents a zero mean Gaussian r.v with unit variance, i.e., $Z \sim N(0,1)$. Equation (9-54) can be used as a practical procedure to generate Gaussian random variables from two independent uniformly distributed random sequences.
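Equation (9-54) is one half of the well-known Box-Muller transform; a minimal sketch of the generation procedure it suggests (using both the cosine and the sine forms, so each uniform pair yields two samples):

```python
import math, random

# Sketch of the generator suggested by (9-54) (the Box-Muller transform).
def gaussian_pair(rng):
    x = 1.0 - rng.random()  # in (0, 1], avoids log(0)
    y = rng.random()
    r = math.sqrt(-2.0 * math.log(x))
    # cos gives (9-54); sin gives a second, independent N(0,1) sample
    return r * math.cos(2 * math.pi * y), r * math.sin(2 * math.pi * y)

rng = random.Random(3)
samples = [z for _ in range(100_000) for z in gaussian_pair(rng)]
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
print(mean, var)  # near 0 and 1, consistent with Z ~ N(0, 1)
```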
Example 9.6: Let $X$ and $Y$ be independent, identically distributed Geometric random variables with
$$P(X = k) = P(Y = k) = pq^k, \quad k = 0, 1, 2, \ldots$$
(a) Show that $\min(X,Y)$ and $X - Y$ are independent random variables.
(b) Show that $\min(X,Y)$ and $\max(X,Y) - \min(X,Y)$ are also independent random variables.
Solution: (a) Let
$$Z = \min(X,Y), \quad W = X - Y. \tag{9-61}$$
Note that $Z$ takes only nonnegative values $\{0, 1, 2, \ldots\}$, while $W$ takes positive, zero, and negative values $\{0, \pm 1, \pm 2, \ldots\}$. We have $P(Z = m, W = n) = P\{\min(X,Y) = m,\ X - Y = n\}$. But
$$Z = \min(X,Y) = \begin{cases} Y, & X \ge Y \iff W = X - Y \text{ is nonnegative}, \\ X, & X < Y \iff W = X - Y \text{ is negative}. \end{cases}$$
Thus
$$P(Z = m, W = n) = P\{\min(X,Y) = m,\ X - Y = n,\ X \ge Y\} + P\{\min(X,Y) = m,\ X - Y = n,\ X < Y\}, \tag{9-62}$$
P ( Z  m,W  n )  P (Y  m, X  m  n, X  Y )
 P ( X  m, Y  m  n , X  Y )
 P ( X  m  n ) P (Y  m )  pq m  n pq m , m  0, n  0
 mn
 P ( X  m ) P (Y  m  n )  pq pq , m  0, n  0
m

 p 2 q 2 m |n| , m  0, 1, 2, n  0,  1,  2, (9-63)


represents the joint probability mass function of the random variables
Z and W. Also
P ( Z  m )   P( Z  m,W  n )   p 2 q 2 m q |n|
n n

 p 2 q 2 m (1  2q  2q 2  )
2q
 p 2 q 2 m (1  1 q )  pq 2 m (1  q)
 p(1  q) q 2 m , m  0, 1, 2,. (9-64)

Thus Z represents a Geometric random variable since 23


1  q 2  p(1  q), and PILLAI
$$P(W = n) = \sum_{m=0}^{\infty} P(Z = m, W = n) = \sum_{m=0}^{\infty} p^2 q^{2m} q^{|n|} = p^2 q^{|n|}(1 + q^2 + q^4 + \cdots) = \frac{p^2 q^{|n|}}{1 - q^2} = \frac{p}{1 + q}\,q^{|n|}, \quad n = 0, \pm 1, \pm 2, \ldots \tag{9-65}$$

Note that
$$P(Z = m, W = n) = P(Z = m)\,P(W = n), \tag{9-66}$$
establishing the independence of the random variables $Z$ and $W$. The independence of $X - Y$ and $\min(X,Y)$ when $X$ and $Y$ are independent Geometric random variables is an interesting observation.
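The independence claim (9-66) is also easy to confirm empirically (a sketch with the illustrative choice $p = 0.4$; the geometric variables here count failures, starting at 0, as in the problem statement):

```python
import random

# Empirical check of (9-66) with p = 0.4 (illustrative choice)
p = 0.4
rng = random.Random(4)

def geom():
    # P(X = k) = p * (1 - p)**k, k = 0, 1, 2, ... (failures before a success)
    k = 0
    while rng.random() >= p:
        k += 1
    return k

N = 300_000
hits_z = hits_w = hits_zw = 0
for _ in range(N):
    x, y = geom(), geom()
    z, w = min(x, y), x - y
    hits_z += (z == 0)
    hits_w += (w == 1)
    hits_zw += (z == 0 and w == 1)

# (9-66): the joint frequency should factor into the product of the marginals
print(hits_zw / N, (hits_z / N) * (hits_w / N))  # both near p*p*q = 0.096
```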
(b) Let
$$Z = \min(X,Y), \quad R = \max(X,Y) - \min(X,Y). \tag{9-67}$$
In this case both $Z$ and $R$ take nonnegative integer values $0, 1, 2, \ldots$. Proceeding as in (9-62)-(9-63), we get
P{Z  m, R  n}  P{min( X , Y )  m, max( X , Y )  min( X , Y )  n, X  Y }
 P{min( X , Y )  m, max( X , Y )  min( X , Y )  n, X  Y }
 P{Y  m, X  m  n, X  Y )  P ( X  m, Y  m  n, X  Y }
 P{ X  m  n, Y  m, X  Y )  P ( X  m, Y  m  n, X  Y }
 pq m  n pq m  pq m pq m  n , m  0, 1, 2,, n  1, 2,
 mn
 pq pq m
, m  0, 1, 2,, n0
2 p 2 q 2 m  n , m  0, 1, 2,, n  1, 2,
  2 2m
 p q , m  0, 1, 2,, n  0. (9-68)

Eq. (9-68) represents the joint probability mass function of Z and R


in (9-67). From (9-68),
 

P( Z  m)   P{Z  m, R  n}  p 2 q 2 m (1  2 q n )  p 2 q 2 m 1  p
n 0 n 1
2q

 p(1  q)q 2 m , m  0, 1, 2, (9-69) 25
PILLAI
and
$$P(R = n) = \sum_{m=0}^{\infty} P\{Z = m, R = n\} = \begin{cases} \dfrac{p}{1 + q}, & n = 0, \\[2mm] \dfrac{2p}{1 + q}\,q^n, & n = 1, 2, \ldots \end{cases} \tag{9-70}$$
From (9-68)-(9-70), we get
$$P(Z = m, R = n) = P(Z = m)\,P(R = n), \tag{9-71}$$
which proves the independence of the random variables $Z$ and $R$ defined in (9-67) as well.

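The independence claim (9-71) for part (b) can likewise be confirmed empirically (same sketch setup as before, with the illustrative choice $p = 0.4$):

```python
import random

# Empirical check of (9-71) with p = 0.4 (illustrative choice)
p = 0.4
rng = random.Random(5)

def geom():
    # P(X = k) = p * (1 - p)**k, k = 0, 1, 2, ... (failures before a success)
    k = 0
    while rng.random() >= p:
        k += 1
    return k

N = 300_000
zc = rc = joint = 0
for _ in range(N):
    x, y = geom(), geom()
    z, r = min(x, y), max(x, y) - min(x, y)
    zc += (z == 1)
    rc += (r == 2)
    joint += (z == 1 and r == 2)

# (9-71): the joint frequency should factor into the product of the marginals
print(joint / N, (zc / N) * (rc / N))  # both near 2*p*p*q**4 ~ 0.0415
```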