Lect8 PDF
It is important to know the statistics of the incoming signal for proper receiver design. In this context, we shall analyze problems of the following type: given two r.vs X and Y, determine the p.d.f of Z = g(X, Y), where g(x, y) may be any of

    X + Y,   X − Y,   XY,   X / Y,
    max(X, Y),   min(X, Y),   X² + Y²,   tan⁻¹(X / Y).      (8-2)

In every case the starting point is the distribution function

    F_Z(z) = P(g(X, Y) ≤ z) = ∫∫_{D_z} f_XY(x, y) dx dy,      (8-3)

where D_z is the region of the xy-plane where g(x, y) ≤ z (Fig. 8.1).
Example 8.1: Z = X + Y. Find f_Z(z).

Solution: The region x + y ≤ z is the half-plane to the left of the line x = z − y (Fig. 8.2), so that

    F_Z(z) = P(X + Y ≤ z) = ∫_{y=−∞}^{+∞} ∫_{x=−∞}^{z−y} f_XY(x, y) dx dy.      (8-4)

Fig. 8.2: the region x + y ≤ z, bounded by the line x = z − y.
We can find f_Z(z) by differentiating F_Z(z) directly. In this context, it is useful to recall the differentiation rule in (7-15) - (7-16) due to Leibniz. Suppose

    H(z) = ∫_{a(z)}^{b(z)} h(x, z) dx.      (8-5)

Then

    dH(z)/dz = (db(z)/dz) h(b(z), z) − (da(z)/dz) h(a(z), z) + ∫_{a(z)}^{b(z)} ∂h(x, z)/∂z dx.      (8-6)

Using (8-6) in (8-4), with a = −∞, b(z) = z − y, and f_XY(x, y) independent of z, we get

    f_Z(z) = ∫_{y=−∞}^{+∞} ( ∂/∂z ∫_{x=−∞}^{z−y} f_XY(x, y) dx ) dy = ∫_{y=−∞}^{+∞} ( f_XY(z − y, y) − 0 + 0 ) dy
           = ∫_{y=−∞}^{+∞} f_XY(z − y, y) dy.      (8-7)

If X and Y are independent, so that f_XY(x, y) = f_X(x) f_Y(y), then (8-7) reduces to

    f_Z(z) = ∫_{−∞}^{+∞} f_X(z − y) f_Y(y) dy = ∫_{−∞}^{+∞} f_X(x) f_Y(z − x) dx.      (8-8)
The above integral is the standard convolution of the functions f_X and f_Y, expressed in two different ways. We thus reach the following conclusion: if two r.vs are independent, then the density of their sum equals the convolution of their density functions.

As a special case, suppose that f_X(x) = 0 for x < 0 and f_Y(y) = 0 for y < 0. Then we can make use of Fig. 8.4 to determine the new limits for D_z.
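As a quick numerical illustration (a NumPy sketch, not part of the original notes), the convolution rule can be checked on two standard normal densities, whose sum is N(0, 2):

```python
import numpy as np

# Discrete approximation of the convolution of f_X and f_Y for
# X, Y ~ N(0, 1) independent; then Z = X + Y ~ N(0, 2).
dx = 0.01
x = np.arange(-10, 10, dx)                     # 2000 grid points
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)     # f_X = f_Y, standard normal

f_Z = np.convolve(f, f) * dx                   # full convolution, length 3999
# Grid point n of the convolution corresponds to z = -20 + n*dx,
# so n = 2000 corresponds to z = 0.
err = abs(f_Z[2000] - 1 / np.sqrt(4 * np.pi))  # exact N(0, 2) density at z = 0
total = f_Z.sum() * dx                         # should be ~1 (a valid density)
print(err, total)
```

The discrete convolution approximates the convolution integral up to the grid spacing dx.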
Fig. 8.4: for z > 0, D_z is the triangle bounded by the coordinate axes and the line x = z − y, which has intercepts (z, 0) and (0, z).
In that case,

    F_Z(z) = ∫_{y=0}^{z} ∫_{x=0}^{z−y} f_XY(x, y) dx dy,

or

    f_Z(z) = ∫_{y=0}^{z} f_XY(z − y, y) dy,  z > 0;      f_Z(z) = 0,  z ≤ 0.      (8-12)
On the other hand, by considering vertical strips first in Fig. 8.4, we get

    F_Z(z) = ∫_{x=0}^{z} ∫_{y=0}^{z−x} f_XY(x, y) dy dx,

or, if X and Y are independent random variables,

    f_Z(z) = ∫_{x=0}^{z} f_XY(x, z − x) dx = ∫_{x=0}^{z} f_X(x) f_Y(z − x) dx,  z > 0;      f_Z(z) = 0,  z ≤ 0.      (8-13)
Example 8.2: Suppose X and Y are independent exponential r.vs with common parameter λ, and let Z = X + Y. Determine f_Z(z).

Solution: We have

    f_X(x) = λ e^{−λx} U(x),    f_Y(y) = λ e^{−λy} U(y),      (8-14)

and we can make use of (8-13) to obtain the p.d.f of Z = X + Y:

    f_Z(z) = ∫_0^z λ² e^{−λx} e^{−λ(z−x)} dx = λ² e^{−λz} ∫_0^z dx = λ² z e^{−λz} U(z).      (8-15)
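A Monte Carlo sketch (NumPy; the sample size and seed are illustrative choices, not from the notes) agrees with (8-15), whose mean and variance are 2/λ and 2/λ²:

```python
import numpy as np

# Z = X + Y for independent exponential r.vs with parameter lam.
# By (8-15), f_Z(z) = lam^2 z e^(-lam z): a gamma-type density with
# mean 2/lam and variance 2/lam^2.
rng = np.random.default_rng(0)
lam = 2.0
n = 200_000
z = rng.exponential(1 / lam, n) + rng.exponential(1 / lam, n)

mean, var = z.mean(), z.var()
print(mean, var)   # ≈ 2/lam = 1.0 and ≈ 2/lam**2 = 0.5
```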
As another example, suppose X and Y are independent and uniformly distributed on (0, 1), and let Z = X + Y. As z grows, the line x = z − y sweeps across the unit square, and two cases arise (Fig. 8.5): (a) 0 < z < 1 and (b) 1 < z < 2.
For 0 ≤ z < 1,

    F_Z(z) = ∫_{y=0}^{z} ∫_{x=0}^{z−y} 1 dx dy = ∫_{y=0}^{z} (z − y) dy = z²/2,    0 ≤ z < 1.      (8-16)

For 1 ≤ z < 2, notice that it is easy to deal with the unshaded region. In that case,

    F_Z(z) = 1 − P(Z > z) = 1 − ∫_{y=z−1}^{1} ∫_{x=z−y}^{1} 1 dx dy
           = 1 − ∫_{y=z−1}^{1} (1 − z + y) dy = 1 − (2 − z)²/2,    1 ≤ z < 2.      (8-17)
Using (8-16) - (8-17), we obtain

    f_Z(z) = dF_Z(z)/dz = z,  0 ≤ z < 1;      f_Z(z) = 2 − z,  1 ≤ z < 2.      (8-18)

By direct convolution of f_X(x) and f_Y(y), we obtain the same result as above. In fact, for 0 ≤ z < 1 (Fig. 8.6(a)),

    f_Z(z) = ∫ f_X(z − x) f_Y(x) dx = ∫_0^z 1 dx = z.      (8-19)
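The same triangular result can be checked by simulation (a NumPy sketch, not part of the notes), using the c.d.f values from (8-16) - (8-17):

```python
import numpy as np

# Z = X + Y for independent uniform(0,1) r.vs has the triangular
# density (8-18); its c.d.f follows (8-16) and (8-17).
rng = np.random.default_rng(1)
n = 200_000
z = rng.random(n) + rng.random(n)

p_half = np.mean(z <= 0.5)         # F_Z(0.5) = 0.5**2 / 2 = 0.125 by (8-16)
p_three_half = np.mean(z <= 1.5)   # F_Z(1.5) = 1 - 0.5**2 / 2 = 0.875 by (8-17)
print(p_half, p_three_half)
```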
Fig. 8.6: (a) the overlap of f_Y(x) and f_X(z − x) for 0 ≤ z < 1; (b) the overlap for 1 ≤ z < 2, which gives f_Z(z) = ∫_{z−1}^{1} 1 dx = 2 − z, in agreement with (8-18); (c) the resulting triangular density f_Z(z) on (0, 2).
Example 8.3: Let Z = X − Y. Determine its p.d.f f_Z(z).

Solution: The region x − y ≤ z is the half-plane to the left of the line x = y + z (Fig. 8.7), so that

    F_Z(z) = P(X − Y ≤ z) = ∫_{y=−∞}^{+∞} ∫_{x=−∞}^{y+z} f_XY(x, y) dx dy,      (8-20)

and hence

    f_Z(z) = dF_Z(z)/dz = ∫_{y=−∞}^{+∞} ( ∂/∂z ∫_{x=−∞}^{y+z} f_XY(x, y) dx ) dy = ∫_{y=−∞}^{+∞} f_XY(y + z, y) dy.      (8-21)

Fig. 8.7: the region x ≤ y + z.
As a special case, suppose f_X(x) = 0 for x < 0 and f_Y(y) = 0 for y < 0. Then for z < 0 (Fig. 8.8(b)), the strip 0 ≤ x ≤ y + z is nonempty only for y ≥ −z, so that

    F_Z(z) = ∫_{y=−z}^{+∞} ∫_{x=0}^{y+z} f_XY(x, y) dx dy,    z < 0.      (8-22)
Example 8.4: Given Z = X / Y, obtain its density function.

Solution: We have F_Z(z) = P(X / Y ≤ z). (8-24)

The inequality X / Y ≤ z can be rewritten as X ≤ Yz if Y > 0, and X ≥ Yz if Y < 0. Hence the event (X / Y ≤ z) in (8-24) needs to be conditioned by the event A = (Y > 0) and its complement Ā. Since A ∪ Ā = S, the whole space, by the partition theorem we have

    {X / Y ≤ z} = {(X / Y ≤ z) ∩ (A ∪ Ā)} = {(X / Y ≤ z) ∩ A} ∪ {(X / Y ≤ z) ∩ Ā}
                = {X ≤ Yz, Y > 0} ∪ {X ≥ Yz, Y < 0}.

Fig. 8.9: (a) the region x ≤ yz, y > 0; (b) the region x ≥ yz, y < 0.
Integrating over these two regions, we get

    F_Z(z) = ∫_{y=0}^{+∞} ∫_{x=−∞}^{yz} f_XY(x, y) dx dy + ∫_{y=−∞}^{0} ∫_{x=yz}^{+∞} f_XY(x, y) dx dy.      (8-26)

Fig. 8.10: the regions of integration in (8-26).
Differentiating (8-26) with respect to z gives

    f_Z(z) = ∫_0^{+∞} y f_XY(yz, y) dy − ∫_{−∞}^{0} y f_XY(yz, y) dy = ∫_{−∞}^{+∞} |y| f_XY(yz, y) dy.      (8-27)

In particular, if X and Y are nonnegative r.vs, only the first region contributes:

    F_Z(z) = ∫_{y=0}^{+∞} ∫_{x=0}^{yz} f_XY(x, y) dx dy,

or

    f_Z(z) = ∫_0^{+∞} y f_XY(yz, y) dy,  z > 0;      f_Z(z) = 0,  otherwise.      (8-28)
Example 8.5: Let Z = X / Y, where X and Y are jointly normal random variables with zero mean, so that

    f_XY(x, y) = (1 / (2π σ₁ σ₂ √(1 − r²))) exp{ −(1 / (2(1 − r²))) ( x²/σ₁² − 2rxy/(σ₁σ₂) + y²/σ₂² ) }.      (8-29)

Substituting (8-29) into (8-27), the variable y enters the integrand only through a factor e^{−y²/(2σ₀²(z))}, where

    σ₀²(z) = (1 − r²) / ( z²/σ₁² − 2rz/(σ₁σ₂) + 1/σ₂² ).

Carrying out the integration, we obtain

    f_Z(z) = (σ₁ σ₂ √(1 − r²) / π) / ( σ₂² (z − r σ₁/σ₂)² + σ₁² (1 − r²) ),      (8-30)

which represents a Cauchy density centered at r σ₁/σ₂.
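For r = 0 and σ₁ = σ₂, the density in (8-30) reduces to the standard Cauchy density (1/π)/(1 + z²); a Monte Carlo sketch (NumPy, illustrative only):

```python
import numpy as np

# For independent zero-mean normals with equal variance, Z = X/Y is
# standard Cauchy, so F_Z(0) = 1/2 and F_Z(1) = 3/4.
rng = np.random.default_rng(2)
n = 200_000
z = rng.standard_normal(n) / rng.standard_normal(n)

p0 = np.mean(z <= 0.0)   # ≈ 0.5
p1 = np.mean(z <= 1.0)   # ≈ 0.75
print(p0, p1)
```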
Next, consider Z = X² + Y². The event {X² + Y² ≤ z} represents the interior of a circle of radius √z (Fig. 8.11), and hence

    F_Z(z) = ∫_{y=−√z}^{√z} ∫_{x=−√(z−y²)}^{√(z−y²)} f_XY(x, y) dx dy.      (8-33)

Differentiating with respect to z (using the Leibniz rule (8-6) on the inner integral), we get

    f_Z(z) = ∫_{−√z}^{√z} (1 / (2√(z − y²))) ( f_XY(√(z − y²), y) + f_XY(−√(z − y²), y) ) dy.      (8-34)

Fig. 8.11: the circular region x² + y² ≤ z.
Example 8.7: X and Y are independent normal r.vs with zero mean and common variance σ². Determine f_Z(z) for Z = X² + Y².

Solution: Direct substitution of (8-29) with r = 0, σ₁ = σ₂ = σ into (8-34) gives, since x² + y² = z on the circle,

    f_Z(z) = ∫_{−√z}^{√z} (1 / (2√(z − y²))) (2 / (2πσ²)) e^{−z/(2σ²)} dy = (e^{−z/(2σ²)} / (πσ²)) ∫_0^{√z} dy / √(z − y²)
           = (e^{−z/(2σ²)} / (πσ²)) ∫_0^{π/2} (√z cos θ / (√z cos θ)) dθ = (1 / (2σ²)) e^{−z/(2σ²)} U(z),      (8-35)

where we have substituted y = √z sin θ. Thus Z = X² + Y² is exponential with mean 2σ².

In a similar manner, for Z = √(X² + Y²) we have x² + y² = z² on the circle of radius z, and

    f_Z(z) = ∫_{−z}^{z} (z / √(z² − y²)) ( f_XY(√(z² − y²), y) + f_XY(−√(z² − y²), y) ) dy.      (8-36)

With the same independent normals, the substitution y = z sin θ gives

    f_Z(z) = (2z / (2πσ²)) e^{−z²/(2σ²)} · 2 ∫_0^{π/2} (z cos θ / (z cos θ)) dθ = (z / σ²) e^{−z²/(2σ²)} U(z),      (8-37)

which represents a Rayleigh density.
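Both results can be checked by simulation (a NumPy sketch, not part of the notes): X² + Y² should be exponential with mean 2σ², and √(X² + Y²) Rayleigh with mean σ√(π/2):

```python
import numpy as np

# For independent N(0, sigma^2) r.vs: X^2 + Y^2 is exponential with
# mean 2*sigma^2, and sqrt(X^2 + Y^2) is Rayleigh with
# mean sigma*sqrt(pi/2).
rng = np.random.default_rng(3)
sigma = 2.0
n = 200_000
x = sigma * rng.standard_normal(n)
y = sigma * rng.standard_normal(n)

m_exp = (x**2 + y**2).mean()          # ≈ 2*sigma**2 = 8
m_ray = np.sqrt(x**2 + y**2).mean()   # ≈ sigma*sqrt(pi/2) ≈ 2.507
print(m_exp, m_ray)
```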
As a result, for Θ = tan⁻¹ U, where U = X / Y is the standard Cauchy r.v obtained from (8-30) with r = 0, σ₁ = σ₂, we have

    f_Θ(θ) = f_U(tan θ) / |dθ/du| = (1/π) (1 / (tan²θ + 1)) sec²θ = 1/π,  −π/2 < θ < π/2;      0,  otherwise,      (8-40)

so that Θ is uniformly distributed on (−π/2, π/2).
For independent normal r.vs with common variance σ² but nonzero means, a similar computation for Z = √(X² + Y²) leads to

    f_Z(z) = (z e^{−(z²+μ²)/(2σ²)} / (2πσ²)) ( ∫_{−π/2}^{π/2} e^{zμ cos(θ−φ)/σ²} dθ + ∫_{π/2}^{3π/2} e^{zμ cos(θ−φ)/σ²} dθ )
           = (z / σ²) e^{−(z²+μ²)/(2σ²)} I₀(zμ/σ²),      (8-41)

where (μ, φ) are the polar coordinates of the mean point, and

    I₀(η) = (1/2π) ∫_0^{2π} e^{η cos θ} dθ = (1/π) ∫_0^π e^{η cos θ} dθ      (8-42)

is the modified Bessel function of the first kind and zeroth order. The density in (8-41) is known as the Rician p.d.f.
Next consider Z = max(X, Y), which equals X when X > Y and Y when X ≤ Y, so that

    F_Z(z) = P(max(X, Y) ≤ z) = P((X ≤ z, X > Y) ∪ (Y ≤ z, X ≤ Y)).

Fig. 8.12: (a) the region (X ≤ z, X > Y); (b) the region (Y ≤ z, X ≤ Y); (c) the total region.

Fig. 8.12 (c) represents the total region, and from there

    F_Z(z) = P(X ≤ z, Y ≤ z) = F_XY(z, z).      (8-46)

If X and Y are independent, F_Z(z) = F_X(z) F_Y(z), and hence

    f_Z(z) = F_X(z) f_Y(z) + f_X(z) F_Y(z).      (8-47)
Similarly,

    W = min(X, Y) = Y if X > Y,  X if X ≤ Y.      (8-48)

Thus

    F_W(w) = P(min(X, Y) ≤ w) = P((Y ≤ w, X > Y) ∪ (X ≤ w, X ≤ Y)).
Once again, the shaded areas in Fig. 8.13 (a)-(b) show the regions satisfying the above inequalities, and Fig. 8.13 (c) shows the overall region.

Fig. 8.13: (a) the region (Y ≤ w, X > Y), below y = w and to the right of x = y; (b) the region (X ≤ w, X ≤ Y), to the left of x = w and above x = y; (c) the overall region, whose complement is the quadrant beyond the corner point (w, w).

From Fig. 8.13 (c),

    F_W(w) = P(X ≤ w) + P(Y ≤ w) − P(X ≤ w, Y ≤ w) = F_X(w) + F_Y(w) − F_XY(w, w),

and hence, for independent X and Y,

    f_W(w) = f_X(w) + f_Y(w) − f_X(w) F_Y(w) − F_X(w) f_Y(w).
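A quick simulation check (a NumPy sketch, illustrative only) of the max/min distribution formulas for independent uniform r.vs:

```python
import numpy as np

# For independent uniform(0,1) X and Y: F_max(z) = F_X(z) F_Y(z) = z^2,
# and F_min(w) = F_X(w) + F_Y(w) - F_X(w) F_Y(w) = 1 - (1 - w)^2.
rng = np.random.default_rng(4)
n = 200_000
x = rng.random(n)
y = rng.random(n)

p_max = np.mean(np.maximum(x, y) <= 0.5)   # ≈ 0.5**2 = 0.25
p_min = np.mean(np.minimum(x, y) <= 0.5)   # ≈ 1 - 0.25 = 0.75
print(p_max, p_min)
```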
As an illustration, let X and Y be independent exponential r.vs with common parameter λ as in (8-14), and let Z = min(X, Y) / max(X, Y), so that 0 < Z ≤ 1. Accounting for the two regions X < Y and X > Y, we get

    f_Z(z) = ∫_0^∞ y f_XY(yz, y) dy + ∫_0^∞ x f_XY(x, xz) dx = ∫_0^∞ y ( f_XY(yz, y) + f_XY(y, yz) ) dy
           = ∫_0^∞ y λ² ( e^{−λ(yz+y)} + e^{−λ(y+yz)} ) dy = 2λ² ∫_0^∞ y e^{−λ(1+z)y} dy = (2 / (1+z)²) ∫_0^∞ u e^{−u} du
           = 2 / (1+z)²,  0 < z < 1;      0,  otherwise.      (8-54)

Fig. 8.15: f_Z(z) = 2/(1+z)², decreasing from 2 at z = 0 to 1/2 at z = 1.
Example 8.13 (Discrete Case): Let X and Y be independent Poisson random variables with parameters λ₁ and λ₂ respectively. Let Z = X + Y. Determine the p.m.f of Z.
Solution: Since X and Y both take integer values {0, 1, 2, …}, the same is true for Z. For any n = 0, 1, 2, …, the event X + Y = n gives only a finite number of options for X and Y. In fact, if X = 0, then Y must be n; if X = 1, then Y must be n − 1, etc. Thus the event {X + Y = n} is the union of (n + 1) mutually exclusive events A_k given by

    A_k = {X = k, Y = n − k},    k = 0, 1, 2, …, n.      (8-55)
As a result,

    P(Z = n) = P(X + Y = n) = Σ_{k=0}^{n} P(X = k, Y = n − k) = Σ_{k=0}^{n} P(X = k) P(Y = n − k)      (8-56)
             = Σ_{k=0}^{n} (e^{−λ₁} λ₁^k / k!) (e^{−λ₂} λ₂^{n−k} / (n − k)!) = (e^{−(λ₁+λ₂)} / n!) Σ_{k=0}^{n} (n! / (k! (n − k)!)) λ₁^k λ₂^{n−k}
             = e^{−(λ₁+λ₂)} (λ₁ + λ₂)^n / n!,    n = 0, 1, 2, …,      (8-57)

using the independence of X and Y and the binomial theorem.
Thus Z represents a Poisson random variable with parameter λ₁ + λ₂, indicating that the sum of independent Poisson random variables is also a Poisson random variable whose parameter is the sum of the parameters of the original random variables.
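This closure property is easy to check by simulation (a NumPy sketch, not part of the notes):

```python
import numpy as np

# The sum of independent Poisson(lam1) and Poisson(lam2) samples
# should be Poisson(lam1 + lam2), per (8-57).
rng = np.random.default_rng(5)
lam1, lam2 = 1.0, 2.0
n = 200_000
z = rng.poisson(lam1, n) + rng.poisson(lam2, n)

p0 = np.mean(z == 0)   # P(Z = 0) = e^{-(lam1+lam2)} = e^{-3} ≈ 0.0498
m = z.mean()           # E[Z] = lam1 + lam2 = 3
print(p0, m)
```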
As the last example illustrates, the above procedure for
determining the p.m.f of functions of discrete random
variables is somewhat tedious. As we shall see in Lecture 10,
the joint characteristic function can be used in this context
to solve problems of this type in an easier fashion.