(b) The moments μ_n of x are functions of v. Using (i), show that

μ_n(v) = (n(n − 1)/2) ∫_0^v μ_{n−2}(β) dβ
5-49 Show that, if x is an integer-valued random variable with moment function Γ(z) as in (5-113), then

P{x = k} = (1/2π) ∫_{−π}^{π} Γ(e^{jω}) e^{−jkω} dω
5-50 A biased coin is tossed and the first outcome is noted. The tossing is continued until the outcome is the complement of the first outcome, thus completing the first run. Let x denote the length of the first run. Find the p.m.f. of x, and show that
Find the mean and variance of x. The distribution in (b) is known as the hypergeometric distribution (see also Problem 3-5). The lottery distribution in (3-39) is an example of this distribution.
(c) In (b), let N → ∞, M → ∞, such that M/N → p, 0 < p < 1. Then show that the hypergeometric random variable can be approximated by a binomial random variable with parameters n and p, provided n ≪ N.
5-51 A box contains n white and m black marbles. Let x represent the number of draws needed for the rth white marble.
(a) If sampling is done with replacement, show that x has a negative binomial distribution with parameters r and p = n/(m + n). (b) If sampling is done without replacement, then show that
CHAPTER 6

TWO RANDOM VARIABLES
1 The region D is arbitrary, subject only to the mild condition that it can be expressed as a countable union or intersection of rectangles.
PROBABILITY AND RANDOM VARIABLES
FIGURE 6-1
PROPERTIES

1. The function F(x, y) is such that

F(−∞, y) = 0   F(x, −∞) = 0   F(∞, ∞) = 1

f(x, y) = ∂²F(x, y)/∂x ∂y    (6-5)
From this and property 1 it follows that

F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(α, β) dβ dα
JOINT STATISTICS. We shall now show that the probability that the point (x, y) is in a region D of the xy plane equals the integral of f(x, y) over D. In other words,

P{(x, y) ∈ D} = ∫∫_D f(x, y) dx dy    (6-7)

where {(x, y) ∈ D} is the event consisting of all outcomes ξ such that the point [x(ξ), y(ξ)] is in D.
F_x(x) = F(x, ∞)   F_y(y) = F(∞, y)    (6-9)

Setting y = ∞ in the first and x = ∞ in the second equation, we obtain (6-10) because
[see (6-9)]

f_x(x) = ∂F(x, ∞)/∂x   f_y(y) = ∂F(∞, y)/∂y

and hence

f_x(x) = ∫_{−∞}^{∞} f(x, y) dy   f_y(y) = ∫_{−∞}^{∞} f(x, y) dx    (6-10)
Conversely, given F(x, y) or f(x, y) as before, we can find two random variables x and y, defined in some space S, with distribution F(x, y) or density f(x, y). This can be done by extending the existence theorem of Sec. 4-3 to joint statistics.
Probability Masses

The probability that the point (x, y) is in a region D of the plane can be interpreted as the probability mass in this region. Thus the mass in the entire plane equals 1. The mass in the half-plane x ≤ x to the left of the line L_x of Fig. 6-2 equals F_x(x). The mass in the half-plane y ≤ y below the line L_y equals F_y(y). The mass in the doubly shaded quadrant {x ≤ x, y ≤ y} equals F(x, y).
Finally, the mass in the clear quadrant {x > x, y > y} equals

P{x > x, y > y} = 1 − F_x(x) − F_y(y) + F(x, y)    (6-15)
The probability mass in a region D equals the integral [see (6-7)]

∫∫_D f(x, y) dx dy

If, therefore, f(x, y) is a bounded function, it can be interpreted as surface mass density.
FIGURE 6-2
▶ Suppose that

f(x, y) = (1/2πσ²) e^{−(x² + y²)/2σ²}    (6-16)

We shall find the mass m in the circle x² + y² ≤ a². Inserting (6-16) into (6-7) and using the transformation

x = r cos θ   y = r sin θ

we obtain

m = (1/2πσ²) ∫_0^{2π} ∫_0^{a} e^{−r²/2σ²} r dr dθ = 1 − e^{−a²/2σ²}
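The mass in the circle can also be estimated by Monte Carlo sampling of the two independent Gaussians; the polar-coordinate integration above gives the closed form m = 1 − e^{−a²/2σ²}. A minimal sketch (the choice a = σ = 1 and the function name are illustrative, not from the text):

```python
import math
import random

def mass_in_circle(a, sigma, trials=200_000, seed=1):
    """Monte Carlo estimate of P{x**2 + y**2 <= a**2} for independent
    zero-mean Gaussians x and y with common variance sigma**2."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if rng.gauss(0, sigma) ** 2 + rng.gauss(0, sigma) ** 2 <= a * a
    )
    return hits / trials

a, sigma = 1.0, 1.0
exact = 1 - math.exp(-a * a / (2 * sigma * sigma))   # polar-coordinate result
estimate = mass_in_circle(a, sigma)
```

With 200,000 trials the estimate typically agrees with the closed form to about two decimal places.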
INDEPENDENCE. Two random variables x and y are called (statistically) independent if the events {x ∈ A} and {y ∈ B} are independent [see (2-40)], that is, if

P{x ∈ A, y ∈ B} = P{x ∈ A} P{y ∈ B}    (6-18)

where A and B are two arbitrary sets on the x and y axes, respectively.
Applying this to the events {x ≤ x} and {y ≤ y}, we conclude that, if the random variables x and y are independent, then

F(x, y) = F_x(x) F_y(y)    (6-19)

Hence

f(x, y) = f_x(x) f_y(y)    (6-20)

It can be shown that, if (6-19) or (6-20) is true, then (6-18) is also true; that is, the random variables x and y are independent [see (6-7)].
EXAMPLE 6-2
BUFFON'S NEEDLE ▶ A fine needle of length 2a is dropped at random on a board covered with parallel lines distance 2b apart, where b > a, as in Fig. 6-3a. We shall show that the probability p that the needle intersects one of the lines equals 2a/πb.

In terms of random variables the experiment just discussed can be phrased as follows: We denote by x the distance from the center of the needle to the nearest line and by θ the angle between the needle and the direction perpendicular to the lines. We assume that the random variables x and θ are independent, x is uniform in the interval (0, b), and θ is uniform in the interval (0, π/2). From this it follows that

f(x, θ) = f_x(x) f_θ(θ) = (1/b)(2/π)   0 ≤ x ≤ b, 0 ≤ θ ≤ π/2

and 0 elsewhere. Hence the probability that the point (x, θ) is in a region D included in the rectangle R of Fig. 6-3b equals the area of D times 2/πb.

The needle intersects the lines if x < a cos θ. Hence p equals the shaded area of Fig. 6-3b times 2/πb:

p = (2/πb) ∫_0^{π/2} a cos θ dθ = 2a/πb
FIGURE 6-3
This can be used to determine experimentally the number π using the relative frequency interpretation of p: If the needle is dropped n times and it intersects the lines n_i times, then

n_i/n ≈ p = 2a/πb   hence   π ≈ 2an/(b n_i)    ◀
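The experiment above is easy to simulate directly from the joint density of (x, θ); a minimal sketch (the function name and the values a = 1, b = 2 are arbitrary choices for illustration):

```python
import math
import random

def buffon_estimate(a, b, n, seed=2):
    """Simulate n needle drops: x ~ uniform(0, b) is the distance from the
    needle's center to the nearest line, theta ~ uniform(0, pi/2) its angle.
    The needle (half-length a) crosses a line when x < a*cos(theta)."""
    assert a <= b
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if rng.uniform(0, b) < a * math.cos(rng.uniform(0, math.pi / 2))
    )
    return hits / n        # relative frequency n_i/n, should approach 2a/(pi*b)

a, b, n = 1.0, 2.0, 500_000
p_hat = buffon_estimate(a, b, n)
pi_hat = 2 * a / (b * p_hat)   # the estimate pi ~ 2an/(b*n_i)
```

Half a million drops usually pin π down to within a few hundredths.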
THEOREM. If the random variables x and y are independent, then the random variables

z = g(x)   w = h(y)

are also independent.

Proof. We denote by A_z the set of points on the x axis such that g(x) ≤ z and by B_w the set of points on the y axis such that h(y) ≤ w. Clearly,

{z ≤ z} = {x ∈ A_z}   {w ≤ w} = {y ∈ B_w}    (6-21)

Therefore the events {z ≤ z} and {w ≤ w} are independent because the events {x ∈ A_z} and {y ∈ B_w} are independent. ◀
If the experiments S₁ and S₂ are independent, then the random variables x and y are independent.

Proof. We denote by A_x the set {x ≤ x} in S₁ and by B_y the set {y ≤ y} in S₂. In the space S₁ × S₂,

{x ≤ x} = A_x × S₂   {y ≤ y} = S₁ × B_y

From the independence of the two experiments, it follows that [see (3-4)] the events A_x × S₂ and S₁ × B_y are independent. Hence the events {x ≤ x} and {y ≤ y} are also independent. ◀
JOINT NORMALITY. We shall say that the random variables x and y are jointly normal if their joint density is given by

f(x, y) = A exp{ −(1/2(1 − r²)) [ (x − η₁)²/σ₁² − 2r(x − η₁)(y − η₂)/σ₁σ₂ + (y − η₂)²/σ₂² ] }    (6-23)

where A = 1/(2πσ₁σ₂√(1 − r²)). The marginal densities of x and y are then normal:

f_x(x) = (1/σ₁√2π) e^{−(x − η₁)²/2σ₁²}   f_y(y) = (1/σ₂√2π) e^{−(y − η₂)²/2σ₂²}    (6-26)
Proof. To prove this, we must show that if (6-23) is inserted into (6-10), the result is (6-26). The bracket in (6-23) can be written in the form

(x − η₁)²/σ₁² − 2r(x − η₁)(y − η₂)/σ₁σ₂ + (y − η₂)²/σ₂² = (x − η)²/σ₁² + (1 − r²)(y − η₂)²/σ₂²

where

η = η₁ + rσ₁(y − η₂)/σ₂

Hence

f_y(y) = A e^{−(y − η₂)²/2σ₂²} ∫_{−∞}^{∞} e^{−(x − η)²/2(1 − r²)σ₁²} dx

The last integral represents the area of a normal density with mean η and variance (1 − r²)σ₁². Therefore the last integral is a constant (independent of x and y) B = σ₁√(2π(1 − r²)). Therefore

f_y(y) = AB e^{−(y − η₂)²/2σ₂²}

And since f_y(y) is a density, its area must equal 1. This yields AB = 1/σ₂√2π, from which we obtain A = 1/(2πσ₁σ₂√(1 − r²)), proving (6-24) and the second equation in (6-26). The proof of the first equation is similar. ◀
Notes 1. From (6-26) it follows that if two random variables are jointly normal, they are also marginally normal. However, as the next example shows, the converse is not true.
2. Joint normality can be defined also as follows: Two random variables x and y are jointly normal if the sum ax + by is normal for every a and b [see (7-56)].
EXAMPLE 6-3 ▶ We shall construct two random variables x and y that are marginally but not jointly normal. Toward this, consider the function

f(x, y) = f_x(x) f_y(y) [1 + ρ{2F_x(x) − 1}{2F_y(y) − 1}]   |ρ| < 1    (6-27)

where f_x(x) and f_y(y) are two p.d.f.s with respective distribution functions F_x(x) and F_y(y). It is easy to show that f(x, y) ≥ 0 for all x, y, and

∫_{−∞}^{+∞} f(x, y) dy = f_x(x) + ρ(2F_x(x) − 1) f_x(x) (1/2) ∫_{−1}^{1} u du = f_x(x)

where we have made use of the substitution u = 2F_y(y) − 1. Similarly,

∫_{−∞}^{+∞} f(x, y) dx = f_y(y)

implying that f_x(x) and f_y(y) in (6-27) indeed represent the marginal p.d.f.s of x and y, respectively.

In particular, let f_x(x) and f_y(y) be normally distributed as in (6-26). In that case (6-27) represents a joint p.d.f. with normal marginals that is not jointly normal. ◀
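The collapse of the y-integral in (6-27) can be checked numerically. A minimal sketch with standard normal marginals and the arbitrary choice ρ = 0.5 (the function names are mine, not from the text):

```python
import math

def phi(x):
    """Standard normal p.d.f."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def f(x, y, rho=0.5):
    """The joint density of (6-27) with standard normal marginals."""
    return phi(x) * phi(y) * (1 + rho * (2 * Phi(x) - 1) * (2 * Phi(y) - 1))

def marginal_x(x, rho=0.5, lo=-8.0, hi=8.0, steps=4000):
    """Trapezoidal integration of f(x, y) over y."""
    h = (hi - lo) / steps
    s = 0.5 * (f(x, lo, rho) + f(x, hi, rho))
    for i in range(1, steps):
        s += f(x, lo + i * h, rho)
    return s * h

# The y-integral should collapse to phi(x), i.e., the marginal is exactly normal.
err = max(abs(marginal_x(x) - phi(x)) for x in (-2.0, -0.5, 0.0, 1.0, 2.5))
```

The discrepancy `err` is at quadrature-accuracy level, confirming that the perturbation term integrates to zero.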
Circular Symmetry

We say that the joint density of two random variables x and y is circularly symmetrical if it depends only on the distance from the origin, that is, if

f(x, y) = g(r)   r = √(x² + y²)    (6-28)

THEOREM. If the random variables x and y are circularly symmetrical and independent, then they are normal with zero mean and equal variance.

Proof. Since

∂g(r)/∂x = (dg(r)/dr)(∂r/∂x)   and   ∂r/∂x = x/r

differentiation of f(x, y) = f_x(x) f_y(y) = g(r) with respect to x gives

(x/r)(dg(r)/dr) = f_x′(x) f_y(y)   hence   (1/r)(g′(r)/g(r)) = f_x′(x)/(x f_x(x))    (6-30)
The right side of (6-30) is independent of y and the left side is a function of r = √(x² + y²). This shows that both sides are independent of x and y. Hence

(1/r)(g′(r)/g(r)) = α = constant

and (6-28) yields

f(x, y) = g(r) = A e^{αr²/2}    (6-31)

Thus the random variables x and y are normal with zero mean and variance σ² = −1/α. ◀
DISCRETE TYPE RANDOM VARIABLES. Suppose the random variables x and y are of discrete type taking the values x_i and y_k with respective probabilities

P{x = x_i} = p_i   P{y = y_k} = q_k    (6-32)

Their joint statistics are determined in terms of the joint probabilities

P{x = x_i, y = y_k} = p_ik    (6-33)

Clearly,

Σ_{i,k} p_ik = 1

because, as i and k take all possible values, the events {x = x_i, y = y_k} are mutually exclusive and their union equals the certain event.

We maintain that the marginal probabilities p_i and q_k can be expressed in terms of the joint probabilities p_ik:

p_i = Σ_k p_ik   q_k = Σ_i p_ik    (6-34)

This is the discrete version of (6-10).
Proof. The events {y = y_k} form a partition of S. Hence, as k ranges over all possible values, the events {x = x_i, y = y_k} are mutually exclusive and their union equals {x = x_i}. This yields the first equation in (6-34) [see (2-41)]. The proof of the second is similar.
POINT MASSES. If the random variables x and y are of discrete type taking the values x_i and y_k, then the probability masses are 0 everywhere except at the points (x_i, y_k). We have, thus, only point masses, and the mass at each point equals p_ik [see (6-33)]. The probability p_i = P{x = x_i} equals the sum of all masses p_ik on the line x = x_i, in agreement with (6-34).
If i = 1, …, M and k = 1, …, N, then the number of possible point masses on the plane equals MN. However, as Example 6-4 shows, some of these masses might be 0.
EXAMPLE 6-4 ▶ (a) In the fair-die experiment, x equals the number of dots shown and y equals twice this number:

x(f_i) = i   y(f_i) = 2i   i = 1, …, 6

In other words, x_i = i, y_k = 2k, and

p_ik = P{x = i, y = 2k} = { 1/6, i = k; 0, i ≠ k }

Thus there are masses only on the six points (i, 2i) and the mass of each point equals 1/6 (Fig. 6-4a).
(b) We toss the die twice, obtaining the 36 outcomes f_ik, and we define x and y such that x equals the first number that shows and y the second:

x(f_ik) = i   y(f_ik) = k   i, k = 1, …, 6

Thus x_i = i, y_k = k, and p_ik = 1/36. We have, therefore, 36 point masses (Fig. 6-4b), and the mass of each point equals 1/36. On the line x = i there are six points with total mass 1/6.
(c) Again the die is tossed twice, but now

x(f_ik) = |i − k|   y(f_ik) = i + k

In this case, x takes the values 0, 1, …, 5 and y the values 2, 3, …, 12. The number of possible points is 6 × 11 = 66; however, only 21 have positive masses (Fig. 6-4c). Specifically, if x = 0, then y = 2, or 4, …, or 12 because if x = 0, then i = k and y = 2i. There are, therefore, six mass points on this line and the mass of each point equals 1/36. If x = 1, then y = 3, or 5, …, or 11. Thus there are five mass points on the line x = 1 and the mass of each point equals 2/36. For example, if x = 1 and y = 7, then i = 3, k = 4, or i = 4, k = 3; hence P{x = 1, y = 7} = 2/36. ◀
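The count of 21 mass points in part (c) can be verified by brute-force enumeration of the 36 equally likely outcomes, taking x = |i − k| (consistent with the values 0, …, 5 listed in the text) and y = i + k:

```python
from collections import Counter

# Enumerate the 36 equally likely outcomes (i, k) of two fair-die tosses and
# tabulate the joint masses of x = |i - k|, y = i + k (in units of 1/36).
masses = Counter()
for i in range(1, 7):
    for k in range(1, 7):
        masses[(abs(i - k), i + k)] += 1

points = len(masses)                                  # number of mass points
on_x0 = [m for (x, y), m in masses.items() if x == 0]  # masses on the line x = 0
on_x1 = [m for (x, y), m in masses.items() if x == 1]  # masses on the line x = 1
```

The enumeration yields 21 points in all: six points of mass 1/36 on x = 0 and five points of mass 2/36 on x = 1, exactly as described.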
FIGURE 6-4
FIGURE 6-5
If the random variables x and y are of discrete type as in (6-33) and independent, then

p_ik = p_i q_k    (6-35)

This follows if we apply (6-19) to the events {x = x_i} and {y = y_k}. This is the discrete version of (6-20).
EXAMPLE 6-5 ▶ A die with P{f_i} = p_i is tossed twice and the random variables x and y are such that

x(f_ik) = i   y(f_ik) = k

Thus x equals the first number that shows and y equals the second; hence the random variables x and y are independent. This leads to the conclusion that

p_ik = P{x = i, y = k} = p_i p_k    ◀
F_z(z) = P{g(x, y) ≤ z} = ∫∫_{D_z} f_{xy}(x, y) dx dy    (6-37)

where D_z in the xy plane represents the region where the inequality g(x, y) ≤ z is satisfied (Fig. 6-7).
FIGURE 6-6 (typical functions g(x, y): x + y, x − y, x/y, max(x, y), min(x, y))
FIGURE 6-7
Note that D_z need not be simply connected. From (6-37), to determine F_z(z) it is enough to find the region D_z for every z and then evaluate the integral there.
We shall illustrate this method to determine the statistics of various functions of x and y.
z = x + y. In this case

F_z(z) = ∫_{−∞}^{∞} ∫_{−∞}^{z−y} f_{xy}(x, y) dx dy    (6-38)

since the region D_z of the xy plane where x + y ≤ z is the shaded area in Fig. 6-8 to the left of the line x + y = z. Integrating over the horizontal strip along the x axis first (inner integral) and then sliding that strip along the y axis from −∞ to +∞ (outer integral), we cover the entire shaded area.
We can find f_z(z) by differentiating F_z(z) directly. In this context it is useful to recall the differentiation rule due to Leibnitz. Suppose

F_z(z) = ∫_{a(z)}^{b(z)} f(x, z) dx    (6-39)

Then

f_z(z) = dF_z(z)/dz = (db(z)/dz) f(b(z), z) − (da(z)/dz) f(a(z), z) + ∫_{a(z)}^{b(z)} ∂f(x, z)/∂z dx    (6-40)
Using (6-40) in (6-38), we get

f_z(z) = ∫_{−∞}^{∞} ( f_{xy}(z − y, y) − 0 + ∫_{−∞}^{z−y} ∂f_{xy}(x, y)/∂z dx ) dy = ∫_{−∞}^{∞} f_{xy}(z − y, y) dy    (6-41)

since f_{xy}(x, y) does not depend on z. Alternatively, the integration in (6-38) can be carried out first along the y axis, yielding

f_z(z) = ∫_{−∞}^{∞} f_{xy}(x, z − x) dx
FIGURE 6-8
FIGURE 6-9
If the random variables x and y are independent, then f_{xy}(x, y) = f_x(x)f_y(y), and (6-41) yields

f_z(z) = ∫_{−∞}^{∞} f_x(z − y) f_y(y) dy = ∫_{−∞}^{∞} f_x(x) f_y(z − x) dx

This integral is the convolution of the functions f_x(z) and f_y(z), expressed in two different ways. We thus reach the following conclusion: If two random variables are independent, then the density of their sum equals the convolution of their densities.
As a special case, suppose that f_x(x) = 0 for x < 0 and f_y(y) = 0 for y < 0; then we can make use of Fig. 6-10 to determine the new limits for D_z. In that case

F_z(z) = ∫_{y=0}^{z} ∫_{x=0}^{z−y} f_{xy}(x, y) dx dy

or

f_z(z) = { ∫_0^z f_{xy}(z − y, y) dy, z > 0; 0, z ≤ 0 }    (6-44)

FIGURE 6-10
On the other hand, by considering vertical strips first in Fig. 6-10, we get

F_z(z) = ∫_{x=0}^{z} ∫_{y=0}^{z−x} f_{xy}(x, y) dy dx

or

f_z(z) = ∫_0^z f_{xy}(x, z − x) dx   z > 0    (6-45)
EXAMPLE 6-7 ▶ Suppose x and y are independent exponential random variables with common parameter λ:

f_x(x) = λ e^{−λx} U(x)   f_y(y) = λ e^{−λy} U(y)    (6-46)

Then, making use of (6-45), the p.d.f. of z = x + y is

f_z(z) = ∫_0^z λ e^{−λx} λ e^{−λ(z−x)} dx = λ² z e^{−λz} U(z)

As Example 6-8 shows, care should be taken in using the convolution formula for random variables with a finite range. ◀
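The convolution integral for two exponential densities can be checked numerically against the closed form λ²z e^{−λz} (the density of a gamma random variable, which is the standard result for the sum of two independent exponentials). A minimal sketch with the arbitrary choice λ = 1.5:

```python
import math

lam = 1.5   # arbitrary illustrative parameter

def fx(t):
    """Common exponential density with parameter lam."""
    return lam * math.exp(-lam * t) if t >= 0 else 0.0

def fz(z, steps=2000):
    """Trapezoidal evaluation of the convolution f_z(z) = int_0^z fx(z-y)fx(y) dy."""
    if z <= 0:
        return 0.0
    h = z / steps
    s = 0.5 * (fx(z) * fx(0) + fx(0) * fx(z))
    for i in range(1, steps):
        y = i * h
        s += fx(z - y) * fx(y)
    return s * h

# Closed form: lam**2 * z * exp(-lam * z) for z >= 0
err = max(abs(fz(z) - lam ** 2 * z * math.exp(-lam * z)) for z in (0.3, 1.0, 2.5))
```

Since the integrand is constant in y here, the quadrature agrees with the closed form essentially to machine precision.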
EXAMPLE 6-8 ▶ x and y are independent uniform random variables in the common interval (0, 1). Determine f_z(z), where z = x + y. Clearly,

z = x + y ⟹ 0 < z < 2

and as Fig. 6-11 shows there are two cases for which the shaded areas are quite different in shape; they should be considered separately.

FIGURE 6-11

For 0 ≤ z < 1,

F_z(z) = ∫_{y=0}^{z} ∫_{x=0}^{z−y} 1 dx dy = ∫_{y=0}^{z} (z − y) dy = z²/2   0 ≤ z < 1    (6-48)
For 1 ≤ z < 2, notice that it is easier to deal with the unshaded region. In that case,

F_z(z) = 1 − P{z > z} = 1 − ∫_{y=z−1}^{1} ∫_{x=z−y}^{1} 1 dx dy

= 1 − ∫_{y=z−1}^{1} (1 − z + y) dy = 1 − (2 − z)²/2   1 ≤ z < 2    (6-49)

Thus

f_z(z) = dF_z(z)/dz = { z, 0 ≤ z < 1; 2 − z, 1 ≤ z < 2; 0, otherwise }
By direct convolution of f_x(x) and f_y(y), we obtain the same result as above. In fact, for 0 ≤ z < 1 (Fig. 6-12a)

f_z(z) = ∫_0^z 1 dx = z    (6-51)

and for 1 ≤ z < 2 (Fig. 6-12b)
f_z(z) = ∫_{z−1}^{1} 1 dx = 2 − z    (6-52)

Fig. 6-12c shows f_z(z), which agrees with the convolution of the two rectangular waveforms as well. ◀
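The triangular result of (6-48) and (6-49) is easy to confirm by simulation; a minimal sketch comparing the derived distribution function with the empirical one:

```python
import random

def triangle_cdf(z):
    """F_z(z) for the sum of two independent uniform(0, 1) random variables,
    as derived in (6-48)/(6-49)."""
    if z <= 0:
        return 0.0
    if z < 1:
        return z * z / 2
    if z < 2:
        return 1 - (2 - z) ** 2 / 2
    return 1.0

rng = random.Random(3)
samples = [rng.random() + rng.random() for _ in range(200_000)]

def empirical_cdf(z):
    return sum(1 for s in samples if s <= z) / len(samples)
```

For example, `triangle_cdf(0.5)` gives 0.125 and the empirical value lands within sampling error of it.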
For z = x − y, a similar argument gives

f_z(z) = ∫_{−∞}^{∞} f_{xy}(z + y, y) dy    (6-53)

FIGURE 6-13
If x and y are nonnegative random variables, then after differentiation this gives

f_z(z) = { ∫_0^∞ f_{xy}(z + y, y) dy, z ≥ 0; ∫_{−z}^{∞} f_{xy}(z + y, y) dy, z < 0 }    (6-55)
z = x/y. Determine f_z(z).

SOLUTION
We have

F_z(z) = P{x/y ≤ z}    (6-56)

The inequality x/y ≤ z can be rewritten as x ≤ yz if y > 0, and x ≥ yz if y < 0. Hence the event {x/y ≤ z} in (6-56) needs to be conditioned by the event A = {y > 0} and its complement Ā. Since A ∪ Ā = S, by the partition theorem, we have

P{x/y ≤ z} = P{x/y ≤ z ∩ (A ∪ Ā)} = P{x/y ≤ z, y > 0} + P{x/y ≤ z, y < 0}
= P{x ≤ yz, y > 0} + P{x ≥ yz, y < 0}
Fig. 6-15a shows the area corresponding to the first term and Fig. 6-15b the area corresponding to the second term. Integrating over these two regions, we get
F_z(z) = ∫_{y=0}^{∞} ∫_{x=−∞}^{yz} f_{xy}(x, y) dx dy + ∫_{y=−∞}^{0} ∫_{x=yz}^{∞} f_{xy}(x, y) dx dy    (6-58)
FIGURE 6-15

FIGURE 6-16
Differentiation gives

f_z(z) = ∫_0^∞ y f_{xy}(yz, y) dy − ∫_{−∞}^0 y f_{xy}(yz, y) dy = ∫_{−∞}^{∞} |y| f_{xy}(yz, y) dy    (6-59)

Note that if x and y are nonnegative random variables, then the area of integration reduces to that of Fig. 6-16, and

f_z(z) = ∫_{y=0}^{∞} y f_{xy}(yz, y) dy    (6-60)
EXAMPLE 6-11 ▶ x and y are jointly normal random variables with zero mean and

f_{xy}(x, y) = (1/(2πσ₁σ₂√(1 − r²))) exp{ −(1/2(1 − r²)) (x²/σ₁² − 2rxy/σ₁σ₂ + y²/σ₂²) }    (6-61)

Show that the ratio z = x/y has a Cauchy density centered at rσ₁/σ₂.
SOLUTION
Inserting (6-61) into (6-59) and using the fact that f_{xy}(−x, −y) = f_{xy}(x, y), we obtain

f_z(z) = (2/(2πσ₁σ₂√(1 − r²))) ∫_0^∞ y e^{−y²/2σ₀²} dy = σ₀²/(πσ₁σ₂√(1 − r²))

where

1/σ₀² = (1/(1 − r²)) (z²/σ₁² − 2rz/σ₁σ₂ + 1/σ₂²)

Thus

f_z(z) = (σ₁σ₂√(1 − r²)/π) · 1/(σ₂²z² − 2rσ₁σ₂z + σ₁²)    (6-62)

which represents a Cauchy random variable centered at rσ₁/σ₂.
Integrating (6-62) from −∞ to z, we obtain the corresponding distribution function to be

F_z(z) = 1/2 + (1/π) arctan( (σ₂z − rσ₁)/(σ₁√(1 − r²)) )    (6-63)
As an application, we can use (6-63) to determine the probability masses m₁, m₂, m₃, and m₄ in the four quadrants of the xy plane for (6-61). From the symmetry f_{xy}(−x, −y) = f_{xy}(x, y) of (6-61), we have

m₁ = m₃   m₂ = m₄

But the second and fourth quadrants represent the region of the plane where x/y < 0. The probability that the point (x, y) is in that region equals, therefore, the probability that the random variable z = x/y is negative. Thus

m₂ + m₄ = P(z ≤ 0) = F_z(0) = 1/2 − (1/π) arctan( r/√(1 − r²) )

and

m₁ + m₃ = 1 − (m₂ + m₄) = 1/2 + (1/π) arctan( r/√(1 − r²) )
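The quadrant-mass formula can be checked by simulation. A minimal sketch (the usual construction y = ru + √(1 − r²)v from two independent standard normals is an implementation choice, not something from the text; σ₁ = σ₂ = 1 and r = 0.6 are arbitrary):

```python
import math
import random

def quadrant_mass_24(r, n=200_000, seed=4):
    """Monte Carlo estimate of m2 + m4 = P{x*y < 0} for jointly normal
    zero-mean x, y with correlation coefficient r (unit variances)."""
    rng = random.Random(seed)
    neg = 0
    for _ in range(n):
        u, v = rng.gauss(0, 1), rng.gauss(0, 1)
        x = u
        y = r * u + math.sqrt(1 - r * r) * v   # y has correlation r with x
        if x * y < 0:
            neg += 1
    return neg / n

r = 0.6
exact = 0.5 - math.atan(r / math.sqrt(1 - r * r)) / math.pi  # = 1/2 - arcsin(r)/pi
estimate = quadrant_mass_24(r)
```

For r = 0.6 the exact value is about 0.295, and the Monte Carlo estimate agrees to within sampling error.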
▶ Let x and y be independent gamma random variables with x ~ G(m, α) and y ~ G(n, α). Show that z = x/(x + y) has a beta distribution.

Note that 0 < z < 1, since x and y are nonnegative random variables, and

F_z(z) = P{ x/(x + y) ≤ z } = ∫_{y=0}^{∞} ∫_{x=0}^{yz/(1−z)} f_{xy}(x, y) dx dy

where we have made use of Fig. 6-16 and the equivalence x/(x + y) ≤ z ⟺ x ≤ yz/(1 − z). Differentiation with respect to z gives
f_z(z) = ∫_0^∞ (y/(1 − z)²) f_{xy}( yz/(1 − z), y ) dy

= ∫_0^∞ (y/(1 − z)²) ( (yz/(1 − z))^{m−1} y^{n−1} e^{−y/(1−z)α} ) / ( α^{m+n} Γ(m) Γ(n) ) dy

= ( z^{m−1} / ( (1 − z)^{m+1} α^{m+n} Γ(m) Γ(n) ) ) ∫_0^∞ y^{m+n−1} e^{−y/(1−z)α} dy

= ( Γ(m + n) / (Γ(m) Γ(n)) ) z^{m−1} (1 − z)^{n−1}   0 < z < 1

which is a beta distribution.
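The beta conclusion can be sanity-checked by sampling: z = x/(x + y) for gamma-distributed x and y should have the beta(m, n) mean m/(m + n). A minimal sketch (the function name and parameter values are mine; `random.gammavariate(shape, scale)` plays the role of G(shape, α)):

```python
import random

def beta_ratio_samples(m, n, alpha=2.0, trials=100_000, seed=5):
    """Sample z = x/(x+y) with independent x ~ G(m, alpha), y ~ G(n, alpha)."""
    rng = random.Random(seed)
    out = []
    for _ in range(trials):
        x = rng.gammavariate(m, alpha)
        y = rng.gammavariate(n, alpha)
        out.append(x / (x + y))
    return out

m, n = 3, 2
zs = beta_ratio_samples(m, n)
mean_z = sum(zs) / len(zs)   # beta(m, n) mean is m/(m + n) = 0.6
```

Note that the scale α cancels out of the ratio, as the derivation predicts.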
But x² + y² ≤ z represents the interior of a circle with radius √z, and hence (see Fig. 6-17)

F_z(z) = ∫∫_{x²+y²≤z} f_{xy}(x, y) dx dy

FIGURE 6-17
EXAMPLE 6-14 ▶ x and y are independent normal random variables with zero mean and common variance σ². Determine f_z(z) for z = x² + y².

SOLUTION
Using (6-67), we get

f_z(z) = ∫_{−√z}^{√z} (1/(2√(z − y²))) (2/2πσ²) e^{−((z−y²)+y²)/2σ²} dy

= (e^{−z/2σ²}/πσ²) ∫_0^{√z} (1/√(z − y²)) dy = (e^{−z/2σ²}/πσ²) ∫_0^{π/2} (√z cos θ / (√z cos θ)) dθ

= (1/2σ²) e^{−z/2σ²} U(z)    (6-68)

where we have used the substitution y = √z sin θ. From (6-68), we have the following conclusion: If x and y are independent zero mean Gaussian random variables with common variance σ², then x² + y² is an exponential random variable with parameter 2σ². ◀
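The conclusion above (x² + y² exponential with mean 2σ²) is straightforward to verify by simulation; a minimal sketch with the arbitrary choice σ = 1:

```python
import math
import random

def sq_radius_samples(sigma, n=200_000, seed=6):
    """Sample z = x**2 + y**2 for independent zero-mean Gaussians with
    common variance sigma**2; z should be exponential with mean 2*sigma**2."""
    rng = random.Random(seed)
    return [rng.gauss(0, sigma) ** 2 + rng.gauss(0, sigma) ** 2
            for _ in range(n)]

sigma = 1.0
zs = sq_radius_samples(sigma)
mean_z = sum(zs) / len(zs)                            # expect 2*sigma**2 = 2
frac_below = sum(1 for z in zs if z <= 2) / len(zs)   # expect 1 - e**(-1)
expected_frac = 1 - math.exp(-1)
```

Both the sample mean and the fraction of samples below 2σ² match the exponential law to within sampling error.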
EXAMPLE 6-15 ▶ For z = √(x² + y²), a similar argument gives

f_z(z) = z ∫_{−z}^{z} ( f_{xy}(√(z² − y²), y) + f_{xy}(−√(z² − y²), y) ) / √(z² − y²) dy   z > 0    (6-69)

In particular, if x and y are zero mean independent Gaussian random variables as in the previous example, then
f_z(z) = 2 ∫_0^z ( z/√(z² − y²) ) (2/2πσ²) e^{−((z²−y²)+y²)/2σ²} dy

= (2z/πσ²) e^{−z²/2σ²} ∫_0^z (1/√(z² − y²)) dy = (2z/πσ²) e^{−z²/2σ²} ∫_0^{π/2} (z cos θ / (z cos θ)) dθ

= (z/σ²) e^{−z²/2σ²} U(z)    (6-70)

where we have again used the substitution y = z sin θ. This is the Rayleigh density. Thus, if w = x + jy, with x and y its real and imaginary parts, then
the random variable |w| = √(x² + y²) has a Rayleigh density. w is said to be a complex Gaussian random variable with zero mean if its real and imaginary parts are independent. So far we have seen that the magnitude of a complex Gaussian random variable has a Rayleigh distribution. What about its phase

θ = tan⁻¹(y/x)    (6-71)

Clearly, the principal value of θ lies in the interval (−π/2, π/2). If we let u = tan θ = y/x, then from Example 6-11, u has a Cauchy distribution [see (6-62) with σ₁ = σ₂, r = 0]

f_u(u) = (1/π)/(1 + u²)   −∞ < u < ∞

and hence θ = arctan u is uniform in (−π/2, π/2).
To summarize: the magnitude and phase of a zero mean complex Gaussian random variable have Rayleigh and uniform distributions, respectively. Interestingly, as we will show later (Example 6-22), these two derived random variables are also statistically independent of each other! ◀
Let us reconsider Example 6-15, where x and y are independent Gaussian random variables with nonzero means μ_x and μ_y, respectively. Then z = √(x² + y²) is said to be a Rician random variable. Such a scenario arises in fading multipath situations where there is a dominant constant component (mean) in addition to a zero mean Gaussian random variable. The constant component may be the line-of-sight signal, and the zero mean Gaussian part could be due to random multipath components adding up incoherently. The envelope of such a signal is said to be Rician instead of Rayleigh.
EXAMPLE 6-16 ▶ Redo Example 6-15, where x and y are independent Gaussian random variables with nonzero means μ_x and μ_y, respectively.

SOLUTION
Since

f_{xy}(x, y) = (1/2πσ²) e^{−[(x − μ_x)² + (y − μ_y)²]/2σ²}

substituting this into (6-69) and letting y = z sin θ, μ = √(μ_x² + μ_y²), μ_x = μ cos φ, μ_y = μ sin φ, we get

f_z(z) = (z/σ²) e^{−(z² + μ²)/2σ²} I₀(zμ/σ²) U(z)    (6-74)

where

I₀(η) ≜ (1/2π) ∫_0^{2π} e^{η cos(θ − φ)} dθ = (1/π) ∫_0^{π} e^{η cos θ} dθ

is the modified Bessel function of the first kind and zeroth order. ◀
Order Statistics

In general, given any n-tuple x₁, x₂, …, x_n, we can rearrange the values in increasing order of magnitude such that

x_(1) ≤ x_(2) ≤ ⋯ ≤ x_(n)

where x_(1) = min(x₁, x₂, …, x_n), x_(2) is the second smallest value among x₁, x₂, …, x_n, and finally x_(n) = max(x₁, x₂, …, x_n). The functions min and max are nonlinear operators, and represent special cases of the more general order statistics. If x₁, x₂, …, x_n represent random variables, the function x_(k) that takes on the value x_(k) in each possible sequence (x₁, x₂, …, x_n) is known as the kth-order statistic. {x_(1), x_(2), …, x_(n)} represents the set of order statistics among n random variables. In this context

R = x_(n) − x_(1)    (6-75)

represents the range, and when n = 2, we have the max and min statistics.
Order statistics are useful when the relative magnitude of observations is of importance. When worst-case scenarios have to be accounted for, the function max(·) is quite useful. For example, let x₁, x₂, …, x_n represent the recorded flood levels over the past n years at some location. If the objective is to construct a dam to prevent any more flooding, then the height H of the proposed dam should satisfy the inequality

H > max(x₁, x₂, …, x_n)    (6-76)

with some finite probability. In that case, the p.d.f. of the random variable on the right side of (6-76) can be used to compute the desired height. In another case, if a bulb manufacturer wants to determine the average time to failure (μ) of its bulbs based on a sample of size n, the sample mean (x₁ + x₂ + ⋯ + x_n)/n can be used as an estimate for μ. On the other hand, an estimate based on the least time to failure has other attractive features. This estimate, min(x₁, x₂, …, x_n), may not be as good as the sample mean in terms of their respective variances, but min(·) can be computed as soon as the first bulb fuses, whereas to compute the sample mean one needs to wait until the last of the lot burns out.
▶ Let z = max(x, y) and w = min(x, y). Determine f_z(z) and f_w(w).

SOLUTION
By definition,

z = max(x, y) = { x, x > y; y, x ≤ y }   w = min(x, y) = { y, x > y; x, x ≤ y }    (6-77)

FIGURE 6-18
FIGURE 6-19

Since {max(x, y) ≤ z} = {x ≤ z, y ≤ z}, we have

F_z(z) = P{x ≤ z, y ≤ z} = F_{xy}(z, z)    (6-79)
Similarly,

F_w(w) = P{min(x, y) ≤ w} = P{y ≤ w, x > y} + P{x ≤ w, x ≤ y}

Once again, the shaded areas in Figs. 6-19a and 6-19b show the regions satisfying these inequalities, and Fig. 6-19c shows them together. From Fig. 6-19c,

F_w(w) = 1 − P{w > w} = 1 − P{x > w, y > w} = F_x(w) + F_y(w) − F_{xy}(w, w)    (6-81)
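Both distribution functions are easy to confirm by simulation for independent uniform(0, 1) variables, where F_{xy}(z, z) = F_x(z)F_y(z); a minimal sketch (the check points 0.7 and 0.4 are arbitrary):

```python
import random

rng = random.Random(7)
n = 200_000
pairs = [(rng.random(), rng.random()) for _ in range(n)]  # independent uniform(0, 1)

z0, w0 = 0.7, 0.4
# F_z(z0) = F_x(z0)*F_y(z0) = 0.49 for the max of two independent uniforms
Fz = sum(1 for x, y in pairs if max(x, y) <= z0) / n
# F_w(w0) = F_x + F_y - F_x*F_y = 1 - (1 - w0)**2 = 0.64 for the min
Fw = sum(1 for x, y in pairs if min(x, y) <= w0) / n
```

The empirical values land within sampling error of 0.49 and 0.64, respectively.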
EXAMPLE 6-18 ▶ Let x and y be independent exponential random variables with common parameter λ. Define w = min(x, y). Find f_w(w).

SOLUTION
From (6-81),

F_w(w) = F_x(w) + F_y(w) − F_x(w)F_y(w)

and hence

f_w(w) = f_x(w) + f_y(w) − f_x(w)F_y(w) − F_x(w)f_y(w)

But f_x(w) = f_y(w) = λe^{−λw} and F_x(w) = F_y(w) = 1 − e^{−λw}, so that

f_w(w) = 2λe^{−λw} − 2(1 − e^{−λw})λe^{−λw} = 2λe^{−2λw} U(w)    (6-82)
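As derived above, the minimum of two independent exponential(λ) variables is exponential with parameter 2λ, hence mean 1/2λ; a minimal simulation sketch with the arbitrary choice λ = 0.5:

```python
import random

def min_exp_samples(lam, n=200_000, seed=8):
    """w = min(x, y) for independent exponential(lam) x and y; per (6-82),
    w should be exponential with parameter 2*lam."""
    rng = random.Random(seed)
    return [min(rng.expovariate(lam), rng.expovariate(lam)) for _ in range(n)]

lam = 0.5
ws = min_exp_samples(lam)
mean_w = sum(ws) / len(ws)   # exponential(2*lam) has mean 1/(2*lam) = 1
```

The sample mean sits within sampling error of 1/(2λ) = 1.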
FIGURE 6-20
f_z(z) = ∫_0^∞ yλ² ( e^{−λ(yz+y)} + e^{−λ(y+yz)} ) dy = { 2/(1 + z)², 0 < z < 1; 0, otherwise }    (6-84)
DISCRETE CASE ▶ Let x and y be independent Poisson random variables with parameters λ₁ and λ₂, respectively. Let z = x + y. Determine the p.m.f. of z.

SOLUTION
Since x and y both take the values 0, 1, 2, …, the same is true for z. For any n = 0, 1, 2, …, the event {x + y = n} gives only a finite number of options for x and y. In fact, if x = 0, then y must be n; if x = 1, then y must be n − 1, and so on. Thus the event {x + y = n} is the union of the mutually exclusive events A_k = {x = k, y = n − k}, k = 0, 1, …, n. Hence
P{z = n} = Σ_{k=0}^{n} P{x = k} P{y = n − k}

= Σ_{k=0}^{n} e^{−λ₁} (λ₁^k/k!) e^{−λ₂} (λ₂^{n−k}/(n − k)!)

= (e^{−(λ₁+λ₂)}/n!) Σ_{k=0}^{n} (n!/(k!(n − k)!)) λ₁^k λ₂^{n−k}

= e^{−(λ₁+λ₂)} (λ₁ + λ₂)^n / n!   n = 0, 1, 2, …    (6-86)
Thus z represents a Poisson random variable with parameter λ₁ + λ₂, indicating that the sum of independent Poisson random variables is a Poisson random variable whose parameter is the sum of the parameters of the original random variables.
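The discrete convolution in (6-86) can be verified exactly, with no simulation, by comparing it term by term with the Poisson p.m.f. of parameter λ₁ + λ₂ (the parameter values are arbitrary):

```python
import math

def poisson_pmf(lam, k):
    """P{x = k} for a Poisson random variable with parameter lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sum_pmf(l1, l2, n):
    """P{z = n} for z = x + y by convolving the two Poisson p.m.f.s."""
    return sum(poisson_pmf(l1, k) * poisson_pmf(l2, n - k) for k in range(n + 1))

l1, l2 = 1.2, 0.8
err = max(abs(sum_pmf(l1, l2, n) - poisson_pmf(l1 + l2, n)) for n in range(15))
```

The two p.m.f.s agree to floating-point precision, as the binomial-theorem step in the derivation guarantees.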
As Example 6-20 indicates, this procedure is quite tedious in the discrete case. As we shall see in Sec. 6-5, the joint characteristic function or the moment generating function can be used to solve problems of this type in a much easier manner. ◀
F_{zw}(z, w) = P{(x, y) ∈ D_{z,w}} = ∫∫_{(x,y)∈D_{z,w}} f_{xy}(x, y) dx dy    (6-89)

where D_{z,w} is the region in the xy plane such that the inequalities g(x, y) ≤ z and h(x, y) ≤ w are simultaneously satisfied, as in Fig. 6-21.
We illustrate this technique in Example 6-21.
EXAMPLE 6-21 ▶ Suppose x and y are independent, uniformly distributed random variables in the interval (0, θ). Define z = min(x, y), w = max(x, y). Determine f_{zw}(z, w).

SOLUTION
Obviously both z and w vary in the interval (0, θ). Thus

F_{zw}(z, w) = 0   if z < 0 or w < 0    (6-90)

F_{zw}(z, w) = P{z ≤ z, w ≤ w} = P{min(x, y) ≤ z, max(x, y) ≤ w}    (6-91)
FIGURE 6-21