
Multiple Discrete Random Variables
Introduction
Consider choosing a student at random from a population of college students. We wish to know his/her height, weight, blood pressure, pulse rate, etc.
The mappings from the sample space of students to the measurements of height and weight would be $H(s_i) = h_i$ and $W(s_i) = w_i$ for the student selected.

This information can be summarized in a two-dimensional array (a table) that lists the joint probability
$$P[H = h_i \text{ and } W = w_j]$$
Introduction
The information can also be displayed in a three-dimensional format.
These probabilities were termed joint probabilities. The height and weight can be represented as a $2 \times 1$ random vector
$$\begin{bmatrix} H \\ W \end{bmatrix}$$
We will study dependencies between the multiple RVs. For example: can we predict a person's height from his weight?
Jointly Distributed RVs
Consider two discrete RVs $X$ and $Y$. They represent the functions that map an outcome of an experiment $s_i$ to a value in the plane:
$$\begin{bmatrix} X(s_i) \\ Y(s_i) \end{bmatrix} = \begin{bmatrix} x_i \\ y_i \end{bmatrix} \quad \text{for all } s_i \in S$$
As an example, the experiment consists of the simultaneous tossing of a penny and a nickel.

Two random variables that are defined on the same sample space $S$ are said to be jointly distributed.
Jointly Distributed RVs
There are four vectors that comprise the sample space:
$$S_{X,Y} = \left\{ \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}$$
The values of the random vector (multiple random variables) are denoted either by $(x, y)$, an ordered pair/point in the plane, or by $[x\ y]^T$, a 2-D vector.
The size of the sample space for discrete RVs can be finite or countably infinite.
If $X$ can take on 2 values ($N_X = 2$) and $Y$ can take on 2 values ($N_Y = 2$), the total number of elements in $S_{X,Y}$ is $N_X N_Y = 4$.
Generally, if $S_X = \{x_1, x_2, \ldots, x_{N_X}\}$ and $S_Y = \{y_1, y_2, \ldots, y_{N_Y}\}$, then the random vector can take on values in
$$S_{X,Y} = S_X \times S_Y = \{(x_i, y_j) : i = 1, 2, \ldots, N_X;\ j = 1, 2, \ldots, N_Y\}$$
Jointly Distributed RVs
The notation $A \times B$, where $A$ and $B$ are sets, denotes the Cartesian product set, i.e., the set of all ordered pairs $(a, b)$ with $a \in A$ and $b \in B$.

We define the joint PMF (bivariate PMF) as
$$p_{X,Y}[x_i, y_j] = P[X(s) = x_i, Y(s) = y_j], \quad i = 1, 2, \ldots, N_X;\ j = 1, 2, \ldots, N_Y$$

Properties of joint PMF
Property 1. Range of values of joint PMF
$$0 \le p_{X,Y}[x_i, y_j] \le 1, \quad i = 1, 2, \ldots, N_X;\ j = 1, 2, \ldots, N_Y$$
Property 2. Sum of values of joint PMF
$$\sum_{i=1}^{N_X} \sum_{j=1}^{N_Y} p_{X,Y}[x_i, y_j] = 1$$
Similarly for a countably infinite sample space.
For two fair coins that do not interact as they are tossed we might assign $p_{X,Y}[i, j] = 1/4$ for $i = 0, 1;\ j = 0, 1$. This assignment satisfies both properties:
$$0 \le p_{X,Y}[i, j] \le 1 \quad \text{for } (i, j) = (0,0), (0,1), (1,0), (1,1)$$
$$\sum_{i=0}^{1} \sum_{j=0}^{1} p_{X,Y}[i, j] = 1$$
The procedure to determine the joint
PMF from the probabilities defined on S
The procedure depends on whether the RV mapping is one-to-one or many-to-one.
For a one-to-one mapping from $S$ to $S_{X,Y}$ we have
$$p_{X,Y}[x_i, y_j] = P[\{s_k\}]$$
where it is assumed that $s_k$ is the only solution to $X(s) = x_i$ and $Y(s) = y_j$.
For a many-to-one transformation the joint PMF is found as
$$p_{X,Y}[x_i, y_j] = \sum_{\{k :\, X(s_k) = x_i,\ Y(s_k) = y_j\}} P[\{s_k\}]$$
Two dice toss with different
colored dice
A red die and a blue die are tossed. The die that yields the
larger number of dots is chosen.
If both dice display the same number of dots, the red die is
chosen. The numerical outcome of the experiment is defined
to be 0 if the blue die is chosen and 1 if the red die is chosen,
along with its corresponding number of dots.
$$X = \begin{cases} 0 & \text{blue die chosen} \\ 1 & \text{red die chosen} \end{cases} \qquad Y = \text{number of dots on the chosen die}$$
What is $p_{X,Y}[1, 3]$, for example?


Two dice toss with different
colored dice
To determine the desired value of the PMF, we assume that each outcome in $S$ is equally likely, so each has probability $1/36$. Then
$$p_{X,Y}[1, 3] = \sum_{\{k :\, X(s_k) = 1,\ Y(s_k) = 3\}} P[\{s_k\}] = \sum_{\{k :\, X(s_k) = 1,\ Y(s_k) = 3\}} \frac{1}{36} = \frac{3}{36} = \frac{1}{12}$$
since there are three outcomes, (red, blue) = (3, 1), (3, 2), (3, 3), that map into $(1, 3)$.
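This value can be checked by brute-force enumeration of the 36 equally likely outcomes, applying the many-to-one formula above. A minimal Python sketch (the variable names and layout are our own):

```python
from fractions import Fraction

# Enumerate the 36 equally likely (red, blue) outcomes and apply the mapping:
# X = 1 if the red die is chosen (red >= blue, ties go to red), else X = 0;
# Y = number of dots on the chosen die.
joint = {}
for red in range(1, 7):
    for blue in range(1, 7):
        x = 1 if red >= blue else 0
        y = red if red >= blue else blue
        joint[(x, y)] = joint.get((x, y), Fraction(0)) + Fraction(1, 36)

print(joint[(1, 3)])  # 1/12, matching the hand computation
```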

In general, we can use the joint PMF to find the probability of an event $A$ defined on $S_{X,Y} = S_X \times S_Y$:
$$P[(X, Y) \in A] = \sum_{\{(i,j) :\, (x_i, y_j) \in A\}} p_{X,Y}[x_i, y_j]$$
Marginal PMFs and CDFs
If $p_{X,Y}[x_i, y_j]$ is known, then the marginal probabilities $p_X[x_i]$ and $p_Y[y_j]$ can be determined.
Consider the general case of calculating the probability of an event of interest $A$ on a countably infinite sample space:
$$P[(X, Y) \in A] = \sum_{\{(i,j) :\, (x_i, y_j) \in A\}} p_{X,Y}[x_i, y_j]$$
Let $A = \{x_k\} \times S_Y$. Then
$$P[(X, Y) \in \{x_k\} \times S_Y] = P[X = x_k, Y \in S_Y] = P[X = x_k] = p_X[x_k]$$
Since only the terms with $i = k$ contribute to the sum,
$$p_X[x_k] = \sum_{j=1}^{\infty} p_{X,Y}[x_k, y_j]$$
and similarly, with only $j = k$ contributing,
$$p_Y[y_k] = \sum_{i=1}^{\infty} p_{X,Y}[x_i, y_k]$$
Example: Two coin toss
A penny (RV $X$) and a nickel (RV $Y$) are tossed and the outcomes are mapped into a 1 for a head and a 0 for a tail. Consider the joint PMF
$$p_{X,Y}[i, j] = \begin{cases} 1/8 & i = 0, j = 0 \\ 1/8 & i = 0, j = 1 \\ 1/4 & i = 1, j = 0 \\ 1/2 & i = 1, j = 1 \end{cases}$$
(Note that the four values sum to 1.) The marginal PMFs are given as
$$p_X[i] = \sum_{j=0}^{1} p_{X,Y}[i, j] = \begin{cases} 1/8 + 1/8 = 1/4 & i = 0 \\ 1/4 + 1/2 = 3/4 & i = 1 \end{cases}$$
$$p_Y[j] = \sum_{i=0}^{1} p_{X,Y}[i, j] = \begin{cases} 1/8 + 1/4 = 3/8 & j = 0 \\ 1/8 + 1/2 = 5/8 & j = 1 \end{cases}$$
Each marginal PMF also sums to 1.
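Marginalization is simply a row or column sum over the joint PMF array. A minimal numpy sketch of the computation above (the array layout, rows for $X$ and columns for $Y$, is our own choice):

```python
import numpy as np

# Joint PMF with rows indexed by i (values of X) and columns by j (values of Y)
p_xy = np.array([[1/8, 1/8],
                 [1/4, 1/2]])

p_x = p_xy.sum(axis=1)  # sum over j: [1/4, 3/4]
p_y = p_xy.sum(axis=0)  # sum over i: [3/8, 5/8]
print(p_x, p_y)
```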
Joint PMF cannot be determined
from marginal PMFs
It is not possible in general to obtain the joint PMF from the marginal PMFs.
Consider a different joint PMF whose marginal PMFs are the same as the ones before; for instance, the product $p_X[i]\,p_Y[j]$ of the previous marginals is one such joint PMF.
There are an infinite number of joint PMFs that have the same marginal PMFs:
$$\text{joint PMF} \Longrightarrow \text{marginal PMFs}$$
$$\text{marginal PMFs} \not\Longrightarrow \text{joint PMF}$$


Joint cumulative distribution function
A joint cumulative distribution function (CDF) can be defined for a random vector as
$$F_{X,Y}(x, y) = P[X \le x, Y \le y]$$
and can be found explicitly by summing the joint PMF as
$$F_{X,Y}(x, y) = \sum_{\{(i,j) :\, x_i \le x,\ y_j \le y\}} p_{X,Y}[x_i, y_j]$$
The PMF can be recovered as
$$p_{X,Y}[x_i, y_j] = F_{X,Y}(x_i^+, y_j^+) - F_{X,Y}(x_i^+, y_j^-) - F_{X,Y}(x_i^-, y_j^+) + F_{X,Y}(x_i^-, y_j^-)$$
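For RVs defined on a finite grid, the joint CDF is a double cumulative sum of the PMF array, and the recovery formula above becomes a second-order difference. A minimal numpy sketch, reusing the coin-toss PMF from the earlier example:

```python
import numpy as np

p_xy = np.array([[1/8, 1/8],
                 [1/4, 1/2]])

# Joint CDF on the grid: cumulative sum along both axes
cdf = p_xy.cumsum(axis=0).cumsum(axis=1)

# Recover the PMF via the second-order difference; the zero padding below/left
# of the grid supplies the F(x^-, y^-) terms where the CDF is zero
padded = np.pad(cdf, ((1, 0), (1, 0)))
recovered = padded[1:, 1:] - padded[1:, :-1] - padded[:-1, 1:] + padded[:-1, :-1]
print(np.allclose(recovered, p_xy))  # True
```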
Properties of Cumulative
distribution functions
The marginal CDFs can be easily found from the joint CDF as
$$F_X(x) = P[X \le x] = P[X \le x, Y < \infty] = F_{X,Y}(x, \infty)$$
$$F_Y(y) = P[Y \le y] = P[X < \infty, Y \le y] = F_{X,Y}(\infty, y)$$
Property 1. Range of values
$$0 \le F_{X,Y}(x, y) \le 1$$
Property 2. Values at endpoints
$$F_{X,Y}(-\infty, -\infty) = 0, \qquad F_{X,Y}(\infty, \infty) = 1$$
Property 3. Monotonically increasing
$F_{X,Y}(x, y)$ monotonically increases as $x$ and/or $y$ increases.
Property 4. Right continuous
The joint CDF takes the value after the jump.
Independence of Multiple RV
Consider the experiment of tossing a coin and then a die. The outcome of the coin, $X \in \{0, 1\}$, and the outcome of the die, $Y \in \{1, 2, 3, 4, 5, 6\}$, are independent; hence the probability of the random vector $(X, Y)$ taking on a value $Y = y_j$ does not depend on $X = x_i$.
$X$ and $Y$ are independent random variables if all the joint events on $S_{X,Y}$ are independent:
$$P[X \in A, Y \in B] = P[X \in A]\, P[Y \in B]$$
The probability of joint events may then be reduced to probabilities of marginal events.
If $A = \{x_i\}$ and $B = \{y_j\}$, then
$$P[X \in A, Y \in B] = P[X = x_i, Y = y_j] = p_{X,Y}[x_i, y_j]$$
and
$$P[X \in A]\, P[Y \in B] = p_X[x_i]\, p_Y[y_j]$$
so independence implies $p_{X,Y}[x_i, y_j] = p_X[x_i]\, p_Y[y_j]$.
Independence of Multiple RV
If the joint PMF factors, then $X$ and $Y$ are independent.
To prove this, assume the joint PMF factors. Then for all $A$ and $B$,
$$P[X \in A, Y \in B] = \sum_{\{i :\, x_i \in A\}} \sum_{\{j :\, y_j \in B\}} p_{X,Y}[x_i, y_j]$$
$$= \sum_{\{i :\, x_i \in A\}} \sum_{\{j :\, y_j \in B\}} p_X[x_i]\, p_Y[y_j]$$
$$= \sum_{\{i :\, x_i \in A\}} p_X[x_i] \sum_{\{j :\, y_j \in B\}} p_Y[y_j] = P[X \in A]\, P[Y \in B]$$

Example: Two coin toss independence
Assume we toss a penny and a nickel. If all outcomes are equally likely, the joint PMF is given by
$$p_{X,Y}[i, j] = \frac{1}{4} = \frac{1}{2} \cdot \frac{1}{2} = p_X[i]\, p_Y[j]$$
where each factor of $1/2$ is a marginal probability, so $X$ and $Y$ are independent.
Independence of Multiple RV
Example: Two coin toss dependence
Consider the same experiment but with the joint PMF of the earlier marginal-PMF example (values $1/8$, $1/8$, $1/4$, $1/2$). Then
$$p_{X,Y}[0, 0] = \frac{1}{8} \ne \frac{1}{4} \cdot \frac{3}{8} = p_X[0]\, p_Y[0]$$
and hence $X$ and $Y$ cannot be independent.
If two random variables are not independent, they are said to be dependent.
Independence of Multiple RV
Example: Two coin toss — dependent but fair coins
Consider the same experiment again but with the joint PMF
$$p_{X,Y}[i, j] = \begin{cases} 3/8 & i = 0, j = 0 \\ 1/8 & i = 0, j = 1 \\ 1/8 & i = 1, j = 0 \\ 3/8 & i = 1, j = 1 \end{cases}$$
Since $p_{X,Y}[0, 0] = 3/8 \ne (1/2)(1/2) = p_X[0]\, p_Y[0]$, $X$ and $Y$ are dependent. However, by examining the marginal PMFs we see that the coins are in some sense fair, since $P[\text{heads}] = 1/2$ for each; thus we might conclude that the RVs are independent. This is incorrect.
If the RVs are independent, the joint CDF factors as well.
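Factorization can be tested numerically by comparing the joint PMF with the outer product of its marginals. A minimal sketch covering both of the cases above (the helper name is our own):

```python
import numpy as np

def is_independent(p_xy):
    """Check whether a joint PMF array factors into its marginals."""
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)
    return np.allclose(p_xy, np.outer(p_x, p_y))

fair_dependent = np.array([[3/8, 1/8],
                           [1/8, 3/8]])
independent    = np.full((2, 2), 1/4)

print(is_independent(fair_dependent))  # False: fair marginals, yet dependent
print(is_independent(independent))     # True
```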
Transformations of Multiple
Random Variables
The PMF of $Y = g(X)$, if the PMF of $X$ is known, is given by
$$p_Y[y_i] = \sum_{\{j :\, g(x_j) = y_i\}} p_X[x_j]$$
In the case of two discrete RVs $X$ and $Y$ that are transformed into $W = g(X, Y)$ and $Z = h(X, Y)$, we have
$$p_{W,Z}[w_i, z_j] = \sum_{\{(k,l) :\, g(x_k, y_l) = w_i,\ h(x_k, y_l) = z_j\}} p_{X,Y}[x_k, y_l], \quad i = 1, 2, \ldots, N_W;\ j = 1, 2, \ldots, N_Z$$
Sometimes we wish to determine the PMF of $Z = h(X, Y)$ only. Then we can use the auxiliary RV $W = X$, so that $p_Z$ is the marginal PMF and can be found from the formula above as
$$p_Z[z_j] = \sum_{\{i :\, w_i \in S_W\}} p_{W,Z}[w_i, z_j]$$
Direct computation of PMF for
transformed RV
Consider the transformation of the RV $(X, Y)$ into the scalar RV $Z = X^2 + Y^2$. The joint PMF is given by
$$p_{X,Y}[i, j] = \begin{cases} 3/8 & i = 0, j = 0 \\ 1/8 & i = 1, j = 0 \\ 1/8 & i = 0, j = 1 \\ 3/8 & i = 1, j = 1 \end{cases}$$
To find the PMF of $Z$, first note that $(X, Y)$ takes on the values $(i, j) = (0,0), (1,0), (0,1), (1,1)$. Therefore $Z$ must take on the values $z_k = i^2 + j^2 = 0, 1, 2$. Then
$$p_Z[0] = \sum_{\{(i,j) :\, i^2 + j^2 = 0\}} p_{X,Y}[i, j] = p_{X,Y}[0, 0] = 3/8$$
$$p_Z[1] = p_{X,Y}[0, 1] + p_{X,Y}[1, 0] = \frac{2}{8} \qquad p_Z[2] = p_{X,Y}[1, 1] = 3/8$$
Expected Values
If $Z = g(X, Y)$, then by definition its expected value is
$$E[Z] = \sum_i z_i\, p_Z[z_i]$$
or, using a more direct approach,
$$E_{X,Y}[Z] = \sum_i \sum_j g(x_i, y_j)\, p_{X,Y}[x_i, y_j]$$
Example: expected value of a weighted sum of random variables, $Z = g(X, Y) = aX + bY$:
$$E_{X,Y}[aX + bY] = \sum_i \sum_j (a x_i + b y_j)\, p_{X,Y}[x_i, y_j]$$
$$= a \sum_i \sum_j x_i\, p_{X,Y}[x_i, y_j] + b \sum_i \sum_j y_j\, p_{X,Y}[x_i, y_j]$$
$$= a \sum_i x_i\, p_X[x_i] + b \sum_j y_j\, p_Y[y_j] = a E_X[X] + b E_Y[Y]$$
Expected value of a product of RV
If $Z = g(X, Y) = XY$, then
$$E_{X,Y}[XY] = \sum_i \sum_j x_i y_j\, p_{X,Y}[x_i, y_j]$$
If $X$ and $Y$ are independent, then since the joint PMF factors we have
$$E_{X,Y}[XY] = \sum_i \sum_j x_i y_j\, p_X[x_i]\, p_Y[y_j] = \sum_i x_i\, p_X[x_i] \sum_j y_j\, p_Y[y_j] = E_X[X]\, E_Y[Y]$$
More generally, for independent $X$ and $Y$,
$$E_{X,Y}[g(X)h(Y)] = E_X[g(X)]\, E_Y[h(Y)]$$


Variance of a sum of RVs
Consider the calculation of $\mathrm{var}(X + Y)$. Letting $Z = g(X, Y) = (X + Y - E_{X,Y}[X + Y])^2$, we have
$$\mathrm{var}(X + Y) = E_{X,Y}[(X + Y - E_{X,Y}[X + Y])^2]$$
$$= E_{X,Y}[(X - E_X[X])^2 + (Y - E_Y[Y])^2 + 2(X - E_X[X])(Y - E_Y[Y])]$$
$$= \mathrm{var}(X) + \mathrm{var}(Y) + 2\,\mathrm{cov}(X, Y)$$
The last term involves the covariance, defined as
$$\mathrm{cov}(X, Y) = E_{X,Y}[(X - E_X[X])(Y - E_Y[Y])]$$
or, alternatively,
$$\mathrm{cov}(X, Y) = E_{X,Y}[XY] - E_X[X]\, E_Y[Y]$$
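These moment formulas translate directly into sums over the joint PMF array. A minimal sketch that computes $\mathrm{cov}(X, Y)$ and then $\mathrm{var}(X + Y)$ for the coin-toss PMF used earlier (array layout as before):

```python
import numpy as np

p_xy = np.array([[1/8, 1/8],
                 [1/4, 1/2]])
x = np.array([0, 1])  # values of X (rows)
y = np.array([0, 1])  # values of Y (columns)

p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
ex, ey = x @ p_x, y @ p_y

# cov(X, Y) = E[XY] - E[X]E[Y], with E[XY] a double sum over the joint PMF
exy = np.outer(x, y).ravel() @ p_xy.ravel()
cov = exy - ex * ey

var_x = (x - ex) ** 2 @ p_x
var_y = (y - ey) ** 2 @ p_y
print(var_x + var_y + 2 * cov)  # var(X + Y)
```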
Joint moments
The questions of interest:
If the outcome of one RV is a given value, what can we say about the outcome of the other RV?
Will it be about the same, or have no relationship to the other RV?
There is clearly a relationship between height and weight, for example.
Joint moments

To quantify these relationships we form the product $XY$, which can take on the values $+1$ and $-1$ for the joint PMF above. To determine the value of $XY$ on the average we define the joint moment $E_{X,Y}[XY]$:
$$E_{X,Y}[XY] = \sum_i \sum_j x_i y_j\, p_{X,Y}[x_i, y_j]$$
For case (a), with probability $1/2$ at each of the points $(1, 1)$ and $(-1, -1)$,
$$E_{X,Y}[XY] = (1)(1)\frac{1}{2} + (-1)(-1)\frac{1}{2} = 1$$
Joint moments
In the previous example $E_X[X] = E_Y[Y] = 0$. If the means are not zero, the joint moment will depend on their values; for the nonzero-mean version of the PMF considered,
$$E_{X,Y}[XY] = 2$$
To nullify this effect it is convenient to use the joint central moments
$$E_{X,Y}[(X - E_X[X])(Y - E_Y[Y])]$$
which will produce the desired $+1$ for the joint PMF above.
Independence implies zero covariance but
zero covariance does not imply independence
Consider the joint PMF which assigns equal probability of $1/4$ to each of the four points $(-1, 0), (0, -1), (0, 1), (1, 0)$.
For this joint PMF the covariance is zero, since
$$E_X[X] = -1\left(\frac{1}{4}\right) + 0\left(\frac{1}{2}\right) + 1\left(\frac{1}{4}\right) = 0$$
and similarly $E_Y[Y] = 0$, and thus
$$\mathrm{cov}(X, Y) = E_{X,Y}[XY] = \sum_{i=-1}^{1} \sum_{j=-1}^{1} ij\, p_{X,Y}[i, j] = 0$$
However, $X$ and $Y$ are dependent because $p_{X,Y}[1, 0] = 1/4$ but $p_X[1]\, p_Y[0] = (1/4)(1/2) = 1/8$.
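A quick numerical check of this counterexample, with the four-point PMF placed on the grid $\{-1, 0, 1\}^2$ (the array layout is our own):

```python
import numpy as np

vals = np.array([-1, 0, 1])
# Equal mass 1/4 at (-1,0), (0,-1), (0,1), (1,0); rows index x, columns index y
p_xy = np.array([[0,   1/4, 0  ],
                 [1/4, 0,   1/4],
                 [0,   1/4, 0  ]])

p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
cov = np.outer(vals, vals).ravel() @ p_xy.ravel() - (vals @ p_x) * (vals @ p_y)
print(cov)                                    # 0.0: zero covariance
print(np.allclose(p_xy, np.outer(p_x, p_y)))  # False: still dependent
```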
Some Practice problems
Two coins are tossed in succession with a head being mapped into a $+1$ and a tail mapped into a $-1$. If a random vector is defined as $(X, Y)$, with $X$ representing the mapping of the first toss and $Y$ representing the mapping of the second toss, draw the mapping. Also, what is $S_{X,Y}$? (Hint: see slides 4, 5.)
Two dice are tossed. The numbers of dots observed on the dice are added together to form the random variable $X$ and differenced to form $Y$. Determine the possible outcomes of the random vector $(X, Y)$ and plot them in the plane. How many possible outcomes are there?
Is $p_{X,Y}[i, j] = (1/2)^{i+j}$, for $i = 0, 1, \ldots$ and $j = 0, 1, \ldots$, a valid joint PMF? Find the marginal probabilities.
Some Practice problems
The values of a joint PMF are given below. Determine the marginal PMFs.
