
Outline

1 Preliminaries in Probability

2 Discrete Random Variables

3 Joint Distribution

4 Known Distributions

5 (a, b, 0) Class

6 Constructing New Distributions



Joint Distribution
Definition 12
Given two r.v.’s X and Y, the joint cumulative distribution function of X and Y is defined by

F(x, y) := P(X ≤ x, Y ≤ y),  ∀ x, y ∈ R

▶ We quickly observe that the CDFs of X and Y are given by

F_X(x) = F(x, ∞)  and  F_Y(y) = F(∞, y)
▶ If both X and Y are discrete r.v.’s, we define their joint probability (mass) function by

f(x, y) = P(X = x, Y = y)

▶ Two continuous r.v.’s X and Y are said to be jointly continuous if there exists a joint probability density function f(x, y) such that

P(X ∈ A, Y ∈ B) = ∫_B ∫_A f(x, y) dx dy,  ∀ A, B ⊂ R
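As a quick numerical illustration of the last bullet (a minimal sketch, not from the slides): assume the hypothetical joint density f(x, y) = e^(−x−y) for x, y > 0, i.e., two independent Exp(1) r.v.’s, and take A = B = (0, 1]. The double integral can then be checked against the closed form (1 − e^(−1))².

```python
# Sketch: computing P(X in A, Y in B) from a joint density by double integration.
# Hypothetical density (illustrative assumption): f(x, y) = exp(-x - y), x, y > 0.
import numpy as np
from scipy.integrate import dblquad

f = lambda y, x: np.exp(-x - y)  # dblquad integrates func(y, x) dy dx

# P(0 < X <= 1, 0 < Y <= 1) = integral over x in [0,1], y in [0,1]
prob, _err = dblquad(f, 0, 1, lambda x: 0, lambda x: 1)

print(prob)                   # ~0.39958
print((1 - np.exp(-1)) ** 2)  # closed form, same value
```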



Extensions
Now we extend to the case of n r.v.’s,

X = (X_1, X_2, ..., X_n),

where each X_i is a r.v.

The joint cumulative distribution function of the n-dimensional X is defined by

F(x_1, x_2, ..., x_n) = P(X_1 ≤ x_1, X_2 ≤ x_2, ..., X_n ≤ x_n)

▶ F(x_1, −∞, ..., x_n) = 0
▶ F(∞, ∞, ..., ∞) = 1

Question: Calculate P(a_1 < X_1 ≤ b_1, a_2 < X_2 ≤ b_2)

Draw a graph in R²
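For reference, a sketch of the answer (by inclusion–exclusion on the joint CDF; drawing the rectangle (a_1, b_1] × (a_2, b_2] in R² makes the signs visible):

P(a_1 < X_1 ≤ b_1, a_2 < X_2 ≤ b_2) = F(b_1, b_2) − F(a_1, b_2) − F(b_1, a_2) + F(a_1, a_2)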



Given two r.v.’s X and Y and a real function g(·, ·), we define a new r.v. g(X, Y) and its expected value by

E[g(X, Y)] = Σ_x Σ_y g(x, y) · f(x, y)          (discrete case)
E[g(X, Y)] = ∫_R ∫_R g(x, y) · f(x, y) dx dy    (continuous case)

▶ Consider g(x, y) = x + y and show that E[g(X, Y)] = E[X] + E[Y] (see the sketch after this list).


▶ The general version holds as well:

E[a_1 X_1 + a_2 X_2 + ··· + a_n X_n] = Σ_{i=1}^n a_i · E[X_i]

▶ Consider g(x, y) = x · y. We can compute E[XY] once f(x, y) is specified (see the sketch after this list).
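For reference, a sketch of the first bullet in the discrete case (the continuous case is analogous, with integrals in place of sums):

E[X + Y] = Σ_x Σ_y (x + y) f(x, y) = Σ_x x Σ_y f(x, y) + Σ_y y Σ_x f(x, y) = Σ_x x f_X(x) + Σ_y y f_Y(y) = E[X] + E[Y]

And a minimal numerical sketch of the last bullet, assuming a small hypothetical joint pmf (the table below is illustrative, not from the slides):

```python
# Sketch: computing E[X + Y] and E[XY] from a specified joint pmf.
# Hypothetical joint pmf f(x, y) on {0, 1} x {0, 1}; values are illustrative.
f = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

E_sum = sum((x + y) * p for (x, y), p in f.items())  # E[X + Y]
E_XY  = sum(x * y * p for (x, y), p in f.items())    # E[XY]

# Marginal means, to check linearity E[X + Y] = E[X] + E[Y]
E_X = sum(x * p for (x, y), p in f.items())
E_Y = sum(y * p for (x, y), p in f.items())

print(E_sum, E_X + E_Y)  # 1.1 1.1
print(E_XY)              # 0.4
```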



Covariance

Definition 13
The covariance of two r.v.’s X and Y, denoted by Cov(X, Y), is defined by

Cov(X, Y) := E[(X − E[X])(Y − E[Y])]

▶ Show that Cov(X, Y) = E[XY] − E[X] · E[Y] (see the expansion after this list).


▶ Cov(X, X) = V(X)
▶ Cov(X, Y) > 0 (resp. < 0) indicates that X and Y are positively (resp. negatively) correlated, i.e., an increase in one r.v. tends to be accompanied by an increase (resp. a decrease) in the other r.v.
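For reference, the expansion behind the first bullet, using linearity of expectation:

Cov(X, Y) = E[XY − X E[Y] − Y E[X] + E[X] E[Y]] = E[XY] − E[X] E[Y] − E[X] E[Y] + E[X] E[Y] = E[XY] − E[X] · E[Y]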
Let σ(X) and σ(Y) be the standard deviations of X and Y, respectively. We define the correlation coefficient of X and Y by

ρ := Cov(X, Y) / (σ(X) · σ(Y))

Question: What properties does ρ have?
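Toward the question: two standard properties are −1 ≤ ρ ≤ 1 (a consequence of the Cauchy–Schwarz inequality) and invariance of ρ under positive affine transformations of X and Y. A minimal simulation sketch (the distributions, noise scale, and sample size below are illustrative assumptions):

```python
# Sketch: estimating the correlation coefficient rho from simulated data.
# Y = X + noise, so X and Y should be positively correlated.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = x + rng.normal(scale=0.5, size=100_000)

cov = np.cov(x, y)[0, 1]  # sample Cov(X, Y)
rho = cov / (x.std(ddof=1) * y.std(ddof=1))

print(rho)                      # ~0.894, i.e., ~1/sqrt(1.25) in theory
print(np.corrcoef(x, y)[0, 1])  # same estimate via corrcoef
```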



Independence
Definition 14
Two r.v.’s X and Y are said to be independent if

F(x, y) = F_X(x) · F_Y(y),  ∀ x, y ∈ R

i.e., given any two arbitrary real numbers x and y, the two sets

A := {X ≤ x}  and  B := {Y ≤ y}

are independent (namely, P(A ∩ B) = P(A) · P(B)).


If X and Y are jointly continuous, then they are independent if

f(x, y) = f_X(x) · f_Y(y)

If two r.v.’s X and Y are independent, then for any real functions g(·) and h(·),

E[g(X) h(Y)] = E[g(X)] · E[h(Y)]

(such a conclusion can be easily generalized to the case of n r.v.’s).

One may generalize the definition of independence to the case of n r.v.’s:

F(x_1, x_2, ..., x_n) = Π_{i=1}^n F_{X_i}(x_i)
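A minimal simulation sketch of the product rule E[g(X)h(Y)] = E[g(X)] · E[h(Y)] (the choices g(x) = x², h(y) = e^y, and the distributions below are illustrative assumptions):

```python
# Sketch: for independent X and Y, E[g(X) h(Y)] = E[g(X)] * E[h(Y)].
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.normal(size=n)         # X ~ N(0, 1)
y = rng.uniform(0, 1, size=n)  # Y ~ U(0, 1), drawn independently of X

g = lambda x: x ** 2
h = lambda y: np.exp(y)

print(np.mean(g(x) * h(y)))           # ~ E[g(X) h(Y)]
print(np.mean(g(x)) * np.mean(h(y)))  # ~ E[g(X)] * E[h(Y)] = 1 * (e - 1)
```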
Uncorrelated ≠ Independent
Definition 15
Two r.v.’s X and Y are said to be uncorrelated if

ρ = 0  (or Cov(X, Y) = 0, or E[XY] = E[X] · E[Y])

▶ If X and Y are independent, then they are uncorrelated.
▶ However, the converse statement is not true.
▶ If X and Y are bivariate normal, then “uncorrelated” ⇒ “independent”.

Example 6
Let θ ∼ U(0, 2π) (uniform) and define X := cos(θ) and Y := sin(θ). We obtain

E[X] = E[Y] = E[XY] = 0  ⇒  ρ = 0,

implying that X and Y are uncorrelated. But it is easy to see that X² + Y² = 1, so they are clearly not independent.
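A quick simulation sketch of Example 6 (the sample size is an illustrative choice):

```python
# Sketch: X = cos(theta), Y = sin(theta) with theta ~ U(0, 2*pi):
# uncorrelated, yet functionally dependent through X^2 + Y^2 = 1.
import numpy as np

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, size=1_000_000)
x, y = np.cos(theta), np.sin(theta)

print(np.corrcoef(x, y)[0, 1])        # ~0: uncorrelated
print(np.corrcoef(x**2, y**2)[0, 1])  # -1: X^2 determines Y^2 exactly
```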
Example 7
Let X ∼ N(0, 1), a standard normal r.v., and let Λ be defined by

Λ = 1 with prob. 0.5  and  Λ = −1 with prob. 0.5

Suppose X and Λ are independent, and define a new r.v. Y by Y := ΛX. Show that:
(a) Y is also a standard normal r.v.;
(b) X and Y are uncorrelated.
Notice |X| = |Y|, so X and Y are not independent.
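A simulation sketch of Example 7 (the sample size is an illustrative choice):

```python
# Sketch: Y = Lambda * X, with Lambda = +/-1 (fair coin) independent of X ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x = rng.normal(size=n)
lam = rng.choice([-1.0, 1.0], size=n)  # Lambda, independent of X
y = lam * x

print(y.mean(), y.std())        # ~0, ~1: consistent with (a), Y ~ N(0, 1)
print(np.corrcoef(x, y)[0, 1])  # ~0: consistent with (b), uncorrelated
print(np.max(np.abs(np.abs(x) - np.abs(y))))  # 0.0: |X| = |Y|, so not independent
```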



mgf of Independent Random Variables

Assume X_1, X_2, ..., X_n are independent random variables, with mgf given by M_{X_i}(·) for i = 1, 2, ..., n. Define the sum of those variables by X := Σ_{i=1}^n X_i. Then the mgf of X is obtained by

M_X(t) = Π_{i=1}^n M_{X_i}(t)

If, further, all random variables X_i are identically distributed with the same mgf M_{X_1}(·), then

M_X(t) = [M_{X_1}(t)]^n
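A numerical sketch, assuming the X_i are iid Bernoulli(p) so that X = Σ X_i is Binomial(n, p); the parameters n, p, t below are illustrative:

```python
# Sketch: for X = X_1 + ... + X_n with X_i iid Bernoulli(p),
# M_X(t) = [M_{X_1}(t)]^n = (1 - p + p*e^t)^n, the Binomial(n, p) mgf.
import numpy as np

n, p, t = 10, 0.3, 0.7

mgf_sum = (1 - p + p * np.exp(t)) ** n  # [M_{X_1}(t)]^n

# Monte Carlo check: E[e^{tX}] with X sampled as Binomial(n, p)
rng = np.random.default_rng(4)
x = rng.binomial(n, p, size=1_000_000)
print(mgf_sum, np.mean(np.exp(t * x)))  # the two values should be close
```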

