Cheat Sheet for Probability Theory

Ingmar Land, October 8, 2005

1 Scalar-valued Random Variables

Consider two real-valued random variables (RVs) X and Y with the individual
probability distributions p_X(x) and p_Y(y), and the joint distribution
p_{X,Y}(x, y). The probability distributions are probability mass functions
(pmf) if the random variables take discrete values, and they are probability
density functions (pdf) if the random variables are continuous. Some authors
use f(.) instead of p(.), especially for continuous RVs.

In the following, the RVs are assumed to be continuous. (For discrete RVs, the
integrals simply have to be replaced by sums.)

• Marginal distributions:

    p_X(x) = \int p_{X,Y}(x, y) \, dy        p_Y(y) = \int p_{X,Y}(x, y) \, dx

• Conditional distributions:

    p_{X|Y}(x|y) = p_{X,Y}(x, y) / p_Y(y)        p_{Y|X}(y|x) = p_{X,Y}(x, y) / p_X(x)

  for p_X(x) \neq 0 and p_Y(y) \neq 0.

• Bayes' rule:

    p_{X|Y}(x|y) = p_{X,Y}(x, y) / \int p_{X,Y}(x', y) \, dx'
    p_{Y|X}(y|x) = p_{X,Y}(x, y) / \int p_{X,Y}(x, y') \, dy'
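As a quick numerical check of the three formulas above (a minimal sketch; the
2x3 joint pmf is made up for the example, and sums replace the integrals since
it is discrete):

    import numpy as np

    # Hypothetical joint pmf p_{X,Y}(x, y): rows index x, columns index y.
    p_xy = np.array([[0.10, 0.20, 0.10],
                     [0.25, 0.15, 0.20]])
    assert np.isclose(p_xy.sum(), 1.0)

    # Marginals: p_X(x) = sum_y p_{X,Y}(x, y) and p_Y(y) = sum_x p_{X,Y}(x, y).
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)

    # Conditionals: p_{X|Y}(x|y) = p_{X,Y}(x, y) / p_Y(y), for p_Y(y) != 0.
    p_x_given_y = p_xy / p_y                # each column divided by p_Y(y)
    p_y_given_x = p_xy / p_x[:, None]       # each row divided by p_X(x)

    # Bayes' rule: p_{X|Y}(x|y) = p_{Y|X}(y|x) p_X(x) / sum_{x'} p_{X,Y}(x', y).
    bayes = p_y_given_x * p_x[:, None] / p_xy.sum(axis=0)
    assert np.allclose(bayes, p_x_given_y)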
• Expected values (expectations):

    E[g_1(X)] := \int g_1(x) \, p_X(x) \, dx
    E[g_2(Y)] := \int g_2(y) \, p_Y(y) \, dy
    E[g_3(X, Y)] := \iint g_3(x, y) \, p_{X,Y}(x, y) \, dx \, dy

  for any functions g_1(.), g_2(.), g_3(., .).
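As a sketch of how such integrals can be evaluated numerically (the choices
X ~ N(0, 1) and g(x) = x^2, for which E[g(X)] = 1, are assumptions for the
example):

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed example: X ~ N(0, 1) and g(x) = x^2, so E[g(X)] = 1.
    def p_x(x):
        return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

    def g(x):
        return x**2

    # E[g(X)] = \int g(x) p_X(x) dx, approximated by a Riemann sum on a grid.
    dx = 1e-4
    x = np.arange(-8.0, 8.0, dx)
    e_grid = (g(x) * p_x(x)).sum() * dx

    # The same expectation as a Monte Carlo average over draws of X.
    e_mc = g(rng.standard_normal(1_000_000)).mean()

    print(e_grid, e_mc)    # both close to 1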
• Some special expected values:

  – Means (mean values):

      \mu_X := E[X] = \int x \, p_X(x) \, dx
      \mu_Y := E[Y] = \int y \, p_Y(y) \, dy

  – Variances:

      \sigma_X^2 \equiv \Sigma_{XX} := E[(X - \mu_X)^2] = \int (x - \mu_X)^2 \, p_X(x) \, dx
      \sigma_Y^2 \equiv \Sigma_{YY} := E[(Y - \mu_Y)^2] = \int (y - \mu_Y)^2 \, p_Y(y) \, dy

    Remark: The variance measures the "width" of a distribution. A small
    variance means that most of the probability mass is concentrated around
    the mean value.

  – Covariance:

      \sigma_{XY} \equiv \Sigma_{XY} := E[(X - \mu_X)(Y - \mu_Y)] = \iint (x - \mu_X)(y - \mu_Y) \, p_{X,Y}(x, y) \, dx \, dy

    Remark: The covariance measures how "related" two RVs are. Two independent
    RVs have covariance zero.

  – Correlation coefficient:

      \rho_{XY} := \sigma_{XY} / (\sigma_X \sigma_Y)

  – Relations:

      E[X^2] = \Sigma_{XX} + \mu_X^2
      E[Y^2] = \Sigma_{YY} + \mu_Y^2
      E[X \cdot Y] = \Sigma_{XY} + \mu_X \cdot \mu_Y

  – Proof of the last relation:

      E[XY] = E[((X - \mu_X) + \mu_X)((Y - \mu_Y) + \mu_Y)]
            = E[(X - \mu_X)(Y - \mu_Y)] + E[(X - \mu_X)\mu_Y] + E[\mu_X(Y - \mu_Y)] + E[\mu_X \mu_Y]
            = \Sigma_{XY} + (E[X] - \mu_X)\mu_Y + (E[Y] - \mu_Y)\mu_X + \mu_X \mu_Y
            = \Sigma_{XY} + \mu_X \mu_Y

    since E[X] - \mu_X = 0 and E[Y] - \mu_Y = 0. This method of proof is
    typical.
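The relations above are easy to confirm on samples (a sketch; the correlated
Gaussian pair with these particular means and covariances is an assumed
example):

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed example: a correlated Gaussian pair with known mean and covariance.
    mean = [1.0, -2.0]
    cov = [[2.0, 0.8],
           [0.8, 1.0]]
    x, y = rng.multivariate_normal(mean, cov, size=1_000_000).T

    mu_x, mu_y = x.mean(), y.mean()
    sigma_xy = ((x - mu_x) * (y - mu_y)).mean()    # covariance \Sigma_{XY}
    rho_xy = sigma_xy / (x.std() * y.std())        # correlation coefficient

    # E[X Y] = \Sigma_{XY} + \mu_X \mu_Y, up to Monte Carlo error.
    print((x * y).mean(), sigma_xy + mu_x * mu_y)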
• Conditional expectations:

    E[g(X) | Y = y] := \int g(x) \, p_{X|Y}(x|y) \, dx

  Law of total expectation:

    E[ E[g(X) | Y] ] = \int E[g(X) | Y = y] \, p_Y(y) \, dy
                     = \iint g(x) \, p_{X|Y}(x|y) \, dx \, p_Y(y) \, dy
                     = \int g(x) \int p_{X|Y}(x|y) \, p_Y(y) \, dy \, dx
                     = \int g(x) \, p_X(x) \, dx
                     = E[g(X)]
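The law of total expectation can likewise be checked by sampling: draw Y, then
X given Y, and compare E[E[g(X)|Y]] with a direct average of g(X). The
hierarchical model below (Y ~ N(0, 1), X | Y = y ~ N(y, 1), g(x) = x^2, so
both sides equal 2) is an assumption for the sketch:

    import numpy as np

    rng = np.random.default_rng(2)

    # Assumed model: Y ~ N(0, 1), X | Y = y ~ N(y, 1), g(x) = x^2.
    n = 1_000_000
    y = rng.standard_normal(n)
    x = y + rng.standard_normal(n)      # one draw of X for each draw of Y

    # Inner expectation in closed form: E[X^2 | Y = y] = y^2 + 1.
    inner = y**2 + 1

    # Tower property: E[E[g(X)|Y]] should match E[g(X)]; both approach 2.
    print(inner.mean(), (x**2).mean())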
• Special conditional expectations:

  – Conditional mean:

      \mu_{X|Y=y} := E[X | Y = y] = \int x \, p_{X|Y}(x|y) \, dx

  – Conditional variance:

      \Sigma_{XX|Y=y} := E[(X - \mu_{X|Y=y})^2 | Y = y] = \int (x - \mu_{X|Y=y})^2 \, p_{X|Y}(x|y) \, dx
  – Relation:

      E[X^2 | Y = y] = \Sigma_{XX|Y=y} + \mu_{X|Y=y}^2
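For discrete RVs this relation can be checked in a few lines (a sketch; the
values of X and the joint pmf are made up for the example):

    import numpy as np

    # Assumed example: X takes values 0, 1, 2; rows index x, columns index y.
    x_vals = np.array([0.0, 1.0, 2.0])
    p_xy = np.array([[0.10, 0.15],
                     [0.20, 0.25],
                     [0.10, 0.20]])
    p_x_given_y = p_xy / p_xy.sum(axis=0)      # one column per value of y

    mu = x_vals @ p_x_given_y                  # conditional means \mu_{X|Y=y}
    var = ((x_vals[:, None] - mu)**2 * p_x_given_y).sum(axis=0)  # \Sigma_{XX|Y=y}
    ex2 = x_vals**2 @ p_x_given_y              # E[X^2 | Y = y]

    assert np.allclose(ex2, var + mu**2)       # the relation, per value of y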
• Sum of two random variables: Let Z := X + Y, with X and Y independent; then

    p_Z(z) = (p_X * p_Y)(z),

  where * denotes convolution. The proof uses characteristic functions.
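A numerical sketch of p_Z = p_X * p_Y: discretize two densities on a common
grid, convolve, and compare with a histogram of sampled sums (the exponential
and uniform densities are assumed choices for the example):

    import numpy as np

    rng = np.random.default_rng(3)

    # Assumed example: X ~ Exp(1) and Y ~ Uniform(0, 1), independent.
    dt = 0.01
    t = np.arange(0.0, 10.0, dt)
    p_x = np.exp(-t)                      # density of X on the grid
    p_y = np.where(t < 1.0, 1.0, 0.0)     # density of Y on the grid

    # p_Z(z) = \int p_X(\tau) p_Y(z - \tau) d\tau: a discrete convolution
    # times the grid spacing approximates the integral.
    p_z = np.convolve(p_x, p_y)[:t.size] * dt

    # Compare with a normalized histogram of sampled sums Z = X + Y.
    z = rng.exponential(1.0, 500_000) + rng.uniform(0.0, 1.0, 500_000)
    hist, _ = np.histogram(z, bins=t, density=True)
    print(np.abs(p_z[:-1] - hist).max())  # small discretization/MC error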
• The RVs are called independent if

    p_{X,Y}(x, y) = p_X(x) \cdot p_Y(y).

  This condition is equivalent to the condition that

    E[g_1(X) \cdot g_2(Y)] = E[g_1(X)] \cdot E[g_2(Y)]

  for all (!) functions g_1(.) and g_2(.).
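The factorization of expectations can only be spot-checked numerically for
particular functions; the independent samples and the choices g_1(x) = cos(x),
g_2(y) = y^3 below are assumptions for the sketch:

    import numpy as np

    rng = np.random.default_rng(4)

    # Assumed example: independent X ~ N(0, 1) and Y ~ Uniform(0, 1).
    x = rng.standard_normal(1_000_000)
    y = rng.uniform(size=1_000_000)

    g1 = np.cos(x)    # g_1(X)
    g2 = y**3         # g_2(Y)

    # Independence: E[g_1(X) g_2(Y)] = E[g_1(X)] E[g_2(Y)], up to MC error.
    print((g1 * g2).mean(), g1.mean() * g2.mean())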
