Chapter 0: Revision
Probability Space:
Definition: A σ-field is a non-empty class of subsets of Ω that is closed under the formation of
countable unions and complements and contains the null set ∅.
A σ-field of great interest in probability is the Borel σ-field of subsets of the real line ℜ. This is the σ-field generated by the class of all bounded semi-closed intervals of the form (a, b], and it is denoted by B.
Examples:
Definition: A probability measure P on (Ω, B) is a set function satisfying:
(a) P(A) ≥ 0 for every A ∈ B;
(b) P(∪i Ai) = ∑i P(Ai) for every countable collection {Ai} of pairwise disjoint sets in B;
(c) P(Ω) = 1.
Axiom (b) is called "countable additivity". The corresponding axiom restricted to a finite collection {Ai} is called "finite additivity".
Recall that the multinomial coefficient is
(n; n1, n2, …, nk) = n! / (n1! n2! ⋯ nk!).
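As a quick illustration (the numbers below are arbitrary, not from the notes), the coefficient can be computed directly from this formula; a minimal Python sketch:

from math import factorial

def multinomial(n, *groups):
    # n! / (n1! * n2! * ... * nk!); the group sizes must sum to n
    assert sum(groups) == n, "group sizes must sum to n"
    result = factorial(n)
    for k in groups:
        result //= factorial(k)
    return result

# e.g. 10 objects split into groups of sizes 5, 3 and 2
print(multinomial(10, 5, 3, 2))   # 2520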
Conditional Probability: P(A | B) = P(A ∩ B) / P(B), provided that P(B) ≠ 0. Therefore:
P(A ∩ B) = P(B | A) P(A) = P(A | B) P(B).
Bayes Rule: P(A | B) = P(B | A) P(A) / P(B), P(B) ≠ 0.
Independent Events: P(A ∩ B) = P(A) P(B), which implies P(A | B) = P(A) and P(B | A) = P(B).
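A small numerical illustration of conditional probability and Bayes' rule (the probabilities below are hypothetical, chosen only for the example):

# Hypothetical values: P(A) = 0.01, P(B | A) = 0.95, P(B | A complement) = 0.10
p_A = 0.01
p_B_given_A = 0.95
p_B_given_notA = 0.10

# Total probability: P(B) = P(B | A) P(A) + P(B | A^c) P(A^c)
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' rule: P(A | B) = P(B | A) P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 4))   # 0.0876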
Random Variables:
Definition: A real-valued point function X(·) defined on the space (Ω, B, P) is called a random variable (r.v.) (or measurable) if the set {ω: X(ω) ≤ x} ∈ B for every x ∈ ℜ. X is a mapping of Ω into ℜ: X: Ω → ℜ.
Example: Toss two coins and let X represent the number of heads.
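A minimal enumeration of this example, assuming the two coins are fair and the tosses independent:

from itertools import product
from collections import Counter

# Sample space of two coin tosses, each outcome equally likely
omega = list(product(["H", "T"], repeat=2))

# X(w) = number of heads in the outcome w
counts = Counter(w.count("H") for w in omega)
pmf = {x: c / len(omega) for x, c in sorted(counts.items())}
print(pmf)   # {0: 0.25, 1: 0.5, 2: 0.25}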
Definition: p(x) is a discrete probability function if the following two conditions hold:
i) p(x) ≥ 0;
ii) ∑_x p(x) = 1, the sum being over all possible values x.
Expectation: µ = E(X) = ∑_x x p(x).
RMK: E(g(X)) = ∑_x g(x) p(x).
RMK: Var(X) = E(X²) − µ².
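These formulas translate directly into a short computation; a sketch using the two-coin p.m.f. from the example above (fair coins assumed):

def expectation(pmf, g=lambda x: x):
    # E(g(X)) = sum over x of g(x) p(x), for a pmf given as a dict {x: p(x)}
    return sum(g(x) * p for x, p in pmf.items())

pmf = {0: 0.25, 1: 0.5, 2: 0.25}                    # X = number of heads in two fair tosses
mu = expectation(pmf)                               # E(X) = 1.0
var = expectation(pmf, lambda x: x**2) - mu**2      # Var(X) = E(X^2) - mu^2 = 0.5
print(mu, var)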
Definition: f(x) is a (continuous) probability density function if the following two conditions hold:
iii) f(x) ≥ 0;
iv) ∫_{−∞}^{∞} f(x) dx = 1.
Definition: Let X be a continuous r.v.; then P(a < X < b) = ∫_a^b f(x) dx.
Examples: Uniform, Exponential, Gamma, Chi-square, Beta, Cauchy and Normal distributions.
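As a numerical check of the definition of P(a < X < b), one can integrate one of the densities just listed; a sketch using the Exponential density with rate 1 (the interval (1, 2) is arbitrary):

import math

def exp_pdf(x, lam=1.0):
    # Exponential density f(x) = lam * exp(-lam * x) for x >= 0
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def prob_interval(pdf, a, b, n=100_000):
    # P(a < X < b) approximated by a midpoint Riemann sum of the density
    h = (b - a) / n
    return sum(pdf(a + (i + 0.5) * h) for i in range(n)) * h

# Exact value is e^(-1) - e^(-2) ≈ 0.2325
print(round(prob_interval(exp_pdf, 1.0, 2.0), 4))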
The cumulative distribution function F(x) = P(X ≤ x) satisfies:
i) F(x) = ∑_{X ≤ x} p(x) in the discrete case;
ii) lim_{x→−∞} F(x) = 0;
iii) lim_{x→∞} F(x) = 1.
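For the discrete case, a minimal sketch building F from the two-coin p.m.f. used earlier:

pmf = {0: 0.25, 1: 0.5, 2: 0.25}

def cdf(pmf, x):
    # F(x) = P(X <= x) = sum of p(t) over all values t <= x
    return sum(p for t, p in pmf.items() if t <= x)

print(cdf(pmf, -1), cdf(pmf, 1), cdf(pmf, 5))   # 0  0.75  1.0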
Random Vector:
Definition: A random vector X = (X1, X2, …, Xn) is a vector of jointly distributed random variables, and
E(X) = (E X1, E X2, …, E Xn)′ (the expectation taken componentwise).
Joint density: f(x1, x2) is a joint probability density function if:
1) f(x1, x2) ≥ 0 for all x1 and x2;
2) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x1, x2) dx2 dx1 = 1.
Marginal distribution: the marginal density of X is obtained by integrating (summing, in the discrete case) the joint density over all values of Y:
fX(x) = ∫_{−∞}^{∞} f(x, y) dy.
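A symbolic check of conditions 1)–2) and of a marginal density, using the illustrative joint density f(x1, x2) = x1 + x2 on the unit square (this particular density is only an example, not taken from the notes):

import sympy as sp

x1, x2 = sp.symbols("x1 x2")
f = x1 + x2   # joint density on 0 < x1 < 1, 0 < x2 < 1; nonnegative there

# Condition 2): the joint density integrates to 1 over its support
print(sp.integrate(f, (x2, 0, 1), (x1, 0, 1)))   # 1

# Marginal of X1: integrate the joint density over all values of X2
print(sp.integrate(f, (x2, 0, 1)))               # x1 + 1/2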
Covariance:
Cov(X, Y) = E[(X − µX)(Y − µY)] = E(XY) − E(X) E(Y).
The correlation coefficient ρ = Cov(X, Y) / (σX σY).
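A small numeric sketch of both formulas, using a made-up joint p.m.f. for (X, Y):

import math

# Illustrative joint pmf of (X, Y): {(x, y): p(x, y)}
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

E_X  = sum(x * p for (x, y), p in joint.items())
E_Y  = sum(y * p for (x, y), p in joint.items())
E_XY = sum(x * y * p for (x, y), p in joint.items())

cov = E_XY - E_X * E_Y                        # Cov(X, Y) = E(XY) - E(X)E(Y)
var_X = sum(x**2 * p for (x, y), p in joint.items()) - E_X**2
var_Y = sum(y**2 * p for (x, y), p in joint.items()) - E_Y**2
rho = cov / math.sqrt(var_X * var_Y)          # correlation coefficient
print(round(cov, 3), round(rho, 3))           # 0.05 0.218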
Conditional Expectations:
E(X1 | X2) = ∫_{−∞}^{∞} x1 f(x1 | x2) dx1.
Theorem: Var(X) = E(Var(X | Y)) + Var(E(X | Y)).
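A direct numerical check of this theorem on a small discrete joint distribution (the same made-up joint p.m.f. as above):

# Illustrative joint pmf of (X, Y)
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}
ys = {y for (_, y) in joint}

def var_of(pmf):
    # Variance of a discrete pmf given as {value: probability}
    m = sum(v * p for v, p in pmf.items())
    return sum(v**2 * p for v, p in pmf.items()) - m**2

# Marginal of Y and conditional pmfs of X given Y = y
p_Y = {y: sum(p for (x, yy), p in joint.items() if yy == y) for y in ys}
cond = {y: {x: p / p_Y[y] for (x, yy), p in joint.items() if yy == y} for y in ys}

E_X_given_Y = {y: sum(x * p for x, p in cond[y].items()) for y in ys}
Var_X_given_Y = {y: var_of(cond[y]) for y in ys}

# Right-hand side: E(Var(X|Y)) + Var(E(X|Y))
rhs = (sum(Var_X_given_Y[y] * p_Y[y] for y in ys)
       + var_of({E_X_given_Y[y]: p_Y[y] for y in ys}))

# Left-hand side: Var(X) computed from the marginal of X
p_X = {}
for (x, _), p in joint.items():
    p_X[x] = p_X.get(x, 0) + p
print(round(var_of(p_X), 6), round(rhs, 6))   # both 0.25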
Goal: If X is a random variable that follows the p.d.f. f(x), what is the distribution of the random variable U = g(X)?
That is, we have a function of the random variable X and we want to describe the distribution of the new random variable U. Here the random variable X is transformed by the function g into a new random variable U (X → U).
The distribution of the new random variable U can be found using one of the following methods:
- Find the distribution function FU(u) = P(U ≤ u); the density function is then fU(u) = (d/du) FU(u).
Examples:
1) Let X be a random variable that follows the uniform distribution on the interval (0, 1). Find the density function of the random variable U = −β ln(1 − X).
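Problem 1) is left as an exercise; as a sanity check of the answer one expects, a quick simulation (with the arbitrary choice β = 2) suggests U behaves like an exponential random variable with mean β:

import math
import random

random.seed(0)
beta = 2.0          # arbitrary illustrative value of the parameter
n = 100_000

# U = -beta * ln(1 - X) with X ~ Uniform(0, 1)
u = [-beta * math.log(1.0 - random.random()) for _ in range(n)]

# If U is Exponential with mean beta: E(U) = beta and P(U > beta) = e^(-1) ≈ 0.368
print(round(sum(u) / n, 2))                      # close to 2.0
print(round(sum(x > beta for x in u) / n, 2))    # close to 0.37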
2) Let X be a random variable which follows the normal distribution with parameters µ and σ. Show that the random variable Z = (X − µ)/σ follows the standard normal distribution.
3) Let X be any continuous random variable with distribution function F. Show that the random variable U = F(X) always follows the uniform distribution on (0, 1).
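Again as a sanity check rather than a proof, a simulation of 3) with X taken to be standard normal and F = Φ its c.d.f. (this choice of X is arbitrary):

import math
import random

random.seed(1)

def phi(x):
    # Standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# U = F(X) with X ~ N(0, 1); U should behave like Uniform(0, 1)
u = [phi(random.gauss(0.0, 1.0)) for _ in range(100_000)]

# For Uniform(0, 1): mean = 1/2 and P(U <= t) = t for any t in (0, 1)
print(round(sum(u) / len(u), 2))                      # close to 0.5
print(round(sum(x <= 0.25 for x in u) / len(u), 2))   # close to 0.25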
4) Let X1 and X2 denote a random sample of size n = 2 from the uniform distribution on the interval (0, 1). Find the probability density function of U = X1 + X2.
The transformation (change-of-variable) method: if U = h(X) with h one-to-one, then
fU(u) = fX(h⁻¹(u)) |dx/du|.
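Applied to Example 2) above, the formula gives the standard normal density; a symbolic sketch (using sympy, with the inverse x = h⁻¹(z) = µ + σz and |dx/dz| = σ):

import sympy as sp

x, z, mu = sp.symbols("x z mu", real=True)
sigma = sp.symbols("sigma", positive=True)

# Normal(mu, sigma^2) density of X
f_X = sp.exp(-(x - mu)**2 / (2 * sigma**2)) / (sp.sqrt(2 * sp.pi) * sigma)

# Z = (X - mu)/sigma, so x = mu + sigma*z and |dx/dz| = sigma
f_Z = f_X.subs(x, mu + sigma * z) * sigma
print(sp.simplify(f_Z))   # exp(-z**2/2)/sqrt(2*pi), the N(0, 1) density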
fY1,Y2(y1, y2) = fX1,X2(w1(y1, y2), w2(y1, y2)) |J|,
where J = det[∂xi/∂yj] is the 2×2 Jacobian determinant of (x1, x2) = (w1(y1, y2), w2(y1, y2)) with respect to (y1, y2).
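For instance, a transformation one could use for Example 4) is Y1 = X1 + X2, Y2 = X2; a short symbolic computation of its Jacobian (the choice of auxiliary variable Y2 = X2 is mine, not fixed by the notes):

import sympy as sp

y1, y2 = sp.symbols("y1 y2", real=True)

# Inverse transformation: x1 = w1(y1, y2) = y1 - y2, x2 = w2(y1, y2) = y2
w1 = y1 - y2
w2 = y2

# Jacobian determinant of (x1, x2) with respect to (y1, y2)
J = sp.Matrix([[sp.diff(w1, y1), sp.diff(w1, y2)],
               [sp.diff(w2, y1), sp.diff(w2, y2)]]).det()
print(J)   # 1

# With X1, X2 independent Uniform(0, 1): f_{Y1,Y2}(y1, y2) = 1 * |J| = 1 on the
# region 0 < y1 - y2 < 1, 0 < y2 < 1; integrating out y2 then gives the density
# of U = X1 + X2 asked for in Example 4).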
Transformation in ℜⁿ: Let Y = g(X), where X = (X1, X2, …, Xn) is a continuous random vector and g: ℜⁿ → ℜⁿ is one-to-one. Then the joint p.d.f. of Y is fY(y) = fX(x) |J|, where x = g⁻¹(y) and J = det(dX/dY) is the n×n Jacobian determinant with (i, j) entry ∂Xj/∂Yi.
2) Supp # 1
Theorem: For any random variable X, the moment generating function MX(t) = E(e^{tX}) is unique; that is, if it is finite in a neighbourhood of t = 0, it determines the distribution of X.
Proof:
Examples:
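One standard worked example (illustrative only; not necessarily the example intended here): the m.g.f. of an Exponential distribution with rate λ, and the mean recovered by differentiating at t = 0.

import sympy as sp

t, x = sp.symbols("t x", real=True)
lam = sp.symbols("lambda", positive=True)

# M_X(t) = E(e^{tX}) for X ~ Exponential(lambda); the integral converges for t < lambda
M = sp.integrate(sp.exp(t * x) * lam * sp.exp(-lam * x), (x, 0, sp.oo), conds="none")
print(sp.simplify(M))               # lambda/(lambda - t), for t < lambda

# E(X) = M'(0)
print(sp.diff(M, t).subs(t, 0))     # 1/lambda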