Probability Mass Functions

Binomial pmf

A DV random variable X is said to be a Binomial random variable with parameters n and p if its pmf is

$$P_X(x) = \begin{cases} \dbinom{n}{x}\, p^{x} (1-p)^{n-x}, & x \in \{0,1,2,\ldots,n\} \\ 0, & \text{otherwise} \end{cases}$$
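As a quick numerical sketch (not part of the original notes), the Binomial pmf can be evaluated directly with Python's math.comb; the values n = 10 and p = 0.3 below are arbitrary illustration choices.

```python
from math import comb

def binomial_pmf(x, n, p):
    """P_X(x) = C(n, x) * p**x * (1 - p)**(n - x) for x in {0, 1, ..., n}."""
    if x < 0 or x > n:
        return 0.0                 # pmf is zero outside {0, 1, ..., n}
    return comb(n, x) * p**x * (1 - p) ** (n - x)

n, p = 10, 0.3                     # assumed illustration values
pmf = [binomial_pmf(x, n, p) for x in range(n + 1)]
print(sum(pmf))                    # ~1.0: a pmf sums to one
print(binomial_pmf(3, n, p))       # probability of exactly 3 successes in 10 trials
```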
Geometric pmf

If we perform Bernoulli trials until a 1 (success) occurs and the probability of a 1 on any single trial is p, the probability that the first success will occur on the kth trial is $p(1-p)^{k-1}$. A DV random variable X is said to be a Geometric random variable if its pmf is

$$P_X(x) = \begin{cases} p\,(1-p)^{x-1}, & x \in \{1,2,3,\ldots\} \\ 0, & \text{otherwise} \end{cases}$$
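A minimal simulation sketch (p = 0.25 and the run count are assumed illustration values): repeat Bernoulli trials until the first success and compare the observed frequency of "first success on trial x" with p(1-p)^(x-1).

```python
import random
from collections import Counter

def trials_until_first_success(p):
    """Perform Bernoulli trials until the first 1 (success); return the trial index."""
    k = 1
    while random.random() >= p:          # failure with probability 1 - p
        k += 1
    return k

p, runs = 0.25, 100_000                  # assumed illustration values
counts = Counter(trials_until_first_success(p) for _ in range(runs))

for x in range(1, 6):
    empirical = counts[x] / runs
    theoretical = p * (1 - p) ** (x - 1)     # Geometric pmf
    print(x, round(empirical, 4), round(theoretical, 4))
```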
Negative Binomial (Pascal) pmf

If we perform Bernoulli trials until the rth 1 occurs and the probability of a 1 on any single trial is p, the probability that the rth success will occur on the kth trial is

$$P[\text{$r$th success on $k$th trial}] = \dbinom{k-1}{r-1}\, p^{r} (1-p)^{k-r}.$$

A DV random variable Y is said to be a negative-Binomial or Pascal random variable with parameters r and p if its pmf is

$$P_Y(y) = \begin{cases} \dbinom{y-1}{r-1}\, p^{r} (1-p)^{y-r}, & y \in \{r, r+1, \ldots\} \\ 0, & \text{otherwise} \end{cases}$$
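A short evaluation sketch under assumed values r = 3 and p = 0.4 (not from the notes); setting r = 1 reduces the Pascal pmf to the Geometric pmf.

```python
from math import comb

def pascal_pmf(y, r, p):
    """P_Y(y) = C(y-1, r-1) * p**r * (1-p)**(y-r) for y in {r, r+1, ...}."""
    if y < r:
        return 0.0                     # fewer than r trials cannot contain r successes
    return comb(y - 1, r - 1) * p**r * (1 - p) ** (y - r)

r, p = 3, 0.4                          # assumed illustration values
print(pascal_pmf(7, r, p))             # P[3rd success occurs on the 7th trial]
print(sum(pascal_pmf(y, r, p) for y in range(r, 400)))   # ~1.0: pmf sums to one
print(pascal_pmf(5, 1, 0.4), 0.4 * 0.6**4)               # r = 1 matches the Geometric pmf
```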
Events occurring at random times

Suppose we randomly place n points in the time interval $0 \le t < T$, with each point being equally likely to fall anywhere in that range. The probability that k of them fall inside an interval of length $\Delta t < T$ inside that range is

$$P[k \text{ inside } \Delta t] = \dbinom{n}{k}\, p^{k} (1-p)^{n-k} = \frac{n!}{k!\,(n-k)!}\, p^{k} (1-p)^{n-k}$$

where $p = \Delta t / T$ is the probability that any single point falls within $\Delta t$. Further, suppose that as $n \to \infty$, $n/T = \lambda$, a constant. If $\lambda$ is constant and $n \to \infty$, that implies that $T \to \infty$ and $p \to 0$. Then $\lambda$ is the average number of points per unit time, over all time.
It can be shown that

$$P[k \text{ inside } \Delta t] = \frac{(\lambda\,\Delta t)^{k}}{k!} \lim_{n\to\infty}\left(1 - \frac{\lambda\,\Delta t}{n}\right)^{n} = \frac{(\lambda\,\Delta t)^{k}}{k!}\, e^{-\lambda\,\Delta t}$$

This limiting form is the Poisson pmf with parameter $\lambda\,\Delta t$.
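A numerical sketch of this limit (the values λΔt = 2 and k = 3 are arbitrary illustration choices): the Binomial probabilities with p = λΔt/n approach the Poisson value as n grows.

```python
from math import comb, exp, factorial

lam_dt = 2.0                 # assumed value of lambda * delta-t
k = 3                        # number of points falling inside delta-t

poisson = lam_dt**k * exp(-lam_dt) / factorial(k)
for n in (10, 100, 1000, 100_000):
    p = lam_dt / n           # p = delta-t / T with n / T = lambda held constant
    binom = comb(n, k) * p**k * (1 - p) ** (n - k)
    print(n, round(binom, 6))
print("limit:", round(poisson, 6))
```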
Expectation and Moments

The expected value of a function g of a DV random variable X is

$$E\big[g(X)\big] = \sum_{i=1}^{M} g(x_i)\, P[X = x_i]$$
A central moment of a random variable is the moment of
that random variable after its expected value is subtracted.
$$E\big[(X - E[X])^{n}\big] = \sum_{i=1}^{M} \big(x_i - E[X]\big)^{n}\, P[X = x_i]$$
The first central moment is always zero. The second central
moment (for real-valued random variables) is the variance,
$$\sigma_X^{2} = E\big[(X - E[X])^{2}\big] = \sum_{i=1}^{M} \big(x_i - E[X]\big)^{2}\, P[X = x_i]$$

The variance of X can also be written as Var(X). The positive square root of the variance is the standard deviation.
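As a small sketch (the pmf table below is made up purely for illustration), E[g(X)], the central moments, and the variance are all weighted sums over the pmf.

```python
# Assumed example pmf: values x_i and their probabilities P[X = x_i] (they sum to 1).
xs = [0, 1, 2, 3]
ps = [0.1, 0.4, 0.3, 0.2]

def expect(g):
    """E[g(X)] = sum over i of g(x_i) * P[X = x_i]."""
    return sum(g(x) * p for x, p in zip(xs, ps))

mean = expect(lambda x: x)                      # E[X]

def central_moment(n):
    """nth central moment: E[(X - E[X])**n]."""
    return expect(lambda x: (x - mean) ** n)

print(central_moment(1))                        # first central moment: 0 (up to rounding)
variance = central_moment(2)                    # second central moment is the variance
print(variance, variance ** 0.5)                # sigma_X^2 and the standard deviation
```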
Properties of expectation
$$E[a] = a, \qquad E[aX] = a\,E[X], \qquad E\!\left[\sum_{n} X_{n}\right] = \sum_{n} E[X_{n}]$$

where a is a constant. These properties can be used to prove the handy relationship

$$\sigma_X^{2} = E[X^{2}] - E^{2}[X]$$

The variance of a random variable is the mean of its square minus the square of its mean. Another handy relation is

$$\operatorname{Var}(aX + b) = a^{2}\operatorname{Var}(X).$$
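A quick numeric confirmation of the two handy relations, using an assumed pmf table (illustration only, not from the notes).

```python
# Assumed example pmf (illustration only).
xs = [0, 1, 2, 3]
ps = [0.1, 0.4, 0.3, 0.2]

def expect(g):
    return sum(g(x) * p for x, p in zip(xs, ps))

mean = expect(lambda x: x)
var_x = expect(lambda x: (x - mean) ** 2)

# sigma_X^2 = E[X^2] - E^2[X]
print(var_x, expect(lambda x: x ** 2) - mean ** 2)

# Var(aX + b) = a^2 Var(X), for arbitrary constants a and b
a, b = 3.0, 5.0
mean_y = expect(lambda x: a * x + b)                    # E[aX + b]
var_y = expect(lambda x: (a * x + b - mean_y) ** 2)     # Var(aX + b)
print(var_y, a ** 2 * var_x)
```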
Conditional Probability Mass Functions

The concept of conditional probability can be extended to a conditional probability mass function defined by

$$P_{X|A}(x) = \begin{cases} \dfrac{P_X(x)}{P[A]}, & x \in A \\ 0, & \text{otherwise} \end{cases}$$

where A is the condition that affects the probability of X.
Similarly the conditional expected value of X is
()
E X | A
= x PX |A x and the conditional cumulative
xB
()
distribution function for X is FX |A x = P X x | A
.
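An illustrative sketch (the fair six-sided die and the event A = {X is even} are assumptions, not from the notes): the conditional pmf rescales P_X by P[A] on A and is zero elsewhere.

```python
from fractions import Fraction

# Assumed example: a fair six-sided die and the conditioning event A = {X is even}.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}
A = {2, 4, 6}

P_A = sum(pmf[x] for x in A)                       # P[A]
cond_pmf = {x: (pmf[x] / P_A if x in A else Fraction(0)) for x in pmf}

print(cond_pmf)                                    # 1/3 on {2, 4, 6}, 0 elsewhere
print(sum(x * cond_pmf[x] for x in cond_pmf))      # E[X | A] = 4
```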
Conditional Probability
Let A be $A = \{X \le a\}$ where a is a constant. Then

$$F_{X|A}(x) = P[X \le x \mid X \le a] = \frac{P[X \le x,\, X \le a]}{P[X \le a]}.$$

If $a \le x$ then $P[X \le x,\, X \le a] = P[X \le a]$ and

$$F_{X|A}(x) = P[X \le x \mid X \le a] = \frac{P[X \le a]}{P[X \le a]} = 1.$$

If $a > x$ then $P[X \le x,\, X \le a] = P[X \le x]$ and

$$F_{X|A}(x) = P[X \le x \mid X \le a] = \frac{P[X \le x]}{P[X \le a]} = \frac{F_X(x)}{F_X(a)}.$$
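Continuing the assumed die example with a = 4, so A = {X ≤ 4}, a short check that the conditional CDF equals F_X(x)/F_X(a) for x < a and 1 for x ≥ a.

```python
from fractions import Fraction

pmf = {x: Fraction(1, 6) for x in range(1, 7)}   # fair die (assumed example)
a = 4                                            # conditioning event A = {X <= a}

def F(x):
    """Unconditional CDF F_X(x)."""
    return sum(p for v, p in pmf.items() if v <= x)

def F_cond(x):
    """Conditional CDF F_{X|A}(x) = P[X <= x, X <= a] / P[X <= a]."""
    return sum(p for v, p in pmf.items() if v <= min(x, a)) / F(a)

for x in range(1, 7):
    expected = F(x) / F(a) if x < a else Fraction(1)
    print(x, F_cond(x), expected)                # the two columns agree
```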