
Discrete Random Variable

anis.rezguii@gmail.com

INSAT
Mathematics Department

September 20, 2022


Table of contents
1 Expected learning outcomes
2 Definitions
The Mass Distribution Function (MDF)
The cumulative distribution function CDF
The mathematical expectation
The variance and the standard deviation
3 Usual probability distributions
The uniform distribution
The Binomial distribution
The Hypergeometric distribution
The Geometric distribution
The Poisson distribution

anis.rezguii@gmail.com Discrete Random Variable


Expected learning outcomes
Definitions
Usual probability distributions

Expected learning outcomes

After this lecture the student should:

Understand the concept of a discrete random variable (d.r.v.).

Understand the Mass Distribution Function (MDF) intrinsically associated with a d.r.v.

Know the usual examples of MDFs.

Choose the right model for a given random situation.


Definition

Let (Ω, P) be a probability space describing a random experiment and X a mapping

X : Ω −→ R,  ω ↦ X(ω).

X is said to be a discrete random variable (d.r.v.) if its range X(Ω) is a discrete subset of R.


Discrete subset of R

A discrete subset of R is any subset that contains only isolated points.

The subset {0, 1, 2, 3} is a discrete subset of R.
Any finite subset is a discrete set.
The set of all natural numbers N is a discrete set.
The set of rational numbers Q is not a discrete set.
No interval of R is a discrete subset of R.


Example of d.r.v

Consider the experiment of tossing two perfect coins. The associated sample space is Ω = {H, T}² = {HH, TT, HT, TH}, where "H" denotes the "head" face and "T" denotes the "tail" face. Let X be the number of heads obtained after each toss.

It is clear that its set of values is X(Ω) = {0, 1, 2}, so X is a d.r.v.

The d.r.v. X can be represented as follows:

Ω | HH | TT | HT | TH
X |  2 |  0 |  1 |  1


The Mass Distribution Function (MDF)

Definition
We associate with a given d.r.v. X its Mass Distribution Function (MDF), denoted by PX and defined by the mapping:

PX : X(Ω) −→ [0, 1],  xi ↦ PX(xi) = P{X = xi},

where {X = xi} = {ω ∈ Ω : X(ω) = xi}.


Table representation of the MDF

In general we represent the MDF of a d.r.v. X, PX, by the following table:

X(Ω) | PX
x1   | PX(x1)
...  | ...


Graphical representation of the MDF

[Figure: bar chart of the MDF of the two-coin example; bars of heights 0.25, 0.5 and 0.25 at x = 0, 1, 2, with the vertical axis graduated from 0 to 0.5.]



Example
We reconsider the same example of tossing two perfect coins, where X was the number of heads. To get its MDF PX we need to evaluate:

PX(0) = P{X = 0} = P{TT} = P{T} × P{T} = 1/2 × 1/2 = 1/4; note that we have used the independence of the two tosses.

PX(1) = P{X = 1} = P{TH, HT} = P{TH} + P{HT} = 1/4 + 1/4 = 1/2.

PX(2) = P{X = 2} = P{HH} = P{H} × P{H} = 1/2 × 1/2 = 1/4.

PX is given by:

X(Ω) | PX
0    | 0.25
1    | 0.5
2    | 0.25
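This MDF can be checked by simulation; the following Python sketch (an illustration, not part of the original slides) tosses two fair coins many times and records the relative frequency of each value of X:

```python
import random
from collections import Counter

random.seed(0)

# Simulate tossing two fair coins many times; X = number of heads per toss.
trials = 100_000
counts = Counter(
    sum(random.choice([0, 1]) for _ in range(2))  # 1 for "head", 0 for "tail"
    for _ in range(trials)
)

# Empirical MDF: relative frequency of each value of X.
empirical = {x: counts[x] / trials for x in (0, 1, 2)}
print(empirical)  # close to {0: 0.25, 1: 0.5, 2: 0.25}
```

The empirical frequencies approach the theoretical masses 1/4, 1/2, 1/4 as the number of trials grows.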



Notes

1. Note that Σ_i PX(xi) = Σ_i P{X = xi} = P(Ω) = 1.

2. Note the similarity between a discrete random variable and a discrete statistical variable:

X(Ω) | PX          xi  | fi
x1   | PX(x1)      x1  | f1
...  | ...         ... | ...
Σ    | 1           Σ   | 100%

3. This similarity is used to model a statistical phenomenon by a theoretical d.r.v. and then to generate artificial data by computer simulation without any cost. This is very important for prediction and decision making.
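Generating artificial data from an MDF table is a one-liner in Python; here is a sketch (illustrative, not from the slides) using the coin-example table:

```python
import random

random.seed(1)

# MDF of the two-coin example, stored as a table {x_i: P_X(x_i)}.
mdf = {0: 0.25, 1: 0.5, 2: 0.25}

# Generate artificial observations distributed according to the MDF.
data = random.choices(list(mdf), weights=mdf.values(), k=10)
print(data)  # a list of 10 values drawn from {0, 1, 2}
```

Each simulated value lands on xi with probability PX(xi), so arbitrarily large synthetic samples can be produced at no experimental cost.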


The cumulative distribution function CDF

Definition
To any discrete random variable X we associate its cumulative distribution function defined as follows:

FX : R −→ [0, 1],  x ↦ FX(x) = P{X ≤ x},

where {X ≤ x} = {ω ∈ Ω : X(ω) ≤ x}.


Example

Again, we reconsider the same example of tossing two perfect coins where X was the number of heads; the associated cumulative distribution function is as follows:

1. if x < 0 then P{X ≤ x} = P(∅) = 0,
2. if 0 ≤ x < 1 then P{X ≤ x} = P{X = 0} = 0.25,
3. if 1 ≤ x < 2 then P{X ≤ x} = P{X = 0 or X = 1} = 0.25 + 0.5 = 0.75,
4. if x ≥ 2 then P{X ≤ x} = P{X = 0 or X = 1 or X = 2} = 0.25 + 0.5 + 0.25 = 1.


The cumulative distribution function is summarized as follows:

FX(x) =  0     if x < 0
         0.25  if 0 ≤ x < 1
         0.75  if 1 ≤ x < 2
         1     if x ≥ 2
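The step function above can be computed directly from the MDF; a short Python sketch (illustrative, not part of the slides):

```python
# MDF of the two-coin example.
mdf = {0: 0.25, 1: 0.5, 2: 0.25}

def cdf(x, mdf=mdf):
    """F_X(x) = P{X <= x}: sum the masses of all values not exceeding x."""
    return sum(p for xi, p in mdf.items() if xi <= x)

# Evaluating inside each piece of the step function of the slide:
print(cdf(-0.5), cdf(0.5), cdf(1.5), cdf(2))  # 0 0.25 0.75 1.0
```

Each evaluation reproduces the corresponding branch of the piecewise formula.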


Graphical representation of a CDF

[Figure: step plot of the CDF FX on [−1, 3]; the function jumps from 0 to 0.25 at x = 0, to 0.75 at x = 1, and to 1 at x = 2.]


Note 1

Let X be a given d.r.v. and denote by PX, FX its MDF and its CDF respectively. The CDF FX satisfies:
i. FX is non-decreasing.
ii. FX is càdlàg, i.e. continuous to the right with limits to the left.
iii. lim_{x→−∞} FX(x) = 0 and lim_{x→+∞} FX(x) = 1.


Note 2

It is very important to know that PX and FX are equivalent, in the sense that the knowledge of one suffices to determine the other. If we have PX, it is clear that we can determine FX, just by definition. Now suppose we have the cumulative distribution function FX; we use the following rule to determine PX:

P{X = a} = FX(a) − FX(a⁻),

where FX(a⁻) = lim_{x→a⁻} FX(x).
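The rule P{X = a} = FX(a) − FX(a⁻) says each mass is the height of the corresponding jump of the CDF; a Python sketch (illustrative, not from the slides) recovering the MDF of the coin example from its CDF values:

```python
# CDF of the two-coin example, tabulated at its jump points.
F = {0: 0.25, 1: 0.75, 2: 1.0}

def left_limit(a, F=F):
    """F_X(a-): the CDF value just before a, i.e. the largest F at points < a."""
    vals = [v for x, v in F.items() if x < a]
    return max(vals) if vals else 0.0

# P{X = a} = F_X(a) - F_X(a-): each mass is the height of the jump at a.
mdf = {a: F[a] - left_limit(a) for a in F}
print(mdf)  # {0: 0.25, 1: 0.5, 2: 0.25}
```

This recovers exactly the MDF table computed earlier, confirming the equivalence of PX and FX.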


The Mathematical expectation

Definition
Let X be a d.r.v. and X(Ω) = {xi} its set of values.
The mathematical expectation of X, denoted by E(X), is the quantity:

E(X) = Σ_i xi PX(xi).



Example
Reconsider the last example where X was the number of heads; we get:

E(X) = 0 × 1/4 + 1 × 1/2 + 2 × 1/4 = 1/2 + 1/2 = 1.

Note: If a random experiment is repeated many times we expect, in the "usual" cases, that most of X's values are "close" to the mathematical expectation E(X). Thus the mathematical expectation of a d.r.v. is considered a parameter of central tendency.
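Both the computation and the "sample mean approaches E(X)" interpretation can be checked in a few lines of Python (a sketch, not part of the slides):

```python
import random

random.seed(2)

mdf = {0: 0.25, 1: 0.5, 2: 0.25}

# E(X) = sum_i x_i * P_X(x_i)
expectation = sum(x * p for x, p in mdf.items())

# Repeating the experiment many times, the sample mean approaches E(X).
sample = random.choices(list(mdf), weights=mdf.values(), k=100_000)
sample_mean = sum(sample) / len(sample)
print(expectation, round(sample_mean, 2))  # 1.0 and a value close to 1
```

The empirical mean of a large simulated sample stays within a few hundredths of the theoretical value 1.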


Exercise

Let X be a d.r.v. with the three possible values {−1, 0, 1}, whose MDF is as follows:

X(Ω) | PX
−1   | q
0    | 1/2
1    | p

and whose expectation is E(X) = 0.
1. Determine the MDF of X, PX.
2. Suppose now that P{X = 0} is also unknown; could you determine PX?



Solution

1. On one hand E(X) = −1 × q + 0 × 1/2 + 1 × p = 0, which means p = q; on the other hand p + 1/2 + q = 1. Then we get p = q = 1/4.
2. We obtain two equations with three unknowns, so there are infinitely many possibilities.

Note:
In general, knowledge of the expectation alone does not determine the MDF of the random variable. However, there are many interesting particular cases of random variables where it does; see the section on usual probability distributions.


The Markov Inequality

The next theorem gives a rough estimate of some probabilities using only the expectation.

Theorem
Let (Ω, Σ, P) be a probability space, X a non-negative d.r.v. and λ a positive real number. Then

P{X ≥ λ} ≤ E(X)/λ.
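The bound can be verified exactly on the coin example, where E(X) = 1; a Python sketch (illustrative, not from the slides):

```python
# Markov inequality check for the coin example: X >= 0, E(X) = 1.
mdf = {0: 0.25, 1: 0.5, 2: 0.25}
E = sum(x * p for x, p in mdf.items())

for lam in (0.5, 1, 1.5, 2):
    tail = sum(p for x, p in mdf.items() if x >= lam)  # P{X >= lam}
    assert tail <= E / lam  # the Markov bound holds for every lambda > 0
    print(lam, tail, E / lam)
```

For small λ the bound is loose (it may exceed 1), but it never fails; this illustrates why Markov is called a rough estimate.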


The variance and the standard deviation

Definition
Let X be a d.r.v. and X(Ω) = {xi} its range. Its variance, denoted by V(X) or σX², is the non-negative quantity:

V(X) = σX² = Σ_i (xi − E(X))² PX(xi).

Its standard deviation is σX = √V(X).


Notes

1. Suppose that a given random experiment is repeated many times; we expect, in the usual cases, that most of X's values lie in the interval [E(X) − σX, E(X) + σX].

2. The variance, as well as the standard deviation, is a parameter of dispersion. In fact, they give an idea of how the expected data are dispersed around the mathematical expectation.



The Tchebychev Inequality

Theorem
Let (Ω, Σ, P) be a probability space and X a d.r.v.; then for any λ > 0 we have

P{|X − E(X)| ≥ λ} ≤ V(X)/λ².

Proof:
Let λ > 0. Then

P{|X − E(X)| ≥ λ} = P{|X − E(X)|² ≥ λ²},

and so, by using the Markov inequality, we get

P{|X − E(X)| ≥ λ} ≤ E{|X − E(X)|²}/λ² = V(X)/λ²,

which finishes the proof.

Note
The Tchebychev inequality is often stated as follows:

P{|X − E(X)| ≥ nσ} ≤ 1/n²,

which can be read as

P{X ∈ [E(X) − nσ, E(X) + nσ]} ≥ 1 − 1/n².

If for example we take n = 6, we obtain, for any random variable X, that is, for any phenomenon, the following estimate:

P{X ∈ [E(X) − 6σ, E(X) + 6σ]} ≥ 1 − 1/6² ≈ 97.22%.
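The interval reading of Tchebychev can be tested by simulation on the coin example (a Python sketch, not part of the slides; for this particular X every value is in fact within 2σ of the mean):

```python
import math
import random

random.seed(3)

# Coin example: E(X) = 1, V(X) = 0.5, sigma = sqrt(0.5).
mdf = {0: 0.25, 1: 0.5, 2: 0.25}
E = sum(x * p for x, p in mdf.items())
V = sum((x - E) ** 2 * p for x, p in mdf.items())
sigma = math.sqrt(V)

# Fraction of simulated values inside [E - 2*sigma, E + 2*sigma];
# Tchebychev with n = 2 guarantees at least 1 - 1/4 = 75%.
sample = random.choices(list(mdf), weights=mdf.values(), k=50_000)
inside = sum(abs(x - E) <= 2 * sigma for x in sample) / len(sample)
print(inside)  # 1.0 here: all three values of X lie within 2 sigma of E(X)
```

The observed fraction (here 100%) always satisfies the guaranteed lower bound of 75%, illustrating that Tchebychev is a conservative, distribution-free estimate.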

Note

We have a different, easy-to-prove and useful formulation of the variance. For any discrete random variable X:

V(X) = Σ_i (xi − E(X))² P{X = xi}
     = Σ_i (xi² − 2E(X)xi + E(X)²) P{X = xi}
     = E(X²) − E(X)².
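Both formulations give the same number, as this Python sketch on the coin example shows (illustrative, not part of the slides):

```python
mdf = {0: 0.25, 1: 0.5, 2: 0.25}
E = sum(x * p for x, p in mdf.items())

# Definition: V(X) = sum_i (x_i - E(X))^2 * P_X(x_i)
v_def = sum((x - E) ** 2 * p for x, p in mdf.items())

# Shortcut:  V(X) = E(X^2) - E(X)^2
e_x2 = sum(x ** 2 * p for x, p in mdf.items())
v_short = e_x2 - E ** 2

print(v_def, v_short)  # both 0.5
```

The shortcut is usually faster by hand, since it avoids centering every value before squaring.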


The uniform distribution

Definition
Let X be a d.r.v. with a finite range X(Ω) = {x1, · · · , xn}, for some n ∈ N. We say that X follows the uniform distribution on {x1, · · · , xn}, and we denote

X ↪ U{x1, · · · , xn},

if

P{X = x1} = · · · = P{X = xn} = 1/n.



Examples

1. If we toss a perfect coin and set X = 0 for "tail" and X = 1 for "head", it is easy to check that X ↪ U{0, 1}.
2. If we throw a perfect die and set X to the obtained digit, it is easy to check that X ↪ U{1, · · · , 6}.

Note:
In the case of a uniform distribution we have the following formula:

P{X ∈ A} = #A/n,

for any A ⊂ X(Ω).
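The counting formula P{X ∈ A} = #A/n can be checked by exact enumeration on the die example; a Python sketch (illustrative, not part of the slides) using exact fractions:

```python
from fractions import Fraction

# Fair die: X ~ U{1,...,6}, so P{X = k} = 1/6 for each face.
values = range(1, 7)
p = Fraction(1, 6)

# P{X in A} = #A / n for any A subset of X(Omega); e.g. A = even faces.
A = {2, 4, 6}
prob = sum(p for k in values if k in A)
print(prob)  # 1/2, i.e. #A/n = 3/6
```

Using Fraction avoids any floating-point rounding, so the result 3/6 = 1/2 is exact.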



Expectation and variance

Let X ↪ U{x1, · · · , xn}; one can check easily that:

i. E(X) = (x1 + · · · + xn)/n.

ii. V(X) = (1/n) Σ_{i=1}^n xi² − (1/n²)(Σ_{i=1}^n xi)².

Note: Note the similarity with the sample mean and the sample variance of any set of data.
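Applying these two formulas to the fair die U{1, ..., 6} is immediate in Python (a sketch, not part of the slides):

```python
# Fair die X ~ U{1,...,6}: apply the two formulas of the slide.
xs = list(range(1, 7))
n = len(xs)

E = sum(xs) / n                                       # (x1 + ... + xn) / n
V = sum(x ** 2 for x in xs) / n - (sum(xs) / n) ** 2  # (1/n) sum xi^2 - (1/n^2)(sum xi)^2

print(E, round(V, 4))  # 3.5 and 35/12, about 2.9167
```

These are exactly the sample mean and (population-style) sample variance of the data set {1, ..., 6}, as the note observes.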


The Binomial distribution

Definition
Let X be a d.r.v., n ∈ N and p ∈ ]0, 1[. X is said to follow a binomial distribution with n repetitions and "success" probability p, or simply with parameters (n, p), if:
i) X(Ω) = {0, 1, · · · , n},
ii) for any k = 0, 1, · · · , n, P{X = k} = C_n^k p^k (1 − p)^(n−k).
We denote

X ↪ β(n, p).


Typical Example

Consider the experiment of throwing a perfect die 10 times and let X be the number of occurrences of the digit "6". We can check easily that

X ↪ β(10, 1/6).


Checking

− It is clear that X(Ω) = {0, 1, · · · , 10}.

− The event {X = 0} = {"6" doesn't appear} = {10 times {1, · · · , 5}}, and this implies that:

P{X = 0} = P{1, · · · , 5} × · · · × P{1, · · · , 5} = 5/6 × · · · × 5/6 = (5/6)^10.

− The event {X = 1} = {"6" appears only once}, then

P{X = 1} = 10 × P{6} × (P{1, · · · , 5})^9 = 10 × 1/6 × (5/6)^9,

where the 10 in the formula counts the 10 possible positions of the occurrence of the digit "6".
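These hand computations agree with the general binomial formula; a Python sketch checking them (illustrative, not part of the slides):

```python
from math import comb

# Binomial pmf for beta(10, 1/6): a die thrown 10 times, X = number of "6"s.
n, p = 10, 1 / 6

def pmf(k):
    # P{X = k} = C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Compare with the hand computations of the slide.
print(pmf(0), (5 / 6) ** 10)                 # both (5/6)^10
print(pmf(1), 10 * (1 / 6) * (5 / 6) ** 9)   # both 10 * 1/6 * (5/6)^9
```

The pmf also sums to 1 over k = 0, ..., 10, as any MDF must.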


When do we use the binomial model?

In general, we use the binomial distribution when we repeat a given random experiment n times, independently, and X denotes the number of occurrences of a particular event, called a "success", that has a given probability p ∈ ]0, 1[. In that case we have

X ↪ β(n, p).


Expectation and variance

Let X be a d.r.v. following a binomial distribution β(n, p) for n ∈ N* and p ∈ ]0, 1[:

i. its mathematical expectation is E(X) = n × p,
ii. its variance is V(X) = n × p × (1 − p).
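Both formulas can be confirmed by summing the pmf directly, here for the die example β(10, 1/6) (a Python sketch, not part of the slides):

```python
from math import comb

# Exact check of E(X) = n*p and V(X) = n*p*(1-p) for beta(10, 1/6).
n, p = 10, 1 / 6
pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]

E = sum(k * pk for k, pk in enumerate(pmf))            # sum k * P{X = k}
V = sum(k ** 2 * pk for k, pk in enumerate(pmf)) - E ** 2  # E(X^2) - E(X)^2

print(round(E, 6), round(V, 6))  # n*p = 1.666667 and n*p*(1-p) = 1.388889
```

On average one expects about 10/6 ≈ 1.67 sixes in ten throws, matching the intuition n × p.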


The Hypergeometric distribution

Definition
Let N1, N2 and n be three integers such that N1 + N2 > n, and X a given discrete random variable. It is said to follow a hypergeometric distribution with parameters (N1, N2, n), denoted X ↪ HG(N1, N2, n), if
i) X(Ω) = {0, 1, · · · , n},
ii) for any k = 0, · · · , n,

P{X = k} = C_N1^k C_N2^(n−k) / C_(N1+N2)^n.


Typical example

X is the number of red balls obtained after n successive draws of one ball, without replacement, from a box that contains N1 red balls and N2 white balls. Check that X follows a hypergeometric distribution, X ↪ HG(N1, N2, n).


Expectation and Variance

One can check that

E(X) = np

and

V(X) = np(1 − p)(N − n)/(N − 1),

where p = N1/N and N = N1 + N2.
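A quick numerical check of the pmf and of E(X) = np, with arbitrary illustrative values N1 = 5, N2 = 7, n = 4 chosen for this sketch (not from the slides):

```python
from math import comb

# HG(N1, N2, n): draw n balls without replacement from N1 red and N2 white.
N1, N2, n = 5, 7, 4
N = N1 + N2

def pmf(k):
    # P{X = k} = C(N1, k) * C(N2, n - k) / C(N, n)
    return comb(N1, k) * comb(N2, n - k) / comb(N, n)

probs = [pmf(k) for k in range(n + 1)]
E = sum(k * pk for k, pk in enumerate(probs))
print(round(sum(probs), 6), round(E, 4))  # 1.0 and 1.6667 = n * N1/N
```

The normalization Σ_k pmf(k) = 1 is the Vandermonde identity, and the mean matches np = 4 × 5/12 = 5/3.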
N


The Geometric distribution

Definition
Let p ∈ ]0, 1[ and X a discrete random variable. X is said to follow a geometric distribution with parameter p, denoted X ↪ G(p), if
i) X(Ω) = N*,
ii) for any k ≥ 1,

P{X = k} = p(1 − p)^(k−1).


Typical example

Let X be the position of the first occurrence of the face "head" when we toss a balanced coin infinitely many times. Check that X follows a geometric distribution with parameter 1/2.


The expectation and the variance

It is easy to show that

E(X) = 1/p

and

V(X) = (1 − p)/p².
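The typical example above can be simulated directly: toss until the first head and record the position. A Python sketch (illustrative, not part of the slides):

```python
import random

random.seed(4)

# X ~ G(1/2): position of the first "head" in repeated fair-coin tosses.
def first_head():
    k = 1
    while random.random() >= 0.5:  # "tail": keep tossing
        k += 1
    return k

sample = [first_head() for _ in range(100_000)]
mean = sum(sample) / len(sample)
print(round(mean, 2))  # close to E(X) = 1/p = 2
```

The sample mean of the first-head position settles near 2, in agreement with E(X) = 1/p for p = 1/2.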


The Poisson distribution

Definition
Let λ ∈ R*+ and X a d.r.v.; we say that X follows a Poisson distribution with parameter λ, and we denote X ↪ P(λ), if
i) X(Ω) = N,
ii) for any k ≥ 0,

P{X = k} = e^(−λ) λ^k / k!.


Typical example

In general, the Poisson distribution is used to model the number of clients (customers) who arrive during a time period T, with a frequency f, to ask for a given service.

In that case their random number can be modeled by a Poisson distribution whose parameter is the average number of clients, λ = T × f.


Expectation and variance

Let λ ∈ R*+ and X ↪ P(λ). One can prove that:
i. E(X) = λ.
ii. V(X) = λ.
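Both identities can be checked numerically from the pmf, truncating the infinite sums far into the negligible tail; a Python sketch (illustrative, not part of the slides), with λ = 0.9 as in the exercise below:

```python
from math import exp, factorial

# Poisson pmf P{X = k} = e^(-lam) * lam^k / k!, with lam = 0.9.
lam = 0.9

def pmf(k):
    return exp(-lam) * lam ** k / factorial(k)

# Truncate at k = 50: the remaining tail is negligible for lam = 0.9.
E = sum(k * pmf(k) for k in range(50))
V = sum(k ** 2 * pmf(k) for k in range(50)) - E ** 2

print(round(E, 6), round(V, 6))  # both equal lambda = 0.9
```

The equality E(X) = V(X) = λ is a distinctive signature of the Poisson model, often used as a quick diagnostic on data.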


Exercise

Consider the records of highway accidents between Makkah and Al Madinah during 50 days.

accidents number | number of days
0 | 21
1 | 18
2 | 7
3 | 3
4 | 1

1. Compute the average number of accidents, m.


Still the exercise

2. Suppose that the theoretical probability distribution of the number of accidents is of Poisson type with parameter λ. Determine an approximation of λ and compute the probability that no accident happens, that only one accident happens, that 2 accidents happen, etc.
3. Compute the theoretical number of days ti on which we observe i accidents, for i = 0, · · · , 4. Compare theoretical and empirical results.
4. How many days without accidents will we observe during a year?


Solution

1. The average m is just the total number of accidents divided by the number of days:

m = (1 × 18 + 2 × 7 + 3 × 3 + 4 × 1)/50 = 0.9.

2. Let X ↪ P(λ); if the latter is to be a model for the number of accidents we must have E(X) = m, thus λ = m = 0.9. We obtain the probability of observing i accidents in a day, for i = 0, · · · , 4, by using the formula P{X = i} = e^(−0.9) (0.9)^i / i!. And so:

X  | 0     | 1     | 2     | 3    | 4
PX | 40.6% | 36.5% | 16.4% | 4.9% | 1.1%

Still the solution

3. The theoretical number of days on which there were exactly i accidents is ti = PX(i) × 50:

i  | 0      | 1      | 2     | 3     | 4
ti | 20.328 | 18.295 | 8.233 | 2.469 | 0.555

It is clear that the theoretical frequency distribution is very close to the experimental (statistical) one, so we have good reason to trust the Poisson model.

4. Without waiting for a year, we can estimate the number of accident-free days:

days without accidents per year = 365 × PX(0) ≈ 148.4.
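The whole solution can be reproduced in a few lines of Python (a sketch, not part of the slides):

```python
from math import exp, factorial

# Poisson model of the accident records: lambda = m = 0.9.
lam = 0.9
pmf = [exp(-lam) * lam ** i / factorial(i) for i in range(5)]

# Theoretical number of days with i accidents out of the 50 observed days.
t = [50 * p for p in pmf]
print([round(p * 100, 1) for p in pmf])  # percentages, compare with the table above
print([round(ti, 3) for ti in t])        # close to 20.33, 18.3, 8.23, 2.47, 0.55

# Estimated number of accident-free days in a year.
print(round(365 * pmf[0], 1))  # about 148.4
```

The computed percentages and day counts match the tables above up to rounding, and the yearly estimate confirms 365 × PX(0) ≈ 148.4 accident-free days.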
