2. Discrete Distribution
   ◮ Binomial Probability Distribution
   ◮ Poisson Probability Distribution
3. Continuous Distributions
   ◮ Uniform Probability Distribution
   ◮ Normal Probability Distribution
Motivation
A Bernoulli random variable X takes the value 1 (success) with probability p and the value 0 (failure) with probability 1 − p, where 0 ≤ p ≤ 1 is a parameter.
Its characteristics are:
◮ E[X] = p
◮ Var[X] = p(1 − p)
Definition
A binomial experiment is one that has the following properties:
1) the experiment consists of n trials;
2) each trial results in one of two outcomes: success (S) or failure (F);
3) the probability of success is the same value p in every trial;
4) the outcomes of the trials are independent;
5) the random variable X is the number of successes in n trials.
The number of ways to obtain x successes in n trials is
C(n, x) = n! / (x!(n − x)!)
so the binomial probability function is
P(X = x) = [n! / (x!(n − x)!)] p^x (1 − p)^(n−x)
For example, with n = 5 and p = 0.1:
P(X = 1) = [5! / (1!(5 − 1)!)] (0.1)^1 (1 − 0.1)^(5−1)
         = (5)(0.1)(0.9)^4
         = 0.32805
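The computation above can be checked directly with the Python standard library (a minimal sketch; `binomial_pmf` is a helper name chosen here, not from the slides; `math.comb` requires Python 3.8+):

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) for a binomial random variable: C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# The worked example: n = 5 trials, p = 0.1, exactly one success.
print(binomial_pmf(1, 5, 0.1))  # close to 0.32805 (up to float rounding)
```

Summing the pmf over x = 0, …, 5 gives 1, which is a quick sanity check that the function is a valid probability distribution.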
▪ Here, n = 5 and P = 0.5.
[Figure: bar chart of the binomial probabilities P(x) for x = 0, 1, …, 5 with n = 5, P = 0.5.]
Hengki Purwoto (Econ UGM) Statistics 2 : Lecture 4 March 8, 2021 13
2. Discrete Distribution
▪ Mean: μ = E(X) = nP
▪ Variance: σ² = nP(1 − P)
▪ Standard deviation: σ = √(nP(1 − P))
Theorem
If X is a binomial random variable with parameters n and p, then:
μ = E[X] = np
Var[X] = np(1 − p).
Example: with n = 5 and P = 0.5:
μ = nP = (5)(0.5) = 2.5
σ = √(nP(1 − P)) = √((5)(0.5)(1 − 0.5)) = 1.118
[Figure: bar chart of P(x) for x = 0, 1, …, 5 with n = 5, P = 0.5.]
Examples:
n = 10, x = 3, p = 0.35: P(X = 3 | n = 10, p = 0.35) = 0.2522
n = 10, x = 8, p = 0.45: P(X = 8 | n = 10, p = 0.45) = 0.0229
More Examples
The probability of stunting among poor children is 55%. If we randomly pick 10 poor children, what is the probability that exactly 4 of them experience stunted growth?
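One way to compute the answer (a sketch using the binomial formula above; n = 10, p = 0.55, x = 4 come from the problem statement):

```python
from math import comb

# Stunting exercise: n = 10 children, success probability p = 0.55,
# probability of exactly x = 4 stunted children.
n, p, x = 10, 0.55, 4
prob = comb(n, x) * p**x * (1 - p)**(n - x)
print(round(prob, 4))  # 0.1596
```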
Definition
A discrete random variable X follows the Poisson probability
distribution with parameter λ > 0, Poisson (λ), if:
P(X = x) = f(x; λ) = f(x) = e^(−λ) λ^x / x!
where:
x = 0, 1, 2, …
P(x) = the probability of x successes over a given interval of time or space
λ = the expected number of successes per unit of time or space, λ > 0
e = the base of the natural logarithm (2.71828…)
Theorem
If X is a Poisson random variable with parameter λ, then:
μ = E[X] = λ
Var[X] = E[(X − μ)²] = λ
Example: λ = 0.50.

P(X = 2) = e^(−λ) λ^x / x! = e^(−0.50) (0.50)² / 2! = 0.0758

Graphically:

x    P(x)
0    0.6065
1    0.3033
2    0.0758
3    0.0126
4    0.0016
5    0.0002
6    0.0000
7    0.0000

[Figure: bar chart of P(x) for x = 0, 1, …, 7 with λ = 0.50.]
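The table can be reproduced with a short Python sketch (`poisson_pmf` is a helper name chosen here, not from the slides):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(X = x) for a Poisson random variable with mean lam."""
    return exp(-lam) * lam**x / factorial(x)

# Reproduce the probability table for lambda = 0.50.
for x in range(8):
    print(x, round(poisson_pmf(x, 0.50), 4))
```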
[Figure: Poisson probability functions P(x) for λ = 0.50 (x = 0, …, 7) and λ = 3.00 (x = 1, …, 12).]
P(x) = e^(−nP) (nP)^x / x!   for x = 0, 1, 2, …
Theorem
If X is a binomial r.v. with parameters n and p, then for
each x = 0, 1, 2, . . . and as p → 0, n → ∞ with np = λ
constant:
lim_{n→∞} C(n, x) p^x (1 − p)^(n−x) = e^(−λ) λ^x / x!
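The limit can be illustrated numerically: hold np = λ fixed while n grows, and the binomial probabilities approach the Poisson probabilities (a sketch; the choice λ = 2 and x = 3 is arbitrary):

```python
from math import comb, exp, factorial

def binomial_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    return exp(-lam) * lam**x / factorial(x)

# Hold np = lambda = 2 fixed while n grows and p = lambda/n shrinks.
lam, x = 2.0, 3
for n in (10, 100, 1000):
    p = lam / n
    print(n, round(binomial_pmf(x, n, p), 5), round(poisson_pmf(x, lam), 5))
```

As n increases, the two columns agree to more and more decimal places.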
More Examples
3. Continuous Distributions
Uniform Probability Distribution
[Figure: the uniform probability density function f(x) between xmin and xmax; the total area under the density is 1.0.]
• The mean is
μ = (a + b) / 2
• The variance is
σ² = (b − a)² / 12
Example:
Uniform probability distribution over the range 2 ≤ x ≤ 6:
f(x) = 1 / (6 − 2) = 0.25 for 2 ≤ x ≤ 6
μ = (a + b) / 2 = (2 + 6) / 2 = 4
σ² = (b − a)² / 12 = (6 − 2)² / 12 = 1.333
[Figure: f(x) = 0.25 over 2 ≤ x ≤ 6.]
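The mean and variance formulas can be checked against a direct numeric integration of the density (a sketch using a midpoint Riemann sum; the grid size is arbitrary):

```python
# Numeric check of the uniform example on [2, 6], where f(x) = 0.25.
a, b = 2.0, 6.0
f = 1 / (b - a)

n = 100_000
dx = (b - a) / n
xs = [a + (i + 0.5) * dx for i in range(n)]  # midpoint rule grid

mean = sum(x * f * dx for x in xs)               # ∫ x f(x) dx
var = sum((x - mean) ** 2 * f * dx for x in xs)  # ∫ (x - mu)^2 f(x) dx
print(round(mean, 3), round(var, 3))  # 4.0 1.333
```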
Definition
A random variable X is said to have a normal probability distribution
with parameters µ and σ2 if it has a probability density function:
f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²))
for − ∞ < x < ∞ , − ∞ < µ < ∞ , and σ > 0.
Z = (X − μ) / σ = (8.6 − 8.0) / 5.0 = 0.12

[Figure: X ∼ N(μ = 8, σ = 5) at x = 8.6 corresponds to Z ∼ N(μ = 0, σ = 1) at z = 0.12.]

From the standard normal table:

z     P(Z ≤ z)
.10   .5398
.11   .5438
.12   .5478
.13   .5517

so P(X ≤ 8.6) = P(Z ≤ 0.12) = 0.5478.
More Examples
For Z ∼ N(0, 1), calculate P(Z ≥ 1.13).
For X ∼ N(5, 4), calculate P(−2.5 < X < 10).
For a standard normal random variable Z, find the value of z₀
such that:
◮ P(Z > z₀) = 0.25
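These exercises can be solved numerically with the same cdf helper (a sketch; it reads N(5, 4) as mean 5 and variance 4, i.e. σ = 2, and finds z₀ by bisection rather than a table):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cdf P(Z <= z)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# (1) P(Z >= 1.13) for Z ~ N(0, 1)
p1 = 1 - phi(1.13)

# (2) P(-2.5 < X < 10) for X ~ N(5, 4), so sigma = sqrt(4) = 2
p2 = phi((10 - 5) / 2) - phi((-2.5 - 5) / 2)

# (3) z0 with P(Z > z0) = 0.25, i.e. phi(z0) = 0.75, found by bisection
lo, hi = 0.0, 4.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if phi(mid) < 0.75 else (lo, mid)
z0 = (lo + hi) / 2

print(round(p1, 4), round(p2, 4), round(z0, 4))
```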
Definition
Let X and Y be random variables. If both X and Y are discrete, then:
f (x , y ) = P (X = x , Y = y )
Definition
If both X and Y are continuous then f (x , y ) is called the joint
probability density function (joint pdf ) of X and Y iff:
P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_a^b ∫_c^d f(x, y) dy dx
Theorem
If X and Y are two random variables with joint probability
function f (x , y ), then:
◮ f(x, y) ≥ 0 for all x and y
◮ If X and Y are discrete, then Σ_{x,y} f(x, y) = 1, where the sum is over all values (x, y) that are assigned nonzero probabilities.
◮ If X and Y are continuous, then ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1.
Suppose that we are given the joint probability function (pdf or pmf) of X and Y. We can obtain the probability distribution of one of the components through the marginals.
Definition
The marginal pmf or pdf of X, f_X(x), is defined by:
f_X(x) = ∫_{−∞}^{∞} f(x, y) dy   if X, Y are continuous
f_X(x) = Σ_{∀y} f(x, y)          if X, Y are discrete
Similarly, the marginal of Y is
f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx   if X, Y are continuous
f_Y(y) = Σ_{∀x} f(x, y)          if X, Y are discrete
Definition
The conditional pmf or pdf of X given Y = y is:
f(x | y) = f(x | Y = y) = f(x, y) / f_Y(y)        if X, Y are continuous
f(x | y) = P(X = x, Y = y) / f_Y(y)                if X, Y are discrete
Example
Let
f(x, y) = (1/5)(3x − y)   if 1 ≤ x ≤ 2, 1 ≤ y ≤ 3, and 0 otherwise.
Find:
◮ fX (x ) and fY (y )
◮ f (x | y )
◮ f (x | Y = 1)
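The example density can be checked numerically before working it by hand (a sketch using a midpoint Riemann sum; integrating f over y by hand gives the marginal f_X(x) = (6x − 4)/5):

```python
# Numeric checks for f(x, y) = (3x - y)/5 on 1 <= x <= 2, 1 <= y <= 3.
def f(x, y):
    return (3 * x - y) / 5 if 1 <= x <= 2 and 1 <= y <= 3 else 0.0

n = 400
dx, dy = 1 / n, 2 / n
xs = [1 + (i + 0.5) * dx for i in range(n)]
ys = [1 + (j + 0.5) * dy for j in range(n)]

# Total probability mass should be 1.
total = sum(f(x, y) * dx * dy for x in xs for y in ys)
print(round(total, 4))

# Marginal f_X(x) at x = 1.5; by hand (6x - 4)/5 = 1.0 there.
fx = sum(f(1.5, y) * dy for y in ys)
print(round(fx, 4))
```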
Expected Value
Definition
Let f(x, y) be the joint probability function. The expected value of XY is:
E[XY] = Σ_{x,y} x y f(x, y)                        if X, Y are discrete
E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f(x, y) dx dy    if X, Y are continuous
If X and Y are independent, then E[XY] = E[X] E[Y].
Conditional Expectation
Definition
Let X and Y be jointly distributed with pmf or pdf f (x , y ).
Then, the conditional expectation of X given Y = y is:
E[X | Y = y] = Σ_x x f(x | y)              if X, Y are discrete
E[X | Y = y] = ∫_{−∞}^{∞} x f(x | y) dx    if X, Y are continuous
Conditional Expectation
Example
Let
f(x, y) = (1/5)(3x − y)   if 1 ≤ x ≤ 2, 1 ≤ y ≤ 3, and 0 otherwise.
Find:
◮ E [X | Y = 1]
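The conditional expectation can be approximated numerically as a cross-check on the hand calculation, using E[X | Y = y] = ∫ x f(x, y) dx / f_Y(y) (a sketch; the midpoint grid size is arbitrary):

```python
# E[X | Y = 1] for f(x, y) = (3x - y)/5 on 1 <= x <= 2, 1 <= y <= 3.
def f(x, y):
    return (3 * x - y) / 5

n = 100_000
dx = 1 / n
xs = [1 + (i + 0.5) * dx for i in range(n)]  # midpoint grid over [1, 2]

fy1 = sum(f(x, 1) * dx for x in xs)       # f_Y(1) = ∫ f(x, 1) dx
num = sum(x * f(x, 1) * dx for x in xs)   # ∫ x f(x, 1) dx
print(round(num / fy1, 4))  # 1.5714, i.e. 11/7
```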
Covariance
Definition
The covariance between two random variables X and Y is defined by:
Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)] = E[XY] − E[X] E[Y]
Correlation
The correlation between X and Y is ρ_{XY} = Cov(X, Y) / (σ_X σ_Y), with −1 ≤ ρ_{XY} ≤ 1.
◮ If Y = aX + b with a ≠ 0, then:
ρ_{XY} = 1 if a > 0
ρ_{XY} = −1 if a < 0
◮ If U = a₁X + b₁ and V = a₂Y + b₂, then:
Cov(U, V) = a₁ a₂ Cov(X, Y)
and
ρ_{UV} = ρ_{XY}    if a₁ a₂ > 0
ρ_{UV} = −ρ_{XY}   otherwise
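The linear-transformation property holds exactly for sample covariances as well, which makes it easy to illustrate on data (a sketch; the data points and coefficients are made up for illustration):

```python
# Check Cov(U, V) = a1 * a2 * Cov(X, Y) for U = a1*X + b1, V = a2*Y + b2.
def cov(xs, ys):
    """Sample covariance (dividing by n)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 1.0, 4.0, 3.0, 5.0]

a1, b1 = 2.0, 7.0
a2, b2 = -3.0, 1.0
us = [a1 * x + b1 for x in xs]
vs = [a2 * y + b2 for y in ys]

# The shifts b1, b2 drop out; only the scale factors a1, a2 matter.
print(cov(us, vs), a1 * a2 * cov(xs, ys))  # the two values agree
```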
E(X₁ + X₂ + … + Xₖ) = μ₁ + μ₂ + … + μₖ