
STAT 1012

Statistics for Life Sciences

Chapter 4
Continuous Probability Distribution
Dr. OUYANG Ming
2022/2023 Term 1

1
Introduction
Discrete random variable: a random variable that takes a discrete set of numeric values.

Continuous random variable: a random variable that can take any value in a continuous range.
Example: Weight = 46.12345… (kg)
Example: Systolic blood pressure = 101.01010101…
Example: Height = 160.0001001001… (cm)

Because the range is continuous, all of their possible values cannot be listed. [Uncountable]
2
Section 4.1
Continuous Random Variables

3
Discrete Random Variables & Continuous Random Variables
Example: Let us first consider the probability histogram below for the shoe size of adult males. Let 𝑋 represent these shoe sizes. Thus, 𝑋 is a discrete random variable, since shoe sizes can only take whole-number values.

The height of each bar equals the probability of its corresponding 𝑿-value. The heights of all the rectangles in the histogram must sum to 1. Since each bar has width 1, the total area is also 1.
4
Discrete Random Variables & Continuous Random Variables
Example: If we allow half sizes, 𝑋 is still a discrete random variable.

Therefore, we must adjust the vertical scale of the histogram. As is, the total area of the histogram rectangles would be 0.50 times the sum of the probabilities, since the width of each bar is 0.50. If we double the vertical scale, the area will double and become 1. This means we are changing the vertical scale from "Probability" to "Probability per half size." The shape and the horizontal scale remain unchanged.
5
Discrete Random Variables & Continuous Random Variables
Example: Suppose we can measure shoe sizes in smaller and smaller units, such as tenths or hundredths. As the number of intervals increases, the bars become narrower and narrower, and the graph approaches a smooth curve.

[Interval widths are 0.25]

6
Discrete Random Variables & Continuous Random Variables
Example: Suppose we can measure shoe sizes in smaller and smaller units, such as tenths or hundredths. As the number of intervals increases, the bars become narrower and narrower, and the graph approaches a smooth curve.

[Interval widths are 0.10]

7
Discrete Random Variables & Continuous Random Variables
Example: Now consider another random variable 𝑋 = foot length of adult males. Unlike shoe size, this variable is not limited to distinct, separate values, because foot lengths can take any value over a continuous range of possibilities.

[Figure: Probability Density Curve]

Like the modified probability histogram above, the total area under the density curve equals 1, and the curve represents probabilities by area.
8
Discrete Random Variables & Continuous Random Variables
Example: Now consider another random variable 𝑋 = foot length of adult males. Unlike shoe size, this variable is not limited to distinct, separate values, because foot lengths can take any value over a continuous range of possibilities.

[Figure: Probability Density Curve]

If we would like to know Pr(10 < 𝑋 < 12), the probability that a randomly chosen male has a foot length anywhere between 10 and 12 inches, we'll have to find the area above our interval of interest (10, 12) and below our density curve.
9
Probability Density Function (pdf)
Probability density function (pdf) of a continuous random variable 𝑿: a curve such that the area under the curve between any two points 𝑎 and 𝑏 is equal to the probability that the random variable 𝑋 falls between 𝑎 and 𝑏. [Similar role as the pmf]

The probability density function is denoted by 𝑓(𝑥).

[Figure: probability density curve 𝑓(𝑥), with the shaded area between 𝑎 and 𝑏 representing Pr(𝑎 ≤ 𝑋 ≤ 𝑏)]
10
Probability Density Function (pdf)
$$\Pr(a \le X \le b) = \int_a^b f(x)\,dx = \text{area under the curve } f(x) \text{ between } a \text{ and } b$$

$$\Pr(X = a) = \Pr(a \le X \le a) = \int_a^a f(x)\,dx = 0$$

$$\Pr(\Omega) = \Pr(-\infty \le X \le \infty) = \int_{-\infty}^{\infty} f(x)\,dx = 1$$

[Figure: probability density curve 𝑓(𝑥), with the shaded area between 𝑎 and 𝑏 representing Pr(𝑎 ≤ 𝑋 ≤ 𝑏)]
11
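As a quick added illustration (not from the original slides), take the hypothetical density 𝑓(𝑥) = 2𝑥 on 0 ≤ 𝑥 ≤ 1:

$$\int_0^1 2x\,dx = x^2\Big|_0^1 = 1, \qquad \Pr(0.25 \le X \le 0.5) = \int_{0.25}^{0.5} 2x\,dx = 0.5^2 - 0.25^2 = 0.1875$$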
Probability Density Function (pdf)
$$\Pr(a \le X \le b) = \int_a^b f(x)\,dx = \text{area under the curve } f(x) \text{ between } a \text{ and } b$$

𝑓(𝑥) is NOT a probability value: even though 𝑓(𝑥) ≥ 0 for all values of 𝑥, it is possible that 𝑓(𝑥) > 1. (Probability is represented by area, but 𝒇(𝒙) is only the height.)

[Figure: probability density curve 𝑓(𝑥), with the shaded area between 𝑎 and 𝑏 representing Pr(𝑎 ≤ 𝑋 ≤ 𝑏)]
12
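A simple added example of a density exceeding 1 (not from the original slides): for the uniform density on [0, 0.5],

$$f(x) = \frac{1}{0.5 - 0} = 2 > 1 \ \text{for } 0 \le x \le 0.5, \qquad \text{yet } \int_0^{0.5} 2\,dx = 1.$$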
Integral
The operation of integration, up to an additive constant, is the inverse of the operation of differentiation:

$$F(x) = \int f(x)\,dx, \qquad F'(x) = f(x)$$

If 𝑓 is a continuous real-valued function defined on a closed interval [𝑎, 𝑏], then, once an antiderivative 𝐹 of 𝑓 is known, the definite integral of 𝑓 over that interval is given by:

$$\int_a^b f(x)\,dx = F(x)\Big|_a^b = F(b) - F(a)$$
13
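A short added arithmetic instance of the rule above (not from the original slides):

$$\int_1^3 x^2\,dx = \frac{x^3}{3}\Big|_1^3 = \frac{27}{3} - \frac{1}{3} = \frac{26}{3}$$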
Cumulative Distribution Function (cdf)
The cumulative distribution function (cdf) of a random variable 𝑋 at a
point 𝑥, denoted by 𝐹(𝑥), is defined as the probability that 𝑋 will take
values less than or equal to 𝑥:
$$F(x) = \Pr(X \le x) = \int_{-\infty}^{x} f(y)\,dy$$

0 ≤ 𝐹(𝑥) ≤ 1

𝐹(𝑥) is represented by the area under the pdf 𝑓(𝑥) to the left of 𝑥.

[Figure: pdf 𝑓(𝑥) with the shaded area to the left of 𝑥 representing 𝐹(𝑥)]
16
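Continuing the added illustration with the hypothetical density 𝑓(𝑥) = 2𝑥 on 0 ≤ 𝑥 ≤ 1 (not from the original slides):

$$F(x) = \int_0^x 2y\,dy = x^2 \ \text{for } 0 \le x \le 1, \qquad \text{so } \Pr(X \le 0.5) = F(0.5) = 0.25.$$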
Cumulative Distribution Function (cdf)
Example: Consider the probability density function (pdf) of random variable 𝑋 shown on the right.
[Figure: pdf 𝑓(𝑥) with height 𝑘]
a) What is the value of 𝑓(4)?
b) Evaluate the values of 𝐹(3) and 𝐹(5).

17
Expected Value and Variance (Not required)
Expected value of a continuous random variable 𝑋:

$$\mu = E(X) = \int_{-\infty}^{\infty} x f(x)\,dx$$

Variance of a continuous random variable 𝑋:

$$\sigma^2 = \mathrm{Var}(X) = \int_{-\infty}^{\infty} (x-\mu)^2 f(x)\,dx = \int_{-\infty}^{\infty} x^2 f(x)\,dx - \mu^2$$

** Similar to the expected value and variance of a discrete random variable in Ch 3
18
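A brief added worked example (not from the original slides), again using the hypothetical density 𝑓(𝑥) = 2𝑥 on 0 ≤ 𝑥 ≤ 1:

$$\mu = \int_0^1 x \cdot 2x\,dx = \frac{2}{3}, \qquad \sigma^2 = \int_0^1 x^2 \cdot 2x\,dx - \mu^2 = \frac{1}{2} - \frac{4}{9} = \frac{1}{18}$$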
Uniform Distribution
A random variable follows the Uniform distribution 𝑈(𝑎, 𝑏) when the probability density at every point in its support, the interval [𝑎, 𝑏], is the same.

$$X \sim U(a,b): \quad f(x) = \frac{1}{b-a} \ \text{for } a \le x \le b, \qquad f(x) = 0 \ \text{otherwise}$$

For 𝑎 ≤ 𝑘₁ < 𝑘₂ ≤ 𝑏:

$$\Pr(k_1 \le X \le k_2) = \int_{k_1}^{k_2} \frac{1}{b-a}\,dx = \frac{x}{b-a}\Big|_{k_1}^{k_2} = \frac{k_2}{b-a} - \frac{k_1}{b-a} = \frac{k_2 - k_1}{b-a}$$
19
Uniform Distribution
A random variable follows the Uniform distribution 𝑈(𝑎, 𝑏) when the probability density at every point in its support, the interval [𝑎, 𝑏], is the same.

$$X \sim U(a,b): \quad f(x) = \frac{1}{b-a} \ \text{for } a \le x \le b, \qquad f(x) = 0 \ \text{otherwise}$$

Cumulative distribution function:

For 𝑎 ≤ 𝑥 ≤ 𝑏: $$F(x) = \Pr(X \le x) = \int_a^x \frac{1}{b-a}\,dy = \frac{y}{b-a}\Big|_a^x = \frac{x-a}{b-a}$$

For 𝑥 < 𝑎: 𝐹(𝑥) = 0.  For 𝑥 > 𝑏: 𝐹(𝑥) = 1.

$$\mu = \frac{a+b}{2}, \qquad \sigma^2 = \frac{(b-a)^2}{12}$$
20
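A quick added instance of these formulas (not from the original slides): for 𝑋 ~ 𝑈(0, 30),

$$\mu = \frac{0+30}{2} = 15, \qquad \sigma^2 = \frac{(30-0)^2}{12} = 75.$$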
Uniform Distribution
Example: Given the uniform distribution illustrated, suppose the length of a class is uniformly distributed between 50 and 52 minutes. Find the probability that a randomly selected class length is greater than 51.5 minutes.

Solutions: The shaded area represents class lengths longer than 51.5 minutes. By the correspondence between area and probability, Pr(𝑋 > 51.5) = (52 − 51.5)/(52 − 50) = 0.25.

21
Uniform Distribution
Example: Buses arrive at a specified bus stop at 15-minute intervals starting from 7:00 a.m. That is, they arrive at 7:00, 7:15, 7:30, ... If a passenger arrives at the stop at a time that is uniformly distributed between 7:00 a.m. and 7:30 a.m., find the probability that he has to wait more than 10 minutes for a bus.

Solutions: He has to wait more than 10 minutes if he arrives in the intervals (7:00, 7:05) or (7:15, 7:20). Let 𝑋 be the number of minutes past 7:00 at which the passenger arrives at the stop, so 𝑋 ∼ 𝑈(0, 30). The required probability is:
Pr(0 < 𝑋 < 5) + Pr(15 < 𝑋 < 20) = 5/30 + 5/30 = 1/3

22
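Not part of the original slides: a quick numerical check of the two uniform examples, assuming SciPy is available (SciPy parameterizes the uniform by loc = 𝑎 and scale = 𝑏 − 𝑎).

```python
from scipy.stats import uniform

# Class-length example: X ~ U(50, 52)
class_length = uniform(loc=50, scale=2)
print(class_length.sf(51.5))          # P(X > 51.5) = 0.25

# Bus example: minutes past 7:00, X ~ U(0, 30)
arrival = uniform(loc=0, scale=30)
p_wait = (arrival.cdf(5) - arrival.cdf(0)) + (arrival.cdf(20) - arrival.cdf(15))
print(p_wait)                         # P(0 < X < 5) + P(15 < X < 20) = 1/3
```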
Section 4.2
Normal Distribution

23
Introduction
The normal distribution (also known as the Gaussian distribution) is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.

𝑋 ~ Normal(𝜇, 𝜎²), where 𝜇 is the population mean and 𝜎² is the population variance.

Probability density function (pdf):

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2}, \qquad -\infty < x < \infty$$
24
Introduction
𝑋 ~ Normal(𝜇, 𝜎²), where 𝜇 is the population mean and 𝜎² is the population variance.

Probability density function (pdf):

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2}, \qquad -\infty < x < \infty$$

Example: When 𝜇 = 2 and 𝜎 = 2, calculate 𝑓(1):

$$f(1) = \frac{1}{\sqrt{2\pi}\cdot 2}\, e^{-\frac{1}{2\cdot 2^2}(1-2)^2} = \frac{1}{2\sqrt{2 \times 3.14159}}\, e^{-\frac{1}{8}} = 0.1760$$
25
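Not part of the original slides: the value above can be checked numerically, assuming SciPy is available (note that SciPy's scale argument is 𝜎, not 𝜎²).

```python
from scipy.stats import norm

# pdf of Normal(mu = 2, sigma = 2) evaluated at x = 1
print(norm.pdf(1, loc=2, scale=2))    # ≈ 0.1760
```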
Shape of the Normal Distribution
𝑋 ~ Normal(𝜇, 𝜎²), where 𝜇 is the population mean and 𝜎² is the population variance.

The pdf of 𝑵𝒐𝒓𝒎𝒂𝒍(𝝁, 𝝈²) is
(i) a bell-shaped curve,
(ii) unimodal,
(iii) symmetric [Mean = Median = Mode].

• The normal distribution cannot model skewed distributions.
• Half of the population is less than the mean and half is greater than the mean.

[Figure: bell-shaped pdf 𝑓(𝑥) with maximum height 1/(√(2π)σ) at 𝑥 = 𝜇]
26
Shape of the Normal Distribution
Proof:
1. 𝑓(𝑥) achieves its maximum value at the mean 𝜇:

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2} \le \frac{1}{\sqrt{2\pi}\,\sigma}$$

2. The pdf is symmetric about 𝑥 = 𝜇: for any value 𝑎,

$$f(\mu+a) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2\sigma^2}(\mu+a-\mu)^2} = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{a^2}{2\sigma^2}}$$

$$f(\mu-a) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2\sigma^2}(\mu-a-\mu)^2} = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{a^2}{2\sigma^2}} = f(\mu+a)$$

3. As 𝑥 moves farther and farther away from 𝜇, 𝑓(𝑥) → 0.
27
Shape of the Normal Distribution
The mean is the central tendency of the distribution. It defines the location of the peak for normal distributions, and most values cluster around the mean. On a graph, changing the mean shifts the entire curve left or right along the 𝑿-axis.

The standard deviation is a measure of variability. It defines the width of the normal distribution. On a graph, changing the standard deviation either tightens or spreads out the width of the distribution along the 𝑿-axis. Larger standard deviations produce distributions that are more spread out.

[Figure: curves of 𝑁(𝜇₁, 𝜎²) and 𝑁(𝜇₂, 𝜎²) shifted along the 𝑥-axis; curves of 𝑁(𝜇, 𝜎₁²) and 𝑁(𝜇, 𝜎₂²) with different spreads]
28
Standard Normal Distribution
The standard normal distribution 𝑁(0,1) is a normal probability distribution with 𝜇 = 0 and 𝜎² = 1. [Special Case / Extremely Important]

Probability density function (pdf) of 𝑁(0,1):

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2} = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}, \qquad -\infty < x < \infty$$

𝑓(𝑥) = 𝑓(−𝑥): the distribution is symmetric about 0
Mean = Median = Mode = 0
29
Cumulative Distribution Function of 𝑵(𝟎, 𝟏)
Notation for the cdf of 𝑁(0,1): Φ(𝑧) = 𝐹(𝑧) = Pr(𝑍 ≤ 𝑧)

Φ(𝑧) has no closed form, so we are unable to compute its exact value by hand; we rely on the standard normal table instead.


30
Standard Normal Distribution
Suppose that 𝑋 ~ 𝑁(𝜇, 𝜎²) and we want to find Pr(𝑎 ≤ 𝑋 ≤ 𝑏). The cdf of 𝑁(𝜇, 𝜎²) cannot be obtained in closed form, but the shape of the normal distribution is preserved after translation (by −𝜇, which centralizes 𝑋) and rescaling (by 1/𝜎):

$$a \le X \le b \;\Rightarrow\; a-\mu \le X-\mu \le b-\mu \;\Rightarrow\; \frac{a-\mu}{\sigma} \le \frac{X-\mu}{\sigma} \le \frac{b-\mu}{\sigma} \;\Rightarrow\; \frac{a-\mu}{\sigma} \le Z \le \frac{b-\mu}{\sigma}$$

where $Z = \frac{X-\mu}{\sigma} \sim N(0,1)$. Therefore

$$\Pr(a \le X \le b) = \Pr\!\left(\frac{a-\mu}{\sigma} \le Z \le \frac{b-\mu}{\sigma}\right)$$
31
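A short added worked case of this conversion (not from the original slides), assuming 𝑋 ~ 𝑁(5, 4), so 𝜎 = 2:

$$\Pr(3 \le X \le 7) = \Pr\!\left(\frac{3-5}{2} \le Z \le \frac{7-5}{2}\right) = \Pr(-1 \le Z \le 1) = \Phi(1) - \Phi(-1) = 0.8413 - 0.1587 = 0.6826$$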
Cumulative Distribution Function of 𝑵(𝟎, 𝟏)

[Slides 32–35: standard normal table giving Φ(z) = Pr(𝑍 ≤ z)]
Cumulative Distribution Function of 𝑵(𝟎, 𝟏)
Example: Please double-check the values below by yourself based on the normal table.

Φ(0) = Pr(𝑍 ≤ 0) = 0.5 [symmetric around 0]
Φ(0.13) = Pr(𝑍 ≤ 0.13) = 0.5517
Φ(1.96) = Pr(𝑍 ≤ 1.96) = 0.9750
Φ(3.90) = Pr(𝑍 ≤ 3.90) = 1.0000
Φ(1.645) = 0.9500 [between Φ(1.64) and Φ(1.65)]

How to obtain the cdf of a negative value of 𝑍?


36
Cumulative Distribution Function of 𝑵(𝟎, 𝟏)
Example: Please double-check the values below by yourself based on the normal table.
Φ(−0.5) = 0.3085
Φ(−1) = 0.1587
Φ(−1.96) = 0.0250
Φ(−2.58) = 0.0049

Symmetry property: Φ(−𝑥) = Pr(𝑋 ≤ −𝑥) = Pr(𝑋 ≥ 𝑥) = 1 − Φ(𝑥)

Example: Compute Pr(−1 ≤ 𝑋 ≤ 1.5), assuming 𝑋 ~ 𝑁(0,1).
Solutions:
Pr(−1 ≤ 𝑋 ≤ 1.5) = Pr(𝑋 ≤ 1.5) − Pr(𝑋 ≤ −1)
= Pr(𝑋 ≤ 1.5) − Pr(𝑋 ≥ 1)
= Pr(𝑋 ≤ 1.5) − [1 − Pr(𝑋 ≤ 1)]
= 0.9332 − 0.1587 = 0.7745
37
Percentiles of 𝑵(𝟎, 𝟏) [Z-Scores]
Let 𝑧𝑝/100 be the 𝑝th percentile of 𝑁(0,1):
Pr(𝑋 < 𝑧𝑝/100) = Φ(𝑧𝑝/100) = 𝑝/100, where 𝑋 ~ 𝑁(0,1)

The percentile is the inverse of the cdf: the cdf maps 𝑧𝑝/100 to 𝑝/100, and the percentile maps 𝑝/100 back to 𝑧𝑝/100.

Key percentiles:
𝑧0.5 = 0, Φ(0) = Pr(𝑍 ≤ 0) = 0.5
𝑧0.95 = 1.645, Φ(1.645) = 0.9500
𝑧0.975 = 1.96, Φ(1.96) = Pr(𝑍 ≤ 1.96) = 0.9750
𝑧0.025 = −1.96, Φ(−1.96) = Pr(𝑍 > 1.96) = 0.025
38
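Not part of the original slides: the key percentiles can be checked numerically, assuming SciPy is available (SciPy's ppf is the inverse cdf, i.e. the percentile function).

```python
from scipy.stats import norm

print(norm.ppf(0.95))    # ≈ 1.645  (z_0.95)
print(norm.ppf(0.975))   # ≈ 1.96   (z_0.975)
print(norm.ppf(0.025))   # ≈ -1.96  (z_0.025)
print(norm.cdf(1.96))    # ≈ 0.975, recovering p/100 from the percentile
```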
Empirical properties of 𝑵(𝟎, 𝟏)

It can be shown that, under the standard normal density:
about 68% of the area lies between −𝟏 and +𝟏,
about 95% of the area lies between −𝟐 and +𝟐,
about 99% of the area lies between −𝟐.𝟓 and +𝟐.𝟓.

Equivalently, for 𝑋 ~ 𝑁(𝜇, 𝜎²):
𝜇 ± 1𝜎 covers about 68.26% of 𝑋
𝜇 ± 2𝜎 covers about 95.44% of 𝑋
𝜇 ± 3𝜎 covers about 99.73% of 𝑋
39
Conversion from 𝑵(𝝁, 𝝈𝟐 ) to 𝑵(𝟎, 𝟏)

$$\Pr(a \le X \le b) = \Pr\!\left(\frac{a-\mu}{\sigma} \le Z \le \frac{b-\mu}{\sigma}\right) = \Phi\!\left(\frac{b-\mu}{\sigma}\right) - \Phi\!\left(\frac{a-\mu}{\sigma}\right)$$

𝜇 ± 1𝜎 covers about 68.26% of 𝑋
𝜇 ± 2𝜎 covers about 95.44% of 𝑋
𝜇 ± 3𝜎 covers about 99.73% of 𝑋
40
Example: Based on the normal table, compute the value of 𝑧0.01 from the standard normal distribution 𝑁(0,1).

Example: Suppose 𝑋 ~ 𝑁(16, 9). Compute Pr(11.5 ≤ 𝑋 ≤ 20.5).

41
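Not part of the original slides: one way to verify your answers to the two examples above afterwards, assuming SciPy is available.

```python
from scipy.stats import norm

# z_0.01: the 1st percentile of N(0, 1)
print(norm.ppf(0.01))                                   # ≈ -2.33

# Pr(11.5 <= X <= 20.5) for X ~ N(16, 9), i.e. sigma = 3
print(norm.cdf(20.5, loc=16, scale=3) - norm.cdf(11.5, loc=16, scale=3))
# = Phi(1.5) - Phi(-1.5) ≈ 0.8664
```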
Section 4.3
Normal Approximation to Binomial Distribution

42
Normal Approximation to Binomial
Suppose 𝑋 ~ 𝐵𝑖𝑛𝑜𝑚𝑖𝑎𝑙(𝑛, 𝑝). When 𝑛 is large enough, the pmf Pr(𝑋 = 𝑥) can be well-approximated by the pdf of a normal distribution:
𝑌 ~ 𝑁(𝜇, 𝜎²), where 𝜇 = 𝑛𝑝, 𝜎² = 𝑛𝑝(1 − 𝑝)
Chart on the right:
Bars (in brown): 𝑋 ~ 𝐵𝑖𝑛𝑜𝑚𝑖𝑎𝑙(20, 0.25)
Curve (in blue): 𝑌 ~ 𝑁(5, 3.75)
43
Normal Approximation to Binomial
Question: Can we always find a normal distribution that approximates 𝑋 ~ 𝐵𝑖𝑛𝑜𝑚𝑖𝑎𝑙(𝑛, 𝑝) well?

Answer: No. When (1) 𝑛 is too small or (2) 𝑝 is either too large or too small, we would not be able to find a normal distribution that approximates 𝑋 well.
Small 𝑛: the bars in the pmf are not smooth enough.
𝑝 too large/small: the binomial distribution would be extremely left/right-skewed.

Rule of Five: When 𝑛𝑝𝑞 ≥ 5, 𝑁(𝑛𝑝, 𝑛𝑝𝑞) will be a good approximation to 𝐵𝑖𝑛𝑜𝑚𝑖𝑎𝑙(𝑛, 𝑝).
44
Marginal Cases

𝑛𝑝𝑞 = 56 ∗ 0.1 ∗ 0.9 = 5.04

𝑛𝑝𝑞 = 20 ∗ 0.5 ∗ 0.5 = 5

45
Marginal Cases

46
Normal Approximation to the Poisson Distribution
The normal distribution can also be used to approximate the Poisson
distribution. The motivation for this is that the Poisson distribution is
cumbersome to use for large values of 𝜇

Suppose 𝑋 ~ 𝑃𝑜𝑖𝑠𝑠𝑜𝑛(𝜇). The pmf Pr(𝑋 = 𝑥) can be well-approximated by the pdf of a normal distribution: 𝑌 ~ 𝑁𝑜𝑟𝑚𝑎𝑙(𝜇, 𝜇).

Rule: When 𝜇 ≥ 10, 𝑁(𝜇, 𝜇) will be a good approximation to 𝑃𝑜𝑖𝑠𝑠𝑜𝑛(𝜇)

47
Marginal Cases

48
Continuity Correction
The Binomial/Poisson distribution is a discrete distribution, while the Normal is a continuous distribution. [Need to take care of the continuity issue]

Continuity Correction: When approximating the pmf of the Binomial/Poisson by the pdf of the Normal, one needs to adjust the upper and lower boundaries when computing the normal probabilities.
49
Continuity Correction
Example: Consider 𝑋 ~ 𝐵𝑖𝑛𝑜𝑚𝑖𝑎𝑙(8, 0.45) approximated by 𝑌 ~ 𝑁(𝑛𝑝, 𝑛𝑝𝑞) = 𝑁(3.6, 1.98). Then Pr(𝑋 = 2) ≈ Pr(1.5 ≤ 𝑌 ≤ 2.5).

Continuity Correction:
Pr(𝑋 = 𝑎) ≈ Pr(𝑎 − 0.5 ≤ 𝑌 ≤ 𝑎 + 0.5)
Lower boundary: Pr(𝑋 = 0) ≈ Pr(𝑌 ≤ 0.5)
Upper boundary (Binomial distribution): Pr(𝑋 = 𝑛) ≈ Pr(𝑌 ≥ 𝑛 − 0.5)
50
Continuity Correction
Example: Consider 𝑋 ~ 𝐵𝑖𝑛𝑜𝑚𝑖𝑎𝑙(8, 0.45) approximated by 𝑌 ~ 𝑁(𝑛𝑝, 𝑛𝑝𝑞) = 𝑁(3.6, 1.98). Then Pr(𝑋 = 2) ≈ Pr(1.5 ≤ 𝑌 ≤ 2.5).

Continuity Correction:
Pr(𝑎 ≤ 𝑋 ≤ 𝑏) ≈ Pr(𝑎 − 0.5 ≤ 𝑌 ≤ 𝑏 + 0.5)
Pr(𝑋 ≥ 𝑎) ≈ Pr(𝑌 ≥ 𝑎 − 0.5)
Pr(𝑋 ≤ 𝑏) ≈ Pr(𝑌 ≤ 𝑏 + 0.5)
51
Example: A company produces dialysis filters (透析過濾器). On average,
10% are defective (缺陷) and will not pass inspection. What is the
probability that at least 15% of a sample of 100 filters are defective?​

Solutions: Let 𝑋 = number of defective filters. Then 𝑋 ~ 𝐵𝑖𝑛𝑜𝑚𝑖𝑎𝑙(𝑛, 𝑝), where 𝑛 = 100 and 𝑝 = 0.10.
Now 𝑛𝑝𝑞 = 100(0.1)(0.9) = 9 ≥ 5, so we can approximate 𝑋 by a normal distribution 𝑌, with 𝑌 ~ 𝑁(𝑛𝑝, 𝑛𝑝𝑞) = 𝑁(10, 9).

$$\Pr(X \ge 15) \approx \Pr(Y > 14.5) = \Pr\!\left(Z > \frac{14.5 - 10}{\sqrt{9}}\right) = \Pr(Z \ge 1.5) = 1 - \Phi(1.5) = 1 - 0.9332 = 0.0668$$

[Actual value: Pr(𝑋 ≥ 15) = 0.0726]
52
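Not part of the original slides: both the exact value and the normal approximation can be reproduced numerically, assuming SciPy is available.

```python
from scipy.stats import binom, norm

n, p = 100, 0.10

# Exact binomial tail probability Pr(X >= 15)
print(1 - binom.cdf(14, n, p))                                         # ≈ 0.0726

# Normal approximation with continuity correction: Pr(Y > 14.5), Y ~ N(10, 9)
print(1 - norm.cdf(14.5, loc=n * p, scale=(n * p * (1 - p)) ** 0.5))   # ≈ 0.0668
```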
End of Chapter 4

53
