
7
Sampling Distributions and Point Estimation of Parameters

CHAPTER OUTLINE
7-1 Point Estimation
7-2 Sampling Distributions and the Central Limit Theorem
7-3 General Concepts of Point Estimation
    7-3.1 Unbiased Estimators
    7-3.2 Variance of a Point Estimator
    7-3.3 Standard Error: Reporting a Point Estimate
    7-3.4 Mean Square Error of an Estimator
7-4 Methods of Point Estimation
    7-4.1 Method of Moments
    7-4.2 Method of Maximum Likelihood
    7-4.3 Bayesian Estimation of Parameters
Learning Objectives for Chapter 7
After careful study of this chapter, you should be able to do the
following:
1. Explain the general concepts of estimating the parameters of a
population or a probability distribution.
2. Explain the important role of the normal distribution as a sampling
distribution.
3. Understand the central limit theorem.
4. Explain important properties of point estimators, including bias,
variance, and mean square error.
5. Know how to construct point estimators using the method of maximum
likelihood.
6. Know how to compute and explain the precision with which a parameter
is estimated.

Consider a situation
• A manufacturer produces an automotive casting part.
• The daily production quantity is 10,000 parts.
• Several part characteristics are measured:
  – Length, weight, strength, etc.
  – Defective items
  – Number of flaws (bubbles, pinholes)
• Consider tensile strength testing (destructive testing) to determine the average tensile strength:
  – If all parts (N) are measured ➔ parameters
  – If only some parts are measured (a sample of size n) ➔ statistics ➔ point estimation
• If the customer requirement is that the average must be greater than 250 MPa ➔ hypothesis testing

Statistical Inference
• Statistical inference is used to make decisions and draw conclusions about parameters.
• Two major areas:
  – Point estimation
  – Hypothesis testing

7.1 Point Estimation
• A point estimate is a reasonable value of a population parameter.
• The data collected, X1, X2, ..., Xn, are random variables.
• Functions of these random variables, such as x-bar and s^2, are also random variables, called statistics.
• Each statistic has its own distribution, called its sampling distribution.
Point Estimator
A point estimate of some population parameter θ
is a single numerical value Θ ෡.
෡ is called the point estimator.
The statistic Θ

As an example, suppose the random variable 𝑋 is normally distributed with


an unknown mean μ.

The sample mean is a point estimator of the unknown population mean μ.

ሜ After the sample has been selected,


That is, μො = 𝑋.
the numerical value 𝑥 is the point estimate of μ.
Thus if 𝑥1 = 25, 𝑥2 = 30, 𝑥3 = 29, 𝑥4 = 31, the point estimate of μ is
25 + 30 + 29 + 31
𝑥= = 28.75
4

Some Parameters & Their Statistics

Parameter   Measure                                         Statistic
μ           Mean of a single population                     x-bar
σ2          Variance of a single population                 s2
σ           Standard deviation of a single population       s
p           Proportion of a single population               p-hat
μ1 − μ2     Difference in means of two populations          x-bar1 − x-bar2
p1 − p2     Difference in proportions of two populations    p-hat1 − p-hat2

• There can be several choices for the point estimator of a parameter.
• To estimate the mean of a population, we could choose the:
  – Sample mean.
  – Sample median.
  – Average of the largest & smallest observations of the sample.
• We need to develop criteria to compare estimators using their statistical properties.
Some Definitions
• The random variables X1, X2,…,Xn are a random
sample of size n if:
a) The Xi are independent random variables.
b) Every Xi has the same probability distribution.
• A statistic is any function of the observations
in a random sample.
• The probability distribution of a statistic is
called a sampling distribution.

7.2 Sampling Distribution of the Sample Mean
• A random sample of size n is taken from a normal population
with mean μ and variance σ2.
• The observations, X1, X2,…,Xn, are normally and independently
distributed.
• A linear function (X-bar) of normal and independent random
variables is itself normally distributed.

X-bar = (X1 + X2 + ... + Xn) / n  has a normal distribution

with mean  μ_X-bar = (μ + μ + ... + μ) / n = μ

and variance  σ²_X-bar = (σ² + σ² + ... + σ²) / n² = nσ²/n² = σ²/n

Central Limit Theorem
If X1, X2, ..., Xn is a random sample of size n taken from a population (either finite or infinite) with mean μ and finite variance σ², and if X-bar is the sample mean, then the limiting form of the distribution of

Z = (X-bar − μ) / (σ/√n)     (7−1)

as n → ∞, is the standard normal distribution.

Sampling Distributions of Sample Means

Figure 7-1 Distributions of average scores from throwing dice. Mean = 3.5.

Formulas (discrete uniform on a, ..., b with a = 1, b = 6):
μ = (b + a) / 2
σ²_X = [(b − a + 1)² − 1] / 12
σ²_X-bar = σ²_X / n

   n dice   Variance   Std dev
a)    1       2.9        1.7
b)    2       1.5        1.2
c)    3       1.0        1.0
d)    5       0.6        0.8
e)   10       0.3        0.5
[Figure: distribution of average scores from throwing n = 2 dice]
Demonstration simulation
Sample from a discrete uniform distribution on {1, 2, ..., 6} (a single die). Draw a sample of 6 values, calculate X-bar (n = 6), and repeat 1000 times; then construct a histogram of the X-bar values.

[Histogram of X-bar with fitted normal curve: Mean = 3.502, StDev = 0.7028, N = 1000. The histogram is approximately normal, as the central limit theorem predicts.]
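A minimal Python sketch of this demonstration (my illustration; the random seed, numpy, and matplotlib are my choices, not part of the original slides):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)

# 1000 samples of n = 6 rolls of a fair die; X-bar for each sample
n, reps = 6, 1000
xbars = rng.integers(1, 7, size=(reps, n)).mean(axis=1)

print(f"mean of X-bar    = {xbars.mean():.3f}   (theory: 3.5)")
print(f"std dev of X-bar = {xbars.std(ddof=1):.4f}  (theory: sqrt((35/12)/6) = 0.6972)")

plt.hist(xbars, bins=20, edgecolor="black")
plt.xlabel("X-bar")
plt.ylabel("Frequency")
plt.title("Histogram of X-bar (n = 6, 1000 samples)")
plt.show()
```

The histogram comes out approximately bell-shaped around 3.5, matching the slide's fitted normal (mean 3.502, standard deviation 0.7028).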
Example 7-1: Resistors
An electronics company manufactures resistors having a mean resistance of 100 ohms and a standard deviation of 10 ohms. The distribution of resistance is normal. What is the probability that a random sample of n = 25 resistors will have an average resistance of less than 95 ohms?

Figure 7-2 Desired probability is shaded.

Answer:
σ_X-bar = σ_X / √n = 10 / √25 = 2.0
P(X-bar < 95) = Φ[(95 − 100) / 2] = Φ(−2.5) = 0.0062 = NORMSDIST(−2.5)
A rare event, occurring less than 1% of the time.
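The same calculation can be sketched in Python (scipy's norm.cdf plays the role of NORMSDIST; this snippet is my addition, not from the slides):

```python
from math import sqrt

from scipy.stats import norm

mu, sigma, n = 100, 10, 25
sigma_xbar = sigma / sqrt(n)   # standard deviation of X-bar = 2.0

# P(X-bar < 95) via the standard normal CDF
z = (95 - mu) / sigma_xbar     # = -2.5
print(norm.cdf(z))             # 0.0062...
```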

Minitab
• Graph > Probability Distribution Plot
• Complete the dialog with the distribution parameters for X-bar; the resulting plot shades the desired probability. [Screenshots omitted.]
Example 7-2: Central Limit Theorem
Suppose that a random variable X has a continuous uniform distribution:

f(x) = 1/2 for 4 ≤ x ≤ 6, and 0 otherwise

Find the distribution of the sample mean of a random sample of size n = 40.

The distribution of X-bar is approximately normal by the CLT, with

μ = (b + a) / 2 = (6 + 4) / 2 = 5.0
σ² = (b − a)² / 12 = (6 − 4)² / 12 = 1/3
σ²_X-bar = σ²/n = (1/3) / 40 = 1/120

Figure 7-3 Distributions of X and X-bar.
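A quick simulation check of this result (my sketch using numpy; seed and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# 100,000 sample means, each from n = 40 draws of Uniform(4, 6)
xbars = rng.uniform(4, 6, size=(100_000, 40)).mean(axis=1)

print(f"mean(X-bar)     = {xbars.mean():.4f}    (theory: 5.0)")
print(f"variance(X-bar) = {xbars.var(ddof=1):.6f}  (theory: 1/120 = {1/120:.6f})")
```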
Two Populations
We have two independent normal populations. What is the distribution of the difference of the sample means?
The sampling distribution of X-bar1 − X-bar2 has:

μ_(X-bar1 − X-bar2) = μ_X-bar1 − μ_X-bar2 = μ1 − μ2

σ²_(X-bar1 − X-bar2) = σ²_X-bar1 + σ²_X-bar2 = σ1²/n1 + σ2²/n2

(The variances add, because the two samples are independent.)

The distribution of X-bar1 − X-bar2 is approximately normal if:
(1) n1 and n2 are both greater than 30, regardless of the distributions of X1 and X2, or
(2) n1 and n2 are less than 30 but the distributions themselves are approximately normal.

Sampling Distribution of a Difference in Sample Means
• If we have two independent populations with means μ1 and μ2, and variances σ1² and σ2²,
• And if X-bar1 and X-bar2 are the sample means of two independent random samples of sizes n1 and n2 from these populations,
• Then the sampling distribution of:

  Z = [(X-bar1 − X-bar2) − (μ1 − μ2)] / √(σ1²/n1 + σ2²/n2)     (7−4)

  is approximately standard normal, if the conditions of the central limit theorem apply.
• If the two populations are normal, then the sampling distribution of Z is exactly standard normal.
Example 7-3: Aircraft Engine Life
The effective life of a component used in jet-turbine aircraft engines is a normally distributed random variable with the "old" parameters shown below. The engine manufacturer introduces an improvement into the manufacturing process for this component that changes the parameters to the "new" values shown. Random samples are selected from the "old" process and the "new" process as shown. What is the probability that the difference in the two sample means is at least 25 hours?

Figure 7-4 Sampling distribution of the sample mean difference.

Process        Old (1)   New (2)   Diff (2−1)
mean μ =       5,000     5,050     50
σ =            40        30
n =            16        25

Calculations:
σ/√n =         10        6
σ_(X-bar2 − X-bar1) = √(10² + 6²) = 11.7
z = (25 − 50) / 11.7 = −2.14
P(X-bar2 − X-bar1 ≥ 25) = P(Z > z) = 0.9840 = 1 − NORMSDIST(z)
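In Python (a sketch of the same arithmetic; scipy's norm.cdf replaces the spreadsheet's NORMSDIST):

```python
from math import sqrt

from scipy.stats import norm

mu_old, sigma_old, n_old = 5000, 40, 16
mu_new, sigma_new, n_new = 5050, 30, 25

# standard deviation of X-bar_new - X-bar_old
se_diff = sqrt(sigma_old**2 / n_old + sigma_new**2 / n_new)  # sqrt(100 + 36) = 11.66

# P(X-bar_new - X-bar_old >= 25)
z = (25 - (mu_new - mu_old)) / se_diff                       # = -2.14
print(1 - norm.cdf(z))                                       # = 0.9840
```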
7.3 General Concepts of Point Estimation
• We want point estimators that:
  – Are unbiased.
  – Have minimal variance.

7.3.1 Unbiased Estimators Defined
The point estimator Θ ෡ is an unbiased estimator
for the parameter θ if:
𝐸 Θ෡ =θ (7−5)
If the estimator is not unbiased, then the
difference:
𝐸 Θ෡ −θ (7−6)

is called the bias of the estimator Θ.


The mean of the sampling distribution of Θ
is equal to θ.

Example 7-4: Sample Mean & Variance Are Unbiased-1
• X is a random variable with mean μ and variance σ2. Let X1,
X2,…,Xn be a random sample of size n.
• Show that the sample mean (X-bar) is an unbiased estimator
of μ.

𝑋1 + 𝑋2 +. . . +𝑋𝑛
𝐸 𝑋ሜ = 𝐸
𝑛
1
= 𝐸 𝑋1 + 𝐸 𝑋2 +. . . +𝐸 𝑋𝑛
𝑛
1 𝑛𝜇
= 𝜇 + 𝜇+. . . +𝜇 = =𝜇
𝑛 𝑛

Example 7-4: Sample Mean & Variance Are Unbiased-2
Show that the sample variance (S²) is an unbiased estimator of σ².

E(S²) = E[ Σᵢ (Xᵢ − X-bar)² / (n − 1) ]
      = [1/(n − 1)] E[ Σᵢ (Xᵢ² + X-bar² − 2 X-bar Xᵢ) ]
      = [1/(n − 1)] E[ Σᵢ Xᵢ² − n X-bar² ]
      = [1/(n − 1)] [ Σᵢ E(Xᵢ²) − n E(X-bar²) ]
      = [1/(n − 1)] [ Σᵢ (μ² + σ²) − n(μ² + σ²/n) ]
      = [1/(n − 1)] (nμ² + nσ² − nμ² − σ²)
      = [1/(n − 1)] (n − 1)σ² = σ²
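Both results can be checked by simulation (my illustration, not from the slides): averaged over many samples, X-bar recovers μ and S² recovers σ², while the n-divisor version of the variance is biased low.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
mu, sigma, n, reps = 10.0, 2.0, 5, 200_000

samples = rng.normal(mu, sigma, size=(reps, n))

xbar = samples.mean(axis=1)
s2 = samples.var(axis=1, ddof=1)         # divides by n - 1 (unbiased)
s2_biased = samples.var(axis=1, ddof=0)  # divides by n (biased)

print(f"E(X-bar) ~= {xbar.mean():.3f}       (mu = {mu})")
print(f"E(S^2)   ~= {s2.mean():.3f}       (sigma^2 = {sigma**2})")
print(f"n-divisor version ~= {s2_biased.mean():.3f}  (low by factor (n-1)/n = {(n-1)/n})")
```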

Other Unbiased Estimators of the Population Mean

Data (n = 10):  12.8, 9.4, 8.7, 11.6, 13.1, 9.8, 14.1, 8.5, 12.1, 10.3   (Σxᵢ = 110.4)
Sorted:         8.5, 8.7, 9.4, 9.8, 10.3, 11.6, 12.1, 12.8, 13.1, 14.1

Mean = X-bar = 110.4 / 10 = 11.04
Median = X-tilde = (10.3 + 11.6) / 2 = 10.95
10% trimmed mean = (110.4 − 8.5 − 14.1) / 8 = 10.98

• All three statistics are unbiased.
• Which is best? We want the most reliable one.
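The three candidate estimators can be computed in Python (a sketch; scipy's trim_mean handles the 10% trimming):

```python
import numpy as np
from scipy.stats import trim_mean

x = np.array([12.8, 9.4, 8.7, 11.6, 13.1, 9.8, 14.1, 8.5, 12.1, 10.3])

print(f"mean         = {x.mean():.2f}")            # 11.04
print(f"median       = {np.median(x):.2f}")        # 10.95
print(f"trimmed mean = {trim_mean(x, 0.10):.2f}")  # 10.98 (drops smallest & largest)
```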

7.3.2 Choosing Among Unbiased Estimators
Suppose that Θ෡1 and Θ
෡ 2 are unbiased estimators of θ.
෡1 is less than the variance of Θ
The variance of Θ ෡2.
෡1 is preferable.
∴ Θ

Figure 7-5 The sampling distributions


of two unbiased estimators.
Minimum Variance Unbiased Estimators
• If we consider all unbiased estimators of θ, the
one with the smallest variance is called the
minimum variance unbiased estimator (MVUE).
• If X1, X2,…, Xn is a random sample of size n from a
normal distribution with mean μ and variance σ2,
then the sample mean X-bar is the MVUE for μ.
• The sample mean and a single observation are
unbiased estimators of μ. The variance of the:
– Sample mean is σ2/n
– Single observation is σ2
– Since σ2/n ≤ σ2, the sample mean is preferred.

7.3.3 Standard Error of an Estimator
When a point estimate is reported, it is usually desirable to give some idea of the precision of the estimate. The measure of precision used is the standard error.

The standard error of an estimator Θ-hat is its standard deviation, given by:

σ_Θ-hat = √V(Θ-hat)

If the standard error involves unknown parameters that can be estimated, substituting those estimates into σ_Θ-hat produces an estimated standard error, denoted by σ-hat_Θ-hat.

Equivalent notation: σ-hat_Θ-hat = s_Θ-hat = se(Θ-hat)

If the Xi are ~ N(μ, σ²), then X-bar is normally distributed and σ_X-bar = σ/√n. If σ is not known, then σ-hat_X-bar = s/√n.
Example 7-5: Thermal Conductivity
• These observations are 10 measurements xi
of thermal conductivity of Armco iron. 41.60
• Since σ is not known, we use s to calculate 41.48
the standard error. 42.34
41.95
• Since the standard error is 0.2% 41.86
(0.0898/41.924) of the mean, the mean 42.18
estimate is fairly precise. 41.72
42.26
• We can be very confident (95.5%) that the
41.81
true population mean is 41.924 ± 42.04
2(0.0898). 41.924 = Mean
0.284 = Std dev (s )
0.0898 = Std error
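The same summary statistics in Python (a sketch of the calculation):

```python
import numpy as np

x = np.array([41.60, 41.48, 42.34, 41.95, 41.86, 42.18,
              41.72, 42.26, 41.81, 42.04])

xbar = x.mean()             # 41.924
s = x.std(ddof=1)           # 0.284 (sample std dev, n - 1 divisor)
se = s / np.sqrt(len(x))    # 0.0898 (estimated standard error)

print(f"mean = {xbar:.3f}, s = {s:.3f}, std error = {se:.4f}")
print(f"x-bar +/- 2*SE: ({xbar - 2*se:.3f}, {xbar + 2*se:.3f})")
```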

Minitab
• Stat > Basic Statistics > Display Descriptive Statistics

7.3.4 Mean Squared Error
The mean squared error of an estimator Θ-hat of the parameter θ is defined as:

MSE(Θ-hat) = E(Θ-hat − θ)²     (7−7)

This can be rewritten as:

MSE(Θ-hat) = E[Θ-hat − E(Θ-hat)]² + [θ − E(Θ-hat)]²
           = V(Θ-hat) + (bias)²

Conclusion: The mean squared error (MSE) of the estimator is equal to the variance of the estimator plus the bias squared. It measures both characteristics.


Relative Efficiency
• The MSE is an important criterion for comparing two estimators.

  Relative efficiency = MSE(Θ-hat1) / MSE(Θ-hat2)

• If the relative efficiency is less than 1, we conclude that the 1st estimator is superior to the 2nd estimator.
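A simulation sketch of relative efficiency (my illustration): for normal data, both the sample mean and the sample median are unbiased for μ, and the ratio MSE(mean)/MSE(median) comes out well below 1, so the mean is the superior estimator here.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
mu, sigma, n, reps = 0.0, 1.0, 25, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))

mse_mean = np.mean((samples.mean(axis=1) - mu) ** 2)
mse_median = np.mean((np.median(samples, axis=1) - mu) ** 2)

# for normal data and large n this ratio approaches 2/pi ~= 0.64
print(f"relative efficiency = {mse_mean / mse_median:.3f}")
```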

Optimal Estimator
• A biased estimator can be
preferred to an unbiased
estimator if it has a
smaller MSE.
• Biased estimators are
occasionally used in linear Figure 7-6 A biased estimator has a
regression. smaller variance than the unbiased
estimator.
• An estimator whose MSE
is smaller than that of any
other estimator is called
an optimal estimator.
7.4 Methods of Point Estimation
• We discuss methods for obtaining point estimators.
• There are three methodologies for creating point estimates of a population parameter:
  – Method of moments
  – Method of maximum likelihood
  – Bayesian estimation of parameters
• Each approach can be used to create estimators with varying degrees of bias and relative MSE efficiency.
7.4.2 Maximum Likelihood Estimators
• Suppose that X is a random variable with probability distribution f(x; θ), where θ is a single unknown parameter. Let x1, x2, ..., xn be the observed values in a random sample of size n. Then the likelihood function of the sample is:

  L(θ) = f(x1; θ) ∙ f(x2; θ) ∙ ... ∙ f(xn; θ)     (7-9)

• Note that the likelihood function is now a function of only the unknown parameter θ. The maximum likelihood estimator (MLE) of θ is the value of θ that maximizes the likelihood function L(θ).
Example 7-9: Bernoulli MLE
Let X be a Bernoulli random variable. The probability mass
function is f(x;p) = px(1-p)1-x, x = 0, 1 where p is the parameter
to be estimated. The likelihood function of a random sample
of size n is:

𝐿 𝑝 = 𝑝 𝑥1 1 − 𝑝 1−𝑥1 ⋅ 𝑝 𝑥2 1 − 𝑝 1−𝑥2 ⋅. . .⋅ 𝑝 𝑥𝑛 1 − 𝑝 1−𝑥𝑛


𝑛
𝑛
= ෑ 𝑝 𝑥𝑖 1 − 𝑝 1−𝑥𝑖 = 𝑝σ𝑖=1 𝑥𝑖 1 − 𝑝 𝑛−σ𝑛
𝑖=1 𝑥𝑖

𝑖=1
𝑛 𝑛

ln 𝐿 𝑝 = ෍ 𝑥𝑖 ln 𝑝 + 𝑛 − ෍ 𝑥𝑖 ln 1 − 𝑝
𝑖=1 𝑖=1
𝑑 ln 𝐿 𝑝 σ𝑛𝑖=1 𝑥𝑖 𝑛− 𝑛
σ𝑖=1 𝑥𝑖
= − =0
𝑑𝑝 𝑝 1−𝑝
σ𝑛𝑖=1 𝑥𝑖
𝑝Ƹ =
𝑛
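A numerical check (my sketch; scipy's minimize_scalar maximizes the log-likelihood by minimizing its negative): the numeric optimum matches the closed form p-hat = (Σxi)/n.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(seed=5)
x = rng.binomial(1, p=0.3, size=200)  # Bernoulli sample, true p = 0.3

def neg_log_lik(p):
    # negative Bernoulli log-likelihood: -[sum(x) ln p + (n - sum(x)) ln(1 - p)]
    return -(x.sum() * np.log(p) + (len(x) - x.sum()) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(f"numeric MLE = {res.x:.4f}, closed form = {x.mean():.4f}")
```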

Example 7-10: Normal MLE for μ
Let X be a normal random variable with unknown mean μ and known variance σ². The likelihood function of a random sample of size n is:

L(μ) = Πᵢ [1/(σ√(2π))] e^(−(xi − μ)²/(2σ²))
     = (2πσ²)^(−n/2) e^(−(1/(2σ²)) Σ(xi − μ)²)

ln L(μ) = −(n/2) ln(2πσ²) − (1/(2σ²)) Σ(xi − μ)²

d ln L(μ)/dμ = (1/σ²) Σ(xi − μ) = 0

μ-hat = (Σxi)/n = X-bar

Example 7-11: Exponential MLE
Let X be an exponential random variable with parameter λ. The likelihood function of a random sample of size n is:

L(λ) = Πᵢ λe^(−λxi) = λⁿ e^(−λ Σxi)

ln L(λ) = n ln λ − λ Σxi

d ln L(λ)/dλ = n/λ − Σxi = 0

λ-hat = n/(Σxi) = 1/X-bar
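The same numerical check for the exponential case (my sketch): maximizing the log-likelihood numerically reproduces λ-hat = 1/x-bar.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(seed=6)
x = rng.exponential(scale=1 / 2.0, size=500)  # true lambda = 2 (numpy uses scale = 1/lambda)

def neg_log_lik(lam):
    # negative exponential log-likelihood: -[n ln(lambda) - lambda * sum(x)]
    return -(len(x) * np.log(lam) - lam * x.sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100), method="bounded")
print(f"numeric MLE = {res.x:.4f}, closed form 1/x-bar = {1 / x.mean():.4f}")
```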

Example 7-12: Normal MLEs for μ & σ²
Let X be a normal random variable with both mean μ and variance σ² unknown. The likelihood function of a random sample of size n is:

L(μ, σ²) = Πᵢ [1/(σ√(2π))] e^(−(xi − μ)²/(2σ²))
         = (2πσ²)^(−n/2) e^(−(1/(2σ²)) Σ(xi − μ)²)

ln L(μ, σ²) = −(n/2) ln(2πσ²) − (1/(2σ²)) Σ(xi − μ)²

∂ ln L(μ, σ²)/∂μ = (1/σ²) Σ(xi − μ) = 0

∂ ln L(μ, σ²)/∂σ² = −n/(2σ²) + (1/(2σ⁴)) Σ(xi − μ)² = 0

μ-hat = X-bar  and  σ²-hat = Σ(xi − X-bar)²/n

Note that σ²-hat divides by n rather than n − 1, so the MLE of σ² is a biased estimator (although the bias vanishes as n grows).

Properties of an MLE
Under very general and non−restrictive conditions,
෡ is the MLE of the
when the sample size n is large and if Θ
parameter ,
෡ is an approximately unbiased estimator for θ, i.e., 𝐸 Θ
(1) Θ ෡ =θ
෡ is nearly as small as the variance
(2) The variance of Θ
that could be obtained with any other estimator, and
෡ has an approximate normal distribution.
(3) Θ

Notes:
• Mathematical statisticians will often prefer MLEs because of these
properties. Properties (1) and (2) state that MLEs are MVUEs.
• To use MLEs, the distribution of the population must be known or
assumed.

Important Terms & Concepts of Chapter 7
Bayes estimator
Bias in parameter estimation
Central limit theorem
Estimator vs. estimate
Likelihood function
Maximum likelihood estimator
Mean square error of an estimator
Minimum variance unbiased estimator
Moment estimator
Normal distribution as the sampling distribution of the sample mean
Normal distribution as the sampling distribution of the difference in two sample means
Parameter estimation
Point estimator
Population or distribution moments
Posterior distribution
Prior distribution
Sample moments
Sampling distribution
Standard error and estimated standard error of an estimator
Statistic
Statistical inference
Unbiased estimator
© John Wiley & Sons, Inc. Applied Statistics and Probability for Engineers, by Montgomery and Runger.
