
# Point Estimation

Definition:
Point estimation is the choice of a statistic, i.e. a single number calculated from sample data, for which we have some expectation or assurance that it is reasonably close to the parameter it is supposed to estimate.
Point estimation of a mean:
Parameter: population mean µ
Data: a random sample X₁, ..., Xₙ
Estimator: X̄
Estimate of standard error: S/√n
Unbiased estimator:
Let θ be the parameter of interest and θ̂ be a statistic. Then θ̂ is said to be an unbiased estimator, or its value an unbiased estimate, if and only if the mean of the sampling distribution of the estimator θ̂ equals θ, whatever the value of θ.
Remark:
It is a mathematical fact that X̄ is an unbiased estimator of the population mean µ, provided the observations are a random sample.
More efficient unbiased estimator:
A statistic θ̂₁ is said to be a more efficient unbiased estimator of the parameter θ than the statistic θ̂₂ if
1. θ̂₁ and θ̂₂ are both unbiased estimators of θ;
2. the variance of the sampling distribution of the first estimator is no larger than that of the second, and is smaller for at least one value of θ.
Maximum error of estimate:
Error of estimate = |X̄ − µ|.
To examine this error, we assert with probability 1 − α that the inequality
  −z_{α/2} ≤ (X̄ − µ)/(σ/√n) ≤ z_{α/2}
or |X̄ − µ|/(σ/√n) ≤ z_{α/2} is satisfied; then the maximum error of estimate is
  E = z_{α/2} · σ/√n
i.e. the error will be less than z_{α/2} · σ/√n with probability 1 − α (or we say with (1 − α)100% confidence that the error is at most E).
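As a quick sketch (assuming Python 3.8+ for `statistics.NormalDist`), the maximum error can be computed directly from σ, n, and α; the numbers below reproduce the setting of Ex. 7.6 with s substituted for σ:

```python
from math import sqrt
from statistics import NormalDist

def max_error(sigma, n, alpha=0.05):
    """E = z_{alpha/2} * sigma / sqrt(n): error bound holding with probability 1 - alpha."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # upper alpha/2 normal critical value
    return z * sigma / sqrt(n)

# s = 1.9, n = 49 as in Ex. 7.6; 95% confidence
print(round(max_error(sigma=1.9, n=49), 3))  # → 0.532
```

Using the exact quantile (1.95996...) rather than the table value 1.96 makes no difference at three decimal places here.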
Sample size determination:
Solving the above equation for n, we get the sample size
  n = (z_{α/2} · σ / E)²
rounded up to the next whole number.
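The sample-size formula can be sketched the same way; rounding up guarantees the error bound. The numbers match Ex. 7.11 below (σ = 1.25, E = 0.25, 98% confidence):

```python
from math import ceil
from statistics import NormalDist

def sample_size(sigma, E, alpha):
    """Smallest n for which z_{alpha/2} * sigma / sqrt(n) <= E."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil((z * sigma / E) ** 2)  # round up so the bound still holds

print(sample_size(sigma=1.25, E=0.25, alpha=0.02))  # → 136
```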
Remark: Note that the methods discussed so far require that σ be known or that it can be approximated with the sample standard deviation s.
Maximum error of estimate (σ unknown):
If sampling is done from a normal population, we can find the maximum error of estimate by using the fact that
  t = (X̄ − µ)/(S/√n)
is a random variable having the t distribution with n − 1 degrees of freedom.
Thus we have
  E = t_{α/2} · s/√n
where t_{α/2} has probability α/2 of being exceeded by a t random variable having n − 1 degrees of freedom.
Interval Estimation:
Definition:
An interval estimate is an interval for which we can assert, with a reasonable degree of certainty, that it will contain the parameter under consideration.
Large sample confidence interval for µ (σ known):
Suppose that we have a large (n ≥ 30) random sample from a population with unknown mean µ and known variance σ². Then, referring to the inequality
  −z_{α/2} ≤ (X̄ − µ)/(σ/√n) ≤ z_{α/2}
and calculating x̄, we obtain
  x̄ − z_{α/2} · σ/√n < µ < x̄ + z_{α/2} · σ/√n
An interval of this kind is known as a confidence interval for µ having degree of confidence 1 − α or (1 − α)100%, and its endpoints are called confidence limits.
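A minimal sketch of the interval computation (standard library only); with s = 1.9 substituted for σ it reproduces the numbers of Ex. 7.6(b) below:

```python
from math import sqrt
from statistics import NormalDist

def mean_ci(xbar, sigma, n, conf):
    """Large-sample (1 - alpha)100% confidence interval for the mean."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    half = z * sigma / sqrt(n)  # the maximum error E
    return xbar - half, xbar + half

lo, hi = mean_ci(12.00, 1.9, 49, conf=0.99)
print(round(lo, 1), round(hi, 1))  # → 11.3 12.7
```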
Large sample confidence interval for µ (σ unknown):
Since σ is unknown in most applications, we may have to make the further approximation of substituting the sample standard deviation s for σ. Thus,
  x̄ − z_{α/2} · s/√n < µ < x̄ + z_{α/2} · s/√n
Small sample confidence interval for µ:
For small samples (n < 30), with t_{α/2}, we have the following (1 − α)100% confidence interval formula:
  x̄ − t_{α/2} · s/√n < µ < x̄ + t_{α/2} · s/√n
Ex. 7.6 (Pg 225):
To estimate the average time it takes to assemble a certain computer component, the industrial engineer at an electronics firm timed 49 technicians in the performance of this task, getting a mean of 12.00 min and a standard deviation of 1.9 min.
(a) What can we say about the maximum error if x̄ = 12.00 is used as a point estimate of the actual average time required to do the job?
Sol.:
(a) n = 49, x̄ = 12.00, s = 1.9, 1 − α = 0.95, z_{0.025} = 1.96
  ∴ E = z_{α/2} · s/√n = 1.96 × 1.9/√49 = 0.532
Thus we can say with 95% confidence that the maximum error of estimate is 0.532.
(b) Use the given data to construct a 99% confidence interval for the true average time it takes to assemble the computer component.
Sol.:
(b) n = 49, x̄ = 12.00, s = 1.9, 1 − α = 0.99, z_{0.005} = 2.575
Using the confidence interval formula,
  12 − 2.575 × 1.9/√49 < µ < 12 + 2.575 × 1.9/√49
  11.3 < µ < 12.7
Thus, we are 99% confident that the interval from 11.3 min to 12.7 min contains the true average time.
Ex. 7.7 (Pg 225):
With reference to Ex. 7.6, with what confidence can we assert that the sample mean does not differ from the true mean by more than 15 seconds?
Sol.: n = 49, s = 1.9, E = 15 sec = 0.25 min
Using E = z_{α/2} × s/√n,
  z_{α/2} = 0.25 × √49 / 1.9 = 0.9211
  P(Z < 0.9211) = 1 − α/2 = 0.8212
  1 − α = 0.6424
Therefore, we can assert with 64.24% confidence that the maximum error is at most 15 sec.
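This kind of calculation can be run in reverse numerically: given E, solve for z_{α/2} and convert back to a confidence level. A sketch (using exact normal quantiles gives 0.6430 rather than the table-rounded 0.6424):

```python
from math import sqrt
from statistics import NormalDist

def confidence_for_error(s, n, E):
    """Confidence level 1 - alpha at which |x̄ - µ| <= E."""
    z = E * sqrt(n) / s                  # solve E = z * s / sqrt(n) for z
    return 2 * NormalDist().cdf(z) - 1   # 1 - alpha = 2*Phi(z) - 1

# Ex. 7.7: E = 15 sec = 0.25 min
print(round(confidence_for_error(s=1.9, n=49, E=0.25), 4))
```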
Ex. 7.11 (Pg 225):
The principal of a college wants to use the mean of a random sample to estimate the average amount of time students take to get from one class to the next, and wants to be able to assert with 98% confidence that the error is at most 0.25 minute. If it can be presumed from experience that σ = 1.25 minutes, how large a sample will have to be taken?
Sol.: σ = 1.25, E = 0.25, 1 − α = 0.98 ⇒ α = 0.02
  ∴ n = (z_{α/2} · σ/E)² = (z_{0.01} × 1.25/0.25)² = (2.326 × 5)² ≈ 135.3
so, rounding up, a sample of size n = 136 is needed.
Ex. 7.15 (Pg 226):
A random sample of 100 teachers in a large metropolitan area revealed a mean weekly salary of Rs. 2,000 with a standard deviation of Rs. 43. With what degree of confidence can we assert that the average weekly salary of all teachers in the metropolitan area is between Rs. 1,985 and Rs. 2,015?
Sol.: n = 100, x̄ = 2000, s = 43, 1985 < µ < 2015
Using x̄ − z_{α/2} · s/√n < µ < x̄ + z_{α/2} · s/√n, we get
  2000 + z_{α/2} × 43/√100 = 2015
  z_{α/2} = 3.49
  ∴ P(Z < 3.49) = 1 − α/2 = 0.9998
  1 − α = 0.9996
Therefore, we can assert with 99.96% confidence that the average weekly salary of all teachers in the metropolitan area is between Rs. 1,985 and Rs. 2,015.
Maximum likelihood estimation:
Consider a random sample of size n from a discrete population f(x; θ) that depends on a parameter θ. The joint distribution is
  f(x₁; θ) f(x₂; θ) ··· f(xₙ; θ).
Once the observations become available, we can substitute their actual values x₁, x₂, ..., xₙ into the joint density. After the substitution, the resulting function of θ,
  L(θ) = f(x₁; θ) f(x₂; θ) ··· f(xₙ; θ),
is called the likelihood function. The maximum likelihood estimator of θ is the random variable which equals the value of θ that maximizes the probability of the observed sample. Because this is an after-the-fact calculation, we say that it maximizes the likelihood. The same procedure for obtaining the maximum likelihood estimator applies in the continuous case.
Ex. 7.23(a) (Pg 227):
Find the maximum likelihood estimator for λ when f(x; λ) is the Poisson distribution.
Sol.:
  f(x; λ) = e^{−λ} λˣ / x!,  x = 0, 1, ...,  λ > 0
The likelihood function is
  L(λ) = (e^{−λ} λ^{x₁}/x₁!) · (e^{−λ} λ^{x₂}/x₂!) ··· (e^{−λ} λ^{xₙ}/xₙ!)
       = e^{−nλ} λ^{∑xᵢ} / (x₁! · x₂! ··· xₙ!)
where the sums run over i = 1 to n. Take
  g(λ) = ln L(λ) = −nλ + (∑xᵢ) ln λ − ln(x₁! x₂! ··· xₙ!)
  g′(λ) = −n + (1/λ) ∑xᵢ = 0 ⇒ λ = (1/n) ∑xᵢ
  g″(λ) = −(1/λ²) ∑xᵢ < 0
This shows that g(λ) is maximum at λ = (1/n) ∑xᵢ; consequently, L(λ) is maximum there as well.
Hence, the maximum likelihood estimator is λ̂ = (1/n) ∑xᵢ = X̄.
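As a numerical sanity check of this derivation, maximizing g(λ) over a fine grid for a small sample recovers the sample mean (the counts below are made up for illustration):

```python
from math import lgamma, log

def poisson_loglik(lam, xs):
    """g(lambda) = -n*lambda + (sum x_i) ln(lambda) - sum ln(x_i!)."""
    return -len(xs) * lam + sum(xs) * log(lam) - sum(lgamma(x + 1) for x in xs)

xs = [2, 0, 3, 1, 4, 2, 1]                 # hypothetical Poisson counts
grid = [k / 1000 for k in range(1, 6000)]  # candidate lambda values
best = max(grid, key=lambda lam: poisson_loglik(lam, xs))
print(best, sum(xs) / len(xs))             # grid maximizer vs sample mean
```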
Ex. 7.24(a) (Pg 227):
Find the maximum likelihood estimator for β when f(x; β) is the exponential distribution.
Sol.:
  f(x; β) = (1/β) e^{−x/β},  x > 0,  β > 0
The likelihood function is
  L(β) = (e^{−x₁/β}/β) · (e^{−x₂/β}/β) ··· (e^{−xₙ/β}/β)
       = (1/βⁿ) e^{−(1/β) ∑xᵢ}
Take
  g(β) = ln L(β) = −(1/β) ∑xᵢ − n ln β
  g′(β) = (1/β²) ∑xᵢ − n/β = 0 ⇒ β = (1/n) ∑xᵢ
  g″(β) = (1/β²)(n − (2/β) ∑xᵢ) = −n³/(∑xᵢ)² < 0 for β = (1/n) ∑xᵢ
This shows that g(β) is maximum at β = (1/n) ∑xᵢ; consequently, L(β) is maximum there as well.
Hence, the maximum likelihood estimator is β̂ = (1/n) ∑xᵢ = X̄.
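The same grid check works for the exponential case; with a hypothetical sample whose mean is 1.6, the likelihood peaks there:

```python
from math import log

def exp_loglik(beta, xs):
    """g(beta) = -(sum x_i)/beta - n ln(beta)."""
    return -sum(xs) / beta - len(xs) * log(beta)

xs = [0.8, 2.1, 0.5, 3.2, 1.4]             # hypothetical waiting times, mean 1.6
grid = [k / 1000 for k in range(100, 5000)]
best = max(grid, key=lambda b: exp_loglik(b, xs))
print(best)  # → 1.6
```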
Estimation of Proportions:

Suppose X denotes the number of times that an appropriate event occurs in n trials. Then the point estimator of the population proportion is the sample proportion X/n.
If the n trials satisfy the
assumptions underlying the
binomial distribution, one can
show that the sample
proportion is an unbiased
estimator of the binomial
parameter p , namely, of the
true proportion we are trying
to estimate on the basis of a
sample.
When n is large, we can construct approximate confidence intervals for the binomial parameter p by using the normal approximation. Accordingly, we can assert with probability 1 − α that the inequality
  −z_{α/2} < (X − np)/√(np(1 − p)) < z_{α/2}
will be satisfied.
Making the further approximation of substituting x/n for p in np(1 − p), we get the large sample confidence interval for p:
  x/n − z_{α/2} √((x/n)(1 − x/n)/n) < p < x/n + z_{α/2} √((x/n)(1 − x/n)/n)
where the degree of confidence is (1 − α)100%.
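A sketch of this interval in code (standard library only); the numbers reproduce Ex. 9.3(b) below:

```python
from math import sqrt
from statistics import NormalDist

def proportion_ci(x, n, conf):
    """Large-sample CI for p, with p-hat = x/n substituted into the standard error."""
    p_hat = x / n
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    half = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

lo, hi = proportion_ci(116, 200, conf=0.99)  # 116 of 200 accidents, 99% confidence
print(round(lo, 2), round(hi, 2))  # → 0.49 0.67
```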
Maximum error of estimate:
Again using the normal approximation, we can assert with probability 1 − α that the error will be at most
  E = z_{α/2} √(p(1 − p)/n)
with the observed value x/n substituted for p.
Sample size determination:
Solving the above formula for n, we get
  n = p(1 − p) (z_{α/2}/E)²
Remark: This formula cannot be used unless we know a possible value of p. If no such value is available, we can make use of the fact that p(1 − p) is at most 1/4, corresponding to p = 1/2.
Sample size (p unknown):
  n = (1/4)(z_{α/2}/E)²
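A sketch of the worst-case formula, falling back to p(1 − p) = 1/4 when p is unknown. For example, estimating p to within E = 0.05 at 95% confidence (illustrative numbers, not from the text) requires n = 385:

```python
from math import ceil
from statistics import NormalDist

def proportion_sample_size(E, conf, p=None):
    """n = p(1-p) (z_{alpha/2}/E)^2; worst case p(1-p) = 1/4 when p is unknown."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    pq = 0.25 if p is None else p * (1 - p)
    return ceil(pq * (z / E) ** 2)

print(proportion_sample_size(E=0.05, conf=0.95))  # → 385
```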
Ex. 9.3 (Pg 286):
In a random sample of 200 industrial accidents, it was found that 116 were due at least partially to unsafe working conditions. Construct a 99% confidence interval for the corresponding true proportion using
(a) Table 9;
(b) the large-sample confidence interval formula.
Sol.:
n = 200, x = 116, α = 1 − 0.99 = 0.01
(a) x/n = 0.58
Using Table 9, 0.49 < p < 0.67
(b) x/n = 0.58, z_{0.005} = 2.575
Using the large-sample confidence interval formula, we get
  0.58 − 2.575 √(0.58(1 − 0.58)/200) < p < 0.58 + 2.575 √(0.58(1 − 0.58)/200)
  ∴ 0.49 < p < 0.67
Ex. 9.4 (Pg 286):
With reference to the previous exercise, what can we say about the maximum error if we use the sample proportion to estimate the corresponding true proportion?
Sol.: x/n = 0.58, z_{α/2} = z_{0.005} = 2.575
Then the error is at most
  E = 2.575 √(0.58(1 − 0.58)/200) = 0.0899
Ex. 9.8 (Pg 286):
Among 200 fish caught in a large lake, 36 were inedible due to the pollution of the environment. If we use 36/200 = 0.18 as an estimate of the corresponding true proportion, with what confidence can we assert that the error of this estimate is at most 0.070?
Sol.: x/n = 0.18, E = 0.07
Then,
  E = z_{α/2} √(0.18(1 − 0.18)/200) = 0.07
  ⇒ z_{α/2} = 0.07/0.02717 = 2.577
  P(Z < 2.577) = 1 − α/2 = 0.9950
  1 − α = 0.9900
Therefore, we can assert with approximately 99% confidence that the error of this estimate is at most 0.07.