Theory of Estimation
Structure
5.1 Introduction
Objectives
5.2 Basic Terminology
5.3 Characteristics of Estimators
5.4 Unbiasedness
5.5 Consistency
5.6 Efficiency
Most Efficient Estimator
Minimum Variance Unbiased Estimator
5.7 Sufficiency
5.8 Summary
5.9 Solutions/Answers
5.1 INTRODUCTION
In many real-life problems, the population parameter(s) is (are) unknown and one is interested in obtaining the value(s) of the parameter(s). But if the whole population is too large to study, or the units of the population are destructive in nature, or the resources and manpower available are limited, then it is not practically convenient to examine each and every unit of the population to find the value(s) of the parameter(s). In such situations, one can draw a sample from the population under study and utilise the sample observations to estimate the parameter(s).
Every one of us makes estimates in our day-to-day life. For example, a housewife estimates the monthly expenditure on the basis of particular needs, a sweet shopkeeper estimates the sale of sweets on a day, etc. The technique of finding an estimator that produces an estimate of the unknown parameter on the basis of a sample is called estimation.
There are two methods of estimation:
1. Point Estimation
2. Interval Estimation
Some standard probability distributions, with their probability functions, parameters, means and variances, are listed below:
1. Bernoulli (discrete): P(X = x) = p^x (1 − p)^(1−x); x = 0, 1. Parameter: p. Mean: p. Variance: pq, where q = 1 − p.
2. Binomial (discrete): P(X = x) = nCx p^x q^(n−x); x = 0, 1, ..., n. Parameters: n and p. Mean: np. Variance: npq.
3. Poisson (discrete): P(X = x) = e^(−λ) λ^x / x!; x = 0, 1, ... and λ > 0. Parameter: λ. Mean: λ. Variance: λ.
4. Uniform (discrete): P(X = x) = 1/n; x = 1, 2, ..., n. Parameter: n. Mean: (n + 1)/2. Variance: (n² − 1)/12.
5. Hypergeometric (discrete): P(X = x) = (MCx)(N−M C n−x) / (NCn); x = 0, 1, ..., min(M, n). Parameters: N, M and n. Mean: nM/N. Variance: nM(N − M)(N − n) / [N²(N − 1)].
6. Geometric (discrete): P(X = x) = pq^x; x = 0, 1, 2, .... Parameter: p. Mean: q/p. Variance: q/p².
8. Normal (continuous): f(x) = [1/(σ√(2π))] e^(−(1/2)((x − µ)/σ)²); −∞ < x < ∞, −∞ < µ < ∞ and σ² > 0. Parameters: µ and σ². Mean: µ. Variance: σ².
9. Standard normal (continuous): f(x) = [1/√(2π)] e^(−x²/2); −∞ < x < ∞. Parameters: none. Mean: 0. Variance: 1.
10. Uniform (continuous): f(x) = 1/(b − a); a ≤ x ≤ b. Parameters: a and b. Mean: (a + b)/2. Variance: (b − a)²/12.
11. Exponential (continuous): f(x) = θe^(−θx); x ≥ 0 and θ > 0. Parameter: θ. Mean: 1/θ. Variance: 1/θ².
12. Gamma (continuous): f(x) = [a^b / Γ(b)] e^(−ax) x^(b−1); x > 0 and a, b > 0. Parameters: a and b. Mean: b/a. Variance: b/a².
13. Beta of first kind (continuous): f(x) = [1/B(a, b)] x^(a−1) (1 − x)^(b−1); 0 < x < 1 and a > 0, b > 0. Parameters: a and b. Mean: a/(a + b). Variance: ab / [(a + b)²(a + b + 1)].
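These mean and variance entries can be spot-checked numerically. Below is a minimal sketch, assuming Python with scipy is available; the distributions and parameter values chosen are purely illustrative:

    from scipy import stats

    # Binomial(n, p): mean = np, variance = npq
    print(stats.binom.stats(10, 0.3, moments='mv'))          # (3.0, 2.1)

    # Poisson(lambda): mean = variance = lambda
    print(stats.poisson.stats(4.5, moments='mv'))            # (4.5, 4.5)

    # Gamma as parameterized above: f(x) = a^b e^(-ax) x^(b-1) / Gamma(b),
    # i.e. shape = b and rate = a, so scipy's scale parameter is 1/a
    a, b = 2.0, 3.0
    print(stats.gamma.stats(b, scale=1 / a, moments='mv'))   # (1.5, 0.75): b/a and b/a^2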
Parameter Space
The set of all possible values that the parameter or parameters θ1, θ2, ..., θk can assume is called the parameter space. It is denoted by Θ and is read as "big theta". For example, if the parameter θ represents the average life of electric bulbs manufactured by a company, then the parameter space of θ is Θ = {θ : θ ≥ 0}, that is, the parameter average life can take all possible values greater than or equal to 0. Similarly, in the normal distribution N(µ, σ²), the parameter space of the parameters µ and σ² is Θ = {(µ, σ²) : −∞ < µ < ∞; σ² > 0}.
Joint Probability Density (Mass) Function
For the discrete case, the joint probability mass function of the sample is
f(x1, x2, ..., xn, θ) = P(X1 = x1) P(X2 = x2) ... P(Xn = xn)
In this case, the function f(x1, x2, ..., xn, θ) represents the probability that the particular sample x1, x2, ..., xn has been drawn for a fixed (given) value of the parameter θ.
For the continuous case,
f(x1, x2, ..., xn, θ) = f(x1, θ) · f(x2, θ) ... f(xn, θ)
The process of finding the joint probability density (mass) function is described by taking an example:
If a random sample X1, X2, ..., Xn of size n is taken from a Poisson distribution whose pmf is given by
P(X = x) = e^(−λ) λ^x / x! ; x = 0, 1, 2, ... and λ > 0
then the joint probability mass function of X1, X2, ..., Xn can be obtained as
f(x1, x2, ..., xn, λ) = P(X1 = x1) P(X2 = x2) ... P(Xn = xn)
= [e^(−λ) λ^(x1) / x1!] [e^(−λ) λ^(x2) / x2!] ... [e^(−λ) λ^(xn) / xn!]
= e^(−nλ) λ^(x1 + x2 + ... + xn) / (x1! x2! ... xn!)
Therefore,
f(x1, x2, ..., xn, λ) = e^(−nλ) λ^(Σxi) / ∏xi!   [where Σxi and ∏xi! run over i = 1 to n]
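This closed form can be checked against the term-by-term product numerically; a minimal Python sketch (the sample values and the value of λ are illustrative):

    import math

    lam = 2.0
    xs = [1, 0, 3, 2]   # an illustrative sample of size n = 4
    n = len(xs)

    # term-by-term product of the Poisson pmf values
    product = math.prod(math.exp(-lam) * lam ** x / math.factorial(x) for x in xs)

    # closed form: e^(-n*lambda) * lambda^(sum xi) / prod(xi!)
    closed = (math.exp(-n * lam) * lam ** sum(xs)
              / math.prod(math.factorial(x) for x in xs))

    print(product, closed)   # the two values coincide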
Let us check your understanding of the above by answering the following exercises.
E1) Write the pmf of the Poisson distribution with parameter λ = 5, and find the mean and variance of this distribution.
E2) If θ represents the average marks of IGNOU's students in a paper of 50 marks, find the parameter space of θ.
E3) A random sample X1, X2, ..., Xn of size n is taken from a Poisson distribution whose pmf is given by
P(X = x) = e^(−λ) λ^x / x! ; x = 0, 1, 2, ... and λ > 0
Obtain the joint probability mass function of X1, X2, ..., Xn.
5.3 CHARACTERISTICS OF ESTIMATORS
A good estimator is expected to possess the following properties:
1. Unbiasedness
2. Consistency
3. Efficiency
4. Sufficiency
We shall discuss these properties one by one in the subsequent sections.
Now, answer the following exercise.
E4) Write the four properties of a good estimator.
5.4 UNBIASEDNESS
Generally, population parameter(s) is (are) unknown, and if the whole population is too large to study to find the value of the unknown parameter(s), then one can estimate the population parameter(s) with the help of estimator(s), which is (are) always a function of the sample values.
An estimator is said to be unbiased for a population parameter, such as the population mean, population variance, population proportion, etc., if and only if the average or mean of the sampling distribution of the estimator is equal to the true value of the parameter.
Mathematically, if X1, X2, ..., Xn is a random sample of size n taken from a population whose probability density (mass) function is f(x, θ), where θ is the population parameter, then an estimator T = t(X1, X2, ..., Xn) is said to be an unbiased estimator of the parameter θ if and only if
E(T) = θ ; for all θ ∈ Θ
This property of an estimator is called unbiasedness.
Normally, it is preferable that the expected value of the estimator be exactly equal to the true value of the parameter being estimated. But if the expected value of the estimator does not equal the true value of the parameter, that is, if
E(T) ≠ θ
then the estimator T is called a biased estimator of θ.
The amount of bias is given by
b(θ) = E(T) − θ
If b(θ) > 0, i.e. E(T) > θ, then the estimator T is said to be positively biased for the parameter θ.
If b(θ) < 0, i.e. E(T) < θ, then the estimator T is said to be negatively biased for the parameter θ.
If E(T) → θ as n → ∞, i.e. if the estimator T is unbiased for large samples only, then the estimator T is said to be asymptotically unbiased for θ.
Now we explain, with the help of some examples, the procedure for showing whether or not a statistic is unbiased for a parameter:
Example 1: Show that the sample mean X̄ is an unbiased estimator of the population mean µ, if it exists.
Solution: Consider
E(X̄) = E[(X1 + X2 + ... + Xn)/n]   [by definition of the sample mean]
= (1/n) [E(X1) + E(X2) + ... + E(Xn)]   [by the addition theorem of expectation: if X and Y are two random variables and a and b are two constants, then E(aX + bY) = aE(X) + bE(Y)]
Since X1, X2, ..., Xn are randomly drawn from the same population, they follow the same distribution as the population. Therefore,
E(X1) = E(X2) = ... = E(Xn) = E(X) = µ
Thus,
E(X̄) = (1/n) (µ + µ + ... + µ)   [n times]
= (1/n) (nµ) = µ
Hence, the sample mean X̄ is an unbiased estimator of the population mean µ.
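Unbiasedness of the sample mean can also be illustrated by simulation: across many repeated samples, the average of the observed sample means settles at µ. A sketch, assuming numpy is available (the population parameters and sample size are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n = 50.0, 10.0, 25   # illustrative population parameters and sample size

    # draw 100000 samples of size n and record each sample mean
    sample_means = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

    print(sample_means.mean())      # ~ 50.0, illustrating E(X-bar) = mu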
Example 2: The weights (in kg) of 10 cadets of a centre are 48, 50, 62, 75, 80, 60, 70, 56, 52 and 78. Obtain an unbiased estimate of the average weight of cadets of the centre.
Solution: We know that the sample mean is an unbiased estimator of the population mean, so an unbiased estimate of the average weight is
X̄ = (48 + 50 + 62 + 75 + 80 + 60 + 70 + 56 + 52 + 78)/10 = 631/10 = 63.10
Hence, an unbiased estimate of the average weight of cadets of the centre is 63.10 kg.
Example 3: A random sample X1, X2, ..., Xn of size n is taken from a population whose pdf is given by
f(x, θ) = (1/θ) e^(−x/θ) ; x > 0, θ > 0
Show that the sample mean X̄ is an unbiased estimator of the parameter θ.
Solution: Here, we have
f(x, θ) = (1/θ) e^(−x/θ) ; x > 0, θ > 0
Since we do not know the mean of this distribution, first of all we find the mean of the distribution. So we consider
E(X) = ∫ from 0 to ∞ of x f(x, θ) dx
= (1/θ) ∫ from 0 to ∞ of x e^(−x/θ) dx
= (1/θ) ∫ from 0 to ∞ of x^(2−1) e^(−x/θ) dx
= (1/θ) · Γ(2)/(1/θ)²   [using ∫ from 0 to ∞ of x^(n−1) e^(−ax) dx = Γ(n)/aⁿ]
= θ Γ(2) = θ   [since Γ(2) = 1! = 1]
Since X1, X2, ..., Xn are randomly drawn from the same population having mean θ, therefore,
E(X1) = E(X2) = ... = E(Xn) = E(X) = θ
Consider
E(X̄) = E[(X1 + X2 + ... + Xn)/n]   [by definition of the sample mean]
= (1/n) [E(X1) + E(X2) + ... + E(Xn)]   [by the addition theorem of expectation]
= (1/n) (θ + θ + ... + θ)   [n times]
= (1/n) (nθ) = θ
Thus, X̄ is an unbiased estimator of θ.
It can be shown that
S² = (1/n) Σ(Xi − X̄)² is a biased estimator of σ²,
whereas s² = [n/(n − 1)] S², i.e. s² = [1/(n − 1)] Σ(Xi − X̄)², is an unbiased estimator of σ².
The proof of this result is beyond the scope of this course, but for your convenience we illustrate it with the help of the following example.
Example 4: Consider a population comprising three televisions of a certain company. If the lives of the televisions are 8, 6 and 10 years, construct the sampling distribution of the average life of the televisions by taking samples of size 2 and show that the sample mean is an unbiased estimator of the population mean life. Also show that S² is not an unbiased estimator of the population variance whereas s² is an unbiased estimator of the population variance, where
S² = (1/n) Σ(Xi − X̄)² and s² = [1/(n − 1)] Σ(Xi − X̄)²
Solution: Here, the population consists of three televisions whose lives are 8, 6 and 10 years, so we can find the population mean and variance as
µ = (8 + 6 + 10)/3 = 8
σ² = (1/3) [(8 − 8)² + (6 − 8)² + (10 − 8)²] = 8/3 = 2.67
Here, we are given that
Population size = N = 3 and sample size = n = 2
Therefore, the number of possible samples (with replacement) that can be drawn from this population is Nⁿ = 3² = 9. For each of these 9 samples, we calculate the values of X̄, S² and s² by the formulae
X̄ = (1/n) ΣXi , S² = (1/n) Σ(Xi − X̄)² and s² = [1/(n − 1)] Σ(Xi − X̄)²
The necessary calculations are shown in Table 5.1 below:
Table 5.1: Calculations for X̄, S² and s²

Sample   Sample Observations   X̄     Σ(Xi − X̄)²   S²    s²
1        8, 8                  8      0             0     0
2        8, 6                  7      2             1     2
3        8, 10                 9      2             1     2
4        6, 8                  7      2             1     2
5        6, 6                  6      0             0     0
6        6, 10                 8      8             4     8
7        10, 8                 9      2             1     2
8        10, 6                 8      8             4     8
9        10, 10                10     0             0     0
Total                          72                   12    24
Here, for instance, the values of s² for the different samples are obtained as
s1² = [1/(2 − 1)] [(8 − 8)² + (8 − 8)²] = 0, s2² = [1/(2 − 1)] [(8 − 7)² + (6 − 7)²] = 2, ...,
s9² = [1/(2 − 1)] [(10 − 10)² + (10 − 10)²] = 0
From Table 5.1, we have
E(X̄) = (1/k) ΣX̄i = (1/9)(72) = 8 = µ
Hence, the sample mean is an unbiased estimator of the population mean.
Also,
E(S²) = (1/k) ΣSi² = (1/9)(12) = 1.33 ≠ 2.67 = σ²
whereas
E(s²) = (1/k) Σsi² = (1/9)(24) = 2.67 = σ²
Hence, S² is a biased estimator of the population variance whereas s² is an unbiased estimator of the population variance.
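All of these calculations can be reproduced by enumerating the Nⁿ = 9 samples directly; a minimal Python sketch of Example 4:

    from itertools import product

    lives = [8, 6, 10]                        # population: lives of the three televisions
    samples = list(product(lives, repeat=2))  # all N^n = 3^2 = 9 samples with replacement

    xbars = [sum(s) / 2 for s in samples]
    S2 = [sum((x - xb) ** 2 for x in s) / 2 for s, xb in zip(samples, xbars)]        # divisor n
    s2 = [sum((x - xb) ** 2 for x in s) / (2 - 1) for s, xb in zip(samples, xbars)]  # divisor n - 1

    print(sum(xbars) / 9)  # 8.0  -> equals the population mean, so X-bar is unbiased
    print(sum(S2) / 9)     # 1.33 -> not equal to sigma^2 = 2.67, so S^2 is biased
    print(sum(s2) / 9)     # 2.67 -> equals sigma^2, so s^2 is unbiased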
One weakness of unbiasedness is that it requires only that the average value of the estimator equal the true value of the population parameter. It does not require the individual values of the estimator to be reasonably close to the population parameter. For this reason, we require some other properties of a good estimator, namely consistency, efficiency and sufficiency, which are described in the subsequent sections.
5.5 CONSISTENCY
In the previous section, we learnt about unbiasedness. An estimator T is said to be an unbiased estimator of a parameter, say θ, if the mean of the sampling distribution of the estimator T is equal to the true value of the parameter θ. This concept was defined for a fixed sample size. In this section, we will learn about consistency, which is defined for increasing sample size.
If X1, X2, ..., Xn is a random sample of size n taken from a population whose probability density (mass) function is f(x, θ), where θ is the population parameter, then consider a sequence of estimators, say, T1 = t1(X1), T2 = t2(X1, X2), T3 = t3(X1, X2, X3), ..., Tn = tn(X1, X2, ..., Xn). The sequence of estimators is said to be consistent for the parameter θ if the deviation of the values of the estimator from the parameter tends to zero as the sample size increases. That is, the values of the estimator tend to get closer to the parameter θ as the sample size increases.
In other words, a sequence {Tn} of estimators is said to be a consistent sequence of estimators of θ if Tn converges to θ in probability, that is,
Tn →p θ as n → ∞ for every θ ∈ Θ … (3)
or, for every ε > 0,
lim(n→∞) P[|Tn − θ| < ε] = 1 … (4)
or, for every ε > 0 and δ > 0, there exists m such that
P[|Tn − θ| < ε] > 1 − δ ; for all n ≥ m … (5)
where m is some very large value of n. Expressions (3), (4) and (5) mean the same thing.
Generally, showing that an estimator is consistent directly from the above definition is slightly difficult; therefore, we use the following sufficient conditions for consistency.
Sufficient conditions for consistency
If {Tn} is a sequence of estimators such that, for all θ ∈ Θ,
(i) E(Tn) → θ as n → ∞, and
(ii) Var(Tn) → 0 as n → ∞,
then the estimator Tn is a consistent estimator of θ.
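The two sufficient conditions are easy to observe numerically for the sample mean. A simulation sketch, assuming numpy (the exponential population with mean θ = 3 is illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    theta = 3.0                      # true population mean (illustrative)

    for n in (10, 100, 1000, 10000):
        # empirical sampling distribution of X-bar from 1000 repeated samples
        means = rng.exponential(theta, size=(1000, n)).mean(axis=1)
        # E(X-bar) stays at theta while Var(X-bar) shrinks towards 0
        print(n, means.mean().round(3), means.var().round(5))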
Now we explain, with the help of some examples, the procedure based on both criteria (the definition and the sufficient conditions) for showing whether or not a statistic is consistent for a parameter:
Example 5: Prove that the sample mean is always a consistent estimator of the population mean, provided that the population has a finite variance.
Solution: Let X1, X2, ..., Xn be a random sample from a population having mean µ and finite variance σ². Consider
lim(n→∞) P[|X̄ − µ| < ε] = lim(n→∞) P[ |X̄ − µ|/(σ/√n) < ε√n/σ ]
By the central limit theorem (described in Unit 1 of this course), we know that the variate Z = (X̄ − µ)/(σ/√n) is a standard normal variate for large sample size n. Therefore,
lim(n→∞) P[|X̄ − µ| < ε] = lim(n→∞) P[|Z| < ε√n/σ]
= lim(n→∞) P[−ε√n/σ < Z < ε√n/σ]   [since |X| < a means −a < X < a]
= lim(n→∞) ∫ from −ε√n/σ to ε√n/σ of [1/√(2π)] e^(−z²/2) dz   [since P(a < U < b) = ∫ from a to b of f(u) du, and f(z) = (1/√(2π)) e^(−z²/2)]
= ∫ from −∞ to ∞ of [1/√(2π)] e^(−z²/2) dz
Since (1/√(2π)) e^(−z²/2) is the pdf of the standard normal variate Z, its integral over the whole range −∞ to ∞ is unity. Thus,
lim(n→∞) P[|X̄ − µ| < ε] = 1
Hence, the sample mean is a consistent estimator of the population mean µ.
Example 6: If X1, X2, ..., Xn is a random sample taken from a Poisson distribution with parameter λ, show that the sample mean X̄ is a consistent estimator of λ.
Solution: We know that the mean and variance of the Poisson distribution with parameter λ are
E(X) = λ and Var(X) = λ
Since X1, X2, ..., Xn are independent and come from the same Poisson distribution, therefore,
E(Xi) = E(X) = λ and Var(Xi) = Var(X) = λ for all i = 1, 2, ..., n
Now consider,
E(X̄) = E[(X1 + X2 + ... + Xn)/n]   [by definition of the sample mean]
= (1/n) [E(X1) + E(X2) + ... + E(Xn)]   [by the addition theorem of expectation]
= (1/n) (λ + λ + ... + λ)   [n times]
= (1/n) (nλ) = λ
Now consider,
Var(X̄) = Var[(X1 + X2 + ... + Xn)/n]
= (1/n²) [Var(X1) + Var(X2) + ... + Var(Xn)]   [if X and Y are two independent random variables, then Var(aX + bY) = a² Var(X) + b² Var(Y)]
= (1/n²) (λ + λ + ... + λ)   [n times]
= (1/n²) (nλ)
Var(X̄) = λ/n → 0 as n → ∞
Hence, by the sufficient conditions of consistency, it follows that the sample mean X̄ is a consistent estimator of λ.
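Consistency can also be watched through definition (4): the probability P[|X̄ − λ| < ε] climbs towards 1 as n grows. A simulation sketch (the values of λ and ε are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    lam, eps = 4.0, 0.1              # illustrative lambda and epsilon

    for n in (10, 100, 1000, 10000):
        xbar = rng.poisson(lam, size=(2000, n)).mean(axis=1)
        # fraction of samples whose mean lies within eps of lambda
        print(n, (np.abs(xbar - lam) < eps).mean())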
Remark 2:
1. Consistent estimators may not be unique. For example, the sample mean and the sample median are both consistent estimators of the population mean of a normal population.
2. An unbiased estimator may or may not be consistent.
E7) If X1, X2, ..., Xn is a random sample taken from a population whose pdf is
f(x, θ) = 1 ; θ ≤ x ≤ θ + 1
        = 0 ; elsewhere
then show that the sample mean is an unbiased as well as consistent estimator of θ + 1/2.
5.6 EFFICIENCY
In some situations, we see that there is more than one estimator of a parameter which is unbiased as well as consistent. For example, the sample mean and the sample median are both unbiased and consistent for the parameter µ when sampling is done from a normal population with mean µ and known variance σ². In such situations, there arises a need for some other criterion which will help us choose the 'best estimator' among them. A criterion based on the concept of the variance of the sampling distribution of the estimator is termed efficiency.
If T1 and T2 are two estimators of a parameter θ, then T1 is said to be more efficient than T2 for all sample sizes if
Var(T1) < Var(T2) for all n
For example, consider the sample mean X̄ and the sample median X̃ as estimators of the mean µ of a normal population. We know that
Var(X̄) = σ²/n
whereas, for large n,
Var(X̃) = πσ²/(2n)
But π/2 > 1, therefore
Var(X̄) = σ²/n < πσ²/(2n) = Var(X̃)
Thus, we conclude that the sample mean is a more efficient estimator than the sample median.
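The factor π/2 ≈ 1.57 can be recovered by simulation; a sketch comparing the two sampling variances for a normal population (parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    sigma, n = 1.0, 101              # odd n so the sample median is a single observation

    draws = rng.normal(0.0, sigma, size=(100_000, n))
    var_mean = draws.mean(axis=1).var()
    var_median = np.median(draws, axis=1).var()

    print(var_mean)               # ~ sigma^2 / n       = 0.0099
    print(var_median)             # ~ pi sigma^2 / (2n) = 0.0156
    print(var_median / var_mean)  # ~ pi / 2            = 1.57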
Example 8: If X1, X2, X3, X4 and X5 is a random sample of size 5 taken from a population with mean µ and variance σ², the following two estimators are suggested to estimate µ:
T1 = (X1 + X2 + X3 + X4 + X5)/5 , T2 = (X1 + 2X2 + 3X3 + 4X4 + 5X5)/15
Are both estimators unbiased? Which one is more efficient?
Solution: Since X1, X2, ..., X5 are independent and taken from the same population with mean µ and variance σ², therefore,
E(Xi) = µ and Var(Xi) = σ² for all i = 1, 2, ..., 5
Consider,
E(T1) = E[(X1 + X2 + X3 + X4 + X5)/5]
= (1/5) [E(X1) + E(X2) + E(X3) + E(X4) + E(X5)]
= (1/5) (µ + µ + µ + µ + µ)
E(T1) = µ
Similarly,
E(T2) = E[(X1 + 2X2 + 3X3 + 4X4 + 5X5)/15]
= (1/15) [E(X1) + 2E(X2) + 3E(X3) + 4E(X4) + 5E(X5)]
= (1/15) (µ + 2µ + 3µ + 4µ + 5µ)
= (1/15) (15µ)
E(T2) = µ
Hence, both T1 and T2 are unbiased estimators of µ.
For efficiency, we find the variances of T1 and T2:
Var(T1) = Var[(X1 + X2 + X3 + X4 + X5)/5]
= (1/25) [Var(X1) + Var(X2) + Var(X3) + Var(X4) + Var(X5)]
= (1/25) (σ² + σ² + σ² + σ² + σ²)
= (1/25) (5σ²)
Var(T1) = σ²/5
Similarly,
Var(T2) = Var[(X1 + 2X2 + 3X3 + 4X4 + 5X5)/15]
= (1/225) (σ² + 4σ² + 9σ² + 16σ² + 25σ²)
= 55σ²/225
Var(T2) = 11σ²/45
Since Var(T1) = σ²/5 = 9σ²/45 < 11σ²/45 = Var(T2), we conclude that the estimator T1 is more efficient than T2.
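The same conclusion can be checked by simulation; a sketch, assuming numpy (the normal population used here is illustrative, since the example only fixes the mean and variance):

    import numpy as np

    rng = np.random.default_rng(4)
    mu, sigma = 10.0, 2.0                         # illustrative choices of mu and sigma
    X = rng.normal(mu, sigma, size=(200_000, 5))  # many samples of size 5

    w = np.array([1, 2, 3, 4, 5]) / 15.0
    T1 = X.mean(axis=1)   # equal weights 1/5
    T2 = X @ w            # weights 1/15, 2/15, ..., 5/15

    print(T1.mean(), T2.mean())  # both ~ 10.0: the two estimators are unbiased
    print(T1.var(), T2.var())    # ~ sigma^2/5 = 0.80 versus ~ 11 sigma^2/45 = 0.98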
5.6.1 Most Efficient Estimator
In a class of estimators of a parameter, if there exists one estimator whose variance is minimum (least) among the class, then it is called the most efficient estimator of that parameter. For example, suppose T1, T2 and T3 are three estimators of a parameter θ having variances 1/n, 1/(n + 1) and 5/n respectively. Since the variance of estimator T2 is minimum, the estimator T2 is the most efficient estimator in that class.
The efficiency of an estimator measured with respect to the most efficient estimator is called 'absolute efficiency'. If T* is the most efficient estimator, having variance Var(T*), and T is any other estimator having variance Var(T), then the efficiency of T is defined as
e = Var(T*) / Var(T)
Since Var(T*) ≤ Var(T), we have
e = Var(T*) / Var(T) ≤ 1
5.6.2 Minimum Variance Unbiased Estimator
An estimator T is said to be the minimum variance unbiased estimator (MVUE) of a parameter θ if and only if
(i) E(T) = θ for all θ ∈ Θ, that is, T is an unbiased estimator of θ, and
(ii) Var(T) ≤ Var(T′) for all θ ∈ Θ, that is, the variance of the estimator T is less than or equal to the variance of any other unbiased estimator T′.
The minimum variance unbiased estimator (MVUE) is the most efficient unbiased estimator of the parameter θ in the sense that it has minimum variance in the class of unbiased estimators. Some authors use 'uniformly minimum variance unbiased estimator (UMVUE)' in place of 'minimum variance unbiased estimator (MVUE)'.
Now, you can try the following exercises.
5.7 SUFFICIENCY
In statistical inference, the aim of the investigator or statistician may be to make a decision about the value of the unknown parameter θ. The information that guides the investigator in making a decision is supplied by the random sample X1, X2, ..., Xn. However, in most cases the observations would be too numerous and too complicated; using them directly is cumbersome, so a simplification or condensation is desirable. The technique of condensing or reducing the random sample X1, X2, ..., Xn into a statistic such that it contains all the information about the parameter θ that is contained in the sample is known as sufficiency. So, prior to continuing our search for the best estimator, we introduce the concept of sufficiency.
A sufficient statistic is a particular kind of statistic that condenses the random sample X1, X2, ..., Xn into a statistic T = t(X1, X2, ..., Xn) in such a way that no information about the parameter θ is lost. That means it contains all the information about θ that is contained in the sample, and if we know the value of the sufficient statistic, then the sample values themselves are not needed and can tell you nothing more about θ. In other words,
a statistic T is said to be a sufficient statistic for estimating a parameter θ if it contains all the information about θ which is available in the sample. This property of an estimator is called sufficiency. In other words,
an estimator T is sufficient for the parameter θ if and only if the conditional distribution of X1, X2, ..., Xn given T = t is independent of θ.
Mathematically,
f(x1, x2, ..., xn / T = t) = g(x1, x2, ..., xn)
where the function g(x1, x2, ..., xn) does not involve the parameter θ.
According to the factorization theorem of sufficiency, a statistic T = t(X1, X2, ..., Xn) is sufficient for θ if and only if the joint density (mass) function of the sample can be factored as f(x1, x2, ..., xn, θ) = g(t(x), θ) · h(x1, x2, ..., xn), where g depends on θ and on the sample values only through t(x), and h is independent of θ.
For example, suppose X1, X2, ..., Xn is a random sample taken from a Poisson distribution with parameter λ. The joint probability mass function of X1, X2, ..., Xn is
f(x1, x2, ..., xn, λ) = P(X1 = x1) · P(X2 = x2) ... P(Xn = xn)
= [e^(−λ) λ^(x1) / x1!] · [e^(−λ) λ^(x2) / x2!] ... [e^(−λ) λ^(xn) / xn!]
= e^(−nλ) λ^(x1 + x2 + ... + xn) / (x1! x2! ... xn!)
= e^(−nλ) λ^(Σxi) / ∏xi!   [∏xi! represents the product x1! x2! ... xn!]
= [e^(−nλ) λ^(Σxi)] · [1/∏xi!]
= g(t(x), λ) · h(x1, x2, ..., xn)
where g(t(x), λ) = e^(−nλ) λ^(Σxi) is a function of the parameter λ and of the sample values x1, x2, ..., xn only through t(x) = Σxi, and h(x1, x2, ..., xn) = 1/∏xi! is independent of λ. Hence, by the factorization theorem of sufficiency, ΣXi is a sufficient statistic for λ.
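Sufficiency of ΣXi can be verified numerically: the conditional probability of observing a particular sample given the total T = t does not involve λ. Since the sum of n independent Poisson(λ) variates is Poisson(nλ), this conditional probability is the ratio of the joint pmf to the pmf of the total; a sketch, assuming scipy (sample values are illustrative):

    import numpy as np
    from scipy.stats import poisson

    xs = [1, 0, 3, 2]                 # an illustrative sample
    n, t = len(xs), sum(xs)

    for lam in (0.5, 2.0, 7.0):
        joint = np.prod(poisson.pmf(xs, lam))  # f(x1, ..., xn; lambda)
        total = poisson.pmf(t, n * lam)        # pmf of T = sum Xi, which is Poisson(n*lambda)
        # conditional probability of the sample given T = t: the same for every lambda
        print(lam, joint / total)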
Similarly, for a random sample from the gamma distribution with pdf f(x) = [a^b / Γ(b)] e^(−ax) x^(b−1), the joint density function factors as
f(x1, x2, ..., xn, a, b) = [a^(nb) / (Γ(b))ⁿ] e^(−a Σxi) (∏xi)^(b−1) · 1
= g(t1(x), t2(x), a, b) · h(x1, x2, ..., xn)
where g(t1(x), t2(x), a, b) = [a^(nb) / (Γ(b))ⁿ] e^(−a Σxi) (∏xi)^(b−1) is a function of the parameters a and b and of the sample values x1, x2, ..., xn only through t1(x) = Σxi and t2(x) = ∏xi, whereas h(x1, x2, ..., xn) = 1 is independent of the parameters a and b.
Hence, by the factorization theorem, ΣXi and ∏Xi are jointly sufficient for a and b.
Again, for a random sample from the uniform distribution U[α, β], the joint density function can be written as
f(x1, x2, ..., xn, α, β) = [1/(β − α)ⁿ] I(α ≤ x(1)) I(x(n) ≤ β) · 1
= g(t1(x), t2(x), α, β) · h(x1, x2, ..., xn)
where g(t1(x), t2(x), α, β) = [1/(β − α)ⁿ] I(α ≤ x(1)) I(x(n) ≤ β) is a function of the parameters (α, β) and of the sample values x1, x2, ..., xn only through t1(x) = x(1) and t2(x) = x(n), whereas h(x1, x2, ..., xn) = 1 is independent of the parameters α and β.
Hence, by the factorization theorem of sufficiency, X(1) and X(n) are jointly sufficient for α and β.
Remark 3:
1. A sufficient estimator is always a consistent estimator.
2. A sufficient estimator may be unbiased.
3. A sufficient estimator is the most efficient estimator if an efficient estimator exists.
4. The random sample X1, X2, ..., Xn and the order statistics X(1), X(2), ..., X(n) are always sufficient estimators, because both contain all the information about the parameter(s) of the population.
5. If T is a sufficient statistic for the parameter θ and ψ(T) is a one-to-one function of T, then ψ(T) is also sufficient for θ. For example, if T = ΣXi is a sufficient statistic for the parameter θ, then X̄ = ΣXi / n = T/n is also sufficient for θ, because X̄ = T/n is a one-to-one function of T.
Now you will understand sufficiency more clearly when you try the following exercises.
E12) If X1, X2, ..., Xn is a random sample taken from exp(θ), then find a sufficient statistic for θ.
E13) If X1, X2, ..., Xn is a random sample taken from a normal population N(µ, σ²), then obtain a sufficient statistic for µ, for σ², or for both, according as the other parameter is known or unknown.
E14) If X1, X2, ..., Xn is a random sample from a uniform population over the interval [0, θ], find a sufficient estimator of θ.
We now end this unit by giving a summary of what we have covered in it.
5.8 SUMMARY
In this unit, we have covered the following points:
1. The parameter space and the joint probability density (mass) function.
2. The basic characteristics of an estimator.
3. Unbiasedness of an estimator.
4. Consistency of an estimator.
5. Efficiency of an estimator.
6. The most efficient estimator.
5.9 SOLUTIONS / ANSWERS
E3) As obtained in Section 5.2, the joint probability mass function of a random sample X1, X2, ..., Xn from the Poisson distribution is
f(x1, x2, ..., xn, λ) = e^(−nλ) λ^(Σxi) / ∏xi!
E5) Since X1, X2, ..., Xn are independent and come from the same Poisson distribution, therefore,
E(Xi) = E(X) = λ for all i = 1, 2, ..., n
Now consider,
E(X̄) = E[(X1 + X2 + ... + Xn)/n]   [by definition of the sample mean]
= (1/n) [E(X1) + E(X2) + ... + E(Xn)]   [by the addition theorem of expectation]
= (1/n) (λ + λ + ... + λ)   [n times]
= (1/n) (nλ) = λ
Hence, the sample mean X̄ is an unbiased estimator of the parameter λ.
E6) Here, we have to show that E(X̄) = 1 + θ. First we find E(X):
E(X) = ∫ from θ to ∞ of x e^(−(x − θ)) dx
Putting y = x − θ, so that dy = dx, we get
E(X) = ∫ from 0 to ∞ of (y + θ) e^(−y) dy = ∫ from 0 to ∞ of y e^(−y) dy + θ ∫ from 0 to ∞ of e^(−y) dy
= ∫ from 0 to ∞ of y^(2−1) e^(−y) dy + θ ∫ from 0 to ∞ of y^(1−1) e^(−y) dy
= Γ(2) + θ Γ(1)   [using ∫ from 0 to ∞ of x^(n−1) e^(−x) dx = Γ(n)]
= 1 + θ   [since Γ(2) = 1 and Γ(1) = 1]
Now consider,
E(X̄) = E[(X1 + X2 + ... + Xn)/n]   [by definition of the sample mean]
= (1/n) [E(X1) + E(X2) + ... + E(Xn)]   [by the addition theorem of expectation]
= (1/n) [(1 + θ) + (1 + θ) + ... + (1 + θ)]   [n times]
= (1/n) n(1 + θ)
= 1 + θ
Thus, the sample mean is an unbiased estimator of (1 + θ).
E7) We have
f(x, θ) = 1 ; θ ≤ x ≤ θ + 1
This is the pdf of the uniform distribution U[θ, θ + 1], and we know that for U[a, b],
E(X) = (a + b)/2 and Var(X) = (b − a)²/12
In our case, a = θ and b = θ + 1, therefore
E(X) = (θ + θ + 1)/2 = θ + 1/2 and Var(X) = (θ + 1 − θ)²/12 = 1/12
Since X1, X2, ..., Xn are independent and come from the same population, therefore,
E(Xi) = E(X) = θ + 1/2 and Var(Xi) = Var(X) = 1/12 for all i = 1, 2, ..., n
To show that X̄ is an unbiased estimator of θ + 1/2, we consider
E(X̄) = E[(X1 + X2 + ... + Xn)/n]   [by definition of the sample mean]
= (1/n) [E(X1) + E(X2) + ... + E(Xn)]   [by the addition theorem of expectation]
= (1/n) [(θ + 1/2) + (θ + 1/2) + ... + (θ + 1/2)]   [n times]
= (1/n) n(θ + 1/2)
= θ + 1/2
Therefore, X̄ is an unbiased estimator of θ + 1/2.
Now consider,
Var(X̄) = Var[(X1 + X2 + ... + Xn)/n]
= (1/n²) [Var(X1) + Var(X2) + ... + Var(Xn)]
= (1/n²) [1/12 + 1/12 + ... + 1/12]   [n times]
= (1/n²) (n/12)
Now, Var(X̄) = 1/(12n) → 0 as n → ∞
Thus, E(X̄) = θ + 1/2 and Var(X̄) → 0 as n → ∞.
Hence, the sample mean X̄ is also a consistent estimator of θ + 1/2.
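A quick numerical check of this solution (the value of θ and the sample size are illustrative):

    import numpy as np

    rng = np.random.default_rng(5)
    theta, n = 2.5, 50               # illustrative theta and sample size

    xbar = rng.uniform(theta, theta + 1, size=(100_000, n)).mean(axis=1)

    print(xbar.mean())  # ~ theta + 1/2 = 3.0
    print(xbar.var())   # ~ 1/(12n) = 0.00167, shrinking as n grows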
E8) We know that the mean and variance of the geometric distribution with parameter θ are given by
E(X) = 1/θ and Var(X) = (1 − θ)/θ²
Consider,
E(X̄) = E[(X1 + X2 + ... + Xn)/n]   [by definition of the sample mean]
= (1/n) [E(X1) + E(X2) + ... + E(Xn)]   [by the addition theorem of expectation]
= (1/n) [1/θ + 1/θ + ... + 1/θ]   [n times]
= (1/n) (n/θ) = 1/θ
Now consider,
Var(X̄) = Var[(X1 + X2 + ... + Xn)/n]
= (1/n²) [Var(X1) + Var(X2) + ... + Var(Xn)]
= (1/n²) [(1 − θ)/θ² + (1 − θ)/θ² + ... + (1 − θ)/θ²]   [n times]
= (1/n²) · n(1 − θ)/θ²
Var(X̄) = (1 − θ)/(nθ²) → 0 as n → ∞
Since E(X̄) = 1/θ and Var(X̄) → 0 as n → ∞, the sample mean X̄ is a consistent estimator of 1/θ.
Since e^(1/θ) is a continuous function of 1/θ, by the invariance property of consistency, e^(X̄) is a consistent estimator of e^(1/θ).
E9) Since X1, X2, ..., Xn is a random sample taken from a population having mean µ and variance σ², therefore,
E(Xi) = µ and Var(Xi) = σ² for all i = 1, 2, ..., n
Consider,
E(T) = E[(1/(n + 1)) ΣXi]
= [1/(n + 1)] [E(X1) + E(X2) + ... + E(Xn)]
= [1/(n + 1)] (µ + µ + ... + µ)   [n times]
= nµ/(n + 1) ≠ µ
Therefore, T is a biased estimator of the population mean µ.
For efficiency, we find the variances of the estimator T and the sample mean X̄:
Var(T) = Var[(X1 + X2 + ... + Xn)/(n + 1)]
= [1/(n + 1)²] [Var(X1) + Var(X2) + ... + Var(Xn)]   [if X and Y are two independent random variables and a and b are two constants, then Var(aX + bY) = a² Var(X) + b² Var(Y)]
= [1/(n + 1)²] (σ² + σ² + ... + σ²)   [n times]
= [1/(n + 1)²] (nσ²)
= nσ²/(n + 1)²
Now consider,
Var(X̄) = Var[(X1 + X2 + ... + Xn)/n]
= (1/n²) [Var(X1) + Var(X2) + ... + Var(Xn)]
= (1/n²) (σ² + σ² + ... + σ²)   [n times]
= (1/n²) (nσ²)
= σ²/n
Since n² < (n + 1)², we have n/(n + 1)² < 1/n, and hence Var(T) < Var(X̄). Thus, T is more efficient than the sample mean X̄, although it is biased.
E12) Here, we take a random sample X1, X2, ..., Xn from exp(θ), whose pdf is
f(x, θ) = θ e^(−θx) ; x ≥ 0, θ > 0
The joint density function of X1, X2, ..., Xn can be obtained and factored as
f(x1, x2, ..., xn, θ) = f(x1, θ) · f(x2, θ) ... f(xn, θ)
= [θ e^(−θx1)] [θ e^(−θx2)] ... [θ e^(−θxn)]
= θⁿ e^(−θ Σxi)
= [θⁿ e^(−θ Σxi)] · 1
= g(t(x), θ) · h(x1, x2, ..., xn)
where g(t(x), θ) = θⁿ e^(−θ Σxi) is a function of the parameter θ and of the sample values x1, x2, ..., xn only through t(x) = Σxi, and h(x1, x2, ..., xn) = 1 is independent of θ. Hence, by the factorization theorem of sufficiency, ΣXi is a sufficient estimator of θ.
E13) Here, we take a random sample from N(µ, σ²), whose probability density function is
f(x, µ, σ²) = [1/√(2πσ²)] e^(−(x − µ)²/(2σ²)) ; −∞ < x < ∞, −∞ < µ < ∞, σ > 0
The joint density function of X1, X2, ..., Xn can be obtained as
f(x1, x2, ..., xn, µ, σ²) = [1/(2πσ²)]^(n/2) e^(−(1/2σ²) Σ(xi − µ)²)
Writing xi − µ = (xi − x̄) + (x̄ − µ), we have
Σ(xi − µ)² = Σ(xi − x̄)² + n(x̄ − µ)² + 2(x̄ − µ) Σ(xi − x̄)
= Σ(xi − x̄)² + n(x̄ − µ)²   [since Σ(xi − x̄) = 0, by the property of the mean]
Therefore,
f(x1, x2, ..., xn, µ, σ²) = [1/(2πσ²)]^(n/2) e^(−(1/2σ²)[Σ(xi − x̄)² + n(x̄ − µ)²]) … (2)
Case I: Sufficient statistic for µ when σ² is known
The joint density function given in equation (2) can be factored as
f(x1, x2, ..., xn, µ) = {e^(−(n/2σ²)(x̄ − µ)²)} · {[1/(2πσ²)]^(n/2) e^(−(1/2σ²) Σ(xi − x̄)²)}
= g(t(x), µ) · h(x1, x2, ..., xn)
where g(t(x), µ) = e^(−(n/2σ²)(x̄ − µ)²) is a function of the parameter µ and of the sample values x1, x2, ..., xn only through t(x) = x̄, whereas h(x1, x2, ..., xn) = [1/(2πσ²)]^(n/2) e^(−(1/2σ²) Σ(xi − x̄)²) is independent of µ, σ² being known. Hence, by the factorization theorem of sufficiency, X̄ is a sufficient estimator for µ when σ² is known.
Case II: Sufficient statistic for σ² when µ is known
In this case, the joint density function can be factored as
f(x1, x2, ..., xn, σ²) = {[1/(2πσ²)]^(n/2) e^(−(1/2σ²) Σ(xi − µ)²)} · 1
= g(t(x), σ²) · h(x1, x2, ..., xn)
where g(t(x), σ²) = [1/(2πσ²)]^(n/2) e^(−(1/2σ²) Σ(xi − µ)²) is a function of the parameter σ² and of the sample values x1, x2, ..., xn only through t(x) = Σ(xi − µ)², whereas h(x1, x2, ..., xn) = 1 is independent of σ². Hence, by the factorization theorem of sufficiency, Σ(Xi − µ)² is a sufficient estimator for σ² when µ is known.
Case III: When both µ and σ² are unknown
The joint density function given in equation (2) can be factored as
f(x1, x2, ..., xn, µ, σ²) = {[1/(2πσ²)]^(n/2) e^(−(1/2σ²)[(n − 1)s² + n(x̄ − µ)²])} · 1
where s² = [1/(n − 1)] Σ(xi − x̄)². The first factor is a function of the parameters µ and σ² and of the sample values x1, x2, ..., xn only through t1(x) = x̄ and t2(x) = s², whereas h(x1, x2, ..., xn) = 1 is independent of µ and σ². Hence, by the factorization theorem of sufficiency, X̄ and s² are jointly sufficient for µ and σ².
E14) Here, we take a random sample X1, X2, ..., Xn from the uniform population over [0, θ], whose pdf is f(x, θ) = 1/θ; 0 ≤ x ≤ θ. The joint density function is
f(x1, x2, ..., xn, θ) = f(x1, θ) · f(x2, θ) ... f(xn, θ)
= (1/θ)(1/θ) ... (1/θ) = 1/θⁿ
Since the range of the variable depends upon the parameter θ, we consider the ordered statistics X(1) ≤ X(2) ≤ ... ≤ X(n). The joint density function can then be written as
f(x1, x2, ..., xn, θ) = [(1/θⁿ) I(x(n) ≤ θ)] · [I(0 ≤ x(1))]
= g(t(x), θ) · h(x1, x2, ..., xn)
where g(t(x), θ) = (1/θⁿ) I(x(n) ≤ θ) is a function of θ and of the sample values only through t(x) = x(n), whereas h(x1, x2, ..., xn) = I(0 ≤ x(1)) is independent of θ. Hence, by the factorization theorem of sufficiency, X(n) is a sufficient estimator of θ.