
Section 5.4 explores the concept of unbiasedness with examples.
Unbiasedness is based on a fixed sample size, whereas consistency, the
corresponding concept based on a varying sample size, is described in
Section 5.5. There may exist more than one consistent estimator of a
parameter; therefore, the next property, efficiency, is explained in
Section 5.6. Section 5.7 is devoted to sufficiency. The unit ends with a
summary of what we have discussed in this unit in Section 5.8 and the
solutions of the exercises in Section 5.9.
Objectives
After studying this unit, you should be able to:
• define the parameter space and joint probability density (mass) function;
• describe the characteristics of an estimator;
• explain the unbiasedness of an estimator;
• explain the consistency of an estimator;
• explain the efficiency of an estimator;
• explain the most efficient estimator;
• explain the sufficiency of an estimator; and
• describe the minimum variance unbiased estimator.

5.2 BASIC TERMINOLOGY


Before discussing the properties of a good estimator, we discuss the basic
definitions of some important terms. These terms are very useful in
understanding the fundamentals of the theory of estimation discussed in this
block.
Discrete and Continuous Distributions
In Units 12 and 13 of MST-003, we have discussed standard discrete and
continuous distributions such as the binomial, Poisson, normal, exponential,
etc. We know that populations can be described with the help of
distributions; therefore, standard discrete and continuous distributions are
used in statistical inference. Here, we briefly describe some standard
discrete and continuous distributions, giving for each its probability
function, parameter(s), mean and variance:
1. Bernoulli (discrete)
   P(X = x) = p^x (1 − p)^(1 − x);  x = 0, 1
   Parameter: p          Mean: p            Variance: pq

2. Binomial (discrete)
   P(X = x) = nCx p^x q^(n − x);  x = 0, 1, ..., n
   Parameters: n & p     Mean: np           Variance: npq

3. Poisson (discrete)
   P(X = x) = e^(−λ) λ^x / x!;  x = 0, 1, ... & λ > 0
   Parameter: λ          Mean: λ            Variance: λ

4. Uniform (discrete)
   P(X = x) = 1/N;  x = 1, 2, ..., N
   Parameter: N          Mean: (N + 1)/2    Variance: (N² − 1)/12

5. Hypergeometric (discrete)
   P(X = x) = [MCx × (N−M)C(n−x)] / NCn;  x = 0, 1, ..., min(n, M)
   Parameters: N, M & n  Mean: nM/N         Variance: nM(N − M)(N − n) / [N²(N − 1)]

6. Geometric (discrete)
   P(X = x) = p q^x;  x = 0, 1, 2, ...
   Parameter: p          Mean: q/p          Variance: q/p²

7. Negative Binomial (discrete)
   P(X = x) = (x+r−1)C(r−1) p^r q^x;  x = 0, 1, 2, ...
   Parameters: r & p     Mean: rq/p         Variance: rq/p²

8. Normal (continuous)
   f(x) = [1/(σ√(2π))] e^(−(1/2)((x − µ)/σ)²);  −∞ < x < ∞, −∞ < µ < ∞, σ > 0
   Parameters: µ & σ     Mean: µ            Variance: σ²

9. Standard Normal (continuous)
   f(x) = [1/√(2π)] e^(−x²/2);  −∞ < x < ∞
   Parameters: none      Mean: 0            Variance: 1

10. Uniform (continuous)
    f(x) = 1/(b − a);  a ≤ x ≤ b
    Parameters: a & b    Mean: (a + b)/2    Variance: (b − a)²/12

11. Exponential (continuous)
    f(x) = θ e^(−θx);  x ≥ 0 & θ > 0
    Parameter: θ         Mean: 1/θ          Variance: 1/θ²

12. Gamma (continuous)
    f(x) = [a^b / Γ(b)] e^(−ax) x^(b − 1);  x ≥ 0 & a > 0, b > 0
    Parameters: a & b    Mean: b/a          Variance: b/a²

13. Beta First Kind (continuous)
    f(x) = [1/B(a, b)] x^(a − 1) (1 − x)^(b − 1);  0 < x < 1 & a > 0, b > 0
    Parameters: a & b    Mean: a/(a + b)    Variance: ab / [(a + b)²(a + b + 1)]

14. Beta Second Kind (continuous)
    f(x) = [1/B(a, b)] x^(a − 1) / (1 + x)^(a + b);  x ≥ 0 & a > 0, b > 0
    Parameters: a & b    Mean: a/(b − 1), b > 1    Variance: a(a + b − 1) / [(b − 1)²(b − 2)], b > 2
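As a quick numerical cross-check of the means and variances listed above, the short Python sketch below (an illustrative addition, not part of the unit; it assumes the SciPy library is available and uses arbitrary parameter values) compares the tabulated expressions with the moments reported by scipy.stats for the binomial and exponential cases:

# Minimal sketch (assumes SciPy is installed); cross-checks the tabulated
# mean and variance formulae for the binomial and exponential distributions.
from scipy import stats

# Binomial(n, p): mean = np, variance = npq, where q = 1 - p
n, p = 10, 0.3                       # arbitrary illustrative values
q = 1 - p
binom = stats.binom(n, p)
print(binom.mean(), n * p)           # both print 3.0
print(binom.var(), n * p * q)        # both print 2.1

# Exponential(theta): f(x) = theta e^(-theta x); mean = 1/theta, variance = 1/theta^2
theta = 2.0                          # arbitrary rate parameter
expo = stats.expon(scale=1 / theta)  # SciPy parameterises by scale = 1/theta
print(expo.mean(), 1 / theta)        # both print 0.5
print(expo.var(), 1 / theta ** 2)    # both print 0.25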
Parameter Space
The set of all possible values that the parameter or parameters θ1, θ2, …, θk
can assume is called the parameter space. It is denoted by Θ and is read
as “big theta”. For example, if the parameter θ represents the average life of
electric bulbs manufactured by a company, then the parameter space of θ is
Θ = {θ : 0 ≤ θ < ∞}
that is, the parameter average life can take all possible values greater than
or equal to 0. Similarly, in the normal distribution N(µ, σ²), the parameter
space of the parameters µ and σ² is Θ = {(µ, σ²) : −∞ < µ < ∞; 0 < σ² < ∞}.

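To express the idea in another form, the short Python sketch below (an illustrative addition, not part of the unit; the helper name in_normal_parameter_space is hypothetical) writes the parameter space of N(µ, σ²) as a simple membership test:

import math

def in_normal_parameter_space(mu, sigma_sq):
    """Check whether (mu, sigma_sq) lies in the parameter space
    {(mu, sigma^2) : -inf < mu < inf, 0 < sigma^2 < inf}."""
    return math.isfinite(mu) and 0 < sigma_sq < math.inf

print(in_normal_parameter_space(5.0, 2.5))    # True: mean finite, variance positive
print(in_normal_parameter_space(5.0, -1.0))   # False: variance must be positive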
Joint Probability Density (Mass) Function


If X1, X2, ..., Xn is a random sample of size n taken from a population
whose probability density (mass) function is f(x, θ), where θ is the
population parameter, then the joint probability density (mass) function of
the sample values is denoted by f(x1, x2, ..., xn, θ) and is defined as follows:

For the discrete case,

f(x1, x2, ..., xn, θ) = P[X1 = x1, X2 = x2, ..., Xn = xn]