
Stat 463

Estimation 2: Ch. 7.1-7.4

Given a random sample of size $n$, $X_1, \dots, X_n$, we want to estimate the parameter $\theta$ with a GOOD estimator.
Estimator: $u(X_1, X_2, \dots, X_n)$ (a statistic, a random variable)
Estimate: $u(x_1, x_2, \dots, x_n)$ (the observed value)
One possible definition of a GOOD estimator is one that is unbiased and has small variance.
Def 7.1.1 $Y = u(X_1, X_2, \dots, X_n)$ is called a minimum variance unbiased estimator (MVUE) of the parameter $\theta$ if $Y$ is unbiased and if the variance of $Y$ is less than or equal to the variance of every unbiased estimator of $\theta$.
Example. $X_1, \dots, X_9 \overset{iid}{\sim} N(\theta, 1)$. Consider two estimators for $\theta$:
$$Y_1 = u_1(X_1, \dots, X_9) = \bar{X}, \qquad Y_2 = u_2(X_1, \dots, X_9) = X_1.$$
$$E\{Y_1\} = \theta, \quad V\{Y_1\} = \tfrac{1}{9}, \qquad E\{Y_2\} = \theta, \quad V\{Y_2\} = 1.$$
Both are unbiased, and $Y_1$ has the smaller variance. Can we say that $Y_1$ has minimum variance among all unbiased estimators?
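A quick Monte Carlo check of these values (a minimal sketch; the choices $\theta = 2$ and the seed are arbitrary):

```python
# Compare the two unbiased estimators Y1 = sample mean and Y2 = X1
# for X1,...,X9 iid N(theta, 1). Theory: V(Y1) = 1/9, V(Y2) = 1.
import numpy as np

rng = np.random.default_rng(463)
theta, n, reps = 2.0, 9, 100_000
samples = rng.normal(loc=theta, scale=1.0, size=(reps, n))

y1 = samples.mean(axis=1)   # Y1 = sample mean
y2 = samples[:, 0]          # Y2 = first observation

print(f"E(Y1) ≈ {y1.mean():.4f}, V(Y1) ≈ {y1.var():.4f}  (theory: {theta}, {1/n:.4f})")
print(f"E(Y2) ≈ {y2.mean():.4f}, V(Y2) ≈ {y2.var():.4f}  (theory: {theta}, 1.0)")
```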


Decision-Theoretic Approach: Let $\delta(y)$ be a function of the observed value of the statistic $Y$; it is called a decision function or decision rule.
A loss function measures the difference between $\delta(y)$ and the true value $\theta$.
Loss Function: $L[\theta, \delta(y)]$
1. Nonnegative
2. Calculable for each pair $[\theta, \delta(y)]$.
Risk Function: $R[\theta, \delta] = E\{L[\theta, \delta(Y)]\}$
Generally a function of both $\theta$ and $\delta$.
A single decision rule may or may not minimize the risk function for all $\theta$.

Ex (B7.1.2.) $X_1, \dots, X_{25} \overset{iid}{\sim} N(\theta, 1)$, $Y = \bar{X}$,
$$L[\theta, \delta(y)] = [\theta - \delta(y)]^2.$$
$\delta_1(y) = y$, $\delta_2(y) = 0$
$$R(\theta, \delta_1) = E[(\theta - Y)^2] = \frac{1}{25}$$



$$R(\theta, \delta_2) = E[(\theta - 0)^2] = \theta^2$$
$$R(\theta, \delta_2) < R(\theta, \delta_1) \iff -\frac{1}{5} < \theta < \frac{1}{5}$$
$$R(\theta, \delta_2) \ge R(\theta, \delta_1) \quad \text{elsewhere}$$
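A quick numerical check of this crossover (a minimal sketch; the grid of $\theta$ values and the seed are arbitrary choices):

```python
# Risk comparison from Ex B7.1.2: R(theta, d1) = Var(Xbar) = 1/25
# for all theta, while R(theta, d2) = theta^2, so d2 wins only
# when |theta| < 1/5.
import numpy as np

rng = np.random.default_rng(463)
n, reps = 25, 20_000

for theta in (-0.3, -0.1, 0.0, 0.1, 0.3):
    xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
    r1 = np.mean((theta - xbar) ** 2)  # Monte Carlo risk of d1(y) = y
    r2 = theta ** 2                    # exact risk of d2(y) = 0
    winner = "d2" if r2 < r1 else "d1"
    print(f"theta = {theta:+.1f}: R1 ≈ {r1:.4f}, R2 = {r2:.4f} -> smaller risk: {winner}")
```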

Note
1. Restriction: unbiasedness, $E[\delta(Y)] = \theta$.
2. Minimize the maximum risk: the minimax criterion.
Examples of loss functions:
1. Squared-error loss function: $L[\theta, \delta] = [\theta - \delta]^2$
2. Absolute-error loss function: $L[\theta, \delta] = |\theta - \delta|$
3. Goal post loss function:
$$L[\theta, \delta] = \begin{cases} 0, & |\theta - \delta| \le a, \\ b, & |\theta - \delta| > a, \end{cases}$$
where $a$ and $b$ are positive constants.




Ex (P7.1.4.)

Ex (P7.1.5.)


7.2. A Sufficient Statistic


$X_1, \dots, X_n \overset{iid}{\sim} f(x; \theta)$
The p.d.f. $f(x; \theta)$ carries the necessary information to estimate $\theta$.

$N(\theta, 1)$: $Y = u(X_1, \dots, X_n) = \sum_{i=1}^n X_i$ (equivalently $\bar{X}$)

$N(0, \theta)$: $Y = u(X_1, \dots, X_n) = \sum_{i=1}^n X_i^2$

$b(1, \theta)$: $Y = u(X_1, \dots, X_n) = \sum_{i=1}^n X_i$

Which statistic should be used to do data reduction?
⇒ Reduce the dimension of the data without losing any information necessary to estimate the parameter $\theta$.
⇒ The conditional distribution of $X_1, \dots, X_n$ given the sufficient statistic does not depend on the parameter $\theta$.
⇒ A sufficient statistic exhausts all the information about $\theta$ that the data has.
Def 7.2.1. $X_1, \dots, X_n \overset{iid}{\sim} f(x; \theta)$. Let $Y_1 = u_1(X_1, \dots, X_n)$ be a statistic whose p.d.f. is $g_1(y_1; \theta)$.


Then $Y_1$ is a sufficient statistic for $\theta$ if and only if
$$\frac{f(x_1; \theta) \cdots f(x_n; \theta)}{g_1[u_1(x_1, \dots, x_n); \theta]} = H(x_1, \dots, x_n),$$
where $H(x_1, \dots, x_n)$ does not depend on $\theta$.
Ex (B7.2.1) $X_1, \dots, X_n \overset{iid}{\sim} b(1, \theta)$. $Y_1 = X_1 + \cdots + X_n$.

Ex (B7.2.2.) $X_1, \dots, X_n \overset{iid}{\sim} \text{gamma}(2, \theta)$. $Y_1 = X_1 + \cdots + X_n$.
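For Ex B7.2.1, the check via Def 7.2.1 is the standard calculation (sketched here; write $y_1 = \sum_{i=1}^n x_i$ and note $Y_1 \sim b(n, \theta)$):
$$\frac{\prod_{i=1}^n \theta^{x_i}(1-\theta)^{1-x_i}}{\binom{n}{y_1}\theta^{y_1}(1-\theta)^{n-y_1}} = \frac{\theta^{\sum x_i}(1-\theta)^{n-\sum x_i}}{\binom{n}{\sum x_i}\theta^{\sum x_i}(1-\theta)^{n-\sum x_i}} = \binom{n}{\sum x_i}^{-1},$$
which does not depend on $\theta$, so $Y_1$ is sufficient.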

Given a statistic, we can check whether or not it is sufficient using the definition of a sufficient statistic. But it is not easy to find a sufficient statistic just from the distribution of the random sample. The Factorization Theorem lets us do that.



Theorem 7.2.1. Let $X_1, \dots, X_n \overset{iid}{\sim} f(x; \theta)$, $\theta \in \Omega$. The statistic $Y_1 = u_1(X_1, \dots, X_n)$ is a sufficient statistic for $\theta$ if and only if we can find two nonnegative functions, $k_1$ and $k_2$, such that
$$f(x_1; \theta) \cdots f(x_n; \theta) = k_1[u_1(x_1, \dots, x_n); \theta]\, k_2(x_1, \dots, x_n),$$
where $k_2(x_1, \dots, x_n)$ does not depend on $\theta$.
Ex (B7.2.4.) $X_1, \dots, X_n \overset{iid}{\sim} N(\theta, \sigma^2)$, where $\sigma^2$ is known.

Ex (B7.2.6.) Let $Y_1 < Y_2 < Y_3$ denote the order statistics of a random sample of size 3 from the distribution with p.d.f.
$$f(x; \theta) = e^{-(x-\theta)} I_{(\theta, \infty)}(x).$$
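For Ex B7.2.4, the usual factorization is sketched below (it uses the identity $\sum(x_i - \theta)^2 = \sum(x_i - \bar{x})^2 + n(\bar{x} - \theta)^2$):
$$\prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x_i-\theta)^2/(2\sigma^2)} = \underbrace{e^{-n(\bar{x}-\theta)^2/(2\sigma^2)}}_{k_1(\bar{x};\, \theta)}\ \underbrace{(2\pi\sigma^2)^{-n/2}\, e^{-\sum(x_i-\bar{x})^2/(2\sigma^2)}}_{k_2(x_1, \dots, x_n)},$$
so $\bar{X}$ is sufficient for $\theta$.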



Ex (P7.2.1) $X_1, \dots, X_n \overset{iid}{\sim} N(0, \theta)$

Ex (P7.2.4) $X_1, \dots, X_n \overset{iid}{\sim} \text{geometric}(\theta)$
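For Ex P7.2.1 the factorization is immediate (a sketch; here $\theta$ is the variance):
$$\prod_{i=1}^n \frac{1}{\sqrt{2\pi\theta}}\, e^{-x_i^2/(2\theta)} = \underbrace{(2\pi\theta)^{-n/2}\, e^{-\sum x_i^2/(2\theta)}}_{k_1\left[\sum x_i^2;\, \theta\right]} \cdot \underbrace{1}_{k_2(x_1, \dots, x_n)},$$
so $\sum_{i=1}^n X_i^2$ is sufficient for $\theta$.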


7.3. Properties of Sufficient Statistics


A function of a sufficient statistic is also sufficient.
Let $Y_1 = u_1(X_1, \dots, X_n)$ be a sufficient statistic for the parameter $\theta$, and let $Z = u(Y_1) = u[u_1(X_1, \dots, X_n)] = v(X_1, \dots, X_n)$ be a function of the sufficient statistic that does not involve $\theta$ and has a single-valued inverse, so that $Y_1 = w(Z)$. Then $Z$ is also a sufficient statistic:
$$f(x_1; \theta) \cdots f(x_n; \theta) = k_1[u_1(x_1, \dots, x_n); \theta]\, k_2(x_1, \dots, x_n) = k_1\{w[v(x_1, \dots, x_n)]; \theta\}\, k_2(x_1, \dots, x_n).$$
A sufficient statistic is not unique.
Thm 7.3.1 (Rao-Blackwell). $X_1, \dots, X_n \overset{iid}{\sim} f(x; \theta)$, $\theta \in \Omega$. Let $Y_1 = u_1(X_1, \dots, X_n)$ be a sufficient statistic for the parameter $\theta$ and let $Y_2 = u_2(X_1, \dots, X_n)$ be an unbiased estimator of $\theta$. Then $E(Y_2 | y_1) = \varphi(y_1)$ defines a statistic $\varphi(Y_1)$, and
1. $\varphi(Y_1)$ is an unbiased estimator of $\theta$
2. $\varphi(Y_1)$ is a function of a sufficient statistic


3. $Var\{\varphi(Y_1)\} \le Var\{Y_2\}$
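A small simulation of the Rao-Blackwell improvement in the Bernoulli case (a minimal sketch; $\theta = 0.3$, $n = 20$, and the seed are arbitrary, and $E(X_1 | Y_1) = Y_1/n$ is the standard conditional expectation for this model):

```python
# Rao-Blackwellization for X1,...,Xn iid b(1, theta):
# Y2 = X1 is unbiased for theta; conditioning on the sufficient
# statistic Y1 = sum(X_i) gives phi(Y1) = E(X1 | Y1) = Y1/n = Xbar,
# still unbiased but with smaller variance.
import numpy as np

rng = np.random.default_rng(463)
theta, n, reps = 0.3, 20, 100_000
x = rng.binomial(1, theta, size=(reps, n))

y2 = x[:, 0]           # crude unbiased estimator Y2 = X1
phi = x.mean(axis=1)   # Rao-Blackwellized estimator phi(Y1) = Y1/n

print(f"E(Y2)  ≈ {y2.mean():.4f}, Var(Y2)  ≈ {y2.var():.4f}")   # theory: 0.3, 0.21
print(f"E(phi) ≈ {phi.mean():.4f}, Var(phi) ≈ {phi.var():.4f}") # theory: 0.3, 0.0105
```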
Thm 7.3.2. $X_1, \dots, X_n \overset{iid}{\sim} f(x; \theta)$, $\theta \in \Omega$. If $Y_1 = u_1(X_1, \dots, X_n)$ is a sufficient statistic for $\theta$ and $\hat{\theta}$ is the unique m.l.e. of $\theta$, then $\hat{\theta}$ is a function of $u_1(X_1, \dots, X_n)$.
Proof.

Ex: $X_1, \dots, X_n \overset{iid}{\sim} b(1, \theta)$.

Ex (B7.3.2)


Example (P7.3.2.) $Y_1 < \cdots < Y_5$ are the order statistics of a random sample of size 5 from the distribution with p.d.f.
$$f(x; \theta) = \frac{1}{\theta}, \quad 0 < x < \theta, \quad 0 < \theta < \infty.$$
Sufficient statistic and its p.d.f.
Is $2Y_3$ unbiased?
$\varphi(y_5) = E(2Y_3 | y_5)$
Compare the variances of $2Y_3$ and $\varphi(Y_5)$.
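A simulation sketch of this exercise (assuming the standard result that $E(2Y_3 | Y_5 = y_5) = \tfrac{6}{5} y_5$ for this uniform model; $\theta = 3$ and the seed are arbitrary):

```python
# Uniform(0, theta), n = 5: compare the unbiased estimator 2*Y3
# with its Rao-Blackwellization phi(Y5) = (6/5)*Y5.
import numpy as np

rng = np.random.default_rng(463)
theta, reps = 3.0, 100_000
y = np.sort(rng.uniform(0, theta, size=(reps, 5)), axis=1)  # order statistics

t1 = 2 * y[:, 2]        # 2*Y3
t2 = (6 / 5) * y[:, 4]  # phi(Y5)

print(f"E(2Y3)     ≈ {t1.mean():.4f}, Var(2Y3)     ≈ {t1.var():.4f}  (theory: {theta}, {theta**2/7:.4f})")
print(f"E(phi(Y5)) ≈ {t2.mean():.4f}, Var(phi(Y5)) ≈ {t2.var():.4f}  (theory: {theta}, {theta**2/35:.4f})")
```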
Example (P7.3.3.) $X_1, X_2$ is a random sample of size 2 from the distribution with p.d.f.
$$f(x; \theta) = \frac{1}{\theta} \exp\left(-\frac{x}{\theta}\right), \quad 0 < x < \infty.$$
Sufficient statistic and its p.d.f.
Is $Y_2$ unbiased?
$\varphi(y_1) = E(Y_2 | y_1)$
Compare the variances of $Y_2$ and $\varphi(Y_1)$.


7.4. Completeness and Uniqueness


Ex $X_1, \dots, X_n \overset{iid}{\sim} \text{Poisson}(\theta)$. A sufficient statistic for $\theta$ is $Y_1 = \sum_{i=1}^n X_i$. The p.d.f. of this sufficient statistic $Y_1$ is
$$g_1(y_1; \theta) = \frac{(n\theta)^{y_1} e^{-n\theta}}{y_1!}, \quad y_1 = 0, 1, 2, \dots$$
When will the expected value of a function of $Y_1$ be zero for every $\theta$?
$E[u(Y_1)] = 0$ for every $\theta > 0$ if and only if $u(y_1) = 0$ for $y_1 = 0, 1, 2, \dots$ Why?
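The reason is the standard power-series argument (sketched here):
$$E[u(Y_1)] = \sum_{y_1=0}^{\infty} u(y_1)\, \frac{(n\theta)^{y_1} e^{-n\theta}}{y_1!} = e^{-n\theta} \sum_{y_1=0}^{\infty} \frac{u(y_1)\, n^{y_1}}{y_1!}\, \theta^{y_1}.$$
A power series in $\theta$ that vanishes for all $\theta > 0$ must have all coefficients zero, so $u(y_1) = 0$ for every $y_1 = 0, 1, 2, \dots$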
Def 7.4.1. Let the random variable $Z$ have a p.d.f. from the family $\{h(z; \theta);\ \theta \in \Omega\}$. If the condition $E[u(Z)] = 0$ for every $\theta \in \Omega$ requires that $u(z) = 0$ except on a set of points that has probability zero for each p.d.f. $h(z; \theta)$, $\theta \in \Omega$, then the family $\{h(z; \theta);\ \theta \in \Omega\}$ is called a complete family of probability density functions.
Ex (B7.4.1.) Let $Z$ have a p.d.f. that is a member of the family $\{h(z; \theta);\ 0 < \theta < \infty\}$,
$$h(z; \theta) = \frac{1}{\theta} \exp\left(-\frac{z}{\theta}\right), \quad 0 < z < \infty.$$
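Completeness of this family follows from the uniqueness of Laplace transforms (a sketch of the standard argument, not a full proof): if
$$E[u(Z)] = \int_0^\infty u(z)\, \frac{1}{\theta} e^{-z/\theta}\, dz = 0 \quad \text{for all } \theta > 0,$$
then the Laplace transform of $u$ vanishes identically, so $u(z) = 0$ except on a set of points with probability zero.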


Thm 7.4.1 (Lehmann and Scheffé). $X_1, \dots, X_n \overset{iid}{\sim} f(x; \theta)$, $\theta \in \Omega$.
Sufficient statistic: $Y_1 = u_1(X_1, \dots, X_n)$.
Suppose the family of p.d.f.s $\{f_{Y_1}(y_1; \theta);\ \theta \in \Omega\}$ is complete.
If there is a function of $Y_1$ that is an unbiased estimator of $\theta$, then this function of $Y_1$ is the unique minimum variance unbiased estimator.

Find the function of the complete sufficient statistic that is unbiased for the parameter. Then it will be the unique MVUE.
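For instance, in the Poisson example above (a standard application of the theorem): $Y_1 = \sum_{i=1}^n X_i$ is a complete sufficient statistic and $E(Y_1/n) = \theta$, so $\bar{X} = Y_1/n$ is the unique MVUE of $\theta$.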



Ex (P7.4.3.)

Ex (P7.4.6.)
