
Unbiasedness and efficiency

D. Patterson, Dept. of Mathematical Sciences, U. of Montana

Let X1, . . . , Xn be a random sample from some distribution which depends on a parameter θ, and let T = T(X1, . . . , Xn) be an estimator of θ. The efficiency of an estimator refers to how much information it extracts about the parameter of interest from the sample. A more efficient estimator extracts more information, in some sense, from a sample of a given size. Efficiency measures the information extracted by the variance of an unbiased estimator: smaller variance means greater efficiency.

Definitions:

1. T is an unbiased estimator of θ if

       E(T) = θ.

2. T is an asymptotically unbiased estimator of θ if

       lim_{n→∞} E(T) = θ.

3. T is an efficient estimator of θ if it is unbiased and its variance achieves the Cramér-Rao
   Lower Bound; that is, if

       Var(T) = 1/(nI(θ)).

4. The efficiency of an unbiased estimator T of θ is the ratio of the Cramér-Rao Lower
   Bound to the variance of the estimator; that is,

       Eff(T) = [1/(nI(θ))] / Var(T).

   Note that it must be true that Eff(T) ≤ 1. The smaller the value of the efficiency, the
   less efficient the estimator.

5. T is an asymptotically efficient estimator of θ if it is unbiased or asymptotically unbiased
   and

       lim_{n→∞} Eff(T) = 1.

6. The asymptotic efficiency of an unbiased or asymptotically unbiased estimator T of θ is

       lim_{n→∞} Eff(T).

7. If T1 and T2 are both unbiased estimators of θ, then the relative efficiency of T1 to T2 is

       Eff(T1, T2) = Var(T2)/Var(T1).

   If Eff(T1, T2) < 1, then T2 has smaller variance than T1 and T1 is less efficient than T2
   (a brief numerical sketch of these ratios follows this list).

8. If T1 and T2 are both unbiased or asymptotically unbiased estimators of θ, then the
   asymptotic relative efficiency of T1 to T2 is

       ARE(T1, T2) = lim_{n→∞} Eff(T1, T2).
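The following Python sketch (not part of the original notes) simply encodes Definitions 4 and 7 as arithmetic; the function names and the numerical values are illustrative placeholders.

    def efficiency(crlb, var_t):
        # Definition 4: Eff(T) = (Cramer-Rao lower bound) / Var(T) for an unbiased T.
        # The result is always <= 1; a value of 1 means T is efficient.
        return crlb / var_t

    def relative_efficiency(var_t1, var_t2):
        # Definition 7: Eff(T1, T2) = Var(T2) / Var(T1) for two unbiased estimators.
        # A value greater than 1 means T1 has the smaller variance (T1 is more efficient).
        return var_t2 / var_t1

    # Illustrative values only: CRLB = 0.5, Var(T1) = 0.5, Var(T2) = 2.0.
    print(efficiency(0.5, 0.5))           # 1.0  -> T1 attains the bound
    print(efficiency(0.5, 2.0))           # 0.25 -> T2 is inefficient
    print(relative_efficiency(0.5, 2.0))  # 4.0  -> T1 is 4 times as efficient as T2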

Notes:

• Efficiency and relative efficiency are useful concepts only for unbiased estimators, in which
case the MSEs of the estimators are equal to their variances. Since many estimators of
interest are not unbiased (indeed, no unbiased estimator may exist), these concepts can
sometimes be of limited usefulness. MSE or other measures of accuracy may be more
useful.

• Similarly, asymptotic relative efficiency only makes sense if both estimators are at least
asymptotically unbiased.

• The parameter of interest θ may be some function of the usual parameters of a distribution.
For example, θ could represent p/(1 − p) (the odds) in a binomial distribution or
the mean 1/β in an exponential distribution.

• The relative efficiency of two estimators depends only on the variances of the two estima-
tors and not on the Fisher Information.

• While asymptotic comparisons are interesting, we always have finite n in practice. One
needs to be particularly careful in interpreting a value of 1 for the asymptotic relative
efficiency of two estimators. There may be no reason to prefer one estimator to the other
asymptotically, but there may be large differences for small n. Often, simulations are
needed to compare the small-sample performance of estimators, as sketched below.
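One minimal way to carry out such a simulation is sketched below in Python (this code is an illustration, not part of the notes; the sampler, the two estimators, and the sample size are placeholders to be replaced by the estimators under study).

    import numpy as np

    def compare_estimators(sampler, est_a, est_b, n, reps=50_000, seed=0):
        # Draw `reps` samples of size n, apply both estimators to every sample,
        # and report the Monte Carlo means (bias check) and variances.
        rng = np.random.default_rng(seed)
        a = np.empty(reps)
        b = np.empty(reps)
        for r in range(reps):
            x = sampler(rng, n)
            a[r] = est_a(x)
            b[r] = est_b(x)
        return a.mean(), a.var(), b.mean(), b.var(), b.var() / a.var()

    # Placeholder usage: for normal data both the sample mean and the sample median
    # are unbiased for the population mean, so the last value returned estimates the
    # relative efficiency of the mean to the median at this sample size.
    print(compare_estimators(lambda rng, n: rng.normal(loc=0.0, scale=1.0, size=n),
                             np.mean, np.median, n=10))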

Example 1:
Suppose X1, . . . , Xn is a random sample from an exponential distribution with parameter β and
consider estimating µ = 1/β. Recall that the variance of the exponential distribution is 1/β² =
µ². We showed in class that X̄n is an unbiased estimator of µ and Var(X̄n) = Var(X)/n = µ²/n.
We also showed that the Fisher Information in the sample is nI(µ) = n/µ², so that Var(X̄n) =
µ²/n = 1/(nI(µ)) attains the Cramér-Rao lower bound and X̄n is an efficient estimator of µ
(or, equivalently, Eff(X̄n) = 1).

Now consider a second unbiased estimator of µ. It was proved on p. 299 that Y1 = min(X1, . . . , Xn)
has an exponential distribution with parameter nβ = n/µ. Then E(Y1) = µ/n, so µ̃ = nY1 is
also an unbiased estimator of µ. Also, Var(µ̃) = n² Var(Y1) = n²(µ²/n²) = µ². Hence, the
efficiency of µ̃ is Eff(µ̃) = (µ²/n)/µ² = 1/n, and the relative efficiency of X̄n to µ̃ is

       Eff(X̄n, µ̃) = Var(µ̃)/Var(X̄n) = µ²/(µ²/n) = n.

X̄n is n times more efficient than µ̃. X̄n is clearly a better estimator of µ than µ̃, and the
difference between them increases as n increases.
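A quick Monte Carlo check of this example, written in Python with NumPy (an illustration, not part of the notes; the values of µ, n, and the number of replications are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    mu, n, reps = 2.0, 10, 100_000            # true mean, sample size, replications

    # Each row is one sample of size n from an exponential distribution with mean mu.
    samples = rng.exponential(scale=mu, size=(reps, n))
    xbar = samples.mean(axis=1)                # X-bar_n for each sample
    mu_tilde = n * samples.min(axis=1)         # n * Y1 for each sample

    print(xbar.mean(), mu_tilde.mean())        # both approximately mu = 2 (unbiased)
    print(xbar.var(), mu_tilde.var())          # approximately mu^2/n = 0.4 and mu^2 = 4
    print(mu_tilde.var() / xbar.var())         # approximately n = 10

The last ratio is the simulated version of Eff(X̄n, µ̃) = n.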

Example 2:
Same situation as Example 1, but now the goal is to estimate β. Example 7.8.3 on p. 440
shows that the Cramér-Rao lower bound is β²/n and that T = (n − 1)/Σ_{i=1}^{n} Xi is an unbiased
estimator of β and has variance β²/(n − 2). Hence, the efficiency of T as an estimator of β is

       Eff(T) = (β²/n) / (β²/(n − 2)) = (n − 2)/n.

T is not an efficient estimator of β, but it is asymptotically efficient since lim_{n→∞} Eff(T) = 1.

Consider a second estimator of β, the MLE β̂ = 1/X̄n. It is not unbiased but is asymp-
totically unbiased since E(β̂) = nβ/(n − 1) → β as n → ∞. Since β̂ is not unbiased, we can't
compute its efficiency, but, since it is asymptotically unbiased, it does make sense to compute
its asymptotic efficiency. Since β̂ = (n/(n − 1)) T,

       Var(β̂) = (n/(n − 1))² Var(T) = n²β²/[(n − 1)²(n − 2)].

Hence, the asymptotic efficiency of β̂ is

       lim_{n→∞} Eff(β̂) = lim_{n→∞} (β²/n) / (n²β²/[(n − 1)²(n − 2)]) = lim_{n→∞} (n − 1)²(n − 2)/n³ = 1.

Therefore, β̂ is asymptotically efficient. Since β̂ is not unbiased, it makes no sense to compute
the relative efficiency of T and β̂, but we can compute the asymptotic relative efficiency. The
asymptotic relative efficiency of T to β̂ is

       ARE(T, β̂) = lim_{n→∞} Var(β̂)/Var(T) = lim_{n→∞} (n²β²/[(n − 1)²(n − 2)]) / (β²/(n − 2)) = lim_{n→∞} n²/(n − 1)² = 1.
Asymptotically, there is no difference in the performance of the two estimators. The reason
for that is clear: their ratio converges to 1 as n → ∞. They are asymptotically equivalent.
However, they are not equivalent for finite n.
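To make the finite-n caution concrete, the short Python sketch below (an illustration, not part of the notes) evaluates the exact expressions derived above for a few sample sizes: the efficiency (n − 2)/n of T, the quantity (n − 1)²(n − 2)/n³ whose limit is the asymptotic efficiency of β̂, the variance ratio Var(β̂)/Var(T) = n²/(n − 1)², and the bias factor E(β̂)/β = n/(n − 1).

    # Exact finite-sample quantities from Example 2; none of them depend on beta.
    for n in (5, 10, 30, 100):
        eff_T = (n - 2) / n                           # efficiency of T
        eff_term_bhat = (n - 1)**2 * (n - 2) / n**3   # -> 1 (asymptotic efficiency of beta-hat)
        var_ratio = n**2 / (n - 1)**2                 # Var(beta-hat) / Var(T)
        bias_factor = n / (n - 1)                     # E(beta-hat) / beta
        print(n, round(eff_T, 3), round(eff_term_bhat, 3),
              round(var_ratio, 3), round(bias_factor, 3))

All four quantities converge to 1, but at n = 5 the efficiency of T is only 0.6 and β̂ overestimates β by 25% on average.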
