
Statistical Theory (STAT 467/667) Spring 2012

Homework 7 Solution
Written by: Fares Qeadan, Revised by: Anna Panorska

Department of Mathematics and Statistics, University of Nevada, Reno

Problem 5.5.2.

Let $Y_1, Y_2, \ldots, Y_n$ be a random sample from $f_Y(y;\theta) = \frac{1}{\theta}e^{-y/\theta}$, $y > 0$. We will compare the Cramér-Rao lower bound for $f_Y(y;\theta)$ to the variance of the maximum likelihood estimator for $\theta$, $\hat{\theta} = \frac{1}{n}\sum_{i=1}^{n} Y_i = \bar{Y}$, and examine whether $\bar{Y}$ is a best estimator for $\theta$.


First we find the Fisher information
\[
I(\theta) = -E\left[\frac{\partial^2 \ln f(y;\theta)}{\partial \theta^2}\right].
\]
Note that
\[
\ln f_Y(y;\theta) = \ln\left(\frac{1}{\theta}e^{-y/\theta}\right) = \ln\frac{1}{\theta} + \ln e^{-y/\theta} = -\ln\theta - \frac{y}{\theta}.
\]
Thus
\[
\frac{\partial \ln f(y;\theta)}{\partial \theta} = -\frac{1}{\theta} + \frac{y}{\theta^2}
\]
and
\[
\frac{\partial^2 \ln f(y;\theta)}{\partial \theta^2} = \frac{1}{\theta^2} - \frac{2y}{\theta^3}.
\]
Therefore (assuming you know how to obtain $E(Y) = \theta$),
\[
I(\theta) = -E\left[\frac{1}{\theta^2} - \frac{2Y}{\theta^3}\right] = -\left(\frac{1}{\theta^2} - \frac{2}{\theta^3}E(Y)\right) = -\frac{1}{\theta^2} + \frac{2\theta}{\theta^3} = \frac{1}{\theta^2}.
\]
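For completeness, the assumed fact $E(Y) = \theta$ can be verified with one integration by parts:
\[
E(Y) = \int_0^\infty y\,\frac{1}{\theta}e^{-y/\theta}\,dy = \left[-y\,e^{-y/\theta}\right]_0^\infty + \int_0^\infty e^{-y/\theta}\,dy = 0 + \theta = \theta.
\]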
Thus,
\[
\mathrm{CRLB} = \frac{1}{nI(\theta)} = \frac{1}{n\frac{1}{\theta^2}} = \frac{\theta^2}{n}.
\]
Now (assuming you know how to obtain $\mathrm{Var}(Y) = \theta^2$),
\[
\mathrm{Var}(\bar{Y}) = \frac{\mathrm{Var}(Y)}{n} = \frac{\theta^2}{n} = \mathrm{CRLB}.
\]
Hence, $\hat{\theta}$ is a best estimator for $\theta$.
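As a quick numerical sanity check beyond what the problem asks for, here is a minimal Monte Carlo sketch in Python, with hypothetical values of $\theta$ and $n$, comparing the empirical variance of $\bar{Y}$ to the CRLB $\theta^2/n$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 50, 100_000  # hypothetical values for illustration

# Each row is one sample of size n from (1/theta) * exp(-y/theta);
# numpy's `scale` parameter is exactly the mean theta.
samples = rng.exponential(scale=theta, size=(reps, n))
ybar = samples.mean(axis=1)  # the MLE of theta for each replicate

print("empirical Var(Ybar):", ybar.var())    # approximately 0.08
print("CRLB theta^2 / n:   ", theta**2 / n)  # exactly 0.08
```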


Problem 5.5.4.

Suppose a random sample of size $n$ is taken from a normal distribution with mean $\mu$ and variance $\sigma^2$, where $\sigma^2$ is known. We will compare the Cramér-Rao lower bound with the variance of $\hat{\mu} = \bar{Y} = \frac{1}{n}\sum_{i=1}^{n} Y_i$ and examine whether $\bar{Y}$ is an efficient estimator for $\mu$.

Recall that the pdf of the normal distribution is
\[
f_Y(y;\mu) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(y-\mu)^2/2\sigma^2}.
\]
First we find the Fisher information
\[
I(\mu) = -E\left[\frac{\partial^2 \ln f(y;\mu)}{\partial \mu^2}\right].
\]
Note that
\[
\ln f_Y(y;\mu) = \ln\left(\frac{1}{\sigma\sqrt{2\pi}}\, e^{-(y-\mu)^2/2\sigma^2}\right) = -\ln\left(\sigma\sqrt{2\pi}\right) - \frac{(y-\mu)^2}{2\sigma^2}.
\]
Thus
\[
\frac{\partial \ln f(y;\mu)}{\partial \mu} = \frac{y-\mu}{\sigma^2}
\]
and
\[
\frac{\partial^2 \ln f(y;\mu)}{\partial \mu^2} = -\frac{1}{\sigma^2}.
\]
Therefore
\[
I(\mu) = -E\left[-\frac{1}{\sigma^2}\right] = \frac{1}{\sigma^2}
\]
and
\[
\mathrm{CRLB} = \frac{1}{nI(\mu)} = \frac{1}{n\frac{1}{\sigma^2}} = \frac{\sigma^2}{n}.
\]
Now,
\[
\mathrm{Var}(\hat{\mu}) = \mathrm{Var}(\bar{Y}) = \frac{1}{n}\mathrm{Var}(Y) = \frac{\sigma^2}{n} = \mathrm{CRLB}.
\]
Hence, $\hat{\mu}$ is an efficient estimator for $\mu$.
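The same kind of Monte Carlo sketch (again with hypothetical values, not part of the required solution) confirms this numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 5.0, 3.0, 40, 100_000  # hypothetical values

samples = rng.normal(loc=mu, scale=sigma, size=(reps, n))
mu_hat = samples.mean(axis=1)  # the sample mean for each replicate

print("empirical Var(mu_hat):", mu_hat.var())
print("CRLB sigma^2 / n:     ", sigma**2 / n)  # = 0.225
```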

Problem 5.6.7.

Let $Y_1, Y_2, \ldots, Y_n$ be a random sample of size $n$ from the pdf $f_Y(y;\theta) = \theta y^{\theta-1}$, $0 \le y \le 1$.
Note that
\[
L(\theta) = \prod_{i=1}^{n} \theta y_i^{\theta-1} = \theta^n \left(\prod_{i=1}^{n} y_i\right)^{\theta-1} \times 1 = u(T,\theta)\,v(\vec{y})
\]
where $T = \prod_{i=1}^{n} y_i$, $u(T,\theta) = \theta^n T^{\theta-1}$, and $v(\vec{y}) = 1$. That is, $T = \prod_{i=1}^{n} Y_i$ is a sufficient statistic for $\theta$.
Now, to examine whether the MLE $\hat{\theta}$ of $\theta$ is a function of $T = \prod_{i=1}^{n} Y_i$, it is more convenient to work with the log-likelihood function

\[
\ln L(\theta) = n\ln\theta + (\theta-1)\sum_{i=1}^{n} \ln y_i.
\]

Further,
\[
\frac{d \ln L}{d\theta} = \frac{n}{\theta} + \sum_{i=1}^{n} \ln y_i.
\]

Setting $\frac{d \ln L}{d\theta} = 0$ gives
\[
\hat{\theta} = -\frac{n}{\sum_{i=1}^{n} \ln Y_i} = -\frac{n}{\ln \prod_{i=1}^{n} Y_i} = -\frac{n}{\ln T}.
\]
That is, the MLE $\hat{\theta}$ of $\theta$ is a function of $T = \prod_{i=1}^{n} Y_i$.
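A minimal simulation sketch (with a hypothetical $\theta$ and $n$) shows the MLE, computed from $T$ alone, recovering the true parameter. Sampling uses the inverse-CDF method, since the cdf here is $y^\theta$:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n = 3.0, 200  # hypothetical values for illustration

# Inverse-CDF sampling: if U ~ Uniform(0,1), then U**(1/theta)
# has pdf theta * y**(theta - 1) on (0, 1).
y = rng.uniform(size=n) ** (1.0 / theta)

T = np.prod(y)                   # the sufficient statistic
theta_hat = -n / np.log(T)       # the MLE, a function of T alone
print("theta_hat:", theta_hat)   # should land near theta = 3.0

# Numerically safer, equivalent form: -n / sum(ln y_i)
print("same MLE: ", -n / np.log(y).sum())
```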

Problem 5.6.9.

We will write the pdf $f_Y(y;\lambda) = \lambda e^{-\lambda y}$, $y > 0$, in its exponential form and try to deduce a sufficient statistic for $\lambda$, assuming that the data consist of a random sample of size $n$.

\[
f(y\,|\,\lambda) = \lambda e^{-\lambda y} = \lambda \times 1 \times e^{(-\lambda)(y)} = a(\lambda)\,b(y)\,e^{c(\lambda)d(y)}
\]
where $a(\lambda) = \lambda$, $b(y) = 1$, $c(\lambda) = -\lambda$, and $d(y) = y$.

Note that the likelihood function for a sample from $f$ is
\[
L(\lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda y_i} = \lambda^n e^{-\lambda \sum_{i=1}^{n} y_i} \times 1 = u(T,\lambda)\,v(\vec{y}),
\]
where $T = \sum_{i=1}^{n} y_i$, $u(T,\lambda) = \lambda^n e^{-\lambda \sum_{i=1}^{n} y_i}$, and $v(\vec{y}) = 1$. By the Factorization Theorem, $T = \sum_{i=1}^{n} Y_i$ is a sufficient statistic for $\lambda$.
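To make the factorization concrete, here is a small sketch with made-up data: any two samples sharing the same value of $T = \sum y_i$ have identical likelihoods, so the data enter $L(\lambda)$ only through $T$:

```python
import numpy as np

def exp_likelihood(lam, y):
    """L(lambda) = lambda^n * exp(-lambda * sum(y)) for an exponential sample."""
    y = np.asarray(y)
    return lam ** y.size * np.exp(-lam * y.sum())

# Two hypothetical samples with the same T = sum(y_i) = 6 ...
y1 = [1.0, 2.0, 3.0]
y2 = [2.0, 2.0, 2.0]

# ... have identical likelihoods at every lambda, as the factorization predicts.
for lam in (0.5, 1.0, 2.0):
    print(lam, exp_likelihood(lam, y1), exp_likelihood(lam, y2))
```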

Problem 5.6.10.

Let $Y_1, Y_2, \ldots, Y_n$ be a random sample from a Pareto pdf
\[
f_Y(y;\theta) = \frac{\theta}{(1+y)^{\theta+1}}, \quad 0 < y < \infty,\ 0 < \theta < \infty.
\]

We will write the pdf in its exponential form and try to deduce a sufficient statistic for $\theta$.

\[
f(y\,|\,\theta) = \frac{\theta}{(1+y)^{\theta+1}} = \theta(1+y)^{-(\theta+1)} = \theta \times 1 \times e^{\ln(1+y)^{-(\theta+1)}} = \theta \times 1 \times e^{-(\theta+1)\ln(1+y)} = a(\theta)\,b(y)\,e^{c(\theta)d(y)},
\]
where $a(\theta) = \theta$, $b(y) = 1$, $c(\theta) = -(\theta+1)$, and $d(y) = \ln(1+y)$.

Note that the likelihood function for a sample from $f$ is
\[
L(\theta) = \prod_{i=1}^{n} \frac{\theta}{(1+y_i)^{\theta+1}} = \prod_{i=1}^{n} \theta e^{-(\theta+1)\ln(1+y_i)} = \theta^n e^{-(\theta+1)\sum_{i=1}^{n} \ln(1+y_i)} \times 1 = u(T,\theta)\,v(\vec{y}),
\]
where $T = \sum_{i=1}^{n} \ln(1+y_i)$, $u(T,\theta) = \theta^n e^{-(\theta+1)\sum_{i=1}^{n} \ln(1+y_i)}$, and $v(\vec{y}) = 1$. Thus, by the Factorization Theorem, $T = \sum_{i=1}^{n} \ln(1+Y_i)$ is a sufficient statistic for $\theta$.
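As before, a small illustrative sketch with made-up data: two samples with equal $T = \sum \ln(1+y_i)$ give the same likelihood at every $\theta$:

```python
import numpy as np

def pareto_likelihood(theta, y):
    """L(theta) = theta^n * exp(-(theta + 1) * sum(ln(1 + y_i)))."""
    y = np.asarray(y)
    return theta ** y.size * np.exp(-(theta + 1) * np.log1p(y).sum())

# Two hypothetical samples sharing T = ln(1+y_1) + ln(1+y_2) = ln 8 ...
y1 = [1.0, 3.0]
y2 = [np.e - 1.0, 8.0 / np.e - 1.0]

# ... produce identical likelihoods for every theta.
for theta in (0.5, 1.0, 2.0):
    print(theta, pareto_likelihood(theta, y1), pareto_likelihood(theta, y2))
```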

Problem 5.5.6.

Let $Y_1, Y_2, \ldots, Y_n$ be a random sample of size $n$ from the pdf
\[
f_Y(y;\theta) = \frac{1}{(r-1)!\,\theta^r}\, y^{r-1} e^{-y/\theta}, \quad y > 0.
\]

(a) We will show that $\hat{\theta} = \frac{1}{r}\bar{Y}$ is an unbiased estimator for $\theta$ (assuming you can obtain $E(Y) = r\theta$):
\[
E(\hat{\theta}) = E\left(\frac{1}{r}\bar{Y}\right) = \frac{1}{r}E(\bar{Y}) = \frac{1}{r}E(Y) = \frac{1}{r}\,r\theta = \theta.
\]

Thus, $\hat{\theta} = \frac{1}{r}\bar{Y}$ is an unbiased estimator for $\theta$.

(b) We will show that $\hat{\theta} = \frac{1}{r}\bar{Y}$ is a minimum variance estimator for $\theta$ (assuming you can obtain $\mathrm{Var}(Y) = r\theta^2$).
First, we will find the Fisher information
\[
I(\theta) = -E\left[\frac{\partial^2 \ln f(y;\theta)}{\partial \theta^2}\right].
\]
Note that
\[
\ln f_Y(y;\theta) = \ln\left(\frac{1}{(r-1)!\,\theta^r}\, y^{r-1} e^{-y/\theta}\right) = -\ln((r-1)!) - r\ln\theta + (r-1)\ln y - \frac{y}{\theta}.
\]
Thus
\[
\frac{\partial \ln f(y;\theta)}{\partial \theta} = -\frac{r}{\theta} + \frac{y}{\theta^2}
\]

and
\[
\frac{\partial^2 \ln f(y;\theta)}{\partial \theta^2} = \frac{r}{\theta^2} - \frac{2y}{\theta^3}.
\]
Therefore (assuming you know how to obtain $E(Y) = r\theta$),
\[
I(\theta) = -E\left[\frac{r}{\theta^2} - \frac{2Y}{\theta^3}\right] = -\left(\frac{r}{\theta^2} - \frac{2}{\theta^3}E(Y)\right) = -\frac{r}{\theta^2} + \frac{2r\theta}{\theta^3} = \frac{r}{\theta^2}.
\]
Thus,
\[
\mathrm{CRLB} = \frac{1}{nI(\theta)} = \frac{1}{n\frac{r}{\theta^2}} = \frac{\theta^2}{nr}.
\]
Now (assuming you know how to obtain $\mathrm{Var}(Y) = r\theta^2$),
\[
\mathrm{Var}(\hat{\theta}) = \mathrm{Var}\left(\frac{1}{r}\bar{Y}\right) = \frac{1}{r^2}\mathrm{Var}(\bar{Y}) = \frac{1}{r^2}\,\frac{\mathrm{Var}(Y)}{n} = \frac{r\theta^2}{r^2 n} = \frac{\theta^2}{rn} = \mathrm{CRLB}.
\]
So, since $\mathrm{CRLB} = \mathrm{Var}(\hat{\theta})$, indeed $\hat{\theta}$ is a minimum variance estimator for $\theta$.
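Finally, a minimal Monte Carlo sketch (hypothetical $r$, $\theta$, and $n$) checks both parts at once: $\hat{\theta} = \frac{1}{r}\bar{Y}$ is centered at $\theta$ and its variance matches the CRLB $\theta^2/(nr)$. The pdf above is a gamma density, which numpy samples with shape $r$ and scale $\theta$:

```python
import numpy as np

rng = np.random.default_rng(3)
r, theta, n, reps = 4, 1.5, 30, 100_000  # hypothetical values

# The pdf in this problem is the gamma density with shape r and scale theta.
samples = rng.gamma(shape=r, scale=theta, size=(reps, n))
theta_hat = samples.mean(axis=1) / r  # the estimator (1/r) * Ybar

print("mean of theta_hat:  ", theta_hat.mean())  # ~ theta = 1.5 (unbiased)
print("empirical Var:      ", theta_hat.var())
print("CRLB theta^2/(n r): ", theta**2 / (n * r))  # = 0.01875
```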
