
QUESTION 1

Suppose $X_1, X_2, \ldots, X_n$ is a random sample from the $\mathrm{Exp}(\lambda)$ distribution. Consider the following estimators for $\theta = 1/\lambda$: $\hat{\theta}_1 = (1/n)\sum_{i=1}^n X_i$ and $\hat{\theta}_2 = (1/(n+1))\sum_{i=1}^n X_i$.

(i) Find the biases of $\hat{\theta}_1$ and $\hat{\theta}_2$.

(ii) Find the variances of $\hat{\theta}_1$ and $\hat{\theta}_2$.

(iii) Find the mean squared errors of $\hat{\theta}_1$ and $\hat{\theta}_2$.

(iv) Which of the two estimators ($\hat{\theta}_1$ or $\hat{\theta}_2$) is better and why?

SOLUTIONS TO QUESTION 1

Suppose $X_1, X_2, \ldots, X_n$ is a random sample from the $\mathrm{Exp}(\lambda)$ distribution. Consider the following estimators for $\theta = 1/\lambda$: $\hat{\theta}_1 = (1/n)\sum_{i=1}^n X_i$ and $\hat{\theta}_2 = (1/(n+1))\sum_{i=1}^n X_i$.

(i) The bias of $\hat{\theta}_1$ is
$$E(\hat{\theta}_1) - \theta = E\left(\frac{1}{n}\sum_{i=1}^n X_i\right) - \theta = \frac{1}{n}\sum_{i=1}^n E(X_i) - \theta = \frac{1}{n}\sum_{i=1}^n \theta - \theta = \theta - \theta = 0.$$
The bias of $\hat{\theta}_2$ is
$$E(\hat{\theta}_2) - \theta = E\left(\frac{1}{n+1}\sum_{i=1}^n X_i\right) - \theta = \frac{1}{n+1}\sum_{i=1}^n E(X_i) - \theta = \frac{n\theta}{n+1} - \theta = -\frac{\theta}{n+1}.$$

(ii) The variance of $\hat{\theta}_1$ is
$$\mathrm{Var}(\hat{\theta}_1) = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(X_i) = \frac{1}{n^2}\sum_{i=1}^n \theta^2 = \frac{\theta^2}{n}.$$
The variance of $\hat{\theta}_2$ is
$$\mathrm{Var}(\hat{\theta}_2) = \mathrm{Var}\left(\frac{1}{n+1}\sum_{i=1}^n X_i\right) = \frac{1}{(n+1)^2}\sum_{i=1}^n \mathrm{Var}(X_i) = \frac{1}{(n+1)^2}\sum_{i=1}^n \theta^2 = \frac{n\theta^2}{(n+1)^2}.$$

(iii) The mean squared error of $\hat{\theta}_1$ is
$$\mathrm{MSE}(\hat{\theta}_1) = \frac{\theta^2}{n}.$$
The mean squared error of $\hat{\theta}_2$ is
$$\mathrm{MSE}(\hat{\theta}_2) = \frac{n\theta^2}{(n+1)^2} + \left(\frac{\theta}{n+1}\right)^2 = \frac{\theta^2}{n+1}.$$

(iv) In terms of bias, $\hat{\theta}_1$ is unbiased and $\hat{\theta}_2$ is biased (however, $\hat{\theta}_2$ is asymptotically unbiased). So, one would prefer $\hat{\theta}_1$ if bias is the important issue.
In terms of mean squared error, $\hat{\theta}_2$ has better efficiency (however, both estimators are consistent). So, one would prefer $\hat{\theta}_2$ if efficiency is the important issue.
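As an optional numerical check (an addition, not part of the original solution), the following minimal numpy sketch approximates both biases and MSEs by Monte Carlo; the values $\theta = 2$ and $n = 10$ are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check of the bias and MSE formulas for Question 1.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000
x = rng.exponential(scale=theta, size=(reps, n))  # Exp(lambda) with theta = 1/lambda
t1 = x.mean(axis=1)                               # theta_hat_1 = sample mean
t2 = x.sum(axis=1) / (n + 1)                      # theta_hat_2
for name, t in [("theta_hat_1", t1), ("theta_hat_2", t2)]:
    print(name, "bias ~", t.mean() - theta, "MSE ~", ((t - theta) ** 2).mean())
# Theory: bias 0 and MSE theta^2/n = 0.4 for theta_hat_1;
# bias -theta/(n+1) ~ -0.182 and MSE theta^2/(n+1) ~ 0.364 for theta_hat_2.
```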

QUESTION 2

Suppose $X_1, X_2, \ldots, X_n$ are independent and identically distributed random variables with the common probability mass function (pmf):

$$p(x) = \theta(1-\theta)^{x-1}$$

for x = 1, 2, . . . and 0 < θ < 1.

(i) Write down the likelihood function of θ.

(ii) Find the maximum likelihood estimator (mle) of θ.

(iii) Find the mle of ψ = 1/θ.

(iv) Determine the bias, variance and the mean squared error of the mle of ψ.

(v) Is the mle of ψ unbiased? Is it consistent?

SOLUTIONS TO QUESTION 2

Suppose $X_1, X_2, \ldots, X_n$ are independent and identically distributed random variables with the common probability mass function (pmf):

$$p(x) = \theta(1-\theta)^{x-1}$$

for $x = 1, 2, \ldots$ and $0 < \theta < 1$. This pmf corresponds to the geometric distribution, so $E(X_i) = 1/\theta$ and $\mathrm{Var}(X_i) = (1-\theta)/\theta^2$.

(i) The likelihood function of $\theta$ is
$$L(\theta) = \prod_{i=1}^n \left\{\theta(1-\theta)^{X_i - 1}\right\} = \theta^n (1-\theta)^{\sum_{i=1}^n X_i - n}.$$

(ii) The log likelihood function of $\theta$ is
$$\log L(\theta) = n\log\theta + \left(\sum_{i=1}^n X_i - n\right)\log(1-\theta).$$
The first and second derivatives of this with respect to $\theta$ are
$$\frac{d\log L(\theta)}{d\theta} = \frac{n}{\theta} - \frac{\sum_{i=1}^n X_i - n}{1-\theta}$$
and
$$\frac{d^2\log L(\theta)}{d\theta^2} = -\frac{n}{\theta^2} - \frac{\sum_{i=1}^n X_i - n}{(1-\theta)^2},$$
respectively. Note that $d\log L(\theta)/d\theta = 0$ if $\theta = n/\sum_{i=1}^n X_i$ and that $d^2\log L(\theta)/d\theta^2 < 0$ for all $0 < \theta < 1$. So, it follows that $\hat{\theta} = n/\sum_{i=1}^n X_i$ is the mle of $\theta$.

(iii) By the invariance principle, the mle of $\psi = 1/\theta$ is $\hat{\psi} = (1/n)\sum_{i=1}^n X_i$.

(iv) The bias of $\hat{\psi}$ is
$$E(\hat{\psi}) - \psi = E\left(\frac{1}{n}\sum_{i=1}^n X_i\right) - \psi = \frac{1}{n}\sum_{i=1}^n E(X_i) - \psi = \frac{1}{n}\sum_{i=1}^n \frac{1}{\theta} - \psi = \psi - \psi = 0.$$

The variance of $\hat{\psi}$ is
$$\mathrm{Var}(\hat{\psi}) = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(X_i) = \frac{1}{n^2}\sum_{i=1}^n \frac{1-\theta}{\theta^2} = \frac{1-\theta}{n\theta^2} = \frac{\psi^2 - \psi}{n}.$$
The mean squared error of $\hat{\psi}$ is
$$\mathrm{MSE}(\hat{\psi}) = \frac{\psi^2 - \psi}{n}.$$

(v) The mle of $\psi$ is unbiased (its bias is zero) and consistent (its mean squared error tends to zero as $n \to \infty$).
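A short simulation check of (iv) and (v), assuming numpy (the choices $\theta = 0.3$ and $n = 20$ are arbitrary); numpy's geometric generator uses the same support $\{1, 2, \ldots\}$ as the pmf above.

```python
import numpy as np

# Monte Carlo check of the bias and MSE of psi_hat = sample mean.
rng = np.random.default_rng(0)
theta, n, reps = 0.3, 20, 200_000
psi = 1.0 / theta                               # psi = E(X_i)
x = rng.geometric(theta, size=(reps, n))        # geometric pmf on {1, 2, ...}
psi_hat = x.mean(axis=1)                        # mle of psi
print("bias ~", psi_hat.mean() - psi)           # theory: 0
print("MSE  ~", ((psi_hat - psi) ** 2).mean())  # theory: (psi^2 - psi)/n
print("theory MSE:", (psi ** 2 - psi) / n)
```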

QUESTION 3

Let X and Y be uncorrelated random variables. Suppose that X has mean 2θ and
variance 4. Suppose that Y has mean θ and variance 2. The parameter θ is unknown.

(i) Compute the bias and mean squared error for each of the following estimators of $\theta$: $\hat{\theta}_1 = (1/4)X + (1/2)Y$ and $\hat{\theta}_2 = X - Y$.

(ii) Which of the two estimators ($\hat{\theta}_1$ or $\hat{\theta}_2$) is better and why?

(iii) Verify that the estimator $\hat{\theta}_c = (c/2)X + (1-c)Y$ is unbiased. Find the value of $c$ which minimizes $\mathrm{Var}(\hat{\theta}_c)$.

SOLUTIONS TO QUESTION 3

Let X and Y be uncorrelated random variables. Suppose that X has mean 2θ and
variance 4. Suppose that Y has mean θ and variance 2. The parameter θ is unknown.

(i) The biases and mean squared errors of $\hat{\theta}_1 = (1/4)X + (1/2)Y$ and $\hat{\theta}_2 = X - Y$ are:
$$\mathrm{Bias}(\hat{\theta}_1) = E(\hat{\theta}_1) - \theta = E\left(\frac{X}{4} + \frac{Y}{2}\right) - \theta = \frac{E(X)}{4} + \frac{E(Y)}{2} - \theta = \frac{2\theta}{4} + \frac{\theta}{2} - \theta = 0,$$
$$\mathrm{Bias}(\hat{\theta}_2) = E(\hat{\theta}_2) - \theta = E(X - Y) - \theta = E(X) - E(Y) - \theta = 2\theta - \theta - \theta = 0,$$
$$\mathrm{MSE}(\hat{\theta}_1) = \mathrm{Var}(\hat{\theta}_1) = \mathrm{Var}\left(\frac{X}{4} + \frac{Y}{2}\right) = \frac{\mathrm{Var}(X)}{16} + \frac{\mathrm{Var}(Y)}{4} = \frac{4}{16} + \frac{2}{4} = \frac{3}{4},$$
and
$$\mathrm{MSE}(\hat{\theta}_2) = \mathrm{Var}(\hat{\theta}_2) = \mathrm{Var}(X - Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) = 4 + 2 = 6.$$

(ii) Both $\hat{\theta}_1$ and $\hat{\theta}_2$ are unbiased. The MSE of $\hat{\theta}_1$ is smaller than the MSE of $\hat{\theta}_2$. So, we prefer $\hat{\theta}_1$.

(iii) The bias of $\hat{\theta}_c$ is
$$E(\hat{\theta}_c) - \theta = E\left(\frac{c}{2}X + (1-c)Y\right) - \theta = \frac{c}{2}E(X) + (1-c)E(Y) - \theta = \frac{c}{2}\cdot 2\theta + (1-c)\theta - \theta = 0,$$
so $\hat{\theta}_c$ is unbiased.
The variance of $\hat{\theta}_c$ is
$$\mathrm{Var}(\hat{\theta}_c) = \mathrm{Var}\left(\frac{c}{2}X + (1-c)Y\right) = \frac{c^2}{4}\mathrm{Var}(X) + (1-c)^2\mathrm{Var}(Y) = \frac{c^2}{4}\cdot 4 + 2(1-c)^2 = c^2 + 2(1-c)^2.$$
Let $g(c) = c^2 + 2(1-c)^2$. Then $g'(c) = 6c - 4 = 0$ if $c = 2/3$. Also $g''(c) = 6 > 0$. So, $c = 2/3$ minimizes the variance of $\hat{\theta}_c$.
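The question specifies only means and variances, not distributions, so the sketch below (an addition) assumes normal $X$ and $Y$ purely to have something to sample from, and checks all three variances numerically.

```python
import numpy as np

# Monte Carlo check of Question 3; normality of X and Y is an assumption
# made only for sampling (the question fixes just means and variances).
rng = np.random.default_rng(0)
theta, reps = 1.5, 200_000
x = rng.normal(2 * theta, 2.0, reps)        # mean 2*theta, variance 4
y = rng.normal(theta, np.sqrt(2.0), reps)   # mean theta, variance 2
t1 = x / 4 + y / 2
t2 = x - y
tc = x / 3 + y / 3                          # (c/2)X + (1-c)Y with c = 2/3
for name, t in [("theta_hat_1", t1), ("theta_hat_2", t2), ("theta_hat_c", tc)]:
    print(name, "bias ~", t.mean() - theta, "var ~", t.var())
# Theory: variances 3/4, 6 and g(2/3) = 2/3; all three biases are 0.
```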

QUESTION 4

Consider the linear regression model with zero intercept:

$$Y_i = \beta X_i + e_i$$

for $i = 1, 2, \ldots, n$, where $e_1, e_2, \ldots, e_n$ are independent and identically distributed normal random variables with zero mean and variance $\sigma^2$ assumed known. Moreover, suppose $X_1, X_2, \ldots, X_n$ are known constants.

(i) Write down the likelihood function of β.

(ii) Derive the maximum likelihood estimator of β.

(iii) Find the bias of the estimator in part (ii). Is the estimator unbiased?

(iv) Find the mean square error of the estimator in part (ii).

(v) Find the exact distribution of the estimator in part (ii).

SOLUTIONS TO QUESTION 4

Consider the linear regression model with zero intercept:

$$Y_i = \beta X_i + e_i$$

for $i = 1, 2, \ldots, n$, where $e_1, e_2, \ldots, e_n$ are independent and identically distributed normal random variables with zero mean and variance $\sigma^2$ assumed known. Moreover, suppose $X_1, X_2, \ldots, X_n$ are known constants.

(i) The likelihood function of $\beta$ is
$$L(\beta) = \frac{1}{(2\pi)^{n/2}\sigma^n}\exp\left\{-\frac{\sum_{i=1}^n e_i^2}{2\sigma^2}\right\} = \frac{1}{(2\pi)^{n/2}\sigma^n}\exp\left\{-\frac{\sum_{i=1}^n (Y_i - \beta X_i)^2}{2\sigma^2}\right\}.$$

(ii) The log likelihood function of $\beta$ is
$$\log L(\beta) = -\frac{n}{2}\log(2\pi) - n\log\sigma - \frac{1}{2\sigma^2}\sum_{i=1}^n (Y_i - \beta X_i)^2.$$
The normal equation is:
$$\frac{\partial \log L(\beta)}{\partial \beta} = \frac{1}{\sigma^2}\sum_{i=1}^n (Y_i - \beta X_i) X_i = 0.$$
Solving this equation gives
$$\hat{\beta} = \frac{\sum_{i=1}^n X_i Y_i}{\sum_{i=1}^n X_i^2}.$$
This is an mle since
$$\frac{\partial^2 \log L(\beta)}{\partial \beta^2} = -\frac{1}{\sigma^2}\sum_{i=1}^n X_i^2 < 0.$$

(iii) The bias of $\hat{\beta}$ is
$$E(\hat{\beta}) - \beta = E\left(\frac{\sum_{i=1}^n X_i Y_i}{\sum_{i=1}^n X_i^2}\right) - \beta = \frac{\sum_{i=1}^n X_i E(Y_i)}{\sum_{i=1}^n X_i^2} - \beta = \frac{\sum_{i=1}^n X_i \cdot \beta X_i}{\sum_{i=1}^n X_i^2} - \beta = 0,$$
so $\hat{\beta}$ is indeed unbiased.

(iv) The variance of $\hat{\beta}$ is
$$\mathrm{Var}(\hat{\beta}) = \mathrm{Var}\left(\frac{\sum_{i=1}^n X_i Y_i}{\sum_{i=1}^n X_i^2}\right) = \frac{\sum_{i=1}^n X_i^2 \mathrm{Var}(Y_i)}{\left(\sum_{i=1}^n X_i^2\right)^2} = \frac{\sigma^2 \sum_{i=1}^n X_i^2}{\left(\sum_{i=1}^n X_i^2\right)^2} = \frac{\sigma^2}{\sum_{i=1}^n X_i^2}.$$
Since $\hat{\beta}$ is unbiased, this is also its mean squared error.

(v) The estimator is a linear combination of independent normal random variables, so it is also normal with mean $\beta$ and variance $\sigma^2/\sum_{i=1}^n X_i^2$.
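A brief simulation of the zero-intercept regression confirms the mean and variance claims; the values of $\beta$, $\sigma$, $n$ and the constants $X_i$ below are arbitrary choices, not from the original text.

```python
import numpy as np

# Monte Carlo check of beta_hat = sum(X_i Y_i) / sum(X_i^2).
rng = np.random.default_rng(0)
beta, sigma, n, reps = 2.5, 1.0, 15, 100_000
x = np.linspace(0.5, 3.0, n)                  # arbitrary known constants X_i
y = beta * x + rng.normal(0.0, sigma, size=(reps, n))
beta_hat = (y @ x) / (x @ x)                  # one estimate per simulated sample
print("mean ~", beta_hat.mean())              # theory: beta = 2.5
print("var  ~", beta_hat.var())               # theory: sigma^2 / sum(X_i^2)
print("theory var:", sigma ** 2 / (x @ x))
```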

QUESTION 5

Suppose $X_1$ and $X_2$ are independent Uniform$[-\theta, \theta]$ random variables. Let $\hat{\theta}_1 = 3\min(|X_1|, |X_2|)$ and $\hat{\theta}_2 = 3\max(X_1, X_2)$ denote possible estimators of $\theta$.

(i) Derive the bias and mean squared error of $\hat{\theta}_1$;

(ii) Derive the bias and mean squared error of $\hat{\theta}_2$;

(iii) Which of the estimators ($\hat{\theta}_1$ and $\hat{\theta}_2$) is better with respect to bias and why?

(iv) Which of the estimators ($\hat{\theta}_1$ and $\hat{\theta}_2$) is better with respect to mean squared error and why?

SOLUTIONS TO QUESTION 5

(i) Let $Z = \min(|X_1|, |X_2|)$. Then
$$F_Z(z) = \Pr[\min(|X_1|, |X_2|) < z] = 1 - \Pr[|X_1| > z, |X_2| > z] = 1 - [1 - \Pr(|X| < z)]^2 = 1 - \left(1 - \frac{z}{\theta}\right)^2 = 1 - \frac{(\theta - z)^2}{\theta^2}$$
and
$$f_Z(z) = \frac{2(\theta - z)}{\theta^2}$$
and
$$E(Z) = \int_0^\theta \frac{2z(\theta - z)}{\theta^2}\,dz = \frac{2}{\theta^2}\left[\frac{\theta z^2}{2} - \frac{z^3}{3}\right]_0^\theta = \frac{\theta}{3}$$
and
$$E(Z^2) = \int_0^\theta \frac{2z^2(\theta - z)}{\theta^2}\,dz = \frac{2}{\theta^2}\left[\frac{\theta z^3}{3} - \frac{z^4}{4}\right]_0^\theta = \frac{\theta^2}{6}.$$
So, $E(\hat{\theta}_1) = 3E(Z) = \theta$, giving $\mathrm{Bias}(\hat{\theta}_1) = 0$, and $\mathrm{MSE}(\hat{\theta}_1) = \mathrm{Var}(3Z) = 9E(Z^2) - \theta^2 = \theta^2/2$.

(ii) Let $Z = \max(X_1, X_2)$. Then
$$F_Z(z) = \Pr[\max(X_1, X_2) < z] = \Pr[X_1 < z, X_2 < z] = [\Pr(X < z)]^2 = \left[\frac{z + \theta}{2\theta}\right]^2$$
and
$$f_Z(z) = \frac{z + \theta}{2\theta^2}$$
and
$$E(Z) = \int_{-\theta}^\theta z\,\frac{z + \theta}{2\theta^2}\,dz = \left[\frac{z^3}{6\theta^2} + \frac{z^2}{4\theta}\right]_{-\theta}^\theta = \frac{\theta}{3}$$
and
$$E(Z^2) = \int_{-\theta}^\theta z^2\,\frac{z + \theta}{2\theta^2}\,dz = \left[\frac{z^4}{8\theta^2} + \frac{z^3}{6\theta}\right]_{-\theta}^\theta = \frac{\theta^2}{3}.$$
So, $E(\hat{\theta}_2) = 3E(Z) = \theta$, giving $\mathrm{Bias}(\hat{\theta}_2) = 0$, and $\mathrm{MSE}(\hat{\theta}_2) = 9E(Z^2) - \theta^2 = 2\theta^2$.

(iii) Both estimators are equally good with respect to bias.

(iv) $\hat{\theta}_1$ has the smaller MSE, since $\theta^2/2 < 2\theta^2$.
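Both closed forms are easy to confirm numerically; the sketch below (an addition, with $\theta = 2$ chosen arbitrarily) draws pairs from Uniform$[-\theta, \theta]$.

```python
import numpy as np

# Monte Carlo check of Question 5 with n = 2 observations per sample.
rng = np.random.default_rng(0)
theta, reps = 2.0, 500_000
x = rng.uniform(-theta, theta, size=(reps, 2))
t1 = 3 * np.abs(x).min(axis=1)                # 3 * min(|X1|, |X2|)
t2 = 3 * x.max(axis=1)                        # 3 * max(X1, X2)
for name, t in [("theta_hat_1", t1), ("theta_hat_2", t2)]:
    print(name, "bias ~", t.mean() - theta, "MSE ~", ((t - theta) ** 2).mean())
# Theory: both biases 0; MSEs theta^2/2 = 2 and 2*theta^2 = 8.
```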

QUESTION 6

Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a distribution specified by the probability density function $f(x) = a^{-1} x^{a^{-1} - 1}$, $0 < x < 1$, where $a > 0$ is an unknown parameter.

(a) Write down the likelihood function of a.


(b) Show that the maximum likelihood estimator of $a$ is $\hat{a} = -\frac{1}{n}\sum_{i=1}^n \log X_i$.

(c) Derive the expected value of $\hat{a}$ in part (b). You may use the fact that $\int_0^1 x^\alpha \log x\,dx = -(\alpha + 1)^{-2}$ without proof.

(d) Derive the variance of $\hat{a}$ in part (b). You may use the fact that $\int_0^1 x^\alpha (\log x)^2\,dx = 2(\alpha + 1)^{-3}$ without proof.

(e) Show that $\hat{a}$ is an unbiased and a consistent estimator for $a$.

(f) Find the maximum likelihood estimator of $\Pr(X < 0.5)$, where $X$ has the probability density function $f(x) = a^{-1} x^{a^{-1} - 1}$, $0 < x < 1$. Justify your answer.

SOLUTIONS TO QUESTION 6

(a) The likelihood function is
$$L(a) = a^{-n}\left(\prod_{i=1}^n x_i\right)^{1/a - 1}.$$

(b) The log likelihood function is
$$\log L(a) = -n\log a + \left(a^{-1} - 1\right)\sum_{i=1}^n \log x_i.$$
The derivative of $\log L$ with respect to $a$ is
$$\frac{d\log L(a)}{da} = -\frac{n}{a} - a^{-2}\sum_{i=1}^n \log x_i.$$
Setting this to zero and solving for $a$, we obtain
$$\hat{a} = -\frac{1}{n}\sum_{i=1}^n \log x_i.$$
This is indeed an mle since
$$\frac{d^2\log L(a)}{da^2} = \frac{n}{a^2} + 2a^{-3}\sum_{i=1}^n \log x_i = \frac{n}{\hat{a}^2} - 2n\hat{a}^{-2} = -\frac{n}{\hat{a}^2} < 0$$
at $a = \hat{a}$.

(c) The expected value is
$$E(\hat{a}) = -\frac{1}{n}\sum_{i=1}^n E(\log x_i) = -\frac{1}{na}\sum_{i=1}^n \int_0^1 \log x \cdot x^{1/a - 1}\,dx = \frac{1}{na}\sum_{i=1}^n a^2 = a.$$

(d) The variance is
$$\mathrm{Var}(\hat{a}) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(\log x_i) = \frac{1}{n^2}\sum_{i=1}^n \left\{E\left[(\log x_i)^2\right] - a^2\right\} = \frac{1}{n^2}\sum_{i=1}^n E\left[(\log x_i)^2\right] - \frac{a^2}{n}$$
$$= \frac{1}{n^2 a}\sum_{i=1}^n \int_0^1 (\log x)^2 x^{1/a - 1}\,dx - \frac{a^2}{n} = \frac{1}{n^2 a}\sum_{i=1}^n 2a^3 - \frac{a^2}{n} = \frac{a^2}{n}.$$

(e) $\mathrm{Bias}(\hat{a}) = 0$ and $\mathrm{MSE}(\hat{a}) = a^2/n \to 0$, so the estimator is unbiased and consistent.

(f) Note that
$$\Pr(X < 0.5) = \int_0^{0.5} a^{-1} x^{a^{-1} - 1}\,dx = \left[x^{a^{-1}}\right]_0^{0.5} = 0.5^{1/a},$$
which is a one-to-one function of $a$ for $a > 0$. By the invariance principle, its maximum likelihood estimator is $0.5^{1/\hat{a}} = 0.5^{-n/\sum_{i=1}^n \log x_i}$.
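For a numerical check (an addition), note that if $U \sim$ Uniform$(0,1)$ then $X = U^a$ has cdf $x^{1/a}$ on $(0,1)$, i.e. exactly the density above, which gives an easy way to sample; parameter values below are arbitrary.

```python
import numpy as np

# Monte Carlo check of Question 6, sampling X = U**a with U ~ Uniform(0,1).
rng = np.random.default_rng(0)
a, n, reps = 1.8, 25, 200_000
x = rng.uniform(size=(reps, n)) ** a
a_hat = -np.log(x).mean(axis=1)               # mle of a
print("bias ~", a_hat.mean() - a)             # theory: 0
print("var  ~", a_hat.var())                  # theory: a^2/n
print("theory var:", a ** 2 / n)
p_hat = 0.5 ** (1.0 / a_hat)                  # invariance: mle of Pr(X < 0.5)
print("mle of Pr(X<0.5) ~", p_hat.mean(), "; true value:", 0.5 ** (1 / a))
```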

QUESTION 7

Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a distribution specified by the probability density function $f(x) = \frac{1}{2a}\exp\left(-\frac{|x|}{a}\right)$, $-\infty < x < \infty$, where $a > 0$ is an unknown parameter.

(a) Write down the likelihood function of a.


(b) Show that the maximum likelihood estimator of $a$ is $\hat{a} = \frac{1}{n}\sum_{i=1}^n |X_i|$.

(c) Derive the expected value of $\hat{a}$ in part (b).

(d) Derive the variance of $\hat{a}$ in part (b).

(e) Show that $\hat{a}$ is an unbiased and consistent estimator for $a$.

SOLUTIONS TO QUESTION 7

Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a distribution specified by the probability density function $f(x) = \frac{1}{2a}\exp\left(-\frac{|x|}{a}\right)$, $-\infty < x < \infty$, where $a > 0$ is an unknown parameter.

(a) The likelihood function of $a$ is
$$L(a) = \prod_{i=1}^n \left[\frac{1}{2a}\exp\left(-\frac{|X_i|}{a}\right)\right] = \frac{1}{(2a)^n}\exp\left(-\frac{1}{a}\sum_{i=1}^n |X_i|\right).$$

(b) The log likelihood function of $a$ is
$$\log L(a) = -n\log(2a) - \frac{1}{a}\sum_{i=1}^n |X_i|.$$
The derivative of $\log L$ with respect to $a$ is
$$\frac{d\log L(a)}{da} = -\frac{n}{a} + \frac{1}{a^2}\sum_{i=1}^n |X_i|.$$
Setting this to zero and solving for $a$, we obtain
$$\hat{a} = \frac{1}{n}\sum_{i=1}^n |X_i|.$$
This is a maximum likelihood estimator of $a$ since
$$\left.\frac{d^2\log L(a)}{da^2}\right|_{a = \hat{a}} = \frac{n}{\hat{a}^2} - \frac{2}{\hat{a}^3}\sum_{i=1}^n |X_i| = \frac{n}{\hat{a}^2}\left[1 - \frac{2}{n\hat{a}}\sum_{i=1}^n |X_i|\right] = \frac{n}{\hat{a}^2}[1 - 2] < 0.$$

(c) The expected value of $\hat{a}$ is
$$E(\hat{a}) = \frac{1}{n}\sum_{i=1}^n E[|X_i|] = \frac{1}{2na}\sum_{i=1}^n \int_{-\infty}^\infty |x|\exp\left(-\frac{|x|}{a}\right)dx = \frac{1}{na}\sum_{i=1}^n \int_0^\infty x\exp\left(-\frac{x}{a}\right)dx$$
$$= \frac{a}{n}\sum_{i=1}^n \int_0^\infty y\exp(-y)\,dy = \frac{a}{n}\sum_{i=1}^n \Gamma(2) = \frac{a}{n}\sum_{i=1}^n 1 = a.$$

(d) The variance of $\hat{a}$ is
$$\mathrm{Var}(\hat{a}) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}[|X_i|] = \frac{1}{n^2}\sum_{i=1}^n \left\{E\left[X_i^2\right] - E[|X_i|]^2\right\}$$
$$= \frac{1}{n^2}\sum_{i=1}^n \left\{\frac{1}{2a}\int_{-\infty}^\infty x^2\exp\left(-\frac{|x|}{a}\right)dx - \left[\frac{1}{2a}\int_{-\infty}^\infty |x|\exp\left(-\frac{|x|}{a}\right)dx\right]^2\right\}$$
$$= \frac{1}{n^2}\sum_{i=1}^n \left\{\frac{1}{a}\int_0^\infty x^2\exp\left(-\frac{x}{a}\right)dx - \left[\frac{1}{a}\int_0^\infty x\exp\left(-\frac{x}{a}\right)dx\right]^2\right\}$$
$$= \frac{1}{n^2}\sum_{i=1}^n \left\{a^2\int_0^\infty y^2\exp(-y)\,dy - a^2\left[\int_0^\infty y\exp(-y)\,dy\right]^2\right\} = \frac{1}{n^2}\sum_{i=1}^n \left\{a^2\Gamma(3) - a^2[\Gamma(2)]^2\right\} = \frac{1}{n^2}\sum_{i=1}^n a^2 = \frac{a^2}{n}.$$

(e) $\mathrm{Bias}(\hat{a}) = 0$ and $\mathrm{MSE}(\hat{a}) = \frac{a^2}{n} \to 0$ as $n \to \infty$, so $\hat{a}$ is an unbiased and a consistent estimator for $a$.
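The density above is the Laplace distribution with location $0$ and scale $a$, so the result is easy to check by simulation (an addition; parameter values arbitrary).

```python
import numpy as np

# Monte Carlo check of Question 7 using numpy's Laplace generator.
rng = np.random.default_rng(0)
a, n, reps = 1.3, 30, 200_000
x = rng.laplace(loc=0.0, scale=a, size=(reps, n))  # density exp(-|x|/a)/(2a)
a_hat = np.abs(x).mean(axis=1)                     # mle of a
print("bias ~", a_hat.mean() - a)                  # theory: 0
print("var  ~", a_hat.var())                       # theory: a^2/n
print("theory var:", a ** 2 / n)
```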

QUESTION 8

Suppose $X_1$ and $X_2$ are independent $\mathrm{Exp}(1/\theta)$ and Uniform$[0, \theta]$ random variables. Let $\hat{\theta} = aX_1 + bX_2$ denote a class of estimators of $\theta$, where $a$ and $b$ are constants.

(i) Determine the bias of $\hat{\theta}$;

(ii) Determine the variance of $\hat{\theta}$;

(iii) Determine the mean squared error of $\hat{\theta}$;

(iv) Determine the condition involving $a$ and $b$ such that $\hat{\theta}$ is unbiased for $\theta$;

(v) Determine the value of $a$ such that $\hat{\theta}$ is unbiased for $\theta$ and has the smallest variance.

SOLUTIONS TO QUESTION 8

Suppose $X_1$ and $X_2$ are independent $\mathrm{Exp}(1/\theta)$ and Uniform$[0, \theta]$ random variables. Let $\hat{\theta} = aX_1 + bX_2$ denote a class of estimators of $\theta$, where $a$ and $b$ are constants.

(i) The bias of $\hat{\theta}$ is
$$\mathrm{Bias}(\hat{\theta}) = E(\hat{\theta}) - \theta = aE(X_1) + bE(X_2) - \theta = \frac{a}{\theta}\int_0^{+\infty} x\exp\left(-\frac{x}{\theta}\right)dx + b\int_0^\theta \frac{x}{\theta}\,dx - \theta$$
$$= a\theta\int_0^{+\infty} y\exp(-y)\,dy + \frac{b}{\theta}\left[\frac{x^2}{2}\right]_0^\theta - \theta = a\theta + \frac{b\theta}{2} - \theta = \left(a + \frac{b}{2} - 1\right)\theta.$$

(ii) The variance of $\hat{\theta}$ is
$$\mathrm{Var}(\hat{\theta}) = a^2\mathrm{Var}(X_1) + b^2\mathrm{Var}(X_2) = a^2\left[\frac{1}{\theta}\int_0^{+\infty} x^2\exp\left(-\frac{x}{\theta}\right)dx - \theta^2\right] + b^2\left[\frac{1}{\theta}\int_0^\theta x^2\,dx - \frac{\theta^2}{4}\right]$$
$$= a^2\left[\theta^2\int_0^{+\infty} y^2\exp(-y)\,dy - \theta^2\right] + b^2\left[\frac{1}{\theta}\left[\frac{x^3}{3}\right]_0^\theta - \frac{\theta^2}{4}\right] = a^2\left[2\theta^2 - \theta^2\right] + b^2\left[\frac{\theta^2}{3} - \frac{\theta^2}{4}\right] = \left(a^2 + \frac{b^2}{12}\right)\theta^2.$$

(iii) The mean squared error of $\hat{\theta}$ is
$$\mathrm{MSE}(\hat{\theta}) = \left(a^2 + \frac{b^2}{12}\right)\theta^2 + \left(a + \frac{b}{2} - 1\right)^2\theta^2 = \left(2a^2 + \frac{b^2}{3} + ab - 2a - b + 1\right)\theta^2.$$

(iv) $\hat{\theta}$ is unbiased if $a + \frac{b}{2} = 1$. In other words, $b = 2(1 - a)$.

(v) If $\hat{\theta}$ is unbiased then its variance is
$$\left[a^2 + \frac{(1-a)^2}{3}\right]\theta^2.$$
We need to minimize this as a function of $a$. Let $g(a) = a^2 + \frac{(1-a)^2}{3}$. The first order derivative is $g'(a) = 2a - \frac{2(1-a)}{3}$. Setting the derivative to zero, we obtain $a = \frac{1}{4}$. The second order derivative is $g''(a) = 2 + \frac{2}{3} > 0$. So, $g(a)$ attains its minimum at $a = \frac{1}{4}$. Hence, the estimator with minimum variance is $\frac{1}{4}X_1 + \frac{3}{2}X_2$.
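A simulation of the optimal combination (an addition; $\theta = 2$ is an arbitrary choice) confirms the unbiasedness condition and the minimized variance $\theta^2/4$.

```python
import numpy as np

# Monte Carlo check of the optimal unbiased estimator (1/4)X1 + (3/2)X2.
rng = np.random.default_rng(0)
theta, reps = 2.0, 500_000
x1 = rng.exponential(scale=theta, size=reps)  # Exp(1/theta), mean theta
x2 = rng.uniform(0.0, theta, size=reps)       # Uniform[0, theta]
t = 0.25 * x1 + 1.5 * x2                      # a = 1/4, b = 2(1 - 1/4) = 3/2
print("bias ~", t.mean() - theta)             # theory: 0
print("var  ~", t.var())                      # theory: (1/16 + (3/2)**2/12) theta^2
print("theory var:", theta ** 2 / 4)
```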

QUESTION 9
Suppose $X_1, \ldots, X_n$ are independent Uniform$[0, \theta]$ random variables. Let $\hat{\theta}_1 = \frac{2(X_1 + \cdots + X_n)}{n}$ and $\hat{\theta}_2 = \max(X_1, \ldots, X_n)$ denote possible estimators of $\theta$.

(i) Derive the bias and mean squared error of $\hat{\theta}_1$;

(ii) Derive the bias and mean squared error of $\hat{\theta}_2$;

(iii) Which of the estimators ($\hat{\theta}_1$ and $\hat{\theta}_2$) is better with respect to bias and why?

(iv) Which of the estimators ($\hat{\theta}_1$ and $\hat{\theta}_2$) is better with respect to mean squared error and why?

SOLUTIONS TO QUESTION 9
Suppose $X_1, \ldots, X_n$ are independent Uniform$[0, \theta]$ random variables. Let $\hat{\theta}_1 = \frac{2(X_1 + \cdots + X_n)}{n}$ and $\hat{\theta}_2 = \max(X_1, \ldots, X_n)$ denote possible estimators of $\theta$.

(i) The bias and mean squared error of $\hat{\theta}_1$ are
$$\mathrm{bias}(\hat{\theta}_1) = E(\hat{\theta}_1) - \theta = \frac{2}{n}E(X_1 + \cdots + X_n) - \theta = \frac{2}{n}\left[\frac{\theta}{2} + \cdots + \frac{\theta}{2}\right] - \theta = \theta - \theta = 0$$
and
$$\mathrm{MSE}(\hat{\theta}_1) = \mathrm{Var}(\hat{\theta}_1) = \frac{4}{n^2}\mathrm{Var}(X_1 + \cdots + X_n) = \frac{4}{n^2}\left[\frac{\theta^2}{12} + \cdots + \frac{\theta^2}{12}\right] = \frac{\theta^2}{3n}.$$

(ii) Let $Z = \hat{\theta}_2$. The cdf and the pdf of $Z$ are
$$F_Z(z) = \Pr(\max(X_1, \ldots, X_n) \le z) = \Pr(X_1 \le z)\cdots\Pr(X_n \le z) = \frac{z}{\theta}\cdots\frac{z}{\theta} = \frac{z^n}{\theta^n}$$
and
$$f_Z(z) = \frac{nz^{n-1}}{\theta^n}.$$

So, the bias and mean squared error of $\hat{\theta}_2$ are
$$\mathrm{bias}(\hat{\theta}_2) = E(Z) - \theta = \frac{n}{\theta^n}\int_0^\theta z^n\,dz - \theta = \frac{n}{\theta^n}\left[\frac{z^{n+1}}{n+1}\right]_0^\theta - \theta = \frac{n\theta}{n+1} - \theta = -\frac{\theta}{n+1}$$
and
$$\mathrm{MSE}(\hat{\theta}_2) = \mathrm{Var}(Z) + \left(-\frac{\theta}{n+1}\right)^2 = E(Z^2) - E^2(Z) + \frac{\theta^2}{(n+1)^2} = \frac{n}{\theta^n}\int_0^\theta z^{n+1}\,dz - \frac{n^2\theta^2}{(n+1)^2} + \frac{\theta^2}{(n+1)^2}$$
$$= \frac{n\theta^2}{n+2} - \frac{n^2\theta^2}{(n+1)^2} + \frac{\theta^2}{(n+1)^2} = \frac{2\theta^2}{(n+1)(n+2)}.$$

(iii) $\hat{\theta}_1$ is better with respect to bias since $\mathrm{bias}(\hat{\theta}_1) = 0$ and $\mathrm{bias}(\hat{\theta}_2) \neq 0$.

(iv) $\hat{\theta}_2$ is better with respect to mean squared error since
$$\frac{2\theta^2}{(n+1)(n+2)} \le \frac{\theta^2}{3n} \iff 6n \le (n+1)(n+2) \iff 6n \le n^2 + 3n + 2 \iff 0 \le n^2 - 3n + 2 \iff 0 \le (n-1)(n-2).$$
Both $\hat{\theta}_1$ and $\hat{\theta}_2$ have equal mean squared errors when $n = 1, 2$.
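A quick Monte Carlo comparison (an addition; $\theta = 3$ and $n = 10$ are arbitrary) illustrates the bias/MSE trade-off between the two estimators.

```python
import numpy as np

# Monte Carlo comparison of 2*Xbar and the sample maximum.
rng = np.random.default_rng(0)
theta, n, reps = 3.0, 10, 200_000
x = rng.uniform(0.0, theta, size=(reps, n))
t1 = 2 * x.mean(axis=1)
t2 = x.max(axis=1)
for name, t in [("theta_hat_1", t1), ("theta_hat_2", t2)]:
    print(name, "bias ~", t.mean() - theta, "MSE ~", ((t - theta) ** 2).mean())
# Theory: bias 0, MSE theta^2/(3n) = 0.3 for theta_hat_1;
# bias -theta/(n+1) ~ -0.273, MSE 2*theta^2/((n+1)(n+2)) ~ 0.136 for theta_hat_2.
```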

QUESTION 10

Let $X_1, X_2, \ldots, X_n$ be a random sample from the uniform $[0, \theta]$ distribution, where $\theta$ is unknown.

(i) Find the expected value and variance of the estimator $\hat{\theta} = 2\bar{X}$, where $\bar{X} = (1/n)\sum_{i=1}^n X_i$.

(ii) Find the expected value of the estimator $\max(X_1, X_2, \ldots, X_n)$, i.e. the largest observation.

(iii) Find the constant $c$ such that $\tilde{\theta} = c\max(X_1, X_2, \ldots, X_n)$ is an unbiased estimator of $\theta$. Also find the variance of $\tilde{\theta}$.

(iv) Compare the mean square errors of $\hat{\theta}$ and $\tilde{\theta}$ and comment.

SOLUTIONS TO QUESTION 10

Let X1 , X2 , . . . , Xn be a random sample from the uniform [0, θ] distribution, where θ is


unknown.

(i) The expected value of $\hat{\theta}$ is:
$$E(\hat{\theta}) = 2E(\bar{X}) = (2/n)E(X_1 + X_2 + \cdots + X_n) = (2/n)\,n(\theta/2) = \theta.$$
The variance of $\hat{\theta}$ is:
$$\mathrm{Var}(\hat{\theta}) = 4\mathrm{Var}(\bar{X}) = (4/n^2)\mathrm{Var}(X_1 + X_2 + \cdots + X_n) = (4/n^2)\,n(\theta^2/12) = \theta^2/(3n).$$

(ii) Let $M = \max(X_1, X_2, \ldots, X_n)$. The cdf of $M$ is
$$F_M(m) = \Pr(X_1 \le m, X_2 \le m, \ldots, X_n \le m) = \Pr(X_1 \le m)\Pr(X_2 \le m)\cdots\Pr(X_n \le m) = (\Pr(X_1 \le m))^n = (m/\theta)^n$$
and so its pdf is $nm^{n-1}/\theta^n$. The expected value of $M$ is
$$E(M) = n\int_0^\theta m^n\,dm/\theta^n = n\theta/(n+1).$$

(iii) Take $c = (n+1)/n$. Then
$$E(cM) = cE(M) = ((n+1)/n)\,n\theta/(n+1) = \theta.$$
So, $\tilde{\theta} = (n+1)M/n$ is an unbiased estimator of $\theta$.
The variance of $M$ is
$$E(M^2) - n^2\theta^2/(n+1)^2 = n\int_0^\theta m^{n+1}\,dm/\theta^n - n^2\theta^2/(n+1)^2 = n\theta^2/\left((n+2)(n+1)^2\right).$$
So, the variance of $\tilde{\theta}$ is $(n+1)^2\mathrm{Var}(M)/n^2 = \theta^2/(n(n+2))$; since $\tilde{\theta}$ is unbiased, this is also its mean squared error.
(iv) The mean square error of $\hat{\theta}$ is $\theta^2/(3n)$.
The mean squared error of $\tilde{\theta}$ is $\theta^2/(n(n+2))$.
The estimator $\tilde{\theta}$ has the smaller mean squared error. So, it should be preferred.
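The comparison in (iv) can also be seen numerically; the sketch below (an addition, with $\theta = 1$ and $n = 8$ chosen arbitrarily) contrasts $2\bar{X}$ with the bias-corrected maximum.

```python
import numpy as np

# Monte Carlo comparison of 2*Xbar and the bias-corrected maximum.
rng = np.random.default_rng(0)
theta, n, reps = 1.0, 8, 200_000
x = rng.uniform(0.0, theta, size=(reps, n))
t_bar = 2 * x.mean(axis=1)                    # theta_hat = 2*Xbar
t_max = (n + 1) / n * x.max(axis=1)           # theta_tilde = (n+1)M/n, unbiased
for name, t in [("2*Xbar", t_bar), ("(n+1)M/n", t_max)]:
    print(name, "bias ~", t.mean() - theta, "MSE ~", ((t - theta) ** 2).mean())
# Theory: MSEs theta^2/(3n) ~ 0.0417 and theta^2/(n(n+2)) = 0.0125; both biases 0.
```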

