
STAT 601: ESTIMATION AND DECISION THEORY

INTERIM ASSESSMENT 1, 2017/2018

1. a. Let $X_1, X_2, X_3, \ldots, X_n$ be a random sample of iid observations from a distribution with finite mean $\mu$ and variance $\sigma^2$. Let $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$ denote the sample mean.

Suppose $\bar{X}_n$ is an asymptotically normal estimator for $\mu$. Then
$$\frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma} \xrightarrow{d} N(0, 1),$$
and hence $\sqrt{n}(\bar{X}_n - \mu) \xrightarrow{d} N(0, \sigma^2)$. Now let $g : \mathbb{R} \to \mathbb{R}$, $x \mapsto g(x)$, be differentiable at $\mu$ with $g'(\mu) \neq 0$. Define
$$h(x) = \begin{cases} 0 & \text{for } x = \mu, \\ \dfrac{g(x) - g(\mu)}{x - \mu} - g'(\mu) & \text{for } x \neq \mu. \end{cases}$$
Then $g(\bar{X}_n) - g(\mu) = (\bar{X}_n - \mu)g'(\mu) + (\bar{X}_n - \mu)h(\bar{X}_n)$, so
$$\frac{\sqrt{n}[g(\bar{X}_n) - g(\mu)]}{g'(\mu)\sigma} = \frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma} + \frac{\sqrt{n}(\bar{X}_n - \mu)h(\bar{X}_n)}{g'(\mu)\sigma}.$$
Now $\frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma} \xrightarrow{d} N(0, 1)$ and $h(\bar{X}_n) \xrightarrow{p} 0$, since $\bar{X}_n \xrightarrow{p} \mu$ and, by the differentiability of $g$ at $\mu$, $h$ is continuous at $\mu$. Applying Slutsky's theorem, we have
$$\frac{\sqrt{n}[g(\bar{X}_n) - g(\mu)]}{g'(\mu)\sigma} \xrightarrow{d} N(0, 1),$$
and therefore $\sqrt{n}[g(\bar{X}_n) - g(\mu)] \xrightarrow{d} N\big(0, \sigma^2[g'(\mu)]^2\big)$, $g'(\mu) \neq 0$.
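As a numerical sanity check (not part of the original solution), the sketch below simulates this delta-method limit with arbitrary illustrative choices, $X_i \sim$ Exponential(1) and $g(x) = x^2$:

import numpy as np

# Monte Carlo check of the delta method: sqrt(n)*(g(xbar) - g(mu)) should be
# approximately N(0, sigma^2 * g'(mu)^2) for large n.
# Illustrative choices (not from the problem): X ~ Exponential(1), so
# mu = 1 and sigma = 1; g(x) = x^2, so g'(mu) = 2.
rng = np.random.default_rng(0)
n, reps = 2000, 5000

xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (xbar**2 - 1.0**2)

print("sample sd:", z.std())   # close to sigma * |g'(mu)| = 2
print("theory sd:", 2.0)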

b. Consider $T_n = \ln(3\bar{X}_n)$. From (1a) above, we know that
$$\frac{\sqrt{n}[g(\bar{X}_n) - g(\mu)]}{g'(\mu)\sigma} \xrightarrow{d} Z \sim N(0, 1)$$
as $n \to \infty$.

Let $g(x) = \ln(3x)$. Then $g'(x) = \frac{1}{x} \neq 0$ for $x \neq 0$. Now, $E(X_i) = \theta$ and $Var(X_i) = \frac{\theta^2}{12}$, so $\sigma = \sqrt{\frac{\theta^2}{12}}$ and $g(\mu) = \ln(3\theta) \Rightarrow g'(\mu) = \frac{1}{\theta}$. Therefore
$$\sqrt{n}[\ln(3\bar{X}_n) - \ln(3\theta)] \xrightarrow{d} N\Big[0, \frac{\theta^2}{12}\cdot\frac{1}{\theta^2}\Big] = N\Big[0, \frac{1}{12}\Big]$$
as $n \to \infty$. Hence
$$\frac{\sqrt{n}[\ln(3\bar{X}_n) - \ln(3\theta)]}{1/\sqrt{12}} \xrightarrow{d} N[0, 1]$$
as $n \to \infty$. Hence $a = \ln(3\theta)$ and $b = \frac{1}{\sqrt{12}}$.
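A simulation check of this limit follows. The distribution of the $X_i$ is not restated in this extract; the sketch assumes $X_i \sim$ Uniform$(\theta/2, 3\theta/2)$, which has exactly the moments $E(X_i) = \theta$ and $Var(X_i) = \theta^2/12$ used above:

import numpy as np

# Check that sqrt(n)*(ln(3*xbar) - ln(3*theta)) has variance close to 1/12.
# Assumption: X ~ Uniform(theta/2, 3*theta/2), i.e. mean theta and
# variance theta^2/12, matching the moments used in part (b).
rng = np.random.default_rng(1)
theta, n, reps = 2.0, 2000, 5000

x = rng.uniform(theta / 2, 3 * theta / 2, size=(reps, n))
t = np.sqrt(n) * (np.log(3 * x.mean(axis=1)) - np.log(3 * theta))

print("sample var:", t.var())   # close to 1/12 = 0.0833
print("theory var:", 1 / 12)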

2. If $W_n \sim \chi^2_n$, then $E(W_n) = n$ and $Var(W_n) = 2n$, and since $W_n$ can be written as a sum of $n$ iid $\chi^2_1$ variables, each with mean 1 and variance 2, the central limit theorem gives
$$\frac{W_n - E(W_n)}{\sqrt{Var(W_n)}} \xrightarrow{d} N(0, 1)$$
as $n \to \infty$, i.e. $\frac{W_n - n}{\sqrt{2n}} \xrightarrow{d} N(0, 1)$ as $n \to \infty$.
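A minimal simulation of this approximation (the values of $n$ and the replication count are arbitrary):

import numpy as np

# (W_n - n)/sqrt(2n) for W_n ~ chi-squared with n degrees of freedom
# should be approximately standard normal for large n.
rng = np.random.default_rng(2)
n, reps = 1000, 50000

w = rng.chisquare(df=n, size=reps)
z = (w - n) / np.sqrt(2 * n)

print("mean:", z.mean())   # roughly 0
print("var :", z.var())    # roughly 1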

3.
$$P(N = n) = -\frac{p^n}{n\ln(1 - p)}, \qquad n = 1, 2, \ldots, \quad 0 < p < 1.$$
Number of observations = 40, i.e. $N_1, N_2, \ldots, N_{40}$.

(a) Since the distribution is discrete,
$$E(N) = \sum_{n=1}^{\infty} n P(N = n) = \sum_{n=1}^{\infty} n\Big(-\frac{p^n}{n\ln(1 - p)}\Big) = -\frac{1}{\ln(1 - p)}\sum_{n=1}^{\infty} p^n.$$
The sum is a geometric series with $0 < p < 1$, so
$$E(N) = -\frac{1}{\ln(1 - p)}\big[p + p^2 + \cdots\big] = -\frac{1}{\ln(1 - p)}\Big[\frac{p}{1 - p}\Big] = -\frac{p}{(1 - p)\ln(1 - p)} = -p[(1 - p)\ln(1 - p)]^{-1},$$
as required.
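A quick numerical check of this formula (not part of the original solution), truncating the infinite series at a large index:

import numpy as np

# Compare sum_{n>=1} n * P(N = n) with -p / ((1 - p) * ln(1 - p))
# for the logarithmic distribution, truncating the series.
p = 0.8
n = np.arange(1, 10000)
pmf = -(p**n) / (n * np.log(1 - p))

print("truncated series:", np.sum(n * pmf))
print("closed form     :", -p / ((1 - p) * np.log(1 - p)))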
(b) Since the observations $N_1, N_2, \ldots, N_{40}$ are independent, the likelihood function is given by
$$L(p) = \prod_{i=1}^{40} P(N = N_i) = \prod_{i=1}^{40}\Big(-\frac{p^{N_i}}{N_i\ln(1 - p)}\Big) = \frac{p^{\sum_{i=1}^{40} N_i}}{\prod_{i=1}^{40} N_i\,[-\ln(1 - p)]^{40}},$$
using $(-1)^{40} = 1$. So the log-likelihood function is
$$\ln L(p) = \sum_{i=1}^{40} N_i \ln p - \ln\Big(\prod_{i=1}^{40} N_i\Big) - 40\ln[-\ln(1 - p)].$$
So, finding the score function, we have
$$S(p) = \frac{\partial \ln L(p)}{\partial p} = \frac{\sum_{i=1}^{40} N_i}{p} + \frac{40}{(1 - p)\ln(1 - p)},$$
since $\frac{\partial}{\partial p}\big\{-40\ln[-\ln(1 - p)]\big\} = \frac{40}{(1 - p)\ln(1 - p)}$. To find the maximum likelihood estimator of $p$, i.e. $\hat{p}$, we solve $S(p) = 0$ for $p$. Therefore $\hat{p}$ satisfies the equation
$$\frac{\sum_{i=1}^{40} N_i}{\hat{p}} + \frac{40}{(1 - \hat{p})\ln(1 - \hat{p})} = 0.$$

(c) Fisher information:
$$i(p) = E\Big[\Big(\frac{\partial \ln L(p)}{\partial p}\Big)^2\Big] = -E\Big[\frac{\partial^2 \ln L(p)}{\partial p^2}\Big].$$
Given
$$\frac{\partial^2 \ln L(p)}{\partial p^2} = -\frac{\sum_{i=1}^{40} N_i}{p^2} + \frac{40[1 + \ln(1 - p)]}{[(1 - p)\ln(1 - p)]^2},$$
we have, using $E(N_i)$ from part (a),
$$i(p) = \frac{\sum_{i=1}^{40} E(N_i)}{p^2} - \frac{40[1 + \ln(1 - p)]}{[(1 - p)\ln(1 - p)]^2} = -\frac{40p}{p^2(1 - p)\ln(1 - p)} - \frac{40[1 + \ln(1 - p)]}{[(1 - p)\ln(1 - p)]^2}$$
$$= -\frac{40}{p(1 - p)\ln(1 - p)} - \frac{40[1 + \ln(1 - p)]}{[(1 - p)\ln(1 - p)]^2} = \frac{-40(1 - p)\ln(1 - p) - 40p[1 + \ln(1 - p)]}{p[(1 - p)\ln(1 - p)]^2}$$
$$= \frac{-40\ln(1 - p) + 40p\ln(1 - p) - 40p - 40p\ln(1 - p)}{p[(1 - p)\ln(1 - p)]^2} = \frac{40[-p - \ln(1 - p)]}{p[(1 - p)\ln(1 - p)]^2}.$$
Now a 95% confidence interval for $p$ is given by $\hat{p} \pm z_{\alpha/2}\sqrt{Var(\hat{p})}$, with $\alpha = 0.05$. Assuming that the maximum likelihood estimator is asymptotically efficient, the variance of its limiting distribution attains the Cramér–Rao lower bound. Since $i(p)$ above is the Fisher information of the whole sample of 40 observations,
$$Var(\hat{p}) = \frac{1}{i(\hat{p})} = \frac{\hat{p}(1 - \hat{p})^2[\ln(1 - \hat{p})]^2}{40[-\ln(1 - \hat{p}) - \hat{p}]}.$$
Therefore the 95% confidence interval is
$$\hat{p} \pm 1.96\sqrt{\frac{\hat{p}(1 - \hat{p})^2[\ln(1 - \hat{p})]^2}{40[-\ln(1 - \hat{p}) - \hat{p}]}}.$$
With $\hat{p} = 0.8$, we have
$$0.8 \pm 1.96\sqrt{\frac{0.8(0.2)^2[\ln(0.2)]^2}{40[-\ln(0.2) - 0.8]}} = 0.8 \pm 1.96\sqrt{\frac{0.0829}{32.378}} = 0.8 \pm 0.0992,$$
i.e. $(0.7008, 0.8992)$ is the 95% confidence interval.
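The interval can be reproduced numerically as follows (a minimal sketch of the computation above):

import numpy as np

# 95% CI using Var(p_hat) = 1 / i(p_hat), where i(p) is the Fisher
# information of all 40 observations derived in part (c).
p_hat, m, z = 0.8, 40, 1.96

info = m * (-p_hat - np.log(1 - p_hat)) / (p_hat * ((1 - p_hat) * np.log(1 - p_hat))**2)
se = np.sqrt(1 / info)
print("95%% CI: (%.4f, %.4f)" % (p_hat - z * se, p_hat + z * se))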
(d) ITERATION
$$\sum_{i=1}^{40} N_i = 100.$$
We use Newton's method with starting value $\hat{p}^{(0)} = 0.75$. Demonstrating one iteration of this method, the update is
$$\hat{p}^{(i+1)} = \hat{p}^{(i)} - \frac{S(\hat{p}^{(i)})}{\ell''(\hat{p}^{(i)})}, \qquad \ell''(p) = \frac{\partial^2 \ln L(p)}{\partial p^2}.$$
So for $i = 0$, with $\hat{p}^{(0)} = 0.75$ and the score function from part (b),
$$S(\hat{p}^{(0)}) = \frac{\partial \ln L(p)}{\partial p}\bigg|_{p = \hat{p}^{(0)}} = \frac{100}{0.75} + \frac{40}{0.25\ln(0.25)} = 17.9177.$$

Again, from part (c),
$$\ell''(\hat{p}^{(0)}) = \frac{\partial^2 \ln L(p)}{\partial p^2}\bigg|_{p = \hat{p}^{(0)}} = -\frac{100}{0.75^2} + \frac{40[1 + \ln(0.25)]}{[0.25\ln(0.25)]^2} = -306.4212.$$
Therefore,
$$\hat{p}^{(1)} = 0.75 - \frac{17.9177}{-306.4212} = 0.8085.$$
We end here for the first iteration of this method.
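For completeness, a short sketch iterating Newton's method to convergence from the same starting value; the first printed step reproduces the hand computation above:

import numpy as np

# Newton's method for the score equation of part (d):
# p_{k+1} = p_k - S(p_k) / l''(p_k), starting from 0.75.
total, m = 100, 40

def score(p):
    return total / p + m / ((1 - p) * np.log(1 - p))

def d2loglik(p):
    # second derivative of the log-likelihood, from part (c)
    return -total / p**2 + m * (1 + np.log(1 - p)) / ((1 - p) * np.log(1 - p))**2

p = 0.75
for k in range(20):
    step = score(p) / d2loglik(p)
    p -= step
    print(f"iteration {k + 1}: p_hat = {p:.4f}")
    if abs(step) < 1e-10:
        break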
