
CHAPTER 6: Point Estimation

Section 6.1

1.
a. We use the sample mean, x̄, to estimate the population mean µ:
   µ̂ = x̄ = Σxᵢ/n = 219.8/27 = 8.1407.

b. We use the sample median, x̃ = 7.7 (the middle observation when arranged in
   ascending order).

c. We use the sample standard deviation,
   s = √s² = √[(1860.94 − (219.8)²/27)/26] = 1.660.

d. With "success" = observation greater than 10, x = # of successes = 4, and
   p̂ = x/n = 4/27 = .1481.

e. We use the sample (std dev)/(mean), or s/x̄ = 1.660/8.1407 = .2039.
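These computations use only the totals n = 27, Σxᵢ = 219.8, and Σxᵢ² = 1860.94 (the raw observations are not reproduced here), so they can be checked with a few lines of Python:

```python
import math

n = 27
sum_x = 219.8        # Σxi, from part (a)
sum_x2 = 1860.94     # Σxi², from part (c)

mean = sum_x / n                              # part (a): estimate of µ
s2 = (sum_x2 - sum_x**2 / n) / (n - 1)        # shortcut formula for s²
s = math.sqrt(s2)                             # part (c)
p_hat = 4 / n                                 # part (d): proportion of observations > 10
cv = s / mean                                 # part (e): coefficient of variation

print(round(mean, 4), round(s, 3), round(p_hat, 4), round(cv, 4))
```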

2.
a. With X = # of T's in the sample, the estimator is p̂ = X/n; x = 10, so
   p̂ = 10/20 = .50.

b. Here, X = # in sample without TI graphing calculator, and x = 16, so
   p̂ = 16/20 = .80.

3.
a. We use the sample mean, x̄ = 1.3481.

b. Because we assume normality, the mean = median, so we also use the sample mean
   x̄ = 1.3481. We could also easily use the sample median.

c. We use the 90th percentile of the sample:
   µ̂ + (1.28)σ̂ = x̄ + 1.28s = 1.3481 + (1.28)(.3385) = 1.7814.

d. Since we can assume normality,
   P(X < 1.5) ≈ P(Z < (1.5 − 1.3481)/.3385) = P(Z < .45) = .6736.

e. The estimated standard error of x̄ is σ̂/√n = s/√n = .3385/√16 = .0846.
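A quick numerical check of parts (c)–(e), using the summary values x̄ = 1.3481, s = .3385, and n = 16 from above (statistics.NormalDist supplies Φ):

```python
import math
from statistics import NormalDist

xbar, s, n = 1.3481, 0.3385, 16

pctl90 = xbar + 1.28 * s                     # part (c): estimated 90th percentile
prob = NormalDist().cdf((1.5 - xbar) / s)    # part (d): P(X < 1.5) ≈ Φ(.45)
se = s / math.sqrt(n)                        # part (e): estimated standard error of x̄

print(round(pctl90, 4), round(prob, 4), round(se, 4))
```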

4.
a. E(X̄ − Ȳ) = E(X̄) − E(Ȳ) = µ₁ − µ₂; x̄ − ȳ = 8.141 − 8.575 = −.434.

b. V(X̄ − Ȳ) = V(X̄) + V(Ȳ) = σ₁²/n₁ + σ₂²/n₂, so σ_(X̄−Ȳ) = √(σ₁²/n₁ + σ₂²/n₂).
   The estimate would be
   s_(X̄−Ȳ) = √(s₁²/n₁ + s₂²/n₂) = √(1.66²/27 + 2.104²/20) = .5687.

c. s₁/s₂ = 1.660/2.104 = .7890.

d. V(X − Y) = V(X) + V(Y) = σ₁² + σ₂² = 1.66² + 2.104² = 7.1824.

5. N = 5,000; T = 1,761,300; ȳ = 374.6; x̄ = 340.6; d̄ = 34.0.
   θ̂₁ = Nx̄ = (5,000)(340.6) = 1,703,000
   θ̂₂ = T − Nd̄ = 1,761,300 − (5,000)(34.0) = 1,591,300
   θ̂₃ = T·(x̄/ȳ) = 1,761,300·(340.6/374.6) = 1,601,438.281
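The three estimates of the population total can be verified directly:

```python
N, T = 5_000, 1_761_300
ybar, xbar, dbar = 374.6, 340.6, 34.0   # sample means from the solution

theta1 = N * xbar            # expansion estimator
theta2 = T - N * dbar        # difference estimator
theta3 = T * (xbar / ybar)   # ratio estimator

print(theta1, theta2, round(theta3, 3))
```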

6.
a. Let yᵢ = ln(xᵢ) for i = 1, …, 31. It is easily verified that the sample mean and
   sample sd of the yᵢ's are ȳ = 5.102 and s_y = .4961. Using the sample mean and
   sample sd to estimate µ and σ, respectively, gives µ̂ = 5.102 and σ̂ = .4961
   (whence σ̂² = s_y² = .2461).

b. E(X) ≡ exp(µ + σ²/2). It is natural to estimate E(X) by using µ̂ and σ̂² in place
   of µ and σ² in this expression:
   Ê(X) = exp(5.102 + .2461/2) = exp(5.225) = 185.87.

7.
a. µ̂ = x̄ = Σxᵢ/n = 1206/10 = 120.6.

b. τ̂ = 10,000µ̂ = 1,206,000.

c. 8 of 10 houses in the sample used at least 100 therms (the "successes"), so
   p̂ = 8/10 = .80.

d. The ordered sample values are 89, 99, 103, 109, 118, 122, 125, 138, 147, 156,
   from which the two middle values are 118 and 122, so
   µ̃̂ = x̃ = (118 + 122)/2 = 120.0.

8.
a. With q denoting the true proportion of defective components,
   q̂ = (# defective in sample)/(sample size) = 12/80 = .150.

b. P(system works) = p², so an estimate of this probability is
   p̂² = (68/80)² = .723.

9.
a. E(X̄) = E(X) = λ, so X̄ is an unbiased estimator for the Poisson parameter λ;
   Σxᵢ = (0)(18) + (1)(37) + … + (7)(1) = 317, and since n = 150,
   λ̂ = x̄ = 317/150 = 2.11.

b. σ_x̄ = σ/√n = √(λ/n), so the estimated standard error is
   √(λ̂/n) = √(2.11/150) = .119.
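A short check of the estimate and its standard error, using the sample total Σxᵢ = 317 as given (the full frequency table is not reproduced here):

```python
import math

n, total = 150, 317          # sample size and Σxi from part (a)
lam_hat = total / n          # unbiased estimate (and mle) of λ
se = math.sqrt(lam_hat / n)  # estimated standard error √(λ̂/n)

print(round(lam_hat, 2), round(se, 3))
```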

10.
a. E(X̄²) = Var(X̄) + [E(X̄)]² = σ²/n + µ², so the bias of the estimator X̄² is σ²/n;
   thus X̄² tends to overestimate µ².

b. E(X̄² − kS²) = E(X̄²) − kE(S²) = µ² + σ²/n − kσ², so with k = 1/n,
   E(X̄² − kS²) = µ².

11.
a. E(X₁/n₁ − X₂/n₂) = (1/n₁)E(X₁) − (1/n₂)E(X₂) = (1/n₁)(n₁p₁) − (1/n₂)(n₂p₂)
   = p₁ − p₂.

b. Var(X₁/n₁ − X₂/n₂) = Var(X₁/n₁) + Var(X₂/n₂) = (1/n₁²)Var(X₁) + (1/n₂²)Var(X₂)
   = (1/n₁²)(n₁p₁q₁) + (1/n₂²)(n₂p₂q₂) = p₁q₁/n₁ + p₂q₂/n₂,
   and the standard error is the square root of this quantity.

c. With p̂₁ = x₁/n₁, q̂₁ = 1 − p̂₁, p̂₂ = x₂/n₂, and q̂₂ = 1 − p̂₂, the estimated
   standard error is √(p̂₁q̂₁/n₁ + p̂₂q̂₂/n₂).

d. (p̂₁ − p̂₂) = 127/200 − 176/200 = .635 − .880 = −.245.

e. √((.635)(.365)/200 + (.880)(.120)/200) = .041
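Parts (d) and (e) numerically:

```python
import math

n1 = n2 = 200
p1, p2 = 127 / n1, 176 / n2                          # sample proportions
diff = p1 - p2                                       # part (d)
se = math.sqrt(p1*(1 - p1)/n1 + p2*(1 - p2)/n2)      # part (e)

print(round(diff, 3), round(se, 3))
```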

12. E[((n₁−1)S₁² + (n₂−1)S₂²)/(n₁+n₂−2)]
    = [(n₁−1)/(n₁+n₂−2)]E(S₁²) + [(n₂−1)/(n₁+n₂−2)]E(S₂²)
    = [(n₁−1)/(n₁+n₂−2)]σ² + [(n₂−1)/(n₁+n₂−2)]σ² = σ².

13. E(X) = ∫₋₁¹ x·½(1 + θx)dx = [x²/4 + θx³/6] evaluated from −1 to 1 = θ/3,
    so E(X̄) = E(X) = θ/3. Then
    θ̂ = 3X̄ ⇒ E(θ̂) = E(3X̄) = 3E(X̄) = 3(θ/3) = θ.

14.
a. min(xᵢ) = 202 and max(xᵢ) = 525, so the estimate of the number of planes
   manufactured is max(xᵢ) − min(xᵢ) + 1 = 525 − 202 + 1 = 324.

b. The estimate will equal the true number of planes manufactured iff min(xᵢ) = α and
   max(xᵢ) = β, i.e., iff the smallest serial number in the population and the largest
   serial number in the population both appear in the sample. The estimator is not
   unbiased. This is because max(xᵢ) never overestimates β and will usually
   underestimate it (unless max(xᵢ) = β), so that E[max(xᵢ)] < β. Similarly,
   E[min(xᵢ)] > α, so E[max(xᵢ) − min(xᵢ) + 1] < β − α + 1. The estimate will usually
   be smaller than β − α + 1, and can never exceed it.
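The bias argued here is easy to see by simulation. A sketch under assumed values α = 1 and β = 1000 (so the true count is 1000) with samples of size 10 — the estimate never exceeds the true count and on average falls short:

```python
import random

random.seed(1)
alpha, beta, n = 1, 1000, 10     # assumed values for illustration
true_count = beta - alpha + 1

estimates = []
for _ in range(2000):
    # draw n distinct serial numbers from the population
    serials = random.sample(range(alpha, beta + 1), n)
    estimates.append(max(serials) - min(serials) + 1)

# The estimate can never exceed the true count, and on average falls short of it.
print(max(estimates), sum(estimates) / len(estimates))
```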

15.
a. E(X²) = 2θ implies that E(X²/2) = θ. Consider θ̂ = ΣXᵢ²/(2n). Then
   E(θ̂) = E(ΣXᵢ²/(2n)) = ΣE(Xᵢ²)/(2n) = Σ2θ/(2n) = 2nθ/(2n) = θ,
   implying that θ̂ is an unbiased estimator for θ.

b. Σxᵢ² = 1490.1058, so θ̂ = 1490.1058/20 = 74.505.

16.
a. E[δX̄ + (1−δ)Ȳ] = δE(X̄) + (1−δ)E(Ȳ) = δµ + (1−δ)µ = µ.

b. Var[δX̄ + (1−δ)Ȳ] = δ²Var(X̄) + (1−δ)²Var(Ȳ) = δ²σ²/m + 4(1−δ)²σ²/n.
   Setting the derivative with respect to δ equal to 0 yields
   2δσ²/m − 8(1−δ)σ²/n = 0, from which δ = 4m/(4m + n).

17.
a. E(p̂) = Σ_{x=0}^∞ [(r−1)/(x+r−1)]·C(x+r−1, x)·p^r·(1−p)^x
        = Σ_{x=0}^∞ [(x+r−2)!/(x!(r−2)!)]·p^r·(1−p)^x
        = p·Σ_{x=0}^∞ C(x+r−2, x)·p^{r−1}·(1−p)^x
        = p·Σ_{x=0}^∞ nb(x; r−1, p) = p.

b. For the given sequence, x = 5, so p̂ = (5−1)/(5+5−1) = 4/9 = .444.

18.
a. f(x; µ, σ²) = (1/√(2πσ²))·e^{−(x−µ)²/2σ²}, so f(µ; µ, σ²) = 1/(σ√(2π)) and
   1/(4n[f(µ)]²) = 2πσ²/(4n) = (π/2)·(σ²/n); since π/2 > 1, Var(X̃) > Var(X̄).

b. f(µ) = 1/π, so Var(X̃) ≈ π²/(4n) = 2.467/n.

19.
a. λ = .5p + .15 ⇒ 2λ = p + .3, so p = 2λ − .3 and p̂ = 2λ̂ − .3 = 2(Y/n) − .3;
   the estimate is 2(20/80) − .3 = .2.

b. E(p̂) = E(2λ̂ − .3) = 2E(λ̂) − .3 = 2λ − .3 = p, as desired.

c. Here λ = .7p + (.3)(.3), so p = (10/7)λ − 9/70 and p̂ = (10/7)(Y/n) − 9/70.

Section 6.2

20.
a. We wish to take the derivative of ln[C(n, x)·p^x·(1−p)^{n−x}], set it equal to
   zero, and solve for p:
   (d/dp)[ln C(n, x) + x ln(p) + (n−x)ln(1−p)] = x/p − (n−x)/(1−p);
   setting this equal to zero and solving for p yields p̂ = x/n.
   For n = 20 and x = 3, p̂ = 3/20 = .15.

b. E(p̂) = E(X/n) = (1/n)E(X) = (1/n)(np) = p; thus p̂ is an unbiased estimator of p.

c. (1 − .15)⁵ = .4437
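The closed-form result p̂ = x/n can be confirmed by maximizing the log likelihood numerically (a crude grid search; the grid resolution is an arbitrary choice):

```python
import math

n, x = 20, 3

def log_lik(p):
    # log of C(n, x) p^x (1 - p)^(n - x)
    return math.log(math.comb(n, x)) + x * math.log(p) + (n - x) * math.log(1 - p)

# grid search over (0, 1)
grid = [i / 10000 for i in range(1, 10000)]
p_mle = max(grid, key=log_lik)

print(p_mle)   # maximized at x/n = .15
```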

21.
a. E(X) = β·Γ(1 + 1/α) and E(X²) = Var(X) + [E(X)]² = β²·Γ(1 + 2/α), so the moment
   estimators α̂ and β̂ are the solution to
   X̄ = β̂·Γ(1 + 1/α̂),  (1/n)ΣXᵢ² = β̂²·Γ(1 + 2/α̂).
   Thus β̂ = X̄/Γ(1 + 1/α̂), so once α̂ has been determined, Γ(1 + 1/α̂) is evaluated
   and β̂ then computed. Since X̄² = β̂²·Γ²(1 + 1/α̂),
   (1/n)ΣXᵢ²/X̄² = Γ(1 + 2/α̂)/Γ²(1 + 1/α̂),
   so this equation must be solved to obtain α̂.

b. From a, (1/20)(16,500)/(28.0)² = 1.05 = Γ(1 + 2/α̂)/Γ²(1 + 1/α̂), so
   Γ²(1 + 1/α̂)/Γ(1 + 2/α̂) = 1/1.05 = .95, and from the hint, 1/α̂ = .2 ⇒ α̂ = 5.
   Then β̂ = x̄/Γ(1.2) = 28.0/Γ(1.2).
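The moment condition and the resulting β̂ can be checked with math.gamma; the numerical value of β̂ = 28.0/Γ(1.2) is evaluated here for reference:

```python
from math import gamma

alpha = 5
ratio = gamma(1 + 2/alpha) / gamma(1 + 1/alpha)**2   # should be close to 1.05
beta_hat = 28.0 / gamma(1 + 1/alpha)                 # β̂ = x̄ / Γ(1.2)

print(round(ratio, 4), round(beta_hat, 2))
```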

22.
a. E(X) = ∫₀¹ x(θ+1)x^θ dx = (θ+1)/(θ+2) = 1 − 1/(θ+2), so the moment estimator θ̂
   is the solution to X̄ = 1 − 1/(θ̂+2), yielding θ̂ = 1/(1−X̄) − 2.
   Since x̄ = .80, θ̂ = 5 − 2 = 3.

b. f(x₁, …, xₙ; θ) = (θ+1)ⁿ(x₁x₂…xₙ)^θ, so the log likelihood is
   n ln(θ+1) + θΣln(xᵢ). Taking d/dθ and equating to 0 yields
   n/(θ+1) = −Σln(xᵢ), so θ̂ = −n/Σln(Xᵢ) − 1. Taking ln(xᵢ) for each given xᵢ
   yields ultimately θ̂ = 3.12.

23. For a single sample from a Poisson distribution,
    f(x₁, …, xₙ; λ) = (e^{−λ}λ^{x₁}/x₁!)·…·(e^{−λ}λ^{xₙ}/xₙ!)
    = e^{−nλ}λ^{Σxᵢ}/(x₁!…xₙ!), so
    ln[f(x₁, …, xₙ; λ)] = −nλ + Σxᵢ·ln(λ) − Σln(xᵢ!). Thus
    (d/dλ)ln[f(x₁, …, xₙ; λ)] = −n + Σxᵢ/λ = 0 ⇒ λ̂ = Σxᵢ/n = x̄. For our problem,
    f(x₁, …, xₙ, y₁, …, yₙ; λ₁, λ₂) is a product of the x sample likelihood and the
    y sample likelihood, implying that λ̂₁ = x̄, λ̂₂ = ȳ, and (by the invariance
    principle) the mle of λ₁ − λ₂ is x̄ − ȳ.

24. We wish to take the derivative of ln[C(x+r−1, x)·p^r·(1−p)^x] with respect to p,
    set it equal to zero, and solve for p:
    (d/dp)[ln C(x+r−1, x) + r ln(p) + x ln(1−p)] = r/p − x/(1−p).
    Setting this equal to zero and solving for p yields p̂ = r/(r+x). This is the
    number of successes over the total number of trials, which is the same estimator
    as for the binomial in Exercise 6.20. The unbiased estimator from Exercise 6.17
    is p̂ = (r−1)/(r+x−1), which is not the same as the maximum likelihood estimator.

25.
a. µ̂ = x̄ = 384.4; s² = 395.16, so
   σ̂² = (1/n)Σ(xᵢ − x̄)² = (9/10)(395.16) = 355.64 and σ̂ = √355.64 = 18.86
   (this is not s).

b. The 95th percentile is µ + 1.645σ, so the mle of this is (by the invariance
   principle) µ̂ + 1.645σ̂ = 415.42.
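Both mle's can be computed from the summary values n = 10, x̄ = 384.4, s² = 395.16:

```python
import math

n, xbar, s2 = 10, 384.4, 395.16

sigma2_mle = (n - 1) / n * s2            # σ̂² = (1/n)Σ(xi − x̄)² = (9/10)s²
sigma_mle = math.sqrt(sigma2_mle)        # 18.86, not s
pctl95_mle = xbar + 1.645 * sigma_mle    # invariance: mle of µ + 1.645σ

print(round(sigma_mle, 2), round(pctl95_mle, 2))
```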

26. The mle of P(X ≤ 400) is (by the invariance principle)
    Φ((400 − µ̂)/σ̂) = Φ((400 − 384.4)/18.86) = Φ(.80) = .7881

27.
a. f(x₁, …, xₙ; α, β) = (x₁x₂…xₙ)^{α−1}·e^{−Σxᵢ/β}/(β^{nα}Γⁿ(α)), so the log
   likelihood is (α−1)Σln(xᵢ) − Σxᵢ/β − nα ln(β) − n ln Γ(α). Equating both d/dα
   and d/dβ to 0 yields
   Σln(xᵢ) − n ln(β) − n·(d/dα)ln Γ(α) = 0 and Σxᵢ/β² − nα/β = 0,
   a very difficult system of equations to solve.

b. From the second equation in a, Σxᵢ/β = nα ⇒ x̄ = αβ = µ, so the mle of µ is
   µ̂ = X̄.

28.
a. The likelihood is
   (x₁/θ)exp(−x₁²/2θ)·…·(xₙ/θ)exp(−xₙ²/2θ) = [(x₁…xₙ)/θⁿ]·exp(−Σxᵢ²/2θ).
   The natural log of the likelihood function is
   ln(x₁…xₙ) − n ln(θ) − Σxᵢ²/(2θ). Taking the derivative wrt θ and equating to 0
   gives −n/θ + Σxᵢ²/(2θ²) = 0, so nθ = Σxᵢ²/2 and θ = Σxᵢ²/(2n). The mle is
   therefore θ̂ = ΣXᵢ²/(2n), which is identical to the unbiased estimator suggested
   in Exercise 15.

b. For x > 0 the cdf of X, F(x; θ) = P(X ≤ x), is equal to 1 − exp(−x²/2θ).
   Equating this to .5 and solving for x gives the median in terms of θ:
   .5 = exp(−x²/2θ) implies that x² = −2θ·ln(.5) = 1.38630θ, so x = µ̃ = √(1.38630θ).
   The mle of µ̃ is therefore (1.38630θ̂)^{1/2}.

29.
a. The joint pdf (likelihood function) is
   f(x₁, …, xₙ; λ, θ) = λⁿe^{−λΣ(xᵢ−θ)} for x₁ ≥ θ, …, xₙ ≥ θ, and 0 otherwise.
   Notice that x₁ ≥ θ, …, xₙ ≥ θ iff min(xᵢ) ≥ θ, and that
   −λΣ(xᵢ − θ) = −λΣxᵢ + nλθ. Thus
   likelihood = λⁿ·exp(−λΣxᵢ)·exp(nλθ) if min(xᵢ) ≥ θ, and 0 if min(xᵢ) < θ.
   Consider maximization wrt θ. Because the exponent nλθ is positive, increasing θ
   will increase the likelihood provided that min(xᵢ) ≥ θ; if we make θ larger than
   min(xᵢ), the likelihood drops to 0. This implies that the mle of θ is
   θ̂ = min(xᵢ). The log likelihood is now n ln(λ) − λΣ(xᵢ − θ̂). Equating the
   derivative wrt λ to 0 and solving yields λ̂ = n/Σ(xᵢ − θ̂) = n/(Σxᵢ − nθ̂).

b. θ̂ = min(xᵢ) = .64 and Σxᵢ = 55.80, so λ̂ = 10/(55.80 − 6.4) = .202
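Part (b) numerically:

```python
n = 10
min_x, sum_x = 0.64, 55.80    # summary values from part (b)

theta_hat = min_x                          # mle of θ
lam_hat = n / (sum_x - n * theta_hat)      # mle of λ

print(theta_hat, round(lam_hat, 3))
```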

30. The likelihood is f(y; n, p) = C(n, y)·p^y·(1−p)^{n−y} where
    p = P(X ≥ 24) = 1 − ∫₀²⁴ λe^{−λx}dx = e^{−24λ}. We know p̂ = y/n, so by the
    invariance principle e^{−24λ̂} = y/n ⇒ λ̂ = −ln(y/n)/24 = .0120 for
    n = 20, y = 15.

Supplementary Exercises

31. P(|X̄ − µ| > ε) = P(X̄ − µ > ε) + P(X̄ − µ < −ε)
    = P((X̄ − µ)/(σ/√n) > ε/(σ/√n)) + P((X̄ − µ)/(σ/√n) < −ε/(σ/√n))
    = P(Z > √n·ε/σ) + P(Z < −√n·ε/σ)
    = ∫ from √n·ε/σ to ∞ of (1/√(2π))e^{−z²/2}dz
      + ∫ from −∞ to −√n·ε/σ of (1/√(2π))e^{−z²/2}dz.
    As n → ∞, both integrals → 0 since
    lim_{c→∞} ∫ from c to ∞ of (1/√(2π))e^{−z²/2}dz = 0.

32.
a. F_Y(y) = P(Y ≤ y) = P(X₁ ≤ y, …, Xₙ ≤ y) = P(X₁ ≤ y)·…·P(Xₙ ≤ y) = (y/θ)ⁿ
   for 0 ≤ y ≤ θ, so f_Y(y) = ny^{n−1}/θⁿ.

b. E(Y) = ∫₀^θ y·(ny^{n−1}/θⁿ)dy = [n/(n+1)]·θ. While θ̂ = Y is not unbiased,
   [(n+1)/n]·Y is, since
   E([(n+1)/n]·Y) = [(n+1)/n]·E(Y) = [(n+1)/n]·[n/(n+1)]·θ = θ,
   so K = (n+1)/n does the trick.

33. Let x₁ = the time until the first birth, x₂ = the elapsed time between the first
    and second births, and so on. Then
    f(x₁, …, xₙ; λ) = λe^{−λx₁}·(2λ)e^{−2λx₂}·…·(nλ)e^{−nλxₙ} = n!·λⁿ·e^{−λΣkxₖ}.
    Thus the log likelihood is ln(n!) + n ln(λ) − λΣkxₖ. Taking d/dλ and equating
    to 0 yields λ̂ = n/Σ_{k=1}^n kxₖ. For the given sample, n = 6, x₁ = 25.2,
    x₂ = 41.7 − 25.2 = 16.5, x₃ = 9.5, x₄ = 4.3, x₅ = 4.0, x₆ = 2.3; so
    Σ_{k=1}^6 kxₖ = (1)(25.2) + (2)(16.5) + … + (6)(2.3) = 137.7 and
    λ̂ = 6/137.7 = .0436.
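A check of Σk·xₖ and λ̂ from the six inter-birth times:

```python
# Inter-birth times from the solution (birth k has rate kλ)
x = [25.2, 16.5, 9.5, 4.3, 4.0, 2.3]
n = len(x)

denom = sum((k + 1) * xk for k, xk in enumerate(x))   # Σ k·xk = 137.7
lam_hat = n / denom

print(round(denom, 1), round(lam_hat, 4))
```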

34. MSE(KS²) = Var(KS²) + [Bias(KS²)]².
    Bias(KS²) = E(KS²) − σ² = Kσ² − σ² = σ²(K − 1), and
    Var(KS²) = K²Var(S²) = K²(E[(S²)²] − [E(S²)]²)
    = K²([(n+1)/(n−1)]σ⁴ − σ⁴) = [2K²/(n−1)]σ⁴, so
    MSE(KS²) = [2K²/(n−1) + (K−1)²]σ⁴. To find the minimizing value of K, take d/dK
    and equate to 0; the result is K = (n−1)/(n+1); thus the estimator which
    minimizes MSE is neither the unbiased estimator (K = 1) nor the mle
    K = (n−1)/n.

35. The table below gives the averages (xᵢ + xⱼ)/2 for each pair i ≤ j:

           23.5   26.3   28.0   28.2   29.4   29.5   30.6   31.6   33.9   49.3
    23.5   23.5   24.9  25.75  25.85  26.45   26.5  27.05  27.55   28.7   36.4
    26.3          26.3  27.15  27.25  27.85   27.9  28.45  28.95   30.1   37.8
    28.0                 28.0   28.1   28.7  28.75   29.3   29.8  30.95  38.65
    28.2                        28.2   28.8  28.85   29.4   29.9  31.05  38.75
    29.4                               29.4  29.45   30.0   30.5  31.65  39.35
    29.5                                      29.5  30.05  30.55   31.7   39.4
    30.6                                             30.6   31.1  32.25  39.95
    31.6                                                    31.6  32.75  40.45
    33.9                                                           33.9   41.6
    49.3                                                                  49.3

    There are 55 averages, so the median is the 28th in order of increasing
    magnitude. Therefore, µ̂ = 29.5.
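The 55 pairwise averages and their median can be generated directly from the ten observations:

```python
from statistics import median

obs = [23.5, 26.3, 28.0, 28.2, 29.4, 29.5, 30.6, 31.6, 33.9, 49.3]

# all 55 pairwise averages (xi + xj)/2 with i <= j, including i = j
avgs = [(obs[i] + obs[j]) / 2 for i in range(len(obs)) for j in range(i, len(obs))]

print(len(avgs), median(avgs))   # the 28th smallest of 55 values is 29.5
```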

36. With Σx = 555.86 and Σx² = 15,490, s² = 2.1570 and s = 1.4687. The |xᵢ − x̃|
    values are, in increasing order, .02, .02, .08, .22, .32, .42, .53, .54, .65,
    .81, .91, 1.15, 1.17, 1.30, 1.54, 1.54, 1.71, 2.35, 2.92, 3.50. The median of
    these values is (.81 + .91)/2 = .86. The estimate based on the resistant
    estimator is then .86/.6745 = 1.275. This estimate is in reasonably close
    agreement with s.
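A check of the resistant estimate from the listed deviations:

```python
from statistics import median

# the |xi − x̃| values listed above
devs = [.02, .02, .08, .22, .32, .42, .53, .54, .65, .81, .91,
        1.15, 1.17, 1.30, 1.54, 1.54, 1.71, 2.35, 2.92, 3.50]

med = median(devs)            # (.81 + .91)/2 = .86
estimate = med / 0.6745       # resistant estimate of σ

print(med, round(estimate, 3))
```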

37. Let c = √((n−1)/2)·Γ((n−1)/2)/Γ(n/2). Then E(cS) = cE(S), and c cancels with
    the two Γ factors and the square root in E(S), leaving just σ. When n = 20,
    c = √9.5·Γ(9.5)/Γ(10). Γ(10) = 9! and Γ(9.5) = (8.5)(7.5)…(1.5)(.5)·Γ(.5), but
    Γ(.5) = √π. Straightforward calculation gives c = 1.0132.
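The constant c can be evaluated directly with math.gamma:

```python
from math import gamma, sqrt, factorial

n = 20
c = sqrt((n - 1) / 2) * gamma((n - 1) / 2) / gamma(n / 2)

print(round(c, 4))   # 1.0132
```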

38.
a. The likelihood is
   Π_{i=1}^n [1/√(2πσ²)·e^{−(xᵢ−µᵢ)²/2σ²}]·[1/√(2πσ²)·e^{−(yᵢ−µᵢ)²/2σ²}]
   = (2πσ²)^{−n}·e^{−[Σ(xᵢ−µᵢ)² + Σ(yᵢ−µᵢ)²]/2σ²}.
   The log likelihood is thus −n ln(2πσ²) − [Σ(xᵢ−µᵢ)² + Σ(yᵢ−µᵢ)²]/(2σ²). Taking
   d/dµᵢ and equating to zero gives µ̂ᵢ = (xᵢ + yᵢ)/2. Substituting these estimates
   of the µᵢ's into the log likelihood gives
   −n ln(2πσ²) − (1/2σ²)[Σ(xᵢ − (xᵢ+yᵢ)/2)² + Σ(yᵢ − (xᵢ+yᵢ)/2)²]
   = −n ln(2πσ²) − (1/4σ²)Σ(xᵢ − yᵢ)².
   Now taking d/dσ², equating to zero, and solving for σ² gives the desired result,
   σ̂² = Σ(xᵢ − yᵢ)²/(4n).

b. E(σ̂²) = (1/4n)·E[Σ(Xᵢ − Yᵢ)²] = (1/4n)·ΣE(Xᵢ − Yᵢ)², but
   E(Xᵢ − Yᵢ)² = V(Xᵢ − Yᵢ) + [E(Xᵢ − Yᵢ)]² = 2σ² + 0 = 2σ². Thus
   E(σ̂²) = (1/4n)·Σ2σ² = (1/4n)(2nσ²) = σ²/2, so the mle is definitely not
   unbiased; the expected value of the estimator is only half the value of what is
   being estimated!
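The conclusion E(σ̂²) = σ²/2 can be illustrated by simulation; a sketch with assumed values n = 50 and σ = 2 (since Xᵢ − Yᵢ has mean 0 and variance 2σ² whatever the µᵢ are, each pair can be simulated with a common mean of 0):

```python
import random

random.seed(42)
n, sigma = 50, 2.0       # assumed values for illustration
trials = 2000

est = []
for _ in range(trials):
    # each (Xi, Yi) pair shares its own mean; Xi − Yi then has variance 2σ²
    d2 = sum((random.gauss(0, sigma) - random.gauss(0, sigma))**2 for _ in range(n))
    est.append(d2 / (4 * n))    # the mle σ̂² = Σ(xi − yi)²/(4n)

avg = sum(est) / trials
print(avg, sigma**2 / 2)   # the average sits near σ²/2, not σ²
```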
