
CHAPTER 6

Section 6.1

1.
a. We use the sample mean, x̄, to estimate the population mean µ:
   µ̂ = x̄ = Σxᵢ/n = 219.80/27 = 8.1407

b. We use the sample median, x̃ = 7.7 (the middle observation when arranged in
   ascending order).

c. We use the sample standard deviation,
   s = √s² = √[(1860.94 − (219.80)²/27)/26] = 1.660

d. With “success” = observation greater than 10, x = # of successes = 4, and
   p̂ = x/n = 4/27 = .1481

e. We use the sample (std dev)/(mean): s/x̄ = 1.660/8.1407 = .2039
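The same estimates can be reproduced directly from the summary statistics quoted above. A minimal sketch (the totals Σxᵢ = 219.80 and Σxᵢ² = 1860.94 and the count of observations above 10 are taken from the solution; no other data are assumed):

```python
import math

# Summary statistics from Exercise 1 (n = 27 observations)
n, sum_x, sum_x2, num_over_10 = 27, 219.80, 1860.94, 4

mean = sum_x / n                          # (a) µ̂ = x̄
s2 = (sum_x2 - sum_x**2 / n) / (n - 1)    # sample variance s²
s = math.sqrt(s2)                         # (c) sample standard deviation
p_hat = num_over_10 / n                   # (d) proportion of observations > 10
cv = s / mean                             # (e) coefficient of variation

print(round(mean, 4), round(s, 3), round(p_hat, 4), round(cv, 4))
# ≈ 8.1407, 1.66, 0.1481, 0.2039
```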

2.
a. With X = # of T’s in the sample, the estimator is p̂ = X/n; x = 10, so p̂ = 10/20 = .50.

b. Here, X = # in sample without TI graphing calculator, and x = 16, so p̂ = 16/20 = .80.


3.
a. We use the sample mean, x̄ = 1.3481.

b. Because we assume normality, the mean = median, so we also use the sample mean
   x̄ = 1.3481. We could also easily use the sample median.

c. We use the 90th percentile of the sample:
   µ̂ + (1.28)σ̂ = x̄ + 1.28s = 1.3481 + (1.28)(.3385) = 1.7814.

d. Since we can assume normality,
   P(X < 1.5) ≈ P(Z < (1.5 − x̄)/s) = P(Z < (1.5 − 1.3481)/.3385) = P(Z < .45) = .6736

e. The estimated standard error of x̄ is σ̂/√n = s/√n = .3385/√16 = .0846

4.
a. E(X̄ − Ȳ) = E(X̄) − E(Ȳ) = µ₁ − µ₂; x̄ − ȳ = 8.141 − 8.575 = −.434

b. V(X̄ − Ȳ) = V(X̄) + V(Ȳ) = σ₁²/n₁ + σ₂²/n₂, so
   σ_{X̄−Ȳ} = √V(X̄ − Ȳ) = √(σ₁²/n₁ + σ₂²/n₂). The estimate would be
   s_{X̄−Ȳ} = √(s₁²/n₁ + s₂²/n₂) = √(1.66²/27 + 2.104²/20) = .5687.

c. s₁/s₂ = 1.660/2.104 = .7890

d. V(X − Y) = V(X) + V(Y) = σ₁² + σ₂² = 1.66² + 2.104² = 7.1824

5. N = 5,000; T = 1,761,300; ȳ = 374.6; x̄ = 340.6; d̄ = 34.0
   θ̂₁ = Nx̄ = (5,000)(340.6) = 1,703,000
   θ̂₂ = T − Nd̄ = 1,761,300 − (5,000)(34.0) = 1,591,300
   θ̂₃ = T(x̄/ȳ) = 1,761,300(340.6/374.6) = 1,601,438.281
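A minimal sketch reproducing the three estimates of the population total from the quantities listed above (N, T, and the sample means are taken from the solution; no other data are assumed):

```python
N, T = 5_000, 1_761_300
y_bar, x_bar, d_bar = 374.6, 340.6, 34.0

theta1 = N * x_bar             # expansion estimator: N·x̄
theta2 = T - N * d_bar         # difference estimator: T − N·d̄
theta3 = T * (x_bar / y_bar)   # ratio estimator: T·(x̄/ȳ)

print(theta1, theta2, round(theta3, 3))   # 1703000.0 1591300.0 1601438.281
```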


6.
a. Let yᵢ = ln(xᵢ) for i = 1, …, 31. It is easily verified that the sample mean and sample sd
   of the yᵢ’s are ȳ = 5.102 and s_y = .4961. Using the sample mean and sample sd to
   estimate µ and σ, respectively, gives µ̂ = 5.102 and σ̂ = .4961 (whence σ̂² = s_y² = .2461).

b. E(X) = exp(µ + σ²/2). It is natural to estimate E(X) by using µ̂ and σ̂² in place of µ and
   σ² in this expression:
   Ê(X) = exp(5.102 + .2461/2) = exp(5.225) = 185.87

7.
a. µ̂ = x̄ = Σxᵢ/n = 1206/10 = 120.6

b. τ̂ = 10,000µ̂ = 1,206,000

c. 8 of 10 houses in the sample used at least 100 therms (the “successes”), so
   p̂ = 8/10 = .80.

d. The ordered sample values are 89, 99, 103, 109, 118, 122, 125, 138, 147, 156, from which
   the two middle values are 118 and 122, so the estimate of the population median is
   x̃ = (118 + 122)/2 = 120.0

8.
a. With q denoting the true proportion of defective components,
   q̂ = (# defective in sample)/(sample size) = 12/80 = .150

b. P(system works) = p², so an estimate of this probability is p̂² = (68/80)² = .723


9.
a. E(X̄) = µ = E(X) = λ, so X̄ is an unbiased estimator for the Poisson parameter λ.
   Σxᵢ = (0)(18) + (1)(37) + … + (7)(1) = 317, and since n = 150,
   λ̂ = x̄ = 317/150 = 2.11.

b. σ_X̄ = σ/√n = √(λ/n), so the estimated standard error is √(λ̂/n) = √(2.11/150) = .119
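A minimal sketch of the two calculations, using only the totals quoted above (Σxᵢ = 317 over n = 150 observations):

```python
import math

n, sum_x = 150, 317

lam_hat = sum_x / n               # λ̂ = x̄, the unbiased estimator (and MLE) of λ
se_hat = math.sqrt(lam_hat / n)   # estimated standard error √(λ̂/n)

print(round(lam_hat, 2), round(se_hat, 3))   # 2.11 0.119
```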

10.
a. E(X̄²) = Var(X̄) + [E(X̄)]² = σ²/n + µ², so the bias of the estimator X̄² is σ²/n; thus X̄²
   tends to overestimate µ².

b. E(X̄² − kS²) = E(X̄²) − kE(S²) = σ²/n + µ² − kσ², so with k = 1/n, E(X̄² − kS²) = µ².

11.
a. E(X₁/n₁ − X₂/n₂) = (1/n₁)E(X₁) − (1/n₂)E(X₂) = (1/n₁)(n₁p₁) − (1/n₂)(n₂p₂) = p₁ − p₂.

b. Var(X₁/n₁ − X₂/n₂) = Var(X₁/n₁) + Var(X₂/n₂) = (1/n₁)²Var(X₁) + (1/n₂)²Var(X₂)
   = (1/n₁²)(n₁p₁q₁) + (1/n₂²)(n₂p₂q₂) = p₁q₁/n₁ + p₂q₂/n₂, and the standard error is the
   square root of this quantity.

c. With p̂₁ = x₁/n₁, q̂₁ = 1 − p̂₁, p̂₂ = x₂/n₂, q̂₂ = 1 − p̂₂, the estimated standard error is
   √(p̂₁q̂₁/n₁ + p̂₂q̂₂/n₂).

d. (p̂₁ − p̂₂) = 127/200 − 176/200 = .635 − .880 = −.245


e. √[(.635)(.365)/200 + (.880)(.120)/200] = .041
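Parts d and e can be verified directly from the two counts quoted above (127 and 176 successes in samples of 200 each); a minimal sketch:

```python
import math

x1, n1 = 127, 200
x2, n2 = 176, 200

p1, p2 = x1 / n1, x2 / n2
q1, q2 = 1 - p1, 1 - p2

diff = p1 - p2                                   # (d) p̂₁ − p̂₂
se = math.sqrt(p1 * q1 / n1 + p2 * q2 / n2)      # (e) estimated standard error

print(round(diff, 3), round(se, 3))              # -0.245 0.041
```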

12. E[((n₁ − 1)S₁² + (n₂ − 1)S₂²)/(n₁ + n₂ − 2)]
    = [(n₁ − 1)/(n₁ + n₂ − 2)]E(S₁²) + [(n₂ − 1)/(n₁ + n₂ − 2)]E(S₂²)
    = [(n₁ − 1)/(n₁ + n₂ − 2)]σ² + [(n₂ − 1)/(n₁ + n₂ − 2)]σ² = σ².

13. E(X) = ∫₋₁¹ x·½(1 + θx) dx = [x²/4 + θx³/6] evaluated from −1 to 1 = θ/3.
    Since E(X) = θ/3, take θ̂ = 3X̄; then E(θ̂) = E(3X̄) = 3E(X̄) = 3(θ/3) = θ.

14.
a. min(xi ) = 202 and max(xi ) = 525, so the estimate of the number of planes manufactured is
max(xi ) - min(xi ) + 1 = 525 – 202 + 1 = 324.

b. The estimate will equal the true number of planes manufactured iff min(xi) = α and
   max(xi) = β, i.e., iff the smallest serial number in the population and the largest serial
   number in the population both appear in the sample. The estimator is not unbiased: max(xi)
   never overestimates β and will usually underestimate it (unless max(xi) = β), so that
   E[max(xi)] < β. Similarly, E[min(xi)] > α, so E[max(xi) − min(xi) + 1] < β − α + 1; the
   estimate will usually be smaller than β − α + 1, and can never exceed it.

15.
a. E(X²) = 2θ implies that E(X²/2) = θ. Consider θ̂ = ΣXᵢ²/(2n). Then
   E(θ̂) = E[ΣXᵢ²/(2n)] = ΣE(Xᵢ²)/(2n) = Σ2θ/(2n) = 2nθ/(2n) = θ, implying that θ̂ is an
   unbiased estimator for θ.

b. Σxᵢ² = 1490.1058, so θ̂ = 1490.1058/20 = 74.505


16.
a. E[δX̄ + (1 − δ)Ȳ] = δE(X̄) + (1 − δ)E(Ȳ) = δµ + (1 − δ)µ = µ

b. Var[δX̄ + (1 − δ)Ȳ] = δ²Var(X̄) + (1 − δ)²Var(Ȳ) = δ²σ²/m + 4(1 − δ)²σ²/n.
   Setting the derivative with respect to δ equal to 0 yields
   2δσ²/m − 8(1 − δ)σ²/n = 0, from which δ = 4m/(4m + n).

17.
a. E(p̂) = Σ_{x=0}^∞ [(r − 1)/(x + r − 1)]·C(x + r − 1, x)·p^r·(1 − p)^x
   = p·Σ_{x=0}^∞ [(x + r − 2)!/(x!(r − 2)!)]·p^(r−1)·(1 − p)^x
   = p·Σ_{x=0}^∞ C(x + r − 2, x)·p^(r−1)·(1 − p)^x = p·Σ_{x=0}^∞ nb(x; r − 1, p) = p.

b. For the given sequence, x = 5, so p̂ = (5 − 1)/(5 + 5 − 1) = 4/9 = .444

18.
a. f(x; µ, σ²) = [1/(√(2π)·σ)]·exp[−(x − µ)²/(2σ²)], so f(µ; µ, σ²) = 1/(√(2π)·σ) and
   1/(4n[f(µ)]²) = 2πσ²/(4n) = (π/2)·(σ²/n); since π/2 > 1, Var(X̃) > Var(X̄).

b. f(µ) = 1/π, so Var(X̃) ≈ 1/(4n[f(µ)]²) = π²/(4n) = 2.467/n.


19.
a. λ = .5p + .15 ⇒ 2λ = p + .3, so p = 2λ − .3 and p̂ = 2λ̂ − .3 = 2(Y/n) − .3;
   the estimate is 2(20/80) − .3 = .2.

b. E(p̂) = E(2λ̂ − .3) = 2E(λ̂) − .3 = 2λ − .3 = p, as desired.

c. Here λ = .7p + (.3)(.3), so p = (10/7)λ − 9/70 and p̂ = (10/7)(Y/n) − 9/70.

Section 6.2

20.
a. We wish to take the derivative of ln[C(n, x)·p^x·(1 − p)^(n−x)] with respect to p, set it
   equal to zero, and solve for p:
   d/dp {ln[C(n, x)] + x·ln(p) + (n − x)·ln(1 − p)} = x/p − (n − x)/(1 − p).
   Setting this equal to zero and solving for p yields p̂ = x/n. For n = 20 and x = 3,
   p̂ = 3/20 = .15.

b. E(p̂) = E(X/n) = (1/n)E(X) = (1/n)(np) = p; thus p̂ is an unbiased estimator of p.

c. (1 − .15)⁵ = .4437


21.
a. E(X) = β·Γ(1 + 1/α) and E(X²) = Var(X) + [E(X)]² = β²·Γ(1 + 2/α), so the moment
   estimators α̂ and β̂ are the solution to
   X̄ = β̂·Γ(1 + 1/α̂)  and  (1/n)ΣXᵢ² = β̂²·Γ(1 + 2/α̂).
   Thus β̂ = X̄/Γ(1 + 1/α̂), so once α̂ has been determined, Γ(1 + 1/α̂) is evaluated and β̂ then
   computed. Since X̄² = β̂²·Γ²(1 + 1/α̂),
   [(1/n)ΣXᵢ²]/X̄² = Γ(1 + 2/α̂)/Γ²(1 + 1/α̂),
   so this equation must be solved to obtain α̂.

b. From a, (1/20)(16,500)/(28.0)² = 1.05 = Γ(1 + 2/α̂)/Γ²(1 + 1/α̂), so
   Γ²(1 + 1/α̂)/Γ(1 + 2/α̂) = 1/1.05 = .95, and from the hint, 1/α̂ = .2 ⇒ α̂ = 5.
   Then β̂ = x̄/Γ(1.2) = 28.0/Γ(1.2).
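The moment equation in part b can also be solved numerically rather than with the hint’s table. A rough sketch, assuming scipy is available (the totals n = 20, ΣXᵢ² = 16,500, and x̄ = 28.0 are taken from the exercise):

```python
from math import gamma
from scipy.optimize import brentq

n, sum_x2, x_bar = 20, 16_500, 28.0
ratio = (sum_x2 / n) / x_bar**2                 # (1/n)ΣXi² / x̄² ≈ 1.05

# Moment equation: Γ(1 + 2/α) / Γ(1 + 1/α)² = ratio; solve for the shape α.
def moment_eq(alpha):
    return gamma(1 + 2 / alpha) / gamma(1 + 1 / alpha) ** 2 - ratio

alpha_hat = brentq(moment_eq, 1.0, 20.0)        # bracket chosen by inspection
beta_hat = x_bar / gamma(1 + 1 / alpha_hat)     # β̂ = x̄ / Γ(1 + 1/α̂)

print(round(alpha_hat, 2), round(beta_hat, 2))  # ≈ 5 and ≈ 28.0/Γ(1.2) ≈ 30.5
```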

22.
a. E(X) = ∫₀¹ x·(θ + 1)x^θ dx = (θ + 1)/(θ + 2) = 1 − 1/(θ + 2), so the moment estimator θ̂ is
   the solution to X̄ = 1 − 1/(θ̂ + 2), yielding θ̂ = 1/(1 − X̄) − 2. Since x̄ = .80,
   θ̂ = 5 − 2 = 3.

b. f(x₁, …, xₙ; θ) = (θ + 1)ⁿ(x₁x₂…xₙ)^θ, so the log likelihood is n·ln(θ + 1) + θ·Σln(xᵢ).
   Taking d/dθ and equating to 0 yields n/(θ + 1) = −Σln(xᵢ), so θ̂ = −n/Σln(Xᵢ) − 1.
   Taking ln(xᵢ) for each given xᵢ yields ultimately θ̂ = 3.12.
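A sketch contrasting the two estimators. The original data set of the exercise is not reproduced in this solution, so the sample below is purely a hypothetical placeholder used to illustrate the formulas:

```python
import math

x = [0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77]   # hypothetical data
n = len(x)
x_bar = sum(x) / n

theta_mom = 1 / (1 - x_bar) - 2                        # moment estimator: 1/(1 − x̄) − 2
theta_mle = -n / sum(math.log(xi) for xi in x) - 1     # MLE: −n/Σ ln(xi) − 1

print(round(theta_mom, 2), round(theta_mle, 2))
```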


23. For a single sample from a Poisson distribution,
    f(x₁, …, xₙ; λ) = [e^(−λ)λ^(x₁)/x₁!]·…·[e^(−λ)λ^(xₙ)/xₙ!] = e^(−nλ)λ^(Σxᵢ)/(x₁!…xₙ!), so
    ln[f(x₁, …, xₙ; λ)] = −nλ + Σxᵢ·ln(λ) − Σln(xᵢ!). Thus
    (d/dλ)ln[f(x₁, …, xₙ; λ)] = −n + Σxᵢ/λ = 0 ⇒ λ̂ = Σxᵢ/n = x̄. For our problem,
    f(x₁, …, xₙ, y₁, …, yₙ; λ₁, λ₂) is a product of the x sample likelihood and the y sample
    likelihood, implying that λ̂₁ = x̄, λ̂₂ = ȳ, and (by the invariance principle) the mle of
    (λ₁ − λ₂) is x̄ − ȳ.

24. We wish to take the derivative of ln[C(x + r − 1, x)·p^r·(1 − p)^x] with respect to p, set
    it equal to zero, and solve for p:
    d/dp {ln[C(x + r − 1, x)] + r·ln(p) + x·ln(1 − p)} = r/p − x/(1 − p).
    Setting this equal to zero and solving for p yields p̂ = r/(r + x). This is the number of
    successes over the total number of trials, which is the same estimator as for the binomial
    in Exercise 6.20. The unbiased estimator from Exercise 6.17 is p̂ = (r − 1)/(r + x − 1),
    which is not the same as the maximum likelihood estimator.

25.
a. µ̂ = x̄ = 384.4; s² = 395.16, so
   σ̂² = (1/n)Σ(xᵢ − x̄)² = (9/10)(395.16) = 355.64 and σ̂ = √355.64 = 18.86 (this is not s).

b. The 95th percentile is µ + 1.645σ, so the mle of this is (by the invariance principle)
   µ̂ + 1.645σ̂ = 415.42.

26. The mle of P(X ≤ 400) is (by the invariance principle)
    Φ[(400 − µ̂)/σ̂] = Φ[(400 − 384.4)/18.86] = Φ(.80) = .7881
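A short sketch that evaluates the invariance-principle estimates from Exercises 25–26, using the summary values quoted above (n = 10, x̄ = 384.4, s² = 395.16):

```python
import math
from statistics import NormalDist

n, x_bar, s2 = 10, 384.4, 395.16

sigma2_hat = (n - 1) / n * s2            # MLE of σ² divides by n, not n − 1
sigma_hat = math.sqrt(sigma2_hat)        # ≈ 18.86 (this is not s)

pct95_hat = x_bar + 1.645 * sigma_hat              # MLE of the 95th percentile
prob_hat = NormalDist(x_bar, sigma_hat).cdf(400)   # MLE of P(X ≤ 400)

print(round(sigma_hat, 2), round(pct95_hat, 2), round(prob_hat, 4))
# ≈ 18.86, ≈ 415.4; the probability differs slightly from the table value .7881
# above because here z is not rounded before evaluating Φ.
```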


27.
a. f(x₁, …, xₙ; α, β) = (x₁x₂…xₙ)^(α−1)·e^(−Σxᵢ/β)/[β^(nα)·Γⁿ(α)], so the log likelihood is
   (α − 1)Σln(xᵢ) − Σxᵢ/β − nα·ln(β) − n·lnΓ(α). Equating both d/dα and d/dβ to 0 yields
   Σln(xᵢ) − n·ln(β) − n·(d/dα)lnΓ(α) = 0  and  Σxᵢ/β² − nα/β = 0,
   a very difficult system of equations to solve.

b. From the second equation in a, Σxᵢ/β = nα ⇒ x̄ = αβ = µ, so the mle of µ is µ̂ = X̄.

28.
a. The likelihood is
   (x₁/θ)·exp(−x₁²/2θ)·…·(xₙ/θ)·exp(−xₙ²/2θ) = [(x₁…xₙ)/θⁿ]·exp(−Σxᵢ²/2θ).
   The natural log of the likelihood function is ln(x₁…xₙ) − n·ln(θ) − Σxᵢ²/2θ. Taking the
   derivative wrt θ and equating to 0 gives −n/θ + Σxᵢ²/2θ² = 0, so nθ = Σxᵢ²/2 and
   θ = Σxᵢ²/2n. The mle is therefore θ̂ = ΣXᵢ²/2n, which is identical to the unbiased
   estimator suggested in Exercise 15.

b. For x > 0 the cdf of X is F(x; θ) = P(X ≤ x) = 1 − exp(−x²/2θ). Equating this to .5 and
   solving for x gives the median in terms of θ: .5 = exp(−x²/2θ) implies that
   ln(.5) = −x²/2θ, so x = µ̃ = √(1.38630·θ). The mle of µ̃ is therefore (1.38630·θ̂)^(1/2).


29.
a. The joint pdf (likelihood function) is
   f(x₁, …, xₙ; λ, θ) = λⁿ·e^(−λΣ(xᵢ−θ)) for x₁ ≥ θ, …, xₙ ≥ θ, and 0 otherwise.
   Notice that x₁ ≥ θ, …, xₙ ≥ θ iff min(xᵢ) ≥ θ, and that −λΣ(xᵢ − θ) = −λΣxᵢ + nλθ.
   Thus the likelihood is λⁿ·exp(−λΣxᵢ)·exp(nλθ) when min(xᵢ) ≥ θ, and 0 when min(xᵢ) < θ.
   Consider maximization wrt θ. Because the exponent nλθ is positive, increasing θ will
   increase the likelihood provided that min(xᵢ) ≥ θ; if we make θ larger than min(xᵢ), the
   likelihood drops to 0. This implies that the mle of θ is θ̂ = min(xᵢ).
   The log likelihood is now n·ln(λ) − λΣ(xᵢ − θ̂). Equating the derivative wrt λ to 0 and
   solving yields λ̂ = n/Σ(xᵢ − θ̂) = n/(Σxᵢ − nθ̂).

b. θ̂ = min(xᵢ) = .64 and Σxᵢ = 55.80, so λ̂ = 10/(55.80 − 6.4) = .202
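A minimal sketch of part b, using only the summaries quoted above (n = 10, min(xᵢ) = .64, Σxᵢ = 55.80):

```python
n, min_x, sum_x = 10, 0.64, 55.80

theta_hat = min_x                          # MLE of the threshold parameter θ
lam_hat = n / (sum_x - n * theta_hat)      # MLE of the rate: n / (Σxi − nθ̂)

print(theta_hat, round(lam_hat, 3))        # 0.64 0.202
```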

30. The likelihood is f(y; n, p) = C(n, y)·p^y·(1 − p)^(n−y), where
    p = P(X ≥ 24) = 1 − ∫₀²⁴ λe^(−λx) dx = e^(−24λ). We know p̂ = y/n, so by the invariance
    principle e^(−24λ̂) = y/n ⇒ λ̂ = −ln(y/n)/24 = .0120 for n = 20, y = 15.
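A one-line check of the invariance-principle estimate, with n = 20 and y = 15 as in the solution:

```python
import math

n, y = 20, 15
lam_hat = -math.log(y / n) / 24   # λ̂ = −ln(y/n)/24

print(round(lam_hat, 4))          # ≈ 0.0120
```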

Supplementary Exercises

31. P(|X̄ − µ| > ε) = P(X̄ − µ > ε) + P(X̄ − µ < −ε)
    = P[(X̄ − µ)/(σ/√n) > ε/(σ/√n)] + P[(X̄ − µ)/(σ/√n) < −ε/(σ/√n)]
    = P(Z > √n·ε/σ) + P(Z < −√n·ε/σ)
    = ∫_{√n·ε/σ}^{∞} (1/√(2π))·e^(−z²/2) dz + ∫_{−∞}^{−√n·ε/σ} (1/√(2π))·e^(−z²/2) dz.
    As n → ∞, both integrals → 0 since lim_{c→∞} ∫_{c}^{∞} (1/√(2π))·e^(−z²/2) dz = 0.


32.
a. F_Y(y) = P(Y ≤ y) = P(X₁ ≤ y, …, Xₙ ≤ y) = P(X₁ ≤ y)…P(Xₙ ≤ y) = (y/θ)ⁿ for 0 ≤ y ≤ θ,
   so f_Y(y) = ny^(n−1)/θⁿ.

b. E(Y) = ∫₀^θ y·[ny^(n−1)/θⁿ] dy = [n/(n + 1)]·θ. While θ̂ = Y is not unbiased, [(n + 1)/n]·Y
   is, since E{[(n + 1)/n]·Y} = [(n + 1)/n]·E(Y) = [(n + 1)/n]·[n/(n + 1)]·θ = θ, so
   K = (n + 1)/n does the trick.

33. Let x₁ = the time until the first birth, x₂ = the elapsed time between the first and second
    births, and so on. Then
    f(x₁, …, xₙ; λ) = λe^(−λx₁)·(2λ)e^(−2λx₂)·…·(nλ)e^(−nλxₙ) = n!·λⁿ·e^(−λΣkx_k).
    Thus the log likelihood is ln(n!) + n·ln(λ) − λΣkx_k. Taking d/dλ and equating to 0 yields
    λ̂ = n/Σ_{k=1}^n kx_k. For the given sample, n = 6, x₁ = 25.2, x₂ = 41.7 − 25.2 = 16.5,
    x₃ = 9.5, x₄ = 4.3, x₅ = 4.0, x₆ = 2.3; so
    Σ_{k=1}^6 kx_k = (1)(25.2) + (2)(16.5) + … + (6)(2.3) = 137.7 and λ̂ = 6/137.7 = .0436.

34. MSE(KS²) = Var(KS²) + [Bias(KS²)]².
    Bias(KS²) = E(KS²) − σ² = Kσ² − σ² = σ²(K − 1), and
    Var(KS²) = K²·Var(S²) = K²(E[(S²)²] − [E(S²)]²) = K²[(n + 1)σ⁴/(n − 1) − σ⁴]
    = 2K²σ⁴/(n − 1). Thus MSE(KS²) = [2K²/(n − 1) + (K − 1)²]·σ⁴. To find the minimizing value
    of K, take d/dK and equate to 0; the result is K = (n − 1)/(n + 1). Thus the estimator
    which minimizes MSE is neither the unbiased estimator (K = 1) nor the mle (K = (n − 1)/n).


35. The pairwise averages (xᵢ + xⱼ)/2, i ≤ j, for the ordered sample
    23.5, 26.3, 28.0, 28.2, 29.4, 29.5, 30.6, 31.6, 33.9, 49.3 are:

           23.5   26.3   28.0   28.2   29.4   29.5   30.6   31.6   33.9   49.3
    23.5   23.5   24.9   25.75  25.85  26.45  26.5   27.05  27.55  28.7   36.4
    26.3          26.3   27.15  27.25  27.85  27.9   28.45  28.95  30.1   37.8
    28.0                 28.0   28.1   28.7   28.75  29.3   29.8   30.95  38.65
    28.2                        28.2   28.8   28.85  29.4   29.9   31.05  38.75
    29.4                               29.4   29.45  30.0   30.5   31.65  39.35
    29.5                                      29.5   30.05  30.55  31.7   39.4
    30.6                                             30.6   31.1   32.25  39.95
    31.6                                                    31.6   32.75  40.45
    33.9                                                           33.9   41.6
    49.3                                                                  49.3

    There are 55 averages, so the median is the 28th in order of increasing magnitude.
    Therefore, µ̂ = 29.5.
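A short sketch that generates the 55 pairwise averages and takes their median (the ten observations are those listed above):

```python
from statistics import median

x = [23.5, 26.3, 28.0, 28.2, 29.4, 29.5, 30.6, 31.6, 33.9, 49.3]

# All pairwise averages (x_i + x_j)/2 with i ≤ j, including each value with itself.
pairwise_avgs = [(x[i] + x[j]) / 2 for i in range(len(x)) for j in range(i, len(x))]

print(len(pairwise_avgs), median(pairwise_avgs))   # 55 averages, median = 29.5
```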

36. With Σx = 555.86 and Σx² = 15,490, s = √s² = √2.1570 = 1.4687. The |xᵢ − x̃|’s are, in
    increasing order, .02, .02, .08, .22, .32, .42, .53, .54, .65, .81, .91, 1.15, 1.17, 1.30,
    1.54, 1.54, 1.71, 2.35, 2.92, 3.50. The median of these values is (.81 + .91)/2 = .86.
    The estimate based on the resistant estimator is then .86/.6745 = 1.275. This estimate is
    in reasonably close agreement with s.

37. Let c = Γ((n − 1)/2)/[Γ(n/2)·√(2/(n − 1))]. Then E(cS) = c·E(S), and c cancels with the two
    Γ factors and the square root in E(S), leaving just σ. When n = 20,
    c = Γ(9.5)/[Γ(10)·√(2/19)]. Γ(10) = 9! and Γ(9.5) = (8.5)(7.5)…(1.5)(.5)·Γ(.5), but
    Γ(.5) = √π. Straightforward calculation gives c = 1.0132.
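The constant can also be evaluated directly with a gamma-function routine rather than expanding Γ(9.5) by hand; a minimal sketch:

```python
from math import gamma, sqrt

def unbiasing_constant(n):
    """c such that E(cS) = σ for a normal random sample of size n."""
    return gamma((n - 1) / 2) / (gamma(n / 2) * sqrt(2 / (n - 1)))

print(round(unbiasing_constant(20), 4))   # 1.0132
```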


38.
a. The likelihood is
   Π_{i=1}^n [1/√(2πσ²)]·e^(−(xᵢ−µᵢ)²/2σ²)·[1/√(2πσ²)]·e^(−(yᵢ−µᵢ)²/2σ²)
   = (2πσ²)^(−n)·e^(−[Σ(xᵢ−µᵢ)² + Σ(yᵢ−µᵢ)²]/2σ²).
   The log likelihood is thus −n·ln(2πσ²) − [Σ(xᵢ − µᵢ)² + Σ(yᵢ − µᵢ)²]/2σ². Taking d/dµᵢ and
   equating to zero gives µ̂ᵢ = (xᵢ + yᵢ)/2. Substituting these estimates of the µᵢ’s into the
   log likelihood gives
   −n·ln(2πσ²) − (1/2σ²)[Σ(xᵢ − (xᵢ + yᵢ)/2)² + Σ(yᵢ − (xᵢ + yᵢ)/2)²]
   = −n·ln(2πσ²) − (1/2σ²)·½Σ(xᵢ − yᵢ)².
   Now taking d/dσ², equating to zero, and solving for σ² gives the desired result,
   σ̂² = Σ(xᵢ − yᵢ)²/4n.

b. E(σ̂²) = (1/4n)·E[Σ(Xᵢ − Yᵢ)²] = (1/4n)·ΣE(Xᵢ − Yᵢ)², but
   E(Xᵢ − Yᵢ)² = V(Xᵢ − Yᵢ) + [E(Xᵢ − Yᵢ)]² = 2σ² + 0 = 2σ². Thus
   E(σ̂²) = (1/4n)·Σ2σ² = (1/4n)(2nσ²) = σ²/2, so the mle is definitely not unbiased; the
   expected value of the estimator is only half the value of what is being estimated!
