16.6
n
Here, L( y | p ) = p( y | p ) = p y (1 p ) n y , where y = 0, 1, , n and 0 < p < 1. So,
y
n
( + ) 1
f ( y , p ) = p y (1 p ) n y
p (1 p )1
()()
y
so that
m(y) = ∫₀¹ C(n, y) [Γ(α + β)/(Γ(α)Γ(β))] p^(y+α−1) (1 − p)^(n−y+β−1) dp
     = C(n, y) [Γ(α + β)/(Γ(α)Γ(β))] · Γ(y + α)Γ(n − y + β)/Γ(n + α + β).
The posterior density of p is then
g*(p | y) = [Γ(n + α + β)/(Γ(y + α)Γ(n − y + β))] p^(y+α−1) (1 − p)^(n−y+β−1),  0 < p < 1.
This is identical to the beta density in Example 16.1 (recall that the sum of n i.i.d.
Bernoulli random variables is binomial with n trials and success probability p).
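As a numerical cross-check (not part of the original solution), the closed-form marginal m(y) can be compared against direct integration of f(y, p) over p; the values y = 4, n = 25, α = 1, β = 3 below are arbitrary illustrative choices:

```python
import math

def log_beta_fn(a, b):
    # log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def marginal_closed_form(y, n, alpha, beta):
    # m(y) = C(n, y) * B(y + alpha, n - y + beta) / B(alpha, beta)
    return math.comb(n, y) * math.exp(
        log_beta_fn(y + alpha, n - y + beta) - log_beta_fn(alpha, beta))

def marginal_by_integration(y, n, alpha, beta, steps=100000):
    # midpoint-rule integration of f(y, p) over 0 < p < 1
    const = math.comb(n, y) / math.exp(log_beta_fn(alpha, beta))
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) / steps
        total += p ** (y + alpha - 1) * (1 - p) ** (n - y + beta - 1)
    return const * total / steps

y, n, alpha, beta = 4, 25, 1, 3   # illustrative values only
print(marginal_closed_form(y, n, alpha, beta))
print(marginal_by_integration(y, n, alpha, beta))
# the marginal probabilities over y = 0, ..., n sum to 1
print(sum(marginal_closed_form(k, n, alpha, beta) for k in range(n + 1)))
```

The two values of m(4) agree to several decimals, and the marginal sums to 1 over its support, confirming the beta-binomial form.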
16.7
a. The Bayes estimator is the mean of the posterior distribution, so with a beta posterior
having α* = y + 1 and β* = n − y + 3, the posterior mean is

p̂_B = (Y + 1)/(n + 4) = Y/(n + 4) + 1/(n + 4).
b. E(p̂_B) = [E(Y) + 1]/(n + 4) = (np + 1)/(n + 4) ≠ p,  V(p̂_B) = V(Y)/(n + 4)² = np(1 − p)/(n + 4)².
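The part b formulas can be verified by computing the mean and variance of p̂_B exactly over the binomial pmf; n = 25 and p = 0.4 are arbitrary illustrative choices:

```python
import math

def binom_pmf(k, n, p):
    # binomial probability C(n, k) p^k (1 - p)^(n - k)
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

n, p = 25, 0.4   # illustrative values only
# p_B = (Y + 1)/(n + 4); take its mean and variance over Y ~ binomial(n, p)
mean = sum(binom_pmf(y, n, p) * (y + 1) / (n + 4) for y in range(n + 1))
var = sum(binom_pmf(y, n, p) * ((y + 1) / (n + 4) - mean) ** 2 for y in range(n + 1))
print(mean, (n * p + 1) / (n + 4))          # matches (np + 1)/(n + 4)
print(var, n * p * (1 - p) / (n + 4) ** 2)  # matches np(1 - p)/(n + 4)^2
```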
16.8
a. p̂_B = (Y + 1)/(n + 2).
Instructor's Solutions Manual
b. E(p̂_B) = [E(Y) + 1]/(n + 2) = (np + 1)/(n + 2) and V(p̂_B) = V(Y)/(n + 2)² = np(1 − p)/(n + 2)².
c. MSE(p̂_B) = V(p̂_B) + [B(p̂_B)]² = np(1 − p)/(n + 2)² + [(np + 1)/(n + 2) − p]²
       = [np(1 − p) + (1 − 2p)²]/(n + 2)².
d. For the unbiased estimator p̂ = Y/n, MSE(p̂) = V(p̂) = p(1 − p)/n. So, holding n fixed, we
must determine the values of p such that

[np(1 − p) + (1 − 2p)²]/(n + 2)² < p(1 − p)/n.
The range of values of p where this is satisfied is solved in Ex. 8.17(c).
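For a concrete picture of part d (Ex. 8.17(c) gives the exact range), the inequality can be scanned numerically over a grid of p; the sample size n = 16 below is an arbitrary illustrative choice:

```python
def mse_bayes(p, n):
    # MSE of p_B = (Y + 1)/(n + 2): [np(1 - p) + (1 - 2p)^2] / (n + 2)^2
    return (n * p * (1 - p) + (1 - 2 * p) ** 2) / (n + 2) ** 2

def mse_unbiased(p, n):
    # MSE of the unbiased estimator Y/n: p(1 - p)/n
    return p * (1 - p) / n

n = 16   # illustrative sample size
grid = [i / 1000 for i in range(1, 1000)]
better = [p for p in grid if mse_bayes(p, n) < mse_unbiased(p, n)]
print(better[0], better[-1])  # endpoints of the region where the Bayes estimator wins
```

The favorable region is an interval symmetric about p = 1/2, consistent with the inequality depending on p only through p(1 − p) and (1 − 2p)².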
16.9
a. With the geometric likelihood p(y | p) = p(1 − p)^(y−1) and the beta(α, β) prior,

m(y) = ∫₀¹ [Γ(α + β)/(Γ(α)Γ(β))] p^α (1 − p)^(β+y−2) dp
     = [Γ(α + β)/(Γ(α)Γ(β))] · Γ(α + 1)Γ(β + y − 1)/Γ(α + β + y).
The posterior density of p is then
g*(p | y) = [Γ(α + β + y)/(Γ(α + 1)Γ(β + y − 1))] p^α (1 − p)^(β+y−2),  0 < p < 1.

This is a beta density with shape parameters α* = α + 1 and β* = β + y − 1.
b. The Bayes estimators are
(1) p̂_B = E(p | Y) = (α + 1)/(α + β + Y),

(2) [p(1 − p)]_B = E(p | Y) − E(p² | Y)
        = (α + 1)/(α + β + Y) − (α + 2)(α + 1)/[(α + β + Y + 1)(α + β + Y)]
        = (α + 1)(β + Y − 1)/[(α + β + Y + 1)(α + β + Y)],

where the second expectation was solved using the result from Ex. 4.200. (Alternately,
E(p² | Y) can be computed directly from the beta(α*, β*) posterior density.)
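A quick Monte Carlo check of the estimator in (2), drawing from the beta(α + 1, β + y − 1) posterior; α = 2, β = 3, y = 6 are arbitrary illustrative values:

```python
import random

alpha, beta, y = 2.0, 3.0, 6               # illustrative values only
a_star, b_star = alpha + 1, beta + y - 1   # posterior beta parameters

# closed-form posterior mean of p(1 - p)
closed = (alpha + 1) * (beta + y - 1) / ((alpha + beta + y + 1) * (alpha + beta + y))

random.seed(1)
draws = (random.betavariate(a_star, b_star) for _ in range(200000))
mc = sum(p * (1 - p) for p in draws) / 200000
print(closed, mc)   # the two values should be close
```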
16.10 a. The joint density of the random sample and λ is given by the product of the
exponential densities multiplied by the gamma prior:
f(y₁, …, yₙ, λ) = [∏_{i=1}^n λ exp(−λyᵢ)] · [λ^(α−1) exp(−λ/β)/(Γ(α)β^α)]
    = [λ^(n+α−1)/(Γ(α)β^α)] exp(−λ(Σ_{i=1}^n yᵢ + 1/β)).
b. m(y₁, …, yₙ) = [1/(Γ(α)β^α)] ∫₀^∞ λ^(n+α−1) exp(−λ(Σ yᵢ + 1/β)) dλ. The integrand resembles
that of a gamma density with shape parameter n + α and scale parameter
(Σ_{i=1}^n yᵢ + 1/β)^(−1) = β/(βΣ yᵢ + 1), so

m(y₁, …, yₙ) = [Γ(n + α)/(Γ(α)β^α)] · [β/(βΣ_{i=1}^n yᵢ + 1)]^(n+α).
d. θ̂_B = E(θ | Y) = E(1/λ | Y) = 1/[β*(α* − 1)] = (βΣ_{i=1}^n Yᵢ + 1)/[β(n + α − 1)],
where α* = n + α and β* = β/(βΣ Yᵢ + 1) are the posterior gamma parameters.
e. The prior mean for 1/λ is E(1/λ) = 1/[β(α − 1)] (again by Ex. 4.111). Thus, θ̂_B can be
written as

θ̂_B = Ȳ · n/(n + α − 1) + [1/(β(α − 1))] · (α − 1)/(n + α − 1),

which is a weighted average of the MLE and the prior mean.
f. We know that Ȳ is unbiased; thus E(Ȳ) = θ = 1/λ. Therefore,

E(θ̂_B) = E(Ȳ) · n/(n + α − 1) + [1/(β(α − 1))] · (α − 1)/(n + α − 1)
       = (1/λ) · n/(n + α − 1) + 1/[β(n + α − 1)].

Therefore, θ̂_B is biased. However, it is asymptotically unbiased since
E(θ̂_B) − 1/λ → 0.
Also,
V(θ̂_B) = V(Ȳ) · [n/(n + α − 1)]² = [1/(nλ²)] · n²/(n + α − 1)² = n/[λ²(n + α − 1)²] → 0.

So, θ̂_B is a consistent estimator of 1/λ (θ̂_B → 1/λ in probability).
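The consistency claim can be illustrated by simulation (a sketch, not part of the original solution); the prior parameters α = 3, β = 2 and the true mean θ = 0.5 below are arbitrary:

```python
import random

def theta_bayes(ys, alpha, beta):
    # Bayes estimator of theta = 1/lambda: (beta * sum(y) + 1) / (beta * (n + alpha - 1))
    n = len(ys)
    return (beta * sum(ys) + 1) / (beta * (n + alpha - 1))

random.seed(2)
alpha, beta = 3.0, 2.0   # hypothetical prior parameters
theta_true = 0.5         # true exponential mean, i.e. 1/lambda
estimates = {}
for n in (10, 100, 10000):
    ys = [random.expovariate(1 / theta_true) for _ in range(n)]
    estimates[n] = theta_bayes(ys, alpha, beta)
    print(n, estimates[n])   # drifts toward theta_true as n grows
```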
a. f(u, λ) = p(u | λ)g(λ) = [(nλ)^u exp(−nλ)/u!] · [λ^(α−1) exp(−λ/β)/(Γ(α)β^α)]
    = [n^u/(u! Γ(α)β^α)] λ^(u+α−1) exp(−λ(nβ + 1)/β).

Thus, the posterior of λ is a gamma density with shape parameter u + α and scale parameter β/(nβ + 1).
b. m(u) = [n^u/(u! Γ(α)β^α)] ∫₀^∞ λ^(u+α−1) exp(−λ(nβ + 1)/β) dλ, so the
solution is m(u) = [n^u Γ(u + α)/(u! Γ(α)β^α)] · [β/(nβ + 1)]^(u+α).
d. λ̂_B = E(λ | U) = α*β* = (U + α)β/(nβ + 1).
e. The prior mean for λ is E(λ) = αβ. From the above,

λ̂_B = (Σ_{i=1}^n Yᵢ + α)β/(nβ + 1) = Ȳ · nβ/(nβ + 1) + αβ · 1/(nβ + 1),

which is a weighted average of the MLE and the prior mean.
f. Since E(Ȳ) = λ, E(λ̂_B) = λ · nβ/(nβ + 1) + αβ/(nβ + 1), so λ̂_B is biased. However, it is
asymptotically unbiased since E(λ̂_B) − λ → 0. Also,

V(λ̂_B) = V(Ȳ) · [nβ/(nβ + 1)]² = (λ/n) · n²β²/(nβ + 1)² = λnβ²/(nβ + 1)² → 0.

So, λ̂_B is a consistent estimator of λ (λ̂_B → λ in probability).
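As a sanity check on the conjugacy claim, the log of p(u | λ)g(λ) should differ from the log of the gamma(u + α, β/(nβ + 1)) density only by a constant in λ; the values n = 10, u = 37, α = 2, β = 1.5 below are arbitrary:

```python
import math

def log_joint(u, lam, n, alpha, beta):
    # log of p(u | lam) g(lam), dropping factors that are free of lam
    return u * math.log(n * lam) - n * lam + (alpha - 1) * math.log(lam) - lam / beta

def log_gamma_pdf(lam, shape, scale):
    # log density of a gamma(shape, scale) variable at lam
    return ((shape - 1) * math.log(lam) - lam / scale
            - math.lgamma(shape) - shape * math.log(scale))

n, u, alpha, beta = 10, 37, 2.0, 1.5   # illustrative values only
shape, scale = u + alpha, beta / (n * beta + 1)
diffs = [log_joint(u, lam, n, alpha, beta) - log_gamma_pdf(lam, shape, scale)
         for lam in (1.0, 2.5, 4.0, 6.0)]
print(diffs)   # all entries agree, confirming the posterior shape and scale
```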
a. f(u, v) = [u^(n/2−1)/(Γ(n/2)Γ(α)2^(n/2) β^α)] v^(n/2+α−1) exp(−v(uβ + 2)/(2β)).
b. m(u) = [u^(n/2−1)/(Γ(n/2)Γ(α)2^(n/2) β^α)] ∫₀^∞ v^(n/2+α−1) exp(−v(uβ + 2)/(2β)) dv, but this integral
resembles that of a gamma density with shape parameter n/2 + α and scale parameter
2β/(uβ + 2). Thus, the solution is

m(u) = [u^(n/2−1) Γ(n/2 + α)/(Γ(n/2)Γ(α)2^(n/2) β^α)] · [2β/(uβ + 2)]^(n/2+α).
d. σ̂²_B = E(σ² | U) = E(1/V | U) = 1/[β*(α* − 1)] = [(Uβ + 2)/(2β)] · 1/(n/2 + α − 1)
       = (Uβ + 2)/[β(n + 2α − 2)].
e. The prior mean for σ² = 1/v is E(1/V) = 1/[β(α − 1)]. From the above,

σ̂²_B = (Uβ + 2)/[β(n + 2α − 2)] = (U/n) · n/(n + 2α − 2) + [1/(β(α − 1))] · 2(α − 1)/(n + 2α − 2),

which is a weighted average of the MLE (U/n) and the prior mean.
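A simulation sketch of this estimator, assuming (as the chi-square kernel in this exercise suggests) that Y₁, …, Yₙ are i.i.d. normal with mean 0 and variance σ², with U = ΣYᵢ²; the values α = 3, β = 1, σ² = 2 are arbitrary illustrative choices:

```python
import random

def sigma2_bayes(u, n, alpha, beta):
    # Bayes estimator of sigma^2: (u * beta + 2) / (beta * (n + 2*alpha - 2))
    return (u * beta + 2) / (beta * (n + 2 * alpha - 2))

rng = random.Random(4)
alpha, beta, sigma2 = 3.0, 1.0, 2.0   # hypothetical prior values and true variance
estimates = {}
for n in (10, 100, 10000):
    u = sum(rng.gauss(0, sigma2 ** 0.5) ** 2 for _ in range(n))
    estimates[n] = sigma2_bayes(u, n, alpha, beta)
    print(n, estimates[n])   # approaches the true variance as n grows
```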
16.16 With y = 4, n = 25, and a beta(1, 1) prior, the posterior distribution for p is beta(5, 22).
Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,22)
[1] 0.06554811
> qbeta(.975,5,22)
[1] 0.3486788
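The qbeta endpoints can be cross-checked without R by a seeded Monte Carlo sample from the beta(5, 22) posterior (a rough check; the sample quantiles agree with qbeta to about three decimals):

```python
import random

random.seed(5)
# 200,000 draws from the beta(5, 22) posterior, sorted for empirical quantiles
draws = sorted(random.betavariate(5, 22) for _ in range(200000))
lower = draws[int(0.025 * len(draws))]
upper = draws[int(0.975 * len(draws))]
print(lower, upper)   # close to qbeta's 0.0655 and 0.3487
```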
16.18 From the data, the posterior distribution for λ is gamma(17.3, .030516). Using R, the
lower and upper endpoints of the 80% credible interval for λ are given by
> qgamma(.10,shape=17.3,scale=.0305167)
[1] 0.3731982
> qgamma(.90,shape=17.3,scale=.0305167)
[1] 0.6957321
The 80% credible interval for λ is (.3732, .6957). To create an 80% credible interval for
1/λ, the endpoints of the previous interval can be inverted:

.3732 < λ < .6957
1/(.3732) > 1/λ > 1/(.6957)

Since 1/(.6957) = 1.4374 and 1/(.3732) = 2.6795, the 80% credible interval for 1/λ is
(1.4374, 2.6795).
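The same kind of seeded Monte Carlo check works for the gamma(17.3, .0305167) posterior and the inverted interval for 1/λ:

```python
import random

random.seed(6)
# 200,000 draws from the gamma(shape = 17.3, scale = .0305167) posterior
draws = sorted(random.gammavariate(17.3, 0.0305167) for _ in range(200000))
lo = draws[int(0.10 * len(draws))]
hi = draws[int(0.90 * len(draws))]
print(lo, hi)          # close to qgamma's 0.3732 and 0.6957
print(1 / hi, 1 / lo)  # inverted interval for 1/lambda, close to (1.4374, 2.6795)
```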
16.19 From the data, the posterior distribution for λ is gamma(176, .0394739). Using R, the
lower and upper endpoints of the 95% credible interval for λ are given by
> qgamma(.025,shape=176,scale=.0394739)
[1] 5.958895
> qgamma(.975,shape=176,scale=.0394739)
[1] 8.010663
The 95% credible interval for λ is (5.959, 8.011).
16.20 With n = 8, u = .8579, and a gamma(5, 2) prior, the posterior distribution for v is
gamma(9, 1.0764842). Using R, the lower and upper endpoints of the 90% credible
interval for v are given by
> qgamma(.05,shape=9,scale=1.0764842)
[1] 5.054338
> qgamma(.95,shape=9,scale=1.0764842)
[1] 15.53867
The 90% credible interval for v is (5.054, 15.539). Similar to Ex. 16.18, the 90% credible
interval for σ² = 1/v is found by inverting the endpoints of the credible interval for v,
given by (.0644, .1979).
16.21 From Ex. 16.15, the posterior distribution of p is beta(5, 24). Now, we can find
P*(p ∈ Ω₀) = P*(p < .3) by (in R):
> pbeta(.3,5,24)
[1] 0.9525731
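The pbeta value can likewise be cross-checked by a seeded Monte Carlo draw from the beta(5, 24) posterior:

```python
import random

random.seed(7)
N = 200000
# Monte Carlo estimate of P*(p < .3) under the beta(5, 24) posterior
prob = sum(1 for _ in range(N) if random.betavariate(5, 24) < 0.3) / N
print(prob)   # close to pbeta(.3, 5, 24) = 0.9526
```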
Using the gamma(17.3, .0305) posterior from Ex. 16.18, P*(λ > .5) is given by:
> 1 - pgamma(.5,shape=17.3,scale=.0305)
[1] 0.5561767