
Chapter 16: Introduction to Bayesian Methods of Inference

16.1 Refer to Table 16.1.
a. (10, 30)
b. n = 25
c. (10, 30), n = 25
d. Yes
e. Posterior for the beta(1, 3) prior.

16.2 a.-d. Refer to Section 16.2.

16.3 a.-e. Applet exercise, so answers vary.

16.4 a.-d. Applet exercise, so answers vary.

16.5 It should take more trials with a beta(10, 30) prior.

16.6 Here, $L(y \mid p) = p(y \mid p) = \binom{n}{y} p^y (1-p)^{n-y}$, where $y = 0, 1, \ldots, n$ and $0 < p < 1$. So,
$$f(y, p) = \binom{n}{y} p^y (1-p)^{n-y} \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha-1} (1-p)^{\beta-1}$$
so that
$$m(y) = \binom{n}{y} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \int_0^1 p^{y+\alpha-1} (1-p)^{n-y+\beta-1}\, dp = \binom{n}{y} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \cdot \frac{\Gamma(y+\alpha)\,\Gamma(n-y+\beta)}{\Gamma(n+\alpha+\beta)}.$$
The posterior density of $p$ is then
$$g^*(p \mid y) = \frac{\Gamma(n+\alpha+\beta)}{\Gamma(y+\alpha)\,\Gamma(n-y+\beta)}\, p^{y+\alpha-1} (1-p)^{n-y+\beta-1}, \qquad 0 < p < 1.$$
This is the same beta density as in Example 16.1 (recall that the sum of n i.i.d. Bernoulli random variables is binomial with n trials and success probability p).
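
Since R is used later in this chapter, a quick numerical check of this conjugacy result can be run as below; the prior parameters (α = 2, β = 3) and data (n = 10, y = 4) are arbitrary illustrative choices, not part of the exercise.

alpha <- 2; beta <- 3; n <- 10; y <- 4
# unnormalized posterior: binomial likelihood times beta prior
joint <- function(p) dbinom(y, n, p) * dbeta(p, alpha, beta)
m.y <- integrate(joint, 0, 1)$value      # the marginal m(y)
p.grid <- c(.2, .4, .6)
joint(p.grid) / m.y                      # normalized posterior at a few points
dbeta(p.grid, y + alpha, n - y + beta)   # agrees: posterior is beta(y + alpha, n - y + beta)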

16.7 a. The Bayes estimator is the mean of the posterior distribution. With a beta(1, 3) prior, the posterior is a beta density with $\alpha^* = y + 1$ and $\beta^* = n - y + 3$, so the posterior mean is
$$\hat{p}_B = \frac{Y+1}{(Y+1) + (n-Y+3)} = \frac{Y+1}{n+4}.$$
b. $E(\hat{p}_B) = \dfrac{E(Y)+1}{n+4} = \dfrac{np+1}{n+4} \neq p$ and $V(\hat{p}_B) = \dfrac{V(Y)}{(n+4)^2} = \dfrac{np(1-p)}{(n+4)^2}$.
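
As a quick sanity check (not part of the original solution), the posterior mean can be computed numerically for illustrative values, say n = 25 and y = 4:

n <- 25; y <- 4
# posterior mean by numerical integration against the beta(y + 1, n - y + 3) density
integrate(function(p) p * dbeta(p, y + 1, n - y + 3), 0, 1)$value  # 0.1724138
(y + 1) / (n + 4)                                                  # same value: (Y + 1)/(n + 4)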

16.8 a. From Ex. 16.6, the Bayes estimator for p is $\hat{p}_B = E(p \mid Y) = \dfrac{Y+1}{n+2}$.
b. The beta(1, 1) prior is the uniform distribution on the interval (0, 1).


c. We know that $\hat{p} = Y/n$ is an unbiased estimator for p. However, for the Bayes estimator,
$$E(\hat{p}_B) = \frac{E(Y)+1}{n+2} = \frac{np+1}{n+2} \neq p \qquad \text{and} \qquad V(\hat{p}_B) = \frac{V(Y)}{(n+2)^2} = \frac{np(1-p)}{(n+2)^2}.$$
Thus,
$$\mathrm{MSE}(\hat{p}_B) = V(\hat{p}_B) + [B(\hat{p}_B)]^2 = \frac{np(1-p)}{(n+2)^2} + \left(\frac{np+1}{n+2} - p\right)^2 = \frac{np(1-p) + (1-2p)^2}{(n+2)^2}.$$
d. For the unbiased estimator $\hat{p}$, $\mathrm{MSE}(\hat{p}) = V(\hat{p}) = p(1-p)/n$. So, holding n fixed, we must determine the values of p such that
$$\frac{np(1-p) + (1-2p)^2}{(n+2)^2} < \frac{p(1-p)}{n}.$$
The range of values of p where this is satisfied is solved in Ex. 8.17(c); a numerical sketch is given below.
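
In R, the comparison can be run over a grid of p values; n = 25 is an arbitrary illustrative choice, and the endpoints reported by range() approximate the crossover points found in Ex. 8.17(c).

n <- 25
p <- seq(0, 1, by = .0001)
mse.bayes    <- (n * p * (1 - p) + (1 - 2 * p)^2) / (n + 2)^2
mse.unbiased <- p * (1 - p) / n
range(p[mse.bayes < mse.unbiased])  # interval of p where the Bayes estimator has smaller MSE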

16.9 a. Here, $L(y \mid p) = p(y \mid p) = (1-p)^{y-1} p$, where $y = 1, 2, \ldots$ and $0 < p < 1$. So,
$$f(y, p) = (1-p)^{y-1} p \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha-1} (1-p)^{\beta-1}$$
so that
$$m(y) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \int_0^1 p^{\alpha} (1-p)^{\beta+y-2}\, dp = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \cdot \frac{\Gamma(\alpha+1)\,\Gamma(\beta+y-1)}{\Gamma(\alpha+\beta+y)}.$$
The posterior density of $p$ is then
$$g^*(p \mid y) = \frac{\Gamma(\alpha+\beta+y)}{\Gamma(\alpha+1)\,\Gamma(\beta+y-1)}\, p^{\alpha} (1-p)^{\beta+y-2}, \qquad 0 < p < 1.$$
This is a beta density with shape parameters $\alpha^* = \alpha + 1$ and $\beta^* = \beta + y - 1$.
b. The Bayes estimators are
$$(1)\quad \hat{p}_B = E(p \mid Y) = \frac{\alpha+1}{\alpha+\beta+Y},$$
$$(2)\quad [\widehat{p(1-p)}]_B = E(p \mid Y) - E(p^2 \mid Y) = \frac{\alpha+1}{\alpha+\beta+Y} - \frac{(\alpha+2)(\alpha+1)}{(\alpha+\beta+Y+1)(\alpha+\beta+Y)} = \frac{(\alpha+1)(\beta+Y-1)}{(\alpha+\beta+Y+1)(\alpha+\beta+Y)},$$
where the second expectation was found using the result from Ex. 4.200. (Alternately, the answer could be found by solving $E[p(1-p) \mid Y] = \int_0^1 p(1-p)\, g^*(p \mid Y)\, dp$.)
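
A numerical check of part (a), analogous to the one after Ex. 16.6, is sketched below with arbitrary illustrative values α = 2, β = 4, y = 3; note that R's dgeom counts failures, so y trials correspond to y - 1 failures before the first success.

alpha <- 2; beta <- 4; y <- 3
# unnormalized posterior: geometric likelihood times beta prior
joint <- function(p) dgeom(y - 1, p) * dbeta(p, alpha, beta)
m.y <- integrate(joint, 0, 1)$value
p.grid <- c(.2, .5, .8)
joint(p.grid) / m.y                     # normalized posterior
dbeta(p.grid, alpha + 1, beta + y - 1)  # agrees: posterior is beta(alpha + 1, beta + y - 1)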

16.10 a. The joint density of the random sample and θ is given by the product of the marginal densities multiplied by the gamma prior:
$$f(y_1, \ldots, y_n, \theta) = \left[\prod_{i=1}^n \theta e^{-\theta y_i}\right] \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, \theta^{\alpha-1} e^{-\theta/\beta} = \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, \theta^{n+\alpha-1} \exp\left[-\theta\left(\sum_{i=1}^n y_i + 1/\beta\right)\right] = \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, \theta^{n+\alpha-1} \exp\left[-\theta\left(\frac{\beta\sum_{i=1}^n y_i + 1}{\beta}\right)\right].$$

b. $m(y_1, \ldots, y_n) = \dfrac{1}{\Gamma(\alpha)\beta^{\alpha}} \displaystyle\int_0^\infty \theta^{n+\alpha-1} \exp\left[-\theta\left(\frac{\beta\sum_{i=1}^n y_i + 1}{\beta}\right)\right] d\theta$, but this integral resembles that of a gamma density with shape parameter $n + \alpha$ and scale parameter $\dfrac{\beta}{\beta\sum_{i=1}^n y_i + 1}$. Thus, the solution is
$$m(y_1, \ldots, y_n) = \frac{\Gamma(n+\alpha)}{\Gamma(\alpha)\beta^{\alpha}} \left(\frac{\beta}{\beta\sum_{i=1}^n y_i + 1}\right)^{n+\alpha}.$$

c. The solution follows from parts (a) and (b) above: dividing f by m shows that the posterior of θ is gamma with $\alpha^* = n + \alpha$ and $\beta^* = \beta/(\beta\sum_{i=1}^n y_i + 1)$.

d. Using the result in Ex. 4.111,
$$\hat{\mu}_B = E(\mu \mid \mathbf{Y}) = E(1/\theta \mid \mathbf{Y}) = \frac{1}{\beta^*(\alpha^* - 1)} = \frac{\beta\sum_{i=1}^n Y_i + 1}{\beta(n + \alpha - 1)}.$$

e. The prior mean for 1/θ is $E(1/\theta) = \dfrac{1}{\beta(\alpha - 1)}$ (again by Ex. 4.111). Thus, $\hat{\mu}_B$ can be written as
$$\hat{\mu}_B = \bar{Y}\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right),$$
which is a weighted average of the MLE and the prior mean.

f. We know that $\bar{Y}$ is unbiased; thus $E(\bar{Y}) = \mu = 1/\theta$. Therefore,
$$E(\hat{\mu}_B) = E(\bar{Y})\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right) = \mu\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(n+\alpha-1)}.$$
Therefore, $\hat{\mu}_B$ is biased. However, it is asymptotically unbiased since
$$E(\hat{\mu}_B) - 1/\theta \to 0.$$
Also,
$$V(\hat{\mu}_B) = V(\bar{Y})\left(\frac{n}{n+\alpha-1}\right)^2 = \frac{1}{n\theta^2} \cdot \frac{n^2}{(n+\alpha-1)^2} = \frac{n}{\theta^2(n+\alpha-1)^2} \to 0.$$
So, $\hat{\mu}_B \xrightarrow{p} 1/\theta$ and thus it is consistent.
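
The algebra in parts (d) and (e) is easy to verify numerically; the prior parameters and data below are arbitrary illustrative values.

alpha <- 3; beta <- 2
y <- c(1.2, 0.7, 2.3, 1.5, 0.9); n <- length(y)
# part (d): the Bayes estimator computed directly
direct <- (beta * sum(y) + 1) / (beta * (n + alpha - 1))
# part (e): the same estimator as a weighted average of the MLE and the prior mean
weighted <- mean(y) * n / (n + alpha - 1) +
  (1 / (beta * (alpha - 1))) * (alpha - 1) / (n + alpha - 1)
c(direct, weighted)  # both equal 1.014286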

16.11 a. The joint density of U and λ is
$$f(u, \lambda) = p(u \mid \lambda)\, g(\lambda) = \frac{(n\lambda)^u e^{-n\lambda}}{u!} \cdot \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, \lambda^{\alpha-1} e^{-\lambda/\beta} = \frac{n^u}{u!\,\Gamma(\alpha)\beta^{\alpha}}\, \lambda^{u+\alpha-1} \exp(-n\lambda - \lambda/\beta) = \frac{n^u}{u!\,\Gamma(\alpha)\beta^{\alpha}}\, \lambda^{u+\alpha-1} \exp\left[-\lambda\left(\frac{n\beta+1}{\beta}\right)\right].$$

b. $m(u) = \dfrac{n^u}{u!\,\Gamma(\alpha)\beta^{\alpha}} \displaystyle\int_0^\infty \lambda^{u+\alpha-1} \exp\left[-\lambda\left(\frac{n\beta+1}{\beta}\right)\right] d\lambda$, but this integral resembles that of a gamma density with shape parameter $u + \alpha$ and scale parameter $\dfrac{\beta}{n\beta+1}$. Thus, the solution is
$$m(u) = \frac{n^u\,\Gamma(u+\alpha)}{u!\,\Gamma(\alpha)\beta^{\alpha}} \left(\frac{\beta}{n\beta+1}\right)^{u+\alpha}.$$

c. The result follows from parts (a) and (b) above: the posterior of λ is gamma with $\alpha^* = u + \alpha$ and $\beta^* = \beta/(n\beta + 1)$.

d. $\hat{\lambda}_B = E(\lambda \mid U) = \alpha^*\beta^* = (U + \alpha)\dfrac{\beta}{n\beta+1}$.

e. The prior mean for λ is E(λ) = αβ. From the above,
$$\hat{\lambda}_B = \left(\sum_{i=1}^n Y_i + \alpha\right)\frac{\beta}{n\beta+1} = \bar{Y}\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right),$$
which is a weighted average of the MLE and the prior mean.

f. We know that $\bar{Y}$ is unbiased; thus $E(\bar{Y}) = \lambda$. Therefore,
$$E(\hat{\lambda}_B) = E(\bar{Y})\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right) = \lambda\left(\frac{n\beta}{n\beta+1}\right) + \frac{\alpha\beta}{n\beta+1}.$$
So, $\hat{\lambda}_B$ is biased but it is asymptotically unbiased since $E(\hat{\lambda}_B) - \lambda \to 0$. Also,
$$V(\hat{\lambda}_B) = V(\bar{Y})\left(\frac{n\beta}{n\beta+1}\right)^2 = \frac{\lambda}{n} \cdot \frac{n^2\beta^2}{(n\beta+1)^2} = \frac{\lambda n \beta^2}{(n\beta+1)^2} \to 0.$$
So, $\hat{\lambda}_B \xrightarrow{p} \lambda$ and thus it is consistent.
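
As with Ex. 16.10, the weighted-average form in part (e) can be verified numerically; α = 2, β = 0.5, and the counts below are arbitrary illustrative values.

alpha <- 2; beta <- 0.5
y <- c(3, 1, 4, 2, 2); n <- length(y)
direct   <- (sum(y) + alpha) * beta / (n * beta + 1)  # part (d)
weighted <- mean(y) * n * beta / (n * beta + 1) +     # part (e)
  alpha * beta / (n * beta + 1)
c(direct, weighted)  # both equal 2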

16.12 First, it is given that $W = vU = v\sum_{i=1}^n (Y_i - \mu_0)^2$ is chi-square with n degrees of freedom. Then, the density function for U (conditioned on v) is given by
$$f_U(u \mid v) = v\, f_W(uv) = v \cdot \frac{1}{\Gamma(n/2)\,2^{n/2}} (uv)^{n/2-1} e^{-uv/2} = \frac{1}{\Gamma(n/2)\,2^{n/2}}\, u^{n/2-1} v^{n/2} e^{-uv/2}.$$

a. The joint density of U and v is then
$$f(u, v) = f_U(u \mid v)\, g(v) = \frac{1}{\Gamma(n/2)\,2^{n/2}}\, u^{n/2-1} v^{n/2} \exp(-uv/2) \cdot \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, v^{\alpha-1} \exp(-v/\beta)$$
$$= \frac{1}{\Gamma(n/2)\Gamma(\alpha)\beta^{\alpha} 2^{n/2}}\, u^{n/2-1} v^{n/2+\alpha-1} \exp(-uv/2 - v/\beta) = \frac{1}{\Gamma(n/2)\Gamma(\alpha)\beta^{\alpha} 2^{n/2}}\, u^{n/2-1} v^{n/2+\alpha-1} \exp\left[-v\left(\frac{u\beta+2}{2\beta}\right)\right].$$

b. $m(u) = \dfrac{u^{n/2-1}}{\Gamma(n/2)\Gamma(\alpha)\beta^{\alpha} 2^{n/2}} \displaystyle\int_0^\infty v^{n/2+\alpha-1} \exp\left[-v\left(\frac{u\beta+2}{2\beta}\right)\right] dv$, but this integral resembles that of a gamma density with shape parameter $n/2 + \alpha$ and scale parameter $\dfrac{2\beta}{u\beta+2}$. Thus, the solution is
$$m(u) = \frac{u^{n/2-1}\,\Gamma(n/2+\alpha)}{\Gamma(n/2)\Gamma(\alpha)\beta^{\alpha} 2^{n/2}} \left(\frac{2\beta}{u\beta+2}\right)^{n/2+\alpha}.$$

c. The result follows from parts (a) and (b) above: the posterior of v is gamma with $\alpha^* = n/2 + \alpha$ and $\beta^* = 2\beta/(u\beta + 2)$.

d. Using the result in Ex. 4.111(e),
$$\hat{\sigma}^2_B = E(\sigma^2 \mid U) = E(1/v \mid U) = \frac{1}{\beta^*(\alpha^* - 1)} = \frac{U\beta + 2}{2\beta(n/2 + \alpha - 1)} = \frac{U\beta + 2}{\beta(n + 2\alpha - 2)}.$$

e. The prior mean for $\sigma^2 = 1/v$ is $\dfrac{1}{\beta(\alpha-1)}$. From the above,
$$\hat{\sigma}^2_B = \frac{U\beta + 2}{\beta(n + 2\alpha - 2)} = \frac{U}{n}\left(\frac{n}{n + 2\alpha - 2}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{2(\alpha-1)}{n + 2\alpha - 2}\right),$$
which is a weighted average of the MLE and the prior mean.
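
A numerical sketch of part (c), using the values that appear in Ex. 16.20 (n = 8, α = 5, β = 2, u = .8579) purely for illustration, confirms the form of the posterior of v.

n <- 8; alpha <- 5; beta <- 2; u <- .8579
# joint density of u and v: f_U(u | v) g(v), where f_U(u | v) = v f_W(uv)
joint <- function(v) dchisq(u * v, n) * v * dgamma(v, alpha, scale = beta)
m.u <- integrate(joint, 0, Inf)$value
v.grid <- c(5, 10, 15)
joint(v.grid) / m.u                                             # normalized posterior
dgamma(v.grid, n/2 + alpha, scale = 2 * beta / (u * beta + 2))  # agrees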

16.13 a. (.099, .710)
b. Both probabilities are .025.
c. P(.099 < p < .710) = .95.
d.-g. Answers vary.
h. The credible intervals should decrease in width with larger sample sizes.
16.14 a.-b. Answers vary.
16.15 With y = 4, n = 25, and a beta(1, 3) prior, the posterior distribution for p is beta(5, 24).
Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,24)
[1] 0.06064291
> qbeta(.975,5,24)
[1] 0.3266527

16.16 With y = 4, n = 25, and a beta(1, 1) prior, the posterior distribution for p is beta(5, 22).
Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,22)
[1] 0.06554811
> qbeta(.975,5,22)
[1] 0.3486788

This is a wider interval than what was obtained in Ex. 16.15.


16.17 With y = 6 and a beta(10, 5) prior, the posterior distribution for p is beta(11, 10). Using
R, the lower and upper endpoints of the 80% credible interval for p are given by:
> qbeta(.10,11,10)
[1] 0.3847514
> qbeta(.90,11,10)
[1] 0.6618291

16.18 With n = 15, ∑yᵢ = 30.27, and a gamma(2.3, 0.4) prior, the posterior distribution for θ is gamma(17.3, .030516). Using R, the lower and upper endpoints of the 80% credible interval for θ are given by
> qgamma(.10,shape=17.3,scale=.0305167)
[1] 0.3731982
> qgamma(.90,shape=17.3,scale=.0305167)
[1] 0.6957321

The 80% credible interval for θ is (.3732, .6957). To create an 80% credible interval for 1/θ, the endpoints of the previous interval can be inverted:
.3732 < θ < .6957
1/(.3732) > 1/θ > 1/(.6957).
Since 1/(.6957) = 1.4374 and 1/(.3732) = 2.6795, the 80% credible interval for 1/θ is (1.4374, 2.6795).
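
Equivalently, the inversion can be done in one line of R (a convenience, not part of the original solution); sort() reorders the endpoints after inverting:
> sort(1/qgamma(c(.10,.90),shape=17.3,scale=.0305167))

This returns approximately (1.4374, 2.6795), matching the interval above.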


16.19 With n = 25, ∑yᵢ = 174, and a gamma(2, 3) prior, the posterior distribution for θ is gamma(176, .0394739). Using R, the lower and upper endpoints of the 95% credible interval for θ are given by
> qgamma(.025,shape=176,scale=.0394739)
[1] 5.958895
> qgamma(.975,shape=176,scale=.0394739)
[1] 8.010663

16.20 With n = 8, u = .8579, and a gamma(5, 2) prior, the posterior distribution for v is
gamma(9, 1.0764842). Using R, the lower and upper endpoints of the 90% credible
interval for v are given by
> qgamma(.05,shape=9,scale=1.0764842)
[1] 5.054338
> qgamma(.95,shape=9,scale=1.0764842)
[1] 15.53867

The 90% credible interval for v is (5.054, 15.539). Similar to Ex. 16.18, the 90% credible interval for σ² = 1/v is found by inverting the endpoints of the credible interval for v, giving (.0644, .1979).
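
As in Ex. 16.18, the inverted endpoints can be obtained in one line (a convenience, not part of the original solution):
> sort(1/qgamma(c(.05,.95),shape=9,scale=1.0764842))

This returns approximately (.0644, .1979).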
16.21 From Ex. 16.15, the posterior distribution of p is beta(5, 24). Now, we can find P*(p ∈ Ω₀) = P*(p < .3) by (in R):
> pbeta(.3,5,24)
[1] 0.9525731

Therefore, P*(p ∈ Ωₐ) = P*(p ≥ .3) = 1 − .9525731 = .0474269. Since the probability associated with H0 is much larger, our decision is to not reject H0.
16.22 From Ex. 16.16, the posterior distribution of p is beta(5, 22). We can find P*(p ∈ Ω₀) = P*(p < .3) by (in R):
> pbeta(.3,5,22)
[1] 0.9266975

Therefore, P*(p ∈ Ωₐ) = P*(p ≥ .3) = 1 − .9266975 = .0733025. Since the probability associated with H0 is much larger, our decision is to not reject H0.
16.23 From Ex. 16.17, the posterior distribution of p is beta(11, 10). Thus, P*(p ∈ Ω₀) = P*(p < .4) is given by (in R):
> pbeta(.4,11,10)
[1] 0.1275212

Therefore, P*(p ∈ Ωₐ) = P*(p ≥ .4) = 1 − .1275212 = .8724788. Since the probability associated with Ha is much larger, our decision is to reject H0.
16.24 From Ex. 16.18, the posterior distribution for θ is gamma(17.3, .0305). To test
H0: θ > .5 vs. Ha: θ ≤ .5,
we calculate P*(θ ∈ Ω₀) = P*(θ > .5) as:
> 1 - pgamma(.5,shape=17.3,scale=.0305)
[1] 0.5561767

Therefore, P*(θ ∈ Ωₐ) = P*(θ ≤ .5) = 1 − .5561767 = .4438233. The probability associated with H0 is larger (but only marginally so), so our decision is to not reject H0.
16.25 From Ex. 16.19, the posterior distribution for θ is gamma(176, .0395). Thus, P*(θ ∈ Ω₀) = P*(θ > 6) is found by
> 1 - pgamma(6,shape=176,scale=.0395)
[1] 0.9700498

Therefore, P*(θ ∈ Ωₐ) = P*(θ ≤ 6) = 1 − .9700498 = .0299502. Since the probability associated with H0 is much larger, our decision is to not reject H0.
16.26 From Ex. 16.20, the posterior distribution for v is gamma(9, 1.0765). To test
H0: v < 10 vs. Ha: v ≥ 10,
we calculate P*(v ∈ Ω₀) = P*(v < 10). Note that 1.0765 is the scale parameter of the posterior, and the third positional argument of pgamma is a rate, so it must be passed as scale=:
> pgamma(10,shape=9,scale=1.0765)
[1] 0.581816

Therefore, P*(v ∈ Ωₐ) = P*(v ≥ 10) = 1 − .581816 = .418184. Since the probability associated with H0 is larger, our decision is to not reject H0.
