Section 1.2.
1. (a).
Therefore,
\[
\pi(\theta_1\mid 0)=\frac{0.8}{0.8+0.4}=\frac{2}{3},
\qquad
\pi(\theta_1\mid 1)=\frac{0.2}{0.2+0.6}=\frac{1}{4},
\]
\[
\pi(\theta_2\mid 0)=\frac{0.4}{0.8+0.4}=\frac{1}{3},
\qquad
\pi(\theta_2\mid 1)=\frac{0.6}{0.2+0.6}=\frac{3}{4}.
\]
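As a quick numerical check of these values, here is a minimal sketch; it assumes the uniform prior $\pi(\theta_1)=\pi(\theta_2)=1/2$ and the Bernoulli likelihoods $P(X=1\mid\theta_1)=0.2$, $P(X=1\mid\theta_2)=0.6$ used above.
\begin{verbatim}
# Quick check of the part (a) posteriors (uniform prior over theta_1, theta_2).
lik = {1: {0: 0.8, 1: 0.2},   # P(X = x | theta_1)
       2: {0: 0.4, 1: 0.6}}   # P(X = x | theta_2)
prior = {1: 0.5, 2: 0.5}

def posterior(x):
    weights = {t: prior[t] * lik[t][x] for t in prior}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

print(posterior(0))   # {1: 0.666..., 2: 0.333...}, i.e. 2/3 and 1/3
print(posterior(1))   # {1: 0.25, 2: 0.75},         i.e. 1/4 and 3/4
\end{verbatim}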
(b).
\[
p(x_1,\cdots,x_n\mid\theta_1)=(0.2)^{\sum_{i=1}^n x_i}\,(0.8)^{\,n-\sum_{i=1}^n x_i},
\qquad
p(x_1,\cdots,x_n\mid\theta_2)=(0.6)^{\sum_{i=1}^n x_i}\,(0.4)^{\,n-\sum_{i=1}^n x_i}.
\]
Hence
\[
\pi(\theta_i\mid x_1,\cdots,x_n)
=\frac{\pi(\theta_i)\,p(x_1,\cdots,x_n\mid\theta_i)}
      {\pi(\theta_1)\,p(x_1,\cdots,x_n\mid\theta_1)+\pi(\theta_2)\,p(x_1,\cdots,x_n\mid\theta_2)}
=\frac{p(x_1,\cdots,x_n\mid\theta_i)}
      {p(x_1,\cdots,x_n\mid\theta_1)+p(x_1,\cdots,x_n\mid\theta_2)},
\qquad i=1,2.
\]
Thus,
\[
\pi(\theta_1\mid x_1,\cdots,x_n)
=\frac{(0.2)^{\sum_{i=1}^n x_i}(0.8)^{\,n-\sum_{i=1}^n x_i}}
      {(0.2)^{\sum_{i=1}^n x_i}(0.8)^{\,n-\sum_{i=1}^n x_i}
       +(0.6)^{\sum_{i=1}^n x_i}(0.4)^{\,n-\sum_{i=1}^n x_i}},
\]
\[
\pi(\theta_2\mid x_1,\cdots,x_n)
=\frac{(0.6)^{\sum_{i=1}^n x_i}(0.4)^{\,n-\sum_{i=1}^n x_i}}
      {(0.2)^{\sum_{i=1}^n x_i}(0.8)^{\,n-\sum_{i=1}^n x_i}
       +(0.6)^{\sum_{i=1}^n x_i}(0.4)^{\,n-\sum_{i=1}^n x_i}}.
\]
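These posteriors depend on the sample only through $\sum_{i=1}^n x_i$. The sketch below (a hypothetical helper, not part of the solution) evaluates them for a given sample under the uniform prior:
\begin{verbatim}
# Posterior over {theta_1, theta_2} for a Bernoulli sample, uniform prior.
def posterior_uniform_prior(xs):
    s, n = sum(xs), len(xs)
    l1 = 0.2 ** s * 0.8 ** (n - s)   # p(x | theta_1)
    l2 = 0.6 ** s * 0.4 ** (n - s)   # p(x | theta_2)
    return l1 / (l1 + l2), l2 / (l1 + l2)

print(posterior_uniform_prior([0]))      # (2/3, 1/3), matching part (a)
print(posterior_uniform_prior([1, 0]))   # n = 2, one success: (0.4, 0.6)
\end{verbatim}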
(c).
\[
\pi(\theta_1\mid x_1,\cdots,x_n)
=\frac{(0.25)(0.2)^{\sum_{i=1}^n x_i}(0.8)^{\,n-\sum_{i=1}^n x_i}}
      {(0.25)(0.2)^{\sum_{i=1}^n x_i}(0.8)^{\,n-\sum_{i=1}^n x_i}
       +(0.75)(0.6)^{\sum_{i=1}^n x_i}(0.4)^{\,n-\sum_{i=1}^n x_i}}
=\frac{(0.2)^{\sum_{i=1}^n x_i}(0.8)^{\,n-\sum_{i=1}^n x_i}}
      {(0.2)^{\sum_{i=1}^n x_i}(0.8)^{\,n-\sum_{i=1}^n x_i}
       +3\,(0.6)^{\sum_{i=1}^n x_i}(0.4)^{\,n-\sum_{i=1}^n x_i}},
\]
\[
\pi(\theta_2\mid x_1,\cdots,x_n)
=\frac{3\,(0.6)^{\sum_{i=1}^n x_i}(0.4)^{\,n-\sum_{i=1}^n x_i}}
      {(0.2)^{\sum_{i=1}^n x_i}(0.8)^{\,n-\sum_{i=1}^n x_i}
       +3\,(0.6)^{\sum_{i=1}^n x_i}(0.4)^{\,n-\sum_{i=1}^n x_i}}.
\]
With the uniform prior, if $\sum_{i=1}^{n}X_i=n/2$, then
\[
\pi\Bigl(\theta_2\Bigm|\sum_{i=1}^{n}X_i=\frac{n}{2}\Bigr)
=\frac{(0.6)^{n/2}(0.4)^{n/2}}{(0.2)^{n/2}(0.8)^{n/2}+(0.6)^{n/2}(0.4)^{n/2}}.
\]
For $n=2$,
\[
\pi\Bigl(\theta_1\Bigm|\sum_{i=1}^{2}X_i=1\Bigr)
=\frac{(0.2)(0.8)}{(0.2)(0.8)+(0.6)(0.4)}=\frac{2}{5},
\qquad
\pi\Bigl(\theta_2\Bigm|\sum_{i=1}^{2}X_i=1\Bigr)=1-\frac{2}{5}=\frac{3}{5}.
\]
For $n=100$,
\[
\pi\Bigl(\theta_1\Bigm|\sum_{i=1}^{100}X_i=50\Bigr)
=\frac{(1.6)^{50}}{(1.6)^{50}+(2.4)^{50}},
\qquad
\pi\Bigl(\theta_2\Bigm|\sum_{i=1}^{100}X_i=50\Bigr)
=\frac{(2.4)^{50}}{(1.6)^{50}+(2.4)^{50}}.
\]
With the prior of part (c), for $n=2$,
\[
\pi\Bigl(\theta_2\Bigm|\sum_{i=1}^{2}X_i=1\Bigr)=1-\frac{2}{11}=\frac{9}{11},
\]
and for $n=100$,
\[
\pi\Bigl(\theta_1\Bigm|\sum_{i=1}^{100}X_i=50\Bigr)
=\frac{(1.6)^{50}}{(1.6)^{50}+3\,(2.4)^{50}},
\qquad
\pi\Bigl(\theta_2\Bigm|\sum_{i=1}^{100}X_i=50\Bigr)
=\frac{3\,(2.4)^{50}}{(1.6)^{50}+3\,(2.4)^{50}}.
\]
By (d), in both cases $n=2$ and $n=100$,
\[
\mathop{\arg\max}_{\theta}\;\pi\Bigl(\theta\Bigm|\sum_{i=1}^{n}X_i=\frac{n}{2}\Bigr)=\theta_2.
\]
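A short sketch confirming these numbers; the prior weights $(1/2,1/2)$ and $(1/4,3/4)$ are the ones used in parts (b) and (c), and everything else is the posterior formula already derived.
\begin{verbatim}
# Posterior of (theta_1, theta_2) given sum(x) = s out of n, general prior.
def posterior(s, n, prior1, prior2):
    w1 = prior1 * 0.2 ** s * 0.8 ** (n - s)
    w2 = prior2 * 0.6 ** s * 0.4 ** (n - s)
    return w1 / (w1 + w2), w2 / (w1 + w2)

print(posterior(1, 2, 0.5, 0.5))       # (2/5, 3/5)
print(posterior(50, 100, 0.5, 0.5))    # (1.6^50, 2.4^50) / (1.6^50 + 2.4^50)
print(posterior(1, 2, 0.25, 0.75))     # (2/11, 9/11)
print(posterior(50, 100, 0.25, 0.75))  # same as above with the factor 3
# In every case the second coordinate is larger, so the posterior mode is theta_2.
\end{verbatim}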
3. (a)
\[
\pi(\theta\mid X=2)
=\frac{\pi(\theta)\,P\{X=2\mid\theta\}}
      {\pi(1/4)P\{X=2\mid\theta=1/4\}+\pi(1/2)P\{X=2\mid\theta=1/2\}+\pi(3/4)P\{X=2\mid\theta=3/4\}}
\]
\[
=\frac{(1-\theta)^2\theta}{(3/4)^2(1/4)+(1/2)^2(1/2)+(1/4)^2(3/4)}
=\frac{16}{5}\,(1-\theta)^2\theta.
\]
Hence
\[
\pi(1/4\mid X=2)=\frac{9}{20},\qquad
\pi(1/2\mid X=2)=\frac{2}{5},\qquad
\pi(3/4\mid X=2)=\frac{3}{20}.
\]
\[
\pi(\theta\mid X=k)=\frac{(1-\theta)^k\theta}{(3/4)^k(1/4)+(1/2)^k(1/2)+(1/4)^k(3/4)}.
\]
So we need to compare $(3/4)^k(1/4)$, $(1/2)^k(1/2)$ and $(1/4)^k(3/4)$. For $k=0$ the third is the largest, so $3/4$ is most probable; for $k=1$ the second is the largest, so $1/2$ is most probable; for $k\ge 2$ the first is the largest, so $1/4$ is most probable.
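This comparison is easy to check numerically; the sketch below assumes, as above, the geometric model $P\{X=k\mid\theta\}=(1-\theta)^k\theta$ and equal prior weights on $\{1/4,1/2,3/4\}$.
\begin{verbatim}
# Posterior over theta in {1/4, 1/2, 3/4} given X = k, equal prior weights.
thetas = [0.25, 0.5, 0.75]

def posterior(k):
    w = [(1 - t) ** k * t for t in thetas]   # equal priors cancel
    s = sum(w)
    return [wi / s for wi in w]

print(posterior(2))                # [0.45, 0.40, 0.15] = 9/20, 2/5, 3/20
for k in range(5):
    p = posterior(k)
    print(k, thetas[p.index(max(p))])
# mode: 3/4 for k = 0, 1/2 for k = 1, 1/4 for k >= 2
\end{verbatim}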
\[
\pi(\theta\mid X=k)
=\frac{\theta^{r-1}(1-\theta)^{s-1}}{B(r,s)}\,P\{X=k\mid\theta\}
 \left[\int_0^1\frac{x^{r-1}(1-x)^{s-1}}{B(r,s)}\,P\{X=k\mid\theta=x\}\,dx\right]^{-1}
\]
\[
=\theta^{r-1}(1-\theta)^{s-1}(1-\theta)^{k}\theta
 \left[\int_0^1 x^{r-1}(1-x)^{s-1}(1-x)^{k}x\,dx\right]^{-1}
=\frac{\theta^{r}(1-\theta)^{s+k-1}}{B(r+1,s+k)}.
\]
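In other words, the posterior is the Beta$(r+1,s+k)$ distribution. A small numerical sanity check (a sketch only: it normalizes prior times likelihood on a grid and compares with the Beta$(r+1,s+k)$ density at one point, using illustrative values $r=2$, $s=3$, $k=4$):
\begin{verbatim}
# Check that Beta(r, s) prior x geometric likelihood (1 - t)^k * t, once
# normalized, matches the Beta(r + 1, s + k) density.
import math

def beta_pdf(t, a, b):
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return t ** (a - 1) * (1 - t) ** (b - 1) / B

r, s, k = 2, 3, 4                              # illustrative values
grid = [(i + 0.5) / 10000 for i in range(10000)]
unnorm = [beta_pdf(t, r, s) * (1 - t) ** k * t for t in grid]
Z = sum(unnorm) / 10000                        # numerical normalizing constant

t0 = 0.3
post = beta_pdf(t0, r, s) * (1 - t0) ** k * t0 / Z
print(post, beta_pdf(t0, r + 1, s + k))        # nearly equal
\end{verbatim}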
\[
\pi(j\mid x_1,\cdots,x_n)
=\frac{\pi(j)\,p(x_1,\cdots,x_n\mid j)}{\sum_{k=1}^{\infty}\pi(k)\,p(x_1,\cdots,x_n\mid k)}
=\frac{c(a)\,j^{-a}\,j^{-n}}{c(a)\sum_{k=m}^{\infty}k^{-a-n}}
=\frac{c(n+a,m)}{j^{a+n}},
\qquad j=m,m+1,\cdots,
\]
where $m=\max\{x_1,\cdots,x_n\}$ and $c(n+a,m)=\bigl[\sum_{k=m}^{\infty}k^{-(n+a)}\bigr]^{-1}$.
(b).
\[
\pi(m\mid x_1,\cdots,x_n)
=\frac{c(n+a,m)}{m^{a+n}}
=\left[\sum_{j=m}^{\infty}\Bigl(\frac{m}{j}\Bigr)^{n+a}\right]^{-1}
=\left[1+\sum_{j=m+1}^{\infty}\Bigl(\frac{m}{j}\Bigr)^{n+a}\right]^{-1}
\longrightarrow 1
\quad\text{as } n\to\infty,
\]
which can be checked by either the dominated or the monotone convergence theorem.
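The convergence is also easy to see numerically; the sketch below truncates the infinite sum at a large index and uses the illustrative values $a=2$, $m=5$.
\begin{verbatim}
# pi(m | x_1,...,x_n) = 1 / sum_{j >= m} (m/j)^(n+a), tail truncated.
a, m = 2, 5                       # illustrative values

def post_at_max(n, terms=200000):
    s = sum((m / j) ** (n + a) for j in range(m, m + terms))
    return 1.0 / s

for n in (1, 5, 20, 100):
    print(n, post_at_max(n))      # increases toward 1 as n grows
\end{verbatim}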
Explanation: $\{X_1,\cdots,X_n\}$ is an i.i.d. sample from a population with the uniform distribution over $\{1,\cdots,\theta\}$, where the unknown parameter $\theta$ is exactly the upper bound of the distribution. It is intuitively clear that $\max\{X_1,\cdots,X_n\}$ converges to $\theta$ in some suitable sense as $n\to\infty$. Another way to describe this phenomenon is to say that the randomized parameter $\theta$ takes the value $m=\max\{x_1,\cdots,x_n\}$ with probability close to $1$ when $n$ is large.
Section 1.3.
i = 2:
i = 3:
i = 4:
i = 5:
Pθ1 {δ5 (X) = a1 } = 0, Pθ1 {δ5 (X) = a2 } = 1, Pθ1 {δ5 (X) = a3 } = 0
Pθ2 {δ5 (X) = a1 } = 0, Pθ2 {δ5 (X) = a2 } = 1, Pθ2 {δ5 (X) = a3 } = 0
i = 6:
i = 7:
i = 8:
i = 9:
Pθ1 {δ9 (X) = a1 } = 0, Pθ1 {δ9 (X) = a2 } = 0, Pθ1 {δ9 (X) = a3 } = 1
Pθ2 {δ9 (X) = a1 } = 0, Pθ2 {δ9 (X) = a2 } = 0, Pθ2 {δ9 (X) = a3 } = 1
Plugging these in, we obtain all the risk points $R(\theta_j,\delta_i)$ ($j=1,2$; $i=1,2,\cdots,9$).
(c) In case (a), the decision rule has the same distribution regardless of the value of $\theta$, and
\[
R(\theta,\delta)=
\begin{cases}
P\{\delta(X)=a_2\}+2P\{\delta(X)=a_3\}, & \theta=\theta_1,\\
2P\{\delta(X)=a_1\}+P\{\delta(X)=a_3\}, & \theta=\theta_2.
\end{cases}
\]
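For instance, applying this formula to the distributions of $\delta_5$ and $\delta_9$ listed above (a minimal sketch; the other seven rules are handled the same way once their distributions are written down):
\begin{verbatim}
# Risk point from the action distribution (P{a1}, P{a2}, P{a3}), using
# R(theta_1, .) = P{a2} + 2 P{a3} and R(theta_2, .) = 2 P{a1} + P{a3}.
def risk_point(p):
    p1, p2, p3 = p
    return (p2 + 2 * p3, 2 * p1 + p3)

print(risk_point((0, 1, 0)))   # delta_5: (1, 0)
print(risk_point((0, 0, 1)))   # delta_9: (2, 1)
\end{verbatim}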
Hence,
\[
R(\theta,\delta_1)=\begin{cases}0, & \theta=\theta_1,\\ 1, & \theta=\theta_2,\end{cases}
\qquad
R(\theta,\delta_2)=\begin{cases}0.9, & \theta=\theta_1,\\ 0.2, & \theta=\theta_2,\end{cases}
\]
\[
R(\theta,\delta_3)=\begin{cases}1.8, & \theta=\theta_1,\\ 1.1, & \theta=\theta_2,\end{cases}
\qquad
R(\theta,\delta_4)=\begin{cases}0.1, & \theta=\theta_1,\\ 1.8, & \theta=\theta_2,\end{cases}
\]
\[
R(\theta,\delta_5)=\begin{cases}1, & \theta=\theta_1,\\ 0, & \theta=\theta_2,\end{cases}
\qquad
R(\theta,\delta_6)=\begin{cases}1.9, & \theta=\theta_1,\\ 0.9, & \theta=\theta_2,\end{cases}
\]
\[
R(\theta,\delta_7)=\begin{cases}1.1, & \theta=\theta_1,\\ 1.9, & \theta=\theta_2,\end{cases}
\qquad
R(\theta,\delta_8)=\begin{cases}1.1, & \theta=\theta_1,\\ 0.1, & \theta=\theta_2,\end{cases}
\]
\[
R(\theta,\delta_9)=\begin{cases}2, & \theta=\theta_1,\\ 1, & \theta=\theta_2.\end{cases}
\]
The minimax rule is δ2 .
(d).
r(δ1 ) = 0.5, r(δ2 ) = 0.55, r(δ3 ) = 1.45, r(δ4 ) = 0.95, r(δ5 ) = 0.5
r(δ6 ) = 1.4, r(δ7 ) = 1.5, r(δ8 ) = 0.6, r(δ9 ) = 1.5
The Bayes rules are δ1 and δ5 .
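Both conclusions can be read off mechanically from the risk points; the sketch below scans the table from part (c) and assumes the prior $\pi(\theta_1)=\pi(\theta_2)=1/2$ implied by the Bayes risks above.
\begin{verbatim}
# Risk points (R(theta_1, delta_i), R(theta_2, delta_i)), i = 1,...,9, from (c).
risks = [(0, 1), (0.9, 0.2), (1.8, 1.1), (0.1, 1.8), (1, 0),
         (1.9, 0.9), (1.1, 1.9), (1.1, 0.1), (2, 1)]

# Minimax: minimize the maximum coordinate.
worst = [max(r) for r in risks]
print("minimax rule: delta_%d" % (worst.index(min(worst)) + 1))   # delta_2

# Bayes risks with prior (1/2, 1/2); delta_1 and delta_5 attain the minimum 0.5.
bayes = [0.5 * r1 + 0.5 * r2 for r1, r2 in risks]
print([round(b, 2) for b in bayes])
\end{verbatim}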
So
\[
MSE(s^2)=\frac{n+1}{n-1}\,\sigma^4-\sigma^4=\frac{2}{n-1}\,\sigma^4.
\]
(ii)
\[
MSE(\hat\sigma_0^2)=Var(\hat\sigma_0^2)+\bigl(E(\hat\sigma_0^2)-\sigma^2\bigr)^2
=E(\hat\sigma_0^2)^2-\bigl(E(\hat\sigma_0^2)\bigr)^2+\bigl(E(\hat\sigma_0^2)-\sigma^2\bigr)^2
\]
\[
=c^2\sigma^4\bigl[(n-1)^2+2(n-1)\bigr]-c^2(n-1)^2\sigma^4+\bigl(c(n-1)\sigma^2-\sigma^2\bigr)^2
=\sigma^4\Bigl\{2c^2(n-1)+\bigl(c(n-1)-1\bigr)^2\Bigr\}.
\]
It is easy to see that $c=(n+1)^{-1}$ is the minimizer of the right-hand side.
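A quick check of the minimizer, using only the MSE expression just derived (no simulation is needed): evaluate $\sigma^4\{2c^2(n-1)+(c(n-1)-1)^2\}$ on a grid of $c$ and compare with $c=(n+1)^{-1}$; the special case $c=(n-1)^{-1}$ recovers the MSE of $s^2$ from (i).
\begin{verbatim}
# MSE of c * sum (X_i - Xbar)^2 as a function of c, in units of sigma^4.
n = 10

def mse(c):
    return 2 * c ** 2 * (n - 1) + (c * (n - 1) - 1) ** 2

grid = [i / 100000 for i in range(1, 50000)]
c_best = min(grid, key=mse)
print(c_best, 1 / (n + 1))    # both close to 1/11
print(mse(1 / (n - 1)))       # 2/(n-1), the MSE of s^2 from (i)
\end{verbatim}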
\[
MSE(\hat p)=Var(\hat p)=\frac{\sigma^2}{n}.
\]
δ1 (0) = a1 , δ1 (1) = a2 ,
δ2 (0) = a2 , δ2 (1) = a1 ,
δ3 (0) = a1 , δ3 (1) = a1 ,
δ4 (0) = a2 , δ4 (1) = a2 ,
We have
\[
R(\theta,\delta_1)=\begin{cases}1.6, & \theta=\theta_1,\\ 1.8, & \theta=\theta_2,\end{cases}
\qquad
R(\theta,\delta_2)=\begin{cases}0.4, & \theta=\theta_1,\\ 2.2, & \theta=\theta_2,\end{cases}
\]
\[
R(\theta,\delta_3)=\begin{cases}0, & \theta=\theta_1,\\ 3, & \theta=\theta_2,\end{cases}
\qquad
R(\theta,\delta_4)=\begin{cases}2, & \theta=\theta_1,\\ 1, & \theta=\theta_2.\end{cases}
\]
The minimax rule is δ1 .
(b) The risk function of any randomized decision rule $\delta$ can be written in the form
\[
R(\theta,\delta)=\sum_{i=1}^{4}\lambda_i R(\theta,\delta_i),
\qquad \lambda_i\ge 0,\quad \sum_{i=1}^{4}\lambda_i=1.
\]
Equalizing the two risks of the mixture $\lambda\delta_2+(1-\lambda)\delta_4$, that is, solving $0.4\lambda+2(1-\lambda)=2.2\lambda+(1-\lambda)$, we get $\lambda=5/14$. So the minimax rule among the randomized decision rules is
\[
\delta=\begin{cases}\delta_2 & \text{with probability } 5/14,\\ \delta_4 & \text{with probability } 9/14.\end{cases}
\]
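The value $\lambda=5/14$ can be checked directly; the sketch below solves the equalizer equation for the mixture of $\delta_2$ and $\delta_4$ and compares the resulting worst-case risk with that of $\delta_1$.
\begin{verbatim}
# Mixture lambda * delta_2 + (1 - lambda) * delta_4, risk points from part (a).
from fractions import Fraction

a1, a2 = Fraction(2, 5), Fraction(11, 5)   # risks of delta_2: (0.4, 2.2)
b1, b2 = Fraction(2), Fraction(1)          # risks of delta_4: (2, 1)

# Equalizer: a1*l + b1*(1 - l) = a2*l + b2*(1 - l)
lam = (b2 - b1) / ((a1 - b1) - (a2 - b2))
print(lam)                                 # 5/14

risk = a1 * lam + b1 * (1 - lam)           # common value of both risks
print(risk, float(risk))                   # 10/7 ~ 1.43 < 1.8, the max risk of delta_1
\end{verbatim}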