1. (a) Let U ∼ Uniform[0, c] for some c > 0. Calculate E(U^k) where k ≥ 0 is some fixed constant.
Solution:

E(U^k) = ∫_{t=−∞}^{∞} t^k f_U(t) dt = ∫_{t=0}^{c} t^k · (1/c) dt = (1/c) · (1/(k+1)) · t^{k+1} |_{t=0}^{c} = c^k / (k + 1).
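A quick numerical sanity check (not part of the original solution, and assuming NumPy is available) agrees with the formula E(U^k) = c^k/(k + 1); the values c = 2 and k = 3 below are arbitrary illustrative choices, not from the problem:

```python
import numpy as np

# Monte Carlo check of E(U^k) = c^k / (k + 1) for U ~ Uniform[0, c].
# c = 2 and k = 3 are arbitrary example values chosen for illustration.
c, k = 2.0, 3
rng = np.random.default_rng(0)
u = rng.uniform(0, c, size=1_000_000)
mc_estimate = (u ** k).mean()      # sample mean of U^k
exact = c ** k / (k + 1)           # closed form: 8 / 4 = 2.0
```

With a million draws the Monte Carlo estimate typically matches the closed form to about two decimal places.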
Solution:

E(V) = E(E(V | U)) = E(1/(U + 1)) = ∫_{t=−∞}^{∞} (1/(1 + t)) f_U(t) dt = ∫_{t=0}^{1} (1/(1 + t)) dt = log(1 + t) |_{t=0}^{1} = log(2).
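The statement of part (b) does not appear in this excerpt, but the quantity computed above is E(1/(1 + U)) for U ∼ Uniform[0, 1]. A simple Monte Carlo check of that integral (a sketch, assuming NumPy is available) is:

```python
import numpy as np

# Check numerically that E(1/(1 + U)) = log(2) for U ~ Uniform[0, 1],
# matching the integral of 1/(1 + t) over [0, 1] computed in the solution.
rng = np.random.default_rng(1)
u = rng.uniform(0, 1, size=1_000_000)
mc_estimate = (1 / (1 + u)).mean()
exact = np.log(2)                  # ≈ 0.6931
```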
(c) Let U ∼ Uniform[0, 1] and suppose that, conditional on U , another random variable W is drawn
from the Uniform[0, U ] distribution. Calculate E(W ) and Var(W ).
Solution: First, E(W) = E(E(W | U)) = E(U/2) = E(U)/2 = 1/4. For the variance,

Var(W) = E(Var(W | U)) + Var(E(W | U)) = E(U²/12) + Var(U/2)
= E(U²)/12 + Var(U)/4 = E(U²)/12 + (E(U²) − E(U)²)/4 = (1/3)/12 + (1/3 − (1/2)²)/4 = 7/144 ≈ 0.048611.
Alternatively, we might observe that W is equal in distribution to U V where V ∼ Uniform[0, 1] is drawn independently of U. With this observation we can calculate

E(W) = E(U V) = E(U) E(V) = (1/2) · (1/2) = 1/4
and
E(W²) = E(U² V²) = E(U²) E(V²) = (1/3) · (1/3) = 1/9
and so
Var(W) = E(W²) − E(W)² = 1/9 − (1/4)² = 7/144 ≈ 0.048611.
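The two-stage model in part (c) is easy to simulate directly. The following sketch (not part of the original solution, and assuming NumPy is available) checks both answers:

```python
import numpy as np

# Simulate U ~ Uniform[0, 1] and then W | U ~ Uniform[0, U],
# and compare against E(W) = 1/4 and Var(W) = 7/144 ≈ 0.0486.
rng = np.random.default_rng(2)
u = rng.uniform(0, 1, size=1_000_000)
w = rng.uniform(0, u)    # NumPy broadcasts the per-sample upper bound U
mean_w = w.mean()        # should be close to 1/4
var_w = w.var()          # should be close to 7/144
```

The broadcast in `rng.uniform(0, u)` draws each W_i uniformly on [0, U_i], which is exactly the hierarchical model above.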
3. Suppose we have a coin that has probability θ of Heads. We don’t know the parameter θ, but we
assume its prior distribution is θ ∼ Uniform{0.25, 0.5, 0.75} (this is a discrete uniform distribution,
placing equal probability on each value in the finite list). The data we observe is X, the outcome of a
single flip of the coin, with X = 1 for Heads and X = 0 for Tails.
(a) Write down a hierarchical model for the data X.
Solution:

    θ ∼ Uniform{0.25, 0.5, 0.75}
    X | θ ∼ Bernoulli(θ).
(c) What is the posterior distribution of the parameter θ? That is, calculate the conditional distri-
bution of θ | X (or equivalently, the conditional distribution of θ | X = x, for each possible value
x).
Solution: The posterior distribution of θ is supported on the three values 0.25, 0.5, 0.75. If we observe X = 1 then, since P(X = 1) = (1/3)(0.25) + (1/3)(0.5) + (1/3)(0.75) = 0.5 by the law of total probability, we calculate

P(θ = 0.25 | X = 1) = P(θ = 0.25) P(X = 1 | θ = 0.25) / P(X = 1) = ((1/3) · 0.25) / 0.5 = 1/6
and

P(θ = 0.5 | X = 1) = P(θ = 0.5) P(X = 1 | θ = 0.5) / P(X = 1) = ((1/3) · 0.5) / 0.5 = 1/3
and

P(θ = 0.75 | X = 1) = P(θ = 0.75) P(X = 1 | θ = 0.75) / P(X = 1) = ((1/3) · 0.75) / 0.5 = 1/2.
So, conditional on X = 1, the posterior distribution of θ has PMF

    t                0.25   0.5    0.75
    fθ|X(t | 1)      1/6    1/3    1/2
If instead we observe X = 0 then by symmetry we will get the analogous answer: conditional on
X = 0, the posterior distribution of θ has PMF
    t                0.25   0.5    0.75
    fθ|X(t | 0)      1/2    1/3    1/6
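The posterior calculation above can be reproduced with exact arithmetic; this sketch (not part of the original solution) uses Python's `fractions` module to avoid any floating-point rounding:

```python
from fractions import Fraction

# Exact posterior for theta after observing X = 1, via Bayes' rule.
thetas = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]
prior = Fraction(1, 3)
joint = {t: prior * t for t in thetas}       # P(theta = t, X = 1) = (1/3) * t
p_x1 = sum(joint.values())                   # P(X = 1) = 1/2
posterior = {t: joint[t] / p_x1 for t in thetas}
```

Running this recovers the PMF values 1/6, 1/3, 1/2 from the table above.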
(d) Suppose we observe X = 1. Now we will flip the same coin again, to obtain a new outcome Y .
Conditional on what we’ve observed so far, what is the probability that Y = 1?
Solution: We have

P(Y = 1 | X = 1, θ = t) = t

for any t. This is because, if we know θ = t for some value t, then observing X doesn’t affect the distribution of Y (the coin has no memory). Averaging over the posterior of θ therefore gives

P(Y = 1 | X = 1) = Σ_t t · P(θ = t | X = 1) = 0.25 · (1/6) + 0.5 · (1/3) + 0.75 · (1/2) = 7/12.
Alternatively, we can calculate

P(Y = 1 | X = 1) = P(Y = 1, X = 1) / P(X = 1) = Σ_{t ∈ {0.25, 0.5, 0.75}} P(θ = t, Y = 1, X = 1) / Σ_{t ∈ {0.25, 0.5, 0.75}} P(θ = t, X = 1).
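This joint-probability route can also be checked exactly; the sketch below (not part of the original solution) again uses `fractions` for exact arithmetic:

```python
from fractions import Fraction

# P(Y = 1 | X = 1) as a ratio of joint probabilities:
# sum_t P(theta = t) * t * t  over  sum_t P(theta = t) * t.
thetas = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]
prior = Fraction(1, 3)
numerator = sum(prior * t * t for t in thetas)   # P(Y = 1, X = 1): both flips Heads
denominator = sum(prior * t for t in thetas)     # P(X = 1): first flip Heads
p_y1_given_x1 = numerator / denominator          # exact answer 7/12
```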
4. Suppose that X1 and X2 are independent draws from Uniform[a, b], where a < b are some constants.
(a) Consider the special case a = 0 and b = 1, i.e. the Uniform[0, 1] distribution. Based on calculations
from class we know that the order statistics, X(1) = min{X1 , X2 } and X(2) = max{X1 , X2 }, have
densities f (x) = 2 − 2x for the min X(1) , and f (x) = 2x for the max X(2) (both supported on
[0, 1]). Calculate E(X(1) ) and E(X(2) ).
Solution:

E(X(1)) = ∫_{t=0}^{1} t · (2 − 2t) dt = t² − (2/3) t³ |_{t=0}^{1} = 1/3.
E(X(2)) = ∫_{t=0}^{1} t · 2t dt = (2/3) t³ |_{t=0}^{1} = 2/3.
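A short simulation (not part of the original solution, and assuming NumPy is available) agrees with these expected values for the min and max of two Uniform[0, 1] draws:

```python
import numpy as np

# Simulate pairs of independent Uniform[0, 1] draws and check
# E(min) = 1/3 and E(max) = 2/3.
rng = np.random.default_rng(3)
x = rng.uniform(0, 1, size=(1_000_000, 2))
mean_min = x.min(axis=1).mean()   # should be close to 1/3
mean_max = x.max(axis=1).mean()   # should be close to 2/3
```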
(b) Now let a < b be any constants on the real line. Calculate the expected values E(X(1)) and E(X(2)). Your answers should be functions of a and/or b. Hint: you can think of Uniform[a, b] as a linear transformation of Uniform[0, 1].
Solution: Write U1 = (X1 − a)/(b − a), which is Uniform[0, 1] distributed, and similarly U2 = (X2 − a)/(b − a). Then

X(1) = min{X1, X2} = a + (b − a) min{U1, U2} = a + (b − a) U(1)
and so
E(X(1)) = a + (b − a) E(U(1)) = a + (b − a) · (1/3) = (2/3) a + (1/3) b
and similarly
E(X(2)) = a + (b − a) E(U(2)) = a + (b − a) · (2/3) = (1/3) a + (2/3) b.
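The general formulas can be checked by simulation for any particular endpoints; in this sketch (not part of the original solution, assuming NumPy) the endpoints a = −1, b = 5 are an arbitrary illustrative choice:

```python
import numpy as np

# Check E(X_(1)) = (2a + b)/3 and E(X_(2)) = (a + 2b)/3 for one
# arbitrary choice of endpoints a < b (here a = -1, b = 5).
a, b = -1.0, 5.0
rng = np.random.default_rng(4)
x = rng.uniform(a, b, size=(1_000_000, 2))
mean_min = x.min(axis=1).mean()   # (2a + b)/3 = 1.0
mean_max = x.max(axis=1).mean()   # (a + 2b)/3 = 3.0
```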
(c) Using your work above, find some function of X1 and X2 such that it is an unbiased estimator of
the parameter b. This term means that its expected value is equal to the parameter b. That is,
you’re looking for a function such that
E[(this function of X1 and X2)] = b.