
Student Name: Deepak Sridhar

Student ID: A53205078


Parameter Estimation I (ECE 275A)
Assignment 5

1. (a) Given a Gaussian distribution for $N$ iid samples, $p(x;\mu) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(x-\mu)^2}$.

To find the MLE, we set the derivative of the log-likelihood to zero and check that the second derivative is negative.
\[
\frac{\partial \log p(D;\mu)}{\partial \mu}
= \frac{\partial}{\partial \mu} \log \prod_{i=1}^{N} p(x_i;\mu)
= \sum_{i=1}^{N} -\frac{1}{2} \frac{\partial}{\partial \mu} (x_i-\mu)^2
= \sum_{i=1}^{N} (x_i-\mu)
= \sum_{i=1}^{N} x_i - N\mu = 0
\]
\[
\sum_{i=1}^{N} x_i = N\mu
\quad \Rightarrow \quad
\hat{\mu}_{MLE} = \frac{1}{N} \sum_{i=1}^{N} x_i
\]
Next, we verify that it is a maximum:
\[
\frac{\partial^2 \log p(D;\mu)}{\partial \mu^2}
= \sum_{i=1}^{N} \frac{\partial}{\partial \mu} (x_i-\mu)
= -N < 0
\]
Hence, it is a maximum.
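A quick numerical sanity check of this result (a sketch; the seed, sample size, and true mean are illustrative choices, not part of the assignment): the sample mean should beat any perturbed value of $\mu$ in log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=10_000)  # N iid unit-variance samples

def log_likelihood(mu, x):
    # log p(D; mu) for a unit-variance Gaussian, dropping the additive constant
    return -0.5 * np.sum((x - mu) ** 2)

mu_mle = x.mean()  # closed-form MLE derived above

# The MLE should have strictly higher log-likelihood than nearby perturbations
assert all(log_likelihood(mu_mle, x) > log_likelihood(mu_mle + d, x)
           for d in (-0.1, -0.01, 0.01, 0.1))
print(mu_mle)
```

Since the log-likelihood is a concave quadratic in $\mu$, the sample mean is its exact maximizer, which the perturbation checks confirm.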
(b) Given an Exponential distribution for $N$ iid samples, $p(x;\lambda) = \lambda e^{-\lambda x}$.

To find the MLE, we set the derivative of the log-likelihood to zero and check that the second derivative is negative.
\[
\frac{\partial \log p(D;\lambda)}{\partial \lambda}
= \frac{\partial}{\partial \lambda} \log \prod_{i=1}^{N} p(x_i;\lambda)
= \sum_{i=1}^{N} \frac{\partial}{\partial \lambda} \left( \log \lambda - \lambda x_i \right)
= \sum_{i=1}^{N} \left( \frac{1}{\lambda} - x_i \right)
= \frac{N}{\lambda} - \sum_{i=1}^{N} x_i = 0
\]
\[
\hat{\lambda}_{MLE} = \frac{1}{\frac{1}{N} \sum_{i=1}^{N} x_i} = \frac{N}{\sum_{i=1}^{N} x_i}
\]
Next, we verify that it is a maximum:
\[
\frac{\partial^2 \log p(D;\lambda)}{\partial \lambda^2}
= \sum_{i=1}^{N} \frac{\partial}{\partial \lambda} \left( \frac{1}{\lambda} - x_i \right)
= -\frac{N}{\lambda^2} < 0
\]
Hence, it is a maximum.
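The same style of check works here (again a sketch with illustrative data, not part of the assignment): the reciprocal of the sample mean should maximize the exponential log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 2.5
x = rng.exponential(scale=1.0 / lam_true, size=50_000)  # iid Exp(lambda) samples

lam_mle = 1.0 / x.mean()  # MLE derived above: N / sum(x_i)

def log_likelihood(lam, x):
    # log p(D; lambda) = N log(lambda) - lambda * sum(x_i)
    return len(x) * np.log(lam) - lam * x.sum()

# The MLE should have strictly higher log-likelihood than nearby perturbations
assert all(log_likelihood(lam_mle, x) > log_likelihood(lam_mle + d, x)
           for d in (-0.2, -0.02, 0.02, 0.2))
print(lam_mle)
```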
2. Given a uniform distribution with $N$ iid samples. The uniform distribution is given by $p(x;\theta) = \frac{1}{\theta}$ for $0 < x < \theta$ and $0$ otherwise. Let $I$ be an indicator function denoted as
\[
I(0 \le x \le \theta) =
\begin{cases}
1 & \text{for } 0 \le x \le \theta \\
0 & \text{otherwise}
\end{cases}
\]
\[
p(D;\theta) = \prod_{i=1}^{N} p(x_i;\theta)
= \prod_{i=1}^{N} \frac{1}{\theta}\, I(0 \le x_i \le \theta)
= \frac{1}{\theta^N}\, I\!\left(\theta \ge \max_i x_i\right) I\!\left(\min_i x_i \ge 0\right)
\]
Since $\frac{1}{\theta^N}$ is monotonically decreasing in $\theta$, the likelihood attains its maximum at the smallest admissible $\theta$:
\[
\hat{\theta} = \max_i x_i \quad \text{for } i = 1, 2, \cdots, N
\]

3. An unbiased estimator $\hat{\theta}$ is efficient iff the score satisfies
\[
s(x;\theta) = \frac{\partial \log p(x;\theta)}{\partial \theta} = I(\theta)(\hat{\theta} - \theta)
\]
where $I(\theta)$ is the Fisher information. Let $\tilde{\theta}$ be the maximum likelihood estimator of $\theta$. Then, by the above theorem, we have
\[
s(x;\tilde{\theta}) = \left.\frac{\partial \log p(x;\theta)}{\partial \theta}\right|_{\theta = \tilde{\theta}} = I(\tilde{\theta})(\hat{\theta} - \tilde{\theta})
\]
Since $\tilde{\theta}$ is the ML estimate, the score function evaluated at $\tilde{\theta}$ is zero. Then,
\[
I(\tilde{\theta})(\hat{\theta} - \tilde{\theta}) = 0
\]
Since $I(\tilde{\theta})$ is the Fisher information matrix and is invertible, we have $\tilde{\theta} = \hat{\theta}$. Hence, the ML estimator is the same as the efficient estimator if it exists.
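For the unit-variance Gaussian of problem 1(a) this factorization can be checked directly: the score is $\sum_i (x_i - \mu) = N(\bar{x} - \mu)$, which has exactly the form $I(\mu)(\hat{\theta} - \mu)$ with $I(\mu) = N$ and $\hat{\theta} = \bar{x}$. A small numerical sketch (illustrative data and seed):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=1.5, scale=1.0, size=1_000)
N = len(x)

def score(mu, x):
    # d/dmu of the unit-variance Gaussian log-likelihood
    return np.sum(x - mu)

fisher_info = N          # I(mu) = N for unit variance
theta_hat = x.mean()     # efficient (and ML) estimator

# score(mu) == I(mu) * (theta_hat - mu) holds for every mu, not just the truth
for mu in (0.0, 1.0, 1.5, 3.0):
    assert np.isclose(score(mu, x), fisher_info * (theta_hat - mu))

# and the score vanishes at the MLE, as used in the argument above
assert np.isclose(score(theta_hat, x), 0.0)
print(theta_hat)
```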

4. The minimum mean square estimator is given by ($c$ being the normalising constant)
\[
\hat{\theta} = E[\theta \mid x]
= \int c\, \theta\, p(x|\theta)\, p(\theta)\, d\theta
= \int c\, \theta\, p(x|\theta)\, \delta(\theta - \theta_0)\, d\theta
= \theta_0 = \theta_{MAP}
\]
where the last equality follows from the fact that the delta function is non-zero only at the specified value.

5. Given that
\[
p(x[n] \mid \theta) =
\begin{cases}
e^{-(x[n]-\theta)} & x[n] > \theta \\
0 & x[n] < \theta
\end{cases}
\qquad
p(\theta) =
\begin{cases}
e^{-\theta} & \theta > 0 \\
0 & \theta < 0
\end{cases}
\]
Let $D = \{x[0], x[1], \cdots, x[N-1]\}$, let $I$ be an indicator function denoted as
\[
I(x \in A) =
\begin{cases}
1 & \text{for } x \in A \\
0 & \text{otherwise}
\end{cases}
\]
and let $s_N = \sum_{n=0}^{N-1} x[n]$. The minimum mean square estimator is given by
\[
\hat{\theta} = E[\theta \mid D] = \int \theta\, \frac{p(D|\theta)\, p(\theta)}{p(D)}\, d\theta
\]
\[
p(D|\theta) = \prod_{n=0}^{N-1} p(x[n]|\theta)
= \prod_{n=0}^{N-1} e^{-(x[n]-\theta)}\, I(x[n] > \theta)
= e^{-(s_N - N\theta)}\, I\!\left(\min_n x[n] > \theta\right)
\]
\[
\hat{\theta}
= \frac{\int \theta\, e^{-(s_N - N\theta)}\, e^{-\theta}\, I(\min_n x[n] > \theta > 0)\, d\theta}
       {\int e^{-(s_N - N\theta)}\, e^{-\theta}\, I(\min_n x[n] > \theta > 0)\, d\theta}
= \frac{\int_0^{\min_n x[n]} \theta\, e^{(N-1)\theta}\, d\theta}
       {\int_0^{\min_n x[n]} e^{(N-1)\theta}\, d\theta}
\]
Evaluating the two integrals (for $N > 1$),
\[
\int_0^{\min_n x[n]} e^{(N-1)\theta}\, d\theta = \frac{e^{(N-1)\min_n x[n]} - 1}{N-1}
\]
\[
\int_0^{\min_n x[n]} \theta\, e^{(N-1)\theta}\, d\theta
= \frac{\min_n x[n]\, e^{(N-1)\min_n x[n]}}{N-1} - \frac{e^{(N-1)\min_n x[n]} - 1}{(N-1)^2}
\]
\[
\hat{\theta} = \frac{\min_n x[n]}{1 - e^{-(N-1)\min_n x[n]}} - \frac{1}{N-1}
\]
When $N = 1$, we have
\[
\hat{\theta} = E[\theta \mid x[0]]
= \frac{\int_0^{x[0]} \theta\, d\theta}{\int_0^{x[0]} d\theta}
= \frac{x[0]}{2}
\]
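The closed form can be cross-checked against direct numerical integration of the posterior mean (a sketch; the data vector below is illustrative, not from the assignment):

```python
import numpy as np

# Posterior mean for the shifted-exponential model of problem 5:
# the posterior is proportional to exp((N-1)*theta) on (0, min_n x[n]).
x = np.array([1.3, 0.9, 2.1, 1.7])  # illustrative data, N = 4
N = len(x)
m = x.min()

# Closed-form estimate derived above (N > 1)
theta_hat = m / (1.0 - np.exp(-(N - 1) * m)) - 1.0 / (N - 1)

# Numerical check: discrete weighted mean over a fine uniform grid
theta = np.linspace(0.0, m, 200_001)
w = np.exp((N - 1) * theta)              # unnormalized posterior density
theta_num = (theta * w).sum() / w.sum()  # grid spacing cancels in the ratio

assert abs(theta_hat - theta_num) < 1e-3
print(theta_hat)
```

As expected, the estimate lands strictly inside $(0, \min_n x[n])$, the support of the posterior.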
6. Given that
\[
p(x[n] \mid \theta) =
\begin{cases}
\frac{1}{\theta} & 0 \le x[n] \le \theta \\
0 & \text{otherwise}
\end{cases}
\qquad
p(\theta) =
\begin{cases}
\frac{1}{\beta} & 0 \le \theta \le \beta \\
0 & \text{otherwise}
\end{cases}
\]
Let $D = \{x[0], x[1], \cdots, x[N-1]\}$ and let $I$ be an indicator function denoted as
\[
I(x \in A) =
\begin{cases}
1 & \text{for } x \in A \\
0 & \text{otherwise}
\end{cases}
\]
The minimum mean square estimator is given by
\[
\hat{\theta} = E[\theta \mid D] = \int \theta\, \frac{p(D|\theta)\, p(\theta)}{p(D)}\, d\theta
\]
\[
p(D|\theta) = \prod_{n=0}^{N-1} p(x[n]|\theta)
= \prod_{n=0}^{N-1} \frac{1}{\theta}\, I(0 \le x[n] \le \theta)
= \frac{1}{\theta^N}\, I\!\left(0 \le \min_n x[n] \le \max_n x[n] \le \theta\right)
\]
\[
\hat{\theta}
= \frac{\int \theta\, \frac{1}{\theta^N}\, I(0 \le \max_n x[n] \le \theta \le \beta)\, d\theta}
       {\int \frac{1}{\theta^N}\, I(0 \le \max_n x[n] \le \theta \le \beta)\, d\theta}
= \frac{\int_{\max_n x[n]}^{\beta} \theta^{-N+1}\, d\theta}
       {\int_{\max_n x[n]}^{\beta} \theta^{-N}\, d\theta}
\]
Evaluating the integrals (for $N > 2$),
\[
\int_{\max_n x[n]}^{\beta} \theta^{-N+1}\, d\theta = \frac{\beta^{2-N} - (\max_n x[n])^{2-N}}{2-N}
\qquad
\int_{\max_n x[n]}^{\beta} \theta^{-N}\, d\theta = \frac{\beta^{1-N} - (\max_n x[n])^{1-N}}{1-N}
\]
\[
\hat{\theta}
= \frac{(1-N)}{(2-N)} \cdot \frac{\beta^{2-N} - (\max_n x[n])^{2-N}}{\beta^{1-N} - (\max_n x[n])^{1-N}}
= \frac{(N-1)}{(N-2)} \cdot \frac{(\max_n x[n])^{2-N} - \beta^{2-N}}{(\max_n x[n])^{1-N} - \beta^{1-N}}
\]
As $\beta \to \infty$ (with $N > 2$), the $\beta$ terms vanish and we have
\[
\hat{\theta} = \frac{(N-1)}{(N-2)}\, \max_n x[n]
\]
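This closed form can also be verified by numerical integration (a sketch; the data vector and the value of $\beta$ are illustrative assumptions):

```python
import numpy as np

# Posterior mean for the uniform model of problem 6:
# the posterior is proportional to theta^{-N} on [max_n x[n], beta].
x = np.array([0.8, 2.4, 1.1, 3.0, 0.5])  # illustrative data, N = 5
N = len(x)
M = x.max()
beta = 10.0

# Closed-form estimate derived above (valid for N > 2)
num = M ** (2 - N) - beta ** (2 - N)
den = M ** (1 - N) - beta ** (1 - N)
theta_hat = (N - 1) / (N - 2) * num / den

# Numerical check: discrete weighted mean over a fine uniform grid
theta = np.linspace(M, beta, 200_001)
w = theta ** (-N)                        # unnormalized posterior density
theta_num = (theta * w).sum() / w.sum()  # grid spacing cancels in the ratio

assert abs(theta_hat - theta_num) < 1e-3

# For large beta, theta_hat approaches the limit (N-1)/(N-2) * max(x)
print(theta_hat, (N - 1) / (N - 2) * M)
```

The estimate exceeds $\max_n x[n]$, as it should: the posterior only rules out values of $\theta$ below the largest observation.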
