As the number of observations grows, the MLE becomes unbiased and attains the CRLB, so it is asymptotically unbiased and asymptotically efficient. The MLE is not, however, asymptotically equivalent to the MVU estimator; rather, the MLE is asymptotically Gaussian distributed.
Maximum Likelihood Estimation
Example:
Consider estimating A in the model x[n] = A + w[n], n = 0, 1, …, N − 1, where w[n] is white Gaussian noise with variance A (a DC level in white noise whose power also depends on A).
We then obtain
Â² + Â − (1/N) ∑_{n=0}^{N−1} x²[n] = 0
There are two solutions, but we pick the one that always leads to a positive Â, and it can be checked that this corresponds to a maximum of p(x; A):
Â = −1/2 + √( (1/N) ∑_{n=0}^{N−1} x²[n] + 1/4 )
E(Â) → A   and   var(Â) → A² / (N (A + 1/2))
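The closed-form estimator above can be checked numerically. A minimal sketch, assuming the model x[n] = A + w[n] with w[n] white Gaussian noise of variance A (the variance structure implied by the quadratic in Â); the names `A_true` and `A_hat` are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = 3.0
N = 100_000

# x[n] = A + w[n], with w[n] ~ N(0, A): A sets both the mean and the noise power
x = A_true + rng.normal(0.0, np.sqrt(A_true), size=N)

# Positive root of the quadratic:  Â = -1/2 + sqrt( (1/N) * sum(x^2) + 1/4 )
A_hat = -0.5 + np.sqrt(np.mean(x**2) + 0.25)
print(A_hat)  # close to A_true for large N
```

For large N the estimate concentrates around A, consistent with the asymptotic mean and variance stated above.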
Maximum Likelihood Estimation (Linear Gaussian Model)
For a given value of x this is a numerical value. Note that since x is a stochastic variable that can take many values, θ̂ is a stochastic variable as well.
Proof:

∂J/∂θ = −2 hᵀC⁻¹x + 2 hᵀC⁻¹h θ = 0   ⇒   θ̂ = (hᵀC⁻¹h)⁻¹ hᵀC⁻¹x

Remark: For the linear Gaussian model, the MLE is equivalent to the MVU estimator.
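The solution of the first-order condition above can be sketched numerically. A minimal example, assuming a scalar θ, a known model vector h (taken here as an all-ones DC vector), and a known diagonal noise covariance C; all of these concrete choices are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, theta_true = 500, 2.0

h = np.ones(N)                            # known model vector (assumed: DC level)
C = np.diag(rng.uniform(0.5, 2.0, size=N))  # known noise covariance (assumed diagonal)
w = rng.multivariate_normal(np.zeros(N), C)
x = h * theta_true + w                    # linear Gaussian model x = h*theta + w

# MLE / MVU for the linear Gaussian model:
# theta_hat = (h^T C^{-1} h)^{-1} h^T C^{-1} x
Ci = np.linalg.inv(C)
theta_hat = (h @ Ci @ x) / (h @ Ci @ h)
print(theta_hat)
```

The weighting by C⁻¹ down-weights noisy samples; with white noise (C ∝ I) it reduces to the ordinary sample mean.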
MLE for Transformed Parameters
The MLE of the transformed parameter α = g(θ), where the PDF p(x; θ) is parametrized by θ, is given by

α̂ = g(θ̂)

(the invariance property of the MLE).
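The invariance property can be illustrated with a hypothetical model not taken from the text: x[n] = A + w[n] with w[n] ~ N(0, σ²) and σ known, for which the MLE of A is the sample mean. The MLE of the power α = g(A) = A² then follows directly:

```python
import numpy as np

rng = np.random.default_rng(2)
A_true, sigma, N = 1.5, 1.0, 10_000
x = A_true + rng.normal(0.0, sigma, size=N)

A_hat = x.mean()        # MLE of A for this model (sample mean)
alpha_hat = A_hat**2    # MLE of alpha = A^2 by the invariance property
print(A_hat, alpha_hat)
```

No separate maximization over α is needed; transforming the MLE of θ gives the MLE of g(θ).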
Least Squares Estimation
Given data modeled as x = h(θ, w), where w is some noise vector, the least squares estimator (LSE) finds the θ for which

‖x − h(θ, 0)‖²

is minimal.
Properties:
Least Squares Estimation (Linear Model)
θ̂ = argmin_θ ‖x − hθ‖²
Proof: As before
Remark: For the linear model the LSE corresponds to the BLUE when the noise is
white, and to the MVU estimator when the noise is Gaussian and white.
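A minimal numerical sketch of the linear LSE, again for a scalar θ; the regressor vector h and the noise level here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N, theta_true = 200, 0.7
h = rng.normal(size=N)                        # known regressor (assumed)
x = h * theta_true + rng.normal(0.0, 0.5, N)  # white noise: LSE = BLUE here

# argmin_theta ||x - h*theta||^2  ->  theta_hat = (h^T h)^{-1} h^T x
theta_hat = (h @ x) / (h @ h)
print(theta_hat)
```

Note that, unlike the MLE, no statistical model of the noise is needed to compute the LSE; only when the noise is white (and Gaussian) does it coincide with the BLUE (and MVU) estimator.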
Orthogonality Condition
For the linear model the LSE leads to the following orthogonality condition:
(x − hθ̂)ᵀ h = 0   ⇔   (x − hθ̂) ⊥ h

[Figure: hθ̂ is the orthogonal projection of x onto the direction of h; the residual x − hθ̂ is perpendicular to h.]
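The orthogonality condition can be verified numerically for the scalar linear LSE (illustrative data, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100
h = rng.normal(size=N)
x = 1.3 * h + rng.normal(0.0, 0.2, N)

theta_hat = (h @ x) / (h @ h)    # linear LSE
residual = x - h * theta_hat

# (x - h*theta_hat)^T h should vanish up to floating-point error
print(abs(residual @ h))
```

Geometrically, hθ̂ is the point in the span of h closest to x, so the residual has no component along h.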