Lect 11
If $X_1, X_2, \ldots, X_n$ are iid with density $f(x \,|\, \theta)$, the likelihood of $\theta$ is the product
\[
\mathrm{lik}(\theta) = \prod_{i=1}^{n} f(x_i \,|\, \theta)
\]
Rather than maximising this product, which can be quite tedious, we often use the fact that the logarithm is an increasing function, so it is equivalent to maximise the log-likelihood:
\[
l(\theta) = \sum_{i=1}^{n} \log f(x_i \,|\, \theta)
\]
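As a concrete illustration of this definition, here is a minimal Python sketch (not from the lecture; the helper name, toy data, and the choice of a Poisson model are assumptions for illustration):

```python
import numpy as np
from scipy.stats import poisson

def log_likelihood(theta, data, logpdf):
    """Evaluate l(theta) = sum_i log f(x_i | theta)."""
    return np.sum(logpdf(data, theta))

# Illustrative data and model: a Poisson frequency function,
# with lambda playing the role of theta.
data = np.array([2, 3, 1, 4, 2])
print(log_likelihood(2.0, data, lambda x, lam: poisson.logpmf(x, lam)))
```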
9.0.1 Poisson Example
The Poisson frequency function is
\[
P(X = x) = \frac{\lambda^{x} e^{-\lambda}}{x!}
\]
$X_1, X_2, \ldots, X_n$ iid Poisson random variables have a joint frequency function that is the product of the marginal frequency functions; the log-likelihood is thus
\[
l(\lambda) = \sum_{i=1}^{n} \left( x_i \log \lambda - \lambda - \log x_i! \right) = \log \lambda \sum_{i=1}^{n} x_i - n\lambda - \sum_{i=1}^{n} \log x_i!
\]
Setting the derivative to zero,
\[
l'(\lambda) = \frac{1}{\lambda} \sum_{i=1}^{n} x_i - n = 0,
\]
which gives $\hat{\lambda} = \bar{X}$
(as long as we check that the function l is actually concave, which it is).
The MLE agrees with the method-of-moments estimate in this case, and so does its sampling distribution.
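As a numerical sanity check on this derivation, the sketch below (with made-up data) maximises the Poisson log-likelihood, dropping the $\sum \log x_i!$ term since it does not depend on $\lambda$, and confirms the optimum matches the sample mean:

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([3, 1, 4, 2, 5, 0, 2, 3])  # made-up counts

def neg_log_lik(lam):
    # -l(lambda), with the constant sum(log x_i!) term dropped
    return -(np.log(lam) * x.sum() - len(x) * lam)

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 20), method="bounded")
print(res.x, x.mean())  # both should equal lambda-hat = x-bar
```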
9.0.2 Normal Example
For $X_1, \ldots, X_n$ iid $N(\mu, \sigma^2)$, the likelihood is
\[
f(x_1, \ldots, x_n \,|\, \mu, \sigma) = \prod_{i=1}^{n} \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{1}{2} \left[ \frac{x_i - \mu}{\sigma} \right]^2 \right)
\]
so the log-likelihood is
\[
l(\mu, \sigma) = -n \log \sigma - \frac{n}{2} \log 2\pi - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2
\]
with partial derivatives
\[
\frac{\partial l}{\partial \mu} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu), \qquad
\frac{\partial l}{\partial \sigma} = -\frac{n}{\sigma} + \sigma^{-3} \sum_{i=1}^{n} (x_i - \mu)^2
\]
Setting both to zero gives $\hat{\mu} = \bar{X}$ and $\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{X})^2$.
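A short sketch of these closed-form estimates (the simulated data are an illustrative assumption); note the MLE of $\sigma^2$ divides by $n$, not $n - 1$:

```python
import numpy as np

# Simulated sample, standing in for observed data
x = np.random.default_rng(0).normal(loc=10, scale=2, size=500)

mu_hat = x.mean()                          # mu-hat = x-bar
sigma2_hat = np.mean((x - mu_hat) ** 2)    # (1/n) sum (x_i - x-bar)^2
print(mu_hat, np.sqrt(sigma2_hat))
```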
9.0.3 Gamma Example
The gamma density is
\[
f(x \,|\, \alpha, \lambda) = \frac{1}{\Gamma(\alpha)} \lambda^{\alpha} x^{\alpha - 1} e^{-\lambda x}
\]
so the log-likelihood of an iid sample is
\[
l(\alpha, \lambda) = n\alpha \log \lambda + (\alpha - 1) \sum_{i=1}^{n} \log x_i - \lambda \sum_{i=1}^{n} x_i - n \log \Gamma(\alpha)
\]
Setting the partial derivatives to zero gives equations that cannot be solved in closed form, so the MLE must be computed numerically.
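Since no closed form exists here, one option is to minimise the negative log-likelihood numerically; this is a sketch under illustrative assumptions (simulated data, starting values, and bounds are all arbitrary choices), not the lecture's method:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.5, scale=1 / 0.8, size=1000)  # true alpha=2.5, lambda=0.8

def neg_log_lik(params):
    # -l(alpha, lambda) in the (shape, rate) parameterisation above
    a, lam = params
    n = len(x)
    return -(n * a * np.log(lam) + (a - 1) * np.log(x).sum()
             - lam * x.sum() - n * gammaln(a))

res = minimize(neg_log_lik, x0=[1.0, 1.0],
               bounds=[(1e-6, None), (1e-6, None)])
alpha_hat, lambda_hat = res.x
print(alpha_hat, lambda_hat)
```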
9.1 Multinomial Example
\[
f(x_1, x_2, \ldots, x_m \,|\, p_1, \ldots, p_m) = \frac{n!}{\prod_{i=1}^{m} x_i!} \prod_{i=1}^{m} p_i^{x_i} = \binom{n}{x_1, x_2, \ldots, x_m} p_1^{x_1} p_2^{x_2} \cdots p_m^{x_m}
\]
Each box taken separately against all the other boxes is a binomial; the multinomial is an extension thereof (look at page 72).
We study the log-likelihood of this:
\[
l(p_1, p_2, \ldots, p_m) = \log n! - \sum_{i=1}^{m} \log x_i! + \sum_{i=1}^{m} x_i \log p_i
\]
However, we can't just go ahead and maximise this; we have to take the constraint $\sum_{i=1}^{m} p_i = 1$ into account, so we have to use Lagrange multipliers again.
We use
\[
L(p_1, p_2, \ldots, p_m, \lambda) = l(p_1, p_2, \ldots, p_m) + \lambda \left( 1 - \sum_{i=1}^{m} p_i \right)
\]
Setting $\partial L / \partial p_i = x_i / p_i - \lambda = 0$ for each $i$ and applying the constraint $\sum_{i=1}^{m} p_i = 1$ gives $\lambda = \sum_{i=1}^{m} x_i = n$, hence
\[
\hat{p}_i = \frac{x_i}{n}
\]
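To check the Lagrange-multiplier result, this sketch (with made-up counts) maximises the multinomial log-likelihood numerically under the constraint $\sum_i p_i = 1$ and compares the optimum with $x_i / n$:

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([12, 30, 18, 40])  # made-up cell counts
n = x.sum()

def neg_log_lik(p):
    # Only the sum x_i log p_i term depends on p
    return -np.sum(x * np.log(p))

res = minimize(neg_log_lik, x0=np.full(len(x), 1 / len(x)),
               bounds=[(1e-9, 1)] * len(x),
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}])
print(res.x)   # numerical optimum
print(x / n)   # closed-form MLE from the Lagrange argument
```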
Maximising the log-likelihood, with or without constraints, can be unsolvable in closed form; in that case we have to use iterative procedures.
I explained how the parametric bootstrap is often the only way to study the sampling distribution of the MLE.
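A sketch of that idea for the Poisson case (the "observed" data are simulated and B = 1000 is an arbitrary choice): fit $\hat{\lambda}$, draw bootstrap samples from the fitted model, refit on each, and use the spread of the refits as the sampling distribution of the MLE:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.poisson(lam=3.0, size=200)  # stands in for the observed data

lam_hat = x.mean()                  # Poisson MLE from above
B = 1000
boot = np.array([rng.poisson(lam_hat, size=len(x)).mean() for _ in range(B)])
print(lam_hat, boot.std())          # bootstrap standard error of the MLE
```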