Statistical Methods
Lecture 20
Theory of Estimation-7
Niladri Chatterjee
Dept. of Mathematics

The main question that remains unanswered is how to find an estimator.
Estimation can be of two types:
a) Point estimation
In this case, we try to obtain a single value for the unknown parameter θ.
E.g., Ber(p): 10 tosses, 4 heads ⇒ p̂ = 0.4.
But suppose the true value is p = 0.25.
Then one is not getting the right estimate.
Suppose we conduct the same experiment 100 times. We get a histogram of the estimates.
[Histogram: frequency of the number of heads (0–7) over the 100 repetitions; e.g. 0 heads → 6 times, 1 → 18, 2 → 28, …]
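The spread of the point estimate across repetitions can be illustrated with a short simulation (a sketch, not from the lecture; the seed and counts are arbitrary): repeat the 10-toss experiment 100 times with true p = 0.25 and tabulate the estimates, as in the histogram above.

```python
import random
from collections import Counter

random.seed(1)
true_p, tosses, repeats = 0.25, 10, 100

# For each repetition, count heads in 10 Bernoulli(0.25) tosses;
# the point estimate for that repetition is p_hat = heads / tosses.
heads_counts = [sum(random.random() < true_p for _ in range(tosses))
                for _ in range(repeats)]
histogram = Counter(heads_counts)

for k in sorted(histogram):
    print(k, histogram[k])
```

Individual estimates scatter around 0.25, which is why a single point estimate can miss the true value.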
There are many methods for point estimation. The two most important are:
a) Method of Maximum Likelihood
b) Method of Moments

Method of Maximum Likelihood Estimation (MLE)
Let L_θ(x₁, x₂, …, xₙ) be the likelihood.
The philosophy of MLE is: given a sample, consider L to be a function of θ.
Hence the estimate will be the one that maximizes L for the given sample.
Q: How do we do this?
The estimate θ̂ is such that ∂L/∂θ = 0 and ∂²L/∂θ² < 0 at θ = θ̂.
However, most often we use ∂ log L/∂θ = 0.
If θ is a vector (θ₁, θ₂, …, θₖ) then
a) ∂ log L/∂θᵢ = 0 ∀ i = 1, …, k
b) The matrix ((∂² log L/∂θᵢ∂θⱼ)) is negative definite.

Ex: Bin(m, p)
Observe x₁, x₂, …, xₙ, each from Bin(m, p).
L(x₁, …, xₙ) = C(m, x₁) p^x₁ (1−p)^(m−x₁) × … × C(m, xₙ) p^xₙ (1−p)^(m−xₙ)
            = ∏ᵢ C(m, xᵢ) · p^(Σxᵢ) (1−p)^(nm−Σxᵢ)
∴ log L = Σxᵢ log p + (nm − Σxᵢ) log(1−p) + const.
∂ log L/∂p = Σxᵢ/p − (nm − Σxᵢ)/(1−p) = 0
⇒ (1−p) Σxᵢ = p (nm − Σxᵢ)
⇒ Σxᵢ = nmp ⇒ p̂ = Σxᵢ/(nm)
Q: Is p̂ = Σxᵢ/(nm) a maximum?
Now ∂² log L/∂p² = −Σxᵢ/p² − (nm − Σxᵢ)/(1−p)² < 0
Q: Is it always true?

Theorem: If a sufficient statistic exists, then the MLE is a function of it.
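A quick numerical check of the closed form (an illustrative sketch; the parameter values are made up): simulate Bin(m, p) data and compare p̂ = Σxᵢ/(nm) with the maximizer of log L found by grid search.

```python
import math
import random

random.seed(0)
m, p_true, n = 20, 0.3, 500

# n observations, each a Bin(m, p_true) count
xs = [sum(random.random() < p_true for _ in range(m)) for _ in range(n)]

# Closed-form MLE from the derivation: p_hat = sum(x_i) / (n*m)
p_hat = sum(xs) / (n * m)

# log-likelihood up to the constant term sum(log C(m, x_i))
def log_lik(p):
    s = sum(xs)
    return s * math.log(p) + (n * m - s) * math.log(1 - p)

# Grid search confirms p_hat maximizes log L (log L is concave in p)
grid = [i / 1000 for i in range(1, 1000)]
p_grid = max(grid, key=log_lik)

print(p_hat, p_grid)
```

The grid maximizer agrees with the closed form up to the grid spacing, consistent with the second derivative being negative for all p in (0, 1).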
Suppose T(x₁, x₂, …, xₙ) is a sufficient statistic for estimating θ.
Then, by the factorization theorem, L_θ(x₁, x₂, …, xₙ) = g_θ(t) · h(x₁, x₂, …, xₙ).
∴ log L = log(g_θ(t)) + log(h(x₁, x₂, …, xₙ))
∴ ∂ log L/∂θ = ∂ log(g_θ(t))/∂θ, which is a function of t and θ only.
Hence the solution to ∂ log L/∂θ = 0 involves the sample only in the form of t.
Thus the MLE is a function of T.

Results without proof
(A) If for a distribution f(x, θ) the MVB estimator T exists for θ, then the likelihood equation will have a solution equal to the estimator T.
Note: MVB = Minimum Variance Bound estimator.
Def: An unbiased estimator T of ψ(θ) that attains the CR-bound is called its MVB.
(B) If T is the MLE for θ and ψ(θ) is a one-to-one function of θ, then ψ(T) is the MLE for ψ(θ).
Note: Thus if we know the MLE for σ², we know the MLE for σ easily.

Ex: N(μ, σ²)
L(x₁, x₂, …, xₙ | μ, σ²) = ∏ᵢ (1/(σ√(2π))) e^(−(xᵢ−μ)²/(2σ²)) = (2πσ²)^(−n/2) e^(−Σ(xᵢ−μ)²/(2σ²))
Therefore log L = −(n/2) log(2π) − n log σ − (1/(2σ²)) Σ(xᵢ − μ)²
Case 1: σ² is known: ∂(log L)/∂μ = (1/σ²) Σ(xᵢ − μ) = 0 ⇒ μ̂ = x̄
Case 2a: μ is known: ∂(log L)/∂σ = −n/σ + (1/σ³) Σ(xᵢ − μ)² = 0 ⇒ σ̂² = Σ(xᵢ − μ)²/n
Case 2b: μ is unknown: ∂(log L)/∂σ = −n/σ + (1/σ³) Σ(xᵢ − μ̂)² = 0 ⇒ σ̂² = Σ(xᵢ − x̄)²/n

Ex: N(μ, σ²): Simultaneous estimation of both the parameters
Thus we get: μ̂ = x̄ and σ̂² = Σ(xᵢ − x̄)²/n, that is, the sample variance.
Note: MLE need not be unbiased.
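The note that the MLE need not be unbiased can be checked empirically (a sketch with made-up parameters): for N(μ, σ²), the MLE σ̂² = Σ(xᵢ − x̄)²/n has expectation (n−1)σ²/n, not σ².

```python
import random

random.seed(42)
mu, sigma2, n, reps = 5.0, 4.0, 10, 20000

def mle_var(xs):
    # MLE of sigma^2: divide by n (not n-1 as in the unbiased estimator)
    xbar = sum(xs) / len(xs)
    return sum((x - xbar) ** 2 for x in xs) / len(xs)

# Average the MLE over many samples to approximate its expectation
estimates = []
for _ in range(reps):
    xs = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    estimates.append(mle_var(xs))

avg = sum(estimates) / reps
print(avg)  # close to (n-1)/n * sigma2 = 3.6, below the true sigma2 = 4.0
```

The average sits near 3.6 rather than 4.0, exhibiting the bias factor (n−1)/n.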
Ex: X ~ N(θ, θ). What is the MLE for θ?
The likelihood can be written as
L(x₁, x₂, …, xₙ | θ) = ∏ᵢ (1/√(2πθ)) e^(−(xᵢ−θ)²/(2θ)) = (2πθ)^(−n/2) e^(−(1/(2θ)) Σ(xᵢ−θ)²)
Therefore, log L = −(n/2) log θ − (n/2) log(2π) − (1/(2θ)) Σ(xᵢ − θ)²
                = −(n/2) log θ − (n/2) log(2π) − (1/(2θ)) (Σxᵢ² + nθ² − 2θ Σxᵢ)
                = −(n/2) log θ − (n/2) log(2π) − (1/(2θ)) Σxᵢ² − nθ/2 + Σxᵢ
Then ∂ log L/∂θ = −n/(2θ) + (1/(2θ²)) Σxᵢ² − n/2
Now, ∂ log L/∂θ = 0 ⇒ −n/(2θ) + (1/(2θ²)) Σxᵢ² − n/2 = 0
Thus nθ² + nθ − Σxᵢ² = 0.
On solving this quadratic equation, and taking the positive root since θ > 0, we get
θ̂_MLE = (−1 + √(1 + 4Σxᵢ²/n)) / 2
Thus θ̂_MLE is the MLE of θ.

Ex: Γ(λ, α)
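A sanity check of the quadratic solution (an illustrative sketch; parameter values are made up): with m₂ = Σxᵢ²/n, the positive root of θ² + θ − m₂ = 0 should be close to the true θ on simulated N(θ, θ) data.

```python
import math
import random

random.seed(7)
theta_true, n = 3.0, 100000

# X ~ N(theta, theta): mean theta, variance theta (gauss takes the std dev)
xs = [random.gauss(theta_true, math.sqrt(theta_true)) for _ in range(n)]

m2 = sum(x * x for x in xs) / n               # (1/n) * sum(x_i^2)
theta_hat = (-1 + math.sqrt(1 + 4 * m2)) / 2  # positive root of t^2 + t - m2 = 0

print(theta_hat)
```

Since E[X²] = θ + θ², the population version of the quadratic has root exactly θ, so θ̂ converges to the true value.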
L(x₁, …, xₙ | λ, α) = (λ^α/Γ(α))ⁿ (∏ xᵢ)^(α−1) e^(−λ Σxᵢ)
∴ log L = nα log λ − n log Γ(α) − λ Σxᵢ + (α − 1) log(∏ xᵢ)
∂ log L/∂λ = nα/λ − Σxᵢ = 0 ⇒ λ̂ = nα/Σxᵢ = α/x̄
∂ log L/∂α = n log λ − n Γ′(α)/Γ(α) + Σ log xᵢ
Case 1: α is known — there is no problem.
Case 2: α is unknown. Setting ∂ log L/∂α = 0 we have
Γ′(α)/Γ(α) = log λ + (1/n) Σ log xᵢ
Then to get λ̂ we need to compute α̂.
We need to solve this non-linear equation numerically.
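A sketch of the numerical solution (using SciPy's digamma for Γ′/Γ and a bracketing root-finder; the parameter values are made up): substituting λ = α/x̄ into the α-equation gives log α − Γ′(α)/Γ(α) = log x̄ − (1/n) Σ log xᵢ, which has a unique positive root.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

rng = np.random.default_rng(0)
alpha_true, lam_true, n = 2.5, 1.5, 100000

# Gamma(alpha, rate lambda) sample; numpy parameterizes by scale = 1/lambda
xs = rng.gamma(shape=alpha_true, scale=1.0 / lam_true, size=n)

# Profile equation after substituting lambda = alpha / xbar:
#   log(alpha) - digamma(alpha) = log(xbar) - mean(log x)
# The left side decreases from +inf to 0, so brentq finds the unique root.
c = np.log(xs.mean()) - np.log(xs).mean()
alpha_hat = brentq(lambda a: np.log(a) - digamma(a) - c, 1e-6, 1e6)
lam_hat = alpha_hat / xs.mean()

print(alpha_hat, lam_hat)
```

With a large sample, the numerically solved (α̂, λ̂) land close to the true parameters.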