1. Unbiased: E(W) = θ (centered around the true value)
2. Consistent: W ≈ θ for a large number of observations
3. Efficient: W₁ is better than W₂ if E(W₁ − θ)² < E(W₂ − θ)²
4. Minimum Variance: efficient AND achieves the CRLB
5. Sufficient statistic for θ
Let θ̂ be an estimator for θ. We say that θ̂ is unbiased for θ if E(θ̂) = θ.
If E(θ̂) = θ + b(θ) with b(θ) ≠ 0, then θ̂ is a biased estimator for θ with bias b(θ).
If one writes θ̂ = θ + error of estimation, then an accurate estimator is one resulting in small estimation errors, so that estimated values will be near the true value.
If an estimator tₙ estimates θ, then the difference between them (tₙ − θ) is called the estimation error. The bias of the estimator is defined as the expectation value of this difference:
Bθ = E(tₙ − θ) = E(tₙ) − θ
If the bias is equal to zero, then the estimator is called unbiased. For example, the sample mean is an unbiased estimator:
E(x̄ − μ) = E((1/n) Σᵢ₌₁ⁿ xᵢ − μ) = (1/n) Σᵢ₌₁ⁿ E(xᵢ) − μ = (1/n) Σᵢ₌₁ⁿ μ − μ = 0
Here we used the fact that expectation and summation can change order (remember that expectation is integration for continuous random variables and summation for discrete random variables) and that the expectation of each sample point is equal to the population mean.
Knowledge of the population distribution was not necessary for this derivation of the unbiasedness of the sample mean. The result holds for samples taken from a population with any distribution for which the first moment exists.
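The distribution-free claim above can be sketched with a quick Monte Carlo check, here assuming NumPy is available; the exponential population, its mean, the seed, and the sample sizes are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 200_000

# A skewed, non-normal population: exponential with mean μ = 2 (arbitrary demo value).
mu = 2.0
samples = rng.exponential(scale=mu, size=(reps, n))
sample_means = samples.mean(axis=1)

# Averaging the estimator over many samples approximates E(x̄),
# which should be close to the population mean μ even for this skewed population.
estimated_expectation = sample_means.mean()
print(estimated_expectation)   # ≈ 2.0
```

The average of x̄ over the replications lands near μ even though the population is far from normal, matching the derivation.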
A biased estimator θ̂ can become approximately unbiased when n is large.
If lim(n→∞) b(θ) = 0, then the estimator θ̂ is asymptotically unbiased.
Suppose that X₁, X₂, …, Xₙ constitute a random sample from a distribution with mean μ₁ and standard deviation σ₁, and that Y₁, Y₂, …, Yₙ (independent of the Xᵢ's) constitute a random sample from a distribution with mean μ₂ and standard deviation σ₂.
Use the rules of expected value to show that X̄ − Ȳ is an unbiased estimator of μ₁ − μ₂.
Solution:
E[X̄ − Ȳ] = E[X̄] − E[Ȳ] = μ₁ − μ₂
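This expectation rule can be checked numerically; a minimal sketch assuming NumPy, with hypothetical population parameters (μ₁ = 5, σ₁ = 2, μ₂ = 3, σ₂ = 1) chosen only for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 8, 100_000
mu1, sigma1 = 5.0, 2.0   # hypothetical population 1
mu2, sigma2 = 3.0, 1.0   # hypothetical population 2, independent of the first

x = rng.normal(mu1, sigma1, size=(reps, n))
y = rng.normal(mu2, sigma2, size=(reps, n))

diff = x.mean(axis=1) - y.mean(axis=1)   # X̄ − Ȳ for each replication
print(diff.mean())                        # ≈ μ₁ − μ₂ = 2.0
```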
We have shown that when sampling from N(θ₁ = μ, θ₂ = σ²), the maximum likelihood estimators of μ and σ² are
θ̂₁ = μ̂ = X̄   and   θ̂₂ = σ̂² = (n − 1)S²/n
Recalling that the distribution of X̄ is N(μ, σ²/n), we see that E(X̄) = μ; X̄ is an unbiased estimator of μ.
Thus, we know that the distribution of (n − 1)S²/σ² is χ²ₙ₋₁, where S² = (1/(n − 1)) Σᵢ₌₁ⁿ (Xᵢ − X̄)².
Hence,
E(S²) = (σ²/(n − 1)) E[(n − 1)S²/σ²] = (σ²/(n − 1)) (n − 1) = σ²
Therefore, S² is an unbiased estimator of σ².
Consequently,
E(σ̂²) = E[(n − 1)S²/n] = ((n − 1)/n) E[S²] = ((n − 1)/n) σ²
Therefore, θ̂₂ = σ̂² is a biased estimator of σ².
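The contrast between the unbiased S² and the biased MLE σ̂² can be simulated; a sketch assuming NumPy, with σ = 3, n = 5, and the seed as illustrative choices (NumPy's `ddof` argument selects the divisor: `ddof=1` gives n − 1, `ddof=0` gives n):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, sigma = 5, 300_000, 3.0

x = rng.normal(0.0, sigma, size=(reps, n))
s2 = x.var(axis=1, ddof=1)           # S², divisor n − 1 (unbiased)
sigma2_mle = x.var(axis=1, ddof=0)   # σ̂² = (n − 1)S²/n, the MLE (biased)

print(s2.mean())           # ≈ σ² = 9
print(sigma2_mle.mean())   # ≈ ((n − 1)/n)σ² = 7.2
```

With n = 5 the MLE underestimates σ² by the factor (n − 1)/n = 0.8, exactly as the expectation above predicts.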
The expectation value of the square of the difference between the estimator and the expectation of the estimator is called its variance:
V = E(tₙ − E(tₙ))²
Exercise: What is the variance of the sample mean?
As we noted, if the estimator for θ is tₙ, then the difference between them is the error of the estimation. The expectation value of this error is the bias. The expectation value of the square of this error is called the mean square error (m.s.e.):
M = E(tₙ − θ)²
It can be expressed through the bias and the variance of the estimator:
M(tₙ) = E(tₙ − θ)² = E(tₙ − E(tₙ) + E(tₙ) − θ)² = E(tₙ − E(tₙ))² + (E(tₙ) − θ)² = V(tₙ) + B²(tₙ)
(the cross term vanishes because E(tₙ − E(tₙ)) = 0). The M.S.E. is equal to the square of the estimator's bias plus the variance of the estimator. If the bias is 0, then the MSE is equal to the variance.
In estimation there is usually a trade-off between unbiasedness and minimum variance. Ideally we would like a minimum variance unbiased estimator, but that is not always possible.
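The decomposition M = V + B² can be verified on simulated data; a sketch assuming NumPy, using the biased variance estimator s² (divisor n) as tₙ, with σ² = 4 and n = 4 as arbitrary demo values:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, sigma2 = 4, 400_000, 4.0

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
t = x.var(axis=1, ddof=0)          # the estimator tₙ = s² (divisor n)

mse = np.mean((t - sigma2) ** 2)   # M = E(tₙ − σ²)²
var = t.var()                      # V = E(tₙ − E(tₙ))²
bias = t.mean() - sigma2           # B = E(tₙ) − σ²

print(mse, var + bias ** 2)        # the two quantities coincide
```

The identity holds exactly for the sample moments, not just in expectation, so the two printed numbers agree to floating-point precision.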
Let S² be a sample variance for a random sample X₁, X₂, …, Xₙ from N(μ, σ²), with s² = Σᵢ (xᵢ − x̄)²/n. Is this estimator biased or unbiased? Note: the mean of a chi-square distribution is its degrees of freedom.
Solution:
s² = Σᵢ (xᵢ − x̄)²/n,   and   Σᵢ (xᵢ − x̄)²/σ² ~ χ²ₙ₋₁
E(s²) = E[Σᵢ (xᵢ − x̄)²/n] = (σ²/n) E[Σᵢ (xᵢ − x̄)²/σ²] = (σ²/n)(n − 1) = σ² − σ²/n
Since E(s²) ≠ σ², s² is a biased estimator, and the bias is −σ²/n.
If an estimator is such that its bias converges to zero as n → ∞, then we say that it is asymptotically unbiased.
s² is asymptotically unbiased: since E(s²) = σ² − σ²/n, we have lim(n→∞) E(s²) = σ².
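The shrinking bias −σ²/n can be seen directly by simulating s² at several sample sizes; a sketch assuming NumPy, with σ² = 1 and the particular sample sizes chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2, reps = 1.0, 100_000

biases = {}
for n in (2, 10, 100):
    x = rng.normal(0.0, 1.0, size=(reps, n))
    # s² with divisor n (ddof=0); its simulated bias should be about −σ²/n
    biases[n] = x.var(axis=1, ddof=0).mean() - sigma2

print(biases)  # roughly {2: -0.5, 10: -0.1, 100: -0.01}
```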
If w = Σᵢ (xᵢ − x̄)²/(n − 1), is this estimator an unbiased estimator of σ²?
Solution:
E(w) = E[Σᵢ (xᵢ − x̄)²/(n − 1)] = (σ²/(n − 1)) E[Σᵢ (xᵢ − x̄)²/σ²] = (σ²/(n − 1))(n − 1) = σ²
w is an unbiased estimator for σ².
THEOREM
Let y = U(x₁, x₂, …, xₙ) be an unbiased estimator for θ based on a random sample x₁, x₂, …, xₙ of size n from the function f(x; θ). Assume we can differentiate inside the integral whenever needed. Then the CRLB (Cramér–Rao lower bound) for Var(y) is given by
σ²_y ≥ 1 / ( n E{ [∂ ln f(x, θ)/∂θ]² } ) = 1 / ( n E{ −∂² ln f(x, θ)/∂θ² } )
Corollary
Suppose y is not unbiased; say E(y) = θ + b(θ). Then the CRLB for y is:
σ²_y ≥ [1 + b′(θ)]² / ( n E{ −∂² ln f(x, θ)/∂θ² } )
Get the CRLB and find the variance for this estimator:
f(x; θ) = (1/θ) e^(−x/θ),  0 < x < ∞;  0 elsewhere;   y ~ gamma(α = 1, β = θ)
Solution:
ln f(x; θ) = −ln θ − x/θ
∂ ln f(x; θ)/∂θ = −1/θ + x/θ²
∂² ln f(x; θ)/∂θ² = 1/θ² − 2x/θ³
E[∂² ln f(x; θ)/∂θ²] = 1/θ² − 2E(x)/θ³ = 1/θ² − 2θ/θ³ = 1/θ² − 2/θ² = −1/θ²
CRLB:
σ²_y ≥ 1 / ( n E[−∂² ln f(x, θ)/∂θ²] ) = 1 / ( n (1/θ²) ) = θ²/n
For the gamma(α = 1, β = θ) family, E(X) = αβ = 1·θ = θ and V(X) = αβ² = 1·θ² = θ². The first sample moment gives M₁ = x̄ = Σᵢ xᵢ/n, so θ̂ = x̄ and E(X̄) = θ.
V(θ̂) = V(x̄) = V(Σᵢ xᵢ/n) = (1/n²) Σᵢ V(xᵢ) = (1/n²) n θ² = θ²/n
Therefore, the variance of θ̂ is equal to the CRLB.
If an unbiased estimator θ̂ achieves the CRLB, we say that θ̂ is efficient for estimating θ, and θ̂ is called the uniformly minimum variance (or best) unbiased estimator.
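This efficiency result can be checked by simulation: for exponential data (gamma with α = 1), the variance of x̄ should sit right at the bound θ²/n. A sketch assuming NumPy; θ = 2, n = 20, and the seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 2.0, 20, 300_000

# Exponential with mean θ is gamma(α = 1, β = θ).
x = rng.exponential(scale=theta, size=(reps, n))
theta_hat = x.mean(axis=1)      # θ̂ = x̄, unbiased for θ

crlb = theta ** 2 / n           # CRLB = θ²/n = 0.2
print(theta_hat.var(), crlb)    # the simulated variance should be close to the bound
```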
If we consider all possible unbiased estimators of some parameter θ, the one with the smallest variance is called the most efficient estimator of θ.
[Figure: sampling distributions of three unbiased estimators θ̂₁, θ̂₂, θ̂₃ of θ; θ̂₁, having the smallest variance, is the best estimator of θ.]
Let θ̂ be an estimator that is unbiased for θ. The relative efficiency of θ̂ is defined to be:
eff(θ̂) = CRLB / V(θ̂) ≤ 1
If θ̂₁ and θ̂₂ are biased estimators of a parameter θ of a given population, the measure of efficiency of θ̂₁ relative to θ̂₂ is denoted by the ratio
eff(θ̂₁, θ̂₂) = MSE(θ̂₂) / MSE(θ̂₁)
where
MSE(θ̂) = E[(θ̂ − θ)²] = Var(θ̂) + b(θ)²   (mean square error)
CAREFUL HERE
eff(θ̂₁, θ̂₂) = MSE(θ̂₂)/MSE(θ̂₁)

Conclusion               What it means              
eff(θ̂₁, θ̂₂) = 1        MSE(θ̂₁) = MSE(θ̂₂)        θ̂₁ is as efficient as θ̂₂
eff(θ̂₁, θ̂₂) > 1        MSE(θ̂₁) < MSE(θ̂₂)        θ̂₁ is more efficient than θ̂₂
eff(θ̂₁, θ̂₂) < 1        MSE(θ̂₁) > MSE(θ̂₂)        θ̂₁ is less efficient than θ̂₂
If θ̂₁ and θ̂₂ are unbiased estimators of a parameter θ of a given population, and Var(θ̂₁) < Var(θ̂₂), then we say that θ̂₁ is relatively more efficient than θ̂₂.
The measure of efficiency of θ̂₁ relative to θ̂₂ is denoted by the ratio
eff(θ̂₁, θ̂₂) = Var(θ̂₂) / Var(θ̂₁)
CAREFUL HERE
eff(θ̂₁, θ̂₂) = Var(θ̂₂)/Var(θ̂₁)

Conclusion               What it means
eff(θ̂₁, θ̂₂) = 1        Var(θ̂₁) = Var(θ̂₂)        θ̂₁ is as efficient as θ̂₂
eff(θ̂₁, θ̂₂) > 1        Var(θ̂₁) < Var(θ̂₂)        θ̂₁ is more efficient than θ̂₂
eff(θ̂₁, θ̂₂) < 1        Var(θ̂₁) > Var(θ̂₂)        θ̂₁ is less efficient than θ̂₂
Suppose that X₁ and X₂ are random samples from a normal distribution with mean μ and standard deviation σ. Two estimators for the mean μ are as below:
μ̂₁ = (X₁ + 2X₂)/3   and   μ̂₂ = (X₁ + X₂)/2
Check whether the two estimators μ̂₁ and μ̂₂ are unbiased.
Determine the measure of efficiency of μ̂₁ relative to μ̂₂.
Check whether the two estimators μ̂₁ and μ̂₂ are unbiased.
E(μ̂₁) = E[(X₁ + 2X₂)/3] = (1/3) E(X₁) + (2/3) E(X₂) = (1/3) μ + (2/3) μ = μ
E(μ̂₂) = E[(X₁ + X₂)/2] = (1/2) E(X₁) + (1/2) E(X₂) = (1/2) μ + (1/2) μ = μ
Since both E(μ̂₁) = μ and E(μ̂₂) = μ, the two estimators μ̂₁ and μ̂₂ are unbiased.
Determine the measure of efficiency of μ̂₁ relative to μ̂₂.
Var(μ̂₁) = Var[(X₁ + 2X₂)/3] = (1/9) Var(X₁) + (4/9) Var(X₂) = (1/9) σ² + (4/9) σ² = (5/9) σ²
Var(μ̂₂) = Var[(X₁ + X₂)/2] = (1/4) Var(X₁) + (1/4) Var(X₂) = (1/4) σ² + (1/4) σ² = (1/2) σ²
So, the measure of efficiency of μ̂₁ relative to μ̂₂ is
eff(μ̂₁, μ̂₂) = Var(μ̂₂) / Var(μ̂₁) = ((1/2) σ²) / ((5/9) σ²) = 9/10 = 0.9 < 1
Since eff(μ̂₁, μ̂₂) < 1, that is Var(μ̂₁) > Var(μ̂₂), the estimator μ̂₂ is more efficient than μ̂₁ (μ̂₁ is only 0.9 times as efficient as μ̂₂).
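The two variances and the efficiency ratio can be confirmed by simulation; a sketch assuming NumPy, with μ = 0 and σ = 1 as arbitrary demo values (the ratio does not depend on them):

```python
import numpy as np

rng = np.random.default_rng(6)
reps = 400_000

x1 = rng.normal(0.0, 1.0, reps)   # μ = 0, σ = 1 (arbitrary demo values)
x2 = rng.normal(0.0, 1.0, reps)

mu1_hat = (x1 + 2 * x2) / 3       # μ̂₁ = (X₁ + 2X₂)/3
mu2_hat = (x1 + x2) / 2           # μ̂₂ = (X₁ + X₂)/2

eff = mu2_hat.var() / mu1_hat.var()
print(mu1_hat.var(), mu2_hat.var(), eff)   # ≈ 5/9, 1/2, 0.9
```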
Suppose that X₁, X₂ and X₃ are random samples from a normal distribution with mean μ and standard deviation σ. Determine the efficiency measure of (X₁ + 2X₂ + X₃)/4 relative to (X₁ + X₂ + X₃)/3.
An estimator θ̂ is said to be consistent for θ if θ̂ → θ in probability.
An unbiased estimator is said to be consistent if the difference between the estimator and the parameter grows smaller as the sample size grows larger.
E.g., X̄ is a consistent estimator of μ because V(X̄) = σ²/n. That is, as n grows larger, the variance of X̄ grows smaller.
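The shrinking variance σ²/n behind consistency can be seen by simulating X̄ at increasing sample sizes; a sketch assuming NumPy, with μ = 10, σ = 3, and the sample sizes chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, reps = 10.0, 3.0, 10_000

variances = {}
for n in (10, 100, 1000):
    xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    variances[n] = xbar.var()     # ≈ σ²/n = 9/n, shrinking with n

print(variances)
```

Each tenfold increase in n cuts the variance of X̄ by a factor of ten, so the estimator concentrates around μ.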
Let x₁, x₂, …, xₙ be a random sample from the function f(x; θ), θ ∈ Ω. Let y₁ = U(x₁, x₂, …, xₙ) be a statistic with pdf g(y₁, θ). We say that y₁ is a sufficient statistic for the family F = {f(x, θ): θ ∈ Ω} if and only if
f(x₁, θ) f(x₂, θ) ⋯ f(xₙ, θ) / g(y₁, θ) = H(x₁, x₂, …, xₙ)
(the numerator is the joint pdf of x₁, x₂, …, xₙ), where for every fixed value of y₁ = U(x₁, x₂, …, xₙ) the function H(x₁, x₂, …, xₙ) does not depend on θ; thus the conditional pdf of x₁, x₂, …, xₙ given Y₁ = y₁ is free of θ.
Say x₁, x₂, …, xₙ is a random sample from
f(x; θ) = θˣ e^(−θ) / x!,  x = 0, 1, …;  0 elsewhere.
Is y₁ = Σᵢ xᵢ sufficient for θ?
i. Find the joint pdf for x₁, x₂, …, xₙ:
f(x₁, θ) f(x₂, θ) ⋯ f(xₙ, θ) = ∏ᵢ₌₁ⁿ θ^(xᵢ) e^(−θ) / xᵢ! = θ^(Σxᵢ) e^(−nθ) / ∏ᵢ xᵢ!
ii. Since y₁ = Σᵢ xᵢ ~ Poisson(nθ), its pdf is g(y₁, θ) = (nθ)^(Σxᵢ) e^(−nθ) / (Σxᵢ)!. Dividing:
[θ^(Σxᵢ) e^(−nθ) / ∏ᵢ xᵢ!] / [(nθ)^(Σxᵢ) e^(−nθ) / (Σxᵢ)!] = (Σxᵢ)! / (n^(Σxᵢ) ∏ᵢ xᵢ!)   ← free of θ
Conclusion: the given function y₁ = Σᵢ xᵢ is a sufficient statistic.
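What "free of θ" means can be illustrated by simulation: once we condition on the sufficient statistic Σxᵢ, the data's conditional behavior no longer depends on θ. A sketch assuming NumPy; n = 3, the conditioning value Σxᵢ = 6, and the two θ values are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(8)
n, s, reps = 3, 6, 400_000

def conditional_x1_mean(theta):
    # Draw many Poisson(θ) samples of size n, keep those with Σxᵢ = s,
    # and look at the conditional mean of x₁ given the sufficient statistic.
    x = rng.poisson(theta, size=(reps, n))
    keep = x.sum(axis=1) == s
    return x[keep, 0].mean()

m_low = conditional_x1_mean(1.0)
m_high = conditional_x1_mean(3.0)
print(m_low, m_high)   # both ≈ s/n = 2.0, regardless of θ
```

The conditional mean of x₁ given Σxᵢ = 6 is the same for both θ values, as sufficiency requires.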
Note:
◦ There is another way to find a sufficient statistic: Neyman's Factorization Theorem.
Theorem:
◦ Let x₁, x₂, …, xₙ be a random sample from the function f(x; θ), θ ∈ Ω. Let y₁ = U(x₁, x₂, …, xₙ) be a statistic with pdf g(y₁, θ). Then y₁ is a sufficient statistic for the family F = {f(x, θ): θ ∈ Ω} if and only if there exist functions k₁(·) and k₂(·) such that
∏ᵢ₌₁ⁿ f(xᵢ; θ) = k₁(U₁(x₁, x₂, …, xₙ); θ) · k₂(x₁, x₂, …, xₙ)
where for each fixed value of y₁ = U(x₁, x₂, …, xₙ) the function k₂(x₁, x₂, …, xₙ) does not depend on θ, in form or in domain (k₂ is just a statistic).
Let x₁, x₂, …, xₙ be a random sample from
f(x; θ) = Γ(2θ) / [Γ(θ)]² · [x(1 − x)]^(θ−1),  0 < x < 1;  0 elsewhere.
Find a sufficient statistic for θ.
Solution: By the factorization theorem,
∏ᵢ₌₁ⁿ f(xᵢ; θ) = { Γ(2θ) / [Γ(θ)]² }ⁿ [∏ᵢ xᵢ(1 − xᵢ)]^(θ−1) · 1
so the joint pdf depends on the data only through ∏ᵢ xᵢ(1 − xᵢ); hence y₁ = ∏ᵢ xᵢ(1 − xᵢ) is a sufficient statistic for θ.