Y1 = u1(X1, X2, . . . , Xn)
Y2 = u2(X1, X2, . . . , Xn)
Y3 = u3(X1, X2, . . . , Xn)
...
Yn = un(X1, X2, . . . , Xn)
g(y; θ) = f (w1 (y1 , y2 , . . . , yn ; θ))f (w2 (y1 , . . . , yn ; θ)) . . . f (wn (y1 , . . . , yn ; θ))|J|
h(y2, . . . , yn | y1; θ) = g(y1, y2, . . . , yn; θ) / g1(y1; θ)
provided g1(y1; θ) > 0. The conditional distribution usually depends on θ, but when it does not we have the following definition of a sufficient statistic.
Sufficient Statistic
ESTIMATORS Fisher–Neyman Criterion
Fisher–Neyman Criterion
Example One
g1(y1; θ) = n!/(y1!(n − y1)!) θ^y1 (1 − θ)^(n−y1)   y1 = 0, 1, . . . , n.
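As a numeric illustration of the criterion (a sketch with illustrative sample values, not from the text): for Bernoulli(θ) observations with Y1 = ΣXi, the conditional probability of any particular arrangement given y1 is 1/C(n, y1), free of θ.

```python
from math import comb

# Sketch: for a Bernoulli(θ) sample, the conditional probability of a
# particular arrangement (x1, ..., xn) given Y1 = Σxi = y1 is 1/C(n, y1),
# which does not involve θ. Sample values below are illustrative.
def conditional_prob(xs, theta):
    n, y1 = len(xs), sum(xs)
    joint = 1.0
    for x in xs:
        joint *= theta**x * (1 - theta)**(1 - x)      # Π θ^xi (1 - θ)^(1 - xi)
    marginal = comb(n, y1) * theta**y1 * (1 - theta)**(n - y1)  # g1(y1; θ)
    return joint / marginal

xs = [1, 0, 1, 1, 0]                                  # n = 5, y1 = 3
p1 = conditional_prob(xs, 0.2)
p2 = conditional_prob(xs, 0.9)
print(p1, p2)  # both equal 1/C(5, 3) = 0.1, whatever θ is
```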
Example Two
Factorization Criterion
Example One
For f(x; θ) = θ x^(θ−1), 0 < x < 1, the joint probability density function is

θ^n (x1 x2 . . . xn)^(θ−1) = θ^n (x1 x2 . . . xn)^θ · 1/(x1 x2 . . . xn)
and setting
k1(u1(x1, x2, . . . , xn); θ) = θ^n (x1 x2 . . . xn)^θ
k2(x1, x2, . . . , xn) = 1/(x1 x2 . . . xn)
k2 does not depend on θ and so ΠXi is sufficient for θ.
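A quick numerical check of this factorization (a sketch; the sample values and θ are arbitrary, and the density θ x^(θ−1) on 0 < x < 1 is read off from the joint density above):

```python
import random

# Sketch: numerically confirm Πf(xi; θ) = k1(Πxi; θ) · k2(x) for
# f(x; θ) = θ x^(θ-1), 0 < x < 1, with k1 and k2 as in the text.
def joint(xs, theta):
    out = 1.0
    for x in xs:
        out *= theta * x**(theta - 1)
    return out

def k1(prod_x, theta, n):
    return theta**n * prod_x**theta

def k2(xs):
    prod = 1.0
    for x in xs:
        prod *= x
    return 1.0 / prod

random.seed(0)
xs = [random.random() for _ in range(6)]   # arbitrary sample in (0, 1)
theta = 2.7                                # arbitrary θ
prod_x = 1.0
for x in xs:
    prod_x *= x
lhs = joint(xs, theta)
rhs = k1(prod_x, theta, len(xs)) * k2(xs)
print(abs(lhs - rhs) < 1e-12 * abs(lhs))   # the two sides agree
```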
Example Two
f(x; θ, σ^2) = (1/(σ√(2π))) exp[−(x − θ)^2/(2σ^2)]   −∞ < x < ∞,  σ^2 known

and

Πf(xi; θ, σ^2) = exp[−n(x̄ − θ)^2/(2σ^2)] · exp[−Σ(xi − x̄)^2/(2σ^2)] / (σ√(2π))^n.
Setting
k1(u1(x1, x2, . . . , xn); θ) = exp[−n(x̄ − θ)^2/(2σ^2)]
k2(x1, x2, . . . , xn) = exp[−Σ(xi − x̄)^2/(2σ^2)] / (σ√(2π))^n
k2 does not depend on θ and so X̄ is sufficient for θ.
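The algebra behind this split is the identity Σ(xi − θ)^2 = n(x̄ − θ)^2 + Σ(xi − x̄)^2, which can be checked numerically (a minimal sketch with arbitrary values):

```python
import random

# Sketch verifying Σ(xi - θ)² = n(x̄ - θ)² + Σ(xi - x̄)², the identity that
# lets the normal likelihood factor into k1(x̄; θ) times a θ-free k2.
random.seed(1)
n, theta, sigma = 8, 1.5, 2.0              # arbitrary illustrative values
xs = [random.gauss(theta, sigma) for _ in range(n)]
xbar = sum(xs) / n

lhs = sum((x - theta)**2 for x in xs)
rhs = n * (xbar - theta)**2 + sum((x - xbar)**2 for x in xs)
print(abs(lhs - rhs) < 1e-9)               # the identity holds
```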
Note
Every single-valued function Z = u(Y1), not involving θ, with a single-valued inverse is also sufficient for θ.
Completeness
Let {f(x; θ); θ ∈ Ω} be a family of discrete or continuous probability density functions and let u(x) be a continuous function of x but not a function of θ. If E(u(X)) = 0 for every θ ∈ Ω requires u(x) to be zero at each point x at which at least one member of the family of probability density functions is positive, then the family of probability density functions is called a complete family.
Example One
f(x; θ) = 1/θ   0 < x < θ,  0 < θ < ∞.
If

E(u(X)) = ∫_{−∞}^{∞} u(x)f(x; θ) dx = ∫_0^θ u(x)(1/θ) dx = 0   for all θ > 0 by assumption

then

∫_0^θ u(x) dx = 0   θ > 0.
Differentiating with respect to θ gives u(θ) = 0 for θ > 0 and so u(x) = 0 for x > 0.
Example Two

f(x; θ) = θ^x (1 − θ)^(1−x)   x = 0, 1,  0 < θ < 1.

Each member of the family is positive only at x = 0 and x = 1, so we need to show that u(0) = u(1) = 0.
In this case

E(u(X)) = Σ_x u(x)f(x; θ)
        = Σ_{x=0}^{1} u(x) θ^x (1 − θ)^(1−x)
        = u(0)(1 − θ) + u(1)θ
        = θ(u(1) − u(0)) + u(0)
        = 0
is a linear function of θ. If a linear function is zero at more than one point, then both its slope and its intercept are zero, so that

u(0) = u(1) = 0.
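A numeric restatement of this step (the two θ values are arbitrary illustrative choices): forcing u(0)(1 − θ) + u(1)θ = 0 at two distinct θ values gives a 2 × 2 linear system whose determinant θ2 − θ1 is nonzero, so the only solution is u(0) = u(1) = 0.

```python
# Sketch: solve the 2x2 system u(0)(1 - θ) + u(1)θ = 0 at θ = t1 and θ = t2.
def solve2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [u0, u1] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

t1, t2 = 0.3, 0.8          # two distinct θ values (arbitrary); det = t2 - t1
u0, u1 = solve2(1 - t1, t1, 1 - t2, t2, 0.0, 0.0)
print(u0, u1)  # 0.0 0.0 -- the zero function is forced
```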
Example Three
f(x; θ) = 1/(2θ)   −θ < x < θ,  0 < θ < ∞.

Here u(x) = x gives E(u(X)) = 0 for every θ > 0 although u(x) is not zero on (−θ, θ), so this family is not complete.
Uniqueness
If a continuous function ϕ(Y1) is unbiased for θ and some other function ψ(Y1), not involving θ, is also unbiased for θ, then

E(ϕ(Y1) − ψ(Y1)) = 0   for every θ ∈ Ω,

and if the family {g1(y1; θ); θ ∈ Ω} is complete then for every continuous unbiased statistic ϕ(Y1)

ϕ(Y1) = ψ(Y1)

at all points of non-zero probability density.
So if
Y1 = u1 (X1 , X2 , . . . , Xn ) is sufficient for θ
and
Y2 (not a function of Y1 alone) is unbiased for θ
consider
E(Y2 |y1 ) = ϕ(y1 ).
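This conditioning step can be sketched by Monte Carlo (illustrative values; assumes a N(θ, 1) sample, where Y2 = X1 is unbiased for θ and E(X1 | X̄) = X̄, as derived later in these notes): conditioning on the sufficient statistic cuts the variance from 1 to 1/n.

```python
import random

# Sketch: compare the variance of the crude unbiased statistic Y2 = X1
# with that of φ(Y1) = E(X1 | X̄) = X̄ for a N(θ, 1) sample.
random.seed(2)
theta, n, reps = 3.0, 10, 20000
y2_vals, phi_vals = [], []
for _ in range(reps):
    xs = [random.gauss(theta, 1.0) for _ in range(n)]
    y2_vals.append(xs[0])          # Y2 = X1, unbiased but high variance
    phi_vals.append(sum(xs) / n)   # φ(Y1) = X̄, unbiased with variance 1/n

def var(v):
    m = sum(v) / len(v)
    return sum((x - m)**2 for x in v) / len(v)

print(var(y2_vals), var(phi_vals))  # ≈ 1 versus ≈ 1/n = 0.1
```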
Parameters in Exponential Class
Example One

For the normal density with σ^2 known, writing f(x; θ) = exp[p(θ)K(x) + S(x) + q(θ)] gives

p(θ) = θ/σ^2   K(x) = x   S(x) = −x^2/(2σ^2) − ln √(2πσ^2)   q(θ) = −θ^2/(2σ^2)
and so Y1 = ΣXi is a complete sufficient statistic for θ and, as E(Y1) = nθ,

ϕ(Y1) = Y1/n = X̄
is unbiased for θ, is a function of the sufficient statistic Y1 and has minimum variance.
So X̄ is the unique best statistic for θ.
Example Two
and so Y1 = ΣXi is a complete sufficient statistic for θ and, as E(Y1) = nθ, the statistic

ϕ(Y1) = Y1/n = X̄
is unbiased for θ and is the unique best statistic for θ.
Invariance Property
Example
If the random variable X has a Poisson distribution with probability density function
f(x; θ) = e^(−θ) θ^x / x!, the log of the likelihood is

ln L(θ) = −nθ + (Σxi) ln θ − ln(Πxi!)

and

∂ ln L(θ)/∂θ = −n + Σxi/θ = 0   if θ = x̄
so that the maximum likelihood estimator of θ is X̄.
If τ = e^(−θ), so that θ = −ln τ, then

ln L*(τ) = n ln τ + (Σxi) ln(−ln τ) − ln(Πxi!)

and

∂ ln L*(τ)/∂τ = n/τ + (Σxi/(−ln τ))(−1/τ) = 0   if −ln τ = x̄
and so the maximum likelihood estimator of τ is
τ̂ = e−X̄ .
Functions of a Parameter
Let X1 , X2 , . . . , Xn be a random sample from a normal distribution, N (θ, 1). Finding the
best statistic for P (X ≤ c) = Φ(c − θ) involves the following three steps.
(i) Find an unbiased statistic for Φ(c − θ).
(ii) Know that X̄ is sufficient for θ.
(iii) If E(unbiased statistic|X̄ = x̄) = ϕ(x̄) then note that ϕ(X̄) is the unique best
statistic for Φ(c − θ).
(i)
Let

u(x1) = 1  if x1 ≤ c
u(x1) = 0  if x1 > c.

Then

E(u(X1)) = ∫_{−∞}^{∞} u(x1) (1/√(2π)) e^(−(x1−θ)^2/2) dx1
         = ∫_{−∞}^{c} (1/√(2π)) e^(−(x1−θ)^2/2) dx1
         = Φ(c − θ)
(ii)
(iii)
The joint distribution of X1 and X̄ is bivariate normal with X1 having mean θ and variance 1, X̄ having mean θ and variance 1/n, and X1 and X̄ having correlation coefficient 1/√n.

The conditional distribution of X1 given X̄ = x̄ is normal with mean x̄ and variance (n − 1)/n, and so

ϕ(x̄) = P(X1 ≤ c | X̄ = x̄) = Φ((c − x̄)/√((n − 1)/n))

is the unique best statistic for Φ(c − θ).
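A Monte Carlo sketch (illustrative values): the statistic ϕ(X̄) = Φ((c − X̄)/√((n − 1)/n)) implied by this conditional distribution should be unbiased for Φ(c − θ), which can be checked by averaging it over many draws of X̄ ~ N(θ, 1/n).

```python
import math, random

# Sketch: check E[Φ((c - X̄)/√((n-1)/n))] ≈ Φ(c - θ) by simulation.
def Phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(4)
theta, c, n, reps = 0.5, 1.0, 5, 40000   # illustrative values
scale = math.sqrt((n - 1) / n)
est = 0.0
for _ in range(reps):
    xbar = random.gauss(theta, 1.0 / math.sqrt(n))  # X̄ ~ N(θ, 1/n)
    est += Phi((c - xbar) / scale)
est /= reps
print(est, Phi(c - theta))               # both ≈ Φ(c − θ)
```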