An important objective in any statistical estimation procedure is to estimate the parameters of interest with as much precision as possible. It is also well understood that incorporating more information into the estimation procedure yields better estimators, provided the information is valid and relevant. The ratio method of estimation uses such auxiliary information to obtain an improved estimator of the population mean. In the ratio method of estimation, auxiliary information on a variable that is linearly related to the variable under study is available and is utilized to estimate the population mean.
Let $Y$ be the variable under study and $X$ be an auxiliary variable correlated with $Y$. The observations $x_i$ on $X$ and $y_i$ on $Y$ are obtained for each sampling unit. The population mean $\bar X$ of $X$ (or equivalently the population total $X_{tot}$) must be known. For example, the $x_i$'s may be the values of the $y_i$'s from
- some earlier completed census,
- some earlier surveys,
- some characteristic on which it is easy to obtain information, etc.
For example, if $y_i$ is the quantity of fruits produced in the $i$th plot, then $x_i$ can be the area of the $i$th plot or the production of fruit in the same plot in the previous year.
Let $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ be a random sample of size $n$ on the paired variable $(X, Y)$ drawn, preferably by SRSWOR, from a population of size $N$. The ratio estimate of the population mean $\bar Y$ is
$$\hat Y_R = \frac{\bar y}{\bar x}\,\bar X = \hat R\,\bar X,$$
assuming the population mean $\bar X$ is known. The ratio estimator of the population total $Y_{tot} = \sum_{i=1}^{N} Y_i$ is
$$\hat Y_{R(tot)} = \frac{y_{tot}}{x_{tot}}\,X_{tot}$$
where $X_{tot} = \sum_{i=1}^{N} X_i$ is the population total of $X$, which is assumed to be known, and $y_{tot} = \sum_{i=1}^{n} y_i$ and $x_{tot} = \sum_{i=1}^{n} x_i$ are the sample totals of $Y$ and $X$ respectively. $\hat Y_{R(tot)}$ can be equivalently expressed as
Sampling Theory| Chapter 5 | Ratio Product Method Estimation | Shalabh, IIT Kanpur Page 1
$$\hat Y_{R(tot)} = \frac{\bar y}{\bar x}\,X_{tot} = \hat R\,X_{tot}.$$
Looking at the structure of the ratio estimators, note that the ratio method estimates the relative change $\dfrac{Y_{tot}}{X_{tot}}$ that occurred after $(x_i, y_i)$ were observed. It is clear that if the variation among the values of $\dfrac{y_i}{x_i}$ is nearly the same for all $i = 1, 2, \ldots, n$, then the values of $\dfrac{y_{tot}}{x_{tot}}$ (or equivalently $\dfrac{\bar y}{\bar x}$) vary little from sample to sample, and the ratio estimate will be of high precision.
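The estimators above can be sketched numerically. The population below is entirely invented for illustration, with $y$ roughly proportional to $x$, as the ratio method assumes:

```python
import numpy as np

# Invented population: Y roughly proportional to X.
rng = np.random.default_rng(0)
N = 1000
X_pop = rng.uniform(1.0, 5.0, N)                  # auxiliary variable, fully known
Y_pop = 2.0 * X_pop + rng.normal(0.0, 0.5, N)     # study variable
X_bar = X_pop.mean()                              # known population mean of X

n = 50
idx = rng.choice(N, size=n, replace=False)        # SRSWOR sample of size n
y_bar, x_bar = Y_pop[idx].mean(), X_pop[idx].mean()

R_hat = y_bar / x_bar                 # sample ratio
Y_R = R_hat * X_bar                   # ratio estimate of the population mean
Y_R_tot = R_hat * X_pop.sum()         # ratio estimate of the population total
```

Since $X_{tot} = N\bar X$, the two estimates satisfy $\hat Y_{R(tot)} = N\,\hat Y_R$ exactly.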
Moreover, it is difficult to find the exact expressions for $E\!\left(\dfrac{\bar y}{\bar x}\right)$ and $E\!\left(\dfrac{\bar y^2}{\bar x^2}\right)$, so we approximate them and proceed as follows:
Let
$$\varepsilon_0 = \frac{\bar y - \bar Y}{\bar Y} \;\Rightarrow\; \bar y = (1+\varepsilon_0)\,\bar Y,$$
$$\varepsilon_1 = \frac{\bar x - \bar X}{\bar X} \;\Rightarrow\; \bar x = (1+\varepsilon_1)\,\bar X.$$
Since SRSWOR is being followed,
$$E(\varepsilon_0) = 0, \qquad E(\varepsilon_1) = 0,$$
$$E(\varepsilon_0^2) = \frac{1}{\bar Y^2}\,E(\bar y - \bar Y)^2 = \frac{1}{\bar Y^2}\,\frac{N-n}{Nn}\,S_Y^2 = \frac{f}{n}\,\frac{S_Y^2}{\bar Y^2} = \frac{f}{n}\,C_Y^2$$
where $f = \dfrac{N-n}{N}$, $S_Y^2 = \dfrac{1}{N-1}\sum_{i=1}^{N}(Y_i - \bar Y)^2$, and $C_Y = \dfrac{S_Y}{\bar Y}$ is the coefficient of variation related to $Y$.
Similarly,
$$E(\varepsilon_1^2) = \frac{f}{n}\,C_X^2,$$
$$\begin{aligned}
E(\varepsilon_0\varepsilon_1) &= \frac{1}{\bar X\bar Y}\,E[(\bar x - \bar X)(\bar y - \bar Y)] \\
&= \frac{1}{\bar X\bar Y}\,\frac{N-n}{Nn}\,\frac{1}{N-1}\sum_{i=1}^{N}(X_i - \bar X)(Y_i - \bar Y) \\
&= \frac{1}{\bar X\bar Y}\,\frac{f}{n}\,S_{XY} = \frac{1}{\bar X\bar Y}\,\frac{f}{n}\,\rho\,S_X S_Y = \frac{f}{n}\,\rho\,\frac{S_X}{\bar X}\,\frac{S_Y}{\bar Y} = \frac{f}{n}\,\rho\,C_X C_Y
\end{aligned}$$
where $C_X = \dfrac{S_X}{\bar X}$ is the coefficient of variation related to $X$ and $\rho$ is the population correlation coefficient between $X$ and $Y$.
Now
$$\hat Y_R = \frac{\bar y}{\bar x}\,\bar X = \frac{(1+\varepsilon_0)\,\bar Y}{(1+\varepsilon_1)\,\bar X}\,\bar X = (1+\varepsilon_0)(1+\varepsilon_1)^{-1}\,\bar Y.$$
Assuming $|\varepsilon_1| < 1$, the term $(1+\varepsilon_1)^{-1}$ may be expanded as an infinite series, and the series is convergent. This assumption means that $\left|\dfrac{\bar x - \bar X}{\bar X}\right| < 1$, i.e., the possible estimate $\bar x$ of the population mean $\bar X$ lies between $0$ and $2\bar X$. This is likely to hold true if the variation in $\bar x$ is not large. To ensure that the variation in $\bar x$ is small, assume that the sample size $n$ is fairly large.
When the sample size is large, $\varepsilon_0$ and $\varepsilon_1$ are likely to be small quantities, and so the terms involving second and higher powers of $\varepsilon_0$ and $\varepsilon_1$ are negligibly small. In such a case
$$\hat Y_R - \bar Y \approx \bar Y(\varepsilon_0 - \varepsilon_1)$$
and
$$E(\hat Y_R - \bar Y) = 0.$$
So the ratio estimator is an unbiased estimator of the population mean up to the first order of approximation.
If we assume that only the terms of $\varepsilon_0$ and $\varepsilon_1$ involving powers higher than two are negligibly small (which is more realistic than assuming that powers higher than one are negligibly small), then the estimation error satisfies
$$E(\hat Y_R - \bar Y) = \bar Y\left(0 - 0 + \frac{f}{n}\,C_X^2 - \frac{f}{n}\,\rho\,C_X C_Y\right),$$
so that
$$\text{Bias}(\hat Y_R) = E(\hat Y_R - \bar Y) = \frac{f}{n}\,\bar Y\,C_X\,(C_X - \rho\,C_Y)$$
up to the second order of approximation. The bias generally decreases as the sample size grows large.
Moreover,
$$\text{Bias}(\hat Y_R) = 0 \quad \text{if } E(\varepsilon_1^2 - \varepsilon_0\varepsilon_1) = 0,$$
or if
$$\frac{\text{Var}(\bar x)}{\bar X^2} - \frac{\text{Cov}(\bar x, \bar y)}{\bar X\bar Y} = 0,$$
or if
$$\frac{1}{\bar X^2}\left[\text{Var}(\bar x) - \frac{\bar X}{\bar Y}\,\text{Cov}(\bar x, \bar y)\right] = 0,$$
or if
$$\text{Var}(\bar x) - \frac{\text{Cov}(\bar x, \bar y)}{R} = 0 \qquad (\text{assuming } \bar X \neq 0),$$
or if
$$R = \frac{\bar Y}{\bar X} = \frac{\text{Cov}(\bar x, \bar y)}{\text{Var}(\bar x)},$$
which is satisfied when the regression line of $Y$ on $X$ passes through the origin.
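The bias approximation $\frac{f}{n}\bar Y C_X(C_X - \rho C_Y)$ can be checked by simulation. The population below is invented for illustration; its regression line has a nonzero intercept, so a small positive bias is expected:

```python
import numpy as np

# Monte Carlo check of the second-order bias approximation (invented population).
rng = np.random.default_rng(11)
N, n, reps = 2000, 40, 8000
X = rng.uniform(1.0, 3.0, N)
Y = 1.0 + 2.0 * X + rng.normal(0.0, 0.3, N)    # nonzero intercept => some bias

Y_bar, X_bar = Y.mean(), X.mean()
f = (N - n) / N
C_X = X.std(ddof=1) / X_bar
C_Y = Y.std(ddof=1) / Y_bar
rho = np.corrcoef(X, Y)[0, 1]
bias_approx = f / n * Y_bar * C_X * (C_X - rho * C_Y)

est = np.empty(reps)
for k in range(reps):
    idx = rng.choice(N, size=n, replace=False)   # repeated SRSWOR draws
    est[k] = Y[idx].mean() / X[idx].mean() * X_bar
bias_mc = est.mean() - Y_bar                     # Monte Carlo estimate of the bias
```

With a through-the-origin population (intercept zero), both quantities shrink toward zero, in line with the condition derived above.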
The mean squared error is
$$\text{MSE}(\hat Y_R) = E(\hat Y_R - \bar Y)^2 = E\left[\bar Y^2(\varepsilon_0 - \varepsilon_1 + \varepsilon_1^2 - \varepsilon_1\varepsilon_0 + \ldots)^2\right] \approx E\left[\bar Y^2(\varepsilon_0^2 + \varepsilon_1^2 - 2\varepsilon_0\varepsilon_1)\right].$$
Under the assumption $|\varepsilon_1| < 1$ and with the terms of $\varepsilon_0$ and $\varepsilon_1$ involving powers higher than two negligibly small,
$$\text{MSE}(\hat Y_R) = \bar Y^2\left(\frac{f}{n}\,C_X^2 + \frac{f}{n}\,C_Y^2 - 2\,\frac{f}{n}\,\rho\,C_X C_Y\right) = \frac{\bar Y^2 f}{n}\left(C_X^2 + C_Y^2 - 2\rho\,C_X C_Y\right)$$
up to the second order of approximation. Comparing this with $\text{Var}(\bar y) = \dfrac{f}{n}\,\bar Y^2 C_Y^2$ under SRSWOR, we have $\text{MSE}(\hat Y_R) < \text{Var}(\bar y)$ if $C_X^2 - 2\rho\,C_X C_Y < 0$, i.e.,
$$\text{or if } \rho > \frac{1}{2}\,\frac{C_X}{C_Y}.$$
Thus the ratio estimator is more efficient than the sample mean based on SRSWOR if
$$\rho > \frac{1}{2}\,\frac{C_X}{C_Y} \quad \text{if } R > 0$$
and
$$\rho < -\frac{1}{2}\,\frac{C_X}{C_Y} \quad \text{if } R < 0.$$
It is clear from this expression that the success of the ratio estimator depends on how closely the auxiliary variable is related to the variable under study.
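The efficiency condition can be verified numerically on an invented population: the ratio estimator beats the sample mean exactly when $\rho > \frac{1}{2}C_X/C_Y$ (for $R > 0$):

```python
import numpy as np

# Compare the approximate MSE of the ratio estimator with Var(y_bar) under SRSWOR,
# using population quantities (all numbers invented for illustration).
rng = np.random.default_rng(1)
N, n = 2000, 100
X = rng.uniform(2.0, 6.0, N)
Y = 1.5 * X + rng.normal(0.0, 0.8, N)

f = (N - n) / N
Y_bar, X_bar = Y.mean(), X.mean()
S_Y, S_X = Y.std(ddof=1), X.std(ddof=1)
C_Y, C_X = S_Y / Y_bar, S_X / X_bar
rho = np.corrcoef(X, Y)[0, 1]

var_mean = f / n * S_Y**2                                        # Var(y_bar), SRSWOR
mse_ratio = f / n * Y_bar**2 * (C_Y**2 + C_X**2 - 2 * rho * C_X * C_Y)

gain = rho > 0.5 * C_X / C_Y      # efficiency condition for R > 0
```

Algebraically, `mse_ratio < var_mean` and `gain` are equivalent statements, so they always agree.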
Consider now the bias of $\hat R = \dfrac{\bar y}{\bar x}$. Note that
$$\text{Cov}(\hat R, \bar x) = E(\hat R\,\bar x) - E(\hat R)\,E(\bar x) = E\!\left(\frac{\bar y}{\bar x}\,\bar x\right) - E(\hat R)\,E(\bar x) = \bar Y - E(\hat R)\,\bar X.$$
Thus
$$E(\hat R) = \frac{\bar Y}{\bar X} - \frac{\text{Cov}(\hat R, \bar x)}{\bar X} = R - \frac{\text{Cov}(\hat R, \bar x)}{\bar X}.$$
Hence
$$\text{Bias}(\hat R) = E(\hat R) - R = -\frac{\text{Cov}(\hat R, \bar x)}{\bar X} = -\frac{\rho_{\hat R, \bar x}\,\sigma_{\hat R}\,\sigma_{\bar x}}{\bar X}$$
where $\rho_{\hat R, \bar x}$ is the correlation between $\hat R$ and $\bar x$, and $\sigma_{\hat R}$ and $\sigma_{\bar x}$ are the standard errors of $\hat R$ and $\bar x$ respectively. Thus
$$\left|\text{Bias}(\hat R)\right| = \frac{\left|\rho_{\hat R, \bar x}\right|\sigma_{\hat R}\,\sigma_{\bar x}}{\bar X} \leq \frac{\sigma_{\hat R}\,\sigma_{\bar x}}{\bar X} \qquad \left(\left|\rho_{\hat R, \bar x}\right| \leq 1\right),$$
assuming $\bar X > 0$. Thus
$$\frac{\left|\text{Bias}(\hat R)\right|}{\sigma_{\hat R}} \leq \frac{\sigma_{\bar x}}{\bar X}, \quad \text{or} \quad \frac{\left|\text{Bias}(\hat R)\right|}{\sigma_{\hat R}} \leq C_X$$
where $C_X$ is the coefficient of variation of $X$. If $C_X < 0.1$, then the bias in $\hat R$ may safely be regarded as negligible relative to its standard error.
Consider now the quantity
$$\begin{aligned}
\sum_{i=1}^{N}(Y_i - RX_i)^2 &= \sum_{i=1}^{N}\left[(Y_i - \bar Y) + (\bar Y - RX_i)\right]^2 \\
&= \sum_{i=1}^{N}\left[(Y_i - \bar Y) - R(X_i - \bar X)\right]^2 \qquad (\text{using } \bar Y = R\bar X) \\
&= \sum_{i=1}^{N}(Y_i - \bar Y)^2 + R^2\sum_{i=1}^{N}(X_i - \bar X)^2 - 2R\sum_{i=1}^{N}(X_i - \bar X)(Y_i - \bar Y),
\end{aligned}$$
so that
$$\frac{1}{N-1}\sum_{i=1}^{N}(Y_i - RX_i)^2 = S_Y^2 + R^2 S_X^2 - 2R\,S_{XY}.$$
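This identity can be checked numerically. The data below are invented; note that the identity relies on $R = \bar Y/\bar X$, so that $\bar Y - R\bar X = 0$:

```python
import numpy as np

# Numeric check of: (1/(N-1)) * sum (Y_i - R X_i)^2 = S_Y^2 + R^2 S_X^2 - 2 R S_XY
rng = np.random.default_rng(2)
N = 500
X = rng.uniform(1.0, 3.0, N)
Y = 2.5 * X + rng.normal(0.0, 0.4, N)

R = Y.mean() / X.mean()            # population ratio, so Ybar - R*Xbar = 0
S_Y2 = Y.var(ddof=1)
S_X2 = X.var(ddof=1)
S_XY = np.cov(X, Y)[0, 1]          # (N-1)-divisor covariance

lhs = ((Y - R * X) ** 2).sum() / (N - 1)
rhs = S_Y2 + R**2 * S_X2 - 2 * R * S_XY
```

The two sides agree to floating-point precision.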
The MSE of $\hat Y_R$ has already been derived; it can be expressed again as follows:
$$\begin{aligned}
\text{MSE}(\hat Y_R) &= \frac{f\bar Y^2}{n}\left(C_Y^2 + C_X^2 - 2\rho\,C_X C_Y\right) \\
&= \frac{f}{n}\,\bar Y^2\left(\frac{S_Y^2}{\bar Y^2} + \frac{S_X^2}{\bar X^2} - 2\,\frac{S_{XY}}{\bar X\bar Y}\right) \\
&= \frac{f}{n}\left(S_Y^2 + R^2 S_X^2 - 2R\,S_{XY}\right) \\
&= \frac{f}{n(N-1)}\sum_{i=1}^{N}(Y_i - RX_i)^2 \\
&= \frac{N-n}{nN(N-1)}\sum_{i=1}^{N}(Y_i - RX_i)^2.
\end{aligned}$$
Writing $U_i = Y_i - RX_i$ (so that $\bar U = \bar Y - R\bar X = 0$), this becomes
$$\text{MSE}(\hat Y_R) = \frac{f}{n}\,\frac{1}{N-1}\sum_{i=1}^{N}(U_i - \bar U)^2 = \frac{f}{n}\,S_U^2$$
where $S_U^2 = \dfrac{1}{N-1}\sum_{i=1}^{N}(U_i - \bar U)^2$.
Based on the expression
$$\text{MSE}(\hat Y_R) = \frac{f}{n(N-1)}\sum_{i=1}^{N}(Y_i - RX_i)^2,$$
the confidence intervals of $\bar Y$ and $R$ can be obtained as
$$\left(\hat Y_R - Z_{\alpha/2}\sqrt{\widehat{\text{Var}}(\hat Y_R)},\;\; \hat Y_R + Z_{\alpha/2}\sqrt{\widehat{\text{Var}}(\hat Y_R)}\right)$$
and
$$\left(\hat R - Z_{\alpha/2}\sqrt{\widehat{\text{Var}}(\hat R)},\;\; \hat R + Z_{\alpha/2}\sqrt{\widehat{\text{Var}}(\hat R)}\right)$$
respectively, where $Z_{\alpha/2}$ is the normal deviate chosen for a given value of the confidence coefficient $(1-\alpha)$, and $\widehat{\text{Var}}$ denotes the estimated variance obtained by substituting $\hat R = \bar y/\bar x$ for $R$ and sample variances for population variances.
If $(\bar x, \bar y)$ follows a bivariate normal distribution, then $(\bar y - R\bar x)$ is normally distributed. If SRS is followed for drawing the sample, then assuming $R$ is known, the statistic
$$\frac{\bar y - R\bar x}{\sqrt{\dfrac{N-n}{Nn}\left(s_y^2 + R^2 s_x^2 - 2R\,s_{xy}\right)}}$$
is approximately $N(0,1)$. This can also be used for finding confidence limits; see Cochran (1977, Chapter 6, page 156) for more details.
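A rough 95% confidence interval for $\bar Y$ based on the estimated MSE can be sketched as follows (sample variances and covariance replace the population quantities; all data invented):

```python
import numpy as np

# Illustrative 95% CI for the population mean from a single SRSWOR sample.
rng = np.random.default_rng(3)
N, n = 5000, 200
X = rng.uniform(1.0, 4.0, N)
Y = 3.0 * X + rng.normal(0.0, 0.6, N)
X_bar = X.mean()                               # known population mean of X

idx = rng.choice(N, size=n, replace=False)
x, y = X[idx], Y[idx]
r = y.mean() / x.mean()
Y_R = r * X_bar                                # ratio estimate of the mean

f = (N - n) / N
s_y2, s_x2 = y.var(ddof=1), x.var(ddof=1)
s_xy = np.cov(x, y)[0, 1]
var_hat = f / n * (s_y2 + r**2 * s_x2 - 2 * r * s_xy)

z = 1.96                                       # normal deviate, 95% confidence
ci = (Y_R - z * np.sqrt(var_hat), Y_R + z * np.sqrt(var_hat))
```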
The ratio estimator is the best linear unbiased estimator under the following conditions:
(i) the relationship between $y_i$ and $x_i$ is linear and passes through the origin, i.e.,
$$y_i = \beta x_i + e_i,$$
where the $e_i$'s are independent with $E(e_i \mid x_i) = 0$ and $\beta$ is the slope parameter;
(ii) the conditional variance is proportional to $x_i$, i.e.,
$$\text{Var}(y_i \mid x_i) = E(e_i^2) = C x_i$$
where $C$ is a constant.
n
Proof. Consider the linear estimate of β because βˆ = ∑ i yi where =
yi β xi + ei and i ‘s are constant.
i =1
Then β̂ is unbiased if Y = β X as E (=
y ) β X + E (ei / xi ).
n
when ∑ i xi 1.
So E ( βˆ ) β=
=
i =1
Consider the minimization of Var ( yi / xi ) subject to the condition for being the unbiased estimator
n
∑ x = 1 using Lagrangian function. Thus the Lagrangian function with Lagrangian multiplier is
i =1
i i
n
ϕ = Var ( yi / xi ) − 2λ (∑ i xi − 1.)
i =1
n n
=C ∑ 12 xi − 2λ (∑ i xi − 1).
=i 1 =i 1
Now
$$\frac{\partial\varphi}{\partial\ell_i} = 0 \;\Rightarrow\; C\ell_i x_i = \lambda x_i, \quad i = 1, 2, \ldots, n,$$
$$\frac{\partial\varphi}{\partial\lambda} = 0 \;\Rightarrow\; \sum_{i=1}^{n} \ell_i x_i = 1.$$
The first condition gives $\ell_i = \lambda/C$ for all $i$. Substituting into $\sum_{i=1}^{n} \ell_i x_i = 1$ gives
$$\frac{\lambda}{C}\sum_{i=1}^{n} x_i = 1 \quad \text{or} \quad \frac{\lambda}{C} = \frac{1}{n\bar x}.$$
Thus
$$\ell_i = \frac{1}{n\bar x}$$
and so
$$\hat\beta = \frac{\sum_{i=1}^{n} y_i}{n\bar x} = \frac{\bar y}{\bar x}.$$
Thus $\hat\beta = \bar y/\bar x$ is not only unbiased but also the best in the class of linear unbiased estimators.
Alternative approach:
This result can alternatively be derived as follows:
The ratio estimator $\hat R = \dfrac{\bar y}{\bar x}$ is the best linear unbiased estimator of $R = \dfrac{\bar Y}{\bar X}$ if the following two conditions hold:
(i) For fixed $x$, $E(y) = \beta x$, i.e., the line of regression of $y$ on $x$ is a straight line passing through the origin.
(ii) For fixed $x$, $\text{Var}(y) \propto x$, i.e., $\text{Var}(y) = \lambda x$ where $\lambda$ is a constant of proportionality.
Proof: Let $y = (y_1, y_2, \ldots, y_n)'$ and $x = (x_1, x_2, \ldots, x_n)'$ be two vectors of observations on $y$ and $x$. Under conditions (i) and (ii), $E(y) = \beta x$ and the covariance matrix of $y$ is $\Omega = \lambda\,\text{diag}(x_1, x_2, \ldots, x_n)$, where $\text{diag}(x_1, x_2, \ldots, x_n)$ is the diagonal matrix with $x_1, x_2, \ldots, x_n$ as the diagonal elements.
Minimizing the weighted sum of squares
$$S^2 = (y - \beta x)'\,\Omega^{-1}\,(y - \beta x) = \sum_{i=1}^{n}\frac{(y_i - \beta x_i)^2}{\lambda x_i}$$
and solving
$$\frac{\partial S^2}{\partial\beta} = 0 \;\Rightarrow\; \sum_{i=1}^{n}(y_i - \hat\beta x_i) = 0$$
gives
$$\hat\beta = \frac{\bar y}{\bar x} = \hat R.$$
Thus $\hat R$ is the best linear unbiased estimator of $R$. Consequently, $\hat R\,\bar X = \hat Y_R$ is the best linear unbiased estimator of $\bar Y$.
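As a quick numeric check of this result, weighted least squares through the origin with weights $1/x_i$ (i.e., error variance proportional to $x_i$) reproduces $\hat\beta = \bar y/\bar x$ exactly; the data below are invented:

```python
import numpy as np

# WLS through the origin with Var(y_i | x_i) proportional to x_i.
rng = np.random.default_rng(4)
n = 60
x = rng.uniform(1.0, 5.0, n)
y = 2.0 * x + np.sqrt(x) * rng.normal(0.0, 0.3, n)   # error variance ~ x

w = 1.0 / x                                          # weights 1/x_i
beta_wls = (w * x * y).sum() / (w * x * x).sum()     # argmin of sum (y - b x)^2 / x
beta_ratio = y.mean() / x.mean()                     # the ratio estimator
```

Algebraically, `beta_wls` simplifies to $\sum y_i / \sum x_i = \bar y/\bar x$, so the two quantities coincide up to rounding.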
Ratio estimator in stratified sampling
Suppose a population of size $N$ is divided into $k$ strata. The objective is to estimate the population mean $\bar Y$ using the ratio method of estimation. In such a situation, a random sample of size $n_i$ is drawn by SRSWOR from the $i$th stratum of size $N_i$ on the variable under study $Y$ and the auxiliary variable $X$.
Let
$y_{ij}$: $j$th observation on $Y$ from the $i$th stratum,
$x_{ij}$: $j$th observation on $X$ from the $i$th stratum.
An estimator of $\bar Y$ based on the philosophy of stratified sampling can be derived in the following two possible ways:
1. Separate ratio estimator: Obtain the ratio estimator of $\bar Y_i$ separately from each stratum and combine them with the stratum weights $w_i = N_i/N$:
$$\hat Y_{Rs} = \sum_{i=1}^{k} w_i\,\hat Y_{Ri} = \sum_{i=1}^{k} w_i\,\frac{\bar y_i}{\bar x_i}\,\bar X_i$$
where
$\bar y_i = \dfrac{1}{n_i}\sum_{j=1}^{n_i} y_{ij}$: sample mean of $Y$ from the $i$th stratum,
$\bar x_i = \dfrac{1}{n_i}\sum_{j=1}^{n_i} x_{ij}$: sample mean of $X$ from the $i$th stratum,
$\bar X_i = \dfrac{1}{N_i}\sum_{j=1}^{N_i} X_{ij}$: mean of all the $X$ units in the $i$th stratum.
No assumption is made that the true ratio remains constant from stratum to stratum; the estimator requires information on each $\bar X_i$.
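The separate ratio estimator can be sketched as one ratio estimate per stratum, combined with the weights $w_i = N_i/N$; all strata and data below are invented:

```python
import numpy as np

# Separate ratio estimator across three invented strata.
rng = np.random.default_rng(5)
strata_sizes = [400, 300, 300]
sample_sizes = [40, 30, 30]
N = sum(strata_sizes)

Y_Rs = 0.0
true_total = 0.0
for Ni, ni in zip(strata_sizes, sample_sizes):
    Xi = rng.uniform(1.0, 4.0, Ni)                # auxiliary values in the stratum
    Yi = 2.0 * Xi + rng.normal(0.0, 0.5, Ni)      # study variable in the stratum
    idx = rng.choice(Ni, size=ni, replace=False)  # SRSWOR within the stratum
    Y_Rs += (Ni / N) * (Yi[idx].mean() / Xi[idx].mean()) * Xi.mean()
    true_total += Yi.sum()

true_mean = true_total / N
```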
2. Combined ratio estimator:
- First find the stratum sample means of the $y$'s and $x$'s as
$$\bar y_{st} = \sum_{i=1}^{k} w_i\,\bar y_i, \qquad \bar x_{st} = \sum_{i=1}^{k} w_i\,\bar x_i.$$
- Then define the combined ratio estimator as
$$\hat Y_{Rc} = \frac{\bar y_{st}}{\bar x_{st}}\,\bar X$$
where $\bar X$ is the population mean of $X$ based on all the $N = \sum_{i=1}^{k} N_i$ units. It does not depend on the individual stratum means $\bar X_i$.
Properties of separate ratio estimator:
Along the lines of the bias of $\hat Y_R$ derived earlier, the ratio estimator based on the $i$th stratum has
$$E(\hat Y_{Ri}) = \bar Y_i + \bar Y_i\,\frac{f_i}{n_i}\left(C_{iX}^2 - \rho_i\,C_{iX} C_{iY}\right)$$
where
$$\bar Y_i = \frac{1}{N_i}\sum_{j=1}^{N_i} Y_{ij}, \qquad \bar X_i = \frac{1}{N_i}\sum_{j=1}^{N_i} X_{ij},$$
$$f_i = \frac{N_i - n_i}{N_i}, \qquad C_{iY}^2 = \frac{S_{iY}^2}{\bar Y_i^2}, \qquad C_{iX}^2 = \frac{S_{iX}^2}{\bar X_i^2},$$
$$S_{iY}^2 = \frac{1}{N_i - 1}\sum_{j=1}^{N_i}(Y_{ij} - \bar Y_i)^2, \qquad S_{iX}^2 = \frac{1}{N_i - 1}\sum_{j=1}^{N_i}(X_{ij} - \bar X_i)^2,$$
and $\rho_i$ is the correlation coefficient between $X$ and $Y$ in the $i$th stratum.
$$\text{Bias}(\hat Y_{Rs}) = E(\hat Y_{Rs}) - \bar Y = \sum_{i=1}^{k}\frac{w_i\,\bar Y_i\,f_i}{n_i}\,C_{iX}\left(C_{iX} - \rho_i\,C_{iY}\right).$$
Thus the bias is negligible when the sample size within each stratum is sufficiently large, and $\hat Y_{Rs}$ is unbiased when $C_{iX} = \rho_i\,C_{iY}$ in every stratum.
Now we derive the approximate MSE of $\hat Y_{Rs}$. We have already derived the MSE of $\hat Y_R$ as
$$\text{MSE}(\hat Y_R) = \frac{\bar Y^2 f}{n}\left(C_X^2 + C_Y^2 - 2\rho\,C_X C_Y\right) = \frac{f}{n(N-1)}\sum_{i=1}^{N}(Y_i - RX_i)^2$$
where $R = \dfrac{\bar Y}{\bar X}$.
Thus the MSE of the ratio estimator based on the $i$th stratum, up to the second order of approximation, is
$$\text{MSE}(\hat Y_{Ri}) = \frac{f_i\,\bar Y_i^2}{n_i}\left(C_{iX}^2 + C_{iY}^2 - 2\rho_i\,C_{iX} C_{iY}\right) = \frac{f_i}{n_i(N_i - 1)}\sum_{j=1}^{N_i}(Y_{ij} - R_i X_{ij})^2,$$
and so
$$\text{MSE}(\hat Y_{Rs}) = \sum_{i=1}^{k} w_i^2\,\text{MSE}(\hat Y_{Ri}) = \sum_{i=1}^{k}\frac{w_i^2 f_i}{n_i}\,\bar Y_i^2\left(C_{iX}^2 + C_{iY}^2 - 2\rho_i\,C_{iX} C_{iY}\right) = \sum_{i=1}^{k} w_i^2\,\frac{f_i}{n_i(N_i - 1)}\sum_{j=1}^{N_i}(Y_{ij} - R_i X_{ij})^2$$
where $R_i = \bar Y_i/\bar X_i$.
An estimate of $\text{MSE}(\hat Y_{Rs})$ can be found by substituting the unbiased estimators $s_{ix}^2$, $s_{iy}^2$ and $s_{ixy}$ of $S_{iX}^2$, $S_{iY}^2$ and $S_{iXY}$ respectively for the $i$th stratum, and estimating $R_i = \bar Y_i/\bar X_i$ by $r_i = \bar y_i/\bar x_i$:
$$\widehat{\text{MSE}}(\hat Y_{Rs}) = \sum_{i=1}^{k}\frac{w_i^2 f_i}{n_i}\left(s_{iy}^2 + r_i^2 s_{ix}^2 - 2r_i\,s_{ixy}\right).$$
Also
$$\widehat{\text{MSE}}(\hat Y_{Rs}) = \sum_{i=1}^{k}\frac{w_i^2 f_i}{n_i(n_i - 1)}\sum_{j=1}^{n_i}(y_{ij} - r_i x_{ij})^2.$$
Properties of combined ratio estimator:
Here
$$\hat Y_{Rc} = \frac{\bar y_{st}}{\bar x_{st}}\,\bar X = \frac{\sum_{i=1}^{k} w_i\,\bar y_i}{\sum_{i=1}^{k} w_i\,\bar x_i}\,\bar X = \hat R_c\,\bar X.$$
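The combined ratio estimator can be sketched by pooling the stratum sample means first and then taking a single ratio against the overall mean of $X$ (all data invented):

```python
import numpy as np

# Combined ratio estimator across three invented strata.
rng = np.random.default_rng(6)
strata = []
for Ni, ni in [(500, 50), (300, 30), (200, 20)]:
    Xi = rng.uniform(1.0, 4.0, Ni)
    Yi = 2.0 * Xi + rng.normal(0.0, 0.5, Ni)
    idx = rng.choice(Ni, size=ni, replace=False)   # SRSWOR within the stratum
    strata.append((Ni, Xi, Yi, idx))

N = sum(Ni for Ni, _, _, _ in strata)
X_bar = sum(Xi.sum() for _, Xi, _, _ in strata) / N     # overall mean of X
y_st = sum((Ni / N) * Yi[idx].mean() for Ni, _, Yi, idx in strata)
x_st = sum((Ni / N) * Xi[idx].mean() for Ni, Xi, _, idx in strata)
Y_Rc = (y_st / x_st) * X_bar
true_mean = sum(Yi.sum() for _, _, Yi, _ in strata) / N
```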
It is difficult to find the exact expressions for the bias and mean squared error of $\hat Y_{Rc}$, so we find their approximate expressions. Define
$$\varepsilon_1 = \frac{\bar y_{st} - \bar Y}{\bar Y}, \qquad \varepsilon_2 = \frac{\bar x_{st} - \bar X}{\bar X}.$$
Then
$$E(\varepsilon_1) = 0, \qquad E(\varepsilon_2) = 0,$$
$$E(\varepsilon_1^2) = \sum_{i=1}^{k}\frac{N_i - n_i}{N_i n_i}\,\frac{w_i^2 S_{iY}^2}{\bar Y^2} = \sum_{i=1}^{k}\frac{f_i}{n_i}\,\frac{w_i^2 S_{iY}^2}{\bar Y^2}$$
$$\left[\text{recall that in the case of } \hat Y_R,\; E(\varepsilon_0^2) = \frac{f}{n}\,\frac{S_Y^2}{\bar Y^2} = \frac{f}{n}\,C_Y^2\right],$$
$$E(\varepsilon_2^2) = \sum_{i=1}^{k}\frac{f_i}{n_i}\,\frac{w_i^2 S_{iX}^2}{\bar X^2},$$
$$E(\varepsilon_1\varepsilon_2) = \sum_{i=1}^{k} w_i^2\,\frac{f_i}{n_i}\,\frac{S_{iXY}}{\bar X\bar Y}.$$
Thus
$$\hat Y_{Rc} = \frac{(1+\varepsilon_1)\,\bar Y}{(1+\varepsilon_2)\,\bar X}\,\bar X = \bar Y(1+\varepsilon_1)(1-\varepsilon_2+\varepsilon_2^2-\ldots) = \bar Y(1+\varepsilon_1-\varepsilon_2-\varepsilon_1\varepsilon_2+\varepsilon_2^2-\ldots).$$
Retaining the terms up to order two, for the same reason as in the case of $\hat Y_R$,
$$\hat Y_{Rc} \approx \bar Y(1+\varepsilon_1-\varepsilon_2-\varepsilon_1\varepsilon_2+\varepsilon_2^2),$$
$$\hat Y_{Rc} - \bar Y \approx \bar Y(\varepsilon_1-\varepsilon_2-\varepsilon_1\varepsilon_2+\varepsilon_2^2).$$
$$\begin{aligned}
\text{Bias}(\hat Y_{Rc}) &= E(\hat Y_{Rc} - \bar Y) = \bar Y\,E(\varepsilon_1-\varepsilon_2-\varepsilon_1\varepsilon_2+\varepsilon_2^2) \\
&= \bar Y\left[0 - 0 - E(\varepsilon_1\varepsilon_2) + E(\varepsilon_2^2)\right] \\
&= \bar Y\sum_{i=1}^{k}\frac{f_i}{n_i}\,w_i^2\left(\frac{S_{iX}^2}{\bar X^2} - \frac{S_{iXY}}{\bar X\bar Y}\right) \\
&= \bar Y\sum_{i=1}^{k}\frac{f_i}{n_i}\,w_i^2\left(\frac{S_{iX}^2}{\bar X^2} - \frac{\rho_i\,S_{iX} S_{iY}}{\bar X\bar Y}\right) \\
&= \frac{\bar Y}{\bar X}\sum_{i=1}^{k}\frac{f_i}{n_i}\,w_i^2\,S_{iX}\left(\frac{S_{iX}}{\bar X} - \frac{\rho_i\,S_{iY}}{\bar Y}\right) \\
&= R\sum_{i=1}^{k}\frac{f_i}{n_i}\,w_i^2\,S_{iX}\left(C_{iX} - \rho_i\,C_{iY}\right)
\end{aligned}$$
where $R = \dfrac{\bar Y}{\bar X}$, $\rho_i$ is the correlation coefficient between the observations on $Y$ and $X$ in the $i$th stratum, and $C_{iX}$ and $C_{iY}$ are the coefficients of variation of $X$ and $Y$ respectively in the $i$th stratum.
$$\begin{aligned}
\text{MSE}(\hat Y_{Rc}) &= E(\hat Y_{Rc} - \bar Y)^2 = \bar Y^2\,E(\varepsilon_1-\varepsilon_2-\varepsilon_1\varepsilon_2+\varepsilon_2^2)^2 \\
&\approx \bar Y^2\,E(\varepsilon_1^2 + \varepsilon_2^2 - 2\varepsilon_1\varepsilon_2) \\
&= \bar Y^2\sum_{i=1}^{k} w_i^2\,\frac{f_i}{n_i}\left(\frac{S_{iX}^2}{\bar X^2} + \frac{S_{iY}^2}{\bar Y^2} - \frac{2S_{iXY}}{\bar X\bar Y}\right) \\
&= \bar Y^2\sum_{i=1}^{k}\frac{f_i}{n_i}\,w_i^2\left(\frac{S_{iX}^2}{\bar X^2} + \frac{S_{iY}^2}{\bar Y^2} - \frac{2\rho_i\,S_{iX} S_{iY}}{\bar X\bar Y}\right) \\
&= \sum_{i=1}^{k}\frac{f_i}{n_i}\,w_i^2\left(R^2 S_{iX}^2 + S_{iY}^2 - 2\rho_i\,R\,S_{iX} S_{iY}\right).
\end{aligned}$$
An estimate of $\text{MSE}(\hat Y_{Rc})$ can be obtained by replacing $S_{iX}^2$, $S_{iY}^2$ and $S_{iXY}$ by their unbiased estimators $s_{ix}^2$, $s_{iy}^2$ and $s_{ixy}$ respectively, and replacing $R = \dfrac{\bar Y}{\bar X}$ by $r = \dfrac{\bar y}{\bar x}$. Thus the following estimate is obtained:
$$\widehat{\text{MSE}}(\hat Y_{Rc}) = \sum_{i=1}^{k}\frac{w_i^2 f_i}{n_i}\left(r^2 s_{ix}^2 + s_{iy}^2 - 2r\,s_{ixy}\right).$$
Comparison of combined and separate ratio estimators
An obvious question is which of the estimators $\hat Y_{Rs}$ or $\hat Y_{Rc}$ is better, so we compare their MSEs. Note that the only difference in the terms of these MSEs is due to the form of the ratio estimate: it is
- $R_i = \dfrac{\bar Y_i}{\bar X_i}$ in $\text{MSE}(\hat Y_{Rs})$,
- $R = \dfrac{\bar Y}{\bar X}$ in $\text{MSE}(\hat Y_{Rc})$.
Thus the separate estimator benefits when the regression of $y$ on $x$ is linear and passes through the origin within each stratum. To see this, note that the $i$th term of $\text{MSE}(\hat Y_{Rs})$ is minimized when
$$R_i\,S_{iX}^2 - \rho_i\,S_{iX} S_{iY} = 0, \quad \text{i.e.,} \quad R_i = \frac{\rho_i\,S_{iX} S_{iY}}{S_{iX}^2} = \frac{S_{iXY}}{S_{iX}^2},$$
which is the slope of the regression line of $y$ on $x$ in the $i$th stratum. In such a case $\hat Y_{Rs}$ attains its smallest MSE.
So unless $R_i$ varies considerably from stratum to stratum, the use of $\hat Y_{Rc}$ would provide an estimate of $\bar Y$ with negligible bias, and:
- If $R_i \approx R$, $\hat Y_{Rc}$ can be as precise as $\hat Y_{Rs}$ and its bias will be small. It also does not require knowledge of $\bar X_1, \bar X_2, \ldots, \bar X_k$.
Ratio estimators with reduced bias:
Ratio-type estimators that are unbiased or have smaller bias than $\hat R$, $\hat Y_R$ or $\hat Y_{R(tot)}$ are useful in sample surveys. There are several approaches to derive such estimators; we consider here two such approaches.
1. Unbiased ratio-type estimator: Define the individual ratios $r_i = \dfrac{y_i}{x_i}$, their mean $\bar r = \dfrac{1}{n}\sum_{i=1}^{n} r_i$, and the estimator $\hat Y_{R0} = \bar r\,\bar X$. With $R_i = \dfrac{Y_i}{X_i}$ and $\bar R = \dfrac{1}{N}\sum_{i=1}^{N} R_i$, under SRSWOR
$$E(\bar r) = \frac{1}{n}\sum_{i=1}^{n} E(r_i) = \bar R,$$
so $E(\hat Y_{R0}) = \bar R\,\bar X$ and
$$\text{Bias}(\hat Y_{R0}) = \bar R\,\bar X - \bar Y.$$
Using the result that under SRSWOR, $\text{Cov}(\bar x, \bar y) = \dfrac{N-n}{Nn}\,S_{XY}$, it also follows that
$$\begin{aligned}
\text{Cov}(\bar r, \bar x) &= \frac{N-n}{Nn}\,\frac{1}{N-1}\sum_{i=1}^{N}(R_i - \bar R)(X_i - \bar X) \\
&= \frac{N-n}{Nn}\,\frac{1}{N-1}\left(\sum_{i=1}^{N} R_i X_i - N\bar R\bar X\right) \\
&= \frac{N-n}{Nn}\,\frac{1}{N-1}\left(\sum_{i=1}^{N}\frac{Y_i}{X_i}\,X_i - N\bar R\bar X\right) \\
&= \frac{N-n}{Nn}\,\frac{1}{N-1}\left(N\bar Y - N\bar R\bar X\right) \\
&= \frac{N-n}{n(N-1)}\left[-\text{Bias}(\hat Y_{R0})\right].
\end{aligned}$$
Thus, using the result that in SRSWOR $\text{Cov}(\bar x, \bar y) = \dfrac{N-n}{Nn}\,S_{XY}$, and analogously $\text{Cov}(\bar r, \bar x) = \dfrac{N-n}{Nn}\,S_{RX}$, we have
$$\text{Bias}(\hat Y_{R0}) = -\frac{n(N-1)}{N-n}\,\text{Cov}(\bar r, \bar x) = -\frac{n(N-1)}{N-n}\,\frac{N-n}{Nn}\,S_{RX} = -\frac{N-1}{N}\,S_{RX}$$
where $S_{RX} = \dfrac{1}{N-1}\sum_{i=1}^{N}(R_i - \bar R)(X_i - \bar X)$.
The following result helps in obtaining an unbiased estimator of the population mean. Under the SRSWOR set-up,
$$E(s_{xy}) = S_{xy}$$
where $s_{xy} = \dfrac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar x)(y_i - \bar y)$ and $S_{xy} = \dfrac{1}{N-1}\sum_{i=1}^{N}(X_i - \bar X)(Y_i - \bar Y)$.
So an unbiased estimator of the bias $\text{Bias}(\hat Y_{R0}) = -\dfrac{N-1}{N}\,S_{RX}$ is obtained as
$$\begin{aligned}
\widehat{\text{Bias}}(\hat Y_{R0}) &= -\frac{N-1}{N}\,s_{rx} \\
&= -\frac{N-1}{N(n-1)}\sum_{i=1}^{n}(r_i - \bar r)(x_i - \bar x) \\
&= -\frac{N-1}{N(n-1)}\left(\sum_{i=1}^{n} r_i x_i - n\bar r\bar x\right) \\
&= -\frac{N-1}{N(n-1)}\left(\sum_{i=1}^{n}\frac{y_i}{x_i}\,x_i - n\bar r\bar x\right) \\
&= -\frac{N-1}{N(n-1)}\left(n\bar y - n\bar r\bar x\right) \\
&= -\frac{n(N-1)}{N(n-1)}\left(\bar y - \bar r\bar x\right).
\end{aligned}$$
So
$$E\left[\widehat{\text{Bias}}(\hat Y_{R0})\right] = E(\hat Y_{R0}) - \bar Y, \quad \text{with} \quad \widehat{\text{Bias}}(\hat Y_{R0}) = -\frac{n(N-1)}{N(n-1)}\,(\bar y - \bar r\bar x).$$
Thus
$$E\left[\hat Y_{R0} - \widehat{\text{Bias}}(\hat Y_{R0})\right] = \bar Y, \quad \text{or} \quad E\left[\hat Y_{R0} + \frac{n(N-1)}{N(n-1)}\,(\bar y - \bar r\bar x)\right] = \bar Y.$$
Thus
$$\hat Y_{R0} + \frac{n(N-1)}{N(n-1)}\,(\bar y - \bar r\bar x) = \bar r\,\bar X + \frac{n(N-1)}{N(n-1)}\,(\bar y - \bar r\bar x)$$
is an unbiased estimator of the population mean.
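The bias-corrected estimator can be sketched as follows; the population is invented, and $\bar r$ denotes the mean of the individual ratios $y_i/x_i$:

```python
import numpy as np

# Bias-corrected estimator: r_bar*X_bar + n(N-1)/(N(n-1)) * (y_bar - r_bar*x_bar).
rng = np.random.default_rng(7)
N, n = 1000, 50
X = rng.uniform(1.0, 5.0, N)
Y = 2.0 * X + rng.normal(0.0, 0.5, N)
X_bar = X.mean()                                  # known population mean of X

idx = rng.choice(N, size=n, replace=False)        # SRSWOR sample
x, y = X[idx], Y[idx]
r_bar = (y / x).mean()                            # mean of individual ratios
Y_R0 = r_bar * X_bar                              # mean-of-ratios estimator
correction = n * (N - 1) / (N * (n - 1)) * (y.mean() - r_bar * x.mean())
Y_unbiased = Y_R0 + correction
```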
2. Jackknife method: Suppose the sample of size $n$ is divided at random into $g$ groups, each of size $m$, so that $n = mg$, and suppose the bias of $\hat R$ admits an expansion of the form $E(\hat R) = R + \dfrac{a_1}{n} + \dfrac{a_2}{n^2} + \ldots$ for constants $a_1, a_2, \ldots$. Let $\hat R_i^* = \dfrac{\sum^* y_i}{\sum^* x_i}$, where $\sum^*$ denotes the summation over all values of the sample except the $i$th group. Then $\hat R_i^*$ is based on a simple random sample of size $m(g-1)$, so we can express
$$E(\hat R_i^*) = R + \frac{a_1}{m(g-1)} + \frac{a_2}{m^2(g-1)^2} + \ldots$$
or
$$E\left[(g-1)\,\hat R_i^*\right] = (g-1)R + \frac{a_1}{m} + \frac{a_2}{m^2(g-1)} + \ldots$$
Thus
$$E\left[g\hat R - (g-1)\,\hat R_i^*\right] = R - \frac{a_2}{g(g-1)m^2} + \ldots = R - \frac{a_2}{n^2}\,\frac{g}{g-1} + \ldots$$
Hence the bias of $g\hat R - (g-1)\hat R_i^*$ is of order $\dfrac{1}{n^2}$. Now $g$ estimates of this form can be obtained, one for each group. The jackknife or Quenouille's estimator is then the average of these $g$ estimators:
$$\hat R_Q = g\hat R - (g-1)\,\frac{\sum_{i=1}^{g}\hat R_i^*}{g}.$$
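Quenouille's estimator can be sketched directly from this formula; the sample below is invented, with $g = 10$ groups of $m = 8$ units each:

```python
import numpy as np

# Quenouille's jackknife ratio estimator with g groups of size m.
rng = np.random.default_rng(8)
g, m = 10, 8
n = g * m
x = rng.uniform(1.0, 5.0, n)
y = 2.0 * x + rng.normal(0.0, 0.5, n)

R_hat = y.mean() / x.mean()                       # ordinary ratio estimator
groups = np.arange(n).reshape(g, m)               # n units split into g groups
R_del = np.array([np.delete(y, grp).sum() / np.delete(x, grp).sum()
                  for grp in groups])             # leave-one-group-out ratios
R_Q = g * R_hat - (g - 1) * R_del.mean()          # bias of order 1/n^2
```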
Product method of estimation:
The ratio method improves on the sample mean when $\rho > \dfrac{1}{2}\dfrac{C_x}{C_y}$ (for $R > 0$), which is usually the case when $X$ and $Y$ are positively correlated. This shows that if the auxiliary information is such that $\rho < -\dfrac{1}{2}\dfrac{C_x}{C_y}$, then we cannot use the ratio method of estimation to improve the sample mean as an estimator of the population mean. So there is a need for another type of estimator that also makes use of information on the auxiliary variable $X$. The product estimator is an attempt in this direction.
The product estimator of the population mean $\bar Y$ is defined as
$$\hat Y_P = \frac{\bar y\,\bar x}{\bar X},$$
assuming the population mean $\bar X$ to be known.
Let
$$\varepsilon_0 = \frac{\bar y - \bar Y}{\bar Y}, \qquad \varepsilon_1 = \frac{\bar x - \bar X}{\bar X}.$$
We write $\hat Y_p$ as
$$\hat Y_p = \frac{\bar y\,\bar x}{\bar X} = \bar Y(1+\varepsilon_0)(1+\varepsilon_1) = \bar Y(1+\varepsilon_0+\varepsilon_1+\varepsilon_0\varepsilon_1).$$
(i) Bias of $\hat Y_p$:
$$\text{Bias}(\hat Y_p) = \bar Y\,E(\varepsilon_0\varepsilon_1) = \frac{1}{\bar X}\,\text{Cov}(\bar y, \bar x) = \frac{f}{n\bar X}\,S_{XY},$$
which shows that the bias of $\hat Y_p$ decreases as $n$ increases. The bias of $\hat Y_p$ can be estimated by
$$\widehat{\text{Bias}}(\hat Y_p) = \frac{f}{n\bar X}\,s_{xy}.$$
(ii) MSE of $\hat Y_p$:
Writing $\hat Y_p$ in terms of $\varepsilon_0$ and $\varepsilon_1$, the mean squared error of the product estimator $\hat Y_p$ up to the second order of approximation is
$$\text{MSE}(\hat Y_p) = E(\hat Y_p - \bar Y)^2 = \bar Y^2\,E(\varepsilon_0 + \varepsilon_1 + \varepsilon_0\varepsilon_1)^2 \approx \bar Y^2\,E(\varepsilon_0^2 + \varepsilon_1^2 + 2\varepsilon_0\varepsilon_1).$$
Here terms in $(\varepsilon_0, \varepsilon_1)$ of degree greater than two are assumed to be negligible. Using the expected values derived earlier, we find that
$$\text{MSE}(\hat Y_p) = \frac{f}{n}\left(S_Y^2 + R^2 S_X^2 + 2R\,S_{XY}\right),$$
which can be estimated by
$$\widehat{\text{MSE}}(\hat Y_p) = \frac{f}{n}\left(s_y^2 + r^2 s_x^2 + 2r\,s_{xy}\right)$$
where $r = \bar y/\bar x$.
Comparing $\text{MSE}(\hat Y_p)$ with $\text{Var}(\bar y)$, the product estimator is more efficient than the sample mean based on SRSWOR for
$$\rho < -\frac{1}{2}\,\frac{C_x}{C_y} \quad \text{if } R > 0,$$
and for
$$\rho > -\frac{1}{2}\,\frac{C_x}{C_y} \quad \text{if } R < 0.$$
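The product estimator suits a negatively correlated auxiliary variable, as sketched below on an invented population where $Y$ decreases in $X$:

```python
import numpy as np

# Product estimator with a strongly negatively correlated auxiliary variable.
rng = np.random.default_rng(9)
N, n = 2000, 100
X = rng.uniform(1.0, 5.0, N)
Y = 20.0 - 3.0 * X + rng.normal(0.0, 0.5, N)
X_bar = X.mean()                                  # known population mean of X

idx = rng.choice(N, size=n, replace=False)        # SRSWOR sample
x, y = X[idx], Y[idx]
Y_P = y.mean() * x.mean() / X_bar                 # product estimate of the mean
rho = np.corrcoef(X, Y)[0, 1]                     # strongly negative here
```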
Multivariate Ratio Estimator
Let $y$ be the study variable and $X_1, X_2, \ldots, X_p$ be $p$ auxiliary variables assumed to be correlated with $y$. Further, it is assumed that $X_1, X_2, \ldots, X_p$ are independent. Let $\bar Y, \bar X_1, \bar X_2, \ldots, \bar X_p$ be the population means of the variables $y, X_1, X_2, \ldots, X_p$. We assume that a SRSWOR of size $n$ is selected from the population of $N$ units. The ratio estimator based on $X_i$ alone is $\hat Y_{Ri} = \dfrac{\bar y}{\bar x_i}\,\bar X_i$, and the multivariate ratio estimator is $\hat Y_{MR} = \sum_{i=1}^{p} w_i\,\hat Y_{Ri}$ with weights satisfying $\sum_{i=1}^{p} w_i = 1$.
(i) Bias of the multivariate ratio estimator:
$$\text{Bias}(\hat Y_{Ri}) = \frac{f}{n}\,\bar Y\left(C_i^2 - \rho_i\,C_i C_0\right)$$
where $C_0$ and $C_i$ are the coefficients of variation of $y$ and $X_i$ respectively, and $\rho_i$ is the correlation coefficient between $y$ and $X_i$.
(ii) Variance of the multivariate ratio estimator:
The variance of $\hat Y_{Ri}$ up to the second order of approximation is given by
$$\text{Var}(\hat Y_{Ri}) = \frac{f}{n}\,\bar Y^2\left(C_0^2 + C_i^2 - 2\rho_i\,C_0 C_i\right),$$
and hence
$$\text{Var}(\hat Y_{MR}) = \frac{f}{n}\,\bar Y^2\sum_{i=1}^{p} w_i^2\left(C_0^2 + C_i^2 - 2\rho_i\,C_0 C_i\right).$$
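A multivariate ratio estimator with $p = 2$ independent auxiliary variables can be sketched as follows; the data and the equal weights $w_i = 1/2$ are invented for illustration:

```python
import numpy as np

# Multivariate ratio estimator: weighted combination of per-auxiliary ratio estimates.
rng = np.random.default_rng(10)
N, n = 2000, 100
X1 = rng.uniform(1.0, 5.0, N)                     # first auxiliary variable
X2 = rng.uniform(2.0, 6.0, N)                     # second, independent of the first
Y = X1 + X2 + rng.normal(0.0, 0.5, N)

idx = rng.choice(N, size=n, replace=False)        # SRSWOR sample
y = Y[idx]
w = [0.5, 0.5]                                    # weights summing to one
Y_MR = sum(wi * (y.mean() / Xi[idx].mean()) * Xi.mean()
           for wi, Xi in zip(w, [X1, X2]))
```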