Chapter 2
Simple Linear Regression Analysis
When there is only one independent variable in the linear regression model, the model is generally termed as a simple linear regression model. When there is more than one independent variable in the model, it is termed as a multiple linear regression model. The simple linear regression model is
\[
y = \beta_0 + \beta_1 X + \varepsilon,
\]
where $y$ is termed as the dependent or study variable and $X$ is termed as the independent or explanatory variable. The terms $\beta_0$ and $\beta_1$ are the parameters of the model. The parameter $\beta_0$ is termed as an intercept term and the parameter $\beta_1$ is termed as the slope parameter. These parameters are usually called regression coefficients. The unobservable error component $\varepsilon$ accounts for the failure of the data to lie on the straight line and represents the difference between the true and observed realizations of $y$. There can be several reasons for such a difference, e.g., the effect of all deleted variables in the model, variables may be qualitative, inherent randomness in the observations, etc. We assume that $\varepsilon$ is an independent and identically distributed random variable with mean zero and constant variance $\sigma^2$. Later, we will additionally assume that $\varepsilon$ is normally distributed.
The independent variable is viewed as controlled by the experimenter, so it is considered as non-stochastic, whereas $y$ is viewed as a random variable with
\[
E(y) = \beta_0 + \beta_1 X
\]
and
\[
\operatorname{Var}(y) = \sigma^2.
\]
Sometimes $X$ can also be a random variable. In such a case, instead of the simple mean and simple variance of $y$, we consider the conditional mean of $y$ given $X = x$,
\[
E(y|x) = \beta_0 + \beta_1 x,
\]
and the conditional variance of $y$ given $X = x$,
\[
\operatorname{Var}(y|x) = \sigma^2.
\]
The parameters $\beta_0$, $\beta_1$ and $\sigma^2$ are generally unknown in practice, and $\varepsilon$ is unobserved. The determination of the statistical model
\[
y = \beta_0 + \beta_1 X + \varepsilon
\]
depends on the determination (i.e., estimation) of $\beta_0$, $\beta_1$ and $\sigma^2$. In order to know the values of these parameters, $n$ pairs of observations $(x_i, y_i)$ $(i = 1,\ldots,n)$ on $(X, y)$ are observed and used to determine these unknown parameters. Various methods of estimation can be used to determine the estimates of the parameters. Among them, the methods of least squares and maximum likelihood are the popular methods of estimation.
Suppose a sample of $n$ sets of paired observations $(x_i, y_i)$ $(i = 1,2,\ldots,n)$ is available. These observations are assumed to satisfy the simple linear regression model, and so we can write
\[
y_i = \beta_0 + \beta_1 x_i + \varepsilon_i \qquad (i = 1,2,\ldots,n).
\]
The principle of least squares estimates the parameters $\beta_0$ and $\beta_1$ by minimizing the sum of squares of the differences between the observations and the line in the scatter diagram. Such an idea is viewed from different perspectives. When the vertical differences between the observations and the line in the scatter diagram are considered, and their sum of squares is minimized to obtain the estimates of $\beta_0$ and $\beta_1$, the method is known as direct regression.
[Figure: Direct regression — the vertical distances between the observed points $(x_i, y_i)$ and the line $y = \beta_0 + \beta_1 X$ are minimized.]
Alternatively, the sum of squares of the differences between the observations and the line in the horizontal direction in the scatter diagram can be minimized to obtain the estimates of $\beta_0$ and $\beta_1$. This is known as reverse (or inverse) regression.

[Figure: Reverse regression — the horizontal distances between the observed points $(x_i, y_i)$ and the line $y = \beta_0 + \beta_1 X$ are minimized.]
Instead of horizontal or vertical errors, if the sum of squares of perpendicular distances between the observations and the line in the scatter diagram is minimized to obtain the estimates of $\beta_0$ and $\beta_1$, the method is known as orthogonal regression or major axis regression.

[Figure: Orthogonal regression — the perpendicular distances between the observed points $(x_i, y_i)$ and the line $y = \beta_0 + \beta_1 X$ are minimized.]
Instead of minimizing the distance, the area can also be minimized. The reduced major axis regression method minimizes the sum of the areas of the rectangles defined between the observed data points and the nearest point on the line in the scatter diagram to obtain the estimates of the regression coefficients. This is known as reduced major axis regression.

[Figure: Reduced major axis regression — the areas of the rectangles between the observed points $(x_i, y_i)$ and the line $y = \beta_0 + \beta_1 X$ are minimized.]
The method of least absolute deviation regression considers the sum of the absolute deviations of the observations from the line in the vertical direction in the scatter diagram, as in the case of direct regression, to obtain the estimates of $\beta_0$ and $\beta_1$.
No assumption is required about the form of the probability distribution of $\varepsilon_i$ in deriving the least squares estimates. For the purpose of deriving the statistical inferences only, we assume that the $\varepsilon_i$'s are random variables with $E(\varepsilon_i) = 0$, $\operatorname{Var}(\varepsilon_i) = \sigma^2$ and $\operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0$ for all $i \ne j$ $(i, j = 1,2,\ldots,n)$. This assumption is needed to find the mean, variance and other properties of the least-squares estimates. The assumption that the $\varepsilon_i$'s are normally distributed is utilized while constructing the tests of hypotheses and confidence intervals of the parameters.
Based on these approaches, different estimates of $\beta_0$ and $\beta_1$ are obtained which have different statistical properties. Among them, the direct regression approach is the most popular. Generally, the direct regression estimates are referred to as the least-squares estimates or ordinary least squares estimates.
Direct regression method
This method is also known as ordinary least squares estimation. Assume that $n$ pairs of observations $(x_i, y_i)$, $i = 1,2,\ldots,n$, are available which satisfy the linear regression model $y = \beta_0 + \beta_1 X + \varepsilon$. The direct regression approach minimizes the sum of squares
\[
S(\beta_0, \beta_1) = \sum_{i=1}^n \varepsilon_i^2 = \sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_i)^2
\]
with respect to $\beta_0$ and $\beta_1$. The partial derivatives of $S(\beta_0, \beta_1)$ are
\[
\frac{\partial S(\beta_0,\beta_1)}{\partial \beta_0} = -2\sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_i)
\]
and
\[
\frac{\partial S(\beta_0,\beta_1)}{\partial \beta_1} = -2\sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_i)\,x_i.
\]
The solutions for $\beta_0$ and $\beta_1$ are obtained by setting
\[
\frac{\partial S(\beta_0,\beta_1)}{\partial \beta_0} = 0
\quad\text{and}\quad
\frac{\partial S(\beta_0,\beta_1)}{\partial \beta_1} = 0.
\]
The solutions of these two equations are called the direct regression estimators, or usually the ordinary least squares (OLS) estimators, of $\beta_0$ and $\beta_1$. They are
\[
b_0 = \bar y - b_1 \bar x
\quad\text{and}\quad
b_1 = \frac{s_{xy}}{s_{xx}},
\]
where
\[
s_{xy} = \sum_{i=1}^n (x_i - \bar x)(y_i - \bar y), \qquad
s_{xx} = \sum_{i=1}^n (x_i - \bar x)^2, \qquad
\bar x = \frac{1}{n}\sum_{i=1}^n x_i, \qquad
\bar y = \frac{1}{n}\sum_{i=1}^n y_i.
\]
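As a small numerical illustration (not part of the original notes), the following Python sketch computes these quantities for a made-up data set; the arrays `x` and `y` are hypothetical values.

```python
# A minimal sketch of the direct regression (OLS) estimators,
# using hypothetical data; x and y are made-up values.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

x_bar, y_bar = x.mean(), y.mean()
s_xy = np.sum((x - x_bar) * (y - y_bar))   # s_xy = sum (x_i - xbar)(y_i - ybar)
s_xx = np.sum((x - x_bar) ** 2)            # s_xx = sum (x_i - xbar)^2

b1 = s_xy / s_xx          # slope estimate
b0 = y_bar - b1 * x_bar   # intercept estimate

y_hat = b0 + b1 * x       # fitted values
e = y - y_hat             # residuals; they sum to zero (up to rounding)
print(b0, b1, e.sum())
```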
Further, we have
\[
\frac{\partial^2 S(\beta_0,\beta_1)}{\partial \beta_0^2} = 2n, \qquad
\frac{\partial^2 S(\beta_0,\beta_1)}{\partial \beta_1^2} = 2\sum_{i=1}^n x_i^2, \qquad
\frac{\partial^2 S(\beta_0,\beta_1)}{\partial \beta_0 \,\partial \beta_1} = 2\sum_{i=1}^n x_i = 2n\bar x.
\]
The Hessian matrix, which is the matrix of second-order partial derivatives, is in this case given as
\[
H^* = \begin{pmatrix}
\dfrac{\partial^2 S(\beta_0,\beta_1)}{\partial \beta_0^2} & \dfrac{\partial^2 S(\beta_0,\beta_1)}{\partial \beta_0 \partial \beta_1}\\[6pt]
\dfrac{\partial^2 S(\beta_0,\beta_1)}{\partial \beta_0 \partial \beta_1} & \dfrac{\partial^2 S(\beta_0,\beta_1)}{\partial \beta_1^2}
\end{pmatrix}
= 2\begin{pmatrix} n & n\bar x\\ n\bar x & \sum_{i=1}^n x_i^2 \end{pmatrix}
= 2\begin{pmatrix} \ell'\\ x' \end{pmatrix}\begin{pmatrix} \ell & x \end{pmatrix},
\]
where $\ell = (1,1,\ldots,1)'$ is an $n$-vector of elements unity and $x = (x_1,\ldots,x_n)'$ is an $n$-vector of observations on $X$.
The matrix $H^*$ is positive definite if its determinant and the element in the first row and first column of $H^*$ are positive. The determinant of $H^*$ is given by
\[
|H^*| = 4\left(n\sum_{i=1}^n x_i^2 - n^2\bar x^2\right) = 4n\sum_{i=1}^n (x_i - \bar x)^2 \ge 0.
\]
The case $\sum_{i=1}^n (x_i - \bar x)^2 = 0$ is not interesting because all the observations, in this case, are identical, i.e. $x_i = c$ (some constant). In such a case, there is no relationship between $x$ and $y$ in the context of regression analysis. Since $\sum_{i=1}^n (x_i - \bar x)^2 > 0$, therefore $|H^*| > 0$. So $H^*$ is positive definite for any $(\beta_0, \beta_1)$; therefore, $S(\beta_0, \beta_1)$ has a global minimum at $(b_0, b_1)$.
The fitted linear regression model is
\[
\hat y = b_0 + b_1 x.
\]
The difference between the observed value $y_i$ and the fitted (or predicted) value $\hat y_i$ is called a residual. The $i$th residual is defined as
\[
e_i = y_i - \hat y_i = y_i - (b_0 + b_1 x_i) \qquad (i = 1,2,\ldots,n).
\]
Unbiased property:
Note that $b_1 = s_{xy}/s_{xx}$ and $b_0 = \bar y - b_1\bar x$ are linear combinations of $y_i$ $(i = 1,\ldots,n)$. Therefore
\[
b_1 = \sum_{i=1}^n k_i y_i,
\]
where $k_i = (x_i - \bar x)/s_{xx}$. Note that $\sum_{i=1}^n k_i = 0$ and $\sum_{i=1}^n k_i x_i = 1$, so
\[
E(b_1) = \sum_{i=1}^n k_i E(y_i) = \sum_{i=1}^n k_i (\beta_0 + \beta_1 x_i) = \beta_1.
\]
Thus $b_1$ is an unbiased estimator of $\beta_1$. Similarly,
\[
E(b_0) = E(\bar y - b_1\bar x) = E(\beta_0 + \beta_1\bar x + \bar\varepsilon - b_1\bar x) = \beta_0 + \beta_1\bar x - \beta_1\bar x = \beta_0,
\]
so $b_0$ is an unbiased estimator of $\beta_0$.
Variances:
Using the assumption that the $y_i$'s are independently distributed, the variance of $b_1$ is
\[
\operatorname{Var}(b_1) = \sum_{i=1}^n k_i^2 \operatorname{Var}(y_i) + \sum_{i \ne j} k_i k_j \operatorname{Cov}(y_i, y_j)
= \sigma^2\,\frac{\sum_{i=1}^n (x_i - \bar x)^2}{s_{xx}^2}
= \frac{\sigma^2}{s_{xx}},
\]
since $\operatorname{Cov}(y_i, y_j) = 0$ for $i \ne j$, as $y_1,\ldots,y_n$ are independent.
The variance of $b_0$ is
\[
\operatorname{Var}(b_0) = \operatorname{Var}(\bar y) + \bar x^2 \operatorname{Var}(b_1) - 2\bar x \operatorname{Cov}(\bar y, b_1).
\]
First, we find that
\[
\operatorname{Cov}(\bar y, b_1) = E\big[(\bar y - E(\bar y))(b_1 - E(b_1))\big]
= E\left[\bar\varepsilon \sum_{i=1}^n k_i \varepsilon_i\right]
= \frac{\sigma^2}{n}\sum_{i=1}^n k_i = 0,
\]
using $\sum_{i=1}^n k_i = 0$ and the independence of the $\varepsilon_i$'s. So
\[
\operatorname{Var}(b_0) = \sigma^2\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right).
\]
Covariance:
The covariance between $b_0$ and $b_1$ is
\[
\operatorname{Cov}(b_0, b_1) = \operatorname{Cov}(\bar y, b_1) - \bar x \operatorname{Var}(b_1) = -\frac{\bar x}{s_{xx}}\,\sigma^2.
\]
It can further be shown that the ordinary least squares estimators $b_0$ and $b_1$ possess the minimum variance in the class of linear and unbiased estimators. So they are termed the Best Linear Unbiased Estimators (BLUE). This property is known as the Gauss-Markov theorem, which is discussed later in the module on multiple linear regression.
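The moment results above can be checked empirically. The following sketch (not from the notes; the true parameter values are arbitrarily assumed) simulates repeated samples and compares the empirical mean and variance of $b_1$ with $\beta_1$ and $\sigma^2/s_{xx}$.

```python
# Monte Carlo check of E(b1) = beta1 and Var(b1) = sigma^2 / s_xx,
# under assumed true values beta0 = 1, beta1 = 2, sigma = 0.5.
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 0.5
x = np.linspace(0.0, 10.0, 25)            # fixed design points
s_xx = np.sum((x - x.mean()) ** 2)

b1_draws = []
for _ in range(20000):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=x.size)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / s_xx
    b1_draws.append(b1)

b1_draws = np.array(b1_draws)
print("mean of b1:", b1_draws.mean(), "(theory:", beta1, ")")
print("var  of b1:", b1_draws.var(), "(theory:", sigma**2 / s_xx, ")")
```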
The residual sum of squares is
\[
SS_{res} = \sum_{i=1}^n e_i^2 = \sum_{i=1}^n (y_i - b_0 - b_1 x_i)^2.
\]
Substituting $b_0 = \bar y - b_1 \bar x$,
\[
SS_{res} = \sum_{i=1}^n \big[(y_i - \bar y) - b_1(x_i - \bar x)\big]^2
= \sum_{i=1}^n (y_i - \bar y)^2 + b_1^2 \sum_{i=1}^n (x_i - \bar x)^2 - 2 b_1 \sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)
\]
\[
= s_{yy} + b_1^2 s_{xx} - 2 b_1 s_{xy}
= s_{yy} - b_1^2 s_{xx}
= s_{yy} - \frac{s_{xy}^2}{s_{xx}}
= s_{yy} - b_1 s_{xy},
\]
where
\[
s_{yy} = \sum_{i=1}^n (y_i - \bar y)^2, \qquad \bar y = \frac{1}{n}\sum_{i=1}^n y_i.
\]
2
Estimation of ÿ
2
The estimator of ÿ is obtained from the residual sum of squares as follows. Assuming that y is
i normally
2
distributed, it follows that SSreshas a ÿ distribution with ( 2) n ÿ degrees of freedom, so
SS res 2
~ (ÿ n ÿ
2).
2
ÿ
Thus using the result about the expectation of a chi-square random variable, we have
2
E (SS res )
ÿÿ
( n2) . ÿ
2
Thus an unbiased estimator of ÿ is
SS
res
ÿ
2 s .
n
ÿ
Note that SS has only ( 2) n ÿ degrees of freedom. The two degrees of freedom are lost due to estimation
res
2 2
of b and b . Since s depends on the estimates b and b so it is a model-dependentt estimate of ÿ .
0 1 0 1 ,
The estimators of the variances of $b_0$ and $b_1$ are obtained by replacing $\sigma^2$ by its estimate $\hat\sigma^2 = s^2$ as follows:
\[
\widehat{\operatorname{Var}}(b_0) = s^2\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right)
\quad\text{and}\quad
\widehat{\operatorname{Var}}(b_1) = \frac{s^2}{s_{xx}}.
\]
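Continuing the hypothetical data used earlier, a minimal sketch of these estimated variances:

```python
# Unbiased estimate of sigma^2 and estimated standard errors of b0, b1,
# continuing the hypothetical data from the earlier sketch.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
n = x.size

s_xx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / s_xx
b0 = y.mean() - b1 * x.mean()

ss_res = np.sum((y - b0 - b1 * x) ** 2)    # residual sum of squares
s2 = ss_res / (n - 2)                      # unbiased estimator of sigma^2

se_b0 = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / s_xx))
se_b1 = np.sqrt(s2 / s_xx)
print(s2, se_b0, se_b1)
```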
It is observed that
\[
\sum_{i=1}^n e_i = \sum_{i=1}^n (y_i - \hat y_i) = 0.
\]
In the light of this property, $e_i$ can be regarded as an estimate of the unknown $\varepsilon_i$ $(i = 1,\ldots,n)$. This helps in verifying the different model assumptions on the basis of the observed residuals, which also satisfy:
(i) $\displaystyle\sum_{i=1}^n x_i e_i = 0$,
(ii) $\displaystyle\sum_{i=1}^n \hat y_i e_i = 0$,
(iii) $\displaystyle\sum_{i=1}^n y_i = \sum_{i=1}^n \hat y_i$.
Centered model:
Sometimes it is useful to measure the independent variable around its mean. In such a case, the model $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$ has a centred version as follows:
\[
y_i = \beta_0 + \beta_1(x_i - \bar x) + \beta_1\bar x + \varepsilon_i
= \beta_0^* + \beta_1(x_i - \bar x) + \varepsilon_i \qquad (i = 1,2,\ldots,n),
\]
where $\beta_0^* = \beta_0 + \beta_1 \bar x$. The sum of squares due to error is given by
\[
S(\beta_0^*, \beta_1) = \sum_{i=1}^n \varepsilon_i^2 = \sum_{i=1}^n \big[y_i - \beta_0^* - \beta_1(x_i - \bar x)\big]^2.
\]
Now solving
\[
\frac{\partial S(\beta_0^*, \beta_1)}{\partial \beta_0^*} = 0
\quad\text{and}\quad
\frac{\partial S(\beta_0^*, \beta_1)}{\partial \beta_1} = 0,
\]
we get the direct regression least squares estimates of $\beta_0^*$ and $\beta_1$ as
\[
b_0^* = \bar y
\quad\text{and}\quad
b_1 = \frac{s_{xy}}{s_{xx}},
\]
respectively.
Thus the estimate of the slope parameter remains the same in the usual and centered models, whereas the form of the estimate of the intercept term changes. Further, the Hessian matrix of the second-order partial derivatives of $S(\beta_0^*, \beta_1)$ with respect to $\beta_0^*$ and $\beta_1$ is positive definite at $\beta_0^* = b_0^*$ and $\beta_1 = b_1$, which ensures that $S(\beta_0^*, \beta_1)$ is minimized at $\beta_0^* = b_0^*$ and $\beta_1 = b_1$.
Under the assumption that $E(\varepsilon_i) = 0$, $\operatorname{Var}(\varepsilon_i) = \sigma^2$ and $\operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0$ for all $i \ne j = 1,2,\ldots,n$, it follows that
\[
E(b_0^*) = \beta_0^*, \qquad E(b_1) = \beta_1,
\]
\[
\operatorname{Var}(b_0^*) = \frac{\sigma^2}{n}, \qquad \operatorname{Var}(b_1) = \frac{\sigma^2}{s_{xx}}.
\]
In this case, the fitted model of $y_i = \beta_0^* + \beta_1(x_i - \bar x) + \varepsilon_i$ is
\[
\hat y = \bar y + b_1(x - \bar x).
\]
No intercept term model:
Sometimes a model without an intercept term is used when $x = 0$ implies $y = 0$. The model is
\[
y_i = \beta_1 x_i + \varepsilon_i \qquad (i = 1,2,\ldots,n).
\]
For example, in analyzing the relationship between the velocity ($y$) of a car and its acceleration ($X$), the velocity is zero when the acceleration is zero, so a no-intercept model is appropriate.
The direct regression least squares estimate of $\beta_1$ is obtained by minimizing
\[
S(\beta_1) = \sum_{i=1}^n \varepsilon_i^2 = \sum_{i=1}^n (y_i - \beta_1 x_i)^2
\]
and solving
\[
\frac{\partial S(\beta_1)}{\partial \beta_1} = 0,
\]
which gives
\[
b_1^* = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2}.
\]
The second-order partial derivative $\partial^2 S(\beta_1)/\partial \beta_1^2 = 2\sum_{i=1}^n x_i^2 > 0$ ensures that $b_1^*$ minimizes $S(\beta_1)$.
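A one-line computation suffices for the no-intercept estimate; the sketch below (made-up data) also evaluates the unbiased estimator of $\sigma^2$ given later in this section.

```python
# Direct regression estimate in the no-intercept model y_i = beta1 * x_i + eps_i,
# with made-up data for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 4.2, 5.8, 8.1, 9.9])

b1_star = np.sum(x * y) / np.sum(x ** 2)   # b1* = sum(x_i y_i) / sum(x_i^2)
sigma2_hat = (np.sum(y ** 2) - b1_star * np.sum(x * y)) / (x.size - 1)
print(b1_star, sigma2_hat)
```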
Using the assumption that $E(\varepsilon_i) = 0$, $\operatorname{Var}(\varepsilon_i) = \sigma^2$ and $\operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0$ for all $i \ne j = 1,2,\ldots,n$, the properties of $b_1^*$ can be derived as follows:
\[
E(b_1^*) = \frac{\sum_{i=1}^n x_i E(y_i)}{\sum_{i=1}^n x_i^2} = \frac{\beta_1\sum_{i=1}^n x_i^2}{\sum_{i=1}^n x_i^2} = \beta_1.
\]
Thus $b_1^*$ is an unbiased estimator of $\beta_1$.
The variance of $b_1^*$ is obtained as
\[
\operatorname{Var}(b_1^*) = \frac{\sum_{i=1}^n x_i^2 \operatorname{Var}(y_i)}{\left(\sum_{i=1}^n x_i^2\right)^2}
= \frac{\sigma^2\sum_{i=1}^n x_i^2}{\left(\sum_{i=1}^n x_i^2\right)^2}
= \frac{\sigma^2}{\sum_{i=1}^n x_i^2},
\]
and an unbiased estimator of $\sigma^2$ is obtained as
\[
\hat\sigma^2 = \frac{\sum_{i=1}^n y_i^2 - b_1^*\sum_{i=1}^n x_i y_i}{n-1}.
\]
Maximum likelihood estimation
We assume that the $\varepsilon_i$'s are independent and identically distributed following a normal distribution $N(0, \sigma^2)$. Now we use the method of maximum likelihood to estimate the parameters of the linear regression model
\[
y_i = \beta_0 + \beta_1 x_i + \varepsilon_i \qquad (i = 1,2,\ldots,n);
\]
the observations $y_i$ $(i = 1,2,\ldots,n)$ are then independently distributed with $N(\beta_0 + \beta_1 x_i, \sigma^2)$ for all $i = 1,2,\ldots,n$. The likelihood function of the given observations $(x_i, y_i)$ and unknown parameters $\beta_0$, $\beta_1$ and $\sigma^2$ is
\[
L(x_i, y_i; \beta_0, \beta_1, \sigma^2) = \prod_{i=1}^n \left(\frac{1}{2\pi\sigma^2}\right)^{1/2} \exp\left[-\frac{1}{2\sigma^2}(y_i - \beta_0 - \beta_1 x_i)^2\right].
\]
The maximum likelihood estimates of $\beta_0$, $\beta_1$ and $\sigma^2$ can be obtained by maximizing $L(x_i, y_i; \beta_0, \beta_1, \sigma^2)$, or equivalently $\ln L(x_i, y_i; \beta_0, \beta_1, \sigma^2)$, where
\[
\ln L(x_i, y_i; \beta_0, \beta_1, \sigma^2) = -\frac{n}{2}\ln 2\pi - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_i)^2.
\]
The normal equations are obtained by partial differentiation of the log-likelihood with respect to $\beta_0$, $\beta_1$ and $\sigma^2$ and equating them to zero:
\[
\frac{\partial \ln L(x_i, y_i; \beta_0, \beta_1, \sigma^2)}{\partial \beta_0} = \frac{1}{\sigma^2}\sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_i) = 0,
\]
\[
\frac{\partial \ln L(x_i, y_i; \beta_0, \beta_1, \sigma^2)}{\partial \beta_1} = \frac{1}{\sigma^2}\sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_i)\,x_i = 0,
\]
and
\[
\frac{\partial \ln L(x_i, y_i; \beta_0, \beta_1, \sigma^2)}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_i)^2 = 0.
\]
The solution of these normal equations gives the maximum likelihood estimates of $\beta_0$, $\beta_1$ and $\sigma^2$ as
\[
\tilde b_0 = \bar y - \tilde b_1 \bar x,
\]
\[
\tilde b_1 = \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2} = \frac{s_{xy}}{s_{xx}},
\]
and
\[
\tilde s^2 = \frac{\sum_{i=1}^n (y_i - \tilde b_0 - \tilde b_1 x_i)^2}{n},
\]
respectively.
It can be verified that the Hessian matrix of second-order partial derivatives of $\ln L$ with respect to $\beta_0$, $\beta_1$ and $\sigma^2$ is negative definite at $\beta_0 = \tilde b_0$, $\beta_1 = \tilde b_1$ and $\sigma^2 = \tilde s^2$, which ensures that the likelihood function is maximized at these values.
Note that the least-squares and maximum likelihood estimates of $\beta_0$ and $\beta_1$ are identical. The least-squares and maximum likelihood estimates of $\sigma^2$ are different. In fact, the least-squares estimate of $\sigma^2$ is
\[
s^2 = \frac{1}{n-2}\sum_{i=1}^n (y_i - \hat y_i)^2,
\]
so that
\[
\tilde s^2 = \frac{n-2}{n}\,s^2.
\]
Thus $\tilde b_0$ and $\tilde b_1$ are unbiased estimators of $\beta_0$ and $\beta_1$, whereas $\tilde s^2$ is a biased estimate of $\sigma^2$; it is, however, asymptotically unbiased, because
\[
E(\tilde s^2) = \frac{n-2}{n}\,\sigma^2 \to \sigma^2 \ \text{as}\ n \to \infty.
\]
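The relation $\tilde s^2 = \frac{n-2}{n}\,s^2$ is easy to verify numerically (hypothetical data again):

```python
# Relation between the ML estimate and the unbiased estimate of sigma^2:
# sigma_tilde^2 = ((n - 2) / n) * s^2. Hypothetical data as before.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
n = x.size

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
ss_res = np.sum((y - b0 - b1 * x) ** 2)

s2 = ss_res / (n - 2)           # least-squares (unbiased) estimate
s2_ml = ss_res / n              # maximum likelihood (biased) estimate
print(s2_ml, (n - 2) / n * s2)  # the two printed values coincide
```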
Testing of hypotheses and confidence interval estimation for the slope parameter

Case 1: When $\sigma^2$ is known
Consider the simple linear regression model $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$ $(i = 1,2,\ldots,n)$, where the $\varepsilon_i$'s are independent and identically distributed and follow $N(0, \sigma^2)$. First, we develop a test for the null hypothesis related to the slope parameter,
\[
H_0: \beta_1 = \beta_{10}.
\]
Assuming $\sigma^2$ to be known, we know that $E(b_1) = \beta_1$, $\operatorname{Var}(b_1) = \sigma^2/s_{xx}$, and that $b_1$ is a linear combination of normally distributed random variables, so
\[
b_1 \sim N\left(\beta_1, \frac{\sigma^2}{s_{xx}}\right).
\]
Thus the following statistic is constructed under $H_0$:
\[
Z_1 = \frac{b_1 - \beta_{10}}{\sqrt{\dfrac{\sigma^2}{s_{xx}}}}.
\]
Reject $H_0$ if $|Z_1| \ge z_{\alpha/2}$, where $z_{\alpha/2}$ is the $\alpha/2$ percentage point of the normal distribution. Similarly, the decision rule for a one-sided alternative hypothesis can also be framed.
The $100(1-\alpha)\%$ confidence interval for $\beta_1$ can be obtained using the $Z_1$ statistic as follows:
\[
P\left(-z_{\alpha/2} \le Z_1 \le z_{\alpha/2}\right) = 1-\alpha,
\]
\[
P\left(-z_{\alpha/2} \le \frac{b_1 - \beta_1}{\sqrt{\dfrac{\sigma^2}{s_{xx}}}} \le z_{\alpha/2}\right) = 1-\alpha,
\]
\[
P\left(b_1 - z_{\alpha/2}\sqrt{\frac{\sigma^2}{s_{xx}}} \le \beta_1 \le b_1 + z_{\alpha/2}\sqrt{\frac{\sigma^2}{s_{xx}}}\right) = 1-\alpha.
\]
So the $100(1-\alpha)\%$ confidence interval for $\beta_1$ is
\[
\left[b_1 - z_{\alpha/2}\sqrt{\frac{\sigma^2}{s_{xx}}},\; b_1 + z_{\alpha/2}\sqrt{\frac{\sigma^2}{s_{xx}}}\right],
\]
where $z_{\alpha/2}$ is the $\alpha/2$ percentage point of the $N(0,1)$ distribution.
Case 2: When $\sigma^2$ is unknown
When $\sigma^2$ is unknown, we proceed as follows. We know that
\[
\frac{SS_{res}}{\sigma^2} \sim \chi^2(n-2)
\]
and
\[
E\left(\frac{SS_{res}}{n-2}\right) = \sigma^2.
\]
Further, $SS_{res}/\sigma^2$ and $b_1$ are independently distributed. This result will be proved formally later in the next module on multiple linear regression. It also follows from the result that, under a normal distribution, the maximum likelihood estimates, viz., the sample mean (estimator of the population mean) and the sample variance (estimator of the population variance), are independently distributed; so $b_1$ and $s^2$ are also independently distributed. Thus the following statistic is constructed:
\[
t_0 = \frac{b_1 - \beta_{10}}{\sqrt{\dfrac{\hat\sigma^2}{s_{xx}}}} = \frac{b_1 - \beta_{10}}{\sqrt{\dfrac{SS_{res}}{(n-2)\,s_{xx}}}},
\]
which follows a $t$-distribution with $(n-2)$ degrees of freedom when $H_0$ is true.
So reject $H_0$ if $|t_0| \ge t_{n-2,\alpha/2}$, where $t_{n-2,\alpha/2}$ is the $\alpha/2$ percentage point of the $t$-distribution with $(n-2)$ degrees of freedom. Similarly, the decision rule for the one-sided alternative hypothesis can also be framed.
The $100(1-\alpha)\%$ confidence interval of $\beta_1$ can be obtained using the $t_0$ statistic as follows. Consider
\[
P\left(-t_{n-2,\alpha/2} \le t_0 \le t_{n-2,\alpha/2}\right) = 1-\alpha,
\]
\[
P\left(-t_{n-2,\alpha/2} \le \frac{b_1 - \beta_1}{\sqrt{\dfrac{\hat\sigma^2}{s_{xx}}}} \le t_{n-2,\alpha/2}\right) = 1-\alpha,
\]
\[
P\left(b_1 - t_{n-2,\alpha/2}\sqrt{\frac{\hat\sigma^2}{s_{xx}}} \le \beta_1 \le b_1 + t_{n-2,\alpha/2}\sqrt{\frac{\hat\sigma^2}{s_{xx}}}\right) = 1-\alpha.
\]
So the $100(1-\alpha)\%$ confidence interval for $\beta_1$ is
\[
\left[b_1 - t_{n-2,\alpha/2}\sqrt{\frac{SS_{res}}{(n-2)\,s_{xx}}},\; b_1 + t_{n-2,\alpha/2}\sqrt{\frac{SS_{res}}{(n-2)\,s_{xx}}}\right].
\]
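As an illustration (hypothetical data; the null value $\beta_{10} = 0$ is chosen arbitrarily), the test and interval can be computed with scipy:

```python
# t-test of H0: beta1 = beta10 and a 100(1 - alpha)% confidence interval
# for beta1 when sigma^2 is unknown; data are hypothetical.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
n, alpha, beta10 = x.size, 0.05, 0.0

s_xx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / s_xx
b0 = y.mean() - b1 * x.mean()
ss_res = np.sum((y - b0 - b1 * x) ** 2)
se_b1 = np.sqrt(ss_res / ((n - 2) * s_xx))

t0 = (b1 - beta10) / se_b1
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)     # t_{n-2, alpha/2}
p_value = 2 * (1 - stats.t.cdf(abs(t0), df=n - 2))

ci = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)
print(t0, p_value, ci)   # reject H0 if |t0| >= t_crit
```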
Testing of hypotheses and confidence interval estimation for the intercept term

Case 1: When $\sigma^2$ is known
Suppose the null hypothesis under consideration is
\[
H_0: \beta_0 = \beta_{00},
\]
where $\sigma^2$ is known. Then, using the results that $E(b_0) = \beta_0$ and
\[
\operatorname{Var}(b_0) = \sigma^2\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right),
\]
and that $b_0$ is a linear combination of normally distributed random variables, the following statistic is constructed under $H_0$:
\[
Z_0 = \frac{b_0 - \beta_{00}}{\sqrt{\sigma^2\left(\dfrac{1}{n} + \dfrac{\bar x^2}{s_{xx}}\right)}}.
\]
Reject $H_0$ if $|Z_0| \ge z_{\alpha/2}$, where $z_{\alpha/2}$ is the $\alpha/2$ percentage point of the normal distribution. Similarly, the decision rule for a one-sided alternative hypothesis can also be framed. The $100(1-\alpha)\%$ confidence interval for $\beta_0$ when $\sigma^2$ is known can be derived using the $Z_0$ statistic as follows:
\[
P\left(-z_{\alpha/2} \le Z_0 \le z_{\alpha/2}\right) = 1-\alpha,
\]
\[
P\left(-z_{\alpha/2} \le \frac{b_0 - \beta_0}{\sqrt{\sigma^2\left(\dfrac{1}{n} + \dfrac{\bar x^2}{s_{xx}}\right)}} \le z_{\alpha/2}\right) = 1-\alpha,
\]
\[
P\left(b_0 - z_{\alpha/2}\sqrt{\sigma^2\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right)} \le \beta_0 \le b_0 + z_{\alpha/2}\sqrt{\sigma^2\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right)}\right) = 1-\alpha.
\]
So the $100(1-\alpha)\%$ confidence interval of $\beta_0$ is
\[
\left[b_0 - z_{\alpha/2}\sqrt{\sigma^2\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right)},\;
b_0 + z_{\alpha/2}\sqrt{\sigma^2\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right)}\right].
\]
Case 2: When $\sigma^2$ is unknown
When $\sigma^2$ is unknown, the following statistic is constructed:
\[
t_0 = \frac{b_0 - \beta_{00}}{\sqrt{\dfrac{SS_{res}}{n-2}\left(\dfrac{1}{n} + \dfrac{\bar x^2}{s_{xx}}\right)}},
\]
which follows a $t$-distribution with $(n-2)$ degrees of freedom when $H_0$ is true. Reject $H_0$ whenever $|t_0| \ge t_{n-2,\alpha/2}$, where $t_{n-2,\alpha/2}$ is the $\alpha/2$ percentage point of the $t$-distribution with $(n-2)$ degrees of freedom. Similarly, the decision rule for the one-sided alternative hypothesis can also be framed.
The $100(1-\alpha)\%$ confidence interval of $\beta_0$ can be obtained as follows. Consider
\[
P\left(-t_{n-2,\alpha/2} \le t_0 \le t_{n-2,\alpha/2}\right) = 1-\alpha,
\]
\[
P\left(-t_{n-2,\alpha/2} \le \frac{b_0 - \beta_0}{\sqrt{\dfrac{SS_{res}}{n-2}\left(\dfrac{1}{n} + \dfrac{\bar x^2}{s_{xx}}\right)}} \le t_{n-2,\alpha/2}\right) = 1-\alpha,
\]
\[
P\left(b_0 - t_{n-2,\alpha/2}\sqrt{\frac{SS_{res}}{n-2}\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right)} \le \beta_0 \le b_0 + t_{n-2,\alpha/2}\sqrt{\frac{SS_{res}}{n-2}\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right)}\right) = 1-\alpha.
\]
So the $100(1-\alpha)\%$ confidence interval for $\beta_0$ is
\[
\left[b_0 - t_{n-2,\alpha/2}\sqrt{\frac{SS_{res}}{n-2}\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right)},\;
b_0 + t_{n-2,\alpha/2}\sqrt{\frac{SS_{res}}{n-2}\left(\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right)}\right].
\]
Test of hypothesis for $\sigma^2$
We have considered two types of test statistics for testing the hypothesis about the intercept term and the slope parameter: when $\sigma^2$ is known and when $\sigma^2$ is unknown. While dealing with the case of known $\sigma^2$, the value of $\sigma^2$ is known from some external sources like past experience, long association of the experimenter with the experiment, past studies, etc. In such situations, the experimenter would like to test a hypothesis like
\[
H_0: \sigma^2 = \sigma_0^2 \quad\text{against}\quad H_1: \sigma^2 \ne \sigma_0^2,
\]
where $\sigma_0^2$ is specified. The test statistic is based on the result $SS_{res}/\sigma^2 \sim \chi^2_{n-2}$. So the test statistic is
\[
C_0 = \frac{SS_{res}}{\sigma_0^2} \sim \chi^2_{n-2} \ \text{under}\ H_0.
\]
The decision rule is to reject $H_0$ if $C_0 \le \chi^2_{n-2,\alpha/2}$ or $C_0 \ge \chi^2_{n-2,1-\alpha/2}$, where $\chi^2_{n-2,\gamma}$ denotes the $100\gamma\%$ point of the $\chi^2_{n-2}$ distribution.
Confidence interval for $\sigma^2$
A confidence interval for $\sigma^2$ can also be derived using the result $SS_{res}/\sigma^2 \sim \chi^2_{n-2}$. Thus consider
\[
P\left(\chi^2_{n-2,\alpha/2} \le \frac{SS_{res}}{\sigma^2} \le \chi^2_{n-2,1-\alpha/2}\right) = 1-\alpha,
\]
or
\[
P\left(\frac{SS_{res}}{\chi^2_{n-2,1-\alpha/2}} \le \sigma^2 \le \frac{SS_{res}}{\chi^2_{n-2,\alpha/2}}\right) = 1-\alpha.
\]
The corresponding $100(1-\alpha)\%$ confidence interval for $\sigma^2$ is
\[
\left[\frac{SS_{res}}{\chi^2_{n-2,1-\alpha/2}},\; \frac{SS_{res}}{\chi^2_{n-2,\alpha/2}}\right].
\]
Joint confidence region for $\beta_0$ and $\beta_1$
A joint confidence region for $\beta_0$ and $\beta_1$ can also be constructed. Consider the centered version of the linear regression model,
\[
y_i = \beta_0^* + \beta_1(x_i - \bar x) + \varepsilon_i,
\]
where $\beta_0^* = \beta_0 + \beta_1\bar x$. The least squares estimators of $\beta_0^*$ and $\beta_1$ are
\[
b_0^* = \bar y \quad\text{and}\quad b_1 = \frac{s_{xy}}{s_{xx}},
\]
respectively.
Using the results derived earlier, we have
\[
E(b_1) = \beta_1, \qquad
\operatorname{Var}(b_0^*) = \frac{\sigma^2}{n}, \qquad
\operatorname{Var}(b_1) = \frac{\sigma^2}{s_{xx}}.
\]
When $\sigma^2$ is known, then the statistics
\[
\frac{b_0^* - \beta_0^*}{\sqrt{\dfrac{\sigma^2}{n}}} \sim N(0,1)
\quad\text{and}\quad
\frac{b_1 - \beta_1}{\sqrt{\dfrac{\sigma^2}{s_{xx}}}} \sim N(0,1).
\]
Consequently, the squares of these two statistics follow $\chi^2_1$ distributions:
\[
\frac{(b_0^* - \beta_0^*)^2}{\sigma^2/n} \sim \chi^2_1
\quad\text{and}\quad
\frac{(b_1 - \beta_1)^2}{\sigma^2/s_{xx}} \sim \chi^2_1,
\]
and, since $b_0^*$ and $b_1$ are independently distributed, the sum of these two
\[
\frac{n(b_0^* - \beta_0^*)^2 + s_{xx}(b_1 - \beta_1)^2}{\sigma^2} \sim \chi^2_2.
\]
Since
\[
\frac{SS_{res}}{\sigma^2} \sim \chi^2_{n-2}
\]
and $SS_{res}$ is independently distributed of $b_0^*$ and $b_1$, the ratio
\[
\frac{\Big[n(b_0^* - \beta_0^*)^2 + s_{xx}(b_1 - \beta_1)^2\Big]\Big/(2\sigma^2)}{\dfrac{SS_{res}}{(n-2)\,\sigma^2}} \sim F_{2,n-2}.
\]
Substituting $b_0^* = b_0 + b_1\bar x$ and $\beta_0^* = \beta_0 + \beta_1\bar x$, we get
\[
\frac{n-2}{2}\cdot\frac{Q_f}{SS_{res}} \sim F_{2,n-2},
\]
where
\[
Q_f = n(b_0 - \beta_0)^2 + 2n\bar x(b_0 - \beta_0)(b_1 - \beta_1) + \left(\sum_{i=1}^n x_i^2\right)(b_1 - \beta_1)^2.
\]
Since
\[
P\left[\frac{n-2}{2}\cdot\frac{Q_f}{SS_{res}} \le F_{2,n-2;1-\alpha}\right] = 1-\alpha
\]
holds true for all values of $\beta_0$ and $\beta_1$, the $100(1-\alpha)\%$ confidence region for $\beta_0$ and $\beta_1$ is
\[
\frac{n-2}{2}\cdot\frac{Q_f}{SS_{res}} \le F_{2,n-2;1-\alpha}.
\]
This confidence region is an ellipse which gives $100(1-\alpha)\%$ probability that $\beta_0$ and $\beta_1$ are contained within it.
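A point $(\beta_0, \beta_1)$ can be checked for membership in this ellipse as follows; the data and the candidate point are made up:

```python
# Check whether a candidate (beta0, beta1) lies inside the 100(1 - alpha)%
# joint confidence ellipse; data and candidate values are hypothetical.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
n, alpha = x.size, 0.05

s_xx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / s_xx
b0 = y.mean() - b1 * x.mean()
ss_res = np.sum((y - b0 - b1 * x) ** 2)

beta0, beta1 = 0.0, 2.0   # candidate point to test
q_f = (n * (b0 - beta0) ** 2
       + 2 * n * x.mean() * (b0 - beta0) * (b1 - beta1)
       + np.sum(x ** 2) * (b1 - beta1) ** 2)

stat = (n - 2) / 2 * q_f / ss_res
inside = stat <= stats.f.ppf(1 - alpha, 2, n - 2)   # F_{2, n-2; 1-alpha}
print(stat, inside)
```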
Analysis of variance:
The technique of analysis of variance is usually used for testing hypotheses related to the equality of more than one parameter, like population means or slope parameters. It is more meaningful in the case of the multiple regression model, where there is more than one slope parameter. This technique is discussed and illustrated here to understand the related basic concepts and fundamentals, which will be used in developing the analysis of variance in the next module on the multiple linear regression model where the explanatory variables are more than two. A test statistic for testing $H_0: \beta_1 = 0$ can be developed using the analysis of variance technique as follows.
It is based on the partition of the deviation of an observation about its mean,
\[
y_i - \bar y = (\hat y_i - \bar y) + (y_i - \hat y_i),
\]
which gives
\[
\sum_{i=1}^n (y_i - \bar y)^2 = \sum_{i=1}^n (\hat y_i - \bar y)^2 + \sum_{i=1}^n (y_i - \hat y_i)^2 + 2\sum_{i=1}^n (\hat y_i - \bar y)(y_i - \hat y_i).
\]
Further, consider the cross-product term. Since $\hat y_i - \bar y = b_1(x_i - \bar x)$,
\[
\sum_{i=1}^n (y_i - \bar y)(\hat y_i - \bar y) = b_1\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y) = b_1^2\sum_{i=1}^n (x_i - \bar x)^2 = \sum_{i=1}^n (\hat y_i - \bar y)^2,
\]
so that $\sum_{i=1}^n (\hat y_i - \bar y)(y_i - \hat y_i) = 0$.
Thus we have
\[
\sum_{i=1}^n (y_i - \bar y)^2 = \sum_{i=1}^n (\hat y_i - \bar y)^2 + \sum_{i=1}^n (y_i - \hat y_i)^2.
\]
The term $\sum_{i=1}^n (y_i - \hat y_i)^2$ describes the deviations of the observations from the predicted values, viz., the residual sum of squares
\[
SS_{res} = \sum_{i=1}^n (y_i - \hat y_i)^2,
\]
whereas the term $\sum_{i=1}^n (\hat y_i - \bar y)^2$ describes the proportion of variability explained by the regression,
\[
SS_{reg} = \sum_{i=1}^n (\hat y_i - \bar y)^2.
\]
If all the observations $y_i$ are located on a straight line, then $\sum_{i=1}^n (y_i - \hat y_i)^2 = 0$ and thus
\[
SS_{corrected} = SS_{reg},
\]
where the total sum of squares corrected for the mean,
\[
SS_{corrected} = s_{yy} = \sum_{i=1}^n (y_i - \bar y)^2,
\]
has $(n-1)$ degrees of freedom due to the constraint $\sum_{i=1}^n (y_i - \bar y) = 0$; $SS_{reg}$ has one degree of freedom, and $SS_{res}$ has $(n-2)$ degrees of freedom.
All sums of squares are mutually independent and distributed as $\sigma^2\chi^2_{df}$ with $df$ degrees of freedom if the random errors are normally distributed. The mean squares are $MS_{reg} = SS_{reg}/1$ and $MSE = SS_{res}/(n-2)$, and the test statistic is
\[
F_0 = \frac{MS_{reg}}{MSE}.
\]
If $H_0: \beta_1 = 0$ is true, then $MS_{reg}$ and $MSE$ are independently distributed, and thus
\[
F_0 \sim F_{1,n-2}.
\]
Reject $H_0$ if $F_0 > F_{1,n-2;1-\alpha}$ at the $\alpha$ level of significance. The test procedure can be described in an analysis of variance table:

Source of variation | Sum of squares | Degrees of freedom | Mean square | $F$
Regression | $SS_{reg}$ | $1$ | $MS_{reg}$ | $MS_{reg}/MSE$
Residual | $SS_{res}$ | $n-2$ | $MSE$ |
Total | $s_{yy}$ | $n-1$ | |
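The table quantities and the $F$ test can be computed directly; the following sketch uses hypothetical data and also evaluates the coefficient of determination discussed below:

```python
# Analysis of variance for H0: beta1 = 0, with the table quantities
# computed directly; data are hypothetical.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
n = x.size

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

ss_reg = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares, 1 df
ss_res = np.sum((y - y_hat) ** 2)          # residual sum of squares, n-2 df
s_yy = np.sum((y - y.mean()) ** 2)         # total (corrected) sum of squares

f0 = (ss_reg / 1) / (ss_res / (n - 2))     # F0 = MS_reg / MSE
p_value = 1 - stats.f.cdf(f0, 1, n - 2)
r_squared = 1 - ss_res / s_yy              # coefficient of determination
print(f0, p_value, r_squared)
```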
Note that the sample correlation coefficient between $x$ and $y$ is
\[
r_{xy} = \frac{s_{xy}}{\sqrt{s_{xx}\,s_{yy}}}.
\]
Moreover, we have
\[
b_1 = \frac{s_{xy}}{s_{xx}} = r_{xy}\sqrt{\frac{s_{yy}}{s_{xx}}}.
\]
The estimator of $\sigma^2$ in this case may be expressed as
\[
s^2 = \frac{1}{n-2}\sum_{i=1}^n e_i^2 = \frac{1}{n-2}\,SS_{res},
\]
where
\[
SS_{res} = \sum_{i=1}^n \big[y_i - (b_0 + b_1 x_i)\big]^2
= \sum_{i=1}^n \big[(y_i - \bar y) - b_1(x_i - \bar x)\big]^2
= s_{yy} - 2 b_1 s_{xy} + b_1^2 s_{xx}
= s_{yy} - b_1 s_{xy}
= s_{yy} - \frac{(s_{xy})^2}{s_{xx}}.
\]
The various sums of squares are thus related as
\[
SS_{corrected} = s_{yy}
\]
and
\[
SS_{reg} = SS_{corrected} - SS_{res} = \frac{(s_{xy})^2}{s_{xx}} = b_1^2 s_{xx} = b_1 s_{xy}.
\]
A fitted model can be said to be good when the residuals are small, so a measure of the quality of a fitted model can be based on $SS_{res}$. When the intercept term is present in the model, a measure of goodness of fit is given by
\[
R^2 = 1 - \frac{SS_{res}}{s_{yy}} = \frac{SS_{reg}}{s_{yy}}.
\]
This is known as the coefficient of determination. This measure is based on the concept of how much variability in the data is explained by the fitted model: the ratio $SS_{reg}/s_{yy}$ describes the proportion of variability that is explained by the regression in relation to the total variability. It can be shown that
\[
R^2 = r_{xy}^2,
\]
where $r_{xy}$ is the simple correlation coefficient between $x$ and $y$. Clearly $0 \le R^2 \le 1$, so a value of $R^2$ closer to one indicates a better fit and a value of $R^2$ closer to zero indicates a poor fit.
Prediction of values
An important use of the linear regression model is to predict the value of the study variable. The term prediction of the value of the study variable corresponds to knowing the value of $E(y)$ (in the case of the average value) and the value of $y$ (in the case of the actual value) for a given value of the explanatory variable.
Let $b_0$ and $b_1$ be the OLS estimators of $\beta_0$ and $\beta_1$, respectively.

Prediction of the average value of $y$
Suppose we want to predict the value of $E(y)$ for a given value of $x = x_0$. Then the predictor is given by
\[
\hat\mu_{y|x_0} = b_0 + b_1 x_0.
\]
Predictive bias:
The prediction error is given as
\[
\hat\mu_{y|x_0} - E(y|x_0) = b_0 + b_1 x_0 - E(\beta_0 + \beta_1 x_0 + \varepsilon)
= (b_0 - \beta_0) + (b_1 - \beta_1)\,x_0.
\]
Then
\[
E\big[\hat\mu_{y|x_0} - E(y|x_0)\big] = E(b_0 - \beta_0) + E(b_1 - \beta_1)\,x_0 = 0,
\]
so the predictor $\hat\mu_{y|x_0}$ is unbiased for $E(y|x_0)$.
Predictive variance:
The predictive variance of $\hat\mu_{y|x_0}$ is
\[
PV(\hat\mu_{y|x_0}) = \operatorname{Var}(b_0 + b_1 x_0)
= \operatorname{Var}\big(\bar y + b_1(x_0 - \bar x)\big)
= \operatorname{Var}(\bar y) + (x_0 - \bar x)^2\operatorname{Var}(b_1)
\]
\[
= \frac{\sigma^2}{n} + \frac{\sigma^2 (x_0 - \bar x)^2}{s_{xx}}
= \sigma^2\left[\frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right],
\]
using $\operatorname{Cov}(\bar y, b_1) = 0$. An estimate of the predictive variance is obtained by replacing $\sigma^2$ by $\hat\sigma^2 = MSE$:
\[
\widehat{PV}(\hat\mu_{y|x_0}) = \hat\sigma^2\left[\frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right]
= MSE\left[\frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right].
\]
The predictor $\hat\mu_{y|x_0}$ is a linear combination of normally distributed random variables, so it is also normally distributed:
\[
\hat\mu_{y|x_0} \sim N\big(\beta_0 + \beta_1 x_0,\; PV(\hat\mu_{y|x_0})\big).
\]
So if $\sigma^2$ is known, then the distribution of
\[
\frac{\hat\mu_{y|x_0} - E(y|x_0)}{\sqrt{PV(\hat\mu_{y|x_0})}}
\]
is $N(0,1)$. Thus
\[
P\left[-z_{\alpha/2} \le \frac{\hat\mu_{y|x_0} - E(y|x_0)}{\sqrt{PV(\hat\mu_{y|x_0})}} \le z_{\alpha/2}\right] = 1-\alpha,
\]
which gives the $100(1-\alpha)\%$ confidence interval for $E(y|x_0)$ as
\[
\left[\hat\mu_{y|x_0} - z_{\alpha/2}\sqrt{\sigma^2\left(\frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right)},\;
\hat\mu_{y|x_0} + z_{\alpha/2}\sqrt{\sigma^2\left(\frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right)}\right].
\]
When $\sigma^2$ is unknown, it is replaced by $\hat\sigma^2 = MSE$, and in this case the sampling distribution of
\[
\frac{\hat\mu_{y|x_0} - E(y|x_0)}{\sqrt{MSE\left(\dfrac{1}{n} + \dfrac{(x_0 - \bar x)^2}{s_{xx}}\right)}}
\]
is the $t$-distribution with $(n-2)$ degrees of freedom. The $100(1-\alpha)\%$ interval in this case is obtained from
\[
P\left[-t_{n-2,\alpha/2} \le \frac{\hat\mu_{y|x_0} - E(y|x_0)}{\sqrt{MSE\left(\frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right)}} \le t_{n-2,\alpha/2}\right] = 1-\alpha,
\]
which gives
\[
\left[\hat\mu_{y|x_0} - t_{n-2,\alpha/2}\sqrt{MSE\left(\frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right)},\;
\hat\mu_{y|x_0} + t_{n-2,\alpha/2}\sqrt{MSE\left(\frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right)}\right].
\]
Note that the width of the interval for $E(y|x_0)$ is a function of $x_0$. The interval width is minimum for $x_0 = \bar x$ and widens as $|x_0 - \bar x|$ increases. This is expected, because the best estimates of $y$ are made at $x$-values that lie near the center of the data, and the precision of estimation deteriorates as we move towards the boundary of the $x$-space.

Prediction of the actual value of $y$
If $x_0$ is the value of the explanatory variable, then the actual value predictor for $y$ is
\[
\hat y_0 = b_0 + b_1 x_0.
\]
The true value of $y$ in the prediction period is $y_0 = \beta_0 + \beta_1 x_0 + \varepsilon_0$, where $\varepsilon_0$ denotes the value that would be drawn from the distribution of random error in the prediction period. Note that the form of the predictor is the same as that of the average value predictor, but its predictive error and other properties are different.
Predictive bias:
The prediction error of $\hat y_0$ is given by
\[
\hat y_0 - y_0 = b_0 + b_1 x_0 - (\beta_0 + \beta_1 x_0 + \varepsilon_0)
= (b_0 - \beta_0) + (b_1 - \beta_1)\,x_0 - \varepsilon_0.
\]
Thus
\[
E(\hat y_0 - y_0) = E(b_0 - \beta_0) + E(b_1 - \beta_1)\,x_0 - E(\varepsilon_0) = 0,
\]
which implies that $\hat y_0$ is an unbiased predictor of $y_0$.
Predictive variance:
Because the future observation $y_0$ is independent of $\hat y_0$, the predictive variance of $\hat y_0$ is
\[
PV(\hat y_0) = E(\hat y_0 - y_0)^2
= E\big[(b_0 - \beta_0) + (b_1 - \beta_1)\,x_0 - \varepsilon_0\big]^2.
\]
Expanding, and using the independence of $\varepsilon_0$ from $b_0$ and $b_1$,
\[
PV(\hat y_0) = \operatorname{Var}(b_0) + x_0^2\operatorname{Var}(b_1) + 2x_0\operatorname{Cov}(b_0, b_1) + \operatorname{Var}(\varepsilon_0)
\]
\[
= \sigma^2\left[\frac{1}{n} + \frac{\bar x^2}{s_{xx}}\right] + \frac{x_0^2\,\sigma^2}{s_{xx}} - \frac{2x_0\bar x\,\sigma^2}{s_{xx}} + \sigma^2
= \sigma^2\left[1 + \frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right].
\]
An estimate of the predictive variance is obtained by replacing $\sigma^2$ by $\hat\sigma^2 = MSE$:
\[
\widehat{PV}(\hat y_0) = \hat\sigma^2\left[1 + \frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right]
= MSE\left[1 + \frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right].
\]
Prediction interval:
If $\sigma^2$ is known, then the distribution of
\[
\frac{\hat y_0 - y_0}{\sqrt{PV(\hat y_0)}}
\]
is $N(0,1)$. Therefore
\[
P\left[-z_{\alpha/2} \le \frac{\hat y_0 - y_0}{\sqrt{PV(\hat y_0)}} \le z_{\alpha/2}\right] = 1-\alpha,
\]
which gives the $100(1-\alpha)\%$ prediction interval for $y_0$ as
\[
\left[\hat y_0 - z_{\alpha/2}\sqrt{\sigma^2\left(1 + \frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right)},\;
\hat y_0 + z_{\alpha/2}\sqrt{\sigma^2\left(1 + \frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right)}\right].
\]
When $\sigma^2$ is unknown, then
\[
\frac{\hat y_0 - y_0}{\sqrt{\widehat{PV}(\hat y_0)}}
\]
follows a $t$-distribution with $(n-2)$ degrees of freedom. Consequently
\[
P\left[-t_{n-2,\alpha/2} \le \frac{\hat y_0 - y_0}{\sqrt{\widehat{PV}(\hat y_0)}} \le t_{n-2,\alpha/2}\right] = 1-\alpha,
\]
and the $100(1-\alpha)\%$ prediction interval for $y_0$ is
\[
\left[\hat y_0 - t_{n-2,\alpha/2}\sqrt{MSE\left(1 + \frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right)},\;
\hat y_0 + t_{n-2,\alpha/2}\sqrt{MSE\left(1 + \frac{1}{n} + \frac{(x_0 - \bar x)^2}{s_{xx}}\right)}\right].
\]
This prediction interval is wider than the confidence interval for $E(y|x_0)$ because the prediction interval for $\hat y_0$ depends on both the error from the fitted model as well as the error associated with the future observations.
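Both intervals can be computed as follows; the data and the point $x_0$ are hypothetical:

```python
# 100(1 - alpha)% confidence interval for E(y|x0) and prediction interval
# for a new observation y0 at x0; data and x0 are hypothetical.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
n, alpha, x0 = x.size, 0.05, 4.5

s_xx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / s_xx
b0 = y.mean() - b1 * x.mean()
mse = np.sum((y - b0 - b1 * x) ** 2) / (n - 2)

y0_hat = b0 + b1 * x0
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)

half_mean = t_crit * np.sqrt(mse * (1 / n + (x0 - x.mean()) ** 2 / s_xx))
half_new = t_crit * np.sqrt(mse * (1 + 1 / n + (x0 - x.mean()) ** 2 / s_xx))

print("CI for E(y|x0):", (y0_hat - half_mean, y0_hat + half_mean))
print("PI for y0:    ", (y0_hat - half_new, y0_hat + half_new))
```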
Reverse regression method
The reverse regression approach minimizes the sum of squares of the horizontal distances between the observed data points and the line in the scatter diagram, i.e., it fits the relationship by regressing $x$ on $y$.

[Figure: Reverse regression — the horizontal distances between the observed points $(x_i, y_i)$ and the line $y = \beta_0 + \beta_1 X$ are minimized.]
The reverse regression has been advocated in the analysis of gender (or race) discrimination in salaries. For example, if $y$ denotes salary and $x$ denotes qualifications, and we are interested in determining if there is discrimination, we can ask:
"Whether men and women with the same qualifications (value of $x$) are getting the same salaries (value of $y$)." This question is answered by the direct regression, i.e., the regression of $y$ on $x$. Alternatively, we can ask:
"Whether men and women with the same salaries (value of $y$) have the same qualifications (value of $x$)." This question is answered by the reverse regression, i.e., the regression of $x$ on $y$.
The regression model in this case is written as
\[
x_i = \beta_0^* + \beta_1^* y_i + \delta_i \qquad (i = 1,2,\ldots,n),
\]
where the $\delta_i$'s are the associated random error components and satisfy the assumptions as in the case of the usual simple linear regression model. The reverse regression estimates $\hat\beta_{0R}$ and $\hat\beta_{1R}$ for this model are obtained by interchanging the $x$ and $y$ in the direct regression estimators of $\beta_0$ and $\beta_1$, which gives
\[
\hat\beta_{0R} = \bar x - \hat\beta_{1R}\,\bar y
\quad\text{and}\quad
\hat\beta_{1R} = \frac{s_{xy}}{s_{yy}},
\]
respectively. The residual sum of squares in this case is
\[
SS_{res}^* = s_{xx} - \frac{s_{xy}^2}{s_{yy}}.
\]
Note that
\[
\hat\beta_{1R}\, b_1 = \frac{s_{xy}^2}{s_{xx}\,s_{yy}} = r_{xy}^2,
\]
where $b_1$ is the direct regression estimator of the slope parameter and $r_{xy}$ is the correlation coefficient between $x$ and $y$. Hence if $r_{xy}^2$ is close to one, the direct and reverse regression lines are close to each other. An important application of the reverse regression method is in solving the calibration problem.
Orthogonal regression method (major axis regression method)
The direct and reverse regression methods of estimation assume that the errors in the observations are either in the $x$-direction or the $y$-direction. In other words, the errors can be either in the dependent variable or the independent variable. There can be situations when uncertainties are involved in both the dependent and independent variables. In such situations, orthogonal regression is more appropriate. In order to take care of errors in both directions, the least-squares principle in orthogonal regression minimizes the squared perpendicular distances between the observed data points and the line in the scatter diagram to obtain the estimates of the regression coefficients. This is also known as the major axis regression method. The estimates obtained are called orthogonal regression estimates or major axis regression estimates of the regression coefficients.

[Figure: Orthogonal regression — the perpendicular distances between the observed points $(x_i, y_i)$ and the line $y = \beta_0 + \beta_1 X$ are minimized.]
If the data points $(x_i, y_i)$, $i = 1,2,\ldots,n$, were error-free, they would lie on the line. In practice, the points deviate from the line, and in such a case the squared perpendicular distance of the observed data point $(x_i, y_i)$ from the line is
\[
d_i^2 = (X_i - x_i)^2 + (Y_i - y_i)^2 \qquad (i = 1,2,\ldots,n),
\]
where $(X_i, Y_i)$ denotes the $i$th pair of observation without any error which lies on the line.
The objective is to minimize the sum of squared perpendicular distances $\sum_{i=1}^n d_i^2$ to obtain the estimates of $\beta_0$ and $\beta_1$. The points $(X_i, Y_i)$ lie on the line
\[
Y_i = \beta_0 + \beta_1 X_i,
\]
so let
\[
E_i \equiv Y_i - \beta_0 - \beta_1 X_i = 0.
\]
The regression coefficients are obtained by minimizing $\sum_{i=1}^n d_i^2$ under the constraints $E_i$'s using the Lagrangian multiplier method. The Lagrangian function is
\[
L_0 = \sum_{i=1}^n d_i^2 - 2\sum_{i=1}^n \lambda_i E_i,
\]
where $\lambda_1,\ldots,\lambda_n$ are the Lagrangian multipliers. The set of equations is obtained by setting
\[
\frac{\partial L_0}{\partial X_i} = 0, \quad
\frac{\partial L_0}{\partial Y_i} = 0, \quad
\frac{\partial L_0}{\partial \beta_0} = 0, \quad
\frac{\partial L_0}{\partial \beta_1} = 0 \qquad (i = 1,2,\ldots,n).
\]
Thus we find
\[
\frac{\partial L_0}{\partial X_i} = (X_i - x_i) + \lambda_i\beta_1 = 0,
\]
\[
\frac{\partial L_0}{\partial Y_i} = (Y_i - y_i) - \lambda_i = 0,
\]
\[
\frac{\partial L_0}{\partial \beta_0} = \sum_{i=1}^n \lambda_i = 0,
\]
\[
\frac{\partial L_0}{\partial \beta_1} = \sum_{i=1}^n \lambda_i X_i = 0.
\]
Since
\[
X_i = x_i - \lambda_i\beta_1 \quad\text{and}\quad Y_i = y_i + \lambda_i,
\]
substituting them in $E_i = Y_i - \beta_0 - \beta_1 X_i = 0$ gives
\[
y_i + \lambda_i - \beta_0 - \beta_1(x_i - \lambda_i\beta_1) = 0,
\]
so that
\[
\lambda_i = \frac{\beta_0 + \beta_1 x_i - y_i}{1 + \beta_1^2}.
\]
Substituting $\lambda_i$ in $\sum_{i=1}^n \lambda_i = 0$ gives
\[
\sum_{i=1}^n \frac{\beta_0 + \beta_1 x_i - y_i}{1 + \beta_1^2} = 0,
\]
and using $X_i = x_i - \lambda_i\beta_1$ in $\sum_{i=1}^n \lambda_i X_i = 0$, we get
\[
\sum_{i=1}^n \lambda_i (x_i - \lambda_i\beta_1) = 0.
\]
Substituting the expression for $\lambda_i$, this becomes
\[
\sum_{i=1}^n \frac{(\beta_0 + \beta_1 x_i - y_i)\,x_i}{1 + \beta_1^2}
- \beta_1\sum_{i=1}^n \frac{(\beta_0 + \beta_1 x_i - y_i)^2}{(1 + \beta_1^2)^2} = 0. \tag{1}
\]
Using $\lambda_i = \dfrac{\beta_0 + \beta_1 x_i - y_i}{1 + \beta_1^2}$ in the equation $\sum_{i=1}^n \lambda_i = 0$, we solve
\[
\sum_{i=1}^n \frac{\beta_0 + \beta_1 x_i - y_i}{1 + \beta_1^2} = 0,
\]
which gives the orthogonal regression estimate of $\beta_0$ as
\[
\hat\beta_{0,OR} = \bar y - \hat\beta_{1,OR}\,\bar x,
\]
where $\hat\beta_{1,OR}$ is the orthogonal regression estimate of $\beta_1$.
Now, substituting $\beta_0 = \bar y - \beta_1\bar x$ in equation (1), and writing $u_i = x_i - \bar x$ and $v_i = y_i - \bar y$, equation (1) reduces to
\[
(1 + \beta_1^2)\sum_{i=1}^n u_i(\beta_1 u_i - v_i) - \beta_1\sum_{i=1}^n (\beta_1 u_i - v_i)^2 = 0,
\]
where
\[
u_i = x_i - \bar x, \qquad v_i = y_i - \bar y.
\]
Since $\sum_{i=1}^n u_i = \sum_{i=1}^n v_i = 0$, expanding and simplifying gives
\[
\sum_{i=1}^n \big(\beta_1^2 u_i v_i + \beta_1(u_i^2 - v_i^2) - u_i v_i\big) = 0,
\]
or
\[
\beta_1^2\, s_{xy} + \beta_1(s_{xx} - s_{yy}) - s_{xy} = 0.
\]
Solving this quadratic equation, the orthogonal regression estimate of $\beta_1$ is obtained as
\[
\hat\beta_{1,OR} = \frac{(s_{yy} - s_{xx}) + \operatorname{sign}(s_{xy})\sqrt{(s_{yy} - s_{xx})^2 + 4 s_{xy}^2}}{2 s_{xy}},
\]
where
\[
\operatorname{sign}(s_{xy}) = \begin{cases} +1 & \text{if } s_{xy} > 0,\\ -1 & \text{if } s_{xy} < 0. \end{cases}
\]
Notice that the quadratic gives two solutions for $\hat\beta_{1,OR}$. We choose the solution which minimizes $\sum_{i=1}^n d_i^2$; the other solution maximizes $\sum_{i=1}^n d_i^2$ and is in the direction perpendicular to the optimal solution. The optimal solution is the one with the choice of sign given above.
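A sketch of the orthogonal regression estimates on made-up data:

```python
# Orthogonal (major axis) regression slope from the quadratic
# s_xy * b^2 + (s_xx - s_yy) * b - s_xy = 0; data are hypothetical.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

u, v = x - x.mean(), y - y.mean()
s_xx, s_yy, s_xy = np.sum(u * u), np.sum(v * v), np.sum(u * v)

# Root of the quadratic chosen with sign(s_xy), which minimizes sum d_i^2.
disc = np.sqrt((s_yy - s_xx) ** 2 + 4 * s_xy ** 2)
b1_or = ((s_yy - s_xx) + np.sign(s_xy) * disc) / (2 * s_xy)
b0_or = y.mean() - b1_or * x.mean()
print(b0_or, b1_or)
```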
Reduced major axis regression method
The direct, reverse and orthogonal regression methods minimize the errors in a particular direction, which is usually the distance between the observed data points and the line in the scatter diagram. Alternatively, one can consider the area extended by the data points in a certain neighbourhood: instead of distances, the areas of the rectangles defined between the corresponding observed data points and the nearest points on the line in the scatter diagram can be minimized. Such an approach is more appropriate when uncertainties are present in both the study and explanatory variables. This approach is termed reduced major axis regression.

[Figure: Reduced major axis regression — the areas of the rectangles between the observed points $(x_i, y_i)$ and the line $y = \beta_0 + \beta_1 X$ are minimized.]
Suppose the regression line is $Y_i = \beta_0 + \beta_1 X_i$, on which all the observed points are expected to lie. The area of the rectangle extended between the $i$th observed data point and the line is
\[
A_i = (X_i - x_i)(Y_i - y_i) \qquad (i = 1,2,\ldots,n),
\]
where $(X_i, Y_i)$ denotes the $i$th pair of observation without any error which lies on the line. The total area extended by all the data points is
\[
\sum_{i=1}^n A_i = \sum_{i=1}^n (X_i - x_i)(Y_i - y_i).
\]
All observed data points would fall on the line
\[
Y_i = \beta_0 + \beta_1 X_i
\]
only if all the areas were zero, so let
\[
E_i^* \equiv Y_i - \beta_0 - \beta_1 X_i = 0.
\]
So now the objective is to minimize the sum of areas under the constraints $E_i^*$ to obtain the reduced major axis estimates of the regression coefficients. Using the Lagrangian multiplier method, the Lagrangian function is
\[
L_R = \sum_{i=1}^n A_i - \sum_{i=1}^n \lambda_i E_i^*
= \sum_{i=1}^n (X_i - x_i)(Y_i - y_i) - \sum_{i=1}^n \lambda_i E_i^*,
\]
where $\lambda_1,\ldots,\lambda_n$ are the Lagrangian multipliers. The set of equations is obtained by setting
\[
\frac{\partial L_R}{\partial X_i} = 0, \quad
\frac{\partial L_R}{\partial Y_i} = 0, \quad
\frac{\partial L_R}{\partial \beta_0} = 0, \quad
\frac{\partial L_R}{\partial \beta_1} = 0 \qquad (i = 1,2,\ldots,n).
\]
Thus
\[
\frac{\partial L_R}{\partial X_i} = (Y_i - y_i) + \lambda_i\beta_1 = 0,
\]
\[
\frac{\partial L_R}{\partial Y_i} = (X_i - x_i) - \lambda_i = 0,
\]
\[
\frac{\partial L_R}{\partial \beta_0} = \sum_{i=1}^n \lambda_i = 0,
\]
\[
\frac{\partial L_R}{\partial \beta_1} = \sum_{i=1}^n \lambda_i X_i = 0.
\]
Now, using
\[
X_i = x_i + \lambda_i \quad\text{and}\quad Y_i = y_i - \lambda_i\beta_1
\]
in $E_i^* = Y_i - \beta_0 - \beta_1 X_i = 0$, we find
\[
y_i - \lambda_i\beta_1 - \beta_0 - \beta_1(x_i + \lambda_i) = 0,
\]
so that
\[
\lambda_i = \frac{y_i - \beta_0 - \beta_1 x_i}{2\beta_1}.
\]
Substituting $\lambda_i$ in $\sum_{i=1}^n \lambda_i = 0$, the reduced major axis regression estimate of $\beta_0$ is obtained as
\[
\hat\beta_{0,RM} = \bar y - \hat\beta_{1,RM}\,\bar x,
\]
where $\hat\beta_{1,RM}$ is the reduced major axis regression estimate of $\beta_1$. Using $X_i = x_i + \lambda_i$, the expression for $\lambda_i$, and $\hat\beta_{0,RM}$ in $\sum_{i=1}^n \lambda_i X_i = 0$, we get
\[
\sum_{i=1}^n \frac{y_i - \bar y - \beta_1(x_i - \bar x)}{2\beta_1}\left(x_i + \frac{y_i - \bar y - \beta_1(x_i - \bar x)}{2\beta_1}\right) = 0.
\]
Writing $u_i = x_i - \bar x$ and $v_i = y_i - \bar y$, and using $\sum_{i=1}^n u_i = \sum_{i=1}^n v_i = 0$, this simplifies to
\[
\sum_{i=1}^n \big(v_i^2 - \beta_1^2 u_i^2\big) = 0.
\]
Solving this equation, the reduced major axis regression estimate of $\beta_1$ is obtained as
\[
\hat\beta_{1,RM} = \operatorname{sign}(s_{xy})\sqrt{\frac{s_{yy}}{s_{xx}}},
\]
where
\[
\operatorname{sign}(s_{xy}) = \begin{cases} +1 & \text{if } s_{xy} > 0,\\ -1 & \text{if } s_{xy} < 0, \end{cases}
\]
i.e., we choose the root whose sign is the same as that of $s_{xy}$.
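A sketch of the reduced major axis estimates on made-up data:

```python
# Reduced major axis regression estimates; the slope is
# sign(s_xy) * sqrt(s_yy / s_xx). Data are hypothetical.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

u, v = x - x.mean(), y - y.mean()
s_xx, s_yy, s_xy = np.sum(u * u), np.sum(v * v), np.sum(u * v)

b1_rm = np.sign(s_xy) * np.sqrt(s_yy / s_xx)
b0_rm = y.mean() - b1_rm * x.mean()
print(b0_rm, b1_rm)
```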
Least absolute deviation regression method
The idea of squaring the errors is useful in place of simple errors because random errors can be positive as well as negative; consequently their sum can be close to zero, misleadingly indicating that there is no error in the model. Instead of the sum of random errors, the sum of absolute random errors can be considered, which avoids this problem. The least absolute deviation (LAD) regression method estimates the parameters by the values for which the sum of absolute deviations $\sum_{i=1}^n |\varepsilon_i|$ is minimum. It minimizes the absolute vertical sum of errors, as in the case of direct regression:

[Figure: Least absolute deviation regression — the absolute vertical deviations of the points $(x_i, y_i)$ from the line $y = \beta_0 + \beta_1 X$ are minimized.]
The LAD regression estimates of $\beta_0$ and $\beta_1$ are the values which minimize
\[
LAD(\beta_0, \beta_1) = \sum_{i=1}^n |y_i - \beta_0 - \beta_1 x_i|.
\]
Conceptually, the LAD procedure is more straightforward than the OLS procedure because $|e|$ (the absolute residual) is a more straightforward measure of the size of the residual than $e^2$ (the squared residual). The LAD regression estimates of $\beta_0$ and $\beta_1$ are not available in closed form. Instead, they can be obtained numerically based on algorithms. Moreover, this creates the problems of non-uniqueness and degeneracy in the estimates. Non-uniqueness means that more than one best line can pass through a data point; degeneracy means that the best line through a data point also passes through more than one other data point. The non-uniqueness and degeneracy concepts are used in algorithms to judge the
quality of the estimates. The algorithm for finding the estimators generally proceeds in steps. At each step,
the best line is found that passes through a given data point. The best line always passes through another
data point, and this data point is used in the next step. When there is non-uniqueness, then there is more than
one best line. When there is degeneracy, then the best line passes through more than one other data point.
When either of the problems is present, then there is more than one choice for the data point to be used in the
next step and the algorithm may go around in circles or make a wrong choice of the LAD regression line.
The exact tests of hypothesis and confidence intervals for the LAD regression estimates cannot be derived analytically. Instead, they are derived analogously to the tests of hypothesis and confidence intervals for least-squares estimation.
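Since no closed form exists, one way to obtain LAD estimates numerically is to hand the absolute-error criterion to a general-purpose optimizer; this is a simple stand-in for the specialized line-following algorithms described above, not the notes' own procedure. Data are made up:

```python
# Numerical LAD fit by direct minimization of the absolute-error criterion.
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

def lad_loss(beta):
    b0, b1 = beta
    return np.sum(np.abs(y - b0 - b1 * x))

# Nelder-Mead handles the non-differentiable absolute-value objective.
result = minimize(lad_loss, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
b0_lad, b1_lad = result.x
print(b0_lad, b1_lad)
```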
Estimation of parameters when $X$ is stochastic
In the usual formulation of the regression model, the explanatory variable is assumed to be fixed. In practice, there may be situations in which the explanatory variable also becomes random.
Suppose both the dependent and independent variables are stochastic in the simple linear regression model
\[
y = \beta_0 + \beta_1 X + \varepsilon,
\]
where $X$ and $y$ are jointly distributed. The statistical inferences in such cases are drawn conditionally on $X$. Assume the joint distribution of $X$ and $y$ to be bivariate normal $N(\mu_x, \mu_y, \sigma_x^2, \sigma_y^2, \rho)$, where $\mu_x$ and $\mu_y$ are the means of $X$ and $y$; $\sigma_x^2$ and $\sigma_y^2$ are the variances of $X$ and $y$; and $\rho$ is the correlation coefficient between $X$ and $y$. Then the conditional distribution of $y$ given $X = x$ is the univariate normal with conditional mean
\[
E(y\,|\,X = x) = \mu_{y|x} = \beta_0 + \beta_1 x
\]
and conditional variance
\[
\operatorname{Var}(y\,|\,X = x) = \sigma_{y|x}^2 = \sigma_y^2(1 - \rho^2),
\]
where
\[
\beta_0 = \mu_y - \beta_1\mu_x
\quad\text{and}\quad
\beta_1 = \frac{\sigma_y}{\sigma_x}\,\rho.
\]
When both $X$ and $y$ are stochastic, the problem of estimation of parameters can be reformulated as follows. Consider the conditional random variable $y\,|\,X = x$ having a normal distribution with conditional mean $\mu_{y|x} = E(y\,|\,X = x)$ and conditional variance $\sigma_{y|x}^2 = \operatorname{Var}(y\,|\,X = x)$. Obtain $n$ independently distributed observations $y_i\,|\,x_i$ $(i = 1,\ldots,n)$. The method of maximum likelihood can then be used to estimate the parameters, which yields the estimates of $\beta_0$ and $\beta_1$ as
\[
\tilde b_0 = \bar y - \tilde b_1\bar x
\quad\text{and}\quad
\tilde b_1 = \frac{s_{xy}}{s_{xx}},
\]
respectively.
Moreover, the correlation coefficient
\[
\rho = \frac{E\big[(y - \mu_y)(X - \mu_x)\big]}{\sigma_x\,\sigma_y}
\]
can be estimated by the sample correlation coefficient
\[
\hat\rho = \frac{\sum_{i=1}^n (y_i - \bar y)(x_i - \bar x)}{\sqrt{\sum_{i=1}^n (x_i - \bar x)^2}\sqrt{\sum_{i=1}^n (y_i - \bar y)^2}}
= \frac{s_{xy}}{\sqrt{s_{xx}\,s_{yy}}}
= b_1\sqrt{\frac{s_{xx}}{s_{yy}}}.
\]
Thus
\[
\hat\rho^2 = b_1^2\,\frac{s_{xx}}{s_{yy}} = b_1\,\frac{s_{xy}}{s_{yy}}
= \frac{s_{yy} - \sum_{i=1}^n \hat\varepsilon_i^2}{s_{yy}}
= 1 - \frac{SS_{res}}{s_{yy}}
= R^2,
\]
which is the same as the coefficient of determination. Thus $R^2$ has the same expression as in the case when $X$ is fixed, and $R^2$ again measures the goodness of the fitted model even when $X$ is stochastic.