
ECONOMETRICS

Lecture 1: Classical Regression Model

Chrispin Mphuka

University of Zambia

May 2013



Least Squares Regression

Population regression: $E(y_i \mid X_i) = X_i'\beta$

The disturbance associated with the $i$th data point is
$$\varepsilon_i = y_i - X_i'\beta$$

Sample regression estimate:
$$\hat{y}_i = X_i'b$$

For any value of $b$, we estimate $\varepsilon_i$ with the residual
$$e_i = y_i - X_i'b$$

Note:
$$y_i = X_i'\beta + \varepsilon_i = X_i'b + e_i$$

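As a numerical illustration of these definitions, the sketch below simulates data from a population regression and computes the residuals implied by an arbitrary candidate coefficient vector. The design, sample size, and all variable names are assumptions made for this example, not part of the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated design: n observations, K regressors (first column is a constant).
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([1.0, 0.5, -2.0])      # "true" population coefficients
eps = rng.normal(size=n)               # disturbances
y = X @ beta + eps                     # y_i = X_i'beta + eps_i

# For any candidate coefficient vector b, the residuals are e_i = y_i - X_i'b.
b_candidate = np.array([0.0, 1.0, -1.0])
e = y - X @ b_candidate
print("sum of squared residuals at candidate b:", e @ e)
```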


The Least Squares Coefficient Vector

The least squares coefficient vector minimizes the sum of squared residuals

$$\sum_{i=1}^{n} e_{i0}^{2} = \sum_{i=1}^{n} \left(y_i - X_i'b_0\right)^2$$

where $b_0$ denotes the choice for the coefficient vector. In matrix notation, minimizing the RSS requires us to choose $b_0$ to:

$$\underset{b_0}{\text{Minimize}}\; S(b_0) = e_0'e_0 = (y - Xb_0)'(y - Xb_0)$$


Expanding:



The Least Squares Coefficient Vector

$$e_0'e_0 = y'y - b_0'X'y - y'Xb_0 + b_0'X'Xb_0 = y'y - 2b_0'X'y + b_0'X'Xb_0$$

Necessary condition for a minimum:

$$\frac{\partial S(b_0)}{\partial b_0} = -2X'y + 2X'Xb_0 = 0$$

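A minimal sketch, assuming simulated data, that checks the first-order condition numerically: the analytic gradient $-2X'y + 2X'Xb_0$ is compared with a central finite-difference approximation of $\partial S(b_0)/\partial b_0$ at an arbitrary point. All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 60, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 0.5, -2.0]) + rng.normal(size=n)

def S(b0):
    """Sum of squared residuals S(b0) = (y - X b0)'(y - X b0)."""
    r = y - X @ b0
    return r @ r

b0 = np.array([0.3, -0.7, 1.1])              # arbitrary evaluation point
analytic = -2 * X.T @ y + 2 * X.T @ X @ b0   # gradient from the slide

# Central finite differences, one coordinate at a time.
h = 1e-6
numeric = np.array([
    (S(b0 + h * np.eye(K)[k]) - S(b0 - h * np.eye(K)[k])) / (2 * h)
    for k in range(K)
])
print(np.allclose(analytic, numeric, rtol=1e-4))   # expect True
```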


The Least Squares Coefficient Vector

Least squares normal equations:
$$X'Xb = X'y$$

If the inverse of $X'X$ exists, then the solution of the normal equations is:
$$b = (X'X)^{-1}X'y$$

Sufficient condition for a minimum:
$$\frac{\partial^2 S(b)}{\partial b\,\partial b'} = 2X'X$$
which is positive definite.

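The following sketch, again with assumed simulated data, computes $b$ by solving the normal equations directly and cross-checks it against numpy's least-squares routine; it also verifies that $2X'X$ is positive definite via its eigenvalues. Solving $X'Xb = X'y$ is generally preferable numerically to forming $(X'X)^{-1}$ explicitly.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ rng.normal(size=K) + rng.normal(size=n)

# Solve the normal equations X'X b = X'y.
b = np.linalg.solve(X.T @ X, X.T @ y)

# Same answer from a numerically robust least-squares routine.
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(b, b_lstsq))                       # expect True

# Sufficient condition: the Hessian 2X'X is positive definite (all eigenvalues > 0).
print(np.all(np.linalg.eigvalsh(2 * X.T @ X) > 0))   # expect True
```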


Algebraic Aspects of the Least Squares Solution

From the normal equations we have $X'Xb - X'y = -X'e = 0$.

Hence for every column $x_k$ of $X$, $x_k'e = 0$. If the first column of $X$ is a column of ones, then there are three implications:

The least squares residuals sum to zero, i.e. $x_1'e = i'e = \sum_i e_i = 0$.
The regression hyperplane passes through the point of means: $\bar{y} = \bar{x}'b$.

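A short numerical check of these implications, assuming a simulated design whose first column is a column of ones; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 150
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # first column is ones
y = X @ np.array([2.0, 1.0, -0.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

print(np.allclose(X.T @ e, 0))                     # x_k'e = 0 for every column of X
print(np.isclose(e.sum(), 0))                      # residuals sum to zero
print(np.isclose(y.mean(), X.mean(axis=0) @ b))    # hyperplane passes through the means
```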


Algebraic Aspects of the Least Squares Solution

The mean of the fitted values equals the mean of the actual values: $\bar{\hat{y}} = \bar{y}$.
$$e = y - Xb = y - X(X'X)^{-1}X'y = My$$
$M$ is the residual maker. It is symmetric and idempotent, i.e. $M = M'$ and $M^k = M$ for every positive integer $k$.
$$MX = 0$$
Projection matrix: $\hat{y} = Xb = X(X'X)^{-1}X'y = Py$. $P$ is called the projection matrix; it is also symmetric and idempotent.

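The sketch below constructs $M$ and $P$ for a small simulated design and verifies the properties listed above (symmetry, idempotency, $MX = 0$, $Py = \hat{y}$). Forming the full $n \times n$ matrices is fine for illustration but would be avoided in practice for large $n$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, K = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T   # projection matrix
M = np.eye(n) - P                      # residual maker

print(np.allclose(M, M.T), np.allclose(M @ M, M))   # M symmetric and idempotent
print(np.allclose(P, P.T), np.allclose(P @ P, P))   # P symmetric and idempotent
print(np.allclose(M @ X, 0))                        # MX = 0

b = np.linalg.solve(X.T @ X, X.T @ y)
print(np.allclose(P @ y, X @ b))                    # Py = Xb = y_hat
print(np.allclose(M @ y, y - X @ b))                # My = residuals
```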


Algebraic Aspects of the Least Squares Solution

$M$ and $P$ are orthogonal, i.e. $MP = PM = 0$.

$$PX = X$$
$$y = Py + My$$
$$y'y = y'P'Py + y'M'My = \hat{y}'\hat{y} + e'e$$
$$e'e = y'M'My = y'My = y'e = e'y$$
$$e'e = y'y - b'X'y = y'y - y'Xb$$

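A brief numerical confirmation of the sum-of-squares decomposition above, using assumed simulated data.

```python
import numpy as np

rng = np.random.default_rng(5)
n, K = 80, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ b
e = y - y_hat

print(np.isclose(y @ y, y_hat @ y_hat + e @ e))   # y'y = y_hat'y_hat + e'e
print(np.isclose(e @ e, y @ y - b @ (X.T @ y)))   # e'e = y'y - b'X'y
print(np.isclose(e @ e, e @ y))                   # e'e = e'y
```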


Partitioned Regression and Partial Regression

Suppose we partition the $X$ matrix into two blocks as follows:

$$y = X\beta + \varepsilon = X_1\beta_1 + X_2\beta_2 + \varepsilon$$

The normal equations are:

$$\begin{bmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} X_1'y \\ X_2'y \end{bmatrix} \qquad (1)$$

Therefore, from the first row of (1):

$$b_1 = (X_1'X_1)^{-1}X_1'y - (X_1'X_1)^{-1}X_1'X_2 b_2 = (X_1'X_1)^{-1}X_1'(y - X_2 b_2) \qquad (2)$$

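The sketch below verifies equation (2) numerically: the subvector $b_1$ extracted from the full regression coincides with $(X_1'X_1)^{-1}X_1'(y - X_2 b_2)$. The particular partition into $X_1$ and $X_2$ and the simulated data are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 120
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # first block (with constant)
X2 = rng.normal(size=(n, 2))                              # second block
X = np.hstack([X1, X2])
y = X @ rng.normal(size=X.shape[1]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
b1, b2 = b[:X1.shape[1]], b[X1.shape[1]:]

# Equation (2): b1 = (X1'X1)^{-1} X1'(y - X2 b2)
b1_check = np.linalg.solve(X1.T @ X1, X1.T @ (y - X2 @ b2))
print(np.allclose(b1, b1_check))   # expect True
```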


Partitioned Regression and Partial Regression

THEOREM (Orthogonal Partitioned Regression): In the linear least squares regression of $y$ on two sets of variables $X_1$ and $X_2$, if $X_1'X_2 = 0$, then $b_1$ is obtained by regressing $y$ on $X_1$ alone and $b_2$ by regressing $y$ on $X_2$ alone; this follows immediately from (2).


Substituting (2) into the second row of (1) and solving for $b_2$ gives:

$$X_2'X_1(X_1'X_1)^{-1}X_1'y - X_2'X_1(X_1'X_1)^{-1}X_1'X_2 b_2 + X_2'X_2 b_2 = X_2'y$$

$$b_2 = \left[X_2'\left(I - X_1(X_1'X_1)^{-1}X_1'\right)X_2\right]^{-1} X_2'\left(I - X_1(X_1'X_1)^{-1}X_1'\right)y = (X_2'M_1X_2)^{-1}X_2'M_1y$$

where $M_1 = I - X_1(X_1'X_1)^{-1}X_1'$.



Partitioned Regression and Partial Regression

$$b_2 = (X_2'M_1X_2)^{-1}X_2'M_1y = (X_2^{*\prime}X_2^{*})^{-1}X_2^{*\prime}y^{*}$$

where $X_2^{*} = M_1X_2$ and $y^{*} = M_1y$.

Frisch-Waugh Theorem: In the linear least squares regression of the vector $y$ on two sets of variables $X_1$ and $X_2$, the subvector $b_2$ is the set of coefficients obtained when the residuals from a regression of $y$ on $X_1$ are regressed on the set of residuals obtained when each column of $X_2$ is regressed on $X_1$.

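Finally, a minimal sketch verifying the Frisch-Waugh result on simulated data: $b_2$ from the full regression matches the coefficients from regressing the $M_1$-residualized $y$ on the $M_1$-residualized columns of $X_2$. The data and partition are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, 2))
X = np.hstack([X1, X2])
y = X @ rng.normal(size=X.shape[1]) + rng.normal(size=n)

# Full regression: subvector b2 from regressing y on [X1, X2].
b = np.linalg.solve(X.T @ X, X.T @ y)
b2_full = b[X1.shape[1]:]

# Frisch-Waugh: residualize y and each column of X2 on X1, then regress.
M1 = np.eye(n) - X1 @ np.linalg.inv(X1.T @ X1) @ X1.T
y_star = M1 @ y
X2_star = M1 @ X2
b2_fwl = np.linalg.solve(X2_star.T @ X2_star, X2_star.T @ y_star)

print(np.allclose(b2_full, b2_fwl))   # expect True
```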
