Econometrics: Chrispin Mphuka
Chrispin Mphuka
University of Zambia
May 2013
$\varepsilon_i = y_i - X_i'\beta$

Sample regression estimate:

$\hat{y}_i = X_i'b$

For any value of $b$, we estimate $\varepsilon_i$ with the residual

$e_i = y_i - X_i'b$

Note:

$y_i = X_i'\beta + \varepsilon_i = X_i'b + e_i$
The least squares coefficient vector minimizes the sum of squared residuals

$$\sum_{i=1}^{n} e_{i0}^2 = \sum_{i=1}^{n} \left(y_i - X_i'b_0\right)^2$$

where $b_0$ denotes the choice for the coefficient vector. In matrix notation, minimizing the RSS requires us to choose $b_0$ to minimize

$$S(b_0) = e_0'e_0 = (y - Xb_0)'(y - Xb_0)$$
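Expanding this quadratic form makes the derivative below immediate (a standard intermediate step, filled in here; it is not spelled out in the slides):

$$S(b_0) = y'y - 2b_0'X'y + b_0'X'Xb_0$$

Differentiating with respect to $b_0$ and setting the gradient to zero gives the first-order condition.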
$$\frac{\partial S(b_0)}{\partial b_0} = -2X'y + 2X'Xb_0 = 0$$

which yields the normal equations

$$X'Xb = X'y$$

If the inverse of $X'X$ exists, then the solution of the normal equations is:

$$b = (X'X)^{-1}X'y$$
Sufficient condition for a minimum:

$$\frac{\partial^2 S(b)}{\partial b \, \partial b'} = 2X'X$$

which is positive definite.
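The closed form is easy to check numerically. Below is a minimal NumPy sketch on simulated data (the variable names and the simulated design are illustrative, not from the slides); solving the normal equations $X'Xb = X'y$ directly is numerically preferable to forming $(X'X)^{-1}$ explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: n observations, k regressors (first column is a constant)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])      # "true" population coefficients
y = X @ beta + rng.normal(size=n)      # y = X beta + eps

# Least squares: solve the normal equations X'X b = X'y
b = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's built-in least-squares routine
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(b, b_lstsq)
```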
Fitted values: $\hat{y} = Xb$. The residual vector is
$$e = y - Xb = y - X(X'X)^{-1}X'y = My$$

$M$ is a residual maker. It is symmetric and idempotent, i.e. $M = M'$ and $M^k = M$ for all positive integers $k$. Moreover,

$$MX = 0$$
Projection matrix: $\hat{y} = Xb = X(X'X)^{-1}X'y = Py$. $P$ is called the projection matrix; it is also symmetric and idempotent.
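These properties are easy to confirm numerically. A sketch reusing `X`, `y`, `n`, and `b` from the example above, together with the fact that $M = I - P$ (the matrices are formed explicitly only for illustration; building $n \times n$ matrices is wasteful in real applications):

```python
# Projection matrix P = X (X'X)^{-1} X' and residual maker M = I - P
P = X @ np.linalg.solve(X.T @ X, X.T)
M = np.eye(n) - P

assert np.allclose(P, P.T) and np.allclose(M, M.T)      # symmetric
assert np.allclose(P @ P, P) and np.allclose(M @ M, M)  # idempotent
assert np.allclose(M @ X, 0.0)                          # M annihilates X
assert np.allclose(P @ y, X @ b)                        # Py = fitted values
assert np.allclose(M @ y, y - X @ b)                    # My = residuals
```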
$$y = X\beta + \varepsilon = X_1\beta_1 + X_2\beta_2 + \varepsilon$$

The normal equations are:

$$\begin{bmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} X_1'y \\ X_2'y \end{bmatrix} \qquad (1)$$

Solving the first block of (1) for $b_1$:

$$b_1 = (X_1'X_1)^{-1}X_1'y - (X_1'X_1)^{-1}X_1'X_2 b_2 = (X_1'X_1)^{-1}X_1'(y - X_2 b_2) \qquad (2)$$

Inserting (2) into the second block of (1) gives

$$X_2'X_1(X_1'X_1)^{-1}X_1'y - X_2'X_1(X_1'X_1)^{-1}X_1'X_2 b_2 + X_2'X_2 b_2 = X_2'y$$
Collecting the $b_2$ terms and solving:

$$b_2 = \left[X_2'\left(I - X_1(X_1'X_1)^{-1}X_1'\right)X_2\right]^{-1} X_2'\left(I - X_1(X_1'X_1)^{-1}X_1'\right)y = (X_2'M_1X_2)^{-1}X_2'M_1y$$

where $M_1 = I - X_1(X_1'X_1)^{-1}X_1'$ is the residual maker for $X_1$.
Equivalently,

$$b_2 = (X_2'M_1X_2)^{-1}X_2'M_1y = (X_2^{*\prime}X_2^*)^{-1}X_2^{*\prime}y^*$$

where $X_2^* = M_1X_2$ and $y^* = M_1y$.
Frisch-Waugh Theorem: In the linear least squares regression of vector $y$ on two sets of variables, $X_1$ and $X_2$, the subvector $b_2$ is the set of coefficients obtained when the residuals from a regression of $y$ on $X_1$ are regressed on the set of residuals obtained when each column of $X_2$ is regressed on $X_1$.
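The theorem can be verified directly on the simulated data from the earlier sketches (the split of `X` into `X1` and `X2` below is arbitrary and purely illustrative):

```python
# Split the regressors: X1 = the constant, X2 = the remaining columns
X1, X2 = X[:, :1], X[:, 1:]

# Full regression of y on [X1, X2]
b_full = np.linalg.solve(X.T @ X, X.T @ y)

# Residual maker for X1; residualize y and each column of X2
M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
y_star, X2_star = M1 @ y, M1 @ X2

# Regress the residualized y on the residualized X2
b2 = np.linalg.solve(X2_star.T @ X2_star, X2_star.T @ y_star)

# Frisch-Waugh: this reproduces the X2 block of the full coefficient vector
assert np.allclose(b2, b_full[1:])
```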