Introduction to Econometrics (5th edition), Fall 2020

AUTOCORRELATION
Assumption C.5 states that the values of the disturbance term in the
observations in the sample are generated independently of each other.
In a plot of such a disturbance term, it is clear that this assumption is not valid. Positive values tend to be followed by positive ones, and negative values by negative ones. Successive values tend to have the same sign. This is described as positive autocorrelation.
Y_t = β₁ + β₂X_t + u_t

u_t = ρu_{t−1} + ε_t

[Figure: a simulated disturbance term subject to AR(1) autocorrelation; vertical axis from −3 to 3]
We will now look at examples of the patterns that are generated when the disturbance term is subject to AR(1) autocorrelation. The object is to provide some benchmark images to help you assess plots of residuals in time series regressions.
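Benchmark series of this kind are easy to generate numerically. A minimal sketch in Python (the seed, series length, and function name are illustrative choices, not part of the original exposition):

```python
import numpy as np

def simulate_ar1(rho, eps):
    """Generate u_t = rho * u_{t-1} + eps_t from a given innovation series."""
    u = np.zeros_like(eps)
    u[0] = eps[0]
    for t in range(1, len(eps)):
        u[t] = rho * u[t - 1] + eps[t]
    return u

rng = np.random.default_rng(42)
eps = rng.standard_normal(1000)

for rho in (0.0, 0.5, 0.9):
    u = simulate_ar1(rho, eps)
    # sample lag-1 autocorrelation: rises toward 1 as rho rises
    r1 = np.corrcoef(u[:-1], u[1:])[0, 1]
    print(f"rho = {rho:.1f}: lag-1 sample autocorrelation = {r1:.2f}")
```

Using the same innovation series for every value of ρ, as the slides do, makes the effect of increasing ρ directly comparable across panels.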
u_t = ρu_{t−1} + ε_t

[Figures: simulated series u_t = ρu_{t−1} + ε_t for ρ = 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9, each generated from the same set of 50 independently distributed values of ε_t; vertical axis from −3 to 3]
With ρ equal to 0.9, the sequences of values with the same sign have become long and the tendency to return to 0 has become weak.
[Figure: u_t = 0.95u_{t−1} + ε_t]

[Figure: u_t = 0.0u_{t−1} + ε_t — the original independently distributed series]
Next we will look at negative autocorrelation, starting with the same set of 50 independently distributed values of ε_t.
[Figure: u_t = −0.3u_{t−1} + ε_t]
[Figure: u_t = −0.6u_{t−1} + ε_t]
With ρ equal to −0.6, you can see that positive values tend to be followed by negative ones, and vice versa, more frequently than you would expect as a matter of chance.
[Figure: u_t = −0.9u_{t−1} + ε_t]
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample: 1959 2003
Included observations: 45
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.005625 0.167903 0.033501 0.9734
LGDPI 1.031918 0.006649 155.1976 0.0000
LGPRHOUS -0.483421 0.041780 -11.57056 0.0000
============================================================
R-squared 0.998583 Mean dependent var 6.359334
Adjusted R-squared 0.998515 S.D. dependent var 0.437527
S.E. of regression 0.016859 Akaike info criterion -5.263574
Sum squared resid 0.011937 Schwarz criterion -5.143130
Log likelihood 121.4304 F-statistic 14797.05
Durbin-Watson stat 0.633113 Prob(F-statistic) 0.000000
============================================================
[Figure: residuals from the housing (LGHOUS) regression, 1959–2003]
This is the plot of the residuals, of course, not the disturbance term. But if the disturbance term is subject to autocorrelation, then the residuals will be subject to a similar pattern of autocorrelation.
CONSEQUENCES OF AUTOCORRELATION
Y_t = β₁ + β₂X_t + u_t

β̂₂ = β₂ + Σ_{t=1}^{T} a_t u_t,  where  a_t = (X_t − X̄) / Σ_{s=1}^{T} (X_s − X̄)²

E(β̂₂) = β₂ + E(Σ_{t=1}^{T} a_t u_t) = β₂ + Σ_{t=1}^{T} E(a_t u_t) = β₂ + Σ_{t=1}^{T} E(a_t)E(u_t)

If X is nonstochastic, E(a_t u_t) = a_t E(u_t), and provided that E(u_t) = 0 for all t, unbiasedness is unaffected by the autocorrelation.
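The unbiasedness result can be checked by simulation. A sketch with a fixed regressor and illustrative parameter values (the choices of β₁, β₂, ρ, T, and the seed are assumptions made for the example, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
T, reps = 50, 2000
beta1, beta2, rho = 1.0, 2.0, 0.7   # illustrative parameter values
X = np.linspace(1, 10, T)           # fixed (nonstochastic) regressor

slopes = []
for _ in range(reps):
    eps = rng.standard_normal(T)
    u = np.zeros(T)
    u[0] = eps[0]
    for t in range(1, T):
        u[t] = rho * u[t - 1] + eps[t]
    Y = beta1 + beta2 * X + u
    a = (X - X.mean()) / ((X - X.mean()) ** 2).sum()
    # beta2 + sum(a_t * u_t) is exactly the OLS slope estimate
    slopes.append(beta2 + (a * u).sum())

print(np.mean(slopes))   # close to 2.0 despite the autocorrelation
```

The mean of the slope estimates stays close to the true β₂ even though the disturbances are strongly autocorrelated, illustrating that unbiasedness survives; only efficiency and the standard errors are affected.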
u_t = ρu_{t−1} + ε_t
u_{t−1} = ρu_{t−2} + ε_{t−1}
u_t = ρ²u_{t−2} + ρε_{t−1} + ε_t

For example, in the case of AR(1) autocorrelation, lagging the process one time period, we have the second line. Substituting for u_{t−1} in the first equation, we obtain the third.
Substituting repeatedly in the same way, u_t can be expressed in terms of current and past innovations:

u_t = ε_t + ρε_{t−1} + ρ²ε_{t−2} + ...

E(u_t) = E(ε_t) + ρE(ε_{t−1}) + ρ²E(ε_{t−2}) + ... = 0
For multiple regression analysis, the demonstration is the same, except that a_t is replaced by a_t*, where a_t* depends on all of the observations on all of the explanatory variables in the model.
One consequence of autocorrelation is that, although the OLS estimators remain unbiased, they are no longer efficient. Another is that the expressions for the standard errors are invalid, since they are based on the assumption that there is no autocorrelation.
Y_t = β₁ + β₂X_t + β₃Y_{t−1} + u_t
u_t = ρu_{t−1} + ε_t

Now we come to the special case where OLS yields inconsistent estimators if the disturbance term is subject to autocorrelation.
Y_t = β₁ + β₂X_t + β₃Y_{t−1} + u_t
u_t = ρu_{t−1} + ε_t
Y_{t−1} = β₁ + β₂X_{t−1} + β₃Y_{t−2} + u_{t−1}

Lagging the ADL(1,0) model by one time period, we obtain the third line. Thus Y_{t−1} depends on u_{t−1}. As a consequence of the AR(1) autocorrelation, u_t also depends on u_{t−1}. Hence the regressor Y_{t−1} is correlated with the disturbance term, and OLS is inconsistent.
TESTS FOR AUTOCORRELATION I: BREUSCH–GODFREY TEST

u_t = ρu_{t−1} + ε_t
û_t = ρû_{t−1} + error

We will initially confine the discussion of the tests for autocorrelation to its most common form, the AR(1) process. If the disturbance term follows the AR(1) process, it is reasonable to hypothesize that, as an approximation, the residuals will conform to a similar process.
After all, provided that the conditions for the consistency of the OLS estimators are satisfied, as the sample size becomes large, the OLS estimators will converge on their true values.
If the OLS estimators converge on their true values, the location of the regression line will converge on the true relationship, and the residuals will coincide with the values of the disturbance term.
û_t = ρ̂û_{t−1}

[Figure: sampling distributions of ρ̂ for T = 25, 50, 100, 200; the vertical line marks the true value ρ = 0.7]

This is illustrated with the simulation shown in the figure. The true model is Y_t = 10 + 2.0t + u_t, with u_t = 0.7u_{t−1} + ε_t, so u_t is generated as an AR(1) process with ρ = 0.7.
The values of the parameters in the model for Y_t make no difference to the distributions of the estimator of ρ.
Mean of ρ̂ (true value 0.7):

T = 25: 0.47
T = 50: 0.59
T = 100: 0.65
T = 200: 0.68
However, as the sample size increases, the downward bias diminishes, and it is clear that the distribution of the estimator is converging on 0.7 as the sample becomes large. Inference in finite samples will be approximate, given the autoregressive nature of the regression.
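The finite-sample downward bias of ρ̂ can be reproduced numerically. A sketch assuming the trend model of the simulation above (seeds and replication counts are arbitrary choices):

```python
import numpy as np

def rho_hat(T, rho, rng):
    """Fit Y on an intercept and trend by OLS, then regress the
    residuals on their own lag (no intercept) to estimate rho."""
    t_idx = np.arange(1, T + 1, dtype=float)
    eps = rng.standard_normal(T)
    u = np.zeros(T)
    u[0] = eps[0]
    for t in range(1, T):
        u[t] = rho * u[t - 1] + eps[t]
    Y = 10.0 + 2.0 * t_idx + u
    Z = np.column_stack([np.ones(T), t_idx])
    resid = Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]
    return (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])

rng = np.random.default_rng(2)
for T in (25, 50, 100, 200):
    est = np.mean([rho_hat(T, 0.7, rng) for _ in range(2000)])
    print(T, round(est, 2))   # means rise toward 0.7 as T grows
```

The means move up toward the true value 0.7 as T increases, matching the pattern in the table above.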
Breusch–Godfrey test

Y_t = β₁ + Σ_{j=2}^{k} β_j X_{jt} + u_t

û_t = γ₁ + Σ_{j=2}^{k} γ_j X_{jt} + ρû_{t−1}
As a repercussion, a simple regression of û_t on û_{t−1} will produce an inconsistent estimate of ρ. The solution is to include all of the explanatory variables in the original model in the residuals autoregression.
If the original model is the first equation where, say, one of the X variables is Y_{t−1}, then the residuals regression would be the second equation.
The idea is that, by including the X variables, one is controlling for the effects of any endogeneity on the residuals.
The Lagrange multiplier version of the test uses the statistic nR², where n is the number of observations and R² is taken from the residuals regression; under the null hypothesis of no autocorrelation, it is asymptotically distributed as chi-squared with one degree of freedom.

A simple t test on the coefficient of û_{t−1} has also been proposed, again with asymptotic validity.
The test extends to higher-order autocorrelation by including q lagged residuals:

û_t = γ₁ + Σ_{j=2}^{k} γ_j X_{jt} + Σ_{s=1}^{q} ρ_s û_{t−s}
For the Lagrange multiplier version of the test, the test statistic remains nR², now with q degrees of freedom under the null (and with n smaller than before, the inclusion of the additional lagged residuals leading to a further loss of initial observations).
The t test version becomes an F test comparing RSS for the residuals regression with RSS for the same specification without the lagged residuals. Again, the test is valid only asymptotically.
TESTS FOR AUTOCORRELATION II: DURBIN–WATSON TEST

Durbin–Watson test

d = Σ_{t=2}^{T} (û_t − û_{t−1})² / Σ_{t=1}^{T} û_t²

The first major test to be developed and popularized for the detection of autocorrelation was the Durbin–Watson test for AR(1) autocorrelation, based on the Durbin–Watson d statistic calculated from the residuals using the expression shown.
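The d statistic is simple to compute directly from the definition; a sketch in Python, cross-checked against the statsmodels implementation:

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

def dw(resid):
    """d = sum_{t=2}^{T} (e_t - e_{t-1})^2 / sum_{t=1}^{T} e_t^2"""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(4)
e_white = rng.standard_normal(500)   # no autocorrelation: d should be near 2
print(dw(e_white))

# cross-check against statsmodels' implementation of the same formula
assert np.isclose(dw(e_white), durbin_watson(e_white))
```

For serially independent residuals the statistic sits near 2, consistent with the large-sample result d → 2 − 2ρ discussed next.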
In large samples, d → 2 − 2ρ. Hence:

No autocorrelation: d → 2
Severe positive autocorrelation: d → 0
Severe negative autocorrelation: d → 4

[Figure: the d scale from 0 to 4, with positive autocorrelation near 0, no autocorrelation around 2, and negative autocorrelation near 4]
The critical value of d, d_crit, lies below 2 when testing for positive autocorrelation and above 2 when testing for negative autocorrelation, and it depends on the number of observations and explanatory variables. Unfortunately, the critical values also depend on the actual data for the explanatory variables in the sample, and thus vary from sample to sample.
However, Durbin and Watson determined upper and lower bounds, d_U and d_L, for the critical values, and these are presented in standard tables.
If d is less than d_L, it must also be less than the critical value of d for positive autocorrelation, and so we would reject the null hypothesis and conclude that there is positive autocorrelation.
If d is above d_U, it must also be above the critical value of d, and so we would not reject the null hypothesis. (Of course, if it were above 2, we should consider testing for negative autocorrelation instead.)
If d lies between d_L and d_U, we cannot tell whether it is above or below the critical value, and so the test is indeterminate.
Here are d_L and d_U for 45 observations and two explanatory variables, at the 5% significance level:

d_L = 1.43, d_U = 1.62  (n = 45, k = 3, 5% level)
There are similar bounds for the critical value in the case of negative autocorrelation: 4 − d_U = 2.38 and 4 − d_L = 2.57. They are not given in the standard tables because negative autocorrelation is uncommon, but they are easy to calculate because they are located symmetrically to the right of 2.
So if d < 1.43, we reject the null hypothesis and conclude that there is positive autocorrelation.
If 1.43 < d < 1.62, the test is indeterminate and we do not come to any conclusion.
If 1.62 < d < 2.38, we do not reject the null hypothesis of no autocorrelation.
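The bounds-test decision rule can be written out explicitly. A sketch (the function name is illustrative; the bounds are those for n = 45, k = 3 at the 5% level):

```python
def dw_verdict(d, dL, dU):
    """Apply the Durbin-Watson bounds-test decision rule."""
    if d < dL:
        return "reject H0: positive autocorrelation"
    if d < dU:
        return "indeterminate"
    if d <= 4 - dU:
        return "do not reject H0"
    if d <= 4 - dL:
        return "indeterminate"
    return "reject H0: negative autocorrelation"

# bounds for n = 45, k = 3 at the 5% level
dL, dU = 1.43, 1.62
# the housing regression reported d = 0.63, well below dL
print(dw_verdict(0.63, dL, dU))   # reject H0: positive autocorrelation
```

Applied to the housing regression's d of 0.63, the rule rejects the null in favor of positive autocorrelation, matching the visual impression from the residual plot.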
Symmetrically, if 2.38 < d < 2.57 the test is indeterminate, and if d > 2.57 we reject the null hypothesis and conclude that there is negative autocorrelation.
Here are the bounds for the critical values for the 1% test, again with 45 observations and two explanatory variables:

d_L = 1.24, d_U = 1.42, 4 − d_U = 2.58, 4 − d_L = 2.76  (n = 45, k = 3, 1% level)
The Durbin–Watson test is valid only when all the explanatory variables are deterministic. This is in practice a serious limitation, since interactions and dynamics in a system of equations usually cause Assumption C.7 part (2) to be violated.
It does have the appeal that the test statistic is part of standard regression output. Further, it is appropriate for finite samples, subject to the zone of indeterminacy and the deterministic-regressor requirement.
TESTS FOR AUTOCORRELATION III: EXAMPLES

The output shown in the table gives the result of a logarithmic regression of expenditure on food on disposable personal income and the relative price of food.
[Figure: residuals, static logarithmic regression for FOOD, 1959–2003]
The plot of the residuals is shown. All the tests indicate highly significant autocorrelation.
============================================================
Dependent Variable: RLGFOOD
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
RLGFOOD(-1) 0.790169 0.106603 7.412228 0.0000
============================================================
R-squared 0.560960 Mean dependent var 3.28E-05
Adjusted R-squared 0.560960 S.D. dependent var 0.020145
S.E. of regression 0.013348 Akaike info criterion -5.772439
Sum squared resid 0.007661 Schwarz criterion -5.731889
Log likelihood 127.9936 Durbin-Watson stat 1.477337
============================================================
û_t = 0.79û_{t−1}
Technical note for EViews users: EViews places the residuals from the most recent regression in a pseudo-variable called resid. resid cannot be used directly, so the residuals were saved as RLGFOOD using the genr command: genr RLGFOOD = resid.
============================================================
Dependent Variable: RLGFOOD
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.175732 0.265081 0.662936 0.5112
LGDPI -7.36E-05 0.006180 -0.011917 0.9906
LGPRFOOD -0.037373 0.049496 -0.755058 0.4546
RLGFOOD(-1) 0.805744 0.110202 7.311504 0.0000
============================================================
R-squared 0.572006 Mean dependent var 3.28E-05
Adjusted R-squared 0.539907 S.D. dependent var 0.020145
S.E. of regression 0.013664 Akaike info criterion -5.661558
Sum squared resid 0.007468 Schwarz criterion -5.499359
Log likelihood 128.5543 F-statistic 17.81977
Durbin-Watson stat 1.513911 Prob(F-statistic) 0.000000
============================================================
(Note that here n = 44: there are 45 observations in the regression in Table 12.1, and one fewer in the residuals regression.) The critical value of chi-squared with one degree of freedom at the 0.1 percent level is 10.83.
============================================================
Breusch-Godfrey Serial Correlation LM Test:
============================================================
F-statistic 54.78773 Probability 0.000000
Obs*R-squared 25.73866 Probability 0.000000
============================================================
Test Equation:
Dependent Variable: RESID
Method: Least Squares
Presample missing value lagged residuals set to zero.
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.171665 0.258094 0.665124 0.5097
LGDPI 9.50E-05 0.005822 0.016324 0.9871
LGPRFOOD -0.036806 0.048504 -0.758819 0.4523
RESID(-1) 0.805773 0.108861 7.401873 0.0000
============================================================
R-squared 0.571970 Mean dependent var -1.85E-18
Adjusted R-squared 0.540651 S.D. dependent var 0.019916
S.E. of regression 0.013498 Akaike info criterion -5.687865
Sum squared resid 0.007470 Schwarz criterion -5.527273
Log likelihood 131.9770 F-statistic 18.26258
Durbin-Watson stat 1.514975 Prob(F-statistic) 0.000000
============================================================
Technical note for EViews users: one can perform the test simply by following the LGFOOD regression with the command auto(1). EViews allows itself to use resid directly.
The argument of the auto command gives the order of autocorrelation being tested. At the moment we are concerned only with first-order autocorrelation, which is why the command is auto(1).
When we performed the test manually, resid(–1), and hence RLGFOOD(–1), was not defined for the first observation in the sample, so we had 44 observations, from 1960 to 2003.
EViews retains the first observation by assigning a value of zero to resid(–1) for that observation. Hence the test results are very slightly different.
We can also perform the test with a t test on the coefficient of the lagged residual.
The auto command built into EViews presents the corresponding output with the test reported as an F statistic. Of course, when there is only one lagged residual, the F statistic is the square of the t statistic.
============================================================
Dependent Variable: LGFOOD
Method: Least Squares
Sample: 1959 2003
Included observations: 45
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 2.236158 0.388193 5.760428 0.0000
LGDPI 0.500184 0.008793 56.88557 0.0000
LGPRFOOD -0.074681 0.072864 -1.024941 0.3113
============================================================
R-squared 0.992009 Mean dependent var 6.021331
Adjusted R-squared 0.991628 S.D. dependent var 0.222787
S.E. of regression 0.020384 Akaike info criterion -4.883747
Sum squared resid 0.017452 Schwarz criterion -4.763303
Log likelihood 112.8843 Hannan-Quinn criter. -4.838846
F-statistic 2606.860 Durbin-Watson stat 0.478540
Prob(F-statistic) 0.000000
============================================================
Breusch–Godfrey test

û_t = γ₁ + Σ_{j=2}^{k} γ_j X_{jt} + Σ_{s=1}^{q} ρ_s û_{t−s}

nR² = 43 × 0.6020 = 25.89
χ²_crit(2 d.f.) at the 0.1% level = 13.82
Here is the regression for RLGFOOD with two lagged residuals. The
Breusch–Godfrey test statistic is 25.89. With two lagged residuals, the
statistic has a chi-squared distribution with two degrees of freedom under
the null hypothesis. It is significant at the 0.1 percent level. 15
TESTS FOR AUTOCORRELATION III: EXAMPLES
============================================================
Dependent Variable: RLGFOOD
Method: Least Squares
Sample(adjusted): 1961 2003
Included observations: 43 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.071220 0.277253 0.256879 0.7987
LGDPI 0.000251 0.006491 0.038704 0.9693
LGPRFOOD -0.015572 0.051617 -0.301695 0.7645
RLGFOOD(-1) 1.009693 0.163240 6.185318 0.0000
RLGFOOD(-2) -0.289159 0.171960 -1.681548 0.1009
============================================================
R-squared 0.602010 Mean dependent var 0.000149
Adjusted R-squared 0.560117 S.D. dependent var 0.020368
S.E. of regression 0.013509 Akaike info criterion -5.661981
Sum squared resid 0.006935 Schwarz criterion -5.457191
Log likelihood 126.7326 F-statistic 14.36996
Durbin-Watson stat 1.892212 Prob(F-statistic) 0.000000
============================================================
We will also perform an F test, comparing the RSS with the RSS for the same regression without
the lagged residuals. We know the result, because one of the t statistics is very high.
16
TESTS FOR AUTOCORRELATION III: EXAMPLES
============================================================
Dependent Variable: RLGFOOD
Method: Least Squares
Sample: 1961 2003
Included observations: 43
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.027475 0.412043 0.066680 0.9472
LGDPI -0.001074 0.009986 -0.107528 0.9149
LGPRFOOD -0.003948 0.076191 -0.051816 0.9589
============================================================
R-squared 0.000298 Mean dependent var 0.000149
Adjusted R-squared -0.049687 S.D. dependent var 0.020368
S.E. of regression 0.020868 Akaike info criterion -4.833974
Sum squared resid 0.017419 Schwarz criterion -4.711100
Log likelihood 106.9304 F-statistic 0.005965
Durbin-Watson stat 0.476550 Prob(F-statistic) 0.994053
============================================================
Here is the regression for RLGFOOD without the lagged residuals. Note that
the sample period has been adjusted to 1961 to 2003, to make RSS
comparable with that for the previous regression. 17
TESTS FOR AUTOCORRELATION III: EXAMPLES
F(2,38) = [(0.017419 − 0.006935) / 2] / [0.006935 / 38] = 28.72    F(2,35)crit, 0.1% = 8.47
The F statistic is 28.72. This is significant at the 0.1 percent level. The critical value
for F(2,35) is 8.47, and that for F(2,38) must be slightly lower.
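The arithmetic of this F test can be checked directly, using the two RSS values reported above; the critical value can be obtained from scipy (assumed available) rather than from a table:

```python
from scipy.stats import f

rss_without, rss_with = 0.017419, 0.006935   # RSS without / with the lagged residuals
q, df = 2, 38                                # 2 restrictions; 43 - 5 = 38 degrees of freedom
F = ((rss_without - rss_with) / q) / (rss_with / df)
crit = f.ppf(0.999, q, df)                   # 0.1 percent critical value for F(2,38)
```

The computed statistic matches the 28.72 in the text, and the exact F(2,38) critical value confirms that it is indeed slightly below the tabulated F(2,35) value of 8.47.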
18
TESTS FOR AUTOCORRELATION III: EXAMPLES
============================================================
Breusch-Godfrey Serial Correlation LM Test:
============================================================
F-statistic 30.24142 Probability 0.000000
Obs*R-squared 27.08649 Probability 0.000001
============================================================
Test Equation:
Dependent Variable: RESID
Method: Least Squares
Presample missing value lagged residuals set to zero.
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.053628 0.261016 0.205460 0.8383
LGDPI 0.000920 0.005705 0.161312 0.8727
LGPRFOOD -0.013011 0.049304 -0.263900 0.7932
RESID(-1) 1.011261 0.159144 6.354360 0.0000
RESID(-2) -0.290831 0.167642 -1.734833 0.0905
============================================================
R-squared 0.601922 Mean dependent var -1.85E-18
Adjusted R-squared 0.562114 S.D. dependent var 0.019916
S.E. of regression 0.013179 Akaike info criterion -5.715965
Sum squared resid 0.006947 Schwarz criterion -5.515225
Log likelihood 133.6092 F-statistic 15.12071
Durbin-Watson stat 1.894290 Prob(F-statistic) 0.000000
============================================================
Here is the output using the auto(2) command in EViews. The conclusions for
the two tests are the same.
19
TESTS FOR AUTOCORRELATION III: EXAMPLES
============================================================
Dependent Variable: LGFOOD
Method: Least Squares
Sample (adjusted): 1960 2003
Included observations: 44 after adjustments
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.985780 0.336094 2.933054 0.0055
LGDPI 0.126657 0.056496 2.241872 0.0306
LGPRFOOD -0.088073 0.051897 -1.697061 0.0975
LGFOOD(-1) 0.732923 0.110178 6.652153 0.0000
============================================================
R-squared 0.995879 Mean dependent var 6.030691
Adjusted R-squared 0.995570 S.D. dependent var 0.216227
S.E. of regression 0.014392 Akaike info criterion -5.557847
Sum squared resid 0.008285 Schwarz criterion -5.395648
Log likelihood 126.2726 Hannan-Quinn criterion -5.497696
F-statistic 3222.264 Durbin-Watson stat 1.112437
Prob(F-statistic) 0.000000
============================================================
The output above gives the result of a parallel logarithmic regression with the
addition of lagged expenditure on food as an explanatory variable. Again,
there is strong evidence that the specification is subject to autocorrelation.
20
TESTS FOR AUTOCORRELATION III: EXAMPLES
[Figure: residuals from the ADL(1,0) logarithmic regression for FOOD, 1959–2003; the residuals range from about –0.03 to 0.04]
21
TESTS FOR AUTOCORRELATION III: EXAMPLES
============================================================
Dependent Variable: RLGFOOD
Method: Least Squares
Sample(adjusted): 1961 2003
Included observations: 43 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
RLGFOOD(-1) 0.431010 0.143277 3.008226 0.0044
============================================================
R-squared 0.176937 Mean dependent var 0.000276
Adjusted R-squared 0.176937 S.D. dependent var 0.013922
S.E. of regression 0.012630 Akaike info criterion -5.882426
Sum squared resid 0.006700 Schwarz criterion -5.841468
Log likelihood 127.4722 Durbin-Watson stat 1.801390
============================================================
ût = 0.43ût−1
nR² = 43 × 0.2469 = 10.62
χ²(1)crit, 0.1% = 10.83
ELIMINATING AR(1) AUTOCORRELATION
Yt = β1 + β2Xt + ut    ut = ρut−1 + εt
ρYt−1 = ρβ1 + ρβ2Xt−1 + ρut−1
Yt − ρYt−1 = β1(1 − ρ) + β2Xt − ρβ2Xt−1 + ut − ρut−1
3
ELIMINATING AR(1) AUTOCORRELATION
Yt = β1 + β2Xt + ut    ut = ρut−1 + εt
ρYt−1 = ρβ1 + ρβ2Xt−1 + ρut−1
Yt − ρYt−1 = β1(1 − ρ) + β2Xt − ρβ2Xt−1 + ut − ρut−1
Yt = β1(1 − ρ) + ρYt−1 + β2Xt − ρβ2Xt−1 + εt
The disturbance term now reduces to εt, the innovation at time t in the AR(1)
process. By assumption, this is independently distributed, so the problem
of autocorrelation has been eliminated. 4
ELIMINATING AR(1) AUTOCORRELATION
Yt = β1 + β2Xt + ut    ut = ρut−1 + εt
ρYt−1 = ρβ1 + ρβ2Xt−1 + ρut−1
Yt − ρYt−1 = β1(1 − ρ) + β2Xt − ρβ2Xt−1 + ut − ρut−1
Yt = β1(1 − ρ) + ρYt−1 + β2Xt − ρβ2Xt−1 + εt
The transformed equation embodies a restriction: the coefficient of Xt−1 must
equal minus the product of the coefficients of Yt−1 and Xt. This means that we
should not try to fit the equation using ordinary least squares. OLS would not
take account of the restriction and so we would end up with conflicting
estimates of the parameters. 6
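The restricted equation can instead be fitted by nonlinear least squares, which imposes the restriction automatically. Below is a sketch on simulated data (all names and parameter values are illustrative, not from the text):

```python
import numpy as np
from scipy.optimize import least_squares

# Simulate Y = b1 + b2*X + u, with u following an AR(1) process (illustrative values)
rng = np.random.default_rng(0)
n, b1, b2, rho = 200, 1.0, 0.5, 0.7
x = rng.normal(size=n).cumsum()
eps = rng.normal(scale=0.5, size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
y = b1 + b2 * x + u

def resid(p):
    c1, r, c2 = p
    # Y_t - [c1(1 - r) + r*Y_{t-1} + c2*X_t - r*c2*X_{t-1}]
    # The coefficient of X_{t-1} is forced to be -r*c2: the restriction holds by construction.
    return y[1:] - (c1 * (1 - r) + r * y[:-1] + c2 * x[1:] - r * c2 * x[:-1])

fit = least_squares(resid, x0=[0.0, 0.0, 0.0])
c1_hat, rho_hat, b2_hat = fit.x
```

Because the restriction is built into the residual function, only one estimate of each parameter emerges, and no conflict can arise.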
ELIMINATING AR(1) AUTOCORRELATION
Yt = β1(1 − ρ) + ρYt−1 + β2Xt − ρβ2Xt−1 + εt
For example, an unrestricted OLS fit might yield estimates of 0.5 for ρ (from
the coefficient of Yt−1) and 0.8 for β2 (from the coefficient of Xt). But these
numbers would be incompatible with an estimate of 0.6 for β2 deduced from
the coefficient of Xt−1. 7
ELIMINATING AR(1) AUTOCORRELATION
Yt = β1 + β2X2t + β3X3t + ut    ut = ρut−1 + εt
ρYt−1 = ρβ1 + ρβ2X2t−1 + ρβ3X3t−1 + ρut−1
If the model has more than one explanatory variable, the procedure is the
same. Write the model a second time, lagged one time period, and multiply
through by ρ.
9
ELIMINATING AR(1) AUTOCORRELATION
Yt = β1 + β2X2t + β3X3t + ut    ut = ρut−1 + εt
ρYt−1 = ρβ1 + ρβ2X2t−1 + ρβ3X3t−1 + ρut−1
Yt − ρYt−1 = β1(1 − ρ) + β2X2t − ρβ2X2t−1 + β3X3t − ρβ3X3t−1 + ut − ρut−1
Yt = β1(1 − ρ) + ρYt−1 + β2X2t − ρβ2X2t−1 + β3X3t − ρβ3X3t−1 + εt
Now there are two restrictions. One involves the coefficients of Yt–1, X2t, and
X2t–1.
12
ELIMINATING AR(1) AUTOCORRELATION
Yt = β1(1 − ρ) + ρYt−1 + β2X2t − ρβ2X2t−1 + β3X3t − ρβ3X3t−1 + εt
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample (adjusted): 1960 2003
LGHOUS=C(1)*(1-C(2))+C(2)*LGHOUS(-1)+C(3)*LGDPI-C(2)*C(3)
*LGDPI(-1)+C(4)*LGPRHOUS-C(2)*C(4)*LGPRHOUS(-1)
============================================================
Coefficient Std. Error t-Statistic Prob.
============================================================
C(1) 0.154815 0.354989 0.436111 0.6651
C(2) 0.719102 0.115689 6.215836 0.0000
C(3) 1.011295 0.021830 46.32641 0.0000
C(4) -0.478070 0.091594 -5.219436 0.0000
============================================================
R-squared 0.999205 Mean dependent var 6.379059
Adjusted R-squared 0.999145 S.D. dependent var 0.421861
S.E. of regression 0.012333 Akaike info criterion -5.866567
Sum squared resid 0.006084 Schwarz criterion -5.704368
Log likelihood 133.0645 Durbin-Watson stat 1.901081
============================================================
ρ, the coefficient of the lagged dependent variable, has been denoted C(2). It
is also a component of the intercept in this model. The estimate of ρ, 0.72, is
quite high. 18
β2, the coefficient of income, has been denoted C(3). The estimate is close
to the OLS estimate, 1.03.
19
β3, the coefficient of price, has been denoted C(4). The estimate is the same
as the OLS estimate, –0.48, at least to two decimal places. (This is a
coincidence.) 21
The only problem with this method of fitting the AR(1) model is that
specifying the model in equation form is a tedious task and it is easy to
make mistakes. 23
ELIMINATING AR(1) AUTOCORRELATION
Yt = β1(1 − ρ) + ρYt−1 + β2X2t − ρβ2X2t−1 + β3X3t − ρβ3X3t−1 + εt
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample (adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
Convergence achieved after 21 iterations
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.154815 0.354989 0.436111 0.6651
LGDPI 1.011295 0.021830 46.32642 0.0000
LGPRHOUS -0.478070 0.091594 -5.219437 0.0000
AR(1) 0.719102 0.115689 6.215836 0.0000
============================================================
R-squared 0.999205 Mean dependent var 6.379059
Adjusted R-squared 0.999145 S.D. dependent var 0.421861
S.E. of regression 0.012333 Akaike info criterion -5.866567
Sum squared resid 0.006084 Schwarz criterion -5.704368
Log likelihood 133.0645 F-statistic 16757.24
Durbin-Watson stat 1.901081 Prob(F-statistic) 0.000000
============================================================
Since the AR(1) specification is a common one, most serious regression
applications provide some short-cut for specifying it easily. In the case of EViews,
AR(1) estimation is invoked by adding AR(1) to the list of explanatory variables.
ELIMINATING AR(1) AUTOCORRELATION
Yt = β1(1 − ρ) + ρYt−1 + β2X2t − ρβ2X2t−1 + β3X3t − ρβ3X3t−1 + εt
=============================================================
Dependent Variable: LGHOUS
LGHOUS=C(1)*(1-C(2))+C(2)*LGHOUS(-1)+C(3)*LGDPI-C(2)*C(3)
*LGDPI(-1)+C(4)*LGPRHOUS-C(2)*C(4)*LGPRHOUS(-1)
============================================================
Coefficient Std. Error t-Statistic Prob.
============================================================
C(1) 0.154815 0.354989 0.436111 0.6651
C(2) 0.719102 0.115689 6.215836 0.0000
C(3) 1.011295 0.021830 46.32641 0.0000
C(4) -0.478070 0.091594 -5.219436 0.0000
============================================================
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.154815 0.354989 0.436111 0.6651
LGDPI 1.011295 0.021830 46.32642 0.0000
LGPRHOUS -0.478070 0.091594 -5.219437 0.0000
AR(1) 0.719102 0.115689 6.215836 0.0000
============================================================
25
The price coefficient is the estimate of the elasticity with respect to current
price.
27
The coefficients of lagged income and lagged price are not reported
because they are implicit in the estimates of ρ, β2, and β3.
29
Introduction to Econometrics
THE COCHRANE–ORCUTT PROCESS
FOOTNOTE: THE COCHRANE–ORCUTT PROCESS
Yt = β1 + β2Xt + ut    ut = ρut−1 + εt
ρYt−1 = ρβ1 + ρβ2Xt−1 + ρut−1
Yt − ρYt−1 = β1(1 − ρ) + β2Xt − ρβ2Xt−1 + ut − ρut−1
Ỹt = β̃1 + β2X̃t + εt    where Ỹt = Yt − ρYt−1,  X̃t = Xt − ρXt−1,  β̃1 = β1(1 − ρ)
We return to line 3 and note that the model can be rewritten as shown with
appropriate definitions. We now have a simple regression model free from
autocorrelation. 4
FOOTNOTE: THE COCHRANE–ORCUTT PROCESS
Yt = β1 + β2Xt + ut    ut = ρut−1 + εt
ρYt−1 = ρβ1 + ρβ2Xt−1 + ρut−1
Yt − ρYt−1 = β1(1 − ρ) + β2Xt − ρβ2Xt−1 + ut − ρut−1
Ỹt = β̃1 + β2X̃t + εt    where Ỹt = Yt − ρYt−1,  X̃t = Xt − ρXt−1,  β̃1 = β1(1 − ρ)
ût = ρût−1 + error
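The iterative process can be sketched in a few lines: estimate the β's on quasi-differenced data, re-estimate ρ by regressing the residuals on their lagged values, and repeat until ρ converges. A sketch on simulated data (all values illustrative):

```python
import numpy as np

# Simulate Y = 1.0 + 0.5*X + u, with u AR(1), rho = 0.6 (illustrative values)
rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n).cumsum()
eps = rng.normal(scale=0.5, size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + eps[t]
y = 1.0 + 0.5 * x + u

def ols(X, z):
    return np.linalg.lstsq(X, z, rcond=None)[0]

rho_hat = 0.0
for _ in range(20):
    # Step 1: OLS on the quasi-differenced data
    y_q = y[1:] - rho_hat * y[:-1]
    x_q = x[1:] - rho_hat * x[:-1]
    b_tilde, b2_hat = ols(np.column_stack([np.ones(n - 1), x_q]), y_q)
    b1_hat = b_tilde / (1 - rho_hat)          # recover beta1 from beta1*(1 - rho)
    # Step 2: re-estimate rho by regressing residuals on lagged residuals
    uhat = y - b1_hat - b2_hat * x
    new_rho = float(ols(uhat[:-1, None], uhat[1:])[0])
    if abs(new_rho - rho_hat) < 1e-8:
        break
    rho_hat = new_rho
```

The loop typically converges in a handful of iterations, reproducing the alternation between the ρ regression and the β̃ regression described above.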
Yt* = β1 + β2Xt + ut    Yt − Yt−1 = λ(Yt* − Yt−1)
Yt = λYt* + (1 − λ)Yt−1
Yt = λ(β1 + β2Xt + ut) + (1 − λ)Yt−1
 = λβ1 + λβ2Xt + (1 − λ)Yt−1 + λut
 = δ1 + δ2Xt + δ3Yt−1 + λut
In the partial adjustment model, the disturbance term in the fitted model is
the same as that in the target relationship, except that it has been multiplied
by a constant, λ. 2
AUTOCORRELATION, PARTIAL ADJUSTMENT, AND ADAPTIVE EXPECTATIONS
Yt* = β1 + β2Xt + ut    Yt − Yt−1 = λ(Yt* − Yt−1)
Yt = λYt* + (1 − λ)Yt−1
Yt = λ(β1 + β2Xt + ut) + (1 − λ)Yt−1
 = λβ1 + λβ2Xt + (1 − λ)Yt−1 + λut
 = δ1 + δ2Xt + δ3Yt−1 + λut
The only problem is the finite-sample bias caused by using the lagged dependent variable as an explanatory variable, and this is usually disregarded in practice anyway.
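The algebra above implies that λ and the long-run coefficient β2 can be recovered from an OLS fit with a lagged dependent variable. A simulated sketch; the parameter values (λ = 0.4, β1 = 3, β2 = 1.5) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, beta1, beta2 = 500, 0.4, 3.0, 1.5

X = rng.uniform(0.0, 10.0, n)
u = rng.normal(scale=0.1, size=n)
Y = np.zeros(n)
for t in range(1, n):
    Y_star = beta1 + beta2 * X[t] + u[t]          # target value Y*_t
    Y[t] = Y[t - 1] + lam * (Y_star - Y[t - 1])   # partial adjustment

# Fitted model: Y_t = lam*beta1 + lam*beta2*X_t + (1 - lam)*Y_{t-1} + lam*u_t
A = np.column_stack([np.ones(n - 1), X[1:], Y[:-1]])
a1, a2, a3 = np.linalg.lstsq(A, Y[1:], rcond=None)[0]
lam_hat = 1.0 - a3            # coefficient of Y_{t-1} estimates 1 - lam
beta2_hat = a2 / lam_hat      # long-run coefficient: short-run a2 divided by lam
```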
Yt = β1 + β2Xᵉt+1 + ut        Xᵉt+1 − Xᵉt = λ(Xt − Xᵉt)
Xᵉt+1 = λXt + (1 − λ)Xᵉt
Yt = β1 + β2λXt + β2λ(1 − λ)Xt−1 + β2λ(1 − λ)²Xt−2 + ... + ut
The disturbance term in this regression model is the same as that in the original model. So if it satisfies the regression model assumptions in the original model, it will also do so here, and the model should be fitted using a standard nonlinear estimation method.
Yt = β1 + β2λXt + (1 − λ)(Yt−1 − β1 − ut−1) + ut
   = β1λ + β2λXt + (1 − λ)Yt−1 + ut − (1 − λ)ut−1
   = α1 + α2Xt + α3Yt−1 + ut − (1 − λ)ut−1
Thus if the disturbance term in the original model satisfies the regression model assumptions, the disturbance term in the regression model will be subject to MA(1) autocorrelation (first-order moving average autocorrelation).
Yt = α1 + α2Xt + α3Yt−1 + ut − (1 − λ)ut−1
Yt−1 = α1 + α2Xt−1 + α3Yt−2 + ut−1 − (1 − λ)ut−2
ut = ρut−1 + εt
However, suppose that the disturbance term in the original model were subject to AR(1) autocorrelation.
It is thus a composite of the innovation εt in the AR(1) process at time t and ut−1. Now, under reasonable assumptions, both ρ and 1 − λ should lie between 0 and 1. Hence it is possible that the coefficient of ut−1 may be small enough for the autocorrelation to be negligible.
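For example, with illustrative values ρ = 0.7 and λ = 0.4 (assumptions for this sketch, not estimates), the coefficient of ut−1 in the composite disturbance is only 0.1:

```python
# Composite disturbance of the fitted model:
#   u_t - (1 - lam)*u_{t-1} = eps_t + (rho - (1 - lam))*u_{t-1}
# when u_t = rho*u_{t-1} + eps_t.
rho, lam = 0.7, 0.4        # illustrative values only
coef = rho - (1.0 - lam)   # coefficient of u_{t-1} in the composite
# coef = 0.1, small enough that the residual autocorrelation may be negligible
```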
If that is the case, OLS could be used to fit the regression model after all. You should, of course, perform a Breusch–Godfrey test to check that there is no significant autocorrelation.
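A hand-rolled sketch of the Breusch–Godfrey test for AR(1) autocorrelation, in its auxiliary-regression form, on simulated data (the helper name breusch_godfrey_ar1 is illustrative; statistical packages provide equivalent routines):

```python
import numpy as np

def breusch_godfrey_ar1(Y, X):
    """Regress OLS residuals on the original regressors plus the lagged
    residual; n*R^2 of that auxiliary regression is asymptotically
    chi-squared(1) under the null of no autocorrelation."""
    n = len(Y)
    A = np.column_stack([np.ones(n), X])
    resid = Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]
    u_lag = np.concatenate([[0.0], resid[:-1]])   # pre-sample residual set to 0
    B = np.column_stack([A, u_lag])
    e = resid - B @ np.linalg.lstsq(B, resid, rcond=None)[0]
    r2 = 1.0 - (e @ e) / (resid @ resid)          # resid has zero mean under OLS
    return n * r2                                  # compare with chi2(1): 3.84 at 5%

# Illustration: a strongly autocorrelated disturbance should be detected
rng = np.random.default_rng(2)
n = 200
X = rng.uniform(0.0, 10.0, n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal()
Y = 1.0 + 2.0 * X + u
bg_stat = breusch_godfrey_ar1(Y, X)
```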
Example: HOUSING DYNAMICS
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample: 1959 2003
Included observations: 45
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.005625 0.167903 0.033501 0.9734
LGDPI 1.031918 0.006649 155.1976 0.0000
LGPRHOUS -0.483421 0.041780 -11.57056 0.0000
============================================================
R-squared 0.998583 Mean dependent var 6.359334
Adjusted R-squared 0.998515 S.D. dependent var 0.437527
S.E. of regression 0.016859 Akaike info criterion -5.263574
Sum squared resid 0.011937 Schwarz criterion -5.143130
Log likelihood 121.4304 F-statistic 14797.05
Durbin-Watson stat 0.633113 Prob(F-statistic) 0.000000
============================================================
Breusch–Godfrey statistic: 20.02
χ²(1) crit, 0.1% = 10.83
[Figure: residuals from the OLS regression of LGHOUS on LGDPI and LGPRHOUS, 1959–2003]
[Figure: actual and fitted values of LGHOUS, with the income and price series, 1959–2003]
The actual and fitted values of the dependent variable and the series for income and price have been added to the diagram. The price series was very flat and so had little influence on the fitted values. It will be ignored in the discussion that follows.
There was a very large negative residual in 1973. We will enlarge this part of
the diagram and take a closer look.
[Figure: actual and fitted LGHOUS (left scale) and LGDPI (right scale), 1971–1975]
In 1973, income (right scale) grew unusually rapidly. The fitted value of housing expenditure (left scale, with the actual value) accordingly rose above its trend.
This boom was stopped in its tracks by the first oil shock. Income actually
declined in 1974, the only fall in the entire sample period.
However, the actual value of housing maintained its previous trend in those two years, responding not at all to the short-run variations in the growth of income. This accounts for the gap that opened up in 1973, and the large negative residual in that year.
There was a similar large negative residual in 1984. We will enlarge this part
of the diagram.
[Figure: actual and fitted LGHOUS (left scale) and LGDPI (right scale), 1982–1987]
In the years immediately after 1984, income grew at a slower rate, and accordingly the fitted value of housing grew at a slower rate. But the actual value of housing grew at much the same rate as before, turning the negative residual in 1984 into a large positive one in 1987.
Finally, we shall take a closer look at the series of positive residuals from
1960 to 1965.
[Figure: actual and fitted LGHOUS (left scale) and LGDPI (right scale), 1959–1966]
In the first part of this subperiod, income was growing relatively slowly.
Towards the end, it started to accelerate. The fitted values followed suit.
In this case, as in the previous two, the residuals are not being caused by autocorrelation. If that were the case, the actual values would be relatively volatile compared with the trend of the fitted values.
What we see here is exactly the opposite. The actual values have a very stable trend, while the fitted values respond, as they must, to short-run variations in the growth of income. The pattern we see in the residuals is caused by the nonresponse of the actual values.
One way to model the inertia in the growth rate of the actual values is to add
a lagged dependent variable to the regression model.
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.073957 0.062915 1.175499 0.2467
LGDPI 0.282935 0.046912 6.031246 0.0000
LGPRHOUS -0.116949 0.027383 -4.270880 0.0001
LGHOUS(-1) 0.707242 0.044405 15.92699 0.0000
============================================================
R-squared 0.999795 Mean dependent var 6.379059
Adjusted R-squared 0.999780 S.D. dependent var 0.421861
S.E. of regression 0.006257 Akaike info criterion -7.223711
Sum squared resid 0.001566 Schwarz criterion -7.061512
Log likelihood 162.9216 F-statistic 65141.75
Durbin-Watson stat 1.810958 Prob(F-statistic) 0.000000
============================================================
Breusch–Godfrey statistic: 0.20
χ²(1) crit, 5% = 3.84
Note that the income and price elasticities are much lower than in the original regression. We have already seen the reason for this in the sequence that discussed the dynamics inherent in a partial adjustment model.
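As a quick check, under the partial adjustment interpretation the implied long-run income elasticity can be recovered from the coefficients in the table above (short-run coefficient of LGDPI divided by one minus the coefficient of LGHOUS(−1)):

```python
# Coefficients taken from the lagged-dependent-variable regression table
b_lgdpi = 0.282935        # short-run income elasticity
b_lagged = 0.707242       # coefficient of LGHOUS(-1)

long_run = b_lgdpi / (1.0 - b_lagged)
# approximately 0.97, close to the 1.03 elasticity of the static regression
```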
Yt = β1 + β2Xt + ut        ut = ρut−1 + εt
Yt = β1(1 − ρ) + ρYt−1 + β2Xt − ρβ2Xt−1 + εt
... the model can be rewritten with Yt depending on Xt, Yt−1, Xt−1, and a disturbance term εt that is not subject to autocorrelation.
COMMON FACTOR TEST
Yt = β1 + β2Xt + ut        ut = ρut−1 + εt

Restricted model
Yt = β1(1 − ρ) + ρYt−1 + β2Xt − ρβ2Xt−1 + εt

Unrestricted model
Yt = λ0 + λ1Yt−1 + λ2Xt + λ3Xt−1 + εt

Restriction: λ3 = −λ1λ2
Yt = β1 + β2X2t + β3X3t + ut        ut = ρut−1 + εt

Restricted model
Yt = β1(1 − ρ) + ρYt−1 + β2X2t − ρβ2X2t−1 + β3X3t − ρβ3X3t−1 + εt

Unrestricted model
Yt = λ0 + λ1Yt−1 + λ2X2t + λ3X2t−1 + λ4X3t + λ5X3t−1 + εt
The AR(1) special case is again a restricted version of a more general model.
Restrictions: λ3 = −λ1λ2,  λ5 = −λ1λ4
One can, and one should, test the validity of the restrictions. The test is
known as the common factor test.
The test involves a comparison of RSS_R and RSS_U, the residual sums of squares in the restricted and unrestricted specifications.
RSS_R can never be smaller than RSS_U, and in practice it will be greater, because imposing a restriction in general leads to some loss of goodness of fit. The question is whether the loss of goodness of fit is significant.
Test statistic: n log(RSS_R / RSS_U)
Under the null hypothesis that the restrictions are valid, the test statistic has a χ² (chi-squared) distribution with degrees of freedom equal to the number of restrictions. It is in principle a large-sample test.
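Using the residual sums of squares reported for the housing example in this section (RSS_R = 0.006084 from the AR(1) fit, RSS_U = 0.001456 from the unrestricted fit, n = 44), the statistic works out as:

```python
import math

n = 44
rss_r = 0.006084    # residual sum of squares, restricted (AR(1)) model
rss_u = 0.001456    # residual sum of squares, unrestricted model

stat = n * math.log(rss_r / rss_u)
# stat is about 62.9; with two restrictions it is compared with chi2(2),
# whose 0.1 percent critical value is 13.8
```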
dL = 1.24 (1%, n = 45, k = 3)
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.154815 0.354989 0.436111 0.6651
LGDPI 1.011295 0.021830 46.32642 0.0000
LGPRHOUS -0.478070 0.091594 -5.219437 0.0000
AR(1) 0.719102 0.115689 6.215836 0.0000
============================================================
R-squared 0.999205 Mean dependent var 6.379059
Adjusted R-squared 0.999145 S.D. dependent var 0.421861
S.E. of regression 0.012333 Akaike info criterion -5.866567
Sum squared resid 0.006084 Schwarz criterion -5.704368
Log likelihood 133.0645 F-statistic 16757.24
Durbin-Watson stat 1.901081 Prob(F-statistic) 0.000000
============================================================
Here is the result of fitting the same model using an AR(1) estimation
method. We make a note of the residual sum of squares.
Restricted model
LGHOUSt = β1(1 − ρ) + ρ LGHOUSt−1 + β2 LGDPIt − ρβ2 LGDPIt−1 + β3 LGPRHOUSt − ρβ3 LGPRHOUSt−1 + εt
We are fitting the model shown above, ensuring that the parameter estimates conform to the two restrictions, one involving the income variables ...
Unrestricted model
LGHOUSt = λ0 + λ1 LGHOUSt−1 + λ2 LGDPIt + λ3 LGDPIt−1 + λ4 LGPRHOUSt + λ5 LGPRHOUSt−1 + εt
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.041458 0.065137 0.636465 0.5283
LGDPI 0.275527 0.067914 4.056970 0.0002
LGPRHOUS -0.229086 0.075499 -3.034269 0.0043
LGHOUS(-1) 0.725893 0.058485 12.41159 0.0000
LGDPI(-1) -0.010625 0.086737 -0.122502 0.9031
LGPRHOUS(-1) 0.126270 0.084296 1.497928 0.1424
============================================================
R-squared 0.999810 Mean dependent var 6.379059
Adjusted R-squared 0.999785 S.D. dependent var 0.421861
S.E. of regression 0.006189 Akaike info criterion -7.205830
Sum squared resid 0.001456 Schwarz criterion -6.962531
Log likelihood 164.5282 F-statistic 39944.40
Durbin-Watson stat 1.763676 Prob(F-statistic) 0.000000
============================================================
Breusch–Godfrey statistic = 0.29
χ²(1) crit, 5% = 3.84
Before performing the common factor test, it is a good idea to eyeball the coefficients in the unrestricted regression to see if they appear to conform to the restrictions.
In this case, the restriction involving the income coefficients does not appear to be satisfied. Minus the product of 0.73 and 0.28 is −0.20, but the coefficient of lagged income is −0.01.
The restriction involving the price coefficients comes closer to being satisfied: minus the product of 0.73 and −0.23 is 0.17, while the coefficient of lagged price is 0.13.
The residual sum of squares was 0.006084 in the AR(1) regression and it is
0.001456 in the OLS unrestricted version.
n log(RSS_R / RSS_U) = 44 log(0.006084 / 0.001456) = 62.9
χ²(2) crit, 0.1% = 13.8
The test statistic is 62.9. The critical value of χ² at the 0.1 percent level, with two degrees of freedom, is 13.8.
COMMON FACTOR TEST
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.041458 0.065137 0.636465 0.5283
LGDPI 0.275527 0.067914 4.056970 0.0002
LGPRHOUS -0.229086 0.075499 -3.034269 0.0043
LGHOUS(-1) 0.725893 0.058485 12.41159 0.0000
LGDPI(-1) -0.010625 0.086737 -0.122502 0.9031
LGPRHOUS(-1) 0.126270 0.084296 1.497928 0.1424
============================================================
R-squared 0.999810 Mean dependent var 6.379059
Adjusted R-squared 0.999785 S.D. dependent var 0.421861
S.E. of regression 0.006189 Akaike info criterion -7.205830
Sum squared resid 0.001456 Schwarz criterion -6.962531
Log likelihood 164.5282 F-statistic 39944.40
Durbin-Watson stat 1.763676 Prob(F-statistic) 0.000000
============================================================
RSS_R = 0.006084, RSS_U = 0.001456
n log(RSS_R/RSS_U) = 44 log(0.006084/0.001456) = 62.9
χ2(2) crit, 0.1% = 13.8
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.041458 0.065137 0.636465 0.5283
LGDPI 0.275527 0.067914 4.056970 0.0002
LGPRHOUS -0.229086 0.075499 -3.034269 0.0043
LGHOUS(-1) 0.725893 0.058485 12.41159 0.0000
LGDPI(-1) -0.010625 0.086737 -0.122502 0.9031
LGPRHOUS(-1) 0.126270 0.084296 1.497928 0.1424
============================================================
R-squared 0.999810 Mean dependent var 6.379059
Adjusted R-squared 0.999785 S.D. dependent var 0.421861
S.E. of regression 0.006189 Akaike info criterion -7.205830
Sum squared resid 0.001456 Schwarz criterion -6.962531
Log likelihood 164.5282 F-statistic 39944.40
Durbin-Watson stat 1.763676 Prob(F-statistic) 0.000000
============================================================
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.041458 0.065137 0.636465 0.5283
LGDPI 0.275527 0.067914 4.056970 0.0002
LGPRHOUS -0.229086 0.075499 -3.034269 0.0043
LGHOUS(-1) 0.725893 0.058485 12.41159 0.0000
LGDPI(-1) -0.010625 0.086737 -0.122502 0.9031
LGPRHOUS(-1) 0.126270 0.084296 1.497928 0.1424
============================================================
R-squared 0.999810 Mean dependent var 6.379059
Adjusted R-squared 0.999785 S.D. dependent var 0.421861
S.E. of regression 0.006189 Akaike info criterion -7.205830
Sum squared resid 0.001456 Schwarz criterion -6.962531
Log likelihood 164.5282 F-statistic 39944.40
Durbin-Watson stat 1.763676 Prob(F-statistic) 0.000000
============================================================
Note that in this example the coefficients of lagged income and price are not
significant. We will investigate whether we can drop them.
31
COMMON FACTOR TEST
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.041458 0.065137 0.636465 0.5283
LGDPI 0.275527 0.067914 4.056970 0.0002
LGPRHOUS -0.229086 0.075499 -3.034269 0.0043
LGHOUS(-1) 0.725893 0.058485 12.41159 0.0000
LGDPI(-1) -0.010625 0.086737 -0.122502 0.9031
LGPRHOUS(-1) 0.126270 0.084296 1.497928 0.1424
============================================================
R-squared 0.999810 Mean dependent var 6.379059
Adjusted R-squared 0.999785 S.D. dependent var 0.421861
S.E. of regression 0.006189 Akaike info criterion -7.205830
Sum squared resid 0.001456 Schwarz criterion -6.962531
Log likelihood 164.5282 F-statistic 39944.40
Durbin-Watson stat 1.763676 Prob(F-statistic) 0.000000
============================================================
The fact that their coefficients have insignificant t statistics is not enough.
We also need to perform an F test of their joint explanatory power. We make
a note of RSS when they are included. 32
COMMON FACTOR TEST
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.073957 0.062915 1.175499 0.2467
LGDPI 0.282935 0.046912 6.031246 0.0000
LGPRHOUS -0.116949 0.027383 -4.270880 0.0001
LGHOUS(-1) 0.707242 0.044405 15.92699 0.0000
============================================================
R-squared 0.999795 Mean dependent var 6.379059
Adjusted R-squared 0.999780 S.D. dependent var 0.421861
S.E. of regression 0.006257 Akaike info criterion -7.223711
Sum squared resid 0.001566 Schwarz criterion -7.061512
Log likelihood 162.9216 F-statistic 65141.75
Durbin-Watson stat 1.810958 Prob(F-statistic) 0.000000
============================================================
We also make a note of RSS when they are dropped. The null hypothesis for
the F test is that the coefficients of lagged income and lagged price are both
equal to zero. The alternative hypothesis is that one or both are nonzero. 33
COMMON FACTOR TEST
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.073957 0.062915 1.175499 0.2467
LGDPI 0.282935 0.046912 6.031246 0.0000
LGPRHOUS -0.116949 0.027383 -4.270880 0.0001
LGHOUS(-1) 0.707242 0.044405 15.92699 0.0000
============================================================
R-squared 0.999795 Mean dependent var 6.379059
Adjusted R-squared 0.999780 S.D. dependent var 0.421861
S.E. of regression 0.006257 Akaike info criterion -7.223711
Sum squared resid 0.001566 Schwarz criterion -7.061512
Log likelihood 162.9216 F-statistic 65141.75
Durbin-Watson stat 1.810958 Prob(F-statistic) 0.000000
============================================================
F(2,38) = ((0.001566 − 0.001456)/2) / (0.001456/38) = 1.44
F(2,35) crit, 5% = 3.27
The F statistic is 1.44. The critical value at the 5% significance level with 2
and 35 degrees of freedom is 3.27. The critical value with 2 and 38 degrees
of freedom must be lower. 34
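As a check, the F statistic follows directly from the two residual sums of squares. This is a minimal sketch using the figures reported in the regression output above.

```python
# F test of dropping LGDPI(-1) and LGPRHOUS(-1):
# F = ((RSS_R - RSS_U) / q) / (RSS_U / (n - k))
rss_r = 0.001566  # RSS with the two lagged variables dropped
rss_u = 0.001456  # RSS with all lagged variables included
q = 2             # number of restrictions
df = 38           # n - k = 44 - 6 in the unrestricted regression
f_stat = ((rss_r - rss_u) / q) / (rss_u / df)
print(round(f_stat, 2))  # 1.44
```

1.44 is well below the 5 percent critical value of 3.27 reported for 2 and 35 degrees of freedom, and the critical value for 2 and 38 degrees of freedom is lower still.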
COMMON FACTOR TEST
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.073957 0.062915 1.175499 0.2467
LGDPI 0.282935 0.046912 6.031246 0.0000
LGPRHOUS -0.116949 0.027383 -4.270880 0.0001
LGHOUS(-1) 0.707242 0.044405 15.92699 0.0000
============================================================
R-squared 0.999795 Mean dependent var 6.379059
Adjusted R-squared 0.999780 S.D. dependent var 0.421861
S.E. of regression 0.006257 Akaike info criterion -7.223711
Sum squared resid 0.001566 Schwarz criterion -7.061512
Log likelihood 162.9216 F-statistic 65141.75
Durbin-Watson stat 1.810958 Prob(F-statistic) 0.000000
============================================================
F(2,38) = ((0.001566 − 0.001456)/2) / (0.001456/38) = 1.44
F(2,35) crit, 5% = 3.27
Hence we do not reject the null hypothesis. It appears that we can drop the
lagged variables.
35
COMMON FACTOR TEST
============================================================
Breusch–Godfrey statistic = 0.20; χ2(1) crit, 5% = 3.84
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.073957 0.062915 1.175499 0.2467
LGDPI 0.282935 0.046912 6.031246 0.0000
LGPRHOUS -0.116949 0.027383 -4.270880 0.0001
LGHOUS(-1) 0.707242 0.044405 15.92699 0.0000
============================================================
R-squared 0.999795 Mean dependent var 6.379059
Adjusted R-squared 0.999780 S.D. dependent var 0.421861
S.E. of regression 0.006257 Akaike info criterion -7.223711
Sum squared resid 0.001566 Schwarz criterion -7.061512
Log likelihood 162.9216 F-statistic 65141.75
Durbin-Watson stat 1.810958 Prob(F-statistic) 0.000000
============================================================
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.073957 0.062915 1.175499 0.2467
LGDPI 0.282935 0.046912 6.031246 0.0000
LGPRHOUS -0.116949 0.027383 -4.270880 0.0001
LGHOUS(-1) 0.707242 0.044405 15.92699 0.0000
============================================================
R-squared 0.999795 Mean dependent var 6.379059
Adjusted R-squared 0.999780 S.D. dependent var 0.421861
S.E. of regression 0.006257 Akaike info criterion -7.223711
Sum squared resid 0.001566 Schwarz criterion -7.061512
Log likelihood 162.9216 F-statistic 65141.75
Durbin-Watson stat 1.810958 Prob(F-statistic) 0.000000
============================================================
Thus we conclude that the omission of the lagged dependent variable was
responsible for the apparent autocorrelation in the original OLS regression.
37
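The slides report the Breusch–Godfrey statistic as computed by EViews. Its mechanics can be sketched directly: regress the OLS residuals on the original regressors plus the lagged residual, and compute LM = nR2 from the auxiliary regression. The data below are simulated stand-ins (the housing series are not reproduced on the slides), so the statistic's value here is illustrative only.

```python
import numpy as np

# Simulated stand-in data with independent disturbances.
rng = np.random.default_rng(42)
n = 44
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.1, size=n)

# Step 1: OLS of y on a constant and x; keep the residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta

# Step 2: auxiliary regression of e_t on the regressors and e_{t-1}
# (first lagged value padded with zero); LM = n * R-squared.
e_lag = np.concatenate(([0.0], e[:-1]))
Z = np.column_stack([X, e_lag])
gamma, *_ = np.linalg.lstsq(Z, e, rcond=None)
u = e - Z @ gamma
lm = n * (1 - (u @ u) / (e @ e))
print(f"LM = {lm:.2f}; reject absence of AR(1) at 5% if LM > 3.84")
```

With the statistic of 0.20 reported on the slide, well below the χ2(1) critical value of 3.84, there is no evidence of residual autocorrelation in the dynamic specification.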
COMMON FACTOR TEST
============================================================
Dependent Variable: LGHOUS
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.073957 0.062915 1.175499 0.2467
LGDPI 0.282935 0.046912 6.031246 0.0000
LGPRHOUS -0.116949 0.027383 -4.270880 0.0001
LGHOUS(-1) 0.707242 0.044405 15.92699 0.0000
============================================================
============================================================
Dependent Variable: LGHOUS
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.041458 0.065137 0.636465 0.5283
LGDPI 0.275527 0.067914 4.056970 0.0002
LGPRHOUS -0.229086 0.075499 -3.034269 0.0043
LGHOUS(-1) 0.725893 0.058485 12.41159 0.0000
LGDPI(-1) -0.010625 0.086737 -0.122502 0.9031
LGPRHOUS(-1) 0.126270 0.084296 1.497928 0.1424
============================================================
Assuming that the lagged income and price variables really are redundant,
we obtain an increase in efficiency by dropping them, as reflected in the
smaller standard errors. 38
COMMON FACTOR TEST
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.073957 0.062915 1.175499 0.2467
LGDPI 0.282935 0.046912 6.031246 0.0000
LGPRHOUS -0.116949 0.027383 -4.270880 0.0001
LGHOUS(-1) 0.707242 0.044405 15.92699 0.0000
============================================================
R-squared 0.999795 Mean dependent var 6.379059
Adjusted R-squared 0.999780 S.D. dependent var 0.421861
S.E. of regression 0.006257 Akaike info criterion -7.223711
Sum squared resid 0.001566 Schwarz criterion -7.061512
Log likelihood 162.9216 F-statistic 65141.75
Durbin-Watson stat 1.810958 Prob(F-statistic) 0.000000
============================================================
The final model, incidentally, is exactly the same as that in the previous
sequence. In that sequence we were led to this specification by examining
the plots of the variables and the residuals. 39
COMMON FACTOR TEST
============================================================
Dependent Variable: LGHOUS
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable Coefficient Std. Error t-Statistic Prob.
============================================================
C 0.073957 0.062915 1.175499 0.2467
LGDPI 0.282935 0.046912 6.031246 0.0000
LGPRHOUS -0.116949 0.027383 -4.270880 0.0001
LGHOUS(-1) 0.707242 0.044405 15.92699 0.0000
============================================================
R-squared 0.999795 Mean dependent var 6.379059
Adjusted R-squared 0.999780 S.D. dependent var 0.421861
S.E. of regression 0.006257 Akaike info criterion -7.223711
Sum squared resid 0.001566 Schwarz criterion -7.061512
Log likelihood 162.9216 F-statistic 65141.75
Durbin-Watson stat 1.810958 Prob(F-statistic) 0.000000
============================================================
If you start with a poorly specified model, in our case the static model, the
various diagnostic test statistics are likely to be invalidated. Thus there is a
risk that the model may survive the tests and appear to be satisfactory, even
though it is misspecified. 2
DYNAMIC MODEL SPECIFICATION
Yt = β0 + β1Yt−1 + β2X2t + β3X2t−1 + β4X3t + β5X3t−1 + εt
In our case, the starting point should be the model with all the lagged
variables.
4
DYNAMIC MODEL SPECIFICATION
Yt = β0 + β1Yt−1 + β2X2t + β3X2t−1 + β4X3t + β5X3t−1 + εt
β1 = β3 = β5 = 0
Having fitted it, we might be able to simplify it to the static model, if the
lagged variables individually and as a group do not have significant
explanatory power. 5
DYNAMIC MODEL SPECIFICATION
Yt = β0 + β1Yt−1 + β2X2t + β3X2t−1 + β4X3t + β5X3t−1 + εt
β3 = −β1β2,  β5 = −β1β4
Yt = β0 + β1Yt−1 + β2X2t + β3X2t−1 + β4X3t + β5X3t−1 + εt
β3 = β5 = 0
In the case of the housing regression, we have done exactly the opposite.
We started with a crude static model.
8
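The nonlinear restrictions tested by the common factor test come from rewriting the AR(1) model in dynamic form. For a single regressor, as in the earlier slides, start from Yt = β1 + β2Xt + ut with ut = ρut−1 + εt; lag the first equation, multiply it by ρ, and subtract:

```latex
Y_t = \beta_1(1-\rho) + \rho Y_{t-1} + \beta_2 X_t - \rho\beta_2 X_{t-1} + \varepsilon_t
```

The coefficient of Xt−1 is therefore minus the product of the coefficients of Yt−1 and Xt, which is exactly the kind of restriction imposed above.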
DYNAMIC MODEL SPECIFICATION
9
DYNAMIC MODEL SPECIFICATION
We turned to the more general model when the common factor test revealed
that the AR(1) specification was inadequate.
10
DYNAMIC MODEL SPECIFICATION