Definition of Autocorrelation
We assumed in the CLRM that the errors satisfy $\text{Cov}(u_i, u_j) = 0$ for
$i \neq j$, i.e. the errors are not serially correlated: no autocorrelation.
This is essentially the same as saying there is no pattern in the
errors.
Obviously we never observe the actual $u$'s, so we use their
sample counterpart, the residuals (the $\hat{u}$'s).
If there are patterns in the residuals from a model, we say that
they are autocorrelated.
Some stereotypical patterns we may find in the residuals are
given on the next 3 slides.
Positive Autocorrelation
[Figure: scatter plot of $\hat{u}_t$ against $\hat{u}_{t-1}$, and plot of $\hat{u}_t$ against time. With positive autocorrelation, successive residuals tend to have the same sign, so the time plot shows long runs above or below zero.]
Negative Autocorrelation
[Figure: scatter plot of $\hat{u}_t$ against $\hat{u}_{t-1}$, and plot of $\hat{u}_t$ against time. With negative autocorrelation, successive residuals tend to alternate in sign.]
No pattern in residuals
No autocorrelation
[Figure: scatter plot of $\hat{u}_t$ against $\hat{u}_{t-1}$ showing residuals randomly scattered with no pattern.]
Definition: First-Order Autocorrelation, AR(1)
If $\text{Cov}(u_t, u_s) = E(u_t u_s) \neq 0$ where $t \neq s$, in the model

$Y_t = \beta_1 + \beta_2 X_{2t} + u_t$,  $t = 1, \dots, T$

and if

$u_t = \rho u_{t-1} + v_t$  where $-1 < \rho < 1$  ($\rho$: rho)

and $v_t \sim \text{iid}(0, \sigma_v^2)$ (white noise), i.e.

$E(v_t) = 0$,  $\text{Var}(v_t) = \sigma_v^2$,  $\text{Cov}(v_t, v_s) = 0$ for $t \neq s$,

then this scheme is called first-order autocorrelation and is denoted AR(1).
Autoregressive: the regression of $u_t$ can be explained by itself lagged one period.
$\rho$ (rho): the first-order autocorrelation coefficient, or coefficient of autocovariance.
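The AR(1) scheme above is easy to simulate; a minimal sketch (all names and parameter values here are illustrative, not from the slides) that generates $u_t = \rho u_{t-1} + v_t$ and checks that the lag-1 sample autocorrelation of $u_t$ is close to $\rho$:

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho, sigma_v = 10_000, 0.7, 1.0   # illustrative values

v = rng.normal(0.0, sigma_v, T)      # white noise: v_t ~ iid N(0, sigma_v^2)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + v[t]     # AR(1) scheme: u_t = rho*u_{t-1} + v_t

# the lag-1 sample autocorrelation of u should be close to rho
rho_hat = np.corrcoef(u[1:], u[:-1])[0, 1]
print(rho_hat)
```

With a large $T$ the estimate recovers $\rho$ quite precisely.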
Autocorrelation AR(1):
$\text{Cov}(u_t, u_{t-1}) > 0 \Rightarrow 0 < \rho < 1$ : positive AR(1)
$\text{Cov}(u_t, u_{t-1}) < 0 \Rightarrow -1 < \rho < 0$ : negative AR(1)
If $u_t = \rho_1 u_{t-1} + v_t$, it is AR(1), first-order autoregressive.
If $u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + v_t$, it is AR(2), second-order autoregressive.
If $u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \dots + \rho_n u_{t-n} + v_t$, it is AR(n), n-th order autoregressive (high-order autocorrelation), with $-1 < \rho < 1$.
Example of serial correlation:

$\text{Consumption}_t = \beta_1 + \beta_2 \, \text{Income}_t + u_t$

(The error term $u_t$ represents other factors that affect consumption.)

  Year   Consumption   Income   Residual
  1973       230         320    $u_{1973}$
   ...       ...         ...       ...
  1995       558         714    $u_{1995}$
  1996       699         822    $u_{1996}$
  1997       881         907    $u_{1997}$
  1998       925        1003    $u_{1998}$
  1999       984        1174    $u_{1999}$
  2000      1072        1246    $u_{2000}$

The current year's tax rate may be determined by the previous year's rate:

$\text{TaxRate}_{2000} = \rho \, \text{TaxRate}_{1999} + v_{2000}$,  with $v_t \sim \text{iid}(0, \sigma_v^2)$

so that, in general, $u_t = \rho u_{t-1} + v_t$.
The consequences of serial correlation
(the same as those of heteroscedasticity):
1. The estimated coefficients are still unbiased: $E(\hat\beta_k) = \beta_k$.
2. The variances of the $\hat\beta_k$ are no longer the smallest, so the OLS estimators are not BLUE.
3. The standard errors of the estimated coefficients, $\text{se}(\hat\beta_k)$, will be biased. Therefore, the t and F tests are not valid.
Therefore, when the regression has AR(1) errors, the estimators are not BLUE, and t and F tests are invalid.
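These consequences can be seen in a small Monte Carlo sketch (the setup below is an assumed illustration, not from the slides): with positively autocorrelated errors, the slope estimate stays unbiased, but the conventional OLS standard error understates the true sampling variability, which is why the t and F tests mislead.

```python
import numpy as np

rng = np.random.default_rng(2)
T, rho, n_rep = 100, 0.8, 2000
x = np.linspace(0.0, 1.0, T)            # fixed (non-stochastic) regressor

slopes, conv_se = [], []
for _ in range(n_rep):
    # AR(1) errors: u_t = rho*u_{t-1} + v_t
    v = rng.normal(size=T)
    u = np.zeros(T)
    for t in range(1, T):
        u[t] = rho * u[t - 1] + v[t]
    y = 1.0 + 2.0 * x + u               # true beta_2 = 2

    X = np.column_stack([np.ones(T), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    s2 = e @ e / (T - 2)                # conventional OLS error variance
    cov = s2 * np.linalg.inv(X.T @ X)   # conventional OLS covariance matrix
    slopes.append(b[1])
    conv_se.append(np.sqrt(cov[1, 1]))

true_sd = np.std(slopes)   # actual sampling std of the slope estimator
avg_se = np.mean(conv_se)  # average conventional OLS standard error
print(np.mean(slopes), true_sd, avg_se)
```

In this setup the average conventional standard error comes out well below the actual sampling standard deviation of the slope, while the mean of the slope estimates is still close to the true value of 2.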
Detecting Autocorrelation:
The Durbin-Watson Test
The Durbin-Watson (DW) test is a test for first-order autocorrelation,
i.e. it assumes that the relationship is between an error and the
previous one:

$u_t = \rho u_{t-1} + v_t$    (1)

where $v_t \sim N(0, \sigma_v^2)$.
The DW test statistic actually tests

$H_0: \rho = 0$  against  $H_1: \rho \neq 0$

The test statistic is calculated by

$DW = \dfrac{\sum_{t=2}^{T} (\hat{u}_t - \hat{u}_{t-1})^2}{\sum_{t=1}^{T} \hat{u}_t^2}$
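The statistic can be computed directly from a vector of residuals; a minimal sketch (the function name and example residuals are illustrative):

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum_{t=2}^T (u_t - u_{t-1})^2 / sum_{t=1}^T u_t^2."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Perfectly alternating residuals: strong negative autocorrelation,
# so DW comes out well above 2.
print(durbin_watson([1, -1, 1, -1, 1, -1]))
```

Identical consecutive residuals (extreme positive autocorrelation) would instead give DW = 0.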
Detecting Autocorrelation:
The DurbinWatson Test
We can also write

$DW \approx 2(1 - \hat\rho)$    (2)

where $\hat\rho$ is the estimated correlation coefficient. Since $\hat\rho$ is
a correlation, it implies that $-1 \le \hat\rho \le 1$.
Rearranging for DW from (2) would give $0 \le DW \le 4$.
If $\hat\rho = 0$, DW = 2. So roughly speaking, do not reject the
null hypothesis if DW is near 2, i.e. there is little
evidence of autocorrelation.
Unfortunately, DW has 2 critical values, an upper critical
value ($d_U$) and a lower critical value ($d_L$), and there is also
an intermediate region where we can neither reject nor not
reject $H_0$.
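The resulting decision rule can be sketched as follows. Note that $d_L$ and $d_U$ must be looked up in DW tables for the given sample size and number of regressors; the values in the example call below are placeholders, not table values:

```python
def dw_decision(dw, d_l, d_u):
    """Classify a Durbin-Watson statistic using lower/upper critical values."""
    if dw < d_l:
        return "reject H0: positive autocorrelation"
    if dw < d_u:
        return "inconclusive"
    if dw <= 4 - d_u:
        return "do not reject H0"
    if dw <= 4 - d_l:
        return "inconclusive"
    return "reject H0: negative autocorrelation"

# Placeholder critical values d_L = 1.37, d_U = 1.50
print(dw_decision(2.1, 1.37, 1.50))  # prints "do not reject H0"
```

The two inconclusive bands ($d_L$ to $d_U$ and $4 - d_U$ to $4 - d_L$) are the intermediate regions mentioned above.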
Durbin-Watson test (cont.)

$DW \; (d) = \dfrac{\sum_{t=2}^{T} (\hat{u}_t - \hat{u}_{t-1})^2}{\sum_{t=1}^{T} \hat{u}_t^2} \approx 2(1 - \hat\rho)$

$d = 2(1 - \hat\rho) \;\Longrightarrow\; \hat\rho = 1 - \dfrac{d}{2}$

Since $-1 \le \hat\rho \le 1$, this implies $0 \le d \le 4$.
The Durbin-Watson Test
Conditions which must be fulfilled for DW to be a valid test:
1. There is a constant term in the regression.
2. The regressors are non-stochastic.
3. There are no lags of the dependent variable.
Ex: How to detect autocorrelation?
Gujarati (2003), Table 12.4, p. 460.
Run OLS: $\hat{u}_t = \hat\rho \hat{u}_{t-1} + v_t$ and check the t-value of the coefficient:

$\hat\rho = 0.914245 \approx 1 - \dfrac{DW}{2} = 1 - \dfrac{0.122904}{2} = 0.9385$
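The agreement between the two estimates of $\rho$ can be checked numerically; a sketch using simulated AR(1) residuals (not the Gujarati data; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho = 5_000, 0.9

# simulated AR(1) "residuals" standing in for regression residuals
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.normal()

# Durbin-Watson statistic
dw = np.sum(np.diff(u) ** 2) / np.sum(u ** 2)

# OLS slope of u_t on u_{t-1} (no intercept), the slide's auxiliary regression
rho_ols = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)

print(dw, rho_ols, 1 - dw / 2)
```

For large samples the OLS estimate of $\rho$ and the value $1 - DW/2$ agree closely, mirroring the relation used in the slide.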