
Chapter 12: Serial Correlation and Heteroscedasticity

in Time Series Regressions

Contents
12-1 Properties of OLS with Serially Correlated Errors
12-2 Testing for Serial Correlation
12-3 Correcting for Serial Correlation with Strictly Exogenous Regressors
12-4 Differencing and Serial Correlation
12-5 Serial Correlation-Robust Inference after OLS
12-6 Heteroscedasticity in Time Series Regressions

12-1 Properties of OLS with Serially Correlated Errors

1. The OLS estimators are still unbiased and consistent if errors are serially correlated, as long as the
explanatory variables are strictly exogenous (Assumptions TS.1 - TS.3).
· Theorem 10.1 assumes nothing about serial correlation in the errors
· Things are more complicated when lagged dependent variables are included (not examinable).
2. OLS is no longer BLUE if there is serial correlation. Consequently, OLS standard errors and tests will
be invalid, even asymptotically.
3. R² and the adjusted R² (R̄²), however, are not affected, provided the data are stationary and weakly dependent.
· This argument does not go through if {yt} is an I(1) process.

12-2 Testing for Serial Correlation

1. A t Test for AR(1) Serial Correlation with Strictly Exogenous Regressors

We assume the errors follow the AR(1) model u_t = ρu_{t-1} + e_t, with |ρ| < 1, and test H0: ρ = 0.

(1) Run OLS of y_t on x_t1, ..., x_tk, and obtain the residuals û_t.
(2) Run û_t on û_{t-1} for all t = 2, ..., n and obtain the coefficient ρ̂ and its t statistic, t_ρ̂ (with constant or
not).
(3) Use t_ρ̂ to test the null hypothesis.
− If you are worried about heteroscedasticity in e_t, we can use the heteroscedasticity-robust
t statistic from Chapter 8.

Example: Static Phillips Curve

inf_t = β0 + β1 unem_t + u_t

This model does not show the trade-off between unemployment and inflation (see Example 10.1).
We then test for serial correlation using the t test: ρ̂ is positive and strongly significant ⇒ serial
correlation exists ⇒ the static Phillips model is invalid ⇒ a modified model, the expectations
augmented Phillips curve, is justified (see Example 11.5).

2. The Durbin-Watson Test under Classical Assumptions
· The Durbin-Watson test (more popular than the t test):

DW = [Σ_{t=2}^{n} (û_t − û_{t-1})²] / [Σ_{t=1}^{n} û_t²] ≈ 2(1 − ρ̂)

H0: ρ = 0; H1: ρ > 0
Due to the problems in obtaining the null distribution of DW, we have two sets of critical values
dU (upper) and dL (lower):
− If DW < dL then reject H0;
− If dL < DW < dU, then the test is inconclusive;
− If DW > dU, then fail to reject H0.
· The Durbin-Watson test is more popular because dU and dL are tabulated.
· The t test, however, is simple to compute and asymptotically valid without normally distributed
errors. The t statistic is also valid in the presence of heteroscedasticity (using the robust form).
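Since DW ≈ 2(1 − ρ̂), it is easy to compute directly from the residuals; a minimal sketch on simulated residual series:

```python
import numpy as np

def durbin_watson_stat(uhat):
    """DW = sum_{t=2}^n (u_t - u_{t-1})^2 / sum_{t=1}^n u_t^2,
    which is approximately 2*(1 - rho_hat)."""
    uhat = np.asarray(uhat, dtype=float)
    return np.sum(np.diff(uhat) ** 2) / np.sum(uhat ** 2)

rng = np.random.default_rng(1)
white = rng.normal(size=500)            # serially uncorrelated "residuals": DW near 2
ar = np.zeros(500)
for t in range(1, 500):
    ar[t] = 0.8 * ar[t - 1] + white[t]  # strong positive correlation: DW well below 2

print(durbin_watson_stat(white))
print(durbin_watson_stat(ar))
```

statsmodels also ships the same computation as `statsmodels.stats.stattools.durbin_watson`.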

3. Testing for AR(1) Serial Correlation without Strictly Exogenous Regressors


(1) Run y_t on x_t1, ..., x_tk and obtain the residuals û_t.
(2) Run û_t on x_t1, ..., x_tk, û_{t-1} for all t = 2, ..., n and obtain the coefficient ρ̂ on û_{t-1} and its t statistic,
t_ρ̂ (use the heteroscedasticity-robust t statistic if needed).
(3) Use t_ρ̂ to test the null hypothesis H0: ρ = 0 (or a one-sided alternative).

4. Testing for Higher Order Serial Correlation (Test for the Joint Significance)
We can test for serial correlation in the autoregressive model of order q:

u_t = ρ1 u_{t-1} + ρ2 u_{t-2} + ... + ρq u_{t-q} + e_t

· Testing for AR(q) Serial Correlation, H0: ρ1 = ρ2 = ... = ρq = 0:

(1) Run y_t on x_t1, ..., x_tk, and obtain the residuals û_t.
(2) Run û_t on x_t1, ..., x_tk, û_{t-1}, ..., û_{t-q} for all t = q+1, ..., n.
(3) Compute the F test for joint significance of the coefficients on û_{t-1}, ..., û_{t-q} (use the
heteroscedasticity-robust F statistic if needed). Alternatively, you can compute the Breusch-
Godfrey test: LM = (n − q)R²_û ~ χ²_q, where R²_û is just the usual R-squared from Step (2).

12-3 Correcting for Serial Correlation with Strictly Exogenous Regressors

1. Quasi-differencing to obtain BLUE in the AR(1) model

· We assume that the errors follow the AR(1) model
− u_t = ρu_{t-1} + e_t, |ρ| < 1
− E(e_t) = 0, Var(e_t) = σ_e²
· Consider a simple case: y_t = β0 + β1 x_t + u_t (1)

Thus, y_{t-1} = β0 + β1 x_{t-1} + u_{t-1} for t ≥ 2 (2)
· We want to get rid of u_t, so multiply (2) by ρ and subtract it from (1):
− y_t − ρy_{t-1} = (1 − ρ)β0 + β1(x_t − ρx_{t-1}) + (u_t − ρu_{t-1})
− y_t − ρy_{t-1} = (1 − ρ)β0 + β1(x_t − ρx_{t-1}) + e_t
− ỹ_t = (1 − ρ)β0 + β1 x̃_t + e_t, t ≥ 2 (3)
· ỹ_t = y_t − ρy_{t-1} and x̃_t = x_t − ρx_{t-1} are called the quasi-differenced data.
· We then need the equation for t = 1 (see more in Section 12-3a, pp. 382-383, textbook):
ỹ_1 = (1 − ρ²)^(1/2) β0 + β1 x̃_1 + ẽ_1
where ỹ_1 = (1 − ρ²)^(1/2) y_1, x̃_1 = (1 − ρ²)^(1/2) x_1, and ẽ_1 = (1 − ρ²)^(1/2) u_1.
· Model (3) then satisfies all of the Gauss-Markov assumptions.
· For a given ρ, the GLS estimators (OLS on the transformed data) are BLUE. F and t statistics
are also valid.
· Unless ρ = 0, the GLS and original OLS estimators are different.
· The only problem is that we do not know ρ!
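For a known ρ, the transformation above is mechanical; a minimal sketch:

```python
import numpy as np

def quasi_difference(z, rho):
    """Quasi-difference a series for a known AR(1) parameter rho:
    z~_1 = sqrt(1 - rho^2) * z_1   (the t = 1 equation),
    z~_t = z_t - rho * z_{t-1}     for t >= 2."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    out[0] = np.sqrt(1.0 - rho ** 2) * z[0]
    out[1:] = z[1:] - rho * z[:-1]
    return out

# Applied to a column of ones, this produces exactly the transformed intercept
# "regressor": sqrt(1 - rho^2) for t = 1 and (1 - rho) for t >= 2.
print(quasi_difference(np.ones(4), 0.5))
```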

2. Feasible GLS Estimation of the AR(1) Errors


· For GLS, we do not know ρ, but we do have a consistent estimator of it, ρ̂. Replacing the unknown ρ by
ρ̂ leads to feasible GLS (FGLS) estimation.
(1) Run y_t on x_t1, ..., x_tk and obtain the residuals û_t.
(2) Run û_t on û_{t-1} for all t = 2, ..., n and obtain the coefficient ρ̂ on û_{t-1}.
(3) Apply OLS to the model ỹ_t = β0 x̃_t0 + β1 x̃_t1 + ... + βk x̃_tk + error_t,
where x̃_t0 = (1 − ρ̂) for t ≥ 2 and x̃_10 = (1 − ρ̂²)^(1/2); ỹ_t and the x̃_tj are all quasi-differenced
variables.
The FGLS t and F statistics are only approximately t and F distributed (due to the estimation error in ρ̂);
hence, FGLS is not BLUE. Nevertheless, the FGLS estimator is asymptotically more efficient than
OLS when AR(1) serial correlation exists and the explanatory variables are strictly exogenous.
· Two FGLS methods:
− Cochrane-Orcutt (CO) estimation: omits the first observation (t = 1).
− Prais-Winsten (PW) estimation: uses the first observation.
− The two are asymptotically equivalent, but the PW should be more efficient when the
sample size is small.

12-4 Differencing and Serial Correlation

The usual OLS inference procedures can be very misleading when we use highly persistent variables
(integrated of order one, or I(1)); this is examined later in Chapter 18. The conventional wisdom is to
difference highly persistent series to achieve weak dependence before putting them into a
regression model.

12-5 Serial Correlation-Robust Inference after OLS

1. It has become more popular to estimate models by OLS but to correct the standard errors for serial
correlation and heteroscedasticity. Reasons for not using FGLS:
· The explanatory variables may not be strictly exogenous. In this case, FGLS is not consistent, let
alone efficient.
· In most FGLS applications, the error terms are assumed to be AR(1) which may not be true. It
may be better to compute standard errors for the OLS estimates that are robust to more general
forms of serial correlation.
2. Newey-West (NW) approach (for the standard error of β̂1, say):
(1) Run y_t on x_t1, ..., x_tk by OLS, obtaining β̂1, σ̂ (the standard error of the regression), the
residuals û_t, and the usual (but incorrect) OLS standard error "se(β̂1)".
(2) Run the auxiliary regression of x_t1 on x_t2, ..., x_tk and obtain the auxiliary residuals r̂_t, then form

â_t = r̂_t û_t (for each t).

(3) Pick a truncation lag g, then compute
ν̂ = Σ_{t=1}^{n} â_t² + 2 Σ_{h=1}^{g} [1 − h/(g + 1)] (Σ_{t=h+1}^{n} â_t â_{t-h})

(4) Compute the SC-robust standard error: se(β̂1) = ["se(β̂1)"/σ̂]² √ν̂


Notes:
(1) For small sample sizes, choose a small g, such as 1 or 2. More generally, NW recommend g to be the
integer part of 4(n/100)^(2/9).
(2) The SC-robust standard errors can be used to construct confidence intervals and t statistics for
β̂1.
(3) The SC-robust standard errors are also robust to arbitrary heteroscedasticity, so they are
sometimes called heteroscedasticity and autocorrelation consistent (HAC) standard errors.
(4) Empirically, the SC-robust standard errors are typically larger than the usual OLS when there is
serial correlation.

12-6 Heteroscedasticity in Time Series Regressions

1. In time series data, even though the basic concern is autocorrelation, it is also possible to have
heteroscedasticity, which makes the usual standard errors and t and F statistics invalid. Therefore, we need
to test and correct for heteroscedasticity in a similar manner as in the cross-sectional case. We
can also obtain heteroscedasticity-robust statistics for the time series regression. The Breusch-
Pagan or White tests for heteroscedasticity (recall Ch08) can be used directly, but with a few
cautions:
(1) The errors ut of the time series model should not be serially correlated.

− It makes sense to test for serial correlation first using a heteroscedasticity-robust test if
heteroscedasticity is suspected.
− Then, after something has been done to correct for serial correlation, we can test for
heteroscedasticity.
(2) The errors vt of the test models are also assumed to be homoscedastic and serially
uncorrelated.
− Using this implicit assumption rules out certain forms of dynamic heteroscedasticity.
2. If xt contains a lagged dependent variable, then heteroscedasticity in ut is dynamic. Even in a static
model where the Gauss-Markov assumptions hold (so OLS is BLUE and the usual inference is valid), there are
other ways that heteroscedasticity can arise.
Engle, a Nobel laureate, developed the ARCH model in 1982:
· With ARCH, we convey the fact that we are working with time varying variances
(heteroscedasticity) that depend on lagged effects (autocorrelation).
· ARCH is popular for financial data in which large and small residuals tend to come in clusters.
The autoregressive conditional heteroscedasticity (ARCH) model:
· Consider a simple static model in which OLS is BLUE.
· The ARCH(1) model says that the variance of the error term depends on its history:
E(u_t² | u_{t-1}, u_{t-2}, ...) = E(u_t² | u_{t-1}) = α0 + α1 u_{t-1}²
(1)
− Since conditional variances must be positive, this model only makes sense if α0 > 0 and
α1 ≥ 0; if α1 = 0, there are no dynamics in the variance equation.
· It is instructive to write Eq. (1) as u_t² = α0 + α1 u_{t-1}² + v_t
(2)

where v_t = u_t² − E(u_t² | u_{t-1}, u_{t-2}, ...) (by definition); the v_t are not independent of past ut (because of the
constraint v_t ≥ −α0 − α1 u_{t-1}²).
− Eq. (2) looks like an autoregressive model in u_t² (hence the name ARCH).
− We can use Eq. (2) to test for dynamic heteroscedasticity.
