
Lecture 24: Unit Root Tests, II

The initial DF unit root tests assumed that, under the unit root null hypothesis, the first differences of the series are serially uncorrelated. Since the first differences of most macroeconomic time series are serially correlated, these tests were of limited value in empirical macroeconomics. This problem was addressed in the development of the Augmented Dickey-Fuller test (ADF test) and the Phillips-Perron test (PP test).

Suppose that y_t has the AR(p) form

y_t = a_1 y_{t-1} + ... + a_p y_{t-p} + ε_t,  where ε_t ~ iid(0, σ²).

We assume that either

i) all of the zeroes of 1 - a_1 z - ... - a_p z^p are greater than one in modulus (and, therefore, y_t ~ I(0)), or

ii) a(1) = 0 and all of the other zeroes of 1 - a_1 z - ... - a_p z^p are greater than one in modulus (and, therefore, y_t ~ I(1)).

We can rewrite this model as

y_t = ρ y_{t-1} + γ_1 Δy_{t-1} + ... + γ_{p-1} Δy_{t-p+1} + ε_t

where

ρ = a_1 + ... + a_p
γ_1 = -(a_2 + ... + a_p)
γ_2 = -(a_3 + ... + a_p)
...
γ_{p-1} = -a_p

Then the unit root null hypothesis is

H0: ρ = 1 (i.e., ρ - 1 = 0).
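As a quick numerical check on this reparameterization (my own illustration, not part of the lecture; the AR coefficients and lag values below are arbitrary), the short NumPy sketch verifies that the original AR(p) form and the rewritten form give identical values.

```python
# A minimal check that the AR(p) form and its rewrite coincide algebraically.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.5, 0.3, 0.1])                          # arbitrary AR(3) coefficients a_1, a_2, a_3
p = len(a)

rho = a.sum()                                          # coefficient on y_{t-1} in the rewrite
gamma = np.array([-a[j:].sum() for j in range(1, p)])  # gamma_1, ..., gamma_{p-1}

ylag = rng.normal(size=p)                              # arbitrary values of y_{t-1}, ..., y_{t-p}
dylag = ylag[:-1] - ylag[1:]                           # first differences: y_{t-1}-y_{t-2}, ..., y_{t-p+1}-y_{t-p}

lhs = a @ ylag                                         # a_1 y_{t-1} + ... + a_p y_{t-p}
rhs = rho * ylag[0] + gamma @ dylag                    # rho y_{t-1} + gamma_1 (y_{t-1}-y_{t-2}) + ...
print(np.isclose(lhs, rhs))                            # True: the two parameterizations agree
```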

The ADF Test

Regress y_t on y_{t-1}, Δy_{t-1}, ..., Δy_{t-p+1} and compute the t-statistic

τ̂ = (ρ̂ - 1)/se(ρ̂).

(Or, regress Δy_t on y_{t-1}, Δy_{t-1}, ..., Δy_{t-p+1}; the coefficient on y_{t-1} in this regression is ρ - 1, and its t-statistic is numerically identical to τ̂.)

Use the same DF distribution for τ̂ that you would use in the AR(1) case.
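As an illustration (my addition, not from the notes), the following Python sketch runs this ADF regression with no deterministic terms "by hand" via OLS and cross-checks the t-statistic against statsmodels' adfuller. The simulated series, the choice p = 3, and the variable names are all just for the example.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
y = np.cumsum(rng.normal(size=500))      # a random walk, so the unit root null is true

p = 3                                    # AR order for the example; p - 1 lagged differences enter
k = p - 1
dy = np.diff(y)                          # first differences of y

# Regressand: dy_t.  Regressors: y_{t-1}, dy_{t-1}, ..., dy_{t-p+1} (no constant, no trend).
Y = dy[k:]
X = np.column_stack([y[k:-1]] + [dy[k - j:-j] for j in range(1, k + 1)])
res = sm.OLS(Y, X).fit()
tau_hat = res.tvalues[0]                 # t-statistic on y_{t-1}: compare with the DF tau table

# Cross-check: adfuller with a fixed lag length and no deterministic terms
# (older statsmodels versions spell this option regression="nc").
adf_stat = adfuller(y, maxlag=k, regression="n", autolag=None)[0]
print(tau_hat, adf_stat)                 # the two statistics should match
```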

We can allow for a non-zero mean under the alternative by adding an intercept to the model: regress y_t (or Δy_t) on 1, y_{t-1}, Δy_{t-1}, ..., Δy_{t-p+1}, compute the t-statistic

τ̂_μ = (ρ̂ - 1)/se(ρ̂)

(or, equivalently, the t-statistic on y_{t-1} from the Δy_t regression), and compare its value to the percentiles of the DF τ_μ distribution.

Similarly, we can allow for a trend under the null and alternative by adding an intercept and a trend to the model: regress y_t (or Δy_t) on 1, t, y_{t-1}, Δy_{t-1}, ..., Δy_{t-p+1}, compute the t-statistic

τ̂_τ = (ρ̂ - 1)/se(ρ̂)

(or, equivalently, the t-statistic on y_{t-1} from the Δy_t regression), and compare its value to the percentiles of the DF τ_τ distribution.
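One way to run these three variants in practice (my illustration, not part of the notes; the simulated series is a placeholder) is through the regression argument of statsmodels' adfuller, which selects the deterministic terms included in the test regression.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300)) + 0.1 * np.arange(300)   # a random walk with drift

# "n": no deterministics (tau); "c": intercept (tau_mu); "ct": intercept and trend (tau_tau)
for reg, label in [("n", "tau"), ("c", "tau_mu"), ("ct", "tau_tau")]:
    stat, pval, usedlag, nobs, crit, icbest = adfuller(y, regression=reg, autolag="AIC")
    print(f"{label}: stat = {stat:.3f}, 5% critical value = {crit['5%']:.3f}, p-value = {pval:.3f}")
```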

Problem: How to select p?

Solutions:
- model selection criteria (AIC, SIC)
- formal testing (e.g., sequential t-tests: the γ̂_i's are asymptotically normal under the null)
- diagnostic checking for residual whiteness (Ljung-Box Q test, ...)

(A short code sketch of the first and third strategies appears below.)

Application: Nelson and Plosser (JME, 1982)
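The sketch below is my own addition (the simulated series, the lag cap of 12, and the candidate k = 1 are all illustrative): it lets an information criterion pick the number of lagged differences via adfuller's autolag option, and then checks the residuals of an ADF regression with a candidate lag length for whiteness with a Ljung-Box Q test.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
T = 600
dy = np.zeros(T)
e = rng.normal(size=T)
for t in range(1, T):
    dy[t] = 0.5 * dy[t - 1] + e[t]       # serially correlated first differences
y = np.cumsum(dy)                        # so y_t is I(1) by construction

# 1. Information-criterion choice of the number of lagged differences
for crit in ("AIC", "BIC"):
    stat, pval, usedlag, nobs, critvals, icbest = adfuller(y, maxlag=12, regression="c", autolag=crit)
    print(f"{crit}: {usedlag} lagged differences, ADF stat = {stat:.3f}")

# 2. Residual whiteness at a candidate lag length k (k = 1 is just an example)
k = 1
d = np.diff(y)
Y = d[k:]
X = sm.add_constant(np.column_stack([y[k:-1]] + [d[k - j:-j] for j in range(1, k + 1)]))
resid = sm.OLS(Y, X).fit().resid
print(acorr_ljungbox(resid, lags=[10]))  # small p-values suggest k is too small
```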

The ADF test relies on a parametric transformation of the model that eliminates the serial correlation in the error term without affecting the asymptotic distributions of the various statistics. Phillips and Perron (Biometrika, 1988) instead proposed nonparametric transformations of the statistics from the original DF regressions such that, under the unit root null, the transformed statistics (the z statistics) have DF distributions.

So, for example, suppose our model is

y_t = ρ y_{t-1} + ε_t,  ε_t ~ I(0) with mean 0.

The PP procedure:
- Regress y_t on y_{t-1}.
- Compute τ̂.
- Modify τ̂ to get z.
- Under H0, z's asymptotic distribution is the DF distribution for τ̂.

If our model is

y_t = μ + ρ y_{t-1} + ε_t,  ε_t ~ I(0) with mean 0,

or

y_t = μ + δt + ρ y_{t-1} + ε_t,  ε_t ~ I(0) with mean 0,

the PP procedure is basically the same as above, i.e.,

- Regress y_t on 1 (and t), y_{t-1}.
- Compute τ̂_μ (τ̂_τ).
- Modify τ̂_μ to get z_μ (modify τ̂_τ to get z_τ).
- Under H0, z_μ's asymptotic distribution is the DF distribution for τ̂_μ (and z_τ's asymptotic distribution is the DF distribution for τ̂_τ).
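One possible way to compute PP statistics in practice is sketched below. This is my addition, not part of the notes, and it assumes the third-party arch package; the trend argument picks the deterministic terms and test_type="tau" requests the t-statistic version of the correction.

```python
import numpy as np
from arch.unitroot import PhillipsPerron

rng = np.random.default_rng(7)
y = np.cumsum(rng.normal(size=400))                        # a random walk, so H0 is true

pp_mu = PhillipsPerron(y, trend="c", test_type="tau")      # intercept only
pp_tau = PhillipsPerron(y, trend="ct", test_type="tau")    # intercept and linear trend
print(pp_mu.stat, pp_mu.pvalue)                            # modified statistic and its p-value
print(pp_tau.summary())                                    # full summary, incl. critical values
```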

ADF vs. PP
The PP test does not require us to specify the form of the serial correlation of Δy_t under the null. In addition, the PP test does not require that the ε_t's be conditionally homoskedastic (an implicit assumption of the ADF test). If we apply the ADF test and underspecify p, the AR order, the test will be mis-sized; if we overspecify p, the test's power will suffer. These problems are avoided in the PP test, but if we can correctly specify p, the PP test will be less powerful than the ADF test. Also, the PP test requires choosing a bandwidth parameter (as part of the construction of the Newey-West covariance estimator), which creates finite-sample problems analogous to those associated with the lag-length selection issue in applying the ADF test.
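To make the trade-off concrete, here is a small illustrative comparison (my own construction, assuming statsmodels and the arch package are installed; the MA(1) coefficient and sample size are arbitrary). The first differences are serially correlated, so an ADF test with too few lagged differences has a misspecified regression, while the AIC-based ADF and the PP test both account for the serial correlation.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from arch.unitroot import PhillipsPerron

rng = np.random.default_rng(3)
e = rng.normal(size=501)
dy = e[1:] + 0.7 * e[:-1]                                             # MA(1) first differences
y = np.cumsum(dy)                                                     # y_t is I(1)

adf_nolags = adfuller(y, maxlag=0, regression="c", autolag=None)[0]   # no lagged differences
adf_aic = adfuller(y, regression="c", autolag="AIC")[0]               # data-driven lag choice
pp = PhillipsPerron(y, trend="c")

print(f"ADF (0 lags): {adf_nolags:.3f}")
print(f"ADF (AIC):    {adf_aic:.3f}")
print(f"PP:           {pp.stat:.3f}")
```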
