
Phillips-Perron (PP) Unit Root Tests

The Dickey–Fuller test involves fitting the regression model

Δy_t = ρ y_{t−1} + (constant, time trend) + u_t

(1)

by ordinary least squares (OLS), but serial correlation will present a problem. To account for this, the augmented Dickey–Fuller test's regression includes lags of the first differences of y_t. The Phillips–Perron test instead leaves the regression unaugmented and uses the results to calculate corrected test statistics. In fact PP estimate not (1) but the levels form:

y_t = π y_{t−1} + (constant, time trend) + u_t

(2)
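As a concrete illustration, here is a minimal numpy sketch fitting (1) in differences and (2) in levels by OLS. The simulated random walk, the seed, and the constant-only (no trend) specification are illustrative choices, not from the text. Since Δy_t = y_t − y_{t−1}, the two parameterisations are the same model with ρ = π − 1, and the fitted coefficients obey ρ̂ = π̂ − 1 exactly.

```python
# Illustrative sketch: regressions (1) and (2) are reparameterisations of
# the same model. Data and specification are assumptions for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 200
y = np.cumsum(rng.standard_normal(n))  # a simulated random walk (unit root)

dy = y[1:] - y[:-1]                          # Delta y_t
ylag = y[:-1]                                # y_{t-1}
X = np.column_stack([ylag, np.ones(n - 1)])  # regressor plus constant

# Regression (1): Delta y_t = rho * y_{t-1} + constant + u_t
rho_hat, c1 = np.linalg.lstsq(X, dy, rcond=None)[0]

# Regression (2): y_t = pi * y_{t-1} + constant + u_t
pi_hat, c2 = np.linalg.lstsq(X, y[1:], rcond=None)[0]

# Same model, so rho = pi - 1 (identical up to floating-point error)
print(rho_hat, pi_hat - 1)
```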

In (2) u_t is I(0) and may be heteroskedastic. The PP tests correct for any serial correlation and heteroskedasticity in the errors u_t non-parametrically by modifying the Dickey–Fuller test statistics.

Phillips and Perron's test statistics can be viewed as Dickey–Fuller statistics that have been made robust to serial correlation by using the Newey–West (1987) heteroskedasticity- and autocorrelation-consistent covariance matrix estimator.

Under the null hypothesis that ρ = 0, the PP Z_t and Z_π statistics have the same asymptotic distributions as the ADF t statistic and normalized bias statistic, respectively. One advantage of the PP tests over the ADF tests is that the PP tests are robust to general forms of heteroskedasticity in the error term u_t. Another advantage is that the user does not have to specify a lag length for the test regression.

We have not dealt with it so far, but the Dickey–Fuller test in fact produces two test statistics. The normalized bias T(π̂ − 1) has a well-defined limiting distribution that does not depend on nuisance parameters, so it can also be used as a test statistic for the null hypothesis H0: π = 1. This is the second test from DF and relates to Z_π in Phillips and Perron.
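Both Dickey–Fuller statistics can be computed directly from the OLS output of (2). A hedged numpy sketch follows; the simulated random walk, the seed, and the constant-only specification are illustrative assumptions rather than anything specified in the text.

```python
# Illustrative sketch: the two DF statistics from levels regression (2) --
# the t statistic for H0: pi = 1 and the normalized bias T*(pi_hat - 1).
import numpy as np

rng = np.random.default_rng(0)
n = 200
y = np.cumsum(rng.standard_normal(n))        # simulated random walk

ylag = y[:-1]
X = np.column_stack([ylag, np.ones(n - 1)])  # y_{t-1} plus constant
T = len(ylag)

beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
pi_hat = beta[0]
resid = y[1:] - X @ beta
s2 = resid @ resid / (T - X.shape[1])        # OLS error variance
cov = s2 * np.linalg.inv(X.T @ X)            # OLS covariance matrix
se_pi = np.sqrt(cov[0, 0])                   # standard error of pi_hat

t_stat = (pi_hat - 1.0) / se_pi              # DF t statistic
norm_bias = T * (pi_hat - 1.0)               # DF normalized bias
print(t_stat, norm_bias)
```

Both statistics have non-standard limiting distributions under the unit-root null, so they are compared with Dickey–Fuller critical values, not normal ones.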


EXTRACT FROM STATA MANUAL

Note the regression is y on lagged y, not differenced y on lagged y. In the notation of (2), with T observations and OLS residuals û_t, the two PP statistics are:

Z_π = T(π̂ − 1) − (T² σ̂² / (2 S²_n)) (λ̂² − γ̂_0)

Z_t = (γ̂_0/λ̂²)^(1/2) (π̂ − 1)/σ̂ − (λ̂² − γ̂_0) T σ̂ / (2 λ̂ S_n)

where

γ̂_j = (1/T) Σ_{i=j+1..T} û_i û_{i−j},  j = 0, 1, ..., q

λ̂² = γ̂_0 + 2 Σ_{j=1..q} (1 − j/(q+1)) γ̂_j

Z_t is the adjusted t statistic as in Dickey–Fuller: (π̂ − 1)/σ̂ is just the equivalent of the t stat in the DF test, where σ̂ is the OLS standard error of π̂. S²_n is an unbiased estimator (OLS) of the variance of the error terms. γ̂_0 (the j = 0 case) is a (maximum likelihood) estimate of the variance of the error terms; for j > 0, γ̂_j is an estimator of the covariance between two error terms j periods apart. q is the number of lagged covariances looked at.

Now when those covariances are zero – i.e. there is no autocorrelation between the error terms – γ̂_j is zero for j > 0. Hence λ̂² = γ̂_0: the second term in Z_t disappears, and in the first term we can replace λ̂² with γ̂_0, so (γ̂_0/λ̂²)^(1/2) = 1. Z_t thus reduces to (π̂ − 1)/σ̂. This is just the t statistic in the standard Dickey–Fuller equation. Hence when there is no autocorrelation between error terms this part of the Phillips–Perron test is equal to the Dickey–Fuller, albeit one estimated on (2) rather than (1). This perspective helps us understand that the PP test corrects the DF one for autocorrelation amongst the error terms non-parametrically (i.e. outside of a regression framework). Z_t has the same asymptotic distribution as the Dickey–Fuller statistic, so the same critical values apply.

Although we have not worked through it, when there is no autocorrelation between the error terms the second term in the other PP statistic, Z_π, also collapses to zero, because λ̂² − γ̂_0 = 0. In this case Z_π = T(π̂ − 1), which again is the same as the (normalized bias) Dickey–Fuller test.
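The collapse of both PP statistics to their Dickey–Fuller counterparts can be checked numerically. The sketch below implements γ̂_j, λ̂², Z_π and Z_t along the lines of Stata's pperron methods-and-formulas; the simulated data, the seed, and the constant-only specification are illustrative assumptions. Setting q = 0 forces λ̂² = γ̂_0, so the correction terms vanish and Z_t equals the DF t statistic while Z_π equals T(π̂ − 1).

```python
# Illustrative sketch of the PP corrections; data and specification are
# assumptions for the example, formulas follow the notes above.
import numpy as np

rng = np.random.default_rng(0)
n = 200
y = np.cumsum(rng.standard_normal(n))        # simulated random walk

ylag = y[:-1]
X = np.column_stack([ylag, np.ones(n - 1)])
T = len(ylag)

beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
pi_hat = beta[0]
u = y[1:] - X @ beta                         # residuals u_t from (2)
s2 = u @ u / (T - X.shape[1])                # S^2_n: OLS error variance
se_pi = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
t_df = (pi_hat - 1.0) / se_pi                # DF t statistic on (2)

def pp_stats(u, q):
    """Z_pi and Z_t using q lagged autocovariances with Bartlett weights."""
    gamma = np.array([u[j:] @ u[: T - j] / T for j in range(q + 1)])
    lam2 = gamma[0] + 2 * sum((1 - j / (q + 1)) * gamma[j]
                              for j in range(1, q + 1))
    z_pi = T * (pi_hat - 1) - 0.5 * (T**2 * se_pi**2 / s2) * (lam2 - gamma[0])
    z_t = (np.sqrt(gamma[0] / lam2) * t_df
           - 0.5 * (lam2 - gamma[0]) * T * se_pi
           / (np.sqrt(lam2) * np.sqrt(s2)))
    return z_pi, z_t

# q = 0 means no covariance correction: both statistics collapse to DF
z_pi0, z_t0 = pp_stats(u, 0)
print(z_t0 - t_df, z_pi0 - T * (pi_hat - 1))  # both exactly zero
```

With q > 0 the estimated autocovariances enter λ̂² and the two statistics generally differ from their DF counterparts, which is precisely the non-parametric correction described above.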