BSc (Hons) Finance II / BSc (Hons) Finance with Law II
Module: Principles of Financial Econometrics I
Lecturer: Dr Baboo M Nowbutsing
Topic 10: Autocorrelation


Outline
1. Introduction
2. Causes of Autocorrelation
3. OLS Estimation
4. BLUE Estimator
5. Consequences of Using OLS
6. Detecting Autocorrelation


1. Introduction

• Autocorrelation occurs in time-series studies when the errors associated with a given time period carry over into future time periods.
• For example, if we are predicting the growth of stock dividends, an overestimate in one year is likely to lead to overestimates in succeeding years.



• Time series data follow a natural ordering over time.
• It is likely that such data exhibit intercorrelation, especially if the time interval between successive observations is short, such as weeks or days.


• We expect stock market prices to move up or down for several days in succession.
• In situations like this, the assumption of no autocorrelation or serial correlation in the error term that underlies the CLRM is violated.
• We experience autocorrelation when $E(u_i u_j) \neq 0$ for $i \neq j$.

• Sometimes the terms autocorrelation and serial correlation are used interchangeably. However, some authors prefer to distinguish between them.
• For example, Tintner defines autocorrelation as the 'lag correlation of a given series with itself, lagged by a number of time units', whereas serial correlation is the 'lag correlation between two different series'.
• We will use both terms interchangeably in this lecture.

• There are different types of serial correlation.
• With first-order serial correlation, errors in one time period are correlated directly with errors in the ensuing time period.
• With positive serial correlation, errors in one time period are positively correlated with errors in the next time period.

2. Causes of Autocorrelation

1. Inertia: macroeconomic data experience cycles/business cycles.

2. Specification bias, excluded variable:
• Appropriate equation: $Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \beta_4 X_{4t} + u_t$
• Estimated equation: $Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + v_t$
• Estimating the second equation implies $v_t = \beta_4 X_{4t} + u_t$.

3. Specification bias, incorrect functional form:
• Correct form: $Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{2t}^2 + v_t$
• Estimated form: $Y_t = \beta_1 + \beta_2 X_{2t} + u_t$, which implies $u_t = \beta_3 X_{2t}^2 + v_t$.

4. Cobweb phenomenon: in agricultural markets, supply reacts to price with a lag of one time period because supply decisions take time to implement. Thus, at the beginning of this year's planting of crops, farmers are influenced by the price prevailing last year. This is known as the cobweb phenomenon.

5. Lags: $\text{Consumption}_t = \beta_1 + \beta_2\,\text{Consumption}_{t-1} + u_t$
• The above equation is known as an autoregression because one of the explanatory variables is the lagged value of the dependent variable.
• If you neglect the lagged term, the resulting error term will reflect a systematic pattern due to the influence of lagged consumption on current consumption.

6. Data manipulation:
$Y_t = \beta_1 + \beta_2 X_t + u_t$
$Y_{t-1} = \beta_1 + \beta_2 X_{t-1} + u_{t-1}$
$\Delta Y_t = \beta_2 \Delta X_t + v_t$
• The last equation is known as the first difference form, a dynamic regression model; the first equation is known as the level form.
• Note that the error term in the level form is not autocorrelated, but it can be shown that the error term in the first difference form is autocorrelated (see the sketch below).
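This claim is easy to verify by simulation. A minimal Python sketch (illustrative, not part of the lecture), assuming the level-form errors are white noise; the first-difference errors then show a first-order autocorrelation of about -0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=100_000)      # serially uncorrelated level-form errors
v = np.diff(u)                    # first-difference errors v_t = u_t - u_{t-1}

r1 = np.corrcoef(v[1:], v[:-1])[0, 1]   # sample first-order autocorrelation of v
print(round(r1, 3))                     # close to -0.5
```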

7. Nonstationarity: when dealing with time series data, we should check whether the given series is stationary.
• A time series is stationary if its characteristics (e.g. mean, variance and covariance) are time invariant, that is, they do not change over time.
• If that is not the case, we have a nonstationary time series.

3. OLS Estimation

• Consider $Y_t = \beta_1 + \beta_2 X_t + u_t$ with $E(u_i u_j) \neq 0$.
• Assume that the error term can be modeled as follows:
$u_t = \rho u_{t-1} + \varepsilon_t, \qquad -1 < \rho < 1$
• $\rho$ is known as the coefficient of autocovariance, and the error term $\varepsilon_t$ satisfies the OLS assumptions.
• This scheme is known as an autoregressive, AR(1), process.

• Under the AR(1) scheme:
$\operatorname{Var}(u_t) = E(u_t^2) = \dfrac{\sigma_\varepsilon^2}{1-\rho^2}$
$\operatorname{Cov}(u_t, u_{t-s}) = E(u_t u_{t-s}) = \rho^s\,\dfrac{\sigma_\varepsilon^2}{1-\rho^2}$
$\operatorname{Cor}(u_t, u_{t-s}) = \rho^s$
• The variance of the OLS slope estimator then becomes
$\operatorname{Var}(\hat\beta_2) = \dfrac{\sigma^2}{\sum x_t^2}\left[1 + 2\rho\,\dfrac{\sum x_t x_{t+1}}{\sum x_t^2} + 2\rho^2\,\dfrac{\sum x_t x_{t+2}}{\sum x_t^2} + \cdots + 2\rho^{n-1}\,\dfrac{x_1 x_n}{\sum x_t^2}\right]$
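These moments can be checked against a simulated AR(1) error process. A minimal sketch (not from the lecture), with ρ = 0.7 and σ = 1 as illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma, n = 0.7, 1.0, 200_000
eps = rng.normal(scale=sigma, size=n)

u = np.empty(n)
u[0] = eps[0] / np.sqrt(1 - rho**2)     # draw u_0 from the stationary distribution
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]

print(u.var(), sigma**2 / (1 - rho**2))          # both approximately 1.96
for s in (1, 2, 3):
    r_s = np.corrcoef(u[s:], u[:-s])[0, 1]
    print(s, round(r_s, 3), round(rho**s, 3))    # sample vs theoretical rho^s
```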

4. BLUE Estimator

• Under the AR(1) process, the BLUE estimator of $\beta_2$ is given by the following expression:
$\hat\beta_2^{GLS} = \dfrac{\sum_{t=2}^{n}(x_t - \rho x_{t-1})(y_t - \rho y_{t-1})}{\sum_{t=2}^{n}(x_t - \rho x_{t-1})^2} + C$
$\operatorname{Var}(\hat\beta_2^{GLS}) = \dfrac{\sigma^2}{\sum_{t=2}^{n}(x_t - \rho x_{t-1})^2} + D$
where C and D are correction factors that may be disregarded in practice.

• The Gauss-Markov theorem provides only a sufficient condition for OLS to be BLUE.
• The necessary conditions for OLS to be BLUE are given by Kruskal's theorem.
• Therefore, in some cases, it can happen that OLS is BLUE despite autocorrelation. But such cases are very rare.

5. Consequences of Using OLS

• OLS estimation allowing for autocorrelation:
• As noted, the estimator is no longer BLUE, and even if we use the autocorrelation-adjusted variance, the confidence intervals derived from it are likely to be wider than those based on the GLS procedure.
• Hypothesis testing: we are likely to declare a coefficient statistically insignificant even though in fact it may be significant.
• One should therefore use GLS rather than OLS.

• OLS estimation disregarding autocorrelation:
• The estimated variance of the error term is likely to underestimate the true variance.
• R-squared is likely to be overestimated.
• Therefore, the usual t and F tests of significance are no longer valid, and if applied, they are likely to give seriously misleading conclusions about the statistical significance of the estimated regression coefficients.

6. Detecting Autocorrelation

• Graphical method: there are various ways of examining the residuals.
• A time sequence plot can be produced.
• Alternatively, we can plot the standardized residuals against time. The standardized residuals are simply the residuals divided by the standard error of the regression.
• If the plot shows a pattern, the errors may not be random.
• We can also plot the error term against its first lag (see the sketch below).
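A sketch of both plots, assuming matplotlib is available; the residuals here are fabricated with visible persistence, since in practice they would come from an OLS fit:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(2)
u = np.cumsum(rng.normal(size=100)) * 0.3   # fabricated residuals with persistence

z = u / u.std(ddof=1)   # standardized residuals (divided here by their own std. dev.)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(z)                    # time sequence plot of the standardized residuals
ax1.set(xlabel="time", ylabel="standardized residual")
ax2.scatter(z[:-1], z[1:])     # residual against its first lag
ax2.set(xlabel="u(t-1)", ylabel="u(t)")
plt.show()
```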

• The Runs Test: consider a list of estimated error terms; the error terms can be positive or negative. In the following sequence there are three runs:
(─ ─ ─ ─ ─ ─) (+ + + + + + + + + + + + +) (─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─)
• A run is defined as an uninterrupted sequence of one symbol or attribute, such as + or ─.
• The length of a run is defined as the number of elements in it.
• The above sequence has three runs: the first is a run of 6 minuses, the second a run of 13 pluses, and the last a run of 11 minuses.


• Define:
• N: total number of observations, $N = N_1 + N_2$
• $N_1$: number of + symbols (i.e. + residuals)
• $N_2$: number of ─ symbols (i.e. ─ residuals)
• R: number of runs
• Assuming that $N_1 > 10$ and $N_2 > 10$, the number of runs is approximately normally distributed with:

$E(R) = \dfrac{2N_1 N_2}{N} + 1 \qquad \sigma_R^2 = \dfrac{2N_1 N_2 (2N_1 N_2 - N)}{N^2(N - 1)}$
• If the null hypothesis of randomness is sustainable, then following the properties of the normal distribution we should expect that
$\Pr[\,E(R) - 1.96\,\sigma_R \le R \le E(R) + 1.96\,\sigma_R\,] = 0.95$
• Decision rule: do not reject the null hypothesis of randomness with 95% confidence if R, the number of runs, lies in the preceding confidence interval; reject otherwise (a sketch follows below).
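A minimal implementation of the runs test as defined above, using the slide's own 30-observation sequence as the example:

```python
import numpy as np

def runs_test(resid):
    """Runs test with the normal approximation (valid for N1 > 10, N2 > 10)."""
    signs = np.sign(np.asarray(resid, float))
    signs = signs[signs != 0]                    # drop exact zeros, if any
    n1 = np.sum(signs > 0)                       # number of + residuals
    n2 = np.sum(signs < 0)                       # number of - residuals
    n = n1 + n2
    r = 1 + np.sum(signs[1:] != signs[:-1])      # number of runs
    e_r = 2 * n1 * n2 / n + 1                    # E(R)
    var_r = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1))
    lo, hi = e_r - 1.96 * var_r**0.5, e_r + 1.96 * var_r**0.5
    reject = not (lo <= r <= hi)                 # reject randomness at ~5% level
    return r, e_r, (lo, hi), reject

# The slide's own sequence: 6 minuses, 13 pluses, 11 minuses -> R = 3
u = np.array([-1.0] * 6 + [1.0] * 13 + [-1.0] * 11)
print(runs_test(u))   # R = 3 lies far below E(R): randomness is rejected
```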

• The Durbin-Watson test:
$d = \dfrac{\sum_{t=2}^{n}(\hat u_t - \hat u_{t-1})^2}{\sum_{t=1}^{n}\hat u_t^2}$
• It is simply the ratio of the sum of squared differences in successive residuals to the RSS.
• The number of observations in the numerator is $n - 1$, as one observation is lost in taking successive differences.

• A great advantage of the Durbin-Watson test is that it is based on the estimated residuals. It rests on the following assumptions:
1. The regression model includes an intercept term.
2. The explanatory variables are nonstochastic, or fixed in repeated sampling.
3. The disturbances are generated by the first-order autoregressive scheme.

4. The error term is assumed to be normally distributed.
5. The regression model does not include lagged values of the dependent variable among the explanatory variables.
6. There are no missing values in the data.
• Durbin and Watson derived a lower bound $d_L$ and an upper bound $d_U$ such that if the computed d lies outside these critical values, a decision can be made regarding the presence of positive or negative serial correlation.

$d = \dfrac{\sum \hat u_t^2 + \sum \hat u_{t-1}^2 - 2\sum \hat u_t \hat u_{t-1}}{\sum \hat u_t^2} \approx 2(1 - \hat\rho), \qquad \text{where } \hat\rho = \dfrac{\sum \hat u_t \hat u_{t-1}}{\sum \hat u_t^2}$
• But since $-1 \le \rho \le 1$, this implies that $0 \le d \le 4$ (a computational sketch follows below).
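A minimal sketch of the d statistic (statsmodels.stats.stattools.durbin_watson computes the same quantity), illustrated on fabricated AR(1) residuals with ρ = 0.8:

```python
import numpy as np

def durbin_watson_d(resid):
    """d = sum_{t=2..n} (u_t - u_{t-1})^2 / sum_{t=1..n} u_t^2."""
    resid = np.asarray(resid, float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Illustration with fabricated AR(1) residuals (rho = 0.8):
rng = np.random.default_rng(3)
u = np.zeros(500)
for t in range(1, 500):
    u[t] = 0.8 * u[t - 1] + rng.normal()

d = durbin_watson_d(u)
print(round(d, 3), round(1 - d / 2, 3))   # d well below 2; implied rho_hat near 0.8
```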

• If the statistic lies near the value 2, there is no serial correlation.
• But if the statistic lies in the vicinity of 0, there is positive serial correlation. The closer d is to zero, the greater the evidence of positive serial correlation.
• If it lies in the vicinity of 4, there is evidence of negative serial correlation.

• If it lies between $d_L$ and $d_U$, or between $4 - d_U$ and $4 - d_L$, we are in the zone of indecision.
• The mechanics of the Durbin-Watson test are as follows:
• Run the OLS regression and obtain the residuals.
• Compute d.
• For the given sample size and given number of explanatory variables, find the critical $d_L$ and $d_U$.
• Follow the decision rule.

• Use the modified d test if d lies in the zone of indecision. Given the level of significance α:
1. $H_0: \rho = 0$ versus $H_1: \rho > 0$: reject $H_0$ at level α if $d < d_U$. That is, there is statistically significant evidence of positive autocorrelation.
2. $H_0: \rho = 0$ versus $H_1: \rho < 0$: reject $H_0$ at level α if $4 - d < d_U$. That is, there is statistically significant evidence of negative autocorrelation.
3. $H_0: \rho = 0$ versus $H_1: \rho \neq 0$: reject $H_0$ at level 2α if $d < d_U$ or $4 - d < d_U$. That is, there is statistically significant evidence of either positive or negative autocorrelation.
(A sketch of the standard decision rule is given below.)
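The standard decision rule can be collected into a small helper; the dL and dU values in the example call are placeholders and must be read from the Durbin-Watson tables for the actual sample size and number of regressors:

```python
def dw_decision(d, dl, du):
    """Durbin-Watson decision rule given the tabulated bounds dL and dU."""
    if d < dl:
        return "reject H0: evidence of positive autocorrelation"
    if d > 4 - dl:
        return "reject H0: evidence of negative autocorrelation"
    if du <= d <= 4 - du:
        return "do not reject H0: no first-order autocorrelation detected"
    return "zone of indecision: consider the modified d test"

# dl and du below are placeholders; read the real bounds from the DW tables
# for the actual sample size and number of explanatory variables.
print(dw_decision(0.95, dl=1.20, du=1.50))
```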

• The Breusch-Godfrey test:
• The BG test, also known as the LM test, is a general test for autocorrelation in the sense that it allows for:
1. nonstochastic regressors, such as the lagged values of the regressand;
2. higher-order autoregressive schemes, such as AR(1), AR(2), etc.;
3. simple or higher-order moving averages of white noise error terms.


• Consider the following model:
$Y_t = \beta_1 + \beta_2 X_t + u_t$
$u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \cdots + \rho_p u_{t-p} + \varepsilon_t$
$H_0: \rho_1 = \rho_2 = \cdots = \rho_p = 0$
• Estimate the regression using OLS and obtain the residuals $\hat u_t$.
• Run the auxiliary regression of $\hat u_t$ on $X_t$ and $\hat u_{t-1}, \ldots, \hat u_{t-p}$, and obtain its R-square.

• If the sample size is large, Breusch and Godfrey have shown that $(n - p)R^2$ follows a chi-square distribution with p degrees of freedom.
• If $(n - p)R^2$ exceeds the critical value at the chosen level of significance, we reject the null hypothesis, in which case at least one rho is statistically different from zero (see the sketch below).
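A sketch of the whole procedure for a bivariate model, with p = 2 lags as an illustrative choice (statsmodels offers a ready-made version as statsmodels.stats.diagnostic.acorr_breusch_godfrey):

```python
import numpy as np
from scipy.stats import chi2

def breusch_godfrey(y, x, p):
    """Return (n - p) * R^2 from the auxiliary regression of the BG test."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]      # step 1: OLS residuals

    # Step 2: regress u_t on X_t and u_{t-1}, ..., u_{t-p}
    U = np.column_stack([u[p - j - 1 : n - j - 1] for j in range(p)])
    Z = np.column_stack([X[p:], U])
    uu = u[p:]
    fitted = Z @ np.linalg.lstsq(Z, uu, rcond=None)[0]
    r2 = 1 - np.sum((uu - fitted) ** 2) / np.sum((uu - uu.mean()) ** 2)
    return (n - p) * r2                # ~ chi-square with p df under H0

# Illustration on fabricated data with AR(1) errors (rho = 0.6):
rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

p = 2
lm = breusch_godfrey(y, x, p)
print(lm > chi2.ppf(0.95, df=p))       # True: reject H0 of no autocorrelation
```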

• Points to note:
• The regressors included in the regression model may contain lagged values of the regressand Y; in the DW test, this is not allowed.
• The BG test is applicable even if the disturbances follow a pth-order moving average (MA) process, that is, if $u_t$ is generated as
$u_t = \varepsilon_t + \lambda_1 \varepsilon_{t-1} + \lambda_2 \varepsilon_{t-2} + \cdots + \lambda_p \varepsilon_{t-p}$
• A drawback of the BG test is that the value of p, the length of the lag, cannot be specified a priori.

• Model misspecification vs. pure autocorrelation:
• It is important to find out whether autocorrelation is pure autocorrelation and not the result of misspecification of the model.
• Suppose that the Durbin-Watson test of a given regression model (wage-productivity) yields a value of 0.1229. This indicates positive autocorrelation.

• However, could this correlation have arisen because the model was not correctly specified? Time series models do exhibit trends, so add a trend variable to the equation.

• The method of GLS:
$Y_t = \beta_1 + \beta_2 X_t + u_t$
$u_t = \rho u_{t-1} + \varepsilon_t, \qquad -1 < \rho < 1$
• There are two cases: (1) ρ is known, and (2) ρ is not known.

Yt 1   1   2 X t 1  u t 1  Multiplying the second equation by  gives Yt 1   1   2 X t 1  u t 1  Subtracting (3) from (1) gives Yt  Yt 1   1 (1  p )   2 ( X t  X t 1 )    t  (u t  u t 1 ) Topic Nine Serial Correlation . it should hold at time t-1. i. Detecting Autocorrelation  When  is known  If the regression holds at time t.5.e.

• The equation can be written as
$Y_t^* = \beta_1^* + \beta_2^* X_t^* + \varepsilon_t$
• The error term now satisfies all the OLS assumptions.
• Thus we can apply OLS to the transformed variables Y* and X* and obtain estimators with all the optimum properties, namely BLUE.
• In effect, running this regression is the same as using GLS (see the sketch below).
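A minimal sketch of this quasi-difference transformation, assuming ρ is known; dropping the first observation follows the Cochrane-Orcutt convention (the Prais-Winsten variant transforms and retains it):

```python
import numpy as np

def gls_known_rho(y, x, rho):
    """OLS on the generalized (quasi-) differences; the first observation is dropped."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    y_star = y[1:] - rho * y[:-1]        # Y*_t = Y_t - rho * Y_{t-1}
    x_star = x[1:] - rho * x[:-1]        # X*_t = X_t - rho * X_{t-1}
    # The transformed intercept column equals (1 - rho), so the fitted
    # coefficients are beta1 and beta2 of the original level-form model.
    Z = np.column_stack([np.full(len(y_star), 1.0 - rho), x_star])
    return np.linalg.lstsq(Z, y_star, rcond=None)[0]

# Illustration on fabricated data with AR(1) errors, rho = 0.6 assumed known:
rng = np.random.default_rng(5)
n, rho = 500, 0.6
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

print(gls_known_rho(y, x, rho))          # close to [1.0, 2.0]
```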

• When ρ is unknown, there are many ways to estimate it.
• Assume first that ρ = +1: the generalized difference equation then reduces to the first difference equation
$Y_t - Y_{t-1} = \beta_2 (X_t - X_{t-1}) + (u_t - u_{t-1})$, i.e. $\Delta Y_t = \beta_2 \Delta X_t + \varepsilon_t$

• The first difference transformation may be appropriate if the coefficient of autocorrelation is very high, say in excess of 0.8, or the Durbin-Watson d is quite low.
• Maddala has proposed this rough rule of thumb: use the first difference form whenever $d < R^2$.

• There are many interesting features of the first difference equation:
• There is no intercept in it, so you have to use the regression-through-the-origin routine (see the sketch below).
• If, however, one includes an intercept term by mistake, then the original model has a trend in it. Thus, by including the intercept, one is testing for the presence of a trend in the equation.
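A minimal sketch of the regression-through-the-origin estimator on first differences:

```python
import numpy as np

def first_difference_slope(y, x):
    """OLS through the origin on first differences: no intercept term."""
    dy = np.diff(np.asarray(y, float))   # delta Y_t
    dx = np.diff(np.asarray(x, float))   # delta X_t
    return np.sum(dx * dy) / np.sum(dx ** 2)

# Usage with any (y, x) series: beta2_hat = first_difference_slope(y, x)
```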

• Another interesting feature relates to the stationarity property.
• When ρ = 1, the error term $u_t$ is nonstationary, for its variances and covariances become infinite.
• When ρ = 1, the first-differenced $u_t$ becomes stationary, as it is equal to the white noise error term $\varepsilon_t$.

• Estimating ρ based on the Durbin-Watson d statistic:
• From the Durbin-Watson statistic we know that $\hat\rho \approx 1 - d/2$.
• In reasonably large samples one can obtain rho from this relation and use it to transform the data as shown in the GLS procedure.


• Estimating ρ based on the error terms: since $u_t = \rho u_{t-1} + \varepsilon_t$, estimate the regression
$\hat u_t = \hat\rho\,\hat u_{t-1} + v_t$
using the OLS residuals.

• Iterative procedures: we can also estimate rho by successive approximation, starting with some initial value of rho:
• the Cochrane-Orcutt iterative procedure,
• the Cochrane-Orcutt two-step procedure,
• Durbin's two-step procedure, and
• the Hildreth-Lu scanning or search procedure.
• The most popular one is the Cochrane-Orcutt iterative procedure (a sketch follows below).
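A sketch of the Cochrane-Orcutt iteration for a bivariate model; the convergence tolerance and the OLS starting value (ρ = 0) are implementation choices:

```python
import numpy as np

def cochrane_orcutt(y, x, tol=1e-6, max_iter=100):
    """Alternate between quasi-differenced OLS and re-estimating rho from residuals."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    rho = 0.0                                    # start from plain OLS
    for _ in range(max_iter):
        ys = y[1:] - rho * y[:-1]                # quasi-difference the data
        xs = x[1:] - rho * x[:-1]
        Z = np.column_stack([np.full(n - 1, 1.0 - rho), xs])
        b1, b2 = np.linalg.lstsq(Z, ys, rcond=None)[0]
        u = y - b1 - b2 * x                      # level-form residuals
        rho_new = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)
        if abs(rho_new - rho) < tol:             # stop once rho settles
            break
        rho = rho_new
    return b1, b2, rho_new

# Usage with any (y, x) series: b1, b2, rho_hat = cochrane_orcutt(y, x)
```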