
Econometrics I

Nicolás Corona Juárez, Ph.D.


11.11.2020
Autocorrelation

We had previously established this assumption: Cov(u_i, u_j | x_i, x_j) = E(u_i u_j) = 0 for i ≠ j

If this assumption does not hold, it follows then that:


E(u_i u_j) ≠ 0 for i ≠ j

Autocorrelation: Correlation across observations in different time periods.

2
Autocorrelation: Why does it happen?

Inertia: 𝐺𝐷𝑃𝑡 = 𝑓(𝐺𝐷𝑃𝑡−1 )

Misspecification bias: If we exclude one control variable (or several) from the model, the omitted effects end up in the error term and can induce autocorrelation.

Example:
Instead of estimating 𝑌𝑡 = 𝛽1 + 𝛽2 𝑋2𝑡 + 𝛽3 𝑋3𝑡 + 𝛽4 𝑋4𝑡 + 𝑢𝑡

We estimate: 𝑌𝑡 = 𝛽1 + 𝛽2 𝑋2𝑡 + 𝛽3 𝑋3𝑡 + 𝑣𝑡

Where: 𝑣𝑡 = 𝛽4 𝑋4𝑡 + 𝑢𝑡

3
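The omitted-variable story above can be illustrated with a short simulation (a sketch in Python; all parameter values and variable names are illustrative, not from the lecture). When a trending regressor X4 is wrongly left out, it is absorbed into v_t, and the residuals of the short regression become strongly serially correlated:

```python
import random

random.seed(0)
n = 100
beta1, beta2, beta4 = 1.0, 2.0, 0.5

# X2: an ordinary regressor; X4: a smoothly trending variable we will wrongly omit
x2 = [random.gauss(0, 1) for _ in range(n)]
x4 = [float(t) for t in range(n)]
u  = [random.gauss(0, 1) for _ in range(n)]
y  = [beta1 + beta2 * x2[t] + beta4 * x4[t] + u[t] for t in range(n)]

# Short regression of Y on X2 only (simple OLS)
mx, my = sum(x2) / n, sum(y) / n
b2 = sum((x2[t] - mx) * (y[t] - my) for t in range(n)) / sum((xi - mx) ** 2 for xi in x2)
b1 = my - b2 * mx
v  = [y[t] - b1 - b2 * x2[t] for t in range(n)]  # residuals absorb beta4 * X4

# First-order sample autocorrelation of the residuals
rho_hat = sum(v[t] * v[t - 1] for t in range(1, n)) / sum(e ** 2 for e in v)
print(round(rho_hat, 3))  # close to 1: the omitted trend shows up as autocorrelation
```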
Autocorrelation: Why does it happen?

Systematic behavior in the errors.

4
Autocorrelation: Why does it happen?

Functional misspecification bias: Suppose the “true” model is:

Marginal Cost_i = β1 + β2 production_i + β3 production_i² + u_i


But we estimate: Marginal Cost_i = α1 + α2 production_i + v_i

[Figure: the “true” quadratic marginal cost curve and the fitted linear marginal cost curve, with Production on the horizontal axis; the linear fit overestimates marginal cost between points A and B.]
5
Autocorrelation: Why does it happen?

Agricultural markets: The supply of agricultural products depends on the price of the previous year:

𝑆𝑢𝑝𝑝𝑙𝑦𝑡 = 𝛽1 + 𝛽2 𝑃𝑡−1 + 𝛽3 𝑋3𝑡 + 𝑢𝑡

Lags:
𝐶𝑜𝑛𝑠𝑢𝑚𝑒𝑡 = 𝛽1 + 𝛽2 𝑖𝑛𝑐𝑜𝑚𝑒𝑡 + 𝛽3 𝑐𝑜𝑛𝑠𝑢𝑚𝑒𝑡−1 + 𝑢𝑡

6
OLS estimation with Autocorrelation: Consequences?

Consider the model:

𝑌𝑡 = 𝛽1 + 𝛽2 𝑋𝑡 + 𝑢𝑡
In order to know what happens if we use OLS and there is autocorrelation in our model, let us have
a look at the structure of 𝑢𝑡 :
u_t = ρu_{t−1} + ε_t

where ρ is the autocovariance coefficient, with −1 < ρ < 1, and the innovation ε_t satisfies:

E(ε_t) = 0
var(ε_t) = σ_ε²
cov(ε_t, ε_{t+s}) = 0 for s ≠ 0

An error term with these characteristics is called “white noise”.

7
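These definitions are easy to check with a minimal Python sketch (ρ = 0.7 and the sample size are arbitrary choices). The innovations ε_t are white noise, so their lag-1 sample autocorrelation is near zero, while the AR(1) errors u_t built from them inherit autocorrelation near ρ:

```python
import random

random.seed(1)
n = 50_000
rho = 0.7

# White-noise innovations: zero mean, constant variance, no serial correlation
eps = [random.gauss(0, 1) for _ in range(n)]

# Build the AR(1) error process u_t = rho * u_{t-1} + eps_t
u = [0.0] * n
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]

mean_eps = sum(eps) / n
lag1_eps = sum(eps[t] * eps[t - 1] for t in range(1, n)) / sum(e * e for e in eps)
lag1_u   = sum(u[t] * u[t - 1] for t in range(1, n)) / sum(e * e for e in u)

# eps looks like white noise (lag-1 correlation ~ 0); u is autocorrelated (~ 0.7)
print(round(mean_eps, 3), round(lag1_eps, 3), round(lag1_u, 3))
```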
OLS estimation with Autocorrelation: Consequences?

We call this model:

𝑢𝑡 = 𝜌𝑢𝑡−1 + 𝜀𝑡

a first-order Markov autoregressive model, or simply AR(1).

If:
𝑢𝑡 = 𝜌1 𝑢𝑡−1 + 𝜌2 𝑢𝑡−2 + 𝜀𝑡

we have a second-order autoregressive model, AR(2).

8
OLS estimation with Autocorrelation: Consequences?

With the AR(1) process u_t = ρu_{t−1} + ε_t it can be shown that:

var(u_t) = E(u_t²) = σ_ε² / (1 − ρ²)

cov(u_t, u_{t+s}) = E(u_t u_{t+s}) = ρ^s · σ_ε² / (1 − ρ²)

(covariance between the error terms of s distant periods; it measures the strength of the linear relationship between the two)

corr(u_t, u_{t+s}) = ρ^s

(correlation between the error terms of s distant periods; a standardized measure)
9
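The three results above can be verified numerically with a simulation (a sketch; σ_ε = 1 and ρ = 0.6 are arbitrary choices, and the sample autocorrelations should come out near ρ, ρ², ρ³):

```python
import random

random.seed(2)
n, rho, sigma_eps = 200_000, 0.6, 1.0

# Simulate a long AR(1) error series u_t = rho * u_{t-1} + eps_t
u, prev = [], 0.0
for _ in range(n):
    prev = rho * prev + random.gauss(0, sigma_eps)
    u.append(prev)

var_u = sum(x * x for x in u) / n              # sample mean is ~0 by construction
theory_var = sigma_eps ** 2 / (1 - rho ** 2)   # sigma_eps^2 / (1 - rho^2) = 1.5625

def corr_at_lag(s):
    # sample correlation between u_t and u_{t+s}
    return sum(u[t] * u[t + s] for t in range(n - s)) / sum(x * x for x in u)

print(round(var_u, 3), round(theory_var, 4))
print([round(corr_at_lag(s), 3) for s in (1, 2, 3)])  # approx rho, rho^2, rho^3
```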
OLS estimation with Autocorrelation: Consequences?

Note that:

If the units of the variable X are centimeters and the units of the variable Y are grams, then the units of the covariance are cm × g.

If we change the scale of the variables, then the covariance changes. This makes it hard to
interpret the value of the covariance.

The correlation coefficient is a standardized measure which can help us here.

10
OLS estimation with Autocorrelation: Consequences?

If, in the AR(1) process u_t = ρu_{t−1} + ε_t, we have |ρ| < 1, we say the process is stationary.

Stationary process: the mean, the variance and the covariances of u_t do not change over time.

11
OLS estimation with Autocorrelation: Consequences?

We know that if we have a model like Y_t = β1 + β2X_t + u_t, under OLS we get:

β̂2 = Σ x_t y_t / Σ x_t²   and its variance is   var(β̂2) = σ² / Σ x_t²

(lower-case x_t and y_t denote deviations from the sample means)

With AR(1) errors, the variance of the estimated coefficient becomes:

var(β̂2)_AR1 = (σ² / Σ x_t²) · [1 + 2ρ (Σ x_t x_{t−1} / Σ x_t²) + 2ρ² (Σ x_t x_{t−2} / Σ x_t²) + ⋯ + 2ρ^{n−1} (x_1 x_n / Σ x_t²)]

12
OLS estimation with Autocorrelation: Consequences?

If we use

β̂2 = Σ x_t y_t / Σ x_t²

with the variance

var(β̂2)_AR1 = (σ² / Σ x_t²) · [1 + 2ρ (Σ x_t x_{t−1} / Σ x_t²) + 2ρ² (Σ x_t x_{t−2} / Σ x_t²) + ⋯ + 2ρ^{n−1} (x_1 x_n / Σ x_t²)]

then β̂2 is still linear and unbiased, but not efficient (it no longer has minimum variance).

13
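The consequences above can be seen in a small Monte Carlo experiment (a sketch; the AR(1) coefficients of 0.8 for both x_t and u_t, and all other values, are illustrative). Across many replications, β̂2 stays centered on the true value (unbiased), but the usual σ²/Σx_t² formula badly understates its actual sampling variance:

```python
import random

random.seed(6)
reps, n, rho_x, rho_u, beta2 = 2000, 100, 0.8, 0.8, 1.0

def ar1(n, rho):
    # AR(1) series with standard-normal white-noise innovations
    out, prev = [], 0.0
    for _ in range(n):
        prev = rho * prev + random.gauss(0, 1)
        out.append(prev)
    return out

slopes, naive_vars = [], []
for _ in range(reps):
    x = ar1(n, rho_x)
    u = ar1(n, rho_u)                       # autocorrelated errors
    y = [beta2 * x[t] + u[t] for t in range(n)]
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b2 = sum((x[t] - mx) * (y[t] - my) for t in range(n)) / sxx
    slopes.append(b2)
    # naive textbook variance estimate sigma^2 / sum x_t^2 (deviation form)
    s2 = sum((y[t] - my - b2 * (x[t] - mx)) ** 2 for t in range(n)) / (n - 2)
    naive_vars.append(s2 / sxx)

mean_b2 = sum(slopes) / reps
true_var = sum((b - mean_b2) ** 2 for b in slopes) / reps
naive_var = sum(naive_vars) / reps
# mean of b2 is ~1 (unbiased); the variance ratio is well above 1 (inefficient)
print(round(mean_b2, 3), round(true_var / naive_var, 2))
```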
In the presence of Autocorrelation: Use GLS

We can use GLS to solve this problem.


β̂2_GLS = [ Σ_{t=2}^{n} (x_t − ρx_{t−1})(y_t − ρy_{t−1}) / Σ_{t=2}^{n} (x_t − ρx_{t−1})² ] + C

with the variance:

var(β̂2_GLS) = [ σ² / Σ_{t=2}^{n} (x_t − ρx_{t−1})² ] + D

(C and D are correction factors involving the first observation)

𝛽መ2𝐺𝐿𝑆 is then a BLUE estimator.

14
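The GLS idea can be sketched as a quasi-differencing transform (assuming, for illustration, that ρ is known; in practice it must be estimated, e.g. as in Cochrane-Orcutt, and the constants C and D above are ignored here). Subtracting ρ times the lagged observation leaves white-noise errors, so OLS on the transformed data recovers the parameters and its residuals show no autocorrelation:

```python
import random

random.seed(3)
n, rho, beta1, beta2 = 5000, 0.8, 1.0, 2.0

x = [random.gauss(0, 1) for _ in range(n)]
u, prev = [], 0.0
for _ in range(n):
    prev = rho * prev + random.gauss(0, 1)   # AR(1) errors
    u.append(prev)
y = [beta1 + beta2 * x[t] + u[t] for t in range(n)]

# Quasi-difference: y*_t = y_t - rho*y_{t-1}, x*_t = x_t - rho*x_{t-1}
ys = [y[t] - rho * y[t - 1] for t in range(1, n)]
xs = [x[t] - rho * x[t - 1] for t in range(1, n)]

# OLS on the transformed data; its intercept estimates beta1*(1 - rho)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b2 = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sum((a - mx) ** 2 for a in xs)
b1 = (my - b2 * mx) / (1 - rho)

# Residuals of the transformed regression should be (approximately) white noise
res = [ys[t] - (my - b2 * mx) - b2 * xs[t] for t in range(len(ys))]
lag1 = sum(res[t] * res[t - 1] for t in range(1, len(res))) / sum(e * e for e in res)
print(round(b2, 3), round(b1, 3), round(lag1, 3))  # ~2, ~1, ~0
```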
OLS with Autocorrelation: What happens?

Confidence intervals become wider: this can lead us to a wrong conclusion about the significance of a coefficient. For example, we may conclude that a coefficient is not statistically significant when in reality it is.

15
Detecting Autocorrelation: Durbin Watson Test

The Durbin Watson statistic has the following form:

The Durbin-Watson statistic has the following form:

d = Σ_{t=2}^{n} (û_t − û_{t−1})² / Σ_{t=1}^{n} û_t²

If we expand the square in the numerator, we have:

d = (Σ û_t² + Σ û_{t−1}² − 2 Σ û_t û_{t−1}) / Σ û_t²

Note that Σ û_t² and Σ û_{t−1}² differ by only one observation, so they are almost equal. Therefore:

d ≈ 2 (1 − Σ û_t û_{t−1} / Σ û_t²)

16
Detecting Autocorrelation: Durbin Watson Test

Given that Σ û_t² and Σ û_{t−1}² differ by only one observation:

d ≈ 2 (1 − Σ û_t û_{t−1} / Σ û_t²)

Define ρ̂ = Σ û_t û_{t−1} / Σ û_t² as the first-order sample autocorrelation coefficient.

Then it follows that d ≈ 2(1 − ρ̂).

Given that −1 ≤ ρ̂ ≤ 1, then 0 ≤ d ≤ 4.

These are the limits or boundaries of 𝑑.


17
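The approximation d ≈ 2(1 − ρ̂) is easy to check numerically (a sketch with simulated AR(1) residuals; ρ = 0.5 and the sample size are arbitrary):

```python
import random

def durbin_watson(resid):
    """Durbin-Watson d = sum((e_t - e_{t-1})^2) / sum(e_t^2)."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e * e for e in resid)
    return num / den

random.seed(4)
rho, n = 0.5, 20_000
e, prev = [], 0.0
for _ in range(n):
    prev = rho * prev + random.gauss(0, 1)   # AR(1) "residuals"
    e.append(prev)

# First-order sample autocorrelation coefficient rho-hat
rho_hat = sum(e[t] * e[t - 1] for t in range(1, n)) / sum(v * v for v in e)
d = durbin_watson(e)
print(round(d, 3), round(2 * (1 - rho_hat), 3))  # nearly identical, around 1.0
```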
Detecting Autocorrelation: Durbin Watson Test

Observe that:

If there is no serial autocorrelation, then: 𝜌ො = 0

d ≈ 2(1 − 0) = 2

Practical rule: If 𝑑 = 2 there is no autocorrelation of first order, positive or negative.

18
Detecting Autocorrelation: Durbin Watson Test

Observe that :

If there is perfect positive serial autocorrelation, then: 𝜌ො = +1

d ≈ 2(1 − 1) = 0

Practical rule: The closer d is to 0, the stronger the evidence of positive serial autocorrelation.

19
Detecting Autocorrelation: Durbin Watson Test

Observe that:

If there is perfect negative serial autocorrelation, then: 𝜌ො = −1

d ≈ 2(1 − (−1)) = 4

Practical rule: The closer d is to 4, the stronger the evidence of negative serial autocorrelation.

20
Durbin Watson Test: Steps

1. Estimate the model with OLS and obtain the residuals.


2. Calculate d = Σ_{t=2}^{n} (û_t − û_{t−1})² / Σ_{t=1}^{n} û_t²

3. For a given sample size and a given number of control variables, determine the critical bounds d_L and d_U from the Durbin-Watson tables.
4. Follow the decision rule in the following table.

Null Hypothesis                              Decision        If
No positive autocorrelation                  Reject          0 < d < d_L
No positive autocorrelation                  No decision     d_L ≤ d ≤ d_U
No negative autocorrelation                  Reject          4 − d_L < d < 4
No negative autocorrelation                  No decision     4 − d_U ≤ d ≤ 4 − d_L
No positive or negative autocorrelation      Do not reject   d_U < d < 4 − d_U
21
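The decision table can be written as a small helper function (a sketch; the bounds d_L and d_U must still be looked up in the Durbin-Watson tables for the given sample size and number of regressors — the values 1.2 and 1.5 below are purely illustrative):

```python
def dw_decision(d, dL, dU):
    """Apply the Durbin-Watson decision table for bounds dL < dU."""
    if 0 <= d < dL:
        return "reject H0 (positive autocorrelation)"
    if dL <= d <= dU:
        return "no decision"
    if 4 - dL < d <= 4:
        return "reject H0 (negative autocorrelation)"
    if 4 - dU <= d <= 4 - dL:
        return "no decision"
    return "do not reject H0"

# Illustrative bounds dL = 1.2, dU = 1.5 (real values come from DW tables)
print(dw_decision(0.8, 1.2, 1.5))   # near 0 -> positive autocorrelation
print(dw_decision(2.0, 1.2, 1.5))   # near 2 -> no first-order autocorrelation
print(dw_decision(3.3, 1.2, 1.5))   # near 4 -> negative autocorrelation
```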
Breusch-Godfrey (BG) Test

We have the model:

𝑌𝑡 = 𝛽1 + 𝛽2 𝑋𝑡 + 𝑢𝑡

The errors are of the form:

𝑢𝑡 = 𝜌1 𝑢𝑡−1 + 𝜌2 𝑢𝑡−2 + 𝜌3 𝑢𝑡−3 + ⋯ + 𝜌𝑝 𝑢𝑡−𝑝 + 𝜀𝑡

We want to test:
H0: ρ1 = ρ2 = ⋯ = ρ_p = 0
(There is no autocorrelation)

22
Breusch-Godfrey (BG) Test: Steps

1. Estimate Y_t = β1 + β2X_t + u_t with OLS and obtain the residuals û_t

2. Estimate û_t = α1 + α2X_t + ρ̂1 û_{t−1} + ρ̂2 û_{t−2} + ⋯ + ρ̂_p û_{t−p} + ε_t and obtain its R²


3. If the sample size is large, then (n − p)R² ~ χ²_p

4. If (n − p)R² > χ²_p(critical) at the chosen significance level, reject H0: ρ1 = ρ2 = ⋯ = ρ_p = 0. (There is autocorrelation.)

23
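The four steps can be sketched end-to-end in Python for the case p = 1 (all parameter values are illustrative; a tiny normal-equations OLS routine stands in for a regression package):

```python
import random

def ols(X, y):
    """Least squares via normal equations (X is a list of rows incl. intercept)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for c in range(k):                       # Gaussian elimination, partial pivot
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):           # back substitution
        beta[r] = (b[r] - sum(A[r][j] * beta[j] for j in range(r + 1, k))) / A[r][r]
    return beta

random.seed(5)
n, rho = 500, 0.6
x = [random.gauss(0, 1) for _ in range(n)]
u, prev = [], 0.0
for _ in range(n):
    prev = rho * prev + random.gauss(0, 1)   # AR(1) errors
    u.append(prev)
y = [1.0 + 2.0 * x[t] + u[t] for t in range(n)]

# Step 1: OLS of y on x, keep the residuals
b0, b1 = ols([[1.0, xi] for xi in x], y)
res = [y[t] - b0 - b1 * x[t] for t in range(n)]

# Step 2: auxiliary regression of res_t on x_t and res_{t-1} (p = 1), get R^2
Xa = [[1.0, x[t], res[t - 1]] for t in range(1, n)]
ya = res[1:]
a = ols(Xa, ya)
fit = [sum(c * v for c, v in zip(a, row)) for row in Xa]
ybar = sum(ya) / len(ya)
r2 = 1 - sum((yi - fi) ** 2 for yi, fi in zip(ya, fit)) / sum((yi - ybar) ** 2 for yi in ya)

# Steps 3-4: (n - p) * R^2 ~ chi2(p); the 5% critical value of chi2(1) is 3.841
stat = (n - 1) * r2
print(round(stat, 2), stat > 3.841)  # large statistic -> reject H0, autocorrelation
```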
If there is Autocorrelation: How do we correct for it?

1. Verify whether the model is wrongly specified.

2. Transform the model. As with heteroscedasticity, use GLS.

3. If the sample is large, obtain Newey-West standard errors, which correct the OLS standard errors for autocorrelation.

24
Reference

Chapter 12: Autocorrelation


Gujarati, Damodar N. & Porter, Dawn C., Introductory Econometrics, 5th edition (2010).

25
