
Autoregressive models

Another useful model is the autoregressive model.

Frequently, we find that the values of a financial time series at particular points in time are highly correlated with the values that precede and succeed them.

Models with lagged variables

An autoregressive model creates new predictor variables by using the dependent variable lagged one or more periods:

$$y_t = f(y_{t-1}, y_{t-2}, \ldots, y_{t-p}, e_t)$$

The dependent variable is thus a function of its own values in previous periods.

The most frequently encountered form of the equation is linear:

$$y_t = b_0 + \sum_{i=1}^{p} b_i y_{t-i} + e_t$$

where:
$y_t$ is the value of the dependent variable at time $t$,
$y_{t-i}$ $(i = 1, 2, \ldots, p)$ are the values of the dependent variable at times $t-i$,
$b_0, b_i$ $(i = 1, \ldots, p)$ are the regression coefficients,
$p$ is the order of the autoregression,
$e_t$ is the disturbance term.
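
To make the lag construction concrete, here is a minimal numpy sketch (the helper name make_lagged_design is ours, and the short series is just the first few values of Example 1 below):

```python
import numpy as np

def make_lagged_design(y, p):
    """Build the OLS design matrix for an AR(p) model.

    Returns (X, target): X has a constant column followed by the
    series lagged 1, ..., p periods; target holds y_t for t = p+1, ..., n.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    X = np.column_stack(
        [np.ones(n - p)] + [y[p - i:n - i] for i in range(1, p + 1)]
    )
    return X, y[p:]

# first few values of the Example 1 series, for illustration
X, target = make_lagged_design([1.89, 2.46, 3.23, 3.95, 4.56], p=2)
print(X)       # columns: 1, y_{t-1}, y_{t-2}
print(target)  # y_t for the usable observations
```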
In matrix notation, the coefficient vector, the dependent-variable vector and the design matrix are

$$
\mathbf{b} = \begin{bmatrix} b_0 \\ b_1 \\ \vdots \\ b_p \end{bmatrix}, \qquad
\mathbf{y} = \begin{bmatrix} y_{p+1} \\ y_{p+2} \\ \vdots \\ y_n \end{bmatrix}, \qquad
\mathbf{X} = \begin{bmatrix}
1 & y_p & y_{p-1} & \cdots & y_1 \\
1 & y_{p+1} & y_p & \cdots & y_2 \\
\vdots & \vdots & \vdots & & \vdots \\
1 & y_{n-1} & y_{n-2} & \cdots & y_{n-p}
\end{bmatrix}
$$
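
A minimal sketch of the least-squares solution in this matrix form, with X and y built as above from the first five observations of Example 1 (np.linalg.lstsq is the numerically preferred alternative to inverting X'X explicitly):

```python
import numpy as np

# AR(2) design for the first five observations of Example 1
X = np.array([[1.0, 2.46, 1.89],
              [1.0, 3.23, 2.46],
              [1.0, 3.95, 3.23],
              [1.0, 4.56, 3.95],
              [1.0, 5.07, 4.56]])
y = np.array([3.23, 3.95, 4.56, 5.07, 5.62])

# normal equations: b = (X'X)^{-1} X'y
b = np.linalg.inv(X.T @ X) @ (X.T @ y)

# equivalent but better conditioned:
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b, b_lstsq)
```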

A first-order autoregressive model is concerned only with the correlation between consecutive values in a series:

$$y_t = b_0 + b_1 y_{t-1} + e_t$$

A second-order autoregressive model considers the relationship between consecutive values in a series as well as the correlation between values two periods apart:

$$y_t = b_0 + b_1 y_{t-1} + b_2 y_{t-2} + e_t$$
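
If the statsmodels package is available, first- and second-order models can also be fitted directly; this sketch assumes its AutoReg class (the series is the one used in Example 1 below) and should give estimates close to the hand-computed OLS values:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg  # assumes statsmodels is installed

series = np.array([1.89, 2.46, 3.23, 3.95, 4.56, 5.07, 5.62, 6.16,
                   6.26, 6.56, 6.98, 7.36, 7.53, 7.84, 8.09])

ar1 = AutoReg(series, lags=1).fit()  # y_t = b0 + b1*y_{t-1} + e_t
ar2 = AutoReg(series, lags=2).fit()  # y_t = b0 + b1*y_{t-1} + b2*y_{t-2} + e_t

print(ar1.params)  # estimated b0, b1
print(ar2.params)  # estimated b0, b1, b2
print(ar2.bse)     # their standard errors
```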

The selection of an appropriate autoregressive model is
not an easy task.
Once a model is selected and the OLS method is used to obtain estimates of the parameters, the next step is to eliminate those parameters that do not contribute significantly.

$$H_0\colon \beta_p = 0$$
(The highest-order parameter does not contribute to the prediction of $Y_t$.)

$$H_1\colon \beta_p \neq 0$$
(The highest-order parameter is statistically significant.)

$$Z = \frac{b_p}{S(b_p)}$$

Using a significance level $\alpha$, the decision rule is to reject $H_0$ if $Z > Z_{\alpha}$ or $Z < -Z_{\alpha}$,

and not to reject $H_0$ if $-Z_{\alpha} \le Z \le Z_{\alpha}$.

Some helpful two-sided critical values:

Z(0,1) = 1,645

Z(0,05) = 1,960

Z(0,02) = 2,326

Z(0,01) = 2,576

Z(0,001) = 3,291
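
As a sketch, the test statistic and the two-sided critical value can be computed as follows; the numbers plugged in are the Example 1 values for the second-order coefficient reported further below, and scipy's norm.ppf reproduces the table above:

```python
from scipy.stats import norm

def z_test_highest_order(b_p, se_b_p, alpha=0.05):
    """Two-sided Z test of H0: beta_p = 0 against H1: beta_p != 0."""
    z = b_p / se_b_p
    z_crit = norm.ppf(1 - alpha / 2)    # 1.960 for alpha = 0.05, 1.645 for 0.10, ...
    return z, z_crit, abs(z) > z_crit   # True means: reject H0

# Example 1 values for the second-order coefficient (see below)
z, z_crit, reject = z_test_highest_order(0.08338, 0.248577)
print(round(z, 5), round(z_crit, 3), reject)   # ~0.33543, 1.96, False
```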

If the null hypothesis is NOT rejected, we may conclude that the selected model contains too many estimated parameters. The highest-order term would then be deleted and a new autoregressive model would be obtained through least-squares regression. A test of the hypothesis that the new highest-order term is 0 would then be repeated.

This testing and modeling procedure continues until we
reject H0. When this occurs, we know that our highest-order

parameter is significant and we are ready to use this model.
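
A hedged sketch of this testing-and-refitting loop in numpy (the helper names fit_ar and select_ar_order are ours; 1,96 is the α = 0,05 cutoff from the table above):

```python
import numpy as np

def fit_ar(y, p):
    """OLS fit of an AR(p); returns coefficient estimates and their standard errors."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    X = np.column_stack(
        [np.ones(n - p)] + [y[p - i:n - i] for i in range(1, p + 1)]
    )
    target = y[p:]
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ target
    resid = target - X @ b
    s2 = resid @ resid / (len(target) - p - 1)  # residual variance
    return b, np.sqrt(np.diag(s2 * XtX_inv))    # b and S(b)

def select_ar_order(y, p_max, z_crit=1.96):
    """Drop the highest-order lag until its coefficient is significant."""
    for p in range(p_max, 0, -1):
        b, se = fit_ar(y, p)
        if abs(b[-1] / se[-1]) > z_crit:  # Z test on the highest-order parameter
            return p, b, se               # H0 rejected: keep this model
    return 0, None, None                  # no autoregressive term is significant
```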

Example 1

$$y_t = b_0 + b_1 y_{t-1} + b_2 y_{t-2} + e_t$$

$$\mathbf{b} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$

$$
\mathbf{b} = \begin{bmatrix} b_0 \\ b_1 \\ b_2 \end{bmatrix}, \qquad
\mathbf{y} = \begin{bmatrix}
3,23 \\ 3,95 \\ 4,56 \\ 5,07 \\ 5,62 \\ 6,16 \\ 6,26 \\ 6,56 \\ 6,98 \\ 7,36 \\ 7,53 \\ 7,84 \\ 8,09
\end{bmatrix}, \qquad
\mathbf{X} = \begin{bmatrix}
1 & 2,46 & 1,89 \\
1 & 3,23 & 2,46 \\
1 & 3,95 & 3,23 \\
1 & 4,56 & 3,95 \\
1 & 5,07 & 4,56 \\
1 & 5,62 & 5,07 \\
1 & 6,16 & 5,62 \\
1 & 6,26 & 6,16 \\
1 & 6,56 & 6,26 \\
1 & 6,98 & 6,56 \\
1 & 7,36 & 6,98 \\
1 & 7,53 & 7,36 \\
1 & 7,84 & 7,53
\end{bmatrix}
$$

$$
\mathbf{X}^T\mathbf{X} = \begin{bmatrix} 13 & 73,58 & 67,63 \\ 73,58 & 451,3932 & 420,8423 \\ 67,63 & 420,8423 & 393,5 \end{bmatrix}, \qquad
\mathbf{X}^T\mathbf{y} = \begin{bmatrix} 79,21 \\ 479,6185 \\ 446,1821 \end{bmatrix}
$$

$$
(\mathbf{X}^T\mathbf{X})^{-1} = \begin{bmatrix} 5,523661 & -5,28007 & 4,697623 \\ -5,28007 & 5,811533 & -5,30788 \\ 4,697623 & -5,30788 & 4,87187 \end{bmatrix}, \qquad
\mathbf{b} = \begin{bmatrix} 1,103369 \\ 0,804936 \\ 0,08338 \end{bmatrix}
$$

The estimated model is

$$y_t = 1,1 + 0,8\,y_{t-1} + 0,08\,y_{t-2} + e_t$$

with standard errors $S(b_0) = 0,26$, $S(b_1) = 0,27$ and $S(b_2) = 0,25$.
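
For verification, a short numpy sketch that reproduces these estimates from the series tabulated below (small rounding differences aside):

```python
import numpy as np

series = np.array([1.89, 2.46, 3.23, 3.95, 4.56, 5.07, 5.62, 6.16,
                   6.26, 6.56, 6.98, 7.36, 7.53, 7.84, 8.09])

y = series[2:]                          # y_t for t = 3, ..., 15
X = np.column_stack([np.ones(len(y)),   # constant
                     series[1:-1],      # y_{t-1}
                     series[:-2]])      # y_{t-2}

b = np.linalg.inv(X.T @ X) @ X.T @ y
print(b)  # roughly [1.103, 0.805, 0.083]
```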

t   yt   yt-1   yt-2   ŷt   yt - ŷt   (yt - ŷt)²   yt - ȳ   (yt - ȳ)²
1 1,89 - - - - - - -
2 2,46 1,89 - - - - - -
3 3,23 2,46 1,89 3,2411 -0,0111 0,000123 -2,34067 5,47872
4 3,95 3,23 2,46 3,908428 0,041572 0,001728 -1,62067 2,62656
5 4,56 3,95 3,23 4,552185 0,007815 6,11E-05 -1,01067 1,021447
6 5,07 4,56 3,95 5,103229 -0,03323 0,001104 -0,50067 0,250667
7 5,62 5,07 4,56 5,564609 0,055391 0,003068 0,049333 0,002434
8 6,16 5,62 5,07 6,049848 0,110152 0,012134 0,589333 0,347314
9 6,26 6,16 5,62 6,530372 -0,27037 0,073101 0,689333 0,47518
10 6,56 6,26 6,16 6,655891 -0,09589 0,009195 0,989333 0,97878
11 6,98 6,56 6,26 6,90571 0,07429 0,005519 1,409333 1,98622
12 7,36 6,98 6,56 7,268797 0,091203 0,008318 1,789333 3,201714
13 7,53 7,36 6,98 7,609693 -0,07969 0,006351 1,959333 3,838987
14 7,84 7,53 7,36 7,778216 0,061784 0,003817 2,269333 5,149874
15 8,09 7,84 7,53 8,041921 0,048079 0,002312 2,519333 6,34704
Sum: Σ(yt - ŷt)² = 0,126831,  Σ(yt - ȳ)² = 31,70494


Goodness of fit

Variance:
S² = 0,012683

Standard error of the estimate:
S = 0,112619

Variance-covariance matrix of the coefficients:

$$
D^2(\mathbf{b}) = \begin{bmatrix} 0,070057 & -0,06697 & 0,059581 \\ -0,06697 & 0,073708 & -0,06732 \\ 0,059581 & -0,06732 & 0,061791 \end{bmatrix}
$$

Standard errors of the coefficients (alongside the estimates):

D(b0) = 0,264684        b0 = 1,103369
D(b1) = 0,271493        b1 = 0,804936
D(b2) = 0,248577        b2 = 0,08338

Coefficient of indetermination: 1 - R² = 0,004

Coefficient of determination: R² = 0,996
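
The goodness-of-fit quantities follow from the residuals with the standard OLS formulas; a self-contained sketch that should reproduce the values above up to rounding:

```python
import numpy as np

series = np.array([1.89, 2.46, 3.23, 3.95, 4.56, 5.07, 5.62, 6.16,
                   6.26, 6.56, 6.98, 7.36, 7.53, 7.84, 8.09])
y = series[2:]
X = np.column_stack([np.ones(len(y)), series[1:-1], series[:-2]])
b = np.linalg.inv(X.T @ X) @ X.T @ y

resid = y - X @ b
sse = resid @ resid                  # sum of squared residuals, ~0.1268
s2 = sse / (len(y) - X.shape[1])     # residual variance S^2, ~0.01268
s = np.sqrt(s2)                      # standard error of the estimate, ~0.1126

cov_b = s2 * np.linalg.inv(X.T @ X)  # variance-covariance matrix D^2(b)
se_b = np.sqrt(np.diag(cov_b))       # D(b0), D(b1), D(b2)

sst = ((y - y.mean()) ** 2).sum()    # total sum of squares, ~31.70
r2 = 1 - sse / sst                   # determination coefficient, ~0.996
print(s2, s, se_b, r2)
```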

Calculations

Z(b2) = 0,33543 < Z(0,05) = 1,96

The second-order parameter does not contribute to the prediction of Y.

We therefore have to estimate the parameters of the first-order autoregressive model:

$$y_t = b_0 + b_1 y_{t-1} + e_t$$

and then check whether β1 is statistically significant.

t    yt     yt-1
1    1,89   -
2    2,46   1,89
3    3,23   2,46
4    3,95   3,23
5    4,56   3,95
6    5,07   4,56
7    5,62   5,07
8    6,16   5,62
9    6,26   6,16
10   6,56   6,26
11   6,98   6,56
12   7,36   6,98
13   7,53   7,36
14   7,84   7,53
15   8,09   7,84

REGLINP (Excel LINEST) output for the first-order model:
b1 = 0,914                 b0 = 0,904
S(b1) = 0,0173             S(b0) = 0,09850
R² = 99,573%               S = 0,120
F = 2800,6                 df = 12
SS(regression) = 40,241    SS(residual) = 0,172

Z(b1) = 52,921 > Z(0,05) = 1,96

The first-order parameter contributes to the prediction of Y.

Example 2

Y - annual income taxes


Year Yt Yt-1 Yt-2 Yt-3
1 55,4 - - -
2 61,5 55,4 - -
3 68,7 61,5 55,4 -
4 87,2 68,7 61,5 55,4
5 90,4 87,2 68,7 61,5
6 86,2 90,4 87,2 68,7
7 94,7 86,2 90,4 87,2
8 103,2 94,7 86,2 90,4
9 119 103,2 94,7 86,2
10 122,4 119 103,2 94,7
11 131,6 122,4 119 103,2
12 157,6 131,6 122,4 119
13 181 157,6 131,6 122,4
14 217,8 181 157,6 131,6
15 244,1 217,8 181 157,6

Third-order autoregressive model fitted to the income-tax series above (REGLINP output):

b3 = 0,2903       b2 = -0,1987      b1 = 1,1541       b0 = -11,0438
S(b3) = 0,4485    S(b2) = 0,5982    S(b1) = 0,3569    S(b0) = 10,7919
R² = 0,9753       S = 9,7932

Z(b3) = 0,647227 < Z(0,05) = 1,96

The third-order parameter does not contribute to the prediction of Y.

Second-order autoregressive model fitted to the same series (REGLINP output):

b2 = 0,0220       b1 = 1,1616       b0 = -7,1550
S(b2) = 0,4000    S(b1) = 0,3254    S(b0) = 8,3927
R² = 0,9767       S = 9,0609

Z(b2) = 0,054917 < Z(0,05) = 1,96

The second-order parameter does not contribute to the prediction of Y.

First-order autoregressive model fitted to the same series (REGLINP output):

b1 = 1,1729       b0 = -5,9924
S(b1) = 0,0494    S(b0) = 5,9894
R² = 0,9792       S = 8,3118

Z(b1) = 23,74814 > Z(0,05) = 1,96

The first-order parameter does contribute to the prediction of Y.
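
The whole backward-elimination calculation for Example 2 can be sketched in a few lines of numpy (the tax series is taken from the table above; the Z values should match the ones quoted, up to rounding):

```python
import numpy as np

taxes = np.array([55.4, 61.5, 68.7, 87.2, 90.4, 86.2, 94.7, 103.2,
                  119.0, 122.4, 131.6, 157.6, 181.0, 217.8, 244.1])

def fit_ar(y, p):
    """OLS fit of an AR(p); returns coefficient estimates and standard errors."""
    n = len(y)
    X = np.column_stack(
        [np.ones(n - p)] + [y[p - i:n - i] for i in range(1, p + 1)]
    )
    target = y[p:]
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ target
    resid = target - X @ b
    s2 = resid @ resid / (len(target) - p - 1)
    return b, np.sqrt(np.diag(s2 * XtX_inv))

for p in (3, 2, 1):
    b, se = fit_ar(taxes, p)
    z = b[-1] / se[-1]               # Z statistic for the highest-order parameter
    print(p, round(z, 3), abs(z) > 1.96)
# expected: p = 3 and p = 2 insignificant (~0.65 and ~0.05), p = 1 significant (~23.7)
```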

Autoregressive Modeling
Used for forecasting.
Takes advantage of autocorrelation:
1st order: correlation between consecutive values
2nd order: correlation between values 2 periods apart

Autoregressive model of order p (with random error $e_i$):

$$Y_i = b_0 + b_1 Y_{i-1} + b_2 Y_{i-2} + \cdots + b_p Y_{i-p} + e_i$$
Autoregressive Modeling Steps
1. Choose p.
2. Form a series of lagged predictor variables Y_{i-1}, Y_{i-2}, ..., Y_{i-p}.
3. Use Excel to run the regression model using all p lagged variables.
4. Test the significance of β_p:
   If the null hypothesis is rejected, this model is selected.
   If the null hypothesis is not rejected, decrease p by 1 and repeat the calculations.

