Lecture 13
Time Series:
Stationarity, AR(p) & MA(q)
RS – EC2 - Lecture 13
• In time series, {..., y1, y2, y3, ..., yT} are jointly distributed random variables. We want to model the conditional expectation:
E[yt | Ft-1]
where Ft-1 = {yt-1, yt-2, yt-3, ...} is the past history of the series.
$F(z_{t_1}, z_{t_2}, \dots, z_{t_n}) = P(Z_{t_1} \le z_{t_1},\, Z_{t_2} \le z_{t_2}, \dots, Z_{t_n} \le z_{t_n})$
$\mathrm{Var}(Z_t) = \sigma_t^2 = E(Z_t - \mu_t)^2 = \int (z_t - \mu_t)^2 f(z_t)\, dz_t$
$\mathrm{Cov}(Z_{t_1}, Z_{t_2}) = E[(Z_{t_1} - \mu_{t_1})(Z_{t_2} - \mu_{t_2})] = \gamma(t_1, t_2)$
$\rho(t_1, t_2) = \dfrac{\gamma(t_1, t_2)}{\sqrt{\sigma_{t_1}^2\, \sigma_{t_2}^2}}$
provided that $E(Z_t) = \mu < \infty$ and $E(Z_t^2) < \infty$.
Then, $F(z_{t_1}, z_{t_2}) = F(z_{t_1+k}, z_{t_2+k})$
$\mathrm{cov}(z_{t_1}, z_{t_2}) = \mathrm{cov}(z_{t_1+k}, z_{t_2+k})$
$\rho(t_1, t_2) = \rho(t_1+k, t_2+k)$
Let $t_1 = t-k$ and $t_2 = t$; then
$\rho(t_1, t_2) = \rho(t-k, t) = \rho(t, t+k) = \rho_k$
The correlation between any two RVs depends only on the time difference.
1) $y_t = \Phi\, y_{t-1} + \varepsilon_t$
$E[y_t] = 0$ (assuming $\Phi \ne 1$)
$\mathrm{Var}[y_t] = \sigma^2/(1 - \Phi^2)$ (assuming $|\Phi| < 1$)
$E[y_t y_{t-1}] = \Phi\, E[y_{t-1}^2]$
⇒ stationary, not time dependent
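These moments are easy to verify by simulation. A minimal sketch (the coefficient Φ = 0.5, σ = 1, the seed, and the sample size are illustrative choices, not from the lecture):

```python
import random

def simulate_ar1(phi, sigma, T, burn=500, seed=42):
    """Simulate y_t = phi*y_{t-1} + eps_t with eps_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    y, path = 0.0, []
    for t in range(T + burn):
        y = phi * y + rng.gauss(0.0, sigma)
        if t >= burn:          # drop burn-in so the retained path is near-stationary
            path.append(y)
    return path

phi, sigma = 0.5, 1.0
path = simulate_ar1(phi, sigma, T=200_000)
mean = sum(path) / len(path)
var = sum((v - mean) ** 2 for v in path) / len(path)
# theoretical variance: sigma^2/(1 - phi^2) = 4/3 ~ 1.333
print(round(mean, 3), round(var, 3))
```

The sample mean and variance should sit close to the theoretical values 0 and σ²/(1 − Φ²), and neither drifts with t.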
Stationary Series
Examples:
$y_t = 0.08 + \varepsilon_t - 0.4\,\varepsilon_{t-1}$, $\varepsilon_t \sim WN$
$y_t = 0.13\, y_{t-1} + \varepsilon_t$
[Figure: simulated stationary series plotted against Time]
Non-Stationary Series
Examples:
$y_t = \mu t + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \varepsilon_t$, $\varepsilon_t \sim WN$
$y_t = \mu + y_{t-1} + \varepsilon_t$  (RW with drift)
[Figure: simulated non-stationary series trending upward over time]
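A random walk with drift can be simulated directly; a sketch (drift μ = 0.2, σ = 1, and the horizon are arbitrary choices) showing that E[y_t] = μt and Var[y_t] = tσ² both grow with t, the hallmark of non-stationarity:

```python
import random

def random_walk_drift(mu, sigma, T, seed=7):
    """y_t = mu + y_{t-1} + eps_t: a unit root, so Var[y_t] = t*sigma^2 grows without bound."""
    rng = random.Random(seed)
    y, path = 0.0, []
    for _ in range(T):
        y = mu + y + rng.gauss(0.0, sigma)
        path.append(y)
    return path

mu, sigma, T, reps = 0.2, 1.0, 100, 2000
finals = [random_walk_drift(mu, sigma, T, seed=s)[-1] for s in range(reps)]
m = sum(finals) / reps
v = sum((f - m) ** 2 for f in finals) / reps
# E[y_T] = mu*T = 20 and Var[y_T] = T*sigma^2 = 100: both depend on t
print(round(m, 1), round(v, 1))
```

Repeating the experiment with a larger T scales both numbers up, unlike the stationary AR(1) case.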
- Ensemble Average: $\bar{z} = \frac{1}{m}\sum_{i=1}^{m} Z_i$ (average across $m$ realizations)
- Time Series Average: $\bar{z} = \frac{1}{n}\sum_{t=1}^{n} Z_t$ (average over time, single realization)
• Q: Under which circumstances can we use the time average (only one realization of {Zt})? Is the time average an unbiased and consistent estimator of the mean? The Ergodic Theorem gives us the answer.
$E[\bar{z}] = \mu$ (unbiased), and
$\mathrm{var}(\bar{z}) = \frac{1}{n^2}\sum_{t=1}^{n}\sum_{s=1}^{n}\mathrm{cov}(Z_t, Z_s) = \frac{1}{n^2}\sum_{t=1}^{n}\sum_{s=1}^{n}\gamma_{|t-s|}$
$= \frac{\gamma_0}{n^2}\big[(\rho_0 + \rho_1 + \dots + \rho_{n-1}) + (\rho_1 + \rho_0 + \rho_1 + \dots + \rho_{n-2}) + \dots + (\rho_{n-1} + \rho_{n-2} + \dots + \rho_0)\big]$
$\lim_{n\to\infty}\mathrm{var}(\bar{z}) = \lim_{n\to\infty}\frac{\gamma_0}{n}\sum_{k=-(n-1)}^{n-1}\Big(1 - \frac{|k|}{n}\Big)\rho_k \;\overset{?}{=}\; 0$
Q: How can we make the remaining part, the sum over the upper
triangle of the covariance matrix, go to zero as well?
A: We need to impose conditions on ρk: conditions weaker than "they are all zero," but strong enough to exclude the sequence of identical copies.
Sufficient conditions on ρk, in increasing strength:
$\frac{1}{n}\sum_{k=1}^{n}\rho_k \to 0$ as $n \to \infty$;  $\rho_k \to 0$ as $k \to \infty$;  $\sum_{k=0}^{\infty}|\rho_k| < \infty$.
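The contrast can be illustrated numerically. A sketch (an AR(1) with φ = 0.5 stands in for the ergodic case since ρk = 0.5^k → 0; the "identical copies" case sets Z_t = Z for all t, so ρk = 1):

```python
import random

rng = random.Random(0)
n = 100_000

# Ergodic case: AR(1) with phi = 0.5 -> one time average recovers E[Z_t] = 0
y, s = 0.0, 0.0
for _ in range(n):
    y = 0.5 * y + rng.gauss(0.0, 1.0)
    s += y
time_avg_ar1 = s / n

# Non-ergodic case: identical copies Z_t = Z (rho_k = 1 for every k)
z = rng.gauss(0.0, 1.0)
time_avg_const = sum(z for _ in range(n)) / n   # stuck at the single draw z, not at E[Z] = 0
print(round(time_avg_ar1, 4), round(time_avg_const - z, 4))
```

In the second case the time average converges to the single draw z, not to the ensemble mean, no matter how large n gets.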
Note: Recall that only the first two moments are needed to describe
the normal distribution.
Theorem I
If yt is strictly stationary and ergodic and xt = f(yt, yt-1, yt-2 , ...) is a RV,
then xt is strictly stationary and ergodic.
• The AR(p) process in lag-operator form:
$y_t = \mu + \sum_{i=1}^{p}\phi_i L^i\, y_t + \varepsilon_t$, where $L$ is the lag operator ($L^i y_t = y_{t-i}$)
AR Process – Stationarity
• Let's compute moments of yt using the infinite sum (assume μ = 0):
$E[y_t] = \Phi(L)^{-1} E[\varepsilon_t] = 0$
$\mathrm{Var}[y_t] = \Phi(L)^{-2}\,\mathrm{Var}[\varepsilon_t] = \sigma^2\,\Phi(L)^{-2} \ge 0$
$E[y_t y_{t-j}] = \gamma(j) = E[(\phi_1 y_{t-1} + \phi_2 y_{t-2} + \dots + \phi_p y_{t-p} + \varepsilon_t)\, y_{t-j}]$
$= \phi_1\gamma(j-1) + \phi_2\gamma(j-2) + \dots + \phi_p\gamma(j-p)$
where $r_1, \dots, r_p \in \mathbb{C}$ are the roots of $\Phi(z)$. If the coefficients $\phi_j$ are all real, the roots are either real or come in complex conjugate pairs.
AR Process – Stationarity
Theorem: The linear AR(p) process is strictly stationary and ergodic
if and only if |rj|>1 for all j, where |rj| is the modulus of the
complex number rj.
Note: If one of the rj's equals 1, Φ(L) (& yt) has a unit root, i.e., Φ(1) = 0. This is a special case of non-stationarity.
• Recall Φ(L)-1 produces an infinite sum on the εt-j's. If this sum does not explode, we say the process is stable.
Note: $1/(1 - \phi_i L) = \sum_{j=0}^{\infty}\phi_i^j L^j$, $i = 1, 2$
• Note: $\rho_k \to 0$ as $k \to \infty$.
• Stationarity Check
– $E[y_t] = \mu/(1 - \phi_1 - \phi_2) = \mu^*$, provided $\phi_1 + \phi_2 \ne 1$.
– $\mathrm{Var}[y_t] = \sigma^2/(1 - \phi_1\rho_1 - \phi_2\rho_2)$, requiring $\phi_1\rho_1 + \phi_2\rho_2 < 1$.
Stationarity condition: $\phi_1 + \phi_2 < 1$
AR Process – Stationarity
$\tilde{y}_t = A\,\tilde{y}_{t-1} + \tilde{\varepsilon}_t \;\Rightarrow\; \tilde{y}_t = [I - AL]^{-1}\tilde{\varepsilon}_t$
Note: Recall $(I - F)^{-1} = \sum_{j=0}^{\infty} F^j = I + F + F^2 + \dots$
$A = \begin{bmatrix}\phi_1 & \phi_2\\ 1 & 0\end{bmatrix}, \qquad |A - \lambda I| = \det\begin{bmatrix}\phi_1-\lambda & \phi_2\\ 1 & -\lambda\end{bmatrix} = \lambda^2 - \phi_1\lambda - \phi_2$
• If $|\lambda_i| < 1$ for all $i = 1, 2$, yt is stable (it does not explode) and stationary. Then:
$\lambda_{1,2} = \dfrac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2}$
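The eigenvalue check is straightforward to code. A sketch (the coefficient pairs are illustrative; the quadratic is the characteristic polynomial of the companion matrix A):

```python
import cmath

def ar2_eigenvalues(phi1, phi2):
    """Eigenvalues of the companion matrix: roots of lambda^2 - phi1*lambda - phi2 = 0."""
    disc = cmath.sqrt(phi1 ** 2 + 4 * phi2)
    return (phi1 + disc) / 2, (phi1 - disc) / 2

def is_stationary(phi1, phi2):
    """A stable/stationary AR(2) needs both eigenvalues inside the unit circle."""
    return all(abs(lam) < 1 for lam in ar2_eigenvalues(phi1, phi2))

print(is_stationary(0.5, 0.3))   # True: both |lambda| < 1
print(is_stationary(0.5, 0.6))   # False: phi1 + phi2 > 1
```

Using `cmath.sqrt` means the same code handles complex conjugate eigenvalues (negative discriminant) without any special cases.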
AR Process – Stationarity
• The autocovariance function is given by:
$\gamma_k = E[Y_t Y_{t-k}] = E[(\phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \varepsilon_t)\, Y_{t-k}]$
$= \phi_1\gamma_{k-1} + \phi_2\gamma_{k-2} + E[\varepsilon_t Y_{t-k}]$
• Again a recursive formula. Let's get the first autocovariances:
$\gamma_0 = \phi_1\gamma_1 + \phi_2\gamma_2 + E[\varepsilon_t Y_t] = \phi_1\gamma_1 + \phi_2\gamma_2 + \sigma^2$
$\gamma_1 = \phi_1\gamma_0 + \phi_2\gamma_1 + E[\varepsilon_t Y_{t-1}] = \phi_1\gamma_0 + \phi_2\gamma_1$
$\Rightarrow\; \gamma_1 = \dfrac{\phi_1}{1-\phi_2}\,\gamma_0 \quad\Rightarrow\quad \rho_1 = \dfrac{\phi_1}{1-\phi_2}$
$\gamma_2 = \phi_1\gamma_1 + \phi_2\gamma_0 + E[\varepsilon_t Y_{t-2}] = \Big(\dfrac{\phi_1^2}{1-\phi_2} + \phi_2\Big)\gamma_0 \quad\Rightarrow\quad \rho_2 = \dfrac{\phi_1^2}{1-\phi_2} + \phi_2$
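These recursions can be checked against a simulated series. A sketch (the coefficients φ1 = 0.5, φ2 = 0.3 and the sample size are illustrative):

```python
import random

phi1, phi2, sigma = 0.5, 0.3, 1.0

# closed-form autocorrelations from the recursions
rho1 = phi1 / (1 - phi2)          # rho_1 = phi1/(1 - phi2)
rho2 = phi1 * rho1 + phi2         # rho_2 = phi1^2/(1 - phi2) + phi2

# compare with the sample ACF of a long simulated AR(2) path
rng = random.Random(1)
y1 = y2 = 0.0
path = []
for _ in range(300_000):
    y = phi1 * y1 + phi2 * y2 + rng.gauss(0.0, sigma)
    path.append(y)
    y1, y2 = y, y1
path = path[1000:]                # discard burn-in
n = len(path)
mean = sum(path) / n
g0 = sum((v - mean) ** 2 for v in path) / n

def acf(k):
    return sum((path[t] - mean) * (path[t - k] - mean) for t in range(k, n)) / n / g0

r1, r2 = acf(1), acf(2)
print(round(rho1, 3), round(r1, 3))
print(round(rho2, 3), round(r2, 3))
```

For these coefficients ρ1 = 0.5/0.7 ≈ 0.714 and ρ2 ≈ 0.657; the sample ACF should land within sampling error of both.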
AR Process – Stationarity
• The AR(2) in matrix AR(1) form is called Vector AR(1) or VAR(1).
AR Process – Causality
• The AR(p) model:
$\Phi(L)\,y_t = \varepsilon_t$, where $\Phi(L) = 1 - \phi_1 L - \phi_2 L^2 - \dots - \phi_p L^p$
• But, we need to make sure that we can invert the polynomial Φ(L). That is, we need
$\Phi(L)^{-1} = \psi(L) = \sum_{j=0}^{\infty}\psi_j L^j$ with $\sum_{j=0}^{\infty}|\psi_j| < \infty$,
with $y_t = \psi(L)\,\varepsilon_t$.
AR Process – Causality
Example: AR(1) process:
$\Phi(L)\,y_t = \varepsilon_t$, where $\Phi(L) = 1 - \phi_1 L$
$y_t = (1 - \phi_1 L)^{-1}\varepsilon_t = (1 + \phi_1 L + \phi_1^2 L^2 + \dots)\,\varepsilon_t$
$\Rightarrow\; \psi_i = \phi_1^i, \quad i \ge 0$
$\psi_0 = 1$
$\psi_1 = \phi_1$
$\psi_2 = \phi_1^2 + \phi_2$
$\psi_3 = \phi_1^3 + 2\phi_1\phi_2$
$\psi_j = \phi_1\psi_{j-1} + \phi_2\psi_{j-2}, \quad j \ge 2$
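The recursion translates directly into code. A sketch (coefficients φ1 = 0.5, φ2 = 0.3 are illustrative):

```python
def psi_weights(phi1, phi2, n):
    """MA(infinity) weights of an AR(2): psi_0 = 1, psi_1 = phi1,
    then psi_j = phi1*psi_{j-1} + phi2*psi_{j-2}."""
    psi = [1.0, phi1]
    for _ in range(2, n):
        psi.append(phi1 * psi[-1] + phi2 * psi[-2])
    return psi[:n]

w = psi_weights(0.5, 0.3, 5)
print([round(v, 4) for v in w])   # [1.0, 0.5, 0.55, 0.425, 0.3775]
```

The third entry matches the closed form ψ2 = φ1² + φ2 = 0.25 + 0.3 = 0.55, and ψ3 = φ1³ + 2φ1φ2 = 0.425.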
• Recall that ut= xtεt is a MDS. It is also strictly stationary and ergodic.
$\frac{1}{T}\sum_{t=1}^{T} x_t\varepsilon_t = \frac{1}{T}\sum_{t=1}^{T} u_t \;\overset{p}{\to}\; E[u_t] = 0.$
Then, the OLS estimator b satisfies:
$b = \beta + \Big[\frac{1}{T}\sum_{t=1}^{T} x_t x_t'\Big]^{-1}\frac{1}{T}\sum_{t=1}^{T} x_t\varepsilon_t \;\overset{p}{\to}\; \beta + Q^{-1}\cdot 0 = \beta$
AR Process – Bootstrap
• So far, we constructed the bootstrap sample by randomly resampling from the data values (yt, xt). This created an i.i.d. bootstrap sample.
AR Process – Bootstrap
(1) Model-Based (Parametric) Bootstrap
1. Estimate the AR(p) coefficients b and the residuals e.
2. Fix an initial condition {yt-k+1, yt-k+2 , yt-k+3, ..., y0}
3. Simulate i.i.d. draws e* from the empirical distribution of the
residuals {e1, e2 , e3, ..., eT}.
4. Create the bootstrap series yt* by the recursive formula
$y_t^* = \hat{\mu} + \hat{\phi}_1 y_{t-1}^* + \hat{\phi}_2 y_{t-2}^* + \dots + \hat{\phi}_p y_{t-p}^* + \varepsilon_t^*$
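Steps 2-4 can be sketched for a fitted AR(1) (the "fitted" values φ̂ = 0.5, μ̂ = 0.1 and the normal residuals below are stand-ins for step 1's actual OLS output):

```python
import random

def ar1_bootstrap_path(phi_hat, mu_hat, residuals, y0, rng):
    """Steps 3-4: draw e* i.i.d. from the empirical residuals, rebuild y* recursively."""
    y, path = y0, []
    for _ in range(len(residuals)):
        e_star = rng.choice(residuals)        # resample residuals with replacement
        y = mu_hat + phi_hat * y + e_star
        path.append(y)
    return path

# stand-ins for step 1's estimates (illustrative, not a real fit)
rng = random.Random(3)
residuals = [rng.gauss(0.0, 1.0) for _ in range(500)]
boot = [ar1_bootstrap_path(0.5, 0.1, residuals, y0=0.0, rng=rng) for _ in range(200)]

# e.g., a bootstrap distribution of the sample mean of y*
means = sorted(sum(p) / len(p) for p in boot)
print(round(means[4], 3), round(means[194], 3))   # rough 95% interval endpoints
```

Each simulated series inherits the estimated AR dynamics, which is exactly what the i.i.d. resampling scheme above cannot deliver.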
AR Process – Bootstrap
(2) Block Resampling
1. Divide the sample into T/m blocks of length m.
2. Resample complete blocks. For each simulated sample, draw T/m
blocks.
3. Paste the blocks together to create the bootstrap time-series yt*.
Cons: It may be sensitive to the block length and to the way the data are partitioned into blocks. It may not work well in small samples.
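Steps 1-3 can be sketched with non-overlapping blocks (the series, block length m = 10, and sample size are illustrative; overlapping-block schemes also exist):

```python
import random

def block_bootstrap(series, m, rng):
    """Steps 1-3: cut the sample into length-m blocks, draw whole blocks, paste together."""
    blocks = [series[i:i + m] for i in range(0, len(series) - m + 1, m)]
    out = []
    while len(out) < len(series):
        out.extend(rng.choice(blocks))        # a complete block keeps its within-block dependence
    return out[:len(series)]

rng = random.Random(5)
y = [rng.gauss(0.0, 1.0) for _ in range(120)]
y_star = block_bootstrap(y, m=10, rng=rng)
print(len(y_star))   # 120: same length as the original sample
```

Resampling whole blocks preserves the serial dependence within each block, which is the whole point relative to i.i.d. resampling.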
$\pi(L)\,y_t = \varepsilon_t$, where $\pi(L) = \sum_{j=0}^{\infty}\pi_j L^j$
MA Process – Invertibility
• We need to make sure that θ(L)-1 is defined: we require $\theta(z) \ne 0$ for $|z| \le 1$. When this condition is met, we can write εt as a causal function of yt. We say the MA is invertible. For this to hold, we require:
$\theta(L)^{-1} = \pi(L) = 1 + \pi_1 L + \pi_2 L^2 + \dots$ with $\sum_{j=0}^{\infty}|\pi_j| < \infty$,
with $\varepsilon_t = \pi(L)\,y_t$.
- Moments (MA(1): $y_t = \mu + \varepsilon_t + \theta_1\varepsilon_{t-1}$)
$E[Y_t] = \mu$
$\gamma_0 = \mathrm{Var}(y_t) = \sigma^2(1 + \theta_1^2)$
$\gamma_1 = E[(y_t - \mu)(y_{t-1} - \mu)] = \theta_1\sigma^2$
$\gamma_k = 0$ for $k > 1$
$\Rightarrow\; \rho_1 = \dfrac{\theta_1}{1 + \theta_1^2}$, $\rho_k = 0$ for $k > 1$
For an MA(2), the autocorrelations are:
$\rho_1 = \dfrac{\theta_1(1 + \theta_2)}{1 + \theta_1^2 + \theta_2^2}, \qquad \rho_2 = \dfrac{\theta_2}{1 + \theta_1^2 + \theta_2^2}, \qquad \rho_k = 0 \text{ for } k > 2$
MA Process – Estimation
• MA are more complicated to estimate. In particular, there are
nonlinearities. Consider an MA(1):
yt = εt + θ εt-1
$\varepsilon_t(a) = y_t - a\,y_{t-1} + a^2 y_{t-2} - \dots$
and look (numerically) for the least-squares estimator
$\hat{\theta} = \arg\min_a S_T(a) = \arg\min_a \sum_{t=1}^{T}\varepsilon_t^2(a)$
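The numerical search can be sketched with a simple grid over the invertible region (the true θ = 0.6, the grid, and the sample size are illustrative; real software would use a proper optimizer):

```python
import random

def ss_ma1(a, y):
    """S_T(a): build eps_t(a) by the recursion eps_t = y_t - a*eps_{t-1} (eps_0 = 0), sum squares."""
    eps, s = 0.0, 0.0
    for yt in y:
        eps = yt - a * eps
        s += eps * eps
    return s

# simulate y_t = eps_t + theta*eps_{t-1} with theta = 0.6 (illustrative)
rng = random.Random(11)
theta, T = 0.6, 20_000
e_prev, y = 0.0, []
for _ in range(T):
    e = rng.gauss(0.0, 1.0)
    y.append(e + theta * e_prev)
    e_prev = e

grid = [i / 100 for i in range(-99, 100)]        # candidate a in (-1, 1)
theta_hat = min(grid, key=lambda a: ss_ma1(a, y))
print(theta_hat)   # should land near the true theta = 0.6
```

The recursion for ε_t(a) is how the nonlinearity enters: each candidate a implies a different whole sequence of fitted innovations.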
$y_t = \mu + \sum_{j=0}^{\infty}\psi_j L^j\,\varepsilon_t, \quad \psi_0 = 1$
Let $x_t = y_t - \mu$. Then:
$E[x_t] = E[y_t - \mu] = \sum_{j=0}^{\infty}\psi_j E[\varepsilon_{t-j}] = 0$
$E[x_t^2] = \sum_{j=0}^{\infty}\psi_j^2 E[\varepsilon_{t-j}^2] = \sigma^2\sum_{j=0}^{\infty}\psi_j^2$
$\gamma_j = \sigma^2(\psi_j + \psi_1\psi_{j+1} + \psi_2\psi_{j+2} + \dots) = \sigma^2\sum_{k=0}^{\infty}\psi_k\psi_{k+j}$
ARMA Process
• A combination of AR(p) and MA(q) processes produces an
ARMA(p, q) process:
$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \dots + \phi_p y_{t-p} + \varepsilon_t + \theta_1\varepsilon_{t-1} + \theta_2\varepsilon_{t-2} + \dots + \theta_q\varepsilon_{t-q}$
$\Big(1 - \sum_{i=1}^{p}\phi_i L^i\Big)\, y_t = \Big(1 + \sum_{i=1}^{q}\theta_i L^i\Big)\,\varepsilon_t$
$\Phi(L)\, y_t = \theta(L)\, \varepsilon_t$
• Usually, we insist that $\Phi(z) \ne 0$ and $\theta(z) \ne 0$ for $|z| \le 1$, and that the polynomials Φ(L), θ(L) have no common factors. This implies the model is not a lower-order ARMA model in disguise.
ARMA Process
Example: Common factors.
Suppose we have the following ARMA(2,3) model $\Phi(L)\,y_t = \theta(L)\,\varepsilon_t$ with
$\Phi(L) = 1 + 0.6L + 0.3L^2$
$\theta(L) = 1 + 1.6L + 0.9L^2 + 0.3L^3 = (1 + 0.6L + 0.3L^2)(1 + L)$
The common factor $(1 + 0.6L + 0.3L^2)$ cancels, leaving the MA(1) $y_t = (1 + L)\,\varepsilon_t$.
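The factorization can be checked by multiplying the lag polynomials as coefficient lists (plain discrete convolution; note the first-order coefficient of the product works out to 1.6):

```python
def polymul(p, q):
    """Multiply lag polynomials stored as coefficient lists [c0, c1, ...] (discrete convolution)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (1 + 0.6L + 0.3L^2)(1 + L)
theta = polymul([1, 0.6, 0.3], [1, 1])
print([round(c, 10) for c in theta])   # [1.0, 1.6, 0.9, 0.3]
```

The same helper reproduces any common-factor example: multiply Φ(L) by the extra factor and compare against θ(L).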
• Pure AR Representation: $\dfrac{\Phi_p(L)}{\theta_q(L)}\, y_t = a_t$
• Pure MA Representation: $y_t = \dfrac{\theta_q(L)}{\Phi_p(L)}\, a_t$
Causality: $|z| \le 1 \;\Rightarrow\; \Phi(z) = 1 - \phi_1 z - \phi_2 z^2 - \dots - \phi_p z^p \ne 0$.
Invertibility: $|z| \le 1 \;\Rightarrow\; \theta(z) = 1 + \theta_1 z + \theta_2 z^2 + \dots + \theta_q z^q \ne 0$.
Then, $x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \dots + \phi_p x_{t-p} + w_t$
• That is, the dynamic multiplier for any linear SDE depends only on
the length of time j, not on time t.
• Usually, IRFs are represented with a graph that measures the effect of the innovation, εt, on yt over time:
$\partial y_{t+j}/\partial\varepsilon_t + \partial y_{t+j+1}/\partial\varepsilon_t + \partial y_{t+j+2}/\partial\varepsilon_t + \dots = \psi_j + \psi_{j+1} + \psi_{j+2} + \dots$
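The ψj's that make up the IRF can be computed recursively for any ARMA(p, q). A sketch (the AR(1) example with φ = 0.5 gives ψj = 0.5^j; coefficient lists are illustrative):

```python
def irf_arma(phi, theta, horizon):
    """psi_j for an ARMA(p, q): psi_0 = 1,
    psi_j = theta_j + sum_i phi_i*psi_{j-i} (with theta_j = 0 for j > q)."""
    p, q = len(phi), len(theta)
    psi = []
    for j in range(horizon):
        val = 1.0 if j == 0 else (theta[j - 1] if j - 1 < q else 0.0)
        for i in range(1, p + 1):
            if j - i >= 0:
                val += phi[i - 1] * psi[j - i]
        psi.append(val)
    return psi

# AR(1) with phi = 0.5: the IRF decays geometrically
print(irf_arma([0.5], [], 5))   # [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Plotting the returned list against j is exactly the IRF graph described above; for a stationary model the weights die out.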
29
RS – EC2 - Lecture 13
• Adding MA processes
$x_t = A(L)\,\varepsilon_t$
$z_t = C(L)\,u_t$
$y_t = x_t + z_t = A(L)\,\varepsilon_t + C(L)\,u_t$
- Under independence:
$\gamma_y(j) = E[y_t y_{t-j}] = E[(x_t + z_t)(x_{t-j} + z_{t-j})] = E[x_t x_{t-j} + z_t z_{t-j}] = \gamma_x(j) + \gamma_z(j)$
- Then, $\gamma(j) = 0$ for $j > \max(q_x, q_z)$ ⇒ yt is ARMA(0, max(qx, qz)), i.e., an MA(max(qx, qz)).
• Adding AR processes
$(1 - A(L))\,x_t = \varepsilon_t$
$(1 - C(L))\,z_t = u_t$
$y_t = x_t + z_t = \,?$
- Rewrite the system as:
$(1 - C(L))(1 - A(L))\,x_t = (1 - C(L))\,\varepsilon_t$
$(1 - A(L))(1 - C(L))\,z_t = (1 - A(L))\,u_t$
$\Rightarrow\; (1 - A(L))(1 - C(L))\,y_t = (1 - C(L))\,\varepsilon_t + (1 - A(L))\,u_t = \varepsilon_t + u_t - [C(L)\varepsilon_t + A(L)u_t]$