$$\psi(B) = \frac{\theta(B)}{\phi(B)},$$
which (as in the above example) gives the values
$$\psi_j = \phi_1^{\,j-1}(\theta_1 + \phi_1), \qquad j \ge 1.$$
The ACF is obtained by dividing the ACVF by γ(0):
$$\rho(\tau) = \frac{\gamma(\tau)}{\gamma(0)}.$$
For the ARMA(1,1) model
$$X_t - \phi X_{t-1} = Z_t + \theta Z_{t-1}$$
we can write
$$\begin{aligned}
\gamma(\tau) &= \operatorname{cov}(X_{t+\tau}, X_t) = E(X_{t+\tau} X_t) \\
&= E[(\phi X_{t+\tau-1} + Z_{t+\tau} + \theta Z_{t+\tau-1}) X_t] \\
&= \phi\, E[X_{t+\tau-1} X_t] + E[Z_{t+\tau} X_t] + \theta\, E[Z_{t+\tau-1} X_t] \\
&= \phi\, \gamma(\tau-1) + E[Z_{t+\tau} X_t] + \theta\, E[Z_{t+\tau-1} X_t].
\end{aligned}$$
The linear representation $X_t = \sum_{j=0}^{\infty} \psi_j Z_{t-j}$ gives
$$E[Z_{t+\tau} X_t] = E\Bigl[Z_{t+\tau} \sum_{j=0}^{\infty} \psi_j Z_{t-j}\Bigr] = \sum_{j=0}^{\infty} \psi_j\, E[Z_{t+\tau} Z_{t-j}] = \begin{cases} \psi_0 \sigma^2 & \text{for } \tau = 0, \\ 0 & \text{for } \tau \ge 1. \end{cases}$$
Also,
$$E[Z_{t+\tau-1} X_t] = E\Bigl[Z_{t+\tau-1} \sum_{j=0}^{\infty} \psi_j Z_{t-j}\Bigr] = \sum_{j=0}^{\infty} \psi_j\, E[Z_{t+\tau-1} Z_{t-j}] = \begin{cases} \psi_1 \sigma^2 & \text{for } \tau = 0, \\ \psi_0 \sigma^2 & \text{for } \tau = 1, \\ 0 & \text{for } \tau \ge 2. \end{cases}$$
Furthermore,
$$\psi_0 = 1, \qquad \psi_1 = \phi + \theta.$$
Putting all these together we obtain
$$\gamma(0) = \phi\,\gamma(1) + \sigma^2\bigl(1 + \theta(\phi + \theta)\bigr), \qquad \gamma(1) = \phi\,\gamma(0) + \theta\sigma^2. \qquad (6.13)$$
The ACVF is in fact given here in the form of a homogeneous difference equation of order 1 with initial conditions specifying γ(0) and γ(1). Namely, for τ ≥ 2 we have
$$\gamma(\tau) = \phi\,\gamma(\tau - 1),$$
whose associated polynomial equation
$$1 - \phi z = 0$$
has root
$$z_0 = \frac{1}{\phi}.$$
So we can write
$$\gamma(\tau) = \bigl(z_0^{-1}\bigr)^{\tau-1} \gamma(1), \qquad \tau \ge 1.$$
This depends only on the root of the associated polynomial and on the initial
conditions. Solving (6.13) for γ(0) and γ(1) we obtain
$$\gamma(0) = \sigma^2\, \frac{1 + 2\theta\phi + \theta^2}{1 - \phi^2}$$
and
$$\gamma(1) = \sigma^2\, \frac{(1 + \theta\phi)(\phi + \theta)}{1 - \phi^2}.$$
This gives us
$$\gamma(\tau) = \sigma^2\, \frac{(1 + \theta\phi)(\phi + \theta)}{1 - \phi^2}\, \phi^{\tau-1}, \qquad \tau \ge 1.$$
Finally, dividing by γ(0) we get the ACF, which is the same as the one derived in Section 4.6, that is
$$\rho(\tau) = \frac{(1 + \theta\phi)(\phi + \theta)}{1 + 2\theta\phi + \theta^2}\, \phi^{\tau-1}, \qquad \tau \ge 1. \qquad (6.14)$$
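As a quick numerical check of (6.14), here is a minimal Python sketch (not part of the original text; the parameter values φ = 0.5, θ = 0.4 and σ² = 1 are arbitrary). It compares the closed form with the ACF obtained by truncating the sum γ(τ) = σ² Σ_j ψ_j ψ_{j+τ}, derived for general ARMA models below, using ψ_0 = 1 and ψ_j = φ^{j-1}(φ + θ):

```python
import numpy as np

# Check of (6.14) for ARMA(1,1): closed-form ACF vs. the ACF computed
# directly from the psi-weights psi_0 = 1, psi_j = phi^(j-1) (phi + theta).
phi, theta = 0.5, 0.4          # illustrative values, not from the text
J = 200                        # truncation point for the infinite sums

psi = np.empty(J)
psi[0] = 1.0
psi[1:] = phi ** np.arange(J - 1) * (phi + theta)

# gamma(tau) = sigma^2 * sum_j psi_j psi_{j+tau}   (sigma^2 = 1 here)
gamma = np.array([psi[: J - h] @ psi[h:] for h in range(6)])
acf_numeric = gamma[1:] / gamma[0]

taus = np.arange(1, 6)
acf_closed = ((1 + theta * phi) * (phi + theta)
              / (1 + 2 * theta * phi + theta ** 2)) * phi ** (taus - 1)

print(np.allclose(acf_numeric, acf_closed))   # True
```

Truncating at J = 200 is more than enough here, since the ψ-weights decay geometrically.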
ACF for ARMA(p,q)
For a general causal ARMA(p,q) process we can proceed in the same way, using the linear representation
$$X_t = \psi(B) Z_t,$$
where
$$\psi(B) = \sum_{j=0}^{\infty} \psi_j B^j.$$
This yields
$$\gamma(\tau) = \sigma^2 \sum_{j=0}^{\infty} \psi_j\, \psi_{j+\tau}, \qquad \tau \ge 0.$$
Here, as before, we used the linear representation of $X_t$, the fact that $Z_{t+i}$ and $X_t$ are uncorrelated for $i > 0$, and that $\psi_i = 0$ for $i < 0$.

[Figure: a simulated series (left, x against t) and its ACF (right, against the lag τ).]
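For a general causal ARMA(p,q), the ψ-weights can be obtained by matching coefficients in φ(B)ψ(B) = θ(B), which gives the recursion ψ_j = θ_j + Σ_{i=1}^{min(j,p)} φ_i ψ_{j-i} with ψ_0 = 1, θ_0 = 1 and θ_j = 0 for j > q. A minimal Python sketch of this route to the ACF (the function names and example parameters are mine, not from the text):

```python
import numpy as np

def arma_psi(phi, theta, n=500):
    """psi-weights from phi(B) psi(B) = theta(B):
    psi_j = theta_j + sum_{i=1}^{min(j,p)} phi_i psi_{j-i}, with psi_0 = 1."""
    psi = np.zeros(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = theta[j - 1] if j <= len(theta) else 0.0
        for i in range(1, min(j, len(phi)) + 1):
            psi[j] += phi[i - 1] * psi[j - i]
    return psi

def arma_acf(phi, theta, nlags, n=500):
    """ACF via the truncated sum gamma(tau) = sigma^2 sum_j psi_j psi_{j+tau}."""
    psi = arma_psi(phi, theta, n)
    gamma = np.array([psi[: n - h] @ psi[h:] for h in range(nlags + 1)])
    return gamma / gamma[0]

# For ARMA(1,1) this reproduces formula (6.14):
print(arma_acf(phi=[0.5], theta=[0.4], nlags=5))
```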
[Figure 6.3: a simulated AR(2) series (left, x against t); its sample and theoretical ACF (right, against the lag τ).]
$$\rho(\tau) = \frac{16}{11}\, 2^{-\tau} - \frac{5}{11}\, 5^{-\tau} = \frac{2^{4-\tau} - 5^{1-\tau}}{11}.$$
The simulated AR(2) process, its sample ACF, and the theoretical ACF are shown in Figure 6.3. As we can see, the theoretical ACF decreases quickly towards zero but never attains zero; we say it tails off.
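The AR(2) coefficients behind this formula are not restated in this excerpt, but they can be recovered from the formula itself: ρ(τ) is a combination of 2^{-τ} and 5^{-τ}, so the roots of the AR polynomial are 2 and 5, giving φ(z) = (1 - z/2)(1 - z/5) = 1 - 0.7z + 0.1z², i.e. φ₁ = 0.7 and φ₂ = -0.1. Under that reconstruction, the following sketch checks the stated ACF against the Yule-Walker recursion ρ(τ) = φ₁ρ(τ-1) + φ₂ρ(τ-2):

```python
import numpy as np

# AR(2) consistent with the stated ACF: roots 2 and 5 of phi(z), i.e.
# phi(z) = (1 - z/2)(1 - z/5) = 1 - 0.7 z + 0.1 z^2  (reconstructed, see text)
phi1, phi2 = 0.7, -0.1

# Yule-Walker recursion: rho(t) = phi1*rho(t-1) + phi2*rho(t-2)
rho = np.zeros(11)
rho[0] = 1.0
rho[1] = phi1 / (1 - phi2)          # rho(1) = phi1 / (1 - phi2) for AR(2)
for t in range(2, 11):
    rho[t] = phi1 * rho[t - 1] + phi2 * rho[t - 2]

taus = np.arange(11)
closed = (2.0 ** (4 - taus) - 5.0 ** (1 - taus)) / 11   # the formula in the text

print(np.allclose(rho, closed))     # True
```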
In this section we consider another correlation function which, together with the ACF, will help to identify the models. It is called the Partial Autocorrelation Function (PACF). Before introducing a formal definition of the PACF we motivate the idea for AR(1). Let
$$X_t = \phi X_{t-1} + Z_t$$
be a causal AR(1) process. Then
$$\begin{aligned}
\gamma(2) &= \operatorname{cov}(X_t, X_{t-2}) = \operatorname{cov}(\phi X_{t-1} + Z_t,\, X_{t-2}) \\
&= \operatorname{cov}(\phi^2 X_{t-2} + \phi Z_{t-1} + Z_t,\, X_{t-2}) \\
&= E[(\phi^2 X_{t-2} + \phi Z_{t-1} + Z_t) X_{t-2}] = \phi^2 \gamma(0).
\end{aligned}$$
The autocorrelation is not zero because $X_t$ depends on $X_{t-2}$ through $X_{t-1}$: due to the recursive nature of AR models there is a chain of dependence. We can break this chain by removing the influence of $X_{t-1}$ from both $X_t$ and $X_{t-2}$, i.e., by considering
$$X_t - \phi X_{t-1} \quad \text{and} \quad X_{t-2} - \phi X_{t-1},$$
for which the covariance is zero:
$$\operatorname{cov}(X_t - \phi X_{t-1},\, X_{t-2} - \phi X_{t-1}) = \operatorname{cov}(Z_t,\, X_{t-2} - \phi X_{t-1}) = 0.$$
Similarly, we obtain zero covariance for $X_t$ and $X_{t-3}$ after breaking the chain of dependence, that is, after removing the dependence of both variables on $X_{t-1}$ and $X_{t-2}$: we consider $X_t - f(X_{t-1}, X_{t-2})$ and $X_{t-3} - f(X_{t-1}, X_{t-2})$ for a suitable function $f$. Continuing in this way we would obtain zero covariances for the variables $X_t - f(X_{t-1}, \dots, X_{t-\tau+1})$ and $X_{t-\tau} - f(X_{t-1}, \dots, X_{t-\tau+1})$. The only nonzero covariance is then between $X_t$ and $X_{t-1}$, as there is nothing in between to break the chain of dependence. These covariances, with an appropriate function $f$ and divided by the variance of the process, are the partial autocorrelations. Hence, for a causal AR(1) process the PACF equals ρ(1) at lag 1 and 0 at lags greater than 1. This, together with the tailing-off shape of the ACF, identifies the process.
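The "breaking the chain" idea is easy to see in simulation. The sketch below (φ = 0.8 and the sample size are arbitrary choices, not from the text) shows that the lag-2 sample correlation of an AR(1) path is close to φ², as computed above, while the correlation becomes negligible once the influence of X_{t-1} is removed from both X_t and X_{t-2}:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.8, 100_000            # illustrative parameter and sample size
z = rng.standard_normal(n)
x = np.empty(n)
x[0] = z[0]                      # crude initialization; the transient is negligible here
for t in range(1, n):
    x[t] = phi * x[t - 1] + z[t]

xt, xt1, xt2 = x[2:], x[1:-1], x[:-2]

# lag-2 autocorrelation: close to phi^2 = 0.64, not zero
print(np.corrcoef(xt, xt2)[0, 1])
# after removing the influence of X_{t-1} from both variables: close to zero
print(np.corrcoef(xt - phi * xt1, xt2 - phi * xt1)[0, 1])
```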
The partial autocorrelation function (PACF) of a stationary process is defined by $\phi_{11} = \operatorname{corr}(X_1, X_0) = \rho(1)$ and, for $\tau \ge 2$,
$$\phi_{\tau\tau} = \operatorname{corr}\bigl(X_\tau - f_{(\tau-1)},\, X_0 - f_{(\tau-1)}\bigr),$$
where
$$f_{(\tau-1)} = f(X_{\tau-1}, \dots, X_1)$$
minimizes the mean square linear prediction error $E\bigl(X_\tau - f_{(\tau-1)}\bigr)^2$.
Remark 6.4. The subscript on $f$ denotes the number of variables the function depends on.
Remark 6.5. By stationarity, $\phi_{\tau\tau}$ is the correlation between the variables $X_t$ and $X_{t-\tau}$ with the linear effect of the intermediate variables $X_{t-1}, \dots, X_{t-\tau+1}$ removed.
Consider a process
$$X_t = \phi X_{t-1} + Z_t, \qquad Z_t \sim WN(0, \sigma^2).$$
Then
$$\phi_{11} = \rho(1) = \phi.$$
To calculate $\phi_{22}$ we need to find the function $f_{(1)}$, which is of the form
$$f_{(1)} = \beta X_1.$$
We choose β to minimize
$$E(X_2 - \beta X_1)^2 = \gamma(0) - 2\beta\gamma(1) + \beta^2\gamma(0).$$
Hence
$$\beta = \frac{\gamma(1)}{\gamma(0)} = \rho(1) = \phi$$
and
$$f_{(1)} = \phi X_1.$$
Then
$$\phi_{22} = \operatorname{corr}\bigl(X_2 - f_{(1)},\, X_0 - f_{(1)}\bigr) = \operatorname{corr}(Z_2,\, X_0 - \phi X_1) = 0,$$
and similarly $\phi_{\tau\tau} = 0$ for all $\tau \ge 2$.
Let
$$X_t - \phi_1 X_{t-1} - \dots - \phi_p X_{t-p} = Z_t, \qquad Z_t \sim WN(0, \sigma^2),$$
be a causal AR(p) process, i.e., we assume that the roots of φ(z) are outside the unit circle. When τ > p, the linear combination minimizing the mean square linear prediction error is
$$f_{(p)} = \sum_{j=1}^{p} \phi_j X_{\tau-j}.$$
We will discuss this result later. Now we use it to obtain the PACF for τ > p, namely
$$\phi_{\tau\tau} = \operatorname{corr}\bigl(X_\tau - f_{(p)},\, X_0 - f_{(p)}\bigr) = \operatorname{corr}\bigl(Z_\tau,\, X_0 - f_{(p)}\bigr) = 0,$$
since by causality the $X_{\tau-j}$ do not depend on the future noise value $Z_\tau$.
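In practice the PACF can be computed from the ACF by the Durbin-Levinson recursion. Below is a sketch (the function, and the reuse of the AR(2) example with φ₁ = 0.7, φ₂ = -0.1, are mine, not from the text) illustrating that for an AR(p) the partial autocorrelations φ_ττ cut off after lag p:

```python
import numpy as np

def pacf_from_acf(rho):
    """Durbin-Levinson recursion: returns phi_{tau,tau} for tau = 1..len(rho)-1,
    given the ACF values rho[0..nlags] with rho[0] = 1."""
    nlags = len(rho) - 1
    phi = np.zeros((nlags + 1, nlags + 1))
    phi[1, 1] = rho[1]
    for n in range(2, nlags + 1):
        num = rho[n] - phi[n - 1, 1:n] @ rho[n - 1:0:-1]
        den = 1.0 - phi[n - 1, 1:n] @ rho[1:n]
        phi[n, n] = num / den
        phi[n, 1:n] = phi[n - 1, 1:n] - phi[n, n] * phi[n - 1, n - 1:0:-1]
    return np.diag(phi)[1:]

# Theoretical ACF of the AR(2) with phi1 = 0.7, phi2 = -0.1 (see above),
# from the Yule-Walker recursion rho(t) = phi1 rho(t-1) + phi2 rho(t-2).
phi1, phi2 = 0.7, -0.1
rho = np.zeros(11)
rho[0], rho[1] = 1.0, phi1 / (1 - phi2)
for t in range(2, 11):
    rho[t] = phi1 * rho[t - 1] + phi2 * rho[t - 2]

print(np.round(pacf_from_acf(rho), 6))
# -> lag 1: 7/11 = 0.636364, lag 2: phi2 = -0.1, lags 3..10: 0 (cuts off after p = 2)
```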
[Figure 6.4: AR(1) for various values of the parameter φ = 0.9, -0.9, 0.5, -0.5; panels AR1.phi.0.9, AR1.phi.minus0.9, AR1.phi.0.5, AR1.phi.minus0.5.]
[Figure: four pairs of panels, sample ACF (left) and Partial ACF (right), each plotted against Lag.]
[Figure: Partial ACF against Lag for the series AR2$x.]
This is an AR(∞) representation (p = ∞), so the PACF will never cut off as it does for AR(p) with finite p. In general, the PACF of MA models behaves like the ACF of AR models, and the PACF of AR models behaves like the ACF of MA models.
It can be shown that for the MA(1) model $X_t = Z_t + \theta Z_{t-1}$
$$\phi_{\tau\tau} = -\frac{(-\theta)^{\tau}(1 - \theta^2)}{1 - \theta^{2(\tau+1)}}, \qquad \tau \ge 1.$$
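As a numerical check of this formula (a sketch reusing pacf_from_acf from the AR(p) example above; θ = 0.9 is chosen to match the MA1$theta09 series in the figure below), note that the only nonzero autocorrelation of the MA(1) at positive lags is ρ(1) = θ/(1 + θ²); feeding this ACF into the Durbin-Levinson recursion reproduces the closed form:

```python
import numpy as np
# assumes pacf_from_acf from the AR(p) sketch above is in scope

theta = 0.9
rho = np.zeros(11)
rho[0], rho[1] = 1.0, theta / (1 + theta ** 2)   # MA(1): rho(tau) = 0 for tau >= 2

taus = np.arange(1, 11)
closed = -((-theta) ** taus) * (1 - theta ** 2) / (1 - theta ** (2 * (taus + 1)))

print(np.allclose(pacf_from_acf(rho), closed))   # True
```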
Remark 6.7. The PACF of ARMA(p,q): an invertible ARMA model has an infinite AR representation, hence the PACF will not cut off.
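Combining the earlier sketches (arma_acf from the ACF subsection and pacf_from_acf above, both assumed to be in scope), one can see numerically that for an ARMA(1,1) the PACF decays but never cuts off:

```python
import numpy as np
# assumes arma_acf and pacf_from_acf from the sketches above are in scope

rho = arma_acf(phi=[0.5], theta=[0.4], nlags=20)
print(np.round(pacf_from_acf(rho), 4))   # decays geometrically, never exactly 0
```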
The following table summarizes the behaviour of the ACF and PACF of causal and invertible ARMA models (see Shumway and Stoffer (2000)):

          AR(p)                   MA(q)                   ARMA(p,q)
  ACF     tails off               cuts off after lag q    tails off
  PACF    cuts off after lag p    tails off               tails off
[Figure: sample ACF (left) and Partial ACF (right) against Lag for the series MA1$theta09.]