
110 CHAPTER 6. ARMA MODELS

6.2 ACF and PACF of ARMA(p,q)


6.2.1 ACF of ARMA(p,q)
In Section 4.6 we derived the ACF of the ARMA(1,1) process. We used the linear process representation and the fact that

γ(τ) = σ² Σ_{j=0}^∞ ψ_j ψ_{j+τ}.

We calculated the coefficients ψ_j from the relation

ψ(B) = θ(B)/φ(B),

which (as in the above example) gives the values

ψ_j = φ_1^{j−1}(θ_1 + φ_1), for j ≥ 1.

This allows us to calculate the ACF of the process,

ρ(τ) = γ(τ)/γ(0).

Another way of finding the coefficients ψ_j is via the homogeneous difference equations. However, we may obtain such an equation directly in terms of γ(τ) or ρ(τ).

For the ARMA(1,1) process

X_t − φX_{t−1} = Z_t + θZ_{t−1}

we can write

γ(τ) = cov(X_{t+τ}, X_t)
     = E(X_{t+τ} X_t)
     = E[(φX_{t+τ−1} + Z_{t+τ} + θZ_{t+τ−1}) X_t]
     = φ E[X_{t+τ−1} X_t] + E[Z_{t+τ} X_t] + θ E[Z_{t+τ−1} X_t].

Here we consider a causal ARMA(1,1) process, hence

X_t = Σ_{j=0}^∞ ψ_j Z_{t−j}.

This gives

E[Z_{t+τ} X_t] = E[Z_{t+τ} Σ_{j=0}^∞ ψ_j Z_{t−j}]
              = Σ_{j=0}^∞ ψ_j E[Z_{t+τ} Z_{t−j}]
              = { ψ_0 σ²  for τ = 0,
                  0       for τ ≥ 1.
Also,

E[Z_{t+τ−1} X_t] = E[Z_{t+τ−1} Σ_{j=0}^∞ ψ_j Z_{t−j}]
                = Σ_{j=0}^∞ ψ_j E[Z_{t+τ−1} Z_{t−j}]
                = { ψ_1 σ²  for τ = 0,
                    ψ_0 σ²  for τ = 1,
                    0       for τ ≥ 2.

Furthermore,

ψ_0 = 1,
ψ_1 = φ + θ.
Putting all these together we obtain

γ(τ) = φ E[X_{t+τ−1} X_t] + E[Z_{t+τ} X_t] + θ E[Z_{t+τ−1} X_t]
     = { φγ(1) + σ²(1 + φθ + θ²)  for τ = 0,
         φγ(0) + σ²θ              for τ = 1,
         φγ(τ − 1)                for τ ≥ 2.

The ACVF is in fact given here in the form of a homogeneous difference equation of order 1 with initial conditions specifying γ(0) and γ(1). Namely, we have

γ(τ) − φγ(τ − 1) = 0,  τ ≥ 2,                      (6.12)

and the initial conditions are

γ(0) = φγ(1) + σ²(1 + φθ + θ²),
γ(1) = φγ(0) + σ²θ.                                 (6.13)

Note that equation (6.12),

γ(τ) = φγ(τ − 1),

has an iterative form and we can write

γ(2) = φγ(1)
γ(3) = φγ(2) = φ²γ(1)
γ(4) = φγ(3) = φ³γ(1)
...
γ(τ) = φ^{τ−1} γ(1).
The polynomial associated with equation (6.12) is

1 − φz = 0,

with root

z_0 = 1/φ.

So we can write

γ(τ) = (z_0^{−1})^{τ−1} γ(1).
This depends only on the root of the associated polynomial and on the initial conditions. Solving (6.13) for γ(0) and γ(1) we obtain

γ(0) = σ² (1 + 2θφ + θ²)/(1 − φ²)

and

γ(1) = σ² (1 + θφ)(φ + θ)/(1 − φ²).

This gives us

γ(τ) = σ² [(1 + θφ)(φ + θ)/(1 − φ²)] φ^{τ−1},  for τ ≥ 1.

Finally, dividing by γ(0) we get the ACF, which is the same as the one derived in Section 4.6, that is,

ρ(τ) = [(1 + θφ)(φ + θ)/(1 + 2θφ + θ²)] φ^{τ−1},  for τ ≥ 1.   (6.14)
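The closed form (6.14) can be checked numerically against the ψ-weight representation γ(τ) = σ² Σ_j ψ_j ψ_{j+τ} from the start of this section. The following Python sketch (illustrative, not part of the derivation; it assumes NumPy and uses the parameter values of the simulated example, φ = 0.9, θ = 0.5) computes the ACF both ways:

```python
import numpy as np

phi, theta, sigma2 = 0.9, 0.5, 1.0  # ARMA(1,1) parameters, as in Figure 6.2

# psi-weights of the causal ARMA(1,1): psi_0 = 1, psi_j = phi^(j-1) (phi + theta)
N = 500  # truncation point; phi^N is negligible
psi = np.empty(N)
psi[0] = 1.0
psi[1:] = (phi + theta) * phi ** np.arange(N - 1)

# ACVF via gamma(tau) = sigma^2 * sum_j psi_j psi_{j+tau} (truncated at N)
def gamma(tau):
    return sigma2 * np.dot(psi[: N - tau], psi[tau:])

# ACF via the closed form (6.14)
def rho_closed(tau):
    return (1 + theta * phi) * (phi + theta) / (1 + 2 * theta * phi + theta**2) * phi ** (tau - 1)

rho_psi = np.array([gamma(tau) / gamma(0) for tau in range(1, 11)])
rho_cf = np.array([rho_closed(tau) for tau in range(1, 11)])
```

The two computations agree to numerical precision, and γ(0) matches σ²(1 + 2θφ + θ²)/(1 − φ²).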
ACF for ARMA(p,q)

Assume that the model

φ(B)X_t = θ(B)Z_t

is causal, that is, the roots of φ(z) lie outside the unit circle. Then we can write

X_t = ψ(B)Z_t,
Figure 6.2: ARMA(1,1) simulated process x_t − 0.9x_{t−1} = z_t + 0.5z_{t−1}, sample ACF and the theoretical ACF of this process.

where

ψ(B) = Σ_{j=0}^∞ ψ_j B^j,

and it follows immediately that E(X_t) = 0.

As in the example for ARMA(1,1), we can obtain a homogeneous difference equation in terms of γ(τ) with some initial conditions. Namely,

γ(τ) = cov(X_{t+τ}, X_t)
     = E[(Σ_{j=1}^p φ_j X_{t+τ−j} + Σ_{j=0}^q θ_j Z_{t+τ−j}) X_t]
     = Σ_{j=1}^p φ_j E[X_{t+τ−j} X_t] + Σ_{j=0}^q θ_j E[Z_{t+τ−j} X_t]
     = Σ_{j=1}^p φ_j γ(τ − j) + σ² Σ_{j=τ}^q θ_j ψ_{j−τ}.

Here, as before, we used the linear representation of X_t, the fact that Z_{t+i} and X_t are uncorrelated for i > 0, and that ψ_i = 0 for i < 0.

This gives the general homogeneous difference equation for γ(τ),

γ(τ) − φ_1 γ(τ − 1) − ... − φ_p γ(τ − p) = 0  for τ ≥ max(p, q + 1),   (6.15)

with initial conditions

γ(τ) − φ_1 γ(τ − 1) − ... − φ_p γ(τ − p) = σ²(θ_τ ψ_0 + θ_{τ+1} ψ_1 + ... + θ_q ψ_{q−τ})   (6.16)

for 0 ≤ τ < max(p, q + 1).
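In practice γ(τ) can also be computed directly from the ψ-weights, using the recursion ψ_j = θ_j + Σ_{k=1}^{min(j,p)} φ_k ψ_{j−k} (with θ_0 = 1 and θ_j = 0 for j > q), which follows from φ(B)ψ(B) = θ(B). The following Python sketch (illustrative; it assumes NumPy and truncates the infinite sum) computes the ACVF this way and checks that it satisfies (6.15) for a causal ARMA(2,1):

```python
import numpy as np

def psi_weights(phi, theta, N=2000):
    """psi-weights of a causal ARMA(p,q): psi_j = theta_j + sum_k phi_k psi_{j-k}."""
    psi = np.zeros(N)
    psi[0] = 1.0
    for j in range(1, N):
        s = theta[j - 1] if j <= len(theta) else 0.0
        for k in range(1, min(j, len(phi)) + 1):
            s += phi[k - 1] * psi[j - k]
        psi[j] = s
    return psi

def acvf(phi, theta, sigma2=1.0, nlags=20, N=2000):
    """ACVF gamma(0..nlags) via gamma(tau) = sigma^2 sum_j psi_j psi_{j+tau}."""
    psi = psi_weights(phi, theta, N)
    return np.array([sigma2 * np.dot(psi[: N - t], psi[t:]) for t in range(nlags + 1)])

# example: a causal ARMA(2,1) with phi_1 = 0.5, phi_2 = 0.3, theta_1 = 0.4
# (both roots of 1 - 0.5z - 0.3z^2 lie outside the unit circle)
g = acvf([0.5, 0.3], [0.4])
```

For this model max(p, q + 1) = 2, so the recursion γ(τ) = 0.5γ(τ − 1) + 0.3γ(τ − 2) holds for all τ ≥ 2.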

Example 6.4. ACF of an AR(2) process

Let

X_t − φ_1 X_{t−1} − φ_2 X_{t−2} = Z_t

be a causal AR(2) process. From (6.15) we have

γ(τ) − φ_1 γ(τ − 1) − φ_2 γ(τ − 2) = 0  for τ ≥ 2,

with initial conditions

γ(0) − φ_1 γ(−1) − φ_2 γ(−2) = σ²,
γ(1) − φ_1 γ(0) − φ_2 γ(−1) = 0.

It is convenient to write these equations in terms of the autocorrelation function ρ(τ). Dividing them by γ(0) we obtain

ρ(τ) − φ_1 ρ(τ − 1) − φ_2 ρ(τ − 2) = 0,  for τ ≥ 2,
ρ(0) = 1,
ρ(1) = φ_1/(1 − φ_2).                                (6.17)
We know that a general solution to a second-order difference equation (with distinct roots z_1 ≠ z_2) is

ρ(τ) = c_1 z_1^{−τ} + c_2 z_2^{−τ},

where z_1 and z_2 are the roots of the associated polynomial

φ(z) = 1 − φ_1 z − φ_2 z²,

and c_1 and c_2 can be found from the initial conditions.

Take φ_1 = 0.7 and φ_2 = −0.1, that is, the AR(2) process is

X_t − 0.7X_{t−1} + 0.1X_{t−2} = Z_t.

It is a causal process as the coefficients lie in the admissible parameter space. Also, the roots of the associated polynomial

φ(z) = 1 − 0.7z + 0.1z²

are z_1 = 2 and z_2 = 5, i.e., they are outside the unit circle. The initial conditions are

ρ(0) = 1,
ρ(1) = 0.7/(1 + 0.1) = 7/11.
Figure 6.3: AR(2) simulated process x_t − 0.7x_{t−1} + 0.1x_{t−2} = z_t, sample ACF and the theoretical ACF of this process.

They give the set of equations for c_1 and c_2, namely

c_1 + c_2 = 1,
(1/2)c_1 + (1/5)c_2 = 7/11.

These give

c_1 = 16/11,  c_2 = −5/11,

and finally we obtain the ACF for this AR(2) process:

ρ(τ) = (16/11) 2^{−τ} − (5/11) 5^{−τ} = (2^{4−τ} − 5^{1−τ})/11.
Simulated AR(2) process, its sample ACF and the theoretical ACF are shown in Figure 6.3. As we can see, the theoretical ACF decreases quickly towards zero, but it never attains zero; we say it tails off.
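The closed form obtained above can be verified against the difference equation and initial conditions (6.17). A minimal Python check (illustrative only; no libraries needed):

```python
# closed-form ACF of the AR(2) process X_t - 0.7 X_{t-1} + 0.1 X_{t-2} = Z_t,
# derived in Example 6.4: rho(tau) = (16 * 2^(-tau) - 5 * 5^(-tau)) / 11
def rho(tau):
    return (16 * 2.0 ** (-tau) - 5 * 5.0 ** (-tau)) / 11

# values at lags 0..20; these should satisfy rho(0) = 1, rho(1) = 7/11
# and the recursion rho(tau) = 0.7 rho(tau-1) - 0.1 rho(tau-2) for tau >= 2
vals = [rho(t) for t in range(21)]
```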

6.2.2 PACF of ARMA(p,q)


We have seen earlier that the autocorrelation function of MA(q) models is zero for all lags greater than q, as these are q-correlated processes. Hence, the ACF is a good indication of the order of the process. However, AR(p) and ARMA(p,q) processes are "fully" correlated: their ACF tails off and never becomes zero, though it may be very close to zero. In such cases it is difficult to identify the process on the basis of the ACF alone.

In this section we will consider another correlation function which, together with the ACF, will help to identify the models. The function is called the Partial Autocorrelation Function (PACF). Before introducing a formal definition of the PACF we motivate the idea for AR(1). Let

X_t = φX_{t−1} + Z_t

be a causal AR(1) process. Then

γ(2) = cov(X_t, X_{t−2})
     = cov(φX_{t−1} + Z_t, X_{t−2})
     = cov(φ²X_{t−2} + φZ_{t−1} + Z_t, X_{t−2})
     = E[(φ²X_{t−2} + φZ_{t−1} + Z_t)X_{t−2}]
     = φ²γ(0).
The autocorrelation is not zero because X_t depends on X_{t−2} through X_{t−1}. Due to the recursive nature of AR models there is a chain of dependence. We can break this dependence by removing the influence of X_{t−1} from both X_t and X_{t−2}, to obtain

X_t − φX_{t−1} and X_{t−2} − φX_{t−1},

for which the covariance is zero, i.e.,

cov(X_t − φX_{t−1}, X_{t−2} − φX_{t−1}) = cov(Z_t, X_{t−2} − φX_{t−1}) = 0.

Similarly, we obtain zero covariance for X_t and X_{t−3} after breaking the chain of dependence, i.e., removing the dependence of the two variables on X_{t−1} and X_{t−2}, that is, for X_t − f(X_{t−1}, X_{t−2}) and X_{t−3} − f(X_{t−1}, X_{t−2}) for some function f. Continuing this we would obtain zero covariances for the variables X_t − f(X_{t−1}, X_{t−2}, ..., X_{t−τ+1}) and X_{t−τ} − f(X_{t−1}, X_{t−2}, ..., X_{t−τ+1}). Then the only nonzero covariance is for X_t and X_{t−1} (there is nothing in between to break the chain of dependence). These covariances, with an appropriate function f, divided by the variance of the process, are the partial autocorrelations. Hence, for a causal AR(1) process we would have the PACF at lag 1 equal to ρ(1) and at lags > 1 equal to 0. This, together with the tailing-off shape of the ACF, identifies the process.

Definition 6.2. The Partial Autocorrelation Function (PACF) of a zero-mean stationary TS {X_t}_{t=0,1,...} is defined as

φ_{11} = corr(X_1, X_0) = ρ(1),
φ_{ττ} = corr(X_τ − f_{(τ−1)}, X_0 − f_{(τ−1)}),  τ ≥ 2,      (6.18)

where

f_{(τ−1)} = f(X_{τ−1}, ..., X_1)

minimizes the mean square linear prediction error

E(X_τ − f_{(τ−1)})².

Remark 6.4. The subscript on f denotes the number of variables the function depends on.

Remark 6.5. By stationarity, φ_{ττ} is the correlation between the variables X_t and X_{t−τ} with the linear effect of

f(X_{t−1}, ..., X_{t−τ+1}) = β_1 X_{t−1} + ... + β_{τ−1} X_{t−τ+1}

on each variable removed.

Example 6.5. The PACF of AR(1)

Consider a process

X_t = φX_{t−1} + Z_t,  Z_t ∼ WN(0, σ²),

where |φ| < 1, i.e., a causal AR(1). Then by Definition 6.2

φ_{11} = ρ(1) = φ.

To calculate φ_{22} we need to find the function f_{(1)}, which is of the form

f_{(1)} = βX_1.

We choose β to minimize

E(X_2 − βX_1)² = E(X_2² − 2βX_1X_2 + β²X_1²)
              = γ(0) − 2βγ(1) + β²γ(0),

which is a quadratic in β. Taking the derivative with respect to β and setting it equal to zero, we obtain

−2γ(1) + 2γ(0)β = 0.

Hence

β = γ(1)/γ(0) = ρ(1) = φ

and

f_{(1)} = φX_1.

Then

φ_{22} = corr(X_2 − φX_1, X_0 − φX_1) = corr(Z_2, X_0 − φX_1) = 0,

as by causality X_0, X_1 do not depend on Z_2. Similarly we would obtain φ_{33} = 0. In fact,

φ_{ττ} = 0 for τ > 1.
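This cut-off can be checked numerically. A standard way to compute the PACF from the ACF is the Durbin–Levinson recursion, stated here without derivation; the sketch below (Python with NumPy, illustrative only) applies it to a causal AR(1) with φ = 0.9 and recovers φ_{11} = φ and φ_{ττ} = 0 for τ > 1:

```python
import numpy as np

def pacf_from_acf(rho):
    """Durbin-Levinson recursion: PACF phi_{tau,tau} for tau = 1..len(rho)-1,
    given rho[0] = 1 and rho[k] = rho(k)."""
    n = len(rho) - 1
    phi = np.zeros((n + 1, n + 1))
    phi[1, 1] = rho[1]
    for k in range(2, n + 1):
        num = rho[k] - np.dot(phi[k - 1, 1:k], rho[k - 1:0:-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], rho[1:k])
        phi[k, k] = num / den
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
    return np.array([phi[k, k] for k in range(1, n + 1)])

# causal AR(1) with phi = 0.9: rho(k) = 0.9^k
phi1 = 0.9
rho = phi1 ** np.arange(11)
pacf = pacf_from_acf(rho)  # pacf[0] is phi_11, pacf[1] is phi_22, ...
```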
The PACF of AR(p)

Let

X_t − φ_1X_{t−1} − ... − φ_pX_{t−p} = Z_t,  Z_t ∼ WN(0, σ²)

be a causal AR(p) process, i.e., we assume that the roots of φ(z) are outside the unit circle. When τ > p, the linear combination minimizing the mean square linear prediction error is

f_{(p)} = Σ_{j=1}^p φ_j X_{τ−j}.

We will discuss this result later. Now we will use it to obtain the PACF for τ > p, namely

φ_{ττ} = corr(X_τ − f_{(p)}, X_0 − f_{(p)})
       = corr(Z_τ, X_0 − f_{(p)}) = 0,

as by causality the X_{τ−j} do not depend on the future noise value Z_τ.

When τ ≤ p, φ_{pp} ≠ 0 and φ_{11}, ..., φ_{p−1,p−1} are not necessarily zero.


Remark 6.6. The PACF of MA(q)

Let

X_t = Z_t + θ_1Z_{t−1} + ... + θ_qZ_{t−q},  Z_t ∼ WN(0, σ²)

be an invertible MA(q) process, i.e., the roots of θ(z) lie outside the unit circle. Then its linear representation is

X_t = − Σ_{j=1}^∞ π_j X_{t−j} + Z_t.
Figure 6.4: AR(1) simulated processes for various values of the parameter: φ = 0.9, −0.9, 0.5, −0.5.
Figure 6.5: ACF and PACF of the AR(1) process x_t = 0.9x_{t−1} + z_t.

Figure 6.6: ACF and PACF of the AR(1) process x_t = −0.9x_{t−1} + z_t.

Figure 6.7: ACF and PACF of the AR(1) process x_t = 0.5x_{t−1} + z_t.

Figure 6.8: ACF and PACF of the AR(1) process x_t = −0.5x_{t−1} + z_t.


Figure 6.9: The PACF for the AR(2) process x_t − 0.7x_{t−1} + 0.1x_{t−2} = z_t.


Figure 6.10: MA(1) process x_t = z_t + 0.9z_{t−1}.

This is an AR(∞) representation (p = ∞) and the PACF will never cut off, as it does for AR(p) with finite p.

The PACF of MA models behaves like the ACF of AR models, and the PACF of AR models behaves like the ACF of MA models.

For MA(1), X_t = Z_t + θZ_{t−1}, it can be shown that

φ_{ττ} = − (−θ)^τ (1 − θ²) / (1 − θ^{2(τ+1)}),  τ ≥ 1.
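This closed form can be cross-checked with the Durbin–Levinson recursion applied to the MA(1) ACF, ρ(1) = θ/(1 + θ²) and ρ(k) = 0 for k ≥ 2. With the convention X_t = Z_t + θZ_{t−1} the formula carries a leading minus sign (at τ = 1 it gives φ_{11} = θ/(1 + θ²) = ρ(1), as required). A Python sketch assuming NumPy, illustrative only:

```python
import numpy as np

def pacf_from_acf(rho):
    """Durbin-Levinson recursion: PACF phi_{tau,tau} from the ACF rho[0..n]."""
    n = len(rho) - 1
    phi = np.zeros((n + 1, n + 1))
    phi[1, 1] = rho[1]
    for k in range(2, n + 1):
        num = rho[k] - np.dot(phi[k - 1, 1:k], rho[k - 1:0:-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], rho[1:k])
        phi[k, k] = num / den
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
    return np.array([phi[k, k] for k in range(1, n + 1)])

theta = 0.9  # as in Figure 6.11

# MA(1) ACF: rho(1) = theta / (1 + theta^2), rho(k) = 0 for k >= 2
rho = np.zeros(11)
rho[0] = 1.0
rho[1] = theta / (1 + theta**2)

pacf_dl = pacf_from_acf(rho)
# closed form, convention X_t = Z_t + theta Z_{t-1}
taus = np.arange(1, 11)
pacf_cf = -((-theta) ** taus) * (1 - theta**2) / (1 - theta ** (2 * (taus + 1)))
```

The alternating, slowly decaying values match the PACF panel of Figure 6.11.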
Remark 6.7. The PACF of ARMA(p,q)

An invertible ARMA model has an infinite AR representation, hence the PACF will not cut off.

The following table summarizes the behaviour of the ACF and PACF of causal and invertible ARMA models (see Shumway and Stoffer (2000)).
Figure 6.11: ACF and PACF of the MA(1) process x_t = z_t + 0.9z_{t−1}.

        AR(p)                 MA(q)                 ARMA(p,q)
ACF     Tails off             Cuts off after lag q  Tails off
PACF    Cuts off after lag p  Tails off             Tails off
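The table can be illustrated numerically with the theoretical ACFs from this chapter and the Durbin–Levinson recursion. The sketch below (Python with NumPy, illustrative only; the cut-offs are exact, while "tails off" is checked only up to lag 10) uses the AR(2) of Example 6.4, the MA(1) of Figure 6.11, and the ARMA(1,1) of Figure 6.2:

```python
import numpy as np

def pacf_from_acf(rho):
    """Durbin-Levinson recursion: PACF phi_{tau,tau} from the ACF rho[0..n]."""
    n = len(rho) - 1
    phi = np.zeros((n + 1, n + 1))
    phi[1, 1] = rho[1]
    for k in range(2, n + 1):
        num = rho[k] - np.dot(phi[k - 1, 1:k], rho[k - 1:0:-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], rho[1:k])
        phi[k, k] = num / den
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
    return np.array([phi[k, k] for k in range(1, n + 1)])

lags = np.arange(11)

# AR(2) from Example 6.4: ACF tails off, PACF cuts off after lag p = 2
rho_ar2 = (16 * 2.0 ** (-lags) - 5 * 5.0 ** (-lags)) / 11
pacf_ar2 = pacf_from_acf(rho_ar2)

# MA(1) with theta = 0.9: ACF cuts off after lag q = 1, PACF tails off
theta = 0.9
rho_ma1 = np.zeros(11)
rho_ma1[0], rho_ma1[1] = 1.0, theta / (1 + theta**2)
pacf_ma1 = pacf_from_acf(rho_ma1)

# ARMA(1,1) with phi = 0.9, theta = 0.5 (Figure 6.2): both tail off
phi1, th1 = 0.9, 0.5
rho_arma = np.ones(11)
rho_arma[1:] = ((1 + th1 * phi1) * (phi1 + th1)
                / (1 + 2 * th1 * phi1 + th1**2) * phi1 ** (lags[1:] - 1.0))
pacf_arma = pacf_from_acf(rho_arma)
```

Note that φ_{22} of the AR(2) comes out equal to φ_2 = −0.1, an instance of the general fact that φ_{pp} = φ_p for a causal AR(p).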
