
Series and series representation

PRACTICAL TIME SERIES ANALYSIS


THISTLETON AND SADIGOV
Objectives

 Recall infinite series and their convergence


 Examine geometric series
 Represent rational functions as a geometric
series
Sequence and series
 A sequence a_n is a list of numbers in a definite order:

a_1, a_2, a_3, …, a_n, …

 If the limit of the sequence exists, i.e.,

lim_{n→∞} a_n = a,

then we say the sequence is convergent.


Examples
 a_n = n/(n + 1):

1/2, 2/3, 3/4, …, n/(n + 1), … → 1

 a_n = 3^n:

3, 9, 27, …, 3^n, …

 a_n = n:

1, 2, 3, …, n, …

 a_n = 1/n²:

1, 1/4, 1/9, …, 1/n², … → 0
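A quick numerical illustration (a sketch added here, not part of the original slides; plain Python, no extra libraries) that tabulates two of the sequences above:

```python
# Print a few terms of a_n = n/(n+1) and a_n = 1/n^2 to watch them
# approach their limits 1 and 0, respectively.
for n in [1, 10, 100, 1000, 10000]:
    print(n, n / (n + 1), 1 / n**2)
```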
Partial sums

 The partial sums of a sequence a_n are defined as

s_n = a_1 + a_2 + ⋯ + a_n

 s_1 = a_1
 s_2 = a_1 + a_2
 s_3 = a_1 + a_2 + a_3
 ⋮
Series
 If the partial sums s_n converge to a number s, then we say the infinite
series ∑_{k=1}^{∞} a_k is convergent and is equal to s:

∑_{k=1}^{∞} a_k = lim_{n→∞} s_n = lim_{n→∞} (a_1 + a_2 + ⋯ + a_n) = s

 Otherwise, we say ∑_{k=1}^{∞} a_k is divergent.
Some convergent series

 ∑_{k=1}^{∞} 1/2^k = 1

 ∑_{k=1}^{∞} 1/k² = π²/6

 ∑_{k=1}^{∞} (−1)^{k+1}/k = ln 2
Some divergent series

 ∑_{k=1}^{∞} 3^k

 ∑_{k=1}^{∞} (2k + 1)

 ∑_{k=1}^{∞} 1/k
Absolute convergence
 A series ∑_{k=1}^{∞} a_k is absolutely convergent if

∑_{k=1}^{∞} |a_k|

is convergent.

 Absolute convergence implies convergence.


Convergence tests

 Integral test
 Comparison test
 Limit comparison test
 Alternating series test
 Ratio test
 Root test
Geometric series

 Geometric sequence

{ar^{n−1}}_{n=1}^{∞} = {a, ar, ar², ar³, …}

 Geometric series

∑_{k=1}^{∞} ar^{k−1} = a/(1 − r)   if |r| < 1.

 ∑_{k=1}^{∞} 1/2^k = 1/2 + 1/4 + 1/8 + ⋯ = (1/2)/(1 − 1/2) = 1,

since a = 1/2, r = 1/2.
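As a small numerical check (a sketch, not part of the slides), the partial sums of the geometric series above can be compared with the closed form a/(1 − r):

```python
# Geometric series with a = 1/2, r = 1/2: partial sum vs. a/(1 - r).
a, r = 0.5, 0.5
partial = sum(a * r**(k - 1) for k in range(1, 51))  # first 50 terms
print(partial, a / (1 - r))                          # both are (numerically) 1.0
```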
Series representation

 Series representation for 1/(1 − x), where a = 1, r = x:

1/(1 − x) = 1 + x + x² + x³ + ⋯

if |x| < 1.
Series representation cont.
 Series representation for 1/[(1 − x)(1 − x/2)]:

1/[(1 − x)(1 − x/2)] = 2/(1 − x) + (−1)/(1 − x/2) = ∑_{k=0}^{∞} (2 − 1/2^k) x^k

if |x| < 1 and |x/2| < 1, i.e., if |x| < 1.
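A minimal numerical sketch (not from the slides) that checks the partial-fraction expansion and the resulting series at one test point, here x = 0.3 (an arbitrary choice with |x| < 1):

```python
# 1/((1-x)(1-x/2))  vs.  2/(1-x) - 1/(1-x/2)  vs.  sum_k (2 - 1/2^k) x^k
x = 0.3
lhs = 1 / ((1 - x) * (1 - x / 2))
partial_fractions = 2 / (1 - x) - 1 / (1 - x / 2)
series = sum((2 - 0.5**k) * x**k for k in range(100))  # truncated series
print(lhs, partial_fractions, series)                  # all three agree
```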
Complex functions
Assume z is a complex number

a/(1 − z) = a + az + az² + ⋯ = ∑_{k=1}^{∞} az^{k−1}

if |z| < 1.
What We’ve Learned

 The definition of infinite series and their convergence

 A geometric series is convergent if the multiplier has norm less than 1

 How to represent some rational functions as a geometric series
Backward shift
operator

PRACTICAL TIME SERIES ANALYSIS


THISTLETON AND SADIGOV
Objectives

 Define and utilize the backward shift operator
Definition

 X_1, X_2, X_3, …

 The backward shift operator B is defined as

B X_t = X_{t−1}

 B² X_t = B(B X_t) = B X_{t−1} = X_{t−2}

 B^k X_t = X_{t−k}
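In code, the backward shift corresponds to lagging a series. A minimal sketch (assuming pandas is available; the data values are made up for illustration):

```python
import pandas as pd

x = pd.Series([5.0, 3.0, 7.0, 2.0, 8.0])
print(x.shift(1))   # B X_t   = X_{t-1}: values move down one step, first entry becomes NaN
print(x.shift(2))   # B^2 X_t = X_{t-2}
```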
Example – Random Walk
X_t = X_{t−1} + Z_t

X_t = B X_t + Z_t

(1 − B) X_t = Z_t

ϕ(B) X_t = Z_t
where
ϕ(B) = 1 − B
Example – MA(2) process
X_t = Z_t + 0.2Z_{t−1} + 0.04Z_{t−2}

X_t = Z_t + 0.2B Z_t + 0.04B² Z_t

X_t = (1 + 0.2B + 0.04B²) Z_t

X_t = β(B) Z_t
where
β(B) = 1 + 0.2B + 0.04B²
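Since β(B) applied to white noise is just a convolution of Z_t with the coefficients (1, 0.2, 0.04), this MA(2) model can be simulated in a couple of lines. A sketch assuming NumPy (the seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=500)                                   # white noise Z_t
X = np.convolve(Z, [1, 0.2, 0.04], mode="full")[:len(Z)]   # X_t = Z_t + 0.2 Z_{t-1} + 0.04 Z_{t-2}
```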
Example – AR(2) process
X_t = 0.2X_{t−1} + 0.3X_{t−2} + Z_t

X_t = 0.2B X_t + 0.3B² X_t + Z_t

(1 − 0.2B − 0.3B²) X_t = Z_t

ϕ(B) X_t = Z_t
where
ϕ(B) = 1 − 0.2B − 0.3B²
MA(q) process (with a drift)

X_t = μ + β₀Z_t + β₁Z_{t−1} + ⋯ + β_qZ_{t−q}

Then,

X_t = μ + β₀Z_t + β₁B Z_t + ⋯ + β_qB^q Z_t

X_t − μ = β(B) Z_t,
where
β(B) = β₀ + β₁B + ⋯ + β_qB^q.
AR(p) process

X_t = ϕ₁X_{t−1} + ϕ₂X_{t−2} + ⋯ + ϕ_pX_{t−p} + Z_t

Then,

X_t − ϕ₁X_{t−1} − ϕ₂X_{t−2} − ⋯ − ϕ_pX_{t−p} = Z_t

X_t − ϕ₁B X_t − ϕ₂B² X_t − ⋯ − ϕ_pB^p X_t = Z_t

ϕ(B) X_t = Z_t,
where
ϕ(B) = 1 − ϕ₁B − ϕ₂B² − ⋯ − ϕ_pB^p.
What We’ve Learned

 The definition of the Backward shift operator

 How to utilize the backward shift operator to write MA(q) and AR(p) processes
Introduction to
Invertibility

PRACTICAL TIME SERIES ANALYSIS


THISTLETON AND SADIGOV
Objectives

 Learn invertibility of a stochastic process


Two MA(1) models

 Model 1

X_t = Z_t + 2Z_{t−1}

 Model 2

X_t = Z_t + (1/2)Z_{t−1}
Theoretical Auto Covariance
Function of Model 1
γ(k) = Cov(X_{t+k}, X_t) = Cov(Z_{t+k} + 2Z_{t+k−1}, Z_t + 2Z_{t−1})

If k > 1, then t + k − 1 > t, so all the Z's involved are uncorrelated, and thus γ(k) = 0.

If k = 0, then
γ(0) = Cov(Z_t + 2Z_{t−1}, Z_t + 2Z_{t−1}) = Cov(Z_t, Z_t) + 4Cov(Z_{t−1}, Z_{t−1}) = σ_Z² + 4σ_Z² = 5σ_Z².

If k = 1, then
γ(1) = Cov(Z_{t+1} + 2Z_t, Z_t + 2Z_{t−1}) = Cov(2Z_t, Z_t) = 2σ_Z².

If k < 0, then
γ(k) = γ(−k).
Auto Covariance Function and
ACF of Model 1
γ(k) = 0,       k > 1
       2σ_Z²,   k = 1
       5σ_Z²,   k = 0
       γ(−k),   k < 0

Then, since ρ(k) = γ(k)/γ(0),

ρ(k) = 0,       k > 1
       2/5,     k = 1
       1,       k = 0
       ρ(−k),   k < 0
ACF
ACF of Model 2
ρ(1) = γ(1)/γ(0)
     = Cov(Z_{t+1} + (1/2)Z_t, Z_t + (1/2)Z_{t−1}) / Cov(Z_t + (1/2)Z_{t−1}, Z_t + (1/2)Z_{t−1})
     = (1/2)σ_Z² / ((1 + 1/4)σ_Z²)
     = (1/2)/(5/4) = 2/5.

Thus we obtain the same ACF:

ρ(k) = 0,       k > 1
       2/5,     k = 1
       1,       k = 0
       ρ(−k),   k < 0
The ACFs are the same!
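A simulation sketch (not from the slides; assumes NumPy, with an arbitrary seed and sample size) that estimates the sample ACF of both models and shows they agree:

```python
import numpy as np

def sample_acf(x, max_lag):
    # Plain (biased) sample autocorrelation, no extra libraries.
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return [np.dot(x[k:], x[:len(x) - k]) / len(x) / c0 for k in range(max_lag + 1)]

rng = np.random.default_rng(1)
Z = rng.normal(size=100_000)
X1 = Z[1:] + 2.0 * Z[:-1]   # Model 1: X_t = Z_t + 2 Z_{t-1}
X2 = Z[1:] + 0.5 * Z[:-1]   # Model 2: X_t = Z_t + (1/2) Z_{t-1}
print(sample_acf(X1, 2))    # approx [1, 0.4, 0]
print(sample_acf(X2, 2))    # approx [1, 0.4, 0]
```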
Inverting through backward
substitution
MA(1) process
X_t = Z_t + βZ_{t−1},

Z_t = X_t − βZ_{t−1} = X_t − β(X_{t−1} − βZ_{t−2}) = X_t − βX_{t−1} + β²Z_{t−2}

Continuing in this manner,

Z_t = X_t − βX_{t−1} + β²X_{t−2} − β³X_{t−3} + ⋯

i.e.,

X_t = Z_t + βX_{t−1} − β²X_{t−2} + β³X_{t−3} − ⋯

We 'inverted' the MA(1) process into an AR(∞) process.


Inverting using Backward shift
operator
X_t = β(B) Z_t

where

β(B) = 1 + βB

Then, we find Z_t by inverting the polynomial operator β(B):

β(B)⁻¹ X_t = Z_t
Inverse of 𝛽(𝐵)

β(B)⁻¹ = 1/(1 + βB) = 1 − βB + β²B² − β³B³ + ⋯

Here we expand the inverse of the polynomial operator as if it were a rational
function of a complex number βB.

Thus we obtain

β(B)⁻¹ X_t = X_t − βX_{t−1} + β²X_{t−2} − β³X_{t−3} + ⋯

Z_t = ∑_{n=0}^{∞} (−β)^n X_{t−n}

In order to make sure that the sum on the right is convergent (in the
mean-square sense), we need |β| < 1.

There is an optional reading titled “Mean-square convergence” where


we explain this result.
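A numerical sketch of this inversion (not from the slides; assumes NumPy, with β = 0.5 chosen so that |β| < 1):

```python
import numpy as np

rng = np.random.default_rng(2)
beta, n = 0.5, 2_000
Z = rng.normal(size=n)
X = Z + beta * np.concatenate(([0.0], Z[:-1]))          # MA(1): X_t = Z_t + beta Z_{t-1}

t = n - 1
Z_hat = sum((-beta)**k * X[t - k] for k in range(60))   # truncated sum of (-beta)^k X_{t-k}
print(Z_hat, Z[t])                                      # essentially identical
```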
Invertibility - Definition
Definition:

X_t is a stochastic process.

Z_t are the innovations, i.e., random disturbances or white noise.

X_t is called invertible if Z_t = ∑_{k=0}^{∞} π_k X_{t−k}, where ∑_{k=0}^{∞} |π_k| is convergent.
Model 1 vs Model 2

 Model 1 is not invertible since

∑_{k=0}^{∞} |π_k| = ∑_{k=0}^{∞} 2^k is divergent.

 Model 2 is invertible since

∑_{k=0}^{∞} |π_k| = ∑_{k=0}^{∞} (1/2)^k is a geometric series, hence convergent.
Model choice

 For 'invertibility' to hold, we choose Model 2, since |1/2| < 1.

 This way, the ACF uniquely determines the MA process.


What We’ve Learned
 The definition of invertibility of a stochastic process

 The invertibility condition guarantees a unique MA process corresponding to an observed ACF
Invertibility and
stationarity conditions

PRACTICAL TIME SERIES ANALYSIS


THISTLETON AND SADIGOV
Objectives

 Articulate invertibility condition for MA(q) processes

 Discover stationarity condition for AR(p) processes

 Relate MA and AR processes through duality


MA(q) process
X_t = β₀Z_t + β₁Z_{t−1} + ⋯ + β_qZ_{t−q}

Using the backward shift operator,

X_t = (β₀ + β₁B + ⋯ + β_qB^q) Z_t = β(B) Z_t

We obtain the innovations Z_t in terms of present and past values of X_t:

Z_t = β(B)⁻¹ X_t = (α₀ + α₁B + α₂B² + ⋯) X_t

For this to hold, “complex roots of the polynomial 𝛽(𝐵) must lie outside
of the unit circle where 𝐵 is regarded as complex variable”.
Invertibility condition for MA(q)
MA(q) process is invertible if the roots of the polynomial

β(B) = β₀ + β₁B + ⋯ + β_qB^q

all lie outside the unit circle, where we regard 𝐵 as a complex variable
(not an operator).

(Proof is done using mean-square convergence, see optional reading)


EX: MA(1) process
 X_t = Z_t + βZ_{t−1}

 β(B) = 1 + βB

 In this case there is only one (real) root, B = −1/β.

 |−1/β| > 1 ⇒ |β| < 1.

 Then, Z_t = ∑_{k=0}^{∞} (−β)^k B^k X_t = ∑_{k=0}^{∞} (−β)^k X_{t−k}
Example – MA(2) process

X_t = Z_t + (5/6)Z_{t−1} + (1/6)Z_{t−2}

Then,

X_t = β(B) Z_t

where

β(B) = 1 + (5/6)B + (1/6)B²
Example cont.
1 + (5/6)z + (1/6)z² = 0

z₁ = −2, z₂ = −3   (both outside the unit circle)
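A quick root check (a sketch assuming NumPy; np.roots takes coefficients from the highest power down):

```python
import numpy as np

# beta(z) = 1 + (5/6) z + (1/6) z^2
roots = np.roots([1/6, 5/6, 1])
print(roots, np.abs(roots) > 1)   # roots -2 and -3, both outside the unit circle
```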
Example cont.

β(B)⁻¹ = 1/(1 + (5/6)B + (1/6)B²) = 3/(1 + B/2) − 2/(1 + B/3)

β(B)⁻¹ = ∑_{k=0}^{∞} [3(−1/2)^k − 2(−1/3)^k] B^k

Z_t = ∑_{k=0}^{∞} [3(−1/2)^k − 2(−1/3)^k] B^k X_t

Z_t = ∑_{k=0}^{∞} π_k B^k X_t = ∑_{k=0}^{∞} π_k X_{t−k}

where

π_k = 3(−1/2)^k − 2(−1/3)^k
MA(2) process ⟹ AR(∞) process
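A short sketch (assuming NumPy) that checks the closed form for π_k against the power-series coefficients of 1/β(B), obtained by long division:

```python
import numpy as np

k = np.arange(10)
pi_closed_form = 3 * (-0.5)**k - 2 * (-1/3)**k

# Long division: coefficients c_j with beta(B) * (c_0 + c_1 B + ...) = 1.
beta = [1, 5/6, 1/6]
c = np.zeros(10)
c[0] = 1.0
for j in range(1, 10):
    c[j] = -(beta[1] * c[j - 1] + (beta[2] * c[j - 2] if j >= 2 else 0.0))

print(np.allclose(pi_closed_form, c))   # True
```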
Stationarity condition for AR(p)
AR(p) process
X_t = ϕ₁X_{t−1} + ϕ₂X_{t−2} + ⋯ + ϕ_pX_{t−p} + Z_t

is (weakly) stationary if the roots of the polynomial

ϕ(B) = 1 − ϕ₁B − ϕ₂B² − ⋯ − ϕ_pB^p

all lie outside the unit circle, where we regard 𝐵 as a complex variable
(not an operator).
AR(1) process

X_t = ϕ₁X_{t−1} + Z_t ⟹ (1 − ϕ₁B) X_t = Z_t

ϕ(B) = 1 − ϕ₁B

ϕ(z) = 1 − ϕ₁z = 0 ⟹ z = 1/ϕ₁

|z| = |1/ϕ₁| > 1 ⇒ |ϕ₁| < 1

Thus, when |ϕ₁| < 1, the AR(1) process is stationary, and

X_t = 1/(1 − ϕ₁B) Z_t = (1 + ϕ₁B + ϕ₁²B² + ⋯) Z_t = ∑_{k=0}^{∞} ϕ₁^k Z_{t−k}
Another look at 𝜙1

Taking the variance of both sides,

Var(X_t) = Var(∑_{k=0}^{∞} ϕ₁^k Z_{t−k}) = ∑_{k=0}^{∞} ϕ₁^{2k} σ_Z² = σ_Z² ∑_{k=0}^{∞} ϕ₁^{2k}

which is a convergent geometric series if ϕ₁² < 1, i.e., if |ϕ₁| < 1.
AR(𝑝) process ⟹ MA(∞) process
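A simulation sketch (assuming NumPy; ϕ₁ = 0.6 and σ_Z² = 1 are arbitrary choices with |ϕ₁| < 1) comparing the sample variance of an AR(1) with σ_Z²/(1 − ϕ₁²):

```python
import numpy as np

rng = np.random.default_rng(3)
phi1, n = 0.6, 200_000
Z = rng.normal(size=n)
X = np.zeros(n)
for t in range(1, n):
    X[t] = phi1 * X[t - 1] + Z[t]    # AR(1): X_t = phi1 X_{t-1} + Z_t

print(X.var(), 1.0 / (1 - phi1**2))  # both approx 1.5625
```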
Duality between AR and MA
processes
Under invertibility condition of MA(q),

MA(q) ⟹ AR(∞)

Under stationarity condition of AR(p)

AR(p) ⟹ MA(∞)
What We’ve Learned

 Invertibility condition for MA(q) processes


 Stationarity condition for AR(p) processes
 Duality between MA and AR processes
Mean Square
Convergence
PRACTICAL TIME SERIES ANALYSIS
THISTLETON AND SADIGOV
Objectives

 Learn mean-square convergence

 Formulate necessary and sufficient


condition for invertibility of MA(1) process
Mean-square convergence
Let
𝑋1 , 𝑋2 , 𝑋3 , …

be a sequence of random variables (i.e. a stochastic process).

We say that X_n converges to a random variable X in the mean-square sense if

E[(X_n − X)²] → 0 as n → ∞.
MA(1) model

We inverted the MA(1) model

X_t = Z_t + βZ_{t−1}

as

Z_t = ∑_{k=0}^{∞} (−β)^k X_{t−k}

The infinite sum above converges in the mean-square sense under a condition on β.
Auto covariance function

γ(k) = 0,              k > 1
       βσ_Z²,          k = 1
       (1 + β²)σ_Z²,   k = 0
       γ(−k),          k < 0
Series convergence
Let's find the values of β for which the partial sum

∑_{k=0}^{n} (−β)^k X_{t−k}

converges to Z_t in the mean-square sense.

E[(∑_{k=0}^{n} (−β)^k X_{t−k} − Z_t)²]
  = E[(∑_{k=0}^{n} (−β)^k X_{t−k})²] − 2E[(∑_{k=0}^{n} (−β)^k X_{t−k}) Z_t] + E[Z_t²]
  = E[∑_{k=0}^{n} β^{2k} X_{t−k}²] + 2E[∑_{k=0}^{n−1} (−β)^{2k+1} X_{t−k} X_{t−k−1}] − 2E[X_t Z_t] + σ_Z²
  = ∑_{k=0}^{n} β^{2k} E[X_{t−k}²] − 2 ∑_{k=0}^{n−1} β^{2k+1} E[X_{t−k} X_{t−k−1}] − 2E[Z_t²] + σ_Z²

Using E[X_{t−k}²] = γ(0) = (1 + β²)σ_Z², E[X_{t−k} X_{t−k−1}] = γ(1) = βσ_Z², and
E[Z_t²] = σ_Z², this expression simplifies to σ_Z² β^{2n+2}.
To get

E[(∑_{k=0}^{n} (−β)^k X_{t−k} − Z_t)²] → 0 as n → ∞,

we need

σ_Z² β^{2n+2} → 0 as n → ∞.

Thus, |β| < 1, i.e.,

|−1/β| > 1,

i.e., the zero of the polynomial

β(B) = 1 + βB

lies outside of the unit circle.


What We’ve Learned

 Definition of the mean square convergence

 Necessary and sufficient condition for


invertibility of MA(1) process
Difference equations

PRACTICAL TIME SERIES ANALYSIS


THISTLETON AND SADIGOV
Objectives

 Recall and solve difference equations


Difference equation

 General term of a sequence is given, ex: 𝑎𝑛 = 2𝑛 + 1. So,

3, 5, 7, …

 General term not given, but a relation is given, ex:

𝑎𝑛 = 5𝑎𝑛−1 − 6𝑎𝑛−2

 This is a difference equation (a recurrence relation)


How to solve difference equations?
 We look for a solution of the form

a_n = λ^n

 For the previous problem,

λ^n = 5λ^{n−1} − 6λ^{n−2}

We simplify:

λ² − 5λ + 6 = 0

 This is the auxiliary equation or characteristic equation.

 λ = 2, λ = 3
 a_n = c₁2^n + c₂3^n
 With some initial conditions, say a_0 = 3, a_1 = 8,

we get

c₁ + c₂ = 3
2c₁ + 3c₂ = 8

Thus,

c₁ = 1, c₂ = 2.
Solution
a_n = 2^n + 2·3^n

is the solution of the 2nd-order difference equation

a_n = 5a_{n−1} − 6a_{n−2}
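A two-line check in Python (a sketch, not part of the slides):

```python
# Recursion a_n = 5 a_{n-1} - 6 a_{n-2} with a_0 = 3, a_1 = 8 vs. closed form 2^n + 2*3^n.
a = [3, 8]
for n in range(2, 10):
    a.append(5 * a[n - 1] - 6 * a[n - 2])
print(a)
print([2**n + 2 * 3**n for n in range(10)])   # identical lists
```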
𝑘-th order difference equation
a_n = β₁a_{n−1} + β₂a_{n−2} + ⋯ + β_ka_{n−k}

Its characteristic equation is

λ^k − β₁λ^{k−1} − ⋯ − β_{k−1}λ − β_k = 0

Then we look for the solutions of the characteristic equation. Say all k
solutions are distinct real numbers, λ₁, λ₂, …, λ_k; then

a_n = c₁λ₁^n + c₂λ₂^n + ⋯ + c_kλ_k^n

The coefficients c_j are determined using the initial values.


Example - Fibonacci sequence

Fibonacci sequence is defined as follows:

1, 1, 2, 3, 5, 8, 13, 21, …

i.e., every term from the 3rd term on is the sum of the previous two terms.

Question: What is the general term, 𝑎𝑛 , of the Fibonacci sequence?


Formulation


We are looking for a sequence {a_n}_{n=0}^{∞} such that

a_n = a_{n−1} + a_{n−2}

where a_0 = 1, a_1 = 1.

The characteristic equation becomes

λ² − λ − 1 = 0

Then λ₁ = (1 − √5)/2 and λ₂ = (1 + √5)/2.

Thus

a_n = c₁((1 − √5)/2)^n + c₂((1 + √5)/2)^n

Use the initial data:

c₁ + c₂ = 1

c₁(1 − √5)/2 + c₂(1 + √5)/2 = 1
General term of Fibonacci
sequence
We obtain

c₁ = (5 − √5)/10 = −(1/√5)·(1 − √5)/2

c₂ = (5 + √5)/10 = (1/√5)·(1 + √5)/2

a_n = −(1/√5)((1 − √5)/2)^{n+1} + (1/√5)((1 + √5)/2)^{n+1}
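A sketch (plain Python) comparing the closed form with the recursion:

```python
from math import sqrt

s5 = sqrt(5)
closed = [round(-(1 / s5) * ((1 - s5) / 2)**(n + 1) + (1 / s5) * ((1 + s5) / 2)**(n + 1))
          for n in range(10)]

fib = [1, 1]
for n in range(2, 10):
    fib.append(fib[n - 1] + fib[n - 2])

print(closed)   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(fib)      # same list
```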
Relation to differential equations
k-th order linear ordinary differential equation:

y^(k) = β₁y^(k−1) + ⋯ + β_{k−1}y′ + β_k y

The trial solution y = e^{λt} gives the characteristic equation

λ^k − β₁λ^{k−1} − ⋯ − β_{k−1}λ − β_k = 0

Then we solve the characteristic equation.


What We’ve Learned

 Definition
of difference equations and
how to solve them
Yule-Walker Equations

PRACTICAL TIME SERIES ANALYSIS


THISTLETON AND SADIGOV
Objectives

 Introduce Yule – Walker equations

 Obtain ACF of AR processes using Yule – Walker


equations
Procedure
 We assume stationarity in advance (an a priori assumption)
 Take the product of the AR model with X_{t−k}
 Take expectations of both sides
 Use the definition of covariance, and divide by γ(0) = σ_X²
 Get a difference equation for ρ(k), the ACF of the process
 This set of equations is called the Yule-Walker equations
 Solve the difference equation
Example
We have an AR(2) process
X_t = (1/3)X_{t−1} + (1/2)X_{t−2} + Z_t    (∗)

The polynomial

ϕ(B) = 1 − (1/3)B − (1/2)B²

has real roots (−2 ± √76)/6, both of which have magnitude greater than 1, so the
roots lie outside the unit circle. Thus, this AR(2) process is a stationary process.
Example cont.
Note that if E[X_t] = μ, then

E[X_t] = (1/3)E[X_{t−1}] + (1/2)E[X_{t−2}] + E[Z_t]

μ = (1/3)μ + (1/2)μ

μ = 0

Multiply both sides of (∗) by X_{t−k} and take expectations:

E[X_{t−k}X_t] = (1/3)E[X_{t−k}X_{t−1}] + (1/2)E[X_{t−k}X_{t−2}] + E[X_{t−k}Z_t]
Example cont.
Since μ = 0, and E[X_{t−k}Z_t] = 0 for k ≥ 1,

γ(−k) = (1/3)γ(−k + 1) + (1/2)γ(−k + 2)

Since γ(k) = γ(−k) for any k,

γ(k) = (1/3)γ(k − 1) + (1/2)γ(k − 2)

Divide by γ(0) = σ_X²:

ρ(k) = (1/3)ρ(k − 1) + (1/2)ρ(k − 2)

This set of equations is called the Yule-Walker equations.


Solve the difference equation

We look for a solution of the form ρ(k) = λ^k.

λ² − (1/3)λ − 1/2 = 0

The roots are λ₁ = (2 + √76)/12 and λ₂ = (2 − √76)/12, thus

ρ(k) = c₁((2 + √76)/12)^k + c₂((2 − √76)/12)^k
Finding 𝑐1 , 𝑐2
Use constraints to obtain the coefficients:

ρ(0) = 1 ⇒ c₁ + c₂ = 1

And for k = p − 1 = 2 − 1 = 1, using ρ(k) = ρ(−k):

ρ(1) = (1/3)ρ(0) + (1/2)ρ(−1) ⇒ ρ(1) = 2/3 ⇒ c₁(2 + √76)/12 + c₂(2 − √76)/12 = 2/3
Solve the system for 𝑐1 , 𝑐2

c₁ + c₂ = 1

c₁(2 + √76)/12 + c₂(2 − √76)/12 = 2/3

Then,

c₁ = (38 + 3√76)/76 ≈ 0.844  and  c₂ = (38 − 3√76)/76 ≈ 0.156
ACF of the AR(2) model

For any 𝑘 ≥ 0,

ρ(k) = ((38 + 3√76)/76)((2 + √76)/12)^k + ((38 − 3√76)/76)((2 − √76)/12)^k

and

ρ(k) = ρ(−k)
Simulation
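A minimal simulation-style check (a sketch assuming NumPy) that the closed-form ρ(k) satisfies the Yule-Walker recursion:

```python
import numpy as np

lam1, lam2 = (2 + np.sqrt(76)) / 12, (2 - np.sqrt(76)) / 12
c1, c2 = (38 + 3 * np.sqrt(76)) / 76, (38 - 3 * np.sqrt(76)) / 76

rho = [1.0, 2 / 3]                            # rho(0), rho(1)
for k in range(2, 10):
    rho.append(rho[k - 1] / 3 + rho[k - 2] / 2)

closed = [c1 * lam1**k + c2 * lam2**k for k in range(10)]
print(np.allclose(rho, closed))               # True
```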
What We’ve Learned

 The Yule-Walker equations are a set of difference equations governing the ACF of the underlying AR process

 How to find the ACF of an AR process using the Yule-Walker equations
