
Thistleton and Sadigov, Practical Time Series Analysis, Week 2

Stationarity: Properties and Examples

For a Weakly Stationary Process, −1 ≤ ρ(τ) ≤ 1

This is, of course, perfectly analogous to the property that −1 ≤ ρ ≤ 1 from elementary
statistics. If you have had a linear algebra course, this may feel familiar as well: it is the
same idea as the Cauchy–Schwarz inequality, |xᵀy| ≤ ‖x‖₂ ‖y‖₂. We know variances are
non-negative, so set up a linear combination

V[a X_1 + b X_2] ≥ 0

In particular, in the spirit of autocorrelation, set up for lag spacing 𝜏

𝑉[𝑎 𝑋(𝑡) + 𝑏 𝑋(𝑡 + 𝜏)] ≥ 0

Your probability teacher probably told you (time and time again) that

𝑉[𝑋 + 𝑌] = 𝑉[𝑋] + 𝑉[𝑌] + 2 𝑐𝑜𝑣(𝑋, 𝑌)

As well as

V[aX] = a² V[X]

So, immediately,

V[a X(t) + b X(t+τ)] = a² V[X(t)] + b² V[X(t+τ)] + 2ab cov(X(t), X(t+τ)) ≥ 0

We are assuming weak stationarity, so the variances do not depend on time; write
V[X(t)] = V[X(t+τ)] = σ². Then

a² σ² + b² σ² + 2ab cov(X(t), X(t+τ)) ≥ 0

Two special cases: (1) Let 𝑎 = 𝑏 = 1

2σ² ≥ −2 cov(X(t), X(t+τ)),   σ² ≥ −cov(X(t), X(t+τ))

1 ≥ −cov(X(t), X(t+τ)) / σ² = −γ(τ)/γ(0) = −ρ(τ)

This gives us


𝜌(𝜏) ≥ −1

(2) Let 𝑎 = 1, 𝑏 = −1

It’s your turn: take a moment to show

𝜌(𝜏) ≤ 1
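To check your work: with a = 1 and b = −1 the inequality becomes σ² + σ² − 2 cov(X(t), X(t+τ)) ≥ 0, so cov(X(t), X(t+τ)) ≤ σ², and dividing by σ² = γ(0) gives ρ(τ) = γ(τ)/γ(0) ≤ 1.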

We have already seen a few simple models: noise, random walks, and moving averages. Can we
now show that some of our simple models are, in fact, weakly stationary?

Examples
White Noise
Is it obvious to you that Gaussian white noise is weakly stationary? Consider a discrete
family of independent, identically distributed normal random variables

X_t ~ iid N(μ, σ²)

The mean function 𝜇(𝑡) is obviously constant, so look at

γ(t_1, t_2) = σ²  if t_1 = t_2,  and  0  if t_1 ≠ t_2

And

ρ(t_1, t_2) = 1  if t_1 = t_2,  and  0  if t_1 ≠ t_2

The process is evidently weakly stationary, and we could even show strict stationarity if we wanted to.
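If you want to see this numerically, here is a minimal sketch in Python (the values of μ, σ, the sample size, and the lag range are arbitrary illustrative choices): it simulates Gaussian white noise and checks that the sample autocorrelation is 1 at lag 0 and near 0 at the other lags.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 2.0, 1.5, 10_000
x = rng.normal(mu, sigma, size=n)            # iid N(mu, sigma^2) draws

xc = x - x.mean()                            # deviations from the sample mean
gamma0 = np.mean(xc * xc)                    # sample gamma(0), the variance
for k in range(4):
    gamma_k = np.mean(xc[:n - k] * xc[k:])   # sample autocovariance at lag k
    print(k, gamma_k / gamma0)               # rho(0) = 1; rho(k) near 0 for k >= 1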

Random Walks
Simple random walks are obviously not stationary. Think of a walk with 𝑁 steps built off
of IID 𝑍𝑡 where 𝐸[𝑍𝑡 ] = 𝜇, 𝑉[𝑍𝑡 ] = 𝜎 2 . We would create
X_1 = Z_1
X_2 = X_1 + Z_2
X_3 = X_2 + Z_3 = Z_1 + Z_2 + Z_3
⋮
X_t = X_{t−1} + Z_t = ∑_{i=1}^{t} Z_i


For the mean, using the idea that “the mean of the sum is the sum of the means”:
E[X_t] = E[∑_{i=1}^{t} Z_i] = ∑_{i=1}^{t} E[Z_i] = t·μ

For the variance, using the idea that “the variance of the sum is the sum of the variances
when the random variables are independent”:
V[X_t] = V[∑_{i=1}^{t} Z_i] = ∑_{i=1}^{t} V[Z_i] = t·σ²

(Independent random variables have variances which add; all random variables have
means which add.)

Even if μ = 0, the variance t·σ² still grows as we move along the time series, so a random walk cannot be weakly stationary.
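Here is a minimal sketch of this in Python (the number of walks, the number of steps, and σ are illustrative choices): it simulates many independent zero-mean walks and checks that the variance across walks at time t is close to t·σ².

import numpy as np

rng = np.random.default_rng(1)
n_walks, n_steps, sigma = 5_000, 400, 1.0
z = rng.normal(0.0, sigma, size=(n_walks, n_steps))  # the steps Z_t
x = z.cumsum(axis=1)                 # X_t = Z_1 + ... + Z_t, one walk per row

for t in (10, 100, 400):
    print(t, x[:, t - 1].var())      # should be close to t * sigma**2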

Moving Average Processes, 𝑀𝐴(𝑞)


A moving average process will create a new set of random variables from an old set, just
like the random walk does, but now, for IID Z_t with E[Z_t] = 0 and V[Z_t] = σ_Z², we
build them as

MA(q) process:  X_t = β_0 Z_t + β_1 Z_{t−1} + ⋯ + β_q Z_{t−q}
The parameter q tells us how far back to look along the white noise sequence for our
average. Since the 𝑍𝑡 are independent, we immediately have (using the usual linear
operator results)
E[X_t] = β_0 E[Z_t] + β_1 E[Z_{t−1}] + ⋯ + β_q E[Z_{t−q}] = 0

V[X_t] = β_0² V[Z_t] + β_1² V[Z_{t−1}] + ⋯ + β_q² V[Z_{t−q}] = σ_Z² ∑_{i=0}^{q} β_i²
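For instance, with the illustrative choices q = 2, β_0 = 1, β_1 = 0.7, β_2 = 0.4, and σ_Z² = 1, this gives V[X_t] = 1 + 0.49 + 0.16 = 1.65.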

The autocovariance isn’t all that hard to find either. Consider random variables k steps
apart and set up their covariance.
cov[X_t, X_{t+k}] = cov[β_0 Z_t + β_1 Z_{t−1} + ⋯ + β_q Z_{t−q},
                        β_0 Z_{t+k} + β_1 Z_{t+k−1} + ⋯ + β_q Z_{t+k−q}]
This is a little tricky, but please stay focused. There are two numbers to keep track of, the
lag spacing k and the support of the MA process, q.
Now
𝑐𝑜𝑣[𝑋𝑡 , 𝑋𝑡+𝑘 ] = 𝐸[𝑋𝑡 ⋅ 𝑋𝑡+𝑘 ] − 𝐸[𝑋𝑡 ]𝐸[𝑋𝑡+𝑘 ] = 𝐸[𝑋𝑡 ⋅ 𝑋𝑡+𝑘 ]


Since 𝐸[𝑋𝑡 ] = 0 we really just need

E[X_t · X_{t+k}] = E[(β_0 Z_t + β_1 Z_{t−1} + ⋯ + β_q Z_{t−q}) · (β_0 Z_{t+k} + β_1 Z_{t+k−1} + ⋯ + β_q Z_{t+k−q})]

We can rely on matrix results concerning linear combinations of random variables or just
work directly. The patient among us will write out
E[X_t · X_{t+k}] = β_0 β_0 E[Z_t Z_{t+k}] + β_0 β_1 E[Z_t Z_{t+k−1}] + ⋯ + β_0 β_q E[Z_t Z_{t+k−q}]
                 + β_1 β_0 E[Z_{t−1} Z_{t+k}] + β_1 β_1 E[Z_{t−1} Z_{t+k−1}] + ⋯ + β_1 β_q E[Z_{t−1} Z_{t+k−q}]
                 + ⋯
                 + β_q β_0 E[Z_{t−q} Z_{t+k}] + β_q β_1 E[Z_{t−q} Z_{t+k−1}] + ⋯ + β_q β_q E[Z_{t−q} Z_{t+k−q}]
The key to simplifying this is to notice that, since the Z_t are independent, the expectation
of a product of two different Z's is the product of the expectations, which is zero here
because E[Z_t] = 0; when the indices coincide we get the second moment, which (again
because the mean is zero) is the variance:

E[Z_i · Z_j] = 0  if i ≠ j,   and   E[Z_i · Z_i] = E[Z_i²] = σ_Z²  if i = j
When the lag spacing k is greater than the order of the process q, the subscripts can
never be the same (there is no overlap on the underlying Z_t's) and we have
cov[X_t, X_{t+k}] = 0. When the lag spacing is small enough to have contributions, that is,
if q − k ≥ 0, you can visualize the sum like this (we just need to keep track of the β's):

Z:                  Z_{t−q}   Z_{t−q+1}   …   Z_{t+k−q}   …   Z_{t−1}   Z_t   …   Z_{t+k}
coeff. in X_t:        β_q      β_{q−1}    …    β_{q−k}    …     β_1     β_0   0      0
coeff. in X_{t+k}:     0          0       0      β_q      …   β_{k+1}   β_k   …     β_0

This should make clear that, when k ≤ q,

E[X_t · X_{t+k}] = σ_Z² · ∑_{i=0}^{q−k} β_i β_{i+k}   (no t dependence)

Summing up, then, we have found that


γ(t_1, t_2) = γ(k) = σ_Z² · ∑_{i=0}^{q−k} β_i β_{i+k}  for k ≤ q,   and   γ(k) = 0  for k > q

We know that the mean function is constant, in fact 𝜇(𝑡) = 0 and the autocovariance
function has no t dependence, so we conclude that the MA(q) process is (weakly) stationary.
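As a quick concrete check: for an MA(1) with X_t = Z_t + θ Z_{t−1} (so β_0 = 1 and β_1 = θ), the formula gives γ(0) = σ_Z²(1 + θ²), γ(1) = σ_Z² θ, and γ(k) = 0 for all k > 1.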

Let’s finish this lecture by finding the autocorrelation function. In general


ρ(k) = γ(k) / γ(0)
Obviously, then 𝜌(0) = 1. It is easy to see that
γ(0) = σ_Z² · ∑_{i=0}^{q} β_i β_i = σ_Z² · ∑_{i=0}^{q} β_i²

Finally
ρ(k) = ( ∑_{i=0}^{q−k} β_i β_{i+k} ) / ( ∑_{i=0}^{q} β_i² )
In the next lecture we will simulate an MA(q) process and validate these results
numerically.
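For the impatient, here is a minimal preview sketch in Python (reusing the illustrative coefficients β = (1, 0.7, 0.4) and σ_Z = 1 from above; the sample size is arbitrary). It compares the sample autocorrelation of a simulated MA(2) with the formula for ρ(k) just derived; the two printed columns should agree closely for a sample this long.

import numpy as np

rng = np.random.default_rng(2)
beta = np.array([1.0, 0.7, 0.4])     # beta_0, beta_1, beta_2, so q = 2
q, sigma_z, n = len(beta) - 1, 1.0, 200_000

z = rng.normal(0.0, sigma_z, size=n + q)   # the underlying white noise Z_t
x = np.convolve(z, beta, mode="valid")     # X_t = sum_i beta_i Z_{t-i}

xc = x - x.mean()
gamma0 = np.mean(xc * xc)                  # sample gamma(0)
for k in range(q + 2):
    sample_rho = np.mean(xc[:len(xc) - k] * xc[k:]) / gamma0
    # theoretical rho(k) = sum_{i=0}^{q-k} beta_i beta_{i+k} / sum_i beta_i^2
    theory_rho = beta[:q + 1 - k] @ beta[k:] / (beta @ beta) if k <= q else 0.0
    print(k, round(sample_rho, 4), round(theory_rho, 4))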
