
MATH 416: TIME SERIES ANALYSIS AND FORECASTING

CHAPTER ONE

1.0 INTRODUCTION

1.1 Definition (Time Series): A time series represents a sequence of observations taken over time

Example: X_t could be the number of tourists, the number of deaths, or the amount of rainfall at time t.

1.2 Application of Time series

i) In physical Sciences: Meteorology, Marine Science, Geophysics. We could have the amount
of rainfall in successive days, air temperature in successive hours, days or months.
ii) In Commerce: Sales of products by certain company in successive months (days) over a
certain period.
iii) Demography (Population Studies): The population of a country taken after a specified
period.
iv) Process Control: The problem is to detect changes in the performance of a manufacturing
process by measuring a variable which shows the quality of a process.
v) Binary Process: Arises where observations can take one of only two values, usually denoted
by 0 or 1.
vi) Point Process: Arises where a series of events occurs randomly in time. Here interest is in the
distribution of the number of events occurring in a given time period and in the distribution
of the time intervals between events.

1.3 Objectives of Time series

i) Description: Decomposing a series into a trend, seasonal variation, cyclical variation and
other irregular fluctuations or residuals.
ii) Explanation: When observations are taken on two or more variables, it may be possible
to use the variation in one time series to explain the variation in another time series.
iii) Forecasting: Given observed time series, one may want to predict the future values of the
series.
iv) Control: We observe a time series that measures the quality of a manufacturing process.

1.4 Components of a Time Series

i) Trend: The trend represents the long-term smooth movement of the time series and can
thus be approximated using a smooth curve. This could be a simple regression line, a
quadratic curve or a polynomial of some order.

ii) Seasonal Variation: The seasonal variation represents a time series component which
changes with the seasons. It can thus be approximated by a trigonometric function of the form
X_t = A sin(ωt + θ) or X_t = A cos(ωt + θ), where
A is the amplitude of the oscillation,
θ is the phase of the oscillation,
ω is the frequency of the oscillation, i.e. the number of cycles completed in unit time.
iii) Cyclical Variation:

[Figure: a sketch of X_t against t, showing the trend, the seasonal variation superimposed on it, and the longer cyclical variation.]

Cyclical variation can also be approximated by sine and cosine functions. It is a component
that depends on human factors or, in general, on controllable factors. This contrasts with the
seasonal component, which is usually controlled by natural factors, e.g. the business cycle:
booms followed by recessions.

iv) Random Variation: Random variation represents a component which is uncontrollable.
A variety of short-term erratic forces are at work in this case. It represents the effects of
all factors other than the trend, seasonal variation and cyclical variation.

1.5 Approaches to Time Series Analysis

a) Time domain: The behavior of a series is described in terms of the way in which observations at
different times are related statistically. In this case difference equations are usually used. A
major analytical problem associated with time series analysis in the time domain is the
estimation of the coefficients and the determination of their number. Popular models based on this
approach are the Autoregressive Integrated Moving Average (ARIMA) models.
b) Frequency domain: The behavior of one or more series is described in terms of sinusoidal
fluctuations at various frequencies. In this case, frequency-domain (spectral) transfer functions are
used to specify the structure of the time series. The frequency domain approach presumes that
the time series is a sum of periodic sine and cosine functions, or waves, of different frequencies.
A major feature that distinguishes the two approaches is that, in the time domain approach, the
correlation between adjacent observations is assumed to be best explained by regressing the
present value on past values, an approach based on Wold's decomposition theorem (1938).

1.6 Stationary Time Series: A time series is said to be stationary if its statistical properties do not
change with time.

Definition (Weakly stationary, covariance stationary or second-order stationary): A time series X_t
is said to be weakly stationary if the following conditions are satisfied:

i) E(X_t) = μ for all t ∈ T
ii) E(X_t²) < ∞ for all t ∈ T (the second moment is finite)
iii) E(X_r X_k) = R(k − r), i.e. a function of k − r only, e.g.
E(X_2 X_4) = R(2)
E(X_4 X_6) = R(2)
⇒ E(X_k X_{k+m}) = R(m)

Definition (Strict stationarity): A time series X_t is said to be strictly stationary if the joint
distribution of (X_{t_1}, X_{t_2}, ..., X_{t_k}) and (X_{t_1+t}, X_{t_2+t}, ..., X_{t_k+t}) is the same for all values of t ∈ T and
all k ≥ 1.

1.7 Autocovariance and Autocorrelation Functions

For a stationary time series X_t, the autocovariance function is defined as

R(m) = E[(X_k − μ)(X_{k+m} − μ)]
     = E[X_k X_{k+m} − μ X_k − μ X_{k+m} + μ²]
     = E(X_k X_{k+m}) − μ² − μ² + μ²
⇒ R(m) = E(X_k X_{k+m}) − μ²

When m = 0,

R(0) = E[(X_k − μ)²]

Since E(X) = μ implies E[(X − μ)²] = Var(X),

⇒ R(0) = E[(X_k − μ)²] = Var(X_t), i.e. the variance of the process.

Since the time difference between X_{k+m} and X_k equals the time difference between X_{k−m} and X_k (both equal m),

E(X_{k+m} X_k) = E(X_{k−m} X_k)
⇒ E(X_{k+m} X_k) − μ² = E(X_{k−m} X_k) − μ²
Therefore R(m) = R(−m).

R(m) is symmetric about m = 0 and R(m) attains its maximum value at m = 0.

The Autocorrelation Function: For a stationary time series, the autocorrelation function is defined as

ρ(m) = R(m)/R(0), by analogy with ρ = Cov(X, Y)/(σ_X σ_Y). Note: −1 ≤ ρ(m) ≤ 1.

Example 1: Consider a set of independent and identically distributed random variables {e_t} such
that E(e_t) = 0 and Var(e_t) = σ_e². Let the process X_t be given by X_t = β e_{t−1} + e_t, where β is a
constant. Show that X_t is stationary.

Solution:

E(X_t) = E(β e_{t−1} + e_t)
       = β E(e_{t−1}) + E(e_t)
       = β · 0 + 0
⇒ E(X_t) = 0

E(X_t²) = E(β e_{t−1} + e_t)²
        = E(β² e²_{t−1} + 2β e_{t−1} e_t + e_t²)
        = β² E(e²_{t−1}) + 2β E(e_{t−1} e_t) + E(e_t²)

From the conditions given, Var(e_t) = E(e_t − E(e_t))² = E(e_t²) = σ_e² (given above). Because of
independence, E(e_{t−1} e_t) = E(e_{t−1}) E(e_t) = 0. Since also E(e_t) = E(e_{t−1}) = 0,

⇒ E(X_t²) = β² σ_e² + σ_e² = (β² + 1) σ_e² < ∞, since β is a constant and σ_e² is finite.

Third condition: E(X_k X_{k+m}) should be a function of m only. From X_t = β e_{t−1} + e_t we have
X_{t+m} = β e_{t+m−1} + e_{t+m} and

X_t X_{t+m} = (β e_{t−1} + e_t)(β e_{t+m−1} + e_{t+m})
            = β² e_{t−1} e_{t+m−1} + β e_{t−1} e_{t+m} + β e_{t+m−1} e_t + e_t e_{t+m}
⇒ E(X_t X_{t+m}) = β² E(e_{t−1} e_{t+m−1}) + β E(e_{t−1} e_{t+m}) + β E(e_{t+m−1} e_t) + E(e_t e_{t+m})

Case 1, m = 1:

E(X_t X_{t+1}) = β² E(e_{t−1} e_t) + β E(e_{t−1} e_{t+1}) + β E(e_t²) + E(e_t e_{t+1}) = β σ_e²
(because of independence, only the E(e_t²) term survives)

Case 2, m = −1:

E(X_t X_{t−1}) = β² E(e_{t−1} e_{t−2}) + β E(e²_{t−1}) + β E(e_t e_{t−2}) + E(e_t e_{t−1}) = β σ_e²

Case 3, m = 2:

E(X_t X_{t+2}) = β² E(e_{t−1} e_{t+1}) + β E(e_{t−1} e_{t+2}) + β E(e_t e_{t+1}) + E(e_t e_{t+2}) = 0

Case 4, m = −2:

E(X_t X_{t−2}) = β² E(e_{t−1} e_{t−3}) + β E(e_{t−1} e_{t−2}) + β E(e_t e_{t−3}) + E(e_t e_{t−2}) = 0

⇒ R(m) = 0 for m = ±2, ±3, ..., i.e. R(m) = 0 for |m| ≥ 2.

Therefore

i) E(X_t) = 0
ii) E(X_t X_{t+m}) = (1 + β²) σ_e² for m = 0; β σ_e² for |m| = 1; 0 for |m| ≥ 2.

None of these depends on t, so X_t is a stationary time series.

t
Example 2: Consider a process given as X_t = e_1 + e_2 + ... + e_t = Σ_{i=1}^t e_i, where e_i, i = 1, 2, ..., t, is a
sequence of independent and identically distributed random variables with mean zero and variance
σ_e². Is X_t stationary?

Solution

i) E(X_t) = E(e_1 + e_2 + ... + e_t)
         = E(e_1) + E(e_2) + ... + E(e_t)
         = 0
ii) E(X_t²) = E(e_1 + e_2 + ... + e_t)²
           = E[(e_1 + e_2 + ... + e_t)(e_1 + e_2 + ... + e_t)]
           = E[e_1² + e_2² + ... + e_t² + 2 Σ_{i<j} e_i e_j]
           = E(e_1²) + E(e_2²) + ... + E(e_t²) + 2 Σ_{i<j} E(e_i e_j)
           = σ_e² + σ_e² + ... + σ_e² = t σ_e², which is a function of time.

Therefore, X_t is not a stationary time series.

1.8 Sample Autocovariance and Autocorrelation Functions

Let the time series X_t be stationary and be observed at the time points t = 1, 2, ..., n. This gives
us the observed values x_1, x_2, ..., x_n. The sample autocovariance function is given by

r(m) = (1/n) Σ_{t=1}^{n−m} (x_t − x̄)(x_{t+m} − x̄) for m ≥ 0
     = (1/n)[(x_1 − x̄)(x_{1+m} − x̄) + (x_2 − x̄)(x_{2+m} − x̄) + ... + (x_{n−m} − x̄)(x_n − x̄)]

r(m) is used to estimate the population autocovariance R(m). Note: x̄ = (1/n) Σ_{t=1}^n x_t. The sample
autocorrelation function is given by ρ̂(m) = r(m)/r(0); ρ̂(m) is used to estimate ρ(m), and
r(0) = (1/n) Σ_{t=1}^n (x_t − x̄)². A plot of ρ̂(m) against m is called the correlogram of the time series. A plot
of r(m) against m is called the covariogram of the time series.

The products in r(m) pair each deviation with the deviation m steps ahead:

t      x_t        x_t − x̄
1      x_1        x_1 − x̄
2      x_2        x_2 − x̄
...    ...        ...
m      x_m        x_m − x̄
1+m    x_{1+m}    x_{1+m} − x̄
2+m    x_{2+m}    x_{2+m} − x̄
...    ...        ...
n      x_n        x_n − x̄
Example 3: Given the following observations of a time series x_t for n = 10, find

i) r(1)   ii) r(2)   iii) ρ(1)   iv) ρ(2)

t    1   2   3   4   5   6   7   8   9   10
x_t  47  64  23  71  38  64  55  41  59  48

Solution

i) r(m) = (1/n) Σ_{t=1}^{n−m} (x_t − x̄)(x_{t+m} − x̄)

x̄ = (1/10) Σ_{t=1}^{10} x_t = 510/10 = 51

r(1) = (1/10) Σ_{t=1}^{9} (x_t − 51)(x_{t+1} − 51)

t    x_t   x_t − 51   (x_t − 51)²
1    47    −4         16
2    64    13         169
3    23    −28        784
4    71    20         400
5    38    −13        169
6    64    13         169
7    55    4          16
8    41    −10        100
9    59    8          64
10   48    −3         9

r(1) = (1/10)[(−4)(13) + (13)(−28) + (−28)(20) + ... + (8)(−3)]
     = −1497/10 = −149.7

ii) r(2) = (1/10) Σ_{t=1}^{8} (x_t − 51)(x_{t+2} − 51)
        = (1/10)[(−4)(−28) + (13)(20) + (−28)(−13) + ... + (−10)(−3)]
        = 876/10 = 87.6

iii) r(0) = (1/10) Σ_{t=1}^{10} (x_t − 51)² = 1896/10 = 189.6

Therefore, ρ(1) = r(1)/r(0) = −149.7/189.6 = −0.79

iv) ρ(2) = r(2)/r(0) = 87.6/189.6 = 0.46
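A quick numerical check of these results — a minimal sketch in plain Python (the function name is my own, not from the notes), using the course convention r(m) = (1/n) Σ_{t=1}^{n−m} (x_t − x̄)(x_{t+m} − x̄):

```python
def sample_autocov(x, m):
    # Sample autocovariance r(m) with divisor n, as defined in Section 1.8.
    n = len(x)
    xbar = sum(x) / n
    return sum((x[t] - xbar) * (x[t + m] - xbar) for t in range(n - m)) / n

x = [47, 64, 23, 71, 38, 64, 55, 41, 59, 48]
r0, r1, r2 = (sample_autocov(x, m) for m in (0, 1, 2))
print(r0, r1, r2)        # 189.6, -149.7, 87.6
print(r1 / r0, r2 / r0)  # rho(1) = -0.79, rho(2) = 0.46
```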

CHAPTER TWO: AUTOREGRESSIVE AND MOVING AVERAGE MODELS

2.0 Introduction

These models are based on an observation by Yule (1921, 1927), who noticed that a time series in
which successive values are autocorrelated can be represented as a linear combination of a
sequence of uncorrelated random variables. This representation was later confirmed by
Wold (1938), who showed that every weakly stationary nondeterministic stochastic process X_t
(assume E(X_t) = 0) can be written as a linear combination (or linear filter) of a sequence of
uncorrelated random variables. The linear filter representation is given by

X_t = e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ... = Σ_{j=0}^∞ ψ_j e_{t−j},  ψ_0 = 1  ……………(2.0.1)

The random variables e_i, i = 0, ±1, ±2, ..., are a sequence of uncorrelated random variables from
a fixed distribution with mean E(e_t) = 0, variance Var(e_t) = σ_e², and Cov(e_{t−m}, e_t) = E(e_{t−m} e_t) = 0 for all
m ≠ 0; note that Cov(e_{t−m}, e_t) = Cov(e_t, e_{t+m}). Such a sequence is usually referred to as a white noise
process. Occasionally, we will also call these random variables random shocks. The ψ_j weights
in (2.0.1) are the coefficients in this linear combination; their number can be either finite or infinite.
The models represented by (2.0.1) lead to autocorrelations in X_t.

From (2.0.1), we find that E(X_t) = E(e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ...) = 0 and

R(m) = E[(X_t − μ)(X_{t−m} − μ)] = E(X_t X_{t−m}), since μ = 0.

R(0) = Var(X_t) = E(X_t²)
     = E(e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ...)²
     = E(e_t² + ψ_1² e²_{t−1} + ψ_2² e²_{t−2} + ...)    (the cross terms have zero expectation)
     = E(e_t²) + ψ_1² E(e²_{t−1}) + ψ_2² E(e²_{t−2}) + ...
     = σ_e² + ψ_1² σ_e² + ψ_2² σ_e² + ...
     = σ_e² (1 + ψ_1² + ψ_2² + ...)

⇒ R(0) = σ_e² Σ_{j=0}^∞ ψ_j²  ……………(2.0.2)

Here we used the fact that E(e_{t−i} e_{t−j}) = 0 for i ≠ j. Since E(X_t) = 0, R(m) = E(X_t X_{t−m}):

R(m) = E[(e_t + ψ_1 e_{t−1} + ... + ψ_m e_{t−m} + ψ_{m+1} e_{t−m−1} + ψ_{m+2} e_{t−m−2} + ...)(e_{t−m} + ψ_1 e_{t−m−1} + ψ_2 e_{t−m−2} + ...)]
     = E(ψ_m e²_{t−m} + ψ_1 ψ_{m+1} e²_{t−m−1} + ψ_2 ψ_{m+2} e²_{t−m−2} + ...),  since E(e_t) = 0 and E(e_t²) = σ_e² for every t
     = (ψ_m + ψ_1 ψ_{m+1} + ψ_2 ψ_{m+2} + ...) σ_e²

⇒ R(m) = σ_e² Σ_{j=0}^∞ ψ_j ψ_{m+j}  ……………(2.0.3)

The autocorrelation function is given by

ρ(m) = R(m)/R(0) = (Σ_{j=0}^∞ ψ_j ψ_{m+j}) / (Σ_{j=0}^∞ ψ_j²)  ……………(2.0.4)



Recall that X_t = e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ... was equation (2.0.1). If the number of ψ_j weights in (2.0.1) is
infinite, then some assumptions concerning their convergence are needed. We assume that the weights
converge absolutely (Σ |ψ_j| < ∞). This condition, which is equivalent to the stationarity
condition, guarantees that all moments exist and are independent of t.

Note: From expression (2.0.1), many realistic models result from proper choices
of the ψ weights. If we choose ψ_1 = −θ and ψ_j = 0 for j ≥ 2, then (2.0.1) leads to the model

X_t = e_t − θ e_{t−1}  ……………(2.0.5)

This is usually referred to as the first order moving average process. The other alternative is
the choice ψ_j = φ^j, which leads to a process of the form

X_t = e_t + φ e_{t−1} + φ² e_{t−2} + ...
    = e_t + φ(e_{t−1} + φ e_{t−2} + ...)
    = e_t + φ X_{t−1}  ……………(2.0.6)

Expression (2.0.6) above is usually referred to as the first order autoregressive process. To satisfy
the stationarity condition, we have to restrict the autoregressive parameter such that |φ| < 1.

2.1 First Order Autoregressive Process AR(1)

The first order autoregressive process (AR(1)) is given by the expression X_t = e_t + φ X_{t−1}; we
assume that E(X_t) = 0 and that {e_t} is a white noise process. We now introduce a backward-shift
(or backshift) operator B which shifts time one step back, so that B X_t = X_{t−1}, or in general
B^m X_t = X_{t−m}. Using this, we can write the AR(1) process as

X_t = φ X_{t−1} + e_t
X_t = φ B X_t + e_t
X_t − φ B X_t = e_t
(1 − φB) X_t = e_t  ……………(2.1.1)

NOTE: The application of the backshift operator to a constant results in the constant itself
(B^m ξ = ξ). If we substitute for X_{t−j} successively in (2.0.6) above we obtain:

X_t = φ X_{t−1} + e_t = φ(φ X_{t−2} + e_{t−1}) + e_t
    = φ² X_{t−2} + φ e_{t−1} + e_t
    = φ²(φ X_{t−3} + e_{t−2}) + φ e_{t−1} + e_t
    = φ³ X_{t−3} + φ² e_{t−2} + φ e_{t−1} + e_t
    = φ⁴ X_{t−4} + φ³ e_{t−3} + φ² e_{t−2} + φ e_{t−1} + e_t
    ⋮

⇒ X_t = e_t + φ e_{t−1} + φ² e_{t−2} + φ³ e_{t−3} + ..., which is the linear filter representation of the AR(1) process.
An alternative and generally simpler way to obtain this representation is to treat the operator
(1 − φB)^{−1} as an expression in B and write:

X_t = (1 − φB)^{−1} e_t
    = (1 + φB + φ²B² + φ³B³ + ...) e_t
    = e_t + φ e_{t−1} + φ² e_{t−2} + φ³ e_{t−3} + ...

In this expression, it is important that |φ| < 1 (the stationarity condition), since otherwise the ψ weights
would not converge.

Autocovariance and Autocorrelation Functions of the AR(1) Process

X_t = φ X_{t−1} + e_t  ……………(2.0.6)

Multiplying by X_{t−m} we get X_t X_{t−m} = φ X_{t−1} X_{t−m} + e_t X_{t−m}. Taking the expected value:

E(X_t X_{t−m}) = φ E(X_{t−1} X_{t−m}) + E(e_t X_{t−m})
R(m) = φ R(m − 1) + E(e_t X_{t−m})  ……………(2.1.2)

For m = 0:

R(0) = φ R(−1) + E(e_t X_t)  ……………(*)

From the expression X_t = e_t + φ e_{t−1} + φ² e_{t−2} + ..., we have e_t X_t = e_t² + φ e_t e_{t−1} + φ² e_t e_{t−2} + ...

Taking the expected value gives E(e_t X_t) = σ_e². Therefore, from (*),

R(0) = φ R(−1) + σ_e² = φ R(1) + σ_e²,  since R(m) = R(−m).

For m > 0:

X_{t−m} = e_{t−m} + φ e_{t−m−1} + φ² e_{t−m−2} + ... Multiplying by e_t and taking the expected value, we get

E(e_t X_{t−m}) = E(e_t e_{t−m} + φ e_t e_{t−m−1} + φ² e_t e_{t−m−2} + ...) = 0

since E(e_t e_{t−m}) = 0 for m > 0. Substituting in equation (2.1.2) we get R(m) = φ R(m − 1).

The autocovariances are therefore given by:

R(0) = φ R(1) + σ_e²
R(m) = φ R(m − 1) for m = 1, 2, ...  ……………(2.1.3)

In the second equation of (2.1.3), substituting m = 1 gives R(1) = φ R(0); substituting for R(1) in
the first equation of (2.1.3),

R(0) = φ² R(0) + σ_e²
R(0) − φ² R(0) = σ_e²
R(0) = σ_e² / (1 − φ²)

If we divide the second equation of (2.1.3) by R(0), we get ρ(m) = φ ρ(m − 1), m = 1, 2, ...

Solving this recursion gives ρ(m) = φ ρ(m − 1) = φ² ρ(m − 2) = φ³ ρ(m − 3) = ... = φ^m, i.e.

ρ(m) = φ^m, m = 0, 1, 2, ...

Example: If an AR(1) process is given by X_t = 0.8 X_{t−1} + e_t, find ρ(m), m = 0, 1, 2, ...

Solution: ρ(m) = 0.8^m, m = 0, 1, 2, ...


ρ(0) = 0.8⁰ = 1, ρ(1) = 0.8¹ = 0.8, ρ(2) = 0.8² = 0.64, ρ(3) = 0.8³ = 0.512, ρ(4) = 0.8⁴ = 0.410,
ρ(5) = 0.8⁵ = 0.328, ρ(6) = 0.8⁶ = 0.262, ρ(7) = 0.8⁷ = 0.210, ρ(8) = 0.8⁸ = 0.168, ρ(9) = 0.8⁹ = 0.134.

[Figure: correlogram of ρ(m) = 0.8^m against m; the spikes decay exponentially from 1 towards 0.]

The autocorrelation function has an exponential decay. Suppose instead we have X_t = −0.8 X_{t−1} + e_t,
so that ρ(m) = (−0.8)^m:

ρ(0) = 1, ρ(1) = −0.8, ρ(2) = 0.64, ρ(3) = −0.512, ρ(4) = 0.410,
ρ(5) = −0.328, ρ(6) = 0.262, ρ(7) = −0.210, ρ(8) = 0.168.

[Figure: correlogram of ρ(m) = (−0.8)^m against m; the spikes alternate in sign and decay exponentially in magnitude.]
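Both correlograms are easy to reproduce by simulation. Below is a minimal sketch in plain Python (the helper functions are my own, not part of the notes), comparing the sample ACF of a simulated AR(1) series with the theoretical φ^m:

```python
import random

def simulate_ar1(phi, n, sigma=1.0, burn=200):
    # X_t = phi * X_{t-1} + e_t; a burn-in is discarded so the start-up
    # transient does not contaminate the sample.
    x, xs = 0.0, []
    for t in range(n + burn):
        x = phi * x + random.gauss(0.0, sigma)
        if t >= burn:
            xs.append(x)
    return xs

def sample_acf(x, m):
    # Sample autocorrelation r(m)/r(0) with the Section 1.8 convention.
    n, xbar = len(x), sum(x) / len(x)
    r0 = sum((v - xbar) ** 2 for v in x) / n
    rm = sum((x[t] - xbar) * (x[t + m] - xbar) for t in range(n - m)) / n
    return rm / r0

random.seed(1)
x = simulate_ar1(0.8, 5000)
for m in range(1, 6):
    print(m, round(sample_acf(x, m), 3), round(0.8 ** m, 3))
# The sample ACF should be close to 0.8^m; with phi = -0.8 it alternates in sign.
```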

2.2 Second Order Autoregressive Process (AR(2))

The second order autoregressive process (AR(2)) is given by

X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + e_t  ……………(2.2.1)

Using the backshift operator, (2.2.1) can be written as

X_t = φ_1 B X_t + φ_2 B² X_t + e_t
X_t − φ_1 B X_t − φ_2 B² X_t = e_t
(1 − φ_1 B − φ_2 B²) X_t = e_t,  equivalent to equation (2.2.1)

Likewise, X_{t−1} = φ_1 X_{t−2} + φ_2 X_{t−3} + e_{t−1} and X_{t−2} = φ_1 X_{t−3} + φ_2 X_{t−4} + e_{t−2}.

This process can be written in terms of the linear filter representation,

X_t = e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ...
    = e_t + ψ_1 B e_t + ψ_2 B² e_t + ...
    = (1 + ψ_1 B + ψ_2 B² + ...) e_t  ……………(2.2.2)

From the equivalent form of (2.2.1) above we have X_t = (1 − φ_1 B − φ_2 B²)^{−1} e_t  ……………(*)

Let ψ(B) = 1 + ψ_1 B + ψ_2 B² + ... (the polynomial in (2.2.2)); then X_t = ψ(B) e_t. From (*) and (2.2.2),

(1 + ψ_1 B + ψ_2 B² + ...) e_t = (1 − φ_1 B − φ_2 B²)^{−1} e_t.

The direct expansion of (1 − φ_1 B − φ_2 B²)^{−1} in terms of B is cumbersome; instead, the ψ weights
can be calculated by equating coefficients in

(1 + ψ_1 B + ψ_2 B² + ...)(1 − φ_1 B − φ_2 B²) = 1

i.e. (1 + ψ_1 B + ψ_2 B² + ...) − (φ_1 B + ψ_1 φ_1 B² + ψ_2 φ_1 B³ + ...) − (φ_2 B² + ψ_1 φ_2 B³ + ψ_2 φ_2 B⁴ + ...) = 1.

For the equality to hold, the coefficients of B^j (j > 0) on each side of the equation have to be the same,
the right-hand side being 1 = 1 + 0B + 0B² + 0B³ + ...

B¹: ψ_1 − φ_1 = 0 ⇒ ψ_1 = φ_1
B²: ψ_2 − ψ_1 φ_1 − φ_2 = 0 ⇒ ψ_2 = ψ_1 φ_1 + φ_2
B³: ψ_3 − ψ_2 φ_1 − ψ_1 φ_2 = 0 ⇒ ψ_3 = ψ_2 φ_1 + ψ_1 φ_2
B⁴: ψ_4 − ψ_3 φ_1 − ψ_2 φ_2 = 0 ⇒ ψ_4 = ψ_3 φ_1 + ψ_2 φ_2
B⁵: ψ_5 − ψ_4 φ_1 − ψ_3 φ_2 = 0 ⇒ ψ_5 = ψ_4 φ_1 + ψ_3 φ_2

Generally, for j ≥ 2, the ψ_j weights can be derived recursively from

ψ_j = ψ_{j−1} φ_1 + ψ_{j−2} φ_2 for j ≥ 2

For stationarity, we require that these ψ_j weights converge, which in turn implies that conditions
on φ_1 and φ_2 have to be imposed. For the AR(1) process, the stationarity condition requires that
|φ| < 1, or equivalently that the solution of 1 − φB = 0, which is B = 1/φ, be bigger than 1 in
absolute value:

X_t = φ X_{t−1} + e_t with |φ| < 1
X_t − φ X_{t−1} = e_t
i.e. (1 − φB) X_t = e_t, with root B = 1/φ.

For the AR(2) process, we have to look at the solutions G_1^{−1} and G_2^{−1} of 1 − φ_1 B − φ_2 B² = 0, i.e.
1 − φ_1 B − φ_2 B² = (1 − G_1 B)(1 − G_2 B) = 0. Solving the right-hand side for B we get

1 − G_1 B = 0, 1 − G_2 B = 0
B = G_1^{−1} or B = G_2^{−1} as above.

The solutions G_1^{−1} and G_2^{−1} can both be real, or they can be a pair of complex numbers. For
stationarity, we require that the roots are such that |G_1^{−1}| > 1 and |G_2^{−1}| > 1; that is, the stationarity
condition requires the roots of the characteristic equation to lie outside the unit circle.

Example 1: Is the AR(2) process given by X_t = 0.8 X_{t−1} − 0.15 X_{t−2} + e_t stationary?

Solution:

X_t − 0.8 X_{t−1} + 0.15 X_{t−2} = e_t; introducing the backshift operator B,

X_t − 0.8 B X_t + 0.15 B² X_t = e_t
(1 − 0.8B + 0.15B²) X_t = e_t

We find the roots of 1 − 0.8B + 0.15B² = 0:

1 − 0.5B − 0.3B + 0.15B² = 0
(1 − 0.5B) − 0.3B(1 − 0.5B) = 0
(1 − 0.5B)(1 − 0.3B) = 0
G_1^{−1} = 1/0.3 = 10/3, G_2^{−1} = 1/0.5 = 2
⇒ |G_1^{−1}| > 1 and |G_2^{−1}| > 1

Therefore, the process is stationary.

In the above example, φ_1 = 0.8, φ_2 = −0.15. If we want to find the expression
X_t = e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ..., we get

ψ_1 = φ_1 = 0.8
ψ_2 = ψ_1 φ_1 + φ_2 = 0.8(0.8) − 0.15 = 0.49
ψ_3 = ψ_2 φ_1 + ψ_1 φ_2 = (0.8 × 0.49) + (−0.15 × 0.8) = 0.27
ψ_4 = ψ_3 φ_1 + ψ_2 φ_2 = (0.8 × 0.27) + (−0.15 × 0.49) = 0.14

Proceed until the weights converge to zero.
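The recursion ψ_j = ψ_{j−1} φ_1 + ψ_{j−2} φ_2 is easy to run numerically. A small sketch in plain Python (the function name is my own) that reproduces the weights above:

```python
def psi_weights_ar2(phi1, phi2, nweights):
    # psi_0 = 1, psi_1 = phi1, then psi_j = psi_{j-1}*phi1 + psi_{j-2}*phi2.
    psi = [1.0, phi1]
    for _ in range(2, nweights):
        psi.append(psi[-1] * phi1 + psi[-2] * phi2)
    return psi

print([round(p, 2) for p in psi_weights_ar2(0.8, -0.15, 6)])
# [1.0, 0.8, 0.49, 0.27, 0.14, 0.07] -- converging to zero, as expected
```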

Example 2

Determine if the AR(2) process X_t = 1.5 X_{t−1} − 0.5 X_{t−2} + e_t is stationary.

Solution

X_t − 1.5 X_{t−1} + 0.5 X_{t−2} = e_t. Introducing the backshift operator B,

X_t − 1.5 B X_t + 0.5 B² X_t = e_t
(1 − 1.5B + 0.5B²) X_t = e_t

We find the roots of 1 − 1.5B + 0.5B² = 0:

1 − B − 0.5B + 0.5B² = 0
1(1 − B) − 0.5B(1 − B) = 0
(1 − 0.5B)(1 − B) = 0
⇒ G_1^{−1} = 2 and G_2^{−1} = 1

|G_1^{−1}| = 2 > 1, but |G_2^{−1}| = 1 is not greater than 1 (the root lies on the unit circle). Therefore, the
process is not stationary.

Example 3: Consider the AR(2) process given by X_t = X_{t−1} − 0.5 X_{t−2} + e_t. Is the process
stationary?

Solution

X_t − X_{t−1} + 0.5 X_{t−2} = e_t
X_t − B X_t + 0.5 B² X_t = e_t
(1 − B + 0.5B²) X_t = e_t

Getting the roots of 1 − B + 0.5B² = 0 by the quadratic formula, with a = 0.5, b = −1, c = 1:

B = (−b ± √(b² − 4ac)) / 2a = (1 ± √(1 − 2)) / 1 = 1 ± i

G_1^{−1} = 1 − i, G_2^{−1} = 1 + i
|G_1^{−1}| = |1 − i| = √(1² + (−1)²) = √2 > 1
|G_2^{−1}| = |1 + i| = √(1² + 1²) = √2 > 1

Therefore, the process is stationary.

Alternative Method

An AR(2) process given by X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + e_t is stationary if

(i) φ_1 + φ_2 < 1
(ii) φ_2 − φ_1 < 1
(iii) −1 < φ_2 < 1

These conditions restrict the parameters (φ_1, φ_2) to a triangular region. We rework the above
examples using these results; a numerical check is sketched after Example 3.

Example 1:

X_t = 0.8 X_{t−1} − 0.15 X_{t−2} + e_t, φ_1 = 0.8, φ_2 = −0.15

φ_1 + φ_2 = 0.8 − 0.15 = 0.65 < 1
φ_2 − φ_1 = −0.15 − 0.8 = −0.95 < 1
−1 < φ_2 = −0.15 < 1

Hence, the process X_t is stationary.

Example 2:

X_t = 1.5 X_{t−1} − 0.5 X_{t−2} + e_t, φ_1 = 1.5, φ_2 = −0.5

φ_1 + φ_2 = 1.5 − 0.5 = 1, which is not < 1
φ_2 − φ_1 = −0.5 − 1.5 = −2 < 1
−1 < φ_2 = −0.5 < 1

The first condition fails, so the process is not stationary.

Example 3:

X_t = X_{t−1} − 0.5 X_{t−2} + e_t, φ_1 = 1, φ_2 = −0.5

φ_1 + φ_2 = 1 − 0.5 = 0.5 < 1
φ_2 − φ_1 = −0.5 − 1 = −1.5 < 1
−1 < φ_2 = −0.5 < 1

The process X_t is stationary.
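Both criteria can be checked mechanically. A minimal sketch in plain Python (the function names are my own) testing the triangle conditions and, independently, the roots of the characteristic polynomial:

```python
import cmath

def ar2_stationary_triangle(phi1, phi2):
    # Triangle conditions: phi1+phi2 < 1, phi2-phi1 < 1, -1 < phi2 < 1.
    return phi1 + phi2 < 1 and phi2 - phi1 < 1 and -1 < phi2 < 1

def ar2_stationary_roots(phi1, phi2):
    # Roots of 1 - phi1*B - phi2*B^2 = 0 must lie outside the unit circle.
    # (Assumes phi2 != 0 so the polynomial is genuinely quadratic.)
    a, b, c = -phi2, -phi1, 1.0
    disc = cmath.sqrt(b * b - 4 * a * c)
    roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
    return all(abs(r) > 1 for r in roots)

for phi1, phi2 in [(0.8, -0.15), (1.5, -0.5), (1.0, -0.5)]:
    print(phi1, phi2, ar2_stationary_triangle(phi1, phi2),
          ar2_stationary_roots(phi1, phi2))
# (0.8, -0.15) and (1.0, -0.5) are stationary; (1.5, -0.5) is not.
```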

2.3 Autoregressive Process of Order p (AR(p))

An AR(p) process is given by X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + ... + φ_p X_{t−p} + e_t  ……………(2.3.1)

Introducing the backshift operator, we get

X_t = φ_1 B X_t + φ_2 B² X_t + ... + φ_p B^p X_t + e_t
X_t − φ_1 B X_t − φ_2 B² X_t − ... − φ_p B^p X_t = e_t
(1 − φ_1 B − φ_2 B² − ... − φ_p B^p) X_t = e_t

or φ(B) X_t = e_t  ……………(2.3.2)

where φ(B) = 1 − φ_1 B − φ_2 B² − ... − φ_p B^p.

The AR(p) process can also be expressed in the linear filter (moving average) representation
X_t = ψ(B) e_t.

The ψ weights will converge only if we impose certain stationarity conditions on the
autoregressive parameters. As in the AR(1) and AR(2) models, these conditions put restrictions
on the roots of the characteristic equation φ(B) = (1 − G_1 B)(1 − G_2 B) ... (1 − G_p B) = 0. For
stationarity we require that all the roots G_i^{−1} lie strictly outside the unit circle, i.e. |G_i^{−1}| > 1.

2.4 Autocovariance and Autocorrelation Functions

For the AR(p) process X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + ... + φ_p X_{t−p} + e_t, we multiply by X_{t−m} and take
expected values:

E(X_t X_{t−m}) = φ_1 E(X_{t−1} X_{t−m}) + φ_2 E(X_{t−2} X_{t−m}) + ... + φ_p E(X_{t−p} X_{t−m}) + E(e_t X_{t−m})

For m = 0:

E(X_t²) = φ_1 E(X_{t−1} X_t) + φ_2 E(X_{t−2} X_t) + ... + φ_p E(X_{t−p} X_t) + E(e_t X_t)
⇒ R(0) = φ_1 R(1) + φ_2 R(2) + ... + φ_p R(p) + σ_e²

Dividing both sides by R(0) we get

1 = φ_1 ρ(1) + φ_2 ρ(2) + ... + φ_p ρ(p) + σ_e²/R(0)
σ_e²/R(0) = 1 − φ_1 ρ(1) − φ_2 ρ(2) − ... − φ_p ρ(p)
⇒ R(0) = σ_e² / (1 − φ_1 ρ(1) − φ_2 ρ(2) − ... − φ_p ρ(p))

For m > 0:

Since E(e_t X_{t−m}) = 0 for m > 0, we find that

R(m) = φ_1 R(m − 1) + φ_2 R(m − 2) + ... + φ_p R(m − p). Dividing both sides of the equation by R(0),
and using R(m)/R(0) = ρ(m), we get

ρ(m) = φ_1 ρ(m − 1) + φ_2 ρ(m − 2) + ... + φ_p ρ(m − p)  ……………(2.4.1)

The first p equations (m = 1, 2, ..., p) in (2.4.1) are the Yule-Walker equations and are shown
below:

m = 1: ρ_1 = φ_1 + ρ_1 φ_2 + ... + ρ_{p−1} φ_p
m = 2: ρ_2 = ρ_1 φ_1 + φ_2 + ... + ρ_{p−2} φ_p
⋮
m = p: ρ_p = ρ_{p−1} φ_1 + ρ_{p−2} φ_2 + ... + φ_p

In matrix notation, these equations can be written as

[ρ_1]   [1        ρ_1      ...  ρ_{p−1}] [φ_1]
[ρ_2] = [ρ_1      1        ...  ρ_{p−2}] [φ_2]
[ ⋮ ]   [ ⋮                          ⋮ ] [ ⋮ ]
[ρ_p]   [ρ_{p−1}  ρ_{p−2}  ...  1     ] [φ_p]

i.e. ρ = P φ  ……………(2.4.2)

where ρ = (ρ_1, ρ_2, ..., ρ_p)′, φ = (φ_1, φ_2, ..., φ_p)′ and P is the p × p autocorrelation matrix shown above.

The autoregressive parameter vector φ can be expressed as a function of the first p autocorrelations by
solving the equation system (2.4.2):

Therefore φ = P^{−1} ρ  ……………(2.4.3)

Frequently, initial estimates for φ are obtained by replacing the theoretical autocorrelations ρ_i in
(2.4.3) by their estimates ρ̂_i = r(i)/r(0). The resulting estimates are called moment estimates.

ρ(m) = φ_1 ρ(m − 1) + φ_2 ρ(m − 2) + ... + φ_p ρ(m − p) for AR(p). For the AR(1) process, the Yule-Walker
equation is m = 1: ρ_1 = φ_1 (substitute m = 1 in (2.4.1)).

For the AR(2) process X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + e_t, the Yule-Walker equations are

m = 1: ρ_1 = φ_1 + ρ_1 φ_2
m = 2: ρ_2 = ρ_1 φ_1 + φ_2

i.e. [ρ_1; ρ_2] = [1 ρ_1; ρ_1 1][φ_1; φ_2], so [φ_1; φ_2] = [1 ρ_1; ρ_1 1]^{−1}[ρ_1; ρ_2]. Since

[1 ρ_1; ρ_1 1]^{−1} = (1/(1 − ρ_1²)) [1 −ρ_1; −ρ_1 1],

we get

φ_1 = ρ_1(1 − ρ_2)/(1 − ρ_1²) and φ_2 = (ρ_2 − ρ_1²)/(1 − ρ_1²), where ρ_1 = r(1)/r(0) and ρ_2 = r(2)/r(0).
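A sketch of this moment-estimation step in plain Python (2×2 case only; the function name is my own), solving φ = P^{−1}ρ for an AR(2):

```python
def yule_walker_ar2(rho1, rho2):
    # Solve the 2x2 Yule-Walker system for (phi1, phi2), given
    # autocorrelations rho1 = r(1)/r(0) and rho2 = r(2)/r(0).
    det = 1.0 - rho1 * rho1
    phi1 = rho1 * (1.0 - rho2) / det
    phi2 = (rho2 - rho1 * rho1) / det
    return phi1, phi2

# For a true AR(2) with phi1 = 1.2, phi2 = -0.8 (the next example) we have
# rho1 = 2/3 and rho2 = 0, and the equations recover the parameters exactly:
print(yule_walker_ar2(2.0 / 3.0, 0.0))   # (1.2, -0.8)
```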

Example: Find the autocorrelations ρ_m, m = 1, 2, 3, 4, 5, for the AR(2) process given by
X_t = 1.2 X_{t−1} − 0.8 X_{t−2} + e_t.

Solution:

For the AR(2) process we have ρ_m = φ_1 ρ_{m−1} + φ_2 ρ_{m−2}. Since φ_1 = 1.2 and φ_2 = −0.8, we have
ρ_m = 1.2 ρ_{m−1} − 0.8 ρ_{m−2}.

For m = 1: ρ_1 = 1.2 ρ_0 − 0.8 ρ_1 (using ρ_{−1} = ρ_1), so 1.8 ρ_1 = 1.2 and ρ_1 = 1.2/1.8 = 2/3 ≈ 0.6667.

For m = 2: ρ_2 = 1.2 ρ_1 − 0.8 ρ_0 = 1.2(2/3) − 0.8 = 0.8 − 0.8 = 0.

For m = 3: ρ_3 = 1.2 ρ_2 − 0.8 ρ_1 = 0 − 0.8(2/3) = −0.5333.

For m = 4: ρ_4 = 1.2 ρ_3 − 0.8 ρ_2 = 1.2(−0.5333) − 0 = −0.64.

For m = 5: ρ_5 = 1.2 ρ_4 − 0.8 ρ_3 = 1.2(−0.64) − 0.8(−0.5333) = −0.768 + 0.4267 = −0.3413.
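The same recursion in code form — a minimal sketch in plain Python (my own function name) that reproduces the values above:

```python
def ar2_acf(phi1, phi2, mmax):
    # rho_0 = 1 and rho_1 = phi1 / (1 - phi2) from the Yule-Walker equations,
    # then rho_m = phi1*rho_{m-1} + phi2*rho_{m-2} for m >= 2.
    rho = [1.0, phi1 / (1.0 - phi2)]
    for _ in range(2, mmax + 1):
        rho.append(phi1 * rho[-1] + phi2 * rho[-2])
    return rho

print([round(r, 4) for r in ar2_acf(1.2, -0.8, 5)])
# [1.0, 0.6667, 0.0, -0.5333, -0.64, -0.3413]
```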

Assignment 1

i) Let X_t = β_0 + β_1 t + e_t, where β_0 and β_1 are parameters and the sequence {e_t} is
such that E(e_t) = 0, Var(e_t) = σ_e² and E(e_t e_{t′}) = 0 for t ≠ t′.
a) Show that X_t is not stationary.
b) Define another process Y_t by Y_t = ∇X_t = X_t − X_{t−1}. Show that Y_t is stationary.
ii) For the AR(1) process given by X_t = 0.9 X_{t−1} + e_t, find
a) R(0), R(1) and R(2)
b) ρ_1 and ρ_2
iii) Determine if the following processes are stationary:
a) X_t − X_{t−1} + 0.24 X_{t−2} = e_t
b) X_t = 0.8 X_{t−1} + 0.48 X_{t−2} + e_t
iv) Consider the AR(2) process given by X_t = 0.75 X_{t−1} − 0.5 X_{t−2} + e_t. Find ρ_1 and ρ_2.

CAT 1

2.5 Moving Average Process

In the linear filter representation X_t = e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ... = Σ_{j=0}^∞ ψ_j e_{t−j} with ψ_0 = 1, let
ψ_1 = −θ_1, ψ_2 = −θ_2, ..., ψ_q = −θ_q and ψ_j = 0 for j > q. The resulting process is said to follow a
moving average model of order q (MA(q)) and is given by X_t = e_t − θ_1 e_{t−1} − θ_2 e_{t−2} − ... − θ_q e_{t−q}.
Introducing the backshift operator we get

X_t = e_t − θ_1 B e_t − θ_2 B² e_t − ... − θ_q B^q e_t
X_t = (1 − θ_1 B − θ_2 B² − ... − θ_q B^q) e_t = θ(B) e_t  ……………(2.5.1)

Since there are only a finite number of ψ weights, these processes are always stationary.
2.6 First Order Moving Average Process (MA(1))

The first order moving average process is given by

X_t = e_t − θ e_{t−1}, or X_t = (1 − θB) e_t  ……………(2.6.1)

Recall that if X_t = Σ_{j=0}^∞ ψ_j e_{t−j}, we found that R(0) = σ_e² Σ_{j=0}^∞ ψ_j² and R(m) = σ_e² Σ_{j=0}^∞ ψ_j ψ_{j+m}. In the
MA(1) process, ψ_0 = 1, ψ_1 = −θ and ψ_j = 0 for j > 1. Hence

R(0) = σ_e² (1 + ψ_1² + ψ_2² + ...) = σ_e² (1 + θ²)  since ψ_1 = −θ
R(1) = σ_e² (ψ_0 ψ_1 + ψ_1 ψ_2 + ψ_2 ψ_3 + ...) = σ_e² (−θ) = −θ σ_e²
R(2) = σ_e² (ψ_0 ψ_2 + ψ_1 ψ_3 + ...) = 0
R(m) = 0 for m > 1

The autocorrelation function is given by

ρ(1) = R(1)/R(0) = −θ σ_e² / σ_e²(1 + θ²) = −θ/(1 + θ²)
ρ(m) = R(m)/R(0) = 0 for m > 1  ……………(2.6.2)

This implies that observations more than one step apart are uncorrelated, while observations one
step apart are correlated. We notice that ρ(1) always lies between −0.5 and 0.5. Furthermore, θ and
1/θ give the same value of ρ(1), and both satisfy the quadratic equation ρ(1)θ² + θ + ρ(1) = 0
(obtained from ρ(1) = −θ/(1 + θ²)). In other words, two different MA(1) processes can correspond to
the same autocorrelation function, ACF. To establish a one-to-one correspondence between the ACF
and the model, and to obtain a converging autoregressive representation, we restrict the moving
average parameter such that |θ| < 1. This restriction is known as the invertibility condition and is
similar to the stationarity condition in autoregressive models. Invertibility implies that the process
can be written in terms of an autoregressive representation

X_t = π_1 X_{t−1} + π_2 X_{t−2} + ... + e_t, in which Σ_{j=1}^∞ |π_j| converges. The MA(1) model, for example, can be
written as (1 − θB)^{−1} X_t = e_t (from X_t = e_t − θ e_{t−1} = (1 − θB) e_t):

(1 − θB)^{−1} = 1 + θB + θ²B² + θ³B³ + ...
(1 + θB + θ²B² + θ³B³ + ...) X_t = e_t
X_t + θ X_{t−1} + θ² X_{t−2} + θ³ X_{t−3} + ... = e_t
⇒ X_t = −θ X_{t−1} − θ² X_{t−2} − θ³ X_{t−3} − ... + e_t

The weights π_j = −θ^j converge only if the model is invertible (|θ| < 1). This implies that the effect
of past observations decreases with their age.
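A small numerical sketch in plain Python (my own function name) of the two facts above: ρ(1) = −θ/(1+θ²) is the same for θ and 1/θ, and |ρ(1)| never exceeds 0.5:

```python
def ma1_rho1(theta):
    # Lag-1 autocorrelation of X_t = e_t - theta * e_{t-1}.
    return -theta / (1.0 + theta * theta)

print(ma1_rho1(0.4), ma1_rho1(1 / 0.4))   # both equal -0.3448...
print(max(abs(ma1_rho1(t / 100)) for t in range(-300, 301)))  # 0.5 at theta = +/-1
```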

2.7 Second Order Moving Average Process MA(2)

The second order moving average process is described by

X_t = e_t − θ_1 e_{t−1} − θ_2 e_{t−2}, or X_t = (1 − θ_1 B − θ_2 B²) e_t  ……………(2.7.1)

Recall that for X_t = e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ...,

R(0) = σ_e² Σ_{j=0}^∞ ψ_j², R(m) = σ_e² Σ_{j=0}^∞ ψ_j ψ_{j+m}

In the MA(2) process, ψ_0 = 1, ψ_1 = −θ_1, ψ_2 = −θ_2, ψ_j = 0 for j > 2. Therefore,

R(0) = σ_e² (1 + θ_1² + θ_2²)
R(1) = σ_e² (ψ_0 ψ_1 + ψ_1 ψ_2 + ψ_2 ψ_3 + ...) = σ_e² (−θ_1 + θ_1 θ_2)
R(2) = σ_e² (ψ_0 ψ_2 + ψ_1 ψ_3 + ψ_2 ψ_4 + ...) = −θ_2 σ_e²
R(3) = σ_e² (ψ_0 ψ_3 + ψ_1 ψ_4 + ψ_2 ψ_5 + ...) = 0
⇒ R(m) = 0 for m > 2

The autocorrelation functions are:

ρ(1) = R(1)/R(0) = (−θ_1 + θ_1 θ_2)/(1 + θ_1² + θ_2²)
ρ(2) = R(2)/R(0) = −θ_2/(1 + θ_1² + θ_2²)  ……………(2.7.2)
ρ(m) = 0 for m > 2

This model implies that observations more than two steps apart are uncorrelated. As in the
MA(1) model, we can write the MA(2) process in terms of an infinite autoregressive
representation

X_t = π_1 X_{t−1} + π_2 X_{t−2} + ... + e_t, or X_t − π_1 X_{t−1} − π_2 X_{t−2} − ... = e_t

The π weights can be obtained from π(B) = 1 − π_1 B − π_2 B² − ... = (1 − θ_1 B − θ_2 B²)^{−1}. They can be
calculated by equating the coefficients of B^j in (1 − π_1 B − π_2 B² − ...)(1 − θ_1 B − θ_2 B²) = 1.

For invertibility of the MA(2) process, we require that the π weights converge, which in turn
implies conditions on the parameters θ_1 and θ_2. Invertibility of the MA(2) process requires that
the roots of 1 − θ_1 B − θ_2 B² = (1 − H_1 B)(1 − H_2 B) = 0 lie outside the unit circle, i.e.
|H_i^{−1}| > 1, i = 1, 2.

In terms of the parameters, the invertibility conditions become

θ_1 + θ_2 < 1
θ_2 − θ_1 < 1  ……………(2.7.3)
−1 < θ_2 < 1
2.8 Moving Average of Order q (MA(q))

The MA(q) process is given by the equation

X_t = e_t − θ_1 e_{t−1} − θ_2 e_{t−2} − ... − θ_q e_{t−q}, or X_t = (1 − θ_1 B − θ_2 B² − ... − θ_q B^q) e_t  ……………(2.8.1)

In the linear filter representation X_t = e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ..., we substitute
ψ_1 = −θ_1, ψ_2 = −θ_2, ..., ψ_q = −θ_q and ψ_j = 0 for j > q to get the MA(q).

Recall that R(0) = σ_e² Σ_{j=0}^∞ ψ_j² and R(m) = σ_e² Σ_{j=0}^∞ ψ_j ψ_{j+m}. The autocovariances of the MA(q)
process are:

R(0) = σ_e² (1 + θ_1² + θ_2² + ... + θ_q²)
R(1) = σ_e² (−θ_1 + θ_1 θ_2 + ... + θ_{q−1} θ_q)
R(2) = σ_e² (−θ_2 + θ_1 θ_3 + θ_2 θ_4 + ... + θ_{q−2} θ_q)
R(m) = σ_e² (−θ_m + θ_1 θ_{1+m} + θ_2 θ_{2+m} + ... + θ_{q−m} θ_q)
R(m) = 0 for m > q

And the autocorrelations are:

ρ(m) = R(m)/R(0) = (−θ_m + θ_1 θ_{1+m} + θ_2 θ_{2+m} + ... + θ_{q−m} θ_q)/(1 + θ_1² + θ_2² + ... + θ_q²) for m = 1, 2, ..., q
ρ(m) = 0 for m > q  ……………(2.8.2)

The ACF of the MA(q) process cuts off after lag q. The memory of such a process extends only q
steps; observations more than q steps apart are uncorrelated.

2.9 Autoregressive Moving Average (ARMA) Processes

An ARMA(p, q) model is given by

(1 − φ_1 B − φ_2 B² − ... − φ_p B^p) X_t = (1 − θ_1 B − θ_2 B² − ... − θ_q B^q) e_t
X_t − φ_1 X_{t−1} − φ_2 X_{t−2} − ... − φ_p X_{t−p} = e_t − θ_1 e_{t−1} − θ_2 e_{t−2} − ... − θ_q e_{t−q}  ……………(2.9.1)
X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + ... + φ_p X_{t−p} + e_t − θ_1 e_{t−1} − θ_2 e_{t−2} − ... − θ_q e_{t−q}

CHAPTER THREE: THE ARMA(1, 1) PROCESS

3.0 Introduction

The simplest example of an autoregressive moving average process is the ARMA(1, 1) process:

X_t − φ X_{t−1} = e_t − θ e_{t−1}, or (1 − φB) X_t = (1 − θB) e_t  ……………(3.0.1)

This process can be written in the linear filter representation X_t = e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ..., where the ψ
weights are given by ψ(B) = (1 − θB)/(1 − φB), since from (3.0.1)

X_t = [(1 − θB)/(1 − φB)] e_t = (1 + ψ_1 B + ψ_2 B² + ...) e_t = ψ(B) e_t.

Equating coefficients of B^j in (1 − φB)(1 + ψ_1 B + ψ_2 B² + ...) = 1 − θB leads to

B¹: ψ_1 − φ = −θ ⇒ ψ_1 = φ − θ
B²: ψ_2 − φψ_1 = 0 ⇒ ψ_2 = φψ_1 = φ(φ − θ)
B³: ψ_3 − φψ_2 = 0 ⇒ ψ_3 = φψ_2 = φ²(φ − θ)
⋮
B^j: ψ_j − φψ_{j−1} = 0 ⇒ ψ_j = φψ_{j−1} = φ^{j−1}(φ − θ)

Therefore ψ_j = (φ − θ) φ^{j−1}, j ≥ 1  ……………(3.0.2)

Equivalently, we can represent the ARMA(1, 1) process in terms of an infinite autoregressive
representation X_t = π_1 X_{t−1} + π_2 X_{t−2} + ... + e_t, where the π weights are given by

π(B) = (1 − φB)/(1 − θB)
1 − π_1 B − π_2 B² − ... = (1 − φB)/(1 − θB)
(1 − θB)(1 − π_1 B − π_2 B² − ...) = 1 − φB

Equating the coefficients of B^j we get

B¹: −π_1 − θ = −φ ⇒ π_1 = φ − θ
B²: −π_2 + θπ_1 = 0 ⇒ π_2 = θπ_1 = θ(φ − θ)
B³: −π_3 + θπ_2 = 0 ⇒ π_3 = θπ_2 = θ²(φ − θ)
B⁴: −π_4 + θπ_3 = 0 ⇒ π_4 = θπ_3 = θ³(φ − θ)
⋮
B^j: −π_j + θπ_{j−1} = 0 ⇒ π_j = θπ_{j−1} = θ^{j−1}(φ − θ)

⇒ π_j = θ^{j−1}(φ − θ), j ≥ 1  ……………(3.0.3)

The simple ARMA(1, 1) model thus has both a moving average and an autoregressive representation
with an infinite number of weights. The ψ weights converge for |φ| < 1 (stationarity condition)
and the π weights converge for |θ| < 1 (invertibility condition). The stationarity condition for the
ARMA(1, 1) model is the same as that of an AR(1) model; the invertibility condition is the
same as that of an MA(1) model.

3.1 Autocovariance and Autocorrelation Functions of the ARMA(1, 1) Process

From X_t − φ X_{t−1} = e_t − θ e_{t−1} we have X_t = φ X_{t−1} + e_t − θ e_{t−1}. Multiply by X_{t−m} and take the
expected value:

E(X_t X_{t−m}) = φ E(X_{t−1} X_{t−m}) + E(e_t X_{t−m}) − θ E(e_{t−1} X_{t−m})
⇒ R(m) = φ R(m − 1) + E(e_t X_{t−m}) − θ E(e_{t−1} X_{t−m})  ……………(3.1.1)
⇒ R(m) = φ R(m − 1) for m > 1  ……………(3.1.2)

since, from X_{t−m} = e_{t−m} + ψ_1 e_{t−m−1} + ψ_2 e_{t−m−2} + ..., multiplying by e_t and taking expectations gives
E(e_t X_{t−m}) = 0 and E(e_{t−1} X_{t−m}) = 0 whenever m ≥ 2.

For m = 1: From equation (3.1.1),

R(1) = φ R(0) + E(e_t X_{t−1}) − θ E(e_{t−1} X_{t−1})
E(e_t X_{t−1}) = 0
E(e_{t−1} X_{t−1}) = E(e_{t−1} e_{t−1} + ψ_1 e_{t−1} e_{t−2} + ...) = σ_e²

Substituting above, we find that R(1) = φ R(0) − θ σ_e²  ……………(3.1.3)

For m = 0: From equation (3.1.1),

R(0) = φ R(1) + E(e_t X_t) − θ E(e_{t−1} X_t)
E(e_t X_t) = E(e_t² + ψ_1 e_t e_{t−1} + ψ_2 e_t e_{t−2} + ...) = σ_e²  [since ψ_1 E(e_t e_{t−1}) = ψ_2 E(e_t e_{t−2}) = 0]
E(e_{t−1} X_t) = E(e_t e_{t−1} + ψ_1 e²_{t−1} + ψ_2 e_{t−1} e_{t−2} + ...) = ψ_1 σ_e² = (φ − θ) σ_e²

Substituting, we get R(0) = φ R(1) + σ_e² − θ(φ − θ) σ_e²  ……………(3.1.4)

Substitute R(1) = φ R(0) − θ σ_e² from (3.1.3) into (3.1.4) to get

R(0) = φ(φ R(0) − θ σ_e²) + σ_e² − θ(φ − θ) σ_e²
R(0) = φ² R(0) − φθ σ_e² + σ_e² − φθ σ_e² + θ² σ_e²
(1 − φ²) R(0) = (θ² − 2φθ + 1) σ_e²
⇒ R(0) = σ_e² (θ² − 2φθ + 1)/(1 − φ²)

Substituting for R(0) in equation (3.1.3) we get

R(1) = φ σ_e² (θ² − 2φθ + 1)/(1 − φ²) − θ σ_e²
     = σ_e² [φθ² − 2φ²θ + φ − θ + φ²θ]/(1 − φ²)
     = σ_e² [(φ − θ) − φθ(φ − θ)]/(1 − φ²)
⇒ R(1) = σ_e² (1 − φθ)(φ − θ)/(1 − φ²)

For m > 1, R(m) = φ R(m − 1)  ……………(3.1.2)

The autocorrelation function is given by

ρ(m) = 1 for m = 0
ρ(1) = (φ − θ)(1 − φθ)/(θ² − 2φθ + 1)
ρ(m) = φ ρ(m − 1) for m ≥ 2

Exercise 2

a) Consider the AR(2) process given by X_t = 0.6 X_{t−1} + 0.3 X_{t−2} + e_t.
i) Find the general expression for ρ_m
ii) Find ρ(m) for m = 0, 1, 2, ..., 10
b) Find the autoregressive representation of the MA(1) process X_t = e_t − 0.4 e_{t−1}
c) Find the moving average representation of the AR(2) process X_t = 0.2 X_{t−1} + 0.4 X_{t−2} + e_t
d) Find the Yule-Walker equations of the AR(2) process given by X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + e_t
e) Find the Yule-Walker equations of the AR(3) process given by
X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + φ_3 X_{t−3} + e_t
f) Consider the MA(2) process given by X_t = e_t − 0.1 e_{t−1} + 0.21 e_{t−2}
i) Is the model stationary?
ii) Is the model invertible?
iii) Find the autocorrelation function for this process
g) Consider the MA(3) process given by X_t = e_t − θ_1 e_{t−1} − θ_2 e_{t−2} − θ_3 e_{t−3}. Find the autocorrelation
function.

3.2 Partial Autocorrelation Function (PACF)

Consider the regression model in which the dependent variable X_{t+m}, from a zero mean stationary
process, is regressed on the variables X_{t+m−1}, X_{t+m−2}, ..., X_t, i.e.

X_{t+m} = φ_{m1} X_{t+m−1} + φ_{m2} X_{t+m−2} + ... + φ_{mm} X_t + e_{t+m}  ……………(3.2.1)

where φ_{mi} denotes the i-th regression parameter and e_{t+m} is a normal error term uncorrelated with
X_{t+m−j} for j ≥ 1. Multiplying both sides of (3.2.1) by X_{t+m−j} we get

X_{t+m} X_{t+m−j} = φ_{m1} X_{t+m−1} X_{t+m−j} + φ_{m2} X_{t+m−2} X_{t+m−j} + ... + φ_{mm} X_t X_{t+m−j} + e_{t+m} X_{t+m−j}

Taking the expectation gives us R(j) = φ_{m1} R(j − 1) + φ_{m2} R(j − 2) + ... + φ_{mm} R(j − m)  ……………(3.2.2)

We divide by R(0) to get ρ(j) = φ_{m1} ρ(j − 1) + φ_{m2} ρ(j − 2) + ... + φ_{mm} ρ(j − m)  ……………(3.2.3)

For j = 1, 2, ..., m, we have the following equations:

ρ(1) = φ_{m1} ρ(0) + φ_{m2} ρ(1) + ... + φ_{mm} ρ(m − 1)
ρ(2) = φ_{m1} ρ(1) + φ_{m2} ρ(0) + ... + φ_{mm} ρ(m − 2)
⋮
ρ(m) = φ_{m1} ρ(m − 1) + φ_{m2} ρ(m − 2) + ... + φ_{mm} ρ(0)  ……………(3.2.4)

Solving these equations for m = 1, 2, ..., we find that φ_11 = ρ_1 and (writing determinants row by row,
rows separated by semicolons)

φ_22 = |1 ρ_1; ρ_1 ρ_2| / |1 ρ_1; ρ_1 1|
φ_33 = |1 ρ_1 ρ_1; ρ_1 1 ρ_2; ρ_2 ρ_1 ρ_3| / |1 ρ_1 ρ_2; ρ_1 1 ρ_1; ρ_2 ρ_1 1|

and in general φ_mm is the corresponding ratio of two m × m determinants  ……………(3.2.5)

For φ_mm, the matrix in the numerator is the same as the symmetric matrix in the denominator,
except that its m-th column is replaced by (ρ_1, ρ_2, ..., ρ_m)′. The sample PACF φ̂_mm is obtained by
substituting ρ̂_j for ρ_j in equation (3.2.5). Instead of calculating the complicated determinants for
large m in (3.2.5), a recursive method starting with φ_11 = ρ_1 has been given by Durbin (1960):

φ_{m+1,m+1} = (ρ_{m+1} − Σ_{j=1}^m φ_{mj} ρ_{m+1−j}) / (1 − Σ_{j=1}^m φ_{mj} ρ_j)

For m = 1: φ_22 = (ρ_2 − φ_11 ρ_1)/(1 − φ_11 ρ_1)

For m = 2: φ_33 = (ρ_3 − φ_21 ρ_2 − φ_22 ρ_1)/(1 − φ_21 ρ_1 − φ_22 ρ_2)

together with φ_{m+1,j} = φ_{mj} − φ_{m+1,m+1} φ_{m,m+1−j}, for example:

φ_21 = φ_11 − φ_22 φ_11 (m = 1, j = 1), φ_31 = φ_21 − φ_33 φ_22 (m = 2, j = 1) and φ_32 = φ_22 − φ_33 φ_21 (m = 2, j = 2)

The method holds also for calculating the theoretical PACF φ_mm.

Example

For a time series, it is found that ρ_1 = −0.188, ρ_2 = −0.201 and ρ_3 = 0.0181. Calculate
φ_11, φ_22 and φ_33.

Solution

φ_11 = ρ_1 = −0.188

φ_22 = (ρ_2 − φ_11 ρ_1)/(1 − φ_11 ρ_1) = (ρ_2 − ρ_1²)/(1 − ρ_1²) = (−0.201 − (−0.188)²)/(1 − (−0.188)²) = −0.245

φ_21 = φ_11 − φ_22 φ_11 = −0.188 − (−0.245)(−0.188) = −0.23406

φ_33 = (ρ_3 − φ_21 ρ_2 − φ_22 ρ_1)/(1 − φ_21 ρ_1 − φ_22 ρ_2)
     = (0.0181 − (−0.23406)(−0.201) − (−0.245)(−0.188))/(1 − (−0.23406)(−0.188) − (−0.245)(−0.201))
     = −0.0750/0.9068 = −0.083

3.3 Partial Autocorrelation Functions (PACF) of Autoregressive Processes

The form of the PACF is as follows:

AR(1): φ_11 = ρ_1 = φ, φ_mm = 0 for m > 1

AR(2): φ_11 = ρ_1, φ_22 = (ρ_2 − ρ_1²)/(1 − ρ_1²), φ_mm = 0 for m > 2

AR(p): φ_11 ≠ 0, φ_22 ≠ 0, ..., φ_pp ≠ 0, φ_mm = 0 for m > p

The partial autocorrelations for lags larger than the order of the process are zero. This fact,
together with the structure of the autocorrelation function (which is infinite in extent and is a
combination of damped exponentials and damped sine waves), makes it easy to recognize
autoregressive processes.

Flow chart illustrating the identification process:

Begin
  │
  ▼
Does a time plot of the data appear to be stationary? ── No ──► Difference the data (then start again)
  │ Yes
  ▼
Does the correlogram of the data decay to zero? ── No ──► Difference the data (then start again)
  │ Yes
  ▼
Is there a sharp cut-off in the correlogram? ── Yes ──► MA model
  │ No
  ▼
Is there a sharp cut-off in the partial correlogram? ── Yes ──► AR model
CHAPTER FOUR: SPECTRAL ANALYSIS

4.0 Spectral Densities

Suppose that X_t is a zero mean stationary process with autocovariance function R(m) satisfying
Σ_{m=−∞}^∞ |R(m)| < ∞. Then the spectral density function of X_t is the function f(λ) defined by

f(λ) = (1/2π) Σ_{m=−∞}^∞ R(m) e^{−imλ}, −π ≤ λ ≤ π  ……………(4.0.1)

where i = √−1 and e^{iλ} = cos λ + i sin λ. The summability of R(m) implies that the series in (4.0.1)
converges absolutely (since |e^{−imλ}|² = cos² mλ + sin² mλ = 1). Since cos and sin have period 2π,
so also does f, and it suffices to confine attention to the values of f on the interval [−π, π].

4.1 Basic Properties of f

a) f is even, i.e. f(λ) = f(−λ)  ……………(4.1.1)

b) f(λ) ≥ 0 for all λ ∈ [−π, π]  ……………(4.1.2)

c) R(m) = ∫_{−π}^{π} e^{imλ} f(λ) dλ  ……………(4.1.3)

It can also be shown that R(m) = ∫_{−π}^{π} f(λ) cos mλ dλ, and that f(λ) = (1/2π) Σ_{m=−∞}^∞ R(m) cos mλ.
The form of the spectral density function f(λ) can be used to identify the type of series. The
analysis of a series using the spectral density function is called spectrogram or spectrum
analysis.

4.2 Linear Functions of Stationary Series

Let X_t be a stationary series, t = 0, ±1, ±2, ±3, ..., with E(X_t) = 0 and R(m) = E(X_t X_{t−m}),
m = 0, ±1, ±2, ±3, ... Let f_X(λ) be the spectral density function of the series X_t:

f_X(λ) = (1/2π) Σ_{m=−∞}^∞ R(m) e^{−imλ}

Define a series Y_t by the transformation Y_t = Σ_{j=−∞}^∞ a_j X_{t−j}, i.e. a linear combination of the series X_t.
The series Y_t is stationary, since E(Y_t) = 0 and Cov(Y_{t−m}, Y_t) = E(Y_t Y_{t−m}) = R_Y(m) for m = 0, ±1, ±2, ...
It can be shown that

f_Y(λ) = |Σ_j a_j e^{ijλ}|² f_X(λ)

Example 1

Let X_t = e_t, where e_t (t = 0, ±1, ±2, ...) is such that E(e_t) = 0, Var(e_t) = σ_e² and
Cov(e_t, e_{t′}) = 0 for t ≠ t′. The spectral density function is f_e(λ) = (1/2π) Σ_m R_e(m) e^{−imλ}, but

R_e(m) = σ_e² for m = 0; 0 for m ≠ 0.

Therefore, f_e(λ) = σ_e²/2π, −π ≤ λ ≤ π.

[Figure: the spectral density of white noise is flat, a horizontal line at height σ_e²/2π.]
Example 2

Find the spectral density function of the process X_t + X_{t−1} + X_{t−2} = e_t.

Solution

Write the process as Σ_{j=0}^2 X_{t−j} = e_t. Taking spectral densities on both sides (if Y_t = Σ a_j X_{t−j}, then
f_Y(λ) = |Σ a_j e^{ijλ}|² f_X(λ)):

|Σ_{j=0}^2 e^{ijλ}|² f_X(λ) = σ_e²/2π

Σ_{j=0}^2 e^{ijλ} = 1 + e^{iλ} + e^{i2λ} = 1 + (cos λ + i sin λ) + (cos 2λ + i sin 2λ)

|Σ_{j=0}^2 e^{ijλ}|² = |1 + cos λ + cos 2λ + i(sin λ + sin 2λ)|²
= (1 + cos λ + cos 2λ)² + (sin λ + sin 2λ)²

(1 + cos λ + cos 2λ)² = 1 + 2cos λ + cos² λ + 2cos 2λ + 2cos λ cos 2λ + cos² 2λ
(sin λ + sin 2λ)² = sin² λ + 2 sin λ sin 2λ + sin² 2λ

Adding, and using cos² + sin² = 1:

(1 + cos λ + cos 2λ)² + (sin λ + sin 2λ)² = 1 + 1 + 1 + 2cos λ + 2cos 2λ + 2(cos λ cos 2λ + sin λ sin 2λ)
= 3 + 2cos λ + 2cos 2λ + 2cos(2λ − λ)
= 3 + 4cos λ + 2cos 2λ
= 3 + 4cos λ + 2(2cos² λ − 1) = 1 + 4cos λ + 4cos² λ
= (1 + 2cos λ)²

⇒ (1 + 2cos λ)² f_X(λ) = σ_e²/2π ⇒ f_X(λ) = (σ_e²/2π)(1 + 2cos λ)^{−2}, −π ≤ λ ≤ π

Note: if Y_t = Σ a_j X_{t−j}, then f_Y(λ) = |Σ_j a_j e^{−ijλ}|² f_X(λ) gives the same result: e^{−ijλ} is the complex
conjugate of e^{ijλ}, so the modulus is unchanged, and redoing the example with e^{−ijλ} in place of
e^{ijλ} yields the same answer.

Example 3

Find the spectral density function of the process X_t given by X_t = e_t − 2e_{t−1} + e_{t−2}.

Solution

Taking spectral densities on both sides,

f_X(λ) = |Σ_j a_j e^{−ijλ}|² f_e(λ) = |1 − 2e^{−iλ} + e^{−i2λ}|² f_e(λ)
= |1 − 2(cos λ − i sin λ) + (cos 2λ − i sin 2λ)|² f_e(λ)
= |(1 − 2cos λ + cos 2λ) + i(2 sin λ − sin 2λ)|² f_e(λ)
= [(1 − 2cos λ + cos 2λ)² + (2 sin λ − sin 2λ)²] f_e(λ)

(1 − 2cos λ + cos 2λ)² = 1 − 4cos λ + 4cos² λ + 2cos 2λ − 4cos λ cos 2λ + cos² 2λ
(2 sin λ − sin 2λ)² = 4 sin² λ − 4 sin λ sin 2λ + sin² 2λ

Adding:

(1 − 2cos λ + cos 2λ)² + (2 sin λ − sin 2λ)² = 1 + 4 + 1 − 4cos λ + 2cos 2λ − 4(cos λ cos 2λ + sin λ sin 2λ)
= 6 − 4cos λ + 2cos 2λ − 4cos(2λ − λ)
= 6 − 8cos λ + 2(2cos² λ − 1)
= 4 − 8cos λ + 4cos² λ = 4(1 − 2cos λ + cos² λ)
= 4(cos λ − 1)²

⇒ f_X(λ) = 4(cos λ − 1)² σ_e²/2π = (2σ_e²/π)(cos λ − 1)², −π ≤ λ ≤ π
2 
Example 4

A process X_t is given by X_t = φ X_{t−1} + e_t. Find the spectral density function of X_t.

Solution

X_t − φ X_{t−1} = e_t. Taking spectral densities on both sides, we get

|Σ_{j=0}^1 a_j e^{−ijλ}|² f_X(λ) = f_e(λ)
|1 − φ e^{−iλ}|² f_X(λ) = σ_e²/2π
|1 − φ(cos λ − i sin λ)|² f_X(λ) = σ_e²/2π
|(1 − φ cos λ) + iφ sin λ|² f_X(λ) = σ_e²/2π
[(1 − φ cos λ)² + (φ sin λ)²] f_X(λ) = σ_e²/2π
(1 − 2φ cos λ + φ² cos² λ + φ² sin² λ) f_X(λ) = σ_e²/2π
(1 − 2φ cos λ + φ²) f_X(λ) = σ_e²/2π
⇒ f_X(λ) = (1 − 2φ cos λ + φ²)^{−1} σ_e²/2π, −π ≤ λ ≤ π

For convergence of the AR(1) process, |φ| < 1. If we let φ = 0.7, the process X_t = 0.7 X_{t−1} + e_t has
a spectral density function of

f_X(λ) = (1 − 2(0.7) cos λ + 0.49)^{−1} σ_e²/2π = (1.49 − 1.4 cos λ)^{−1} σ_e²/2π

If σ_e² = 1, then f_X(λ) = (1/2π)(1.49 − 1.4 cos λ)^{−1}, −π ≤ λ ≤ π.

 0 0.5 1 1.5 2.0 2.5 3.0


f X (  ) 1.77 0.61 0.22 0.11 0.58 0.06
0.06
MATH 416 TIME SERIES ANALYSIS AND FORECASTING

1.8
f X (  ) 1.6

1.4

1.2

1.0

0.8

0.6

0.4

0.2


0 0.5 1.0 1 1.5 2.0 2.5 3.0
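The table is easy to regenerate. A minimal sketch in plain Python (the function name is my own) evaluating the AR(1) spectral density at the tabulated frequencies:

```python
import math

def ar1_spectrum(lam, phi=0.7, sigma2=1.0):
    # f_X(lambda) = sigma_e^2 / (2*pi*(1 - 2*phi*cos(lambda) + phi^2))
    return sigma2 / (2 * math.pi * (1 - 2 * phi * math.cos(lam) + phi * phi))

for lam in (0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
    print(lam, round(ar1_spectrum(lam), 2))
# 1.77, 0.61, 0.22, 0.11, 0.08, 0.06, 0.06 -- matching the table
```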

Exercise:

In the above example, sketch the graph of the spectral density function for
φ = −0.7 and σ_e² = 0.1.

NOTE

e^{iλ} = cos λ + i sin λ  ……………(i)
e^{−iλ} = cos λ − i sin λ  ……………(ii)

(i) + (ii) gives e^{iλ} + e^{−iλ} = 2cos λ ⇒ cos λ = (e^{iλ} + e^{−iλ})/2
(i) − (ii) gives e^{iλ} − e^{−iλ} = 2i sin λ ⇒ sin λ = (e^{iλ} − e^{−iλ})/2i

Example

The autocovariance function of a process X_t is given by

R(m) = (3/9)σ_e² for m = 0; (2/9)σ_e² for |m| = 1; (1/9)σ_e² for |m| = 2; 0 for |m| > 2.

Find the spectral density function of X_t.

Solution

f(λ) = (1/2π) Σ_{m=−∞}^∞ R(m) e^{−imλ} = (1/2π) Σ_{m=−2}^{2} R(m) e^{−imλ}
     = (1/2π)[R(−2) e^{i2λ} + R(−1) e^{iλ} + R(0) + R(1) e^{−iλ} + R(2) e^{−i2λ}]
     = (1/2π)[(σ_e²/9) e^{i2λ} + (2σ_e²/9) e^{iλ} + 3σ_e²/9 + (2σ_e²/9) e^{−iλ} + (σ_e²/9) e^{−i2λ}]
     = (σ_e²/18π)[e^{i2λ} + 2e^{iλ} + 3 + 2e^{−iλ} + e^{−i2λ}]
     = (σ_e²/18π)[3 + 2(e^{iλ} + e^{−iλ}) + (e^{i2λ} + e^{−i2λ})]
     = (σ_e²/18π)(3 + 4cos λ + 2cos 2λ), −π ≤ λ ≤ π
Exercise 3

a) Consider the ARMA(1, 1) process X_t given by X_t − 0.6 X_{t−1} = e_t − 0.9 e_{t−1}. Find the
autocorrelation function ρ(m).
b) Given that ρ_1 = 0.88, ρ_2 = 0.76, ρ_3 = 0.67, find φ_11, φ_22 and φ_33.
c) Find the PACF for the following processes:
(i) X_t = 0.8 X_{t−1} + e_t
(ii) X_t = 0.3 X_{t−1} + 0.6 X_{t−2} + e_t
d) Find φ_11, φ_22 and φ_33 for the following processes:
(i) X_t = e_t − 0.8 e_{t−1}
(ii) X_t = e_t − 1.2 e_{t−1} + 0.5 e_{t−2}

Exercise 4

a) Find the spectral density function of the process X_t given by:

(i) X_t + X_{t−1} = e_t

|Σ_{j=0}^1 a_j e^{−ijλ}|² f_X(λ) = σ_e²/2π
|1 + (cos λ − i sin λ)|² f_X(λ) = σ_e²/2π
|(1 + cos λ) − i sin λ|² f_X(λ) = σ_e²/2π
[(1 + cos λ)² + sin² λ] f_X(λ) = σ_e²/2π
(1 + 2cos λ + cos² λ + sin² λ) f_X(λ) = σ_e²/2π
(2 + 2cos λ) f_X(λ) = σ_e²/2π
⇒ f_X(λ) = (2 + 2cos λ)^{−1} σ_e²/2π

(ii) X_t + X_{t−1} = e_t + e_{t−1}

|1 + e^{−iλ}|² f_X(λ) = |1 + e^{−iλ}|² f_e(λ)
[(1 + cos λ)² + sin² λ] f_X(λ) = [(1 + cos λ)² + sin² λ] f_e(λ)
(2 + 2cos λ) f_X(λ) = (2 + 2cos λ) σ_e²/2π
⇒ f_X(λ) = σ_e²/2π

b) Consider the process given by X_t + 2X_{t−1} + X_{t−2} = e_t − 2e_{t−1}. Find the spectral density
function in its simplest form.

|1 + 2e^{−iλ} + e^{−i2λ}|² f_X(λ) = |1 − 2e^{−iλ}|² f_e(λ)

Left-hand side: 1 + 2e^{−iλ} + e^{−i2λ} = e^{−iλ}(e^{iλ} + 2 + e^{−iλ}) = e^{−iλ}(2 + 2cos λ), so

|1 + 2e^{−iλ} + e^{−i2λ}|² = (2 + 2cos λ)² = 4(1 + cos λ)²

Right-hand side: |1 − 2e^{−iλ}|² = (1 − 2cos λ)² + 4 sin² λ = 1 − 4cos λ + 4cos² λ + 4 sin² λ = 5 − 4cos λ

⇒ 4(1 + cos λ)² f_X(λ) = (5 − 4cos λ) σ_e²/2π
⇒ f_X(λ) = [(5 − 4cos λ)/(1 + cos λ)²] σ_e²/8π

5.0 FORECASTING

5.1 Introduction

Forecasting the future value of an observed time series is an important problem in many areas, e.g.
economics, production planning, sales forecasting and stock control. Suppose we have an
observed time series X_1, X_2, ..., X_n; then the problem is to estimate the future value X_{n+k}, where
k is the lead time, k = 1, 2, ... The forecast of X_{n+k} made at time n will be denoted by X̂(n, k). A wide
variety of methods exists for forecasting a future value of a time series. It should be noted that no
method is universally the best; rather, you have to choose the method depending on the
problem at hand. Forecasting methods can be classified into three groups:

(i) Subjective methods
(ii) Univariate methods
(iii) Multivariate methods

5.2 Subjective Method

Forecasts can be made on the basis of judgment, intuition, commercial knowledge or any other
relevant information. We shall not consider this method in this course.

5.3 Univariate Method

Forecasts of a given variable are based on a model fitted only to past values of the time series,
so that X̂(n, k) depends only on X_n, X_{n−1}, X_{n−2}, ...

Example: A forecast of future sales will depend on past sales only. This method is sometimes
called a projective method.

5.4 Multivariate Method

Forecasts of a given variable depend at least partly on the values of one or more other time series,
called predictor or explanatory variables; e.g. a sales forecast may depend on stock levels and
advertising expenditure.

5.5 Univariate Methods: here we only look at the Box-Jenkins procedure.

5.5.1 Box-Jenkins Procedure

This forecasting procedure is based on Autoregressive Integrated Moving Average (ARIMA)
models and is usually known as the Box-Jenkins procedure. The main stages in setting up a
Box-Jenkins model are as follows:

(i) Model identification: the data are examined to see which member of the class of
ARIMA models should be estimated.
(ii) Estimation: after an appropriate model has been chosen, the parameters of the
model are estimated.
(iii) Diagnostic checking: the residuals from the fitted model are examined to see if the
chosen model is adequate.
(iv) Consideration of other models: if the first model is not adequate, then other models
may be tried until a satisfactory model is found.

When a satisfactory model is found, forecasts may readily be computed. Given data up to time n,
these forecasts will involve the observations and the fitted residuals up to and including time n. The
minimum mean square error forecast of X_{n+k} is the conditional expectation of X_{n+k} at time n,
i.e. X̂(n, k) = E(X_{n+k} | X_n, X_{n−1}, X_{n−2}, ...). In evaluating this conditional expectation, we use the
fact that the "best" forecast of all future residuals e is simply zero. Box and Jenkins describe three
procedures of forecasting:

1. Using the Difference Equation Form: Forecasts are readily computed directly from the
model equation. Assuming the model equation is known exactly, X̂(n, k) is obtained
from the model equation by replacing:
(i) future values of e by zero,
(ii) future values of X by their conditional expectations,
(iii) past values of X and e by their observed values.

Example: Consider X_t = φ X_{t−1} + e_t + θ e_{t−1}. Then X_{n+1} = φ X_n + e_{n+1} + θ e_n, so replacing the
future value e_{n+1} by zero gives the forecast

X̂(n, 1) = φ X_n + θ e_n = φ X_n + θ(X_n − X̂(n − 1, 1)),

where e_n = X_n − X̂(n − 1, 1) is the one-step prediction error at time n. Next, since
X_{n+2} = φ X_{n+1} + e_{n+2} + θ e_{n+1} and both future e's are replaced by zero, the forecast is

X̂(n, 2) = φ X̂(n, 1).

2. Using the ψ Weights: The ARMA(p, q) process is given by

X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + ... + φ_p X_{t−p} + e_t − θ_1 e_{t−1} − ... − θ_q e_{t−q}, i.e. φ(B) X_t = θ(B) e_t.

Thus we can rewrite the ARMA model as a pure MA process:

X_t = ψ(B) e_t, where ψ(B) = θ(B)/φ(B), so X_t = Σ_{i=0}^∞ ψ_i B^i e_t = Σ_{i=0}^∞ ψ_i e_{t−i} with ψ_0 = 1,
i.e. X_t = e_t + ψ_1 e_{t−1} + ψ_2 e_{t−2} + ...

Hence X_{n+k} = e_{n+k} + ψ_1 e_{n+k−1} + ψ_2 e_{n+k−2} + ... + ψ_k e_n + ψ_{k+1} e_{n−1} + ... Therefore the forecast at
lead time k is

X̂(n, k) = ψ_k e_n + ψ_{k+1} e_{n−1} + ... = Σ_{j=0}^∞ ψ_{k+j} e_{n−j},

so no future e's are included. The prediction error is given by

X_{n+k} − X̂(n, k) = e_{n+k} + ψ_1 e_{n+k−1} + ψ_2 e_{n+k−2} + ... + ψ_{k−1} e_{n+1}.

Therefore the variance of the k-step-ahead prediction error is (1 + ψ_1² + ψ_2² + ... + ψ_{k−1}²) σ_e².
3. Using the π Weights: We can rewrite the ARMA(p, q) process as a pure AR process of the form

φ(B) X_t = θ(B) e_t ⇒ [φ(B)/θ(B)] X_t = e_t, i.e. π(B) X_t = e_t, where π(B) = φ(B)/θ(B).

By inversion we can write π(B) = 1 − Σ_{j≥1} π_j B^j, which is the natural way to write an AR model:

(1 − Σ_{j≥1} π_j B^j) X_t = e_t ⇒ X_t = Σ_{j=1}^∞ π_j X_{t−j} + e_t.

Therefore, X_{n+k} = π_1 X_{n+k−1} + π_2 X_{n+k−2} + ... + π_k X_n + π_{k+1} X_{n−1} + ... + e_{n+k}. The forecast can be
computed recursively by replacing future values of X with their predicted values.
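As an illustration of the difference-equation form, here is a minimal sketch in plain Python (the function name, the start-up convention X̂(0,1) = 0, and the toy series are my own, not from the notes), forecasting an ARMA(1, 1) with known φ and θ:

```python
def arma11_forecast(x, phi, theta, k):
    # One-step forecasts xhat(t, 1) over the sample, tracking prediction
    # errors e_t = x_t - xhat(t-1, 1); all future e's are set to zero.
    xhat = 0.0                         # assumed start-up value xhat(0, 1)
    for xt in x:
        e = xt - xhat                  # observed one-step prediction error
        xhat = phi * xt + theta * e    # xhat(t, 1) = phi*x_t + theta*e_t
    forecasts = [xhat]                 # xhat(n, 1)
    for _ in range(2, k + 1):
        forecasts.append(phi * forecasts[-1])   # xhat(n, k) = phi*xhat(n, k-1)
    return forecasts

x = [1.2, 0.8, -0.3, 0.5, 1.1]   # toy data, for illustration only
print(arma11_forecast(x, phi=0.6, theta=0.4, k=3))
```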

************END GOD BLESS YOU**********
