STOCHASTIC TIME SERIES
• The mean function, 𝜇𝑡 = 𝐸(𝑌𝑡) for 𝑡 = 0, ±1, ±2, …, is the expected value of the process at time t; it can be different at each
time point.
• The autocorrelation, also known as serial correlation, is the correlation between the elements of
a series and those of the same series observed at two given times t and s: 𝜌𝑡,𝑠 = 𝐶𝑜𝑟𝑟(𝑌𝑡 , 𝑌𝑠 ).
• Values of 𝜌𝑡,𝑠 near ±1 indicate strong (linear) dependence, whereas values near zero indicate
weak (linear) dependence.
• If 𝜌𝑡,𝑠 = 0, we say that 𝑌𝑡 and 𝑌𝑠 are uncorrelated.
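As an illustration (not from the slides; the AR coefficient, series length, and seed are arbitrary choices), a short numpy sketch estimates the correlation between a series and itself shifted by k time points:

```python
import numpy as np

rng = np.random.default_rng(0)

def lag_corr(y, k):
    """Correlation between elements of a series and the same series
    shifted by k time points (an estimate of rho_{t, t-k})."""
    y = np.asarray(y, dtype=float)
    return np.corrcoef(y[k:], y[:-k])[0, 1]

# An AR(1)-like series has strong lag-1 dependence; i.i.d. noise does not.
eps = rng.standard_normal(500)
ar = np.zeros(500)
for t in range(1, 500):
    ar[t] = 0.9 * ar[t - 1] + eps[t]

print(lag_corr(ar, 1))   # near the AR coefficient -> strong (linear) dependence
print(lag_corr(eps, 1))  # near zero -> weak (linear) dependence
```

Values near ±1 indicate strong linear dependence, as in the bullet above; values near zero indicate weak dependence.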
STATIONARY TIME SERIES
Stationary Time Series
• To make statistical inferences about the structure of a stochastic process on the
basis of an observed record of that process, we must make some simplifying (and
presumably reasonable) assumptions about that structure.
• The basic idea of stationarity is that the probability laws that govern the behaviour of
the stochastic process do not change over time.
• When n = 1 (univariate), the distribution of 𝑌𝑡 is the same as that of 𝑌𝑡−𝑘 for all t and k;
in other words, the Y’s are (marginally) identically distributed.
• 𝐸(𝑌𝑡 ) = 𝐸(𝑌𝑡−𝑘 ) for all t and k, so the mean function is constant for all times.
• 𝑉𝑎𝑟(𝑌𝑡 ) = 𝑉𝑎𝑟(𝑌𝑡−𝑘 ) for all t and k, so the variance is also constant over time.
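These two properties can be checked by simulation (an illustrative sketch; the mean 2.0, the number of realisations, and the seed are arbitrary): estimate 𝐸(𝑌𝑡) and 𝑉𝑎𝑟(𝑌𝑡) at every time point from many independent realisations of a stationary process.

```python
import numpy as np

rng = np.random.default_rng(1)

# 10,000 independent realisations of a stationary process
# Y_t = mu + eps_t with i.i.d. noise, observed at T = 50 time points.
mu = 2.0
Y = mu + rng.standard_normal((10_000, 50))

means = Y.mean(axis=0)  # E[Y_t] estimated at each time point t
vars_ = Y.var(axis=0)   # Var[Y_t] estimated at each time point t

# Both are (approximately) constant over t, as stationarity requires.
print(means.min(), means.max())  # all near mu = 2.0
print(vars_.min(), vars_.max())  # all near 1.0
```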
Strictly Stationary Time Series
When n = 2?
• When n = 2 (bivariate), the joint distribution of (𝑌𝑡 , 𝑌𝑠 ) is the same as that of
(𝑌𝑡−𝑘 , 𝑌𝑠−𝑘 ) for all t, s and k, so the covariance between 𝑌𝑡 and 𝑌𝑠 depends on time
only through the lag |t − s|.
• Thus, for a stationary process, we simplify the notation and write 𝜇𝑡 = 𝜇
and 𝐸(𝑌𝑡 ) = 𝜇,
since by definition 𝐸(𝜀𝑡 ) = 0.
Thus, the random errors have no effect on the expected value of the variable 𝑌𝑡 .
White Noise
• The name is based on the analogy with white light and indicates that all possible periodic
oscillations are present with equal strength.
• White noise is a special case of a stationary time series.
• It is the most “clean or fundamental” time series; many useful processes can be
constructed from white noise.
• A model is said to be a good fit if its residual error series appears to be a realisation of
an independent (white noise) series.
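The last bullet can be illustrated with a minimal sketch (not from the slides; the linear-trend model, series length, and seed are assumptions for demonstration): fit a trend to trended data and check that the residuals show no serial correlation.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(200, dtype=float)
y = 0.5 * t + rng.standard_normal(200)  # linear trend + white noise

# Fit the trend; a good fit leaves residuals that look like white noise.
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)

# Lag-1 autocorrelation of the residual series should be near zero.
r1 = np.corrcoef(resid[1:], resid[:-1])[0, 1]
print(r1)
```

If the model were a poor fit (e.g. fitting only a constant mean), the residuals would inherit the trend and show strong positive autocorrelation instead.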
White Noise vs Stationary Time Series
1. For the mean, 𝐸(𝑌𝑡 ) is constant over time.
2. For the variance, 𝑉𝑎𝑟(𝑌𝑡 ) is constant over time.
• The process 𝑌𝑡 = 𝑌𝑡−1 + 𝜇 + 𝜀𝑡
is defined as the random walk with drift, since its steps fluctuate around a constant drift, 𝜇.
• It can be shown by repeated substitution that 𝑌𝑡 = 𝑌0 + 𝜇𝑡 + Σ 𝜀𝑖 .
• Therefore, if the observed time series is a realization of a random walk model, it can
be shown that the forecast for k periods ahead is simply 𝑦̂𝑇+𝑘 = 𝑦𝑇 + 𝜇𝑘.
Random Walk with Drift Process
• The expected value of a realisation of this model follows the drift line, 𝐸(𝑌𝑡 ) = 𝑌0 + 𝜇𝑡,
since 𝐸(𝜀𝑖 ) = 0 for all i.
Example
• Black line: random walk with drift; blue line: random walk without drift.
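A sketch of the two processes in the example (illustrative only; the drift value 0.2, the series length, and the seed are arbitrary choices, not the slides' data):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1000
eps = rng.standard_normal(T)

drift = 0.2
rw = np.cumsum(eps)                # random walk: Y_t = Y_{t-1} + eps_t
rw_drift = np.cumsum(drift + eps)  # random walk with drift mu = 0.2

# The drift makes the level trend upward at roughly `drift` per step,
# while the plain random walk wanders with no systematic direction.
print(rw_drift[-1] / T)  # average step, near the drift
print(rw[-1] / T)        # average step, near zero
```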
TESTING FOR STATIONARITY
Autocorrelation Function (ACF)-correlogram
POPULATION ACF
• The population autocorrelation at lag k is 𝜌𝑘 = 𝛾𝑘 / 𝛾0 , where 𝛾𝑘 = 𝐶𝑜𝑣(𝑌𝑡 , 𝑌𝑡−𝑘 ).
Sample Autocorrelation Function (SACF)
• The 𝜌’s are unknown and must be estimated from the observed time series data.
• The sample autocorrelation at lag k is
𝜌̂𝑘 = Σ𝑡=𝑘+1..𝑇 (𝑦𝑡 − 𝑦̄)(𝑦𝑡−𝑘 − 𝑦̄) / Σ𝑡=1..𝑇 (𝑦𝑡 − 𝑦̄)².
• The plot of 𝜌̂𝑘 against k is called the sample ACF (SACF).
• The SACF measures the dependence structure of a time series.
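The standard estimator above can be sketched directly (illustrative; the white-noise input, its length, and the seed are arbitrary):

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocorrelations rho_hat_k for k = 1..max_lag:
    lag-k sample autocovariance divided by the sample variance."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    c0 = np.sum(d ** 2)
    return np.array([np.sum(d[k:] * d[:-k]) / c0
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(4)
wn = rng.standard_normal(300)
print(sample_acf(wn, 5))  # all small: white noise has no serial dependence
```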
Statistical Significance of SACF-individual test
• The hypothesis is tested against a 95% confidence band based on the standard
error of the sample autocorrelation.
• The hypothesis: 𝐻0 : 𝜌𝑘 = 0 versus 𝐻1 : 𝜌𝑘 ≠ 0.
• Reject the null hypothesis at the 5% level of significance if the sample value 𝜌̂𝑘 lies
outside the confidence band ±1.96𝑇^(−1/2), where T is the number of observations in
the series.
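The individual test can be sketched as follows (illustrative; the helper name, the AR(1) example series, and the seed are my own choices, not the slides'):

```python
import numpy as np

def acf_significant(y, k):
    """Individual test of H0: rho_k = 0 using the 95% band +/- 1.96 / sqrt(T).
    Returns (rho_hat_k, band, reject)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    d = y - y.mean()
    rho_k = np.sum(d[k:] * d[:-k]) / np.sum(d ** 2)
    band = 1.96 / np.sqrt(T)
    return rho_k, band, bool(abs(rho_k) > band)

rng = np.random.default_rng(5)
# Strongly autocorrelated series: the lag-1 null should be rejected.
ar = np.zeros(400)
e = rng.standard_normal(400)
for t in range(1, 400):
    ar[t] = 0.8 * ar[t - 1] + e[t]

rho, band, reject = acf_significant(ar, 1)
print(rho, band, reject)  # rho_1 well outside +/- 1.96/sqrt(400)
```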
Example
• Test statistic: the sample autocorrelation 𝜌̂𝑘 .
• Critical value: ±1.96𝑇^(−1/2).
• Decision:
Reject the null hypothesis if |𝜌̂𝑘 | > 1.96𝑇^(−1/2).
By using SACF
• A stationary time series shows an SACF that dies out quickly: only the first few 𝜌̂𝑘 ’s
differ from zero statistically.
• However, if the SACF declines flatly (decays slowly), this is evidence of possible
non-stationarity.
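These two patterns can be reproduced with a short simulation (illustrative; the series length and seed are arbitrary): white noise is stationary and its SACF is negligible at every lag, while a random walk is non-stationary and its SACF decays very slowly.

```python
import numpy as np

def sacf(y, max_lag):
    """Sample ACF at lags 1..max_lag."""
    d = np.asarray(y, dtype=float) - np.mean(y)
    c0 = d @ d
    return np.array([d[k:] @ d[:-k] / c0 for k in range(1, max_lag + 1)])

rng = np.random.default_rng(6)
e = rng.standard_normal(500)
stationary = e              # white noise: SACF dies out immediately
random_walk = np.cumsum(e)  # non-stationary: SACF declines flatly

print(sacf(stationary, 10).round(2))   # all near zero
print(sacf(random_walk, 10).round(2))  # stays large across many lags
```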
• By using the conditional expectation procedure (where the forecast is conditioned on the
past values), we can show that the forecast for period T+1 is simply 𝑦̂𝑇+1 = 𝜃0 .
• Since all 𝑦𝑡 , 𝑡 = 0, ±1, ±2, …, T are independent, the probability distribution is not
affected by the given values of 𝑦1 , 𝑦2 , …, so that the expected value of 𝑦𝑇+1 is
𝐸(𝑦𝑇+1 | 𝑦1 , …, 𝑦𝑇 ) = 𝐸(𝑦𝑇+1 ) = 𝜃0 ,
since 𝐸(𝜀𝑇+1 ) = 0.
• The one-step-ahead forecast error is simply the random shock 𝜀𝑇+1 :
𝑒𝑇 (1) = 𝑦𝑇+1 − 𝑦̂𝑇+1 = 𝜀𝑇+1 .
𝑒𝑇 is used instead of 𝜀𝑇 to differentiate the actual calculated error term from its
corresponding theoretical value, 𝜀𝑇 .
• Hence, for any white noise model, the forecast error for lead time m is
𝑒𝑇 (𝑚) = 𝑦𝑇+𝑚 − 𝑦̂𝑇+𝑚 = 𝜀𝑇+𝑚 .
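The forecasting result can be checked numerically (illustrative sketch; the values 𝜃0 = 5, 𝜎 = 1, the sample sizes, and the seed are arbitrary): for white noise around 𝜃0, the best forecast at every lead time is 𝜃0, and the forecast errors are just the future shocks, with variance 𝜎².

```python
import numpy as np

rng = np.random.default_rng(7)
theta0, sigma = 5.0, 1.0
y = theta0 + sigma * rng.standard_normal(2000)  # white noise around theta0

T = 1000
forecast = y[:T].mean()   # estimate of theta0: the forecast at any lead time m
errors = y[T:] - forecast  # m-step-ahead forecast errors, m = 1, 2, ...

print(forecast)      # near theta0 = 5
print(errors.var())  # near sigma^2 = 1, independent of the lead time
```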