

VOLATILITY FORECASTING: A COMPARISON BETWEEN THE
GARCH(1,1) MODEL AND THE EWMA MODEL
Prof. N. S. Nilakantan #1 and Mr. Pratik Mistry #2

#1 Associate Professor, Quantitative Methods, KJSIMSR, Mumbai, India

#2 PGDM FS 2nd year, KJSIMSR, Mumbai, India

#1 Corresponding author: nilakantan@simsr.somaiya.edu

Abstract:
Volatility forecasting is an important area of research in financial markets. A lot of effort has
gone into improving volatility models, since better forecasts translate into better pricing of
options and better risk management. In this direction, the present paper is an attempt to model
and forecast the volatility of stock returns in the Indian market, using daily data covering the
period from 1 January 2003 to 31 December 2012. The paper presents a comparison between the
EWMA model and the GARCH(1,1) model in the Indian context.

It is well established that the autocorrelation and partial autocorrelation functions are useful
tools in identifying and checking the behaviour of a time series. Similarly, the autocorrelations
and partial autocorrelations of the squared process may prove helpful in identifying and checking
GARCH behaviour in the conditional variance equation. The maximum likelihood estimation of
the linear regression model with GARCH errors is briefly discussed and used to establish the
parameters of the GARCH model.

Based on detailed analysis of various measures and hypothesis testing, conclusions are drawn as
to the relative merits of different models studied in the Indian context.



Introduction

Volatility is an important measure of the time series of stock and other returns in the equity markets.
Volatility is the financial-market term for the standard deviation of returns, a well-defined statistical
measure. While volatility is not an observable parameter, we can use historical data to produce
estimates of current and future levels of volatility and correlations. The concept of volatility
assumes importance in the modelling of value-at-risk (VaR) as well as of derivatives. While VaR
calculations require an input of current volatility to assess possible changes in value over a
short period of time, derivative valuation requires a forecast of volatility over the life of the
derivative. While estimating current and future volatility, we have to keep in mind that volatilities
are not constant and change over time. The EWMA (Exponentially Weighted Moving Average)
and GARCH (Generalised Autoregressive Conditional Heteroscedasticity) models are useful in
this context since they attempt to track the variation in volatility or correlation over time.

Literature Review

Conventional time series and econometric models usually operate under an assumption of
constant variance. Engle (1982) proposed that the uncertainty of inflation tends to change over
time and modelled the inflation-based time series with the ARCH (Autoregressive Conditional
Heteroskedastic) process. This process allows the conditional variance to change over time as a
function of past errors leaving the unconditional variance constant.

The ARCH (Autoregressive Conditional Heteroskedasticity) model, introduced by Engle (1982)


has been extended by many researchers and extensively surveyed in Bera and Higgins (1993),
Bollerslev, Chou and Kroner (1992), Bollerslev, Engle and Nelson (1994) and Diebold and
Lopez (1995). The model has already proven useful in modelling several different economic
phenomena. In Engle (1982), Engle (1983) and Engle and Kraft (1983), models for the inflation
rate are constructed. In Coulson and Robins (1985) the estimated inflation volatility is related to
some key macroeconomic variables. Engle, Lilien and Robins (1985) model the term structure
using an estimate of the conditional variance as a proxy for the risk premium.
Domowitz and Hakkio (1985) apply the same idea to the foreign exchange market. Weiss (1984)
found ARMA (Auto Regressive Moving Average) models with ARCH errors to be successful in
modelling thirteen different U.S. macroeconomic time series.

The ARCH process introduced by Engle (1982) explicitly recognizes the difference between the
unconditional and the conditional variance allowing the latter to change over time as a function
of past errors. The statistical properties of this new parametric class of models have been studied
further in Weiss (1982), and by Milhoj (1984). In empirical applications of the ARCH model, a
relatively long lag in the conditional variance equation is often called for, and to avoid problems



with negative variance parameter estimates, a fixed lag structure is typically imposed (see
Engle (1982), Engle (1983) and Engle and Kraft (1983)).

However, most of the above applications use an arbitrary linearly declining lag structure in the
conditional variance equation to take account of the long memory, since estimating a totally free
lag distribution will often lead to violation of the non-negativity constraints. A new, more
general class of processes, GARCH (Generalized Autoregressive Conditional Heteroskedastic),
introduced by Bollerslev (1986), allows for a much more flexible lag structure. Empirical
findings suggest that GARCH (1, 1) is the most popular structure for many financial time series.

Estimating Volatility

We define σ_n as the volatility of a stock price on day n, as estimated at the end of day n-1. The
square of the volatility, σ_n^2, on day n is the variance. This can be estimated from historical
data as follows.

Assume the value of the market variable (say, a stock) at the end of day i is S_i. The variable u_i is
defined as the continuously compounded return during day i (between the end of day i-1 and the
end of day i):

$$u_i = \ln\frac{S_i}{S_{i-1}}$$

An unbiased estimate of the variance per day, σ_n^2, using the most recent m observations on the
u_i is

$$\sigma_n^2 = \frac{1}{m-1}\sum_{i=1}^{m}\left(u_{n-i} - \bar{u}\right)^2 \qquad (1.1)$$

where ū is the mean of the u_i's:

$$\bar{u} = \frac{1}{m}\sum_{i=1}^{m} u_{n-i}$$

For the purposes of monitoring daily volatility, we modify the above formula with the following
assumptions:

1) u_i is defined as the percentage change in the stock price between the end of day i-1 and
the end of day i, so that



$$u_i = \frac{S_i - S_{i-1}}{S_{i-1}} \qquad (1.2)$$

2) ū is assumed to be zero.

3) m-1 is replaced by m.

While these changes make little difference to the estimates that are calculated, they allow us to
simplify the formula for the variance to

$$\sigma_n^2 = \frac{1}{m}\sum_{i=1}^{m} u_{n-i}^2 \qquad (1.3)$$

where u_i is given by equation (1.2).
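As a minimal sketch of equations (1.2) and (1.3) (assuming a pandas Series of daily closing prices called `close`; the variable name and the window length m = 25 are illustrative choices, not taken from the paper):

```python
import numpy as np
import pandas as pd

def simple_variance_estimate(close: pd.Series, m: int = 25) -> float:
    """Equation (1.3): equally weighted variance of the last m daily returns,
    with the mean return taken as zero and m - 1 replaced by m."""
    u = close.pct_change().dropna()        # equation (1.2): daily percentage change
    recent = u.iloc[-m:].to_numpy()        # the most recent m observations
    return float(np.sum(recent ** 2) / m)  # sigma_n^2

# The daily volatility estimate is the square root of this variance:
# sigma_n = np.sqrt(simple_variance_estimate(close))
```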

Weighting Schemes

While equal weight is given to the m observations in the above equation, our objective is to
estimate the current level of volatility, σ_n, so we may wish to give more weight to recent data.

Accordingly, the formula can be revised to the weighting scheme

$$\sigma_n^2 = \sum_{i=1}^{m} \alpha_i u_{n-i}^2 \qquad (1.4)$$

The variable α_i is the amount of weight given to the observation i days ago, and the α's are
positive. If we choose them so that α_i < α_j for i > j, less weight is given to older
observations. The weights must sum to unity, so that

$$\sum_{i=1}^{m} \alpha_i = 1$$

We may also extend the weighting scheme by assuming a long-run variance and assigning some
weight to it. Let V_L be the long-run variance and γ its weight. We still require the weights to
sum to unity, and hence

$$\gamma + \sum_{i=1}^{m} \alpha_i = 1$$



The model then takes the form

$$\sigma_n^2 = \gamma V_L + \sum_{i=1}^{m} \alpha_i u_{n-i}^2 \qquad (1.5)$$

This is known as an ARCH(m) model, first suggested by Engle (1982). The estimate of the variance is
based on a long-run average variance and m observations; the older an observation, the less
weight it is given. Defining ω = γV_L, the model in equation (1.5) can be written as

$$\sigma_n^2 = \omega + \sum_{i=1}^{m} \alpha_i u_{n-i}^2 \qquad (1.6)$$

Based on the two equations (1.4) and (1.6), different approaches have been developed to monitor
volatility.
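To make the weighting scheme concrete, here is a small sketch of equation (1.6) under an assumed linearly declining lag structure of the kind mentioned in the literature review; the window length and the split between γ and the α's are illustrative choices, not values from the paper:

```python
import numpy as np

def weighted_variance(u: np.ndarray, alphas: np.ndarray,
                      gamma: float = 0.0, V_L: float = 0.0) -> float:
    """Equation (1.6): sigma_n^2 = omega + sum_i alpha_i * u_{n-i}^2, with omega = gamma * V_L.
    alphas[0] is the weight on the most recent observation u_{n-1}."""
    m = len(alphas)
    recent_sq = u[-m:][::-1] ** 2                    # u_{n-1}^2, u_{n-2}^2, ..., u_{n-m}^2
    assert np.all(alphas > 0)                        # weights must be positive
    assert abs(gamma + alphas.sum() - 1.0) < 1e-10   # weights (plus gamma) sum to unity
    return gamma * V_L + float(alphas @ recent_sq)

# Linearly declining weights over the last 10 days, leaving weight 0.1 for the long-run variance:
raw = np.arange(10, 0, -1, dtype=float)              # 10, 9, ..., 1
alphas = 0.9 * raw / raw.sum()
# var = weighted_variance(u, alphas, gamma=0.1, V_L=0.0004)
```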

The EWMA Model

The EWMA model is a particular case of the model in eq. (1.4), where the weights α_i decrease
exponentially as we move back in time. Specifically, we define α_{i+1} = λ α_i, where λ is a constant
between 0 and 1. This weighting scheme leads to a simplified formula for updating volatility
estimates:

$$\sigma_n^2 = \lambda\,\sigma_{n-1}^2 + (1-\lambda)\,u_{n-1}^2 \qquad (1.7)$$

The estimate σ_n of the volatility of a variable for day n (made at the end of day n-1) is calculated
from σ_{n-1} (the estimate, made at the end of day n-2, of the volatility for day n-1) and u_{n-1}
(the most recent daily percentage change in the variable).

The EWMA approach is attractive because relatively little data needs to be stored. At
any given time, only the current estimate of the variance and the most recent observation on the
value of the market variable need to be remembered. The EWMA approach is designed to track
changes in the volatility. If there is a big move in the market variable on day n-1, so that u_{n-1}^2 is
large, equation (1.7) shows that the estimate of the current volatility moves upward. The value of λ
governs how responsive the estimate of the daily volatility is to the most recent daily percentage
change. A low value of λ leads to a great deal of weight being given to u_{n-1}^2 when σ_n is
calculated, while a high value of λ (i.e. a value close to 1) produces estimates of the daily
volatility that respond relatively slowly to new information provided by the daily percentage change.

J. P. Morgan originally created the RiskMetrics™ database and made it publicly available in
1994. The database was built using the EWMA model with λ = 0.94 for updating the daily volatility
estimates in the database. They found that, across a range of different market variables, this value
of λ gives forecasts of the variance that come close to the realized variance rate. The realized
variance rate on a particular day was calculated as an equally weighted average of the u_i^2 over the
subsequent 25 days. It turns out that the RiskMetrics™ EWMA is a non-stationary version of
GARCH(1,1) in which the persistence parameters, α_1 and β_1, sum to 1 (Poon, 2008).
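A minimal sketch of the EWMA recursion in equation (1.7), assuming `u` is a NumPy array of daily percentage changes; the choice of seed value for the first variance is illustrative and is not specified in the paper:

```python
import numpy as np

def ewma_variance(u: np.ndarray, lam: float = 0.94) -> np.ndarray:
    """Equation (1.7): sigma_n^2 = lambda * sigma_{n-1}^2 + (1 - lambda) * u_{n-1}^2."""
    var = np.empty(len(u))
    var[0] = np.var(u)                     # seed with the sample variance (illustrative choice)
    for n in range(1, len(u)):
        var[n] = lam * var[n - 1] + (1.0 - lam) * u[n - 1] ** 2
    return var

# The daily volatility path is the square root of the variance path:
# sigma = np.sqrt(ewma_variance(u, lam=0.94))
```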

The GARCH (1, 1) Model

The GARCH (1, 1) model was proposed by Bollerslev (1986). The difference between the
GARCH (1, 1) model and the EWMA model is analogous to the difference between equation
(1.4) and equation (1.5). In GARCH(1,1), σ_n^2 is calculated from a long-run variance rate, V_L,
as well as from σ_{n-1} and u_{n-1}. The equation for GARCH(1,1) is

$$\sigma_n^2 = \gamma V_L + \alpha u_{n-1}^2 + \beta \sigma_{n-1}^2 \qquad (1.8)$$

where γ is the weight assigned to V_L, α is the weight assigned to u_{n-1}^2, and β is the weight
assigned to σ_{n-1}^2. Since the weights must sum to unity, it follows that

$$\gamma + \alpha + \beta = 1$$

The EWMA model is a particular case of GARCH(1,1), where γ = 0, α = 1 - λ, and β = λ. The
"(1, 1)" in GARCH(1,1) indicates that σ_n^2 is based on the most recent observation of u^2 and the
most recent estimate of the variance rate. The more general GARCH(p, q) model calculates σ_n^2
from the most recent p observations on u^2 and the most recent q estimates of the variance rate.
GARCH(1,1) is by far the most popular of the GARCH models.

Setting ω = γV_L, we can rewrite the GARCH(1,1) model as

$$\sigma_n^2 = \omega + \alpha u_{n-1}^2 + \beta \sigma_{n-1}^2 \qquad (1.9)$$

This form of the model is frequently used for the purpose of estimating the parameters. Once ω,
α and β have been estimated, we can calculate γ = 1 - α - β. The long-term variance V_L can then
be calculated as ω/γ. For a stable GARCH(1,1) process we impose the condition α + β < 1, since
otherwise the weight applied to the long-term variance is negative (for α + β > 1), or the
model reduces to the EWMA model (for α + β = 1).

The weights decline exponentially at the rate β. The parameter β can be interpreted as a “decay
rate”. It is similar to λ in the EWMA model. It defines the relative importance of the
observations on the u’s in determining the current variance.



The GARCH(1,1) model is similar to the EWMA model except that, in addition to assigning
exponentially declining weights to past u^2, it also assigns some weight to the long-run average
variance.
The GARCH(1,1) model recognizes that the variance tends over time to get pulled back to a
long-run average level V_L. The model is equivalent to a stochastic model in which the variance V
follows the process

$$dV = a(V_L - V)\,dt + \xi V\,dz$$

where time is measured in days, a = 1 - α - β, and ξ = α√2. This is a mean-reverting model:
the variance has a drift that pulls it back to V_L at rate a. When V > V_L, the drift in the variance
is negative, and when V < V_L, it is positive. Superimposed on the drift is volatility ξ.
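As a sketch of how the recursion (1.9) is applied once ω, α and β are known; the parameter values in the usage comment are the estimates reported later in the paper, while the seeding of the first variance is an illustrative choice:

```python
import numpy as np

def garch_variance(u: np.ndarray, omega: float, alpha: float, beta: float) -> np.ndarray:
    """Equation (1.9): sigma_n^2 = omega + alpha * u_{n-1}^2 + beta * sigma_{n-1}^2."""
    assert alpha + beta < 1.0              # stability condition for GARCH(1,1)
    var = np.empty(len(u))
    var[0] = np.var(u)                     # seed with the sample variance (illustrative choice)
    for n in range(1, len(u)):
        var[n] = omega + alpha * u[n - 1] ** 2 + beta * var[n - 1]
    return var

# Long-run variance implied by the parameters: V_L = omega / (1 - alpha - beta)
# var = garch_variance(u, omega=0.00000560, alpha=0.08363614, beta=0.91626386)
```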

Estimating GARCH (1,1) Parameters

For estimating the parameters in the models from historical data, we use the maximum likelihood
approach, which involves choosing values for the parameters that maximize the chance (or
likelihood) of the data occurring.

We now consider how the maximum likelihood method can be used to estimate the parameters
when GARCH (1,1) or some other volatility updating scheme is used.
Define v_i = σ_i^2 as the variance estimated for day i. Assume that the probability distribution of u_i
conditional on the variance is normal. The best parameters are the ones that maximize

$$\prod_{i=1}^{m} \frac{1}{\sqrt{2\pi v_i}}\exp\!\left(-\frac{u_i^2}{2 v_i}\right)$$

Taking logarithms (and dropping constant factors), this is equivalent to maximizing

$$\sum_{i=1}^{m}\left[-\ln(v_i) - \frac{u_i^2}{v_i}\right] \qquad (1.10)$$
It is necessary to search iteratively to find the parameters in the model that maximize the
expression in equation (1.10).
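The paper performs this search with Excel's Solver; as an alternative sketch, the same objective (1.10) can be maximized with a general-purpose optimizer such as `scipy.optimize.minimize` (the starting values are arbitrary positive trial numbers, and the variance seed is an illustrative choice):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params: np.ndarray, u: np.ndarray) -> float:
    """Negative of the objective in equation (1.10): sum_i [-ln(v_i) - u_i^2 / v_i]."""
    omega, alpha, beta = params
    v = np.empty(len(u))
    v[0] = np.var(u)                                   # seed variance (illustrative choice)
    for i in range(1, len(u)):
        v[i] = omega + alpha * u[i - 1] ** 2 + beta * v[i - 1]
    return -np.sum(-np.log(v) - u ** 2 / v)

def fit_garch(u: np.ndarray):
    start = np.array([1e-6, 0.05, 0.90])               # positive trial values for omega, alpha, beta
    bounds = [(1e-12, None), (1e-12, 1.0), (1e-12, 1.0)]
    result = minimize(neg_log_likelihood, start, args=(u,), method="L-BFGS-B", bounds=bounds)
    omega, alpha, beta = result.x
    return omega, alpha, beta, -result.fun             # parameters and maximized log-likelihood

# omega, alpha, beta, loglik = fit_garch(u)
# V_L = omega / (1 - alpha - beta)
```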

Methodology

The methodology involves calculating the current volatility from historical data using two
different models, namely EWMA and GARCH(1,1). We used data on the SBI stock to
implement the EWMA and GARCH(1,1) models, taking SBI prices for the 10-year period from
January 2003 to December 2012 from www.nseindia.com. In order to implement the



models in actual practice, we need to estimate the parameters used in the models. For the EWMA
model, we took the parameter λ = 0.94, as used by RiskMetrics™ in their database.

For the estimation of the GARCH parameters, the information on stock prices was laid out in an
Excel sheet and the following steps were performed. The data and calculations are organized in the
spreadsheet, a snapshot of which is provided in Table 1.1. The table on the right holds the values
of the parameters and is first filled with trial numbers (these must be positive for Solver to work).
The required calculations are executed in columns 4-6, and the likelihood measure appears in the
6th column. The values in the 5th and 6th columns are based on the trial values of the GARCH
parameters. We choose the parameters so as to maximize the sum of the numbers in the 6th column.
While this may be achieved by an iterative procedure, there are general-purpose and special-purpose
algorithms to achieve the result; we have used a general-purpose algorithm, Excel's Solver. For
special-purpose algorithms such as Levenberg-Marquardt, see Press et al. (1988).
For the GARCH model, the optimal values of the parameters turn out to be

ω = 0.00000560, α = 0.08363614, β = 0.91626386

and the maximum value of the function in equation (1.10) is 16375.02. The numbers shown in
Table 1.1 were calculated on the final iteration of the search for the optimal ω, α and β. The
long-term variance rate, V_L, in our example is calculated as 0.0560, i.e. 5.6%, using the
formula ω/(1 - α - β).
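As a quick arithmetic check using the reported estimates,

$$V_L = \frac{\omega}{1-\alpha-\beta} = \frac{0.00000560}{1 - 0.08363614 - 0.91626386} = \frac{0.00000560}{0.00010} = 0.0560.$$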

When the EWMA model is used, the estimation procedure is relatively simple. We set ω = 0,
α = 1 - λ and β = λ, so only one parameter, λ, has to be estimated. Here we have taken
λ = 0.94, as per RiskMetrics™.



Table 1.1: Estimation of parameters in the GARCH(1,1) model

Data Analysis and Interpretation

Figure 1.1 shows how the GARCH(1,1) volatility of the SBI stock has changed over the 10-year
period covered by the data. Most of the time, the volatility was between 2% and 4% per day,
but volatilities over 5% were experienced during some periods.



Figure 1.1: Daily Volatility of the SBI stock, 2003-2012

Figure 1.2: Comparison between GARCH (1,1) model and EWMA model

How good is the Model?

The assumption underlying the GARCH model is that volatility changes with the passage of time:
when u_i^2 is high, there is a tendency for u_{i+1}^2 to be high, and vice versa. We can test how
true this is by examining the autocorrelation structure of the u_i^2. We have tested this with the
Ljung-Box test (1978). If a GARCH model is doing well, it should remove the autocorrelation. We
test this by considering the autocorrelation structure of the variables u_i^2 / σ_i^2. If these show
very little autocorrelation, our model for σ_i has performed well in explaining the autocorrelations
in the u_i^2.

Table 1.2: Autocorrelations before and after use of a GARCH (1,1) Model

Table 1.2 shows the results for the State Bank of India stock. The first column shows the lags
considered when the autocorrelation is calculated. The second column shows the autocorrelations
for u_i^2, and the third column shows the autocorrelations for u_i^2 / σ_i^2.
The table shows that the autocorrelations for u_i^2 are positive for all lags between 1 and 15. In the
case of u_i^2 / σ_i^2, some of the autocorrelations are positive and some are negative, and they are
all much smaller in magnitude than the autocorrelations for u_i^2.

The GARCH model appears to have done a good job in explaining the data. For a more scientific
test, we can use what is known as the Ljung-Box statistic. If a series has m observations, the
Ljung-Box statistic is

$$Q = m \sum_{k=1}^{K} w_k\, \eta_k^2$$

where η_k is the autocorrelation for lag k, K is the number of lags considered, and

$$w_k = \frac{m+2}{m-k}$$

Under the null hypothesis of zero autocorrelation, Q is approximately chi-squared distributed with
K degrees of freedom.



For K=15, zero autocorrelation can be rejected with 95% confidence when the L-B statistic is
greater than 25.

For significance level α, the critical region for rejection of the hypothesis of randomness is

$$Q > \chi^2_{1-\alpha,\,h}$$

where $\chi^2_{1-\alpha,\,h}$ is the (1-α)-quantile of the chi-squared distribution with h degrees of freedom.

The Ljung-Box statistic for the u_i^2 series is about 512. This is strong evidence of autocorrelation.
For the u_i^2 / σ_i^2 series, the Ljung-Box statistic is 19, suggesting that the autocorrelation has
been largely removed by the GARCH model.
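As a sketch of the same diagnostic in code (assuming NumPy arrays `u` and `sigma` holding the daily returns and the fitted GARCH volatilities; `statsmodels` and `scipy` are used here for convenience and are not the tools used in the paper):

```python
import numpy as np
from scipy.stats import chi2
from statsmodels.stats.diagnostic import acorr_ljungbox

K = 15
critical_value = chi2.ppf(0.95, K)                                    # roughly 25 for 15 lags

lb_raw = acorr_ljungbox(u ** 2, lags=[K], return_df=True)             # test on u_i^2
lb_std = acorr_ljungbox((u / sigma) ** 2, lags=[K], return_df=True)   # test on u_i^2 / sigma_i^2

print("95% critical value:", round(critical_value, 2))
print("Q statistic, u^2:          ", float(lb_raw["lb_stat"].iloc[0]))
print("Q statistic, u^2 / sigma^2:", float(lb_std["lb_stat"].iloc[0]))
```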

Conclusion & Limitations

We have presented in this paper two of the most commonly used stochastic models for estimating the
volatility of financial returns. The GARCH model is to be preferred for short-term horizons
because it accounts for mean reversion. A common disadvantage of standard GARCH models is
that they cannot model asymmetries of the volatility with respect to the sign of past shocks.
Many different GARCH models have been designed since Engle and Bollerslev; these are well
worth considering because they take account of leverage, asymmetry, and other properties of
financial time series not well captured by the two models presented here. Some of these are
IGARCH, GARCH-M, EGARCH, TGARCH, PGARCH, NGARCH, QGARCH, GJR-GARCH,
and AGARCH. Though GARCH models only indicate the magnitude of returns and not the
direction of stock prices, these indications are important for risk management and margin
calculations in the stock exchange. In view of the theoretical appeal of the GARCH model, it is
worthwhile to explore whether stock exchanges could adopt GARCH in place of EWMA.

Scope for Further Research

For every model that is developed to track variances, there is a corresponding model that can be
developed to track covariances. The procedures described here can therefore be used to update
the complete variance-covariance matrix used in value-at-risk calculations.
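As a brief illustration (not part of the paper), the covariance analogue of the EWMA update (1.7) for two return series can be sketched as:

```python
import numpy as np

def ewma_covariance(x: np.ndarray, y: np.ndarray, lam: float = 0.94) -> np.ndarray:
    """cov_n = lambda * cov_{n-1} + (1 - lambda) * x_{n-1} * y_{n-1},
    the covariance analogue of the EWMA variance update in equation (1.7)."""
    cov = np.empty(len(x))
    cov[0] = float(np.mean(x * y))         # seed with the sample cross-moment (illustrative choice)
    for n in range(1, len(x)):
        cov[n] = lam * cov[n - 1] + (1.0 - lam) * x[n - 1] * y[n - 1]
    return cov

# Updating every pairwise covariance in this way fills in the full variance-covariance
# matrix used in value-at-risk calculations.
```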

Acknowledgements
The authors acknowledge, with thanks, the opportunity provided by KJSIMSR, Mumbai, to present part of these
findings at its finance conference, SIFICO 2013.



References:

Bera, A.K. and M.L. Higgins (1993), “ARCH Models: Properties, Estimation and Testing,” Journal of
Economic Surveys, 7, 305-366.

Bollerslev, T. (1986), “Generalized Autoregressive Conditional Heteroskedasticity,” Journal of
Econometrics, 31, 307-327.

Bollerslev, T., R.F. Engle and D.B. Nelson (1994), “ARCH Models,” in R.F. Engle and D. McFadden
(eds.), Handbook of Econometrics, Volume IV, 2959-3038. Amsterdam: North-Holland.

Bollerslev, T., R.Y. Chou and K.F. Kroner (1992), “ARCH Modeling in Finance: A Selective Review of
the Theory and Empirical Evidence,” Journal of Econometrics, 52, 5-59.

Coulson, N.E. and R.P. Robins, 1985, Aggregate economic activity and the variance of inflation: Another
look, Economics Letters 17, 71-75.

Diebold, F.X. and J. Lopez (1995), “Modeling Volatility Dynamics,” in K. Hoover (ed.),
Macroeconometrics: Developments, Tensions and Prospects, 427-472. Boston: Kluwer Academic Press.

Domowitz, I. and C.S. Hakkio, 1985, Conditional variance and the risk premium in the foreign exchange
market, Journal of International Economics 19, 47-66.

Engle, R.F. (1982), “Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of
U.K. Inflation,” Econometrica, 50, 987-1008.

Engle, R.F., 1983, Estimates of the variance of U.S. inflation based on the ARCH model, Journal of
Money Credit and Banking 15, 286-301.

Engle, R.F. and D. Kraft, 1983, Multiperiod forecast error variances of inflation estimated from ARCH
models, in: A. Zellner, ed., Applied time series analysis of economic data (Bureau of the Census,
Washington, DC) 293-302.

Engle, R.F., D. Lilien and R. Robins, 1985, Estimation of time varying risk premiums in the term
structure, Discussion paper 85-17 (University of California, San Diego, CA)

Ljung, G.M. and G.E.P. Box (1978), “On a Measure of Lack of Fit in Time Series Models,”
Biometrika, 65(2), 297-303.

Milhoj, A., 1984, The moment structure of ARCH processes, Research report 94 (Institute of Statistics,
University of Copenhagen, Copenhagen).

Weiss, A.A., 1982, Asymptotic theory for ARCH models: Stability, estimation and testing, Discussion
paper 82-36 (University of California, San Diego, CA).

Weiss, A.A., 1984, ARMA models with ARCH errors, Journal of Time Series Analysis 5, 129-143.

