School of Economics
Lund University
Sweden, Spring 2000

Value-at-Risk as a Risk Measurement
Tool for Swedish Equity Portfolios

Tutor: Hossein Asgharian, Department of Economics
Authors: Mikael Kärrsten, Fredrik Olsson
Abstract
Title: Value-at-Risk as a Risk Measurement Tool for Swedish Equity Portfolios
Seminar date: 2000-06-08
Subject: Master thesis 10 credits, Finance
Authors: Mikael Kärrsten & Fredrik Olsson
Tutor: Hossein Asgharian
Purpose: The purpose of this Master thesis is to examine the applicability of
different VaR methods for Swedish equity portfolios. In addition,
we will analyse whether equity market cap has any impact on how
well-functioning and reliable the VaR methods are. Based on these
results we will discuss the implications of VaR for asset managers.
Method: To assess whether VaR can be considered as a reliable and stable
risk measurement tool for Swedish equity portfolios, we have
performed a quantitative study. The study covers three different
VaR approaches and seven methods for both the 95% and the 99%
confidence levels. Furthermore, all results are evaluated using nine
different performance criteria as well as statistical significance and
normality tests.
Conclusions: We can conclude that most VaR methods work well at the 95%
confidence level, while at the 99% level the results are more
ambiguous. The methods based on the assumption of normally
distributed returns produce attractive results for the OMX
portfolio, but for the small-cap and mixed portfolios these methods
tend to underestimate the VaR. Furthermore, our study shows that
the portfolio returns are not normally distributed. For this reason
we recommend the historical simulation approach, which does not
rest on the assumption of normality. In addition, the historical
simulation with a window size of 250 trading days produces the
most attractive results for the small-cap and mixed portfolios.
However, none of the VaR methods seem to produce fully
reliable results. Therefore VaR can be questioned as a
useful tool for asset managers managing Swedish equity portfolios.
Key words: VaR, distribution of financial returns, historical simulation,
equally weighted moving average, exponentially weighted moving
average.
Table of contents
1. Introduction 1
1.1 Background…………………………………………………………………………... 1
1.2 Problem Discussion…………………………………………………………………... 2
1.3 Purpose………………………………………………………………………………. 3
1.4 Target Group ………………………………………………………………………… 4
1.5 Disposition…………………………………………………………………………… 4
2. Methodology 5
2.1 General methodology………………………………………………………………… 5
2.1.1 Choice of Subject…………………………………………………………………... 5
2.1.2 Perspective………………………………………………………………………5
2.1.3 Scientific Approach…………………………………………………………….. 6
2.1.4 Theory and Object………………………………………………………………. 7
2.2 Practical Methodology……………………………………………………………….. 7
2.2.1 Primary Data……………………………………………………………………. 7
2.2.2 Secondary Data…………………………………………………………………. 8
2.2.3 Criticism of the Sources…………………………………………………………8
2.3 Validity……………………………………………………………………………….. 9
2.4 Reliability…………………………………………………………………………….. 9
2.5 Empirical Study………………………………………………………………………. 9
2.6 Criticism of Chosen Methodology…………………………………………………… 10
2.7 Alternative Methodology…………………………………………………………….. 10
3. VaR Theory 11
3.1 Risks………………………………………………………………………………….. 11
3.1.1 Value at Risk (VaR)……………………………………………………………..13
3.2 The Implications of VaR for Asset Managers………………………………………... 14
3.3 Normal Distribution of Financial Returns……………………………………………. 16
3.3.1 Skewness and Kurtosis…………………………………………………………. 18
3.4 VaR Approaches………………………………………………………………………19
3.4.1 The Historical Simulation Approach…………………………………………… 19
3.4.1.1 Advantages and Disadvantages of the HS Approach…………………….. 20
3.4.2 The Equally Weighted Average Approach……………………………………... 21
3.4.2.1 Advantages and Disadvantages of the EqWMA Approach………………. 21
3.4.3 The Exponentially Weighted Average Approach………………………………. 22
3.4.3.1 What Value of λ Should Be Used?……………………………………….. 24
3.4.3.2 Advantages and Disadvantages of the ExpWMA Approach……………... 24
3.4.4 The New Improved VaR Methodology………………………………………… 25
3.4.4.1 Advantages and Disadvantages of the Improved VaR Methodology…….. 26
3.4.5 Monte Carlo Simulation…………………………………………………………26
3.4.5.1 Advantages and Disadvantages with the MCS Approach………………... 27
3.4.6 Semi-parametric VaR Approach……………………………………………….. 27
3.4.6.1 Advantages and Disadvantages with the Semi-parametric VaR Approach. 28
3.4.7 The Stress Testing Approach…………………………………………………… 29
3.4.7.1 Advantages and Disadvantages with Stress Testing……………………… 29
3.5 Multi-day VaR Prediction……………………………………………………………. 30
4. Statistical Methodology 31
4.1 Working Process………………………………………………………………………31
4.2 VaR Methods Used in the Study……………………………………………………... 32
4.3 Calculation Procedure for the HS Approach…………………………………………. 33
4.4 Calculation Procedure for the EqWMA Approach…………………………………... 34
4.5 Calculation Procedure for the ExpWMA Approach…………………………………. 34
4.6 Performance Evaluation Criteria……………………………………………………... 35
4.6.1 Mean Relative Bias……………………………………………………………... 35
4.6.1.1 Calculation Procedure…………………………………………………….. 35
4.6.2 Root Mean Squared Relative Bias……………………………………………… 36
4.6.2.1 Calculation Procedure…………………………………………………….. 36
4.6.3 Annualized Percentage Volatility………………………………………………. 36
4.6.3.1 Calculation Procedure…………………………………………………….. 36
4.6.4 Fraction of Outcomes Covered…………………………………………………. 37
4.6.4.1 Calculation Procedure…………………………………………………….. 37
4.6.5 Multiple Needed to Attain Desired Coverage………………………………….. 37
4.6.5.1 Calculation Procedure…………………………………………………….. 38
4.6.6 Average Multiple of Tail Event to Risk Measure………………………………. 38
4.6.6.1 Calculation Procedure…………………………………………………….. 38
4.6.7 Maximum Multiple of Tail Event to Risk Measure……………………………..38
4.6.7.1 Calculation Procedure…………………………………………………….. 39
4.6.8 Correlation between Risk Measure and Absolute Value of Outcome………….. 39
4.6.8.1 Calculation Procedure…………………………………………………….. 39
4.6.9 Mean Relative Bias for Risk Measures Scaled to Desired Level of Coverage…. 40
4.6.9.1 Calculation procedure…………………………………………………….. 40
4.7 Hypothesis Testing…………………………………………………………………… 40
4.7.1 Actual Portion of Fraction of Outcomes Covered……………………………… 41
4.7.2 Difference in FoOC between OMX and Small-cap Portfolios…………………. 42
4.7.3 Significance Test of the Correlation Coefficients………………………………. 42
4.8 Normality Tests………………………………………………………………………. 43
4.9 Criticism of Primary Data……………………………………………………………. 44
5. Results 45
5.1 Mean Relative Bias…………………………………………………………………... 45
5.2 Root Mean Squared Relative Bias……………………………………………………. 46
5.3 Annualized Percentage Volatility…………………………………………………….. 47
5.4 Fraction of Outcomes Covered………………………………………………………..47
5.4.1 Significance Testing……………………………………………………………..48
5.4.1.1 Fraction of Outcomes Covered………………………………………….... 49
5.4.1.2 Difference between the OMX and Small-cap Portfolios…………………. 49
5.5 Multiple Needed to Attain Desired Coverage………………………………………... 50
5.6 Average Multiple of Tail Events to Risk Measure…………………………………… 50
5.7 Maximum Multiple of Tail Event to Risk Measure………………………………….. 51
5.8 Correlation between Risk Measure and Absolute Value of Outcome………………...52
5.8.1 Significance Testing……………………………………………………………. 52
5.9 Mean Relative Bias for Risk Measures Scaled to Desired Level of Coverage………. 52
5.10 Normality Tests……………………………………………………………………... 53
5.10.1 Results from the Normality Tests……………………………………………... 53
6. Conclusions 57
6.1 Evaluation of VaR Methods………………………………………………………….. 57
6.2 The Distribution of Financial Returns………………………………………………... 58
6.3 Implications of VaR for Asset Managers…………………………………………….. 59
6.4 Suggestions for Further Research……………………………………………………... 61
List of references
Appendices
List of Tables
Table 1. The number of historical observations used by the ExpWMA approach…………... 23
Table 2. Results from normality tests…………………………………………………………54
List of Figures
Figure 1. Normal vs leptokurtic distribution…………………………………………………. 17
Figure 2. Normal distributions with different variances……………………………………... 25
Figure 3. A random normal distribution plotted against the OMX portfolio returns…………54
Figure 4. A random normal distribution plotted against the small-cap portfolio returns…….. 55
Figure 5. A random normal distribution plotted against the mixed portfolio returns………... 55
Chapter 1 – Introduction
1.1 Background
While finance is about risk/return and risk management, the specialized study of risk is a rather recent phenomenon.[1] It has become a critical issue over the last decade, since organizations have suffered great losses, often from risks they never should have taken in the first place.[2] The most well-known example of this is probably the collapse of Barings Bank in 1995, caused by the Singapore-based derivatives trader Nick Leeson, who took large positions in futures and options on Asian stock exchanges.[3] Other internationally well-known companies that have been seriously hurt by insufficient risk management techniques are the German commodity trading firm Metallgesellschaft in 1993 and Sumitomo Corp. in 1996, which lost more than 1.8 billion USD through unauthorized copper trades.[4]

In Sweden, Electrolux lost 250 million SEK on currency trading and MeritaNordbanken lost 290 million SEK taking short positions in stocks. Although these losses can to a large extent be labelled as fraud, they are also the result of an unsatisfactory risk communication system.[5]

Today the financial system is very vulnerable, since the solidity in the banking industry is as low as a few percentage points. One way to solve this problem would be to raise the capital base in the financial sector, so that banks could more easily cope with unanticipated disturbances and falls in the financial markets. On the other hand, keeping excess capital is costly, and it has to be paid for by someone, most likely the banks’ clients. Solidity has, however, become a less important measure, since many risks are off the balance sheet. Therefore a measure of the total risk exposure is needed.[6]

The number one tool in this respect has become Value-at-Risk (hereafter VaR), which today is used by all US commercial banks to monitor trading portfolios on a daily basis.[7] The VaR method is basically a statistical estimate which measures, at a certain confidence level, the amount that may be lost within a certain time period due to potential changes in the market prices of the underlying assets.[8]
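As a concrete illustration of this definition: under the common (and, as discussed below, questionable) assumption of normally distributed returns, a one-day VaR at a given confidence level is simply a multiple of the return volatility. The portfolio value and volatility below are made-up numbers for illustration only:

```python
from statistics import NormalDist

def normal_var(volatility, portfolio_value, confidence=0.95, mean=0.0):
    """One-day VaR under a normal-returns assumption: the loss that is
    exceeded with probability (1 - confidence)."""
    z = NormalDist().inv_cdf(confidence)  # about 1.645 for 95%, 2.326 for 99%
    return (z * volatility - mean) * portfolio_value

# A hypothetical portfolio of 10,000,000 SEK with 1.5% daily return volatility
var_95 = normal_var(0.015, 10_000_000, 0.95)
var_99 = normal_var(0.015, 10_000_000, 0.99)
print(f"95% VaR: {var_95:,.0f} SEK")  # about 246,700 SEK
print(f"99% VaR: {var_99:,.0f} SEK")  # about 349,000 SEK
```

In words: with 95% confidence, the one-day loss on this hypothetical portfolio should not exceed roughly 247,000 SEK if returns really were normal.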
[1] Danielsson, J., “Class notes Corporate Finance & Financial Markets”, (1998–1999), p. 1.
[2] Thornberg, J., “Derivative users lack refined controls of risk”, (1998), p. 2.
[3] Koupparis, P., “Barings – A Random Walk to Self-Destruction”, (1995), p. 3.
[4] Danielsson, J., “Class notes Corporate Finance & Financial Markets”, (1998–1999), p. 45.
[5] Björklund, M., “Bristande kontroll möjliggör svindlerier”, (2000), p. 7.
[6] Bäckström, U., “Betydelsen av riskhantering”, (2000), p. 12.
[7] Jorion, P., “In Defense of VAR”, (1997), p. 1.
[8] Yiehmin Lui, R., “VaR and VaR derivatives”, (1996), p. 2.
1.2 Problem Discussion
The VaR measure is based on a number of assumptions to facilitate the calculations, of which the most important is how the distribution of returns is viewed. Most VaR models use a normal distribution to characterise the distribution of returns, and historical returns are mainly used to make predictions about the future. However, VaR has been criticised for resting on assumptions that do not hold when the financial markets are under stress, and the normal distribution does not do a good job of predicting the distribution of outcomes.[9] Research has found that financial returns exhibit fat tails, which implies that the normal distribution works well in predicting frequent outcomes but is not a good estimator of extreme events.[10] In 1997 Alan Greenspan expressed his concern about this problem by stating “…as you well know, the biggest problems we now have with the whole evolution of risk is the fat-tail problem, which is really creating large conceptual difficulties.”[11] As an effect of the fat-tail problem, new VaR models have been proposed as estimators of the distribution of outcomes, but these are complex and have not yet been fully accepted.
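The fat-tail problem can be illustrated numerically: a normal distribution makes a 4-standard-deviation daily move essentially impossible, while a fat-tailed distribution assigns it a non-negligible probability. The Student-t distribution below is just one common stand-in for fat tails, chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Normal returns vs fat-tailed returns (Student-t with 4 degrees of
# freedom, rescaled to unit variance since Var[t(4)] = 4/(4-2) = 2)
normal = rng.standard_normal(n)
t4 = rng.standard_t(4, n) / np.sqrt(2.0)

for name, x in [("normal", normal), ("fat-tailed", t4)]:
    # Estimated probability of a move larger than 4 standard deviations
    exceed = np.mean(np.abs(x) > 4)
    print(f"{name:10s}: P(|move| > 4 sd) ~ {exceed:.6f}")
```

The fat-tailed series produces 4-sigma days dozens of times more often than the normal one, which is exactly why a normal-based VaR tends to understate extreme-event risk.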
To further complicate the VaR calculations, the volatility on the stock market is not constant over time. Evidence shows that financial returns exhibit clusters of high volatility, i.e. a day with a large absolute outcome is followed by another day with a large absolute outcome.[12] In addition, the volatility on the Stockholm Stock Exchange has increased during the second half of the 1990s, and today there is a larger portion of “glamour” stocks, i.e. stocks with low book-to-market and high P/E ratios.[13,14] These stocks have a more volatile share price development, since the time horizon for their expected profits is longer than for other companies and they are more dependent on future expectations.
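Volatility clustering shows up statistically as positive autocorrelation in absolute (or squared) returns, even when the returns themselves are nearly uncorrelated. The toy series below is a simple GARCH(1,1)-style simulation; the parameter values are arbitrary illustrations, not estimates from the thesis data:

```python
import numpy as np

rng = np.random.default_rng(7)

# GARCH(1,1)-style simulation: today's variance depends on yesterday's
# shock and yesterday's variance, which produces volatility clusters.
n, omega, alpha, beta = 5000, 1e-6, 0.1, 0.85
r = np.empty(n)
sigma2 = omega / (1 - alpha - beta)  # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

def autocorr(x, lag=1):
    """Sample lag-k autocorrelation of a series."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

print("lag-1 autocorrelation of returns:  ", round(autocorr(r), 3))
print("lag-1 autocorrelation of |returns|:", round(autocorr(np.abs(r)), 3))
```

The returns themselves look uncorrelated, but their absolute values are clearly autocorrelated: a large move today makes a large move tomorrow more likely.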
In addition, there might be a difference in how applicable the VaR models are for equities with different market capitalisation (market cap). Smaller companies’ return distributions might differ from those of larger companies, since smaller companies might not be as frequently traded, and new information regarding these stocks can have a greater impact on the share price.
[9] Bäckström, U., “Betydelsen av riskhantering”, (2000), p. 3.
[10] See for instance JP Morgan/Reuters, RiskMetrics – Technical Document, (1996), p. 64, or Dowd, K., “A Value at Risk Approach to Risk Return Analysis”, (1999), p. 66.
[11] Danielsson, J., “Class notes Corporate Finance and Financial Markets”, (1998–1999), p. 11.
[12] JP Morgan/Reuters, RiskMetrics – Monitor, (1996), p. 9.
[13] See appendix 1.
[14] Affärsvärlden, “Riktningen och värde avgör svängningarna”, (1998), p. 28.
From an asset manager’s perspective, portfolio risk is one of the most decisive parameters to keep under control. A well-functioning VaR measurement method could therefore be a superior way to supervise the portfolio risk and quantify potential losses. However, as stated before, there are several potential problems that can make VaR an unstable and perhaps unreliable method, where the risks most crucial to capture, i.e. extreme events, are the most difficult to cover. A well-functioning VaR measurement method could also serve as a communication tool between customers/management and the asset manager. Even if the VaR calculations are quite complex for the general public, the results per se are easy to understand. For an asset manager, discretion regarding the portfolio holdings is important, since the holdings determine the level of competitiveness. VaR could therefore be a way of comparing portfolio risks between asset managers without unmasking too much of the holdings.
Some previous studies have been made in this area, but these have mostly focused on the American financial markets. In addition, none of the studies we have seen have tested different asset characteristics. We have tried to take one step further by looking at possible impacts of equity market cap on the reliability of VaR.

From the discussion above we ask the following questions:

§ Which VaR method is the most applicable as a risk measurement tool for Swedish equity portfolios?
§ Are there any differences in how reliable and useful different VaR methods are with respect to portfolios consisting of shares with different market caps?
§ Is VaR a useful tool for asset managers to monitor risk?
1.3 Purpose
The purpose of this Master thesis is to examine the applicability of different VaR
methods for Swedish equity portfolios. In addition, we will analyse whether equity market cap has any impact on how well-functioning and reliable the VaR methods are. Based on these results we will discuss the implications of VaR for asset managers.
1.4 Target Group
This Master thesis is intended for people with at least a basic knowledge in
finance. The target group is mainly people working with asset management and
others with an interest in financial economics.
1.5 Disposition
In chapter 2 the methodology, both in a general and a practical perspective, is
outlined. We will further discuss the primary and secondary sources that have
been used.
In the initial stage of chapter 3 different types of risk are presented. Then we will
describe the usefulness of VaR for asset managers and present theories regarding
the distribution of financial returns. Finally, we will outline seven different VaR
approaches, with a particular focus on the three we use in our study.
Chapter 4 describes how the study was conducted. We will in detail present our
procedure for collection of data, construction of portfolios, choice of VaR
methods and calculations. In addition, the performance criteria, normality and
statistical significance tests that have been used in the study are described.
In chapter 5 the results of the study are presented. Based on the performance
criteria, the normality and significance tests presented in the previous chapter we
will analyse and comment on our results.
In chapter 6 we will present the conclusions of our study. The applicability of the
VaR methods as well as the implications for asset managers are discussed.
Chapter 2 – Methodology
The methodology chapter is one of the elementary parts of an academic paper, and what is written should be possible to evaluate and replicate.[15] This means that the content of the paper should be open to questioning, and that by repeating the same investigation the same result should be reached. This chapter describes our approach to reaching the purpose and goal of this Master thesis and how we have tackled the subject. We discuss the validity as well as the reliability of the sources and round off by proposing other ways in which the subject could have been addressed. A detailed description of the methodology and data used in the VaR tests is presented in chapter 4.
2.1 General methodology
2.1.1 Choice of subject
VaR is a risk measurement approach that, since its breakthrough in the beginning of the 1990s, has become increasingly popular, especially in the banking industry. Several studies of the VaR concept have been made from many different perspectives, but we have not identified any research on VaR with a focus on the Swedish equity market. In addition, the topics of stock market risk and equity risk management are very timely, both due to recent large losses and due to the increasing volatility on the Swedish stock market during the last five years. We find the combination of VaR for Swedish equities and the risk topics per se very appealing, which explains the choice of subject for this Master thesis.
2.1.2 Perspective
We have chosen to write this Master thesis from the perspective of an asset manager managing equity portfolios. We find this perspective to be the most interesting one, since asset managers are likely the actors on the stock market that will benefit the most from a risk measurement tool as stock markets become increasingly volatile.
[15] Backman, J., Att skriva och läsa vetenskapliga rapporter, (1985), p. 27.
2.1.3 Scientific approach
There are two main scientific ways to view a problem: the positivistic and the hermeneutic.[16] The positivistic is basically a rationalistic view that has its roots in the growing scientific society of the 18th century. The central point is that there is a reality that we can gain knowledge about through observation. The knowledge is neutral and totally objective, i.e. without personal conception.[17] Through experiment, quantitative measurement and logical reasoning, theories are built which can then be converted into hypotheses that can be tested. Statements should be presented with clear definitions and a logical as well as analytical approach.[18]

The hermeneutic approach, on the other hand, is based on the view that, in contrast to the laws of nature, there are no constant laws for human behaviour or for society.[19] The dialogue between people plays a central role, and the hermeneutic approach means that the scientist should try to understand other scientists’ actions and get a general picture of the subject.[20]

Both of these approaches can be criticized, and many scientific studies contain both approaches linked together. The positivistic approach can be regarded as too simplistic, but can in some cases be a complement to the hermeneutic approach, whose reliability is hard to control.[21]

In this Master thesis we base our research on a quantitative study and therefore take a positivistic approach. Hence, we will try to logically analyze the results from our study of VaR for stocks listed on the Stockholm Stock Exchange. In the interpretation of the study we will try to raise our findings to a more general level and discuss how they can be of assistance to asset managers. However, asset managers are also influenced by many other variables, for example risk attitude and their view of the stock market. Therefore, we take a more hermeneutic approach in our discussion of the implications of VaR for asset managers.
[16] Svenning, C., Metodboken, (1996), p. 25.
[17] Halvorsen, K., Samhällsvetenskaplig metod, (1992), p. 14.
[18] Wiedersheim-Paul F. & Eriksson L., Att utreda forska och rapportera, (1991), p. 150.
[19] Halvorsen, K., Samhällsvetenskaplig metod, (1992), p. 14.
[20] Wiedersheim-Paul F. & Eriksson L., Att utreda forska och rapportera, (1991), p. 151.
[21] Ibid, p. 151.
2.1.4 Theory and object
To simplify reality it is common to use different kinds of theories and models. These theories and models also facilitate the evaluation of our findings. Theories are often more general than models, since theories can incorporate more variables and refer to longer periods of time. Models, on the other hand, are a development of the theories and aim at specifying what the theories have built up, so that the models can be used in practice. Models can also be built without any support from theories, especially in undeveloped areas.[22]

In chapter 3, we have chosen to present theories regarding different kinds of risk, VaR as a concept, the implications of VaR for asset managers, the distribution of financial returns, and different approaches for estimating VaR. The theories we present reflect the perspective we have selected for this Master thesis.

The object we have chosen to study is, as mentioned before, the Stockholm Stock Exchange. Stocks on the most traded list and the A-list, as well as smaller companies traded on the over-the-counter (OTC) list and the O-list, are included in the study.
2.2 Practical methodology
Practical methodology involves how data is collected, evaluated and analyzed. This data can be of two kinds: primary and secondary data.[23]
2.2.1 Primary data
The primary data used in this Master thesis consists exclusively of the processed historical time series of stock data used to compute the different VaR measures. Put in context, this data gives information on how applicable different VaR methods are for equities listed on the Stockholm Stock Exchange. The data was collected from the Bloomberg database and then processed into data files that could be handled in Excel. A more thorough description of how this was done is found in chapter 4.
[22] Halvorsen, K., Samhällsvetenskaplig metod, (1992), p. 44.
[23] Wiedersheim-Paul F. & Eriksson L., Att utreda forska och rapportera, (1991), p. 76.
2.2.2 Secondary data
Articles in economic journals, articles published on the Internet, and financial as well as statistical textbooks have been used as a base for this Master thesis. To find the articles we have searched the databases EconLit and AffärsData. The Internet in particular proved to be a useful source, as many researchers publish their articles there.
2.2.3 Criticism of the sources
Primary data – The criticism of the primary sources is discussed in chapter 4, section 4.9.

Secondary data – To evaluate the secondary sources, three criteria can be used.[24]

The first criterion is to analyze how current the sources are. Since VaR is a relatively new topic, it is all the more important that the sources are up to date. In our case, we believe that this criterion is met because most of the articles and other sources we have used were published in the mid or late 1990s.

The second criterion is to evaluate whether the authors of the sources have any interests of their own in the subject they are writing about. We have mainly used scientific research articles and hence believe that we meet this criterion. However, some of our sources are published by the RiskMetrics group, and since this is a profit-making company it is possible that they try to promote their own way of viewing VaR models. Therefore, these sources have to be interpreted carefully.

The third criterion is to investigate whether the sources have any relation to one another. Some of our articles are written by the same person or organization. In addition, the sources reference each other and hence are not totally independent. However, to minimize this problem we have used numerous different sources and authors that are well known and accepted.

A large part of the secondary data is collected from American studies, which is natural since VaR is a frequently used method in the US. This can make these studies difficult to translate to Swedish circumstances. However, there are no proper alternatives, since VaR is a relatively new phenomenon and the American financial markets are normally at the forefront regarding new issues.
[24] Wiedersheim-Paul F. & Eriksson L., Att utreda forska och rapportera, (1991), p. 82.
2.3 Validity
In an evaluation of the sources, other terms that should be discussed are validity and reliability. Validity can be defined as a measurement tool’s ability to measure what it is intended to measure.[25] In our case it means that we have to consider whether the theory we have used to build models, and the data we have collected for the study, actually let us reach the purpose of this Master thesis.

VaR research is developing rapidly and hence there are no absolute truths about which theories and models are the correct ones. However, we have tried to use the latest findings in the area to incorporate what is known about VaR so far. In addition, the theories and models our study is based on are to a large extent well known and well accepted in finance.
2.4 Reliability
The term reliability means that the measurement tools should give trustworthy and stable results.[26] The methodology we have used should be usable by others, and the same results should be reached. We believe that the data we have used is reliable, and the fact that it had to be somewhat processed, e.g. filling in missing values on days when no shares changed hands and correcting for dividends, does not change this opinion. Another thing that could be questioned is the randomness of the companies chosen for the mixed and small-cap portfolios. We have used Excel to construct the portfolios and hence no bias should enter these portfolios.
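The kind of pre-processing described above can be sketched as follows. This is a hypothetical illustration in Python rather than Excel; the column names, sample dates and the dividend-adjustment convention are assumptions for the sketch, not the authors' actual procedure:

```python
import pandas as pd

# Hypothetical raw daily data: the price is missing on a day with no
# trades, and a dividend is recorded on its ex-dividend date.
raw = pd.DataFrame(
    {"price": [100.0, None, 102.0, 101.0], "dividend": [0.0, 0.0, 2.0, 0.0]},
    index=pd.date_range("1994-01-03", periods=4, freq="B"),
)

# Fill days with no trades by carrying the last known price forward
price = raw["price"].ffill()

# A simple total-return series: add the dividend back on its ex-date
# so the return is not understated on that day
returns = (price + raw["dividend"]) / price.shift(1) - 1
print(returns.dropna())
```

The forward-filled day produces a zero return, and the ex-dividend day's return includes the payout, which mirrors the two corrections mentioned in the text.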
2.5 Empirical study
We have put together three different stock portfolios, each containing ten stocks. The VaR for these portfolios is estimated using three different approaches: historical simulation, equally weighted moving average, and exponentially weighted moving average. In addition, we have used different window lengths, i.e. numbers of observations, to calculate the volatility. One approach in combination with a specific window is referred to as a method.
The portfolio outcomes are evaluated from 1995-01-01 to 1999-12-31. However, to estimate the volatility for 1995, historical data for 1994 has been used. Hence, the total time series of historical stock data ranges from 1994-01-01 to 1999-12-31. To evaluate our findings, nine performance criteria are used.[27] A detailed description of how the study was performed and evaluated can be found in chapter 4.

[25] Wiedersheim-Paul F. & Eriksson L., Att utreda forska och rapportera, (1991), p. 28.
[26] Ibid, p. 29.
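As a sketch of how one such method works, consider a rolling historical-simulation VaR with a 250-day window. The window length and confidence level are illustrative choices, and the simulated returns stand in for the real portfolio data; this is not a description of the exact calculation in chapter 4:

```python
import numpy as np

def rolling_hs_var(returns, window=250, confidence=0.95):
    """For each day, the VaR forecast is the empirical loss quantile
    of the previous `window` daily returns (historical simulation)."""
    forecasts = []
    for t in range(window, len(returns)):
        past = returns[t - window:t]
        forecasts.append(-np.percentile(past, 100 * (1 - confidence)))
    return np.array(forecasts)

# Illustrative daily returns (not the thesis data)
rng = np.random.default_rng(1)
rets = rng.normal(0, 0.012, size=500)

var = rolling_hs_var(rets)  # one forecast per day after the first window
# An "exception" is a day whose loss exceeds the VaR forecast
exceedance = np.mean(rets[250:] < -var)
print(f"fraction of losses exceeding VaR: {exceedance:.3f} (ideally about 0.05)")
```

Counting how often realized losses exceed the forecast is the same idea as the "fraction of outcomes covered" criterion evaluated later in the study.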
2.6 Criticism of chosen methodology
Using numbers and statistical methods gives a chance to obtain precise information, but also distorted and therefore inappropriate information.[28] To be able to use the findings in the right way, the data of course has to be collected and treated in an appropriate manner, since mistakes can lead to misinterpretations that affect the whole study.
2.7 Alternative methodology
To reach the purpose of this Master thesis, we believe there is no proper alternative to a quantitative study. A qualitative study would never give us the results we are looking for. However, it is possible that other approaches should have been included in calculating the VaR. We have used the approaches that are most established in financial theory today and that have been used in studies with similar purposes. Regarding the discussion of whether VaR is a useful measure for an asset manager, an opinion poll could perhaps be of interest to get a direct viewpoint on the usefulness of VaR in practice. Instead we chose to focus on whether VaR calculations are trustworthy in the first place, rather than examine the interest from the market before we know if VaR per se is a useful tool.
[27] These are found in Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996).
[28] Wiedersheim-Paul F. & Eriksson L., Att utreda forska och rapportera, (1991), p. 69.
Chapter 3 – VaR Theory
In this chapter the theories on which the study is built are outlined. First, a broad view of different types of risk is given. Next, the chapter focuses on market risk, in particular the term VaR is presented, and the implications of VaR for asset managers are described. Further, the distribution of financial returns is discussed, since this is a fundamental point for VaR. Finally, different approaches for measuring VaR are presented in detail.
3.1 Risks
Risk per se can be defined as the volatility of unexpected outcomes, generally for the values of assets and liabilities.[29] Specialized studies of risk are a rather new phenomenon, but recent large losses such as Orange County, Barings and Metallgesellschaft have motivated a rapid development of specialized techniques for risk management.[30] This Master thesis specializes in a financial risk called market risk, which involves the uncertainty of earnings resulting from changes in market conditions such as asset prices, interest rates, volatility, and market liquidity.[31] However, market risk is just one form of risk to which participants in the financial markets are subject. The major types of risk can briefly be defined as follows:[32,33]
• Business risk – firm- or industry-specific risk, e.g. technological innovations, product design, and marketing. Firms mostly specialize in this type of risk.
• Strategic risk – risk resulting from fundamental shifts in the economy or the political environment, e.g. the rapid disappearance of the threat of the Soviet Union, which led to a worldwide gradual build-down of defence spending, directly affecting the defence industries.
• Financial risk – relates to possible losses in financial markets arising from, for example, movements in interest rates and exchange rates. The volatility of financial variables is the single most important reason for the development of a risk management industry. Financial risk can be divided into the following five types of risk.
• Market risk – arises from changes in the prices of financial assets and liabilities and can be defined as the risk of losses due to adverse market conditions. Market risk includes basis risk and gamma risk.[34] Furthermore, market risk can be absolute, the loss measured in dollar terms, or relative, the loss relative to a benchmark index.

[29] Jorion, P., Value at Risk, (1997), p. 3.
[30] Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 1.
[31] JP Morgan/RiskMetrics group, Introduction to RiskMetrics, (1995), p. 2.
[32] Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 12.
[33] Jorion, P., Value at Risk, (1997), p. 318.
• Credit risk – is defined as the risk of a loss due to the inability of a counterparty to meet its obligations. Credit risk can also lead to losses when debtors are downgraded by credit agencies, usually leading to a fall in the market value of their obligations. Further, credit risk also includes sovereign risk and settlement risk. The former arises, for instance, if a country imposes a foreign-exchange control system, which limits counterparties' ability to meet their obligations. The latter refers to the risk that a counterparty cannot fulfil its obligations after the other party has already made payment.
• Liquidity risk – can take two forms: market/product liquidity and cash
flow/funding. The former type of risk arises when a transaction cannot be
conducted at prevailing market prices due to insufficient market activity and
poor depth and resiliency in the market. The latter type of risk is associated
with the inability of a firm to fund illiquid assets or to meet cash flow
obligations, which may force early liquidation.
• Operational risk – the risk from the failure of internal systems such as
management failure, fraud, and errors made in instructing payments or
settling transactions.
• Legal risk – risk of changes in regulations or when a counterparty does not
have the legal or regulatory authority to engage in a transaction.
The first proposal on market risk was constructed in 1993 and called the "Basle proposal on market risk". It was a building-block approach and a start by the authorities to set up rules and regulate market risks. The proposal was extended in April 1995 to become the "1995 Basle proposal on market risk".[35] This proposal reflects the authorities' willingness to prevent systemic risk, and it contains a recommendation on how to calculate VaR, which states that:[36,37]
• VaR should be calculated on a daily basis.
• VaR should be based on a 10 trading day holding period.
• A 99% confidence interval should be used, i.e. the chance of experiencing a
larger loss than the VaR should be 1 in 100 or less.
• A historical observation period of at least one year (250 days), with at least
quarterly updates, should be incorporated.
[34] Basis risk = the risk of a change or breakdown of a relationship between products used to hedge each other. Gamma risk = the risk due to non-linear relationships.
[35] Styblo Beder, T., "VaR: Seductive but Dangerous", (1995), p. 17.
[36] Maymin, Z., "VaR variations: is multiplication factor still too high?", (1998), p. 1.
[37] Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 7.
3.1.1 Value at Risk (VaR)
The primary tool for measuring market risk is VaR, which assesses risk through standard statistical techniques. Philippe Jorion defines VaR as a measure of the worst expected loss over a given time interval under normal market conditions at a given confidence level.[38] Formally, VaR is defined as:

α = ∫_{-∞}^{VaR} f(x) dx   or   Pr[x ≤ VaR] = α   (1)

where x stands for the change in the market value of a given portfolio over a given time horizon. Either of the equations states that a loss equal to, or larger than, the specific VaR occurs with probability α.[39]
The inputs used to calculate VaR for a certain asset are the volatility, the time horizon and a choice of confidence level. The volatility is estimated implicitly from option pricing or through statistical models; in practice, past observations are often used to estimate the future volatility. The time period chosen affects the measured volatility and therefore also the VaR: a longer time period gives a higher volatility measure and hence a higher VaR. The chosen confidence level states how often the loss on the specific asset will be greater than the VaR. The most commonly used confidence levels are 95% and 99%.[40] The formula to calculate VaR for one asset is:

VaR = CI * V * MV   (2)

where CI is the confidence-interval multiplier (e.g. 1.65 at the one-sided 95% level for a normal distribution), V is the volatility and MV is the market value.[41] The formula above gives an absolute amount in a certain currency, stating the maximum loss for one asset at a given confidence level.
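Formula (2) can be sketched in a few lines of Python; the 2% daily volatility and the SEK 1,000,000 position below are illustrative assumptions, not figures from the thesis:

```python
from statistics import NormalDist

def single_asset_var(volatility, market_value, confidence=0.95):
    """Formula (2): VaR = CI * V * MV, where CI is taken here as the
    one-sided normal quantile for the chosen confidence level."""
    ci = NormalDist().inv_cdf(confidence)  # about 1.645 at 95%
    return ci * volatility * market_value

# Illustrative: 2% daily volatility on a SEK 1,000,000 position at 95%.
var_95 = single_asset_var(0.02, 1_000_000, 0.95)  # about SEK 32,900
```

Raising the confidence level or the volatility scales the figure up proportionally through the multiplier CI.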
Further, for a portfolio of multiple assets the correlations between the portfolio assets have to be taken into consideration. For two assets, the portfolio VaR is calculated with formula (3) below:

VaR_port = sqrt( VaR_1² + VaR_2² + 2 * VaR_1 * VaR_2 * correlation_{1,2} )   (3)

where VaR_1 is the VaR for the first asset and VaR_2 is the VaR for the second asset.

[38] Jorion, P., Value at Risk, (1997), p. xiii.
[39] Danielsson, J. & de Vries, C. G., "Value-at-Risk and Extreme Returns", (1997), p. 9.
[40] Söderlind, L., Att mäta ränterisker, (1996), p. 70-75.
[41] Ibid, p. 77.
To calculate VaR for a portfolio of more than two assets a row vector (4), its transpose, a column vector (6), and a correlation matrix (5) are used in formula (7). This can be illustrated with the formulas below:

R = [ VaR_1  VaR_2  VaR_3 ]   (4)

    [    1         Corr_{1,2}   Corr_{1,3} ]
Ω = [ Corr_{2,1}      1         Corr_{2,3} ]   (5)
    [ Corr_{3,1}   Corr_{3,2}      1       ]

     [ VaR_1 ]
R' = [ VaR_2 ]   (6)
     [ VaR_3 ]

The formulas above help us to calculate the VaR of the portfolio as:

VaR_port = sqrt( R * Ω * R' )   (7)

To calculate the VaR for a portfolio of more than three assets these are simply added to the vectors and matrices.[42]
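Formula (7) translates directly into a small pure-Python sketch; the three VaR figures and the 0.5 correlations below are made-up illustrations:

```python
import math

def portfolio_var(asset_vars, corr):
    """Formula (7): VaR_port = sqrt(R * Omega * R'), with R the row vector
    of individual asset VaRs and Omega the correlation matrix."""
    n = len(asset_vars)
    quad = sum(asset_vars[i] * corr[i][j] * asset_vars[j]
               for i in range(n) for j in range(n))
    return math.sqrt(quad)

# Illustrative: three assets, pairwise correlations of 0.5.
corr = [[1.0, 0.5, 0.5],
        [0.5, 1.0, 0.5],
        [0.5, 0.5, 1.0]]
var_port = portfolio_var([100.0, 150.0, 120.0], corr)
```

With all correlations equal to one the portfolio VaR collapses to the plain sum of the individual VaRs; any correlation below one yields diversification and a smaller figure.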
3.2 The implications of VaR for Asset Managers
In line with the perspective of this Master thesis, i.e. whether the VaR concept is a useful tool for forecasting and measuring portfolio risk for asset managers managing Sweden-based equity portfolios, we have used Culp's, Mensink's and Neves' article "VaR for Asset Managers" as a guideline.
VaR is not nearly as well accepted in the institutional investment area as it is elsewhere. The main reason is that asset managers are typically in the business of taking risks, which is directly linked to the aim of creating excess returns. Therefore asset managers often view risk management in general, and VaR in particular, as inherently at odds with their primary business mandate.[43] The increased volatility in financial markets and the vast number of large losses during the 1990s have further increased the interest in understanding risk.[44]

[42] Söderlind, L., Att mäta ränterisker, (1996), p. 81.
[43] Culp, L. C., Mensink, R. & Neves, M. P. A., "VaR for Asset Managers", (1999), p. 1.
VaR will never tell an asset manager how much risk to take; it will only tell how much risk is being taken. VaR can be a useful tool for helping asset managers determine whether the risks they are exposed to are the risks they think they are, and want to be, exposed to. From an investor's perspective VaR is a concept that is easy to understand and thereby a way to monitor the level of risk exposure the asset managers are undertaking. Culp, Mensink and Neves outline four concrete applications of VaR for asset management. These applications involve:

Monitoring[45] – VaR facilitates consistent and regular monitoring of market risk at the aggregate fund level, as well as by asset class and by issuer/counterparty. VaR also facilitates the comparison of risks of one asset manager's holdings with another asset manager's holdings. Since asset managers' portfolio holdings are not transparently available to investors at all times, the VaR reported to investors can help assuage investors' concerns about market risk without necessitating disclosure of portfolio holdings, especially to assure investors that market risk is within the specified risk tolerance level of the investment pool.
Elimination of ex ante transactional approval requirements[46] – VaR can be beneficial for asset managers that wish to eliminate transactional scrutiny by senior management. When the inspection process is removed, asset managers reach a higher level of autonomy.
Risk Targets and Thresholds[47] – this application of VaR involves measurement and monitoring of market risk using a formal system of predefined risk targets and thresholds. A system of risk thresholds surrounds all potential investments with respect to the asset manager's investment policy and risk tolerance. The thresholds act as tripwires defined in terms of the maximum tolerable VaR. The VaR is monitored by regularly comparing actual VaR figures to the predefined targets, and when a tripwire is hit a reallocation of the portfolio is called for.
Risk Limits and Risk Budgets[48] – risk limits, also known as risk budgets, are an extreme version of risk targets and thresholds. In a risk budget, the portfolio's total VaR is calculated and then allocated to asset classes. Asset managers are not allowed to exceed this risk budget as long as the risk dimensions have not been altered.

[44] Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 1.
[45] Culp, L. C., Mensink, R. & Neves, M. P. A., "VaR for Asset Managers", (1999), p. 14-19.
[46] Ibid, p. 19-20.
[47] Ibid, p. 20-21.
[48] Ibid, p. 21-22.
However, the greatest benefit of VaR for an asset manager, according to Philippe Jorion, probably lies in the imposition of a structured methodology for critically thinking about risk. Institutions applying VaR are forced to confront their exposure to financial risk. A well-functioning supervision of VaR should logically also imply less risk of unexpected and uncontrolled losses. Jorion also states that "there is no doubt that VaR is here to stay", but at the same time highlights that the process and methodology of calculating VaR may be as important as the number itself.[49]
3.3 Normal distribution of financial returns
In most theoretical and empirical work on financial returns a normal distribution is assumed, since normally distributed returns simplify all calculations. In addition, the assumption produces tractable results and all moments of positive order exist.[50] Further, the normal distribution is characterised by its mean and variance; knowing these two parameters means knowing the entire distribution. Mathematically this is, for a random variable r_t, given by the density function below:

f(r_t) = (1 / sqrt(2πσ²)) * exp( -(r_t - µ)² / (2σ²) )   (8)

where µ is the mean and σ² is the variance of r_t.[51]
However, these advantages have to be weighed against research showing that the distributions of returns in financial markets exhibit fat tails.[52] Financial returns generally exhibit leptokurtic behaviour, and extreme price movements occur more frequently than implied by the normal distribution.[53] A leptokurtic distribution has a high peak, low sides and fat tails (see further section 3.3.1).[54] This is illustrated in figure 1 below:

[49] Jorion, P., Value at Risk, (1997), p. xv.
[50] Lucas, A. & Klaassen, P., "Extreme Returns, Downside Risk and Optimal Asset Allocation", (1998), p. 71.
[51] Hill, C., Griffiths, W. & Judge, G., Undergraduate Econometrics, (1997), p. 30.
[52] Hendricks, D., "Evaluation of Value-at-Risk Models Using Historical Data", (1996), p. 41.
[53] JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 64.
Figure 1. Normal vs leptokurtic distribution
Since VaR is concerned with unusual outcomes, e.g. the worst five or one percent, fat tails pose a problem. More outcomes than predicted by the normal distribution will fall into the category that exceeds the VaR measures generated under normality, i.e. the normality assumption underestimates the VaR. Lucas and Klaassen show that the normal distribution underestimates VaR by more than 30 percent at the 99% level.[55] However, fat tails do not necessarily lead to a higher VaR at all confidence levels, because two effects work in opposite directions. Firstly, the higher probability of tail events leads to more careful asset allocations. Secondly, the increased precision of the distribution gives higher certainty about the spread of outcomes and therefore leads to a more aggressive strategy. Lucas and Klaassen show that the latter effect dominates at the 95% level, while the first effect dominates at the 99% level. This means that with a leptokurtic function the asset allocation becomes more risky at the 95% level, but more careful at the 99% level.[56]

[54] Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 4.
[55] Lucas, A. & Klaassen, P., "Extreme Returns, Downside Risk and Optimal Asset Allocation", (1998), p. 72.
[56] Ibid, p. 75.
3.3.1 Skewness and Kurtosis
The normal distribution is symmetric, with the mean equal to the median; it is also called mesokurtic. Departure from symmetry or from this degree of peakedness implies a skewed or kurtotic distribution.
Skewness is a measure of the degree of asymmetry of a frequency distribution. Positive skewness, or right-skewed, indicates a distribution with an asymmetric tail extending towards more positive values. Negative skewness, or left-skewed, implies the opposite, i.e. a distribution that stretches asymmetrically to the left.[57] The formula for skewness is:

Sk(x) = [ n / ((n-1)(n-2)) ] * Σ_{i=1}^{n} ( (x_i - x̄) / s )³   (9)

where s is the standard deviation, n is the number of observations, x_i is the observed variable at time i and x̄ is the mean of all observations.[58]
Kurtosis is a measure of the flatness versus peakedness of a frequency distribution. In statistics a flat distribution is called platykurtic and a peaked one leptokurtic. A positive kurtosis indicates a relatively leptokurtic distribution, while a negative kurtosis indicates a relatively platykurtic distribution.[59] The formula to calculate the kurtosis is the following:[60]

Kur(x) = [ n(n+1) / ((n-1)(n-2)(n-3)) ] * Σ_{i=1}^{n} ( (x_i - x̄) / s )⁴   (10)

Data is symmetrical when there are no repeated extreme values in a particular direction, i.e. low and high values balance each other out.[61] If the data is normally or symmetrically distributed, the computed skewness will be close to zero and the kurtosis close to three.[62]
[57] Aczel, A. D., Complete Business Statistics, (1993), p. 19.
[58] Kleinbaum, D. G., Kupper, L. L. & Muller, K. E., Applied Regression Analysis and Other Multivariable Methods, (1988), p. 188.
[59] Aczel, A. D., Complete Business Statistics, (1993), p. 20.
[60] Kleinbaum, D. G., Kupper, L. L. & Muller, K. E., Applied Regression Analysis and Other Multivariable Methods, (1988), p. 188.
[61] Afifi, A. A. & Clark, V., Computer-Aided Multivariate Analysis, (1990), p. 66.
[62] Berenson, M. L. & Levine, D. M., Basic Business Statistics, (1992), p. 73.
3.4 VaR approaches
In the following sections we present different approaches to VaR estimation. Firstly, the three approaches applied in our study are outlined in detail; secondly, other approaches that can be used for VaR estimation are presented more briefly.
3.4.1 The Historical Simulation approach
Historical simulation (HS) is a non-parametric VaR method which rests on the assumption that historical returns are a good guide to future returns, i.e. one uses observed asset returns as a proxy for future returns. Hence, HS rests neither on the assumption of normally distributed returns nor on serial independence, but instead on the empirical distribution of returns. In addition, the distribution of the returns in the portfolio should be constant over the sample period.[63]
A sample length, a window, for the estimation is chosen and for each day t in the sample the portfolio return ΔΠ_t is evaluated with respect to historical prices and portfolio weights w according to:

ΔΠ_t = Σ_{i=1}^{N} w_i * y_{i,t}   (11)

where N is the number of assets in the portfolio and y_{i,t} is the return on asset i at time t. The portfolio returns ΔΠ_t are then sorted in ascending order. A level of confidence with a given probability π is chosen, and Pr(ΔΠ_t < VaR) = π states that a loss equal to, or larger than, the specific VaR occurs with probability π. For example, with 100 observations the 5th lowest observation would be the VaR for a 95% confidence interval.[64,65]
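The procedure just described — weight the returns with (11), sort, and read off the empirical percentile — can be sketched as follows (the index convention matching the "5th lowest of 100" example is our reading):

```python
def portfolio_return(weights, asset_returns):
    """Formula (11): the weighted sum of asset returns for one day."""
    return sum(w * y for w, y in zip(weights, asset_returns))

def historical_var(portfolio_returns, confidence=0.95):
    """Historical simulation: sort the portfolio returns in ascending order
    and read off the (1 - confidence) empirical percentile."""
    ordered = sorted(portfolio_returns)
    k = int(len(ordered) * (1 - confidence))  # e.g. 5 for 100 obs at 95%
    return ordered[k - 1] if k > 0 else ordered[0]  # the k-th lowest value
```

With 100 daily portfolio returns and a 95% confidence level this returns the 5th lowest return, exactly as in the example above.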
There is a trade-off regarding the length of the observation period chosen. The choice of, for example, 125 days is motivated by the desire to capture short-term movements in the underlying risk of the portfolio, while the choice of 1250 days may be driven by the desire to estimate the historical percentiles as accurately as possible.[66] While longer windows increase the accuracy of estimates, they may use irrelevant data and thereby miss important changes in the underlying process.[67]

[63] Danielsson, J. & de Vries, C. G., "Value-at-Risk and Extreme Returns", (1997), p. 10.
[64] Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 18.
[65] Danielsson, J. & de Vries, C. G., "Value-at-Risk and Extreme Returns", (1997), p. 9.
[66] Hendricks, D., "Evaluation of Value-at-Risk Models Using Historical Data", (1996), p. 43.
HS expresses the distribution of portfolio returns as a bar chart or histogram of hypothetical returns. Each hypothetical return is calculated as what would be earned on today's portfolio if a day in the history of market prices were to repeat itself. The VaR can then be read from this histogram.[68]
3.4.1.1 Advantages and disadvantages with the HS approach
The advantages of HS are mainly that it is intuitive, easy to implement and that it does not rely on specific assumptions about valuation models or the underlying stochastic structure of the market. Further, the method is relatively easy to perform in a spreadsheet program, and the same data can be stored and reused for later estimations of VaR. Furthermore, HS forms the basis for the 1995 Basle proposals on market risk.[69,70]

Disadvantages are that past extreme returns can be a poor predictor of extreme events; for example, Danielsson and de Vries show that HS is unable to address losses outside the sample. This drawback is linked to the problem that HS, due to the discreteness of extreme returns, which also makes the VaR discrete, can over- or underestimate observations in the tails and over- or underpredict VaR.[71] The sample size, or window length, is another decisive aspect to consider, where the inclusion or exclusion of only one or two days at the beginning of the sample can cause large swings in the VaR estimate.[72] As mentioned in section 3.1, the Basle Committee proposes a window of at least one year of past returns.[73] Another criticism is that HS assumes that past returns represent the immediate future fairly, but risk contains significant and predictable time variation, which makes the HS approach miss situations with temporarily elevated volatility. Finally, for large portfolios with numerous assets and exposures the historical approach quickly becomes cumbersome.[74,75]
[67] Jorion, P., Value at Risk, (1997), p. 195.
[68] Schachter, B., "Value at Risk Resources – An Irreverent Guide to Value at Risk", (1997), p. 2.
[69] Jorion, P., Value at Risk, (1997), p. 195.
[70] Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999).
[71] Danielsson, J. & de Vries, C. G., "Value-at-Risk and Extreme Returns", (1997), p. 11.
[72] Ibid, p. 11.
[73] Danielsson, J., Hartmann, P. & de Vries, C. G., "The Cost of Conservatism", (1998), p. 2.
[74] Smithson, C., "Class notes of CIBC School of Financial Products", (1998), p. 3.
[75] Jorion, P., Value at Risk, (1997), p. 196.
3.4.2 The Equally Weighted Moving Average Approach
The equally weighted moving average (EqWMA) approach assumes that the distribution of outcomes follows a normal distribution and uses a fixed amount of historical data to calculate the standard deviation. There are different opinions on how wide the data window should be. On the one hand, only very recent data, e.g. 50 observations, should be used to incorporate changes in volatility over time. On the other hand, to estimate potential movements accurately and the variance with precision, a much wider data window should be used.[76] The calculation of the standard deviation is shown below:

σ_t = sqrt( (1/(k-1)) * Σ_{s=t-k}^{t-1} (x_s - µ)² )   (12)

where σ_t is the estimated standard deviation at time t, and k specifies the number of observations included in the moving average. x_s is the change in the value of the asset on day s and µ is the mean change in asset value during the estimation period.[77] With shorter periods the standard deviation becomes more irregular and reacts faster to changes in asset price movements.
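Formula (12) can be sketched as below, taking the k most recent value changes as the moving window:

```python
import math

def eqwma_std(changes, k):
    """Formula (12): equally weighted moving-average standard deviation
    computed over the k most recent value changes."""
    window = changes[-k:]
    mu = sum(window) / k
    return math.sqrt(sum((x - mu) ** 2 for x in window) / (k - 1))
```

Every observation in the window carries the same weight, which is exactly what makes the estimate jump when a large observation drops out of the window.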
Other parameters that have to be set are the confidence interval and the covariances of the asset returns. The most commonly used confidence levels are the 95th and the 99th percentiles, although VaR is calculated with confidence levels from the 90th to the 99.9th percentile.[78]
3.4.2.1 Advantages and disadvantages with the EqWMA approach
An advantage of the EqWMA approach is that it is easy to use, since the normal distribution is characterised solely by its mean and variance. The VaR estimate can be derived directly from the portfolio standard deviation by using a multiplicative factor that depends on the confidence level.[79] In addition, many statistical formulas are based on a normality assumption, which facilitates the analysis of the results. Finally, all moments of positive order exist and the normal distribution can model VaR estimates outside the sample range.[80]

[76] Jorion, P., Value at Risk, (1997), p. 99.
[77] Hendricks, D., "Evaluation of Value-at-Risk Models Using Historical Data", (1996), p. 41.
[78] Ibid, p. 40.
[79] Jorion, P., Value at Risk, (1997), p. 88.
[80] Lucas, A. & Klaassen, P., "Extreme Returns, Downside Risk, and Optimal Asset Allocation", (1998), p. 71.
The most obvious disadvantage of the EqWMA approach, as mentioned above, is that financial returns exhibit fat tails. Therefore, using a normal distribution underestimates the true VaR, which of course is a serious drawback.[81] Another disadvantage is that the quality of the VaR estimate degrades if non-linear instruments, such as options, are included in the portfolio.[82] Moreover, low correlations between assets reduce the portfolio risk, but evidence shows that correlations increase in periods of instability on the financial markets, and therefore the normal distribution may underestimate the true VaR measure.[83]
3.4.3 The Exponentially Weighted Moving Average Approach
In contrast to the EqWMA approach, the exponentially weighted moving average (ExpWMA) approach attaches different weights to past observations in the observation period.[84] The weights decline exponentially, so the most recent observations receive much higher weight than earlier observations. The formula for the standard deviation under the ExpWMA is shown below:

σ_t = sqrt( (1-λ) * Σ_{s=t-k}^{t-1} λ^{t-s-1} * (x_s - µ)² )   (13)

The parameter λ (lambda) determines the rate at which past observations decline in weight as they become more distant.[85] Formula (13) can be rewritten as below, which gives a more intuitive understanding of the calculation:

σ_t² = λ * σ_{t-1}² + (1-λ) * (x_{t-1} - µ)²   (14)

Formula (14) shows that on any given day the variance, calculated as an exponentially weighted moving average, is made up of two components: firstly, the weighted average of the previous day and secondly, yesterday's squared deviation, which is given a weight of (1-λ). This means that a lower value of λ makes the importance of observations decline more rapidly.[86]
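The recursive form (14) is what makes the approach cheap to run day by day; a sketch, with RiskMetrics' λ = 0.94 as default and the mean µ set to zero (a common simplification for daily returns, and our assumption here):

```python
def expwma_variance_update(prev_var, last_change, lam=0.94, mu=0.0):
    """Formula (14): sigma_t^2 = lambda*sigma_{t-1}^2 + (1-lambda)*(x_{t-1}-mu)^2."""
    return lam * prev_var + (1.0 - lam) * (last_change - mu) ** 2

def expwma_variance(changes, lam=0.94, mu=0.0):
    """Run the recursion over a series, seeding with the first squared deviation."""
    var = (changes[0] - mu) ** 2
    for x in changes[1:]:
        var = expwma_variance_update(var, x, lam, mu)
    return var
```

A lower λ shifts weight onto yesterday's squared deviation, so the estimate reacts faster to shocks but becomes noisier.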
[81] Danielsson, J., "Value-at-Risk and Extreme Returns", (1997), p. 14.
[82] Schachter, B., "Value at Risk Resources – An Irreverent Guide to Value at Risk", (1997), p. 2.
[83] Jorion, P., Value at Risk, (1997), p. 178.
[84] Ibid, p. 177.
[85] Hendricks, D., "Evaluation of Value-at-Risk Models Using Historical Data", (1996), p. 42.
[86] Ibid, p. 42.
The parameter λ is often referred to as the "decay factor" and determines how fast the observation weights decline. For the weights to sum to one, an infinite number of observations would be needed, but in practice a limited number of observations can be used since the sum of weights converges to one.[87] By setting a tolerance level, i.e. how close to one the sum has to be, the number of observations needed in the calculation of the standard deviation can be determined.[88] The number of observations required at different tolerance levels can be seen in the table below:
Days of historical data at tolerance level:
Decay factor 0.001% 0.01% 0.1% 1%
0.85 71 57 43 28
0.90 109 87 66 44
0.91 122 98 73 49
0.92 138 110 83 55
0.93 159 127 95 63
0.94 186 149 112 74
0.95 224 180 135 90
0.96 282 226 169 113
0.97 378 302 227 151
0.98 570 456 342 228
0.99 1146 916 687 458
Table 1: The number of historical observations used by the exponentially weighted moving
average approach. Source: JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 94.
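The window lengths in Table 1 follow from requiring the weight of the omitted observations, λ^k, to fall below the tolerance level, which gives k = ln(tolerance)/ln(λ); a sketch (rounding to the nearest day is our assumption, but it reproduces the table):

```python
import math

def days_of_data(decay_factor, tolerance):
    """Solve decay_factor**k = tolerance for k, i.e. the number of
    observations after which the omitted weight drops below tolerance."""
    return round(math.log(tolerance) / math.log(decay_factor))

# e.g. days_of_data(0.94, 0.01) matches the 74 days in Table 1
```

A higher decay factor or a tighter tolerance both push the required window length up sharply, as the table shows.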
The purpose of using the ExpWMA approach is to capture short-term movements in volatility.[89] RiskMetrics uses exponentially weighted moving averages to forecast future volatility in order to make sure that the model is responsive to market shocks and the subsequent decline in the volatility forecast.[90] Further, research shows that financial market volatility exhibits clusters of high and low volatility. The RiskMetrics group examined returns on the S&P 500, among other assets, and found that autocorrelation is not present in the distribution of financial returns. However, although returns are uncorrelated they may not be independent. This can be evaluated by looking at the autocorrelation of squared returns; on this matter the RiskMetrics group found that squared financial returns do exhibit autocorrelation.[91]

[87] Hendricks, D., "Evaluation of Value-at-Risk Models Using Historical Data", (1996), p. 42.
[88] JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 93.
[89] Hendricks, D., "Evaluation of Value-at-Risk Models Using Historical Data", (1996), p. 42.
[90] JPMorgan/RiskMetrics group, Introduction to RiskMetrics, (1995), p. 2.
[91] JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 59.
3.4.3.1 What value of λ should be used?
Using a low decay factor, e.g. 0.94, implies that almost the entire VaR measure is derived from the most recent observations, which makes the VaR measure very volatile over time. On the one hand, relying on the most recent observations is important for capturing short-term movements in volatility. On the other hand, a smaller effective sample size increases the possibility of measurement error.[92]
The decay factor may be used not only to estimate the volatility of a single asset, but also to calculate the covariance matrix, which is shown below:

Σ = [ σ_1²(λ_1)    σ_12(λ_3) ]
    [ σ_21(λ_3)    σ_2²(λ_2) ]   (15)

As can be seen above, the covariance matrix is a function of three decay factors.[93] Although it is possible in theory to estimate all possible decay factors, it is too complex in practice to calculate them all. Therefore, it has become necessary to impose some structure on the decay factors, and the most practical solution is to use one lambda for the entire matrix. Still, different values of the decay factor can be used for different time horizons: RiskMetrics has found 0.94 to be the optimal value for daily returns and 0.97 for monthly returns.[94]
3.4.3.2 Advantages and disadvantages with the ExpWMA approach
The advantages are very much the same as with the EqWMA approach, but the volatility estimate is much more responsive to variations over time. By using an exponentially weighted moving average, the standard deviation reacts to market shocks and then declines gradually in the volatility forecast, whereas a simple moving average does not react fast enough to changes in volatility.[95] In addition, the ExpWMA approach smooths the standard deviation over time; in the EqWMA approach the standard deviation varies more, since the estimate is more affected when an observation falls out of the estimation window.[96]

The disadvantages, in addition to what was mentioned in section 3.4.2.1, are that the computations are somewhat more difficult and that the volatility over time is more unstable than with the EqWMA approach.[97]

[92] Hendricks, D., "Evaluation of Value-at-Risk Models Using Historical Data", (1996), p. 43.
[93] JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 97.
[94] Ibid, p. 97.
[95] JPMorgan, Introduction to RiskMetrics, (1995), p. 4.
[96] Ibid, p. 7.
3.4.4 The New Improved VaR Methodology[98]
This methodology is a product of the RiskMetrics group, developed to overcome the problem that financial returns exhibit fat tails. The new approach, which is still under development, tries to estimate the volatility by assuming that returns are generated from a mixture of two different normal distributions. This is shown in the formula below:

PDF = p_1 * N_1(µ_1, σ_1) + p_2 * N_2(µ_2, σ_2)   (16)

where p_1 + p_2 = 1. PDF stands for the probability density function and p_1 is the probability that the return is generated from the normal distribution N_1, which is characterised by its mean µ_1 and variance σ_1². Similarly, p_2 is the probability that the return is generated from the normal distribution N_2, which has mean µ_2 and variance σ_2². This model makes it possible to assign large returns a higher probability than under the normal distribution. To generate the PDF one can assign p_1 a value close to 1, with µ_1 = 0 and σ_1² = 1. The mean of N_2 is also set to zero, but its variance is assigned a value higher than 1. The mixture of the two normal distributions then has fatter tails than N_1. This can be illustrated graphically as below:
Figure 2. Normal distributions with different variances.
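The fattening of the tails in figure 2 can be checked numerically; the sketch below evaluates the mixture density (16) with illustrative parameters p_1 = 0.95, σ_1 = 1 and σ_2 = 3 (our choices, not the RiskMetrics calibration):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2), as in formula (8)."""
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(x, p1=0.95, sigma1=1.0, sigma2=3.0):
    """Formula (16): PDF = p1*N1(0, sigma1^2) + p2*N2(0, sigma2^2), p1 + p2 = 1."""
    return p1 * normal_pdf(x, 0.0, sigma1) + (1.0 - p1) * normal_pdf(x, 0.0, sigma2)

# Four standard deviations out, the mixture carries far more density:
tail_ratio = mixture_pdf(4.0) / normal_pdf(4.0)  # well above 1
```

Even though the second component has only 5% weight, it dominates the density far out in the tails, which is exactly the fat-tail behaviour the methodology aims to capture.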
[97] JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 80.
[98] JPMorgan/Reuters, RiskMetrics – Monitor, (1996), p. 7-19.
3.4.4.1 Advantages and disadvantages of the improved VaR methodology
The advantage of the improved VaR methodology is that it describes reality more accurately than the traditional VaR models, since it is able to deal with the fat tails of financial returns. This gives a more precise VaR estimate.
The disadvantages are firstly that the calculations become more complex and the
VaR methodology loses its intuitive appeal, since very sophisticated statistical
techniques are used to calculate the VaR with this model. Secondly, this
methodology has not yet been thoroughly tested and hence, it is uncertain how
well it really works.
3.4.5 Monte Carlo Simulation
Monte Carlo simulation (MCS) is mainly a method used by risk specialists and risk analysts to value complex derivatives such as exotics, but nowadays MCS is also a very useful tool for VaR calculation.[99] The MCS method approximates the behaviour of financial prices by using computer simulations to generate random price paths. Before the generation of random prices is started, one has to select the distribution the prices should be generated from, as well as the volatility of prices and the correlation between assets.[100] The first step in the simulation is to choose a stochastic model for the behaviour of prices, and in line with the Black and Scholes option pricing model a geometric Brownian motion can be used.[101,102]
To generate random numbers Jorion proposes either a two-step process or a method called the bootstrap. The former means selecting a uniform distribution over the interval (0, 1) for the random number generator, which produces a random variable. The next step is to transform the uniform random numbers into the desired distribution, for example the normal, which can be done by inverting the cumulative probability distribution function. The bootstrap method, briefly, means that random numbers are generated by sampling from historical data with replacement.[103] When the random numbers have been generated, the VaR is calculated with (11).
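A minimal sketch of the two-step process, with hypothetical drift and volatility parameters: uniform draws are pushed through the inverse normal CDF, and the one-day VaR is read off as a percentile of the simulated returns:

```python
import random
from statistics import NormalDist

random.seed(0)
nd = NormalDist()

# Step 1: uniform random numbers on (0, 1); step 2: invert the normal CDF
# to turn them into standard normal draws (the "two-step process").
mu, sigma, n_paths = 0.0005, 0.02, 10_000   # hypothetical daily drift and volatility
returns = [mu + sigma * nd.inv_cdf(random.random()) for _ in range(n_paths)]

# One-day 95% VaR as the 5th-percentile simulated loss.
returns.sort()
var_95 = -returns[int(0.05 * n_paths)]
print(round(var_95, 4))
```

The bootstrap variant would simply replace the `nd.inv_cdf` line by `random.choice` over a list of historical returns, i.e. sampling with replacement.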
[99] Söderlind, L., Att mäta ränterisker, (1996), p. 102.
[100] Ibid, p. 101.
[101] See for example Hull, J. C., Options, Futures, and Other Derivatives, (1997).
[102] Jorion, P., Value at Risk, (1997), p. 232.
[103] Ibid, p. 236.
The MCS method is similar to the HS method, except that the hypothetical price changes for the assets are created by random draws from a stochastic process in the MCS method, while HS is based directly on actual historical price changes.[104]
3.4.5.1 Advantages and disadvantages with the MCS approach
An advantage with the MCS approach is that it can account for a wide range of risks, including price, volatility and credit risk, and by using different models it can even account for model risk. The MCS method is considered to be the most powerful method to compute VaR.[105,106]

Disadvantages are mainly its complexity and the costly investments in intellectual capital and systems development the method requires. For example, if 1,000 sample paths are generated for a portfolio of 10 assets, the total number of valuations amounts to 10,000. Another disadvantage is that the model relies on a specific stochastic model for the underlying risk factors and on pricing models for the securities, and is therefore sensitive to model risk. If, for instance, the stochastic process chosen for the price is unrealistic, the estimated VaR will also be misleading.

To conclude, the MCS method is likely the most comprehensive approach to measure market risk if the modelling is done correctly, but it is easy to lose the intuitive appeal of VaR by using this complex method.[107]
3.4.6 Semiparametric VaR method[108]
This approach sheds light on the importance of an accurate prediction of the frequency of extreme events in VaR analysis, where the most frequent risks are modelled parametrically and infrequent risks are captured by a nonparametric empirical distribution function. The common VaR methods fall into two main classes: parametric prediction of conditional volatilities, such as the RiskMetrics method, and nonparametric prediction of unconditional volatilities, such as HS or stress testing. The semiparametric method, also called the extreme value theory approach, is a mixture of these two, where the nonparametric HS is combined with a parametric estimation of the tails of the return distribution.[109]

[104] Söderlind, L., Att mäta ränterisker, (1996), p. 100.
[105] Jorion, P., Value at Risk, (1997), p. 231.
[106] Söderlind, L., Att mäta ränterisker, (1996), p. 106.
[107] Jorion, P., Value at Risk, (1997), p. 201.
[108] Danielsson, J. & de Vries, C. G., “Value-at-Risk and Extreme Returns”, (1997), p. 19-25.

Parametric methods, which are based on conditional normality, are not well suited for analysing large risks, since the normality assumption probably leads to an underestimation of the risk of heavy losses. The main purpose of this mixed method is to estimate the tails of the distribution accurately, and thereby overcome the problem that methods based on normality assumptions underpredict infrequent tail events.
The results from a study by Danielsson and de Vries of the extreme value approach show that this method performs better than both RiskMetrics and HS far out in the tails. At the 5th percentile RiskMetrics is superior, but further out in the tails the method consistently underpredicts the tail. For HS the opposite problem holds, i.e. it consistently overpredicts the tails. Furthermore, HS is unable to address losses that fall outside the sample.

This approach is more sensitive to a small sample than, for example, RiskMetrics and HS, and Danielsson and de Vries argue that a sample of one year is not appropriate for the extreme value approach.
3.4.6.1 Advantages and disadvantages with the semiparametric approach
There are several advantages in using the estimated tail distribution for VaR estimation. For example, the extreme value method smooths out the effect of events like the ’87 crash, compared with for example HS. With HS an event like the ’87 crash will cause a too large VaR estimate and hence impose too conservative capital provisions, while the extreme value method will give a better VaR estimate. Furthermore, one can easily obtain the lowest return that occurs with a given probability by sampling from the tail of the distribution, which facilitates sensitivity experiments. This is not possible with HS. In addition, no restrictive assumptions are needed, and the method can be used for large portfolios without endless computation time.

However, like the MCS method, the extreme value method is complex and relatively less intuitive, even if there are many advantages.
[109] For how to estimate the tails, see Danielsson, J. & de Vries, C. G., “Value-at-Risk and Extreme Returns”, (1997) and Danielsson, J. & de Vries, C. G., “Beyond the Sample: Extreme Quantile and Probability Estimation”, (1997).
3.4.7 The Stress testing approach
Stress testing is more or less a scenario analysis, where the effects of different events and movements are assessed for an asset or a portfolio of assets.[110] For an asset manager, an example of a scenario could be what happens to the equity portfolio if the interest rate fluctuates heavily or if a currency suddenly devalues by 30 percent. When the scenarios are selected, the portfolio is revalued according to:

R_{p,s} = Σ_{i=1}^{N} w_{i,t} · R_{i,s}    (17)

where the portfolio return is derived from the hypothetical components R_{i,s} under the new scenario s. Hence, various portfolio returns are generated with consideration to prespecified probabilities p(s) for each scenario. VaR can then be measured from this generated distribution of portfolio returns.[111]
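The revaluation in (17) is only a weighted sum of hypothetical asset returns under each scenario. A sketch with made-up scenario numbers for an equally weighted ten-asset portfolio:

```python
# hypothetical scenario returns R_{i,s} for a 10-asset, equally weighted portfolio
weights = [0.10] * 10
scenario_returns = {
    "rate_shock": [-0.04, -0.03, -0.05, -0.02, -0.04, -0.03, -0.06, -0.02, -0.03, -0.05],
    "devaluation": [0.02, -0.08, 0.01, -0.06, -0.01, 0.03, -0.07, 0.00, -0.02, -0.05],
}

# equation (17): portfolio return under each scenario s
portfolio_returns = {s: sum(w * r for w, r in zip(weights, rets))
                     for s, rets in scenario_returns.items()}
print({s: round(v, 4) for s, v in portfolio_returns.items()})
```

Assigning a probability p(s) to each scenario then turns these portfolio returns into the distribution from which VaR is read off.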
3.4.7.1 Advantages and disadvantages with Stress testing
The advantage of stress testing is mainly that it may cover situations that are completely absent from the historical data. The EMS breakdown, for instance, is an example of an event on which it would have been beneficial to perform a scenario analysis in advance. Hence, stress testing is a way of forcing management to consider events that they might otherwise ignore. Stress testing is also relatively easy to implement and communicate.[112]

One serious drawback of stress testing, compared to the methods mentioned before, is its sensitivity to the choice or creation of scenarios. The method is completely subjective, and an untenable scenario will lead to an incorrect VaR measure. According to Jorion, stress testing does not account for correlation between assets, which is a crucial component of risk diversification. The example mentioned above about the EMS breakdown is a good illustration of this neglect of correlations. Furthermore, stress testing does not specify any trustworthy probabilities for, or likelihood of, worst-case situations actually occurring.[113]
[110] Jorion, P., Value at Risk, (1997), p. 196.
[111] Ibid, p. 197.
[112] Ibid, p. 203.
[113] Ibid, p. 196-199.
To conclude, stress testing should be considered as a complement rather than a stand-alone VaR approach. As a complement, stress testing tries to capture what is going on in the tails, but as stated before the stress values are subjectively defined, without a specified likelihood.[114]
3.5 Multiday VaR prediction
Most financial firms use one-day VaR for internal risk management, but regulators require that VaR is also calculated for longer time periods. This can be done in two ways: one can look at past t-day returns, or extrapolate the one-day VaR to t days.[115] RiskMetrics uses the square-root-of-t rule, where the one-day VaR is multiplied by the square root of t to obtain the VaR for t days.[116] However, this may overestimate the multiday VaR, since the multiday extreme outcomes are smaller for fat-tailed distributed returns than for normally distributed returns. Danielsson found that a scaling factor of around 1.7 should be used for a ten-day VaR estimate, which is significantly less than the square root of ten (√10 ≈ 3.2).[117]
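The two scalings can be compared directly; the 2% one-day VaR below is a purely hypothetical figure:

```python
import math

one_day_var = 0.02                          # hypothetical one-day VaR (2% of portfolio value)
t = 10
var_sqrt_rule = one_day_var * math.sqrt(t)  # RiskMetrics square-root-of-t rule
var_fat_tails = one_day_var * 1.7           # Danielsson's empirical scaling factor
print(round(var_sqrt_rule, 4), round(var_fat_tails, 4))
```

For fat-tailed returns the square-root rule gives a ten-day VaR almost twice as large as the empirically motivated factor of 1.7.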
[114] Longin, F. M., “Stress Testing: A Method based on Extreme Value Theory”, (1999), p. 3.
[115] Danielsson, J., “Value-at-Risk and Extreme Returns”, (1997), p. 22.
[116] JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 84.
[117] Danielsson, J., “Value-at-Risk and Extreme Returns”, (1997), p. 8.
Chapter 4 – Statistical Methodology
4.1 Working process
The very first step in the statistical process was to identify which small- and mid-cap stocks were listed at the beginning of the test period, i.e. 1994-01-01, and still listed at the end of the test period, i.e. 1999-12-30. From this sample, stocks with extremely poor volume, which implies a low frequency of pricing and poor data, were excluded. As a rule of thumb, stocks changing hands on less than two thirds of the total sample of trading days were excluded. The final sample was 122 stocks, which were divided into three groups with respect to market cap criteria; see appendix 2. To set the market cap limits, the Carnegie Small-Cap and Mid-Cap index ranges were used. At the end of the test period the Carnegie Small-Cap range was 0-5.2 billion SEK and the Mid-Cap range 5.2-18.1 billion SEK.[118] The small- and mid-cap index values were used to adjust these ranges backwards over the test period. After the stocks were divided into three groups, each stock was assigned a number, to be used in the random number generation later on.
Three different kinds of equity portfolios, each containing ten stocks, were constructed. The first portfolio is an OMX portfolio, i.e. a portfolio consisting of the ten largest stocks listed on the Stockholm stock exchange over the test period.[119] Every six months during the test period the portfolio is reallocated so as to maintain the criterion of consisting of the ten largest stocks.[120] The reallocation dates are the 1st of January and the 1st of July between 1995 and 1999. The second portfolio is a small-cap portfolio, consisting of ten stocks with market cap in accordance with the Carnegie Small-Cap index range. The stocks in the portfolio are randomly selected from the total sample of small-cap stocks. Every six months new small-cap stocks are randomly selected, so that the reallocation of the portfolio is performed in a statistically correct way. The third portfolio is a mixed portfolio consisting of the five largest OMX stocks, two randomly selected small-cap stocks, and three randomly selected mid-cap stocks. The procedure is identical to that for the first two portfolios. In the portfolios every stock gets a weight of ten percent on every day in the sample, i.e. the portfolios are equally weighted.

The first two portfolios, the OMX and the small-cap portfolio, are motivated by the purpose of examining the applicability of the VaR methods with respect to differences in equity market cap. The mixed portfolio is constructed since it gives a more realistic reflection of a portfolio held by an asset manager. The portfolios can be viewed in appendix 3.
[118] Segerström, T., Carnegie Asset Management, (2000-04-11).
[119] The size is measured as market capitalisation, i.e. the share price multiplied by the total number of outstanding shares.
[120] OM Gruppen, Shares in OMX, (1995-1999).
Last-price-paid data was collected from the Bloomberg database. Especially for the small-cap stocks the time series were not totally complete; for example, not all stocks were traded every day in the test period. Hence, the data was corrected so that the time series are consistent over the sample, i.e. if a stock was not traded on a specific date, the price at which the stock last changed hands was used. From the last-price-paid time series the formulas below are applied to obtain the daily percentage returns for each stock in all portfolios.
R_t = (P_t − P_{t−1}) / P_{t−1}    (18)

where R_t is the percentage return at time t and P_{t−1} is the price at time t−1.[121] Equation (19) is used for dividend replacement on the ex-dividend dates:[122]

R_t = ((P_t + D_t) − P_{t−1}) / P_{t−1}    (19)
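The two return formulas can be combined in one expression, since (19) reduces to (18) whenever D_t = 0. A sketch with a made-up price series and one dividend ex-date:

```python
# hypothetical last-price-paid series with one dividend ex-date
prices = [100.0, 102.0, 99.0, 101.0]
dividends = [0.0, 0.0, 2.0, 0.0]   # 2 SEK dividend with ex-date at t = 2

returns = [
    # equation (19); reduces to equation (18) when D_t = 0
    (prices[t] + dividends[t] - prices[t - 1]) / prices[t - 1]
    for t in range(1, len(prices))
]
print([round(r, 4) for r in returns])
```

Without the dividend term the return at t = 2 would appear as a 2.9 percent loss rather than the roughly 1 percent loss the shareholder actually experienced.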
4.2 VaR methods used in the study
All in all, seven different methods have been used in the study. These are divided into three subgroups: the Historical Simulation approach (HS), the Equally Weighted Moving Average approach (EqWMA) and the Exponentially Weighted Moving Average approach (ExpWMA). These three approaches have been chosen since they are the methods most widely used in empirical finance today.[123] No semiparametric approach is performed, since none of these methods is fully developed and accepted yet, and also because they require an estimation window longer than one year. The Monte Carlo simulation is well accepted, but too complex, demanding and costly to perform. Stress testing is easy to perform, but scenario analysis is not included in the purpose of this Master thesis. The most commonly used window sizes for the HS approach are six months, one year, two years and five years, i.e. window sizes of 125, 250, 500 and 1,250 observations.[124] In our study, windows of 125 and 250 trading days have been used, since the data quality for small-cap stocks was very poor, with many
[121] JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 46.
[122] Ross, S. A., Westerfield, R. W. & Jaffe, J., Corporate Finance, (1996), p. 224.
[123] See for instance Danielsson, J. & de Vries, C. G., “Value-at-Risk and Extreme Returns”, (1997) or Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996).
[124] See for instance Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996).
shares not changing hands frequently in the beginning of the 90s; in addition, few small-cap shares were listed. For the EqWMA approach the most commonly used window sizes are the same as for the HS approach, with the only difference that 50 days is also often used.[125] Therefore, we have chosen window sizes of 50, 125 and 250 days for this approach. For the ExpWMA the most widely accepted decay factors are 0.94 and 0.97, and these are the decay factors applied in this study.[126] For all of the approaches mentioned, the calculations are performed at confidence levels of both 95% and 99%. These levels are by far the most widely used in statistical testing.
In summary, the following 14 combinations of methods and confidence levels are
performed in the study:
§ HS with a window size of 250 trading days at both the 95% and 99%
confidence interval (CI).
§ HS with a window size of 125 trading days at 95% and 99% CI.
§ EqWMA with a window size of 50 trading days at 95% and 99% CI.
§ EqWMA with a window size of 125 trading days at 95% and 99% CI.
§ EqWMA with a window size of 250 trading days at 95% and 99% CI.
§ ExpWMA with a decay factor of 0.94 at 95% and 99% CI.
§ ExpWMA with a decay factor of 0.97 at 95% and 99% CI.
4.3 Calculation procedure for the HS approach
First, the portfolio return is calculated for each day according to formula (11) in chapter 3. As stated previously, windows of 250 and 125 days are used for both the 95% and the 99% confidence interval. Thus, the 1st percentile, for the 99% confidence interval, and the 5th percentile, for the 95% confidence interval, are calculated using windows of the latest 125 or 250 trading days. For example, to calculate the 95% confidence interval using the 250-day window at time t, the 5th percentile of the window ranging from t−250 to t−1 is calculated. To calculate the confidence interval for the next day, at time t+1, the window from t−249 to t is used. The resulting daily VaR measures are then compared to the actual outcomes of the portfolio, and evaluated using the performance evaluation criteria presented in 4.6.
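A sketch of this rolling-window procedure on simulated returns (in the study, the actual portfolio return series from formula (11) would take the place of the random series):

```python
import random

random.seed(2)
# hypothetical daily portfolio returns standing in for the series from formula (11)
returns = [random.gauss(0, 0.015) for _ in range(400)]

def hs_var(series, t, window=250, percentile=0.05):
    # VaR at time t: the 5th percentile of the window t-window .. t-1, sign-flipped
    ordered = sorted(series[t - window:t])
    return -ordered[int(percentile * window)]

var_series = [hs_var(returns, t) for t in range(250, len(returns))]
violations = sum(1 for i, t in enumerate(range(250, len(returns)))
                 if returns[t] < -var_series[i])
print(len(var_series), violations)
```

Each day the window slides forward by one observation, so the VaR estimate adapts slowly as old returns drop out of the sample.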
[125] See for instance Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996).
[126] For instance, RiskMetrics uses 0.94 for daily observations and 0.97 for monthly observations.
4.4 Calculation procedure for the EqWMA approach
The EqWMA approach is tested over 50, 125 and 250 days, at both the 95% and the 99% confidence interval, as stated in 4.2. For each day the variance of each portfolio is calculated. In the case of a portfolio with ten assets the variance is obtained using the following formula:
σ_p² = Var(r_p) = Σ_{i=1}^{N} γ_i² σ_i² + 2 Σ_{i=1}^{N} Σ_{j=i+1}^{N} γ_i γ_j σ_{i,j}    (20)

where γ_i is the weight of asset i, σ_i² its variance, and σ_{i,j} the covariance between assets i and j.[127] The square root of formula (20) is taken to obtain the daily standard deviation of the portfolio. We have used formula (12) to calculate the standard deviation for each asset in the portfolio for each day, and the covariance according to formula (21):[128]
σ_{1,2} = cov(r_1, r_2) = Σ_{t=1}^{N} (r_{1,t} − r̄_1)(r_{2,t} − r̄_2) / N    (21)

where

r̄_i = (1/N) Σ_{t=1}^{N} r_{i,t}    (22)
The daily VaR measures are finally obtained by multiplying the portfolio standard deviation by 1.645 for the 95% confidence level and by 2.327 for the 99% level.[129]
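The variance aggregation in (20)-(22) can be sketched as below; three hypothetical assets are used instead of ten to keep the example small, and all returns are simulated:

```python
import math
import random

random.seed(3)
N, window = 3, 50            # three hypothetical assets to keep the example small
rets = [[random.gauss(0, 0.01) for _ in range(window)] for _ in range(N)]
w = [1.0 / N] * N            # equally weighted, as in the study's portfolios

means = [sum(r) / window for r in rets]

def cov(a, b):
    # equation (21): equally weighted covariance over the window (a == b gives the variance)
    return sum((rets[a][t] - means[a]) * (rets[b][t] - means[b])
               for t in range(window)) / window

# equation (20): the full double sum over i and j equals the variance terms
# plus twice the covariance terms for j > i
var_p = sum(w[i] * w[j] * cov(i, j) for i in range(N) for j in range(N))
daily_var_95 = 1.645 * math.sqrt(var_p)
print(round(daily_var_95, 4))
```

Computing the variance of the weighted portfolio return series directly gives the same number, which is a useful consistency check on the covariance bookkeeping.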
4.5 Calculation procedure for the ExpWMA approach
The decay factors 0.94 and 0.97 were chosen, and according to table 1 in chapter 3, 74 and 151 observations respectively are used at the 1% tolerance level. Thus, two time series are created, ranging from 1 to 74 and from 1 to 151. When these lambda weights are obtained, we use formula (13), presented in chapter 3, to calculate the standard deviation. The result is then multiplied by the left-hand tail quantile of the normal distribution at the 95% level, which
[127] Benninga, S., Financial Modeling, (1998), p. 74.
[128] Gustavsson, M. & Svernlöv, M., Ekonomi & Kalkyler, (1994), p. 706.
[129] Körner, S., Tabeller och formler för statistiska beräkningar, (1986), p. 14.
is −1.645, and at the 99% level, which is −2.327. The resulting daily VaR measures are then evaluated against the performance criteria, as for the other two approaches.
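A sketch of the exponentially weighted volatility, assuming a short hypothetical return series in place of the full 74- or 151-observation window; the weights λ^k are normalized so that the truncated window still sums to one:

```python
import math

lam = 0.94                     # RiskMetrics decay factor for daily data
# hypothetical recent daily returns, most recent last
returns = [0.012, -0.008, 0.004, -0.015, 0.009, -0.003, 0.007, -0.011]

# exponentially weighted variance: weight (1 - lam) * lam**k on the k-th most
# recent squared return, renormalized because the window is truncated
weights = [(1 - lam) * lam ** k for k in range(len(returns))]
var_ewma = sum(wt * r ** 2 for wt, r in zip(weights, reversed(returns))) / sum(weights)
var_95 = 1.645 * math.sqrt(var_ewma)
print(round(var_95, 4))
```

Because the newest observation carries the largest weight, the estimate reacts much faster to volatility clusters than the equally weighted scheme.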
4.6 Performance evaluation criteria
Darryll Hendricks uses nine performance criteria to evaluate the quality and performance of the VaR approaches EqWMA, ExpWMA and HS.[130] We have chosen to use all of these criteria to evaluate our VaR examination. Every criterion is calculated for each VaR method, portfolio and confidence level, i.e. 42 calculations are performed for each criterion (7 VaR methods × 3 portfolios × 2 confidence levels). The criteria are, in order:
4.6.1 Mean Relative Bias[131]
This criterion estimates whether each VaR method produces risk measures of similar average size. The VaR measures are here compared to each other, not to the actual portfolio outcomes. The mean relative bias is measured in percentage terms, where for example a value of 0.15 implies that a given VaR method on average is 15 percent larger than the average of all methods for the same portfolio and confidence level.
4.6.1.1 Calculation Procedure
For each VaR method, portfolio and confidence level, an average of the VaR measures is calculated over all observations in the sample. An average is then taken over these averages, to obtain a “total average” over all methods, separately for each portfolio and confidence level. We then divide the individual average by the total average and subtract one, to get the percentage difference. For example, an average is calculated for the HS method with a 250-day window on the OMX portfolio at the 95% confidence level, over the whole sample period 1995-1999. This average is then divided by the average of all seven methods for the OMX portfolio at the 95% confidence interval, and one is subtracted to obtain the mean relative bias.
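The procedure can be sketched with made-up average VaR figures for the seven methods on one portfolio and confidence level:

```python
# hypothetical average VaR per method for one portfolio and confidence level
avg_var = {"HS-250d": 0.0230, "HS-125d": 0.0215, "EqWMA-50d": 0.0195,
           "EqWMA-125d": 0.0205, "EqWMA-250d": 0.0210, "ExpWMA-0.94": 0.0185,
           "ExpWMA-0.97": 0.0190}

# "total average" over all methods, then bias of each method against it
total_avg = sum(avg_var.values()) / len(avg_var)
mean_relative_bias = {m: v / total_avg - 1 for m, v in avg_var.items()}
print({m: round(b, 3) for m, b in mean_relative_bias.items()})
```

By construction the biases sum to zero across the seven methods, so a positive value for one method must be offset by negative values elsewhere.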
[130] Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 46.
[131] Ibid, p. 46.
4.6.2 Root Mean Squared Relative Bias[132]
This criterion assesses to what extent the VaR measures tend to vary around the average VaR measure for a given date. The measure can be compared to a standard deviation of the mean relative bias.
4.6.2.1 Calculation procedure
A daily relative bias is calculated by taking the daily VaR figure, dividing it by the average over all methods using the same confidence level and portfolio for that day, and subtracting one. This gives a relative bias figure for each day, which is squared, and an average of the squared figures is calculated over the entire sample. The root mean squared relative bias is then simply the square root of this squared mean.
4.6.3 Annualized Percentage Volatility[133]
This criterion evaluates the tendency of the VaR measures to fluctuate over time. An annualized percentage volatility is calculated for each portfolio and VaR method at both confidence levels.
4.6.3.1 Calculation procedure
A new series of numbers is calculated for each day in the sample period according to:

%ΔVaR_t = (VaR_t − VaR_{t−1}) / VaR_{t−1}    (23)

Hence, a new time series, showing the percentage change in the VaR measures, is created for each method, portfolio and confidence level. For each of these time series the standard deviation is calculated. To obtain the annualized percentage volatility we multiply the standard deviation by the square root of 250.[134]
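A small sketch of this criterion, assuming a short hypothetical daily VaR series:

```python
import math
import statistics

# hypothetical daily VaR series for one method, portfolio and confidence level
var_series = [0.020, 0.021, 0.019, 0.022, 0.020, 0.023, 0.021]

# equation (23): daily percentage changes in the VaR measure
pct_change = [(var_series[t] - var_series[t - 1]) / var_series[t - 1]
              for t in range(1, len(var_series))]

# annualize the standard deviation with the square root of 250 trading days
annualized_vol = statistics.stdev(pct_change) * math.sqrt(250)
print(round(annualized_vol, 3))
```

A method whose VaR jumps around from day to day, such as a short-window estimator, will score a high annualized percentage volatility even if its average level is reasonable.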
[132] Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 47.
[133] Ibid, p. 48.
[134] In financial theory, 250 days is considered to be the number of trading days in a year.
4.6.4 Fraction of Outcomes Covered[135]
This criterion is a more fundamental test, which examines whether the VaR methods cover the portfolio outcomes that they are intended to capture. For example, to achieve the desired level the coverage should be 95 percent for the methods using a 95% confidence interval.
4.6.4.1 Calculation procedure
The fraction of outcomes covered is calculated as the percentage of results where the loss in portfolio value is less than the risk measure. A test is constructed which states that if the VaR measure exceeds the portfolio return, a violation of VaR is present, and vice versa. The test is performed for each day, and the fraction of outcomes covered (FoOC) is calculated with the following formula:

FoOC = 1 − (1/N) Σ_{t=1}^{N} violation_t    (24)

where violation_t takes the value 1 if a violation is present and 0 if not. For example, if the VaR measure exceeds the portfolio outcome on five trading days out of a hundred, the fraction of outcomes covered is 0.95 (1 − 5/100).
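A sketch of this daily test, assuming hypothetical returns and VaR expressed as a negative return threshold:

```python
# hypothetical daily portfolio returns and (negative) VaR thresholds
returns = [0.004, -0.012, 0.007, -0.025, 0.001, -0.003, 0.010, -0.019, 0.002, 0.005]
var_thresholds = [-0.020] * len(returns)   # constant 2% VaR for simplicity

# a violation occurs on days where the VaR measure exceeds the portfolio return,
# i.e. the return falls below the threshold; FoOC follows equation (24)
violations = sum(1 for r, v in zip(returns, var_thresholds) if r < v)
fooc = 1 - violations / len(returns)
print(violations, fooc)
```

With ten observations and one return below the threshold, the fraction of outcomes covered is 0.9, against a target of 0.95 at the 95% level.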
4.6.5 Multiple Needed to Attain Desired Coverage[136]
The multiple needed to attain desired coverage is simply the value by which each VaR measure should be multiplied to attain the desired level of coverage. This criterion focuses on the size of the adjustment in the risk measure required to achieve this perfect coverage. The measure is important because shortcomings in VaR measures that seem small in probability terms may be much more significant when considered in terms of the changes required to remedy them.
[135] Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 49-50.
4.6.5.1 Calculation procedure
First, we divide the daily portfolio returns by the VaR measure for each day. From these numbers the percentiles corresponding to the 95% and 99% confidence intervals are calculated, i.e. the 5th and the 1st percentile. These percentile values are the multiples that would have been required for each VaR measure to attain the desired level of coverage. A value below one implies that the VaR method overstates the risk, and a value above one that it understates the risk. Thus, a multiple of exactly one is preferred.
4.6.6 Average Multiple of Tail Event to Risk Measure[137]
This evaluation criterion relates to the median size of the outcomes not covered by the VaR measures. The average multiple of tail events to the risk measure is calculated, where tail events are defined as the largest percentage losses measured relative to the chosen confidence level.
4.6.6.1 Calculation procedure
This criterion is calculated in a similar way to the multiple needed to attain desired coverage in 4.6.5. The daily portfolio return is divided by the VaR measure for each day in the sample. When the percentiles are calculated, the average of one and the confidence level is used, i.e. the 97.5th percentile for the 95% confidence interval and the 99.5th percentile for the 99% confidence interval. The percentile values are the average multiples of tail event to risk measure. To analyse the results we calculate a benchmark from the normal distribution; see further section 5.6.
4.6.7 Maximum Multiple of Tail Event to Risk Measure[138]
This performance criterion assesses the size of the maximum portfolio loss. The maximum multiples are likely to be highly dependent on the length of the sample period; for shorter periods the maximum multiple is likely to be lower.
[136] Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 50.
[137] Ibid, p. 51.
[138] Ibid, p. 52.
4.6.7.1 Calculation procedure
From the multiple series generated in 4.6.5 the maximum value is found, which is the maximum multiple of tail event to risk measure.
4.6.8 Correlation between Risk Measure and Absolute Value of Outcome
This criterion examines to what extent the VaR measures adjust to changes in the portfolio risk over time.[139] Briefly, the correlation coefficient can be considered an index showing the degree of linear covariation between two variables. A correlation coefficient of −1 implies perfect negative correlation, i.e. the variables move exactly in opposite directions. A coefficient of +1 implies perfect positive correlation, i.e. the variables move in exactly the same direction.[140] According to Hendricks, even a perfect VaR measure cannot guarantee a correlation of one between the risk measure and the portfolio outcome, which is an important statement to bear in mind.[141] Despite this, a value close to one is desired.
4.6.8.1 Calculation procedure
First the absolute values of the portfolio returns are calculated. These values are then compared with the generated VaR measures using the following correlation formula:

ρ_{1,2} = σ_{1,2} / (σ_1 σ_2)    (25)

where the covariance is calculated as in (21) and the standard deviation according to the following formula (26):[142]

σ = √( Σ_{i=1}^{N} (r_i − r̄)² / (N − 1) )    (26)

where N is the total number of observations and r̄ is the mean of all observations.[143]
[139] Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 53.
[140] Söderlind, L., Att mäta ränterisker, (1996), p. 79.
[141] Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 53.
[142] Ross, S. A., Westerfield, R. W. & Jaffe, J., Corporate Finance, (1996), p. 253.
4.6.9 Mean Relative Bias for Risk Measures Scaled to Desired Level of Coverage[144]
Here the mean relative bias that results when the VaR measures are scaled to either 95 percent or 99 percent coverage is assessed. The purpose is to determine which approach could provide the desired level of coverage with the smallest average VaR measures.
4.6.9.1 Calculation Procedure
The scaling is performed by multiplying the VaR measures for each method by the multiple needed to attain desired coverage. An average of the scaled VaR measures is then calculated for each portfolio and confidence level. The scaled VaR measure for one specific method, portfolio and confidence level is divided by the mean of all scaled VaR measures for the same portfolio and confidence level, and one is subtracted to obtain the mean relative bias for risk measures scaled to desired level of coverage.
4.7 Hypothesis testing
To be able to evaluate our findings properly, a number of significance tests have been performed. Tests are carried out for the criterion fraction of outcomes covered, both as an actual portion and as a comparison between portfolios with different market caps, and for the correlation between risk measure and absolute value of outcome. Hypothesis testing has been performed on these specific criteria since they are the most fundamental for the evaluation of the VaR methods. A significance level of 5% is denoted (*), a significance level of 1% (**) and a significance level of 0.1% (***).
[143] Gustavsson, M. & Svernlöv, M., Ekonomi & Kalkyler, (1994), p. 706.
[144] Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 54.
4.7.1 Actual portion of Fraction of Outcomes Covered
To test whether the VaR methods cover the portfolio outcomes as intended, i.e. whether the VaR measures at the 95% level actually cover 95 percent of the outcomes and the VaR measures at the 99% level cover 99 percent of the outcomes, we set up the following hypotheses:

H_0: VaR_95% = 95% for Fraction of Outcomes Covered
H_1: VaR_95% ≠ 95% for Fraction of Outcomes Covered

The hypotheses for the 99% confidence interval are set up accordingly. Hence, the null hypothesis, H_0, states that the VaR measures cover the portion of outcomes that they are intended to. The alternative hypothesis, H_1, states that the VaR measures differ significantly from the portions of outcomes they are supposed to cover. These tests are performed for all seven VaR methods, at both confidence levels and for all three portfolios, which sums up to 42 tests. The formula used for the calculation is shown below:

Z = (P − π) / √( π(1 − π)/n )    (27)

where P is the observed portion of the measured variable in the sample, π is the portion that the test is performed against, i.e. 95% for tests on VaR_95% and 99% for tests on VaR_99%, and n is the number of observations. Z is approximately normally distributed, with zero mean and a standard deviation of one, provided that nπ(1 − π) > 5.[145] If the absolute value of Z exceeds 1.96, H_0 is rejected at the 5% level. Similarly, if the absolute value of Z exceeds 2.57, H_0 is rejected at the 1% level, and if it exceeds 3.3, H_0 is rejected at the 0.1% level. For absolute values of Z below 1.96, H_0 is accepted. These Z values are used since they correspond to the chosen significance levels.[146]
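The test in (27) can be sketched with a hypothetical coverage outcome; the figures below (1,230 covered days out of 1,250, tested against 99 percent) are invented for illustration:

```python
import math

# hypothetical outcome: 1,230 covered days out of 1,250, tested against 99% coverage
n, covered = 1250, 1230
P, pi = covered / n, 0.99

# equation (27): approximate Z test for a proportion
Z = (P - pi) / math.sqrt(pi * (1 - pi) / n)
print(round(Z, 2))
```

Here |Z| falls between 1.96 and 2.57, so H_0 would be rejected at the 5% level but not at the 1% level.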
[145] Körner, S., Statistisk dataanalys, (1987), p. 283.
[146] Körner, S., Tabeller och formler för statistiska beräkningar, (1986), p. 14.
4.7.2 Difference in FoOC between OMX and smallcap shares
To test if there is a difference in the fraction of outcomes covered between OMX
and smallcap shares the following hypotheses are set up:
H
0
: VaR
OMX
=VaR
SMALL
H
1
: VaR
OMX
>VaR
SMALL
To get an explanation of why this test is onesided, see section 5.4.1.2. This test is
then performed for all seven methods at both confidence levels. The formula
below is used for the calculation of the significance test:
)
1 1
)( 1 (
2 1
2 1
n n
P P
P P
Z
+ −
−
· (28)
where P1 is the portion of the variable measured in the first sample, i.e. the fraction of outcomes covered for the OMX portfolios, and P2 is the portion of the variable measured in the second sample, i.e. the fraction of outcomes covered for the smallcap portfolios. n1 is the number of observations in the OMX sample and n2 is the number of observations in the smallcap sample. P is the weighted average of the two samples and Z is approximately normally distributed with zero mean and a standard deviation of one.[147] Since H1 is onesided, the critical Z values become 1.645 at the 5% level, 2.33 at the 1% level and 3.1 at the 0.1% level. For all Z values below 1.645, H0 is accepted.[148]
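A minimal sketch of formula (28), with the pooled portion P computed as the weighted average of the two samples (the coverage figures in the example are hypothetical, not from the thesis):

```python
import math

def two_sample_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Z statistic of formula (28) for the difference between two
    sample portions; P is the weighted average of the two portions."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    return (p1 - p2) / math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

# Hypothetical example: OMX covers 99.0% and smallcap 97.5% over 1254 days each.
z = two_sample_z(0.990, 1254, 0.975, 1254)
# One-sided critical values: 1.645 (5%), 2.33 (1%) and 3.1 (0.1%).
reject_at_5pct = z > 1.645
```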
4.7.3 Significance test of the correlation coefficients
To test if the correlation coefficients differ significantly from zero the following
hypotheses are set up:
H0: correlation = 0
H1: correlation ≠ 0
[147] Körner, S., Statistisk dataanalys, (1987), p. 285.
[148] Körner, S., Tabeller och formler för statistiska beräkningar, (1986), p. 14.
The test is performed for all seven methods at both confidence levels and for all
three portfolios. The formula below is used for the calculation:
t = r√(n − 2) / √(1 − r²)    (29)
where r is the correlation coefficient and n is the number of observations. The statistic is tdistributed with (n − 2) degrees of freedom.[149] The critical values for t are 1.960 at the 5% level, 2.576 at the 1% level and 3.291 at the 0.1% level. For all tvalues below 1.960, H0 is accepted.[150]
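Formula (29) can be sketched as follows (the correlation value in the example is hypothetical):

```python
import math

def corr_t(r: float, n: int) -> float:
    """t statistic of formula (29) for H0: correlation = 0,
    t-distributed with (n - 2) degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Hypothetical example: r = 0.40 over 1254 observations.
t = corr_t(0.40, 1254)
significant_at_0_1pct = abs(t) > 3.291
```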
4.8 Normality tests
According to formulas (9) and (10) in chapter 3, the skewness and kurtosis of the portfolio returns are computed for each portfolio. When the skewness and kurtosis are obtained, we use them in the JarqueBera formula (30) below to test whether our data follows a normal distribution or not. JarqueBera is a test statistic for testing if a series is normally distributed. Formula (30) follows a Chisquared distribution (χ2), which is an asymmetric distribution, with two degrees of freedom:
Jarque−Bera = ((N − k)/6) × (Sk² + (Kur − 3)²/4)    (30)
where N is the total number of observations and k the number of estimated coefficients used to create the series.[151] Tests were made for all portfolios to see if they were normally distributed. The following hypotheses were set up:
H0: The portfolio returns are normally distributed
H1: The portfolio returns are not normally distributed
The results from our tests can be viewed in section 5.10. The χ2-generated probability values are the probabilities that a JarqueBera statistic exceeds the observed value under the null, where a small value leads to the rejection of the null.[152]
[149] Körner, S., Statistisk dataanalys, (1987), p. 287.
[150] Körner, S., Tabeller och formler för statistiska beräkningar, (1986), p. 17.
[151] Eviews 3, User's Guide, (1998), p. 165.
[152] Ibid., p. 165.
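As a sketch (helper names are our own), the moments and the JarqueBera statistic of formula (30) can be computed as follows; for simplicity k = 0, as if no coefficients were estimated for the return series:

```python
def skew_kurt(x):
    """Sample skewness and kurtosis from the central moments
    (the quantities given by formulas (9) and (10))."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

def jarque_bera(x, k=0):
    """Formula (30): (N - k)/6 * (Sk^2 + (Kur - 3)^2 / 4),
    asymptotically chi-squared with two degrees of freedom."""
    sk, kur = skew_kurt(x)
    return (len(x) - k) / 6 * (sk ** 2 + (kur - 3) ** 2 / 4)

# The 5% critical value of chi-squared with 2 df is about 5.99, so a
# statistic above that leads to rejection of normality at the 5% level.
```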
4.9 Criticism of primary data
The most decisive point in our study is that the collected data is correct and that no price information is missing. Two things that could have an adverse effect on our study are that the data was not corrected for dividends and that companies have merged or been acquired during the period of our study. To cope with these problems the data has been corrected for dividend payouts. When firms have merged and one stock has been exchanged for another, we have corrected the data for this as well.
Another thing that is of great importance is that the samples we have used for the midcap and smallcap portfolios are random. To make sure of this we have used Excel to generate random numbers that represent different companies.
Some of the smallcap shares have not changed hands on every trading day. To handle this problem we have used the last price paid on the day when the stock was last traded. Companies that have been traded on fewer than two thirds of the trading days have been excluded from the study to avoid any distorted results.
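The two data-cleaning rules above can be sketched as follows (function names and the example series are our own; None marks a day without trading):

```python
def fill_last_paid(prices):
    """Carry the last price paid forward over days without trading."""
    filled, last = [], None
    for p in prices:
        if p is not None:
            last = p
        filled.append(last)
    return filled

def traded_enough(prices, threshold=2 / 3):
    """Keep only shares traded on at least two thirds of the trading days."""
    traded = sum(p is not None for p in prices)
    return traded / len(prices) >= threshold

# Hypothetical example: a thinly traded share over six days.
series = [10.0, None, None, 10.5, 11.0, None]
```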
Chapter 5  Results
In this chapter the results from the study of VaR over three different approaches,
seven different methods and three different kinds of portfolios are presented for
both the 95% and the 99% confidence levels. We regard the performance criteria
fraction of outcomes covered and correlation between risk measure and absolute
value of outcome as most intuitive when analysing the results, and hence we are
giving these criteria most attention. In addition, the results from the significance
and normality tests are presented, see appendices 10–11 and 16. Tables and diagrams over the results can be found in appendices 4–9, 12–15 and 17–18. In each
section the expectations are firstly discussed, followed by the results per se.
Finally comments are made on how the results correspond with our expectations.
5.1 Mean Relative Bias
It is hard to have any qualified expectations for this criterion. However, previous research has found that longer windows normally give higher VaR measures, i.e. methods with long windows have a mean relative bias above zero.[1]
When interpreting the mean relative bias it is important to keep in mind that this criterion only measures the size of a VaR measure relative to the mean of all VaR measures for the specific category. For instance, a VaR measure of one method for the OMX portfolio is compared to other methods' VaR measures for the same portfolio and level of confidence. The mean relative bias is measured in percentage terms, so a value of 0.10 implies that the VaR measure is 10 percent larger than the average.
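The criterion (and its root mean squared variant used in section 5.2) can be sketched as follows, with hypothetical VaR series for two methods:

```python
import math

def relative_bias_stats(var_by_method):
    """Per method: mean relative bias and root mean squared relative
    bias against the cross-method average VaR on each day."""
    methods = list(var_by_method)
    days = len(var_by_method[methods[0]])
    daily_mean = [sum(var_by_method[m][t] for m in methods) / len(methods)
                  for t in range(days)]
    stats = {}
    for m in methods:
        rb = [(var_by_method[m][t] - daily_mean[t]) / daily_mean[t]
              for t in range(days)]
        mrb = sum(rb) / days
        rmsrb = math.sqrt(sum(b * b for b in rb) / days)
        stats[m] = (mrb, rmsrb)
    return stats

# Hypothetical two-method example over three days.
stats = relative_bias_stats({"HS250d": [1.2, 1.1, 1.3],
                             "EqWMA125d": [1.0, 0.9, 1.1]})
```

The method with the consistently higher VaR gets a positive mean relative bias, the other a negative one.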
For the vast majority of portfolios the mean relative bias is between –0.1 and
+0.1, indicating that most VaR measures vary up to 10 percent from the mean.
The results are presented in appendix 6, and for the 95% confidence level the HS
approach seems to give the lowest VaR measures and the EqWMA approach the
highest. At the 99% confidence interval HS tends to give the highest values,
especially with a window of 250 trading days. Further, there is a distinct
indication that longer windows give higher VaR measures. One possible
explanation for this is Jensen’s inequality theorem, which states that if the true
conditional variance is changing frequently, then the average of a concave
function, i.e. the VaR measure, will tend to be less than the same concave function
of the average variance. Briefly the implication of the theorem for VaR is that
1
Hendricks, D., “Evaluating ValueatRisk Models Using Historical Data”, (1996), p. 46.
46
VaR measures with short windows should on average be smaller than VaR
measures with longer windows.
2,3
For this criterion we can conclude that the results are in line with our expectations.
In addition, the average VaR of different methods deviates as much as ten percent from the mean in both directions, indicating that two methods can on average differ by 20 percent from each other.
5.2 Root Mean Squared Relative Bias
As with the mean relative bias it is hard to make predictions about the results of
the root mean squared relative bias. However, this criterion is sensitive to
window size and therefore the VaR methods using windows of average sizes are
expected to differ the least from the mean, i.e. give the lowest root mean squared
relative bias.
As can be seen in appendix 7, for both confidence levels and all three portfolios
the EqWMA approach with a window size of 125 trading days and the ExpWMA
approach with a decay factor of 0.97, i.e. 151 days – see table 1, give the lowest
results. It is also interesting to note that on any given day differences in the range
of 30–40 percent between two VaR methods are not uncommon. Many portfolios have values of 0.15–0.20, which means that one method can produce a VaR measure 15–20 percent above the average, while another produces values 15–20 percent below the average. Clearly, it is a serious drawback for VaR as a concept
that different methods give remarkably different VaR measures.
The methods that give the lowest values are methods with windows of average
length, i.e. 125 and 151 observations. The HS with a window size of 125 trading
days does not give a low value, which could be due to the fact that the HS
approach is very dependent on a few observations. Since the HS approach only
relies on five or one percent of the observations, depending on the confidence
level, the rest of the observations are irrelevant for the calculations. Therefore it
can be questioned whether HS125d produces reliable results.
[2] Hendricks, D., "Evaluating ValueatRisk Models Using Historical Data", (1996), p. 66.
[3] http://mathworld.wolfram.com/JensensInequality.html
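The dependence of the HS approach on a handful of observations is visible directly in a historical-simulation sketch (our own illustration, not the thesis's implementation; the quantile convention below is one of several possible):

```python
def hs_var(returns, confidence=0.95):
    """Historical-simulation VaR: the loss at the (1 - confidence)
    empirical quantile of the return window. At 95% only the worst
    5 percent of the window matter, at 99% only the worst 1 percent."""
    ordered = sorted(returns)                  # worst returns first
    idx = int(len(ordered) * (1 - confidence))
    return -ordered[idx]                       # report VaR as a positive loss

# Hypothetical 100-day window: six days at -5%, the rest at +1%.
var95 = hs_var([-0.05] * 6 + [0.01] * 94, 0.95)
```

Here var95 equals 0.05; with a 125-day window, roughly six observations pin down the 95% VaR, which illustrates why HS125d can be unstable.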
5.3 Annualized Percentage Volatility
We expect methods using short windows to have high volatility measures, since
they depend on fewer observations that leave the window more quickly. The
ExpWMA methods should have higher volatility measures than the other
approaches, since they put higher weights on recent observations through the
decay factor. In addition, the smallcap portfolio could probably have a higher
volatility measure compared to the other portfolios, since smallcap stocks
generally have a lower liquidity measured as depth. Poor liquidity can sometimes
imply higher volatility since even a small amount of trading can affect prices
heavily.
It is important to note that the values for approaches using the normal distribution
are exactly the same for both confidence levels, since these are multiples of the
same standard deviation. From the diagrams, see appendix 8, it is clear that longer
windows give lower volatility and vice versa. In addition, the ExpWMA
approach, especially with a decay factor of 0.94, gives the highest volatility
measures. This is not surprising due to the fact that the VaR measure varies as
new observations enter the window.
There seems to be a tendency for the smallcap portfolios to have a slightly higher
volatility than the OMX shares even though the differences are small. Another
conclusion that can be drawn is that, for the same window length, the HS
approach tends to have a higher volatility than the EqWMA approach. This is
probably due to the fact that the HS to a high degree only depends on a couple of
observations.
The conclusions from this criterion are very much what we expected. Short
windows give high volatility measures, in particular the ExpWMA. The smallcap
portfolios give higher volatility measures in 13 cases out of 14, so even though the
differences are small there is a tendency for these portfolios to have a higher
volatility measure.
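The contrast between the approaches is easy to see in code. A RiskMetrics-style recursion for the ExpWMA volatility might look like this (a sketch with our own seeding choice, not the thesis's exact implementation):

```python
import math

def expwma_vol(returns, decay=0.94):
    """Exponentially weighted volatility: var_t = decay * var_{t-1}
    + (1 - decay) * r_{t-1}^2, seeded with the first squared return.
    A low decay factor reacts quickly to new observations."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = decay * var + (1 - decay) * r ** 2
    return math.sqrt(var)

def annualize(daily_vol, trading_days=250):
    """Scale a daily volatility to an annual figure."""
    return daily_vol * math.sqrt(trading_days)
```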
5.4 Fraction of Outcomes Covered
This criterion is the most important and intuitive for evaluating VaR methods. To
be able to use the VaR approach in practice the VaR methods using confidence
levels of 95% and 99% should also give a 95 percent and 99 percent coverage
respectively. However, financial returns experience fat tails and previous research
has shown that VaR models find it hard to cover these confidence levels, especially the 99% level.[4] The RiskMetrics group regards the ExpWMA approach to be superior to the others, especially with a decay factor of 0.94, since this is used for daily VaR estimation and is receptive to shortterm volatility swings. Furthermore, RiskMetrics considers the HS125d to produce unsatisfactory results, since the window is too short.[5]
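The criterion itself is a simple count; a sketch (names and figures are our own, not from the thesis):

```python
def fraction_covered(returns, var_series):
    """Share of days on which the loss did not exceed that day's VaR."""
    covered = sum(r >= -v for r, v in zip(returns, var_series))
    return covered / len(returns)

# Hypothetical example: a constant 2% VaR against five daily returns.
foc = fraction_covered([0.01, -0.015, -0.025, 0.002, -0.01], [0.02] * 5)
```

Only the third day breaches the measure, so foc is 0.8; a well-calibrated 95% method should land near 0.95 over the full sample.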
The results in appendix 9 show that HS125d sharply underestimates the VaR for
all portfolios at both confidence levels. At the 95% confidence interval the
EqWMA approach, for which seven out of nine portfolios overestimate the VaR,
tends to produce VaR measures that cover more than 95 percent of the outcomes.
For the ExpWMA approach five out of six portfolios overestimate the VaR, while
only one out of six portfolios for the HS approach produces too high VaR
measures. There is no apparent difference in the VaR measures for the smallcap
and OMX portfolios.
Furthermore, at the 99% confidence level only two out of 21 portfolios actually
cover 99 percent of the outcomes. Generally the OMX portfolios have a higher
fraction of outcomes covered than both the small cap and mixed portfolios. At the
95% confidence level four out of seven methods produce higher VaR measures
for the OMX portfolios, while two methods have exactly the same coverage. At
the 99% level six out of seven methods give higher results for the OMX portfolios
compared to the smallcap.
To some extent the results for this criterion are in line with our expectations. It
seems the VaR methods have a hard time covering 99 percent of the outcomes and
distinctly the HS125d produces unsatisfying results. In addition, the fraction of outcomes covered for the OMX portfolios seems to be higher than for the smallcap portfolios. However, there is no evidence indicating that the ExpWMA approach
produces superior results compared to the other methods.
5.4.1 Significance Testing
To further evaluate the fraction of outcomes covered significance testing has been
performed both to check if the portfolios cover the portion of outcomes that they
are supposed to, and if there is any difference between smallcap and OMX
portfolios. The tests are described in section 4.7, and the results are presented in
appendix 1011.
[4] Hendricks, D., "Evaluating ValueatRisk Models Using Historical Data", (1996), p. 49.
[5] JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 78.
5.4.1.1 Fraction of Outcomes Covered
For the OMX portfolio at the 95% confidence level all H0 are accepted, i.e. all methods give coverage of 95 percent. On the other hand, at the 99% level both HS methods underestimate the VaR and especially the HS125d produces too low VaR measures.
The smallcap portfolios all produce attractive results at the 95% confidence
interval. However, at the 99% level the picture is quite the opposite. Six out of
seven portfolios significantly underestimate the VaR and only H0 for the HS250d is accepted. This is especially interesting, since the Basel 1995 proposal for market risk recommends using HS with a 99% confidence level. In addition, five out of the six portfolios that underestimate the VaR do so at the lowest, i.e. 0.1%, significance level. Apparently this is a very serious drawback of the VaR concept.
For the mixed portfolios no method underestimates the VaR at the 95%
confidence level. A surprising result is that one method, the EqWMA250d,
overestimates the VaR. At the 99% level only two methods produce results for which H0 is accepted, the HS and the EqWMA with windows of 250 trading days. All of the other methods underestimate the VaR at the 1% significance level or less.
5.4.1.2 Difference between the OMX and Smallcap Portfolios
This test is set up to evaluate if there is a difference in the fraction of outcomes
covered between the smallcap and OMX portfolios. Unfortunately the mixed
portfolio cannot be tested against the OMX portfolio, since the mixed portfolio
contains five stocks that are also included in the OMX portfolio and therefore the
observations are not independent. The diagrams presented in appendix 9 show that there is a tendency for the OMX portfolios to have a higher fraction of outcomes covered than the smallcap portfolios, and therefore the test is onesided.
At the 95% confidence level none of the methods signals that there is a difference between the smallcap and the OMX portfolios. On the other hand, at the 99% confidence interval all methods using a normal distribution get their H0 rejected, i.e. these methods produce a significantly higher fraction of outcomes covered for the OMX portfolios compared to the smallcap portfolios. For the HS methods no difference could be detected and hence H0 for these methods is accepted.
5.5 Multiple Needed to Attain Desired Coverage
The multiple needed to attain desired coverage is more or less a mirror image of the fraction of outcomes covered. Hence, the methods that tend to overestimate VaR measured as fraction of outcomes covered will get a multiple less than one in this test, and vice versa. The larger the deviation from one, the more the risk measure has to be adjusted to achieve perfect coverage. Since we expect a poor
coverage for the HS125d in the previous test, the multiple is consequently
expected to be relatively high. Similarly we expect values above one for the 99%
confidence level, and the ExpWMA to produce values close to one.
The diagrams in appendix 12 show that the multiples are a reflection of the fraction of outcomes covered, both at the 95% and the 99% confidence level. Worth noting is the very poor result of the HS125d at the 99% level, which underestimates the VaR by approximately 40 percent. Another interesting observation is the sharp overestimation of VaR by the mixed portfolio for the EqWMA250d. Further on, at the 95% level no method seems to be superior to the other methods, but for the 99% level the HS250d and EqWMA250d appear to have somewhat more attractive values. The smallcap portfolios at the 99% level are the worst performers, while it is more difficult to distinguish among the portfolios at the 95% level. For all seven methods and both confidence levels the multiple stretches in a range of 0.9–1.45.
5.6 Average Multiple of Tail Event to Risk Measure
This performance criterion views the size of outcomes not covered by the VaR measure. To evaluate the test we use the normal distribution as a benchmark. In calculating the benchmark we use the table value for the average tail percentile, i.e. 97.5 for the 95% confidence level and 99.5 for the 99% level, which is 1.960 and 2.575 respectively. These are then divided by the table values for the 95% and the 99% confidence levels, i.e. 1.645 and 2.327. Hence, the benchmark for the 95% level is (1.960/1.645)=1.19 and for the 99% level (2.575/2.327)=1.11. This benchmark is not applicable for the HS, since the method does not rely on normally distributed returns.
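The benchmark arithmetic can be verified in a few lines (percentile values taken from standard normal tables, rounded as in the text):

```python
# Standard normal percentiles used for the benchmarks.
z_95, z_975 = 1.645, 1.960   # 95th and 97.5th percentiles
z_99, z_995 = 2.327, 2.575   # 99th and 99.5th percentiles

# Benchmark = average tail percentile divided by the VaR percentile.
benchmark_95 = z_975 / z_95  # about 1.19
benchmark_99 = z_995 / z_99  # about 1.11
```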
Thus, the EqWMA and the ExpWMA approaches should present values close to the benchmark if the portfolio returns are to be regarded as following a normal distribution. At the same time, we are aware of the fact that financial theory states that returns hardly follow a normal distribution.
The diagrams in appendix 13 indicate that the OMX portfolios, for all methods and both confidence levels, have multiples close to the benchmarks relative to the other portfolios. This result implies that the normal distribution works relatively better for OMX stocks. Later on, in section 5.10, we will see that this is an ambiguous result. At the same time the smallcap portfolio consistently deviates the most from the benchmarks. Otherwise we cannot conclude that one method is superior to the other methods.
The results are somewhat in line with the expectations, since the OMX portfolio produces values close to the benchmarks. On the other hand, the smallcap and mixed portfolios have values significantly above the benchmarks. Even if the results for this criterion were to produce values close to the benchmarks, financial theory states that returns experience fat tails, and hence the normal distribution is a poor approximation.
5.7 Maximum Multiple of Tail Event to Risk Measure
Basically the maximum multiple value is the largest multiple for each method
over all observations. Regarding the ExpWMA approach, which is known as a
method able to cover shortterm volatility well, we expect a relatively low max
multiple.
The results in appendix 14 show that for 12 out of the 14 combinations of
methods and confidence levels the OMX portfolio got the lowest max multiple.
One explanation for this characteristic can be higher performance stability and organizational quality for the largecap stocks that are included in the OMX index, compared to the small and midcap stocks. In addition, the HS approach tends to
produce the highest max multiples, while the ExpWMA approach gives the
lowest. Worth noting is that it is important not to view VaR as a strict upper
bound on the portfolio losses that can occur. This means that even if it seems that
VaR at the 99% level captures essentially all of the relevant events the last one
percent can entail substantial losses.
The results confirm our expectations regarding that the ExpWMA method
properly covers the shortterm volatility, which implies a relatively low max
multiple. Hence, the ExpWMA tends to be the superior method in this respect.
However, this conclusion may well be too hasty, because the performance criterion is based on only one single observation out of the total sample of 1254 observations.
5.8 Correlation between Risk Measure and Absolute
Value of Outcome
We expect the methods using short windows to have the highest correlation
coefficients, because these are superior in capturing volatility fluctuations. In
addition, the ExpWMA approach should produce high values, since the strength
of this approach is that it should be more receptive to changes in the portfolio volatility.
As can be seen in appendix 15 for methods using the same approach, for example
by comparing methods using the EqWMA approach with other methods using the
same approach but with other window sizes, it is apparent that a shorter window
gives a higher correlation coefficient. Furthermore, the ExpWMA approach
produces the highest correlation coefficients. It is also interesting to notice that for all 14 combinations of method and confidence level, the OMX portfolios have higher correlation coefficients than the smallcap portfolios.
As we expected the methods with short windows and methods using the
ExpWMA approach produce the highest correlation coefficients.
5.8.1 Significance Testing
To test if the correlation coefficients differ significantly from zero we set up the
hypotheses described in section 4.7.3. The test results are presented in appendix
16. As can be seen in the appendix all correlation coefficients differ from zero at
the lowest significance level, i.e. the 0.1% level. Hence, all VaR methods are
receptive to changes in the market volatility.
5.9 Mean Relative Bias for Risk Measure Scaled to
Desired Level of Coverage
This performance criterion assesses which method that can provide the desired
level of coverage with the smallest average VaR measure. Previous research
indicates that the ExpWMA approach should have a relatively low mean relative
bias for risk measure scaled to desired level of coverage, and vice versa for the HS approach.[6] In general, as stated before in section 5.1, the mean relative bias criteria are difficult to make predictions for prior to the investigation.
The diagrams in appendix 17 indicate that the HS method gives a poor value for
the 99% confidence level, especially for the smallcap portfolios. The ExpWMA
approach appears to be superior for both confidence levels, except for the mixed
portfolio with decay factor 0.94 at the 95% level. Moreover, for the EqWMA approach the smallcap portfolios produce the lowest figures two out of three times at the 99% level, but the highest values two out of three times at the 95% level.
The results follow the expectations, based on previous research, that the
ExpWMA has a relatively low mean relative bias for risk measure scaled to
desired level of coverage.
5.10 Normality Tests
As discussed in section 3.3, the distribution of financial returns can take many
different shapes. Since both the EqWMA and the ExpWMA approach rest on the
assumption of normally distributed returns we find it interesting to examine
whether our data is normally distributed or not. In a first stage we test our data for
skewness and kurtosis, and secondly we perform a JarqueBera test for normality,
see section 4.8. It is important to stress that the normal distribution is a well
functioning approximation for smaller samples, i.e. 30–150 observations, but can be an insufficient approximation for larger samples such as ours of 1254 observations.[7]
5.10.1 Results from the Normality Tests
If the portfolio returns are perfectly normally distributed, the computed skewness should be close to zero and the kurtosis close to three. That will produce a Chisquare probability value of one and hence the null can be accepted, i.e. the portfolio returns are normally distributed. As can be seen in table 2 below, the test values for skewness indicate that the returns from the OMX portfolio are rightskewed, since 0.488 is a relatively high and positive number. Further on, for the smallcap portfolio the test indicates leftskewed returns, while the mixed portfolio has a result close to zero.
[6] See for instance Hendricks, D., "Evaluating ValueatRisk Models Using Historical Data", (1996).
[7] Hagnell, M., Lecturer in Econometrics at Lund University, (001705).
A distribution is considered to be leptokurtic if the kurtosis coefficient is higher
than three and platykurtic below three. The kurtosis calculations show that all
three portfolio returns are significantly leptokurtic, especially the returns for the
OMX portfolio with a value of 9.86, but the small and mixed portfolio returns are
also significantly leptokurtic.
Portfolio    Skewness    Kurtosis    JarqueBera    prob (χ2)
OMX           0.488       9.860      2504.635      0.000
SMALL        −0.222       7.637      1131.960      0.000
MIX          −0.032       7.656      1131.100      0.000
Table 2. Results from normality tests.
The Chisquared probability values in table 2 clearly indicate that the null hypotheses set up in section 4.8 are rejected for all portfolios, which implies that the portfolio returns cannot be regarded as normally distributed.
To find answers for this result we plotted the portfolio returns for each portfolio
against normally distributed random numbers for the same sample size, i.e. 1254
observations. The results are viewed in figures 3–5 below, which demonstrate the difference in distribution between the portfolio returns for all three portfolios and a normal distribution. It is difficult to see the kurtosis and skewness, even for the OMX portfolio with a very high kurtosis value, but compared to the leptokurtic distribution viewed in section 3.3 all three distributions appear similarly leptokurtic.
Figure 3. A random normal distribution plotted against the OMX portfolio returns.
In accordance with the negative skewness coefficients in table 2, more observations should lie toward negative values for the smallcap portfolio, and the opposite holds for the OMX portfolio. The mixed portfolio has a skewness coefficient close to zero, and hence its distribution can be considered symmetric.
Figure 4. A random normal distribution plotted against the smallcap portfolio returns.
Figure 5. A random normal distribution plotted against the mixed portfolio returns.
From the diagrams it is difficult to see the fat tails of the portfolio returns, and therefore the frequency tables are attached in appendix 18. The kurtosis is significant, and an answer to this result can be found in the heart of portfolio theory. The figures indicate that too many return observations, as compared with the generated normal distribution, are clustered around relatively small deviations from the mean for all three portfolios. We believe that this is an effect of diversification. However, it is important to stress that the clusters around the mean for the smallcap and, to some extent, the mixed portfolios can be explained by the filling in of missing prices explained in sections 2.4 and 4.1.
hold a portfolio in the first place is to lower the investment risk via distributing
the capital over different business lines and sectors. If the diversification works it
simply implies that the fluctuation in returns decreases and hence more
observations are clustered around the mean. In addition, there is a tendency for the
correlation between stocks to increase in periods of global turbulence and that
might be the reason why financial returns experience fat tails. These two effects
can explain why the portfolio outcomes are not normally distributed.
Due to the fact that the normal distribution appears to be a poor approximation for
the portfolio returns in our study, the use of another approximation would be
beneficial. Here the tdistribution could be an option, since it captures fat tails
well. However, the tdistribution depends on the number of degrees of freedom, which is an uncertain estimate and might make the VaR process both unreliable and complicated. Hence, VaR with an approximated tdistribution could be a less appealing tool in practice.
Another interesting conclusion that can be drawn from the normality test is linked
to the results from the performance criterion average multiple of tail event to risk
measure. To evaluate the results we used the normal distribution as a benchmark,
see 5.6, where a value close to the benchmark implies that the portfolio returns
should be regarded to follow a normal distribution. The results indicated that the
OMX portfolio followed a normal distribution relatively better than the other portfolios, which appears very ambiguous with respect to the results from the normality tests. However, the average multiple of tail event to risk measure only compares the 95th to the 97.5th and the 99th to the 99.5th percentiles, and does not take the whole normal distribution into consideration.
Chapter 6  Conclusions
6.1 Evaluation of VaR Methods
We regard methods producing results that do not differ significantly from the intended fraction of outcomes covered at the 5% significance level as satisfying. However, it can be questioned whether estimations that come close to having their null hypothesis rejected, such as the OMX HS125d at the 95% confidence level, can be considered a reliable and trustworthy VaR method. Most of our VaR methods work well at the 95% confidence level, while at the 99% level only portfolios consisting of OMX stocks produce satisfying results. For the smallcap and mixed portfolios 11 out of 14 VaR methods significantly underestimate the fraction of outcomes covered at the 99% confidence level. This gives indications of serious shortcomings of the VaR concept when differences in market cap affect the applicability of VaR. These results might suggest that the VaR methods are mostly applicable to largecap shares and cannot be used with smallcap or mixed portfolios, at least not at the 99% confidence level.
In our study we can conclude that financial returns do not follow a normal
distribution. Therefore, we argue that VaR approaches based on the normal
distribution should not be used for VaR estimations, even if the EqWMA and the
ExpWMA approaches for the OMX portfolio do well measured as fraction of
outcomes covered. Hence, assuming a normal distribution despite its imperfections cannot be recommended.
The HS approach, which does not rest on the assumption of normally distributed returns, indeed produces very unsatisfying results with a window of 125
trading days. However, by using a longer window, i.e. 250 days, the results are
much more promising. For HS the estimations are acceptable at the 95% level,
even though the OMX and smallcap portfolio do not cover exactly 95 percent of
the outcomes. For the mixed portfolio on the other hand, the HS250d slightly
overestimates the VaR. All differences from the 95 percent of outcomes covered
are within the margin of error. At the 99% level the HS250d method clearly
produces the best results for the smallcap and mixed portfolios. However, it does
significantly underestimate the VaR at the 99% level for the OMX portfolio. Still,
we argue that the HS with a window size of 250 trading days is the VaR method
producing the most attractive results in our study. In addition, there is no
significant difference in the fraction of outcomes covered between the smallcap
and OMX portfolio for the HS methods.
The approaches based on the normal distribution, i.e. the EqWMA and the
ExpWMA approach, seem to do very well at the 95% level. No method comes close to having its null hypothesis rejected except for the MIX EqWMA250d, which overestimates the VaR. On the other hand, at the 99% level the results are not
as good. These results are in line with Lucas & Klaassen’s findings, i.e. VaR
methods based on the normal distribution produce attractive VaR measures at the
95% level but poor at the 99% level, see section 3.3. The approaches work well
with the OMX portfolio, but the results with the smallcap and mixed portfolios
are not acceptable. All methods except for the MIX EqWMA250d underestimate
the VaR with probability values close to zero. We were expecting the ExpWMA
to be superior to the EqWMA approach, since this approach is favoured by
RiskMetrics and is known to capture volatility fluctuations in a superior way.
However, there is no evidence pointing in this direction in our findings. On the
other hand, the ExpWMA approach generates superior results regarding the mean
relative bias for risk measures scaled to desired level of coverage and the
correlation between risk measure and absolute value of outcome. We can
conclude that a high correlation does not automatically give attractive results, and
even if the correlation coefficient would have been one (1) the VaR method
cannot be regarded as totally perfect.
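The two normal-distribution approaches differ only in how the volatility input is estimated. A minimal sketch under simulated data (variable names and the returns series are ours; lambda = 0.94 is the RiskMetrics decay factor, and 1.645 is the standard-normal 95% quantile):

```python
import numpy as np

def eqwma_vol(returns, window=250):
    """Equally weighted moving average: each of the last `window`
    squared returns gets the same weight."""
    r = np.asarray(returns, dtype=float)
    return float(np.sqrt(np.mean(r[-window:] ** 2)))

def expwma_vol(returns, lam=0.94):
    """Exponentially weighted moving average (RiskMetrics-style):
    var_t = lam * var_{t-1} + (1 - lam) * r_t^2, so recent returns
    dominate and volatility clusters are picked up faster."""
    r = np.asarray(returns, dtype=float)
    var = r[0] ** 2
    for x in r[1:]:
        var = lam * var + (1 - lam) * x ** 2
    return float(np.sqrt(var))

Z95 = 1.645  # standard-normal quantile; use 2.326 for the 99% level

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.015, 500)   # simulated daily returns
var_eq = Z95 * eqwma_vol(r)       # parametric 95% VaR, EqWMA
var_exp = Z95 * expwma_vol(r)     # parametric 95% VaR, ExpWMA
print(round(var_eq, 4), round(var_exp, 4))
```

Both sketches multiply a volatility estimate by a normal quantile, which is exactly why they inherit the normality assumption criticised above.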
There is a tendency for the VaR methods with long windows to produce superior
results compared to the methods using shorter windows. It is possible that
windows longer than 250 trading days would produce even more attractive VaR
measures. A one-year window may simply be too short to capture the risk of
infrequent market crashes; a longer window would capture this risk more
accurately.
Perhaps a semi-parametric method or a Monte Carlo simulation would have
produced superior results, but as stated in chapter 3 these methods are
significantly more complex to perform than the three approaches we selected.
Furthermore, since one of the purposes of this research is to discuss whether
VaR is a useful tool for asset managers, we found the chosen approaches far
more likely to be relevant in practice.
6.2 The Distribution of Financial Returns
As we showed in section 5.10, the portfolio returns used in our study do not
follow a normal distribution. The returns for all three portfolios exhibit
leptokurtic behaviour, with high peaks, thin shoulders and fat tails. The high
peaks might be due to the diversification effect, i.e. portfolio outcomes
clustering around the mean, while increasing correlations between portfolio
stocks in times of high market volatility create the fat tails. Finally, the
normal distribution is not a well-functioning approximation even for large
samples, and this inability to handle large samples properly favours the HS
approach even more, since for HS an increasing window size only improves the
VaR estimates.
In summary, it is obvious that financial returns are not normally distributed,
and VaR approaches should not be based on assumptions that do not hold.
Therefore, we recommend the HS approach for VaR estimations.
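The leptokurtosis can be checked directly from sample moments. Below is a sketch of the Jarque-Bera normality test, which works off skewness and kurtosis; this is an illustration of that kind of test, not necessarily the specific test used in section 5.10.

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera statistic JB = n/6 * (S^2 + (K - 3)^2 / 4), where
    S is sample skewness and K sample kurtosis.  Under normality
    JB ~ chi^2(2), so JB > 5.99 rejects normality at the 5% level."""
    x = np.asarray(x, dtype=float)
    n, m, s = len(x), x.mean(), x.std()
    skew = np.mean((x - m) ** 3) / s ** 3
    kurt = np.mean((x - m) ** 4) / s ** 4   # a normal sample has K near 3
    return n / 6.0 * (skew ** 2 + (kurt - 3) ** 2 / 4)

# a fat-tailed Student-t sample is flagged as non-normal
rng = np.random.default_rng(2)
jb = jarque_bera(rng.standard_t(3, 2000))
print(jb > 5.99)
```

Fat tails inflate the kurtosis term, which is precisely the pattern our portfolio returns display.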
6.3 Implications of VaR for Asset Managers
The stock markets of today are more volatile and risky than only a couple of
years ago. One of the main reasons is the fast-paced rise of the "new economy"
at the expense of the "old economy". The "new economy" is reflected in the
stock market, which to a large extent now consists of a relatively new and
untested sector, telecom and IT. The uncertainty regarding the sector itself and
the companies' ability to live up to expectations is significant and contributes
to the increase in volatility. The companies in this sector are also relatively
difficult to analyse, which leads to a significant dispersion in motivated
valuations among market actors. Contradictory opinions about the valuation of
stocks and the high valuation of the stock market as such are both driving
forces behind volatility increases. The high valuations, the booming telecom
and IT sectors and the launch of the EMU, which forces asset managers to
reallocate their portfolio holdings within the region, all make higher volatility
a natural consequence.
Asset managers are highly exposed to these increased risks, and are more or less
forced to invest in the telecom and IT sector to gain and maintain exposure to
the "new economy". In brief, this implies increased risk taking. Given the
enhanced portfolio risk, a way to capture and measure these risks is called for,
even if asset managers typically are in the business of taking risk, not avoiding
it. The question is therefore whether we can recommend VaR as a risk
measurement tool for asset managers focused on the Swedish equity market.
The primary goals of VaR are to show how much risk is being taken and to
determine whether asset managers are exposed to the risks they intend to be.
Even if the need for a risk management tool is great, especially in today's
stock market, reliability and stability are the first two characteristics that have
to be addressed. A VaR method that is not reliable and does not give stable
results is not a risk tool for an asset manager to consider. Even though a VaR
method will never tell an asset manager how much risk to undertake, it must
accurately reflect how much risk is being taken. Furthermore, if a specific VaR
method produces measures that underestimate the risk, an asset manager may
for instance accept additional risk. This has serious implications for the
portfolio, which then carries more risk than the threshold is actually intended
to allow. Hence, a threshold or risk target defined in terms of maximum
tolerable VaR is then neither consistent nor reliable. On the other hand, a
well-functioning and reliable VaR method can be a superior way to avoid
unexpected and uncontrolled losses.
As stated in chapter 3, Philippe Jorion is of the opinion that there is no doubt
that VaR is here to stay, but he also stresses that the process and methodology
of calculating VaR are just as important as the VaR number itself. On the
whole, this goes hand in hand with the results of this study. As long as a VaR
method is reliable and stable, it is probably the superior risk measurement tool
for asset managers managing Swedish equity portfolios. At the same time, it
would be too much to conclude that any of the VaR methods in this study can
be regarded as totally reliable and stable. Previously we concluded that the
normal distribution is incorrect as an underlying assumption for a VaR
approach, which automatically excludes the EqWMA and ExpWMA
approaches. The HS seems to work well with a window of 250 trading days,
and it would have been interesting to see the results of similar research with
longer windows.
One of the purposes of our study was to analyse whether the market cap of
stocks has any impact on how well functioning and reliable the VaR models
are. From a practical viewpoint, both the OMX and the small-cap portfolios can
be considered extremes, because they consist only of stocks from a certain
range of market caps. Of course, specialised equity funds such as small-cap
and index funds can have concentrations similar to our OMX and small-cap
portfolios, but such concentrations are unlikely holdings for a typical asset
manager. The mixed portfolio, on the contrary, appears to be a more natural
diversification across market caps for an asset manager. Therefore, the VaR
results for the mixed portfolio are perhaps the most interesting, and there the
HS 250d and the EqWMA 50d are the superior methods at the 95% level. At
the 99% level the HS 250d and the EqWMA 250d seem to achieve the most
attractive results measured as the fraction of outcomes covered. Hence, the
HS 250d method can be regarded as the most useful tool for risk management
of Swedish equity portfolios.
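The "fraction of outcomes covered" significance tests reported in Appendix 10 amount to a z-test on the exceedance count. A sketch with hypothetical counts (the sample size and covered count below are illustrative, not our actual backtest figures):

```python
import math

def coverage_z(n_obs, n_covered, p0=0.95):
    """Z-statistic for observed coverage against the target p0: under
    a correct VaR model the covered count is Binomial(n_obs, p0), so
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n_obs)."""
    p_hat = n_covered / n_obs
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n_obs)

# hypothetical example: 943 of 1000 outcomes covered at the 95% level
z = coverage_z(1000, 943, p0=0.95)
print(round(z, 3))
```

A large positive or negative z then maps to the probability and significance stars shown in the appendix tables.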
6.4 Suggestions for Further Research
We believe that there are a number of VaR areas that have not yet received
enough attention. Our study concludes that the normal distribution is an
unsatisfactory approximation of portfolio returns, and it would therefore be
interesting to examine how applicable the newer VaR approaches are to stock
portfolios, for example the semi-parametric approach, the improved VaR
methodology and the Monte Carlo simulation approach.
One distribution that could be useful for estimating portfolio returns is the
t-distribution, and examining what impact it has on VaR estimation would
hence be an interesting topic. In addition, a more qualitative study of the
usefulness of VaR for asset managers could shed light on the subject from
another point of view.
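As a hint of what the t-distribution would change, the sketch below compares a simulated unit-variance Student-t 99% VaR with the normal quantile. The degrees of freedom and daily scale are illustrative assumptions, not estimates from our data.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
nu, sigma = 5, 0.015              # assumed d.o.f. and daily volatility

# scale the t draws to unit variance so only the tail shape differs
t_draws = rng.standard_t(nu, 200_000) / math.sqrt(nu / (nu - 2))
var99_t = -sigma * np.percentile(t_draws, 1)   # simulated t-based 99% VaR
var99_n = 2.326 * sigma                        # normal-based 99% VaR

# heavier tails push the t-based 99% VaR above the normal estimate
print(var99_t > var99_n)
```

The fatter tails matter most exactly at the 99% level, where our normal-based methods performed worst.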
References
Articles
Affärsvärlden (1998), “Riktningen och värde avgör svängningarna”.
Affärsvärlden, (1998-12-09).
Björklund, Marianne (2000), “Bristande kontroll möjliggör svindlerier”. Dagens
Nyheter, (2000-01-05).
Bäckström, Urban (2000), “Betydelsen av riskhantering”. Riksbanken, Risk
Management forum, 2000-02-03.
Culp, L. Christopher, Mensink, Ron & Neves, M. P. Andrea (1999) “VaR for
Asset Managers”. Derivatives Quarterly, Vol. 5, No. 2, January (1999).
Danielsson, Jon (1998-1999), “Class notes Corporate Finance & Financial
Markets”. London School of Economics.
Danielsson, Jon & de Vries, G. Casper (1997), “Value-at-Risk and Extreme
Returns”. London School of Economics, Financial Market Group Discussion
Paper, No. 273, 1997.
Danielsson, Jon, Hartmann, Philipp & de Vries, G. Casper (1998), “The Cost of
Conservatism”. http://www.hag.hi.is/~jond/research/.
Danielsson, Jon & de Vries, G. Casper (1997), “Beyond the Sample: Extreme
Quantile and Probability Estimation”. http://www.hag.hi.is/~jond/research/.
Dowd, Kevin (1999), “A Value-at-Risk Approach to Risk-Return Analysis”. The
Journal of Portfolio Management, Summer 1999.
Hendricks, Darryll (1996), “Evaluation of Value-at-Risk Models Using Historical
Data”. FRBNY Economic Policy Review, April, 1996.
Jorion, Philippe (1997), “In Defense of VaR”.
http://www.derivatives.com/magazine/archive/1997.
Koupparis, P. (1995), “Barings – A Random Walk to Self Destruction”. Scandals
in Justice, April 1995.
Longin, M. Francois. (1999) “Stress Testing: A Method based on Extreme Value
Theory”. Research paper for The Third Annual BSI GAMMA Foundation
Conference on global asset management.
Lucas, André & Klaassen, Pieter (1998), “Extreme Returns, Downside Risk, and
Optimal Asset Allocation”. The Journal of Portfolio Management, Fall 1998.
Maymin, Zak (1998) “VaR variations: is multiplication factor still too high?”.
http://www.gloriamundi.org/var/.
Schachter, Barry (1997), “Value-at-Risk Resources – An Irreverent Guide to
Value at Risk”. Financial Engineering News, Vol. 1 No. 1, August 1997.
Smithson, Charles (1996) “Class notes of CIBC School of Financial Products”.
CIBC School of Financial Products.
Styblo Beder, Tanya (1995) “VaR: Seductive but Dangerous”. Financial Analysts
Journal, September/October 1995.
Thornberg, John (1998), “Derivative users lack refined controls of risk”. Working
Paper University of Paisley, December 1998.
Yiehmin, Lui, R. (1996), “VaR and VaR derivatives”. Applied Derivatives
Trading, December 1996.
Textbooks
Aczel, D. Amir (1993), Complete Business Statistics. IRWIN.
Afifi, A.A. & Clark, Virginia (1990), Computer-Aided Multivariate Analysis. Van
Nostrand Reinhold Company.
Backman, Jarl (1985), Att skriva och läsa vetenskapliga rapporter.
Studentlitteratur, Lund.
Berenson, L. Mark & Levine, M. David (1992), Basic Business Statistics.
Prentice-Hall Int. Editions.
Benninga, Simon (1998), Financial Modeling. Massachusetts Institute of
Technology.
Eviews 3 (1998), User’s Guide. Eviews Ltd.
Gustavsson, Michael & Svernlöv, Magnus (1994), Ekonomi & Kalkyler. Liber
Hermods.
Halvorsen Knut (1992), Samhällsvetenskaplig metod. Studentlitteratur, Lund.
Hill, Carter, Griffiths, William & Judge, George (1997) Undergraduate
Econometrics. John Wiley & Sons, Inc.
Hull, John (1997), Options, Futures, and Other Derivatives. Prentice-Hall Int.
Jorion, Philippe (1997), Value at Risk. McGraw-Hill.
JPMorgan/RiskMetrics Group (1995), Introduction to RiskMetrics. JPMorgan.
JPMorgan/Reuters (1996), RiskMetrics – Technical Document. JPMorgan/Reuters.
JPMorgan/Reuters (1996), RiskMetrics Monitor. JPMorgan/Reuters.
Kleinbaum, G. David, Kupper, L. Lawrence & Muller, E. Muller (1988), Applied
Regression Analysis and Other Multivariate Methods. PWS-KENT Publishing
Company.
Körner, Svante (1986), Tabeller och formler för statistiska beräkningar.
Studentlitteratur, Lund.
Körner, Svante (1987), Statistisk Dataanalys. Studentlitteratur, Lund.
Pettersson, Gertrud (1997), Att skriva rapporter. Ekonomihögskolan vid Lunds
Universitet.
Ross, Stephen A., Westerfield, Randolph W. & Jaffe, Jeffrey (1996), Corporate
Finance. McGraw-Hill.
Svenning, Conny (1996), Metodboken. Eslöv Lorentz.
Söderlind, Lars (1996), Att mäta ränterisker. SNS Förlag.
WiedersheimPaul, Finn & Eriksson, Lars, Torsten (1991), Att utreda, forska och
rapportera. Liber Ekonomi.
Collection of Quantitative Data
Affärsvärlden (1995), ”Placeringsindikatorn”. Affärsvärlden, 18 January, No. 3,
1995.
Affärsvärlden (1995), ”Placeringsindikatorn”. Affärsvärlden, 17 May, No. 20,
1995.
Affärsvärlden (1996), ”Placeringsindikatorn”. Affärsvärlden, 10 January, No. 12,
1996.
Affärsvärlden (1996), ”Placeringsindikatorn”. Affärsvärlden, 22 May, No. 21,
1996.
Affärsvärlden (1997), ”Placeringsindikatorn”. Affärsvärlden, 15 January, No. 13,
1997.
Affärsvärlden (1997), ”Placeringsindikatorn”. Affärsvärlden, 21 May, No. 20,
1997.
Affärsvärlden (1998), ”Placeringsindikatorn”. Affärsvärlden, 14 January, No. 13,
1998.
Affärsvärlden (1998), ”Placeringsindikatorn”. Affärsvärlden, 13 May, No. 20,
1998.
Affärsvärlden (1999), ”Placeringsindikatorn”. Affärsvärlden, 13 January, No. 12,
1999.
Affärsvärlden (1999), ”Placeringsindikatorn”. Affärsvärlden, 2 June, No. 22,
1999.
Bloomberg Database of Financial Information.
OMGroup, Shares in OMX, first half-year 1995 – second half-year 1999.
Electronic Sources
AffärsData
EconLit
Eric Weisstein's World of Mathematics,
http://mathworld.wolfram.com/JensensInequality.html
Personal contacts
Hagnell, Mats, Lecturer in Econometrics at Lund University. Personal contact,
May 17th, 2000.
Segerström, Thomas, Carnegie Asset Management. Telephone contact, April
11th, 2000.
Appendix 2 – Total sample of stocks
Order  Small-cap stocks | Order  Small-cap stocks | Order  Mid-cap stocks | OMX stocks
1 Bulten b 42 Lap Power b 1 Allgon b ABB a
2 Celsius b 43 Scribona 2 Avesta Sheffield ABB Ltd. a
3 Consilium 44 Lindab 3 Höganäs b Asea a
4 Finnveden b 45 Nea b 4 SSAB b Astra b
5 Gunnebo 46 Active b 5 SecoTools b AstraZeneca
6 HL Display b 47 Sintercast a 6 Svedala Electrolux b
7 Haldex 48 Havsfrun 7 Assidomän Ericsson b
8 Itab b 49 Skanditek 8 JM b FSPB a
9 KMT 50 Nordifa 9 Lundbergs b H&M b
10 Kabe b 51 Hexagon b 10 NCC b Investor b
11 Kalmar industrier 52 Midway b 11 TietoEnator Nokia
12 Nolato 53 Tivox b 12 Invik b Nordbanken Holding
13 Klippan 54 Westergyllen b 13 Kinnevik b Pharmacia a
14 Munksjö 55 Borås Wäfveri b 14 Perstorp b Pharmacia & Upjohn
15 Rottneros 56 Brio b 15 Trelleborg Sandvik b
16 Rörvik Timber 57 Cloetta b 16 Graninge SEB a
17 Bergman &Beving 58 Spendrups b 17 SAS SHB a
18 Bilia b 59 TV4 b 18 Atle Skandia b
19 Doro 60 VLT b 19 Bure Skanska b
20 Elgruppen b 61 Artema b 20 Custos a Sparbanken a
21 Fjällräven b 62 Elekta b 21 Latour Volvo b
22 Folkebolagen 63 Getinge 22 Ratos
23 OEM b 64 Nobel Biocare 23 OMgruppen
24 Diös 65 J&W
25 Fastpartner 66 KM b
26 Heba b 67 Scandiaconsult
27 Hufvudstaden 68 Ångpanneföreningen b
28 Ljungbergruppen b 69 Senea
29 Norrporten 70 Bong Ljundahl b
30 Peab b 71 Elanders b
31 Piren b 72 Esselte b
32 Platzer 73 Graphium
33 Realia b 74 Strålfors b
34 Wallenstam b 75 Tryckindustri b
35 Wihlborg 76 Geveko
36 B&N b 77 Svolder b
37 Concordia b 78 Öresund
38 StenaLine b 79 H&Q
39 IMS 80 Matteus
40 Måldata b 81 NH Nordiska
41 IBS b
Appendix 3 – Equity Portfolios
Portfolio 1 – Swedish equity OMX portfolio (largecap)
Reallocation period / Stocks
95:1 95:2 96:1 96:2 97:1
Astra b Ericsson b Astra b Astra b Ericsson b
Ericsson b Astra b Ericsson b Ericsson b Astra b
Volvo b Volvo b Volvo b Volvo b ABB a
Asea a Asea a Asea a ABB a Volvo b
Electrolux b Sandvik b SHB a Sandvik b SHB a
Sandvik b Stora a SEB a P&U Sandvik b
Stora a Pharmacia a Skanska b SHB a Skanska b
SEB a Electrolux b Sandvik b Investor b H&M b
SHB a SHB a Stora a Skanska b SEB a
Skanska b Skanska b Electrolux b SEB a Investor b
97:2 98:1 98:2 99:1 99:2
Ericsson b Ericsson b Ericsson b Ericsson b Ericsson b
Astra b Astra b Astra b AstraZeneca AstraZeneca
ABB a Volvo b H&M b H&M b H&M b
Volvo b FSPB a FSPB a SHB a ABB Ltd. a
Sparbanken a H&M b SHB a FSPB a Skandia
H&M b SHB a SEB a NB Holding Volvo b
SHB a ABB a ABB a Skandia NB Holding
Investor b SEB a NB Holding ABB a SHB a
SEB a Investor b Volvo b Nokia Electrolux b
Electrolux b Sandvik b Skandia Volvo b FSPB a
Portfolio 2 – Swedish equity smallcap portfolio
Reallocation period / Random numbers
95:1 95:2 96:1 96:2 97:1 97:2 98:1 98:2 99:1 99:2
71 30 15 62 66 37 11 64 52 34
34 75 64 58 74 18 39 15 79 81
12 80 36 24 23 34 13 56 56 2
49 46 6 12 70 19 4 74 49 52
52 34 31 23 77 39 81 6 53 30
31 58 28 46 4 26 24 47 25 65
20 45 54 74 25 52 38 26 36 74
23 51 14 81 47 67 75 69 40 72
77 11 41 14 71 15 20 21 20 23
37 77 45 71 59 29 68 47 33 28
Reallocation period / Stocks
95:1 95:2 96:1 96:2 97:1
Elanders b Peab b Rottneros Elekta b KM b
Wallenstam b Tryckindustri b Nobel Biocare Spendrups b Strålfors b
Nolato Matteus B&N b Diös OEM b
Skanditek Active b HL Display b Nolato Bong b
Midway b Wallenstam b Piren b OEM b Svolder b
Piren b Spendrups b Ljungberggruppenb Active b Finnveden b
Elgruppen b Nea b Westergyllen b Strålfors b Fastpartner
OEM b Hexagon b Munksjö Nordiska Sintercast a
Svolder b Kalmar ind. IBS b Munksjö Elanders b
Concordia b Svolder b Nea b Elanders b TV4 a
97:2 98:1 98:2 99:1 99:2
Concordia b Kalmar ind. Nobel Biocare Midway b Wallenstam b
Bilia a IMS Rottneros H&Q NH Nordiska
Wallenstam b Klippan Brio b Brio b Celsius b
Doro Finnveden b Spendrups b Skanditek Midway b
IMS NH Nordiska HL Display b Tivox b Peab b
Heba b Diös Active b Fastpartner J&W
Midway b StenaLine b Heba b B&N b Strålfors b
Scandiaconsult Tryckindustri b Senea Måldata b Esselte b
Rottneros Elgruppen b Fjällräven b Elgruppen b OEM b
Norrporten Ångpanneför. b Sintercast a Realia b Ljunberggruppen b
Portfolio 3 – Swedish equity mixed portfolio
Reallocation period / Random numbers
95:1 95:2 96:1 96:2 97:1 97:2 98:1 98:2 99:1 99:2
71* 14* 30* 19* 34* 38* 20* 39* 56* 77*
79* 47* 79* 32* 60* 6* 49* 14* 37* 56*
14 15 9 4 19 21 14 11 13 10
9 9 2 3 15 9 2 20 15 15
13 6 3 6 20 19 21 17 18 17
Reallocation period / Stocks
95:1 95:2 96:1 96:2 97:1
Elanders b Munksjö Peab b Doro Wallenstam b
H&Q Sintercast a H&Q Platzer b VLT b
Perstorp b Trelleborg Lundbergs b SSAB b Bure
Lundbergs b Lundbergs b Avesta Sheffield Höganäs b Trelleborg
Kinnevik b Svedala Höganäs b Svedala Custos a
Astra b Ericsson b Astra b Astra b Ericsson b
Ericsson b Astra b Ericsson b Ericsson b Astra b
Volvo b Volvo b Volvo b Volvo b ABB a
Asea a Asea a Asea a ABB a Volvo b
Electrolux b Sandvik b SHB a Sandvik b SHB a
97:2 98:1 98:2 99:1 99:2
StenaLine b Elgruppen b IMS Brio b Svolder b
HLDisplay b Skanditek Munksjö Concordia b Brio b
Latour Perstorp b Enator Kinnevik b NCC b
JM b Avesta Sheffield Custos a Trelleborg Trelleborg
Bure Latour SAS Atle SAS
Ericsson b Ericsson b Ericsson b Ericsson b Ericsson b
Astra b Astra b Astra b AstraZeneca AstraZeneca
Volvo b Volvo b H&M b H&M b H&M b
Sparbanken a FSPB. a FSPB. a SHB a ABB Ltd. a
H&M b H&M SHB a FSPB. a Skandia
Appendix 4 – Results at the 95% confidence level
Method              MRB    RMSRB  APV    FoOC   MNtADC  AMoTEtRM  MMoTEtRM  CBRMaAVoO  MRBfRMStDLoC
OMX-HS 250d         0,033  0,146  0,173  0,943  1,039   1,282     3,038     0,232      0,013
Small-HS 250d       0,031  0,160  0,195  0,943  1,056   1,463     4,031     0,166      0,016
Mix-HS 250d         0,012  0,161  0,155  0,953  0,980   1,322     3,980     0,221      0,013
OMX-HS 125d         0,083  0,131  0,401  0,939  1,076   1,303     2,843     0,303      0,006
Small-HS 125d       0,049  0,164  0,378  0,941  1,069   1,532     5,149     0,150      0,010
Mix-HS 125d         0,077  0,151  0,388  0,939  1,063   1,453     3,857     0,247      0,000
OMX-EqWMA 50d       0,006  0,139  0,474  0,952  0,983   1,185     2,762     0,352      0,004
Small-EqWMA 50d     0,001  0,148  0,498  0,947  1,025   1,371     2,986     0,244      0,017
Mix-EqWMA 50d       0,004  0,140  0,467  0,953  0,987   1,326     3,241     0,279      0,003
OMX-EqWMA 125d      0,050  0,107  0,230  0,953  0,953   1,146     2,585     0,255      0,008
Small-EqWMA 125d    0,040  0,129  0,264  0,951  0,996   1,334     4,149     0,151      0,028
Mix-EqWMA 125d      0,043  0,122  0,208  0,957  0,926   1,280     3,698     0,191      0,015
OMX-EqWMA 250d      0,065  0,187  0,102  0,957  0,965   1,184     2,778     0,204      0,036
Small-EqWMA 250d    0,058  0,172  0,110  0,957  0,946   1,274     3,652     0,184      0,001
Mix-EqWMA 250d      0,076  0,185  0,118  0,966  0,899   1,217     3,500     0,183      0,013
OMX-ExpWMA 0.94     0,020  0,161  0,745  0,951  0,990   1,200     2,499     0,397      0,023
Small-ExpWMA 0.94   0,025  0,180  0,919  0,950  0,997   1,397     2,698     0,306      0,034
Mix-ExpWMA 0.94     0,032  0,165  0,808  0,943  1,069   1,300     3,012     0,335      0,055
OMX-ExpWMA 0.97     0,017  0,086  0,398  0,956  0,953   1,159     2,519     0,367      0,024
Small-ExpWMA 0.97   0,009  0,094  0,472  0,955  0,966   1,327     2,693     0,266      0,032
Mix-ExpWMA 0.97     0,006  0,089  0,427  0,955  0,959   1,264     2,987     0,300      0,016
Appendix 5 – Results at the 99% confidence level
Method              MRB    RMSRB  APV    FoOC   MNtADC  AMoTEtRM  MMoTEtRM  CBRMaAVoO  MRBfRMStDLoC
OMX-HS 250d         0,001  0,167  0,228  0,984  1,052   1,173     2,221     0,252      0,024
Small-HS 250d       0,161  0,160  0,316  0,986  1,164   1,268     1,911     0,186      0,096
Mix-HS 250d         0,093  0,198  0,287  0,988  1,050   1,159     2,575     0,200      0,003
OMX-HS 125d         0,088  0,154  0,416  0,979  1,151   1,317     2,300     0,287      0,022
Small-HS 125d       0,027  0,196  0,640  0,974  1,432   1,566     2,230     0,177      0,192
Mix-HS 125d         0,003  0,183  0,571  0,978  1,280   1,489     2,621     0,204      0,123
OMX-EqWMA 50d       0,000  0,142  0,474  0,989  1,014   1,154     1,952     0,352      0,013
Small-EqWMA 50d     0,053  0,159  0,498  0,977  1,221   1,418     2,111     0,244      0,063
Mix-EqWMA 50d       0,040  0,144  0,467  0,982  1,181   1,298     2,291     0,279      0,009
OMX-EqWMA 125d      0,043  0,105  0,230  0,989  1,009   1,161     1,827     0,255      0,024
Small-EqWMA 125d    0,015  0,106  0,264  0,977  1,232   1,447     2,933     0,151      0,016
Mix-EqWMA 125d      0,005  0,103  0,208  0,980  1,177   1,361     2,614     0,191      0,034
OMX-EqWMA 250d      0,059  0,186  0,102  0,991  0,964   1,211     1,964     0,204      0,008
Small-EqWMA 250d    0,002  0,148  0,110  0,982  1,159   1,534     2,582     0,184      0,058
Mix-EqWMA 250d      0,037  0,174  0,118  0,987  1,032   1,244     2,474     0,183      0,065
OMX-ExpWMA 0.94     0,026  0,163  0,745  0,986  1,029   1,093     1,767     0,397      0,024
Small-ExpWMA 0.94   0,077  0,192  0,919  0,976  1,257   1,409     1,907     0,306      0,059
Mix-ExpWMA 0.94     0,068  0,173  0,808  0,980  1,169   1,319     2,129     0,335      0,047
OMX-ExpWMA 0.97     0,011  0,090  0,398  0,990  0,992   1,096     1,781     0,367      0,025
Small-ExpWMA 0.97   0,044  0,108  0,472  0,978  1,169   1,409     1,903     0,266      0,000
Mix-ExpWMA 0.97     0,031  0,097  0,427  0,982  1,134   1,271     2,111     0,300      0,039
Appendix 6 – Mean Relative Bias
[Chart: Mean Relative Bias (MRB) by portfolio/method, 95% level]
[Chart: Mean Relative Bias (MRB) by portfolio/method, 99% level]
Appendix 7 – Root Mean Squared Relative Bias
[Chart: Root Mean Squared Relative Bias (RMSRB) by portfolio/method, 95% level]
[Chart: Root Mean Squared Relative Bias (RMSRB) by portfolio/method, 99% level]
Appendix 8 – Annualized Percentage Volatility
[Chart: Annualized Percentage Volatility (APV) by portfolio/method, 95% level]
[Chart: Annualized Percentage Volatility (APV) by portfolio/method, 99% level]
Appendix 9 – Fraction of Outcomes Covered
[Chart: Fraction of Outcomes Covered (FoOC) by portfolio/method, 95% level]
[Chart: Fraction of Outcomes Covered (FoOC) by portfolio/method, 99% level]
Appendix 10
Fractions of Outcomes Covered – 95% level
Method              Z-value  Probability  Significance
OMX-HS 250d         1,075    0,859
OMX-HS 125d         1,853    0,968
OMX-EqWMA 50d       0,350    0,363
OMX-EqWMA 125d      0,479    0,316
OMX-EqWMA 250d      1,127    0,130
OMX-ExpWMA 0.94     0,220    0,413
OMX-ExpWMA 0.97     0,998    0,159
SMALL-HS 250d       1,075    0,859
SMALL-HS 125d       1,464    0,928
SMALL-EqWMA 50d     0,428    0,666
SMALL-EqWMA 125d    0,091    0,464
SMALL-EqWMA 250d    1,127    0,130
SMALL-ExpWMA 0.94   0,039    0,516
SMALL-ExpWMA 0.97   0,868    0,193
MIX-HS 250d         0,473    0,318
MIX-HS 125d         1,861    0,969
MIX-EqWMA 50d       0,473    0,318
MIX-EqWMA 125d      1,122    0,131
MIX-EqWMA 250d      2,418    0,008        *
MIX-ExpWMA 0.94     1,212    0,887
MIX-ExpWMA 0.97     0,862    0,194

Fractions of Outcomes Covered – 99% level
Method              Z-value  Probability  Significance
OMX-HS 250d         2,117    0,983        *
OMX-HS 125d         3,820    1,000        ***
OMX-EqWMA 50d       0,414    0,661
OMX-EqWMA 125d      0,418    0,662
OMX-EqWMA 250d      0,437    0,331
OMX-ExpWMA 0.94     1,550    0,939
OMX-ExpWMA 0.97     0,153    0,439
SMALL-HS 250d       1,550    0,939
SMALL-HS 125d       5,523    1,000        ***
SMALL-EqWMA 50d     4,672    1,000        ***
SMALL-EqWMA 125d    4,678    1,000        ***
SMALL-EqWMA 250d    2,969    0,999        **
SMALL-ExpWMA 0.94   4,955    1,000        ***
SMALL-ExpWMA 0.97   4,104    1,000        ***
MIX-HS 250d         0,702    0,759
MIX-HS 125d         4,394    1,000        ***
MIX-EqWMA 50d       2,974    0,999        **
MIX-EqWMA 125d      3,542    1,000        ***
MIX-EqWMA 250d      0,986    0,838
MIX-ExpWMA 0.94     3,542    1,000        ***
MIX-ExpWMA 0.97     2,690    0,996        **
Appendix 11
Fractions of Outcomes Covered – Difference between small-cap and OMX portfolios
Zvalue Probability Significance
OMX vs SMALL HS 250d – 95% 0,000 0,500
OMX vs SMALL HS 125d – 95% 0,252 0,401
OMX vs SMALL EqWMA 50d – 95% 0,548 0,708
OMX vs SMALL EqWMA 125d – 95% 0,280 0,610
OMX vs SMALL EqWMA 250d – 95% 0,000 0,500
OMX vs SMALL ExpWMA 0.94 – 95% 0,184 0,573
OMX vs SMALL ExpWMA 0.97 – 95% 0,097 0,539
OMX vs SMALL HS 250d – 99% 0,327 0,372
OMX vs SMALL HS 125d – 99% 0,797 0,787
OMX vs SMALL EqWMA 50d – 99% 2,307 0,989 *
OMX vs SMALL EqWMA 125d – 99% 2,308 0,990 *
OMX vs SMALL EqWMA 250d – 99% 2,072 0,981 *
OMX vs SMALL ExpWMA 0.94 – 99% 1,749 0,960 *
OMX vs SMALL ExpWMA 0.97 – 99% 2,421 0,992 **
Appendix 12 – Multiple Needed to Attain Desired Coverage
[Chart: Multiple Needed to Attain Desired Coverage (MNtADC) by portfolio/method, 95% level]
[Chart: Multiple Needed to Attain Desired Coverage (MNtADC) by portfolio/method, 99% level]
Appendix 13 – Average Multiple of Tail Event to Risk Measure
[Chart: Average Multiple of Tail Event to Risk Measure (AMoTEtRM) by portfolio/method, 95% level]
[Chart: Average Multiple of Tail Event to Risk Measure (AMoTEtRM) by portfolio/method, 99% level]
Appendix 14 – Maximum Multiple of Tail Event to Risk Measure
[Chart: Maximum Multiple of Tail Event to Risk Measure (MMoTEtRM) by portfolio/method, 95% level]
[Chart: Maximum Multiple of Tail Event to Risk Measure (MMoTEtRM) by portfolio/method, 99% level]
Appendix 15 – Correlation between Risk Measure and Absolute
Value of Outcome
[Chart: Correlation between Risk Measure and Absolute Value of Outcome (CBRMaAVoO) by portfolio/method, 95% level]
[Chart: Correlation between Risk Measure and Absolute Value of Outcome (CBRMaAVoO) by portfolio/method, 99% level]
Appendix 16
Correlation coefficient – 95% level
Method              t-value  Probability  Significance
OMX-HS 250d         8,45     0,00         ***
OMX-HS 125d         11,26    0,00         ***
OMX-EqWMA 50d       13,29    0,00         ***
OMX-EqWMA 125d      9,32     0,00         ***
OMX-EqWMA 250d      7,36     0,00         ***
OMX-ExpWMA 0.94     15,32    0,00         ***
OMX-ExpWMA 0.97     13,98    0,00         ***
SMALL-HS 250d       5,95     0,00         ***
SMALL-HS 125d       5,37     0,00         ***
SMALL-EqWMA 50d     8,92     0,00         ***
SMALL-EqWMA 125d    5,40     0,00         ***
SMALL-EqWMA 250d    6,64     0,00         ***
SMALL-ExpWMA 0.94   11,36    0,00         ***
SMALL-ExpWMA 0.97   9,78     0,00         ***
MIX-HS 250d         8,03     0,00         ***
MIX-HS 125d         9,00     0,00         ***
MIX-EqWMA 50d       10,28    0,00         ***
MIX-EqWMA 125d      6,90     0,00         ***
MIX-EqWMA 250d      6,59     0,00         ***
MIX-ExpWMA 0.94     12,58    0,00         ***
MIX-ExpWMA 0.97     11,14    0,00         ***

Correlation coefficient – 99% level
Method              t-value  Probability  Significance
OMX-HS 250d         9,21     0,00         ***
OMX-HS 125d         10,58    0,00         ***
OMX-EqWMA 50d       13,29    0,00         ***
OMX-EqWMA 125d      9,32     0,00         ***
OMX-EqWMA 250d      7,36     0,00         ***
OMX-ExpWMA 0.94     15,32    0,00         ***
OMX-ExpWMA 0.97     13,98    0,00         ***
SMALL-HS 250d       6,72     0,00         ***
SMALL-HS 125d       6,38     0,00         ***
SMALL-EqWMA 50d     8,92     0,00         ***
SMALL-EqWMA 125d    5,40     0,00         ***
SMALL-EqWMA 250d    6,64     0,00         ***
SMALL-ExpWMA 0.94   11,36    0,00         ***
SMALL-ExpWMA 0.97   9,78     0,00         ***
MIX-HS 250d         7,23     0,00         ***
MIX-HS 125d         7,36     0,00         ***
MIX-EqWMA 50d       10,28    0,00         ***
MIX-EqWMA 125d      6,90     0,00         ***
MIX-EqWMA 250d      6,59     0,00         ***
MIX-ExpWMA 0.94     12,58    0,00         ***
MIX-ExpWMA 0.97     11,14    0,00         ***
Appendix 17 – Mean Relative Bias for Risk Measure Scaled
to Desired Level of Coverage
[Chart: Mean Relative Bias for Risk Measure Scaled to Desired Level of Coverage (MRBfRMStDLoC) by portfolio/method, 95% level]
[Chart: Mean Relative Bias for Risk Measure Scaled to Desired Level of Coverage (MRBfRMStDLoC) by portfolio/method, 99% level]
Appendix 18 – Frequencies of Portfolio Returns vs Normal Distribution
Left tail
σ | Frequency, OMX | Frequency, Normal | σ | Frequency, SMALL | Frequency, Normal | σ | Frequency, MIXED | Frequency, Normal
4,9. 1 1 4,9. 1 0 4,9. 1 0
4,8. 0 0 4,8. 0 0 4,8. 0 0
4,7. 1 0 4,7. 0 0 4,7. 1 0
4,6. 0 0 4,6. 0 0 4,6. 0 0
4,5. 1 0 4,5. 0 0 4,5. 0 0
4,4. 0 0 4,4. 2 0 4,4. 1 0
4,3. 1 0 4,3. 0 0 4,3. 0 0
4,2. 1 0 4,2. 2 0 4,2. 2 1
4,1. 2 0 4,1. 0 0 4,1. 0 0
4,0 1 0 4,0 0 0 4,0 0 0
3,9. 2 0 3,9. 0 0 3,9. 0 0
3,8. 0 2 3,8. 1 0 3,8. 1 0
3,7. 0 0 3,7. 0 0 3,7. 1 0
3,6. 1 1 3,6. 1 0 3,6. 1 0
3,5. 1 0 3,5. 1 0 3,5. 1 1
Right tail
σ | Frequency, OMX | Frequency, Normal | σ | Frequency, SMALL | Frequency, Normal | σ | Frequency, MIXED | Frequency, Normal
3,6. 0 0 3,6. 0 0 3,6. 0 2
3,7. 1 1 3,7. 0 0 3,7. 0 0
3,8. 0 1 3,8. 0 0 3,8. 0 0
3,9. 1 1 3,9. 0 0 3,9. 1 0
4,0 1 0 4,0 0 0 4,0 0 0
4,1. 0 0 4,1. 0 0 4,1. 0 0
4,2. 1 0 4,2. 1 0 4,2. 0 0
4,3. 0 0 4,3. 0 0 4,3. 0 0
4,4. 1 0 4,4. 0 0 4,4. 0 0
4,5. 0 1 4,5. 0 0 4,5. 0 0
4,6. 0 1 4,6. 1 0 4,6. 0 0
4,7. 1 0 4,7. 0 0 4,7. 1 0
4,8. 0 0 4,8. 0 0 4,8. 0 0
4,9. 1 0 4,9. 0 0 4,9. 0 0
5,0 1 0 5,0 1 0 5,0 1 1