
PRACTICAL OPERATIONAL RISK QUANTIFICATION

Cristian R. Severina
S1410779

An extended essay submitted in partial fulfilment of the requirements for the MSc in Risk Management, Glasgow Caledonian University.

December 2017
This dissertation is my own original work and has not been submitted elsewhere in
fulfilment of the requirements of this or any other award. In accordance with academic
referencing conventions, due acknowledgement has been given to the work of others.

Word count: 12,714

Signature_____________________________________________ Date

After three years working and studying, I would like to thank the following people for their
inspiration, support and/or encouragement:

My parents, Rene and Nilda Antonio Rua (Spain)


José Luis Arrufat (Argentina) Jonathan Blackhust (UK)
Antonio Roca (Luxembourg) Joanne Charlick (UK)
Mary Sheedy (Ireland) Simon Hemsley (UK) and Alex Scott (UK)

1. ABSTRACT

This is a technical paper that explores several aspects of Operational Risk including history,
regulation, management framework and modelling. It focuses on providing methodologies
and the application of statistical theory to the quantification of Operational Risk losses and
frequency in cases where the availability of data is limited. It should be noted that, due to the
technical nature of this paper, although the literature review provides a good summary, most
of the justification and supporting literature is cited directly in the methodology sections. The
objective of the Operational Risk modelling is to quantify the Operational Risk capital
requirement. This is achieved through Monte Carlo simulation, the main method used to model
and extrapolate stress values to a 1 in 200-year loss event. Finally, the paper describes how to test the stability of the
simulation and introduces correlations to the quantification. The body of the paper describes
the theoretical framework and the appendices provide examples of practical applications of
the theoretical concepts. The main conclusions of this paper are that an appropriate
Operational Risk management framework needs to be in place, which includes establishing
risk appetite. Monte Carlo simulation, although a powerful tool, has limitations which should
be considered when drawing conclusions about the capital figure. The severity of the stress
component or the aggregated loss function can be modelled using one or two functions,
assuming independence or dependence amongst the stresses, in order to replicate reality as
accurately as possible. Finally, all decisions in relation to methods
used, distribution functions, correlations, etc. must be properly backed by statistical testing,
qualitative rationale or other relevant research.

2. TABLE OF CONTENTS

1. Abstract ................................................................................................................................................. 3
2. Table of contents .................................................................................................................................. 4
3. Introduction .......................................................................................................................................... 9
4. Background, aims and objectives ....................................................................................................... 10
4.1 Research aim and specific objectives.......................................................................................... 11
4.2 Research objectives .................................................................................................................... 11
5. Literature review................................................................................................................................. 12
6. Research methodology, philosophy, strategy and data ..................................................................... 15
7. Introduction to Operational Risk ........................................................................................................ 18
7.1 History of Operational Risk ......................................................................................................... 18
7.2 Regulatory framework ................................................................................................................ 19
7.3 The Operational Risk Management framework.......................................................................... 20
7.3.1 Governance ......................................................................................................................... 21
7.3.2 Risk appetite........................................................................................................................ 21
7.3.3 Risk and control self-assessment (“R&CSA”) ...................................................................... 22
7.3.4 Event and loss recording ..................................................................................................... 23
7.3.5 Indicators or metrics ........................................................................................................... 24
7.3.6 Reporting............................................................................................................................. 25
7.3.7 Modelling stress and scenario testing ................................................................................ 26
8. Operational Risk quantification .......................................................................................................... 29
8.1 Operational Risk stress testing design ........................................................................................ 29
8.2 Stress testing quantification of stress severity ........................................................................... 32
8.2.1 Best estimate mathematical design .................................................................................... 35
8.2.2 Simulation mathematical design......................................................................................... 35
8.2.3 Use of distribution functions in simulations ....................................................................... 35
8.2.4 Determination of the simulation result .............................................................................. 39
8.3 Stress testing quantification of stress frequency........................................................................ 40
8.3.1 Internal data........................................................................................................................ 40
8.3.2 External data ....................................................................................................................... 40
8.3.3 Subject matter expert (“SME”) opinion .............................................................................. 41
8.3.4 Use of basic probability laws............................................................................................... 41

8.3.5 Distributions for stress frequency ....................................................................................... 42
8.4 Construction of the loss function................................................................................................ 42
8.4.1 Severity modelling............................................................................................................... 43
8.4.2 Frequency modelling........................................................................................................... 46
8.4.3 Compound stress distribution function .............................................................................. 46
8.4.4 Loss function to be simulated ............................................................................................. 46
8.4.5 Application of the Chebyshev ‘s theorem ........................................................................... 47
8.5 Statistical distributions for the frequency and severity .............................................................. 48
8.5.1 Distributions to model Operational Risk event severity ..................................................... 48
8.5.2 Distributions to model Operational Risk event frequency ................................................. 50
8.6 Mathematical structure of the aggregated loss function ........................................................... 52
8.6.1 One distribution model ....................................................................................................... 52
8.6.2 Two distribution model ....................................................................................................... 52
8.6.3 Stability of the Monte Carlo Simulation.............................................................................. 55
8.7 Extrapolation to 1 in 200 year event .......................................................................................... 56
8.8 Introduction of correlations in the Operational Risk capital calculation .................................... 57
9. Limitations of this modelling approach .............................................................................................. 59
10. Conclusions ..................................................................................................................................... 61
11. Appendices ...................................................................................................................................... 62
Appendix 1 Regulatory requirements in relation to Operational Risk ............................................. 62
Appendix 2 Example of Risk Appetite ................................................................................................. 64
Appendix 3 Example of a Key Risk Indicator (KRI) ............................................................................ 65
Appendix 4 Example of “Hardware failure stress” ............................................................................. 69
Appendix 5 Example of the splice function ......................................................................................... 93
Appendix 6 Example of the Kernel density function .......................................................................... 96
Appendix 7 Example estimation of the Poisson distribution .......................................................... 100
Appendix 8 Operational risk loss function ........................................................................................ 103
ii) Calibration based on the stress losses: ...................................................................................... 104
Appendix 9 Operational Risk loss function using two separate functions ..................................... 109
Appendix 10 Extrapolation to 1 in 200-year event .......................................................................... 118
Appendix 11 Introduction of correlations......................................................................................... 120
Appendix 12 Turnitin submission ..................................................................................................... 125
12. References .................................................................................................................................... 126

Figures Description
1 Table with comparison between Pillar 1 and Pillar 2
2 Table with comparison between Pillar 2A and Pillar 2B
3 Risk constellation analysis
4 Table describing the stress testing summary
5 Table describing the stress calculation details
6 Table describing the split data set method
7 Table with details of the splice function
8 Chart illustrating the lognormal, gamma, Pareto and Weibull distributions
9 Table that describes the two-distribution model
10 Table that describes the splice function
11 Chart illustrating the simulation stability test
12 Table describing UK regulation about Operational Risk
13 Chart describing the monthly number of complaints 2012-2016
14 Chart describing the monthly number of complaints (99% confidence boundaries)
15 Chart illustrating the monthly number of complaints (Trigger and appetite)
16 Chart describing the daily FTSE100 index: 1/10/2006-13/10/2016
17 Histogram describing the daily FTSE100 index fluctuations: 1/10/2006-13/10/2016
18 Chart describing the ratio equity / funds change over time
19 Table describing the average daily product price movement
20 Table with stress testing final results
21 Table describing the stress testing severity quantification
22 Table describing the stress testing frequency quantification
23 Table describing controls
24 Table describing the splice function calculation
25 Chart illustrating the Kernel density function
26 Table describing the Kernel density function
27 Chart illustrating the Poisson distribution with Lambda = 0.05
28 Table describing the estimation of the Lambda parameter of a Poisson
distribution
29 Chart illustrating a Poisson distribution with Lambda = 4.25
30 Table describing the quantification of the loss function
31 Chart illustrating the loss function (Simulation output)
32 Table describing the stress frequencies

33 Table describing the estimation of outliers
34 Table describing the split data function
35 Chart illustrating the loss function (D2 is lognormal)
36 Chart illustrating a lognormal distribution (47.7 ; 2.37)
37 Chart illustrating the estimation of “c” Pareto parameter: Simulation output
38 Chart illustrating the loss function (D2 is Pareto)
39 Table describing the application of the splice function
40 Chart illustrating the Lognormal (0.32936; 3.9754) function
41 Chart illustrating the Splice function (D2 is lognormal)
42 Table describing the extrapolation to stresses
43 Table describing correlation levels
44 Table providing an example of correlation analysis
45 Correlation matrix
46 Chart illustrating the loss function output that factors-in correlation
47 Table showing the 1/200 extrapolation with dependencies

3. INTRODUCTION

A basic Enterprise Risk Management (“ERM”) principle is that “a risk, once identified, is no
longer a risk, it is a management problem” (Kenett, 2011, p. 10). Management of Operational
Risk cannot be ignored as a problem anymore. In fact, this matter has risen up the agenda
of all financial services firms (and regulators) worldwide. The Operational Risk agenda not
only includes mitigation strategies, but also quantification of the Operational Risk exposure.
The latter is the subject of this dissertation.

The Basel Accord definition of Operational Risk is widely accepted within the Financial
Services. According to the Basel Committee, Operational Risk is “the risk of loss resulting
from inadequate or failed internal processes, people and systems or from external events.
This definition includes legal risk, but excludes strategic and reputational risk” (The Basel
Committee on Banking Supervision, 2011, p. 3).

In line with the Basel definition, this dissertation will propose methods to model and
quantify “failed internal processes, people and systems or external events” in terms of any
potential consequential financial loss.

4. BACKGROUND, AIMS AND OBJECTIVES

The question this dissertation addresses is: How can Operational Risk capital be
quantified in a consistent manner across the financial services industry? Regulation in the
financial services industry has evolved as Risk Management methods (Operational Risk
management in particular) have developed over the last few years. Yet there is no clear
consensus on how to quantify Operational Risk in a consistent and uniform way (Moosa,
2008, p. 1).

Operational Risk modelling is a very relevant topic that will help risk managers and
investment firms.

A simple approach to quantify Operational Risk has been developed to contribute to the
Operational Risk practices of investment firms. It should be pointed out that this is a
technical paper that provides a structured methodology, sufficiently flexible to be applied to
most financial services organisations.

4.1 RESEARCH AIM AND SPECIFIC OBJECTIVES

The research aim is to provide a practical methodology to quantify the Operational Risk
capital requirement in cases where limited loss data is available.

4.2 RESEARCH OBJECTIVES

The objectives of this research are to:

i) Outline the basic elements of an Operational Risk framework.


ii) Describe the steps and robust mechanisms to design and quantify Operational
Risk stress testing.
iii) Outline, step by step, the mathematical structure of the aggregated Operational
Risk loss function.
iv) Analyse the stability of the Monte Carlo simulation model.
v) Determine the Operational Risk capital requirement.
vi) Describe an extrapolation methodology to a 1 in 200-year event for small
insurance companies.
vii) Provide a methodology to introduce correlations to Operational Risk
quantification.

5. LITERATURE REVIEW

Risk Management as a discipline is quite recent and still evolving. Operational Risk
management is even more recent. In the last couple of years a wealth of papers and books
have been produced to assist financial services firms in managing and quantifying their
exposure to Operational Risk. The quantification models described by Blunden (2013) are
the Internal Measurement approach (IMA), the Loss Distributions approach (LDA) and the
Scorecard approach which are suggested by the Basel Committee. This paper will build upon
the LDA, which is a probabilistic methodology based on Monte Carlo simulations (Blunden,
2013). In addition, COSO (2012) provides comprehensive details of an Operational Risk
framework, which this paper incorporates across its chapters.

Power (2003) gives a valuable analysis of the Operational Risk development over time;
which is complemented by King (2001) and Dionne (2013) who give relevant historical
landmarks in the history of Operational Risk. Within the historical context of Operational
Risk, banking accords and EU regulations are very important because of their influence on
where Operational Risk stands today in the risk universe of financial institutions. In relation
to this, the Chartered Institute for Securities & Investments (2014) discusses the basic details
of the main Operational Risk related accords and other EU regulations, such as the Basel
Accords and the EU Capital Requirements Directive (“CRD”).

King (2001) suggests the creation of the compound Delta-Extreme Value Theory (“EVT”)
function. This method entails the split of an operational loss function in two levels:
operational losses and extreme losses. This paper builds upon this method and describes the
data set split and the splice function approach. These methods are important because they
are able to model two patterns of losses in one function.

Balonce (2012) introduces the use of the Kernel density function to model operational losses.
This involves the use of complex parametric and semiparametric models, which are
extremely difficult to apply without appropriate software aid. However, this paper takes the
core principles of the Kernel density function and converts them into a histogram-like
probability density model, which is a practical solution to the problem of fitting distributions
to data.

Although some valid points in relation to the overall Operational Risk framework have been
extracted from other relevant literature such as Chaudhuri (2016), Kenett (2011) and
Klugman (2004), they elaborate on complex modelling theories such as Bayes networks,
ontology theory and elliptical distributions, which are beyond the scope of this paper.
Nevertheless, this literature presents methodologies to model Operational Risk related to IT
failures and outsourcing failures, for example, which to some extent supports the rationale in
the IT failure example of this paper.

These two papers “The Multivariate g-and-h distribution” by C. Field and M. Genton and
“Quantitative Modelling of Operational Risk: Between g-and-h and EVT” by M. Degen et al,
provide an insight into modelling extreme losses using a distribution called the g-and-h. These
papers introduce higher levels of complexity which are more suitable for large banks than for
investment firms; therefore the g-and-h distribution will not be used for the purpose of
modelling in this paper. Balonce (2012) describes the use of the
Champernowne distribution for the same purpose. Champernowne converges to Pareto in
the tail (Balonce, 2012, p. 20). This paper uses Pareto instead as this is more widely used in
the industry to model extreme losses. It would, however, be an interesting topic for another
paper to carry out further research on the application of more complex extreme loss
distribution functions. This may be more suitable for larger and more complex financial
enterprise group structures.

Operational Risk capital assessment is a regulatory requirement. The UK regulator rules set
the minimum standard that the relevant firms must adhere to. It is essential that as part of
the literature review, these rules are considered (refer to appendix 1) to ensure that the
proposed methodologies meet and exceed these regulations. In line with the UK regulator’s
expectations, Hussain (2000) and Dexter (2004) encourage a pragmatic, data-driven
approach to drawing conclusions, which is the approach followed in this paper. This means
that all assumptions and decisions should be properly supported by the interpretation of
data. Also in line with the UK regulator’s expectations, external data must be factored into
the quantification methodology; ORX (2017) and the ORX consortium et al. (2009) provide a
recognised analysis of anonymised external loss data.

Anderson (2014), Chao (1980) and Evans (2000) are the main sources of statistical theory
such as the Chi-square test, Chebyshev theorem, probability laws, etc. which are applied to
the Operational Risk modelling. In fact, there is a wealth of statistical theory that could
contribute even more to the quantification of the Operational Risk.

6. RESEARCH METHODOLOGY, PHILOSOPHY, STRATEGY AND DATA

The core strategy must follow a pragmatic approach and reach data driven conclusions
(Hussain, 2000, p. 2) and (Dexter, 2004, p. 2). Most importantly, it must provide a modelling
approach that turns complexity into simplicity.

The nature of this research requires a positivistic philosophical approach. This entails:

i) The intensive use of data and statistical theory.


ii) Results that are subject to quantitative analysis.
iii) A focus on data and facts.
iv) Replication of variable patterns through mathematical models.
v) Incorporation of subject matter experts (SME) opinion.
vi) Presentation of results and findings that in themselves do not claim to be exact
but are sufficiently valid and reliable to allow useful and practical decisions to be
made subsequently.
vii) A quantitative research approach which analyses the theory for application to real
data.

Point vii is further described below:

High level statements: The starting point of stress testing is the firm’s risk register which
captures the output of risk assessments. This data indicates which risks need to be stressed
and quantified.

Detailed level: Based on the nature of the risks identified, further data needs to be gathered
and analysed for the stress quantification of impact and likelihood.

Aggregation: The data generated by the stresses, internal loss data and potentially other
sources, are aggregated to construct the firm’s Operational Risk loss function.

The research strategy: The nature of the stress will require a design which will dictate the
strategy and the most suitable data to perform a quantification. In general, Operational Risk
stresses will require the following approaches for their quantification:

i) Cross-sectional: This is data that reflects information at a specific point in time.


ii) Longitudinal: This is data in a time-series format, that is, data gathered over a time frame.

Data collection: Derived from the research strategy explained in the previous point, each
stress will require data depending on the nature of the stress. The following are likely to be
the sources:

i) Management information: This includes reports produced for management to


monitor performance and risks such as management accounts, trading activity,
loss data, complaints data etc. This type of data is classed as ‘internal data’ as it is
mostly produced within the organisation.
ii) Case studies: There is an abundance of information sources covering Operational
Risk events, such as the FCA fines database, books describing major Operational
Risk scandals, media coverage etc. This is classed as ‘external data’ which provides
additional context to the quantification process.

External loss data: This includes loss data that has already been processed by a third party
and anonymised so that it can be used publicly. ORX, for example, is a non-profit organisation
which issues a report every year with details of the Operational Risk events which occurred
in the financial services sector within the last 12 months. There are other similar sources,
which make the quantification more comprehensive.

Data analysis techniques: It is necessary to process data using software tools to understand
patterns. For the purpose of this dissertation and bearing in mind the underlying objective
to keep the quantification as simple as possible, the data processing capabilities will be
limited to what can be undertaken using Microsoft Excel and an Excel add-on software called
“@Risk”.

7. INTRODUCTION TO OPERATIONAL RISK

7.1 HISTORY OF OPERATIONAL RISK

The study of Risk Management in general commenced after World War II. In the 1960s,
engineers developed technological Risk Management which partially covered aspects of
Operational Risk (Power, 2003, p. 2). The highly contagious nature of Operational Risk in the
financial services (especially in banking) obliged governments and regulators to intervene.
In 1988, the first Basel Accord (Basel I) was signed within the banking supervision framework,
requiring banks to allocate capital against Operational Risk (Power, 2003, p. 12). Basel I
evolved as a result of the occurrence of high-profile Operational Risk events, which has led to
growing experience in managing Operational Risk. Academic material on the matter has also
surged, along with new software and greater computing power. The term ‘Operational Risk’
was officially coined in 1991 by the Committee of Sponsoring Organisations of the Treadway
Commission (COSO) (Power, 2003, p. 2). Operational Risk started to become a major concern
due to a series of well-known scandals in the 1990s and early 2000s, such as Enron in 2001
(Dionne, 2013, p. 3) and the Barings Bank bankruptcy in 1995, whose perpetrator, Nick
Leeson, was described by Power (2003, p. 2) as the “true author and unwitting inventor of
Operational Risk”.

Operational Risk is a relatively new discipline; it is considered the next boundary for
protecting shareholders/stakeholders’ value by decreasing the effect of Operational Risk
over the profit of a firm (King, 2001, p. 3). Since the 1990s, the Chief Risk Officer (“CRO”) has
emerged as a key player in the activities of the company (Dionne, 2013, p. 5).

In terms of the purpose of this paper, the capital quantification methods of Operational Risk
have also been evolving, from the Basel Accord basic approach, which is a single percentage
of the gross income (Chartered Institute for Securities & Investments, 2014, p. 117), to more
sophisticated models based on Monte Carlo simulations, Bayes theory and other statistical
and mathematical methods. The latter have received comprehensive contributions from
academia in recent years.

7.2 REGULATORY FRAMEWORK

A significant Operational Risk event, such as the ‘LIBOR scandal’ in 2008 where financial
institutions were accused of colluding to fix the London Interbank Offered Rate
(Investopedia, 2017), had a substantial negative impact on the markets and was detrimental
to consumers. The basic aim of a financial regulator is to ensure financial stability in the
markets and to protect customers’ interests (FCA, 2005, p. 7). To that end, the Operational
Risk regulatory framework has also been evolving to ensure that Operational Risk is
properly managed by financial services firms.

As previously mentioned, in 1988 the Basel Accord was the first step to impose Operational
Risk capital on banks to cover their Operational Risk exposure. Following a process of
revision and consultation, in 2001, the Basel Committee issued the new Basel Accord called
‘Basel II’ which introduced the Advanced Measurement Approach (“AMA”). This paper will
further develop the AMA. Based on Basel II, the EU issued the Capital Requirement Directive
(“CRD”), which aims to harmonise EU country regulators’ rulebooks regarding capital
requirements and Risk Management practices among financial institutions (Chartered
Institute for Securities & Investments, 2014, p. 125).

The latest revision of the Basel Accord is Basel III, which in 2011 set out comprehensive
measures aiming to improve the banking sector's capacity to absorb shocks from economic
or financial stress, to strengthen risk management and governance, and to improve the transparency and risk
disclosure of banks (Bank for International Settlements, n.d.).

Basel III is especially relevant to this paper since it establishes the requirement to carry out
stress and scenario testing to identify, understand and quantify Operational Risk exposures.

Solvency II is the equivalent of Basel III for the insurance sector. It is a comprehensive set of
rules to manage risk at an enterprise level, specifying the requirements in relation to
Operational Risk management, stress and scenario testing.

For UK investment firms with a limited licence, a summary of the relevant rules for
Operational Risk management is in Appendix 1. These rules have been considered for this
paper because of the objective to help firms satisfy these requirements.

7.3 THE OPERATIONAL RISK MANAGEMENT FRAMEWORK

A properly documented set of policies, processes and procedures that describe how a firm
manages Operational Risk forms the very basis for an effective Operational Risk
management (Blunden, 2013, p. 50). Quantification of Operational Risk should be embedded
within this framework for the quantification results to be reliable.

In accordance with Blunden (2013), an Operational Risk management framework should contain, at a minimum, the following sections:

7.3.1 Governance

The aim of good governance is to set out a system to make sure there is effective
accountability and prudent decision making at all levels within the firm (Blunden, 2013, p.
44). It requires a policy setting out the roles and responsibilities for the implementation of
Operational Risk management and Operational Risk strategy, and the overall principles to
guide management decisions.

Good governance requires the implementation of the three lines of defence approach,
necessitating a clear division between the first line of defence (business operations), the
second line of defence (risk oversight provided by the Risk Management and Compliance
function) and the third line of defence (the internal audit function) (Blunden, 2013, p. 44).

In terms of Operational Risk quantification, good governance incorporates the input from
the business in the quantification process and uses it to sense check the final results. Internal
challenge is provided by Risk Management and further challenge is made by the Risk
Committee and the Board of Directors.

7.3.2 Risk appetite

According to COSO (2017, p. 1), risk appetite is defined as “the amount of risk, on a broad
level, an organisation is willing to accept in pursuit of value. Each organisation pursues
various objectives to add value and should broadly understand the risk it is willing to
undertake in doing so.”

Risk appetite is defined per principal risk the firm identifies itself as being exposed to. In
practical terms, the Board must first define what it understands by low, medium and high
risk appetite.

Subsequently, the Board must determine the three overarching risk appetites in terms of
solvency, liquidity and earnings. These three metrics summarise the financial health of the
firm, with solvency and liquidity subject to regulatory requirements.

The Board must then set the risk appetite for the principal risks the firm is exposed to, linking
the appetite to the financial metrics of solvency, liquidity and/or earnings. This enables the
Board to bridge the gap between the occurrence of a risk and its bottom-line impact on the
firm, be it to ensure no breach of regulatory requirements or to ensure that
the expectations of shareholders/stakeholders can be met.

This paper focuses on the risk appetite related to Operational Risk exposure. This is a very
important concept and has significant implications in the Operational Risk quantification in
terms of setting the level at which the loss function determines the Operational Risk capital.
An example of this mechanism is set out in appendix 2.

7.3.3 Risk and control self-assessment (“R&CSA”)

According to Hopkin (2012, p. 139) the R&CSA process entails the identification and rating
of risks to which a firm is exposed in terms of its operations, projects and strategy. Generally,
a firm’s main risks are associated with significant changes (organisational or systems),
corporate objectives, stakeholders’ expectations (including the regulator’s), core business
processes and key dependencies. Operational Risks are inherently embedded in all these
areas.

The R&CSA is carried out by the Risk Management function in conjunction with the business.
It aims to identify those Operational Risks, rate them in terms of impact and likelihood and
take the necessary actions to mitigate their probability of occurrence (via proactive controls)
and/or their impact (via reactive controls). The output of this process is reflected in the
firm’s risk register.

This process forms the basis for the Operational Risk quantification (stress testing). In other
words, the information within the risk register must be up-to-date and accurate to ensure
that the Operational Risk quantification output is sound.

7.3.4 Event and loss recording

The risk and control self-assessment is a forward looking exercise that tries to predict what
can go wrong, takes actions to mitigate it and records it in the risk register (Blunden, 2013,
pp. 82-83) and (Barnier, 2011, p. 22).

Event and loss recording entails the process of gathering and scrutinising materialised risk
events (Kenett, 2011, p. 25). This is a backward looking exercise, aiming to investigate
Operational Risk events, which are generally caused by a control failure. However, as per the
definition of Operational Risk (section 3), such an event results in a loss arising from a failure
in processes, people or systems, or from external events (The Basel Committee on Banking
Supervision, 2011, p. 3).

Normally, risk management will assess the loss event within the business, carry out root
cause analysis and put in place new controls or improve existing controls to reduce the
probability and/or impact of such an event in the future (Barnier, 2011, pp. xxii-xxiii). The

record of this assessment is kept in the ‘Loss data register’ or ‘Operational Risk event
database’ (Blunden, 2013, p. 117) and (Association of Serbian Banks, 2006, pp. 2 & 5).

The Operational Risk event recording is a very important process because it challenges the
risk register and it may detect inconsistencies in the risk rating. It allows the business to
improve processes, reducing or mitigating the effects of these operational events and it
provides input into the quantification process.

Therefore, the Operational Risk event register must be up-to-date and as accurate as possible
to ensure good quality results of the quantification process. Further detail on this process
can be found in Blunden (2013, pp. 114-133).

7.3.5 Indicators or metrics

Key Risk Indicators (KRIs) are forward-looking indicators of events which can adversely affect firms. They
are monitored to track risk exposures and provide early warning of the materialisation of
potentially undesirable situations. Effective KRIs would enable mitigation, or avoidance
altogether, of an adverse event before it materialises (MetricStream, 2017).

For modellers, KRIs are additional data to input into the quantification of Operational Risk
capital (Operational Risk Institute, 2017).

From an Operational Risk management perspective, businesses should get into the practice
of finding KRIs to track their own risk exposure (Palermo, 2011, p. 1). Examples of KRIs
include the number of operational errors, number of complaints, number of breaches and
percentage of transaction execution error versus total number of transactions.

Historical data can be used to set boundaries within which the business is comfortable
operating; anything outside the boundaries should then trigger an investigation. Trend
analysis of KRIs is also useful to anticipate a breach of the boundaries and so determine
mitigation strategies (Prokopenko, 2012, pp. 42, 68; Moosa, 2007, p. 135, cited in
Prokopenko, 2012).
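As an illustration of the boundary-setting idea, the short Python sketch below derives approximate 99% operating boundaries from a series of monthly complaint counts using a simple normal approximation (mean plus or minus 2.58 standard deviations). The complaint figures, the normal approximation and the 2.58 multiplier are illustrative assumptions, not part of the framework described above.

import numpy as np

# Illustrative monthly complaint counts (invented data); in practice these would
# come from the firm's management information.
complaints = np.array([12, 15, 9, 14, 11, 13, 17, 10, 12, 16, 14, 13])

mean = complaints.mean()
std = complaints.std(ddof=1)   # sample standard deviation

# Approximate 99% boundaries under a normal approximation (z = 2.58); a breach
# of the upper boundary would trigger an investigation, as described above.
lower, upper = mean - 2.58 * std, mean + 2.58 * std
print(f"operating range: {lower:.1f} to {upper:.1f} complaints per month")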

Implementing KRIs in day-to-day management is an important step in reducing or
maintaining Operational Risk exposure within appetite levels and hence reducing the
required level of Operational Risk capital which will need to be held (Blunden, 2013, p. 137)
and (MetricStream, 2017). Additionally, KRIs can be thought of as inputs into the Operational
Risk quantification models (Institute and Faculty of Actuaries, 2016, p. 18). A description of
how KRIs can be set in practice is at appendix 3.

7.3.6 Reporting

Operational Risk reporting is essential to achieve a good standard of governance and
decision making (Blunden, 2013, p. 150). From the Operational Risk management reporting
perspective, elements such as Operational Risk appetites, KRIs and loss event assessments
should be regularly reported to the Board and senior management. In the quantification of
Operational Risk, Risk Management should disclose the results and the adopted
methodology in non-technical terms to ensure that these can be understood by the Board,
the Risk Committee and senior management.

7.3.7 Modelling stress and scenario testing

According to the Bank of England (2017, p. 3) stress testing is defined as the process of
shifting the values of single parameters which impact the financial position of a company.
Within the context of Operational Risk, ‘single parameter’ is a defined risk from the firm’s
risk register. Scenario analysis refers to a broader variety of parameters being shifted
concomitantly.

Scenario analysis involves assessing the impact of adverse events on the company’s financial
position, for example, simultaneous movements in a number of parameters such as a stock
market index, investment inflows and interest rates.

Operational Risk modelling refers to the methodology used to calculate the Operational
Risk capital. Sometimes all these concepts appear to be divorced from the day-to-day
business (Blunden, 2013, p. 166), and this is an issue which prevents embedding a Risk
Management culture within the firm. The proposed approach to address this issue is to put
these concepts within the context of the regulatory requirements: Pillar 1, as per FCA (2017,
GENPRU 2.1.45R) and Pillar 2, as per FCA (2017, IFPRU 2.2):

Figure 1: Table with comparison between Pillar 1 and Pillar 2

Calculation basis
  Pillar 1: Formula based, set by the regulator.
  Pillar 2: Firm's own assessment of its risks.

Calculation approach
  Pillar 1: Backward looking: approved audited accounts.
  Pillar 2: Forward looking: latest approved financial projection model.

Principal risks considered
  Pillar 1: Credit risk, Market risk.
  Pillar 2: All the risks, including those that have not been considered in Pillar 1.

Consideration of the fixed overhead
  Pillar 1: Fixed overhead requirements apply.
  Pillar 2: Fixed overhead requirements do not apply to Pillar 2.

Further split
  Pillar 1: Not applicable for Pillar 1.
  Pillar 2: Split into two: Pillar 2A and Pillar 2B.

Figure 2: Table with comparison between Pillar 2A and Pillar 2B

Denomination
  Pillar 2A: Pillar 2A stresses.
  Pillar 2B: Pillar 2B scenarios.

Nature of events considered
  Pillar 2A: Internal risks or risks related to the operations of the business.
  Pillar 2B: External events that may affect the business.

Frequency of events considered
  Pillar 2A: Higher frequency and lower impact.
  Pillar 2B: Lower frequency and higher impact.

Main source of event records
  Pillar 2A: Based on the risk register.
  Pillar 2B: Based on horizon scanning.

Timeframe
  Pillar 2A: At a point in time, generally within a year.
  Pillar 2B: Three-year projections at least (or whatever the forward-looking reporting period is).

Mitigation description
  Pillar 2A: Includes control descriptions.
  Pillar 2B: Includes management actions.

Quantifications
  Pillar 2A: Quantified in terms of severity and likelihood.
  Pillar 2B: Quantified in terms of the impact on the financial statements or financial position of the company over time. There are two sets of projections: before and after management actions.

Capital requirement considerations
  Pillar 2A: Always carries capital as Pillar 2.
  Pillar 2B: Carries capital as Pillar 2 only when the scenario results show that the firm does not have enough capital to meet minimum requirements in the business planning period (after management actions).

Use of Monte Carlo simulations
  Pillar 2A: Used in the quantification of individual stresses and in the Operational Risk modelling (estimation of the loss function).
  Pillar 2B: Used in the scenario parameter calibration.

The minimum capital requirement is Pillar 1, which, as described, is a formula-based
quantification. It helps provide some context on what is expected as the minimum capital
requirement amount.

The Operational Risk management framework should describe these elements at the outset.
Clear differences between Pillar 1 and Pillar 2 contribute to the embedding of a Risk
Management culture. This paper concentrates on the Pillar 2A stresses.

8. OPERATIONAL RISK QUANTIFICATION

Firms of all kinds and sizes are exposed to internal and external threats that make it
uncertain whether and when they will accomplish their goals. “Risk” is the effect of
uncertainty on a firm's objectives (ISO 31000-2009, 2017). At its basic level, stress testing
is a quantification exercise to reduce uncertainty by putting a monetary value on the risks; in
this case, the Operational Risk exposure.

8.1 OPERATIONAL RISK STRESS TESTING DESIGN

As previously indicated, the main source for identifying the stresses is the risk register. To be
able to design the stresses, the following steps are necessary:

Risk selection: Risk Management must define the criteria whereby a risk in the risk register
will be eligible for stress testing. This depends on the rating scale adopted by the firm to rate
risks in terms of severity and likelihood; the tool used for this is called ‘risk matrix’. The
matrix’s structure depends on the nature of the firm and its size and complexity (Hopkin,
2012, p. 143). The proposed matrix is five by five; which means five levels of risk severity
and five levels of risk likelihood; the value of one being the lowest severity and frequency.
This tends to be sufficiently flexible to meet the needs of most firms in terms of capturing
key risk information. The risk rating score is the product of the level of severity and the level
of the likelihood. Each risk should be rated for its ‘inherent’ level (rating excluding controls
in place) and ‘residual’ (rating after control were considered). The goal of this exercise is to
obtain the list of the most onerous risks to which the firm is exposed. Therefore, the proposed
risk selection criterion is: ‘Operational risks will be considered for stress testing if: i) they
have an inherent risk score rating equal to or greater than fifteen; or ii) they are tail risks
with an inherent impact of five and an inherent likelihood of no more than two.’
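A minimal Python sketch of this selection rule is shown below. The risk register entries and field names are invented for illustration; only the scoring logic follows the criterion stated above.

# Hypothetical risk register entries; 'impact' and 'likelihood' are the inherent
# ratings on the proposed five-by-five matrix.
risk_register = [
    {"risk": "Hardware failure",    "impact": 4, "likelihood": 4},
    {"risk": "Internal fraud",      "impact": 5, "likelihood": 1},
    {"risk": "Minor process error", "impact": 2, "likelihood": 3},
]

def selected_for_stress_testing(risk):
    score = risk["impact"] * risk["likelihood"]            # inherent risk score
    tail_risk = risk["impact"] == 5 and risk["likelihood"] <= 2
    return score >= 15 or tail_risk                        # proposed criterion

shortlist = [r["risk"] for r in risk_register if selected_for_stress_testing(r)]
print(shortlist)  # ['Hardware failure', 'Internal fraud']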

Risk classification: The risk register may already have a risk classification (taxonomy) that
could be used to place similar risks in the same class or cluster. This classification also
depends on the nature, size and complexity of the firm. The following is the proposed list of
Operational Risks stress classes: i) IT Infrastructure failure: It contains risks related to
hardware failure, software outage, power cut, etc. ii) Information security failure: It contains
risks where client data is compromised. iii) Outsourcing failure: It contains risks related to
operational disruption due to a third party failing to provide the agreed good or service. iv)
Client Assets Sourcebook (CASS) failure: CASS is a very important set of rules which firms
holding client money must adhere to. Failure to meet CASS requirements has very severe
consequences hence it is common practice to separate it from other regulatory failures. v)
Regulatory failure: It contains risks of not meeting other non-CASS regulations, including tax
laws. vi) Process failure: It contains risks related to client instruction execution failures or
internal process failure which is not client related. vii) Internal fraud: Internally originated
misappropriation of corporate or client assets. viii) External fraud: Externally originated
misappropriation of corporate or client assets. Note: if it is a combination of both (collusion),
it should be classed as internal fraud. Once the classes are defined, the selected risks have to
be allocated to one, and only one, of the classes.

Analysis of the classes: Operational risks are likely to be correlated. Risk Management should
take a closer look at the risks and evaluate the relationship “A may cause B”, “X may be caused
by Z” or “H causes and is caused by G” for each risk pair. It is possible that, during this process,
Risk Management re-assesses a risk and concludes that it is overrated and thus should not be
considered for stress testing. This can result from comparing each risk to other risks within
the class. If this happens, the decision needs to be properly documented. The output of this
process is the ‘risk constellation’ within each Operational Risk class; which should resemble
Figure 3 below.

Figure 3: Risk constellation analysis

Stress description: Based on the risk constellation, Risk Management needs to use expert
judgement and decide how best to group risks for the purposes of stress testing. Each class
will have at least one stress. The methodology of how risks have been grouped must be
documented. For example, if the risk constellation shows that one risk is caused by all other
risks, then the class requires one stress. If one risk is caused by half the other risks, then the
class requires at least two stresses. The wording of the final stresses must have sufficient
details to represent the risks being stressed. This also makes it easier to design the
mathematical structure of the stress. Note that according to the Institute and Faculty of
Actuaries (2016, p. 9), Operational Risk modelling should not include ‘business as usual’
costs (such as salaries of staff engaged in managing an operational event) because they are

already budgeted for. Consequently, standard salary costs are not included in any calculation,
although exceptional overtime costs are included. The stress description should also include
the rationale for the assumptions made in the stress. See appendix 4 for a practical example.

8.2 STRESS TESTING QUANTIFICATION OF STRESS SEVERITY

Operational Risk makes reference to the severity and likelihood of unexpected events that
happen as a result of alterations in the normal functioning (Balonce, 2012, p. 1). The stress
is a description of an alteration in the normal operations that the business considers ‘severe
but plausible’. The objective of considering internal data, external data and the Subject
Matter Expert’s opinion is to provide the best possible context to analyse the risks, design the
stresses and obtain as accurate a result as possible. The key word here is ‘context’: in other
words, the range within which the true stress loss value (severity) and its likelihood are
located, relative to the mean or expected value of the stress as described.

Internal data: Internal data refers to all data sources produced by the firm. This includes: i)
Internal loss data: Operational Risk quantification depends on a comprehensive reporting of
losses, gains and near-misses to obtain an accurate picture of the scale of Operational Risk
exposure (Blunden, 2013, p. 116). ii) Management information: This includes the current
production of KPIs and financial and non-financial information used by management to
monitor performance against objectives. iii) Business intelligence (BI): BI comprises the
applications, infrastructure and tools which enable an analysis of data to improve and
optimise decision making and performance. (Gartner, 2017). Firms produce a significant
amount of data from their own operations such as trading, asset value, corporate actions,
staff turnover, etc. This data management technology is necessary to ensure good quality
stress results.

External data: Use of external data is a regulatory requirement (FCA, 2017, BIPRU 6.5.12(4)).
External data is able to compensate for the paucity of internal loss data for risks a firm
is exposed to but has little or no experience of. It can provide an enhanced and forward-
looking perspective but on its own it would not cover the full spectrum of Operational Risk
exposure. The use of external data requires a systematic process to i) determine what and
when it is appropriate to use and ii) how to scale the data to make it ‘relevant’ to the size and
complexity of the firm (Embrecht, 2011, p. 184). External data is not only ‘loss data’ from
other firms; it also refers to external sources of data that can enhance the stress
completeness. See appendix 4 for a practical example of the use of external data.

Subject Matter Expert (“SME”) Opinion: To ensure proper governance, as previously
discussed, Risk Management must carry out an open discussion with the business SMEs to
sense check the stress results.

Stress summary table: Each stress needs to be summarised in a logical structure, which is
proposed as follows:

Figure 4: Table describing the stress testing summary

Stress component Calculation


Component A Calculation 1
Component B Calculation 2
Expected stress loss value Calculation 1 + Calculation 2

Figure 5: Table describing the stress calculation details

Calculation 1

Internal data
  Method 1: Best estimates
  Method 2: Simulation
  Method 3: Internal loss data
  Risk Management makes an assessment of what the final internal loss data value is.

External data
  External source 1
  External source 2
  External source ‘N’
  Risk Management makes an assessment of what the final external loss data value is.

Based on the available information, Risk Management makes an assessment and concludes that the appropriate value of Calculation 1 is X, in order to discuss its point of view with the SMEs.

SME opinion
  Risk Management presents its conclusions to the SMEs and carries out a sense check by discussing the rationale for the calculation.

Final value of Calculation 1
  This is the expected value of Calculation 1 used for the stress.

This stress structure follows a logical rationale to ensure the conclusion is understood,
documented and most importantly properly supported by data. Refer to appendix 4 for a
practical example.

8.2.1 Best estimate mathematical design

Each stress has its own characteristics, and the nature of the stress’s key drivers needs to be
replicated mathematically. The best estimates method involves the application of a
deterministic approach. The output of this estimation is an expected value of the stress as
described. Best estimates is one of the internal data methods, as shown in Figure 5. The
example in appendix 4 illustrates how the best estimates can be calculated.

8.2.2 Simulation mathematical design

Any simulation used to provide more context to the internal data should be in line with the
preceding ‘best estimate’ model. The best estimate provides the simulation framework; Risk
Management has the task of identifying the variables within the model and replacing them
with random values drawn from a known distribution. This distribution must accurately
replicate the pattern of the actual values of the selected model variable.
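The sketch below illustrates this idea under invented assumptions: a deterministic best estimate (downtime hours multiplied by cost per hour) is turned into a simulation by replacing the downtime variable with draws from a fitted distribution, assumed here to be lognormal purely for illustration.

import numpy as np

rng = np.random.default_rng(seed=1)

# Best estimate (deterministic): outage cost = hours of downtime x cost per hour.
hours_best_estimate = 8
cost_per_hour = 25_000
best_estimate_loss = hours_best_estimate * cost_per_hour

# Simulation: replace the downtime variable with random draws from a known
# distribution (an illustrative lognormal centred on 8 hours), keeping the rest
# of the best-estimate model unchanged.
simulated_hours = rng.lognormal(mean=np.log(8), sigma=0.5, size=100_000)
simulated_losses = simulated_hours * cost_per_hour

print(best_estimate_loss, round(np.percentile(simulated_losses, 95)))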

8.2.3 Use of distribution functions in simulations

Statistical distribution fitting is also called parametric modelling. The rule of thumb about
the minimum sample size for fitting a distribution is that the more data is available, the
better the results. To get reliable distribution fitting results, there should be at least 75-
100 data points available. Very large samples (tens of thousands of data points) might cause
computational issues, so a reduction of the sample size may be required (Mathwave, 2017).
The distribution selection is very important and requires a strong basis. The following steps
are proposed to ensure the best possible results:

Initial sense check: Reece (2009) suggested the following questions about distributions: i) Is
the fitted distribution theoretically correct? E.g. Is it a continuous or discrete variable? Is it
symmetric? Does it have heavy tails? Does it only produce positive values as per the actual
data? Are the shapes similar? Do the tail percentiles, average and median show similar
values? ii) Does it conform with industry standards? For example, it is industry practice
within financial services to assume that the severity of operational losses follows a
lognormal distribution and that the frequency of operational losses follows a Poisson
distribution (PwC, 2015, p. 3) and (ORX consortium et al., 2009, pp. 3 & 19). iii) Is the exercise data driven? Are
there sufficient data points? Has data been checked for outliers? What fitting method will be
used and why? iv) Is the distribution pragmatic? Is the distribution intuitively correct? Does
it make sense? Are the parameters understood? Is it easy to implement?
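Some of these sense checks can be automated. The Python sketch below, offered only as an illustration on invented data, fits a candidate lognormal to a loss sample and compares the sample and fitted mean, median and 95th percentile; large discrepancies would argue against the candidate distribution.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
losses = rng.lognormal(mean=9.0, sigma=1.2, size=120)   # invented loss sample

# Fit a candidate lognormal (location fixed at zero, since losses are positive).
shape, loc, scale = stats.lognorm.fit(losses, floc=0)
fitted = stats.lognorm(shape, loc=loc, scale=scale)

# Sense checks: central values and tail percentiles should be comparable.
for label, sample_value, fitted_value in [
    ("mean",            losses.mean(),             fitted.mean()),
    ("median",          np.median(losses),         fitted.median()),
    ("95th percentile", np.percentile(losses, 95), fitted.ppf(0.95)),
]:
    print(f"{label}: sample {sample_value:,.0f} vs fitted {fitted_value:,.0f}")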

Statistical testing: Further testing must be carried out using statistical methods to provide
proper backing that the selected distribution is a good fit. The Chi-squared test of goodness
of fit is the most commonly used statistical procedure for testing whether distributions fit
(Chaudhuri, 2016, p. 40). This test is based on the comparison of observed frequencies with
the expected frequencies, under the assumption that the null hypothesis is true (Anderson,
2014, p. 528), namely that the distributions are the same. The Chi-squared test requires
knowledge of the degrees of freedom (the number of categories minus one, less any
parameters estimated from the data) and the level of alpha; alpha is conventionally set at 5%,
the usual threshold for statistical significance (Turner, 2014, p. 10).

What can be done if the data does not fit any known distribution? It is sometimes impossible
to find a parametric distribution that fits all sizes (low severity / high frequency, high
severity / low frequency and catastrophic events) (Balonce, 2012, p. 17). There are many
reasons why data may not fit a distribution, which will not be discussed in this paper. A few
alternatives to deal with this issue are proposed as follows:

Split the sample data: This methodology is a variation on the Delta-EVT model (King, 2001,
p. 59) and the splice function methodology (Klugman, 2012, pp. 66-69). One of the reasons
why none of the distributions is appropriate is that the data set is bimodal. Within
Operational Risk, for example, it has been observed that a different distribution appears at
the tails. The distributions in the data set could be the same function with different
parameters or two different distribution functions. To apply this methodology, the following
such that d1 + d2 = D. The cut-off point between d1 and d2 is at the Nth percentile and d1 and
d2 follow known distributions: G(x) and H(x) respectively. ii) d1 and d2 fitted distributions
pass the Chi-squared test of goodness of fit. iii) The aggregated function is then created as
follows:

Figure 6: Table describing the split data set method

Aggregated function: F[G(x), H(x)]


Random variable value (x) Imposed probability: P(x)
G(x) for data subset d1 (1-Nth) percentile
H(x) for data subset d2 Nth percentile

The combined distribution F[G(x), H(x)] produces data that has a (1-Nth) probability of being
distributed as G(x) and an Nth probability of being distributed as H(x). Subsequently, a
simulation of at least 100,000 iterations needs to be run; this number of iterations is used
because it produces stable estimates (Ritter, 2010, p. 9). The output of the simulation
generates the F[G(x), H(x)] function, which, in order to be valid for the stress testing, must
pass the Chi-squared test of goodness of fit against the original data set ‘D’. This process may
need a few ‘trial and error’ attempts, calibrating the value of the Nth percentile, for all the
conditions to be met.

Note that the distributions are fitted based on past data and can only be reliable for as long
as there are no changes in the way the variable behaves. In addition, the Chi-squared test
gives 95% confidence, indicating that there is still an error margin of 5%. This method
requires computational aid. The software @Risk provides the functionalities to apply this
method; see appendix 5 for a practical example.
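To make the mechanics concrete, the following sketch (a minimal illustration in Python rather than @Risk, using hypothetical data, distributions and cut-off point) draws from the combined function F[G(x), H(x)] and tests it against the original data set with the Chi-squared goodness of fit test:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical original loss data 'D' (bimodal: body plus a heavier tail).
D = np.concatenate([rng.lognormal(mean=8, sigma=0.5, size=950),
                    rng.lognormal(mean=11, sigma=0.8, size=50)])

# Assumed cut-off at the 95th percentile: d1 is the body, d2 the tail.
N = 0.95
cut = np.quantile(D, N)
d1, d2 = D[D <= cut], D[D > cut]

# Fit candidate distributions G(x) and H(x) to each subset (illustrative choice).
g_params = stats.lognorm.fit(d1, floc=0)
h_params = stats.lognorm.fit(d2, floc=0)

# Monte Carlo simulation of the combined function F[G(x), H(x)]:
# with probability N draw from G, otherwise from H.
n_iter = 100_000
from_g = rng.random(n_iter) < N
sim = np.where(from_g,
               stats.lognorm.rvs(*g_params, size=n_iter, random_state=rng),
               stats.lognorm.rvs(*h_params, size=n_iter, random_state=rng))

# Chi-squared goodness of fit of the simulated function against the original data,
# using common bins so observed and expected frequencies are comparable.
bins = np.quantile(D, np.linspace(0, 1, 11))        # deciles of the actual data
obs, _ = np.histogram(D, bins=bins)
exp_freq, _ = np.histogram(sim, bins=bins)
exp_freq = exp_freq * obs.sum() / exp_freq.sum()    # rescale to the sample size
chi2, p_value = stats.chisquare(obs, exp_freq)
print(f"chi-squared = {chi2:.2f}, p-value = {p_value:.3f}")  # accept the fit if p > 0.05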

Splice distribution: This method refers to a distribution created by splicing two distributions
at the x-value given by the splice point. The resulting distribution is treated as a single
input distribution in a simulation. The assumption required to apply this method is that the
two distributions and the splice point are known; the initial problem, however, is that the
distributions are unknown, which was the problem in the first place. To apply this methodology
these steps should be followed: i) Define K(x) = F[v1.f1(x), v2.f2(x)], where f1(x) applies on
-∞ < x < c, f2(x) applies on c < x < ∞, v1 + v2 = 1 (i.e. v2 = 1 – v1), and “c” is calculated by
solving the equation v1.f1(c) = (1 – v1).f2(c). Therefore v1 cannot be imposed as in the
previous method; it depends on the point at which both weighted densities are equal in value,
which means that the distributions must overlap at this point. The splice function K(x)
should pass the Chi-squared test of goodness of fit against the actual data with at least 95%
confidence.

Figure 7: Table with details of the splice function

The splice function: K(x) = F[v1.f1(x), v2.f2(x)], where v1.f1(c) = (1 – v1).f2(c)

Random variable value (x)            Derived probability: P(x)
f1(x) where -∞ < x < c               (1-v1)th percentile
f2(x) where c < x < ∞                v1th percentile

This method requires computational aid. The software @Risk provides the ‘RiskSplice’
functionality. See appendix 5 for a practical example.
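As an illustration of the splice point calculation, the sketch below solves v1.f1(c) = (1 – v1).f2(c) numerically; the component distributions, their parameters and the weight v1 are hypothetical, and scipy is used in place of the ‘RiskSplice’ functionality:

import numpy as np
from scipy import stats
from scipy.optimize import brentq

# Hypothetical component densities (parameters are assumptions for illustration).
f1 = stats.lognorm(s=0.6, scale=np.exp(9))      # body distribution f1(x)
f2 = stats.pareto(b=2.5, scale=20_000)          # tail distribution f2(x)
v1 = 0.95                                       # assumed weight on f1; v2 = 1 - v1

# Solve v1 * f1(c) = (1 - v1) * f2(c) for the splice point c.
def balance(x):
    return v1 * f1.pdf(x) - (1 - v1) * f2.pdf(x)

c = brentq(balance, 20_000, 200_000)            # bracket must contain a sign change
print(f"splice point c = {c:,.0f}")
print(f"check: {v1 * f1.pdf(c):.3e} vs {(1 - v1) * f2.pdf(c):.3e}")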

Kernel density estimation: Parametric approaches may be controversial because they impose a
distribution on the data (Balonce, 2012, p. 29). A simple way to describe the Kernel density
is as a more sophisticated, smoothed version of a histogram. The way the Kernel density works
gives a more accurate representation of the underlying data. The mathematical structure of the
Kernel density is described in appendix 6, along with an example of how it can be used as a
tool to generate a distribution for a simulation.
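A minimal sketch of the idea, assuming hypothetical loss data and using scipy's Gaussian kernel density estimate as the simulation input, is shown below:

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=9, sigma=1.2, size=200)   # hypothetical loss data

# Fit the kernel density; the bandwidth defaults to Scott's rule.
kde = gaussian_kde(losses)

# Use the KDE directly as the severity distribution in a Monte Carlo run.
simulated = kde.resample(100_000).ravel()
simulated = simulated[simulated > 0]                  # discard any negative draws from the Gaussian kernels

print(f"simulated mean = {simulated.mean():,.0f}, "
      f"99th percentile = {np.percentile(simulated, 99):,.0f}")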

Histogram: If the previous methodologies fail to find a suitable distribution, the following
method can be applied: the distribution function is given by the histogram built from the
actual data. This method provides randomness but is discrete and capped to the histogram
bins; this limitation needs to be clearly indicated. See appendix 4 for an example, and the
short sketch below.
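A minimal sketch of sampling from such a histogram, using hypothetical loss data, is as follows (the output is discrete and capped at the outer bins, as noted above):

import numpy as np

rng = np.random.default_rng(3)
losses = rng.lognormal(mean=9, sigma=1.0, size=150)   # hypothetical loss data

# Build the histogram and sample bin midpoints in proportion to bin frequency.
counts, edges = np.histogram(losses, bins=20)
midpoints = (edges[:-1] + edges[1:]) / 2
probs = counts / counts.sum()

simulated = rng.choice(midpoints, size=100_000, p=probs)
# Note: the output is discrete and cannot exceed the outermost bin midpoints.
print(f"simulated mean = {simulated.mean():,.0f}")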

Combination of methodologies: It is also possible to combine the ‘split sample data’ approach
with the other methodologies. For example, a known distribution may be found for data subset
1, while the rest of the sample cannot be fitted to any known distribution; the subset without
an associated distribution is then represented by a histogram.

8.2.4 Determination of the simulation result

The purpose of the simulation is to provide additional information that captures a wider
context around the true value of the stress and where that value is likely to fluctuate. The
purpose of the stress itself is to give an expected value of the loss if the event as described
materialises. Therefore, the mean of the simulation result is the value to be used.

8.3 STRESS TESTING QUANTIFICATION OF STRESS FREQUENCY

The risk has two dimensions that need to be quantified: severity and likelihood (Blunden,
2013, p. 93); both are necessary to construct the final loss function. The quantification of the
frequency follows the same methodology as the one used for severity: internal data, external
data and SME opinion.

8.3.1 Internal data

This refers only to internal loss data, which is recorded in the loss data register. The analysis
of this data should be able to indicate the frequency of the events. The Operational Risk
events susceptible to being stressed generally have frequencies of between 1 in 5 and 1 in 50
years (Isle of Man Financial Services Authority (IOMFSA), 2016, p. 3). It is possible that
an event has never occurred in the firm; in such a case there will be no internal data
available. See appendix 4 for a practical example.

8.3.2 External data

This refers to all data from any reliable external source including academic papers,
specialised magazines, industry publications, news articles, government publications, etc. As
a last option, if the source is a forum or any ‘non-official’ site, the limitation and rationale of
using this data should be disclosed. In addition, external data must be assessed, filtered and
scaled to ensure it represents or approximates the firm’s risk profile (Cope, 2009, p. 26). See
appendix 4 for a practical example.

8.3.3 Subject matter expert (“SME”) opinion

After Risk Management undertakes the necessary research (internal and external) and forms
an opinion about the stress frequency, a discussion with the SMEs must take place to reach
a final agreement on what the stress frequency will be. The SMEs provide a very important
sense check on the analysis of, and conclusions arrived at by, Risk Management. According
to Cowell (2007, p. 800), lack of data and complexity of operations mandate the inclusion of
‘expert input’. An ‘expert’ is defined as anyone whose knowledge and experience allows
him/her to form a credible view about the firm’s risk profile. See appendix 4 for a practical
example.

8.3.4 Use of basic probability laws

To work out the stress frequency it is necessary to apply the basic probability laws.
Following Chao (1980, pp. 114-131):

1) 0 ≤ P(E) ≤ 1: The probability of an event is greater than or equal to zero and less than
or equal to 1.
2) P(U) = 1: The probability of the universe of all possible outcomes is 1.
3) P(E) + P(E’) = 1: The probability of an event plus the probability of its complement is
1.
4) P(E1 U E2) = P(E1) + P(E2): If E1 and E2 are mutually exclusive events, the probability
of either E1 or E2 occurring is simply the sum of the probability of each event.
5) P(E1 U E2) = P(E1) + P(E2) – P(E1 ∩ E2): If E1 and E2 are not mutually exclusive events,
the probability of the union of the events equates to the sum of the probabilities minus
the probability of the intersection.
6) P(E1 ∩ E2) = P(E1) x P(E2): If E1 and E2 are independent events, the probability of their
intersection equates to the product of their probabilities.
7) P(E1 ∩ E2) = P(E2) x P(E1 | E2): If E1 and E2 are dependent events, the probability of
their intersection equates to the product of the conditional probability of E1 given E2 and
the probability of the conditioning event (E2).

Refer to appendix 4 for a practical example of the application of the probability laws.
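As a brief numerical illustration of laws 5 to 7 (with purely hypothetical event probabilities and an assumed conditional probability):

# Hypothetical annual probabilities of two Operational Risk events.
p_e1, p_e2 = 1 / 20, 1 / 50          # E1: 1-in-20-year event, E2: 1-in-50-year event

# Law 6: independent events.
p_both_indep = p_e1 * p_e2           # P(E1 n E2) = 0.001

# Law 5: union of non-mutually exclusive events.
p_union = p_e1 + p_e2 - p_both_indep # P(E1 U E2) = 0.069

# Law 7: dependent events, assuming P(E1 | E2) = 0.5 for illustration.
p_e1_given_e2 = 0.5
p_both_dep = p_e2 * p_e1_given_e2    # P(E1 n E2) = 0.01

print(p_both_indep, p_union, p_both_dep)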

8.3.5 Distributions for stress frequency

Discrete distributions are relevant to model the likelihood of an event. The most commonly
used are the Poisson, uniform, binomial and negative binomial (Blunden, 2013, p. 177). If there
is data available to determine which distribution is more appropriate, then the fitted
distribution is the one to be used. It will be assumed by default that the stress frequency is
Poisson distributed, because this distribution has the following properties: i) it is a discrete
distribution, ii) it is a simple model, since it only has one parameter, lambda (λ), and
iii) lambda equates to both the variance and the mean of the distribution (Powojowski, 2002,
p. 66). Refer to appendix 7.
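For instance, a stress estimated to occur once in 20 years can be simulated as Poisson with λ = 1/20; a minimal sketch with hypothetical values:

import numpy as np

rng = np.random.default_rng(4)
lam = 1 / 20                                    # 1-in-20-year event
events_per_year = rng.poisson(lam, size=100_000)

# The simulated mean and variance should both be close to lambda.
print(events_per_year.mean(), events_per_year.var())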

8.4 CONSTRUCTION OF THE LOSS FUNCTION

Giving an accurate estimate of the Operational Risk capital value is a challenge in satisfying
Basel II requirements (Ergashev, 2011, p. 146). The AMA is implemented through the LDA
(Embrecht, 2011, p. 185), which was first introduced by the Basel Committee in September 2001
(Blunden, 2013, p. 169). The approach requires the firm to estimate the distribution of
Operational Risk losses within a future time frame, which can be derived through Monte
Carlo simulations (Blunden, 2013, p. 169). Ideally, for each stress there should be sufficient
data to estimate the frequency and severity functions so the LDA function can be built (Cope,
2009, p. 25). The LDA is then a compound function combining the frequency and severity, as
described by the Royal Society of Actuaries (2010, p. 20). This is the method adopted in this
paper.

An issue which arises in modelling Operational Risk losses is the lack of data for low
frequency / high impact events (Cowell, 2007, p. 800). To overcome this limitation, based on
the LDA, the following methodology is proposed to quantify the required Operational Risk
capital using the stress results.

8.4.1 Severity modelling


It is assumed that the severity of the stress event is lognormally distributed. X is defined as
the severity of the stress event as follows:

X = Exp ( μ + σ * Z) where Z~N(0,1) (Evans, 2000, p. 133).

This function is the definition of a lognormal distribution with parameters μ (mu) and σ
(sigma). Mu and sigma are the parameters of the underlying normal distribution of the
lognormal distribution.

This is the random number generating function, which is the right mathematical structure
for the purpose of the Monte Carlo simulation.

If X = Exp ( μ + σ * Z), then

Ln(X) = μ + σ * Z

To generate the severity of the losses for the lognormal distribution for X, it is required to
determine the parameters mu and sigma.

Calibration of μ

The mean of the lognormal distribution is defined as:

Mean = md * exp(σ²/2), where md = Exp(μ) = the median of the lognormal distribution (Evans,
2000, p. 130); then,

Ln(mean) = Ln(md) + σ²/2 = μ + σ²/2

μ = Ln(mean) − σ²/2 = Ln(x) − σ²/2

Mu is the parameter required to model the lognormal distribution based on x (small x is
defined as the expected value or the mean of the stress event loss) and σ² (the calibrated
variance explained in the next section). Small x is not an actual loss but a hypothetical
mean or expected loss resulting from the stress scenario quantification process, which
considers internal data, external data and SME opinion.

Hence, the calibrated mu (μ) is calculated as:

μ = Ln(x) − σ²/2

Because μ is a function of a ‘mean or expected value’, it is necessary that the stress scenario
quantification process undertaken with the business aims to calculate a mean or expected value
of the severity impact of the risk if it materialises. The understanding of the severity of the
loss as a mean or expected loss is much easier to describe and communicate to the business.
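A small sketch verifying the calibration: with a hypothetical expected stress loss x and an assumed calibrated σ, setting μ = Ln(x) − σ²/2 should reproduce x as the mean of the simulated lognormal severities:

import numpy as np

rng = np.random.default_rng(5)
x = 1_000_000          # hypothetical expected stress loss (mean)
sigma = 0.9            # hypothetical calibrated sigma

mu = np.log(x) - sigma**2 / 2                    # calibrated mu
sim = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

print(f"target mean = {x:,.0f}, simulated mean = {sim.mean():,.0f}")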

Calibration of σ

Ideally, each stress should have its own lognormal distribution parameters μ and σ. This is
not always possible due to the limited historical data available on the stress events. An
alternative is to assume that there is one value of sigma for all stress events, which means
that the stress events have the same level of variability. For simplification purposes σ is
referred to as the ‘calibrated’ sigma.

The value of 𝜎 can be calibrated following different approaches depending on what data is
available.

i) If internal loss data is available, σ can be calibrated as follows:

σ = [Ln(1 + v/m²)]^(1/2) (Mood, 1970, pp. 540-541);

Where v is the variance of the internal loss data and m is the mean of the internal loss data.
The value of σ could also be calibrated as the sample standard deviation of the logarithm of
the internal losses (Evans, 2000, p. 133). The challenges of using this approach will be
described in practical terms in appendix 8.

ii) If no internal loss data is available, the standard deviation is based solely on the
stress results: σ = √[Σ(Ln(Xi) − û)² / (n − 1)], where û = Σ Ln(Xi) / n (Chao, 1980,
pp. 48, 68) and (Evans, 2000, p. 133), and the Xi are the stress results.

iii) It is possible to combine the variance of the internal loss data and the stresses; this
sigma is calculated as follows:
σ = {[(n1 − 1)S1² + (n2 − 1)S2² + n1(ŷ1 − ŷ)² + n2(ŷ2 − ŷ)²] / (n1 + n2 − 1)}^0.5
(StatsExchange, 2012)

Where:

n1 is the sample size of the internal loss data, and n2 is the number of stresses;
S1² is the variance of the logarithm of the internal loss data points, and S2² is the variance
of the logarithm of the stress results;
ŷ1 is the average of the logarithm of the internal loss data points, and ŷ2 is the average of
the logarithm of the stress results;
ŷ = (n1 ŷ1 + n2 ŷ2) / (n1 + n2).

A short sketch of this calibration is given below.
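The sketch (a minimal Python illustration with hypothetical internal loss data and stress results, covering approaches (i) and (iii)) is as follows:

import numpy as np

rng = np.random.default_rng(6)
internal_losses = rng.lognormal(mean=8, sigma=1.1, size=40)   # hypothetical loss data
stress_results = rng.lognormal(mean=12, sigma=0.7, size=8)    # hypothetical stress values

# Approach (i): sigma from the internal loss data only.
v, m = internal_losses.var(ddof=1), internal_losses.mean()
sigma_i = np.sqrt(np.log(1 + v / m**2))

# Approach (iii): pooled sigma combining the logged internal losses and stresses.
y1, y2 = np.log(internal_losses), np.log(stress_results)
n1, n2 = len(y1), len(y2)
s1_sq, s2_sq = y1.var(ddof=1), y2.var(ddof=1)
y_bar = (n1 * y1.mean() + n2 * y2.mean()) / (n1 + n2)
sigma_iii = np.sqrt(((n1 - 1) * s1_sq + (n2 - 1) * s2_sq
                     + n1 * (y1.mean() - y_bar)**2
                     + n2 * (y2.mean() - y_bar)**2) / (n1 + n2 - 1))

print(f"sigma (i) = {sigma_i:.3f}, sigma (iii) = {sigma_iii:.3f}")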

8.4.2 Frequency modelling

Let W1, W2, …, Wn be discrete variables distributed Poisson with parameters λ1, λ2, …, λn,
where λj = 1/Yj and Yj is the number of years per single event: Wj ~ Poisson(λj = 1/Yj).

8.4.3 Compound stress distribution function

The distribution of the jth stress (SFj) is defined as the combination of frequency and severity:

SFj(X, W) = Σ (i = 1 to Wj(λj)) Xij    if Wj(λj) ≥ 1
SFj(X, W) = 0                          if Wj(λj) = 0

Where X is the severity and W is the frequency. This is in line with industry standards
(Institute and Faculty of Actuaries, 2010, p. 20).

8.4.4 Loss function to be simulated

The loss function (“LF”) that aggregates all the stresses is the function to be simulated. It is
defined as:

LF(X, W) = Σ (n = 1 to j) SFn(X, W)

LF is also called the aggregated loss distribution function. The Operational Risk capital is set
at an agreed percentile of the loss function. This percentile is in line with the risk appetite
indicated by the firm’s Board (Blunden, 2013, p. 66). As a guide, for example, European
insurers require a loss level at the 99.5th percentile and the Basel Committee’s AMA uses the
99.9th percentile (Blunden, 2013, p. 181) and (Cope, 2009, p. 2). Refer to appendix 8 for a
practical example.
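The following condensed sketch illustrates sections 8.4.1 to 8.4.4 end to end, using hypothetical stresses (expected losses, a single assumed calibrated σ and 1-in-Y-year frequencies) and reading the capital off at the 99.5th percentile:

import numpy as np

rng = np.random.default_rng(7)
n_iter = 100_000
sigma = 0.9                                          # calibrated sigma (assumed)

# Hypothetical stresses: (expected loss x in £, 1-in-Y-year frequency).
stresses = [(1_300_000, 15), (500_000, 10), (2_000_000, 30)]

total_losses = np.zeros(n_iter)
for x, years in stresses:
    mu = np.log(x) - sigma**2 / 2                    # calibrated mu for this stress
    lam = 1 / years                                  # Poisson frequency parameter
    counts = rng.poisson(lam, size=n_iter)           # W_j: number of events per iteration
    # SF_j: sum of lognormal severities per iteration (0 when no event occurs).
    max_n = counts.max()
    if max_n > 0:
        sev = rng.lognormal(mean=mu, sigma=sigma, size=(n_iter, max_n))
        mask = np.arange(max_n) < counts[:, None]    # keep only the first W_j draws
        total_losses += (sev * mask).sum(axis=1)

capital = np.percentile(total_losses, 99.5)          # LF percentile per risk appetite
print(f"Operational Risk capital (99.5th percentile) = £{capital:,.0f}")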

8.4.5 Application of Chebyshev’s theorem

Let X be the Operational Risk capital requirement estimated by LF(X, W), which has an
unknown distribution. Let μ be the mean based on the Operational Risk capital requirement
Monte Carlo simulation data. Chebyshev’s theorem states that at least (1 − 1/k²) of the data
from an unknown distribution must lie within k standard deviations (S) of the mean, where
k > 1 (Anderson, 2014, p. 126). In mathematical terms: P(|X − μ| ≤ kS) ≥ 1 − 1/k².

This theorem can provide firms with additional assurance that the chosen Operational Risk
capital is at the right level; in other words, that the ‘X’ chosen at the nth percentile is
appropriate. The value found using the Chebyshev theorem will generally be higher than
operational risk capital requirement at the nth percentile. However, this is a useful piece of
information to use for the Pillar 2B scenario analysis. Refer to appendix 8 for a practical
example.
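A small worked illustration, with hypothetical simulation statistics:

import numpy as np

# Hypothetical simulation output statistics.
mu, s = 2_500_000, 1_800_000        # mean and standard deviation of simulated losses

p = 0.99                            # target coverage (e.g. a 99th percentile appetite)
k = np.sqrt(1 / (1 - p))            # from 1 - 1/k^2 = p  ->  k = 10 for p = 0.99

chebyshev_level = mu + k * s        # conservative capital level implied by the bound
print(f"k = {k:.1f}, Chebyshev level = £{chebyshev_level:,.0f}")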

8.5 STATISTICAL DISTRIBUTIONS FOR THE FREQUENCY AND SEVERITY

8.5.1 Distributions to model Operational Risk event severity

Di Pietro (2012, pp. 86-87, citing the Basel Committee report 2009b, pp. 54-56) indicated the
following facts about the banks which used the AMA methodology: i) 31% applied one
distribution to model severity, 48% applied two distributions (one for the body and one for
the tail) and 21% applied other approaches. ii) Among those who applied one distribution, 33%
used the lognormal, 17% used the Weibull and 14% used an empirical distribution. iii) Among
those who applied two distributions, for the body 26% used an empirical distribution, 19% the
lognormal and 10% the Weibull; for the tail, 31% used the Pareto, 14% the lognormal, 7% the
Gamma and 7% the Weibull. This section elaborates on the most frequently used distributions
as denoted above: lognormal, Weibull, empirical, Pareto and Gamma, which are also suggested by
Klugman (2012, p. 4). From these facts it can be derived that the discussed distributions
constitute an industry standard.

Lognormal distribution: As described previously, the lognormal distribution is defined as ‘e’
to the power of x (e^x), where x is normally distributed. It is applicable to continuous random
variables constrained to non-negative values (0 ≤ x ≤ ∞). It is asymmetrical with a right tail
(Evans, 2000, p. 129). This distribution has two parameters: mean (location) and standard
deviation (scale) (Math Tutor, 2017).

Weibull distribution: The Weibull distribution is also a continuous, non-negative function
with two parameters: ‘A’, the scale parameter, and ‘B’, the shape parameter. Both ‘A’ and ‘B’
are greater than zero (Evans, 2000, p. 192). Although Evans (2000, p. 192) also indicates that
the Weibull allows for additional parameters (three- and five-parameter forms), this level of
complexity is unnecessary for the quantification of Operational Risk capital. This distribution
has been widely used as a model of time to failure for manufactured items and is also applied
in finance and climatology (Math Tutor, 2017).

Empirical distribution: This distribution is estimated directly from the data. There is no
assumption about the underlying mathematical structure of the model function (Evans, 2000,
p. 65). The Kernel density distribution function is an example of empirical distribution
(Klugman, 2012, p. 59). It has already been discussed in the stress testing section that the
histogram and the kernel density distribution function are used when the data cannot be
fitted to any known distribution.

Pareto distribution: The Pareto distribution is derived from the Pareto principle which is
used to show that many things are not distributed evenly (Statisticshowto, 2017). This is
often described as the basis for the 80/20 rule (80% of the output is generated by 20% of
the inputs). Other applications include income distribution. Pareto has two parameters, ‘a’
(location parameter) and ‘c’ (shape parameter) where a ≤ x ≤ ∞, and both ‘a’ and ‘c’ are
positive values. (Evans, 2000, pp. 151-154). Pareto is a heavy tailed distribution. In insurance,
heavy-tailed distributions are relevant tools to model extreme losses (Statistical Modelling,
2011, cited Klugman, 2004). It can be concluded that Pareto is a suitable distribution to
model extreme values or the tail part of the data set.

Gamma distribution: This is a distribution which arises in processes where there is a time
gap between events. It can be understood as a waiting time between two Poisson distributed
events (UCLA , 2017) for example. Gamma has two parameters ‘b’ (scale parameter) and ‘c’
(shape parameter). Both, ‘b’ and ‘c’ are greater than zero and the variable x is greater than
or equal to zero.

These distributions have a positive tail, which is consistent with the pattern of Operational
Risk events (Embreachts, n.d.). Embreachts (n.d.) also suggests that the lognormal and gamma
distributions are good approximations for loss event distributions. Figure 8 illustrates the
shapes of the gamma, lognormal, Pareto and Weibull distributions with the same location
parameter:

Figure 8: Chart illustrating the gamma, lognormal, Pareto and Weibull distributions

8.5.2 Distributions to model Operational Risk event frequency

The frequency of the loss events require distrete distributions and model the number of
events per time unit (Navarrete, 2006, p. 1). Blunden (2013, p. 177) and Cope (2009, p. 19)
indicate that Poisson, uniform, binomial and negative binomial are appropriate to model the
Operational Risk frequency.

Poisson distribution: This discrete distribution applies to counting the number of infrequent,
but open-ended, events. The Poisson distribution is used to model observations per unit of
time or space (Chao, 1980, p. 182) which are reasonably homogeneous (Evans, 2000, p. 155).
As indicated in section 8.3.5, this distribution provides a simple model with one parameter,
lambda (λ), which equates to both the mean and the variance. The variable (event occurrence)
is an integer x, such that 0 ≤ x < ∞ and λ > 0 (Evans, 2000, p. 155). The Poisson distribution
is positively skewed (Chao, 1980, p. 183). For example, assuming a Poisson distribution, if a
loss event is estimated to happen once in twenty years, then this is distributed Poisson
(λ = 1/20). As indicated in section 8.3.5, Poisson is the default distribution if the event
frequency cannot be fitted to any other known distribution.

Discrete uniform distribution: This distribution gives each random variable value the same
probability of occurrence; where a ≤ x ≤ b and x is an integer. (Evans, 2000, p. 170). This
distribution is capped to the lower and upper bounds; which means that it is limited to very
specific cases.

Binomial distribution: The binomial distribution is a simple discrete probability distribution
due to its ability to model situations where a single trial of some process can only result in
one of two mutually exclusive outcomes (Glasgow Caledonian University, n.d., p. 19), where
0 ≤ x ≤ n, x (the number of successes) is an integer and n is the number of trials. The trials
are independent of each other (Anderson, 2014, p. 240). This distribution has two
parameters, n and p; p is the probability of success, hence (1-p) is the probability of failure
(Evans, 2000, p. 43). The value of p is unchanged during the process (Chao, 1980, pp. 163-
164). This distribution is the most widely used in reliability and quality inspections (Pham,
2006, p. 16), for example for hardware or system failure.

Negative binomial distribution: This discrete variable models the number of failures before
the xth success in a series of trials occurs. The trials can have only two possible outcomes
(Evans, 2000, p. 140). The parameters p (probability of success) and q (probability of failure)
= (1 – p) do not change, and the trials are independent (Stattrek, 2017).

A few frequency distributions have been briefly described for reference. It is useful to be
aware of the most widely used discrete distributions when fitting data to known
distributions. However, the industry standard has been mainly using Poisson and binomial.

8.6 MATHEMATICAL STRUCTURE OF THE AGGREGATED LOSS FUNCTION

This section describes the two main approaches to model the loss function: i) one
distribution, and ii) two distributions.

8.6.1 One distribution model

This approach entails the use of one function for the whole data range. As indicated in section
8.5.1, 31% of AMA firms use this approach, which is the simpler of the two as it requires
one set of parameters. This approach is more suitable for small to middle-sized firms, where it
is reasonable to infer that the pattern of the operational losses at the right tail will not
change along the range. Refer to appendix 8 for a practical example.

8.6.2 Two distribution model

For larger enterprises, the two distribution model may be more appropriate. This is because
it is reasonable to expect that the operational losses at the right tail (fat tails) follow a
different distribution, as described by Blunden (2013, p. 181); this heavy tail is not
exponentially bounded [Cope, 2009, p. 3, citing Asmussen (2003)]. As a result, the loss data
range is split into two subsets and then aggregated. Section 8.5.1 indicates that 48% of AMA
firms use this approach. Section 8.2.3 describes the theoretical framework of the ‘split
data sample’ and ‘splice function’ methods, which can be applied to the aggregated loss
function. Modelling extreme losses should be done with care due to the instability of
estimates and the sensitivity of the models (Cope, 2009, p. 27). An issue arises when there is
insufficient data available.

8.6.2.1 Split data function methodology

First: The loss function estimated in appendix 8 (based on stresses and a small set of loss
data) is assumed to capture a percentage P (where 0 < P < 1) of the universe of losses. So,
let ‘P’ be the probability of occurrence of a loss event that follows a distribution (d1), and
let (1 – P) represent the probability of an extreme loss event, which follows another
distribution (d2). The value of ‘P’ should be set by the firm. As a guide, it should be at
least the third quartile plus 1.5 times the inter-quartile range of the stress frequencies
(frequency in terms of the years estimated before the event occurs). This is a simple and
widely known formula to flag potential outliers in data sets (Anderson, 2014, p. 127) and
(Simmons, 2017).

Second: As the data is not sufficient, the mean of the extreme value distribution needs to be
estimated. The approach to estimating it follows Chebyshev's theorem as described in section
8.4.5. Using the Chebyshev formula P(|X − μ| ≤ kσ) ≥ 1 − 1/k², the procedure is as follows:

a. Obtain μ and σ from the loss distribution simulation output.


b. Calculate the value of ‘k’ consistent with the value of ‘P’ in the first step:

P = 1 − 1/k², so k = [1 / (1 − P)]^0.5.

c. The mean of the extreme value distribution function (X) will be estimated as:
X − μ = kσ, so X = kσ + μ,
where μ and σ are the calibrated parameters.

Third: Once the parameters are estimated, the distributions need to be selected. D1 can
directly be the loss function as indicated in appendix 8 [simulation function = Σ LN(X,W)], or
it can be replaced by a histogram, a kernel density distribution or a fitted known distribution
if available. D2, for simplification purposes, will be assumed to be a lognormal or Pareto
distribution. As illustrated in Figure 8, Pareto has the heaviest tail at the 99th percentile,
followed by the lognormal. In addition, Cope (2009, p. 3) states that they provide a good
in-sample fit to operational loss data sets.

Fourth: The two distribution model is constructed as follows:

Figure 9: Table describing the two distribution model

Distribution subsets    Probability    Distribution functions
d1                      P              - Original loss function [Σ LN(X,W)]; or
                                       - The fitted known distribution if available; or
                                       - Histogram or kernel density distribution.
d2                      (1 – P)        Y ~ Lognormal (μ, σ); or
                                       Y ~ Pareto (a, c)

Refer to appendix 9 for a practical example.
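A sketch of the construction, with a hypothetical d1 simulation output, an assumed value of P and a lognormal d2 whose mean is anchored by the Chebyshev-derived value:

import numpy as np

rng = np.random.default_rng(8)
n_iter = 100_000

# d1: simulated output of the original loss function (hypothetical stand-in).
d1_sim = rng.lognormal(mean=14, sigma=0.8, size=n_iter)
mu_hat, sigma_hat = d1_sim.mean(), d1_sim.std()

# Steps 1-2: choose P and derive the extreme-value mean via the Chebyshev bound.
P = 0.98
k = np.sqrt(1 / (1 - P))
extreme_mean = mu_hat + k * sigma_hat

# Step 3: assume d2 is lognormal with that mean and an assumed sigma.
sigma2 = 1.0
mu2 = np.log(extreme_mean) - sigma2**2 / 2

# Step 4: mixture - with probability P draw from d1, otherwise from d2.
from_d1 = rng.random(n_iter) < P
combined = np.where(from_d1, d1_sim,
                    rng.lognormal(mean=mu2, sigma=sigma2, size=n_iter))

print(f"99.5th percentile, one distribution: £{np.percentile(d1_sim, 99.5):,.0f}")
print(f"99.5th percentile, two distributions: £{np.percentile(combined, 99.5):,.0f}")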

8.6.2.2 Splice function methodology

The splice function has been described in section 8.2.3. The application of this tool to
construct the two distribution model requires the following parameters: distribution
function 1, distribution function 2 and the threshold or cut-off point between the
distributions. The parameters of these distributions need to be estimated by fitting data to a
known distribution as per section 8.2.3.

Figure 10: Table describing the splice function model

Sample data                                   Distribution functions
d1: Data subset which follows a distribution  The fitted known distribution (note that the
different from subset d2.                     splice function requires a known distribution).
d2: Data subset which follows a distribution  Y ~ Lognormal (μ, σ); or
different from subset d1.                     Y ~ Pareto (a, c)
Threshold (v)                                 v = minimum value observed in the d2 loss function.

The data set split and the splice functions are likely to produce much higher operational loss
values; this is relevant information for the scenario analysis. Refer to appendix 9 for a
practical example.

8.6.3 Stability of the Monte Carlo Simulation

To assess the stability of the Monte Carlo simulation model, it is necessary to empirically test
that a particular statistic converges as the number of iterations increases, as suggested by
Ballio (2004, p. 3).

Figure 11: Stability simulation test (99th percentile)

Number of iterations      10      100     1,000    10,000    100,000    1,000,000    5,000,000
99th percentile (£m)      4.73    7.65    9.52     10.41     10.27      10.51        10.53

Based on the simulation shown in appendix 8, a series of simulations was run using an
increasing number of iterations (from 10 to 5 million). As illustrated in figure 11, the 99th
percentile converges towards £10.5 million and appears to have stabilised after 10,000
iterations, beyond which the variation is much smaller. Consequently, the Monte Carlo
simulation model does produce stable, converging results.
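A sketch of such a stability check on a hypothetical compound Poisson-lognormal loss model (re-running the simulation with an increasing number of iterations and recording the 99th percentile):

import numpy as np

def simulate_99th(n_iter, seed=0, lam=1/15, mu=13.5, sigma=0.9):
    """Hypothetical compound Poisson-lognormal loss model; returns the 99th percentile."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, size=n_iter)
    max_n = max(counts.max(), 1)
    sev = rng.lognormal(mean=mu, sigma=sigma, size=(n_iter, max_n))
    losses = (sev * (np.arange(max_n) < counts[:, None])).sum(axis=1)
    return np.percentile(losses, 99)

for n in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    print(f"{n:>9,} iterations: 99th percentile = £{simulate_99th(n):,.0f}")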

8.7 EXTRAPOLATION TO 1 IN 200 YEAR EVENT

Sometimes it is necessary to extrapolate to a more severe operational loss. To do that, a
parametric model must be applied, as non-parametric models do not give any guidance outside
the observed data range (Cope, 2009, p. 9).

Cope (2009, p. 9, citing cf. Lingren, 1987) indicated that there are three approaches to
carrying out an extrapolation: i) a single mechanism for all observed data; ii) multiple
mechanisms that produce different loss events; and iii) extreme values that are anomalous
and, unlike the rest of the data, do not follow a pattern and so cannot reasonably be modelled.

Appendix 8 applied the first approach, whereby the capital is set at the 99th percentile of
the resulting loss function, which is one single mechanism. The proposed method is in line
with the second approach, whereby: i) each stress is extrapolated to a 1 in 200 year (99.5th
percentile) loss event using its loss distribution function; this computation generates the
vector ‘V’ of extrapolated stress values; ii) the Operational Risk capital results from
computing the square root of the sum of the squares of the extrapolated stress values, which
is the square root of the quadratic form V’UV, where ‘U’ is the identity matrix. This means
that all the stress events (the components of the V vector) are treated as independent;
therefore, independence among the events is an assumption of this approach. This formula gives
more weight to the most onerous stresses. Refer to appendix 10 for a practical example.
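A short sketch of this aggregation with a hypothetical vector of extrapolated stress values:

import numpy as np

# Hypothetical vector of stresses extrapolated to the 99.5th percentile (£).
V = np.array([4_200_000, 1_500_000, 900_000, 2_700_000])

U = np.eye(len(V))                      # identity matrix: the independence assumption
capital = np.sqrt(V @ U @ V)            # equivalent to the square root of the sum of squares

print(f"Operational Risk capital = £{capital:,.0f}")
print(f"check: £{np.sqrt((V**2).sum()):,.0f}")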

8.8 INTRODUCTION OF CORRELATIONS IN THE OPERATIONAL RISK CAPITAL CALCULATION

The European Banking Authority (2014, p. 104) suggests performing a correlation analysis
between the Operational Risks, in this case the stresses. ‘Correlation’ is the degree of linear
relationship between two variables, but it does not necessarily reflect causation between
them. The correlation coefficient is the statistic, ranging between -1 and 1, used to measure
the degree of association. When there is a perfect inverse relationship, the correlation
coefficient is -1; no correlation is denoted by a correlation coefficient of zero, and a
perfect positive correlation by a correlation coefficient of 1 (Anderson, 2014, p. 142). A
correlation matrix (‘A’) is the mathematical structure which reflects the relationship among
variables, in this case stress frequencies; in other words, how the occurrence of one stress
affects the occurrence of another. The ‘A’ matrix’s component a12 refers to the correlation
coefficient between stress 1 and stress 2. ‘A’ is a positive semi-definite matrix, which means
that it is square (n x n), symmetrical (aij = aji), internally consistent and satisfies
x’Ax ≥ 0 for any non-zero vector x (Wilks, 1943, p. 63). Consistency means that there is no
contradiction among the correlations (if stress 1 is correlated to stress 2, and stress 2 is
correlated to stress 3, then stress 1 is correlated to stress 3).

In order to incorporate this approach into the Operational Risk capital calculation the
following steps need to be followed: i) Establish the starting-point correlation levels (i.e.
high, medium, low or nil correlation). ii) Carry out a correlation analysis for each stress
pair. iii) Build the correlation matrix ‘A’. iv) Test that matrix ‘A’ is symmetrical, positive
semi-definite and consistent. If the matrix is not positive semi-definite, the software used
for computing the simulation (@Risk) is able to propose correlation coefficients that meet the
positive semi-definite requirement; the new coefficients maintain the same number of levels
initially set (in the example given, the number of levels is 4). v) Incorporate the
correlation matrix into the stress frequency vector. vi) Run a simulation of one million
iterations and identify the capital requirement at the relevant percentile (for example the
99.5th percentile). Refer to appendix 11 for a practical example.
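The sketch below illustrates steps iii) and iv) with a hypothetical 3 x 3 correlation matrix, and then shows one way of seeing the effect of dependence by substituting ‘A’ for the identity matrix in the quadratic form of section 8.7 (the steps above instead feed ‘A’ into the @Risk simulation of the stress frequencies):

import numpy as np

# Hypothetical correlation matrix between three stress frequencies.
A = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])

# Tests: symmetry and positive semi-definiteness (all eigenvalues >= 0).
assert np.allclose(A, A.T), "matrix is not symmetrical"
eigenvalues = np.linalg.eigvalsh(A)
assert eigenvalues.min() >= -1e-10, "matrix is not positive semi-definite"

# Illustration of the effect of dependence: sqrt(V'AV) versus the independent sqrt(V'V),
# using a hypothetical vector of extrapolated stress values as in section 8.7.
V = np.array([4_200_000, 1_500_000, 900_000])
capital_corr = np.sqrt(V @ A @ V)
capital_indep = np.sqrt(V @ V)
print(f"with correlations: £{capital_corr:,.0f}; independent: £{capital_indep:,.0f}")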

9. LIMITATIONS OF THIS MODELLING APPROACH

A common issue for Operational Risk modelling is the limited number of data points to draw
a conclusion about the risk of losses (frequency and severity). Although this paper
acknowledges this fact, and it proposes a methodology to overcome the issue, it must be
made clear in the conclusions that the lack of data is still a limitation.

The core component of the methods described is the use of Monte Carlo simulations. According
to Blanchett (2014, p. 1), an explanation of the past is easier than forecasting the future.
Monte Carlo simulations will throw light on the nature of the uncertainty, but only if the
method is understood along with its limitations:

First: The Monte Carlo simulation requires fitting available data to a known distribution. This
presents the following limitations: i) The number of data points has direct impact on the
accuracy of the output. The larger the data sample the more accurate the results. ii) The Chi-
square goodness of fit test gives a 95% confidence on the fit, therefore, there is still a 5%
chance of an error.

Second: There are no restrictions on the Monte Carlo simulation itself, but there are
limitations in the design of the model in which the simulated variables are inputs (Blanchett,
2014, p. 1). This means that even if the Monte Carlo simulation is 100% accurate, if the model
design is wrong the output will be wrong.

Third: Monte Carlo simulation may not factor in features such as autocorrelation (Blanchett,
2014, p. 1), collinearity or structural breaks, where the variable changes its behaviour.

Having said that, Monte Carlo is a very powerful tool to model randomness. It provides a
comprehensive perspective of the likely results versus the binary nature of a deterministic
prediction (Blanchett, 2014, p. 3).

10. CONCLUSIONS

Before carrying out a process of quantification of Operational Risk capital, an appropriate


Operational Risk management framework needs to be in place. This ensures that there is a
proper identification, analysis, management and recording of risks and loss events. This is
not only a good practice but a regulatory requirement. The Operational Risk management
framework requires the setting of the Operational Risk appetites. This is very important
because it sets the percentile of the loss function at which point the capital value lies. Loss
data is necessary to apply all the methodologies that have been proposed to quantify
Operational Risk. It has been acknowledged that the sample data size may be an issue and it
has been highlighted as a limitation in the modelling section. The Monte Carlo simulation,
although being a powerful tool, has some limitations that should be considered when
drawing conclusions about the capital requirement quantifications as derived from a Monte
Carlo model. There are different mathematical structures available to model severity. The
aim is to find the method that most accurately reflects the data. Independence or dependence
among stresses can also be incorporated into the model. The capital requirement increases
when the stresses are dependent; which is an intuitive inference. The validity of the results
requires that all modelling decisions are backed by statistical testing, qualitative rationale or
external relevant research. Additional topics would have been useful to develop such as
Bayes networks, scenario analysis, the h-and-h function for tail events, how to scale external
loss data and specific statistical methods to quantify certain stresses like outsourcing and
internal fraud for example. As well as the process of validations of results. These topics
should be part of an enhancement of this paper.

11. APPENDICES

Appendix 1 Regulatory requirements in relation to Operational Risk

Figure 12: Table describing UK regulation about Operational Risk

IFPRU This chapter contains guidance to help a firm understand the FCA's
Chapter 5 expectations on the extent to which the Advanced Measurement Approach
(AMA) should capture its Operational Risks where the firm has, or is about to,
implement AMA.

IFPRU As part of its obligations under the overall Pillar 2 rule, a firm must:
2.2.14 (1) make an assessment of the firm-wide impact of the risks identified in line
with that rule, to which end a firm must aggregate the risks across its various
business lines and units, taking appropriate account of the correlation between
risks;

(2) take into account the stress tests that the firm is required to carry out as
follows: (a) (for a significant IFPRU firm) under the general stress and scenario
testing rule (including SYSC 20 (Reverse stress testing)); (b) (except a firm in
(a)) under SYSC 20 (Reverse stress testing); and any stress tests that the firm
is required to carry out under the EU CRR;

(3) have processes and systems that: (a) include an assessment of how the firm
intends to deal with each of the major sources of risk identified in line with
IFPRU 2.2.7 R (2); and (b) take account of the impact of the diversification
effects and how such effects are factored into the firm's systems for measuring
and managing risks.

SYSC 3.1.1 A firm must take reasonable care to establish and maintain such systems and
R controls as are appropriate to its business.

SYSC 3.2.6 A firm must take reasonable care to establish and maintain effective systems
R and controls for compliance with applicable requirements and standards under
the regulatory system and for countering the risk that the firm might be used
to further financial crime.

SYSC This chapter provides guidance on how to interpret SYSC 3.1.1 R and SYSC 3.2.6
Chapter R, which deal with the establishment and maintenance of systems and controls,
13 in relation to the management of Operational Risk. Operational Risk has been
described by the Basel Committee on Banking Supervision as "the risk of loss,
resulting from inadequate or failed internal processes, people and systems, or
from external events". It also covers the systems and controls for managing
risks concerning any of a firm's operations, such as its IT systems and
outsourcing arrangements. It does not cover systems and controls for
managing credit, market, liquidity and insurance risk.

BIPRU For the purposes of BIPRU 6.5.6 R (2), a firm should be able to show that:
6.5.7 G
(1) its Operational Risk measurement systems and processes provide benefits
to the firm and are not limited to determining regulatory capital;

(2) the Operational Risk measurement system and framework forms part of the
systems and controls it has in place; and

(3) the Operational Risk measurement system and framework are capable of
adapting to the changes in the business of the firm and evolving as the firm
gains experience of risk management techniques.

Appendix 2 Example of Risk Appetite

Risk appetite is based primarily on the firm’s stand-alone ability and willingness to accept
deviations in the three main criteria of solvency, liquidity and earnings. The deviations will
be quantified by assessing the expected impact of stand-alone pre-defined key risk shocks
on these criteria, with the quantum of impact calibrated in line with the firm’s articulated
risk appetite statements. Risk appetite levels will be categorised as follows:

High risk appetite: The firm accepts the risk in order to achieve its business objectives. The
pre-defined risk shock to assess deviation to Solvency and Earnings is calibrated to a mild
impact level which is a relatively high frequency event, e.g. between a 1 in 5 and a 1 in 30-
year event.

Medium risk appetite: The Company accepts the risk as it is necessary to achieve its
business objectives, but it is supported with the appropriate control activities. The pre-
defined risk shock to assess deviation to solvency and earnings is calibrated to a severe
impact level, e.g. between a 1 in 31 and a 1 in 99-year event.

Low risk appetite: The Company actively seeks to avoid the risk, other than what is incurred
through the normal course of business. Control activities are implemented to minimise any
risk that is accepted. The pre-defined risk shock to assess deviation to solvency and earnings
is calibrated to an extreme impact level which is a relatively low frequency event, e.g. less
frequent than a 1 in 100-year event.

The company Board has set these appetites at a high level. Subsequently, setting a low
risk appetite for Operational Risk means for example that capital requirement should
be set at least at the 99th percentile of the loss function (1 in 100 years).

Appendix 3 Example of a Key Risk Indicator (KRI)

The following example is an Operational Risk related KRI: the monthly number of complaints.
Day-to-day operations necessarily mean that mistakes will be made and clients will complain.
Dealing with complaints fairly and adhering to deadlines is a regulatory requirement. In
addition, the number of complaints should follow a consistent pattern. Changes to this pattern
must trigger an investigation, because they may be a symptom of a problem that has not yet
reached its worst potential for damage. To design this simple KRI, the frequency of
complaints will be assessed as follows:

Figure 13: Chart describing the monthly number of complaints 2012-2016
[Chart: number of complaints per month over 60 months (2012-2016); linear trend
y = -0.1101x + 19.907, R² = 0.1053]

Five years of data have been gathered. It can be observed that there is a downward trend but
also that the coefficient of determination (R2) is only 10%; which means that there is a
random number of complaints which fluctuate around a slightly downward long-term trend.

Figure 14: Histogram describing the monthly number of complaints (99% confidence
boundaries)

The analysis of a histogram suggests the following points:

 The data follows a normal distribution with a mean of 16.5 and standard deviation of
5.9 (rounded up to 17 and 6 respectively).

 The specific complaints-related risk appetite is set as follows:
o High: 1 in 100 years plus
o Medium: 1 in 31 to 1 in 99 years
o Low: 1 in 5 to 1 in 30 years
 Since the Operational Risk appetite is low, the proposed risk appetite is the number
of monthly complaints within 80% and 97% confidence. The risk appetite is then
defined as “The monthly number of complaints fluctuates between 4 and 30 (97%
confidence)”.
 In addition, it is a good practice to set triggers for escalation to the relevant parties
(Senior Management or Risk Committee for example). In this case, the trigger is set
between 9 and 24 (80% confidence).

Figure 15: Chart illustrating the monthly number of complaints (trigger and appetite)
[Chart: monthly number of complaints with the risk appetite and risk trigger bands]

 The complete risk appetite is then defined as “The monthly number of complaints
fluctuates between 4 and 30, with a trigger between 9 and 24.”

 These boundaries are based on the normal distribution: N~(16.5, 5.9) fitted from the
complaints data.
 These boundaries should be reviewed and approved by the Board on an annual basis.

Appendix 4 Example of “Hardware failure stress”

Stress description

A hardware failure causes a major corruption of data in one server that then replicates in all
other servers. This results in batch processing failing and all applications being unavailable
until the problem is resolved. The resolution of this issue does not require moving staff to a
recovery site. It requires time and resources to recover the hardware failure and restore any
corrupted data.

The period required to restore the systems has been assumed not to exceed 72 hours. It
normally takes up to four hours for the systems and key processes to be restored in Business
Continuity Planning (BCP) events. Using 72 hours for this stress is considered an extreme
scenario. The risk capital cost of a failure of software is based on the cost of compensating
any clients who suffered detriment caused by the inability to execute trading instructions
submitted prior to the failure.

Whilst we have assumed that no clients are financially disadvantaged under this scenario we
have factored a level of compensation into the stress scenario to address conduct failings.

It should be noted that there is no contractual obligation to pay compensation to clients but
it has been assumed that an amount per trade would be paid to dampen reputational risk
impacts. The loss is estimated as follows:

 Using the highest volume of pooled daily trades from 1/10/2015 to 30/09/2016.
 The affected trades are based on the average of the equity and fund values traded
from 01/10/2015 to 30/09/2016.
 Trade volumes are split between equity and funds.

 All trades will be affected for one day, assuming we don’t take further trading
instructions following the hardware failure.
 The trades are manually executed on the following day leaving the company exposed
to one day price variance. The price change can produce a loss or a gain. The company
will give the gains to the client and absorb the losses. On average from 01/10/2015
to 30/09/2016, 48% of prices went up, 47% went down and 5% did not change. It is
reasonable to assume that 50% of the affected trades will bring about a loss.
 Funds and equity movements: We internally researched the FTSE100 index’s daily
movements (as a proxy for equity movement) from 01/10/2006 to 13/10/2016. The
results show that, with a confidence level of 99%, the daily index fluctuations are
between -4.97% and 4.28%. Therefore, the movement is set to 5% for equities. Half
the equity movement is applied to funds due to the lower volatility experienced in
fund values, i.e. 2.5% for funds.
 The 5% increase in equity value is based on the FTSE100 data from 01/10/2006 to
13/10/2016.

Basis for the assumptions

The 5% increase in equity value is based on the FTSE100 data from 01/10/2006 to
13/10/2016. This is the full data set:

Figure 16: Chart showing the movement in the FTSE100 index between 1/10/2006 and 13/10/2016
[Chart: daily FTSE100 index, 1/10/2006 - 13/10/2016]

Evaluating the historical movements suggests we can be 99% confident that the daily
fluctuations will be between -4.97% and 4.28%, as shown in the following chart:

Figure 17: Histogram showing the fluctuations in the FTSE100 index between 1/10/2006
and 13/10/2016

The new scenario which differs from last year is that we are now in a ‘post triggering article
50’ situation. This has introduced higher levels of uncertainty and will continue to bring
about volatile markets until the UK agrees the post-Brexit relationship with the EU, which
could take many years to resolve.

Given that the 99% confidence for FTSE100 daily fluctuations ranges from -4.97% and 4.28%,
5% is therefore considered to be a reasonable assumption in relation to a sharp single day
market movement against us.

Based on our own data, the changes of equity and funds have been analysed, on a daily basis
since 2009 (91 months). The original data captures monthly average of the daily equity (E)
and fund (F) prices. Taking the variation between data points, we derive the average daily
change of those assets:

Figure 18: Chart describing the ratio of equity to fund price changes over time
[Scatter chart: fund price change (y) against equity price change, with linear fit
y = 0.4992x + 0.0104, R² = 0.7925]

As shown in the chart, there is a positive correlation between the movement of equities and
funds. The linear regression suggests that funds vary approximately half the amount of
equities.

Average daily price movement on all products in the platform (from 01/10/2015 to
30/09/2016):

Figure 19: Table describing the average daily product price movement

Average daily product price movement


Price movement direction Total Percentage
Down 1,124,978 47%
Same 120,037 5%
Up 1,146,487 48%
Total 2,391,502 100%

Stress results summary

Figure 20: Table with stress testing final result


Stress summary £m Reasoning
Client impact costs 1.288 Calculation 1
Expected stress loss value 1.288

Quantification of severity

Figure 21: Table describing the stress testing severity quantification

Calculation 1 ‘Client impact costs’

Internal data:   Best estimates £1.288m | Loss data £0.003m | Simulation £0.774m

Best estimates – Client impact

- Highest pooled trade (total volume of pooled trading on the highest daily trade,
  10/06/2016; data from 1 October 2015 to 30 September 2016): 4,830 in total, of which
  386.4 equity and 4,443.6 funds. The split between equity and funds is based on the
  proportion over the whole year: 8% equity and 92% funds.
- Pooled trades affected by the process failure: 100%, i.e. 386.4 equity and 4,443.6
  funds, since all trades miss the deadline.
- Percentage of asset value that moves against the company: 50%, i.e. 193.2 equity and
  2,221.8 funds, assuming 50% of asset prices will move against the company.
- Average value of equity and fund trades: 10,415.0 (equity) and 21,379.0 (funds). This
  is the average value of an asset based on a daily pooled trade during 1 October 2015
  to 30 September 2016.
- Affected trades x value of the trade: 2,012,178.0 (equity) and 47,499,862.2 (funds).
- Market movement against the company: 5.0% (equity) and 2.5% (funds), based on
  observed volatility in the past and an expectation of higher volatility due to Brexit.
- Resulting loss: 100,608.9 (equity, 5% x 2,012,178.0) and 1,187,496.6 (funds, 2.5% x
  47,499,862.2).
- Total loss of affected trades: £1,288,105.50.

The Operational Risk event register has recorded a few loss events resulting from delayed
/ missed trades from 01 October 2015 to 30 September 2016. There is only one event
which caused a loss of over £1,000.

All the trading MI is provided by a report called “Time until monies received and paid”.

Loss data
Loss value (£) Error resolution year Error resolution month
200.10 2016 January
2,957.00 2016 May
13.10 2016 July
750.00 2016 July

Note that the loss data refers to the last twelve months. The reason is that systems and
processes have significantly changed in the last twelve months; therefore, previous loss
data in relation to this particular process is no longer relevant.

In addition, another factor to consider is elapsed time. The loss will depend not only on the
market movement but also on the time gap between when the error actually occurred and
when it was discovered. The wider the time gap the more likely it is that the movements
(and losses) will be higher. For the purpose of the stress, the time gap is assumed to be one
day.

Simulation approach
Client impact £0.774m

The simulation design has been based on the best estimate framework. The following
variables have been assigned a distribution:

1) The daily ‘pooled trades’ follow a discrete distribution derived from the
histogram based on the 263 data points of the pooled trades obtained from the
‘Time until monies received and paid’ report:
2) The distribution output gives the total number of trades. This is then split into
equity and funds in accordance with their share in the trades: Equity (8%) and
Funds (92%).
3) From the trading activity report we obtain the total value traded per day along
with the number of trades in the pool. By dividing the total value by the number
of trades we calculate the daily average value of a single trade for equity and
funds. Using this time series data, the equity and fund values have been fitted to
a continuous distribution (there were 263 data points available):

The equity value is distributed loglogistic with parameters Gamma = -450.79,
Beta = 9,754.7 and Alpha = 4.445. This distribution passes the Chi-squared goodness
of fit test.
The fund value does not pass any goodness of fit test. To address this issue:

1- The data set is split into two subsets.
2- Each subset of data is fitted to a known distribution and tested for fitness.
3- Different splits are tried until both subsets pass the Chi-squared test of
goodness of fit. The final distributions resulting from the process are as follows:

Discrete distribution split     Distribution per data subset
90%                             Logistic (15,810.0 ; 1,660.4)
10%                             Lognorm (46,434.3 ; 133,473.6)

The 21 percentiles from the actual data set, in comparison with the same points from the
simulated distribution, have the same mean and standard deviation and pass the Chi-squared
test of goodness of fit.

[Chart: cumulative functions of the observed (£k) and expected (£k) values across the 1st
to 99th percentiles]

4) The design followed the same mathematical structure as the best estimates:

- Pooled trades (A): a discrete distribution, with 8% of assets in equity and 92% in
  funds. The volume of pooled trades equates to the discrete distribution function
  output; the proportion per product type is based on the total pooled trades.
- Trades affected: 100% x A for both equity and funds, as all trades will be missed.
- Volume of trades affected that moved against the company (B): 50% x A for both
  equity and funds.
- Value of asset affected (C): the loglogistic distribution value for equity and the
  split distribution value (the 90% / 10% split fitted above) for funds. These
  distributions were obtained by fitting the pooled trade data.
- Assets value (D): equity value affected = B x C; fund value affected = B x C. This
  results from multiplying the ‘volume of trades affected that moved against the
  company’ by the ‘value of assets affected’.
- Assets price movement against the company: 5% for equity and 2.5% for funds. This is
  based on the 0.5th and the 99.5th percentiles of the daily change of the FTSE
  All-Share for equity, applying 50% of that change to funds. The 50% relationship is
  explained in the Client Operations and Trading Operations stress.
- Loss (L): L1 = 5% x D for equity and L2 = 2.5% x D for funds, obtained by multiplying
  the asset value affected by the assets movement rate.
- Total loss (TL): L1 + L2. This is the total loss function (TLF).

5) After running a simulation of 1 million iterations (on the TLF), the mean of the
stress function is £0.774m. Therefore, the capital requirement derived from the
simulation approach is £0.774m.

Having assessed the internal loss data, simulation and best estimates, it has been
concluded to assign the maximum value of £1.288m to the internal data.

External data £0.257m

Loss data case study Amount


(£m)
In 2006, a fat-finger error by a trader at Mizuho Securities in Japan caused 189
the firm to short sell a stock in an error that cost the firm 40 billion Yen
(circa £189m) to unwind.
http://lexicon.ft.com/term?term=fat-finger-error

SEC sanctioned a California RIA for wrongful conduct related to trading 0.257
errors for making a stock trading error resulting in a loss of $400,000.
Instead of absorbing the loss from the trading error, the RIA passed it along

to clients. By doing so, the RIA violated Sections 206(1) and 206(2) of the
Investment Advisers Act, the anti-fraud sections of the statute. These are
the most serious violations an RIA can commit.
http://www.thinkadvisor.com/2012/01/01/trading-practices-and-errors

In 2012, Knight Capital, a firm that specialises in executing trades for retail 282
brokers, took $440m in cash losses due to a faulty test of new trading
software.
http://www.theregister.co.uk/2012/08/03/bad_algorithm_lost_440_millio
n_dollars/

In 2014, a Japanese broker erroneously placed an order for more than 370
US$600bn (£370bn) of stock in leading Japanese companies, including
Nomura, Toyota Motors and Honda which was subsequently cancelled. If
the order had been fulfilled it would have exceeded the value of the
economy of Sweden. It did not crystallise as a loss.
http://www.bbc.co.uk/news/business-29454265

In 2015, a junior employee at Deutsche Bank whose superior was on 3,884


vacation confused gross and net amounts while processing a trade, causing
a payment to a US hedge fund of $6bn, orders of magnitude higher than the
correct amount. The bank reported the error to the British Financial
Conduct Authority, the European Central Bank and the US Federal Reserve
Bank, and retrieved the money on the following day. The trade was never
settled or crystallised as a loss.
http://www.bloomberg.com/news/articles/2015-10-19/deutsche-bank-
error-sent-6-billion-to-fund-in-june-ft-reports

In 2016, $23.5m (£18.6m) trading error led to a higher offer price for OSIM. 18.6
Credit Suisse, who is acting on behalf of a client Ron Sim, revealed that it

85
had “inadvertently” purchased almost 17 million ex-dividend shares
between prices of $1.37 and $1.39. The loss did not crystallise.
http://sbr.com.sg/markets-investing/news/here%E2%80%99s-how-
235m-trading-error-led-higher-offer-price-osim#sthash.cReZ4d5e.dpuf

The following data includes trading processing errors related external events:

From the external data collected, only three events caused a loss (£189m, £0.257m and £282m). The other cases, although related to a trading error, resulted more in damage to the reputation of the companies responsible for the error than in an actual financial loss.

The stress is about not being able to place the trade on time, which results in a loss as the market moves against the company. The case study that most closely matches this stress is the RIA one. In the case of the RIA, there was an unlawful trading policy which, combined with a market movement against them, brought about a loss. This loss was initially passed on to the clients instead of being absorbed by the company; the US regulator then intervened and obliged the company to return the funds to its clients. Therefore, the external data suggests £0.257m.

SME opinion £1.288m

Risk Management discussed the stress with the Head of Trading Operations and the Trade Operations Manager and explained the approach and figures obtained. They agreed with the simulation figures, which are not too far off the best estimates. However, they pointed out that in such an event the company would try to protect its reputation by absorbing the losses to the extent that doing so remains within risk appetite. If the loss resulting from a trading error is outside appetite, the company has the following options:

 Absorb the loss and take the gain. In such a situation, the losses and gains would on average net off to zero.

 Invoke the T&C, place the trades as soon as possible and take no losses resulting from the market movement. The T&C state that, by opening a portfolio, the client consents to a number of policies. The policy to be invoked is the company’s order execution policy.

For the purpose of the stress, the SMEs agree to assign the highest value following a
conservative approach.

Conclusion £1.288m

Client impact
Available data | Capital required
Internal data – Best estimates | £1,288,106
Internal data – Loss data | £2,957
Internal data – Simulation | £774,000
External data | £369,000
SME opinion | £1,288,106

Risk Management decided to assign to the stress the highest value, £1.288m.
Quantification of frequency

Figure 22: Table describing the quantification of the stress frequency


Quantification of frequency of the hardware failure stress

Internal data 1/15

Internal incident data indicates that a significant hardware failure has occurred once in the company’s lifetime, which corresponds to once in 15 years of operations.

External data 1/70

External data (http://research.microsoft.com/pubs/144888/eurosys84-nightingale.pdf) suggests the following:

Source of failure | Finding | Time period | Extrapolated into a 1 in x year event
CPU | 1 in 190 machines | 8 months | 1 in 127 years
DRAM | 1 in 1,700 machines | 8 months | 1 in 1,133 years
Disk | 1 in 270 machines | 8 months | 1 in 180 years

Using this data, the following deduction has been made (as these are rare events, the probability of the union is approximated by the sum of the individual probabilities):

P(CPU ∪ DRAM ∪ Disk) ≈ P(CPU) + P(DRAM) + P(Disk) = 1/127 + 1/1,133 + 1/180 = 0.0143

This means that each server will fail roughly once in 70 years.
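A quick arithmetic check of this deduction:

p_cpu, p_dram, p_disk = 1/127, 1/1133, 1/180
p_any = p_cpu + p_dram + p_disk      # rare events, so the union is approximated by the sum
print(p_any, 1 / p_any)              # ~0.0143, i.e. roughly 1 failure in 70 years per server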

SME opinion 1/30

IT Manager’s opinion: Whilst it is true that hardware can fail, due to the configuration of both individual devices and the infrastructure as a whole, the threat of a cascading failure has been mitigated as far as is practically possible. As an example, although individual hard drives are perhaps the components most prone to failure, in their primary role as mass storage devices they are configured in arrays of disks: RAID 5 allows for the failure of one disk, whilst RAID 6 allows for the failure of two concurrent disks. Using virtualisation technology also provides an extra level of resilience: should any one virtual host fail, the servers hosted there will migrate to another host. Whilst there will be some disruption to service during this migration, it is anticipated that this will be limited in duration and not system-wide. Further mitigation of the risk of extended downtime is achieved by keeping spares for component parts and reviewing the life of hardware each year: hardware older than 3 years is reviewed closely and hardware over 4 years is scheduled for replacement. Overall, the IT Manager suggests that the frequency of such an event would be around 1 in 30 years, which is mainly derived from the internal data and the improvements already made, and due to be made, in the IT environment.

Risk Management will consider a frequency of 1 in 30 years, based on the average point of the SME opinion.

Conclusion 1/15

Considering the information obtained from the various sources:

Internal data | 1 in 15 | 0.0667 | D&SF | Probability of a severe business disruption event given that there was a hardware failure, P(D|SF).
External data | 1 in 70 | 0.0142 | A | Probability of one server failing in a year, P(SF).
SME opinion | 1 in 30 | 0.0500 | D&SF | Probability of a severe business disruption event given that there was a hardware failure, P(D|SF).

In order to determine the probability that a hardware failure causes business disruption, a simulation is designed with the information available. The simulation is formulated as follows:

Server | Number of server failures in a year, Poisson(1/70) | Internal data: Binomial(n, p=1/15) | SME opinion: Binomial(n, p=1/30)
1 | n1 | m1,1 | m1,2
2 | n2 | m2,1 | m2,2
… | … | … | …
150 | n150 | m150,1 | m150,2
Simulated function | | Sum from 1 to 150 of disruptions caused by hardware failure | Sum from 1 to 150 of disruptions caused by hardware failure

Results | | 1 in 20 years | 1 in 10 years
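A minimal Python sketch of this simulation design is shown below. The server count of 150 is taken from the table; the run length and the way the “1 in X years” figure is read off the simulated distribution are assumptions, so the output will not necessarily reproduce the 1-in-20 and 1-in-10 results exactly.

import numpy as np

rng = np.random.default_rng(7)
n_years, n_servers = 50_000, 150                                # 150 servers, as in the table

def disruption_frequency(p_disruption):
    failures = rng.poisson(1 / 70, size=(n_years, n_servers))   # server failures per year
    disruptions = rng.binomial(failures, p_disruption)          # failures that cause a disruption
    share_of_years = (disruptions.sum(axis=1) > 0).mean()       # years with at least one disruption
    return 1 / share_of_years                                   # "1 in X years"

print("Internal data (p = 1/15):", disruption_frequency(1 / 15))
print("SME opinion  (p = 1/30):", disruption_frequency(1 / 30))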

Summary of hardware failure stress frequency
Available data Probability
Internal data (disruption given a hardware failure) 1/15
External data (Hardware failure) 1/70
SME opinion (disruption given a hardware failure) 1/30

Based on these probabilities, the simulation results for the conditional probability of a business disruption given that there was a hardware failure lie between 1/20 and 1/10; therefore, Risk Management set this at the midpoint, 1 in 15 years. The findings were presented to the SME, who agrees with the rationale and the final frequency of the stress.

Controls in place

Figure 23: Table describing controls


Key controls

The company has put in place a number of controls to mitigate hardware failure such as:

 Back-up servers exist at main office and can be accessed if the primary system is
not available.
 An automated hardware review is now sent quarterly to the IT team to monitor
the age of hardware.
 System monitoring tools that alert the IT department of hardware failures.
 Mirror system approach: This means that data is hosted on multiple servers in
multiple locations.

Other controls

Additional controls focus on ‘redundancy’ and backup such as:

 Dual power supply.


 Dual network card.
 RAID: a hardware configuration that uses more disks than strictly required, providing redundancy.
 IT holds some spare parts on site.
 UPS and clean power supply.
 Back-up to disk: Business Critical data is backed up to disk on a daily basis (two
weeks of data are retained). In addition, daily back-ups are replicated offsite to
Colo.
 Back-up tapes: Disk back-ups are backed up to tape every two weeks. Tapes are
also stored offsite.
 Dual hardware where appropriate.

Appendix 5 Example of the splice function

The splice function is suitable for bimodal distributions. It is not uncommon for operational risks to behave differently at the right tail, indicating the existence of high-impact / very low-frequency events. The fund distribution from appendix 4 will be used to provide an example. However, it needs to be pointed out that the two distributions have been estimated through the previous methodology; the splice function could not have been used otherwise. In addition, with this methodology the probability associated with each distribution (v1) cannot be imposed; it is estimated. Using a software aid, @Risk, the splice function is estimated as follows:

Figure 24: Table describing the splice function calculation

F1(x) distribution Logistic (15,810.0 ; 1,660.4)

F2(x) distribution Lognorm (46,434.3 ; 133,473.6)

Splice point (c) 38,559 (average between the last value of the first
distribution and the first value of the second distribution)
Splice function output
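For illustration, a minimal Python sketch of how a two-piece (splice) density can be built and sampled from the two fitted distributions above. @Risk's exact splice implementation and its Lognorm(mean, sd) parameterisation are assumptions here; the weight of each piece is taken as its renormalised probability mass on its side of the splice point.

import numpy as np
from scipy import stats

f1 = stats.logistic(loc=15_810.0, scale=1_660.4)
# Lognorm(mean, sd) as reported -> convert to scipy's (s, scale) parameterisation
mean, sd = 46_434.3, 133_473.6
s2 = np.log(1 + (sd / mean) ** 2)
f2 = stats.lognorm(s=np.sqrt(s2), scale=mean * np.exp(-s2 / 2))

c = 38_559.0                                 # splice point
k = f1.cdf(c) + (1 - f2.cdf(c))              # normalising constant
w1 = f1.cdf(c) / k                           # estimated probability (v1) of the left piece

rng = np.random.default_rng(0)
def sample(n):
    u = rng.random(n)
    left = u < w1
    x = np.empty(n)
    # inverse-transform sampling within each truncated piece
    x[left] = f1.ppf(rng.uniform(0, f1.cdf(c), left.sum()))
    x[~left] = f2.ppf(rng.uniform(f2.cdf(c), 1, (~left).sum()))
    return x

print(w1, np.percentile(sample(100_000), 99.5))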

Chi-squared test of goodness of fit

[Chart: cumulative distribution functions of the observed and expected values (£k), plotted at percentiles from 1% to 99%.]

In this case the splice function is not appropriate to model the funds, as the simulation output does not pass the Chi-squared test of goodness of fit. It is clear from the chart that at the right tail there is a significant deviation between the actual data and the splice function.

Appendix 6 Example of the Kernel density function

Let (x1, x2, …, xn) be an independent and identically distributed sample drawn from some
distribution with an unknown density “g”. The goal is to estimate the shape of this function g.
Its kernel density estimator is:

ĝ(x) = (1 / (n·b)) Σᵢ₌₁ⁿ K((x − xᵢ) / b), where

“K(.)” is the kernel function: a non-negative function that integrates to one. There are many types of kernel functions; the most commonly used are the Gaussian, Epanechnikov and triangular kernels (Balonce, 2012, p. 31). The software used in this example uses the Gaussian kernel. Its formula is:

K(x) = (1 / (2π)^0.5) * Exp(-0.5 * x^2)

“b” is the bandwidth, also called the smoothing parameter; b > 0. The parameter b can be estimated with the following formula (Silverman, 1986, p. 48):

b ≈ 1.06 * σ * N^(-1/5), where σ is the sample standard deviation and N is the total number of data points in the sample.

“n” is the number of data points within the bin and bracket.

“x” is the data point at which point the Kernel density is calculated; which can be estimated
as the mean of the values within each bin or bracket.

“xi” is a data point within the bin that contains ‘x’.

With the aid of computer software, the Kernel density function can be estimated by inputting
the number of estimated data points. Based on the pooled trade example, b is estimated and
then the number of data points.

b = 12,159 Number of x points = Range / b = 482,672 / 12,159 = 39.7 ≈ 40

Below is the software output:

Figure 25: Chart illustrating the Kernel density function

[Chart: the estimated Kernel density ĝ(x) plotted against x over the range £0 to approximately £600,000.]

X is on the horizontal axis and ĝ(x) on the vertical axis. Note that ĝ(x) represents the height of the density function. As the ĝ(x) values do not sum to 1, to obtain a discrete probability distribution each value is divided by the total, giving relative frequencies P(x) that sum to 1.
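A minimal Python sketch of the Gaussian kernel density estimate and the normalisation described above. The pooled trade data is not reproduced here, so the input file name is a placeholder, and the standard full-sample form of the estimator is shown rather than the per-bin evaluation used in the text.

import numpy as np

def gaussian_kde(grid, data, b):
    z = (grid[:, None] - data[None, :]) / b            # scaled distances, shape (points, n)
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)     # Gaussian kernel K(.)
    return k.sum(axis=1) / (len(data) * b)             # g_hat(x) at each grid point

data = np.loadtxt("pooled_trades.csv")                  # placeholder for the pooled trade values
b = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)     # Silverman (1986) bandwidth
grid = np.linspace(data.min(), data.max(), 40)          # ~40 evaluation points, as above
g = gaussian_kde(grid, data, b)
p = g / g.sum()                                         # normalised to the discrete P(x) used in the stress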

Figure 26: Table describing the Kernel density function

X ĝ(x) P(x)
6,029 5.36011E-08 0.07%
18,554 7.30054E-05 92.44%
31,079 7.39839E-07 0.94%
43,604 2.45467E-06 3.11%
56,129 4.43487E-07 0.56%
68,654 1.60861E-10 0.00%
81,179 4.25296E-71 0.00%
93,704 1.8174E-204 0.00%
106,229 0 0.00%
118,754 0 0.00%
131,279 0 0.00%
143,804 4.1442E-139 0.00%
156,329 1.43944E-36 0.00%
168,854 7.75452E-07 0.98%
181,379 6.47914E-50 0.00%
193,905 8.3962E-166 0.00%
206,430 0 0.00%
218,955 0 0.00%
231,480 1.9395E-148 0.00%
244,005 4.30781E-41 0.00%
256,530 1.48397E-06 1.88%
269,055 7.92864E-45 0.00%
281,580 6.5701E-156 0.00%
294,105 0 0.00%
306,630 0 0.00%
319,155 0 0.00%
331,680 0 0.00%
344,205 0 0.00%
356,730 0 0.00%
369,255 0 0.00%
381,780 0 0.00%
394,305 0 0.00%
406,830 0 0.00%
419,355 0 0.00%
431,880 0 0.00%
444,405 0 0.00%
456,930 1.6138E-285 0.00%
469,455 2.2995E-120 0.00%
481,980 5.08211E-28 0.00%
494,505 1.74201E-08 0.02%
Total 0.00007897403 1

The distribution to be used in the stress is the discrete distribution defined by the values of x and P(x).

Appendix 7 Example estimation of the Poisson distribution

In relation to the stress frequencies, as explained, the default frequency distribution will be Poisson. Each stress has a frequency assessment expressed as “1 event in X number of years” (1/x years). Consequently, the value of λ is set as 1/x.

Applying this to the stress in appendix 4, 1 in 20 years (1/20) gives a λ= 0.05.

Figure 27: Chart illustrates the Poisson distribution with Lambda = 0.05

If the frequency needs to be modelled from the loss data, assuming a large number of data points, the value of λ is estimated with its best estimator, the sample mean (Evans, 2000, p. 160).

The following table is an example of summarised loss data required to estimate the mean:

Figure 28: Table describing the estimation of the Lambda parameter of a Poisson distribution

Year  Number of operational losses above £10,000
2005 3
2006 7
2007 4
2008 3
2009 7
2010 2
2011 3
2012 6
2013 4
2014 3
2015 6
2016 3
Mean 4.25

Therefore, the estimated value of λ= 4.25.
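A minimal Python sketch of this estimation, using the counts in figure 28:

import numpy as np

counts = np.array([3, 7, 4, 3, 7, 2, 3, 6, 4, 3, 6, 3])   # annual losses above £10,000, 2005-2016
lam = counts.mean()                                        # best estimator of lambda = 4.25
rng = np.random.default_rng(0)
annual_loss_counts = rng.poisson(lam, size=1_000_000)      # simulated number of losses per year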

Figure 29: Chart illustrating a Poisson distribution with Lambda = 4.25

Appendix 8 Operational risk loss function

Severity lognormal distribution parameters calibration

Calibration of μ

Each stress will have a calibrated value of mu based on the following formula, which is calculated in figure 30 (the ‘Estimator mu = Ln(x) − Var/2’ column):

µ = Ln(x) − σ²/2

Calibration of σ

i) Calibration based on the internal loss data:


The formula to calculate sigma requires v and m. As described, these statistics are based on
the actual historical internal loss data. The internal loss data considered the following:
 An observation period of five years (from 1 January 2012 to 31 December 2016):
BIPRU 6.5.21 R (2) states that “A firm's internally generated operational risk
measure must be based on a minimum historical observation period of five years.
When a firm first moves to the advanced measurement approach, a three-year
historical observation period may be used.” The error management process captures
internal loss data. It is expected that losses more than five years ago would no longer
reflect the company’s processes, system, people, controls and the external
environment and so not be relevant.
 The losses are at least £500: Internal loss data above £100 is available. Lower losses (high frequency / low severity) are not considered appropriate to calibrate losses at the 99.5th percentile. On average over the five-year period, there were 183 losses per year. The financial projections model has projected an annual loss of £90,000 (under compensation and trading error losses). This means that the projections incorporate £492 per loss to meet operational losses, which is rounded up to £500. Additionally, the data set of losses above £500 passes the Chi-squared test of goodness of fit for a lognormal distribution. We are aware that we are using truncated data; no adjustment to the lognormal distribution parameters µ and σ is made because a) there is no material increase in the value of sigma if the threshold reduces to £300;

and b) below this threshold the internal loss data no longer statistically fits a
lognormal distribution.
 Extreme data was checked and adjusted: in 2012 there was a fine issued by the regulator due to a CASS breach. Since then, the company has implemented internal changes which have significantly improved processes, people training and systems, supported by independent internal and external CASS reviews. On this basis, the CASS-related risk event has now been assessed as a CASS rule breach stress, which describes an extreme but plausible CASS risk should it materialise. Therefore, the 2012 CASS fine is scaled down by the CASS stress loss estimated in 2016.
Results of the calibration:

V (variance of the internal loss data) 462,421,055


M (mean of the internal loss data) 3,555
Calibrated Var = Ln(1 + v/m²) 3.626
Calibrated Sigma 1.904
Calibrated Var / 2 1.813
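A minimal Python sketch of this calibration, using the figures above:

import math

v, m = 462_421_055.0, 3_555.0            # variance and mean of the internal loss data
var = math.log(1 + v / m ** 2)           # calibrated Var ~ 3.626
sigma = math.sqrt(var)                   # calibrated sigma ~ 1.904

stress_mean = 1_288_106.0                # e.g. the client impact stress (£)
mu = math.log(stress_mean) - var / 2     # ~12.255, matching the 'Estimator mu' column in figure 30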

ii) Calibration based on the stress losses:

Applying the sample standard deviation formula on the logarithms of the stress losses, the
calibrated value of sigma is 0.8042.

iii) The combined standard deviation between the internal loss data and the stress losses:

The calibrated value of sigma following this approach is 1.3277.

Having calculated and considered all the approaches to calibrate sigma, it is prudent to assign the highest value of 1.904, which is based on the internal loss data only.

Frequency distribution parameter calibration

In order to complete the simulation model and generate the compound distribution of the
stress event and ultimately the aggregated loss function, it is necessary to model the
frequency of the stress event.
The stress quantification process also includes the quantification of the stress frequency.
Risk Management in conversation with the business has agreed the expected frequency of
the stress event as described in terms of the occurrence of the event in a number of years.
For example, the regulatory (tax failure) stress has been assigned the frequency of 1 in 30
years (or 1/30).
Following industry standards and a number of papers issued by recognised organisations (e.g. PwC), it will be assumed that the stress frequency follows a Poisson distribution. This is because the Poisson distribution has the following properties: i) it is a discrete distribution; ii) it is a simple model, since it only has one parameter, lambda (λ); and iii) lambda equals both the mean and the variance of the distribution.
The parameter λ, is calibrated to the frequency of the stress. For example, the frequency of
the regulatory (tax failure) will be distributed Poisson (λ=1/30).

Construction of the compound aggregated loss distribution function

Figure 30: Table describing the quantification of the loss function

Stress event | Mean stress value (£m) | Ln(stress value) | Var/2 | Estimator mu = Ln(x) − Var/2 | Stress frequency | Mu | Frequency P(λ = 1/frequency) | Severity (S) | Compound loss function
Stress 1 | 2.025 | 14.5211 | 1.8134 | 12.7077 | 1 in 26 years | 12.6263 | P(λ=1/26) | Exp(12.6263 + 1.904*Z) | F(W1, S1)
Stress 2 | 1.288 | 14.0686 | 1.8134 | 12.2552 | 1 in 20 years | 12.1739 | P(λ=1/20) | Exp(12.1739 + 1.904*Z) | F(W2, S2)
Stress 3 | 1.035 | 13.8499 | 1.8134 | 12.0365 | 1 in 30 years | 11.9552 | P(λ=1/30) | Exp(11.9552 + 1.904*Z) | F(W3, S3)
Stress 4 | 0.441 | 12.9968 | 1.8134 | 11.1834 | 1 in 50 years | 11.1021 | P(λ=1/50) | Exp(11.1021 + 1.904*Z) | F(W4, S4)
Stress 5 | 0.688 | 13.4415 | 1.8134 | 11.6281 | 1 in 170 years | 11.5468 | P(λ=1/170) | Exp(11.5468 + 1.904*Z) | F(W5, S5)
Stress 6 | 0.375 | 12.8347 | 1.8134 | 11.0213 | 1 in 17 years | 10.9399 | P(λ=1/17) | Exp(10.9399 + 1.904*Z) | F(W6, S6)
Stress 7 | 0.338 | 12.7308 | 1.8134 | 10.9174 | 1 in 7 years | 10.8361 | P(λ=1/7) | Exp(10.8361 + 1.904*Z) | F(W7, S7)
Stress 8 | 0.176 | 12.0782 | 1.8134 | 10.2648 | 1 in 8 years | 10.1835 | P(λ=1/8) | Exp(10.1835 + 1.904*Z) | F(W8, S8)
Total | 6.366 | | Sigma = 1.904 | | | | | | Σ F(Wi, Si)
Figure 31: Chart illustrating the loss function (Simulation output)

A simulation of one million iterations is run on the function Σ LN(X,W); these are the results:

Simulation results
1 in 200 years (£m) 8.499
1 in 100 years (£m) 4.915

The Operational Risk capital will be set at the 1-in-200-year level because this is an expectation of the regulator. Therefore, the amount of capital required is £8.499m.

The example has eight stresses. The ‘stress value’ is the expected loss resulting from the event described in the stress. The frequency estimated per stress is 1 event in a number of years, and it has been assumed that the frequency follows a Poisson distribution (W). As discussed previously, the severity of the loss is a lognormal variable [X = Exp(μ + δZ)]. The final column holds the compound function, where the frequency is Poisson and the severity is lognormal. The compound function works as follows: if the Poisson variable (W) returns a random value of two, the output is the sum of two random lognormally distributed values. If W = 3, then LN = X11 + X12 + X13, where Xij is the jth random value of the lognormal variable of the ith stress.
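A minimal Python sketch of this compound simulation, using the mu values and sigma from figure 30. Because of sampling error and differences from the @Risk settings, it will not reproduce the reported percentiles exactly.

import numpy as np

mus     = np.array([12.6263, 12.1739, 11.9552, 11.1021, 11.5468, 10.9399, 10.8361, 10.1835])
lambdas = 1 / np.array([26, 20, 30, 50, 170, 17, 7, 8])
sigma, n_iter = 1.904, 1_000_000
rng = np.random.default_rng(42)

total = np.zeros(n_iter)
for mu, lam in zip(mus, lambdas):
    w = rng.poisson(lam, n_iter)                       # number of events in the year
    max_w = w.max()
    draws = rng.lognormal(mu, sigma, size=(n_iter, max_w))
    mask = np.arange(max_w)[None, :] < w[:, None]      # keep only the first w severities per year
    total += (draws * mask).sum(axis=1)

print(np.percentile(total, [99.0, 99.5]) / 1e6)        # 1-in-100 and 1-in-200 year losses, £m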

Application of Chebyshev's theorem

Given that the chosen capital requirement is £8.499m, the distribution mean (ȗ) is £0.277m and the standard deviation (S) is £2.37m (from figure 31), Chebyshev's theorem states that P(|X − ȗ| ≤ kS) ≥ 1 − 1/k².

Setting kS = X − ȗ gives k = (X − ȗ)/S = (8.499 − 0.277)/2.37 = 3.469. According to Chebyshev's theorem, there is therefore a probability of at least 91.7% that the annual Operational Risk loss will not exceed £8.499m. If the Operational Risk capital increases to £33.377m, k = 14.2 and the probability rises to at least 99.5%. This is important to consider because the UK regulator does not share how it estimates the Operational Risk capital requirement per firm.
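A quick check of the Chebyshev calculation:

mean, sd, capital = 0.277, 2.37, 8.499    # £m, from figure 31
k = (capital - mean) / sd                 # ~3.469
print(1 - 1 / k ** 2)                     # ~0.917, i.e. a probability of at least 91.7%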
Appendix 9 Operational Risk loss function using two separate functions

Split data set

1) Calculation of ‘P’ (the probability associated with the operational risk tail
distribution)

Figure 32: Table describing the stress frequencies

Operational Risk stresses Stress frequencies


Stress 1 1 in 26 years
Stress 2 1 in 20 years
Stress 3 1 in 30 years
Stress 4 1 in 50 years
Stress 5 1 in 170 years
Stress 6 1 in 17 years
Stress 7 1 in 7 years
Stress 8 1 in 8 years

Figure 33: Table describing the estimation of outliers

Q1 15
Q3 40
IQR 25
1.5 IQR 38
Q3 + 1.5 IQR 78
Rounded up 80

Outliers are atypical values within a set of data. They can be identified using the following formulae: Q3 + 1.5*IQR and Q1 − 1.5*IQR (Anderson, 2014, p. 127). The figures above are based on the stress frequencies in figure 32.

Therefore, P = (1-1/80)=0.9875 (98.75%) and (1 – P) = 0.0125 (1.25%)

2) From figure 31 ‘Simulation output’:

Mean = 0.277 ; Standard deviation (SD) = 2.37

3) Calculation of ‘k’ (from the Chebyshev’s theorem)

k = [1 / (1-P)]0.5 = [1 / (1 – 0.9875)]0.5 = 800.5 = 8.944

4) Calculation of ‘X’ (the mean of the extreme value distribution):


X = k SD + mean = 8.944*2.37 + 0.277 = 20.92 ~ £21m

Figure 34: Table describing the split data function

Distributions Probability Distribution functions (£m)

D1 P = 98.75% - Original loss function [Σ LN(X,W)] (simulation


function in figure 30).

D2 (1 – P) = 1.25% Y ~ Lognormal (21; 2.37); or


Y~ Pareto (11.811; 4.045)

First case: When D1 is Σ LN(X,W) and D2 (the tail distribution) is lognormal, at the 99.5th
percentile (1 in 200 year event) the Operational Risk capital requirement is £22.2m.
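For illustration, a minimal Python sketch of the split (mixture) sampling described above. As a stand-in for the original compound function D1, the lognormal fit reported in figure 39 (mean 0.32936, standard deviation 3.9754) is used, which is an approximation; D2 is the Lognormal(21; 2.37) tail distribution, and the Lognorm(mean, sd) parameterisation is assumed.

import numpy as np
from scipy import stats

def lognorm_from_mean_sd(mean, sd):
    s2 = np.log(1 + (sd / mean) ** 2)
    return stats.lognorm(s=np.sqrt(s2), scale=mean * np.exp(-s2 / 2))

P, n_iter = 0.9875, 1_000_000
rng = np.random.default_rng(3)

d1 = lognorm_from_mean_sd(0.32936, 3.9754)   # stand-in for the original compound function (figure 39 fit), £m
d2 = lognorm_from_mean_sd(21.0, 2.37)        # the tail distribution D2 ~ Lognormal(21; 2.37), £m

tail = rng.random(n_iter) >= P               # ~1.25% of draws come from D2
losses = np.where(tail,
                  d2.rvs(n_iter, random_state=rng),
                  d1.rvs(n_iter, random_state=rng))
print(np.percentile(losses, 99.5))           # 1-in-200-year loss, £m

With this stand-in, the simulated 99.5th percentile comes out close to the £22.2m reported above.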

Figure 35: Chart illustrating the loss function (D2 is lognormal)

[Chart: the split loss function with D2 lognormal, in £m; 0.5% of the simulated values exceed £22.2m.]

Second case: When D1 is Σ LN(X,W) and D2 follows a Pareto distribution, further calculations are needed to estimate its parameters “a” and “c”. The estimation of the Pareto parameters will be based on the lognormal distribution with the parameters previously estimated for D2 in figure 34. Bear in mind that the lognormal distribution parameters are derived from the original stresses and the original loss function. This is the Lognormal (21; 2.37) distribution simulation output:

Figure 36: Chart illustrating a lognormal distribution (21; 2.37)

To estimate the Pareto parameters, the following estimators suggested by Evans (2000, p. 154) are used:

a = Min(xi); in other words, the minimum value observed in the lognormal distribution. Extracted from figure 36 of the lognormal distribution simulation output, the estimate of a is â = 11.811.

1/c = (1/n) Σ Log(xi / â)

The approach to estimate “c” is slightly more complex. It will be done through a Monte Carlo simulation as follows:

Let xi be a variable lognormally distributed with mean £21m and standard deviation £2.37m. Design a sample of 1,000 of these variables, and then calculate the sample average of the estimator (1/n) Σ Log(xi / â). Set this ‘average’ as a target for a Monte Carlo simulation of 100,000 iterations and run the simulation. This is the simulation output:

Figure 37: Chart illustrating the estimation of “c” Pareto parameter: Simulation output

The distribution output is symmetrical. The mean of this distribution is 0.2411, therefore the
estimated value of “c” is 4.045. Hence, D2 is distributed Pareto (11.811 ; 4.045). When D2 is
Pareto, at the 99.5th percentile (1 in 200 year event) the Operational Risk capital requirement
is £17m.

Figure 38: Chart illustrating the loss function (D2 is Pareto)

[Chart: the split loss function with D2 Pareto, in £m; 0.5% of the simulated values exceed £17m.]
Splice function

Figure 39: Table describing the application of the splice function

Distributions Distribution functions (£m)

D1 - Original loss function [Σ LN(X,W)]: the splice requires a known distribution function, so the original function cannot be used directly.
- The simulation output fits a lognormal distribution with parameters mean = 0.32936 and standard deviation = 3.9754.

D2 Y ~ Lognormal (21; 2.37); or


Y~ Pareto (11.811; 4.045)

Threshold (v) v = 11.811 (minimum value from figure 36)

The lognormal distribution fitted to the original loss function output is as follows:

Figure 40: Chart illustrating the Lognormal (0.32936; 3.9754) function

As already mentioned, the software requires known distribution functions to run a splice function simulation. The value of the lognormal at the 99.5th percentile is £8.55m, which is close to the corresponding value of the original loss function (figure 31). The fit passes the Chi-squared test of goodness of fit.

First case: When D1 is lognormal and D2 is lognormal, at the 99.5th percentile (1 in 200 year
event) the Operational Risk capital requirement is £27.08m. As already stated, this high loss
value is useful to consider in the scenario analysis (Pillar 2B in section 6.3.7) where an
extreme operational loss occurs.

Figure 41: Chart illustrating the Splice function (D1 and D2 are lognormal)

The chart illustrates an Operational Risk profile. The weakness of this modelling approach is that the percentile at which both functions intersect (the splice point of £11.811m) cannot be imposed.

The Pareto distribution cannot be modelled using this approach as it is only defined from ‘a’ to infinity. This means that there is no overlap between the functions, as the maximum value generated by the lognormal distribution is less than the splice point. Therefore, given these settings of the distributions, the splice function cannot be produced.

Appendix 10 Extrapolation to 1 in 200-year event

The table below provides an example of how to construct a loss function:

Figure 42: Table describing the extrapolation to stresses

Stress | Value (£m) | Frequency | Mu = (LN stress value) − Var/2 | Poisson Wi | Lognormal X = Exp(x + SZ) | Compound function | Extrapolated value to 1/200 years (V) (£m) | Square of extrapolated stress value
Stress 1 | 2.025 | 1 in 26 years | 14.6669 | P(λ=1/26) | Exp(x1+SZ) | LN(X1,W1) | 2.586 | 6,688,300
Stress 2 | 1.288 | 1 in 20 years | 14.2129 | P(λ=1/20) | Exp(x2+SZ) | LN(X2,W2) | 2.257 | 5,092,145
Stress 3 | 1.035 | 1 in 30 years | 13.8850 | P(λ=1/30) | Exp(x3+SZ) | LN(X3,W3) | 1.142 | 1,303,199
Stress 4 | 0.441 | 1 in 50 years | 13.2410 | P(λ=1/50) | Exp(x4+SZ) | LN(X4,W4) | 0.239 | 56,974
Stress 5 | 0.688 | 1 in 170 years | 13.2410 | P(λ=1/170) | Exp(x5+SZ) | LN(X5,W5) | 0.015 | 218
Stress 6 | 0.375 | 1 in 17 years | 13.1224 | P(λ=1/17) | Exp(x6+SZ) | LN(X6,W6) | 0.754 | 569,099
Stress 7 | 0.338 | 1 in 7 years | 12.8347 | P(λ=1/7) | Exp(x7+SZ) | LN(X7,W7) | 1.616 | 2,612,887
Stress 8 | 0.176 | 1 in 8 years | 12.5209 | P(λ=1/8) | Exp(x8+SZ) | LN(X8,W8) | 0.775 | 600,124
Total | 6.366 | | Sigma = 1.904 | | | | 9.383 | 4.114*

*The square root of the sum of the squares of the extrapolated stress values, i.e. (V'UV)^0.5, where U is the identity matrix (no correlation between stresses).

The Operational Risk capital based on extrapolation to 1/200 years is £4.114m. The difference of £9.383m − £4.114m = £5.27m is the diversification benefit (the diversified capital is around 44% of the undiversified total), which is based on the rationale that not all risks will materialise at the same time.
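A minimal sketch of the aggregation used in the table:

import numpy as np

V = np.array([2.586, 2.257, 1.142, 0.239, 0.015, 0.754, 1.616, 0.775])   # 1-in-200 stress values, £m
print(V.sum())           # ~9.383, the simple (undiversified) sum
print(np.sqrt(V @ V))    # ~4.114, sqrt(V'UV) with U the identity matrix (zero correlation)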

Appendix 11 Introduction of correlations

The proposed methodology suggests the following steps:

i) Establish the correlation levels:

Figure 43: Table describing correlation levels

Correlation level Correlation coefficient


No correlation 0
Low correlation 0.0897
Medium correlation 0.4485
High correlation 0.6728

The correlation levels can be set using data, if available, or expert judgement. In this case, as there is insufficient data available, the correlations have been set based on the latter.

ii) Carry out the correlation assessment between stress pairs.

If the number of stresses is high, they can be grouped into a number of homogeneous clusters to facilitate the construction of the positive semi-definite matrix. In this case, the grouping is not required.
Figure 44: Table providing an example of correlation analysis

1st stress | 2nd stress | Correlation (ρ) | Rationale
Stress 1 | Stress 2 | 0.6728 | There is a high correlation between stress 1 (fraud) and stress 2 (information security failure).
Stress 1 | Stress 3 | 0.0897 | There is a low correlation between stress 1 and stress 3 (corporate action failure).
Stress 1 | Stress 4 | 0.4485 | There is a medium correlation between stress 1 and stress 4 (regulatory failure).
… | … | … | …
Stress n | Stress m | ρ |

The matrix below shows the results of the individual correlation assessments.

iii) The correlation matrix (A) is structured as follows:

Figure 45: Correlation matrix

Stresses 1 2 3 4 5 6 7 8

1 1 0.6728 0.0897 0.4485 0.4485 0.4485 0.6728 0.0897

2 0.6728 1 0 0.4485 0.6728 0.4485 0 0.0897

3 0.0897 0 1 0.0897 0 0 0 0.0897

4 0.4485 0.4485 0.0897 1 0.0897 0.4485 0.0897 0

5 0.4485 0.6728 0 0.0897 1 0.4485 0 0

6 0.4485 0.4485 0 0.4485 0.4485 1 0.4485 0.0897

7 0.6728 0 0 0.0897 0 0.4485 1 0.0897

8 0.0897 0.0897 0.0897 0 0 0.0897 0.0897 1

iv) Symmetric, positive semi-definite matrix consistency test:

The matrix must be positive semi-definite for the correlations to be incorporated into the analysis. The matrix consistency test can be carried out by @Risk. If the matrix is not positive semi-definite, @Risk has the functionality to transform it.

v) Incorporate the correlation matrix into the stress frequency vector.

Once the matrix “A” is created, the input (a random variable) is added to it. The way @Risk works is that it generates a random series of numbers which follow the indicated distribution and also meet the set level of pairwise correlation. The correlation coefficient is based on rank correlation (the Spearman coefficient). Subsequently, the simulation applies these random numbers.

vi) Run the simulation and identify the 99.5th percentile

Figure 46: Chart illustrating the loss function output that factors-in correlation

[Chart: the aggregated compound loss function with stress dependency (£m), showing the 5% and 0.5% percentile markers.]

Therefore, after incorporating correlations among the stress frequencies, the capital requirement increases to £8.940m, an increase of 5.2% compared with the case where the frequencies are independent.
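For illustration, a minimal Python sketch of how correlated stress frequencies can be generated. @Risk correlates the inputs via Spearman rank correlation; the Gaussian copula used below is an approximation to that approach, and the matrix must be positive definite for the Cholesky factorisation (otherwise it has to be adjusted, as @Risk does). The correlated counts w can then be combined with independent lognormal severities as in the earlier compound-loss sketch.

import numpy as np
from scipy import stats

A = np.array([                                 # correlation matrix from figure 45
    [1.0000, 0.6728, 0.0897, 0.4485, 0.4485, 0.4485, 0.6728, 0.0897],
    [0.6728, 1.0000, 0.0000, 0.4485, 0.6728, 0.4485, 0.0000, 0.0897],
    [0.0897, 0.0000, 1.0000, 0.0897, 0.0000, 0.0000, 0.0000, 0.0897],
    [0.4485, 0.4485, 0.0897, 1.0000, 0.0897, 0.4485, 0.0897, 0.0000],
    [0.4485, 0.6728, 0.0000, 0.0897, 1.0000, 0.4485, 0.0000, 0.0000],
    [0.4485, 0.4485, 0.0000, 0.4485, 0.4485, 1.0000, 0.4485, 0.0897],
    [0.6728, 0.0000, 0.0000, 0.0897, 0.0000, 0.4485, 1.0000, 0.0897],
    [0.0897, 0.0897, 0.0897, 0.0000, 0.0000, 0.0897, 0.0897, 1.0000],
])
lambdas = 1 / np.array([26, 20, 30, 50, 170, 17, 7, 8])
n_iter = 1_000_000
rng = np.random.default_rng(11)

L = np.linalg.cholesky(A)                      # raises an error if A is not positive definite
z = rng.standard_normal((n_iter, 8)) @ L.T     # correlated standard normals
u = stats.norm.cdf(z)                          # correlated uniforms (Gaussian copula)
w = stats.poisson.ppf(u, lambdas).astype(int)  # correlated annual event counts per stress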

The same correlation model can also be applied to the frequency vector used in the 1 in 200-year extrapolation of appendix 10. The results show that the Operational Risk capital based on extrapolation to 1/200 years is £4.588m, an increase of 11.5% with respect to the non-correlated model results. The difference of £10.371m − £4.588m = £5.782m is the diversification benefit (the diversified capital is around 44% of the undiversified total), which is based on the rationale that not all risks will materialise at the same time.

Figure 47: Table showing the 1/200 extrapolation with dependencies

Stresses | Mu = (LN stress value) − Var/2 | Frequency function P(Lambda = 1/frequency) | Severity function | Compound loss function | 1/200 extrapolation | Squared extrapolations
Stress 1 | 12.7077 | 0 | 330,276 | 7,387 | 2,937,617 | 8,629,593,642,797
Stress 2 | 12.2552 | 0 | 210,072 | 60,033 | 2,520,961 | 6,355,245,991,005
Stress 3 | 12.0365 | 0 | 168,808 | 3,776 | 1,243,424 | 1,546,103,517,909
Stress 4 | 11.1834 | 0 | 71,927 | - | 255,472 | 65,265,993,401
Stress 5 | 11.6281 | 0 | 112,212 | - | 15,672 | 245,597,115
Stress 6 | 11.0213 | 0 | 61,162 | 19,744 | 818,055 | 669,214,662,895
Stress 7 | 10.9174 | 0 | 55,127 | 41,804 | 1,771,782 | 3,139,210,473,216
Stress 8 | 10.2648 | 0 | 28,705 | 19,460 | 808,067 | 652,971,868,416
Sigma = 1.904 | | | | | | Total: 4,588,883

Note that the correlation matrix is applied to the frequency vector, which means that the occurrence of one event is correlated with the occurrence of another event. The stress severities are not correlated; they remain independent.

Appendix 12 Turnitin submission

12. REFERENCES

Anderson, D. R. a. a., 2014. Statistics for Business and Economics. 12th ed. Mason, USA: South-Western
Cengage Learning.

Asmussen, S., 2003. Applied Probability and Queues. Berlin: Springer.

Association of Serbian Banks, 2006. http://www.ubs-asb.com/. [Online]


Available at: http://www.ubs-asb.com/Portals/0/Aktivnosti/PravnaLica/Preporuka/Methodology.pdf
[Accessed 11 August 2017].

Ballio, F. e. a., 2004. Convergence assessment of numerical Monte Carlo simulations in groundwater
hydrology. Water Resources Research, 40(W04603), pp. 1-5.

Balonce, C. e. a., 2012. Quantitative Operational Risk Models. 1st ed. Boca Raton, USA: Taylor & Francis
Group.

Bank for International Settlements, n.d. http://www.bis.org. [Online]


Available at: http://www.bis.org/bcbs/basel3.htm
[Accessed 9 August 2017].

Bank of England, 2013. http://www.bankofengland.co.uk. [Online]


Available at: http://www.bankofengland.co.uk/pra/Documents/publications/ss/2013/ss613.pdf
[Accessed 9 August 2017].

Barnier, B., 2011. The Operational Risk Handbook. 1st ed. Peterfields, UK: Harriman House Ltd.

Blanchett, D. a. W. P., 2014. https://www.advisorperspectives.com. [Online]


Available at: https://www.advisorperspectives.com/articles/2014/08/26/the-power-and-limitations-of-
monte-carlo-simulations
[Accessed 21 September 2017].

Blunden, T. e. a., 2013. Mastering Operational Risk. 2nd ed. Harlow, UK: Pearson Education Ltd.

Chao, L. L., 1980. Statistics for Management. Belmond, USA: Brooks/Cole.

Chartered Institute for Securities & Investments, 2014. Operational Risk. 16th ed. London: CISI .

Chaudhuri, A. e. a., 2016. Quantitative Modeling of Operational Risk in Finance and Banking Using the
Possibility Theory. 1st ed. Zurich: Springer.

Cope, E. e. a., 2009. Challenges in Measuring Operational Risk from Loss Data, Zurich: Zurich Insurance.

COSO, 2012. https://www.coso.org. [Online]


Available at: https://www.coso.org/Documents/ERM-Understanding-and-Communicating-Risk-
Appetite.pdf
[Accessed 10 August 2017].
Cowell, R. G. e. a., 2007. Modelling Operational Risk with Bayesian Networks. The Journal of Risk and
Insurance, 74(4), pp. 795-827.

Dexter, N., 2004. https://www.actuaries.org.uk. [Online]


Available at: https://www.actuaries.org.uk/documents/operational-risk-practical-approach
[Accessed 11 August 2017].

Di Pietro, F. e. a., 2012. Cuestiones Abiertas en la modelización del riesgo operacional en los acuerdos de
Basilea. Universia Business Review, Volume ISSN: 1698-5117, pp. 78-93.

Dionne, G., 2013. Risk Management: History, Definition and Critique, Montreal: Interuniversity Research
Centre.

Embrechts, P. e. a., n.d. Quantifying Regulatory Capital for Operational Risk. [Online]
Available at: https://www.bis.org/bcbs/cp3/embfurkau.pdf
[Accessed 4 September 2017].

Embrechts, P. e. a., 2011. Practices and issues in Operational risk modelling under Basel II. Lithuanian
Mathematical Journal, 51(2), pp. 180-193.

Ergashev, B. A., 2011. A Theoretical Framework for Incorporating Scenarios into Operational Risk
Modelling, Charlotte, USA: The Federal Reserve Bank of Richmond.

European Banking Association, 2014. A common procedure and methodology for the SREP. [Online]
Available at: https://www.eba.europa.eu/documents/10180/935249/EBA-GL-2014-
13+(Guidelines+on+SREP+methodologies+and+processes).pdf
[Accessed 15 September 2017].

Evans, M. e. a., 2000. Statistical Distributions. 3rd ed. Toronto: John Wiley & Sons, Inc.

FCA, 2005. https://www.fca.org.uk. [Online]


Available at: https://www.fca.org.uk/publication/corporate/fca-approach-advancing-objectives-
2015.pdf
[Accessed 12 August 2017].

FCA, 2017. https://www.handbook.fca.org.uk. [Online]


Available at: https://www.handbook.fca.org.uk/handbook/IFPRU/2/2.html

FCA, 2017. https://www.handbook.fca.org.uk. [Online]


Available at: https://www.handbook.fca.org.uk/handbook/BIPRU/6/5.html?date=2013-01-18

FCA, 2017. https://www.handbook.fca.org.uk. [Online]


Available at: https://www.handbook.fca.org.uk/handbook/GENPRU/2/1.html
[Accessed 10 August 2017].

Gartner, 2017. http://www.gartner.com. [Online]


Available at: http://www.gartner.com/it-glossary/business-intelligence-bi/
[Accessed 9 August 2017].

Glasgow Caledonian University, n.d. http://www.gcu.ac.uk. [Online]
Available at:
http://www.gcu.ac.uk/media/gcalwebv2/ebe/ldc/mathsmaterial/level3compnet/Level_3_Comp_PROBA
BILITY.pdf
[Accessed 5 September 2017].

Hopkin, P., 2012. Fundamentals of Risk Management. 2nd ed. London: Kogan Page Ltd .

Hussain, A., 2000. Managing Operational Risk in the Financial Markets. 1st ed. Oxford: Butterworth-Heinemann.

Institute and Faculty of Actuaries, 2010. A New Approach to Managing Operational Risk, London:
Institute and Faculty of Actuaries.

Institute and Faculty of Actuaries, 2016. https://www.actuaries.org.uk. [Online]


Available at: https://www.actuaries.org.uk/documents/good-practice-guide-setting-inputs-operational-
risk-models
[Accessed 10 August 2017].

Investopedia, 2017. Investopedia. [Online]


Available at: http://www.investopedia.com/terms/l/libor-scandal.asp
[Accessed 9 August 2017].

Isle of Man Financial Services Authority (IOMFSA), 2016. Operational Risk Capital Assessment. Douglas,
IOMFSA.

ISO 31000-2009, 2017. https://www.iso.org. [Online]


Available at: https://www.iso.org/obp/ui/#iso:std:iso:31000:ed-1:v1:en
[Accessed 9 August 2017].

Kenett, R. e. a., 2011. Operational Risk Management. 1st ed. Chichester, UK: John Wiley & Sons Ltd.

King, J. L., 2001. Operational Risk, Measurement and Modelling. 1st ed. Chichester: John Wiley & Sons.

Klugman, S. e. a., 2004. Loss Models, From Data to Decisions. Second ed. New York: John Wiley & Sons,
Inc.

Klugman, T. A. e. a., 2012. Loss Models: from data to decision. 4th ed. Hoboken, USA: Wiley & Sons.

Lindgren, G. a. R. H., 1987. Extreme values: theory and technical applications. Scandinavian Journal of
Statistics, Volume 14, pp. 241-279.

Math Tutor, 2017. http://math.tutorvista.com. [Online]


Available at: http://math.tutorvista.com/statistics/lognormal-distribution.html
[Accessed 4 September 2017].

Math Tutor, 2017. http://math.tutorvista.com. [Online]


Available at: http://math.tutorvista.com/statistics/weibull-distribution.html
[Accessed 4 September 2017].

Mathwave, 2017. http://www.mathwave.com. [Online]
Available at: http://www.mathwave.com/articles/distribution-fitting-preliminary.html
[Accessed 11 August 2017].

Metricstream, 2017. http://www.metricstream.com. [Online]


Available at: http://www.metricstream.com/insights/Key-Risk-indicators-ERM.htm
[Accessed 9 August 2017].

MetricStream, 2017. http://www.metricstream.com. [Online]


Available at: http://www.metricstream.com/solution_briefs/ORM.htm
[Accessed 11 August 2017].

Mood, A. M. F. A. G. a. D. C. B., 1970. Introduction to the Theory of Statistics. Third ed. New York: McGraw-Hill.

Moosa, I., 2007. Operational Risk Management. 1st ed. London: Palgrave Macmillan.

Moosa, I. A., 2008. http://citeseerx.ist.psu.edu. [Online]


Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.474.9427&rep=rep1&type=pdf
[Accessed 11 August 2017].

Navarrete, E., 2006. https://www.palisade.com. [Online]


Available at:
https://www.palisade.com/downloads/pdf/CalculationofExpectedandUnexpectedLossesinOperationalRi
sk.pdf
[Accessed 5 September 2017].

Operational Risk Institute, 2017. https://www.ior-institute.org. [Online]


Available at: https://www.ior-institute.org/sound-practice-guidance/key-risk-indicators
[Accessed 10 August 2017].

ORX consortium, et al, 2009. Challenges in Measuring Operational Risk from Loss Data, Zurich: ORX
consortium.

ORX, 2017. https://managingrisktogether.orx.org. [Online]


Available at: https://managingrisktogether.orx.org/activities/loss-data
[Accessed 11 August 2017].

Palermo, T., 2011. Integrating risk and performance in management reporting. London: Chartered
Institute of Management Accountants.

Pham, H., 2006. System Software Reliability. Piscataway: Springer.

Power, M., 2003. The Invention of Operational Risk, London: London School of Economics.

Powojowski, M. e. a., 2002. Dependent Events and Operational Risk. Algo Research Quarterly, 5(2), pp.
65-73.

Prokopenko, Y., 2012. Operational Risk Management. Tirana, International Finance Corporation, World Bank Group.
PwC, 2015. http://www.pwc.co.uk. [Online]
Available at: http://www.pwc.co.uk/industries/financial-services/regulation/solvency-ii/insights/2015-
life-insurers-solvency-ii-risk-capital-survey-summary-report.html
[Accessed 11 August 2017].

Reece, M., 2009. https://www.palisade.com. [Online]


Available at:
https://www.palisade.com/downloads/UserConf/NA09/Michael%20Rees%20RightDist_new.HandOuts.p
df
[Accessed 11 August 2017].

Ritter, F. E., 2010. Determining the number of simulation runs. [Online]


Available at: http://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/ritter-et-al-in-press.pdf
[Accessed 12 August 2017].

Silverman, B. W., 1986. Density Estimation for Statistics and Data Analysis. 1st ed. London: Chapman &
Hall.

Simmons, B., 2017. http://www.mathwords.com. [Online]


Available at: http://www.mathwords.com/o/outlier.htm
[Accessed 14 September 2017].

Statistical Modelling, 2011. https://statisticalmodeling. [Online]


Available at: https://statisticalmodeling.wordpress.com/2011/06/23/the-pareto-distribution/
[Accessed 4 September 2017].

Statisticshowto, 2017. http://www.statisticshowto.com. [Online]


Available at: http://www.statisticshowto.com/pareto-distribution/
[Accessed 4 September 2017].

Stats StackExchange, 2012. https://stats.stackexchange.com. [Online]


Available at: https://stats.stackexchange.com/questions/55999/is-it-possible-to-find-the-combined-
standard-deviation
[Accessed 2 September 2017].

Stattrek, 2017. http://stattrek.com. [Online]


Available at: http://stattrek.com/probability-distributions/negative-binomial.aspx
[Accessed 6 September 2017].

The Basel Committee on Banking Supervision, 2011. Principles for the Sound Management of
Operational Risk. [Online]
Available at: http://www.bis.org/publ/bcbs195.pdf
[Accessed 10 August 2017].

Turner, D. G., 2014. Is it statistically significant? The chi-square test. Oxford, University of Oxford.

UCLA , 2017. http://wiki.stat.ucla.edu. [Online]
Available at: http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Gamma
[Accessed 4 September 2017].

Wilks, S., 1943. Mathematical Statistics. 1st ed. New Jersey: Princeton University Press.
