MITIGATE
Multidimensional, IntegraTed, rIsk assessment framework and
dynamic, collaborative risk manaGement tools for critical
information infrAstrucTurEs
www.mitigateproject.eu
Grant Agreement No.653212
Topic: H2020‐DS‐2014‐01
Risk Management and Assurance Models
Innovation Action
Deliverable D7.3
Business, Technical and Organizational Evaluation
Contractual Date of Delivery: M30 / February 2018
Editor: Ralf Fiedler, Martin Stamer
Work‐package: 7
Distribution / Type: Public
Version: 1.0
Project co‐funded by the European Union within the Horizon 2020 Programme
This document has been produced under Grant Agreement 653212. This document and its contents remain the property of the
beneficiaries of the MITIGATE Consortium and may not be distributed or reproduced without the express written approval of the
Project‐Coordinator.
D7.3 February, 2018
Abstract
This deliverable presents the evaluation of the MITIGATE system and services from a methodological, technical, technological and financial perspective.
The MITIGATE methodology, system and services have been analyzed and compared to other risk assessment tools using the ENISA evaluation methodology. In comparison with the other tools, the MITIGATE methodology proves to be among the best available solutions.
The Technical Evaluation, including unit testing, integration testing, performance testing and security testing, demonstrates the technical feasibility of the MITIGATE system and services and the maturity of the software.
The Socio‐economic & Techno‐economic Evaluation covers the MITIGATE pricing and licensing model, usability improvements and the economic perspectives of the MITIGATE innovation.
MITIGATE – H2020 – 653212 Page 2 of 121
1 Executive Summary
This report compiles the remaining evaluations of the MITIGATE system and services. Following Deliverable D7.2, which contains the results of the internal and external user tests, Deliverable D7.3 evaluates the MITIGATE methodology, presents the results of the different technical tests and examines the economic dimension of MITIGATE.
Regarding the evaluation of the MITIGATE methodology
- Regarding cumulative and propagated risks, vulnerability chains are applied by building attack graphs based on the exploitation of vulnerabilities, following the example of Bayesian approaches.
- In a comparison of alignment profiles based on the ENISA methodology with other risk assessment and management methodologies such as ISO 31000, ISO 27005, NIST SP800‐30, COBIT and OCTAVE Allegro, the MITIGATE methodology proves to be among the best‐scoring solutions.
Results of the technical evaluation
- All technical tests carried out with the MITIGATE software, including unit testing, integration testing, performance testing and security testing, confirm the technical feasibility of the developed innovation.
- During the development phase, the technical testing contributed significantly to the reliability and usability of the software.
- The MITIGATE software can be regarded as mature software, having reached Technology Readiness Level 7.
Results of the economic evaluation
- MITIGATE will be available in four different license sizes, both as a monthly subscription and as a perpetual license.
- In the user tests, no valid mean value for the costs saved through the deployment of MITIGATE could be obtained. Other studies also suggest that the costs and benefits of investments in cyber security are challenging to calculate on an individual level and cannot be calculated on a broader, society‐wide level.
- Within a return on security investment (ROSI) model, derived from the return on investment (ROI) calculation, individual estimates can be made per asset or per company / department. However, the probability of a successful attack as well as its costs may vary considerably from organisation to organisation, even for one particular asset and its specific vulnerability.
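The ROSI calculation referred to above can be sketched as follows; this is a minimal sketch of the standard ROSI formula (with ALE = SLE × ARO), and all figures are purely illustrative, not derived from the user tests:

```python
def rosi(sle, aro, mitigation_ratio, solution_cost):
    """Return on Security Investment (standard ROSI formulation).

    sle              -- single loss expectancy: cost of one incident
    aro              -- annual rate of occurrence of the incident
    mitigation_ratio -- fraction of incidents the measure prevents (0..1)
    solution_cost    -- annual cost of the security measure
    """
    ale = sle * aro                         # annual loss expectancy
    risk_reduction = ale * mitigation_ratio
    return (risk_reduction - solution_cost) / solution_cost

# Illustrative figures only: a measure costing 10 000 per year that
# prevents 80 % of incidents worth 25 000 each, expected once a year.
print(rosi(sle=25_000, aro=1.0, mitigation_ratio=0.8, solution_cost=10_000))
# → 1.0
```

A positive ROSI indicates that the expected risk reduction exceeds the cost of the measure; as noted above, the input probabilities and costs vary strongly between organisations, which is exactly what makes the figure hard to generalize.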
Version History
Version Date Comments, Changes, Status Authors, contributors, reviewers
0.4 14/02/2018 Method Comparison C. Kollmitzer, S. Schauer, M. Latzenhofer, S. König
0.5 15/02/2018 Implemented and planned usability improvements N. Mouzourakis
0.8 28/02/2018 Internal Review K. Brümmerstedt, K. Renken, R. Fiedler
Contributors
First Name Last Name Partner Email
Glossary
Acronym Explanation
API Application Programming Interface
APT Advanced Persistent Threat
BAG Bayesian Attack Graph
CAPTCHA Completely Automated Public Turing test to tell Computers and Humans Apart
CERT Computer Emergency Response Team
CII Critical Information Infrastructure
CIL Cumulative Impact Level
COBIT Control Objectives for Information and Related Technology
CPE Common Platform Enumeration
CRL Cumulative Risk Level
CSV Comma‐Separated Values
CVL Cumulative Vulnerability Level
CVSS Common Vulnerability Scoring System
DoD Department of Defense
ENISA European Network and Information Security Agency
ERM Enterprise Risk Management
GUI Graphical User Interface
HIPAA Health Insurance Portability and Accountability Act
HTTP Hypertext Transfer Protocol
IIL Individual Impact Level
IRL Individual Risk Level
ISACA US Information Systems Audit and Control Association
ISO International Organization for Standardization
IVL Individual Vulnerability Level
JSON JavaScript Object Notation
MIP Mixed‐integer Linear Program
MIT Massachusetts Institute of Technology
MulVAL Multihost Multistage Vulnerability Analysis
NIST National Institute of Standards and Technology
NVD National Vulnerability Database
OCTAVE Operationally Critical Threat, Asset and Vulnerability Evaluation
OWASP Open Web Application Security Project
PDCA Plan‐Do‐Check‐Act
PIL Propagated Impact Level
PRL Propagated Risk Level
PVL Propagated Vulnerability Level
RACI Responsible‐Accountable‐Consulted‐Informed
REST Representational State Transfer
RM Risk Management
SC Supply Chain
SCS Maritime Supply Chain Service
SP Special Publication
TL Threat Level
TVA Topological Analysis of Network Attack Vulnerability
XML Extensible Markup Language
Table of Contents
1 Executive Summary .......................................................................................................................... 3
2 Results of the Evaluation of the MITIGATE methodology .............................................................. 10
2.1 ENISA evaluation methodology .............................................................................................. 10
2.2 Evaluation of the underlying calculations .............................................................................. 11
2.2.1 Cumulative and Propagated Risk ................................................................................... 11
2.2.2 Risk Mitigation ................................................................................................................ 13
2.3 Evaluation and alignment profiles of risk assessment and management methodologies .... 16
2.3.1 ISO 31000 Risk Management ......................................................................................... 16
2.3.2 ISO/IEC 27005 Information Security Risk Management ................................................ 22
2.3.3 NIST Risk Management Framework ............................................................................... 29
2.3.4 COBIT for Risk ................................................................................................................. 35
2.3.5 OCTAVE Allegro .............................................................................................................. 41
2.4 Evaluation and alignment profile of MITIGATE methodology ............................................... 47
2.4.1 General Overview ........................................................................................................... 47
2.4.2 Concept and Framework ................................................................................................ 47
2.4.3 Process Description ........................................................................................................ 48
2.4.4 Alignment Profile ............................................................................................................ 51
2.5 Comparison of the Alignment Profiles ................................................................................... 52
3 Technical Evaluation Results of the MITIGATE systems and services ............................................ 55
3.1 Unit Testing ............................................................................................................................ 55
3.2 Results of Integration Testing ................................................................................................ 68
3.3 Results of Performance Testing ............................................................................................. 72
3.3.1 Evaluation of Attack Paths Generation Approaches ...................................................... 74
3.3.2 Attack Path Performance evaluation ............................................................................. 78
3.4 Results of Security Testing ..................................................................................................... 80
4 Socio‐economic & Techno‐economic Evaluation ........................................................................... 89
4.1 MITIGATE pricing and licensing model ................................................................................... 89
4.2 Findings from the questionnaires and pilot user comments ................................................. 91
4.3 Usability: implemented and planned improvements ............................................................ 92
4.4 Economic Perspectives ........................................................................................................... 96
List of Figures
Figure 1: Illustration of the ENISA Risk Management Process [1] ......................................................... 10
Figure 2: Risk management framework and risk management process according ISO 31000 [8] ....... 17
Figure 3: Illustration of the operative risk management framework from ISO 31000 [8]. .................... 19
Figure 4: ISO 31000 Alignment Profile ................................................................................................... 22
Figure 5: Illustration of an information security risk management process [9]..................................... 24
Figure 6: ISO 27005: The Risk Treatment Activity [9] ............................................................................ 27
Figure 7: ISO 27005 Alignment Profile ................................................................................................... 28
Figure 8: Risk management framework and risk management process according to NIST SP 800‐30 [7]
................................................................................................................................................................ 30
Figure 9: Illustration of the risk assessment process as defined in NIST SP 800‐30 [7] ......................... 31
Figure 10: NIST SP800‐30 Alignment Profile .......................................................................................... 35
Figure 11: Process Enablers in COBIT for Risk [10]; (blue=core process, red=support process) ........... 36
Figure 12: Risk scenario development based on COBIT for Risk [10] .................................................... 38
Figure 13: Risk Response Workflow [10] ................................................................................................ 38
Figure 14: COBIT for Risk Alignment Profile ........................................................................................... 41
Figure 15: Illustration of the OCTAVE Method Phases [11] ................................................................... 42
Figure 16: Octave Allegro: Roadmap [11] .............................................................................................. 43
Figure 17: OCTAVE Allegro Alignment Profile ........................................................................................ 47
Figure 18: Overview on the main steps and sub‐steps in the MITIGATE Methodology ........................ 48
Figure 19: MITIGATE Methodology Alignment Profile ........................................................................... 52
Figure 20: Comparison of alignment profiles for the discussed frameworks ........................................ 54
Figure 21: A high‐level overview of the software testing pyramid ........................................................ 55
Figure 22: Performance of MITIGATE in relation to number of assets .................................................. 73
Figure 23: Performance evaluation based on 182 assets ...................................................................... 80
Figure 24: Available filters for the asset inventory grid view ................................................................ 93
Figure 25: Threat Analysis chart with a useful short description on top ............................................... 94
Figure 26: The number of pending collaboration actions is directly accessible from the main
dashboard .............................................................................................................................................. 95
List of Tables
Table 1: Risk Methodology Tests java .................................................................................................... 62
Table 2: IAssetServiceTest.java .............................................................................................................. 67
Table 3: Integration Test ........................................................................................................................ 69
Table 4: Example data access layer test ................................................................................................. 72
Table 5: Comparison results ................................................................................................................... 78
Table 6: Port of Valencia case study performance results ..................................................................... 80
Table 7: Finding 1. No Session Timeout ................................................................................................. 83
Table 8: Finding 2. Stack Trace Debug Output Reveal Sensitive Information ........................................ 84
Table 9: Finding 3. Deprecated Ciphers Supported ............................................................................... 86
Table 10: Finding 4. Lack of CAPTCHA .................................................................................................... 87
Table 11: Finding 5. Outdated JavaScript library in use ......................................................................... 87
Table 12: Finding 6. API Key Exposed On Web Page .............................................................................. 88
Table 13: License sizes for monthly subscription ................................................................................... 89
Table 14: License sizes for a perpetual license ...................................................................................... 89
Table 15: Prices ...................................................................................................................................... 90
Table 16: Revenue streams .................................................................................................................... 90
2 Results of the Evaluation of the MITIGATE methodology
2.1 ENISA evaluation methodology
In 2006, the European Network and Information Security Agency (ENISA) published a deliverable on
risk management and risk assessment [1] as part of the ENISA work program 2006 “Survey of existing
Risk Management and Risk Assessment Methods” [2]. Therein, the ENISA Risk Management Process is defined, consisting of 15 specific steps (cf. Figure 1 for an overview); it is based on several international standards, guidelines and best practices from the fields of risk management and risk assessment.
Figure 1: Illustration of the ENISA Risk Management Process [1]
Overall, the ENISA experts group reviewed a total of 13 different methodologies and compared them according to 21 attributes characterizing each methodology. An overview of the findings from this process can be found in [1]. The most relevant methodologies for the ENISA
Risk Management Process were ISO/IEC 13335, ISO 17799, NIST SP800‐30 and OCTAVE. Although all
of these methodologies are obsolete today and have been revised, updated and transferred to other
standards, the general structure of the process including the 15 steps is still valid. In Deliverable D7.1
[3], the six stages of the ENISA Risk Management Process have been described. Hence, we refer to
D7.1 and to the complete definition of the process [1] for more details on the specific stages and the
steps therein.
As a follow‐up to the definition of the ENISA Risk Management Process, the ENISA experts group
defined a methodology for comparison of risk assessment and risk management frameworks [4] (we
also provided a more detailed explanation of the evaluation process in D7.1). In this methodology, the ENISA Risk Management Process was used as a benchmark against other risk management approaches. The main idea of this evaluation methodology is to specify, for each of the 15 steps in the ENISA Risk Management Process, to what extent the respective step is reflected in the evaluated approach. Each step receives a score from 0 to 3, depending on whether the respective step is handled exhaustively (score 3) or not mentioned at all (score 0). As a second evaluation level, several specific inputs and outputs are defined for each step in the ENISA Risk Management Process. This makes it possible not only to describe to what extent the overall step is reflected in the evaluated approach but also to specify how many of the required inputs and outputs are provided by it. Like the scores for the process steps, the scores for the inputs and outputs also run from 0 to 3.
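In code, the first evaluation level can be sketched as follows; the step scores are invented for illustration, and the summary figures (overall coverage, weakest step) are our own convenience metrics rather than part of the ENISA methodology:

```python
# Hypothetical scores for the 15 steps of the ENISA Risk Management
# Process, each 0 (not mentioned) to 3 (handled exhaustively).
SCALE = {0: "not mentioned", 1: "partially", 2: "largely", 3: "exhaustively"}

def alignment_profile(step_scores):
    """Validate a 15-step score vector and derive simple summary figures."""
    if len(step_scores) != 15:
        raise ValueError("the ENISA process defines exactly 15 steps")
    if any(s not in SCALE for s in step_scores):
        raise ValueError("each step is scored 0..3")
    coverage = sum(step_scores) / (3 * len(step_scores))   # fraction of maximum
    weakest = min(range(15), key=lambda i: step_scores[i])
    return {"coverage": round(coverage, 2), "weakest_step": weakest + 1}

scores = [3, 2, 3, 1, 2, 3, 3, 2, 1, 2, 3, 2, 2, 3, 2]   # invented example
print(alignment_profile(scores))
# → {'coverage': 0.76, 'weakest_step': 4}
```

The same 15-element score vector is what gets plotted on a radar chart to obtain the alignment profile.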
For the visualization of an evaluation result, an alignment profile can be produced. The alignment profile plots all the process scores or input/output scores on a radar chart, providing a quick high‐level overview of the strengths and weaknesses of the evaluated approach (cf. the following sections for examples).
Further, the application of a benchmark process and of a consistent scoring allows comparing two (or
more) risk management approaches in the same radar chart. As already described in D7.1, we will use
the ENISA Risk Management Process [1] as a benchmark and reference framework to evaluate the
MITIGATE methodology. Several risk management frameworks, e.g., the ISO 17799 [5], the IT
Grundschutz [6] or the NIST SP 800‐30 [7], have already been evaluated using the ENISA process.
However, some of these evaluations were carried out on outdated versions of these frameworks, and we therefore re‐evaluated them in the project. In Section 2.3 below, we provide a short description of several risk management processes, i.e., ISO 31000 [8], ISO 27005 [9], NIST SP 800‐30 [7], COBIT 5 for Risk [10] and OCTAVE Allegro [11], together with the steps implementing each process. For these processes, we describe the alignment profile according to the ENISA process to make them comparable to the MITIGATE Methodology [12] (which is evaluated in Section 2.4 further below).
2.2 Evaluation of the underlying calculations
2.2.1 Cumulative and Propagated Risk
Supply chains (SC) are becoming increasingly interconnected, which creates new threats due to risks in other components. For example, an exposure in one link of the SC could allow an attacker to discover a path of exploitable vulnerabilities in interlinked or communicating assets. These interconnections among vulnerabilities, as well as potential cascading effects due to the propagation of threats, need to be taken into account when investigating the risk of a supply chain. A comprehensive analysis of the risk of a specific asset needs to take into consideration all potential attack paths leading to that asset, which can be seen as a cumulative risk originating from various vulnerabilities in different parts of the supply chain. In this context, the most important questions are:
- How does a risk in one component affect other parts, i.e., what is the propagated risk?
- What is the risk for one specific component if multiple components with several vulnerabilities may influence the vulnerability of that component, i.e., what is the cumulative risk?
A number of models exist that answer at least one of these questions. We give a short overview of existing approaches below and introduce a holistic method that treats both problems.
2.2.1.1 Probabilistic Attack Graphs
Sophisticated attacks often combine multiple vulnerabilities of a system. Such advanced attacks can
be modeled with probabilistic attack graphs that represent the different states of a system as nodes
and the relations between different states as directed edges. In this way, they allow computation of
all potential attack paths to a target of interest. One single path describes the various steps of an
attack where exploiting one vulnerability grants access to other vulnerabilities, e.g. by gaining some
privileges. A small example for the network configuration of an enterprise network can be found in
[13].
Even for such a small network as the one given in [13], attack paths may become quite complex, and thus several tools have been developed to generate attack graphs. Topological Analysis of Network Attack Vulnerability (TVA) [14] builds a so‐called exploit dependency graph that contains information about the conditions of an exploit and then searches this graph to combine various vulnerabilities. Multihost Multistage Vulnerability Analysis (MulVAL) [15] contains a reasoning engine that investigates the interaction between different components of a network based on data collected from vulnerability databases as well as the network configuration. Both tools run in polynomial time.
Further, attack graphs allow analyzing the risk of such advanced attacks by computing attack likelihoods. To this end, a component metric derived from the Common Vulnerability Scoring System (CVSS) metric vector [16] is attached to each attack node. Based on these component metrics, models for cumulative and propagated risks have been developed. In [17], an algorithm is provided to compute the probability of success of a multi‐step attack using probabilistic reasoning that takes into account the conditional dependencies between attack paths. The method has been applied in an empirical study in [18] and extended in [19].
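As a simplified sketch of such likelihood computations, assuming independent steps and independent paths (i.e., without the conditional dependencies handled in [17]):

```python
import math

def path_success(step_probs):
    """Probability that a single attack path succeeds, assuming the
    individual exploit steps are independent (a simplification; [17]
    also handles conditional dependencies between attack paths)."""
    return math.prod(step_probs)

def target_compromise(paths):
    """Probability that at least one of several independent paths to the
    target succeeds: the complement of all paths failing."""
    fail_all = math.prod(1 - path_success(p) for p in paths)
    return 1 - fail_all

# Two hypothetical paths to the same target asset; the per-step
# probabilities would in practice be derived from CVSS-based metrics.
paths = [[0.8, 0.5], [0.6, 0.9, 0.4]]
print(round(target_compromise(paths), 3))
# → 0.53
```

The independence assumption is exactly what the probabilistic-reasoning approaches above improve upon, since real attack paths often share exploits.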
2.2.1.2 Bayesian Approaches
Attack graphs can be used to describe cause‐consequence relationships but do not capture causal
dependencies between the different states of a network. Popular tools to describe such causalities
are Bayesian networks. Given the conditional dependencies and some prior beliefs about the
conditions, they allow computation of the likelihoods of the dependent events. For example, if it is known how two privileges can be used to execute some code (i.e., the conditional dependence is known) and some information on how likely it is to obtain these privileges is given, the likelihood of a
successful code execution can be computed. One such approach is that of a Bayesian Attack Graph
(BAG) presented in [20] that augments an attack tree with the likelihoods of exploitation of the
relations described by the attack tree. This extended description allows dynamic updating of the
computed risks if the prior beliefs about exploitation likelihoods change or some new knowledge is
available.
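The core of such dynamic updating is Bayes' rule; the following is a minimal sketch with invented numbers, not the full BAG machinery of [20]:

```python
def bayes_update(prior, likelihood_if_exploited, likelihood_if_not):
    """Posterior probability that a vulnerability has been exploited,
    given one piece of evidence (e.g. an IDS alert), via Bayes' rule."""
    evidence = (prior * likelihood_if_exploited
                + (1 - prior) * likelihood_if_not)
    return prior * likelihood_if_exploited / evidence

# Invented numbers: prior belief of 10 %, an alert that fires in 90 % of
# real exploits and 5 % of the time otherwise (false positives).
print(round(bayes_update(prior=0.10,
                         likelihood_if_exploited=0.9,
                         likelihood_if_not=0.05), 3))
# → 0.667
```

Re-running such updates whenever new evidence arrives is precisely the dynamic behaviour that makes BAGs attractive; the open question is where the prior comes from.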
Approximate inference techniques can be applied to BAGs with thousands of nodes to estimate the
probabilities of successfully compromising different parts of a system [21]. However, there is typically
a trade‐off between the level of detail that can be modelled theoretically and the tractability of the
analysis. So even though it might be possible to describe a network in all detail and perform all
computations, it gets cumbersome to find all components that contribute to the likelihood of
successfully compromising a component in the network.
Another issue of Bayesian models in general is the identification of prior probabilities. These are
error‐prone due to their subjectivity but may strongly influence the computed risk. The task of finding
appropriate priors is particularly challenging if only limited data is available, which is often the case
when investigating cyber security incidents. A method to estimate the (prior) probabilities of new
risks from experience with existing risks based on Bayesian learning is given, for example, in
[Foroughi08].
2.2.1.3 Rule‐based Reasoning
Rule‐based reasoning may be used to identify multi‐step attacks on large systems that exploit a
whole chain of vulnerabilities [22]. In order to define general rules, descriptions of vulnerabilities must include preconditions (i.e., the conditions necessary to exploit the vulnerability) and post‐conditions (i.e., the effects of exploiting it). This extended description enables automatic information processing and the identification of multi‐step attacks based on interdependencies among several vulnerabilities. Applying this model requires mining data and keeping the required information up to date.
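A toy sketch of this chaining idea, with invented vulnerability identifiers and conditions:

```python
# Each vulnerability is described by preconditions (needed to exploit it)
# and post-conditions (what a successful exploit yields). All identifiers
# and conditions are invented for illustration.
VULNS = {
    "CVE-A": {"pre": {"network_access"}, "post": {"user_shell"}},
    "CVE-B": {"pre": {"user_shell"}, "post": {"root"}},
    "CVE-C": {"pre": {"physical_access"}, "post": {"user_shell"}},
}

def chains(start_conditions, goal, state=None, used=()):
    """Enumerate multi-step attacks: repeatedly apply any vulnerability
    whose preconditions are satisfied until the goal condition is reached."""
    state = set(start_conditions) if state is None else state
    if goal in state:
        yield list(used)
        return
    for name, vuln in VULNS.items():
        if name not in used and vuln["pre"] <= state:
            yield from chains(start_conditions, goal,
                              state | vuln["post"], used + (name,))

print(list(chains({"network_access"}, "root")))
# → [['CVE-A', 'CVE-B']]
```

In a real system the vulnerability dictionary would be populated automatically from vulnerability databases, which is why the model depends on data mining and on keeping that information up to date.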
2.2.1.4 MITIGATE Approach
Based on a set of vulnerability chains that describe the dependencies between the different
vulnerabilities, we propose a qualitative way to estimate cumulative and propagated vulnerabilities.
Here, the cumulative vulnerability measures the probability that a vulnerability in a specific asset can
be exploited by any vulnerability chain (i.e., starting from any other vulnerability) while the
propagated vulnerability measures the weakness of all vulnerability chains (of a fixed length) due to
exploitation of a specific vulnerability. Both measures use a five‐tier scale (consisting of “very low”,
“low”, “medium”, “high”, “very high”) and interactions between vulnerabilities are described by
mappings such that the overall vulnerability is again measured on the same scale.
In our approach, we obtain these vulnerability chains by applying concepts similar to the TVA or MulVAL tools mentioned above, i.e., by building attack graphs based on the exploitation of vulnerabilities. We extend these chains with vulnerability measures (as described in the previous paragraph), following the example of the Bayesian approaches discussed above. However, by using a five‐tier scale instead of precise probabilities, we avoid both the problem of identifying prior probabilities and the subjectivity of these values, which represent the main drawbacks of Bayesian approaches. Overall, the main advantages of our method are:
- The qualitative approach (i.e., using the abstract five‐tier scale) avoids pretending an accuracy that can hardly be reached in practice.
- The method uses publicly available and up‐to‐date information on vulnerabilities.
- The method does not aggregate information, thus avoiding information loss (the cumulative vulnerability retains the information from all individual vulnerabilities).
This general approach enables the computation of individual, cumulative and propagated
vulnerabilities, impacts and risks. Explicit definitions and further illustration of the concepts can be
found in Deliverable D2.2 and in [23].
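The following sketch conveys the flavor of such qualitative mappings on the five-tier scale; the concrete aggregation tables are defined in D2.2 and [23], so the min/max rules below are illustrative assumptions only:

```python
SCALE = ["very low", "low", "medium", "high", "very high"]
IDX = {level: i for i, level in enumerate(SCALE)}

def chain_level(levels):
    """Vulnerability level of one chain. Illustrative rule: a chain is only
    as strong as its weakest link, so take the minimum along the chain."""
    return SCALE[min(IDX[l] for l in levels)]

def cumulative_level(chains_to_asset):
    """Cumulative vulnerability of an asset. Illustrative rule: the asset
    can be reached via any chain, so take the maximum over all chains."""
    return SCALE[max(IDX[chain_level(c)] for c in chains_to_asset)]

# Two invented chains leading to the same asset.
print(cumulative_level([["high", "medium"], ["very high", "low", "high"]]))
# → medium
```

Because every mapping returns a value on the same five-tier scale, the results of one aggregation can feed directly into the next, which is what keeps the overall computation closed on that scale.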
2.2.2 Risk Mitigation
In general, risk management is based on concepts and methodologies that describe the situation as accurately as possible and provide a way to examine the problem systematically. However, in the context of ports this modelling process becomes complex, since ports’ supply chains involve interconnected components and the arising threats may be associated with cascading effects. It is then a safe option to rely on mathematical methods to analyze such problems, since these allow investigating modified problems and incorporating data that is uncertain or incomplete.
In principle, choosing the right method for risk analysis and risk evaluation proves to be complicated. In recent years, a number of concepts, algorithms and tools have emerged with a focus on the protection of IT infrastructure and related systems. These risk assessment methods are often based on qualitative rather than quantitative approaches, for reasons of efficiency and due to practical obstacles. For example, the figures necessary for a quantitative analysis are hard to obtain accurately and may thus create a misleading illusion of accuracy on top of perhaps very vague underlying data.
In the following, we give an overview of methods typically used in risk management and risk mitigation and compare these with our proposed method.
2.2.2.1 Bayesian Approaches towards Decision Making
The AURUM system [24] uses Bayesian networks to determine threat probabilities (similar to the approaches described in the previous section). Based on these probabilities, a risk level for each asset is computed by multiplying the threat probabilities with impact values that are characteristic of the organization. The a‐priori likelihoods therefore have to be determined specifically for each case, which is supported by reasoning algorithms. The effects of potential mitigation actions, such as patching to close a vulnerability, are measured by the reduction of the resulting risk level.
The main problem with the AURUM system or similar approaches is that they are subjective and
sometimes based on assumptions about the adversary which are hard to verify (as already
mentioned in the previous section). Thus, these models often yield very specific solutions that might
not be easily applicable for new situations since, for example, additional probabilities need to be
estimated. Another issue is their need for a potentially vast amount of training data (i.e., the Bayesian
updates of the a‐priori models). The availability and reliability of such information about attacks as
well as a rational choice of suitable a‐priori models determines the quality of the results.
2.2.2.2 Mathematical Optimization (Single‐ and Multi‐Objective)
When looking for an optimal solution among several options, a variety of optimization methods such
as linear programming are available. These methods describe the optimization problem as a real‐
valued function that is maximized or minimized, depending on the context.
Applications to supply chain problems can be found in [25], where an optimization for production planning and one for distribution planning are combined into so‐called bi‐level programming, which can be used to solve hierarchical optimization problems.
In [26], a linear programming model is developed that produces an aggregate and detailed
production plan simultaneously. This approach aims to improve hierarchical planning approaches
that do not yield an optimal lower‐level (i.e., detailed) plan. A Mixed‐integer Linear Program (MIP) is
applied in [27] to model production planning. The main goal of the model presented there is optimal
allocation of assets needed for production in the light of changing demands such that earnings are
maximized while costs are minimized. Similarly, [28] applies two mixed‐integer linear programs to
describe supply chain design on one hand and production planning on the other hand to maximize
the net profit. The main problem of the system under consideration is to choose from various
potential suppliers, manufacturing facilities and distribution centers with possible configurations such
that customer demands are met.
Risk mitigation is usually constrained by limited resources, so the optimization methods need to be
adapted. While the simplest approach is to discard solutions that are too expensive, a more
promising and typically applied approach optimizes several goals simultaneously (i.e., the primary
objective together with the cost, rather than treating cost merely as a constraint). This leads to
multi‐objective optimization methods. One model that incorporates multi‐objective optimization
methods to keep costs low is [20]. A mixed‐integer programming model for supply chains that
minimizes the total costs while keeping a desired service level is given in [29].
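The budget-constrained, weighted-sum selection of mitigation actions described above can be sketched as a tiny 0/1 program; a realistic instance would use a MIP solver, and all coefficients here are invented for illustration:

```python
# Sketch (assumed, not from the deliverable) of budget-constrained
# mitigation selection as a small 0/1 program, solved by exhaustive
# search over all control portfolios.
from itertools import product

base_risk = 10.0
controls = [  # (risk reduction, cost) per candidate control
    (5.0, 4.0),
    (3.0, 2.0),
    (2.0, 3.0),
]
budget = 5.0
w_risk, w_cost = 0.7, 0.3  # weighted-sum multi-objective trade-off

best = None
for x in product((0, 1), repeat=len(controls)):        # 0/1 decisions
    spend = sum(xi * c for xi, (_, c) in zip(x, controls))
    if spend > budget:                                  # budget constraint
        continue
    residual = base_risk - sum(xi * r for xi, (r, _) in zip(x, controls))
    objective = w_risk * residual + w_cost * spend      # weighted sum
    if best is None or objective < best[0]:
        best = (objective, x, residual, spend)

print(best)
```

The weighted-sum objective is the simplest scalarization of a multi-objective problem; varying the weights traces out different trade-offs between residual risk and cost.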
While some of these models are able to capture uncertainty to some extent (e.g., [25] introduced a
real‐valued metric to represent uncertainty), their main drawback is the assumption that the
optimization can be done independently of the attacker's actions. Usually, the effect of a cyber‐attack
depends quite heavily on the type of attack. For example, an advanced persistent threat (APT)
generally causes more damage than a simple malware attack, since an APT is more sophisticated and
target‐oriented and thus more difficult to mitigate. If the attacker's actions influence the
effectiveness of mitigation methods, game‐theoretic models are more appropriate.
2.2.2.3 Classical Game Theory Models
A way to describe the dynamics of a security incident, i.e., the conflict between an attacker and a
system administrator, is to model the scenario as a game. Cox [30] provides a relation between game
theory and risk analysis and emphasizes that risk analysis and game theory are complementary. On
one hand, the application of game theoretic models requires an estimate of the consequences of
each pair of actions taken by attacker and defender which can be done with the help of existing risk
analysis tools. On the other hand, game theoretic analyses yield information on the optimal behavior
of both the attacker and the defender and thus provide some information on the likelihoods of
attacks if the attacker is fully rational.
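As a toy illustration of this complementary view (the loss matrix is invented, not taken from [30]), the defender's and attacker's optimal pure strategies in a zero-sum setting can be computed as minimax and maximin choices:

```python
# Toy zero-sum game sketch: the defender picks the action minimizing
# the worst-case loss over all attacker actions; the attacker picks
# the action maximizing the guaranteed loss. Matrix values are invented.
loss = [  # rows: defender actions, columns: attacker actions
    [4, 2],   # e.g., "patch only"
    [3, 1],   # e.g., "patch and monitor"
]

# Defender's minimax: minimize the worst case over attacker columns.
defender = min(range(len(loss)), key=lambda i: max(loss[i]))
# Attacker's maximin: maximize the guaranteed loss over defender rows.
attacker = max(range(len(loss[0])), key=lambda j: min(r[j] for r in loss))

print(defender, attacker, loss[defender][attacker])  # 1 0 3
```

Here the two values coincide at 3, so the pure strategy pair is a saddle point; in general a mixed (randomized) equilibrium has to be computed, e.g., via linear programming.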
Further, [31] explains how game theory can be integrated into the classical risk management process.
Specifically, the authors provide a full description on how the steps of the classical management
model ISO/IEC 27005 [9] can be mapped and utilized in a game theoretical model. While traditional
game‐theoretic models work with real‐valued payoffs, it is often difficult to estimate the
consequences of an attack by a single number. In order to retain the advantages of game‐theoretic
models (in particular their provable optimality), one is often willing to aggregate existing data into a
single number (e.g., a mean) to fit the classical setting. However, this approach discards a lot of
information and cannot capture uncertainty about the consequences. Such uncertainty is, however,
intrinsic to cyber‐attacks, as the success of an attack hinges on many factors. For example, the
success of a phishing mail depends not only on the operating system in place but also on the user
and their wariness of a suspicious‐looking message. In turn, the loss to the company under attack
depends on the availability of backups and on the interconnectedness of the company network,
which makes it hard to predict the consequences with a single crisp number.
2.2.2.4 MITIGATE Approach
As in a classical game‐theoretic model, our proposed method views risk management as a
competition between an attacker and a defender protecting his system. The main goal is to find an
optimal protection based on a pre‐defined set of available defense mechanisms such that the
expected damage is minimal. Note that game theory typically speaks of maximizing a utility function,
i.e., a measure of the profit for either player; the change of viewpoint to minimizing the damage
does not alter the concept at all. Such a game‐theoretic model seeks an optimal behavior against any
possible action plan of an adversary. In detail, the model minimizes the loss under the assumption
that the adversary tries to maximize it. In particular, the method does not rely on subjective
estimates of threat likelihoods and explicitly avoids adversary modelling.
In order to deal with the information loss, which happens due to the aggregation of all information
into a single real‐valued payoff, e.g., by replacing a range of potentially different assessments by an
average assessment such as a mean, we generalize the game‐theoretic setting and allow payoffs to
be distributions over all possible outcomes [32], [33]. More explicitly, assume we have a fixed
number of categories to describe the risk due to an attack, e.g., ranging from “very low” to “very
high”. We then record the number of expert opinions of each such risk category to get an empirical
distribution. A suitable preference relation as defined in [34] allows comparing the risk of different
situations, so that we are in a position to apply well‐known game‐theoretic concepts and, in
particular, to find a Nash equilibrium. By using such distribution‐valued payoffs, our
approach can handle uncertainty about the consequences of an attack that often occurs in
discussions with experts (e.g., when experts estimate a risk as “medium in some cases, but high in
some other cases”). This mathematically sound framework is applicable to various situations and
allows risk assessment under many different circumstances. Further, unexpected events, e.g., due to
a zero‐day vulnerability, can also be taken into account by assigning a small likelihood to extreme
losses.
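One way to make this concrete is the following sketch, which builds an empirical distribution from expert opinions and compares two distribution-valued payoffs with a simple tail-based ordering. The actual preference relation of [34] is more involved; this stand-in, which prefers the distribution putting less mass on the worst categories, is an assumption for illustration only:

```python
# Sketch of distribution-valued payoffs: expert opinions become an
# empirical distribution over risk categories, and two distributions
# are compared starting from the worst category ("very high") down.

CATEGORIES = ["very low", "low", "medium", "high", "very high"]

def empirical(opinions):
    """Turn a list of expert opinions into a distribution over categories."""
    return [opinions.count(c) / len(opinions) for c in CATEGORIES]

def prefer(a, b):
    """True if distribution a is strictly preferred (less risky) to b."""
    for pa, pb in zip(reversed(a), reversed(b)):  # worst category first
        if pa != pb:
            return pa < pb
    return False  # identical distributions: no strict preference

with_control = empirical(["low", "medium", "medium", "low"])
without_control = empirical(["medium", "high", "high", "very high"])

print(prefer(with_control, without_control))  # True
```

Note how the diverging opinions ("medium in some cases, high in others") are retained as probability mass instead of being averaged into one number.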
2.3 Evaluation and alignment profiles of risk assessment and management
methodologies
2.3.1 ISO 31000 Risk Management
2.3.1.1 General Overview
In 2009, the International Organization for Standardization (ISO) [8] published the international
standard for risk management, ISO 31000:2009 [8][35], which is based on the national approaches
AS/NZS 4360:2004 [36] from Australia and New Zealand as well as ONR 49000:2004 [37] from
Austria. This standard has since evolved into a globally accepted leading standard. Due to its
explicitly generic approach, the standard can be applied to every kind of organization, regardless of
its type, perspective or size (cf. [8]).
The ISO 31000 is organized as a standard family, consisting of several components which focus on
different aspects [8]:
ISO Guide 73 – Risk Management Vocabulary (2009)
This guide subsumes all terms and definitions. A new edition is planned for the end of 2017.
ISO 31000 – Principles and Guidelines (2009)
This standard provides the fundamental principles and generic guidelines for risk
management. A substantially extended new edition is expected at the end of 2017.
ISO 31004 – Guidance for the Implementation of ISO 31000 (2013)
This component provides guidance for implementation of ISO 31000 risk management
structures. It discusses concepts and principles of the framework in detail.
ISO 31010 – Risk Assessment Techniques (2009)
This part of the standard family introduces selected procedures for risk assessment and
discusses possibilities of their application.
ISO 31020 – Managing Disruption Related Risk (Draft, 2014)
ISO 31021 – Managing Supply Chain Risk (Draft, 2015)
2.3.1.2 Concept and Framework
A distinct characteristic of ISO 31000 is the two‐tier structure with a risk management framework on
the one hand, and the operative risk management process on the other hand (cf. Figure 2). These two
life cycles are linked by the framework’s activity “implementing risk management”. The risk
management framework represents the top‐down approach and ensures the consistent embedding
of risk management in the organization from a quality management perspective. It follows an
iterative and continuous improvement cycle according to the general plan‐do‐check‐act (PDCA)
cycle. On the other hand, the operative risk management process supports the bottom‐up approach,
which puts the different risks in an organizational context, assesses and treats them. During the
whole risk management process, two guiding sub‐processes ensure communication and consultation
as well as monitoring and review. The first interacts with the stakeholders; the latter enables
performance measurement. The following figure visualizes the components of both cycles and reflects the relationship
as described above [8].
Figure 2: Risk management framework and risk management process
according to ISO 31000 [8]
The ISO 31000 postulates eleven principles [8], interpreting risk management as an interdisciplinary
task on all levels which has to be transparently integrated into existing organizational structures and
has to react appropriately to situational changes. In detail, risk management
creates and protects value;
is an integral part of all organizational processes;
is part of decision making;
explicitly addresses uncertainty;
is systematic, structured and timely;
is based on the best available information;
is tailored;
takes human and cultural factors into account;
is transparent and inclusive;
is dynamic, iterative and responsive to change; and
facilitates continual improvement of the organization.
These eleven principles lead to the risk management framework, consisting of five components [8]
which can be seen on the left side of Figure 2.
An explicit and strong commitment of the management is a prerequisite for successful
implementation of effective risk management structures. Rigorous strategic planning ensures that
risk management is consistently embedded in the organizational structure. Integration is enforced on
a strategic level and mainly affects the organization's culture, the commitment of stakeholders, the
availability of resources and compliance [8].
The risk management framework must be designed consistently in the internal and external context
of the organization. Additionally, it emphasizes the interaction with the stakeholders. A risk
management policy defines the objectives for risk management in written form. A risk management
plan enables the integration of risk management into organizational processes and thus implements
the risk management policy. Furthermore, responsibilities are defined and resources are allocated.
Finally, internal and external communication and reporting are set up [8].
The organization enforces the required activities in line with the implementation of the risk
management framework, i.e., applying the risk management policy, conducting the risk
management process, fulfilling the compliance requirements, ensuring decision making, interacting
with stakeholders and communicating in general. According to the risk management process, the
defined risk management plan is conducted on all relevant levels in all functions of the organization.
Indicators, progress measurement for the risk management plan, risk reports and reviews of the
design and effectiveness of the applied risk management measures implement an ongoing
effectiveness monitoring of the framework.
A continuous improvement process ensures an iterative development of the risk management
framework, the risk management policy, and the risk management plan – in fact all levels of
implementation.
2.3.1.3 Process Description
Besides the above discussed risk management framework, the operative implementation of risk
management, i.e., risk management process, is the second core part of the ISO 31000. The necessary
process activities building the risk management process [8] are illustrated in Figure 3.
Figure 3: Illustration of the operative risk management process from ISO 31000 [8].
2.3.1.3.1 Establishing the context
The organization embeds the risk management process into existing structures and specifies the
aspects defined in the risk management framework in more detail and in a process‐oriented way. The
external environment and internal aspects are described with respect to the risk management
objectives the organization wants to achieve, together with the corresponding link to the risk
management framework. The implementation depends on the organization's needs.
The external context is wide‐ranging. It includes social, cultural, political, legal, regulatory, financial,
technological, economic, and natural aspects. It also includes the competitive environment from
international, national, regional or local perspectives, key trends with potential impact on the
objectives of the organization, and relationships with, and perceptions and values of, stakeholders.
The internal context subsumes governance, organizational structure, processes, roles and
responsibilities, policies, objectives of projects, activities and especially the identification of chances
(in other words: positive risks), strategies, skills in the form of knowledge, and resources (e.g., capital,
time, people, processes, systems and technologies), stakeholder relationships, organizational culture,
information systems and flows, formal and informal decision making processes, standards, guidelines
and models that the organization adopted, and form and extent of contractual relationships with
partners.
Further influences that shape establishing the context of the risk management process include: the
objectives, scope, depth and breadth of risk management activities; the activities, processes,
functions, projects, products, services and assets in scope, in terms of time and location as well as
their relationships; the risk assessment methodology; the way of evaluating performance and
effectiveness; the identification and specification of the required decisions; and the scoping, framing,
extent, objectives and resources of the necessary information processing.
A core element of this process step is the obligatory definition of the context discussed here in order
to enable an operative implementation in subsequent activities. In particular, the necessary risk
criteria need to be defined, e.g., the nature and types of causes and consequences and their
corresponding measurement methods, likelihoods, timeframes, the level of risk, stakeholders' views,
risk acceptance, and the treatment of combinations of multiple risks.
2.3.1.3.2 Risk assessment
Risk assessment subsumes the complete process of risk identification, risk analysis and risk
evaluation. During the first phase – Risk Identification – the organization identifies sources of risk,
areas of impact, risk events and their consequences. The core objective is to generate a
comprehensive list of all risks, including missed opportunities. The main criterion is the impact on the
organization, regardless of whether the risk source is under the organization's responsibility or not.
Cascading effects and cumulative impacts should be in scope as well. The
organization should apply appropriate instruments and methods, integrate relevant background
information, and involve competent persons into the identification phase.
The Risk Analysis phase concentrates on the causes and sources of risk and thus the standard
attributes, i.e., the potential consequences and the likelihood of occurrence, are determined. In this
context, the risk criteria defined in the organizational context (i.e., the Risk Framework described
above) are applied. The analyzed risks are prepared and appropriately communicated to the
stakeholders in a way that allows sustained decision‐making. Risk analysis can be of a quantitative,
semi‐quantitative or qualitative nature; a combination of these paradigms is also possible. A risk
analysis can be conducted at different levels of detail according to the organization's requirements.
A reasonable modelling of the results, especially of the amount of risk considering all influence
factors, is the main objective of this step. The uncertainty and diversity of the experts' opinions need
to be documented, because determining only one numerical value is not sufficient in many cases.
The third component of risk assessment is Risk Evaluation. It prepares the results of the risk analysis
and subsequently develops a prioritization for the risk treatment. Hence, the amount of risk
determined during Risk Analysis is compared to the defined risk criteria. This comparison reveals the
necessity for a risk treatment. The evaluation considers the generic circumstances, the risk attitude of
the organization and the risk tolerance of the stakeholders.
2.3.1.3.3 Risk treatment
The selection and implementation of one or more options for modifying risks is done during risk
treatment. It follows a cyclical process, in which the risk treatment is decided, the residual risk is
determined, a new risk treatment measure is designed and its effectiveness assessed. ISO 31000
suggests fundamental risk treatment strategies, which are not mutually exclusive [8]:
avoiding the risk
taking the risk
removing the source of risk
changing the likelihood
changing the impact
sharing the risk with a third party
retaining the risk
A combination of these strategies is often useful and realistic. The selection of an optimal risk
treatment measure includes cost and benefit considerations following an economic alignment. One
success factor is addressing the communication needs of the stakeholders. A risk treatment plan
documents how and why the defined measures need to be implemented. Furthermore, it defines
order, priority, need for resources, responsibility, performance measures, time and implementation
plan, and suggests monitoring measures. It is explicitly stated that risk treatment itself can introduce
new risks, defined as secondary risks, which are also addressed by the risk treatment plan. The
residual risk needs to be documented and should be assessed as well.
2.3.1.3.4 Communication and consultation
This guiding sub‐process deals with the communicative interaction from and to the stakeholders. A
communication plan defines the corresponding activities and decides about causes, impacts and
treatments. The result of these activities is important because it is a prerequisite for decision making
and influences the opinion of the stakeholders. Therefore, the standard suggests a team‐oriented
and multi‐level alignment.
2.3.1.3.5 Monitoring and review
A core element is monitoring and review: performing regular and situation‐based checks of all
aspects and, in general, performance measurement during all phases of the risk management
process. The main focus here is on appropriateness and effectiveness as well as on further
development based on new findings or in reaction to changing circumstances.
2.3.1.4 Alignment Profile
As already pointed out above, the ISO 31000 Risk Management Process is very generic and covers the
individual process steps on a rather high level. Thus, the scores from the alignment with the ENISA
Risk Management Process [1] are in general not very high, i.e., not exceeding a score of 1.5, as can be
seen from the alignment profile in Figure 4 and in the evaluation form in Appendix 0. The ISO
31000 scores the highest results in the steps P.3 ‐ P.7 of the ENISA Process, stemming from the Stage
A “Definition and Scope of the Framework” and Stage B “Risk Assessment”. Although they are quite
generic, the respective parts of the ISO 31000 describe in sufficient detail the main characteristics of
preparative activities for the risk management process and how the risk assessment needs to be
carried out. Further, also the last steps P.14 “Risk Monitoring and Review” and P.15 “Risk
Communication, Awareness and Consulting” reach a score of 1.5, since ISO 31000 describes the key
features of how to monitor and communicate the identified risks within an organization quite well.
The steps with the lowest scores are located in the ENISA process’ Stage C “Risk Treatment” and
Stage D “Risk Acceptance”. In particular, steps P.10 “Approval of action plan”, P.12 “Identification of
residual risk” and P.13 “Risk acceptance” are only slightly covered in the risk treatment plans of the
ISO 31000, and specific activities are not indicated at all; therefore, the ISO 31000 receives the low
score of 0.5 for these steps (cf. Appendix 0).
Figure 4: ISO 31000 Alignment Profile
2.3.2 ISO/IEC 27005 Information Security Risk Management
2.3.2.1 General Overview
The ISO/IEC 27005 [9] is part of the ISO 27000 family of standards [37]. While the ISO 31000
described in the previous Section 2.3.1 covers the field of risk management in general, the ISO 27000
family deals with concepts from the area of information security and the implementation of an
information security management system in different organizations. The family has evolved from the British
Standards BS 7799:1995 [38], [39], which were transferred into the international standard ISO/IEC
17799 [5] in the year 2000. In 2005, the ISO 17799 evolved into the two standards ISO/IEC 27001 [40]
(which is based on part 2 of the BS 7799) and the ISO/IEC 27002 [41] (which is based on part 1 of the
BS 7799). Both standards cover the development and the introduction of an information security
management system within organizations of various types.
In the context of the ISO 27000 family of standards, the ISO 27005 represents an extended risk
management process, which is specifically adapted to the requirements of information security.
Therefore, it implements the five main steps already given in ISO 31000 (cf. Figure 3 and Section
2.3.1.3 above) in relation to information security (details are described in Section 2.3.2.3 below).
Besides the topic of risk management, the ISO 27000 family has been extended with additional
standards covering process specifications and implementation guidelines (ISO/IEC 27003 [42]) as well
as monitoring, analysis and evaluation techniques (ISO/IEC 27004 [43]). The ISO 27000 family covers
multiple other standards, which focus on specific topics in the field of information security, for
example network security, cloud security, application security, and others (for a detailed list cf. [44]).
2.3.2.2 Concept and Framework
The ISO 27005 standard contains the description of a security risk management process with its
associated activities. This process is based on the risk management process described by ISO 31000
(cf. Figure 2 above) and intends to incorporate details on information security requirements serving
as an extension for an efficient ISMS. The overall ISO 27005 risk management process is illustrated in
Figure 5. Therein, it can be seen that besides the general five steps coming from the ISO 31000,
there is an additional step “Risk Acceptance” and two decision points (“Assessment Satisfactory”
and “Treatment Satisfactory”). The process itself is explicitly conceived as a continuous process
(which is not described that clearly in the ISO 31000) that must be iterated periodically after the first
run. Therefore, individual steps like risk assessment and risk treatment can immediately trigger a
restart of the process due to the two decision points. Furthermore, it is possible that the process will
not immediately lead to an acceptable level of residual risk. In this case the parameters are changed
and the process is started over, too. In other words, if the results coming from the risk assessment or
the efficiency of the risk mitigation activities identified during risk treatment lack a certain
(predefined) quality, or the residual risk after risk acceptance is too high, the process can return to
the context establishment with refined parameters to reevaluate all previous steps.
The process can be carried out for an entire company, for individual organizational units (department,
physical location, etc.), for every IT system (whether it already exists or is still planned) or regarding
specific issues (e.g., business continuity planning). However, the risk management process requires
the full support from the organization’s management to work properly, as already stated in the ISO
31000 above (cf. Section 2.3.1.2) or in the ISO 27005 itself. The general goal is to bring the risk level
of a company to an acceptable degree by means of an efficient and timely process. In this context,
the procedure of accepting the residual risk is made explicitly during the process (i.e., in the last step
“Risk Acceptance”) and communicated accordingly to make this decision clear to all parties involved
in the process.
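The iterative control flow with its two decision points and the final acceptance check can be sketched as follows; all functions and numbers below are invented placeholders, not part of the standard:

```python
# Control-flow sketch of the iterative ISO 27005 process: the two
# decision points and the residual-risk check can each send the
# process back to context establishment with refined parameters.

def risk_management_process(context, assess, treat, residual_acceptable,
                            refine, max_iterations=10):
    for _ in range(max_iterations):
        assessment = assess(context)
        if assessment is None:             # decision point 1: not satisfactory
            context = refine(context)
            continue
        residual = treat(assessment)
        if residual is None:               # decision point 2: not satisfactory
            context = refine(context)
            continue
        if residual_acceptable(residual):  # risk acceptance
            return residual
        context = refine(context)          # residual risk too high: iterate
    raise RuntimeError("no acceptable residual risk found")

# Toy instantiation: residual risk shrinks as the context is refined.
result = risk_management_process(
    context=8,
    assess=lambda c: c,                    # assessment always "satisfactory"
    treat=lambda a: a - 3,                 # treatment removes a fixed amount
    residual_acceptable=lambda r: r <= 2,
    refine=lambda c: c - 1,
)
print(result)  # 2
```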
Figure 5: Illustration of an information security risk management process [9]
2.3.2.3 Process Description
As already mentioned above, the ISO 27005 risk management process consists of six main steps,
ranging from context establishment through risk assessment to risk acceptance, plus two supporting
steps for continuous monitoring and feedback. The details for these steps are given in the following
subsections.
2.3.2.3.1 Context Establishment
In the first step, all the information required to collect the relevant risks is compiled and all the
necessary framework conditions are created for carrying out the process. Among other things, a
decision must be reached on which risk management approach should be used, which includes the
definition of various criteria regarding risk evaluation, impact and the organization's risk acceptance.
Furthermore, the scope and boundaries need to be defined in detail; they represent the general
view of the organization on the assets to be included in the process and thus significantly influence
its direction. By defining precise boundaries, it is ensured that all relevant assets and their associated
risks are included in the process. Another essential element is the definition of an organizational unit
that promotes security risk management within the organization. Finally, the commitment of the
management and the availability of the required resources are important to develop the risk
management from a first run into a continuous process and an integral part of the organization.
2.3.2.3.2 Risk Assessment
As depicted in Figure 5, risk assessment is a composite step in which risks are identified, described
either quantitatively or qualitatively, and finally prioritized based on the risk evaluation criteria
defined in the previous step, Context Establishment (cf. Section 2.3.2.3.1 above). In detail, Risk
Assessment consists of three sub‐steps:
‐ Risk Identification,
‐ Risk Analysis, and
‐ Risk Evaluation.
The ISO 27005 standard itself indicates that this phase is repeated several times. In the first run,
attention is primarily paid to the potentially highest risks. In the following runs, the analysis is
broadened to identify additional risks. If insufficient information is available or if the parameters
selected in the previous step prove to be inadequate, the entire process is restarted.
Risk Identification
In this first sub‐step, it is examined which kinds of incidents can happen and subsequently lead to a
loss. This includes incidents which are under the control of the organization as well as those that are
based on external influences. Based on the scope selected in the previous step, Context
Establishment, the relevant assets of the organization are identified. Further, sufficient information
needs to be collected on these assets to identify potential threats. In this context, different aspects
need to be considered such as the source of the threat (e.g., intentional or unintentional) or the type
(e.g., unauthorized actions, physical damage, technical failures, etc.).
After identifying potential threats, a list of existing and planned controls is created. A control should
either reduce the probability of occurrence or the impact of a specific risk and further has to be
evaluated according to its efficiency for the given situation within the organization. Next, a list of
existing vulnerabilities is identified. Vulnerabilities are related to assets and exist due to technical,
organizational or social deficiencies. However, a vulnerability is not considered harmful unless a
specific threat exists which exploits it. Furthermore, certain controls can be related to specific
vulnerabilities in the sense that they counter the exploitation of the vulnerability. Finally, for each
incident scenario, i.e., each combination of a threat exploiting a vulnerability, the potential
consequences need to be evaluated. These consequences can be of a financial nature, i.e., monetary
costs, but organizational, social or reputational consequences have to be considered as well.
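A minimal data-model sketch of this identification sub-step (the structure and all entries are assumptions for illustration, not prescribed by ISO 27005) could pair each threat with the vulnerabilities it exploits and drop pairs already countered by an existing control:

```python
# Incident scenarios as threat/vulnerability pairs, filtered by the
# vulnerabilities that existing controls already counter.

threats = {  # threat -> vulnerabilities it can exploit
    "sql_injection": ["unvalidated_input"],
    "power_outage": ["no_ups"],
}
vulnerabilities = {"unvalidated_input", "no_ups", "weak_passwords"}
controls = {  # control -> vulnerabilities it counters
    "input_validation": {"unvalidated_input"},
}

countered = set().union(*controls.values())
scenarios = [(t, v) for t, vulns in threats.items()
             for v in vulns
             if v in vulnerabilities and v not in countered]

print(scenarios)  # [('power_outage', 'no_ups')]
```

Each remaining pair would then be annotated with its potential consequences (financial, organizational, social or reputational) in the subsequent analysis.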
Risk Analysis
In general, risk analysis can be either of qualitative or quantitative nature. Quantitative risk analysis is
based on precise numerical values, whereas qualitative methods use descriptive scales (such as
“low”, “medium” and “high”) to describe the consequences and likelihood of an incident scenario. The type of
risk analysis must be consistent with the risk evaluation criteria that are developed as part of the
Context Establishment.
A core focus of risk analysis lies on the assessment of the potential consequences as well as the
likelihood of occurrence of the identified incident scenarios. When assessing the consequences, the
business impact of a specific incident scenario needs to be considered; as pointed out above, impacts
are not only characterized by monetary loss but also by organizational, human or technical criteria
(which have to be defined as part of the Context Establishment). When assessing the likelihood, it
needs to be considered how often a specific threat might occur and how easily related vulnerabilities
can be exploited. This information can be collected from historical data but also from expert
opinions. In the end, the combination of likelihood and consequences results in a risk level for each
incident scenario.
Risk Evaluation
In this step, the risk levels of all incident scenarios are compared against the risk evaluation criteria
defined during the Context Establishment (cf. Section 2.3.2.3.1). This allows setting the identified
incident scenarios in relation to each other and identifying the most severe risks in the given context
of the organization. Based on that, a list of risks prioritized according to the risk evaluation criteria is
created.
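A common qualitative scheme implementing this combination and prioritization (assumed here for illustration; the standard does not mandate a specific matrix, and all scenario entries are invented) maps ordinal likelihood and consequence scales to a risk level and then sorts the scenarios:

```python
# Qualitative risk matrix sketch: an additive likelihood x consequence
# mapping yields a risk level per incident scenario, followed by the
# prioritization of Risk Evaluation.

LEVELS = ["low", "medium", "high"]

def risk_level(likelihood, consequence):
    """Simple additive risk matrix over ordinal scales."""
    score = LEVELS.index(likelihood) + LEVELS.index(consequence)
    return LEVELS[min(score, len(LEVELS) - 1)]  # 0 -> low, 1 -> medium, >=2 -> high

scenarios = {
    "phishing -> credential theft": ("high", "medium"),
    "flood -> server room outage": ("low", "high"),
    "usb malware -> workstation": ("low", "low"),
}

assessed = {s: risk_level(l, c) for s, (l, c) in scenarios.items()}
# Risk evaluation: prioritize, highest risk level first.
prioritized = sorted(assessed, key=lambda s: LEVELS.index(assessed[s]),
                     reverse=True)
print(prioritized[0])  # 'phishing -> credential theft'
```

A real deployment would derive both the matrix and the acceptance thresholds from the risk evaluation criteria fixed during Context Establishment.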
2.3.2.3.3 Risk Treatment
In ISO 27005, risk treatment is described as a complex step consisting of four possible options: risk
modification, risk retention, risk avoidance and risk sharing (cf. also Figure 6 below). Those options
are not mutually exclusive; sometimes it might even be helpful or necessary to use a combination of
these options. Any decision in this context should be based on how the risk is perceived by the
parties involved and on how it can best be communicated.
The most common way of risk treatment is risk modification, which is characterized by the
introduction of new controls or the modification of existing controls. Any decision on new controls
needs to be taken with respect to existing resources and the general specifications derived in the
previous steps, also considering different constraints (i.e., time constraints, financial constraints,
technical constraints etc.). An optimization of the controls can also lead to a reduction in resource
requirements.
In case a specific risk is evaluated as too high (e.g., due to the costs of implementing new controls),
the most favorable way might be to avoid the risk completely. This can be achieved by not
implementing a planned activity or by stopping an existing one. Further, the risk can also be shared fully
or in part with an external partner, e.g., an insurance company. Conversely, a risk can also be
below the defined risk acceptance criteria and thus be retained directly. Finally, it needs
to be determined which risks remain as residual risks.
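The four treatment options can be sketched as a simplified decision aid. The decision logic below (acceptance threshold, budget check, transfer flag) is an invented illustration; the standard itself leaves this decision to the organization:

```python
# Simplified decision aid for the four ISO 27005 treatment options. The
# decision logic (thresholds, budget check, transfer flag) is an invented
# illustration, not a rule given by the standard.

from enum import Enum

class Treatment(Enum):
    MODIFICATION = "introduce or adapt controls"
    RETENTION = "accept the risk as it is"
    SHARING = "transfer (parts of) the risk, e.g. to an insurer"
    AVOIDANCE = "stop or do not start the activity"

def choose(risk_level: int, acceptance: int,
           control_cost: int, budget: int,
           transferable: bool = False) -> Treatment:
    if risk_level <= acceptance:      # below the acceptance criteria
        return Treatment.RETENTION
    if control_cost <= budget:        # affordable controls exist
        return Treatment.MODIFICATION
    if transferable:                  # e.g. an insurable risk
        return Treatment.SHARING
    return Treatment.AVOIDANCE        # too costly and cannot be shared

print(choose(risk_level=7, acceptance=3, control_cost=10, budget=20).value)
```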
[Figure: risk assessment results lead to risk decision point 1; risk treatment follows, with risk decision point 2 yielding the residual risk.]
Figure 6: ISO 27005: The Risk Treatment Activity [9]
2.3.2.3.4 Risk Acceptance
The acceptance of risks is crucial for the integration of risk management into the business processes, and the
responsible managers must affirm the individual decisions. It has to be noted that a risk is not implicitly accepted
just because it is below a certain threshold (e.g., due to a low likelihood of occurrence, cost issues, etc.). In
each individual case, a decision should be made based on the specific circumstances of that case,
such as a high benefit gained by accepting the risk. Further, all decisions must be formally recorded and
communicated (cf. the following section).
2.3.2.3.5 Risk Communication and Consultation
Risk communication is a key factor in implementing a risk management process. Therefore, this
process is ongoing throughout the whole risk management process. Information about current risks is
continuously collected and, at the same time, the employees potentially affected by the risks are
informed about the current results of the risk management process. Especially regarding the risk
treatment plan, support from the individual employees is crucial.
2.3.2.3.6 Information security risk monitoring and review
To get the right perspective on the current risk situation, it is necessary to keep all risks and their
factors (i.e., assets, value of assets, impacts, threats, vulnerability, likelihood of occurrence) up to
date. To ensure this, an ongoing monitoring and reviewing process is initiated.
2.3.2.4 Alignment Profile
Compared to the ISO 31000 described in the previous Section 2.3.1, the ISO 27005 provides much
more detail on how the specific risk management steps can be implemented within an organization.
The focus lies on IT risks, as discussed in the general overview above (cf. Section 2.3.2.1), and thus
the ISO 27005 can be more precise when it comes to the context establishment, risk assessment and
risk treatment. In these parts, the ISO 27005 scores the highest values when evaluated against the
ENISA Risk Management Process (cf. the alignment profile in Figure 7 below and the evaluation form
in Appendix 0). In detail, steps P.5 “Identification of risks”, P.6 “Analysis of relevant risks” and P.7
“Evaluation of risks” reach the full score of 3.0 due to the exhaustive description of these aspects in
the ISO 27005. The last steps P.14 “Risk Monitoring and Review” and P.15 “Risk Communication,
Awareness and Consulting” have an equally high score, since the respective sections 12 and 11 of the
ISO 27005 provide a detailed overview of the activities that need to be implemented in an
organization to ensure an appropriate monitoring of the process results as well as their
communication to the relevant decision makers. As in the ISO 31000, parts of the risk treatment are
not well covered in the ISO 27005 either. Thus, steps P.9 “Development of action plan”, P.11
“Implementation of action plan” and P.12 “Identification of residual risks” reach only below-average
scores of 1.5 and 1.0, respectively.
[Radar chart over the process steps P.1–P.15]
Figure 7: ISO 27005 Alignment Profile
2.3.3 NIST Risk Management Framework
2.3.3.1 General Overview
The National Institute of Standards and Technology (NIST) is a federal agency of the United States of
America. It belongs to the Department of Commerce and is responsible for standardization
processes. NIST publishes three series of guidelines and best practices, called Special Publications
(SP), dealing with "Computer Security" (SP 800), "Cybersecurity Practice Guides" (SP 1800) and
"Computer Systems Technology" (SP 500) (see [45] for an overview). Some publications in the
Computer Security series cover risk management and support risk management
processes in public institutions. These are the NIST SP 800‐30 (Revision 1) “Guide for Conducting Risk
Assessments” [7], which describes a generic risk management process similar to the ISO 31000 (cf.
Section 2.3.1 above), the NIST SP 800‐39 “Managing Information Security Risk” [46], which focuses on
risk management aspects in the context of information security similar to the ISO 27005 (cf. Section
2.3.2 above) and the NIST SP 800‐37 (Revision 1) “Guide for Applying the Risk Management
Framework to Federal Information Systems” [47], which presents a risk management framework
designed to be integrated into federal organizations. All these publications aim to enable
organizations to meet the requirements defined in the Federal Information Security Management Act
of 2002 (FISMA) [48].
2.3.3.2 Concept and Framework
The NIST Special Publications describe risk management as a two‐part framework which, like the ISO 31000
(cf. Section 2.3.1 and Figure 6 above), consists of a risk management part and a risk assessment part.
The risk management part includes the four components “Frame”, “Assess”, “Respond” and
“Monitor”, which are reminiscent of the PDCA cycle also mentioned in Section 2.3.1 above and represent the
high‐level steps. In further detail, the first component “Frame” addresses strategic questions
regarding risk management, defining a risk management policy for the whole organization. Therein,
general approaches are defined for how the activities in the following three components “Assess”,
“Respond” and “Monitor” are carried out. Further, the framing conditions for the overall process are
determined, e.g., the organization’s general risk perception and the boundaries of the process.
In the second component, “Assess”, the risk assessment process is described in detail, as given in
Section 2.3.3.3. Therein, threats and vulnerabilities are identified and their respective likelihood and
impact are evaluated. Given the risks coming out of the assessment process, the third component,
“Respond”, specifies strategies for how the organization reacts to those risks. This includes the
evaluation of these strategies, the selection of appropriate strategies as well as their implementation.
It has to be noted that all these activities are taken with respect to the organization’s risk acceptance
criteria defined in the “Frame” component. Finally, in the fourth component, “Monitor”, the
effectiveness of the strategies selected in the “Respond” component is continuously evaluated. This
ensures that the implemented activities have the expected effect and are compliant with relevant
legislation and regulations.
Figure 8: Risk management framework and risk management process according to NIST SP 800‐30 [7]
In addition to the four‐component risk management part, at the core of all three Special Publications
related to risk management lies the concept of Multitiered Risk Management [7], [46]. This concept
has been introduced in NIST SP 800‐30 and supports the integration of the risk management process
into all areas of an organization. Due to the split into the Organization Tier, the Mission/Business Tier
and the Information Systems Tier, risk assessment is carried out at different levels in the hierarchy of an
organization, ranging from the strategic to the tactical level (where the latter is most commonly
addressed by other risk management frameworks).
At Tier 1, the Organizational Level, the risk assessment focuses on organizational aspects. In other
words, Tier 1 covers all risk assessment activities in relation to the organization as a whole, including
policies, regulations and guidelines, as well as strategic weaknesses within the organization. The
assessment on this level is more abstract and high‐level than, for example, on Tier 3 (cf.
description below), building on assumptions, constraints, risk tolerance and priorities. Hence, the first
component of risk management (i.e., “Framing”) is implemented at this level, which subsequently
provides the context for all further activities. Accordingly, the assessment results have direct impact
on activities in Tier 2 and Tier 3.
Tier 2, the Mission/Business Process Level, addresses risks in relation to the organization’s mission
and business processes as well as their resilience. In this context, it is evaluated to what extent
information systems are applied in specific business processes, how they are integrated into the
overall enterprise architecture and which potential threats and risks might arise from that. Therefore,
the relevant framing conditions identified and defined in Tier 1 come into play. In more detail, the
organization’s mission and the strategic goals determined in Tier 1 influence the definition of the
business processes and affect their prioritization, respectively. In this way, the risk assessment
activities on Tier 2 are strongly related to the Business Continuity Plans designed later in the
organization. Further, the results from Tier 2 significantly influence activities on Tier 3, when it comes
to the implementation of security controls for the information systems covered on Tier 3.
Risk assessments on Tier 3, the Information System Level, are the most common and widely applied
in organizations since they focus on tactical risks from an information system perspective. They take
the organizational and business process related risk context, risk decisions and risk activities from Tier
1 and Tier 2 into account. The central task at Tier 3 concerns identification and description of
vulnerabilities within the specific information systems, the estimation of the risks related to them
(i.e., impact and likelihood) as well as the categorization and prioritization of these systems. Based on
this categorization, security controls and mitigation actions are defined in coordination with the
overall enterprise architecture (evaluated on Tier 2) and the organization’s policies and risk tolerance
thresholds (defined on Tier 1). These are implemented, authorized, verified by assessments, and
monitored. The activities on Tier 3 are integrated into the Risk Management Framework defined in
NIST 800‐37.
2.3.3.3 Process Description
The risk assessment process is located in the second component, “Assess”, of the NIST framework’s
risk management part and consists of four major steps, where the second step is divided into the five
main tasks of the risk assessment. In general, the risk assessment process is similar to the process defined in
ISO 31000 (cf. Figure 2 above) and in the ISO 27005 (cf. Figure 5 above). Hence, we will refer to the
similarities in the following detailed description of the steps, where appropriate. As a clear difference
to other risk management frameworks, each step and task in the process describes certain
characteristics regarding the multitiered structure described above. We will not go into detail on
these specificities here but refer to [7] for further information on that.
Figure 9: Illustration of the risk assessment process as defined in NIST SP 800‐30 [7]
2.3.3.3.1 Prepare Assessment
In this first step, the context for the overall process is established; therefore, it is directly related to
the “Context Establishment” steps in ISO 31000 and ISO 27005. The main information for implementing
this step comes from the first component, “Frame”, of the management process. As already
mentioned in the previous Section 2.3.3.2, the general settings for the risk assessment are defined
therein, e.g., the general framing conditions, the scope, the methodologies to be applied, etc. Based
on this information, five tasks are executed in this first step, starting with the definition of the
purpose of the assessment, i.e., the expected goals of the overall risk assessment process and which
steps the generated results should support. Next, the scope of the risk assessment is identified, in
detail: the applicability to the organization (i.e., which parts of the organization are affected), the
effective time frame (i.e., how long the results of the process remain valid) and the
architectural and technological considerations (i.e., which systems fall into the scope of the
assessment). Further, the general assumptions and constraints for the process are determined. These
also come from the “Frame” component and include the types of threats, threat sources and
vulnerabilities examined in the process as well as the applied methodologies for estimating
likelihoods and impacts. In the preparation step, the information sources which provide the required
data for the assessment (e.g., for likelihoods, impacts, threat sources and vulnerabilities) are also
defined. Finally, the general risk model is fixed, together with both the risk assessment approach (i.e.,
qualitative, quantitative or a mixture) and the analysis approach (i.e., threat‐oriented,
vulnerability‐oriented or asset/impact‐oriented).
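The outputs of this preparation step can be captured in a structured record. A hypothetical sketch follows; all field names and values are illustrative, not taken from NIST SP 800‐30:

```python
# Hypothetical record of the "Prepare Assessment" outputs; all field names
# and values below are illustrative, not taken from NIST SP 800-30.

preparation = {
    "purpose": "prioritize ICT risks for the port community system",
    "scope": {
        "organizational_units": ["IT operations", "terminal operations"],
        "validity_of_results": "12 months",
        "systems": ["port community system", "AIS gateway"],
    },
    "assumptions_constraints": {
        "threat_types": ["adversarial", "accidental"],
        "estimation_method": "qualitative five-level scales",
    },
    "information_sources": ["vulnerability feeds", "incident history",
                            "expert interviews"],
    "risk_model": {
        "assessment_approach": "qualitative",   # qualitative / quantitative / mixed
        "analysis_approach": "vulnerability-oriented",
    },
}

print(preparation["risk_model"]["analysis_approach"])
```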
2.3.3.3.2 Conduct Assessment
This second step of the process represents the core of the risk assessment, i.e., in this step the
assessment is carried out, which relates it to the “Risk Assessment” block in ISO 27005 and ISO
31000. Thus, the individual tasks of this step are described in a little more detail in the following
paragraphs. The main objective of this step is to output a list of prioritized risks that will trigger and
support the decision on how to respond to them. As already pointed out in the discussion of ISO
27005, the individual tasks of this step can be carried out in several iterations to account for, e.g.,
changes in the threat landscape or new vulnerabilities during the execution of the assessment.
Identify Threat Sources and Events
Based on the information on the assets involved in the risk assessment and their vulnerabilities
coming from the previous step (cf. Section 2.3.3.3.1 above), potential threat sources and related
threat events are characterized. In this context, four main types of threat sources are described –
adversarial, accidental, structural and environmental – and each threat source represents the root
cause of a threat event. When it comes to adversarial threat sources, the capability and intent of an
adversary as well as its potential targets within the organization need to be characterized in this
task. Regarding non‐adversarial threat sources, the source’s range of effects needs to be assessed; this
has to be done in the context of and tailored to the organization’s requirements.
Identify Vulnerabilities and Predisposing Conditions
In this second task, the relation between the identified threat sources/events and the vulnerabilities
within the organization’s assets is characterized. In general, a threat source is linked to one or more
vulnerabilities, which it exploits to create a threat event, and vice versa (like the concept described in
ISO 27005, cf. Section 2.3.2.3.2 above). Due to the diverse asset infrastructure within an
organization, the number and complexity of vulnerabilities can be very high and may change on a daily
basis. Hence, besides identifying vulnerabilities, a way of constantly updating them also needs to be
defined in this task. The detail in which the vulnerabilities are assessed should be in line with the
purpose and scope of the risk assessment, as defined during the preparation. Moreover, predisposing
conditions within the organization can also have an effect on the vulnerabilities, i.e., they can foster
or hinder the potential effects (and thus the impact) the vulnerability has on the organization.
Determine Likelihood of Occurrence
After threat sources and events as well as the related vulnerabilities are given, the likelihood of an event
taking place is analyzed. In the context of the NIST risk assessment process, the likelihood is split into
two parts: on the one hand, there is the likelihood of a threat event being initiated (in the case of an
adversarial event) or occurring (in the case of a non‐adversarial event). For an adversarial event, the
capabilities, intentions and targets of an adversary are also taken into account when estimating this
part of the likelihood. On the other hand, in case a threat event really takes place, there is the
likelihood of that threat event causing adverse impacts to the organization at all. This part is
influenced by the existing vulnerabilities and predisposing conditions within the organization. In
particular, if no vulnerabilities or predisposing conditions relevant to a threat event are identified, the
likelihood of that event causing impacts is characterized as very low. How these two parts are
combined is based on the risk model and the risk assessment approach chosen during the
Preparation step (cf. Section 2.3.3.3.1).
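The two-part likelihood described above can be sketched as follows. The five-level scale matches the qualitative values used in NIST SP 800‐30; combining the two parts via the minimum is an illustrative, conservative assumption, since the standard leaves the combination rule to the chosen risk model:

```python
# Sketch of the two-part likelihood. Combining the parts via min() is an
# illustrative assumption; NIST SP 800-30 leaves the combination rule to
# the chosen risk model.

SCALE = {"very low": 0, "low": 1, "moderate": 2, "high": 3, "very high": 4}
NAMES = {v: k for k, v in SCALE.items()}

def overall_likelihood(initiation_or_occurrence: str, adverse_impact: str) -> str:
    """Combine likelihood of initiation/occurrence with likelihood of impact."""
    return NAMES[min(SCALE[initiation_or_occurrence], SCALE[adverse_impact])]

# An event very likely to be initiated but, thanks to few exploitable
# vulnerabilities, unlikely to cause adverse impact:
print(overall_likelihood("very high", "low"))  # -> low
```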
Determine Magnitude of Impact
After the estimation of the likelihood, the estimation of the magnitude of an adverse impact is
carried out. Several factors influence the magnitude of an impact: the characteristics of
the threat event (where it takes place within the organization, what triggers the event, etc.), the
parts of the organization that are affected (individuals, assets, operations, etc.), the vulnerabilities
exploited by the event and the existing controls already implemented to protect these vulnerabilities.
The combination of these factors results in the overall impact of a specific threat event.
Determine Risk
Taking the two values coming from the previous tasks, i.e., the likelihood of a threat event occurring
and the impact it will cause, a risk level for each threat event can be derived. There are several ways
to combine these two values; a definitive choice is made based on the risk model and the risk
assessment approach chosen during the preparation step (cf. Section 2.3.3.3.1). As a result, the
threat events can be ordered according to their risk level and further prioritized, i.e., the risks ranking
at the top of the list need immediate attention. Further, the results can also be visualized in a risk
matrix.
2.3.3.3.3 Communicate Results
The third step of the overall risk assessment process is dedicated to the communication and sharing
of the assessment results. Similar to the respective steps in the ISO 31000 (cf. Section 2.3.1.3.4) and
the ISO 27005 (cf. Section 2.3.2.3.5), NIST also stresses the importance of communicating the
identified risks. Hereby, it is important not just to inform the organization’s top management and
decision makers but also other organizational personnel for whom the results might be relevant
(e.g., because they are working with specific assets or can be directly affected by certain threat
events).
2.3.3.3.4 Maintain Assessment
This last step of the process implements a continuous monitoring loop, where the characteristics of
each threat event (e.g., likelihood, impact, vulnerabilities, threat sources and assets) are reassessed
over time to keep an up‐to‐date risk picture for the organization. Although in general the risk
assessment should be repeated after a specific amount of time (which is fixed by the organization
during the initial framing), changes in the organization or the threat landscape identified through the
continuous monitoring might make it necessary to initiate a new run of the risk assessment before
that. This step is also directly related to the step “Monitoring and Review”, which can be found both
in ISO 31000 (cf. Section 2.3.1.3.5) and ISO 27005 (cf. Section 2.3.2.3.6).
2.3.3.4 Alignment Profile
Since the NIST SP800‐30 describes a risk management process in a detail similar to the ISO 27005, the
process from NIST SP800‐30 scores rather high in several steps of the alignment profile (cf. Figure 10
and Appendix 0). In particular by providing a series of tables in the appendices, the NIST SP800‐30
supports the risk officers and decision makers with helpful material to identify possible sources for
threat information and to cluster threat events, to determine classes describing the likelihood and
the impact of risks as well as to evaluate the resulting risks.
However, as a major drawback, the aspects of risk treatment (including steps P.8 to P.12) are hardly
mentioned within the NIST SP800‐30. The concepts of risk treatment (or “risk response”, as it is called
in the NIST SP800‐30), risk thresholds and risk acceptance are referred to in the description of
various tasks, but without providing any detail on how to implement these activities. Hence, the score
for these steps is set to 0.5. It has to be noted that the activities of risk response are dealt with in the
“Respond” component of the overall risk management framework (as illustrated in Figure 8 above).
Details on this component can be found in NIST SP800‐39 [46], where the framework is described. It
turns out that Task 3‐1 through Task 3‐4 cover the activities of steps P.8 to P.12, but to allow a direct
comparison, these parts are omitted from the evaluation of the NIST SP800‐30 process.
Looking at the other stages besides risk treatment, the NIST process scores between 2.0 and 3.0 for
most of the process steps, indicating the highly detailed and comprehensive description of the process
(supported by the annexed tables already mentioned above). Only the first step P.1 (“Definition of
external environment”) has a low score of 1.0 due to the fact that there is no explicit step covering
the identification and description of the external environment; only implicit references thereto
are given.
[Radar chart over the process steps P.1–P.15]
Figure 10: NIST SP800‐30 Alignment Profile
2.3.4 COBIT for Risk
2.3.4.1 General Overview
Though not specifically designed for risk analysis in the beginning, COBIT (Control Objectives for
Information and Related Technology) has developed from an audit guide for IT auditors presented in
1996 (CobiT 1) to a control tool (CobiT 2), an IT Management Framework (CobiT 3), an IT governance
framework (CobiT 4.0 / 4.1) and finally to a comprehensive governance framework for enterprise IT (COBIT 5)
[49]. The publisher, the US Information Systems Audit and Control Association (ISACA) [50], the
professional association of IT auditors, IT risk managers, IT information security, governance and
cybersecurity experts, has been able to provide both content and service orientation. To symbolize
this thematic expansion, the acronyms ISACA and COBIT have stood as brands in their own right since
the introduction of Version 5 and are no longer expanded into their original long forms. As a result,
COBIT also aims to integrate other aspects and to establish itself as a higher‐level
framework to which other ICT‐based frameworks – such as ITIL [51] or ISO 27001 [40] – can connect.
In addition, ISACA presented in‐depth publications on individual aspects, particularly on risk
management. COBIT for Risk [10] is a comprehensive guide for risk managers and other stakeholder
groups concerning different risk aspects. In this sense, the general standard publication COBIT 5 [52]
is extended by special risk management aspects. This can also be interpreted as a result of the
integration of the ISACA framework RiskIT [53], which referred to the previous version COBIT 4.1.
2.3.4.2 Concept and Framework
Structurally, the five COBIT principles known from the standard publication are comprehensively
applied from the perspective of risk management [54, p. 13f] [10]:
1. Meeting the needs of stakeholders: The aim is to achieve the business goals at all levels of
the COBIT‐5 target cascade, from the stakeholder drivers and their needs to business
goals, to IT‐related goals, to concrete enabler goals. The focus lies on a (consistent)
optimization of risks across the entire target cascade chain.
2. Coverage of the entire enterprise: (Business) risk is understood as an impairment of value
generation and is thus treated as an end‐to‐end issue. All functions and processes of a
company are comprehensively addressed – not exclusively the ICT‐related elements.
3. Application of a single, integrated framework: COBIT for Risk aims to be consistent with
key risk management frameworks and standards. In this context, the above mentioned
straight‐forward integration of other frameworks is addressed.
4. Enablement of a holistic approach: All related elements of the enablers of a holistic and
systematic risk management should be cited. Again, the structural openness to a
comprehensive approach is postulated.
5. Distinction between governance and management: A distinction is made between
governance (setting the framework requirements) and management (executing them).
Figure 11: Process Enablers in COBIT for Risk [10];
(blue=core process, red=support process)
Based on these five COBIT principles, the key messages of the seven enablers for risk management
are elaborated [52]:
1. Principles, guidelines and frameworks
2. Processes
3. Organizational structures
4. Culture, ethics and behavior
5. Information
6. Services, infrastructure and applications
7. Employees, skills and competencies
Furthermore, in the appendix of COBIT for Risk detailed design proposals for these enablers are
formulated to accelerate risk governance and risk management in companies. All seven enablers are
generically viewed in the dimensions of stakeholders, goals, lifecycle, and best practices, and their
effects are monitored through metrics. The enablers can be viewed from both a risk and a risk
management perspective. The former clarifies the question of how a risk function can be integrated
and maintained in the corporate structure. The latter considers the given governance and process
requirements and tries to deduce from scenarios how typical risks can be counteracted. In these two
different perspectives, according to ISO 31000 [8], a distinction can be seen between the two
aspects: the setting up of the framework (risk function) and the effective implementation of the
individual process activities (risk management) [35]. On the enabler level, COBIT for Risk already
provides precise guidelines on which aspects should be observed as far as possible when setting up
risk management structures. Within the enablers’ processes, two core processes – APO12 "Managing
Risk" and EDM03 "Ensuring Risk Optimization" – and twelve supporting processes for risk
management can be identified based on the standard COBIT publication, as Figure 11 shows.
Accordingly, these form the substantive basic statement of the risk management framework COBIT
for Risk discussed here, also analogous to the standard publication COBIT 5. The process descriptions
from the general standard COBIT always contain recurring structural elements for identification,
description, purpose, target cascade, goals and metrics, responsibilities in form of a RACI
(Responsible‐Accountable‐Consulted‐Informed) matrix and as process practices, inputs, outputs,
activities and reference material.
The process descriptions given in the appendix to COBIT for Risk explicitly extend the standard
processes to include risk‐specific goals and metrics as well as process practices, inputs, outputs and
activities, so they should be seen together as a whole. The process steps are represented by so‐called
governance practices (EDM03) or management practices (APO12). The process has to cover the objectives
described in these practices; how to achieve them is not restrictively defined.
The risk management publication focuses on scenario development by proposing example scenarios
and their structured documentation, but strongly recommends carrying out a creative process. The
formulated scenarios are developed both top‐down, respecting the overall business goals, and
bottom‐up, based on the risk scenarios proposed by COBIT for Risk, which are used to
implement the core risk processes. The goal is to develop a treatable set of realistic, reasonably
categorized scenarios to work with. The analysis of multiple and cascading scenarios is possible.
Influential risk factors that affect the frequency and/or impact should be categorized. In addition, the
risks must also be aggregated to remain manageable and to be able to be visualized in a risk matrix.
Figure 12 shows this fundamental relationship. Thereafter, as part of a risk‐response workflow,
visualized in Figure 13, possible countermeasures are defined for the example scenarios,
consequently as management practices, to remain conceptually consistent with the general COBIT
standard publication.
Figure 12: Risk scenario development based on COBIT for Risk [10]
Figure 13: Risk Response Workflow [10]
In addition, comparisons with other selected frameworks regarding concepts and terminology are
discussed: ISO 31000, ISO 27005 and COSO Enterprise Risk Management (ERM). Thus, COBIT for Risk
actively addresses the problem of different terms that sometimes have an identical
meaning.
ISACA launched COBIT for Risk to provide a specialization of the COBIT 5 framework for risk
management; similar specializations using the same methodology are already available for
Information Security and for Assurance. In the sense of the comprehensive approach of COBIT 5, the
general standard framework is interpreted for risk management, practically instantiated with risk
management specifications and starting points for the development of risk management structures,
such as assistance in the development of scenarios. Nevertheless, the framework is only a working basis:
as discussed before, the essential basic structures must be developed and built according to the
specific situation of the organization.
2.3.4.3 Process Description
2.3.4.3.1 Collect Data
In the first management practice of the risk management process, the relevant data to implement
the process needs to be collected. This includes the precise definition of an approach for how to collect,
classify and analyze this data. As part of this approach, data from the current internal and external
processes is gathered, and historical data on risks and their impacts – from within the organization but also
from external sources (e.g., common risk databases for industry sectors) – is analyzed.
2.3.4.3.2 Analyze Risk
Based on the information gained from the first management practice, the second step revolves around the consolidation of this information. Therefore, IT risk scenarios (including their respective frequency and potential loss) are compiled and updated regularly. In particular, COBIT for Risk considers possible cascading or coincidental threats that might occur along with a specific scenario. Furthermore, the options for security strategies to counter individual risk scenarios and their effectiveness are evaluated. This also influences the implementation of risk responses in case the risk is above the predefined threshold. To implement this step properly, the criticality of the organization’s assets, the risk factors as well as the depth of the risk analysis need to be defined formally.
2.3.4.3.3 Maintain a Risk Profile
The third management practice incorporates the information on risks collected in the second step, i.e., their expected frequency, potential impact and available countermeasures, according to their relation to the overall infrastructure. In more detail, an inventory of relevant business processes, personnel, important facilities and infrastructure is created, and their dependencies on IT services and IT infrastructure, which are critical for the proper execution of the business processes, are identified. The risk scenarios are then aggregated into risk profiles based on these functional areas, a set of risk indicators is defined to measure the current risk as well as risk trends, and these indicators are monitored regularly to maintain an up‐to‐date overview of potential risk scenarios in the organization.
2.3.4.3.4 Articulate Risk
The identified IT risk scenarios are reported to the relevant stakeholders within the organization to
determine appropriate risk responses. This information flow should indicate the most probable and
the worst‐case scenarios as well as significant concerns regarding reputation, legal or regulatory issues, ideally described using ranges of loss together with a confidence level for that estimation. In addition, the availability and effectiveness of controls to counter these risks, as well as existing gaps, inconsistencies and redundancies within the risk profiles, should be communicated. Further, the results from internal audits and external assessments are integrated into the risk profile to maintain an objective view on the identified risks.
2.3.4.3.5 Define a Risk Management Action Portfolio
Given the risks singled out in the third step (cf. Section 2.3.4.3.3) and reported in the fourth management practice (cf. Section 2.3.4.3.4), opportunities and strategies to reduce these risks are described in the fifth step of the process. Therefore, an inventory of available controls is compiled, which supports the reduction of risks below the organization’s predefined risk tolerance and risk appetite thresholds. The controls are linked to specific IT risks, describing their effect on a given risk profile as well as the related costs and benefits. The implementation of these controls is planned in dedicated projects based on the various risk profiles they are targeting as well as on the organizational and functional areas they are applied in.
2.3.4.3.6 Respond to Risk
In the final management practice of the process, the identified controls to reduce risks are planned in detail and implemented in the organization. In this context, not only are the current exposures and risks checked against the predefined thresholds, but the implementation plans for the countermeasures are also evaluated for their actual effectiveness. In addition to the implementation of the risk response plans, past incidents and their impacts are once more assessed in this step to determine additional requirements and improvements to current risk response activities.
2.3.4.4 Alignment Profile
In contrast to the three standards (i.e., ISO 31000, ISO 27005 and NIST SP800‐30) described in the previous sections, COBIT for Risk represents a risk management framework which is very application‐oriented and provides detailed hands‐on advice on the specific steps of the risk management process. Therefore, the alignment scores are the highest so far, ranging between 2.0 and 2.5 for all 15 steps of the ENISA Risk Management Process (cf. Figure 14 and Appendix 0 below). As described in the previous Section 2.3.4.2 and Section 2.3.4.3, COBIT for Risk has a strong focus on gathering information from different functional areas of an organization as well as from different organizational views and on integrating this information into the context establishment, risk assessment, continuous monitoring and communication to the decision makers. This is reflected in the high scores of 2.5 for the respective steps in Stages A, B, D and E of the ENISA process, as shown in Figure 14. Although there are some minor shortcomings in the context of risk treatment and risk acceptance, leading to a score of 2.0, COBIT for Risk still provides clear instructions on how to implement these steps within an organization.
Figure 14: COBIT for Risk Alignment Profile
2.3.5 OCTAVE Allegro
2.3.5.1 General Overview
The original framework that formed the basis of the OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation) method was published in September 1999 by the Software Engineering Institute (SEI) at Carnegie Mellon University [55], [56]. The OCTAVE method was designed to support the Department of Defense (DoD) in addressing challenges regarding the privacy and security of personal data in connection with the Health Insurance Portability and Accountability Act (HIPAA) [57]. The method is performed in a series of workshops conducted by an analysis team drawn from business units throughout the whole target organization. The intended audience is large organizations with 300 or more employees, a multilayered hierarchy and their own IT infrastructure.
In March 2005, OCTAVE‐S was published with the intent to offer a method for small manufacturing organizations [58]. The current version of OCTAVE‐S is specifically designed for organizations of about 100 people or less. As a distinctive feature, the number of workshops is reduced drastically, since OCTAVE‐S builds upon the knowledge of an expert analysis group whose members have in‐depth knowledge of the organization’s information‐related assets, security requirements, threats and security measures. Additionally, the analysis of the organization’s information infrastructure is less extensive compared to the full OCTAVE method, because small enterprises usually do not have the resources to perform in‐depth analyses, which often represented a hindrance to adopting the OCTAVE method.
Finally, OCTAVE Allegro was published in May 2007 [11]. This approach differs from the OCTAVE method and OCTAVE‐S by focusing primarily on information assets and how they are used within an organization. It can be performed in a workshop style, but it is also well suited for use by individuals without extensive organizational involvement.
2.3.5.2 Concept and Framework
The original OCTAVE method is split into three phases, which cover the organizational view together
with the technological view, similar to the three‐tier structure of the NIST (cf. Section 2.3.3.2 above),
as well as the development of the general strategy. In the first phase, the organization’s information‐
related assets are identified. Based on these assets, a list of potential threats and the currently
implemented security measures to counter them is compiled. By determining the assets with the
highest importance to the organization, the necessary security requirements are identified which
need to be respected or implemented to ensure the organization’s ongoing success. Phase 2 then
focuses on the more technical aspects, i.e., performing a threat analysis on the technical aspects of
the assets identified in Phase 1 and collecting technical vulnerabilities for the organization’s key
components. The results from Phase 1 and Phase 2 are then used in the third phase to analyze the potential risks arising from the vulnerabilities, identify protection strategies to account for and reduce these risks and thus implement a risk mitigation plan for the organization. For all three phases, an extensive and progressive series of workshops is carried out within the organization to make sure that the information on all assets from the different perspectives required for implementing the OCTAVE method is gathered appropriately.
Figure 15: The three phases of the OCTAVE method, following a Preparation activity: Phase 1 – Organizational View (assets, threats, current practices, organizational vulnerabilities, security requirements), Phase 2 – Technological View (key components, technical vulnerabilities), Phase 3 – Strategy and Plan Development (risks, protection strategy, mitigation plans)
Further, OCTAVE Allegro already has a very practical and hands‐on setup, since it comes with predefined worksheets and questionnaires to directly support the analyst (or the team of analysts).
2.3.5.3 Process Description
OCTAVE Allegro is conducted in eight steps which are organized in four phases. In Phase 1 – Establish Drivers – internal risk measurement criteria are established that will be used in the final step to evaluate the effects of a risk on the mission and business objectives of an organization; this is done by defining a ranking according to the significance of the different areas of the organization. In the following phase – Profile Assets – a collection of all information assets within the organization, i.e., software, hardware, employees etc., is built and evaluated together with their appropriate containers. In Phase 3 – Identify Threats – threats to the information assets in the context of their containers are determined. This is done by analyzing the areas of concern, which comprise a set of real‐life scenarios (i.e., the threat scenarios) that might affect the organization. In the final phase – Identify and Mitigate Risk – risks to the critical information assets are derived from the threat scenarios and appropriate mitigation approaches are initiated.
Step 1 – Establish Risk Measurement Criteria
Step 2 – Develop Information Asset Profile
Step 3 – Identify Information Asset Containers
Step 4 – Identify Areas of Concern
Step 5 – Identify Threat Scenarios
Step 6 – Identify Risks
Step 7 – Analyze Risks
Step 8 – Select Mitigation Approach
Figure 16: OCTAVE Allegro Roadmap [11]
2.3.5.3.1 Establish Risk Measurement Criteria
In the first step, all areas of impact to an organization are identified. The methodology requires at
least five areas which are covered by worksheets attached to the OCTAVE Allegro documentation.
These five areas are:
Reputation and customer confidence
Financial
Productivity
Safety and health
Fines and legal penalties
In a second activity, the importance of the specified areas of the organization is analyzed; the areas are ranked and assigned a value to allow a risk scoring.
2.3.5.3.2 Develop Information Asset Profile
The whole methodology of OCTAVE Allegro is focused on information assets. An information asset is
defined as information or data that is of value to the organization. With Step 2, a collection of
information assets is built on which an assessment might be performed. This is done in dedicated
workshops, by brainstorming and with the help of questionnaires which are included in the standard.
After finishing the collection, it is reviewed to identify those assets that are critical to the organization. This can be done by answering the following four key questions:
Would it have an adverse impact on the organization if the asset is disclosed to unauthorized people?
Would it have an adverse impact on the organization if the asset is modified without authorization?
Would it have an adverse impact on the organization if the asset is lost or destroyed?
Would it have an adverse impact on the organization if the access to the asset is interrupted?
Assets meeting at least one of these criteria are considered critical and a risk assessment is performed on them. The following activities focus on getting detailed information about those selected critical information assets using a worksheet, including the rationale for the selection, a more detailed description, ownership and details on confidentiality, integrity and availability. This process is repeated for each single information asset.
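The four key questions above amount to a simple any-of check. The following sketch illustrates this in Python; the field names and example assets are invented for illustration and are not taken from the OCTAVE Allegro worksheets:

```python
# Criticality screening per the four key questions: an asset is critical
# if at least one question is answered with "yes". Field names below are
# illustrative placeholders, not official worksheet fields.

def is_critical(asset: dict) -> bool:
    """An asset is critical if at least one key question is answered 'yes'."""
    questions = (
        "impact_if_disclosed",    # disclosure to unauthorized people
        "impact_if_modified",     # unauthorized modification
        "impact_if_lost",         # loss or destruction
        "impact_if_interrupted",  # interrupted access
    )
    return any(asset.get(q, False) for q in questions)

assets = [
    {"name": "customer database", "impact_if_disclosed": True},
    {"name": "public brochure"},
]
critical = [a["name"] for a in assets if is_critical(a)]
```

Only the assets that pass this screening proceed to the detailed worksheet activities described above.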
2.3.5.3.3 Identify Information Asset Containers
Information assets must be contained somehow, i.e., data is stored, transported and processed on physical devices. Within OCTAVE Allegro, such information asset containers can be hardware, software, application systems, servers and networks. Furthermore, paper that information assets are written on, like file folders or whiteboards, can also be considered as such containers. Additionally, containers also include people who may carry information such as intellectual property or any kind of sensitive information. Hence, Step 3 collects information asset containers using an Information Asset Risk Environment Map. This is supported by worksheets that offer details on technical containers, physical containers and people.
2.3.5.3.4 Identify Areas of Concern
In the context of OCTAVE Allegro, an area of concern is defined as a descriptive statement of a real‐world condition or situation that could affect an information asset in an organization. Step 4 is usually covered by brainstorming about possible conditions or situations that could threaten one or more information assets. These areas of concern form the basis for the identification of threat scenarios in the following step. At first, the Information Asset Risk Environment Maps (developed in the previous Step 3) are used to find the potential areas of concern. Each area of concern is documented on one Information Asset Risk Worksheet. Afterwards, more specific information on the threat, i.e., actor, means, motive, outcome etc., is gathered and documented on the Information Asset Risk Worksheet. Finally, the impact on the security requirements is documented (i.e., considering how the information asset’s security requirements could be breached).
2.3.5.3.5 Identify Threat Scenarios
In this step, additional threat scenarios that have not been covered by the areas of concern defined in the previous step are identified. To support the identification of these threat scenarios, four threat trees are included:
Human actors using technical means
Human actors using physical access
Technical problems, i.e., hardware defects, software defects, malicious code
Other problems, i.e., problems outside the control of the organization, including natural disasters
For each information asset, each branch of the trees is traversed to ensure thorough coverage and identification of threats. Within this step, the threat scenario questionnaires are used to gain information on threat scenarios besides those derived from areas of concern in the previous step. There is one questionnaire for each type of container (technical, physical and people). The Information Asset Risk Environment Maps created in Step 3 are used as a guideline.
All identified scenarios are then expanded into threat scenarios by analyzing further properties (actor, access/means, motive, outcome etc.) of the threat. For every new scenario, one new Information Asset Risk Worksheet is recorded and sections 1 to 5 of the worksheet are completed. Optionally, the probability can be added to the description.
2.3.5.3.6 Identify Risks
Within this step, the impact of a specific threat scenario on the organization is examined. This is done by determining how every threat scenario would impact the organization using the Information Asset Risk Worksheet. At least one consequence must be documented on each Information Asset Risk Worksheet. After finishing this step, all the information required for the classical risk equation
Threat (condition) + Impact (consequence) = Risk
is available. In terms of OCTAVE Allegro, this translates to
Risk = Threat (the output of Step 4 “Identify Areas of Concern” and of Step 5 “Identify Threat Scenarios”) + Impact (the output of Step 6 “Identify Risks”)
2.3.5.3.7 Analyze Risks
In Step 7, the impact of a threat to the organization is qualitatively measured by computing a risk score for each risk to each information asset. These results form the basis for the decisions on mitigation in the final step. There are two activities which must be performed for each Information Asset Risk Worksheet:
First, the consequence statement recorded on the Information Asset Risk Worksheet in the previous step is analyzed. Using the risk measurement criteria as a guideline, the consequence for each impact area (defined in Step 1) is rated as “high”, “medium” or “low”. If there is more than one consequence statement defined, all of them need to be taken into account.
Second, a score is computed by multiplying the impact area rank (defined in Step 1) with the impact value and recorded on the Information Asset Risk Worksheet. This is done for every impact area.
After that, the score column is summed up, giving the total risk score.
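The two scoring activities can be sketched as a small computation. The numeric impact values assumed here (high = 3, medium = 2, low = 1) and the example ranks are illustrative assumptions, not values prescribed by this document:

```python
# Step 7 risk scoring sketch: for each impact area, multiply the area's
# rank (from Step 1) by the numeric value of the consequence rating, then
# sum over all areas to obtain the total risk score.

IMPACT_VALUES = {"high": 3, "medium": 2, "low": 1}  # assumed numeric mapping

def total_risk_score(area_ranks: dict, ratings: dict) -> int:
    """Sum of rank * impact value over all rated impact areas."""
    return sum(area_ranks[area] * IMPACT_VALUES[rating]
               for area, rating in ratings.items())

# Example: five impact areas ranked 1 (least important) to 5 (most important).
ranks = {"reputation": 5, "financial": 4, "productivity": 3,
         "safety": 2, "fines": 1}
ratings = {"reputation": "high", "financial": "medium",
           "productivity": "low", "safety": "low", "fines": "medium"}
score = total_risk_score(ranks, ratings)  # 5*3 + 4*2 + 3*1 + 2*1 + 1*2 = 30
```

The resulting totals make risks comparable across information assets when pooling them in Step 8.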
2.3.5.3.8 Select Mitigation Approach
Within this final step, the handling of every single risk is defined. Therefore, it is suggested to split the risks into pools with common characteristics (i.e., the risk score and, optionally, the probability). In the next activity, a mitigation approach is assigned to each risk, e.g., by simply using a table. According to the result, risks are mitigated, deferred or accepted. It is important to keep the unique operational circumstances of the organization in mind, such that the table is mainly used to offer recommendations and a starting point for further discussions. It might also be appropriate to transfer a risk to another party. Finally, for every risk profile that is chosen to be mitigated, a mitigation strategy must be developed.
2.3.5.4 Alignment Profile
Analyzing the alignment profile for the risk management process described in OCTAVE Allegro as given in Figure 17, we can directly see that the topics covered by the process are quite polarized. On the one hand, the ENISA process’ Stage B “Risk Assessment” (covering steps P.5, P.6 and P.7) as well as P.4 “Formulation of risk criteria”, P.8 “Identification of options” and P.9 “Development of action plan” are very well described and reach a high score between 2.0 and 3.0 (cf. also Appendix 0). Activities implementing these steps are explicitly described within the eight steps of the risk management process (cf. Section 2.3.5.3 above) and are supported by templates for worksheets and questionnaires provided in [11]. On the other hand, a specific approval and implementation of the risk treatment plan (steps P.10 and P.11) and the monitoring and communication of the process outcomes are not explicitly covered in the description of the OCTAVE Allegro process, thus receiving a score of 0.0 in the alignment profile.
Figure 17: OCTAVE Allegro Alignment Profile
2.4 Evaluation and alignment profile of MITIGATE methodology
2.4.1 General Overview
As a core result of the MITIGATE project, the MITIGATE Risk Management Methodology has been developed. It aims at estimating the cyber risks for all assets of all business partners involved in a maritime supply chain service (SCS) and represents the basis for the MITIGATE system. The methodology has already been described in full detail in Deliverable D2.2 [12]. Hence, we provide just a very brief overview here to refresh the core concepts and the individual steps.
2.4.2 Concept and Framework
The MITIGATE methodology is compliant with the main standards for port security, the ISPS Code
[59] (IT Section), ISO 27001 [40] and ISO 28001 [60], as well as the main standards for IT risk
management, ISO 27005 [9] and ISO 31000 [35]. Several characteristics of these guidelines can also
be found in the MITIGATE methodology, as already discussed in Deliverable D2.2 [12] (and thus we
won’t go into detail here).
One core concept of the MITIGATE Methodology, which extends the existing standards and guidelines, is its collaborative approach. The MITIGATE Methodology emphasizes the collaboration of the various stakeholders in the identification, assessment and mitigation of risks associated with cyber‐security assets and international supply chain processes. This collaborative approach boosts transparency in risk handling by the various stakeholders, while also generating unique evidence about risk assessment and mitigation. Another strong emphasis of the MITIGATE Methodology lies on
identifying and estimating the potential cascading effects within the supply chain. Therefore, the interrelations and dependencies between the cyber assets of each stakeholder are analyzed and the propagation of incidents through this asset network is assessed. A third important concept within the MITIGATE Methodology is the application of game‐theoretic algorithms to identify optimal risk mitigation strategies. With this approach, a sub‐set of the available countermeasures is identified which provides an optimal defense against the worst‐case damage inflicted by an adversary.
2.4.3 Process Description
In accordance with the existing standards and guidelines in the maritime domain (i.e., the above mentioned ISPS Code, ISO 27001, ISO 27005, ISO 28001 and ISO 31000), the MITIGATE methodology implements a six‐step approach towards risk management for port infrastructures and their supply chains (cf. Figure 18), representing the main steps also described in the above standards.
Figure 18: Overview on the main steps and sub‐steps in the MITIGATE Methodology
2.4.3.1 SCS Analysis
In this first step, the scope of the risk assessment is defined. Therefore, the business partners involved in the SCS under examination are identified. All the business partners agree on the goals and the desired outcome of the risk assessment. Further, the SCS under examination is decomposed and inspected in detail by the risk assessors of the business partner who initiated the risk assessment. They identify the participants of the SCS involved from their perspective, i.e., within their organizations. For each participant of the risk assessment, the main cyber and/or physical processes (i.e., controlled/monitored by a cyber system) that comprise the examined SCS are collected. The MITIGATE methodology focuses in particular on the inter‐dependencies among these cyber assets. Therefore, these interdependencies are further classified based on different types (e.g., whether the assets are installed on the same system, communicate over network interfaces, etc.) describing the relationship between the cyber assets in more detail. The SCS analysis results in a list of all business partners together with their cyber assets relevant for the SCS. Further, a graph of all cyber assets connected based on their interdependencies is created.
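The resulting asset graph can be sketched as a typed adjacency structure; the asset names and dependency types below are invented examples, not terms defined by the methodology:

```python
# Sketch of the SCS asset graph: cyber assets are nodes and typed
# interdependencies are directed edges. A simple traversal then yields the
# set of assets an incident at one node could propagate to.

from collections import defaultdict

graph = defaultdict(list)  # asset -> [(dependent asset, dependency type)]

def add_dependency(src: str, dst: str, dep_type: str) -> None:
    graph[src].append((dst, dep_type))

add_dependency("terminal operating system", "port community system",
               "communicates_with")
add_dependency("port community system", "database server", "hosted_on")

def reachable(start: str) -> set:
    """Assets a failure at `start` could propagate to (transitive closure)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt, _ in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

Such a traversal is the structural basis for the cascading-effect analysis in the later vulnerability, impact and risk steps.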
2.4.3.2 Cyber Threat Analysis
Based on the list of cyber assets created in the first step, all potential threats related to these cyber assets are identified in the second step of the MITIGATE methodology. Due to today's rapidly changing threat landscape, the list of threats needs to be as exhaustive and up‐to‐date as possible. To achieve that, the MITIGATE methodology foresees the integration of multiple sources of information, i.e., online threat repositories like the National Vulnerability Database (NVD) [61], crowd sourcing and social media as well as the business partners’ experts. This makes the methodology highly adaptive to novel attack strategies and attacker behavior. The multitude of different data sources helps to increase the quality of the whole risk assessment.
When the list of relevant threats is established, the likelihood of occurrence is estimated for each of them. Also for this step, various sources of information are combined: information from online repositories and social media is taken into consideration as well as historical data and expert opinions. Instead of just using one of these sources (e.g., relying only on historical data or expert opinions), this approach offers the advantage of integrating a more diverse and complete overview of the topic. Thus, the assessor obtains a more realistic estimation of the threat likelihood. The resulting likelihoods are expressed using a semi‐quantitative, five‐tier scale and all the gathered information is integrated. Finally, a Threat Level (TL) based on this likelihood is assigned to each threat.
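As an illustration of how per-source likelihood estimates might be condensed into a five-tier Threat Level, the following sketch averages the estimates and bins the result. The simple averaging and the equal-width tier boundaries are assumptions for the sketch, not the aggregation actually defined by the methodology:

```python
# Illustrative aggregation of likelihood estimates from several sources
# (repositories, historical data, expert opinion) into a semi-quantitative
# five-tier Threat Level. Mean aggregation and 0.2-wide tiers are assumed.

FIVE_TIER = ["very low", "low", "medium", "high", "very high"]

def threat_level(estimates: list[float]) -> str:
    """Map the mean of per-source likelihoods in [0, 1] onto five tiers."""
    mean = sum(estimates) / len(estimates)
    tier = min(int(mean * 5), 4)  # 0.0-0.2 -> tier 0, ..., 0.8-1.0 -> tier 4
    return FIVE_TIER[tier]

# Three sources (e.g., NVD statistics, incident history, expert opinion):
tl = threat_level([0.7, 0.5, 0.9])  # mean 0.7 -> "high"
```

A weighted combination (e.g., trusting expert opinion more than social media signals) would be a natural refinement of the same scheme.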
2.4.3.3 Vulnerability Analysis
Similar to the identification of threats in the previous step, in this step a list of vulnerabilities of the cyber assets of the SCS under examination is compiled. In the context of the MITIGATE methodology, a vulnerability is understood as a defective state of a cyber asset due to a poor configuration, the lack of security patching, etc. A threat can manifest in the SCS by exploiting a vulnerability of one of the involved cyber assets.
The MITIGATE methodology differentiates between two main types of vulnerabilities: confirmed vulnerabilities and potentially unknown or undisclosed vulnerabilities. In more detail, vulnerabilities which are already known in the community and are listed in online repositories or by specific Computer Emergency Response Teams (CERTs) are understood as confirmed vulnerabilities. On the other hand, there are vulnerabilities in software systems which are not publicly known yet. Such unknown or undisclosed vulnerabilities are more dangerous, since security experts are not aware of them but they can be (easily) exploited by adversaries.
A core feature of the MITIGATE methodology is to take these unknown and/or undisclosed
vulnerabilities into account. In this context, the data coming from various information sources (online
repositories, social media, expert knowledge, etc.) is collected and processed to estimate the
existence of unknown vulnerabilities. In more detail, the analysis is carried out over all time scales in the available dataset (e.g., by empirically characterizing the distribution of a vulnerability’s lifespan) or by determining the number of vulnerabilities publicly announced in a specific period of time (e.g., using the rate of vulnerability announcements in the NVD).
To characterize both confirmed and unknown/undisclosed vulnerabilities within one methodology
and make them comparable, the Common Vulnerability Scoring System (CVSS) [16] is applied. For
each vulnerability, the Individual Vulnerability Level (IVL) is specified by assessing the Access Vector,
Access Complexity and Authentication. The scores for these three values come from the online NVD database and are mapped onto a qualitative, five‐tier scale for further processing. The details of this mapping are given in Section 5.1.
Additionally, the MITIGATE methodology is not only looking at the immediate effects of an attack
exploiting a specific vulnerability but is also taking the respective cascading effects into account.
Therefore, the concepts of a Cumulative Vulnerability Level (CVL) and a Propagated Vulnerability
Level (PVL) are introduced. They are described in detail in the following Section 5.2. Accordingly, the
vulnerability analysis results in a list of all vulnerabilities together with their respective IVL, CVL and
PVL.
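Under the CVSS v2 specification, the three metrics Access Vector, Access Complexity and Authentication carry fixed numeric weights that combine into the exploitability sub-score. The sketch below computes that sub-score and bins it onto a five-tier scale; the binning is a hypothetical illustration only, since the actual mapping is the one given in Section 5.1:

```python
# CVSS v2 exploitability metrics (weights per the CVSS v2 specification)
# combined into the exploitability sub-score, then binned onto a
# five-tier Individual Vulnerability Level (IVL). The 2-point-wide tiers
# are an assumption for illustration.

ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}

def exploitability(av: str, ac: str, au: str) -> float:
    """CVSS v2 exploitability sub-score, range 0..10."""
    return 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]

def ivl(av: str, ac: str, au: str) -> str:
    score = exploitability(av, ac, au)
    tiers = ["very low", "low", "medium", "high", "very high"]
    return tiers[min(int(score / 2), 4)]  # 2-point-wide tiers (assumed)

# A remotely exploitable flaw with low complexity and no authentication:
level = ivl("network", "low", "none")
```

The same qualitative scale then carries over unchanged to the impact analysis in the next step.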
2.4.3.4 Impact Analysis
After the vulnerability analysis done in the previous step, the MITIGATE methodology is also looking
at the potential impact an exploitation of these vulnerabilities might have. To stay consistent with the
vulnerability analysis, the CVSS (more specifically, the three security criteria Confidentiality, Integrity
and Availability) is applied for assessing the impact. Accordingly, the scores for the security criteria
are also mapped onto the same qualitative, five‐tier scale as the vulnerabilities.
Furthermore, the notion of cascading effects is carried on for the impact analysis, resulting in the
concepts of Individual Impact Level (IIL), Cumulative Impact Level (CIL) and Propagated Impact Level
(PIL).
2.4.3.5 Risk Assessment
The risk assessment in the MITIGATE methodology is loosely based on the general approach risk = likelihood × impact [62]. Hence, in our context the Threat Level (as described in Step 2, Section 2.4.3.2), the Vulnerability Level (as described in Step 3, Section 2.4.3.3) and the Impact Level (as described in Step 4, Section 2.4.3.4) contribute to the risk level. Further carrying on the notion of cascading effects, the MITIGATE methodology describes three risk levels: Individual Risk Level (IRL), Cumulative Risk Level (CRL) and Propagated Risk Level (PRL). The Individual Risk Level is computed by combining the Threat Level, the Individual Vulnerability Level and the Individual Impact Level. The other two risk levels (CRL and PRL) are computed accordingly. The overall result is then again mapped onto a qualitative, five‐tier scale.
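As a sketch of how the three levels might combine into an IRL on the five-tier scale, the following assumes a multiplicative aggregation condensed via the geometric mean; this aggregation is an assumption for illustration, while the exact formula is the one defined in Deliverable D2.2:

```python
# Illustrative Individual Risk Level (IRL) computation. TL, IVL and IIL are
# tier indices 1..5; a multiplicative combination reduced by the geometric
# mean (an assumed aggregation, not the formula from D2.2) maps the result
# back onto the five-tier scale.

TIERS = ["very low", "low", "medium", "high", "very high"]

def individual_risk_level(tl: int, ivl: int, iil: int) -> str:
    """Geometric mean of the three tier indices, rounded to a tier 1..5."""
    tier = round((tl * ivl * iil) ** (1 / 3))
    return TIERS[tier - 1]

irl = individual_risk_level(tl=4, ivl=5, iil=3)  # geometric mean ~3.9 -> "high"
```

The CRL and PRL would follow the same pattern with the cumulative and propagated levels substituted for the individual ones.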
2.4.3.6 Risk Mitigation
In the final step of the MITIGATE methodology, the main results of the risk assessment are compared
against specific thresholds, which have been set and agreed by all business partners. If some of the
results exceed these predefined thresholds, additional security controls need to be implemented by
the business partners and by the SCS (as a whole) to lower the respective risk levels. To identify the
best choice of mitigation actions out of a set of possible controls, a game‐theoretic approach is applied. This represents a mathematically sound method to minimize the expected damage caused by an attack that exploits multiple vulnerabilities.
To formalize the game, the possible actions taken by the adversary (i.e., a malicious party performing an attack) and the defender (i.e., all business partners in the supply chain) need to be identified. Any combination of these attack and defense strategies yields a specific damage (i.e., the risk level), which is interpreted as the respective payoff for this combination. Minimizing over all these damages
(i.e., over the game's payoff matrix) leads to the three main outcomes of this step: an optimal attack strategy, an optimal defense strategy and the maximum risk level for the case that both the attacker and the defender follow their optimal strategies.
The optimal defense strategy indicates which mitigation actions should be chosen by all the business partners to minimize the damage to the entire SCS. Due to the mathematical foundation of game theory, it can be shown that the business partners do not have to change their defense strategy even if the adversary deviates from the optimal attack strategy; such a deviation by the adversary only manifests in a lower maximum risk level if the defender plays the optimal strategy.
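The minimization over the payoff matrix can be sketched for pure strategies as follows; the strategy names and damage values are invented, and the methodology itself may additionally optimize over mixed (randomized) strategies:

```python
# Minimax sketch of the game-theoretic risk mitigation step: given a payoff
# matrix of damages for every (defense, attack) strategy pair, the defender
# picks the strategy that minimizes the worst-case damage. Restricted to
# pure strategies to keep the sketch simple.

def minimax_defense(payoff: dict) -> tuple:
    """payoff[defense][attack] = expected damage (risk level)."""
    best = min(payoff.items(), key=lambda kv: max(kv[1].values()))
    defense, row = best
    return defense, max(row.values())  # strategy and its worst-case damage

payoff = {
    "patch critical systems": {"malware": 2, "phishing": 4},
    "segment the network":    {"malware": 3, "phishing": 3},
}
defense, worst = minimax_defense(payoff)  # ("segment the network", 3)
```

The returned worst-case value corresponds to the maximum risk level mentioned above: whatever the adversary plays, the damage cannot exceed it as long as the defender keeps the computed strategy.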
2.4.4 Alignment Profile
The MITIGATE Methodology is specifically designed to fit to the requirements of maritime supply
chains and to provide hands‐on advice for implementing a risk management process in this area. All
the steps in the methodology are described quite thoroughly, providing detailed instructions on how
to implement the respective steps. Thus, the MITIGATE Methodology reaches scores between 2.5 and
3.0 in the alignment profile (cf. Figure 17 and Appendix 0 below). Since the methodology puts
considerable effort into identifying all cyber assets relevant to a given SCS as well as all their interdependencies
among the individual business partners, the description of the internal and external context (covered
in the steps P.1 and P.2) is exhaustive. The context of the risk management approach together with
the relevant risk criteria are implicitly specified in the preliminary Step 0 “Scope of the Supply Chain
Risk Assessment” of the methodology (which is not given in Figure 17 but is described in Deliverable
D2.2) and therefore have a slightly lower score of 2.5.
Stage B of the ENISA Risk Management Process (covered by steps P.5 “Identification of risks”, P.6
“Analysis of relevant risks” and P.7 “Evaluation of risks”) is also the core of the MITIGATE
Methodology and, consequently, it reaches almost the full score for this part. As can be seen from
the details of the evaluation form in Appendix 0, the methodology exceeds the foreseen activities in
those steps by defining the Individual, Cumulative and Propagated Vulnerability, Impact and Risk
Level, respectively. However, since the risk evaluation is only given implicitly in steps 5.1 ‐ 5.3 of the
MITIGATE Methodology (cf. Figure 16 above), the score for step P.7 is reduced to 2.5. Due to the
application of the game‐theoretic framework for identifying optimal mitigation strategies, Stage C
“Risk Treatment” also scores 3.0 in three out of the five steps. More precisely, the methodology
supports the security officer in identifying the available counter measures (according to step P.8),
e.g., due to information coming from the NVD, developing an action plan (according to step P.9),
which is based on the output of the game‐theoretic framework, and how to implement this action
plan (according to step P.11), i.e., the output of the game‐theoretic framework contains precise
information on when and how often to perform the given mitigation actions. However, the methodology still
relies on the security officer's decision to approve the action plan (cf. step P.10) and to identify
the residual risk (cf. step P.12), such that only a score of 2.5 is given to these steps.
The final steps, i.e., P.13 “Risk acceptance”, P.14 “Risk monitoring and reporting” and P.15 “Risk
communication, awareness and consulting”, equally reach a score of 2.5. This is due to the fact
that, on the one hand, accepting a residual risk is up to the security officer’s discretion. On the other
hand, the aspects of monitoring the results of the methodology, summarizing reports and
communicating these results to the decision makers are implemented and thus implicitly covered by
the MITIGATE tool and not the methodology itself. Hence, the full score is not reached for these last
steps.
Figure 19: MITIGATE Methodology Alignment Profile (radar chart over the ENISA process steps P.1–P.15)
2.5 Comparison of the Alignment Profiles
When comparing the alignment profiles of the existing standards and guidelines discussed here
(which are shown in Figures 4, 7, 10, 14 and 17 above) with the alignment profile of the MITIGATE
methodology (depicted in Figure 19), we can directly see that the methodology ranks among the
top‐rated frameworks (cf. also Figure 20 below for a concise overview of all different alignment profiles).
This is mainly due to the fact that the methodology is geared towards the ISO 31000 and ISO 27005
(amongst other guidelines) and extends these standards with specific algorithms implementing the
general steps of risk management. Because of these extensions, the MITIGATE methodology reaches
higher scores compared to the ISO 31000 and ISO 27005 for almost all steps of the ENISA Process.
Only COBIT for Risk, a rather extensive and process‐driven framework, has similarly high scores since
it also provides very detailed descriptions for all process steps of risk management. The other
guidelines, i.e., OCTAVE Allegro and the NIST SP800‐30 process, have strengths in scoping and risk
assessment, where they even exceed the score of the MITIGATE methodology for some steps, but show
weak points when it comes to risk treatment and risk acceptance; the latter, for example, is barely
mentioned at all in the NIST SP800‐30 process description (as described in Section 2.3.3.4 above).
Due to its special focus on risk mitigation and risk treatment (which correlates to Stage C and steps
P.8 to P.12 of the ENISA Process), the MITIGATE Methodology scores better than all the other
standards in this area. The application of game‐theoretic algorithms in this part (cf. Step 6 “Risk
Mitigation” of the methodology as depicted in Figure 18 and described in Section 2.4.3.6 above)
explicitly accounts for the main activities of risk treatment. Although other frameworks like COBIT for
Risk or ISO 27005 give advice on which activities are required to perform risk treatment, they
do not provide details on how to realize these activities in practice. With the three sub‐steps S6.1
“Attack Strategies & Defense Strategies”, S6.2 “Payoff Estimation” and S6.3 “Risk Mitigation Actions”
of the MITIGATE methodology (as depicted in Figure 18) the different aspects are almost fully
covered and the MITIGATE system further implements the application of these algorithms.
With the explicit analysis of cascading effects, the steps of the ENISA Process Stage B “Risk
Assessment”, in particular the risk identification (i.e., P.5) and risk analysis (i.e., P.6), are more than
fully covered in the MITIGATE Methodology. In this context, the methodology takes additional steps
compared to all the other discussed frameworks (which also score the maximum value here, except
for the ISO 31000) by computing the cumulative and propagated risks on top of the individual risks.
This allows inspecting the effects of a risk not only on the other infrastructure assets within an
organization but also on the entire supply chain. This represents one of the core strengths and
advantages of the MITIGATE Methodology over other state‐of‐the‐art risk management frameworks.
Two weak points within the MITIGATE Methodology are the continuous monitoring and reporting of
risks (P.14) and the communication of the results to the management (P.15). All discussed
frameworks cover these activities and stress their importance, in particular of communicating the
results of a risk assessment to the relevant decision makers. However, frameworks like the ISO
27005, NIST SP800‐30 and also COBIT for Risk have specific steps dedicated to these activities,
describing in detail how the monitoring and the communication should be realized within the
organization. The MITIGATE Methodology lacks equivalent steps in the process description (cf. Figure
18 above) but implements the monitoring, reporting and communication implicitly in the MITIGATE
system. Thus, it does not reach the full score on these steps.
Further, the steps P.10 “Approval of action plan”, P.12 “Identification of residual risks” and P.13 “Risk
acceptance” fall within the remit of the security officer (or, more generally, the person in charge of
the risk assessment) and therefore are also just implicitly covered in the MITIGATE Methodology.
However, we can find the same situation also in COBIT for Risk, OCTAVE Allegro or the ISO 27005,
which identify these activities as important steps in any risk management process that have to be
explicitly considered but do not specify a dedicated step for them within their respective process
descriptions.
Overall, we can point out that the MITIGATE Methodology performs very well compared to the
examined risk management frameworks. It provides a compact and concise risk management process
implementing the general risk management approaches defined in current standards (i.e., ISO 31000,
ISO 27005 and NIST SP800‐30) with detailed activities and algorithms. In this way, the MITIGATE
Methodology offers a clear and straightforward approach, which can keep up with well‐established
risk management frameworks like COBIT for Risk or OCTAVE Allegro. With the specific tailoring to the
setting and the requirements of maritime supply chains, for example, the analysis of cascading effects
for the entire supply chain, the MITIGATE Methodology emerges as more suitable to this field of
application than the other frameworks discussed here.
Figure 20: Comparison of alignment profiles for the discussed frameworks (radar charts over the ENISA process steps P.1–P.15)
3 Technical Evaluation Results of the MITIGATE system and
services
3.1 Unit Testing
Unit testing is an iterative software development process where the smallest testable parts of an
application (units) are individually and independently scrutinized to verify their proper operation. It
stands at the base of the testing pyramid and is a prerequisite for more complex testing operations
(e.g. integration testing, performance testing, functional testing, etc.). Individual software units must
be adequately tested before being combined in groups at higher levels of the testing hierarchy.
Figure 21: A high‐level overview of the software testing pyramid
Proper unit testing, aside from verifying the correctness of the individual software units, manages to:
improve code quality
enable the adoption of agile methodologies throughout the development life‐cycle
facilitate functionality changes
simplify integration between higher‐level components
identify design deficiencies through exposing strong coupling of software units
reduce turnaround time required for bug fixing
MITIGATE was developed to be modular and to ensure proper decoupling between its software
components. The available unit tests cover more than 90% of its service layer and 100% of the utility
functions used in the application. MITIGATE benefits from its software design and, through
incorporating the unit tests with the data access integration tests (presented in section 4.2.2)
manages to achieve a very high code coverage percentage.
While a high test coverage does not guarantee error‐free applications, it provides a certain level of
confidence that the application operates as expected. It also serves as the base for designing and
implementing more complex testing scenarios and demonstration use cases.
The exhaustive list of unit tests (available at the time of writing) is presented in the table below. Each
one of the tests checks for:
proper data retrieval
proper operation of data modifying actions (create, update, delete)
proper authorization enforcement (e.g. business partners can only browse their assets)
proper operation of any other supported functionality
Test Filename
IAssetNodeServiceTest.java
IAssetServiceTest.java
IBusinesspartnerServiceTest.java
ICommonweaknessServiceTest.java
IControldescriptorServiceTest.java
ICyberdependencyServiceTest.java
IAttackPathDiscoveryTest.java
IJobConfigServiceTest.java
IKeywordOIServiceTest.java
INetworkdescriptorServiceTest.java
IPostAnalizedServiceTest.java
IProcessinvitationServiceTest.java
IProcessServiceTest.java
IProductServiceTest.java
IRAAssetServiceTest.java
IRedditServiceTest.java
IRiskassessmentLinksServiceTest.java
IRiskassessmentRunManagementTest.java
IRiskassessmentServiceTest.java
IScenarioServiceTest.java
ISiteServiceTest.java
ISupplychainserviceServiceTest.java
IVulnerabilityServiceTest.java
ITweetServiceTest.java
IUserServiceTest.java
IThreatServiceTest.java
IVendorServiceTest.java
The source code of all unit tests is available in the project's code repository. Two indicative examples
are presented below.
The first one assesses the correctness of risk‐related calculations based on the project's proposed risk
assessment methodology.
Test Filename RiskMethodologyTests.java
Test Scope Assess the correctness of vulnerability impact, vulnerability level and risk level
calculations as per the MITIGATE risk assessment methodology.
Source Code
package eu.mitigate.repository.relational;

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import eu.mitigate.app.main.Application;
import eu.mitigate.util.AccessComplexity;
import eu.mitigate.util.AccessVector;
import eu.mitigate.util.Authentication;
import eu.mitigate.util.AvailabilityImpact;
import eu.mitigate.util.ConfidentialityImpact;
import eu.mitigate.util.IntegrityImpact;
import eu.mitigate.util.Level;
import eu.mitigate.util.TruthTableUtil;
import eu.mitigate.util.TruthTableUtilVulnerability;

/**
 * This test assesses the correctness of the following calculations:
 * <li>The impact of a vulnerability based on the individual confidentiality,
 * integrity, and availability impacts as per MITIGATE's vulnerability model
 * <li>The individual vulnerability level based on access vector, access
 * complexity and authentication parameters
 * <li>The individual risk level calculation based on vulnerability impact,
 * vulnerability level and threat probability levels
 */
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { Application.class })
public class RiskMethodologyTests {

    /**
     * Testing vulnerability impact
     */
    @Test
    public void testImpactTruthTableCalculations() {
        // Sample Very High impact test
        TruthTableUtilVulnerability varVH = mock(TruthTableUtilVulnerability.class);
        when(varVH.getCi()).thenReturn(ConfidentialityImpact.Complete);
        when(varVH.getIi()).thenReturn(IntegrityImpact.Complete);
        when(varVH.getAi()).thenReturn(AvailabilityImpact.Complete);
        assertEquals(Level.VeryHigh, TruthTableUtil.calculateImpactLevel(varVH));

        // Sample Low impact test
        TruthTableUtilVulnerability varL = mock(TruthTableUtilVulnerability.class);
        when(varL.getCi()).thenReturn(ConfidentialityImpact.None);
        when(varL.getIi()).thenReturn(IntegrityImpact.Complete);
        when(varL.getAi()).thenReturn(AvailabilityImpact.Partial);
        assertEquals(Level.Low, TruthTableUtil.calculateImpactLevel(varL));

        // Sample Medium impact test
        TruthTableUtilVulnerability varM = mock(TruthTableUtilVulnerability.class);
        when(varM.getCi()).thenReturn(ConfidentialityImpact.None);
        when(varM.getIi()).thenReturn(IntegrityImpact.Complete);
        when(varM.getAi()).thenReturn(AvailabilityImpact.Complete);
        assertEquals(Level.Medium, TruthTableUtil.calculateImpactLevel(varM));
    }

    /**
     * Testing vulnerability level
     */
    @Test
    public void testIndividualVulnerabilityLevelCalculations() {
        // Sample Very High vulnerability level test
        TruthTableUtilVulnerability varVH = mock(TruthTableUtilVulnerability.class);
        when(varVH.getAv()).thenReturn(AccessVector.Network);
        when(varVH.getAc()).thenReturn(AccessComplexity.Medium);
        when(varVH.getAuth()).thenReturn(Authentication.None);
        assertEquals(Level.VeryHigh, TruthTableUtil.calculateIVL(varVH));

        // Sample Low vulnerability level test
        TruthTableUtilVulnerability varL = mock(TruthTableUtilVulnerability.class);
        when(varL.getAv()).thenReturn(AccessVector.Local);
        when(varL.getAc()).thenReturn(AccessComplexity.Low);
        when(varL.getAuth()).thenReturn(Authentication.Single);
        assertEquals(Level.Low, TruthTableUtil.calculateIVL(varL));

        // Sample Medium vulnerability level test
        TruthTableUtilVulnerability varM = mock(TruthTableUtilVulnerability.class);
        when(varM.getAv()).thenReturn(AccessVector.Local);
        when(varM.getAc()).thenReturn(AccessComplexity.Low);
        when(varM.getAuth()).thenReturn(Authentication.None);
        assertEquals(Level.Medium, TruthTableUtil.calculateIVL(varM));
    }

    /**
     * Testing risk level
     */
    @Test
    public void testRiskLevelCalculations() {
        // Sample Very High risk level tests
        assertEquals(Level.VeryHigh,
                TruthTableUtil.calculateRiskLevel(Level.VeryHigh, Level.VeryHigh, Level.VeryHigh));
        assertEquals(Level.VeryHigh,
                TruthTableUtil.calculateRiskLevel(Level.Medium, Level.VeryHigh, Level.VeryHigh));
        assertEquals(Level.VeryHigh,
                TruthTableUtil.calculateRiskLevel(Level.Low, Level.VeryHigh, Level.VeryHigh));

        // Sample High risk level tests
        assertEquals(Level.High,
                TruthTableUtil.calculateRiskLevel(Level.High, Level.Low, Level.VeryHigh));
        assertEquals(Level.High,
                TruthTableUtil.calculateRiskLevel(Level.VeryHigh, Level.VeryLow, Level.VeryHigh));
        assertEquals(Level.High,
                TruthTableUtil.calculateRiskLevel(Level.Medium, Level.High, Level.High));
    }
}
Execution Result
Table 1: RiskMethodologyTests.java
The second assesses the operational correctness of the asset service unit.
Test Filename IAssetServiceTest.java
Test Scope Assesses the correctness of all asset service operations. In more detail, sub‐tests for
data retrieval, data entry, data editing, asset deletion and unauthorized asset access
are performed.
Source Code
package eu.mitigate.repository.relational;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.domain.Pageable;
import org.springframework.test.context.junit4.SpringRunner;

import eu.mitigate.api.repository.IAssetService;
import eu.mitigate.app.main.Application;
import eu.mitigate.repository.dto.SearchAssetDTO;
import eu.mitigate.repository.relational.dao.AssetRepository;
import eu.mitigate.repository.relational.dao.NetworkdescriptorRepository;
import eu.mitigate.repository.relational.domain.Asset;
import eu.mitigate.repository.relational.domain.Businesspartner;
import eu.mitigate.repository.relational.domain.Networkdescriptor;
import eu.mitigate.repository.relational.service.AssetServiceImpl;
import eu.mitigate.util.enums.AssetType;

/**
 * Asset Service test. It validates the correctness of the IAssetService API.
 */
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { Application.class })
public class IAssetServiceTest {

    /**
     * Instantiate needed variables
     */
    private final Asset demoAsset = demoAsset();
    private final Asset demoUnauthorizedAsset = demoUnauthorizedAsset();
    private final Businesspartner alternateBP = alternateBP();

    /**
     * Define mock objects
     */
    @Mock
    private AssetRepository assetRepository;

    @Mock
    private NetworkdescriptorRepository networkRepository;

    @InjectMocks
    private IAssetService<Asset, Networkdescriptor, Pageable, AssetType,
            Businesspartner, SearchAssetDTO> assetService = new AssetServiceImpl();

    /**
     * Set up environment
     *
     * @throws Exception
     */
    @Before
    public void setUp() throws Exception {
        MockitoAnnotations.initMocks(this);
        when(assetRepository.findOne(ASSET_ID)).thenReturn(demoAsset);
        when(assetRepository.findOne(ASSET_ALTERNATE_ID)).thenReturn(demoUnauthorizedAsset);
    }

    /**
     * Test authorized asset retrieval
     *
     * @throws Exception
     */
    @Test
    public void testAssetRetrieval() throws Exception {
        assertEquals(ASSET_ID, assetService.findOne(ASSET_ID).get().getId());
    }

    /**
     * Assess that no asset is returned if it does not belong to the
     * requesting business partner
     *
     * @throws Exception
     */
    @Test
    public void testAssetRetrievalForOtherBusinesspartners() throws Exception {
        assertFalse(assetService.findOne(ASSET_ID, alternateBP).isPresent());
    }

    private static final Asset demoAsset() {
        Asset asset = new Asset();
        final Businesspartner bp = new Businesspartner();
        bp.setId(BP_ID);
        bp.setName(TEST_NAME);
        asset.setId(ASSET_ID);
        asset.setName(TEST_NAME);
        asset.setBusinesspartner(bp);
        return asset;
    }

    private static final Asset demoUnauthorizedAsset() {
        Asset asset = new Asset();
        final Businesspartner bp = new Businesspartner();
        bp.setId(BP_ALTERNATE_ID);
        bp.setName(TEST_NAME);
        asset.setId(ASSET_ALTERNATE_ID);
        asset.setName(TEST_NAME);
        asset.setBusinesspartner(bp);
        return asset;
    }

    private static final Businesspartner alternateBP() {
        Businesspartner bp = new Businesspartner();
        bp.setId(BP_ALTERNATE_ID);
        bp.setName(TEST_NAME);
        return bp;
    }

    private static final Long ASSET_ID = -1L;
    private static final Long ASSET_ALTERNATE_ID = -2L;
    private static final Long BP_ID = -3L;
    private static final Long BP_ALTERNATE_ID = -4L;
    private static final String TEST_NAME = "test name";
}
Execution Result
Table 2: IAssetServiceTest.java
3.2 Results of Integration Testing
Integration testing is the software testing phase where individual modules (units) are combined and
tested as a group. This phase succeeds unit testing, and its purpose is to expose faults between
integrated components.
MITIGATE's integration testing activities can be grouped in two categories:
data access layer
RESTful endpoints
Data Access Layer
MITIGATE works with four different data store types:
a relational database (MySQL) that serves as the default database for the tool
a NoSQL store (MongoDB) that is used in particular for:
o creating asset inventory replicas as a result of risk assessment executions and
simulations
o storing collected cybersecurity content by the open intelligence module
a graph database (Neo4j) to support complex asset hierarchies and relationships
an enterprise search platform (Apache Solr) providing document indexing and fast searching
for the filtered cybersecurity content
The integration tests developed for the data access layer of the MITIGATE tool provide 100% code
coverage (applies to data access related functions). The table below contains the exhaustive list of all
data access related integration tests and provides the following information:
Name of the file containing the test (all files are available in the project's code repository)
The type of the associated data store
The operations under integration test (read, write, delete)
ControlTest.java MySQL read, write, delete
Table 3: Integration Test
An example data access layer test is as follows:
Test BusinesspartnerTest.java
Code
package eu.mitigate.repository.relational;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import eu.mitigate.app.main.Application;
import eu.mitigate.repository.relational.dao.BusinesspartnerRepository;
import eu.mitigate.repository.relational.domain.Businesspartner;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = { Application.class })
public class BusinesspartnerTest {

    /**
     * The business partner repository. It gets autowired as per the
     * configuration defined in the {@link Application} class.
     */
    @Autowired
    BusinesspartnerRepository repo;

    /**
     * Testing business partner retrieval
     */
    @Test
    public void testBusinesspartnerFind() {
        assertNotNull("Failed to autowire an instance of Businesspartner repository", repo);
        Businesspartner bp = repo.findOne(31L);
        assertEquals("Ship Agent", bp.getName());
    }

    /**
     * Testing in one step creation and deletion of a business partner
     */
    @Test
    public void testBusinesspartnerCreateDelete() {
        assertNotNull("Failed to autowire an instance of Businesspartner repository", repo);
        String name = "IntegrationTestBusinessPartner";
        Businesspartner expectedBp = new Businesspartner();
        expectedBp.setName(name);
        Businesspartner retrievedBp = repo.save(expectedBp);
        assertEquals(name, retrievedBp.getName());
        repo.delete(retrievedBp);
        assertNull(repo.findByName(name));
    }
}
Execution Result
Table 4: Example data access layer test
RESTful endpoints
The only RESTful endpoint that requires integration testing is the microservice responsible for
generating the attack paths and the propagated attack graph.
When a risk assessment execution completes successfully, MITIGATE provides the ability to query the
system about possible exploitable attack paths based on a few criteria. Operators specify criteria
values or utilize preset criteria profiles and initiate an attack path discovery request. MITIGATE
bundles the criteria in a JSON object and wires it through an HTTP REST request to the attack paths
microservice. The latter unmarshals the received JSON object to set its local application parameters,
runs the attack path discovery algorithm and returns the generated attack paths (if such paths exist),
along with the propagated attack sub‐graph. When no attack paths exist, the microservice responds
with a text message stating all entry points are secure. In the case where mandatory application
parameters are missing from the request, the microservice throws an error and returns a 5xx HTTP
error code.
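As a sketch, the criteria bundling and HTTP request construction described above might look as follows. The base URL, endpoint path and JSON field names are illustrative assumptions; the actual MITIGATE API may differ.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class AttackPathRequestBuilder {

    /** Bundles attack path criteria into a JSON payload. The field names are
     *  illustrative assumptions, not the actual MITIGATE schema. */
    static String criteriaJson(String attackerLocation, String attackerCapability,
                               int propagationLength) {
        return "{\"attackerLocation\":\"" + attackerLocation + "\","
             + "\"attackerCapability\":\"" + attackerCapability + "\","
             + "\"propagationLength\":" + propagationLength + "}";
    }

    /** Wires the criteria into an HTTP POST request to the microservice. */
    static HttpRequest buildRequest(String baseUrl, String json) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/attack-paths"))  // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        String json = criteriaJson("networked", "high", 3);
        HttpRequest request = buildRequest("http://localhost:8080", json);
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Sending the request with java.net.http.HttpClient would then return either the generated attack paths and the propagated sub‐graph, or the text message stating that all entry points are secure.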
The three tests that assure proper communication, and consequently integration, between MITIGATE
and the attack path discovery microservice can be found in Appendix B.
3.3 Results of Performance Testing
Performance testing has been carried out on all instances. MITIGATE is an application that does not
expect high request loads and, given its REST nature and the underlying design, it responds in a
timely manner. Long‐running tasks are executed asynchronously.
The most resource‐consuming module of the MITIGATE tool is the risk assessment one. In practice,
when an operator initiates and executes a risk assessment, the system performs the following tasks:
o it collects assets that are part of the process under review for all business partners
o it identifies and collects (if any) the assets that declare cyber‐dependencies between
business partners
o it merges the two asset sets in one containing all unique entries
o it retrieves the vulnerability profile per asset and detects which vulnerabilities are not
treated by already applied security controls
o it performs calculations as per risk assessment methodology and stores the final
results
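The asset collection and merge steps listed above can be sketched as follows. The data model (plain strings for asset and vulnerability identifiers) is a deliberate simplification of MITIGATE's actual domain classes, and all identifiers are hypothetical.

```java
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class RiskAssessmentSketch {

    /** Merges two asset sets into one containing all unique entries. */
    static Set<String> mergeUnique(Collection<String> processAssets,
                                   Collection<String> dependencyAssets) {
        Set<String> all = new LinkedHashSet<>(processAssets);
        all.addAll(dependencyAssets);
        return all;
    }

    public static void main(String[] args) {
        // Assets taking part in the process under review (hypothetical IDs).
        List<String> processAssets = List.of("A1", "A2", "A3");
        // Assets declaring cyber-dependencies between business partners.
        List<String> dependencyAssets = List.of("A3", "B7");

        Set<String> allAssets = mergeUnique(processAssets, dependencyAssets);

        // Vulnerability profile per asset and the set of vulnerabilities
        // already treated by applied security controls (all hypothetical).
        Map<String, List<String>> vulnProfile = Map.of(
                "A1", List.of("CVE-1", "CVE-2"),
                "A3", List.of("CVE-3"));
        Set<String> treated = Set.of("CVE-2");

        // Per asset, keep only the untreated vulnerabilities; the risk
        // calculations of the methodology would then run on this set.
        for (String asset : allAssets) {
            List<String> untreated = vulnProfile.getOrDefault(asset, List.of())
                    .stream()
                    .filter(v -> !treated.contains(v))
                    .collect(Collectors.toList());
            System.out.println(asset + " -> " + untreated);
        }
    }
}
```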
In the course of the process described above, MITIGATE communicates and exchanges data with
three different data stores.
Figure 22 below displays the risk assessment execution time as a function of the number of assets it
assesses. The time value (in ms) on the vertical axis represents the total execution time from the
moment a web user clicks the button to initiate/execute a risk assessment in the browser until the
browser receives a response from the system (either successful or not). While the turnaround time
does not depend solely on the number of assets included (the numbers of vulnerabilities and applied
security controls also matter), it indicates that the system's performance is predictable and that the
time complexity of the operation can safely be considered linear. Several tests run throughout the
development process verify this assumption.
Figure 22: Performance of MITIGATE in relation to number of assets
The one module that does benefit from a dedicated performance analysis at this point is the attack
path discovery module, which is discussed hereunder.
3.3.1 Evaluation of Attack Paths Generation Approaches
This section presents the evaluation of the attack graph generation algorithm that includes both
performance results and comparisons to long established and state‐of‐the‐art methods. First of all,
basic parameters have to be introduced:
Open source or commercialized software; This describes whether the software is available as
open source or only as a commercial product.
Accessible software; This describes whether the software is easy to find, download and use.
GUI; This describes whether the software is command line only or is accompanied by a
graphical user interface.
Intuitive level; This describes how intuitive the software is to use.
Input format; This describes the input format technology, for example, XML or JSON.
Exploits are monotonic or non‐monotonic; This describes whether the exploits are monotonic
or non‐monotonic.
Attack graph type; This describes the type of the attack graph.
Single path or all paths; This describes whether the algorithm can return only a single path or
all paths.
Forward and/or backwards chaining; This describes whether the algorithm is based on
forward or backwards chaining.
Probabilistic or deterministic; This describes whether the algorithm is based on a probabilistic
or a deterministic approach.
Logic or graph based search engine; This describes whether the algorithm is based on a logic
search approach or on a graph‐based search engine approach.
Complexity/Scalability; This describes the complexity or the scalability of the algorithm.
Attack Path Analysis; This describes the capacity of the evaluated method to identify and
analyse different attack paths.
Vulnerability Chain Analysis; This describes the capacity of the method to identify chains of
sequential vulnerabilities on different assets and include them in the risk analysis.
Integration of Open Source Information; This describes the capacity of the evaluated method
to retrieve and integrate information coming from openly accessible sources of information
(e.g., open source databases).
Integration of Crowd Sourcing Information; This describes the capacity of the evaluated
method to retrieve and integrate information coming from crowd sourcing (e.g., technical
forums).
Collaboration Capabilities; This describes the capacity of the evaluated method to enable
and utilize the collaboration of several users in the risk analysis or risk management process.
Pruning of paths; Pruning of paths makes the algorithm more efficient. The algorithm can cut
paths that are either not important or fall into a category that is not of interest, such as
networked attacks.
Propagation length; The propagation length can be specified. The user should be able to
enter the length that a potential attacker could reach after gaining access to an entry asset.
Attacker location; The location of the attacker can be specified; it should be either local or
networked.
Attacker capability; The capability of the attacker can be specified. The capability should be
specified in terms of high, medium, low or similar.
Entry points; The entry assets can be specified, which helps to search on specific network
parts for problems.
Target points; The target assets can be specified, which helps to search on specific network
parts for problems.
Satisfaction of EU policies; EU maritime supply chain policies are satisfied.
Can be used for risk assessment; This describes the applicability of the evaluated method for
the maritime supply chain risk assessment area.
Vulnerability types; The types and the categories of the vulnerabilities can be specified within
the settings of the algorithm.
Clarity and replication; The algorithm is presented in a manner that it makes it easy to
replicate or extend.
The proposed method has been compared with the following methods, which are the most
relevant and well-established methods found in the literature.
Attack Graph Toolkit. This tool was developed at Carnegie Mellon University; it runs on
Linux, and both its source code and its graphical interface are open. The Attack Graph
Toolkit works well in small networks but has scalability issues as the network grows, and it
is difficult to adapt to large-scale networks. Furthermore, development of this tool
stopped in 2007.
MulVAL is a long-established open source tool that was actively developed until
recently. MulVAL does not, however, offer a graphical user interface, although it can
generate PDF and EPS files containing the graph. The algorithm is logic-based
and uses a representation of an attack graph called a logical attack graph. MulVAL is
command-line based only, which makes it difficult to install and use.
TVA is based on topological analysis of network attack vulnerabilities. The idea is to
build a dependency graph representing the preconditions and postconditions of exploits;
at the next step, a search algorithm finds attack paths that exploit multiple
vulnerabilities.
NetSPA was developed at MIT; its main purpose is to identify the most
valuable attack path in a graph derived from a network topology. NetSPA can analyze an attack
graph and propose ideas for repairing the most important weaknesses in a timely
manner. Its main disadvantage, however, is that the number of loops in the graph makes
it difficult for an administrator to manage the network.
Risk management is important for identifying risks in networks and proposing mitigation solutions.
Typically, risk management systems rely on attack graph generation methods to identify
attack paths and perform risk assessments. Although the literature offers several attack graph
generation methods, none satisfies enough of the above criteria to make the process straightforward
and produce results of high quality. Thus, a new method for attack graph generation for
risk management was necessary and has been developed. Its evaluation results are
presented in Table 5.
Attack graph generation methods

Criteria | Attack Graph Toolkit | MulVAL | TVA | NetSPA | MITIGATE
Open source or commercialized software | Open source | Open source | Commercialized | Commercialized | Commercialized
GUI | Picture output | Picture output | Picture output | - | Interactive web GUI application
Intuitive level | Good (vertices are state nodes, edges are the state transitions) | Good | Better (a vertex is a host or host group) | Good | Good
Input format | XML | OVAL XML support | By hand | OVAL XML support | -
Exploits are monotonic or non-monotonic | Monotonic | Monotonic | Monotonic | Monotonic | -
Attack graph type | State graph | Logical attack graph (attribute attack graph) | Penetration dependency graph, aggregation attack graph | MP (multiple-prerequisite) graph | -
Single path or all paths | All paths | All paths | All paths | Single path | All paths
Forward and/or backwards chaining | Forward | Backward | Forward | Backward | Forward
Probabilistic or deterministic | N/A | N/A | N/A | - | Probabilistic
Logic or graph-based search engine | Model checking | Logic-based | Graph based | Graph based | Graph based
Complexity/Scalability | Poor (construction time grows exponentially) | Polynomial (N²) ~ O(N³) | Polynomial (N³) | O(N lg N) | -
Vulnerability chain analysis | Yes | - | - | - | Yes
Integration of open source information | Yes | - | - | - | Yes
Integration of crowd sourcing information | No | - | - | - | Yes
Collaboration capabilities | Yes | - | - | - | Yes
Satisfaction of EU policies | No | No | No | No | Yes
Can be used for risk assessment | Yes | No | No | - | Yes

Table 5: Comparison results
3.3.2 Attack Path Performance Evaluation
Valencia port is a port community system that is located at the port of Valencia and is managed by
the port authority of Valencia. The system allows actors involved in the transportation of
goods to connect and exchange information. The current information system comprises 26
assets, including both hardware and software assets. Using the information supplied by the port of
Valencia, we executed a series of tests to validate the performance of the attack path discovery
method; the results are presented in Table 6. In the table, the propagation length and max
length refer to the number of steps an attacker could take according to the risk
assessment scenario. The entry points refer to how many entry points have been specified, and the
target points refer to the number of target points the attacker wants to reach from the entry points.
It can also be specified which assets the entry and target points are. Finally, the number of
paths found is the exact number of attack paths found by the algorithm according to the settings used,
and the time in seconds is the exact processing time of the algorithm.
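The measurement procedure described here, running the discovery algorithm for each combination of attacker capability, location, lengths, and entry/target point counts while recording the exact processing time, can be sketched as a small harness. The `discover` callable below is a stand-in for any attack path discovery implementation; it is an illustrative assumption, not MITIGATE's test driver.

```python
# Hypothetical timing harness for measurements of the kind reported in
# Table 6: one result row per parameter combination, including the number
# of paths found and the processing time in seconds.
import itertools
import time

def benchmark(discover, capabilities, locations, lengths, entry_counts, target_counts):
    results = []
    for cap, loc, length, n_entry, n_target in itertools.product(
            capabilities, locations, lengths, entry_counts, target_counts):
        start = time.perf_counter()
        paths = discover(cap, loc, length, n_entry, n_target)
        elapsed = time.perf_counter() - start
        results.append({"capability": cap, "location": loc,
                        "propagation_length": length,
                        "entry_points": n_entry, "target_points": n_target,
                        "paths_found": len(paths), "seconds": elapsed})
    return results
```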
No. of test | Attacker capability | Attacker location | Propagation length | Max length | No. of entry points | No. of target points | Time in seconds
Table 6: Port of Valencia case study performance results
Furthermore, based on the data from the port of Valencia, we developed a database consisting of 182
assets: 35 hardware assets, 147 software assets installed evenly on the hardware
assets, and vulnerabilities associated with various software assets. Further associations were then
made to form a network between the hardware assets. The supplementary performance results
based on the 182 assets are presented in Figure 23. The vertical axis of the figure represents the
processing time in seconds (from 0 to 5); the terms low, medium and high represent the capability
of the attacker, and in all cases the location of the attacker has been set to local. Additionally, the
values 5, 10, 20 and 50 represent both the number of entry and target points. Lastly, the max and
propagation length values have both been set to 10 in all cases.
Figure 23: Performance evaluation (processing time) based on 182 assets
3.4 Results of Security Testing
MITIGATE aims to make a difference in protecting critical infrastructures. Building on its risk
assessment methodology and the rich functionality implemented, it offers a holistic approach to
handle cybersecurity‐related issues.
It is a fact that the tool contains crucial information about the cyber‐assets of an organization.
Unauthorized access to its resources can have a detrimental effect on systems, services, and
personnel since it will effectively serve as a cheat sheet for attackers.
MITIGATE was built with security in mind. It is deployed in a secure infrastructure which provides
timely patches, updates, and upgrades for all underlying components (hardware firmware, operating
systems, databases, software development kits, applications, etc.). The application itself was
developed following all well-known security guidelines. To verify this in practice, MITIGATE
undertook a comprehensive security assessment of its HTTP-secure public component (the web
portal and the supporting RESTful services). The security tests run by the assessment follow the
OWASP Testing Guide v4.0
(https://www.owasp.org/index.php/OWASP_Testing_Guide_v4_Table_of_Contents). The findings are
as follows:
Finding 1. No Session Timeout
Description There is no session expiration timeout mechanism in place.
Affected Scope MITIGATE Web Application
Impact Moderate: The session of an authenticated user can be attacked if the user is idle
or the workstation is left unattended.
Exploitability An indefinite session timeout allows a malicious user to have more chances to
impersonate an existing user’s session and/or conduct attacks against authenticated
users.
Recommendations Ensure that a timeout mechanism is developed within the application. The
authorization token should be checked and, after a determined amount of
idle time, the session should expire.
Set session timeout to the minimal value possible depending on the context
of the application.
Avoid "infinite" session timeout.
Prefer declarative definition of the session timeout in order to apply global
timeout for all application sessions.
Trace session creation/destruction in order to analyse the creation trend and try to
detect abnormal session creation (the application profiling phase of an
attack).
Technical Details The MITIGATE application does not have a session expiration timeout mechanism.
After login, the user is granted an authorization token which does not use the Cookie:
header and which is not subject to any expiration mechanism.
POST /api/v1/auth/login HTTP/1.1
Host: mitigate.euprojects.net
User‐Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:55.0) Gecko/20100101 Firefox/55.0
Accept: */*
Accept‐Language: en‐US,en;q=0.5
Accept‐Encoding: gzip, deflate
Content‐Type: application/json; charset=utf‐8
Authorization: undefined
X‐Requested‐With: XMLHttpRequest
Referer: http://mitigate.euprojects.net/login
Content‐Length: 65
Cookie: JSESSIONID=4E026F702B4B4EC487199FB7F9711EDD
DNT: 1
Connection: close
{"username":"pentest@ubitech.eu","password":"*************"}
HTTP/1.1 200
Date: Mon, 18 Sep 2017 13:08:55 GMT
Server: Apache/2.4.7 (Ubuntu)
X‐Content‐Type‐Options: nosniff
X‐XSS‐Protection: 1; mode=block
Cache‐Control: no‐cache, no‐store, max‐age=0, must‐revalidate
Pragma: no‐cache
Expires: 0
X‐Frame‐Options: DENY
Authorization: Bearer
eyJlbmMiOiJBMTI4R0NNIiwiYWxnIjoiUlNBLU9BRVAifQ.AAMm7X54zfoOG3skA67pEXU
m1PLkUSkz‐6fKIXH6UGEVekUViI2C‐
S00oIj_nlbl7iRmEdbOwfPVyvTjC7IvxX6deANyxwcHUI40npOc9Ks‐
SBKJDQYKGJpBMs1RnSkrDKduHdSRAJ‐FN6NsYTwfsJaqSGFoQf0L4‐
CfZNowEEES2C1uKNHRcz7CDO_f5s6dzI_7oaz81H‐0RRdzb1n6v81TVSNvC‐
sGdzG66yaLfasH‐
AGTcPCxln9uxvVbItwuks1__2sOLTivf7aTSQBnpfqbfW7KsN9wdzf_p1OycI0bfvHDHydVK
m‐
fkhKJ2v4lE00s8lOz5U76D1HH7MDM3A.HcyAUx4BPKCzzQMR.7yfyps_vSwvJHasG4cNfU
BYsFngJ‐Wy9bStUvsfIS_0Iraoq5EHbJgOBSftD3lLBVhqWT46‐
o51_1VPftYAfev5IHWFvHZJaA_dZNSHY‐
kzDxIvO9p1KadtTlLALLY5RbdRkkCppykE3bIUQsryeAqjXJp2x7Ou1HhTy_SJp7K0‐
U9BbQgmDd4um1C_5mxm36GVAfpCRi8PCqXkLrh4ANbuK_xvCcNuxSzpZbQFa9ts1oTa
NCQuSeu0B01‐mLUfnjD‐
e9jJ90LQrlTujckuO_YrD9S6smEQLkCSDUOXG1f4owKigG6sv3el5nDoAuB36LrOoz1nZ5B
odJIxeWr94K6eYgF7IFWLgZ_T4grQ.XLAgHyc0JwzpVW8
Content‐Length: 0
Connection: close
Implemented Solution No action was actually needed to address this finding, since it is a false positive. The
external vulnerability assessment tool mistakenly considered the JSESSIONID token
as an authorization cookie. MITIGATE is a RESTful-based web application that uses
secure JSON Web Tokens (JWT) to authenticate users and grant them their
applicable privileges. The token remains valid for 30 minutes, which is considered a
sensible time period to stay active within the application. The security token is also
saved in the browser's local storage to emulate cookie-like operations and ease
subsequent login actions within the time limit. When the token expires, users are
prompted to re-enter their credentials.
Table 7: Finding 1. No Session Timeout
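The 30-minute expiry behaviour described in the response above can be illustrated with a minimal, stdlib-only sketch of an expiring signed token. MITIGATE itself uses encrypted JSON Web Tokens; the HMAC scheme, shared secret, and claim names below are illustrative assumptions, not its implementation.

```python
# Minimal sketch of an expiring signed token (illustrative only; MITIGATE
# uses encrypted JWTs). The secret is a hypothetical placeholder.
import base64
import hashlib
import hmac
import json
import time

SECRET = b"hypothetical-shared-secret"
TTL_SECONDS = 30 * 60  # the token stays valid for 30 minutes

def issue_token(username, now=None):
    exp = int(now if now is not None else time.time()) + TTL_SECONDS
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": username, "exp": exp}).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return (payload + b"." + sig).decode()

def verify_token(token, now=None):
    payload_b64, sig = token.encode().split(b".")
    expected = hmac.new(SECRET, payload_b64, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    if (now if now is not None else time.time()) >= claims["exp"]:
        # expired: the user must re-enter their credentials
        raise ValueError("token expired")
    return claims
```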
Finding 2. Stack Trace Debug Output Reveal Sensitive Information
Description The error and exception handling subsystem of the application did not
catch all exceptions and return a generic sanitized error message to the
end user. The error messages returned included limited information which
could be helpful to an attacker, such as filenames, paths, and call flow
information.
Affected Scope MITIGATE Web Application
Impact Low: Limited information such as class hierarchies and names of source
code files were revealed.
Exploitability Moderate: A few unhandled exceptions are present in the application.
Recommendations Catch all unhandled exceptions at the highest level. Disable stack dumps
on production systems in order to prevent the leakage of sensitive back‐
end information to the end user. Refer to the web server’s documentation
for instructions on configuring the server.
Host: mitigate.euprojects.net
Accept: */*
Accept-Language: en-US,en;q=0.5
Content-Type: application/json; charset=utf-8
Authorization: Bearer
eyJlbmMiOiJBMTI4R0NNIiwiYWxnIjoiUlNBLU9BRVAifQ.bR45xpM1MsKdDq1nO
XKmZxvtaRnjvFICknl3E9vDweYKhfPfcky-
lUmOLQ0dI20vKFL74rra7LkZBdUbyppBXtcIYiyuFH2W0Mr7JMdHr6TTOhjR_ga
H-SqGGfbNqCT5ui-bWRzzKhWRJeMacc7K1apQbTokN-
qYbw0fS9rDfn7NWi8jDcC2yu99CdVgjGR35qDBYRY338qc3sBBhxB40X9S6K9p
cNZ3nDPPUmJTFr9kI-E9M8uoHEZDcpDZo1EtcRc-
5pagRo9GV7FIqA9aKDktftcl8qHD180GcIdUEuOPm6gRuB5EpgfVdtDofoCmyXh
dZZGBk23OZWd5yPNXRA.eS79KVEHKhFZFEQ-
.9PcC61G55jkg1wE8pcpF962ESxOGt1uS4cQTRS8MuyQeAUspnk_OajTRwVKS
fgItOiX42Vjttr46yRee2pv4Hjf_DlkzaGuBOOaH4dpUKTgZWNiIJhfLro2AKKDKuz
D1X0RQldfUw8AQfLYdr3OptfVY1iKV4rG01chM0Rq0C4eGV1w7LU7G87ByBWz
LwFkcH-2wit2ag14-6KjckRN5BrvtwS4v9lZc-epIMkrrGva80XsqH7kbsry5c-
oB1ZwGTO7dYHPPnHC6nc0KJ0b4vkBVU86xgNJhBm0-
1XtwOOOPVemdz27X8QAjEDGuEeBgxDv8BOy8Up5HtfChRkkO74tm__1-
Vf6Dy_DEvmk.0fCUoaPglxE1rblp_ZqX1A
X-Requested-With: XMLHttpRequest
Referer: http://mitigate.euprojects.net/vendor/4959
Content-Length: 33
DNT: 1
Connection: close
{"id":"4959]]>><","name":"111Vendor2"}
Implemented Solution The application was cross-checked and all stack traces were replaced by
high-level exception messages.
Table 8: Finding 2. Stack Trace Debug Output Reveal Sensitive Information
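The remediation pattern recommended above, catching all unhandled exceptions at the highest level and returning a generic sanitized message while keeping the full stack trace in the server log only, can be sketched as follows. The handler shape and error payload are hypothetical, not MITIGATE's actual code.

```python
# Top-level exception barrier (illustrative): the client never sees
# filenames, paths, or call-flow information from a stack trace.
import logging
import traceback

def handle_request(handler, request):
    try:
        return 200, handler(request)
    except Exception:
        # The full stack trace goes to the server log only.
        logging.error("unhandled exception:\n%s", traceback.format_exc())
        return 500, {"error": "An internal error occurred. Please try again later."}
```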
Finding 3. Deprecated Ciphers Supported
Description The remote host supports the use of a block cipher with 64‐bit blocks in
one or more cipher suites. It is, therefore, affected by a vulnerability,
known as SWEET32, due to the use of weak 64‐bit block ciphers. A man‐in‐
the‐middle attacker who has sufficient resources can exploit this
vulnerability, via a 'birthday' attack, to detect a collision that leaks the XOR
between the fixed secret and a known plaintext, allowing the disclosure of
the secret text, such as secure HTTPS cookies, and possibly resulting in the
hijacking of an authenticated session.
Note that the ability to send a large number of requests over the same TLS
connection between the client and server is an important requirement for
carrying out this attack. If the number of requests allowed for a single
connection were limited, this would mitigate the vulnerability.
Affected Scope mitigate.euprojects.net (tcp/443)
Impact Support for weak SSL cipher suites increases exposure to brute force
cryptographic attacks, which can compromise the confidentiality and
integrity of SSL sessions.
Exploitability Exploits are available
Recommendations Reconfigure the affected application, if possible, to avoid use of all 64‐bit
block ciphers. Alternatively, place limitations on the number of requests
that are allowed to be processed over the same TLS connection to mitigate
this vulnerability.
Technical Details List of 64-bit block cipher suites supported by the remote server:
Medium Strength Ciphers (> 64-bit and < 112-bit key, or 3DES)
o EDH-RSA-DES-CBC3-SHA Kx=DH Au=RSA Enc=3DES-CBC(168) Mac=SHA1
o ECDHE-RSA-DES-CBC3-SHA Kx=ECDH Au=RSA Enc=3DES-CBC(168) Mac=SHA1
o DES-CBC3-SHA Kx=RSA Au=RSA Enc=3DES-CBC(168) Mac=SHA1
High Strength Ciphers (>= 112-bit key)
o IDEA-CBC-SHA Kx=RSA Au=RSA Enc=IDEA-CBC(128) Mac=SHA1
The fields above are:
{OpenSSL ciphername}
Kx={key exchange}
Au={authentication}
Enc={symmetric encryption method}
Mac={message authentication code}
{export flag}
CVSS: 5.0
CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N
CVE ID:
CVE‐2016‐2183, CVE‐2016‐6329
Other Identifiers:
BID: 92630, 92631
XREF: OSVDB:143387, OSVDB:143388
Implemented Solution As described in finding 1, MITIGATE employs secure JSON Web Tokens for
authentication. In the token generation phase, the tool collects a
multitude of information for the authenticating user (e.g. IP address,
browsing agent, etc.), which makes it already very difficult for a session to
be hijacked.
However, allowing the most modern and secure ciphers is an excellent
security practice. Consequently, MITIGATE's reverse proxy (responsible for
enabling SSL termination) was reconfigured to restrict deprecated ciphers.
Table 9: Finding 3. Deprecated Ciphers Supported
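The remediation, disallowing the 64-bit block ciphers (3DES, IDEA) flagged by SWEET32, can be illustrated with Python's `ssl` module. The cipher string below is an illustrative hardening example, not the exact reverse-proxy configuration used by MITIGATE.

```python
# Illustrative sketch: an OpenSSL cipher string that keeps high-strength
# suites and explicitly excludes the SWEET32-affected 64-bit block ciphers.
import ssl

def hardened_context():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # "!3DES:!IDEA" removes the flagged suites; "!aNULL:!eNULL" drops
    # unauthenticated and null-encryption suites as a matter of hygiene.
    ctx.set_ciphers("HIGH:!3DES:!IDEA:!aNULL:!eNULL")
    return ctx
```

The same cipher-string syntax applies to reverse proxies that delegate to OpenSSL for SSL termination.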
Finding 4. Lack of CAPTCHA
Description The application does not use CAPTCHA to protect registration forms.
The purpose of a CAPTCHA is to differentiate between a computer and
human in order to prevent automated actions in a web page.
Using a breakable CAPTCHA, or none at all, makes the application vulnerable
to automated actions such as spamming by bots and scripts.
Affected Scope http://mitigate.euprojects.net/register
Impact Medium: The impact depends on the sensitivity of the action being
secured from automated actions.
Exploitability High: Tools exist to exploit this vulnerability; any open source tool that
replays traffic can be used.
Recommendations Use strong CAPTCHA to protect sensitive publicly accessible forms from
automated actions. Independent research may be necessary to determine
the strongest CAPTCHA solution, as this is still an evolving area of research.
Technical Details The lack of a CAPTCHA permits registering multiple accounts in an automated
fashion.
Implemented Solution The publicly available registration form aimed to ease pilot operations.
MITIGATE will support invitation‐based registrations in commercial
deployments.
Table 10: Finding 4. Lack of CAPTCHA
Finding 5. Outdated JavaScript library in use
Description Outdated JavaScript libraries were discovered.
Affected Scope MITIGATE Web Application
Impact Unnecessary and outdated functionality is often compromised by
malicious users.
Exploitability A malicious user can use a specific scanner to identify old and outdated
application modules. This can be performed without being authenticated.
Any outdated and/or vulnerable module can be potentially compromised.
Recommendations Best practice stipulates that a production server should minimise its attack
surface by removing or patching all outdated components.
Technical Details The below two JavaScript libraries are outdated:
Bootstrap
jQuery
Implemented Solution The libraries were upgraded to the vendors' latest versions.
Table 11: Finding 5. Outdated JavaScript library in use
Finding 6. API Key Exposed On Web Page
Description The MITIGATE web application exposes a secret key in clear‐text in a web
page.
Affected Scope mitigate.euprojects.net
Impact An attacker can extract or capture the secret key used for the Twitter
account of the application.
Exploitability If a cross-site scripting vulnerability were found, it would be
possible to retrieve the key from the DOM context of the web page, where
the secret key is displayed.
Recommendations Treat the API secret key as a password. Mask it using type=hidden or by
not returning the value in the HTML code.
Technical Details The secret API key for the Twitter account used by the application is
exposed, without being masked.
Implemented Solution The relevant HTML field type was converted from type text to type
password to prevent eavesdropping while typing. Furthermore, when a
value is set, the latter is omitted from any HTTP response.
Table 12: Finding 6. API Key Exposed On Web Page
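The second half of the fix, never echoing a stored secret back in an HTTP response, follows a simple serialization pattern that can be sketched as below. The field names are hypothetical, not MITIGATE's actual data model.

```python
# Illustrative sketch: build a response-safe view of stored settings with
# secret values omitted, so an XSS probe of the DOM can never recover them.
def serialize_settings(settings, secret_fields=("twitter_api_secret",)):
    """Return a copy of the settings dict with secret fields removed."""
    return {k: v for k, v in settings.items() if k not in secret_fields}
```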
In conclusion, the security assessment confirmed that MITIGATE is free of exploitable vulnerabilities.
The minor findings were analyzed and addressed within one week of receiving the assessment results.
4 Socio‐economic & Techno‐economic Evaluation
4.1 MITIGATE pricing and licensing model
The software developers of MITIGATE plan to offer customers different packages, which
differ in the number of assets that can be added and in the price of the license size.
Monthly subscription features

Flavour | Features | Nr of assets | Nr of users | Nr of business partners | Asset relationships | Real RA/month | Risk simulations/month
Bronze | Asset mapping, RA | 100 | 5 | 5 | Unlimited | 5 | 2
Silver | Asset mapping, RA, Vendor management | 1.000 | 8 | 10 | Unlimited | 10 | 5
Gold | Asset mapping, RA, Vendor management, Open intelligence | 2.000 | 15 | 20 | Unlimited | 20 | 10
Platinum | Asset mapping, RA, Vendor management, Open intelligence | Unlimited | 25 | Unlimited | Unlimited | Unlimited | Unlimited

Table 13: License sizes for monthly subscription
Perpetual License features

Flavour | Features | Nr of assets | Nr of users in organisation | Nr of business partners | Asset relationships | Real RA/year | Risk simulations/year
Bronze | Asset mapping, RA | 500 | 5 | 5 | Unlimited | 70 | 25
Silver | Asset mapping, RA, Vendor management | 1.000 | 8 | 10 | Unlimited | 145 | 65
Gold | Asset mapping, RA, Vendor management, Open intelligence | 2.000 | 15 | 20 | Unlimited | 300 | 130
Platinum | Asset mapping, RA, Vendor management, Open intelligence | Unlimited | 25 | Unlimited | Unlimited | Unlimited | Unlimited

Table 14: License sizes for a perpetual license
For these four different license sizes, the prices are estimated as follows:

License type | Monthly license | Perpetual license | Perpetual license + maintenance

Table 15: Prices
Based on a conservative assumption regarding the number of sold software packages, the business
plan and revenue stream look as follows:

Personnel cost (development, maintenance, support) | 35.000 € | 75.000 € | 95.000 € | 110.000 € | 135.000 € | 150.000 €

Table 16: Revenue streams
4.2 Findings from the questionnaires and pilot user comments
The internal and external pilot operations lasted 15 months in total and were organized as several
feedback loops to the developers, so that identified improvements could be implemented and
tested again. In order to collect feedback as comprehensive as possible, questionnaires were
developed which the test users were asked to answer, and their oral recommendations and
comments were collected by the hosts of the pilot operations. The questionnaires, the
collected oral feedback and both evaluations are presented in detail in D7.2. This section summarizes
the results with a focus on usability and economic factors.
In the questionnaires, the testers were asked for their opinion on how much the MITIGATE system
would save within the next five years, or how much additional revenue it would generate through
additional business, as well as whether the MITIGATE system would help them improve their
organization's compliance with security standards such as ISO 27001. Within the user tests, no valid
mean value for the financial means saved through the deployment of MITIGATE could be obtained.
A large majority agreed or strongly agreed that using MITIGATE will help with respect to security
standard compliance. But while 18 % could not decide from their testing experience whether
MITIGATE will improve their security standard compliance, more than half of the respondents
could not say whether the system was economically viable for them. While answering the question
regarding the support of security standards might require more detailed knowledge about the standards,
as well as about using MITIGATE as a final product embedded in the security processes of a company,
the economic evaluation of a security tool is in itself very difficult. Taking into account that
investments in cyber security do not generate directly measurable benefits, like a new and improved
production plant would, but are costs for preventive measures that might reduce a potential loss,
it is generally difficult to estimate the return on investment.
Regarding the usability perspective, valuable input was provided by the pilot users and taken into
account by the developing partners. Generally, a large majority of the beta testers confirmed that
the MITIGATE system is efficient, as the time required for its usage is reasonable and it improves
the productivity of the users. Furthermore, it is easy to learn and comfortable to use. The
majority of them also confirmed that the system already has an easy-to-understand logic and structure,
as well as helpful visualizations and interactive control over the working process and the reports.
With regard to the expected functions and functionalities, most of the testers were also satisfied with
MITIGATE's capabilities.
With regard to the error messages and error recovery or undo functions, many still saw
considerable potential for improvement. Given the beta stage of the system, this is not surprising,
but it provides valuable suggestions for further improvements.
Despite the positive overall rating, many further recommendations and suggestions for
improvement were collected and taken into account by the developers. For example, early testers
mainly recommended a clearer user interface and improved search functions, e.g. for the assets and
vulnerabilities, which are imported from external databases and are therefore numerous. Both
could be implemented early in the system's improvement process.
In addition, comments pointed out that, especially in larger organizations, a huge number of
cyber assets have to be managed and kept up to date in the MITIGATE system in order to provide
meaningful and up-to-date cyber risk assessments. The suggestion to develop and provide an
import interface for external third-party tools for BPMN or asset inventory management, and so make
use of potentially already existing repositories and data sources within companies, was taken up by
the developers.
The following section presents a closer look at the implemented and planned improvements
resulting from the pilot users' feedback.
4.3 Usability: implemented and planned improvements
During the course of the MITIGATE project, several enhancements were made to improve the human‐
computer interaction, as well as the overall usability of the system. Understanding and addressing
cyber security issues does require a certain level of expertise, but on the other hand, providing an
easy‐to‐use, straightforward and friendly platform for cyber risk management is an excellent step
towards promoting security awareness and achieving security by design.
The first batch of enhancements was due to comments received from use case partners. At first, the
look and feel of the platform was enhanced to provide:
o consistent breadcrumb and back arrow navigation
o consistent styling between the different modules
o improved colour scheme and button sizing
o addition of captions for button icons, either in‐line or on hover
o wider content section
These changes, despite being simple, managed to significantly improve the browsing experience,
while allowing more content to be on display in every page.
In order to better align operators with the exact set of functionalities supported by MITIGATE at any
given point, the system's current version number was moved to the sticky header, placing it in a
more prominent position.
The main application editors supporting create, update and delete operations on all MITIGATE
entities (asset, risk assessment, supply chain service, process, business partner, network, vendor,
vulnerability, etc.) were updated to follow a unified approach. The main menu on the left serves as an
entry point for each core functionality and consequently for each main entity. The individual landing
pages offer a grid view or a list of all existing entries, supporting pagination and filters for tailor‐made
lookups. A list of buttons guides operators to all available actions (create, update, delete) and nested
sub‐entities (where and if applicable). Creating and updating entries take place on a new web page,
while deleting is handled on the same page. A verification pop-up message following each delete action
ensures no data is deleted by mistake. In case the delete operation fails or is not allowed for a
number of reasons (user rights, entity referenced by other entities, etc.), a friendly message is
returned to the operator for further analysis.
In addition to the aforementioned enhancements, a series of functionalities were introduced in the
latest versions of MITIGATE aiming to improve the overall usability of the system from a functional
standpoint. In more detail, search filters were added for all application tables and grid views, allowing
operators to better manage their inventory.
Figure 24: Available filters for the asset inventory grid view
Given the fact that building and maintaining a comprehensive up‐to‐date asset cartography is the
cornerstone of running effective risk assessment scenarios, an asset import module was developed to
ease inventory management. MITIGATE employs two simplified CSV templates, one to bulk import
assets’ individual details (e.g. asset name, asset type, CPE identifier, networks, etc.) and another one
to handle assets’ relationships. CSV is a widely used file format that does not require profound
technical expertise. While it enables indirect integrations with 3rd party (open source or commercial)
asset management systems and vulnerability scanners (that might be used for semi‐automated asset
discovery) via data bridges, it ensures that MITIGATE stays decoupled from any other proprietary
asset representation model.
The same module also provides exporting capabilities: business partners can now download their
effective asset inventory for any use (e.g. performing local backups, feeding 3rd-party systems, custom
asset inventory version control, etc.).
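The two-template import described above, one CSV for the assets' individual details and one for their relationships, can be sketched as follows. The column names and the semicolon-separated network list are illustrative assumptions, not MITIGATE's actual template format.

```python
# Hypothetical sketch of reading the two CSV templates into an in-memory
# asset inventory; column names are assumptions for illustration.
import csv
import io

def import_assets(asset_csv, relation_csv):
    assets = {}
    for row in csv.DictReader(io.StringIO(asset_csv)):
        assets[row["name"]] = {"type": row["type"], "cpe": row["cpe"],
                               "networks": row["networks"].split(";"),
                               "related_to": []}
    for row in csv.DictReader(io.StringIO(relation_csv)):
        # the second template only links already-imported assets
        assets[row["source"]]["related_to"].append(row["target"])
    return assets
```

Because the format is plain CSV, a vulnerability scanner or 3rd-party asset management system only needs a small data bridge that emits these two files.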
Another enhancement was the addition of a short description on top of each report chart, as shown
in the figure below.
Figure 25: Threat Analysis chart with a useful short description on top
As indicated by reviewers, it was not always clear what each chart represented. Adding a short
description for each chart not only makes its representation easier to comprehend (even with a
quick review), but also provides an efficient way to inform operators about future
enhancements/updates of what each chart calculates.
MITIGATE also aims to promote collaboration between business partners that are part of the same
SC. While the implemented two‐way authorization module does an excellent job allowing business
partners to define existing cyber dependencies, it was indicated that the pending invitations for
collaboration should be placed in a more accessible place within the application. As a result, the main
dashboard of MITIGATE was updated to include the number of pending collaboration activities, along
with a short‐cut for navigating to the relevant section for further analysis and management.
Figure 26: The number of pending collaboration actions is directly accessible from the main
dashboard
While MITIGATE provides a complete set of functionalities that enable high‐quality collaborative risk
assessment scenarios for maritime Supply Chain Services, its roadmap includes further
enhancements.
The design of the web application will be migrated to a responsive one, to better support browsers
across modern computing devices (tablets, smartphones, etc.). While its core functionalities benefit
from screens with higher resolutions (such as those of workstations, laptops and high‐end tablets),
special effort will be made to make report insights and risk analytics available in more compact
views.
Furthermore, the asset graph visualization component will be updated to allow similar views for the
assets that are part of a specific process within a SC. In other words, operators will be able to
visualize assets and their relationships in the segregated view defined by each process.
MITIGATE already offers its API through REST endpoints, which are secured utilizing encrypted JSON
web tokens. An extra module will be developed to allow the definition of an arbitrary number of REST
clients per business partner, in order to promote automation and interconnection with existing
systems and workflows. As an example, if a business partner already employs an IT automation tool
(like Ansible, Puppet, Chef, etc.) to manage its infrastructure, hooks can be developed to update
MITIGATE’s assets after patches, upgrades and any other possible maintenance actions.
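As a sketch of such a hook (the endpoint path, payload fields and token handling below are illustrative assumptions, not the documented MITIGATE API):

```python
import json

# Sketch of a post-maintenance hook: after an IT automation tool (e.g. an
# Ansible playbook) patches a host, it notifies MITIGATE so that the asset
# inventory reflects the new software version. Endpoint path, payload and
# token handling are hypothetical.

def build_asset_update(base_url: str, jwt_token: str, asset_id: int, new_cpe: str):
    """Prepare an authenticated REST request updating one asset's CPE."""
    return {
        "method": "PUT",
        "url": f"{base_url}/api/assets/{asset_id}",
        "headers": {
            # MITIGATE's REST endpoints are secured with encrypted JSON web tokens.
            "Authorization": f"Bearer {jwt_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"cpe": new_cpe}),
    }

req = build_asset_update("https://mitigate.example.org", "<token>",
                         768, "cpe:/o:microsoft:windows_10")
print(req["method"], req["url"])
```

The hook would then submit this request with any HTTP client once the patch run has completed.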
Finally, MITIGATE’s most important asset is its own risk assessment methodology. Cumulative and
propagated risks are key in identifying high‐impact threats. They assist in exploring the effective
attack surface and provide invaluable insights in the course of designing and evaluating the different
defensive strategies. As such, the methodology will be further supported with extra statistics,
analytics and notifications. Indicative features include (amongst others):
o Ability to define risk goals which will indicate what is foreseen as acceptable risk for
processes, supply chains and business partners
o Near real‐time notifications when certain asset sequences form exploitable attack paths or
when an asset's risk level exceeds the threshold defined by a goal
o More interactive asset graph visualizer that will enable direct asset modifications and filtering
for both the asset inventory and the risk assessment simulations
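The first two features amount to a threshold check of computed risk levels against a defined goal; a minimal sketch, assuming a numeric risk scale and hypothetical asset names:

```python
# Hypothetical risk-goal check: flag assets whose risk level exceeds the
# acceptable threshold defined by a goal. The numeric scale and the asset
# names are illustrative assumptions.

def assets_exceeding_goal(asset_risks: dict, goal_threshold: float):
    """Return (asset, risk) pairs that should trigger a notification."""
    return sorted(
        (name, risk) for name, risk in asset_risks.items()
        if risk > goal_threshold
    )

risks = {"Sql Server": 7.5, "Operating System": 9.8, "PCS Web Server2": 4.3}
alerts = assets_exceeding_goal(risks, goal_threshold=7.0)
print(alerts)  # [('Operating System', 9.8), ('Sql Server', 7.5)]
```

In a near real‐time setting this check would run whenever an asset's risk level is recomputed.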
4.4 Economic Perspectives
It has always been a subject of research and discussion which benefits and costs arise when one
invests in software, hardware and personnel in order to protect one's own environment from the
damage possibly caused by cyber‐attacks. However, a sound cost‐benefit analysis hardly seems
possible: neither the probability that such an event occurs nor the monetary damage it would cause
can be reliably calculated. Different sources suggest that damages range from a few hundred Euros
to several millions. All attempts of the responsible MITIGATE partners to obtain such data, e.g. from
insurers, operators and associations, were unsuccessful; the data is simply not available. Even
insurers have difficulties in calculating their policies. How much money has to be spent on IT security
within a company therefore remains a matter of estimation. This is well known and is, in fact, the
very reason to tackle these issues with risk assessment tools like MITIGATE.
As stated in chapter 4.2, more than half of the respondents of the MITIGATE field survey could not
say whether the system was economically viable for them. A survey published by PwC in 2017 [63]
and conducted among 400 medium‐sized companies shows that the average cost per attack was
41,000 €. However, damages often cannot be quantified precisely: 17 % of the interviewees were
not able to quantify their losses at all. In addition, finding meaningful comparative figures is made
more difficult by the fact that attacks are often not recognized in the first place. Where attacks are
recognized, they may be kept secret so as not to undermine the trust of business partners.
The so‐called return on security investment (ROSI) model is derived from the return on investment
(ROI) calculation [64]. However, it is more suitable as a practically oriented benchmarking approach
for comparing security investment opportunities within one organisation under repeatable and
consistent conditions than for a general statement on the economic viability of the MITIGATE
system for all organisations. Nevertheless, it is briefly presented below to illustrate the challenges of
a general financial assessment of security measures or of the MITIGATE system itself.
ROI

The classical ROI relates the expected profits of an investment to its cost:

ROI = (Expected Profits − Cost of Investment) / Cost of Investment

Already with the ROI it is noticeable that the expected profits are formed on the basis of an
estimate.

ROSI

The ROSI adapts this calculation to security investments [64], replacing the expected profits by the
avoided losses:

ROSI = (Risk Exposure × Risk Mitigation Percentage − Solution Cost) / Solution Cost

While the Risk Mitigation Percentage is an estimation which might be applicable to several
organisations, Risk Exposure is based on two individual estimations: the probability of a successful
attack and the monetary damage it would cause. For a single company this could be a helpful
investment decision tool.
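A brief worked example of the ROSI calculation from [64] may illustrate this; every figure below is an assumption for demonstration purposes (the 41,000 € average attack cost from the PwC survey [63] merely anchors the loss estimate):

```python
# Illustrative ROSI computation following Sonnenreich et al. [64].
# All input figures are assumptions, not measured values.

def rosi(risk_exposure: float, mitigation_pct: float, solution_cost: float) -> float:
    """Return on security investment, expressed as a fraction of the solution cost."""
    return (risk_exposure * mitigation_pct - solution_cost) / solution_cost

# Assumed annual risk exposure: 41,000 EUR average cost per attack (PwC [63])
# times an assumed frequency of 0.5 attacks per year.
exposure = 41_000 * 0.5
# Assume the measure mitigates 60% of that risk and costs 8,000 EUR per year.
value = rosi(exposure, mitigation_pct=0.60, solution_cost=8_000)
print(f"ROSI = {value:.0%}")  # positive -> investment pays off under these assumptions
```

Changing any of the three estimates, as different organisations inevitably would, changes the result substantially, which is precisely the limitation discussed in the text.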
On a broader scale, however, the probability of a successful attack as well as its cost for the
damaged party may vary greatly from organisation to organisation, and even for one particular asset
and its specific vulnerability within one organisation. Considering the heterogeneous values of
assets, which depend on the business undertaken and the size of the organisation, the expected
damage can be very different each time. Since it is hardly possible to carry out a reliable cost‐benefit
analysis even for a single company, attempting one for society as a whole is beyond any meaningful
estimation. In a strict sense, therefore, a socio‐economic analysis cannot be carried out.
References
[1] Technical Department of ENISA Section Risk Management and BOC Information Technology
GmbH, “Integration of Risk Management / Risk Assessment into Business Governance,”
Brussels, Belgium, 2008.
[2] ENISA, “ENISA Work Programme 2006,” Heraklion, Greece, 2005.
[3] MITIGATE Consortium, “Deliverable D7.1: Evaluation Methodology,” Athens, Greece,
Deliverable from the H2020 Project MITIGATE (GA No.653212), 2017.
[4] ENISA ad hoc working group on risk assessment and risk management, “Inventory of Risk
Assessment and Risk Management Methods,” Brussels, Belgium, 2006.
[5] International Organization for Standardization, ISO/IEC 17799: Information technology ‐ Code of
practice for information security management. Geneva, Switzerland, 2000.
[6] Bundesamt für Sicherheit in der Informationstechnik, IT‐Grundschutz Catalogue. Bonn,
Germany, 2013.
[7] G. Stoneburner, A. Goguen, and A. Feringa, NIST SP800‐30 Risk Management Guide for
Information Technology Systems. Gaithersburg, USA, 2002.
[8] International Organization for Standardization, ISO 31000: Risk Management – Principles and
Guidelines. Geneva, Switzerland, 2009.
[9] International Organization for Standardization, ISO/IEC 27005: Information technology ‐ Security
techniques ‐ Information security risk management. Geneva, Switzerland, 2011.
[10] Information Systems Audit and Control Association, COBIT 5 for Risk. Rolling Meadows, USA,
2013.
[11] R. A. Caralli, J. F. Stevens, L. R. Young, and W. R. Wilson, “Introducing OCTAVE Allegro:
Improving the Information Security Risk Assessment Process,” Software Engineering Institute,
Carnegie Mellon University, Pittsburgh, USA, Technical Report CMU/SEI‐2007‐TR‐012, 2007.
[12] MITIGATE Consortium, “Deliverable D2.2: Evidence‐Driven Maritime Supply Chain Risk
Assessment Approach,” Brighton, UK, Deliverable from the H2020 Project MITIGATE (GA
No.653212), 2016.
[13] A. Singhal and X. Ou, “Security Risk Analysis of Enterprise Networks Using Probabilistic Attack
Graphs,” NIST, Gaithersburg, USA, NIST Interagency Report 7788, 2011.
[14] S. Jajodia, S. Noel, and B. O’Berry, “Topological Analysis of Network Attack Vulnerability,” in
Managing Cyber Threats, vol. 5, V. Kumar, J. Srivastava, and A. Lazarevic, Eds. New York:
Springer‐Verlag, 2005, pp. 247–266.
[15] X. Ou, S. Govindavajhala, and A. W. Appel, “MulVAL: A Logic‐based Network Security Analyzer,”
presented at the USENIX Security Symposium, 2005.
[16] P. Mell, K. Scarfone, and S. Romanosky, “A Complete Guide to the Common Vulnerability
Scoring System,” Gaithersburg, USA, 2007.
[17] J. Homer, X. Ou, and D. Schmidt, “A sound and practical approach to quantifying security risk in
enterprise networks,” Kans. State Univ. Tech. Rep., pp. 1–15, 2009.
[18] S. Zhang, X. Ou, A. Singhal, and J. Homer, “An empirical study of a vulnerability metric
aggregation method,” presented at the International Conference on Security and Management,
Las Vegas, USA, 2011.
[19] J. Homer, S. Zhang, X. Ou, D. Schmidt, Y. Du, and S. R. Rajagopalan, “Aggregating Vulnerability
Metrics in Enterprise Networks Using Attack Graphs,” J. Comput. Secur., vol. 21, no. 4, pp. 561–
597, 2013.
[20] N. Poolsappasit, R. Dewri, and I. Ray, “Dynamic Security Risk Management Using Bayesian
Attack Graphs,” IEEE Trans. Dependable Secure Comput., vol. 9, no. 1, pp. 61–74, Jan. 2012.
[21] L. Munoz‐Gonzalez and E. Lupu, “Bayesian Attack Graphs for Security Risk Assessment,” in
Proceedings of the NATO IST‐153/RWS‐21 Workshop on Cyber Resilience, Munich, Germany,
2017.
[22] Z. Ma and P. Smith, “Determining Risks from Advanced Multi‐step Attacks to Critical
Information Infrastructures,” in Critical Information Infrastructures Security, vol. 8328, E. Luiijf
and P. Hartel, Eds. Cham: Springer International Publishing, 2013, pp. 142–154.
[23] S. Schauer et al., “An adaptive supply chain cyber risk management methodology,” in
Proceedings of the Hamburg International Conference of Logistics (HICL), Hamburg, Germany,
2017.
[24] A. Ekelhart, T. Neubauer, and S. Fenz, “Automated Risk and Utility Management,” 2009, pp.
393–398.
[25] J. Ryu, V. Dua, and E. Pistikopoulos, “A bilevel programming framework for enterprise‐wide
process networks under uncertainty,” Comput. Chem. Eng., vol. 28, no. 6–7, pp. 1121–1129,
2004.
[26] A. P. Kanyalkar and G. K. Adil, “An integrated aggregate and detailed planning in a multi‐site
production environment using linear programming,” Int. J. Prod. Res., vol. 43, no. 20, pp. 4431–
4454, Oct. 2005.
[27] C. McDonald and I. Karimi, “Planning and Scheduling of Parallel Semicontinuous Processes. 1.
Production Planning,” Ind. Eng. Chem. Res., vol. 36, no. 7, pp. 2691–2700, 1997.
[28] M. Goetschalckx, C. J. Vidal, and K. Dogan, “Modelling and design of global logistics systems: A
review of integrated strategic and tactical models and design algorithms,” Eur. J. Oper. Res., vol.
143, no. 1, pp. 1–18, Nov. 2002.
[29] M. H. Patel, W. Wei, Y. Dessouky, Z. Hao, and R. Pasakdee, “Modelling and Solving an Integrated
Supply Chain System,” International Journal of Industrial Engineering, vol. 16, no. 1, pp. 13‐‐22,
2009.
[30] L. A. (Tony) Cox, Jr., “Game Theory and Risk Analysis,” Risk Anal., vol. 29, no. 8, pp. 1062–1068,
Aug. 2009.
[31] L. Rajbhandari and E. A. Snekkenes, “Mapping between Classical Risk Management and Game
Theoretical Approaches,” in Communications and Multimedia Security, vol. 7025, B. De Decker,
J. Lapon, V. Naessens, and A. Uhl, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp.
147–154.
[32] S. Rass, S. König, and S. Schauer, “Uncertainty in Games: Using Probability‐Distributions as
Payoffs,” in Decision and Game Theory for Security, London, UK: Springer, 2015, pp. 346–357.
[33] S. Rass, “On Game‐Theoretic Risk Management (Part One) – Towards a Theory of Games with
Payoffs that are Probability‐Distributions,” ArXiv E‐Prints, Jun. 2015.
[34] S. Rass, S. König, and S. Schauer, “Decisions with Uncertain Consequences—A Total Ordering on
Loss‐Distributions,” PLOS ONE, vol. 11, no. 12, p. e0168583, Dec. 2016.
[35] International Organization for Standardization (ISO), Ed., ISO 31000:2009 Risk management ‐
Principles and guidelines. ISO, Geneva, Switzerland, 2009.
[36] Joint Technical Committee OB‐007 Risk Management, Standards Australia and Standards New
Zealand, AS/NZS 4360:2004 Risk Management, 3rd ed. Sydney, NSW; Wellington: Standards
Australia International Ltd.; Standards New Zealand, 2004.
[37] International Organization for Standardization, ISO/IEC 27000: Information technology ‐ Security
techniques ‐ Information security management systems ‐ Overview and vocabulary. Geneva,
Switzerland, 2016.
[38] British Standards Institute, BS 7799‐1: Information security management. Code of practice for
information security management systems. London, UK, 1995.
[39] British Standards Institute, BS 7799‐2: Information security management. Specification for
information security management systems. London, UK, 1999.
[40] International Organization for Standardization, ISO/IEC 27001: Information technology ‐ Security
techniques ‐ Information security management systems ‐ Requirements. Geneva, Switzerland,
2013.
[41] International Organization for Standardization, ISO/IEC 27002: Information technology ‐ Security
techniques ‐ Code of practice for information security controls. Geneva, Switzerland, 2013.
[42] International Organization for Standardization, ISO/IEC 27003: Information technology ‐ Security
techniques ‐ Information security management systems ‐ Guidance. Geneva, Switzerland, 2017.
[43] International Organization for Standardization, ISO/IEC 27004: Information technology ‐ Security
techniques ‐ Information security management ‐ Monitoring, measurement, analysis and
evaluation. Geneva, Switzerland, 2016.
[44] IT Governance, “ISO 27000 Family of Standards.” [Online]. Available:
https://www.itgovernance.co.uk/iso27000‐family. [Accessed: 15‐Feb‐2018].
[45] NIST, “National Institute of Standards and Technology (NIST).” [Online]. Available:
https://www.nist.gov/. [Accessed: 06‐Jul‐2017].
[46] National Institute of Standards and Technology, NIST SP800‐39 Managing Information Security
Risk: Organization, Mission, and Information System View. Gaithersburg, USA, 2011.
[47] National Institute of Standards and Technology, NIST SP800‐37 Rev. 1 Guide for Applying the
Risk Management Framework to Federal Information Systems: a Security Life Cycle Approach.
Gaithersburg, USA, 2010.
[48] Department of Homeland Security, “Federal Information Security Modernization Act,”
Department of Homeland Security, 16‐Aug‐2010. [Online]. Available:
https://www.dhs.gov/fisma. [Accessed: 15‐Feb‐2018].
[49] Information Systems Audit and Control Association, COBIT 5 ‐ Enabling Processes. Rolling
Meadows, USA, 2012.
[50] “Information Technology ‐ Information Security – Information Assurance | ISACA.” [Online].
Available: https://www.isaca.org/Pages/default.aspx. [Accessed: 08‐Oct‐2014].
[51] The Cabinet Office, IT Infrastructure Library (ITIL) v3: Service Strategy, Service Design, Service
Operation, Service Transition, Continual Service Improvement, German‐language edition 2011,
vols. 1–5. Norwich, United Kingdom: The Stationery Office (TSO), 2013.
[52] Information Systems Audit and Control Association, COBIT 5 ‐ A Business Framework for the
Governance and Management of Enterprise IT. Rolling Meadows, USA, 2012.
[53] Information Systems Audit and Control Association, The risk IT framework: principles, process
details, management guidelines, maturity models. Rolling Meadows, IL: ISACA, 2009.
[54] Information Systems Audit and Control Association (ISACA), Ed., “COBIT 5: A Business
Framework for the Governance and Management of Enterprise IT.” Information Systems Audit
and Control Association, Rolling Meadows, IL 60008 USA, 2012.
[55] C. Alberts and A. Dorofee, “OCTAVE Method Implementation Guide Version 2.0 Volume 1:
Introduction,” Software Engineering Institute, Carnegie Mellon University, Pittsburgh, USA,
2001.
[56] C. Alberts and A. Dorofee, “OCTAVE Method Implementation Guide Version 2.0 Volume 2:
Preliminary Activities,” Software Engineering Institute, Carnegie Mellon University, Pittsburgh,
USA, 2001.
[57] Congress of the United States, “Health Insurance Portability and Accountability Act,”
Washington, USA, 1996.
[58] C. Alberts, A. Dorofee, J. Stevens, and C. Woody, “OCTAVE‐S Implementation Guide,” Software
Engineering Institute, Carnegie Mellon University, Pittsburgh, USA, Technical Report CMU/SEI‐
2003‐HB‐003, 2005.
[59] International Maritime Organization, Ed., ISPS Code: International Ship and Port Facility Security
Code and SOLAS amendments adopted 12 December 2002, 2003 ed. London: International
Maritime Organization, 2003.
[60] International Organization for Standardization, ISO 28001: Security management systems for
the supply chain ‐ Best practices for implementing supply chain security, assessments and plans ‐
Requirements and guidance. Geneva, Switzerland, 2007.
[61] National Institute of Standards and Technology, “National Vulnerability Database (NVD),”
Computer Security Resource Center ‐ National Vulnerability Database, 2017. [Online]. Available:
https://nvd.nist.gov/. [Accessed: 06‐Jul‐2017].
[62] R. Oppliger, “Quantitative Risk Analysis in Information Security Management: A Modern Fairy
Tale,” IEEE Secur. Priv., vol. 13, no. 6, pp. 18–21, 2015.
[63] P. Engemann, D. Fischer, B. Gosdzik, T. Koller, and N. Moore, “Im Visier der Cyber‐Gangster,”
PricewaterhouseCoopers Aktiengesellschaft Wirtschaftsprüfungsgesellschaft (PwC), 2017.
[64] W. Sonnenreich, J. Albanese, and B. Stout, “Return on Security Investment (ROSI) – A Practical
Quantitative Model,” Journal of Research and Practice in Information Technology, vol. 38, no. 1,
pp. 45–56, Feb. 2006. ISSN: 1443‐458X.
Appendix A – Evaluation Forms
A1: ISO 31000
Stage A ‐ Definition of Scope and Framework

Stage C ‐ Risk Treatment

#     Process Name                       Score   ISO 31000 Reference Step(s)
P.9   Development of action plan         1,5     5.5.2 Selection of risk treatment options; 5.5.3 Preparing and implementing risk treatment plans
P.10  Approval of action plan            0,5     5.5.3 Preparing and implementing risk treatment plans; 5.6 Monitoring and review; 5.7 Recording the risk management process
P.11  Implementation of action plan      1,0     5.5.3 Preparing and implementing risk treatment plans
P.12  Identification of residual risks   0,5     5.5 Risk treatment; addressing non‐treated risks only

Stage D ‐ Risk Acceptance

Stage E ‐ Risk Monitoring and Review

#     Process Name                       Score   ISO 31000 Reference Step(s)
P.14  Risk monitoring and reporting      1,5     Risk Management Process: 5.6 Monitoring and review; Framework: 4.5 Monitoring and review of the framework

Stage F ‐ Risk communication, Awareness and Consulting
A2: ISO 27005

Stage A ‐ Definition of Scope and Framework

Stage B ‐ Risk Assessment

#     Process Name                       Score   ISO 27005 Reference Step(s)
P.6   Analysis of relevant risks         3,0     8.3 Risk analysis; 8.3.2 Assessment of consequences; 8.3.3 Assessment of incident likelihood; 8.3.4 Level of risk determination

Stage C ‐ Risk Treatment

Stage D ‐ Risk Acceptance

Stage E ‐ Risk Monitoring and Review

#     Process Name                       Score   ISO 27005 Reference Step(s)
P.14  Risk monitoring and reporting      3,0     12 Information security risk monitoring and review; 12.1 Monitoring and review of risk factors; 12.2 Risk management monitoring, review and improvement

Stage F ‐ Risk communication, Awareness and Consulting
A3: NIST SP 800‐30

Stage A ‐ Definition of Scope and Framework

Stage B ‐ Risk Assessment

Stage C ‐ Risk Treatment

#     Process Name                       Score   NIST SP 800‐30 Reference Step(s)
P.9   Development of action plan         0,5     Risk response is mentioned at several points throughout the process but not described in detail; activities are covered in NIST SP800‐39 (Task 3‐2)
P.10  Approval of action plan            0,5     Risk response is mentioned at several points throughout the process but not described in detail; activities are covered in NIST SP800‐39 (Task 3‐3)
P.11  Implementation of action plan      0,5     Risk response is mentioned at several points throughout the process but not described in detail; activities are covered in NIST SP800‐39 (Task 3‐4)
P.12  Identification of residual risks   0,5     Risk response is mentioned at several points throughout the process but not described in detail; activities are covered in NIST SP800‐39 (Task 3‐3)

Stage D ‐ Risk Acceptance

Stage E ‐ Risk Monitoring and Review

Stage F ‐ Risk communication, Awareness and Consulting
A4: COBIT for Risk

Stage A ‐ Definition of Scope and Framework

Stage B ‐ Risk Assessment

Stage C ‐ Risk Treatment

Stage D ‐ Risk Acceptance

Stage E ‐ Risk Monitoring and Review

Stage F ‐ Risk communication, Awareness and Consulting

#     Process Name                                   Score   COBIT for Risk Reference Step(s)
P.15  Risk communication, awareness and consulting   2,0     EDM03.2 (1)(3)
A5: OCTAVE Allegro

Stage A ‐ Definition of Scope and Framework

#     Process Name                             Score   OCTAVE Allegro Reference Step(s)
P.2   Definition of internal environment       2,0     Definition of drivers within Step 1 "Establish Risk Measurement Criteria"; Step 2 "Develop an Information Asset Profile"; Step 3 "Identify Information Asset Containers"
P.3   Generation of risk management context    0,5     Slightly addressed in Step 1 "Establish Risk Measurement Criteria"
P.4   Formulation of risk criteria             3,0     Step 1 "Establish Risk Measurement Criteria"; collection of worksheets as examples for risk criteria (Appendix B)

Stage B ‐ Risk Assessment

#     Process Name                             Score   OCTAVE Allegro Reference Step(s)
P.6   Analysis of relevant risks               2,5     Step 7 "Analyze Risks", Activity 1; probability covered in Step 5 "Identify Threat Scenarios", Activity 3
P.7   Evaluation of risks                      3,0     Step 7 "Analyze Risks", Activity 2; Step 8 "Select Mitigation Approach", Activity 1

Stage C ‐ Risk Treatment

Stage D ‐ Risk Acceptance

Stage E ‐ Risk Monitoring and Review

Stage F ‐ Risk communication, Awareness and Consulting
A6: MITIGATE Methodology

Stage A ‐ Definition of Scope and Framework

#     Process Name                       Score   MITIGATE Methodology Reference Step(s)
P.4   Formulation of risk criteria       2,5     Implicitly given in Step 2.1 "Individual Cyber Threat's Identification Applicable to the SCS Cyber Assets" (Threat Scale) and in Appendix F (MITIGATE Probability Scale)

Stage B ‐ Risk Assessment

#     Process Name                       Score   MITIGATE Methodology Reference Step(s)
P.5   Identification of risks            3,0     Step 2.1 "Individual Cyber Threat's Identification Applicable to the SCS Cyber Assets"; Step 2.2 "SCS Cyber Threat Analysis"; Step 3.1 "Identification of Confirmed Vulnerabilities"; Step 3.2 "Identification of Potential/Unknown (Zero‐Day) Vulnerabilities"
P.6   Analysis of relevant risks         3,0     Step 3.3 "Individual Vulnerability Assessment"; Step 3.4 "Cumulative Vulnerability Assessment"; Step 3.5 "Propagated Vulnerability Assessment"; Step 4.1 "Individual Impact Assessment"; Step 4.2 "Cumulative Impact Assessment"; Step 4.3 "Propagated Impact Assessment"; Step 5.1 "Individual Risk Assessment"; Step 5.2 "Cumulative Risk Assessment"; Step 5.3 "Propagated Risk Assessment"
P.7   Evaluation of risks                2,5     Implicitly given by the individual rankings resulting from Steps 5.1–5.3; visual support is provided by the MITIGATE tool

Stage C ‐ Risk Treatment

#     Process Name                       Score   MITIGATE Methodology Reference Step(s)
P.9   Development of action plan         3,0     Step 6 "Risk Mitigation" (Strategies of Attacker, Strategies of Defender, Damage of Each Scenario)
P.10  Approval of action plan            2,5     Implicitly given by the security officer due to the optimality of the mitigation plan
P.11  Implementation of action plan      3,0     Step 6 "Risk Mitigation"; the mitigation plan contains precise information on when and how often to perform the given mitigation actions
P.12  Identification of residual risks   2,5     Implicitly given by the threshold for the risk level defined by the security officer; everything below that threshold is residual risk (although not quantified explicitly)

Stage D ‐ Risk Acceptance

Stage E ‐ Risk Monitoring and Review

Stage F ‐ Risk communication, Awareness and Consulting
Appendix B
Successful attack path discovery; the microservice returns a non‐empty list of exploitable paths.
Scenario MITIGATE operator specifies criteria values that result in successful attack path
discovery with a non‐empty result.
Request POST /upload/json/ HTTP/1.1
Host: localhost:9091
Content‐Type: application/json
Cache‐Control: no‐cache
Postman‐Token: 7f4b8925‐1ae7‐003b‐ae47‐797f200933e1
{
"riskassessment_id": 352,
"maxlength": 7,
"propagationlength": 7,
"entrypoints": [
769,
773
],
"targetpoints": [
768,
772
],
"attacks": [
],
"attacker": {
"AttackProfileIdentifier": 1,
"AttackerCapability": 3,
"AttackerLocation": 3
},
"attackdictionary": [
],
"capec_attacks": {
},
"cwe_nvd": {
},
"cwe_cvedetails": {
},
"vulnerability_types": [
],
"attack_cwe": {
}
}
Response {
"PropagatedAttackGraph": {
"directed": true,
"multigraph": true,
"graph": {},
"nodes": [
{
"asset_id": 769,
"asset_name": "X ‐ Sql Server",
"adminlevel": null,
"riskassessment_id": 352,
"businesspartner_id": 49,
"runprivilege": null,
"asset_type": "a",
"entrypoints": true,
"vuln_ids": [
"CUSTOM001"
],
"id": 769
},
{
"asset_id": 768,
"asset_name": "X ‐ Operating System",
"adminlevel": null,
"riskassessment_id": 352,
"businesspartner_id": 49,
"runprivilege": null,
"asset_type": "o",
"targetpoints": true,
"vuln_ids": [
"CVE‐2016‐0196",
"CVE‐2016‐0168",
"CVE‐2016‐0180",
"CVE‐2016‐0197",
"CVE‐2016‐0176",
"CVE‐2016‐0171",
"CVE‐2016‐0174",
"CVE‐2016‐3250",
"CVE‐2016‐0170",
"CVE‐2016‐0173"
],
"id": 768
},
{
"asset_id": 773,
"asset_name": "X ‐ PCS Web Server2",
"adminlevel": null,
"riskassessment_id": 352,
"businesspartner_id": 49,
"runprivilege": null,
"asset_type": "a",
"entrypoints": true,
"vuln_ids": [
"CVE‐2012‐5650",
"CVE‐2010‐3854",
"CVE‐2012‐5641",
"CVE‐2012‐5649"
],
"id": 773
}
],
"links": [
{
"source": 0,
"target": 1,
"key": "installed_on"
},
{
"source": 2,
"target": 1,
"key": "installed_on"
}
]
},
"attackpaths": [
{
"directed": true,
"multigraph": true,
"graph": {},
"nodes": [
{
"asset_id": 769,
"asset_name": "X ‐ Sql Server",
"adminlevel": null,
"riskassessment_id": 352,
"businesspartner_id": 49,
"runprivilege": null,
"asset_type": "a",
"entrypoints": true,
"vuln_ids": [
"CUSTOM001"
],
"id": 769
},
{
"asset_id": 768,
"asset_name": "X ‐ Operating System",
"adminlevel": null,
"riskassessment_id": 352,
"businesspartner_id": 49,
"runprivilege": null,
"asset_type": "o",
"targetpoints": true,
"vuln_ids": [
"CVE‐2016‐0196",
"CVE‐2016‐0168",
"CVE‐2016‐0180",
"CVE‐2016‐0197",
"CVE‐2016‐0176",
"CVE‐2016‐0171",
"CVE‐2016‐0174",
"CVE‐2016‐3250",
"CVE‐2016‐0170",
"CVE‐2016‐0173"
],
"id": 768
}
],
"links": [
{
"source": 0,
"target": 1,
"key": "installed_on"
}
]
},
{
"directed": true,
"multigraph": true,
"graph": {},
"nodes": [
{
"asset_id": 773,
"asset_name": "X ‐ PCS Web Server2",
"adminlevel": null,
"riskassessment_id": 352,
"businesspartner_id": 49,
"runprivilege": null,
"asset_type": "a",
"entrypoints": true,
"vuln_ids": [
"CVE‐2012‐5650",
"CVE‐2010‐3854",
"CVE‐2012‐5641",
"CVE‐2012‐5649"
],
"id": 773
},
{
"asset_id": 768,
"asset_name": "X ‐ Operating System",
"adminlevel": null,
"riskassessment_id": 352,
"businesspartner_id": 49,
"runprivilege": null,
"asset_type": "o",
"targetpoints": true,
"vuln_ids": [
"CVE‐2016‐0196",
"CVE‐2016‐0168",
"CVE‐2016‐0180",
"CVE‐2016‐0197",
"CVE‐2016‐0176",
"CVE‐2016‐0171",
"CVE‐2016‐0174",
"CVE‐2016‐3250",
"CVE‐2016‐0170",
"CVE‐2016‐0173"
],
"id": 768
}
],
"links": [
{
"source": 0,
"target": 1,
"key": "installed_on"
}
]
}
]
}
HTTP Response Status: 200 OK
Time: 86ms
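The documented response can also be consumed programmatically. In the following sketch (the client code is an assumption; the field names are taken from the response above), each attack path's link indices are resolved back to asset names:

```python
# Resolve each attack path of the documented response into a readable
# chain of asset names. "source"/"target" in "links" are indices into the
# path's "nodes" list, exactly as in the response above.

def path_to_names(path: dict) -> list:
    names = [n["asset_name"] for n in path["nodes"]]
    return [(names[link["source"]], link["key"], names[link["target"]])
            for link in path["links"]]

# Minimal excerpt of the first documented attack path.
sample_path = {
    "nodes": [
        {"asset_name": "X - Sql Server"},
        {"asset_name": "X - Operating System"},
    ],
    "links": [{"source": 0, "target": 1, "key": "installed_on"}],
}

print(path_to_names(sample_path))
# [('X - Sql Server', 'installed_on', 'X - Operating System')]
```

This kind of post‐processing is what a REST client (see the roadmap in section 4.3) would perform on the `attackpaths` array.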
Successful attack path discovery; no paths exist
Scenario MITIGATE operator specifies criteria values that do not result in generation of
exploitable attack paths. The microservice returns the message "Abort, all entry
points are secure".
Request POST /upload/json/ HTTP/1.1
Host: localhost:9091
Content‐Type: application/json
Cache‐Control: no‐cache
Postman‐Token: 7c9835b2‐921a‐082c‐b1b2‐0d6c509f6e8f
{
"riskassessment_id": 352,
"maxlength": 7,
"propagationlength": 7,
"entrypoints": [
769
],
"targetpoints": [
768
],
"attacks": [
],
"attacker": {
"AttackProfileIdentifier": 1,
"AttackerCapability": 1,
"AttackerLocation": 1
},
"attackdictionary": [
],
"capec_attacks": {
},
"cwe_nvd": {
},
"cwe_cvedetails": {
},
"vulnerability_types": [
],
"attack_cwe": {
}
}
Response [
{
"Paths": {
"links": [],
"directed": true,
"graph": {},
"nodes": [],
"multigraph": true
}
},
{
"links": [],
"directed": true,
"graph": {},
"nodes": [],
"multigraph": true
},
{
"message": "Abort, all entry points are secure"
}
]
HTTP Response Status: 200 OK
Time: 41ms
Unsuccessful attack path discovery; mandatory parameters are missing
Scenario MITIGATE submits an incomplete request. In more detail, it clones the previous
test case and removes the riskassessment_id parameter. As a result, the
remote service throws an internal error, resulting in a 500 Internal Server Error
HTTP response.
Request POST /upload/json/ HTTP/1.1
Host: localhost:9091
Content‐Type: application/json
Cache‐Control: no‐cache
Postman‐Token: 7c9835b2‐921a‐082c‐b1b2‐0d6c509f6e8f
{
"maxlength": 7,
"propagationlength": 7,
"entrypoints": [
769
],
"targetpoints": [
768
],
"attacks": [
],
"attacker": {
"AttackProfileIdentifier": 1,
"AttackerCapability": 1,
"AttackerLocation": 1
},
"attackdictionary": [
],
"capec_attacks": {
},
"cwe_nvd": {
},
"cwe_cvedetails": {
},
"vulnerability_types": [
],
"attack_cwe": {
}
}
Response Not Applicable
HTTP Response Status: 500 Internal Server Error
Time: 31ms