
Introduction

This paper provides a critical reflection on two essays by Cox: "What's Wrong with Hazard Ranking
Systems? An Expository Note" (Cox, 2009) and "Clarifying Types of Uncertainty: When Are Models Accurate,
and Uncertainties Small?" (Cox, 2011), and on their applicability to risk management process analysis.
Lastly, Assignments 1 and 2 are reflected upon in light of the insights gained from these essays.

Reflection on Cox (2009)

As the title suggests, the author sets out to explain what is wrong with the hazard ranking systems
currently in use. He argues that currently used risk-scoring methods are not appropriate for correlated
risks. Rather than using priority scores to select risk-reducing activities, optimization techniques that
consider interdependencies among the consequences of different risk-reducing activities should be used.

Currently used priority-scoring systems do not consider the correlation among consequences.
Examples of hazard ranking systems from several industries are discussed, such as the vulnerability
scoring system used in information technology, the Superfund priority score method used in the State of
Connecticut, a risk priority scoring system for bioterrorism agents, and an antiterrorism risk-reduction
method used by Homeland Security. In general, such risk matrices order some pairs of risks incorrectly
and, in some cases, can perform even worse than setting priorities randomly (Cox, 2008a). The author
therefore suggests an optimization method instead of a ranking system.

To enable formal analysis of priority-scoring systems in a conceptual framework, the author describes the
following elements of the priority-setting process: a set of items to be ranked or scored, an ordered set of
priority scores, and a priority-scoring rule. The priority-scoring rule determines the order in which
hazards are to be addressed.
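The three elements of this framework can be sketched in a few lines of Python. All hazard names, risk values, and score thresholds below are hypothetical, chosen only to illustrate how the elements fit together:

```python
# Minimal sketch of Cox's priority-scoring framework.
# All hazards, risk values, and thresholds are hypothetical.

# Element 1: a set of items to be ranked or scored.
hazards = {"chemical spill": 0.7, "data breach": 0.9, "power outage": 0.4}

# Element 2: an ordered set of priority scores (High > Medium > Low).
def priority_score(risk):
    if risk >= 0.8:
        return "High"
    if risk >= 0.5:
        return "Medium"
    return "Low"

# Element 3: the priority-scoring rule, which determines the order
# in which hazards are addressed (here: highest underlying risk first).
order = sorted(hazards, key=hazards.get, reverse=True)
for h in order:
    print(h, priority_score(hazards[h]))
```

Note that each hazard is scored in isolation; this is exactly the structural feature Cox criticises when risks are correlated.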

Cox then defines a method to determine priorities for independent, normally distributed risk
reductions. However, when risks (or risk-reduction opportunities) are correlated, this method cannot be
applied. A number of specific examples in which this problem arises are then discussed in the article.
The author does not, however, provide general guidelines on risk dependencies.

To illustrate the problem that arises when risks (or risk-reduction opportunities) are correlated, examples
from the IT and biotechnology industries are discussed and solutions are provided.
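The core of Cox's argument can be made concrete with a small numerical sketch (the projects, scenarios, and numbers below are hypothetical, not taken from the essay). Two projects that each look best in isolation may be perfectly correlated, so funding both leaves one scenario completely unprotected, whereas optimizing over the portfolio as a whole accounts for the interdependency:

```python
from itertools import combinations

# Hypothetical example: three risk-reducing projects and two equally
# likely scenarios. Each entry gives the risk reduction a project
# achieves in (scenario 1, scenario 2). A and B are perfectly
# correlated: both only help in scenario 1.
projects = {"A": (10, 0), "B": (10, 0), "C": (0, 8)}

budget = 2  # we can fund two projects

def expected(portfolio):
    return sum(0.5 * (projects[p][0] + projects[p][1]) for p in portfolio)

def worst_case(portfolio):
    return min(sum(projects[p][s] for p in portfolio) for s in (0, 1))

# A priority-scoring rule ranks projects by their individual expected
# reduction and funds the top two: it picks A and B ...
by_score = sorted(projects, key=lambda p: expected([p]), reverse=True)[:budget]

# ... but optimizing over whole portfolios, which accounts for the
# correlation between A and B, prefers A and C instead.
best = max(combinations(projects, budget), key=worst_case)

print(by_score)      # ['A', 'B'] -> worst-case reduction 0
print(sorted(best))  # ['A', 'C'] -> worst-case reduction 8
```

The objective here (maximizing the worst-case reduction) is just one possible stand-in for the portfolio optimization Cox advocates; the point is that any item-by-item score discards the correlation information.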

Reflection on Cox (2011)

This article is a commentary on an article written by Professor Aven (Aven, 2011). Its purpose is to describe
"scientific uncertainty" in relation to its role in risk management and policy decisions. According to Professor
Aven, "scientific uncertainty" should be considered greatest when no accurate prediction model can be
established; it is small when an accurate model exists and the uncertainties about its inputs are "small". By
means of several examples, Cox (2011) elaborates further on the concepts of an "accurate" prediction model
and "small" uncertainty in its inputs.

In the first example, the author shows how a larger input uncertainty may create a smaller output
uncertainty. In the second example, a prediction model is discussed that is correct but not accurate in
every situation: inputs measured with only tiny uncertainty may be amplified by the model to the point
that its output is useless. The accuracy of a model therefore depends on the context or purpose for
which it is used. The third example illustrates that there is no reason to assume that an accurate
prediction model is also an accurate causal model, and that the usefulness of a model is context-driven.

In general, we may have to accept that the models and uncertainties that matter in risk analysis are often
too complex to permit useful classification systems, such as "large" or "small", or "accurate" and
"inaccurate". Instead, models using constraint-based, non-taxonomic, and detailed characterizations of
uncertainty may ultimately prove highly useful to the risk analysis process.

Discussion in relation to Assignment 1 & 2

In Assignments 1 and 2, risks were identified in relation to Covid-19 within EPCOR BV, a Maintenance
Repair Organisation (MRO) for aircraft parts. Various risk analysis tools were used, such as the 8Rs and
4Ts, PESTEL, SWOT, and Bow-tie analysis.

Specific risks were identified in the PESTEL, SWOT, and Bow-tie analyses. These are qualitative methods,
while the essays of Cox (2009, 2011) are mostly applicable to quantitative risk modelling (prediction
models) rather than qualitative risk modelling. However, the concept of interdependencies between risks
and consequences described in Cox (2009) is applicable to Covid-19 within EPCOR BV. For example, see
below the consequences described in the Bow-Tie matrix of Assignment 2:

Consequence #   Consequence                                            Increases risk on consequence
1               Infection/death of an employee                         2
2               Contamination of multiple employees (virus outbreak)   1, 3
3               Business interruption                                  none

In the Bow-Tie of Assignment 2 no specific optimization method is used, and all consequences are
therefore treated as equal (not ranked). In Assignment 1 a ranking system was discussed (derived from
ICAO Document 9859), but it does not provide examples usable in this discussion. To determine the
consequence with the most impact, an optimization method using the interdependencies between
consequences should be used instead of a ranking system (Cox, 2009). In this example, consequence 1
increases the risk of consequence 2; consequence 2 increases the risk of consequences 1 and 3; and
consequence 3 has no causal relationship with either of the other consequences. Using these
interdependencies in this simple example, one may conclude that consequence 2 has the most impact
overall.
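The reasoning above can be sketched in a few lines of Python. The graph encodes only the interdependencies from the Bow-Tie matrix of Assignment 2, and counting outgoing edges is a deliberately simple proxy for "impact", not Cox's optimization method:

```python
# Interdependencies from the Bow-Tie example:
# each consequence maps to the consequences whose risk it increases.
increases = {
    1: [2],      # infection/death of an employee -> virus outbreak
    2: [1, 3],   # outbreak -> more infections and business interruption
    3: [],       # business interruption -> none
}

# Simple proxy for "most impact": the number of other consequences
# whose risk a given consequence increases.
most_impact = max(increases, key=lambda c: len(increases[c]))
print(most_impact)  # -> 2
```

A full treatment in Cox's sense would optimize over combinations of risk-reducing activities rather than rank single consequences, but even this crude count already singles out consequence 2.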
