


Qualitative risk analysis: some problems and remedies

Jan Emblemsvåg, Fora Form AS, Ørsta, Norway, and
Lars Endre Kjølstad, Virtual Park Consulting, Stabekk, Norway

Received August 2005; Revised December 2005; Accepted December 2005

Abstract
Purpose – The article sets out to discuss and present a solution to the fact that various qualitative
risk analyses of the same problem can reach significantly different conclusions.
Design/methodology/approach – By reviewing a common risk analysis approach and identifying
where the possible problems arise, the authors propose ways to overcome the problems based on what
they have found in the literature in general.
Findings – There are ways to greatly reduce the problems, but this requires a risk analysis approach
in which information quality and consistency are the subject of greater focus.
Research limitations/implications – The definitions used, Monte Carlo methods and the
analytical hierarchy process are well tested in countless applications. Hence, the authors believe that
this work possesses no major limitations.
Practical implications – The approach has only been applied to theoretical situations; real-life
situations are needed to address possible practical limitations.
Originality/value – The paper illustrates the importance of distinguishing between “uncertainty”,
“risk” and “capabilities” and the associated implications. It also shows how this can be done in a
logically consistent way using the analytical hierarchy process so that the problem of inconsistency is
reduced, and how the analysis can be used to systematically improve itself. The proposed risk analysis
is a novel approach that has, to the authors’ knowledge, never been thought of before.
Keywords Risk management, Uncertainty management, Analytical hierarchy process,
Monte Carlo methods
Paper type Research paper

1. Frame of reference
Risk management is a hot topic for executives. In fact, the Turnbull Report, made at the
request of the London Stock Exchange (LSE), “is about the adoption of a risk-based
approach to establishing a system of internal control and reviewing its effectiveness”
(Jones and Sutherland, 1999), and it is a mandatory requirement for all companies at
the LSE. Moreover, by 2001, around 30 countries were looking at similar arrangements
on a voluntary basis (see Ward, 2001).
Risk management is also important due to the rapid changes in the business
environment. In fact, some of the major problems the Swedish-Swiss engineering giant
ABB was facing in 2002 were due to “poorly understood operational risks” according
to The Economist (2002). Clearly, growing uncertainty increases the need for better risk
management. However, we must acknowledge the dilemma that, despite some risks
arising from unpredictable situations for which there is little remedy in traditional risk
management practices, the increasing reliance on risk management has in fact led
decision makers to embark on actions that involve such risks. Thus, risk management
has led decision makers to take risks they normally would not take (see Bernstein,
1996). Arguably, decision makers face two major risks:
(1) the risk of ignoring risks; and
(2) the risk of being deceived by risk management.

(The views presented in this paper are solely those of the authors and are not necessarily
associated with Fora Form AS or Virtual Park. Management Decision, Vol. 44 No. 3, 2006,
pp. 395-408. © Emerald Group Publishing Limited, 0025-1747, DOI 10.1108/00251740610656278.)

A multitude of risk management approaches exists – some quantitative and some
qualitative. Due to the nature of most corporate risks a qualitative approach is
warranted despite the inherent limitations of such approaches. A serious limitation is
that “the choice of risk analysis approach seems to have a major impact on the
identification of risk sources, in terms of magnitude and location” (Backlund and
Hannu, 2002). In fact, three independent consulting companies performed a risk
analysis of the same hydroelectric power plant and reached widely different
conclusions, as that study reports. This, in addition to the aforementioned paradox,
is obviously not a desirable situation.
Before continuing, it is important to clarify what definitions of risk and uncertainty
we base our work on (see Emblemsvåg and Kjølstad, 2002). The definition of risk we
subscribe to is found in Webster’s Encyclopedic Unabridged Dictionary of the English
Language (1989): it is “exposure to the chance of injury or loss; a hazard or dangerous
chance” – while we suggest measuring risk in terms of “degree of impact and degree of
belief”. That is, risk arises due to choices made and choices not made. Uncertainty, on
the other hand, exists in two distinctly different forms:
(1) fuzziness; and
(2) ambiguity.

Fuzziness occurs whenever definite, sharp, clear or crisp distinctions are not made,
whereas ambiguity is the result of unclear definitions of various alternatives
(outcomes). Uncertainty is therefore the result of lack of information or clarity, and has
nothing to do with choice.
We start by briefly identifying why qualitative risk analyses are flawed in the sense
that they can produce wildly different results. Then, in section 3 we discuss the four
types of analysis required to analyze risks properly. In sections 4 and 5 we present our
remedies. A conclusion is provided in section 6. We use simple, functional examples
where needed to illustrate the points we try to make.

2. Why do qualitative risk analyses produce such different results?


Risk analyses typically involve identifying the risks, assessing their probabilities and
impacts, ranking them and screening out minor risks (see Emblemsvåg and Kjølstad,
2002). Here it suffices to acknowledge that this crucial step requires experience,
knowledge and creativity. This reliance on subjectivism is a challenge in itself because
it can produce widely different results, as the aforementioned study by Backlund and
Hannu (2002) points out. The challenge was that no consistent decision support
existed for improving the model other than to revise the input – sadly sometimes done
to obtain preconceived results. For example, suppose we identified three risks – A, B
and C – and wanted to assess their probabilities and impacts. The assessment is
usually performed by assigning numbers that describe probability and impact, but the
logic behind the assignment is unclear at best, and it is impossible to perform any sort
of analysis to further improve this assignment. Typically, the discussion ends up by
placing the risks in a matrix like those shown in Figure 1, but without any consistency
checks it is difficult to argue which one, if any, of the two matrices in Figure 1 fits
reality best. Thus, the recommendations can become quite different.
In addition to the important shortcomings of traditional risk analysis, as argued so
far, the entire risk analysis process typically lacks two important aspects, which
combine to further aggravate the problems, in our opinion:
(1) The capabilities of the organization – the strengths and weaknesses – are
either ignored or implicitly treated. This is a problem in itself because risk
management strategies that cannot be implemented are useless, and we should
not treat risks the same way regardless of whether we are capable or not with
respect to the particular risks in discussion. Understanding that risks are
relative to the organization’s capabilities points risk analysis towards strategic
analysis. Consequently, we regard risk management just as much a matter of
managing capabilities as managing risks – in all fairness, risk cannot be
managed; it is the organization that must be managed in anticipation of risks.
(2) There is little or no management of information quality. Management of
information quality is crucial in risk management because uncertainty can be
defined as a state for which we lack information (see Emblemsvåg and Kjølstad,
2002). Thus, uncertainty analysis should play an integral part in risk analysis to
ensure that the uncertainty in the risk analysis is kept at an economically feasible
level. The same argument also holds for the usage of sensitivity analyses, in both
risk and uncertainty analyses. Backlund and Hannu (2002) also support the
usage of uncertainty and sensitivity analyses to improve the model quality.

These aspects are elaborated upon in the next two sections. However, we would first
like to clarify the various types of analyses that are crucial to understand risk and
uncertainty as alluded to above.
We propose four types of analyses that are integrated in the same model:
(1) a risk analysis;
(2) a sensitivity analysis of the risk analysis;
(3) an uncertainty analysis of the risk analysis; and
(4) a sensitivity analysis of the uncertainty analysis.

Figure 1. The arbitrary assignment of probability and impact in a risk ranking matrix
The purpose of these analyses is not just to analyze risks but also to provide a context
for information management. Most approaches lack this capability and hence lack any
systematic way of improving themselves. One reason may be that many simply do not
distinguish between risk and uncertainty; for example, Friedlob and Schleifer (1999)
claim that “risk is uncertainty” for auditors. Another reason is that simple risk
matrices like the ones shown in Figure 1 are basically too simplistic to provide any
guidance with respect to information or decision improvement.

3. Integrating risk analysis, uncertainty analysis and sensitivity analysis


In this section we illustrate why the difference between risk and uncertainty analyses
and information management is so important. It will also become apparent how Monte
Carlo methods can greatly help us in this matter – not just in this example but also in
real life.
In our example a decision maker must decide whether to reuse or recycle glass
bottles she will receive next year. However, the number of bottles is associated with
some uncertainty that can materialize into two mutually exclusive (and collectively exhaustive)
risks:
(1) the risk of receiving 10,000 units; or
(2) the risk of receiving 30,000 units.

The probability for receiving 10,000 units is an estimated 40 percent, while the
probability for receiving 30,000 units is an estimated 60 percent. If the decision maker
chooses to reuse the bottles but receives only 10,000 she faces a loss, whereas 30,000
will give her the highest profit of $40,000. The recycling action also produces $30,000.
To decide, the decision maker designs a decision tree (see Figure 2) and uses the
expected monetary value (EMV) as a measure of risk. The EMV is calculated by
multiplying the outcome values by the associated probabilities, and then summing up
the outcomes of the events associated with an action. We see that recycling has the
highest EMV, and therefore the lowest expected risk because the risks in this example
are opportunity risks – that is, they are associated with loss of opportunity. Note that
risks are “not just bad things happening but also good things not happening”, as
quipped by Jones and Sutherland (1999) and others.
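As a small sketch, the EMV calculation just described can be reproduced in a few lines; the loss incurred when reusing only 10,000 bottles is a hypothetical figure, since the text does not state it:

```python
# EMV of the two actions in the decision tree of Figure 2.
# The reuse loss at 10,000 units is assumed (not given in the text).
p_10k, p_30k = 0.40, 0.60     # estimated probabilities

loss_reuse_10k = -10_000      # hypothetical loss figure
profit_reuse_30k = 40_000     # highest profit, per the text
profit_recycle = 30_000       # "the recycling action also produces $30,000"

# EMV = sum of outcome values weighted by their probabilities.
emv_reuse = p_10k * loss_reuse_10k + p_30k * profit_reuse_30k
emv_recycle = profit_recycle

best = "Recycle" if emv_recycle > emv_reuse else "Reuse"
print(emv_reuse, emv_recycle, best)  # 20000.0 30000 Recycle
```

Under these assumed numbers recycling keeps the highest EMV, matching the conclusion above.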
Figure 2. Decision tree example

So far, we have only calculated the risk profile for the decision (as indicated by the
EMV) of the two possible actions, but we do not know anything about the uncertainty:
the numbers might be associated with major uncertainty, and hence the conclusion
may be less clear-cut. To gain better understanding we consequently perform an
uncertainty analysis.
In uncertainty analyses we try to understand how the uncertainty in source
variables generates uncertainty in response variables (the EMV). Important questions
to be answered include: “do we have sufficient information to provide decision support
with an acceptable level of uncertainty?” and “what information is most critical for the
uncertainty level in the risk analysis?”. In risk analyses, however, we try to understand
how risks are generated, their probability of occurrence and impact, and so on. We also
rank the risks in order to focus attention on the most important ones.
In Figure 3 we have pictured the difference between modeling risk and modeling
uncertainty in the case shown above. To the left we have simply given a standard
variation around the expected impact of recycling 30,000 bottles, while to the right we
have tried to model the actual uncertainty. So far, there is little apparent difference, but if
we run a Monte Carlo simulation we will understand the point. For brevity we refer to
Emblemsvåg (2003) for more information about Monte Carlo simulations and how they
can be used in a variety of situations, including critical assumption planning (CAP).
The point is that the risk modeling does not provide any insight into the uncertainty
of the risk analysis but rather provides an excellent ranking of source variables.
Because the source variable in Figure 3 is symmetric and bounded (modeled as a
triangular distribution with a 10 percent variation around the expected impact) we can
accurately trace the most important factors for the risk profile, as shown in Figure 4.
We see, for example, that it is the probability of receiving 30,000 bottles that is the
single most important factor for the risk profile.
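To make the point concrete, here is a minimal Monte Carlo sketch: the $40,000 impact is modeled as a symmetric, bounded triangular distribution with 10 percent variation around its expected value, and the $10,000 reuse loss is again a hypothetical figure not stated in the text:

```python
import random

# Monte Carlo sketch: the $40,000 impact is drawn from a symmetric,
# bounded triangular distribution (10 percent variation around its
# expected value); the $10,000 reuse loss is a hypothetical figure.
random.seed(42)

p_30k = 0.60
expected_impact = 40_000
loss_10k = -10_000  # assumed, not stated in the text

emvs = []
for _ in range(10_000):
    impact = random.triangular(0.9 * expected_impact,   # lower bound
                               1.1 * expected_impact,   # upper bound
                               expected_impact)         # mode
    emvs.append((1 - p_30k) * loss_10k + p_30k * impact)

mean_emv = sum(emvs) / len(emvs)
print(round(mean_emv, -2))  # close to the deterministic EMV of 20,000
```

The simulated EMV now comes with a spread rather than a single number, which is exactly the kind of distribution shown in Figure 6.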

Figure 3. An example of risk modeling (left) versus uncertainty modeling (right)

Figure 4. The most important factors for the risk profile
From an uncertainty perspective, however, we see from Figure 5 that it is the recycling
impact of receiving 30,000 bottles that introduces the most uncertainty into the risk
profile – not the probability estimate. Thus, if we are to spend more resources on
increasing the accuracy of the risk analysis – that is, to reduce the uncertainty – we
should put more work into assessing the recycling impact of receiving 30,000 bottles. If
we are to reduce the actual risk, however, we should try to reduce the probability.
In this example we must decide under the presence of both risk and uncertainty,
which is also the most common case in real life. In Figure 6 the two options are
presented, and we see that there is a slight possibility of “Reuse” actually having a
higher EMV than “Recycling”, but under most circumstances we can virtually
guarantee that the Recycling option will have the highest EMV. The sensitivity
analysis in Figure 5 is effective in identifying how to reduce the uncertainty in Figure 6.
We can also use Figure 6 to calculate how much the uncertainty must be reduced in
order to separate the two options, which is important to clarify the decision.
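A statistical sensitivity ranking of the kind shown in Figures 4 and 5 can be generated during the simulation by correlating each sampled source variable with the response. The following is a minimal sketch under the same hypothetical assumptions (triangular source distributions, a $10,000 reuse loss); Monte Carlo tools typically report rank correlations of this kind:

```python
import random

# Sketch of a statistical sensitivity analysis: each source variable
# is correlated with the response (the EMV) to rank its influence.
random.seed(1)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

p_samples, impact_samples, emv_samples = [], [], []
for _ in range(5_000):
    p = random.triangular(0.5, 0.7, 0.6)                # probability of 30,000 bottles
    impact = random.triangular(36_000, 44_000, 40_000)  # impact at 30,000 bottles
    p_samples.append(p)
    impact_samples.append(impact)
    emv_samples.append(p * impact + (1 - p) * -10_000)  # assumed loss

# The larger the correlation, the more influential the source variable.
sens = {
    "p_30k": pearson(p_samples, emv_samples),
    "impact_30k": pearson(impact_samples, emv_samples),
}
print(sens)  # the probability estimate should dominate, echoing Figure 4
```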
The sensitivity analyses in Figures 4 and 5 also illustrate a vital point that is
usually omitted in risk analyses, namely the importance of conducting sensitivity
analyses. We subscribe to statistical sensitivity analyses, and they can be easily
generated during a Monte Carlo simulation (see Emblemsvåg, 2003). Sensitivity
analyses enable us to identify the critical success factors, whether they are risk,
uncertainty or anything else. Thus, the sensitivity analyses increase our
understanding of both the risk analyses and the uncertainty analyses, which is of
great importance for the improvement of the risk analysis process. A critical review
will in this context revolve around finding answers to a variety of “why?” questions,
as well as judging to what extent the analyses provide useful input to the risk analysis
process and what must be done about significant gaps. Basically, we must understand
how the analyses work, why they work, to what extent they work as planned, and what
can be done to reduce deviations. With this in mind we now proceed to our next topic of
improvement – ensuring consistency in the risk analysis.

Figure 5. The most important uncertainties for the risk profile

Figure 6. Comparison under the presence of risk and uncertainty
4. Ensuring consistent risk analyses
The great virtue of mathematics is its consistency – a trait no other system of thought
can match. Despite the inherent translation uncertainty between qualitative and
quantitative measures, the only way to ensure consistent qualitative risk analyses is to
translate the qualitative measures into numbers and then perform some sort of
consistency check. The only approach that can handle qualitative issues with
controlled consistency is the analytical hierarchy process (AHP) and variations thereof.
Thomas Lorie Saaty developed AHP in the late 1960s primarily to provide decision
support for multi-objective selection problems. Since then, Saaty and Forsman (1992)
have utilized AHP in a wide array of situations including resource allocation,
scheduling, project evaluation, military strategy, forecasting, conflict resolution,
political strategy, safety, financial risk and strategic planning. Others have also used
AHP in a variety of situations such as in supplier selection (Bhutta and Huq, 2002), to
determine measures of business performance (Cheng and Li, 2001), and in quantitative
construction risk management of a cross-country petroleum pipeline project in India
(Dey, 2001).
The greatest advantage of the AHP concept, for our purpose, is that it incorporates a
logical consistency check of the answers provided by the various participants in the
process. As Cheng and Li (2001) claim, “it [AHP] is able to prevent respondents from
responding arbitrarily, incorrectly, or non-professionally”. The arbitrariness of Figure 1
will consequently rarely occur. Furthermore, the underlying mathematical structure of
AHP makes sensitivity analyses both with respect to the risk and uncertainty analysis
meaningful, which in turn guides learning efforts. This is impossible in traditional
frameworks.
The relative rankings generated by the AHP matrix system can be used as so-called
subjective probabilities or possibilities (see Emblemsvåg and Kjølstad, 2002), as well as
relative impacts or relative capabilities. The estimates will be relative, but that is
sufficient since the objective of a risk analysis is to effectively direct attention towards
the critical risks so that they will be attended to. However, by including a known
absolute reference in the AHP matrices we can provide absolute ranking if desired.
The first step in applying the AHP matrix system is to identify the risks we
want to rank. Second, due to the hierarchical nature of the AHP
approach, we must organize the items as a hierarchy. For example, all risks are divided
into commercial risks, technological risks, financial risks, operational risks, and so on.
These risk categories are then broken down into detailed risks. For example, financial
risks may consist of cash flow exposure risks, currency risks, interest risks and so
forth. It is important that the number of children below a parent in a hierarchy is not
more than nine, because human cognition has great problems handling more than nine
issues at the same time (see Miller, 1956). In our experience, it is wise to limit oneself to
seven or fewer children per parent. Third, we must perform the actual pair-wise
comparison.
To operationalize the pair-wise comparisons we use the ordinal scales and the
average Random Index (RI) values provided in Tables I and II. According to Peniwati
(2000), the RIs are defined to allow a 10 percent inconsistency in the answers, because it
is better to be approximately right than precisely wrong. Note that the values in Table I
must be interpreted in their specific context. Thus, when we speak of a probability of scale
1, it should linguistically be interpreted as “equally probable”.
To exemplify AHP, suppose we have identified four risks – R1, R2, R3 and R4 –
and want to estimate their subjective probability. We perform a pair-wise comparison
by comparing the probability of R1 to the probability of R2 as shown in Table III using
the scales in Table I. We see, for example, that we believe the probability of R1
occurring is “absolutely greater” than the probability of R2 occurring. Similarly, R4 has

Table I. Scales of measurement in pair-wise comparison (Source: Saaty, 1990)

Intensity of importance        Definition               Explanation
1                              Equal importance         Two items contribute equally to the objective
3                              Moderate importance      Experience and judgment slightly favor one over another
5                              Strong importance        Experience and judgment strongly favor one over another
7                              Very strong importance   An activity is strongly favored and its dominance is demonstrated in practice
9                              Absolute importance      The importance of one over another affirmed on the highest possible order
2, 4, 6, 8                     Intermediate values      Used to represent compromise between the priorities listed above
Reciprocals of above numbers                            If item i has one of the above non-zero numbers assigned to it when compared with item j, then j has the reciprocal value when compared with i

Table II. Average random index values (Source: Saaty, 1990)

Size of matrix   Average random index
1                0.00
2                0.00
3                0.58
4                0.90
5                1.12
6                1.24
7                1.32
8                1.41
9                1.45
10               1.49
moderately higher probability than R3, and so forth. Based on these comparisons we
calculate the subjective probabilities. R1 has a 57 percent probability of occurrence
relative to the other risks. The relative nature of the matrix system is why there are
ones (1) on the matrix diagonal – the probability of R1 relative to R1 is of course equal.
We also note the consistency ratio (CR), which is 0.046. The corresponding RI is found
in Table II, and is 0.90. Since the CR is well below the 10 percent inconsistency
threshold, we conclude that the matrix is internally consistent, and the subjective
probability estimates are therefore logically consistent.
Ranking many risks in this fashion requires a large number of small matrices, like
the one in Table III, but each one of them is easy to fill in and hence the amount of time
spent is modest. We believe it is far less effective to make arbitrary “guesstimates” and
then wonder what went wrong. For an experienced practitioner, filling out a matrix like
the one in Table III will only take four or five minutes, which is quite efficient. Clearly,
AHP can increase the quality of the risk analyses substantially, and due to the rigorous
mathematics behind the consistency check and rank calculations, uncertainty analyses
can provide meaningful results. For decisions demanding a broader consensus,
uncertainty analysis is an excellent approach: outcomes tend to be more strongly
rooted, despite differences of opinion about individual pairs, because the approach
invites meaningful discussion rather than mere guessing.
For example, an uncertainty analysis of the situation in Table III is shown in
Figure 7. We see, for example, that the probability estimate for R1 is the one that is
associated with the most uncertainty. From the sensitivity analysis in Figure 8 we see
why: the R1 probability relative to the R4 probability is the single most important
source of uncertainty to the R1 probability estimate. Furthermore, by increasing the
relative probability of R1 to R4, the R1 probability estimate will increase. Hence, if we

Table III. Calculation of subjective probability of risks R1 through R4

Risk   R1    R2    R3    R4    Probability (percent)
R1     1     9.0   5.0   3.0   57
R2     0.1   1     0.3   0.2   5
R3     0.2   4.0   1     0.3   13
R4     0.3   6.0   3.0   1     26
CR = 0.046
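As a sketch, the priorities and the CR of Table III can be reproduced with a simple power iteration; the reciprocal cells are entered exactly (1/4, 1/6, ...) rather than with the rounded display values, so the results may differ slightly from those in the table:

```python
# Sketch of the AHP calculation behind Table III: priorities from the
# principal eigenvector (power iteration) and the consistency ratio.
A = [
    [1.0,     9.0, 5.0,     3.0],      # R1
    [1 / 9.0, 1.0, 1 / 4.0, 1 / 6.0],  # R2
    [1 / 5.0, 4.0, 1.0,     1 / 3.0],  # R3
    [1 / 3.0, 6.0, 3.0,     1.0],      # R4
]
n = len(A)

# Power iteration for the principal eigenvector (the priority vector).
w = [1.0 / n] * n
for _ in range(100):
    v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    s = sum(v)
    w = [x / s for x in v]

# lambda_max, consistency index CI and consistency ratio CR = CI / RI,
# with RI = 0.90 for a 4x4 matrix (Table II).
lambda_max = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
                 for i in range(n)) / n
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.90

print([round(x, 2) for x in w])  # roughly [0.57, 0.04, 0.12, 0.26]
print(round(CR, 3))              # close to the paper's CR of 0.046
```

Since the CR stays below the 0.10 threshold, the matrix passes the consistency check described above.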

Figure 7. Uncertainty analysis of the risk estimates in Table III
increase the R1 to R4 comparison from 3 to 4, the R1 probability estimate will increase
(to 59 percent).
The reason the R4 to R3 comparison impacts the R1 probability estimate is that any
pair-wise comparison influences the basis from which the R1 probability estimate is
derived. In any case, we can easily identify which pair-wise comparisons in Table III
impact the R1 probability estimate the most. Such information is useful in determining
how to reduce the uncertainty of the R1 probability estimate further, or at the very least
explain the R1 probability distribution in Figure 7.
In short, the AHP matrix structure can be used to effectively provide consistent
relative rankings, and it opens the way for meaningful uncertainty and
sensitivity analyses, which in turn improve the risk analysis.

5. Matching risks and capabilities


Matching risks and capabilities can be done in several ways. Emblemsvåg and
Kjølstad (2002) utilized a simple plus/minus system, but here a more reliable system is
presented.
Due to the flexibility of AHP, it can also be applied to capabilities. The difficulty in
bringing capabilities into the risk analysis lies in matching the risk and capabilities in
a credible fashion so that risk management strategies are adapted to the organization’s
capabilities. This must be done in a matrix in order to use the AHP, but to keep the
matrix manageable we suggest limiting the matrix to the risks and capabilities so that
added together they constitute about 80-95 percent of the risk exposure.
The first step in this process is to compute the relative ranking of the capabilities in
the same manner as the risks previously discussed. This is necessary to provide focus
on the most important ones. Next, we assign values as illustrated in Table IV. Suppose
we have the relative ranking of risks and capabilities as shown in Table IV. The
numbers in the middle of the matrix are on a -9 to 9 scale, where -9 indicates that the
capabilities are absolutely insufficient in managing the risk while a 9 indicates that the
capabilities are fully capable of preventing or mitigating the risk. A blank matrix value
equals a zero, which indicates that the capability and the risk have nothing to do with
each other in any noticeable fashion.
Figure 8. Sensitivity analysis of the uncertainty analysis in Figure 7

Interestingly, we can compute a single number (6.87) in the bottom right-hand
corner of Table IV that describes the organization’s potential capability of managing
its own risks. The higher the number, the more capable the organization should be of
managing its own risks. This number is undoubtedly subjective – but quite consistent
– and associated with uncertainty as shown in Figure 9, but it provides a quite good
indication of the risk profile of the organization relative to its capabilities. We therefore
denote this number and its associated uncertainty distribution the relative risk profile
(RRP) of the organization.
Furthermore, by running a sensitivity analysis of the risk-capability matching we
can identify the critical managerial implications (Figure 10) and the greatest sources of
uncertainty (Figure 11), which in turn can be used to improve the analyses.

Table IV. Matching risks and capabilities

Capabilities              C1     C2     C3     C4     C5     Relative risk
Relative ranking          33%    14%    21%    11%    19%
R1 (49%)                  7, -4, 1                           3.31
R2 (17%)                  -5, 8                              1.33
R3 (22%)                  4, 3, -1                           1.70
R4 (7%)                   4, -6, 7                           0.52
Relative effectiveness    1.60   1.19   1.46   1.18   1.44   6.87

Figure 9. RRP probability distribution

Figure 10. Risk sensitivity analysis

Figure 11. Uncertainty sensitivity analysis

In Figure 10, we can easily identify what drives the RRP. The most important factor is
the actual risk level of R1, followed by the ability of C1 to counter R1 and the actual
effectiveness of C1. From an uncertainty perspective, however, we see that the situation
is quite different. The C4 effectiveness impacts the uncertainty level of RRP the most
(see Figure 11). To reduce the uncertainty of RRP we must consequently obtain better
information about C4. C3 and the uncertainty of R1 are also important.
In this way we have a powerful approach for devising consistent risk analyses from
which sound risk management strategies can be crafted. It should be noted, however,
that even though our approach produces 90 percent consistent risk analyses, mistakes
in the risk analysis will occur if the problem at hand is not well understood. What we
can guarantee, however, is that the risk analyses are internally consistent and hence
that we will always focus on the right risks relative to our capabilities first, although
the degree to which the risks must be attended to requires further consideration of
the costs and benefits of attending to them. Put differently, we will not be able to
identify the absolute magnitudes of risks, but we will be able to rank them correctly.
This is a step forward compared to the situation described by Backlund and Hannu
(2002) and often found elsewhere in practice.

6. Closing remarks
The presented approach is “self-adjusting” in that it can be used to reduce the
uncertainty in the risk analysis while also improving the quality of risk analyses in
general by providing consistency checks and decision support for improved
information management. The next steps in our quest to improve risk management
will be to perform such risk analyses of multiple organizations and then compute the
RRP. This will also allow us to better understand the relations between risk and
profitability over time, which is important for strategic risk management – the main
focus of our work. We also believe that the overall risk management process must
be augmented in order to better facilitate information and knowledge management,
which is vital to further reduce the chance of self-fulfilling prophecies in risk
management.
As always, however, good results come from the combination of skilled
implementation and thorough knowledge of the approach and of the organization’s
actual situation. We further believe that the risk analysis approach described here
stands a better chance of avoiding the problems described by Backlund and Hannu
(2002) because the risk analysis is enhanced by:
. uncertainty analyses to show the extent of uncertainty in the risk analysis;
. sensitivity analyses to provide learning on what drives risks and uncertainties;
. AHP to provide logically consistent results; and
. the risk-capability matching to allow effective identification of risk management
strategies.

Thus, as argued and illustrated in this paper, to distinguish between risk, uncertainty
and capabilities on the one hand and to handle the information in a consistent fashion
is crucial to prevent the risk management process from becoming either a self-fulfilling
prophecy or an arbitrary process, but as Hegel warns: “Theories conceal as much as
they reveal”.

References
Backlund, F. and Hannu, J. (2002), “Can we make maintenance decisions on risk analysis
results?”, Journal of Quality in Maintenance Engineering, Vol. 8 No. 1, pp. 77-91.
Bernstein, P.L. (1996), Against the Gods: The Remarkable Story of Risk, Wiley, New York, NY,
p. 383.
Bhutta, K.S. and Huq, F. (2002), “Supplier selection process: a comparison of the total cost of
ownership and the analytic hierarchy process approaches”, Supply Chain Management:
An International Journal, Vol. 7 No. 3, pp. 126-35.
Cheng, E.W.L. and Li, H. (2001), “Analytic hierarchy process: an approach to determine measures
for business performance”, Measuring Business Excellence, Vol. 5 No. 3, pp. 30-6.
Dey, P.K. (2001), “Decision support system for risk management: a case study”, Management
Decision, Vol. 39 No. 8, pp. 634-49.
(The) Economist (2002), “Barnevik’s bounty”, The Economist, Vol. 362 No. 8262, p. 62.
Emblemsvåg, J. (2003), Life-Cycle Costing: Using Activity-Based Costing and Monte Carlo
Methods to Manage Future Costs and Risks, Wiley, New York, NY, p. 320.
Emblemsvåg, J. and Kjølstad, L.E. (2002), “Strategic risk analysis – a field version”, Management
Decision, Vol. 40 No. 9, pp. 842-52.
Friedlob, G.T. and Schleifer, L.L.F. (1999), “Fuzzy logic: application for audit risk and
uncertainty”, Managerial Auditing Journal, Vol. 14 No. 3, pp. 127-35.
Jones, M.E. and Sutherland, G. (1999), Implementing Turnbull: A Boardroom Briefing, The Center
for Business Performance, The Institute of Chartered Accountants in England and Wales
(ICAEW), London, p. 34.
Miller, G.A. (1956), “The magical number seven, plus or minus two: some limits on our capacity
for processing information”, Psychological Review, Vol. 63, pp. 81-97.
Peniwati, K. (2000), The Analytical Hierarchy Process: Its Basics and Advancements, INSAHP,
Jakarta.
Saaty, T.L. (1990), The Analytic Hierarchy Process: Planning, Priority Setting, Resource
Allocation, RWS Publications, Pittsburgh, PA, p. 480.
Saaty, T.L. and Forsman, E. (1992), The Hierarchon: A Dictionary of Hierarchies, Expert Choice,
Inc., Arlington, VA.
Ward, G. (2001), “Corporate governance: why should companies care?”, speech given at INSEAD,
Fontainebleau, July 11.
Webster’s Encyclopedic Unabridged Dictionary of the English Language (1989), Gramercy Books,
New York, NY, p. 1854.

Further reading
Davenport, T.H., Delong, D.W. and Beers, M.D. (1998), “Successful knowledge management
projects”, Sloan Management Review, Vol. 39 No. 2, pp. 43-57.
Government Asset Management Committee (2001), Risk Management Guideline, New South
Wales Government Asset Management Committee, Sydney, p. 43.

Corresponding author
Lars Endre Kjølstad can be contacted at: lakjolst@online.no

