
Safety Science 47 (2009) 798–806. doi:10.1016/j.ssci.2008.10.008


Perspectives on risk in a decision-making context – Review and discussion


Terje Aven
University of Stavanger, Stavanger, Norway
E-mail address: terje.aven@uis.no

Abstract
There exist many perspectives on risk, and traditionally some of these perspectives have been seen as representing completely different frameworks, making the exchange of ideas and results difficult. Much of the existing discussion of risk perspectives has, in our view, lacked a sufficient level of precision on the fundamental ideas of risk assessment and management. For example, there is more than one line of thinking in risk analysis and assessment, and mixing all approaches into one gives a rather meaningless discussion. In this paper we summarise and categorise some of the common perspectives on risk, including an approach integrating aspects of technical and economic risk analyses, as well as social scientists' perspectives on risk. For the different perspectives we clarify the meaning of key concepts such as risk and uncertainty. Special focus is placed on the different perspectives' impact on decision-making. Implementation of the ALARP principle is used as an example to illustrate the differences.

© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

The terminology and methods used for dealing with risk and uncertainty vary a lot, making it difficult to communicate across the various areas of application and disciplines. We also see a lot of confusion concerning what risk is and what should be the basic thinking concerning analysis of risk and uncertainty within the various application areas. This is not surprising when looking at the existing risk literature, and the review below will give some idea of the problems. Reference is made to so-called classical methods and Bayesian methods, but it is for most people difficult to see clearly what the alternative frameworks for analysing risk are. There is a lack of knowledge of what the analyses express and what the meaning of uncertainty in the results of the analyses is, even among experienced risk analysts. As a consequence, risk presentation and communication are often poor. This in turn could have serious implications for decision-making, for example by leading to the choice of the wrong option or measure.

It is common to consider risk as the two-dimensional combination of probability (likelihood) and consequences, where the consequences relate to, for example, loss of lives and injuries. This definition is in line with the one used by ISO (2002). However, it is also common to refer to risk as probability multiplied by consequences (losses), i.e. what is called the expected value in probability calculus. If the focus is the number of fatalities during a certain period of time, X, then the expected value is given by E[X], whereas risk defined as the two-dimensional combination of probability and consequence expresses probabilities for different outcomes of X, for example the probability that X does not exceed 10. Adopting the definition that risk is the two-dimensional combination of probability and consequence, the whole probability distribution of X is required, whereas the expected value refers only to the centre of gravity of this distribution.

Is risk more than expected values? We will discuss this issue below (Section 2). It is important because if risk can be adequately represented by expected values, risk assessment and risk management can focus on this quantity.

To understand the various definitions and aspects of risk, including expected values, we need to understand what a probability is. There are different interpretations. Here are the two main alternatives:

(a) A probability is interpreted in the classical, statistical sense as the relative fraction of times the event would occur if the situation analysed were to be hypothetically repeated an infinite number of times. The underlying probability is unknown, and is estimated in the risk analysis.

(b) A probability is a way of expressing uncertainty about the possible outcomes (consequences), seen through the eyes of the assessor and based on some background information and knowledge.

Following definition (a) we produce estimates of the underlying true risk. Such an estimate is uncertain, as there could be large differences between the estimate and the correct risk value. As these correct values are unknown, it is difficult to know how accurate the estimates are. As an example, consider the probability of a fatal accident on the Norwegian Continental Shelf (NCS) next year. This probability is interpreted as the fraction of years with fatal accidents when considering an infinite number of similar years. Using historical data, expert judgments and models, we estimate this probability. Clearly, such an estimate could be subject to large uncertainties.
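To make the two interpretations concrete, the following minimal sketch (hypothetical numbers; the event, the 200 "similar years" and the value p_true = 0.01 are illustrative assumptions, not figures from the NCS) contrasts them:

```python
import random

random.seed(1)

# Interpretation (a): the probability of a fatal accident next year is the
# limiting fraction of "similar years" with such an accident.  The true value
# p_true is a mental construction; an analyst only ever sees an estimate of it.
p_true = 0.01
years = [random.random() < p_true for _ in range(200)]  # 200 hypothetical similar years
estimate = sum(years) / len(years)
print(f"relative-frequency estimate after 200 years: {estimate:.3f}")
# With so few "repetitions" the estimate can easily be 0.000 or 0.020,
# far from p_true in relative terms, illustrating the estimation uncertainty.

# Interpretation (b): the assessor directly assigns P(fatal accident) = 0.01,
# meaning that his/her uncertainty is comparable to drawing one particular
# ball from an urn of 100 balls.  There is no "true" value to estimate and no
# estimation error, only a judgment conditional on background knowledge.
p_assigned = 0.01
```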



Following interpretation (b), we assign a probability by performing uncertainty assessments, and there is no reference to a correct probability. A probability is always conditional on certain background information, and given this background information there are no uncertainties related to the assigned probability, as it is itself an expression of uncertainty. Returning to the NCS example above, the assessor assigns a probability of a fatal accident occurring next year, based on his/her background information. If it is 1%, the uncertainty is compared to drawing one particular ball out of an urn containing 100 balls.

The implications of the different interpretations are important. If the starting point is (a), there is a risk level that expresses the truth about risk, for example for an offshore installation at a given point in time. This risk level is unknown, and in many cases it is difficult to see whether people are talking about the estimates of risk or the real risk. If the starting point is (b), the expert's position may be weakened, as it is acknowledged that the risk description is a judgment, and others may arrive at a different judgment. Risk estimates also represent judgments, but the mixture of estimates and real risk can often give the experts a stronger position in that case.

Seeing risk as the two-dimensional combination of probability and consequence means a quantitative approach to risk. A probability is a number. Of course, a probability may also be interpreted in a qualitative way, using an interpretation such as the level of danger. We may for example refer to the danger of an accident occurring without reference to a specific interpretation of probability, either (a) or (b). However, as soon as we address the meaning of such a statement and the issue of uncertainty, we must clarify whether we are adopting interpretation (a) or (b). If there is a real risk level, it is relevant to consider and discuss the uncertainties of the risk estimates compared to the real risk. If probability is a measure of the analyst's uncertainty, a probability assignment is a judgment and there is no reference to a correct and objective risk level.

In some cases we have reference levels provided by historical records. These numbers do not, however, express risk, but provide a basis for expressing risk. In principle, there is a huge step from historical data to risk, which is a statement concerning the future. In practice, many analysts do not distinguish between the data and the risk derived from the data. This is unfortunate, as the historical data may to varying degrees be representative of the future, and the amount of data may often be very limited. For these reasons, a mechanical transformation of historical data into risk numbers should be avoided.

Economists usually see probability as a way of expressing uncertainty about the outcome, often seen in relation to the expected value. Variance and quantiles are common risk measures. Both of the interpretations (a) and (b) are applied, but in most cases without making it clear which interpretation is being used. In economic applications a distinction has traditionally been made between risk and uncertainty, based on the availability of information. Under risk the probability distribution of the performance measures can be assigned objectively, whereas under uncertainty these probabilities must be assigned or estimated on a subjective basis (Douglas, 1983). This latter definition of risk is seldom used in practice. In decision analysis, risk is often defined as expected disutility, i.e. E[u(X)], where the utility function u expresses the assessor's preferences for the different outcomes (X).

In psychology there has been a long tradition of work that adopts the perspective on risk that uncertainty can be represented as an objective probability (Pidgeon and Beattie, 1998). Here researchers have sought to identify and describe people's (lay people's) ability to express the level of danger using probabilities, and to understand which factors influence these probabilities. A main conclusion is that people are poor assessors if the reference is a real objective probability value, and that their probability estimates are strongly affected by factors such as dread.

Social scientists often use a broader perspective on risk. Here risk refers to the full range of beliefs and feelings that people have about the nature of hazardous events, their qualitative characteristics and benefits, and most crucially their acceptability. This definition is considered useful if lay conceptions of risk are to be adequately described and investigated. The motivation for this is the fact that there is a wide range of multidimensional characteristics of hazards, rather than just an abstract expression of uncertainty and loss, which people evaluate in forming perceptions, so that the risks are seen as fundamentally and conceptually distinct. Furthermore, such evaluations may vary with the social or cultural group to which a person belongs and the historical context in which a particular hazard arises, and may also reflect aspects of both the physical and the human or organisational factors contributing to the hazard, such as the trustworthiness of existing or proposed risk management. Another perspective, often referred to as cultural relativism, states that risk is a social construction and that it is therefore meaningless to speak about objective risk.

There also exist perspectives that intend to unify some of the perspectives above; see e.g. Rosa (1998) and Aven (2003, 2008a). One such perspective, the predictive Bayesian approach (Aven, 2003, 2008a), is based on interpretation (b), and makes a sharp distinction between historical data and experience, future quantities of interest such as loss of lives, injuries, etc. (referred to as observables), and predictions and uncertainty assessments of these. The thinking is analogous to cost risk assessments, where the costs, the observables, are predicted, and the uncertainties of the costs are assessed using probabilistic terms. Risk is viewed as the two-dimensional combination of (i) the consequences (of an activity) and (ii) associated uncertainties (about what the outcome will be). The uncertainties may to some extent be expressed or quantified using probabilities.

Using such a perspective, with risk seen as the two-dimensional combination of consequences and associated uncertainties, a distinction is made between risk as a concept and terms such as risk description, risk acceptance, risk perception, risk communication and risk management. This is in contrast to the broad definition used by some social scientists, in which this distinction is not clear. Adopting such a perspective, risk management needs to reflect this by:

• Focusing on different actors' analyses and assessments of risk.
• Addressing aspects of the uncertainties not reflected by the computed expected values and the probabilities.
• Acknowledging that what is acceptable risk and the need for risk reduction cannot be determined simply by reference to the results of risk analyses.
• Acknowledging that risk perception has a role to play in guiding decision-makers and that professional risk analysts do not have the exclusive right to describe risk.

Such an approach to risk is in line with the approach recommended by, for example, the UK government, see Cabinet Office (2002), and also with the trend seen internationally in recent years. An example where this approach has been implemented is the risk level Norwegian sector project, see Vinnem et al. (2006) and Aven (2003, p. 122).

The different perspectives on risk lead to different ways of assessing risk, which in turn may affect the risk management and decision-making in particular. If there is no true risk, a broader risk picture is normally produced, reflecting different views and assumptions. Such a basis would also lead to a stronger emphasis on the inherent uncertainties in phenomena and processes, as indicated above for the predictive Bayesian approach. In this way the balance between risk-taking behaviour and a cautionary attitude could be affected.
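Recalling the distinction drawn earlier between the expected value E[X] and the full distribution of X, a small sketch (hypothetical numbers, assuming scipy is available) shows how two models can share E[X] = 2 while differing sharply in tail risk, which is why the predictive Bayesian perspective insists on more than one summarising number:

```python
from scipy.stats import poisson

# X = number of fatalities next year under two hypothetical models that share
# the same expected value E[X] = 2 but differ strongly in the tail.
# Model 1: Poisson variation around the mean.
# Model 2: almost always 0 fatalities, but a catastrophe with 2000 fatalities
#          occurs with probability 0.001 (so E[X] = 0.001 * 2000 = 2).
mean = 2.0

p_tail_model1 = poisson.sf(10, mean)  # P(X > 10) under model 1, roughly 8e-6
p_tail_model2 = 0.001                 # under model 2, X > 10 only in the catastrophe

print(f"E[X] = {mean} in both models")
print(f"P(X > 10), model 1: {p_tail_model1:.1e}")
print(f"P(X > 10), model 2: {p_tail_model2:.1e}")  # about two orders of magnitude larger
```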


The remainder of this paper is organised as follows. First, in Section 2, we look more closely into some of the fundamental issues raised above: the use of expected values, the classical and Bayesian perspectives, and the dominant economic perspectives. Then in Section 3 we present our recommended approach to risk, in line with the above definition of risk (the two-dimensional combination of consequences and associated uncertainties). Section 4 discusses this approach and concludes. The appendix gives some bibliographic notes.

2. Fundamental issues

2.1. Is risk more than expected values?

If C is a quantity of interest, for example the number of future attacks, the number of fatalities, the costs, etc., an expected value would be a good representation of risk if this value is approximately equal to C, i.e. E[C] ≈ C. But since C is unknown at the time of the assessment, how can we be sure that this approximation is accurate? Can the law of large numbers be applied, expressing that the mean of independent identically distributed random variables converges to the expected value as the number of variables increases to infinity? Or the portfolio theory, saying that the value of a portfolio of projects is approximately equal to the expected value, plus the systematic risk (uncertainties) caused by events affecting the whole market (Aven and Vinnem, 2007)?

It is likely that, if C is the sum of a number of projects, or some average number, our expected value could be a good prediction of C. Take for example the number of fatalities in traffic in a specific country. From previous years we have data that can be used to accurately predict the number of fatalities next year (C). In Norway about 250 people were killed last year, and using this number as E[C] and as a predictor for the coming year, we would be quite sure that this number is close to the actual C. However, in many cases the uncertainties are much larger. Looking at the number of fatalities in Norway caused by terrorist attacks next year, the historical data would give a poor basis. We may assign an E[C], but obviously E[C] could be far from C. The accuracy increases when we extend the population of interest. If we look at one unit (e.g. a country) in isolation, the C numbers are in general more uncertain than if we consider many units (e.g. countries). Yet there will always be uncertainties, and in a world where the speed of change is increasing, relevant historical data are scarce and will not be sufficient to obtain accurate predictions.

Even so, many researchers define risk by the expected values. Consider the terrorism case discussed in Willis (2007). Willis (2007) defines risk as follows: Terrorism risk: the expected consequences of an existent threat, which for a given target, attack mode, target vulnerability, and damage type, can be expressed as

Risk = P(attack occurs) × P(attack results in damage | attack occurs) × E[damage | attack occurs and results in damage]

Willis (2007) refers to Haimes (2004), who highlights that expected value decision-making is misleading for rare and extreme events. The expected value (the mean or the central tendency) does not adequately capture events with low probabilities and high consequences. Nonetheless, Willis represents risk by the expected value as the basis for his analysis. The motivation seems to be that the expected value provides a suitably practical approach for comparing and aggregating terrorism risk, as it is based on just one number. For terrorism risk, where the possible consequences could be extreme and the uncertainties in underlying phenomena and processes are so large, it is however obvious that the expected value may hide important aspects of concern for risk management and related decision-making. The expected value can be small, say 0.01 fatalities, but extreme events with millions of fatalities may occur, and this needs special attention. Hence we need to see beyond the expected values.

We have to take into account uncertainties and risks. Risk management is concerned with how to assess these uncertainties and risks, and how to handle them. The cautionary principle is a basic principle in risk management, expressing the idea that, in the face of uncertainty, caution should be a ruling principle. This principle is implemented in all industries through risk regulations and requirements. For example, in the Norwegian petroleum industry it is a regulatory requirement that the living quarters on an installation should be protected by fireproof panels of a certain quality for walls facing process and drilling areas. This is a standard adopted to obtain a minimum safety level. It is based on established practice from many years of operation of process plants. A fire may occur; it represents a hazard for the personnel, and in the case of such an event the personnel in the living quarters should be protected. The assigned probability of the living quarters on a specific installation being exposed to fire may be judged as low, but we know that fires occur from time to time in such plants. It does not matter whether we calculate a fire probability as x or y, as long as we consider the risks to be significant; and this type of risk has been judged to be significant by the authorities. The justification is experience from similar plants and sound judgments. A fire may occur, since it is not an unlikely event, and we should then be prepared. We need no references to cost-benefit analysis. The requirement is based on cautionary thinking.

Risk analyses are tools providing insights into risks and the uncertainties involved. But they are just tools, with strong limitations. Their results are conditioned on a number of assumptions and suppositions. The analyses do not express objective results. Being cautious also means reflecting this fact. We should not put more emphasis on the predictions and assessments of the analyses than can be justified by the methods being used. In the face of uncertainties related to the possible occurrence of hazardous situations and accidents, we are cautious and adopt principles of risk management such as:

• Robust design solutions, such that deviations from normal conditions do not lead to hazardous situations and accidents.
• Design for flexibility, meaning that it is possible to utilise a new situation and adapt to changes in the frame conditions.
• Implementation of safety barriers, to reduce the negative consequences of hazardous situations if they should occur, for example a fire.
• Improvement of the performance of barriers by using redundancy, maintenance/testing, etc.
• Quality control/quality assurance.
• The precautionary principle, saying that in the case of lack of scientific certainty about the possible consequences of an activity, we should implement measures or not carry out the activity.
• The ALARP principle, saying that the risk should be reduced to a level which is as low as reasonably practicable.

The level of caution adopted will of course have to be balanced against other concerns, such as costs. However, all industries would introduce some minimum requirements to protect people and the environment, and these requirements can be considered justified by reference to the cautionary principle.
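Returning to the Willis (2007) definition above, a small sketch (all numbers hypothetical, chosen only for illustration) evaluates the expected-value product and shows how the same single number can mask an extreme-event scenario:

```python
# Willis-type expected value (all numbers hypothetical):
p_attack = 1e-4              # P(attack occurs)
p_damage = 0.1               # P(attack results in damage | attack occurs)
e_damage = 1000.0            # E[damage | attack occurs and results in damage]

risk = p_attack * p_damage * e_damage
print(f"expected consequences: {risk:.3f}")  # 0.010, a single small number

# The same expected value also arises from a 1-in-100-million probability of
# an event with one million fatalities; the one-number representation cannot
# distinguish the two situations, which is the criticism made above.
print(f"{1e-8 * 1e6:.3f}")  # 0.010
```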


We will use the ALARP principle to show how the cautionary principle can be implemented in a practical decision-making context, balancing different concerns; see Sections 3 and 4.

2.2. The classical approach to risk

We are not very attracted by the classical approach to risk and risk analysis as seen in engineering applications. The problem is the introduction of, and focus on, fictional probabilities, as was illustrated by the fatality example in Section 1. These probabilities exist only as mental constructions. They do not exist in the real world. An infinite population of similar units needs to be defined to make the classical framework operational. This probability concept means that a new element of uncertainty is introduced: the true value of the probability, a value that does not exist in the real world. Thus we are led to two levels of uncertainty and probability, which in our view reduces the power of risk analysis. We are interested in the behaviour of the units under consideration. What the classical approach gives is uncertainty statements about averages over fictional populations. We feel that this approach has the wrong focus. It does not give a good basis for supporting decisions.

For the populations introduced, it is supposed that they comprise similar units. The meaning of the word 'similar' is rather intuitive, and in some cases it is obvious what is meant. In other cases, the meaning is not clear at all. Let us look at an example. Consider the probability of at least one fatality during one year in a production facility. According to the classical relative frequency view, this probability is interpreted as the proportion of facilities with at least one fatality when considering an infinite number of similar facilities. Therefore, the probability is not a property of the unit itself, but of the population it belongs to. This is of course a thought experiment; in real life we have just one such facility. How should we then understand the meaning of similar facilities? Does it mean the same type of buildings and equipment, the same operational procedures, the same type of personnel positions, the same type of training programmes, the same organizational philosophy, the same influence of exogenous factors, etc.? As long as we speak about similarities on a macro level, the answer is yes. But something must be different, because otherwise we would get exactly the same output result for each facility: either the occurrence of at least one fatality or no such occurrence. There must be some variation on a micro level to produce the variation in the output result. So we should allow for variations in equipment quality, human behaviour, etc. But the question is to what extent we should allow for such variation. For example, in human behaviour, do we specify the safety culture or the standard of the private lives of the personnel, or are these to be regarded as factors creating the variations from one facility to another, i.e. the stochastic (aleatory) uncertainty? We see that we will have a hard time specifying what should be the framework conditions of the experiment and what should be stochastic uncertainty. In practice we seldom see such a specification carried out, because the framework conditions of the experiment are tacitly understood. As seen from the above example, it is not obvious how to make a proper definition of the population. We recognize that the concept 'similar' is intuitively appealing, although it can be hard to define precisely.
But the main problem with the classical approach is not this concept; it is the fact that risk is a constructed quantity that puts the focus in the wrong place: on measuring fictional quantities.

2.3. The Bayesian paradigm

Bayesian methods are often presented as an alternative to the classical approach. But what is the Bayesian alternative in a risk analysis context? In practice and in the literature we often see a mixture of classical and Bayesian analyses. The starting point is classical in the sense that it is assumed that an underlying true risk exists. This risk is unknown, and subjective probability distributions are used to express uncertainty about where the true value lies. Starting by specifying probability distributions at the model parameter level, procedures are developed to propagate these distributions through the model to the risk of the system. Updating schemes for incorporating new information are presented using Bayes' formula. This basis is often referred to as the classical approach with uncertainty analysis (Aven, 2003). It is also called the probability of frequency framework, in which the concept of probability is used for the subjective probability and the concept of frequency is used for the objective relative-frequency-based probability (Kaplan and Garrick, 1981). This approach to risk analysis introduces two levels of uncertainty: the value of the observable quantities such as the number of failures of a system, the downtime, etc., and the correct value of the risk. The result is often that both the analysis and the results of the analysis are considered uncertain. This does not provide a good basis for communication and decision-making (Aven, 2003).

Consider a case where risk is expressed by the probability of a fatal accident. Two options are compared. Adopting the probability of frequency approach, uncertainty distributions of the probability need to be established for the two options. These distributions would typically be very wide, as there are significant uncertainties associated with the models and the model parameters. The risk analysis is performed to provide a basis for the decision on the choice of option, but the message is disturbed by a discussion of uncertainties as to what is the correct probability. First we introduce a fictional probability, and then we become uncertain what the true value of this quantity is. In this way, new uncertainties are generated, and this weakens the power of the risk analysis in our view.

Now, how does this way of thinking relate to the Bayesian approach as presented in the literature, for example Bernardo and Smith (1994), Lindley (2000), Singpurwalla (1988, 2006) and Singpurwalla and Wilson (1999)? As we see from these references and others, Bayesian thinking is in fact not that different from the probability of frequency approach described above. The point is that the Bayesian approach, as presented in the literature, allows for fictional parameters, based on thought experiments. These parameters are introduced and the uncertainty in them is assessed. Thus, from a practical point of view, an analyst would probably not see much difference between the Bayesian approach as presented in the literature and the probability of frequency approach referred to above. Of course, Bayesians would not speak about true, objective risks and probabilities, and the predictive form is seen as the most important one. However, in practice, Bayesian parametric analysis is often seen as the end-product of a statistical analysis. The application and understanding of probability models focuses on limiting values of quantities constructed through a thought experiment, which are very close to the mental constructions of probability and risk used in the classical relative frequency approach. In our view, applying the standard Bayesian procedures gives too much focus on fictional parameters, established through thought experiments. The focus should instead be on observable quantities, as also stressed by Barlow (1998).
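As a minimal sketch of the two-level structure just described (hypothetical prior and data, assuming scipy is available), the probability of frequency approach reports an uncertainty distribution over a "true" frequency, whereas the predictive alternative goes directly to the observable quantity:

```python
from scipy.stats import beta

# Probability of frequency sketch: a "true" failure frequency p is posited,
# and a subjective Beta(1, 9) prior over p is updated with hypothetical data
# (2 failures in 10 demands) via Bayes' formula.
a0, b0 = 1.0, 9.0
k, n = 2, 10
posterior = beta(a0 + k, b0 + n - k)

low, high = posterior.ppf(0.05), posterior.ppf(0.95)
print(f"90% credible interval for the 'true' p: ({low:.3f}, {high:.3f})")
# The interval is wide: the analysis now reports uncertainty about a
# fictional parameter, the second layer of uncertainty criticised above.

# Predictive alternative: assess the observable event directly.
# P(next demand fails) equals the posterior mean of p, a single judgment
# conditional on the background knowledge, with no second layer.
p_next = (a0 + k) / (a0 + b0 + n)
print(f"P(next demand fails) = {p_next:.2f}")  # 0.15
```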
We believe that there is a need for a rethinking of how to present the Bayesian way of thinking, to obtain a successful implementation in a practical setting. In a risk analysis comprising a large number of observable quantities, a pragmatic view of the Bayesian approach is required in order to conduct the analysis. Direct probability assignments should be seen as a useful supplement to establishing probability models where we need to specify uncertainty distributions of parameters. A Bayesian updating procedure may be used for incorporating new information, but its applicability is in many cases rather limited. In most real-life cases we would not perform a formal Bayesian updating to incorporate new observations; a rethinking of the whole information basis and approach to modelling is required when we conduct the analysis at a particular point in time, for example in the pre-study or concept specification phases of a project. Furthermore, we should make a sharp distinction between probability and utility. In our view it is unfortunate that these two concepts are seen as inseparable, as is often done in the Bayesian literature. The word 'subjective', and related terms such as 'personalistic', are well established in the literature. However, we find such terms somewhat difficult to use in practice. We prefer to speak about probability as a measure of uncertainty, and to make clear who is the assessor of the uncertainty.

2.4. Economic risk and rational decision-making

As mentioned above, in economic risk theory references are often made to literature restricting the risk concept to situations where the probabilities related to future outcomes are objective, and using 'uncertainty' to describe the more common situations of subjective probabilities. The first category of probabilities includes gambling situations and situations with a huge amount of relevant data (as for car traffic accidents and medical experiments), whereas the second category covers all other situations. This convention should in our view not be used: it violates the intuitive interpretation of risk, which is closely related to situations of uncertainty and lack of predictability. In a framework based on subjective probabilities, objective probabilities do not exist; all probabilities are subjective assessments of uncertainty, and different assessors could produce different probabilities.

Economic risk is closely related to the use of utilities and rational decision-making. The optimization of expected utility is the ruling paradigm among economists and decision analysts. We do recognize the importance of this paradigm; it is a useful decision-making tool in many cases. But it is just a tool, a normative theory saying how to make decisions strictly within a mathematical framework; it does not replace management review and judgement. There are factors and issues which go beyond the framework of utilities and rational decision-making that management needs to consider. In practice there will always be constraints and limitations restricting the direct application of expected utility thinking. It is for example not straightforward to establish a utility function reflecting the decision-maker's preferences, or to assign probabilities for all types of events. Yet the theory is important, as it provides a reference for discussing what good decisions are. The fact that people often violate the basis (axioms) of the theory, that they do not behave consistently and coherently, is not an argument against the theory. Expected utility theory says how people ought to make decisions, not how decisions are made today. We may learn from descriptive theory telling us how people actually behave, but this theory cannot replace normative theory. We do need some reference, even if it is to some extent theoretical, for the development and measurement of the goodness of decisions. In our view expected utility theory can be seen as such a reference.

Cost-benefit analysis is another method for balancing costs and benefits. It is often used to guide decision-making according to the ALARP principle.
The idea of the method is to assign monetary values to a list of burdens and benefits, and to summarize the goodness of an alternative by the expected net present value. In this way it provides an attractive approach for comparing options and evaluating risk-reducing measures. The method is, however, subject to strong criticism. The main problem is related to the transformation of non-economic consequences, such as (expected) loss of life and damage to the environment, into monetary values. What is the value of a (statistical) life? What is the value of future generations? These are difficult issues that have received much attention in the literature. There are no simple answers. The result is often that the cost-benefit analyses just focus on certain consequences and ignore others. Nevertheless, we find that this type of analysis provides useful insight and decision support in many applications. We are, however, sceptical about a mechanical transformation of consequences into monetary values, for in many cases it is more informative to focus attention on each consequence separately and leave the weighting to management and the decision-maker, through a more informal review and judgment process.

As for risk analysis, the probabilistic basis for cost-benefit analysis is seldom clarified, but the classical thinking, with a search for correct probability values, seems to be dominant. It is common to question the validity of cost-benefit analyses because of their unrealistic assumptions about the availability of the data needed to complete the analyses. The underlying philosophy seems to be that without objective, hard data the analyses break down.

How does cost-benefit analysis relate to expected utility theory? Could we justify using one method in one case and the other method in a different case? These questions are important, but it is difficult to answer them using standard decision theory. Either utility theory is considered the only meaningful tool, or this theory is rejected (it does not work in practice) and cost-benefit analyses are used.
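As a toy illustration of the normative framework discussed above (hypothetical outcomes, and a square-root utility chosen only for concreteness), expected value and expected utility can rank the same options differently:

```python
import math

# Two hypothetical options with identical expected monetary value.
option_a = [(100.0, 1.0)]              # a certain gain of 100
option_b = [(0.0, 0.5), (200.0, 0.5)]  # a gamble with the same mean

def expected_value(option):
    return sum(x * p for x, p in option)

def expected_utility(option, u=math.sqrt):  # concave utility: risk aversion
    return sum(u(x) * p for x, p in option)

print(expected_value(option_a), expected_value(option_b))      # 100.0 100.0
print(expected_utility(option_a), expected_utility(option_b))  # 10.0  ~7.07
# Expected value cannot separate the options; the (normative) expected
# utility framework ranks the certain outcome higher for a risk-averse u.
```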

3. Our recommended approach

We define risk as the two-dimensional combination of consequences and associated uncertainties (Aven, 2007a, 2008a; Aven and Kristensen, 2005). Let C denote the consequences or outcomes associated with the activity. Typically C would be expressed by some quantities C1, C2, . . ., e.g. the number of fatalities and the number of injuries. The quantity C is unknown; we are uncertain what value C will take. The uncertainty results from lack of knowledge about C and about underlying influencing phenomena and processes. Hence it is epistemic. Subjective probabilities are used to assess the uncertainties, in line with basis (b) introduced in Section 1. The reference is a certain standard such as drawing a ball from an urn. If we assign a probability of 0.1 to an event A, we compare our uncertainty about the occurrence of A to the drawing of a specific ball from an urn containing 10 balls (Lindley, 1985).

This definition of risk acknowledges that risk cannot be adequately described and evaluated simply by reference to summarising probabilities and expected values. There is a need to see beyond these values. Computed probabilities and expected values are not objective quantities, but subjective assignments conditioned on the background information (including assumptions and suppositions). Hence we may characterise risk by (I, C, I*, C*, U, P, K), where I refers to the hazards (initiating events), C the consequences, I* and C* are predictions of I and C, whilst U are the uncertainties, P the assigned probabilities and K the knowledge and background information the analysis is based on (e.g. assumptions, models). The aim of the risk analysis is to establish a risk picture covering all relevant dimensions of (I, C, I*, C*, U, P, K). This conclusion of seeing beyond the probabilities and expected values also applies to other risk perspectives, including the standard definition expressing that risk is the two-dimensional combination of consequences and probabilities. Probabilities need to be estimated or assigned, and we need to address uncertainties in phenomena and processes.

Let us look at an example. A risk analysis is performed for an offshore oil and gas producing installation. The analysis produces probabilities representing the best judgments of the analysis group. This risk picture is, however, supplemented with uncertainty considerations (Aven, 2008b):


Equipment deterioration and maintenance. The deterioration of critical equipment is assumed not to cause safety problems, through the implementation of a special maintenance program. However, experience gained on offshore installations indicates that unexpected problems occur. Production of oil over time leads to changes in operating conditions, such as increased production of water, H2S and CO2 content, scaling, bacteria growth, emulsions, etc., problems that to a large extent need to be solved by the addition of chemicals. These are all factors causing an increased probability of corrosion, material brittleness and other conditions that may cause leakages. The quantitative analysis has not taken into account that surprises might occur. The analysis group is concerned about this uncertainty factor, and it is reported along with the quantitative assessments.

Barrier performance. The historical records show poor performance of a set of critical safety barrier elements, in particular for some types of safety valves. The assignments of the expected number of fatalities E[C] given a leakage or a fire scenario were based on average conditions for the safety barriers, and adjustments were made to reflect the historical records. However, the changes made were small. The poor performance of the barrier elements would not necessarily result in significantly reduced probabilities of barrier system failures, as most of the barrier elements are not safety critical. The barrier systems are designed with a lot of redundancy. Nonetheless, this problem also causes concern, as the poor performance may indicate that there is an underlying problem of an operational and maintenance character, which results in reduced availability of the safety barriers in a hazardous situation. There are a number of dependencies among the elements of these systems, and the risk analysis methods for studying these are simplified, with strong limitations. Hence there is also an uncertainty aspect related to the barrier performance.

Now, how would this extended risk picture affect the decision-making? Suppose an ALARP process is to be implemented and uncertainties as described above constitute a part of the risk picture. Then risk reduction also means uncertainty reduction. It is necessary to see beyond the calculated risk numbers. Hence measures reducing the uncertainties need to be considered, and they should be implemented unless the costs are in gross disproportion to the benefits gained (see the discussion in Section 4).

Quantifying risk using probabilities and expected values gives an impression that risk can be expressed in a very precise way. However, in most cases the arbitrariness is large, and there is a need to include assessments of the factors that can cause surprises relative to the probabilities and the expected values. Quantification often requires strong simplifications and assumptions, and as a result important factors could be ignored or given too little (or too much) weight. In general the risk picture should highlight the following aspects, in addition to presenting probabilities and expected values:

• Uncertainties in phenomena and processes.
• Manageability factors.

Are there large uncertainties related to the underlying phenomena, and do experts have different views on critical aspects? It is an aim to identify factors that could lead to consequences C far from the expected consequences E[C]. A system for describing and characterising the associated uncertainties is outlined in Sandøy et al. (2005).
This system reflects features such as the current knowledge and understanding of the underlying phenomena and the systems being studied, the complexity of the technology, the level of predictability, the experts' competence, and the vulnerability of the system.

The level of manageability is related to the extent to which it is possible to control and reduce the uncertainties, and obtain desired outcomes. The expected values and the probabilistic assessments performed in the risk analyses provide predictions for the future, but some risks are more manageable than others, meaning that the potential for reducing the risk is larger for some risks compared to others. By proper management we seek to obtain desirable consequences. This leads to considerations on, for example, how to run processes reducing risks (uncertainties) and how to deal with human and organisational factors and obtain a good safety culture.

A search procedure needs to be established to identify the uncertainty and manageability factors. Such a procedure can be based on historical records, measurements and evaluations of the state of systems and equipment, as well as interviews with key personnel. Furthermore, the assumptions and suppositions of the probability assignments in the quantitative analysis provide an additional checklist. We would also like to draw attention to the list of special consequence features presented by Renn and Klinke (2002) (see also Kristensen et al. (2006)). Examples of such features are temporal extension, delay effects, irreversibility and aspects of the consequences that could cause social mobilization, i.e. violation of individual, social or cultural interests and values, generating social conflicts and psychological reactions by individuals and groups who feel afflicted by the consequences. This feature classification system can be used as a checklist for ensuring the right focus of the analysis, i.e. that we address the appropriate consequence attributes C. But it can also be used as a checklist for identifying relevant uncertainty and manageability factors. For example, the feature 'delay effects' could lead to a focus on activities or mechanisms that could initiate equipment-deteriorating processes, causing future surprises.
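As one possible, purely illustrative way of operationalising the recommended risk description, the record below keeps the uncertainty factors and background knowledge K attached to the predictions and probabilities P; the field names and figures are our own assumptions, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class RiskDescription:
    """Sketch of a record for the risk description (I, C, I*, C*, U, P, K)."""
    initiating_event: str                 # I, e.g. a process gas leak
    consequence_quantity: str             # C, the observable of interest
    prediction: float                     # C*, the predicted value of C
    probabilities: dict[str, float]       # P, assigned probabilities for outcomes of C
    uncertainty_factors: list[str] = field(default_factory=list)   # U beyond P
    background_knowledge: list[str] = field(default_factory=list)  # K: assumptions, models

# Hypothetical entry echoing the offshore example above.
leak = RiskDescription(
    initiating_event="process gas leak",
    consequence_quantity="number of fatalities",
    prediction=0.0,
    probabilities={"at least one fatality": 1e-4},
    uncertainty_factors=[
        "equipment deterioration not reflected in the quantitative analysis",
        "poor historical barrier performance may indicate operational problems",
    ],
    background_knowledge=["average barrier conditions assumed in E[C]"],
)
```

The point of such a structure is only that the factors causing possible surprises travel with the numbers, instead of being lost when the probabilities are reported on their own.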

4. Discussion and conclusions

Our way of understanding risk could be seen as being in line with the ideas of Rosa (1998, 2003). Rosa defines risk in the following way: risk is a situation or event where something of human value (including humans themselves) is at stake and where the outcome is uncertain. Using our terminology, Rosa defines risk by an event (or situation) I that could lead to consequences C that are important to humans, where I and C are uncertain. Hence, strictly speaking, risk is I, and not, as in our definition, (I, C, U). However, Rosa also acknowledges that risk is more than probabilities and expected values. We find similar ideas underpinning approaches such as the risk governance framework (Renn, 2005). Renn (2005) defines risk as an uncertain consequence of an event or an activity with respect to something that humans value. This definition seems to be in line with the new definition of risk considered by ISO (2007), expressing that risk is the effect of uncertainty on objectives. Alternatively, we may interpret the suggested ISO definition as (I, C, U), where the consequences (the effect) are seen in relation to the objectives. According to Rosa's (1998, 2003) and Renn's (2005) definitions, risk expresses a state of the world independent of our knowledge and perceptions. Referring to risk as an event or a consequence, we cannot conclude whether risk is high or low, or compare options with respect to risk. It makes no sense to speak about a high or higher event. Compared to standard terminology in risk research and risk management, these definitions lead to conceptual difficulties that are incompatible with the everyday use of risk in most applications, as discussed by Aven and Renn (2008).

The need to see beyond the probabilities and expected values is also highlighted by Taleb (2007), using the black swan logic. The inability to predict outliers (black swans) implies the inability to predict the course of history. An outlier lies outside the realm of regular expectations, because nothing in the past can convincingly point at its occurrence. The standard tools for measuring uncertainties are not able to predict these black swans. Risk assignments are often based on historical data, fitted to some probability distribution like the normal distribution. In practice it is a problem to determine the appropriate distribution. Our historical data may include no extreme observations, but this does not preclude such observations occurring in the future. Statistical analysis, including Bayesian statistics, is based on the idea of similar situations, and if 'similar' is limited to the historical data, the population considered could be far too small or narrow. However, by extending the population, the statistical framework breaks down; there is no justification for such an extended probability model. The statistician needs a probability model to be able to perform a statistical analysis, and then he/she will base the analysis on the data available. Taleb (2007) refers to the worlds of 'mediocristan' and 'extremistan' to explain the difference between the standard probability model context and the more extended population required to reflect surprises occurring in the future, respectively. Without explicitly formulating the thesis, Taleb (2007) is saying that we have to see beyond the historically based probabilities and expected values.

The different risk perspectives influence the risk management. Let us use the application of the ALARP principle as an example. The ALARP principle is based on the principle of reversed onus of proof. This means that the base case is that all identified risk reduction measures should be implemented, unless it can be demonstrated that there is gross disproportion between costs and benefits. To verify ALARP, procedures mainly based on engineering judgments and codes are used, but also traditional cost-benefit analyses and cost-effectiveness analyses. When using such analyses, guidance values are often used to specify what defines gross disproportion. A typical number for the value of a statistical life used in cost-benefit analysis is £1–2 million (HSE, 2006; Aven and Vinnem, 2005). This number applies to the transport sector. For other areas the numbers are much higher; for example, in the offshore UK industry it is common to use £6 million (HSE, 2006). This increased number accounts for the potential for multiple fatalities and uncertainty. The practice of using traditional cost-benefit analyses and cost-effectiveness analyses to verify ALARP has been questioned (Aven and Abrahamsen, 2007).

The ALARP principle is an example of the application of the cautionary principle, as mentioned in Section 2.1. Uncertainty should be given strong weight, and the 'grossly disproportionate' criterion is a way of making the principle operational. However, cost-benefit analyses calculating expected net present values do not reflect the risks and uncertainties; they are based on expected values (taking into account systematic risk, but not unsystematic risk beyond expected values). We conclude that the use of this approach to weight the risks and uncertainties is therefore not adequate. Depending on the way we understand risk, the ALARP principle would mean different things. If our broad perspective is adopted, risk reduction cannot be measured using expected values alone. Modifications of the traditional cost-benefit analysis have been suggested to solve this problem, see e.g. EAI (2006) and Binder (2002). However, these do not solve the fundamental problem.
Although arguments are provided to support these methods, their rationale can be questioned, as there is a significant element of arbitrariness associated with them; see Aven and Flage (2008). The common procedures for verifying the 'grossly disproportionate' criterion using cost-benefit analysis therefore fail, even if we try to adjust the traditional approach.

Let us return to the example in the previous section, which considers uncertainties related to equipment deterioration and barrier performance. A risk perspective based on expected values could obviously lead to a different risk management and decision-making process than if our recommended perspective is adopted. If expected values are addressed, the uncertainty factors would be given less attention than if uncertainty is a main component of risk. This conclusion would also hold if probability, instead of expected value, is used as the pillar of risk. So what alternative approach would we then suggest? In our view we have to acknowledge that there is no simple and mechanistic method or procedure for balancing different concerns. When it comes to the use of analyses and theories we have to adopt a pragmatic perspective. We have to acknowledge the limitations of the tools, and use them in a broader process where the results of the analyses are seen as just one part of the information supporting the decision-making.
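To make the gross disproportion test discussed above concrete, here is a hedged sketch (all figures hypothetical, including the disproportion factor) of the kind of expected-value check that, as argued, cannot stand on its own:

```python
# All figures hypothetical, including the disproportion factor.
cost = 5.0e6                 # cost of the risk-reducing measure
expected_lives_saved = 0.8   # reduction in expected fatalities over the lifetime
value_of_life = 6.0e6        # an offshore-type value of a statistical life
disproportion_factor = 10.0  # weight given to benefits before a measure is rejected

benefit = expected_lives_saved * value_of_life
if cost > disproportion_factor * benefit:
    print("cost grossly disproportionate: measure may be rejected")
else:
    print("implement the measure")  # here 5.0e6 <= 10 * 4.8e6

# Note that the check runs entirely on expected values; the uncertainty
# factors emphasised in Section 3 never enter it, which is the limitation
# argued above.
```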

Acknowledgment

The author is grateful to Gilles Motet for many useful comments and suggestions on earlier versions of the paper.

Appendix

Most textbooks on risk analysis are in line with the classical way of thinking as described above. They focus on estimation of risk, and if uncertainty is addressed, it means expressing confidence intervals or subjective uncertainty distributions for relative-frequency-interpreted probabilities or expectations. Examples of books in this category are Henley and Kumamoto (1981), Rausand and Høyland (2004) and Vose (2000). Most of these books focus on methods of analysis and management. Foundational issues are not a main topic. Most applied risk analysts have been trained in such methods, but they have not spent very long reflecting on the foundations, even though many papers address this topic. Examples of such papers (and books) are Apostolakis (1990), Apostolakis and Wu (1993), Kaplan (1991, 1992), Kaplan and Burmaster (1999), Singpurwalla (1988, 2002), Aven and Pörn (1998) and Aven (2003). Several special issues of risk journals have been devoted to foundations, and in particular to aspects of uncertainty. They include special issues of the journal Reliability Engineering and System Safety; see Apostolakis et al. (1988) and Helton and Burmaster (1996).

Apostolakis and Kaplan have done pioneering work in establishing and discussing an appropriate basis for risk analysis. The probability of frequency thinking was introduced more than 25 years ago (Kaplan and Garrick, 1981). Aven and Pörn (1998), Aven and Rettedal (1998), Aven (2003, 2008a) and Aven and Kristensen (2005) represent a rethinking of some of the basic ideas of Kaplan and others. In his work, Apostolakis compared the probability of frequency ideas and the more modern version of the Bayesian approach (Apostolakis and Wu, 1993), and he pointed to the problem of introducing true but unknown frequencies.

Our way of thinking is known as a predictive Bayesian approach to risk and risk analysis, or as a predictive, epistemic approach. By putting emphasis on observable quantities, their prediction and uncertainty assessments, we rewrite some established Bayesian procedures. Our aim is to obtain a successful implementation in a practical setting. The importance of focusing on observable quantities has also been emphasized by others, such as Barlow (1998), Bedford and Cooke (2001), Morgan and Henrion (1990), Barlow and Clarotti (1993), Geisser (1993) and Spizzichino (2001).

Our definition of probability is in line with the one used by Lindley (1985, 2000): probability is a subjective measure of uncertainty, and the reference is a standard urn model. When referring to an observable relative frequency, we use the term chance, or simply variation in a defined population. A chance is closely linked to the concept of propensity, which is used to describe an objective probability representing the disposition or tendency of nature to yield a particular event on a single trial; see Popper (1959). Thus a propensity is a characterization of the experimental arrangement specified by nature, and this arrangement gives rise to certain frequencies when the experiment is repeated.

Keynes (1921) and other logical relationists insisted that there was less subjectivity in epistemic probabilities than was commonly assumed. Keynes' point was that there is, in a sense, an objective (albeit not necessarily measurable) relation between knowledge and the probabilities that are deduced from it. For Keynes, knowledge is disembodied and not personal. We disagree with this view on probability. Knowledge may be objective in some sense, but probabilities cannot be separated from the person; probability reflects personal beliefs, it is subjective.

Many social scientists have criticized traditional engineering risk assessments. We mention Beck (1992), Douglas and Wildavsky (1982), Perrow (1984) and Shrader-Frechette (1991); see also Campbell and Currie (2006). The critical point seems to be that the idea of an objective risk cannot be justified. According to Slovic (1998), risk does not exist 'out there', independent of our minds and cultures. We must take the naïve positivist view, to use the terminology of Shrader-Frechette (1991), that risk exists objectively and can be measured, and replace it by a more balanced view. The answer is not the other extreme, the relativist view saying that A's risk description is as good as B's, regardless of their bases, but a middle position, expressing that formal risk assessments provide useful information to support decision-making by combining facts and judgements using scientific principles and methods. Most people, we think, are in favour of such a middle position, see Shrader-Frechette (1991), but the challenge is to establish a proper platform for it.

The foundational literature on subjective probabilities links probability and decisions; see Ramsey (1926) and de Finetti (1974). By observing the bets people make or would make, one can derive their personal beliefs about the outcome of the event under consideration. This view of subjective probabilities was disputed by Koopman (1940); see also Good (1950), who holds a more intuitionist view on subjective probabilities. The intuitive thesis says that probability derives directly from intuition and is prior to objective experience. Intuitionists consider that the Ramsey–de Finetti revealed-belief approach is too dogmatic in its empiricism as, in effect, it implies that a belief is not a belief unless it is expressed in choice behaviour. We agree with the intuitionists on this point, and make a sharp distinction between probability assignments and decision-making. This distinction also seems to be common among many applied Bayesian risk analysts.

According to the Bayesian paradigm, there are no true objective probabilities. However, a consistent subjectivist would act in certain respects as if such probabilities do exist. The result is that many analysts just as easily assume that the true objective probabilities exist as well as the subjective ones; see Good (1983, p. 154). In our terminology, they shift from the Bayesian paradigm to the probability of frequency approach. We refer to Bernardo and Smith (1994) and Lad (1996) for other key references on subjective probabilities and Bayesian theory.

References
Apostolakis, G.E., 1990. The concept of probability in safety assessments of technological systems. Science 250, 13591364. Apostolakis, G.E. et.al., (Eds.), 1988. Reliability Engineering and System Safety. The Interpretation of Probability in Probabilistic Safety Assessments, vol. 23, No. 4. Apostolakis, G., Wu, J.S., 1993. The interpretation of probability, De Finettis representation theorem, and their implications to the use of expert opinions in safety assessment. In: Barlow, R.E., Clarooti, C.A. (Eds.), Reliability and Decision Making. Chapman and Hill, London, pp. 311322. Aven, T., 2008a. Risk Analysis. Wiley, NJ.

Aven, T., 2008b. A semi-quantitative approach to risk analysis, as an alternative to QRAs. Reliability Engineering and System Safety 93, 768–775.
Aven, T., 2007a. A unified framework for risk and vulnerability analysis and management covering both safety and security. Reliability Engineering and System Safety 92, 745–754.
Aven, T., 2003. Foundations of Risk Analysis: A Knowledge and Decision-Oriented Perspective. Wiley, New York.
Aven, T., Flage, R., 2008. Use of decision criteria based on expected values to support decision-making in a production assurance and safety setting. Reliability Engineering and System Safety, in press.
Aven, T., Abrahamsen, E.B., 2007. On the use of cost-benefit analysis in ALARP processes. International Journal of Performability Engineering 3, 345–353.
Aven, T., Kristensen, V., 2005. Perspectives on risk – review and discussion of the basis for establishing a unified and holistic approach. Reliability Engineering and System Safety 90, 1–14.
Aven, T., Pörn, K., 1998. Expressing and interpreting the results of quantitative risk analyses. Reliability Engineering and System Safety 61, 3–10.
Aven, T., Renn, O., 2008. On risk defined as an uncertain event. Journal of Risk Research, in press.
Aven, T., Rettedal, W., 1998. Bayesian frameworks for integrating QRA and SRA. Structural Safety 20, 155–165.
Aven, T., Vinnem, J.E., 2005. On the use of risk acceptance criteria in the offshore oil and gas industry. Reliability Engineering and System Safety 90, 15–24.
Aven, T., Vinnem, J.E., 2007. Risk Management, with Applications from the Offshore Oil and Gas Industry. Springer Verlag, New York.
Barlow, R.E., 1998. Engineering Reliability. SIAM, Philadelphia.
Barlow, R.E., Clarotti, C.A., 1993. Preface. In: Reliability and Decision Making. Chapman and Hall, London.
Beck, U., 1992. Risk Society. Sage, London.
Bedford, T., Cooke, R., 2001. Probabilistic Risk Analysis: Foundations and Methods. Cambridge University Press, Cambridge.
Bernardo, J., Smith, A., 1994. Bayesian Theory. Wiley, New York.
Binder, M., 2002. The role of risk and cost-benefit analysis in determining quarantine measures. Productivity Commission staff research paper. AusInfo, Canberra.
Cabinet Office, 2002. Risk: improving government's capability to handle risk and uncertainty. Strategy Unit Report, UK, p. 7.
Campbell, S., Currie, G., 2006. Against Beck: in defence of risk analysis. Philosophy of the Social Sciences 36, 149–172.
de Finetti, B., 1974. Theory of Probability. Wiley, New York.
Douglas, M., Wildavsky, A., 1982. Risk and Culture. University of California Press, Berkeley.
Douglas, E.J., 1983. Managerial Economics: Theory, Practice and Problems, second ed. Prentice-Hall, Englewood Cliffs, NJ.
EAI, 2006. Risk and uncertainty in cost-benefit analysis. A toolbox paper for the Environmental Assessment Institute. <http://www.imv.dk>.
Geisser, S., 1993. Predictive Inference: An Introduction. Chapman and Hall, New York.
Good, I.J., 1950. Probability and the Weighing of Evidence. Griffin, London.
Good, I.J., 1983. Good Thinking: The Foundations of Probability and its Applications. University of Minnesota Press, Minneapolis.
Haimes, Y.Y., 2004. Risk Modelling, Assessment, and Management, second ed. Wiley, New York.
Helton, J.C., Burmaster, D.E. (Eds.), 1996. Treatment of aleatory and epistemic uncertainty. Reliability Engineering and System Safety (special issue).
Henley, E.J., Kumamoto, H., 1981. Reliability Engineering and Risk Assessment. Prentice-Hall, New York.
HSE, 2006. The role of risk in cost-benefit analysis (CBA) in support of ALARP decisions. <http://www.hse.gov.uk/risk/theory/alarpcba.htm>.
ISO, 2007. Committee Draft of ISO/IEC Guide 73 Risk Management – Vocabulary. ISO TMB WG on Risk Management Secretariat, vol. 48, 2007-06-15.
ISO, 2002. Risk Management Vocabulary. ISO/IEC Guide 73.
Kaplan, S., 1992. Formalisms for handling phenomenological uncertainties: the concepts of probability, frequency, variability, and probability of frequency. Nuclear Technology 102, 137–142.
Kaplan, S., 1991. Risk assessment and risk management – basic concepts and terminology. In: Risk Management: Expanding Horizons in Nuclear Power and Other Industries. Hemisphere Publ. Corp., Boston, MA, pp. 11–28.
Kaplan, S., Burmaster, D., 1999. Foundations: how, when, why to use all of the evidence. Risk Analysis 19, 55–62.
Kaplan, S., Garrick, B.J., 1981. On the quantitative definition of risk. Risk Analysis 1, 11–27.
Keynes, J.M., 1921. A Treatise on Probability. Macmillan, London.
Koopman, B.O., 1940. The bases of probability. Bulletin of the American Mathematical Society, No. 46. Reprinted in Kyburg and Smokler, 1980.
Kristensen, V., Aven, T., Ford, D., 2006. A new perspective on Renn and Klinke's approach to risk evaluation and risk management. Reliability Engineering and System Safety 91, 421–432.
Lad, F., 1996. Operational Subjective Statistical Methods. Wiley, New York.
Lindley, D.V., 1985. Making Decisions, second ed. Wiley, New York.
Lindley, D.V., 2000. The philosophy of statistics. The Statistician 49, 293–337.
Morgan, M.G., Henrion, M., 1990. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, Cambridge.
Perrow, C., 1984. Normal Accidents. Basic Books, New York.
Pidgeon, N.F., Beattie, J., 1998. The psychology of risk and uncertainty. In: Calow, P. (Ed.), Handbook of Environmental Risk Assessment and Management. Blackwell Science, London, pp. 289–318.
Popper, K., 1959. The Logic of Scientific Discovery. Hutchinson, London.
Ramsey, F., 1926. Truth and Probability. Reprinted in Kyburg and Smokler, 1980.
Rausand, M., Høyland, A., 2004. System Reliability Theory, second ed. Wiley, New York.
Renn, O., Klinke, A., 2002. A new approach to risk evaluation and management: risk-based, precaution-based and discourse-based strategies. Risk Analysis 22, 1071–1094.
Renn, O., 2005. Risk governance. White Paper No. 1, International Risk Governance Council, Geneva.
Rosa, E.A., 1998. Metatheoretical foundations for post-normal risk. Journal of Risk Research 1, 15–44.
Rosa, E.A., 2003. The logical structure of the social amplification of risk framework (SARF): metatheoretical foundation and policy implications. In: Pidgeon, N., Kasperson, R.E., Slovic, P. (Eds.), The Social Amplification of Risk. Cambridge University Press, Cambridge.
Sandøy, M., Aven, T., Ford, D., 2005. On integrating risk perspectives in project management. Risk Management: An International Journal 7, 7–21.
Shrader-Frechette, K.S., 1991. Risk and Rationality. University of California Press, Berkeley, CA.
Singpurwalla, N.D., 1988. Foundational issues in reliability and risk analysis. SIAM Review 30, 264–282.
Singpurwalla, N.D., 2002. Some cracks in the empire of chance. International Statistical Review 70, 53–79.
Singpurwalla, N.D., 2006. Reliability and Risk: A Bayesian Perspective. Wiley, Chichester.
Singpurwalla, N.D., Wilson, S.P., 1999. Statistical Methods in Software Engineering. Springer Verlag, New York.
Slovic, P., 1998. The risk game. Reliability Engineering and System Safety 59, 73–77.
Spizzichino, F., 2001. Subjective Probability Models for Lifetimes. Chapman and Hall, New York.
Taleb, N.N., 2007. The Black Swan: The Impact of the Highly Improbable. Penguin, London.
Vinnem, J.E., Aven, T., Husebø, T., Seljelid, J., Tveit, O., 2006. Major hazard risk indicators for monitoring of trends in the Norwegian offshore petroleum sector. Reliability Engineering and System Safety 91, 778–791.
Vose, D., 2000. Risk Analysis, second ed. Wiley, New York.
Willis, H.H., 2007. Guiding resource allocations based on terrorism risk. Risk Analysis 27 (3), 597–606.