
Terje Aven ⇑
University of Stavanger, 4036 Stavanger, Norway

Article info

Article history:
Received 5 July 2012
Received in revised form 21 January 2013
Accepted 22 January 2013
Available online 24 February 2013

Keywords: Black swan; Risk; Probability; Uncertainty; Knowledge

Abstract

In recent years there has been much focus on the so-called black swans in relation to risk management and decision making under uncertainty. A key issue has been the ability of risk assessment and probability theory to capture the black swans. In this paper we carry out an in-depth analysis of what a black swan means in relation to risk, uncertainty and probability: is a black swan just an extreme event with a very low probability, or is it a more surprising event in some sense, for example an unknown unknown? We question how the black swans are linked to the risk concept, to expected values and probabilities, and to the common distinction between aleatory uncertainties and epistemic uncertainties. The main aim of this paper is to contribute to a clarification of the issue in order to strengthen the foundations of the meaning and characterisation of risk, and in this way provide a basis for improved risk management. The paper concludes that the black swan concept should be associated with a surprising extreme event relative to the present knowledge.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

In recent years I have heard numerous speeches and read a large number of papers which refer to the black swan logic in a risk context. The metaphor is intuitively appealing and has been very popular to illustrate the idea of surprising events and outcomes. The black swan concept was first introduced by the Latin poet Juvenal, who wrote "rara avis in terris nigroque simillima cygno" (a rare bird upon earth, and exceedingly like a black swan), but, according to Hammond (2009), that was imaginative irony. Juvenal's phrase was a common expression in 16th century London, as a statement of something impossible. Up to that point in time, all observed swans in the Old World had been white. Taleb (2007), p. xvii, writes:

Before the discovery of Australia people in the Old World were convinced that all swans were white, an unassailable belief as it seemed completely confirmed by empirical evidence.

But then in 1697 a Dutch expedition to Western Australia, led by Willem de Vlamingh, discovered black swans on the Swan River, and the concept of black swans developed to mean not only something extremely rare (a rarity), but also that a perceived impossibility might later be disproven: a logical fallacy, meaning that if one does not know about something, it is therefore impossible. Taleb (2007) comments that in the 19th century John Stuart Mill used the black swan logical fallacy as a new term to identify falsification. John Stuart Mill wrote: "No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion." It became a classic example in elementary philosophy (Hammond, 2009).

In 2007, Nassim Nicholas Taleb further defined and popularised the concept of black swan events in his book, The Black Swan (Taleb, 2007) (a second edition was issued in 2010 with a new section which among other things discusses various aspects of the probability concept). Taleb refers to a black swan as an event with the following three attributes. Firstly, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Secondly, it carries an extreme impact. Thirdly, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

Taleb's (2007, 2010) book has inspired many authors; see for example the many references in Taleb (2011). However, some scholars are sceptical of Taleb's work. Professor Dennis Lindley, one of the strongest advocates of the Bayesian approach to probability, statistics and decision making, has made his view very clear in a review of Taleb's book (Lindley, 2008): Taleb talks nonsense. Lindley lampoons Taleb's distinction between the lands of Mediocristan and Extremistan, the former capturing the placid randomness as in tosses of a coin, and the latter covering the dramatic randomness that provides the unexpected and extreme outcomes. Lindley provides an example of a sequence of independent trials with a constant unknown chance of success – clearly an example of Mediocristan. Each trial is to be understood as a swan and success a white swan. Using simple probability calculus, Lindley shows that a black swan is almost certain to arise if you are to

⇑ Tel.: +47 51831000/2267; fax: +47 51831750. E-mail address: terje.aven@uis.no

Safety Science 57 (2013) 44–51. http://dx.doi.org/10.1016/j.ssci.2013.01.016

see a lot of swans, although the probability that the next swan observed is white is nearly one. Lindley cannot be misunderstood: "Sorry, Taleb, but the calculus of probability is adequate for all kind of uncertainty and randomness".

The purpose of the present paper is to provide a thorough analysis of this issue: the concept of the black swan in relation to risk. What is the meaning of this term in a professional/scientific setting? I question to what extent the ideas of Taleb, and in particular the distinction between Mediocristan and Extremistan, can be given a proper scientific justification in view of existing risk theories and perspectives. Clearly, if Taleb has made some important points, Lindley cannot be right. More specifically, the paper studies several interpretations of a black swan, starting from these four:

1. A surprising extreme event relative to the expected occurrence rate (extreme event in the sense that the consequences are large/severe; this understanding also applies to interpretations 2 and 3 below).
2. An extreme event with a very low probability.
3. A surprising, extreme event in situations with large uncertainties.
4. An unknown unknown.

The discussion will be linked to a set of issues, including:

(a) The common distinction between aleatory uncertainties and epistemic uncertainties. Does a black swan mean that chances (frequentist probabilities) cannot be meaningfully defined for this event? A frequentist probability exists as a proportion of infinite or very large populations of units similar to those considered. It represents the aleatory uncertainties.
(b) The ability of risk assessment to identify the black swans.
(c) The problem of establishing an accurate prediction model for the black swan.
(d) The situation being characterised by large consequences and high uncertainties ("post-normal sciences"). Is the occurrence of black swans linked to or restricted to such situations?
(e) The situation being characterised by scientific uncertainties, as in applying the precautionary principle. Again, is the occurrence of black swans linked to or restricted to such situations?

The remaining part of the paper is organised as follows. In Section 2 we discuss the meaning of a black swan, addressing the four interpretations 1–4 mentioned above. The discussion is based on some fundamentals concerning the concepts of risk, probability and uncertainty, which are presented in Appendix A. To be able to clarify the meaning of the black swans, it is essential to be precise on these concepts. As we know, there are different perspectives and definitions of these terms, and in our analysis we need to distinguish between these to be able to carry out a thorough argumentation. The closing Section 3 provides some final remarks and conclusions.

2. Discussion of what a black swan means

In the following we will discuss the four interpretations 1–4 introduced in Section 1.

2.1. Is a black swan a surprising extreme event relative to the expected occurrence rate?

Let N(t) denote the number of times the event under consideration (called A) occurs in the period [0, t], and assume that the expected number of events per unit of time, E[N(t)]/t, converges to k as t goes to infinity (the expectation here is with respect to a frequentist probability distribution; see Appendix A). We refer to k as the expected occurrence rate of the event A. Suppose k = 1/100 (year⁻¹), i.e. the event A is expected to occur once every 100 years.

Now, is the occurrence of A to be considered a black swan? It is clearly a rare event, but the probability that it occurs in a period of 10 years could be relatively high. Assuming that the occurrence process N(t) is a Poisson process, we know that the probability of at least one event during 10 years is 1 − exp{−10k} ≈ 0.10, and the occurrence of the event can hardly be said to be surprising.

If, however, the occurrence rate k is equal to, say, 1/10,000, an occurrence of the event during the next 10 years is rather surprising, as the probability is about 0.1%. However, say that we are facing not only one such event occurrence process, but many similar processes, for example 200. Then the probability of at least one event occurring is about 20%. Hence the occurrence of one event in this period is not particularly surprising. Focusing on a specific type of event A, the occurrence may be surprising, but not when considering a large number of such processes.
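The figures above follow directly from the Poisson assumption; the short sketch below reproduces them (the rates, the 10-year window and the 200 similar processes are the numbers used in the text):

```python
import math

def prob_at_least_one(rate, years, n_processes=1):
    # P(N(t) >= 1) for a Poisson process with occurrence rate `rate`
    # (per year), observed over `years` years; for n_processes
    # independent, similar processes the rates simply add.
    return 1 - math.exp(-rate * years * n_processes)

p_common = prob_at_least_one(1 / 100, 10)        # ~0.10: hardly surprising
p_rare = prob_at_least_one(1 / 10_000, 10)       # ~0.001: surprising in isolation
p_many = prob_at_least_one(1 / 10_000, 10, 200)  # ~0.18: across 200 processes
```

With k = 1/10,000 the single-process probability matches the 0.1% quoted in the text, while pooling 200 similar processes raises the probability of at least one occurrence to roughly 20%.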

In a society we are facing a number of different types of extreme events. Let us assume for the moment that we know what these types are and what their occurrence rates are. Then we are in a situation as described above. Considered in isolation, one type of extreme event may be considered surprising, but not if we open up for all types of events.

Of course, in practice the situation is not as idealised as this; we may not know all types of events and how often they occur. These situations will be discussed in the coming subsections. It is too early to conclude on the question of whether a surprising extreme event relative to the expected occurrence rate should be considered a black swan.

2.2. Is a black swan an extreme event with very low probability?

Let us return to the case presented by Lindley (2008) mentioned in the introduction section. In this example we consider a sequence of independent trials with a constant unknown chance of success. Lindley shows that a black swan (failure of a trial) is almost certain to arise if you are to see a lot of swans, although the probability that the next swan observed is white (success of a trial) is nearly one. This example is similar to the one studied above for the occurrence rate. When we focus on the occurrence rate of the first black swan we get a very low number, whereas if we consider the occurrence rate when studying a large set of swans, the probability of occurrence of a black swan becomes large. But there are some differences between these two examples, and these are important. To reveal these, we need to dive deep into the assumptions of Lindley's example.

Lindley assumes that we are facing a sequence of independent trials with a constant unknown chance of success, and to obtain his probabilities he assumes a prior probability distribution over this chance, namely a uniform distribution over the interval [0, 1]. This means that Lindley has tacitly assumed that there is a zero probability that all swans are white – there is a fraction of swans out there that are black (non-white). From this point on, his analysis cannot change this assumption. Of course then, the probability calculus will show that when considering a sufficient number of swans, some black ones will be revealed; see Appendix B. By the assumptions made, the analyst has removed the main uncertainty aspect of the analysis. In real life we cannot of course exclude the possibility that all swans are white. The uncertainty about all swans being white is a key issue here, and Lindley has concealed it in the assumptions. This is the problem raised by many authors, including Taleb (2007, 2010) and Aven et al.


(2011): the probability-based approach to treating the risk and uncertainties is based on assumptions that could hide critical assumptions and therefore provide a misleading description of the possible occurrence of future events. Let us reconsider Lindley's example to allow for a positive probability that all swans are white.

Let us assume that there are only two possibilities: either the fraction p of white swans is 100% or it is 99%. Hence p is either 1 or 0.99. Suppose the analyst assigns prior probabilities to these values of 0.2 and 0.8, respectively. Now suppose that the analyst has observed n swans and they are all white; what then is his posterior probability for the next m swans to be all white (n and m large numbers)? Using Bayes' formula in the standard way, we find that this probability is close to one, i.e. the probability of a black swan occurring is very small, in contrast to what Lindley computed in his analysis. See Appendix B for the details.

This example shows the importance of the assumptions made for the probabilistic analysis. Depending on the assumptions made, we get completely different conclusions about the probability of a black swan occurring.
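The contrast between the two priors can be made concrete with a small calculation (a sketch; the values n = m = 1000 are illustrative choices, not from the paper, and the uniform-prior result is the classical rule of succession, P(next m all white | n white observed) = (n + 1)/(n + m + 1)):

```python
def all_white_uniform_prior(n, m):
    # Lindley's set-up: uniform prior on the chance p of a white swan.
    # By the rule of succession, the posterior predictive probability
    # that the next m swans are all white, given n white swans observed.
    return (n + 1) / (n + m + 1)

def all_white_two_point_prior(n, m, prior_certain=0.2, prior_mixed=0.8):
    # Alternative prior from the text: p = 1 with probability 0.2,
    # p = 0.99 with probability 0.8.
    w_certain = prior_certain * 1.0 ** n  # likelihood of n white swans if p = 1
    w_mixed = prior_mixed * 0.99 ** n     # likelihood of n white swans if p = 0.99
    post_certain = w_certain / (w_certain + w_mixed)
    # Posterior predictive probability that the next m swans are all white.
    return post_certain * 1.0 + (1 - post_certain) * 0.99 ** m

n = m = 1000
p_uniform = all_white_uniform_prior(n, m)      # ~0.5: a black swan is quite likely
p_two_point = all_white_two_point_prior(n, m)  # ~0.9998: a black swan is very unlikely
```

Under the uniform prior a black swan among the next m swans remains quite likely however many white swans have been observed (for m comparable to n the probability is about 0.5); under the two-point prior, observing n white swans pushes nearly all posterior mass onto p = 1 and the probability of a black swan collapses towards zero. The assumptions drive the conclusion.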

Lindley's example also fails to reflect the essence of the black swan issue in another way. In real life the definition of a probability model and chances cannot always be justified; see Appendix A. Lindley's set-up is the common framework used in both traditional statistics and Bayesian analysis. Statisticians and others often simply presume the existence of this framework, and the elements of surprise that Taleb and others are concerned about fall outside the scope of the analyses. This is the key problem of the probability-based approach to risk analysis, and a possible interpretation of Taleb's work is as a critique of the lack of will and interest in seeing beyond this framework among statisticians and others when analysing risk. I share this concern; see e.g. Aven (2011a,b). Let us look into an example to clarify what the issue is all about.

Let us consider the event A that an individual person carries out a serious terrorist attack in a country during the next year. An example of such an attack is the killing in Norway on 22 July 2011, when a man placed a car bomb outside the government offices and massacred a number of people on the island of Utøya outside Oslo. For this type of event, it has no meaning to talk about a frequentist probability or chance. We cannot repeat situations like this under similar conditions. We may, however, use a knowledge-based (subjective, judgemental) probability to express our uncertainty and degree of belief about the occurrence of this type of event. Say that a group of experts assigns a probability of A equal to 0.001, given the background knowledge K at a specific point in time, i.e. P(A|K) = 0.1%. The group considers it quite unlikely that such an event will occur given the information it has available. Then an event A occurs. Was it a black swan? The assigned probability was very small, and the outcome was then somewhat surprising.

As discussed in Section 2.1, we must be careful in judging an event as surprising based on an isolated analysis; if we extend the types of events considered, the assigned probability for at least one event could be relatively large. Furthermore, the probability is a knowledge-based probability, and this probability could of course lead to poor predictions of the actual number of events occurring. The probabilities could be based on assumptions that turn out to be wrong, and/or on little relevant information and knowledge. In the terrorist attack example, the relevant police security services could have based their judgements on many assumptions concerning the motivation and capability of persons with extreme views to perform such an attack. We see that the issue of whether the event was a black swan or not, based on probability assignments, leads us into a discussion about the background knowledge that the probabilities are conditional on, the topic of the next section.

2.3. Is a black swan a surprising, extreme event in situations with large uncertainties?

There is a discussion in the literature concerning the suitability of subjective probabilities to reflect epistemic uncertainties (see e.g. Aven, 2010b; Dubois, 2010; North, 2010). The problem with these probabilities is that the knowledge that they are based on is not reflected in the assigned numbers. In two situations you may assign the same probability, say 0.2, but the knowledge basis could be completely different. Let us focus on the situation characterised by poor knowledge and, to be concrete, think about the terrorist example again. Here the police security services may have little information indicating such an event taking place, and a possible occurrence could be seen as surprising relative to this information.

Situations of poor knowledge and extreme consequences have been analysed by many authors. One of the earliest works goes back to Funtowicz and Ravetz (1985), who presented a model for classifying problem-solving strategies into applied sciences, professional consultancy and post-normal sciences. The model is illustrated in a diagram based on two axes: (i) decision stakes, the value dimension (costs, benefits), and (ii) system uncertainties, the knowledge dimension. Large uncertainties and high decision stakes characterise the post-normal sciences. An example of this category is climate change. Surprising extreme events may occur as we do not fully understand what is going on.

The Funtowicz and Ravetz model's axes resemble the two dimensions that characterise the risk descriptions based on the risk perspective R = (C&U), consequences C and uncertainties U; see Appendix A. If the risk is judged as large according to this perspective, extreme events may occur.

What large uncertainties in this setting mean is, however, not obvious (Aven, 2013). It is clearly not the same as a high probability as discussed above, as the background knowledge also has to be reflected. Following the risk perspective (C&U), the uncertainty dimension does not only cover the measure Q used to describe the uncertainties but also the background knowledge K that the measure is based on. Hence a high score on the uncertainties is not only associated with high probabilities of occurrences of some severe consequences but is also dependent on judgements made on the degree of knowledge supporting the probability assignments. If we judge the background knowledge as poor, this would affect the total score of the uncertainties being high or low. To make a judgement about K being poor, many types of aspects need to be taken into account, for example (Flage and Aven, 2009):

- The assumptions made represent strong simplifications.
- Data are not available, or are unreliable.
- There is a lack of agreement/consensus among experts.
- The phenomena involved are not well understood; models are non-existent or known/believed to give poor predictions.

To reflect the importance of background knowledge and key assumptions, it has been suggested to use an uncertainty measurement Q = (P, U_F), where U_F covers qualitative importance assessments of uncertainty factors which are based on key assumptions that the probabilities are founded on (Aven, 2010a; Flage and Aven, 2009).

Many other structures can be used to define what high uncertainties mean. One interesting category is "scientific uncertainties" as defined in relation to use of the precautionary principle. There exist a number of definitions of this principle (see e.g. Sandin et al., 2002; Aven, 2011c), but almost all definitions identify "scientific uncertainties" as the trigger or criterion for its invocation. As noted by Aven (2011c), the essential feature of the precautionary principle, when looking at its many definitions, is that it applies when the consequences of the activity considered could be serious but we do not fully understand what could happen. What this means in practice is a disputed topic (see e.g. Aven, 2011c; Cox, 2011; North, 2010; Vlek, 2011), but a common idea is that scientific uncertainties can be linked to the difficulty of establishing an accurate prediction model for the consequences. Note that consequences here are to be understood as also covering events occurring; in other words, the idea states that we have scientific uncertainties if there is a lack of consensus in the scientific community about a model for predicting the occurrence of the event. Clearly, if a probability model can be justified we cannot refer to scientific uncertainties, as the phenomenon is to a large extent understood, following this line of argument. The parameters of the model may be subject to uncertainties, but as long as a probability model has been justified, the situation is not characterised by scientific uncertainties.

Black swans (surprising, extreme events) can occur in situations of high risk in the (C&U) sense, in the post-normal science area and in the face of scientific uncertainties as discussed above, presuming that there is a reference for what can happen (some knowledge/beliefs). However, a black swan could also occur in a situation judged as having rather small or moderate risk in the (C&U) sense. The experts are convinced that all swans are white, based on thorough research. Then a discovery happens that completely changes the understanding of the phenomena: a black swan – a new type of virus for example – is discovered. The dramatic black swans would in fact happen when risk is judged as rather small or moderate, as, according to the prevailing thinking, there is then no reason to believe that a black swan should occur.

Some types of events occur completely as a surprise based on the knowledge available: no one has thought about the type of event before it occurs – we talk about unknown unknowns, which is the topic of the next subsection. Clearly, for these types of events there are scientific uncertainties in this sense; we are not able to establish an accurate prediction model for the event.

2.4. Is a black swan an unknown unknown?

The United States Secretary of Defense, Donald Rumsfeld, made the term "unknown unknowns" familiar to us all on 12 February 2002 at a press briefing where, addressing the absence of evidence linking the government of Iraq with the supply of weapons of mass destruction to terrorist groups, he said:

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know.

The term had, however, been used long before this. It is mentioned for example by Furlong (1984), and in relation to climate change it has been commonly used for many years (e.g. Myers, 1993).

In a risk setting, the idea of unknown unknowns intuitively captures the fact that the actual events occurring are not covered by the events identified in the risk description/risk assessment. Our focus here is of course on events with extreme consequences. Consider the risk perspective (C&U), or reformulated by specifically showing some events A included in C: (A, C&U) (A may for example represent a terrorist attack or a gas leakage in a process plant). When speaking about the risk (A, C&U), there are no unknown unknowns, as A and C simply express the actual events and consequences of the activity. However, in relation to a risk description (A′, C′, Q, K), we may have unknown unknowns (here A′ and C′ are the events and consequences respectively, specified in the risk assessment, Q is the measure of uncertainty used and K is the background knowledge; see Appendix A). The point is simply that the A′ events do not capture A; we may experience some surprises relative to A′, reflecting that A is different from A′. For example, when studying the life of a young person, he or she may die of a disease not known today; the A′ events do not cover the true A. Hence the unknown unknowns are included in the risk concept but are of course not captured by the risk description.

Also, if risk is (C&P), i.e. (A, C&P), unknown unknowns reflect that the actual types of events occurring are not captured by the risk description. If P is a knowledge-based probability, the relevant perspectives of risk do not, however, cover the unknown unknowns, as the risk and the risk description coincide. Seeing P as a frequentist probability, we are back to the (A, C&U) type of arguments; however, these perspectives have limited applicability, as frequentist probabilities cannot be justified for non-repetitive events and situations, as commented in Appendix A. Similar types of analyses can be made for the other risk perspectives defined in Appendix A.

The interesting question now is whether a black swan is to be considered an unknown unknown. Yes, it is a possible interpretation: a black swan is an event occurring which was not captured by the knowledge reflected in the risk description, i.e. A′, using the risk perspective (A, C&U). If we study discussions on the internet concerning unknown unknowns and black swans, for example related to climate change, we see that many people use these terms more or less interchangeably. Remember Taleb's definition of a black swan, where he refers to it as an event which lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. A reasonable interpretation of this statement is that if the risk description of the risk assessment is not able to capture the event, it is an unknown unknown and a black swan – nothing in the past can convincingly point to its possibility, interpreting the past in a wide knowledge sense.

3. Final remarks and conclusions

Based on the above analysis, two main ways of looking at the black swan concept seem most appropriate: (i) as a rare event with extreme consequences, or (ii) as a term for expressing an extreme, surprising event relative to the present knowledge.

Before making a conclusion about the preferred terminology, let us consider some practical examples. Firstly, think of the Fukushima Daiichi nuclear disaster in Japan in March 2011. Was this event a black swan? According to (i), yes, but not according to (ii). The scientific community had the knowledge to understand that in the case of a tsunami, which we know occurs from time to time, extreme consequences would be likely. The situation is better characterised as one where the risk associated with such tsunamis was accepted, not one where the event was surprising in relation to the available knowledge. A reviewer of an earlier version of the present paper commented that one can oppose this conclusion:

until this event, no one had conceived it a possibility that a tsunami would destroy all back-up systems simultaneously, as well as it (the earthquake) would prevent support from outside to reach the site.

It is tempting to say that if this was the case, something must have been wrong with the risk assessments, but I will not go further into this discussion here. The example clearly demonstrates the importance of being precise on whose knowledge we are talking about.

As another example, think of the terrorist attack in Norway on 22 July 2011. Again we conclude that it is a black swan in the sense of (i), and as for the tsunami case it can be discussed whether it was also a black swan with respect to (ii). Clearly, the event came


as a big surprise for the police security services, relative to their knowledge. It was thus a black swan for them. We can discuss whether this was a failure of the police security services, but that is not the issue here. It is not natural to classify the attack as an unknown unknown, as similar types of events have happened before.

The third and last example is the recent financial crisis. Was this a black swan? Again the answer is yes for (i), but as above the conclusion is not so clear for understanding (ii). There were many signals and warnings ahead of the problems we have experienced in the economy, but still it came as a surprise to many, probably most, people. Some experts would say that the event could have been foreseen given all the information at the time; others acknowledged that there were signals of a catastrophe but that the actual situation turned out to be much more severe than one could have predicted.

It is of course possible to use the term 'black swan' in both senses (i and ii). However, I believe (ii) should be employed, as the former understanding would result in too big a class of events, including those that are simply rare but well understood, as the analysis of interpretations 1 and 2 in Sections 2.1 and 2.2 shows. Definition (ii) (an extreme, surprising event relative to the present knowledge) is in line with Taleb's definition but is not only focused on surprises relative to the past; remember Taleb's "nothing in the past can convincingly point to its possibility". The key is the knowledge available. It would be better to say "nothing in our knowledge can convincingly point to its possibility".

What knowledge means can, however, be discussed. If we consult the knowledge (knowledge management) literature, we see that there exists a huge number of ideas and definitions of this concept; see Zins (2007), who documents 130 definitions of data, information and knowledge formulated by 45 scholars. Rowley (2006, 2007) also provides many definitions. A conceptual framework often referred to in this context is the DIKW hierarchy, which covers the data (D), information (I), knowledge (K) and wisdom (W) dimensions (see e.g. Adler, 1986; Ackoff, 1989; Zeleny, 1987; Frické, 2009). It is beyond the scope of the present paper to provide a detailed analysis of this issue, but some comments are in order. Data and information can be seen as a part of knowledge (given that they are cognitively assimilated; Hansson, 2002), but knowledge is also about beliefs. For example, we may think of a situation where some analysts believe that some potential attackers do not have the intentions and capacity to perform an attack. Their belief can be based on data and information, modelling and analysis. Hence a black swan can be an extreme, surprising event relative to the historical data present, but it can also be an extreme, surprising event relative to some relevant beliefs. This view on knowledge obviously means that it cannot be objective, as a belief is someone's belief. In general, knowledge then needs to be considered subjective or at best inter-subjective among people, for example experts.

In my view, the above analysis has shown that Taleb has a point when stressing the need for seeing beyond the standard probabilistic analysis when addressing risk. Lindley's criticism of Taleb shows, in my opinion, a lack of understanding of this need: the standard probabilistic methods and models used for analysing uncertainties are not able to predict black swans (in sense ii). The assumptions that the analyses and models are based on are often not given the attention that they deserve; refer to the example of Lindley (2008) discussed in Section 2.2.

The limitations of the probability model-based approach to risk have been addressed by many risk researchers and analysts (see e.g. Renn, 1998; Aven, 2011a), and frameworks have been suggested to meet the challenges raised above. A main category of these alternative perspectives on risk solves the "narrowness" of the probability-based approach by replacing probability with uncertainties in the definition of risk, and makes a clear distinction between the concept of risk and how it is described or measured; see Appendix A.

One may question why it is important to discuss the meaning of a black swan. Could interpreting a black swan in line with (i) rather than (ii) affect the risk assessment and risk management? Yes, it could. This paper is motivated by the need to strengthen the scientific platform of the risk discipline by providing new insights into the relationship between surprising events, risk, probability and uncertainty. For this discipline, as for all other scientific disciplines, it is essential that the conceptual basis is solid. However, the present paper is not only of theoretical and foundational interest. The main contribution of the work is not the definition of a black swan as such, but the structure developed to understand and analyse the related features of risk and uncertainties in a risk assessment and risk management context. We saw this clearly demonstrated when studying the example of Lindley (2008). According to Lindley, there is no need for uncertainty assessment that extends beyond the probabilistic one. Taleb and many others, including the present author, reject this idea and seek to build a scientific platform for a more complete approach to risk and uncertainties. The present paper can be seen as a contribution to this end.

In my view, Taleb's book represents in this respect an important contribution, although his work lacks a proper scientific framing. It has been an aim of the present work to give his ideas and related work a stronger basis with reference to the risk field, in particular in relation to key concepts such as risk and probability.

It is beyond the scope of the present paper to discuss in detail to what degree and how risk assessment and risk management can help protect against black swans. For some recent work addressing the issue, see Paté-Cornell (2012), Cox (2012) and Aven (submitted for publication). I would, however, like to make two comments. Firstly, although risk assessment can never fully capture the black swans, improvements can and should be made compared to the probabilistic approach that dominates present quantitative risk assessment practice. To this end, it is considered essential to establish risk-uncertainty frameworks that are broad enough to also capture such events, as what was unknown at time t could be known by time s, and what is unknown to persons x could be known to persons y. The knowledge dimension needs to be highlighted, much more than is typically seen in risk assessment applications today. More research is needed on how to do this in practice. Secondly, I believe that it is essential to acknowledge that we cannot produce "optimal strategies" for meeting black swans. Different types of optimisation methods seeking robust solutions can be effective in many cases, but we always need what I refer to as managerial review and judgement, where the decision maker sees beyond the formal decision support and gives weight to uncertainties and other concerns not captured by the formal assessments, including the cautionary and precautionary principles. See Aven (submitted for publication).

One of the reviewers of an earlier version of the present paper stated in a comment to the paper that

there is no need to try to agree on the definition of a "black swan" as a scientifically based term in this field, therefore I feel that this issue of a black swan is a non-issue, there is no need to use this term when dealing with risk and safety, one can express the problems of unexpected events in relation to risk management in more precise terms, adjusted to the context at hand.

I disagree with this reviewer that the black swan is a non-issue as stated here. Firstly, the concept of the black swan exists out there and is commonly used in relation to risk and safety. The idea has

48 T. Aven / Safety Science 57 (2013) 44–51

gained a lot of attention and is a hot topic in many forums that discuss safety and risk. We as a scientific and professional environment cannot just ignore this. We need to provide perspectives and guidance on what this concept is saying. We need to place this concept in the frameworks that the risk field has developed over the years, and this is exactly what this paper is doing. Secondly, the risk field needs suitable concepts for reflecting this type of phenomena. The popularity of the black swan concept clearly demonstrates this, but I would like to add that also from a strictly professional point of view there is a need for concepts that describe what a "surprise" really means in this context. These concepts cannot and should not be limited by probability-based thinking and ideas, as the phenomena that we are trying to characterise extend beyond this paradigm. In the extended non-probability context we need to develop proper terminology (the present situation is rather chaotic), and the "black swan" concept represents in my view a useful contribution to this end. We can use statements such as 1-4 in Section 1, but when communicating and discussing issues linked to "surprising events", my experience is that it is very helpful to have at hand a term like black swan that people can easily relate to. Using the black swan concept, I have noticed increased interest and enthusiasm for discussing risk issues. Thirdly and lastly, I am convinced that studying the black swan concept provides new insights into the risk field, about the links between risk, probability and uncertainties, as was also highlighted in the above discussion.

To summarise, I conclude that a black swan is to be seen as a surprising extreme event relative to the present knowledge/beliefs. Hence the concept always has to be viewed in relation to whose knowledge/beliefs we are talking about, and at what time. In a risk assessment context and following the (A, C&U) risk perspective, a black swan can be seen as an extreme event A occurring that is not specified by the A' events of the risk assessment; it is a surprise relative to the knowledge defined by the A' events.

4. Uncited references

IRGC (2005), Verma and Verter (2007) and Willis (2007).

Acknowledgment

The author is grateful to several anonymous reviewers for their useful comments and suggestions on earlier versions of this paper.

Appendix A

A.1. Fundamentals about the concepts of risk, probability and uncertainty

The following overview provides a list of the main categories of definitions/perspectives of risk as used in professional/scientific contexts (Aven, 2012):

(1) Risk = Expected consequences (R = EC) or expected utility (R = EU).
(2) Risk = Probability of an (undesirable) event (R = P).
(3) Risk = Objective uncertainty (R = OU).
(4) Risk = Uncertainty about a loss (R = U).
(5) Risk = Potential/possibility of a loss (R = PO).
(6) Risk = Probability and scenarios/consequences/severity of consequences (R = P&C).
(7) Risk = Event or consequence (R = C).
(8) Risk = Consequences/damage/severity of these + uncertainty (R = C&U).
(9) Risk = The effect of uncertainty on objectives (R = ISO).

An example of risk definition (6) is the well-known triplet (s_i, p_i, c_i), where s_i is the ith scenario, p_i is the probability of that scenario, and c_i is the consequence of the ith scenario, i = 1, 2, ..., N; i.e. risk captures: What can happen? How likely is that to happen? If it does happen, what are the consequences? (Kaplan and Garrick, 1981).
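As an illustration, the triplet description can be sketched in code; the scenarios and numbers below are invented for the example and are not taken from the paper.

```python
from dataclasses import dataclass

# Kaplan-Garrick risk description: a list of triplets (s_i, p_i, c_i).
@dataclass
class Triplet:
    scenario: str       # s_i: what can happen?
    probability: float  # p_i: how likely is that to happen?
    consequence: float  # c_i: if it happens, what are the consequences?

# Illustrative, invented scenarios for some activity.
risk_description = [
    Triplet("minor leak", 0.10, 1.0),
    Triplet("major leak", 0.01, 50.0),
    Triplet("rupture", 0.001, 1000.0),
]

# One common (but partial) summary of the triplets: expected consequence.
expected_consequence = sum(t.probability * t.consequence for t in risk_description)
print(expected_consequence)  # 0.1*1 + 0.01*50 + 0.001*1000 = 1.6
```

Note that the expected-value summary is exactly the kind of reduction that definitions (1) and (2) above rely on; the full triplet list retains more of the risk picture.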

Rosa (1998, 2003) provides an example of (7): risk is a situation or event where something of human value (including humans themselves) is at stake and where the outcome is uncertain. Examples of (8) are the definitions used by Aven (2007), expressing that risk is equal to the two-dimensional combination of events/consequences (of an activity) and associated uncertainties, and that of Aven and Renn (2009), which states that risk is uncertainty about and severity of the consequences (or outcomes) of an activity with respect to something that humans value. The consequences may be seen in relation to a reference level (ideal states, planned values, expected values, objectives). The category (9) definition is the one used by ISO (2009a,b).

When risk is defined by consequences and uncertainties (R = C&U) (and also when R = C), risk is described by specifying the events/consequences C and using a description (measure) of uncertainty Q. Specifying the events/consequences means identifying a set of events/quantities of interest C' that characterise the events/consequences C. Examples of C' are the profit from an investment and the number of injuries in a safety context. Depending on the principles adopted for specifying C and the choice of Q, we obtain different perspectives on how to describe/measure risk. As a general description of risk we are led to the triplet (C', Q, K), where K is the knowledge that C' and Q are based on. The most common tool for representing or expressing the uncertainties U is probability P, but other tools also exist, including imprecise (interval) probability and representations based on the theories of evidence (belief functions) and possibility (Dubois, 2010; Aven and Zio, 2011).

For the definitions that are based on probabilities and expected values, different interpretations may apply. Basically, there are two ways to understand the probability of an event A in a practical setting (Aven, 2012):

(i) as a frequentist probability, which we denote by P_f(A). This probability is defined as the fraction of times the event A occurs when considering an infinite population of situations or scenarios similar to the one in focus. It is a model concept, a parameter of a probability model. As the frequentist probability P_f(A) is unknown, it has to be estimated. In this way we obtain a clear distinction between the underlying concept P_f(A) and its estimate P_f(A)* (say), or

(ii) as a subjective (judgemental, knowledge-based) probability P(A) = P(A|K), expressing the assessor's uncertainty (degree of belief) about the occurrence of event A given the background knowledge K. To interpret this probability, an uncertainty standard approach is commonly used: the probability P(A) = 0.2 (say) means that the assessor compares his/her uncertainty (degree of belief) about the occurrence of the event A with the standard of drawing at random a specific ball from an urn that contains five balls.
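The two interpretations can be contrasted in a short sketch; the "true" chance of 0.2 and the sample size are assumptions made for illustration only.

```python
import random

random.seed(1)
p_true = 0.2  # the unknown frequentist probability P_f(A) in this toy model

# (i) Frequentist: P_f(A) is estimated by the relative frequency of A
# over many "similar situations" (here simulated repetitions).
n = 100_000
estimate = sum(random.random() < p_true for _ in range(n)) / n  # P_f(A)*

# (ii) Subjective: P(A|K) = 0.2 is simply an assigned degree of belief,
# interpreted against the urn standard: the assessor's uncertainty equals
# that of drawing one specific ball from an urn containing five balls.
subjective = 1 / 5

print(round(estimate, 3), subjective)
```

The point of the sketch is that the estimate in (i) only makes sense because the situation can be repeated, whereas (ii) requires no repetition at all.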

The subjective (knowledge-based, judgemental) probabilities express epistemic uncertainties, whereas the variation generating the frequentist probabilities and the probability models expresses aleatory (stochastic) uncertainty. Consider a specially designed die with six outcomes 1, 2, ..., 6 as usual but without the symmetry of the standard die. In this case we would establish a probability model expressing that the distribution of outcomes is given by (p_1, p_2, ..., p_6), where p_i is the frequentist probability of outcome i, interpreted as the fraction of outcomes showing i, the p_i summing to 1. However, in a risk assessment context, repeating the situations may be more difficult, making the establishment of such models problematic, and consequently also the distinction between aleatory uncertainty and epistemic uncertainty. The frequentist probability and the probability model cannot be defined as easily as in the die example. In many cases they cannot be meaningfully defined at all, as for example the frequentist probability of a terrorist attack (Aven and Renn, 2010, p. 80).
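A toy version of the asymmetric-die example, with weights assumed for illustration: the frequentist probabilities p_1, ..., p_6 are meaningful exactly because the experiment is repeatable, and the estimates converge to them.

```python
import random
from collections import Counter

random.seed(0)
# Assumed "true" frequentist probabilities (p_1, ..., p_6) of the
# specially designed die; they sum to 1 but lack the usual symmetry.
p = [0.30, 0.25, 0.15, 0.12, 0.10, 0.08]

# Repeatability: roll the die many times and estimate each p_i by the
# fraction of outcomes showing i.
rolls = random.choices(range(1, 7), weights=p, k=200_000)
counts = Counter(rolls)
p_hat = [counts[i] / len(rolls) for i in range(1, 7)]

print(all(abs(a - b) < 0.01 for a, b in zip(p_hat, p)))  # True
```

For a one-off event such as a terrorist attack there is no analogue of the 200,000 rolls, which is the paper's point: the frequentist model has nothing to converge to.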

If probability is understood as a frequentist probability, definition categories (1), (2) and (6) are based on a model concept (a probability model with unknown parameters), and risk has to be estimated. In the case that probability is subjective, these definitions would, however, represent judgements (degrees of belief) of the assessors.

If the definitions are based on subjective probabilities, they do not allow for a distinction between the concept of risk and how to measure/describe risk; the concept of risk is then the same as the measurement of the risk. If, on the other hand, risk is based on frequentist probabilities, such a distinction is obtained: between the concept of risk based on the underlying true frequentist probability and the measurement/description of risk based on estimation of these probabilities. However, in the latter case the risk concept is based on modelling, which means this dichotomy does not always exist, as this modelling will only be justified in situations of repeatability.

Statistical data analysis is based on one or the other of two alternative conceptual foundations: the traditional frequentist approach and the Bayesian approach. The former is based on well-known principles of statistical inference, the use of probability models, the interpretation of probabilities as relative frequencies, point estimates, confidence interval estimation and hypothesis testing. By contrast, the Bayesian approach is based on the concept of subjective probabilities and is typically applied in situations in which there exists only a limited amount of data. In a Bayesian context, the frequentist probabilities are often referred to as chances (Singpurwalla, 2006; Lindley, 2000). Probability models also constitute a basic pillar of a Bayesian analysis. The idea in such an analysis is to first establish probability models that adequately represent the aleatory uncertainties. The epistemic uncertainties, reflecting incomplete knowledge or lack of knowledge about the values of the parameters of the models, are then represented by prior subjective probability distributions. When new data on the phenomena studied become available, Bayes' formula is used to update the representation of the epistemic uncertainties in terms of the posterior distributions. Finally, the predictive distributions of the quantities of interest and the observables (for example, the number of system failures) are derived by applying the law of total probability. The predictive distributions are epistemic, but they also reflect the inherent variability of the phenomena being studied, i.e. the aleatory uncertainties.
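The update-then-predict scheme just described can be sketched with the conjugate beta-binomial model (an assumed example; the paper does not prescribe this model): a Bernoulli probability model carries the aleatory variation, a Beta prior carries the epistemic uncertainty about the chance p, and the predictive distribution follows by the law of total probability.

```python
from math import comb, exp, lgamma

def lbeta(x, y):
    """Log of the Beta function B(x, y)."""
    return lgamma(x) + lgamma(y) - lgamma(x + y)

# Epistemic uncertainty about the chance p: uniform Beta(1, 1) prior.
a, b = 1, 1

# New data on the phenomenon: 7 successes in 10 Bernoulli trials.
successes, n = 7, 10
a_post, b_post = a + successes, b + n - successes  # Bayes' formula (conjugacy)

def predictive(k, m, a, b):
    """P(k successes in m future trials): beta-binomial predictive pmf,
    i.e. the binomial likelihood averaged over the posterior of p."""
    return comb(m, k) * exp(lbeta(k + a, m - k + b) - lbeta(a, b))

# Predictive distribution for the next 5 trials: epistemic, but it also
# reflects the aleatory variability of the phenomenon.
pred = [predictive(k, 5, a_post, b_post) for k in range(6)]
print([round(q, 3) for q in pred])
```

The posterior and predictive steps here are exactly the "update the epistemic uncertainties" and "apply the law of total probability" steps of the text, in the one case where they have a closed form.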

Appendix B

(a) The first part of this appendix covers the black swan example presented by Lindley (2008) and shows how he arrived at his results.

Let p be the fraction of successes (white swans) in the infinite series of trials (total population of swans), and let X_n be the number of successes (white swans) in n trials (swans). Furthermore, let Y_m be the number of successes (white swans) in m new trials (swans). We will compute the probability of m successes in the new trials given only successes in the first n trials, i.e. P(Y_m = m | X_n = n). By conditioning on the true value of p, we find that

P(Y_m = m \mid X_n = n) = \int_0^1 P(Y_m = m \mid X_n = n, p) \, dH(p \mid X_n = n)
                        = \int_0^1 P(Y_m = m \mid p) \, dH(p \mid X_n = n)
                        = \int_0^1 p^m \, dH(p \mid X_n = n),    (B.1)

where H(p | X_n = n) is the posterior distribution of p. Lindley (2008) assumes a uniform distribution for H, and hence the posterior density f of p given X_n = n equals:

f(p \mid X_n = n) = c \, P(X_n = n \mid p) f(p) = c \, p^n \cdot 1 = (n + 1) p^n,

where c is a constant such that the integral over this density equals one.

Hence (B.1) equals

\int_0^1 p^m (n + 1) p^n \, dp = (n + 1)/(m + n + 1),

as presented by Lindley (2008). We see that if m is equal to one and n is large, this probability is close to one; i.e., the probability that the next swan is black is negligible. But if m is large, the probability (B.1) is close to zero; i.e., the probability of at least one black swan in the large sample of size m is close to one.
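Lindley's closed form can be checked numerically; the grid-based midpoint integration below is only a quick sanity check, and the values of n and m are chosen for illustration.

```python
def predictive_all_white(n, m, steps=100_000):
    """Midpoint-rule approximation of (B.1) with a uniform prior:
    the integral over [0, 1] of (n + 1) * p^(m + n) dp."""
    h = 1.0 / steps
    return sum((n + 1) * ((i + 0.5) * h) ** (m + n) * h for i in range(steps))

n = 1000
for m in (1, 10_000):
    numeric = predictive_all_white(n, m)
    closed = (n + 1) / (m + n + 1)  # Lindley's result
    assert abs(numeric - closed) < 1e-3
    print(m, round(closed, 4))
# Small m: probability near one (the next swan is almost surely white).
# Large m: probability near zero (at least one black swan is very likely).
```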

(b) The second part of this appendix shows the computational result of the black swan example presented in Section 2.2. Here the prior probabilities give mass 0.2 and 0.8 to the p values 1.0 and 0.99, respectively.

The task is again to compute P(Y_m = m | X_n = n). Following arguments as above, we find

P(Y_m = m \mid X_n = n) = P(Y_m = m \mid X_n = n, p = 1) P(p = 1 \mid X_n = n)
                        + P(Y_m = m \mid X_n = n, p = 0.99) P(p = 0.99 \mid X_n = n)
                        = 1 \cdot P(p = 1 \mid X_n = n) + 0.99^m \, P(p = 0.99 \mid X_n = n).

Now, using Bayes' formula, it is not difficult to see that

P(p = 1 \mid X_n = n) = c \, P(X_n = n \mid p = 1) P(p = 1) = c \cdot 1 \cdot 0.2 and
P(p = 0.99 \mid X_n = n) = c \, P(X_n = n \mid p = 0.99) P(p = 0.99) = c \cdot 0.99^n \cdot 0.8,

leading to

P(Y_m = m \mid X_n = n) = \frac{0.2}{0.2 + 0.99^n \cdot 0.8} + \frac{0.8 \cdot 0.99^{m+n}}{0.2 + 0.99^n \cdot 0.8}.

We see that in this case, if n is large, this probability is close to one; i.e., the probability of at least one black swan occurring is close to zero. This is also the case for large m values, which is in contrast to Lindley's result in (a) above.
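The closed form derived above is easy to evaluate; the code below is a direct transcription of the formula (with the prior masses 0.2 and 0.8 as given), showing the contrast with part (a).

```python
def prob_all_white(n, m, mass_one=0.2, p_alt=0.99):
    """P(Y_m = m | X_n = n) for the two-point prior: mass `mass_one`
    on p = 1 and the rest on p = p_alt (0.99 in the example)."""
    denom = mass_one + (1 - mass_one) * p_alt ** n
    return (mass_one + (1 - mass_one) * p_alt ** (m + n)) / denom

# With n = 1000 white swans observed, the posterior concentrates on
# p = 1, so even for very large m the probability of seeing only white
# swans stays near one, unlike the uniform-prior result in part (a).
for m in (1, 10_000):
    print(m, round(prob_all_white(1000, m), 4))
```

The same n and m that made a black swan near-certain under the uniform prior make it near-impossible here, which is the point of the example: the conclusion is driven by the prior beliefs, i.e. by the knowledge K.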

References

Ackoff, R.L., 1989. From data to wisdom. Journal of Applied Systems Analysis 16, 3–

9.

Adler, M.J., 1986. A Guidebook to Learning for the Lifelong Pursuit of Wisdom.

Collier Macmillan, New York.

Aven, T., 2007. A uniﬁed framework for risk and vulnerability analysis and

management covering both safety and security. Reliability Engineering and

System Safety 92, 745–754.

Aven, T., 2010a. On how to deﬁne, understand and describe risk. Reliability

Engineering and System Safety 95, 623–631.

Aven, T., 2010b. On the need for restricting the probabilistic analysis in risk

assessments to variability. Risk Analysis 30, 354–360 (With discussion pp. 381–

384).

Aven, T., 2011a. Selective critique of risk assessments with recommendations for

improving methodology and practice. Reliability Engineering and System Safety

96, 509–514.

Aven, T., 2011b. Quantitative Risk Assessment. The Scientiﬁc Platform. Cambridge

University Press, Cambridge.

Aven, T., 2011c. On different types of uncertainties in the context of the

precautionary principle. Risk Analysis 31 (10), 1515–1525 (With discussion

pp. 1538–1542).

Aven, T., 2012. The risk concept – historical and recent development trends.

Reliability Engineering and System Safety 99, 33–44.


Aven, T., 2013. On Funtowicz & Ravetz’s ‘‘decision stake – system uncertainties’’

structure and recently developed risk perspectives frameworks. Risk Analysis

33 (2), 270–280.

Aven, T., submitted for publication. On how to deal with deep uncertainties in a risk

assessment and management context. Risk Analysis.

Aven, T., Renn, O., 2009. On risk deﬁned as an event where the outcome is uncertain.

Journal of Risk Research 12, 1–11.

Aven, T., Renn, O., 2010. Risk Management and Risk Governance. Springer Verlag,

Berlin.

Aven, T., Zio, E., 2011. Some considerations on the treatment of uncertainties in risk

assessment for practical decision-making. Reliability Engineering and System

Safety 96, 64–74.

Aven, T., Renn, O., Rosa, E., 2011. The ontological status of the concept of risk. Safety

Science 49, 1074–1079.

Cox, T., 2011. Clarifying types of uncertainty: when are models accurate, and

uncertainties small? Risk Analysis 31, 1530–1533.

Cox, T., 2012. Confronting deep uncertainties in risk analysis. Risk Analysis 32 (10),

1607–1629.

Dubois, D., 2010. Representation, propagation and decision issues in risk analysis

under incomplete probabilistic information. Risk Analysis 30, 361–368.

Flage, R., Aven, T., 2009. Expressing and communicating uncertainty in relation to

quantitative risk analysis (QRA). Reliability & Risk Analysis: Theory &

Applications 2 (13), 9–18.

Frické, M., 2009. The knowledge pyramid: a critique of the DIKW hierarchy. Journal

of Information Science 35 (2), 131–142.

Funtowicz, S.O., Ravetz, J.R., 1985. Three types of risk assessment. In: Whipple, C.,

Covello, V.T. (Eds.), Risk Analysis in the Private Sector. Plenum Press, New York.

Furlong, R.B., 1984. Clausewitz and Modern War Gaming: Losing can be better than

winning. Air University Review 35, 4–7.

Hammond, P., 2009. Adapting to the Entirely Unpredictable: Black Swans, Fat Tails,

Aberrant Events, and Hubristic Models. The University of Warwick Bulletin of

the Economics Research Institute, 2009/10, 1, November.

Hansson, S.O., 2002. Uncertainties in the knowledge society. International Social

Science Journal 54 (171), 39–46.

IRGC International Risk Governance Council, 2005. White Paper on Risk

Governance. Towards an Integrative Approach. Author: O. Renn with Annexes

by P. Graham. International Risk Governance Council, Geneva.

ISO, 2009a. Risk management—vocabulary. Guide 73, 2009.

ISO, 2009b. Risk Management – Principles and Guidelines, ISO 31000:2009.

Kaplan, S., Garrick, B.J., 1981. On the quantitative deﬁnition of risk. Risk Analysis 1,

11–27.

Lindley, D.V., 2000. The philosophy of statistics. The Statistician 49 (3), 293–337.

Lindley, D.V., 2008. The Black Swan: the impact of the highly improbable. Reviews.

Signiﬁcance (March), 42.

Myers, N., 1993. Biodiversity and the precautionary principle. Ambio 22 (2/3), 74–

79 (Biodiversity: Ecology, Economics, Policy).

North, W., 2010. Probability theory and consistent reasoning. Risk Analysis 30 (3),

377–380.

Paté-Cornell, M.E., 2012. On black swans and perfect storms: risk analysis and

management when statistics are not enough. Risk Analysis 32 (11), 1823–1833.

Renn, O., 1998. Three decades of risk research: accomplishments and new

challenges. Journal of Risk Research 1 (1), 49–71.

Rosa, E.A., 1998. Metatheoretical foundations for post-normal risk. Journal of Risk

Research 1, 15–44.

Rosa, E.A., 2003. The logical structure of the social ampliﬁcation of risk framework

(SARF): metatheoretical foundation and policy implications. In: Pidgeon, N.,

Kaspersen, R.E., Slovic, P. (Eds.), The Social Ampliﬁcation of Risk. Cambridge

University Press, Cambridge.

Rowley, J., 2006. Where is the wisdom that we have lost in knowledge? Journal of

Documentation 62 (2), 251–270.

Rowley, J., 2007. The wisdom hierarchy: representations of the DIKW hierarchy.

Journal of Information Science 33 (2), 163–180.

Sandin, P., Peterson, M., Hansson, S.O., Rudén, C., Juthe, A., 2002. Five charges

against the precautionary principle. Journal of Risk Research 5, 287–299.

Singpurwalla, N.D., 2006. Reliability and Risk: A Bayesian Perspective. Wiley,

Chichester.

Taleb, N.N., 2007. The Black Swan: The Impact of the Highly Improbable. Penguin,

London.

Taleb, N.N., 2010. The Black Swan: The Impact of the Highly Improbable, second ed.

Penguin, London.

Taleb, N.N., 2011. <http://www.fooledbyrandomness.com/DerivTBS.htm> (accessed

12.12.11).

Verma, M., Verter, V., 2007. Railroad transportation of dangerous goods: population

exposure to airborne toxins. Computers and Operations Research 34, 1287–

1303.

Vlek, C., 2011. Straightening out the grounds for precaution: a commentary and

some suggestions about Terje Aven’s ‘‘On Different Types of Uncertainties...’’.

Risk Analysis 31, 1534–1537.

Willis, H.H., 2007. Guiding resource allocations based on terrorism risk. Risk

Analysis 27 (3), 597–606.

Zeleny, M., 1987. Management support systems: towards integrated knowledge

management. Human Systems Management 7 (1), 59–70.

Zins, C., 2007. Conceptual approaches for deﬁning data, information, and

knowledge. Journal of the American Society for Information Science and

Technology 58 (4), 479–493.

