
Soc Choice Welfare (2007) 29:649–663

DOI 10.1007/s00355-007-0249-9

ORIGINAL PAPER

Social decisions about risk and risk-taking

Sven Ove Hansson

Received: 13 July 2005 / Accepted: 4 May 2007 / Published online: 5 July 2007
© Springer-Verlag 2007

Abstract There has been very little contact between risk studies and more general
studies of social decision processes. It is argued that as a consequence of this, an
oversimplified picture of social decision processes prevails in studies of risk. Tools
from decision theory, welfare economics, and moral theory can be used to analyze the
intricate inter-individual relationships that need to be treated in an adequate account of
social decision-making about risk. However, this is not a matter of simple or straight-
forward application of existing theory. It is a challenging area for new theoretical
developments.

1 Introduction

In the last few decades, studies of risk have grown into a major field of interdisciplinary
inquiry. The ultimate motivation of most risk studies is to facilitate social decisions
about risks, in particular the risks to health and the environment that modern technol-
ogies give rise to. Principles and guidance for such decisions have a prominent role
in modern risk studies. But in spite of this, there is very little contact between risk
studies and more general studies of social decision processes. As a result, risk studies
are dominated by oversimplified approaches to social decision-making.
In this contribution I will first summarize the standard approach to decision-making
in risk studies (Sect. 2). After that, I will discuss its major deficiencies (Sects. 3–4) and
propose how we can use major insights from decision theory to improve the analysis
of social decisions involving risk (Sects. 5–7).

S. O. Hansson (B)
Department of Philosophy and the History of Technology, Royal Institute of Technology (KTH),
Teknikringen 78 B, 2tr, 100 44 Stockholm, Sweden
e-mail: soh@kth.se
URL: www.infra.kth.se/~soh

I would like to emphasize from the beginning that this is not just a matter of apply-
ing standard criteria for decision-making under risk. We are concerned with decision-
making under the combined complications of being both (1) social, i.e. collective, and
(2) concerned with issues of risk, in the wide sense of the term that we use when
discussing for instance health risks and environmental risks. Neither social decision
theory nor risk studies has a satisfactory answer to the question how these combined
complexities should be dealt with in practical decision-making. I will try to give an
indication of some important components that should be included in an answer to that
question.

2 The standard approach

The dominant systematic approach to risk has been developed in the discipline of risk
analysis. Modern risk analysis grew out of the various reactions that public opposition
to new technologies gave rise to in the 1960s (Otway 1987; Hansson 1993). It has a
strong focus on finding rational solutions to decision problems, in much the same way
as mainstream economics. In the subdiscipline of risk-benefit analysis, the overarching
goal is to achieve economic rationality in the management of risks (Hansson 2007b).
Much of the early work in risk analysis was devoted to chemicals and nuclear
technology, the same risk factors that public opposition had targeted. Today, the meth-
odology of risk analysis is applied to a wide range of social areas, including air
pollution (Pandey and Nathwani 2003), radioactive waste repositories (Cohen 2003),
airbag regulation (Thompson et al. 2002), road construction (Usher 1985), and efforts
to detect asteroids or comets that could strike the earth (Gerrard 2000), just to mention
a few examples.
When there is risk, there are possible unwanted outcomes that may or may not be
realized. In risk analysis, each such possible outcome is assigned a quantitative value
that is equal to the probability-weighted value of the outcome, should it materialize.
Hence, a risk of one in 50 that a person will die is counted as the death of 0.02 persons.
A probability of 1 in 1,000,000 that 1,000,000 persons will die is counted as equally
serious as a probability of 1 in 100 that 100 persons will die, and they are also both
counted as equally serious as certainty that one person will die. From a decision-theo-
retical point of view this is of course nothing else than expected utility theory, applied
to outcomes with negative utilities.
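In symbols (the notation is mine, not part of the original text): if an option can lead to unwanted outcomes with probabilities p_1, ..., p_n and disutilities d_1, ..., d_n, its "risk" is the expected disutility:

```latex
% Expected disutility as a risk measure (a notational gloss on the text above).
R \;=\; \sum_{i=1}^{n} p_i \, d_i
% Examples from the text:
%   1/50 chance of one death:        R = (1/50) \cdot 1       = 0.02 deaths
%   10^{-6} chance of 10^6 deaths:   R = 10^{-6} \cdot 10^{6} = 1 death
%   10^{-2} chance of 10^2 deaths:   R = 10^{-2} \cdot 10^{2} = 1 death
```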
To exemplify this approach, we can examine its application in risk assessments of
the transportation of nuclear material on roads and rails. In such studies, the radiologi-
cal risks associated with normal handling and with various types of possible accidents
are quantified, and so are non-radiological risks including fatalities caused by acci-
dents and vehicle exhaust emissions. As noted by the authors of an unusually careful
emissions study, “[t]he end result of all estimates are unit risk factors for use in trans-
portation risk assessment. The unit risk factors are in the form of latent cancer fatalities
per kilometer (LCFs/km) or latent fatalities per kilometer of travel by truck or railcar”
(Biwer and Butler 1999, p. 1158). The risk associated with a given shipment is then
obtained by multiplying the distance travelled by this unit risk factor. These calcula-
tions will provide us with the total number of (statistically expected) deaths. Similar
methods are used in all branches of risk analysis, although the technical details vary
with the application area.
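The arithmetic of this scheme is simple enough to sketch. The following fragment illustrates the unit-risk-factor calculation described above; the factor values are invented placeholders, not Biwer and Butler's estimates.

```python
# Illustrative sketch of a transportation risk assessment based on unit risk
# factors. The numerical factors below are invented for illustration; they
# are not taken from Biwer and Butler (1999).

def shipment_risk(distance_km: float, unit_risk_per_km: float) -> float:
    """Statistically expected latent fatalities for a single shipment."""
    return distance_km * unit_risk_per_km

# Hypothetical unit risk factors (latent cancer fatalities per km).
TRUCK_FACTOR = 2.0e-8
RAIL_FACTOR = 5.0e-9

# Expected deaths for a 1,200 km shipment by each mode.
print(shipment_risk(1200, TRUCK_FACTOR))  # ~2.4e-05 expected deaths
print(shipment_risk(1200, RAIL_FACTOR))   # ~6.0e-06 expected deaths
```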
A proponent of this methodology justified it as follows:
The only meaningful way to evaluate the riskiness of a technology is through
probabilistic risk analysis (PRA). A PRA gives an estimate of the number of
expected health impacts—e.g., the number of induced deaths—of the tech-
nology, which then allows comparisons to be made with the health impacts
of competing technologies so a rational judgment can be made of their rela-
tive acceptability. Not only is that procedure attractive from the standpoint of
scientific logic, but it is easily understood by the public. (Cohen 2003, p. 909)
The dominance of expected utility methodology in risk analysis can be seen from
how issues such as uncertainty and justice are discussed in the risk-analytical litera-
ture. Treatments of uncertainty in risk analysis usually take the form of determining a
distribution of expected utility values (“risk values”) (see for instance von Stackelberg
et al. 2002). Discussions of justice tend to focus on the distribution of the aggre-
gate expectation value among population strata. Hence, the expectation value is the
starting-point, to which other considerations such as uncertainty and justice are added.
The dominance of the expected (dis)utility approach can also be seen from the
strong tendency in the technical (and increasingly the non-technical) literature to use
the word “risk” to denote expected disutility. Expectation values have been calculated
since the seventeenth century, but the use of the term “risk” to denote them is rela-
tively new. It was introduced in the influential Reactor Safety Study (WASH-1400, the
Rasmussen report) from 1975 (Rechard 1999, p. 776). Today it is the standard tech-
nical meaning of the term “risk”. The International Organization for Standardization
(2002) comes close to this definition when it defines risk somewhat vaguely as “the
combination of the probability of an event and its consequences”. However, since
“risk” has been widely used in various senses for more than 300 years, it should be no
surprise that attempts to reserve it for a technical concept that was introduced 30 years
ago have given rise to significant communicative failures.1 We all use this word in a
non-quantitative sense in everyday language to denote negative events. (“Cancer is a
major health risk.”) We also use it to denote the probability of an unwanted event.
(“The risk of such an accident is one in 10,000.”)

1 In the early 1980s, authoritative attempts were made to fix the usage of “risk” to meanings other than
expected disutility. In a joint book from 1981, several of the leading researchers in the field proposed to
reserve it for a non-quantitative sense. They wrote that “[w]hat distinguishes an acceptable-risk problem
from other decision problems is that at least one alternative option includes a threat to life or health among
its consequences. We shall define risk as the existence of such threats” (Fischhoff et al. 1981, p. 2). In 1983,
a Royal Society working group defined risk as “the probability that a particular adverse event occurs during
a stated period of time, or results from a particular challenge” (Royal Society 1983, p. 22). Similarly, the
US National Research Council (1983) defined risk assessment as an assessment of the “probability that an
adverse effect may occur as a result of some human activity.” In the 1983 Royal Society report, the term
“detriment” was proposed to denote the product of risk and harm.

Negative outcomes can be of many different kinds. Risk analysis has mostly focused
on human mortality, morbidity, and environmental damage. (Economic risks are
analyzed by economists in other contexts. The communication between these two
discourses is quite limited.) Death, disease, and environmental damage are not easily
commensurated. There is no easy answer to the question how many cases of juvenile
diabetes correspond to one death, or what amount of human suffering or death cor-
responds to the extinction of an antelope species. To a surprisingly large extent, risk
analysts avoid this problem by restricting their calculations to human mortality. When
different types of outcomes are included in the same analysis, the usual method
is to assign monetary values to them. By putting a “price” on all outcomes, including
deaths, the overall disutility can be calculated. Risk analysis has imported methods
from cost–benefit analysis to convert deaths and other negative outcomes into monetary
losses. These include methods that make use of expected earnings, actual sums paid
to save lives, willingness to pay for reduced risks of death, etc. These methods all
have their own problems, but I will not discuss them here. (For critical appraisals, see
Mishan 1985 and Hansson 2007b.)
When price tags are put on risks and negative outcomes, they can be weighed against
costs and economic benefits. It then turns out that there are large differences between
different policy areas in how much we pay for risk abatement, typically expressed
as differences in costs per expected saved life. Risk analysts, and in particular risk-
benefit analysts, tend to be dissatisfied with these differences. They want us to make
our decisions according to a uniform “price” that covers all policy areas. The idea
is to calculate the risks in all these different sectors, and then allocate resources for
abatement in a way that minimizes the total risks. This is often claimed to be the
only rational way to decide on risks. Viscusi (2000, p. 855) is representative when
proposing that we should “spend up to the same marginal cost-per-life-saved amount
for different agencies.”
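The optimization logic behind such proposals can be made explicit. The following sketch is my illustration, not Viscusi's model; it assumes that each sector offers a limited stock of life-saving measures at a constant cost per statistical life, so that a fixed budget saves the most expected lives when the cheapest opportunities are funded first, regardless of sector.

```python
# Sketch of uniform cost-per-life-saved budgeting (illustrative assumptions:
# constant marginal costs and capped opportunities per sector; all numbers
# are invented).

def allocate(budget, sectors):
    """sectors: list of (name, cost_per_statistical_life, lives_savable)."""
    plan, lives_saved = {}, 0.0
    # Fund opportunities in order of increasing cost per life saved.
    for name, cost, cap in sorted(sectors, key=lambda s: s[1]):
        spend = min(budget, cost * cap)
        if spend <= 0:
            break
        plan[name] = spend
        lives_saved += spend / cost
        budget -= spend
    return plan, lives_saved

sectors = [("road safety", 2e6, 50),     # $2M per life, 50 lives savable
           ("drinking water", 5e6, 30),
           ("radiation", 20e6, 10)]
print(allocate(250e6, sectors))
# ({'road safety': 100000000.0, 'drinking water': 150000000.0}, 80.0)
```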
Although no country has carried out such a far-reaching reform, many attempts
have been made to determine a uniform price for saved statistical lives, and imple-
ment it in individual decisions. In 1992, the American FDA determined the value of
life-saving mammography by using estimates of how much male workers demand in
wage compensation for risks of fatal accidents (Heinzerling 2000, pp. 205–206). The
choice of this method relies on the assumption that the price tag for saving a statis-
tical life should be the same in these different contexts; otherwise the agency would
have had to find out how women value mammography. Life-value estimates based
on wage compensations for workplace risks were also used by the American EPA in
2000 when setting a new standard for arsenic in drinking water (Heinzerling 2002,
p. 2312). Again, the chosen methodology presupposes that the price tag on a saved
statistical life is constant over contexts. (It also assumes that people making job deci-
sions express this value better than people choosing what to pay for mammography
or for uncontaminated drinking water.)
The demand for basing decisions on such uniform optimization is clearly a form of
political utilitarianism. Consequently, cost–benefit analysis has been called a “new
utilitarianism” (Coddington 1971).

3 General problems with the standard approach

The expected disutility approach to risk is problematic in several respects. In this sec-
tion, I will briefly summarize some of the major problems that are associated with it,
divided into four categories: problems with disutilities, problems with probabilities,
problems with the multiplicative combination of the two, and problems arising from
the exclusion of factors other than disutilities and probabilities. In the next section, I
will discuss in more detail a problem that is particularly relevant for social decision-
making, namely interindividual compensability.
1. Problems with disutility. Risks to human health and the environment are very dif-
ferent entities from those commonly treated in economics. As was mentioned above,
in order to weigh risks against economic benefits, they are assigned monetary values.
Such values, especially those for human lives, have repeatedly been subject to crit-
icism. Critics claim that negative outcomes such as the loss of human lives cannot
be measured in terms of money (Ashby 1980; Baram 1981; Kelman 1981). Stuart
Hampshire (1972, p. 9) has warned that the habits of mind engendered by this type of
impersonal computations may lead to “a coarseness and grossness of moral feeling, a
blunting of sensibility, and a suppression of individual discrimination and gentleness.”
To this it should be added that there are much more mundane things than lives and
deaths that we cannot meaningfully value in money. I do not know which I prefer,
€1,000 or that my son gets a better mark in math. If I am actually placed in a situation
where I can choose between the two, the circumstances will be crucial for my choice.
(Is the offer an opportunity to bribe the teacher, or is it an efficient extra course that
I only have to pay for if he achieves a better mark?) There is no general-purpose
price that can meaningfully be assigned to my son’s receiving the better grade, simply
because my willingness to pay will depend on the circumstances.
Similar situations often arise in issues of risk, including those that involve the
loss of human lives. We cannot pay unlimited amounts of money to save a life. The
sums that we are prepared to pay in a specific situation will depend on the particular
circumstances. Again, general-purpose prices are not useful as decision-guides. To
the contrary, such pricing will tend to obscure the fact that these are often decisions
under conditions that have all the characteristics of moral dilemmas. Our compe-
tence as decision-makers is increased if we recognize a moral dilemma when we have
one, rather than misrepresenting it as an easily resolvable decision problem (Hansson
1998). Therefore, the assumption of full comparability between negative outcomes,
although technically convenient, is far from unproblematic from an ethical point of
view.
2. Problems with probabilities. Most risk analyses mix probabilities with differ-
ent origins. A complex probabilistic risk assessment of a nuclear facility will contain
a large number of numerical probabilities of different kinds of component failures,
natural events, and human failures. Some of these probabilities are based on solid
evidence; they can for practical purposes be treated as known, objective frequencies.
However, other such probabilities are not much more than guesses.
Due to this mixture of probabilities, the reliability of risk analysis is sensitive to
systematic differences between objective probabilities and experts’ estimates of these
probabilities. In experimental psychology, such differences are described as lack of
calibration. Probability estimates are (well-)calibrated if “over the long run, for all
propositions assigned a given probability, the proportion that is true equals the prob-
ability assigned” (Lichtenstein et al. 1982, pp. 306–307). Thus, half of the statements
that a well-calibrated subject assigns probability 0.5 are true, as are 90 % of those that
she assigns probability 0.9, etc.
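The quoted criterion is straightforward to operationalize. The following sketch (the judgment data are invented) groups a judge's probability assignments and compares each assigned level with the observed frequency of true propositions:

```python
# Minimal calibration check in the sense of Lichtenstein et al. (1982).
# The judgment data below are invented for illustration.
from collections import defaultdict

def calibration(judgments):
    """judgments: iterable of (assigned_probability, proposition_was_true)."""
    buckets = defaultdict(list)
    for p, was_true in judgments:
        buckets[p].append(was_true)
    # Well-calibrated: observed frequency matches the assigned probability.
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

judgments = ([(0.9, True)] * 7 + [(0.9, False)] * 3 +
             [(0.5, True)] * 5 + [(0.5, False)] * 5)
print(calibration(judgments))
# {0.5: 0.5, 0.9: 0.7} -> well calibrated at 0.5, overconfident at 0.9
```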
Experimental studies have revealed a significant amount of overconfidence in
most of the types of judgments for which calibration studies have been performed.
Physicians assign too high probability values to the correctness of their diagnoses
(Christensen-Szalanski and Bushyhead 1981). Geotechnical engineers were overcon-
fident in their estimates of the strength of a clay foundation (Hynes and Vanmarcke
1976). Probabilistic predictions of public events, such as political and sporting events,
have also been shown to be overconfident. In one study of general-event predictions,
as the confidence of subjects rose from 0.5 to 1.0, the proportion of correct predictions
only increased from 0.5 to 0.75 (Fischhoff and MacGregor 1982). There are a few
exceptions. Professional weather forecasters and horse-race bookmakers make well-
calibrated probability estimates in their respective fields of expertise (Murphy and
Winkler 1984; Hoerl and Fallin 1974). These are professionals who have the benefit
of continuous feedback in frequentist terms.
We do not know how well calibrated the experts’ probability estimates used in most
areas of risk analysis are. We know, however, that if they are badly cali-
brated, then the outcome of risk analysis is correspondingly inaccurate (Lichtenstein
et al. 1982, p. 331; Hansson 1993).
3. Problems with combining probabilities and disutilities. Expected utility max-
imization does not allow for risk-averse or cautious decision-making. Critics have
maintained that serious events with low probabilities should be given a higher weight
in decision-making than what they receive in the expected utility model (Burgos and
Defeo 2004). In policy discussions, the avoidance of improbable but very large catas-
trophes, such as a nuclear accident costing thousands of human lives, is often given
a higher priority than what is warranted by expected utility calculations. We need a
decision framework that has room for cautious decision-making.
4. Problems of exclusion. In real life, there are always other factors in addition to
probabilities and (dis)utilities that can—and should—influence a moral appraisal of
an uncertain or risky situation. Risks are inextricably connected with interpersonal
relationships. They do not just “exist” as free-floating entities; they are taken, run, or
imposed (cf. Thomson 1985). In order to appraise an action from a moral point of
view, it is not sufficient to know the values and probabilities of its possible outcomes.
We also need to know who exposes whom and with what intentions. It makes a moral
difference if it is my own life or that of somebody else that I risk in order to earn a
fortune for myself. It also makes a difference what information a risk-exposed person
has received, and whether and how she participated, or was given the opportunity to
participate, in decisions affecting the risk. Generally speaking, person-related issues
such as agency, intentionality,
consent, equity, etc. that do not have a place in the expected disutility framework will
nevertheless have to be taken seriously in any reasonably accurate general format for
the assessment of risk (Hansson 2003).
However, moral issues of risk do not have a well-established position in the tradi-
tional division of topics between academic disciplines. Traditionally, moral theory has
focused on deterministic situations, i.e. situations in which the morally relevant proper-
ties of human actions are both well-determined and knowable. For the most part, moral
theory has left it to decision theory to analyse the complexities that indeterminism and
lack of knowledge give rise to in real life. According to the conventional division
of labour between these two disciplines, moral philosophy provides assessments of
human behaviour in well-determined situations. Decision theory takes assessments of
these cases as given, and derives from them assessments of rational behaviour in
an uncertain and indeterministic world. According to this view, no additional input
of values is needed to deal with indeterminism or lack of knowledge, since decision
theory operates exclusively with criteria of rationality.
As far as I can see, this division between the two disciplines is not tenable (Hansson
2001). It is easy to show with examples that moral theory has to concern itself with
risk and uncertainty, instead of relegating them to a supposedly value-free treatment
in decision theory. For example, compare the act of throwing down a brick on a person
from a high building to the act of throwing down a brick from a high building without
first making sure that there is nobody beneath who can be hit by the brick. The moral
difference between these two acts is not obviously expressible in a probability calcu-
lus, but on the other hand a meaningful analysis of the difference has to refer to the
moral aspects of risk-taking. This cannot be done in a framework that restricts moral
appraisal to outcomes, and treats risks as probabilistic mixtures of such outcomes.
Generally speaking, we need to have a moral view of the risks and uncertainties of
real life. Therefore moral theory should treat issues of risk, and do this without the
restrictions to considerations of rationality that are imposed by a decision-theoretical
approach.

4 Interpersonal compensability

I will now turn to one particular problem in the conventional risk-analytical approach
that is closely related to social decision-making, namely the extent to which it allows
advantages for one person to compensate for disadvantages for another person.
In risk analysis, the risks, i.e. expected disutilities, to which different persons are
exposed are all added up. It makes no difference for the analysis who is exposed to
the risk or how it is distributed. All that matters is the sum, the total aggregated risk
(expected disutility). In risk-benefit analysis, benefits are added up in the same way, i.e.
with no consideration of who receives the benefits or how they are distributed. Finally
the sum of benefits is compared to the sum of risks in order to determine whether the
total effect is positive or negative.
The practice of adding all risks and all benefits and then comparing the sums has
immediate intuitive appeal. It seems to just be an application of the simple maxim
for rational decision-making that you should compare the totality of advantages to
the totality of disadvantages for each alternative in the decision. Of course we should
weigh all benefits against all risks.
Or should we? Some reflection will show that this type of calculation is not as
unproblematic as it may seem at first sight. The assumption that a disadvantage to one
person can always be compensated by an equally sized advantage to another person
has far-reaching implications. It means that, just as in classical utilitarianism, individ-
uals have no other role than as carriers of utilities and disutilities, the values of which
are independent of whom they are carried by. Such a framework excludes many types
of moral considerations that I referred to in the previous section as regrettably missing
from conventional risk analysis.
From a moral point of view, the crucial issue is whether or not benefits for one
person are allowed to outweigh harms to another person. I have proposed to call this
the issue of interpersonal compensability (Hansson 2004a). It has unfortunately often
been conflated with the related but distinct issue of interpersonal comparability. Even
if it can be established that a benefit is greater than a harm, the benefit need not cancel
out the harm, and it most certainly need not do so in the same unproblematic way as a
loss is (presumably) cancelled out by a gain in the calculations of an investor. The fact
that a certain loss for Ms. Black is smaller than a certain gain for Mr. White does not
suffice to make it allowable for Mr. White, or anyone else, to perform an action that
leads to this particular combination of a loss for Ms. Black and a gain for Mr. White.
For that conclusion to follow, another premise must be added, namely the premise
of interpersonal compensability. Interpersonal comparability does not imply interper-
sonal compensability. They come together both in utilitarianism and in conventional
risk analysis, but that is no reason to conflate them.
If full interpersonal compensability is assumed, then the following is the most
natural way to weigh risks against benefits:
The collectivist risk-weighing principle:
An option is acceptable to the extent that the sum of all individual risks that
it gives rise to is outweighed by the sum of all individual benefits that it gives
rise to.
If on the other hand full interpersonal incompensability is assumed, then the fol-
lowing will be the most natural way to weigh risks against benefits:
The individualist risk-weighing principle:
An option is acceptable to the extent that the risk to which each individual is
exposed is outweighed by benefits for that same individual.
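The contrast between the two principles can be stated computationally. In the following sketch (mine, with placeholder numbers), risks and benefits map each individual to her expected disutility and expected benefit:

```python
# The two risk-weighing principles as acceptability tests (an illustrative
# sketch; the per-person numbers are placeholders, not empirical values).

def collectivist_acceptable(risks, benefits):
    # One aggregate comparison: total benefits must outweigh total risks.
    return sum(benefits.values()) >= sum(risks.values())

def individualist_acceptable(risks, benefits):
    # A separate comparison for each person: no one may be a net loser.
    return all(benefits.get(person, 0.0) >= risk
               for person, risk in risks.items())

risks = {"Black": 3.0, "White": 0.5}
benefits = {"Black": 1.0, "White": 9.0}
print(collectivist_acceptable(risks, benefits))   # True:  10.0 >= 3.5
print(individualist_acceptable(risks, benefits))  # False: Black loses 2.0
```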
Whereas the collectivist risk-weighing principle is taken for granted in risk analy-
sis, the individualist risk-weighing principle is equally dominant in healthcare ethics
(Hansson 2004a). According to well-established principles in medical ethics, a phy-
sician’s decision to recommend a treatment to a patient should be based on a careful
balancing of the risks and benefits for that individual patient. Similarly, if a patient
is invited to participate in a clinical trial, the invitation has to be based on an assessment
of the risks and benefits for that patient. (The benefits of participating in a clinical
trial consist mainly of chances of improved health due to the experimental treatment.)
In fact current principles of medical ethics do not allow a physician to let a patient
sacrifice her own interests by taking part in a clinical trial that is beneficial to a wider
community but known to be harmful to herself. This is in stark contrast to the col-
lectivist risk-weighing that dominates in risk analysis. To mention just one example,
critics of NIMBY (not in my backyard) reactions require that potential neighbours
of a contested facility sacrifice their own interests by consenting to a siting that is
beneficial to a wider community but potentially harmful to themselves. (For a critical
discussion of the NIMBY concept, see Luloff et al. 1998).

The collectivist risk-weighing practices of risk analysis also stand in sharp contrast
to contemporary economics (Hansson 2006). A major reason why Pareto optimality
has a central role in modern normative economics is precisely that it does not allow us
to sacrifice one person’s interests to those of another. In standard normative econom-
ics, an advantage to one person does not outweigh a disadvantage to another person
(Le Grand 1991; Sen 1987). Cost–benefit analysis and mainstream risk analysis
represent the contrary approach. In terms of interpersonal compensability (and com-
parability), they side with Arthur Pigou’s so-called old welfare economics, which dif-
fers from the modern approach in assuming full interindividual comparability and
compensability.
Hence, mainstream risk analysis and mainstream normative economics represent
two extremes with respect to interindividual compensability. It is an implicit message
of risk-benefit analysis that a rational person should accept being exposed to a risk
if this brings greater benefits for others. The implicit message of Paretian welfare
economics is much more appreciative of self-interested behaviour. This is a strange
combination of dominant ideologies. It is particularly unfortunate from the perspec-
tive of the poor, risk-imposed person. The economic criterion of Pareto-optimality
precludes the transfer of economic resources to her from rich persons if they object
to such a transfer,2 whereas cost-benefit analysis allows others to expose her to risks,
against her own will, if only someone else—perhaps the same rich persons—obtains
sufficient benefits from this exposure.

2 This applies if Pareto optimality is applied on the level of goods, but not necessarily if it is applied on the
level of welfare (see Hansson 2004b).

The purpose of pointing out these differences between risk analysis and other quasi-
normative disciplines is not to propose the importation of the Paretian approach into
risk analysis. As Amartya Sen’s penetrating analysis has shown, it is not credible to
treat Pareto optimality as a necessary and sufficient criterion for a morally ideal dis-
tribution (Sen 1979, 1987). In my view, neither of the two extreme views is credible.
Some sort of intermediate, but yet principled approach seems to be needed. It would
seem sensible to allow for partial interpersonal compensability rather than restricting
the choice to two equally implausible extreme views.
In the next three sections, I will propose three components that should be included
in a comprehensive account of social decision-making on risk issues.

5 Integration in general decision processes

For the first of these components we can draw on an insight from Condorcet,
more precisely from his account of the stages of decision processes
that he put forward in his motivation for the French constitution of 1793. Condor-
cet divided decision processes into three stages. In the first stage, one “discusses the
principles that will serve as the basis for decision in a general issue; one examines
the various aspects of this issue and the consequences of different ways to make the
decision.” At this stage, the opinions are personal, and no attempts are made to form
a majority. After this follows a second discussion in which “the question is clarified,
opinions approach and combine with each other to a small number of more general
opinions.” In this way the decision is reduced to a choice between a manageable set
of alternatives. The third stage consists of the actual choice between these alternatives
(Condorcet [1793] 1847, pp. 342–343).
Condorcet’s account of the stages in a decision process has been virtually forgotten,
and does not seem to have been referred to in modern decision theory. However, it
is an insightful theory. In particular, it is unfortunate that his distinction between the
first and second discussions has been lost in later accounts of decision processes, and
that his first stage has so seldom been discussed by decision theorists. In democratic
decision-making, all three stages will have to be democratic.
In the risk sciences, the discourse on risk-related decisions tends to have a strong
focus on the third stage in decision processes. When public decisions are discussed in
conferences and journals devoted to risk, the role of the public is often referred to in
terms such as “acceptance”, “consent”, and “trust”, all of which indicate a restriction
of popular participation to Condorcet’s third stage, i.e. the choice between ready-made
decision alternatives. The following quotation is not untypical:
Community groups have in recent years successfully used zoning and other
local regulations, as well as physical opposition (e.g., in the form of sitdowns
or sabotage), to stall or defeat locally unacceptable land uses. In the face of
such resistance, it is desirable (and sometimes even necessary) to draw forth
the consent of such groups to proposed land uses. (Simmons 1987, p. 6).
This restriction of public participation to the third stage, and sometimes even to
a merely confirming or consenting role in that stage, is untenable if we wish to see
decisions on risk in the full context of public decision-making in a democratic society.
Without public participation in all three stages of the decision-making process, risk
issues cannot be dealt with democratically.
Attempts to limit participation in risk decisions are often closely connected to
attempts to isolate risk issues from other social issues, and refer them to more expert-
dominated forums than other, more “political” policy issues. In Sect. 2, I mentioned
the common view in risk analysis that all risk issues should be treated with uniform
criteria, so that the price paid for a saved life is the same in all social sectors. Any
attempt to implement this idea will run into severe difficulties since risk issues are
dispersed over the whole social agenda, where they are parts of various larger and
more complex issues. Traffic safety is closely connected to issues of traffic and com-
munity planning. It is often impossible to divide the costs of a traffic investment in
a non-arbitrary way between costs of improved safety and costs of improved accessi-
bility. Workplace safety issues are integrated in the same way with issues of industrial
productivity, etc. In short, the risk issues of different social sectors all have impor-
tant aspects that connect them to other issues in their respective sectors. The idea of
basing risk decisions on a unified calculation for all social sectors would restrict
democratic decision-making on these other issues. It would in fact not be possible
without a far-reaching system of central planning that is hardly reconcilable with the
ideals of participatory democracy. (This has largely gone unnoticed since proponents
of uniform decision-making criteria for risks tend to use market-oriented rhetoric in
favour of their proposals.)

In conclusion: Issues of risk cannot and should not be isolated from other social issues.
They have to be treated in the same decision procedures as other issues. Therefore
we do not need special procedures for dealing with risk. Instead we need methods to
include the special characteristics of risk-related issues in our general decision-making
processes. This may sound trivial, but it runs contrary to the received view in the risk
sciences.

6 Defeasible rights

The second component is a moral approach to risk impositions. In stark contrast to
the impersonal view of risk analysis, I propose that each risk-exposed person should
be treated as a sovereign individual who has a right to fair treatment. We are not mere
carriers of utilities and disutilities that would have had the same worth if they were
carried by someone else. We are moral agents with rights and duties. In particular, we
have a right not to be exposed to danger by others.
One of the few prominent philosophers who have seriously considered rights not to
be exposed to risks was the late Robert Nozick. He recognized that given that we have
a right against actions that bring us a harm, we should also have a right against actions
that give rise to a high probability of such harm. However, on pain of making human
society impossible, such a right could not in practice be extended to arbitrarily low
probabilities. Nozick asked the question: “Imposing how slight a probability of a harm
that violates someone’s rights also violates his rights?” (Nozick 1974, p. 74; cf. McKerlie
1986). He concluded that no such solution can be credible in a rights-based ethics,
since probability limits “cannot be utilized by a tradition which holds that stealing a
penny or a pin or anything from someone violates his rights. That tradition does not
select a threshold measure of harm as a lower limit, in the case of harms certain to
occur” (Nozick 1974, p. 75).
However, Nozick asked the wrong question when trying to give precision to a
rights-based approach to risk. Since there is much more to an ethical analysis of risks
than harms and probabilities (see Sect. 3), we have no reason to search for a solution
in terms of probability limits. A much more credible direction to go is to recognize
that rights against risk imposition are defeasible. This accords with how we reason
about harms that come with certainty. We have a prima facie right not to be harmed by
others. This is not an indefeasible right since there are quite a few situations in which
it is overridden by other moral principles. We can extend this prima facie right against
harm to a prima facie right against risk of harm. In other words: Everyone has a prima
facie moral right not to be exposed to risk of negative impact, such as damage to her
health or her property, through the actions of others.
This is a prima facie right that has to be overridden in quite a few cases, in order to
make social life possible at all. Instead of Nozick’s problem with the probability limit
we have what I have elsewhere (Hansson 2003) called an exemption problem, namely
the problem of determining when this prima facie right is rightfully overridden, so
that someone is allowed to expose other persons to risk.
A promising approach to the exemption problem is to refer to reciprocal exchanges
of risks and benefits. Each of us takes risks in order to obtain benefits for ourselves.
It is beneficial for all of us to extend this practice to mutual exchanges of risks and
benefits. Hence, if others are allowed to drive a car, exposing me to certain risks,
then in exchange I am allowed to drive a car and expose them to the corresponding
risks. This (we may suppose) is to the benefit of all of us. In order to deal with the
complexities of modern society, we also need to apply this principle to exchanges of
different types of risks and benefits. We can then regard exposure of a person to a risk
as acceptable if it is part of a social system of risk-taking that works to her advantage.
Furthermore, it would seem reasonable that the individual who is exposed to risks can
demand, not only that the social system of risk should be to her advantage, but also
that she receives a fair share of its advantages. This will lead to the following tenta-
tive criterion for when the individual right against risk-impositions can be rightfully
overridden: Exposure of a person to a risk is acceptable if and only if this exposure is
part of an equitable social system of risk-taking that works to her advantage.
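The logical form of this criterion can be displayed in a schematic predicate (my sketch; "equitable" and the per-person advantage are stand-ins for substantive moral judgments that the criterion leaves open, not computable quantities):

```python
# Schematic rendering of the exemption criterion. The two attributes encode
# moral judgments that the text does not reduce to calculation.
from dataclasses import dataclass

@dataclass
class RiskSystem:
    equitable: bool      # stand-in for a judgment of equity
    net_advantage: dict  # person -> balance of risks and benefits

def exposure_acceptable(person: str, system: RiskSystem) -> bool:
    # Acceptable iff the exposure is part of an equitable system of
    # risk-taking that works to the exposed person's advantage.
    return system.equitable and system.net_advantage.get(person, 0.0) > 0.0

road_traffic = RiskSystem(equitable=True, net_advantage={"Smith": 2.5})
print(exposure_acceptable("Smith", road_traffic))  # True
```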
Of course, this solution is only schematic, and it gives rise to a host of further prob-
lems that need to be solved. My claim is that it provides us with the right agenda for
social decision-making on risk. According to traditional risk analysis, in order to show
that it is acceptable to impose a risk on Ms. Smith, the risk-imposer only has to give
sufficient reasons for accepting the risk as such, as an impersonal entity. According
to the ethical risk analysis that I propose, this is not enough. The risk-imposer has to
give sufficient reasons why Ms. Smith—as the particular person that she is—should be
exposed to the risk (Hansson 2003). This cannot credibly be done by referring to aggre-
gated, impersonal benefits. It can credibly be done by showing that this risk exposure
is part of something that works to her own advantage. Furthermore, this approach, with
its focus on defeasible rights against risk impositions, has the important advantage of
transferring the issue of decisions on risk from the realm of one- or two-dimensional
optimization to the realm of social decision-making where it properly belongs.

7 Hypothetical retrospection

The third component is a method for facilitating moral deliberation on risk and uncer-
tainty. In everyday discussions, one of the most common types of arguments about
future possibilities consists in referring to how one might in the future come to evaluate
the possible actions under consideration. These arguments are often stated in terms
of predicted regret: “Do not do that. You may come to regret it.” This is basically a
sound type of argument. Decision-stability, in the sense that we continue to consider
a decision correct after we have made it, is clearly a desideratum. Decision-making
under risk and uncertainty will be improved if we seriously consider possible future
developments. Therefore, hypothetical retrospection should be used as a means to
achieve more well-considered social decisions in issues of risk. However, it cannot be
adequately accounted for in terms of regret-avoidance. Regret is often unavoidable for
the simple reason that it may arise in response to information that was not available at
the time of decision. Therefore, regret-avoidance has to be replaced by more carefully
carved-out methods and criteria for hypothetical retrospection (Hansson 2007a).
In his paper “Consequential Evaluation and Practical Reason”, Amartya Sen pro-
posed as a starting-point for practical reason “the need to take responsibility for the
consequences of one’s choice”, further specifying that “[t]he responsibility of choice
applies to the choice of evaluative perspective as well as that of actions and conduct”
(Sen 2000, pp. 477, 501). In combination with the further assumptions that different
possible future developments give rise to different evaluative perspectives, and that
foresight is an essential component of practical reason, this gives us reason to develop
criteria for systematized hypothetical retrospection as an ideal form for decision-
guiding deliberation on risk.
In cases of risk there are, at each point in time, several alternative “branches” of
future development. Each of them can be referred to in a valid moral argument about
what one should do today. As a first approximation, we wish to ensure that whichever
branch materializes, a posterior evaluation should not lead to the conclusion that one
did morally wrong. I will not go into details here, but merely summarize some of
the major characteristics of a procedure involving hypothetical retrospection that can
guide us as decision-makers (for details, see Hansson 2007a).
• Each evaluation of a possible future development should refer to that branch of
future development in its full length, up to the moment at which the retrospection
is enacted. This means that the evaluation should not be restricted to the outcome
but also cover the process leading up to it. This is necessary to ensure that non-
consequentialist moral considerations are not programmatically excluded.
• Each such evaluation should refer to the decision in relation to the information
(actually) available at the time when the decision was made, not the information
(hypothetically) available at the time of the retrospection. This is because the deci-
sion-relevant moral argument is not of the form “Given what I now know, I should
then have...,” but rather “Given what I then knew, I should then have...”
• Each such evaluation may refer to the need to be prepared for other branches of
possible development. I do not today consider my decision to buy a fire extin-
guisher five years ago to have been a wrongful decision, although I have had no
use for it. For the same reason I should not have considered it wrong if, five years
ago, I had evaluated it in a hypothetical retrospection of a scenario of five coming
years without a fire.
• Hypothetical retrospection should be performed with the moral values one has at
the time when the actual deliberation takes place, not the values one predicts that
one will have at the time at which the hypothetical retrospection is staged.
• Since it is not possible to investigate all possible branches of future development, a
selection is necessary. The major criterion in that selection should be to identify for
each alternative course of action, as far as possible, the branches in which the choice
of this alternative will be most difficult to defend in hypothetical retrospection.
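The selection criterion in the last point gives the procedure a maximin-like structure, which the following sketch makes explicit (mine, not from the paper; the defensibility scores are placeholders for the substantive moral evaluations described above):

```python
# Schematic sketch of hypothetical retrospection as a selection procedure.
# defensibility(alt, branch) -> float: how easy the choice of alt is to
# defend in retrospect along that branch (higher = easier). The scores
# below are invented placeholders.

def choose(alternatives, branches, defensibility):
    """Pick the alternative whose hardest-to-defend branch is least bad."""
    worst_case = {alt: min(defensibility(alt, b) for b in branches)
                  for alt in alternatives}
    return max(worst_case, key=worst_case.get)

# Toy version of the fire-extinguisher example from the text.
scores = {("buy", "fire"): 1.0, ("buy", "no fire"): 0.8,
          ("skip", "fire"): -1.0, ("skip", "no fire"): 1.0}
print(choose(["buy", "skip"], ["fire", "no fire"],
             lambda a, b: scores[(a, b)]))  # -> "buy"
```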
This proposal can be seen as a systematization of a useful type of argument that
facilitates moral reflection about uncertain matters by making them more concrete.
Just as we can improve our moral decisions by considering them from the perspective
of other concerned individuals, we can also improve them by considering alternative
future perspectives.
Admittedly, this proposal goes in the opposite direction of much current moral
theory: it adds concreteness instead of abstracting from concrete detail. In my view,
our moral intuitions are in the end all that we have to base our moral judgments on,
and these intuitions are best suited to deal with concrete realistic situations. The con-
creteness gained through hypothetical retrospection has the advantage that our moral
deliberations will be based on “the full story” rather than on curtailed versions of it.
More specifically, this procedure brings to our attention interpersonal relations that
should be essential in a moral appraisal of risk and uncertainty, such as who exposes
whom to a risk, who receives the benefits from whose exposure to risk, etc. It is only by
staying away from such concreteness that standard utility-maximizing risk analysis can
remain on the detached and depersonalized level of statistical lives and free-floating
risks and benefits.

8 Conclusion

My major message is that the academic and para-academic disciplines specializing
in risk are dominated by an oversimplified approach to social decision-making. As a
consequence of this, complexities of social decisions that are well known in other dis-
ciplines are not taken into account. Furthermore, tools from welfare economics and moral
theory that can be used to analyze intricate inter-individual interrelationships have not
been put to use. We need more sophisticated approaches to social decision-making
about risk. Disciplines such as welfare economics, decision theory, and moral philoso-
phy have much to contribute. However, this is not a matter of simple or straightforward
application of existing theory. It is a challenging area for new theoretical developments.

References

Ashby E (1980) What price the Furbish lousewort? Environ Sci Technol 14:1176–1181
Baram MS (1981) The use of cost-benefit analysis in regulatory decision-making is proving harmful to
public health. Ann New York Acad Sci 363:123–128
Biwer BM, Butler JP (1999) Vehicle emission unit risk factors for transportation risk assessments. Risk
Anal 19:1157–1171
Burgos R, Defeo O (2004) Long-term population structure, mortality and modeling of a tropical multi-
fleet fishery: the red grouper Epinephelus morio of the Campeche Bank, Gulf of Mexico. Fish Res
66:325–335
Christensen-Szalanski JJJ, Bushyhead JB (1981) Physicians’ use of probabilistic information in a real
clinical setting. J Exp Psychol Human Percept Perform 7:928–935
Coddington A (1971) Cost-benefit as the new utilitarianism. Polit Quart 42:320–325
Cohen B (2003) Probabilistic risk analysis for a high-level radioactive waste repository. Risk Anal 23:909–
915
Condorcet ([1793] 1847) Plan de Constitution, présenté à la convention nationale les 15 et 16 février 1793.
Oeuvres 12:333–415
Fischhoff B, Lichtenstein S, Slovic P, Derby SL, Keeney RL (1981) Acceptable risk. Cambridge University
Press, Cambridge
Fischhoff B, MacGregor D (1982) Subjective confidence in forecasts. J Forecast 1:155–172
Gerrard MB (2000) Risks of hazardous waste sites versus asteroids and comet impacts: accounting for the
discrepancies in US resource allocation. Risk Anal 20:895–904
Hampshire S (1972) Morality and pessimism. Cambridge University Press, Cambridge
Hansson SO (1993) The false promises of risk analysis. Ratio 6:16–26
Hansson SO (1998) Should we avoid moral dilemmas? J Value Inquiry 32:407–416
Hansson SO (2001) The modes of value. Philos Stud 104:33–46
Hansson SO (2003) Ethical criteria of risk acceptance. Erkenntnis 59:291–309
Hansson SO (2004a) Weighing risks and benefits. Topoi 23:145–152
Hansson SO (2004b) Welfare, justice, and pareto efficiency. Ethical Theory Moral Pract 7:361–380
Hansson SO (2006) Economic (ir)rationality in risk analysis. Econ Philos 22:231–241
Hansson SO (2007a) Hypothetical Retrospection. Ethical Theory Moral Pract 10:145–157
Hansson SO (2007b) Philosophical problems in cost–benefit analysis. Econ Philos (in press)
Heinzerling L (2000) The rights of statistical people. Harvard Environ Law Rev 24(1):189–207
Heinzerling L (2002) Markets for Arsenic. Georgetown Law J 90:2311–2339
Hoerl AE, Fallin HK (1974) Reliability of subjective evaluations in a high incentive situation. J Roy Statist
Soc 137(Part 2):227–230
Hynes M, Vanmarcke E (1976) Reliability of embankment performance predictions. In: Proceedings of the
ASCE engineering mechanics division, specialty conference, Waterloo, Ontario, Canada. University
of Waterloo Press, Waterloo
International Organization for Standardization (2002) Risk management – vocabulary – guidelines for use
in standards. ISO/IEC Guide 73
Kelman S (1981) Cost–benefit analysis. An ethical critique. Regulation 5:33–40
Le Grand J (1991) Equity and choice. An essay in economics and applied philosophy. Harper Collins
Academic, London
Lichtenstein S et al (1982) Calibration of probabilities: the state of the art to 1980. In: Kahneman D,
Slovic P, Tversky A (eds) Judgment under uncertainty: heuristics and biases. Cambridge University
Press, Cambridge, pp 306–334
Luloff AE, Albrecht SL, Bourke L (1998) NIMBY and the hazardous and toxic waste siting dilemma: The
need for concept clarification. Soc Natural Resour 11:81–89
McKerlie D (1986) Rights and risk. Can J Philos 16:239–251
Mishan EJ (1985) Consistency in the valuation of life: a wild goose chase? In: Paul EF, Miller FD Jr,
Paul J (eds) Ethics and economics. Basil Blackwell, Oxford, pp 152–167
Murphy AH, Winkler RL (1984) Probability forecasting in meteorology. J Am Statist Assoc 79:489–500
National Research Council (NRC) (1983) Risk assessment in the federal government: managing the process.
National Academies Press, Washington, DC
Nozick R (1974) Anarchy, state, and utopia. Basic Books, New York
Otway H (1987) Experts, risk communication, and democracy. Risk Anal 7:125–129
Pandey MD, Nathwani JS (2003) Canada wide standard for particulate matter and ozone: cost–benefit
analysis using a life quality index. Risk Anal 23:55–67
Rechard RP (1999) Historical relationship between performance assessment for radioactive waste disposal
and other types of risk assessment. Risk Anal 19(5):763–807
Royal Society (1983) Risk assessment. Report of a Royal Society Study Group, London
Sen A (1979) Utilitarianism and welfare. J Philos 76:463–489
Sen A (1987) On ethics and economics. Blackwell, Oxford
Sen A (2000) Consequential evaluation and practical reason. J Philos 97:477–502
Simmons J (1987) Consent and fairness in planning land use. Bus Prof Ethics J 6(2):5–20
von Stackelberg KE et al (2002) Importance of uncertainty and variability to predicted risks from trophic
transfer of PCBs in dredged sediments. Risk Anal 22:499–512
Thompson KM, Segui-Gomez M, Graham JD (2002) Validating benefit and cost estimates: the case of
airbag regulation. Risk Anal 22:803–811
Thomson J (1985) Imposing risk. In: Gibson M (ed) To breathe freely. Rowman & Allanheld, pp 124–140
Usher D (1985) The value of life for decision-making in the private sector. In: Paul EF, Miller FD Jr,
Paul J (eds) Ethics and economics. Basil Blackwell, Oxford, pp 168–191
Viscusi WK (2000) Risk equity. J Legal Stud 29:843–871
