
High-Stakes Gambling with Unknown Outcomes:

Justifying the Precautionary Principle

Anton Petrenko and Dan McArthur

1. Introduction

The precautionary principle states that when human activities may lead to
morally unacceptable harm that is scientifically plausible but uncertain, actions
shall be taken to avoid or diminish that harm (World Commission on the Ethics of
Scientific Knowledge and Technology 2005 [hereafter UNESCO 2005], 12).1
Since its major initial formulation at the 1985 Vienna Convention for the Protec-
tion of the Ozone Layer, the principle has become an important policy-making
tool. No longer confined to environmental protection, its reach can be felt in such
diverse areas as medical and scientific research, health and safety regulation,
environmental regulation, product development, international trade, and even
judicial review. Given its potential influence on international and domestic policy,
the principle has attracted significant criticism. While quite numerous, the critics
could be roughly classified into two groups: one group argues that the principle is
not a coherent rule; the other argues that the principle cannot be justified.
The first group of critics argues that the principle is too vague and even
incoherent to be of any use as a normative guide to policy.2 Targeting the precau-
tionary principle in its role as a consistent, actionable rule, the critics argue that the
principle is incoherent because it is based on the uncertainty paradox, implies
infinite precaution, or demands the impossible—the complete proof of safety. In
another paper (Petrenko and McArthur 2010), we have argued that subject to the
requirement of scientifically plausible and in principle falsifiable harm scenarios,
the precautionary principle can be developed into a coherent, actionable rule.
Having addressed these important criticisms elsewhere, in this paper we will focus
on the second group of criticisms, which targets the principle’s justification.
Leaving the question of coherence aside here then, we will address the critics who
doubt that the principle can be justified on epistemic, rational, or moral grounds.3
This paper pursues more than the goal of grounding the precautionary prin-
ciple. As a high-level guide, the precautionary principle is inevitably vague,
requiring interpretation.4 While scrutinizing the principle’s logical coherence
helps to circumscribe its scope of application, further interpretive progress
demands addressing the question of the principle’s justification. Indeed, one
reason for the ad hoc invocation of the principle in various contexts is the lack of clarity on the conditions that justify its use. Thus, even though the authors
of the UNESCO (2005) definition go beyond other authors in stipulating the moral character of the principle, mindful of moral pluralism they offer only a disjunctive list of possible moral norms without agreement on substantive moral principles. It
is far from clear whether such a general approach can be refined sufficiently for
the sort of practical application that would permit, for example, the determination
of what kind of harm is morally unacceptable. Indeed, some authors (Harris and
Holm 2002) have argued that an open-ended reference to “serious or irreversible
damage” in the Rio Declaration (1992) could prohibit apple pies as a choking
hazard or childbirth as detrimental to general health. Therefore, in addressing the
second body of criticism, we aim to shed light on the principle’s justification with
the view of further circumscribing the range of its justified application. Starting
with the question of epistemic justification, we will proceed to consider the
rational justification, concluding with the analysis of the moral grounding of the
principle.

1.1 Precautionary Principle as an Inference Rule

Pascal’s Wager suggests that the most rational choice is not always the most
epistemically justified choice. Since the precautionary principle invokes knowl-
edge conditions and adjudicates choices, it can be viewed either as an epistemic
rule or as a rational rule. One useful way of conceptualizing the difference is to
think of it in terms of utilitarian and nonutilitarian inference rules for accepting a
given hypothesis. In accepting a hypothesis, the utilitarian rules take potential
gains and losses into consideration and prescribe the most acceptable (i.e., ratio-
nal) course of action; nonutilitarian rules, on the other hand, focus on evidence and
prescribe the most acceptable (i.e., warranted) object of belief.5 The precautionary
principle is successful as an epistemic rule if it leads to justified or true beliefs; it
is a successful rational rule if it leads to optimal choices of action.
A number of authors criticize the principle from an epistemic position. Thus,
Harris and Holm (2002) reject the principle by arguing that it is biased in favor of
accepting the hypothesis of harm. Their criticism appears to rely on the principle
of insufficient reason, which mandates that when we lack evidence sufficient to
assign probabilities to the relevant outcomes, we should regard all outcomes as
equally plausible. Thus, given the absence of data for the harm or the safety of a
given action, either outcome should be viewed as equally plausible. However, by calling for precaution when no scientific evidence for determining the probability
of negative outcomes exists, the precautionary principle selects the outcome of
harm without the required epistemic warrant. Similarly, Hanekamp, Vera-Navas,
and Verstegen (2005) argue that the principle is too pessimistic because it assumes
that the negative outcomes are more likely. Cass Sunstein (2005) argues that the
precautionary principle is incoherent and only gives the illusion of guidance by
giving expression to human cognitive biases in the evaluation of risks. These
cognitive biases, such as the availability heuristic, probability neglect, social cascades, and group polarization, make people “more fearful than is warranted by
reality” (5–6). Ultimately, epistemic criticisms suggest that the principle leads to
the acceptance of false or unwarranted beliefs, leading to the preponderance of
Type I (false positive) errors, also known as false alarms.
Normally, in evaluating whether a particular action is harmful, the default
(null) hypothesis is lack of relationship between action and harm; evidence must
be scrutinized to determine whether this hypothesis merits rejection in favor of
its alternative. Making the relevant criteria more demanding would decrease the rate of false positives or false alarms—Type I errors, in which a null hypothesis that is in fact true is rejected. By the same token, however, it would increase the rate of false negatives or oversights—Type II errors, in which the null hypothesis is accepted when it is in fact false. Relaxing the criteria would have the opposite effect on the balance of Type I and Type II errors.
Ideally, one would want to find a balance that minimizes both errors in such a
way that all harms are recognized and there are neither oversights nor false
alarms. The thrust of the criticisms from the epistemic position is that the precautionary principle cannot be justified because it minimizes oversights only at the price of embracing every false alarm.
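To make the trade-off concrete, consider the following minimal simulation sketch; the threshold, effect size, and noise level are illustrative assumptions of ours, not figures drawn from the critics' texts. Tightening the evidence criterion drives down false alarms while driving up oversights:

```python
import random

random.seed(0)

def error_rates(threshold, trials=100_000, effect=1.0, noise=1.0):
    """Estimate Type I and Type II error rates for a one-sided test.
    Null hypothesis: the activity is harmless (signal = 0); we declare
    harm whenever the observation exceeds the evidence threshold."""
    false_alarms = 0  # Type I: harmless, yet harm is declared
    oversights = 0    # Type II: harmful, yet safety is declared
    for _ in range(trials):
        if random.gauss(0, noise) > threshold:        # truly harmless case
            false_alarms += 1
        if random.gauss(effect, noise) <= threshold:  # truly harmful case
            oversights += 1
    return false_alarms / trials, oversights / trials

for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    fp, fn = error_rates(t)
    print(f"threshold {t:.1f}: false alarms {fp:.3f}, oversights {fn:.3f}")
```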
Yet the critics misconstrue the nature of the principle. First, the principle does not issue in a judgment that some threat is certain or even likely. Operating
under uncertainty, when probabilities can’t be assigned to the outcomes, the
principle is silent on what is the most acceptable object of belief. Second, facing
the hypothesis of harm, the principle explicitly invokes gains and losses and,
therefore, can’t be considered as a nonutilitarian acceptance rule. To criticize the
precautionary principle for increasing the rate of false positives is like criticizing
an individual who balks at the five in six chance of winning a round of Russian
roulette. Ceteris paribus, since there is only one bullet and six chambers, the
prospect of losing the round is considerably less likely than the other alternative
and, applying such nonutilitarian acceptance rules as the rule of maximum probability, one should accept the belief that the chamber is empty. However, the relevance of pragmatic (i.e., utility-relevant) considerations, while not affecting the likelihoods of the hypotheses, makes this belief an insufficient reason for choosing the action
and taking on the gamble. Assuming that one wants to avoid loss more than one
wants to obtain rewards, it is rational to abstain from the gamble even if one
believes that loss is the least likely outcome. In other words, the precautionary
principle must be viewed in the context of utilitarian acceptance rules that select
the most acceptable course of action rather than the most acceptable object of
belief.
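The divergence between the most warranted belief and the most acceptable action can be put in a minimal sketch; the payoff numbers are stand-ins of our own, and the only assumption doing any work is that the loss (death) dwarfs the reward for playing:

```python
# Russian roulette with one bullet in six chambers.
outcomes = {"chamber empty": 5 / 6, "chamber loaded": 1 / 6}

# Nonutilitarian rule (maximum probability): the most warranted belief.
belief = max(outcomes, key=outcomes.get)

# Utilitarian rule: weigh gains and losses. The payoffs are stand-ins;
# what matters is only that the loss dwarfs the reward for playing.
payoffs = {"play":    {"chamber empty": 100, "chamber loaded": -1_000_000},
           "abstain": {"chamber empty": 0,   "chamber loaded": 0}}

def expected_value(action):
    return sum(outcomes[o] * payoffs[action][o] for o in outcomes)

action = max(payoffs, key=expected_value)

print("most warranted belief: ", belief)   # chamber empty
print("most acceptable action:", action)   # abstain
```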
One might object that the response is unfair to some of the critics. While
Harris and Holm (2002) are explicit about the epistemic nature of their criticism,
it is possible that Sunstein and Hanekamp, Vera-Navas, and Verstegen ultimately
criticize the principle not for being pessimistic but for being too pessimistic—
pessimistic to the point of leading to irrational or unreasonable decisions.
However, if this is so, one cannot hold it against the principle that it leads to the
preponderance of Type I errors or false alarms. Abandoning the view that the
principle is justified on epistemic grounds, one must argue that the balance
between the false alarms and the oversights is somehow unreasonable or irratio-
nal. In short, one must criticize the principle as a rational choice rule. Indeed, the
precautionary principle does state that when human activities may lead to morally
unacceptable harm that is scientifically plausible but uncertain, actions shall be
taken to avoid or diminish that harm (UNESCO 2005, 12). This focus on avoiding
losses led a number of authors (e.g., Resnik 2003; Gardiner 2006; Karlsson 2006)
to argue that this principle is grounded in the maximin rule—the utilitarian
acceptance rule for acting under uncertainty that prescribes selecting the course of
action that minimizes the maximum possible loss.6
However, arguing now from rational rather than epistemic perspectives, the
critics maintain that the precautionary principle leads to irrational trade-offs
(Keeney and von Winterfeldt 2001). In its unqualified form, the maximin approach invites serious objections: even if one assumes that individual risk attitudes might differ widely, when the maximum possible harm is trivial (e.g., $1.00) and the maximum possible benefit is substantial (e.g., $1,000), focusing on avoiding losses at the cost of the opportunity to gain benefits seems irrational.
Intuitively, in such a situation the rules focusing on gains rather than losses (e.g.,
maximin gain and maximax gain) would be more acceptable. Addressing objec-
tions such as these (e.g., Harsanyi 1975), Gardiner (2006) argues that the rule does
not have a blanket application: the maximin strategy is rational when the decision
maker faces unacceptable alternatives and, caring little for the potential benefits,
lacks or has reasons to discount the relevant probability information (46–47).
Indeed, in the precautionary principle, which already operates under uncertainty,
these conditions are succinctly captured by the phrase “unacceptable harm”
(emphasis ours; UNESCO 2005).
But when does the threat of harm become unacceptable enough to make the
trade-offs involved in the application of the maximin rule rational and its adoption
acceptable? Keeney and von Winterfeldt (2001) argue that because of its categori-
cal focus on avoiding losses, the precautionary principle might lead to irrational
benefit-trade-offs by ignoring the costs of precaution or the benefits of foregone
opportunities. As a result, the precautionary principle is myopic—it can never do
better than the cost–benefit analysis. For example, reducing the exposure to
electromagnetic fields from power lines by placing transmission lines under-
ground will increase costs and electricity rates. Similarly, placing a 10-gauss
occupational threshold limit on a utility’s work practices of maintaining and
repairing high-voltage transmission lines will increase the time required to
perform the work and, as a result, related occupational risks such as falls and cuts,
not to mention public hazard resulting from the more frequent power outages
(Dillon and von Winterfeldt 2000). According to the authors, the precautionary
principle “often does worse and can never do better than the following principle:
In each specific decision context, select the alternative providing the best balance
between the costs, risks, and benefits inherent in the problem” (Keeney and von
Winterfeldt 2001).
In their discussion of the shortcomings of the precautionary principle as a
decision rule, the authors somewhat gloss over a number of perennial difficulties
within decision analysis—problems related to scientific and axiological uncertain-
ties (Resnik 2003). Since these tools operate under certainty or risk, they require
assigning weights and values to conflicting objectives and quantifiable probabili-
ties to possible outcomes.7 The problem of scientific uncertainty emerges when
the probabilities of certain high-cost and scientifically plausible outcomes can’t be
estimated. Thus, arguing on behalf of the cost–benefit analysis, Richard Posner
(2004) nevertheless points out that no probabilities can be responsibly assigned
to such scientifically plausible high-stake disasters as the strangelet scenario,
bioterrorism, or runaway, abrupt climate change. Even more significant than the
challenges involved in quantifying probabilities of outcomes is the problem of
axiological uncertainty involved in assigning weights to various conflicting and
morally laden objectives, such as, for instance, human life and economic growth
(Resnik 2003). Clearly, these weights can make all the difference between accept-
ing and rejecting the maximin strategy.
In an attempt to deal with weights and resolve conflicts, the cost–benefit
analysis often monetizes human life and health, converting such outcomes into
morally neutral expected monetary values (EMVs).8 The fundamental difficulty,
however, is not circumvented—it is hard to see how the decision rule that equivo-
cates between moral and nonmoral harms by reducing them to monetary values
can claim to operate outside of a moral context. Cost–benefit analysis and risk
management, especially when they are used as policy tools that will have an effect
on third parties, inevitably make normative assumptions about the values of
human life, health, and the extent of our moral obligations toward each other
(Shrader-Frechette 1991; Resnik 2003). Thus, the cost–benefit analysis carried out
by Ford in deciding on the merits of investing in making the Pinto safer is
particularly relevant in this context. Borrowing from the National Highway Traffic Safety Administration a cost of human life derived on the basis of such estimates as employer losses, productivity losses, medical expenses, and funeral costs, J. C. Echold, Ford’s director of automotive safety, estimated the
value of human life at $200,725 (Dowie 1977). Given the estimate of 180 burn
fatalities, 180 burn injuries, and 2,100 burned vehicles resulting from unsafe
design, the potential benefits ($49.15 million) fell short of the costs of safety
modifications that, estimated at $11.00 per vehicle, amounted to $137 million
(Dryton 1968; Moody-Jennings 1996, 219). However, intuitively, the problem with Ford’s approach was not that they valued human life merely at $200,725; their decision against safety modifications would have been problematic even if they had gone with $6 million—the amount that damages awarded by the courts had reached by 1980 (Moody-Jennings 1996, 216–18). Regardless of the values
assigned, what makes the decision problematic is the choice to subject consumers
to the risk of death that was not merely foreseeable and avoidable but known,
which explains the 1979 indictment of Ford by the State of Indiana on four counts of reckless homicide rather than simply negligence. In reducing life and injury to
monetary values, the cost–benefit analysis equivocates between moral and non-
moral harms, subjecting the categorical moral obligations, such as the duty of
care, to the cost–benefit calculus and, in the process, adjudicating on the under-
lying values.
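The memo's arithmetic can be reconstructed roughly as follows. The per-injury cost, per-vehicle cost, and fleet size are not given in the passage above; they are assumptions taken from Dowie's (1977) published account, and the resulting totals only approximate the figures cited in the text:

```python
# Figures from the passage above: 180 burn deaths, 180 burn injuries,
# 2,100 burned vehicles, $200,725 per life, an $11.00 fix per vehicle.
# The remaining unit values are assumptions from Dowie's account.
deaths, injuries, vehicles = 180, 180, 2_100
value_of_life = 200_725
value_of_injury = 67_000       # assumed (Dowie's account)
value_of_vehicle = 700         # assumed (Dowie's account)
fleet_size = 12_500_000        # assumed (Dowie's account)
fix_per_vehicle = 11.00

benefits = (deaths * value_of_life
            + injuries * value_of_injury
            + vehicles * value_of_vehicle)
costs = fleet_size * fix_per_vehicle

print(f"benefits of the fix: ${benefits:,.0f}")  # roughly $49.7 million
print(f"costs of the fix:    ${costs:,.0f}")     # roughly $137.5 million
print("fix justified by the calculus?", benefits > costs)  # False
```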
The inability to quantify certain probabilities and the intrusion of moral
valuation in assigning values to outcomes hinder risk management and cost–
benefit analysis. Indeed, these two factors are explicitly cited by the authors of the
UNESCO (2005, 28–31) report as the reasons why the precautionary principle
must complement these policy decision tools. Nevertheless, the decision to trade
off the risk of harm to the people living in neighboring households for the risk of
harm to power-line workers can be viewed as a rational trade-off if one accepts
that imposing a risk of harm on nonconsenting third parties is much less acceptable than imposing it on those who, in virtue of entering into employment, have
agreed to absorb the positive as well as the negative consequences of their
occupational choice. The weights attached to outcomes must be calculated in light
of the relevant moral obligations. Indeed, the principle might serve as the policy
maker’s rule of thumb in decisions involving uncertainty and specific types of
risks—risks reflecting particularly high “morally unacceptable harms” (emphasis
ours; UNESCO 2005). This means that the principle must inevitably be viewed as
having a moral justification.

1.2 Precautionary Principle as a Moral Rule

It often appears that the right thing to do is not the most rational thing, and
vice versa. As a moral rule, the precautionary principle is justified not because it
leads to justified beliefs or, strictly speaking, to rational choices but because it
leads to morally right actions. Insofar as the precautionary principle deals with
potential harms and risks, it is often considered as a consequentialist principle and
viewed as being justified on utilitarian grounds. However, in the context of a
general policy, the precautionary principle often deals with issues related to
imposing undeserved risks or harms on others, and as a result, it can be considered
from the deontological and contract-theoretic positions.
Harris and Holm (2002) argue that the precautionary principle is a conse-
quentialist principle because it deals with the consequences of action and does not
refer to the moral quality of the action itself or its virtues. Given that the principle
addresses potential harms rather than benefits, the authors argue that it must be “a
partial principle of negative utilitarianism” (364), which aims to minimize dis-
utility rather than maximize utility. Having identified the principle in these terms,
the authors point out that as a rule grounded in negative utilitarianism, the prin-
ciple suffers from one of the major weaknesses of that approach—namely, there
are no good reasons to adopt a moral thinking that only takes negative values into
account. Furthermore, instead of selecting the least negative option from a range
of choices, the principle disallows the calculus of disutility below a certain
threshold of harm. This makes the principle’s moral scope unjustifiably narrow.
Although the authors recognize that the principle could aim at protecting against
the kind of negative consequences that cannot be outweighed by any benefit,
they point out that the principle is rarely concerned with those kinds of harms.
Having argued that negative consequentialist principles cannot be morally justi-
fied, Harris and Holm conclude that the precautionary principle cannot be a valid
moral rule.
Assuming for a moment that the authors are right about the moral grounding
of the principle, their criticism seems persuasive only insofar as what counts as
serious harm is left unspecified. However, if the principle is only triggered when
harm is of such magnitude that no potential benefit, whatever it might be, can
outweigh it (e.g., nuclear holocaust), then restricting the calculus only to negative
value would be quite appropriate. Ultimately, however, pigeonholing the principle
as negative utilitarianism simply because it refers to consequences is a mistake. A
principle that disallows a policy because, for example, it sanctions involuntary
experimentation on human beings in order to develop life-extending medical
treatments still refers to consequences of the policy (i.e., harm to others), but the
way it assigns moral weight to these consequences might be based on such
deontological grounds as a violation of individual autonomy rather than a simple
calculus of utility. It is not the inevitable reference to policy consequences but the
way in which the moral assessment of these policy consequences is carried out
that determines the nature of ethical justification.
Advancing the understanding of the principle beyond the consequentialist view, Perri (2000) considers whether the precautionary principle can be justified
on the basis of limited paternalism or obligations not to subject others to risk or
harm. He argues that limited paternalism, defined as acts of an agent in the
interests of the principal but contrary to the principal’s preferences or consent, can
be justified on the basis of an extended consent principle. This principle allows
paternalism on the condition that agents act in good faith; actions do not harm
others and are the least invasive means to achieve the goals; and the public, given
its preferences, values, and commitments, would consent to the actions were it
aware of all the relevant information. On this view, any paternalistic action is
justified on the grounds that those affected by the actions would consent to it:
clearly, the justification combines the respect for individual autonomy with social
contract theory. Given the current public values in Britain, Perri argues that this
principle justifies limited paternalistic interference in some areas of economic
security, education, crime, and health. Yet the author maintains that the precautionary principle cannot be justified on the basis of the extended consent principle because people would not consent to the policies in which it is applied. As he points
out, the different formulations of the precautionary principle “encourage the
prohibition of goods and services which would not be consented to, if they were
fully informed, by citizens individually and in significant majorities who have
quite different values and attitudes.”
Perri’s criticism is based on an empirical hypothesis, which rings true, in part,
because we can hardly expect nearly universal agreement on any policy, includ-
ing policies in areas listed by the author (economic security and employment,
health, education, crime). The differences in individual values and risk-attitudes
suggest that one might expect fairly broad disagreement on risks involved in
universal vaccination, curfews, or work programs. But even if one does accept that
paternalism cannot be extended to many areas of policy, this does not mean that
it cannot be extended to policies meant to protect others from harm. The harm
principle is a deontological principle grounded in the duty not to harm others. As
Perri points out, the principle justifies overriding people’s preferences and allows
enforcing certain obligations by means of law. To justify intervention, the harms
addressed must relate to harms caused by citizens to one another; the intervention
must be proportionate to the harm; the harm must be likely; and finally, the harm
must be as severe as a form of crime or a civil tort. However, under all the formulations of the precautionary principle, the likelihood and severity of the harms are unclear and the precautionary measures are frequently disproportionate.
Perri concludes that, because the precautionary principle cannot live up to these
conditions, it cannot be justified on the basis of the harm principle, and the rational
and moral thing to do is “to weigh costs and risks with no particularly skewed
weighing” (165).
The criticism that harms are unclear and that precautionary measures are
often disproportionate to harm draws on the vagueness and the variability of the
principle and its application—such criticism might justify a refinement of the
principle, but not its outright rejection. The specification of criteria for a reason-
ably high threshold of morally unacceptable harm coupled with comparatively
acceptable precautionary measures could easily satisfy these objections within the
spirit of the principle. As far as the likelihood of harm is concerned, Perri (2000)
admits that when it comes to certain types of harms, such as murder or burglary,
the state has the right and the duty to enforce obligations regardless of the likeli-
hood of the violation: “In moral decision about legal frameworks for risk man-
agement, magnitude remains important, even for low probability risks. Despite the
comparatively low probability in Britain of being murdered by a complete
stranger, we nevertheless make it a serious crime” (151). In other words, the low
risk of the occurrence of substantial harms to third parties is not a reason to
withhold protection from the said parties, even if this protection comes at a cost.
The question is whether the harm principle can justify precaution even though no
probability rating can be assigned to the occurrence of harm.

1.3 Justifying the Precautionary Principle

The harm to others principle is grounded in the person’s entitlement not to be harmed by the actions of another unless these harms have been voluntarily accepted or the person has forfeited this right by harming others. Since this right imposes
obligations on others, the application of this principle requires that harms are
sufficiently severe, that harms are probable, and that the prescribed measures are
proportional to the harms. When these conditions are met in a choice situation, and
harm magnitudes multiplied by the likelihood of their occurrence render values
that can be assigned to the various branches of the decision tree, the rule justifies
the application of the prevention principle (UNESCO 2005).
For example, if harms are negligible, then even a high probability of their
occurrence will not justify preventative measures. There is a high probability of
bumping into others during rush hour at a subway station; however, even the high
probability of such a minor harm cannot justify measures designed to prevent such
collisions. However, as the magnitude of harm increases, even a lower likelihood of occurrence will justify preventive measures, as exemplified by traffic regulations and speed limits. Sufficiently high risks of harm at a particularly tricky road turn might justify an outright ban, while a lower risk will justify proportionately less restrictive preventive measures. The higher the magnitude of the harm, the lower the likelihood need be to justify preventive measures. Despite normative
difficulties in assigning weights to harm and proportioning preventive measures to
risks, the harm principle is a fairly uncontroversial element of risk analysis, tort
law, and policy development. One might disagree on whether the risk to pedes-
trians from passing trains justifies fencing the railroad tracks, but that this threat
is a legitimate concern is hardly ever in doubt.
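The proportionality logic of the prevention principle can be put schematically as follows; the thresholds and the menu of measures are purely illustrative assumptions of ours, not anything fixed by the principle:

```python
def preventive_measure(magnitude, probability):
    """Toy proportionality rule: the measure escalates with expected
    harm (magnitude x likelihood). Thresholds and the menu of measures
    are illustrative assumptions only."""
    expected_harm = magnitude * probability
    if expected_harm < 1:
        return "no measure"
    if expected_harm < 10:
        return "warning or light regulation"
    if expected_harm < 100:
        return "restrictive regulation"
    return "outright ban"

# High probability of a negligible harm: no measure is warranted.
print(preventive_measure(magnitude=0.01, probability=0.9))     # no measure
# Low probability of a grave harm: strong measures are warranted.
print(preventive_measure(magnitude=10_000, probability=0.05))  # outright ban
```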
The difficulty with extending this normative foundation from prevention to
precaution is that the former is triggered when the risks assigned to possible
outcomes are calculable and the latter is triggered when these risks are incalcu-
lable. In other words, the harm principle requires that the magnitude of harms and
their likelihood be known—a condition that the precautionary principle cannot
satisfy by definition. Yet, although the likelihood of the threat is necessary to
specify the proportionality of the measures justified by the principle, it is not a
necessary condition for triggering the obligation itself. Indeed, it is counterintui-
tive to view the harm principle as triggered by threats that are fictitious or merely
logically conceivable—this would make the principle incoherent. But this
problem is addressed by requiring that the threats are scientifically plausible, not
by requiring that the likelihood of the threat is quantifiable. Not being able to
calculate the proportionality of mitigation does not undermine the moral obliga-
tion, and for high-magnitude harms, this constraint becomes largely irrelevant.
Indeed, gambling—taking chances or betting on an uncertain outcome—
often involves such variables as initial stake and predictability of the outcome used
to calculate the odds. Thus, the stake in a game of Russian roulette is one’s
life and, ceteris paribus, the odds of winning the first round with a typical revolver
are five in six. However, gambling does not always involve events where prob-
abilities can be assigned—one can also wager on unique or unprecedented events
or events characterized by randomness. Knowing that the gun, in principle, is
capable of firing but not knowing whether it is loaded or how many bullets it holds
would not make the choice in this situation any less of a gamble. Still, given high
enough rewards, even in such cases gambling can often be justified as a rational
strategy and, as long as the person is aware of the hazards and ready to absorb all
the consequences of the choice, the action is not morally objectionable. One way
of grasping the moral implications of gambling is to view it as a form of contract
between the wagering parties, which—subject to informational transparency, the
agent’s autonomy, and the confinement of gains and losses to the parties—does
not, prima facie, violate any moral rules. The situation is morally different,
however, when the losses from a gamble are absorbed, fully or in part, by third
parties, who consented neither to the gamble nor to its negative consequences.
Playing a round of Russian roulette by putting the gun to the head of a non-
voluntary subject is a stark illustration of the relevant moral differences in the
acceptability of such gambles. Here, the initial stakes are placed at the expense of those who are neither a party to the wager nor, most importantly, a party to the contractual relationship that makes a gamble morally legitimate.
Arguably, the moral focus on harms that are merely plausible rather than
likely reflects the dramatic change in human circumstances over the last 200
years. In “Technology and Responsibility: Reflections on the New Tasks of
Ethics” (1973), Hans Jonas argued that the traditional, anthropocentric, “neighbor
ethics,” such as deontological or utilitarian ethics, had been geared toward the limited power and potential of human action. Affecting mostly those who are spatially
proximate and temporally present, the consequences of human action did not
extend far across space and time. Technology has transformed the nature of human
action; reaching beyond here and now, the aggregate effects of human action are
extensive, rapid, long-term, and self-propagating. Furthermore, in addition to the
expanding capacity for causing extensive and unpredictable long-term harms, one
must consider the social interdependence that facilitates the propagation of such
harms in contemporary society and restricts the individual freedom to opt out of
such social arrangements (Cowan 1997). This complex social interdependence
grounds fiduciary duties between the public and policy makers (Frankel 1983,
1995). Resulting from the unequal distribution of power, fiduciary relationships
are characterized by duties of loyalty and care toward entrustors on the part of the
fiduciary. Since the fiduciary acts in the best interests of the entrustors, in a
situation where a threat of harm to the entrustors exists but its probability is
uncertain, the fiduciary has an obligation to refrain from gambles without entrus-
tors’ explicit consent. In short, policy makers are empowered to make decisions on behalf of the public only within the narrow scope left open to them by their moral obligations.
One might, however, wonder whether grounding the precautionary principle
in the harm principle is too narrow a foundation and whether the principle should
not be instead justified on the basis of Rawls’s principle of justice. Thus, Catriona
McKinnon (2009) seeks to justify the application of the strong version of the
precautionary principle to catastrophic climate change by appealing to the notion
of intergenerational justice derived from Rawls’s original position (Rawls 1971).
On this view, the principle is justified because under conditions of uncertainty “the
worst consequences of not taking precautionary action are worse than the worst
consequences of taking precautionary action, and choosing the former course
of action is not consistent with treating present and future people as equals”
(McKinnon 2009, 191). Intuitively, this position is appealing, and it finds support in the UNESCO (2005) report, which suggests that the precautionary principle “should embrace the principle of intergenerational equity” (20). Drawing on Gardiner (2006), McKinnon
argues that the precautionary principle is consistent with the maximin prescrip-
tions derived from the original position and that, unlike the harm principle, it can
extend moral obligations to future generations.
Grounding the principle in the consideration of justice rather than harm would
affect its scope. On the one hand, since one would wish to protect one’s liberties and
minimize harm to oneself, the maximin prescriptions from the original position will
likely mandate precautions against every undesirable scenario that the harm prin-
ciple will identify, and to this extent such justification is intuitively nonproblematic.
On the other hand, given Rawls’s interpretation of what might count as “morally
unacceptable” outcomes, such a principle will impose a much broader range of
positive and negative obligations. Prima facie, there is no reason why such a
principle would not be triggered, for example, when some technological innovation
would threaten to redistribute opportunities and affect the equity status of a
particular group.9 Yet it is controversial whether these kinds of concerns have
sufficient moral gravity to justify strong precautionary measures, particularly in the
light of the high rate of false alarms that the use of the precautionary principle
implies. Arguably, when effects are uncertain, considerations of equity call for ex post rather than ex ante approaches, requiring compensation proportional to the damage rather than possibly costly precaution. Furthermore, given that equity damages are not inherently irreversible, it seems justifiable to use the minimax regret rule proposed by L. J. Savage (where regret is the difference between the payoff in a given state of nature and the most preferred payoff possible for that state of nature) rather than the much stronger maximin rule discussed by Rawls. One might
constrain the principle to be more intuitively appealing, but in doing so one will
likely fall back within the scope of the harm principle.
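The difference between the two rules can be made concrete with a toy, deliberately contrived payoff table of our own devising, in which maximin mandates precaution while the weaker minimax regret rule does not:

```python
def maximin(payoffs):
    """Pick the action whose worst-case payoff is best."""
    return max(payoffs, key=lambda a: min(payoffs[a].values()))

def minimax_regret(payoffs):
    """Savage's rule: regret = best payoff attainable in a state minus
    the payoff actually obtained; pick the action whose maximum regret
    is smallest."""
    states = list(next(iter(payoffs.values())))
    best = {s: max(p[s] for p in payoffs.values()) for s in states}
    return min(payoffs,
               key=lambda a: max(best[s] - payoffs[a][s] for s in states))

# A contrived table (our numbers): precaution is almost as costly as
# the harm it averts, and the harm itself is compensable.
payoffs = {"precaution":    {"threat real": -10,  "threat unreal": -95},
           "no precaution": {"threat real": -100, "threat unreal": 0}}

print(maximin(payoffs))         # precaution
print(minimax_regret(payoffs))  # no precaution -- the weaker verdict
```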
Another consideration that supports grounding the precautionary principle in
the duty not to harm others is pragmatic—unlike Rawls’s difference principle, the
harm principle is a universally recognized moral maxim reflected in tort and
criminal laws of virtually all members of the international community. The pre-
cautionary principle can be seen as an extension of these recognized legal norms
to situations where the probability of the worst-case scenarios is unknown while
damage is significant enough to justify ex ante rather than the typical ex post legal
recourses. Thus, international acceptance of the precautionary principle as a
policy or legal guide would be facilitated by the fact that it is consistent with past
normative practice. Rather than calling for a fundamental normative change, the
principle’s adoption calls only for the recognition that human power—our technological capacity to affect the environment and our scientific capacity to foresee worst-case scenarios—has dramatically outstripped the coping ability of the harm principle as traditionally understood. Thus, the precautionary prin-
ciple is not a radical departure from the harm principle—it is merely a propor-
tional and nonnormative adjustment of the principle to fit the changed
circumstances. In this light, McKinnon’s claim that the harm principle cannot
extend moral obligations to future generations is rather premature—we are accountable for the harms we inflict on future generations, and to the extent
that we can foresee the worst-case scenarios, we are also blameworthy.10
In the end, pinning down the notion of “morally unacceptable harm” that
triggers the application of the principle is not substantially different from deter-
mining the level of harm deserving of legislative protection, as long as one
remembers that in either case the careful balancing of frequently conflicting rights
and obligations is necessary. After all, in the context of national security, the
precautionary principle does seem to justify a policy of preemption in order to
prevent uncertain loss of life, yet the policy, as often happens, involves some
certain loss of life (e.g., see Stern and Wiener 2006). Thus, the next interesting
question—albeit one beyond the scope of this paper—is determining how, in a
situation characterized by uncertainty and conflict of moral obligations, the mag-
nitude and nature of harm to others can provide guidance in the formulation of a
morally responsible policy. Already one might anticipate that actions which threaten individual moral entitlements—such as the right to life, freedom, or health—would be the most likely candidates for protection under the precautionary
principle.
As we have anticipated, grounding the principle in the obligation not to harm
others helps to specify its scope: the principle does not apply where third parties are
either absent or explicitly agree to absorb the consequences of the policy choices.
The principle can neither veto social change that threatens to affect the equity of
various groups nor prohibit certain forms of technological and medical research
where the threat of harm to others is either contained or explicitly accepted by all
the relevant parties. When the principle is understood in this context, it becomes
clear that, provided there are viable alternatives and information regarding threats,
neither cell phones nor genetically modified foods (at least with regard to the safety
of eating them) would be necessarily ruled out by the principle.

2. Conclusion

When the precautionary principle is considered outside the context of public
policy, it is frequently viewed either as an epistemic or as a rational choice rule.
Yet, whatever the possible merits of such perspectives, the precautionary principle
is fundamentally a principle born in the context of public policy and its role should
be assessed in that framework. Apart from the recognition of threats, uncertainty
about probabilities, and the need to legislate, the framework also includes the policy
makers, their obligations to the public, and the public—a stakeholder with certain
inalienable rights. Given this framework, a policy which permits action that raises a threat of harm to others, the probability of which is unclear, is
morally irresponsible because it gambles with human lives, subjecting them to the
threat of serious harm.
Elsewhere we argued that the precautionary principle is coherent as long as
the identified threat is scientifically plausible and in principle falsifiable. These
conditions constrain the application of the principle and narrow its scope. In the
present paper, we continued this interpretive effort by querying whether and how the precautionary principle can be justified. We have argued that the precautionary
principle can be justified on moral grounds, particularly on the general obligation
not to harm others (without informed and explicit consent), where the requisite
magnitude of harm is sufficient to deprive others of their moral entitlements.
Although more work is needed in determining the range of moral entitlements,
this refinement imposes further constraints on the application of the principle,
narrowing its scope.
Although the principle is justified on moral grounds, we suggest that it should not be viewed in opposition to decision analysis. As Richard Posner (2004)
points out, the modest version of the precautionary principle is indistinguishable
from a cost–benefit analysis with risk aversion assumed for some prospects—it
can be viewed as a cost–benefit analysis with a “thumb placed on the cost-side”
(148). Indeed, the precautionary principle can be usefully viewed as a subset of
decision analysis—a decision maker’s rule of thumb with preset values on certain
objectives and consequences. What one must keep in mind, however, is that the
placing of the “thumb” and the assumption of risk aversion for some prospects
reflect the ultimately moral valuations of these outcomes. These values depend on the adopted moral theory, and those who argue that the precautionary principle leads
to irrational trade-offs or bad choices unwittingly leave the area of decision
analysis and engage the principle on moral grounds albeit, arguably, from a
utilitarian position. It is on these grounds that the fate of the principle is ultimately
decided.

Notes
1. The principle has a number of formulations (Sandin 2006 counted nineteen different definitions). The
definition provided in the text is one of the most recent and more successful formulations of the
principle. Canonical formulations also include the 1992 Rio Declaration on Environment and
Development: “where there are threats of serious or irreversible damage, lack of scientific certainty
shall not be used as a reason for postponing cost-effective measures to prevent environmental
degradation” and the Wingspread Statement: “when an activity raises threats of harm to human
health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically . . . [i]n this context the proponent of an activity, rather than the public, should bear the burden of proof” (Ashford et al., 1998).
2. For arguments along these lines, see Harris and Holm (2002); Sunstein (2005); Turner and Hartzell (2004); Van Asselt and Vos (2006); Van der Zwaan and Petersen (2003); Manson
(2002); Jordan and O’Riordan (1999); Morris (2000); and Bodansky (1991). For review and
discussion, see Ahteensuu (2007) and Sandin (2006).
3. For criticisms, see Harris and Holm (2002), Sunstein (2005), Goklany (2001), Perri (2000), and
Keeney and von Winterfeldt (2001).
4. For a discussion of the precautionary principle in the context of vagueness and general maxims,
see Ahteensuu (2007), Sandin (2006), Beauchamp and Childress (1983), Gardiner (2006), and
Nollkaemper (1996).
5. A number of nonutilitarian acceptance rules exist. For example, the rule of high probability, predictably,
prescribes choosing the hypothesis with high probability, provided such exists. The rule of
maximum probability is comparative rather than qualitative: it prescribes the selection of the
hypothesis with the maximum probability from a range of available hypotheses. The rule of high
weight deals with hypotheses which are not subject to frequency interpretation (e.g., physical laws):
it prescribes selecting the hypothesis with high weight, where the weight is the limit of the relative
frequency of successful predictions based on this hypothesis. Not unlike the rule of maximum
probability, the comparative rule of maximum weight prescribes selecting a hypothesis with
maximum weight from a range of available hypotheses. Other rules include the rule of maximum
likelihood, which involves conditional probabilities, and Gauss’s method of least squares, which
prescribes accepting estimates with the minimum sum of squared errors. For details and examples,
see Michalos (1969, 295–310).
6. A number of comparative utilitarian rules for selecting a course of action under uncertainty exist. The
minimax loss, a rule suggested by American mathematician Abraham Wald, prescribes the course
of action allowing the smallest maximum possible loss. More optimistic than the minimax loss, the
minimin loss rule prescribes the course of action allowing the smallest minimum possible loss. In
contrast, the maximin gain and maximax gain rules focus on gains rather than losses; while the
former prescribes the course of action that allows the largest minimum possible gain, the latter
prescribes the course of action allowing the largest maximum gain. The minimax regret rule,
proposed by the statistician L. J. Savage, prescribes choosing the course of action that allows for
the smallest maximum possible regret, expressed as the difference between the payoff in a given
state of nature and the most preferred payoff possible for that state of nature. Proposed by Leonid
Hurwicz, the Hurwicz rule prescribes the course of action that has the maximum optimism-
weighted value, expressed, for each course of action, as the sum of the maximum possible payoff
multiplied by the assigned personal probability z and the minimum possible payoff multiplied by
the probability 1-z. For details and examples, see Michalos (1969, 311–30).
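These rules can be compactly sketched as follows; the payoff table is an illustrative example of our own, and minimax regret is sketched separately in the body of the paper above:

```python
# Each rule takes a table mapping action -> {state -> payoff}; gains
# are positive, losses negative. The example table is our own.

def minimax_loss(t):   # Wald: smallest maximum possible loss
    return min(t, key=lambda a: max(-p for p in t[a].values()))

def minimin_loss(t):   # optimistic: smallest minimum possible loss
    return min(t, key=lambda a: min(-p for p in t[a].values()))

def maximin_gain(t):   # largest minimum possible gain
    return max(t, key=lambda a: min(t[a].values()))

def maximax_gain(t):   # largest maximum possible gain
    return max(t, key=lambda a: max(t[a].values()))

def hurwicz(t, z=0.6): # optimism-weighted blend of best and worst cases
    return max(t, key=lambda a: z * max(t[a].values())
                                + (1 - z) * min(t[a].values()))

table = {"risky": {"s1": 1_000, "s2": -500},
         "safe":  {"s1": 100,   "s2": 0}}

for rule in (minimax_loss, minimin_loss, maximin_gain, maximax_gain, hurwicz):
    print(rule.__name__, "->", rule(table))
# minimax_loss -> safe; minimin_loss -> risky; maximin_gain -> safe;
# maximax_gain -> risky; hurwicz -> risky (at z = 0.6)
```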
7. Utilitarian rules such as minimax or minimin are decision rules under uncertainty, when probabilities
can’t be assigned to relevant outcomes—they rely on the indexing of preferences without factoring
probabilities. In contrast, such utilitarian decision tools as cost–benefit analysis and risk analysis
operate under risk or certainty—they involve multiplying utilities (or harms) by their probabilities.
Thus, the expected utility rule prescribes selecting a course of action with the highest expected
utility, expressed for each course of action as the sum of the possible payoffs, which are multiplied
by the associated probabilities. The risk analysis, on the other hand, involves multiplying the
magnitude of threat by its probability to derive risk value. Other rules, such as the Laplace utility
rule, prescribe the course of action that maximizes Laplace utility, expressed as the sum of payoffs
for a course of action divided by the number of possible states of nature. For details and
illustrations of the differences between utilitarian acceptance rules under uncertainty and risk, see
Michalos (1969, 311–37).
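The expected utility and Laplace rules can be sketched analogously, on the same illustrative payoff table as in the previous note:

```python
def expected_utility(payoffs, probs):
    """Decision under risk: pick the action with the highest
    probability-weighted sum of payoffs."""
    return max(payoffs,
               key=lambda a: sum(probs[s] * payoffs[a][s] for s in probs))

def laplace(payoffs):
    """Decision under uncertainty: treat all states as equiprobable and
    pick the action with the highest average payoff."""
    return max(payoffs,
               key=lambda a: sum(payoffs[a].values()) / len(payoffs[a]))

table = {"risky": {"s1": 1_000, "s2": -500},
         "safe":  {"s1": 100,   "s2": 0}}

print(expected_utility(table, {"s1": 0.2, "s2": 0.8}))  # safe
print(laplace(table))                                   # risky
```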
8. Normally, the calculation would involve determining the amount of money an individual would
demand for incurring some small risk of death and then dividing this price by the risk to yield the
value of life. Such an approach suggests that if an individual would undertake a 0.001 risk of death
for $5,000, then (5,000/0.001 = 5,000,000) this implies the value of life to be $5 million (Posner
2004, 165–66). Multiplying such value by the probability of the outcome will provide the EMV
which could be compared to select the best course of action (Raiffa 1968, 8–9). However, as
Posner points out, the problem is that the relation between the risk of death and the perceived cost
of the risk is probably asymptotic rather than linear—given that dead people cannot enjoy any
benefits, as the risk of death nears 100 percent the amount demanded for incurring such risks
would become infinite (166). Moreover, since for some scientifically plausible threats to life the
relevant probabilities cannot be estimated, determining risk perceptions and assigning monetary
value to life in a nonarbitrary way in such situations is impossible.
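The arithmetic runs as follows; the outcome probability in the last step is our illustrative assumption, since the note's point is precisely that such probabilities are often unavailable:

```python
# Posner's worked numbers: accepting a 0.001 risk of death for $5,000
# implies a value of life of 5,000 / 0.001 = $5,000,000.
price, risk = 5_000, 0.001
value_of_life = price / risk
print(f"implied value of life: ${value_of_life:,.0f}")  # $5,000,000

# EMV = monetized value x probability of the outcome. The 0.0001
# probability below is our illustrative assumption.
p_outcome = 0.0001
print(f"EMV of the outcome: ${value_of_life * p_outcome:,.0f}")  # $500
```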
9. Indeed, a number of arguments for technological regulation on such grounds have been proposed (e.g.,
Bush 1983; Barbour 1993; Sclove 1995; Kass 2001). For discussion and criticism, see Petrenko
and McArthur (2010). The argument against the human enhancement technologies is often made
on similar foundations (Williams 2006). McKinnon (2009, 198–200) suggests that further work
must be done to justify constraining the precautionary principle to apply to anthropologically
caused unacceptable consequences. However, while this would remove from the scope of the
principle such threats as asteroid collisions, it would leave threats to equity arising from human activity such as, for example, technological innovation. Should precautionary measures be taken if a new technology raises the threat, the probability of which is unknown, that it will limit employment opportunities or create life-constraints for an identifiable segment of the popu-
lation? Should it apply if a genetic enhancement threatens to increase the competitive abilities of
a particular group beyond the reach of other groups (e.g., Kass 2001; Williams 2006)? Is it
possible to ground the precautionary principle in Rawls but not, thereby, accept all the implications
that can be drawn from the original position?
10. There is nothing counterintuitive about the harm principle applying across generations. The spatial
and local constraints on the principle are normally due to the consideration that one cannot be
morally responsible for effects that are unforeseeable because they are removed in time and space.
But as science and technology increase both our causal power and our understanding, these constraints cease to be relevant. Thus, strict liability, a part of tort law, assigns accountability to agents even in cases when the effects are delayed by generations. For example, diethylstilbestrol (DES), a drug developed in the 1940s to deal with pregnancy complications, increased the risk for clear cell adenocarcinoma and other complications in the adult
daughters of the women who were treated with DES during pregnancy. Although the effects were
delayed until the second generation reached adulthood (1970s), this was not found to be a
sufficient reason for the producers to escape legal liability. Despite reaching across generations,
the effects were significant enough for strict liability. See Sindell v. Abbott Laboratories, 26 Cal.
3d 588 (1980).

References

Ahteensuu, M. 2007. “Defending the Precautionary Principle against Three Criticisms.” Trames 11 (4):
366–81.
Ashford et al. 1998. Wingspread Statement on the Precautionary Principle. Retrieved January 22,
2009, from http://www.gdrc.org/u-gov/precaution-3.html.
Barbour, I. 1993. Ethics in an Age of Technology: The Gifford Lectures 1989–1991. New York: Harper
Collins.
Beauchamp, T. L., and Childress, J. F. 1983. Principles of Biomedical Ethics. New York: Oxford
University Press.
Bodansky, D. 1991. “Scientific Uncertainty and the Precautionary Principle.” Environment 33 (7):
43–4.
Bush, C. 1983. “Women and the Assessment of Technology.” In Machina ex Dea, ed. J. Rothschild,
151–70. New York: Teachers College Press.
Cowan, R. 1997. “Industrial Society and Technological Systems.” In Cowan, A Social History of
American Technology, 149–72. New York: Oxford University Press.
Dillon, R., and von Winterfeldt, D. 2000. “An Analysis of the Implications of a Magnetic Field
Threshold Limit Value on Utility Work Practices.” American Industrial Hygiene Association
Journal 61 (1): 76–81.
Dowie, M. 1977. “Pinto Madness.” Mother Jones, September/October: 28.
Dryton, R. 1968. “One Manufacturer’s Approach to Automobile Safety Standards.” CTLA News 8
(February): 11.
Frankel, T. 1983. “Fiduciary Law.” California Law Review 71 (May): 795.
———. 1995. “Fiduciary Duties as Default Rules.” Oregon Law Review 74: 1209.
Gardiner, S. 2006. “A Core Precautionary Principle.” The Journal of Political Philosophy 14 (1):
33–60.
Goklany, I. M. 2001. The Precautionary Principle: A Critical Appraisal of Environment Risk Assess-
ment. Washington, DC: Cato Institute.
Hanekamp, J. C., Vera-Navas, G., and Verstegen, S. 2005. “The Historical Roots of Precautionary
Thinking: The Cultural Ecological Critique and ‘The Limits to Growth’.” Journal of Risk Research
8 (4): 295–310.
Harris, J., and Holm, S. 2002. “Extending Human Lifespan and the Precautionary Paradox.” Journal
of Medicine and Philosophy 27 (3): 355–68.
Harsanyi, J. 1975. “Can the Maximin Principle Serve as a Basis for Morality? A Critique of John
Rawls’ Theory.” American Political Science Review 69: 594–606.
Jonas, H. 1973. “Technology and Responsibility: Reflections on the New Tasks of Ethics.” Social
Research 15: 31–54.
Jordan, A., and O’Riordan, T. 1999. “The Precautionary Principle in Contemporary Environ-
mental Policy and Politics.” In Protecting Public Health and the Environment: Implementing
the Precautionary Principle, ed. C. Raffensberger and J. Tickner, 15–35. Washington, DC: Island
Press.
Karlsson, M. 2006. “The Precautionary Principle, Swedish Chemical Policy and Sustainable Devel-
opment.” Journal of Risk Research 9 (4): 337–60.
Kass, L. R. 2001. “Reinventing a Brave New World.” New Republic, May 21: 30–39.
Keeney, R., and von Winterfeldt, D. 2001. “Appraising the Precautionary Principle—A Decision
Analysis Perspective.” Journal of Risk Research 4 (2): 191–202.
Manson, N. A. 2002. “Formulating the Precautionary Principle.” Environmental Ethics 24: 263–74.
McKinnon, C. 2009. “Runaway Climate Change: A Justice-Based Case for Precautions.” Journal of
Social Philosophy 40 (2): 187–203.
Michalos, A. C. 1969. Principles of Logic. Englewood Cliffs, NJ: Prentice-Hall.
Moody-Jennings, M. 1996. Case Studies in Business Ethics, 2nd ed. New York: West Publishing
Company.
Morris, J. 2000. “Defining the Precautionary Principle.” In Rethinking Risk and the Precautionary
Principle, ed. J. Morris, 1–21. Oxford: Butterworth-Heinemann.
Nollkaemper, A. 1996. “ ‘What You Risk Reveals What You Value’ and Other Dilemmas Encountered
in the Legal Assaults on Risks.” In The Precautionary Principle and International Law: The
Challenge of Implementation, ed. D. Freestone and E. Hey, 73–94. The Hague: Kluwer Law
International.
Perri. 2000. “The Morality of Managing Risk: Paternalism, Prevention and Precaution, and the Limits
of Proceduralism.” Journal of Risk Research 3 (2): 135–65.
Petrenko, A., and McArthur, D. 2010. “Technology, Social Change and Liberty: The Ethics of
Regulating New Technologies.” Public Affairs Quarterly 24 (2): 99–115.
Posner, R. 2004. Catastrophe: Risk and Response. Oxford: Oxford University Press.
Raiffa, H. 1968. Decision Analysis: Introductory Lectures on Choices under Uncertainty. Boston:
Addison-Wesley.
Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
Resnik, D. 2003. “Is the Precautionary Principle Unscientific?” Studies in the History and Philosophy
of Biological and Biomedical Sciences 34: 135–65.
Rio Declaration on Environment and Development. 1992. The Earth Summit: The United Nations
Conference on Environment and Development. Available at www.unep.org/documents.
multilingual/default.asp?documentid=78&articleid=1163.
Sandin, P. 2006. “A Paradox out of Context: Harris and Holm on the Precautionary Principle.”
Cambridge Quarterly of Healthcare Ethics 15: 175–83.
Sclove, R. 1995. Democracy and Technology. London: Guilford Press.
Shrader-Frechette, K. 1991. Risk and Rationality. Berkeley: University of California Press.
Sindell v. Abbott Laboratories, 26 Cal. 3d 588. 1980.
Stern, J., and Wiener, J. B. 2006. “Precaution against Terrorism.” Journal of Risk Research 9 (4):
393–447.
Sunstein, C. 2005. Laws of Fear: Beyond the Precautionary Principle. Cambridge: Cambridge Uni-
versity Press.
Turner, D., and Hartzell, L. 2004. “The Lack of Clarity in the Precautionary Principle.” Environmental
Values 13: 449–60.
Van Asselt, M., and Vos, E. 2006. “The Precautionary Principle and the Uncertainty Paradox.”
Journal of Risk Research 9 (4): 313–36.
Van der Zwaan, B., and Petersen, A. 2003. Sharing the Planet: Population-Consumption-Species:
Science and Ethics for a Sustainable and Equitable World. Delft, The Netherlands: Eberon.
Vienna Convention for the Protection of the Ozone Layer. 1985. Retrieved January 19, 2009, from
http://www.unep.ch/ozone/vc-text.shtml.
Williams, E. 2006. Summary Report of an Invitational Workshop Convened by the Scientific Freedom,
Responsibility and Law Program, American Association for the Advancement of Science.
Retrieved August 1, 2010, from http://www.aaas.org/spp/sfrl/projects/human_enhancement/pdfs/
HESummaryReport.pdf.
World Commission on the Ethics of Scientific Knowledge and Technology. 2005. The Precautionary
Principle. Paris: UNESCO.
