In this crisply written book, Hanno Sauer offers the first book-length
treatment of debunking arguments in ethics, developing an empirically
informed and philosophically sophisticated account of genealogical
arguments and their significance for the reliability of moral cognition.
He breaks new ground by introducing a series of novel distinctions into
the current debate, which allows him to develop a framework for assessing
the prospects of debunking or vindicating our moral intuitions. He also
challenges the justification of some of our moral judgments by showing
that they are based on epistemically defective processes. His book is an
original, cutting-edge contribution to the burgeoning field of empirically
informed metaethics and will interest philosophers, psychologists, and
anyone interested in how – and whether – moral judgment works.
HANNO SAUER
University of Utrecht
University Printing House, Cambridge, United Kingdom
www.cambridge.org
© Hanno Sauer
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published
Printed and bound in Great Britain by Clays Ltd, Elcograf S.p.A
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Names: Sauer, Hanno, author.
Title: Debunking arguments in ethics / Hanno Sauer, Universiteit Utrecht, The Netherlands.
Description: New York : Cambridge University Press. | Includes bibliographical references.
Subjects: Ethics. | Ethics, Evolutionary.
LC record available at https://lccn.loc.gov/
Cambridge University Press has no responsibility for the persistence or accuracy
of URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
Contents
Debunking Explained: Structure and Typology
Introduction
The Structure of Debunking
Selective or Global?
Off Track
Obsoleteness
Symmetry
Detection Error
Inconsistency
Ignoble Origins
Conclusion
Against the Metaethical Turn
A Diagnosis: Prior Plausibility
Conclusion
Debunking Realism: Moral Disagreement
Introduction
The Empirical Case against Realism
The Right Kind of Disagreement
Moral Convergence and the Right Kind of Disagreement
Moral Convergence and Debunking
Conclusion
References
Index
Figures
Tables
Acknowledgments
This book has been greatly improved by the constructive (and occasionally
also by the less constructive) feedback I received from audiences at
Barchem, Berlin, Bielefeld, Bochum, Eindhoven, Groningen, Münster,
Nijmegen, Osnabrück, Potsdam, Rotterdam, Salzburg, Tilburg, Tübingen,
Toronto, Utrecht, and Zürich. I am grateful for the many helpful
conversations on the topic of this book and to the organizers of the
events above for making those conversations possible.
Some of the chapters draw on previously published material: the
introduction is based on my “Between Facts and Norms: Ethics and Empirical
Moral Psychology.” In: Voyer, B. and Tarantola, T. (). Moral Psychology:
A Multidisciplinary Guide. Springer. Chapter is based on “Can’t We All
Disagree More Constructively? Moral Foundations, Moral Reasoning, and
Political Disagreement,” Neuroethics (), –. Chapter is based on “It’s
the Knobe Effect, Stupid! How (and How Not) to Explain the Side-Effect
Effect,” Review of Philosophy and Psychology, (), –. I am grateful to
the editors and publishers for permission to reuse this material.
Special thanks go to Neil Roughley and the participants of his
Oberseminar at the Universität Duisburg-Essen for discussing various
draft chapters with me, the participants of the Moral Psychology and
Evolution Research Master seminar I cotaught with Joel Anderson at
Utrecht University, Mark Alfano for reading the whole manuscript and
providing extremely thorough and helpful feedback, and the various
anonymous referees who helped me improve the manuscript in countless ways.
Moreover, I want to thank Joel Anderson, James Andow, Tom Bates,
Constanze Binder, Matthew Braddock, Christoph Bublitz, Matteo Colombo,
Sabine Döring, Susan Dwyer, Thyra Elsasser, Daan Evers, Anika
Fiebich, Lily Frank, Machteld Geuskens, Aglaia von Götz, Joshua Greene,
Thomas Grundmann, Joseph Heath, Frank Hindriks, Jeroen Hopster,
Joachim Horvath, Silvia Ivani, Fleur Jongepier, Wouter Kalf, Jeanette
Kennett, Dominik Klein, Pauline Kleingeld, Michael Klenk, Sebastian
Köhler, Peter Königs, Felicitas Krämer, Victor Kumar, Marjolein Lanzing,
Neil Levy, Sem de Maagt, Heidi Maibom, Alessandra Marra, Anthonie
Meijers, Bert Musschenga, Albert Newen, Philip Nickel, Sven Nyholm,
Tristram Oliver-Skuse, Norbert Paulo, James Pearson, Herman Philipse,
Kevin Reuter, Regina Rini, Arthur Ripstein, Scott Robbins, Carel van
Schaik, Larissa Schmidt, Eleonora Severini, Maureen Sie, Jan Sprenger,
Sharon Street, Bart Streumer, Sergio Tenenbaum, Alan Thomas, Christine
Tiefensee, Peter Timmerman, Laura Valentini, Bruno Verbeek, Joseph
Walsh, Pascale Willemsen, Joeri Witteveen, and Jonathan Webber. They
made the book a much better one.
I am also extremely grateful to my editor Hilary Gaskin and the rest of
the team at Cambridge University Press. Their expertise, kindness, and
professionalism made it enormously pleasant to work with them on the
publication of this book.
I dedicate this book to my daughters Clara and Julia – χάος and κόσμος.
Introduction: Debunking Arguments and the Gap
is justified, rational, or defensible? In other words: are genealogical
accounts of why people think and do something fit to debunk what they
think and do? Do they even have the power to do so?
Genealogies are typically deployed in a critical spirit, and genealogies
that are supposed to have such negative, or undermining, epistemic
significance are nowadays often referred to as debunking arguments
(Nichols ). The question I am interested in is whether there are any
successful arguments of this sort. Here, we have four main options:
Introduction
Debunking arguments do not show their target beliefs to be false, but
rather undermine the justification a subject may have for holding them.
This makes them a kind of undercutting defeat.
Consider the following example. You and I stand outside a room.
I wonder whether there will be enough chairs for the upcoming talk.
But the room is locked and windowless, so I cannot go see for myself
whether all of the people who will attend – thirty in total, as I happen to
know – will be able to get a seat. You tell me not to worry and reassure me
that there are more than forty chairs in the room. But, being the
incredulous paranoid that I am, and because I also know that neither of us has ever
been in this room before, I ask you how you know this. My incredulity
turns out to be warranted. You were just guessing; I continue to worry.
And rightly so: as far as chairs in a room you can neither enter nor peek
into are concerned, guessing is an unreliable method for determining the
number of chairs in it. It is obviously perfectly possible that you are right,
but everything says that it would be pure coincidence if you were. You
have given me no grounds for believing you. In short: your admission that
you were just guessing debunks your belief regarding the number of chairs
in the room, because it was formed on the basis of what is for all intents
and purposes an untrustworthy method.
Here is another one: suppose you work in the HR department at a
large company. You are trying to fill a position that has been vacant for
quite some time but desperately needs to be filled because a subbranch
within the company cannot plan its budget without appropriate oversight
by the controlling department. You have narrowed down your choice to two
candidates, one male, one female. Their qualifications – top-notch
education at a prestigious school, ample professional experience, proven
track record – seem almost indistinguishable. But there is
this gut feeling that tells you that one of them – the one who happens to
be male – seems more fitting for a responsible position such as this one.
You are about to pass your recommendation on to your line manager
(whose approval is only a formality at this point) when someone informs
you about the fact that studies have shown that when confronted with
two identical résumés, people are significantly more likely to prefer the
male candidate due to implicit bias and to flexibly apply various criteria
for job suitability depending on an applicant’s gender (Uhlmann et al.
). If you have nothing else to go on except your gut feeling, should
you continue to trust it as if nothing had happened? Since you now
know that your gut feeling tracks the gender of the candidates, and since
we can assume that this consideration does not, in and of itself,
constitute a qualification for the job, your assessment of the two résumés
seems unjustified.
Off Track
The recent revival of debunking arguments is largely due to one
particular version of them: evolutionary debunking arguments. Such
arguments pose a reliability challenge to our moral judgments. They
hold that our moral beliefs are based on off-track processes, mechanisms
that did not lead us to form accurate beliefs but whose workings have
been shaped by forces that are entirely disconnected from the moral
truth, if there is such a thing. Some suggest that such arguments favor
moral skepticism (Joyce ); others hold that they recommend giving
up on the idea that when we moralize, there is a track to be on at all
(Street ).
Evolutionary debunking arguments are about what evolution did and
did not do. First, evolution would not, for all we know, have given us
the cognitive capacities to recognize mind-independent moral truths
(the “capacity etiology” argument, Fitzpatrick , ff.). Selective
pressures yield mental faculties that are adaptive. But there is no reason
to think that being able to appreciate moral facts would be fitness
enhancing in the way that being able to accurately perceive the external
world with all its obstacles and predators would be. Second, evolution did
influence the content of our moral beliefs by shaping the basic evaluative
dispositions that serve as input into our overall web of moral beliefs (the
“content etiology” argument, Fitzpatrick , ff.) on which moral
reasoning then operates. We believe that pain is bad, that cheaters should
be punished, and that we ought to care for our children. But given that,
again, selective pressures are insensitive to the moral truth, those evaluative
dispositions are highly unlikely to track evaluative truths.
Here is a schematic presentation of off-track debunking arguments:
Off Track
(1) p is formed on the basis of P
(2) P has been shaped by influences of type X
(3) Influences of type X do not track the truth with regard to p
Therefore, (4) p is unjustified
Let me remind you that I am not in the business of assessing the soundness
of such arguments here or of providing a more subtle reconstruction of
how they work (Shafer-Landau ). What I am trying to do at this point
is merely to chart the territory of debunking arguments.
The important thing to file away at this point is that, though all
debunking arguments draw in some sense on the idea that our moral
judgments are generated by epistemically defective processes, off-track
arguments locate the defectiveness of those processes in the fact that our
basic evaluative dispositions are suffused with contaminating influences all
the way down, whether these influences are evolutionary or not.
Obsoleteness
Many people are jealous. Now suppose that the trait of (sexual) jealousy
evolved, either biologically or culturally or both, because selective pressures
favored individuals who happened to be disinclined to invest time and
effort into raising another individual’s offspring. But the type of sexual
surveillance motivated by jealousy is spectacularly costly, ineffective, and –
frequently – unwelcome to the thusly surveilled, so why do people engage
in it? The answer is that for most of human history, monitoring your
spouse’s fidelity was, some obvious cases notwithstanding, the only way to
secure or determine whether a child would be yours. But this rationale no
longer obtains, and yet I speculate that people would be unlikely to stop
being jealous even if paternity tests were to become legally mandatory.
The example of jealousy nicely illustrates another form of debunking,
which I shall refer to as obsoleteness debunking. The key idea behind this
type of argument is that some of our moral judgments, or the basic
evaluative dispositions that eventually yield them, have been tailor made
for an environment we no longer inhabit. The most popular versions of
this type of argument also happen to be evolutionary. For many moral and
nonmoral intuitions, it is difficult to see what could justify them; in some
cases, it is rather obvious that nothing does. But we can often figure out
that having those intuitions used to be adaptive and reconstruct how certain
tendencies that may have paid off in our so-called environment of
evolutionary adaptedness misfire and lead us astray when put in novel conditions.
What works in ancient, stable, and tribal communities founded on famil-
iarity and kinship does not necessarily work in modern, dynamic, and large
societies full of strangers and competitors.
Take retributive intuitions. There is a wealth of evidence suggesting
that, although people endorse both retributive and consequentialist,
deterrence-based justifications for punishment, their judgments about
punishment pretty much only adhere to the former (Greene , ff.). The
basic function of punishment is to disincentivize defection, and to secure
nontransgressive behavior by shifting the incentives away from free riding
toward cooperative conduct. Now the crucial thing to note is that, given
this function, it used to make sense for people’s punitive dispositions to be
essentially insensitive to considerations of deterrence, because only a
commitment to punish regardless of the consequences of punishment
actually deters. If this were not so, individuals could simply indicate that
they are unwilling to be deterred, in which case the costs of punitive
actions would start to outweigh their long-term benefits. This means that
deterrence, though the ultimate function of punishment, cannot be
responsible for the patterns to which actual practices of punishment
conform. Practices of punishment need to be about people and their actions
rather than about what will bring about the best outcome, which is to say,
punishment must be essentially retributive (Cushman , a, ;
Levy ).
The next thing to note is that under modern conditions, this rationale
doesn’t apply anymore. We have manufactured complex institutions that
detect and prosecute norm transgressions so that the aforementioned
decentralized punitive bookkeeping and the painstaking administration
of sanctions performed by individuals have become superfluous. Retributive
intuitions continue to hold power over us. But social developments have
made them largely obsolete. If anything ever justified retributive
intuitions, it was their unofficial consequentialist purpose; these
intuitions are rendered problematic by the fact that, under modern
conditions, they end up obstructing this very purpose, giving us the war
on drugs and an inflated incarceration system.
Whether or not these two examples are convincing is beside the point
here. What matters is the extent to which they illuminate the structure of
obsoleteness debunking, which can be captured as follows:
Obsoleteness
(1) p is based on P
(2) P has been biologically or culturally adapted to produce correct
judgments only in specific, nonhostile environments
Therefore, (3) P is unlikely to produce correct results in hostile environment H
(4) p is formed in H
Therefore, (5) p is unjustified
Perhaps most famously, this type of argument has been used by people
questioning the trustworthiness of nonconsequentialist intuitions. Peter
Singer, for instance, uses neuroscientific evidence regarding the
psychological basis of deontological intuitions to show that such intuitions are
tailored to an environment in which we no longer live. Joshua Greene and
his colleagues (, , ) have suggested that such intuitions are
primarily based on crude, alarm-like responses to vivid and immediate
harm. For instance, people are much more likely to judge it appropriate to
throw a switch to divert a train and prevent it from killing five workers on
the track, thereby killing one on another track, than to throw a large man
in front of the train to save five others. Singer and Greene have suggested
that this response is based on a morally irrelevant difference:
For most of our evolutionary history, human beings have lived in small
groups, and the same is almost certainly true of our pre-human primate and
social mammal ancestors. In these groups, violence could only be inflicted
in an up-close and personal way – by hitting, pushing, strangling, or using a
stick or stone as a club. To deal with such situations, we have developed
immediate, emotionally based responses to questions involving close,
personal interactions with others. The thought of pushing the stranger off the
footbridge elicits these emotionally based responses. Throwing a switch that
diverts a train that will hit someone bears no resemblance to anything likely
to have happened in the circumstances in which we and our ancestors lived.
Hence the thought of doing it does not elicit the same emotional response
as pushing someone off a bridge. So the salient feature that explains our
different intuitive judgments concerning the two cases is that the footbridge
case is the kind of situation that was likely to arise during the eons of time
over which we were evolving; whereas the standard trolley case describes a
way of bringing about someone’s death that has only been possible in the
past century or two, a time far too short to have any impact on our
inherited patterns of emotional response. But what is the moral salience
of the fact that I have killed someone in a way that was possible a million
years ago, rather than in a way that became possible only two hundred years
ago? I would answer: none. (Singer , f.)
The idea is that our intuitive aversion toward pushing a man to his
death can be explained in terms of a match between our cognitive
processes and our environment of evolutionary adaptedness. Under those
circumstances, being highly sensitive to up-close-and-personal harm
happened to be the optimific disposition. In modern environments,
which Stanovich () usefully describes as “hostile” (ff.) to many
of the cognitive shortcuts biological and cultural evolution has equipped
us with, this is no longer the case. When the emotionally salient feature
is removed, as it is in the Switch case, people see more clearly that
the right thing to do is to bring about the best consequences and save
the five. I will return to the debunking of nonconsequentialist moral
intuitions in Chapter and .
The important thing to remember about obsoleteness debunking argu-
ments is that they do not locate the fact that some of our moral judgments
are untrustworthy in the general off-track nature of these processes with
respect to the moral domain but in the lack of fit between the processes that
generate these judgments and the external conditions (“hostile environ-
ments”) under which these processes are supposed to perform.
Symmetry
Symmetry debunking arguments locate the epistemic defectiveness of a
cognitive process in the contingency of our epistemic position. We hold many
beliefs simply because of circumstantial luck, such as where or when we
happened to have been born. This observation summons an army of
counterfactual selves, each of them holding beliefs contradictory to those
of my actual self, but formed on the same basis, in the same way, and with
the same degree of confidence. But if I could have easily been one of those
counterfactual selves instead of my actual one, and if I could thus easily
have held entirely different (and incompatible) beliefs with the same
confidence as I hold my beliefs now, doesn’t this show that I hold my
actual beliefs on arbitrary grounds?
Tomas Bogardus (, ff.) develops this line of thinking into a
debunking argument for moral realism in which the aforementioned
counterfactual selves are like me but with a different evolutionary trajectory. Had
I evolved at another time, at another place, facing different selective
pressures, and had I formed my moral beliefs in the same way as I formed
them now and here, those beliefs would, by my own lights, be false.
This type of argument is perhaps most commonly applied to issues in
religious epistemology (Bogardus ). Bogardus () quotes Philip
Kitcher, who gives the gist of the idea: “Had the Christians been born
among the aboriginal Australians, they would believe, in just the same
ways, on just the same bases, and with just the same convictions, doctrines
about Dreamtime instead of about the Resurrection. The symmetry is
complete. [. . .] Given that they are all on a par, we should trust none of
them” (Kitcher , , quoted in Bogardus ).
There are plenty of everyday examples of this as well. We are trying to
reach the train on time. It takes minutes to get to the station, and the
train leaves at : p.m. I think we need to hurry up, because my watch
says it’s :. You say we can take our time, because your watch says it’s
:. Both our watches have thus far been reliable, and neither of us has an
interest in missing the train. We have no other evidence regarding what
time it is. In this case, it seems unreasonable for me or you to go with our
respective watches. We are in equally good epistemic positions, and had
I looked at your watch instead of mine, I would have acquired a different
belief. We thus have an epistemic tie. The fact that I happened to look at
my watch does not break that tie. The general schema for symmetry
debunking thus looks something like this:
Symmetry
(1) S arrives at p on the basis of P
(2) Using P, (actual or counterfactual) epistemic peer(s) {S1, . . ., Sn}
arrive at an incompatible judgment q
(3) S has no independent (i.e., nonarbitrary or question-begging) grounds
for believing p rather than q
Therefore, (4) p and q are unjustified
Therefore, (5) p is unjustified
The thing to note is that the fact that my beliefs are actual is not sufficient
to serve as a legitimate tiebreaker between epistemic peers (Christensen
). In assessing the case for my beliefs rather than my counterfactual
selves’, it would be illegitimate, for instance, to appeal to the fact that my
beliefs seem overwhelmingly plausible to me or that I have used my faculty
of intuition to arrive at them. It would also be illegitimate to let one’s
status as an epistemic peer depend on whether one shares the belief at
issue. In the absence of good grounds for believing one thing rather than
another that are not tied, in the way just described, to the contingency of
my epistemic position, such as reliable and transferable methods of meas-
urement, remaining steadfast can be epistemically problematic.
This means that disgust does not merely have a high false positive rate
regarding its original elicitors; its hypersensitivity is further exacerbated by
the fact that other, in themselves not at all disgusting things can become
disgusting by coming into contact with repulsive things or that perfectly
innocuous things can become disgusting because they remind one of
something gross.
How do parasites and rotten food bear on morality? In and of them-
selves, they do not. But Kelly argues, plausibly to my mind, that as human
social life grew increasingly complex, the emotion of disgust became
recruited to police the boundaries of the in-group and compliance with
various social norms. Disgust is especially fit for this purpose because of its
flexibility in becoming associated with a wide range of external cues. Over
time, deviant behavior and strange-looking people became disgusting.
Disgust ended up tracking and supporting norms and values, and was
thus “co-opted” for moral purposes (Kelly , ff.).
In the moral domain, the high false positive rate of disgust has especially
pernicious consequences. Rotten carcasses do not care very much about
being ostracized or discriminated against; but when applied to human
affairs, disgust does not merely lead to unhealthy forms of self-loathing
(Nussbaum ) but can spill over into issues such as who deserves moral
consideration at all or who can be shunned, abused, or killed for being the
disgusting vermin that they are. Needless to say, reactions of disgust have
not just played a huge role in facilitating atrocity and genocide but also in
obstructing many worthy causes, from sexual liberalization to ending
slavery to establishing women’s suffrage.
Moral disgust has its friends (Plakias ) and foes (Gert ). What
I wish to emphasize here is that, regardless of where one stands with
respect to the moral relevance of disgust, hypersensitivity debunking is
not based on the irrelevance of disgust. Disgust responses are not generally
off track in the fundamental way that evolutionarily shaped evaluative
dispositions are. The fact that we sometimes see faces in the clouds does
not mean that we are generally bad at recognizing faces. Similarly, a
process’s high false positive rate with respect to a given target attribute
(e.g., disgustingness or moral wrongness) does not entail that said process
picks up on morally irrelevant features. Hypersensitivity is a quantitative,
not a qualitative, issue. To be sure, disgust often does pick up on morally
irrelevant features, but this is a separate problem from the one I am
discussing here. Hypersensitivity debunking locates the epistemic
defectiveness of a process in its overly generous responsiveness to potentially
relevant proxies. This is altogether different from a sensitivity to downright
morally extraneous features of a person, action, or situation.
(ii) Hyposensitivity. Some processes implicated in the production of moral
judgment suffer from the opposite problem. Take empathy. Many historical
and contemporary authors see some (usually impartial or otherwise purified)
form of empathy as crucial for morality. But others such as Peter Singer have
long thought that our empathetic concern for distant strangers is woefully
inadequate. Letting a nearby child drown because saving it would ruin my
suit would strike us as monstrous. Buying a suit instead of giving the money
to a starving child abroad seems perfectly permissible to us (Singer ).
This puzzling pair of responses, Singer argues, is explained by the simple fact
that we care more about the vivid suffering of those close to us than about
the remote and statistically abstract suffering of strangers. Empathy is
hyposensitive, generating a large number of false negatives:
Hyposensitivity
(1) p is based on P
(2) P is hyposensitive (i.e., produces an unacceptably large number of
false negatives)
Therefore, (3) p is unjustified
Inconsistency
I will now turn to two types of debunking that, though prominent in the
literature, fail as debunking arguments. Inconsistency debunking targets
neither individual token nor general types of moral beliefs but rather
inconsistent pairs of moral judgments. I will argue that it is unsuccessful
as a debunking argument because, since it does not essentially draw on
epistemically defective processes to undermine its target, it should not be
classified as a type of debunking argument at all.
Let me illustrate this type of debunking with another example. In his
Living High and Letting Die (), Peter Unger considers what he refers
to as the Extraordinary Puzzle of (the Great Dead Virginians and) the
Imaginary Australians. George Washington, despite all the at least decent
things he may have done, also owned slaves. He could have freed
them, of course, at some personal but presumably morally insignificant
cost, but chose not to do so until after his death. What, Unger asks, is our
moral assessment of Washington’s (or Jefferson’s, who occupied a similar
stance toward slavery in both theory and practice) total conduct? More
precisely: why is it that we afford quite a high moral status to their overall
moral conduct?
To see how puzzling this is, Unger invites us to consider a different story,
this time a fictional one. Imagine that the early Australian settlers kept
Aboriginal slaves and still do so today. As it happens, two individuals named
Paul and Mary Singer both own slaves and reap the benefits of their labor
but, in the process, treat their slaves extremely well, keeping them in
luxurious living conditions. In all other respects, they live a morally exemplary life of
growing organic fruit and donating their profits to charity. Nonetheless,
Unger suggests, our moral assessment of their overall conduct is abysmal.
That’s because, you know, Paul and Mary Singer are slave owners.
However – and this is the puzzle – the Singers’ behavior vis-à-vis their
slaves is at least somewhat better than Washington’s behavior toward his.
And in other respects, their lives are at least somewhat morally superior to
Washington’s as well. But since these are all relevant respects, it is puzzling
why our assessment of Washington’s overall conduct is quite sympathetic,
whereas we view the Singers’ conduct much less favorably. This example
(and many others like it) points toward a way of debunking seemingly
inconsistent pairs of judgments. We have one response to the first case and a
different response to the second. But it is hard to see which morally
relevant difference could ground this response.
Several authors have developed this type of strategy into a model for
constructing pairwise debunking arguments (Campbell and Kumar ;
Kumar and May forthcoming). The general idea is this: rationally, we
ought to treat like cases alike, unless there is some relevant difference.
However, we often have conflicting responses, either in content or in
strength, about two cases that appear indistinguishable in all important
respects. Should this appearance be confirmed upon further examination,
we have reason to think that the difference in our responses to the two
cases is based on a morally irrelevant feature. This leads to the following
schema:
Inconsistency
(1) A subject S holds two incompatible moral judgments p and q about
two cases C and C’
(2) C and C’ are not different in any morally relevant respects.
Therefore, (3) S is unjustified in holding (p and q)
Therefore, (4) S is required to abandon either p or q or (p and q)
There are several things that make this type of argument distinctive. First, it
is highly selective, as it only targets particular pairs of beliefs. However,
inconsistency debunking may develop a more sweeping dynamic over time:
there seems to be no morally relevant difference between the drowning
child in front of me and the starving child abroad, and the fact that we are,
on an intuitive level, so bad at detecting this similarity straight away can
make us doubt a whole range of other comparable judgments as well.
Note, second, and contrary to what many authors suggest, that the kind
of inconsistency at issue here is not a logical but a substantive moral one.
To see this, suppose the case for the expanding circle of moral concern goes
something like this (cf. Huemer ): (a) We have moral obligations
toward friends and kin. (b) The difference between friends and kin and
strangers is morally irrelevant. (c) Therefore, we have moral obligations
toward strangers. The inconsistency picked out by (b) is not a formal one;
it is not incoherent to deny (b); likewise, the second premise in the earlier schema
will always contain a substantive normative premise. This premise may be
overwhelmingly plausible, but it is always up for debate.
Third, inconsistency debunking, unlike other forms, is sensitive to effect
sizes (Kumar and May forthcoming). Subjects may treat two cases differ-
ently either by holding contradictory judgments or by holding judgments
with the same content but different strength. For instance, some people
may strongly agree with the permissibility of an action, others only slightly.
When this is the case, it seems premature to suggest that judgments about
the action need to be fully abandoned. Perhaps this difference in degree, at
least when it falls below a certain threshold, can be adjudicated otherwise.
Notice, however, that this schema does not depend in any crucial way
on a descriptive premise. (The first premise is descriptive, of course, but
not interestingly so.) At no point does this type of argument supply an
informative genealogy of the pair of beliefs it aims to debunk in terms of the
process by which it was arrived at. It is of course possible to supply such a
genealogy. But it is not required for this type of argument to get off the
ground, since we simply do not need to have any specific knowledge about
what type of process is responsible for the fact that we judge C and C’
differently. Of course, something must account for our differential
responses to those cases, and whatever it is, this something will lack moral
relevance. But once we know this fact, and thus once we know that a
person holds incompatible judgments about two cases which are not
relevantly different, we already have all we need to conclude that a pair
of judgments cannot be maintained. Inconsistency debunking arguments
lack (the need for) a convincing causal premise to cast the moral judgments
at issue into doubt. Strictly speaking, it should not count as a type of
debunking argument at all.
I am using Prinz’s account of genealogical debunking merely for the purpose of illustration. In doing
so, I am riding roughshod over the very subtle and sophisticated debate on Nietzsche’s genealogical
method happening among contemporary Nietzsche scholars. For an excellent overview of this
debate, see the papers in May ().
Debunking Arguments in Ethics
Here is how the schema of this type of debunking is supposed to work:
Ignoble Origins
(1) p emerged on the basis of historical process P.
(2) P is either evaluatively problematic (“ignoble”) or in internal tension
with the content of p or both.
Therefore, (3) p is unjustified.
The problem with this type of argument is that it is clearly invalid. My
suspicion is that Nietzsche, who was well aware that people are rarely
swayed by rational arguments unless coupled with some emotional appeal,
uses his genealogy of morality as a rhetorical device, an instance of hyper-
bolic history, knowing that it would have considerable clout despite being,
technically speaking, inconclusive (Saar ). This hyperbolic history is,
first and foremost, aimed at showing that the moral outlook we have
inherited is not and was not inevitable: rather, it was pieced together from
fragments and splinters of various cultures, traditions, and sources.
A genealogy thus shows that our moral values are not necessary, as their
trajectory could easily have gone otherwise. This makes room for thinking
about an alternative, the possibility of which is precluded by our current
values’ alleged necessity. Ignoble origins debunking may thus be classified
as a genuine debunking argument, because it contains an essential refer-
ence to the (historical) processes by which a given moral belief emerged.
However, this type of argument typically lacks a convincing normative
premise.
Conclusion
Debunking arguments consist of a descriptive premise supplying a causal
explanation of a type or token of moral judgment and a normative premise
classifying this causal path as epistemically defective. These arguments can
be deployed in a selective or global manner. Some target only a subset of
moral beliefs, and some aim to undermine the justification of all of our
moral judgments at once. There are four promising types of debunking
arguments and two less promising ones. The promising ones are off-track
debunking, the evolutionary variety of which is most prominent; obsolete-
ness debunking, which holds that many moral judgments are formed on
the basis of cognitive processes that become unreliable in certain hostile
environments; symmetry debunking, which points to the contingency of
the environment in which we form many of our moral beliefs; and
detection error debunking, which shows that some cognitive processes
are either too sensitive or not sensitive enough to be generally trustworthy.
The two less promising ones are inconsistency debunking, which, since it
requires no causal premise, arguably should not count as a type of
debunking argument at all, and
ignoble origins debunking, the classical “Nietzschean” type of debunking
that uncovers the ugly distal sources many of our moral beliefs originated
from but fails to make plausible why such an evaluative genealogy of
morality should lead us to abandon any of our moral beliefs at all.
Debunking Defused: The Metaethical Turn

Introduction
In a famous passage from his Descent of Man, Darwin speculates how a
different evolutionary trajectory might radically have altered our moral
beliefs: “In the same manner as various animals have some sense of beauty,
though they admire widely different objects, so they might have a sense of
right and wrong, though led by it to follow widely different lines of
conduct. If, for instance, to take an extreme case, men were reared under
precisely the same conditions as hive-bees, there can hardly be a doubt that
our unmarried females would, like the worker-bees, think it a sacred duty
to kill their brothers, and mothers would strive to kill their fertile daugh-
ters; and no one would think of interfering” (Darwin [], ).
As lucid as this frequently quoted observation is, Darwin was less
inclined to apply this insight to his own sense of right and wrong and
apparently remained “confident in his own Victorian moral opinions,
happily referring to the ‘low morality of savages,’ calling slavery ‘a great
crime,’ and freely using words like ‘noble,’ ‘evil,’ and ‘barbarous.’ It
remains obscure why Darwin thinks that the human moral sense, when
shaped by the particular cultural trajectory of the British Empire, results in
moral opinions that are true or justified – even if he is correct that these are
the moral opinions that humans living in large groups will eventually alight
upon” (Joyce , ). What Joyce reminds us of here is that evolution-
ary explanations of our moral beliefs do not merely enlighten us about the
origins of those beliefs; under certain conditions, they also tend to under-
mine their justification – a point the full force of which Darwin arguably
overlooked.
This chapter is about how we should react upon learning that our moral
beliefs have been shaped by evolutionary forces and which changes in our
attitudes toward those beliefs this information recommends. In her influ-
ential paper on the “Darwinian dilemma” for realist theories of value,
Sharon Street () argues that the required changes should take place
not in our moral beliefs themselves but in the beliefs we have about
those beliefs. In short: evolutionary considerations should change our
metaethics but leave our normative ethics intact. I will refer to this as the
metaethical turn.
Her main reason for this claim is based on what I will refer to as the
weakest link argument. When faced with a choice, this argument goes,
between the basic tenets of evolutionary theory, a realist account of the
metaphysics of value, and our most central normative convictions, we
should give up realism about value. It is the weakest link of the three
and, as such, more easily sacrificed than the others.
In this chapter, I am not in the business of defending metaethical
realism against Street’s powerful challenge. While I will assume that
there are some standards of correctness regarding moral judgment,
I wish to remain agnostic about the question whether we should
understand these in realist or nonrealist (e.g., constructivist) terms.
My disagreement with Street is about the impact evolutionary consider-
ations have – or ought to have – on our normative beliefs. My main
aim in this chapter is to show that her strategy for protecting our
substantive moral beliefs from genealogical debunking arguments by
redirecting such arguments to a metaethical target may be problematic
and, indeed, undesirable.
The chapter has six sections. In the first, I briefly explain the basic
outlines of Street’s dilemma and how it is supposed to support the
claim that metaethical adjustments can render our first-order moral
beliefs immune to evolutionary debunking (I will refer to this claim
as No Impact). The second is dedicated to the structure of the weakest
link argument, and the third discusses two examples to illustrate how
this argument shifts the target of debunking from normative to
metaethical theories. In the fourth section, I give three reasons for
doubting that it successfully establishes that realist accounts of value
are more easily given up than many of our normative judgments. In the
fifth section, I offer a diagnosis of why the weakest link strategy may
seem compelling even if it is not. My two main points will be that an
answer to this question depends, first, on one’s assessment of the costs
and benefits of sacrificing either of the aforementioned “links” and,
second, on the prior plausibility of the moral beliefs one selects as
examples to compare the plausibility of moral realism to. Here, I will
argue that Street’s comparison depends on an unbalanced diet of
examples.
It is actually quite clear, the realist might say, how we should understand
the relation between selective pressures and independent evaluative truths.
The answer is this: we may understand these evolutionary causes as having
tracked the truth; we may understand the relation in question to be a
tracking relation. The realist might elaborate on this as follows. Surely, he
or she might say, it is advantageous to recognize evaluative truths; surely it
promotes one’s survival (and that of one’s offspring) to be able to grasp
what one has reason to do, believe, and feel. [. . .] According to this
hypothesis, our ability to recognize evaluative truths, like the cheetah’s
speed and the giraffe’s long neck, conferred upon us certain advantages
that helped us to flourish and reproduce. Thus, the forces of natural
selection that influenced the shape of so many of our evaluative judgements
need not and should not be viewed as distorting or illegitimate at all.
(Street , f.)
We need not assume, she argues, that for certain tendencies to make some
evaluative judgments rather than others to be selected for, these judgments
had to be true. It sufficed for those tendencies to promote our ancestors’
reproductive success, which they would have done regardless of whether
they are true. Considerations of parsimony, clarity, simplicity, and
explanatory fecundity count against the realist’s story. Let me state the
dilemma a little more concisely:
Darwinian Dilemma
(1) Evolution had an influence on the content of our moral judgments.
(2) Either the direction in which evolution pushed the content of our moral beliefs
was not influenced by mind-independent moral facts (first horn)
(3) or the direction in which evolution pushed the content of our moral beliefs was
influenced by mind-independent moral facts (second horn).
(4) (2) leads to global moral, and indeed normative, skepticism.
(5) (3) is scientifically implausible.
(6) Therefore, there are no mind-independent moral facts.
What I am after in this section is not whether the dilemma is ultimately
successful as a challenge to moral realism. I have already said that for the
purposes of my discussion in this chapter, I wish to remain agnostic about
this issue. Now before I turn to the issue of how others use similar
considerations in their evolutionary argument against the justification of
(some of ) our first-order normative beliefs, let me briefly ask why Street
thinks that her dilemma leaves those beliefs unaffected and only has
consequences for which metaethical account of them is preferable:
“[O]nce you adopt a mind-dependent conception of value [. . .] evolution-
ary explanations of our values aren’t undermining in the least.” And: “All
these deepest values should remain untouched by genealogical revelations”
(Street , –; here ). Call this claim
No Impact
Given a conception of moral norms and values as mind dependent, evolutionary
considerations have no bearing on the justification of our evaluative beliefs.
The reason for this, according to Street, is that the dilemma does not
undermine the justification we have for believing that pain is bad or that
we should help those in need and shun cheaters per se; it merely under-
mines one available strategy for justifying them, namely, the realist one. All
views that reject realism about (moral or nonmoral) value but tie reasons
This reconstruction of Street’s argument sacrifices validity for readability. I will provide a more
thorough reconstruction of her argument in what follows.
Some might of course be happy to bite the bullet of moral skepticism or reject the scientific
explanation as inadequate. I will not engage with these possible responses here but will have
something to say about them in Section ().
for action to subjects’ evaluative attitudes can be more than happy to agree
that evolutionary forces shaped the content of those attitudes. After they
have done so, the question about their justification becomes a question
regarding which of our moral beliefs can “withstand[. . .] scrutiny from the
standpoint of our other evaluative judgements” (Street , ). That is,
once we have settled the evolutionary origins of many of our moral beliefs,
we are left with the ordinary methods of normative inquiry to assess the
respective merits of those beliefs. Darwinian considerations can weigh in
on the metaethical question concerning the metaphysics of value but are
entirely irrelevant to the normative question regarding which of our values
are worth keeping and which are not.
Notice how strange, at first glance, this strategy is. This fact is
acknowledged by Street herself:
That move might seem odd. It’s as though upon learning that your belief
about Hayes [the 19th president of the U.S., Author] had its origin in
hypnosis, you find it so implausible that you could be wrong about whether
Hayes was the twentieth president that you opt to change your conception
of the subject matter, concluding that facts about who was the twentieth
president are constituted by facts about who you think the twentieth
president was, no matter what the source of your views, hypnotism
included. (Street , )
In fact, one could see this claim about the normative irrelevance of
evolutionary considerations as the biggest payoff of Street’s argument.
The main merit of her ingenious dilemma thus lies not only in showing which
metaethical account is most vulnerable to her challenge but also in how
redirecting the evolutionary challenge toward a metaethical subject matter
makes it possible to go on thinking about how to live in the usual way.
This last claim is the one I wish to reject here.
In a more recent paper, Kahane () describes the situation as follows: “Utilitarianism is often
viewed as an extremely counterintuitive view; many find Singer’s normative views troubling, even
repugnant. But if we take the goal of purging all evolutionary influence from our normative views
seriously enough, we will end up with a view that is so radically divorced from common sense, and so
distant from any familiar ethical theory, that, by comparison, Singer’s own utilitarianism will seem
almost like old-fashioned common sense” ().
it is to think that we have no idea how to live, which is the conclusion that
results if we pair a mind-independent conception of value with an evolu-
tionary genealogy of valuing. Accepting this radical sceptical conclusion
would involve nothing less than suspending all evaluative judgment, and
either continuing to move about but regarding oneself as acting for no
reason at all, or else sitting paralyzed where one is and just blinking in one’s
ignorance of how to go forward. Accepting the conclusion that value is
mind-dependent, on the other hand, preserves many of our evaluative
views – allowing us to see why we are reasonably reliable about matters of
value – while at the same time allowing us to see ourselves as evolved
creatures. (Street , )
Let me remind you of the passage quoted earlier. If we adopt Street’s
metaethical debunking, it is
as though upon learning that your belief about Hayes had its origin in
hypnosis, you find it so implausible that you could be wrong about whether
Hayes was the twentieth president that you opt to change your conception
of the subject matter, concluding that facts about who was the twentieth
president are constituted by facts about who you think the twentieth
president was, no matter what the source of your views, hypnotism
included. (Street , )
I wish to argue, however, that depending on the normative judgments one
feeds into one’s assessment of which of the three links – evolutionary theory,
first-order moral beliefs, metaethical theory – one ought to give up, an “odd
move” such as this one is precisely the move Street suggests we make.
Greene uses different types of evidence to support his normative conclusions. Here, I will focus
exclusively on the evolutionary evidence. There is reason to believe that the evidence from
neuroimaging or reaction time studies he also relies upon does not fare well under closer scrutiny
(see, for example, Sauer for this).
Particularists such as Dancy () will of course deny that there are any properties that are always
morally relevant, but this point can be ignored here.
What is it, then, that explains people’s differential responses to those
famous puzzles? And can subjects’ intuitions be explained in terms of non–
truth-tracking processes? As for the first question, both Singer and Greene
essentially rely on a personal/impersonal distinction. We tend to care
more about the drowning child in front of us than about the distant
starving child because the former’s plight is emotionally more vivid; and
we are less prepared to push a man in front of a trolley than to divert one
with a lever, because the former action involves harming a person in an up-
close-and-personal manner, whereas the latter recruits an indirect technical
mechanism.
As for the second question, Singer and Greene maintain that such
personal factors – for example spatial distance or physical proximity –
are morally irrelevant. Our normative judgments ought not to be deter-
mined by situational features of this particular sort. And both explain the
moral intuitions that it is not obligatory to give to famine relief or that it is
not permissible to push a large stranger to his death in terms of evolution-
ary forces distorting our judgments. They argue that when making such
judgments, we are in the grip of powerful emotional reactions (or their
absence) that are due to the environment our ancestors had to deal with.
In our evolutionary past, humans lived in small-scale communities; there
was little chance for them of having to deal with issues such as the global
distribution of resources or the permissibility of killing someone by
switching a lever. Their emotionally charged moral intuitions were selected
for dealing with close interactions, which makes them ill equipped to
adequately handle moral problems that go beyond them.
But now that we live in the modern world and keep encountering
situations that outstrip up-close-and-personal interactions, we must think
about which set of evolved emotional reactions we would like to continue
to influence our moral judgments:
[T]he salient feature that explains our different intuitive judgments con-
cerning the two cases is that the footbridge case is the kind of situation that
was likely to arise during the eons of time over which we were evolving;
whereas the standard trolley case describes a way of bringing about
Note that Greene does not phrase his argument in terms of the personal/impersonal distinction
anymore, which had to be given up (Sauer ), but in terms of “personal force” (Greene et al.
). Nothing hinges on this, however, as both factors seem to be equally morally irrelevant.
This claim is borne out by studies with trolley cases and psychopaths (see Koenigs et al. ()) and
by emotion manipulation studies based on trolley cases (see Valdesolo and DeSteno ()); see also
Uhlmann, Pizarro et al. (), who show how implicit racial biases can figure in people’s judgments
about trolley cases.
someone’s death that has only been possible in the past century or two,
a time far too short to have any impact on our inherited patterns of
emotional response. But what is the moral salience of the fact that I have
killed someone in a way that was possible a million years ago, rather than
in a way that became possible only two hundred years ago? I would
answer: none. (Singer , )
Note that despite the similarities in their arguments and the fact that both seem to agree that an
evolutionary genealogy of our moral cognition favors utilitarianism, they disagree on why it does so
in terms of which of utilitarianism’s main contestants is ruled out by the debunking story: in Singer’s
case, it is parochialism and an arbitrary restriction of our attitudes of benevolence to our nearest and
dearest; in Greene’s case, it is a rejection of deontological side constraints on the maximization of
value. On a very general level, however, both agree on what is ruled out by the genealogy they have
in mind, namely all and only those moral judgments that are based on up-close-and-personal
considerations.
our callous insensitivity to the anonymous suffering of millions of starving
strangers is most likely a product of the circumstances our ancestors’
emotional dispositions developed in. Now we are facing a choice: we
could, as Street suggests, give up our metaethical account of what renders
those evaluative attitudes justified. We could say that moral properties are
response dependent and that therefore these attitudes cannot turn out to
be false or untrustworthy in virtue of the fact that they fail to track the
moral truth. But is this really the most plausible thing to do? We could,
after all, retain our belief that there are some answers to moral questions
that are correct or incorrect independent of our evaluative attitudes and
instead consider giving up the moral judgment that not helping distant
suffering strangers is thoroughly unproblematic. And it is far from clear, it
seems to me, which of those two options is the weaker link.
I have raised this worry (to which I will return in what follows) to set
the stage for a few objections to the weakest link argument. The first three
points I wish to make are these: first, the argument employs the wrong
kinds of reasons when asking which of the three links ought to be
considered the weakest. Street’s argument for the nonnegotiability of
(ii) is based on pragmatic considerations about how difficult life would
be if we abandoned normative conservatism; second, the argument’s scope
remains unclear. If pragmatic considerations are admissible in this con-
text, then it may turn out that (i) can be dispensed with most easily, since
it is clear that human beings can get along without accepting it, as
evidenced by the fact that they have gotten along without accepting it
for most of their history; and third, one could doubt whether its assess-
ment of the costs involved in giving up one link or another is accurate
at all. It remains unclear just how difficult it would be to live without
(ii) and whether it would be so difficult at all. Most importantly, though,
I will argue that the weakest link argument fails to show that metaethical
realism is indeed more easily given up than many of the normative
judgments affected by an evolutionary genealogy. I will take up each of
these three smaller points in turn.
() The Wrong Kind of Reason. Street argues that giving up our normative
beliefs in favor of our metaethical proclivities would leave us with “no idea
how to live.” The suggestion is that accepting that our evaluative beliefs
are all unjustified would be practically unbearable – we wouldn’t know
what to do and how to continue with our lives. However, this argument
fails to engage with the issue at hand, because it conflates theoretical and
practical reasons for and against accepting the truth of a proposition.
First of all, it can be doubted whether the predicted result would
actually occur, as people thought the same thing – that it makes it
impossible for us to believe that our life and our actions have any point
whatsoever – about the heliocentric world view or atheism, and these
moral panics now amuse rather than disturb us. And second of all, whether
the truth of a proposition would make acting and deliberating harder
seems thoroughly irrelevant to our question, which is whether our norma-
tive convictions or our metaethics are true rather than which of the two it
would be easier to give up from a practical point of view. Maybe it is true
that normative moral nihilism would paralyze us – but so what? Compare
this to the proposition “I have cancer.” Regarding the question whether this
proposition is true, the question of how my agential and deliberative
capacities would be affected by believing it seems neither here nor there.
It may very well be that, practically speaking, I ought not to believe that
I have cancer, because it would make me desperate and depressed “sitting
paralyzed where one is and just blinking in one’s ignorance of how to go
forward” (see earlier). But it should be obvious that this does not make the
proposition false.
() Scope. Street seems very confident that readers will agree with her
suggestion to take the truth of evolutionary theory as a “fixed point”:
(i) is clearly not the weakest link. But why not, one could ask? Why not
think that, in fact, the mind dependence of values is so implausible that
we’d rather give up on evolutionary science? This is not to say that Street
is wrong that most people, myself included, would find this bullet too
hard to bite. However, if the criterion for which of the three links to
drop and which to accept is how it would affect our ability to act on our
considered evaluative beliefs and go on with our lives, evolutionary
science fares worst: whether Darwinism, Lamarckianism, or intelligent
design are true has the least practical relevance when compared with the
metaethical and normative questions at issue here. One could think that
it would be easiest for us to reject evolutionary theory precisely because
it entails that either our values or our intuitive conception of the nature
of our values is false. Pragmatically speaking, (i) might thus well be the
weakest link.
There are, of course, independent epistemic considerations according to
which evolutionary science should not be given up. But this is beside the
point: my only focus here is on which reasons Street has to offer for why
evolutionary information should affect our metaethics rather than our
substantive beliefs. Here, the only reasons she puts forward are pragmatic
ones, and on the basis of those reasons alone, the nonnegotiability of
evolutionary science is likely to be an open question.
() How high are the costs? Moreover, it seems that Street grossly
overestimates just how terrible the truth would be that we cannot trust
our normative judgments anymore.
Suppose it turned out that there is never any real Reason (with a capital
R) for us to act one way or another. That is, suppose it can be shown that
all our actions are pointless and unjustified and that we have no idea how
to live in terms of what we have most reason to do. Who would be
bothered by this insight except for a few overly cerebral philosophers
who will refuse to act unless they are convinced they are leading a life of
reason? Would people really be practically paralyzed, depressed, and radic-
ally confused if they found out that they have no good reasons to act upon
their motivations? Would people stop caring about their children and stop
trying to avoid pain when they learn that they have no real reason to do so,
over and above the fact that they do in fact care about their children and
do in fact want to avoid pain? I predict that they would not. Most people
are perfectly happy to act on the motivations and values they happen to
have, regardless of whether they can adduce a set of foundational reasons in
their support.
As evidence, one could point to the phenomenon of psychopathy
(Maibom ). To a large extent, psychopaths seem to be unable to act
on the basis of higher-order values and a coherent self-conception. Instead,
they act on impulsive and irresponsible whims. But this does not paralyze
them at all. If anything, it makes them hyperagential; they might not be
very good at seeing to the realization of their goals, but this does not stop
them from making grand, if unrealistic, plans and coming up with projects
and visions for their poorly imagined future. Being unable to justify acting
on one’s desires and motivations has the opposite effect to the one feared
by Street. It does not paralyze our agency as much as spark it. Street thus
severely overestimates the cost of realizing that there are no genuine
normative reasons to act one way or another.
Note that this is a prediction, not a claim about what people ought to be doing.
at many of our deeply engrained evaluative attitudes, it seems that they are
much weaker links in the triad than our realist metaethics. To start with,
I find the judgments that consensual incest or homosexuality are wrong
much less plausible than realism about value in general. Street herself does
not choose these examples; but there is no reason not to do so, as they
equally admit of an evolutionary explanation along the lines of Street’s
“adaptive link” account and as they seem to be widely shared evaluative
attitudes as well. But let me spell out this point in a little more detail.
Take the example of incest one last time. Many have a reaction of
disgust toward the very thought of it. This is borne out by empirical
research (Haidt and Hersh ). On the other hand, it is very likely that
our revulsion toward incestuous actions was selected for because of its
advantageous effects (it helped prevent handicapped offspring and so
forth). Now let’s apply the Darwinian argument to the case of incest:
suppose, first, that incest is mind-independently wrong. Now to the first
horn: evolutionary pressures pushed our evaluative attitudes and normative
judgments toward recognizing this moral fact. This claim is scientifically
implausible, because the adaptive-link account is preferable to the tracking
account. Now to the second horn: evolutionary pressures did not push us
to recognize the fact that incest is wrong. But given that our belief that
incest is wrong was most likely influenced by natural selection, it would be
a massive coincidence if this belief hit upon the mind-independent truth
out of sheer cosmic luck. Street concludes that there are no mind-independent moral facts, because the moral skepticism that realism would then force upon us would be too much to swallow.
But here we see that whether realism about a certain type of reason (e.g.,
the fact that an action constitutes incest is a reason not to do it) or a
debunking attitude toward the content of the reason is the “weakest link”
greatly depends upon one’s substantive normative views concerning the prior
plausibility of the moral belief under consideration. Consider the examples
for evaluative attitudes given by Street:
() The fact that something would promote one’s survival is a reason in
favor of it.
() The fact that something would promote the interests of a family member
is a reason to do it.
() We have greater obligations to help our own children than we do to help
complete strangers.
() The fact that someone has treated one well is a reason to treat that person
well in return.
Debunking Defused: The Metaethical Turn
() The fact that someone is altruistic is a reason to admire, praise, and
reward him or her.
() The fact that someone has done one deliberate harm is a reason to shun
that person or seek his or her punishment. (Street , )
When we look at examples such as “Pain is bad” or “We ought to care for
our children,” we might think that we would rather become antirealists
about those reasons to sustain their validity than become skeptics about
their very legitimacy. But does the same hold for moral judgments about
incest? Why would we think that the evolutionary story we have to tell
about the origins of our aversion toward incest refutes moral realism about
the wrongness of incest, rather than rendering the content of the judgment
that incest is wrong itself unjustified?
Street’s argument for the purely metaethical relevance of evolutionary
considerations thus depends on a one-sided diet of examples. If one
chooses only the most plausible evaluative attitudes – and what could
be more plausible than “Pain is bad”? – it is more tempting to embrace
antirealism about those values than to reject them as unfounded. But if
one looks at examples of norms and values we are less inclined to
endorse after careful reflection, the choice between realism and skepti-
cism suddenly becomes much more open. Consider the following list of
examples:
() The fact that someone has the same skin color as we do is a reason to care
more about that person.
() The fact that someone has engaged in “unnatural” sex acts is a reason to
punish him or her.
() The fact that someone is a woman is a reason not to give her the same
rights as men.
Now, I will not take a stand on whether these claims are true. But suppose
that the following is true: the reason why a particular person or group of people
believes most or all of the above is independent of their truth. Rather, the
true causal explanation for why people from that group believe ()
through () has to do with class interests, and the way believing said
claims disposes people to act in ways which fulfill the functional require-
ments of a capitalist economy in general and the interests of the ruling class
in particular.
Suppose, further, that once believers of () through () have been
informed about the genealogy of their beliefs, they were to adopt some-
thing akin to Street’s weakest link strategy; sure, they would reply, the way
we have arrived at our political views bears no relation to the truth of those
views. But instead of abandoning them, we would rather hold on to them,
as disbelieving or replacing them with different ones would be “paralyz-
ing.” In order to achieve this result, said believers continue to accept ()
through () but now combine them with a different metatheoretical
account of what it is that renders them acceptable. Most would find this
move dubious at best and downright cynical at worst.
This suggests that genealogical debunking, regardless of whether it is of
the Darwinian, Marxist, or Nietzschean kind, is selective in the following
way. The selectivity of such undermining challenges does not lie primarily
in the fact that they target only some rather than all normative judgments
(Kahane ). That may or may not be true (Rini ). Debunking is
selective in the further sense that, since there are no definitive criteria for
when it is appropriate to use debunking arguments for metaethical or for
normative purposes, this decision will inevitably have to be made on the
basis of considerations pertaining to the prior plausibility of the subset of
evaluative beliefs for which the debunking is devised. In some cases, the
appropriate target of our debunking will turn out to be metaethical,
namely when the first-order judgments we would have to sacrifice play
too central a role in our web of norms and values to be given up. In others,
substantive normative judgments will have to give way, namely when they
are comparatively less compelling than a realist account of their metaphys-
ics and our cognitive access to it.
Conclusion
A natural objection to the argument of this chapter against using pragmatic
considerations as criteria for the weakness of the respective links has it that
it is problematically uncharitable. Perhaps, one might argue, what Street
really means to do when she argues that giving up confidence in our
substantive moral beliefs would be more “costly” than adjusting our
metaethics is to illustrate just how epistemically implausible she finds this
option. This epistemic point would then merely be couched in metaphor-
ical, pragmatic-sounding terms. But first of all, I find it hard to come up
with any genuinely epistemic interpretation of “paralysis,” “blinking in
one’s ignorance about how to go forward,” and the like. And even if an
epistemic interpretation of this sort could be provided, it seems difficult to
reconcile such a reading with Street’s other commitments. The most
obvious nonpragmatic interpretation of the earlier passage would be that
Street is arguing for the deliberative indispensability of moral norms and
values. However, arguments that rely on deliberative indispensability are
typically understood in one of two ways: one is pragmatic (we cannot
dispense with our moral beliefs because of the role they play for our
agency), which would throw us back to an assessment of the adequacy of
Street’s cost/benefit analysis. The other way, though genuinely epistemic,
is unavailable to Street. David Enoch (), for instance, uses arguments
from deliberative indispensability evidentially, like inferences to the best
explanation: just like the indispensability of mathematics gives us reason to
think that numbers exist, so does the deliberative indispensability of moral
norms and values give us reason to think that moral facts exist – which is
the very conclusion Street wants to avoid in the first place.
Moreover, pragmatic considerations seem to be relevant to a wide range
of epistemological issues such as external world skepticism. When faced
with a choice between a particular view in epistemology and the radical
skeptical implications it may have, it seems justified to bring pragmatic
considerations to bear on which of the two to go for. So why should such
considerations not be relevant here? A fully satisfying answer to this
question would require a fleshed-out account of the relationship between
epistemic and pragmatic reasons as well as the distinction between wrong
and right kinds of reasons. Such an account is beyond the scope of this
book, but let me provide at least the following brief response to this
legitimate question: the general epistemic relevance of pragmatic consider-
ations would not change the fact that neither the scope of nor the
assessment of the severity of the costs behind the weakest link strategy
would give Street the desired results.
One may think that the examples () through () are not sufficiently
“core” to balance Street’s diet of examples. I do not agree with this. First of
all, moral beliefs about sex, gender roles, and the in-group/out-group
distinction seem to play an extremely important moral and political role
and figure in most people’s personal identities. Moreover, issues of this
kind seem to be cross-culturally moralized. In addition to that, it is not
entirely obvious how to measure whether a belief lies close to the core or
the periphery of a person’s moral mindset. As a useful “Quinean” metric,
I suggest a standard of how unwilling a given person would be to give up a
particular belief (logical principles are close to the core because they are
sacrificed last; observational reports lie close to the periphery because they
are given up first). According to this metric, () through () are very
central indeed.
Finally, one of the main payoffs of Street’s strategy is that it allows us to
protect many of our deeply held values at the expense of a small metaethi-
cal sacrifice. What’s not to like about this? A lot, I would suggest. This
point overlooks the fact that in many cases, the weakest link strategy of
shifting the impact of a certain type of scientific information from norma-
tive to metaethical territory might actually turn out to do more harm than
good. It is supposed to do good because it protects many of our normative
beliefs and the confidence we have in them from being undermined by the
facts of evolution. But it might also do harm in that it deprives us of one
powerful source of healthy skepticism toward those beliefs. Most readers,
I presume, will agree that many of our unconsidered moral intuitions are
based on bias and prejudice and are not really worth protecting at all. The
weakest link argument shows that often, one last option we have when it
comes to defending those potentially unjustified beliefs anyway is to make
the metaethical move I am discussing here. And while this is often
possible, I doubt that it is advisable. In fact, whether it is will depend on
how dear those normative beliefs are to us, that is, how much prior
plausibility we grant them, especially when compared to competing
metaethical accounts.
Introduction
Debunking arguments are the most promising candidates for galvanizing
the empirical and the normative because of how they explain moral beliefs
in ways that can undermine their justification. They allow us to bring
descriptive information to bear on issues of moral significance.
On a structural level, this border crossing is reflected by the fact that, as
I have argued in the first chapter, debunking arguments consist of a
descriptive premise stating the causal history of a given (set of ) moral
belief(s) and a normative premise discrediting this history as epistemically
unreliable. Together, and when cashed out appropriately, these two prem-
ises yield the debunking conclusion that said beliefs are unjustified.
This general structural schema allows us to develop a typology of
debunking arguments that distinguishes such arguments in terms of where
they locate the epistemic defectiveness of the cognitive process on the basis
of which certain moral beliefs are arrived at. A moral belief’s unreliability
can result, for instance, from a mismatch between the environment the underlying process has been tailored to deal with and the one in which it currently operates, or from that process’s hypersensitivity. In the second
chapter, I considered whether debunking arguments in general can be
defused by redirecting their target from our substantive first-order beliefs
to the metaethical second-order beliefs we have about those beliefs and
found this metaethical turn wanting.
In the following chapter, I will take a closer look at the further internal
adjustments that can be made to what I will refer to as the scope of the
descriptive and the depth of the normative premise. As far as their scope is
concerned, there are global debunking arguments, which are purported to
show that all of our moral beliefs are unjustified, and selective ones, which
are supposed to show that only a subset of them is. I will discuss various
prominent examples of global and selective debunking and consider
whether their viability depends on their depth, that is, how thoroughly
Debunking Contained: Selective and Global Scope
or superficially they debunk the set of beliefs at issue. Moreover, I will
consider whether global debunking arguments overgeneralize in problem-
atic ways and whether selective debunking arguments are either dialectic-
ally unstable or lead to a vicious infinite regress.
Demaree-Cotton () also notes that framing effects leave those judgments that go in the direction opposite to the one favored by the frame entirely unaffected.
For professional philosophers, this percentage increases slightly to %. This difference between
professional philosophers’ judgments and folk judgments is what Tobia, Buckwalter, and Stich
() are ultimately interested in.
Figure . Evolutionary debunking: Deep off-track debunking of M{all moral judgments} (plotted as depth against scope, each on a 0–10 scale)
Figure . Framing effects: Shallow symmetry debunking of M{moral judgments which are susceptible to framing effects} (plotted as depth against scope, each on a 0–10 scale)
Finally, disgust-based debunking is quite deep but obviously restricted
in scope (namely to disgust-based/purity-related moral judgments; see
Haidt, Rozin, McCauley, and Imada , and Inbar, Pizarro, Iyer, and Haidt ):
Figure . Disgust-based debunking (plotted as depth against scope, each on a 0–10 scale)
Target beliefs                          | Defect                                        | Scope               | Depth          | Distal/proximal | Debunked item | Upshot
Partial moral judgments                 | obsoleteness, hyposensitivity + inconsistency | selective           | medium         | distal          | process       | substantive
All basic evaluative dispositions       | off track                                     | global              | deep           | distal          | BE or process | metaethical
Noninferential moral judgments          | off track                                     | selective or global | shallow        | proximal        | process       | substantive
Conventional (Judeo-Christian morality) | ignoble origins                               | selective           | deep or medium | distal          | process       | substantive
Finally, there is an elective affinity between certain types of debunking
and certain levels of depth. Off-track debunking is most comfortably
combined with deep debunking; hypersensitivity debunking will typically
yield only somewhat deep, somewhat shallow debunking; and so on.
There are also elective disaffinities. Josh May (, Chapter ), for
instance, argues that there will almost always be a trade-off between what
I have referred to as the depth of a debunking argument and its scope.
When debunking is wide, it is shallow, and when debunking is deep, it is
narrow. For instance, May argues that the influence of incidental (such as
hypnosis-induced) disgust may well constitute an epistemic defect;
however, its actual causal contribution to the formation and content of
moral judgments is small. And when emotional responses do play a consti-
tutive role for people’s moral beliefs, such as when they fume with anger in
response to an egregious political unfairness, this affective involvement
does not constitute an epistemic defect. May holds that this “debunker’s
dilemma” is precisely what we should expect, given that moral judgments
are such a heterogeneous class. Moral judgments target different things,
such as action, agents, practices, or intentions; they employ different
standards, such as norms of justice, values of particular communities, or
sentiments; they can be about harmful events or rights infringements or
acts of bravery and sacrifice. It is simply unlikely that, for a varied and
complex domain like this, it would be possible to identify an integral yet
defective causal influence. The debunker’s dilemma, if successful, thus rules
out the most catastrophic damage. But note, also, what it doesn’t rule out:
deep and wide debunking may be awkward bedfellows, but there is
nothing about debunking arguments with medium scope and medium
depth that makes them hard to combine. For many debunking projects,
this is more than enough.
Table . illustrates the various possible permutations of debunking.
The most pressing question seems to be whether there are any successful
debunking arguments that are both deep and global. To answer this question,
let me look at the logic of selective and global debunking in more detail.
According to a different possible reconstruction, it is not clear that this argument should be classified
as a debunking argument at all. All Joyce needs is a queerness-style argument that objective
properties with practical clout are scientifically dubious. But this argument does not lead to any
debunking conclusion, which would be epistemic, but to a metaphysical one, and thus it supports an error
theory.
second, that there are no intermediate steps capable of turning garbage
into gold.
So-called by-product and extension explanations explore two lines of
response. On the first account, there may be a more general cognitive
faculty that it would be adaptive to possess and that, as a (welcome?)
by-product, enables us to acquire moral knowledge. Michael Huemer
asks:
[W]hy do we have the ability to see stars? After all, our evolutionary
ancestors presumably would have done just as well if they only saw things
on Earth. Of course, this is a silly question. We can see stars because we
have vision, which is useful for seeing things on Earth, and once you have
vision you wind up seeing whatever is there sending light in your direction,
whether it is a useful thing to see or not. Likewise, once you have intelli-
gence, you wind up apprehending the sorts of things that can be known by
reason, whether they are useful to know or not (, )
This more general cognitive capacity of which our moral judgments are a
by-product may well be reliable. Thus, the distal input our moral judg-
ments receive shouldn’t be considered garbage at all.
Some may find this hard, or indeed too convenient, to believe. Is there
any positive reason for thinking that the moral beliefs we hold now may be
the by-product of reason rather than selective pressures that are blind to
the moral truth? Some authors have tried to argue that certain features of
either the content or the trajectory of our moral beliefs cannot be
accounted for in evolutionary terms. For one, evolutionary explanations
of the emergence of moral norms place great weight on the extent to which
such norms facilitate cooperative chains and help solve collective-action
problems. But modern “subject-centered” morality that affords moral
status independently of people’s ability to participate in mutually benefi-
cial backscratching – for instance, to animals, children, the mentally
disabled, or future people – may remain inexplicable (Buchanan and
Powell ). For another, many aspects of moral progress, such as the
development toward a coherent egalitarian and individualist value system,
seem best explained in terms of the elimination of evolutionarily
entrenched biases. This may well be driven by the fact that for any given
historical time period, there is an overlap between the individuals who,
due to their superior cognitive abilities, are likely to reject arbitrary biases
and the people who, also due to their superior cognitive abilities, exert
the greatest social influence (Huemer ). This would explain how the
power of reasoning acquires social force by improving upon our flawed
inheritance.
Moral cognition can also be seen not as a by-product but as an extension
of reliable proto-forms of evaluative judgment:
[I]t was likely important for our Pleistocene ancestors to understand the
application of evaluative concepts in connection with relevant standards.
They needed to make accurate evaluative judgments about good and bad
dwelling places, or hunting partners, fighters, and mushrooms, and related
normative judgments such as that one ought not to eat the little brown
mushrooms or to fight with Big Oog. Moral judgments obviously go
beyond these sorts of things, but just as in the other cases, they can be
seen as an extension of such thinking. They still involve employing evalu-
ative and normative concepts in connection with standards and ends,
though now conceived as standards and ends defining what it is to live well
all things considered, rather than just narrow standards of edibility or safety.
[. . .] We discover the evil of racist voting laws, for example, by gaining
empirical knowledge about the irrelevance of race to what matters to
responsible voting, and by reflecting on the significance of such facts in
light of ongoing experience of human life and the possibilities of good
and harm it offers us, as part of forming a conception of what it is for
human beings to live well. Why should this sort of intelligent extension of
evolutionarily influenced evaluative judgment be thought any more prob-
lematic in principle than parallel extensions in other domains? (Fitzpatrick
, f.)
At this point of the debate, most participants are in agreement about the
fact that some evolutionarily bequeathed traits such as in-group favoritism
are indeed morally objectionable. Those who wish to resist global
debunking arguments thus have a tendency to dip their toes into selective
debunking. In order to assess its prospects, let me now look into a few of its
most promising examples.
Greene () refers to this as “The No Cognitive Miracles Principle”: “When we are dealing with
unfamiliar [. . .] moral problems, we ought to rely less on automatic settings (automatic emotional
responses) and more on manual mode (conscious, controlled reasoning), lest we bank on cognitive
miracles” ().
It can of course be doubted whether automatic responses are preferentially associated with what
Greene refers to as deontological moral judgments and controlled reasoning with what he refers to as
consequentialist judgments; in addition to that, it can be doubted whether either of those lines up
with what “deontology” and “consequentialism” pick out in moral theory. See Kahane et al. ()
and ().
It would be silly to deny that deontological moral philosophers do engage in a lot of conscious
reasoning about their moral intuitions, and Greene does not in fact deny this. Instead, he argues that
there are essentially two types of moral reasoning – intuition chasing and bullet biting (Greene ,
ff.) – and that deontologists are guilty of the former. It is frequently underestimated how much
of Greene’s debunking argument relies on the suspicion that deontologists are merely rationalizing
their gut reactions. However, it is very difficult to establish empirically that a subject is rationalizing,
because establishing this typically requires clear and uncontroversial criteria for what counts as bad
reasoning in a certain task. Whether deontological reasoning should count as bad, however, is
precisely what is at issue in this debate.
Egoism. Another type of selective debunking argument has been force-
fully proposed by Katarzyna de Lazari-Radek and Peter Singer ().
Their claim, in short, is that “universal benevolence survives the evolution-
ary critique” (). Their selective target thus isn’t deontology or noncon-
sequentialism more generally but M{partiality}, that is, the nasty habit of
privileging the moral importance of the well-being (or suffering) of oneself
and one’s nearest and dearest.
By universal benevolence, de Lazari-Radek and Singer mean the “nor-
mative truth that each of us ought to give as much weight to the good of
anyone else as we give to our own good” (), and they claim that it is
“difficult to see any evolutionary forces that could have favored universal
altruism of the sort that is required by the axiom of rational benevolence.
On the contrary, there are strong evolutionary forces that would tend to
eliminate it” (). Both kin selection operating at the level of the gene
(Dawkins ) and group selection à la Sober and Wilson () can
explain more limited forms of altruism favoring some sort of in-group. But
neither is able, nor claims to be able, to explain unrestricted sympathy
toward all sentient beings.
It is noteworthy that de Lazari-Radek and Singer emphasize that evolution cannot explain the existence of universal benevolence. For where do we find such all-encompassing sympathy? One may answer: nowhere,
really. This is not to say that universal, or at least far less provincial,
benevolence isn’t indeed a valuable ideal to strive for; it is merely a gentle
reminder of how rare genuine saints are and of how profoundly odd and
alienating it is to encounter their stories (MacFarquhar ). Moreover, it
is not clear that more recent models of gene–culture coevolution (Bowles and Gintis , Sterelny , Henrich ) aren’t in a position to explain why the fairly, though not perfectly, undiscriminating disposition to recognize the moral standing of all human beings as conditional cooperative partners would be favored by the ecological niche hypersocial humans have created for themselves.
But it is a different point that makes the selective debunking of egoism
eventually come undone. It is the universality of concern for suffering that
de Lazari-Radek and Singer claim evolution cannot explain, not the concern
for suffering bit. Impartial concern for suffering wouldn’t escape the evolu-
tionary critique if our basic evaluative attitudes toward the badness and
goodness of pain and pleasure, respectively, weren’t justified. Their selective
debunking argument conspicuously omits other evaluative beliefs, such as
“pain is bad,” from the list of examples Street and other global debunkers
deem to be off track due to their obvious evolutionary genesis.
Conclusion
I wish to conclude with an underappreciated point, which is that
debunking arguments may also undergeneralize or, to put the same
thought differently, that attempts to resist debunking arguments some-
times overgeneralize.
In some cases, we may want debunking arguments to work. It is
immensely plausible, for instance, that the fact that had any given Chris-
tian person been born in India, she would believe in the basic tenets of
Hinduism rather than Christianity, should give defenders of either religion
some pause (Bogardus ). But if debunking arguments do not succeed,
then such arguments may drop out of the picture as well. Suddenly, our
favorite debunking arguments against religious beliefs or magical thinking
have lost their force (Mason ).
This point, too, holds for substantive as well as metaethical targets.
Suppose we have found good grounds on which to reject the debunking of
metaethical realism. It may then turn out that those grounds apply just as
much to our judgments about what is disgusting or what is funny, which
few people would want to become realists about.
Perhaps arguments from disagreement can restore our faith in the power
of debunking, or at least supply the appropriate checks and balances on it?
In what follows, I will argue that some do, and some don’t.
Disagreement
Introduction
The argument from disagreement is arguably the most common challenge
to metaethical moral realism. People disagree about what morality
requires: some think that abortion reflects women’s basic right to choose
what to do with their own bodies, some consider it murder. Some see the
death penalty as a barbaric atavism, others deem it the only just response
to the most heinous crimes. Slavery is nowadays all but universally con-
demned. Awkwardly, the greatest scientist and philosopher of antiquity
eloquently endorsed it.
Moral realists hold that there are objective, mind-independent moral
truths. But this claim, it seems, does not comport well with the aforemen-
tioned fact of widespread synchronic and diachronic moral disagreement,
which is more readily explained in terms of differences in individual
preferences or cultural outlook that involve no essential reference to moral
facts or anything of the sort (Prinz ). The fact of widespread moral
disagreement thus debunks moral realism. Or so it seems: realists like to
reply to this relativistic challenge by pointing out that disagreement does
not entail relativity and that people disagree about all kinds of nonmoral
facts. Very few people take this to establish that there are no nonmoral
facts of the matter or that disagreements about, say, physics or astronomy
reflect mere differences in cultural frameworks.
For the most part, the extent to which disagreement bears on the
existence of moral facts has been discussed on the basis of armchair
methods. But in a recent influential article, John Doris and Alexandra
The classical, and historically most influential, statements of moral relativism can be found in
Herodotus’ Histories and Montaigne’s Of Cannibals. For an excellent assessment of these
statements, see Fricker (). For the modern discussion, see Harman (), Mackie (),
and Brink (). For powerful defenses of moral realism against arguments from disagreement, see
Huemer () and Enoch ().
Plakias () have put the issue under empirical scrutiny. Realists have
not denied the existence of moral disagreement as such; instead, they insist
that disagreement is relevant to their position only to the extent that it
cannot be defused, that is, explained in a way that renders it nonthreatening
to the existence of moral facts. One way to supply such an explanation is to
point out the obvious fact that disagreements may be due to the fact that
one of the disagreeing parties is simply wrong. Other, more sophisticated
defusing explanations invoke disagreement about nonmoral facts, irration-
ality, partiality, or differences in background theory as potential indirect
sources of moral disagreement compatible with realism. Doris and Plakias’s
strategy is to point to evidence from cultural and experimental social
psychology to identify cases of moral disagreement that are immune to
the aforementioned defusing explanations. The core of their argument
rests on the claim that disagreements regarding the appropriateness of
violence in response to insults and threats between people from the North
and the South of the U.S., for instance, cannot be explained in a realism-
friendly way.
This chapter is about the debunking of moral realism on the basis of
empirical evidence for moral disagreement. There are four sections. In
(.), I briefly describe Doris and Plakias’s empirical case against moral
realism and their claim that some moral disagreements can empirically be
shown to be fundamental. In (.), I argue that in order to pose a
challenge to moral realism, antirealists must find a case of fundamental
moral disagreement about normatively significant core issues. In the third
section (.), I show that the evidence for moral convergence about
normatively significant core issues is especially strong. In the fourth section
(.), I argue that the power of debunking arguments can be turned
against the empirical case for moral disagreement.
Doris and Plakias’s seminal () paper kicked off this discussion. For important responses and/or
amendments to their empirical argument from disagreement, see Leiter (), Sneddon (),
Fraser and Hauser (), Meyers (), and Fitzpatrick (). For a helpful summary of this
debate, see Alfano (), Chapter .
This position is typically, though not necessarily, combined with a cognitivist account of moral
judgment. See Kahane () for an illuminating discussion of this issue.
Debunking Realism: Moral Disagreement
disagreement is not as pervasive as antirealists allege it to be, and divergen-
tists, who accept the existence of widespread moral disagreement but aim
to show that it poses no real threat to the viability of moral realism. I will
follow Doris and Plakias in focusing on convergentist moral realism.
Convergentists maintain that moral beliefs converge under epistemically
improved conditions. Doris and Plakias draw on empirical evidence to
show that this is implausible.
In making their case against convergentism, Doris and Plakias rely on
various types of empirical evidence for purportedly fundamental moral
disagreement. One account they rely on especially heavily is Richard
Nisbett and Dov Cohen’s () fascinating study on psychological
attitudes toward violence in the Southern and Northern U.S. Evidence
from criminal statistics, legal decisions, lab experiments, and field studies
all points in the direction that Southerners are both more prone to violence
and more tolerant of it. Nisbett and Cohen attribute this tendency, which
is restricted to violence in response to threats, insults, and other violations
of honor, to the reputational demands of herding economies. In contrast to
Northern economies, which are (or were) predominantly based on farming
or trade, a herding economy is a high-stakes environment in which a
person’s entire assets could be stolen, which made it necessary for individ-
uals to convey that they would be willing to respond violently to threats.
That this cultural difference persists to this day is borne out by homicide
rates, people’s attitudes toward violations of honor (as indicated, for
instance, by elevated stress levels in response to insults), and public
expressions of those attitudes (e.g., by politicians or law enforcement). In what
follows, I will refer to Doris and Plakias’s attempt to bring this explanation
to bear on the relationship between moral realism and moral disagreement
as the culture of honor argument.
Realists and antirealists agree that not all disagreements threaten the
existence of objective moral facts. Suppose that you and I disagree about
whether we should impose higher taxes on carbon emissions. This dis-
agreement may be due to the fact that you do not care about the well-being
of future generations or that you do not think that we can owe them
anything because it is impossible to engage in reciprocal cooperation with
the unborn (Heath ). It may also, however, reflect that you simply do
Sometimes, the divergentist move can become almost dismissive, as when disagreements are brushed
aside as an “idiot’s veto” (Huemer , ff.).
Doris and Plakias hold that divergentism is unattractive for philosophical rather than empirical
reasons. I have my doubts about this but will set this issue aside for the purposes of this chapter.
Debunking Arguments in Ethics
Table . Defusing Explanations
not believe that climate change is real or doubt that human actions are
responsible for it. In the latter case, our apparent moral disagreement
turns out to be superficial. If we could reach agreement about the
nonmoral facts, our difference in moral opinion would vanish. Such
superficial disagreement does nothing to undermine moral realism. It is worth
noting, however, that whether a given case of disagreement counts as
superficial is largely an empirical question. As such, it is amenable to
empirical evidence.
The important thing, then, for getting some leverage on the existence of
mind-independent evaluative facts is to identify cases of so-called
fundamental disagreement. To put it differently, antirealists need to supply a
case of moral disagreement in which this disagreement cannot be
“defused” in such a way as to render it unthreatening to the prospects of
realism. And even though I will ultimately argue that Doris and Plakias
have failed to find the right kind of disagreement to serve as a genuine challenge
for moral realism, I do not wish to reject their list of defusing explanations
as flawed. Rather, I wish to supplement it with what is missing.
Doris and Plakias consider four possible defusing explanations for the
disagreements described in Nisbett and Cohen’s account of the Southern
culture of honor – factual disagreement, partiality, irrationality, and differ-
ences in background theory (see Table .) – but argue that none of these
explanations apply to the culture of honor example. (In the third section,
I will add a fifth type of defusing explanation to this list.)
It is implausible, they suggest, to suspect some hidden self-serving
(perhaps economic) motive behind Southerners’ irritability; the divide also
does not seem attributable to factual disagreement about what, for
instance, counts as an insult and what does not; and it does not seem
irrational or otherwise cognitively deficient to hold the described attitudes.
Cases of disagreement about the issue of climate change that are mainly due to such entirely factual,
nonevaluative disagreements are presumably rare. Typically, such disagreements more strongly
reflect instances of motivated reasoning; see Kahane ().
Doris and Plakias do grant that diverging background theories about
masculinity may play a role, although they are quick to point out that this
would not really make the disagreement any less fundamental. It seems,
then, that the culture of honor argument presents a powerful challenge to
convergentist moral realism.
What remains an open question at this point is how normatively central
the disagreement Doris and Plakias build their challenge on is and whether
this issue of centrality matters to how troubling a case of allegedly funda-
mental disagreement should be for the realist. To see what I have in mind,
suppose that we find that people from culture X have an extremely strong
attitude toward some normatively peripheral issue such as whether gifts
should be unwrapped in front of the person who brought the gift. They
think that instant unwrapping is the thing to do. Suppose, further, that
this attitude exhibits all the familiar marks of the moral: they want to see
people punished who don’t quickly unwrap; they feel intensely guilty for
forgetting to unwrap; and so on. Otherwise, however, their moral views are
pretty much the same as ours. We run this case through the aforemen-
tioned defusing explanations and find that none of them apply. I suspect
that realists would be rather less worried about this case than disagreement
about female genital mutilation, the obstruction of free speech, or the
exploitation of sweatshop labor. This suggests that the normative centrality
of the issue about which a disagreement is registered matters a great deal
to the relevance of that disagreement, even for the apparently metaethical,
rather than normative, question of whether there are any objective
evaluative facts.
I will develop this argument in more detail in the following section. For
now, the crucial thing to file away is that the success of the empirical case
against moral realism depends on whether we can identify moral disagree-
ments for which no realism-friendly defusing explanations can be offered.
Doris and Plakias claim that the culture of honor argument is precisely
such a case.
Leiter () argues that moral disagreement not between the folk but between professional
philosophers – think about the dispute between consequentialists and deontologists – is evidence
enough for the intractability of moral disagreement among the informed. He is right to note that
attempts to supply a defusing explanation in terms of differing “background theories” are especially
hopeless in this case, because these disagreements are already so foundational that there is no further
background to appeal to. But this strikes me as implausible, since the consequentialization debate
(Portmore ) shows where this type of disagreement is located, at least for the most part:
disagreements of this sort among philosophers are of a theoretical and explanatory nature, with
the resulting substantive moral commitments of the respective theories more often than not being
deontically equivalent. Reliably classifying the same acts as wrong does not, however, seem to be the
best example for a moral disagreement antirealists may wish to build their case on.
It is a legitimate question from whose perspective we should judge the normative centrality of a given
issue. Some cultures may think that women’s attire is very morally relevant, whereas others may
disagreements must be widespread (Meyers , ). I don’t disagree,
but I argue that this is not sufficient. There can be disagreements that are
both widespread, that is, cases about which many people disagree, rather
than a lonely loon disagreeing with the rest, and fundamental, that is, not
easily explained away. But note that there can be trivial moral disagree-
ments that satisfy both criteria, and disagreements about such trivial
matters may not bother realists all that much.
Fitzpatrick (, ) makes the following related point. Doris and
Plakias use the culture of honor argument to infer that if there is intract-
able and fundamental moral disagreement about an issue (i.e., the appro-
priate level of violence), then it is likely that there will, in general, be a
significant amount of intractable fundamental disagreement under ideal
conditions. But this inference is unwarranted, as their case only shows that
we have empirical reasons to think that some fundamental moral disagree-
ments can remain intractable even under ideal conditions, which is not to
say that others will do so, too.
How much disagreement there is is not completely irrelevant, of course;
but it is much less significant than is commonly thought. Suppose we
could show, for instance, that we can reach agreement about virtually all
matters of sexual morality. However, a pocket of disagreement persists
about the permissibility of one exotic and rare sexual practice (say “erotic
knife cutting,” Vogler , ff.). This disagreement persists even under
ideal conditions, and no defusing explanation can be offered to show that
it is nonfundamental. This would be a surprising discovery indeed. But
would it be a strike against moral realism? I suggest that it would not, due
to how peripheral and idiosyncratic the issue is. Consider, on the other
hand, a case in which two cultures cannot reach agreement about whether
slavery is immoral. Suppose this disagreement, too, persists under ideal
conditions and no defusing explanation can be offered. This would be a
problem for realism, because the issue is appropriately significant. The
challenge for empirically informed moral relativists is whether they can
identify a case of moral disagreement for which no defusing explanation
can be offered such that the content of the disagreement concerns an issue
that is important enough from a normative point of view.
think women’s attire is essentially morally neutral. In many cases, this “neutralization” can constitute
moral progress (Buchanan and Powell , ). But it points to an even deeper disagreement: not
just about what the right thing to do would be once the topics of moral relevance – sex, food,
clothing, faith, rights, and so forth – are agreed upon, but about which topics are of any moral
relevance to begin with.
One may argue that Doris and Plakias have met this challenge, as the
type of disagreement they have singled out concerns the appropriate level
of violence that should be accepted within a society. Violence and the rules
governing its expression, it seems, should qualify as a normatively signifi-
cant core issue in my sense. But notice that for two parties to disagree with
each other, they must (a) be talking about one and the same thing and
must (b) ascribe incompatible properties to said thing, such that if one is
right, the other has to be wrong. The disagreements found between
Northerners and Southerners, however, are of a different kind: though
Nisbett and Cohen found a statistically significant Northern/Southern
divide on whether, for instance, a man has the right to kill to defend his
family, most Northerners and Southerners agreed that a man had this right,
in violation of condition (b). They only differed with regard to how strongly
they thought so. Fraser and Hauser () claim to have found a clearer
example of nondefusable disagreement about normatively significant core
issues in the fact that a sample of rural Mayans did not see a difference in
moral relevance between actions and omissions. However, the claim that
this disagreement persists under epistemically improved conditions of full
information appears much weaker in light of the fact that more formally
educated Mayans did attach moral significance to the distinction (Fraser
and Hauser , ).
Some may think that what I ask antirealists to do simply cannot be
done. Arguments from charity in particular seem to show that there cannot
be a case of moral disagreement of the kind I ask the realist to supply
(Moody-Adams , Alfano , ff., Wong ). There may be
certain a priori constraints on how radical moral disagreements can pos-
sibly be. Charity suggests that there must be lots of background agreement
for us to be able to recognize a different framework as moral in the first
place. Wong (, f.) uses this point to argue that describing moral
relativism as being committed to the existence of radical moral disagree-
ments (two sets of completely different values rather than two sets of values
with considerable overlap and different priorities) is implausible, because
in the case of truly radical differences, members of neither moral community
would have reason to describe the other moral outlook as moral
In a recent paper, Brennan and Jaworski () describe many fascinating cases of disagreements
about when and under what conditions monetary exchanges are appropriate. Sex and sexual
relations is another issue that has a decent claim to being normatively central. The Merina
people, for instance, pay their wives for sex. In Western societies, this action would be strongly
disapproved of. But for the Merina, paying for sex is a way of expressing respect. When seen this
way, the disagreement loses its bite.
at all. This, it seems, establishes certain content restrictions within which
any reasonable relativism must operate.
But, first, even if arguments from charity did work, they would consti-
tute a significant strike against the possibility of radically different moral
outlooks – the very thing the existence of which many relativists seek to
establish. Second, the power of arguments from charity regarding con-
straints of what can count as a moral outlook at all is frequently overblown.
Consider the following thought experiment (Tiberius , ff.): the
Ilusians are a sophisticated alien species. They say “goo” and “ba” in
response to things that we would classify as beautiful and ugly, respect-
ively, such as paintings and sunrises; and they use “beaut” and “ugli” for
things we classify as morally good or bad, such as shoplifting and tax fraud.
However, the functional role of those concepts is surprising: the Ilusians
think that people who do ba things, such as shooting an amateurish film,
should be punished, but those who do ugli things, such as being cruel to a
child, are just being eccentric or tacky. They also feel guilty about doing ba
things and don’t want their children to do them but can’t bring themselves
to treat ugli things with the same rigor. Tiberius submits that we should
translate ba rather than ugli with our word wrong despite the radical
disagreement about the content of Ilusian “morality.” As long as morality
maintains its distinctive functional role, there can be a lot of disagreement
about its content. But let me emphasize that this is a conceptual possibil-
ity. Moreover, it is an interesting fact in its own right that no Ilusian-like
species – that is, intelligent creatures with a moral system totally unlike our
own – has ever been found on this planet.
Remember that Doris and Plakias claim that none of the traditional
defusing explanations (partiality, irrationality, and so forth) for moral
disagreements apply to their culture of honor argument. But is this true?
Suppose Nisbett and Cohen’s explanation for why we find the cultural
differences they describe is correct. If sensitivity to insults is explained by
the respective need to defend oneself and deter others via credible threats
in the American South, doesn’t this point to what is often described as
“circumstantial disagreement” (Timmons )? Meyers (, f.)
notes that “Eskimos” may endorse patricide not as a matter of fundamental
moral disagreement but as a response to scarce resources and harsh living
conditions. The culture of honor argument constitutes a similar case.
Importantly, however, Southerners no longer inhabit the conditions that
used to render their attitudes and behavioral patterns justified, and this
shows that there are some grounds on which these attitudes and behaviors
can be classified as unwarranted. This point comes close to a type of
debunking argument advanced in other contexts, which aims to show
that certain of our moral beliefs are normatively inadequate because they
tend to misfire under modern social conditions they have not been shaped
to deal with (Singer , Greene ). I will return to this issue in what
follows.
Antirealists may reject this. They may argue that in order to pose a
challenge to moral realism, any instance of fundamental moral disagree-
ment is enough; we do not need to find moral disagreement about
normatively significant core issues, because any difference in moral opin-
ion, however small or trivial, will do, as long as it can be shown to be
fundamental. But if they do so, they make themselves vulnerable to various
other explanations of moral disagreements that are compatible with
realism.
For one thing, there is the issue of scope. Doris and Plakias consider a
position they refer to as “patchy” realism, which is the view that some parts
of moral discourse admit of a realist description, while others do not.
Compare discourse about health: there are some aspects of health about
which there is virtually no disagreement – broken bones and heart attacks
come to mind – and some areas about which there is quite a bit. Patchy
realism about health would then suggest that the former should be
accounted for in realist, the latter in antirealist terms. They note that “it
is an interesting ‒ though underexplored ‒ question whether this patchi-
ness is a comfort to the realist or the antirealist: how much in the way of
realism-apt areas of moral discourse must be confidently identified before
the realist can declare victory? And how are the boundaries of these patches
to be demarcated?” (). There is an obvious though perhaps unsatisfying
answer to this question: since realism is the view that there are mind-
independent moral facts – that is, at least one – the realist will be comforted
by finding even a single moral fact. In a way, then, for antirealists to concede
the possibility of patchy realism, as if it were an open question whether
patchy realism would be more welcome to realists than to antirealists, is
highly misleading. Patchy realism is realism. On the other hand, the very
question of “how much in the way of realism-apt discourse” will suffice for
realism to prevail seems odd. Would realists be satisfied if we found that
there is almost no disagreement about the majority of moral issues, except
the important ones? I doubt that they would. The only moral facts that will
offer real comfort to the realist concern normatively significant core issues.
This is not an outlandish view to hold, either: utilitarian moral realists think that there is only one
nonderivative evaluative fact, namely about how much pleasure and pain the world contains.
For another thing, there is the issue of weight. Meyers () argues that
if a Ross-style pluralism of prima facie duties is correct, then the existence of
the type of disagreement Doris and Plakias describe is unsurprising. What
we should expect is cross-cultural agreement on general prima facie prin-
ciples (not to harm others, not to break promises, and so on); this type of
agreement, however, is compatible with dramatic disagreement about
all-things-considered moral judgments, because Ross(ians) are happy to
admit that how to translate a shared appreciation of pro tanto reasons into
judgments about all-things-considered rights and wrongs is a difficult,
noncodifiable matter that requires the capacity of judgment. This capacity
can be exercised rather differently, depending on how much weight
different subjects attach to different prima facie duties when confronted
with specific cases with moral content, which leads to reasonable disagree-
ments about how one ought to act, all things considered.
Third, there is disagreement about degrees of wrongness. This issue is
easily illustrated by focusing on cross-cultural disagreement about the
“Magistrate and the Mob” case – is it permissible to punish an innocent
man to prevent a mob from rioting? – on which Doris and Plakias also
place a lot of weight. American subjects seem to think that punishing the man is less
permissible than Chinese subjects think. But the difference is one between
average ratings of . and . on a seven-point Likert scale. Since this
“disagreement” fails to straddle the midpoint of the scale, it is highly
questionable whether it should count as genuine disagreement at all. As
mentioned, Fraser and Hauser () suggest that the culture of honor
argument is simply not the best case for fundamental disagreement,
because it represents disagreement about degrees of wrongness rather than
actual disagreement between two parties, one of which judges an action to
be permissible, the other impermissible.
Finally, there is the issue of content. Leiter () argues that the
evidence does not show that Southerners think violence is more permis-
sible. What the evidence shows is that they are more likely to act violently
and more likely to excuse violent behavior. But standards of wrongness are
Incidentally, this argument may also explain disagreement between professional moral philosophers
that Leiter (see footnote ) is impressed by: philosophical expertise may lead to a kind of
professional myopia in which philosophers, as specialists about only one type of moral
consideration (experts on the ethics of promising or on the ethics of helping), irrationally
discount the others.
In a footnote, Doris and Plakias note that these effect sizes are “typical” () for social psychology.
But the fact that they are typical does not make them strong, so perhaps this point is more of an
indictment of the discipline of social psychology than of moral realism.
different from standards of blame, and agreement on moral judgment is
compatible with differences in motivational implementation.
Regardless of whether they are persistent under idealized epistemic
conditions, realists would be unimpressed by these cases of disagreement,
and rightly so. But now suppose that we could identify cases of moral
disagreements that are not just persistent under such purged conditions
but are, in addition to that, not merely about degrees of wrongness, cannot
be explained in terms of different weightings of pro tanto reasons, and so
forth. In that case, I suggest, realists would be entitled to remain unim-
pressed, provided that the only cases we were able to dig up concern the
permissibility of erotic knife play, whether to wear brown shoes after six,
and other peripheral issues.
My argument thus suggests that the right kind of disagreement to serve
as a suitable challenge to moral realism must not merely be nondefusable.
The right kind of disagreement has two aspects: one has to do with what
explains the existence of the disagreement, while the other concerns what
the disagreement is about (see Table .).
In his account of the implications of moral disagreement, Alfano (, ff.) makes a different
but related move. He, too, argues that not just any case of moral disagreement is sufficient to
establish an interesting form of moral relativism. First, he agrees that fundamental moral
disagreements may not be defusable. Second, he suggests that fundamental disagreements need to
be modally robust, that is, not easily and quickly resolved. Disagreeing parties, after hearing each
other’s argument, should not immediately back down and accept the other party’s judgment. There
is something to this suggestion, though I wouldn’t want to take it too far: it seems uncomfortable to
build a certain degree of unwillingness to give up one’s values – that is, a failure of critical thinking
and intuitive override – into one’s account of moral disagreement. Cases of genuine moral
disagreement that are not modally robust are presumably rare, but that doesn’t mean that
fundamental moral disagreement has to be modally robust to qualify as moral disagreement at all.
Third, he suggests psychological depth. Moral disagreements should concern something
“psychologically deep” rather than peripheral. I agree, but I prefer not to cash out what is at issue
here in psychological terms, that is, in terms of how central something is to an individual’s beliefs
and values, but in terms of what the normatively significant core cases in fact are. Disagreements
about wanton cruelty, however deeply entrenched in a person’s psychology they are, are in fact a
normatively significant core issue; disagreements about whether to wear brown shoes after six,
though of monumental importance to some, are not.
Table . Content of the Right Kind of Disagreement
strong for normatively significant core issues. The empirical record sug-
gests that over time, informed people converge on a specific type of moral
outlook, involving robust agreement on a number of normatively signifi-
cant core issues such as sexual morality, punishment, violence, or equality.
It may be that we do not find convergence across the board. But in most
cases, what people do not converge on are peripheral issues that function as
pockets of moral stagnation precisely because they are peripheral – the
rational and social pressures to converge on idiosyncratic moral beliefs
are low.
Now that we have the core question in sharp focus – is there fundamental
disagreement that is (a) persistent under ideal conditions, (b) not
amenable to a defusing explanation, and (c) concerned with normatively
significant core issues? – we can look for empirical evidence for convergence.
Before I proceed, let me say more explicitly what I mean by normatively
significant core issues. By normatively significant core issues, I mean the
norms and values that the vast majority of people across the political
spectrum consider to be of central moral importance (Graham, Haidt,
and Nosek ) and that played an essential justifying, motivational, or
orienting role in most or all of the major moral and political revolutions in
human history. These norms and values cluster around three major themes
(see Huemer ):
() Moral egalitarianism: recognition of the equal moral status of all
persons
() Normative individualism: respect for the rights and dignity of indi-
vidual persons
() Opposition to gratuitous violence
An important aspect of the moral and political revolutions I talk about here is the notion of an
“expanding circle” (Singer ): an increasing recognition of the personhood not just of a narrowly
defined class of people (e.g., white male landowners) but of all sentient beings.
These three themes sum up the developments we have witnessed, with
increasing momentum (though not without major relapses), over the past
or so years. These developments include a sharply declining homicide
rate, a smaller number of less deadly wars (adjusting for population size),
increasing democratization, and an expanding opposition to arbitrary
discrimination on the basis of gender, ethnicity, or creed. All of these
can be classified as normatively significant core issues in my sense.
It remains plausible that there will not be a lot of convergence on many
other issues regarding how to live. What people should wear or how they
should eat are things that remain diversely practiced across the globe. Now
one may say that these are simply not moral issues (and I personally agree).
But we must also accept the fact – and relativists should be especially
hospitable to this idea – that in a lot of cultures, these things are considered
to be part of public morality. It may already be due to a particular Western
perspective to say that these things are not genuinely moral. Partly because
of this, the case for convergence will always be somewhat mixed.
The positive case for convergence is particularly strong for the norma-
tively significant core issues I have in mind. In fact, it seems to me that the
case for convergence on these matters is simply overwhelming. Informed
and educated people agree with a particular conception of morality – the
broadly “liberal” one just sketched – pretty much without exception.
When people reason, they consistently revise their beliefs in that direction
(Sauer ).
Pinker () is even more optimistic when he notes that we can
observe six major trends of moral convergence throughout the course of
human history: the transition from the anarchist warfare of hunter-
gatherer societies (the Pacification Process), the functional integration of
small-scale communities into larger sociopolitical units (the Civilizing
Process), the abolition of politically administered and socially approved
forms of violence and cruelty (the Humanitarian Revolution), the “Long
Peace” after World War II, the “New Peace” since the end of the Cold
War, and the increasing social and legal recognition of minority rights (the
Rights Revolution).
Buchanan and Powell ( and ) add that such “inclusivist”
developments are difficult to account for in evolutionary terms.
One may think that this is question begging, because moral criteria may well have influenced my
selection of people whose agreement counts. But I simply do not see how the selection could be
done differently. Who else should serve as a real-life test case for what people would believe under
idealized epistemic conditions than the educated and informed?
The convergence we observe here seems to be a genuine case of moral
progress. Moreover, our evolved nature seems to impose very few limita-
tions on what further progress can be achieved. If cultural and cognitive
changes have enabled us to move beyond moral norms exclusively
governing and facilitating reciprocal cooperation in small groups – notably,
at the expense of intergroup cooperation (Greene ) – and toward a
moral outlook based on an equal and universal respect for persons,
whether they can benefit or threaten us, then it is hard to predict what
other moral marvels the plasticity of human beings will eventually make
possible (Prinz ).
Two points are especially noteworthy about the social and political
developments just mentioned. For one, they do not seem to be attrib-
utable to any sort of identifiable bias: in fact, they exhibit all the signs of
a rejection of known biases, such as partiality toward the nearest and
dearest and hostility toward members of the out-group (Huemer ).
For another, these developments do not appear to be incoherent or
fragmented. Humanity does not seem to be randomly pushed around by
contingent forces. Instead, the convergentist developments we observe
all seem to flow from, or at least be consistent with, a coherent moral
outlook (Huemer ) that emphasizes the equal moral status of
sentient beings.
Let me return to the culture of honor argument from this perspective.
To claim that differences between Southerners and Northerners are
fundamental is to hold that they are unlikely to be swept up by the
dynamic toward progressive convergence just described. Fitzpatrick’s
(, ff.) defusing account of the culture of honor argument is
skeptical of this claim. His suggestion is that the cases mentioned by
Doris and Plakias would only provide evidence for intractable moral
disagreement if we could actually put people in idealized conditions by
engaging their System II capacities of reflective correction, which is
(roughly) the method moral realists think gives us access to moral facts:
“I am claiming that the cross-cultural data that Doris et al. cite do not,
as presented, and without further argument, support an empirical case
against the realist convergence conjecture. This is because the moral
judgments at issue are plausibly not ones that have withstood application
of the kind of method of moral inquiry that realists claim will lead to the
elimination of (most) moral disagreement under ideal conditions.
Hence, they plausibly do not put the convergence conjecture to the
kind of empirical test that Doris et al. claim.” () But what kind of
inquiry could this be?
Debunking Arguments in Ethics
the original economic rationale for their attitudes no longer obtains, they
should and perhaps would revise their beliefs. However, one might
immediately start to worry that all of our moral intuitions have some
weird and embarrassing causal pedigree. Appropriately deployed,
genealogical accounts of morally salient beliefs and attitudes could lead anyone
to waver. The debunking argument I have sketched thus seems to block
relativism only at the price of moral nihilism: there are no moral
disagreements anymore, but only because all moral beliefs have been
debunked. Moreover, it would be unfair to deploy such genealogical
accounts against only one side of a given disagreement.
However, this worry can be taken care of by looking at the specifics of
the case at issue. Suppose we genealogically explain why Southerners feel
the way they do about insults. Suppose, further, that we can supply a
comparable explanation for Northerners’ attitudes: the different economic
conditions their culture evolved in made them less responsive to insults
and threats. So far, the situation is entirely symmetrical. But notice how
things change when we point out that (a) Southerners nowadays live in
conditions very different from those that used to render their dispositions
justified and that (b) other social costs may be associated with honor
cultures. This is to say that the case for Southern attitudes has disappeared,
while a new case against those attitudes has emerged. No such thing can be
said about Northerners, who live under conditions in which a more
lenient attitude toward insults continues to be sensible.
What, then, is the right kind of disagreement? Let me sum up.
Traditional defusing explanations have identified the following desiderata.
A similar criterion is proposed by Katsafanas ().
Thanks to Regina Rini for pressing me on this issue.
Fundamental moral disagreements should not be due to (i) disagreement
about nonmoral facts, (ii) partiality, (iii) irrationality, or (iv) differences in
background theory. To this list, I have added a fifth item according to
which disagreements need to be (v) stable under reflexive genealogical
scrutiny. In addition to this nondefusability constraint, I have argued that
there are certain content constraints on what may count as the right kind
of disagreement. Such disagreements should not be about (i) degrees of
wrongness, (ii) how to weigh competing duties, or (iii) standards of blame
rather than wrongness. Instead, they should be about (iv) a wide enough
part of moral discourse (i.e., not [too] “patchy”) and (v) normatively
significant core issues.
Conclusion
Like the second, this fourth chapter was about the debunking of
metaethical moral realism. According to the evolutionary challenge explained
earlier, realism leads to skepticism, because it cannot explain both how
our evaluative tendencies can be shaped by evolution and how, given this
fact, we could be in a position to acquire any knowledge of the moral facts.
The argument from disagreement discussed in this chapter has the
structure of an inference to the best explanation: the best explanation for
widespread moral disagreement is that there is no “real,” universal morality
for people to agree on – custom, and only custom, is king. This debunking
of moral realism is sharpened by empirical evidence for fundamental moral
disagreement: evidence suggesting that there are moral disagreements that
cannot be defused by pointing out that they actually originate in
disagreements about the underlying nonmoral facts or mere irrationality.
I explained why this argument depends on identifying the right kind of
disagreement, that the empirical evidence for convergence might be
stronger than it seems, and that genealogical debunking arguments play
an important role in pushing for increasing moral convergence in the
future. In the next chapter, I will address the issue of debunking arguments
from disagreement from a normative rather than a metaethical angle.
Introduction
The emerging field of political psychology brings the tools of moral
psychology to bear on the issue of political disagreement. It aims to
debunk such disagreements by suggesting that the main conflict shaping
politics today can be explained in terms of people’s moral foundations
(Graham, Haidt, and Nosek ; Haidt ; Graham, Haidt, et al. ;
cf. also Fiske and Rai ): progressive liberals, it is argued, view
society as consisting of separate individuals with differing values and life
plans, whereas conservatives rely on a thicker notion of political morality
that includes traditions, communities, and values of purity (Graham and
Haidt ).
Moral Foundations theory debunks political disagreements by
psychologically explaining away their rational basis. In this chapter, I explore the
normative implications of this theory. Moral Foundations theory doesn’t
debunk political disagreements tout court; rather, this debunking strategy
has a certain direction: in particular, I will argue that its proponents take it
to support an asymmetry of understanding: if deep political disagreements
reflect differences in people’s moral foundations, and these disagreements
cannot be rationally resolved, then overcoming them makes it necessary to
acknowledge the moral foundations of the other side’s political outlook.
But conservatives, the theory suggests, already do acknowledge all of the
liberal moral foundations and not vice versa. To overcome partisanship and
the resulting political deadlock, then, it seems to be up to liberals to move
closer toward the conservative side and not vice versa.
I wish to analyze what the argument for this asymmetry is and whether
it holds up. In the end, I shall argue that the available evidence does
support an asymmetry but that it is the opposite of what Moral
Foundations theorists think it is. There is such an asymmetry – but its burden falls
on the conservative side.
This chapter has five sections. I will start with a brief recapitulation of
the basic outlines of Moral Foundations theory (.). In the second
section (.), I will explain how this theory is supposed to support the
Asymmetry of Understanding just outlined. One of my main aims in
this section is to show that Moral Foundations theory only yields the
desired asymmetry when combined with a related, but more general,
account of moral judgment and reasoning – the Social Intuitionist
model. Unlike Social Intuitionism, Moral Foundations theory has
attracted virtually no attention in philosophical circles. A further aim
of this chapter is to remedy this situation and to show how the two
models are intertwined.
In the following three sections, I will argue that Social Intuitionism
cannot be used to support Moral Foundations theory and that, therefore,
the Asymmetry of Understanding no longer follows. In (.), I argue that
Social Intuitionism has problems of its own, problems that make it
difficult for the model to complement Moral Foundations theory in the
intended way. Section (.) shows how the very evidence that casts doubt
on the former account also undermines the latter.
The third section is about how and to what extent moral judgments are
amenable to reasoning; the fourth section shows that this amenability
selectively affects only some, namely the conservative, moral foundations.
Finally, the fifth section (.) shows that irrespective of the independent
plausibility of these approaches, the combination of the two is unstable.
Moral Foundations theory needs support from Social Intuitionism but
cannot get it. I conclude with a brief assessment of which aspects of Moral
Foundations theory continue to be attractive for political liberals and argue
that the difference between liberals and conservatives cannot be found in
which foundational moral emotions they are susceptible to but in which of
them they grant independent moral authority.
For the sake of brevity, I gloss over many of the more complex aspects of the theory. For the most
comprehensive statement of the account to date, see Graham, Haidt et al. (). In this paper, the
authors explain many of the details regarding how moral foundations are psychologically
implemented or what their evolutionary backstory is.
Now – why is this supposed to follow from the basic outline of
MFT sketched already? The argument can be reconstructed in the
following way:
() Political disagreement is based on differences in people’s moral
foundations. (central claim of MFT)
() Mutual understanding can only be achieved through an appreciation
of the moral foundations of the other side’s political beliefs.
(empirical observation)
() Conservatives already do appreciate the moral foundations of the
other side’s political beliefs. (central claim of MFT)
() Therefore, mutual understanding depends on liberals’ appreciation
of the moral foundations of conservatives’ political beliefs.
In his recent book-length treatment of the issue, Haidt () frames this
argument as follows. When it comes to dealing with political
disagreement, “the obstacles to empathy are not symmetrical. If the left builds its
moral matrices on a smaller number of moral foundations, then there is no
foundation used by the left that is not also used by the right” ().
In an earlier paper, Haidt argues that “speakers, politicians, and opinion
leaders should emphasize the common moral ground that can be found.
The ethics of autonomy are clearly shared by all Americans, but liberals
will have to reach beyond this in some way to defuse the fear that
conservatives have of a purely harm-based or rights-based morality” (Haidt
and Hersh , ). Together, these quotes suggest that liberals fail to
acknowledge the majority of legitimate moral foundations, thereby
obstructing mutual understanding. Conservatives understand the liberal point of
view, but not the other way round. Because liberals’ refusal to appreciate
the conservative point of view is more or less arbitrary, the burden to
“reach beyond” their overly narrow moral outlook is on them.
Haidt is in good company with this proposal. Fiske and Rai’s ()
relationship regulation (RR) theory – which shares many features with
MFT, holding that human moral psychology comprises four foundations
(unity, hierarchy, equality, proportionality) – makes a similar point:
“The strength of RR is that it illuminates the fact that some judgments and
behaviors, such as those related to violence toward others and unequal
treatment, which we may view as prescriptively immoral and which some
have described as resulting from nonmoral, selfish, and social biases, can
reflect genuine moral motives embedded in social relationships. What
makes these practices seem foreign to us and sometimes abhorrent is that
different groups or cultures understand otherwise identical situations with
reference to different social-relational models [. . .]. [. . .] This raises serious
questions about the ways in which the natural foundations of morality can
be used as rationales for judging cultural practices we intuitively believe are
immoral. If some prescriptively “evil” practices in the world are facilitated
by the same moral motives that lead to prescriptively “good” outcomes, we
cannot blind ourselves to this truth.” ()
The main idea is that liberals are making a conceptual and perhaps even
a moral mistake: when contemplating why conservatives think the way
they do, they only see two options – conservatives must be stupid or evil.
Stupid: because they share liberal values but are too intellectually
incompetent to see what those values entail. Or evil: because they are intellectually
competent enough to see what would be morally right but decide to
support the wrong thing anyway because it serves their (presumably
material) interests. Graham, Haidt, and Fiske suggest that liberals do not
realize that the conservative point of view is based on differences in
moral foundations. If liberals started to see this, they would come to realize
that conservatives are neither stupid nor evil. As a result of this, they should
come to better appreciate their political point of view.
Before I proceed, let me distinguish between two senses of “appreciate”
here. I will refer to them as strategic and genuine appreciation, respectively.
The strategic appreciation of conservative moral foundations occurs when
liberals deliberately frame their concerns in terms that will resonate with
the conservative but which they would not prefer otherwise. One
instructive example of this is climate change, where it can be shown that even
though political conservatives are less likely to endorse measures against (or
even believe in the reality of!) climate change, their position can be shifted
in the direction of such measures if their importance is described in
conservative terms, for instance, as the pollution of a God-given nature
(appealing to the foundations of purity and authority) rather than in terms
of the suffering of animals and plants (appealing to harm; Rossen et al.
). This strategic sense is not what I have in mind here. The
Asymmetry of Understanding (AU) requires something different, and more: a
genuine appreciation of conservative moral foundations by liberals would
entail an acceptance of a set of considerations as morally relevant that were
previously deemed morally irrelevant, or perhaps even downright immoral.
Liberals tend to see the community foundation as a source of
parochialism, authority as a source of submissiveness and obedience, and a concern
for purity as the basis of sexual repression and prudery. To genuinely
appreciate the relevance of those foundations would mandate significant
changes in the content of liberals’ moral beliefs.
Note, however, that the premise in the argument for AU that demands
such an appreciation (namely ()) is far from innocent and indeed highly
controversial. It might be that as an empirical statement, () is not entirely
inaccurate. Perhaps it is true that when reaching agreement is one’s
primary goal, regardless of whether the thing that is agreed upon is
sensible, it really is easiest to achieve this goal when people do not insist
on their moral beliefs (again, regardless of how true and/or justified they
might be) but are willing to compromise and meet the other side halfway.
But it is far less plausible to assume that, when the primary goal of
deliberation is to determine which political course of action is best or which
policy proposal is most justified, this is what people ought to be doing. This
is especially clear when the issues at stake – such as whether people ought
to have the right to marry a partner of the same sex or decide to have an
abortion – are of great significance. Here, it seems advisable not to make
compromises just for the sake of consensus but to go for the option that
actually has the most to recommend it.
One possible reply to this, of course, is that in the political realm, it
simply makes no sense to look for the “best” or “most justified” option in
the first place: “I’ll set aside the question of whether any of these alternative
moralities are really good, true, or justifiable. As an intuitionist, I believe it
is a mistake to even raise that [. . .] question” (Haidt , ). That is
why political disagreements that are ultimately based on incommensurable
moral foundations should not (and cannot) be resolved with facts and
rationality but with mutual understanding and reconciliation as a guiding
standard. For premise () in the argument for the asymmetry to go
through, then, it must be assumed that insofar as conflicts between
people’s moral beliefs stem from conflicts at the foundational level, there
is no such thing as a political belief that is better justified or more “correct”
than another. Conflicts between the moral intuitions people’s political
beliefs rest on are not amenable to reasoning. This is not to say that Haidt
advocates moral and political relativism. His position is more accurately
described as a pluralist one; the source of political disagreement need not
be traced back to values that are relative to one culture rather than another
but to intraculturally competing values for which reason cannot supply a
hierarchical order of relevance (Flanagan ).
This is in itself anything but an empirical claim, but there is some
empirical support for it. It might turn out, for instance, that careful moral
reasoning has no significant influence on which moral and political beliefs
subjects endorse. This happens to be the main claim advanced by the Social
Intuitionist model of moral reasoning (Haidt ; henceforth: SI model).
If this model is correct, then we might well be willing to accept () as the
best available strategy for dealing with the issue of political disagreement.
The argument sketched above is thus incomplete and depends on a
number of further assumptions:
First Step
(’) Moral judgments are based on arational, emotionally charged
intuitions. Reason and moral reasoning play no significant role in
determining which moral beliefs subjects endorse. (= SI)
(’) Political disagreement can be traced back to differences in people’s
moral intuitions. (= MFT)
(’) Therefore, reasoning cannot resolve political disagreements. (from SI
and MFT)
Second Step
() Therefore, mutual understanding can only be achieved through an
appreciation of the moral foundations of the other side’s political
beliefs. (empirical hypothesis supported by ’, ’, and ’)
() Conservatives already do appreciate the moral foundations of the
other side’s political beliefs. (central claim of MFT)
() Therefore, mutual understanding depends on liberals’ appreciation of
the moral foundations of conservatives’ political beliefs. (from to )
This is the argument I wish to reject. Note that (’), the conclusion of the
first step in the argument, omits an important qualification, namely that
Haidt’s model does allow reasoning to change people’s judgments and
resolve disagreements – namely, when such reasoning runs through other
people (hence, Social Intuitionism). However, the way this is supposed to
work is by way of persuading others with intuitively compelling rhetoric
rather than providing rationally convincing moral and empirical
argument. The thing to realize at this point is that the Asymmetry of
Understanding depends on the Social Intuitionist model of moral
reasoning. This is important because in what follows, I will argue that
there are three main reasons SI cannot be used to support the asymmetry.
Haidt describes this mechanism (link ) as follows: “Because people are highly attuned to the
emergence of group norms, the model proposes that the mere fact that friends, allies, and
acquaintances have made a moral judgment exerts a direct influence on others, even if no
reasoned persuasion is used. Such social forces may elicit only outward conformity [. . .], but in
many cases people’s privately held judgments are directly shaped by the judgments of others [. . .]”
(, ).
(i) The claim that reasoning has no formative influence on subjects’
moral beliefs is empirically unsupported. Moral reasoning can be shown to
have a significant impact on subjects’ moral intuitions.
(ii) The very evidence that leads to a rejection of SI (see (i)) also casts
doubt on central aspects of MF theory. When it comes to the content of
people’s moral convictions, moral reasoning selectively undermines only
those intuitions that are grounded in the “conservative” foundations that
go beyond harm and rights (i.e., purity, authority, and community).
(iii) SI and MF are conceptually incompatible, because the evidence for
SI implicitly assumes that the considerations MF classifies as morally
relevant are not morally relevant. Thus, whichever of the SI model
and MF theory is true, they cannot both be true at the same time. I will
discuss the first two issues only briefly. The third point will be dealt with
in more detail.
The crucial step in the argument happens in (). If true, this claim
would have profound and far-reaching implications for how political
discourse in democratic societies ought to be conducted. Consider a few
examples: suppose there is political disagreement about how to respond
to the issue of climate change. Some propose that emissions should
be curtailed, that, where necessary, preventive measures ought to be
adopted, and that funding for research and development for fighting
the short- and long-term ramifications of climate change should be
raised. Others, however, refuse to believe that the problem exists at all
and suggest instead that nothing needs to be done. This second take on
the issue might very well be fueled by an aversion to governmental
intervention or the feeling that secretive and wasteful elites who are up
to no good want to dictate people’s lives. And those mid-level political
beliefs, in turn, might well be grounded in moral foundations
concerning the appropriate role of authority or liberty that might support said
positions.
It would be surprising, to say the least, if this type of disagreement were
best resolved by urging the well-meaning and -informed to appreciate the
moral foundations of the other side’s point of view. But this is precisely
what the Asymmetry of Understanding asks us to do. On this view, political
disagreements cannot be addressed through informed political deliberation.
Instead, liberals ought to recognize the moral motivation behind the other
side’s beliefs and “reach beyond” their “WEIRD” (Henrich et al. )
moral outlook. I wish to argue that this is a recommendation we would
be ill advised to follow.
Debunking Conservatism: Political Disagreement
Or consider reproductive rights: here, some maintain that, provided
certain conditions apply, women should have the right to end a pregnancy,
or that contraception is a public health issue for which expenses deserve to
be covered by various insurance plans. Others, again, find both of these
proposals abhorrent and view them as offensive either to the religious views
they happen to hold or to their standards of sexual decency. Here, too, it
would be surprising, and most likely very dangerous, for policy makers to
take into consideration the moral foundations of the latter group’s beliefs.
But this is precisely what is suggested by (): when engaging in political
deliberation, it is futile to reason with people, because reasoning doesn’t
change people’s moral beliefs and the foundations they rest upon.
A lot hinges, of course, on what exactly it means to “appreciate” a moral
foundation, to “take it into consideration,” or, in Haidt’s own words, for
liberals to “reach beyond” their own narrow set of concerns. If this claim is
supposed to be more than rhetoric, it must mean that liberals ought not
just to strategically pretend that they understand the other side but to
employ conservative moral foundations in a way and to an extent that is,
though perhaps not identical, at least comparable to the manner in which
conservatives recognize the foundations liberals take to be the only relevant
ones. But moral foundations have rather general content, and thus
appreciating the foundation of, say, purity cannot simply consist in
recognizing in abstracto that purity has ethical relevance; it must consist
in treating the moral and political intuitions that tend to be produced by
this foundation as pro tanto justified. This would make a genuine
difference to the content of liberals’
political convictions, because for example, regarding the issue of abortion,
liberals could no longer file the conservative position under
“psychologically understandable but morally irrelevant” and would have to come to
treat it as “morally relevant, though perhaps not decisive.”
The crucial thing to realize here is that when we start appreciating a
consideration as relevant, we start to treat it as pro tanto reason giving. That
is why a genuine, nonstrategic appreciation of moral foundations such as
loyalty, community, and authority does not just provide the liberal with a
richer set of considerations to appeal to for justifying her already existing
moral and political convictions, leaving the content of those convictions
unaltered. When one starts to acknowledge something as a potential source
of reasons (for action or belief), one’s actual commitments will, insofar as
the bearer of those commitments is reasons-responsive, undergo some
changes. Suppose that I, a former atheist and now converted believer, have
come to treat revelatory sources (which I used to dismiss as evidentially
useless) as containing legitimate information. At least in the long run, this
will result in presumably rather drastic changes to what I end up believing,
because what I end up believing is now partly determined by my
appreciation of the content of said sources.
This, I take it, is what Haidt must have in mind when he recommends
that liberals “reach beyond” their narrow moral outlook to asymmetrically
facilitate mutual understanding. He evidently does not mean that liberals
are supposed to merely understand the (propositional content of the)
conservative outlook – they arguably already do that (i.e., they know that
conservatives oppose abortion and why they do so). Since he clearly does
not recommend mere strategic appreciation, he must be talking about the
genuine kind. That is, the liberal must add characteristically conservative
considerations to the set of reasons she herself takes to have some bearing
on her beliefs. The conservative, on the other hand, is under no such
pressure. This is the asymmetry I shall reject.
This last claim might seem controversial to some, especially those who have a less “rationalist”
perspective on how politics does and should work. However, my argument remains unaffected by
where exactly one stands with regard to this issue. The only thing one needs to agree with is that
rational deliberation – that is, deliberation on the basis of sound empirical knowledge, publicly
What’s the evidence for this model? If moral judgments are reasons-
responsive, as so-called rationalists claim, then we should find this
mechanism at work in the lab. Reasons are often specified as principle-like
generalizations (Don’t lie! It’s wrong to murder!). In some cases, we should
thus find that subjects’ intuitions conform to such principles.
But it seems that they don’t. In one study, Uhlmann et al. ()
found that principles do not determine the acceptance of intuitions;
rather, intuitions determine the acceptance of principles. When people
are given trolleyological moral dilemmas (cf. Foot , Thomson ,
Greene et al. ) containing subliminal racial cues, they endorse
consequentialist or deontological principles, respectively, based on their
implicit racial preferences. Would you sacrifice Tyrone Payton for the
members of the New York Philharmonic? Maybe. Would you sacrifice
Chips Ellsworth III for the Harlem Jazz Orchestra? Perhaps. It turns
out that whether you would is not determined by your prior acceptance
of consequentialism or deontology but by which political camp you
belong to (liberal vs. conservative) and, consequently, which racial
preferences you are, on average, more likely to have. Other studies report
similar findings (Uhlmann et al. , Hall et al. ). Moral reasoning
is like Minerva’s owl – it starts its flight at dusk, when the job is
already done.
I think this is an overstatement, and a closer look at the evidence shows
why this is so. There are essentially two ways for moral reasoning to exert
an influence upon people’s moral beliefs. In the case of distal reasoning,
reasoning that a subject undertook at some time in the past makes a
difference to what the subject judges to be right or wrong at a later point
in time. In the case of proximal reasoning, reasoning that a subject
undertakes at the time of arriving at her judgment makes the difference.
There is strong empirical evidence that both forms of reasoning are
causally efficacious (see Sauer for this distinction).
Distal moral reasoning can be described as a form of moral education.
Levy (; see also Haidt, Koller, and Dias ) notes that
justifiable principles, and a willingness to change one’s mind in the light of better moral and/or factual
arguments on the side of one’s opponents – sometimes changes subjects’ minds.
It remains true, of course, that there is an is/ought gap. However, since ought implies can, we
should be skeptical of moral prescriptions that subjects who are equipped with a human psychology
seem incapable of carrying out.
In the U.S. context, people could be expected to pick up on the subtle racial information implicitly
contained in the stereotypically black and white names.
“Haidt’s work on moral dumbfounding [subjects’ unwillingness to suspend
their moral judgments even when their reasoning is debunked, H. S.] [. . .]
actually demonstrates that dumbfounding is in inverse proportion to socio-
economic status (SES) of subjects [. . .]. Higher SES subjects differ from
lower not in the moral theories they appeal to, but in the content of their
moral responses. In particular, higher SES subjects are far less likely to find
victimless transgressions – disgusting or taboo actions – morally wrong
than lower.”
The “good” argument could, for instance, consist in a brief sketch of an evolutionary explanation
for the existence of our revulsion toward incest, together with the observation that this evolutionary
rationale does not apply in this case. A complementary “bad argument” would point out that love is
obviously a good thing, so that each act that could contribute to an increase in the amount of love
would therefore have to be okay.
with before cooking and eating it – unanimity in disapproval begins to
erode along the lines of people’s SES.
Other stories include a broken promise to a man’s dead mother or the
innovative, and presumably rather impractical, deployment of an (Ameri-
can or Brazilian) flag as a means of sanitary hygiene maintenance. All of
these vignettes, except for Swings, tap into the prototypically conservative
moral foundations of purity, authority, and community. And in all of these
cases, except for Swings, subjects with an improved education stop
classifying the described actions as morally wrong, thereby demonstrating that
conservative moral foundations, and only those, lose their appeal when
epistemic conditions and capacities are improved.
The same goes for Paxton et al.’s study. Here, the vignette that subjects
are given simultaneously activates more than one moral foundation:
sexual intercourse between siblings could be seen as a violation of sexual
norms (purity) or the perversion of the value of familial intimacy
(community/authority).
The debunking argument participants are then given to consider points
out that their disgust response toward incest, while making sense
evolutionarily, misfires in this case: since neither possibly handicapped
children nor society as a whole is harmed in this case, and disgust alone is
insufficient for justifying moral disapproval, they are invited to reconsider
their judgment – which they often do.
In general, we see that a better education and sound debunking argu-
ments leave “liberal” judgments about harmful acts or rights violations
unaffected, undermining only “conservative” moral beliefs, and that moral
arguments based on liberal considerations often override those based on
conservative ones. In fact, I am unaware of the existence of any study that
shows how, after being exposed to morally relevant information, people
give up their harm-/fairness-based moral intuitions but retain their purity-/
authority-/community-related intuitions. Absence of evidence is not to be
mistaken for evidence of absence, of course, but the fact remains suggestive.
Note that in order to achieve this result (of people becoming more
“liberal”), subjects do not have to be manipulated, subconsciously primed,
or brainwashed. They are simply presented with some empirical facts or
with some considerations they themselves already deem relevant. This
alone is often sufficient to change their mind about a moral matter. But
if this is so, then this casts doubt on just how foundational prototypically
conservative moral foundations really are. The core of morality, that is, the
part of it that no one besides the nihilist is willing to give up, seems best
captured by progressive moral reasoning.
Debunking Conservatism: Political Disagreement
Now this is precisely what proponents of MFT would consider a biased
perspective by liberals who do not merely fail to appreciate the conservative
moral point of view but dismiss it as downright immoral. But let me
emphasize that this charge cannot plausibly be made here: in Paxton et al.’s
experiment, at least, the very same people who used to share conservative
moral intuitions reconsider them in the light of what they themselves
accept as a valid counterargument. This is not a case of one group being
condescending and ignorant toward another. It is a study investigating
which moral foundations best survive reflective scrutiny in one and the
same group of subjects.
I picked out just two studies as examples for the sake of brevity. As mentioned before, MFT has not
received much attention in philosophical circles yet (but: cf. Musschenga ). For a good
summary of other criticisms of the Social Intuitionist model, see Kennett and Fine ().
To do this, I will drive the conjunction of SI and MFT into a dilemma.
Those who wish to subscribe to this conjunction are faced with a choice.
Either, as MFT would suggest, considerations of purity, community, and
authority are, in addition to harm and fairness, genuinely morally relevant.
Or they are not. The first alternative threatens to undermine SI. The
second is incompatible with MFT. The combination of the two is
unstable.
First Horn: If SI is true, then, contrary to MFT, considerations of purity,
community, and authority are not morally relevant. MFT suggests that all
five foundations are morally relevant. I will argue that SI denies this. It is
not immediately obvious why this should be so, so let me explain.
The most striking piece of evidence for the Social Intuitionist’s claim
that moral judgment is based on intuition rather than reasoning is based
on the phenomenon of moral “dumbfounding.” In the famous ori-
ginal – but never published – dumbfounding study (Haidt, Björklund,
and Murphy ), the main point was to come up with scenarios that
would trigger a strong gut reaction of disapproval toward the described
action but make it difficult for participants to justify this reaction.
When subjects explicitly stated that they believed that something was
wrong whilst being unable to say why, they were classified as morally
dumbfounded.
A crucial element of this design is the experimental “devil’s advocate.”
The main task of the devil’s advocate was to challenge subjects’ initial
verdict with a series of preplanned questions, trying to force participants to
admit that they had run out of reasons for their beliefs. In most cases,
people would give the anticipated responses such as It’s not okay for brother
and sister to have sex with each other, or It’s wrong to cook and eat human
I happen to think that the evidence from dumbfounding is not just the most striking but in fact the
single most important piece of evidence for the antirationalist case Social Intuitionism is trying to
make. In his landmark paper, Haidt identifies four main problems for rationalism in addition
to the existence of dumbfounding (pp. –): the intuitive basis of moral judgment, bias (the
lawyer metaphor), the post hoc nature of moral reasoning, and the emotional impact of moral beliefs.
Without dumbfounding, none of these four tenets even comes close to an interesting form of
antirationalism about moral cognition. See also Haidt and Kesebir , pp. –, for this.
It is worth mentioning that in this original study, an average of % of subjects did change their
mind about the issue presented to them in response to the challenges put forward by the devil’s
advocate.
Here is the full text of this famous vignette: “Julie and Mark are brother and sister. They are
traveling together in France on summer vacation from college. One night they are staying alone in a
cabin near the beach. They decide that it would be interesting and fun if they tried making love. At
the very least it would be a new experience for each of them. Julie was already taking birth control
pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide
flesh, and so forth. The devil’s advocate would then ask them to explain
why they thought the way they did and subsequently challenge their
reasoning by pointing out that in the described scenarios, no one was
harmed and no one’s rights had been breached. At a certain point, many
subjects would surrender, admitting to their inability to justify their
judgment in the terms demanded by the devil’s advocate.
And there’s the rub: the devil’s advocate simply would not let any
reasons for subjects’ moral intuitions count as valid justifications unless
they pertained to the paradigmatically liberal foundations of harm and/or
rights violations. If the devil’s advocate had accepted a wider set of reasons
as morally relevant, including reasons grounded in paradigmatically con-
servative moral foundations, participants would not have reached a state of
moral dumbfounding at all.
In fact, participants frequently responded with “Yuck!” or “That’s
disgusting!” (purity); they also held that sexual intercourse between siblings
is incompatible with a healthy familial infrastructure (community), that
cleaning the toilet with their national flag would be offensive to their
“country” (authority; Haidt, Koller and Dias , ), and numerous
other justifications that did not rely on liberal moral foundations. In the
postexperimental coding of the recorded videos, however, all these justifi-
cations were lumped together as “unsupported declarations.”
But, of course, these are only unsupported declarations from a dis-
tinctly liberal perspective. From a conservative point of view, a point of
view that MFT suggests is equally morally significant, subjects’ moral
intuitions were driven by a richer set of moral foundations that included a
sensibility for violations of purity, community, and authority. There is no
independent reason one should dismiss these justifications as utterly
irrelevant to the moral issue at hand unless one thinks that only harm/
rights-based considerations should make a moral difference. But if one
accepts this picture offered by MFT, the most important evidence for the
SI model falls apart.
In order to be able to maintain that subjects are more likely to enter a
state of dumbfounding than to reconsider their moral beliefs, proponents
of SI are thus committed to the claim that only considerations of harm and
rights are of central moral relevance. If they weren’t, why would these be the only
ones to have been “carefully removed” from the experimental vignettes in
advance?
not to do it again. They keep that night as a special secret, which makes them feel even closer to
each other. What do you think about that, was it OK for them to make love?” (Haidt , )
Second Horn: If MFT is true, and considerations of purity, community, and
authority are morally relevant, then the evidence from dumbfounding fails to
support SI. On the first horn, SI is committed to the claim that “conserva-
tive” moral considerations are not, or at least not equally, morally relevant
when compared to liberal ones. Now suppose adherents of SI were to say
that they, too, wish to count considerations of purity, community, and
authority as genuinely morally relevant. This would entail, first of all,
that all of those subjects who justified their judgments in the dumbfound-
ing study in such “conservative” terms could no longer plausibly be
classified as morally dumbfounded. It would now be the experimental
devil’s advocate who is suffering from a liberal blind spot, by failing to
appreciate the relevance of what is, in fact, the majority of legitimate moral
foundations.
Moreover, one could now go on to describe the whole phenomenon of
moral dumbfounding as an experimental artifact: one small but interesting
finding in the aforementioned study is that when subjects are given bad
reasons to reconsider their judgments, their confidence in their moral
beliefs does not merely dip slightly or remain entirely unaffected. On
the contrary: subjects become more certain of their verdicts and less willing
to reconsider them. The same might be going on in the dumbfounding
study. Subjects are given bad reasons (“Look – everything turned out
fine!”) against their moral intuitions, and the majority of reasons they
deem highly relevant are dismissed as nonpertinent. Perhaps, then, what
looks like an irrational recalcitrance on the part of the experimental subjects at
first might just as plausibly be described as an entirely rational unwilling-
ness to reconsider one’s deeply entrenched beliefs in the light of flimsy
attacks.
What about those who did not use conservative moral reasoning in
support of their beliefs but who (unsuccessfully) tried to justify their
moral intuitions in terms of harm and rights? What should be said
about them?
Adherents of SI could claim that their findings are sufficient to show
that this second group of people was indeed dumbfounded, because they
tried to justify their responses by appealing to features that had been
carefully removed from the scenarios in advance. This could indeed be
what is going on in those studies. However, I wish to suggest that this, too,
does not amount to moral dumbfounding in the strict sense.
Why is moral dumbfounding thought to be so damaging for rationalist
accounts of moral cognition? It is because the phenomenon of moral
dumbfounding suggests that a sensitivity for reasons plays no role in
how subjects arrive at their judgments. In a state of dumbfounding, people
are typically unwilling to give up their seemingly unjustified beliefs.
If reasons did play a role in how they make judgments, presumably this
type of behavior would not occur.
However, this is not the most plausible interpretation of the findings.
A better explanation does not describe the state people reach as one of
dumbfounding but as one of inarticulateness. Subjects’ behavior suggests
that their automatic intuitions pick up on morally relevant features that
they have trouble articulating properly. It does not suggest that people
form their intuitions in a way that has no connection with the available
moral reasons at all.
That subjects suffer from inarticulateness means that at least
some morally relevant considerations are available in a given scenario,
that participants intuitively pick up on them, but that they are
unable to cite them or to make their influence explicit. Moral
confabulation of the kind required to establish genuine dumbfounding
requires that no morally relevant considerations that could possibly
justify subjects’ responses are available in a scenario, so people start
making them up.
Let me make a short detour here to illustrate this important point.
Perhaps the most famous experiment showing the extent to which
people are prone to confabulation is Nisbett and Wilson’s ( and
) “pantyhose” study. In this study, subjects were given four samples
of pantyhose to choose from. After they had made their decision, they
were asked to justify their choice. Subjects came up with all kinds of
plausible reasons – color, quality, touch – not knowing that the four
samples were, in fact, identical. There was actually no reason to prefer
one pantyhose to the other. Most participants simply picked the
rightmost sample. In this case, it is plausible to suggest that subjects were
confabulating.
Consider now a (fictional) variation of this design. Suppose that there
had been a difference between the samples. Say that the fourth sample on
the very right had been one with superior tissue quality. Now in this
fictional experiment, subjects also pick this rightmost sample. When asked
to justify this choice, however, they do not refer to the sample’s superior
tissue quality but to its nicer color. However, there are no actual color
differences, so even though people’s decision reliably “tracks” the sample
with the best quality, they misidentify the factor influencing their decision.
Would we say that, in this case, subjects’ decision to go for the sample
on the right was just as insensitive to any relevant features of the
pantyhoses as in the original experiment? Would we attribute their
choice to the same position effect? I surmise that we would not. Rather,
it would be much more plausible to assume that subjects intuitively picked
up on something of genuine relevance (the superior tissue quality) and
merely had introspective difficulty articulating why they made the
decision they made. If there are relevant differences between available
options, and subjects’ judgments or behavior reliably tracks those differ-
ences, inarticulateness is a more plausible explanation than one that relies
on entirely nonrational factors.
The same holds for the dumbfounding study. If – this is the second
horn – proponents of the SI model grant that there are at least some
morally relevant features present in the vignettes given to participants –
and they have to admit this if they do not want to rule out “conservative”
moral considerations as irrelevant – and it turns out that subjects’ moral
intuitions tracked those features just like subjects’ decisions tracked differ-
ences between pantyhoses in the fictional experiment just described, then
it becomes much less plausible to suggest that people’s moral intuitions are
radically disconnected from any morally relevant reasons. It is far more
plausible now to assume that subjects merely had problems articulating
why they made the judgments they made, even though their intuitive
judgments were perfectly reasons-responsive.
The question, now, is whether there are any morally relevant features
present in the scenarios given to participants of the dumbfounding study
for them to be oblivious to. Obviously, if MFT is true, the answer is
“yes.” The described actions – sibling incest, gratuitous but harmless
cannibalism, cleaning one’s toilet with a flag, eating one’s dead pet dog,
slapping one’s father in the face as part of a play – all contain aspects that
would, from a conservative point of view, render an action morally
problematic.
People have an intuitive sense for relevant information, but, like
experts in a particular domain, are often unable to make the workings
of this sense explicit. In two recent papers, Daniel Jacobson ()
and Peter Railton () helpfully illustrate the distinction between
intuitive sensitivity combined with mere inarticulateness and genuine
dumbfounding by using the notion of risk. Jacobson argues that even
It would of course be easy to determine whether a similar position bias or the superior quality made
a difference to subjects’ decision by teasing apart position and quality in a further variation of the
experimental design. However, this possibility does not matter for my interpretation of the
dumbfounding study.
from a consequentialist perspective that insists on the exclusive moral
relevance of harm, the vignettes given to subjects contain plenty of
morally relevant reasons not to perform the described actions. It is not
a very good justification for sexual intercourse between brother and sister
that in hindsight, things happened to turn out fine. An action of this
kind is morally very dangerous, and there is no way for Julie and Mark
to know in advance that having sex with each other will strengthen
rather than corrode their relationship. Sure, no harm was done; but the
expected harm must have been considerable, which is enough to show
that Julie and Mark made a really poor decision with strikingly fortunate
consequences.
This perception of risk need not be immediately obvious to the judging
subjects, hence their inability to articulate it. This is especially true in a
situation in which an authority figure – the experimental devil’s advocate –
only wants to hear justifications that point to actual harm (which, by
stipulation, does not occur). Moreover, Railton (, ff.) reports that
students have very little difficulty articulating such risk-based reasons
against Julie and Mark’s action when they are first given a different but
analogous scenario in which two people decide it would be fun to try a
round of Russian roulette (which also turns out fine in the end). When
primed to pay attention to risk in this way, it becomes rather easy for
people to notice the relevance of the involved dangers despite the good
outcome. They merely need a little help with articulating which features
are responsible for their intuitive resistance to follow the direction of the
devil’s advocate.
One possible objection to this line of argument is that in setting up the
dilemma, I am conflating the commitments of the devil’s advocate with
the commitments of the theory that the advocate’s behavior in the experi-
ment is supposed to support. One might say that when challenging
participants’ intuitions, the advocate, rather than Social Intuitionism as a
whole, must assume that only liberal considerations count. I think this
objection is misguided, because once Social Intuitionists admit, at the
theory-level, that considerations of purity, loyalty, or community have
genuine moral significance, the dumbfounding studies end up suggesting
nothing more than that moral judgment is mostly automatic and intuitive.
If true, this would be, of course, an important empirical result, but it
would no longer allow Social Intuitionists to make their most spectacular
antirationalist claim: that reasons play very little role in how subjects arrive
at their moral judgments, whether intuitive or not. It would then still need
to be shown that intuitive judgment cannot be reasons-responsive (e.g., by
having been shaped by prior reasoning that has become automatic over
time), and for this, prospects are looking grim.
Another objection is that my argument assumes the truth of “liberal-
ism” (as broadly as it is construed here and elsewhere in the psycho-
logical literature) and is thus begging the question against MFT’s aim to
support political conservatism. However, this objection is misguided,
too, because it is not my argument that is assuming a certain standard for
the evaluation of the quality of subjects’ moral judgments that happens
to line up with my preferred set of liberal convictions; it is the experi-
mental subjects themselves whose behavior reveals the extent to which
they take liberal intuitions to survive rational reflection better than
conservative ones. It is precisely because of the threat of begging the
question that I appeal to no more than what subjects themselves deem
relevant and appropriate.
A third and final objection has it that my argument seems to require
Social Intuitionism as a whole to be incompatible with MFT. However,
my dilemma, if successful, merely shows that the phenomenon of moral
dumbfounding is incompatible with MFT: if dumbfounding is supposed to
be used as evidence for SI, considerations of purity, authority, and com-
munity cannot be morally relevant. And if they are relevant, then there is
no evidence for genuine dumbfounding. This is to some extent correct but
does not undermine my point, either. It is less relevant which theories,
when compared wholesale, are or are not compatible. What matters is that
the one piece of evidence SI needs to maintain its distinctively antiration-
alist flavor is incompatible with MFT, because this flavor is needed –
remember the first step of the argument in the second section – for the
Asymmetry of Understanding to appear justified. Whether any or all of the
other elements of SI are also incompatible with MFT has no bearing on
this issue, which I am mainly concerned with here.
This concludes my case for the incompatibility of SI and MFT. I have
argued that SI presupposes that moral reasoning based on paradigmatically
“conservative” moral foundations can be dismissed as morally irrelevant. If
this is so, MFT (that is, premises (’) and () in the argument) must be
false, and the Asymmetry of Understanding lacks support from this side. If
proponents of SI wish to avoid this result and maintain that all moral
foundations are equally morally relevant, they become unable to use the
phenomenon of moral dumbfounding to make their antirationalist point.
If this is so, the main tenet of SI (that is, premise ()’ in the argument)
turns out to be false, and the Asymmetry of Understanding loses support
from this side. Either way, SI fails to provide MFT with the support it
needs to arrive at the Asymmetry of Understanding. If genuine moral
reasoning and informed deliberation remain a viable way of resolving
moral-political disagreements, then it is much less clear why liberals will
have to “reach beyond” their moral foundations. It is even less clear why
conservatives should not be required to do the same – by asking themselves
whether considerations of purity, community, and authority can provide
suitable foundations for the moral integration of modern democratic
societies at all.
Ultimately, my suggestion (for which I will provide no further argument
at this point) is that only the liberal moral foundations of harm and
rights/fairness should be admissible in the political realm. Would this
bring an end to the issue of deep political disagreement altogether? Not
at all. Consider just one example of how differently Germany and Sweden,
two paradigmatically socially liberal countries, currently deal with
prostitution: in recent years, Germany has tried to drag sex work out of
the twilight and moved in the direction of treating prostitution as a normal
occupation – including legalization, tax obligations, and full participation
in social security programs. Sweden, on the other hand, continues to
criminalize the purchase of sex, making it illegal for clients to buy the
services of a prostitute but legal for a prostitute to offer them. This scheme
is supposed to disincentivize customers without punishing those who
already are in a disadvantaged position.
Here we have two countries whose citizens and legislatures strongly
disagree about how to handle the issue of prostitution and the further
problems often surrounding it, such as human trafficking or drug abuse.
But notice that this disagreement is not based on a disagreement about the
moral foundations an assessment of the moral and legal status of prostitu-
tion should be based on. In both cases, it is clear that social policies should
aim to promote gender equality, protect individual rights, and minimize
severe harms to the people involved. The disagreement, though difficult to
resolve, only concerns the implementation of those overarching norms and
values. In democratic societies, there will always be strong disagreements
about many important issues – which is one more reason to remove as
many obstacles to their resolution as possible.
Conclusion
The main application of Moral Foundations Theory is an explanation of the
source and persistence of political disagreement both within modern
democratic societies and between different cultures. Liberals and conserva-
tives, the theory holds, do not share the same moral foundations. In fact –
and this is where the Asymmetry of Understanding is supposed to stem
from – liberals, often deliberately, eschew some of the foundations that
matter most to conservatives. Conservatives, on the other hand, have an
appreciation of the moral relevance of all of them.
I decided to grant this claim for the sake of the argument. But is it true?
On the most basic level, moral foundations are psychologically imple-
mented by a set of moral emotions: disgust grounds the foundation of
purity, respect grounds authority, and the foundation of community is
grounded in feelings of loyalty and belonging. It is very implausible to
think that liberals are, on this basic level, incapable, or at least far less capable, of
experiencing those emotions. In this sense, the purported disagreement
about which moral considerations matter and which do not does not exist.
Liberals are disgusted by many immoral actions, such as acts of senseless
violence and hate crimes; liberals do acknowledge the moral authority of
charismatic leaders, such as Martin Luther King or Nelson Mandela; and
they do have a strong sense of community with the people and groups who
share their concern for progress, equality, and liberation. The difference
between liberals and conservatives does not lie in which moral emotions
they are capable of but which of them they grant independent moral
authority. That is, liberals recognize the moral and political importance
of emotional responses of moral disgust, indignation, respect, and loyalty.
But they refuse to treat them as reason giving on their own and demand
that these responses latch on to the right kinds of independently valid
moral foundations. Liberal moral disgust remains tied up with actions that
cause gratuitous harm and suffering; liberal respect is earned by those who
stand up for the weak and downtrodden; and one can only expect liberal
solidarity for causes and projects that deserve it, because they advance the
struggle against oppression and exploitation.
Deontology
Introduction
Did you ever have to make a tough decision? Yes? Did this decision involve
strangers only you could save trapped on islands, while you had no idea
what got them or you into such dire straits in the first place? Did it feature
a runaway trolley threatening to kill a group of people, while you were
clueless about where the hell the people in charge of overseeing the tracks
had disappeared to? No? I didn’t think so.
And yet a lot of philosophy – that discipline, remember, that was supposed
to be about how to live one’s life – can look like the science of such far-
fetched puzzles. The pages of many leading journals are replete with faceless
people trapped in outlandish situations, and philosophers, it seems, are in the
business of figuring out how these scenarios affect subjects’ personal
identities, whether the people in them know anything, which resources the
persons inhabiting them are entitled to, or how they ought to act.
Over the past decade, these far-fetched scenarios have experienced
something like a second spring in the circles of empirical moral psych-
ology. I have repeatedly referred to examples from this research paradigm
throughout this book. Increasingly, neuroscientists such as Joshua Greene
(, , ), social psychologists like Jonathan Haidt (), or
experimental philosophers such as Joshua Knobe (; see also Knobe
and Pettit ) have taken a liking to these unusual settings and the
(more or less unfortunate) people who populate them; their hope is to
gather new insights about what the folk are thinking about these cases,
which moral beliefs they have, or perhaps just where the lights go on in
people’s heads when they contemplate them. In this chapter, I will show
that unrealistic thought experiments pose a bigger problem for this
research paradigm than is usually thought.
Greene’s dual-process model of moral cognition embodies a form of
what I have referred to as obsoleteness debunking (more precisely, the
debunking of deontology is a form of (i) selective, (ii) deep, (iii) proximal
as well as distal, (iv) process-based (v) obsoleteness debunking): his claim
that deontological moral intuitions are epistemically dubious should be
understood as the claim that the processes generating these intuitions are
not biologically, culturally, or personally familiar with the situations they
are supposed to deliver a verdict about. However, the force of this criticism
depends, to a certain extent at least, on the ecological validity of the stimuli
that are used to support it. What ultimately needs to be shown is that
for realistic modern moral problems, deontology is an inadequate moral
outlook.
Recently, the debate regarding the merits of this strategy has focused
increasingly on the issue of moral learning and whether the judgmental
patterns that are obtained by trolleyologists can be explained as the upshot
of rational learning processes (Railton and , Kumar ). The
general idea is that deontological intuitions can be vindicated if and to the
extent that they result from finely attuned affective or otherwise automatic
responses. In some cases, the details of these learning mechanisms have
been spelled out rather precisely (Nichols et al. ). I have defended a
rational learning approach to moral cognition myself (Sauer ). By
now, this response strategy has become the most promising and most
widely pursued defense against the empirical debunking of nonconsequen-
tialist moral intuitions.
Following Fiery Cushman (), Peter Railton () has argued
that various asymmetries in moral judgment can be accounted for in
terms of the distinction between model-free and model-based reinforce-
ment learning. It is simply not true, the suggestion goes, that the aversion
to pushing a man to his death to save five others stems from a deeply
entrenched but primitive and rationally insensitive alarm-like response.
Rather, this intuitive difference depends on whether a scenario is evalu-
ated on the basis of a “cached” and thus computationally cheap repre-
sentation of a narrow situation/action pair or a fully worked-out mental
model that can function as a decision tree. In the cached case, the cognitive
processes involved do not build a fully branched-out causal representation of
the situation at hand. The asymmetry between the Trolley and the
Footbridge case, then, is explained by the fact that only the latter scenario
triggers an overlearned response in terms of a strong negative evaluation
of violently pushing someone, whereas the various scenarios involving
levers and switches do not trigger such a cached response, so that those
scenarios are evaluated on the basis of a fully articulated causal model
and, accordingly, the objective body count. Railton thus offers a partial
vindication of deontological intuitions that draws on the idea that the
processes generating these intuitions are in principle rational, even if they
may sometimes be insufficiently attentive to the unusual revaluation
presented by outlandish trolleyological vignettes.
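The cached-value story Railton borrows from computational models of learning lends itself to a toy sketch. The following illustration is my own, not anything from Railton or Cushman, and all action names and numbers are invented assumptions: a model-free evaluator simply retrieves an overlearned value keyed to an action type, while a model-based evaluator scores actions by consulting an explicit causal model of their consequences.

```python
# Toy contrast between model-free ("cached") and model-based evaluation.
# All names and values here are illustrative assumptions, not data from
# the moral-psychology literature.

# Model-free: a cached value learned from past experience, keyed only by
# the action type. It ignores the causal structure of the current case.
cached_value = {"push_person": -10.0, "pull_lever": 0.0}

def model_free_eval(action):
    """Look up the overlearned value for this action type."""
    return cached_value.get(action, 0.0)

# Model-based: consult an explicit causal model (a small decision tree)
# and score each action by its modeled outcome, here the net body count.
causal_model = {
    "push_person": {"deaths": 1, "saved": 5},
    "pull_lever":  {"deaths": 1, "saved": 5},
    "do_nothing":  {"deaths": 5, "saved": 0},
}

def model_based_eval(action):
    """Score the action by the consequences the causal model predicts."""
    outcome = causal_model[action]
    return outcome["saved"] - outcome["deaths"]

# The two systems agree about the lever but conflict about pushing:
for action in ("pull_lever", "push_person"):
    print(action, model_free_eval(action), model_based_eval(action))
```

On this toy picture, no cached aversion competes with the model-based body count in the lever case, but a strongly negative cached value opposes the model-based verdict in the pushing case, mirroring the asymmetry between the Switch and Footbridge scenarios.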
However, Greene () continues to warn against this “optimistic”
view of moral intuitions. While he agrees that intuitive processes can
integrate relevant information on the basis of different types of trial-and-
error learning and (evolutionary, cultural, individual) transmission mech-
anisms, he insists on the fundamental limitations of intuitive cognition,
the quality of which depends both on the particular learning history of
an intuition as well as the circumstances in which it is activated. This is a
problem that model-free learning mechanisms cannot, in principle, overcome. Because of this, our moral intuitions remain badly equipped both for dealing with complex modern social problems, such as transcending in-group/out-group thinking (“us vs. them”), and for the highly artificial thought experiments of the kind moral philosophers tend to be so partial to. And while I am inclined to agree with the first point, the second seems
less convincing to me.
Railton makes clear that familiarity lies at the heart of the issue:
For example, my personal history of direct experience and indirect
observation of the disvalue of violently shoving someone in situations other
than self-defense leaves me with a cached, strongly negative value for acts of
this kind. The stimulus of imaginatively contemplating pushing the man off
the bridge triggers this cached disvalue, which is not sensitive to the highly
unusual ‘‘revaluation” that connects this act to the saving of five workers’
lives. Model-free processing (think of this as System 1) thus results in a
powerful ‘‘intuitive” sense that I should not push, which does not simply
yield in the face of the more abstract cost-benefit analysis issuing from my
overall causal/evaluative model of the situation (think of this as System 2).
In Switch and Loop, by contrast, since I do not have a similar history of
direct or indirect negative experience with pushing levers that activate a
switch, no cached model-free negative value is triggered by the imagined
stimulus of addressing myself to the lever, so there is no similar competition
with model-based control that looks ahead to the positive remoter consequences of pulling the lever. (, )
People’s responses to morally salient scenarios are affected by whether these scenarios are realistic, such that people can plausibly be thought to have some degree of familiarity with them. In this chapter, I will focus
specifically on moral judgments about far-fetched, unrealistic scenarios.
What is their evidential value? Can such contrived cases tell us anything
interesting about moral judgment and reasoning? And, perhaps most
importantly, what do moral judgments about highly artificial scenarios tell
us about real moral judgments in real situations?
Many people suspect that it is for precisely this reason – their lack of
realism – that the evidential value of these situations and the moral
judgments people are requested to make about them is close to, if not equal to, zero. Thought experiments such as the Trolley dilemma (Foot ,
Thomson ) are so utterly strange, so far removed from people’s
everyday experience, that their moral judgments about them couldn’t
possibly bear any interesting relation to what they actually believe. This
suspicion is quite common. It is not, I will argue, quite so well understood.
One of the main aims of this chapter is to articulate this worry and to see
how it holds up under scrutiny.
Perhaps the most important response to this unfamiliarity problem
(as I will refer to it), is that it misidentifies just what the finding at issue
actually is. According to a popular line of argument, what is interesting
about subjects’ verdicts about far-fetched cases and what psychologists are
actually putting under the microscope are not judgments about this or that
individual case, but rather the strikingly robust differences in people’s
judgments about similar but slightly different situations. People tend to
think, for example, that it is acceptable to divert a trolley from one track to
another to save five people (Greene ), but they typically deem it
impermissible to bring about the same result by pushing a stranger to his
death, thereby derailing said old-fashioned device. This difference in
responses is supposed to be the key finding, and the degree of unfamiliarity
of the individual scenarios used to generate it is all but irrelevant to the
conclusions we can hope to wring from it.
If successful, this argument would defuse the unfamiliarity problem,
and with it the most common objection to the use of unrealistic scenarios
in social psychology and empirically informed metaethics, where this
approach is arguably the hottest recent trend. This would be quite a big deal, since this trend aspires to pose a major challenge to a large portion of mainstream normative ethics and its underlying methodology of – polemically speaking – naïve intuition mongering. My second aim in this chapter is to show that, and why, this difference argument (as I will refer to it) fails. I will argue that, once the unfamiliarity problem is properly understood and its various possible readings are carefully distinguished, it can be shown that the difference argument is of no help in overcoming the problem.

For an overview, see Edmonds , ff.; for more detailed discussions of the lack of realism/external validity problem, see Bauman, Bartels, McGraw and Warren ; Gold, Pulford and Colman ; Gold, Colman and Pulford ; Fried ; Bloom ; Knutson et al. and Moll, de Oliveira-Souza and Zahn .

After decades of Kohlbergian slumber, the social psychology of moral judgment and the subdiscipline of empirically informed metaethics really took off again with Greene et al.’s () and Haidt’s () landmark papers. Trolleyology, understood as the use of sacrificial dilemmas for empirical purposes, has since then become the fastest growing experimental paradigm in empirical moral psychology. For a sample of key publications, see Greene , , Koenigs, Young et al. , Singer . At the same time, the approach has attracted significant criticism both from a normative/conceptual (Berker ; Kahane and Shackel ; Kahane and ; Sauer
The gist of my argument is this: the debunking of deontological
intuitions rests, to a large extent, on a misunderstanding of the function
of thought experiments. Thought experiments, whether they are realistic
or not, are supposed to highlight features of morally salient real-world
situations we are grappling with. Once discussions turn away from this
basic purpose and reel off into discussions about what we ought to do in
the situations stipulated by the thought experiment, things start to go awry.
. Trolleyology
Let me start with a few examples from what is sometimes – and often,
though not always, critically – referred to as trolleyology. I will adopt this
label in this chapter, although not all of the examples I will focus on
literally involve trolleys (or other runaway vehicles, for that matter).
Rather, I will use the label in a very broad sense to refer to a family of
empirically informed approaches to metaethics and normative theory
that employ this and other, similarly far-fetched stories and thought
experiments.
The core of trolleyology consists of a series of far-fetched scenarios that
Appiah () aptly refers to as “moral emergencies” (ff.): stories in
which there is (i) only one agent, faced with (ii) an exhaustive set of
(iii) clear options to choose from in (iv) a very short amount of time with
(v) very serious consequences. Typically, these vignettes describe sacrificial
dilemmas (Bartels and Pizarro , Kahane ) that feature some
wayward lethal threat, and experimental participants need to determine
whether the described agent may or may not perform a particular action.
This action can range from pulling a lever to divert the threat, to pushing a person to his death to stop it, to dropping a person in front of it with a trapdoor. Variations of the dilemmas used in trolleyology can become extremely subtle, a point to which I will also return.

a) as well as empirical (McGuire et al. ; Bartels and Pizarro , Kahane et al. and ) side.

For useful discussions of the value of thought experiments for philosophical questions more generally, see Dancy , Gendler , Cooper , Machery .
Barbara Fried characterizes what she refers to as “hypotheticals” in a
similar fashion (see also Wilson ). Unrealistic thought experiments
in ethics
typically share a number of features beyond the basic dilemma of third-
party harm/harm tradeoffs. These include that the consequences of the
available choices are stipulated to be known with certainty ex ante; that the
actors are all individuals (as opposed to institutions); that the would-be
victims (of the harm we impose by our actions or allow to occur by our
inaction) are generally identifiable individuals in close proximity to the
would-be actor(s); and that the causal chain between act and harm is fairly
direct and apparent. In addition, actors usually face a one-off decision
about how to act. That is to say, readers are typically not invited to consider
the consequences of scaling up the moral principle by which the
immediate dilemma is resolved to a large number of (or large-number) cases.
(, )
Let me mention one potential problem for my discussion of the unfamiliarity objection. It may seem to some as if, in talking about trolleyology in
general, I am illegitimately lumping together two philosophical approaches
that are best kept separate: one may be referred to as empirical trolleyology
of the kind conducted by Joshua Greene and many others, and the other
could be called normative trolleyology and is perhaps best exemplified by
writers such as Frances Kamm. The former approach is in the business of
establishing which processes people rely upon in making their moral judg-
ments. The latter is about whether there are any morally relevant differences
between contrastive cases. This matters because it has implications for the
prospects of making the unfamiliarity objection stick. Empirical trolleyol-
ogists use lab conditions to figure out how people would respond to moral
dilemmas in the real world, which means that their findings are only
relevant to the extent that the former map onto the latter – if they do not, this is bad news for empirical trolleyologists. But it need not be for normative trolleyologists, whose normative commitments do not require a similar type of external validation.

The second method is perhaps best captured by Kamm’s “technique of equalizing cases” (Kamm , –; for helpful discussion of this method, see also Elster and Kelman and Kreps ): “[W]hat some philosophers who argue for a moral distinction between harming and not-aiding have done is construct cases that are alike in all respects other than that one is a case of harming and the other a case of not-aiding. It is only if all other factors are equal that we can be sure that people’s responses to cases speak to the question of whether harming and non-aiding per se make a moral difference. It can be very difficult to construct equalized cases, and often philosophers think that they succeed when they do not” ().
However, it seems to me that this argument greatly exaggerates the
differences between the respective agendas of empirical and normative
trolleyology. For one, if my argument is successful but turns out to
properly apply only to empirical trolleyology, that would already be no
small feat; for another, Greene ( and ) famously wants to use the
results of his experiments for precisely the same reason as people with a
more conceptual methodology, namely to figure out which kinds of moral
judgments can be justified. I doubt that normative trolleyologists have no
interest whatsoever in moral judgments about real-life cases. If they want
their arguments to have at least some relevance for what real people ought
to do in real life, then, in principle, the unfamiliarity problem should apply
to their project as well.
The normative upshot of the trolleyological method is diverse, but I will
focus on just one. In moral psychology, the sacrificial paradigm has proven
rather fecund, to say the least. It is often used to provide evidence for so-
called dual-process models of moral cognition (Evans and Stanovich ,
Greene ; see also Berker and Kahane ). One of the main
aims of this theory family is to debunk so-called deontological intuitions by
mapping distinct types of moral judgment (e.g., consequentialist and
deontological) on distinct types of cognitive processing (e.g., controlled
and automatic) and, if possible, on different normative statuses (e.g.,
justified and unjustified).
The trolleyological paradigm thus brings empirical methods to bear on
issues that lie at the core of normative ethics – is it always obligatory to
maximize aggregate welfare? Are there side constraints on the direct
promotion of the good? And, not least, how reliable are the intuitions
we rely upon when trying to answer such questions (Singer )? If the
trolleyological approach is fundamentally methodologically flawed because
of the stimuli it essentially relies upon, the very usefulness of this paradigm
for addressing such issues is called into question.
. Unfamiliarity
Why should we think that far-fetched scenarios are problematic? What
casts their evidential value into doubt? One possible reason for skepticism
about trolleyology is this: our moral intuitions are the result of cognitive
structures embedded over the course of our moral education (Horgan and
Timmons , Sauer b, Railton ). This type of moral
education – the habitualization of automatic patterns of defeasible moral
judgments – takes place in real life, where real people interact with and are
guided by other real people and where they are confronted with real
situations. Our moral intuitions about scenarios to which they have not been well attuned are thus presumably unreliable. Allen Wood articulates this
worry as follows:
I have long thought that trolley problems provide misleading ways of
thinking about moral philosophy. Part of these misgivings is the doubt
that the so-called “intuitions” they evoke even constitute trustworthy data
for moral philosophy. As Sidgwick was fully aware, regarded as indicators of
which moral principles are acceptable or unacceptable, our intuitions are
worth taking seriously only if they represent reflective reactions to situations
to which our moral education and experience might provide us with some
reliable guide. (Wood , )
Actually, there are at least three distinct types of unfamiliarity at issue, each
of which is potentially associated with its own kind of problems. Bauman
et al. () distinguish among experimental, mundane, and psychological
realism. The humorousness of many of the scenarios involved can under-
mine their experimental validity, because funny vignettes can make it
difficult for subjects to become seriously engaged with the material. The
requirement of mundane realism is violated when participants are highly
unlikely to be already familiar with at least vaguely similar situations or to
ever encounter such situations in their own real lives. Finally, psychological
realism has to do with whether making moral judgments about trolleyological cases activates the same type of cognitive process as moral judgments in ecological settings do. Since judgments about moral emergencies
in fantastic, closed worlds are generally inconsequential and contemplating
them causes virtually no conflict or discomfort, one can assume that moral
judgments about far-fetched cases are psychologically very different from
people’s real-world moral beliefs. All of these factors threaten the external
validity of moral judgments about unusual cases.
This is not meant to suggest that the unfamiliarity problem arises only for this particular account of the nature of moral judgment. Versions of sentimentalism, rationalist intuitionism, and many others have just as much reason to be wary of the evidential value of moral judgments about overly unrealistic cases.

Another illuminating quote: “When a philosopher simply stipulates that we are certain you can save all and only the inhabitants of exactly one rock, then we should be clear that he is posing a problem

However, the main point behind the unfamiliarity problem is not about unfamiliarity per se. After all, many situations and cognitive tasks are unfamiliar to us at first glance, but that does not stop us from thinking that
we can generate reliable, or at the very least not entirely useless, intuitions
in such contexts. Rather, far-fetched scenarios like the Trolley Problem
suffer from two related defects: they ask us, by stipulation, to ignore
considerations that, in everyday situations, are usually relevant and legit-
imately inform our considered moral judgments. And they supply us, again by stipulation, with considerations that, in everyday situations, are usually unavailable to us because of the general messiness and complexity of real life. Both of these facts may
well undermine the reliability of our intuitions about unusual cases:
What might seem to us genuine intuitions are unreliable or even treacher-
ous if they have been elicited in ways that lead us to ignore factors we
should not, or that smuggle in theoretical commitments that would seem
doubtful to us if we were to examine them explicitly. (Wood , ).
For instance, if someone asked us in real life what a person should do in the Trolley dilemma or in a Taurek case (Taurek ), and we were not yet professionally deformed, we would probably have some questions. How
did the person end up there? Do we know anyone involved? Where are the
people who are actually supposed to handle a situation like this? And how
does the agent know that, whatever action she decides to perform, she will
be successful? The difference argument, then, is supposed to show that
questions such as these are misguided, for they fundamentally misunderstand the nature of the evidence obtained on the basis of such cartoonish
scenarios. I will briefly discuss this argument in the following section.
so different from otherwise similar moral problems you might face in real life that any ‘intuitions’ we
have in response to the philosopher’s problem should be suspect” Wood (), .
trolley problems, since this suggests to them that the degree of agreement
among people about even such weird examples that are so different from
our real-life moral judgments is itself a significant datum that is of psycho-
logical interest and requires theoretical explanation. (Wood , )
The point is, the argument goes, that it is not people’s responses to
individual far-fetched scenarios that call for an explanation. It is the high
degree of convergence between subjects’ contrasting responses to two separate but similar far-fetched scenarios, and for this finding, the degree of
realism of the individual cases involved plays little or no role. Conse-
quently, how realistic a scenario needs to be for it to be a suitable trigger
of reliable moral intuitions is not an issue, either.
It may seem to some that despite its popularity, the difference argument
is actually a red herring, because it is parasitic on another argument that
should be targeted first and foremost. The idea behind this objection is that
what trolleyologists are interested in is to use artificial scenarios as a second-
best route to finding out what people’s moral beliefs about nonartificial
scenarios are. Their findings would then be relevant only to the extent that
the former map onto the latter, and the unfamiliarity problem would hinge
on the claim that they do not. Importantly, when the argument is framed
this way, it becomes clear that contrastive responses carry no independent
weight, as their relevance is restricted by whether the individual responses
that constitute the contrast are telling in the first place.
This, however, is a misunderstanding of the difference argument and
gets things precisely backward. To see this, take a look at Koehler’s ()
reply to Dancy’s () case against the use of imaginary cases in norma-
tive theorizing, where he argues that even if certain cases are problematically unrealistic:
we could use thought experiments to figure out moral conclusions. We
could still use them, for example, to establish whether a difference between
two cases with regards to a single property would make a moral difference.
This is exactly what is done in [trolley cases]: Both cases are identical except
for one detail. The question is whether this would make a moral difference.
It is not relevant for this to have a complete description of the two
scenarios, since we are only interested in the difference this change in a
single property would make. It is not even relevant that we know which
specific evaluation for each situation would be right, but only whether our
evaluations of the two cases would differ and, therefore, that the property in
question is morally relevant. ()
“Second best” because ideally, trolleyologists could place actual people in actual moral emergencies
to see what they’re doing. Damn those ethics committees.
The point of the difference argument is that trolleyology remains useful
even if we grant that individual moral judgments about this or that case do
not map onto real-life responses. This does not render them evidentially
useless, at least as long as we find said differences in response pairs. It is
thus not the case that the difference argument is parasitic on the ecological
validity of single judgments. Rather, the difference argument is supposed
to shield the usefulness of single judgments from the charge of a lack of
realism, because this lack of realism does not warrant our simply ignoring
the contrastive pattern we find in subjects’ judgments. Or so the
suggestion goes.
People think it permissible to switch a lever to divert a trolley from one
track to another in order to save five people who would otherwise have
died. On the other hand, they do not deem it acceptable to push someone
off a bridge to his death to stop the trolley from killing the five. How
realistic, in the sense of how close to a difficult moral problem one might
actually encounter, are these scenarios? Not at all, one might think.
But: that people are happy to switch the lever and that they are averse to pushing the fat man off the bridge is a strikingly robust pattern
(Cushman, Young, and Hauser ), and this convergence in people’s
judgments suggests that identical body count is not the only thing that
matters for moral evaluation. The variable so isolated can now be subjected
to further investigation – is it the up-close-and-personalness of the second
scenario that causes this difference (Greene et al. )? Or does the
doctrine of double effect (Mikhail ) structure people’s intuitions?
Whatever it is that turns out to be responsible for the judgmental patterns
that can be retrieved from the data, the difference argument aims to defuse
the unfamiliarity objection by shifting the focus from the relation between
a judgment and a potentially problematically unusual task to a contrast
between two judgments. What does this contrast consist in? If we single
out trolley cases, we see that people’s judgments switch from permissible to
impermissible, acceptable to unacceptable, and sometimes merely from
“okay” to “not okay” (Greene ). (It is worth mentioning that the
point by Wood does not, it seems, have to be interpreted as an articulation
of the difference argument, as it would still go through if there were only
one trolley case – there could be a convergence in people’s judgments in
terms of them simply agreeing that it is permissible to flip the switch,
period. However, the wider context makes clear that proponents of
trolleyology believe that convergence in contrastive responses constitutes the suggestive finding, since this is where, normatively speaking, the interesting things are happening.) The difference at issue is thus one of endorsement of the respective proposed actions.

Thanks to Neil Levy for alerting me to this point.
The important thing to realize here is that the differential responses we
find are supposed to render the unfamiliarity of the individual cases
involved irrelevant to the informational value those cases have when it
comes to finding out how actual moral judgment works. To see if this
rejoinder works, let me cash out the unfamiliarity problem in more detail.
It seems clear that in their own real lives, very few people ever encounter
moral emergencies of the trolleyological kind. Moreover, even if some did,
they probably would not encounter them frequently enough to become
well attuned to them, to learn from past mistakes, and to come up with
explicit plans for how to handle such situations when the next unlucky
bunch needs saving. In this section, I will develop a more systematic
articulation of the sense of unease many seem to have about the use of
far-fetched scenarios in moral psychology and empirically informed
metaethics. The unfamiliarity objection has (at least) five different aspects,
which I will take up in turn. In the next section, I will then go through the
five versions a second time in order to see whether the difference argument
succeeds against any of them.
There are some methodological worries about wording here. Particularly in cases where the
empirical results are supposed to have some bearing on the difference between consequentialist
and deontological moral theories, it is important that the normative commitments of these theories
are identified properly. Kahane () has remarked that thinking it merely permissible, rather than
obligatory, to push the fat man off the bridge is actually a deontological position, even though
endorsements of pushing are supposed to count as consequentialist. On the other hand, there is
recent empirical evidence suggesting that in the end, wording does not end up making much of a
difference (O’Hara ).
. Ecological Validity
The idea here is that due to the lack of realism characteristic of trolleyology, subjects’ moral judgments about such scenarios tell us nothing about
how their moral beliefs and actions translate into real life.
There are two versions of this problem (Alzola ): in artificial
settings such as the lab, the influence of the subtle situational factors that
are manipulated in experiments is typically far greater than outside the lab.
Thus, responses we find tend to be much more extreme and dramatic.
Surely people’s moral judgments are at least somewhat responsive to how
up close and personal a proposed action would be (pushing someone vs.
pulling a lever). But when this situational feature is singled out to be the
only difference between two otherwise identical cases, this factor will likely
end up playing a much larger role than when it is embedded in less
artificially constrained settings.
Moreover, people are of course randomly assigned to the experimental
or the control group (as they should be): but in real life, no such
randomization takes place (as it shouldn’t), and there are therefore strong
but morally relevant selection effects. For instance, people who are better
at coolly handling risky, life-threatening situations will be more likely to
end up in jobs in which such decisions need to be made. At any rate,
according to the ecological validity version of the unfamiliarity problem,
there is little reason to think that judgments about far-fetched scenarios
translate into judgments about – not to mention actions in – actual cases.
. Novelty
A second reading of the unfamiliarity problem has to do not with ecological validity – the extent to which experimental subjects’ moral judgments in the lab reflect nonexperimental people’s moral beliefs outside it –
but with the novelty of the tasks philosophically untrained folk have to
cope with. It should be unsurprising that a participant’s degree of familiarity with a given task can have an effect on just what kind of task the
respective subject actually confronts. This can complicate things especially
when lay responses are compared to those made by experts (Rini ).
But an even deeper problem for folk judgments about far-fetched and
novel cases is that sometimes, subjects are unlikely to have any firm moral
beliefs about them at all. Unfortunately, participants rarely admit (or
perhaps even notice!) this fact, and so they tend to make something up.
This is exacerbated by the subtle social pressure of taking part in an experiment where people feel that they have to give at least some response, even if
they do not have a genuine, worked-out opinion on the topic at hand.
Now it might be that even if a scenario is novel, once they have
encountered it for the first time, subjects simply generate an opinion
about the case presented to them, which from this point on constitutes their firm belief about the matter. However, this does not seem to be the case. Hall et al. () found that when confronted with novel scenarios – questions or tasks subjects did not have the time or occasion to develop firmly entrenched beliefs about – their willingness to simply invent responses, to readily change those responses, and to come up with confabulatory justifications for them can be quite dramatic.
In a particularly intriguing study, Hall and his colleagues used a sleight-
of-hand design to demonstrate the frailty of people’s moral intuitions.
Participants were given a clipboard with a series of statements with either
general (“Even if an action might harm the innocent, it can still be morally
permissible to perform it”) or specific (“The violence Israel used in the
conflict with Hamas is morally defensible despite the civilian casualties
suffered by the Palestinians”) moral content. They then had to indicate
their agreement with those statements on a -to- scale. After completing
the first page of the survey, they had to flip it back to get to the second
series of propositions.
Unbeknownst to them, a small patch had been glued to the back of the
clipboard on which the content of some of the statements had been
reversed (e.g., from permissible to impermissible). This patch was picked
up by the first page, so when they flipped it back again, the agreement they
had indicated for the original statements had turned into its opposite. Not
only did % of people not notice this change in content of at least one of
their statements (hence the term “moral choice blindness,” cf. Johansson
et al. ), but most subjects were also more than happy to provide
justifications for the moral beliefs they were told they had – moral beliefs,
I want to emphasize again, which they did not even (claim to) have in the
first place.
The fact that this effect was far less pronounced in subjects who were
politically active illustrates the main point: that the novelty of a case can trigger verbal statements that do not reflect genuine moral judgments about it. And this need not be because the request to articulate
reasons distorts people’s original beliefs, but because there is nothing to
distort in the first place. What seem to be moral beliefs are often really just
artifacts created by a request to say something about an issue people do not
actually have an opinion about. And as such, their evidential value is
limited as well.
Debunking Details: The Perils of Trolleyology
For the purposes of my discussion, the won’t/can’t distinction (Gendler ) makes no difference.
What matters is that there is imaginative resistance, not how it is best explained.
Debunking Arguments in Ethics
It is difficult to control for the extent to which subjects are unwilling/
unable to accept the parameters of a story and the real-world assumptions
they might tacitly smuggle in to make the stories more cognitively
palatable. However, in one study, Greene et al. () tried to control
for what they referred to as “unconscious realism.” Subjects were asked,
first, to suspend disbelief about vignettes, as they were not necessarily
supposed to be realistic. Those who indicated that they were unable to do
so (about %) were excluded from subsequent analysis. Second, subjects
were asked about how likely they thought it was for the proposed action
(e.g., pushing the fat man) to bring about the described effect (e.g.,
stopping the trolley). It was found that there was a significant effect on
moral acceptability ratings when subjects reported that they thought the
proposed action would actually make things go worse (rather than better
or as planned).
This approach is better than nothing, but since it shows that even
among those who reported being perfectly able to suspend disbelief,
assessments of realism had a significant effect on their judgments, it also
demonstrates the limitations of relying on people’s introspective competence.
It is plausible to assume that unconscious realism had an effect even
on those participants who had professed that it wouldn’t (Bauman et al.
, ff.). Imaginative resistance and “unconscious realism” thus remain
a problem.
Specificity
Trolleyological scenarios aim to single out, as precisely as possible, which
feature of a case subjects pick up on in making their moral judgments. This
type of information could be extremely useful, especially because in a second
step, these features can then be scrutinized in terms of whether they are
morally relevant and thus whether subjects should pick up on them at all.
But philosophers are a creative bunch, and so the fourth reading of the
unfamiliarity problem has it that the method of isolating very subtle
situational features quickly tends to run amok and make scenarios overly
specific. At a certain point, the differences between the cases used in
experiments become so intricate that they might well outstrip the conceptual
resolution of the moral principles our judgments are purportedly
based upon.
“Fans of trolley problems have suggested to me that these problems are intended to be
philosophically useful because they enable us to abstract in quite precise ways from everyday
situations, eliciting our intuitions about what is morally essential apart from the irrelevant
complexities and ‘noise’ of real world situations that get in the way of our seeing clearly what
these intuitions are” (Wood , ).
To see what I have in mind, let’s take a look at some nonempirical
trolleyology first. Frances Kamm (), for instance, argues that the
doctrine of double effect – according to which foreseen harm can be
permissible only when it is not directly intended – is not the only valid
nonconsequentialist constraint on harming. There are also the doctrine of
triple effect, the principle of permissible harm, and many more besides. The
normative distinctions carved out by these principles are of secondary
interest here; methodologically speaking, however, there must be a way
to test the validity of these principles, and the traditional approach recom-
mends that we test them against our intuitions about cases (for some general
remarks about the limitations of this method, see Kagan ). And the
more subtle the differences between the moral principles at issue become,
the more specific we need to make the cases eliciting the intuitions we
intend to use as evidence for whether we should be willing to accept those
principles.
Here is one of the cases Kamm wishes to test our intuitions with:
“[S]uppose that we must drive on a road to the hospital in order to save
five people, but driving on this road will cause vibrations that cause rocks
at the roadside to tumble, killing one person” (, ). This action, we
are asked to concur, is permissible. What about the following one? “This
case can be contrasted with a case in which we must drive on another road
which is made up of loosely packed rocks that are thereby dislodged and
tumble, killing a person” (). This action, on the other hand, is supposed
to be impermissible. Now, I do not want to take a stand on whether
Kamm is right about this. I merely wish to draw attention to the awesome
level of specificity our moral intuitions must be capable of in order to
produce useful data about the (im)permissibility of killing a person with
wayward rocks set in motion by vibration or with rocks dislodged due to
how delicately packed they were.
Empirically working moral psychologists are no stranger to this prob-
lem. In one of their more recent papers, Greene et al. () use no fewer
than four variations of the footbridge problem that differ in terms of the
degree of physical contact involved, the spatial proximity between agent
and patient, and the type of personal force (how one agent impacts another
with his or her muscles) that is used. Participants are asked whether it is
permissible to shove the large stranger down the bridge with their own
bare hands, whether they may push him with a pole, or drop him on the
tracks with a trapdoor while standing next to him or while standing in a
place that is more removed from the action.
I confess: I do not think that our moral intuitions are this fine-grained.
(Mine certainly aren’t and, for the record, neither are Mr. Norcross’s
[, ]). I do not wish to suggest here that moral judgments cannot
have a sophisticated internal structure (Mikhail ). What I do mean to
say is that we have no reason to assume that intuitive moral beliefs (and the
more or less general principles structuring them) whose content has been
shaped by and attuned to real-life eliciting conditions are specific enough
to produce useful responses about cases such as these. But let me empha-
size again that at this point, this need not worry trolleyologists too much.
People’s differential responses to such overly specific scenarios are still real,
and they still require an explanation. And if it should indeed turn out to
be true that participants hesitate less to push someone off a bridge when
they can use a pole rather than their bare hands, this might still be an
interesting datum.
Certainty
To explain the fifth version of the unfamiliarity problem, let me make a
short detour through familiar territory. Moral theories (though nothing of
substance hangs on this, I will focus on consequentialism here) frequently
draw a distinction between their subjective and objective variants. According
to subjective consequentialism, for instance, the rightness of an action is in
some sense information- or evidence-relative – agents ought to choose, say,
the option with the highest expected value. Objective consequentialists do
not perspectivalize their criterion of rightness in this fashion, arguing that
agents ought to perform the action that in fact maximizes the good.
The subjective/objective distinction originates in the question of
whether moral theories should supply a criterion of rightness – what are
the features that make an action right or wrong? – or a decision procedure –
how should moral agents decide which action to perform? The rationale
for this distinction is an epistemic one: ought implies can, and it thus
makes little sense, subjective theorists maintain, to tell agents to do what is
objectively best when they have no way of knowing what that would be.
Objective theorists insist on their standard, arguing that subjective theor-
ists conflate criteria of rightness with those of blameworthiness.
The distinction between a moral theory’s criterion of rightness and the
recommendations it makes for how agents ought to deliberate about what
to do leads to one final distinction needed here: the distinction between
Debunking Details: The Perils of Trolleyology
criterial and noncriterial reasons (Jacobson ). Some moral reasons
recommend an action because it would straightforwardly satisfy the
demands set up by the objective criterion of rightness. Some moral reasons
recommend an action because it is the action that it would, given the
available evidence, be rational or appropriate to perform. Moreover, some
moral reasons – the paradigmatically noncriterial ones – are such that their
explicitly specified content has nothing whatsoever to do with the criterion
of rightness, although taking them into account in one’s deliberation just
might be the kind of thing that will allow agents to do what is objectively
best or right. Trying to maximize happiness will typically not maximize
happiness. But being averse to incest because it is disgusting and being
emotionally disposed against shoving people off footbridges because it is
cruel may well be among the traits which, on average, lead finite human
beings to perform the best action (cf. also Lazari-Radek and Singer ).
Even fans of objective criteria of rightness have no problem admitting
that in general, people should not base their decisions directly on criterial
considerations. They should not, for instance, aim at maximizing the
good, because doing so will almost certainly not bring about the compara-
tively best outcome. Rather, acting on the basis of heuristics, tried-and-
true rules of thumb, deeply entrenched intuitions, and so forth will often
be best. So even though there might be disagreement between normative
ethicists about whether to prefer a subjective or an objective criterion of
rightness, there is widespread (though perhaps not universal) agreement
that what makes up the best available decision procedure is an entirely
different question. In real life, people ought to base their decisions on
noncriterial reasons, and this is what they are used to doing.
So the fifth and final version of the unfamiliarity problem is this: people
are used to basing their moral judgments on noncriterial reasons, because
these are the only ones available in real life. When making decisions, we
typically do not know what is actually best, which is why our decision
making is responsive to noncriterial moral reasons. Trolleyology, however,
does not allow us to avail ourselves of such reasons: given the full information
we have by stipulation, noncriterial reasons have become irrelevant. For reasons of
simplicity, far-fetched cases stipulate perfect outcome certainty, which
artificially removes the very rationale for being responsive to noncriterial
reasons – when people know exactly how to directly satisfy the objective
criterion of rightness, they may as well do that. More precisely: in trol-
leyology, people are still supposed to make decisions based on the evidence
available to them. But far-fetched scenarios stipulatively make perfect
evidence available to them, which may result in a misfiring of noncriterial
deliberation in idealized scenarios in which criterial deliberation would, as
an extreme exception, be appropriate.
Conversely, trolleyology not only asks subjects to imagine full
information about possible outcomes, thus making criterial reasons available
to them. It also makes characteristically noncriterial reasons unavailable
and demands that participants ignore considerations that would, in
virtually every real-life situation, be perfectly valid and would legitimately
inform their deliberation about a case. We are asked to ignore our relation
to the people in dire straits and merely consider the numbers. We are not
supposed to look for alternative courses of action (ain’t nobody got time
for that), consider questions of responsibility, or focus on the larger insti-
tutional context that allowed people to end up on train tracks or lonely
islands with too few lifeboats around. Trolleyology thus gives subjects
too much and too little at the same time: it gives us some information
about the consequences of our actions that we almost never have in the real
world while at the same time depriving us of a great deal of contextual
information that we almost always have in the real world. This radical
difference in information between the lab (or the armchair) and real life
means that the judgments obtained in the former situation are likely
irrelevant in the latter.
It should also be mentioned that more than one reading of the problem could be correct at the same
time, which constitutes aggravating circumstances for the difference argument.
Sosa () expresses this point as follows: “When we read fiction we import a great deal that is not
explicit in the text. We import a lot that is normally presupposed about the physical and social
structure of the situation as we follow the author’s lead in our own imaginative construction. And
the same seems plausibly true about the hypothetical cases presented to our [. . .] subjects. Given
that these subjects are sufficiently different culturally and socio-economically, they may because of
this import different assumptions as they follow in their own imaginative construction the lead of
the author of the examples, and this may result in their filling the crucial C [i.e., condition]
differently. [. . .] But if C varies across the divide, then the subjects may not after all disagree about
the very same content” ().
candidate moral principles. Proponents of the unfamiliarity problem argue
that when the employed cases become excessively specific, the resolution of
our moral principles becomes so inadequate that the moral intuitions
about those cases cease to be useful. The difference argument (I repeat
myself) aims to overcome this problem by suggesting that contrastive
judgments count nonetheless.
Notice, however, that once the messiness of real life is removed and only
isolated and extremely subtle differences between scenarios remain, it
becomes doubtful whether the differences in people’s responses track
anything interesting any longer, especially if we assume, plausibly to my
mind, that our practice of moral judgment has not been shaped in natural
and social contexts that are this neat, tidy, and purged from the impurities
of empirical reality.
Suppose that one group of participants receives one stimulus – a moral
emergency involving a lethal threat or another urgent matter of life and
death (think of Frances Kamm’s vibration-triggered tumbling rocks men-
tioned earlier). Another group receives a slightly different stimulus, also
with a lethal threat or another urgent matter of life and death (think of
Kamm’s loosely packed rocks). Now participants in both groups are asked
to make a judgment about the case they have received. Suppose we find
that on average, people from the first group are more likely to think that
the action is permissible than people from the second group. What do
these in vitro judgments tell us about morality in vivo?
If my speculations are correct and the moral principles patterning and
informing our moral judgments are not arbitrarily fine grained, then the
differential responses we obtain are based on factors whose excessive
specificity our moral cognition has not been trained to appreciate. This
may imply that in such book-smart cases, our street-smart – crude but
intelligent, rough but flexible – principles start to misfire and end up
seeing deontic differences where none are to be seen.
Suppose you have a microscope. This microscope has a certain reso-
lution; it can show you how, say, mitochondria are working under normal
conditions, as well as when the organism is under the influence of a drug.
But now suppose we also know that under certain conditions (e.g.,
when the dosage of the drug is too small, such that the difference between
normal and trial conditions is minuscule), this microscope is
illicitly influenced by factors operating on the nano-level – it still delivers
some data but fails to deliver any useful information about the workings of
the mitochondria. In that case, we would say that what it measures in the
latter case has no bearing on mitochondrial functioning, regardless of
whether the data it does deliver reliably track genuine differences between
conditions due to the influence of said drug. Trolleyology suffers from an
analogous problem.
For this terminology, see Buss () and Kennett ().
Let me emphasize that I do not wish to suggest that moral thinking
cannot appreciate complexity. Quite the opposite, in fact; we typically need
to get a sense of a complex web of claims, relationships, rules, contexts, and
narratives to develop a proper moral understanding of a situation. This is
the very reason why contrastive cases that have been purged from such
complexities may turn out to be impossible to penetrate. The messiness
of real life has disappeared – and, with it, a feature of the situation that is
morally relevant in its own right.
Finally, consider stipulated certainty and the importance of noncriterial
reasons. Far-fetched trolleyological scenarios are designed to give judging
subjects full and flawless information about the available options and their
objective outcome probability. This is a feature of those scenarios, not a
bug. The problem is that – as anyone knows who has ever tried to receive
some actual counseling from “A+” (Railton ) – we do not have perfect
information in real life.
To this, the trolleyologist replies in the now-familiar way. Whether we
have full information realiter is irrelevant to the evidential usefulness of
people’s judgments about far-fetched scenarios, as long as we find differen-
tial responses about contrastive cases in which agents have been equipped
with idealized knowledge about action outcomes.
Notice, however, that the differences we may find are due to differences
in responses to criterial reasons (because participants are given definitive
information about what would produce the objectively best outcome, thus
making such criterial reasons available to participants by stipulation). If it
is correct that in real life, we are never in possession of criterial reasons,
then we have to make do with noncriterial ones, at least in these contexts.
This entails, however, that people’s differential responses about far-fetched
scenarios are based on considerations that are and remain permanently
inaccessible to us. Even worse, differential responses are based on cases
that deliberately obstruct features that would most likely be very relevant
to our moral deliberation in real life. Trolleyological scenarios thus
explicitly ask participants to think in morally obtuse terms.
Thanks to Regina Rini for alerting me to this point.
The “Asian disease” case, for instance, operates with probability distributions rather than %
certainty regarding possible outcomes, but here, the probabilities themselves are known precisely
and unambiguously.
This overstates the case a little, as there are plenty of cases in which one option is clearly better and
overwhelmingly more likely to occur.
Outside the lab, we do not know for sure whether we have exhausted
our options or if the fat man will stop the trolley and save the five. It is thus
rational, as an indirect way of satisfying a normative theory’s objective
criterion of rightness, to respond to noncriterial reasons in one’s decision
making: it would be cruel murder to intentionally kill a person for a
desired end and should therefore be avoided. If trolleyology is supposed
to have any value for real-life trade-offs between entrenched moral prin-
ciples and the greater good – for instance, in torture/ticking-time-bomb
cases or when the issue at hand is whether a plane that has been hijacked
by terrorists ought to be shot down – then the elimination of noncriterial
reasons from the considerations subjects are allowed to avail themselves of
in arriving at their moral verdicts is a very bad thing indeed.
Moreover, it is plausible to assume, in line with the point about
imaginative resistance made earlier, that experimental subjects do base
their decisions on noncriterial reasons, albeit unconsciously. This renders
their differential responses even less interesting, from the normative point
of view, because what we have now are judgments about contrastive cases
that, to us, seem to be justified by nothing that could possibly be found in
the fictional universes given to people. To some, this might suggest that,
since the consequentialist response to the standard trolley case is obviously
the correct one, and since there are no relevant differences between Trolley
and Footbridge, the nonconsequentialist response to Footbridge must be
unwarranted. But if things are as I speculate and subjects base their
judgments on noncriterial reasons as well, then this conclusion lacks
support.
From the perspective of the distinction between criterial and noncriter-
ial reasons, the problem for proponents of the difference argument is thus a
twofold, dilemmatic one: either subjects do pick up on the criterial reasons
made available to them by the outcome certainty stipulated in the artificial
settings of trolleyology, in which case the judgmental differences which are
found bear no recognizable relation to the trade-offs between moral
principles and the greater good people are likely to make in real life, as
these trade-offs are only ever made on the basis of noncriterial reasons –
these reasons being the only ones that are accessible to actual people in
actual situations. Or people’s decisions about far-fetched scenarios are
driven by noncriterial reasons, in which case the judgmental differences
that are found contain no normatively interesting information anymore, as
participants’ differential responses may be covertly driven by morally
relevant noncriterial reasons the vignettes have wantonly abstracted
away from.
Conclusion
I close with some fairly straightforward advice. Here is the main lesson
I believe should be drawn: philosophy, whether conducted from the
armchair or in the lab, will continue to come up with thought experi-
ments, hypothetical scenarios, and other curious, far-fetched cases. And in
principle, there is nothing wrong with contemplating (more or less) remote
possible worlds. In fact, one accidental upshot of my discussion might
consist in a preliminary checklist of problems that indirectly specifies
under what conditions the use of thought experiments is innocuous and
under what conditions it should be avoided. My advice, then, is that when
employing far-fetched scenarios to debunk particular intuitions and gen-
eral principles, philosophers (and philosophically minded psychologists)
should come up with scenarios that strive for ecological validity, avoid
extreme novelty, do not invite imaginative resistance, work around exces-
sive specificity, and take into account the deliberative relevance of non-
criterial moral reasons. “[T]hinking productively about ethics requires
thinking realistically about humanity” (Doris , ); let’s start taking
this slogan seriously.
Singer’s () famous “drowning child” scenario is a good example for a test case that seems to be
free from the problems identified in this chapter (though it might have others, of course).
Introduction
Trolleyologists suggest that deontological intuitions are unjustified because
they stem from alarm-like responses ill-suited for the modern world
(Greene ). I have argued that the available empirical evidence does
not support this claim because the cases used to elicit deontological or
consequentialist intuitions are too far-fetched and unrealistic to be of any
evidential value: they are too clean, too novel, too imaginatively unpalat-
able, too specific, and too stipulative.
But trolleyology is not the only way of debunking deontological
moral judgments. If experimental philosophy has taught us anything
(and I think it has), then it is that our judgments are often influenced
in surprising or perhaps even disturbing ways by external or internal
factors we are unaware of (Rini ). Even training and expertise do
not seem able to render us immune to those influences (Buckwalter
). For instance, experimental philosophy has convinced many that
a large part of our social cognition – that is, how people figure out other
people – is influenced by what seem to be normative considerations.
People asymmetrically attribute various agential features such as inten-
tionality, knowledge, or causal impact to other agents when something
of normative significance is at stake. This phenomenon has come to be
known as the Knobe effect (Knobe and ). It is also sometimes
referred to as the side-effect effect (henceforth: SEE), because in its
original version, it was found that subjects judged a side effect to
be brought about intentionally when it was bad but not when it was
good. Since then, similar asymmetrical patterns in people’s judgments
have been found that didn’t concern either the concept of intentionality
or the concept of a side effect (see, for instance, Beebe and Buckwalter
, Cova and Naar ).
The exact wording of all the vignettes I refer to in this chapter can be found in the footnotes.
Debunking Doctrines: Double or Knobe Effect?
What should be made of this finding? Some have argued that it has
unexpected normative implications. For instance, it has been suggested
that the Knobe effect makes the intuitions that play a role in justifying the
doctrine of double effect – or any other nonconsequentialist moral
principle that attaches at least some normative relevance to intentions
(Kamm ) – problematically unreliable (Levy , Cushman ;
cf. Michael ). Neil Levy has suggested that the doctrine of double
effect may get things backward:
The central worry I sketch here concerns our mechanism of attributing
intentions. According to the doctrine of double effect, an action is permis-
sible if bad side effects are foreseen but not intended [. . .]. According to the
rival view I now sketch, a state of affairs that is a foreseen effect of an action
that is (plausibly) held to aim at some other goal is judged to be unintended
if (inter alia) the action is judged to be permissible. If that’s right, then
the doctrine of double effect will simply reflect the moral intuitions of its
proponents; the rationale offered will be mere confabulation. That is, the
permissibility judgment will not be an output of the doctrine; instead,
the doctrine will generate a permissibility judgment only because of a prior
assessment of the acceptability of the action. (, )
The idea is that we can construct a debunking argument against deonto-
logical intuitions by showing that it is not the case that we typically sort the
outcomes of actions into intended and merely foreseen ones, which then
enables us to assign the respective deontic statuses to those two categories
by applying the doctrine of double effect. Rather, what seems to be going
on is that the normative evaluation of actions lies upstream of our
classification of outcomes as intentional. First, we make a tacit moral
evaluation of an action; then, we decide on the basis of that evaluation
which outcomes to consider intentional or merely foreseen; finally, we
confabulate a post hoc justification in the form of the doctrine of double
effect, which has done no real causal or justificatory work in the process.
If this is correct – and studies on the SEE suggest that it may well be –
then the doctrine of double effect may turn out to be question begging.
However, whether this debunking of the doctrine is successful will depend,
to a large extent at least, on whether we have reason to think that the
Knobe effect represents a legitimate or an illegitimate influence of
normative considerations on judgments of intentionality, for a debunking
of this type, in order to be successful, needs to single out some way in
which the process of judgment formation is epistemically defective. But if
the influence of normative considerations on judgments of intentionality is
a feature rather than a bug, then this requirement has not been met.
Although there is no consensus on this issue, I see no reason to think that cases that do not involve
side effects but, say, means to an end should not be considered instances of the same effect.
In a sense, of course, the doctrine would still be question begging even if
the Knobe effect were legitimate. That is, it would remain true that our
attributions of intentionality are not independent of the doctrine of double
effect and vice versa. But if attributions of intentionality just are suffused
with morality all the way down (Knobe ), then it seems odd to use
this fact against the doctrine, which would then simply articulate a basic
fact about the structure of our moral cognition – namely, that intentional
harm is worse than merely foreseen harm, even if there is no way of
judging intentionality that is independent from judging worseness.
Let me emphasize that in what follows, I do not wish to defend the
doctrine of double effect (DDE) as such. In fact, I do not fancy it very
much. But I will provide no argument for this antipathy here. My sole aim
is to show that the debunking of deontological intuitions on the basis of
evidence for the Knobe effect does not succeed. There may be other
grounds for rejecting the doctrine, even if this is not one. Let me also
emphasize, by the same token, that I will not discuss the merits of the
doctrine in any philosophical detail (Nucci ). I am merely focusing on
one attempt to debunk the evidential value of the intuitions that are
frequently used to justify some version of the doctrine. All further subtle-
ties will be ignored here.
Evidence for the SEE can allegedly be used to debunk certain deonto-
logical moral principles. I will argue that three questions regarding the SEE
are of primary interest. First, there is the methodological question of how the
effect ought to be explained. In particular, it is of interest whether a
unifying explanation of the various instances of the SEE should be sought
(Knobe , Alfano et al. , Webber and Scaife ) or whether the
found asymmetries should be accounted for separately, one at a time
(Hindriks and , Sripada ). Second, there is the substantive
question regarding which of the available explanations of the effect is the
correct one (or, if there is more than one, which are the correct ones). Does
it stem from the impact of moral considerations on psychological attribu-
tions (Pettit and Knobe )? Or is it based on a more descriptive
mechanism (Machery , Uttich and Lombrozo , Sripada and
Konrath )? Third, there is the normative question of whether the
judgments constituting the effect are correct. In attributing intentionality
or knowledge asymmetrically across different conditions, are subjects
drawing a legitimate distinction (Knobe and , Hindriks ,
Alfano et al. )? Or are they making an error (Sauer and Bates ,
Nadelhoffer , Pinillos et al. )?
This chapter is about the normative question and aims to develop a
nuanced answer to it in four sections. In (.), I discuss a complication that
arises for anyone who tries to approach the normative question: on the face
of it, neither of the aforementioned three questions can be addressed in
isolation, and the answer one is inclined to give to one of them has
important ramifications for which answer one can plausibly give to the
others. In (.), I will develop a way around this issue by arguing that
independent methodological considerations of parsimony and explanatory
power recommend a unifying explanation of the widest possible scope of
SEE cases. I will briefly consider two SEE cases that many (though of course
not all) available models of the effect have found it unnecessary to account
for and argue that a comprehensive account needs to take them into
consideration as well. In (.), I aim to develop a template for a substantive
unifying explanation of the effect that I refer to as the obstacle model. This
model bears some similarities to previous accounts of the effect (Hindriks
, Machery , Holton ) but aims to integrate their advantages
on a higher level of abstraction. The main goal of the model is to pick out
what all instances of the SEE have in common, but – and this is the
important difference – to do so in a way that remains neutral about whether
the observed asymmetries are legitimate. I will support this model with the
latest empirical evidence regarding the nature of the effect to show that the
obstacle model achieves two desiderata: it does not arbitrarily restrict
the range of cases a suitable explanation should account for, and it leaves
open how the normative question ought to be answered. With this in hand,
I will then propose a solution to the normative question (.). I will first
explain why three criteria that are often brought to bear on this question are
actually irrelevant to it and continue with a brief discussion of some
empirical evidence suggesting the effect might be an error. I cast doubt on
the relevance of this evidence as well and suggest that my unifying explanation of the effect recommends deciding, one agential feature at a time, whether the observed asymmetries should count as legitimate.
It should be noted that reflective endorsement accounts of the kind proposed here cannot solve all
remaining problems. Take Nichols and Knobe’s () study on people’s intuitions about
responsibility and free will: subjects turn out to have inconsistent intuitions about the
compatibility of free will and determinism, depending on whether they are given a scenario
describing a concrete action (for example, a case of murder) or whether they consider this
question in the abstract. When subjects are shown the results and are given the opportunity to
resolve this tension, no consensus is reached. Half of the subjects chose to hold on to their
compatibilist judgments, the other half to their incompatibilist intuitions. In such a case, the
reflective endorsement account yields no clear verdict.
Debunking Doctrines: Double or Knobe Effect?
Back to the finding. In one case, for example, the agential variable
subjects were asked about was whether a subject acted freely (Nichols and
Knobe ). One so-called abstract condition posed a question about
whether a man who inhabited a deterministic universe could be held
responsible for his actions if all of them were guaranteed to happen by
the events of the past in combination with natural laws. Subjects over-
whelmingly said no. However, when this story was filled with concrete
content about a man who decides to kill his wife and three children in
order to live with his mistress, people overwhelmingly judged that the man
could be held responsible and could have acted freely even though he
inhabited the same deterministic universe. (Here, it seems that the desire
to hold the man responsible trumps subjects’ abstract perspective on the
case. But let's not get ahead of ourselves.) It should be noted that many
would not consider this free will case to belong to the family of SEE cases.
By mentioning it here, I already wish to draw attention to the fact that the
boundaries of the effect are anything but clear. However, nothing of
systematic importance for the following argument hinges on whether this
particular case is included.
In the famous original version of the SEE (Knobe ), a chairman is
asked whether to implement a new program that will make profit and
either harm or help the environment. The chairman wants to make profit
but does not really care about the program’s respective side effect on the
environment. Subjects are asked whether he brought about the side effect
intentionally. Participants attributed intentionality to a higher degree
when the side effect was bad.
Free Will
Imagine a universe (Universe A) in which everything that happens is completely caused by
whatever happened before it. This is true from the very beginning of the universe, so what
happened in the beginning of the universe caused what happened next, and so on right up until
the present. For example one day John decided to have French Fries at lunch. Like everything else,
this decision was completely caused by what happened before it. So, if everything in this universe was
exactly the same up until John made his decision, then it had to happen that John would decide to
have French Fries.
[. . .]
Concrete
In Universe A, a man named Bill has become attracted to his secretary, and he decides that the
only way to be with her is to kill his wife and children. He knows that it is impossible to escape
from his house in the event of a fire. Before he leaves on a business trip, he sets up a device in his
basement that burns down the house and kills his family.
Is Bill fully morally responsible for killing his wife and children?
[. . .]
Abstract
In Universe A, is it possible for a person to be fully morally responsible for their actions? (Nichols
and Knobe , f.)
This is the effect. It is surprising and raises deep philosophical questions
about the nature of human agency and the connection between social
cognition and moral judgment. Naturally, people are interested in what it
is all about, and this has generated a whole subfield of discussion about the
effect in moral psychological circles (cf. the discussion in Knobe ).
Why does this effect occur? How can it be explained? And, not least, is it
legitimate?
The problem with these three questions is that they are not easily
disentangled. Consider, first, how the methodological question of how to explain the effect impacts the normative question of whether it is correct: it
makes a difference to the latter whether we think that a unifying explan-
ation of the SEE is preferable. If we take one case/vignette at a time, we can
make sense of the observed judgmental patterns piecemeal. If we look at
them all at once, no single rationalizing explanation seems to work for all
of them.
Now take Sripada and Konrath’s deep self-concordance (Sripada and
Konrath ; henceforth: DSC) model of the SEE. This model is expli-
citly designed to account only for Knobe’s original chairman vignette.
Accordingly, it classifies the asymmetry as legitimate, as subjects’ inten-
tionality judgments can be explained in terms of whether the evaluative
status of the side effect is concordant with the indifferent attitude toward
the environment explicitly expressed by the chairman. However, this
explanation fails at making sense of many of the other SEE cases. If deep
self-concordance is what drives subjects’ judgments, then the DSC model
has to classify most other cases in which judgments do not track deep self
variables (Alicke , Beebe and Buckwalter ) as illegitimate.
Moreover, any answer to the substantive question has an impact on
answers to the methodological question. Determining which explanation of the SEE is correct presupposes an answer to the question of which SEE cases to
account for with one’s explanation, because it is unclear which vignettes
we ought to include in the set of cases that our account is supposed to be
able to explain in the first place. Should we include cases of causation in
Chairman
The vice-president of a company went to the chairman of the board and said, “We are thinking of
starting a new program. It will help us increase profits, but it will also [harm/help] the environment.”
The chairman of the board answered, “I don’t care at all about [harming/helping] the
environment. I just want to make as much profit as I can. Let’s start the new program.”
They started the new program. Sure enough, the environment was [harmed/helped]. (Knobe
)
For further empirical challenges to the DSC model, see Cova and Naar (b) and Rose et al.
().
addition to cases of intentionality (Alicke , Knobe and Fraser )?
Should we include cases that ask for the intentionality of incurred costs
versus accepted benefits (Machery )? Are these SEE cases at all? If one
thinks that the substantive explanation of the effect should be framed in
terms of the impact of moral considerations on the attribution of psycho-
logical states, then cases that are either normatively ambiguous (such as
Knobe’s Nazi Law case) or morally neutral (Uttich and Lombrozo
) are not included in the set of relevant cases an SEE model must be
able to account for.
Needless to say, the normative question has an impact on the other two,
because whether we think that the effect is legitimate makes a difference to
which cases we want to include in our explanation and in what terms we
want this explanation to be cashed out. If one agrees that the SEE
constitutes an error in judgment, one will presumably be prepared to
include a more heterogeneous set of cases in one’s data set. If one aims
to rationalize the asymmetry, a more homogeneous selection of cases is
called for.
I take this to be a very rough characterization of the methodological
problems surrounding the effect. But these complications need not make
us despair. In the following section, I wish to suggest where to start in
devising an explanation of the SEE.
Nazi Law
In Nazi Germany, there was a law called the “racial identification law.” The purpose of the law
was to help identify people of certain races so that they could be rounded up and sent to
concentration camps. Shortly after this law was passed, the CEO of a small corporation decided to
make certain organizational changes. The vice-president of the corporation said: “By making those
changes, you’ll definitely be increasing our profits. But you’ll also be violating/fulfilling the
requirements of the racial identification law.” The CEO said: “I don’t care one bit about that. All
I care about is making as much profit as I can. Let’s make those organizational changes!” As soon as
the CEO gave this order, the corporation began making the organizational changes.
Let me support this by way of an example. Recently, Mark Alfano et al.
() have developed a unifying explanation of the SEE that covers a
fairly wide range of instances of the effect. It classifies the asymmetry as
rational. They argue that the asymmetrical pattern in people’s attributions
of mental states such as intentionality or knowledge reflects a deeper
asymmetry in the attribution of beliefs to the agents described in the
vignettes and that this belief-attribution pattern is rational because of the
pragmatics of social cognition.
Agents simply do not have time to come up with elaborate theories of
which mental states other subjects have, which is why it is rational for
them to follow a number of rough-and-ready rules of thumb in attributing
them. These rules of thumb make it rational to attribute beliefs to agents
about, for example, the side effects they bring about only when not doing
so would be particularly problematic; moreover, they make it rational not
to attribute such beliefs when doing so would be unreasonably costly.
To see how this approach works in more detail, we need to look at the
interaction of two belief-formation heuristics that, according to Alfano
et al., not only unify the asymmetries found in SEE-style cases but also
rationalize them. First, they argue, plausibly to my mind, that many
(though, importantly, not all) of the agential variables the attribution of
which is investigated by SEE studies require a belief-condition to be
satisfied. Knowing that p requires believing that p, intending p requires
believing that one’s action will result in p, and so forth. The asymmetry,
then, is grounded in the fact that people rely on the following belief-
formation heuristic:
(H) If another agent a’s φ’ing would make it the case that p and p violates a
norm salient to a, then attribute to a the belief that φ’ing would make it the
case that p. (Alfano et al. )
The asymmetry in the judgmental patterns is then explained by the fact
that, according to (H), belief attribution is recommended only in cases in
which p violates a salient norm.
Alfano et al. emphasize that (H) accounts for many cases but does not
yet make it intelligible. Is it rational to rely on this heuristic? Note, first,
that Alfano et al. argue that “any theory that entails widespread irration-
ality is prima facie implausible, so we need to argue that employing
[belief-formation heuristics such as (H), Author] is not irrational” ().
This seems false to me. They are right to point out that any theory that
entails that most people are by and large irrational is implausible; however,
this does not entail that any theory that holds that various cognitive
subfields – such as probabilistic cognition or mental state attribution – are
deeply irrational must be implausible as well. In fact, there is a lot of
empirical evidence for the fact that in various subfields, human irration-
ality is indeed widespread (Tversky and Kahneman , Kahneman and
Tversky ). Thus such general considerations cannot support a ration-
alizing explanation of the effect. However, Alfano et al. manage to come
up with a second heuristic that shows why the asymmetry posited in (H)
might indeed be rational:
(H’) If my own φ’ing would make it the case that p and p violates a norm
salient to me, believe that φ’ing would make it the case that p. (Alfano et al.
)
This norm rests on the simple observation that, though true beliefs are in
general preferable to false beliefs, some true beliefs are preferable to others.
More precisely, it is better to have those true beliefs that it is particularly
important to have. And because one may be sanctioned for violating a
norm or for doing something bad but not for following it or for doing
something good, selectively attributing (to oneself and to others) beliefs
about the bad effects of actions to a higher degree than about their good
effects is rational.
Because beliefs regarding the effects of actions described in the vignettes
are part of the set of necessary conditions for the respective agential
variables (such as knowledge or intentionality) the scenarios ask subjects
about, subjects’ judgments display an asymmetrical pattern. Belief is a
necessary condition for knowledge, desire, intentionality, and so forth, and
therefore, the asymmetrical attribution of those underlying beliefs indir-
ectly leads to an asymmetrical attribution of the other mental states the
presence of which beliefs are a necessary condition for. This leads Alfano
et al. to classify the SEE as legitimate, because it is rational to follow said
rules of thumb in one’s daily business of mental state ascription.
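The two-step structure just described can be sketched as a toy decision rule. This is only an illustrative rendering under my own encoding; Alfano et al. state the heuristics in prose, and the function names and boolean inputs here are my assumptions, not their formalism:

```python
def attribute_belief(violates_salient_norm: bool) -> bool:
    """Heuristic (H), schematically: attribute to the agent the belief that
    her action brings about the side effect only when that side effect
    violates a norm salient to her."""
    return violates_salient_norm

def attribute_intentionality(pursues_main_goal: bool,
                             violates_salient_norm: bool) -> bool:
    """Belief is treated as a necessary condition for intentionality, so the
    asymmetry in belief attribution propagates upward to intentionality."""
    return pursues_main_goal and attribute_belief(violates_salient_norm)

# Chairman vignette: the agent's epistemic situation is identical in both
# conditions, yet the rule yields asymmetric verdicts.
harm = attribute_intentionality(pursues_main_goal=True, violates_salient_norm=True)
help_ = attribute_intentionality(pursues_main_goal=True, violates_salient_norm=False)
print(harm, help_)  # True False
```

The sketch also makes the account's limitation visible: nothing in it applies to agential variables, such as causal impact, for which belief is not a necessary condition.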
However, we also see how Alfano et al.’s answer to the normative
question is affected by the selection of cases their model aims to account
for. Cases in which the agential variable at issue is causation rather than
intentionality, desiring, or knowledge, for example, are conspicuously
missing from the account, and this has important implications for this
model’s take on the normative question. If, to use just one example, the
same pattern of asymmetrical judgments can be observed in subjects’
judgments about the described agents’ causal influence on the events a
particular vignette describes, then the belief-heuristics account fails to
explain them, because whether an agent had a certain type of causal impact
on the unfolding events does not depend on the described agents’ beliefs –
in contrast to cases that ask about knowledge or intentionality – because
believing that something will happen is not a necessary condition for
actually making it happen. (Similar observations seem to apply to cases
that feature free will [see earlier] or the doing/allowing distinction, Cush-
man et al. ).
Consider Alicke’s driver case (, ). In this scenario, a driver
speeds home and becomes involved in an accident that is due to the fact
that a second driver runs through a stop sign at an intersection. The
difference between the two conditions of this vignette is that in one,
the driver rushes home to hide an anniversary gift for his parents, while
in the other, he wants to hide a vial of cocaine. People are then asked about
the extent to which the first driver was the primary cause of the accident.
Unsurprisingly enough, they attribute more causal impact to him in the
cocaine than in the gift condition. Now believing that a certain effect will
occur as the result of one’s action is a necessary condition both for
intending it to occur as well as for knowing about the fact that it occurred.
But this is not so in the case of causal influence. Therefore, an underlying
belief-attribution heuristic neither explains nor justifies the asymmetrical
judgmental pattern found in this case.
Consequently, including cases of causation in one’s unifying explan-
ation, and thereby widening its explanatory scope, makes an important
difference to one’s answer to the normative question. If an asymmetrical
judgmental pattern is rationalized by an underlying belief-attribution
heuristic, but such a heuristic is irrelevant for questions of causation, then
the rationalizing power of this heuristic does not extend to asymmetrical
judgments about causation.
I do not have a knock-down argument for why a unifying explanation of
the effect must be able to account for all SEE-style cases. However, I wish
to suggest that the very motivation for developing a unifying rather than
Driver
John was driving over the speed limit (about mph in a mph zone) in order to get home in
time to [hide an anniversary present for his parents that he had left out in the open before they could
see it/hide a vial of cocaine he had left out in the open before his parents could see it].
[. . .]
As John came to an intersection, he applied his brakes, but was unable to avoid a car that ran
through a stop sign without making any attempt to slow down. As a result, John hit the car that was
coming from the other direction.
[. . .]
John hit the driver on the driver’s side, causing him multiple lacerations, a broken collarbone, and
a fractured arm. John was uninjured in the accident. (Alicke , )
more specific account of the SEE is the same motivation one needs to
make a maximally wide explanation seem preferable as well. Holton, for
instance, writes:
“Various explanations of these results have been offered. [. . .] But most
have been piecemeal, accounting for one finding or another. Ideally we
want an explanation that accounts for all of them in a unified way. This is
what I try for here. More than that though, I don’t simply aim to explain
the findings: I aim to justify them. For I think that the subjects of the
experiments are quite right in the ascriptions that they make. There is an
asymmetry here” (, ).
But if this is the case, then one must be wary of including those and only
those instances of the finding that support one’s preconceived idea
regarding the legitimacy of the effect. Moreover, it seems that a model of
the effect that is at least as plausible as, say, the belief-attribution account
but more successful at unifying the various instances of the effect should be
considered superior.
Before I proceed, let me consider three important objections to a
comprehensive account of the effect that do not employ methodological
principles such as simplicity or parsimony.
One way to argue for a nonunified account of the SEE would be to
point out that the concepts under scrutiny here, though all subject to the
Knobe effect, are not all subject to other, less-often-discussed asymmetries.
This might be taken to suggest that despite some superficial similarities,
there are fundamental differences underlying the various concepts, which
would warrant singling each of them out for special treatment. For
instance, there are studies suggesting that the concept of intentionality is
subject to the lesser-known “skill effect,” in which subjects are more likely
to say that an agent intentionally brought about something bad even
though, due to a lack of skill, the described agent had little control over
the outcome of his action (Cova et al. , ff.). This effect has thus
far not been extended to other concepts such as desire or causation, which
might mean that there is something unique about the concept of
intentionality. Future studies will have to show whether this is indeed
the case.
Moreover, there is evidence suggesting that the concept of knowledge,
but not the concept of intentionality, is subject to the “probability effect.”
Dalbauer and Hergovich () could show that participants attribute
more knowledge to the chairman when the side effect was bad even in
cases in which the help condition made clear that a positive outcome for
the environment would be far more likely. Considerations of probability
do not seem to play a similar role in the original, nonepistemic effect. On
the other hand, it is perhaps not too surprising to find that the different
concepts under investigation do not behave identically in all respects. They
are different concepts, after all. It is far from clear whether differences in
the general inferential structure between, say, the concepts of knowledge
and intentionality are enough to establish that the influence of normative
considerations on their application ought not to be considered one and the
same effect.
A third problem with wide scope accounts of the SEE is exactly how
wide this scope should be. It might be intuitively plausible to think that
intentionality and desire ought to be accounted for by the same model. But
once we have extended our model to the concept of causation and others,
how do we know when to stop? People might asymmetrically attribute all
kinds of traits on the basis of all kinds of features, but lumping them all
together seems to be of little use. This is a tricky problem that I do not
have a satisfying solution for. (To my knowledge, neither do others.) For
the purposes of this chapter, my suggestion would be to include those and
only those findings as instances of the effect that involve the asymmetrical
attribution of agential features when something of normative or evaluative
relevance is at stake, and, importantly, not to make one’s decision
regarding the question of which candidates to include depend on a prior
conviction about whether to consider subjects’ judgments to be legitimate.
Above and beyond this recommendation, it seems futile to me to look for
any a priori criteria regarding what to count as an SEE and what not.
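The inclusion criterion just proposed can be put as a simple predicate. A minimal sketch, assuming a boolean encoding of each condition; the parameter names are mine, not the book's:

```python
def counts_as_see_case(attributes_agential_feature: bool,
                       attribution_is_asymmetric: bool,
                       normative_or_evaluative_stakes: bool) -> bool:
    """A finding counts as an instance of the SEE iff it involves the
    asymmetrical attribution of an agential feature when something of
    normative or evaluative relevance is at stake. Deliberately absent:
    any parameter encoding whether the judgments are legitimate, since
    inclusion must not depend on a prior verdict on the normative question."""
    return (attributes_agential_feature
            and attribution_is_asymmetric
            and normative_or_evaluative_stakes)

print(counts_as_see_case(True, True, True))   # chairman, driver, Nazi Law
print(counts_as_see_case(True, True, False))  # asymmetry without stakes: excluded
```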
This might strike some as an unsatisfying concession. More precisely,
the suggestion that the effect consists of the attribution of agential features
in general (including, for instance, causal impact) rather than merely
psychological states of individuals (such as intentionality or knowledge but
excluding causal impact) could seem arbitrary, especially since my main
complaint about some available models (same section earlier) was that they
cannot successfully account for asymmetrical attributions of causality. The
main reason why I think that causal impact and other possible nonmental
features ought to be included is that one of the main expected payoffs of a
good understanding of the SEE is that such an understanding would –
albeit perhaps only indirectly – contribute to an improved understanding
of how attributions of moral responsibility work, and causal responsibility
is a necessary condition for moral responsibility (at least in paradigm cases
and including cases of omissions).
I have argued that it is preferable to devise one comprehensive model
of a psychological effect because of general considerations regarding
psychological methodology. The fewer psychological principles the better.
But this also means that, unless we have reason to do otherwise, sufficiently
similar findings should be accounted for on the basis of those principles
once we have them. And since it is widely agreed (see, for example, Knobe
or Sauer and Bates ) that cases of causation belong to the same
family of SEE cases, the best explanation of the effect should not arbitrarily
exclude them for the sake of rationalizing the observed judgmental
patterns.
Gizmo
The vice-president of a company in the Gizmo industry went to the chairman of the board and
said, “We are thinking of starting a new program. It will help us increase profits, but it will result in
our Gizmos being colored black. The convention is to make Gizmos colored darker than blue, so
we would be complying with the convention.” [The convention is to make Gizmos colored lighter
than blue, so we would be violating the convention.]
The chairman of the board answered, ‘‘I don’t care at all about the color of the Gizmos. I just
want to make as much profit as I can. Let’s start the new program.”
They started the program. As it happened, the Gizmos were black, colored darker than blue.
(Uttich and Lombrozo , )
Jessica
Jessica lives in a neighborhood where everyone (including Jessica herself) happens to own a dog.
One afternoon, she is planning to go for a walk and decides not to/to take her dog. Her friend
Aaron says, “Jessica, if you go out like that, you will/won’t be doing what everyone else is doing.”
Jessica responds, “I don’t care at all what everyone else is doing. I just want to go for a walk without/
with my dog.” She goes ahead with her plan, and sure enough, she ends up doing what no one/
everyone else is doing. (Alfano et al., )
Findings like these have made it more and more obvious that the asymmetry cannot be due to
moral considerations per se and that these are simply one kind of feature
among others that can trigger the effect. Take Machery’s () extra
dollar case. This scenario describes an agent who orders a very large
smoothie; he learns that the drink now costs one dollar more than it used
to; he orders it anyway. Subjects consider the paying of the extra dollar to
be intentional. When the agent learns that the large smoothie now
automatically comes with a commemorative cup, subjects do not think
that the described agent obtained the cup intentionally. Here, too, there is
an obstacle (paying more money) that renders the action (of paying the
extra dollar) intentional. However, although the obstacle model is some-
what similar to Machery’s trade-off hypothesis, it is more general; not all
SEE asymmetries involve costs that function as obstacles, but all of them
involve some obstacle. Suitable obstacles for the Knobe effect can thus
include disvalues, moral norms, conventional norms, and incurred costs,
but the effect has also been replicated with prudential norms (Knobe and
Mendlow ), aesthetic values (Knobe ), and many others. It thus
seems that the list of obstacles triggering the effect is long. The agential
variables, then, whose attribution can be affected by the presence of said
obstacles are knowledge, intentionality, desire, causal impact, and many
others.
I have argued that the obstacle model successfully accounts for moral
and nonmoral cases alike. But does it also generalize over all agential
variables? The belief-heuristics account, for instance, has problems with
Alicke’s driver case, because differences in attributions of causal impact
cannot plausibly be traced back to underlying differences in belief attribu-
tion. I can only speculate here, but it seems to me that the obstacle model
does not suffer from similar problems. One natural way for this model to
explain asymmetrical causality judgments would be to say that in the
cocaine condition, the driver has more causal impact because he has an
especially strong motive to go through with his action of getting home
regardless of what he might cause to happen on the way. And since this
Extra Dollar
Joe was feeling quite dehydrated, so he stopped by the local smoothie shop to buy the largest
sized drink available. Before ordering, the cashier told him that the Mega-Sized Smoothies were
now one dollar more than they used to be. Joe replied, “I don’t care if I have to pay one dollar more,
I just want the biggest smoothie you have.” Sure enough, Joe received the Mega-Sized Smoothie
and paid one dollar more for it. Did Joe intentionally pay one dollar more? (Machery , ). It
should also be noted that this case is considered inadequate by some (Mallon , Phelan and
Sarkissian ), as the extra dollar seems to be a means rather than a side effect, thus constituting a
different and presumably much less surprising finding.
model does not reduce the observed asymmetries to another asymmetry in
people’s judgment about the prerequisite mental states, the model is not
falsified by cases of causation for which differences in underlying mental
states do not matter.
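The obstacle model's neutrality across agential variables can be made vivid with a schematic sketch. The Likert-style scale, the numeric boost, and the function itself are my own illustrative assumptions; the model says only that a perceived obstacle amplifies attribution, whatever the variable asked about:

```python
def attribution_strength(variable: str, baseline: int,
                         obstacle_perceived: bool, boost: int = 2) -> int:
    """Attribution of any agential variable (intentionality, knowledge,
    desire, causal impact) on a 1-7 scale, raised when the attributor
    perceives an obstacle the agent overcame; capped at 7."""
    return min(7, baseline + (boost if obstacle_perceived else 0))

# Driver case: asymmetric causal-impact judgments, which belief-based
# heuristics cannot reach, fall out of the same rule.
print(attribution_strength("causal impact", 4, True))   # cocaine condition: 6
print(attribution_strength("causal impact", 4, False))  # gift condition: 4
```

Because no parameter encodes an underlying belief, the sketch is not falsified by cases of causation in which the agent's mental states are irrelevant.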
I have said in the introduction that the obstacle model I aim to sketch is
supposed to remain neutral with respect to the normative question. One
can now easily see why this is so: the model holds that subjects’ asymmet-
rical attributions are triggered by the presence of obstacles in the
transgressive conditions of the described scenarios and that the fact that
the described agents overcome those obstacles with their actions asymmet-
rically calls for an explanation, which is then filled in in terms of the
attributed agential variable. Normatively speaking, however, it remains an
open question whether the respective influence of moral considerations,
disvalues, prescriptive norms, conventional regularities, costs, and so forth
legitimately influence the attribution of said variables. It could be, for
instance, that known costs are always incurred intentionally, especially
when they are a means to a desired end. On the other hand, it could be
that the evaluative status of a side effect has little to do with how much an
agent knew it would occur, especially when the two conditions make it
clear that as far as the described agent’s knowledge about the effects of his
actions goes, he has exactly the same amount of information.
Another advantage an obstacle model has over its contenders is that it
explains a group of SEE cases that are puzzling for other accounts,
especially those that see a basis of the effect in the impact of moral
cognition on seemingly nonmoral judgments. In the Nazi Law case, the
CEO of a company is asked whether he wants to implement a program
that will make a profit but will cause the company to break/follow a racial
identification law. In both conditions, the CEO could not care less about
the law and only cares about his profit. It is plausible to assume that in this
Schindler’s List–style case, subjects consider breaking the law to be morally
right (the transgressive condition is thus not evaluatively bad, as in the
original vignette). Contrary to the predictions of moral accounts of the
effect, subjects judge the side effect of breaking the law to be more
intentional than the side effect of following it, even though the latter is
morally worse. The impact of moral considerations on people’s judgments
cannot be responsible for this. Rather, it must be the fact that participants
perceive the CEO’s action as the overcoming of an obstacle. In some cases,
this obstacle can be a moral one; but cases such as Nazi Law demonstrate
that the evaluative status of a described action and its obstacle-overcoming
nature can be teased apart.
Debunking Arguments in Ethics
Moreover, this account is borne out by recent evidence that is left
unexplained by many other accounts (see, however, Holton , Cova
et al. ). Robinson, Stey, and Alfano () show that the obstacle
model (or something very much like it) is superior to moral explanations of
the SEE, because the effect can be reversed when different norms –
that is, normative obstacles – are made salient to the judging subject,
regardless of the resulting moral value of the action.
The Nazi Law case mentioned before shows that the moral badness of
the side effect cannot be what drives up people’s attributions, because
people are more likely to judge the morally good violation of the (bad) law
to be intentional. Robinson et al. speculate that this is because asymmet-
rical attributions of agential variables are sensitive to salient norm violation.
This is consistent with the obstacle model, according to which agential
variables such as intentionality are attributed to a higher degree when the
attributor perceives there to be an obstacle for the described agent to
overcome. Because perception is what matters for whether asymmetrical
attributions occur, a salience condition is built into the model from the
outset. Robinson et al. tested their prediction with a case in which Carl,
who just inherited some money, is considering whether to invest the
inheritance in a retirement savings account or give it to Oxfam. There
are two conditions: in the Self Norm condition, his friend points out to
him that he may be able to retire comfortably if he invests the money. In
Other Norm, he is told that if he donates the money, he will help a lot of
people. Carl then either invests or donates the money.
What they found was that subjects’ intentionality judgments decrease or
increase depending on which norm is made salient to them (by way of
making it salient to the described agent). When Carl’s friend makes him
aware of the welcome effects saving might have on his future, but Carl
ends up donating, intentionality attributions go up. When he is made
aware of a donation’s potential to alleviate suffering, but he decides to keep
it for himself, this course of action is considered more intentional instead.

Carl
Carl recently inherited $,. He is considering whether to invest the money in a Roth IRA,
which is a type of retirement savings account, or give it to Oxfam, a charity that helps alleviate the
suffering of poor people all around the world.
[. . .]
Self Norm: His friend, Diana, says, “If you invest the money, you may be able to retire in comfort.”
Other Norm: His friend, Diana, says, “If you give the money to Oxfam, you will help a lot of
people.”
[. . .]
Carl Invests: Carl ends up investing the money.
Carl Donates: Carl ends up donating the money. (Robinson et al. )
This pattern, again, seems to be based on a tacitly felt demand for a
psychological explanation of the described behavior. Subjects are told that
a person conspicuously overcame something they perceive to be an obs-
tacle to the person’s conduct. Attributions of intentionality or desire then
resolve the tension between the described course of action and the presence
of this obstacle.
A potential problem for the obstacle model is posed by the Terrorist case,
in which a terrorist must decide whether to defuse a bomb he had planted
in a nightclub after he has learned that his son will be at the venue in
addition to the Americans who are his original targets:

Terrorist:
A terrorist has planted a bomb in a nightclub. There are lots of Americans in the nightclub who
will be injured or killed if the bomb goes off. The terrorist says to himself, “I did a good thing when
I planted that bomb in the nightclub. Americans are evil! The world will be a better place when
more of them are injured or dead.”
Only % of subjects judge that the terrorist intentionally saved the
Americans. From the perspective of the obstacle model sketched earlier,
this seems unexpected, as saving those Americans surely has to be con-
sidered an obstacle for the terrorist. The problem with this vignette,
however, is that before it describes the terrorist’s predicament, it explicitly
states that the terrorist’s primary intention is to kill those Americans. When
he then has to retract that plan because of his son, subjects are essentially
given no choice but to consider this side effect to be unintentional, as it
directly contradicts his previously avowed intention.
A final advantage of the model proposed here is that it ties in nicely with a
general account of cognition that has received much attention in recent
years. Obstacles such as norms set expectations of what
will happen (or what agents will do), and when norms are violated – when
an agent does not choose the path of least resistance, as it were – expect-
ations are frustrated. That is why recently, many authors have been drawn
to a two-systems account of the SEE (Sauer and Bates , Pinillos et al.
, ff.). System 1 and System 2 accounts seem promising because they cash
out the cognitive structure of intentionality attributions in terms of slow,
effortful, and conscious cognition on the one hand and fast, frugal, and
automatic cognition on the other (Kahneman ). Attributions of the
respective agential variables (intentionality etc.) are carried out by one of
these systems as an explanation for why the subjects overcome the respective
“obstacles”, an explanation that the other system cannot supply. Obstacles
set the expectation that subjects will avoid them; deviations from this
principle of agential inertia must be explained.
Terrorist (continued):
Later, the terrorist discovers that his only son, whom he loves dearly, is in the nightclub as well. If
the bomb goes off, his son will certainly be injured or killed. The terrorist then says to himself, “The
only way I can save my son is to defuse the bomb. But if I defuse the bomb, I’ll be saving those evil
Americans as well . . . What should I do?”
After carefully considering the matter, he thinks to himself, “I know it is wrong to save
Americans, but I can’t rescue my son without saving those Americans as well. I guess I’ll just
have to defuse the bomb.”
Did the terrorist intentionally save the Americans? (cf. Knobe , Cova ).
I have a lot of sympathy for this approach, but I already wish to emphasize here that I do not think
that, ultimately, it can supply an answer to the normative question. Whether a judgment task is
carried out by automatic or controlled processes has virtually no bearing whatsoever on whether its
judgmental output is justified (see Greene and Berker for this; this problem is also
reflected in the disagreement between Kahneman and Gigerenzer over the quality of
automatic intuitions). While a simple addition problem can be solved automatically, a
harder one takes some effort; yet whether your solutions to those math problems are
correct does not depend on the speed with which you arrived at them but on their truth.
It is important to be clear about the status of the obstacle model for my
argument. The main point of the model is to illustrate why and to what
extent the methodological, substantive, and normative questions are so
difficult to disentangle. I am therefore happy to grant, for the sake of
the argument, that the obstacle model isn’t the one true account of the
effect but a metamethodological device to show what a successful model of
the effect – that is, a model whose success is measured by whether it avoids
illegitimately presupposing an answer to any of the three questions and
thereby biasing the answers it gives to the other two – would
have to look like.
It thus ought to be possible to illustrate the same claim using a different
model. Take Holton’s () norm violation account. According to this
proposal, the observed asymmetry in people’s judgments is due to the
following difference: an agent counts as intentionally violating a norm if
she breaks it (e.g., in the harm condition of the Chairman vignette), but
she does not count as intentionally following a norm simply by conform-
ing to it (e.g., in the corresponding help condition). For the latter to be the
case, an agent must be counterfactually guided by it and modify her
behavior accordingly over a range of circumstances. When looking at this
account, we can readily see how this model, similarly to the obstacle
model, shows how the substantive, normative, and methodological ques-
tions are interconnected: Holton explicitly states that his aim is not just to
explain but to justify the observed judgmental pattern. Now it might be
plausible to think that violating or conforming to a norm makes a genuine
difference in terms of intentionality, but this is not what matters here.
What matters is that the aim to rationalize SEE judgments has a systematic
effect on the scope of findings one is inclined to count as instances of the
effect. It is doubtful, for instance, whether the norm-violation model can
account for Beebe and Buckwalter’s () epistemic findings. But if one
agrees that the SEE is a legitimate asymmetry, if one agrees that it is about
intentionality and related concepts (such as desiring or being in favor of ),
and if one agrees that it ought to be explained in terms of norm transgres-
sion/conformity, then it becomes difficult to see, from the perspective of
the norm-violation account, why the epistemic SEE should be included in
one’s set of explananda at all. It is this systematic connection among the
scope, nature, and legitimacy of the effect that the obstacle model is
supposed to illustrate.
The model is an empirical hypothesis. I thus do not wish to suggest that
there could be no counterexample to it. I’m sure that there is, or at least
will be. Its main purpose is to show what a model of the side-effect effect
might look like that avoids commitments both to how many of the
findings at issue ought to be accounted for by a proposed model and to
whether these findings ought to be considered legitimate. In the final
section of this chapter, I will discuss this tension in more detail.
Conclusion
One surprising aspect of the argument of this chapter is that methodo-
logically speaking, a comprehensive account of the widest possible scope of
side effect cases seems preferable, whereas normatively speaking, it might
still turn out that only some instances of the effect ought to be considered
legitimate. In short: if the obstacle model is the correct substantive account
of the SEE, then I recommend a disunified answer to the normative
question on the basis of a unified answer to the methodological question.
The reason for this is that what explains why subjects make the asymmet-
rical judgments discussed in this chapter is purely a matter of empirical
inquiry. It is then a separate question which of the so-explained judgments
subjects end up endorsing reflectively. It could be that, upon reflection,
people continue to think that it makes sense to attribute a higher degree of
intentionality to bad side effects but do not think that it makes sense to
ascribe greater causal impact or knowledge. And this conclusion is in line
with the main purpose of this chapter: to draw attention to the differences
among the methodological, substantive, and normative questions about
the Knobe effect and the difficulties this creates for the debunking of
deontological moral intuitions and to develop a suggestion regarding how
to disentangle them.
Conclusion: Vindicating Arguments
Introduction
Some moral beliefs can be debunked by pointing out their dubious
genesis. Perhaps all can. In recent years, however, an idea has been gaining
increasing attention according to which the undermining force of
debunking arguments in ethics can be deflected by redirecting their target
from our substantive normative convictions to metaethical territory (Street
; see also Vavova ). Our moral beliefs cannot be off track if there
is no track to be on. We can thus resist their force by rejecting realism
about the domain at issue. Debunking arguments can be turned on their
head. I have discussed this metaethical move in the second chapter.
I want to conclude by considering a different strategy: whether
debunking arguments can be turned inside out. At first glance, it would
seem surprising if the basic structure of debunking arguments, which is to
supply a causal explanation of (a subset of) our beliefs that makes their
justification appear problematic, could not be used for the opposite
purpose of supporting (a subset of) our moral judgments by making their
justification appear in a more favorable light. I will refer to such explan-
ations as vindicating arguments.
Nozick’s vindicating genealogy is foundationalist in that for assessing
the justness of current affairs, everything hinges on how they trace back
to an original acquisition. Other vindicating arguments could be called
evolutionist, in that they vindicate a result by showing that there is
something about the very process through which it came about that
makes the result normatively acceptable. Here, Hegel’s justification of
the state – a rather more extensive one than Nozick’s – could serve as
an example. Because the process that ultimately led to the development
of the current level of Sittlichkeit is guided by rational forces such as the
cunning of reason of the objective spirit, we have reason to think the
currently reached organization of the state is rational. The task of
philosophy, then, is to articulate the conceptual structure of this
organization.
These two are examples of affirmative vindicating arguments.
By contrast, I wish to suggest that at least the majority of promising
vindicating arguments proceed through what could be called vindication
by elimination. Consider again de Lazari-Radek and Singer’s evolutionary
vindication strategy for impartial consequentialism. They argue that there
are various normative theories competing for the throne. Among them are
ethical egoism, deontological ethics, and impartial consequentialism.
However, we can supply evolutionary debunking arguments for most of
them – in fact, all but one, since there is no plausible off-track genealogy
for an attitude of universal benevolence, which uniquely supports impartial
consequentialism. Ultimately, they aim to vindicate consequentialism by
showing that it cannot be debunked. Consequentialism simply is the last
man standing.
I am indebted to Katarzyna de Lazari-Radek for helpful discussions on this point.
[. . .] mistake to suppose that nobility will have exalted progeny, it’s
also unwise to assume that children must inherit the sins of their
parents. ()
This list of vindicating features does not consist of necessary and sufficient
conditions. Vindication is a matter of degrees rather than an all-or-nothing
affair. The more of these conditions a belief satisfies, the more strongly it
has been vindicated, and the other way around. This list of features can
thus be seen as a way of spelling out what it means for a process of belief-
formation not to be epistemically defective.
In a way, then, most vindicating arguments are really “failure to
debunk” arguments. Their conclusion could just as well be understood
as “M is not unjustified.” But frequently, this is enough for it to be
epistemically responsible to rely on them. When there are no good
grounds – none – for not believing something that seems plausible, one
is not behaving carelessly in believing it. Then again, there are possible
“positive” formulations of the given vindicating features, such as “process
X is appropriately fine-tuned” or “process Y tracks the truth with regard to
M.” The negative rendition of these features is supposed to highlight the
fact that we can vindicate a belief simply by seeing that there is nothing
particularly wrong with it.
There are, of course, countless ways of being wrong about something.
But the fact that I didn’t use some outlandish method, such as asking my
dog, to arrive at my moral judgment seems to do nothing to vindicate it.
The list is based on epistemic defects that are known to be
common and for which there is actual empirical evidence that they more or
less frequently play a distorting role in people’s moral cognition. I do not
mean to suggest that more remote possibilities have the same vindicating
force.
() Externalism. Another respect in which vindicating arguments are
special is this: some (Sinnott-Armstrong ) argue that in order for a
debunking argument to remove one’s justification for a belief, one has to
be aware of the undercutting defeater. One acquires a defeater for one’s
belief only if one is made aware of its illicit source. This seems correct to
me, at least as long as justification is read as personal justification – a person
who holds a belief while being aware of undercutting evidence is unjusti-
fied in holding that belief. It is less clear in the case of doxastic justification,
Thanks to Victor Kumar and Mark Alfano for helpful feedback on this issue.
Thanks to a reviewer for raising this issue.
for what matters here is not whether a person is aware of the dubious
history of her belief but whether the belief as such (rather than the person
holding it) is adequately supported.
The story is a different one, however, in the case of vindicating argu-
ments. It seems that one can be justified in holding a belief without being
aware of the fact there is a vindicating argument positively showing it to
stem from trustworthy processes. This may, again, be due to a general
presumption in favor of the reliability of our processes of belief formation.
This presumption is reflected by the fact that the list of vindicating features
essentially consists of negations, that is, undercutting defeaters that do not
obtain, instead of a list of positive, reliability-conducive features over and
above the absence of the ones that make a process of judgment formation
unreliable.
() Selectivity. Return to the distinction between global and selective
debunking arguments introduced at the beginning. Interestingly, vindicating
arguments seem to be exclusively of the selective variety. It makes little
sense to expect moral vindicating arguments to show that all of our moral
beliefs are justified. It is part of common sense that many are not. By
contrast, many debunking arguments are supposed to support, for
instance, a general error theory of moral judgment, according to which
all of our moral judgments are deeply off track.
() Moral Realism. I have emphasized in the second chapter that the
debate on debunking arguments has undergone a metaethical turn in the
wake of Street’s () influential Darwinian Dilemma argument. Let me
note again that the metaethical move suggested by Street – to debunk
metaethical realism rather than our first-order moral beliefs – may be
plausible as a response to off-track debunking arguments; much less so
for the other types of debunking arguments discussed earlier. To give just
one example: suppose someone were to argue that my moral intuition q is
unjustified because the cognitive processes generating q are unreliable in
modern, hostile environments (earlier, I have referred to this as obsolete-
ness debunking). That we could switch to an anti-realist metaethics
regarding the nature of q in order to avoid first-order skepticism about q
seems suspiciously ad hoc and unmotivated in response to this argument.
In general, however, it is an interesting question whether vindicating
arguments can have metaethical ramifications, too. In that case, we would
expect them to be the opposite of the ones debunking arguments are taken
to have. If debunking arguments can be used to undermine metaethical
realism, can vindicating arguments be used to support moral realism?
Metaethical debunking arguments roughly work as follows: suppose,
for the sake of the argument, that we start out as realists about morality.
Then we find out that, given the way we arrive at our moral beliefs,
they are unlikely to track mind-independent moral facts. By becoming
antirealists about moral norms and values, we can then avoid moral
skepticism. Moral realism has been debunked, so our moral beliefs them-
selves remain intact.
With vindicating arguments, it should work the other way around.
Suppose, first, that there is a domain about which we hold substantive
views that matter to us and that we are inclined to regard as justified.
Suppose, further, that we haven’t yet decided on how to think about
these views on a metatheoretical level. We are not sure whether we ought
to be realists, antirealists, quasi-realists, or something else about said
domain. Or suppose, indeed, that we are convinced antirealists about
the domain. Then we find, through a series of vindicating arguments of
the kind discussed, that given the way we arrive at our beliefs in said
domain, these beliefs are likely to be somewhat trustworthy or at least
unlikely to be untrustworthy: they are produced by processes that are
neither inadequate in the environments we dwell in nor are they exces-
sively sensitive or numb, and so on. If this is the case, then our
vindicating arguments may tip the balance in favor of adopting a realist
account of the nature of the domain at issue and our beliefs about it.
Since our ways of forming beliefs about the domain at issue are reliable,
we may as well think that they track something objective and mind-
independent.
However, one may also hold that vindicating arguments have no
metaethical import, especially if it is correct to suggest, as I did, that they
are always selective in scope. I have noted in Chapter that the prime
motivation for making the metaethical turn in the first place is to avoid the
threat of general moral skepticism. But if general moral antiskepticism,
that is, the view that all of our moral beliefs constitute knowledge, is off the table
due to its inherent implausibility, then vindicating arguments gain no
momentum in promising an avenue toward that view by supporting an
objectivist metaethics.
Finally, I would like to note that the possibility of vindicating argu-
ments gives us another reason for thinking that ignoble origins debunking
(see Chapter ) is an unsuccessful type of debunking. For if ignoble origins
can lead to debunking, then it would seem that noble origins should lead
to vindication. And I see no plausible reason why this would be so.
References
Bartels, D. M., and D. A. Pizarro. (). “The Mismeasure of Morals: Antisocial
Personality Traits Predict Utilitarian Responses to Moral Dilemmas,”
Cognition (): –.
Bauman, C. W., P. McGraw, D. Bartel, and C. Warren. (). “Revisiting
External Validity: Concerns about Trolley Problems and Other Sacrificial
Dilemmas in Moral Psychology,” Social and Personality Psychology Compass
(): –.
Baumeister, R. F., E. J. Masicampo, and C. Nathan DeWall. (). “Prosocial
Benefits of Feeling Free: Disbelief in Free Will Increases Aggression and
Reduces Helpfulness,” Personality and Social Psychology Bulletin ():
–.
Beebe, J. R., and W. Buckwalter. (). “The Epistemic Side-Effect Effect,”
Mind & Language (): –.
Berker, S. (). “The Normative Insignificance of Neuroscience,” Philosophy
and Public Affairs (): –.
Blair, R. J. R. (). “A Cognitive Developmental Approach to Morality:
Investigating the Psychopath,” Cognition (): –.
Bloom, P. (). “Family, Community, Trolley Problems, and the Crisis in
Moral Psychology,” The Yale Review (): –.
(). Against Empathy: The Case for Rational Compassion. New York:
HarperCollins.
Bogardus, T. (). “The Problem of Contingency for Religious Belief,” Faith
and Philosophy (): –.
(). “Only All Naturalists Should Worry about Only One Evolutionary
Debunking Argument,” Ethics (): –.
Bowles, S., and H. Gintis. (). A Cooperative Species: Human Reciprocity and
Its Evolution. Princeton University Press.
Brennan, J., and P. M. Jaworski. (). “Markets without Symbolic Limits,”
Ethics (): –.
Brink, D. O. (). “Moral Realism and the Sceptical Arguments from
Disagreement and Queerness,” Australasian Journal of Philosophy ():
–.
Buchanan, A., and R. Powell. (). “The Limits of Evolutionary Explanations of
Morality and Their Implications for Moral Progress,” Ethics (): –.
(). “Toward a Naturalistic Theory of Moral Progress,” Ethics ():
–.
Campbell, R., and V. Kumar. (). “Moral Reasoning on the Ground,” Ethics
(): –.
Carel, H. (). Illness: The Cry of the Flesh. Stocksfield: Acumen.
Christensen, D. (). “Disagreement, Question-Begging, and Epistemic Self-
Criticism,” Philosopher’s Imprint (): –.
Clark, C. J., et al. (). “Free to Punish: A Motivated Account of Free Will
Belief,” Journal of Personality and Social Psychology (): –.
Clarke-Doane, J. (). “Morality and Mathematics: The Evolutionary
Challenge,” Ethics (): –.
(). “Moral Epistemology: The Mathematics Analogy,” Noûs ():
–.
Cosmides, L. (). “The Logic of Social Exchange: Has Natural Selection
Shaped How Humans Reason? Studies with the Wason Selection Task,”
Cognition : –.
Cova, F. (). “Unconsidered Intentional Actions: An Assessment of Scaife and
Webber’s ‘Consideration Hypothesis,’” Journal of Moral Philosophy (): –.
Cova, F., E. Dupoux, and P. Jacob. (). “On Doing Things Intentionally,”
Mind & Language (): –.
Cova, F., and H. Naar. (a). “Side-Effect Effect Without Side Effects: The
Pervasive Impact of Moral Considerations on Judgments of Intentionality,”
Philosophical Psychology (): –.
(b). “Testing Sripada’s Deep Self Model,” Philosophical Psychology ():
–.
de Cruz, H. (). “Numerical Cognition and Mathematical Realism,”
Philosopher’s Imprint (): –.
Curry, O. S. (). “Morality as Cooperation: A Problem-Centred Approach.”
In T. K. Shackelford and R. D. Hansen (eds.), The Evolution of Morality.
Springer, –.
Cushman, F. (). “Crime and Punishment: Distinguishing the Roles of
Causal and Intentional Analyses in Moral Judgment,” Cognition ():
–.
(a). “The Role of Learning in Punishment, Prosociality, and Human
Uniqueness.” In K. Sterelny, R. Joyce, B. Calcott, and B. Fraser (eds.),
Cooperation and Its Evolution. MIT Press, –.
(b). “Action, Outcome, and Value: A Dual-System Framework for
Morality,” Personality and Social Psychology Review (): –.
(). “Punishment in Humans: From Intuitions to Institutions,” Philosophy
Compass (): –.
Cushman, F., J. Knobe, and W. Sinnott-Armstrong (). “Moral Appraisals
Affect Doing/Allowing Judgments,” Cognition (): –.
Cushman, F., L. Young, et al. (). “The Role of Conscious Reasoning and
Intuition in Moral Judgment: Testing Three Principles of Harm,”
Psychological Science (): –.
Dalbauer, N., and A. Hergovich. (). “Is What Is Worse More
Likely? The Probabilistic Explanation of the Epistemic Side-Effect
Effect,” Review of Philosophy and Psychology (): –.
Damasio, A. (). Descartes’ Error: Emotion, Reason, and the Human Brain.
London: Penguin.
Dancy, J. (). Ethics Without Principles. Oxford University Press.
Darby, R. (). “The Masturbation Taboo and the Rise of Routine Male
Circumcision: A Review of the Historiography,” Journal of Social History
(): –.
(). A Surgical Temptation: The Demonization of the Foreskin and the Rise of
Circumcision in Britain. Chicago: University of Chicago Press.
Darley, J. M., and C. D. Batson. (). “From Jerusalem to Jericho: A Study of
Situational and Dispositional Variables in Helping Behavior,” Journal of
Personality and Social Psychology (): –.
Darley, J. M., and B. Latané. (). “Bystander Intervention in Emergencies:
Diffusion of Responsibility,” Journal of Personality and Social Psychology
(): –.
Darwall, S., A. Gibbard, and P. Railton. (). “Toward Fin de Siècle Ethics:
Some Trends,” Philosophical Review (): –.
Darwin, C. (/). The Descent of Man, and Selection in Relation to Sex.
Penguin.
Das, R. (). “Evolutionary Debunking of Morality: Epistemological or
Metaphysical?” Philosophical Studies (): –.
Dawkins, R. (). The Selfish Gene. New York: Oxford University Press.
Demaree-Cotton, J. (). “Do Framing Effects Make Moral Intuitions
Unreliable?” Philosophical Psychology (): –.
Doris, J., and A. Plakias. (). “How to Argue about Disagreement: Evaluative
Diversity and Moral Realism.” In W. Sinnott-Armstrong (ed.), Moral
Psychology Volume , The Cognitive Science of Morality: Intuition and
Diversity. Cambridge, MA: MIT Press, –.
Doris, J. M. (). “Skepticism about Persons,” Philosophical Issues ():
–.
(). “Genealogy and Evidence: Prinz on the History of Morals,” Analysis
(): –.
Doris, J. M., and S. P. Stich. (). “As a Matter of Fact: Empirical Perspectives
on Ethics.” In F. Jackson and M. Smith (eds.), The Oxford Handbook of
Contemporary Philosophy. Oxford University Press, –.
Doris, J. M., and D. Murphy. (). “From My Lai to Abu Ghraib: The Moral
Psychology of Atrocity,” Midwest Studies in Philosophy (): –.
Doris, J. M. (). Lack of Character: Personality and Moral Behavior.
Cambridge University Press.
Driver, J. (a). “Imaginative Resistance and Psychological Necessity,” Social
Philosophy and Policy (): –.
(b). “Attributions of Causation and Moral Responsibility.” In
W. Sinnott-Armstrong, Moral Psychology. Vol. . Cambridge, MA, MIT
Press: –.
(). “The Limits of the Dual-Process View.” In S. M. Liao (ed.), Moral
Brains. Oxford University Press, –.
Edmonds, D. (). Would You Kill the Fat Man? The Trolley Problem and What
Your Answer Tells Us about Right and Wrong. Princeton/Oxford: Princeton
University Press.
Enoch, D. (). “How Is Moral Disagreement a Problem for Realism?” Journal
of Ethics (): –.
(). “A Defense of Moral Deference,” Journal of Philosophy : –.
Evans, J. (). “Dual-Processing Accounts of Reasoning, Judgment, and Social
Cognition,” Annual Review of Psychology : –.
Evans, J., and Stanovich, K. (). “Dual-Process Theories of Higher
Cognition,” Perspectives on Psychological Science (): –.
Fitzpatrick, S. (). “Moral Realism, Moral Disagreement, and Moral
Psychology,” Philosophical Papers (): –.
FitzPatrick, W. J. (). “Debunking Evolutionary Debunking of Ethical
Realism,” Philosophical Studies (): –.
Flanagan, O. J. (). Varieties of Moral Personality: Ethics and Psychological
Realism. Harvard University Press.
(). The Geography of Morals: Varieties of Moral Possibility. Oxford
University Press.
Foot, P. (). “The Problem of Abortion and the Doctrine of Double Effect,”
Oxford Review : –.
(). Natural Goodness. Oxford University Press.
Fraser, B., and M. Hauser. (). “The Argument from Disagreement and the
Role of Cross-Cultural Empirical Data,” Mind & Language (): –.
Fricker, M. (). Epistemic Injustice: Power and the Ethics of Knowing. Oxford
University Press.
(). “Styles of Moral Relativism. A Critical Family Tree.” In R. Crisp (ed.),
The Oxford Handbook of the History of Ethics. Oxford University Press,
–.
Fried, B. H. (). “What Does Matter? The Case for Killing the Trolley
Problem (or Letting It Die),” Philosophical Quarterly (): –.
Gendler, T. S. (). “The Puzzle of Imaginative Resistance,” The Journal of
Philosophy (): –.
Gert, J. (). “Disgust, Moral Disgust, and Morality,” Journal of Moral
Philosophy (): –.
Gigerenzer, G. (). Gut Feelings: Short Cuts to Better Decision Making.
London: Penguin.
Gold, N., A. Colman, and B. Pulford. (). “Cultural Differences in Responses
to Real-Life and Hypothetical Trolley Problems,” Judgment and Decision
Making (): –.
Gold, N., B. Pulford, and A. Colman. (). “The Outlandish, the Realistic,
and the Real: Contextual Manipulation and Agent Role Effects in Trolley
Problems,” Frontiers in Psychology: Cognitive Science : –.
Graber, A. (). “Medusa’s Gaze Reflected: A Darwinian Dilemma for Anti-
Realist Theories of Value,” Ethical Theory and Moral Practice ():
–.
Graham, J., J. Haidt, and B. A. Nosek. (). “Liberals and Conservatives Rely
on Different Sets of Moral Foundations,” Journal of Personality and Social
Psychology (): .
Graham, J., J. Haidt, S. Koleva, M. Motyl, R. Iyer, S. Wojcik, and P. H. Ditto.
(in press). “Moral Foundations Theory: The Pragmatic Validity of Moral
Pluralism,” Advances in Experimental Social Psychology.
Graham, J., B. Nosek, and J. Haidt. (in press). “The Moral Stereotypes of Liberals
and Conservatives: Exaggeration across the Political Divide,” PLoS One.
References
Greene, J. D. (). “The Secret Joke of Kant’s Soul.” In W. Sinnott-Armstrong
(ed.), Moral Psychology. Vol. . The Neuroscience of Morality: Emotion, Brain
Disorders, and Development. Cambridge, MA: MIT Press, –.
(). Moral Tribes: Emotion, Reason, and the Gap between Us and Them.
Penguin Press.
(). “Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science
Matters for Ethics,” Ethics (): –.
et al. (). “Are ‘Counter-Intuitive’ Deontological Judgments Really
Counter-Intuitive? An Empirical Reply to Kahane et al. (),” Social
Cognitive and Affective Neuroscience (): –.
(). “Reply to Driver and Darwall.” In S. M. Liao (ed.), Moral Brains.
Oxford University Press, –.
(). “The Rat-a-Gorical Imperative: Moral Intuition and the Limits of
Affective Learning,” Cognition.
Greene, J. D., R. B. Sommerville, et al. (). “An fMRI Investigation of
Emotional Engagement in Moral Judgment,” Science : –.
Greene, J. D., F. Cushman, et al. (). “Pushing Moral Buttons: The
Interaction between Personal Force and Intention in Moral Judgment,”
Cognition (): –.
Greene, J. D., L. E. Nystrom, et al. (). “The Neural Bases of Cognitive
Conflict and Control in Moral Judgment,” Neuron : –.
Haggard, P., and M. Eimer. (). “On the Relation between Brain Potentials
and the Awareness of Voluntary Movements,” Experimental Brain Research
(): –.
Haidt, J. (). “The Emotional Dog and Its Rational Tail: A Social Intuitionist
Approach to Moral Judgment,” Psychological Review (): .
(). “The Emotional Dog Gets Mistaken for a Possum,” Review of General
Psychology (): –.
(). The Righteous Mind: Why Good People Are Divided by Politics and
Religion. London: Penguin.
Haidt, J., S. Koller, et al. (). “Affect, Culture, and Morality, or Is It
Wrong to Eat Your Dog?” Journal of Personality and Social Psychology :
–.
Haidt, J., P. Rozin, C. McCauley, and S. Imada. (). “Body, Psyche, and
Culture: The Relationship between Disgust and Morality,” Psychology &
Developing Societies (): –.
Haidt, J., F. Björklund, et al. (). “Moral Dumbfounding: When Intuition
Finds No Reason.” Unpublished Manuscript, University of Virginia.
Haidt, J., and F. Björklund. (). “Social Intuitionists Answer Six Questions
about Moral Psychology.” In W. Sinnott-Armstrong (ed.), Moral Psychology.
Vol. . The Cognitive Science of Morality: Intuition and Diversity. Cambridge,
MA: MIT Press, –.
Haidt, J., and J. Graham. (). “When Morality Opposes Justice:
Conservatives Have Moral Intuitions That Liberals May Not Recognize,”
Social Justice Research : –.
Haidt, J., and M. Hersh. (). “Sexual Morality: The Cultures and Emotions
of Conservatives and Liberals,” Journal of Applied Social Psychology :
–.
Hall, L., P. Johansson, and T. Strandberg. (). “Lifting the Veil of Morality:
Choice Blindness and Attitude Reversals on a Self-Transforming Survey,”
PLoS One (): e.
Haney, C., W. Banks, and P. Zimbardo. (). “Interpersonal Dynamics of a
Simulated Prison,” International Journal of Criminology and Penology :
–.
Harman, G. (). “Moral Relativism Defended,” The Philosophical Review
(): –.
(). “Moral Philosophy Meets Social Psychology: Virtue Ethics and the
Fundamental Attribution Error,” Proceedings of the Aristotelian Society
(): –.
(). “Skepticism about Character Traits,” Journal of Ethics (/):
–.
Henrich, J. (). The Secret of Our Success: How Culture Is Driving Human
Evolution, Domesticating Our Species, and Making Us Smarter. Princeton
University Press.
Henrich, J., S. J. Heine, and A. Norenzayan. (). “The Weirdest People in the
World,” Behavioral and Brain Sciences (–):–.
Hindriks, F. (). “Intentional Action and the Praise-Blame Asymmetry,” The
Philosophical Quarterly (): –.
(). “Normativity in Action: How to Explain the Knobe Effect and Its
Relatives,” Mind & Language (): –.
Holton, R. (). “Norms and the Knobe Effect,” Analysis (): –.
Horgan, T., and M. Timmons. (). “Morphological Rationalism and the
Psychology of Moral Judgment,” Ethical Theory and Moral Practice :
–.
Huemer, M. (). Ethical Intuitionism. Palgrave Macmillan.
(). “A Liberal Realist Answer to Debunking Skeptics: The Empirical Case
for Realism,” Philosophical Studies (): –.
Hume, D. (/). A Treatise of Human Nature. Oxford University Press.
Hursthouse, R. (). On Virtue Ethics. Oxford University Press.
Inbar, Y., D. Pizarro, R. Iyer, and J. Haidt. (). “Disgust Sensitivity, Political
Conservatism, and Voting,” Social Psychological and Personality Science ():
–.
Isen, A. M., and P. F. Levin. (). “Effect of Feeling Good on Helping:
Cookies and Kindness,” Journal of Personality and Social Psychology ():
–.
Iyer, R., S. P. Koleva, J. Graham, P. H. Ditto, and J. Haidt. ().
“Understanding Libertarian Morality: The Psychological Dispositions of
Self-Identified Libertarians,” PLoS One (): e.
Jacobson, D. (). “Moral Dumbfounding and Moral Stupefaction,” Oxford
Studies in Normative Ethics : –.
Jones, K. (). “The Politics of Intellectual Self-Trust,” Social Epistemology
(): –.
Joyce, R. (). The Evolution of Morality. MIT Press.
Kagan, S. (). “Thinking about Cases,” Social Philosophy and Policy ():
–.
Kahan, D. M. (). “Ideology, Motivated Reasoning, and Cognitive
Reflection: An Experimental Study,” Judgment and Decision Making :
–.
Kahane, G. (). “Evolutionary Debunking Arguments,” Noûs ():
–.
(). “On the Wrong Track: Process and Content in Moral Psychology,”
Mind & Language ():–.
(). “Must Metaethical Realism Make a Semantic Claim?” Journal of Moral
Philosophy (): –.
(). “Evolution and Impartiality,” Ethics (): .
et al. (). “The Neural Basis of Intuitive and Counterintuitive Moral
Judgement,” Social Cognitive and Affective Neuroscience (): –.
(). “The Armchair and the Trolley: An Argument for Experimental
Ethics,” Philosophical Studies ():–.
(). “Sidetracked by Trolleys: Why Sacrificial Moral Dilemmas Tell Us
Little (or Nothing) about Utilitarian Judgment,” Social Neuroscience ():
–.
Kahane, G., J. A. C. Everett, B. D. Earp, M. Farias, and J. Savulescu.
(). “‘Utilitarian’ Judgments in Sacrificial Moral Dilemmas Do Not
Reflect Impartial Concern for the Greater Good,” Cognition
:–.
Kahneman, D. (). Thinking, Fast and Slow. London: Penguin Books.
Kahneman, D., A. Tversky, et al., eds. (). Judgment under Uncertainty:
Heuristics and Biases. Cambridge: Cambridge University Press.
Kamm, F. M. (). Morality, Mortality. Vol. : Death and Whom to Save From It.
New York: Oxford University Press.
(). Intricate Ethics: Rights, Responsibilities, and Permissible Harm. New
York: Oxford University Press.
(). “Neuroscience and Moral Reasoning: A Note on Recent Research,”
Philosophy & Public Affairs (): –.
Katsafanas, P. (). Agency and the Foundations of Ethics: Nietzschean
Constitutivism. Oxford University Press.
Kellogg, J. H. (). Plain Facts for Old and Young: Embracing the Natural
History and Hygiene of Organic Life. Burlington: Segner.
Kelly, D. (). Yuck!: The Nature and Moral Significance of Disgust. MIT Press.
Kennett, J. (). “Imagining Reasons,” Southern Journal of Philosophy :
–.
(). “Living With One’s Choices: Moral Reasoning In Vitro and In Vivo.”
In R. Langdon and C. Mackenzie (eds.), Emotions, Imagination, and Moral
Reasoning. New York: Psychology Press, –.
Kennett, J., and C. Fine. (). “Will the Real Moral Judgment Please Stand
Up?” Ethical Theory and Moral Practice ():–.
Kitcher, P. (). “Challenges for Secularism.” In G. Levine (ed.), The Joy of
Secularism: Essays for How We Live Now. Princeton University Press, –.
Klein, C. (). “The Dual Track Theory of Moral Decision-Making:
A Critique of the Neuroimaging Evidence,” Neuroethics (): –.
Knobe, J. (). “Intentional Action and Side Effects in Ordinary Language,”
Analysis (): –.
(). “Folk Psychology and Folk Morality: Response to Critics,” Journal of
Theoretical and Philosophical Psychology ():–.
(). “The Concept of Intentional Action: A Case Study in the Uses of Folk
Psychology,” Philosophical Studies : –.
(). “Reason Explanation in Folk Psychology,” Midwest Studies in
Philosophy : –.
(). “Person as Scientist, Person as Moralist,” Behavioral and Brain Sciences
(): –.
Knobe, J., and A. Burra. (). “The Folk Concepts of Intention and
Intentional Action: A Cross-Cultural Study,” Journal of Cognition and
Culture (–): –.
Knobe, J., and B. Fraser. (a). “Causal Judgment and Moral Judgment: Two
Experiments.” In W. Sinnott-Armstrong (ed.), Moral Psychology. Vol. .
Cambridge, MA: MIT Press, –.
(b). “Causal Judgment and Moral Judgment: Two Experiments.” In
Walter Sinnott-Armstrong (ed.), Moral Psychology. Vol. . MIT Press,
–.
Knobe, J., and B. Leiter. (). “The Case for Nietzschean Moral Psychology.”
In B. Leiter and N. Sinhababu (eds.), Nietzsche and Morality. Oxford
University Press, –.
Knobe, J., and G. Mendlow. (). “The Good, the Bad and the Blameworthy:
Understanding the Role of Evaluative Reasoning in Folk Psychology,”
Journal of Theoretical and Philosophical Psychology (): –.
Knutson, K. M., F. Krueger, M. Koenigs, A. Hawley, J. Escobedo, V. Vasudeva,
R. Adolphs, and J. Grafman. (). “Behavioral Norms for Condensed
Moral Vignettes,” Social Cognitive and Affective Neuroscience : –.
Koenigs, M., and D. Tranel. (). “Irrational Economic Decision-Making
After Ventromedial Prefrontal Damage: Evidence from the Ultimatum
Game,” The Journal of Neuroscience (): –.
Koenigs, M., L. Young, et al. (). “Damage to the Prefrontal Cortex Increases
Utilitarian Moral Judgments,” Nature (): –.
Köhler, S., and M. Ridge. (). “Revolutionary Expressivism,” Ratio ():
–.
Koleva, S. P., J. Graham, P. Ditto, R. Iyer, and J. Haidt. (). “Tracing the
Threads: How Five Moral Concerns (Especially Purity) Help Explain
Culture War Attitudes,” Journal of Research in Personality : –.
Koralus, P., and M. Alfano. (). “Reasons-Based Moral Judgment and the
Erotetic Theory.” In J.-F. Bonnefon and B. Trémolière (eds.), Moral
Inferences. New York: Routledge, –.
Korsgaard, C. M. (). The Sources of Normativity. Cambridge University Press.
(). Self-Constitution: Agency, Identity, and Integrity. Oxford: Oxford
University Press.
Kristjánsson, K. (). “Situationism and the Concept of a Situation,” European
Journal of Philosophy (S): E-E.
Kumar, V. (a). “Moral Vindications,” Cognition : –.
(b). “Foul Behavior,” Philosophers’ Imprint (): –.
Kumar, V., and J. May. (forthcoming). “How to Debunk Moral Beliefs.” In
J. Suikkanen and A. Kauppinen (eds.), Methodology and Moral Philosophy.
Routledge.
de Lazari-Radek, K., and P. Singer. (). “Secrecy in Consequentialism:
A Defence of Esoteric Morality,” Ratio (): –.
(). “The Objectivity of Ethics and the Unity of Practical Reason,” Ethics
(): –.
Leiter, B. (). “Against Convergent Moral Realism: The Respective Roles of
Philosophical Argument and Empirical Evidence.” In W. Sinnott-Armstrong
(ed.), Moral Psychology. Vol. . The Cognitive Science of Morality: Intuition
and Diversity. Cambridge, MA: MIT Press, –.
Levy, N. (). “Imaginative Resistance and the Moral/Conventional
Distinction,” Philosophical Psychology (): –.
(). Neuroethics. Challenges for the st Century. Cambridge: Cambridge
University Press.
(). “Less Blame, Less Crime? The Practical Implications of Moral
Responsibility Skepticism,” Journal of Practical Ethics (): –.
(). “Dissolving the Puzzle of Resultant Moral Luck,” Review of Philosophy
and Psychology (): –.
Liao, S. M., A. Wiegmann, J. Alexander, and G. Vong. (). “Putting the
Trolley in Order: Experimental Philosophy and the Loop Case,”
Philosophical Psychology (): –.
Libet, B. W. (). “Unconscious Cerebral Initiative and the Role of
Conscious Will in Voluntary Action,” Behavioral and Brain Sciences
(): –.
MacFarquhar, L. (). Strangers Drowning: Grappling with Impossible Idealism,
Drastic Choices, and the Overpowering Urge to Help. Penguin Press.
Machery, E. (). “The Folk Concept of Intentional Action,” Mind &
Language (): –.
Mackie, J. L. (). Ethics: Inventing Right and Wrong. Penguin.
Maibom, H. (). “The Mad, the Bad, and the Psychopath,” Neuroethics
():–.
Mallon, R. (). “Knobe versus Machery: Testing the Trade-Off Hypothesis,”
Mind & Language (): –.
Martin, R., and J. Barresi. (). “Personal Identity and What Matters in
Survival: An Historical Overview.” In Raymond Martin and John Barresi
(eds.), Personal Identity. Blackwell, –.
Mason, E. (). “Objectivism and Prospectivism about Rightness,” Journal of
Ethics and Social Philosophy (): –.
Mason, K. (). “Debunking Arguments and the Genealogy of Religion and
Morality,” Philosophy Compass (): –.
May, J. (). “Does Disgust Influence Moral Judgment?” Australasian Journal
of Philosophy (): –.
(forthcoming). Regard for Reason in the Moral Mind. Oxford University Press.
May, S. (Ed.). (). Nietzsche’s On the Genealogy of Morality: A Critical Guide.
Cambridge University Press.
McGrath, S. (). “Moral Knowledge by Perception,” Philosophical Perspectives
(): –.
(). “Skepticism about Moral Expertise as a Puzzle for Moral Realism,” The
Journal of Philosophy (): –.
McGuire, J., R. Langdon, M. Coltheart, and C. Mackenzie. (). “A Reanalysis
of the Personal/Impersonal Distinction in Moral Psychology Research,”
Journal of Experimental Social Psychology (): –.
Mele, A. R., and F. Cushman. (). “Intentional Action, Folk Judgments, and
Stories: Sorting Things Out,” Midwest Studies in Philosophy : –.
Mercier, H., and D. Sperber. (). The Enigma of Reason. Harvard University
Press.
Merritt, M. (). “Aristotelean Virtue and the Interpersonal Aspect of Ethical
Character,” Journal of Moral Philosophy (): –.
Meyers, C. D. (). “Defending Moral Realism from Empirical Evidence of
Disagreement,” Social Theory and Practice (): –.
Mikhail, J. (). “Universal Moral Grammar: Theory, Evidence, and the
Future,” Trends in Cognitive Sciences (): –.
Milgram, S. (). “Behavioral Study of Obedience,” The Journal of Abnormal
and Social Psychology (): .
Miller, C. (). “Social Psychology and Virtue Ethics,” Journal of Ethics ():
–.
Mirza, O. (). “The Evolutionary Argument against Naturalism,” Philosophy
Compass (): –.
Mizrahi, M. (). “Does the Method of Cases Rest on a Mistake?” Review of
Philosophy and Psychology (): –.
Moll, J., R. Zahn, R. de Oliveira-Souza, F. Krueger, and J. Grafman. ().
“Opinion: The Neural Basis of Human Moral Cognition,” Nature Reviews
Neuroscience : –.
Moody-Adams, M. M. (). Fieldwork in Familiar Places: Morality, Culture,
and Philosophy. Harvard University Press.
Moore, G. E. (). Principia Ethica. Dover Publications.
Musschenga, B. (). “The Promises of Moral Foundations Theory,” Journal of
Moral Education (): –.
Nadelhoffer, T. (). “Bad Acts, Blameworthy Agents, and Intentional Actions:
Some Problems for Juror Impartiality,” Philosophical Explorations ():
–.
Newman, G. E., J. De Freitas, and J. Knobe. (). “Beliefs about the True Self
Explain Asymmetries Based on Moral Judgment,” Cognitive Science ():
–.
Nichols, S. (). Sentimental Rules: On the Natural Foundations of Moral
Judgment. Oxford University Press.
(). “Process Debunking and Ethics,” Ethics (): –.
Nichols, S., and J. Knobe. (). “Moral Responsibility and Determinism: The
Cognitive Science of Folk Intuitions,” Noûs (): –.
Nichols, S., and J. Ulatowski. (). “Intuitions and Individual Differences: The
Knobe Effect Revisited,” Mind & Language (): –.
Nichols, S., S. Kumar, T. Lopez, A. Ayars, and H. Chan. (). “Rational
Learners and Moral Rules,” Mind & Language (): –.
Nickerson, R. S. (). “Confirmation Bias: A Ubiquitous Phenomenon in
Many Guises,” Review of General Psychology (): .
Nietzsche, F. The Gay Science. (Referred to as GS. Numerals refer to sections.)
On the Genealogy of Morality. (Referred to as GM. Numerals refer to sections.)
Nisbett, R. E., and T. D. Wilson. (). “Telling More than We Can Know:
Verbal Reports on Mental Processes,” Psychological Review (): –.
(). “The Accuracy of Verbal Reports about the Effects of Stimuli and
Behavior,” Social Psychology (): –.
Norcross, A. (). “Off Her Trolley? Frances Kamm and the Metaphysics of
Morality,” Utilitas ():–.
Nozick, R. (). Philosophical Explanations. Harvard University Press.
Nussbaum, M. C. (). Hiding from Humanity: Disgust, Shame, and the Law.
Princeton University Press.
O’Hara, R. E., W. Sinnott-Armstrong, and N. A. Sinnott-Armstrong. ().
“Wording Effects on Moral Judgments,” Judgment and Decision Making
(): –.
Parfit, D. A. (). Reasons and Persons. Oxford University Press.
Paxton, J. M., L. Ungar, et al. (). “Reflection and Reasoning in Moral
Judgment,” Cognitive Science (): –.
Pettit, D., and J. Knobe. (). “The Pervasive Impact of Moral Judgment,”
Mind & Language (): –.
Phelan, M., and H. Sarkissian. (). “Is the ‘Trade-off Hypothesis’ Worth
Trading For?” Mind & Language (): –.
Phillips, J., J. B. Luguri, and J. Knobe. (). “Unifying Morality’s Influence on
Non-Moral Judgments: The Relevance of Alternative Possibilities,”
Cognition : –.
Phillips, J., L. Misenheimer, and J. Knobe. (). “The Ordinary Concept of
Happiness (and Others Like It),” Emotion Review (): –.
Pinillos, N., N. Smith, et al. (). “Philosophy’s New Challenge: Experiments
and Intentional Action,” Mind & Language (): –.
Pinker, S. (). The Better Angels of Our Nature: The Decline of Violence in
History and Its Causes. Penguin UK.
Plakias, A. (). “The Good and the Gross,” Ethical Theory and Moral Practice
(): –.
Plantinga, A. (). Warrant and Proper Function. Oxford University Press.
(). “The Evolutionary Argument against Naturalism: An Initial Statement
of the Argument,” Philosophy After Darwin: Classic and Contemporary
Readings: –.
Portmore, D. W. (). “Consequentializing,” Philosophy Compass ():
–.
Prinz, J. (). “The Emotional Basis of Moral Judgments,” Philosophical
Explorations (): –.
(). The Emotional Construction of Morals. Oxford University Press.
(). “The Normativity Challenge: Cultural Psychology Provides the Real
Threat to Virtue Ethics,” Journal of Ethics (–): –.
(). “Against Empathy,” Southern Journal of Philosophy (): –.
Rai, T. S. and A. P. Fiske. (). “Moral Psychology Is Relationship Regulation:
Moral Motives for Unity, Hierarchy, Equality, and Proportionality,”
Psychological Review : –.
Railton, P. (). “Moral Realism,” The Philosophical Review ():–.
(). “Moral Explanation and Moral Objectivity,” Philosophy and
Phenomenological Research : –.
(). “The Affective Dog and Its Rational Tale: Intuition and Attunement,”
Ethics (): –.
(). “Moral Learning: Why Learning? Why Moral? And Why Now?”
Cognition : –.
Rini, R. A. (). “Making Psychology Normatively Significant,” Journal of
Ethics (): –.
(). “How Not to Test for Philosophical Expertise,” Synthese ():
–.
(). “Morality and Cognitive Science,” Internet Encyclopedia of Philosophy.
(). “Debunking Debunking: A Regress Challenge for Psychological
Threats to Moral Judgment,” Philosophical Studies (): –.
(). “Why Moral Psychology Is Disturbing,” Philosophical Studies ():
–.
Robinson, B., P. Stey, and M. Alfano. (). “Reversing the Side-Effect Effect:
The Power of Salient Norms,” Philosophical Studies (): –.
Roeser, S. (). Moral Emotions and Intuitions. Palgrave Macmillan.
Ross, L., and R. E. Nisbett. (). The Person and the Situation. Philadelphia:
Temple University Press.
Rossen, I., C. Lawrence, P. Dunlop, and S. Lewandowsky. (). “Can Moral
Foundations Theory Help to Explain Partisan Differences in Climate
Change Beliefs?” Paper presented at the ISPP th Annual Scientific
Meeting, Lauder School of Government, Diplomacy and Strategy,
IDC–Herzliya, Herzliya, Israel.
Rose, D., J. Livengood, J. Sytsma, and E. Machery. (). “Deep Trouble for
the Deep Self,” Philosophical Psychology (): –.
Royzman, E. B., K. Kim, and R. F. Leeman. (). “The Curious Tale of Julie
and Mark: Unraveling the Moral Dumbfounding Effect,” Judgment and
Decision Making (): .
Saar, M. (). “Understanding Genealogy: History, Power, and the Self,”
Journal of the Philosophy of History (): –.
Sauer, H. (a). “Psychopaths and Filthy Desks: Are Emotions Necessary and
Sufficient for Moral Judgment?” Ethical Theory and Moral Practice ():
–.
(b). “Morally Irrelevant Factors: What’s Left of the Dual Process-Model of
Moral Cognition?” Philosophical Psychology (): –.
(c). “Educated Intuitions. Automaticity and Rationality in Moral
Judgement,” Philosophical Explorations (): –.
(). “It’s the Knobe Effect, Stupid!” Review of Philosophy and Psychology
(): –.
(). Moral Judgments as Educated Intuitions. MIT Press.
Sauer, H. and T. Bates. (). “Chairmen, Cocaine, and Car Crashes: The
Knobe Effect as an Attribution Error,” Journal of Ethics (): –.
Schinkel, A. (). “The Problem of Moral Luck: An Argument against Its
Epistemic Reduction,” Ethical Theory and Moral Practice (): –.
Schlosser, M. E. (a). “Free Will and the Unconscious Precursors of Choice,”
Philosophical Psychology (): –.
(b). “Causally Efficacious Intentions and the Sense of Agency: In Defense
of Real Mental Causation,” Journal of Theoretical and Philosophical Psychology
(): –.
(). “The Neuroscientific Study of Free Will: A Diagnosis of the
Controversy,” Synthese (): –.
Schnall, S., et al. (). “Disgust as Embodied Moral Judgment,” Personality and
Social Psychology Bulletin (): –.
Schultze-Kraft, M., et al. (). “The Point of No Return in Vetoing Self-
Initiated Movements,” Proceedings of the National Academy of Sciences
(): –.
Schwitzgebel, E., and F. Cushman. (). “Expertise in Moral Reasoning? Order
Effects on Moral Judgment in Professional Philosophers and Non-
Philosophers,” Mind & Language (): –.
Shafer-Landau, R. (). “Evolutionary Debunking, Moral Realism and Moral
Knowledge,” Journal of Ethics and Social Philosophy : i.
Sidgwick, H. (). The Methods of Ethics. Indianapolis: Hackett.
Singer, P. (). “Famine, Affluence, and Morality,” Philosophy & Public Affairs
(): –.
(). Practical Ethics. Cambridge University Press.
(). The Expanding Circle: Ethics, Evolution, and Moral Progress. Princeton
University Press.
(). “Ethics and Intuitions,” The Journal of Ethics (–): –.
Sinhababu, N. (). “Unequal Vividness and Double Effect,” Utilitas ():
–.
Sinnott-Armstrong, W. (). “Framing Moral Intuitions.” In W.
Sinnott-Armstrong (ed.), Moral Psychology, Vol. : The Cognitive Science of
Morality. MIT Press, –.
Skarsaune, K. O. (). “Darwin and Moral Realism: Survival of the Iffiest,”
Philosophical Studies (): –.
Smith, M. (). “The Humean Theory of Motivation,” Mind (): –.
Sneddon, A. (). “Normative Ethics and the Prospects of an Empirical
Contribution to the Assessment of Moral Disagreement and Moral
Realism,” Journal of Value Inquiry (): –.
Sober, E. and D. S. Wilson. (). Unto Others: The Evolution and Psychology of
Unselfish Behavior. Harvard University Press.
Soon, C. S., M. Brass, H. J. Heinze, and J. D. Haynes. (). “Unconscious
Determinants of Free Decisions in the Human Brain,” Nature Neuroscience
(): –.
Srinivasan, A. (). “The Archimedean Urge,” Philosophical Perspectives ():
–.
Sripada, C. and S. Konrath. (). “Telling More Than We Can Know About
Intentional Action,” Mind & Language (): –.
Sripada, C. S. (). “The Deep Self Model and Asymmetries in Folk Judgments
about Intentional Action,” Philosophical Studies : –.
Stanovich, K. (). Rationality and the Reflective Mind. Oxford University
Press.
Sterelny, K. (). The Evolved Apprentice. MIT Press.
Stocker, M. (). “The Schizophrenia of Modern Ethical Theories,” Journal of
Philosophy (): –.
Street, S. (). “A Darwinian Dilemma for Realist Theories of Value,”
Philosophical Studies (): –.
(). “What Is Constructivism in Ethics and Metaethics?” Philosophy
Compass (): –.
(forthcoming). “Does Anything Really Matter or Did We Just Evolve to Think
So?” In A. Byrne, J. Cohen, G. Rosen, and S. Shiffrin (eds.), The Norton
Introduction to Philosophy. New York: Norton, –.
Strohminger, N. (). “Disgust Talked About,” Philosophy Compass ():
–.
Strohminger, N. and S. Nichols. (). “The Essential Moral Self,” Cognition
(): –.
Sturgeon, N. (). “Moral Explanations.” In David Copp and David
Zimmerman, eds., Morality, Reason and Truth. Totowa, NJ: Rowman &
Allanheld, –.
Taurek, J. M. (). “Should the Numbers Count?” Philosophy & Public Affairs
(): –.
Tersman, F. (). “The Reliability of Moral Intuitions: A Challenge from
Neuroscience,” Australasian Journal of Philosophy (): –.
Thomson, J. J. (). “Killing, Letting Die, and the Trolley Problem,” The
Monist (): –.
Tiberius, V. (). Moral Psychology: A Contemporary Introduction. Routledge.
Timmons, M. (). Moral Theory: An Introduction. Rowman & Littlefield.
Tobia, K., W. Buckwalter, and S. Stich. (). “Moral Intuitions: Are
Philosophers Experts?” Philosophical Psychology (): –.
Tobia, K. P. (). “Personal Identity and the Phineas Gage Effect,” Analysis
(): –.
Tropman, E. (). “Evolutionary Debunking Arguments: Moral Realism,
Constructivism, and Explaining Moral Knowledge,” Philosophical
Explorations (): –.
Uhlmann, E. L., and G. L. Cohen. (). “Constructed Criteria: Redefining
Merit to Justify Discrimination,” Psychological Science (): –.
Uhlmann, E. L., D. A. Pizarro, D. Tannenbaum, and P. H. Ditto. (). “The
Motivated Use of Moral Principles,” Judgment and Decision Making ():
.
Unger, P. K. (). Living High and Letting Die: Our Illusion of Innocence. New
York: Oxford University Press.
Uttich, K., and T. Lombrozo. (). “Norms Inform Mental State
Ascriptions: A Rational Explanation for the Side-Effect Effect,” Cognition
: –.
Västfjäll, D., P. Slovic, and M. Mayorga. (). “Pseudoinefficacy: Negative
Feelings from Children Who Cannot Be Helped Reduce Warm Glow for
Children Who Can Be Helped,” Frontiers in Psychology : .
Västfjäll, D., P. Slovic, M. Mayorga, and E. Peters. (). “Compassion Fade:
Affect and Charity Are Greatest for a Single Child in Need,” PLoS One ():
e.
Valdesolo, P., and D. DeSteno. (). “Manipulations of Emotional Context
Shape Moral Judgment,” Psychological Science (): –.
Vavova, K. (). “Evolutionary Debunking of Moral Realism,” Philosophy
Compass (): –.
Velleman, D. (). The Possibility of Practical Reason. New York: Oxford
University Press.
Velleman, J. D. (). How We Get Along. Cambridge University Press.
(). “Doables,” Philosophical Explorations (): –.
Vogler, C. (). Reasonably Vicious. Harvard University Press.
Vranas, P. B. (). “The Indeterminacy Paradox: Character Evaluations and
Human Psychology,” Noûs (): –.
de Waal, F. (). Primates and Philosophers: How Morality Evolved. Princeton:
Princeton University Press.
Webber, J. (). “Character, Attitude and Disposition,” European Journal of
Philosophy (): –.
Webber, J., and R. Scaife. (). “Intentional Side-Effects of Action,” Journal of
Moral Philosophy ():–.
Wegner, D. M. (). The Illusion of Conscious Will. MIT Press.
Wheatley, T., and J. Haidt. (). “Hypnotic Disgust Makes Moral Judgments
More Severe,” Psychological Science (): –.
Wiggins, D. (). Needs, Values, Truth: Essays in the Philosophy of Value
(Vol. ). Oxford University Press.
Williams, B. (). Truth and Truthfulness: An Essay in Genealogy. Princeton:
Princeton University Press.
(). In the Beginning Was the Deed: Realism and Moralism in Political
Argument. Princeton: Princeton University Press.
Wong, D. (). Natural Moralities: A Defense of Pluralistic Relativism. Oxford
University Press.
Wood, A. (). “Humanity as an End in Itself.” In D. Parfit (ed.), On What
Matters, Vol. . Oxford: Oxford University Press, –.
Young, L., F. Cushman, et al. (). “Does Emotion Mediate the Relationship
between an Action’s Moral Status and Its Intentional Status?
Neuropsychological Evidence,” Journal of Cognition and Culture (–):
–.
Young, L., F. Cushman, M. Hauser, and R. Saxe. (). “The Neural Basis of
the Interaction between Theory of Mind and Moral Judgment,” Proceedings
of the National Academy of Sciences (): –.
Young, L., S. Nichols, and R. Saxe. (). “Investigating the Neural and
Cognitive Basis of Moral Luck,” Review of Philosophy and Psychology ():
–.
Zimmerman, M. J. (). The Immorality of Punishment. Broadview Press.
Index
abortion, , , , , communities, , , ,
affinities, elective (scope), debunking conservatism, , –
Alfano, M., , , , metaethics,
altruism, , , , structure,
Appiah, A., , , compassion fade,
arguments, bad, compliance,
arguments, vindicating, , compromise,
asymmetry of understanding, , , – confabulation, , ,
, , confirmation bias, ,
authority, consequentialism, , , , , , , ,
against moral foundations,
asymmetry of understaning, –, , conservatism (conservatists), , –,
–, –
debunking conservatism, , constructivism,
moral foundations, convergence, , , –, see also moral
convergence
Beebe, J. R., , , , , cost/benefit analysis,
beliefs, , , Cova, F., , , ,
debunking doctrines, –, crime, , , , see also murder
scope, –, crimes. See also transgressions, see also slavery
bias, , , criticism,
arguments, – criticism, genealogical, –
conservatism, , culture of honor, –, , , , –
scope, ,
structure, , cultures, , , ,
Buckwalter, W., , , , , Cushman, F., , , , ,
Index
deep self concordance (DSC), experiments, thought,
defaults, , externalism,
defects, , , , , , extra dollar (case),
trade offs, ,
deontology, , , , , , fairness, ,
Descent of Man, false negatives,
desire, , , , , –, , , , false positives,
familiarity, , see also realism
detection error, –, families (kin), , , see also partiality
determinism, , , female genital mutilation,
deterrence, , Fitzpatrick, S., , , , , , ,
devil’s advocate, , footbridges, , , , , , ,
difference argument, , , , , framing effects, , , –, ,
dilemmas, , , , , , see also Trolley Fraser, B., , , , ,
dilemma free speech, –
Darwinian, , , , free will (freedom of the will), –, , ,
moral, ,
sacrificial, , friendship (friends), , , , ,
disagreement, , –, , –, ,
, gap, the, ,
disgust, , –, , , , garbage in/garbage out (GIGO), , ,
scope, , , –, , gaslighting,
distal debunking, gender, , , ,
dogs, ownership of (case), genealogies
donations (charitable giving), vindicatory,
Doris, J., –, –, , , , , see genealogy, , , –, ,
also culture of honor evolutionary, , –
arguments, morality, ,
realism, –, –, Gizmo (case),
double effect, , , –, see also Side globalization, ,
Effect Effect (SEE), see also Knobe Effect Graham, J., ,
driver (case), Greene, J., , –, , , , , ,
Driver, J, , metaethics, ,
dual process (model), Trolleyology, , , ,
dumbfounding, , –, – guilt, , , , ,
morally, gut reactions, , , ,
economies, , Haidt, J., , , , ,
education, , , , , see also moral conservatism, , , , –, ,
education , –
egoism, – harm, , , , ,
emotions, , , , Harman, G, ,
moral, , Hauser, M., , , ,
empathy, , , , , – history (historical sequence), –
environments, , –, Holton, R., , , ,
Evans, J., homosexuality, , , , see also marriage,
evidence, same-sex
evolution Huemer, M, , , , , ,
arguments, –, –, Hume, D., ,
conservatism, humor,
metaethics, –, , , , hypersensitivity, –, , , –,
realism, ,
scope, , –, , , , , , hypocrisy,
structure, , hyposensitivity, –, ,
expectations, – hypotheticals,
Index
identity, personal, –,
illness,
imaginative resistance, ,
immigration, ,
impartiality. See partiality
inarticulateness, –
incest, –, –, , ,
inconsistency, –, , , ,
ineffability,
information processing,
information, relevant,
innocence,
intentionality
  arguments, –,
  conservatism,
  debunking doctrines, , , ,
  deontology debunked, –
  obstacle model, –,
  relevant alternatives, , , –
  unification to scope, –, –
intuition, , , –, , , , , see also moral intuition
Jacobson, D., , ,
jealousy,
Jessica (case),
Jim and the Indians (story),
Joyce, R., , , , , ,
  debunking doctrines, , , ,
Kahane, G., , , , , , ,
Kamm, F., , ,
Kelly, D., , –,
Knobe Effect, –, see also Side Effect Effect (SEE), see also double effect
Knobe, J. See also Side Effect Effect (SEE)
  arguments, ,
  conservatism,
knowledge, , , , , ,
  debunking doctrines, , –, , ,
  moral,
  self,
  Trolleyology, ,
Konrath, S., , ,
Kumar, V., , –, , ,
labor (work),
learning (learning mechanism). See education
legitimacy, , , ,
  debunking doctrines, –, , , ,
Levy, N., , , ,
liberalism (liberals), –, , –
Living High and Letting Die,
Lombrozo, T., ,
loyalty, , ,
Machery, E., ,
macrodebunking,
markets, , see also economies
marriage, same-sex,
Marxism,
masculinity,
Mayans, ,
metaethics, ,
  against,
  Darwinian dilemma,
  prior plausibility,
  substantive,
  weakest link,
Meyers, C. D., ,
microdebunking,
Mikhail, J., , ,
models, ,
Moore, G. E.,
moral agency, –,
moral beliefs, , , , , , , see also beliefs
  conservatism, , –
  scope, , –, ,
moral cognition, , , ,
moral consideration, , ,
moral convergence, , ,
moral education, , ,
moral emergencies, , ,
moral evaluation, ,
moral facts, ,
moral foundations, , , –,
Moral Foundations Theory (MFT), –, ,
moral intuition, , –, , ,
moral judgments, , , ,
moral luck,
moral realism, , , –, ,
  arguments,
moral reasoning, , , ,
  conservatism, –, –, ,
moral relativism,
moral skepticism, , , ,
moral truths, , , , ,
  arguments,
  scope, –, –
morality, , , , ,
murder, , , ,
Nadelhoffer, T., , –
Nazi Law (case), ,
negativity,
Nichols, S., , , , , , , ,
Nietzsche, F., ,
Nisbett, R., , , ,
non-moral judgment,
normative question, , ,
normative theory,
norms, –,
Nosek, B. A., ,
novelty, –, ,
Nozick, R., ,
observation, , , ,
obsoleteness, –, , , , , ,
obstacle model, –, –,
obstacles, –, , –
off track debunking, –, , , ,
origins, ignoble, –, ,
pain avoidance, ,
pantyhose (study),
parsimony, ,
partiality, , , , ,
participants, –, , , see also subjects, experimental
Paxton, J. M., , ,
perception, , , , ,
personal interactions, –
Pinillos, N., ,
Plakias, A. See also culture of honor
  arguments,
  moral convergence,
  realism, –, –,
  scope, ,
  structure,
plausibility, –, , ,
Powell, R., ,
Principia Ethica,
Prinz, J., , , , , , , ,
prior plausibility, , –,
prisons (incarceration system), ,
process debunking,
prostitution,
proximal debunking, –,
proximity,
pseudoinefficacy,
psychology, , , ,
  evolutionary,
  moral, , , , , , ,
  social,
psychopathy,
punishment, , ,
purity, –, , , , –
question
  normative,
racism (skin color), , , ,
Railton, P., –, –, ,
rationalism, ,
realism, –, , –, , –, , , see also moral realism
  against,
  arguments,
  metaethics, , ,
  patchy,
  unconscious,
reasoning, , , –, see also moral reasoning
  critical, –
  distal,
  proximal, –
reflective endorsement,
relationship regulation theory (RR),
relevant alternatives (RA), –
reliability, –, , , ,
religion, ,
resistance, imaginative,
respect,
respect, for persons,
rightness, criterion of, –
rights, , ,
  equality, , ,
  individual, ,
  reproductive,
  violations, ,
Rini, R. A., , , –,
risk, –
Robinson, B., ,
salience, , , ,
Sauer, H., , , –, , ,
scenarios, , –, , , see also cases
  novel,
scope, –, ,
  depth,
  distance,
  GIGO,
  instability,
  process or best explanation,
  selective,
  trade-offs,
  wide and narrow,
selective debunking, ,
  collapse,
  regress,
selectivity, ,
self-evidence,
self-interest, ,
sentimentalism, –
Side Effect Effect (SEE), , , , , , , see also double effect, see also Knobe Effect
  scope,
  unifying,
Sidgwick, H., ,
simplicity, ,
Singer, P., , , , , , , ,
  metaethics, , ,
Sinhababu,
Sinnott-Armstrong, W., ,
skepticism, , , , , see also moral skepticism
  arguments, , , , –
  metaethics, , , –,
slavery, , , –
social cognition, , , –
social intuitionism (SI), –, , , –
  against,
social pressures, ,
socio-economic status (SES), –
specificity, –, –
Sripada, C., , ,
Stanovich, K., , ,
status, , , , , ,
  moral, , , , ,
stories, , , –, ,
strangers, , , ,
Street, S., , , ,
  metaethics, –, , , ,
subjects, experimental, –, , , , ,
suffering, , –
survival, , –
symmetry, , –, ,
sympathy, ,
terrorist (case),
thought experiments, , , , , ,
Timmons, M., , ,
trade-offs (scope),
transgressions, –, ,
Treatise of Human Nature,
triple effect (doctrine),
Trolley dilemma, , ,
Trolleyology, , –, , ,
trolleys, , , –,
trustworthiness, , ,
truth, , , , , , see also moral truths
  scope, –, ,
Uhlmann, E. L., ,
understanding, asymmetry of, , , –, ,
understanding, mutual, , ,
unfamiliarity, ,
universal benevolence, –
validity, ecological, , , –,
values, , , , , –,
variables, , , ,
vignettes. See stories
vindicating arguments, –, –
vindication,
violence, –, , , ,
virtue ethics, –
visual illusions,
Weakest Link (argument), –, , ,
weight, , , , , –,
Wong, D., ,
wrongness, ,