
This article was downloaded by: [Case Western Reserve University]

On: 13 September 2012, At: 12:18


Publisher: Taylor & Francis
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House,
37-41 Mortimer Street, London W1T 3JH, UK

AJOB Neuroscience
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/uabn20

Levy on Neuroscience, Psychology, and Moral Intuitions

Janet Levin, University of Southern California

Version of record first published: 31 Mar 2011.

To cite this article: Janet Levin (2011): Levy on Neuroscience, Psychology, and Moral Intuitions, AJOB Neuroscience, 2(2): 10–11.
To link to this article: http://dx.doi.org/10.1080/21507740.2011.559914


AJOB Neuroscience, 2(2): 10–30, 2011


Copyright © Taylor & Francis Group, LLC
ISSN: 2150-7740 print / 2150-7759 online
DOI: 10.1080/21507740.2011.559914

Open Peer Commentaries

Levy on Neuroscience, Psychology, and Moral Intuitions


Janet Levin, University of Southern California


In his target article, Neil Levy (2011) challenges the standard philosophical practice of taking a person's judgments about whether someone acts morally in particular (actual or imaginary) situations as evidence for general principles about what is good or bad, right or wrong, permissible or impermissible. The problem, he contends, is that these moral judgments are based on intuitions (those spontaneous, unreflective responses one has to those actual or imaginary situations), and these can sometimes be generated by irrational processes in ways opaque to introspection. However, Levy argues, attention to neuroscience and cognitive psychology can help to alleviate this problem, since studies in both areas reveal differences in how these intuitions are generated, and can thereby help determine which intuition-based judgments deserve to be taken seriously as evidence for moral principles, and which can legitimately be dismissed.
Levy first cites some studies (by Joshua Greene and colleagues) that purport to show that the intuition-based judgments that support deontological principles (principles that tie the moral worth of an action to the agent's intention) are generated by areas of the brain associated with emotional responses, whereas intuitions that support consequentialist principles (principles that tie the moral worth of an action solely to its consequences) are generated by areas of the brain in which reasoning is applied to alternatives held in working memory. And thus, some philosophers argue, since deontological theories are supported by emotion-influenced intuitions, they should be regarded as lacking in rationality, and should be taken less seriously than theories supported by consequentialist intuitions.
Now Levy acknowledges that this conclusion is controversial, and cites criticisms suggesting that since subjects may have other good reasons for affirming deontological principles (e.g., by learning them from the experts in their community), dismissing them solely on the basis of such findings is premature. He also cites various methodological criticisms of studies such as Greene's.
However, there is a more basic problem with the interpretation of these findings, namely, that it misconstrues what philosophers typically regard as the proper methodology for moral (or, more generally, philosophical) inquiry that begins with the canvassing of intuitions. Judgments based on one's spontaneous or unreflective responses to imagined or actual situations may indeed be prima facie evidence for a philosophical principle. But they must survive at least a certain amount of further reflective scrutiny before they are typically regarded as serious evidence for that principle, even from what Levy calls the first-person point of view. Only if one's initial judgment about the morality of some action remains intuitive after further reflection will that judgment have any evidential clout.1
Moreover, if the judgment remains intuitive after sufficient further scrutiny, it will have evidential clout no matter how it initially arose. Sometimes an emotion-generated judgment will continue to be endorsed after further scrutiny, sometimes not; but the same is true of judgments generated by reasoning about the alternatives one holds in working memory. Sometimes, for example, emotion-generated responses can introduce one to important features of a situation that one hadn't yet considered, or even noticed, and are thus not in one's working memory (which explains why, among other things, empathy may be a desirable characteristic in a Supreme Court justice). In short, to dismiss an intuition-based moral judgment as irrational or irrelevant because of its source instead of its staying power is to conflate (what is often called) the logic of discovery with the logic of justification.
This conflation also affects Levy's discussion of his second target, namely, intuitions that support the so-called doctrine of double effect (DDE). The DDE holds that it is morally permissible to perform an act that has bad side effects (such as killing in self-defense, or bombing a munitions factory that is next door to a hospital) if those side effects, though foreseen, are not intended. But Levy cites some well-known studies (by Knobe and colleagues) that suggest that the intuition-based judgments that would support this doctrine are, as he puts it, "sensitive to [pre-existing moral views] in a way that makes appeal to them question-begging" (3). In particular, Knobe's studies suggest that subjects are more

Address correspondence to Janet Levin, School of Philosophy, University of Southern California, 3709 Truesdale Parkway, Los Angeles,
CA 90089-0451, USA. E-mail: levin@usc.edu
1. For this reason, the between-subjects design of the Hauser experiments that Levy cites may not tell us as much about the pitfalls of
traditional philosophical methods as he suggests.



likely to judge that an agent (in an imagined situation) produces the relevant side effect intentionally if the side effect
strikes them as morally bad than if it strikes them as morally
good. And thus, Knobe and Levy contend, one begs the
question by taking the judgment that a bad side effect was
produced intentionally to be evidence that it is impermissible.
Here, too, Levy concedes that there are some methodological problems with these studies.2 But here, too, the more fundamental problem is that the criticisms based on these findings misconstrue traditional philosophical methodology, which, once again, does not require that subjects' spontaneous or unreflective responses to imagined or actual situations be taken as the last word on the subject, but only as prima facie evidence that must withstand the scrutiny of further reflection. And such further scrutiny typically includes attempts to make one's initial intuitions consistent both with one another and with the responses of others.
It is therefore interesting that at least some recent work in experimental philosophy acknowledges this difference. For example, in another article by Knobe and Nichols (2007) on freedom of action and moral responsibility,3 the authors found various discrepancies in intuition-based judgments (both within individual subjects and between different subjects) that depended on how the intuition-prompting scenarios were described. But they report (Knobe and Nichols 2007, 121) that when the subjects were made aware of these discrepancies, they attempted to make their judgments consistent by giving up one or the other. Another recent discussion of intentional action (Cushman and Mele 2008, 117) cites further studies in which the evidence suggests that people sometimes override or reject their intuitive responses when they fail to align with their consciously held views, especially when the contradiction is made apparent. These findings suggest that when subjects are explicitly made aware of discrepancies in their judgments, they behave much like moral philosophers do when evaluating a putative counterexample to an established thesis; that is, they subject their initial judgments to further scrutiny to see whether they retain their intuitive force.
In short, even if these experimental findings show something interesting about subjects' initial unreflective moral judgments, they have marginal relevance to the (thoughtful) evaluation of moral principles, either by professional philosophers or the folk.

2. Among them are (i) the intuition that an action is done intentionally is not equivalent to the intuition that the agent intended
to do it, and (ii) the questions seem to be framed in a way that
leads the subjects to the judgments in question. See Cullen (2010)
for a general critique of the methodology of many of the so-called
experimental philosophers.
3. Also reprinted in Knobe and Nichols (2008). I discuss this article,
and others in the (2008) volume, in my work (Levin 2009).


There remain many important questions, of course, about the methods of moral philosophy. In particular, one may wonder whether certain of our considered judgments about the morality of various situations are undetectably affected by irrelevant or irrational factors, especially given recent empirical evidence about the persistence of certain sorts of perceptual illusions in the face of conflicting evidence and the pervasiveness of various (self-regarding or rationalizing) cognitive biases.4 It seems plausible, moreover, that empirical studies can help to answer some of these questions, for example, by going beyond the canvassing of subjects' initial moral intuitions and attempting to determine whether there are specific conditions under which they are likely to retain their force in the face of conflicting beliefs. This may help to determine whether moral intuitions are more or less similar to phenomena such as the Müller-Lyer illusion, which remains compelling no matter how much one knows about what is actually being perceived. These are questions about the methods of moral philosophy that well-designed empirical studies may be able to illuminate, but the first step is for the experimenters to provide an accurate appraisal of what the methods of moral philosophy are generally understood to be. Only then will we be able, as Levy urges at the end of his discussion, to "produce better ethical theories, better justified normative conclusions," and "contribute [to] the great project of better understanding ourselves" (3).

REFERENCES
Cullen, S. 2010. Survey-driven romanticism. Review of Philosophy and Psychology 1(2): 275–296.
Cushman, F., and A. Mele. 2008. Intentional action: Two-and-a-half folk concepts? In Experimental philosophy, ed. J. Knobe and S. Nichols, 171–188. Oxford: Oxford University Press.
Knobe, J., and S. Nichols. 2007. Moral responsibility and determinism: The cognitive science of folk intuitions. Noûs 41: 663–685.
Knobe, J., and S. Nichols, eds. 2008. Experimental philosophy. Oxford: Oxford University Press.
Levin, J. 2004. The evidential status of philosophical intuition. Philosophical Studies 121: 193–224.
Levin, J. 2009. Experimental philosophy. (Critical notice of J. Knobe and S. Nichols, 2008). Analysis Reviews 69(4): 761–769.
Levy, N. 2011. Neuroethics: A new way of doing ethics. AJOB Neuroscience 2(2): 3–9.

4. See my work (Levin 2004) for further discussion.

